author     Linus Torvalds <torvalds@linux-foundation.org>  2023-10-31 05:10:11 -1000
committer  Linus Torvalds <torvalds@linux-foundation.org>  2023-10-31 05:10:11 -1000
commit     89ed67ef126c4160349c1b96fdb775ea6170ac90 (patch)
tree       98caaf8bba44b21f9345a0af1dd2bd9987764e27
parent     Merge tag 'cgroup-for-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup (diff)
parent     net: pcs: xpcs: Add 2500BASE-X case in get state for XPCS drivers (diff)
Merge tag 'net-next-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
 "Core & protocols:

   - Support usec resolution of TCP timestamps, enabled selectively by a route attribute.
   - Defer regular TCP ACK while processing socket backlog, try to send a cumulative ACK at the end. Increase single TCP flow performance on a 200Gbit NIC by 20% (100Gbit -> 120Gbit).
   - The Fair Queuing (FQ) packet scheduler:
      - add built-in 3 band prio / WRR scheduling
      - support bypass if the qdisc is mostly idle (5% speed up for TCP RR)
      - improve inactive flow reporting
      - optimize the layout of structures for better cache locality
   - Support TCP Authentication Option (RFC 5925, TCP-AO), a more modern replacement for the old MD5 option.
   - Add more retransmission timeout (RTO) related statistics to TCP_INFO.
   - Support sending fragmented skbs over vsock sockets.
   - Make sure we send SIGPIPE for vsock sockets if socket was shutdown().
   - Add sysctl for ignoring lower limit on lifetime in Router Advertisement PIO, based on an in-progress IETF draft.
   - Add sysctl to control activation of TCP ping-pong mode.
   - Add sysctl to make connection timeout in MPTCP configurable.
   - Support rcvlowat and notsent_lowat on MPTCP sockets, to help apps limit the number of wakeups (a userspace sketch follows this section).
   - Support netlink GET for MDB (multicast forwarding), allowing user space to request a single MDB entry instead of dumping the entire table.
   - Support selective FDB flushing in the VXLAN tunnel driver.
   - Allow limiting learned FDB entries in bridges, prevent OOM attacks.
   - Allow controlling via configfs netconsole targets which were created via the kernel cmdline at boot, rather than via configfs at runtime.
   - Support multiple PTP timestamp event queue readers with different filters.
   - MCTP over I3C.

  BPF:

   - Add new veth-like netdevice where BPF program defines the logic of the xmit routine. It can operate in L3 and L2 mode.
   - Support exceptions - allow asserting conditions which should never be true but are hard for the verifier to infer. With some extra flexibility around handling of the exit / failure: https://lwn.net/Articles/938435/
   - Add support for local per-cpu kptr, allow allocating and storing per-cpu objects in maps. Access to those objects operates on the value for the current CPU. This allows to deprecate local one-off implementations of per-CPU storage like BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE maps.
   - Extend cgroup BPF sockaddr hooks for UNIX sockets. The use case is for systemd to re-implement the LogNamespace feature which allows running multiple instances of systemd-journald to process the logs of different services.
   - Enable open-coded task_vma iteration, after maple tree conversion made it hard to directly walk VMAs in tracing programs.
   - Add open-coded task, css_task and css iterator support. One of the use cases is customizable OOM victim selection via BPF.
   - Allow source address selection with bpf_*_fib_lookup().
   - Add ability to pin BPF timer to the current CPU.
   - Prevent creation of infinite loops by combining tail calls and fentry/fexit programs.
   - Add missed stats for kprobes to retrieve the number of missed kprobe executions and subsequent executions of BPF programs.
   - Inherit system settings for CPU security mitigations.
   - Add BPF v4 CPU instruction support for arm32 and s390x.

  Changes to common code:

   - overflow: add DEFINE_FLEX() for on-stack definition of structs with flexible array members.
   - Process doc update with more guidance for reviewers.
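As a concrete illustration of the MPTCP low-watermark item above, here is a minimal, hedged userspace sketch (not taken from the tree); it uses the ordinary SO_RCVLOWAT and TCP_NOTSENT_LOWAT socket options, which the pull request says MPTCP sockets now honour, and the thresholds are arbitrary example values:

    /*
     * Hedged sketch: set receive and not-sent low watermarks on an MPTCP
     * socket so the application is only woken when enough data is queued
     * (read side) or enough buffer space is free (write side).  Error
     * handling is trimmed for brevity.
     */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    #ifndef IPPROTO_MPTCP
    #define IPPROTO_MPTCP 262
    #endif

    int main(void)
    {
            int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
            int rcv_lowat = 64 * 1024;      /* don't wake the reader for less than 64 KiB */
            int notsent_lowat = 128 * 1024; /* cap not-yet-sent data buffered in the kernel */

            setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT, &rcv_lowat, sizeof(rcv_lowat));
            setsockopt(fd, IPPROTO_TCP, TCP_NOTSENT_LOWAT, &notsent_lowat, sizeof(notsent_lowat));

            /* connect()/accept() and poll() for POLLIN / POLLOUT as usual ... */
            return 0;
    }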
  Driver API:

   - Simplify locking in WiFi (cfg80211 and mac80211 layers), use wiphy mutex in most places and remove a lot of smaller locks.
   - Create a common DPLL configuration API. Allow configuring and querying state of PLL circuits used for clock syntonization, in network time distribution.
   - Unify fragmented and full page allocation APIs in page pool code. Let drivers be ignorant of PAGE_SIZE.
   - Rework PHY state machine to avoid races with calls to phy_stop().
   - Notify DSA drivers of MAC address changes on user ports, improve correctness of offloads which depend on matching port MAC addresses.
   - Allow antenna control on injected WiFi frames.
   - Reduce the number of variants of napi_schedule().
   - Simplify error handling when composing devlink health messages.

  Misc:

   - A lot of KCSAN data race "fixes", from Eric.
   - A lot of __counted_by() annotations, from Kees (an illustrative sketch follows the commit list below).
   - A lot of strncpy -> strscpy and printf format fixes.
   - Replace master/slave terminology with conduit/user in DSA drivers.
   - Handful of KUnit tests for netdev and WiFi core.

  Removed:

   - AppleTalk COPS.
   - AppleTalk ipddp.
   - TI AR7 CPMAC Ethernet driver.

  Drivers:

   - Ethernet high-speed NICs:
      - Intel (100G, ice, idpf):
         - add a driver for the Intel E2000 IPUs
         - make CRC/FCS stripping configurable
         - cross-timestamping for E823 devices
         - basic support for E830 devices
         - use aux-bus for managing client drivers
         - i40e: report firmware versions via devlink
      - nVidia/Mellanox:
         - support 4-port NICs
         - increase max number of channels to 256
         - optimize / parallelize SF creation flow
      - Broadcom (bnxt):
         - enhance NIC temperature reporting
         - support PAM4 speeds and lane configuration
      - Marvell OcteonTX2:
         - PTP pulse-per-second output support
         - enable hardware timestamping for VFs
      - Solarflare/AMD:
         - conntrack NAT offload and offload for tunnels
      - Wangxun (ngbe/txgbe):
         - expose HW statistics
      - Pensando/AMD:
         - support PCI level reset
         - narrow down the condition under which skbs are linearized
      - Netronome/Corigine (nfp):
         - support CHACHA20-POLY1305 crypto in IPsec offload

   - Ethernet NICs embedded, slower, virtual:
      - Synopsys (stmmac):
         - add Loongson-1 SoC support
         - enable use of HW queues with no offload capabilities
         - enable PPS input support on all 5 channels
         - increase TX coalesce timer to 5ms
      - RealTek USB (r8152): improve efficiency of Rx by using GRO frags
      - xen: support SW packet timestamping
      - add drivers for implementations based on TI's PRUSS (AM64x EVM)

   - nVidia/Mellanox Ethernet datacenter switches:
      - avoid poor HW resource use on Spectrum-4 by better block selection for IPv6 multicast forwarding and ordering of blocks in ACL region

   - Ethernet embedded switches:
      - Microchip:
         - support configuring the drive strength for EMI compliance
         - ksz9477: partial ACL support
         - ksz9477: HSR offload
         - ksz9477: Wake on LAN
      - Realtek:
         - rtl8366rb: respect device tree config of the CPU port

   - Ethernet PHYs:
      - support Broadcom BCM5221 PHYs
      - TI dp83867: support hardware LED blinking

   - CAN:
      - add support for Linux-PHY based CAN transceivers
      - at91_can: clean up and use rx-offload helpers

   - WiFi:
      - MediaTek (mt76):
         - new sub-driver for mt7925 USB/PCIe devices
         - HW wireless <> Ethernet bridging in MT7988 chips
         - mt7603/mt7628 stability improvements
      - Qualcomm (ath12k):
         - WCN7850:
            - enable 320 MHz channels in 6 GHz band
            - hardware rfkill support
            - enable IEEE80211_HW_SINGLE_SCAN_ON_ALL_BANDS to make scan faster
            - read board data variant name from SMBIOS
         - QCN9274: mesh support
      - RealTek (rtw89):
         - TDMA-based multi-channel concurrency (MCC)
      - Silicon Labs (wfx):
         - Remain-On-Channel (ROC) support

   - Bluetooth:
      - ISO: many improvements for broadcast support
      - mark BCM4378/BCM4387 as BROKEN_LE_CODED
      - add support for QCA2066
      - btmtksdio: enable Bluetooth wakeup from suspend"

* tag 'net-next-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1816 commits)
  net: pcs: xpcs: Add 2500BASE-X case in get state for XPCS drivers
  net: bpf: Use sockopt_lock_sock() in ip_sock_set_tos()
  net: mana: Use xdp_set_features_flag instead of direct assignment
  vxlan: Cleanup IFLA_VXLAN_PORT_RANGE entry in vxlan_get_size()
  iavf: delete the iavf client interface
  iavf: add a common function for undoing the interrupt scheme
  iavf: use unregister_netdev
  iavf: rely on netdev's own registered state
  iavf: fix the waiting time for initial reset
  iavf: in iavf_down, don't queue watchdog_task if comms failed
  iavf: simplify mutex_trylock+sleep loops
  iavf: fix comments about old bit locks
  doc/netlink: Update schema to support cmd-cnt-name and cmd-max-name
  tools: ynl: introduce option to process unknown attributes or types
  ipvlan: properly track tx_errors
  netdevsim: Block until all devices are released
  nfp: using napi_build_skb() to replace build_skb()
  net: dsa: microchip: ksz9477: Fix spelling mistake "Enery" -> "Energy"
  net: dsa: microchip: Ensure Stable PME Pin State for Wake-on-LAN
  net: dsa: microchip: Refactor switch shutdown routine for WoL preparation
  ...
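For the __counted_by() item under "Misc" above, the following minimal sketch shows the annotation pattern; the structure, field names and helper are hypothetical, chosen only to illustrate how the counter member, the flexible array and struct_size() fit together:

    /*
     * Illustrative sketch only -- "struct foo_table" is made up for this
     * example, not taken from the tree.  __counted_by() tells the compiler
     * (and fortified/UBSAN bounds checks) which member holds the element
     * count of the flexible array.
     */
    #include <linux/overflow.h>
    #include <linux/slab.h>
    #include <linux/types.h>

    struct foo_table {
            u32 flags;
            u16 nr_entries;                       /* number of elements in entry[] */
            u32 entry[] __counted_by(nr_entries);
    };

    static struct foo_table *foo_table_alloc(u16 n)
    {
            struct foo_table *t;

            t = kzalloc(struct_size(t, entry, n), GFP_KERNEL);
            if (!t)
                    return NULL;
            t->nr_entries = n;    /* set the counter before touching entry[] */
            return t;
    }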
-rw-r--r--Documentation/admin-guide/sysctl/net.rst1
-rw-r--r--Documentation/bpf/libbpf/program_types.rst10
-rw-r--r--Documentation/bpf/prog_flow_dissector.rst2
-rw-r--r--Documentation/bpf/standardization/instruction-set.rst8
-rw-r--r--Documentation/devicetree/bindings/arm/mediatek/mediatek,mt7622-wed.yaml1
-rw-r--r--Documentation/devicetree/bindings/i3c/i3c.yaml6
-rw-r--r--Documentation/devicetree/bindings/mfd/syscon.yaml2
-rw-r--r--Documentation/devicetree/bindings/net/allwinner,sun8i-a83t-emac.yaml2
-rw-r--r--Documentation/devicetree/bindings/net/brcm,asp-v2.0.yaml2
-rw-r--r--Documentation/devicetree/bindings/net/dsa/brcm,sf2.yaml1
-rw-r--r--Documentation/devicetree/bindings/net/dsa/dsa.yaml11
-rw-r--r--Documentation/devicetree/bindings/net/dsa/mediatek,mt7530.yaml10
-rw-r--r--Documentation/devicetree/bindings/net/dsa/microchip,ksz.yaml22
-rw-r--r--Documentation/devicetree/bindings/net/dsa/microchip,lan937x.yaml3
-rw-r--r--Documentation/devicetree/bindings/net/dsa/nxp,sja1105.yaml4
-rw-r--r--Documentation/devicetree/bindings/net/dsa/qca8k.yaml1
-rw-r--r--Documentation/devicetree/bindings/net/dsa/realtek.yaml2
-rw-r--r--Documentation/devicetree/bindings/net/dsa/renesas,rzn1-a5psw.yaml10
-rw-r--r--Documentation/devicetree/bindings/net/engleder,tsnep.yaml1
-rw-r--r--Documentation/devicetree/bindings/net/ethernet-switch.yaml14
-rw-r--r--Documentation/devicetree/bindings/net/fsl,fec.yaml1
-rw-r--r--Documentation/devicetree/bindings/net/loongson,ls1b-gmac.yaml114
-rw-r--r--Documentation/devicetree/bindings/net/loongson,ls1c-emac.yaml113
-rw-r--r--Documentation/devicetree/bindings/net/mscc,vsc7514-switch.yaml46
-rw-r--r--Documentation/devicetree/bindings/net/nxp,tja11xx.yaml1
-rw-r--r--Documentation/devicetree/bindings/net/renesas,ether.yaml3
-rw-r--r--Documentation/devicetree/bindings/net/renesas,etheravb.yaml3
-rw-r--r--Documentation/devicetree/bindings/net/snps,dwmac.yaml5
-rw-r--r--Documentation/devicetree/bindings/net/ti,cpsw-switch.yaml2
-rw-r--r--Documentation/devicetree/bindings/net/ti,icssg-prueth.yaml8
-rw-r--r--Documentation/devicetree/bindings/soc/mediatek/mediatek,mt7986-wo-ccif.yaml1
-rw-r--r--Documentation/driver-api/80211/mac80211.rst2
-rw-r--r--Documentation/driver-api/dpll.rst551
-rw-r--r--Documentation/driver-api/index.rst1
-rw-r--r--Documentation/netlink/genetlink-c.yaml45
-rw-r--r--Documentation/netlink/genetlink-legacy.yaml51
-rw-r--r--Documentation/netlink/genetlink.yaml39
-rw-r--r--Documentation/netlink/netlink-raw.yaml23
-rw-r--r--Documentation/netlink/specs/devlink.yaml1558
-rw-r--r--Documentation/netlink/specs/dpll.yaml510
-rw-r--r--Documentation/netlink/specs/ethtool.yaml3
-rw-r--r--Documentation/netlink/specs/handshake.yaml8
-rw-r--r--Documentation/netlink/specs/mptcp.yaml393
-rw-r--r--Documentation/netlink/specs/netdev.yaml21
-rw-r--r--Documentation/networking/device_drivers/appletalk/cops.rst80
-rw-r--r--Documentation/networking/device_drivers/appletalk/index.rst18
-rw-r--r--Documentation/networking/device_drivers/ethernet/index.rst1
-rw-r--r--Documentation/networking/device_drivers/ethernet/intel/idpf.rst160
-rw-r--r--Documentation/networking/device_drivers/ethernet/mellanox/mlx5/kconfig.rst2
-rw-r--r--Documentation/networking/device_drivers/index.rst1
-rw-r--r--Documentation/networking/devlink/i40e.rst59
-rw-r--r--Documentation/networking/devlink/index.rst29
-rw-r--r--Documentation/networking/dsa/b53.rst14
-rw-r--r--Documentation/networking/dsa/bcm_sf2.rst2
-rw-r--r--Documentation/networking/dsa/configuration.rst102
-rw-r--r--Documentation/networking/dsa/dsa.rst162
-rw-r--r--Documentation/networking/dsa/lan9303.rst2
-rw-r--r--Documentation/networking/dsa/sja1105.rst6
-rw-r--r--Documentation/networking/filter.rst4
-rw-r--r--Documentation/networking/index.rst2
-rw-r--r--Documentation/networking/ip-sysctl.rst41
-rw-r--r--Documentation/networking/ipddp.rst78
-rw-r--r--Documentation/networking/mptcp-sysctl.rst11
-rw-r--r--Documentation/networking/msg_zerocopy.rst13
-rw-r--r--Documentation/networking/netconsole.rst22
-rw-r--r--Documentation/networking/page_pool.rst4
-rw-r--r--Documentation/networking/pktgen.rst12
-rw-r--r--Documentation/networking/scaling.rst42
-rw-r--r--Documentation/networking/sfp-phylink.rst10
-rw-r--r--Documentation/networking/tcp_ao.rst444
-rw-r--r--Documentation/networking/xdp-rx-metadata.rst7
-rw-r--r--Documentation/process/7.AdvancedTopics.rst18
-rw-r--r--Documentation/process/maintainer-netdev.rst15
-rw-r--r--Documentation/userspace-api/netlink/genetlink-legacy.rst16
-rw-r--r--Documentation/userspace-api/netlink/specs.rst23
-rw-r--r--MAINTAINERS51
-rw-r--r--arch/arm/net/bpf_jit_32.c280
-rw-r--r--arch/arm/net/bpf_jit_32.h4
-rw-r--r--arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi2
-rw-r--r--arch/arm64/net/bpf_jit_comp.c2
-rw-r--r--arch/s390/net/bpf_jit_comp.c267
-rw-r--r--arch/x86/net/bpf_jit_comp.c148
-rw-r--r--drivers/Kconfig2
-rw-r--r--drivers/Makefile1
-rw-r--r--drivers/atm/fore200e.c8
-rw-r--r--drivers/bluetooth/btmtksdio.c44
-rw-r--r--drivers/bluetooth/btqca.c68
-rw-r--r--drivers/bluetooth/btqca.h5
-rw-r--r--drivers/bluetooth/btusb.c11
-rw-r--r--drivers/bluetooth/hci_bcm4377.c5
-rw-r--r--drivers/bluetooth/hci_qca.c11
-rw-r--r--drivers/dpll/Kconfig7
-rw-r--r--drivers/dpll/Makefile9
-rw-r--r--drivers/dpll/dpll_core.c798
-rw-r--r--drivers/dpll/dpll_core.h89
-rw-r--r--drivers/dpll/dpll_netlink.c1423
-rw-r--r--drivers/dpll/dpll_netlink.h13
-rw-r--r--drivers/dpll/dpll_nl.c164
-rw-r--r--drivers/dpll/dpll_nl.h51
-rw-r--r--drivers/i3c/master.c35
-rw-r--r--drivers/infiniband/hw/mlx5/main.c17
-rw-r--r--drivers/infiniband/ulp/ipoib/ipoib_ib.c4
-rw-r--r--drivers/net/Kconfig9
-rw-r--r--drivers/net/Makefile2
-rw-r--r--drivers/net/Space.c6
-rw-r--r--drivers/net/appletalk/Kconfig102
-rw-r--r--drivers/net/appletalk/Makefile7
-rw-r--r--drivers/net/appletalk/cops.c1005
-rw-r--r--drivers/net/appletalk/cops.h61
-rw-r--r--drivers/net/appletalk/cops_ffdrv.h532
-rw-r--r--drivers/net/appletalk/cops_ltdrv.h241
-rw-r--r--drivers/net/appletalk/ipddp.c345
-rw-r--r--drivers/net/appletalk/ipddp.h28
-rw-r--r--drivers/net/bareudp.c45
-rw-r--r--drivers/net/bonding/bond_netlink.c2
-rw-r--r--drivers/net/can/Kconfig1
-rw-r--r--drivers/net/can/at91_can.c984
-rw-r--r--drivers/net/can/dev/dev.c51
-rw-r--r--drivers/net/can/dev/rx-offload.c2
-rw-r--r--drivers/net/can/dev/skb.c6
-rw-r--r--drivers/net/can/sja1000/peak_pci.c2
-rw-r--r--drivers/net/can/sja1000/sja1000.c2
-rw-r--r--drivers/net/can/usb/etas_es58x/es58x_core.c1
-rw-r--r--drivers/net/can/usb/etas_es58x/es58x_core.h6
-rw-r--r--drivers/net/can/usb/etas_es58x/es58x_devlink.c57
-rw-r--r--drivers/net/dsa/b53/b53_common.c4
-rw-r--r--drivers/net/dsa/b53/b53_mdio.c2
-rw-r--r--drivers/net/dsa/b53/b53_mmap.c6
-rw-r--r--drivers/net/dsa/b53/b53_srab.c8
-rw-r--r--drivers/net/dsa/bcm_sf2.c49
-rw-r--r--drivers/net/dsa/bcm_sf2.h2
-rw-r--r--drivers/net/dsa/bcm_sf2_cfp.c4
-rw-r--r--drivers/net/dsa/dsa_loop.c9
-rw-r--r--drivers/net/dsa/hirschmann/hellcreek.c8
-rw-r--r--drivers/net/dsa/lan9303-core.c4
-rw-r--r--drivers/net/dsa/lantiq_gswip.c45
-rw-r--r--drivers/net/dsa/microchip/Makefile2
-rw-r--r--drivers/net/dsa/microchip/ksz8795.c86
-rw-r--r--drivers/net/dsa/microchip/ksz8795_reg.h21
-rw-r--r--drivers/net/dsa/microchip/ksz9477.c274
-rw-r--r--drivers/net/dsa/microchip/ksz9477.h43
-rw-r--r--drivers/net/dsa/microchip/ksz9477_acl.c1436
-rw-r--r--drivers/net/dsa/microchip/ksz9477_i2c.c5
-rw-r--r--drivers/net/dsa/microchip/ksz9477_reg.h20
-rw-r--r--drivers/net/dsa/microchip/ksz9477_tc_flower.c281
-rw-r--r--drivers/net/dsa/microchip/ksz_common.c645
-rw-r--r--drivers/net/dsa/microchip/ksz_common.h42
-rw-r--r--drivers/net/dsa/microchip/ksz_ptp.c2
-rw-r--r--drivers/net/dsa/microchip/ksz_spi.c5
-rw-r--r--drivers/net/dsa/mt7530-mmio.c7
-rw-r--r--drivers/net/dsa/mt7530.c32
-rw-r--r--drivers/net/dsa/mv88e6xxx/chip.c4
-rw-r--r--drivers/net/dsa/mv88e6xxx/pcs-639x.c2
-rw-r--r--drivers/net/dsa/mv88e6xxx/ptp.c4
-rw-r--r--drivers/net/dsa/ocelot/felix.c68
-rw-r--r--drivers/net/dsa/ocelot/felix.h6
-rw-r--r--drivers/net/dsa/ocelot/ocelot_ext.c8
-rw-r--r--drivers/net/dsa/ocelot/seville_vsc9953.c8
-rw-r--r--drivers/net/dsa/qca/qca8k-8xxx.c50
-rw-r--r--drivers/net/dsa/qca/qca8k-common.c7
-rw-r--r--drivers/net/dsa/qca/qca8k-leds.c6
-rw-r--r--drivers/net/dsa/qca/qca8k.h2
-rw-r--r--drivers/net/dsa/realtek/realtek-smi.c36
-rw-r--r--drivers/net/dsa/realtek/realtek.h2
-rw-r--r--drivers/net/dsa/realtek/rtl8365mb.c5
-rw-r--r--drivers/net/dsa/realtek/rtl8366-core.c8
-rw-r--r--drivers/net/dsa/realtek/rtl8366rb.c44
-rw-r--r--drivers/net/dsa/rzn1_a5psw.c8
-rw-r--r--drivers/net/dsa/sja1105/sja1105_clocking.c21
-rw-r--r--drivers/net/dsa/sja1105/sja1105_main.c4
-rw-r--r--drivers/net/dsa/vitesse-vsc73xx-core.c49
-rw-r--r--drivers/net/dsa/vitesse-vsc73xx-platform.c8
-rw-r--r--drivers/net/dsa/xrs700x/xrs700x.c30
-rw-r--r--drivers/net/ethernet/8390/ax88796.c6
-rw-r--r--drivers/net/ethernet/8390/mcf8390.c5
-rw-r--r--drivers/net/ethernet/8390/ne.c5
-rw-r--r--drivers/net/ethernet/actions/owl-emac.c6
-rw-r--r--drivers/net/ethernet/aeroflex/greth.c6
-rw-r--r--drivers/net/ethernet/allwinner/sun4i-emac.c5
-rw-r--r--drivers/net/ethernet/altera/altera_tse.h2
-rw-r--r--drivers/net/ethernet/altera/altera_tse_main.c19
-rw-r--r--drivers/net/ethernet/amazon/ena/ena_netdev.c2
-rw-r--r--drivers/net/ethernet/amd/au1000_eth.c6
-rw-r--r--drivers/net/ethernet/amd/pds_core/core.c50
-rw-r--r--drivers/net/ethernet/amd/pds_core/core.h7
-rw-r--r--drivers/net/ethernet/amd/pds_core/dev.c11
-rw-r--r--drivers/net/ethernet/amd/pds_core/devlink.c31
-rw-r--r--drivers/net/ethernet/amd/pds_core/main.c50
-rw-r--r--drivers/net/ethernet/amd/sunlance.c6
-rw-r--r--drivers/net/ethernet/amd/xgbe/xgbe-platform.c48
-rw-r--r--drivers/net/ethernet/apm/xgene-v2/main.c6
-rw-r--r--drivers/net/ethernet/apm/xgene/xgene_enet_main.c21
-rw-r--r--drivers/net/ethernet/apm/xgene/xgene_enet_main.h3
-rw-r--r--drivers/net/ethernet/apple/macmace.c6
-rw-r--r--drivers/net/ethernet/arc/emac_arc.c6
-rw-r--r--drivers/net/ethernet/arc/emac_rockchip.c5
-rw-r--r--drivers/net/ethernet/asix/ax88796c_ioctl.c2
-rw-r--r--drivers/net/ethernet/atheros/ag71xx.c8
-rw-r--r--drivers/net/ethernet/atheros/atl1c/atl1c.h3
-rw-r--r--drivers/net/ethernet/atheros/atl1c/atl1c_main.c80
-rw-r--r--drivers/net/ethernet/atheros/atlx/atl1.c4
-rw-r--r--drivers/net/ethernet/atheros/atlx/atl2.c2
-rw-r--r--drivers/net/ethernet/broadcom/asp2/bcmasp.c8
-rw-r--r--drivers/net/ethernet/broadcom/bcm4908_enet.c6
-rw-r--r--drivers/net/ethernet/broadcom/bcm63xx_enet.c14
-rw-r--r--drivers/net/ethernet/broadcom/bcmsysport.c8
-rw-r--r--drivers/net/ethernet/broadcom/bgmac-platform.c6
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/Makefile1
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt.c275
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt.h13
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c95
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c678
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h545
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_hwmon.c241
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_hwmon.h30
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c2
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h14
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c26
-rw-r--r--drivers/net/ethernet/broadcom/genet/bcmgenet.c26
-rw-r--r--drivers/net/ethernet/broadcom/sb1250-mac.c6
-rw-r--r--drivers/net/ethernet/broadcom/tg3.c81
-rw-r--r--drivers/net/ethernet/broadcom/tg3.h3
-rw-r--r--drivers/net/ethernet/brocade/bna/bfa_ioc.c2
-rw-r--r--drivers/net/ethernet/cadence/macb_main.c6
-rw-r--r--drivers/net/ethernet/calxeda/xgmac.c6
-rw-r--r--drivers/net/ethernet/cavium/liquidio/lio_ethtool.c18
-rw-r--r--drivers/net/ethernet/cavium/liquidio/lio_main.c2
-rw-r--r--drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c3
-rw-r--r--drivers/net/ethernet/cavium/liquidio/octeon_device.c11
-rw-r--r--drivers/net/ethernet/cavium/octeon/octeon_mgmt.c5
-rw-r--r--drivers/net/ethernet/chelsio/cxgb3/l2t.h2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb3/sge.c15
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/clip_tbl.h2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32_parse.h2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/l2t.c2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/sched.h2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/sge.c2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/smt.h2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4vf/sge.c2
-rw-r--r--drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c43
-rw-r--r--drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h36
-rw-r--r--drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c2
-rw-r--r--drivers/net/ethernet/cirrus/cs89x0.c5
-rw-r--r--drivers/net/ethernet/cirrus/ep93xx_eth.c8
-rw-r--r--drivers/net/ethernet/cirrus/mac89x0.c5
-rw-r--r--drivers/net/ethernet/cortina/gemini.c12
-rw-r--r--drivers/net/ethernet/davicom/dm9000.c6
-rw-r--r--drivers/net/ethernet/dec/tulip/tulip.h2
-rw-r--r--drivers/net/ethernet/dnet.c6
-rw-r--r--drivers/net/ethernet/engleder/tsnep.h2
-rw-r--r--drivers/net/ethernet/engleder/tsnep_hw.h2
-rw-r--r--drivers/net/ethernet/engleder/tsnep_main.c121
-rw-r--r--drivers/net/ethernet/ethoc.c6
-rw-r--r--drivers/net/ethernet/ezchip/nps_enet.c2
-rw-r--r--drivers/net/ethernet/faraday/ftgmac100.c5
-rw-r--r--drivers/net/ethernet/faraday/ftmac100.c5
-rw-r--r--drivers/net/ethernet/freescale/enetc/enetc.c2
-rw-r--r--drivers/net/ethernet/freescale/enetc/enetc.h2
-rw-r--r--drivers/net/ethernet/freescale/enetc/enetc_qos.c2
-rw-r--r--drivers/net/ethernet/freescale/fec_main.c83
-rw-r--r--drivers/net/ethernet/freescale/fman/fman_memac.c11
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c18
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/mii-fec.c10
-rw-r--r--drivers/net/ethernet/freescale/fsl_pq_mdio.c12
-rw-r--r--drivers/net/ethernet/google/gve/gve_main.c4
-rw-r--r--drivers/net/ethernet/hisilicon/hip04_eth.c6
-rw-r--r--drivers/net/ethernet/hisilicon/hisi_femac.c6
-rw-r--r--drivers/net/ethernet/hisilicon/hix5hd2_gmac.c17
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c6
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h2
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h2
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_enet.c5
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hnae3.h5
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c1
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h2
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c3
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3_enet.c3
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c116
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h2
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c161
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h18
-rw-r--r--drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c2
-rw-r--r--drivers/net/ethernet/hisilicon/hns_mdio.c5
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_devlink.c217
-rw-r--r--drivers/net/ethernet/huawei/hinic/hinic_tx.c8
-rw-r--r--drivers/net/ethernet/i825xx/sni_82596.c5
-rw-r--r--drivers/net/ethernet/ibm/ehea/ehea_main.c10
-rw-r--r--drivers/net/ethernet/ibm/emac/core.c6
-rw-r--r--drivers/net/ethernet/ibm/emac/mal.c8
-rw-r--r--drivers/net/ethernet/ibm/emac/rgmii.c6
-rw-r--r--drivers/net/ethernet/ibm/emac/tah.c6
-rw-r--r--drivers/net/ethernet/ibm/emac/zmii.c6
-rw-r--r--drivers/net/ethernet/ibm/ibmveth.c2
-rw-r--r--drivers/net/ethernet/ibm/ibmvnic.c5
-rw-r--r--drivers/net/ethernet/intel/Kconfig14
-rw-r--r--drivers/net/ethernet/intel/Makefile1
-rw-r--r--drivers/net/ethernet/intel/e100.c2
-rw-r--r--drivers/net/ethernet/intel/e1000/e1000_main.c2
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c8
-rw-r--r--drivers/net/ethernet/intel/i40e/Makefile3
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e.h212
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_adminq.c8
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_adminq.h3
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h2
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_alloc.h24
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_client.c1
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_common.c69
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_dcb.c4
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c2
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_ddp.c31
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_debug.h47
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_debugfs.c3
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_devlink.c236
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_devlink.h18
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_diag.h5
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_ethtool.c16
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_hmc.c16
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_hmc.h4
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_io.h16
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c9
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h2
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_main.c125
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_nvm.c2
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_osdep.h59
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_prototype.h12
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_ptp.c3
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_register.h5
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_txrx.c9
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_txrx.h1
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_txrx_common.h2
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_type.h62
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c4
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h4
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_xsk.c4
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_xsk.h4
-rw-r--r--drivers/net/ethernet/intel/iavf/Makefile2
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf.h46
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_client.c578
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_client.h169
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_common.c32
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_ethtool.c8
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_main.c244
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_prototype.h2
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_txrx.c46
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_type.h12
-rw-r--r--drivers/net/ethernet/intel/iavf/iavf_virtchnl.c115
-rw-r--r--drivers/net/ethernet/intel/ice/Makefile2
-rw-r--r--drivers/net/ethernet/intel/ice/ice.h23
-rw-r--r--drivers/net/ethernet/intel/ice/ice_adminq_cmd.h306
-rw-r--r--drivers/net/ethernet/intel/ice/ice_common.c756
-rw-r--r--drivers/net/ethernet/intel/ice/ice_common.h51
-rw-r--r--drivers/net/ethernet/intel/ice/ice_ddp.c465
-rw-r--r--drivers/net/ethernet/intel/ice/ice_ddp.h27
-rw-r--r--drivers/net/ethernet/intel/ice/ice_devids.h10
-rw-r--r--drivers/net/ethernet/intel/ice/ice_dpll.c2120
-rw-r--r--drivers/net/ethernet/intel/ice/ice_dpll.h114
-rw-r--r--drivers/net/ethernet/intel/ice/ice_eswitch_br.c6
-rw-r--r--drivers/net/ethernet/intel/ice/ice_ethtool.c228
-rw-r--r--drivers/net/ethernet/intel/ice/ice_ethtool.h8
-rw-r--r--drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c24
-rw-r--r--drivers/net/ethernet/intel/ice/ice_flow.c5
-rw-r--r--drivers/net/ethernet/intel/ice/ice_flow.h3
-rw-r--r--drivers/net/ethernet/intel/ice/ice_gnss.c3
-rw-r--r--drivers/net/ethernet/intel/ice/ice_hw_autogen.h53
-rw-r--r--drivers/net/ethernet/intel/ice/ice_lag.c135
-rw-r--r--drivers/net/ethernet/intel/ice/ice_lag.h2
-rw-r--r--drivers/net/ethernet/intel/ice/ice_lib.c43
-rw-r--r--drivers/net/ethernet/intel/ice/ice_main.c96
-rw-r--r--drivers/net/ethernet/intel/ice/ice_ptp.c679
-rw-r--r--drivers/net/ethernet/intel/ice/ice_ptp.h41
-rw-r--r--drivers/net/ethernet/intel/ice/ice_ptp_hw.c758
-rw-r--r--drivers/net/ethernet/intel/ice/ice_ptp_hw.h95
-rw-r--r--drivers/net/ethernet/intel/ice/ice_sched.c56
-rw-r--r--drivers/net/ethernet/intel/ice/ice_sched.h6
-rw-r--r--drivers/net/ethernet/intel/ice/ice_sriov.c307
-rw-r--r--drivers/net/ethernet/intel/ice/ice_sriov.h17
-rw-r--r--drivers/net/ethernet/intel/ice/ice_switch.c63
-rw-r--r--drivers/net/ethernet/intel/ice/ice_txrx_lib.c2
-rw-r--r--drivers/net/ethernet/intel/ice/ice_type.h29
-rw-r--r--drivers/net/ethernet/intel/ice/ice_vf_lib.c2
-rw-r--r--drivers/net/ethernet/intel/ice/ice_vf_lib.h9
-rw-r--r--drivers/net/ethernet/intel/ice/ice_virtchnl.c71
-rw-r--r--drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c29
-rw-r--r--drivers/net/ethernet/intel/ice/ice_xsk.c22
-rw-r--r--drivers/net/ethernet/intel/idpf/Makefile18
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf.h968
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_controlq.c621
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_controlq.h130
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_controlq_api.h169
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_controlq_setup.c171
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_dev.c165
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_devids.h10
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_ethtool.c1369
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_lan_pf_regs.h124
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h293
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_lan_vf_regs.h128
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_lib.c2379
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_main.c279
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_mem.h20
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c1183
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_txrx.c4292
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_txrx.h1023
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_vf_dev.c163
-rw-r--r--drivers/net/ethernet/intel/idpf/idpf_virtchnl.c3798
-rw-r--r--drivers/net/ethernet/intel/idpf/virtchnl2.h1273
-rw-r--r--drivers/net/ethernet/intel/idpf/virtchnl2_lan_desc.h451
-rw-r--r--drivers/net/ethernet/intel/igb/igb_ethtool.c4
-rw-r--r--drivers/net/ethernet/intel/igb/igb_main.c55
-rw-r--r--drivers/net/ethernet/intel/igbvf/netdev.c2
-rw-r--r--drivers/net/ethernet/intel/igc/igc_ethtool.c5
-rw-r--r--drivers/net/ethernet/intel/igc/igc_main.c2
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c4
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_main.c2
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c26
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c2
-rw-r--r--drivers/net/ethernet/korina.c6
-rw-r--r--drivers/net/ethernet/lantiq_etop.c6
-rw-r--r--drivers/net/ethernet/lantiq_xrx200.c6
-rw-r--r--drivers/net/ethernet/litex/litex_liteeth.c6
-rw-r--r--drivers/net/ethernet/marvell/mv643xx_eth.c11
-rw-r--r--drivers/net/ethernet/marvell/mvmdio.c6
-rw-r--r--drivers/net/ethernet/marvell/mvneta.c8
-rw-r--r--drivers/net/ethernet/marvell/mvneta_bm.c6
-rw-r--r--drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c12
-rw-r--r--drivers/net/ethernet/marvell/octeon_ep/octep_cn9k_pf.c168
-rw-r--r--drivers/net/ethernet/marvell/octeon_ep/octep_config.h22
-rw-r--r--drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c24
-rw-r--r--drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.h18
-rw-r--r--drivers/net/ethernet/marvell/octeon_ep/octep_main.c213
-rw-r--r--drivers/net/ethernet/marvell/octeon_ep/octep_main.h13
-rw-r--r--drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h4
-rw-r--r--drivers/net/ethernet/marvell/octeon_ep/octep_rx.h3
-rw-r--r--drivers/net/ethernet/marvell/octeon_ep/octep_tx.h4
-rw-r--r--drivers/net/ethernet/marvell/octeontx2/af/cgx.c8
-rw-r--r--drivers/net/ethernet/marvell/octeontx2/af/mbox.h11
-rw-r--r--drivers/net/ethernet/marvell/octeontx2/af/npc.h8
-rw-r--r--drivers/net/ethernet/marvell/octeontx2/af/ptp.c86
-rw-r--r--drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c7
-rw-r--r--drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c53
-rw-r--r--drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c464
-rw-r--r--drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c62
-rw-r--r--drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c2
-rw-r--r--drivers/net/ethernet/marvell/octeontx2/nic/otx2_ptp.c31
-rw-r--r--drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c58
-rw-r--r--drivers/net/ethernet/marvell/pxa168_eth.c5
-rw-r--r--drivers/net/ethernet/mediatek/mtk_eth_soc.c11
-rw-r--r--drivers/net/ethernet/mediatek/mtk_eth_soc.h2
-rw-r--r--drivers/net/ethernet/mediatek/mtk_ppe.c4
-rw-r--r--drivers/net/ethernet/mediatek/mtk_ppe.h19
-rw-r--r--drivers/net/ethernet/mediatek/mtk_ppe_offload.c12
-rw-r--r--drivers/net/ethernet/mediatek/mtk_wed.c1432
-rw-r--r--drivers/net/ethernet/mediatek/mtk_wed.h57
-rw-r--r--drivers/net/ethernet/mediatek/mtk_wed_debugfs.c400
-rw-r--r--drivers/net/ethernet/mediatek/mtk_wed_mcu.c150
-rw-r--r--drivers/net/ethernet/mediatek/mtk_wed_regs.h369
-rw-r--r--drivers/net/ethernet/mediatek/mtk_wed_wo.h3
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_rx.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/fw.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/Kconfig8
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/Makefile3
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/cmd.c70
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/dev.c122
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/devlink.c11
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c49
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.c118
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.h6
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/dpll.c432
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en.h13
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c8
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/fs.h1
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/health.c187
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/health.h14
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c422
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c346
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/rqt.c32
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/rqt.h9
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/rss.c144
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/rss.h20
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.c105
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.h12
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c26
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h25
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c146
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c3
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h1
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_fs.c8
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_main.c97
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_rep.c32
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_tc.c16
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c11
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c96
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/events.c5
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fs_core.c10
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/health.c129
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c24
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c47
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h1
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c27
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c10
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c25
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h6
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h1
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c542
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h14
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/main.c33
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h36
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c101
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.h6
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c26
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c242
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.c69
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.h3
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c35
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h5
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c9
-rw-r--r--drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c6
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/cmd.h43
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core.c178
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core.h6
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c70
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h15
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core_env.c4
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core_linecard_dev.c9
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core_thermal.c3
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/i2c.c4
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/pci.c37
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/reg.h20
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum.c95
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum.h3
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum2_mr_tcam.c20
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c30
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c69
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_pgt.c20
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c2
-rw-r--r--drivers/net/ethernet/micrel/ks8842.c5
-rw-r--r--drivers/net/ethernet/micrel/ks8851_par.c6
-rw-r--r--drivers/net/ethernet/microchip/lan743x_ethtool.c3
-rw-r--r--drivers/net/ethernet/microchip/lan743x_main.c51
-rw-r--r--drivers/net/ethernet/microchip/lan743x_main.h8
-rw-r--r--drivers/net/ethernet/microchip/lan743x_ptp.c9
-rw-r--r--drivers/net/ethernet/microchip/lan966x/lan966x_main.c7
-rw-r--r--drivers/net/ethernet/microchip/sparx5/sparx5_ethtool.c3
-rw-r--r--drivers/net/ethernet/microchip/sparx5/sparx5_main.c6
-rw-r--r--drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c2
-rw-r--r--drivers/net/ethernet/microsoft/mana/mana_en.c5
-rw-r--r--drivers/net/ethernet/moxa/moxart_ether.c6
-rw-r--r--drivers/net/ethernet/mscc/ocelot_vsc7514.c6
-rw-r--r--drivers/net/ethernet/natsemi/jazzsonic.c6
-rw-r--r--drivers/net/ethernet/natsemi/macsonic.c6
-rw-r--r--drivers/net/ethernet/natsemi/xtsonic.c6
-rw-r--r--drivers/net/ethernet/netronome/nfp/crypto/ipsec.c45
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfd3/dp.c2
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfd3/xsk.c2
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfdk/dp.c2
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_net_repr.h2
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h2
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfpcore/nfp_resource.c2
-rw-r--r--drivers/net/ethernet/ni/nixge.c11
-rw-r--r--drivers/net/ethernet/nxp/lpc_eth.c6
-rw-r--r--drivers/net/ethernet/pensando/ionic/ionic_dev.h3
-rw-r--r--drivers/net/ethernet/pensando/ionic/ionic_lif.c12
-rw-r--r--drivers/net/ethernet/pensando/ionic/ionic_main.c4
-rw-r--r--drivers/net/ethernet/pensando/ionic/ionic_txrx.c77
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_debug.c7
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_devlink.c6
-rw-r--r--drivers/net/ethernet/qlogic/qede/qede_ethtool.c46
-rw-r--r--drivers/net/ethernet/qualcomm/emac/emac.c6
-rw-r--r--drivers/net/ethernet/realtek/r8169_main.c4
-rw-r--r--drivers/net/ethernet/renesas/Kconfig9
-rw-r--r--drivers/net/ethernet/renesas/Makefile4
-rw-r--r--drivers/net/ethernet/renesas/ravb_main.c6
-rw-r--r--drivers/net/ethernet/renesas/rswitch.c55
-rw-r--r--drivers/net/ethernet/renesas/rswitch.h2
-rw-r--r--drivers/net/ethernet/renesas/sh_eth.c6
-rw-r--r--drivers/net/ethernet/samsung/sxgbe/sxgbe_platform.c6
-rw-r--r--drivers/net/ethernet/seeq/sgiseeq.c6
-rw-r--r--drivers/net/ethernet/sfc/efx_channels.c2
-rw-r--r--drivers/net/ethernet/sfc/mae.c62
-rw-r--r--drivers/net/ethernet/sfc/mcdi.c3
-rw-r--r--drivers/net/ethernet/sfc/ptp.c27
-rw-r--r--drivers/net/ethernet/sfc/siena/efx_channels.c2
-rw-r--r--drivers/net/ethernet/sfc/tc.c337
-rw-r--r--drivers/net/ethernet/sfc/tc.h8
-rw-r--r--drivers/net/ethernet/sfc/tc_conntrack.c91
-rw-r--r--drivers/net/ethernet/sgi/ioc3-eth.c6
-rw-r--r--drivers/net/ethernet/sgi/meth.c6
-rw-r--r--drivers/net/ethernet/smsc/smc91x.c6
-rw-r--r--drivers/net/ethernet/smsc/smsc911x.c6
-rw-r--r--drivers/net/ethernet/socionext/netsec.c8
-rw-r--r--drivers/net/ethernet/socionext/sni_ave.c6
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/Kconfig11
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/Makefile1
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/common.h2
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-anarion.c10
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c15
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c15
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c13
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-ingenic.c33
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c34
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c1
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c27
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-loongson1.c209
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-lpc18xx.c19
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c6
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c25
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c53
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c14
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c16
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-starfive.c10
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c14
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c130
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c6
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c23
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c10
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c18
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c2
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_main.c50
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c73
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_platform.h5
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c34
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h2
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c2
-rw-r--r--drivers/net/ethernet/sun/niu.c5
-rw-r--r--drivers/net/ethernet/sun/sunbmac.c6
-rw-r--r--drivers/net/ethernet/sun/sunqe.c6
-rw-r--r--drivers/net/ethernet/sunplus/spl2sw_driver.c6
-rw-r--r--drivers/net/ethernet/ti/Kconfig9
-rw-r--r--drivers/net/ethernet/ti/Makefile1
-rw-r--r--drivers/net/ethernet/ti/cpmac.c1251
-rw-r--r--drivers/net/ethernet/ti/cpsw_priv.c2
-rw-r--r--drivers/net/ethernet/ti/davinci_emac.c40
-rw-r--r--drivers/net/ethernet/ti/davinci_mdio.c6
-rw-r--r--drivers/net/ethernet/ti/icssg/icssg_config.c14
-rw-r--r--drivers/net/ethernet/ti/icssg/icssg_prueth.c49
-rw-r--r--drivers/net/ethernet/ti/icssg/icssg_prueth.h2
-rw-r--r--drivers/net/ethernet/ti/netcp_core.c5
-rw-r--r--drivers/net/ethernet/ti/netcp_ethss.c4
-rw-r--r--drivers/net/ethernet/toshiba/spider_net.c2
-rw-r--r--drivers/net/ethernet/toshiba/tc35815.c10
-rw-r--r--drivers/net/ethernet/tundra/tsi108_eth.c6
-rw-r--r--drivers/net/ethernet/via/via-rhine.c6
-rw-r--r--drivers/net/ethernet/via/via-velocity.c6
-rw-r--r--drivers/net/ethernet/wangxun/libwx/wx_ethtool.c169
-rw-r--r--drivers/net/ethernet/wangxun/libwx/wx_ethtool.h8
-rw-r--r--drivers/net/ethernet/wangxun/libwx/wx_hw.c191
-rw-r--r--drivers/net/ethernet/wangxun/libwx/wx_hw.h9
-rw-r--r--drivers/net/ethernet/wangxun/libwx/wx_lib.c20
-rw-r--r--drivers/net/ethernet/wangxun/libwx/wx_type.h82
-rw-r--r--drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c5
-rw-r--r--drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c2
-rw-r--r--drivers/net/ethernet/wangxun/ngbe/ngbe_main.c7
-rw-r--r--drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c119
-rw-r--r--drivers/net/ethernet/wangxun/ngbe/ngbe_type.h3
-rw-r--r--drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c5
-rw-r--r--drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c110
-rw-r--r--drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h1
-rw-r--r--drivers/net/ethernet/wangxun/txgbe/txgbe_main.c10
-rw-r--r--drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c56
-rw-r--r--drivers/net/ethernet/wangxun/txgbe/txgbe_type.h6
-rw-r--r--drivers/net/ethernet/wiznet/w5100-spi.c12
-rw-r--r--drivers/net/ethernet/wiznet/w5100.c10
-rw-r--r--drivers/net/ethernet/wiznet/w5300.c5
-rw-r--r--drivers/net/ethernet/xilinx/ll_temac_main.c5
-rw-r--r--drivers/net/ethernet/xilinx/xilinx_axienet_main.c6
-rw-r--r--drivers/net/ethernet/xilinx/xilinx_emaclite.c8
-rw-r--r--drivers/net/ethernet/xscale/ixp4xx_eth.c74
-rw-r--r--drivers/net/fjes/fjes_main.c2
-rw-r--r--drivers/net/geneve.c205
-rw-r--r--drivers/net/gtp.c4
-rw-r--r--drivers/net/hamradio/Kconfig15
-rw-r--r--drivers/net/hamradio/baycom_epp.c4
-rw-r--r--drivers/net/hyperv/netvsc.c18
-rw-r--r--drivers/net/ipa/ipa_power.c2
-rw-r--r--drivers/net/ipvlan/ipvlan_core.c8
-rw-r--r--drivers/net/ipvlan/ipvlan_main.c1
-rw-r--r--drivers/net/macsec.c6
-rw-r--r--drivers/net/mctp/Kconfig9
-rw-r--r--drivers/net/mctp/Makefile1
-rw-r--r--drivers/net/mctp/mctp-i3c.c755
-rw-r--r--drivers/net/mdio/mdio-aspeed.c6
-rw-r--r--drivers/net/mdio/mdio-bcm-iproc.c6
-rw-r--r--drivers/net/mdio/mdio-bcm-unimac.c6
-rw-r--r--drivers/net/mdio/mdio-gpio.c6
-rw-r--r--drivers/net/mdio/mdio-hisi-femac.c6
-rw-r--r--drivers/net/mdio/mdio-ipq4019.c6
-rw-r--r--drivers/net/mdio/mdio-ipq8064.c7
-rw-r--r--drivers/net/mdio/mdio-moxart.c6
-rw-r--r--drivers/net/mdio/mdio-mscc-miim.c6
-rw-r--r--drivers/net/mdio/mdio-mux-bcm-iproc.c6
-rw-r--r--drivers/net/mdio/mdio-mux-bcm6368.c6
-rw-r--r--drivers/net/mdio/mdio-mux-gpio.c5
-rw-r--r--drivers/net/mdio/mdio-mux-meson-g12a.c6
-rw-r--r--drivers/net/mdio/mdio-mux-meson-gxl.c6
-rw-r--r--drivers/net/mdio/mdio-mux-mmioreg.c6
-rw-r--r--drivers/net/mdio/mdio-mux-multiplexer.c6
-rw-r--r--drivers/net/mdio/mdio-octeon.c5
-rw-r--r--drivers/net/mdio/mdio-sun4i.c6
-rw-r--r--drivers/net/mdio/mdio-xgene.c27
-rw-r--r--drivers/net/netconsole.c155
-rw-r--r--drivers/net/netdevsim/bus.c12
-rw-r--r--drivers/net/netdevsim/health.c118
-rw-r--r--drivers/net/netkit.c936
-rw-r--r--drivers/net/pcs/pcs-xpcs.c29
-rw-r--r--drivers/net/pcs/pcs-xpcs.h2
-rw-r--r--drivers/net/phy/Kconfig4
-rw-r--r--drivers/net/phy/amd.c33
-rw-r--r--drivers/net/phy/ax88796b.c2
-rw-r--r--drivers/net/phy/broadcom.c154
-rw-r--r--drivers/net/phy/dp83867.c137
-rw-r--r--drivers/net/phy/micrel.c22
-rw-r--r--drivers/net/phy/nxp-tja11xx.c6
-rw-r--r--drivers/net/phy/phy.c207
-rw-r--r--drivers/net/phy/phylink.c45
-rw-r--r--drivers/net/phy/sfp.c41
-rw-r--r--drivers/net/phy/smsc.c6
-rw-r--r--drivers/net/ppp/pppoe.c2
-rw-r--r--drivers/net/usb/lan78xx.c2
-rw-r--r--drivers/net/usb/r8152.c85
-rw-r--r--drivers/net/usb/sr9800.c4
-rw-r--r--drivers/net/veth.c25
-rw-r--r--drivers/net/virtio_net.c258
-rw-r--r--drivers/net/vxlan/vxlan_core.c450
-rw-r--r--drivers/net/vxlan/vxlan_mdb.c190
-rw-r--r--drivers/net/vxlan/vxlan_private.h2
-rw-r--r--drivers/net/wan/ixp4xx_hss.c4
-rw-r--r--drivers/net/wireless/ath/ar5523/ar5523.c2
-rw-r--r--drivers/net/wireless/ath/ath10k/ce.h2
-rw-r--r--drivers/net/wireless/ath/ath10k/debug.c49
-rw-r--r--drivers/net/wireless/ath/ath10k/htt.h3
-rw-r--r--drivers/net/wireless/ath/ath10k/htt_rx.c1
-rw-r--r--drivers/net/wireless/ath/ath10k/htt_tx.c16
-rw-r--r--drivers/net/wireless/ath/ath10k/mac.c26
-rw-r--r--drivers/net/wireless/ath/ath10k/pci.c2
-rw-r--r--drivers/net/wireless/ath/ath10k/snoc.c18
-rw-r--r--drivers/net/wireless/ath/ath10k/spectral.c26
-rw-r--r--drivers/net/wireless/ath/ath11k/Makefile3
-rw-r--r--drivers/net/wireless/ath/ath11k/ahb.c10
-rw-r--r--drivers/net/wireless/ath/ath11k/core.c121
-rw-r--r--drivers/net/wireless/ath/ath11k/core.h23
-rw-r--r--drivers/net/wireless/ath/ath11k/debugfs.c8
-rw-r--r--drivers/net/wireless/ath/ath11k/debugfs_sta.c30
-rw-r--r--drivers/net/wireless/ath/ath11k/dp.c2
-rw-r--r--drivers/net/wireless/ath/ath11k/dp_rx.c39
-rw-r--r--drivers/net/wireless/ath/ath11k/dp_tx.c4
-rw-r--r--drivers/net/wireless/ath/ath11k/fw.c168
-rw-r--r--drivers/net/wireless/ath/ath11k/fw.h27
-rw-r--r--drivers/net/wireless/ath/ath11k/hal.c8
-rw-r--r--drivers/net/wireless/ath/ath11k/hal_rx.c31
-rw-r--r--drivers/net/wireless/ath/ath11k/hal_rx.h18
-rw-r--r--drivers/net/wireless/ath/ath11k/hal_tx.c2
-rw-r--r--drivers/net/wireless/ath/ath11k/hif.h54
-rw-r--r--drivers/net/wireless/ath/ath11k/htc.h12
-rw-r--r--drivers/net/wireless/ath/ath11k/mac.c118
-rw-r--r--drivers/net/wireless/ath/ath11k/mhi.c19
-rw-r--r--drivers/net/wireless/ath/ath11k/pci.c24
-rw-r--r--drivers/net/wireless/ath/ath11k/pcic.c6
-rw-r--r--drivers/net/wireless/ath/ath11k/peer.c2
-rw-r--r--drivers/net/wireless/ath/ath11k/qmi.c54
-rw-r--r--drivers/net/wireless/ath/ath11k/reg.c11
-rw-r--r--drivers/net/wireless/ath/ath11k/reg.h3
-rw-r--r--drivers/net/wireless/ath/ath11k/spectral.c28
-rw-r--r--drivers/net/wireless/ath/ath11k/thermal.c22
-rw-r--r--drivers/net/wireless/ath/ath11k/thermal.h8
-rw-r--r--drivers/net/wireless/ath/ath11k/wmi.c70
-rw-r--r--drivers/net/wireless/ath/ath12k/core.c137
-rw-r--r--drivers/net/wireless/ath/ath12k/core.h31
-rw-r--r--drivers/net/wireless/ath/ath12k/debug.c2
-rw-r--r--drivers/net/wireless/ath/ath12k/dp.c1
-rw-r--r--drivers/net/wireless/ath/ath12k/dp_mon.c16
-rw-r--r--drivers/net/wireless/ath/ath12k/dp_rx.c55
-rw-r--r--drivers/net/wireless/ath/ath12k/dp_tx.c16
-rw-r--r--drivers/net/wireless/ath/ath12k/hal.c12
-rw-r--r--drivers/net/wireless/ath/ath12k/hal_rx.c2
-rw-r--r--drivers/net/wireless/ath/ath12k/hif.h18
-rw-r--r--drivers/net/wireless/ath/ath12k/hw.c24
-rw-r--r--drivers/net/wireless/ath/ath12k/hw.h6
-rw-r--r--drivers/net/wireless/ath/ath12k/mac.c311
-rw-r--r--drivers/net/wireless/ath/ath12k/mac.h2
-rw-r--r--drivers/net/wireless/ath/ath12k/mhi.c12
-rw-r--r--drivers/net/wireless/ath/ath12k/pci.c4
-rw-r--r--drivers/net/wireless/ath/ath12k/peer.h3
-rw-r--r--drivers/net/wireless/ath/ath12k/qmi.c12
-rw-r--r--drivers/net/wireless/ath/ath12k/qmi.h1
-rw-r--r--drivers/net/wireless/ath/ath12k/reg.c14
-rw-r--r--drivers/net/wireless/ath/ath12k/reg.h6
-rw-r--r--drivers/net/wireless/ath/ath12k/rx_desc.h91
-rw-r--r--drivers/net/wireless/ath/ath12k/wmi.c131
-rw-r--r--drivers/net/wireless/ath/ath12k/wmi.h28
-rw-r--r--drivers/net/wireless/ath/ath5k/base.c6
-rw-r--r--drivers/net/wireless/ath/ath5k/led.c3
-rw-r--r--drivers/net/wireless/ath/ath5k/pci.c4
-rw-r--r--drivers/net/wireless/ath/ath6kl/cfg80211.c8
-rw-r--r--drivers/net/wireless/ath/ath6kl/init.c2
-rw-r--r--drivers/net/wireless/ath/ath6kl/main.c4
-rw-r--r--drivers/net/wireless/ath/ath6kl/txrx.c2
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9003_phy.c11
-rw-r--r--drivers/net/wireless/ath/ath9k/debug.c2
-rw-r--r--drivers/net/wireless/ath/ath9k/hif_usb.c34
-rw-r--r--drivers/net/wireless/ath/ath9k/hif_usb.h2
-rw-r--r--drivers/net/wireless/ath/ath9k/htc_drv_debug.c2
-rw-r--r--drivers/net/wireless/ath/ath9k/htc_drv_txrx.c2
-rw-r--r--drivers/net/wireless/ath/ath9k/xmit.c2
-rw-r--r--drivers/net/wireless/ath/carl9170/usb.c10
-rw-r--r--drivers/net/wireless/ath/dfs_pattern_detector.c21
-rw-r--r--drivers/net/wireless/ath/wcn36xx/dxe.c6
-rw-r--r--drivers/net/wireless/ath/wcn36xx/smd.c20
-rw-r--r--drivers/net/wireless/ath/wcn36xx/smd.h2
-rw-r--r--drivers/net/wireless/ath/wcn36xx/testmode.c2
-rw-r--r--drivers/net/wireless/ath/wil6210/cfg80211.c3
-rw-r--r--drivers/net/wireless/ath/wil6210/wmi.c2
-rw-r--r--drivers/net/wireless/atmel/atmel.c72
-rw-r--r--drivers/net/wireless/broadcom/b43/dma.c4
-rw-r--r--drivers/net/wireless/broadcom/b43/pio.c2
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c5
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c6
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h2
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c6
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h2
-rw-r--r--drivers/net/wireless/intel/ipw2x00/ipw2100.c20
-rw-r--r--drivers/net/wireless/intel/ipw2x00/ipw2200.c23
-rw-r--r--drivers/net/wireless/intel/ipw2x00/libipw.h2
-rw-r--r--drivers/net/wireless/intel/iwlegacy/4965-mac.c2
-rw-r--r--drivers/net/wireless/intel/iwlegacy/common.c2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/cfg/ax210.c2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/cfg/bz.c16
-rw-r--r--drivers/net/wireless/intel/iwlwifi/cfg/sc.c12
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/commands.h33
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/dev.h14
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c6
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/main.c2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/rs.h12
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/tt.h9
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/tx.c9
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/acpi.c44
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/acpi.h8
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/commands.h30
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/d3.h46
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h38
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/debug.h22
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/mac-cfg.h10
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h68
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/offload.h6
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/power.h7
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/rfi.h7
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/rx.h16
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/stats.h153
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/time-event.h78
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/api/txq.h4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/dbg.c203
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/dbg.h1
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/debugfs.c14
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/file.h32
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/img.h4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/notif-wait.h3
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/rs.c1
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/runtime.h4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/uefi.c50
-rw-r--r--drivers/net/wireless/intel/iwlwifi/fw/uefi.h17
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-config.h10
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-context-info-gen3.h10
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-csr.h4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.h5
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-devtrace.h8
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-drv.c90
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-drv.h2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c5
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-fh.h13
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c83
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h19
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-prph.h20
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-trans.h24
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/constants.h2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/d3.c177
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c76
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c148
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/debugfs.h1
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c9
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/fw.c141
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/link.c30
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c28
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c572
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c16
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c264
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c25
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mvm.h176
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/nvm.c14
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/ops.c44
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c72
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/power.c5
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/rs.h23
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/rx.c157
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c335
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/scan.c2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/sta.c39
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/sta.h12
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/tdls.c7
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/time-event.c198
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/time-event.h21
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/tt.c7
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/tx.c130
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/utils.c61
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/drv.c7
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/internal.h59
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/rx.c12
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c6
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/trans.c46
-rw-r--r--drivers/net/wireless/intel/iwlwifi/queue/tx.c9
-rw-r--r--drivers/net/wireless/intel/iwlwifi/queue/tx.h8
-rw-r--r--drivers/net/wireless/intersil/hostap/hostap.h1
-rw-r--r--drivers/net/wireless/intersil/hostap/hostap_download.c3
-rw-r--r--drivers/net/wireless/intersil/hostap/hostap_ioctl.c228
-rw-r--r--drivers/net/wireless/intersil/hostap/hostap_main.c3
-rw-r--r--drivers/net/wireless/intersil/hostap/hostap_wlan.h2
-rw-r--r--drivers/net/wireless/intersil/p54/p54.h2
-rw-r--r--drivers/net/wireless/legacy/ray_cs.c6
-rw-r--r--drivers/net/wireless/marvell/mwifiex/11h.c4
-rw-r--r--drivers/net/wireless/marvell/mwifiex/cfg80211.c3
-rw-r--r--drivers/net/wireless/marvell/mwifiex/main.h4
-rw-r--r--drivers/net/wireless/marvell/mwifiex/pcie.c322
-rw-r--r--drivers/net/wireless/marvell/mwifiex/sdio.c10
-rw-r--r--drivers/net/wireless/marvell/mwifiex/sdio.h4
-rw-r--r--drivers/net/wireless/mediatek/mt76/Kconfig1
-rw-r--r--drivers/net/wireless/mediatek/mt76/Makefile1
-rw-r--r--drivers/net/wireless/mediatek/mt76/debugfs.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/dma.c14
-rw-r--r--drivers/net/wireless/mediatek/mt76/eeprom.c7
-rw-r--r--drivers/net/wireless/mediatek/mt76/mac80211.c64
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76.h36
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/beacon.c76
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/core.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/init.c8
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/mac.c52
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/main.c4
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7603/regs.h5
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/init.c5
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/main.c4
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/mcu.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76_connac.h6
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h18
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c28
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c191
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h60
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_beacon.c8
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c11
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c13
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt76x02_util.c4
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7915/init.c33
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7915/mac.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7915/main.c53
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7915/mcu.c79
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7915/mcu.h18
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7915/mmio.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h4
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7915/regs.h1
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7915/soc.c5
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7921/init.c57
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7921/mac.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7921/main.c78
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7921/mcu.c155
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7921/mcu.h13
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h18
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7921/pci.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7921/usb.c12
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/Kconfig30
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/Makefile9
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/debugfs.c319
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/init.c235
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/mac.c1452
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/mac.h23
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/main.c1454
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/mcu.c3174
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/mcu.h537
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h309
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/pci.c586
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/pci_mac.c148
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/pci_mcu.c53
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/regs.h92
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7925/usb.c332
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt792x.h38
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt792x_core.c30
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt792x_dma.c49
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt792x_usb.c9
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7996/init.c53
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7996/mac.c111
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7996/main.c65
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7996/mcu.c359
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7996/mcu.h37
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h2
-rw-r--r--drivers/net/wireless/mediatek/mt76/mt7996/regs.h8
-rw-r--r--drivers/net/wireless/mediatek/mt76/tx.c108
-rw-r--r--drivers/net/wireless/mediatek/mt7601u/tx.c2
-rw-r--r--drivers/net/wireless/microchip/wilc1000/cfg80211.c4
-rw-r--r--drivers/net/wireless/microchip/wilc1000/netdev.c20
-rw-r--r--drivers/net/wireless/microchip/wilc1000/netdev.h2
-rw-r--r--drivers/net/wireless/microchip/wilc1000/wlan.c2
-rw-r--r--drivers/net/wireless/purelifi/plfxlc/mac.c2
-rw-r--r--drivers/net/wireless/quantenna/qtnfmac/cfg80211.c4
-rw-r--r--drivers/net/wireless/quantenna/qtnfmac/commands.c5
-rw-r--r--drivers/net/wireless/quantenna/qtnfmac/core.c2
-rw-r--r--drivers/net/wireless/quantenna/qtnfmac/event.c4
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2800.h18
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2800lib.c314
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2800mmio.c3
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2x00.h6
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2x00dev.c2
-rw-r--r--drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/base.c6
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/core.c18
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/ps.c17
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8188ee/dm.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c5
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.c8
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.h1
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192c/dm_common.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192ce/hw.c5
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.c9
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.h1
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192cu/sw.c1
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192cu/trx.c34
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192cu/trx.h7
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192de/dm.c18
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192de/fw.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c4
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c8
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h1
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192ee/dm.c7
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192ee/sw.c1
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.c16
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.h4
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192se/trx.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192se/trx.h4
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723ae/dm.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_btc.c16
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c5
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.c8
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.h1
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723be/dm.c7
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c6
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.c1
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.h1
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c6
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c5
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c5
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.h1
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/wifi.h15
-rw-r--r--drivers/net/wireless/realtek/rtw88/debug.c4
-rw-r--r--drivers/net/wireless/realtek/rtw88/debug.h12
-rw-r--r--drivers/net/wireless/realtek/rtw88/fw.c74
-rw-r--r--drivers/net/wireless/realtek/rtw88/fw.h3
-rw-r--r--drivers/net/wireless/realtek/rtw88/main.h10
-rw-r--r--drivers/net/wireless/realtek/rtw88/ps.c2
-rw-r--r--drivers/net/wireless/realtek/rtw88/reg.h23
-rw-r--r--drivers/net/wireless/realtek/rtw88/regd.c24
-rw-r--r--drivers/net/wireless/realtek/rtw88/regd.h2
-rw-r--r--drivers/net/wireless/realtek/rtw88/rtw8821c.c67
-rw-r--r--drivers/net/wireless/realtek/rtw88/rtw8821c.h1
-rw-r--r--drivers/net/wireless/realtek/rtw88/rtw8821c_table.c1154
-rw-r--r--drivers/net/wireless/realtek/rtw88/rtw8822c_table.c1239
-rw-r--r--drivers/net/wireless/realtek/rtw88/rtw8822cu.c4
-rw-r--r--drivers/net/wireless/realtek/rtw88/usb.c9
-rw-r--r--drivers/net/wireless/realtek/rtw89/chan.c1576
-rw-r--r--drivers/net/wireless/realtek/rtw89/chan.h34
-rw-r--r--drivers/net/wireless/realtek/rtw89/coex.c25
-rw-r--r--drivers/net/wireless/realtek/rtw89/core.c465
-rw-r--r--drivers/net/wireless/realtek/rtw89/core.h429
-rw-r--r--drivers/net/wireless/realtek/rtw89/debug.c286
-rw-r--r--drivers/net/wireless/realtek/rtw89/fw.c712
-rw-r--r--drivers/net/wireless/realtek/rtw89/fw.h144
-rw-r--r--drivers/net/wireless/realtek/rtw89/mac.c211
-rw-r--r--drivers/net/wireless/realtek/rtw89/mac.h49
-rw-r--r--drivers/net/wireless/realtek/rtw89/mac80211.c19
-rw-r--r--drivers/net/wireless/realtek/rtw89/mac_be.c397
-rw-r--r--drivers/net/wireless/realtek/rtw89/pci.c3
-rw-r--r--drivers/net/wireless/realtek/rtw89/phy.c515
-rw-r--r--drivers/net/wireless/realtek/rtw89/phy.h136
-rw-r--r--drivers/net/wireless/realtek/rtw89/phy_be.c576
-rw-r--r--drivers/net/wireless/realtek/rtw89/reg.h409
-rw-r--r--drivers/net/wireless/realtek/rtw89/regd.c4
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8851b.c29
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8851b_table.c1337
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8851b_table.h3
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852a.c28
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852a_table.c2
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852a_table.h1
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852b.c37
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852b_table.c333
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852b_table.h3
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852c.c57
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c107
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.h3
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852c_rfk_table.c42
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852c_table.c3782
-rw-r--r--drivers/net/wireless/realtek/rtw89/rtw8852c_table.h3
-rw-r--r--drivers/net/wireless/realtek/rtw89/txrx.h271
-rw-r--r--drivers/net/wireless/realtek/rtw89/wow.c4
-rw-r--r--drivers/net/wireless/silabs/wfx/data_tx.c125
-rw-r--r--drivers/net/wireless/silabs/wfx/data_tx.h21
-rw-r--r--drivers/net/wireless/silabs/wfx/hif_tx.c43
-rw-r--r--drivers/net/wireless/silabs/wfx/hif_tx.h1
-rw-r--r--drivers/net/wireless/silabs/wfx/main.c5
-rw-r--r--drivers/net/wireless/silabs/wfx/queue.c38
-rw-r--r--drivers/net/wireless/silabs/wfx/queue.h1
-rw-r--r--drivers/net/wireless/silabs/wfx/scan.c66
-rw-r--r--drivers/net/wireless/silabs/wfx/scan.h6
-rw-r--r--drivers/net/wireless/silabs/wfx/sta.c41
-rw-r--r--drivers/net/wireless/silabs/wfx/sta.h1
-rw-r--r--drivers/net/wireless/silabs/wfx/wfx.h8
-rw-r--r--drivers/net/wireless/st/cw1200/txrx.c4
-rw-r--r--drivers/net/wireless/ti/wl1251/main.c2
-rw-r--r--drivers/net/wireless/ti/wl1251/tx.c6
-rw-r--r--drivers/net/wireless/ti/wl12xx/main.c6
-rw-r--r--drivers/net/wireless/ti/wl18xx/main.c7
-rw-r--r--drivers/net/wireless/ti/wlcore/boot.c5
-rw-r--r--drivers/net/wireless/ti/wlcore/event.c2
-rw-r--r--drivers/net/wireless/ti/wlcore/main.c16
-rw-r--r--drivers/net/wireless/ti/wlcore/wlcore.h2
-rw-r--r--drivers/net/wireless/virtual/mac80211_hwsim.c58
-rw-r--r--drivers/net/wireless/virtual/mac80211_hwsim.h19
-rw-r--r--drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h2
-rw-r--r--drivers/net/wwan/iosm/iosm_ipc_imem_ops.h4
-rw-r--r--drivers/net/wwan/iosm/iosm_ipc_mux.h2
-rw-r--r--drivers/net/wwan/iosm/iosm_ipc_pm.h2
-rw-r--r--drivers/net/wwan/iosm/iosm_ipc_port.h2
-rw-r--r--drivers/net/wwan/iosm/iosm_ipc_trace.h2
-rw-r--r--drivers/net/wwan/rpmsg_wwan_ctrl.c2
-rw-r--r--drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c2
-rw-r--r--drivers/net/wwan/t7xx/t7xx_state_monitor.c3
-rw-r--r--drivers/net/wwan/t7xx/t7xx_state_monitor.h2
-rw-r--r--drivers/net/wwan/wwan_core.c9
-rw-r--r--drivers/net/xen-netback/interface.c5
-rw-r--r--drivers/ptp/Kconfig1
-rw-r--r--drivers/ptp/ptp_chardev.c129
-rw-r--r--drivers/ptp/ptp_clock.c45
-rw-r--r--drivers/ptp/ptp_ocp.c369
-rw-r--r--drivers/ptp/ptp_private.h28
-rw-r--r--drivers/ptp/ptp_sysfs.c13
-rw-r--r--drivers/s390/net/ctcm_main.c4
-rw-r--r--drivers/s390/net/qeth_core_main.c2
-rw-r--r--drivers/ssb/Kconfig3
-rw-r--r--drivers/ssb/main.c2
-rw-r--r--drivers/staging/qlge/qlge_devlink.c60
-rw-r--r--drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c9
-rw-r--r--drivers/vhost/vsock.c21
-rw-r--r--include/linux/avf/virtchnl.h15
-rw-r--r--include/linux/bpf-cgroup-defs.h5
-rw-r--r--include/linux/bpf-cgroup.h90
-rw-r--r--include/linux/bpf.h64
-rw-r--r--include/linux/bpf_mem_alloc.h1
-rw-r--r--include/linux/bpf_verifier.h44
-rw-r--r--include/linux/brcmphy.h10
-rw-r--r--include/linux/btf.h1
-rw-r--r--include/linux/can/dev.h4
-rw-r--r--include/linux/ceph/mon_client.h2
-rw-r--r--include/linux/cgroup.h12
-rw-r--r--include/linux/compiler_types.h32
-rw-r--r--include/linux/dpll.h170
-rw-r--r--include/linux/dsa/sja1105.h2
-rw-r--r--include/linux/ethtool.h19
-rw-r--r--include/linux/filter.h67
-rw-r--r--include/linux/fortify-string.h4
-rw-r--r--include/linux/i3c/master.h11
-rw-r--r--include/linux/ieee80211.h106
-rw-r--r--include/linux/igmp.h2
-rw-r--r--include/linux/ipv6.h50
-rw-r--r--include/linux/kasan.h2
-rw-r--r--include/linux/linkmode.h18
-rw-r--r--include/linux/micrel_phy.h4
-rw-r--r--include/linux/mlx5/device.h3
-rw-r--r--include/linux/mlx5/driver.h20
-rw-r--r--include/linux/mlx5/fs.h1
-rw-r--r--include/linux/mlx5/mlx5_ifc.h131
-rw-r--r--include/linux/mm_types.h13
-rw-r--r--include/linux/netdevice.h100
-rw-r--r--include/linux/netfilter.h10
-rw-r--r--include/linux/overflow.h35
-rw-r--r--include/linux/pci_ids.h2
-rw-r--r--include/linux/pds/pds_core_if.h1
-rw-r--r--include/linux/percpu.h1
-rw-r--r--include/linux/phy.h1
-rw-r--r--include/linux/phylink.h56
-rw-r--r--include/linux/posix-clock.h35
-rw-r--r--include/linux/soc/mediatek/mtk_wed.h76
-rw-r--r--include/linux/socket.h1
-rw-r--r--include/linux/sockptr.h23
-rw-r--r--include/linux/stmmac.h2
-rw-r--r--include/linux/tcp.h61
-rw-r--r--include/linux/trace_events.h6
-rw-r--r--include/linux/udp.h66
-rw-r--r--include/linux/virtio_vsock.h10
-rw-r--r--include/net/Space.h1
-rw-r--r--include/net/af_vsock.h7
-rw-r--r--include/net/bluetooth/bluetooth.h2
-rw-r--r--include/net/bluetooth/hci.h3
-rw-r--r--include/net/bluetooth/hci_core.h40
-rw-r--r--include/net/bluetooth/hci_sync.h2
-rw-r--r--include/net/cfg80211.h257
-rw-r--r--include/net/devlink.h69
-rw-r--r--include/net/dropreason-core.h33
-rw-r--r--include/net/dsa.h69
-rw-r--r--include/net/dsa_stubs.h22
-rw-r--r--include/net/dst.h11
-rw-r--r--include/net/flow_offload.h2
-rw-r--r--include/net/ieee80211_radiotap.h6
-rw-r--r--include/net/if_inet6.h2
-rw-r--r--include/net/inet_connection_sock.h22
-rw-r--r--include/net/inet_sock.h11
-rw-r--r--include/net/inet_timewait_sock.h3
-rw-r--r--include/net/ip.h15
-rw-r--r--include/net/ip6_route.h19
-rw-r--r--include/net/ip_fib.h2
-rw-r--r--include/net/ipv6.h42
-rw-r--r--include/net/ipv6_stubs.h5
-rw-r--r--include/net/mac80211.h134
-rw-r--r--include/net/mana/hw_channel.h2
-rw-r--r--include/net/mana/mana.h2
-rw-r--r--include/net/net_namespace.h15
-rw-r--r--include/net/netfilter/nf_conntrack.h14
-rw-r--r--include/net/netfilter/nf_conntrack_labels.h2
-rw-r--r--include/net/netfilter/nf_tables.h67
-rw-r--r--include/net/netkit.h38
-rw-r--r--include/net/netlink.h73
-rw-r--r--include/net/netns/conntrack.h2
-rw-r--r--include/net/netns/ipv4.h3
-rw-r--r--include/net/nexthop.h8
-rw-r--r--include/net/page_pool/helpers.h230
-rw-r--r--include/net/page_pool/types.h6
-rw-r--r--include/net/pkt_cls.h6
-rw-r--r--include/net/pkt_sched.h8
-rw-r--r--include/net/regulatory.h1
-rw-r--r--include/net/route.h10
-rw-r--r--include/net/sch_generic.h4
-rw-r--r--include/net/sock.h41
-rw-r--r--include/net/tc_act/tc_ct.h1
-rw-r--r--include/net/tcp.h371
-rw-r--r--include/net/tcp_ao.h362
-rw-r--r--include/net/tcx.h7
-rw-r--r--include/net/tls.h21
-rw-r--r--include/net/udp_tunnel.h30
-rw-r--r--include/net/udplite.h14
-rw-r--r--include/net/xdp.h19
-rw-r--r--include/net/xdp_sock.h18
-rw-r--r--include/net/xfrm.h2
-rw-r--r--include/trace/events/mptcp.h2
-rw-r--r--include/trace/events/vsock_virtio_transport_common.h12
-rw-r--r--include/uapi/linux/bpf.h52
-rw-r--r--include/uapi/linux/devlink.h3
-rw-r--r--include/uapi/linux/dpll.h207
-rw-r--r--include/uapi/linux/if_bridge.h18
-rw-r--r--include/uapi/linux/if_link.h32
-rw-r--r--include/uapi/linux/mptcp.h172
-rw-r--r--include/uapi/linux/mptcp_pm.h150
-rw-r--r--include/uapi/linux/netdev.h16
-rw-r--r--include/uapi/linux/netlink.h5
-rw-r--r--include/uapi/linux/nl80211.h43
-rw-r--r--include/uapi/linux/pkt_sched.h15
-rw-r--r--include/uapi/linux/ptp_clock.h2
-rw-r--r--include/uapi/linux/rtnetlink.h18
-rw-r--r--include/uapi/linux/snmp.h8
-rw-r--r--include/uapi/linux/tcp.h118
-rw-r--r--include/uapi/linux/vm_sockets.h17
-rw-r--r--kernel/bpf/bpf_iter.c2
-rw-r--r--kernel/bpf/bpf_struct_ops.c26
-rw-r--r--kernel/bpf/btf.c35
-rw-r--r--kernel/bpf/cgroup.c28
-rw-r--r--kernel/bpf/cgroup_iter.c65
-rw-r--r--kernel/bpf/core.c37
-rw-r--r--kernel/bpf/cpumap.c10
-rw-r--r--kernel/bpf/devmap.c10
-rw-r--r--kernel/bpf/hashtab.c7
-rw-r--r--kernel/bpf/helpers.c109
-rw-r--r--kernel/bpf/memalloc.c72
-rw-r--r--kernel/bpf/offload.c18
-rw-r--r--kernel/bpf/ringbuf.c3
-rw-r--r--kernel/bpf/stackmap.c2
-rw-r--r--kernel/bpf/syscall.c71
-rw-r--r--kernel/bpf/task_iter.c282
-rw-r--r--kernel/bpf/tcx.c4
-rw-r--r--kernel/bpf/trampoline.c4
-rw-r--r--kernel/bpf/verifier.c1293
-rw-r--r--kernel/cgroup/cgroup.c18
-rw-r--r--kernel/time/posix-clock.c36
-rw-r--r--kernel/trace/bpf_trace.c10
-rw-r--r--kernel/trace/trace_kprobe.c14
-rw-r--r--kernel/trace/trace_syscalls.c4
-rw-r--r--lib/nlattr.c22
-rw-r--r--lib/test_bpf.c371
-rw-r--r--mm/kasan/kasan.h1
-rw-r--r--mm/percpu.c35
-rw-r--r--net/Kconfig11
-rw-r--r--net/appletalk/Kconfig30
-rw-r--r--net/appletalk/aarp.c2
-rw-r--r--net/appletalk/ddp.c36
-rw-r--r--net/atm/atm_sysfs.c2
-rw-r--r--net/ax25/af_ax25.c2
-rw-r--r--net/bluetooth/amp.c3
-rw-r--r--net/bluetooth/hci_conn.c123
-rw-r--r--net/bluetooth/hci_core.c3
-rw-r--r--net/bluetooth/hci_event.c92
-rw-r--r--net/bluetooth/hci_sync.c36
-rw-r--r--net/bluetooth/hci_sysfs.c23
-rw-r--r--net/bluetooth/iso.c38
-rw-r--r--net/bluetooth/l2cap_sock.c2
-rw-r--r--net/bluetooth/msft.c20
-rw-r--r--net/bridge/br.c1
-rw-r--r--net/bridge/br_device.c3
-rw-r--r--net/bridge/br_fdb.c71
-rw-r--r--net/bridge/br_input.c2
-rw-r--r--net/bridge/br_mdb.c184
-rw-r--r--net/bridge/br_multicast.c5
-rw-r--r--net/bridge/br_netfilter_hooks.c98
-rw-r--r--net/bridge/br_netfilter_ipv6.c6
-rw-r--r--net/bridge/br_netlink.c17
-rw-r--r--net/bridge/br_private.h26
-rw-r--r--net/can/j1939/socket.c2
-rw-r--r--net/can/raw.c5
-rw-r--r--net/ceph/mon_client.c2
-rw-r--r--net/core/Makefile1
-rw-r--r--net/core/dev.c182
-rw-r--r--net/core/dev.h6
-rw-r--r--net/core/dev_ioctl.c2
-rw-r--r--net/core/dst.c10
-rw-r--r--net/core/filter.c83
-rw-r--r--net/core/gso_test.c278
-rw-r--r--net/core/netclassid_cgroup.c6
-rw-r--r--net/core/netdev-genl.c12
-rw-r--r--net/core/page_pool.c31
-rw-r--r--net/core/pktgen.c102
-rw-r--r--net/core/rtnetlink.c152
-rw-r--r--net/core/selftests.c9
-rw-r--r--net/core/skbuff.c27
-rw-r--r--net/core/sock.c228
-rw-r--r--net/core/xdp.c4
-rw-r--r--net/dccp/ipv4.c2
-rw-r--r--net/dccp/ipv6.c10
-rw-r--r--net/dccp/timer.c4
-rw-r--r--net/devlink/core.c223
-rw-r--r--net/devlink/dev.c60
-rw-r--r--net/devlink/devl_internal.h98
-rw-r--r--net/devlink/dpipe.c14
-rw-r--r--net/devlink/health.c411
-rw-r--r--net/devlink/linecard.c83
-rw-r--r--net/devlink/netlink.c358
-rw-r--r--net/devlink/netlink_gen.c757
-rw-r--r--net/devlink/netlink_gen.h64
-rw-r--r--net/devlink/param.c14
-rw-r--r--net/devlink/port.c66
-rw-r--r--net/devlink/rate.c6
-rw-r--r--net/devlink/region.c8
-rw-r--r--net/devlink/resource.c4
-rw-r--r--net/devlink/sb.c17
-rw-r--r--net/devlink/trap.c9
-rw-r--r--net/dsa/Makefile6
-rw-r--r--net/dsa/conduit.c (renamed from net/dsa/master.c)118
-rw-r--r--net/dsa/conduit.h22
-rw-r--r--net/dsa/dsa.c224
-rw-r--r--net/dsa/dsa.h12
-rw-r--r--net/dsa/master.h22
-rw-r--r--net/dsa/netlink.c22
-rw-r--r--net/dsa/port.c144
-rw-r--r--net/dsa/port.h7
-rw-r--r--net/dsa/slave.h69
-rw-r--r--net/dsa/switch.c20
-rw-r--r--net/dsa/switch.h8
-rw-r--r--net/dsa/tag.c10
-rw-r--r--net/dsa/tag.h26
-rw-r--r--net/dsa/tag_8021q.c22
-rw-r--r--net/dsa/tag_8021q.h2
-rw-r--r--net/dsa/tag_ar9331.c4
-rw-r--r--net/dsa/tag_brcm.c14
-rw-r--r--net/dsa/tag_dsa.c6
-rw-r--r--net/dsa/tag_gswip.c4
-rw-r--r--net/dsa/tag_hellcreek.c4
-rw-r--r--net/dsa/tag_ksz.c20
-rw-r--r--net/dsa/tag_lan9303.c4
-rw-r--r--net/dsa/tag_mtk.c4
-rw-r--r--net/dsa/tag_none.c6
-rw-r--r--net/dsa/tag_ocelot.c22
-rw-r--r--net/dsa/tag_ocelot_8021q.c12
-rw-r--r--net/dsa/tag_qca.c6
-rw-r--r--net/dsa/tag_rtl4_a.c6
-rw-r--r--net/dsa/tag_rtl8_4.c6
-rw-r--r--net/dsa/tag_rzn1_a5psw.c4
-rw-r--r--net/dsa/tag_sja1105.c30
-rw-r--r--net/dsa/tag_trailer.c4
-rw-r--r--net/dsa/tag_xrs700x.c4
-rw-r--r--net/dsa/user.c (renamed from net/dsa/slave.c)1473
-rw-r--r--net/dsa/user.h69
-rw-r--r--net/ethtool/common.c21
-rw-r--r--net/handshake/genl.c2
-rw-r--r--net/handshake/netlink.c2
-rw-r--r--net/handshake/tlshd.c6
-rw-r--r--net/ipv4/Kconfig17
-rw-r--r--net/ipv4/Makefile2
-rw-r--r--net/ipv4/af_inet.c9
-rw-r--r--net/ipv4/datagram.c6
-rw-r--r--net/ipv4/igmp.c2
-rw-r--r--net/ipv4/inet_diag.c4
-rw-r--r--net/ipv4/ip_forward.c4
-rw-r--r--net/ipv4/ip_output.c17
-rw-r--r--net/ipv4/ip_sockglue.c187
-rw-r--r--net/ipv4/netfilter/iptable_mangle.c9
-rw-r--r--net/ipv4/ping.c15
-rw-r--r--net/ipv4/proc.c8
-rw-r--r--net/ipv4/raw.c19
-rw-r--r--net/ipv4/route.c54
-rw-r--r--net/ipv4/syncookies.c36
-rw-r--r--net/ipv4/sysctl_net_ipv4.c17
-rw-r--r--net/ipv4/tcp.c276
-rw-r--r--net/ipv4/tcp_ao.c2392
-rw-r--r--net/ipv4/tcp_bbr.c13
-rw-r--r--net/ipv4/tcp_input.c213
-rw-r--r--net/ipv4/tcp_ipv4.c384
-rw-r--r--net/ipv4/tcp_lp.c2
-rw-r--r--net/ipv4/tcp_metrics.c22
-rw-r--r--net/ipv4/tcp_minisocks.c69
-rw-r--r--net/ipv4/tcp_output.c316
-rw-r--r--net/ipv4/tcp_sigpool.c358
-rw-r--r--net/ipv4/tcp_timer.c63
-rw-r--r--net/ipv4/udp.c101
-rw-r--r--net/ipv4/udp_offload.c4
-rw-r--r--net/ipv4/udp_tunnel_core.c51
-rw-r--r--net/ipv4/udp_tunnel_nic.c11
-rw-r--r--net/ipv4/udplite.c1
-rw-r--r--net/ipv4/xfrm4_input.c4
-rw-r--r--net/ipv6/Makefile1
-rw-r--r--net/ipv6/addrconf.c57
-rw-r--r--net/ipv6/af_inet6.c19
-rw-r--r--net/ipv6/datagram.c15
-rw-r--r--net/ipv6/icmp.c4
-rw-r--r--net/ipv6/inet6_connection_sock.c2
-rw-r--r--net/ipv6/ioam6_iptunnel.c2
-rw-r--r--net/ipv6/ip6_flowlabel.c8
-rw-r--r--net/ipv6/ip6_output.c171
-rw-r--r--net/ipv6/ip6_udp_tunnel.c75
-rw-r--r--net/ipv6/ipv6_sockglue.c242
-rw-r--r--net/ipv6/mcast.c11
-rw-r--r--net/ipv6/ndisc.c6
-rw-r--r--net/ipv6/netfilter/ip6table_mangle.c9
-rw-r--r--net/ipv6/ping.c6
-rw-r--r--net/ipv6/proc.c3
-rw-r--r--net/ipv6/raw.c18
-rw-r--r--net/ipv6/route.c6
-rw-r--r--net/ipv6/syncookies.c5
-rw-r--r--net/ipv6/tcp_ao.c168
-rw-r--r--net/ipv6/tcp_ipv6.c411
-rw-r--r--net/ipv6/udp.c52
-rw-r--r--net/ipv6/udplite.c1
-rw-r--r--net/ipv6/xfrm6_input.c4
-rw-r--r--net/ipv6/xfrm6_output.c2
-rw-r--r--net/l2tp/l2tp_core.c6
-rw-r--r--net/l2tp/l2tp_eth.c34
-rw-r--r--net/l2tp/l2tp_ip6.c6
-rw-r--r--net/mac80211/Kconfig11
-rw-r--r--net/mac80211/Makefile2
-rw-r--r--net/mac80211/agg-rx.c63
-rw-r--r--net/mac80211/agg-tx.c63
-rw-r--r--net/mac80211/airtime.c10
-rw-r--r--net/mac80211/cfg.c490
-rw-r--r--net/mac80211/chan.c156
-rw-r--r--net/mac80211/debugfs.c11
-rw-r--r--net/mac80211/debugfs_key.c20
-rw-r--r--net/mac80211/debugfs_netdev.c161
-rw-r--r--net/mac80211/debugfs_netdev.h15
-rw-r--r--net/mac80211/debugfs_sta.c4
-rw-r--r--net/mac80211/driver-ops.c54
-rw-r--r--net/mac80211/driver-ops.h159
-rw-r--r--net/mac80211/drop.h49
-rw-r--r--net/mac80211/ethtool.c20
-rw-r--r--net/mac80211/ht.c58
-rw-r--r--net/mac80211/ibss.c104
-rw-r--r--net/mac80211/ieee80211_i.h223
-rw-r--r--net/mac80211/iface.c180
-rw-r--r--net/mac80211/key.c149
-rw-r--r--net/mac80211/key.h11
-rw-r--r--net/mac80211/link.c63
-rw-r--r--net/mac80211/main.c93
-rw-r--r--net/mac80211/mesh.c24
-rw-r--r--net/mac80211/mesh_hwmp.c2
-rw-r--r--net/mac80211/mesh_pathtbl.c22
-rw-r--r--net/mac80211/mesh_plink.c6
-rw-r--r--net/mac80211/mesh_ps.c6
-rw-r--r--net/mac80211/mesh_sync.c4
-rw-r--r--net/mac80211/mlme.c709
-rw-r--r--net/mac80211/ocb.c19
-rw-r--r--net/mac80211/offchannel.c120
-rw-r--r--net/mac80211/pm.c13
-rw-r--r--net/mac80211/rc80211_minstrel_ht.c7
-rw-r--r--net/mac80211/rx.c113
-rw-r--r--net/mac80211/s1g.c15
-rw-r--r--net/mac80211/scan.c226
-rw-r--r--net/mac80211/spectmgmt.c13
-rw-r--r--net/mac80211/sta_info.c171
-rw-r--r--net/mac80211/sta_info.h26
-rw-r--r--net/mac80211/status.c111
-rw-r--r--net/mac80211/tdls.c88
-rw-r--r--net/mac80211/tests/Makefile3
-rw-r--r--net/mac80211/tests/elems.c101
-rw-r--r--net/mac80211/tests/module.c10
-rw-r--r--net/mac80211/trace.h11
-rw-r--r--net/mac80211/tx.c73
-rw-r--r--net/mac80211/util.c263
-rw-r--r--net/mac80211/wep.c9
-rw-r--r--net/mac80211/wpa.c42
-rw-r--r--net/mptcp/Makefile3
-rw-r--r--net/mptcp/ctrl.c16
-rw-r--r--net/mptcp/fastopen.c1
-rw-r--r--net/mptcp/mptcp_pm_gen.c179
-rw-r--r--net/mptcp/mptcp_pm_gen.h58
-rw-r--r--net/mptcp/pm.c2
-rw-r--r--net/mptcp/pm_netlink.c114
-rw-r--r--net/mptcp/pm_userspace.c89
-rw-r--r--net/mptcp/protocol.c75
-rw-r--r--net/mptcp/protocol.h92
-rw-r--r--net/mptcp/sockopt.c73
-rw-r--r--net/mptcp/subflow.c46
-rw-r--r--net/netfilter/core.c6
-rw-r--r--net/netfilter/ipvs/ip_vs_sync.c16
-rw-r--r--net/netfilter/nf_conntrack_core.c76
-rw-r--r--net/netfilter/nf_conntrack_helper.c7
-rw-r--r--net/netfilter/nf_conntrack_labels.c17
-rw-r--r--net/netfilter/nf_conntrack_proto_tcp.c7
-rw-r--r--net/netfilter/nf_nat_proto.c69
-rw-r--r--net/netfilter/nf_synproxy_core.c2
-rw-r--r--net/netfilter/nf_tables_api.c564
-rw-r--r--net/netfilter/nf_tables_core.c8
-rw-r--r--net/netfilter/nf_tables_trace.c8
-rw-r--r--net/netfilter/nfnetlink_queue.c15
-rw-r--r--net/netfilter/nft_dynset.c23
-rw-r--r--net/netfilter/nft_set_bitmap.c53
-rw-r--r--net/netfilter/nft_set_hash.c109
-rw-r--r--net/netfilter/nft_set_pipapo.c80
-rw-r--r--net/netfilter/nft_set_pipapo.h4
-rw-r--r--net/netfilter/nft_set_rbtree.c200
-rw-r--r--net/netlink/genetlink.c3
-rw-r--r--net/netlink/policy.c29
-rw-r--r--net/netrom/af_netrom.c2
-rw-r--r--net/openvswitch/actions.c27
-rw-r--r--net/openvswitch/flow_table.c7
-rw-r--r--net/openvswitch/flow_table.h2
-rw-r--r--net/openvswitch/meter.h4
-rw-r--r--net/packet/internal.h2
-rw-r--r--net/rose/af_rose.c2
-rw-r--r--net/sched/act_ct.c41
-rw-r--r--net/sched/cls_api.c26
-rw-r--r--net/sched/cls_route.c37
-rw-r--r--net/sched/em_meta.c2
-rw-r--r--net/sched/sch_fq.c391
-rw-r--r--net/sched/sch_fq_pie.c2
-rw-r--r--net/sched/sch_frag.c4
-rw-r--r--net/sched/sch_generic.c9
-rw-r--r--net/sched/sch_netem.c2
-rw-r--r--net/sched/sch_qfq.c4
-rw-r--r--net/sched/sch_taprio.c2
-rw-r--r--net/sctp/ipv6.c9
-rw-r--r--net/sctp/protocol.c4
-rw-r--r--net/sctp/sm_make_chunk.c2
-rw-r--r--net/smc/af_smc.c2
-rw-r--r--net/tipc/link.c4
-rw-r--r--net/tls/tls.h11
-rw-r--r--net/tls/tls_device.c101
-rw-r--r--net/tls/tls_device_fallback.c23
-rw-r--r--net/tls/tls_main.c62
-rw-r--r--net/tls/tls_sw.c194
-rw-r--r--net/unix/af_unix.c58
-rw-r--r--net/vmw_vsock/af_vsock.c66
-rw-r--r--net/vmw_vsock/virtio_transport.c99
-rw-r--r--net/vmw_vsock/virtio_transport_common.c307
-rw-r--r--net/vmw_vsock/vsock_loopback.c6
-rw-r--r--net/wireless/Kconfig11
-rw-r--r--net/wireless/Makefile1
-rw-r--r--net/wireless/ap.c24
-rw-r--r--net/wireless/chan.c51
-rw-r--r--net/wireless/core.c72
-rw-r--r--net/wireless/core.h64
-rw-r--r--net/wireless/ibss.c76
-rw-r--r--net/wireless/lib80211_crypt_tkip.c12
-rw-r--r--net/wireless/mesh.c28
-rw-r--r--net/wireless/mlme.c23
-rw-r--r--net/wireless/nl80211.c544
-rw-r--r--net/wireless/nl80211.h7
-rw-r--r--net/wireless/ocb.c43
-rw-r--r--net/wireless/pmsr.c4
-rw-r--r--net/wireless/rdev-ops.h2
-rw-r--r--net/wireless/reg.c99
-rw-r--r--net/wireless/reg.h16
-rw-r--r--net/wireless/scan.c111
-rw-r--r--net/wireless/sme.c82
-rw-r--r--net/wireless/sysfs.c4
-rw-r--r--net/wireless/tests/Makefile3
-rw-r--r--net/wireless/tests/fragmentation.c157
-rw-r--r--net/wireless/tests/module.c10
-rw-r--r--net/wireless/trace.h88
-rw-r--r--net/wireless/util.c60
-rw-r--r--net/wireless/wext-compat.c47
-rw-r--r--net/wireless/wext-sme.c59
-rw-r--r--net/x25/af_x25.c2
-rw-r--r--net/xdp/xsk.c32
-rw-r--r--net/xdp/xsk_buff_pool.c3
-rw-r--r--net/xfrm/xfrm_policy.c2
-rw-r--r--samples/bpf/Makefile19
-rw-r--r--samples/bpf/syscall_tp_kern.c15
-rw-r--r--samples/bpf/syscall_tp_user.c45
-rw-r--r--tools/bpf/bpftool/Documentation/bpftool-cgroup.rst16
-rw-r--r--tools/bpf/bpftool/Documentation/bpftool-net.rst8
-rw-r--r--tools/bpf/bpftool/Documentation/bpftool-prog.rst8
-rw-r--r--tools/bpf/bpftool/bash-completion/bpftool14
-rw-r--r--tools/bpf/bpftool/btf_dumper.c2
-rw-r--r--tools/bpf/bpftool/cgroup.c16
-rw-r--r--tools/bpf/bpftool/gen.c58
-rw-r--r--tools/bpf/bpftool/link.c15
-rw-r--r--tools/bpf/bpftool/net.c7
-rw-r--r--tools/bpf/bpftool/prog.c7
-rw-r--r--tools/bpf/bpftool/struct_ops.c6
-rw-r--r--tools/include/uapi/linux/bpf.h52
-rw-r--r--tools/include/uapi/linux/if_link.h141
-rw-r--r--tools/include/uapi/linux/netdev.h16
-rw-r--r--tools/lib/bpf/bpf.c16
-rw-r--r--tools/lib/bpf/bpf.h5
-rw-r--r--tools/lib/bpf/bpf_helpers.h1
-rw-r--r--tools/lib/bpf/bpf_tracing.h2
-rw-r--r--tools/lib/bpf/btf.c160
-rw-r--r--tools/lib/bpf/elf.c143
-rw-r--r--tools/lib/bpf/libbpf.c237
-rw-r--r--tools/lib/bpf/libbpf.h88
-rw-r--r--tools/lib/bpf/libbpf.map8
-rw-r--r--tools/lib/bpf/ringbuf.c85
-rw-r--r--tools/net/ynl/Makefile1
-rwxr-xr-xtools/net/ynl/cli.py3
-rw-r--r--tools/net/ynl/generated/Makefile6
-rw-r--r--tools/net/ynl/generated/devlink-user.c3861
-rw-r--r--tools/net/ynl/generated/devlink-user.h3301
-rw-r--r--tools/net/ynl/generated/ethtool-user.h82
-rw-r--r--tools/net/ynl/generated/fou-user.h2
-rw-r--r--tools/net/ynl/generated/handshake-user.h12
-rw-r--r--tools/net/ynl/generated/netdev-user.c19
-rw-r--r--tools/net/ynl/generated/netdev-user.h7
-rw-r--r--tools/net/ynl/lib/nlspec.py6
-rw-r--r--tools/net/ynl/lib/ynl.c12
-rw-r--r--tools/net/ynl/lib/ynl.h22
-rw-r--r--tools/net/ynl/lib/ynl.py77
-rw-r--r--tools/net/ynl/samples/Makefile3
-rw-r--r--tools/net/ynl/samples/netdev.c8
-rwxr-xr-xtools/net/ynl/ynl-gen-c.py287
-rw-r--r--tools/testing/selftests/bpf/DENYLIST.aarch642
-rw-r--r--tools/testing/selftests/bpf/DENYLIST.s390x26
-rw-r--r--tools/testing/selftests/bpf/Makefile56
-rw-r--r--tools/testing/selftests/bpf/bpf_experimental.h346
-rw-r--r--tools/testing/selftests/bpf/bpf_kfuncs.h14
-rw-r--r--tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c5
-rw-r--r--tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h2
-rw-r--r--tools/testing/selftests/bpf/cgroup_helpers.c38
-rw-r--r--tools/testing/selftests/bpf/config2
-rw-r--r--tools/testing/selftests/bpf/liburandom_read.map15
-rw-r--r--tools/testing/selftests/bpf/map_tests/map_in_map_batch_ops.c4
-rw-r--r--tools/testing/selftests/bpf/netlink_helpers.c358
-rw-r--r--tools/testing/selftests/bpf/netlink_helpers.h46
-rw-r--r--tools/testing/selftests/bpf/network_helpers.c34
-rw-r--r--tools/testing/selftests/bpf/network_helpers.h1
-rw-r--r--tools/testing/selftests/bpf/prog_tests/align.c241
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/bpf_iter.c44
-rw-r--r--tools/testing/selftests/bpf/prog_tests/btf.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/connect_ping.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/exceptions.c409
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fib_lookup.c83
-rw-r--r--tools/testing/selftests/bpf/prog_tests/fill_link_info.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/iters.c208
-rw-r--r--tools/testing/selftests/bpf/prog_tests/kprobe_multi_testmod_test.c20
-rw-r--r--tools/testing/selftests/bpf/prog_tests/libbpf_str.c6
-rw-r--r--tools/testing/selftests/bpf/prog_tests/linked_list.c16
-rw-r--r--tools/testing/selftests/bpf/prog_tests/lwt_helpers.h3
-rw-r--r--tools/testing/selftests/bpf/prog_tests/missed.c138
-rw-r--r--tools/testing/selftests/bpf/prog_tests/module_fentry_shadow.c5
-rw-r--r--tools/testing/selftests/bpf/prog_tests/percpu_alloc.c128
-rw-r--r--tools/testing/selftests/bpf/prog_tests/preempted_bpf_ma_op.c89
-rw-r--r--tools/testing/selftests/bpf/prog_tests/queue_stack_map.c2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/ringbuf.c26
-rw-r--r--tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c15
-rw-r--r--tools/testing/selftests/bpf/prog_tests/section_names.c45
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sock_addr.c612
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockmap_basic.c8
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h2
-rw-r--r--tools/testing/selftests/bpf/prog_tests/sockmap_listen.c150
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tailcalls.c269
-rw-r--r--tools/testing/selftests/bpf/prog_tests/task_under_cgroup.c11
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tc_helpers.h4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tc_netkit.c687
-rw-r--r--tools/testing/selftests/bpf/prog_tests/tc_opts.c131
-rw-r--r--tools/testing/selftests/bpf/prog_tests/test_bpf_ma.c20
-rw-r--r--tools/testing/selftests/bpf/prog_tests/timer.c4
-rw-r--r--tools/testing/selftests/bpf/prog_tests/uprobe.c95
-rw-r--r--tools/testing/selftests/bpf/prog_tests/xdp_metadata.c2
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c (renamed from tools/testing/selftests/bpf/progs/bpf_iter_task_vma.c)0
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_iter_tasks.c (renamed from tools/testing/selftests/bpf/progs/bpf_iter_task.c)0
-rw-r--r--tools/testing/selftests/bpf/progs/bpf_misc.h3
-rw-r--r--tools/testing/selftests/bpf/progs/connect_unix_prog.c40
-rw-r--r--tools/testing/selftests/bpf/progs/exceptions.c368
-rw-r--r--tools/testing/selftests/bpf/progs/exceptions_assert.c135
-rw-r--r--tools/testing/selftests/bpf/progs/exceptions_ext.c72
-rw-r--r--tools/testing/selftests/bpf/progs/exceptions_fail.c347
-rw-r--r--tools/testing/selftests/bpf/progs/getpeername_unix_prog.c39
-rw-r--r--tools/testing/selftests/bpf/progs/getsockname_unix_prog.c39
-rw-r--r--tools/testing/selftests/bpf/progs/iters.c695
-rw-r--r--tools/testing/selftests/bpf/progs/iters_css.c72
-rw-r--r--tools/testing/selftests/bpf/progs/iters_css_task.c47
-rw-r--r--tools/testing/selftests/bpf/progs/iters_task.c41
-rw-r--r--tools/testing/selftests/bpf/progs/iters_task_failure.c105
-rw-r--r--tools/testing/selftests/bpf/progs/iters_task_vma.c44
-rw-r--r--tools/testing/selftests/bpf/progs/linked_list_fail.c4
-rw-r--r--tools/testing/selftests/bpf/progs/missed_kprobe.c30
-rw-r--r--tools/testing/selftests/bpf/progs/missed_kprobe_recursion.c48
-rw-r--r--tools/testing/selftests/bpf/progs/missed_tp_recursion.c41
-rw-r--r--tools/testing/selftests/bpf/progs/percpu_alloc_array.c190
-rw-r--r--tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c109
-rw-r--r--tools/testing/selftests/bpf/progs/percpu_alloc_fail.c164
-rw-r--r--tools/testing/selftests/bpf/progs/preempted_bpf_ma_op.c106
-rw-r--r--tools/testing/selftests/bpf/progs/profiler.inc.h2
-rw-r--r--tools/testing/selftests/bpf/progs/recvmsg_unix_prog.c39
-rw-r--r--tools/testing/selftests/bpf/progs/sendmsg_unix_prog.c40
-rw-r--r--tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c18
-rw-r--r--tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c18
-rw-r--r--tools/testing/selftests/bpf/progs/test_bpf_ma.c180
-rw-r--r--tools/testing/selftests/bpf/progs/test_ldsx_insn.c9
-rw-r--r--tools/testing/selftests/bpf/progs/test_task_under_cgroup.c28
-rw-r--r--tools/testing/selftests/bpf/progs/test_tc_link.c13
-rw-r--r--tools/testing/selftests/bpf/progs/test_uprobe.c61
-rw-r--r--tools/testing/selftests/bpf/progs/test_vmlinux.c4
-rw-r--r--tools/testing/selftests/bpf/progs/timer.c63
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_bswap.c4
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_gotol.c4
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_ldsx.c152
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_movsx.c4
-rw-r--r--tools/testing/selftests/bpf/progs/verifier_sdiv.c4
-rw-r--r--tools/testing/selftests/bpf/progs/xdp_hw_metadata.c2
-rw-r--r--tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c4
-rw-r--r--tools/testing/selftests/bpf/progs/xsk_xdp_progs.c22
-rwxr-xr-xtools/testing/selftests/bpf/test_bpftool_synctypes.py9
-rw-r--r--tools/testing/selftests/bpf/test_loader.c4
-rw-r--r--tools/testing/selftests/bpf/test_progs.c2
-rw-r--r--tools/testing/selftests/bpf/test_progs.h2
-rwxr-xr-xtools/testing/selftests/bpf/test_xsk.sh40
-rw-r--r--tools/testing/selftests/bpf/trace_helpers.c134
-rw-r--r--tools/testing/selftests/bpf/trace_helpers.h8
-rw-r--r--tools/testing/selftests/bpf/unpriv_helpers.c33
-rw-r--r--tools/testing/selftests/bpf/urandom_read.c15
-rw-r--r--tools/testing/selftests/bpf/urandom_read_lib1.c22
-rw-r--r--tools/testing/selftests/bpf/xdp_features.c4
-rw-r--r--tools/testing/selftests/bpf/xdp_hw_metadata.c80
-rw-r--r--tools/testing/selftests/bpf/xsk.c3
-rw-r--r--tools/testing/selftests/bpf/xsk.h2
-rwxr-xr-xtools/testing/selftests/bpf/xsk_prereqs.sh10
-rw-r--r--tools/testing/selftests/bpf/xsk_xdp_common.h12
-rw-r--r--tools/testing/selftests/bpf/xsk_xdp_metadata.h5
-rw-r--r--tools/testing/selftests/bpf/xskxceiver.c1020
-rw-r--r--tools/testing/selftests/bpf/xskxceiver.h57
-rwxr-xr-xtools/testing/selftests/drivers/net/netdevsim/devlink.sh21
-rw-r--r--tools/testing/selftests/net/Makefile1
-rw-r--r--tools/testing/selftests/net/af_unix/scm_pidfd.c1
-rw-r--r--tools/testing/selftests/net/af_unix/test_unix_oob.c2
-rwxr-xr-xtools/testing/selftests/net/fdb_flush.sh812
-rw-r--r--tools/testing/selftests/net/forwarding/Makefile3
-rwxr-xr-xtools/testing/selftests/net/forwarding/bridge_fdb_learning_limit.sh283
-rwxr-xr-xtools/testing/selftests/net/forwarding/bridge_mdb.sh184
-rwxr-xr-xtools/testing/selftests/net/mptcp/mptcp_join.sh23
-rwxr-xr-xtools/testing/selftests/net/mptcp/mptcp_sockopt.sh1
-rw-r--r--tools/testing/selftests/net/nettest.c5
-rwxr-xr-xtools/testing/selftests/net/route_localnet.sh6
-rwxr-xr-xtools/testing/selftests/net/rtnetlink.sh981
-rwxr-xr-xtools/testing/selftests/net/test_vxlan_mdb.sh108
-rw-r--r--tools/testing/selftests/netfilter/Makefile2
-rwxr-xr-xtools/testing/selftests/netfilter/nf_nat_edemux.sh46
-rwxr-xr-xtools/testing/selftests/netfilter/xt_string.sh128
-rw-r--r--tools/testing/selftests/ptp/ptpchmaskfmt.sh14
-rw-r--r--tools/testing/selftests/ptp/testptp.c19
-rw-r--r--tools/testing/selftests/tc-testing/Makefile2
-rw-r--r--tools/testing/selftests/tc-testing/README65
-rw-r--r--tools/testing/selftests/tc-testing/TdcPlugin.py4
-rw-r--r--tools/testing/selftests/tc-testing/TdcResults.py3
-rw-r--r--tools/testing/selftests/tc-testing/config9
-rw-r--r--tools/testing/selftests/tc-testing/plugin-lib/nsPlugin.py194
-rw-r--r--tools/testing/selftests/tc-testing/plugin-lib/rootPlugin.py4
-rw-r--r--tools/testing/selftests/tc-testing/plugin-lib/valgrindPlugin.py5
-rwxr-xr-xtools/testing/selftests/tc-testing/scripts/taprio_wait_for_admin.sh (renamed from tools/testing/selftests/tc-testing/taprio_wait_for_admin.sh)0
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json45
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/csum.json69
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/ct.json54
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/ctinfo.json36
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/gact.json75
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/gate.json36
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/ife.json144
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json72
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/mpls.json159
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/nat.json81
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/pedit.json198
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/police.json102
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/sample.json87
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/simple.json27
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/skbedit.json90
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/skbmod.json54
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/tunnel_key.json117
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/vlan.json108
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/actions/xt.json24
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/filters/bpf.json10
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/filters/fw.json315
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/filters/matchall.json141
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/filters/route.json25
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/filters/u32.json25
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/infra/actions.json144
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/infra/filter.json9
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/cake.json82
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/cbs.json38
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/choke.json30
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/codel.json34
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/drr.json10
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/etf.json18
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/ets.json284
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/fifo.json98
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq.json68
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_codel.json54
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json5
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/gred.json28
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/hfsc.json58
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/hhf.json36
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/htb.json46
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/ingress.json36
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/netem.json62
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/pfifo_fast.json18
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/plug.json30
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/prio.json85
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/qfq.json39
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/red.json34
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json48
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfq.json40
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/skbprio.json16
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/taprio.json8
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/tbf.json36
-rw-r--r--tools/testing/selftests/tc-testing/tc-tests/qdiscs/teql.json34
-rwxr-xr-xtools/testing/selftests/tc-testing/tdc.py250
-rw-r--r--tools/testing/vsock/.gitignore1
-rw-r--r--tools/testing/vsock/Makefile11
-rw-r--r--tools/testing/vsock/msg_zerocopy_common.c87
-rw-r--r--tools/testing/vsock/msg_zerocopy_common.h18
-rw-r--r--tools/testing/vsock/util.c257
-rw-r--r--tools/testing/vsock/util.h8
-rw-r--r--tools/testing/vsock/vsock_perf.c80
-rw-r--r--tools/testing/vsock/vsock_test.c341
-rw-r--r--tools/testing/vsock/vsock_test_zerocopy.c358
-rw-r--r--tools/testing/vsock/vsock_test_zerocopy.h15
-rw-r--r--tools/testing/vsock/vsock_uring_test.c342
1865 files changed, 121291 insertions, 33038 deletions
diff --git a/Documentation/admin-guide/sysctl/net.rst b/Documentation/admin-guide/sysctl/net.rst
index 4877563241f3..c7525942f12c 100644
--- a/Documentation/admin-guide/sysctl/net.rst
+++ b/Documentation/admin-guide/sysctl/net.rst
@@ -71,6 +71,7 @@ two flavors of JITs, the newer eBPF JIT currently supported on:
- s390x
- riscv64
- riscv32
+ - loongarch64
And the older cBPF JIT supported on the following archs:
diff --git a/Documentation/bpf/libbpf/program_types.rst b/Documentation/bpf/libbpf/program_types.rst
index ad4d4d5eecb0..63bb88846e50 100644
--- a/Documentation/bpf/libbpf/program_types.rst
+++ b/Documentation/bpf/libbpf/program_types.rst
@@ -56,6 +56,16 @@ described in more detail in the footnotes.
| | ``BPF_CGROUP_UDP6_RECVMSG`` | ``cgroup/recvmsg6`` | |
+ +----------------------------------------+----------------------------------+-----------+
| | ``BPF_CGROUP_UDP6_SENDMSG`` | ``cgroup/sendmsg6`` | |
+| +----------------------------------------+----------------------------------+-----------+
+| | ``BPF_CGROUP_UNIX_CONNECT`` | ``cgroup/connect_unix`` | |
+| +----------------------------------------+----------------------------------+-----------+
+| | ``BPF_CGROUP_UNIX_SENDMSG`` | ``cgroup/sendmsg_unix`` | |
+| +----------------------------------------+----------------------------------+-----------+
+| | ``BPF_CGROUP_UNIX_RECVMSG`` | ``cgroup/recvmsg_unix`` | |
+| +----------------------------------------+----------------------------------+-----------+
+| | ``BPF_CGROUP_UNIX_GETPEERNAME`` | ``cgroup/getpeername_unix`` | |
+| +----------------------------------------+----------------------------------+-----------+
+| | ``BPF_CGROUP_UNIX_GETSOCKNAME`` | ``cgroup/getsockname_unix`` | |
+-------------------------------------------+----------------------------------------+----------------------------------+-----------+
| ``BPF_PROG_TYPE_CGROUP_SOCK`` | ``BPF_CGROUP_INET4_POST_BIND`` | ``cgroup/post_bind4`` | |
+ +----------------------------------------+----------------------------------+-----------+
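The new ``cgroup/*_unix`` section names above correspond to the AF_UNIX variants of the cgroup sockaddr hooks added in this pull (see the ``*_unix_prog.c`` selftests in the diffstat). A minimal sketch of such a program, assuming the same ``struct bpf_sock_addr`` context and allow/deny return convention as the existing sockaddr hooks, could look like:

// SPDX-License-Identifier: GPL-2.0
/* Sketch only: assumes the conventional bpf_sock_addr context and the
 * 1 = allow / 0 = reject verdict used by the other cgroup sockaddr hooks. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("cgroup/connect_unix")
int allow_connect_unix(struct bpf_sock_addr *ctx)
{
	/* ctx->user_family is AF_UNIX for these hooks; return 1 to let the
	 * connect() proceed, 0 to reject it. */
	return 1;
}

char _license[] SEC("license") = "GPL";
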
diff --git a/Documentation/bpf/prog_flow_dissector.rst b/Documentation/bpf/prog_flow_dissector.rst
index 4d86780ab0f1..f24270b8b034 100644
--- a/Documentation/bpf/prog_flow_dissector.rst
+++ b/Documentation/bpf/prog_flow_dissector.rst
@@ -113,7 +113,7 @@ Flags
used by ``eth_get_headlen`` to estimate length of all headers for GRO.
* ``BPF_FLOW_DISSECTOR_F_STOP_AT_FLOW_LABEL`` - tells BPF flow dissector to
stop parsing as soon as it reaches IPv6 flow label; used by
- ``___skb_get_hash`` and ``__skb_get_hash_symmetric`` to get flow hash.
+ ``___skb_get_hash`` to get flow hash.
* ``BPF_FLOW_DISSECTOR_F_STOP_AT_ENCAP`` - tells BPF flow dissector to stop
parsing as soon as it reaches encapsulated headers; used by routing
infrastructure.
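These flags are also exposed to the dissector program itself through the ``flags`` field of ``struct bpf_flow_keys``, so a program can cut its parsing short when the caller only needs the outer headers. A rough sketch, assuming the uapi field and flag names below, might be:

// SPDX-License-Identifier: GPL-2.0
/* Sketch only: shows a flow dissector program honouring
 * BPF_FLOW_DISSECTOR_F_STOP_AT_ENCAP; real header parsing and error
 * handling are omitted. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("flow_dissector")
int dissect(struct __sk_buff *skb)
{
	struct bpf_flow_keys *keys = skb->flow_keys;

	if (keys->flags & BPF_FLOW_DISSECTOR_F_STOP_AT_ENCAP)
		return BPF_OK;	/* caller only wants the outer headers */

	/* ... parse encapsulated headers and fill in *keys ... */
	return BPF_OK;
}

char _license[] SEC("license") = "GPL";
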
diff --git a/Documentation/bpf/standardization/instruction-set.rst b/Documentation/bpf/standardization/instruction-set.rst
index c5d53a6e8c79..245b6defc298 100644
--- a/Documentation/bpf/standardization/instruction-set.rst
+++ b/Documentation/bpf/standardization/instruction-set.rst
@@ -283,6 +283,14 @@ For signed operations (``BPF_SDIV`` and ``BPF_SMOD``), for ``BPF_ALU``,
is first :term:`sign extended<Sign Extend>` from 32 to 64 bits, and then
interpreted as a 64-bit signed value.
+Note that there are varying definitions of the signed modulo operation
+when the dividend or divisor are negative, where implementations often
+vary by language such that Python, Ruby, etc. differ from C, Go, Java,
+etc. This specification requires that signed modulo use truncated division
+(where -13 % 3 == -1) as implemented in C, Go, etc.:
+
+ a % n = a - n * trunc(a / n)
+
The ``BPF_MOVSX`` instruction does a move operation with sign extension.
``BPF_ALU | BPF_MOVSX`` :term:`sign extends<Sign Extend>` 8-bit and 16-bit operands into 32
bit operands, and zeroes the remaining upper 32 bits.
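Worked example for the truncated-division rule quoted above: trunc(-13 / 3) = -4, so -13 % 3 = -13 - 3 * (-4) = -1 (a floored definition would give +2 instead). C's integer division already truncates toward zero, so a plain user-space check of the identity could look like:

/* Sketch only: demonstrates the truncated-division modulo required by the
 * spec text above; ordinary user-space C, not part of the patch. */
#include <assert.h>
#include <stdio.h>

static long trunc_mod(long a, long n)
{
	return a - n * (a / n);	/* C division truncates toward zero */
}

int main(void)
{
	assert(trunc_mod(-13, 3) == -1);
	assert(-13 % 3 == -1);		/* C's % matches BPF_SMOD semantics */
	printf("-13 %% 3 = %ld\n", trunc_mod(-13, 3));
	return 0;
}
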
diff --git a/Documentation/devicetree/bindings/arm/mediatek/mediatek,mt7622-wed.yaml b/Documentation/devicetree/bindings/arm/mediatek/mediatek,mt7622-wed.yaml
index 28ded09d72e3..e7720caf31b3 100644
--- a/Documentation/devicetree/bindings/arm/mediatek/mediatek,mt7622-wed.yaml
+++ b/Documentation/devicetree/bindings/arm/mediatek/mediatek,mt7622-wed.yaml
@@ -22,6 +22,7 @@ properties:
- mediatek,mt7622-wed
- mediatek,mt7981-wed
- mediatek,mt7986-wed
+ - mediatek,mt7988-wed
- const: syscon
reg:
diff --git a/Documentation/devicetree/bindings/i3c/i3c.yaml b/Documentation/devicetree/bindings/i3c/i3c.yaml
index ab69f4115de4..d9483fbd2454 100644
--- a/Documentation/devicetree/bindings/i3c/i3c.yaml
+++ b/Documentation/devicetree/bindings/i3c/i3c.yaml
@@ -55,6 +55,12 @@ properties:
May not be supported by all controllers.
+ mctp-controller:
+ type: boolean
+ description: |
+ Indicates that the system is accessible via this bus as an endpoint for
+ MCTP over I3C transport.
+
required:
- "#address-cells"
- "#size-cells"
diff --git a/Documentation/devicetree/bindings/mfd/syscon.yaml b/Documentation/devicetree/bindings/mfd/syscon.yaml
index 8103154bbb52..c77d7b155a4c 100644
--- a/Documentation/devicetree/bindings/mfd/syscon.yaml
+++ b/Documentation/devicetree/bindings/mfd/syscon.yaml
@@ -49,6 +49,8 @@ properties:
- hisilicon,peri-subctrl
- hpe,gxp-sysreg
- intel,lgm-syscon
+ - loongson,ls1b-syscon
+ - loongson,ls1c-syscon
- marvell,armada-3700-usb2-host-misc
- mediatek,mt8135-pctl-a-syscfg
- mediatek,mt8135-pctl-b-syscfg
diff --git a/Documentation/devicetree/bindings/net/allwinner,sun8i-a83t-emac.yaml b/Documentation/devicetree/bindings/net/allwinner,sun8i-a83t-emac.yaml
index 4bfac9186886..7fe0352dff0f 100644
--- a/Documentation/devicetree/bindings/net/allwinner,sun8i-a83t-emac.yaml
+++ b/Documentation/devicetree/bindings/net/allwinner,sun8i-a83t-emac.yaml
@@ -158,6 +158,8 @@ allOf:
patternProperties:
"^ethernet-phy@[0-9a-f]$":
type: object
+ $ref: ethernet-phy.yaml#
+ unevaluatedProperties: false
description:
Integrated PHY node
diff --git a/Documentation/devicetree/bindings/net/brcm,asp-v2.0.yaml b/Documentation/devicetree/bindings/net/brcm,asp-v2.0.yaml
index aa3162c74833..75d8138298fb 100644
--- a/Documentation/devicetree/bindings/net/brcm,asp-v2.0.yaml
+++ b/Documentation/devicetree/bindings/net/brcm,asp-v2.0.yaml
@@ -53,7 +53,7 @@ properties:
const: 0
patternProperties:
- "^port@[0-9]+$":
+ "^port@[0-9a-f]+$":
type: object
$ref: ethernet-controller.yaml#
diff --git a/Documentation/devicetree/bindings/net/dsa/brcm,sf2.yaml b/Documentation/devicetree/bindings/net/dsa/brcm,sf2.yaml
index b06c416893ff..f21bdd0f408d 100644
--- a/Documentation/devicetree/bindings/net/dsa/brcm,sf2.yaml
+++ b/Documentation/devicetree/bindings/net/dsa/brcm,sf2.yaml
@@ -78,6 +78,7 @@ properties:
ports:
type: object
+ additionalProperties: true
patternProperties:
'^port@[0-9a-f]$':
diff --git a/Documentation/devicetree/bindings/net/dsa/dsa.yaml b/Documentation/devicetree/bindings/net/dsa/dsa.yaml
index ec74a660beda..6107189d276a 100644
--- a/Documentation/devicetree/bindings/net/dsa/dsa.yaml
+++ b/Documentation/devicetree/bindings/net/dsa/dsa.yaml
@@ -40,17 +40,8 @@ $defs:
patternProperties:
"^(ethernet-)?ports$":
- type: object
- additionalProperties: false
-
- properties:
- '#address-cells':
- const: 1
- '#size-cells':
- const: 0
-
patternProperties:
- "^(ethernet-)?port@[0-9]+$":
+ "^(ethernet-)?port@[0-9a-f]+$":
description: Ethernet switch ports
$ref: dsa-port.yaml#
unevaluatedProperties: false
diff --git a/Documentation/devicetree/bindings/net/dsa/mediatek,mt7530.yaml b/Documentation/devicetree/bindings/net/dsa/mediatek,mt7530.yaml
index e532c6b795f4..1c2444121e60 100644
--- a/Documentation/devicetree/bindings/net/dsa/mediatek,mt7530.yaml
+++ b/Documentation/devicetree/bindings/net/dsa/mediatek,mt7530.yaml
@@ -60,7 +60,7 @@ description: |
Check out example 6.
- - Port 5 can be wired to an external phy. Port 5 becomes a DSA slave.
+ - Port 5 can be wired to an external phy. Port 5 becomes a DSA user port.
For the multi-chip module MT7530, the external phy must be wired TX to TX
to gmac1 of the SoC for this to work. Ubiquiti EdgeRouter X SFP is wired
@@ -154,10 +154,12 @@ properties:
patternProperties:
"^(ethernet-)?ports$":
type: object
+ additionalProperties: true
patternProperties:
- "^(ethernet-)?port@[0-9]+$":
+ "^(ethernet-)?port@[0-6]$":
type: object
+ additionalProperties: true
properties:
reg:
@@ -184,7 +186,7 @@ $defs:
patternProperties:
"^(ethernet-)?ports$":
patternProperties:
- "^(ethernet-)?port@[0-9]+$":
+ "^(ethernet-)?port@[0-6]$":
if:
required: [ ethernet ]
then:
@@ -210,7 +212,7 @@ $defs:
patternProperties:
"^(ethernet-)?ports$":
patternProperties:
- "^(ethernet-)?port@[0-9]+$":
+ "^(ethernet-)?port@[0-6]$":
if:
required: [ ethernet ]
then:
diff --git a/Documentation/devicetree/bindings/net/dsa/microchip,ksz.yaml b/Documentation/devicetree/bindings/net/dsa/microchip,ksz.yaml
index 03b5567be389..b3029c64d0d5 100644
--- a/Documentation/devicetree/bindings/net/dsa/microchip,ksz.yaml
+++ b/Documentation/devicetree/bindings/net/dsa/microchip,ksz.yaml
@@ -38,6 +38,8 @@ properties:
Should be a gpio specifier for a reset line.
maxItems: 1
+ wakeup-source: true
+
microchip,synclko-125:
$ref: /schemas/types.yaml#/definitions/flag
description:
@@ -49,6 +51,26 @@ properties:
Set if the output SYNCLKO clock should be disabled. Do not mix with
microchip,synclko-125.
+ microchip,io-drive-strength-microamp:
+ description:
+ IO Pad Drive Strength
+ enum: [8000, 16000]
+ default: 16000
+
+ microchip,hi-drive-strength-microamp:
+ description:
+ High Speed Drive Strength. Controls drive strength of GMII / RGMII /
+ MII / RMII (except TX_CLK/REFCLKI, COL and CRS) and CLKO_25_125 lines.
+ enum: [2000, 4000, 8000, 12000, 16000, 20000, 24000, 28000]
+ default: 24000
+
+ microchip,lo-drive-strength-microamp:
+ description:
+ Low Speed Drive Strength. Controls drive strength of TX_CLK / REFCLKI,
+ COL, CRS, LEDs, PME_N, NTRP_N, SDO and SDI/SDA/MDIO lines.
+ enum: [2000, 4000, 8000, 12000, 16000, 20000, 24000, 28000]
+ default: 8000
+
interrupts:
maxItems: 1
diff --git a/Documentation/devicetree/bindings/net/dsa/microchip,lan937x.yaml b/Documentation/devicetree/bindings/net/dsa/microchip,lan937x.yaml
index 8d7e878b84dc..9973d64f15a7 100644
--- a/Documentation/devicetree/bindings/net/dsa/microchip,lan937x.yaml
+++ b/Documentation/devicetree/bindings/net/dsa/microchip,lan937x.yaml
@@ -37,8 +37,9 @@ properties:
patternProperties:
"^(ethernet-)?ports$":
+ additionalProperties: true
patternProperties:
- "^(ethernet-)?port@[0-9]+$":
+ "^(ethernet-)?port@[0-7]$":
allOf:
- if:
properties:
diff --git a/Documentation/devicetree/bindings/net/dsa/nxp,sja1105.yaml b/Documentation/devicetree/bindings/net/dsa/nxp,sja1105.yaml
index 4d5f5cc6d031..9432565f4f5d 100644
--- a/Documentation/devicetree/bindings/net/dsa/nxp,sja1105.yaml
+++ b/Documentation/devicetree/bindings/net/dsa/nxp,sja1105.yaml
@@ -43,6 +43,7 @@ properties:
# PHY 1.
mdios:
type: object
+ additionalProperties: false
properties:
'#address-cells':
@@ -74,8 +75,9 @@ properties:
patternProperties:
"^(ethernet-)?ports$":
+ additionalProperties: true
patternProperties:
- "^(ethernet-)?port@[0-9]+$":
+ "^(ethernet-)?port@[0-9]$":
allOf:
- if:
properties:
diff --git a/Documentation/devicetree/bindings/net/dsa/qca8k.yaml b/Documentation/devicetree/bindings/net/dsa/qca8k.yaml
index df64eebebe18..167398ab253a 100644
--- a/Documentation/devicetree/bindings/net/dsa/qca8k.yaml
+++ b/Documentation/devicetree/bindings/net/dsa/qca8k.yaml
@@ -73,6 +73,7 @@ $ref: dsa.yaml#
patternProperties:
"^(ethernet-)?ports$":
type: object
+ additionalProperties: true
patternProperties:
"^(ethernet-)?port@[0-6]$":
type: object
diff --git a/Documentation/devicetree/bindings/net/dsa/realtek.yaml b/Documentation/devicetree/bindings/net/dsa/realtek.yaml
index cfd69c2604ea..cce692f57b08 100644
--- a/Documentation/devicetree/bindings/net/dsa/realtek.yaml
+++ b/Documentation/devicetree/bindings/net/dsa/realtek.yaml
@@ -68,6 +68,8 @@ properties:
interrupt-controller:
type: object
+ additionalProperties: false
+
description: |
This defines an interrupt controller with an IRQ line (typically
a GPIO) that will demultiplex and handle the interrupt from the single
diff --git a/Documentation/devicetree/bindings/net/dsa/renesas,rzn1-a5psw.yaml b/Documentation/devicetree/bindings/net/dsa/renesas,rzn1-a5psw.yaml
index 833d2f68daa1..ea285ef3e64f 100644
--- a/Documentation/devicetree/bindings/net/dsa/renesas,rzn1-a5psw.yaml
+++ b/Documentation/devicetree/bindings/net/dsa/renesas,rzn1-a5psw.yaml
@@ -61,17 +61,11 @@ properties:
ethernet-ports:
type: object
- properties:
- '#address-cells':
- const: 1
- '#size-cells':
- const: 0
-
+ additionalProperties: true
patternProperties:
"^(ethernet-)?port@[0-4]$":
type: object
- description: Ethernet switch ports
-
+ additionalProperties: true
properties:
pcs-handle:
maxItems: 1
diff --git a/Documentation/devicetree/bindings/net/engleder,tsnep.yaml b/Documentation/devicetree/bindings/net/engleder,tsnep.yaml
index 82a5d7927ca4..34fd24ff6a71 100644
--- a/Documentation/devicetree/bindings/net/engleder,tsnep.yaml
+++ b/Documentation/devicetree/bindings/net/engleder,tsnep.yaml
@@ -63,6 +63,7 @@ properties:
mdio:
type: object
$ref: mdio.yaml#
+ unevaluatedProperties: false
description: optional node for embedded MDIO controller
required:
diff --git a/Documentation/devicetree/bindings/net/ethernet-switch.yaml b/Documentation/devicetree/bindings/net/ethernet-switch.yaml
index f1b9075dc7fb..72ac67ca3415 100644
--- a/Documentation/devicetree/bindings/net/ethernet-switch.yaml
+++ b/Documentation/devicetree/bindings/net/ethernet-switch.yaml
@@ -36,7 +36,7 @@ patternProperties:
const: 0
patternProperties:
- "^(ethernet-)?port@[0-9]+$":
+ "^(ethernet-)?port@[0-9a-f]+$":
type: object
description: Ethernet switch ports
@@ -53,14 +53,16 @@ oneOf:
additionalProperties: true
$defs:
- base:
+ ethernet-ports:
description: An ethernet switch without any extra port properties
$ref: '#'
patternProperties:
- "^(ethernet-)?port@[0-9]+$":
- description: Ethernet switch ports
- $ref: ethernet-switch-port.yaml#
- unevaluatedProperties: false
+ "^(ethernet-)?ports$":
+ patternProperties:
+ "^(ethernet-)?port@[0-9a-f]+$":
+ description: Ethernet switch ports
+ $ref: ethernet-switch-port.yaml#
+ unevaluatedProperties: false
...
diff --git a/Documentation/devicetree/bindings/net/fsl,fec.yaml b/Documentation/devicetree/bindings/net/fsl,fec.yaml
index b494e009326e..8948a11c994e 100644
--- a/Documentation/devicetree/bindings/net/fsl,fec.yaml
+++ b/Documentation/devicetree/bindings/net/fsl,fec.yaml
@@ -59,6 +59,7 @@ properties:
- const: fsl,imx6sx-fec
- items:
- enum:
+ - fsl,imx8dxl-fec
- fsl,imx8qxp-fec
- const: fsl,imx8qm-fec
- const: fsl,imx6sx-fec
diff --git a/Documentation/devicetree/bindings/net/loongson,ls1b-gmac.yaml b/Documentation/devicetree/bindings/net/loongson,ls1b-gmac.yaml
new file mode 100644
index 000000000000..c4f3224bad38
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/loongson,ls1b-gmac.yaml
@@ -0,0 +1,114 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/loongson,ls1b-gmac.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Loongson-1B Gigabit Ethernet MAC Controller
+
+maintainers:
+ - Keguang Zhang <keguang.zhang@gmail.com>
+
+description: |
+ Loongson-1B Gigabit Ethernet MAC Controller is based on
+ Synopsys DesignWare MAC (version 3.50a).
+
+ Main features
+ - Dual 10/100/1000Mbps GMAC controllers
+ - Full-duplex operation (IEEE 802.3x flow control automatic transmission)
+ - Half-duplex operation (CSMA/CD Protocol and back-pressure support)
+ - RX Checksum Offload
+ - TX Checksum insertion
+ - MII interface
+ - RGMII interface
+
+select:
+ properties:
+ compatible:
+ contains:
+ enum:
+ - loongson,ls1b-gmac
+ required:
+ - compatible
+
+properties:
+ compatible:
+ items:
+ - enum:
+ - loongson,ls1b-gmac
+ - const: snps,dwmac-3.50a
+
+ reg:
+ maxItems: 1
+
+ clocks:
+ maxItems: 1
+
+ clock-names:
+ items:
+ - const: stmmaceth
+
+ interrupts:
+ maxItems: 1
+
+ interrupt-names:
+ items:
+ - const: macirq
+
+ loongson,ls1-syscon:
+ $ref: /schemas/types.yaml#/definitions/phandle
+ description:
+ Phandle to the syscon containing some extra configurations
+ including PHY interface mode.
+
+ phy-mode:
+ enum:
+ - mii
+ - rgmii-id
+
+required:
+ - compatible
+ - reg
+ - clocks
+ - clock-names
+ - interrupts
+ - interrupt-names
+ - loongson,ls1-syscon
+
+allOf:
+ - $ref: snps,dwmac.yaml#
+
+unevaluatedProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/clock/loongson,ls1x-clk.h>
+ #include <dt-bindings/interrupt-controller/irq.h>
+
+ gmac0: ethernet@1fe10000 {
+ compatible = "loongson,ls1b-gmac", "snps,dwmac-3.50a";
+ reg = <0x1fe10000 0x10000>;
+
+ clocks = <&clkc LS1X_CLKID_AHB>;
+ clock-names = "stmmaceth";
+
+ interrupt-parent = <&intc1>;
+ interrupts = <2 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "macirq";
+
+ loongson,ls1-syscon = <&syscon>;
+
+ phy-handle = <&phy0>;
+ phy-mode = "mii";
+ snps,pbl = <1>;
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "snps,dwmac-mdio";
+
+ phy0: ethernet-phy@0 {
+ reg = <0x0>;
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/net/loongson,ls1c-emac.yaml b/Documentation/devicetree/bindings/net/loongson,ls1c-emac.yaml
new file mode 100644
index 000000000000..99001b940b83
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/loongson,ls1c-emac.yaml
@@ -0,0 +1,113 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/net/loongson,ls1c-emac.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Loongson-1C Ethernet MAC Controller
+
+maintainers:
+ - Keguang Zhang <keguang.zhang@gmail.com>
+
+description: |
+ Loongson-1C Ethernet MAC Controller is based on
+ Synopsys DesignWare MAC (version 3.50a).
+
+ Main features
+ - 10/100Mbps
+ - Full-duplex operation (IEEE 802.3x flow control automatic transmission)
+ - Half-duplex operation (CSMA/CD Protocol and back-pressure support)
+ - IEEE 802.1Q VLAN tag detection for reception frames
+ - MII interface
+ - RMII interface
+
+select:
+ properties:
+ compatible:
+ contains:
+ enum:
+ - loongson,ls1c-emac
+ required:
+ - compatible
+
+properties:
+ compatible:
+ items:
+ - enum:
+ - loongson,ls1c-emac
+ - const: snps,dwmac-3.50a
+
+ reg:
+ maxItems: 1
+
+ clocks:
+ maxItems: 1
+
+ clock-names:
+ items:
+ - const: stmmaceth
+
+ interrupts:
+ maxItems: 1
+
+ interrupt-names:
+ items:
+ - const: macirq
+
+ loongson,ls1-syscon:
+ $ref: /schemas/types.yaml#/definitions/phandle
+ description:
+ Phandle to the syscon containing some extra configurations
+ including PHY interface mode.
+
+ phy-mode:
+ enum:
+ - mii
+ - rmii
+
+required:
+ - compatible
+ - reg
+ - clocks
+ - clock-names
+ - interrupts
+ - interrupt-names
+ - loongson,ls1-syscon
+
+allOf:
+ - $ref: snps,dwmac.yaml#
+
+unevaluatedProperties: false
+
+examples:
+ - |
+ #include <dt-bindings/clock/loongson,ls1x-clk.h>
+ #include <dt-bindings/interrupt-controller/irq.h>
+
+ emac: ethernet@1fe10000 {
+ compatible = "loongson,ls1c-emac", "snps,dwmac-3.50a";
+ reg = <0x1fe10000 0x10000>;
+
+ clocks = <&clkc LS1X_CLKID_AHB>;
+ clock-names = "stmmaceth";
+
+ interrupt-parent = <&intc1>;
+ interrupts = <2 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "macirq";
+
+ loongson,ls1-syscon = <&syscon>;
+
+ phy-handle = <&phy0>;
+ phy-mode = "mii";
+ snps,pbl = <1>;
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "snps,dwmac-mdio";
+
+ phy0: ethernet-phy@13 {
+ reg = <0x13>;
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/net/mscc,vsc7514-switch.yaml b/Documentation/devicetree/bindings/net/mscc,vsc7514-switch.yaml
index 8ee2c7d7ff42..86a9c3fc76c8 100644
--- a/Documentation/devicetree/bindings/net/mscc,vsc7514-switch.yaml
+++ b/Documentation/devicetree/bindings/net/mscc,vsc7514-switch.yaml
@@ -24,7 +24,7 @@ allOf:
compatible:
const: mscc,vsc7514-switch
then:
- $ref: ethernet-switch.yaml#
+ $ref: ethernet-switch.yaml#/$defs/ethernet-ports
required:
- interrupts
- interrupt-names
@@ -33,28 +33,18 @@ allOf:
minItems: 21
reg-names:
minItems: 21
- ethernet-ports:
- patternProperties:
- "^port@[0-9a-f]+$":
- $ref: ethernet-switch-port.yaml#
- unevaluatedProperties: false
- if:
properties:
compatible:
const: mscc,vsc7512-switch
then:
- $ref: /schemas/net/dsa/dsa.yaml#
+ $ref: /schemas/net/dsa/dsa.yaml#/$defs/ethernet-ports
properties:
reg:
maxItems: 20
reg-names:
maxItems: 20
- ethernet-ports:
- patternProperties:
- "^port@[0-9a-f]+$":
- $ref: /schemas/net/dsa/dsa-port.yaml#
- unevaluatedProperties: false
properties:
compatible:
@@ -185,7 +175,7 @@ examples:
};
# VSC7512 (DSA)
- |
- ethernet-switch@1{
+ ethernet-switch@1 {
compatible = "mscc,vsc7512-switch";
reg = <0x71010000 0x10000>,
<0x71030000 0x10000>,
@@ -212,22 +202,22 @@ examples:
"port7", "port8", "port9", "port10", "qsys",
"ana", "s0", "s1", "s2";
- ethernet-ports {
- #address-cells = <1>;
- #size-cells = <0>;
-
- port@0 {
- reg = <0>;
- ethernet = <&mac_sw>;
- phy-handle = <&phy0>;
- phy-mode = "internal";
- };
- port@1 {
- reg = <1>;
- phy-handle = <&phy1>;
- phy-mode = "internal";
- };
+ ethernet-ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ port@0 {
+ reg = <0>;
+ ethernet = <&mac_sw>;
+ phy-handle = <&phy0>;
+ phy-mode = "internal";
+ };
+ port@1 {
+ reg = <1>;
+ phy-handle = <&phy1>;
+ phy-mode = "internal";
};
};
+ };
...
diff --git a/Documentation/devicetree/bindings/net/nxp,tja11xx.yaml b/Documentation/devicetree/bindings/net/nxp,tja11xx.yaml
index ab8867e6939b..85bfa45f5122 100644
--- a/Documentation/devicetree/bindings/net/nxp,tja11xx.yaml
+++ b/Documentation/devicetree/bindings/net/nxp,tja11xx.yaml
@@ -20,6 +20,7 @@ allOf:
patternProperties:
"^ethernet-phy@[0-9a-f]+$":
type: object
+ additionalProperties: false
description: |
Some packages have multiple PHYs. Secondary PHY should be defines as
subnode of the first (parent) PHY.
diff --git a/Documentation/devicetree/bindings/net/renesas,ether.yaml b/Documentation/devicetree/bindings/net/renesas,ether.yaml
index 06b38c9bc6ec..29355ab98569 100644
--- a/Documentation/devicetree/bindings/net/renesas,ether.yaml
+++ b/Documentation/devicetree/bindings/net/renesas,ether.yaml
@@ -81,9 +81,8 @@ properties:
active-high
patternProperties:
- "^ethernet-phy@[0-9a-f]$":
+ "@[0-9a-f]$":
type: object
- $ref: ethernet-phy.yaml#
required:
- compatible
diff --git a/Documentation/devicetree/bindings/net/renesas,etheravb.yaml b/Documentation/devicetree/bindings/net/renesas,etheravb.yaml
index 3f41294f5997..5d074f27d462 100644
--- a/Documentation/devicetree/bindings/net/renesas,etheravb.yaml
+++ b/Documentation/devicetree/bindings/net/renesas,etheravb.yaml
@@ -109,9 +109,8 @@ properties:
enum: [0, 2000]
patternProperties:
- "^ethernet-phy@[0-9a-f]$":
+ "@[0-9a-f]$":
type: object
- $ref: ethernet-phy.yaml#
required:
- compatible
diff --git a/Documentation/devicetree/bindings/net/snps,dwmac.yaml b/Documentation/devicetree/bindings/net/snps,dwmac.yaml
index ddf9522a5dc2..5c2769dc689a 100644
--- a/Documentation/devicetree/bindings/net/snps,dwmac.yaml
+++ b/Documentation/devicetree/bindings/net/snps,dwmac.yaml
@@ -394,6 +394,11 @@ properties:
When a PFC frame is received with priorities matching the bitmask,
the queue is blocked from transmitting for the pause time specified
in the PFC frame.
+
+ snps,coe-unsupported:
+ type: boolean
+ description: TX checksum offload is unsupported by the TX queue.
+
allOf:
- if:
required:
diff --git a/Documentation/devicetree/bindings/net/ti,cpsw-switch.yaml b/Documentation/devicetree/bindings/net/ti,cpsw-switch.yaml
index b04ac4966608..f07ae3173b03 100644
--- a/Documentation/devicetree/bindings/net/ti,cpsw-switch.yaml
+++ b/Documentation/devicetree/bindings/net/ti,cpsw-switch.yaml
@@ -86,7 +86,7 @@ properties:
const: 0
patternProperties:
- "^port@[0-9]+$":
+ "^port@[12]$":
type: object
description: CPSW external ports
diff --git a/Documentation/devicetree/bindings/net/ti,icssg-prueth.yaml b/Documentation/devicetree/bindings/net/ti,icssg-prueth.yaml
index 311c570165f9..229c8f32019f 100644
--- a/Documentation/devicetree/bindings/net/ti,icssg-prueth.yaml
+++ b/Documentation/devicetree/bindings/net/ti,icssg-prueth.yaml
@@ -19,6 +19,7 @@ allOf:
properties:
compatible:
enum:
+ - ti,am642-icssg-prueth # for AM64x SoC family
- ti,am654-icssg-prueth # for AM65x SoC family
sram:
@@ -106,6 +107,13 @@ properties:
phandle to system controller node and register offset
to ICSSG control register for RGMII transmit delay
+ ti,half-duplex-capable:
+ type: boolean
+ description:
+ Indicates that the PHY output pin COL is routed to ICSSG GPIO pin
+ (PRGx_PRU0/1_GPIO10) as input so that the ICSSG MII port is
+ capable of half duplex operations.
+
required:
- reg
anyOf:
diff --git a/Documentation/devicetree/bindings/soc/mediatek/mediatek,mt7986-wo-ccif.yaml b/Documentation/devicetree/bindings/soc/mediatek/mediatek,mt7986-wo-ccif.yaml
index f0fa92b04b32..3b212f26abc5 100644
--- a/Documentation/devicetree/bindings/soc/mediatek/mediatek,mt7986-wo-ccif.yaml
+++ b/Documentation/devicetree/bindings/soc/mediatek/mediatek,mt7986-wo-ccif.yaml
@@ -20,6 +20,7 @@ properties:
items:
- enum:
- mediatek,mt7986-wo-ccif
+ - mediatek,mt7988-wo-ccif
- const: syscon
reg:
diff --git a/Documentation/driver-api/80211/mac80211.rst b/Documentation/driver-api/80211/mac80211.rst
index 67d2e58b45e4..e38a220401f5 100644
--- a/Documentation/driver-api/80211/mac80211.rst
+++ b/Documentation/driver-api/80211/mac80211.rst
@@ -120,7 +120,7 @@ functions/definitions
ieee80211_rx
ieee80211_rx_ni
ieee80211_rx_irqsafe
- ieee80211_tx_status
+ ieee80211_tx_status_skb
ieee80211_tx_status_ni
ieee80211_tx_status_irqsafe
ieee80211_rts_get
diff --git a/Documentation/driver-api/dpll.rst b/Documentation/driver-api/dpll.rst
new file mode 100644
index 000000000000..e3d593841aa7
--- /dev/null
+++ b/Documentation/driver-api/dpll.rst
@@ -0,0 +1,551 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============================
+The Linux kernel dpll subsystem
+===============================
+
+DPLL
+====
+
+PLL - Phase Locked Loop is an electronic circuit which syntonizes the clock
+signal of a device with an external clock signal, effectively enabling the
+device to run on the same clock beat as provided on the PLL input.
+
+DPLL - Digital Phase Locked Loop is an integrated circuit which, in
+addition to plain PLL behavior, incorporates a digital phase detector
+and may have a digital divider in the loop. As a result, the frequency on
+the DPLL's input and output may be configurable.
+
+Subsystem
+=========
+
+The main purpose of the dpll subsystem is to provide a general interface
+to configure devices that use any kind of Digital PLL and could use
+different sources of input signal to synchronize to, as well as
+different types of outputs.
+The main interface is a NETLINK_GENERIC based protocol with an event
+monitoring multicast group defined.
+
+Device object
+=============
+
+A single dpll device object represents a single Digital PLL circuit and a
+set of connected pins.
+It reports the supported modes of operation and current status to the
+user in response to the `do` request of the netlink command
+``DPLL_CMD_DEVICE_GET``, and the list of dplls registered in the subsystem
+in response to a `dump` netlink request of the same command.
+Changing the configuration of a dpll device is done with the `do` request
+of the netlink ``DPLL_CMD_DEVICE_SET`` command.
+The device handle is ``DPLL_A_ID``; it shall be provided to get or set the
+configuration of a particular device in the system. It can be obtained
+with a ``DPLL_CMD_DEVICE_GET`` `dump` request or
+a ``DPLL_CMD_DEVICE_ID_GET`` `do` request, where the user must provide
+attributes that result in a single device match.
+
+Pin object
+==========
+
+A pin is an amorphous object which represents either an input or an
+output; it could be an internal component of the device, as well as an
+externally connected one.
+The number of pins per dpll varies, but usually multiple pins shall be
+provided for a single dpll device.
+A pin's properties, capabilities and status are provided to the user in
+response to a `do` request of the netlink ``DPLL_CMD_PIN_GET`` command.
+It is also possible to list all the pins that were registered in the
+system with a `dump` request of the ``DPLL_CMD_PIN_GET`` command.
+The configuration of a pin can be changed by a `do` request of the netlink
+``DPLL_CMD_PIN_SET`` command.
+The pin handle is ``DPLL_A_PIN_ID``; it shall be provided to get or set the
+configuration of a particular pin in the system. It can be obtained with a
+``DPLL_CMD_PIN_GET`` `dump` request or a ``DPLL_CMD_PIN_ID_GET`` `do`
+request, where the user provides attributes that result in a single pin
+match.
+
+Pin selection
+=============
+
+In general, the selected pin (the one whose signal is driving the dpll
+device) can be obtained from the ``DPLL_A_PIN_STATE`` attribute, and only
+one pin shall be in the ``DPLL_PIN_STATE_CONNECTED`` state for any dpll
+device.
+
+Pin selection can be done either manually or automatically, depending
+on hardware capabilities and the active dpll device work mode
+(``DPLL_A_MODE`` attribute). The consequence is that each mode differs
+in terms of the available pin states, as well as in the states the user
+can request for a dpll device.
+
+In manual mode (``DPLL_MODE_MANUAL``) the user can request or receive
+one of the following pin states:
+
+- ``DPLL_PIN_STATE_CONNECTED`` - the pin is used to drive the dpll device
+- ``DPLL_PIN_STATE_DISCONNECTED`` - the pin is not used to drive the
+  dpll device
+
+In automatic mode (``DPLL_MODE_AUTOMATIC``) the user can request or
+receive one of the following pin states:
+
+- ``DPLL_PIN_STATE_SELECTABLE`` - the pin shall be considered as a valid
+  input for the automatic selection algorithm
+- ``DPLL_PIN_STATE_DISCONNECTED`` - the pin shall not be considered as
+  a valid input for the automatic selection algorithm
+
+In automatic mode (``DPLL_MODE_AUTOMATIC``) the user can only receive the
+pin state ``DPLL_PIN_STATE_CONNECTED`` once the automatic selection
+algorithm locks the dpll device to one of the inputs.
+
+Shared pins
+===========
+
+A single pin object can be attached to multiple dpll devices.
+Then there are two groups of configuration knobs:
+
+1) Set on a pin - the configuration affects all dpll devices the pin is
+   registered to (e.g., ``DPLL_A_PIN_FREQUENCY``),
+2) Set on a pin-dpll tuple - the configuration affects only the selected
+   dpll device (e.g., ``DPLL_A_PIN_PRIO``, ``DPLL_A_PIN_STATE``,
+   ``DPLL_A_PIN_DIRECTION``).
+
+MUX-type pins
+=============
+
+A pin can be a MUX-type pin; it aggregates child pins and serves as a pin
+multiplexer. One or more pins are registered with the MUX-type pin instead
+of being directly registered to a dpll device.
+Pins registered with a MUX-type pin provide the user with an additional
+nested attribute ``DPLL_A_PIN_PARENT_PIN`` for each parent they were
+registered with.
+If a pin was registered with multiple parent pins, it behaves like a
+multiple-output multiplexer. In this case the output of a
+``DPLL_CMD_PIN_GET`` would contain multiple pin-parent nested
+attributes with the current state related to each parent, like::
+
+ 'pin': [{{
+ 'clock-id': 282574471561216,
+ 'module-name': 'ice',
+ 'capabilities': 4,
+ 'id': 13,
+ 'parent-pin': [
+ {'parent-id': 2, 'state': 'connected'},
+ {'parent-id': 3, 'state': 'disconnected'}
+ ],
+ 'type': 'synce-eth-port'
+ }}]
+
+Only one child pin can provide its signal to the parent MUX-type pin at
+a time; the selection is done by requesting a change of the child pin
+state on the desired parent, with the use of the ``DPLL_A_PIN_PARENT_PIN``
+nested attribute. Example of the netlink `set state on parent pin`
+message format:
+
+ ========================== =============================================
+ ``DPLL_A_PIN_ID`` child pin id
+ ``DPLL_A_PIN_PARENT_PIN`` nested attribute for requesting configuration
+ related to parent pin
+ ``DPLL_A_PIN_PARENT_ID`` parent pin id
+ ``DPLL_A_PIN_STATE`` requested pin state on parent
+ ========================== =============================================
+
+Pin priority
+============
+
+Some devices might offer the capability of an automatic pin selection mode
+(enum value ``DPLL_MODE_AUTOMATIC`` of the ``DPLL_A_MODE`` attribute).
+Usually, automatic selection is performed on the hardware level, which
+means only pins directly connected to the dpll can be used for automatic
+input pin selection.
+In automatic selection mode, the user cannot manually select an input
+pin for the device; instead the user shall provide all directly
+connected pins with a priority ``DPLL_A_PIN_PRIO``, and the device would
+pick the highest priority valid signal and use it to control the DPLL
+device. Example of the netlink `set pin priority on parent dpll device`
+message format (a user-space sketch follows the table):
+
+ ============================ =============================================
+ ``DPLL_A_PIN_ID`` configured pin id
+ ``DPLL_A_PIN_PARENT_DEVICE`` nested attribute for requesting configuration
+ related to parent dpll device
+ ``DPLL_A_PIN_PARENT_ID`` parent dpll device id
+ ``DPLL_A_PIN_PRIO`` requested pin prio on parent dpll
+ ============================ =============================================
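+
+A hypothetical user-space sketch of packing these attributes with libmnl,
+assuming the generic netlink header for ``DPLL_CMD_PIN_SET`` has already
+been prepared in ``nlh`` (``fill_pin_prio_req`` is an illustrative name,
+not part of any library API):
+
+.. code-block:: c
+
+    #include <stdint.h>
+    #include <libmnl/libmnl.h>
+    #include <linux/dpll.h>
+
+    /* Set the priority of pin `pin_id` on the parent dpll device
+     * `dpll_id`, matching the nesting shown in the table above.
+     */
+    static void fill_pin_prio_req(struct nlmsghdr *nlh, uint32_t pin_id,
+                                  uint32_t dpll_id, uint32_t prio)
+    {
+            struct nlattr *nest;
+
+            mnl_attr_put_u32(nlh, DPLL_A_PIN_ID, pin_id);
+            nest = mnl_attr_nest_start(nlh, DPLL_A_PIN_PARENT_DEVICE);
+            mnl_attr_put_u32(nlh, DPLL_A_PIN_PARENT_ID, dpll_id);
+            mnl_attr_put_u32(nlh, DPLL_A_PIN_PRIO, prio);
+            mnl_attr_nest_end(nlh, nest);
+    }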
+
+A child pin of a MUX-type pin is not capable of automatic input pin
+selection; in order to configure the active input of a MUX-type pin, the
+user needs to request the desired pin state of the child pin on the parent
+pin, as described in the ``MUX-type pins`` chapter.
+
+Phase offset measurement and adjustment
+========================================
+
+A device may provide the ability to measure the phase difference between
+signals on a pin and its parent dpll device. If pin-dpll phase offset
+measurement is supported, it shall be provided with the
+``DPLL_A_PIN_PHASE_OFFSET`` attribute for each parent dpll device.
+
+A device may also provide the ability to adjust the signal phase on a pin.
+If pin phase adjustment is supported, the minimum and maximum values that
+the pin can handle shall be provided to the user in the
+``DPLL_CMD_PIN_GET`` response with the ``DPLL_A_PIN_PHASE_ADJUST_MIN`` and
+``DPLL_A_PIN_PHASE_ADJUST_MAX`` attributes. The configured phase adjust
+value is provided with the ``DPLL_A_PIN_PHASE_ADJUST`` attribute of a pin,
+and a value change can be requested with the same attribute with the
+``DPLL_CMD_PIN_SET`` command.
+
+ =============================== ======================================
+ ``DPLL_A_PIN_ID`` configured pin id
+ ``DPLL_A_PIN_PHASE_ADJUST_MIN`` attr minimum value of phase adjustment
+ ``DPLL_A_PIN_PHASE_ADJUST_MAX`` attr maximum value of phase adjustment
+ ``DPLL_A_PIN_PHASE_ADJUST`` attr configured value of phase
+ adjustment on parent dpll device
+ ``DPLL_A_PIN_PARENT_DEVICE`` nested attribute for requesting
+ configuration on given parent dpll
+ device
+ ``DPLL_A_PIN_PARENT_ID`` parent dpll device id
+ ``DPLL_A_PIN_PHASE_OFFSET`` attr measured phase difference
+ between a pin and parent dpll device
+ =============================== ======================================
+
+All phase related values are provided in picoseconds, which represent the
+time difference between signal phases. A negative value means that the
+phase of the signal on the pin is earlier in time than the dpll's signal.
+A positive value means that the phase of the signal on the pin is later in
+time than the signal of the dpll.
+
+Phase adjust (also min and max) values are integers, but measured phase
+offset values are fractional with 3 decimal places and shall be divided
+by ``DPLL_PIN_PHASE_OFFSET_DIVIDER`` to get the integer part and modulo
+divided to get the fractional part.
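+
+A minimal user-space sketch of that split, assuming the raw
+``DPLL_A_PIN_PHASE_OFFSET`` value has already been read into a signed
+64-bit variable (the divider value of 1000 mirrors the 3 decimal places
+described above and is defined in the uapi header):
+
+.. code-block:: c
+
+    #include <inttypes.h>
+    #include <stdint.h>
+    #include <stdio.h>
+
+    /* From <linux/dpll.h>; repeated here to keep the sketch
+     * self-contained.
+     */
+    #define DPLL_PIN_PHASE_OFFSET_DIVIDER   1000
+
+    static void print_phase_offset(int64_t raw)
+    {
+            const char *sign = raw < 0 ? "-" : "";
+            uint64_t mag = raw < 0 ? -(uint64_t)raw : (uint64_t)raw;
+
+            /* e.g. raw == -1234 prints "-1.234 ps" */
+            printf("%s%" PRIu64 ".%03" PRIu64 " ps\n", sign,
+                   mag / DPLL_PIN_PHASE_OFFSET_DIVIDER,
+                   mag % DPLL_PIN_PHASE_OFFSET_DIVIDER);
+    }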
+
+Configuration commands group
+============================
+
+Configuration commands are used to get information about registered
+dpll devices (and pins), as well as to set the configuration of a device
+or pins. As dpll devices must be abstracted and reflect real hardware,
+there is no way to add a new dpll device via netlink from user space;
+each device should be registered by its driver.
+
+All netlink commands require ``GENL_ADMIN_PERM``. This is to prevent
+any spamming/DoS from unauthorized userspace applications.
+
+List of netlink commands with possible attributes
+=================================================
+
+Constants identifying command types for dpll devices use a
+``DPLL_CMD_`` prefix and a suffix according to the command purpose.
+The dpll device related attributes use a ``DPLL_A_`` prefix and a
+suffix according to the attribute purpose.
+
+ ==================================== =================================
+ ``DPLL_CMD_DEVICE_ID_GET`` command to get device ID
+ ``DPLL_A_MODULE_NAME`` attr module name of registerer
+ ``DPLL_A_CLOCK_ID`` attr Unique Clock Identifier
+ (EUI-64), as defined by the
+ IEEE 1588 standard
+ ``DPLL_A_TYPE`` attr type of dpll device
+ ==================================== =================================
+
+ ==================================== =================================
+ ``DPLL_CMD_DEVICE_GET`` command to get device info or
+ dump list of available devices
+ ``DPLL_A_ID`` attr unique dpll device ID
+ ``DPLL_A_MODULE_NAME`` attr module name of registerer
+ ``DPLL_A_CLOCK_ID`` attr Unique Clock Identifier
+ (EUI-64), as defined by the
+ IEEE 1588 standard
+ ``DPLL_A_MODE`` attr selection mode
+ ``DPLL_A_MODE_SUPPORTED`` attr available selection modes
+ ``DPLL_A_LOCK_STATUS`` attr dpll device lock status
+ ``DPLL_A_TEMP`` attr device temperature info
+ ``DPLL_A_TYPE`` attr type of dpll device
+ ==================================== =================================
+
+ ==================================== =================================
+ ``DPLL_CMD_DEVICE_SET`` command to set dpll device config
+ ``DPLL_A_ID`` attr internal dpll device index
+ ``DPLL_A_MODE`` attr selection mode to configure
+ ==================================== =================================
+
+Constants identifying command types for pins use a
+``DPLL_CMD_PIN_`` prefix and a suffix according to the command purpose.
+The pin related attributes use a ``DPLL_A_PIN_`` prefix and a suffix
+according to the attribute purpose.
+
+ ==================================== =================================
+ ``DPLL_CMD_PIN_ID_GET`` command to get pin ID
+ ``DPLL_A_PIN_MODULE_NAME`` attr module name of registerer
+ ``DPLL_A_PIN_CLOCK_ID`` attr Unique Clock Identifier
+ (EUI-64), as defined by the
+ IEEE 1588 standard
+ ``DPLL_A_PIN_BOARD_LABEL`` attr pin board label provided
+ by registerer
+ ``DPLL_A_PIN_PANEL_LABEL`` attr pin panel label provided
+ by registerer
+ ``DPLL_A_PIN_PACKAGE_LABEL`` attr pin package label provided
+ by registerer
+ ``DPLL_A_PIN_TYPE`` attr type of a pin
+ ==================================== =================================
+
+ ==================================== ==================================
+ ``DPLL_CMD_PIN_GET`` command to get pin info or dump
+ list of available pins
+ ``DPLL_A_PIN_ID`` attr unique a pin ID
+ ``DPLL_A_PIN_MODULE_NAME`` attr module name of registerer
+ ``DPLL_A_PIN_CLOCK_ID`` attr Unique Clock Identifier
+ (EUI-64), as defined by the
+ IEEE 1588 standard
+ ``DPLL_A_PIN_BOARD_LABEL`` attr pin board label provided
+ by registerer
+ ``DPLL_A_PIN_PANEL_LABEL`` attr pin panel label provided
+ by registerer
+ ``DPLL_A_PIN_PACKAGE_LABEL`` attr pin package label provided
+ by registerer
+ ``DPLL_A_PIN_TYPE`` attr type of a pin
+ ``DPLL_A_PIN_FREQUENCY`` attr current frequency of a pin
+ ``DPLL_A_PIN_FREQUENCY_SUPPORTED`` nested attr provides supported
+ frequencies
+ ``DPLL_A_PIN_ANY_FREQUENCY_MIN`` attr minimum value of frequency
+ ``DPLL_A_PIN_ANY_FREQUENCY_MAX`` attr maximum value of frequency
+ ``DPLL_A_PIN_PHASE_ADJUST_MIN`` attr minimum value of phase
+ adjustment
+ ``DPLL_A_PIN_PHASE_ADJUST_MAX`` attr maximum value of phase
+ adjustment
+ ``DPLL_A_PIN_PHASE_ADJUST`` attr configured value of phase
+ adjustment on parent device
+ ``DPLL_A_PIN_PARENT_DEVICE`` nested attr for each parent device
+ the pin is connected with
+ ``DPLL_A_PIN_PARENT_ID`` attr parent dpll device id
+ ``DPLL_A_PIN_PRIO`` attr priority of pin on the
+ dpll device
+ ``DPLL_A_PIN_STATE`` attr state of pin on the parent
+ dpll device
+ ``DPLL_A_PIN_DIRECTION`` attr direction of a pin on the
+ parent dpll device
+ ``DPLL_A_PIN_PHASE_OFFSET`` attr measured phase difference
+ between a pin and parent dpll
+ ``DPLL_A_PIN_PARENT_PIN`` nested attr for each parent pin
+ the pin is connected with
+ ``DPLL_A_PIN_PARENT_ID`` attr parent pin id
+ ``DPLL_A_PIN_STATE`` attr state of pin on the parent
+ pin
+ ``DPLL_A_PIN_CAPABILITIES`` attr bitmask of pin capabilities
+ ==================================== ==================================
+
+ ==================================== =================================
+ ``DPLL_CMD_PIN_SET`` command to set pins configuration
+ ``DPLL_A_PIN_ID`` attr unique a pin ID
+ ``DPLL_A_PIN_FREQUENCY`` attr requested frequency of a pin
+ ``DPLL_A_PIN_PHASE_ADJUST`` attr requested value of phase
+ adjustment on parent device
+ ``DPLL_A_PIN_PARENT_DEVICE`` nested attr for each parent dpll
+ device configuration request
+ ``DPLL_A_PIN_PARENT_ID`` attr parent dpll device id
+ ``DPLL_A_PIN_DIRECTION`` attr requested direction of a pin
+ ``DPLL_A_PIN_PRIO`` attr requested priority of pin on
+ the dpll device
+ ``DPLL_A_PIN_STATE`` attr requested state of pin on
+ the dpll device
+ ``DPLL_A_PIN_PARENT_PIN`` nested attr for each parent pin
+ configuration request
+ ``DPLL_A_PIN_PARENT_ID`` attr parent pin id
+ ``DPLL_A_PIN_STATE`` attr requested state of pin on
+ parent pin
+ ==================================== =================================
+
+Netlink dump requests
+=====================
+
+The ``DPLL_CMD_DEVICE_GET`` and ``DPLL_CMD_PIN_GET`` commands are
+capable of dump type netlink requests, in which case the response is in
+the same format as for their ``do`` request, but every device or pin
+registered in the system is returned.
+
+SET commands format
+===================
+
+``DPLL_CMD_DEVICE_SET`` - to target a dpll device, the user provides
+``DPLL_A_ID``, which is the unique identifier of a dpll device in the
+system, as well as the parameter being configured (``DPLL_A_MODE``).
+
+``DPLL_CMD_PIN_SET`` - to target a pin, the user must provide
+``DPLL_A_PIN_ID``, which is the unique identifier of a pin in the system.
+The pin parameters being configured must also be added.
+If ``DPLL_A_PIN_FREQUENCY`` is configured, this affects all the dpll
+devices that are connected with the pin; that is why the frequency
+attribute shall not be enclosed in ``DPLL_A_PIN_PARENT_DEVICE``.
+Other attributes: ``DPLL_A_PIN_PRIO``, ``DPLL_A_PIN_STATE`` or
+``DPLL_A_PIN_DIRECTION`` must be enclosed in
+``DPLL_A_PIN_PARENT_DEVICE`` as their configuration relates to only one
+of the parent dplls, targeted by the ``DPLL_A_PIN_PARENT_ID`` attribute
+which is also required inside that nest.
+For MUX-type pins the ``DPLL_A_PIN_STATE`` attribute is configured in a
+similar way, by enclosing the required state in the ``DPLL_A_PIN_PARENT_PIN``
+nested attribute and the targeted parent pin id in ``DPLL_A_PIN_PARENT_ID``.
+
+In general, it is possible to configure multiple parameters at once, but
+internally each parameter change will be invoked separately, and the
+order of configuration is not guaranteed in any way.
+
+Configuration pre-defined enums
+===============================
+
+.. kernel-doc:: include/uapi/linux/dpll.h
+
+Notifications
+=============
+
+A dpll device can provide notifications regarding status changes of the
+device, e.g. lock status changes, input/output changes or other alarms.
+There is one multicast group that is used to notify user-space apps via
+the netlink socket: ``DPLL_MCGRP_MONITOR``.
+
+Notification messages:
+
+ ============================== =====================================
+ ``DPLL_CMD_DEVICE_CREATE_NTF`` dpll device was created
+ ``DPLL_CMD_DEVICE_DELETE_NTF`` dpll device was deleted
+ ``DPLL_CMD_DEVICE_CHANGE_NTF`` dpll device has changed
+ ``DPLL_CMD_PIN_CREATE_NTF`` dpll pin was created
+ ``DPLL_CMD_PIN_DELETE_NTF`` dpll pin was deleted
+ ``DPLL_CMD_PIN_CHANGE_NTF`` dpll pin has changed
+ ============================== =====================================
+
+The event format is the same as for the corresponding get command:
+``DPLL_CMD_DEVICE_`` events use the same format as the response of
+``DPLL_CMD_DEVICE_GET``, and ``DPLL_CMD_PIN_`` events use the same format
+as the response of ``DPLL_CMD_PIN_GET``.
+
+Device driver implementation
+============================
+
+A device is allocated by a dpll_device_get() call. A second call with the
+same arguments will not create a new object but will return a pointer to
+the previously created device for the given arguments; it also increases
+the refcount of that object.
+A device is deallocated by a dpll_device_put() call, which first
+decreases the refcount; once the refcount drops to zero the object is
+destroyed.
+
+The driver should implement a set of operations and register the device
+via dpll_device_register(), at which point it becomes available to the
+users. Multiple driver instances can obtain a reference to it with
+dpll_device_get(), as well as register the dpll device with their own
+ops and priv.
+
+The pins are allocated separately with dpll_pin_get(), which works
+similarly to dpll_device_get(): the function first creates the object and
+then, for each call with the same arguments, only increases the object's
+refcount. dpll_pin_put() works similarly to dpll_device_put().
+
+A pin can be registered with a parent dpll device or a parent pin,
+depending on hardware needs. Each registration requires the registerer to
+provide a set of pin callbacks and a private data pointer for calling them:
+
+- dpll_pin_register() - register pin with a dpll device,
+- dpll_pin_on_pin_register() - register pin with another MUX type pin.
+
+Notifications of adding or removing dpll devices are created within the
+subsystem itself.
+Notifications about registering/deregistering pins are also invoked by
+the subsystem.
+Notifications about status changes either of a dpll device or a pin are
+invoked in two ways:
+
+- after a successful change was requested on the dpll subsystem, the
+  subsystem sends the corresponding notification,
+- requested by the device driver with dpll_device_change_ntf() or
+  dpll_pin_change_ntf() when the driver informs about a status change,
+  as in the sketch below.
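+
+A minimal, hypothetical sketch of the driver-initiated path, assuming the
+driver keeps the ``struct dpll_pin`` pointer it obtained at registration
+time (``my_drv_priv`` and ``my_drv_handle_pin_event()`` are illustrative
+names, not part of the dpll API):
+
+.. code-block:: c
+
+    #include <linux/dpll.h>
+
+    struct my_drv_priv {
+            struct dpll_device *dpll;
+            struct dpll_pin *pin;
+    };
+
+    /* Called from the driver's own event handling when the hardware
+     * reports that the state of the pin has changed.
+     */
+    static void my_drv_handle_pin_event(struct my_drv_priv *priv)
+    {
+            /* Let the dpll subsystem re-read the pin state through the
+             * registered ops and notify DPLL_MCGRP_MONITOR listeners.
+             */
+            dpll_pin_change_ntf(priv->pin);
+    }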
+
+A device driver using the dpll interface is not required to implement all
+the callback operations. Nevertheless, a few are required to be
+implemented.
+Required dpll device level callback operations:
+
+- ``.mode_get``,
+- ``.lock_status_get``.
+
+Required pin level callback operations:
+
+- ``.state_on_dpll_get`` (pins registered with dpll device),
+- ``.state_on_pin_get`` (pins registered with parent pin),
+- ``.direction_get``.
+
+Every other operation handler is checked for existence and
+``-EOPNOTSUPP`` is returned in case a specific handler is absent.
+
+The simplest implementation is in the OCP TimeCard driver. The ops
+structures are defined like this:
+
+.. code-block:: c
+
+ static const struct dpll_device_ops dpll_ops = {
+ .lock_status_get = ptp_ocp_dpll_lock_status_get,
+ .mode_get = ptp_ocp_dpll_mode_get,
+ .mode_supported = ptp_ocp_dpll_mode_supported,
+ };
+
+ static const struct dpll_pin_ops dpll_pins_ops = {
+ .frequency_get = ptp_ocp_dpll_frequency_get,
+ .frequency_set = ptp_ocp_dpll_frequency_set,
+ .direction_get = ptp_ocp_dpll_direction_get,
+ .direction_set = ptp_ocp_dpll_direction_set,
+ .state_on_dpll_get = ptp_ocp_dpll_state_get,
+ };
+
+The registration part then looks like this:
+
+.. code-block:: c
+
+ clkid = pci_get_dsn(pdev);
+ bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE);
+ if (IS_ERR(bp->dpll)) {
+ err = PTR_ERR(bp->dpll);
+ dev_err(&pdev->dev, "dpll_device_alloc failed\n");
+ goto out;
+ }
+
+ err = dpll_device_register(bp->dpll, DPLL_TYPE_PPS, &dpll_ops, bp);
+ if (err)
+ goto out;
+
+ for (i = 0; i < OCP_SMA_NUM; i++) {
+ bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, &bp->sma[i].dpll_prop);
+ if (IS_ERR(bp->sma[i].dpll_pin)) {
+ err = PTR_ERR(bp->dpll);
+ goto out_dpll;
+ }
+
+ err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops,
+ &bp->sma[i]);
+ if (err) {
+ dpll_pin_put(bp->sma[i].dpll_pin);
+ goto out_dpll;
+ }
+ }
+
+In the error path we have to rewind every allocation in the reverse order:
+
+.. code-block:: c
+
+ while (i) {
+ --i;
+ dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]);
+ dpll_pin_put(bp->sma[i].dpll_pin);
+ }
+ dpll_device_put(bp->dpll);
+
+A more complex example can be found in Intel's ice driver or NVIDIA's
+mlx5 driver.
+
+SyncE enablement
+================
+For SyncE enablement it is required to allow control over the dpll device
+for a software application which monitors and configures the inputs of the
+dpll device in response to the current state of the dpll device and its
+inputs.
+In such a scenario, the dpll device input signal shall also be
+configurable to drive the dpll with a signal recovered from the PHY
+netdevice.
+This is done by exposing a pin to the netdevice - attaching the pin to the
+netdevice itself with
+``netdev_dpll_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin)``.
+The exposed pin id handle ``DPLL_A_PIN_ID`` is then identifiable by the
+user as it is attached to the rtnetlink response to the ``RTM_NEWLINK``
+command in the nested attribute ``IFLA_DPLL_PIN``.
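+
+A minimal, hypothetical sketch of attaching a registered pin to a driver's
+netdevice, assuming the driver already holds the ``struct dpll_pin``
+pointer and its ``struct net_device`` (``my_drv_priv``,
+``my_drv_expose_pin()`` and ``my_drv_hide_pin()`` are illustrative names,
+not part of the dpll API; netdev_dpll_pin_clear() is assumed to be the
+teardown counterpart):
+
+.. code-block:: c
+
+    #include <linux/dpll.h>
+    #include <linux/netdevice.h>
+
+    struct my_drv_priv {
+            struct net_device *netdev;
+            struct dpll_pin *pin;   /* from dpll_pin_get() + dpll_pin_register() */
+    };
+
+    static void my_drv_expose_pin(struct my_drv_priv *priv)
+    {
+            /* Expose the recovered-clock pin via IFLA_DPLL_PIN in
+             * RTM_NEWLINK responses for this netdevice.
+             */
+            netdev_dpll_pin_set(priv->netdev, priv->pin);
+    }
+
+    static void my_drv_hide_pin(struct my_drv_priv *priv)
+    {
+            /* Detach the pin before it is unregistered and put. */
+            netdev_dpll_pin_clear(priv->netdev);
+    }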
diff --git a/Documentation/driver-api/index.rst b/Documentation/driver-api/index.rst
index 1e16a40da3ba..f549a68951d7 100644
--- a/Documentation/driver-api/index.rst
+++ b/Documentation/driver-api/index.rst
@@ -114,6 +114,7 @@ available subsections can be seen below.
zorro
hte/index
wmi
+ dpll
.. only:: subproject and html
diff --git a/Documentation/netlink/genetlink-c.yaml b/Documentation/netlink/genetlink-c.yaml
index 9806c44f604c..c58f7153fcf8 100644
--- a/Documentation/netlink/genetlink-c.yaml
+++ b/Documentation/netlink/genetlink-c.yaml
@@ -13,6 +13,11 @@ $defs:
type: [ string, integer ]
pattern: ^[0-9A-Za-z_]+( - 1)?$
minimum: 0
+ len-or-limit:
+ # literal int or limit based on fixed-width type e.g. u8-min, u16-max, etc.
+ type: [ string, integer ]
+ pattern: ^[su](8|16|32|64)-(min|max)$
+ minimum: 0
# Schema for specs
title: Protocol
@@ -26,10 +31,6 @@ properties:
type: string
doc:
type: string
- version:
- description: Generic Netlink family version. Default is 1.
- type: integer
- minimum: 1
protocol:
description: Schema compatibility level. Default is "genetlink".
enum: [ genetlink, genetlink-c ]
@@ -46,6 +47,12 @@ properties:
max-by-define:
description: Makes the number of attributes and commands be specified by a define, not an enum value.
type: boolean
+ cmd-max-name:
+ description: Name of the define for the last operation in the list.
+ type: string
+ cmd-cnt-name:
+ description: The explicit name for constant holding the count of operations (last operation + 1).
+ type: string
# End genetlink-c
definitions:
@@ -142,13 +149,14 @@ properties:
type: array
items:
type: object
- required: [ name, type ]
+ required: [ name ]
additionalProperties: False
properties:
name:
type: string
type: &attr-type
- enum: [ unused, pad, flag, binary, u8, u16, u32, u64, s32, s64,
+ enum: [ unused, pad, flag, binary,
+ uint, sint, u8, u16, u32, u64, s32, s64,
string, nest, array-nest, nest-type-value ]
doc:
description: Documentation of the attribute.
@@ -187,13 +195,19 @@ properties:
type: string
min:
description: Min value for an integer attribute.
- type: integer
+ $ref: '#/$defs/len-or-limit'
+ max:
+ description: Max value for an integer attribute.
+ $ref: '#/$defs/len-or-limit'
min-len:
description: Min length for a binary attribute.
$ref: '#/$defs/len-or-define'
max-len:
description: Max length for a string or a binary attribute.
$ref: '#/$defs/len-or-define'
+ exact-len:
+ description: Exact length for a string or a binary attribute.
+ $ref: '#/$defs/len-or-define'
sub-type: *attr-type
display-hint: &display-hint
description: |
@@ -215,6 +229,18 @@ properties:
not:
required: [ name-prefix ]
+ # type property is only required if not in subset definition
+ if:
+ properties:
+ subset-of:
+ not:
+ type: string
+ then:
+ properties:
+ attributes:
+ items:
+ required: [ type ]
+
operations:
description: Operations supported by the protocol.
type: object
@@ -275,6 +301,11 @@ properties:
type: array
items:
enum: [ strict, dump, dump-strict ]
+ config-cond:
+ description: |
+ Name of the kernel config option gating the presence of
+ the operation, without the 'CONFIG_' prefix.
+ type: string
do: &subop-type
description: Main command handler.
type: object
diff --git a/Documentation/netlink/genetlink-legacy.yaml b/Documentation/netlink/genetlink-legacy.yaml
index 12a0a045605d..938703088306 100644
--- a/Documentation/netlink/genetlink-legacy.yaml
+++ b/Documentation/netlink/genetlink-legacy.yaml
@@ -13,6 +13,11 @@ $defs:
type: [ string, integer ]
pattern: ^[0-9A-Za-z_]+( - 1)?$
minimum: 0
+ len-or-limit:
+ # literal int or limit based on fixed-width type e.g. u8-min, u16-max, etc.
+ type: [ string, integer ]
+ pattern: ^[su](8|16|32|64)-(min|max)$
+ minimum: 0
# Schema for specs
title: Protocol
@@ -26,10 +31,6 @@ properties:
type: string
doc:
type: string
- version:
- description: Generic Netlink family version. Default is 1.
- type: integer
- minimum: 1
protocol:
description: Schema compatibility level. Default is "genetlink".
enum: [ genetlink, genetlink-c, genetlink-legacy ] # Trim
@@ -46,6 +47,12 @@ properties:
max-by-define:
description: Makes the number of attributes and commands be specified by a define, not an enum value.
type: boolean
+ cmd-max-name:
+ description: Name of the define for the last operation in the list.
+ type: string
+ cmd-cnt-name:
+ description: The explicit name for constant holding the count of operations (last operation + 1).
+ type: string
# End genetlink-c
# Start genetlink-legacy
kernel-policy:
@@ -53,6 +60,10 @@ properties:
Defines if the input policy in the kernel is global, per-operation, or split per operation type.
Default is split.
enum: [ split, per-op, global ]
+ version:
+ description: Generic Netlink family version. Default is 1.
+ type: integer
+ minimum: 1
# End genetlink-legacy
definitions:
@@ -180,14 +191,15 @@ properties:
type: array
items:
type: object
- required: [ name, type ]
+ required: [ name ]
additionalProperties: False
properties:
name:
type: string
type: &attr-type
description: The netlink attribute type
- enum: [ unused, pad, flag, binary, u8, u16, u32, u64, s32, s64,
+ enum: [ unused, pad, flag, binary, bitfield32,
+ uint, sint, u8, u16, u32, u64, s32, s64,
string, nest, array-nest, nest-type-value ]
doc:
description: Documentation of the attribute.
@@ -226,13 +238,19 @@ properties:
type: string
min:
description: Min value for an integer attribute.
- type: integer
+ $ref: '#/$defs/len-or-limit'
+ max:
+ description: Max value for an integer attribute.
+ $ref: '#/$defs/len-or-limit'
min-len:
description: Min length for a binary attribute.
$ref: '#/$defs/len-or-define'
max-len:
description: Max length for a string or a binary attribute.
$ref: '#/$defs/len-or-define'
+ exact-len:
+ description: Exact length for a string or a binary attribute.
+ $ref: '#/$defs/len-or-define'
sub-type: *attr-type
display-hint: *display-hint
# Start genetlink-c
@@ -254,6 +272,18 @@ properties:
not:
required: [ name-prefix ]
+ # type property is only required if not in subset definition
+ if:
+ properties:
+ subset-of:
+ not:
+ type: string
+ then:
+ properties:
+ attributes:
+ items:
+ required: [ type ]
+
operations:
description: Operations supported by the protocol.
type: object
@@ -316,12 +346,17 @@ properties:
description: Command flags.
type: array
items:
- enum: [ admin-perm ]
+ enum: [ admin-perm, uns-admin-perm ]
dont-validate:
description: Kernel attribute validation flags.
type: array
items:
enum: [ strict, dump, dump-strict ]
+ config-cond:
+ description: |
+ Name of the kernel config option gating the presence of
+ the operation, without the 'CONFIG_' prefix.
+ type: string
# Start genetlink-legacy
fixed-header: *fixed-header
# End genetlink-legacy
diff --git a/Documentation/netlink/genetlink.yaml b/Documentation/netlink/genetlink.yaml
index 3d338c48bf21..3283bf458ff1 100644
--- a/Documentation/netlink/genetlink.yaml
+++ b/Documentation/netlink/genetlink.yaml
@@ -13,6 +13,11 @@ $defs:
type: [ string, integer ]
pattern: ^[0-9A-Za-z_]+( - 1)?$
minimum: 0
+ len-or-limit:
+ # literal int or limit based on fixed-width type e.g. u8-min, u16-max, etc.
+ type: [ string, integer ]
+ pattern: ^[su](8|16|32|64)-(min|max)$
+ minimum: 0
# Schema for specs
title: Protocol
@@ -26,10 +31,6 @@ properties:
type: string
doc:
type: string
- version:
- description: Generic Netlink family version. Default is 1.
- type: integer
- minimum: 1
protocol:
description: Schema compatibility level. Default is "genetlink".
enum: [ genetlink ]
@@ -115,13 +116,14 @@ properties:
type: array
items:
type: object
- required: [ name, type ]
+ required: [ name ]
additionalProperties: False
properties:
name:
type: string
type: &attr-type
- enum: [ unused, pad, flag, binary, u8, u16, u32, u64, s32, s64,
+ enum: [ unused, pad, flag, binary,
+ uint, sint, u8, u16, u32, u64, s32, s64,
string, nest, array-nest, nest-type-value ]
doc:
description: Documentation of the attribute.
@@ -160,13 +162,19 @@ properties:
type: string
min:
description: Min value for an integer attribute.
- type: integer
+ $ref: '#/$defs/len-or-limit'
+ max:
+ description: Max value for an integer attribute.
+ $ref: '#/$defs/len-or-limit'
min-len:
description: Min length for a binary attribute.
$ref: '#/$defs/len-or-define'
max-len:
description: Max length for a string or a binary attribute.
$ref: '#/$defs/len-or-define'
+ exact-len:
+ description: Exact length for a string or a binary attribute.
+ $ref: '#/$defs/len-or-define'
sub-type: *attr-type
display-hint: &display-hint
description: |
@@ -184,6 +192,18 @@ properties:
not:
required: [ name-prefix ]
+ # type property is only required if not in subset definition
+ if:
+ properties:
+ subset-of:
+ not:
+ type: string
+ then:
+ properties:
+ attributes:
+ items:
+ required: [ type ]
+
operations:
description: Operations supported by the protocol.
type: object
@@ -244,6 +264,11 @@ properties:
type: array
items:
enum: [ strict, dump, dump-strict ]
+ config-cond:
+ description: |
+ Name of the kernel config option gating the presence of
+ the operation, without the 'CONFIG_' prefix.
+ type: string
do: &subop-type
description: Main command handler.
type: object
diff --git a/Documentation/netlink/netlink-raw.yaml b/Documentation/netlink/netlink-raw.yaml
index 896797876414..775cce8c548a 100644
--- a/Documentation/netlink/netlink-raw.yaml
+++ b/Documentation/netlink/netlink-raw.yaml
@@ -47,6 +47,12 @@ properties:
max-by-define:
description: Makes the number of attributes and commands be specified by a define, not an enum value.
type: boolean
+ cmd-max-name:
+ description: Name of the define for the last operation in the list.
+ type: string
+ cmd-cnt-name:
+ description: The explicit name for constant holding the count of operations (last operation + 1).
+ type: string
# End genetlink-c
# Start genetlink-legacy
kernel-policy:
@@ -187,7 +193,7 @@ properties:
type: array
items:
type: object
- required: [ name, type ]
+ required: [ name ]
additionalProperties: False
properties:
name:
@@ -240,6 +246,9 @@ properties:
max-len:
description: Max length for a string or a binary attribute.
$ref: '#/$defs/len-or-define'
+ exact-len:
+ description: Exact length for a string or a binary attribute.
+ $ref: '#/$defs/len-or-define'
sub-type: *attr-type
display-hint: *display-hint
# Start genetlink-c
@@ -261,6 +270,18 @@ properties:
not:
required: [ name-prefix ]
+ # type property is only required if not in subset definition
+ if:
+ properties:
+ subset-of:
+ not:
+ type: string
+ then:
+ properties:
+ attributes:
+ items:
+ required: [ type ]
+
operations:
description: Operations supported by the protocol.
type: object
diff --git a/Documentation/netlink/specs/devlink.yaml b/Documentation/netlink/specs/devlink.yaml
index 065661acb878..c6ba4889575a 100644
--- a/Documentation/netlink/specs/devlink.yaml
+++ b/Documentation/netlink/specs/devlink.yaml
@@ -15,6 +15,161 @@ definitions:
name: ingress
-
name: egress
+ -
+ type: enum
+ name: port-type
+ entries:
+ -
+ name: notset
+ -
+ name: auto
+ -
+ name: eth
+ -
+ name: ib
+ -
+ type: enum
+ name: port-flavour
+ entries:
+ -
+ name: physical
+ -
+ name: cpu
+ -
+ name: dsa
+ -
+ name: pci_pf
+ -
+ name: pci_vf
+ -
+ name: virtual
+ -
+ name: unused
+ -
+ name: pci_sf
+ -
+ type: enum
+ name: port-fn-state
+ entries:
+ -
+ name: inactive
+ -
+ name: active
+ -
+ type: enum
+ name: port-fn-opstate
+ entries:
+ -
+ name: detached
+ -
+ name: attached
+ -
+ type: enum
+ name: port-fn-attr-cap
+ entries:
+ -
+ name: roce-bit
+ -
+ name: migratable-bit
+ -
+ type: enum
+ name: sb-threshold-type
+ entries:
+ -
+ name: static
+ -
+ name: dynamic
+ -
+ type: enum
+ name: eswitch-mode
+ entries:
+ -
+ name: legacy
+ -
+ name: switchdev
+ -
+ type: enum
+ name: eswitch-inline-mode
+ entries:
+ -
+ name: none
+ -
+ name: link
+ -
+ name: network
+ -
+ name: transport
+ -
+ type: enum
+ name: eswitch-encap-mode
+ entries:
+ -
+ name: none
+ -
+ name: basic
+ -
+ type: enum
+ name: dpipe-match-type
+ entries:
+ -
+ name: field-exact
+ -
+ type: enum
+ name: dpipe-action-type
+ entries:
+ -
+ name: field-modify
+ -
+ type: enum
+ name: dpipe-field-mapping-type
+ entries:
+ -
+ name: none
+ -
+ name: ifindex
+ -
+ type: enum
+ name: resource-unit
+ entries:
+ -
+ name: entry
+ -
+ type: enum
+ name: reload-action
+ entries:
+ -
+ name: driver-reinit
+ value: 1
+ -
+ name: fw-activate
+ -
+ type: enum
+ name: param-cmode
+ entries:
+ -
+ name: runtime
+ -
+ name: driverinit
+ -
+ name: permanent
+ -
+ type: enum
+ name: flash-overwrite
+ entries:
+ -
+ name: settings-bit
+ -
+ name: identifiers-bit
+ -
+ type: enum
+ name: trap-action
+ entries:
+ -
+ name: drop
+ -
+ name: trap
+ -
+ name: mirror
attribute-sets:
-
@@ -31,6 +186,17 @@ attribute-sets:
-
name: port-index
type: u32
+ -
+ name: port-type
+ type: u16
+ enum: port-type
+
+ # TODO: fill in the attributes in between
+
+ -
+ name: port-split-count
+ type: u32
+ value: 9
# TODO: fill in the attributes in between
@@ -45,18 +211,224 @@ attribute-sets:
name: sb-pool-index
type: u16
value: 17
-
-
name: sb-pool-type
type: u8
enum: sb-pool-type
+ -
+ name: sb-pool-size
+ type: u32
+ -
+ name: sb-pool-threshold-type
+ type: u8
+ enum: sb-threshold-type
+ -
+ name: sb-threshold
+ type: u32
+ -
+ name: sb-tc-index
+ type: u16
+ value: 22
# TODO: fill in the attributes in between
-
- name: sb-tc-index
+ name: eswitch-mode
type: u16
- value: 22
+ value: 25
+ enum: eswitch-mode
+
+ -
+ name: eswitch-inline-mode
+ type: u16
+ enum: eswitch-inline-mode
+ -
+ name: dpipe-tables
+ type: nest
+ nested-attributes: dl-dpipe-tables
+ -
+ name: dpipe-table
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-dpipe-table
+ -
+ name: dpipe-table-name
+ type: string
+ -
+ name: dpipe-table-size
+ type: u64
+ -
+ name: dpipe-table-matches
+ type: nest
+ nested-attributes: dl-dpipe-table-matches
+ -
+ name: dpipe-table-actions
+ type: nest
+ nested-attributes: dl-dpipe-table-actions
+ -
+ name: dpipe-table-counters-enabled
+ type: u8
+ -
+ name: dpipe-entries
+ type: nest
+ nested-attributes: dl-dpipe-entries
+ -
+ name: dpipe-entry
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-dpipe-entry
+ -
+ name: dpipe-entry-index
+ type: u64
+ -
+ name: dpipe-entry-match-values
+ type: nest
+ nested-attributes: dl-dpipe-entry-match-values
+ -
+ name: dpipe-entry-action-values
+ type: nest
+ nested-attributes: dl-dpipe-entry-action-values
+ -
+ name: dpipe-entry-counter
+ type: u64
+ -
+ name: dpipe-match
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-dpipe-match
+ -
+ name: dpipe-match-value
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-dpipe-match-value
+ -
+ name: dpipe-match-type
+ type: u32
+ enum: dpipe-match-type
+ -
+ name: dpipe-action
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-dpipe-action
+ -
+ name: dpipe-action-value
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-dpipe-action-value
+ -
+ name: dpipe-action-type
+ type: u32
+ enum: dpipe-action-type
+ -
+ name: dpipe-value
+ type: binary
+ -
+ name: dpipe-value-mask
+ type: binary
+ -
+ name: dpipe-value-mapping
+ type: u32
+ -
+ name: dpipe-headers
+ type: nest
+ nested-attributes: dl-dpipe-headers
+ -
+ name: dpipe-header
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-dpipe-header
+ -
+ name: dpipe-header-name
+ type: string
+ -
+ name: dpipe-header-id
+ type: u32
+ -
+ name: dpipe-header-fields
+ type: nest
+ nested-attributes: dl-dpipe-header-fields
+ -
+ name: dpipe-header-global
+ type: u8
+ -
+ name: dpipe-header-index
+ type: u32
+ -
+ name: dpipe-field
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-dpipe-field
+ -
+ name: dpipe-field-name
+ type: string
+ -
+ name: dpipe-field-id
+ type: u32
+ -
+ name: dpipe-field-bitwidth
+ type: u32
+ -
+ name: dpipe-field-mapping-type
+ type: u32
+ enum: dpipe-field-mapping-type
+ -
+ name: pad
+ type: pad
+ -
+ name: eswitch-encap-mode
+ type: u8
+ value: 62
+ enum: eswitch-encap-mode
+ -
+ name: resource-list
+ type: nest
+ nested-attributes: dl-resource-list
+ -
+ name: resource
+ type: nest
+ multi-attr: true
+ nested-attributes: dl-resource
+ -
+ name: resource-name
+ type: string
+ -
+ name: resource-id
+ type: u64
+ -
+ name: resource-size
+ type: u64
+ -
+ name: resource-size-new
+ type: u64
+ -
+ name: resource-size-valid
+ type: u8
+ -
+ name: resource-size-min
+ type: u64
+ -
+ name: resource-size-max
+ type: u64
+ -
+ name: resource-size-gran
+ type: u64
+ -
+ name: resource-unit
+ type: u8
+ enum: resource-unit
+ -
+ name: resource-occ
+ type: u64
+ -
+ name: dpipe-table-resource-id
+ type: u64
+ -
+ name: dpipe-table-resource-units
+ type: u64
+ -
+ name: port-flavour
+ type: u16
+ enum: port-flavour
# TODO: fill in the attributes in between
@@ -68,16 +440,40 @@ attribute-sets:
# TODO: fill in the attributes in between
-
+ name: param-type
+ type: u8
+ value: 83
+
+ # TODO: fill in the attributes in between
+
+ -
+ name: param-value-cmode
+ type: u8
+ enum: param-cmode
+ value: 87
+ -
name: region-name
type: string
- value: 88
# TODO: fill in the attributes in between
-
+ name: region-snapshot-id
+ type: u32
+ value: 92
+
+ # TODO: fill in the attributes in between
+
+ -
+ name: region-chunk-addr
+ type: u64
+ value: 96
+ -
+ name: region-chunk-len
+ type: u64
+ -
name: info-driver-name
type: string
- value: 98
-
name: info-serial-number
type: string
@@ -106,6 +502,29 @@ attribute-sets:
# TODO: fill in the attributes in between
-
+ name: fmsg
+ type: nest
+ nested-attributes: dl-fmsg
+ value: 106
+ -
+ name: fmsg-obj-nest-start
+ type: flag
+ -
+ name: fmsg-pair-nest-start
+ type: flag
+ -
+ name: fmsg-arr-nest-start
+ type: flag
+ -
+ name: fmsg-nest-end
+ type: flag
+ -
+ name: fmsg-obj-name
+ type: string
+
+ # TODO: fill in the attributes in between
+
+ -
name: health-reporter-name
type: string
value: 115
@@ -113,9 +532,36 @@ attribute-sets:
# TODO: fill in the attributes in between
-
+ name: health-reporter-graceful-period
+ type: u64
+ value: 120
+ -
+ name: health-reporter-auto-recover
+ type: u8
+ -
+ name: flash-update-file-name
+ type: string
+ -
+ name: flash-update-component
+ type: string
+
+ # TODO: fill in the attributes in between
+
+ -
+ name: port-pci-pf-number
+ type: u16
+ value: 127
+
+ # TODO: fill in the attributes in between
+
+ -
name: trap-name
type: string
value: 130
+ -
+ name: trap-action
+ type: u8
+ enum: trap-action
# TODO: fill in the attributes in between
@@ -131,23 +577,68 @@ attribute-sets:
# TODO: fill in the attributes in between
-
- name: trap-policer-id
+ name: netns-fd
+ type: u32
+ value: 138
+ -
+ name: netns-pid
+ type: u32
+ -
+ name: netns-id
type: u32
- value: 142
# TODO: fill in the attributes in between
-
- name: reload-action
+ name: health-reporter-auto-dump
type: u8
- value: 153
+ value: 141
+ -
+ name: trap-policer-id
+ type: u32
+ -
+ name: trap-policer-rate
+ type: u64
+ -
+ name: trap-policer-burst
+ type: u64
+ -
+ name: port-function
+ type: nest
+ nested-attributes: dl-port-function
+
+ # TODO: fill in the attributes in between
+
+ -
+ name: port-controller-number
+ type: u32
+ value: 150
# TODO: fill in the attributes in between
-
+ name: flash-update-overwrite-mask
+ type: bitfield32
+ enum: flash-overwrite
+ enum-as-flags: True
+ value: 152
+ -
+ name: reload-action
+ type: u8
+ enum: reload-action
+ -
+ name: reload-actions-performed
+ type: bitfield32
+ enum: reload-action
+ enum-as-flags: True
+ -
+ name: reload-limits
+ type: bitfield32
+ enum: reload-action
+ enum-as-flags: True
+ -
name: dev-stats
type: nest
- value: 156
nested-attributes: dl-dev-stats
-
name: reload-stats
@@ -182,9 +673,25 @@ attribute-sets:
# TODO: fill in the attributes in between
-
+ name: port-pci-sf-number
+ type: u32
+ value: 164
+
+ # TODO: fill in the attributes in between
+
+ -
+ name: rate-tx-share
+ type: u64
+ value: 166
+ -
+ name: rate-tx-max
+ type: u64
+ -
name: rate-node-name
type: string
- value: 168
+ -
+ name: rate-parent-node-name
+ type: string
# TODO: fill in the attributes in between
@@ -193,60 +700,329 @@ attribute-sets:
type: u32
value: 171
+ # TODO: fill in the attributes in between
+
+ -
+ name: linecard-type
+ type: string
+ value: 173
+
+ # TODO: fill in the attributes in between
+
+ -
+ name: selftests
+ type: nest
+ value: 176
+ nested-attributes: dl-selftest-id
+ -
+ name: rate-tx-priority
+ type: u32
+ -
+ name: rate-tx-weight
+ type: u32
+ -
+ name: region-direct
+ type: flag
+
-
name: dl-dev-stats
subset-of: devlink
attributes:
-
name: reload-stats
- type: nest
-
name: remote-reload-stats
- type: nest
-
name: dl-reload-stats
subset-of: devlink
attributes:
-
name: reload-action-info
- type: nest
-
name: dl-reload-act-info
subset-of: devlink
attributes:
-
name: reload-action
- type: u8
-
name: reload-action-stats
- type: nest
-
name: dl-reload-act-stats
subset-of: devlink
attributes:
-
name: reload-stats-entry
- type: nest
-
name: dl-reload-stats-entry
subset-of: devlink
attributes:
-
name: reload-stats-limit
- type: u8
-
name: reload-stats-value
- type: u32
-
name: dl-info-version
subset-of: devlink
attributes:
-
name: info-version-name
- type: string
-
name: info-version-value
- type: string
+ -
+ name: dl-port-function
+ name-prefix: devlink-port-fn-attr-
+ attr-max-name: devlink-port-function-attr-max
+ attributes:
+ -
+ name-prefix: devlink-port-function-attr-
+ name: hw-addr
+ type: binary
+ value: 1
+ -
+ name: state
+ type: u8
+ enum: port-fn-state
+ -
+ name: opstate
+ type: u8
+ enum: port-fn-opstate
+ -
+ name: caps
+ type: bitfield32
+ enum: port-fn-attr-cap
+ enum-as-flags: True
+
+ -
+ name: dl-dpipe-tables
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-table
+
+ -
+ name: dl-dpipe-table
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-table-name
+ -
+ name: dpipe-table-size
+ -
+ name: dpipe-table-matches
+ -
+ name: dpipe-table-actions
+ -
+ name: dpipe-table-counters-enabled
+ -
+ name: dpipe-table-resource-id
+ -
+ name: dpipe-table-resource-units
+
+ -
+ name: dl-dpipe-table-matches
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-match
+
+ -
+ name: dl-dpipe-table-actions
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-action
+
+ -
+ name: dl-dpipe-entries
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-entry
+
+ -
+ name: dl-dpipe-entry
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-entry-index
+ -
+ name: dpipe-entry-match-values
+ -
+ name: dpipe-entry-action-values
+ -
+ name: dpipe-entry-counter
+
+ -
+ name: dl-dpipe-entry-match-values
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-match-value
+
+ -
+ name: dl-dpipe-entry-action-values
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-action-value
+
+ -
+ name: dl-dpipe-match
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-match-type
+ -
+ name: dpipe-header-id
+ -
+ name: dpipe-header-global
+ -
+ name: dpipe-header-index
+ -
+ name: dpipe-field-id
+
+ -
+ name: dl-dpipe-match-value
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-match
+ -
+ name: dpipe-value
+ -
+ name: dpipe-value-mask
+ -
+ name: dpipe-value-mapping
+
+ -
+ name: dl-dpipe-action
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-action-type
+ -
+ name: dpipe-header-id
+ -
+ name: dpipe-header-global
+ -
+ name: dpipe-header-index
+ -
+ name: dpipe-field-id
+
+ -
+ name: dl-dpipe-action-value
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-action
+ -
+ name: dpipe-value
+ -
+ name: dpipe-value-mask
+ -
+ name: dpipe-value-mapping
+
+ -
+ name: dl-dpipe-headers
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-header
+
+ -
+ name: dl-dpipe-header
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-header-name
+ -
+ name: dpipe-header-id
+ -
+ name: dpipe-header-global
+ -
+ name: dpipe-header-fields
+
+ -
+ name: dl-dpipe-header-fields
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-field
+
+ -
+ name: dl-dpipe-field
+ subset-of: devlink
+ attributes:
+ -
+ name: dpipe-field-name
+ -
+ name: dpipe-field-id
+ -
+ name: dpipe-field-bitwidth
+ -
+ name: dpipe-field-mapping-type
+
+ -
+ name: dl-resource
+ subset-of: devlink
+ attributes:
+ # -
+ # name: resource-list
+ # This is currently unsupported due to circular dependency
+ -
+ name: resource-name
+ -
+ name: resource-id
+ -
+ name: resource-size
+ -
+ name: resource-size-new
+ -
+ name: resource-size-valid
+ -
+ name: resource-size-min
+ -
+ name: resource-size-max
+ -
+ name: resource-size-gran
+ -
+ name: resource-unit
+ -
+ name: resource-occ
+
+ -
+ name: dl-resource-list
+ subset-of: devlink
+ attributes:
+ -
+ name: resource
+
+ -
+ name: dl-fmsg
+ subset-of: devlink
+ attributes:
+ -
+ name: fmsg-obj-nest-start
+ -
+ name: fmsg-pair-nest-start
+ -
+ name: fmsg-arr-nest-start
+ -
+ name: fmsg-nest-end
+ -
+ name: fmsg-obj-name
+
+ -
+ name: dl-selftest-id
+ name-prefix: devlink-attr-selftest-id-
+ attributes:
+ -
+ name: flash
+ type: flag
operations:
enum-model: directional
@@ -255,10 +1031,7 @@ operations:
name: get
doc: Get devlink instances.
attribute-set: devlink
- dont-validate:
- - strict
- - dump
-
+ dont-validate: [ strict, dump ]
do:
pre: devlink-nl-pre-doit
post: devlink-nl-post-doit
@@ -273,7 +1046,6 @@ operations:
- bus-name
- dev-name
- reload-failed
- - reload-action
- dev-stats
dump:
reply: *get-reply
@@ -282,9 +1054,7 @@ operations:
name: port-get
doc: Get devlink port instances.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit-port
post: devlink-nl-post-doit
@@ -303,16 +1073,90 @@ operations:
reply:
value: 3 # due to a bug, port dump returns DEVLINK_CMD_NEW
attributes: *port-id-attrs
+ -
+ name: port-set
+ doc: Set devlink port instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - port-index
+ - port-type
+ - port-function
- # TODO: fill in the operations in between
+ -
+ name: port-new
+ doc: Create devlink port instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - port-index
+ - port-flavour
+ - port-pci-pf-number
+ - port-pci-sf-number
+ - port-controller-number
+ reply:
+ value: 7
+ attributes: *port-id-attrs
+
+ -
+ name: port-del
+ doc: Delete devlink port instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port
+ post: devlink-nl-post-doit
+ request:
+ attributes: *port-id-attrs
+
+ -
+ name: port-split
+ doc: Split devlink port instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - port-index
+ - port-split-count
+
+ -
+ name: port-unsplit
+ doc: Unsplit devlink port instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port
+ post: devlink-nl-post-doit
+ request:
+ attributes: *port-id-attrs
-
name: sb-get
doc: Get shared buffer instances.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit
post: devlink-nl-post-doit
@@ -330,15 +1174,11 @@ operations:
attributes: *dev-id-attrs
reply: *sb-get-reply
- # TODO: fill in the operations in between
-
-
name: sb-pool-get
doc: Get shared buffer pool instances.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit
post: devlink-nl-post-doit
@@ -357,15 +1197,29 @@ operations:
attributes: *dev-id-attrs
reply: *sb-pool-get-reply
- # TODO: fill in the operations in between
+ -
+ name: sb-pool-set
+ doc: Set shared buffer pool instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - sb-index
+ - sb-pool-index
+ - sb-pool-threshold-type
+ - sb-pool-size
-
name: sb-port-pool-get
doc: Get shared buffer port-pool combinations and threshold.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit-port
post: devlink-nl-post-doit
@@ -385,15 +1239,29 @@ operations:
attributes: *dev-id-attrs
reply: *sb-port-pool-get-reply
- # TODO: fill in the operations in between
+ -
+ name: sb-port-pool-set
+ doc: Set shared buffer port-pool combinations and threshold.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - port-index
+ - sb-index
+ - sb-pool-index
+ - sb-threshold
-
name: sb-tc-pool-bind-get
doc: Get shared buffer port-TC to pool bindings and threshold.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit-port
post: devlink-nl-post-doit
@@ -414,41 +1282,264 @@ operations:
attributes: *dev-id-attrs
reply: *sb-tc-pool-bind-get-reply
- # TODO: fill in the operations in between
+ -
+ name: sb-tc-pool-bind-set
+ doc: Set shared buffer port-TC to pool bindings and threshold.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - port-index
+ - sb-index
+ - sb-pool-index
+ - sb-pool-type
+ - sb-tc-index
+ - sb-threshold
+
+ -
+ name: sb-occ-snapshot
+ doc: Take occupancy snapshot of shared buffer.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ value: 27
+ attributes:
+ - bus-name
+ - dev-name
+ - sb-index
+
+ -
+ name: sb-occ-max-clear
+ doc: Clear occupancy watermarks of shared buffer.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - sb-index
+
+ -
+ name: eswitch-get
+ doc: Get eswitch attributes.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes: *dev-id-attrs
+ reply:
+ value: 29
+ attributes: &eswitch-attrs
+ - bus-name
+ - dev-name
+ - eswitch-mode
+ - eswitch-inline-mode
+ - eswitch-encap-mode
+
+ -
+ name: eswitch-set
+ doc: Set eswitch attributes.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes: *eswitch-attrs
+
+ -
+ name: dpipe-table-get
+ doc: Get dpipe table attributes.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - dpipe-table-name
+ reply:
+ value: 31
+ attributes:
+ - bus-name
+ - dev-name
+ - dpipe-tables
+
+ -
+ name: dpipe-entries-get
+ doc: Get dpipe entries attributes.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - dpipe-table-name
+ reply:
+ attributes:
+ - bus-name
+ - dev-name
+ - dpipe-entries
+
+ -
+ name: dpipe-headers-get
+ doc: Get dpipe headers attributes.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ reply:
+ attributes:
+ - bus-name
+ - dev-name
+ - dpipe-headers
+
+ -
+ name: dpipe-table-counters-set
+ doc: Set dpipe counter attributes.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - dpipe-table-name
+ - dpipe-table-counters-enabled
+
+ -
+ name: resource-set
+ doc: Set resource attributes.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - resource-id
+ - resource-size
+
+ -
+ name: resource-dump
+ doc: Get resource attributes.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ reply:
+ value: 36
+ attributes:
+ - bus-name
+ - dev-name
+ - resource-list
+
+ -
+ name: reload
+ doc: Reload devlink.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - reload-action
+ - reload-limits
+ - netns-pid
+ - netns-fd
+ - netns-id
+ reply:
+ attributes:
+ - bus-name
+ - dev-name
+ - reload-actions-performed
-
name: param-get
doc: Get param instances.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit
post: devlink-nl-post-doit
request:
- value: 38
attributes: &param-id-attrs
- bus-name
- dev-name
- param-name
reply: &param-get-reply
- value: 38
attributes: *param-id-attrs
dump:
request:
attributes: *dev-id-attrs
reply: *param-get-reply
- # TODO: fill in the operations in between
+ -
+ name: param-set
+ doc: Set param instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - param-name
+ - param-type
+ # param-value-data is missing here as the type is variable
+ - param-value-cmode
-
name: region-get
doc: Get region instances.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit-port-optional
post: devlink-nl-post-doit
@@ -467,16 +1558,97 @@ operations:
attributes: *dev-id-attrs
reply: *region-get-reply
- # TODO: fill in the operations in between
+ -
+ name: region-new
+ doc: Create region snapshot.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port-optional
+ post: devlink-nl-post-doit
+ request:
+ value: 44
+ attributes: &region-snapshot-id-attrs
+ - bus-name
+ - dev-name
+ - port-index
+ - region-name
+ - region-snapshot-id
+ reply:
+ value: 44
+ attributes: *region-snapshot-id-attrs
+
+ -
+ name: region-del
+ doc: Delete region snapshot.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port-optional
+ post: devlink-nl-post-doit
+ request:
+ attributes: *region-snapshot-id-attrs
+
+ -
+ name: region-read
+ doc: Read region data.
+ attribute-set: devlink
+ dont-validate: [ dump-strict ]
+ flags: [ admin-perm ]
+ dump:
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - port-index
+ - region-name
+ - region-snapshot-id
+ - region-direct
+ - region-chunk-addr
+ - region-chunk-len
+ reply:
+ value: 46
+ attributes:
+ - bus-name
+ - dev-name
+ - port-index
+ - region-name
+
+ -
+ name: port-param-get
+ doc: Get port param instances.
+ attribute-set: devlink
+ dont-validate: [ strict, dump-strict ]
+ do:
+ pre: devlink-nl-pre-doit-port
+ post: devlink-nl-post-doit
+ request:
+ attributes: *port-id-attrs
+ reply:
+ attributes: *port-id-attrs
+ dump:
+ reply:
+ attributes: *port-id-attrs
+
+ -
+ name: port-param-set
+ doc: Set port param instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port
+ post: devlink-nl-post-doit
+ request:
+ attributes: *port-id-attrs
-
name: info-get
doc: Get device information, like driver name, hardware and firmware versions etc.
attribute-set: devlink
- dont-validate:
- - strict
- - dump
-
+ dont-validate: [ strict, dump ]
do:
pre: devlink-nl-pre-doit
post: devlink-nl-post-doit
@@ -500,9 +1672,7 @@ operations:
name: health-reporter-get
doc: Get health reporter instances.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit-port-optional
post: devlink-nl-post-doit
@@ -519,15 +1689,97 @@ operations:
attributes: *port-id-attrs
reply: *health-reporter-get-reply
- # TODO: fill in the operations in between
+ -
+ name: health-reporter-set
+ doc: Set health reporter instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port-optional
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - port-index
+ - health-reporter-name
+ - health-reporter-graceful-period
+ - health-reporter-auto-recover
+ - health-reporter-auto-dump
+
+ -
+ name: health-reporter-recover
+ doc: Recover health reporter instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port-optional
+ post: devlink-nl-post-doit
+ request:
+ attributes: *health-reporter-id-attrs
+
+ -
+ name: health-reporter-diagnose
+ doc: Diagnose health reporter instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port-optional
+ post: devlink-nl-post-doit
+ request:
+ attributes: *health-reporter-id-attrs
+
+ -
+ name: health-reporter-dump-get
+ doc: Dump health reporter instances.
+ attribute-set: devlink
+ dont-validate: [ dump-strict ]
+ flags: [ admin-perm ]
+ dump:
+ request:
+ attributes: *health-reporter-id-attrs
+ reply:
+ value: 56
+ attributes:
+ - fmsg
+
+ -
+ name: health-reporter-dump-clear
+ doc: Clear dump of health reporter instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port-optional
+ post: devlink-nl-post-doit
+ request:
+ attributes: *health-reporter-id-attrs
+
+ -
+ name: flash-update
+ doc: Flash update devlink instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - flash-update-file-name
+ - flash-update-component
+ - flash-update-overwrite-mask
-
name: trap-get
doc: Get trap instances.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit
post: devlink-nl-post-doit
@@ -545,15 +1797,27 @@ operations:
attributes: *dev-id-attrs
reply: *trap-get-reply
- # TODO: fill in the operations in between
+ -
+ name: trap-set
+ doc: Set trap instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - trap-name
+ - trap-action
-
name: trap-group-get
doc: Get trap group instances.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit
post: devlink-nl-post-doit
@@ -571,15 +1835,28 @@ operations:
attributes: *dev-id-attrs
reply: *trap-group-get-reply
- # TODO: fill in the operations in between
+ -
+ name: trap-group-set
+ doc: Set trap group instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - trap-group-name
+ - trap-action
+ - trap-policer-id
-
name: trap-policer-get
doc: Get trap policer instances.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit
post: devlink-nl-post-doit
@@ -597,15 +1874,41 @@ operations:
attributes: *dev-id-attrs
reply: *trap-policer-get-reply
- # TODO: fill in the operations in between
+ -
+ name: trap-policer-set
+ doc: Set trap policer instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - trap-policer-id
+ - trap-policer-rate
+ - trap-policer-burst
+
+ -
+ name: health-reporter-test
+ doc: Test health reporter instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit-port-optional
+ post: devlink-nl-post-doit
+ request:
+ value: 73
+ attributes: *health-reporter-id-attrs
-
name: rate-get
doc: Get rate instances.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit
post: devlink-nl-post-doit
@@ -624,15 +1927,66 @@ operations:
attributes: *dev-id-attrs
reply: *rate-get-reply
- # TODO: fill in the operations in between
+ -
+ name: rate-set
+ doc: Set rate instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - rate-node-name
+ - rate-tx-share
+ - rate-tx-max
+ - rate-tx-priority
+ - rate-tx-weight
+ - rate-parent-node-name
+
+ -
+ name: rate-new
+ doc: Create rate instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - rate-node-name
+ - rate-tx-share
+ - rate-tx-max
+ - rate-tx-priority
+ - rate-tx-weight
+ - rate-parent-node-name
+
+ -
+ name: rate-del
+ doc: Delete rate instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - rate-node-name
-
name: linecard-get
doc: Get line card instances.
attribute-set: devlink
- dont-validate:
- - strict
-
+ dont-validate: [ strict ]
do:
pre: devlink-nl-pre-doit
post: devlink-nl-post-doit
@@ -650,16 +2004,27 @@ operations:
attributes: *dev-id-attrs
reply: *linecard-get-reply
- # TODO: fill in the operations in between
+ -
+ name: linecard-set
+ doc: Set line card instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - linecard-index
+ - linecard-type
-
name: selftests-get
doc: Get device selftest instances.
attribute-set: devlink
- dont-validate:
- - strict
- - dump
-
+ dont-validate: [ strict, dump ]
do:
pre: devlink-nl-pre-doit
post: devlink-nl-post-doit
@@ -671,3 +2036,18 @@ operations:
attributes: *dev-id-attrs
dump:
reply: *selftests-get-reply
+
+ -
+ name: selftests-run
+ doc: Run device selftest instances.
+ attribute-set: devlink
+ dont-validate: [ strict ]
+ flags: [ admin-perm ]
+ do:
+ pre: devlink-nl-pre-doit
+ post: devlink-nl-post-doit
+ request:
+ attributes:
+ - bus-name
+ - dev-name
+ - selftests
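
The operations added above are ordinary generic netlink doit/dumpit requests, so they can be exercised straight from the YAML spec. The sketch below is illustrative only: it assumes the in-tree ynl Python library (tools/net/ynl) and its YnlFamily helper, and the PCI device, trap name and enum value are placeholders rather than part of the spec::

    # Hedged sketch: drive the devlink spec with the ynl Python library.
    import sys
    sys.path.append("tools/net/ynl")              # assumed in-tree library location
    from lib import YnlFamily                     # assumed helper class

    dl = YnlFamily("Documentation/netlink/specs/devlink.yaml")

    # "get" dumpit: list all devlink instances (no request attributes needed).
    for dev in dl.dump("get", {}):
        print(dev["bus-name"], dev["dev-name"])

    # "trap-set" doit (admin-perm): change the action of one trap.
    dl.do("trap-set", {
        "bus-name": "pci",
        "dev-name": "0000:01:00.0",               # placeholder device
        "trap-name": "source_mac_is_multicast",   # placeholder trap name
        "trap-action": 0,                         # numeric trap-action enum value (assumed)
    })
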
diff --git a/Documentation/netlink/specs/dpll.yaml b/Documentation/netlink/specs/dpll.yaml
new file mode 100644
index 000000000000..cf8abe1c0550
--- /dev/null
+++ b/Documentation/netlink/specs/dpll.yaml
@@ -0,0 +1,510 @@
+# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
+
+name: dpll
+
+doc: DPLL subsystem.
+
+definitions:
+ -
+ type: enum
+ name: mode
+ doc: |
+ working modes a dpll can support, differentiates if and how dpll selects
+ one of its inputs to syntonize with it, valid values for DPLL_A_MODE
+ attribute
+ entries:
+ -
+ name: manual
+ doc: input can be only selected by sending a request to dpll
+ value: 1
+ -
+ name: automatic
+ doc: highest prio input pin auto selected by dpll
+ render-max: true
+ -
+ type: enum
+ name: lock-status
+ doc: |
+ provides information of dpll device lock status, valid values for
+ DPLL_A_LOCK_STATUS attribute
+ entries:
+ -
+ name: unlocked
+ doc: |
+ dpll was not yet locked to any valid input (or forced by setting
+ DPLL_A_MODE to DPLL_MODE_DETACHED)
+ value: 1
+ -
+ name: locked
+ doc: |
+ dpll is locked to a valid signal, but no holdover available
+ -
+ name: locked-ho-acq
+ doc: |
+ dpll is locked and holdover acquired
+ -
+ name: holdover
+ doc: |
+ dpll is in holdover state - lost a valid lock or was forced
+ by disconnecting all the pins (latter possible only
+ when dpll lock-state was already DPLL_LOCK_STATUS_LOCKED_HO_ACQ,
+ if dpll lock-state was not DPLL_LOCK_STATUS_LOCKED_HO_ACQ, the
+ dpll's lock-state shall remain DPLL_LOCK_STATUS_UNLOCKED)
+ render-max: true
+ -
+ type: const
+ name: temp-divider
+ value: 1000
+ doc: |
+ temperature divider allowing userspace to calculate the
+ temperature as float with three digit decimal precision.
+ Value of (DPLL_A_TEMP / DPLL_TEMP_DIVIDER) is integer part of
+ temperature value.
+ Value of (DPLL_A_TEMP % DPLL_TEMP_DIVIDER) is fractional part of
+ temperature value.
+ -
+ type: enum
+ name: type
+ doc: type of dpll, valid values for DPLL_A_TYPE attribute
+ entries:
+ -
+ name: pps
+ doc: dpll produces Pulse-Per-Second signal
+ value: 1
+ -
+ name: eec
+ doc: dpll drives the Ethernet Equipment Clock
+ render-max: true
+ -
+ type: enum
+ name: pin-type
+ doc: |
+ defines possible types of a pin, valid values for DPLL_A_PIN_TYPE
+ attribute
+ entries:
+ -
+ name: mux
+ doc: aggregates another layer of selectable pins
+ value: 1
+ -
+ name: ext
+ doc: external input
+ -
+ name: synce-eth-port
+ doc: ethernet port PHY's recovered clock
+ -
+ name: int-oscillator
+ doc: device internal oscillator
+ -
+ name: gnss
+ doc: GNSS recovered clock
+ render-max: true
+ -
+ type: enum
+ name: pin-direction
+ doc: |
+ defines possible direction of a pin, valid values for
+ DPLL_A_PIN_DIRECTION attribute
+ entries:
+ -
+ name: input
+ doc: pin used as an input of a signal
+ value: 1
+ -
+ name: output
+ doc: pin used to output the signal
+ render-max: true
+ -
+ type: const
+ name: pin-frequency-1-hz
+ value: 1
+ -
+ type: const
+ name: pin-frequency-10-khz
+ value: 10000
+ -
+ type: const
+ name: pin-frequency-77_5-khz
+ value: 77500
+ -
+ type: const
+ name: pin-frequency-10-mhz
+ value: 10000000
+ -
+ type: enum
+ name: pin-state
+ doc: |
+ defines possible states of a pin, valid values for
+ DPLL_A_PIN_STATE attribute
+ entries:
+ -
+ name: connected
+ doc: pin connected, active input of phase locked loop
+ value: 1
+ -
+ name: disconnected
+ doc: pin disconnected, not considered as a valid input
+ -
+ name: selectable
+ doc: pin enabled for automatic input selection
+ render-max: true
+ -
+ type: flags
+ name: pin-capabilities
+ doc: |
+ defines possible capabilities of a pin, valid flags on
+ DPLL_A_PIN_CAPABILITIES attribute
+ entries:
+ -
+ name: direction-can-change
+ doc: pin direction can be changed
+ -
+ name: priority-can-change
+ doc: pin priority can be changed
+ -
+ name: state-can-change
+ doc: pin state can be changed
+ -
+ type: const
+ name: phase-offset-divider
+ value: 1000
+ doc: |
+ phase offset divider allows userspace to calculate a value of
+ measured signal phase difference between a pin and dpll device
+ as a fractional value with three digit decimal precision.
+ Value of (DPLL_A_PHASE_OFFSET / DPLL_PHASE_OFFSET_DIVIDER) is an
+ integer part of a measured phase offset value.
+ Value of (DPLL_A_PHASE_OFFSET % DPLL_PHASE_OFFSET_DIVIDER) is a
+ fractional part of a measured phase offset value.
+
+attribute-sets:
+ -
+ name: dpll
+ enum-name: dpll_a
+ attributes:
+ -
+ name: id
+ type: u32
+ -
+ name: module-name
+ type: string
+ -
+ name: pad
+ type: pad
+ -
+ name: clock-id
+ type: u64
+ -
+ name: mode
+ type: u32
+ enum: mode
+ -
+ name: mode-supported
+ type: u32
+ enum: mode
+ multi-attr: true
+ -
+ name: lock-status
+ type: u32
+ enum: lock-status
+ -
+ name: temp
+ type: s32
+ -
+ name: type
+ type: u32
+ enum: type
+ -
+ name: pin
+ enum-name: dpll_a_pin
+ attributes:
+ -
+ name: id
+ type: u32
+ -
+ name: parent-id
+ type: u32
+ -
+ name: module-name
+ type: string
+ -
+ name: pad
+ type: pad
+ -
+ name: clock-id
+ type: u64
+ -
+ name: board-label
+ type: string
+ -
+ name: panel-label
+ type: string
+ -
+ name: package-label
+ type: string
+ -
+ name: type
+ type: u32
+ enum: pin-type
+ -
+ name: direction
+ type: u32
+ enum: pin-direction
+ -
+ name: frequency
+ type: u64
+ -
+ name: frequency-supported
+ type: nest
+ multi-attr: true
+ nested-attributes: frequency-range
+ -
+ name: frequency-min
+ type: u64
+ -
+ name: frequency-max
+ type: u64
+ -
+ name: prio
+ type: u32
+ -
+ name: state
+ type: u32
+ enum: pin-state
+ -
+ name: capabilities
+ type: u32
+ -
+ name: parent-device
+ type: nest
+ multi-attr: true
+ nested-attributes: pin-parent-device
+ -
+ name: parent-pin
+ type: nest
+ multi-attr: true
+ nested-attributes: pin-parent-pin
+ -
+ name: phase-adjust-min
+ type: s32
+ -
+ name: phase-adjust-max
+ type: s32
+ -
+ name: phase-adjust
+ type: s32
+ -
+ name: phase-offset
+ type: s64
+ -
+ name: pin-parent-device
+ subset-of: pin
+ attributes:
+ -
+ name: parent-id
+ -
+ name: direction
+ -
+ name: prio
+ -
+ name: state
+ -
+ name: phase-offset
+ -
+ name: pin-parent-pin
+ subset-of: pin
+ attributes:
+ -
+ name: parent-id
+ -
+ name: state
+ -
+ name: frequency-range
+ subset-of: pin
+ attributes:
+ -
+ name: frequency-min
+ -
+ name: frequency-max
+
+operations:
+ enum-name: dpll_cmd
+ list:
+ -
+ name: device-id-get
+ doc: |
+ Get id of dpll device that matches given attributes
+ attribute-set: dpll
+ flags: [ admin-perm ]
+
+ do:
+ pre: dpll-lock-doit
+ post: dpll-unlock-doit
+ request:
+ attributes:
+ - module-name
+ - clock-id
+ - type
+ reply:
+ attributes:
+ - id
+
+ -
+ name: device-get
+ doc: |
+ Get list of DPLL devices (dump) or attributes of a single dpll device
+ attribute-set: dpll
+ flags: [ admin-perm ]
+
+ do:
+ pre: dpll-pre-doit
+ post: dpll-post-doit
+ request:
+ attributes:
+ - id
+ reply: &dev-attrs
+ attributes:
+ - id
+ - module-name
+ - mode
+ - mode-supported
+ - lock-status
+ - temp
+ - clock-id
+ - type
+
+ dump:
+ pre: dpll-lock-dumpit
+ post: dpll-unlock-dumpit
+ reply: *dev-attrs
+
+ -
+ name: device-set
+ doc: Set attributes for a DPLL device
+ attribute-set: dpll
+ flags: [ admin-perm ]
+
+ do:
+ pre: dpll-pre-doit
+ post: dpll-post-doit
+ request:
+ attributes:
+ - id
+ -
+ name: device-create-ntf
+ doc: Notification about device appearing
+ notify: device-get
+ mcgrp: monitor
+ -
+ name: device-delete-ntf
+ doc: Notification about device disappearing
+ notify: device-get
+ mcgrp: monitor
+ -
+ name: device-change-ntf
+ doc: Notification about device configuration being changed
+ notify: device-get
+ mcgrp: monitor
+ -
+ name: pin-id-get
+ doc: |
+ Get id of a pin that matches given attributes
+ attribute-set: pin
+ flags: [ admin-perm ]
+
+ do:
+ pre: dpll-lock-doit
+ post: dpll-unlock-doit
+ request:
+ attributes:
+ - module-name
+ - clock-id
+ - board-label
+ - panel-label
+ - package-label
+ - type
+ reply:
+ attributes:
+ - id
+
+ -
+ name: pin-get
+ doc: |
+ Get list of pins and their attributes.
+ - dump request without any attributes given - list all the pins in the
+ system
+ - dump request with target dpll - list all the pins registered with
+ a given dpll device
+ - do request with target dpll and target pin - single pin attributes
+ attribute-set: pin
+ flags: [ admin-perm ]
+
+ do:
+ pre: dpll-pin-pre-doit
+ post: dpll-pin-post-doit
+ request:
+ attributes:
+ - id
+ reply: &pin-attrs
+ attributes:
+ - id
+ - board-label
+ - panel-label
+ - package-label
+ - type
+ - frequency
+ - frequency-supported
+ - capabilities
+ - parent-device
+ - parent-pin
+ - phase-adjust-min
+ - phase-adjust-max
+ - phase-adjust
+
+ dump:
+ pre: dpll-lock-dumpit
+ post: dpll-unlock-dumpit
+ request:
+ attributes:
+ - id
+ reply: *pin-attrs
+
+ -
+ name: pin-set
+ doc: Set attributes of a target pin
+ attribute-set: pin
+ flags: [ admin-perm ]
+
+ do:
+ pre: dpll-pin-pre-doit
+ post: dpll-pin-post-doit
+ request:
+ attributes:
+ - id
+ - frequency
+ - direction
+ - prio
+ - state
+ - parent-device
+ - parent-pin
+ - phase-adjust
+ -
+ name: pin-create-ntf
+ doc: Notification about pin appearing
+ notify: pin-get
+ mcgrp: monitor
+ -
+ name: pin-delete-ntf
+ doc: Notification about pin disappearing
+ notify: pin-get
+ mcgrp: monitor
+ -
+ name: pin-change-ntf
+ doc: Notification about pin configuration being changed
+ notify: pin-get
+ mcgrp: monitor
+
+mcast-groups:
+ list:
+ -
+ name: monitor
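
The temp-divider and phase-offset-divider constants above both equal 1000 and define how user space turns the raw integers carried in DPLL_A_TEMP and DPLL_A_PHASE_OFFSET into three-decimal values. A small arithmetic-only illustration (the raw sample values are made up)::

    # Recover integer/fractional parts from the scaled values, per the divider docs.
    DPLL_TEMP_DIVIDER = 1000
    DPLL_PHASE_OFFSET_DIVIDER = 1000

    def split_scaled(raw: int, divider: int) -> tuple[int, int]:
        # Truncating division, matching the C-style wording in the definitions.
        return int(raw / divider), abs(raw) % divider

    whole, frac = split_scaled(45781, DPLL_TEMP_DIVIDER)
    print(f"temp: {whole}.{frac:03d}")             # -> temp: 45.781

    whole, frac = split_scaled(-1250, DPLL_PHASE_OFFSET_DIVIDER)
    print(f"phase offset: {whole}.{frac:03d}")     # -> phase offset: -1.250
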
diff --git a/Documentation/netlink/specs/ethtool.yaml b/Documentation/netlink/specs/ethtool.yaml
index 837b565577ca..5c7a65b009b4 100644
--- a/Documentation/netlink/specs/ethtool.yaml
+++ b/Documentation/netlink/specs/ethtool.yaml
@@ -818,13 +818,10 @@ attribute-sets:
attributes:
-
name: hist-bkt-low
- type: u32
-
name: hist-bkt-hi
- type: u32
-
name: hist-val
- type: u64
-
name: stats
attributes:
diff --git a/Documentation/netlink/specs/handshake.yaml b/Documentation/netlink/specs/handshake.yaml
index 6d89e30f5fd5..b934cc513e3d 100644
--- a/Documentation/netlink/specs/handshake.yaml
+++ b/Documentation/netlink/specs/handshake.yaml
@@ -34,16 +34,16 @@ attribute-sets:
attributes:
-
name: cert
- type: u32
+ type: s32
-
name: privkey
- type: u32
+ type: s32
-
name: accept
attributes:
-
name: sockfd
- type: u32
+ type: s32
-
name: handler-class
type: u32
@@ -79,7 +79,7 @@ attribute-sets:
type: u32
-
name: sockfd
- type: u32
+ type: s32
-
name: remote-auth
type: u32
diff --git a/Documentation/netlink/specs/mptcp.yaml b/Documentation/netlink/specs/mptcp.yaml
new file mode 100644
index 000000000000..49f90cfb4698
--- /dev/null
+++ b/Documentation/netlink/specs/mptcp.yaml
@@ -0,0 +1,393 @@
+# SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
+
+name: mptcp_pm
+protocol: genetlink-legacy
+doc: Multipath TCP.
+
+c-family-name: mptcp-pm-name
+c-version-name: mptcp-pm-ver
+max-by-define: true
+kernel-policy: per-op
+cmd-cnt-name: --mptcp-pm-cmd-after-last
+
+definitions:
+ -
+ type: enum
+ name: event-type
+ enum-name: mptcp-event-type
+ name-prefix: mptcp-event-
+ entries:
+ -
+ name: unspec
+ doc: unused event
+ -
+ name: created
+ doc:
+ token, family, saddr4 | saddr6, daddr4 | daddr6, sport, dport
+ A new MPTCP connection has been created. It is a good time to
+ allocate memory and send ADD_ADDR if needed. Depending on the
+ traffic patterns, it can take a long time until the
+ MPTCP_EVENT_ESTABLISHED is sent.
+ -
+ name: established
+ doc:
+ token, family, saddr4 | saddr6, daddr4 | daddr6, sport, dport
+ An MPTCP connection is established (can start new subflows).
+ -
+ name: closed
+ doc:
+ token
+ An MPTCP connection has stopped.
+ -
+ name: announced
+ value: 6
+ doc:
+ token, rem_id, family, daddr4 | daddr6 [, dport]
+ A new address has been announced by the peer.
+ -
+ name: removed
+ doc:
+ token, rem_id
+ An address has been lost by the peer.
+ -
+ name: sub-established
+ value: 10
+ doc:
+ token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport,
+ dport, backup, if_idx [, error]
+ A new subflow has been established. 'error' should not be set.
+ -
+ name: sub-closed
+ doc:
+ token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport,
+ dport, backup, if_idx [, error]
+ A subflow has been closed. An error (copy of sk_err) could be set if an
+ error has been detected for this subflow.
+ -
+ name: sub-priority
+ value: 13
+ doc:
+ token, family, loc_id, rem_id, saddr4 | saddr6, daddr4 | daddr6, sport,
+ dport, backup, if_idx [, error]
+ The priority of a subflow has changed. 'error' should not be set.
+ -
+ name: listener-created
+ value: 15
+ doc:
+ family, sport, saddr4 | saddr6
+ A new PM listener is created.
+ -
+ name: listener-closed
+ doc:
+ family, sport, saddr4 | saddr6
+ A PM listener is closed.
+
+attribute-sets:
+ -
+ name: address
+ name-prefix: mptcp-pm-addr-attr-
+ attributes:
+ -
+ name: unspec
+ type: unused
+ value: 0
+ -
+ name: family
+ type: u16
+ -
+ name: id
+ type: u8
+ -
+ name: addr4
+ type: u32
+ byte-order: big-endian
+ -
+ name: addr6
+ type: binary
+ checks:
+ exact-len: 16
+ -
+ name: port
+ type: u16
+ byte-order: big-endian
+ -
+ name: flags
+ type: u32
+ -
+ name: if-idx
+ type: s32
+ -
+ name: subflow-attribute
+ name-prefix: mptcp-subflow-attr-
+ attributes:
+ -
+ name: unspec
+ type: unused
+ value: 0
+ -
+ name: token-rem
+ type: u32
+ -
+ name: token-loc
+ type: u32
+ -
+ name: relwrite-seq
+ type: u32
+ -
+ name: map-seq
+ type: u64
+ -
+ name: map-sfseq
+ type: u32
+ -
+ name: ssn-offset
+ type: u32
+ -
+ name: map-datalen
+ type: u16
+ -
+ name: flags
+ type: u32
+ -
+ name: id-rem
+ type: u8
+ -
+ name: id-loc
+ type: u8
+ -
+ name: pad
+ type: pad
+ -
+ name: endpoint
+ name-prefix: mptcp-pm-endpoint-
+ attributes:
+ -
+ name: addr
+ type: nest
+ nested-attributes: address
+ -
+ name: attr
+ name-prefix: mptcp-pm-attr-
+ attr-cnt-name: --mptcp-attr-after-last
+ attributes:
+ -
+ name: unspec
+ type: unused
+ value: 0
+ -
+ name: addr
+ type: nest
+ nested-attributes: address
+ -
+ name: rcv-add-addrs
+ type: u32
+ -
+ name: subflows
+ type: u32
+ -
+ name: token
+ type: u32
+ -
+ name: loc-id
+ type: u8
+ -
+ name: addr-remote
+ type: nest
+ nested-attributes: address
+ -
+ name: event-attr
+ enum-name: mptcp-event-attr
+ name-prefix: mptcp-attr-
+ attributes:
+ -
+ name: unspec
+ type: unused
+ value: 0
+ -
+ name: token
+ type: u32
+ -
+ name: family
+ type: u16
+ -
+ name: loc-id
+ type: u8
+ -
+ name: rem-id
+ type: u8
+ -
+ name: saddr4
+ type: u32
+ byte-order: big-endian
+ -
+ name: saddr6
+ type: binary
+ checks:
+ min-len: 16
+ -
+ name: daddr4
+ type: u32
+ byte-order: big-endian
+ -
+ name: daddr6
+ type: binary
+ checks:
+ min-len: 16
+ -
+ name: sport
+ type: u16
+ byte-order: big-endian
+ -
+ name: dport
+ type: u16
+ byte-order: big-endian
+ -
+ name: backup
+ type: u8
+ -
+ name: error
+ type: u8
+ -
+ name: flags
+ type: u16
+ -
+ name: timeout
+ type: u32
+ -
+ name: if_idx
+ type: u32
+ -
+ name: reset-reason
+ type: u32
+ -
+ name: reset-flags
+ type: u32
+ -
+ name: server-side
+ type: u8
+
+operations:
+ list:
+ -
+ name: unspec
+ doc: unused
+ value: 0
+ -
+ name: add-addr
+ doc: Add endpoint
+ attribute-set: endpoint
+ dont-validate: [ strict ]
+ flags: [ uns-admin-perm ]
+ do: &add-addr-attrs
+ request:
+ attributes:
+ - addr
+ -
+ name: del-addr
+ doc: Delete endpoint
+ attribute-set: endpoint
+ dont-validate: [ strict ]
+ flags: [ uns-admin-perm ]
+ do: *add-addr-attrs
+ -
+ name: get-addr
+ doc: Get endpoint information
+ attribute-set: endpoint
+ dont-validate: [ strict ]
+ flags: [ uns-admin-perm ]
+ do: &get-addr-attrs
+ request:
+ attributes:
+ - addr
+ reply:
+ attributes:
+ - addr
+ dump:
+ reply:
+ attributes:
+ - addr
+ -
+ name: flush-addrs
+ doc: flush addresses
+ attribute-set: endpoint
+ dont-validate: [ strict ]
+ flags: [ uns-admin-perm ]
+ do: *add-addr-attrs
+ -
+ name: set-limits
+ doc: Set protocol limits
+ attribute-set: attr
+ dont-validate: [ strict ]
+ flags: [ uns-admin-perm ]
+ do: &mptcp-limits
+ request:
+ attributes:
+ - rcv-add-addrs
+ - subflows
+ -
+ name: get-limits
+ doc: Get protocol limits
+ attribute-set: attr
+ dont-validate: [ strict ]
+ do: &mptcp-get-limits
+ request:
+ attributes:
+ - rcv-add-addrs
+ - subflows
+ reply:
+ attributes:
+ - rcv-add-addrs
+ - subflows
+ -
+ name: set-flags
+ doc: Change endpoint flags
+ attribute-set: attr
+ dont-validate: [ strict ]
+ flags: [ uns-admin-perm ]
+ do: &mptcp-set-flags
+ request:
+ attributes:
+ - addr
+ - token
+ - addr-remote
+ -
+ name: announce
+ doc: announce new sf
+ attribute-set: attr
+ dont-validate: [ strict ]
+ flags: [ uns-admin-perm ]
+ do: &announce-add
+ request:
+ attributes:
+ - addr
+ - token
+ -
+ name: remove
+ doc: announce removal
+ attribute-set: attr
+ dont-validate: [ strict ]
+ flags: [ uns-admin-perm ]
+ do:
+ request:
+ attributes:
+ - token
+ - loc-id
+ -
+ name: subflow-create
+ doc: todo
+ attribute-set: attr
+ dont-validate: [ strict ]
+ flags: [ uns-admin-perm ]
+ do: &sf-create
+ request:
+ attributes:
+ - addr
+ - token
+ - addr-remote
+ -
+ name: subflow-destroy
+ doc: todo
+ attribute-set: attr
+ dont-validate: [ strict ]
+ flags: [ uns-admin-perm ]
+ do: *sf-create
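
As with devlink, the mptcp_pm spec can be driven from user space via the YAML. A hedged sketch using the in-tree ynl Python library (library location and helper names are assumptions; get-limits needs no privileges, set-limits requires uns-admin-perm)::

    # Query and update MPTCP path-manager limits through the spec above.
    import sys
    sys.path.append("tools/net/ynl")      # assumed in-tree library location
    from lib import YnlFamily             # assumed helper class

    pm = YnlFamily("Documentation/netlink/specs/mptcp.yaml")

    limits = pm.do("get-limits", {})
    print("add-addr accepted:", limits["rcv-add-addrs"],
          "subflows:", limits["subflows"])

    # "set-limits" mirrors the same two attributes.
    pm.do("set-limits", {"rcv-add-addrs": 4, "subflows": 2})
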
diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml
index 1c7284fd535b..14511b13f305 100644
--- a/Documentation/netlink/specs/netdev.yaml
+++ b/Documentation/netlink/specs/netdev.yaml
@@ -42,6 +42,19 @@ definitions:
doc:
This feature informs if netdev implements non-linear XDP buffer
support in ndo_xdp_xmit callback.
+ -
+ type: flags
+ name: xdp-rx-metadata
+ render-max: true
+ entries:
+ -
+ name: timestamp
+ doc:
+ Device is capable of exposing receive HW timestamp via bpf_xdp_metadata_rx_timestamp().
+ -
+ name: hash
+ doc:
+ Device is capable of exposing receive packet hash via bpf_xdp_metadata_rx_hash().
attribute-sets:
-
@@ -61,13 +74,18 @@ attribute-sets:
doc: Bitmask of enabled xdp-features.
type: u64
enum: xdp-act
- enum-as-flags: true
-
name: xdp-zc-max-segs
doc: max fragment count supported by ZC driver
type: u32
checks:
min: 1
+ -
+ name: xdp-rx-metadata-features
+ doc: Bitmask of supported XDP receive metadata features.
+ See Documentation/networking/xdp-rx-metadata.rst for more details.
+ type: u64
+ enum: xdp-rx-metadata
operations:
list:
@@ -84,6 +102,7 @@ operations:
- ifindex
- xdp-features
- xdp-zc-max-segs
+ - xdp-rx-metadata-features
dump:
reply: *dev-all
-
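
The new xdp-rx-metadata-features attribute is a u64 carrying the xdp-rx-metadata flags, so a dev-get reply can be decoded with the bit positions implied by the entry order above (timestamp first, then hash). The constants below are local illustrative names and the bit mapping is an inference from the flags definition, not an authoritative uAPI statement::

    # Decode xdp-rx-metadata-features from a netdev dev-get reply.
    # Bit order assumed to follow the flags definition: timestamp=bit 0, hash=bit 1.
    XDP_RX_METADATA_TIMESTAMP = 1 << 0
    XDP_RX_METADATA_HASH = 1 << 1

    def rx_metadata_caps(mask: int) -> list[str]:
        caps = []
        if mask & XDP_RX_METADATA_TIMESTAMP:
            caps.append("HW RX timestamp via bpf_xdp_metadata_rx_timestamp()")
        if mask & XDP_RX_METADATA_HASH:
            caps.append("RX hash via bpf_xdp_metadata_rx_hash()")
        return caps

    print(rx_metadata_caps(0b11))   # illustrative mask with both capabilities set
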
diff --git a/Documentation/networking/device_drivers/appletalk/cops.rst b/Documentation/networking/device_drivers/appletalk/cops.rst
deleted file mode 100644
index 964ba80599a9..000000000000
--- a/Documentation/networking/device_drivers/appletalk/cops.rst
+++ /dev/null
@@ -1,80 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0
-
-========================================
-The COPS LocalTalk Linux driver (cops.c)
-========================================
-
-By Jay Schulist <jschlst@samba.org>
-
-This driver has two modes and they are: Dayna mode and Tangent mode.
-Each mode corresponds with the type of card. It has been found
-that there are 2 main types of cards and all other cards are
-the same and just have different names or only have minor differences
-such as more IO ports. As this driver is tested it will
-become more clear exactly what cards are supported.
-
-Right now these cards are known to work with the COPS driver. The
-LT-200 cards work in a somewhat more limited capacity than the
-DL200 cards, which work very well and are in use by many people.
-
-TANGENT driver mode:
- - Tangent ATB-II, Novell NL-1000, Daystar Digital LT-200
-
-DAYNA driver mode:
- - Dayna DL2000/DaynaTalk PC (Half Length), COPS LT-95,
- - Farallon PhoneNET PC III, Farallon PhoneNET PC II
-
-Other cards possibly supported mode unknown though:
- - Dayna DL2000 (Full length)
-
-The COPS driver defaults to using Dayna mode. To change the driver's
-mode if you built a driver with dual support use board_type=1 or
-board_type=2 for Dayna or Tangent with insmod.
-
-Operation/loading of the driver
-===============================
-
-Use modprobe like this: /sbin/modprobe cops.o (IO #) (IRQ #)
-If you do not specify any options the driver will try and use the IO = 0x240,
-IRQ = 5. As of right now I would only use IRQ 5 for the card, if autoprobing.
-
-To load multiple COPS driver Localtalk cards you can do one of the following::
-
- insmod cops io=0x240 irq=5
- insmod -o cops2 cops io=0x260 irq=3
-
-Or in lilo.conf put something like this::
-
- append="ether=5,0x240,lt0 ether=3,0x260,lt1"
-
-Then bring up the interface with ifconfig. It will look something like this::
-
- lt0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-F7-00-00-00-00-00-00-00-00
- inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
- UP BROADCAST RUNNING NOARP MULTICAST MTU:600 Metric:1
- RX packets:0 errors:0 dropped:0 overruns:0 frame:0
- TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 coll:0
-
-Netatalk Configuration
-======================
-
-You will need to configure atalkd with something like the following to make
-it work with the cops.c driver.
-
-* For single LTalk card use::
-
- dummy -seed -phase 2 -net 2000 -addr 2000.10 -zone "1033"
- lt0 -seed -phase 1 -net 1000 -addr 1000.50 -zone "1033"
-
-* For multiple cards, Ethernet and LocalTalk::
-
- eth0 -seed -phase 2 -net 3000 -addr 3000.20 -zone "1033"
- lt0 -seed -phase 1 -net 1000 -addr 1000.50 -zone "1033"
-
-* For multiple LocalTalk cards, and an Ethernet card.
-
-* Order seems to matter here, Ethernet last::
-
- lt0 -seed -phase 1 -net 1000 -addr 1000.10 -zone "LocalTalk1"
- lt1 -seed -phase 1 -net 2000 -addr 2000.20 -zone "LocalTalk2"
- eth0 -seed -phase 2 -net 3000 -addr 3000.30 -zone "EtherTalk"
diff --git a/Documentation/networking/device_drivers/appletalk/index.rst b/Documentation/networking/device_drivers/appletalk/index.rst
deleted file mode 100644
index c196baeb0856..000000000000
--- a/Documentation/networking/device_drivers/appletalk/index.rst
+++ /dev/null
@@ -1,18 +0,0 @@
-.. SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
-
-AppleTalk Device Drivers
-========================
-
-Contents:
-
-.. toctree::
- :maxdepth: 2
-
- cops
-
-.. only:: subproject and html
-
- Indices
- =======
-
- * :ref:`genindex`
diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst
index 9827e816084b..43de285b8a92 100644
--- a/Documentation/networking/device_drivers/ethernet/index.rst
+++ b/Documentation/networking/device_drivers/ethernet/index.rst
@@ -32,6 +32,7 @@ Contents:
intel/e1000
intel/e1000e
intel/fm10k
+ intel/idpf
intel/igb
intel/igbvf
intel/ixgbe
diff --git a/Documentation/networking/device_drivers/ethernet/intel/idpf.rst b/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
new file mode 100644
index 000000000000..adb16e2abd21
--- /dev/null
+++ b/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
@@ -0,0 +1,160 @@
+.. SPDX-License-Identifier: GPL-2.0+
+
+==========================================================================
+idpf Linux* Base Driver for the Intel(R) Infrastructure Data Path Function
+==========================================================================
+
+Intel idpf Linux driver.
+Copyright(C) 2023 Intel Corporation.
+
+.. contents::
+
+The idpf driver serves as both the Physical Function (PF) and Virtual Function
+(VF) driver for the Intel(R) Infrastructure Data Path Function.
+
+Driver information can be obtained using ethtool, lspci, and ip.
+
+For questions related to hardware requirements, refer to the documentation
+supplied with your Intel adapter. All hardware requirements listed apply to use
+with Linux.
+
+
+Identifying Your Adapter
+========================
+For information on how to identify your adapter, and for the latest Intel
+network drivers, refer to the Intel Support website:
+http://www.intel.com/support
+
+
+Additional Features and Configurations
+======================================
+
+ethtool
+-------
+The driver utilizes the ethtool interface for driver configuration and
+diagnostics, as well as displaying statistical information. The latest ethtool
+version is required for this functionality. If you don't have one yet, you can
+obtain it at:
+https://kernel.org/pub/software/network/ethtool/
+
+
+Viewing Link Messages
+---------------------
+Link messages will not be displayed to the console if the distribution is
+restricting system messages. In order to see network driver link messages on
+your console, set the console log level to eight by entering the following::
+
+ # dmesg -n 8
+
+.. note::
+ This setting is not saved across reboots.
+
+
+Jumbo Frames
+------------
+Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
+to a value larger than the default value of 1500.
+
+Use the ip command to increase the MTU size. For example, enter the following
+where <ethX> is the interface number::
+
+ # ip link set mtu 9000 dev <ethX>
+ # ip link set up dev <ethX>
+
+.. note::
+ The maximum MTU setting for jumbo frames is 9706. This corresponds to the
+ maximum jumbo frame size of 9728 bytes.
+
+.. note::
+ This driver will attempt to use multiple page sized buffers to receive
+ each jumbo packet. This should help to avoid buffer starvation issues when
+ allocating receive packets.
+
+.. note::
+ Packet loss may have a greater impact on throughput when you use jumbo
+ frames. If you observe a drop in performance after enabling jumbo frames,
+ enabling flow control may mitigate the issue.
+
+
+Performance Optimization
+========================
+Driver defaults are meant to fit a wide variety of workloads, but if further
+optimization is required, we recommend experimenting with the following
+settings.
+
+
+Interrupt Rate Limiting
+-----------------------
+This driver supports an adaptive interrupt throttle rate (ITR) mechanism that
+is tuned for general workloads. The user can customize the interrupt rate
+control for specific workloads, via ethtool, adjusting the number of
+microseconds between interrupts.
+
+To set the interrupt rate manually, you must disable adaptive mode::
+
+ # ethtool -C <ethX> adaptive-rx off adaptive-tx off
+
+For lower CPU utilization:
+ - Disable adaptive ITR and lower Rx and Tx interrupts. The examples below
+ affect every queue of the specified interface.
+
+ - Setting rx-usecs and tx-usecs to 80 will limit interrupts to about
+ 12,500 interrupts per second per queue::
+
+ # ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 80
+ tx-usecs 80
+
+For reduced latency:
+ - Disable adaptive ITR and ITR by setting rx-usecs and tx-usecs to 0
+ using ethtool::
+
+ # ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 0
+ tx-usecs 0
+
+Per-queue interrupt rate settings:
+ - The following examples are for queues 1 and 3, but you can adjust other
+ queues.
+
+ - To disable Rx adaptive ITR and set static Rx ITR to 10 microseconds or
+ about 100,000 interrupts/second, for queues 1 and 3::
+
+ # ethtool --per-queue <ethX> queue_mask 0xa --coalesce adaptive-rx off
+ rx-usecs 10
+
+ - To show the current coalesce settings for queues 1 and 3::
+
+ # ethtool --per-queue <ethX> queue_mask 0xa --show-coalesce
+
+
+
+Virtualized Environments
+------------------------
+In addition to the other suggestions in this section, the following may be
+helpful to optimize performance in VMs.
+
+ - Using the appropriate mechanism (vcpupin) in the VM, pin the CPUs to
+ individual LCPUs, making sure to use a set of CPUs included in the
+ device's local_cpulist: /sys/class/net/<ethX>/device/local_cpulist.
+
+ - Configure as many Rx/Tx queues in the VM as available. (See the idpf driver
+ documentation for the number of queues supported.) For example::
+
+ # ethtool -L <virt_interface> rx <max> tx <max>
+
+
+Support
+=======
+For general information, go to the Intel support website at:
+http://www.intel.com/support/
+
+If an issue is identified with the released source code on a supported kernel
+with a supported adapter, email the specific information related to the issue
+to intel-wired-lan@lists.osuosl.org.
+
+
+Trademarks
+==========
+Intel is a trademark or registered trademark of Intel Corporation or its
+subsidiaries in the United States and/or other countries.
+
+* Other names and brands may be claimed as the property of others.
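
A quick cross-check of the numbers used in the Interrupt Rate Limiting and per-queue examples above: a static ITR value in microseconds caps each queue at roughly 1,000,000 / usecs interrupts per second, and queue_mask is a plain bitmap of queue indices (illustrative arithmetic only, no driver interaction)::

    # Sanity-check the ITR figures quoted in this document.
    def itr_to_ints_per_sec(usecs: int) -> int:
        if usecs == 0:
            raise ValueError("rx-usecs/tx-usecs of 0 disables throttling entirely")
        return 1_000_000 // usecs

    print(itr_to_ints_per_sec(80))   # -> 12500, the "about 12,500 interrupts per second" case
    print(itr_to_ints_per_sec(10))   # -> 100000, the per-queue "about 100,000 interrupts/second" case

    queue_mask = 0xa                 # 0b1010 selects queues 1 and 3, as in the --per-queue examples
    print([q for q in range(8) if queue_mask >> q & 1])   # -> [1, 3]
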
diff --git a/Documentation/networking/device_drivers/ethernet/mellanox/mlx5/kconfig.rst b/Documentation/networking/device_drivers/ethernet/mellanox/mlx5/kconfig.rst
index 0a42c3395ffa..20d3b7e87049 100644
--- a/Documentation/networking/device_drivers/ethernet/mellanox/mlx5/kconfig.rst
+++ b/Documentation/networking/device_drivers/ethernet/mellanox/mlx5/kconfig.rst
@@ -67,7 +67,7 @@ Enabling the driver and kconfig options
| Enables :ref:`IPSec XFRM cryptography-offload acceleration <xfrm_device>`.
-**CONFIG_MLX5_EN_MACSEC=(y/n)**
+**CONFIG_MLX5_MACSEC=(y/n)**
| Build support for MACsec cryptography-offload acceleration in the NIC.
diff --git a/Documentation/networking/device_drivers/index.rst b/Documentation/networking/device_drivers/index.rst
index 601eacaf12f3..2f0285a5bc80 100644
--- a/Documentation/networking/device_drivers/index.rst
+++ b/Documentation/networking/device_drivers/index.rst
@@ -8,7 +8,6 @@ Contents:
.. toctree::
:maxdepth: 2
- appletalk/index
atm/index
cable/index
can/index
diff --git a/Documentation/networking/devlink/i40e.rst b/Documentation/networking/devlink/i40e.rst
new file mode 100644
index 000000000000..d3cb5bb5197e
--- /dev/null
+++ b/Documentation/networking/devlink/i40e.rst
@@ -0,0 +1,59 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+====================
+i40e devlink support
+====================
+
+This document describes the devlink features implemented by the ``i40e``
+device driver.
+
+Info versions
+=============
+
+The ``i40e`` driver reports the following versions
+
+.. list-table:: devlink info versions implemented
+ :widths: 5 5 5 90
+
+ * - Name
+ - Type
+ - Example
+ - Description
+ * - ``board.id``
+ - fixed
+ - K15190-000
+ - The Product Board Assembly (PBA) identifier of the board.
+ * - ``fw.mgmt``
+ - running
+ - 9.130
+ - 2-digit version number of the management firmware that controls the
+ PHY, link, etc.
+ * - ``fw.mgmt.api``
+ - running
+ - 1.15
+ - 2-digit version number of the API exported over the AdminQ by the
+ management firmware. Used by the driver to identify what commands
+ are supported.
+ * - ``fw.mgmt.build``
+ - running
+ - 73618
+ - Build number of the source for the management firmware.
+ * - ``fw.undi``
+ - running
+ - 1.3429.0
+ - Version of the Option ROM containing the UEFI driver. The version is
+ reported in ``major.minor.patch`` format. The major version is
+ incremented whenever a major breaking change occurs, or when the
+ minor version would overflow. The minor version is incremented for
+ non-breaking changes and reset to 1 when the major version is
+ incremented. The patch version is normally 0 but is incremented when
+ a fix is delivered as a patch against an older base Option ROM.
+ * - ``fw.psid.api``
+ - running
+ - 9.30
+ - Version defining the format of the flash contents.
+ * - ``fw.bundle_id``
+ - running
+ - 0x8000e5f3
+ - Unique identifier of the firmware image file that was loaded onto
+ the device. Also referred to as the EETRACK identifier of the NVM.
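+
+These versions can be read from userspace with the ``devlink`` utility, for
+example (the PCI address below is only an illustration)::
+
+    # devlink dev info pci/0000:02:00.0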
diff --git a/Documentation/networking/devlink/index.rst b/Documentation/networking/devlink/index.rst
index b49749e2b9a6..e14d7a701b72 100644
--- a/Documentation/networking/devlink/index.rst
+++ b/Documentation/networking/devlink/index.rst
@@ -18,6 +18,34 @@ netlink commands.
Drivers are encouraged to use the devlink instance lock for their own needs.
+Drivers need to be cautious when taking the devlink instance lock and the
+RTNL lock at the same time. The devlink instance lock must be taken first;
+only after that may the RTNL lock be taken.
+
+Nested instances
+----------------
+
+Some objects, like linecards or port functions, may have other devlink
+instances created underneath them. In that case, drivers should make sure
+to respect the following rules:
+
+ - Lock ordering should be maintained. If a driver needs to take the
+ instance lock of both the nested and the parent instance at the same
+ time, the devlink instance lock of the parent instance should be taken
+ first; only then may the instance lock of the nested instance be taken.
+ - Drivers should use object-specific helpers to set up the
+ nested relationship:
+
+ - ``devl_nested_devlink_set()`` - called to set up a devlink -> nested
+ devlink relationship (may be used for multiple nested instances).
+ - ``devl_port_fn_devlink_set()`` - called to set up a port function ->
+ nested devlink relationship.
+ - ``devlink_linecard_nested_dl_set()`` - called to set up a linecard ->
+ nested devlink relationship.
+
+The nested devlink info is exposed to userspace via object-specific
+attributes of the devlink netlink interface.
+
Interface documentation
-----------------------
@@ -52,6 +80,7 @@ parameters, info versions, and other features it supports.
bnxt
etas_es58x
hns3
+ i40e
ionic
ice
mlx4
diff --git a/Documentation/networking/dsa/b53.rst b/Documentation/networking/dsa/b53.rst
index b41637cdb82b..1cb3ff648f88 100644
--- a/Documentation/networking/dsa/b53.rst
+++ b/Documentation/networking/dsa/b53.rst
@@ -52,7 +52,7 @@ VLAN programming would basically change the CPU port's default PVID and make
it untagged, undesirable.
In difference to the configuration described in :ref:`dsa-vlan-configuration`
-the default VLAN 1 has to be removed from the slave interface configuration in
+the default VLAN 1 has to be removed from the user interface configuration in
single port and gateway configuration, while there is no need to add an extra
VLAN configuration in the bridge showcase.
@@ -68,13 +68,13 @@ By default packages are tagged with vid 1:
ip link add link eth0 name eth0.2 type vlan id 2
ip link add link eth0 name eth0.3 type vlan id 3
- # The master interface needs to be brought up before the slave ports.
+ # The conduit interface needs to be brought up before the user ports.
ip link set eth0 up
ip link set eth0.1 up
ip link set eth0.2 up
ip link set eth0.3 up
- # bring up the slave interfaces
+ # bring up the user interfaces
ip link set wan up
ip link set lan1 up
ip link set lan2 up
@@ -113,11 +113,11 @@ bridge
# tag traffic on CPU port
ip link add link eth0 name eth0.1 type vlan id 1
- # The master interface needs to be brought up before the slave ports.
+ # The conduit interface needs to be brought up before the user ports.
ip link set eth0 up
ip link set eth0.1 up
- # bring up the slave interfaces
+ # bring up the user interfaces
ip link set wan up
ip link set lan1 up
ip link set lan2 up
@@ -149,12 +149,12 @@ gateway
ip link add link eth0 name eth0.1 type vlan id 1
ip link add link eth0 name eth0.2 type vlan id 2
- # The master interface needs to be brought up before the slave ports.
+ # The conduit interface needs to be brought up before the user ports.
ip link set eth0 up
ip link set eth0.1 up
ip link set eth0.2 up
- # bring up the slave interfaces
+ # bring up the user interfaces
ip link set wan up
ip link set lan1 up
ip link set lan2 up
diff --git a/Documentation/networking/dsa/bcm_sf2.rst b/Documentation/networking/dsa/bcm_sf2.rst
index dee234039e1e..d2571435696f 100644
--- a/Documentation/networking/dsa/bcm_sf2.rst
+++ b/Documentation/networking/dsa/bcm_sf2.rst
@@ -67,7 +67,7 @@ MDIO indirect accesses
----------------------
Due to a limitation in how Broadcom switches have been designed, external
-Broadcom switches connected to a SF2 require the use of the DSA slave MDIO bus
+Broadcom switches connected to a SF2 require the use of the DSA user MDIO bus
in order to properly configure them. By default, the SF2 pseudo-PHY address, and
an external switch pseudo-PHY address will both be snooping for incoming MDIO
transactions, since they are at the same address (30), resulting in some kind of
diff --git a/Documentation/networking/dsa/configuration.rst b/Documentation/networking/dsa/configuration.rst
index d2934c40f0f1..6cc4ded3cc23 100644
--- a/Documentation/networking/dsa/configuration.rst
+++ b/Documentation/networking/dsa/configuration.rst
@@ -31,38 +31,38 @@ at https://www.kernel.org/pub/linux/utils/net/iproute2/
Through DSA every port of a switch is handled like a normal linux Ethernet
interface. The CPU port is the switch port connected to an Ethernet MAC chip.
-The corresponding linux Ethernet interface is called the master interface.
-All other corresponding linux interfaces are called slave interfaces.
+The corresponding linux Ethernet interface is called the conduit interface.
+All other corresponding linux interfaces are called user interfaces.
-The slave interfaces depend on the master interface being up in order for them
-to send or receive traffic. Prior to kernel v5.12, the state of the master
+The user interfaces depend on the conduit interface being up in order for them
+to send or receive traffic. Prior to kernel v5.12, the state of the conduit
interface had to be managed explicitly by the user. Starting with kernel v5.12,
the behavior is as follows:
-- when a DSA slave interface is brought up, the master interface is
+- when a DSA user interface is brought up, the conduit interface is
automatically brought up.
-- when the master interface is brought down, all DSA slave interfaces are
+- when the conduit interface is brought down, all DSA user interfaces are
automatically brought down.
In this documentation the following Ethernet interfaces are used:
*eth0*
- the master interface
+ the conduit interface
*eth1*
- another master interface
+ another conduit interface
*lan1*
- a slave interface
+ a user interface
*lan2*
- another slave interface
+ another user interface
*lan3*
- a third slave interface
+ a third user interface
*wan*
- A slave interface dedicated for upstream traffic
+ A user interface dedicated for upstream traffic
Further Ethernet interfaces can be configured similar.
The configured IPs and networks are:
@@ -96,11 +96,11 @@ without using a VLAN based configuration.
ip addr add 192.0.2.5/30 dev lan2
ip addr add 192.0.2.9/30 dev lan3
- # For kernels earlier than v5.12, the master interface needs to be
- # brought up manually before the slave ports.
+ # For kernels earlier than v5.12, the conduit interface needs to be
+ # brought up manually before the user ports.
ip link set eth0 up
- # bring up the slave interfaces
+ # bring up the user interfaces
ip link set lan1 up
ip link set lan2 up
ip link set lan3 up
@@ -108,11 +108,11 @@ without using a VLAN based configuration.
*bridge*
.. code-block:: sh
- # For kernels earlier than v5.12, the master interface needs to be
- # brought up manually before the slave ports.
+ # For kernels earlier than v5.12, the conduit interface needs to be
+ # brought up manually before the user ports.
ip link set eth0 up
- # bring up the slave interfaces
+ # bring up the user interfaces
ip link set lan1 up
ip link set lan2 up
ip link set lan3 up
@@ -134,11 +134,11 @@ without using a VLAN based configuration.
*gateway*
.. code-block:: sh
- # For kernels earlier than v5.12, the master interface needs to be
- # brought up manually before the slave ports.
+ # For kernels earlier than v5.12, the conduit interface needs to be
+ # brought up manually before the user ports.
ip link set eth0 up
- # bring up the slave interfaces
+ # bring up the user interfaces
ip link set wan up
ip link set lan1 up
ip link set lan2 up
@@ -178,14 +178,14 @@ configuration.
ip link add link eth0 name eth0.2 type vlan id 2
ip link add link eth0 name eth0.3 type vlan id 3
- # For kernels earlier than v5.12, the master interface needs to be
- # brought up manually before the slave ports.
+ # For kernels earlier than v5.12, the conduit interface needs to be
+ # brought up manually before the user ports.
ip link set eth0 up
ip link set eth0.1 up
ip link set eth0.2 up
ip link set eth0.3 up
- # bring up the slave interfaces
+ # bring up the user interfaces
ip link set lan1 up
ip link set lan2 up
ip link set lan3 up
@@ -221,12 +221,12 @@ configuration.
# tag traffic on CPU port
ip link add link eth0 name eth0.1 type vlan id 1
- # For kernels earlier than v5.12, the master interface needs to be
- # brought up manually before the slave ports.
+ # For kernels earlier than v5.12, the conduit interface needs to be
+ # brought up manually before the user ports.
ip link set eth0 up
ip link set eth0.1 up
- # bring up the slave interfaces
+ # bring up the user interfaces
ip link set lan1 up
ip link set lan2 up
ip link set lan3 up
@@ -261,13 +261,13 @@ configuration.
ip link add link eth0 name eth0.1 type vlan id 1
ip link add link eth0 name eth0.2 type vlan id 2
- # For kernels earlier than v5.12, the master interface needs to be
- # brought up manually before the slave ports.
+ # For kernels earlier than v5.12, the conduit interface needs to be
+ # brought up manually before the user ports.
ip link set eth0 up
ip link set eth0.1 up
ip link set eth0.2 up
- # bring up the slave interfaces
+ # bring up the user interfaces
ip link set wan up
ip link set lan1 up
ip link set lan2 up
@@ -380,22 +380,22 @@ affinities according to the available CPU ports.
Secondly, it is possible to perform load balancing between CPU ports on a per
packet basis, rather than statically assigning user ports to CPU ports.
-This can be achieved by placing the DSA masters under a LAG interface (bonding
+This can be achieved by placing the DSA conduits under a LAG interface (bonding
or team). DSA monitors this operation and creates a mirror of this software LAG
-on the CPU ports facing the physical DSA masters that constitute the LAG slave
+on the CPU ports facing the physical DSA conduits that constitute the LAG slave
devices.
To make use of multiple CPU ports, the firmware (device tree) description of
-the switch must mark all the links between CPU ports and their DSA masters
+the switch must mark all the links between CPU ports and their DSA conduits
using the ``ethernet`` reference/phandle. At startup, only a single CPU port
-and DSA master will be used - the numerically first port from the firmware
+and DSA conduit will be used - the numerically first port from the firmware
description which has an ``ethernet`` property. It is up to the user to
-configure the system for the switch to use other masters.
+configure the system for the switch to use other conduits.
DSA uses the ``rtnl_link_ops`` mechanism (with a "dsa" ``kind``) to allow
-changing the DSA master of a user port. The ``IFLA_DSA_MASTER`` u32 netlink
-attribute contains the ifindex of the master device that handles each slave
-device. The DSA master must be a valid candidate based on firmware node
+changing the DSA conduit of a user port. The ``IFLA_DSA_CONDUIT`` u32 netlink
+attribute contains the ifindex of the conduit device that handles each user
+device. The DSA conduit must be a valid candidate based on firmware node
information, or a LAG interface which contains only slaves which are valid
candidates.
@@ -403,7 +403,7 @@ Using iproute2, the following manipulations are possible:
.. code-block:: sh
- # See the DSA master in current use
+ # See the DSA conduit in current use
ip -d link show dev swp0
(...)
dsa master eth0
@@ -414,7 +414,7 @@ Using iproute2, the following manipulations are possible:
ip link set swp2 type dsa master eth1
ip link set swp3 type dsa master eth0
- # CPU ports in LAG, using explicit assignment of the DSA master
+ # CPU ports in LAG, using explicit assignment of the DSA conduit
ip link add bond0 type bond mode balance-xor && ip link set bond0 up
ip link set eth1 down && ip link set eth1 master bond0
ip link set swp0 type dsa master bond0
@@ -426,7 +426,7 @@ Using iproute2, the following manipulations are possible:
(...)
dsa master bond0
- # CPU ports in LAG, relying on implicit migration of the DSA master
+ # CPU ports in LAG, relying on implicit migration of the DSA conduit
ip link add bond0 type bond mode balance-xor && ip link set bond0 up
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
@@ -435,24 +435,24 @@ Using iproute2, the following manipulations are possible:
dsa master bond0
Notice that in the case of CPU ports under a LAG, the use of the
-``IFLA_DSA_MASTER`` netlink attribute is not strictly needed, but rather, DSA
-reacts to the ``IFLA_MASTER`` attribute change of its present master (``eth0``)
+``IFLA_DSA_CONDUIT`` netlink attribute is not strictly needed, but rather, DSA
+reacts to the ``IFLA_MASTER`` attribute change of its present conduit (``eth0``)
and migrates all user ports to the new upper of ``eth0``, ``bond0``. Similarly,
when ``bond0`` is destroyed using ``RTM_DELLINK``, DSA migrates the user ports
-that were assigned to this interface to the first physical DSA master which is
+that were assigned to this interface to the first physical DSA conduit which is
eligible, based on the firmware description (it effectively reverts to the
startup configuration).
In a setup with more than 2 physical CPU ports, it is therefore possible to mix
-static user to CPU port assignment with LAG between DSA masters. It is not
-possible to statically assign a user port towards a DSA master that has any
-upper interfaces (this includes LAG devices - the master must always be the LAG
+static user to CPU port assignment with LAG between DSA conduits. It is not
+possible to statically assign a user port towards a DSA conduit that has any
+upper interfaces (this includes LAG devices - the conduit must always be the LAG
in this case).
-Live changing of the DSA master (and thus CPU port) affinity of a user port is
+Live changing of the DSA conduit (and thus CPU port) affinity of a user port is
permitted, in order to allow dynamic redistribution in response to traffic.
-Physical DSA masters are allowed to join and leave at any time a LAG interface
-used as a DSA master; however, DSA will reject a LAG interface as a valid
-candidate for being a DSA master unless it has at least one physical DSA master
+Physical DSA conduits are allowed to join and leave, at any time, a LAG
+interface used as a DSA conduit; however, DSA will reject a LAG interface
+as a valid candidate for being a DSA conduit unless it has at least one
+physical DSA conduit
as a slave device.
diff --git a/Documentation/networking/dsa/dsa.rst b/Documentation/networking/dsa/dsa.rst
index a94ddf83348a..7b2e69cd7ef0 100644
--- a/Documentation/networking/dsa/dsa.rst
+++ b/Documentation/networking/dsa/dsa.rst
@@ -25,7 +25,7 @@ presence of a management port connected to an Ethernet controller capable of
receiving Ethernet frames from the switch. This is a very common setup for all
kinds of Ethernet switches found in Small Home and Office products: routers,
gateways, or even top-of-rack switches. This host Ethernet controller will
-be later referred to as "master" and "cpu" in DSA terminology and code.
+be later referred to as "conduit" and "cpu" in DSA terminology and code.
The D in DSA stands for Distributed, because the subsystem has been designed
with the ability to configure and manage cascaded switches on top of each other
@@ -35,7 +35,7 @@ of multiple switches connected to each other is called a "switch tree".
For each front-panel port, DSA creates specialized network devices which are
used as controlling and data-flowing endpoints for use by the Linux networking
-stack. These specialized network interfaces are referred to as "slave" network
+stack. These specialized network interfaces are referred to as "user" network
interfaces in DSA terminology and code.
The ideal case for using DSA is when an Ethernet switch supports a "switch tag"
@@ -56,12 +56,16 @@ Note that DSA does not currently create network interfaces for the "cpu" and
- the "cpu" port is the Ethernet switch facing side of the management
controller, and as such, would create a duplication of feature, since you
- would get two interfaces for the same conduit: master netdev, and "cpu" netdev
+ would get two interfaces for the same conduit: conduit netdev, and "cpu" netdev
- the "dsa" port(s) are just conduits between two or more switches, and as such
cannot really be used as proper network interfaces either, only the
downstream, or the top-most upstream interface makes sense with that model
+NB: for the past 15 years, the DSA subsystem had been making use of the terms
+"master" (rather than "conduit") and "slave" (rather than "user"). These terms
+have been removed from the DSA codebase and phased out of the uAPI.
+
Switch tagging protocols
------------------------
@@ -80,14 +84,14 @@ methods of the ``struct dsa_device_ops`` structure, which are detailed below.
Tagging protocols generally fall in one of three categories:
1. The switch-specific frame header is located before the Ethernet header,
- shifting to the right (from the perspective of the DSA master's frame
+ shifting to the right (from the perspective of the DSA conduit's frame
parser) the MAC DA, MAC SA, EtherType and the entire L2 payload.
2. The switch-specific frame header is located before the EtherType, keeping
- the MAC DA and MAC SA in place from the DSA master's perspective, but
+ the MAC DA and MAC SA in place from the DSA conduit's perspective, but
shifting the 'real' EtherType and L2 payload to the right.
3. The switch-specific frame header is located at the tail of the packet,
keeping all frame headers in place and not altering the view of the packet
- that the DSA master's frame parser has.
+ that the DSA conduit's frame parser has.
A tagging protocol may tag all packets with switch tags of the same length, or
the tag length might vary (for example packets with PTP timestamps might
@@ -95,7 +99,7 @@ require an extended switch tag, or there might be one tag length on TX and a
different one on RX). Either way, the tagging protocol driver must populate the
``struct dsa_device_ops::needed_headroom`` and/or ``struct dsa_device_ops::needed_tailroom``
with the length in octets of the longest switch frame header/trailer. The DSA
-framework will automatically adjust the MTU of the master interface to
+framework will automatically adjust the MTU of the conduit interface to
accommodate for this extra size in order for DSA user ports to support the
standard MTU (L2 payload length) of 1500 octets. The ``needed_headroom`` and
``needed_tailroom`` properties are also used to request from the network stack,
@@ -140,18 +144,18 @@ adding or removing the ``ETH_P_EDSA`` EtherType and some padding octets).
It is possible to construct cascaded setups of DSA switches even if their
tagging protocols are not compatible with one another. In this case, there are
no DSA links in this fabric, and each switch constitutes a disjoint DSA switch
-tree. The DSA links are viewed as simply a pair of a DSA master (the out-facing
+tree. The DSA links are viewed as simply a pair of a DSA conduit (the out-facing
port of the upstream DSA switch) and a CPU port (the in-facing port of the
downstream DSA switch).
The tagging protocol of the attached DSA switch tree can be viewed through the
-``dsa/tagging`` sysfs attribute of the DSA master::
+``dsa/tagging`` sysfs attribute of the DSA conduit::
cat /sys/class/net/eth0/dsa/tagging
If the hardware and driver are capable, the tagging protocol of the DSA switch
tree can be changed at runtime. This is done by writing the new tagging
-protocol name to the same sysfs device attribute as above (the DSA master and
+protocol name to the same sysfs device attribute as above (the DSA conduit and
all attached switch ports must be down while doing this).
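+
+A hedged sketch of such a runtime change (``eth0``, ``lan1`` and the
+``ocelot-8021q`` protocol name are placeholders; every user port of the
+tree must be brought down, not just one)::
+
+    ip link set lan1 down
+    ip link set eth0 down
+    echo ocelot-8021q > /sys/class/net/eth0/dsa/tagging
+    ip link set eth0 up
+    ip link set lan1 up
+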
It is desirable that all tagging protocols are testable with the ``dsa_loop``
@@ -159,7 +163,7 @@ mockup driver, which can be attached to any network interface. The goal is that
any network interface should be capable of transmitting the same packet in the
same way, and the tagger should decode the same received packet in the same way
regardless of the driver used for the switch control path, and the driver used
-for the DSA master.
+for the DSA conduit.
The transmission of a packet goes through the tagger's ``xmit`` function.
The passed ``struct sk_buff *skb`` has ``skb->data`` pointing at
@@ -183,44 +187,44 @@ virtual DSA user network interface corresponding to the physical front-facing
switch port that the packet was received on.
Since tagging protocols in category 1 and 2 break software (and most often also
-hardware) packet dissection on the DSA master, features such as RPS (Receive
-Packet Steering) on the DSA master would be broken. The DSA framework deals
+hardware) packet dissection on the DSA conduit, features such as RPS (Receive
+Packet Steering) on the DSA conduit would be broken. The DSA framework deals
with this by hooking into the flow dissector and shifting the offset at which
-the IP header is to be found in the tagged frame as seen by the DSA master.
+the IP header is to be found in the tagged frame as seen by the DSA conduit.
This behavior is automatic based on the ``overhead`` value of the tagging
protocol. If not all packets are of equal size, the tagger can implement the
``flow_dissect`` method of the ``struct dsa_device_ops`` and override this
default behavior by specifying the correct offset incurred by each individual
RX packet. Tail taggers do not cause issues to the flow dissector.
-Checksum offload should work with category 1 and 2 taggers when the DSA master
+Checksum offload should work with category 1 and 2 taggers when the DSA conduit
driver declares NETIF_F_HW_CSUM in vlan_features and looks at csum_start and
csum_offset. For those cases, DSA will shift the checksum start and offset by
-the tag size. If the DSA master driver still uses the legacy NETIF_F_IP_CSUM
+the tag size. If the DSA conduit driver still uses the legacy NETIF_F_IP_CSUM
or NETIF_F_IPV6_CSUM in vlan_features, the offload might only work if the
offload hardware already expects that specific tag (perhaps due to matching
-vendors). DSA slaves inherit those flags from the master port, and it is up to
+vendors). DSA user ports inherit those flags from the conduit, and it is up to
the driver to correctly fall back to software checksum when the IP header is not
where the hardware expects. If that check is ineffective, the packets might go
to the network without a proper checksum (the checksum field will have the
pseudo IP header sum). For category 3, when the offload hardware does not
already expect the switch tag in use, the checksum must be calculated before any
-tag is inserted (i.e. inside the tagger). Otherwise, the DSA master would
+tag is inserted (i.e. inside the tagger). Otherwise, the DSA conduit would
include the tail tag in the (software or hardware) checksum calculation. Then,
when the tag gets stripped by the switch during transmission, it will leave an
incorrect IP checksum in place.
Due to various reasons (most common being category 1 taggers being associated
-with DSA-unaware masters, mangling what the master perceives as MAC DA), the
-tagging protocol may require the DSA master to operate in promiscuous mode, to
+with DSA-unaware conduits, mangling what the conduit perceives as MAC DA), the
+tagging protocol may require the DSA conduit to operate in promiscuous mode, to
receive all frames regardless of the value of the MAC DA. This can be done by
-setting the ``promisc_on_master`` property of the ``struct dsa_device_ops``.
-Note that this assumes a DSA-unaware master driver, which is the norm.
+setting the ``promisc_on_conduit`` property of the ``struct dsa_device_ops``.
+Note that this assumes a DSA-unaware conduit driver, which is the norm.
-Master network devices
-----------------------
+Conduit network devices
+-----------------------
-Master network devices are regular, unmodified Linux network device drivers for
+Conduit network devices are regular, unmodified Linux network device drivers for
the CPU/management Ethernet interface. Such a driver might occasionally need to
know whether DSA is enabled (e.g.: to enable/disable specific offload features),
but the DSA subsystem has been proven to work with industry standard drivers:
@@ -232,14 +236,14 @@ Ethernet switch.
Networking stack hooks
----------------------
-When a master netdev is used with DSA, a small hook is placed in the
+When a conduit netdev is used with DSA, a small hook is placed in the
networking stack in order to have the DSA subsystem process the Ethernet
switch specific tagging protocol. DSA accomplishes this by registering a
specific (and fake) Ethernet type (later becoming ``skb->protocol``) with the
networking stack, this is also known as a ``ptype`` or ``packet_type``. A typical
Ethernet Frame receive sequence looks like this:
-Master network device (e.g.: e1000e):
+Conduit network device (e.g.: e1000e):
1. Receive interrupt fires:
@@ -269,16 +273,16 @@ Master network device (e.g.: e1000e):
- inspect and strip switch tag protocol to determine originating port
- locate per-port network device
- - invoke ``eth_type_trans()`` with the DSA slave network device
+ - invoke ``eth_type_trans()`` with the DSA user network device
- invoked ``netif_receive_skb()``
-Past this point, the DSA slave network devices get delivered regular Ethernet
+Past this point, the DSA user network devices get delivered regular Ethernet
frames that can be processed by the networking stack.
-Slave network devices
----------------------
+User network devices
+--------------------
-Slave network devices created by DSA are stacked on top of their master network
+User network devices created by DSA are stacked on top of their conduit network
device, each of these network interfaces will be responsible for being a
controlling and data-flowing end-point for each front-panel port of the switch.
These interfaces are specialized in order to:
@@ -289,31 +293,31 @@ These interfaces are specialized in order to:
Wake-on-LAN, register dumps...
- manage external/internal PHY: link, auto-negotiation, etc.
-These slave network devices have custom net_device_ops and ethtool_ops function
+These user network devices have custom net_device_ops and ethtool_ops function
pointers which allow DSA to introduce a level of layering between the networking
stack/ethtool and the switch driver implementation.
-Upon frame transmission from these slave network devices, DSA will look up which
+Upon frame transmission from these user network devices, DSA will look up which
switch tagging protocol is currently registered with these network devices and
invoke a specific transmit routine which takes care of adding the relevant
switch tag in the Ethernet frames.
-These frames are then queued for transmission using the master network device
+These frames are then queued for transmission using the conduit network device
``ndo_start_xmit()`` function. Since they contain the appropriate switch tag, the
Ethernet switch will be able to process these incoming frames from the
management interface and deliver them to the physical switch port.
When using multiple CPU ports, it is possible to stack a LAG (bonding/team)
-device between the DSA slave devices and the physical DSA masters. The LAG
-device is thus also a DSA master, but the LAG slave devices continue to be DSA
-masters as well (just with no user port assigned to them; this is needed for
-recovery in case the LAG DSA master disappears). Thus, the data path of the LAG
-DSA master is used asymmetrically. On RX, the ``ETH_P_XDSA`` handler, which
-calls ``dsa_switch_rcv()``, is invoked early (on the physical DSA master;
-LAG slave). Therefore, the RX data path of the LAG DSA master is not used.
-On the other hand, TX takes place linearly: ``dsa_slave_xmit`` calls
-``dsa_enqueue_skb``, which calls ``dev_queue_xmit`` towards the LAG DSA master.
-The latter calls ``dev_queue_xmit`` towards one physical DSA master or the
+device between the DSA user devices and the physical DSA conduits. The LAG
+device is thus also a DSA conduit, but the LAG slave devices continue to be DSA
+conduits as well (just with no user port assigned to them; this is needed for
+recovery in case the LAG DSA conduit disappears). Thus, the data path of the LAG
+DSA conduit is used asymmetrically. On RX, the ``ETH_P_XDSA`` handler, which
+calls ``dsa_switch_rcv()``, is invoked early (on the physical DSA conduit;
+LAG slave). Therefore, the RX data path of the LAG DSA conduit is not used.
+On the other hand, TX takes place linearly: ``dsa_user_xmit`` calls
+``dsa_enqueue_skb``, which calls ``dev_queue_xmit`` towards the LAG DSA conduit.
+The latter calls ``dev_queue_xmit`` towards one physical DSA conduit or the
other, and in both cases, the packet exits the system through a hardware path
towards the switch.
@@ -352,11 +356,11 @@ perspective::
|| swp0 | | swp1 | | swp2 | | swp3 ||
++------+-+------+-+------+-+------++
-Slave MDIO bus
---------------
+User MDIO bus
+-------------
-In order to be able to read to/from a switch PHY built into it, DSA creates a
-slave MDIO bus which allows a specific switch driver to divert and intercept
+In order to be able to read to/from a switch PHY built into it, DSA creates a
+user MDIO bus which allows a specific switch driver to divert and intercept
MDIO reads/writes towards specific PHY addresses. In most MDIO-connected
switches, these functions would utilize direct or indirect PHY addressing mode
to return standard MII registers from the switch builtin PHYs, allowing the PHY
@@ -364,7 +368,7 @@ library and/or to return link status, link partner pages, auto-negotiation
results, etc.
For Ethernet switches which have both external and internal MDIO buses, the
-slave MII bus can be utilized to mux/demux MDIO reads and writes towards either
+user MII bus can be utilized to mux/demux MDIO reads and writes towards either
internal or external MDIO devices this switch might be connected to: internal
PHYs, external PHYs, or even external switches.
@@ -381,10 +385,10 @@ DSA data structures are defined in ``include/net/dsa.h`` as well as
- ``dsa_platform_data``: platform device configuration data which can reference
a collection of dsa_chip_data structures if multiple switches are cascaded,
- the master network device this switch tree is attached to needs to be
+ the conduit network device this switch tree is attached to needs to be
referenced
-- ``dsa_switch_tree``: structure assigned to the master network device under
+- ``dsa_switch_tree``: structure assigned to the conduit network device under
``dsa_ptr``, this structure references a dsa_platform_data structure as well as
the tagging protocol supported by the switch tree, and which receive/transmit
function hooks should be invoked, information about the directly attached
@@ -392,7 +396,7 @@ DSA data structures are defined in ``include/net/dsa.h`` as well as
referenced to address individual switches in the tree.
- ``dsa_switch``: structure describing a switch device in the tree, referencing
- a ``dsa_switch_tree`` as a backpointer, slave network devices, master network
+ a ``dsa_switch_tree`` as a backpointer, user network devices, conduit network
device, and a reference to the backing``dsa_switch_ops``
- ``dsa_switch_ops``: structure referencing function pointers, see below for a
@@ -404,7 +408,7 @@ Design limitations
Lack of CPU/DSA network devices
-------------------------------
-DSA does not currently create slave network devices for the CPU or DSA ports, as
+DSA does not currently create user network devices for the CPU or DSA ports, as
described before. This might be an issue in the following cases:
- inability to fetch switch CPU port statistics counters using ethtool, which
@@ -419,7 +423,7 @@ described before. This might be an issue in the following cases:
Common pitfalls using DSA setups
--------------------------------
-Once a master network device is configured to use DSA (dev->dsa_ptr becomes
+Once a conduit network device is configured to use DSA (dev->dsa_ptr becomes
non-NULL), and the switch behind it expects a tagging protocol, this network
interface can only exclusively be used as a conduit interface. Sending packets
directly through this interface (e.g.: opening a socket using this interface)
@@ -440,7 +444,7 @@ DSA currently leverages the following subsystems:
MDIO/PHY library
----------------
-Slave network devices exposed by DSA may or may not be interfacing with PHY
+User network devices exposed by DSA may or may not be interfacing with PHY
devices (``struct phy_device`` as defined in ``include/linux/phy.h)``, but the DSA
subsystem deals with all possible combinations:
@@ -450,7 +454,7 @@ subsystem deals with all possible combinations:
- special, non-autonegotiated or non MDIO-managed PHY devices: SFPs, MoCA; a.k.a
fixed PHYs
-The PHY configuration is done by the ``dsa_slave_phy_setup()`` function and the
+The PHY configuration is done by the ``dsa_user_phy_setup()`` function and the
logic basically looks like this:
- if Device Tree is used, the PHY device is looked up using the standard
@@ -463,7 +467,7 @@ logic basically looks like this:
and connected transparently using the special fixed MDIO bus driver
- finally, if the PHY is built into the switch, as is very common with
- standalone switch packages, the PHY is probed using the slave MII bus created
+ standalone switch packages, the PHY is probed using the user MII bus created
by DSA
@@ -472,7 +476,7 @@ SWITCHDEV
DSA directly utilizes SWITCHDEV when interfacing with the bridge layer, and
more specifically with its VLAN filtering portion when configuring VLANs on top
-of per-port slave network devices. As of today, the only SWITCHDEV objects
+of per-port user network devices. As of today, the only SWITCHDEV objects
supported by DSA are the FDB and VLAN objects.
Devlink
@@ -589,8 +593,8 @@ is torn down when the first switch unregisters.
It is mandatory for DSA switch drivers to implement the ``shutdown()`` callback
of their respective bus, and call ``dsa_switch_shutdown()`` from it (a minimal
version of the full teardown performed by ``dsa_unregister_switch()``).
-The reason is that DSA keeps a reference on the master net device, and if the
-driver for the master device decides to unbind on shutdown, DSA's reference
+The reason is that DSA keeps a reference on the conduit net device, and if the
+driver for the conduit device decides to unbind on shutdown, DSA's reference
will block that operation from finalizing.
Either ``dsa_switch_shutdown()`` or ``dsa_unregister_switch()`` must be called,
@@ -615,7 +619,7 @@ Switch configuration
tag formats.
- ``change_tag_protocol``: when the default tagging protocol has compatibility
- problems with the master or other issues, the driver may support changing it
+ problems with the conduit or other issues, the driver may support changing it
at runtime, either through a device tree property or through sysfs. In that
case, further calls to ``get_tag_protocol`` should report the protocol in
current use.
@@ -643,22 +647,22 @@ Switch configuration
PHY cannot be found. In this case, probing of the DSA switch continues
without that particular port.
-- ``port_change_master``: method through which the affinity (association used
+- ``port_change_conduit``: method through which the affinity (association used
for traffic termination purposes) between a user port and a CPU port can be
changed. By default all user ports from a tree are assigned to the first
available CPU port that makes sense for them (most of the times this means
the user ports of a tree are all assigned to the same CPU port, except for H
topologies as described in commit 2c0b03258b8b). The ``port`` argument
- represents the index of the user port, and the ``master`` argument represents
- the new DSA master ``net_device``. The CPU port associated with the new
- master can be retrieved by looking at ``struct dsa_port *cpu_dp =
- master->dsa_ptr``. Additionally, the master can also be a LAG device where
- all the slave devices are physical DSA masters. LAG DSA masters also have a
- valid ``master->dsa_ptr`` pointer, however this is not unique, but rather a
- duplicate of the first physical DSA master's (LAG slave) ``dsa_ptr``. In case
- of a LAG DSA master, a further call to ``port_lag_join`` will be emitted
+ represents the index of the user port, and the ``conduit`` argument represents
+ the new DSA conduit ``net_device``. The CPU port associated with the new
+ conduit can be retrieved by looking at ``struct dsa_port *cpu_dp =
+ conduit->dsa_ptr``. Additionally, the conduit can also be a LAG device where
+ all the slave devices are physical DSA conduits. LAG DSA conduits also have a
+ valid ``conduit->dsa_ptr`` pointer, however this is not unique, but rather a
+ duplicate of the first physical DSA conduit's (LAG slave) ``dsa_ptr``. In case
+ of a LAG DSA conduit, a further call to ``port_lag_join`` will be emitted
separately for the physical CPU ports associated with the physical DSA
- masters, requesting them to create a hardware LAG associated with the LAG
+ conduits, requesting them to create a hardware LAG associated with the LAG
interface.
PHY devices and link management
@@ -670,16 +674,16 @@ PHY devices and link management
should return a 32-bit bitmask of "flags" that is private between the switch
driver and the Ethernet PHY driver in ``drivers/net/phy/\*``.
-- ``phy_read``: Function invoked by the DSA slave MDIO bus when attempting to read
+- ``phy_read``: Function invoked by the DSA user MDIO bus when attempting to read
the switch port MDIO registers. If unavailable, return 0xffff for each read.
For builtin switch Ethernet PHYs, this function should allow reading the link
status, auto-negotiation results, link partner pages, etc.
-- ``phy_write``: Function invoked by the DSA slave MDIO bus when attempting to write
+- ``phy_write``: Function invoked by the DSA user MDIO bus when attempting to write
to the switch port MDIO registers. If unavailable return a negative error
code.
-- ``adjust_link``: Function invoked by the PHY library when a slave network device
+- ``adjust_link``: Function invoked by the PHY library when a user network device
is attached to a PHY device. This function is responsible for appropriately
configuring the switch port link parameters: speed, duplex, pause based on
what the ``phy_device`` is providing.
@@ -698,14 +702,14 @@ Ethtool operations
typically return statistics strings, private flags strings, etc.
- ``get_ethtool_stats``: ethtool function used to query per-port statistics and
- return their values. DSA overlays slave network devices general statistics:
+ return their values. DSA overlays user network devices general statistics:
RX/TX counters from the network device, with switch driver specific statistics
per port
- ``get_sset_count``: ethtool function used to query the number of statistics items
- ``get_wol``: ethtool function used to obtain Wake-on-LAN settings per-port, this
- function may for certain implementations also query the master network device
+ function may for certain implementations also query the conduit network device
Wake-on-LAN settings if this interface needs to participate in Wake-on-LAN
- ``set_wol``: ethtool function used to configure Wake-on-LAN settings per-port,
@@ -747,13 +751,13 @@ Power management
should resume all Ethernet switch activities and re-configure the switch to be
in a fully active state
-- ``port_enable``: function invoked by the DSA slave network device ndo_open
+- ``port_enable``: function invoked by the DSA user network device ndo_open
function when a port is administratively brought up, this function should
fully enable a given switch port. DSA takes care of marking the port with
``BR_STATE_BLOCKING`` if the port is a bridge member, or ``BR_STATE_FORWARDING`` if it
was not, and propagating these changes down to the hardware
-- ``port_disable``: function invoked by the DSA slave network device ndo_close
+- ``port_disable``: function invoked by the DSA user network device ndo_close
function when a port is administratively brought down, this function should
fully disable a given switch port. DSA takes care of marking the port with
``BR_STATE_DISABLED`` and propagating changes to the hardware if this port is
diff --git a/Documentation/networking/dsa/lan9303.rst b/Documentation/networking/dsa/lan9303.rst
index e3c820db28ad..ab81b4e0139e 100644
--- a/Documentation/networking/dsa/lan9303.rst
+++ b/Documentation/networking/dsa/lan9303.rst
@@ -4,7 +4,7 @@ LAN9303 Ethernet switch driver
The LAN9303 is a three port 10/100 Mbps ethernet switch with integrated phys for
the two external ethernet ports. The third port is an RMII/MII interface to a
-host master network interface (e.g. fixed link).
+host conduit network interface (e.g. fixed link).
Driver details
diff --git a/Documentation/networking/dsa/sja1105.rst b/Documentation/networking/dsa/sja1105.rst
index e0219c1452ab..8ab60eef07d4 100644
--- a/Documentation/networking/dsa/sja1105.rst
+++ b/Documentation/networking/dsa/sja1105.rst
@@ -79,7 +79,7 @@ The hardware tags all traffic internally with a port-based VLAN (pvid), or it
decodes the VLAN information from the 802.1Q tag. Advanced VLAN classification
is not possible. Once attributed a VLAN tag, frames are checked against the
port's membership rules and dropped at ingress if they don't match any VLAN.
-This behavior is available when switch ports are enslaved to a bridge with
+This behavior is available when switch ports join a bridge with
``vlan_filtering 1``.
Normally the hardware is not configurable with respect to VLAN awareness, but
@@ -122,7 +122,7 @@ on egress. Using ``vlan_filtering=1``, the behavior is the other way around:
offloaded flows can be steered to TX queues based on the VLAN PCP, but the DSA
net devices are no longer able to do that. To inject frames into a hardware TX
queue with VLAN awareness active, it is necessary to create a VLAN
-sub-interface on the DSA master port, and send normal (0x8100) VLAN-tagged
+sub-interface on the DSA conduit port, and send normal (0x8100) VLAN-tagged frames
towards the switch, with the VLAN PCP bits set appropriately.
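+
+A hedged sketch of such a sub-interface (the interface name, VLAN ID and
+the priority-to-PCP ``egress-qos-map`` values are only placeholders)::
+
+    ip link add link eth0 name eth0.100 type vlan id 100 egress-qos-map 0:7
+    ip link set eth0.100 up
+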
Management traffic (having DMAC 01-80-C2-xx-xx-xx or 01-19-1B-xx-xx-xx) is the
@@ -389,7 +389,7 @@ MDIO bus and PHY management
The SJA1105 does not have an MDIO bus and does not perform in-band AN either.
Therefore there is no link state notification coming from the switch device.
A board would need to hook up the PHYs connected to the switch to any other
-MDIO bus available to Linux within the system (e.g. to the DSA master's MDIO
+MDIO bus available to Linux within the system (e.g. to the DSA conduit's MDIO
bus). Link state management then works by the driver manually keeping in sync
(over SPI commands) the MAC link speed with the settings negotiated by the PHY.
diff --git a/Documentation/networking/filter.rst b/Documentation/networking/filter.rst
index f69da5074860..7d8c5380492f 100644
--- a/Documentation/networking/filter.rst
+++ b/Documentation/networking/filter.rst
@@ -650,8 +650,8 @@ before a conversion to the new layout is being done behind the scenes!
Currently, the classic BPF format is being used for JITing on most
32-bit architectures, whereas x86-64, aarch64, s390x, powerpc64,
-sparc64, arm32, riscv64, riscv32 perform JIT compilation from eBPF
-instruction set.
+sparc64, arm32, riscv64, riscv32, loongarch64 perform JIT compilation
+from eBPF instruction set.
Testing
-------
diff --git a/Documentation/networking/index.rst b/Documentation/networking/index.rst
index 5b75c3f7a137..683eb42309cc 100644
--- a/Documentation/networking/index.rst
+++ b/Documentation/networking/index.rst
@@ -59,7 +59,6 @@ Contents:
gtp
ila
ioam6-sysctl
- ipddp
ip_dynaddr
ipsec
ip-sysctl
@@ -107,6 +106,7 @@ Contents:
sysfs-tagging
tc-actions-env-rules
tc-queue-filters
+ tcp_ao
tcp-thin
team
timestamping
diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
index a66054d0763a..4dfe0d9a57bb 100644
--- a/Documentation/networking/ip-sysctl.rst
+++ b/Documentation/networking/ip-sysctl.rst
@@ -745,6 +745,13 @@ tcp_comp_sack_nr - INTEGER
Default : 44
+tcp_backlog_ack_defer - BOOLEAN
+ If set, a user thread processing the socket backlog tries to send
+ one ACK for the whole queue. This helps to avoid potential
+ long latencies at the end of a TCP socket syscall.
+
+ Default : true
+
tcp_slow_start_after_idle - BOOLEAN
If set, provide RFC2861 behavior and time out the congestion
window after an idle period. An idle period is defined at
@@ -1176,6 +1183,19 @@ tcp_plb_cong_thresh - INTEGER
Default: 128
+tcp_pingpong_thresh - INTEGER
+ The number of estimated data replies sent for estimated incoming data
+ requests that must happen before TCP considers that a connection is a
+ "ping-pong" (request-response) connection for which delayed
+ acknowledgments can provide benefits.
+
+ This threshold is 1 by default, but some applications may need a higher
+ threshold for optimal performance.
+
+ Possible Values: 1 - 255
+
+ Default: 1
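+
+ A hedged usage example (the value shown is only an illustration)::
+
+     sysctl -w net.ipv4.tcp_pingpong_thresh=3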
+
UDP variables
=============
@@ -2304,6 +2324,17 @@ accept_ra_pinfo - BOOLEAN
- enabled if accept_ra is enabled.
- disabled if accept_ra is disabled.
+ra_honor_pio_life - BOOLEAN
+ Whether to use RFC4862 Section 5.5.3e to determine the valid
+ lifetime of an address matching a prefix sent in a Router
+ Advertisement Prefix Information Option.
+
+ - If enabled, the PIO valid lifetime will always be honored.
+ - If disabled, RFC4862 section 5.5.3e is used to determine
+ the valid lifetime of the address.
+
+ Default: 0 (disabled)
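+
+ A hedged usage example (assuming the usual per-interface ``net.ipv6.conf``
+ path; the interface name is only an illustration)::
+
+     sysctl -w net.ipv6.conf.eth0.ra_honor_pio_life=1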
+
accept_ra_rt_info_min_plen - INTEGER
Minimum prefix length of Route Information in RA.
@@ -2471,12 +2502,18 @@ use_tempaddr - INTEGER
* -1 (for point-to-point devices and loopback devices)
temp_valid_lft - INTEGER
- valid lifetime (in seconds) for temporary addresses.
+ valid lifetime (in seconds) for temporary addresses. If less than the
+ minimum required lifetime (typically 5 seconds), temporary addresses
+ will not be created.
Default: 172800 (2 days)
temp_prefered_lft - INTEGER
- Preferred lifetime (in seconds) for temporary addresses.
+ Preferred lifetime (in seconds) for temporary addresses. If
+ temp_prefered_lft is less than the minimum required lifetime (typically
+ 5 seconds), the preferred lifetime is the minimum required. If
+ temp_prefered_lft is greater than temp_valid_lft, the preferred lifetime
+ is temp_valid_lft.
Default: 86400 (1 day)
diff --git a/Documentation/networking/ipddp.rst b/Documentation/networking/ipddp.rst
deleted file mode 100644
index be7091b77927..000000000000
--- a/Documentation/networking/ipddp.rst
+++ /dev/null
@@ -1,78 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0
-
-=========================================================
-AppleTalk-IP Decapsulation and AppleTalk-IP Encapsulation
-=========================================================
-
-Documentation ipddp.c
-
-This file is written by Jay Schulist <jschlst@samba.org>
-
-Introduction
-------------
-
-AppleTalk-IP (IPDDP) is the method computers connected to AppleTalk
-networks can use to communicate via IP. AppleTalk-IP is simply IP datagrams
-inside AppleTalk packets.
-
-Through this driver you can either allow your Linux box to communicate
-IP over an AppleTalk network or you can provide IP gatewaying functions
-for your AppleTalk users.
-
-You can currently encapsulate or decapsulate AppleTalk-IP on LocalTalk,
-EtherTalk and PPPTalk. The only limit on the protocol is that of what
-kernel AppleTalk layer and drivers are available.
-
-Each mode requires its own user space software.
-
-Compiling AppleTalk-IP Decapsulation/Encapsulation
-==================================================
-
-AppleTalk-IP decapsulation needs to be compiled into your kernel. You
-will need to turn on AppleTalk-IP driver support. Then you will need to
-select ONE of the two options; IP to AppleTalk-IP encapsulation support or
-AppleTalk-IP to IP decapsulation support. If you compile the driver
-statically you will only be able to use the driver for the function you have
-enabled in the kernel. If you compile the driver as a module you can
-select what mode you want it to run in via a module loading param.
-ipddp_mode=1 for AppleTalk-IP encapsulation and ipddp_mode=2 for
-AppleTalk-IP to IP decapsulation.
-
-Basic instructions for user space tools
-=======================================
-
-I will briefly describe the operation of the tools, but you will
-need to consult the supporting documentation for each set of tools.
-
-Decapsulation - You will need to download a software package called
-MacGate. In this distribution there will be a tool called MacRoute
-which enables you to add routes to the kernel for your Macs by hand.
-Also the tool MacRegGateWay is included to register the
-proper IP Gateway and IP addresses for your machine. Included in this
-distribution is a patch to netatalk-1.4b2+asun2.0a17.2 (available from
-ftp.u.washington.edu/pub/user-supported/asun/) this patch is optional
-but it allows automatic adding and deleting of routes for Macs. (Handy
-for locations with large Mac installations)
-
-Encapsulation - You will need to download a software daemon called ipddpd.
-This software expects there to be an AppleTalk-IP gateway on the network.
-You will also need to add the proper routes to route your Linux box's IP
-traffic out the ipddp interface.
-
-Common Uses of ipddp.c
-----------------------
-Of course AppleTalk-IP decapsulation and encapsulation, but specifically
-decapsulation is being used most for connecting LocalTalk networks to
-IP networks. Although it has been used on EtherTalk networks to allow
-Macs that are only able to tunnel IP over EtherTalk.
-
-Encapsulation has been used to allow a Linux box stuck on a LocalTalk
-network to use IP. It should work equally well if you are stuck on an
-EtherTalk only network.
-
-Further Assistance
--------------------
-You can contact me (Jay Schulist <jschlst@samba.org>) with any
-questions regarding decapsulation or encapsulation. Bradford W. Johnson
-<johns393@maroon.tc.umn.edu> originally wrote the ipddp.c driver for IP
-encapsulation in AppleTalk.
diff --git a/Documentation/networking/mptcp-sysctl.rst b/Documentation/networking/mptcp-sysctl.rst
index 15f1919d640c..69975ce25a02 100644
--- a/Documentation/networking/mptcp-sysctl.rst
+++ b/Documentation/networking/mptcp-sysctl.rst
@@ -25,6 +25,17 @@ add_addr_timeout - INTEGER (seconds)
Default: 120
+close_timeout - INTEGER (seconds)
+ Set the make-after-break timeout: in the absence of any close or
+ shutdown syscall, MPTCP sockets will keep their status unchanged
+ for this amount of time after the last subflow is removed, before
+ moving to TCP_CLOSE.
+
+ The default value matches TCP_TIMEWAIT_LEN. This is a per-namespace
+ sysctl.
+
+ Default: 60
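+
+ A hedged usage example (the value shown is only an illustration)::
+
+     sysctl -w net.mptcp.close_timeout=30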
+
checksum_enabled - BOOLEAN
Control whether DSS checksum can be enabled.
diff --git a/Documentation/networking/msg_zerocopy.rst b/Documentation/networking/msg_zerocopy.rst
index b3ea96af9b49..78fb70e748b7 100644
--- a/Documentation/networking/msg_zerocopy.rst
+++ b/Documentation/networking/msg_zerocopy.rst
@@ -7,7 +7,8 @@ Intro
=====
The MSG_ZEROCOPY flag enables copy avoidance for socket send calls.
-The feature is currently implemented for TCP and UDP sockets.
+The feature is currently implemented for TCP, UDP and VSOCK (with
+virtio transport) sockets.
Opportunity and Caveats
@@ -174,7 +175,9 @@ read_notification() call in the previous snippet. A notification
is encoded in the standard error format, sock_extended_err.
The level and type fields in the control data are protocol family
-specific, IP_RECVERR or IPV6_RECVERR.
+specific, IP_RECVERR or IPV6_RECVERR (for TCP or UDP sockets).
+For a VSOCK socket, cmsg_level will be SOL_VSOCK and cmsg_type will be
+VSOCK_RECVERR.
Error origin is the new type SO_EE_ORIGIN_ZEROCOPY. ee_errno is zero,
as explained before, to avoid blocking read and write system calls on
@@ -235,12 +238,15 @@ Implementation
Loopback
--------
+For TCP and UDP:
Data sent to local sockets can be queued indefinitely if the receive
process does not read its socket. Unbound notification latency is not
acceptable. For this reason all packets generated with MSG_ZEROCOPY
that are looped to a local socket will incur a deferred copy. This
includes looping onto packet sockets (e.g., tcpdump) and tun devices.
+For VSOCK:
+The data path for packets sent to local sockets is the same as for
+non-local sockets.
Testing
=======
@@ -254,3 +260,6 @@ instance when run with msg_zerocopy.sh between a veth pair across
namespaces, the test will not show any improvement. For testing, the
loopback restriction can be temporarily relaxed by making
skb_orphan_frags_rx identical to skb_orphan_frags.
+
+An example for the VSOCK type of socket can be found in
+tools/testing/vsock/vsock_test_zerocopy.c.
diff --git a/Documentation/networking/netconsole.rst b/Documentation/networking/netconsole.rst
index 7a9de0568e84..390730a74332 100644
--- a/Documentation/networking/netconsole.rst
+++ b/Documentation/networking/netconsole.rst
@@ -99,9 +99,6 @@ Dynamic reconfiguration:
Dynamic reconfigurability is a useful addition to netconsole that enables
remote logging targets to be dynamically added, removed, or have their
parameters reconfigured at runtime from a configfs-based userspace interface.
-[ Note that the parameters of netconsole targets that were specified/created
-from the boot/module option are not exposed via this interface, and hence
-cannot be modified dynamically. ]
To include this feature, select CONFIG_NETCONSOLE_DYNAMIC when building the
netconsole module (or kernel, if netconsole is built-in).
@@ -155,6 +152,25 @@ You can also update the local interface dynamically. This is especially
useful if you want to use interfaces that have newly come up (and may not
have existed when netconsole was loaded / initialized).
+Netconsole targets defined at boot time (or module load time) with the
+`netconsole=` param are assigned the name `cmdline<index>`. For example, the
+first target in the parameter is named `cmdline0`. You can control and modify
+these targets by creating configfs directories with the matching name.
+
+Let's suppose you have two netconsole targets defined at boot time::
+
+ netconsole=4444@10.0.0.1/eth1,9353@10.0.0.2/12:34:56:78:9a:bc;4444@10.0.0.1/eth1,9353@10.0.0.3/12:34:56:78:9a:bc
+
+You can inspect and modify these targets at runtime by creating the
+matching configfs directories::
+
+ mkdir cmdline0
+ cat cmdline0/remote_ip
+ 10.0.0.2
+
+ mkdir cmdline1
+ cat cmdline1/remote_ip
+ 10.0.0.3
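+
+To actually change one of these parameters, the usual rule for dynamic
+targets applies: disable the target first (a hedged sketch; the new address
+is only an illustration)::
+
+ echo 0 > cmdline0/enabled
+ echo 10.0.0.5 > cmdline0/remote_ip
+ echo 1 > cmdline0/enabled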
+
Extended console:
=================
diff --git a/Documentation/networking/page_pool.rst b/Documentation/networking/page_pool.rst
index 215ebc92752c..60993cb56b32 100644
--- a/Documentation/networking/page_pool.rst
+++ b/Documentation/networking/page_pool.rst
@@ -58,7 +58,9 @@ a page will cause no race conditions is enough.
.. kernel-doc:: include/net/page_pool/helpers.h
:identifiers: page_pool_put_page page_pool_put_full_page
- page_pool_recycle_direct page_pool_dev_alloc_pages
+ page_pool_recycle_direct page_pool_free_va
+ page_pool_dev_alloc_pages page_pool_dev_alloc_frag
+ page_pool_dev_alloc page_pool_dev_alloc_va
page_pool_get_dma_addr page_pool_get_dma_dir
.. kernel-doc:: net/core/page_pool.c
diff --git a/Documentation/networking/pktgen.rst b/Documentation/networking/pktgen.rst
index 1225f0f63ff0..c945218946e1 100644
--- a/Documentation/networking/pktgen.rst
+++ b/Documentation/networking/pktgen.rst
@@ -178,6 +178,7 @@ Examples::
IPSEC # IPsec encapsulation (needs CONFIG_XFRM)
NODE_ALLOC # node specific memory allocation
NO_TIMESTAMP # disable timestamping
+ SHARED # enable shared SKB
pgset 'flag ![name]' Clear a flag to determine behaviour.
Note that you might need to use single quote in
interactive mode, so that your shell wouldn't expand
@@ -288,6 +289,16 @@ To avoid breaking existing testbed scripts for using AH type and tunnel mode,
you can use "pgset spi SPI_VALUE" to specify which transformation mode
to employ.
+Disable shared SKB
+==================
+By default, SKBs sent by pktgen are shared (user count > 1).
+To test with non-shared SKBs, remove the "SHARED" flag by simply setting::
+
+ pg_set "flag !SHARED"
+
+However, if the "clone_skb" or "burst" parameters are configured, the skb
+still needs to be held by pktgen for further access. Hence the skb must be
+shared.
Current commands and configuration options
==========================================
@@ -357,6 +368,7 @@ Current commands and configuration options
IPSEC
NODE_ALLOC
NO_TIMESTAMP
+ SHARED
spi (ipsec)
diff --git a/Documentation/networking/scaling.rst b/Documentation/networking/scaling.rst
index 92c9fb46d6a2..03ae19a689fc 100644
--- a/Documentation/networking/scaling.rst
+++ b/Documentation/networking/scaling.rst
@@ -105,6 +105,48 @@ a separate CPU. For interrupt handling, HT has shown no benefit in
initial tests, so limit the number of queues to the number of CPU cores
in the system.
+Dedicated RSS contexts
+~~~~~~~~~~~~~~~~~~~~~~
+
+Modern NICs support creating multiple co-existing RSS configurations
+which are selected based on explicit matching rules. This can be very
+useful when an application wants to constrain the set of queues receiving
+traffic for e.g. a particular destination port or IP address.
+The example below shows how to direct all traffic to TCP port 22
+to queues 0 and 1.
+
+To create an additional RSS context use::
+
+ # ethtool -X eth0 hfunc toeplitz context new
+ New RSS context is 1
+
+The kernel reports back the ID of the allocated context (the default, always
+present RSS context has an ID of 0). The new context can be queried and
+modified using the same APIs as the default context::
+
+ # ethtool -x eth0 context 1
+ RX flow hash indirection table for eth0 with 13 RX ring(s):
+ 0: 0 1 2 3 4 5 6 7
+ 8: 8 9 10 11 12 0 1 2
+ [...]
+ # ethtool -X eth0 equal 2 context 1
+ # ethtool -x eth0 context 1
+ RX flow hash indirection table for eth0 with 13 RX ring(s):
+ 0: 0 1 0 1 0 1 0 1
+ 8: 0 1 0 1 0 1 0 1
+ [...]
+
+To make use of the new context, direct traffic to it using an n-tuple
+filter::
+
+ # ethtool -N eth0 flow-type tcp6 dst-port 22 context 1
+ Added rule with ID 1023
+
+When done, remove the context and the rule::
+
+ # ethtool -N eth0 delete 1023
+ # ethtool -X eth0 context 1 delete
+
RPS: Receive Packet Steering
============================
diff --git a/Documentation/networking/sfp-phylink.rst b/Documentation/networking/sfp-phylink.rst
index 55b65f607a64..8054d33f449f 100644
--- a/Documentation/networking/sfp-phylink.rst
+++ b/Documentation/networking/sfp-phylink.rst
@@ -200,10 +200,12 @@ this documentation.
when the in-band link state changes - otherwise the link will never
come up.
- The :c:func:`validate` method should mask the supplied supported mask,
- and ``state->advertising`` with the supported ethtool link modes.
- These are the new ethtool link modes, so bitmask operations must be
- used. For an example, see ``drivers/net/ethernet/marvell/mvneta.c``.
+ The :c:func:`mac_get_caps` method is optional, and if provided should
+ return the phylink MAC capabilities that are supported for the passed
+ ``interface`` mode. In general, there is no need to implement this method.
+ Phylink will use these capabilities in combination with permissible
+ capabilities for ``interface`` to determine the allowable ethtool link
+ modes.
The :c:func:`mac_link_state` method is used to read the link state
from the MAC, and report back the settings that the MAC is currently
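A minimal sketch of such a callback in a hypothetical MAC driver, assuming the
``mac_get_caps`` signature and the ``MAC_*`` / ``PHY_INTERFACE_MODE_*``
identifiers from include/linux/phylink.h (illustrative only, not part of the
patch above):

static unsigned long foo_mac_get_caps(struct phylink_config *config,
                                      phy_interface_t interface)
{
        switch (interface) {
        case PHY_INTERFACE_MODE_RGMII:
        case PHY_INTERFACE_MODE_RGMII_ID:
                /* Hypothetical MAC: 10/100/1000 full duplex plus pause
                 * frames when connected over RGMII.
                 */
                return MAC_SYM_PAUSE | MAC_ASYM_PAUSE |
                       MAC_10 | MAC_100 | MAC_1000FD;
        case PHY_INTERFACE_MODE_2500BASEX:
                return MAC_SYM_PAUSE | MAC_ASYM_PAUSE | MAC_2500FD;
        default:
                return 0;
        }
}

Phylink combines whatever is returned here with the capabilities permissible
for the interface mode; as the text above notes, most drivers do not need to
implement this method at all.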
diff --git a/Documentation/networking/tcp_ao.rst b/Documentation/networking/tcp_ao.rst
new file mode 100644
index 000000000000..cfa5bf1cc542
--- /dev/null
+++ b/Documentation/networking/tcp_ao.rst
@@ -0,0 +1,444 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+========================================================
+TCP Authentication Option Linux implementation (RFC5925)
+========================================================
+
+TCP Authentication Option (TCP-AO) provides a TCP extension aimed at verifying
+segments between trusted peers. It adds a new TCP header option with
+a Message Authentication Code (MAC). MACs are produced from the content
+of a TCP segment using a hashing function with a password known to both peers.
+The intent of TCP-AO is to deprecate TCP-MD5 by providing better security,
+key rotation and support for a variety of hashing algorithms.
+
+1. Introduction
+===============
+
+.. table:: Short and Limited Comparison of TCP-AO and TCP-MD5
+
+ +----------------------+------------------------+-----------------------+
+ | | TCP-MD5 | TCP-AO |
+ +======================+========================+=======================+
+ |Supported hashing |MD5 |Must support HMAC-SHA1 |
+ |algorithms |(cryptographically weak)|(chosen-prefix attacks)|
+ | | |and CMAC-AES-128 (only |
+ | | |side-channel attacks). |
+ | | |May support any hashing|
+ | | |algorithm. |
+ +----------------------+------------------------+-----------------------+
+ |Length of MACs (bytes)|16 |Typically 12-16. |
+ | | |Other variants that fit|
+ | | |TCP header permitted. |
+ +----------------------+------------------------+-----------------------+
+ |Number of keys per |1 |Many |
+ |TCP connection | | |
+ +----------------------+------------------------+-----------------------+
+ |Possibility to change |Non-practical (both |Supported by protocol |
+ |an active key |peers have to change | |
+ | |them during MSL) | |
+ +----------------------+------------------------+-----------------------+
+ |Protection against |No |Yes: ignoring them |
+ |ICMP 'hard errors' | |by default on |
+ | | |established connections|
+ +----------------------+------------------------+-----------------------+
+ |Protection against |No |Yes: pseudo-header |
+ |traffic-crossing | |includes TCP ports. |
+ |attack | | |
+ +----------------------+------------------------+-----------------------+
+ |Protection against |No |Sequence Number |
+ |replayed TCP segments | |Extension (SNE) and |
+ | | |Initial Sequence |
+ | | |Numbers (ISNs) |
+ +----------------------+------------------------+-----------------------+
+ |Supports |Yes |No. ISNs+SNE are needed|
+ |Connectionless Resets | |to correctly sign RST. |
+ +----------------------+------------------------+-----------------------+
+ |Standards |RFC 2385 |RFC 5925, RFC 5926 |
+ +----------------------+------------------------+-----------------------+
+
+
+1.1 Frequently Asked Questions (FAQ) with references to RFC 5925
+----------------------------------------------------------------
+
+Q: Can either SendID or RecvID be non-unique for the same 4-tuple
+(srcaddr, srcport, dstaddr, dstport)?
+
+A: No [3.1]::
+
+ >> The IDs of MKTs MUST NOT overlap where their TCP connection
+ identifiers overlap.
+
+Q: Can Master Key Tuple (MKT) for an active connection be removed?
+
+A: No, unless it's copied to Transport Control Block (TCB) [3.1]::
+
+ It is presumed that an MKT affecting a particular connection cannot
+ be destroyed during an active connection -- or, equivalently, that
+ its parameters are copied to an area local to the connection (i.e.,
+ instantiated) and so changes would affect only new connections.
+
+Q: If an old MKT needs to be deleted, how should it be done in order
+not to remove it for an active connection? (As it can still be in use
+at any moment later)
+
+A: Not specified by RFC 5925; it seems to be up to key management
+to ensure that no one uses such an MKT before trying to remove it.
+
+Q: Can an old MKT exist forever and be used by another peer?
+
+A: It can, it's a key management task to decide when to remove an old key [6.1]::
+
+ Deciding when to start using a key is a performance issue. Deciding
+ when to remove an MKT is a security issue. Invalid MKTs are expected
+ to be removed. TCP-AO provides no mechanism to coordinate their removal,
+ as we consider this a key management operation.
+
+also [6.1]::
+
+ The only way to avoid reuse of previously used MKTs is to remove the MKT
+ when it is no longer considered permitted.
+
+Linux TCP-AO will try its best to prevent you from removing a key that's
+being used, considering it a key management failure. But since keeping
+an outdated key may become a security issue and a peer may
+unintentionally prevent the removal of an old key by always setting
+it as RNextKeyID, a forced key removal mechanism is provided, where
+userspace has to supply a KeyID to use instead of the one that's being removed
+and the kernel will atomically delete the old key, even if the peer is
+still requesting it. There are no guarantees for force-delete as the peer
+may not yet have the new key - the TCP connection may just break.
+Alternatively, one may choose to shut down the socket.
+
+Q: What happens when a packet is received on a new connection with no known
+MKT's RecvID?
+
+A: RFC 5925 specifies that by default it is accepted with a warning logged, but
+the behaviour can be configured by the user [7.5.1.a]::
+
+ If the segment is a SYN, then this is the first segment of a new
+ connection. Find the matching MKT for this segment, using the segment's
+ socket pair and its TCP-AO KeyID, matched against the MKT's TCP connection
+ identifier and the MKT's RecvID.
+
+ i. If there is no matching MKT, remove TCP-AO from the segment.
+ Proceed with further TCP handling of the segment.
+ NOTE: this presumes that connections that do not match any MKT
+ should be silently accepted, as noted in Section 7.3.
+
+[7.3]::
+
+ >> A TCP-AO implementation MUST allow for configuration of the behavior
+ of segments with TCP-AO but that do not match an MKT. The initial default
+ of this configuration SHOULD be to silently accept such connections.
+ If this is not the desired case, an MKT can be included to match such
+ connections, or the connection can indicate that TCP-AO is required.
+ Alternately, the configuration can be changed to discard segments with
+ the AO option not matching an MKT.
+
+[10.2.b]::
+
+ Connections not matching any MKT do not require TCP-AO. Further, incoming
+ segments with TCP-AO are not discarded solely because they include
+ the option, provided they do not match any MKT.
+
+Note that the Linux TCP-AO implementation differs in this aspect. Currently,
+TCP-AO segments with unknown key signatures are discarded with warnings logged.
+
+Q: Does the RFC imply centralized kernel key management in any way?
+(i.e. that a key on all connections MUST be rotated at the same time?)
+
+A: Not specified. MKTs can be managed in userspace; the only part relevant to
+key changes is [7.3]::
+
+ >> All TCP segments MUST be checked against the set of MKTs for matching
+ TCP connection identifiers.
+
+Q: What happens when RNextKeyID requested by a peer is unknown? Should
+the connection be reset?
+
+A: It should not, no action needs to be performed [7.5.2.e]::
+
+ ii. If they differ, determine whether the RNextKeyID MKT is ready.
+
+ 1. If the MKT corresponding to the segment’s socket pair and RNextKeyID
+ is not available, no action is required (RNextKeyID of a received
+ segment needs to match the MKT’s SendID).
+
+Q: How is current_key set and when does it change? Is it a user-triggered
+change, or is it changed by a request from the remote peer? Is it set by the
+user explicitly, or by a matching rule?
+
+A: current_key is set by RNextKeyID [6.1]::
+
+ Rnext_key is changed only by manual user intervention or MKT management
+ protocol operation. It is not manipulated by TCP-AO. Current_key is updated
+ by TCP-AO when processing received TCP segments as discussed in the segment
+ processing description in Section 7.5. Note that the algorithm allows
+ the current_key to change to a new MKT, then change back to a previously
+ used MKT (known as "backing up"). This can occur during an MKT change when
+ segments are received out of order, and is considered a feature of TCP-AO,
+ because reordering does not result in drops.
+
+[7.5.2.e.ii]::
+
+ 2. If the matching MKT corresponding to the segment’s socket pair and
+ RNextKeyID is available:
+
+ a. Set current_key to the RNextKeyID MKT.
+
+Q: If both peers have multiple MKTs matching the connection's socket pair
+(with different KeyIDs), how should the sender/receiver pick KeyID to use?
+
+A: Some mechanism should pick the "desired" MKT [3.3]::
+
+ Multiple MKTs may match a single outgoing segment, e.g., when MKTs
+ are being changed. Those MKTs cannot have conflicting IDs (as noted
+ elsewhere), and some mechanism must determine which MKT to use for each
+ given outgoing segment.
+
+ >> An outgoing TCP segment MUST match at most one desired MKT, indicated
+ by the segment’s socket pair. The segment MAY match multiple MKTs, provided
+ that exactly one MKT is indicated as desired. Other information in
+ the segment MAY be used to determine the desired MKT when multiple MKTs
+ match; such information MUST NOT include values in any TCP option fields.
+
+Q: Can a TCP-MD5 connection migrate to TCP-AO (and vice versa)?
+
+A: No [1]::
+
+ TCP MD5-protected connections cannot be migrated to TCP-AO because TCP MD5
+ does not support any changes to a connection’s security algorithm
+ once established.
+
+Q: If all MKTs are removed on a connection, can it become a non-TCP-AO signed
+connection?
+
+A: [7.5.2] doesn't have the same choice as SYN packet handling in [7.5.1.i]
+that would allow accepting segments without a signature (which would be
+insecure). While switching to a non-TCP-AO connection is not prohibited
+directly, that seems to be what the RFC means. Also, there's a requirement
+for TCP-AO connections to always have one current_key [3.3]::
+
+ TCP-AO requires that every protected TCP segment match exactly one MKT.
+
+[3.3]::
+
+ >> An incoming TCP segment including TCP-AO MUST match exactly one MKT,
+ indicated solely by the segment’s socket pair and its TCP-AO KeyID.
+
+[4.4]::
+
+ One or more MKTs. These are the MKTs that match this connection’s
+ socket pair.
+
+Q: Can a non-TCP-AO connection become a TCP-AO-enabled one?
+
+A: No: for an already established non-TCP-AO connection it would be impossible
+to switch to using TCP-AO, as the traffic key generation requires the initial
+sequence numbers. In other words, starting to use TCP-AO would require
+re-establishing the TCP connection.
+
+2. In-kernel MKTs database vs database in userspace
+===================================================
+
+Linux TCP-AO support is implemented using ``setsockopt()s``, in a similar way
+to TCP-MD5. It means that a userspace application that wants to use TCP-AO
+should perform ``setsockopt()`` on a TCP socket when it wants to add,
+remove or rotate MKTs. This approach moves the key management responsibility
+to userspace, as well as decisions on corner cases, i.e. what to do if
+the peer doesn't respect RNextKeyID; it keeps more code, especially the code
+responsible for the policy decisions, in userspace. Besides, it's flexible
+and scales well (with less locking needed than in the case of an in-kernel
+database). One should also keep in mind that the mainly intended users are
+BGP processes, not arbitrary applications, which means that, compared to
+IPsec tunnels, no transparency is really needed and modern BGP daemons
+already have ``setsockopt()s`` for TCP-MD5 support.
+
+.. table:: Considered pros and cons of the approaches
+
+ +----------------------+------------------------+-----------------------+
+ | | ``setsockopt()`` | in-kernel DB |
+ +======================+========================+=======================+
+ | Extendability | ``setsockopt()`` | Netlink messages are |
+ | | commands should be | simple and extendable |
+ | | extendable syscalls | |
+ +----------------------+------------------------+-----------------------+
+ | Required userspace | BGP or any application | could be transparent |
+ | changes | that wants TCP-AO needs| as tunnels, providing |
+ | | to perform | something like |
+ | | ``setsockopt()s`` | ``ip tcpao add key`` |
+ | | and do key management | (delete/show/rotate) |
+ +----------------------+------------------------+-----------------------+
+ |MKTs removal or adding| harder for userspace | harder for kernel |
+ +----------------------+------------------------+-----------------------+
+ | Dump-ability | ``getsockopt()`` | Netlink .dump() |
+ | | | callback |
+ +----------------------+------------------------+-----------------------+
+ | Limits on kernel | equal |
+ | resources/memory | |
+ +----------------------+------------------------+-----------------------+
+ | Scalability | contention on | contention on |
+ | | ``TCP_LISTEN`` sockets | the whole database |
+ +----------------------+------------------------+-----------------------+
+ | Monitoring & warnings| ``TCP_DIAG`` | same Netlink socket |
+ +----------------------+------------------------+-----------------------+
+ | Matching of MKTs | half-problem: only | hard |
+ | | listen sockets | |
+ +----------------------+------------------------+-----------------------+
+
+
+3. uAPI
+=======
+
+Linux provides a set of ``setsockopt()s`` and ``getsockopt()s`` that let
+userspace manage TCP-AO on a per-socket basis. In order to add/delete MKTs,
+the ``TCP_AO_ADD_KEY`` and ``TCP_AO_DEL_KEY`` TCP socket options must be used.
+It is not allowed to add a key on an established non-TCP-AO connection,
+nor to remove the last key from a TCP-AO connection.
+
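A minimal userspace sketch of adding one MKT for an IPv4 peer with
``setsockopt(TCP_AO_ADD_KEY)``. The ``struct tcp_ao_add`` field names and the
option name are assumed to follow include/uapi/linux/tcp.h (see
tools/testing/selftests/net/tcp_ao/ for authoritative usage); error handling
is trimmed to the minimum:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <linux/tcp.h>          /* struct tcp_ao_add, TCP_AO_ADD_KEY */

static int add_ao_key(int sk, const char *peer, const char *secret)
{
        struct tcp_ao_add ao = {};
        struct sockaddr_in *in = (struct sockaddr_in *)&ao.addr;

        in->sin_family = AF_INET;
        inet_pton(AF_INET, peer, &in->sin_addr);

        /* Field names below are assumptions taken from the uAPI header. */
        strcpy(ao.alg_name, "hmac(sha1)");      /* any kernel-supported hash */
        ao.prefix = 32;                         /* match this peer address exactly */
        ao.sndid  = 100;                        /* KeyID placed in segments we send */
        ao.rcvid  = 100;                        /* KeyID expected from the peer */
        ao.keylen = strlen(secret);
        memcpy(ao.key, secret, ao.keylen);

        return setsockopt(sk, IPPROTO_TCP, TCP_AO_ADD_KEY, &ao, sizeof(ao));
}

int main(void)
{
        int sk = socket(AF_INET, SOCK_STREAM, 0);

        if (sk < 0 || add_ao_key(sk, "10.0.0.2", "testkey"))
                perror("TCP_AO_ADD_KEY");
        /* connect()/listen() would follow; both peers must configure
         * matching keys (local sndid pairs with the remote rcvid). */
        close(sk);
        return 0;
}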
+The ``setsockopt(TCP_AO_DEL_KEY)`` command may specify ``tcp_ao_del::current_key``
++ ``tcp_ao_del::set_current`` and/or ``tcp_ao_del::rnext``
++ ``tcp_ao_del::set_rnext``, which makes such a delete "forced": it
+provides userspace a way to delete a key that's being used and atomically set
+another one instead. This is not intended for normal use and should be used
+only when the peer ignores RNextKeyID and keeps requesting/using an old key.
+It provides a way to force-delete a key that's not trusted but may break
+the TCP-AO connection.
+
+The usual/normal key-rotation can be performed with ``setsockopt(TCP_AO_INFO)``.
+It also provides a uAPI to change per-socket TCP-AO settings, such as
+ignoring ICMPs, as well as to clear per-socket TCP-AO packet counters.
+The corresponding ``getsockopt(TCP_AO_INFO)`` can be used to get those
+per-socket TCP-AO settings.
+
+Another useful command is ``getsockopt(TCP_AO_GET_KEYS)``. One can use it
+to list all MKTs on a TCP socket or use a filter to get keys for a specific
+peer and/or sndid/rcvid, VRF L3 interface or get current_key/rnext_key.
+
+To repair TCP-AO connections ``setsockopt(TCP_AO_REPAIR)`` is available,
+provided that the user previously has checkpointed/dumped the socket with
+``getsockopt(TCP_AO_REPAIR)``.
+
+A tip here for scaled TCP_LISTEN sockets that may have many thousands of TCP-AO
+keys: use filters in ``getsockopt(TCP_AO_GET_KEYS)`` and asynchronous
+delete with ``setsockopt(TCP_AO_DEL_KEY)``.
+
+Linux TCP-AO also provides a number of segment counters that can be helpful
+with troubleshooting/debugging issues. Every MKT has good/bad counters
+that reflect how many packets passed/failed verification.
+Each TCP-AO socket has the following counters:
+
+- for good segments (properly signed)
+- for bad segments (failed TCP-AO verification)
+- for segments with unknown keys
+- for segments where an AO signature was expected, but wasn't found
+- for the number of ignored ICMPs
+
+TCP-AO per-socket counters are also duplicated with per-netns counters,
+exposed with SNMP. Those are ``TCPAOGood``, ``TCPAOBad``, ``TCPAOKeyNotFound``,
+``TCPAORequired`` and ``TCPAODroppedIcmps``.
+
+RFC 5925 very permissively specifies how TCP port matching can be done for
+MKTs::
+
+ TCP connection identifier. A TCP socket pair, i.e., a local IP
+ address, a remote IP address, a TCP local port, and a TCP remote port.
+ Values can be partially specified using ranges (e.g., 2-30), masks
+ (e.g., 0xF0), wildcards (e.g., "*"), or any other suitable indication.
+
+Currently the Linux TCP-AO implementation doesn't provide any TCP port
+matching. Port ranges are probably the most flexible option for the uAPI,
+but they are not implemented so far.
+
+4. ``setsockopt()`` vs ``accept()`` race
+========================================
+
+In contrast with an established TCP-MD5 connection, which has just one key,
+TCP-AO connections may have many keys, which means that accepted connections
+on a listen socket may have any number of keys as well. Copying all those
+keys on the first properly signed SYN would make the request socket bigger,
+which would be undesirable. Currently, the implementation doesn't copy keys
+to request sockets, but rather looks them up on the "parent" listener socket.
+
+The result is that when userspace removes TCP-AO keys, this may break
+not-yet-established connections on request sockets; it also doesn't remove
+keys from sockets that were already established, but not yet ``accept()``'ed,
+hanging in the accept queue.
+
+The reverse is valid as well: if userspace adds a new key for a peer on
+a listener socket, the established sockets in accept queue won't
+have the new keys.
+
+At this moment, the resolution for the two races:
+``setsockopt(TCP_AO_ADD_KEY)`` vs ``accept()``
+and ``setsockopt(TCP_AO_DEL_KEY)`` vs ``accept()`` is delegated to userspace.
+This means that it's expected that userspace would check the MKTs on the socket
+that was returned by ``accept()`` to verify that any key rotation that
+happened on the listen socket is reflected on the newly established connection.
+
+This is a similar "do-nothing" approach to TCP-MD5 from the kernel side and
+may be changed later by introducing new flags to ``tcp_ao_add``
+and ``tcp_ao_del``.
+
+Note that this race is rare, as it requires TCP-AO key rotation to happen
+during the 3-way handshake for the new TCP connection.
+
+5. Interaction with TCP-MD5
+===========================
+
+A TCP connection cannot migrate between TCP-AO and TCP-MD5 options. Established
+sockets that have either AO or MD5 keys cannot have keys of the other
+option added.
+
+For listening sockets the picture is different: a BGP server may want to serve
+both TCP-AO and (deprecated) TCP-MD5 clients. As a result, both types of keys
+may be added to TCP_CLOSED or TCP_LISTEN sockets. It's not allowed to add
+different types of keys for the same peer.
+
+6. SNE Linux implementation
+===========================
+
+RFC 5925 [6.2] describes the algorithm of how to extend TCP sequence numbers
+with SNE. In short: TCP has to track the previous sequence numbers and set
+sne_flag when the current SEQ number rolls over. The flag is cleared when
+both current and previous SEQ numbers cross 0x7fff, which is 32Kb.
+
+While sne_flag is set, the algorithm compares the SEQ of each packet with
+0x7fff, and if it's higher than 32Kb, it assumes that the packet should be
+verified with the SNE before the increment. As a result, there is
+a [0; 32Kb] window in which packets with (SNE - 1) can be accepted.
+
+The Linux implementation simplifies this a bit: as the network stack already
+tracks the first SEQ byte for which an ACK is wanted (snd_una) and the next
+SEQ byte that is wanted (rcv_nxt), that's enough information for a rough
+estimation of where in the 4GB SEQ number space both sender and receiver are.
+When they roll over to zero, the corresponding SNE gets incremented.
+
+tcp_ao_compute_sne() is called for each TCP-AO segment. It compares SEQ
+numbers from the segment with snd_una or rcv_nxt and fits the result into
+a 2GB window around them, detecting SEQ numbers rolling over. That
+simplifies the code a lot and only requires SNE numbers to be stored on
+every TCP-AO socket.
+
+The 2GB window at first glance seems much more permissive compared to
+RFC 5925. But it is only used to pick the correct SNE before/after
+a rollover. It allows more TCP segment replays, but all regular
+TCP checks in tcp_sequence() are still applied on the verified segment.
+So, it trades a bit more permissive acceptance of replayed/retransmitted
+segments for the simplicity of the algorithm and seemingly better behaviour
+for large TCP windows.
+
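To make the 2GB-window idea concrete, below is a small standalone sketch of
the approach (an illustration only, not the kernel's tcp_ao_compute_sne()
itself): given a tracked reference point such as rcv_nxt and its SNE, it
picks the SNE for a segment's 32-bit SEQ.

#include <stdint.h>
#include <stdio.h>

/* "before" in the TCP sense: true if a precedes b within a 2GB window. */
static int seq_before(uint32_t a, uint32_t b)
{
        return (int32_t)(a - b) < 0;
}

/*
 * Pick the SNE for seq relative to a reference point (ref_seq, ref_sne),
 * e.g. rcv_nxt and its SNE. If seq is logically before the reference but
 * numerically larger, it predates a rollover (SNE - 1); if it is logically
 * after but numerically smaller, the rollover already happened (SNE + 1).
 */
static uint32_t compute_sne(uint32_t ref_sne, uint32_t ref_seq, uint32_t seq)
{
        if (seq_before(seq, ref_seq))
                return seq > ref_seq ? ref_sne - 1 : ref_sne;
        return seq < ref_seq ? ref_sne + 1 : ref_sne;
}

int main(void)
{
        /* Reference rcv_nxt = 0x00000010 in epoch (SNE) 5: a retransmit
         * from just before the rollover still verifies with SNE 4.
         * Conversely, with rcv_nxt just below the rollover (SNE 4),
         * a segment from after it already uses SNE 5.
         */
        printf("%u\n", compute_sne(5, 0x00000010u, 0xfffffff0u));       /* 4 */
        printf("%u\n", compute_sne(5, 0x00000010u, 0x00000020u));       /* 5 */
        printf("%u\n", compute_sne(4, 0xfffffff0u, 0x00000010u));       /* 5 */
        return 0;
}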
+7. Links
+========
+
+RFC 5925 The TCP Authentication Option
+ https://www.rfc-editor.org/rfc/pdfrfc/rfc5925.txt.pdf
+
+RFC 5926 Cryptographic Algorithms for the TCP Authentication Option (TCP-AO)
+ https://www.rfc-editor.org/rfc/pdfrfc/rfc5926.txt.pdf
+
+Draft "SHA-2 Algorithm for the TCP Authentication Option (TCP-AO)"
+ https://datatracker.ietf.org/doc/html/draft-nayak-tcp-sha2-03
+
+RFC 2385 Protection of BGP Sessions via the TCP MD5 Signature Option
+ https://www.rfc-editor.org/rfc/pdfrfc/rfc2385.txt.pdf
+
+:Author: Dmitry Safonov <dima@arista.com>
diff --git a/Documentation/networking/xdp-rx-metadata.rst b/Documentation/networking/xdp-rx-metadata.rst
index 25ce72af81c2..205696780b78 100644
--- a/Documentation/networking/xdp-rx-metadata.rst
+++ b/Documentation/networking/xdp-rx-metadata.rst
@@ -105,6 +105,13 @@ bpf_tail_call
Adding programs that access metadata kfuncs to the ``BPF_MAP_TYPE_PROG_ARRAY``
is currently not supported.
+Supported Devices
+=================
+
+It is possible to query which kfuncs a particular netdev implements via
+netlink. See the ``xdp-rx-metadata-features`` attribute set in
+``Documentation/netlink/specs/netdev.yaml``.
+
Example
=======
diff --git a/Documentation/process/7.AdvancedTopics.rst b/Documentation/process/7.AdvancedTopics.rst
index bf7cbfb4caa5..43291704338e 100644
--- a/Documentation/process/7.AdvancedTopics.rst
+++ b/Documentation/process/7.AdvancedTopics.rst
@@ -146,6 +146,7 @@ pull. The git request-pull command can be helpful in this regard; it will
format the request as other developers expect, and will also check to be
sure that you have remembered to push those changes to the public server.
+.. _development_advancedtopics_reviews:
Reviewing patches
-----------------
@@ -167,6 +168,12 @@ comments as questions rather than criticisms. Asking "how does the lock
get released in this path?" will always work better than stating "the
locking here is wrong."
+Another technique that is useful in case of a disagreement is to ask others
+to chime in. If a discussion reaches a stalemate after a few exchanges,
+then call for the opinions of other reviewers or maintainers. Often those in
+agreement with a reviewer remain silent unless called upon.
+The opinion of multiple people carries exponentially more weight.
+
Different developers will review code from different points of view. Some
are mostly concerned with coding style and whether code lines have trailing
white space. Others will focus primarily on whether the change implemented
@@ -176,3 +183,14 @@ security issues, duplication of code found elsewhere, adequate
documentation, adverse effects on performance, user-space ABI changes, etc.
All types of review, if they lead to better code going into the kernel, are
welcome and worthwhile.
+
+There is no strict requirement to use specific tags like ``Reviewed-by``.
+In fact, reviews in plain English are more informative and encouraged
+even when a tag is provided, e.g. "I looked at aspects A, B and C of this
+submission and it looks good to me."
+Some form of review message or reply is obviously necessary, otherwise
+maintainers will not know that the reviewer has looked at the patch at all!
+
+Last but not least, patch review may become a negative process, focused
+on pointing out problems. Please throw in a compliment once in a while,
+particularly for newbies!
diff --git a/Documentation/process/maintainer-netdev.rst b/Documentation/process/maintainer-netdev.rst
index 09dcf6377c27..7feacc20835e 100644
--- a/Documentation/process/maintainer-netdev.rst
+++ b/Documentation/process/maintainer-netdev.rst
@@ -441,6 +441,21 @@ in a way which would break what would normally be considered uAPI.
new ``netdevsim`` features must be accompanied by selftests under
``tools/testing/selftests/``.
+Reviewer guidance
+-----------------
+
+Reviewing other people's patches on the list is highly encouraged,
+regardless of the level of expertise. For general guidance and
+helpful tips please see :ref:`development_advancedtopics_reviews`.
+
+It's safe to assume that netdev maintainers know the community and the level
+of expertise of the reviewers. The reviewers should not be concerned about
+their comments impeding or derailing the patch flow.
+
+Less experienced reviewers are highly encouraged to do more in-depth
+review of submissions and not focus exclusively on trivial or subjective
+matters like code formatting, tags etc.
+
Testimonials / feedback
-----------------------
diff --git a/Documentation/userspace-api/netlink/genetlink-legacy.rst b/Documentation/userspace-api/netlink/genetlink-legacy.rst
index 40b82ad5d54a..70a77387f6c4 100644
--- a/Documentation/userspace-api/netlink/genetlink-legacy.rst
+++ b/Documentation/userspace-api/netlink/genetlink-legacy.rst
@@ -11,6 +11,20 @@ the ``genetlink-legacy`` protocol level.
Specification
=============
+Globals
+-------
+
+Attributes listed directly at the root level of the spec file.
+
+version
+~~~~~~~
+
+Generic Netlink family version, default is 1.
+
+``version`` has historically been used to introduce family changes
+which may break backwards compatibility. Since compatibility-breaking changes
+are generally not allowed, ``version`` is very rarely used.
+
Attribute type nests
--------------------
@@ -168,7 +182,7 @@ members
- ``name`` - The attribute name of the struct member
- ``type`` - One of the scalar types ``u8``, ``u16``, ``u32``, ``u64``, ``s8``,
- ``s16``, ``s32``, ``s64``, ``string`` or ``binary``.
+ ``s16``, ``s32``, ``s64``, ``string``, ``binary`` or ``bitfield32``.
- ``byte-order`` - ``big-endian`` or ``little-endian``
- ``doc``, ``enum``, ``enum-as-flags``, ``display-hint`` - Same as for
:ref:`attribute definitions <attribute_properties>`
diff --git a/Documentation/userspace-api/netlink/specs.rst b/Documentation/userspace-api/netlink/specs.rst
index cc4e2430997e..c1b951649113 100644
--- a/Documentation/userspace-api/netlink/specs.rst
+++ b/Documentation/userspace-api/netlink/specs.rst
@@ -86,11 +86,6 @@ name
Name of the family. Name identifies the family in a unique way, since
the Family IDs are allocated dynamically.
-version
-~~~~~~~
-
-Generic Netlink family version, default is 1.
-
protocol
~~~~~~~~
@@ -408,10 +403,21 @@ This section describes the attribute types supported by the ``genetlink``
compatibility level. Refer to documentation of different levels for additional
attribute types.
-Scalar integer types
+Common integer types
--------------------
-Fixed-width integer types:
+``sint`` and ``uint`` represent signed and unsigned 64 bit integers.
+If the value can fit in 32 bits, only 32 bits are carried in netlink
+messages; otherwise the full 64 bits are carried. Note that the payload
+is only aligned to 4B, so the full 64 bit value may be unaligned!
+
+Common integer types should be preferred over fixed-width types in the
+majority of cases.
+
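As an illustrative sketch (plain libc plus the uAPI netlink header, nothing
protocol specific), user space might read such a variable-size integer
attribute like this, copying the payload out with memcpy() precisely because
it may be unaligned:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <linux/netlink.h>      /* struct nlattr, NLA_HDRLEN */

/* Read a 'uint' attribute whose payload may be 4 or 8 bytes long. */
static uint64_t nla_read_uint(const struct nlattr *attr)
{
        const void *data = (const char *)attr + NLA_HDRLEN;
        int len = attr->nla_len - NLA_HDRLEN;
        uint32_t v32;
        uint64_t v64;

        if (len == sizeof(v32)) {
                memcpy(&v32, data, sizeof(v32));
                return v32;
        }
        memcpy(&v64, data, sizeof(v64));
        return v64;
}

int main(void)
{
        /* Fake 4-byte 'uint' attribute carrying the value 7. */
        struct {
                struct nlattr hdr;
                unsigned char payload[8];
        } msg = { .hdr = { .nla_len = NLA_HDRLEN + sizeof(uint32_t) } };
        uint32_t val = 7;

        memcpy(msg.payload, &val, sizeof(val));
        printf("%llu\n", (unsigned long long)nla_read_uint(&msg.hdr));
        return 0;
}

The signed ``sint`` type is handled the same way, just with int32_t/int64_t.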
+Fixed-width integer types
+-------------------------
+
+Fixed-width integer types include:
``u8``, ``u16``, ``u32``, ``u64``, ``s8``, ``s16``, ``s32``, ``s64``.
Note that types smaller than 32 bit should be avoided as using them
@@ -421,6 +427,9 @@ See :ref:`pad_type` for padding of 64 bit attributes.
The payload of the attribute is the integer in host order unless ``byte-order``
specifies otherwise.
+64 bit values are usually aligned by the kernel, but it is recommended
+that user space be able to deal with unaligned values.
+
.. _pad_type:
pad
diff --git a/MAINTAINERS b/MAINTAINERS
index 375b2c87d099..f9a5be31e694 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1460,7 +1460,6 @@ F: drivers/hwmon/applesmc.c
APPLETALK NETWORK LAYER
L: netdev@vger.kernel.org
S: Odd fixes
-F: drivers/net/appletalk/
F: include/linux/atalk.h
F: include/uapi/linux/atalk.h
F: net/appletalk/
@@ -3622,9 +3621,10 @@ F: Documentation/devicetree/bindings/iio/accel/bosch,bma400.yaml
F: drivers/iio/accel/bma400*
BPF JIT for ARM
-M: Shubham Bansal <illusionist.neo@gmail.com>
+M: Russell King <linux@armlinux.org.uk>
+M: Puranjay Mohan <puranjay12@gmail.com>
L: bpf@vger.kernel.org
-S: Odd Fixes
+S: Maintained
F: arch/arm/net/
BPF JIT for ARM64
@@ -3804,6 +3804,15 @@ L: bpf@vger.kernel.org
S: Odd Fixes
K: (?:\b|_)bpf(?:\b|_)
+BPF [NETKIT] (BPF-programmable network device)
+M: Daniel Borkmann <daniel@iogearbox.net>
+M: Nikolay Aleksandrov <razor@blackwall.org>
+L: bpf@vger.kernel.org
+L: netdev@vger.kernel.org
+S: Supported
+F: drivers/net/netkit.c
+F: include/net/netkit.h
+
BPF [NETWORKING] (struct_ops, reuseport)
M: Martin KaFai Lau <martin.lau@linux.dev>
L: bpf@vger.kernel.org
@@ -4346,8 +4355,7 @@ F: drivers/net/ethernet/broadcom/bcmsysport.*
F: drivers/net/ethernet/broadcom/unimac.h
BROADCOM TG3 GIGABIT ETHERNET DRIVER
-M: Siva Reddy Kallam <siva.kallam@broadcom.com>
-M: Prashant Sreedharan <prashant@broadcom.com>
+M: Pavan Chebbi <pavan.chebbi@broadcom.com>
M: Michael Chan <mchan@broadcom.com>
L: netdev@vger.kernel.org
S: Supported
@@ -5343,12 +5351,6 @@ M: Bence Csókás <bence98@sch.bme.hu>
S: Maintained
F: drivers/i2c/busses/i2c-cp2615.c
-CPMAC ETHERNET DRIVER
-M: Florian Fainelli <f.fainelli@gmail.com>
-L: netdev@vger.kernel.org
-S: Maintained
-F: drivers/net/ethernet/ti/cpmac.c
-
CPU FREQUENCY DRIVERS - VEXPRESS SPC ARM BIG LITTLE
M: Viresh Kumar <viresh.kumar@linaro.org>
M: Sudeep Holla <sudeep.holla@arm.com>
@@ -6368,6 +6370,17 @@ F: Documentation/networking/device_drivers/ethernet/freescale/dpaa2/switch-drive
F: drivers/net/ethernet/freescale/dpaa2/dpaa2-switch*
F: drivers/net/ethernet/freescale/dpaa2/dpsw*
+DPLL SUBSYSTEM
+M: Vadim Fedorenko <vadim.fedorenko@linux.dev>
+M: Arkadiusz Kubalewski <arkadiusz.kubalewski@intel.com>
+M: Jiri Pirko <jiri@resnulli.us>
+L: netdev@vger.kernel.org
+S: Supported
+F: Documentation/driver-api/dpll.rst
+F: drivers/dpll/*
+F: include/linux/dpll.h
+F: include/uapi/linux/dpll.h
+
DRBD DRIVER
M: Philipp Reisner <philipp.reisner@linbit.com>
M: Lars Ellenberg <lars.ellenberg@linbit.com>
@@ -10476,7 +10489,6 @@ F: drivers/platform/x86/intel/atomisp2/led.c
INTEL BIOS SAR INT1092 DRIVER
M: Shravan Sudhakar <s.shravan@intel.com>
-M: Intel Corporation <linuxwwan@intel.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/intel/int1092/
@@ -10906,7 +10918,6 @@ F: drivers/platform/x86/intel/wmi/thunderbolt.c
INTEL WWAN IOSM DRIVER
M: M Chetan Kumar <m.chetan.kumar@intel.com>
-M: Intel Corporation <linuxwwan@intel.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/wwan/iosm/
@@ -13531,7 +13542,6 @@ F: net/dsa/tag_mtk.c
MEDIATEK T7XX 5G WWAN MODEM DRIVER
M: Chandrashekar Devegowda <chandrashekar.devegowda@intel.com>
-M: Intel Corporation <linuxwwan@intel.com>
R: Chiranjeevi Rapolu <chiranjeevi.rapolu@linux.intel.com>
R: Liu Haijun <haijun.liu@mediatek.com>
R: M Chetan Kumar <m.chetan.kumar@linux.intel.com>
@@ -14376,9 +14386,11 @@ MIPS/LOONGSON1 ARCHITECTURE
M: Keguang Zhang <keguang.zhang@gmail.com>
L: linux-mips@vger.kernel.org
S: Maintained
+F: Documentation/devicetree/bindings/*/loongson,ls1*.yaml
F: arch/mips/include/asm/mach-loongson32/
F: arch/mips/loongson32/
F: drivers/*/*loongson1*
+F: drivers/net/ethernet/stmicro/stmmac/dwmac-loongson1.c
MIPS/LOONGSON2EF ARCHITECTURE
M: Jiaxun Yang <jiaxun.yang@flygoat.com>
@@ -14985,10 +14997,11 @@ W: https://github.com/multipath-tcp/mptcp_net-next/wiki
B: https://github.com/multipath-tcp/mptcp_net-next/issues
T: git https://github.com/multipath-tcp/mptcp_net-next.git export-net
T: git https://github.com/multipath-tcp/mptcp_net-next.git export
+F: Documentation/netlink/specs/mptcp.yaml
F: Documentation/networking/mptcp-sysctl.rst
F: include/net/mptcp.h
F: include/trace/events/mptcp.h
-F: include/uapi/linux/mptcp.h
+F: include/uapi/linux/mptcp*.h
F: net/mptcp/
F: tools/testing/selftests/bpf/*/*mptcp*.c
F: tools/testing/selftests/net/mptcp/
@@ -17957,7 +17970,6 @@ F: arch/mips/boot/dts/ralink/mt7621*
RALINK RT2X00 WIRELESS LAN DRIVER
M: Stanislaw Gruszka <stf_xl@wp.pl>
-M: Helmut Schaa <helmut.schaa@googlemail.com>
L: linux-wireless@vger.kernel.org
S: Maintained
F: drivers/net/wireless/ralink/rt2x00/
@@ -23066,7 +23078,7 @@ F: drivers/scsi/vmw_pvscsi.c
F: drivers/scsi/vmw_pvscsi.h
VMWARE VIRTUAL PTP CLOCK DRIVER
-M: Deep Shah <sdeep@vmware.com>
+M: Jeff Sipek <jsipek@vmware.com>
R: Ajay Kaher <akaher@vmware.com>
R: Alexey Makhalov <amakhalov@vmware.com>
R: VMware PV-Drivers Reviewers <pv-drivers@vmware.com>
@@ -23713,6 +23725,11 @@ F: Documentation/devicetree/bindings/gpio/xlnx,gpio-xilinx.yaml
F: drivers/gpio/gpio-xilinx.c
F: drivers/gpio/gpio-zynq.c
+XILINX LL TEMAC ETHERNET DRIVER
+L: netdev@vger.kernel.org
+S: Orphan
+F: drivers/net/ethernet/xilinx/ll_temac*
+
XILINX PWM DRIVER
M: Sean Anderson <sean.anderson@seco.com>
S: Maintained
diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
index 6a1c9fca5260..1d672457d02f 100644
--- a/arch/arm/net/bpf_jit_32.c
+++ b/arch/arm/net/bpf_jit_32.c
@@ -2,6 +2,7 @@
/*
* Just-In-Time compiler for eBPF filters on 32bit ARM
*
+ * Copyright (c) 2023 Puranjay Mohan <puranjay12@gmail.com>
* Copyright (c) 2017 Shubham Bansal <illusionist.neo@gmail.com>
* Copyright (c) 2011 Mircea Gherzan <mgherzan@gmail.com>
*/
@@ -15,6 +16,7 @@
#include <linux/string.h>
#include <linux/slab.h>
#include <linux/if_vlan.h>
+#include <linux/math64.h>
#include <asm/cacheflush.h>
#include <asm/hwcap.h>
@@ -228,6 +230,44 @@ static u32 jit_mod32(u32 dividend, u32 divisor)
return dividend % divisor;
}
+static s32 jit_sdiv32(s32 dividend, s32 divisor)
+{
+ return dividend / divisor;
+}
+
+static s32 jit_smod32(s32 dividend, s32 divisor)
+{
+ return dividend % divisor;
+}
+
+/* Wrappers for 64-bit div/mod */
+static u64 jit_udiv64(u64 dividend, u64 divisor)
+{
+ return div64_u64(dividend, divisor);
+}
+
+static u64 jit_mod64(u64 dividend, u64 divisor)
+{
+ u64 rem;
+
+ div64_u64_rem(dividend, divisor, &rem);
+ return rem;
+}
+
+static s64 jit_sdiv64(s64 dividend, s64 divisor)
+{
+ return div64_s64(dividend, divisor);
+}
+
+static s64 jit_smod64(s64 dividend, s64 divisor)
+{
+ u64 q;
+
+ q = div64_s64(dividend, divisor);
+
+ return dividend - q * divisor;
+}
+
static inline void _emit(int cond, u32 inst, struct jit_ctx *ctx)
{
inst |= (cond << 28);
@@ -333,6 +373,9 @@ static u32 arm_bpf_ldst_imm8(u32 op, u8 rt, u8 rn, s16 imm8)
#define ARM_LDRD_I(rt, rn, off) arm_bpf_ldst_imm8(ARM_INST_LDRD_I, rt, rn, off)
#define ARM_LDRH_I(rt, rn, off) arm_bpf_ldst_imm8(ARM_INST_LDRH_I, rt, rn, off)
+#define ARM_LDRSH_I(rt, rn, off) arm_bpf_ldst_imm8(ARM_INST_LDRSH_I, rt, rn, off)
+#define ARM_LDRSB_I(rt, rn, off) arm_bpf_ldst_imm8(ARM_INST_LDRSB_I, rt, rn, off)
+
#define ARM_STR_I(rt, rn, off) arm_bpf_ldst_imm12(ARM_INST_STR_I, rt, rn, off)
#define ARM_STRB_I(rt, rn, off) arm_bpf_ldst_imm12(ARM_INST_STRB_I, rt, rn, off)
#define ARM_STRD_I(rt, rn, off) arm_bpf_ldst_imm8(ARM_INST_STRD_I, rt, rn, off)
@@ -474,17 +517,18 @@ static inline int epilogue_offset(const struct jit_ctx *ctx)
return to - from - 2;
}
-static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
+static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op, u8 sign)
{
const int exclude_mask = BIT(ARM_R0) | BIT(ARM_R1);
const s8 *tmp = bpf2a32[TMP_REG_1];
+ u32 dst;
#if __LINUX_ARM_ARCH__ == 7
if (elf_hwcap & HWCAP_IDIVA) {
- if (op == BPF_DIV)
- emit(ARM_UDIV(rd, rm, rn), ctx);
- else {
- emit(ARM_UDIV(ARM_IP, rm, rn), ctx);
+ if (op == BPF_DIV) {
+ emit(sign ? ARM_SDIV(rd, rm, rn) : ARM_UDIV(rd, rm, rn), ctx);
+ } else {
+ emit(sign ? ARM_SDIV(ARM_IP, rm, rn) : ARM_UDIV(ARM_IP, rm, rn), ctx);
emit(ARM_MLS(rd, rn, ARM_IP, rm), ctx);
}
return;
@@ -512,8 +556,19 @@ static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
emit(ARM_PUSH(CALLER_MASK & ~exclude_mask), ctx);
/* Call appropriate function */
- emit_mov_i(ARM_IP, op == BPF_DIV ?
- (u32)jit_udiv32 : (u32)jit_mod32, ctx);
+ if (sign) {
+ if (op == BPF_DIV)
+ dst = (u32)jit_sdiv32;
+ else
+ dst = (u32)jit_smod32;
+ } else {
+ if (op == BPF_DIV)
+ dst = (u32)jit_udiv32;
+ else
+ dst = (u32)jit_mod32;
+ }
+
+ emit_mov_i(ARM_IP, dst, ctx);
emit_blx_r(ARM_IP, ctx);
/* Restore caller-saved registers from stack */
@@ -530,6 +585,78 @@ static inline void emit_udivmod(u8 rd, u8 rm, u8 rn, struct jit_ctx *ctx, u8 op)
emit(ARM_MOV_R(ARM_R0, tmp[1]), ctx);
}
+static inline void emit_udivmod64(const s8 *rd, const s8 *rm, const s8 *rn, struct jit_ctx *ctx,
+ u8 op, u8 sign)
+{
+ u32 dst;
+
+ /* Push caller-saved registers on stack */
+ emit(ARM_PUSH(CALLER_MASK), ctx);
+
+ /*
+ * As we are implementing 64-bit div/mod as function calls, we need to put the dividend in
+ * R0-R1 and the divisor in R2-R3. As we have already pushed these registers on the stack,
+ * we can recover them later after returning from the function call.
+ */
+ if (rm[1] != ARM_R0 || rn[1] != ARM_R2) {
+ /*
+ * Move Rm to {R1, R0} if it is not already there.
+ */
+ if (rm[1] != ARM_R0) {
+ if (rn[1] == ARM_R0)
+ emit(ARM_PUSH(BIT(ARM_R0) | BIT(ARM_R1)), ctx);
+ emit(ARM_MOV_R(ARM_R1, rm[0]), ctx);
+ emit(ARM_MOV_R(ARM_R0, rm[1]), ctx);
+ if (rn[1] == ARM_R0) {
+ emit(ARM_POP(BIT(ARM_R2) | BIT(ARM_R3)), ctx);
+ goto cont;
+ }
+ }
+ /*
+ * Move Rn to {R3, R2} if it is not already there.
+ */
+ if (rn[1] != ARM_R2) {
+ emit(ARM_MOV_R(ARM_R3, rn[0]), ctx);
+ emit(ARM_MOV_R(ARM_R2, rn[1]), ctx);
+ }
+ }
+
+cont:
+
+ /* Call appropriate function */
+ if (sign) {
+ if (op == BPF_DIV)
+ dst = (u32)jit_sdiv64;
+ else
+ dst = (u32)jit_smod64;
+ } else {
+ if (op == BPF_DIV)
+ dst = (u32)jit_udiv64;
+ else
+ dst = (u32)jit_mod64;
+ }
+
+ emit_mov_i(ARM_IP, dst, ctx);
+ emit_blx_r(ARM_IP, ctx);
+
+ /* Save return value */
+ if (rd[1] != ARM_R0) {
+ emit(ARM_MOV_R(rd[0], ARM_R1), ctx);
+ emit(ARM_MOV_R(rd[1], ARM_R0), ctx);
+ }
+
+ /* Recover {R3, R2} and {R1, R0} from stack if they are not Rd */
+ if (rd[1] != ARM_R0 && rd[1] != ARM_R2) {
+ emit(ARM_POP(CALLER_MASK), ctx);
+ } else if (rd[1] != ARM_R0) {
+ emit(ARM_POP(BIT(ARM_R0) | BIT(ARM_R1)), ctx);
+ emit(ARM_ADD_I(ARM_SP, ARM_SP, 8), ctx);
+ } else {
+ emit(ARM_ADD_I(ARM_SP, ARM_SP, 8), ctx);
+ emit(ARM_POP(BIT(ARM_R2) | BIT(ARM_R3)), ctx);
+ }
+}
+
/* Is the translated BPF register on stack? */
static bool is_stacked(s8 reg)
{
@@ -744,12 +871,16 @@ static inline void emit_a32_alu_r64(const bool is64, const s8 dst[],
}
/* dst = src (4 bytes)*/
-static inline void emit_a32_mov_r(const s8 dst, const s8 src,
+static inline void emit_a32_mov_r(const s8 dst, const s8 src, const u8 off,
struct jit_ctx *ctx) {
const s8 *tmp = bpf2a32[TMP_REG_1];
s8 rt;
rt = arm_bpf_get_reg32(src, tmp[0], ctx);
+ if (off && off != 32) {
+ emit(ARM_LSL_I(rt, rt, 32 - off), ctx);
+ emit(ARM_ASR_I(rt, rt, 32 - off), ctx);
+ }
arm_bpf_put_reg32(dst, rt, ctx);
}
@@ -758,15 +889,15 @@ static inline void emit_a32_mov_r64(const bool is64, const s8 dst[],
const s8 src[],
struct jit_ctx *ctx) {
if (!is64) {
- emit_a32_mov_r(dst_lo, src_lo, ctx);
+ emit_a32_mov_r(dst_lo, src_lo, 0, ctx);
if (!ctx->prog->aux->verifier_zext)
/* Zero out high 4 bytes */
emit_a32_mov_i(dst_hi, 0, ctx);
} else if (__LINUX_ARM_ARCH__ < 6 &&
ctx->cpu_architecture < CPU_ARCH_ARMv5TE) {
/* complete 8 byte move */
- emit_a32_mov_r(dst_lo, src_lo, ctx);
- emit_a32_mov_r(dst_hi, src_hi, ctx);
+ emit_a32_mov_r(dst_lo, src_lo, 0, ctx);
+ emit_a32_mov_r(dst_hi, src_hi, 0, ctx);
} else if (is_stacked(src_lo) && is_stacked(dst_lo)) {
const u8 *tmp = bpf2a32[TMP_REG_1];
@@ -782,6 +913,24 @@ static inline void emit_a32_mov_r64(const bool is64, const s8 dst[],
}
}
+/* dst = (signed)src */
+static inline void emit_a32_movsx_r64(const bool is64, const u8 off, const s8 dst[], const s8 src[],
+ struct jit_ctx *ctx) {
+ const s8 *tmp = bpf2a32[TMP_REG_1];
+ const s8 *rt;
+
+ rt = arm_bpf_get_reg64(dst, tmp, ctx);
+
+ emit_a32_mov_r(dst_lo, src_lo, off, ctx);
+ if (!is64) {
+ if (!ctx->prog->aux->verifier_zext)
+ /* Zero out high 4 bytes */
+ emit_a32_mov_i(dst_hi, 0, ctx);
+ } else {
+ emit(ARM_ASR_I(rt[0], rt[1], 31), ctx);
+ }
+}
+
/* Shift operations */
static inline void emit_a32_alu_i(const s8 dst, const u32 val,
struct jit_ctx *ctx, const u8 op) {
@@ -1026,6 +1175,24 @@ static bool is_ldst_imm(s16 off, const u8 size)
return -off_max <= off && off <= off_max;
}
+static bool is_ldst_imm8(s16 off, const u8 size)
+{
+ s16 off_max = 0;
+
+ switch (size) {
+ case BPF_B:
+ off_max = 0xff;
+ break;
+ case BPF_W:
+ off_max = 0xfff;
+ break;
+ case BPF_H:
+ off_max = 0xff;
+ break;
+ }
+ return -off_max <= off && off <= off_max;
+}
+
/* *(size *)(dst + off) = src */
static inline void emit_str_r(const s8 dst, const s8 src[],
s16 off, struct jit_ctx *ctx, const u8 sz){
@@ -1105,6 +1272,50 @@ static inline void emit_ldx_r(const s8 dst[], const s8 src,
arm_bpf_put_reg64(dst, rd, ctx);
}
+/* dst = *(signed size*)(src + off) */
+static inline void emit_ldsx_r(const s8 dst[], const s8 src,
+ s16 off, struct jit_ctx *ctx, const u8 sz){
+ const s8 *tmp = bpf2a32[TMP_REG_1];
+ const s8 *rd = is_stacked(dst_lo) ? tmp : dst;
+ s8 rm = src;
+ int add_off;
+
+ if (!is_ldst_imm8(off, sz)) {
+ /*
+ * offset does not fit in the load/store immediate,
+ * construct an ADD instruction to apply the offset.
+ */
+ add_off = imm8m(off);
+ if (add_off > 0) {
+ emit(ARM_ADD_I(tmp[0], src, add_off), ctx);
+ rm = tmp[0];
+ } else {
+ emit_a32_mov_i(tmp[0], off, ctx);
+ emit(ARM_ADD_R(tmp[0], tmp[0], src), ctx);
+ rm = tmp[0];
+ }
+ off = 0;
+ }
+
+ switch (sz) {
+ case BPF_B:
+ /* Load a Byte with sign extension*/
+ emit(ARM_LDRSB_I(rd[1], rm, off), ctx);
+ break;
+ case BPF_H:
+ /* Load a HalfWord with sign extension*/
+ emit(ARM_LDRSH_I(rd[1], rm, off), ctx);
+ break;
+ case BPF_W:
+ /* Load a Word*/
+ emit(ARM_LDR_I(rd[1], rm, off), ctx);
+ break;
+ }
+ /* Carry the sign extension to upper 32 bits */
+ emit(ARM_ASR_I(rd[0], rd[1], 31), ctx);
+ arm_bpf_put_reg64(dst, rd, ctx);
+}
+
/* Arithmatic Operation */
static inline void emit_ar_r(const u8 rd, const u8 rt, const u8 rm,
const u8 rn, struct jit_ctx *ctx, u8 op,
@@ -1385,7 +1596,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
emit_a32_mov_i(dst_hi, 0, ctx);
break;
}
- emit_a32_mov_r64(is64, dst, src, ctx);
+ if (insn->off)
+ emit_a32_movsx_r64(is64, insn->off, dst, src, ctx);
+ else
+ emit_a32_mov_r64(is64, dst, src, ctx);
break;
case BPF_K:
/* Sign-extend immediate value to destination reg */
@@ -1461,7 +1675,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
rt = src_lo;
break;
}
- emit_udivmod(rd_lo, rd_lo, rt, ctx, BPF_OP(code));
+ emit_udivmod(rd_lo, rd_lo, rt, ctx, BPF_OP(code), off);
arm_bpf_put_reg32(dst_lo, rd_lo, ctx);
if (!ctx->prog->aux->verifier_zext)
emit_a32_mov_i(dst_hi, 0, ctx);
@@ -1470,7 +1684,19 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
case BPF_ALU64 | BPF_DIV | BPF_X:
case BPF_ALU64 | BPF_MOD | BPF_K:
case BPF_ALU64 | BPF_MOD | BPF_X:
- goto notyet;
+ rd = arm_bpf_get_reg64(dst, tmp2, ctx);
+ switch (BPF_SRC(code)) {
+ case BPF_X:
+ rs = arm_bpf_get_reg64(src, tmp, ctx);
+ break;
+ case BPF_K:
+ rs = tmp;
+ emit_a32_mov_se_i64(is64, rs, imm, ctx);
+ break;
+ }
+ emit_udivmod64(rd, rd, rs, ctx, BPF_OP(code), off);
+ arm_bpf_put_reg64(dst, rd, ctx);
+ break;
/* dst = dst << imm */
/* dst = dst >> imm */
/* dst = dst >> imm (signed) */
@@ -1545,10 +1771,12 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
break;
/* dst = htole(dst) */
/* dst = htobe(dst) */
- case BPF_ALU | BPF_END | BPF_FROM_LE:
- case BPF_ALU | BPF_END | BPF_FROM_BE:
+ case BPF_ALU | BPF_END | BPF_FROM_LE: /* also BPF_TO_LE */
+ case BPF_ALU | BPF_END | BPF_FROM_BE: /* also BPF_TO_BE */
+ /* dst = bswap(dst) */
+ case BPF_ALU64 | BPF_END | BPF_FROM_LE: /* also BPF_TO_LE */
rd = arm_bpf_get_reg64(dst, tmp, ctx);
- if (BPF_SRC(code) == BPF_FROM_LE)
+ if (BPF_SRC(code) == BPF_FROM_LE && BPF_CLASS(code) != BPF_ALU64)
goto emit_bswap_uxt;
switch (imm) {
case 16:
@@ -1603,8 +1831,15 @@ exit:
case BPF_LDX | BPF_MEM | BPF_H:
case BPF_LDX | BPF_MEM | BPF_B:
case BPF_LDX | BPF_MEM | BPF_DW:
+ /* LDSX: dst = *(signed size *)(src + off) */
+ case BPF_LDX | BPF_MEMSX | BPF_B:
+ case BPF_LDX | BPF_MEMSX | BPF_H:
+ case BPF_LDX | BPF_MEMSX | BPF_W:
rn = arm_bpf_get_reg32(src_lo, tmp2[1], ctx);
- emit_ldx_r(dst, rn, off, ctx, BPF_SIZE(code));
+ if (BPF_MODE(insn->code) == BPF_MEMSX)
+ emit_ldsx_r(dst, rn, off, ctx, BPF_SIZE(code));
+ else
+ emit_ldx_r(dst, rn, off, ctx, BPF_SIZE(code));
break;
/* speculation barrier */
case BPF_ST | BPF_NOSPEC:
@@ -1761,10 +1996,15 @@ go_jmp:
break;
/* JMP OFF */
case BPF_JMP | BPF_JA:
+ case BPF_JMP32 | BPF_JA:
{
- if (off == 0)
+ if (BPF_CLASS(code) == BPF_JMP32 && imm != 0)
+ jmp_offset = bpf2a32_offset(i + imm, i, ctx);
+ else if (BPF_CLASS(code) == BPF_JMP && off != 0)
+ jmp_offset = bpf2a32_offset(i + off, i, ctx);
+ else
break;
- jmp_offset = bpf2a32_offset(i+off, i, ctx);
+
check_imm24(jmp_offset);
emit(ARM_B(jmp_offset), ctx);
break;
diff --git a/arch/arm/net/bpf_jit_32.h b/arch/arm/net/bpf_jit_32.h
index e0b593a1498d..438f0e1f91a0 100644
--- a/arch/arm/net/bpf_jit_32.h
+++ b/arch/arm/net/bpf_jit_32.h
@@ -79,9 +79,11 @@
#define ARM_INST_LDST__IMM12 0x00000fff
#define ARM_INST_LDRB_I 0x05500000
#define ARM_INST_LDRB_R 0x07d00000
+#define ARM_INST_LDRSB_I 0x015000d0
#define ARM_INST_LDRD_I 0x014000d0
#define ARM_INST_LDRH_I 0x015000b0
#define ARM_INST_LDRH_R 0x019000b0
+#define ARM_INST_LDRSH_I 0x015000f0
#define ARM_INST_LDR_I 0x05100000
#define ARM_INST_LDR_R 0x07900000
@@ -137,6 +139,7 @@
#define ARM_INST_TST_I 0x03100000
#define ARM_INST_UDIV 0x0730f010
+#define ARM_INST_SDIV 0x0710f010
#define ARM_INST_UMULL 0x00800090
@@ -265,6 +268,7 @@
#define ARM_TST_I(rn, imm) _AL3_I(ARM_INST_TST, 0, rn, imm)
#define ARM_UDIV(rd, rn, rm) (ARM_INST_UDIV | (rd) << 16 | (rn) | (rm) << 8)
+#define ARM_SDIV(rd, rn, rm) (ARM_INST_SDIV | (rd) << 16 | (rn) | (rm) << 8)
#define ARM_UMULL(rd_lo, rd_hi, rn, rm) (ARM_INST_UMULL | (rd_hi) << 16 \
| (rd_lo) << 12 | (rm) << 8 | rn)
diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
index 5fc613d24151..49cbdb55b4b3 100644
--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
@@ -13,7 +13,7 @@
/ {
aliases {
ethernet0 = &eth0;
- /* for dsa slave device */
+ /* for DSA user port device */
ethernet1 = &switch0port1;
ethernet2 = &switch0port2;
ethernet3 = &switch0port3;
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index 150d1c6543f7..7d4af64e3982 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -288,7 +288,7 @@ static bool is_lsi_offset(int offset, int scale)
static int build_prologue(struct jit_ctx *ctx, bool ebpf_from_cbpf)
{
const struct bpf_prog *prog = ctx->prog;
- const bool is_main_prog = prog->aux->func_idx == 0;
+ const bool is_main_prog = !bpf_is_subprog(prog);
const u8 r6 = bpf2a64[BPF_REG_6];
const u8 r7 = bpf2a64[BPF_REG_7];
const u8 r8 = bpf2a64[BPF_REG_8];
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index e507692e51e7..bf06b7283f0c 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -556,7 +556,7 @@ static void bpf_jit_prologue(struct bpf_jit *jit, struct bpf_prog *fp,
EMIT6_PCREL_RILC(0xc0040000, 0, jit->prologue_plt);
jit->prologue_plt_ret = jit->prg;
- if (fp->aux->func_idx == 0) {
+ if (!bpf_is_subprog(fp)) {
/* Initialize the tail call counter in the main program. */
/* xc STK_OFF_TCCNT(4,%r15),STK_OFF_TCCNT(%r15) */
_EMIT6(0xd703f000 | STK_OFF_TCCNT, 0xf000 | STK_OFF_TCCNT);
@@ -670,15 +670,18 @@ static void bpf_jit_epilogue(struct bpf_jit *jit, u32 stack_depth)
static int get_probe_mem_regno(const u8 *insn)
{
/*
- * insn must point to llgc, llgh, llgf or lg, which have destination
- * register at the same position.
+ * insn must point to llgc, llgh, llgf, lg, lgb, lgh or lgf, which have
+ * destination register at the same position.
*/
- if (insn[0] != 0xe3) /* common llgc, llgh, llgf and lg prefix */
+ if (insn[0] != 0xe3) /* common prefix */
return -1;
if (insn[5] != 0x90 && /* llgc */
insn[5] != 0x91 && /* llgh */
insn[5] != 0x16 && /* llgf */
- insn[5] != 0x04) /* lg */
+ insn[5] != 0x04 && /* lg */
+ insn[5] != 0x77 && /* lgb */
+ insn[5] != 0x15 && /* lgh */
+ insn[5] != 0x14) /* lgf */
return -1;
return insn[1] >> 4;
}
@@ -776,6 +779,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
int i, bool extra_pass, u32 stack_depth)
{
struct bpf_insn *insn = &fp->insnsi[i];
+ s16 branch_oc_off = insn->off;
u32 dst_reg = insn->dst_reg;
u32 src_reg = insn->src_reg;
int last, insn_count = 1;
@@ -788,22 +792,55 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
int err;
if (BPF_CLASS(insn->code) == BPF_LDX &&
- BPF_MODE(insn->code) == BPF_PROBE_MEM)
+ (BPF_MODE(insn->code) == BPF_PROBE_MEM ||
+ BPF_MODE(insn->code) == BPF_PROBE_MEMSX))
probe_prg = jit->prg;
switch (insn->code) {
/*
* BPF_MOV
*/
- case BPF_ALU | BPF_MOV | BPF_X: /* dst = (u32) src */
- /* llgfr %dst,%src */
- EMIT4(0xb9160000, dst_reg, src_reg);
- if (insn_is_zext(&insn[1]))
- insn_count = 2;
+ case BPF_ALU | BPF_MOV | BPF_X:
+ switch (insn->off) {
+ case 0: /* DST = (u32) SRC */
+ /* llgfr %dst,%src */
+ EMIT4(0xb9160000, dst_reg, src_reg);
+ if (insn_is_zext(&insn[1]))
+ insn_count = 2;
+ break;
+ case 8: /* DST = (u32)(s8) SRC */
+ /* lbr %dst,%src */
+ EMIT4(0xb9260000, dst_reg, src_reg);
+ /* llgfr %dst,%dst */
+ EMIT4(0xb9160000, dst_reg, dst_reg);
+ break;
+ case 16: /* DST = (u32)(s16) SRC */
+ /* lhr %dst,%src */
+ EMIT4(0xb9270000, dst_reg, src_reg);
+ /* llgfr %dst,%dst */
+ EMIT4(0xb9160000, dst_reg, dst_reg);
+ break;
+ }
break;
- case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
- /* lgr %dst,%src */
- EMIT4(0xb9040000, dst_reg, src_reg);
+ case BPF_ALU64 | BPF_MOV | BPF_X:
+ switch (insn->off) {
+ case 0: /* DST = SRC */
+ /* lgr %dst,%src */
+ EMIT4(0xb9040000, dst_reg, src_reg);
+ break;
+ case 8: /* DST = (s8) SRC */
+ /* lgbr %dst,%src */
+ EMIT4(0xb9060000, dst_reg, src_reg);
+ break;
+ case 16: /* DST = (s16) SRC */
+ /* lghr %dst,%src */
+ EMIT4(0xb9070000, dst_reg, src_reg);
+ break;
+ case 32: /* DST = (s32) SRC */
+ /* lgfr %dst,%src */
+ EMIT4(0xb9140000, dst_reg, src_reg);
+ break;
+ }
break;
case BPF_ALU | BPF_MOV | BPF_K: /* dst = (u32) imm */
/* llilf %dst,imm */
@@ -912,66 +949,115 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
/*
* BPF_DIV / BPF_MOD
*/
- case BPF_ALU | BPF_DIV | BPF_X: /* dst = (u32) dst / (u32) src */
- case BPF_ALU | BPF_MOD | BPF_X: /* dst = (u32) dst % (u32) src */
+ case BPF_ALU | BPF_DIV | BPF_X:
+ case BPF_ALU | BPF_MOD | BPF_X:
{
int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
- /* lhi %w0,0 */
- EMIT4_IMM(0xa7080000, REG_W0, 0);
- /* lr %w1,%dst */
- EMIT2(0x1800, REG_W1, dst_reg);
- /* dlr %w0,%src */
- EMIT4(0xb9970000, REG_W0, src_reg);
+ switch (off) {
+ case 0: /* dst = (u32) dst {/,%} (u32) src */
+ /* xr %w0,%w0 */
+ EMIT2(0x1700, REG_W0, REG_W0);
+ /* lr %w1,%dst */
+ EMIT2(0x1800, REG_W1, dst_reg);
+ /* dlr %w0,%src */
+ EMIT4(0xb9970000, REG_W0, src_reg);
+ break;
+ case 1: /* dst = (u32) ((s32) dst {/,%} (s32) src) */
+ /* lgfr %r1,%dst */
+ EMIT4(0xb9140000, REG_W1, dst_reg);
+ /* dsgfr %r0,%src */
+ EMIT4(0xb91d0000, REG_W0, src_reg);
+ break;
+ }
/* llgfr %dst,%rc */
EMIT4(0xb9160000, dst_reg, rc_reg);
if (insn_is_zext(&insn[1]))
insn_count = 2;
break;
}
- case BPF_ALU64 | BPF_DIV | BPF_X: /* dst = dst / src */
- case BPF_ALU64 | BPF_MOD | BPF_X: /* dst = dst % src */
+ case BPF_ALU64 | BPF_DIV | BPF_X:
+ case BPF_ALU64 | BPF_MOD | BPF_X:
{
int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
- /* lghi %w0,0 */
- EMIT4_IMM(0xa7090000, REG_W0, 0);
- /* lgr %w1,%dst */
- EMIT4(0xb9040000, REG_W1, dst_reg);
- /* dlgr %w0,%dst */
- EMIT4(0xb9870000, REG_W0, src_reg);
+ switch (off) {
+ case 0: /* dst = dst {/,%} src */
+ /* lghi %w0,0 */
+ EMIT4_IMM(0xa7090000, REG_W0, 0);
+ /* lgr %w1,%dst */
+ EMIT4(0xb9040000, REG_W1, dst_reg);
+ /* dlgr %w0,%src */
+ EMIT4(0xb9870000, REG_W0, src_reg);
+ break;
+ case 1: /* dst = (s64) dst {/,%} (s64) src */
+ /* lgr %w1,%dst */
+ EMIT4(0xb9040000, REG_W1, dst_reg);
+ /* dsgr %w0,%src */
+ EMIT4(0xb90d0000, REG_W0, src_reg);
+ break;
+ }
/* lgr %dst,%rc */
EMIT4(0xb9040000, dst_reg, rc_reg);
break;
}
- case BPF_ALU | BPF_DIV | BPF_K: /* dst = (u32) dst / (u32) imm */
- case BPF_ALU | BPF_MOD | BPF_K: /* dst = (u32) dst % (u32) imm */
+ case BPF_ALU | BPF_DIV | BPF_K:
+ case BPF_ALU | BPF_MOD | BPF_K:
{
int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
if (imm == 1) {
if (BPF_OP(insn->code) == BPF_MOD)
- /* lhgi %dst,0 */
+ /* lghi %dst,0 */
EMIT4_IMM(0xa7090000, dst_reg, 0);
else
EMIT_ZERO(dst_reg);
break;
}
- /* lhi %w0,0 */
- EMIT4_IMM(0xa7080000, REG_W0, 0);
- /* lr %w1,%dst */
- EMIT2(0x1800, REG_W1, dst_reg);
if (!is_first_pass(jit) && can_use_ldisp_for_lit32(jit)) {
- /* dl %w0,<d(imm)>(%l) */
- EMIT6_DISP_LH(0xe3000000, 0x0097, REG_W0, REG_0, REG_L,
- EMIT_CONST_U32(imm));
+ switch (off) {
+ case 0: /* dst = (u32) dst {/,%} (u32) imm */
+ /* xr %w0,%w0 */
+ EMIT2(0x1700, REG_W0, REG_W0);
+ /* lr %w1,%dst */
+ EMIT2(0x1800, REG_W1, dst_reg);
+ /* dl %w0,<d(imm)>(%l) */
+ EMIT6_DISP_LH(0xe3000000, 0x0097, REG_W0, REG_0,
+ REG_L, EMIT_CONST_U32(imm));
+ break;
+ case 1: /* dst = (s32) dst {/,%} (s32) imm */
+ /* lgfr %r1,%dst */
+ EMIT4(0xb9140000, REG_W1, dst_reg);
+ /* dsgf %r0,<d(imm)>(%l) */
+ EMIT6_DISP_LH(0xe3000000, 0x001d, REG_W0, REG_0,
+ REG_L, EMIT_CONST_U32(imm));
+ break;
+ }
} else {
- /* lgfrl %dst,imm */
- EMIT6_PCREL_RILB(0xc40c0000, dst_reg,
- _EMIT_CONST_U32(imm));
- jit->seen |= SEEN_LITERAL;
- /* dlr %w0,%dst */
- EMIT4(0xb9970000, REG_W0, dst_reg);
+ switch (off) {
+ case 0: /* dst = (u32) dst {/,%} (u32) imm */
+ /* xr %w0,%w0 */
+ EMIT2(0x1700, REG_W0, REG_W0);
+ /* lr %w1,%dst */
+ EMIT2(0x1800, REG_W1, dst_reg);
+ /* lrl %dst,imm */
+ EMIT6_PCREL_RILB(0xc40d0000, dst_reg,
+ _EMIT_CONST_U32(imm));
+ jit->seen |= SEEN_LITERAL;
+ /* dlr %w0,%dst */
+ EMIT4(0xb9970000, REG_W0, dst_reg);
+ break;
+ case 1: /* dst = (s32) dst {/,%} (s32) imm */
+ /* lgfr %w1,%dst */
+ EMIT4(0xb9140000, REG_W1, dst_reg);
+ /* lgfrl %dst,imm */
+ EMIT6_PCREL_RILB(0xc40c0000, dst_reg,
+ _EMIT_CONST_U32(imm));
+ jit->seen |= SEEN_LITERAL;
+ /* dsgr %w0,%dst */
+ EMIT4(0xb90d0000, REG_W0, dst_reg);
+ break;
+ }
}
/* llgfr %dst,%rc */
EMIT4(0xb9160000, dst_reg, rc_reg);
@@ -979,8 +1065,8 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
insn_count = 2;
break;
}
- case BPF_ALU64 | BPF_DIV | BPF_K: /* dst = dst / imm */
- case BPF_ALU64 | BPF_MOD | BPF_K: /* dst = dst % imm */
+ case BPF_ALU64 | BPF_DIV | BPF_K:
+ case BPF_ALU64 | BPF_MOD | BPF_K:
{
int rc_reg = BPF_OP(insn->code) == BPF_DIV ? REG_W1 : REG_W0;
@@ -990,21 +1076,50 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
EMIT4_IMM(0xa7090000, dst_reg, 0);
break;
}
- /* lghi %w0,0 */
- EMIT4_IMM(0xa7090000, REG_W0, 0);
- /* lgr %w1,%dst */
- EMIT4(0xb9040000, REG_W1, dst_reg);
if (!is_first_pass(jit) && can_use_ldisp_for_lit64(jit)) {
- /* dlg %w0,<d(imm)>(%l) */
- EMIT6_DISP_LH(0xe3000000, 0x0087, REG_W0, REG_0, REG_L,
- EMIT_CONST_U64(imm));
+ switch (off) {
+ case 0: /* dst = dst {/,%} imm */
+ /* lghi %w0,0 */
+ EMIT4_IMM(0xa7090000, REG_W0, 0);
+ /* lgr %w1,%dst */
+ EMIT4(0xb9040000, REG_W1, dst_reg);
+ /* dlg %w0,<d(imm)>(%l) */
+ EMIT6_DISP_LH(0xe3000000, 0x0087, REG_W0, REG_0,
+ REG_L, EMIT_CONST_U64(imm));
+ break;
+ case 1: /* dst = (s64) dst {/,%} (s64) imm */
+ /* lgr %w1,%dst */
+ EMIT4(0xb9040000, REG_W1, dst_reg);
+ /* dsg %w0,<d(imm)>(%l) */
+ EMIT6_DISP_LH(0xe3000000, 0x000d, REG_W0, REG_0,
+ REG_L, EMIT_CONST_U64(imm));
+ break;
+ }
} else {
- /* lgrl %dst,imm */
- EMIT6_PCREL_RILB(0xc4080000, dst_reg,
- _EMIT_CONST_U64(imm));
- jit->seen |= SEEN_LITERAL;
- /* dlgr %w0,%dst */
- EMIT4(0xb9870000, REG_W0, dst_reg);
+ switch (off) {
+ case 0: /* dst = dst {/,%} imm */
+ /* lghi %w0,0 */
+ EMIT4_IMM(0xa7090000, REG_W0, 0);
+ /* lgr %w1,%dst */
+ EMIT4(0xb9040000, REG_W1, dst_reg);
+ /* lgrl %dst,imm */
+ EMIT6_PCREL_RILB(0xc4080000, dst_reg,
+ _EMIT_CONST_U64(imm));
+ jit->seen |= SEEN_LITERAL;
+ /* dlgr %w0,%dst */
+ EMIT4(0xb9870000, REG_W0, dst_reg);
+ break;
+ case 1: /* dst = (s64) dst {/,%} (s64) imm */
+ /* lgr %w1,%dst */
+ EMIT4(0xb9040000, REG_W1, dst_reg);
+ /* lgrl %dst,imm */
+ EMIT6_PCREL_RILB(0xc4080000, dst_reg,
+ _EMIT_CONST_U64(imm));
+ jit->seen |= SEEN_LITERAL;
+ /* dsgr %w0,%dst */
+ EMIT4(0xb90d0000, REG_W0, dst_reg);
+ break;
+ }
}
/* lgr %dst,%rc */
EMIT4(0xb9040000, dst_reg, rc_reg);
@@ -1217,6 +1332,7 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
}
break;
case BPF_ALU | BPF_END | BPF_FROM_LE:
+ case BPF_ALU64 | BPF_END | BPF_FROM_LE:
switch (imm) {
case 16: /* dst = (u16) cpu_to_le16(dst) */
/* lrvr %dst,%dst */
@@ -1374,6 +1490,12 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
if (insn_is_zext(&insn[1]))
insn_count = 2;
break;
+ case BPF_LDX | BPF_MEMSX | BPF_B: /* dst = *(s8 *)(ul) (src + off) */
+ case BPF_LDX | BPF_PROBE_MEMSX | BPF_B:
+ /* lgb %dst,0(off,%src) */
+ EMIT6_DISP_LH(0xe3000000, 0x0077, dst_reg, src_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
case BPF_LDX | BPF_MEM | BPF_H: /* dst = *(u16 *)(ul) (src + off) */
case BPF_LDX | BPF_PROBE_MEM | BPF_H:
/* llgh %dst,0(off,%src) */
@@ -1382,6 +1504,12 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
if (insn_is_zext(&insn[1]))
insn_count = 2;
break;
+ case BPF_LDX | BPF_MEMSX | BPF_H: /* dst = *(s16 *)(ul) (src + off) */
+ case BPF_LDX | BPF_PROBE_MEMSX | BPF_H:
+ /* lgh %dst,0(off,%src) */
+ EMIT6_DISP_LH(0xe3000000, 0x0015, dst_reg, src_reg, REG_0, off);
+ jit->seen |= SEEN_MEM;
+ break;
case BPF_LDX | BPF_MEM | BPF_W: /* dst = *(u32 *)(ul) (src + off) */
case BPF_LDX | BPF_PROBE_MEM | BPF_W:
/* llgf %dst,off(%src) */
@@ -1390,6 +1518,12 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
if (insn_is_zext(&insn[1]))
insn_count = 2;
break;
+ case BPF_LDX | BPF_MEMSX | BPF_W: /* dst = *(s32 *)(ul) (src + off) */
+ case BPF_LDX | BPF_PROBE_MEMSX | BPF_W:
+ /* lgf %dst,off(%src) */
+ jit->seen |= SEEN_MEM;
+ EMIT6_DISP_LH(0xe3000000, 0x0014, dst_reg, src_reg, REG_0, off);
+ break;
case BPF_LDX | BPF_MEM | BPF_DW: /* dst = *(u64 *)(ul) (src + off) */
case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
/* lg %dst,0(off,%src) */
@@ -1570,6 +1704,9 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp,
* instruction itself (loop) and for BPF with offset 0 we
* branch to the instruction behind the branch.
*/
+ case BPF_JMP32 | BPF_JA: /* if (true) */
+ branch_oc_off = imm;
+ fallthrough;
case BPF_JMP | BPF_JA: /* if (true) */
mask = 0xf000; /* j */
goto branch_oc;
@@ -1738,14 +1875,16 @@ branch_xu:
break;
branch_oc:
if (!is_first_pass(jit) &&
- can_use_rel(jit, addrs[i + off + 1])) {
+ can_use_rel(jit, addrs[i + branch_oc_off + 1])) {
/* brc mask,off */
EMIT4_PCREL_RIC(0xa7040000,
- mask >> 12, addrs[i + off + 1]);
+ mask >> 12,
+ addrs[i + branch_oc_off + 1]);
} else {
/* brcl mask,off */
EMIT6_PCREL_RILC(0xc0040000,
- mask >> 12, addrs[i + off + 1]);
+ mask >> 12,
+ addrs[i + branch_oc_off + 1]);
}
break;
}
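
The s390 hunks above add the BPF "cpu v4" instruction forms to the JIT: sign-extending register moves (BPF_MOV with insn->off of 8/16/32), signed division and modulo (off == 1), sign-extending loads (BPF_MEMSX), the 64-bit BPF_END | BPF_FROM_LE byte swap, and the 32-bit-offset unconditional jump handled via branch_oc_off. As a rough illustration (not part of this series), a BPF-C fragment like the one below, built for the BPF target with a cpu-v4 capable clang (clang -O2 -target bpf -mcpu=v4 -c sext.bpf.c), is expected to compile into exactly these instruction forms; the mapping in the comments mirrors the emitter cases added above.

struct sext_ctx {
	signed char b;
	short h;
	int w;
	long long d;
};

long long sext_and_sdiv(struct sext_ctx *c, long long div)
{
	long long acc;

	acc  = c->b;			/* BPF_LDX | BPF_MEMSX | BPF_B        -> lgb  */
	acc += c->h;			/* BPF_LDX | BPF_MEMSX | BPF_H        -> lgh  */
	acc += c->w;			/* BPF_LDX | BPF_MEMSX | BPF_W        -> lgf  */
	acc += (long long)(int)c->d;	/* BPF_ALU64 | BPF_MOV, off == 32     -> lgfr */
	if (!div)
		return acc;
	return acc / div;		/* BPF_ALU64 | BPF_DIV | BPF_X, off 1 -> dsgr */
}
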
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index a5930042139d..8c10d9abc239 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -16,6 +16,9 @@
#include <asm/set_memory.h>
#include <asm/nospec-branch.h>
#include <asm/text-patching.h>
+#include <asm/unwind.h>
+
+static bool all_callee_regs_used[4] = {true, true, true, true};
static u8 *emit_code(u8 *ptr, u32 bytes, unsigned int len)
{
@@ -255,6 +258,14 @@ struct jit_context {
/* Number of bytes that will be skipped on tailcall */
#define X86_TAIL_CALL_OFFSET (11 + ENDBR_INSN_SIZE)
+static void push_r12(u8 **pprog)
+{
+ u8 *prog = *pprog;
+
+ EMIT2(0x41, 0x54); /* push r12 */
+ *pprog = prog;
+}
+
static void push_callee_regs(u8 **pprog, bool *callee_regs_used)
{
u8 *prog = *pprog;
@@ -270,6 +281,14 @@ static void push_callee_regs(u8 **pprog, bool *callee_regs_used)
*pprog = prog;
}
+static void pop_r12(u8 **pprog)
+{
+ u8 *prog = *pprog;
+
+ EMIT2(0x41, 0x5C); /* pop r12 */
+ *pprog = prog;
+}
+
static void pop_callee_regs(u8 **pprog, bool *callee_regs_used)
{
u8 *prog = *pprog;
@@ -291,7 +310,8 @@ static void pop_callee_regs(u8 **pprog, bool *callee_regs_used)
* while jumping to another program
*/
static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
- bool tail_call_reachable, bool is_subprog)
+ bool tail_call_reachable, bool is_subprog,
+ bool is_exception_cb)
{
u8 *prog = *pprog;
@@ -303,12 +323,30 @@ static void emit_prologue(u8 **pprog, u32 stack_depth, bool ebpf_from_cbpf,
prog += X86_PATCH_SIZE;
if (!ebpf_from_cbpf) {
if (tail_call_reachable && !is_subprog)
+ /* When it's the entry of the whole tailcall context,
+ * zeroing rax means initialising tail_call_cnt.
+ */
EMIT2(0x31, 0xC0); /* xor eax, eax */
else
+ /* Keep the same instruction layout. */
EMIT2(0x66, 0x90); /* nop2 */
}
- EMIT1(0x55); /* push rbp */
- EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
+ /* Exception callback receives FP as third parameter */
+ if (is_exception_cb) {
+ EMIT3(0x48, 0x89, 0xF4); /* mov rsp, rsi */
+ EMIT3(0x48, 0x89, 0xD5); /* mov rbp, rdx */
+ /* The main frame must have exception_boundary as true, so we
+ * first restore those callee-saved regs from stack, before
+ * reusing the stack frame.
+ */
+ pop_callee_regs(&prog, all_callee_regs_used);
+ pop_r12(&prog);
+ /* Reset the stack frame. */
+ EMIT3(0x48, 0x89, 0xEC); /* mov rsp, rbp */
+ } else {
+ EMIT1(0x55); /* push rbp */
+ EMIT3(0x48, 0x89, 0xE5); /* mov rbp, rsp */
+ }
/* X86_TAIL_CALL_OFFSET is here */
EMIT_ENDBR();
@@ -467,7 +505,8 @@ static void emit_return(u8 **pprog, u8 *ip)
* goto *(prog->bpf_func + prologue_size);
* out:
*/
-static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
+static void emit_bpf_tail_call_indirect(struct bpf_prog *bpf_prog,
+ u8 **pprog, bool *callee_regs_used,
u32 stack_depth, u8 *ip,
struct jit_context *ctx)
{
@@ -517,7 +556,12 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
offset = ctx->tail_call_indirect_label - (prog + 2 - start);
EMIT2(X86_JE, offset); /* je out */
- pop_callee_regs(&prog, callee_regs_used);
+ if (bpf_prog->aux->exception_boundary) {
+ pop_callee_regs(&prog, all_callee_regs_used);
+ pop_r12(&prog);
+ } else {
+ pop_callee_regs(&prog, callee_regs_used);
+ }
EMIT1(0x58); /* pop rax */
if (stack_depth)
@@ -541,7 +585,8 @@ static void emit_bpf_tail_call_indirect(u8 **pprog, bool *callee_regs_used,
*pprog = prog;
}
-static void emit_bpf_tail_call_direct(struct bpf_jit_poke_descriptor *poke,
+static void emit_bpf_tail_call_direct(struct bpf_prog *bpf_prog,
+ struct bpf_jit_poke_descriptor *poke,
u8 **pprog, u8 *ip,
bool *callee_regs_used, u32 stack_depth,
struct jit_context *ctx)
@@ -570,7 +615,13 @@ static void emit_bpf_tail_call_direct(struct bpf_jit_poke_descriptor *poke,
emit_jump(&prog, (u8 *)poke->tailcall_target + X86_PATCH_SIZE,
poke->tailcall_bypass);
- pop_callee_regs(&prog, callee_regs_used);
+ if (bpf_prog->aux->exception_boundary) {
+ pop_callee_regs(&prog, all_callee_regs_used);
+ pop_r12(&prog);
+ } else {
+ pop_callee_regs(&prog, callee_regs_used);
+ }
+
EMIT1(0x58); /* pop rax */
if (stack_depth)
EMIT3_off32(0x48, 0x81, 0xC4, round_up(stack_depth, 8));
@@ -1018,6 +1069,10 @@ static void emit_shiftx(u8 **pprog, u32 dst_reg, u8 src_reg, bool is64, u8 op)
#define INSN_SZ_DIFF (((addrs[i] - addrs[i - 1]) - (prog - temp)))
+/* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
+#define RESTORE_TAIL_CALL_CNT(stack) \
+ EMIT3_off32(0x48, 0x8B, 0x85, -round_up(stack, 8) - 8)
+
static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image,
int oldproglen, struct jit_context *ctx, bool jmp_padding)
{
@@ -1041,8 +1096,20 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image, u8 *rw_image
emit_prologue(&prog, bpf_prog->aux->stack_depth,
bpf_prog_was_classic(bpf_prog), tail_call_reachable,
- bpf_prog->aux->func_idx != 0);
- push_callee_regs(&prog, callee_regs_used);
+ bpf_is_subprog(bpf_prog), bpf_prog->aux->exception_cb);
+ /* Exception callback will clobber callee regs for its own use, and
+ * restore the original callee regs from main prog's stack frame.
+ */
+ if (bpf_prog->aux->exception_boundary) {
+ /* We also need to save r12, which is not mapped to any BPF
+ * register, as we throw after entry into the kernel, which may
+ * overwrite r12.
+ */
+ push_r12(&prog);
+ push_callee_regs(&prog, all_callee_regs_used);
+ } else {
+ push_callee_regs(&prog, callee_regs_used);
+ }
ilen = prog - temp;
if (rw_image)
@@ -1623,9 +1690,7 @@ st: if (is_imm8(insn->off))
func = (u8 *) __bpf_call_base + imm32;
if (tail_call_reachable) {
- /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
- EMIT3_off32(0x48, 0x8B, 0x85,
- -round_up(bpf_prog->aux->stack_depth, 8) - 8);
+ RESTORE_TAIL_CALL_CNT(bpf_prog->aux->stack_depth);
if (!imm32)
return -EINVAL;
offs = 7 + x86_call_depth_emit_accounting(&prog, func);
@@ -1641,13 +1706,15 @@ st: if (is_imm8(insn->off))
case BPF_JMP | BPF_TAIL_CALL:
if (imm32)
- emit_bpf_tail_call_direct(&bpf_prog->aux->poke_tab[imm32 - 1],
+ emit_bpf_tail_call_direct(bpf_prog,
+ &bpf_prog->aux->poke_tab[imm32 - 1],
&prog, image + addrs[i - 1],
callee_regs_used,
bpf_prog->aux->stack_depth,
ctx);
else
- emit_bpf_tail_call_indirect(&prog,
+ emit_bpf_tail_call_indirect(bpf_prog,
+ &prog,
callee_regs_used,
bpf_prog->aux->stack_depth,
image + addrs[i - 1],
@@ -1900,7 +1967,12 @@ emit_jmp:
seen_exit = true;
/* Update cleanup_addr */
ctx->cleanup_addr = proglen;
- pop_callee_regs(&prog, callee_regs_used);
+ if (bpf_prog->aux->exception_boundary) {
+ pop_callee_regs(&prog, all_callee_regs_used);
+ pop_r12(&prog);
+ } else {
+ pop_callee_regs(&prog, callee_regs_used);
+ }
EMIT1(0xC9); /* leave */
emit_return(&prog, image + addrs[i - 1] + (prog - temp));
break;
@@ -2400,6 +2472,7 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
* [ ... ]
* [ stack_arg2 ]
* RBP - arg_stack_off [ stack_arg1 ]
+ * RSP [ tail_call_cnt ] BPF_TRAMP_F_TAIL_CALL_CTX
*/
/* room for return value of orig_call or fentry prog */
@@ -2464,6 +2537,8 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
else
/* sub rsp, stack_size */
EMIT4(0x48, 0x83, 0xEC, stack_size);
+ if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
+ EMIT1(0x50); /* push rax */
/* mov QWORD PTR [rbp - rbx_off], rbx */
emit_stx(&prog, BPF_DW, BPF_REG_FP, BPF_REG_6, -rbx_off);
@@ -2516,9 +2591,15 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
restore_regs(m, &prog, regs_off);
save_args(m, &prog, arg_stack_off, true);
+ if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
+ /* Before calling the original function, restore the
+ * tail_call_cnt from stack to rax.
+ */
+ RESTORE_TAIL_CALL_CNT(stack_size);
+
if (flags & BPF_TRAMP_F_ORIG_STACK) {
- emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, 8);
- EMIT2(0xff, 0xd0); /* call *rax */
+ emit_ldx(&prog, BPF_DW, BPF_REG_6, BPF_REG_FP, 8);
+ EMIT2(0xff, 0xd3); /* call *rbx */
} else {
/* call original function */
if (emit_rsb_call(&prog, orig_call, prog)) {
@@ -2569,7 +2650,12 @@ int arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *image, void *i
ret = -EINVAL;
goto cleanup;
}
- }
+ } else if (flags & BPF_TRAMP_F_TAIL_CALL_CTX)
+ /* Before running the original function, restore the
+ * tail_call_cnt from stack to rax.
+ */
+ RESTORE_TAIL_CALL_CNT(stack_size);
+
/* restore return value of orig_call or fentry prog back into RAX */
if (save_ret)
emit_ldx(&prog, BPF_DW, BPF_REG_0, BPF_REG_FP, -8);
@@ -2913,3 +2999,29 @@ void bpf_jit_free(struct bpf_prog *prog)
bpf_prog_unlock_free(prog);
}
+
+bool bpf_jit_supports_exceptions(void)
+{
+ /* We unwind through both kernel frames (starting from within bpf_throw
+ * call) and BPF frames. Therefore we require ORC unwinder to be enabled
+ * to walk kernel frames and reach BPF frames in the stack trace.
+ */
+ return IS_ENABLED(CONFIG_UNWINDER_ORC);
+}
+
+void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp), void *cookie)
+{
+#if defined(CONFIG_UNWINDER_ORC)
+ struct unwind_state state;
+ unsigned long addr;
+
+ for (unwind_start(&state, current, NULL, NULL); !unwind_done(&state);
+ unwind_next_frame(&state)) {
+ addr = unwind_get_return_address(&state);
+ if (!addr || !consume_fn(cookie, (u64)addr, (u64)state.sp, (u64)state.bp))
+ break;
+ }
+ return;
+#endif
+ WARN(1, "verification of programs using bpf_throw should have failed\n");
+}
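
The x86 changes above serve two features from this cycle: BPF exceptions (the is_exception_cb prologue path that reuses the main program's stack frame, the extra r12 save/restore around the callee-saved registers, and ORC-based unwinding in arch_bpf_stack_walk()) and a tail_call_cnt restore in trampolines so that combining tail calls with fentry/fexit programs cannot recurse forever. A minimal sketch of the program side is shown below; it assumes the bpf_throw() kfunc declaration used by the in-tree selftests (bpf_experimental.h) and a kernel built with CONFIG_UNWINDER_ORC.

// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

/* bpf_throw() never returns; it unwinds through kernel and BPF frames and
 * ends up in the (default or user-supplied) exception callback, which runs
 * on the main program's frame as set up by the is_exception_cb prologue
 * path above.
 */
extern void bpf_throw(u64 cookie) __ksym;

SEC("tc")
int throw_if_odd(struct __sk_buff *skb)
{
	if (skb->len & 1)
		bpf_throw(42);
	return 0;
}

char _license[] SEC("license") = "GPL";
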
diff --git a/drivers/Kconfig b/drivers/Kconfig
index efb66e25fa2d..8ba3e8b9ad72 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -243,4 +243,6 @@ source "drivers/hte/Kconfig"
source "drivers/cdx/Kconfig"
+source "drivers/dpll/Kconfig"
+
endmenu
diff --git a/drivers/Makefile b/drivers/Makefile
index 1bec7819a837..722d15be0eb7 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -197,5 +197,6 @@ obj-$(CONFIG_PECI) += peci/
obj-$(CONFIG_HTE) += hte/
obj-$(CONFIG_DRM_ACCEL) += accel/
obj-$(CONFIG_CDX_BUS) += cdx/
+obj-$(CONFIG_DPLL) += dpll/
obj-$(CONFIG_S390) += s390/
diff --git a/drivers/atm/fore200e.c b/drivers/atm/fore200e.c
index fb2be3574c26..50d8ce20ae5b 100644
--- a/drivers/atm/fore200e.c
+++ b/drivers/atm/fore200e.c
@@ -36,7 +36,7 @@
#ifdef CONFIG_SBUS
#include <linux/of.h>
-#include <linux/of_device.h>
+#include <linux/platform_device.h>
#include <asm/idprom.h>
#include <asm/openprom.h>
#include <asm/oplib.h>
@@ -2520,18 +2520,12 @@ static int fore200e_init(struct fore200e *fore200e, struct device *parent)
}
#ifdef CONFIG_SBUS
-static const struct of_device_id fore200e_sba_match[];
static int fore200e_sba_probe(struct platform_device *op)
{
- const struct of_device_id *match;
struct fore200e *fore200e;
static int index = 0;
int err;
- match = of_match_device(fore200e_sba_match, &op->dev);
- if (!match)
- return -EINVAL;
-
fore200e = kzalloc(sizeof(struct fore200e), GFP_KERNEL);
if (!fore200e)
return -ENOMEM;
diff --git a/drivers/bluetooth/btmtksdio.c b/drivers/bluetooth/btmtksdio.c
index f9a3444753c2..ff4868c83cd8 100644
--- a/drivers/bluetooth/btmtksdio.c
+++ b/drivers/bluetooth/btmtksdio.c
@@ -118,6 +118,7 @@ MODULE_DEVICE_TABLE(sdio, btmtksdio_table);
#define BTMTKSDIO_FUNC_ENABLED 3
#define BTMTKSDIO_PATCH_ENABLED 4
#define BTMTKSDIO_HW_RESET_ACTIVE 5
+#define BTMTKSDIO_BT_WAKE_ENABLED 6
struct mtkbtsdio_hdr {
__le16 len;
@@ -554,7 +555,7 @@ static void btmtksdio_txrx_work(struct work_struct *work)
sdio_claim_host(bdev->func);
/* Disable interrupt */
- sdio_writel(bdev->func, C_INT_EN_CLR, MTK_REG_CHLPCR, 0);
+ sdio_writel(bdev->func, C_INT_EN_CLR, MTK_REG_CHLPCR, NULL);
txrx_timeout = jiffies + 5 * HZ;
@@ -576,7 +577,7 @@ static void btmtksdio_txrx_work(struct work_struct *work)
if ((int_status & FW_MAILBOX_INT) &&
bdev->data->chipid == 0x7921) {
sdio_writel(bdev->func, PH2DSM0R_DRIVER_OWN,
- MTK_REG_PH2DSM0R, 0);
+ MTK_REG_PH2DSM0R, NULL);
}
if (int_status & FW_OWN_BACK_INT)
@@ -608,7 +609,7 @@ static void btmtksdio_txrx_work(struct work_struct *work)
} while (int_status || time_is_before_jiffies(txrx_timeout));
/* Enable interrupt */
- sdio_writel(bdev->func, C_INT_EN_SET, MTK_REG_CHLPCR, 0);
+ sdio_writel(bdev->func, C_INT_EN_SET, MTK_REG_CHLPCR, NULL);
sdio_release_host(bdev->func);
@@ -620,8 +621,14 @@ static void btmtksdio_interrupt(struct sdio_func *func)
{
struct btmtksdio_dev *bdev = sdio_get_drvdata(func);
+ if (test_bit(BTMTKSDIO_BT_WAKE_ENABLED, &bdev->tx_state)) {
+ if (bdev->hdev->suspended)
+ pm_wakeup_event(bdev->dev, 0);
+ clear_bit(BTMTKSDIO_BT_WAKE_ENABLED, &bdev->tx_state);
+ }
+
/* Disable interrupt */
- sdio_writel(bdev->func, C_INT_EN_CLR, MTK_REG_CHLPCR, 0);
+ sdio_writel(bdev->func, C_INT_EN_CLR, MTK_REG_CHLPCR, NULL);
schedule_work(&bdev->txrx_work);
}
@@ -1454,6 +1461,23 @@ static int btmtksdio_runtime_suspend(struct device *dev)
return err;
}
+static int btmtksdio_system_suspend(struct device *dev)
+{
+ struct sdio_func *func = dev_to_sdio_func(dev);
+ struct btmtksdio_dev *bdev;
+
+ bdev = sdio_get_drvdata(func);
+ if (!bdev)
+ return 0;
+
+ if (!test_bit(BTMTKSDIO_FUNC_ENABLED, &bdev->tx_state))
+ return 0;
+
+ set_bit(BTMTKSDIO_BT_WAKE_ENABLED, &bdev->tx_state);
+
+ return btmtksdio_runtime_suspend(dev);
+}
+
static int btmtksdio_runtime_resume(struct device *dev)
{
struct sdio_func *func = dev_to_sdio_func(dev);
@@ -1474,8 +1498,16 @@ static int btmtksdio_runtime_resume(struct device *dev)
return err;
}
-static UNIVERSAL_DEV_PM_OPS(btmtksdio_pm_ops, btmtksdio_runtime_suspend,
- btmtksdio_runtime_resume, NULL);
+static int btmtksdio_system_resume(struct device *dev)
+{
+ return btmtksdio_runtime_resume(dev);
+}
+
+static const struct dev_pm_ops btmtksdio_pm_ops = {
+ SYSTEM_SLEEP_PM_OPS(btmtksdio_system_suspend, btmtksdio_system_resume)
+ RUNTIME_PM_OPS(btmtksdio_runtime_suspend, btmtksdio_runtime_resume, NULL)
+};
+
#define BTMTKSDIO_PM_OPS (&btmtksdio_pm_ops)
#else /* CONFIG_PM */
#define BTMTKSDIO_PM_OPS NULL
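
btmtksdio switches from UNIVERSAL_DEV_PM_OPS to explicit system-sleep and runtime callbacks so that system suspend can set BTMTKSDIO_BT_WAKE_ENABLED (turning the next SDIO interrupt into a wakeup event) before falling through to the runtime-suspend path. A minimal sketch of that dev_pm_ops pattern, with hypothetical foo_* callbacks:

#include <linux/device.h>
#include <linux/pm.h>

static int foo_runtime_suspend(struct device *dev)
{
	/* device-specific runtime power-down would go here */
	return 0;
}

static int foo_runtime_resume(struct device *dev)
{
	/* device-specific runtime power-up would go here */
	return 0;
}

static int foo_system_suspend(struct device *dev)
{
	/* arm driver-specific wakeup state, then share the runtime path */
	return foo_runtime_suspend(dev);
}

static int foo_system_resume(struct device *dev)
{
	return foo_runtime_resume(dev);
}

static const struct dev_pm_ops foo_pm_ops = {
	SYSTEM_SLEEP_PM_OPS(foo_system_suspend, foo_system_resume)
	RUNTIME_PM_OPS(foo_runtime_suspend, foo_runtime_resume, NULL)
};
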
diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
index 5a35ac4138c6..fdb0fae88d1c 100644
--- a/drivers/bluetooth/btqca.c
+++ b/drivers/bluetooth/btqca.c
@@ -205,6 +205,44 @@ static int qca_send_reset(struct hci_dev *hdev)
return 0;
}
+static int qca_read_fw_board_id(struct hci_dev *hdev, u16 *bid)
+{
+ u8 cmd;
+ struct sk_buff *skb;
+ struct edl_event_hdr *edl;
+ int err = 0;
+
+ cmd = EDL_GET_BID_REQ_CMD;
+ skb = __hci_cmd_sync_ev(hdev, EDL_PATCH_CMD_OPCODE, EDL_PATCH_CMD_LEN,
+ &cmd, 0, HCI_INIT_TIMEOUT);
+ if (IS_ERR(skb)) {
+ err = PTR_ERR(skb);
+ bt_dev_err(hdev, "Reading QCA board ID failed (%d)", err);
+ return err;
+ }
+
+ edl = skb_pull_data(skb, sizeof(*edl));
+ if (!edl) {
+ bt_dev_err(hdev, "QCA read board ID with no header");
+ err = -EILSEQ;
+ goto out;
+ }
+
+ if (edl->cresp != EDL_CMD_REQ_RES_EVT ||
+ edl->rtype != EDL_GET_BID_REQ_CMD) {
+ bt_dev_err(hdev, "QCA Wrong packet: %d %d", edl->cresp, edl->rtype);
+ err = -EIO;
+ goto out;
+ }
+
+ *bid = (edl->data[1] << 8) + edl->data[2];
+ bt_dev_dbg(hdev, "%s: bid = %x", __func__, *bid);
+
+out:
+ kfree_skb(skb);
+ return err;
+}
+
int qca_send_pre_shutdown_cmd(struct hci_dev *hdev)
{
struct sk_buff *skb;
@@ -574,6 +612,23 @@ int qca_set_bdaddr_rome(struct hci_dev *hdev, const bdaddr_t *bdaddr)
}
EXPORT_SYMBOL_GPL(qca_set_bdaddr_rome);
+static void qca_generate_hsp_nvm_name(char *fwname, size_t max_size,
+ struct qca_btsoc_version ver, u8 rom_ver, u16 bid)
+{
+ const char *variant;
+
+ /* hsp gf chip */
+ if ((le32_to_cpu(ver.soc_id) & QCA_HSP_GF_SOC_MASK) == QCA_HSP_GF_SOC_ID)
+ variant = "g";
+ else
+ variant = "";
+
+ if (bid == 0x0)
+ snprintf(fwname, max_size, "qca/hpnv%02x%s.bin", rom_ver, variant);
+ else
+ snprintf(fwname, max_size, "qca/hpnv%02x%s.%x", rom_ver, variant, bid);
+}
+
int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
enum qca_btsoc_type soc_type, struct qca_btsoc_version ver,
const char *firmware_name)
@@ -582,6 +637,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
int err;
u8 rom_ver = 0;
u32 soc_ver;
+ u16 boardid = 0;
bt_dev_dbg(hdev, "QCA setup on UART");
@@ -615,6 +671,10 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
snprintf(config.fwname, sizeof(config.fwname),
"qca/apbtfw%02x.tlv", rom_ver);
break;
+ case QCA_QCA2066:
+ snprintf(config.fwname, sizeof(config.fwname),
+ "qca/hpbtfw%02x.tlv", rom_ver);
+ break;
case QCA_QCA6390:
snprintf(config.fwname, sizeof(config.fwname),
"qca/htbtfw%02x.tlv", rom_ver);
@@ -649,6 +709,9 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
/* Give the controller some time to get ready to receive the NVM */
msleep(10);
+ if (soc_type == QCA_QCA2066)
+ qca_read_fw_board_id(hdev, &boardid);
+
/* Download NVM configuration */
config.type = TLV_TYPE_NVM;
if (firmware_name) {
@@ -671,6 +734,10 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
snprintf(config.fwname, sizeof(config.fwname),
"qca/apnv%02x.bin", rom_ver);
break;
+ case QCA_QCA2066:
+ qca_generate_hsp_nvm_name(config.fwname,
+ sizeof(config.fwname), ver, rom_ver, boardid);
+ break;
case QCA_QCA6390:
snprintf(config.fwname, sizeof(config.fwname),
"qca/htnv%02x.bin", rom_ver);
@@ -702,6 +769,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
switch (soc_type) {
case QCA_WCN3991:
+ case QCA_QCA2066:
case QCA_QCA6390:
case QCA_WCN6750:
case QCA_WCN6855:
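
For QCA2066 the NVM file name is now derived from the ROM version, a GF-variant suffix and an optional board ID read back from the controller via EDL_GET_BID_REQ_CMD. A standalone userspace illustration of the resulting names (the ROM version, board ID and "g" variant below are made-up values, not taken from real hardware):

#include <stdio.h>

int main(void)
{
	unsigned int rom_ver = 0x21;		/* hypothetical ROM version   */
	unsigned int bids[] = { 0x0, 0x8a };	/* no board ID vs. board 0x8a */
	char name[64];

	for (int i = 0; i < 2; i++) {
		if (bids[i] == 0x0)
			snprintf(name, sizeof(name), "qca/hpnv%02x%s.bin",
				 rom_ver, "g");
		else
			snprintf(name, sizeof(name), "qca/hpnv%02x%s.%x",
				 rom_ver, "g", bids[i]);
		printf("%s\n", name);	/* qca/hpnv21g.bin, then qca/hpnv21g.8a */
	}
	return 0;
}
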
diff --git a/drivers/bluetooth/btqca.h b/drivers/bluetooth/btqca.h
index 03bff5c0059d..dc31984f71dc 100644
--- a/drivers/bluetooth/btqca.h
+++ b/drivers/bluetooth/btqca.h
@@ -12,6 +12,7 @@
#define EDL_PATCH_VER_REQ_CMD (0x19)
#define EDL_PATCH_TLV_REQ_CMD (0x1E)
#define EDL_GET_BUILD_INFO_CMD (0x20)
+#define EDL_GET_BID_REQ_CMD (0x23)
#define EDL_NVM_ACCESS_SET_REQ_CMD (0x01)
#define EDL_PATCH_CONFIG_CMD (0x28)
#define MAX_SIZE_PER_TLV_SEGMENT (243)
@@ -47,7 +48,8 @@
((le32_to_cpu(soc_id) << 16) | (le16_to_cpu(rom_ver)))
#define QCA_FW_BUILD_VER_LEN 255
-
+#define QCA_HSP_GF_SOC_ID 0x1200
+#define QCA_HSP_GF_SOC_MASK 0x0000ff00
enum qca_baudrate {
QCA_BAUDRATE_115200 = 0,
@@ -146,6 +148,7 @@ enum qca_btsoc_type {
QCA_WCN3990,
QCA_WCN3998,
QCA_WCN3991,
+ QCA_QCA2066,
QCA_QCA6390,
QCA_WCN6750,
QCA_WCN6855,
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 499f4809fcdf..b8e9de887b5d 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -477,6 +477,7 @@ static const struct usb_device_id quirks_table[] = {
{ USB_DEVICE(0x8087, 0x0033), .driver_info = BTUSB_INTEL_COMBINED },
{ USB_DEVICE(0x8087, 0x0035), .driver_info = BTUSB_INTEL_COMBINED },
{ USB_DEVICE(0x8087, 0x0036), .driver_info = BTUSB_INTEL_COMBINED },
+ { USB_DEVICE(0x8087, 0x0038), .driver_info = BTUSB_INTEL_COMBINED },
{ USB_DEVICE(0x8087, 0x07da), .driver_info = BTUSB_CSR },
{ USB_DEVICE(0x8087, 0x07dc), .driver_info = BTUSB_INTEL_COMBINED |
BTUSB_INTEL_NO_WBS_SUPPORT |
@@ -543,6 +544,10 @@ static const struct usb_device_id quirks_table[] = {
BTUSB_WIDEBAND_SPEECH },
{ USB_DEVICE(0x0bda, 0x887b), .driver_info = BTUSB_REALTEK |
BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0bda, 0xb85b), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3570), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
{ USB_DEVICE(0x13d3, 0x3571), .driver_info = BTUSB_REALTEK |
BTUSB_WIDEBAND_SPEECH },
@@ -644,6 +649,9 @@ static const struct usb_device_id quirks_table[] = {
{ USB_DEVICE(0x04ca, 0x3804), .driver_info = BTUSB_MEDIATEK |
BTUSB_WIDEBAND_SPEECH |
BTUSB_VALID_LE_STATES },
+ { USB_DEVICE(0x35f5, 0x7922), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH |
+ BTUSB_VALID_LE_STATES },
/* Additional Realtek 8723AE Bluetooth devices */
{ USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
@@ -2818,6 +2826,9 @@ static int btusb_mtk_hci_wmt_sync(struct hci_dev *hdev,
goto err_free_wc;
}
+ if (data->evt_skb == NULL)
+ goto err_free_wc;
+
/* Parse and handle the return WMT event */
wmt_evt = (struct btmtk_hci_wmt_evt *)data->evt_skb->data;
if (wmt_evt->whdr.op != hdr->op) {
diff --git a/drivers/bluetooth/hci_bcm4377.c b/drivers/bluetooth/hci_bcm4377.c
index 19ad0e788646..a61757835695 100644
--- a/drivers/bluetooth/hci_bcm4377.c
+++ b/drivers/bluetooth/hci_bcm4377.c
@@ -512,6 +512,7 @@ struct bcm4377_hw {
unsigned long disable_aspm : 1;
unsigned long broken_ext_scan : 1;
unsigned long broken_mws_transport_config : 1;
+ unsigned long broken_le_coded : 1;
int (*send_calibration)(struct bcm4377_data *bcm4377);
int (*send_ptb)(struct bcm4377_data *bcm4377,
@@ -2372,6 +2373,8 @@ static int bcm4377_probe(struct pci_dev *pdev, const struct pci_device_id *id)
set_bit(HCI_QUIRK_BROKEN_MWS_TRANSPORT_CONFIG, &hdev->quirks);
if (bcm4377->hw->broken_ext_scan)
set_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &hdev->quirks);
+ if (bcm4377->hw->broken_le_coded)
+ set_bit(HCI_QUIRK_BROKEN_LE_CODED, &hdev->quirks);
pci_set_drvdata(pdev, bcm4377);
hci_set_drvdata(hdev, bcm4377);
@@ -2461,6 +2464,7 @@ static const struct bcm4377_hw bcm4377_hw_variants[] = {
.bar0_core2_window2 = 0x18107000,
.has_bar0_core2_window2 = true,
.broken_mws_transport_config = true,
+ .broken_le_coded = true,
.send_calibration = bcm4378_send_calibration,
.send_ptb = bcm4378_send_ptb,
},
@@ -2474,6 +2478,7 @@ static const struct bcm4377_hw bcm4377_hw_variants[] = {
.has_bar0_core2_window2 = true,
.clear_pciecfg_subsystem_ctrl_bit19 = true,
.broken_mws_transport_config = true,
+ .broken_le_coded = true,
.send_calibration = bcm4387_send_calibration,
.send_ptb = bcm4378_send_ptb,
},
diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
index 4b57e15f9c7a..067e248e3599 100644
--- a/drivers/bluetooth/hci_qca.c
+++ b/drivers/bluetooth/hci_qca.c
@@ -1841,6 +1841,10 @@ static int qca_setup(struct hci_uart *hu)
set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks);
switch (soc_type) {
+ case QCA_QCA2066:
+ soc_name = "qca2066";
+ break;
+
case QCA_WCN3988:
case QCA_WCN3990:
case QCA_WCN3991:
@@ -2032,6 +2036,11 @@ static const struct qca_device_data qca_soc_data_wcn3998 __maybe_unused = {
.num_vregs = 4,
};
+static const struct qca_device_data qca_soc_data_qca2066 __maybe_unused = {
+ .soc_type = QCA_QCA2066,
+ .num_vregs = 0,
+};
+
static const struct qca_device_data qca_soc_data_qca6390 __maybe_unused = {
.soc_type = QCA_QCA6390,
.num_vregs = 0,
@@ -2559,6 +2568,7 @@ static SIMPLE_DEV_PM_OPS(qca_pm_ops, qca_suspend, qca_resume);
#ifdef CONFIG_OF
static const struct of_device_id qca_bluetooth_of_match[] = {
+ { .compatible = "qcom,qca2066-bt", .data = &qca_soc_data_qca2066},
{ .compatible = "qcom,qca6174-bt" },
{ .compatible = "qcom,qca6390-bt", .data = &qca_soc_data_qca6390},
{ .compatible = "qcom,qca9377-bt" },
@@ -2576,6 +2586,7 @@ MODULE_DEVICE_TABLE(of, qca_bluetooth_of_match);
#ifdef CONFIG_ACPI
static const struct acpi_device_id qca_bluetooth_acpi_match[] = {
+ { "QCOM2066", (kernel_ulong_t)&qca_soc_data_qca2066 },
{ "QCOM6390", (kernel_ulong_t)&qca_soc_data_qca6390 },
{ "DLA16390", (kernel_ulong_t)&qca_soc_data_qca6390 },
{ "DLB16390", (kernel_ulong_t)&qca_soc_data_qca6390 },
diff --git a/drivers/dpll/Kconfig b/drivers/dpll/Kconfig
new file mode 100644
index 000000000000..a4cae73f20d3
--- /dev/null
+++ b/drivers/dpll/Kconfig
@@ -0,0 +1,7 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Generic DPLL drivers configuration
+#
+
+config DPLL
+ bool
diff --git a/drivers/dpll/Makefile b/drivers/dpll/Makefile
new file mode 100644
index 000000000000..2e5b27850110
--- /dev/null
+++ b/drivers/dpll/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for DPLL drivers.
+#
+
+obj-$(CONFIG_DPLL) += dpll.o
+dpll-y += dpll_core.o
+dpll-y += dpll_netlink.o
+dpll-y += dpll_nl.o
diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
new file mode 100644
index 000000000000..3568149b9562
--- /dev/null
+++ b/drivers/dpll/dpll_core.c
@@ -0,0 +1,798 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * dpll_core.c - DPLL subsystem kernel-space interface implementation.
+ *
+ * Copyright (c) 2023 Meta Platforms, Inc. and affiliates
+ * Copyright (c) 2023 Intel Corporation.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+
+#include "dpll_core.h"
+#include "dpll_netlink.h"
+
+/* Mutex lock to protect DPLL subsystem devices and pins */
+DEFINE_MUTEX(dpll_lock);
+
+DEFINE_XARRAY_FLAGS(dpll_device_xa, XA_FLAGS_ALLOC);
+DEFINE_XARRAY_FLAGS(dpll_pin_xa, XA_FLAGS_ALLOC);
+
+static u32 dpll_xa_id;
+
+#define ASSERT_DPLL_REGISTERED(d) \
+ WARN_ON_ONCE(!xa_get_mark(&dpll_device_xa, (d)->id, DPLL_REGISTERED))
+#define ASSERT_DPLL_NOT_REGISTERED(d) \
+ WARN_ON_ONCE(xa_get_mark(&dpll_device_xa, (d)->id, DPLL_REGISTERED))
+#define ASSERT_PIN_REGISTERED(p) \
+ WARN_ON_ONCE(!xa_get_mark(&dpll_pin_xa, (p)->id, DPLL_REGISTERED))
+
+struct dpll_device_registration {
+ struct list_head list;
+ const struct dpll_device_ops *ops;
+ void *priv;
+};
+
+struct dpll_pin_registration {
+ struct list_head list;
+ const struct dpll_pin_ops *ops;
+ void *priv;
+};
+
+struct dpll_device *dpll_device_get_by_id(int id)
+{
+ if (xa_get_mark(&dpll_device_xa, id, DPLL_REGISTERED))
+ return xa_load(&dpll_device_xa, id);
+
+ return NULL;
+}
+
+static struct dpll_pin_registration *
+dpll_pin_registration_find(struct dpll_pin_ref *ref,
+ const struct dpll_pin_ops *ops, void *priv)
+{
+ struct dpll_pin_registration *reg;
+
+ list_for_each_entry(reg, &ref->registration_list, list) {
+ if (reg->ops == ops && reg->priv == priv)
+ return reg;
+ }
+ return NULL;
+}
+
+static int
+dpll_xa_ref_pin_add(struct xarray *xa_pins, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv)
+{
+ struct dpll_pin_registration *reg;
+ struct dpll_pin_ref *ref;
+ bool ref_exists = false;
+ unsigned long i;
+ int ret;
+
+ xa_for_each(xa_pins, i, ref) {
+ if (ref->pin != pin)
+ continue;
+ reg = dpll_pin_registration_find(ref, ops, priv);
+ if (reg) {
+ refcount_inc(&ref->refcount);
+ return 0;
+ }
+ ref_exists = true;
+ break;
+ }
+
+ if (!ref_exists) {
+ ref = kzalloc(sizeof(*ref), GFP_KERNEL);
+ if (!ref)
+ return -ENOMEM;
+ ref->pin = pin;
+ INIT_LIST_HEAD(&ref->registration_list);
+ ret = xa_insert(xa_pins, pin->pin_idx, ref, GFP_KERNEL);
+ if (ret) {
+ kfree(ref);
+ return ret;
+ }
+ refcount_set(&ref->refcount, 1);
+ }
+
+ reg = kzalloc(sizeof(*reg), GFP_KERNEL);
+ if (!reg) {
+ if (!ref_exists) {
+ xa_erase(xa_pins, pin->pin_idx);
+ kfree(ref);
+ }
+ return -ENOMEM;
+ }
+ reg->ops = ops;
+ reg->priv = priv;
+ if (ref_exists)
+ refcount_inc(&ref->refcount);
+ list_add_tail(&reg->list, &ref->registration_list);
+
+ return 0;
+}
+
+static int dpll_xa_ref_pin_del(struct xarray *xa_pins, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv)
+{
+ struct dpll_pin_registration *reg;
+ struct dpll_pin_ref *ref;
+ unsigned long i;
+
+ xa_for_each(xa_pins, i, ref) {
+ if (ref->pin != pin)
+ continue;
+ reg = dpll_pin_registration_find(ref, ops, priv);
+ if (WARN_ON(!reg))
+ return -EINVAL;
+ if (refcount_dec_and_test(&ref->refcount)) {
+ list_del(&reg->list);
+ kfree(reg);
+ xa_erase(xa_pins, i);
+ WARN_ON(!list_empty(&ref->registration_list));
+ kfree(ref);
+ }
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int
+dpll_xa_ref_dpll_add(struct xarray *xa_dplls, struct dpll_device *dpll,
+ const struct dpll_pin_ops *ops, void *priv)
+{
+ struct dpll_pin_registration *reg;
+ struct dpll_pin_ref *ref;
+ bool ref_exists = false;
+ unsigned long i;
+ int ret;
+
+ xa_for_each(xa_dplls, i, ref) {
+ if (ref->dpll != dpll)
+ continue;
+ reg = dpll_pin_registration_find(ref, ops, priv);
+ if (reg) {
+ refcount_inc(&ref->refcount);
+ return 0;
+ }
+ ref_exists = true;
+ break;
+ }
+
+ if (!ref_exists) {
+ ref = kzalloc(sizeof(*ref), GFP_KERNEL);
+ if (!ref)
+ return -ENOMEM;
+ ref->dpll = dpll;
+ INIT_LIST_HEAD(&ref->registration_list);
+ ret = xa_insert(xa_dplls, dpll->id, ref, GFP_KERNEL);
+ if (ret) {
+ kfree(ref);
+ return ret;
+ }
+ refcount_set(&ref->refcount, 1);
+ }
+
+ reg = kzalloc(sizeof(*reg), GFP_KERNEL);
+ if (!reg) {
+ if (!ref_exists) {
+ xa_erase(xa_dplls, dpll->id);
+ kfree(ref);
+ }
+ return -ENOMEM;
+ }
+ reg->ops = ops;
+ reg->priv = priv;
+ if (ref_exists)
+ refcount_inc(&ref->refcount);
+ list_add_tail(&reg->list, &ref->registration_list);
+
+ return 0;
+}
+
+static void
+dpll_xa_ref_dpll_del(struct xarray *xa_dplls, struct dpll_device *dpll,
+ const struct dpll_pin_ops *ops, void *priv)
+{
+ struct dpll_pin_registration *reg;
+ struct dpll_pin_ref *ref;
+ unsigned long i;
+
+ xa_for_each(xa_dplls, i, ref) {
+ if (ref->dpll != dpll)
+ continue;
+ reg = dpll_pin_registration_find(ref, ops, priv);
+ if (WARN_ON(!reg))
+ return;
+ if (refcount_dec_and_test(&ref->refcount)) {
+ list_del(&reg->list);
+ kfree(reg);
+ xa_erase(xa_dplls, i);
+ WARN_ON(!list_empty(&ref->registration_list));
+ kfree(ref);
+ }
+ return;
+ }
+}
+
+struct dpll_pin_ref *dpll_xa_ref_dpll_first(struct xarray *xa_refs)
+{
+ struct dpll_pin_ref *ref;
+ unsigned long i = 0;
+
+ ref = xa_find(xa_refs, &i, ULONG_MAX, XA_PRESENT);
+ WARN_ON(!ref);
+ return ref;
+}
+
+static struct dpll_device *
+dpll_device_alloc(const u64 clock_id, u32 device_idx, struct module *module)
+{
+ struct dpll_device *dpll;
+ int ret;
+
+ dpll = kzalloc(sizeof(*dpll), GFP_KERNEL);
+ if (!dpll)
+ return ERR_PTR(-ENOMEM);
+ refcount_set(&dpll->refcount, 1);
+ INIT_LIST_HEAD(&dpll->registration_list);
+ dpll->device_idx = device_idx;
+ dpll->clock_id = clock_id;
+ dpll->module = module;
+ ret = xa_alloc_cyclic(&dpll_device_xa, &dpll->id, dpll, xa_limit_32b,
+ &dpll_xa_id, GFP_KERNEL);
+ if (ret < 0) {
+ kfree(dpll);
+ return ERR_PTR(ret);
+ }
+ xa_init_flags(&dpll->pin_refs, XA_FLAGS_ALLOC);
+
+ return dpll;
+}
+
+/**
+ * dpll_device_get - find existing or create new dpll device
+ * @clock_id: clock_id of creator
+ * @device_idx: idx given by device driver
+ * @module: reference to registering module
+ *
+ * Get existing object of a dpll device, unique for given arguments.
+ * Create new if doesn't exist yet.
+ *
+ * Context: Acquires a lock (dpll_lock)
+ * Return:
+ * * valid dpll_device struct pointer if succeeded
+ * * ERR_PTR(X) - error
+ */
+struct dpll_device *
+dpll_device_get(u64 clock_id, u32 device_idx, struct module *module)
+{
+ struct dpll_device *dpll, *ret = NULL;
+ unsigned long index;
+
+ mutex_lock(&dpll_lock);
+ xa_for_each(&dpll_device_xa, index, dpll) {
+ if (dpll->clock_id == clock_id &&
+ dpll->device_idx == device_idx &&
+ dpll->module == module) {
+ ret = dpll;
+ refcount_inc(&ret->refcount);
+ break;
+ }
+ }
+ if (!ret)
+ ret = dpll_device_alloc(clock_id, device_idx, module);
+ mutex_unlock(&dpll_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dpll_device_get);
+
+/**
+ * dpll_device_put - decrease the refcount and free memory if possible
+ * @dpll: dpll_device struct pointer
+ *
+ * Context: Acquires a lock (dpll_lock)
+ * Drop reference for a dpll device, if all references are gone, delete
+ * dpll device object.
+ */
+void dpll_device_put(struct dpll_device *dpll)
+{
+ mutex_lock(&dpll_lock);
+ if (refcount_dec_and_test(&dpll->refcount)) {
+ ASSERT_DPLL_NOT_REGISTERED(dpll);
+ WARN_ON_ONCE(!xa_empty(&dpll->pin_refs));
+ xa_destroy(&dpll->pin_refs);
+ xa_erase(&dpll_device_xa, dpll->id);
+ WARN_ON(!list_empty(&dpll->registration_list));
+ kfree(dpll);
+ }
+ mutex_unlock(&dpll_lock);
+}
+EXPORT_SYMBOL_GPL(dpll_device_put);
+
+static struct dpll_device_registration *
+dpll_device_registration_find(struct dpll_device *dpll,
+ const struct dpll_device_ops *ops, void *priv)
+{
+ struct dpll_device_registration *reg;
+
+ list_for_each_entry(reg, &dpll->registration_list, list) {
+ if (reg->ops == ops && reg->priv == priv)
+ return reg;
+ }
+ return NULL;
+}
+
+/**
+ * dpll_device_register - register the dpll device in the subsystem
+ * @dpll: pointer to a dpll
+ * @type: type of a dpll
+ * @ops: ops for a dpll device
+ * @priv: pointer to private information of owner
+ *
+ * Make dpll device available for user space.
+ *
+ * Context: Acquires a lock (dpll_lock)
+ * Return:
+ * * 0 on success
+ * * negative - error value
+ */
+int dpll_device_register(struct dpll_device *dpll, enum dpll_type type,
+ const struct dpll_device_ops *ops, void *priv)
+{
+ struct dpll_device_registration *reg;
+ bool first_registration = false;
+
+ if (WARN_ON(!ops))
+ return -EINVAL;
+ if (WARN_ON(!ops->mode_get))
+ return -EINVAL;
+ if (WARN_ON(!ops->lock_status_get))
+ return -EINVAL;
+ if (WARN_ON(type < DPLL_TYPE_PPS || type > DPLL_TYPE_MAX))
+ return -EINVAL;
+
+ mutex_lock(&dpll_lock);
+ reg = dpll_device_registration_find(dpll, ops, priv);
+ if (reg) {
+ mutex_unlock(&dpll_lock);
+ return -EEXIST;
+ }
+
+ reg = kzalloc(sizeof(*reg), GFP_KERNEL);
+ if (!reg) {
+ mutex_unlock(&dpll_lock);
+ return -ENOMEM;
+ }
+ reg->ops = ops;
+ reg->priv = priv;
+ dpll->type = type;
+ first_registration = list_empty(&dpll->registration_list);
+ list_add_tail(&reg->list, &dpll->registration_list);
+ if (!first_registration) {
+ mutex_unlock(&dpll_lock);
+ return 0;
+ }
+
+ xa_set_mark(&dpll_device_xa, dpll->id, DPLL_REGISTERED);
+ dpll_device_create_ntf(dpll);
+ mutex_unlock(&dpll_lock);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(dpll_device_register);
+
+/**
+ * dpll_device_unregister - unregister dpll device
+ * @dpll: registered dpll pointer
+ * @ops: ops for a dpll device
+ * @priv: pointer to private information of owner
+ *
+ * Unregister device, make it unavailable for userspace.
+ * Note: It does not free the memory
+ * Context: Acquires a lock (dpll_lock)
+ */
+void dpll_device_unregister(struct dpll_device *dpll,
+ const struct dpll_device_ops *ops, void *priv)
+{
+ struct dpll_device_registration *reg;
+
+ mutex_lock(&dpll_lock);
+ ASSERT_DPLL_REGISTERED(dpll);
+ dpll_device_delete_ntf(dpll);
+ reg = dpll_device_registration_find(dpll, ops, priv);
+ if (WARN_ON(!reg)) {
+ mutex_unlock(&dpll_lock);
+ return;
+ }
+ list_del(&reg->list);
+ kfree(reg);
+
+ if (!list_empty(&dpll->registration_list)) {
+ mutex_unlock(&dpll_lock);
+ return;
+ }
+ xa_clear_mark(&dpll_device_xa, dpll->id, DPLL_REGISTERED);
+ mutex_unlock(&dpll_lock);
+}
+EXPORT_SYMBOL_GPL(dpll_device_unregister);
+
+static struct dpll_pin *
+dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
+ const struct dpll_pin_properties *prop)
+{
+ struct dpll_pin *pin;
+ int ret;
+
+ pin = kzalloc(sizeof(*pin), GFP_KERNEL);
+ if (!pin)
+ return ERR_PTR(-ENOMEM);
+ pin->pin_idx = pin_idx;
+ pin->clock_id = clock_id;
+ pin->module = module;
+ if (WARN_ON(prop->type < DPLL_PIN_TYPE_MUX ||
+ prop->type > DPLL_PIN_TYPE_MAX)) {
+ ret = -EINVAL;
+ goto err;
+ }
+ pin->prop = prop;
+ refcount_set(&pin->refcount, 1);
+ xa_init_flags(&pin->dpll_refs, XA_FLAGS_ALLOC);
+ xa_init_flags(&pin->parent_refs, XA_FLAGS_ALLOC);
+ ret = xa_alloc(&dpll_pin_xa, &pin->id, pin, xa_limit_16b, GFP_KERNEL);
+ if (ret)
+ goto err;
+ return pin;
+err:
+ xa_destroy(&pin->dpll_refs);
+ xa_destroy(&pin->parent_refs);
+ kfree(pin);
+ return ERR_PTR(ret);
+}
+
+/**
+ * dpll_pin_get - find existing or create new dpll pin
+ * @clock_id: clock_id of creator
+ * @pin_idx: idx given by dev driver
+ * @module: reference to registering module
+ * @prop: dpll pin properties
+ *
+ * Get existing object of a pin (unique for given arguments) or create new
+ * if doesn't exist yet.
+ *
+ * Context: Acquires a lock (dpll_lock)
+ * Return:
+ * * valid allocated dpll_pin struct pointer if succeeded
+ * * ERR_PTR(X) - error
+ */
+struct dpll_pin *
+dpll_pin_get(u64 clock_id, u32 pin_idx, struct module *module,
+ const struct dpll_pin_properties *prop)
+{
+ struct dpll_pin *pos, *ret = NULL;
+ unsigned long i;
+
+ mutex_lock(&dpll_lock);
+ xa_for_each(&dpll_pin_xa, i, pos) {
+ if (pos->clock_id == clock_id &&
+ pos->pin_idx == pin_idx &&
+ pos->module == module) {
+ ret = pos;
+ refcount_inc(&ret->refcount);
+ break;
+ }
+ }
+ if (!ret)
+ ret = dpll_pin_alloc(clock_id, pin_idx, module, prop);
+ mutex_unlock(&dpll_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dpll_pin_get);
+
+/**
+ * dpll_pin_put - decrease the refcount and free memory if possible
+ * @pin: pointer to a pin to be put
+ *
+ * Drop reference for a pin, if all references are gone, delete pin object.
+ *
+ * Context: Acquires a lock (dpll_lock)
+ */
+void dpll_pin_put(struct dpll_pin *pin)
+{
+ mutex_lock(&dpll_lock);
+ if (refcount_dec_and_test(&pin->refcount)) {
+ xa_destroy(&pin->dpll_refs);
+ xa_destroy(&pin->parent_refs);
+ xa_erase(&dpll_pin_xa, pin->id);
+ kfree(pin);
+ }
+ mutex_unlock(&dpll_lock);
+}
+EXPORT_SYMBOL_GPL(dpll_pin_put);
+
+static int
+__dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv)
+{
+ int ret;
+
+ ret = dpll_xa_ref_pin_add(&dpll->pin_refs, pin, ops, priv);
+ if (ret)
+ return ret;
+ ret = dpll_xa_ref_dpll_add(&pin->dpll_refs, dpll, ops, priv);
+ if (ret)
+ goto ref_pin_del;
+ xa_set_mark(&dpll_pin_xa, pin->id, DPLL_REGISTERED);
+ dpll_pin_create_ntf(pin);
+
+ return ret;
+
+ref_pin_del:
+ dpll_xa_ref_pin_del(&dpll->pin_refs, pin, ops, priv);
+ return ret;
+}
+
+/**
+ * dpll_pin_register - register the dpll pin in the subsystem
+ * @dpll: pointer to a dpll
+ * @pin: pointer to a dpll pin
+ * @ops: ops for a dpll pin
+ * @priv: pointer to private information of owner
+ *
+ * Context: Acquires a lock (dpll_lock)
+ * Return:
+ * * 0 on success
+ * * negative - error value
+ */
+int
+dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv)
+{
+ int ret;
+
+ if (WARN_ON(!ops) ||
+ WARN_ON(!ops->state_on_dpll_get) ||
+ WARN_ON(!ops->direction_get))
+ return -EINVAL;
+ if (ASSERT_DPLL_REGISTERED(dpll))
+ return -EINVAL;
+
+ mutex_lock(&dpll_lock);
+ if (WARN_ON(!(dpll->module == pin->module &&
+ dpll->clock_id == pin->clock_id)))
+ ret = -EINVAL;
+ else
+ ret = __dpll_pin_register(dpll, pin, ops, priv);
+ mutex_unlock(&dpll_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dpll_pin_register);
+
+static void
+__dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv)
+{
+ dpll_xa_ref_pin_del(&dpll->pin_refs, pin, ops, priv);
+ dpll_xa_ref_dpll_del(&pin->dpll_refs, dpll, ops, priv);
+ if (xa_empty(&pin->dpll_refs))
+ xa_clear_mark(&dpll_pin_xa, pin->id, DPLL_REGISTERED);
+}
+
+/**
+ * dpll_pin_unregister - unregister dpll pin from dpll device
+ * @dpll: registered dpll pointer
+ * @pin: pointer to a pin
+ * @ops: ops for a dpll pin
+ * @priv: pointer to private information of owner
+ *
+ * Note: It does not free the memory
+ * Context: Acquires a lock (dpll_lock)
+ */
+void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv)
+{
+ if (WARN_ON(xa_empty(&dpll->pin_refs)))
+ return;
+ if (WARN_ON(!xa_empty(&pin->parent_refs)))
+ return;
+
+ mutex_lock(&dpll_lock);
+ dpll_pin_delete_ntf(pin);
+ __dpll_pin_unregister(dpll, pin, ops, priv);
+ mutex_unlock(&dpll_lock);
+}
+EXPORT_SYMBOL_GPL(dpll_pin_unregister);
+
+/**
+ * dpll_pin_on_pin_register - register a pin with a parent pin
+ * @parent: pointer to a parent pin
+ * @pin: pointer to a pin
+ * @ops: ops for a dpll pin
+ * @priv: pointer to private information of owner
+ *
+ * Register a pin with a parent pin, create references between them and
+ * between newly registered pin and dplls connected with a parent pin.
+ *
+ * Context: Acquires a lock (dpll_lock)
+ * Return:
+ * * 0 on success
+ * * negative - error value
+ */
+int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv)
+{
+ struct dpll_pin_ref *ref;
+ unsigned long i, stop;
+ int ret;
+
+ if (WARN_ON(parent->prop->type != DPLL_PIN_TYPE_MUX))
+ return -EINVAL;
+
+ if (WARN_ON(!ops) ||
+ WARN_ON(!ops->state_on_pin_get) ||
+ WARN_ON(!ops->direction_get))
+ return -EINVAL;
+ if (ASSERT_PIN_REGISTERED(parent))
+ return -EINVAL;
+
+ mutex_lock(&dpll_lock);
+ ret = dpll_xa_ref_pin_add(&pin->parent_refs, parent, ops, priv);
+ if (ret)
+ goto unlock;
+ refcount_inc(&pin->refcount);
+ xa_for_each(&parent->dpll_refs, i, ref) {
+ ret = __dpll_pin_register(ref->dpll, pin, ops, priv);
+ if (ret) {
+ stop = i;
+ goto dpll_unregister;
+ }
+ dpll_pin_create_ntf(pin);
+ }
+ mutex_unlock(&dpll_lock);
+
+ return ret;
+
+dpll_unregister:
+ xa_for_each(&parent->dpll_refs, i, ref)
+ if (i < stop) {
+ __dpll_pin_unregister(ref->dpll, pin, ops, priv);
+ dpll_pin_delete_ntf(pin);
+ }
+ refcount_dec(&pin->refcount);
+ dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv);
+unlock:
+ mutex_unlock(&dpll_lock);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dpll_pin_on_pin_register);
+
+/**
+ * dpll_pin_on_pin_unregister - unregister dpll pin from a parent pin
+ * @parent: pointer to a parent pin
+ * @pin: pointer to a pin
+ * @ops: ops for a dpll pin
+ * @priv: pointer to private information of owner
+ *
+ * Context: Acquires a lock (dpll_lock)
+ * Note: It does not free the memory
+ */
+void dpll_pin_on_pin_unregister(struct dpll_pin *parent, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv)
+{
+ struct dpll_pin_ref *ref;
+ unsigned long i;
+
+ mutex_lock(&dpll_lock);
+ dpll_pin_delete_ntf(pin);
+ dpll_xa_ref_pin_del(&pin->parent_refs, parent, ops, priv);
+ refcount_dec(&pin->refcount);
+ xa_for_each(&pin->dpll_refs, i, ref)
+ __dpll_pin_unregister(ref->dpll, pin, ops, priv);
+ mutex_unlock(&dpll_lock);
+}
+EXPORT_SYMBOL_GPL(dpll_pin_on_pin_unregister);
+
+static struct dpll_device_registration *
+dpll_device_registration_first(struct dpll_device *dpll)
+{
+ struct dpll_device_registration *reg;
+
+ reg = list_first_entry_or_null((struct list_head *)&dpll->registration_list,
+ struct dpll_device_registration, list);
+ WARN_ON(!reg);
+ return reg;
+}
+
+void *dpll_priv(struct dpll_device *dpll)
+{
+ struct dpll_device_registration *reg;
+
+ reg = dpll_device_registration_first(dpll);
+ return reg->priv;
+}
+
+const struct dpll_device_ops *dpll_device_ops(struct dpll_device *dpll)
+{
+ struct dpll_device_registration *reg;
+
+ reg = dpll_device_registration_first(dpll);
+ return reg->ops;
+}
+
+static struct dpll_pin_registration *
+dpll_pin_registration_first(struct dpll_pin_ref *ref)
+{
+ struct dpll_pin_registration *reg;
+
+ reg = list_first_entry_or_null(&ref->registration_list,
+ struct dpll_pin_registration, list);
+ WARN_ON(!reg);
+ return reg;
+}
+
+void *dpll_pin_on_dpll_priv(struct dpll_device *dpll,
+ struct dpll_pin *pin)
+{
+ struct dpll_pin_registration *reg;
+ struct dpll_pin_ref *ref;
+
+ ref = xa_load(&dpll->pin_refs, pin->pin_idx);
+ if (!ref)
+ return NULL;
+ reg = dpll_pin_registration_first(ref);
+ return reg->priv;
+}
+
+void *dpll_pin_on_pin_priv(struct dpll_pin *parent,
+ struct dpll_pin *pin)
+{
+ struct dpll_pin_registration *reg;
+ struct dpll_pin_ref *ref;
+
+ ref = xa_load(&pin->parent_refs, parent->pin_idx);
+ if (!ref)
+ return NULL;
+ reg = dpll_pin_registration_first(ref);
+ return reg->priv;
+}
+
+const struct dpll_pin_ops *dpll_pin_ops(struct dpll_pin_ref *ref)
+{
+ struct dpll_pin_registration *reg;
+
+ reg = dpll_pin_registration_first(ref);
+ return reg->ops;
+}
+
+static int __init dpll_init(void)
+{
+ int ret;
+
+ ret = genl_register_family(&dpll_nl_family);
+ if (ret)
+ goto error;
+
+ return 0;
+
+error:
+ mutex_destroy(&dpll_lock);
+ return ret;
+}
+
+static void __exit dpll_exit(void)
+{
+ genl_unregister_family(&dpll_nl_family);
+ mutex_destroy(&dpll_lock);
+}
+
+subsys_initcall(dpll_init);
+module_exit(dpll_exit);
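
A hedged sketch of the driver-facing registration flow exported above: find-or-create a device object with dpll_device_get(), register it with the mandatory mode_get/lock_status_get ops, and tear everything down in reverse order. The callback prototypes are paraphrased from the dpll_netlink.c call sites and the enum values come from uapi/linux/dpll.h, neither of which is fully shown in this excerpt, so treat the exact signatures and the foo_* names as assumptions.

#include <linux/dpll.h>
#include <linux/err.h>
#include <linux/module.h>

static int foo_mode_get(const struct dpll_device *dpll, void *priv,
			enum dpll_mode *mode, struct netlink_ext_ack *extack)
{
	*mode = DPLL_MODE_MANUAL;
	return 0;
}

static int foo_lock_status_get(const struct dpll_device *dpll, void *priv,
			       enum dpll_lock_status *status,
			       struct netlink_ext_ack *extack)
{
	*status = DPLL_LOCK_STATUS_LOCKED;
	return 0;
}

static const struct dpll_device_ops foo_dpll_ops = {
	.mode_get = foo_mode_get,		/* mandatory */
	.lock_status_get = foo_lock_status_get,	/* mandatory */
};

static struct dpll_device *foo_dpll;

static int foo_dpll_init(u64 clock_id, void *drv_priv)
{
	int ret;

	/* find-or-create, then make the device visible over netlink */
	foo_dpll = dpll_device_get(clock_id, 0, THIS_MODULE);
	if (IS_ERR(foo_dpll))
		return PTR_ERR(foo_dpll);
	ret = dpll_device_register(foo_dpll, DPLL_TYPE_PPS, &foo_dpll_ops,
				   drv_priv);
	if (ret)
		dpll_device_put(foo_dpll);
	return ret;
}

static void foo_dpll_exit(void *drv_priv)
{
	dpll_device_unregister(foo_dpll, &foo_dpll_ops, drv_priv);
	dpll_device_put(foo_dpll);
}
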
diff --git a/drivers/dpll/dpll_core.h b/drivers/dpll/dpll_core.h
new file mode 100644
index 000000000000..5585873c5c1b
--- /dev/null
+++ b/drivers/dpll/dpll_core.h
@@ -0,0 +1,89 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2023 Meta Platforms, Inc. and affiliates
+ * Copyright (c) 2023 Intel and affiliates
+ */
+
+#ifndef __DPLL_CORE_H__
+#define __DPLL_CORE_H__
+
+#include <linux/dpll.h>
+#include <linux/list.h>
+#include <linux/refcount.h>
+#include "dpll_nl.h"
+
+#define DPLL_REGISTERED XA_MARK_1
+
+/**
+ * struct dpll_device - stores DPLL device internal data
+ * @id: unique id number for device given by dpll subsystem
+ * @device_idx: id given by dev driver
+ * @clock_id: unique identifier (clock_id) of a dpll
+ * @module: module of creator
+ * @type: type of a dpll
+ * @pin_refs: stores pins registered within a dpll
+ * @refcount: refcount
+ * @registration_list: list of registered ops and priv data of dpll owners
+ **/
+struct dpll_device {
+ u32 id;
+ u32 device_idx;
+ u64 clock_id;
+ struct module *module;
+ enum dpll_type type;
+ struct xarray pin_refs;
+ refcount_t refcount;
+ struct list_head registration_list;
+};
+
+/**
+ * struct dpll_pin - structure for a dpll pin
+ * @id: unique id number for pin given by dpll subsystem
+ * @pin_idx: index of a pin given by dev driver
+ * @clock_id: clock_id of creator
+ * @module: module of creator
+ * @dpll_refs: hold references to dplls pin was registered with
+ * @parent_refs: hold references to parent pins pin was registered with
+ * @prop: pointer to pin properties given by registerer
+ * @rclk_dev_name: holds name of device when pin can recover clock from it
+ * @refcount: refcount
+ **/
+struct dpll_pin {
+ u32 id;
+ u32 pin_idx;
+ u64 clock_id;
+ struct module *module;
+ struct xarray dpll_refs;
+ struct xarray parent_refs;
+ const struct dpll_pin_properties *prop;
+ refcount_t refcount;
+};
+
+/**
+ * struct dpll_pin_ref - structure for referencing either dpll or pins
+ * @dpll: pointer to a dpll
+ * @pin: pointer to a pin
+ * @registration_list: list of ops and priv data registered with the ref
+ * @refcount: refcount
+ **/
+struct dpll_pin_ref {
+ union {
+ struct dpll_device *dpll;
+ struct dpll_pin *pin;
+ };
+ struct list_head registration_list;
+ refcount_t refcount;
+};
+
+void *dpll_priv(struct dpll_device *dpll);
+void *dpll_pin_on_dpll_priv(struct dpll_device *dpll, struct dpll_pin *pin);
+void *dpll_pin_on_pin_priv(struct dpll_pin *parent, struct dpll_pin *pin);
+
+const struct dpll_device_ops *dpll_device_ops(struct dpll_device *dpll);
+struct dpll_device *dpll_device_get_by_id(int id);
+const struct dpll_pin_ops *dpll_pin_ops(struct dpll_pin_ref *ref);
+struct dpll_pin_ref *dpll_xa_ref_dpll_first(struct xarray *xa_refs);
+extern struct xarray dpll_device_xa;
+extern struct xarray dpll_pin_xa;
+extern struct mutex dpll_lock;
+#endif
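
A companion sketch for the pin side of the interface declared above, again with hypothetical foo_* names and prototypes inferred from the dpll_netlink.c call sites. Note that dpll_pin_register() insists that the pin and the dpll share clock_id and module, and that state_on_dpll_get and direction_get are mandatory.

#include <linux/dpll.h>
#include <linux/err.h>
#include <linux/module.h>

static int foo_pin_state_get(const struct dpll_pin *pin, void *pin_priv,
			     const struct dpll_device *dpll, void *dpll_priv,
			     enum dpll_pin_state *state,
			     struct netlink_ext_ack *extack)
{
	*state = DPLL_PIN_STATE_CONNECTED;
	return 0;
}

static int foo_pin_direction_get(const struct dpll_pin *pin, void *pin_priv,
				 const struct dpll_device *dpll,
				 void *dpll_priv,
				 enum dpll_pin_direction *direction,
				 struct netlink_ext_ack *extack)
{
	*direction = DPLL_PIN_DIRECTION_INPUT;
	return 0;
}

static const struct dpll_pin_ops foo_pin_ops = {
	.state_on_dpll_get = foo_pin_state_get,		/* mandatory */
	.direction_get = foo_pin_direction_get,		/* mandatory */
};

static const struct dpll_pin_properties foo_pin_prop = {
	.type = DPLL_PIN_TYPE_EXT,
};

static int foo_pin_init(struct dpll_device *dpll, u64 clock_id, void *priv)
{
	struct dpll_pin *pin;
	int ret;

	/* same clock_id and module as the dpll the pin is registered on */
	pin = dpll_pin_get(clock_id, 0, THIS_MODULE, &foo_pin_prop);
	if (IS_ERR(pin))
		return PTR_ERR(pin);
	ret = dpll_pin_register(dpll, pin, &foo_pin_ops, priv);
	if (ret)
		dpll_pin_put(pin);
	/* a real driver keeps 'pin' to unregister and put it on teardown */
	return ret;
}
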
diff --git a/drivers/dpll/dpll_netlink.c b/drivers/dpll/dpll_netlink.c
new file mode 100644
index 000000000000..a6dc3997bf5c
--- /dev/null
+++ b/drivers/dpll/dpll_netlink.c
@@ -0,0 +1,1423 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Generic netlink for DPLL management framework
+ *
+ * Copyright (c) 2023 Meta Platforms, Inc. and affiliates
+ * Copyright (c) 2023 Intel and affiliates
+ *
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <net/genetlink.h>
+#include "dpll_core.h"
+#include "dpll_netlink.h"
+#include "dpll_nl.h"
+#include <uapi/linux/dpll.h>
+
+#define ASSERT_NOT_NULL(ptr) (WARN_ON(!ptr))
+
+#define xa_for_each_marked_start(xa, index, entry, filter, start) \
+ for (index = start, entry = xa_find(xa, &index, ULONG_MAX, filter); \
+ entry; entry = xa_find_after(xa, &index, ULONG_MAX, filter))
+
+struct dpll_dump_ctx {
+ unsigned long idx;
+};
+
+static struct dpll_dump_ctx *dpll_dump_context(struct netlink_callback *cb)
+{
+ return (struct dpll_dump_ctx *)cb->ctx;
+}
+
+static int
+dpll_msg_add_dev_handle(struct sk_buff *msg, struct dpll_device *dpll)
+{
+ if (nla_put_u32(msg, DPLL_A_ID, dpll->id))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+static int
+dpll_msg_add_dev_parent_handle(struct sk_buff *msg, u32 id)
+{
+ if (nla_put_u32(msg, DPLL_A_PIN_PARENT_ID, id))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+/**
+ * dpll_msg_pin_handle_size - get size of pin handle attribute for given pin
+ * @pin: pin pointer
+ *
+ * Return: byte size of pin handle attribute for given pin.
+ */
+size_t dpll_msg_pin_handle_size(struct dpll_pin *pin)
+{
+ return pin ? nla_total_size(4) : 0; /* DPLL_A_PIN_ID */
+}
+EXPORT_SYMBOL_GPL(dpll_msg_pin_handle_size);
+
+/**
+ * dpll_msg_add_pin_handle - attach pin handle attribute to a given message
+ * @msg: pointer to sk_buff message to attach a pin handle
+ * @pin: pin pointer
+ *
+ * Return:
+ * * 0 - success
+ * * -EMSGSIZE - no space in message to attach pin handle
+ */
+int dpll_msg_add_pin_handle(struct sk_buff *msg, struct dpll_pin *pin)
+{
+ if (!pin)
+ return 0;
+ if (nla_put_u32(msg, DPLL_A_PIN_ID, pin->id))
+ return -EMSGSIZE;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(dpll_msg_add_pin_handle);
+
+static int
+dpll_msg_add_mode(struct sk_buff *msg, struct dpll_device *dpll,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_device_ops *ops = dpll_device_ops(dpll);
+ enum dpll_mode mode;
+ int ret;
+
+ ret = ops->mode_get(dpll, dpll_priv(dpll), &mode, extack);
+ if (ret)
+ return ret;
+ if (nla_put_u32(msg, DPLL_A_MODE, mode))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+static int
+dpll_msg_add_mode_supported(struct sk_buff *msg, struct dpll_device *dpll,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_device_ops *ops = dpll_device_ops(dpll);
+ enum dpll_mode mode;
+
+ if (!ops->mode_supported)
+ return 0;
+ for (mode = DPLL_MODE_MANUAL; mode <= DPLL_MODE_MAX; mode++)
+ if (ops->mode_supported(dpll, dpll_priv(dpll), mode, extack))
+ if (nla_put_u32(msg, DPLL_A_MODE_SUPPORTED, mode))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+static int
+dpll_msg_add_lock_status(struct sk_buff *msg, struct dpll_device *dpll,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_device_ops *ops = dpll_device_ops(dpll);
+ enum dpll_lock_status status;
+ int ret;
+
+ ret = ops->lock_status_get(dpll, dpll_priv(dpll), &status, extack);
+ if (ret)
+ return ret;
+ if (nla_put_u32(msg, DPLL_A_LOCK_STATUS, status))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+static int
+dpll_msg_add_temp(struct sk_buff *msg, struct dpll_device *dpll,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_device_ops *ops = dpll_device_ops(dpll);
+ s32 temp;
+ int ret;
+
+ if (!ops->temp_get)
+ return 0;
+ ret = ops->temp_get(dpll, dpll_priv(dpll), &temp, extack);
+ if (ret)
+ return ret;
+ if (nla_put_s32(msg, DPLL_A_TEMP, temp))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+static int
+dpll_msg_add_pin_prio(struct sk_buff *msg, struct dpll_pin *pin,
+ struct dpll_pin_ref *ref,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_pin_ops *ops = dpll_pin_ops(ref);
+ struct dpll_device *dpll = ref->dpll;
+ u32 prio;
+ int ret;
+
+ if (!ops->prio_get)
+ return 0;
+ ret = ops->prio_get(pin, dpll_pin_on_dpll_priv(dpll, pin), dpll,
+ dpll_priv(dpll), &prio, extack);
+ if (ret)
+ return ret;
+ if (nla_put_u32(msg, DPLL_A_PIN_PRIO, prio))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+static int
+dpll_msg_add_pin_on_dpll_state(struct sk_buff *msg, struct dpll_pin *pin,
+ struct dpll_pin_ref *ref,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_pin_ops *ops = dpll_pin_ops(ref);
+ struct dpll_device *dpll = ref->dpll;
+ enum dpll_pin_state state;
+ int ret;
+
+ if (!ops->state_on_dpll_get)
+ return 0;
+ ret = ops->state_on_dpll_get(pin, dpll_pin_on_dpll_priv(dpll, pin),
+ dpll, dpll_priv(dpll), &state, extack);
+ if (ret)
+ return ret;
+ if (nla_put_u32(msg, DPLL_A_PIN_STATE, state))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+static int
+dpll_msg_add_pin_direction(struct sk_buff *msg, struct dpll_pin *pin,
+ struct dpll_pin_ref *ref,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_pin_ops *ops = dpll_pin_ops(ref);
+ struct dpll_device *dpll = ref->dpll;
+ enum dpll_pin_direction direction;
+ int ret;
+
+ ret = ops->direction_get(pin, dpll_pin_on_dpll_priv(dpll, pin), dpll,
+ dpll_priv(dpll), &direction, extack);
+ if (ret)
+ return ret;
+ if (nla_put_u32(msg, DPLL_A_PIN_DIRECTION, direction))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+static int
+dpll_msg_add_pin_phase_adjust(struct sk_buff *msg, struct dpll_pin *pin,
+ struct dpll_pin_ref *ref,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_pin_ops *ops = dpll_pin_ops(ref);
+ struct dpll_device *dpll = ref->dpll;
+ s32 phase_adjust;
+ int ret;
+
+ if (!ops->phase_adjust_get)
+ return 0;
+ ret = ops->phase_adjust_get(pin, dpll_pin_on_dpll_priv(dpll, pin),
+ dpll, dpll_priv(dpll),
+ &phase_adjust, extack);
+ if (ret)
+ return ret;
+ if (nla_put_s32(msg, DPLL_A_PIN_PHASE_ADJUST, phase_adjust))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+static int
+dpll_msg_add_phase_offset(struct sk_buff *msg, struct dpll_pin *pin,
+ struct dpll_pin_ref *ref,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_pin_ops *ops = dpll_pin_ops(ref);
+ struct dpll_device *dpll = ref->dpll;
+ s64 phase_offset;
+ int ret;
+
+ if (!ops->phase_offset_get)
+ return 0;
+ ret = ops->phase_offset_get(pin, dpll_pin_on_dpll_priv(dpll, pin),
+ dpll, dpll_priv(dpll), &phase_offset,
+ extack);
+ if (ret)
+ return ret;
+ if (nla_put_64bit(msg, DPLL_A_PIN_PHASE_OFFSET, sizeof(phase_offset),
+ &phase_offset, DPLL_A_PIN_PAD))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+static int
+dpll_msg_add_pin_freq(struct sk_buff *msg, struct dpll_pin *pin,
+ struct dpll_pin_ref *ref, struct netlink_ext_ack *extack)
+{
+ const struct dpll_pin_ops *ops = dpll_pin_ops(ref);
+ struct dpll_device *dpll = ref->dpll;
+ struct nlattr *nest;
+ int fs, ret;
+ u64 freq;
+
+ if (!ops->frequency_get)
+ return 0;
+ ret = ops->frequency_get(pin, dpll_pin_on_dpll_priv(dpll, pin), dpll,
+ dpll_priv(dpll), &freq, extack);
+ if (ret)
+ return ret;
+ if (nla_put_64bit(msg, DPLL_A_PIN_FREQUENCY, sizeof(freq), &freq,
+ DPLL_A_PIN_PAD))
+ return -EMSGSIZE;
+ for (fs = 0; fs < pin->prop->freq_supported_num; fs++) {
+ nest = nla_nest_start(msg, DPLL_A_PIN_FREQUENCY_SUPPORTED);
+ if (!nest)
+ return -EMSGSIZE;
+ freq = pin->prop->freq_supported[fs].min;
+ if (nla_put_64bit(msg, DPLL_A_PIN_FREQUENCY_MIN, sizeof(freq),
+ &freq, DPLL_A_PIN_PAD)) {
+ nla_nest_cancel(msg, nest);
+ return -EMSGSIZE;
+ }
+ freq = pin->prop->freq_supported[fs].max;
+ if (nla_put_64bit(msg, DPLL_A_PIN_FREQUENCY_MAX, sizeof(freq),
+ &freq, DPLL_A_PIN_PAD)) {
+ nla_nest_cancel(msg, nest);
+ return -EMSGSIZE;
+ }
+ nla_nest_end(msg, nest);
+ }
+
+ return 0;
+}
+
+static bool dpll_pin_is_freq_supported(struct dpll_pin *pin, u32 freq)
+{
+ int fs;
+
+ for (fs = 0; fs < pin->prop->freq_supported_num; fs++)
+ if (freq >= pin->prop->freq_supported[fs].min &&
+ freq <= pin->prop->freq_supported[fs].max)
+ return true;
+ return false;
+}
+
+static int
+dpll_msg_add_pin_parents(struct sk_buff *msg, struct dpll_pin *pin,
+ struct dpll_pin_ref *dpll_ref,
+ struct netlink_ext_ack *extack)
+{
+ enum dpll_pin_state state;
+ struct dpll_pin_ref *ref;
+ struct dpll_pin *ppin;
+ struct nlattr *nest;
+ unsigned long index;
+ int ret;
+
+ xa_for_each(&pin->parent_refs, index, ref) {
+ const struct dpll_pin_ops *ops = dpll_pin_ops(ref);
+ void *parent_priv;
+
+ ppin = ref->pin;
+ parent_priv = dpll_pin_on_dpll_priv(dpll_ref->dpll, ppin);
+ ret = ops->state_on_pin_get(pin,
+ dpll_pin_on_pin_priv(ppin, pin),
+ ppin, parent_priv, &state, extack);
+ if (ret)
+ return ret;
+ nest = nla_nest_start(msg, DPLL_A_PIN_PARENT_PIN);
+ if (!nest)
+ return -EMSGSIZE;
+ ret = dpll_msg_add_dev_parent_handle(msg, ppin->id);
+ if (ret)
+ goto nest_cancel;
+ if (nla_put_u32(msg, DPLL_A_PIN_STATE, state)) {
+ ret = -EMSGSIZE;
+ goto nest_cancel;
+ }
+ nla_nest_end(msg, nest);
+ }
+
+ return 0;
+
+nest_cancel:
+ nla_nest_cancel(msg, nest);
+ return ret;
+}
+
+static int
+dpll_msg_add_pin_dplls(struct sk_buff *msg, struct dpll_pin *pin,
+ struct netlink_ext_ack *extack)
+{
+ struct dpll_pin_ref *ref;
+ struct nlattr *attr;
+ unsigned long index;
+ int ret;
+
+ xa_for_each(&pin->dpll_refs, index, ref) {
+ attr = nla_nest_start(msg, DPLL_A_PIN_PARENT_DEVICE);
+ if (!attr)
+ return -EMSGSIZE;
+ ret = dpll_msg_add_dev_parent_handle(msg, ref->dpll->id);
+ if (ret)
+ goto nest_cancel;
+ ret = dpll_msg_add_pin_on_dpll_state(msg, pin, ref, extack);
+ if (ret)
+ goto nest_cancel;
+ ret = dpll_msg_add_pin_prio(msg, pin, ref, extack);
+ if (ret)
+ goto nest_cancel;
+ ret = dpll_msg_add_pin_direction(msg, pin, ref, extack);
+ if (ret)
+ goto nest_cancel;
+ ret = dpll_msg_add_phase_offset(msg, pin, ref, extack);
+ if (ret)
+ goto nest_cancel;
+ nla_nest_end(msg, attr);
+ }
+
+ return 0;
+
+nest_cancel:
+ nla_nest_end(msg, attr);
+ return ret;
+}
+
+static int
+dpll_cmd_pin_get_one(struct sk_buff *msg, struct dpll_pin *pin,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_pin_properties *prop = pin->prop;
+ struct dpll_pin_ref *ref;
+ int ret;
+
+ ref = dpll_xa_ref_dpll_first(&pin->dpll_refs);
+ ASSERT_NOT_NULL(ref);
+
+ ret = dpll_msg_add_pin_handle(msg, pin);
+ if (ret)
+ return ret;
+ if (nla_put_string(msg, DPLL_A_PIN_MODULE_NAME,
+ module_name(pin->module)))
+ return -EMSGSIZE;
+ if (nla_put_64bit(msg, DPLL_A_PIN_CLOCK_ID, sizeof(pin->clock_id),
+ &pin->clock_id, DPLL_A_PIN_PAD))
+ return -EMSGSIZE;
+ if (prop->board_label &&
+ nla_put_string(msg, DPLL_A_PIN_BOARD_LABEL, prop->board_label))
+ return -EMSGSIZE;
+ if (prop->panel_label &&
+ nla_put_string(msg, DPLL_A_PIN_PANEL_LABEL, prop->panel_label))
+ return -EMSGSIZE;
+ if (prop->package_label &&
+ nla_put_string(msg, DPLL_A_PIN_PACKAGE_LABEL,
+ prop->package_label))
+ return -EMSGSIZE;
+ if (nla_put_u32(msg, DPLL_A_PIN_TYPE, prop->type))
+ return -EMSGSIZE;
+ if (nla_put_u32(msg, DPLL_A_PIN_CAPABILITIES, prop->capabilities))
+ return -EMSGSIZE;
+ ret = dpll_msg_add_pin_freq(msg, pin, ref, extack);
+ if (ret)
+ return ret;
+ if (nla_put_s32(msg, DPLL_A_PIN_PHASE_ADJUST_MIN,
+ prop->phase_range.min))
+ return -EMSGSIZE;
+ if (nla_put_s32(msg, DPLL_A_PIN_PHASE_ADJUST_MAX,
+ prop->phase_range.max))
+ return -EMSGSIZE;
+ ret = dpll_msg_add_pin_phase_adjust(msg, pin, ref, extack);
+ if (ret)
+ return ret;
+ if (xa_empty(&pin->parent_refs))
+ ret = dpll_msg_add_pin_dplls(msg, pin, extack);
+ else
+ ret = dpll_msg_add_pin_parents(msg, pin, ref, extack);
+
+ return ret;
+}
+
+static int
+dpll_device_get_one(struct dpll_device *dpll, struct sk_buff *msg,
+ struct netlink_ext_ack *extack)
+{
+ int ret;
+
+ ret = dpll_msg_add_dev_handle(msg, dpll);
+ if (ret)
+ return ret;
+ if (nla_put_string(msg, DPLL_A_MODULE_NAME, module_name(dpll->module)))
+ return -EMSGSIZE;
+ if (nla_put_64bit(msg, DPLL_A_CLOCK_ID, sizeof(dpll->clock_id),
+ &dpll->clock_id, DPLL_A_PAD))
+ return -EMSGSIZE;
+ ret = dpll_msg_add_temp(msg, dpll, extack);
+ if (ret)
+ return ret;
+ ret = dpll_msg_add_lock_status(msg, dpll, extack);
+ if (ret)
+ return ret;
+ ret = dpll_msg_add_mode(msg, dpll, extack);
+ if (ret)
+ return ret;
+ ret = dpll_msg_add_mode_supported(msg, dpll, extack);
+ if (ret)
+ return ret;
+ if (nla_put_u32(msg, DPLL_A_TYPE, dpll->type))
+ return -EMSGSIZE;
+
+ return 0;
+}
+
+static int
+dpll_device_event_send(enum dpll_cmd event, struct dpll_device *dpll)
+{
+ struct sk_buff *msg;
+ int ret = -ENOMEM;
+ void *hdr;
+
+ if (WARN_ON(!xa_get_mark(&dpll_device_xa, dpll->id, DPLL_REGISTERED)))
+ return -ENODEV;
+ msg = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!msg)
+ return -ENOMEM;
+ hdr = genlmsg_put(msg, 0, 0, &dpll_nl_family, 0, event);
+ if (!hdr)
+ goto err_free_msg;
+ ret = dpll_device_get_one(dpll, msg, NULL);
+ if (ret)
+ goto err_cancel_msg;
+ genlmsg_end(msg, hdr);
+ genlmsg_multicast(&dpll_nl_family, msg, 0, 0, GFP_KERNEL);
+
+ return 0;
+
+err_cancel_msg:
+ genlmsg_cancel(msg, hdr);
+err_free_msg:
+ nlmsg_free(msg);
+
+ return ret;
+}
+
+int dpll_device_create_ntf(struct dpll_device *dpll)
+{
+ return dpll_device_event_send(DPLL_CMD_DEVICE_CREATE_NTF, dpll);
+}
+
+int dpll_device_delete_ntf(struct dpll_device *dpll)
+{
+ return dpll_device_event_send(DPLL_CMD_DEVICE_DELETE_NTF, dpll);
+}
+
+static int
+__dpll_device_change_ntf(struct dpll_device *dpll)
+{
+ return dpll_device_event_send(DPLL_CMD_DEVICE_CHANGE_NTF, dpll);
+}
+
+/**
+ * dpll_device_change_ntf - notify that the dpll device has been changed
+ * @dpll: registered dpll pointer
+ *
+ * Context: acquires and holds a dpll_lock.
+ * Return: 0 on success, error code otherwise.
+ */
+int dpll_device_change_ntf(struct dpll_device *dpll)
+{
+ int ret;
+
+ mutex_lock(&dpll_lock);
+ ret = __dpll_device_change_ntf(dpll);
+ mutex_unlock(&dpll_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dpll_device_change_ntf);
+
+static int
+dpll_pin_event_send(enum dpll_cmd event, struct dpll_pin *pin)
+{
+ struct sk_buff *msg;
+ int ret = -ENOMEM;
+ void *hdr;
+
+ if (WARN_ON(!xa_get_mark(&dpll_pin_xa, pin->id, DPLL_REGISTERED)))
+ return -ENODEV;
+
+ msg = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!msg)
+ return -ENOMEM;
+
+ hdr = genlmsg_put(msg, 0, 0, &dpll_nl_family, 0, event);
+ if (!hdr)
+ goto err_free_msg;
+ ret = dpll_cmd_pin_get_one(msg, pin, NULL);
+ if (ret)
+ goto err_cancel_msg;
+ genlmsg_end(msg, hdr);
+ genlmsg_multicast(&dpll_nl_family, msg, 0, 0, GFP_KERNEL);
+
+ return 0;
+
+err_cancel_msg:
+ genlmsg_cancel(msg, hdr);
+err_free_msg:
+ nlmsg_free(msg);
+
+ return ret;
+}
+
+int dpll_pin_create_ntf(struct dpll_pin *pin)
+{
+ return dpll_pin_event_send(DPLL_CMD_PIN_CREATE_NTF, pin);
+}
+
+int dpll_pin_delete_ntf(struct dpll_pin *pin)
+{
+ return dpll_pin_event_send(DPLL_CMD_PIN_DELETE_NTF, pin);
+}
+
+static int __dpll_pin_change_ntf(struct dpll_pin *pin)
+{
+ return dpll_pin_event_send(DPLL_CMD_PIN_CHANGE_NTF, pin);
+}
+
+/**
+ * dpll_pin_change_ntf - notify that the pin has been changed
+ * @pin: registered pin pointer
+ *
+ * Context: acquires and holds a dpll_lock.
+ * Return: 0 on success, error code otherwise.
+ */
+int dpll_pin_change_ntf(struct dpll_pin *pin)
+{
+ int ret;
+
+ mutex_lock(&dpll_lock);
+ ret = __dpll_pin_change_ntf(pin);
+ mutex_unlock(&dpll_lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dpll_pin_change_ntf);
+
+static int
+dpll_pin_freq_set(struct dpll_pin *pin, struct nlattr *a,
+ struct netlink_ext_ack *extack)
+{
+ u64 freq = nla_get_u64(a), old_freq;
+ struct dpll_pin_ref *ref, *failed;
+ const struct dpll_pin_ops *ops;
+ struct dpll_device *dpll;
+ unsigned long i;
+ int ret;
+
+ if (!dpll_pin_is_freq_supported(pin, freq)) {
+ NL_SET_ERR_MSG_ATTR(extack, a, "frequency is not supported by the device");
+ return -EINVAL;
+ }
+
+ xa_for_each(&pin->dpll_refs, i, ref) {
+ ops = dpll_pin_ops(ref);
+ if (!ops->frequency_set || !ops->frequency_get) {
+ NL_SET_ERR_MSG(extack, "frequency set not supported by the device");
+ return -EOPNOTSUPP;
+ }
+ }
+ ref = dpll_xa_ref_dpll_first(&pin->dpll_refs);
+ ops = dpll_pin_ops(ref);
+ dpll = ref->dpll;
+ ret = ops->frequency_get(pin, dpll_pin_on_dpll_priv(dpll, pin), dpll,
+ dpll_priv(dpll), &old_freq, extack);
+ if (ret) {
+ NL_SET_ERR_MSG(extack, "unable to get old frequency value");
+ return ret;
+ }
+ if (freq == old_freq)
+ return 0;
+
+ xa_for_each(&pin->dpll_refs, i, ref) {
+ ops = dpll_pin_ops(ref);
+ dpll = ref->dpll;
+ ret = ops->frequency_set(pin, dpll_pin_on_dpll_priv(dpll, pin),
+ dpll, dpll_priv(dpll), freq, extack);
+ if (ret) {
+ failed = ref;
+ NL_SET_ERR_MSG_FMT(extack, "frequency set failed for dpll_id:%u",
+ dpll->id);
+ goto rollback;
+ }
+ }
+ __dpll_pin_change_ntf(pin);
+
+ return 0;
+
+rollback:
+ xa_for_each(&pin->dpll_refs, i, ref) {
+ if (ref == failed)
+ break;
+ ops = dpll_pin_ops(ref);
+ dpll = ref->dpll;
+ if (ops->frequency_set(pin, dpll_pin_on_dpll_priv(dpll, pin),
+ dpll, dpll_priv(dpll), old_freq, extack))
+ NL_SET_ERR_MSG(extack, "set frequency rollback failed");
+ }
+ return ret;
+}
+
+static int
+dpll_pin_on_pin_state_set(struct dpll_pin *pin, u32 parent_idx,
+ enum dpll_pin_state state,
+ struct netlink_ext_ack *extack)
+{
+ struct dpll_pin_ref *parent_ref;
+ const struct dpll_pin_ops *ops;
+ struct dpll_pin_ref *dpll_ref;
+ void *pin_priv, *parent_priv;
+ struct dpll_pin *parent;
+ unsigned long i;
+ int ret;
+
+ if (!(DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE &
+ pin->prop->capabilities)) {
+ NL_SET_ERR_MSG(extack, "state changing is not allowed");
+ return -EOPNOTSUPP;
+ }
+ parent = xa_load(&dpll_pin_xa, parent_idx);
+ if (!parent)
+ return -EINVAL;
+ parent_ref = xa_load(&pin->parent_refs, parent->pin_idx);
+ if (!parent_ref)
+ return -EINVAL;
+ xa_for_each(&parent->dpll_refs, i, dpll_ref) {
+ ops = dpll_pin_ops(parent_ref);
+ if (!ops->state_on_pin_set)
+ return -EOPNOTSUPP;
+ pin_priv = dpll_pin_on_pin_priv(parent, pin);
+ parent_priv = dpll_pin_on_dpll_priv(dpll_ref->dpll, parent);
+ ret = ops->state_on_pin_set(pin, pin_priv, parent, parent_priv,
+ state, extack);
+ if (ret)
+ return ret;
+ }
+ __dpll_pin_change_ntf(pin);
+
+ return 0;
+}
+
+static int
+dpll_pin_state_set(struct dpll_device *dpll, struct dpll_pin *pin,
+ enum dpll_pin_state state,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_pin_ops *ops;
+ struct dpll_pin_ref *ref;
+ int ret;
+
+ if (!(DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE &
+ pin->prop->capabilities)) {
+ NL_SET_ERR_MSG(extack, "state changing is not allowed");
+ return -EOPNOTSUPP;
+ }
+ ref = xa_load(&pin->dpll_refs, dpll->id);
+ ASSERT_NOT_NULL(ref);
+ ops = dpll_pin_ops(ref);
+ if (!ops->state_on_dpll_set)
+ return -EOPNOTSUPP;
+ ret = ops->state_on_dpll_set(pin, dpll_pin_on_dpll_priv(dpll, pin),
+ dpll, dpll_priv(dpll), state, extack);
+ if (ret)
+ return ret;
+ __dpll_pin_change_ntf(pin);
+
+ return 0;
+}
+
+static int
+dpll_pin_prio_set(struct dpll_device *dpll, struct dpll_pin *pin,
+ u32 prio, struct netlink_ext_ack *extack)
+{
+ const struct dpll_pin_ops *ops;
+ struct dpll_pin_ref *ref;
+ int ret;
+
+ if (!(DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE &
+ pin->prop->capabilities)) {
+ NL_SET_ERR_MSG(extack, "prio changing is not allowed");
+ return -EOPNOTSUPP;
+ }
+ ref = xa_load(&pin->dpll_refs, dpll->id);
+ ASSERT_NOT_NULL(ref);
+ ops = dpll_pin_ops(ref);
+ if (!ops->prio_set)
+ return -EOPNOTSUPP;
+ ret = ops->prio_set(pin, dpll_pin_on_dpll_priv(dpll, pin), dpll,
+ dpll_priv(dpll), prio, extack);
+ if (ret)
+ return ret;
+ __dpll_pin_change_ntf(pin);
+
+ return 0;
+}
+
+static int
+dpll_pin_direction_set(struct dpll_pin *pin, struct dpll_device *dpll,
+ enum dpll_pin_direction direction,
+ struct netlink_ext_ack *extack)
+{
+ const struct dpll_pin_ops *ops;
+ struct dpll_pin_ref *ref;
+ int ret;
+
+ if (!(DPLL_PIN_CAPABILITIES_DIRECTION_CAN_CHANGE &
+ pin->prop->capabilities)) {
+ NL_SET_ERR_MSG(extack, "direction changing is not allowed");
+ return -EOPNOTSUPP;
+ }
+ ref = xa_load(&pin->dpll_refs, dpll->id);
+ ASSERT_NOT_NULL(ref);
+ ops = dpll_pin_ops(ref);
+ if (!ops->direction_set)
+ return -EOPNOTSUPP;
+ ret = ops->direction_set(pin, dpll_pin_on_dpll_priv(dpll, pin),
+ dpll, dpll_priv(dpll), direction, extack);
+ if (ret)
+ return ret;
+ __dpll_pin_change_ntf(pin);
+
+ return 0;
+}
+
+static int
+dpll_pin_phase_adj_set(struct dpll_pin *pin, struct nlattr *phase_adj_attr,
+ struct netlink_ext_ack *extack)
+{
+ struct dpll_pin_ref *ref, *failed;
+ const struct dpll_pin_ops *ops;
+ s32 phase_adj, old_phase_adj;
+ struct dpll_device *dpll;
+ unsigned long i;
+ int ret;
+
+ phase_adj = nla_get_s32(phase_adj_attr);
+ if (phase_adj > pin->prop->phase_range.max ||
+ phase_adj < pin->prop->phase_range.min) {
+ NL_SET_ERR_MSG_ATTR(extack, phase_adj_attr,
+ "phase adjust value not supported");
+ return -EINVAL;
+ }
+
+ xa_for_each(&pin->dpll_refs, i, ref) {
+ ops = dpll_pin_ops(ref);
+ if (!ops->phase_adjust_set || !ops->phase_adjust_get) {
+ NL_SET_ERR_MSG(extack, "phase adjust not supported");
+ return -EOPNOTSUPP;
+ }
+ }
+ ref = dpll_xa_ref_dpll_first(&pin->dpll_refs);
+ ops = dpll_pin_ops(ref);
+ dpll = ref->dpll;
+ ret = ops->phase_adjust_get(pin, dpll_pin_on_dpll_priv(dpll, pin),
+ dpll, dpll_priv(dpll), &old_phase_adj,
+ extack);
+ if (ret) {
+ NL_SET_ERR_MSG(extack, "unable to get old phase adjust value");
+ return ret;
+ }
+ if (phase_adj == old_phase_adj)
+ return 0;
+
+ xa_for_each(&pin->dpll_refs, i, ref) {
+ ops = dpll_pin_ops(ref);
+ dpll = ref->dpll;
+ ret = ops->phase_adjust_set(pin,
+ dpll_pin_on_dpll_priv(dpll, pin),
+ dpll, dpll_priv(dpll), phase_adj,
+ extack);
+ if (ret) {
+ failed = ref;
+ NL_SET_ERR_MSG_FMT(extack,
+ "phase adjust set failed for dpll_id:%u",
+ dpll->id);
+ goto rollback;
+ }
+ }
+ __dpll_pin_change_ntf(pin);
+
+ return 0;
+
+rollback:
+ xa_for_each(&pin->dpll_refs, i, ref) {
+ if (ref == failed)
+ break;
+ ops = dpll_pin_ops(ref);
+ dpll = ref->dpll;
+ if (ops->phase_adjust_set(pin, dpll_pin_on_dpll_priv(dpll, pin),
+ dpll, dpll_priv(dpll), old_phase_adj,
+ extack))
+ NL_SET_ERR_MSG(extack, "set phase adjust rollback failed");
+ }
+ return ret;
+}
+
+static int
+dpll_pin_parent_device_set(struct dpll_pin *pin, struct nlattr *parent_nest,
+ struct netlink_ext_ack *extack)
+{
+ struct nlattr *tb[DPLL_A_PIN_MAX + 1];
+ enum dpll_pin_direction direction;
+ enum dpll_pin_state state;
+ struct dpll_pin_ref *ref;
+ struct dpll_device *dpll;
+ u32 pdpll_idx, prio;
+ int ret;
+
+ nla_parse_nested(tb, DPLL_A_PIN_MAX, parent_nest,
+ dpll_pin_parent_device_nl_policy, extack);
+ if (!tb[DPLL_A_PIN_PARENT_ID]) {
+ NL_SET_ERR_MSG(extack, "device parent id expected");
+ return -EINVAL;
+ }
+ pdpll_idx = nla_get_u32(tb[DPLL_A_PIN_PARENT_ID]);
+ dpll = xa_load(&dpll_device_xa, pdpll_idx);
+ if (!dpll) {
+ NL_SET_ERR_MSG(extack, "parent device not found");
+ return -EINVAL;
+ }
+ ref = xa_load(&pin->dpll_refs, dpll->id);
+ if (!ref) {
+ NL_SET_ERR_MSG(extack, "pin not connected to given parent device");
+ return -EINVAL;
+ }
+ if (tb[DPLL_A_PIN_STATE]) {
+ state = nla_get_u32(tb[DPLL_A_PIN_STATE]);
+ ret = dpll_pin_state_set(dpll, pin, state, extack);
+ if (ret)
+ return ret;
+ }
+ if (tb[DPLL_A_PIN_PRIO]) {
+ prio = nla_get_u32(tb[DPLL_A_PIN_PRIO]);
+ ret = dpll_pin_prio_set(dpll, pin, prio, extack);
+ if (ret)
+ return ret;
+ }
+ if (tb[DPLL_A_PIN_DIRECTION]) {
+ direction = nla_get_u32(tb[DPLL_A_PIN_DIRECTION]);
+ ret = dpll_pin_direction_set(pin, dpll, direction, extack);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+static int
+dpll_pin_parent_pin_set(struct dpll_pin *pin, struct nlattr *parent_nest,
+ struct netlink_ext_ack *extack)
+{
+ struct nlattr *tb[DPLL_A_PIN_MAX + 1];
+ enum dpll_pin_state state;
+ u32 ppin_idx;
+ int ret;
+
+ nla_parse_nested(tb, DPLL_A_PIN_MAX, parent_nest,
+ dpll_pin_parent_pin_nl_policy, extack);
+ if (!tb[DPLL_A_PIN_PARENT_ID]) {
+ NL_SET_ERR_MSG(extack, "device parent id expected");
+ return -EINVAL;
+ }
+ ppin_idx = nla_get_u32(tb[DPLL_A_PIN_PARENT_ID]);
+ state = nla_get_u32(tb[DPLL_A_PIN_STATE]);
+ ret = dpll_pin_on_pin_state_set(pin, ppin_idx, state, extack);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static int
+dpll_pin_set_from_nlattr(struct dpll_pin *pin, struct genl_info *info)
+{
+ struct nlattr *a;
+ int rem, ret;
+
+ nla_for_each_attr(a, genlmsg_data(info->genlhdr),
+ genlmsg_len(info->genlhdr), rem) {
+ switch (nla_type(a)) {
+ case DPLL_A_PIN_FREQUENCY:
+ ret = dpll_pin_freq_set(pin, a, info->extack);
+ if (ret)
+ return ret;
+ break;
+ case DPLL_A_PIN_PHASE_ADJUST:
+ ret = dpll_pin_phase_adj_set(pin, a, info->extack);
+ if (ret)
+ return ret;
+ break;
+ case DPLL_A_PIN_PARENT_DEVICE:
+ ret = dpll_pin_parent_device_set(pin, a, info->extack);
+ if (ret)
+ return ret;
+ break;
+ case DPLL_A_PIN_PARENT_PIN:
+ ret = dpll_pin_parent_pin_set(pin, a, info->extack);
+ if (ret)
+ return ret;
+ break;
+ }
+ }
+
+ return 0;
+}
+
+static struct dpll_pin *
+dpll_pin_find(u64 clock_id, struct nlattr *mod_name_attr,
+ enum dpll_pin_type type, struct nlattr *board_label,
+ struct nlattr *panel_label, struct nlattr *package_label,
+ struct netlink_ext_ack *extack)
+{
+ bool board_match, panel_match, package_match;
+ struct dpll_pin *pin_match = NULL, *pin;
+ const struct dpll_pin_properties *prop;
+ bool cid_match, mod_match, type_match;
+ unsigned long i;
+
+ xa_for_each_marked(&dpll_pin_xa, i, pin, DPLL_REGISTERED) {
+ prop = pin->prop;
+ cid_match = clock_id ? pin->clock_id == clock_id : true;
+ mod_match = mod_name_attr && module_name(pin->module) ?
+ !nla_strcmp(mod_name_attr,
+ module_name(pin->module)) : true;
+ type_match = type ? prop->type == type : true;
+ board_match = board_label ? (prop->board_label ?
+ !nla_strcmp(board_label, prop->board_label) : false) :
+ true;
+ panel_match = panel_label ? (prop->panel_label ?
+ !nla_strcmp(panel_label, prop->panel_label) : false) :
+ true;
+ package_match = package_label ? (prop->package_label ?
+ !nla_strcmp(package_label, prop->package_label) :
+ false) : true;
+ if (cid_match && mod_match && type_match && board_match &&
+ panel_match && package_match) {
+ if (pin_match) {
+ NL_SET_ERR_MSG(extack, "multiple matches");
+ return ERR_PTR(-EINVAL);
+ }
+ pin_match = pin;
+ }
+ }
+ if (!pin_match) {
+ NL_SET_ERR_MSG(extack, "not found");
+ return ERR_PTR(-ENODEV);
+ }
+ return pin_match;
+}
+
+static struct dpll_pin *dpll_pin_find_from_nlattr(struct genl_info *info)
+{
+ struct nlattr *attr, *mod_name_attr = NULL, *board_label_attr = NULL,
+ *panel_label_attr = NULL, *package_label_attr = NULL;
+ enum dpll_pin_type type = 0;
+ u64 clock_id = 0;
+ int rem = 0;
+
+ nla_for_each_attr(attr, genlmsg_data(info->genlhdr),
+ genlmsg_len(info->genlhdr), rem) {
+ switch (nla_type(attr)) {
+ case DPLL_A_PIN_CLOCK_ID:
+ if (clock_id)
+ goto duplicated_attr;
+ clock_id = nla_get_u64(attr);
+ break;
+ case DPLL_A_PIN_MODULE_NAME:
+ if (mod_name_attr)
+ goto duplicated_attr;
+ mod_name_attr = attr;
+ break;
+ case DPLL_A_PIN_TYPE:
+ if (type)
+ goto duplicated_attr;
+ type = nla_get_u32(attr);
+ break;
+ case DPLL_A_PIN_BOARD_LABEL:
+ if (board_label_attr)
+ goto duplicated_attr;
+ board_label_attr = attr;
+ break;
+ case DPLL_A_PIN_PANEL_LABEL:
+ if (panel_label_attr)
+ goto duplicated_attr;
+ panel_label_attr = attr;
+ break;
+ case DPLL_A_PIN_PACKAGE_LABEL:
+ if (package_label_attr)
+ goto duplicated_attr;
+ package_label_attr = attr;
+ break;
+ default:
+ break;
+ }
+ }
+ if (!(clock_id || mod_name_attr || board_label_attr ||
+ panel_label_attr || package_label_attr)) {
+ NL_SET_ERR_MSG(info->extack, "missing attributes");
+ return ERR_PTR(-EINVAL);
+ }
+ return dpll_pin_find(clock_id, mod_name_attr, type, board_label_attr,
+ panel_label_attr, package_label_attr,
+ info->extack);
+duplicated_attr:
+ NL_SET_ERR_MSG(info->extack, "duplicated attribute");
+ return ERR_PTR(-EINVAL);
+}
+
+int dpll_nl_pin_id_get_doit(struct sk_buff *skb, struct genl_info *info)
+{
+ struct dpll_pin *pin;
+ struct sk_buff *msg;
+ struct nlattr *hdr;
+ int ret;
+
+ msg = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!msg)
+ return -ENOMEM;
+ hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0,
+ DPLL_CMD_PIN_ID_GET);
+ if (!hdr) {
+ nlmsg_free(msg);
+ return -EMSGSIZE;
+ }
+
+ pin = dpll_pin_find_from_nlattr(info);
+ if (!IS_ERR(pin)) {
+ ret = dpll_msg_add_pin_handle(msg, pin);
+ if (ret) {
+ nlmsg_free(msg);
+ return ret;
+ }
+ }
+ genlmsg_end(msg, hdr);
+
+ return genlmsg_reply(msg, info);
+}
+
+int dpll_nl_pin_get_doit(struct sk_buff *skb, struct genl_info *info)
+{
+ struct dpll_pin *pin = info->user_ptr[0];
+ struct sk_buff *msg;
+ struct nlattr *hdr;
+ int ret;
+
+ if (!pin)
+ return -ENODEV;
+ msg = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!msg)
+ return -ENOMEM;
+ hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0,
+ DPLL_CMD_PIN_GET);
+ if (!hdr) {
+ nlmsg_free(msg);
+ return -EMSGSIZE;
+ }
+ ret = dpll_cmd_pin_get_one(msg, pin, info->extack);
+ if (ret) {
+ nlmsg_free(msg);
+ return ret;
+ }
+ genlmsg_end(msg, hdr);
+
+ return genlmsg_reply(msg, info);
+}
+
+int dpll_nl_pin_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
+{
+ struct dpll_dump_ctx *ctx = dpll_dump_context(cb);
+ struct dpll_pin *pin;
+ struct nlattr *hdr;
+ unsigned long i;
+ int ret = 0;
+
+ xa_for_each_marked_start(&dpll_pin_xa, i, pin, DPLL_REGISTERED,
+ ctx->idx) {
+ hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq,
+ &dpll_nl_family, NLM_F_MULTI,
+ DPLL_CMD_PIN_GET);
+ if (!hdr) {
+ ret = -EMSGSIZE;
+ break;
+ }
+ ret = dpll_cmd_pin_get_one(skb, pin, cb->extack);
+ if (ret) {
+ genlmsg_cancel(skb, hdr);
+ break;
+ }
+ genlmsg_end(skb, hdr);
+ }
+ if (ret == -EMSGSIZE) {
+ ctx->idx = i;
+ return skb->len;
+ }
+ return ret;
+}
+
+int dpll_nl_pin_set_doit(struct sk_buff *skb, struct genl_info *info)
+{
+ struct dpll_pin *pin = info->user_ptr[0];
+
+ return dpll_pin_set_from_nlattr(pin, info);
+}
+
+static struct dpll_device *
+dpll_device_find(u64 clock_id, struct nlattr *mod_name_attr,
+ enum dpll_type type, struct netlink_ext_ack *extack)
+{
+ struct dpll_device *dpll_match = NULL, *dpll;
+ bool cid_match, mod_match, type_match;
+ unsigned long i;
+
+ xa_for_each_marked(&dpll_device_xa, i, dpll, DPLL_REGISTERED) {
+ cid_match = clock_id ? dpll->clock_id == clock_id : true;
+ mod_match = mod_name_attr ? (module_name(dpll->module) ?
+ !nla_strcmp(mod_name_attr,
+ module_name(dpll->module)) : false) : true;
+ type_match = type ? dpll->type == type : true;
+ if (cid_match && mod_match && type_match) {
+ if (dpll_match) {
+ NL_SET_ERR_MSG(extack, "multiple matches");
+ return ERR_PTR(-EINVAL);
+ }
+ dpll_match = dpll;
+ }
+ }
+ if (!dpll_match) {
+ NL_SET_ERR_MSG(extack, "not found");
+ return ERR_PTR(-ENODEV);
+ }
+
+ return dpll_match;
+}
+
+static struct dpll_device *
+dpll_device_find_from_nlattr(struct genl_info *info)
+{
+ struct nlattr *attr, *mod_name_attr = NULL;
+ enum dpll_type type = 0;
+ u64 clock_id = 0;
+ int rem = 0;
+
+ nla_for_each_attr(attr, genlmsg_data(info->genlhdr),
+ genlmsg_len(info->genlhdr), rem) {
+ switch (nla_type(attr)) {
+ case DPLL_A_CLOCK_ID:
+ if (clock_id)
+ goto duplicated_attr;
+ clock_id = nla_get_u64(attr);
+ break;
+ case DPLL_A_MODULE_NAME:
+ if (mod_name_attr)
+ goto duplicated_attr;
+ mod_name_attr = attr;
+ break;
+ case DPLL_A_TYPE:
+ if (type)
+ goto duplicated_attr;
+ type = nla_get_u32(attr);
+ break;
+ default:
+ break;
+ }
+ }
+ if (!clock_id && !mod_name_attr && !type) {
+ NL_SET_ERR_MSG(info->extack, "missing attributes");
+ return ERR_PTR(-EINVAL);
+ }
+ return dpll_device_find(clock_id, mod_name_attr, type, info->extack);
+duplicated_attr:
+ NL_SET_ERR_MSG(info->extack, "duplicated attribute");
+ return ERR_PTR(-EINVAL);
+}
+
+int dpll_nl_device_id_get_doit(struct sk_buff *skb, struct genl_info *info)
+{
+ struct dpll_device *dpll;
+ struct sk_buff *msg;
+ struct nlattr *hdr;
+ int ret;
+
+ msg = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!msg)
+ return -ENOMEM;
+ hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0,
+ DPLL_CMD_DEVICE_ID_GET);
+ if (!hdr) {
+ nlmsg_free(msg);
+ return -EMSGSIZE;
+ }
+
+ dpll = dpll_device_find_from_nlattr(info);
+ if (!IS_ERR(dpll)) {
+ ret = dpll_msg_add_dev_handle(msg, dpll);
+ if (ret) {
+ nlmsg_free(msg);
+ return ret;
+ }
+ }
+ genlmsg_end(msg, hdr);
+
+ return genlmsg_reply(msg, info);
+}
+
+int dpll_nl_device_get_doit(struct sk_buff *skb, struct genl_info *info)
+{
+ struct dpll_device *dpll = info->user_ptr[0];
+ struct sk_buff *msg;
+ struct nlattr *hdr;
+ int ret;
+
+ msg = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!msg)
+ return -ENOMEM;
+ hdr = genlmsg_put_reply(msg, info, &dpll_nl_family, 0,
+ DPLL_CMD_DEVICE_GET);
+ if (!hdr) {
+ nlmsg_free(msg);
+ return -EMSGSIZE;
+ }
+
+ ret = dpll_device_get_one(dpll, msg, info->extack);
+ if (ret) {
+ nlmsg_free(msg);
+ return ret;
+ }
+ genlmsg_end(msg, hdr);
+
+ return genlmsg_reply(msg, info);
+}
+
+int dpll_nl_device_set_doit(struct sk_buff *skb, struct genl_info *info)
+{
+ /* placeholder for set command */
+ return 0;
+}
+
+int dpll_nl_device_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb)
+{
+ struct dpll_dump_ctx *ctx = dpll_dump_context(cb);
+ struct dpll_device *dpll;
+ struct nlattr *hdr;
+ unsigned long i;
+ int ret = 0;
+
+ xa_for_each_marked_start(&dpll_device_xa, i, dpll, DPLL_REGISTERED,
+ ctx->idx) {
+ hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq, &dpll_nl_family,
+ NLM_F_MULTI, DPLL_CMD_DEVICE_GET);
+ if (!hdr) {
+ ret = -EMSGSIZE;
+ break;
+ }
+ ret = dpll_device_get_one(dpll, skb, cb->extack);
+ if (ret) {
+ genlmsg_cancel(skb, hdr);
+ break;
+ }
+ genlmsg_end(skb, hdr);
+ }
+ if (ret == -EMSGSIZE) {
+ ctx->idx = i;
+ return skb->len;
+ }
+ return ret;
+}
+
+int dpll_pre_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info)
+{
+ u32 id;
+
+ if (GENL_REQ_ATTR_CHECK(info, DPLL_A_ID))
+ return -EINVAL;
+
+ mutex_lock(&dpll_lock);
+ id = nla_get_u32(info->attrs[DPLL_A_ID]);
+ info->user_ptr[0] = dpll_device_get_by_id(id);
+ if (!info->user_ptr[0]) {
+ NL_SET_ERR_MSG(info->extack, "device not found");
+ goto unlock;
+ }
+ return 0;
+unlock:
+ mutex_unlock(&dpll_lock);
+ return -ENODEV;
+}
+
+void dpll_post_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info)
+{
+ mutex_unlock(&dpll_lock);
+}
+
+int
+dpll_lock_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info)
+{
+ mutex_lock(&dpll_lock);
+
+ return 0;
+}
+
+void
+dpll_unlock_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info)
+{
+ mutex_unlock(&dpll_lock);
+}
+
+int dpll_lock_dumpit(struct netlink_callback *cb)
+{
+ mutex_lock(&dpll_lock);
+
+ return 0;
+}
+
+int dpll_unlock_dumpit(struct netlink_callback *cb)
+{
+ mutex_unlock(&dpll_lock);
+
+ return 0;
+}
+
+int dpll_pin_pre_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info)
+{
+ int ret;
+
+ mutex_lock(&dpll_lock);
+ if (GENL_REQ_ATTR_CHECK(info, DPLL_A_PIN_ID)) {
+ ret = -EINVAL;
+ goto unlock_dev;
+ }
+ info->user_ptr[0] = xa_load(&dpll_pin_xa,
+ nla_get_u32(info->attrs[DPLL_A_PIN_ID]));
+ if (!info->user_ptr[0]) {
+ NL_SET_ERR_MSG(info->extack, "pin not found");
+ ret = -ENODEV;
+ goto unlock_dev;
+ }
+
+ return 0;
+
+unlock_dev:
+ mutex_unlock(&dpll_lock);
+ return ret;
+}
+
+void dpll_pin_post_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info)
+{
+ mutex_unlock(&dpll_lock);
+}
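
dpll_device_change_ntf() and dpll_pin_change_ntf() above are the entry points
drivers call when hardware state moves underneath them. A minimal driver-side
sketch, with hypothetical "edrv" names and assuming priv->dpll and priv->pin
were obtained from dpll_device_get()/dpll_pin_get():

	static void edrv_status_work(struct work_struct *work)
	{
		struct edrv_priv *priv = container_of(work, struct edrv_priv, work);

		/* ... re-read lock status, temperature, phase offset from HW ... */

		dpll_device_change_ntf(priv->dpll);	/* DPLL_CMD_DEVICE_CHANGE_NTF */
		dpll_pin_change_ntf(priv->pin);		/* DPLL_CMD_PIN_CHANGE_NTF */
	}
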
diff --git a/drivers/dpll/dpll_netlink.h b/drivers/dpll/dpll_netlink.h
new file mode 100644
index 000000000000..a9cfd55f57fc
--- /dev/null
+++ b/drivers/dpll/dpll_netlink.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2023 Meta Platforms, Inc. and affiliates
+ * Copyright (c) 2023 Intel and affiliates
+ */
+
+int dpll_device_create_ntf(struct dpll_device *dpll);
+
+int dpll_device_delete_ntf(struct dpll_device *dpll);
+
+int dpll_pin_create_ntf(struct dpll_pin *pin);
+
+int dpll_pin_delete_ntf(struct dpll_pin *pin);
diff --git a/drivers/dpll/dpll_nl.c b/drivers/dpll/dpll_nl.c
new file mode 100644
index 000000000000..eaee5be7aa64
--- /dev/null
+++ b/drivers/dpll/dpll_nl.c
@@ -0,0 +1,164 @@
+// SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
+/* Do not edit directly, auto-generated from: */
+/* Documentation/netlink/specs/dpll.yaml */
+/* YNL-GEN kernel source */
+
+#include <net/netlink.h>
+#include <net/genetlink.h>
+
+#include "dpll_nl.h"
+
+#include <uapi/linux/dpll.h>
+
+/* Common nested types */
+const struct nla_policy dpll_pin_parent_device_nl_policy[DPLL_A_PIN_PHASE_OFFSET + 1] = {
+ [DPLL_A_PIN_PARENT_ID] = { .type = NLA_U32, },
+ [DPLL_A_PIN_DIRECTION] = NLA_POLICY_RANGE(NLA_U32, 1, 2),
+ [DPLL_A_PIN_PRIO] = { .type = NLA_U32, },
+ [DPLL_A_PIN_STATE] = NLA_POLICY_RANGE(NLA_U32, 1, 3),
+ [DPLL_A_PIN_PHASE_OFFSET] = { .type = NLA_S64, },
+};
+
+const struct nla_policy dpll_pin_parent_pin_nl_policy[DPLL_A_PIN_STATE + 1] = {
+ [DPLL_A_PIN_PARENT_ID] = { .type = NLA_U32, },
+ [DPLL_A_PIN_STATE] = NLA_POLICY_RANGE(NLA_U32, 1, 3),
+};
+
+/* DPLL_CMD_DEVICE_ID_GET - do */
+static const struct nla_policy dpll_device_id_get_nl_policy[DPLL_A_TYPE + 1] = {
+ [DPLL_A_MODULE_NAME] = { .type = NLA_NUL_STRING, },
+ [DPLL_A_CLOCK_ID] = { .type = NLA_U64, },
+ [DPLL_A_TYPE] = NLA_POLICY_RANGE(NLA_U32, 1, 2),
+};
+
+/* DPLL_CMD_DEVICE_GET - do */
+static const struct nla_policy dpll_device_get_nl_policy[DPLL_A_ID + 1] = {
+ [DPLL_A_ID] = { .type = NLA_U32, },
+};
+
+/* DPLL_CMD_DEVICE_SET - do */
+static const struct nla_policy dpll_device_set_nl_policy[DPLL_A_ID + 1] = {
+ [DPLL_A_ID] = { .type = NLA_U32, },
+};
+
+/* DPLL_CMD_PIN_ID_GET - do */
+static const struct nla_policy dpll_pin_id_get_nl_policy[DPLL_A_PIN_TYPE + 1] = {
+ [DPLL_A_PIN_MODULE_NAME] = { .type = NLA_NUL_STRING, },
+ [DPLL_A_PIN_CLOCK_ID] = { .type = NLA_U64, },
+ [DPLL_A_PIN_BOARD_LABEL] = { .type = NLA_NUL_STRING, },
+ [DPLL_A_PIN_PANEL_LABEL] = { .type = NLA_NUL_STRING, },
+ [DPLL_A_PIN_PACKAGE_LABEL] = { .type = NLA_NUL_STRING, },
+ [DPLL_A_PIN_TYPE] = NLA_POLICY_RANGE(NLA_U32, 1, 5),
+};
+
+/* DPLL_CMD_PIN_GET - do */
+static const struct nla_policy dpll_pin_get_do_nl_policy[DPLL_A_PIN_ID + 1] = {
+ [DPLL_A_PIN_ID] = { .type = NLA_U32, },
+};
+
+/* DPLL_CMD_PIN_GET - dump */
+static const struct nla_policy dpll_pin_get_dump_nl_policy[DPLL_A_PIN_ID + 1] = {
+ [DPLL_A_PIN_ID] = { .type = NLA_U32, },
+};
+
+/* DPLL_CMD_PIN_SET - do */
+static const struct nla_policy dpll_pin_set_nl_policy[DPLL_A_PIN_PHASE_ADJUST + 1] = {
+ [DPLL_A_PIN_ID] = { .type = NLA_U32, },
+ [DPLL_A_PIN_FREQUENCY] = { .type = NLA_U64, },
+ [DPLL_A_PIN_DIRECTION] = NLA_POLICY_RANGE(NLA_U32, 1, 2),
+ [DPLL_A_PIN_PRIO] = { .type = NLA_U32, },
+ [DPLL_A_PIN_STATE] = NLA_POLICY_RANGE(NLA_U32, 1, 3),
+ [DPLL_A_PIN_PARENT_DEVICE] = NLA_POLICY_NESTED(dpll_pin_parent_device_nl_policy),
+ [DPLL_A_PIN_PARENT_PIN] = NLA_POLICY_NESTED(dpll_pin_parent_pin_nl_policy),
+ [DPLL_A_PIN_PHASE_ADJUST] = { .type = NLA_S32, },
+};
+
+/* Ops table for dpll */
+static const struct genl_split_ops dpll_nl_ops[] = {
+ {
+ .cmd = DPLL_CMD_DEVICE_ID_GET,
+ .pre_doit = dpll_lock_doit,
+ .doit = dpll_nl_device_id_get_doit,
+ .post_doit = dpll_unlock_doit,
+ .policy = dpll_device_id_get_nl_policy,
+ .maxattr = DPLL_A_TYPE,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DPLL_CMD_DEVICE_GET,
+ .pre_doit = dpll_pre_doit,
+ .doit = dpll_nl_device_get_doit,
+ .post_doit = dpll_post_doit,
+ .policy = dpll_device_get_nl_policy,
+ .maxattr = DPLL_A_ID,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DPLL_CMD_DEVICE_GET,
+ .start = dpll_lock_dumpit,
+ .dumpit = dpll_nl_device_get_dumpit,
+ .done = dpll_unlock_dumpit,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DUMP,
+ },
+ {
+ .cmd = DPLL_CMD_DEVICE_SET,
+ .pre_doit = dpll_pre_doit,
+ .doit = dpll_nl_device_set_doit,
+ .post_doit = dpll_post_doit,
+ .policy = dpll_device_set_nl_policy,
+ .maxattr = DPLL_A_ID,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DPLL_CMD_PIN_ID_GET,
+ .pre_doit = dpll_lock_doit,
+ .doit = dpll_nl_pin_id_get_doit,
+ .post_doit = dpll_unlock_doit,
+ .policy = dpll_pin_id_get_nl_policy,
+ .maxattr = DPLL_A_PIN_TYPE,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DPLL_CMD_PIN_GET,
+ .pre_doit = dpll_pin_pre_doit,
+ .doit = dpll_nl_pin_get_doit,
+ .post_doit = dpll_pin_post_doit,
+ .policy = dpll_pin_get_do_nl_policy,
+ .maxattr = DPLL_A_PIN_ID,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DPLL_CMD_PIN_GET,
+ .start = dpll_lock_dumpit,
+ .dumpit = dpll_nl_pin_get_dumpit,
+ .done = dpll_unlock_dumpit,
+ .policy = dpll_pin_get_dump_nl_policy,
+ .maxattr = DPLL_A_PIN_ID,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DUMP,
+ },
+ {
+ .cmd = DPLL_CMD_PIN_SET,
+ .pre_doit = dpll_pin_pre_doit,
+ .doit = dpll_nl_pin_set_doit,
+ .post_doit = dpll_pin_post_doit,
+ .policy = dpll_pin_set_nl_policy,
+ .maxattr = DPLL_A_PIN_PHASE_ADJUST,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+};
+
+static const struct genl_multicast_group dpll_nl_mcgrps[] = {
+ [DPLL_NLGRP_MONITOR] = { "monitor", },
+};
+
+struct genl_family dpll_nl_family __ro_after_init = {
+ .name = DPLL_FAMILY_NAME,
+ .version = DPLL_FAMILY_VERSION,
+ .netnsok = true,
+ .parallel_ops = true,
+ .module = THIS_MODULE,
+ .split_ops = dpll_nl_ops,
+ .n_split_ops = ARRAY_SIZE(dpll_nl_ops),
+ .mcgrps = dpll_nl_mcgrps,
+ .n_mcgrps = ARRAY_SIZE(dpll_nl_mcgrps),
+};
diff --git a/drivers/dpll/dpll_nl.h b/drivers/dpll/dpll_nl.h
new file mode 100644
index 000000000000..92d4c9c4f788
--- /dev/null
+++ b/drivers/dpll/dpll_nl.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
+/* Do not edit directly, auto-generated from: */
+/* Documentation/netlink/specs/dpll.yaml */
+/* YNL-GEN kernel header */
+
+#ifndef _LINUX_DPLL_GEN_H
+#define _LINUX_DPLL_GEN_H
+
+#include <net/netlink.h>
+#include <net/genetlink.h>
+
+#include <uapi/linux/dpll.h>
+
+/* Common nested types */
+extern const struct nla_policy dpll_pin_parent_device_nl_policy[DPLL_A_PIN_PHASE_OFFSET + 1];
+extern const struct nla_policy dpll_pin_parent_pin_nl_policy[DPLL_A_PIN_STATE + 1];
+
+int dpll_lock_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info);
+int dpll_pre_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info);
+int dpll_pin_pre_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info);
+void
+dpll_unlock_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info);
+void
+dpll_post_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info);
+void
+dpll_pin_post_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
+ struct genl_info *info);
+int dpll_lock_dumpit(struct netlink_callback *cb);
+int dpll_unlock_dumpit(struct netlink_callback *cb);
+
+int dpll_nl_device_id_get_doit(struct sk_buff *skb, struct genl_info *info);
+int dpll_nl_device_get_doit(struct sk_buff *skb, struct genl_info *info);
+int dpll_nl_device_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);
+int dpll_nl_device_set_doit(struct sk_buff *skb, struct genl_info *info);
+int dpll_nl_pin_id_get_doit(struct sk_buff *skb, struct genl_info *info);
+int dpll_nl_pin_get_doit(struct sk_buff *skb, struct genl_info *info);
+int dpll_nl_pin_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);
+int dpll_nl_pin_set_doit(struct sk_buff *skb, struct genl_info *info);
+
+enum {
+ DPLL_NLGRP_MONITOR,
+};
+
+extern struct genl_family dpll_nl_family;
+
+#endif /* _LINUX_DPLL_GEN_H */
diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
index 87283e4a4607..959ec5269376 100644
--- a/drivers/i3c/master.c
+++ b/drivers/i3c/master.c
@@ -22,6 +22,7 @@
static DEFINE_IDR(i3c_bus_idr);
static DEFINE_MUTEX(i3c_core_lock);
static int __i3c_first_dynamic_bus_num;
+static BLOCKING_NOTIFIER_HEAD(i3c_bus_notifier);
/**
* i3c_bus_maintenance_lock - Lock the bus for a maintenance operation
@@ -453,6 +454,36 @@ static int i3c_bus_init(struct i3c_bus *i3cbus, struct device_node *np)
return 0;
}
+void i3c_for_each_bus_locked(int (*fn)(struct i3c_bus *bus, void *data),
+ void *data)
+{
+ struct i3c_bus *bus;
+ int id;
+
+ mutex_lock(&i3c_core_lock);
+ idr_for_each_entry(&i3c_bus_idr, bus, id)
+ fn(bus, data);
+ mutex_unlock(&i3c_core_lock);
+}
+EXPORT_SYMBOL_GPL(i3c_for_each_bus_locked);
+
+int i3c_register_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_register(&i3c_bus_notifier, nb);
+}
+EXPORT_SYMBOL_GPL(i3c_register_notifier);
+
+int i3c_unregister_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_unregister(&i3c_bus_notifier, nb);
+}
+EXPORT_SYMBOL_GPL(i3c_unregister_notifier);
+
+static void i3c_bus_notify(struct i3c_bus *bus, unsigned int action)
+{
+ blocking_notifier_call_chain(&i3c_bus_notifier, action, bus);
+}
+
static const char * const i3c_bus_mode_strings[] = {
[I3C_BUS_MODE_PURE] = "pure",
[I3C_BUS_MODE_MIXED_FAST] = "mixed-fast",
@@ -2682,6 +2713,8 @@ int i3c_master_register(struct i3c_master_controller *master,
if (ret)
goto err_del_dev;
+ i3c_bus_notify(i3cbus, I3C_NOTIFY_BUS_ADD);
+
/*
* We're done initializing the bus and the controller, we can now
* register I3C devices discovered during the initial DAA.
@@ -2714,6 +2747,8 @@ EXPORT_SYMBOL_GPL(i3c_master_register);
*/
void i3c_master_unregister(struct i3c_master_controller *master)
{
+ i3c_bus_notify(&master->bus, I3C_NOTIFY_BUS_REMOVE);
+
i3c_master_i2c_adapter_cleanup(master);
i3c_master_unregister_i3c_devs(master);
i3c_master_bus_cleanup(master);
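
The notifier chain added above is how other subsystems learn about I3C bus
hotplug. A minimal consumer sketch (the "example_" names are hypothetical):

	static int example_i3c_bus_event(struct notifier_block *nb,
					 unsigned long action, void *data)
	{
		/* "data" points at the struct i3c_bus being added or removed */
		switch (action) {
		case I3C_NOTIFY_BUS_ADD:
			/* set up per-bus state */
			break;
		case I3C_NOTIFY_BUS_REMOVE:
			/* tear down per-bus state */
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block example_i3c_nb = {
		.notifier_call = example_i3c_bus_event,
	};

	/* At init: i3c_register_notifier(&example_i3c_nb); buses that already
	 * exist can be walked with i3c_for_each_bus_locked().
	 */
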
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 555629b798b9..e0e0451777ee 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -24,6 +24,7 @@
#include <linux/mlx5/vport.h>
#include <linux/mlx5/fs.h>
#include <linux/mlx5/eswitch.h>
+#include <linux/mlx5/driver.h>
#include <linux/list.h>
#include <rdma/ib_smi.h>
#include <rdma/ib_umem_odp.h>
@@ -3175,6 +3176,13 @@ static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev,
lockdep_assert_held(&mlx5_ib_multiport_mutex);
+ mlx5_core_mp_event_replay(ibdev->mdev,
+ MLX5_DRIVER_EVENT_AFFILIATION_REMOVED,
+ NULL);
+ mlx5_core_mp_event_replay(mpi->mdev,
+ MLX5_DRIVER_EVENT_AFFILIATION_REMOVED,
+ NULL);
+
mlx5_ib_cleanup_cong_debugfs(ibdev, port_num);
spin_lock(&port->mp.mpi_lock);
@@ -3226,6 +3234,7 @@ static bool mlx5_ib_bind_slave_port(struct mlx5_ib_dev *ibdev,
struct mlx5_ib_multiport_info *mpi)
{
u32 port_num = mlx5_core_native_port_num(mpi->mdev) - 1;
+ u64 key;
int err;
lockdep_assert_held(&mlx5_ib_multiport_mutex);
@@ -3254,6 +3263,14 @@ static bool mlx5_ib_bind_slave_port(struct mlx5_ib_dev *ibdev,
mlx5_ib_init_cong_debugfs(ibdev, port_num);
+ key = ibdev->ib_dev.index;
+ mlx5_core_mp_event_replay(mpi->mdev,
+ MLX5_DRIVER_EVENT_AFFILIATION_DONE,
+ &key);
+ mlx5_core_mp_event_replay(ibdev->mdev,
+ MLX5_DRIVER_EVENT_AFFILIATION_DONE,
+ &key);
+
return true;
unbind:
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index ed25061fac62..7f84d9866cef 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -488,7 +488,7 @@ poll_more:
if (unlikely(ib_req_notify_cq(priv->recv_cq,
IB_CQ_NEXT_COMP |
IB_CQ_REPORT_MISSED_EVENTS)) &&
- napi_reschedule(napi))
+ napi_schedule(napi))
goto poll_more;
}
@@ -518,7 +518,7 @@ poll_more:
napi_complete(napi);
if (unlikely(ib_req_notify_cq(priv->send_cq, IB_CQ_NEXT_COMP |
IB_CQ_REPORT_MISSED_EVENTS)) &&
- napi_reschedule(napi))
+ napi_schedule(napi))
goto poll_more;
}
return n < 0 ? 0 : n;
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 44eeb5d61ba9..af0da4bb429b 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -448,6 +448,15 @@ config NLMON
diagnostics, etc. This is mostly intended for developers or support
to debug netlink issues. If unsure, say N.
+config NETKIT
+ bool "BPF-programmable network device"
+ depends on BPF_SYSCALL
+ help
+ The netkit device is a virtual networking device where BPF programs
+ can be attached to the transmission routine of the device(s) in order to
+ implement the driver's internal logic. The device can be configured
+ to operate in L3 or L2 mode. If unsure, say N.
+
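
A minimal sketch of such a program; the "netkit/primary" section name and the
"return 0 == pass" convention are assumptions about the userspace tooling, not
something this Kconfig entry defines:

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("netkit/primary")
	int nk_xmit(struct __sk_buff *skb)
	{
		/* returning 0 lets the packet continue through the xmit path */
		return 0;
	}

	char _license[] SEC("license") = "GPL";
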
config NET_VRF
tristate "Virtual Routing and Forwarding (Lite)"
depends on IP_MULTIPLE_TABLES
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index e26f98f897c5..7cab36f94782 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -22,6 +22,7 @@ obj-$(CONFIG_MDIO) += mdio.o
obj-$(CONFIG_NET) += loopback.o
obj-$(CONFIG_NETDEV_LEGACY_INIT) += Space.o
obj-$(CONFIG_NETCONSOLE) += netconsole.o
+obj-$(CONFIG_NETKIT) += netkit.o
obj-y += phy/
obj-y += pse-pd/
obj-y += mdio/
@@ -45,7 +46,6 @@ obj-$(CONFIG_MHI_NET) += mhi_net.o
# Networking Drivers
#
obj-$(CONFIG_ARCNET) += arcnet/
-obj-$(CONFIG_DEV_APPLETALK) += appletalk/
obj-$(CONFIG_CAIF) += caif/
obj-$(CONFIG_CAN) += can/
obj-$(CONFIG_NET_DSA) += dsa/
diff --git a/drivers/net/Space.c b/drivers/net/Space.c
index 83214e2e70ab..dc50797a2ed0 100644
--- a/drivers/net/Space.c
+++ b/drivers/net/Space.c
@@ -247,12 +247,6 @@ static int __init net_olddevs_init(void)
for (num = 0; num < 8; ++num)
ethif_probe2(num);
-#ifdef CONFIG_COPS
- cops_probe(0);
- cops_probe(1);
- cops_probe(2);
-#endif
-
return 0;
}
diff --git a/drivers/net/appletalk/Kconfig b/drivers/net/appletalk/Kconfig
deleted file mode 100644
index b38ed52b82bc..000000000000
--- a/drivers/net/appletalk/Kconfig
+++ /dev/null
@@ -1,102 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-#
-# Appletalk driver configuration
-#
-config ATALK
- tristate "Appletalk protocol support"
- select LLC
- help
- AppleTalk is the protocol that Apple computers can use to communicate
- on a network. If your Linux box is connected to such a network and you
- wish to connect to it, say Y. You will need to use the netatalk package
- so that your Linux box can act as a print and file server for Macs as
- well as access AppleTalk printers. Check out
- <http://www.zettabyte.net/netatalk/> on the WWW for details.
- EtherTalk is the name used for AppleTalk over Ethernet and the
- cheaper and slower LocalTalk is AppleTalk over a proprietary Apple
- network using serial links. EtherTalk and LocalTalk are fully
- supported by Linux.
-
- General information about how to connect Linux, Windows machines and
- Macs is on the WWW at <http://www.eats.com/linux_mac_win.html>. The
- NET3-4-HOWTO, available from
- <http://www.tldp.org/docs.html#howto>, contains valuable
- information as well.
-
- To compile this driver as a module, choose M here: the module will be
- called appletalk. You almost certainly want to compile it as a
- module so you can restart your AppleTalk stack without rebooting
- your machine. I hear that the GNU boycott of Apple is over, so
- even politically correct people are allowed to say Y here.
-
-config DEV_APPLETALK
- tristate "Appletalk interfaces support"
- depends on ATALK
- help
- AppleTalk is the protocol that Apple computers can use to communicate
- on a network. If your Linux box is connected to such a network, and wish
- to do IP over it, or you have a LocalTalk card and wish to use it to
- connect to the AppleTalk network, say Y.
-
-
-config COPS
- tristate "COPS LocalTalk PC support"
- depends on DEV_APPLETALK && ISA
- depends on NETDEVICES
- select NETDEV_LEGACY_INIT
- help
- This allows you to use COPS AppleTalk cards to connect to LocalTalk
- networks. You also need version 1.3.3 or later of the netatalk
- package. This driver is experimental, which means that it may not
- work. This driver will only work if you choose "AppleTalk DDP"
- networking support, above.
- Please read the file
- <file:Documentation/networking/device_drivers/appletalk/cops.rst>.
-
-config COPS_DAYNA
- bool "Dayna firmware support"
- depends on COPS
- help
- Support COPS compatible cards with Dayna style firmware (Dayna
- DL2000/ Daynatalk/PC (half length), COPS LT-95, Farallon PhoneNET PC
- III, Farallon PhoneNET PC II).
-
-config COPS_TANGENT
- bool "Tangent firmware support"
- depends on COPS
- help
- Support COPS compatible cards with Tangent style firmware (Tangent
- ATB_II, Novell NL-1000, Daystar Digital LT-200.
-
-config IPDDP
- tristate "Appletalk-IP driver support"
- depends on DEV_APPLETALK && ATALK
- help
- This allows IP networking for users who only have AppleTalk
- networking available. This feature is experimental. With this
- driver, you can encapsulate IP inside AppleTalk (e.g. if your Linux
- box is stuck on an AppleTalk only network) or decapsulate (e.g. if
- you want your Linux box to act as an Internet gateway for a zoo of
- AppleTalk connected Macs). Please see the file
- <file:Documentation/networking/ipddp.rst> for more information.
-
- If you say Y here, the AppleTalk-IP support will be compiled into
- the kernel. In this case, you can either use encapsulation or
- decapsulation, but not both. With the following two questions, you
- decide which one you want.
-
- To compile the AppleTalk-IP support as a module, choose M here: the
- module will be called ipddp.
- In this case, you will be able to use both encapsulation and
- decapsulation simultaneously, by loading two copies of the module
- and specifying different values for the module option ipddp_mode.
-
-config IPDDP_ENCAP
- bool "IP to Appletalk-IP Encapsulation support"
- depends on IPDDP
- help
- If you say Y here, the AppleTalk-IP code will be able to encapsulate
- IP packets inside AppleTalk frames; this is useful if your Linux box
- is stuck on an AppleTalk network (which hopefully contains a
- decapsulator somewhere). Please see
- <file:Documentation/networking/ipddp.rst> for more information.
diff --git a/drivers/net/appletalk/Makefile b/drivers/net/appletalk/Makefile
deleted file mode 100644
index 6db2943ce5d6..000000000000
--- a/drivers/net/appletalk/Makefile
+++ /dev/null
@@ -1,7 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0-only
-#
-# Makefile for drivers/net/appletalk
-#
-
-obj-$(CONFIG_IPDDP) += ipddp.o
-obj-$(CONFIG_COPS) += cops.o
diff --git a/drivers/net/appletalk/cops.c b/drivers/net/appletalk/cops.c
deleted file mode 100644
index 97f254bdbb16..000000000000
--- a/drivers/net/appletalk/cops.c
+++ /dev/null
@@ -1,1005 +0,0 @@
-/* cops.c: LocalTalk driver for Linux.
- *
- * Authors:
- * - Jay Schulist <jschlst@samba.org>
- *
- * With more than a little help from;
- * - Alan Cox <alan@lxorguk.ukuu.org.uk>
- *
- * Derived from:
- * - skeleton.c: A network driver outline for linux.
- * Written 1993-94 by Donald Becker.
- * - ltpc.c: A driver for the LocalTalk PC card.
- * Written by Bradford W. Johnson.
- *
- * Copyright 1993 United States Government as represented by the
- * Director, National Security Agency.
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- *
- * Changes:
- * 19970608 Alan Cox Allowed dual card type support
- * Can set board type in insmod
- * Hooks for cops_setup routine
- * (not yet implemented).
- * 19971101 Jay Schulist Fixes for multiple lt* devices.
- * 19980607 Steven Hirsch Fixed the badly broken support
- * for Tangent type cards. Only
- * tested on Daystar LT200. Some
- * cleanup of formatting and program
- * logic. Added emacs 'local-vars'
- * setup for Jay's brace style.
- * 20000211 Alan Cox Cleaned up for softnet
- */
-
-static const char *version =
-"cops.c:v0.04 6/7/98 Jay Schulist <jschlst@samba.org>\n";
-/*
- * Sources:
- * COPS Localtalk SDK. This provides almost all of the information
- * needed.
- */
-
-/*
- * insmod/modprobe configurable stuff.
- * - IO Port, choose one your card supports or 0 if you dare.
- * - IRQ, also choose one your card supports or nothing and let
- * the driver figure it out.
- */
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/fcntl.h>
-#include <linux/interrupt.h>
-#include <linux/ptrace.h>
-#include <linux/ioport.h>
-#include <linux/in.h>
-#include <linux/string.h>
-#include <linux/errno.h>
-#include <linux/init.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/skbuff.h>
-#include <linux/if_arp.h>
-#include <linux/if_ltalk.h>
-#include <linux/delay.h> /* For udelay() */
-#include <linux/atalk.h>
-#include <linux/spinlock.h>
-#include <linux/bitops.h>
-#include <linux/jiffies.h>
-
-#include <net/Space.h>
-
-#include <asm/io.h>
-#include <asm/dma.h>
-
-#include "cops.h" /* Our Stuff */
-#include "cops_ltdrv.h" /* Firmware code for Tangent type cards. */
-#include "cops_ffdrv.h" /* Firmware code for Dayna type cards. */
-
-/*
- * The name of the card. Is used for messages and in the requests for
- * io regions, irqs and dma channels
- */
-
-static const char *cardname = "cops";
-
-#ifdef CONFIG_COPS_DAYNA
-static int board_type = DAYNA; /* Module exported */
-#else
-static int board_type = TANGENT;
-#endif
-
-static int io = 0x240; /* Default IO for Dayna */
-static int irq = 5; /* Default IRQ */
-
-/*
- * COPS Autoprobe information.
- * Right now if port address is right but IRQ is not 5 this will
- * return a 5 no matter what since we will still get a status response.
- * Need one more additional check to narrow down after we have gotten
- * the ioaddr. But since only other possible IRQs is 3 and 4 so no real
- * hurry on this. I *STRONGLY* recommend using IRQ 5 for your card with
- * this driver.
- *
- * This driver has 2 modes and they are: Dayna mode and Tangent mode.
- * Each mode corresponds with the type of card. It has been found
- * that there are 2 main types of cards and all other cards are
- * the same and just have different names or only have minor differences
- * such as more IO ports. As this driver is tested it will
- * become more clear on exactly what cards are supported. The driver
- * defaults to using Dayna mode. To change the drivers mode, simply
- * select Dayna or Tangent mode when configuring the kernel.
- *
- * This driver should support:
- * TANGENT driver mode:
- * Tangent ATB-II, Novell NL-1000, Daystar Digital LT-200,
- * COPS LT-1
- * DAYNA driver mode:
- * Dayna DL2000/DaynaTalk PC (Half Length), COPS LT-95,
- * Farallon PhoneNET PC III, Farallon PhoneNET PC II
- * Other cards possibly supported mode unknown though:
- * Dayna DL2000 (Full length), COPS LT/M (Micro-Channel)
- *
- * Cards NOT supported by this driver but supported by the ltpc.c
- * driver written by Bradford W. Johnson <johns393@maroon.tc.umn.edu>
- * Farallon PhoneNET PC
- * Original Apple LocalTalk PC card
- *
- * N.B.
- *
- * The Daystar Digital LT200 boards do not support interrupt-driven
- * IO. You must specify 'irq=0xff' as a module parameter to invoke
- * polled mode. I also believe that the port probing logic is quite
- * dangerous at best and certainly hopeless for a polled card. Best to
- * specify both. - Steve H.
- *
- */
-
-/*
- * Zero terminated list of IO ports to probe.
- */
-
-static unsigned int ports[] = {
- 0x240, 0x340, 0x200, 0x210, 0x220, 0x230, 0x260,
- 0x2A0, 0x300, 0x310, 0x320, 0x330, 0x350, 0x360,
- 0
-};
-
-/*
- * Zero terminated list of IRQ ports to probe.
- */
-
-static int cops_irqlist[] = {
- 5, 4, 3, 0
-};
-
-static struct timer_list cops_timer;
-static struct net_device *cops_timer_dev;
-
-/* use 0 for production, 1 for verification, 2 for debug, 3 for verbose debug */
-#ifndef COPS_DEBUG
-#define COPS_DEBUG 1
-#endif
-static unsigned int cops_debug = COPS_DEBUG;
-
-/* The number of low I/O ports used by the card. */
-#define COPS_IO_EXTENT 8
-
-/* Information that needs to be kept for each board. */
-
-struct cops_local
-{
- int board; /* Holds what board type is. */
- int nodeid; /* Set to 1 once have nodeid. */
- unsigned char node_acquire; /* Node ID when acquired. */
- struct atalk_addr node_addr; /* Full node address */
- spinlock_t lock; /* RX/TX lock */
-};
-
-/* Index to functions, as function prototypes. */
-static int cops_probe1 (struct net_device *dev, int ioaddr);
-static int cops_irq (int ioaddr, int board);
-
-static int cops_open (struct net_device *dev);
-static int cops_jumpstart (struct net_device *dev);
-static void cops_reset (struct net_device *dev, int sleep);
-static void cops_load (struct net_device *dev);
-static int cops_nodeid (struct net_device *dev, int nodeid);
-
-static irqreturn_t cops_interrupt (int irq, void *dev_id);
-static void cops_poll(struct timer_list *t);
-static void cops_timeout(struct net_device *dev, unsigned int txqueue);
-static void cops_rx (struct net_device *dev);
-static netdev_tx_t cops_send_packet (struct sk_buff *skb,
- struct net_device *dev);
-static void set_multicast_list (struct net_device *dev);
-static int cops_ioctl (struct net_device *dev, struct ifreq *rq, int cmd);
-static int cops_close (struct net_device *dev);
-
-static void cleanup_card(struct net_device *dev)
-{
- if (dev->irq)
- free_irq(dev->irq, dev);
- release_region(dev->base_addr, COPS_IO_EXTENT);
-}
-
-/*
- * Check for a network adaptor of this type, and return '0' iff one exists.
- * If dev->base_addr == 0, probe all likely locations.
- * If dev->base_addr in [1..0x1ff], always return failure.
- * otherwise go with what we pass in.
- */
-struct net_device * __init cops_probe(int unit)
-{
- struct net_device *dev;
- unsigned *port;
- int base_addr;
- int err = 0;
-
- dev = alloc_ltalkdev(sizeof(struct cops_local));
- if (!dev)
- return ERR_PTR(-ENOMEM);
-
- if (unit >= 0) {
- sprintf(dev->name, "lt%d", unit);
- netdev_boot_setup_check(dev);
- irq = dev->irq;
- base_addr = dev->base_addr;
- } else {
- base_addr = dev->base_addr = io;
- }
-
- if (base_addr > 0x1ff) { /* Check a single specified location. */
- err = cops_probe1(dev, base_addr);
- } else if (base_addr != 0) { /* Don't probe at all. */
- err = -ENXIO;
- } else {
- /* FIXME Does this really work for cards which generate irq?
- * It's definitely N.G. for polled Tangent. sh
- * Dayna cards don't autoprobe well at all, but if your card is
- * at IRQ 5 & IO 0x240 we find it every time. ;) JS
- */
- for (port = ports; *port && cops_probe1(dev, *port) < 0; port++)
- ;
- if (!*port)
- err = -ENODEV;
- }
- if (err)
- goto out;
- err = register_netdev(dev);
- if (err)
- goto out1;
- return dev;
-out1:
- cleanup_card(dev);
-out:
- free_netdev(dev);
- return ERR_PTR(err);
-}
-
-static const struct net_device_ops cops_netdev_ops = {
- .ndo_open = cops_open,
- .ndo_stop = cops_close,
- .ndo_start_xmit = cops_send_packet,
- .ndo_tx_timeout = cops_timeout,
- .ndo_do_ioctl = cops_ioctl,
- .ndo_set_rx_mode = set_multicast_list,
-};
-
-/*
- * This is the real probe routine. Linux has a history of friendly device
- * probes on the ISA bus. A good device probe avoids doing writes, and
- * verifies that the correct device exists and functions.
- */
-static int __init cops_probe1(struct net_device *dev, int ioaddr)
-{
- struct cops_local *lp;
- static unsigned version_printed;
- int board = board_type;
- int retval;
-
- if(cops_debug && version_printed++ == 0)
- printk("%s", version);
-
- /* Grab the region so no one else tries to probe our ioports. */
- if (!request_region(ioaddr, COPS_IO_EXTENT, dev->name))
- return -EBUSY;
-
- /*
- * Since this board has jumpered interrupts, allocate the interrupt
- * vector now. There is no point in waiting since no other device
- * can use the interrupt, and this marks the irq as busy. Jumpered
- * interrupts are typically not reported by the boards, and we must
- * use AutoIRQ to find them.
- */
- dev->irq = irq;
- switch (dev->irq)
- {
- case 0:
- /* COPS AutoIRQ routine */
- dev->irq = cops_irq(ioaddr, board);
- if (dev->irq)
- break;
- fallthrough; /* Fall through when no IRQ is found on this port */
- case 1:
- retval = -EINVAL;
- goto err_out;
-
- /* Fixup for users that don't know that IRQ 2 is really
- * IRQ 9, or don't know which one to set.
- */
- case 2:
- dev->irq = 9;
- break;
-
- /* Polled operation requested. Although irq of zero passed as
- * a parameter tells the init routines to probe, we'll
- * overload it to denote polled operation at runtime.
- */
- case 0xff:
- dev->irq = 0;
- break;
-
- default:
- break;
- }
-
- dev->base_addr = ioaddr;
-
- /* Reserve any actual interrupt. */
- if (dev->irq) {
- retval = request_irq(dev->irq, cops_interrupt, 0, dev->name, dev);
- if (retval)
- goto err_out;
- }
-
- lp = netdev_priv(dev);
- spin_lock_init(&lp->lock);
-
- /* Copy local board variable to lp struct. */
- lp->board = board;
-
- dev->netdev_ops = &cops_netdev_ops;
- dev->watchdog_timeo = HZ * 2;
-
-
- /* Tell the user where the card is and what mode we're in. */
- if(board==DAYNA)
- printk("%s: %s at %#3x, using IRQ %d, in Dayna mode.\n",
- dev->name, cardname, ioaddr, dev->irq);
- if(board==TANGENT) {
- if(dev->irq)
- printk("%s: %s at %#3x, IRQ %d, in Tangent mode\n",
- dev->name, cardname, ioaddr, dev->irq);
- else
- printk("%s: %s at %#3x, using polled IO, in Tangent mode.\n",
- dev->name, cardname, ioaddr);
-
- }
- return 0;
-
-err_out:
- release_region(ioaddr, COPS_IO_EXTENT);
- return retval;
-}
-
-static int __init cops_irq (int ioaddr, int board)
-{ /*
- * This does not actually detect which IRQ line the card uses. We just
- * assume that once we get a correct status response, the IRQ currently
- * being tried is the right one.
- * This really just verifies the IO port but since we only have access
- * to such a small number of IRQs (5, 4, 3) this is not bad.
- * This will probably not work for more than one card.
- */
- int irqaddr=0;
- int i, x, status;
-
- if(board==DAYNA)
- {
- outb(0, ioaddr+DAYNA_RESET);
- inb(ioaddr+DAYNA_RESET);
- mdelay(333);
- }
- if(board==TANGENT)
- {
- inb(ioaddr);
- outb(0, ioaddr);
- outb(0, ioaddr+TANG_RESET);
- }
-
- for(i=0; cops_irqlist[i] !=0; i++)
- {
- irqaddr = cops_irqlist[i];
- for(x = 0xFFFF; x>0; x --) /* wait for response */
- {
- if(board==DAYNA)
- {
- status = (inb(ioaddr+DAYNA_CARD_STATUS)&3);
- if(status == 1)
- return irqaddr;
- }
- if(board==TANGENT)
- {
- if((inb(ioaddr+TANG_CARD_STATUS)& TANG_TX_READY) !=0)
- return irqaddr;
- }
- }
- }
- return 0; /* no IRQ found */
-}
-
-/*
- * Open/initialize the board. This is called (in the current kernel)
- * sometime after booting when the 'ifconfig' program is run.
- */
-static int cops_open(struct net_device *dev)
-{
- struct cops_local *lp = netdev_priv(dev);
-
- if(dev->irq==0)
- {
- /*
- * I don't know if the Dayna-style boards support polled
- * operation. For now, only allow it for Tangent.
- */
- if(lp->board==TANGENT) /* Poll 20 times per second */
- {
- cops_timer_dev = dev;
- timer_setup(&cops_timer, cops_poll, 0);
- cops_timer.expires = jiffies + HZ/20;
- add_timer(&cops_timer);
- }
- else
- {
- printk(KERN_WARNING "%s: No irq line set\n", dev->name);
- return -EAGAIN;
- }
- }
-
- cops_jumpstart(dev); /* Start the card up. */
-
- netif_start_queue(dev);
- return 0;
-}
-
-/*
- * This allows for a dynamic start/restart of the entire card.
- */
-static int cops_jumpstart(struct net_device *dev)
-{
- struct cops_local *lp = netdev_priv(dev);
-
- /*
- * Once the card has the firmware loaded and has acquired
- * the nodeid, if it is reset it will lose it all.
- */
- cops_reset(dev,1); /* Need to reset card before load firmware. */
- cops_load(dev); /* Load the firmware. */
-
- /*
- * If atalkd already gave us a nodeid we will use that
- * one again, else we wait for atalkd to give us a nodeid
- * in cops_ioctl. This may cause a problem if someone steals
- * our nodeid while we are resetting.
- */
- if(lp->nodeid == 1)
- cops_nodeid(dev,lp->node_acquire);
-
- return 0;
-}
-
-static void tangent_wait_reset(int ioaddr)
-{
- int timeout=0;
-
- while(timeout++ < 5 && (inb(ioaddr+TANG_CARD_STATUS)&TANG_TX_READY)==0)
- mdelay(1); /* Wait 1 millisecond */
-}
-
-/*
- * Reset the LocalTalk board.
- */
-static void cops_reset(struct net_device *dev, int sleep)
-{
- struct cops_local *lp = netdev_priv(dev);
- int ioaddr=dev->base_addr;
-
- if(lp->board==TANGENT)
- {
- inb(ioaddr); /* Clear request latch. */
- outb(0,ioaddr); /* Clear the TANG_TX_READY flop. */
- outb(0, ioaddr+TANG_RESET); /* Reset the adapter. */
-
- tangent_wait_reset(ioaddr);
- outb(0, ioaddr+TANG_CLEAR_INT);
- }
- if(lp->board==DAYNA)
- {
- outb(0, ioaddr+DAYNA_RESET); /* Assert the reset port */
- inb(ioaddr+DAYNA_RESET); /* Clear the reset */
- if (sleep)
- msleep(333);
- else
- mdelay(333);
- }
-
- netif_wake_queue(dev);
-}
-
-static void cops_load (struct net_device *dev)
-{
- struct ifreq ifr;
- struct ltfirmware *ltf= (struct ltfirmware *)&ifr.ifr_ifru;
- struct cops_local *lp = netdev_priv(dev);
- int ioaddr=dev->base_addr;
- int length, i = 0;
-
- strcpy(ifr.ifr_name,"lt0");
-
- /* Get card's firmware code and do some checks on it. */
-#ifdef CONFIG_COPS_DAYNA
- if(lp->board==DAYNA)
- {
- ltf->length=sizeof(ffdrv_code);
- ltf->data=ffdrv_code;
- }
- else
-#endif
-#ifdef CONFIG_COPS_TANGENT
- if(lp->board==TANGENT)
- {
- ltf->length=sizeof(ltdrv_code);
- ltf->data=ltdrv_code;
- }
- else
-#endif
- {
- printk(KERN_INFO "%s; unsupported board type.\n", dev->name);
- return;
- }
-
- /* Check to make sure firmware is correct length. */
- if(lp->board==DAYNA && ltf->length!=5983)
- {
- printk(KERN_WARNING "%s: Firmware is not length of FFDRV.BIN.\n", dev->name);
- return;
- }
- if(lp->board==TANGENT && ltf->length!=2501)
- {
- printk(KERN_WARNING "%s: Firmware is not length of DRVCODE.BIN.\n", dev->name);
- return;
- }
-
- if(lp->board==DAYNA)
- {
- /*
- * We must wait for a status response
- * with the DAYNA board.
- */
- while(++i<65536)
- {
- if((inb(ioaddr+DAYNA_CARD_STATUS)&3)==1)
- break;
- }
-
- if(i==65536)
- return;
- }
-
- /*
- * Upload the firmware and kick. Byte-by-byte works nicely here.
- */
- i=0;
- length = ltf->length;
- while(length--)
- {
- outb(ltf->data[i], ioaddr);
- i++;
- }
-
- if(cops_debug > 1)
- printk("%s: Uploaded firmware - %d bytes of %d bytes.\n",
- dev->name, i, ltf->length);
-
- if(lp->board==DAYNA) /* Tell Dayna to run the firmware code. */
- outb(1, ioaddr+DAYNA_INT_CARD);
- else /* Tell Tang to run the firmware code. */
- inb(ioaddr);
-
- if(lp->board==TANGENT)
- {
- tangent_wait_reset(ioaddr);
- inb(ioaddr); /* Clear initial ready signal. */
- }
-}
-
-/*
- * Get the LocalTalk Nodeid from the card. We can suggest
- * any nodeid 1-254. The card will try to get that exact
- * address; otherwise we can specify 0 as the nodeid and the card
- * will autoprobe for one.
- */
-static int cops_nodeid (struct net_device *dev, int nodeid)
-{
- struct cops_local *lp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
-
- if(lp->board == DAYNA)
- {
- /* Empty any pending adapter responses. */
- while((inb(ioaddr+DAYNA_CARD_STATUS)&DAYNA_TX_READY)==0)
- {
- outb(0, ioaddr+COPS_CLEAR_INT); /* Clear interrupts. */
- if((inb(ioaddr+DAYNA_CARD_STATUS)&0x03)==DAYNA_RX_REQUEST)
- cops_rx(dev); /* Kick any packets waiting. */
- schedule();
- }
-
- outb(2, ioaddr); /* Output command packet length as 2. */
- outb(0, ioaddr);
- outb(LAP_INIT, ioaddr); /* Send LAP_INIT command byte. */
- outb(nodeid, ioaddr); /* Suggest node address. */
- }
-
- if(lp->board == TANGENT)
- {
- /* Empty any pending adapter responses. */
- while(inb(ioaddr+TANG_CARD_STATUS)&TANG_RX_READY)
- {
- outb(0, ioaddr+COPS_CLEAR_INT); /* Clear interrupt. */
- cops_rx(dev); /* Kick out packets waiting. */
- schedule();
- }
-
- /* Not sure what the Tangent does if the nodeid picked is already in use. */
- if(nodeid == 0) /* Seed. */
- nodeid = jiffies&0xFF; /* Get a random try */
- outb(2, ioaddr); /* Command length LSB */
- outb(0, ioaddr); /* Command length MSB */
- outb(LAP_INIT, ioaddr); /* Send LAP_INIT byte */
- outb(nodeid, ioaddr); /* LAP address hint. */
- outb(0xFF, ioaddr); /* Int. level to use */
- }
-
- lp->node_acquire=0; /* Set nodeid holder to 0. */
- while(lp->node_acquire==0) /* Get *True* nodeid finally. */
- {
- outb(0, ioaddr+COPS_CLEAR_INT); /* Clear any interrupt. */
-
- if(lp->board == DAYNA)
- {
- if((inb(ioaddr+DAYNA_CARD_STATUS)&0x03)==DAYNA_RX_REQUEST)
- cops_rx(dev); /* Grab the nodeid put in lp->node_acquire. */
- }
- if(lp->board == TANGENT)
- {
- if(inb(ioaddr+TANG_CARD_STATUS)&TANG_RX_READY)
- cops_rx(dev); /* Grab the nodeid put in lp->node_acquire. */
- }
- schedule();
- }
-
- if(cops_debug > 1)
- printk(KERN_DEBUG "%s: Node ID %d has been acquired.\n",
- dev->name, lp->node_acquire);
-
- lp->nodeid=1; /* Set got nodeid to 1. */
-
- return 0;
-}
-
-/*
- * Poll the Tangent type cards to see if we have work.
- */
-
-static void cops_poll(struct timer_list *unused)
-{
- int ioaddr, status;
- int boguscount = 0;
- struct net_device *dev = cops_timer_dev;
-
- del_timer(&cops_timer);
-
- if(dev == NULL)
- return; /* We've been downed */
-
- ioaddr = dev->base_addr;
- do {
- status=inb(ioaddr+TANG_CARD_STATUS);
- if(status & TANG_RX_READY)
- cops_rx(dev);
- if(status & TANG_TX_READY)
- netif_wake_queue(dev);
- status = inb(ioaddr+TANG_CARD_STATUS);
- } while((++boguscount < 20) && (status&(TANG_RX_READY|TANG_TX_READY)));
-
- /* poll 20 times per second */
- cops_timer.expires = jiffies + HZ/20;
- add_timer(&cops_timer);
-}
-
-/*
- * The typical workload of the driver:
- * Handle the network interface interrupts.
- */
-static irqreturn_t cops_interrupt(int irq, void *dev_id)
-{
- struct net_device *dev = dev_id;
- struct cops_local *lp;
- int ioaddr, status;
- int boguscount = 0;
-
- ioaddr = dev->base_addr;
- lp = netdev_priv(dev);
-
- if(lp->board==DAYNA)
- {
- do {
- outb(0, ioaddr + COPS_CLEAR_INT);
- status=inb(ioaddr+DAYNA_CARD_STATUS);
- if((status&0x03)==DAYNA_RX_REQUEST)
- cops_rx(dev);
- netif_wake_queue(dev);
- } while(++boguscount < 20);
- }
- else
- {
- do {
- status=inb(ioaddr+TANG_CARD_STATUS);
- if(status & TANG_RX_READY)
- cops_rx(dev);
- if(status & TANG_TX_READY)
- netif_wake_queue(dev);
- status=inb(ioaddr+TANG_CARD_STATUS);
- } while((++boguscount < 20) && (status&(TANG_RX_READY|TANG_TX_READY)));
- }
-
- return IRQ_HANDLED;
-}
-
-/*
- * We have a good packet(s), get it/them out of the buffers.
- */
-static void cops_rx(struct net_device *dev)
-{
- int pkt_len = 0;
- int rsp_type = 0;
- struct sk_buff *skb = NULL;
- struct cops_local *lp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
- int boguscount = 0;
- unsigned long flags;
-
-
- spin_lock_irqsave(&lp->lock, flags);
-
- if(lp->board==DAYNA)
- {
- outb(0, ioaddr); /* Send out Zero length. */
- outb(0, ioaddr);
- outb(DATA_READ, ioaddr); /* Send read command out. */
-
- /* Wait for DMA to turn around. */
- while(++boguscount<1000000)
- {
- barrier();
- if((inb(ioaddr+DAYNA_CARD_STATUS)&0x03)==DAYNA_RX_READY)
- break;
- }
-
- if(boguscount==1000000)
- {
- printk(KERN_WARNING "%s: DMA timed out.\n",dev->name);
- spin_unlock_irqrestore(&lp->lock, flags);
- return;
- }
- }
-
- /* Get response length. */
- pkt_len = inb(ioaddr);
- pkt_len |= (inb(ioaddr) << 8);
- /* Input IO code. */
- rsp_type=inb(ioaddr);
-
- /* Malloc up new buffer. */
- skb = dev_alloc_skb(pkt_len);
- if(skb == NULL)
- {
- printk(KERN_WARNING "%s: Memory squeeze, dropping packet.\n",
- dev->name);
- dev->stats.rx_dropped++;
- while(pkt_len--) /* Discard packet */
- inb(ioaddr);
- spin_unlock_irqrestore(&lp->lock, flags);
- return;
- }
- skb->dev = dev;
- skb_put(skb, pkt_len);
- skb->protocol = htons(ETH_P_LOCALTALK);
-
- insb(ioaddr, skb->data, pkt_len); /* Eat the Data */
-
- if(lp->board==DAYNA)
- outb(1, ioaddr+DAYNA_INT_CARD); /* Interrupt the card */
-
- spin_unlock_irqrestore(&lp->lock, flags); /* Restore interrupts. */
-
- /* Check for bad response length */
- if(pkt_len < 0 || pkt_len > MAX_LLAP_SIZE)
- {
- printk(KERN_WARNING "%s: Bad packet length of %d bytes.\n",
- dev->name, pkt_len);
- dev->stats.rx_errors++;
- dev_kfree_skb_any(skb);
- return;
- }
-
- /* Set nodeid and then get out. */
- if(rsp_type == LAP_INIT_RSP)
- { /* Nodeid taken from received packet. */
- lp->node_acquire = skb->data[0];
- dev_kfree_skb_any(skb);
- return;
- }
-
- /* One last check to make sure we have a good packet. */
- if(rsp_type != LAP_RESPONSE)
- {
- printk(KERN_WARNING "%s: Bad packet type %d.\n", dev->name, rsp_type);
- dev->stats.rx_errors++;
- dev_kfree_skb_any(skb);
- return;
- }
-
- skb_reset_mac_header(skb); /* Point to entire packet. */
- skb_pull(skb,3);
- skb_reset_transport_header(skb); /* Point to data (Skip header). */
-
- /* Update the counters. */
- dev->stats.rx_packets++;
- dev->stats.rx_bytes += skb->len;
-
- /* Send packet to a higher place. */
- netif_rx(skb);
-}
-
-static void cops_timeout(struct net_device *dev, unsigned int txqueue)
-{
- struct cops_local *lp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
-
- dev->stats.tx_errors++;
- if(lp->board==TANGENT)
- {
- if((inb(ioaddr+TANG_CARD_STATUS)&TANG_TX_READY)==0)
- printk(KERN_WARNING "%s: No TX complete interrupt.\n", dev->name);
- }
- printk(KERN_WARNING "%s: Transmit timed out.\n", dev->name);
- cops_jumpstart(dev); /* Restart the card. */
- netif_trans_update(dev); /* prevent tx timeout */
- netif_wake_queue(dev);
-}
-
-
-/*
- * Make the card transmit a LocalTalk packet.
- */
-
-static netdev_tx_t cops_send_packet(struct sk_buff *skb,
- struct net_device *dev)
-{
- struct cops_local *lp = netdev_priv(dev);
- int ioaddr = dev->base_addr;
- unsigned long flags;
-
- /*
- * Block a timer-based transmit from overlapping.
- */
-
- netif_stop_queue(dev);
-
- spin_lock_irqsave(&lp->lock, flags);
- if(lp->board == DAYNA) /* Wait for adapter transmit buffer. */
- while((inb(ioaddr+DAYNA_CARD_STATUS)&DAYNA_TX_READY)==0)
- cpu_relax();
- if(lp->board == TANGENT) /* Wait for adapter transmit buffer. */
- while((inb(ioaddr+TANG_CARD_STATUS)&TANG_TX_READY)==0)
- cpu_relax();
-
- /* Output IO length. */
- outb(skb->len, ioaddr);
- outb(skb->len >> 8, ioaddr);
-
- /* Output IO code. */
- outb(LAP_WRITE, ioaddr);
-
- if(lp->board == DAYNA) /* Check the transmit buffer again. */
- while((inb(ioaddr+DAYNA_CARD_STATUS)&DAYNA_TX_READY)==0);
-
- outsb(ioaddr, skb->data, skb->len); /* Send out the data. */
-
- if(lp->board==DAYNA) /* Dayna requires you kick the card */
- outb(1, ioaddr+DAYNA_INT_CARD);
-
- spin_unlock_irqrestore(&lp->lock, flags); /* Restore interrupts. */
-
- /* Done sending packet, update counters and cleanup. */
- dev->stats.tx_packets++;
- dev->stats.tx_bytes += skb->len;
- dev_kfree_skb (skb);
- return NETDEV_TX_OK;
-}
-
-/*
- * Dummy function to keep the Appletalk layer happy.
- */
-
-static void set_multicast_list(struct net_device *dev)
-{
- if(cops_debug >= 3)
- printk("%s: set_multicast_list executed\n", dev->name);
-}
-
-/*
- * System ioctls for the COPS LocalTalk card.
- */
-
-static int cops_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
-{
- struct cops_local *lp = netdev_priv(dev);
- struct sockaddr_at *sa = (struct sockaddr_at *)&ifr->ifr_addr;
- struct atalk_addr *aa = &lp->node_addr;
-
- switch(cmd)
- {
- case SIOCSIFADDR:
- /* Get and set the nodeid and network # atalkd wants. */
- cops_nodeid(dev, sa->sat_addr.s_node);
- aa->s_net = sa->sat_addr.s_net;
- aa->s_node = lp->node_acquire;
-
- /* Set broadcast address. */
- dev->broadcast[0] = 0xFF;
-
- /* Set hardware address. */
- dev->addr_len = 1;
- dev_addr_set(dev, &aa->s_node);
- return 0;
-
- case SIOCGIFADDR:
- sa->sat_addr.s_net = aa->s_net;
- sa->sat_addr.s_node = aa->s_node;
- return 0;
-
- default:
- return -EOPNOTSUPP;
- }
-}
-
-/*
- * The inverse routine to cops_open().
- */
-
-static int cops_close(struct net_device *dev)
-{
- struct cops_local *lp = netdev_priv(dev);
-
- /* If we were running polled, yank the timer.
- */
- if(lp->board==TANGENT && dev->irq==0)
- del_timer(&cops_timer);
-
- netif_stop_queue(dev);
- return 0;
-}
-
-
-#ifdef MODULE
-static struct net_device *cops_dev;
-
-MODULE_LICENSE("GPL");
-module_param_hw(io, int, ioport, 0);
-module_param_hw(irq, int, irq, 0);
-module_param_hw(board_type, int, other, 0);
-
-static int __init cops_module_init(void)
-{
- if (io == 0)
- printk(KERN_WARNING "%s: You shouldn't autoprobe with insmod\n",
- cardname);
- cops_dev = cops_probe(-1);
- return PTR_ERR_OR_ZERO(cops_dev);
-}
-
-static void __exit cops_module_exit(void)
-{
- unregister_netdev(cops_dev);
- cleanup_card(cops_dev);
- free_netdev(cops_dev);
-}
-module_init(cops_module_init);
-module_exit(cops_module_exit);
-#endif /* MODULE */
diff --git a/drivers/net/appletalk/cops.h b/drivers/net/appletalk/cops.h
deleted file mode 100644
index 7a0bfb351929..000000000000
--- a/drivers/net/appletalk/cops.h
+++ /dev/null
@@ -1,61 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* cops.h: LocalTalk driver for Linux.
- *
- * Authors:
- * - Jay Schulist <jschlst@samba.org>
- */
-
-#ifndef __LINUX_COPSLTALK_H
-#define __LINUX_COPSLTALK_H
-
-#ifdef __KERNEL__
-
-/* Max LLAP size we will accept. */
-#define MAX_LLAP_SIZE 603
-
-/* Tangent */
-#define TANG_CARD_STATUS 1
-#define TANG_CLEAR_INT 1
-#define TANG_RESET 3
-
-#define TANG_TX_READY 1
-#define TANG_RX_READY 2
-
-/* Dayna */
-#define DAYNA_CMD_DATA 0
-#define DAYNA_CLEAR_INT 1
-#define DAYNA_CARD_STATUS 2
-#define DAYNA_INT_CARD 3
-#define DAYNA_RESET 4
-
-#define DAYNA_RX_READY 0
-#define DAYNA_TX_READY 1
-#define DAYNA_RX_REQUEST 3
-
-/* Same on both card types */
-#define COPS_CLEAR_INT 1
-
-/* LAP response codes received from the cards. */
-#define LAP_INIT 1 /* Init cmd */
-#define LAP_INIT_RSP 2 /* Init response */
-#define LAP_WRITE 3 /* Write cmd */
-#define DATA_READ 4 /* Data read */
-#define LAP_RESPONSE 4 /* Received ALAP frame response */
-#define LAP_GETSTAT 5 /* Get LAP and HW status */
-#define LAP_RSPSTAT 6 /* Status response */
-
-#endif
-
-/*
- * Structure to hold the firmware information.
- */
-struct ltfirmware
-{
- unsigned int length;
- const unsigned char *data;
-};
-
-#define DAYNA 1
-#define TANGENT 2
-
-#endif
diff --git a/drivers/net/appletalk/cops_ffdrv.h b/drivers/net/appletalk/cops_ffdrv.h
deleted file mode 100644
index b02005087c1b..000000000000
--- a/drivers/net/appletalk/cops_ffdrv.h
+++ /dev/null
@@ -1,532 +0,0 @@
-
-/*
- * The firmware this driver downloads into the Localtalk card is a
- * separate program and is not GPL'd source code, even though the Linux
- * side driver and the routine that loads this data into the card are.
- *
- * It is taken from the COPS SDK and is under the following license
- *
- * This material is licensed to you strictly for use in conjunction with
- * the use of COPS LocalTalk adapters.
- * There is no charge for this SDK. And no waranty express or implied
- * about its fitness for any purpose. However, we will cheerefully
- * refund every penny you paid for this SDK...
- * Regards,
- *
- * Thomas F. Divine
- * Chief Scientist
- */
-
-
-/* cops_ffdrv.h: LocalTalk driver firmware dump for Linux.
- *
- * Authors:
- * - Jay Schulist <jschlst@samba.org>
- */
-
-
-#ifdef CONFIG_COPS_DAYNA
-
-static const unsigned char ffdrv_code[] = {
- 58,3,0,50,228,149,33,255,255,34,226,149,
- 249,17,40,152,33,202,154,183,237,82,77,68,
- 11,107,98,19,54,0,237,176,175,50,80,0,
- 62,128,237,71,62,32,237,57,51,62,12,237,
- 57,50,237,57,54,62,6,237,57,52,62,12,
- 237,57,49,33,107,137,34,32,128,33,83,130,
- 34,40,128,33,86,130,34,42,128,33,112,130,
- 34,36,128,33,211,130,34,38,128,62,0,237,
- 57,16,33,63,148,34,34,128,237,94,205,15,
- 130,251,205,168,145,24,141,67,111,112,121,114,
- 105,103,104,116,32,40,67,41,32,49,57,56,
- 56,32,45,32,68,97,121,110,97,32,67,111,
- 109,109,117,110,105,99,97,116,105,111,110,115,
- 32,32,32,65,108,108,32,114,105,103,104,116,
- 115,32,114,101,115,101,114,118,101,100,46,32,
- 32,40,68,40,68,7,16,8,34,7,22,6,
- 16,5,12,4,8,3,6,140,0,16,39,128,
- 0,4,96,10,224,6,0,7,126,2,64,11,
- 118,12,6,13,0,14,193,15,0,5,96,3,
- 192,1,64,9,8,62,9,211,66,62,192,211,
- 66,62,100,61,32,253,6,28,33,205,129,14,
- 66,237,163,194,253,129,6,28,33,205,129,14,
- 64,237,163,194,9,130,201,62,47,50,71,152,
- 62,47,211,68,58,203,129,237,57,20,58,204,
- 129,237,57,21,33,77,152,54,132,205,233,129,
- 58,228,149,254,209,40,6,56,4,62,0,24,
- 2,219,96,33,233,149,119,230,62,33,232,149,
- 119,213,33,8,152,17,7,0,25,119,19,25,
- 119,209,201,251,237,77,245,197,213,229,221,229,
- 205,233,129,62,1,50,106,137,205,158,139,221,
- 225,225,209,193,241,251,237,77,245,197,213,219,
- 72,237,56,16,230,46,237,57,16,237,56,12,
- 58,72,152,183,32,26,6,20,17,128,2,237,
- 56,46,187,32,35,237,56,47,186,32,29,219,
- 72,230,1,32,3,5,32,232,175,50,72,152,
- 229,221,229,62,1,50,106,137,205,158,139,221,
- 225,225,24,25,62,1,50,72,152,58,201,129,
- 237,57,12,58,202,129,237,57,13,237,56,16,
- 246,17,237,57,16,209,193,241,251,237,77,245,
- 197,229,213,221,229,237,56,16,230,17,237,57,
- 16,237,56,20,58,34,152,246,16,246,8,211,
- 68,62,6,61,32,253,58,34,152,246,8,211,
- 68,58,203,129,237,57,20,58,204,129,237,57,
- 21,237,56,16,246,34,237,57,16,221,225,209,
- 225,193,241,251,237,77,33,2,0,57,126,230,
- 3,237,100,1,40,2,246,128,230,130,245,62,
- 5,211,64,241,211,64,201,229,213,243,237,56,
- 16,230,46,237,57,16,237,56,12,251,70,35,
- 35,126,254,175,202,77,133,254,129,202,15,133,
- 230,128,194,191,132,43,58,44,152,119,33,76,
- 152,119,35,62,132,119,120,254,255,40,4,58,
- 49,152,119,219,72,43,43,112,17,3,0,237,
- 56,52,230,248,237,57,52,219,72,230,1,194,
- 141,131,209,225,237,56,52,246,6,237,57,52,
- 62,1,55,251,201,62,3,211,66,62,192,211,
- 66,62,48,211,66,0,0,219,66,230,1,40,
- 4,219,67,24,240,205,203,135,58,75,152,254,
- 255,202,128,132,58,49,152,254,161,250,207,131,
- 58,34,152,211,68,62,10,211,66,62,128,211,
- 66,62,11,211,66,62,6,211,66,24,0,62,
- 14,211,66,62,33,211,66,62,1,211,66,62,
- 64,211,66,62,3,211,66,62,209,211,66,62,
- 100,71,219,66,230,1,32,6,5,32,247,195,
- 248,132,219,67,71,58,44,152,184,194,248,132,
- 62,100,71,219,66,230,1,32,6,5,32,247,
- 195,248,132,219,67,62,100,71,219,66,230,1,
- 32,6,5,32,247,195,248,132,219,67,254,133,
- 32,7,62,0,50,74,152,24,17,254,173,32,
- 7,62,1,50,74,152,24,6,254,141,194,248,
- 132,71,209,225,58,49,152,254,132,32,10,62,
- 50,205,2,134,205,144,135,24,27,254,140,32,
- 15,62,110,205,2,134,62,141,184,32,5,205,
- 144,135,24,8,62,10,205,2,134,205,8,134,
- 62,1,50,106,137,205,158,139,237,56,52,246,
- 6,237,57,52,175,183,251,201,62,20,135,237,
- 57,20,175,237,57,21,237,56,16,246,2,237,
- 57,16,237,56,20,95,237,56,21,123,254,10,
- 48,244,237,56,16,230,17,237,57,16,209,225,
- 205,144,135,62,1,50,106,137,205,158,139,237,
- 56,52,246,6,237,57,52,175,183,251,201,209,
- 225,243,219,72,230,1,40,13,62,10,211,66,
- 0,0,219,66,230,192,202,226,132,237,56,52,
- 246,6,237,57,52,62,1,55,251,201,205,203,
- 135,62,1,50,106,137,205,158,139,237,56,52,
- 246,6,237,57,52,183,251,201,209,225,62,1,
- 50,106,137,205,158,139,237,56,52,246,6,237,
- 57,52,62,2,55,251,201,209,225,243,219,72,
- 230,1,202,213,132,62,10,211,66,0,0,219,
- 66,230,192,194,213,132,229,62,1,50,106,137,
- 42,40,152,205,65,143,225,17,3,0,205,111,
- 136,62,6,211,66,58,44,152,211,66,237,56,
- 52,246,6,237,57,52,183,251,201,209,197,237,
- 56,52,230,248,237,57,52,219,72,230,1,32,
- 15,193,225,237,56,52,246,6,237,57,52,62,
- 1,55,251,201,14,23,58,37,152,254,0,40,
- 14,14,2,254,1,32,5,62,140,119,24,3,
- 62,132,119,43,43,197,205,203,135,193,62,1,
- 211,66,62,64,211,66,62,3,211,66,62,193,
- 211,66,62,100,203,39,71,219,66,230,1,32,
- 6,5,32,247,195,229,133,33,238,151,219,67,
- 71,58,44,152,184,194,229,133,119,62,100,71,
- 219,66,230,1,32,6,5,32,247,195,229,133,
- 219,67,35,119,13,32,234,193,225,62,1,50,
- 106,137,205,158,139,237,56,52,246,6,237,57,
- 52,175,183,251,201,33,234,151,35,35,62,255,
- 119,193,225,62,1,50,106,137,205,158,139,237,
- 56,52,246,6,237,57,52,175,251,201,243,61,
- 32,253,251,201,62,3,211,66,62,192,211,66,
- 58,49,152,254,140,32,19,197,229,213,17,181,
- 129,33,185,129,1,2,0,237,176,209,225,193,
- 24,27,229,213,33,187,129,58,49,152,230,15,
- 87,30,2,237,92,25,17,181,129,126,18,19,
- 35,126,18,209,225,58,34,152,246,8,211,68,
- 58,49,152,254,165,40,14,254,164,40,10,62,
- 10,211,66,62,224,211,66,24,25,58,74,152,
- 254,0,40,10,62,10,211,66,62,160,211,66,
- 24,8,62,10,211,66,62,128,211,66,62,11,
- 211,66,62,6,211,66,205,147,143,62,5,211,
- 66,62,224,211,66,62,5,211,66,62,96,211,
- 66,62,5,61,32,253,62,5,211,66,62,224,
- 211,66,62,14,61,32,253,62,5,211,66,62,
- 233,211,66,62,128,211,66,58,181,129,61,32,
- 253,62,1,211,66,62,192,211,66,1,254,19,
- 237,56,46,187,32,6,13,32,247,195,226,134,
- 62,192,211,66,0,0,219,66,203,119,40,250,
- 219,66,203,87,40,250,243,237,56,16,230,17,
- 237,57,16,237,56,20,251,62,5,211,66,62,
- 224,211,66,58,182,129,61,32,253,229,33,181,
- 129,58,183,129,203,63,119,35,58,184,129,119,
- 225,62,10,211,66,62,224,211,66,62,11,211,
- 66,62,118,211,66,62,47,211,68,62,5,211,
- 66,62,233,211,66,58,181,129,61,32,253,62,
- 5,211,66,62,224,211,66,58,182,129,61,32,
- 253,62,5,211,66,62,96,211,66,201,229,213,
- 58,50,152,230,15,87,30,2,237,92,33,187,
- 129,25,17,181,129,126,18,35,19,126,18,209,
- 225,58,71,152,246,8,211,68,58,50,152,254,
- 165,40,14,254,164,40,10,62,10,211,66,62,
- 224,211,66,24,8,62,10,211,66,62,128,211,
- 66,62,11,211,66,62,6,211,66,195,248,135,
- 62,3,211,66,62,192,211,66,197,229,213,17,
- 181,129,33,183,129,1,2,0,237,176,209,225,
- 193,62,47,211,68,62,10,211,66,62,224,211,
- 66,62,11,211,66,62,118,211,66,62,1,211,
- 66,62,0,211,66,205,147,143,195,16,136,62,
- 3,211,66,62,192,211,66,197,229,213,17,181,
- 129,33,183,129,1,2,0,237,176,209,225,193,
- 62,47,211,68,62,10,211,66,62,224,211,66,
- 62,11,211,66,62,118,211,66,205,147,143,62,
- 5,211,66,62,224,211,66,62,5,211,66,62,
- 96,211,66,62,5,61,32,253,62,5,211,66,
- 62,224,211,66,62,14,61,32,253,62,5,211,
- 66,62,233,211,66,62,128,211,66,58,181,129,
- 61,32,253,62,1,211,66,62,192,211,66,1,
- 254,19,237,56,46,187,32,6,13,32,247,195,
- 88,136,62,192,211,66,0,0,219,66,203,119,
- 40,250,219,66,203,87,40,250,62,5,211,66,
- 62,224,211,66,58,182,129,61,32,253,62,5,
- 211,66,62,96,211,66,201,197,14,67,6,0,
- 62,3,211,66,62,192,211,66,62,48,211,66,
- 0,0,219,66,230,1,40,4,219,67,24,240,
- 62,5,211,66,62,233,211,66,62,128,211,66,
- 58,181,129,61,32,253,237,163,29,62,192,211,
- 66,219,66,230,4,40,250,237,163,29,32,245,
- 219,66,230,4,40,250,62,255,71,219,66,230,
- 4,40,3,5,32,247,219,66,230,4,40,250,
- 62,5,211,66,62,224,211,66,58,182,129,61,
- 32,253,62,5,211,66,62,96,211,66,58,71,
- 152,254,1,202,18,137,62,16,211,66,62,56,
- 211,66,62,14,211,66,62,33,211,66,62,1,
- 211,66,62,248,211,66,237,56,48,246,153,230,
- 207,237,57,48,62,3,211,66,62,221,211,66,
- 193,201,58,71,152,211,68,62,10,211,66,62,
- 128,211,66,62,11,211,66,62,6,211,66,62,
- 6,211,66,58,44,152,211,66,62,16,211,66,
- 62,56,211,66,62,48,211,66,0,0,62,14,
- 211,66,62,33,211,66,62,1,211,66,62,248,
- 211,66,237,56,48,246,145,246,8,230,207,237,
- 57,48,62,3,211,66,62,221,211,66,193,201,
- 44,3,1,0,70,69,1,245,197,213,229,175,
- 50,72,152,237,56,16,230,46,237,57,16,237,
- 56,12,62,1,211,66,0,0,219,66,95,230,
- 160,32,3,195,20,139,123,230,96,194,72,139,
- 62,48,211,66,62,1,211,66,62,64,211,66,
- 237,91,40,152,205,207,143,25,43,55,237,82,
- 218,70,139,34,42,152,98,107,58,44,152,190,
- 194,210,138,35,35,62,130,190,194,200,137,62,
- 1,50,48,152,62,175,190,202,82,139,62,132,
- 190,32,44,50,50,152,62,47,50,71,152,229,
- 175,50,106,137,42,40,152,205,65,143,225,54,
- 133,43,70,58,44,152,119,43,112,17,3,0,
- 62,10,205,2,134,205,111,136,195,158,138,62,
- 140,190,32,19,50,50,152,58,233,149,230,4,
- 202,222,138,62,1,50,71,152,195,219,137,126,
- 254,160,250,185,138,254,166,242,185,138,50,50,
- 152,43,126,35,229,213,33,234,149,95,22,0,
- 25,126,254,132,40,18,254,140,40,14,58,50,
- 152,230,15,87,126,31,21,242,65,138,56,2,
- 175,119,58,50,152,230,15,87,58,233,149,230,
- 62,31,21,242,85,138,218,98,138,209,225,195,
- 20,139,58,50,152,33,100,137,230,15,95,22,
- 0,25,126,50,71,152,209,225,58,50,152,254,
- 164,250,135,138,58,73,152,254,0,40,4,54,
- 173,24,2,54,133,43,70,58,44,152,119,43,
- 112,17,3,0,205,70,135,175,50,106,137,205,
- 208,139,58,199,129,237,57,12,58,200,129,237,
- 57,13,237,56,16,246,17,237,57,16,225,209,
- 193,241,251,237,77,62,129,190,194,227,138,54,
- 130,43,70,58,44,152,119,43,112,17,3,0,
- 205,144,135,195,20,139,35,35,126,254,132,194,
- 227,138,175,50,106,137,205,158,139,24,42,58,
- 201,154,254,1,40,7,62,1,50,106,137,24,
- 237,58,106,137,254,1,202,222,138,62,128,166,
- 194,222,138,221,229,221,33,67,152,205,127,142,
- 205,109,144,221,225,225,209,193,241,251,237,77,
- 58,106,137,254,1,202,44,139,58,50,152,254,
- 164,250,44,139,58,73,152,238,1,50,73,152,
- 221,229,221,33,51,152,205,127,142,221,225,62,
- 1,50,106,137,205,158,139,195,13,139,24,208,
- 24,206,24,204,230,64,40,3,195,20,139,195,
- 20,139,43,126,33,8,152,119,35,58,44,152,
- 119,43,237,91,35,152,205,203,135,205,158,139,
- 195,13,139,175,50,78,152,62,3,211,66,62,
- 192,211,66,201,197,33,4,0,57,126,35,102,
- 111,62,1,50,106,137,219,72,205,141,139,193,
- 201,62,1,50,78,152,34,40,152,54,0,35,
- 35,54,0,195,163,139,58,78,152,183,200,229,
- 33,181,129,58,183,129,119,35,58,184,129,119,
- 225,62,47,211,68,62,14,211,66,62,193,211,
- 66,62,10,211,66,62,224,211,66,62,11,211,
- 66,62,118,211,66,195,3,140,58,78,152,183,
- 200,58,71,152,211,68,254,69,40,4,254,70,
- 32,17,58,73,152,254,0,40,10,62,10,211,
- 66,62,160,211,66,24,8,62,10,211,66,62,
- 128,211,66,62,11,211,66,62,6,211,66,62,
- 6,211,66,58,44,152,211,66,62,16,211,66,
- 62,56,211,66,62,48,211,66,0,0,219,66,
- 230,1,40,4,219,67,24,240,62,14,211,66,
- 62,33,211,66,42,40,152,205,65,143,62,1,
- 211,66,62,248,211,66,237,56,48,246,145,246,
- 8,230,207,237,57,48,62,3,211,66,62,221,
- 211,66,201,62,16,211,66,62,56,211,66,62,
- 48,211,66,0,0,219,66,230,1,40,4,219,
- 67,24,240,62,14,211,66,62,33,211,66,62,
- 1,211,66,62,248,211,66,237,56,48,246,153,
- 230,207,237,57,48,62,3,211,66,62,221,211,
- 66,201,229,213,33,234,149,95,22,0,25,126,
- 254,132,40,4,254,140,32,2,175,119,123,209,
- 225,201,6,8,14,0,31,48,1,12,16,250,
- 121,201,33,4,0,57,94,35,86,33,2,0,
- 57,126,35,102,111,221,229,34,89,152,237,83,
- 91,152,221,33,63,152,205,127,142,58,81,152,
- 50,82,152,58,80,152,135,50,80,152,205,162,
- 140,254,3,56,16,58,81,152,135,60,230,15,
- 50,81,152,175,50,80,152,24,23,58,79,152,
- 205,162,140,254,3,48,13,58,81,152,203,63,
- 50,81,152,62,255,50,79,152,58,81,152,50,
- 82,152,58,79,152,135,50,79,152,62,32,50,
- 83,152,50,84,152,237,56,16,230,17,237,57,
- 16,219,72,62,192,50,93,152,62,93,50,94,
- 152,58,93,152,61,50,93,152,32,9,58,94,
- 152,61,50,94,152,40,44,62,170,237,57,20,
- 175,237,57,21,237,56,16,246,2,237,57,16,
- 219,72,230,1,202,29,141,237,56,20,71,237,
- 56,21,120,254,10,48,237,237,56,16,230,17,
- 237,57,16,243,62,14,211,66,62,65,211,66,
- 251,58,39,152,23,23,60,50,39,152,71,58,
- 82,152,160,230,15,40,22,71,14,10,219,66,
- 230,16,202,186,141,219,72,230,1,202,186,141,
- 13,32,239,16,235,42,89,152,237,91,91,152,
- 205,47,131,48,7,61,202,186,141,195,227,141,
- 221,225,33,0,0,201,221,33,55,152,205,127,
- 142,58,84,152,61,50,84,152,40,19,58,82,
- 152,246,1,50,82,152,58,79,152,246,1,50,
- 79,152,195,29,141,221,225,33,1,0,201,221,
- 33,59,152,205,127,142,58,80,152,246,1,50,
- 80,152,58,82,152,135,246,1,50,82,152,58,
- 83,152,61,50,83,152,194,29,141,221,225,33,
- 2,0,201,221,229,33,0,0,57,17,4,0,
- 25,126,50,44,152,230,128,50,85,152,58,85,
- 152,183,40,6,221,33,88,2,24,4,221,33,
- 150,0,58,44,152,183,40,53,60,40,50,60,
- 40,47,61,61,33,86,152,119,35,119,35,54,
- 129,175,50,48,152,221,43,221,229,225,124,181,
- 40,42,33,86,152,17,3,0,205,189,140,17,
- 232,3,27,123,178,32,251,58,48,152,183,40,
- 224,58,44,152,71,62,7,128,230,127,71,58,
- 85,152,176,50,44,152,24,162,221,225,201,183,
- 221,52,0,192,221,52,1,192,221,52,2,192,
- 221,52,3,192,55,201,245,62,1,211,100,241,
- 201,245,62,1,211,96,241,201,33,2,0,57,
- 126,35,102,111,237,56,48,230,175,237,57,48,
- 62,48,237,57,49,125,237,57,32,124,237,57,
- 33,62,0,237,57,34,62,88,237,57,35,62,
- 0,237,57,36,237,57,37,33,128,2,125,237,
- 57,38,124,237,57,39,237,56,48,246,97,230,
- 207,237,57,48,62,0,237,57,0,62,0,211,
- 96,211,100,201,33,2,0,57,126,35,102,111,
- 237,56,48,230,175,237,57,48,62,12,237,57,
- 49,62,76,237,57,32,62,0,237,57,33,237,
- 57,34,125,237,57,35,124,237,57,36,62,0,
- 237,57,37,33,128,2,125,237,57,38,124,237,
- 57,39,237,56,48,246,97,230,207,237,57,48,
- 62,1,211,96,201,33,2,0,57,126,35,102,
- 111,229,237,56,48,230,87,237,57,48,125,237,
- 57,40,124,237,57,41,62,0,237,57,42,62,
- 67,237,57,43,62,0,237,57,44,58,106,137,
- 254,1,32,5,33,6,0,24,3,33,128,2,
- 125,237,57,46,124,237,57,47,237,56,50,230,
- 252,246,2,237,57,50,225,201,33,4,0,57,
- 94,35,86,33,2,0,57,126,35,102,111,237,
- 56,48,230,87,237,57,48,125,237,57,40,124,
- 237,57,41,62,0,237,57,42,62,67,237,57,
- 43,62,0,237,57,44,123,237,57,46,122,237,
- 57,47,237,56,50,230,244,246,0,237,57,50,
- 237,56,48,246,145,230,207,237,57,48,201,213,
- 237,56,46,95,237,56,47,87,237,56,46,111,
- 237,56,47,103,183,237,82,32,235,33,128,2,
- 183,237,82,209,201,213,237,56,38,95,237,56,
- 39,87,237,56,38,111,237,56,39,103,183,237,
- 82,32,235,33,128,2,183,237,82,209,201,245,
- 197,1,52,0,237,120,230,253,237,121,193,241,
- 201,245,197,1,52,0,237,120,246,2,237,121,
- 193,241,201,33,2,0,57,126,35,102,111,126,
- 35,110,103,201,33,0,0,34,102,152,34,96,
- 152,34,98,152,33,202,154,34,104,152,237,91,
- 104,152,42,226,149,183,237,82,17,0,255,25,
- 34,100,152,203,124,40,6,33,0,125,34,100,
- 152,42,104,152,35,35,35,229,205,120,139,193,
- 201,205,186,149,229,42,40,152,35,35,35,229,
- 205,39,144,193,124,230,3,103,221,117,254,221,
- 116,255,237,91,42,152,35,35,35,183,237,82,
- 32,12,17,5,0,42,42,152,205,171,149,242,
- 169,144,42,40,152,229,205,120,139,193,195,198,
- 149,237,91,42,152,42,98,152,25,34,98,152,
- 19,19,19,42,102,152,25,34,102,152,237,91,
- 100,152,33,158,253,25,237,91,102,152,205,171,
- 149,242,214,144,33,0,0,34,102,152,62,1,
- 50,95,152,205,225,144,195,198,149,58,95,152,
- 183,200,237,91,96,152,42,102,152,205,171,149,
- 242,5,145,237,91,102,152,33,98,2,25,237,
- 91,96,152,205,171,149,250,37,145,237,91,96,
- 152,42,102,152,183,237,82,32,7,42,98,152,
- 125,180,40,13,237,91,102,152,42,96,152,205,
- 171,149,242,58,145,237,91,104,152,42,102,152,
- 25,35,35,35,229,205,120,139,193,175,50,95,
- 152,201,195,107,139,205,206,149,250,255,243,205,
- 225,144,251,58,230,149,183,194,198,149,17,1,
- 0,42,98,152,205,171,149,250,198,149,62,1,
- 50,230,149,237,91,96,152,42,104,152,25,221,
- 117,252,221,116,253,237,91,104,152,42,96,152,
- 25,35,35,35,221,117,254,221,116,255,35,35,
- 35,229,205,39,144,124,230,3,103,35,35,35,
- 221,117,250,221,116,251,235,221,110,252,221,102,
- 253,115,35,114,35,54,4,62,1,211,100,211,
- 84,195,198,149,33,0,0,34,102,152,34,96,
- 152,34,98,152,33,202,154,34,104,152,237,91,
- 104,152,42,226,149,183,237,82,17,0,255,25,
- 34,100,152,33,109,152,54,0,33,107,152,229,
- 205,240,142,193,62,47,50,34,152,62,132,50,
- 49,152,205,241,145,205,61,145,58,39,152,60,
- 50,39,152,24,241,205,206,149,251,255,33,109,
- 152,126,183,202,198,149,110,221,117,251,33,109,
- 152,54,0,221,126,251,254,1,40,28,254,3,
- 40,101,254,4,202,190,147,254,5,202,147,147,
- 254,8,40,87,33,107,152,229,205,240,142,195,
- 198,149,58,201,154,183,32,21,33,111,152,126,
- 50,229,149,205,52,144,33,110,152,110,38,0,
- 229,205,11,142,193,237,91,96,152,42,104,152,
- 25,221,117,254,221,116,255,35,35,54,2,17,
- 2,0,43,43,115,35,114,58,44,152,35,35,
- 119,58,228,149,35,119,62,1,211,100,211,84,
- 62,1,50,201,154,24,169,205,153,142,58,231,
- 149,183,40,250,175,50,231,149,33,110,152,126,
- 254,255,40,91,58,233,149,230,63,183,40,83,
- 94,22,0,33,234,149,25,126,183,40,13,33,
- 110,152,94,33,234,150,25,126,254,3,32,36,
- 205,81,148,125,180,33,110,152,94,22,0,40,
- 17,33,234,149,25,54,0,33,107,152,229,205,
- 240,142,193,195,198,149,33,234,150,25,54,0,
- 33,110,152,94,22,0,33,234,149,25,126,50,
- 49,152,254,132,32,37,62,47,50,34,152,42,
- 107,152,229,33,110,152,229,205,174,140,193,193,
- 125,180,33,110,152,94,22,0,33,234,150,202,
- 117,147,25,52,195,120,147,58,49,152,254,140,
- 32,7,62,1,50,34,152,24,210,62,32,50,
- 106,152,24,19,58,49,152,95,58,106,152,163,
- 183,58,106,152,32,11,203,63,50,106,152,58,
- 106,152,183,32,231,254,2,40,51,254,4,40,
- 38,254,8,40,26,254,16,40,13,254,32,32,
- 158,62,165,50,49,152,62,69,24,190,62,164,
- 50,49,152,62,70,24,181,62,163,50,49,152,
- 175,24,173,62,162,50,49,152,62,1,24,164,
- 62,161,50,49,152,62,3,24,155,25,54,0,
- 221,126,251,254,8,40,7,58,230,149,183,202,
- 32,146,33,107,152,229,205,240,142,193,211,84,
- 195,198,149,237,91,96,152,42,104,152,25,221,
- 117,254,221,116,255,35,35,54,6,17,2,0,
- 43,43,115,35,114,58,228,149,35,35,119,58,
- 233,149,35,119,205,146,142,195,32,146,237,91,
- 96,152,42,104,152,25,229,205,160,142,193,58,
- 231,149,183,40,250,175,50,231,149,243,237,91,
- 96,152,42,104,152,25,221,117,254,221,116,255,
- 78,35,70,221,113,252,221,112,253,89,80,42,
- 98,152,183,237,82,34,98,152,203,124,40,19,
- 33,0,0,34,98,152,34,102,152,34,96,152,
- 62,1,50,95,152,24,40,221,94,252,221,86,
- 253,19,19,19,42,96,152,25,34,96,152,237,
- 91,100,152,33,158,253,25,237,91,96,152,205,
- 171,149,242,55,148,33,0,0,34,96,152,175,
- 50,230,149,251,195,32,146,245,62,1,50,231,
- 149,62,16,237,57,0,211,80,241,251,237,77,
- 201,205,186,149,229,229,33,0,0,34,37,152,
- 33,110,152,126,50,234,151,58,44,152,33,235,
- 151,119,221,54,253,0,221,54,254,0,195,230,
- 148,33,236,151,54,175,33,3,0,229,33,234,
- 151,229,205,174,140,193,193,33,236,151,126,254,
- 255,40,74,33,245,151,110,221,117,255,33,249,
- 151,126,221,166,255,221,119,255,33,253,151,126,
- 221,166,255,221,119,255,58,232,149,95,221,126,
- 255,163,221,119,255,183,40,15,230,191,33,110,
- 152,94,22,0,33,234,149,25,119,24,12,33,
- 110,152,94,22,0,33,234,149,25,54,132,33,
- 0,0,195,198,149,221,110,253,221,102,254,35,
- 221,117,253,221,116,254,17,32,0,221,110,253,
- 221,102,254,205,171,149,250,117,148,58,233,149,
- 203,87,40,84,33,1,0,34,37,152,221,54,
- 253,0,221,54,254,0,24,53,33,236,151,54,
- 175,33,3,0,229,33,234,151,229,205,174,140,
- 193,193,33,236,151,126,254,255,40,14,33,110,
- 152,94,22,0,33,234,149,25,54,140,24,159,
- 221,110,253,221,102,254,35,221,117,253,221,116,
- 254,17,32,0,221,110,253,221,102,254,205,171,
- 149,250,12,149,33,2,0,34,37,152,221,54,
- 253,0,221,54,254,0,24,54,33,236,151,54,
- 175,33,3,0,229,33,234,151,229,205,174,140,
- 193,193,33,236,151,126,254,255,40,15,33,110,
- 152,94,22,0,33,234,149,25,54,132,195,211,
- 148,221,110,253,221,102,254,35,221,117,253,221,
- 116,254,17,32,0,221,110,253,221,102,254,205,
- 171,149,250,96,149,33,1,0,195,198,149,124,
- 170,250,179,149,237,82,201,124,230,128,237,82,
- 60,201,225,253,229,221,229,221,33,0,0,221,
- 57,233,221,249,221,225,253,225,201,233,225,253,
- 229,221,229,221,33,0,0,221,57,94,35,86,
- 35,235,57,249,235,233,0,0,0,0,0,0,
- 62,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 175,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,133,1,0,0,0,63,
- 255,255,255,255,0,0,0,63,0,0,0,0,
- 0,0,0,0,0,0,0,24,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0
- } ;
-
-#endif
diff --git a/drivers/net/appletalk/cops_ltdrv.h b/drivers/net/appletalk/cops_ltdrv.h
deleted file mode 100644
index c699b1ad31da..000000000000
--- a/drivers/net/appletalk/cops_ltdrv.h
+++ /dev/null
@@ -1,241 +0,0 @@
-/*
- * The firmware this driver downloads into the Localtalk card is a
- * separate program and is not GPL'd source code, even though the Linux
- * side driver and the routine that loads this data into the card are.
- *
- * It is taken from the COPS SDK and is under the following license
- *
- * This material is licensed to you strictly for use in conjunction with
- * the use of COPS LocalTalk adapters.
- * There is no charge for this SDK. And no waranty express or implied
- * about its fitness for any purpose. However, we will cheerefully
- * refund every penny you paid for this SDK...
- * Regards,
- *
- * Thomas F. Divine
- * Chief Scientist
- */
-
-
-/* cops_ltdrv.h: LocalTalk driver firmware dump for Linux.
- *
- * Authors:
- * - Jay Schulist <jschlst@samba.org>
- */
-
-
-#ifdef CONFIG_COPS_TANGENT
-
-static const unsigned char ltdrv_code[] = {
- 58,3,0,50,148,10,33,143,15,62,85,119,
- 190,32,9,62,170,119,190,32,3,35,24,241,
- 34,146,10,249,17,150,10,33,143,15,183,237,
- 82,77,68,11,107,98,19,54,0,237,176,62,
- 16,237,57,51,62,0,237,57,50,237,57,54,
- 62,12,237,57,49,62,195,33,39,2,50,56,
- 0,34,57,0,237,86,205,30,2,251,205,60,
- 10,24,169,67,111,112,121,114,105,103,104,116,
- 32,40,99,41,32,49,57,56,56,45,49,57,
- 57,50,44,32,80,114,105,110,116,105,110,103,
- 32,67,111,109,109,117,110,105,99,97,116,105,
- 111,110,115,32,65,115,115,111,99,105,97,116,
- 101,115,44,32,73,110,99,46,65,108,108,32,
- 114,105,103,104,116,115,32,114,101,115,101,114,
- 118,101,100,46,32,32,4,4,22,40,255,60,
- 4,96,10,224,6,0,7,126,2,64,11,246,
- 12,6,13,0,14,193,15,0,5,96,3,192,
- 1,0,9,8,62,3,211,82,62,192,211,82,
- 201,62,3,211,82,62,213,211,82,201,62,5,
- 211,82,62,224,211,82,201,62,5,211,82,62,
- 224,211,82,201,62,5,211,82,62,96,211,82,
- 201,6,28,33,180,1,14,82,237,163,194,4,
- 2,33,39,2,34,64,0,58,3,0,230,1,
- 192,62,11,237,121,62,118,237,121,201,33,182,
- 10,54,132,205,253,1,201,245,197,213,229,42,
- 150,10,14,83,17,98,2,67,20,237,162,58,
- 179,1,95,219,82,230,1,32,6,29,32,247,
- 195,17,3,62,1,211,82,219,82,95,230,160,
- 32,10,237,162,32,225,21,32,222,195,15,3,
- 237,162,123,230,96,194,21,3,62,48,211,82,
- 62,1,211,82,175,211,82,237,91,150,10,43,
- 55,237,82,218,19,3,34,152,10,98,107,58,
- 154,10,190,32,81,62,1,50,158,10,35,35,
- 62,132,190,32,44,54,133,43,70,58,154,10,
- 119,43,112,17,3,0,205,137,3,62,16,211,
- 82,62,56,211,82,205,217,1,42,150,10,14,
- 83,17,98,2,67,20,58,178,1,95,195,59,
- 2,62,129,190,194,227,2,54,130,43,70,58,
- 154,10,119,43,112,17,3,0,205,137,3,195,
- 254,2,35,35,126,254,132,194,227,2,205,61,
- 3,24,20,62,128,166,194,222,2,221,229,221,
- 33,175,10,205,93,6,205,144,7,221,225,225,
- 209,193,241,251,237,77,221,229,221,33,159,10,
- 205,93,6,221,225,205,61,3,195,247,2,24,
- 237,24,235,24,233,230,64,40,2,24,227,24,
- 225,175,50,179,10,205,208,1,201,197,33,4,
- 0,57,126,35,102,111,205,51,3,193,201,62,
- 1,50,179,10,34,150,10,54,0,58,179,10,
- 183,200,62,14,211,82,62,193,211,82,62,10,
- 211,82,62,224,211,82,62,6,211,82,58,154,
- 10,211,82,62,16,211,82,62,56,211,82,62,
- 48,211,82,219,82,230,1,40,4,219,83,24,
- 242,62,14,211,82,62,33,211,82,62,1,211,
- 82,62,9,211,82,62,32,211,82,205,217,1,
- 201,14,83,205,208,1,24,23,14,83,205,208,
- 1,205,226,1,58,174,1,61,32,253,205,244,
- 1,58,174,1,61,32,253,205,226,1,58,175,
- 1,61,32,253,62,5,211,82,62,233,211,82,
- 62,128,211,82,58,176,1,61,32,253,237,163,
- 27,62,192,211,82,219,82,230,4,40,250,237,
- 163,27,122,179,32,243,219,82,230,4,40,250,
- 58,178,1,71,219,82,230,4,40,3,5,32,
- 247,219,82,230,4,40,250,205,235,1,58,177,
- 1,61,32,253,205,244,1,201,229,213,35,35,
- 126,230,128,194,145,4,43,58,154,10,119,43,
- 70,33,181,10,119,43,112,17,3,0,243,62,
- 10,211,82,219,82,230,128,202,41,4,209,225,
- 62,1,55,251,201,205,144,3,58,180,10,254,
- 255,202,127,4,205,217,1,58,178,1,71,219,
- 82,230,1,32,6,5,32,247,195,173,4,219,
- 83,71,58,154,10,184,194,173,4,58,178,1,
- 71,219,82,230,1,32,6,5,32,247,195,173,
- 4,219,83,58,178,1,71,219,82,230,1,32,
- 6,5,32,247,195,173,4,219,83,254,133,194,
- 173,4,58,179,1,24,4,58,179,1,135,61,
- 32,253,209,225,205,137,3,205,61,3,183,251,
- 201,209,225,243,62,10,211,82,219,82,230,128,
- 202,164,4,62,1,55,251,201,205,144,3,205,
- 61,3,183,251,201,209,225,62,2,55,251,201,
- 243,62,14,211,82,62,33,211,82,251,201,33,
- 4,0,57,94,35,86,33,2,0,57,126,35,
- 102,111,221,229,34,193,10,237,83,195,10,221,
- 33,171,10,205,93,6,58,185,10,50,186,10,
- 58,184,10,135,50,184,10,205,112,6,254,3,
- 56,16,58,185,10,135,60,230,15,50,185,10,
- 175,50,184,10,24,23,58,183,10,205,112,6,
- 254,3,48,13,58,185,10,203,63,50,185,10,
- 62,255,50,183,10,58,185,10,50,186,10,58,
- 183,10,135,50,183,10,62,32,50,187,10,50,
- 188,10,6,255,219,82,230,16,32,3,5,32,
- 247,205,180,4,6,40,219,82,230,16,40,3,
- 5,32,247,62,10,211,82,219,82,230,128,194,
- 46,5,219,82,230,16,40,214,237,95,71,58,
- 186,10,160,230,15,40,32,71,14,10,62,10,
- 211,82,219,82,230,128,202,119,5,205,180,4,
- 195,156,5,219,82,230,16,202,156,5,13,32,
- 229,16,225,42,193,10,237,91,195,10,205,252,
- 3,48,7,61,202,156,5,195,197,5,221,225,
- 33,0,0,201,221,33,163,10,205,93,6,58,
- 188,10,61,50,188,10,40,19,58,186,10,246,
- 1,50,186,10,58,183,10,246,1,50,183,10,
- 195,46,5,221,225,33,1,0,201,221,33,167,
- 10,205,93,6,58,184,10,246,1,50,184,10,
- 58,186,10,135,246,1,50,186,10,58,187,10,
- 61,50,187,10,194,46,5,221,225,33,2,0,
- 201,221,229,33,0,0,57,17,4,0,25,126,
- 50,154,10,230,128,50,189,10,58,189,10,183,
- 40,6,221,33,88,2,24,4,221,33,150,0,
- 58,154,10,183,40,49,60,40,46,61,33,190,
- 10,119,35,119,35,54,129,175,50,158,10,221,
- 43,221,229,225,124,181,40,42,33,190,10,17,
- 3,0,205,206,4,17,232,3,27,123,178,32,
- 251,58,158,10,183,40,224,58,154,10,71,62,
- 7,128,230,127,71,58,189,10,176,50,154,10,
- 24,166,221,225,201,183,221,52,0,192,221,52,
- 1,192,221,52,2,192,221,52,3,192,55,201,
- 6,8,14,0,31,48,1,12,16,250,121,201,
- 33,2,0,57,94,35,86,35,78,35,70,35,
- 126,35,102,105,79,120,68,103,237,176,201,33,
- 2,0,57,126,35,102,111,62,17,237,57,48,
- 125,237,57,40,124,237,57,41,62,0,237,57,
- 42,62,64,237,57,43,62,0,237,57,44,33,
- 128,2,125,237,57,46,124,237,57,47,62,145,
- 237,57,48,211,68,58,149,10,211,66,201,33,
- 2,0,57,126,35,102,111,62,33,237,57,48,
- 62,64,237,57,32,62,0,237,57,33,237,57,
- 34,125,237,57,35,124,237,57,36,62,0,237,
- 57,37,33,128,2,125,237,57,38,124,237,57,
- 39,62,97,237,57,48,211,67,58,149,10,211,
- 66,201,237,56,46,95,237,56,47,87,237,56,
- 46,111,237,56,47,103,183,237,82,32,235,33,
- 128,2,183,237,82,201,237,56,38,95,237,56,
- 39,87,237,56,38,111,237,56,39,103,183,237,
- 82,32,235,33,128,2,183,237,82,201,205,106,
- 10,221,110,6,221,102,7,126,35,110,103,195,
- 118,10,205,106,10,33,0,0,34,205,10,34,
- 198,10,34,200,10,33,143,15,34,207,10,237,
- 91,207,10,42,146,10,183,237,82,17,0,255,
- 25,34,203,10,203,124,40,6,33,0,125,34,
- 203,10,42,207,10,229,205,37,3,195,118,10,
- 205,106,10,229,42,150,10,35,35,35,229,205,
- 70,7,193,124,230,3,103,221,117,254,221,116,
- 255,237,91,152,10,35,35,35,183,237,82,32,
- 12,17,5,0,42,152,10,205,91,10,242,203,
- 7,42,150,10,229,205,37,3,195,118,10,237,
- 91,152,10,42,200,10,25,34,200,10,42,205,
- 10,25,34,205,10,237,91,203,10,33,158,253,
- 25,237,91,205,10,205,91,10,242,245,7,33,
- 0,0,34,205,10,62,1,50,197,10,205,5,
- 8,33,0,0,57,249,195,118,10,205,106,10,
- 58,197,10,183,202,118,10,237,91,198,10,42,
- 205,10,205,91,10,242,46,8,237,91,205,10,
- 33,98,2,25,237,91,198,10,205,91,10,250,
- 78,8,237,91,198,10,42,205,10,183,237,82,
- 32,7,42,200,10,125,180,40,13,237,91,205,
- 10,42,198,10,205,91,10,242,97,8,237,91,
- 207,10,42,205,10,25,229,205,37,3,175,50,
- 197,10,195,118,10,205,29,3,33,0,0,57,
- 249,195,118,10,205,106,10,58,202,10,183,40,
- 22,205,14,7,237,91,209,10,19,19,19,205,
- 91,10,242,139,8,33,1,0,195,118,10,33,
- 0,0,195,118,10,205,126,10,252,255,205,108,
- 8,125,180,194,118,10,237,91,200,10,33,0,
- 0,205,91,10,242,118,10,237,91,207,10,42,
- 198,10,25,221,117,254,221,116,255,35,35,35,
- 229,205,70,7,193,124,230,3,103,35,35,35,
- 221,117,252,221,116,253,229,221,110,254,221,102,
- 255,229,33,212,10,229,205,124,6,193,193,221,
- 110,252,221,102,253,34,209,10,33,211,10,54,
- 4,33,209,10,227,205,147,6,193,62,1,50,
- 202,10,243,221,94,252,221,86,253,42,200,10,
- 183,237,82,34,200,10,203,124,40,17,33,0,
- 0,34,200,10,34,205,10,34,198,10,50,197,
- 10,24,37,221,94,252,221,86,253,42,198,10,
- 25,34,198,10,237,91,203,10,33,158,253,25,
- 237,91,198,10,205,91,10,242,68,9,33,0,
- 0,34,198,10,205,5,8,33,0,0,57,249,
- 251,195,118,10,205,106,10,33,49,13,126,183,
- 40,16,205,42,7,237,91,47,13,19,19,19,
- 205,91,10,242,117,9,58,142,15,198,1,50,
- 142,15,195,118,10,33,49,13,126,254,1,40,
- 25,254,3,202,7,10,254,5,202,21,10,33,
- 49,13,54,0,33,47,13,229,205,207,6,195,
- 118,10,58,141,15,183,32,72,33,51,13,126,
- 50,149,10,205,86,7,33,50,13,126,230,127,
- 183,32,40,58,142,15,230,127,50,142,15,183,
- 32,5,198,1,50,142,15,33,50,13,126,111,
- 23,159,103,203,125,58,142,15,40,5,198,128,
- 50,142,15,33,50,13,119,33,50,13,126,111,
- 23,159,103,229,205,237,5,193,33,211,10,54,
- 2,33,2,0,34,209,10,58,154,10,33,212,
- 10,119,58,148,10,33,213,10,119,33,209,10,
- 229,205,147,6,193,24,128,42,47,13,229,33,
- 50,13,229,205,191,4,193,24,239,33,211,10,
- 54,6,33,3,0,34,209,10,58,154,10,33,
- 212,10,119,58,148,10,33,213,10,119,33,214,
- 10,54,5,33,209,10,229,205,147,6,24,200,
- 205,106,10,33,49,13,54,0,33,47,13,229,
- 205,207,6,33,209,10,227,205,147,6,193,205,
- 80,9,205,145,8,24,248,124,170,250,99,10,
- 237,82,201,124,230,128,237,82,60,201,225,253,
- 229,221,229,221,33,0,0,221,57,233,221,249,
- 221,225,253,225,201,233,225,253,229,221,229,221,
- 33,0,0,221,57,94,35,86,35,235,57,249,
- 235,233,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0,0,0,0,0,0,0,0,
- 0,0,0,0,0
- } ;
-
-#endif
diff --git a/drivers/net/appletalk/ipddp.c b/drivers/net/appletalk/ipddp.c
deleted file mode 100644
index d558535390f9..000000000000
--- a/drivers/net/appletalk/ipddp.c
+++ /dev/null
@@ -1,345 +0,0 @@
-/*
- * ipddp.c: IP to Appletalk-IP Encapsulation driver for Linux
- * Appletalk-IP to IP Decapsulation driver for Linux
- *
- * Authors:
- * - DDP-IP Encap by: Bradford W. Johnson <johns393@maroon.tc.umn.edu>
- * - DDP-IP Decap by: Jay Schulist <jschlst@samba.org>
- *
- * Derived from:
- * - Almost all code already existed in net/appletalk/ddp.c; I just
- * moved/reorganized it into a driver file. Original IP-over-DDP code
- * was done by Bradford W. Johnson <johns393@maroon.tc.umn.edu>
- * - skeleton.c: A network driver outline for linux.
- * Written 1993-94 by Donald Becker.
- * - dummy.c: A dummy net driver. By Nick Holloway.
- * - MacGate: A user space Daemon for Appletalk-IP Decap for
- * Linux by Jay Schulist <jschlst@samba.org>
- *
- * Copyright 1993 United States Government as represented by the
- * Director, National Security Agency.
- *
- * This software may be used and distributed according to the terms
- * of the GNU General Public License, incorporated herein by reference.
- */
-
-#include <linux/compat.h>
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/init.h>
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/ip.h>
-#include <linux/atalk.h>
-#include <linux/if_arp.h>
-#include <linux/slab.h>
-#include <net/route.h>
-#include <linux/uaccess.h>
-
-#include "ipddp.h" /* Our stuff */
-
-static const char version[] = KERN_INFO "ipddp.c:v0.01 8/28/97 Bradford W. Johnson <johns393@maroon.tc.umn.edu>\n";
-
-static struct ipddp_route *ipddp_route_list;
-static DEFINE_SPINLOCK(ipddp_route_lock);
-
-#ifdef CONFIG_IPDDP_ENCAP
-static int ipddp_mode = IPDDP_ENCAP;
-#else
-static int ipddp_mode = IPDDP_DECAP;
-#endif
-
-/* Index to functions, as function prototypes. */
-static netdev_tx_t ipddp_xmit(struct sk_buff *skb,
- struct net_device *dev);
-static int ipddp_create(struct ipddp_route *new_rt);
-static int ipddp_delete(struct ipddp_route *rt);
-static struct ipddp_route* __ipddp_find_route(struct ipddp_route *rt);
-static int ipddp_siocdevprivate(struct net_device *dev, struct ifreq *ifr,
- void __user *data, int cmd);
-
-static const struct net_device_ops ipddp_netdev_ops = {
- .ndo_start_xmit = ipddp_xmit,
- .ndo_siocdevprivate = ipddp_siocdevprivate,
- .ndo_set_mac_address = eth_mac_addr,
- .ndo_validate_addr = eth_validate_addr,
-};
-
-static struct net_device * __init ipddp_init(void)
-{
- static unsigned version_printed;
- struct net_device *dev;
- int err;
-
- dev = alloc_etherdev(0);
- if (!dev)
- return ERR_PTR(-ENOMEM);
-
- netif_keep_dst(dev);
- strcpy(dev->name, "ipddp%d");
-
- if (version_printed++ == 0)
- printk(version);
-
- /* Initialize the device structure. */
- dev->netdev_ops = &ipddp_netdev_ops;
-
- dev->type = ARPHRD_IPDDP; /* IP over DDP tunnel */
- dev->mtu = 585;
- dev->flags |= IFF_NOARP;
-
- /*
- * The worst case header we will need is currently an
- * ethernet header (14 bytes) and a ddp header (sizeof ddpehdr+1).
- * We send over SNAP so that takes another 8 bytes.
- */
- dev->hard_header_len = 14+8+sizeof(struct ddpehdr)+1;
-
- err = register_netdev(dev);
- if (err) {
- free_netdev(dev);
- return ERR_PTR(err);
- }
-
- /* Let the user know what mode we are in */
- if(ipddp_mode == IPDDP_ENCAP)
- printk("%s: Appletalk-IP Encap. mode by Bradford W. Johnson <johns393@maroon.tc.umn.edu>\n",
- dev->name);
- if(ipddp_mode == IPDDP_DECAP)
- printk("%s: Appletalk-IP Decap. mode by Jay Schulist <jschlst@samba.org>\n",
- dev->name);
-
- return dev;
-}
-
-
-/*
- * Transmit LLAP/ELAP frame using aarp_send_ddp.
- */
-static netdev_tx_t ipddp_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- struct rtable *rtable = skb_rtable(skb);
- __be32 paddr = 0;
- struct ddpehdr *ddp;
- struct ipddp_route *rt;
- struct atalk_addr *our_addr;
-
- if (rtable->rt_gw_family == AF_INET)
- paddr = rtable->rt_gw4;
-
- spin_lock(&ipddp_route_lock);
-
- /*
- * Find appropriate route to use, based only on IP number.
- */
- for(rt = ipddp_route_list; rt != NULL; rt = rt->next)
- {
- if(rt->ip == paddr)
- break;
- }
- if(rt == NULL) {
- spin_unlock(&ipddp_route_lock);
- return NETDEV_TX_OK;
- }
-
- our_addr = atalk_find_dev_addr(rt->dev);
-
- if(ipddp_mode == IPDDP_DECAP)
- /*
- * Pull off the excess room that should not be there.
- * This is due to a hard-header problem. This is the
- * quick fix for now though, till it breaks.
- */
- skb_pull(skb, 35-(sizeof(struct ddpehdr)+1));
-
- /* Create the Extended DDP header */
- ddp = (struct ddpehdr *)skb->data;
- ddp->deh_len_hops = htons(skb->len + (1<<10));
- ddp->deh_sum = 0;
-
- /*
- * For Localtalk we need aarp_send_ddp to strip the
- * long DDP header and place a shot DDP header on it.
- */
- if(rt->dev->type == ARPHRD_LOCALTLK)
- {
- ddp->deh_dnet = 0; /* FIXME more hops?? */
- ddp->deh_snet = 0;
- }
- else
- {
- ddp->deh_dnet = rt->at.s_net; /* FIXME more hops?? */
- ddp->deh_snet = our_addr->s_net;
- }
- ddp->deh_dnode = rt->at.s_node;
- ddp->deh_snode = our_addr->s_node;
- ddp->deh_dport = 72;
- ddp->deh_sport = 72;
-
- *((__u8 *)(ddp+1)) = 22; /* ddp type = IP */
-
- skb->protocol = htons(ETH_P_ATALK); /* Protocol has changed */
-
- dev->stats.tx_packets++;
- dev->stats.tx_bytes += skb->len;
-
- aarp_send_ddp(rt->dev, skb, &rt->at, NULL);
-
- spin_unlock(&ipddp_route_lock);
-
- return NETDEV_TX_OK;
-}
-
-/*
- * Create a routing entry. We first verify that the
- * record does not already exist. If it does we return -EEXIST
- */
-static int ipddp_create(struct ipddp_route *new_rt)
-{
- struct ipddp_route *rt = kzalloc(sizeof(*rt), GFP_KERNEL);
-
- if (rt == NULL)
- return -ENOMEM;
-
- rt->ip = new_rt->ip;
- rt->at = new_rt->at;
- rt->next = NULL;
- if ((rt->dev = atrtr_get_dev(&rt->at)) == NULL) {
- kfree(rt);
- return -ENETUNREACH;
- }
-
- spin_lock_bh(&ipddp_route_lock);
- if (__ipddp_find_route(rt)) {
- spin_unlock_bh(&ipddp_route_lock);
- kfree(rt);
- return -EEXIST;
- }
-
- rt->next = ipddp_route_list;
- ipddp_route_list = rt;
-
- spin_unlock_bh(&ipddp_route_lock);
-
- return 0;
-}
-
-/*
- * Delete a route, we only delete a FULL match.
- * If route does not exist we return -ENOENT.
- */
-static int ipddp_delete(struct ipddp_route *rt)
-{
- struct ipddp_route **r = &ipddp_route_list;
- struct ipddp_route *tmp;
-
- spin_lock_bh(&ipddp_route_lock);
- while((tmp = *r) != NULL)
- {
- if(tmp->ip == rt->ip &&
- tmp->at.s_net == rt->at.s_net &&
- tmp->at.s_node == rt->at.s_node)
- {
- *r = tmp->next;
- spin_unlock_bh(&ipddp_route_lock);
- kfree(tmp);
- return 0;
- }
- r = &tmp->next;
- }
-
- spin_unlock_bh(&ipddp_route_lock);
- return -ENOENT;
-}
-
-/*
- * Find a routing entry, we only return a FULL match
- */
-static struct ipddp_route* __ipddp_find_route(struct ipddp_route *rt)
-{
- struct ipddp_route *f;
-
- for(f = ipddp_route_list; f != NULL; f = f->next)
- {
- if(f->ip == rt->ip &&
- f->at.s_net == rt->at.s_net &&
- f->at.s_node == rt->at.s_node)
- return f;
- }
-
- return NULL;
-}
-
-static int ipddp_siocdevprivate(struct net_device *dev, struct ifreq *ifr,
- void __user *data, int cmd)
-{
- struct ipddp_route rcp, rcp2, *rp;
-
- if (in_compat_syscall())
- return -EOPNOTSUPP;
-
- if(!capable(CAP_NET_ADMIN))
- return -EPERM;
-
- if (copy_from_user(&rcp, data, sizeof(rcp)))
- return -EFAULT;
-
- switch(cmd)
- {
- case SIOCADDIPDDPRT:
- return ipddp_create(&rcp);
-
- case SIOCFINDIPDDPRT:
- spin_lock_bh(&ipddp_route_lock);
- rp = __ipddp_find_route(&rcp);
- if (rp) {
- memset(&rcp2, 0, sizeof(rcp2));
- rcp2.ip = rp->ip;
- rcp2.at = rp->at;
- rcp2.flags = rp->flags;
- }
- spin_unlock_bh(&ipddp_route_lock);
-
- if (rp) {
- if (copy_to_user(data, &rcp2,
- sizeof(struct ipddp_route)))
- return -EFAULT;
- return 0;
- } else
- return -ENOENT;
-
- case SIOCDELIPDDPRT:
- return ipddp_delete(&rcp);
-
- default:
- return -EINVAL;
- }
-}
-
-static struct net_device *dev_ipddp;
-
-MODULE_LICENSE("GPL");
-module_param(ipddp_mode, int, 0);
-
-static int __init ipddp_init_module(void)
-{
- dev_ipddp = ipddp_init();
- return PTR_ERR_OR_ZERO(dev_ipddp);
-}
-
-static void __exit ipddp_cleanup_module(void)
-{
- struct ipddp_route *p;
-
- unregister_netdev(dev_ipddp);
- free_netdev(dev_ipddp);
-
- while (ipddp_route_list) {
- p = ipddp_route_list->next;
- kfree(ipddp_route_list);
- ipddp_route_list = p;
- }
-}
-
-module_init(ipddp_init_module);
-module_exit(ipddp_cleanup_module);
diff --git a/drivers/net/appletalk/ipddp.h b/drivers/net/appletalk/ipddp.h
deleted file mode 100644
index 9a8e45a46925..000000000000
--- a/drivers/net/appletalk/ipddp.h
+++ /dev/null
@@ -1,28 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * ipddp.h: Header for IP-over-DDP driver for Linux.
- */
-
-#ifndef __LINUX_IPDDP_H
-#define __LINUX_IPDDP_H
-
-#ifdef __KERNEL__
-
-#define SIOCADDIPDDPRT (SIOCDEVPRIVATE)
-#define SIOCDELIPDDPRT (SIOCDEVPRIVATE+1)
-#define SIOCFINDIPDDPRT (SIOCDEVPRIVATE+2)
-
-struct ipddp_route
-{
- struct net_device *dev; /* Carrier device */
- __be32 ip; /* IP address */
- struct atalk_addr at; /* Gateway appletalk address */
- int flags;
- struct ipddp_route *next;
-};
-
-#define IPDDP_ENCAP 1
-#define IPDDP_DECAP 2
-
-#endif /* __KERNEL__ */
-#endif /* __LINUX_IPDDP_H */
diff --git a/drivers/net/bareudp.c b/drivers/net/bareudp.c
index 683203f87ae2..31377bb1cc97 100644
--- a/drivers/net/bareudp.c
+++ b/drivers/net/bareudp.c
@@ -306,8 +306,13 @@ static int bareudp_xmit_skb(struct sk_buff *skb, struct net_device *dev,
if (!sock)
return -ESHUTDOWN;
- rt = ip_route_output_tunnel(skb, dev, bareudp->net, &saddr, info,
- IPPROTO_UDP, use_cache);
+ sport = udp_flow_src_port(bareudp->net, skb,
+ bareudp->sport_min, USHRT_MAX,
+ true);
+ rt = udp_tunnel_dst_lookup(skb, dev, bareudp->net, 0, &saddr, &info->key,
+ sport, bareudp->port, key->tos,
+ use_cache ?
+ (struct dst_cache *)&info->dst_cache : NULL);
if (IS_ERR(rt))
return PTR_ERR(rt);
@@ -315,9 +320,6 @@ static int bareudp_xmit_skb(struct sk_buff *skb, struct net_device *dev,
skb_tunnel_check_pmtu(skb, &rt->dst,
BAREUDP_IPV4_HLEN + info->options_len, false);
- sport = udp_flow_src_port(bareudp->net, skb,
- bareudp->sport_min, USHRT_MAX,
- true);
tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
ttl = key->ttl;
df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0;
@@ -369,17 +371,19 @@ static int bareudp6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
if (!sock)
return -ESHUTDOWN;
- dst = ip6_dst_lookup_tunnel(skb, dev, bareudp->net, sock, &saddr, info,
- IPPROTO_UDP, use_cache);
+ sport = udp_flow_src_port(bareudp->net, skb,
+ bareudp->sport_min, USHRT_MAX,
+ true);
+ dst = udp_tunnel6_dst_lookup(skb, dev, bareudp->net, sock, 0, &saddr,
+ key, sport, bareudp->port, key->tos,
+ use_cache ?
+ (struct dst_cache *) &info->dst_cache : NULL);
if (IS_ERR(dst))
return PTR_ERR(dst);
skb_tunnel_check_pmtu(skb, dst, BAREUDP_IPV6_HLEN + info->options_len,
false);
- sport = udp_flow_src_port(bareudp->net, skb,
- bareudp->sport_min, USHRT_MAX,
- true);
prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
ttl = key->ttl;
@@ -476,15 +480,21 @@ static int bareudp_fill_metadata_dst(struct net_device *dev,
struct ip_tunnel_info *info = skb_tunnel_info(skb);
struct bareudp_dev *bareudp = netdev_priv(dev);
bool use_cache;
+ __be16 sport;
use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ sport = udp_flow_src_port(bareudp->net, skb,
+ bareudp->sport_min, USHRT_MAX,
+ true);
if (!ipv6_mod_enabled() || ip_tunnel_info_af(info) == AF_INET) {
struct rtable *rt;
__be32 saddr;
- rt = ip_route_output_tunnel(skb, dev, bareudp->net, &saddr,
- info, IPPROTO_UDP, use_cache);
+ rt = udp_tunnel_dst_lookup(skb, dev, bareudp->net, 0, &saddr,
+ &info->key, sport, bareudp->port,
+ info->key.tos,
+ use_cache ? &info->dst_cache : NULL);
if (IS_ERR(rt))
return PTR_ERR(rt);
@@ -495,9 +505,10 @@ static int bareudp_fill_metadata_dst(struct net_device *dev,
struct in6_addr saddr;
struct socket *sock = rcu_dereference(bareudp->sock);
- dst = ip6_dst_lookup_tunnel(skb, dev, bareudp->net, sock,
- &saddr, info, IPPROTO_UDP,
- use_cache);
+ dst = udp_tunnel6_dst_lookup(skb, dev, bareudp->net, sock,
+ 0, &saddr, &info->key,
+ sport, bareudp->port, info->key.tos,
+ use_cache ? &info->dst_cache : NULL);
if (IS_ERR(dst))
return PTR_ERR(dst);
@@ -507,9 +518,7 @@ static int bareudp_fill_metadata_dst(struct net_device *dev,
return -EINVAL;
}
- info->key.tp_src = udp_flow_src_port(bareudp->net, skb,
- bareudp->sport_min,
- USHRT_MAX, true);
+ info->key.tp_src = sport;
info->key.tp_dst = bareudp->port;
return 0;
}
diff --git a/drivers/net/bonding/bond_netlink.c b/drivers/net/bonding/bond_netlink.c
index 27cbe148f0db..cfa74cf8bb1a 100644
--- a/drivers/net/bonding/bond_netlink.c
+++ b/drivers/net/bonding/bond_netlink.c
@@ -85,7 +85,7 @@ nla_put_failure:
}
/* Limit the max delay range to 300s */
-static struct netlink_range_validation delay_range = {
+static const struct netlink_range_validation delay_range = {
.max = 300000,
};
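A minimal sketch of the const netlink_range_validation pattern used in the bond_netlink.c hunk above, with a hypothetical attribute id and the same 300 s limit; NLA_POLICY_FULL_RANGE() from <net/netlink.h> attaches the range to the policy entry:

/* Illustrative only: a const range wired into an nla_policy entry. */
#include <net/netlink.h>

enum { DEMO_ATTR_UNSPEC, DEMO_ATTR_DELAY, __DEMO_ATTR_MAX };

static const struct netlink_range_validation demo_delay_range = {
	.max = 300000,	/* 300 s, in milliseconds */
};

static const struct nla_policy demo_policy[__DEMO_ATTR_MAX] = {
	/* values outside [0, 300000] are rejected during attribute validation */
	[DEMO_ATTR_DELAY] = NLA_POLICY_FULL_RANGE(NLA_U32, &demo_delay_range),
};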
diff --git a/drivers/net/can/Kconfig b/drivers/net/can/Kconfig
index f8cde9f9f554..eb410714afc2 100644
--- a/drivers/net/can/Kconfig
+++ b/drivers/net/can/Kconfig
@@ -89,6 +89,7 @@ config CAN_RX_OFFLOAD
config CAN_AT91
tristate "Atmel AT91 onchip CAN controller"
depends on (ARCH_AT91 || COMPILE_TEST) && HAS_IOMEM
+ select CAN_RX_OFFLOAD
help
This is a driver for the SoC CAN controller in Atmel's AT91SAM9263
and AT91SAM9X5 processors.
diff --git a/drivers/net/can/at91_can.c b/drivers/net/can/at91_can.c
index 4621266851ed..11f434d708b3 100644
--- a/drivers/net/can/at91_can.c
+++ b/drivers/net/can/at91_can.c
@@ -3,9 +3,10 @@
* at91_can.c - CAN network driver for AT91 SoC CAN controller
*
* (C) 2007 by Hans J. Koch <hjk@hansjkoch.de>
- * (C) 2008, 2009, 2010, 2011 by Marc Kleine-Budde <kernel@pengutronix.de>
+ * (C) 2008, 2009, 2010, 2011, 2023 by Marc Kleine-Budde <kernel@pengutronix.de>
*/
+#include <linux/bitfield.h>
#include <linux/clk.h>
#include <linux/errno.h>
#include <linux/ethtool.h>
@@ -15,6 +16,7 @@
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/of.h>
+#include <linux/phy/phy.h>
#include <linux/platform_device.h>
#include <linux/rtnetlink.h>
#include <linux/skbuff.h>
@@ -24,90 +26,115 @@
#include <linux/can/dev.h>
#include <linux/can/error.h>
+#include <linux/can/rx-offload.h>
-#define AT91_MB_MASK(i) ((1 << (i)) - 1)
+#define AT91_MB_MASK(i) ((1 << (i)) - 1)
/* Common registers */
enum at91_reg {
- AT91_MR = 0x000,
- AT91_IER = 0x004,
- AT91_IDR = 0x008,
- AT91_IMR = 0x00C,
- AT91_SR = 0x010,
- AT91_BR = 0x014,
- AT91_TIM = 0x018,
- AT91_TIMESTP = 0x01C,
- AT91_ECR = 0x020,
- AT91_TCR = 0x024,
- AT91_ACR = 0x028,
+ AT91_MR = 0x000,
+ AT91_IER = 0x004,
+ AT91_IDR = 0x008,
+ AT91_IMR = 0x00C,
+ AT91_SR = 0x010,
+ AT91_BR = 0x014,
+ AT91_TIM = 0x018,
+ AT91_TIMESTP = 0x01C,
+ AT91_ECR = 0x020,
+ AT91_TCR = 0x024,
+ AT91_ACR = 0x028,
};
/* Mailbox registers (0 <= i <= 15) */
-#define AT91_MMR(i) ((enum at91_reg)(0x200 + ((i) * 0x20)))
-#define AT91_MAM(i) ((enum at91_reg)(0x204 + ((i) * 0x20)))
-#define AT91_MID(i) ((enum at91_reg)(0x208 + ((i) * 0x20)))
-#define AT91_MFID(i) ((enum at91_reg)(0x20C + ((i) * 0x20)))
-#define AT91_MSR(i) ((enum at91_reg)(0x210 + ((i) * 0x20)))
-#define AT91_MDL(i) ((enum at91_reg)(0x214 + ((i) * 0x20)))
-#define AT91_MDH(i) ((enum at91_reg)(0x218 + ((i) * 0x20)))
-#define AT91_MCR(i) ((enum at91_reg)(0x21C + ((i) * 0x20)))
+#define AT91_MMR(i) ((enum at91_reg)(0x200 + ((i) * 0x20)))
+#define AT91_MAM(i) ((enum at91_reg)(0x204 + ((i) * 0x20)))
+#define AT91_MID(i) ((enum at91_reg)(0x208 + ((i) * 0x20)))
+#define AT91_MFID(i) ((enum at91_reg)(0x20C + ((i) * 0x20)))
+#define AT91_MSR(i) ((enum at91_reg)(0x210 + ((i) * 0x20)))
+#define AT91_MDL(i) ((enum at91_reg)(0x214 + ((i) * 0x20)))
+#define AT91_MDH(i) ((enum at91_reg)(0x218 + ((i) * 0x20)))
+#define AT91_MCR(i) ((enum at91_reg)(0x21C + ((i) * 0x20)))
/* Register bits */
-#define AT91_MR_CANEN BIT(0)
-#define AT91_MR_LPM BIT(1)
-#define AT91_MR_ABM BIT(2)
-#define AT91_MR_OVL BIT(3)
-#define AT91_MR_TEOF BIT(4)
-#define AT91_MR_TTM BIT(5)
-#define AT91_MR_TIMFRZ BIT(6)
-#define AT91_MR_DRPT BIT(7)
-
-#define AT91_SR_RBSY BIT(29)
-
-#define AT91_MMR_PRIO_SHIFT (16)
-
-#define AT91_MID_MIDE BIT(29)
-
-#define AT91_MSR_MRTR BIT(20)
-#define AT91_MSR_MABT BIT(22)
-#define AT91_MSR_MRDY BIT(23)
-#define AT91_MSR_MMI BIT(24)
-
-#define AT91_MCR_MRTR BIT(20)
-#define AT91_MCR_MTCR BIT(23)
+#define AT91_MR_CANEN BIT(0)
+#define AT91_MR_LPM BIT(1)
+#define AT91_MR_ABM BIT(2)
+#define AT91_MR_OVL BIT(3)
+#define AT91_MR_TEOF BIT(4)
+#define AT91_MR_TTM BIT(5)
+#define AT91_MR_TIMFRZ BIT(6)
+#define AT91_MR_DRPT BIT(7)
+
+#define AT91_SR_RBSY BIT(29)
+#define AT91_SR_TBSY BIT(30)
+#define AT91_SR_OVLSY BIT(31)
+
+#define AT91_BR_PHASE2_MASK GENMASK(2, 0)
+#define AT91_BR_PHASE1_MASK GENMASK(6, 4)
+#define AT91_BR_PROPAG_MASK GENMASK(10, 8)
+#define AT91_BR_SJW_MASK GENMASK(13, 12)
+#define AT91_BR_BRP_MASK GENMASK(22, 16)
+#define AT91_BR_SMP BIT(24)
+
+#define AT91_TIM_TIMER_MASK GENMASK(15, 0)
+
+#define AT91_ECR_REC_MASK GENMASK(8, 0)
+#define AT91_ECR_TEC_MASK GENMASK(23, 16)
+
+#define AT91_TCR_TIMRST BIT(31)
+
+#define AT91_MMR_MTIMEMARK_MASK GENMASK(15, 0)
+#define AT91_MMR_PRIOR_MASK GENMASK(19, 16)
+#define AT91_MMR_MOT_MASK GENMASK(26, 24)
+
+#define AT91_MID_MIDVB_MASK GENMASK(17, 0)
+#define AT91_MID_MIDVA_MASK GENMASK(28, 18)
+#define AT91_MID_MIDE BIT(29)
+
+#define AT91_MSR_MTIMESTAMP_MASK GENMASK(15, 0)
+#define AT91_MSR_MDLC_MASK GENMASK(19, 16)
+#define AT91_MSR_MRTR BIT(20)
+#define AT91_MSR_MABT BIT(22)
+#define AT91_MSR_MRDY BIT(23)
+#define AT91_MSR_MMI BIT(24)
+
+#define AT91_MCR_MDLC_MASK GENMASK(19, 16)
+#define AT91_MCR_MRTR BIT(20)
+#define AT91_MCR_MACR BIT(22)
+#define AT91_MCR_MTCR BIT(23)
/* Mailbox Modes */
enum at91_mb_mode {
- AT91_MB_MODE_DISABLED = 0,
- AT91_MB_MODE_RX = 1,
- AT91_MB_MODE_RX_OVRWR = 2,
- AT91_MB_MODE_TX = 3,
- AT91_MB_MODE_CONSUMER = 4,
- AT91_MB_MODE_PRODUCER = 5,
+ AT91_MB_MODE_DISABLED = 0,
+ AT91_MB_MODE_RX = 1,
+ AT91_MB_MODE_RX_OVRWR = 2,
+ AT91_MB_MODE_TX = 3,
+ AT91_MB_MODE_CONSUMER = 4,
+ AT91_MB_MODE_PRODUCER = 5,
};
/* Interrupt mask bits */
-#define AT91_IRQ_ERRA BIT(16)
-#define AT91_IRQ_WARN BIT(17)
-#define AT91_IRQ_ERRP BIT(18)
-#define AT91_IRQ_BOFF BIT(19)
-#define AT91_IRQ_SLEEP BIT(20)
-#define AT91_IRQ_WAKEUP BIT(21)
-#define AT91_IRQ_TOVF BIT(22)
-#define AT91_IRQ_TSTP BIT(23)
-#define AT91_IRQ_CERR BIT(24)
-#define AT91_IRQ_SERR BIT(25)
-#define AT91_IRQ_AERR BIT(26)
-#define AT91_IRQ_FERR BIT(27)
-#define AT91_IRQ_BERR BIT(28)
-
-#define AT91_IRQ_ERR_ALL (0x1fff0000)
-#define AT91_IRQ_ERR_FRAME (AT91_IRQ_CERR | AT91_IRQ_SERR | \
- AT91_IRQ_AERR | AT91_IRQ_FERR | AT91_IRQ_BERR)
-#define AT91_IRQ_ERR_LINE (AT91_IRQ_ERRA | AT91_IRQ_WARN | \
- AT91_IRQ_ERRP | AT91_IRQ_BOFF)
-
-#define AT91_IRQ_ALL (0x1fffffff)
+#define AT91_IRQ_ERRA BIT(16)
+#define AT91_IRQ_WARN BIT(17)
+#define AT91_IRQ_ERRP BIT(18)
+#define AT91_IRQ_BOFF BIT(19)
+#define AT91_IRQ_SLEEP BIT(20)
+#define AT91_IRQ_WAKEUP BIT(21)
+#define AT91_IRQ_TOVF BIT(22)
+#define AT91_IRQ_TSTP BIT(23)
+#define AT91_IRQ_CERR BIT(24)
+#define AT91_IRQ_SERR BIT(25)
+#define AT91_IRQ_AERR BIT(26)
+#define AT91_IRQ_FERR BIT(27)
+#define AT91_IRQ_BERR BIT(28)
+
+#define AT91_IRQ_ERR_ALL (0x1fff0000)
+#define AT91_IRQ_ERR_FRAME (AT91_IRQ_CERR | AT91_IRQ_SERR | \
+ AT91_IRQ_AERR | AT91_IRQ_FERR | AT91_IRQ_BERR)
+#define AT91_IRQ_ERR_LINE (AT91_IRQ_ERRA | AT91_IRQ_WARN | \
+ AT91_IRQ_ERRP | AT91_IRQ_BOFF)
+
+#define AT91_IRQ_ALL (0x1fffffff)
enum at91_devtype {
AT91_DEVTYPE_SAM9263,
@@ -116,7 +143,6 @@ enum at91_devtype {
struct at91_devtype_data {
unsigned int rx_first;
- unsigned int rx_split;
unsigned int rx_last;
unsigned int tx_shift;
enum at91_devtype type;
@@ -124,14 +150,13 @@ struct at91_devtype_data {
struct at91_priv {
struct can_priv can; /* must be the first member! */
- struct napi_struct napi;
+ struct can_rx_offload offload;
+ struct phy *transceiver;
void __iomem *reg_base;
- u32 reg_sr;
- unsigned int tx_next;
- unsigned int tx_echo;
- unsigned int rx_next;
+ unsigned int tx_head;
+ unsigned int tx_tail;
struct at91_devtype_data devtype_data;
struct clk *clk;
@@ -140,9 +165,13 @@ struct at91_priv {
canid_t mb0_id;
};
+static inline struct at91_priv *rx_offload_to_priv(struct can_rx_offload *offload)
+{
+ return container_of(offload, struct at91_priv, offload);
+}
+
static const struct at91_devtype_data at91_at91sam9263_data = {
.rx_first = 1,
- .rx_split = 8,
.rx_last = 11,
.tx_shift = 2,
.type = AT91_DEVTYPE_SAM9263,
@@ -150,7 +179,6 @@ static const struct at91_devtype_data at91_at91sam9263_data = {
static const struct at91_devtype_data at91_at91sam9x5_data = {
.rx_first = 0,
- .rx_split = 4,
.rx_last = 5,
.tx_shift = 1,
.type = AT91_DEVTYPE_SAM9X5,
@@ -187,27 +215,6 @@ static inline unsigned int get_mb_rx_last(const struct at91_priv *priv)
return priv->devtype_data.rx_last;
}
-static inline unsigned int get_mb_rx_split(const struct at91_priv *priv)
-{
- return priv->devtype_data.rx_split;
-}
-
-static inline unsigned int get_mb_rx_num(const struct at91_priv *priv)
-{
- return get_mb_rx_last(priv) - get_mb_rx_first(priv) + 1;
-}
-
-static inline unsigned int get_mb_rx_low_last(const struct at91_priv *priv)
-{
- return get_mb_rx_split(priv) - 1;
-}
-
-static inline unsigned int get_mb_rx_low_mask(const struct at91_priv *priv)
-{
- return AT91_MB_MASK(get_mb_rx_split(priv)) &
- ~AT91_MB_MASK(get_mb_rx_first(priv));
-}
-
static inline unsigned int get_mb_tx_shift(const struct at91_priv *priv)
{
return priv->devtype_data.tx_shift;
@@ -228,24 +235,24 @@ static inline unsigned int get_mb_tx_last(const struct at91_priv *priv)
return get_mb_tx_first(priv) + get_mb_tx_num(priv) - 1;
}
-static inline unsigned int get_next_prio_shift(const struct at91_priv *priv)
+static inline unsigned int get_head_prio_shift(const struct at91_priv *priv)
{
return get_mb_tx_shift(priv);
}
-static inline unsigned int get_next_prio_mask(const struct at91_priv *priv)
+static inline unsigned int get_head_prio_mask(const struct at91_priv *priv)
{
return 0xf << get_mb_tx_shift(priv);
}
-static inline unsigned int get_next_mb_mask(const struct at91_priv *priv)
+static inline unsigned int get_head_mb_mask(const struct at91_priv *priv)
{
return AT91_MB_MASK(get_mb_tx_shift(priv));
}
-static inline unsigned int get_next_mask(const struct at91_priv *priv)
+static inline unsigned int get_head_mask(const struct at91_priv *priv)
{
- return get_next_mb_mask(priv) | get_next_prio_mask(priv);
+ return get_head_mb_mask(priv) | get_head_prio_mask(priv);
}
static inline unsigned int get_irq_mb_rx(const struct at91_priv *priv)
@@ -260,19 +267,19 @@ static inline unsigned int get_irq_mb_tx(const struct at91_priv *priv)
~AT91_MB_MASK(get_mb_tx_first(priv));
}
-static inline unsigned int get_tx_next_mb(const struct at91_priv *priv)
+static inline unsigned int get_tx_head_mb(const struct at91_priv *priv)
{
- return (priv->tx_next & get_next_mb_mask(priv)) + get_mb_tx_first(priv);
+ return (priv->tx_head & get_head_mb_mask(priv)) + get_mb_tx_first(priv);
}
-static inline unsigned int get_tx_next_prio(const struct at91_priv *priv)
+static inline unsigned int get_tx_head_prio(const struct at91_priv *priv)
{
- return (priv->tx_next >> get_next_prio_shift(priv)) & 0xf;
+ return (priv->tx_head >> get_head_prio_shift(priv)) & 0xf;
}
-static inline unsigned int get_tx_echo_mb(const struct at91_priv *priv)
+static inline unsigned int get_tx_tail_mb(const struct at91_priv *priv)
{
- return (priv->tx_echo & get_next_mb_mask(priv)) + get_mb_tx_first(priv);
+ return (priv->tx_tail & get_head_mb_mask(priv)) + get_mb_tx_first(priv);
}
static inline u32 at91_read(const struct at91_priv *priv, enum at91_reg reg)
@@ -288,9 +295,12 @@ static inline void at91_write(const struct at91_priv *priv, enum at91_reg reg,
static inline void set_mb_mode_prio(const struct at91_priv *priv,
unsigned int mb, enum at91_mb_mode mode,
- int prio)
+ u8 prio)
{
- at91_write(priv, AT91_MMR(mb), (mode << 24) | (prio << 16));
+ const u32 reg_mmr = FIELD_PREP(AT91_MMR_MOT_MASK, mode) |
+ FIELD_PREP(AT91_MMR_PRIOR_MASK, prio);
+
+ at91_write(priv, AT91_MMR(mb), reg_mmr);
}
static inline void set_mb_mode(const struct at91_priv *priv, unsigned int mb,
@@ -304,9 +314,10 @@ static inline u32 at91_can_id_to_reg_mid(canid_t can_id)
u32 reg_mid;
if (can_id & CAN_EFF_FLAG)
- reg_mid = (can_id & CAN_EFF_MASK) | AT91_MID_MIDE;
+ reg_mid = FIELD_PREP(AT91_MID_MIDVA_MASK | AT91_MID_MIDVB_MASK, can_id) |
+ AT91_MID_MIDE;
else
- reg_mid = (can_id & CAN_SFF_MASK) << 18;
+ reg_mid = FIELD_PREP(AT91_MID_MIDVA_MASK, can_id);
return reg_mid;
}
@@ -318,8 +329,8 @@ static void at91_setup_mailboxes(struct net_device *dev)
u32 reg_mid;
/* Due to a chip bug (errata 50.2.6.3 & 50.3.5.3) the first
- * mailbox is disabled. The next 11 mailboxes are used as a
- * reception FIFO. The last mailbox is configured with
+ * mailbox is disabled. The next mailboxes are used as a
+ * reception FIFO. The last of the RX mailboxes is configured with
* overwrite option. The overwrite flag indicates a FIFO
* overflow.
*/
@@ -340,27 +351,30 @@ static void at91_setup_mailboxes(struct net_device *dev)
at91_write(priv, AT91_MID(i), AT91_MID_MIDE);
}
- /* The last 4 mailboxes are used for transmitting. */
+ /* The last mailboxes are used for transmitting. */
for (i = get_mb_tx_first(priv); i <= get_mb_tx_last(priv); i++)
set_mb_mode_prio(priv, i, AT91_MB_MODE_TX, 0);
- /* Reset tx and rx helper pointers */
- priv->tx_next = priv->tx_echo = 0;
- priv->rx_next = get_mb_rx_first(priv);
+ /* Reset tx helper pointers */
+ priv->tx_head = priv->tx_tail = 0;
}
static int at91_set_bittiming(struct net_device *dev)
{
const struct at91_priv *priv = netdev_priv(dev);
const struct can_bittiming *bt = &priv->can.bittiming;
- u32 reg_br;
+ u32 reg_br = 0;
+
+ if (priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES)
+ reg_br |= AT91_BR_SMP;
- reg_br = ((priv->can.ctrlmode & CAN_CTRLMODE_3_SAMPLES) ? 1 << 24 : 0) |
- ((bt->brp - 1) << 16) | ((bt->sjw - 1) << 12) |
- ((bt->prop_seg - 1) << 8) | ((bt->phase_seg1 - 1) << 4) |
- ((bt->phase_seg2 - 1) << 0);
+ reg_br |= FIELD_PREP(AT91_BR_BRP_MASK, bt->brp - 1) |
+ FIELD_PREP(AT91_BR_SJW_MASK, bt->sjw - 1) |
+ FIELD_PREP(AT91_BR_PROPAG_MASK, bt->prop_seg - 1) |
+ FIELD_PREP(AT91_BR_PHASE1_MASK, bt->phase_seg1 - 1) |
+ FIELD_PREP(AT91_BR_PHASE2_MASK, bt->phase_seg2 - 1);
- netdev_info(dev, "writing AT91_BR: 0x%08x\n", reg_br);
+ netdev_dbg(dev, "writing AT91_BR: 0x%08x\n", reg_br);
at91_write(priv, AT91_BR, reg_br);
@@ -373,8 +387,8 @@ static int at91_get_berr_counter(const struct net_device *dev,
const struct at91_priv *priv = netdev_priv(dev);
u32 reg_ecr = at91_read(priv, AT91_ECR);
- bec->rxerr = reg_ecr & 0xff;
- bec->txerr = reg_ecr >> 16;
+ bec->rxerr = FIELD_GET(AT91_ECR_REC_MASK, reg_ecr);
+ bec->txerr = FIELD_GET(AT91_ECR_TEC_MASK, reg_ecr);
return 0;
}
@@ -403,9 +417,13 @@ static void at91_chip_start(struct net_device *dev)
priv->can.state = CAN_STATE_ERROR_ACTIVE;
+ /* Dummy read to clear latched line error interrupts on
+ * sam9x5 and newer SoCs.
+ */
+ at91_read(priv, AT91_SR);
+
/* Enable interrupts */
- reg_ier = get_irq_mb_rx(priv) | AT91_IRQ_ERRP | AT91_IRQ_ERR_FRAME;
- at91_write(priv, AT91_IDR, AT91_IRQ_ALL);
+ reg_ier = get_irq_mb_rx(priv) | AT91_IRQ_ERR_LINE | AT91_IRQ_ERR_FRAME;
at91_write(priv, AT91_IER, reg_ier);
}
@@ -414,6 +432,11 @@ static void at91_chip_stop(struct net_device *dev, enum can_state state)
struct at91_priv *priv = netdev_priv(dev);
u32 reg_mr;
+ /* Abort any pending TX requests. However this doesn't seem to
+ * work in case of bus-off on sama5d3.
+ */
+ at91_write(priv, AT91_ACR, get_irq_mb_tx(priv));
+
/* disable interrupts */
at91_write(priv, AT91_IDR, AT91_IRQ_ALL);
@@ -437,11 +460,11 @@ static void at91_chip_stop(struct net_device *dev, enum can_state state)
* stop sending, waiting for all messages to be delivered, then start
* again with mailbox AT91_MB_TX_FIRST prio 0.
*
- * We use the priv->tx_next as counter for the next transmission
+ * We use the priv->tx_head as counter for the next transmission
* mailbox, but without the offset AT91_MB_TX_FIRST. The lower bits
* encode the mailbox number, the upper 4 bits the mailbox priority:
*
- * priv->tx_next = (prio << get_next_prio_shift(priv)) |
+ * priv->tx_head = (prio << get_next_prio_shift(priv)) |
* (mb - get_mb_tx_first(priv));
*
*/
@@ -455,8 +478,8 @@ static netdev_tx_t at91_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (can_dev_dropped_skb(dev, skb))
return NETDEV_TX_OK;
- mb = get_tx_next_mb(priv);
- prio = get_tx_next_prio(priv);
+ mb = get_tx_head_mb(priv);
+ prio = get_tx_head_prio(priv);
if (unlikely(!(at91_read(priv, AT91_MSR(mb)) & AT91_MSR_MRDY))) {
netif_stop_queue(dev);
@@ -465,8 +488,12 @@ static netdev_tx_t at91_start_xmit(struct sk_buff *skb, struct net_device *dev)
return NETDEV_TX_BUSY;
}
reg_mid = at91_can_id_to_reg_mid(cf->can_id);
- reg_mcr = ((cf->can_id & CAN_RTR_FLAG) ? AT91_MCR_MRTR : 0) |
- (cf->len << 16) | AT91_MCR_MTCR;
+
+ reg_mcr = FIELD_PREP(AT91_MCR_MDLC_MASK, cf->len) |
+ AT91_MCR_MTCR;
+
+ if (cf->can_id & CAN_RTR_FLAG)
+ reg_mcr |= AT91_MCR_MRTR;
/* disable MB while writing ID (see datasheet) */
set_mb_mode(priv, mb, AT91_MB_MODE_DISABLED);
@@ -484,15 +511,15 @@ static netdev_tx_t at91_start_xmit(struct sk_buff *skb, struct net_device *dev)
/* we have to stop the queue and deliver all messages in case
* of a prio+mb counter wrap around. This is the case if
- * tx_next buffer prio and mailbox equals 0.
+ * tx_head buffer prio and mailbox equals 0.
*
* also stop the queue if next buffer is still in use
* (== not ready)
*/
- priv->tx_next++;
- if (!(at91_read(priv, AT91_MSR(get_tx_next_mb(priv))) &
+ priv->tx_head++;
+ if (!(at91_read(priv, AT91_MSR(get_tx_head_mb(priv))) &
AT91_MSR_MRDY) ||
- (priv->tx_next & get_next_mask(priv)) == 0)
+ (priv->tx_head & get_head_mask(priv)) == 0)
netif_stop_queue(dev);
/* Enable interrupt for this mailbox */
@@ -501,32 +528,20 @@ static netdev_tx_t at91_start_xmit(struct sk_buff *skb, struct net_device *dev)
return NETDEV_TX_OK;
}
-/**
- * at91_activate_rx_low - activate lower rx mailboxes
- * @priv: a91 context
- *
- * Reenables the lower mailboxes for reception of new CAN messages
- */
-static inline void at91_activate_rx_low(const struct at91_priv *priv)
+static inline u32 at91_get_timestamp(const struct at91_priv *priv)
{
- u32 mask = get_mb_rx_low_mask(priv);
-
- at91_write(priv, AT91_TCR, mask);
+ return at91_read(priv, AT91_TIM);
}
-/**
- * at91_activate_rx_mb - reactive single rx mailbox
- * @priv: a91 context
- * @mb: mailbox to reactivate
- *
- * Reenables given mailbox for reception of new CAN messages
- */
-static inline void at91_activate_rx_mb(const struct at91_priv *priv,
- unsigned int mb)
+static inline struct sk_buff *
+at91_alloc_can_err_skb(struct net_device *dev,
+ struct can_frame **cf, u32 *timestamp)
{
- u32 mask = 1 << mb;
+ const struct at91_priv *priv = netdev_priv(dev);
- at91_write(priv, AT91_TCR, mask);
+ *timestamp = at91_get_timestamp(priv);
+
+ return alloc_can_err_skb(dev, cf);
}
/**
@@ -537,45 +552,71 @@ static void at91_rx_overflow_err(struct net_device *dev)
{
struct net_device_stats *stats = &dev->stats;
struct sk_buff *skb;
+ struct at91_priv *priv = netdev_priv(dev);
struct can_frame *cf;
+ u32 timestamp;
+ int err;
netdev_dbg(dev, "RX buffer overflow\n");
stats->rx_over_errors++;
stats->rx_errors++;
- skb = alloc_can_err_skb(dev, &cf);
+ skb = at91_alloc_can_err_skb(dev, &cf, &timestamp);
if (unlikely(!skb))
return;
cf->can_id |= CAN_ERR_CRTL;
cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
- netif_receive_skb(skb);
+ err = can_rx_offload_queue_timestamp(&priv->offload, skb, timestamp);
+ if (err)
+ stats->rx_fifo_errors++;
}
/**
- * at91_read_mb - read CAN msg from mailbox (lowlevel impl)
- * @dev: net device
+ * at91_mailbox_read - read CAN msg from mailbox
+ * @offload: rx-offload
* @mb: mailbox number to read from
- * @cf: can frame where to store message
+ * @timestamp: pointer to 32 bit timestamp
+ * @drop: true indicated mailbox to mark as read and drop frame
*
- * Reads a CAN message from the given mailbox and stores data into
- * given can frame. "mb" and "cf" must be valid.
+ * Reads a CAN message from the given mailbox if not empty.
*/
-static void at91_read_mb(struct net_device *dev, unsigned int mb,
- struct can_frame *cf)
+static struct sk_buff *at91_mailbox_read(struct can_rx_offload *offload,
+ unsigned int mb, u32 *timestamp,
+ bool drop)
{
- const struct at91_priv *priv = netdev_priv(dev);
+ const struct at91_priv *priv = rx_offload_to_priv(offload);
+ struct can_frame *cf;
+ struct sk_buff *skb;
u32 reg_msr, reg_mid;
+ reg_msr = at91_read(priv, AT91_MSR(mb));
+ if (!(reg_msr & AT91_MSR_MRDY))
+ return NULL;
+
+ if (unlikely(drop)) {
+ skb = ERR_PTR(-ENOBUFS);
+ goto mark_as_read;
+ }
+
+ skb = alloc_can_skb(offload->dev, &cf);
+ if (unlikely(!skb)) {
+ skb = ERR_PTR(-ENOMEM);
+ goto mark_as_read;
+ }
+
reg_mid = at91_read(priv, AT91_MID(mb));
if (reg_mid & AT91_MID_MIDE)
- cf->can_id = ((reg_mid >> 0) & CAN_EFF_MASK) | CAN_EFF_FLAG;
+ cf->can_id = FIELD_GET(AT91_MID_MIDVA_MASK | AT91_MID_MIDVB_MASK, reg_mid) |
+ CAN_EFF_FLAG;
else
- cf->can_id = (reg_mid >> 18) & CAN_SFF_MASK;
+ cf->can_id = FIELD_GET(AT91_MID_MIDVA_MASK, reg_mid);
- reg_msr = at91_read(priv, AT91_MSR(mb));
- cf->len = can_cc_dlc2len((reg_msr >> 16) & 0xf);
+ /* extend timestamp to full 32 bit */
+ *timestamp = FIELD_GET(AT91_MSR_MTIMESTAMP_MASK, reg_msr) << 16;
+
+ cf->len = can_cc_dlc2len(FIELD_GET(AT91_MSR_MDLC_MASK, reg_msr));
if (reg_msr & AT91_MSR_MRTR) {
cf->can_id |= CAN_RTR_FLAG;
@@ -588,234 +629,21 @@ static void at91_read_mb(struct net_device *dev, unsigned int mb,
at91_write(priv, AT91_MID(mb), AT91_MID_MIDE);
if (unlikely(mb == get_mb_rx_last(priv) && reg_msr & AT91_MSR_MMI))
- at91_rx_overflow_err(dev);
-}
-
-/**
- * at91_read_msg - read CAN message from mailbox
- * @dev: net device
- * @mb: mail box to read from
- *
- * Reads a CAN message from given mailbox, and put into linux network
- * RX queue, does all housekeeping chores (stats, ...)
- */
-static void at91_read_msg(struct net_device *dev, unsigned int mb)
-{
- struct net_device_stats *stats = &dev->stats;
- struct can_frame *cf;
- struct sk_buff *skb;
-
- skb = alloc_can_skb(dev, &cf);
- if (unlikely(!skb)) {
- stats->rx_dropped++;
- return;
- }
-
- at91_read_mb(dev, mb, cf);
-
- stats->rx_packets++;
- if (!(cf->can_id & CAN_RTR_FLAG))
- stats->rx_bytes += cf->len;
-
- netif_receive_skb(skb);
-}
-
-/**
- * at91_poll_rx - read multiple CAN messages from mailboxes
- * @dev: net device
- * @quota: max number of pkgs we're allowed to receive
- *
- * Theory of Operation:
- *
- * About 3/4 of the mailboxes (get_mb_rx_first()...get_mb_rx_last())
- * on the chip are reserved for RX. We split them into 2 groups. The
- * lower group ranges from get_mb_rx_first() to get_mb_rx_low_last().
- *
- * Like it or not, but the chip always saves a received CAN message
- * into the first free mailbox it finds (starting with the
- * lowest). This makes it very difficult to read the messages in the
- * right order from the chip. This is how we work around that problem:
- *
- * The first message goes into mb nr. 1 and issues an interrupt. All
- * rx ints are disabled in the interrupt handler and a napi poll is
- * scheduled. We read the mailbox, but do _not_ re-enable the mb (to
- * receive another message).
- *
- * lower mbxs upper
- * ____^______ __^__
- * / \ / \
- * +-+-+-+-+-+-+-+-++-+-+-+-+
- * | |x|x|x|x|x|x|x|| | | | |
- * +-+-+-+-+-+-+-+-++-+-+-+-+
- * 0 0 0 0 0 0 0 0 0 0 1 1 \ mail
- * 0 1 2 3 4 5 6 7 8 9 0 1 / box
- * ^
- * |
- * \
- * unused, due to chip bug
- *
- * The variable priv->rx_next points to the next mailbox to read a
- * message from. As long we're in the lower mailboxes we just read the
- * mailbox but not re-enable it.
- *
- * With completion of the last of the lower mailboxes, we re-enable the
- * whole first group, but continue to look for filled mailboxes in the
- * upper mailboxes. Imagine the second group like overflow mailboxes,
- * which takes CAN messages if the lower goup is full. While in the
- * upper group we re-enable the mailbox right after reading it. Giving
- * the chip more room to store messages.
- *
- * After finishing we look again in the lower group if we've still
- * quota.
- *
- */
-static int at91_poll_rx(struct net_device *dev, int quota)
-{
- struct at91_priv *priv = netdev_priv(dev);
- u32 reg_sr = at91_read(priv, AT91_SR);
- const unsigned long *addr = (unsigned long *)&reg_sr;
- unsigned int mb;
- int received = 0;
-
- if (priv->rx_next > get_mb_rx_low_last(priv) &&
- reg_sr & get_mb_rx_low_mask(priv))
- netdev_info(dev,
- "order of incoming frames cannot be guaranteed\n");
-
- again:
- for (mb = find_next_bit(addr, get_mb_tx_first(priv), priv->rx_next);
- mb < get_mb_tx_first(priv) && quota > 0;
- reg_sr = at91_read(priv, AT91_SR),
- mb = find_next_bit(addr, get_mb_tx_first(priv), ++priv->rx_next)) {
- at91_read_msg(dev, mb);
-
- /* reactivate mailboxes */
- if (mb == get_mb_rx_low_last(priv))
- /* all lower mailboxed, if just finished it */
- at91_activate_rx_low(priv);
- else if (mb > get_mb_rx_low_last(priv))
- /* only the mailbox we read */
- at91_activate_rx_mb(priv, mb);
-
- received++;
- quota--;
- }
-
- /* upper group completed, look again in lower */
- if (priv->rx_next > get_mb_rx_low_last(priv) &&
- mb > get_mb_rx_last(priv)) {
- priv->rx_next = get_mb_rx_first(priv);
- if (quota > 0)
- goto again;
- }
-
- return received;
-}
-
-static void at91_poll_err_frame(struct net_device *dev,
- struct can_frame *cf, u32 reg_sr)
-{
- struct at91_priv *priv = netdev_priv(dev);
-
- /* CRC error */
- if (reg_sr & AT91_IRQ_CERR) {
- netdev_dbg(dev, "CERR irq\n");
- dev->stats.rx_errors++;
- priv->can.can_stats.bus_error++;
- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
- }
-
- /* Stuffing Error */
- if (reg_sr & AT91_IRQ_SERR) {
- netdev_dbg(dev, "SERR irq\n");
- dev->stats.rx_errors++;
- priv->can.can_stats.bus_error++;
- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
- cf->data[2] |= CAN_ERR_PROT_STUFF;
- }
-
- /* Acknowledgement Error */
- if (reg_sr & AT91_IRQ_AERR) {
- netdev_dbg(dev, "AERR irq\n");
- dev->stats.tx_errors++;
- cf->can_id |= CAN_ERR_ACK;
- }
-
- /* Form error */
- if (reg_sr & AT91_IRQ_FERR) {
- netdev_dbg(dev, "FERR irq\n");
- dev->stats.rx_errors++;
- priv->can.can_stats.bus_error++;
- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
- cf->data[2] |= CAN_ERR_PROT_FORM;
- }
-
- /* Bit Error */
- if (reg_sr & AT91_IRQ_BERR) {
- netdev_dbg(dev, "BERR irq\n");
- dev->stats.tx_errors++;
- priv->can.can_stats.bus_error++;
- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
- cf->data[2] |= CAN_ERR_PROT_BIT;
- }
-}
-
-static int at91_poll_err(struct net_device *dev, int quota, u32 reg_sr)
-{
- struct sk_buff *skb;
- struct can_frame *cf;
-
- if (quota == 0)
- return 0;
-
- skb = alloc_can_err_skb(dev, &cf);
- if (unlikely(!skb))
- return 0;
-
- at91_poll_err_frame(dev, cf, reg_sr);
-
- netif_receive_skb(skb);
-
- return 1;
-}
-
-static int at91_poll(struct napi_struct *napi, int quota)
-{
- struct net_device *dev = napi->dev;
- const struct at91_priv *priv = netdev_priv(dev);
- u32 reg_sr = at91_read(priv, AT91_SR);
- int work_done = 0;
-
- if (reg_sr & get_irq_mb_rx(priv))
- work_done += at91_poll_rx(dev, quota - work_done);
-
- /* The error bits are clear on read,
- * so use saved value from irq handler.
- */
- reg_sr |= priv->reg_sr;
- if (reg_sr & AT91_IRQ_ERR_FRAME)
- work_done += at91_poll_err(dev, quota - work_done, reg_sr);
-
- if (work_done < quota) {
- /* enable IRQs for frame errors and all mailboxes >= rx_next */
- u32 reg_ier = AT91_IRQ_ERR_FRAME;
+ at91_rx_overflow_err(offload->dev);
- reg_ier |= get_irq_mb_rx(priv) & ~AT91_MB_MASK(priv->rx_next);
+ mark_as_read:
+ at91_write(priv, AT91_MCR(mb), AT91_MCR_MTCR);
- napi_complete_done(napi, work_done);
- at91_write(priv, AT91_IER, reg_ier);
- }
-
- return work_done;
+ return skb;
}
/* theory of operation:
*
- * priv->tx_echo holds the number of the oldest can_frame put for
+ * priv->tx_tail holds the number of the oldest can_frame put for
* transmission into the hardware, but not yet ACKed by the CAN tx
* complete IRQ.
*
- * We iterate from priv->tx_echo to priv->tx_next and check if the
+ * We iterate from priv->tx_tail to priv->tx_head and check if the
* packet has been transmitted, echo it back to the CAN framework. If
* we discover a not yet transmitted package, stop looking for more.
*
@@ -826,10 +654,8 @@ static void at91_irq_tx(struct net_device *dev, u32 reg_sr)
u32 reg_msr;
unsigned int mb;
- /* masking of reg_sr not needed, already done by at91_irq */
-
- for (/* nix */; (priv->tx_next - priv->tx_echo) > 0; priv->tx_echo++) {
- mb = get_tx_echo_mb(priv);
+ for (/* nix */; (priv->tx_head - priv->tx_tail) > 0; priv->tx_tail++) {
+ mb = get_tx_tail_mb(priv);
/* no event in mailbox? */
if (!(reg_sr & (1 << mb)))
@@ -844,236 +670,202 @@ static void at91_irq_tx(struct net_device *dev, u32 reg_sr)
* parked in the echo queue.
*/
reg_msr = at91_read(priv, AT91_MSR(mb));
- if (likely(reg_msr & AT91_MSR_MRDY &&
- ~reg_msr & AT91_MSR_MABT)) {
- /* _NOTE_: subtract AT91_MB_TX_FIRST offset from mb! */
- dev->stats.tx_bytes +=
- can_get_echo_skb(dev,
- mb - get_mb_tx_first(priv),
- NULL);
- dev->stats.tx_packets++;
- }
+ if (unlikely(!(reg_msr & AT91_MSR_MRDY &&
+ ~reg_msr & AT91_MSR_MABT)))
+ continue;
+
+ /* _NOTE_: subtract AT91_MB_TX_FIRST offset from mb! */
+ dev->stats.tx_bytes +=
+ can_get_echo_skb(dev, mb - get_mb_tx_first(priv), NULL);
+ dev->stats.tx_packets++;
}
/* restart queue if we don't have a wrap around but restart if
* we get a TX int for the last can frame directly before a
* wrap around.
*/
- if ((priv->tx_next & get_next_mask(priv)) != 0 ||
- (priv->tx_echo & get_next_mask(priv)) == 0)
+ if ((priv->tx_head & get_head_mask(priv)) != 0 ||
+ (priv->tx_tail & get_head_mask(priv)) == 0)
netif_wake_queue(dev);
}
-static void at91_irq_err_state(struct net_device *dev,
- struct can_frame *cf, enum can_state new_state)
+static void at91_irq_err_line(struct net_device *dev, const u32 reg_sr)
{
+ struct net_device_stats *stats = &dev->stats;
+ enum can_state new_state, rx_state, tx_state;
struct at91_priv *priv = netdev_priv(dev);
- u32 reg_idr = 0, reg_ier = 0;
struct can_berr_counter bec;
+ struct sk_buff *skb;
+ struct can_frame *cf;
+ u32 timestamp;
+ int err;
at91_get_berr_counter(dev, &bec);
+ can_state_get_by_berr_counter(dev, &bec, &tx_state, &rx_state);
- switch (priv->can.state) {
- case CAN_STATE_ERROR_ACTIVE:
- /* from: ERROR_ACTIVE
- * to : ERROR_WARNING, ERROR_PASSIVE, BUS_OFF
- * => : there was a warning int
- */
- if (new_state >= CAN_STATE_ERROR_WARNING &&
- new_state <= CAN_STATE_BUS_OFF) {
- netdev_dbg(dev, "Error Warning IRQ\n");
- priv->can.can_stats.error_warning++;
-
- cf->can_id |= CAN_ERR_CRTL;
- cf->data[1] = (bec.txerr > bec.rxerr) ?
- CAN_ERR_CRTL_TX_WARNING :
- CAN_ERR_CRTL_RX_WARNING;
- }
- fallthrough;
- case CAN_STATE_ERROR_WARNING:
- /* from: ERROR_ACTIVE, ERROR_WARNING
- * to : ERROR_PASSIVE, BUS_OFF
- * => : error passive int
- */
- if (new_state >= CAN_STATE_ERROR_PASSIVE &&
- new_state <= CAN_STATE_BUS_OFF) {
- netdev_dbg(dev, "Error Passive IRQ\n");
- priv->can.can_stats.error_passive++;
-
- cf->can_id |= CAN_ERR_CRTL;
- cf->data[1] = (bec.txerr > bec.rxerr) ?
- CAN_ERR_CRTL_TX_PASSIVE :
- CAN_ERR_CRTL_RX_PASSIVE;
- }
- break;
- case CAN_STATE_BUS_OFF:
- /* from: BUS_OFF
- * to : ERROR_ACTIVE, ERROR_WARNING, ERROR_PASSIVE
- */
- if (new_state <= CAN_STATE_ERROR_PASSIVE) {
- cf->can_id |= CAN_ERR_RESTARTED;
+ /* The chip automatically recovers from bus-off after 128
+ * occurrences of 11 consecutive recessive bits.
+ *
+ * After an auto-recovered bus-off, the error counters no
+ * longer reflect this fact. On the sam9263 the state bits in
+ * the SR register show the current state (based on the
+ * current error counters), while on sam9x5 and newer SoCs
+ * these bits are latched.
+ *
+ * Take any latched bus-off information from the SR register
+ * into account when calculating the CAN new state, to start
+ * the standard CAN bus off handling.
+ */
+ if (reg_sr & AT91_IRQ_BOFF)
+ rx_state = CAN_STATE_BUS_OFF;
- netdev_dbg(dev, "restarted\n");
- priv->can.can_stats.restarts++;
+ new_state = max(tx_state, rx_state);
- netif_carrier_on(dev);
- netif_wake_queue(dev);
- }
- break;
- default:
- break;
- }
+ /* state hasn't changed */
+ if (likely(new_state == priv->can.state))
+ return;
- /* process state changes depending on the new state */
- switch (new_state) {
- case CAN_STATE_ERROR_ACTIVE:
- /* actually we want to enable AT91_IRQ_WARN here, but
- * it screws up the system under certain
- * circumstances. so just enable AT91_IRQ_ERRP, thus
- * the "fallthrough"
- */
- netdev_dbg(dev, "Error Active\n");
- cf->can_id |= CAN_ERR_PROT;
- cf->data[2] = CAN_ERR_PROT_ACTIVE;
- fallthrough;
- case CAN_STATE_ERROR_WARNING:
- reg_idr = AT91_IRQ_ERRA | AT91_IRQ_WARN | AT91_IRQ_BOFF;
- reg_ier = AT91_IRQ_ERRP;
- break;
- case CAN_STATE_ERROR_PASSIVE:
- reg_idr = AT91_IRQ_ERRA | AT91_IRQ_WARN | AT91_IRQ_ERRP;
- reg_ier = AT91_IRQ_BOFF;
- break;
- case CAN_STATE_BUS_OFF:
- reg_idr = AT91_IRQ_ERRA | AT91_IRQ_ERRP |
- AT91_IRQ_WARN | AT91_IRQ_BOFF;
- reg_ier = 0;
+ /* The skb allocation might fail, but can_change_state()
+ * handles cf == NULL.
+ */
+ skb = at91_alloc_can_err_skb(dev, &cf, &timestamp);
+ can_change_state(dev, cf, tx_state, rx_state);
- cf->can_id |= CAN_ERR_BUSOFF;
+ if (new_state == CAN_STATE_BUS_OFF) {
+ at91_chip_stop(dev, CAN_STATE_BUS_OFF);
+ can_bus_off(dev);
+ }
- netdev_dbg(dev, "bus-off\n");
- netif_carrier_off(dev);
- priv->can.can_stats.bus_off++;
+ if (unlikely(!skb))
+ return;
- /* turn off chip, if restart is disabled */
- if (!priv->can.restart_ms) {
- at91_chip_stop(dev, CAN_STATE_BUS_OFF);
- return;
- }
- break;
- default:
- break;
+ if (new_state != CAN_STATE_BUS_OFF) {
+ cf->can_id |= CAN_ERR_CNT;
+ cf->data[6] = bec.txerr;
+ cf->data[7] = bec.rxerr;
}
- at91_write(priv, AT91_IDR, reg_idr);
- at91_write(priv, AT91_IER, reg_ier);
+ err = can_rx_offload_queue_timestamp(&priv->offload, skb, timestamp);
+ if (err)
+ stats->rx_fifo_errors++;
}
-static int at91_get_state_by_bec(const struct net_device *dev,
- enum can_state *state)
+static void at91_irq_err_frame(struct net_device *dev, const u32 reg_sr)
{
- struct can_berr_counter bec;
+ struct net_device_stats *stats = &dev->stats;
+ struct at91_priv *priv = netdev_priv(dev);
+ struct can_frame *cf;
+ struct sk_buff *skb;
+ u32 timestamp;
int err;
- err = at91_get_berr_counter(dev, &bec);
- if (err)
- return err;
+ priv->can.can_stats.bus_error++;
- if (bec.txerr < 96 && bec.rxerr < 96)
- *state = CAN_STATE_ERROR_ACTIVE;
- else if (bec.txerr < 128 && bec.rxerr < 128)
- *state = CAN_STATE_ERROR_WARNING;
- else if (bec.txerr < 256 && bec.rxerr < 256)
- *state = CAN_STATE_ERROR_PASSIVE;
- else
- *state = CAN_STATE_BUS_OFF;
+ skb = at91_alloc_can_err_skb(dev, &cf, &timestamp);
+ if (cf)
+ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
- return 0;
-}
+ if (reg_sr & AT91_IRQ_CERR) {
+ netdev_dbg(dev, "CRC error\n");
-static void at91_irq_err(struct net_device *dev)
-{
- struct at91_priv *priv = netdev_priv(dev);
- struct sk_buff *skb;
- struct can_frame *cf;
- enum can_state new_state;
- u32 reg_sr;
- int err;
+ stats->rx_errors++;
+ if (cf)
+ cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ;
+ }
+
+ if (reg_sr & AT91_IRQ_SERR) {
+ netdev_dbg(dev, "Stuff error\n");
+
+ stats->rx_errors++;
+ if (cf)
+ cf->data[2] |= CAN_ERR_PROT_STUFF;
+ }
- if (at91_is_sam9263(priv)) {
- reg_sr = at91_read(priv, AT91_SR);
-
- /* we need to look at the unmasked reg_sr */
- if (unlikely(reg_sr & AT91_IRQ_BOFF)) {
- new_state = CAN_STATE_BUS_OFF;
- } else if (unlikely(reg_sr & AT91_IRQ_ERRP)) {
- new_state = CAN_STATE_ERROR_PASSIVE;
- } else if (unlikely(reg_sr & AT91_IRQ_WARN)) {
- new_state = CAN_STATE_ERROR_WARNING;
- } else if (likely(reg_sr & AT91_IRQ_ERRA)) {
- new_state = CAN_STATE_ERROR_ACTIVE;
- } else {
- netdev_err(dev, "BUG! hardware in undefined state\n");
- return;
+ if (reg_sr & AT91_IRQ_AERR) {
+ netdev_dbg(dev, "NACK error\n");
+
+ stats->tx_errors++;
+ if (cf) {
+ cf->can_id |= CAN_ERR_ACK;
+ cf->data[2] |= CAN_ERR_PROT_TX;
}
- } else {
- err = at91_get_state_by_bec(dev, &new_state);
- if (err)
- return;
}
- /* state hasn't changed */
- if (likely(new_state == priv->can.state))
- return;
+ if (reg_sr & AT91_IRQ_FERR) {
+ netdev_dbg(dev, "Format error\n");
- skb = alloc_can_err_skb(dev, &cf);
- if (unlikely(!skb))
+ stats->rx_errors++;
+ if (cf)
+ cf->data[2] |= CAN_ERR_PROT_FORM;
+ }
+
+ if (reg_sr & AT91_IRQ_BERR) {
+ netdev_dbg(dev, "Bit error\n");
+
+ stats->tx_errors++;
+ if (cf)
+ cf->data[2] |= CAN_ERR_PROT_TX | CAN_ERR_PROT_BIT;
+ }
+
+ if (!cf)
return;
- at91_irq_err_state(dev, cf, new_state);
+ err = can_rx_offload_queue_timestamp(&priv->offload, skb, timestamp);
+ if (err)
+ stats->rx_fifo_errors++;
+}
+
+static u32 at91_get_reg_sr_rx(const struct at91_priv *priv, u32 *reg_sr_p)
+{
+ const u32 reg_sr = at91_read(priv, AT91_SR);
- netif_rx(skb);
+ *reg_sr_p |= reg_sr;
- priv->can.state = new_state;
+ return reg_sr & get_irq_mb_rx(priv);
}
-/* interrupt handler
- */
static irqreturn_t at91_irq(int irq, void *dev_id)
{
struct net_device *dev = dev_id;
struct at91_priv *priv = netdev_priv(dev);
irqreturn_t handled = IRQ_NONE;
- u32 reg_sr, reg_imr;
+ u32 reg_sr = 0, reg_sr_rx;
+ int ret;
- reg_sr = at91_read(priv, AT91_SR);
- reg_imr = at91_read(priv, AT91_IMR);
-
- /* Ignore masked interrupts */
- reg_sr &= reg_imr;
- if (!reg_sr)
- goto exit;
-
- handled = IRQ_HANDLED;
+ /* Receive interrupt
+ * Some bits of AT91_SR are cleared on read, keep them in reg_sr.
+ */
+ while ((reg_sr_rx = at91_get_reg_sr_rx(priv, &reg_sr))) {
+ ret = can_rx_offload_irq_offload_timestamp(&priv->offload,
+ reg_sr_rx);
+ handled = IRQ_HANDLED;
- /* Receive or error interrupt? -> napi */
- if (reg_sr & (get_irq_mb_rx(priv) | AT91_IRQ_ERR_FRAME)) {
- /* The error bits are clear on read,
- * save for later use.
- */
- priv->reg_sr = reg_sr;
- at91_write(priv, AT91_IDR,
- get_irq_mb_rx(priv) | AT91_IRQ_ERR_FRAME);
- napi_schedule(&priv->napi);
+ if (!ret)
+ break;
}
/* Transmission complete interrupt */
- if (reg_sr & get_irq_mb_tx(priv))
+ if (reg_sr & get_irq_mb_tx(priv)) {
at91_irq_tx(dev, reg_sr);
+ handled = IRQ_HANDLED;
+ }
- at91_irq_err(dev);
+ /* Line Error interrupt */
+ if (reg_sr & AT91_IRQ_ERR_LINE ||
+ priv->can.state > CAN_STATE_ERROR_ACTIVE) {
+ at91_irq_err_line(dev, reg_sr);
+ handled = IRQ_HANDLED;
+ }
+
+ /* Frame Error Interrupt */
+ if (reg_sr & AT91_IRQ_ERR_FRAME) {
+ at91_irq_err_frame(dev, reg_sr);
+ handled = IRQ_HANDLED;
+ }
+
+ if (handled)
+ can_rx_offload_irq_finish(&priv->offload);
- exit:
return handled;
}
@@ -1082,33 +874,38 @@ static int at91_open(struct net_device *dev)
struct at91_priv *priv = netdev_priv(dev);
int err;
- err = clk_prepare_enable(priv->clk);
+ err = phy_power_on(priv->transceiver);
if (err)
return err;
/* check or determine and set bittime */
err = open_candev(dev);
if (err)
- goto out;
+ goto out_phy_power_off;
+
+ err = clk_prepare_enable(priv->clk);
+ if (err)
+ goto out_close_candev;
/* register interrupt handler */
- if (request_irq(dev->irq, at91_irq, IRQF_SHARED,
- dev->name, dev)) {
- err = -EAGAIN;
- goto out_close;
- }
+ err = request_irq(dev->irq, at91_irq, IRQF_SHARED,
+ dev->name, dev);
+ if (err)
+ goto out_clock_disable_unprepare;
/* start chip and queuing */
at91_chip_start(dev);
- napi_enable(&priv->napi);
+ can_rx_offload_enable(&priv->offload);
netif_start_queue(dev);
return 0;
- out_close:
- close_candev(dev);
- out:
+ out_clock_disable_unprepare:
clk_disable_unprepare(priv->clk);
+ out_close_candev:
+ close_candev(dev);
+ out_phy_power_off:
+ phy_power_off(priv->transceiver);
return err;
}
@@ -1120,11 +917,12 @@ static int at91_close(struct net_device *dev)
struct at91_priv *priv = netdev_priv(dev);
netif_stop_queue(dev);
- napi_disable(&priv->napi);
+ can_rx_offload_disable(&priv->offload);
at91_chip_stop(dev, CAN_STATE_STOPPED);
free_irq(dev->irq, dev);
clk_disable_unprepare(priv->clk);
+ phy_power_off(priv->transceiver);
close_candev(dev);
@@ -1249,6 +1047,7 @@ static const struct at91_devtype_data *at91_can_get_driver_data(struct platform_
static int at91_can_probe(struct platform_device *pdev)
{
const struct at91_devtype_data *devtype_data;
+ struct phy *transceiver;
struct net_device *dev;
struct at91_priv *priv;
struct resource *res;
@@ -1297,6 +1096,13 @@ static int at91_can_probe(struct platform_device *pdev)
goto exit_iounmap;
}
+ transceiver = devm_phy_optional_get(&pdev->dev, NULL);
+ if (IS_ERR(transceiver)) {
+ err = PTR_ERR(transceiver);
+ dev_err_probe(&pdev->dev, err, "failed to get phy\n");
+ goto exit_iounmap;
+ }
+
dev->netdev_ops = &at91_netdev_ops;
dev->ethtool_ops = &at91_ethtool_ops;
dev->irq = irq;
@@ -1314,8 +1120,14 @@ static int at91_can_probe(struct platform_device *pdev)
priv->clk = clk;
priv->pdata = dev_get_platdata(&pdev->dev);
priv->mb0_id = 0x7ff;
+ priv->offload.mailbox_read = at91_mailbox_read;
+ priv->offload.mb_first = devtype_data->rx_first;
+ priv->offload.mb_last = devtype_data->rx_last;
+
+ can_rx_offload_add_timestamp(dev, &priv->offload);
- netif_napi_add_weight(dev, &priv->napi, at91_poll, get_mb_rx_num(priv));
+ if (transceiver)
+ priv->can.bitrate_max = transceiver->attrs.max_link_rate;
if (at91_is_sam9263(priv))
dev->sysfs_groups[0] = &at91_sysfs_attr_group;
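A minimal, stand-alone sketch of the GENMASK()/FIELD_PREP()/FIELD_GET() pattern that the at91_can.c rework above switches to; the field names and layout here are demo values, not the real AT91 register map:

/* Illustrative only: packing and unpacking register fields via <linux/bitfield.h>. */
#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define DEMO_BRP_MASK	GENMASK(22, 16)	/* bit-rate prescaler field */
#define DEMO_SJW_MASK	GENMASK(13, 12)	/* sync jump width field */

static u32 demo_pack_bittiming(u32 brp, u32 sjw)
{
	/* FIELD_PREP() shifts the value into its field and masks off excess bits */
	return FIELD_PREP(DEMO_BRP_MASK, brp - 1) |
	       FIELD_PREP(DEMO_SJW_MASK, sjw - 1);
}

static u32 demo_unpack_brp(u32 reg)
{
	/* FIELD_GET() extracts the field and shifts it back down to bit 0 */
	return FIELD_GET(DEMO_BRP_MASK, reg) + 1;
}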
diff --git a/drivers/net/can/dev/dev.c b/drivers/net/can/dev/dev.c
index 7f9334a8af50..3a3be5cdfc1f 100644
--- a/drivers/net/can/dev/dev.c
+++ b/drivers/net/can/dev/dev.c
@@ -90,6 +90,28 @@ const char *can_get_state_str(const enum can_state state)
}
EXPORT_SYMBOL_GPL(can_get_state_str);
+static enum can_state can_state_err_to_state(u16 err)
+{
+ if (err < CAN_ERROR_WARNING_THRESHOLD)
+ return CAN_STATE_ERROR_ACTIVE;
+ if (err < CAN_ERROR_PASSIVE_THRESHOLD)
+ return CAN_STATE_ERROR_WARNING;
+ if (err < CAN_BUS_OFF_THRESHOLD)
+ return CAN_STATE_ERROR_PASSIVE;
+
+ return CAN_STATE_BUS_OFF;
+}
+
+void can_state_get_by_berr_counter(const struct net_device *dev,
+ const struct can_berr_counter *bec,
+ enum can_state *tx_state,
+ enum can_state *rx_state)
+{
+ *tx_state = can_state_err_to_state(bec->txerr);
+ *rx_state = can_state_err_to_state(bec->rxerr);
+}
+EXPORT_SYMBOL_GPL(can_state_get_by_berr_counter);
+
void can_change_state(struct net_device *dev, struct can_frame *cf,
enum can_state tx_state, enum can_state rx_state)
{
@@ -132,7 +154,8 @@ static void can_restart(struct net_device *dev)
struct can_frame *cf;
int err;
- BUG_ON(netif_carrier_ok(dev));
+ if (netif_carrier_ok(dev))
+ netdev_err(dev, "Attempt to restart for bus-off recovery, but carrier is OK?\n");
/* No synchronization needed because the device is bus-off and
* no messages can come in or go out.
@@ -141,23 +164,21 @@ static void can_restart(struct net_device *dev)
/* send restart message upstream */
skb = alloc_can_err_skb(dev, &cf);
- if (!skb)
- goto restart;
-
- cf->can_id |= CAN_ERR_RESTARTED;
-
- netif_rx(skb);
-
-restart:
- netdev_dbg(dev, "restarted\n");
- priv->can_stats.restarts++;
+ if (skb) {
+ cf->can_id |= CAN_ERR_RESTARTED;
+ netif_rx(skb);
+ }
/* Now restart the device */
- err = priv->do_set_mode(dev, CAN_MODE_START);
-
netif_carrier_on(dev);
- if (err)
- netdev_err(dev, "Error %d during restart", err);
+ err = priv->do_set_mode(dev, CAN_MODE_START);
+ if (err) {
+ netdev_err(dev, "Restart failed, error %pe\n", ERR_PTR(err));
+ netif_carrier_off(dev);
+ } else {
+ netdev_dbg(dev, "Restarted\n");
+ priv->can_stats.restarts++;
+ }
}
static void can_restart_work(struct work_struct *work)
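A minimal sketch of how a driver might combine the can_state_get_by_berr_counter() helper added above with can_change_state(); the counter values are made up and the function name is hypothetical (the at91 conversion earlier in this series does essentially this in its IRQ handler):

/* Illustrative only: derive per-direction CAN states from raw error counters. */
#include <linux/can/dev.h>

static void demo_update_state(struct net_device *dev, struct can_frame *cf)
{
	/* e.g. TEC = 140 (>= 128 -> error-passive), REC = 20 (error-active) */
	struct can_berr_counter bec = { .txerr = 140, .rxerr = 20 };
	enum can_state tx_state, rx_state;

	can_state_get_by_berr_counter(dev, &bec, &tx_state, &rx_state);

	/* tx_state == CAN_STATE_ERROR_PASSIVE, rx_state == CAN_STATE_ERROR_ACTIVE;
	 * can_change_state() picks the worse of the two and tolerates cf == NULL.
	 */
	can_change_state(dev, cf, tx_state, rx_state);
}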
diff --git a/drivers/net/can/dev/rx-offload.c b/drivers/net/can/dev/rx-offload.c
index 77091f7d1fa7..46e7b6db4a1e 100644
--- a/drivers/net/can/dev/rx-offload.c
+++ b/drivers/net/can/dev/rx-offload.c
@@ -67,7 +67,7 @@ static int can_rx_offload_napi_poll(struct napi_struct *napi, int quota)
/* Check if there was another interrupt */
if (!skb_queue_empty(&offload->skb_queue))
- napi_reschedule(&offload->napi);
+ napi_schedule(&offload->napi);
}
return work_done;
diff --git a/drivers/net/can/dev/skb.c b/drivers/net/can/dev/skb.c
index f6d05b3ef59a..3ebd4f779b9b 100644
--- a/drivers/net/can/dev/skb.c
+++ b/drivers/net/can/dev/skb.c
@@ -49,7 +49,11 @@ int can_put_echo_skb(struct sk_buff *skb, struct net_device *dev,
{
struct can_priv *priv = netdev_priv(dev);
- BUG_ON(idx >= priv->echo_skb_max);
+ if (idx >= priv->echo_skb_max) {
+ netdev_err(dev, "%s: BUG! Trying to access can_priv::echo_skb out of bounds (%u/max %u)\n",
+ __func__, idx, priv->echo_skb_max);
+ return -EINVAL;
+ }
/* check flag whether this packet has to be looped back */
if (!(dev->flags & IFF_ECHO) ||
diff --git a/drivers/net/can/sja1000/peak_pci.c b/drivers/net/can/sja1000/peak_pci.c
index 84f34020aafb..da396d641e24 100644
--- a/drivers/net/can/sja1000/peak_pci.c
+++ b/drivers/net/can/sja1000/peak_pci.c
@@ -462,7 +462,7 @@ static int peak_pciec_probe(struct pci_dev *pdev, struct net_device *dev)
card->led_chip.owner = THIS_MODULE;
card->led_chip.dev.parent = &pdev->dev;
card->led_chip.algo_data = &card->i2c_bit;
- strncpy(card->led_chip.name, "peak_i2c",
+ strscpy(card->led_chip.name, "peak_i2c",
sizeof(card->led_chip.name));
card->i2c_bit = peak_pciec_i2c_bit_ops;
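The peak_pci hunk above swaps strncpy() for strscpy(); the behavioural difference is sketched below with a made-up, deliberately too-long name:

/* Illustrative only: strscpy() vs. strncpy() on a too-small buffer. */
#include <linux/errno.h>
#include <linux/printk.h>
#include <linux/string.h>
#include <linux/types.h>

static void demo_copy_name(void)
{
	char buf[8];
	ssize_t ret;

	/* strncpy() does not NUL-terminate when the source fills the buffer */
	strncpy(buf, "peak_i2c_extra", sizeof(buf));

	/* strscpy() always NUL-terminates and reports truncation */
	ret = strscpy(buf, "peak_i2c_extra", sizeof(buf));
	if (ret == -E2BIG)
		pr_debug("adapter name truncated\n");
}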
diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c
index 743c2eb62b87..ddb3247948ad 100644
--- a/drivers/net/can/sja1000/sja1000.c
+++ b/drivers/net/can/sja1000/sja1000.c
@@ -206,7 +206,7 @@ static void sja1000_start(struct net_device *dev)
{
struct sja1000_priv *priv = netdev_priv(dev);
- /* leave reset mode */
+ /* enter reset mode */
if (priv->can.state != CAN_STATE_STOPPED)
set_reset_mode(dev);
diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.c b/drivers/net/can/usb/etas_es58x/es58x_core.c
index 0c7f7505632c..5e3a72b7c469 100644
--- a/drivers/net/can/usb/etas_es58x/es58x_core.c
+++ b/drivers/net/can/usb/etas_es58x/es58x_core.c
@@ -2230,6 +2230,7 @@ static int es58x_probe(struct usb_interface *intf,
for (ch_idx = 0; ch_idx < es58x_dev->num_can_ch; ch_idx++) {
int ret = es58x_init_netdev(es58x_dev, ch_idx);
+
if (ret) {
es58x_free_netdevs(es58x_dev);
return ret;
diff --git a/drivers/net/can/usb/etas_es58x/es58x_core.h b/drivers/net/can/usb/etas_es58x/es58x_core.h
index c1ba1a4e8857..2e183bdeedd7 100644
--- a/drivers/net/can/usb/etas_es58x/es58x_core.h
+++ b/drivers/net/can/usb/etas_es58x/es58x_core.h
@@ -378,13 +378,13 @@ struct es58x_sw_version {
/**
* struct es58x_hw_revision - Hardware revision number.
- * @letter: Revision letter.
+ * @letter: Revision letter, an alphanumeric character.
* @major: Version major number, represented on three digits.
* @minor: Version minor number, represented on three digits.
*
* The hardware revision uses its own format: "axxx/xxx" where 'a' is
- * a letter and 'x' a digit. It can be retrieved from the product
- * information string.
+ * an alphanumeric character and 'x' a digit. It can be retrieved from
+ * the product information string.
*/
struct es58x_hw_revision {
char letter;
diff --git a/drivers/net/can/usb/etas_es58x/es58x_devlink.c b/drivers/net/can/usb/etas_es58x/es58x_devlink.c
index 9fba29e2f57c..635edeb8f68c 100644
--- a/drivers/net/can/usb/etas_es58x/es58x_devlink.c
+++ b/drivers/net/can/usb/etas_es58x/es58x_devlink.c
@@ -125,14 +125,28 @@ static int es58x_parse_hw_rev(struct es58x_device *es58x_dev,
* firmware version, the bootloader version and the hardware
* revision.
*
- * If the function fails, simply emit a log message and continue
- * because product information is not critical for the driver to
- * operate.
+ * If the function fails, set the version or revision to an invalid
+ * value and emit an informal message. Continue probing because the
+ * product information is not critical for the driver to operate.
*/
void es58x_parse_product_info(struct es58x_device *es58x_dev)
{
+ static const struct es58x_sw_version sw_version_not_set = {
+ .major = -1,
+ .minor = -1,
+ .revision = -1,
+ };
+ static const struct es58x_hw_revision hw_revision_not_set = {
+ .letter = '\0',
+ .major = -1,
+ .minor = -1,
+ };
char *prod_info;
+ es58x_dev->firmware_version = sw_version_not_set;
+ es58x_dev->bootloader_version = sw_version_not_set;
+ es58x_dev->hardware_revision = hw_revision_not_set;
+
prod_info = usb_cache_string(es58x_dev->udev, ES58X_PROD_INFO_IDX);
if (!prod_info) {
dev_warn(es58x_dev->dev,
@@ -150,29 +164,36 @@ void es58x_parse_product_info(struct es58x_device *es58x_dev)
}
/**
- * es58x_sw_version_is_set() - Check if the version is a valid number.
+ * es58x_sw_version_is_valid() - Check if the version is a valid number.
* @sw_ver: Version number of either the firmware or the bootloader.
*
- * If &es58x_sw_version.major, &es58x_sw_version.minor and
- * &es58x_sw_version.revision are all zero, the product string could
- * not be parsed and the version number is invalid.
+ * If any of the software version sub-numbers do not fit on two
+ * digits, the version is invalid, most probably because the product
+ * string could not be parsed.
+ *
+ * Return: @true if the software version is valid, @false otherwise.
*/
-static inline bool es58x_sw_version_is_set(struct es58x_sw_version *sw_ver)
+static inline bool es58x_sw_version_is_valid(struct es58x_sw_version *sw_ver)
{
- return sw_ver->major || sw_ver->minor || sw_ver->revision;
+ return sw_ver->major < 100 && sw_ver->minor < 100 &&
+ sw_ver->revision < 100;
}
/**
- * es58x_hw_revision_is_set() - Check if the revision is a valid number.
+ * es58x_hw_revision_is_valid() - Check if the revision is a valid number.
* @hw_rev: Revision number of the hardware.
*
- * If &es58x_hw_revision.letter is the null character, the product
- * string could not be parsed and the hardware revision number is
- * invalid.
+ * If &es58x_hw_revision.letter is not an alphanumeric character or if
+ * any of the hardware revision sub-numbers do not fit on three
+ * digits, the revision is invalid, most probably because the product
+ * string could not be parsed.
+ *
+ * Return: @true if the hardware revision is valid, @false otherwise.
*/
-static inline bool es58x_hw_revision_is_set(struct es58x_hw_revision *hw_rev)
+static inline bool es58x_hw_revision_is_valid(struct es58x_hw_revision *hw_rev)
{
- return hw_rev->letter != '\0';
+ return isalnum(hw_rev->letter) && hw_rev->major < 1000 &&
+ hw_rev->minor < 1000;
}
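For illustration (the concrete numbers are assumed, not taken from the driver): a parsed firmware version such as 02.05.13 passes es58x_sw_version_is_valid(), while the "not set" sentinels assigned in es58x_parse_product_info() fall outside the two-digit range and are reported as invalid; likewise a hardware revision such as C204/065 passes es58x_hw_revision_is_valid(), whereas a cleared letter ('\0') fails the isalnum() check.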
/**
@@ -197,7 +218,7 @@ static int es58x_devlink_info_get(struct devlink *devlink,
char buf[max(sizeof("xx.xx.xx"), sizeof("axxx/xxx"))];
int ret = 0;
- if (es58x_sw_version_is_set(fw_ver)) {
+ if (es58x_sw_version_is_valid(fw_ver)) {
snprintf(buf, sizeof(buf), "%02u.%02u.%02u",
fw_ver->major, fw_ver->minor, fw_ver->revision);
ret = devlink_info_version_running_put(req,
@@ -207,7 +228,7 @@ static int es58x_devlink_info_get(struct devlink *devlink,
return ret;
}
- if (es58x_sw_version_is_set(bl_ver)) {
+ if (es58x_sw_version_is_valid(bl_ver)) {
snprintf(buf, sizeof(buf), "%02u.%02u.%02u",
bl_ver->major, bl_ver->minor, bl_ver->revision);
ret = devlink_info_version_running_put(req,
@@ -217,7 +238,7 @@ static int es58x_devlink_info_get(struct devlink *devlink,
return ret;
}
- if (es58x_hw_revision_is_set(hw_rev)) {
+ if (es58x_hw_revision_is_valid(hw_rev)) {
snprintf(buf, sizeof(buf), "%c%03u/%03u",
hw_rev->letter, hw_rev->major, hw_rev->minor);
ret = devlink_info_version_fixed_put(req,
diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
index 4e27dc913cf7..0d628b35fd5c 100644
--- a/drivers/net/dsa/b53/b53_common.c
+++ b/drivers/net/dsa/b53/b53_common.c
@@ -757,7 +757,7 @@ int b53_configure_vlan(struct dsa_switch *ds)
/* Create an untagged VLAN entry for the default PVID in case
* CONFIG_VLAN_8021Q is disabled and there are no calls to
- * dsa_slave_vlan_rx_add_vid() to create the default VLAN
+ * dsa_user_vlan_rx_add_vid() to create the default VLAN
* entry. Do this only when the tagging protocol is not
* DSA_TAG_PROTO_NONE
*/
@@ -958,7 +958,7 @@ static struct phy_device *b53_get_phy_device(struct dsa_switch *ds, int port)
return NULL;
}
- return mdiobus_get_phy(ds->slave_mii_bus, port);
+ return mdiobus_get_phy(ds->user_mii_bus, port);
}
void b53_get_strings(struct dsa_switch *ds, int port, u32 stringset,
diff --git a/drivers/net/dsa/b53/b53_mdio.c b/drivers/net/dsa/b53/b53_mdio.c
index 4d55d8d18376..897e5e8b3d69 100644
--- a/drivers/net/dsa/b53/b53_mdio.c
+++ b/drivers/net/dsa/b53/b53_mdio.c
@@ -329,7 +329,7 @@ static int b53_mdio_probe(struct mdio_device *mdiodev)
* layer setup
*/
if (of_machine_is_compatible("brcm,bcm7445d0") &&
- strcmp(mdiodev->bus->name, "sf2 slave mii"))
+ strcmp(mdiodev->bus->name, "sf2 user mii"))
return -EPROBE_DEFER;
dev = b53_switch_alloc(&mdiodev->dev, &b53_mdio_ops, mdiodev->bus);
diff --git a/drivers/net/dsa/b53/b53_mmap.c b/drivers/net/dsa/b53/b53_mmap.c
index 5e39641ea887..3a89349dc918 100644
--- a/drivers/net/dsa/b53/b53_mmap.c
+++ b/drivers/net/dsa/b53/b53_mmap.c
@@ -324,14 +324,12 @@ static int b53_mmap_probe(struct platform_device *pdev)
return b53_switch_register(dev);
}
-static int b53_mmap_remove(struct platform_device *pdev)
+static void b53_mmap_remove(struct platform_device *pdev)
{
struct b53_device *dev = platform_get_drvdata(pdev);
if (dev)
b53_switch_remove(dev);
-
- return 0;
}
static void b53_mmap_shutdown(struct platform_device *pdev)
@@ -372,7 +370,7 @@ MODULE_DEVICE_TABLE(of, b53_mmap_of_table);
static struct platform_driver b53_mmap_driver = {
.probe = b53_mmap_probe,
- .remove = b53_mmap_remove,
+ .remove_new = b53_mmap_remove,
.shutdown = b53_mmap_shutdown,
.driver = {
.name = "b53-switch",
diff --git a/drivers/net/dsa/b53/b53_srab.c b/drivers/net/dsa/b53/b53_srab.c
index bcb44034404d..f3f95332ff17 100644
--- a/drivers/net/dsa/b53/b53_srab.c
+++ b/drivers/net/dsa/b53/b53_srab.c
@@ -657,17 +657,15 @@ static int b53_srab_probe(struct platform_device *pdev)
return b53_switch_register(dev);
}
-static int b53_srab_remove(struct platform_device *pdev)
+static void b53_srab_remove(struct platform_device *pdev)
{
struct b53_device *dev = platform_get_drvdata(pdev);
if (!dev)
- return 0;
+ return;
b53_srab_intr_set(dev->priv, false);
b53_switch_remove(dev);
-
- return 0;
}
static void b53_srab_shutdown(struct platform_device *pdev)
@@ -684,7 +682,7 @@ static void b53_srab_shutdown(struct platform_device *pdev)
static struct platform_driver b53_srab_driver = {
.probe = b53_srab_probe,
- .remove = b53_srab_remove,
+ .remove_new = b53_srab_remove,
.shutdown = b53_srab_shutdown,
.driver = {
.name = "b53-srab-switch",
diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
index cd1f240c90f3..cadee5505c29 100644
--- a/drivers/net/dsa/bcm_sf2.c
+++ b/drivers/net/dsa/bcm_sf2.c
@@ -623,19 +623,19 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
priv->master_mii_dn = dn;
- priv->slave_mii_bus = mdiobus_alloc();
- if (!priv->slave_mii_bus) {
+ priv->user_mii_bus = mdiobus_alloc();
+ if (!priv->user_mii_bus) {
err = -ENOMEM;
goto err_put_master_mii_bus_dev;
}
- priv->slave_mii_bus->priv = priv;
- priv->slave_mii_bus->name = "sf2 slave mii";
- priv->slave_mii_bus->read = bcm_sf2_sw_mdio_read;
- priv->slave_mii_bus->write = bcm_sf2_sw_mdio_write;
- snprintf(priv->slave_mii_bus->id, MII_BUS_ID_SIZE, "sf2-%d",
+ priv->user_mii_bus->priv = priv;
+ priv->user_mii_bus->name = "sf2 user mii";
+ priv->user_mii_bus->read = bcm_sf2_sw_mdio_read;
+ priv->user_mii_bus->write = bcm_sf2_sw_mdio_write;
+ snprintf(priv->user_mii_bus->id, MII_BUS_ID_SIZE, "sf2-%d",
index++);
- priv->slave_mii_bus->dev.of_node = dn;
+ priv->user_mii_bus->dev.of_node = dn;
/* Include the pseudo-PHY address to divert reads towards our
* workaround. This is only required for 7445D0, since 7445E0
@@ -653,9 +653,9 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
priv->indir_phy_mask = 0;
ds->phys_mii_mask = priv->indir_phy_mask;
- ds->slave_mii_bus = priv->slave_mii_bus;
- priv->slave_mii_bus->parent = ds->dev->parent;
- priv->slave_mii_bus->phy_mask = ~priv->indir_phy_mask;
+ ds->user_mii_bus = priv->user_mii_bus;
+ priv->user_mii_bus->parent = ds->dev->parent;
+ priv->user_mii_bus->phy_mask = ~priv->indir_phy_mask;
/* We need to make sure that of_phy_connect() will not work by
* removing the 'phandle' and 'linux,phandle' properties and
@@ -682,14 +682,14 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
phy_device_remove(phydev);
}
- err = mdiobus_register(priv->slave_mii_bus);
+ err = mdiobus_register(priv->user_mii_bus);
if (err && dn)
- goto err_free_slave_mii_bus;
+ goto err_free_user_mii_bus;
return 0;
-err_free_slave_mii_bus:
- mdiobus_free(priv->slave_mii_bus);
+err_free_user_mii_bus:
+ mdiobus_free(priv->user_mii_bus);
err_put_master_mii_bus_dev:
put_device(&priv->master_mii_bus->dev);
err_of_node_put:
@@ -699,10 +699,9 @@ err_of_node_put:
static void bcm_sf2_mdio_unregister(struct bcm_sf2_priv *priv)
{
- mdiobus_unregister(priv->slave_mii_bus);
- mdiobus_free(priv->slave_mii_bus);
+ mdiobus_unregister(priv->user_mii_bus);
+ mdiobus_free(priv->user_mii_bus);
put_device(&priv->master_mii_bus->dev);
- of_node_put(priv->master_mii_dn);
}
static u32 bcm_sf2_sw_get_phy_flags(struct dsa_switch *ds, int port)
@@ -915,7 +914,7 @@ static void bcm_sf2_sw_fixed_state(struct dsa_switch *ds, int port,
* state machine and make it go in PHY_FORCING state instead.
*/
if (!status->link)
- netif_carrier_off(dsa_to_port(ds, port)->slave);
+ netif_carrier_off(dsa_to_port(ds, port)->user);
status->duplex = DUPLEX_FULL;
} else {
status->link = true;
@@ -989,7 +988,7 @@ static int bcm_sf2_sw_resume(struct dsa_switch *ds)
static void bcm_sf2_sw_get_wol(struct dsa_switch *ds, int port,
struct ethtool_wolinfo *wol)
{
- struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port));
+ struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
struct ethtool_wolinfo pwol = { };
@@ -1013,7 +1012,7 @@ static void bcm_sf2_sw_get_wol(struct dsa_switch *ds, int port,
static int bcm_sf2_sw_set_wol(struct dsa_switch *ds, int port,
struct ethtool_wolinfo *wol)
{
- struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port));
+ struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
s8 cpu_port = dsa_to_port(ds, port)->cpu_dp->index;
struct ethtool_wolinfo pwol = { };
@@ -1543,12 +1542,12 @@ out_clk:
return ret;
}
-static int bcm_sf2_sw_remove(struct platform_device *pdev)
+static void bcm_sf2_sw_remove(struct platform_device *pdev)
{
struct bcm_sf2_priv *priv = platform_get_drvdata(pdev);
if (!priv)
- return 0;
+ return;
priv->wol_ports_mask = 0;
/* Disable interrupts */
@@ -1560,8 +1559,6 @@ static int bcm_sf2_sw_remove(struct platform_device *pdev)
clk_disable_unprepare(priv->clk);
if (priv->type == BCM7278_DEVICE_ID)
reset_control_assert(priv->rcdev);
-
- return 0;
}
static void bcm_sf2_sw_shutdown(struct platform_device *pdev)
@@ -1607,7 +1604,7 @@ static SIMPLE_DEV_PM_OPS(bcm_sf2_pm_ops,
static struct platform_driver bcm_sf2_driver = {
.probe = bcm_sf2_sw_probe,
- .remove = bcm_sf2_sw_remove,
+ .remove_new = bcm_sf2_sw_remove,
.shutdown = bcm_sf2_sw_shutdown,
.driver = {
.name = "brcm-sf2",
diff --git a/drivers/net/dsa/bcm_sf2.h b/drivers/net/dsa/bcm_sf2.h
index 00afc94ce522..424f896b5a6f 100644
--- a/drivers/net/dsa/bcm_sf2.h
+++ b/drivers/net/dsa/bcm_sf2.h
@@ -108,7 +108,7 @@ struct bcm_sf2_priv {
/* Master and slave MDIO bus controller */
unsigned int indir_phy_mask;
struct device_node *master_mii_dn;
- struct mii_bus *slave_mii_bus;
+ struct mii_bus *user_mii_bus;
struct mii_bus *master_mii_bus;
/* Bitmask of ports needing BRCM tags */
diff --git a/drivers/net/dsa/bcm_sf2_cfp.c b/drivers/net/dsa/bcm_sf2_cfp.c
index c4010b7bf089..c88ee3dd4299 100644
--- a/drivers/net/dsa/bcm_sf2_cfp.c
+++ b/drivers/net/dsa/bcm_sf2_cfp.c
@@ -1102,7 +1102,7 @@ static int bcm_sf2_cfp_rule_get_all(struct bcm_sf2_priv *priv,
int bcm_sf2_get_rxnfc(struct dsa_switch *ds, int port,
struct ethtool_rxnfc *nfc, u32 *rule_locs)
{
- struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port));
+ struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
int ret = 0;
@@ -1145,7 +1145,7 @@ int bcm_sf2_get_rxnfc(struct dsa_switch *ds, int port,
int bcm_sf2_set_rxnfc(struct dsa_switch *ds, int port,
struct ethtool_rxnfc *nfc)
{
- struct net_device *p = dsa_port_to_master(dsa_to_port(ds, port));
+ struct net_device *p = dsa_port_to_conduit(dsa_to_port(ds, port));
struct bcm_sf2_priv *priv = bcm_sf2_to_priv(ds);
int ret = 0;
diff --git a/drivers/net/dsa/dsa_loop.c b/drivers/net/dsa/dsa_loop.c
index 5b139f2206b6..c70ed67cc188 100644
--- a/drivers/net/dsa/dsa_loop.c
+++ b/drivers/net/dsa/dsa_loop.c
@@ -277,6 +277,14 @@ static int dsa_loop_port_max_mtu(struct dsa_switch *ds, int port)
return ETH_MAX_MTU;
}
+static void dsa_loop_phylink_get_caps(struct dsa_switch *dsa, int port,
+ struct phylink_config *config)
+{
+ bitmap_fill(config->supported_interfaces, PHY_INTERFACE_MODE_MAX);
+ __clear_bit(PHY_INTERFACE_MODE_NA, config->supported_interfaces);
+ config->mac_capabilities = ~0;
+}
+
static const struct dsa_switch_ops dsa_loop_driver = {
.get_tag_protocol = dsa_loop_get_protocol,
.setup = dsa_loop_setup,
@@ -295,6 +303,7 @@ static const struct dsa_switch_ops dsa_loop_driver = {
.port_vlan_del = dsa_loop_port_vlan_del,
.port_change_mtu = dsa_loop_port_change_mtu,
.port_max_mtu = dsa_loop_port_max_mtu,
+ .phylink_get_caps = dsa_loop_phylink_get_caps,
};
static int dsa_loop_drv_probe(struct mdio_device *mdiodev)
diff --git a/drivers/net/dsa/hirschmann/hellcreek.c b/drivers/net/dsa/hirschmann/hellcreek.c
index 11ef1d7ea229..beda1e9d350f 100644
--- a/drivers/net/dsa/hirschmann/hellcreek.c
+++ b/drivers/net/dsa/hirschmann/hellcreek.c
@@ -2060,18 +2060,16 @@ err_ptp_setup:
return ret;
}
-static int hellcreek_remove(struct platform_device *pdev)
+static void hellcreek_remove(struct platform_device *pdev)
{
struct hellcreek *hellcreek = platform_get_drvdata(pdev);
if (!hellcreek)
- return 0;
+ return;
hellcreek_hwtstamp_free(hellcreek);
hellcreek_ptp_free(hellcreek);
dsa_unregister_switch(hellcreek->ds);
-
- return 0;
}
static void hellcreek_shutdown(struct platform_device *pdev)
@@ -2107,7 +2105,7 @@ MODULE_DEVICE_TABLE(of, hellcreek_of_match);
static struct platform_driver hellcreek_driver = {
.probe = hellcreek_probe,
- .remove = hellcreek_remove,
+ .remove_new = hellcreek_remove,
.shutdown = hellcreek_shutdown,
.driver = {
.name = "hellcreek",
diff --git a/drivers/net/dsa/lan9303-core.c b/drivers/net/dsa/lan9303-core.c
index ee67adeb2cdb..fcb20eac332a 100644
--- a/drivers/net/dsa/lan9303-core.c
+++ b/drivers/net/dsa/lan9303-core.c
@@ -1084,7 +1084,7 @@ static int lan9303_port_enable(struct dsa_switch *ds, int port,
if (!dsa_port_is_user(dp))
return 0;
- vlan_vid_add(dsa_port_to_master(dp), htons(ETH_P_8021Q), port);
+ vlan_vid_add(dsa_port_to_conduit(dp), htons(ETH_P_8021Q), port);
return lan9303_enable_processing_port(chip, port);
}
@@ -1097,7 +1097,7 @@ static void lan9303_port_disable(struct dsa_switch *ds, int port)
if (!dsa_port_is_user(dp))
return;
- vlan_vid_del(dsa_port_to_master(dp), htons(ETH_P_8021Q), port);
+ vlan_vid_del(dsa_port_to_conduit(dp), htons(ETH_P_8021Q), port);
lan9303_disable_processing_port(chip, port);
lan9303_phy_write(ds, chip->phy_addr_base + port, MII_BMCR, BMCR_PDOWN);
diff --git a/drivers/net/dsa/lantiq_gswip.c b/drivers/net/dsa/lantiq_gswip.c
index 3c76a1a14aee..9c185c9f0963 100644
--- a/drivers/net/dsa/lantiq_gswip.c
+++ b/drivers/net/dsa/lantiq_gswip.c
@@ -510,22 +510,22 @@ static int gswip_mdio(struct gswip_priv *priv, struct device_node *mdio_np)
struct dsa_switch *ds = priv->ds;
int err;
- ds->slave_mii_bus = mdiobus_alloc();
- if (!ds->slave_mii_bus)
+ ds->user_mii_bus = mdiobus_alloc();
+ if (!ds->user_mii_bus)
return -ENOMEM;
- ds->slave_mii_bus->priv = priv;
- ds->slave_mii_bus->read = gswip_mdio_rd;
- ds->slave_mii_bus->write = gswip_mdio_wr;
- ds->slave_mii_bus->name = "lantiq,xrx200-mdio";
- snprintf(ds->slave_mii_bus->id, MII_BUS_ID_SIZE, "%s-mii",
+ ds->user_mii_bus->priv = priv;
+ ds->user_mii_bus->read = gswip_mdio_rd;
+ ds->user_mii_bus->write = gswip_mdio_wr;
+ ds->user_mii_bus->name = "lantiq,xrx200-mdio";
+ snprintf(ds->user_mii_bus->id, MII_BUS_ID_SIZE, "%s-mii",
dev_name(priv->dev));
- ds->slave_mii_bus->parent = priv->dev;
- ds->slave_mii_bus->phy_mask = ~ds->phys_mii_mask;
+ ds->user_mii_bus->parent = priv->dev;
+ ds->user_mii_bus->phy_mask = ~ds->phys_mii_mask;
- err = of_mdiobus_register(ds->slave_mii_bus, mdio_np);
+ err = of_mdiobus_register(ds->user_mii_bus, mdio_np);
if (err)
- mdiobus_free(ds->slave_mii_bus);
+ mdiobus_free(ds->user_mii_bus);
return err;
}
@@ -1759,8 +1759,7 @@ static void gswip_get_strings(struct dsa_switch *ds, int port, u32 stringset,
return;
for (i = 0; i < ARRAY_SIZE(gswip_rmon_cnt); i++)
- strncpy(data + i * ETH_GSTRING_LEN, gswip_rmon_cnt[i].name,
- ETH_GSTRING_LEN);
+ ethtool_sprintf(&data, "%s", gswip_rmon_cnt[i].name);
}
static u32 gswip_bcm_ram_entry_read(struct gswip_priv *priv, u32 table,
@@ -2197,8 +2196,8 @@ disable_switch:
dsa_unregister_switch(priv->ds);
mdio_bus:
if (mdio_np) {
- mdiobus_unregister(priv->ds->slave_mii_bus);
- mdiobus_free(priv->ds->slave_mii_bus);
+ mdiobus_unregister(priv->ds->user_mii_bus);
+ mdiobus_free(priv->ds->user_mii_bus);
}
put_mdio_node:
of_node_put(mdio_np);
@@ -2207,29 +2206,27 @@ put_mdio_node:
return err;
}
-static int gswip_remove(struct platform_device *pdev)
+static void gswip_remove(struct platform_device *pdev)
{
struct gswip_priv *priv = platform_get_drvdata(pdev);
int i;
if (!priv)
- return 0;
+ return;
/* disable the switch */
gswip_mdio_mask(priv, GSWIP_MDIO_GLOB_ENABLE, 0, GSWIP_MDIO_GLOB);
dsa_unregister_switch(priv->ds);
- if (priv->ds->slave_mii_bus) {
- mdiobus_unregister(priv->ds->slave_mii_bus);
- of_node_put(priv->ds->slave_mii_bus->dev.of_node);
- mdiobus_free(priv->ds->slave_mii_bus);
+ if (priv->ds->user_mii_bus) {
+ mdiobus_unregister(priv->ds->user_mii_bus);
+ of_node_put(priv->ds->user_mii_bus->dev.of_node);
+ mdiobus_free(priv->ds->user_mii_bus);
}
for (i = 0; i < priv->num_gphy_fw; i++)
gswip_gphy_fw_remove(priv, &priv->gphy_fw[i]);
-
- return 0;
}
static void gswip_shutdown(struct platform_device *pdev)
@@ -2266,7 +2263,7 @@ MODULE_DEVICE_TABLE(of, gswip_of_match);
static struct platform_driver gswip_driver = {
.probe = gswip_probe,
- .remove = gswip_remove,
+ .remove_new = gswip_remove,
.shutdown = gswip_shutdown,
.driver = {
.name = "gswip",
diff --git a/drivers/net/dsa/microchip/Makefile b/drivers/net/dsa/microchip/Makefile
index 48360cc9fc68..49459a50dbc8 100644
--- a/drivers/net/dsa/microchip/Makefile
+++ b/drivers/net/dsa/microchip/Makefile
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-$(CONFIG_NET_DSA_MICROCHIP_KSZ_COMMON) += ksz_switch.o
ksz_switch-objs := ksz_common.o
-ksz_switch-objs += ksz9477.o
+ksz_switch-objs += ksz9477.o ksz9477_acl.o ksz9477_tc_flower.o
ksz_switch-objs += ksz8795.o
ksz_switch-objs += lan937x_main.o
diff --git a/drivers/net/dsa/microchip/ksz8795.c b/drivers/net/dsa/microchip/ksz8795.c
index 91aba470fb2f..4bf4d67557dc 100644
--- a/drivers/net/dsa/microchip/ksz8795.c
+++ b/drivers/net/dsa/microchip/ksz8795.c
@@ -632,6 +632,50 @@ static void ksz8_w_vlan_table(struct ksz_device *dev, u16 vid, u16 vlan)
ksz8_w_table(dev, TABLE_VLAN, addr, buf);
}
+/**
+ * ksz8_r_phy_ctrl - Translates and reads from the SMI interface to a MIIM PHY
+ * Control register (Reg. 31).
+ * @dev: The KSZ device instance.
+ * @port: The port number to be read.
+ * @val: The value read from the SMI interface.
+ *
+ * This function reads the SMI interface and translates the hardware register
+ * bit values into their corresponding control settings for a MIIM PHY Control
+ * register.
+ *
+ * Return: 0 on success, error code on failure.
+ */
+static int ksz8_r_phy_ctrl(struct ksz_device *dev, int port, u16 *val)
+{
+ const u16 *regs = dev->info->regs;
+ u8 reg_val;
+ int ret;
+
+ *val = 0;
+
+ ret = ksz_pread8(dev, port, regs[P_LINK_STATUS], &reg_val);
+ if (ret < 0)
+ return ret;
+
+ if (reg_val & PORT_MDIX_STATUS)
+ *val |= KSZ886X_CTRL_MDIX_STAT;
+
+ ret = ksz_pread8(dev, port, REG_PORT_LINK_MD_CTRL, &reg_val);
+ if (ret < 0)
+ return ret;
+
+ if (reg_val & PORT_FORCE_LINK)
+ *val |= KSZ886X_CTRL_FORCE_LINK;
+
+ if (reg_val & PORT_POWER_SAVING)
+ *val |= KSZ886X_CTRL_PWRSAVE;
+
+ if (reg_val & PORT_PHY_REMOTE_LOOPBACK)
+ *val |= KSZ886X_CTRL_REMOTE_LOOPBACK;
+
+ return 0;
+}
+
int ksz8_r_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 *val)
{
u8 restart, speed, ctrl, link;
@@ -769,12 +813,10 @@ int ksz8_r_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 *val)
FIELD_GET(PORT_CABLE_FAULT_COUNTER_L, val2));
break;
case PHY_REG_PHY_CTRL:
- ret = ksz_pread8(dev, p, regs[P_LINK_STATUS], &link);
+ ret = ksz8_r_phy_ctrl(dev, p, &data);
if (ret)
return ret;
- if (link & PORT_MDIX_STATUS)
- data |= KSZ886X_CTRL_MDIX_STAT;
break;
default:
processed = false;
@@ -786,6 +828,38 @@ int ksz8_r_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 *val)
return 0;
}
+/**
+ * ksz8_w_phy_ctrl - Translates and writes to the SMI interface from a MIIM PHY
+ * Control register (Reg. 31).
+ * @dev: The KSZ device instance.
+ * @port: The port number to be configured.
+ * @val: The register value to be written.
+ *
+ * This function translates control settings from a MIIM PHY Control register
+ * into their corresponding hardware register bit values for the SMI
+ * interface.
+ *
+ * Return: 0 on success, error code on failure.
+ */
+static int ksz8_w_phy_ctrl(struct ksz_device *dev, int port, u16 val)
+{
+ u8 reg_val = 0;
+ int ret;
+
+ if (val & KSZ886X_CTRL_FORCE_LINK)
+ reg_val |= PORT_FORCE_LINK;
+
+ if (val & KSZ886X_CTRL_PWRSAVE)
+ reg_val |= PORT_POWER_SAVING;
+
+ if (val & KSZ886X_CTRL_REMOTE_LOOPBACK)
+ reg_val |= PORT_PHY_REMOTE_LOOPBACK;
+
+ ret = ksz_prmw8(dev, port, REG_PORT_LINK_MD_CTRL, PORT_FORCE_LINK |
+ PORT_POWER_SAVING | PORT_PHY_REMOTE_LOOPBACK, reg_val);
+ return ret;
+}
+
int ksz8_w_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 val)
{
u8 restart, speed, ctrl, data;
@@ -926,6 +1000,12 @@ int ksz8_w_phy(struct ksz_device *dev, u16 phy, u16 reg, u16 val)
if (val & PHY_START_CABLE_DIAG)
ksz_port_cfg(dev, p, REG_PORT_LINK_MD_CTRL, PORT_START_CABLE_DIAG, true);
break;
+
+ case PHY_REG_PHY_CTRL:
+ ret = ksz8_w_phy_ctrl(dev, p, val);
+ if (ret)
+ return ret;
+ break;
default:
break;
}
diff --git a/drivers/net/dsa/microchip/ksz8795_reg.h b/drivers/net/dsa/microchip/ksz8795_reg.h
index 7a57c6088f80..3c9dae53e4d8 100644
--- a/drivers/net/dsa/microchip/ksz8795_reg.h
+++ b/drivers/net/dsa/microchip/ksz8795_reg.h
@@ -323,13 +323,6 @@
((addr) + REG_PORT_1_CTRL_0 + (port) * \
(REG_PORT_2_CTRL_0 - REG_PORT_1_CTRL_0))
-#define REG_SW_MAC_ADDR_0 0x68
-#define REG_SW_MAC_ADDR_1 0x69
-#define REG_SW_MAC_ADDR_2 0x6A
-#define REG_SW_MAC_ADDR_3 0x6B
-#define REG_SW_MAC_ADDR_4 0x6C
-#define REG_SW_MAC_ADDR_5 0x6D
-
#define TABLE_EXT_SELECT_S 5
#define TABLE_EEE_V 1
#define TABLE_ACL_V 2
@@ -442,20 +435,6 @@
#define TOS_PRIO_M KS_PRIO_M
#define TOS_PRIO_S KS_PRIO_S
-#define REG_SW_CTRL_20 0xA3
-
-#define SW_GMII_DRIVE_STRENGTH_S 4
-#define SW_DRIVE_STRENGTH_M 0x7
-#define SW_DRIVE_STRENGTH_2MA 0
-#define SW_DRIVE_STRENGTH_4MA 1
-#define SW_DRIVE_STRENGTH_8MA 2
-#define SW_DRIVE_STRENGTH_12MA 3
-#define SW_DRIVE_STRENGTH_16MA 4
-#define SW_DRIVE_STRENGTH_20MA 5
-#define SW_DRIVE_STRENGTH_24MA 6
-#define SW_DRIVE_STRENGTH_28MA 7
-#define SW_MII_DRIVE_STRENGTH_S 0
-
#define REG_SW_CTRL_21 0xA4
#define SW_IPV6_MLD_OPTION BIT(3)
diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
index 83b7f2d5c1ea..7f745628c84d 100644
--- a/drivers/net/dsa/microchip/ksz9477.c
+++ b/drivers/net/dsa/microchip/ksz9477.c
@@ -56,6 +56,187 @@ int ksz9477_change_mtu(struct ksz_device *dev, int port, int mtu)
REG_SW_MTU_MASK, frame_size);
}
+/**
+ * ksz9477_handle_wake_reason - Handle wake reason on a specified port.
+ * @dev: The device structure.
+ * @port: The port number.
+ *
+ * This function reads the PME (Power Management Event) status register of a
+ * specified port to determine the wake reason. If there is no wake event, it
+ * returns early. Otherwise, it logs the wake reason which could be due to a
+ * "Magic Packet", "Link Up", or "Energy Detect" event. The PME status register
+ * is then cleared to acknowledge the handling of the wake event.
+ *
+ * Return: 0 on success, or an error code on failure.
+ */
+static int ksz9477_handle_wake_reason(struct ksz_device *dev, int port)
+{
+ u8 pme_status;
+ int ret;
+
+ ret = ksz_pread8(dev, port, REG_PORT_PME_STATUS, &pme_status);
+ if (ret)
+ return ret;
+
+ if (!pme_status)
+ return 0;
+
+ dev_dbg(dev->dev, "Wake event on port %d due to:%s%s%s\n", port,
+ pme_status & PME_WOL_MAGICPKT ? " \"Magic Packet\"" : "",
+ pme_status & PME_WOL_LINKUP ? " \"Link Up\"" : "",
+ pme_status & PME_WOL_ENERGY ? " \"Energy detect\"" : "");
+
+ return ksz_pwrite8(dev, port, REG_PORT_PME_STATUS, pme_status);
+}
+
+/**
+ * ksz9477_get_wol - Get Wake-on-LAN settings for a specified port.
+ * @dev: The device structure.
+ * @port: The port number.
+ * @wol: Pointer to ethtool Wake-on-LAN settings structure.
+ *
+ * This function checks the PME Pin Control Register to see if PME Pin Output
+ * Enable is set, indicating PME is enabled. If enabled, it sets the supported
+ * and active WoL flags.
+ */
+void ksz9477_get_wol(struct ksz_device *dev, int port,
+ struct ethtool_wolinfo *wol)
+{
+ u8 pme_ctrl;
+ int ret;
+
+ if (!dev->wakeup_source)
+ return;
+
+ wol->supported = WAKE_PHY;
+
+ /* Check if the current MAC address on this port can be set
+ * as global for WAKE_MAGIC support. The result may vary
+ * dynamically based on other ports' configurations.
+ */
+ if (ksz_is_port_mac_global_usable(dev->ds, port))
+ wol->supported |= WAKE_MAGIC;
+
+ ret = ksz_pread8(dev, port, REG_PORT_PME_CTRL, &pme_ctrl);
+ if (ret)
+ return;
+
+ if (pme_ctrl & PME_WOL_MAGICPKT)
+ wol->wolopts |= WAKE_MAGIC;
+ if (pme_ctrl & (PME_WOL_LINKUP | PME_WOL_ENERGY))
+ wol->wolopts |= WAKE_PHY;
+}
+
+/**
+ * ksz9477_set_wol - Set Wake-on-LAN settings for a specified port.
+ * @dev: The device structure.
+ * @port: The port number.
+ * @wol: Pointer to ethtool Wake-on-LAN settings structure.
+ *
+ * This function configures Wake-on-LAN (WoL) settings for a specified port.
+ * It validates the provided WoL options, checks if PME is enabled via the
+ * switch's PME Pin Control Register, clears any previous wake reasons,
+ * and sets the Magic Packet flag in the port's PME control register if
+ * specified.
+ *
+ * Return: 0 on success, or other error codes on failure.
+ */
+int ksz9477_set_wol(struct ksz_device *dev, int port,
+ struct ethtool_wolinfo *wol)
+{
+ u8 pme_ctrl = 0, pme_ctrl_old = 0;
+ bool magic_switched_off;
+ bool magic_switched_on;
+ int ret;
+
+ if (wol->wolopts & ~(WAKE_PHY | WAKE_MAGIC))
+ return -EINVAL;
+
+ if (!dev->wakeup_source)
+ return -EOPNOTSUPP;
+
+ ret = ksz9477_handle_wake_reason(dev, port);
+ if (ret)
+ return ret;
+
+ if (wol->wolopts & WAKE_MAGIC)
+ pme_ctrl |= PME_WOL_MAGICPKT;
+ if (wol->wolopts & WAKE_PHY)
+ pme_ctrl |= PME_WOL_LINKUP | PME_WOL_ENERGY;
+
+ ret = ksz_pread8(dev, port, REG_PORT_PME_CTRL, &pme_ctrl_old);
+ if (ret)
+ return ret;
+
+ if (pme_ctrl_old == pme_ctrl)
+ return 0;
+
+ magic_switched_off = (pme_ctrl_old & PME_WOL_MAGICPKT) &&
+ !(pme_ctrl & PME_WOL_MAGICPKT);
+ magic_switched_on = !(pme_ctrl_old & PME_WOL_MAGICPKT) &&
+ (pme_ctrl & PME_WOL_MAGICPKT);
+
+ /* To keep the MAC address reference count consistent, do this
+ * operation only when the WoL settings change.
+ */
+ if (magic_switched_on) {
+ ret = ksz_switch_macaddr_get(dev->ds, port, NULL);
+ if (ret)
+ return ret;
+ } else if (magic_switched_off) {
+ ksz_switch_macaddr_put(dev->ds);
+ }
+
+ ret = ksz_pwrite8(dev, port, REG_PORT_PME_CTRL, pme_ctrl);
+ if (ret) {
+ if (magic_switched_on)
+ ksz_switch_macaddr_put(dev->ds);
+ return ret;
+ }
+
+ return 0;
+}
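From user space these hooks are reached through the regular ethtool Wake-on-LAN interface; roughly, `ethtool -s lan1 wol gp` requests Magic Packet plus PHY (link/energy) wake-up. A minimal, illustrative C sketch of the same request via the legacy ioctl path (the interface name "lan1" is an assumption, error handling is kept minimal):

	#include <linux/ethtool.h>
	#include <linux/if.h>
	#include <linux/sockios.h>
	#include <string.h>
	#include <sys/ioctl.h>
	#include <sys/socket.h>
	#include <unistd.h>

	/* Enable Magic Packet + PHY wake-up on the given interface. */
	static int enable_wol(const char *ifname)
	{
		struct ethtool_wolinfo wol = { .cmd = ETHTOOL_SWOL };
		struct ifreq ifr;
		int fd, ret;

		wol.wolopts = WAKE_MAGIC | WAKE_PHY;

		memset(&ifr, 0, sizeof(ifr));
		strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
		ifr.ifr_data = (void *)&wol;

		fd = socket(AF_INET, SOCK_DGRAM, 0);
		if (fd < 0)
			return -1;

		/* For a KSZ9477 user port this ends up in ksz9477_set_wol() */
		ret = ioctl(fd, SIOCETHTOOL, &ifr);
		close(fd);
		return ret;
	}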
+
+/**
+ * ksz9477_wol_pre_shutdown - Prepares the switch device for shutdown while
+ * considering Wake-on-LAN (WoL) settings.
+ * @dev: The switch device structure.
+ * @wol_enabled: Pointer to a boolean which will be set to true if WoL is
+ * enabled on any port.
+ *
+ * This function prepares the switch device for a safe shutdown while taking
+ * into account the Wake-on-LAN (WoL) settings on the user ports. It updates
+ * the wol_enabled flag accordingly to reflect whether WoL is active on any
+ * port.
+ */
+void ksz9477_wol_pre_shutdown(struct ksz_device *dev, bool *wol_enabled)
+{
+ struct dsa_port *dp;
+ int ret;
+
+ *wol_enabled = false;
+
+ if (!dev->wakeup_source)
+ return;
+
+ dsa_switch_for_each_user_port(dp, dev->ds) {
+ u8 pme_ctrl = 0;
+
+ ret = ksz_pread8(dev, dp->index, REG_PORT_PME_CTRL, &pme_ctrl);
+ if (!ret && pme_ctrl)
+ *wol_enabled = true;
+
+ /* make sure there are no pending wake events which would
+ * prevent the device from going to sleep/shutdown.
+ */
+ ksz9477_handle_wake_reason(dev, dp->index);
+ }
+
+ /* Now we are safe to enable the PME pin. */
+ if (*wol_enabled)
+ ksz_write8(dev, REG_SW_PME_CTRL, PME_ENABLE);
+}
+
static int ksz9477_wait_vlan_ctrl_ready(struct ksz_device *dev)
{
unsigned int val;
@@ -1004,6 +1185,16 @@ void ksz9477_port_setup(struct ksz_device *dev, int port, bool cpu_port)
/* clear pending interrupts */
if (dev->info->internal_phy[port])
ksz_pread16(dev, port, REG_PORT_PHY_INT_ENABLE, &data16);
+
+ ksz9477_port_acl_init(dev, port);
+
+ /* clear pending wake flags */
+ ksz9477_handle_wake_reason(dev, port);
+
+ /* Disable all WoL options by default. Otherwise
+ * ksz_switch_macaddr_get/put logic will not work properly.
+ */
+ ksz_pwrite8(dev, port, REG_PORT_PME_CTRL, 0);
}
void ksz9477_config_cpu_port(struct dsa_switch *ds)
@@ -1126,6 +1317,12 @@ int ksz9477_setup(struct dsa_switch *ds)
/* enable global MIB counter freeze function */
ksz_cfg(dev, REG_SW_MAC_CTRL_6, SW_MIB_COUNTER_FREEZE, true);
+ /* Make sure PME (WoL) is not enabled. If requested, it will be
+ * enabled by ksz9477_wol_pre_shutdown(). Otherwise, some PMICs do not
+ * like PME event changes before shutdown.
+ */
+ ksz_write8(dev, REG_SW_PME_CTRL, 0);
+
return 0;
}
@@ -1141,6 +1338,83 @@ int ksz9477_tc_cbs_set_cinc(struct ksz_device *dev, int port, u32 val)
return ksz_pwrite16(dev, port, REG_PORT_MTI_CREDIT_INCREMENT, val);
}
+/* The KSZ9477 provides the following HW features to accelerate
+ * HSR frame handling:
+ *
+ * 1. TX PACKET DUPLICATION FROM HOST TO SWITCH
+ * 2. RX PACKET DUPLICATION DISCARDING
+ * 3. PREVENTING PACKET LOOP IN THE RING BY SELF-ADDRESS FILTERING
+ *
+ * Only the feature from point 1 has a NETIF_F* flag available.
+ *
+ * The ones from points 2 and 3 are "best effort" - i.e. they will
+ * work correctly most of the time, but it may happen that some
+ * frames will not be caught - to be more specific, there is a race
+ * condition in hardware such that, when duplicate packets are received
+ * on member ports very close in time to each other, the hardware fails
+ * to detect that they are duplicates.
+ *
+ * Hence, the SW needs to handle those special cases. However, the
+ * speed-up gain is considerable when the above features are used.
+ *
+ * Moreover, the NETIF_F_HW_HSR_FWD feature is also enabled, as HSR frames
+ * can be forwarded in the switch fabric between HSR ports.
+ */
+#define KSZ9477_SUPPORTED_HSR_FEATURES (NETIF_F_HW_HSR_DUP | NETIF_F_HW_HSR_FWD)
+
+void ksz9477_hsr_join(struct dsa_switch *ds, int port, struct net_device *hsr)
+{
+ struct ksz_device *dev = ds->priv;
+ struct net_device *user;
+ struct dsa_port *hsr_dp;
+ u8 data, hsr_ports = 0;
+
+ /* Program which port(s) shall support HSR */
+ ksz_rmw32(dev, REG_HSR_PORT_MAP__4, BIT(port), BIT(port));
+
+ /* Forward frames between HSR ports (i.e. bridge together HSR ports) */
+ if (dev->hsr_ports) {
+ dsa_hsr_foreach_port(hsr_dp, ds, hsr)
+ hsr_ports |= BIT(hsr_dp->index);
+
+ hsr_ports |= BIT(dsa_upstream_port(ds, port));
+ dsa_hsr_foreach_port(hsr_dp, ds, hsr)
+ ksz9477_cfg_port_member(dev, hsr_dp->index, hsr_ports);
+ }
+
+ if (!dev->hsr_ports) {
+ /* Enable discarding of received HSR frames */
+ ksz_read8(dev, REG_HSR_ALU_CTRL_0__1, &data);
+ data |= HSR_DUPLICATE_DISCARD;
+ data &= ~HSR_NODE_UNICAST;
+ ksz_write8(dev, REG_HSR_ALU_CTRL_0__1, data);
+ }
+
+ /* Enable per port self-address filtering.
+ * The global self-address filtering has already been enabled in the
+ * ksz9477_reset_switch() function.
+ */
+ ksz_port_cfg(dev, port, REG_PORT_LUE_CTRL, PORT_SRC_ADDR_FILTER, true);
+
+ /* Set up HW supported features for the HSR user ports */
+ user = dsa_to_port(ds, port)->user;
+ user->features |= KSZ9477_SUPPORTED_HSR_FEATURES;
+}
+
+void ksz9477_hsr_leave(struct dsa_switch *ds, int port, struct net_device *hsr)
+{
+ struct ksz_device *dev = ds->priv;
+
+ /* Clear port HSR support */
+ ksz_rmw32(dev, REG_HSR_PORT_MAP__4, BIT(port), 0);
+
+ /* Disable forwarding frames between HSR ports */
+ ksz9477_cfg_port_member(dev, port, BIT(dsa_upstream_port(ds, port)));
+
+ /* Disable per port self-address filtering */
+ ksz_port_cfg(dev, port, REG_PORT_LUE_CTRL, PORT_SRC_ADDR_FILTER, false);
+}
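As a usage note (interface names assumed): these offload paths are exercised when an HSR device is created on top of two user ports of the switch, e.g. with iproute2's `ip link add name hsr0 type hsr slave1 lan1 slave2 lan2`. Enslaving each port triggers ksz9477_hsr_join() for it, and deleting the hsr device triggers ksz9477_hsr_leave().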
+
int ksz9477_switch_init(struct ksz_device *dev)
{
u8 data8;
diff --git a/drivers/net/dsa/microchip/ksz9477.h b/drivers/net/dsa/microchip/ksz9477.h
index a6f425866a29..ce1e656b800b 100644
--- a/drivers/net/dsa/microchip/ksz9477.h
+++ b/drivers/net/dsa/microchip/ksz9477.h
@@ -56,5 +56,48 @@ int ksz9477_reset_switch(struct ksz_device *dev);
int ksz9477_switch_init(struct ksz_device *dev);
void ksz9477_switch_exit(struct ksz_device *dev);
void ksz9477_port_queue_split(struct ksz_device *dev, int port);
+void ksz9477_hsr_join(struct dsa_switch *ds, int port, struct net_device *hsr);
+void ksz9477_hsr_leave(struct dsa_switch *ds, int port, struct net_device *hsr);
+void ksz9477_get_wol(struct ksz_device *dev, int port,
+ struct ethtool_wolinfo *wol);
+int ksz9477_set_wol(struct ksz_device *dev, int port,
+ struct ethtool_wolinfo *wol);
+void ksz9477_wol_pre_shutdown(struct ksz_device *dev, bool *wol_enabled);
+
+int ksz9477_port_acl_init(struct ksz_device *dev, int port);
+void ksz9477_port_acl_free(struct ksz_device *dev, int port);
+int ksz9477_cls_flower_add(struct dsa_switch *ds, int port,
+ struct flow_cls_offload *cls, bool ingress);
+int ksz9477_cls_flower_del(struct dsa_switch *ds, int port,
+ struct flow_cls_offload *cls, bool ingress);
+
+#define KSZ9477_ACL_ENTRY_SIZE 18
+#define KSZ9477_ACL_MAX_ENTRIES 16
+
+struct ksz9477_acl_entry {
+ u8 entry[KSZ9477_ACL_ENTRY_SIZE];
+ unsigned long cookie;
+ u32 prio;
+};
+
+struct ksz9477_acl_entries {
+ struct ksz9477_acl_entry entries[KSZ9477_ACL_MAX_ENTRIES];
+ int entries_count;
+};
+
+struct ksz9477_acl_priv {
+ struct ksz9477_acl_entries acles;
+};
+
+void ksz9477_acl_remove_entries(struct ksz_device *dev, int port,
+ struct ksz9477_acl_entries *acles,
+ unsigned long cookie);
+int ksz9477_acl_write_list(struct ksz_device *dev, int port);
+int ksz9477_sort_acl_entries(struct ksz_device *dev, int port);
+void ksz9477_acl_action_rule_cfg(u8 *entry, bool force_prio, u8 prio_val);
+void ksz9477_acl_processing_rule_set_action(u8 *entry, u8 action_idx);
+void ksz9477_acl_match_process_l2(struct ksz_device *dev, int port,
+ u16 ethtype, u8 *src_mac, u8 *dst_mac,
+ unsigned long cookie, u32 prio);
#endif
diff --git a/drivers/net/dsa/microchip/ksz9477_acl.c b/drivers/net/dsa/microchip/ksz9477_acl.c
new file mode 100644
index 000000000000..7ba778df63ac
--- /dev/null
+++ b/drivers/net/dsa/microchip/ksz9477_acl.c
@@ -0,0 +1,1436 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2023 Pengutronix, Oleksij Rempel <kernel@pengutronix.de>
+
+/* Access Control List (ACL) structure:
+ *
+ * There are multiple groups of registers involved in ACL configuration:
+ *
+ * - Matching Rules: These registers define the criteria for matching incoming
+ * packets based on their header information (Layer 2 MAC, Layer 3 IP, or
+ * Layer 4 TCP/UDP). Different register settings are used depending on the
+ * matching rule mode (MD) and the Enable (ENB) settings.
+ *
+ * - Action Rules: These registers define how the ACL should modify the packet's
+ * priority, VLAN tag priority, and forwarding map once a matching rule has
+ * been triggered. The settings vary depending on whether the matching rule is
+ * in Count Mode (MD = 01 and ENB = 00) or not.
+ *
+ * - Processing Rules: These registers control the overall behavior of the ACL,
+ * such as selecting which matching rule to apply first, enabling/disabling
+ * specific rules, or specifying actions for matched packets.
+ *
+ * ACL Structure:
+ * +----------------------+
+ * +----------------------+ | (optional) |
+ * | Matching Rules | | Matching Rules |
+ * | (Layer 2, 3, 4) | | (Layer 2, 3, 4) |
+ * +----------------------+ +----------------------+
+ * | |
+ * \___________________________/
+ * v
+ * +----------------------+
+ * | Processing Rules |
+ * | (action idx, |
+ * | matching rule set) |
+ * +----------------------+
+ * |
+ * v
+ * +----------------------+
+ * | Action Rules |
+ * | (Modify Priority, |
+ * | Forwarding Map, |
+ * | VLAN tag, etc) |
+ * +----------------------+
+ */
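As a concrete, illustrative example of how these groups combine: a tc-flower rule that matches on a destination MAC address and forces a queue priority would occupy one Matching rule programmed in L2 MAC mode, one Processing rule that selects it, and one Action rule carrying the priority override - which is essentially what the ksz9477_acl_match_process_l2() helper declared in ksz9477.h is for.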
+
+#include <linux/bitops.h>
+
+#include "ksz9477.h"
+#include "ksz9477_reg.h"
+#include "ksz_common.h"
+
+#define KSZ9477_PORT_ACL_0 0x600
+
+enum ksz9477_acl_port_access {
+ KSZ9477_ACL_PORT_ACCESS_0 = 0x00,
+ KSZ9477_ACL_PORT_ACCESS_1 = 0x01,
+ KSZ9477_ACL_PORT_ACCESS_2 = 0x02,
+ KSZ9477_ACL_PORT_ACCESS_3 = 0x03,
+ KSZ9477_ACL_PORT_ACCESS_4 = 0x04,
+ KSZ9477_ACL_PORT_ACCESS_5 = 0x05,
+ KSZ9477_ACL_PORT_ACCESS_6 = 0x06,
+ KSZ9477_ACL_PORT_ACCESS_7 = 0x07,
+ KSZ9477_ACL_PORT_ACCESS_8 = 0x08,
+ KSZ9477_ACL_PORT_ACCESS_9 = 0x09,
+ KSZ9477_ACL_PORT_ACCESS_A = 0x0A,
+ KSZ9477_ACL_PORT_ACCESS_B = 0x0B,
+ KSZ9477_ACL_PORT_ACCESS_C = 0x0C,
+ KSZ9477_ACL_PORT_ACCESS_D = 0x0D,
+ KSZ9477_ACL_PORT_ACCESS_E = 0x0E,
+ KSZ9477_ACL_PORT_ACCESS_F = 0x0F,
+ KSZ9477_ACL_PORT_ACCESS_10 = 0x10,
+ KSZ9477_ACL_PORT_ACCESS_11 = 0x11
+};
+
+#define KSZ9477_ACL_MD_MASK GENMASK(5, 4)
+#define KSZ9477_ACL_MD_DISABLE 0
+#define KSZ9477_ACL_MD_L2_MAC 1
+#define KSZ9477_ACL_MD_L3_IP 2
+#define KSZ9477_ACL_MD_L4_TCP_UDP 3
+
+#define KSZ9477_ACL_ENB_MASK GENMASK(3, 2)
+#define KSZ9477_ACL_ENB_L2_COUNTER 0
+#define KSZ9477_ACL_ENB_L2_TYPE 1
+#define KSZ9477_ACL_ENB_L2_MAC 2
+#define KSZ9477_ACL_ENB_L2_MAC_TYPE 3
+
+/* only IPv4 src or dst can be used with mask */
+#define KSZ9477_ACL_ENB_L3_IPV4_ADDR_MASK 1
+/* only IPv4 src and dst can be used without mask */
+#define KSZ9477_ACL_ENB_L3_IPV4_ADDR_SRC_DST 2
+
+#define KSZ9477_ACL_ENB_L4_IP_PROTO 0
+#define KSZ9477_ACL_ENB_L4_TCP_SRC_DST_PORT 1
+#define KSZ9477_ACL_ENB_L4_UDP_SRC_DST_PORT 2
+#define KSZ9477_ACL_ENB_L4_TCP_SEQ_NUMBER 3
+
+#define KSZ9477_ACL_SD_SRC BIT(1)
+#define KSZ9477_ACL_SD_DST 0
+#define KSZ9477_ACL_EQ_EQUAL BIT(0)
+#define KSZ9477_ACL_EQ_NOT_EQUAL 0
+
+#define KSZ9477_ACL_PM_M GENMASK(7, 6)
+#define KSZ9477_ACL_PM_DISABLE 0
+#define KSZ9477_ACL_PM_HIGHER 1
+#define KSZ9477_ACL_PM_LOWER 2
+#define KSZ9477_ACL_PM_REPLACE 3
+#define KSZ9477_ACL_P_M GENMASK(5, 3)
+
+#define KSZ9477_PORT_ACL_CTRL_0 0x0612
+
+#define KSZ9477_ACL_WRITE_DONE BIT(6)
+#define KSZ9477_ACL_READ_DONE BIT(5)
+#define KSZ9477_ACL_WRITE BIT(4)
+#define KSZ9477_ACL_INDEX_M GENMASK(3, 0)
+
+/**
+ * ksz9477_dump_acl_index - Print the ACL entry at the specified index
+ *
+ * @dev: Pointer to the ksz9477 device structure.
+ * @acle: Pointer to the ACL entry array.
+ * @index: The index of the ACL entry to print.
+ *
+ * This function prints the details of an ACL entry, located at a particular
+ * index within the ksz9477 device's ACL table. It omits printing entries that
+ * are empty.
+ *
+ * Return: 1 if the entry is non-empty and printed, 0 otherwise.
+ */
+static int ksz9477_dump_acl_index(struct ksz_device *dev,
+ struct ksz9477_acl_entry *acle, int index)
+{
+ bool empty = true;
+ char buf[64];
+ u8 *entry;
+ int i;
+
+ entry = &acle[index].entry[0];
+ for (i = 0; i <= KSZ9477_ACL_PORT_ACCESS_11; i++) {
+ if (entry[i])
+ empty = false;
+
+ sprintf(buf + (i * 3), "%02x ", entry[i]);
+ }
+
+ /* no need to print empty entries */
+ if (empty)
+ return 0;
+
+ dev_err(dev->dev, " Entry %02d, prio: %02d : %s", index,
+ acle[index].prio, buf);
+
+ return 1;
+}
+
+/**
+ * ksz9477_dump_acl - Print ACL entries
+ *
+ * @dev: Pointer to the device structure.
+ * @acle: Pointer to the ACL entry array.
+ */
+static void ksz9477_dump_acl(struct ksz_device *dev,
+ struct ksz9477_acl_entry *acle)
+{
+ int count = 0;
+ int i;
+
+ for (i = 0; i < KSZ9477_ACL_MAX_ENTRIES; i++)
+ count += ksz9477_dump_acl_index(dev, acle, i);
+
+ if (count != KSZ9477_ACL_MAX_ENTRIES - 1)
+ dev_err(dev->dev, " Empty ACL entries were skipped\n");
+}
+
+/**
+ * ksz9477_acl_is_valid_matching_rule - Check if an ACL entry contains a valid
+ * matching rule.
+ *
+ * @entry: Pointer to ACL entry buffer
+ *
+ * This function checks if the given ACL entry buffer contains a valid
+ * matching rule by inspecting the Mode (MD) and Enable (ENB) fields.
+ *
+ * Returns: True if it's a valid matching rule, false otherwise.
+ */
+static bool ksz9477_acl_is_valid_matching_rule(u8 *entry)
+{
+ u8 val1, md, enb;
+
+ val1 = entry[KSZ9477_ACL_PORT_ACCESS_1];
+
+ md = FIELD_GET(KSZ9477_ACL_MD_MASK, val1);
+ if (md == KSZ9477_ACL_MD_DISABLE)
+ return false;
+
+ if (md == KSZ9477_ACL_MD_L2_MAC) {
+ /* L2 counter is not supported, so it is not a valid rule for now */
+ enb = FIELD_GET(KSZ9477_ACL_ENB_MASK, val1);
+ if (enb == KSZ9477_ACL_ENB_L2_COUNTER)
+ return false;
+ }
+
+ return true;
+}
+
+/**
+ * ksz9477_acl_get_cont_entr - Get count of contiguous ACL entries and validate
+ * the matching rules.
+ * @dev: Pointer to the KSZ9477 device structure.
+ * @port: Port number.
+ * @index: Index of the starting ACL entry.
+ *
+ * Based on the KSZ9477 switch's Access Control List (ACL) system, the RuleSet
+ * in an ACL entry indicates which entries contain Matching rules linked to it.
+ * This RuleSet is represented by two registers: KSZ9477_ACL_PORT_ACCESS_E and
+ * KSZ9477_ACL_PORT_ACCESS_F. Each bit set in these registers corresponds to
+ * an entry containing a Matching rule for this RuleSet.
+ *
+ * For a single Matching rule linked, only one bit is set. However, when an
+ * entry links multiple Matching rules, forming what's termed a 'complex rule',
+ * multiple bits are set in these registers.
+ *
+ * This function checks that, for complex rules, the entries containing the
+ * linked Matching rules are contiguous in terms of their indices. It calculates
+ * and returns the number of these contiguous entries.
+ *
+ * Returns:
+ * - 0 if the entry is empty and can be safely overwritten
+ * - 1 if the entry represents a simple rule
+ * - The number of contiguous entries if it is the root entry of a complex
+ * rule
+ * - -ENOTEMPTY if the entry is part of a complex rule but not the root
+ * entry
+ * - -EINVAL if the validation fails
+ */
+static int ksz9477_acl_get_cont_entr(struct ksz_device *dev, int port,
+ int index)
+{
+ struct ksz9477_acl_priv *acl = dev->ports[port].acl_priv;
+ struct ksz9477_acl_entries *acles = &acl->acles;
+ int start_idx, end_idx, contiguous_count;
+ unsigned long val;
+ u8 vale, valf;
+ u8 *entry;
+ int i;
+
+ entry = &acles->entries[index].entry[0];
+ vale = entry[KSZ9477_ACL_PORT_ACCESS_E];
+ valf = entry[KSZ9477_ACL_PORT_ACCESS_F];
+
+ val = (vale << 8) | valf;
+
+ /* If no bits are set, return an appropriate value or error */
+ if (!val) {
+ if (ksz9477_acl_is_valid_matching_rule(entry)) {
+ /* Looks like we are about to corrupt some complex rule.
+ * Do not print an error here, as this is a normal case
+ * when we are trying to find a free or starting entry.
+ */
+ dev_dbg(dev->dev, "ACL: entry %d starting with a valid matching rule, but no bits set in RuleSet\n",
+ index);
+ return -ENOTEMPTY;
+ }
+
+ /* This entry does not contain a valid matching rule */
+ return 0;
+ }
+
+ start_idx = find_first_bit((unsigned long *)&val, 16);
+ end_idx = find_last_bit((unsigned long *)&val, 16);
+
+ /* Calculate the contiguous count */
+ contiguous_count = end_idx - start_idx + 1;
+
+ /* Check if the number of bits set in val matches our calculated count */
+ if (contiguous_count != hweight16(val)) {
+ /* Probably we have a fragmented complex rule, which is not
+ * supported by this driver.
+ */
+ dev_err(dev->dev, "ACL: number of bits set in RuleSet does not match calculated count\n");
+ return -EINVAL;
+ }
+
+ /* loop over the contiguous entries and check for valid matching rules */
+ for (i = start_idx; i <= end_idx; i++) {
+ u8 *current_entry = &acles->entries[i].entry[0];
+
+ if (!ksz9477_acl_is_valid_matching_rule(current_entry)) {
+ /* We have something linked without a valid matching
+ * rule - the ACL table may be corrupted.
+ */
+ dev_err(dev->dev, "ACL: entry %d does not contain a valid matching rule\n",
+ i);
+ return -EINVAL;
+ }
+
+ if (i > start_idx) {
+ vale = current_entry[KSZ9477_ACL_PORT_ACCESS_E];
+ valf = current_entry[KSZ9477_ACL_PORT_ACCESS_F];
+ /* Following entry should have empty linkage list */
+ if (vale || valf) {
+ dev_err(dev->dev, "ACL: entry %d has non-empty RuleSet linkage\n",
+ i);
+ return -EINVAL;
+ }
+ }
+ }
+
+ return contiguous_count;
+}
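Worked example (register values assumed): if the root entry of a complex rule has KSZ9477_ACL_PORT_ACCESS_E = 0x00 and KSZ9477_ACL_PORT_ACCESS_F = 0x1c, the combined RuleSet is 0x001c, i.e. bits 2..4 are set. The linked Matching rules therefore live in entries 2, 3 and 4, the bit count matches the contiguous span, and the function returns 3.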
+
+/**
+ * ksz9477_acl_update_linkage - Update the RuleSet linkage for an ACL entry
+ * after a move operation.
+ *
+ * @dev: Pointer to the ksz_device.
+ * @entry: Pointer to the ACL entry array.
+ * @old_idx: The original index of the ACL entry before moving.
+ * @new_idx: The new index of the ACL entry after moving.
+ *
+ * This function updates the RuleSet linkage bits for an ACL entry when
+ * it's moved from one position to another in the ACL table. The RuleSet
+ * linkage is represented by two 8-bit registers, which are combined
+ * into a 16-bit value for easier manipulation. The linkage bits are shifted
+ * based on the difference between the old and new index. If any bits are lost
+ * during the shift operation, an error is returned.
+ *
+ * Note: Fragmentation within a RuleSet is not supported. Hence, entries must
+ * be moved as complete blocks, maintaining the integrity of the RuleSet.
+ *
+ * Returns: 0 on success, or -EINVAL if any RuleSet linkage bits are lost
+ * during the move.
+ */
+static int ksz9477_acl_update_linkage(struct ksz_device *dev, u8 *entry,
+ u16 old_idx, u16 new_idx)
+{
+ unsigned int original_bit_count;
+ unsigned long rule_linkage;
+ u8 vale, valf, val0;
+ int shift;
+
+ val0 = entry[KSZ9477_ACL_PORT_ACCESS_0];
+ vale = entry[KSZ9477_ACL_PORT_ACCESS_E];
+ valf = entry[KSZ9477_ACL_PORT_ACCESS_F];
+
+ /* Combine the two u8 values into one u16 for easier manipulation */
+ rule_linkage = (vale << 8) | valf;
+ original_bit_count = hweight16(rule_linkage);
+
+ /* Even if HW is able to handle fragmented RuleSet, we don't support it.
+ * RuleSet is filled only for the first entry of the set.
+ */
+ if (!rule_linkage)
+ return 0;
+
+ if (val0 != old_idx) {
+ dev_err(dev->dev, "ACL: entry %d has unexpected ActionRule linkage: %d\n",
+ old_idx, val0);
+ return -EINVAL;
+ }
+
+ val0 = new_idx;
+
+ /* Calculate the number of positions to shift */
+ shift = new_idx - old_idx;
+
+ /* Shift the RuleSet */
+ if (shift > 0)
+ rule_linkage <<= shift;
+ else
+ rule_linkage >>= -shift;
+
+ /* Check that no bits were lost in the process */
+ if (original_bit_count != hweight16(rule_linkage)) {
+ dev_err(dev->dev, "ACL RuleSet linkage bits lost during move\n");
+ return -EINVAL;
+ }
+
+ entry[KSZ9477_ACL_PORT_ACCESS_0] = val0;
+
+ /* Update the RuleSet bitfields in the entry */
+ entry[KSZ9477_ACL_PORT_ACCESS_E] = (rule_linkage >> 8) & 0xFF;
+ entry[KSZ9477_ACL_PORT_ACCESS_F] = rule_linkage & 0xFF;
+
+ return 0;
+}
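Worked example (values assumed): moving an entry from index 2 to index 5 with rule_linkage = 0x000c (Matching rules in entries 2 and 3) shifts the mask left by 3 to 0x0060; the bit count stays at 2, so the move is accepted, ACCESS_0 is rewritten from 2 to 5, and the E/F registers become 0x00 and 0x60.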
+
+/**
+ * ksz9477_validate_and_get_src_count - Validate source and destination indices
+ * and determine the source entry count.
+ * @dev: Pointer to the KSZ device structure.
+ * @port: Port number on the KSZ device where the ACL entries reside.
+ * @src_idx: Index of the starting ACL entry that needs to be validated.
+ * @dst_idx: Index of the destination where the source entries are intended to
+ * be moved.
+ * @src_count: Pointer to the variable that will hold the number of contiguous
+ * source entries if the validation passes.
+ * @dst_count: Pointer to the variable that will hold the number of contiguous
+ * destination entries if the validation passes.
+ *
+ * This function performs validation on the source and destination indices
+ * provided for ACL entries. It checks if the indices are within the valid
+ * range, and if the source entries are contiguous. Additionally, the function
+ * ensures that there's adequate space at the destination for the source entries
+ * and that the destination index isn't in the middle of a RuleSet. If all
+ * validations pass, the function returns the number of contiguous source and
+ * destination entries.
+ *
+ * Return: 0 on success, otherwise returns a negative error code if any
+ * validation check fails.
+ */
+static int ksz9477_validate_and_get_src_count(struct ksz_device *dev, int port,
+ int src_idx, int dst_idx,
+ int *src_count, int *dst_count)
+{
+ int ret;
+
+ if (src_idx >= KSZ9477_ACL_MAX_ENTRIES ||
+ dst_idx >= KSZ9477_ACL_MAX_ENTRIES) {
+ dev_err(dev->dev, "ACL: invalid entry index\n");
+ return -EINVAL;
+ }
+
+ /* Validate if the source entries are contiguous */
+ ret = ksz9477_acl_get_cont_entr(dev, port, src_idx);
+ if (ret < 0)
+ return ret;
+ *src_count = ret;
+
+ if (!*src_count) {
+ dev_err(dev->dev, "ACL: source entry is empty\n");
+ return -EINVAL;
+ }
+
+ if (dst_idx + *src_count >= KSZ9477_ACL_MAX_ENTRIES) {
+ dev_err(dev->dev, "ACL: Not enough space at the destination. Move operation will fail.\n");
+ return -EINVAL;
+ }
+
+ /* Validate if the destination entry is empty or not in the middle of
+ * a RuleSet.
+ */
+ ret = ksz9477_acl_get_cont_entr(dev, port, dst_idx);
+ if (ret < 0)
+ return ret;
+ *dst_count = ret;
+
+ return 0;
+}
+
+/**
+ * ksz9477_move_entries_downwards - Move a range of ACL entries downwards in
+ * the list.
+ * @dev: Pointer to the KSZ device structure.
+ * @acles: Pointer to the structure encapsulating all the ACL entries.
+ * @start_idx: Starting index of the entries to be relocated.
+ * @num_entries_to_move: Number of consecutive entries to be relocated.
+ * @end_idx: Destination index where the first entry should be situated post
+ * relocation.
+ *
+ * This function is responsible for rearranging a specific block of ACL entries
+ * by shifting them downwards in the list based on the supplied source and
+ * destination indices. It ensures that the linkage between the ACL entries is
+ * maintained accurately after the relocation.
+ *
+ * Return: 0 on successful relocation of entries, otherwise returns a negative
+ * error code.
+ */
+static int ksz9477_move_entries_downwards(struct ksz_device *dev,
+ struct ksz9477_acl_entries *acles,
+ u16 start_idx,
+ u16 num_entries_to_move,
+ u16 end_idx)
+{
+ struct ksz9477_acl_entry *e;
+ int ret, i;
+
+ for (i = start_idx; i < end_idx; i++) {
+ e = &acles->entries[i];
+ *e = acles->entries[i + num_entries_to_move];
+
+ ret = ksz9477_acl_update_linkage(dev, &e->entry[0],
+ i + num_entries_to_move, i);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * ksz9477_move_entries_upwards - Move a range of ACL entries upwards in the
+ * list.
+ * @dev: Pointer to the KSZ device structure.
+ * @acles: Pointer to the structure holding all the ACL entries.
+ * @start_idx: The starting index of the entries to be moved.
+ * @num_entries_to_move: Number of contiguous entries to be moved.
+ * @target_idx: The destination index where the first entry should be placed
+ * after moving.
+ *
+ * This function rearranges a chunk of ACL entries by moving them upwards
+ * in the list based on the given source and destination indices. The reordering
+ * process preserves the linkage between entries by updating it accordingly.
+ *
+ * Return: 0 if the entries were successfully moved, otherwise a negative error
+ * code.
+ */
+static int ksz9477_move_entries_upwards(struct ksz_device *dev,
+ struct ksz9477_acl_entries *acles,
+ u16 start_idx, u16 num_entries_to_move,
+ u16 target_idx)
+{
+ struct ksz9477_acl_entry *e;
+ int ret, i, b;
+
+ for (i = start_idx; i > target_idx; i--) {
+ b = i + num_entries_to_move - 1;
+
+ e = &acles->entries[b];
+ *e = acles->entries[i - 1];
+
+ ret = ksz9477_acl_update_linkage(dev, &e->entry[0], i - 1, b);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * ksz9477_acl_move_entries - Move a block of contiguous ACL entries from a
+ * source to a destination index.
+ * @dev: Pointer to the KSZ9477 device structure.
+ * @port: Port number.
+ * @src_idx: Index of the starting source ACL entry.
+ * @dst_idx: Index of the starting destination ACL entry.
+ *
+ * This function aims to move a block of contiguous ACL entries from the source
+ * index to the destination index while ensuring the integrity and validity of
+ * the ACL table.
+ *
+ * In case of any errors during the adjustments or copying, the function will
+ * restore the ACL entries to their original state from the backup.
+ *
+ * Return: 0 if the move operation is successful. Returns -EINVAL for validation
+ * errors or other error codes based on specific failure conditions.
+ */
+static int ksz9477_acl_move_entries(struct ksz_device *dev, int port,
+ u16 src_idx, u16 dst_idx)
+{
+ struct ksz9477_acl_entry buffer[KSZ9477_ACL_MAX_ENTRIES];
+ struct ksz9477_acl_priv *acl = dev->ports[port].acl_priv;
+ struct ksz9477_acl_entries *acles = &acl->acles;
+ int src_count, ret, dst_count;
+
+ /* Nothing to do */
+ if (src_idx == dst_idx)
+ return 0;
+
+ ret = ksz9477_validate_and_get_src_count(dev, port, src_idx, dst_idx,
+ &src_count, &dst_count);
+ if (ret)
+ return ret;
+
+ /* In case dst_idx is greater than src_idx, we need to adjust the
+ * destination index to account for the entries that will be moved
+ * downwards and the size of the entry located at dst_idx.
+ */
+ if (dst_idx > src_idx)
+ dst_idx = dst_idx + dst_count - src_count;
+
+ /* Copy source block to buffer and update its linkage */
+ for (int i = 0; i < src_count; i++) {
+ buffer[i] = acles->entries[src_idx + i];
+ ret = ksz9477_acl_update_linkage(dev, &buffer[i].entry[0],
+ src_idx + i, dst_idx + i);
+ if (ret < 0)
+ return ret;
+ }
+
+ /* Adjust other entries and their linkage based on destination */
+ if (dst_idx > src_idx) {
+ ret = ksz9477_move_entries_downwards(dev, acles, src_idx,
+ src_count, dst_idx);
+ } else {
+ ret = ksz9477_move_entries_upwards(dev, acles, src_idx,
+ src_count, dst_idx);
+ }
+ if (ret < 0)
+ return ret;
+
+ /* Copy buffer to destination block */
+ for (int i = 0; i < src_count; i++)
+ acles->entries[dst_idx + i] = buffer[i];
+
+ return 0;
+}
+
+/**
+ * ksz9477_get_next_block_start - Identify the starting index of the next ACL
+ * block.
+ * @dev: Pointer to the device structure.
+ * @port: The port number on which the ACL entries are being checked.
+ * @start: The starting index from which the search begins.
+ *
+ * This function looks for the next valid ACL block starting from the provided
+ * 'start' index and returns the beginning index of that block. If the block is
+ * invalid or if it reaches the end of the ACL entries without finding another
+ * block, it returns the maximum ACL entries count.
+ *
+ * Returns:
+ * - The starting index of the next valid ACL block.
+ * - KSZ9477_ACL_MAX_ENTRIES if no other valid blocks are found after 'start'.
+ * - A negative error code if an error occurs while checking.
+ */
+static int ksz9477_get_next_block_start(struct ksz_device *dev, int port,
+ int start)
+{
+ int block_size;
+
+ for (int i = start; i < KSZ9477_ACL_MAX_ENTRIES;) {
+ block_size = ksz9477_acl_get_cont_entr(dev, port, i);
+ if (block_size < 0 && block_size != -ENOTEMPTY)
+ return block_size;
+
+ if (block_size > 0)
+ return i;
+
+ i++;
+ }
+ return KSZ9477_ACL_MAX_ENTRIES;
+}
+
+/**
+ * ksz9477_swap_acl_blocks - Swap two ACL blocks
+ * @dev: Pointer to the device structure.
+ * @port: The port number on which the ACL blocks are to be swapped.
+ * @i: The starting index of the first ACL block.
+ * @j: The starting index of the second ACL block.
+ *
+ * This function is used to swap two ACL blocks present at given indices. The
+ * main purpose is to aid in the sorting and reordering of ACL blocks based on
+ * certain criteria, e.g., priority. It checks the validity of the block at
+ * index 'i', ensuring it's not an empty block, and then proceeds to swap it
+ * with the block at index 'j'.
+ *
+ * Returns:
+ * - 0 on successful swapping of blocks.
+ * - -EINVAL if the block at index 'i' is empty.
+ * - A negative error code if any other error occurs during the swap.
+ */
+static int ksz9477_swap_acl_blocks(struct ksz_device *dev, int port, int i,
+ int j)
+{
+ int ret, current_block_size;
+
+ current_block_size = ksz9477_acl_get_cont_entr(dev, port, i);
+ if (current_block_size < 0)
+ return current_block_size;
+
+ if (!current_block_size) {
+ dev_err(dev->dev, "ACL: swapping empty entry %d\n", i);
+ return -EINVAL;
+ }
+
+ ret = ksz9477_acl_move_entries(dev, port, i, j);
+ if (ret)
+ return ret;
+
+ ret = ksz9477_acl_move_entries(dev, port, j - current_block_size, i);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+/**
+ * ksz9477_sort_acl_entr_no_back - Sort ACL entries for a given port based on
+ * priority without backing up entries.
+ * @dev: Pointer to the device structure.
+ * @port: The port number whose ACL entries need to be sorted.
+ *
+ * This function sorts ACL entries of the specified port using a variant of the
+ * bubble sort algorithm. It operates on blocks of ACL entries rather than
+ * individual entries. Each block's starting point is identified and then
+ * compared with subsequent blocks based on their priority. If the current
+ * block has a lower priority than the subsequent block, the two blocks are
+ * swapped.
+ *
+ * This is done in order to keep the ACL entries ordered by priority, ensuring
+ * efficient and predictable ACL rule application.
+ *
+ * Returns:
+ * - 0 on successful sorting of entries.
+ * - A negative error code if any issue arises during sorting, e.g.,
+ * if the function is unable to get the next block start.
+ */
+static int ksz9477_sort_acl_entr_no_back(struct ksz_device *dev, int port)
+{
+ struct ksz9477_acl_priv *acl = dev->ports[port].acl_priv;
+ struct ksz9477_acl_entries *acles = &acl->acles;
+ struct ksz9477_acl_entry *curr, *next;
+ int i, j, ret;
+
+ /* Bubble sort */
+ for (i = 0; i < KSZ9477_ACL_MAX_ENTRIES;) {
+ curr = &acles->entries[i];
+
+ j = ksz9477_get_next_block_start(dev, port, i + 1);
+ if (j < 0)
+ return j;
+
+ while (j < KSZ9477_ACL_MAX_ENTRIES) {
+ next = &acles->entries[j];
+
+ if (curr->prio > next->prio) {
+ ret = ksz9477_swap_acl_blocks(dev, port, i, j);
+ if (ret)
+ return ret;
+ }
+
+ j = ksz9477_get_next_block_start(dev, port, j + 1);
+ if (j < 0)
+ return j;
+ }
+
+ i = ksz9477_get_next_block_start(dev, port, i + 1);
+ if (i < 0)
+ return i;
+ }
+
+ return 0;
+}
+
+/**
+ * ksz9477_sort_acl_entries - Sort the ACL entries for a given port.
+ * @dev: Pointer to the KSZ device.
+ * @port: Port number.
+ *
+ * This function sorts the Access Control List (ACL) entries for a specified
+ * port. Before sorting, a backup of the original entries is created. If the
+ * sorting process fails, the function will log error messages displaying both
+ * the original and attempted sorted entries, and then restore the original
+ * entries from the backup.
+ *
+ * Return: 0 if the sorting succeeds, otherwise a negative error code.
+ */
+int ksz9477_sort_acl_entries(struct ksz_device *dev, int port)
+{
+ struct ksz9477_acl_entry backup[KSZ9477_ACL_MAX_ENTRIES];
+ struct ksz9477_acl_priv *acl = dev->ports[port].acl_priv;
+ struct ksz9477_acl_entries *acles = &acl->acles;
+ int ret;
+
+ /* Create a backup of the ACL entries so we can restore them if
+ * something goes wrong.
+ */
+ memcpy(backup, acles->entries, sizeof(backup));
+
+ ret = ksz9477_sort_acl_entr_no_back(dev, port);
+ if (ret) {
+ dev_err(dev->dev, "ACL: failed to sort entries for port %d\n",
+ port);
+ dev_err(dev->dev, "ACL dump before sorting:\n");
+ ksz9477_dump_acl(dev, backup);
+ dev_err(dev->dev, "ACL dump after sorting:\n");
+ ksz9477_dump_acl(dev, acles->entries);
+ /* Restore the original entries */
+ memcpy(acles->entries, backup, sizeof(backup));
+ }
+
+ return ret;
+}
+
+/**
+ * ksz9477_acl_wait_ready - Waits for the ACL operation to complete on a given
+ * port.
+ * @dev: The ksz_device instance.
+ * @port: The port number to wait for.
+ *
+ * This function checks if the ACL write or read operation is completed by
+ * polling the specified register.
+ *
+ * Returns: 0 if the operation is successful, or a negative error code if an
+ * error occurs.
+ */
+static int ksz9477_acl_wait_ready(struct ksz_device *dev, int port)
+{
+ unsigned int wr_mask = KSZ9477_ACL_WRITE_DONE | KSZ9477_ACL_READ_DONE;
+ unsigned int val, reg;
+ int ret;
+
+ reg = dev->dev_ops->get_port_addr(port, KSZ9477_PORT_ACL_CTRL_0);
+
+ ret = regmap_read_poll_timeout(dev->regmap[0], reg, val,
+ (val & wr_mask) == wr_mask, 1000, 10000);
+ if (ret)
+ dev_err(dev->dev, "Failed to read/write ACL table\n");
+
+ return ret;
+}
+
+/**
+ * ksz9477_acl_entry_write - Writes an ACL entry to a given port at the
+ * specified index.
+ * @dev: The ksz_device instance.
+ * @port: The port number to write the ACL entry to.
+ * @entry: A pointer to the ACL entry data.
+ * @idx: The index at which to write the ACL entry.
+ *
+ * This function writes the provided ACL entry to the specified port at the
+ * given index.
+ *
+ * Returns: 0 if the operation is successful, or a negative error code if an
+ * error occurs.
+ */
+static int ksz9477_acl_entry_write(struct ksz_device *dev, int port, u8 *entry,
+ int idx)
+{
+ int ret, i;
+ u8 val;
+
+ for (i = 0; i < KSZ9477_ACL_ENTRY_SIZE; i++) {
+ ret = ksz_pwrite8(dev, port, KSZ9477_PORT_ACL_0 + i, entry[i]);
+ if (ret) {
+ dev_err(dev->dev, "Failed to write ACL entry %d\n", i);
+ return ret;
+ }
+ }
+
+ /* write everything down */
+ val = FIELD_PREP(KSZ9477_ACL_INDEX_M, idx) | KSZ9477_ACL_WRITE;
+ ret = ksz_pwrite8(dev, port, KSZ9477_PORT_ACL_CTRL_0, val);
+ if (ret)
+ return ret;
+
+ /* wait until everything is written */
+ return ksz9477_acl_wait_ready(dev, port);
+}
+
+/**
+ * ksz9477_acl_port_enable - Enables ACL functionality on a given port.
+ * @dev: The ksz_device instance.
+ * @port: The port number on which to enable ACL functionality.
+ *
+ * This function enables ACL functionality on the specified port by configuring
+ * the appropriate control registers. It returns 0 if the operation is
+ * successful, or a negative error code if an error occurs.
+ *
+ * 0xn801 - KSZ9477S 5.2.8.2 Port Priority Control Register
+ * Bit 7 - Highest Priority
+ * Bit 6 - OR'ed Priority
+ * Bit 4 - MAC Address Priority Classification
+ * Bit 3 - VLAN Priority Classification
+ * Bit 2 - 802.1p Priority Classification
+ * Bit 1 - Diffserv Priority Classification
+ * Bit 0 - ACL Priority Classification
+ *
+ * The current driver implementation sets 802.1p priority classification by
+ * default. In this function we add ACL priority classification with OR'ed
+ * priority. According to testing, the priority set by the ACL supersedes the
+ * 802.1p priority.
+ *
+ * 0xn803 - KSZ9477S 5.2.8.4 Port Authentication Control Register
+ * Bit 2 - Access Control List (ACL) Enable
+ * Bits 1:0 - Authentication Mode
+ * 00 = Reserved
+ * 01 = Block Mode. Authentication is enabled. When ACL is
+ * enabled, all traffic that misses the ACL rules is
+ * blocked; otherwise ACL actions apply.
+ * 10 = Pass Mode. Authentication is disabled. When ACL is
+ * enabled, all traffic that misses the ACL rules is
+ * forwarded; otherwise ACL actions apply.
+ * 11 = Trap Mode. Authentication is enabled. All traffic is
+ * forwarded to the host port. When ACL is enabled, all
+ * traffic that misses the ACL rules is blocked; otherwise
+ * ACL actions apply.
+ *
+ * We use Pass Mode in this function.
+ *
+ * Returns: 0 if the operation is successful, or a negative error code if an
+ * error occurs.
+ */
+static int ksz9477_acl_port_enable(struct ksz_device *dev, int port)
+{
+ int ret;
+
+ ret = ksz_prmw8(dev, port, P_PRIO_CTRL, 0, PORT_ACL_PRIO_ENABLE |
+ PORT_OR_PRIO);
+ if (ret)
+ return ret;
+
+ return ksz_pwrite8(dev, port, REG_PORT_MRI_AUTHEN_CTRL,
+ PORT_ACL_ENABLE |
+ FIELD_PREP(PORT_AUTHEN_MODE, PORT_AUTHEN_PASS));
+}
+
+/**
+ * ksz9477_acl_port_disable - Disables ACL functionality on a given port.
+ * @dev: The ksz_device instance.
+ * @port: The port number on which to disable ACL functionality.
+ *
+ * This function disables ACL functionality on the specified port by writing a
+ * value of 0 to the REG_PORT_MRI_AUTHEN_CTRL control register and clearing the
+ * PORT_ACL_PRIO_ENABLE bit in the P_PRIO_CTRL register.
+ *
+ * Returns: 0 if the operation is successful, or a negative error code if an
+ * error occurs.
+ */
+static int ksz9477_acl_port_disable(struct ksz_device *dev, int port)
+{
+ int ret;
+
+ ret = ksz_prmw8(dev, port, P_PRIO_CTRL, PORT_ACL_PRIO_ENABLE, 0);
+ if (ret)
+ return ret;
+
+ return ksz_pwrite8(dev, port, REG_PORT_MRI_AUTHEN_CTRL, 0);
+}
+
+/**
+ * ksz9477_acl_write_list - Write a list of ACL entries to a given port.
+ * @dev: The ksz_device instance.
+ * @port: The port number on which to write ACL entries.
+ *
+ * This function enables ACL functionality on the specified port, writes a list
+ * of ACL entries to the port, and disables ACL functionality if there are no
+ * entries.
+ *
+ * Returns: 0 if the operation is successful, or a negative error code if an
+ * error occurs.
+ */
+int ksz9477_acl_write_list(struct ksz_device *dev, int port)
+{
+ struct ksz9477_acl_priv *acl = dev->ports[port].acl_priv;
+ struct ksz9477_acl_entries *acles = &acl->acles;
+ int ret, i;
+
+ /* ACL should be enabled before writing entries */
+ ret = ksz9477_acl_port_enable(dev, port);
+ if (ret)
+ return ret;
+
+ /* write all entries */
+ for (i = 0; i < ARRAY_SIZE(acles->entries); i++) {
+ u8 *entry = acles->entries[i].entry;
+
+ /* Skip entries that were removed and are already zeroed.
+ * If the access fields of a removed entry are not zero, the entry was
+ * removed locally but is not yet synced with the HW, so it still needs
+ * to be written down to clear it on the HW side.
+ */
+ if (i >= acles->entries_count &&
+ entry[KSZ9477_ACL_PORT_ACCESS_10] == 0 &&
+ entry[KSZ9477_ACL_PORT_ACCESS_11] == 0)
+ continue;
+
+ ret = ksz9477_acl_entry_write(dev, port, entry, i);
+ if (ret)
+ return ret;
+
+ /* The removed entry is now clean on the HW side, so it can be
+ * cleared in the cache too.
+ */
+ if (i >= acles->entries_count &&
+ entry[KSZ9477_ACL_PORT_ACCESS_10] != 0 &&
+ entry[KSZ9477_ACL_PORT_ACCESS_11] != 0) {
+ entry[KSZ9477_ACL_PORT_ACCESS_10] = 0;
+ entry[KSZ9477_ACL_PORT_ACCESS_11] = 0;
+ }
+ }
+
+ if (!acles->entries_count)
+ return ksz9477_acl_port_disable(dev, port);
+
+ return 0;
+}
+
+/**
+ * ksz9477_acl_remove_entries - Remove ACL entries with a given cookie from a
+ * specified ksz9477_acl_entries structure.
+ * @dev: The ksz_device instance.
+ * @port: The port number on which to remove ACL entries.
+ * @acles: The ksz9477_acl_entries instance.
+ * @cookie: The cookie value to match for entry removal.
+ *
+ * This function iterates through the entries array, removing any entries with
+ * a matching cookie value. The remaining entries are then shifted down to fill
+ * the gap.
+ */
+void ksz9477_acl_remove_entries(struct ksz_device *dev, int port,
+ struct ksz9477_acl_entries *acles,
+ unsigned long cookie)
+{
+ int entries_count = acles->entries_count;
+ int ret, i, src_count;
+ int src_idx = -1;
+
+ if (!entries_count)
+ return;
+
+ /* Search for the first position with the cookie */
+ for (i = 0; i < entries_count; i++) {
+ if (acles->entries[i].cookie == cookie) {
+ src_idx = i;
+ break;
+ }
+ }
+
+ /* No entries with the matching cookie found */
+ if (src_idx == -1)
+ return;
+
+ /* Get the size of the entry block for this cookie. A single rule may
+ * span multiple entries.
+ */
+ src_count = ksz9477_acl_get_cont_entr(dev, port, src_idx);
+ if (src_count <= 0)
+ return;
+
+ /* Move all entries down to overwrite removed entry with the cookie */
+ ret = ksz9477_move_entries_downwards(dev, acles, src_idx,
+ src_count,
+ entries_count - src_count);
+ if (ret) {
+ dev_err(dev->dev, "Failed to move ACL entries down\n");
+ return;
+ }
+
+ /* Overwrite the newly freed places at the end of the list with zeros
+ * to make sure nothing unexpected happens and no untested hardware
+ * quirks are triggered.
+ */
+ for (i = entries_count - src_count; i < entries_count; i++) {
+ struct ksz9477_acl_entry *entry = &acles->entries[i];
+
+ memset(entry, 0, sizeof(*entry));
+
+ /* Set all access bits to be able to write zeroed entry to HW */
+ entry->entry[KSZ9477_ACL_PORT_ACCESS_10] = 0xff;
+ entry->entry[KSZ9477_ACL_PORT_ACCESS_11] = 0xff;
+ }
+
+ /* Adjust the total entries count */
+ acles->entries_count -= src_count;
+}
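
As a rough standalone illustration of the removal strategy above (shift the following entries down over the removed block, zero the freed tail slots, and shrink the count), here is a minimal userspace sketch with made-up values; the real driver additionally keeps rule linkage intact and marks the zeroed entries so they get written back to the hardware:

    #include <stdio.h>
    #include <string.h>

    #define N 8

    int main(void)
    {
            int entries[N] = { 10, 20, 30, 31, 40, 0, 0, 0 };
            int count = 5;
            int src_idx = 2, src_count = 2;     /* the block "30, 31" is removed */

            /* shift the remaining entries down over the removed block */
            memmove(&entries[src_idx], &entries[src_idx + src_count],
                    (count - src_idx - src_count) * sizeof(entries[0]));

            /* zero the freed slots at the end of the list */
            memset(&entries[count - src_count], 0,
                   src_count * sizeof(entries[0]));
            count -= src_count;

            for (int i = 0; i < N; i++)
                    printf("%d ", entries[i]);  /* 10 20 40 0 0 0 0 0 */
            printf("(count=%d)\n", count);
            return 0;
    }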
+
+/**
+ * ksz9477_port_acl_init - Initialize the ACL for a specified port on a ksz
+ * device.
+ * @dev: The ksz_device instance.
+ * @port: The port number to initialize the ACL for.
+ *
+ * This function allocates memory for an acl structure, associates it with the
+ * specified port, and initializes the ACL entries to a default state. The
+ * entries are then written using the ksz9477_acl_write_list function, ensuring
+ * the ACL has a predictable initial hardware state.
+ *
+ * Returns: 0 on success, or an error code on failure.
+ */
+int ksz9477_port_acl_init(struct ksz_device *dev, int port)
+{
+ struct ksz9477_acl_entries *acles;
+ struct ksz9477_acl_priv *acl;
+ int ret, i;
+
+ acl = kzalloc(sizeof(*acl), GFP_KERNEL);
+ if (!acl)
+ return -ENOMEM;
+
+ dev->ports[port].acl_priv = acl;
+
+ acles = &acl->acles;
+ /* write all entries */
+ for (i = 0; i < ARRAY_SIZE(acles->entries); i++) {
+ u8 *entry = acles->entries[i].entry;
+
+ /* Set all access bits to be able to write zeroed
+ * entry
+ */
+ entry[KSZ9477_ACL_PORT_ACCESS_10] = 0xff;
+ entry[KSZ9477_ACL_PORT_ACCESS_11] = 0xff;
+ }
+
+ ret = ksz9477_acl_write_list(dev, port);
+ if (ret)
+ goto free_acl;
+
+ return 0;
+
+free_acl:
+ kfree(dev->ports[port].acl_priv);
+ dev->ports[port].acl_priv = NULL;
+
+ return ret;
+}
+
+/**
+ * ksz9477_port_acl_free - Free the ACL resources for a specified port on a ksz
+ * device.
+ * @dev: The ksz_device instance.
+ * @port: The port number to free the ACL for.
+ *
+ * This function disables the ACL for the specified port and frees the
+ * associated memory.
+ */
+void ksz9477_port_acl_free(struct ksz_device *dev, int port)
+{
+ if (!dev->ports[port].acl_priv)
+ return;
+
+ ksz9477_acl_port_disable(dev, port);
+
+ kfree(dev->ports[port].acl_priv);
+ dev->ports[port].acl_priv = NULL;
+}
+
+/**
+ * ksz9477_acl_set_reg - Set entry[16] and entry[17] depending on the updated
+ * entry[]
+ * @entry: An array containing the entries
+ * @reg: The register of the entry that needs to be updated
+ * @value: The value to be assigned to the updated entry
+ *
+ * This function updates the entry[] array based on the provided register and
+ * value. It also sets entry[0x10] and entry[0x11] according to the ACL byte
+ * enable rules.
+ *
+ * 0x10 - Byte Enable [15:8]
+ *
+ * Each bit enables accessing one of the ACL bytes when a read or write is
+ * initiated by writing to the Port ACL Byte Enable LSB Register.
+ * Bit 0 applies to the Port ACL Access 7 Register
+ * Bit 1 applies to the Port ACL Access 6 Register, etc.
+ * Bit 7 applies to the Port ACL Access 0 Register
+ * 1 = Byte is selected for read/write
+ * 0 = Byte is not selected
+ *
+ * 0x11 - Byte Enable [7:0]
+ *
+ * Each bit enables accessing one of the ACL bytes when a read or write is
+ * initiated by writing to the Port ACL Byte Enable LSB Register.
+ * Bit 0 applies to the Port ACL Access F Register
+ * Bit 1 applies to the Port ACL Access E Register, etc.
+ * Bit 7 applies to the Port ACL Access 8 Register
+ * 1 = Byte is selected for read/write
+ * 0 = Byte is not selected
+ */
+static void ksz9477_acl_set_reg(u8 *entry, enum ksz9477_acl_port_access reg,
+ u8 value)
+{
+ if (reg >= KSZ9477_ACL_PORT_ACCESS_0 &&
+ reg <= KSZ9477_ACL_PORT_ACCESS_7) {
+ entry[KSZ9477_ACL_PORT_ACCESS_10] |=
+ BIT(KSZ9477_ACL_PORT_ACCESS_7 - reg);
+ } else if (reg >= KSZ9477_ACL_PORT_ACCESS_8 &&
+ reg <= KSZ9477_ACL_PORT_ACCESS_F) {
+ entry[KSZ9477_ACL_PORT_ACCESS_11] |=
+ BIT(KSZ9477_ACL_PORT_ACCESS_F - reg);
+ } else {
+ WARN_ON(1);
+ return;
+ }
+
+ entry[reg] = value;
+}
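
A minimal standalone sketch of the byte-enable mapping described above, assuming for illustration that the KSZ9477_ACL_PORT_ACCESS_n constants are simply the register offsets 0x00..0x11: writing Access register 0x03 sets BIT(4) in the 0x10 byte-enable register, and writing 0x09 sets BIT(6) in 0x11.

    #include <stdio.h>

    static void set_reg(unsigned char *entry, unsigned int reg, unsigned char val)
    {
            if (reg <= 0x07)
                    entry[0x10] |= 1u << (0x07 - reg);  /* Byte Enable [15:8] */
            else if (reg <= 0x0f)
                    entry[0x11] |= 1u << (0x0f - reg);  /* Byte Enable [7:0] */
            entry[reg] = val;
    }

    int main(void)
    {
            unsigned char entry[0x12] = { 0 };

            set_reg(entry, 0x03, 0xaa);  /* MAC byte      -> BIT(4) of entry[0x10] */
            set_reg(entry, 0x09, 0x55);  /* EtherType LSB -> BIT(6) of entry[0x11] */
            printf("BE[15:8]=0x%02x BE[7:0]=0x%02x\n", entry[0x10], entry[0x11]);
            /* prints BE[15:8]=0x10 BE[7:0]=0x40 */
            return 0;
    }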
+
+/**
+ * ksz9477_acl_matching_rule_cfg_l2 - Configure an ACL filtering entry to match
+ * L2 types of Ethernet frames
+ * @entry: Pointer to ACL entry buffer
+ * @ethertype: Ethertype value
+ * @eth_addr: Pointer to Ethernet address
+ * @is_src: If true, match the source MAC address; if false, match the
+ * destination MAC address
+ *
+ * This function configures an Access Control List (ACL) filtering
+ * entry to match Layer 2 types of Ethernet frames based on the provided
+ * ethertype and Ethernet address. Additionally, it can match either the source
+ * or destination MAC address depending on the value of the is_src parameter.
+ *
+ * Register Descriptions for MD = 01 and ENB != 00 (Layer 2 MAC header
+ * filtering)
+ *
+ * 0x01 - Mode and Enable
+ * Bits 5:4 - MD (Mode)
+ * 01 = Layer 2 MAC header or counter filtering
+ * Bits 3:2 - ENB (Enable)
+ * 01 = Comparison is performed only on the TYPE value
+ * 10 = Comparison is performed only on the MAC Address value
+ * 11 = Both the MAC Address and TYPE are tested
+ * Bit 1 - S/D (Source / Destination)
+ * 0 = Destination address
+ * 1 = Source address
+ * Bit 0 - EQ (Equal / Not Equal)
+ * 0 = Not Equal produces true result
+ * 1 = Equal produces true result
+ *
+ * 0x02-0x07 - MAC Address
+ * 0x02 - MAC Address [47:40]
+ * 0x03 - MAC Address [39:32]
+ * 0x04 - MAC Address [31:24]
+ * 0x05 - MAC Address [23:16]
+ * 0x06 - MAC Address [15:8]
+ * 0x07 - MAC Address [7:0]
+ *
+ * 0x08-0x09 - EtherType
+ * 0x08 - EtherType [15:8]
+ * 0x09 - EtherType [7:0]
+ */
+static void ksz9477_acl_matching_rule_cfg_l2(u8 *entry, u16 ethertype,
+ u8 *eth_addr, bool is_src)
+{
+ u8 enb = 0;
+ u8 val;
+
+ if (ethertype)
+ enb |= KSZ9477_ACL_ENB_L2_TYPE;
+ if (eth_addr)
+ enb |= KSZ9477_ACL_ENB_L2_MAC;
+
+ val = FIELD_PREP(KSZ9477_ACL_MD_MASK, KSZ9477_ACL_MD_L2_MAC) |
+ FIELD_PREP(KSZ9477_ACL_ENB_MASK, enb) |
+ FIELD_PREP(KSZ9477_ACL_SD_SRC, is_src) | KSZ9477_ACL_EQ_EQUAL;
+ ksz9477_acl_set_reg(entry, KSZ9477_ACL_PORT_ACCESS_1, val);
+
+ if (eth_addr) {
+ int i;
+
+ for (i = 0; i < ETH_ALEN; i++) {
+ ksz9477_acl_set_reg(entry,
+ KSZ9477_ACL_PORT_ACCESS_2 + i,
+ eth_addr[i]);
+ }
+ }
+
+ ksz9477_acl_set_reg(entry, KSZ9477_ACL_PORT_ACCESS_8, ethertype >> 8);
+ ksz9477_acl_set_reg(entry, KSZ9477_ACL_PORT_ACCESS_9, ethertype & 0xff);
+}
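
Using only the register layout documented above (MD in bits 5:4, ENB in bits 3:2, S/D in bit 1, EQ in bit 0), a rule that matches both the source MAC and the EtherType composes the Mode-and-Enable byte roughly as follows; the driver's KSZ9477_ACL_* field macros are deliberately left out of this sketch:

    #include <stdio.h>

    int main(void)
    {
            unsigned int md  = 0x1;  /* 01 = Layer 2 MAC header filtering */
            unsigned int enb = 0x3;  /* 11 = test both MAC address and TYPE */
            unsigned int sd  = 0x1;  /* 1  = source address */
            unsigned int eq  = 0x1;  /* 1  = equal produces true result */
            unsigned int val = (md << 4) | (enb << 2) | (sd << 1) | eq;

            printf("ACL Access 0x01 = 0x%02x\n", val);  /* 0x1f */
            return 0;
    }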
+
+/**
+ * ksz9477_acl_action_rule_cfg - Set action for an ACL entry
+ * @entry: Pointer to the ACL entry
+ * @force_prio: If true, force the priority value
+ * @prio_val: Priority value
+ *
+ * This function sets the action for the specified ACL entry. It prepares
+ * the priority mode and traffic class values and updates the entry's
+ * action registers accordingly. Currently, there is no port or VLAN PCP
+ * remapping.
+ *
+ * ACL Action Rule Parameters for Non-Count Modes (MD ≠ 01 or ENB ≠ 00)
+ *
+ * 0x0A - PM, P, RPE, RP[2:1]
+ * Bits 7:6 - PM[1:0] - Priority Mode
+ * 00 = ACL does not specify the packet priority. Priority is
+ * determined by standard QoS functions.
+ * 01 = Change packet priority to P[2:0] if it is greater than QoS
+ * result.
+ * 10 = Change packet priority to P[2:0] if it is smaller than the
+ * QoS result.
+ * 11 = Always change packet priority to P[2:0].
+ * Bits 5:3 - P[2:0] - Priority value
+ * Bit 2 - RPE - Remark Priority Enable
+ * 0 = Disable priority remarking
+ * 1 = Enable priority remarking. VLAN tag priority (PCP) bits are
+ * replaced by RP[2:0].
+ * Bits 1:0 - RP[2:1] - Remarked Priority value (bits 2:1)
+ *
+ * 0x0B - RP[0], MM
+ * Bit 7 - RP[0] - Remarked Priority value (bit 0)
+ * Bits 6:5 - MM[1:0] - Map Mode
+ * 00 = No forwarding remapping
+ * 01 = The forwarding map in FORWARD is OR'ed with the forwarding
+ * map from the Address Lookup Table.
+ * 10 = The forwarding map in FORWARD is AND'ed with the forwarding
+ * map from the Address Lookup Table.
+ * 11 = The forwarding map in FORWARD replaces the forwarding map
+ * from the Address Lookup Table.
+ * 0x0D - FORWARD[n:0]
+ * Bits 7:0 - FORWARD[n:0] - Forwarding map. Bit 0 = port 1,
+ * bit 1 = port 2, etc.
+ * 1 = enable forwarding to this port
+ * 0 = do not forward to this port
+ */
+void ksz9477_acl_action_rule_cfg(u8 *entry, bool force_prio, u8 prio_val)
+{
+ u8 prio_mode, val;
+
+ if (force_prio)
+ prio_mode = KSZ9477_ACL_PM_REPLACE;
+ else
+ prio_mode = KSZ9477_ACL_PM_DISABLE;
+
+ val = FIELD_PREP(KSZ9477_ACL_PM_M, prio_mode) |
+ FIELD_PREP(KSZ9477_ACL_P_M, prio_val);
+ ksz9477_acl_set_reg(entry, KSZ9477_ACL_PORT_ACCESS_A, val);
+
+ /* no port or VLAN PCP remapping for now */
+ ksz9477_acl_set_reg(entry, KSZ9477_ACL_PORT_ACCESS_B, 0);
+ ksz9477_acl_set_reg(entry, KSZ9477_ACL_PORT_ACCESS_D, 0);
+}
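
Following the 0x0A layout documented above (PM in bits 7:6, P in bits 5:3), forcing every matching packet to priority 5 ("11 = Always change packet priority") works out to the value below. This is plain arithmetic on the documented bit fields, not a use of the driver's macros:

    #include <stdio.h>

    int main(void)
    {
            unsigned int pm   = 0x3;  /* 11 = always change packet priority to P[2:0] */
            unsigned int prio = 5;    /* P[2:0] */
            unsigned int val  = (pm << 6) | (prio << 3);

            printf("ACL Access 0x0A = 0x%02x\n", val);  /* 0xe8 */
            return 0;
    }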
+
+/**
+ * ksz9477_acl_processing_rule_set_action - Set the action for the processing
+ * rule set.
+ * @entry: Pointer to the ACL entry
+ * @action_idx: Index of the action to be applied
+ *
+ * This function sets the action for the processing rule set by updating the
+ * appropriate register in the entry. There can be only one action per
+ * processing rule.
+ *
+ * Access Control List (ACL) Processing Rule Registers:
+ *
+ * 0x00 - First Rule Number (FRN)
+ * Bits 3:0 - First Rule Number. Pointer to an Action rule entry.
+ */
+void ksz9477_acl_processing_rule_set_action(u8 *entry, u8 action_idx)
+{
+ ksz9477_acl_set_reg(entry, KSZ9477_ACL_PORT_ACCESS_0, action_idx);
+}
+
+/**
+ * ksz9477_acl_processing_rule_add_match - Add a matching rule to the rule set
+ * @entry: Pointer to the ACL entry
+ * @match_idx: Index of the matching rule to be added
+ *
+ * This function adds a matching rule to the rule set by updating the
+ * appropriate bits in the entry's rule set registers.
+ *
+ * Access Control List (ACL) Processing Rule Registers:
+ *
+ * 0x0E - RuleSet [15:8]
+ * Bits 7:0 - RuleSet [15:8] Specifies a set of one or more Matching rule
+ * entries. RuleSet has one bit for each of the 16 Matching rule entries.
+ * If multiple Matching rules are selected, then all conditions will be
+ * AND'ed to produce a final match result.
+ * 0 = Matching rule not selected
+ * 1 = Matching rule selected
+ *
+ * 0x0F - RuleSet [7:0]
+ * Bits 7:0 - RuleSet [7:0]
+ */
+static void ksz9477_acl_processing_rule_add_match(u8 *entry, u8 match_idx)
+{
+ u8 vale = entry[KSZ9477_ACL_PORT_ACCESS_E];
+ u8 valf = entry[KSZ9477_ACL_PORT_ACCESS_F];
+
+ if (match_idx < 8)
+ valf |= BIT(match_idx);
+ else
+ vale |= BIT(match_idx - 8);
+
+ ksz9477_acl_set_reg(entry, KSZ9477_ACL_PORT_ACCESS_E, vale);
+ ksz9477_acl_set_reg(entry, KSZ9477_ACL_PORT_ACCESS_F, valf);
+}
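
A short sketch of the RuleSet bit placement described above: matching-rule indices 0-7 land in Access register 0x0F and indices 8-15 in 0x0E, mirroring the branch in the function.

    #include <stdio.h>

    int main(void)
    {
            unsigned char rule_e = 0, rule_f = 0;  /* Access 0x0E / 0x0F */
            int match_idx = 10;

            if (match_idx < 8)
                    rule_f |= 1u << match_idx;
            else
                    rule_e |= 1u << (match_idx - 8);

            printf("0x0E=0x%02x 0x0F=0x%02x\n", rule_e, rule_f);  /* 0x04 0x00 */
            return 0;
    }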
+
+/**
+ * ksz9477_acl_get_init_entry - Get a new uninitialized entry for a specified
+ * port on a ksz_device.
+ * @dev: The ksz_device instance.
+ * @port: The port number to get the uninitialized entry for.
+ * @cookie: The cookie to associate with the entry.
+ * @prio: The priority to associate with the entry.
+ *
+ * This function retrieves the next available ACL entry for the specified port,
+ * clears all access flags, and associates it with the given cookie and
+ * priority.
+ *
+ * Returns: A pointer to the new uninitialized ACL entry.
+ */
+static struct ksz9477_acl_entry *
+ksz9477_acl_get_init_entry(struct ksz_device *dev, int port,
+ unsigned long cookie, u32 prio)
+{
+ struct ksz9477_acl_priv *acl = dev->ports[port].acl_priv;
+ struct ksz9477_acl_entries *acles = &acl->acles;
+ struct ksz9477_acl_entry *entry;
+
+ entry = &acles->entries[acles->entries_count];
+ entry->cookie = cookie;
+ entry->prio = prio;
+
+ /* clear all access flags */
+ entry->entry[KSZ9477_ACL_PORT_ACCESS_10] = 0;
+ entry->entry[KSZ9477_ACL_PORT_ACCESS_11] = 0;
+
+ return entry;
+}
+
+/**
+ * ksz9477_acl_match_process_l2 - Configure Layer 2 ACL matching rules and
+ * processing rules.
+ * @dev: Pointer to the ksz_device.
+ * @port: Port number.
+ * @ethtype: Ethernet type.
+ * @src_mac: Source MAC address.
+ * @dst_mac: Destination MAC address.
+ * @cookie: The cookie to associate with the entry.
+ * @prio: The priority of the entry.
+ *
+ * This function sets up matching and processing rules for Layer 2 ACLs.
+ * It takes into account that only one MAC per entry is supported.
+ */
+void ksz9477_acl_match_process_l2(struct ksz_device *dev, int port,
+ u16 ethtype, u8 *src_mac, u8 *dst_mac,
+ unsigned long cookie, u32 prio)
+{
+ struct ksz9477_acl_priv *acl = dev->ports[port].acl_priv;
+ struct ksz9477_acl_entries *acles = &acl->acles;
+ struct ksz9477_acl_entry *entry;
+
+ entry = ksz9477_acl_get_init_entry(dev, port, cookie, prio);
+
+ /* ACL supports only one MAC per entry */
+ if (src_mac && dst_mac) {
+ ksz9477_acl_matching_rule_cfg_l2(entry->entry, ethtype, src_mac,
+ true);
+
+ /* Add both match entries to first processing rule */
+ ksz9477_acl_processing_rule_add_match(entry->entry,
+ acles->entries_count);
+ acles->entries_count++;
+ ksz9477_acl_processing_rule_add_match(entry->entry,
+ acles->entries_count);
+
+ entry = ksz9477_acl_get_init_entry(dev, port, cookie, prio);
+ ksz9477_acl_matching_rule_cfg_l2(entry->entry, 0, dst_mac,
+ false);
+ acles->entries_count++;
+ } else {
+ u8 *mac = src_mac ? src_mac : dst_mac;
+ bool is_src = src_mac ? true : false;
+
+ ksz9477_acl_matching_rule_cfg_l2(entry->entry, ethtype, mac,
+ is_src);
+ ksz9477_acl_processing_rule_add_match(entry->entry,
+ acles->entries_count);
+ acles->entries_count++;
+ }
+}
diff --git a/drivers/net/dsa/microchip/ksz9477_i2c.c b/drivers/net/dsa/microchip/ksz9477_i2c.c
index 2710afad4f3a..cac4a607e54a 100644
--- a/drivers/net/dsa/microchip/ksz9477_i2c.c
+++ b/drivers/net/dsa/microchip/ksz9477_i2c.c
@@ -66,10 +66,7 @@ static void ksz9477_i2c_shutdown(struct i2c_client *i2c)
if (!dev)
return;
- if (dev->dev_ops->reset)
- dev->dev_ops->reset(dev);
-
- dsa_switch_shutdown(dev->ds);
+ ksz_switch_shutdown(dev);
i2c_set_clientdata(i2c, NULL);
}
diff --git a/drivers/net/dsa/microchip/ksz9477_reg.h b/drivers/net/dsa/microchip/ksz9477_reg.h
index cba3dba58bc3..f3a205ee483f 100644
--- a/drivers/net/dsa/microchip/ksz9477_reg.h
+++ b/drivers/net/dsa/microchip/ksz9477_reg.h
@@ -112,19 +112,6 @@
#define REG_SW_IBA_SYNC__1 0x010C
-#define REG_SW_IO_STRENGTH__1 0x010D
-#define SW_DRIVE_STRENGTH_M 0x7
-#define SW_DRIVE_STRENGTH_2MA 0
-#define SW_DRIVE_STRENGTH_4MA 1
-#define SW_DRIVE_STRENGTH_8MA 2
-#define SW_DRIVE_STRENGTH_12MA 3
-#define SW_DRIVE_STRENGTH_16MA 4
-#define SW_DRIVE_STRENGTH_20MA 5
-#define SW_DRIVE_STRENGTH_24MA 6
-#define SW_DRIVE_STRENGTH_28MA 7
-#define SW_HI_SPEED_DRIVE_STRENGTH_S 4
-#define SW_LO_SPEED_DRIVE_STRENGTH_S 0
-
#define REG_SW_IBA_STATUS__4 0x0110
#define SW_IBA_REQ BIT(31)
@@ -166,13 +153,6 @@
#define SW_DOUBLE_TAG BIT(7)
#define SW_RESET BIT(1)
-#define REG_SW_MAC_ADDR_0 0x0302
-#define REG_SW_MAC_ADDR_1 0x0303
-#define REG_SW_MAC_ADDR_2 0x0304
-#define REG_SW_MAC_ADDR_3 0x0305
-#define REG_SW_MAC_ADDR_4 0x0306
-#define REG_SW_MAC_ADDR_5 0x0307
-
#define REG_SW_MTU__2 0x0308
#define REG_SW_MTU_MASK GENMASK(13, 0)
diff --git a/drivers/net/dsa/microchip/ksz9477_tc_flower.c b/drivers/net/dsa/microchip/ksz9477_tc_flower.c
new file mode 100644
index 000000000000..8b2f5be667e0
--- /dev/null
+++ b/drivers/net/dsa/microchip/ksz9477_tc_flower.c
@@ -0,0 +1,281 @@
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2023 Pengutronix, Oleksij Rempel <kernel@pengutronix.de>
+
+#include "ksz9477.h"
+#include "ksz9477_reg.h"
+#include "ksz_common.h"
+
+#define ETHER_TYPE_FULL_MASK cpu_to_be16(~0)
+#define KSZ9477_MAX_TC 7
+
+/**
+ * ksz9477_flower_parse_key_l2 - Parse Layer 2 key from flow rule and configure
+ * ACL entries accordingly.
+ * @dev: Pointer to the ksz_device.
+ * @port: Port number.
+ * @extack: Pointer to the netlink_ext_ack.
+ * @rule: Pointer to the flow_rule.
+ * @cookie: The cookie to associate with the entry.
+ * @prio: The priority of the entry.
+ *
+ * This function parses the Layer 2 key from the flow rule and configures
+ * the corresponding ACL entries. It checks for unsupported offloads and
+ * available entries before proceeding with the configuration.
+ *
+ * Returns: 0 on success or a negative error code on failure.
+ */
+static int ksz9477_flower_parse_key_l2(struct ksz_device *dev, int port,
+ struct netlink_ext_ack *extack,
+ struct flow_rule *rule,
+ unsigned long cookie, u32 prio)
+{
+ struct ksz9477_acl_priv *acl = dev->ports[port].acl_priv;
+ struct flow_match_eth_addrs ematch;
+ struct ksz9477_acl_entries *acles;
+ int required_entries;
+ u8 *src_mac = NULL;
+ u8 *dst_mac = NULL;
+ u16 ethtype = 0;
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC)) {
+ struct flow_match_basic match;
+
+ flow_rule_match_basic(rule, &match);
+
+ if (match.key->n_proto) {
+ if (match.mask->n_proto != ETHER_TYPE_FULL_MASK) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "ethernet type mask must be a full mask");
+ return -EINVAL;
+ }
+
+ ethtype = be16_to_cpu(match.key->n_proto);
+ }
+ }
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+ flow_rule_match_eth_addrs(rule, &ematch);
+
+ if (!is_zero_ether_addr(ematch.key->src)) {
+ if (!is_broadcast_ether_addr(ematch.mask->src))
+ goto not_full_mask_err;
+
+ src_mac = ematch.key->src;
+ }
+
+ if (!is_zero_ether_addr(ematch.key->dst)) {
+ if (!is_broadcast_ether_addr(ematch.mask->dst))
+ goto not_full_mask_err;
+
+ dst_mac = ematch.key->dst;
+ }
+ }
+
+ acles = &acl->acles;
+ /* ACL supports only one MAC per entry */
+ required_entries = src_mac && dst_mac ? 2 : 1;
+
+ /* Check if there are enough available entries */
+ if (acles->entries_count + required_entries > KSZ9477_ACL_MAX_ENTRIES) {
+ NL_SET_ERR_MSG_MOD(extack, "ACL entry limit reached");
+ return -EOPNOTSUPP;
+ }
+
+ ksz9477_acl_match_process_l2(dev, port, ethtype, src_mac, dst_mac,
+ cookie, prio);
+
+ return 0;
+
+not_full_mask_err:
+ NL_SET_ERR_MSG_MOD(extack, "MAC address mask must be a full mask");
+ return -EOPNOTSUPP;
+}
+
+/**
+ * ksz9477_flower_parse_key - Parse flow rule keys for a specified port on a
+ * ksz_device.
+ * @dev: The ksz_device instance.
+ * @port: The port number to parse the flow rule keys for.
+ * @extack: The netlink extended ACK for reporting errors.
+ * @rule: The flow_rule to parse.
+ * @cookie: The cookie to associate with the entry.
+ * @prio: The priority of the entry.
+ *
+ * This function checks if the used keys in the flow rule are supported by
+ * the device and parses the L2 keys if they match. If unsupported keys are
+ * used, an error message is set in the extended ACK.
+ *
+ * Returns: 0 on success or a negative error code on failure.
+ */
+static int ksz9477_flower_parse_key(struct ksz_device *dev, int port,
+ struct netlink_ext_ack *extack,
+ struct flow_rule *rule,
+ unsigned long cookie, u32 prio)
+{
+ struct flow_dissector *dissector = rule->match.dissector;
+ int ret;
+
+ if (dissector->used_keys &
+ ~(BIT_ULL(FLOW_DISSECTOR_KEY_BASIC) |
+ BIT_ULL(FLOW_DISSECTOR_KEY_ETH_ADDRS) |
+ BIT_ULL(FLOW_DISSECTOR_KEY_CONTROL))) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Unsupported keys used");
+ return -EOPNOTSUPP;
+ }
+
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_BASIC) ||
+ flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+ ret = ksz9477_flower_parse_key_l2(dev, port, extack, rule,
+ cookie, prio);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
+ * ksz9477_flower_parse_action - Parse flow rule actions for a specified port
+ * on a ksz_device.
+ * @dev: The ksz_device instance.
+ * @port: The port number to parse the flow rule actions for.
+ * @extack: The netlink extended ACK for reporting errors.
+ * @cls: The flow_cls_offload instance containing the flow rule.
+ * @entry_idx: The index of the ACL entry to store the action.
+ *
+ * This function checks if the actions in the flow rule are supported by
+ * the device. Currently, only actions that change priorities are supported.
+ * If unsupported actions are encountered, an error message is set in the
+ * extended ACK.
+ *
+ * Returns: 0 on success or a negative error code on failure.
+ */
+static int ksz9477_flower_parse_action(struct ksz_device *dev, int port,
+ struct netlink_ext_ack *extack,
+ struct flow_cls_offload *cls,
+ int entry_idx)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
+ struct ksz9477_acl_priv *acl = dev->ports[port].acl_priv;
+ const struct flow_action_entry *act;
+ struct ksz9477_acl_entry *entry;
+ bool prio_force = false;
+ u8 prio_val = 0;
+ int i;
+
+ if (TC_H_MIN(cls->classid)) {
+ NL_SET_ERR_MSG_MOD(extack, "hw_tc is not supported. Use: action skbedit prio");
+ return -EOPNOTSUPP;
+ }
+
+ flow_action_for_each(i, act, &rule->action) {
+ switch (act->id) {
+ case FLOW_ACTION_PRIORITY:
+ if (act->priority > KSZ9477_MAX_TC) {
+ NL_SET_ERR_MSG_MOD(extack, "Priority value is too high");
+ return -EOPNOTSUPP;
+ }
+ prio_force = true;
+ prio_val = act->priority;
+ break;
+ default:
+ NL_SET_ERR_MSG_MOD(extack, "action not supported");
+ return -EOPNOTSUPP;
+ }
+ }
+
+ /* pick entry to store action */
+ entry = &acl->acles.entries[entry_idx];
+
+ ksz9477_acl_action_rule_cfg(entry->entry, prio_force, prio_val);
+ ksz9477_acl_processing_rule_set_action(entry->entry, entry_idx);
+
+ return 0;
+}
+
+/**
+ * ksz9477_cls_flower_add - Add a flow classification rule for a specified port
+ * on a ksz_device.
+ * @ds: The DSA switch instance.
+ * @port: The port number to add the flow classification rule to.
+ * @cls: The flow_cls_offload instance containing the flow rule.
+ * @ingress: A flag indicating if the rule is applied on the ingress path.
+ *
+ * This function adds a flow classification rule for a specified port on a
+ * ksz_device. It checks whether ACL offloading is supported and returns an
+ * error if it is not. Otherwise it parses the flow keys and actions, sorts the
+ * resulting ACL entries by priority and writes the updated list to hardware.
+ *
+ * Returns: 0 on success or a negative error code on failure.
+ */
+int ksz9477_cls_flower_add(struct dsa_switch *ds, int port,
+ struct flow_cls_offload *cls, bool ingress)
+{
+ struct flow_rule *rule = flow_cls_offload_flow_rule(cls);
+ struct netlink_ext_ack *extack = cls->common.extack;
+ struct ksz_device *dev = ds->priv;
+ struct ksz9477_acl_priv *acl;
+ int action_entry_idx;
+ int ret;
+
+ acl = dev->ports[port].acl_priv;
+
+ if (!acl) {
+ NL_SET_ERR_MSG_MOD(extack, "ACL offloading is not supported");
+ return -EOPNOTSUPP;
+ }
+
+ /* A complex rule set can take multiple entries. Use first entry
+ * to store the action.
+ */
+ action_entry_idx = acl->acles.entries_count;
+
+ ret = ksz9477_flower_parse_key(dev, port, extack, rule, cls->cookie,
+ cls->common.prio);
+ if (ret)
+ return ret;
+
+ ret = ksz9477_flower_parse_action(dev, port, extack, cls,
+ action_entry_idx);
+ if (ret)
+ return ret;
+
+ ret = ksz9477_sort_acl_entries(dev, port);
+ if (ret)
+ return ret;
+
+ return ksz9477_acl_write_list(dev, port);
+}
+
+/**
+ * ksz9477_cls_flower_del - Remove a flow classification rule for a specified
+ * port on a ksz_device.
+ * @ds: The DSA switch instance.
+ * @port: The port number to remove the flow classification rule from.
+ * @cls: The flow_cls_offload instance containing the flow rule.
+ * @ingress: A flag indicating if the rule is applied on the ingress path.
+ *
+ * This function removes a flow classification rule for a specified port on a
+ * ksz_device. It checks if the ACL is initialized, and if not, returns an
+ * error. If the ACL is initialized, it removes entries with the specified
+ * cookie and rewrites the ACL list.
+ *
+ * Returns: 0 on success or a negative error code on failure.
+ */
+int ksz9477_cls_flower_del(struct dsa_switch *ds, int port,
+ struct flow_cls_offload *cls, bool ingress)
+{
+ unsigned long cookie = cls->cookie;
+ struct ksz_device *dev = ds->priv;
+ struct ksz9477_acl_priv *acl;
+
+ acl = dev->ports[port].acl_priv;
+
+ if (!acl)
+ return -EOPNOTSUPP;
+
+ ksz9477_acl_remove_entries(dev, port, &acl->acles, cookie);
+
+ return ksz9477_acl_write_list(dev, port);
+}
diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
index 42db7679c360..3fed406fb46a 100644
--- a/drivers/net/dsa/microchip/ksz_common.c
+++ b/drivers/net/dsa/microchip/ksz_common.c
@@ -16,6 +16,7 @@
#include <linux/etherdevice.h>
#include <linux/if_bridge.h>
#include <linux/if_vlan.h>
+#include <linux/if_hsr.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/of.h>
@@ -186,6 +187,72 @@ static const struct ksz_mib_names ksz9477_mib_names[] = {
{ 0x83, "tx_discards" },
};
+struct ksz_driver_strength_prop {
+ const char *name;
+ int offset;
+ int value;
+};
+
+enum ksz_driver_strength_type {
+ KSZ_DRIVER_STRENGTH_HI,
+ KSZ_DRIVER_STRENGTH_LO,
+ KSZ_DRIVER_STRENGTH_IO,
+};
+
+/**
+ * struct ksz_drive_strength - drive strength mapping
+ * @reg_val: register value
+ * @microamp: microamp value
+ */
+struct ksz_drive_strength {
+ u32 reg_val;
+ u32 microamp;
+};
+
+/* ksz9477_drive_strengths - Drive strength mapping for KSZ9477 variants
+ *
+ * These values are not documented for the KSZ9477 variants, but Microchip
+ * confirmed that KSZ9477, KSZ9567, KSZ8567, KSZ9897, KSZ9896, KSZ9563, KSZ9893
+ * and KSZ8563 use the same drive strength register settings as KSZ8795.
+ *
+ * The KSZ8795CLX documentation provides more information with some
+ * recommendations:
+ * - for high speed signals
+ * 1. 4 mA or 8 mA is often used for the MII, RMII, and SPI interfaces
+ * with 2.5V or 3.3V VDDIO.
+ * 2. 12 mA or 16 mA is often used for the MII, RMII, and SPI interfaces
+ * with 1.8V VDDIO.
+ * 3. 20 mA or 24 mA is often used for the GMII/RGMII interface with 2.5V
+ * or 3.3V VDDIO.
+ * 4. 28 mA is often used for the GMII/RGMII interface with 1.8V VDDIO.
+ * 5. For the same interface, heavier loading should use the higher of the
+ * two drive strengths.
+ * - for low speed signals
+ * 1. 3.3V VDDIO, use either 4 mA or 8 mA.
+ * 2. 2.5V VDDIO, use either 8 mA or 12 mA.
+ * 3. 1.8V VDDIO, use either 12 mA or 16 mA.
+ * 4. For heavy loading, the higher drive strength can be used.
+ */
+static const struct ksz_drive_strength ksz9477_drive_strengths[] = {
+ { SW_DRIVE_STRENGTH_2MA, 2000 },
+ { SW_DRIVE_STRENGTH_4MA, 4000 },
+ { SW_DRIVE_STRENGTH_8MA, 8000 },
+ { SW_DRIVE_STRENGTH_12MA, 12000 },
+ { SW_DRIVE_STRENGTH_16MA, 16000 },
+ { SW_DRIVE_STRENGTH_20MA, 20000 },
+ { SW_DRIVE_STRENGTH_24MA, 24000 },
+ { SW_DRIVE_STRENGTH_28MA, 28000 },
+};
+
+/* ksz8830_drive_strengths - Drive strength mapping for KSZ8830, KSZ8873, ..
+ * variants.
+ * These values are documented in the KSZ8873 and KSZ8863 datasheets.
+ */
+static const struct ksz_drive_strength ksz8830_drive_strengths[] = {
+ { 0, 8000 },
+ { KSZ8873_DRIVE_STRENGTH_16MA, 16000 },
+};
+
static const struct ksz_dev_ops ksz8_dev_ops = {
.setup = ksz8_setup,
.get_port_addr = ksz8_get_port_addr,
@@ -252,6 +319,9 @@ static const struct ksz_dev_ops ksz9477_dev_ops = {
.mdb_del = ksz9477_mdb_del,
.change_mtu = ksz9477_change_mtu,
.phylink_mac_link_up = ksz9477_phylink_mac_link_up,
+ .get_wol = ksz9477_get_wol,
+ .set_wol = ksz9477_set_wol,
+ .wol_pre_shutdown = ksz9477_wol_pre_shutdown,
.config_cpu_port = ksz9477_config_cpu_port,
.tc_cbs_set_cinc = ksz9477_tc_cbs_set_cinc,
.enable_stp_addr = ksz9477_enable_stp_addr,
@@ -298,6 +368,7 @@ static const struct ksz_dev_ops lan937x_dev_ops = {
};
static const u16 ksz8795_regs[] = {
+ [REG_SW_MAC_ADDR] = 0x68,
[REG_IND_CTRL_0] = 0x6E,
[REG_IND_DATA_8] = 0x70,
[REG_IND_DATA_CHECK] = 0x72,
@@ -373,6 +444,7 @@ static const u8 ksz8795_shifts[] = {
};
static const u16 ksz8863_regs[] = {
+ [REG_SW_MAC_ADDR] = 0x70,
[REG_IND_CTRL_0] = 0x79,
[REG_IND_DATA_8] = 0x7B,
[REG_IND_DATA_CHECK] = 0x7B,
@@ -426,6 +498,7 @@ static u8 ksz8863_shifts[] = {
};
static const u16 ksz9477_regs[] = {
+ [REG_SW_MAC_ADDR] = 0x0302,
[P_STP_CTRL] = 0x0B04,
[S_START_CTRL] = 0x0300,
[S_BROADCAST_CTRL] = 0x0332,
@@ -1876,14 +1949,14 @@ static int ksz_irq_phy_setup(struct ksz_device *dev)
ret = irq;
goto out;
}
- ds->slave_mii_bus->irq[phy] = irq;
+ ds->user_mii_bus->irq[phy] = irq;
}
}
return 0;
out:
while (phy--)
if (BIT(phy) & ds->phys_mii_mask)
- irq_dispose_mapping(ds->slave_mii_bus->irq[phy]);
+ irq_dispose_mapping(ds->user_mii_bus->irq[phy]);
return ret;
}
@@ -1895,7 +1968,7 @@ static void ksz_irq_phy_free(struct ksz_device *dev)
for (phy = 0; phy < KSZ_MAX_NUM_PORTS; phy++)
if (BIT(phy) & ds->phys_mii_mask)
- irq_dispose_mapping(ds->slave_mii_bus->irq[phy]);
+ irq_dispose_mapping(ds->user_mii_bus->irq[phy]);
}
static int ksz_mdio_register(struct ksz_device *dev)
@@ -1918,12 +1991,12 @@ static int ksz_mdio_register(struct ksz_device *dev)
bus->priv = dev;
bus->read = ksz_sw_mdio_read;
bus->write = ksz_sw_mdio_write;
- bus->name = "ksz slave smi";
+ bus->name = "ksz user smi";
snprintf(bus->id, MII_BUS_ID_SIZE, "SMI-%d", ds->index);
bus->parent = ds->dev;
bus->phy_mask = ~ds->phys_mii_mask;
- ds->slave_mii_bus = bus;
+ ds->user_mii_bus = bus;
if (dev->irq > 0) {
ret = ksz_irq_phy_setup(dev);
@@ -2275,7 +2348,7 @@ static void ksz_mib_read_work(struct work_struct *work)
if (!p->read) {
const struct dsa_port *dp = dsa_to_port(dev->ds, i);
- if (!netif_carrier_ok(dp->slave))
+ if (!netif_carrier_ok(dp->user))
mib->cnt_ptr = dev->info->reg_mib_cnt;
}
port_r_cnt(dev, i);
@@ -2395,7 +2468,7 @@ static void ksz_get_ethtool_stats(struct dsa_switch *ds, int port,
mutex_lock(&mib->cnt_mutex);
/* Only read dropped counters if no link. */
- if (!netif_carrier_ok(dp->slave))
+ if (!netif_carrier_ok(dp->user))
mib->cnt_ptr = dev->info->reg_mib_cnt;
port_r_cnt(dev, port);
memcpy(buf, mib->counters, dev->info->mib_cnt * sizeof(u64));
@@ -2498,15 +2571,14 @@ static int ksz_port_mdb_del(struct dsa_switch *ds, int port,
return dev->dev_ops->mdb_del(dev, port, mdb, db);
}
-static int ksz_enable_port(struct dsa_switch *ds, int port,
- struct phy_device *phy)
+static int ksz_port_setup(struct dsa_switch *ds, int port)
{
struct ksz_device *dev = ds->priv;
if (!dsa_is_user_port(ds, port))
return 0;
- /* setup slave port */
+ /* setup user port */
dev->dev_ops->port_setup(dev, port, false);
/* port_stp_state_set() will be called after to enable the port so
@@ -2562,6 +2634,23 @@ void ksz_port_stp_state_set(struct dsa_switch *ds, int port, u8 state)
ksz_update_port_member(dev, port);
}
+static void ksz_port_teardown(struct dsa_switch *ds, int port)
+{
+ struct ksz_device *dev = ds->priv;
+
+ switch (dev->chip_id) {
+ case KSZ8563_CHIP_ID:
+ case KSZ9477_CHIP_ID:
+ case KSZ9563_CHIP_ID:
+ case KSZ9567_CHIP_ID:
+ case KSZ9893_CHIP_ID:
+ case KSZ9896_CHIP_ID:
+ case KSZ9897_CHIP_ID:
+ if (dsa_is_user_port(ds, port))
+ ksz9477_port_acl_free(dev, port);
+ }
+}
+
static int ksz_port_pre_bridge_flags(struct dsa_switch *ds, int port,
struct switchdev_brport_flags flags,
struct netlink_ext_ack *extack)
@@ -3107,6 +3196,44 @@ static int ksz_switch_detect(struct ksz_device *dev)
return 0;
}
+static int ksz_cls_flower_add(struct dsa_switch *ds, int port,
+ struct flow_cls_offload *cls, bool ingress)
+{
+ struct ksz_device *dev = ds->priv;
+
+ switch (dev->chip_id) {
+ case KSZ8563_CHIP_ID:
+ case KSZ9477_CHIP_ID:
+ case KSZ9563_CHIP_ID:
+ case KSZ9567_CHIP_ID:
+ case KSZ9893_CHIP_ID:
+ case KSZ9896_CHIP_ID:
+ case KSZ9897_CHIP_ID:
+ return ksz9477_cls_flower_add(ds, port, cls, ingress);
+ }
+
+ return -EOPNOTSUPP;
+}
+
+static int ksz_cls_flower_del(struct dsa_switch *ds, int port,
+ struct flow_cls_offload *cls, bool ingress)
+{
+ struct ksz_device *dev = ds->priv;
+
+ switch (dev->chip_id) {
+ case KSZ8563_CHIP_ID:
+ case KSZ9477_CHIP_ID:
+ case KSZ9563_CHIP_ID:
+ case KSZ9567_CHIP_ID:
+ case KSZ9893_CHIP_ID:
+ case KSZ9896_CHIP_ID:
+ case KSZ9897_CHIP_ID:
+ return ksz9477_cls_flower_del(ds, port, cls, ingress);
+ }
+
+ return -EOPNOTSUPP;
+}
+
/* Bandwidth is calculated by idle slope/transmission speed. Then the Bandwidth
* is converted to Hex-decimal using the successive multiplication method. On
* every step, integer part is taken and decimal part is carry forwarded.
@@ -3419,6 +3546,224 @@ static int ksz_setup_tc(struct dsa_switch *ds, int port,
}
}
+static void ksz_get_wol(struct dsa_switch *ds, int port,
+ struct ethtool_wolinfo *wol)
+{
+ struct ksz_device *dev = ds->priv;
+
+ if (dev->dev_ops->get_wol)
+ dev->dev_ops->get_wol(dev, port, wol);
+}
+
+static int ksz_set_wol(struct dsa_switch *ds, int port,
+ struct ethtool_wolinfo *wol)
+{
+ struct ksz_device *dev = ds->priv;
+
+ if (dev->dev_ops->set_wol)
+ return dev->dev_ops->set_wol(dev, port, wol);
+
+ return -EOPNOTSUPP;
+}
+
+static int ksz_port_set_mac_address(struct dsa_switch *ds, int port,
+ const unsigned char *addr)
+{
+ struct dsa_port *dp = dsa_to_port(ds, port);
+ struct ethtool_wolinfo wol;
+
+ if (dp->hsr_dev) {
+ dev_err(ds->dev,
+ "Cannot change MAC address on port %d with active HSR offload\n",
+ port);
+ return -EBUSY;
+ }
+
+ ksz_get_wol(ds, dp->index, &wol);
+ if (wol.wolopts & WAKE_MAGIC) {
+ dev_err(ds->dev,
+ "Cannot change MAC address on port %d with active Wake on Magic Packet\n",
+ port);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+/**
+ * ksz_is_port_mac_global_usable - Check if the MAC address on a given port
+ * can be used as a global address.
+ * @ds: Pointer to the DSA switch structure.
+ * @port: The port number on which the MAC address is to be checked.
+ *
+ * This function examines the MAC address set on the specified port and
+ * determines if it can be used as a global address for the switch.
+ *
+ * Return: true if the port's MAC address can be used as a global address, false
+ * otherwise.
+ */
+bool ksz_is_port_mac_global_usable(struct dsa_switch *ds, int port)
+{
+ struct net_device *user = dsa_to_port(ds, port)->user;
+ const unsigned char *addr = user->dev_addr;
+ struct ksz_switch_macaddr *switch_macaddr;
+ struct ksz_device *dev = ds->priv;
+
+ ASSERT_RTNL();
+
+ switch_macaddr = dev->switch_macaddr;
+ if (switch_macaddr && !ether_addr_equal(switch_macaddr->addr, addr))
+ return false;
+
+ return true;
+}
+
+/**
+ * ksz_switch_macaddr_get - Program the switch's MAC address register.
+ * @ds: DSA switch instance.
+ * @port: Port number.
+ * @extack: Netlink extended acknowledgment.
+ *
+ * This function programs the switch's MAC address register with the MAC address
+ * of the requesting user port. This single address is used by the switch for
+ * multiple features like HSR self-address filtering and WoL. Other user ports
+ * can share ownership of this address as long as their MAC address is the same.
+ * The MAC addresses of user ports must not change while they have ownership of
+ * the switch MAC address.
+ *
+ * Return: 0 on success, or other error codes on failure.
+ */
+int ksz_switch_macaddr_get(struct dsa_switch *ds, int port,
+ struct netlink_ext_ack *extack)
+{
+ struct net_device *user = dsa_to_port(ds, port)->user;
+ const unsigned char *addr = user->dev_addr;
+ struct ksz_switch_macaddr *switch_macaddr;
+ struct ksz_device *dev = ds->priv;
+ const u16 *regs = dev->info->regs;
+ int i, ret;
+
+ /* Make sure concurrent MAC address changes are blocked */
+ ASSERT_RTNL();
+
+ switch_macaddr = dev->switch_macaddr;
+ if (switch_macaddr) {
+ if (!ether_addr_equal(switch_macaddr->addr, addr)) {
+ NL_SET_ERR_MSG_FMT_MOD(extack,
+ "Switch already configured for MAC address %pM",
+ switch_macaddr->addr);
+ return -EBUSY;
+ }
+
+ refcount_inc(&switch_macaddr->refcount);
+ return 0;
+ }
+
+ switch_macaddr = kzalloc(sizeof(*switch_macaddr), GFP_KERNEL);
+ if (!switch_macaddr)
+ return -ENOMEM;
+
+ ether_addr_copy(switch_macaddr->addr, addr);
+ refcount_set(&switch_macaddr->refcount, 1);
+ dev->switch_macaddr = switch_macaddr;
+
+ /* Program the switch MAC address to hardware */
+ for (i = 0; i < ETH_ALEN; i++) {
+ ret = ksz_write8(dev, regs[REG_SW_MAC_ADDR] + i, addr[i]);
+ if (ret)
+ goto macaddr_drop;
+ }
+
+ return 0;
+
+macaddr_drop:
+ dev->switch_macaddr = NULL;
+ refcount_set(&switch_macaddr->refcount, 0);
+ kfree(switch_macaddr);
+
+ return ret;
+}
+
+void ksz_switch_macaddr_put(struct dsa_switch *ds)
+{
+ struct ksz_switch_macaddr *switch_macaddr;
+ struct ksz_device *dev = ds->priv;
+ const u16 *regs = dev->info->regs;
+ int i;
+
+ /* Make sure concurrent MAC address changes are blocked */
+ ASSERT_RTNL();
+
+ switch_macaddr = dev->switch_macaddr;
+ if (!refcount_dec_and_test(&switch_macaddr->refcount))
+ return;
+
+ for (i = 0; i < ETH_ALEN; i++)
+ ksz_write8(dev, regs[REG_SW_MAC_ADDR] + i, 0);
+
+ dev->switch_macaddr = NULL;
+ kfree(switch_macaddr);
+}
+
+static int ksz_hsr_join(struct dsa_switch *ds, int port, struct net_device *hsr,
+ struct netlink_ext_ack *extack)
+{
+ struct ksz_device *dev = ds->priv;
+ enum hsr_version ver;
+ int ret;
+
+ ret = hsr_get_version(hsr, &ver);
+ if (ret)
+ return ret;
+
+ if (dev->chip_id != KSZ9477_CHIP_ID) {
+ NL_SET_ERR_MSG_MOD(extack, "Chip does not support HSR offload");
+ return -EOPNOTSUPP;
+ }
+
+ /* KSZ9477 can support HW offloading of only 1 HSR device */
+ if (dev->hsr_dev && hsr != dev->hsr_dev) {
+ NL_SET_ERR_MSG_MOD(extack, "Offload supported for a single HSR");
+ return -EOPNOTSUPP;
+ }
+
+ /* KSZ9477 only supports HSR v0 and v1 */
+ if (!(ver == HSR_V0 || ver == HSR_V1)) {
+ NL_SET_ERR_MSG_MOD(extack, "Only HSR v0 and v1 supported");
+ return -EOPNOTSUPP;
+ }
+
+ /* Self MAC address filtering, to avoid frames traversing
+ * the HSR ring more than once.
+ */
+ ret = ksz_switch_macaddr_get(ds, port, extack);
+ if (ret)
+ return ret;
+
+ ksz9477_hsr_join(ds, port, hsr);
+ dev->hsr_dev = hsr;
+ dev->hsr_ports |= BIT(port);
+
+ return 0;
+}
+
+static int ksz_hsr_leave(struct dsa_switch *ds, int port,
+ struct net_device *hsr)
+{
+ struct ksz_device *dev = ds->priv;
+
+ WARN_ON(dev->chip_id != KSZ9477_CHIP_ID);
+
+ ksz9477_hsr_leave(ds, port, hsr);
+ dev->hsr_ports &= ~BIT(port);
+ if (!dev->hsr_ports)
+ dev->hsr_dev = NULL;
+
+ ksz_switch_macaddr_put(ds);
+
+ return 0;
+}
+
static const struct dsa_switch_ops ksz_switch_ops = {
.get_tag_protocol = ksz_get_tag_protocol,
.connect_tag_protocol = ksz_connect_tag_protocol,
@@ -3431,14 +3776,18 @@ static const struct dsa_switch_ops ksz_switch_ops = {
.phylink_mac_config = ksz_phylink_mac_config,
.phylink_mac_link_up = ksz_phylink_mac_link_up,
.phylink_mac_link_down = ksz_mac_link_down,
- .port_enable = ksz_enable_port,
+ .port_setup = ksz_port_setup,
.set_ageing_time = ksz_set_ageing_time,
.get_strings = ksz_get_strings,
.get_ethtool_stats = ksz_get_ethtool_stats,
.get_sset_count = ksz_sset_count,
.port_bridge_join = ksz_port_bridge_join,
.port_bridge_leave = ksz_port_bridge_leave,
+ .port_hsr_join = ksz_hsr_join,
+ .port_hsr_leave = ksz_hsr_leave,
+ .port_set_mac_address = ksz_port_set_mac_address,
.port_stp_state_set = ksz_port_stp_state_set,
+ .port_teardown = ksz_port_teardown,
.port_pre_bridge_flags = ksz_port_pre_bridge_flags,
.port_bridge_flags = ksz_port_bridge_flags,
.port_fast_age = ksz_port_fast_age,
@@ -3456,11 +3805,15 @@ static const struct dsa_switch_ops ksz_switch_ops = {
.get_pause_stats = ksz_get_pause_stats,
.port_change_mtu = ksz_change_mtu,
.port_max_mtu = ksz_max_mtu,
+ .get_wol = ksz_get_wol,
+ .set_wol = ksz_set_wol,
.get_ts_info = ksz_get_ts_info,
.port_hwtstamp_get = ksz_hwtstamp_get,
.port_hwtstamp_set = ksz_hwtstamp_set,
.port_txtstamp = ksz_port_txtstamp,
.port_rxtstamp = ksz_port_rxtstamp,
+ .cls_flower_add = ksz_cls_flower_add,
+ .cls_flower_del = ksz_cls_flower_del,
.port_setup_tc = ksz_setup_tc,
.get_mac_eee = ksz_get_mac_eee,
.set_mac_eee = ksz_set_mac_eee,
@@ -3493,6 +3846,30 @@ struct ksz_device *ksz_switch_alloc(struct device *base, void *priv)
}
EXPORT_SYMBOL(ksz_switch_alloc);
+/**
+ * ksz_switch_shutdown - Shutdown routine for the switch device.
+ * @dev: The switch device structure.
+ *
+ * This function is responsible for initiating a shutdown sequence for the
+ * switch device. It invokes the reset operation defined in the device
+ * operations, if available, to reset the switch, unless the WoL pre-shutdown
+ * hook reports that Wake-on-LAN is enabled. Subsequently, it calls the DSA
+ * framework's shutdown function to ensure a proper shutdown of the DSA
+ * switch.
+ */
+void ksz_switch_shutdown(struct ksz_device *dev)
+{
+ bool wol_enabled = false;
+
+ if (dev->dev_ops->wol_pre_shutdown)
+ dev->dev_ops->wol_pre_shutdown(dev, &wol_enabled);
+
+ if (dev->dev_ops->reset && !wol_enabled)
+ dev->dev_ops->reset(dev);
+
+ dsa_switch_shutdown(dev->ds);
+}
+EXPORT_SYMBOL(ksz_switch_shutdown);
+
static void ksz_parse_rgmii_delay(struct ksz_device *dev, int port_num,
struct device_node *port_dn)
{
@@ -3530,6 +3907,245 @@ static void ksz_parse_rgmii_delay(struct ksz_device *dev, int port_num,
dev->ports[port_num].rgmii_tx_val = tx_delay;
}
+/**
+ * ksz_drive_strength_to_reg() - Convert drive strength value to corresponding
+ * register value.
+ * @array: The array of drive strength values to search.
+ * @array_size: The size of the array.
+ * @microamp: The drive strength value in microamp to be converted.
+ *
+ * This function searches the array of drive strength values for the given
+ * microamp value and returns the corresponding register value for that drive.
+ *
+ * Returns: If found, the corresponding register value for that drive strength
+ * is returned. Otherwise, -EINVAL is returned indicating an invalid value.
+ */
+static int ksz_drive_strength_to_reg(const struct ksz_drive_strength *array,
+ size_t array_size, int microamp)
+{
+ int i;
+
+ for (i = 0; i < array_size; i++) {
+ if (array[i].microamp == microamp)
+ return array[i].reg_val;
+ }
+
+ return -EINVAL;
+}
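
A self-contained sketch of the lookup above, reusing the documented current steps in microamps; the register encodings 0..7 follow the SW_DRIVE_STRENGTH_* values shown in the ksz9477_reg.h hunk earlier, but treat them here as illustrative stand-ins:

    #include <stdio.h>

    struct drive_strength {
            unsigned int reg_val;
            unsigned int microamp;
    };

    static const struct drive_strength strengths[] = {
            { 0,  2000 }, { 1,  4000 }, { 2,  8000 }, { 3, 12000 },
            { 4, 16000 }, { 5, 20000 }, { 6, 24000 }, { 7, 28000 },
    };

    static int to_reg(const struct drive_strength *arr, int n, int microamp)
    {
            for (int i = 0; i < n; i++)
                    if ((int)arr[i].microamp == microamp)
                            return arr[i].reg_val;
            return -1;  /* stands in for -EINVAL */
    }

    int main(void)
    {
            printf("%d\n", to_reg(strengths, 8, 16000));  /* 4 */
            printf("%d\n", to_reg(strengths, 8, 10000));  /* -1: unsupported */
            return 0;
    }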
+
+/**
+ * ksz_drive_strength_error() - Report invalid drive strength value
+ * @dev: ksz device
+ * @array: The array of drive strength values to search.
+ * @array_size: The size of the array.
+ * @microamp: Invalid drive strength value in microamp
+ *
+ * This function logs an error message when an unsupported drive strength value
+ * is detected. It lists out all the supported drive strength values for
+ * reference in the error message.
+ */
+static void ksz_drive_strength_error(struct ksz_device *dev,
+ const struct ksz_drive_strength *array,
+ size_t array_size, int microamp)
+{
+ char supported_values[100];
+ size_t remaining_size;
+ int added_len;
+ char *ptr;
+ int i;
+
+ remaining_size = sizeof(supported_values);
+ ptr = supported_values;
+
+ for (i = 0; i < array_size; i++) {
+ added_len = snprintf(ptr, remaining_size,
+ i == 0 ? "%d" : ", %d", array[i].microamp);
+
+ if (added_len >= remaining_size)
+ break;
+
+ ptr += added_len;
+ remaining_size -= added_len;
+ }
+
+ dev_err(dev->dev, "Invalid drive strength %d, supported values are %s\n",
+ microamp, supported_values);
+}
+
+/**
+ * ksz9477_drive_strength_write() - Set the drive strength for specific KSZ9477
+ * chip variants.
+ * @dev: ksz device
+ * @props: Array of drive strength properties to be applied
+ * @num_props: Number of properties in the array
+ *
+ * This function configures the drive strength for various KSZ9477 chip variants
+ * based on the provided properties. It handles chip-specific nuances and
+ * ensures only valid drive strengths are written to the respective chip.
+ *
+ * Return: 0 on successful configuration, a negative error code on failure.
+ */
+static int ksz9477_drive_strength_write(struct ksz_device *dev,
+ struct ksz_driver_strength_prop *props,
+ int num_props)
+{
+ size_t array_size = ARRAY_SIZE(ksz9477_drive_strengths);
+ int i, ret, reg;
+ u8 mask = 0;
+ u8 val = 0;
+
+ if (props[KSZ_DRIVER_STRENGTH_IO].value != -1)
+ dev_warn(dev->dev, "%s is not supported by this chip variant\n",
+ props[KSZ_DRIVER_STRENGTH_IO].name);
+
+ if (dev->chip_id == KSZ8795_CHIP_ID ||
+ dev->chip_id == KSZ8794_CHIP_ID ||
+ dev->chip_id == KSZ8765_CHIP_ID)
+ reg = KSZ8795_REG_SW_CTRL_20;
+ else
+ reg = KSZ9477_REG_SW_IO_STRENGTH;
+
+ for (i = 0; i < num_props; i++) {
+ if (props[i].value == -1)
+ continue;
+
+ ret = ksz_drive_strength_to_reg(ksz9477_drive_strengths,
+ array_size, props[i].value);
+ if (ret < 0) {
+ ksz_drive_strength_error(dev, ksz9477_drive_strengths,
+ array_size, props[i].value);
+ return ret;
+ }
+
+ mask |= SW_DRIVE_STRENGTH_M << props[i].offset;
+ val |= ret << props[i].offset;
+ }
+
+ return ksz_rmw8(dev, reg, mask, val);
+}
+
+/**
+ * ksz8830_drive_strength_write() - Set the drive strength configuration for
+ * KSZ8830 compatible chip variants.
+ * @dev: ksz device
+ * @props: Array of drive strength properties to be set
+ * @num_props: Number of properties in the array
+ *
+ * This function applies the specified drive strength settings to KSZ8830 chip
+ * variants (KSZ8873, KSZ8863).
+ * It ensures the configurations align with what the chip variant supports and
+ * warns or errors out on unsupported settings.
+ *
+ * Return: 0 on success, error code otherwise
+ */
+static int ksz8830_drive_strength_write(struct ksz_device *dev,
+ struct ksz_driver_strength_prop *props,
+ int num_props)
+{
+ size_t array_size = ARRAY_SIZE(ksz8830_drive_strengths);
+ int microamp;
+ int i, ret;
+
+ for (i = 0; i < num_props; i++) {
+ if (props[i].value == -1 || i == KSZ_DRIVER_STRENGTH_IO)
+ continue;
+
+ dev_warn(dev->dev, "%s is not supported by this chip variant\n",
+ props[i].name);
+ }
+
+ microamp = props[KSZ_DRIVER_STRENGTH_IO].value;
+ ret = ksz_drive_strength_to_reg(ksz8830_drive_strengths, array_size,
+ microamp);
+ if (ret < 0) {
+ ksz_drive_strength_error(dev, ksz8830_drive_strengths,
+ array_size, microamp);
+ return ret;
+ }
+
+ return ksz_rmw8(dev, KSZ8873_REG_GLOBAL_CTRL_12,
+ KSZ8873_DRIVE_STRENGTH_16MA, ret);
+}
+
+/**
+ * ksz_parse_drive_strength() - Extract and apply drive strength configurations
+ * from device tree properties.
+ * @dev: ksz device
+ *
+ * This function reads the specified drive strength properties from the
+ * device tree, validates against the supported chip variants, and sets
+ * them accordingly. Errors are treated as critical here, as the drive
+ * strength settings are crucial for EMI compliance.
+ *
+ * Return: 0 on success, error code otherwise
+ */
+static int ksz_parse_drive_strength(struct ksz_device *dev)
+{
+ struct ksz_driver_strength_prop of_props[] = {
+ [KSZ_DRIVER_STRENGTH_HI] = {
+ .name = "microchip,hi-drive-strength-microamp",
+ .offset = SW_HI_SPEED_DRIVE_STRENGTH_S,
+ .value = -1,
+ },
+ [KSZ_DRIVER_STRENGTH_LO] = {
+ .name = "microchip,lo-drive-strength-microamp",
+ .offset = SW_LO_SPEED_DRIVE_STRENGTH_S,
+ .value = -1,
+ },
+ [KSZ_DRIVER_STRENGTH_IO] = {
+ .name = "microchip,io-drive-strength-microamp",
+ .offset = 0, /* don't care */
+ .value = -1,
+ },
+ };
+ struct device_node *np = dev->dev->of_node;
+ bool have_any_prop = false;
+ int i, ret;
+
+ for (i = 0; i < ARRAY_SIZE(of_props); i++) {
+ ret = of_property_read_u32(np, of_props[i].name,
+ &of_props[i].value);
+ if (ret && ret != -EINVAL)
+ dev_warn(dev->dev, "Failed to read %s\n",
+ of_props[i].name);
+ if (ret)
+ continue;
+
+ have_any_prop = true;
+ }
+
+ if (!have_any_prop)
+ return 0;
+
+ switch (dev->chip_id) {
+ case KSZ8830_CHIP_ID:
+ return ksz8830_drive_strength_write(dev, of_props,
+ ARRAY_SIZE(of_props));
+ case KSZ8795_CHIP_ID:
+ case KSZ8794_CHIP_ID:
+ case KSZ8765_CHIP_ID:
+ case KSZ8563_CHIP_ID:
+ case KSZ9477_CHIP_ID:
+ case KSZ9563_CHIP_ID:
+ case KSZ9567_CHIP_ID:
+ case KSZ9893_CHIP_ID:
+ case KSZ9896_CHIP_ID:
+ case KSZ9897_CHIP_ID:
+ return ksz9477_drive_strength_write(dev, of_props,
+ ARRAY_SIZE(of_props));
+ default:
+ for (i = 0; i < ARRAY_SIZE(of_props); i++) {
+ if (of_props[i].value == -1)
+ continue;
+
+ dev_warn(dev->dev, "%s is not supported by this chip variant\n",
+ of_props[i].name);
+ }
+ }
+
+ return 0;
+}
+
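The lookup helper ksz_drive_strength_to_reg() used above is not part of this hunk; a minimal standalone sketch of the exact-match lookup it is expected to perform could look like the following (the type and function names here are illustrative only, not the kernel implementation):

    #include <errno.h>
    #include <stddef.h>

    struct drive_strength_entry {
            int reg_val;    /* encoding written into the register field */
            int microamp;   /* drive strength requested via device tree */
    };

    /* Hypothetical stand-in for ksz_drive_strength_to_reg(): return the
     * register encoding for an exactly matching microamp value, or -EINVAL
     * so the caller can report the unsupported request.
     */
    static int microamp_to_reg(const struct drive_strength_entry *tbl,
                               size_t len, int microamp)
    {
            for (size_t i = 0; i < len; i++)
                    if (tbl[i].microamp == microamp)
                            return tbl[i].reg_val;
            return -EINVAL;
    }

Assuming only exact matches are accepted, this is why ksz9477_drive_strength_write() and ksz8830_drive_strength_write() bail out through ksz_drive_strength_error() rather than rounding to a nearby value.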
int ksz_switch_register(struct ksz_device *dev)
{
const struct ksz_chip_data *info;
@@ -3612,6 +4228,10 @@ int ksz_switch_register(struct ksz_device *dev)
for (port_num = 0; port_num < dev->info->port_cnt; ++port_num)
dev->ports[port_num].interface = PHY_INTERFACE_MODE_NA;
if (dev->dev->of_node) {
+ ret = ksz_parse_drive_strength(dev);
+ if (ret)
+ return ret;
+
ret = of_get_phy_mode(dev->dev->of_node, &interface);
if (ret == 0)
dev->compat_interface = interface;
@@ -3643,6 +4263,9 @@ int ksz_switch_register(struct ksz_device *dev)
dev_err(dev->dev, "inconsistent synclko settings\n");
return -EINVAL;
}
+
+ dev->wakeup_source = of_property_read_bool(dev->dev->of_node,
+ "wakeup-source");
}
ret = dsa_register_switch(dev->ds);
diff --git a/drivers/net/dsa/microchip/ksz_common.h b/drivers/net/dsa/microchip/ksz_common.h
index a4de58847dea..b7e8a403a132 100644
--- a/drivers/net/dsa/microchip/ksz_common.h
+++ b/drivers/net/dsa/microchip/ksz_common.h
@@ -101,6 +101,11 @@ struct ksz_ptp_irq {
int num;
};
+struct ksz_switch_macaddr {
+ unsigned char addr[ETH_ALEN];
+ refcount_t refcount;
+};
+
struct ksz_port {
bool remove_tag; /* Remove Tag flag set, for ksz8795 only */
bool learning;
@@ -117,6 +122,7 @@ struct ksz_port {
u32 rgmii_tx_val;
u32 rgmii_rx_val;
struct ksz_device *ksz_dev;
+ void *acl_priv;
struct ksz_irq pirq;
u8 num;
#if IS_ENABLED(CONFIG_NET_DSA_MICROCHIP_KSZ_PTP)
@@ -157,6 +163,7 @@ struct ksz_device {
phy_interface_t compat_interface;
bool synclko_125;
bool synclko_disable;
+ bool wakeup_source;
struct vlan_table *vlan_cache;
@@ -169,6 +176,10 @@ struct ksz_device {
struct mutex lock_irq; /* IRQ Access */
struct ksz_irq girq;
struct ksz_ptp_data ptp_data;
+
+ struct ksz_switch_macaddr *switch_macaddr;
+ struct net_device *hsr_dev; /* HSR */
+ u8 hsr_ports;
};
/* List of supported models */
@@ -211,6 +222,7 @@ enum ksz_chip_id {
};
enum ksz_regs {
+ REG_SW_MAC_ADDR,
REG_IND_CTRL_0,
REG_IND_DATA_8,
REG_IND_DATA_CHECK,
@@ -362,6 +374,11 @@ struct ksz_dev_ops {
int duplex, bool tx_pause, bool rx_pause);
void (*setup_rgmii_delay)(struct ksz_device *dev, int port);
int (*tc_cbs_set_cinc)(struct ksz_device *dev, int port, u32 val);
+ void (*get_wol)(struct ksz_device *dev, int port,
+ struct ethtool_wolinfo *wol);
+ int (*set_wol)(struct ksz_device *dev, int port,
+ struct ethtool_wolinfo *wol);
+ void (*wol_pre_shutdown)(struct ksz_device *dev, bool *wol_enabled);
void (*config_cpu_port)(struct dsa_switch *ds);
int (*enable_stp_addr)(struct ksz_device *dev);
int (*reset)(struct ksz_device *dev);
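The three new hooks are optional per-chip callbacks; a dispatching wrapper at the ksz_common level would presumably look something like the sketch below (the wrapper name is an assumption for illustration, not part of this hunk):

    static void ksz_get_wol_sketch(struct ksz_device *dev, int port,
                                   struct ethtool_wolinfo *wol)
    {
            /* Only chip variants that implement Wake-on-LAN populate
             * ->get_wol; the rest leave the wolinfo untouched.
             */
            if (dev->dev_ops->get_wol)
                    dev->dev_ops->get_wol(dev, port, wol);
    }

wol_pre_shutdown() presumably follows the same pattern, letting the shutdown path ask whether WoL is armed before deciding whether to reset the switch (see the ksz_switch_shutdown() change in ksz_spi.c below).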
@@ -374,12 +391,17 @@ int ksz_switch_register(struct ksz_device *dev);
void ksz_switch_remove(struct ksz_device *dev);
void ksz_init_mib_timer(struct ksz_device *dev);
+bool ksz_is_port_mac_global_usable(struct dsa_switch *ds, int port);
void ksz_r_mib_stats64(struct ksz_device *dev, int port);
void ksz88xx_r_mib_stats64(struct ksz_device *dev, int port);
void ksz_port_stp_state_set(struct dsa_switch *ds, int port, u8 state);
bool ksz_get_gbit(struct ksz_device *dev, int port);
phy_interface_t ksz_get_xmii(struct ksz_device *dev, int port, bool gbit);
extern const struct ksz_chip_data ksz_switch_chips[];
+int ksz_switch_macaddr_get(struct dsa_switch *ds, int port,
+ struct netlink_ext_ack *extack);
+void ksz_switch_macaddr_put(struct dsa_switch *ds);
+void ksz_switch_shutdown(struct ksz_device *dev);
/* Common register access functions */
static inline struct regmap *ksz_regmap_8(struct ksz_device *dev)
@@ -689,6 +711,26 @@ static inline int is_lan937x(struct ksz_device *dev)
#define KSZ8_LEGAL_PACKET_SIZE 1518
#define KSZ9477_MAX_FRAME_SIZE 9000
+#define KSZ8873_REG_GLOBAL_CTRL_12 0x0e
+/* Drive Strength of I/O Pad
+ * 0: 8mA, 1: 16mA
+ */
+#define KSZ8873_DRIVE_STRENGTH_16MA BIT(6)
+
+#define KSZ8795_REG_SW_CTRL_20 0xa3
+#define KSZ9477_REG_SW_IO_STRENGTH 0x010d
+#define SW_DRIVE_STRENGTH_M 0x7
+#define SW_DRIVE_STRENGTH_2MA 0
+#define SW_DRIVE_STRENGTH_4MA 1
+#define SW_DRIVE_STRENGTH_8MA 2
+#define SW_DRIVE_STRENGTH_12MA 3
+#define SW_DRIVE_STRENGTH_16MA 4
+#define SW_DRIVE_STRENGTH_20MA 5
+#define SW_DRIVE_STRENGTH_24MA 6
+#define SW_DRIVE_STRENGTH_28MA 7
+#define SW_HI_SPEED_DRIVE_STRENGTH_S 4
+#define SW_LO_SPEED_DRIVE_STRENGTH_S 0
+
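As a quick sanity check on these field definitions, the following standalone example (not kernel code) computes the mask/value pair that ksz9477_drive_strength_write() ends up passing to ksz_rmw8() for 16 mA hi-speed pads and 8 mA lo-speed pads:

    #include <stdint.h>
    #include <stdio.h>

    /* Field definitions copied from above so the snippet is self-contained */
    #define SW_DRIVE_STRENGTH_M            0x7
    #define SW_DRIVE_STRENGTH_8MA          2
    #define SW_DRIVE_STRENGTH_16MA         4
    #define SW_HI_SPEED_DRIVE_STRENGTH_S   4
    #define SW_LO_SPEED_DRIVE_STRENGTH_S   0

    int main(void)
    {
            uint8_t mask = (SW_DRIVE_STRENGTH_M << SW_HI_SPEED_DRIVE_STRENGTH_S) |
                           (SW_DRIVE_STRENGTH_M << SW_LO_SPEED_DRIVE_STRENGTH_S);
            uint8_t val = (SW_DRIVE_STRENGTH_16MA << SW_HI_SPEED_DRIVE_STRENGTH_S) |
                          (SW_DRIVE_STRENGTH_8MA << SW_LO_SPEED_DRIVE_STRENGTH_S);

            /* Prints: mask=0x77 val=0x42 */
            printf("mask=0x%02x val=0x%02x\n", mask, val);
            return 0;
    }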
#define KSZ9477_REG_PORT_OUT_RATE_0 0x0420
#define KSZ9477_OUT_RATE_NO_LIMIT 0
diff --git a/drivers/net/dsa/microchip/ksz_ptp.c b/drivers/net/dsa/microchip/ksz_ptp.c
index 4e22a695a64c..1fe105913c75 100644
--- a/drivers/net/dsa/microchip/ksz_ptp.c
+++ b/drivers/net/dsa/microchip/ksz_ptp.c
@@ -557,7 +557,7 @@ static void ksz_ptp_txtstamp_skb(struct ksz_device *dev,
struct skb_shared_hwtstamps hwtstamps = {};
int ret;
- /* timeout must include DSA master to transmit data, tstamp latency,
+ /* timeout must include time for the DSA conduit to transmit data, tstamp latency,
* IRQ latency and time for reading the time stamp.
*/
ret = wait_for_completion_timeout(&prt->tstamp_msg_comp,
diff --git a/drivers/net/dsa/microchip/ksz_spi.c b/drivers/net/dsa/microchip/ksz_spi.c
index 279338451621..6f6d878e742c 100644
--- a/drivers/net/dsa/microchip/ksz_spi.c
+++ b/drivers/net/dsa/microchip/ksz_spi.c
@@ -114,10 +114,7 @@ static void ksz_spi_shutdown(struct spi_device *spi)
if (!dev)
return;
- if (dev->dev_ops->reset)
- dev->dev_ops->reset(dev);
-
- dsa_switch_shutdown(dev->ds);
+ ksz_switch_shutdown(dev);
spi_set_drvdata(spi, NULL);
}
diff --git a/drivers/net/dsa/mt7530-mmio.c b/drivers/net/dsa/mt7530-mmio.c
index 0a6a2fe34e64..b74a230a3f13 100644
--- a/drivers/net/dsa/mt7530-mmio.c
+++ b/drivers/net/dsa/mt7530-mmio.c
@@ -63,15 +63,12 @@ mt7988_probe(struct platform_device *pdev)
return dsa_register_switch(priv->ds);
}
-static int
-mt7988_remove(struct platform_device *pdev)
+static void mt7988_remove(struct platform_device *pdev)
{
struct mt7530_priv *priv = platform_get_drvdata(pdev);
if (priv)
mt7530_remove_common(priv);
-
- return 0;
}
static void mt7988_shutdown(struct platform_device *pdev)
@@ -88,7 +85,7 @@ static void mt7988_shutdown(struct platform_device *pdev)
static struct platform_driver mt7988_platform_driver = {
.probe = mt7988_probe,
- .remove = mt7988_remove,
+ .remove_new = mt7988_remove,
.shutdown = mt7988_shutdown,
.driver = {
.name = "mt7530-mmio",
diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
index 035a34b50f31..d27c6b70a2f6 100644
--- a/drivers/net/dsa/mt7530.c
+++ b/drivers/net/dsa/mt7530.c
@@ -836,8 +836,7 @@ mt7530_get_strings(struct dsa_switch *ds, int port, u32 stringset,
return;
for (i = 0; i < ARRAY_SIZE(mt7530_mib); i++)
- strncpy(data + i * ETH_GSTRING_LEN, mt7530_mib[i].name,
- ETH_GSTRING_LEN);
+ ethtool_sprintf(&data, "%s", mt7530_mib[i].name);
}
static void
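This conversion, repeated in several drivers further down, relies on ethtool_sprintf() formatting into the current ETH_GSTRING_LEN-sized slot and advancing the string cursor itself, which is what makes the manual data + i * ETH_GSTRING_LEN indexing unnecessary. A standalone analogue of the pattern (illustrative only, not the kernel helper):

    #include <stdio.h>

    #define GSTRING_LEN 32          /* stand-in for ETH_GSTRING_LEN */

    /* Format a name into the current fixed-size slot, then advance the
     * cursor by one slot so the caller never indexes the buffer by hand.
     */
    static void emit_string(char **data, const char *name)
    {
            snprintf(*data, GSTRING_LEN, "%s", name);
            *data += GSTRING_LEN;
    }

    int main(void)
    {
            char buf[3 * GSTRING_LEN] = { 0 };
            char *cursor = buf;
            const char *names[] = { "rx_packets", "tx_packets", "rx_errors" };

            for (size_t i = 0; i < sizeof(names) / sizeof(names[0]); i++)
                    emit_string(&cursor, names[i]);

            printf("%s %s %s\n", buf, buf + GSTRING_LEN, buf + 2 * GSTRING_LEN);
            return 0;
    }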
@@ -1114,7 +1113,7 @@ mt7530_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
u32 val;
/* When a new MTU is set, DSA always sets the CPU port's MTU to the
- * largest MTU of the slave ports. Because the switch only has a global
+ * largest MTU of the user ports. Because the switch only has a global
* RX length register, only allowing CPU port here is enough.
*/
if (!dsa_is_cpu_port(ds, port))
@@ -2070,7 +2069,7 @@ mt7530_setup_mdio_irq(struct mt7530_priv *priv)
unsigned int irq;
irq = irq_create_mapping(priv->irq_domain, p);
- ds->slave_mii_bus->irq[p] = irq;
+ ds->user_mii_bus->irq[p] = irq;
}
}
}
@@ -2164,7 +2163,7 @@ mt7530_setup_mdio(struct mt7530_priv *priv)
if (!bus)
return -ENOMEM;
- ds->slave_mii_bus = bus;
+ ds->user_mii_bus = bus;
bus->priv = priv;
bus->name = KBUILD_MODNAME "-mii";
snprintf(bus->id, MII_BUS_ID_SIZE, KBUILD_MODNAME "-%d", idx++);
@@ -2201,20 +2200,20 @@ mt7530_setup(struct dsa_switch *ds)
u32 id, val;
int ret, i;
- /* The parent node of master netdev which holds the common system
+ /* The parent node of conduit netdev which holds the common system
* controller also is the container for two GMACs nodes representing
* as two netdev instances.
*/
dsa_switch_for_each_cpu_port(cpu_dp, ds) {
- dn = cpu_dp->master->dev.of_node->parent;
+ dn = cpu_dp->conduit->dev.of_node->parent;
/* It doesn't matter which CPU port is found first,
- * their masters should share the same parent OF node
+ * their conduits should share the same parent OF node
*/
break;
}
if (!dn) {
- dev_err(ds->dev, "parent OF node of DSA master not found");
+ dev_err(ds->dev, "parent OF node of DSA conduit not found");
return -EINVAL;
}
@@ -2489,7 +2488,7 @@ mt7531_setup(struct dsa_switch *ds)
if (mt7531_dual_sgmii_supported(priv)) {
priv->p5_intf_sel = P5_INTF_SEL_GMAC5_SGMII;
- /* Let ds->slave_mii_bus be able to access external phy. */
+ /* Let ds->user_mii_bus be able to access external phy. */
mt7530_rmw(priv, MT7531_GPIO_MODE1, MT7531_GPIO11_RG_RXD2_MASK,
MT7531_EXT_P_MDC_11);
mt7530_rmw(priv, MT7531_GPIO_MODE1, MT7531_GPIO12_RG_RXD3_MASK,
@@ -2718,7 +2717,7 @@ mt7531_mac_config(struct dsa_switch *ds, int port, unsigned int mode,
case PHY_INTERFACE_MODE_RGMII_RXID:
case PHY_INTERFACE_MODE_RGMII_TXID:
dp = dsa_to_port(ds, port);
- phydev = dp->slave->phydev;
+ phydev = dp->user->phydev;
return mt7531_rgmii_setup(priv, port, interface, phydev);
case PHY_INTERFACE_MODE_SGMII:
case PHY_INTERFACE_MODE_NA:
@@ -2824,15 +2823,6 @@ static void mt753x_phylink_mac_link_down(struct dsa_switch *ds, int port,
mt7530_clear(priv, MT7530_PMCR_P(port), PMCR_LINK_SETTINGS_MASK);
}
-static void mt753x_phylink_pcs_link_up(struct phylink_pcs *pcs,
- unsigned int mode,
- phy_interface_t interface,
- int speed, int duplex)
-{
- if (pcs->ops->pcs_link_up)
- pcs->ops->pcs_link_up(pcs, mode, interface, speed, duplex);
-}
-
static void mt753x_phylink_mac_link_up(struct dsa_switch *ds, int port,
unsigned int mode,
phy_interface_t interface,
@@ -2921,8 +2911,6 @@ mt7531_cpu_port_config(struct dsa_switch *ds, int port)
return ret;
mt7530_write(priv, MT7530_PMCR_P(port),
PMCR_CPU_PORT_SETTING(priv->id));
- mt753x_phylink_pcs_link_up(&priv->pcs[port].pcs, MLO_AN_FIXED,
- interface, speed, DUPLEX_FULL);
mt753x_phylink_mac_link_up(ds, port, MLO_AN_FIXED, interface, NULL,
speed, DUPLEX_FULL, true, true);
diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
index ab434a77b059..42b1acaca33a 100644
--- a/drivers/net/dsa/mv88e6xxx/chip.c
+++ b/drivers/net/dsa/mv88e6xxx/chip.c
@@ -2486,7 +2486,7 @@ static int mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
else
member = MV88E6XXX_G1_VTU_DATA_MEMBER_TAG_TAGGED;
- /* net/dsa/slave.c will call dsa_port_vlan_add() for the affected port
+ /* net/dsa/user.c will call dsa_port_vlan_add() for the affected port
* and then the CPU port. Do not warn for duplicates for the CPU port.
*/
warn = !dsa_is_cpu_port(ds, port) && !dsa_is_dsa_port(ds, port);
@@ -3719,7 +3719,7 @@ static int mv88e6xxx_setup(struct dsa_switch *ds)
return err;
chip->ds = ds;
- ds->slave_mii_bus = mv88e6xxx_default_mdio_bus(chip);
+ ds->user_mii_bus = mv88e6xxx_default_mdio_bus(chip);
/* Since virtual bridges are mapped in the PVT, the number we support
* depends on the physical switch topology. We need to let DSA figure
diff --git a/drivers/net/dsa/mv88e6xxx/pcs-639x.c b/drivers/net/dsa/mv88e6xxx/pcs-639x.c
index ba373656bfe1..9a8429f5d09c 100644
--- a/drivers/net/dsa/mv88e6xxx/pcs-639x.c
+++ b/drivers/net/dsa/mv88e6xxx/pcs-639x.c
@@ -208,7 +208,7 @@ static void mv88e639x_sgmii_pcs_pre_config(struct phylink_pcs *pcs,
static int mv88e6390_erratum_3_14(struct mv88e639x_pcs *mpcs)
{
- const int lanes[] = { MV88E6390_PORT9_LANE0, MV88E6390_PORT9_LANE1,
+ static const int lanes[] = { MV88E6390_PORT9_LANE0, MV88E6390_PORT9_LANE1,
MV88E6390_PORT9_LANE2, MV88E6390_PORT9_LANE3,
MV88E6390_PORT10_LANE0, MV88E6390_PORT10_LANE1,
MV88E6390_PORT10_LANE2, MV88E6390_PORT10_LANE3 };
diff --git a/drivers/net/dsa/mv88e6xxx/ptp.c b/drivers/net/dsa/mv88e6xxx/ptp.c
index ea17231dc34e..56391e09b325 100644
--- a/drivers/net/dsa/mv88e6xxx/ptp.c
+++ b/drivers/net/dsa/mv88e6xxx/ptp.c
@@ -182,6 +182,10 @@ static void mv88e6352_tai_event_work(struct work_struct *ugly)
mv88e6xxx_reg_lock(chip);
err = mv88e6xxx_tai_write(chip, MV88E6XXX_TAI_EVENT_STATUS, status[0]);
mv88e6xxx_reg_unlock(chip);
+ if (err) {
+ dev_err(chip->dev, "failed to write TAI status register\n");
+ return;
+ }
/* This is an external timestamp */
ev.type = PTP_CLOCK_EXTTS;
diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
index 9a3e5ec16972..61e95487732d 100644
--- a/drivers/net/dsa/ocelot/felix.c
+++ b/drivers/net/dsa/ocelot/felix.c
@@ -42,22 +42,22 @@ static struct net_device *felix_classify_db(struct dsa_db db)
}
}
-static int felix_cpu_port_for_master(struct dsa_switch *ds,
- struct net_device *master)
+static int felix_cpu_port_for_conduit(struct dsa_switch *ds,
+ struct net_device *conduit)
{
struct ocelot *ocelot = ds->priv;
struct dsa_port *cpu_dp;
int lag;
- if (netif_is_lag_master(master)) {
+ if (netif_is_lag_master(conduit)) {
mutex_lock(&ocelot->fwd_domain_lock);
- lag = ocelot_bond_get_id(ocelot, master);
+ lag = ocelot_bond_get_id(ocelot, conduit);
mutex_unlock(&ocelot->fwd_domain_lock);
return lag;
}
- cpu_dp = master->dsa_ptr;
+ cpu_dp = conduit->dsa_ptr;
return cpu_dp->index;
}
@@ -366,7 +366,7 @@ static int felix_update_trapping_destinations(struct dsa_switch *ds,
* is the mode through which frames can be injected from and extracted to an
* external CPU, over Ethernet. In NXP SoCs, the "external CPU" is the ARM CPU
* running Linux, and this forms a DSA setup together with the enetc or fman
- * DSA master.
+ * DSA conduit.
*/
static void felix_npi_port_init(struct ocelot *ocelot, int port)
{
@@ -441,16 +441,16 @@ static unsigned long felix_tag_npi_get_host_fwd_mask(struct dsa_switch *ds)
return BIT(ocelot->num_phys_ports);
}
-static int felix_tag_npi_change_master(struct dsa_switch *ds, int port,
- struct net_device *master,
- struct netlink_ext_ack *extack)
+static int felix_tag_npi_change_conduit(struct dsa_switch *ds, int port,
+ struct net_device *conduit,
+ struct netlink_ext_ack *extack)
{
struct dsa_port *dp = dsa_to_port(ds, port), *other_dp;
struct ocelot *ocelot = ds->priv;
- if (netif_is_lag_master(master)) {
+ if (netif_is_lag_master(conduit)) {
NL_SET_ERR_MSG_MOD(extack,
- "LAG DSA master only supported using ocelot-8021q");
+ "LAG DSA conduit only supported using ocelot-8021q");
return -EOPNOTSUPP;
}
@@ -459,24 +459,24 @@ static int felix_tag_npi_change_master(struct dsa_switch *ds, int port,
* come back up until they're all changed to the new one.
*/
dsa_switch_for_each_user_port(other_dp, ds) {
- struct net_device *slave = other_dp->slave;
+ struct net_device *user = other_dp->user;
- if (other_dp != dp && (slave->flags & IFF_UP) &&
- dsa_port_to_master(other_dp) != master) {
+ if (other_dp != dp && (user->flags & IFF_UP) &&
+ dsa_port_to_conduit(other_dp) != conduit) {
NL_SET_ERR_MSG_MOD(extack,
- "Cannot change while old master still has users");
+ "Cannot change while old conduit still has users");
return -EOPNOTSUPP;
}
}
felix_npi_port_deinit(ocelot, ocelot->npi);
- felix_npi_port_init(ocelot, felix_cpu_port_for_master(ds, master));
+ felix_npi_port_init(ocelot, felix_cpu_port_for_conduit(ds, conduit));
return 0;
}
/* Alternatively to using the NPI functionality, that same hardware MAC
- * connected internally to the enetc or fman DSA master can be configured to
+ * connected internally to the enetc or fman DSA conduit can be configured to
* use the software-defined tag_8021q frame format. As far as the hardware is
* concerned, it thinks it is a "dumb switch" - the queues of the CPU port
* module are now disconnected from it, but can still be accessed through
@@ -486,7 +486,7 @@ static const struct felix_tag_proto_ops felix_tag_npi_proto_ops = {
.setup = felix_tag_npi_setup,
.teardown = felix_tag_npi_teardown,
.get_host_fwd_mask = felix_tag_npi_get_host_fwd_mask,
- .change_master = felix_tag_npi_change_master,
+ .change_conduit = felix_tag_npi_change_conduit,
};
static int felix_tag_8021q_setup(struct dsa_switch *ds)
@@ -561,11 +561,11 @@ static unsigned long felix_tag_8021q_get_host_fwd_mask(struct dsa_switch *ds)
return dsa_cpu_ports(ds);
}
-static int felix_tag_8021q_change_master(struct dsa_switch *ds, int port,
- struct net_device *master,
- struct netlink_ext_ack *extack)
+static int felix_tag_8021q_change_conduit(struct dsa_switch *ds, int port,
+ struct net_device *conduit,
+ struct netlink_ext_ack *extack)
{
- int cpu = felix_cpu_port_for_master(ds, master);
+ int cpu = felix_cpu_port_for_conduit(ds, conduit);
struct ocelot *ocelot = ds->priv;
ocelot_port_unassign_dsa_8021q_cpu(ocelot, port);
@@ -578,7 +578,7 @@ static const struct felix_tag_proto_ops felix_tag_8021q_proto_ops = {
.setup = felix_tag_8021q_setup,
.teardown = felix_tag_8021q_teardown,
.get_host_fwd_mask = felix_tag_8021q_get_host_fwd_mask,
- .change_master = felix_tag_8021q_change_master,
+ .change_conduit = felix_tag_8021q_change_conduit,
};
static void felix_set_host_flood(struct dsa_switch *ds, unsigned long mask,
@@ -741,14 +741,14 @@ static void felix_port_set_host_flood(struct dsa_switch *ds, int port,
!!felix->host_flood_mc_mask, true);
}
-static int felix_port_change_master(struct dsa_switch *ds, int port,
- struct net_device *master,
- struct netlink_ext_ack *extack)
+static int felix_port_change_conduit(struct dsa_switch *ds, int port,
+ struct net_device *conduit,
+ struct netlink_ext_ack *extack)
{
struct ocelot *ocelot = ds->priv;
struct felix *felix = ocelot_to_felix(ocelot);
- return felix->tag_proto_ops->change_master(ds, port, master, extack);
+ return felix->tag_proto_ops->change_conduit(ds, port, conduit, extack);
}
static int felix_set_ageing_time(struct dsa_switch *ds,
@@ -953,7 +953,7 @@ static int felix_lag_join(struct dsa_switch *ds, int port,
if (!dsa_is_cpu_port(ds, port))
return 0;
- return felix_port_change_master(ds, port, lag.dev, extack);
+ return felix_port_change_conduit(ds, port, lag.dev, extack);
}
static int felix_lag_leave(struct dsa_switch *ds, int port,
@@ -967,7 +967,7 @@ static int felix_lag_leave(struct dsa_switch *ds, int port,
if (!dsa_is_cpu_port(ds, port))
return 0;
- return felix_port_change_master(ds, port, lag.dev, NULL);
+ return felix_port_change_conduit(ds, port, lag.dev, NULL);
}
static int felix_lag_change(struct dsa_switch *ds, int port)
@@ -1116,10 +1116,10 @@ static int felix_port_enable(struct dsa_switch *ds, int port,
return 0;
if (ocelot->npi >= 0) {
- struct net_device *master = dsa_port_to_master(dp);
+ struct net_device *conduit = dsa_port_to_conduit(dp);
- if (felix_cpu_port_for_master(ds, master) != ocelot->npi) {
- dev_err(ds->dev, "Multiple masters are not allowed\n");
+ if (felix_cpu_port_for_conduit(ds, conduit) != ocelot->npi) {
+ dev_err(ds->dev, "Multiple conduits are not allowed\n");
return -EINVAL;
}
}
@@ -2164,7 +2164,7 @@ const struct dsa_switch_ops felix_switch_ops = {
.port_add_dscp_prio = felix_port_add_dscp_prio,
.port_del_dscp_prio = felix_port_del_dscp_prio,
.port_set_host_flood = felix_port_set_host_flood,
- .port_change_master = felix_port_change_master,
+ .port_change_conduit = felix_port_change_conduit,
};
EXPORT_SYMBOL_GPL(felix_switch_ops);
@@ -2176,7 +2176,7 @@ struct net_device *felix_port_to_netdev(struct ocelot *ocelot, int port)
if (!dsa_is_user_port(ds, port))
return NULL;
- return dsa_to_port(ds, port)->slave;
+ return dsa_to_port(ds, port)->user;
}
EXPORT_SYMBOL_GPL(felix_port_to_netdev);
diff --git a/drivers/net/dsa/ocelot/felix.h b/drivers/net/dsa/ocelot/felix.h
index 1d4befe7cfe8..dbf5872fe367 100644
--- a/drivers/net/dsa/ocelot/felix.h
+++ b/drivers/net/dsa/ocelot/felix.h
@@ -77,9 +77,9 @@ struct felix_tag_proto_ops {
int (*setup)(struct dsa_switch *ds);
void (*teardown)(struct dsa_switch *ds);
unsigned long (*get_host_fwd_mask)(struct dsa_switch *ds);
- int (*change_master)(struct dsa_switch *ds, int port,
- struct net_device *master,
- struct netlink_ext_ack *extack);
+ int (*change_conduit)(struct dsa_switch *ds, int port,
+ struct net_device *conduit,
+ struct netlink_ext_ack *extack);
};
extern const struct dsa_switch_ops felix_switch_ops;
diff --git a/drivers/net/dsa/ocelot/ocelot_ext.c b/drivers/net/dsa/ocelot/ocelot_ext.c
index c29bee5a5c48..22187d831c4b 100644
--- a/drivers/net/dsa/ocelot/ocelot_ext.c
+++ b/drivers/net/dsa/ocelot/ocelot_ext.c
@@ -115,19 +115,17 @@ err_free_felix:
return err;
}
-static int ocelot_ext_remove(struct platform_device *pdev)
+static void ocelot_ext_remove(struct platform_device *pdev)
{
struct felix *felix = dev_get_drvdata(&pdev->dev);
if (!felix)
- return 0;
+ return;
dsa_unregister_switch(felix->ds);
kfree(felix->ds);
kfree(felix);
-
- return 0;
}
static void ocelot_ext_shutdown(struct platform_device *pdev)
@@ -154,7 +152,7 @@ static struct platform_driver ocelot_ext_switch_driver = {
.of_match_table = ocelot_ext_switch_of_match,
},
.probe = ocelot_ext_probe,
- .remove = ocelot_ext_remove,
+ .remove_new = ocelot_ext_remove,
.shutdown = ocelot_ext_shutdown,
};
module_platform_driver(ocelot_ext_switch_driver);
diff --git a/drivers/net/dsa/ocelot/seville_vsc9953.c b/drivers/net/dsa/ocelot/seville_vsc9953.c
index 8f912bda120b..049930da0521 100644
--- a/drivers/net/dsa/ocelot/seville_vsc9953.c
+++ b/drivers/net/dsa/ocelot/seville_vsc9953.c
@@ -1029,19 +1029,17 @@ err_alloc_felix:
return err;
}
-static int seville_remove(struct platform_device *pdev)
+static void seville_remove(struct platform_device *pdev)
{
struct felix *felix = platform_get_drvdata(pdev);
if (!felix)
- return 0;
+ return;
dsa_unregister_switch(felix->ds);
kfree(felix->ds);
kfree(felix);
-
- return 0;
}
static void seville_shutdown(struct platform_device *pdev)
@@ -1064,7 +1062,7 @@ MODULE_DEVICE_TABLE(of, seville_of_match);
static struct platform_driver seville_vsc9953_driver = {
.probe = seville_probe,
- .remove = seville_remove,
+ .remove_new = seville_remove,
.shutdown = seville_shutdown,
.driver = {
.name = "mscc_seville",
diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
index 4ce68e655a63..ec57d9d52072 100644
--- a/drivers/net/dsa/qca/qca8k-8xxx.c
+++ b/drivers/net/dsa/qca/qca8k-8xxx.c
@@ -323,14 +323,14 @@ static int qca8k_read_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
mutex_lock(&mgmt_eth_data->mutex);
- /* Check mgmt_master if is operational */
- if (!priv->mgmt_master) {
+ /* Check if the mgmt_conduit is operational */
+ if (!priv->mgmt_conduit) {
kfree_skb(skb);
mutex_unlock(&mgmt_eth_data->mutex);
return -EINVAL;
}
- skb->dev = priv->mgmt_master;
+ skb->dev = priv->mgmt_conduit;
reinit_completion(&mgmt_eth_data->rw_done);
@@ -375,14 +375,14 @@ static int qca8k_write_eth(struct qca8k_priv *priv, u32 reg, u32 *val, int len)
mutex_lock(&mgmt_eth_data->mutex);
- /* Check mgmt_master if is operational */
- if (!priv->mgmt_master) {
+ /* Check if the mgmt_conduit is operational */
+ if (!priv->mgmt_conduit) {
kfree_skb(skb);
mutex_unlock(&mgmt_eth_data->mutex);
return -EINVAL;
}
- skb->dev = priv->mgmt_master;
+ skb->dev = priv->mgmt_conduit;
reinit_completion(&mgmt_eth_data->rw_done);
@@ -508,7 +508,7 @@ qca8k_bulk_read(void *ctx, const void *reg_buf, size_t reg_len,
struct qca8k_priv *priv = ctx;
u32 reg = *(u16 *)reg_buf;
- if (priv->mgmt_master &&
+ if (priv->mgmt_conduit &&
!qca8k_read_eth(priv, reg, val_buf, val_len))
return 0;
@@ -531,7 +531,7 @@ qca8k_bulk_gather_write(void *ctx, const void *reg_buf, size_t reg_len,
u32 reg = *(u16 *)reg_buf;
u32 *val = (u32 *)val_buf;
- if (priv->mgmt_master &&
+ if (priv->mgmt_conduit &&
!qca8k_write_eth(priv, reg, val, val_len))
return 0;
@@ -626,7 +626,7 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
struct sk_buff *write_skb, *clear_skb, *read_skb;
struct qca8k_mgmt_eth_data *mgmt_eth_data;
u32 write_val, clear_val = 0, val;
- struct net_device *mgmt_master;
+ struct net_device *mgmt_conduit;
int ret, ret1;
bool ack;
@@ -683,18 +683,18 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
*/
mutex_lock(&mgmt_eth_data->mutex);
- /* Check if mgmt_master is operational */
- mgmt_master = priv->mgmt_master;
- if (!mgmt_master) {
+ /* Check if mgmt_conduit is operational */
+ mgmt_conduit = priv->mgmt_conduit;
+ if (!mgmt_conduit) {
mutex_unlock(&mgmt_eth_data->mutex);
mutex_unlock(&priv->bus->mdio_lock);
ret = -EINVAL;
- goto err_mgmt_master;
+ goto err_mgmt_conduit;
}
- read_skb->dev = mgmt_master;
- clear_skb->dev = mgmt_master;
- write_skb->dev = mgmt_master;
+ read_skb->dev = mgmt_conduit;
+ clear_skb->dev = mgmt_conduit;
+ write_skb->dev = mgmt_conduit;
reinit_completion(&mgmt_eth_data->rw_done);
@@ -780,7 +780,7 @@ exit:
return ret;
/* Error handling before lock */
-err_mgmt_master:
+err_mgmt_conduit:
kfree_skb(read_skb);
err_read_skb:
kfree_skb(clear_skb);
@@ -959,12 +959,12 @@ qca8k_mdio_register(struct qca8k_priv *priv)
ds->dst->index, ds->index);
bus->parent = ds->dev;
bus->phy_mask = ~ds->phys_mii_mask;
- ds->slave_mii_bus = bus;
+ ds->user_mii_bus = bus;
/* Check if the devicetree declare the port:phy mapping */
mdio = of_get_child_by_name(priv->dev->of_node, "mdio");
if (of_device_is_available(mdio)) {
- bus->name = "qca8k slave mii";
+ bus->name = "qca8k user mii";
bus->read = qca8k_internal_mdio_read;
bus->write = qca8k_internal_mdio_write;
return devm_of_mdiobus_register(priv->dev, bus, mdio);
@@ -973,7 +973,7 @@ qca8k_mdio_register(struct qca8k_priv *priv)
/* If a mapping can't be found the legacy mapping is used,
* using the qca8k_port_to_phy function
*/
- bus->name = "qca8k-legacy slave mii";
+ bus->name = "qca8k-legacy user mii";
bus->read = qca8k_legacy_mdio_read;
bus->write = qca8k_legacy_mdio_write;
return devm_mdiobus_register(priv->dev, bus);
@@ -1728,10 +1728,10 @@ qca8k_get_tag_protocol(struct dsa_switch *ds, int port,
}
static void
-qca8k_master_change(struct dsa_switch *ds, const struct net_device *master,
- bool operational)
+qca8k_conduit_change(struct dsa_switch *ds, const struct net_device *conduit,
+ bool operational)
{
- struct dsa_port *dp = master->dsa_ptr;
+ struct dsa_port *dp = conduit->dsa_ptr;
struct qca8k_priv *priv = ds->priv;
/* Ethernet MIB/MDIO is only supported for CPU port 0 */
@@ -1741,7 +1741,7 @@ qca8k_master_change(struct dsa_switch *ds, const struct net_device *master,
mutex_lock(&priv->mgmt_eth_data.mutex);
mutex_lock(&priv->mib_eth_data.mutex);
- priv->mgmt_master = operational ? (struct net_device *)master : NULL;
+ priv->mgmt_conduit = operational ? (struct net_device *)conduit : NULL;
mutex_unlock(&priv->mib_eth_data.mutex);
mutex_unlock(&priv->mgmt_eth_data.mutex);
@@ -2016,7 +2016,7 @@ static const struct dsa_switch_ops qca8k_switch_ops = {
.get_phy_flags = qca8k_get_phy_flags,
.port_lag_join = qca8k_port_lag_join,
.port_lag_leave = qca8k_port_lag_leave,
- .master_state_change = qca8k_master_change,
+ .conduit_state_change = qca8k_conduit_change,
.connect_tag_protocol = qca8k_connect_tag_protocol,
};
diff --git a/drivers/net/dsa/qca/qca8k-common.c b/drivers/net/dsa/qca/qca8k-common.c
index fce04ce12cf9..9243eff8918d 100644
--- a/drivers/net/dsa/qca/qca8k-common.c
+++ b/drivers/net/dsa/qca/qca8k-common.c
@@ -487,8 +487,7 @@ void qca8k_get_strings(struct dsa_switch *ds, int port, u32 stringset,
return;
for (i = 0; i < priv->info->mib_count; i++)
- strncpy(data + i * ETH_GSTRING_LEN, ar8327_mib[i].name,
- ETH_GSTRING_LEN);
+ ethtool_sprintf(&data, "%s", ar8327_mib[i].name);
}
void qca8k_get_ethtool_stats(struct dsa_switch *ds, int port,
@@ -500,7 +499,7 @@ void qca8k_get_ethtool_stats(struct dsa_switch *ds, int port,
u32 hi = 0;
int ret;
- if (priv->mgmt_master && priv->info->ops->autocast_mib &&
+ if (priv->mgmt_conduit && priv->info->ops->autocast_mib &&
priv->info->ops->autocast_mib(ds, port, data) > 0)
return;
@@ -762,7 +761,7 @@ int qca8k_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
int ret;
/* We only have a general MTU setting.
- * DSA always set the CPU port's MTU to the largest MTU of the slave
+ * DSA always sets the CPU port's MTU to the largest MTU of the user
* ports.
* Setting MTU just for the CPU port is sufficient to correctly set a
* value for every port.
diff --git a/drivers/net/dsa/qca/qca8k-leds.c b/drivers/net/dsa/qca/qca8k-leds.c
index e8c16e76e34b..90e30c2909e4 100644
--- a/drivers/net/dsa/qca/qca8k-leds.c
+++ b/drivers/net/dsa/qca/qca8k-leds.c
@@ -356,8 +356,8 @@ static struct device *qca8k_cled_hw_control_get_device(struct led_classdev *ldev
dp = dsa_to_port(priv->ds, qca8k_phy_to_port(led->port_num));
if (!dp)
return NULL;
- if (dp->slave)
- return &dp->slave->dev;
+ if (dp->user)
+ return &dp->user->dev;
return NULL;
}
@@ -429,7 +429,7 @@ qca8k_parse_port_leds(struct qca8k_priv *priv, struct fwnode_handle *port, int p
init_data.default_label = ":port";
init_data.fwnode = led;
init_data.devname_mandatory = true;
- init_data.devicename = kasprintf(GFP_KERNEL, "%s:0%d", ds->slave_mii_bus->id,
+ init_data.devicename = kasprintf(GFP_KERNEL, "%s:0%d", ds->user_mii_bus->id,
port_num);
if (!init_data.devicename)
return -ENOMEM;
diff --git a/drivers/net/dsa/qca/qca8k.h b/drivers/net/dsa/qca/qca8k.h
index 8f88b7db384d..2ac7e88f8da5 100644
--- a/drivers/net/dsa/qca/qca8k.h
+++ b/drivers/net/dsa/qca/qca8k.h
@@ -458,7 +458,7 @@ struct qca8k_priv {
struct mutex reg_mutex;
struct device *dev;
struct gpio_desc *reset_gpio;
- struct net_device *mgmt_master; /* Track if mdio/mib Ethernet is available */
+ struct net_device *mgmt_conduit; /* Track if mdio/mib Ethernet is available */
struct qca8k_mgmt_eth_data mgmt_eth_data;
struct qca8k_mib_eth_data mib_eth_data;
struct qca8k_mdio_cache mdio_cache;
diff --git a/drivers/net/dsa/realtek/realtek-smi.c b/drivers/net/dsa/realtek/realtek-smi.c
index ff13563059c5..755546ed8db6 100644
--- a/drivers/net/dsa/realtek/realtek-smi.c
+++ b/drivers/net/dsa/realtek/realtek-smi.c
@@ -378,25 +378,25 @@ static int realtek_smi_setup_mdio(struct dsa_switch *ds)
return -ENODEV;
}
- priv->slave_mii_bus = devm_mdiobus_alloc(priv->dev);
- if (!priv->slave_mii_bus) {
+ priv->user_mii_bus = devm_mdiobus_alloc(priv->dev);
+ if (!priv->user_mii_bus) {
ret = -ENOMEM;
goto err_put_node;
}
- priv->slave_mii_bus->priv = priv;
- priv->slave_mii_bus->name = "SMI slave MII";
- priv->slave_mii_bus->read = realtek_smi_mdio_read;
- priv->slave_mii_bus->write = realtek_smi_mdio_write;
- snprintf(priv->slave_mii_bus->id, MII_BUS_ID_SIZE, "SMI-%d",
+ priv->user_mii_bus->priv = priv;
+ priv->user_mii_bus->name = "SMI user MII";
+ priv->user_mii_bus->read = realtek_smi_mdio_read;
+ priv->user_mii_bus->write = realtek_smi_mdio_write;
+ snprintf(priv->user_mii_bus->id, MII_BUS_ID_SIZE, "SMI-%d",
ds->index);
- priv->slave_mii_bus->dev.of_node = mdio_np;
- priv->slave_mii_bus->parent = priv->dev;
- ds->slave_mii_bus = priv->slave_mii_bus;
+ priv->user_mii_bus->dev.of_node = mdio_np;
+ priv->user_mii_bus->parent = priv->dev;
+ ds->user_mii_bus = priv->user_mii_bus;
- ret = devm_of_mdiobus_register(priv->dev, priv->slave_mii_bus, mdio_np);
+ ret = devm_of_mdiobus_register(priv->dev, priv->user_mii_bus, mdio_np);
if (ret) {
dev_err(priv->dev, "unable to register MDIO bus %s\n",
- priv->slave_mii_bus->id);
+ priv->user_mii_bus->id);
goto err_put_node;
}
@@ -506,22 +506,20 @@ static int realtek_smi_probe(struct platform_device *pdev)
return 0;
}
-static int realtek_smi_remove(struct platform_device *pdev)
+static void realtek_smi_remove(struct platform_device *pdev)
{
struct realtek_priv *priv = platform_get_drvdata(pdev);
if (!priv)
- return 0;
+ return;
dsa_unregister_switch(priv->ds);
- if (priv->slave_mii_bus)
- of_node_put(priv->slave_mii_bus->dev.of_node);
+ if (priv->user_mii_bus)
+ of_node_put(priv->user_mii_bus->dev.of_node);
/* leave the device reset asserted */
if (priv->reset)
gpiod_set_value(priv->reset, 1);
-
- return 0;
}
static void realtek_smi_shutdown(struct platform_device *pdev)
@@ -559,7 +557,7 @@ static struct platform_driver realtek_smi_driver = {
.of_match_table = realtek_smi_of_match,
},
.probe = realtek_smi_probe,
- .remove = realtek_smi_remove,
+ .remove_new = realtek_smi_remove,
.shutdown = realtek_smi_shutdown,
};
module_platform_driver(realtek_smi_driver);
diff --git a/drivers/net/dsa/realtek/realtek.h b/drivers/net/dsa/realtek/realtek.h
index 4fa7c6ba874a..790488e9c667 100644
--- a/drivers/net/dsa/realtek/realtek.h
+++ b/drivers/net/dsa/realtek/realtek.h
@@ -54,7 +54,7 @@ struct realtek_priv {
struct regmap *map;
struct regmap *map_nolock;
struct mutex map_lock;
- struct mii_bus *slave_mii_bus;
+ struct mii_bus *user_mii_bus;
struct mii_bus *bus;
int mdio_addr;
diff --git a/drivers/net/dsa/realtek/rtl8365mb.c b/drivers/net/dsa/realtek/rtl8365mb.c
index 41ea3b5a42b1..0875e4fc9f57 100644
--- a/drivers/net/dsa/realtek/rtl8365mb.c
+++ b/drivers/net/dsa/realtek/rtl8365mb.c
@@ -1144,7 +1144,7 @@ static int rtl8365mb_port_change_mtu(struct dsa_switch *ds, int port,
int frame_size;
/* When a new MTU is set, DSA always sets the CPU port's MTU to the
- * largest MTU of the slave ports. Because the switch only has a global
+ * largest MTU of the user ports. Because the switch only has a global
* RX length register, only allowing CPU port here is enough.
*/
if (!dsa_is_cpu_port(ds, port))
@@ -1303,8 +1303,7 @@ static void rtl8365mb_get_strings(struct dsa_switch *ds, int port, u32 stringset
for (i = 0; i < RTL8365MB_MIB_END; i++) {
struct rtl8365mb_mib_counter *mib = &rtl8365mb_mib_counters[i];
-
- strncpy(data + i * ETH_GSTRING_LEN, mib->name, ETH_GSTRING_LEN);
+ ethtool_sprintf(&data, "%s", mib->name);
}
}
diff --git a/drivers/net/dsa/realtek/rtl8366-core.c b/drivers/net/dsa/realtek/rtl8366-core.c
index dc5f75be3017..82e267b8fddb 100644
--- a/drivers/net/dsa/realtek/rtl8366-core.c
+++ b/drivers/net/dsa/realtek/rtl8366-core.c
@@ -395,17 +395,13 @@ void rtl8366_get_strings(struct dsa_switch *ds, int port, u32 stringset,
uint8_t *data)
{
struct realtek_priv *priv = ds->priv;
- struct rtl8366_mib_counter *mib;
int i;
if (port >= priv->num_ports)
return;
- for (i = 0; i < priv->num_mib_counters; i++) {
- mib = &priv->mib_counters[i];
- strncpy(data + i * ETH_GSTRING_LEN,
- mib->name, ETH_GSTRING_LEN);
- }
+ for (i = 0; i < priv->num_mib_counters; i++)
+ ethtool_sprintf(&data, "%s", priv->mib_counters[i].name);
}
EXPORT_SYMBOL_GPL(rtl8366_get_strings);
diff --git a/drivers/net/dsa/realtek/rtl8366rb.c b/drivers/net/dsa/realtek/rtl8366rb.c
index 7868ef237f6c..b39b719a5b8f 100644
--- a/drivers/net/dsa/realtek/rtl8366rb.c
+++ b/drivers/net/dsa/realtek/rtl8366rb.c
@@ -95,12 +95,6 @@
#define RTL8366RB_PAACR_RX_PAUSE BIT(6)
#define RTL8366RB_PAACR_AN BIT(7)
-#define RTL8366RB_PAACR_CPU_PORT (RTL8366RB_PAACR_SPEED_1000M | \
- RTL8366RB_PAACR_FULL_DUPLEX | \
- RTL8366RB_PAACR_LINK_UP | \
- RTL8366RB_PAACR_TX_PAUSE | \
- RTL8366RB_PAACR_RX_PAUSE)
-
/* bits 0..7 = port 0, bits 8..15 = port 1 */
#define RTL8366RB_PSTAT0 0x0014
/* bits 0..7 = port 2, bits 8..15 = port 3 */
@@ -1081,29 +1075,61 @@ rtl8366rb_mac_link_up(struct dsa_switch *ds, int port, unsigned int mode,
int speed, int duplex, bool tx_pause, bool rx_pause)
{
struct realtek_priv *priv = ds->priv;
+ unsigned int val;
int ret;
+ /* Allow forcing the mode on the fixed CPU port, no autonegotiation.
+ * We assume autonegotiation works on the PHY-facing ports.
+ */
if (port != priv->cpu_port)
return;
dev_dbg(priv->dev, "MAC link up on CPU port (%d)\n", port);
- /* Force the fixed CPU port into 1Gbit mode, no autonegotiation */
ret = regmap_update_bits(priv->map, RTL8366RB_MAC_FORCE_CTRL_REG,
BIT(port), BIT(port));
if (ret) {
- dev_err(priv->dev, "failed to force 1Gbit on CPU port\n");
+ dev_err(priv->dev, "failed to force CPU port\n");
return;
}
+ /* Conjure port config */
+ switch (speed) {
+ case SPEED_10:
+ val = RTL8366RB_PAACR_SPEED_10M;
+ break;
+ case SPEED_100:
+ val = RTL8366RB_PAACR_SPEED_100M;
+ break;
+ case SPEED_1000:
+ val = RTL8366RB_PAACR_SPEED_1000M;
+ break;
+ default:
+ val = RTL8366RB_PAACR_SPEED_1000M;
+ break;
+ }
+
+ if (duplex == DUPLEX_FULL)
+ val |= RTL8366RB_PAACR_FULL_DUPLEX;
+
+ if (tx_pause)
+ val |= RTL8366RB_PAACR_TX_PAUSE;
+
+ if (rx_pause)
+ val |= RTL8366RB_PAACR_RX_PAUSE;
+
+ val |= RTL8366RB_PAACR_LINK_UP;
+
ret = regmap_update_bits(priv->map, RTL8366RB_PAACR2,
0xFF00U,
- RTL8366RB_PAACR_CPU_PORT << 8);
+ val << 8);
if (ret) {
dev_err(priv->dev, "failed to set PAACR on CPU port\n");
return;
}
+ dev_dbg(priv->dev, "set PAACR to %04x\n", val);
+
/* Enable the CPU port */
ret = regmap_update_bits(priv->map, RTL8366RB_PECR, BIT(port),
0);
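For the configuration phylink passes for this fixed CPU port in the common case (SPEED_1000, DUPLEX_FULL, tx_pause and rx_pause set), the value composed above reduces to the same bit set that the removed RTL8366RB_PAACR_CPU_PORT macro used to hard-code:

    val = RTL8366RB_PAACR_SPEED_1000M | RTL8366RB_PAACR_FULL_DUPLEX |
          RTL8366RB_PAACR_TX_PAUSE | RTL8366RB_PAACR_RX_PAUSE |
          RTL8366RB_PAACR_LINK_UP;

The difference is that slower or half-duplex links now get an accurate PAACR setting instead of always being forced to the 1 Gbit, full-duplex configuration.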
diff --git a/drivers/net/dsa/rzn1_a5psw.c b/drivers/net/dsa/rzn1_a5psw.c
index 2eda10b33f2e..10092ea85e46 100644
--- a/drivers/net/dsa/rzn1_a5psw.c
+++ b/drivers/net/dsa/rzn1_a5psw.c
@@ -1272,19 +1272,17 @@ free_pcs:
return ret;
}
-static int a5psw_remove(struct platform_device *pdev)
+static void a5psw_remove(struct platform_device *pdev)
{
struct a5psw *a5psw = platform_get_drvdata(pdev);
if (!a5psw)
- return 0;
+ return;
dsa_unregister_switch(&a5psw->ds);
a5psw_pcs_free(a5psw);
clk_disable_unprepare(a5psw->hclk);
clk_disable_unprepare(a5psw->clk);
-
- return 0;
}
static void a5psw_shutdown(struct platform_device *pdev)
@@ -1311,7 +1309,7 @@ static struct platform_driver a5psw_driver = {
.of_match_table = a5psw_of_mtable,
},
.probe = a5psw_probe,
- .remove = a5psw_remove,
+ .remove_new = a5psw_remove,
.shutdown = a5psw_shutdown,
};
module_platform_driver(a5psw_driver);
diff --git a/drivers/net/dsa/sja1105/sja1105_clocking.c b/drivers/net/dsa/sja1105/sja1105_clocking.c
index e3699f76f6d7..08a3e7b96254 100644
--- a/drivers/net/dsa/sja1105/sja1105_clocking.c
+++ b/drivers/net/dsa/sja1105/sja1105_clocking.c
@@ -153,14 +153,14 @@ static int sja1105_cgu_mii_tx_clk_config(struct sja1105_private *priv,
{
const struct sja1105_regs *regs = priv->info->regs;
struct sja1105_cgu_mii_ctrl mii_tx_clk;
- const int mac_clk_sources[] = {
+ static const int mac_clk_sources[] = {
CLKSRC_MII0_TX_CLK,
CLKSRC_MII1_TX_CLK,
CLKSRC_MII2_TX_CLK,
CLKSRC_MII3_TX_CLK,
CLKSRC_MII4_TX_CLK,
};
- const int phy_clk_sources[] = {
+ static const int phy_clk_sources[] = {
CLKSRC_IDIV0,
CLKSRC_IDIV1,
CLKSRC_IDIV2,
@@ -194,7 +194,7 @@ sja1105_cgu_mii_rx_clk_config(struct sja1105_private *priv, int port)
const struct sja1105_regs *regs = priv->info->regs;
struct sja1105_cgu_mii_ctrl mii_rx_clk;
u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
- const int clk_sources[] = {
+ static const int clk_sources[] = {
CLKSRC_MII0_RX_CLK,
CLKSRC_MII1_RX_CLK,
CLKSRC_MII2_RX_CLK,
@@ -221,7 +221,7 @@ sja1105_cgu_mii_ext_tx_clk_config(struct sja1105_private *priv, int port)
const struct sja1105_regs *regs = priv->info->regs;
struct sja1105_cgu_mii_ctrl mii_ext_tx_clk;
u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
- const int clk_sources[] = {
+ static const int clk_sources[] = {
CLKSRC_IDIV0,
CLKSRC_IDIV1,
CLKSRC_IDIV2,
@@ -248,7 +248,7 @@ sja1105_cgu_mii_ext_rx_clk_config(struct sja1105_private *priv, int port)
const struct sja1105_regs *regs = priv->info->regs;
struct sja1105_cgu_mii_ctrl mii_ext_rx_clk;
u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
- const int clk_sources[] = {
+ static const int clk_sources[] = {
CLKSRC_IDIV0,
CLKSRC_IDIV1,
CLKSRC_IDIV2,
@@ -349,8 +349,13 @@ static int sja1105_cgu_rgmii_tx_clk_config(struct sja1105_private *priv,
if (speed == priv->info->port_speed[SJA1105_SPEED_1000MBPS]) {
clksrc = CLKSRC_PLL0;
} else {
- int clk_sources[] = {CLKSRC_IDIV0, CLKSRC_IDIV1, CLKSRC_IDIV2,
- CLKSRC_IDIV3, CLKSRC_IDIV4};
+ static const int clk_sources[] = {
+ CLKSRC_IDIV0,
+ CLKSRC_IDIV1,
+ CLKSRC_IDIV2,
+ CLKSRC_IDIV3,
+ CLKSRC_IDIV4,
+ };
clksrc = clk_sources[port];
}
@@ -638,7 +643,7 @@ static int sja1105_cgu_rmii_ref_clk_config(struct sja1105_private *priv,
const struct sja1105_regs *regs = priv->info->regs;
struct sja1105_cgu_mii_ctrl ref_clk;
u8 packed_buf[SJA1105_SIZE_CGU_CMD] = {0};
- const int clk_sources[] = {
+ static const int clk_sources[] = {
CLKSRC_MII0_TX_CLK,
CLKSRC_MII1_TX_CLK,
CLKSRC_MII2_TX_CLK,
diff --git a/drivers/net/dsa/sja1105/sja1105_main.c b/drivers/net/dsa/sja1105/sja1105_main.c
index 1a367e64bc3b..74cee39d73df 100644
--- a/drivers/net/dsa/sja1105/sja1105_main.c
+++ b/drivers/net/dsa/sja1105/sja1105_main.c
@@ -2688,7 +2688,7 @@ static int sja1105_mgmt_xmit(struct dsa_switch *ds, int port, int slot,
}
/* Transfer skb to the host port. */
- dsa_enqueue_skb(skb, dsa_to_port(ds, port)->slave);
+ dsa_enqueue_skb(skb, dsa_to_port(ds, port)->user);
/* Wait until the switch has processed the frame */
do {
@@ -3081,7 +3081,7 @@ static int sja1105_port_bridge_flags(struct dsa_switch *ds, int port,
* ref_clk pin. So port clocking needs to be initialized early, before
* connecting to PHYs is attempted, otherwise they won't respond through MDIO.
* Setting correct PHY link speed does not matter now.
- * But dsa_slave_phy_setup is called later than sja1105_setup, so the PHY
+ * But dsa_user_phy_setup is called later than sja1105_setup, so the PHY
* bindings are not yet parsed by DSA core. We need to parse early so that we
* can populate the xMII mode parameters table.
*/
diff --git a/drivers/net/dsa/vitesse-vsc73xx-core.c b/drivers/net/dsa/vitesse-vsc73xx-core.c
index 4f09e7438f3b..e6f29e4e508c 100644
--- a/drivers/net/dsa/vitesse-vsc73xx-core.c
+++ b/drivers/net/dsa/vitesse-vsc73xx-core.c
@@ -928,7 +928,8 @@ static void vsc73xx_get_strings(struct dsa_switch *ds, int port, u32 stringset,
const struct vsc73xx_counter *cnt;
struct vsc73xx *vsc = ds->priv;
u8 indices[6];
- int i, j;
+ u8 *buf = data;
+ int i;
u32 val;
int ret;
@@ -948,10 +949,7 @@ static void vsc73xx_get_strings(struct dsa_switch *ds, int port, u32 stringset,
indices[5] = ((val >> 26) & 0x1f); /* TX counter 2 */
/* The first counter is the RX octets */
- j = 0;
- strncpy(data + j * ETH_GSTRING_LEN,
- "RxEtherStatsOctets", ETH_GSTRING_LEN);
- j++;
+ ethtool_sprintf(&buf, "RxEtherStatsOctets");
/* Each port supports recording 3 RX counters and 3 TX counters,
* figure out what counters we use in this set-up and return the
@@ -961,23 +959,16 @@ static void vsc73xx_get_strings(struct dsa_switch *ds, int port, u32 stringset,
*/
for (i = 0; i < 3; i++) {
cnt = vsc73xx_find_counter(vsc, indices[i], false);
- if (cnt)
- strncpy(data + j * ETH_GSTRING_LEN,
- cnt->name, ETH_GSTRING_LEN);
- j++;
+ ethtool_sprintf(&buf, "%s", cnt ? cnt->name : "");
}
/* TX stats begins with the number of TX octets */
- strncpy(data + j * ETH_GSTRING_LEN,
- "TxEtherStatsOctets", ETH_GSTRING_LEN);
- j++;
+ ethtool_sprintf(&buf, "TxEtherStatsOctets");
for (i = 3; i < 6; i++) {
cnt = vsc73xx_find_counter(vsc, indices[i], true);
- if (cnt)
- strncpy(data + j * ETH_GSTRING_LEN,
- cnt->name, ETH_GSTRING_LEN);
- j++;
+ ethtool_sprintf(&buf, "%s", cnt ? cnt->name : "");
}
}
@@ -1037,6 +1028,31 @@ static int vsc73xx_get_max_mtu(struct dsa_switch *ds, int port)
return 9600 - ETH_HLEN - ETH_FCS_LEN;
}
+static void vsc73xx_phylink_get_caps(struct dsa_switch *dsa, int port,
+ struct phylink_config *config)
+{
+ unsigned long *interfaces = config->supported_interfaces;
+
+ if (port == 5)
+ return;
+
+ if (port == CPU_PORT) {
+ __set_bit(PHY_INTERFACE_MODE_MII, interfaces);
+ __set_bit(PHY_INTERFACE_MODE_REVMII, interfaces);
+ __set_bit(PHY_INTERFACE_MODE_GMII, interfaces);
+ __set_bit(PHY_INTERFACE_MODE_RGMII, interfaces);
+ }
+
+ if (port <= 4) {
+ /* Internal PHYs */
+ __set_bit(PHY_INTERFACE_MODE_INTERNAL, interfaces);
+ /* phylib default */
+ __set_bit(PHY_INTERFACE_MODE_GMII, interfaces);
+ }
+
+ config->mac_capabilities = MAC_SYM_PAUSE | MAC_10 | MAC_100 | MAC_1000;
+}
+
static const struct dsa_switch_ops vsc73xx_ds_ops = {
.get_tag_protocol = vsc73xx_get_tag_protocol,
.setup = vsc73xx_setup,
@@ -1050,6 +1066,7 @@ static const struct dsa_switch_ops vsc73xx_ds_ops = {
.port_disable = vsc73xx_port_disable,
.port_change_mtu = vsc73xx_change_mtu,
.port_max_mtu = vsc73xx_get_max_mtu,
+ .phylink_get_caps = vsc73xx_phylink_get_caps,
};
static int vsc73xx_gpio_get(struct gpio_chip *chip, unsigned int offset)
diff --git a/drivers/net/dsa/vitesse-vsc73xx-platform.c b/drivers/net/dsa/vitesse-vsc73xx-platform.c
index bd4206e8f9af..755b7895a15a 100644
--- a/drivers/net/dsa/vitesse-vsc73xx-platform.c
+++ b/drivers/net/dsa/vitesse-vsc73xx-platform.c
@@ -112,16 +112,14 @@ static int vsc73xx_platform_probe(struct platform_device *pdev)
return vsc73xx_probe(&vsc_platform->vsc);
}
-static int vsc73xx_platform_remove(struct platform_device *pdev)
+static void vsc73xx_platform_remove(struct platform_device *pdev)
{
struct vsc73xx_platform *vsc_platform = platform_get_drvdata(pdev);
if (!vsc_platform)
- return 0;
+ return;
vsc73xx_remove(&vsc_platform->vsc);
-
- return 0;
}
static void vsc73xx_platform_shutdown(struct platform_device *pdev)
@@ -160,7 +158,7 @@ MODULE_DEVICE_TABLE(of, vsc73xx_of_match);
static struct platform_driver vsc73xx_platform_driver = {
.probe = vsc73xx_platform_probe,
- .remove = vsc73xx_platform_remove,
+ .remove_new = vsc73xx_platform_remove,
.shutdown = vsc73xx_platform_shutdown,
.driver = {
.name = "vsc73xx-platform",
diff --git a/drivers/net/dsa/xrs700x/xrs700x.c b/drivers/net/dsa/xrs700x/xrs700x.c
index 753fef757f11..96db032b478f 100644
--- a/drivers/net/dsa/xrs700x/xrs700x.c
+++ b/drivers/net/dsa/xrs700x/xrs700x.c
@@ -548,12 +548,13 @@ static void xrs700x_bridge_leave(struct dsa_switch *ds, int port,
}
static int xrs700x_hsr_join(struct dsa_switch *ds, int port,
- struct net_device *hsr)
+ struct net_device *hsr,
+ struct netlink_ext_ack *extack)
{
unsigned int val = XRS_HSR_CFG_HSR_PRP;
struct dsa_port *partner = NULL, *dp;
struct xrs700x *priv = ds->priv;
- struct net_device *slave;
+ struct net_device *user;
int ret, i, hsr_pair[2];
enum hsr_version ver;
bool fwd = false;
@@ -562,16 +563,21 @@ static int xrs700x_hsr_join(struct dsa_switch *ds, int port,
if (ret)
return ret;
- /* Only ports 1 and 2 can be HSR/PRP redundant ports. */
- if (port != 1 && port != 2)
+ if (port != 1 && port != 2) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Only ports 1 and 2 can offload HSR/PRP");
return -EOPNOTSUPP;
+ }
- if (ver == HSR_V1)
+ if (ver == HSR_V1) {
val |= XRS_HSR_CFG_HSR;
- else if (ver == PRP_V1)
+ } else if (ver == PRP_V1) {
val |= XRS_HSR_CFG_PRP;
- else
+ } else {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Only HSR v1 and PRP v1 can be offloaded");
return -EOPNOTSUPP;
+ }
dsa_hsr_foreach_port(dp, ds, hsr) {
if (dp->index != port) {
@@ -632,8 +638,8 @@ static int xrs700x_hsr_join(struct dsa_switch *ds, int port,
hsr_pair[0] = port;
hsr_pair[1] = partner->index;
for (i = 0; i < ARRAY_SIZE(hsr_pair); i++) {
- slave = dsa_to_port(ds, hsr_pair[i])->slave;
- slave->features |= XRS7000X_SUPPORTED_HSR_FEATURES;
+ user = dsa_to_port(ds, hsr_pair[i])->user;
+ user->features |= XRS7000X_SUPPORTED_HSR_FEATURES;
}
return 0;
@@ -644,7 +650,7 @@ static int xrs700x_hsr_leave(struct dsa_switch *ds, int port,
{
struct dsa_port *partner = NULL, *dp;
struct xrs700x *priv = ds->priv;
- struct net_device *slave;
+ struct net_device *user;
int i, hsr_pair[2];
unsigned int val;
@@ -686,8 +692,8 @@ static int xrs700x_hsr_leave(struct dsa_switch *ds, int port,
hsr_pair[0] = port;
hsr_pair[1] = partner->index;
for (i = 0; i < ARRAY_SIZE(hsr_pair); i++) {
- slave = dsa_to_port(ds, hsr_pair[i])->slave;
- slave->features &= ~XRS7000X_SUPPORTED_HSR_FEATURES;
+ user = dsa_to_port(ds, hsr_pair[i])->user;
+ user->features &= ~XRS7000X_SUPPORTED_HSR_FEATURES;
}
return 0;
diff --git a/drivers/net/ethernet/8390/ax88796.c b/drivers/net/ethernet/8390/ax88796.c
index af603256b724..2874680ef24d 100644
--- a/drivers/net/ethernet/8390/ax88796.c
+++ b/drivers/net/ethernet/8390/ax88796.c
@@ -811,7 +811,7 @@ static int ax_init_dev(struct net_device *dev)
return ret;
}
-static int ax_remove(struct platform_device *pdev)
+static void ax_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct ei_device *ei_local = netdev_priv(dev);
@@ -832,8 +832,6 @@ static int ax_remove(struct platform_device *pdev)
platform_set_drvdata(pdev, NULL);
free_netdev(dev);
-
- return 0;
}
/*
@@ -1011,7 +1009,7 @@ static struct platform_driver axdrv = {
.name = "ax88796",
},
.probe = ax_probe,
- .remove = ax_remove,
+ .remove_new = ax_remove,
.suspend = ax_suspend,
.resume = ax_resume,
};
diff --git a/drivers/net/ethernet/8390/mcf8390.c b/drivers/net/ethernet/8390/mcf8390.c
index 217838b28220..5a0fa995e643 100644
--- a/drivers/net/ethernet/8390/mcf8390.c
+++ b/drivers/net/ethernet/8390/mcf8390.c
@@ -441,7 +441,7 @@ static int mcf8390_probe(struct platform_device *pdev)
return 0;
}
-static int mcf8390_remove(struct platform_device *pdev)
+static void mcf8390_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct resource *mem;
@@ -450,7 +450,6 @@ static int mcf8390_remove(struct platform_device *pdev)
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
release_mem_region(mem->start, resource_size(mem));
free_netdev(dev);
- return 0;
}
static struct platform_driver mcf8390_drv = {
@@ -458,7 +457,7 @@ static struct platform_driver mcf8390_drv = {
.name = "mcf8390",
},
.probe = mcf8390_probe,
- .remove = mcf8390_remove,
+ .remove_new = mcf8390_remove,
};
module_platform_driver(mcf8390_drv);
diff --git a/drivers/net/ethernet/8390/ne.c b/drivers/net/ethernet/8390/ne.c
index 7d89ec1cf273..350683a09d2e 100644
--- a/drivers/net/ethernet/8390/ne.c
+++ b/drivers/net/ethernet/8390/ne.c
@@ -823,7 +823,7 @@ static int __init ne_drv_probe(struct platform_device *pdev)
return 0;
}
-static int ne_drv_remove(struct platform_device *pdev)
+static void ne_drv_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
@@ -842,7 +842,6 @@ static int ne_drv_remove(struct platform_device *pdev)
release_region(dev->base_addr, NE_IO_EXTENT);
free_netdev(dev);
}
- return 0;
}
/* Remove unused devices or all if true. */
@@ -895,7 +894,7 @@ static int ne_drv_resume(struct platform_device *pdev)
#endif
static struct platform_driver ne_driver = {
- .remove = ne_drv_remove,
+ .remove_new = ne_drv_remove,
.suspend = ne_drv_suspend,
.resume = ne_drv_resume,
.driver = {
diff --git a/drivers/net/ethernet/actions/owl-emac.c b/drivers/net/ethernet/actions/owl-emac.c
index c6f8f852bff1..e03193da5874 100644
--- a/drivers/net/ethernet/actions/owl-emac.c
+++ b/drivers/net/ethernet/actions/owl-emac.c
@@ -1582,15 +1582,13 @@ static int owl_emac_probe(struct platform_device *pdev)
return 0;
}
-static int owl_emac_remove(struct platform_device *pdev)
+static void owl_emac_remove(struct platform_device *pdev)
{
struct owl_emac_priv *priv = platform_get_drvdata(pdev);
netif_napi_del(&priv->napi);
phy_disconnect(priv->netdev->phydev);
cancel_work_sync(&priv->mac_reset_task);
-
- return 0;
}
static const struct of_device_id owl_emac_of_match[] = {
@@ -1609,7 +1607,7 @@ static struct platform_driver owl_emac_driver = {
.pm = &owl_emac_pm_ops,
},
.probe = owl_emac_probe,
- .remove = owl_emac_remove,
+ .remove_new = owl_emac_remove,
};
module_platform_driver(owl_emac_driver);
diff --git a/drivers/net/ethernet/aeroflex/greth.c b/drivers/net/ethernet/aeroflex/greth.c
index 597a02c75d52..27af7746d645 100644
--- a/drivers/net/ethernet/aeroflex/greth.c
+++ b/drivers/net/ethernet/aeroflex/greth.c
@@ -1525,7 +1525,7 @@ error1:
return err;
}
-static int greth_of_remove(struct platform_device *of_dev)
+static void greth_of_remove(struct platform_device *of_dev)
{
struct net_device *ndev = platform_get_drvdata(of_dev);
struct greth_private *greth = netdev_priv(ndev);
@@ -1544,8 +1544,6 @@ static int greth_of_remove(struct platform_device *of_dev)
of_iounmap(&of_dev->resource[0], greth->regs, resource_size(&of_dev->resource[0]));
free_netdev(ndev);
-
- return 0;
}
static const struct of_device_id greth_of_match[] = {
@@ -1566,7 +1564,7 @@ static struct platform_driver greth_of_driver = {
.of_match_table = greth_of_match,
},
.probe = greth_of_probe,
- .remove = greth_of_remove,
+ .remove_new = greth_of_remove,
};
module_platform_driver(greth_of_driver);
diff --git a/drivers/net/ethernet/allwinner/sun4i-emac.c b/drivers/net/ethernet/allwinner/sun4i-emac.c
index a94c62956eed..d761c08fe5c1 100644
--- a/drivers/net/ethernet/allwinner/sun4i-emac.c
+++ b/drivers/net/ethernet/allwinner/sun4i-emac.c
@@ -1083,7 +1083,7 @@ out:
return ret;
}
-static int emac_remove(struct platform_device *pdev)
+static void emac_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct emac_board_info *db = netdev_priv(ndev);
@@ -1101,7 +1101,6 @@ static int emac_remove(struct platform_device *pdev)
free_netdev(ndev);
dev_dbg(&pdev->dev, "released and freed device\n");
- return 0;
}
static int emac_suspend(struct platform_device *dev, pm_message_t state)
@@ -1143,7 +1142,7 @@ static struct platform_driver emac_driver = {
.of_match_table = emac_of_match,
},
.probe = emac_probe,
- .remove = emac_remove,
+ .remove_new = emac_remove,
.suspend = emac_suspend,
.resume = emac_resume,
};
diff --git a/drivers/net/ethernet/altera/altera_tse.h b/drivers/net/ethernet/altera/altera_tse.h
index db5eed06e92d..82f2363a45cd 100644
--- a/drivers/net/ethernet/altera/altera_tse.h
+++ b/drivers/net/ethernet/altera/altera_tse.h
@@ -472,7 +472,7 @@ struct altera_tse_private {
/* ethtool msglvl option */
u32 msg_enable;
- struct altera_dmaops *dmaops;
+ const struct altera_dmaops *dmaops;
struct phylink *phylink;
struct phylink_config phylink_config;
diff --git a/drivers/net/ethernet/altera/altera_tse_main.c b/drivers/net/ethernet/altera/altera_tse_main.c
index 2e15800e5310..1c8763be0e4b 100644
--- a/drivers/net/ethernet/altera/altera_tse_main.c
+++ b/drivers/net/ethernet/altera/altera_tse_main.c
@@ -29,13 +29,13 @@
#include <linux/mii.h>
#include <linux/mdio/mdio-regmap.h>
#include <linux/netdevice.h>
-#include <linux/of_device.h>
+#include <linux/of.h>
#include <linux/of_mdio.h>
#include <linux/of_net.h>
-#include <linux/of_platform.h>
#include <linux/pcs-lynx.h>
#include <linux/phy.h>
#include <linux/platform_device.h>
+#include <linux/property.h>
#include <linux/regmap.h>
#include <linux/skbuff.h>
#include <asm/cacheflush.h>
@@ -82,8 +82,6 @@ MODULE_PARM_DESC(dma_tx_num, "Number of descriptors in the TX list");
#define TXQUEUESTOP_THRESHHOLD 2
-static const struct of_device_id altera_tse_ids[];
-
static inline u32 tse_tx_avail(struct altera_tse_private *priv)
{
return priv->tx_cons + priv->tx_ring_size - priv->tx_prod - 1;
@@ -1133,7 +1131,6 @@ static int request_and_map(struct platform_device *pdev, const char *name,
*/
static int altera_tse_probe(struct platform_device *pdev)
{
- const struct of_device_id *of_id = NULL;
struct regmap_config pcs_regmap_cfg;
struct altera_tse_private *priv;
struct mdio_regmap_config mrc;
@@ -1159,11 +1156,7 @@ static int altera_tse_probe(struct platform_device *pdev)
priv->dev = ndev;
priv->msg_enable = netif_msg_init(debug, default_msg_level);
- of_id = of_match_device(altera_tse_ids, &pdev->dev);
-
- if (of_id)
- priv->dmaops = (struct altera_dmaops *)of_id->data;
-
+ priv->dmaops = device_get_match_data(&pdev->dev);
if (priv->dmaops &&
priv->dmaops->altera_dtype == ALTERA_DTYPE_SGDMA) {
@@ -1464,7 +1457,7 @@ err_free_netdev:
/* Remove Altera TSE MAC device
*/
-static int altera_tse_remove(struct platform_device *pdev)
+static void altera_tse_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct altera_tse_private *priv = netdev_priv(ndev);
@@ -1476,8 +1469,6 @@ static int altera_tse_remove(struct platform_device *pdev)
lynx_pcs_destroy(priv->pcs);
free_netdev(ndev);
-
- return 0;
}
static const struct altera_dmaops altera_dtype_sgdma = {
@@ -1528,7 +1519,7 @@ MODULE_DEVICE_TABLE(of, altera_tse_ids);
static struct platform_driver altera_tse_driver = {
.probe = altera_tse_probe,
- .remove = altera_tse_remove,
+ .remove_new = altera_tse_remove,
.suspend = NULL,
.resume = NULL,
.driver = {
diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
index f955bde10cf9..b5bca4814830 100644
--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
+++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
@@ -1828,7 +1828,7 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
}
if (xdp_flags & ENA_XDP_REDIRECT)
- xdp_do_flush_map();
+ xdp_do_flush();
return work_done;
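The one-line change above is part of a rename across drivers: xdp_do_flush_map() becomes xdp_do_flush(). The usual calling convention, sketched here with invented names, is to track whether any frame was redirected during the RX loop and flush once at the end of the NAPI poll.

#include <linux/filter.h>	/* xdp_do_flush() */

/* Tail of a driver's NAPI poll handler (sketch): flush pending
 * XDP_REDIRECT work exactly once per poll, after the RX loop.
 */
static int foo_napi_poll_tail(bool redirected, int work_done)
{
	if (redirected)
		xdp_do_flush();

	return work_done;
}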
diff --git a/drivers/net/ethernet/amd/au1000_eth.c b/drivers/net/ethernet/amd/au1000_eth.c
index c5cec4e79489..85c978149bf6 100644
--- a/drivers/net/ethernet/amd/au1000_eth.c
+++ b/drivers/net/ethernet/amd/au1000_eth.c
@@ -1323,7 +1323,7 @@ out:
return err;
}
-static int au1000_remove(struct platform_device *pdev)
+static void au1000_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct au1000_private *aup = netdev_priv(dev);
@@ -1359,13 +1359,11 @@ static int au1000_remove(struct platform_device *pdev)
release_mem_region(macen->start, resource_size(macen));
free_netdev(dev);
-
- return 0;
}
static struct platform_driver au1000_eth_driver = {
.probe = au1000_probe,
- .remove = au1000_remove,
+ .remove_new = au1000_remove,
.driver = {
.name = "au1000-eth",
},
diff --git a/drivers/net/ethernet/amd/pds_core/core.c b/drivers/net/ethernet/amd/pds_core/core.c
index 36f9b932b9e2..0d2091e9eb28 100644
--- a/drivers/net/ethernet/amd/pds_core/core.c
+++ b/drivers/net/ethernet/amd/pds_core/core.c
@@ -152,11 +152,8 @@ void pdsc_qcq_free(struct pdsc *pdsc, struct pdsc_qcq *qcq)
dma_free_coherent(dev, qcq->cq_size,
qcq->cq_base, qcq->cq_base_pa);
- if (qcq->cq.info)
- vfree(qcq->cq.info);
-
- if (qcq->q.info)
- vfree(qcq->q.info);
+ vfree(qcq->cq.info);
+ vfree(qcq->q.info);
memset(qcq, 0, sizeof(*qcq));
}
@@ -445,12 +442,13 @@ int pdsc_setup(struct pdsc *pdsc, bool init)
goto err_out_teardown;
/* Set up the VIFs */
- err = pdsc_viftypes_init(pdsc);
- if (err)
- goto err_out_teardown;
+ if (init) {
+ err = pdsc_viftypes_init(pdsc);
+ if (err)
+ goto err_out_teardown;
- if (init)
pdsc_debugfs_add_viftype(pdsc);
+ }
clear_bit(PDSC_S_FW_DEAD, &pdsc->state);
return 0;
@@ -469,8 +467,10 @@ void pdsc_teardown(struct pdsc *pdsc, bool removing)
pdsc_qcq_free(pdsc, &pdsc->notifyqcq);
pdsc_qcq_free(pdsc, &pdsc->adminqcq);
- kfree(pdsc->viftype_status);
- pdsc->viftype_status = NULL;
+ if (removing) {
+ kfree(pdsc->viftype_status);
+ pdsc->viftype_status = NULL;
+ }
if (pdsc->intr_info) {
for (i = 0; i < pdsc->nintrs; i++)
@@ -512,7 +512,7 @@ void pdsc_stop(struct pdsc *pdsc)
PDS_CORE_INTR_MASK_SET);
}
-static void pdsc_fw_down(struct pdsc *pdsc)
+void pdsc_fw_down(struct pdsc *pdsc)
{
union pds_core_notifyq_comp reset_event = {
.reset.ecode = cpu_to_le16(PDS_EVENT_RESET),
@@ -520,10 +520,13 @@ static void pdsc_fw_down(struct pdsc *pdsc)
};
if (test_and_set_bit(PDSC_S_FW_DEAD, &pdsc->state)) {
- dev_err(pdsc->dev, "%s: already happening\n", __func__);
+ dev_warn(pdsc->dev, "%s: already happening\n", __func__);
return;
}
+ if (pdsc->pdev->is_virtfn)
+ return;
+
/* Notify clients of fw_down */
if (pdsc->fw_reporter)
devlink_health_report(pdsc->fw_reporter, "FW down reported", pdsc);
@@ -533,7 +536,7 @@ static void pdsc_fw_down(struct pdsc *pdsc)
pdsc_teardown(pdsc, PDSC_TEARDOWN_RECOVERY);
}
-static void pdsc_fw_up(struct pdsc *pdsc)
+void pdsc_fw_up(struct pdsc *pdsc)
{
union pds_core_notifyq_comp reset_event = {
.reset.ecode = cpu_to_le16(PDS_EVENT_RESET),
@@ -546,6 +549,11 @@ static void pdsc_fw_up(struct pdsc *pdsc)
return;
}
+ if (pdsc->pdev->is_virtfn) {
+ clear_bit(PDSC_S_FW_DEAD, &pdsc->state);
+ return;
+ }
+
err = pdsc_setup(pdsc, PDSC_SETUP_RECOVERY);
if (err)
goto err_out;
@@ -567,6 +575,18 @@ err_out:
pdsc_teardown(pdsc, PDSC_TEARDOWN_RECOVERY);
}
+static void pdsc_check_pci_health(struct pdsc *pdsc)
+{
+ u8 fw_status = ioread8(&pdsc->info_regs->fw_status);
+
+ /* is PCI broken? */
+ if (fw_status != PDS_RC_BAD_PCI)
+ return;
+
+ pdsc_reset_prepare(pdsc->pdev);
+ pdsc_reset_done(pdsc->pdev);
+}
+
void pdsc_health_thread(struct work_struct *work)
{
struct pdsc *pdsc = container_of(work, struct pdsc, health_work);
@@ -593,6 +613,8 @@ void pdsc_health_thread(struct work_struct *work)
pdsc_fw_down(pdsc);
}
+ pdsc_check_pci_health(pdsc);
+
pdsc->fw_generation = pdsc->fw_status & PDS_CORE_FW_STS_F_GENERATION;
out_unlock:
diff --git a/drivers/net/ethernet/amd/pds_core/core.h b/drivers/net/ethernet/amd/pds_core/core.h
index e545fafc4819..f3a7deda9972 100644
--- a/drivers/net/ethernet/amd/pds_core/core.h
+++ b/drivers/net/ethernet/amd/pds_core/core.h
@@ -283,6 +283,9 @@ int pdsc_devcmd_reset(struct pdsc *pdsc);
int pdsc_dev_reinit(struct pdsc *pdsc);
int pdsc_dev_init(struct pdsc *pdsc);
+void pdsc_reset_prepare(struct pci_dev *pdev);
+void pdsc_reset_done(struct pci_dev *pdev);
+
int pdsc_intr_alloc(struct pdsc *pdsc, char *name,
irq_handler_t handler, void *data);
void pdsc_intr_free(struct pdsc *pdsc, int index);
@@ -309,4 +312,8 @@ irqreturn_t pdsc_adminq_isr(int irq, void *data);
int pdsc_firmware_update(struct pdsc *pdsc, const struct firmware *fw,
struct netlink_ext_ack *extack);
+
+void pdsc_fw_down(struct pdsc *pdsc);
+void pdsc_fw_up(struct pdsc *pdsc);
+
#endif /* _PDSC_H_ */
diff --git a/drivers/net/ethernet/amd/pds_core/dev.c b/drivers/net/ethernet/amd/pds_core/dev.c
index f77cd9f5a2fd..7c1b965d61a9 100644
--- a/drivers/net/ethernet/amd/pds_core/dev.c
+++ b/drivers/net/ethernet/amd/pds_core/dev.c
@@ -42,6 +42,8 @@ int pdsc_err_to_errno(enum pds_core_status_code code)
return -ERANGE;
case PDS_RC_BAD_ADDR:
return -EFAULT;
+ case PDS_RC_BAD_PCI:
+ return -ENXIO;
case PDS_RC_EOPCODE:
case PDS_RC_EINTR:
case PDS_RC_DEV_CMD:
@@ -62,7 +64,7 @@ bool pdsc_is_fw_running(struct pdsc *pdsc)
/* Firmware is useful only if the running bit is set and
* fw_status != 0xff (bad PCI read)
*/
- return (pdsc->fw_status != 0xff) &&
+ return (pdsc->fw_status != PDS_RC_BAD_PCI) &&
(pdsc->fw_status & PDS_CORE_FW_STS_F_RUNNING);
}
@@ -128,6 +130,7 @@ static int pdsc_devcmd_wait(struct pdsc *pdsc, u8 opcode, int max_seconds)
unsigned long max_wait;
unsigned long duration;
int timeout = 0;
+ bool running;
int done = 0;
int err = 0;
int status;
@@ -136,6 +139,10 @@ static int pdsc_devcmd_wait(struct pdsc *pdsc, u8 opcode, int max_seconds)
max_wait = start_time + (max_seconds * HZ);
while (!done && !timeout) {
+ running = pdsc_is_fw_running(pdsc);
+ if (!running)
+ break;
+
done = pdsc_devcmd_done(pdsc);
if (done)
break;
@@ -152,7 +159,7 @@ static int pdsc_devcmd_wait(struct pdsc *pdsc, u8 opcode, int max_seconds)
dev_dbg(dev, "DEVCMD %d %s after %ld secs\n",
opcode, pdsc_devcmd_str(opcode), duration / HZ);
- if (!done || timeout) {
+ if ((!done || timeout) && running) {
dev_err(dev, "DEVCMD %d %s timeout, done %d timeout %d max_seconds=%d\n",
opcode, pdsc_devcmd_str(opcode), done, timeout,
max_seconds);
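The devcmd change above adds a liveness check to the wait loop so a dead firmware does not hold the caller for the full timeout, and the timeout error is only logged when the device was actually alive. In outline it is a bounded poll with an early bail-out; the sketch below reuses the pdsc helpers visible in this file and is not the driver's exact code.

#include <linux/delay.h>
#include <linux/jiffies.h>
/* assumes the driver's core.h for struct pdsc and the helpers used below */

static int poll_until_done(struct pdsc *pdsc, int max_seconds)
{
	unsigned long deadline = jiffies + max_seconds * HZ;
	bool running = true;

	while (time_before(jiffies, deadline)) {
		running = pdsc_is_fw_running(pdsc);
		if (!running)
			break;			/* no point waiting on dead FW */
		if (pdsc_devcmd_done(pdsc))
			return 0;
		msleep(20);
	}

	if (running)				/* only complain if FW was alive */
		dev_err(pdsc->dev, "devcmd timeout\n");
	return -ETIMEDOUT;
}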
diff --git a/drivers/net/ethernet/amd/pds_core/devlink.c b/drivers/net/ethernet/amd/pds_core/devlink.c
index d9607033bbf2..57f88c8b37de 100644
--- a/drivers/net/ethernet/amd/pds_core/devlink.c
+++ b/drivers/net/ethernet/amd/pds_core/devlink.c
@@ -124,6 +124,8 @@ int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
snprintf(buf, sizeof(buf), "fw.slot_%d", i);
err = devlink_info_version_stored_put(req, buf,
fw_list.fw_names[i].fw_version);
+ if (err)
+ return err;
}
err = devlink_info_version_running_put(req,
@@ -154,33 +156,20 @@ int pdsc_fw_reporter_diagnose(struct devlink_health_reporter *reporter,
struct netlink_ext_ack *extack)
{
struct pdsc *pdsc = devlink_health_reporter_priv(reporter);
- int err;
mutex_lock(&pdsc->config_lock);
-
if (test_bit(PDSC_S_FW_DEAD, &pdsc->state))
- err = devlink_fmsg_string_pair_put(fmsg, "Status", "dead");
+ devlink_fmsg_string_pair_put(fmsg, "Status", "dead");
else if (!pdsc_is_fw_good(pdsc))
- err = devlink_fmsg_string_pair_put(fmsg, "Status", "unhealthy");
+ devlink_fmsg_string_pair_put(fmsg, "Status", "unhealthy");
else
- err = devlink_fmsg_string_pair_put(fmsg, "Status", "healthy");
-
+ devlink_fmsg_string_pair_put(fmsg, "Status", "healthy");
mutex_unlock(&pdsc->config_lock);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "State",
- pdsc->fw_status &
- ~PDS_CORE_FW_STS_F_GENERATION);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "Generation",
- pdsc->fw_generation >> 4);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "State",
+ pdsc->fw_status & ~PDS_CORE_FW_STS_F_GENERATION);
+ devlink_fmsg_u32_pair_put(fmsg, "Generation", pdsc->fw_generation >> 4);
+ devlink_fmsg_u32_pair_put(fmsg, "Recoveries", pdsc->fw_recoveries);
- return devlink_fmsg_u32_pair_put(fmsg, "Recoveries",
- pdsc->fw_recoveries);
+ return 0;
}
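The rewrite above relies on the devlink_fmsg_* formatting helpers returning void in this cycle, which removes the per-call error plumbing from health-reporter callbacks. A minimal diagnose callback in the new style is sketched below; everything other than the devlink API names is invented.

#include <net/devlink.h>

struct foo_priv {
	u32 fw_status;
	u32 recoveries;
};

static int foo_fw_reporter_diagnose(struct devlink_health_reporter *reporter,
				    struct devlink_fmsg *fmsg,
				    struct netlink_ext_ack *extack)
{
	struct foo_priv *priv = devlink_health_reporter_priv(reporter);

	/* The fmsg helpers no longer return an error code. */
	devlink_fmsg_string_pair_put(fmsg, "Status",
				     priv->fw_status ? "healthy" : "dead");
	devlink_fmsg_u32_pair_put(fmsg, "Recoveries", priv->recoveries);

	return 0;
}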
diff --git a/drivers/net/ethernet/amd/pds_core/main.c b/drivers/net/ethernet/amd/pds_core/main.c
index 3a45bf474a19..3080898d7b95 100644
--- a/drivers/net/ethernet/amd/pds_core/main.c
+++ b/drivers/net/ethernet/amd/pds_core/main.c
@@ -445,12 +445,62 @@ static void pdsc_remove(struct pci_dev *pdev)
devlink_free(dl);
}
+void pdsc_reset_prepare(struct pci_dev *pdev)
+{
+ struct pdsc *pdsc = pci_get_drvdata(pdev);
+
+ pdsc_fw_down(pdsc);
+
+ pci_free_irq_vectors(pdev);
+ pdsc_unmap_bars(pdsc);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+}
+
+void pdsc_reset_done(struct pci_dev *pdev)
+{
+ struct pdsc *pdsc = pci_get_drvdata(pdev);
+ struct device *dev = pdsc->dev;
+ int err;
+
+ err = pci_enable_device(pdev);
+ if (err) {
+ dev_err(dev, "Cannot enable PCI device: %pe\n", ERR_PTR(err));
+ return;
+ }
+ pci_set_master(pdev);
+
+ if (!pdev->is_virtfn) {
+ pcie_print_link_status(pdsc->pdev);
+
+ err = pci_request_regions(pdsc->pdev, PDS_CORE_DRV_NAME);
+ if (err) {
+ dev_err(pdsc->dev, "Cannot request PCI regions: %pe\n",
+ ERR_PTR(err));
+ return;
+ }
+
+ err = pdsc_map_bars(pdsc);
+ if (err)
+ return;
+ }
+
+ pdsc_fw_up(pdsc);
+}
+
+static const struct pci_error_handlers pdsc_err_handler = {
+ /* FLR handling */
+ .reset_prepare = pdsc_reset_prepare,
+ .reset_done = pdsc_reset_done,
+};
+
static struct pci_driver pdsc_driver = {
.name = PDS_CORE_DRV_NAME,
.id_table = pdsc_id_table,
.probe = pdsc_probe,
.remove = pdsc_remove,
.sriov_configure = pdsc_sriov_configure,
+ .err_handler = &pdsc_err_handler,
};
void *pdsc_get_pf_struct(struct pci_dev *vf_pdev)
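The new pdsc_reset_prepare()/pdsc_reset_done() pair above is hooked into the PCI core through struct pci_error_handlers, so a function-level reset (FLR) quiesces the device before the reset and restores it afterwards. The generic shape of that wiring is sketched below with foo_* placeholders and deliberately trivial prepare/done bodies.

#include <linux/pci.h>

static const struct pci_device_id foo_id_table[] = {
	{ PCI_DEVICE(0x1234, 0x5678) },	/* placeholder IDs */
	{ }
};

static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	return pci_enable_device(pdev);
}

static void foo_remove(struct pci_dev *pdev)
{
	pci_disable_device(pdev);
}

static void foo_reset_prepare(struct pci_dev *pdev)
{
	/* quiesce the device: stop queues, free IRQs, unmap BARs, ... */
}

static void foo_reset_done(struct pci_dev *pdev)
{
	/* re-enable the device and restore state after the reset */
}

static const struct pci_error_handlers foo_err_handler = {
	.reset_prepare = foo_reset_prepare,
	.reset_done    = foo_reset_done,
};

static struct pci_driver foo_driver = {
	.name        = "foo",
	.id_table    = foo_id_table,
	.probe       = foo_probe,
	.remove      = foo_remove,
	.err_handler = &foo_err_handler,
};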
diff --git a/drivers/net/ethernet/amd/sunlance.c b/drivers/net/ethernet/amd/sunlance.c
index 33bb539ad70a..c78706d21a6a 100644
--- a/drivers/net/ethernet/amd/sunlance.c
+++ b/drivers/net/ethernet/amd/sunlance.c
@@ -1487,7 +1487,7 @@ static int sunlance_sbus_probe(struct platform_device *op)
return err;
}
-static int sunlance_sbus_remove(struct platform_device *op)
+static void sunlance_sbus_remove(struct platform_device *op)
{
struct lance_private *lp = platform_get_drvdata(op);
struct net_device *net_dev = lp->dev;
@@ -1497,8 +1497,6 @@ static int sunlance_sbus_remove(struct platform_device *op)
lance_free_hwresources(lp);
free_netdev(net_dev);
-
- return 0;
}
static const struct of_device_id sunlance_sbus_match[] = {
@@ -1516,7 +1514,7 @@ static struct platform_driver sunlance_sbus_driver = {
.of_match_table = sunlance_sbus_match,
},
.probe = sunlance_sbus_probe,
- .remove = sunlance_sbus_remove,
+ .remove_new = sunlance_sbus_remove,
};
module_platform_driver(sunlance_sbus_driver);
diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-platform.c b/drivers/net/ethernet/amd/xgbe/xgbe-platform.c
index 4d790a89fe77..9131020d06af 100644
--- a/drivers/net/ethernet/amd/xgbe/xgbe-platform.c
+++ b/drivers/net/ethernet/amd/xgbe/xgbe-platform.c
@@ -123,9 +123,7 @@
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_net.h>
-#include <linux/of_address.h>
#include <linux/of_platform.h>
-#include <linux/of_device.h>
#include <linux/clk.h>
#include <linux/property.h>
#include <linux/acpi.h>
@@ -135,17 +133,6 @@
#include "xgbe-common.h"
#ifdef CONFIG_ACPI
-static const struct acpi_device_id xgbe_acpi_match[];
-
-static struct xgbe_version_data *xgbe_acpi_vdata(struct xgbe_prv_data *pdata)
-{
- const struct acpi_device_id *id;
-
- id = acpi_match_device(xgbe_acpi_match, pdata->dev);
-
- return id ? (struct xgbe_version_data *)id->driver_data : NULL;
-}
-
static int xgbe_acpi_support(struct xgbe_prv_data *pdata)
{
struct device *dev = pdata->dev;
@@ -173,11 +160,6 @@ static int xgbe_acpi_support(struct xgbe_prv_data *pdata)
return 0;
}
#else /* CONFIG_ACPI */
-static struct xgbe_version_data *xgbe_acpi_vdata(struct xgbe_prv_data *pdata)
-{
- return NULL;
-}
-
static int xgbe_acpi_support(struct xgbe_prv_data *pdata)
{
return -EINVAL;
@@ -185,17 +167,6 @@ static int xgbe_acpi_support(struct xgbe_prv_data *pdata)
#endif /* CONFIG_ACPI */
#ifdef CONFIG_OF
-static const struct of_device_id xgbe_of_match[];
-
-static struct xgbe_version_data *xgbe_of_vdata(struct xgbe_prv_data *pdata)
-{
- const struct of_device_id *id;
-
- id = of_match_device(xgbe_of_match, pdata->dev);
-
- return id ? (struct xgbe_version_data *)id->data : NULL;
-}
-
static int xgbe_of_support(struct xgbe_prv_data *pdata)
{
struct device *dev = pdata->dev;
@@ -244,11 +215,6 @@ static struct platform_device *xgbe_of_get_phy_pdev(struct xgbe_prv_data *pdata)
return phy_pdev;
}
#else /* CONFIG_OF */
-static struct xgbe_version_data *xgbe_of_vdata(struct xgbe_prv_data *pdata)
-{
- return NULL;
-}
-
static int xgbe_of_support(struct xgbe_prv_data *pdata)
{
return -EINVAL;
@@ -290,12 +256,6 @@ static struct platform_device *xgbe_get_phy_pdev(struct xgbe_prv_data *pdata)
return phy_pdev;
}
-static struct xgbe_version_data *xgbe_get_vdata(struct xgbe_prv_data *pdata)
-{
- return pdata->use_acpi ? xgbe_acpi_vdata(pdata)
- : xgbe_of_vdata(pdata);
-}
-
static int xgbe_platform_probe(struct platform_device *pdev)
{
struct xgbe_prv_data *pdata;
@@ -321,7 +281,7 @@ static int xgbe_platform_probe(struct platform_device *pdev)
pdata->use_acpi = dev->of_node ? 0 : 1;
/* Get the version data */
- pdata->vdata = xgbe_get_vdata(pdata);
+ pdata->vdata = (struct xgbe_version_data *)device_get_match_data(dev);
phy_pdev = xgbe_get_phy_pdev(pdata);
if (!phy_pdev) {
@@ -512,7 +472,7 @@ err_alloc:
return ret;
}
-static int xgbe_platform_remove(struct platform_device *pdev)
+static void xgbe_platform_remove(struct platform_device *pdev)
{
struct xgbe_prv_data *pdata = platform_get_drvdata(pdev);
@@ -521,8 +481,6 @@ static int xgbe_platform_remove(struct platform_device *pdev)
platform_device_put(pdata->phy_platdev);
xgbe_free_pdata(pdata);
-
- return 0;
}
#ifdef CONFIG_PM_SLEEP
@@ -615,7 +573,7 @@ static struct platform_driver xgbe_driver = {
.pm = &xgbe_platform_pm_ops,
},
.probe = xgbe_platform_probe,
- .remove = xgbe_platform_remove,
+ .remove_new = xgbe_platform_remove,
};
int xgbe_platform_init(void)
diff --git a/drivers/net/ethernet/apm/xgene-v2/main.c b/drivers/net/ethernet/apm/xgene-v2/main.c
index 379d19d18dbe..9e90c2381491 100644
--- a/drivers/net/ethernet/apm/xgene-v2/main.c
+++ b/drivers/net/ethernet/apm/xgene-v2/main.c
@@ -690,7 +690,7 @@ err:
return ret;
}
-static int xge_remove(struct platform_device *pdev)
+static void xge_remove(struct platform_device *pdev)
{
struct xge_pdata *pdata;
struct net_device *ndev;
@@ -706,8 +706,6 @@ static int xge_remove(struct platform_device *pdev)
xge_mdio_remove(ndev);
unregister_netdev(ndev);
free_netdev(ndev);
-
- return 0;
}
static void xge_shutdown(struct platform_device *pdev)
@@ -736,7 +734,7 @@ static struct platform_driver xge_driver = {
.acpi_match_table = ACPI_PTR(xge_acpi_match),
},
.probe = xge_probe,
- .remove = xge_remove,
+ .remove_new = xge_remove,
.shutdown = xge_shutdown,
};
module_platform_driver(xge_driver);
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
index f3305c434c95..44900026d11b 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
@@ -2018,7 +2018,6 @@ static int xgene_enet_probe(struct platform_device *pdev)
struct xgene_enet_pdata *pdata;
struct device *dev = &pdev->dev;
void (*link_state)(struct work_struct *);
- const struct of_device_id *of_id;
int ret;
ndev = alloc_etherdev_mqs(sizeof(struct xgene_enet_pdata),
@@ -2039,19 +2038,7 @@ static int xgene_enet_probe(struct platform_device *pdev)
NETIF_F_GRO |
NETIF_F_SG;
- of_id = of_match_device(xgene_enet_of_match, &pdev->dev);
- if (of_id) {
- pdata->enet_id = (uintptr_t)of_id->data;
- }
-#ifdef CONFIG_ACPI
- else {
- const struct acpi_device_id *acpi_id;
-
- acpi_id = acpi_match_device(xgene_enet_acpi_match, &pdev->dev);
- if (acpi_id)
- pdata->enet_id = (enum xgene_enet_id) acpi_id->driver_data;
- }
-#endif
+ pdata->enet_id = (enum xgene_enet_id)device_get_match_data(&pdev->dev);
if (!pdata->enet_id) {
ret = -ENODEV;
goto err;
@@ -2127,7 +2114,7 @@ err:
return ret;
}
-static int xgene_enet_remove(struct platform_device *pdev)
+static void xgene_enet_remove(struct platform_device *pdev)
{
struct xgene_enet_pdata *pdata;
struct net_device *ndev;
@@ -2149,8 +2136,6 @@ static int xgene_enet_remove(struct platform_device *pdev)
xgene_enet_delete_desc_rings(pdata);
pdata->port_ops->shutdown(pdata);
free_netdev(ndev);
-
- return 0;
}
static void xgene_enet_shutdown(struct platform_device *pdev)
@@ -2174,7 +2159,7 @@ static struct platform_driver xgene_enet_driver = {
.acpi_match_table = ACPI_PTR(xgene_enet_acpi_match),
},
.probe = xgene_enet_probe,
- .remove = xgene_enet_remove,
+ .remove_new = xgene_enet_remove,
.shutdown = xgene_enet_shutdown,
};
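The xgene hunk above, like the earlier altera and xgbe ones, replaces open-coded of_match_device()/acpi_match_device() lookups with device_get_match_data(), which returns the per-variant match data regardless of whether the device was bound via OF or ACPI. A condensed sketch of the pattern, with hypothetical foo_* names:

#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/property.h>	/* device_get_match_data() */

struct foo_data {
	int variant;
};

static const struct foo_data foo_v1_data = { .variant = 1 };

static const struct of_device_id foo_of_match[] = {
	{ .compatible = "vendor,foo-v1", .data = &foo_v1_data },
	{ }
};
MODULE_DEVICE_TABLE(of, foo_of_match);

static int foo_probe(struct platform_device *pdev)
{
	const struct foo_data *data;

	/* Works whether the device was matched via OF or ACPI. */
	data = device_get_match_data(&pdev->dev);
	if (!data)
		return -ENODEV;

	return 0;
}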
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.h b/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
index 643f5e646740..bce2c19e3f22 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
@@ -15,9 +15,10 @@
#include <linux/efi.h>
#include <linux/irq.h>
#include <linux/io.h>
-#include <linux/of_platform.h>
+#include <linux/of.h>
#include <linux/of_net.h>
#include <linux/of_mdio.h>
+#include <linux/platform_device.h>
#include <linux/mdio/mdio-xgene.h>
#include <linux/module.h>
#include <net/ip.h>
diff --git a/drivers/net/ethernet/apple/macmace.c b/drivers/net/ethernet/apple/macmace.c
index 8775c3234e91..766ab78256fe 100644
--- a/drivers/net/ethernet/apple/macmace.c
+++ b/drivers/net/ethernet/apple/macmace.c
@@ -739,7 +739,7 @@ MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Macintosh MACE ethernet driver");
MODULE_ALIAS("platform:macmace");
-static int mac_mace_device_remove(struct platform_device *pdev)
+static void mac_mace_device_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct mace_data *mp = netdev_priv(dev);
@@ -755,13 +755,11 @@ static int mac_mace_device_remove(struct platform_device *pdev)
mp->tx_ring, mp->tx_ring_phys);
free_netdev(dev);
-
- return 0;
}
static struct platform_driver mac_mace_driver = {
.probe = mace_probe,
- .remove = mac_mace_device_remove,
+ .remove_new = mac_mace_device_remove,
.driver = {
.name = mac_mace_string,
},
diff --git a/drivers/net/ethernet/arc/emac_arc.c b/drivers/net/ethernet/arc/emac_arc.c
index ce3147e886a1..a3afddb23ee8 100644
--- a/drivers/net/ethernet/arc/emac_arc.c
+++ b/drivers/net/ethernet/arc/emac_arc.c
@@ -58,14 +58,12 @@ out_netdev:
return err;
}
-static int emac_arc_remove(struct platform_device *pdev)
+static void emac_arc_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
arc_emac_remove(ndev);
free_netdev(ndev);
-
- return 0;
}
static const struct of_device_id emac_arc_dt_ids[] = {
@@ -76,7 +74,7 @@ MODULE_DEVICE_TABLE(of, emac_arc_dt_ids);
static struct platform_driver emac_arc_driver = {
.probe = emac_arc_probe,
- .remove = emac_arc_remove,
+ .remove_new = emac_arc_remove,
.driver = {
.name = DRV_NAME,
.of_match_table = emac_arc_dt_ids,
diff --git a/drivers/net/ethernet/arc/emac_rockchip.c b/drivers/net/ethernet/arc/emac_rockchip.c
index 509101112279..493d6356c8ca 100644
--- a/drivers/net/ethernet/arc/emac_rockchip.c
+++ b/drivers/net/ethernet/arc/emac_rockchip.c
@@ -244,7 +244,7 @@ out_netdev:
return err;
}
-static int emac_rockchip_remove(struct platform_device *pdev)
+static void emac_rockchip_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct rockchip_priv_data *priv = netdev_priv(ndev);
@@ -260,12 +260,11 @@ static int emac_rockchip_remove(struct platform_device *pdev)
clk_disable_unprepare(priv->macclk);
free_netdev(ndev);
- return 0;
}
static struct platform_driver emac_rockchip_driver = {
.probe = emac_rockchip_probe,
- .remove = emac_rockchip_remove,
+ .remove_new = emac_rockchip_remove,
.driver = {
.name = DRV_NAME,
.of_match_table = emac_rockchip_dt_ids,
diff --git a/drivers/net/ethernet/asix/ax88796c_ioctl.c b/drivers/net/ethernet/asix/ax88796c_ioctl.c
index 916ae380a004..7d2fe2e5af92 100644
--- a/drivers/net/ethernet/asix/ax88796c_ioctl.c
+++ b/drivers/net/ethernet/asix/ax88796c_ioctl.c
@@ -24,7 +24,7 @@ static void
ax88796c_get_drvinfo(struct net_device *ndev, struct ethtool_drvinfo *info)
{
/* Inherit standard device info */
- strncpy(info->driver, DRV_NAME, sizeof(info->driver));
+ strscpy(info->driver, DRV_NAME, sizeof(info->driver));
}
static u32 ax88796c_get_msglevel(struct net_device *ndev)
diff --git a/drivers/net/ethernet/atheros/ag71xx.c b/drivers/net/ethernet/atheros/ag71xx.c
index 009e0b3066fa..0f2f400b5bc4 100644
--- a/drivers/net/ethernet/atheros/ag71xx.c
+++ b/drivers/net/ethernet/atheros/ag71xx.c
@@ -1968,21 +1968,19 @@ err_put_clk:
return err;
}
-static int ag71xx_remove(struct platform_device *pdev)
+static void ag71xx_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct ag71xx *ag;
if (!ndev)
- return 0;
+ return;
ag = netdev_priv(ndev);
unregister_netdev(ndev);
ag71xx_mdio_remove(ag);
clk_disable_unprepare(ag->clk_eth);
platform_set_drvdata(pdev, NULL);
-
- return 0;
}
static const u32 ar71xx_fifo_ar7100[] = {
@@ -2069,7 +2067,7 @@ static const struct of_device_id ag71xx_match[] = {
static struct platform_driver ag71xx_driver = {
.probe = ag71xx_probe,
- .remove = ag71xx_remove,
+ .remove_new = ag71xx_remove,
.driver = {
.name = "ag71xx",
.of_match_table = ag71xx_match,
diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c.h b/drivers/net/ethernet/atheros/atl1c/atl1c.h
index 43d821fe7a54..63ba64dbb731 100644
--- a/drivers/net/ethernet/atheros/atl1c/atl1c.h
+++ b/drivers/net/ethernet/atheros/atl1c/atl1c.h
@@ -504,15 +504,12 @@ struct atl1c_rrd_ring {
u16 next_to_use;
u16 next_to_clean;
struct napi_struct napi;
- struct page *rx_page;
- unsigned int rx_page_offset;
};
/* board specific private data structure */
struct atl1c_adapter {
struct net_device *netdev;
struct pci_dev *pdev;
- unsigned int rx_frag_size;
struct atl1c_hw hw;
struct atl1c_hw_stats hw_stats;
struct mii_if_info mii; /* MII interface info */
diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
index 940c5d1ff9cf..46cdc32b4e31 100644
--- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
@@ -483,15 +483,10 @@ static int atl1c_set_mac_addr(struct net_device *netdev, void *p)
static void atl1c_set_rxbufsize(struct atl1c_adapter *adapter,
struct net_device *dev)
{
- unsigned int head_size;
int mtu = dev->mtu;
adapter->rx_buffer_len = mtu > AT_RX_BUF_SIZE ?
roundup(mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN, 8) : AT_RX_BUF_SIZE;
-
- head_size = SKB_DATA_ALIGN(adapter->rx_buffer_len + NET_SKB_PAD + NET_IP_ALIGN) +
- SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
- adapter->rx_frag_size = roundup_pow_of_two(head_size);
}
static netdev_features_t atl1c_fix_features(struct net_device *netdev,
@@ -847,7 +842,8 @@ static int atl1c_sw_init(struct atl1c_adapter *adapter)
}
static inline void atl1c_clean_buffer(struct pci_dev *pdev,
- struct atl1c_buffer *buffer_info)
+ struct atl1c_buffer *buffer_info,
+ int budget)
{
u16 pci_direction;
if (buffer_info->flags & ATL1C_BUFFER_FREE)
@@ -866,7 +862,7 @@ static inline void atl1c_clean_buffer(struct pci_dev *pdev,
buffer_info->length, pci_direction);
}
if (buffer_info->skb)
- dev_consume_skb_any(buffer_info->skb);
+ napi_consume_skb(buffer_info->skb, budget);
buffer_info->dma = 0;
buffer_info->skb = NULL;
ATL1C_SET_BUFFER_STATE(buffer_info, ATL1C_BUFFER_FREE);
@@ -887,7 +883,7 @@ static void atl1c_clean_tx_ring(struct atl1c_adapter *adapter,
ring_count = tpd_ring->count;
for (index = 0; index < ring_count; index++) {
buffer_info = &tpd_ring->buffer_info[index];
- atl1c_clean_buffer(pdev, buffer_info);
+ atl1c_clean_buffer(pdev, buffer_info, 0);
}
netdev_tx_reset_queue(netdev_get_tx_queue(adapter->netdev, queue));
@@ -914,7 +910,7 @@ static void atl1c_clean_rx_ring(struct atl1c_adapter *adapter, u32 queue)
for (j = 0; j < rfd_ring->count; j++) {
buffer_info = &rfd_ring->buffer_info[j];
- atl1c_clean_buffer(pdev, buffer_info);
+ atl1c_clean_buffer(pdev, buffer_info, 0);
}
/* zero out the descriptor ring */
memset(rfd_ring->desc, 0, rfd_ring->size);
@@ -964,7 +960,6 @@ static void atl1c_init_ring_ptrs(struct atl1c_adapter *adapter)
static void atl1c_free_ring_resources(struct atl1c_adapter *adapter)
{
struct pci_dev *pdev = adapter->pdev;
- int i;
dma_free_coherent(&pdev->dev, adapter->ring_header.size,
adapter->ring_header.desc, adapter->ring_header.dma);
@@ -977,12 +972,6 @@ static void atl1c_free_ring_resources(struct atl1c_adapter *adapter)
kfree(adapter->tpd_ring[0].buffer_info);
adapter->tpd_ring[0].buffer_info = NULL;
}
- for (i = 0; i < adapter->rx_queue_count; ++i) {
- if (adapter->rrd_ring[i].rx_page) {
- put_page(adapter->rrd_ring[i].rx_page);
- adapter->rrd_ring[i].rx_page = NULL;
- }
- }
}
/**
@@ -1619,7 +1608,7 @@ static int atl1c_clean_tx(struct napi_struct *napi, int budget)
total_bytes += buffer_info->skb->len;
total_packets++;
}
- atl1c_clean_buffer(pdev, buffer_info);
+ atl1c_clean_buffer(pdev, buffer_info, budget);
if (++next_to_clean == tpd_ring->count)
next_to_clean = 0;
atomic_set(&tpd_ring->next_to_clean, next_to_clean);
@@ -1754,48 +1743,11 @@ static inline void atl1c_rx_checksum(struct atl1c_adapter *adapter,
skb_checksum_none_assert(skb);
}
-static struct sk_buff *atl1c_alloc_skb(struct atl1c_adapter *adapter,
- u32 queue, bool napi_mode)
-{
- struct atl1c_rrd_ring *rrd_ring = &adapter->rrd_ring[queue];
- struct sk_buff *skb;
- struct page *page;
-
- if (adapter->rx_frag_size > PAGE_SIZE) {
- if (likely(napi_mode))
- return napi_alloc_skb(&rrd_ring->napi,
- adapter->rx_buffer_len);
- else
- return netdev_alloc_skb_ip_align(adapter->netdev,
- adapter->rx_buffer_len);
- }
-
- page = rrd_ring->rx_page;
- if (!page) {
- page = alloc_page(GFP_ATOMIC);
- if (unlikely(!page))
- return NULL;
- rrd_ring->rx_page = page;
- rrd_ring->rx_page_offset = 0;
- }
-
- skb = build_skb(page_address(page) + rrd_ring->rx_page_offset,
- adapter->rx_frag_size);
- if (likely(skb)) {
- skb_reserve(skb, NET_SKB_PAD + NET_IP_ALIGN);
- rrd_ring->rx_page_offset += adapter->rx_frag_size;
- if (rrd_ring->rx_page_offset >= PAGE_SIZE)
- rrd_ring->rx_page = NULL;
- else
- get_page(page);
- }
- return skb;
-}
-
static int atl1c_alloc_rx_buffer(struct atl1c_adapter *adapter, u32 queue,
bool napi_mode)
{
struct atl1c_rfd_ring *rfd_ring = &adapter->rfd_ring[queue];
+ struct atl1c_rrd_ring *rrd_ring = &adapter->rrd_ring[queue];
struct pci_dev *pdev = adapter->pdev;
struct atl1c_buffer *buffer_info, *next_info;
struct sk_buff *skb;
@@ -1814,13 +1766,27 @@ static int atl1c_alloc_rx_buffer(struct atl1c_adapter *adapter, u32 queue,
while (next_info->flags & ATL1C_BUFFER_FREE) {
rfd_desc = ATL1C_RFD_DESC(rfd_ring, rfd_next_to_use);
- skb = atl1c_alloc_skb(adapter, queue, napi_mode);
+ /* When the DMA RX address is set to something like
+ * 0x....fc0, it is very likely to cause a DMA
+ * RFD overflow issue.
+ *
+ * To work around it, allocate the rx skb with 64 bytes
+ * of extra space and offset the address whenever
+ * 0x....fc0 is detected.
+ */
+ if (likely(napi_mode))
+ skb = napi_alloc_skb(&rrd_ring->napi, adapter->rx_buffer_len + 64);
+ else
+ skb = netdev_alloc_skb(adapter->netdev, adapter->rx_buffer_len + 64);
if (unlikely(!skb)) {
if (netif_msg_rx_err(adapter))
dev_warn(&pdev->dev, "alloc rx buffer failed\n");
break;
}
+ if (((unsigned long)skb->data & 0xfff) == 0xfc0)
+ skb_reserve(skb, 64);
+
/*
* Make buffer alignment 2 beyond a 16 byte boundary
* this will result in a 16 byte aligned IP header after
@@ -2186,7 +2152,7 @@ static void atl1c_tx_rollback(struct atl1c_adapter *adpt,
while (index != tpd_ring->next_to_use) {
tpd = ATL1C_TPD_DESC(tpd_ring, index);
buffer_info = &tpd_ring->buffer_info[index];
- atl1c_clean_buffer(adpt->pdev, buffer_info);
+ atl1c_clean_buffer(adpt->pdev, buffer_info, 0);
memset(tpd, 0, sizeof(struct atl1c_tpd_desc));
if (++index == tpd_ring->count)
index = 0;
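The atl1c change above threads the NAPI budget into the buffer-cleanup helper so completed TX skbs can go through napi_consume_skb(): with a non-zero budget the skb is recycled via the per-CPU NAPI cache, while a budget of 0 (ring teardown, rollback paths) takes the slower any-context free path. A sketch of that calling convention, with invented foo_* names:

#include <linux/skbuff.h>

/* Free completed TX skbs from a ring; @budget is the NAPI budget,
 * or 0 when called from a non-NAPI context such as ring teardown.
 */
static void foo_clean_tx(struct sk_buff **ring, int count, int budget)
{
	int i;

	for (i = 0; i < count; i++) {
		if (!ring[i])
			continue;
		/* budget == 0 makes the kernel fall back to
		 * dev_consume_skb_any() internally.
		 */
		napi_consume_skb(ring[i], budget);
		ring[i] = NULL;
	}
}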
diff --git a/drivers/net/ethernet/atheros/atlx/atl1.c b/drivers/net/ethernet/atheros/atlx/atl1.c
index 02aa6fd8ebc2..a9014d7932db 100644
--- a/drivers/net/ethernet/atheros/atlx/atl1.c
+++ b/drivers/net/ethernet/atheros/atlx/atl1.c
@@ -2446,7 +2446,7 @@ static int atl1_rings_clean(struct napi_struct *napi, int budget)
static inline int atl1_sched_rings_clean(struct atl1_adapter* adapter)
{
- if (!napi_schedule_prep(&adapter->napi))
+ if (!napi_schedule(&adapter->napi))
/* It is possible in case even the RX/TX ints are disabled via IMR
* register the ISR bits are set anyway (but do not produce IRQ).
* To handle such situation the napi functions used to check is
@@ -2454,8 +2454,6 @@ static inline int atl1_sched_rings_clean(struct atl1_adapter* adapter)
*/
return 0;
- __napi_schedule(&adapter->napi);
-
/*
* Disable RX/TX ints via IMR register if it is
* allowed. NAPI handler must reenable them in same
diff --git a/drivers/net/ethernet/atheros/atlx/atl2.c b/drivers/net/ethernet/atheros/atlx/atl2.c
index 1b487c071cb6..bcfc9488125b 100644
--- a/drivers/net/ethernet/atheros/atlx/atl2.c
+++ b/drivers/net/ethernet/atheros/atlx/atl2.c
@@ -1377,7 +1377,7 @@ static int atl2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
netdev->watchdog_timeo = 5 * HZ;
netdev->min_mtu = 40;
netdev->max_mtu = ETH_DATA_LEN + VLAN_HLEN;
- strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
+ strscpy(netdev->name, pci_name(pdev), sizeof(netdev->name));
netdev->mem_start = mmio_start;
netdev->mem_end = mmio_start + mmio_len;
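The strncpy() to strscpy() conversions in the ax88796c, atl2 and bcm63xx hunks share the same motivation: strscpy() always NUL-terminates the destination and does not zero-pad the remainder, so the manual "sizeof(...) - 1" bookkeeping goes away. A small sketch, with hypothetical foo names:

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/string.h>

#define FOO_DRV_NAME "foo"

static void foo_get_drvinfo(struct net_device *ndev,
			    struct ethtool_drvinfo *info)
{
	/* Guaranteed NUL-terminated; truncates silently if too long. */
	strscpy(info->driver, FOO_DRV_NAME, sizeof(info->driver));
	strscpy(info->bus_info, "platform", sizeof(info->bus_info));
}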
diff --git a/drivers/net/ethernet/broadcom/asp2/bcmasp.c b/drivers/net/ethernet/broadcom/asp2/bcmasp.c
index 41a6098eb0c2..29b04a274d07 100644
--- a/drivers/net/ethernet/broadcom/asp2/bcmasp.c
+++ b/drivers/net/ethernet/broadcom/asp2/bcmasp.c
@@ -1344,17 +1344,15 @@ of_put_exit:
return ret;
}
-static int bcmasp_remove(struct platform_device *pdev)
+static void bcmasp_remove(struct platform_device *pdev)
{
struct bcmasp_priv *priv = dev_get_drvdata(&pdev->dev);
if (!priv)
- return 0;
+ return;
priv->destroy_wol(priv);
bcmasp_remove_intfs(priv);
-
- return 0;
}
static void bcmasp_shutdown(struct platform_device *pdev)
@@ -1428,7 +1426,7 @@ static SIMPLE_DEV_PM_OPS(bcmasp_pm_ops,
static struct platform_driver bcmasp_driver = {
.probe = bcmasp_probe,
- .remove = bcmasp_remove,
+ .remove_new = bcmasp_remove,
.shutdown = bcmasp_shutdown,
.driver = {
.name = "brcm,asp-v2",
diff --git a/drivers/net/ethernet/broadcom/bcm4908_enet.c b/drivers/net/ethernet/broadcom/bcm4908_enet.c
index 33d86683af50..3e7c8671cd11 100644
--- a/drivers/net/ethernet/broadcom/bcm4908_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm4908_enet.c
@@ -768,7 +768,7 @@ err_dma_free:
return err;
}
-static int bcm4908_enet_remove(struct platform_device *pdev)
+static void bcm4908_enet_remove(struct platform_device *pdev)
{
struct bcm4908_enet *enet = platform_get_drvdata(pdev);
@@ -776,8 +776,6 @@ static int bcm4908_enet_remove(struct platform_device *pdev)
netif_napi_del(&enet->rx_ring.napi);
netif_napi_del(&enet->tx_ring.napi);
bcm4908_enet_dma_free(enet);
-
- return 0;
}
static const struct of_device_id bcm4908_enet_of_match[] = {
@@ -791,7 +789,7 @@ static struct platform_driver bcm4908_enet_driver = {
.of_match_table = bcm4908_enet_of_match,
},
.probe = bcm4908_enet_probe,
- .remove = bcm4908_enet_remove,
+ .remove_new = bcm4908_enet_remove,
};
module_platform_driver(bcm4908_enet_driver);
diff --git a/drivers/net/ethernet/broadcom/bcm63xx_enet.c b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
index a741070f1f9a..3196c4dea076 100644
--- a/drivers/net/ethernet/broadcom/bcm63xx_enet.c
+++ b/drivers/net/ethernet/broadcom/bcm63xx_enet.c
@@ -1902,7 +1902,7 @@ out:
/*
* exit func, stops hardware and unregisters netdevice
*/
-static int bcm_enet_remove(struct platform_device *pdev)
+static void bcm_enet_remove(struct platform_device *pdev)
{
struct bcm_enet_priv *priv;
struct net_device *dev;
@@ -1932,12 +1932,11 @@ static int bcm_enet_remove(struct platform_device *pdev)
clk_disable_unprepare(priv->mac_clk);
free_netdev(dev);
- return 0;
}
static struct platform_driver bcm63xx_enet_driver = {
.probe = bcm_enet_probe,
- .remove = bcm_enet_remove,
+ .remove_new = bcm_enet_remove,
.driver = {
.name = "bcm63xx_enet",
},
@@ -2531,8 +2530,8 @@ static int bcm_enetsw_get_sset_count(struct net_device *netdev,
static void bcm_enetsw_get_drvinfo(struct net_device *netdev,
struct ethtool_drvinfo *drvinfo)
{
- strncpy(drvinfo->driver, bcm_enet_driver_name, sizeof(drvinfo->driver));
- strncpy(drvinfo->bus_info, "bcm63xx", sizeof(drvinfo->bus_info));
+ strscpy(drvinfo->driver, bcm_enet_driver_name, sizeof(drvinfo->driver));
+ strscpy(drvinfo->bus_info, "bcm63xx", sizeof(drvinfo->bus_info));
}
static void bcm_enetsw_get_ethtool_stats(struct net_device *netdev,
@@ -2739,7 +2738,7 @@ out:
/* exit func, stops hardware and unregisters netdevice */
-static int bcm_enetsw_remove(struct platform_device *pdev)
+static void bcm_enetsw_remove(struct platform_device *pdev)
{
struct bcm_enet_priv *priv;
struct net_device *dev;
@@ -2752,12 +2751,11 @@ static int bcm_enetsw_remove(struct platform_device *pdev)
clk_disable_unprepare(priv->mac_clk);
free_netdev(dev);
- return 0;
}
static struct platform_driver bcm63xx_enetsw_driver = {
.probe = bcm_enetsw_probe,
- .remove = bcm_enetsw_remove,
+ .remove_new = bcm_enetsw_remove,
.driver = {
.name = "bcm63xx_enetsw",
},
diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
index bf1611cce974..c9faa8540859 100644
--- a/drivers/net/ethernet/broadcom/bcmsysport.c
+++ b/drivers/net/ethernet/broadcom/bcmsysport.c
@@ -2430,7 +2430,7 @@ static int bcm_sysport_netdevice_event(struct notifier_block *nb,
if (dev->netdev_ops != &bcm_sysport_netdev_ops)
return NOTIFY_DONE;
- if (!dsa_slave_dev_check(info->upper_dev))
+ if (!dsa_user_dev_check(info->upper_dev))
return NOTIFY_DONE;
if (info->linking)
@@ -2648,7 +2648,7 @@ err_free_netdev:
return ret;
}
-static int bcm_sysport_remove(struct platform_device *pdev)
+static void bcm_sysport_remove(struct platform_device *pdev)
{
struct net_device *dev = dev_get_drvdata(&pdev->dev);
struct bcm_sysport_priv *priv = netdev_priv(dev);
@@ -2663,8 +2663,6 @@ static int bcm_sysport_remove(struct platform_device *pdev)
of_phy_deregister_fixed_link(dn);
free_netdev(dev);
dev_set_drvdata(&pdev->dev, NULL);
-
- return 0;
}
static int bcm_sysport_suspend_to_wol(struct bcm_sysport_priv *priv)
@@ -2901,7 +2899,7 @@ static SIMPLE_DEV_PM_OPS(bcm_sysport_pm_ops,
static struct platform_driver bcm_sysport_driver = {
.probe = bcm_sysport_probe,
- .remove = bcm_sysport_remove,
+ .remove_new = bcm_sysport_remove,
.driver = {
.name = "brcm-systemport",
.of_match_table = bcm_sysport_of_match,
diff --git a/drivers/net/ethernet/broadcom/bgmac-platform.c b/drivers/net/ethernet/broadcom/bgmac-platform.c
index b4381cd41979..0b21fd5bd457 100644
--- a/drivers/net/ethernet/broadcom/bgmac-platform.c
+++ b/drivers/net/ethernet/broadcom/bgmac-platform.c
@@ -246,13 +246,11 @@ static int bgmac_probe(struct platform_device *pdev)
return bgmac_enet_probe(bgmac);
}
-static int bgmac_remove(struct platform_device *pdev)
+static void bgmac_remove(struct platform_device *pdev)
{
struct bgmac *bgmac = platform_get_drvdata(pdev);
bgmac_enet_remove(bgmac);
-
- return 0;
}
#ifdef CONFIG_PM
@@ -296,7 +294,7 @@ static struct platform_driver bgmac_enet_driver = {
.pm = BGMAC_PM_OPS
},
.probe = bgmac_probe,
- .remove = bgmac_remove,
+ .remove_new = bgmac_remove,
};
module_platform_driver(bgmac_enet_driver);
diff --git a/drivers/net/ethernet/broadcom/bnxt/Makefile b/drivers/net/ethernet/broadcom/bnxt/Makefile
index 2bc2b707d6ee..ba6c239d52fa 100644
--- a/drivers/net/ethernet/broadcom/bnxt/Makefile
+++ b/drivers/net/ethernet/broadcom/bnxt/Makefile
@@ -4,3 +4,4 @@ obj-$(CONFIG_BNXT) += bnxt_en.o
bnxt_en-y := bnxt.o bnxt_hwrm.o bnxt_sriov.o bnxt_ethtool.o bnxt_dcb.o bnxt_ulp.o bnxt_xdp.o bnxt_ptp.o bnxt_vfr.o bnxt_devlink.o bnxt_dim.o bnxt_coredump.o
bnxt_en-$(CONFIG_BNXT_FLOWER_OFFLOAD) += bnxt_tc.o
bnxt_en-$(CONFIG_DEBUG_FS) += bnxt_debugfs.o
+bnxt_en-$(CONFIG_BNXT_HWMON) += bnxt_hwmon.o
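The Makefile line above goes with the bnxt.c changes below: the ad-hoc hwmon code is moved out of bnxt.c into a separate bnxt_hwmon.c that is only built when CONFIG_BNXT_HWMON is set, while bnxt.c calls bnxt_hwmon_init()/bnxt_hwmon_uninit()/bnxt_hwmon_notify_event() unconditionally. That only works if the header supplies empty stubs for the =n case; presumably bnxt_hwmon.h follows the usual kernel pattern, roughly as in the hypothetical outline below (not the file's actual contents).

#if IS_ENABLED(CONFIG_BNXT_HWMON)
void bnxt_hwmon_init(struct bnxt *bp);
void bnxt_hwmon_uninit(struct bnxt *bp);
void bnxt_hwmon_notify_event(struct bnxt *bp);
#else
static inline void bnxt_hwmon_init(struct bnxt *bp) { }
static inline void bnxt_hwmon_uninit(struct bnxt *bp) { }
static inline void bnxt_hwmon_notify_event(struct bnxt *bp) { }
#endif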
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 7551aa8068f8..d0359b569afe 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -52,8 +52,6 @@
#include <linux/cpu_rmap.h>
#include <linux/cpumask.h>
#include <net/pkt_cls.h>
-#include <linux/hwmon.h>
-#include <linux/hwmon-sysfs.h>
#include <net/page_pool/helpers.h>
#include <linux/align.h>
#include <net/netdev_queues.h>
@@ -71,6 +69,7 @@
#include "bnxt_tc.h"
#include "bnxt_devlink.h"
#include "bnxt_debugfs.h"
+#include "bnxt_hwmon.h"
#define BNXT_TX_TIMEOUT (5 * HZ)
#define BNXT_DEF_MSG_ENABLE (NETIF_MSG_DRV | NETIF_MSG_HW | \
@@ -2130,7 +2129,68 @@ static u16 bnxt_agg_ring_id_to_grp_idx(struct bnxt *bp, u16 ring_id)
return INVALID_HW_RING_ID;
}
-static void bnxt_event_error_report(struct bnxt *bp, u32 data1, u32 data2)
+static u16 bnxt_get_force_speed(struct bnxt_link_info *link_info)
+{
+ if (link_info->req_signal_mode == BNXT_SIG_MODE_PAM4)
+ return link_info->force_pam4_link_speed;
+ return link_info->force_link_speed;
+}
+
+static void bnxt_set_force_speed(struct bnxt_link_info *link_info)
+{
+ link_info->req_link_speed = link_info->force_link_speed;
+ link_info->req_signal_mode = BNXT_SIG_MODE_NRZ;
+ if (link_info->force_pam4_link_speed) {
+ link_info->req_link_speed = link_info->force_pam4_link_speed;
+ link_info->req_signal_mode = BNXT_SIG_MODE_PAM4;
+ }
+}
+
+static void bnxt_set_auto_speed(struct bnxt_link_info *link_info)
+{
+ link_info->advertising = link_info->auto_link_speeds;
+ link_info->advertising_pam4 = link_info->auto_pam4_link_speeds;
+}
+
+static bool bnxt_force_speed_updated(struct bnxt_link_info *link_info)
+{
+ if (link_info->req_signal_mode == BNXT_SIG_MODE_NRZ &&
+ link_info->req_link_speed != link_info->force_link_speed)
+ return true;
+ if (link_info->req_signal_mode == BNXT_SIG_MODE_PAM4 &&
+ link_info->req_link_speed != link_info->force_pam4_link_speed)
+ return true;
+ return false;
+}
+
+static bool bnxt_auto_speed_updated(struct bnxt_link_info *link_info)
+{
+ if (link_info->advertising != link_info->auto_link_speeds ||
+ link_info->advertising_pam4 != link_info->auto_pam4_link_speeds)
+ return true;
+ return false;
+}
+
+#define BNXT_EVENT_THERMAL_CURRENT_TEMP(data2) \
+ ((data2) & \
+ ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA2_CURRENT_TEMP_MASK)
+
+#define BNXT_EVENT_THERMAL_THRESHOLD_TEMP(data2) \
+ (((data2) & \
+ ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA2_THRESHOLD_TEMP_MASK) >>\
+ ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA2_THRESHOLD_TEMP_SFT)
+
+#define EVENT_DATA1_THERMAL_THRESHOLD_TYPE(data1) \
+ ((data1) & \
+ ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_MASK)
+
+#define EVENT_DATA1_THERMAL_THRESHOLD_DIR_INCREASING(data1) \
+ (((data1) & \
+ ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_TRANSITION_DIR) ==\
+ ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_TRANSITION_DIR_INCREASING)
+
+/* Return true if the workqueue has to be scheduled */
+static bool bnxt_event_error_report(struct bnxt *bp, u32 data1, u32 data2)
{
u32 err_type = BNXT_EVENT_ERROR_REPORT_TYPE(data1);
@@ -2145,11 +2205,53 @@ static void bnxt_event_error_report(struct bnxt *bp, u32 data1, u32 data2)
case ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_DOORBELL_DROP_THRESHOLD:
netdev_warn(bp->dev, "One or more MMIO doorbells dropped by the device!\n");
break;
+ case ASYNC_EVENT_CMPL_ERROR_REPORT_BASE_EVENT_DATA1_ERROR_TYPE_THERMAL_THRESHOLD: {
+ u32 type = EVENT_DATA1_THERMAL_THRESHOLD_TYPE(data1);
+ char *threshold_type;
+ bool notify = false;
+ char *dir_str;
+
+ switch (type) {
+ case ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_WARN:
+ threshold_type = "warning";
+ break;
+ case ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_CRITICAL:
+ threshold_type = "critical";
+ break;
+ case ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_FATAL:
+ threshold_type = "fatal";
+ break;
+ case ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_SHUTDOWN:
+ threshold_type = "shutdown";
+ break;
+ default:
+ netdev_err(bp->dev, "Unknown Thermal threshold type event\n");
+ return false;
+ }
+ if (EVENT_DATA1_THERMAL_THRESHOLD_DIR_INCREASING(data1)) {
+ dir_str = "above";
+ notify = true;
+ } else {
+ dir_str = "below";
+ }
+ netdev_warn(bp->dev, "Chip temperature has gone %s the %s thermal threshold!\n",
+ dir_str, threshold_type);
+ netdev_warn(bp->dev, "Temperature (In Celsius), Current: %lu, threshold: %lu\n",
+ BNXT_EVENT_THERMAL_CURRENT_TEMP(data2),
+ BNXT_EVENT_THERMAL_THRESHOLD_TEMP(data2));
+ if (notify) {
+ bp->thermal_threshold_type = type;
+ set_bit(BNXT_THERMAL_THRESHOLD_SP_EVENT, &bp->sp_event);
+ return true;
+ }
+ return false;
+ }
default:
netdev_err(bp->dev, "FW reported unknown error type %u\n",
err_type);
break;
}
+ return false;
}
#define BNXT_GET_EVENT_PORT(data) \
@@ -2195,7 +2297,7 @@ static int bnxt_async_event_process(struct bnxt *bp,
/* print unsupported speed warning in forced speed mode only */
if (!(link_info->autoneg & BNXT_AUTONEG_SPEED) &&
(data1 & 0x20000)) {
- u16 fw_speed = link_info->force_link_speed;
+ u16 fw_speed = bnxt_get_force_speed(link_info);
u32 speed = bnxt_fw_to_ethtool_speed(fw_speed);
if (speed != SPEED_UNKNOWN)
@@ -2350,7 +2452,8 @@ static int bnxt_async_event_process(struct bnxt *bp,
goto async_event_process_exit;
}
case ASYNC_EVENT_CMPL_EVENT_ID_ERROR_REPORT: {
- bnxt_event_error_report(bp, data1, data2);
+ if (bnxt_event_error_report(bp, data1, data2))
+ break;
goto async_event_process_exit;
}
case ASYNC_EVENT_CMPL_EVENT_ID_PHC_UPDATE: {
@@ -3199,8 +3302,6 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
pp.dma_dir = bp->rx_dir;
pp.max_len = PAGE_SIZE;
pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
- if (PAGE_SIZE > BNXT_RX_PAGE_SIZE)
- pp.flags |= PP_FLAG_PAGE_FRAG;
rxr->page_pool = page_pool_create(&pp);
if (IS_ERR(rxr->page_pool)) {
@@ -5810,7 +5911,7 @@ static int bnxt_hwrm_set_async_event_cr(struct bnxt *bp, int idx)
if (BNXT_PF(bp)) {
struct hwrm_func_cfg_input *req;
- rc = hwrm_req_init(bp, req, HWRM_FUNC_CFG);
+ rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
if (rc)
return rc;
@@ -6221,7 +6322,7 @@ __bnxt_hwrm_reserve_pf_rings(struct bnxt *bp, int tx_rings, int rx_rings,
struct hwrm_func_cfg_input *req;
u32 enables = 0;
- if (hwrm_req_init(bp, req, HWRM_FUNC_CFG))
+ if (bnxt_hwrm_func_cfg_short_req_init(bp, &req))
return NULL;
req->fid = cpu_to_le16(0xffff);
@@ -8566,7 +8667,7 @@ static int bnxt_hwrm_set_br_mode(struct bnxt *bp, u16 br_mode)
else
return -EINVAL;
- rc = hwrm_req_init(bp, req, HWRM_FUNC_CFG);
+ rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
if (rc)
return rc;
@@ -8584,7 +8685,7 @@ static int bnxt_hwrm_set_cache_line_size(struct bnxt *bp, int size)
if (BNXT_VF(bp) || bp->hwrm_spec_code < 0x10803)
return 0;
- rc = hwrm_req_init(bp, req, HWRM_FUNC_CFG);
+ rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
if (rc)
return rc;
@@ -9628,13 +9729,31 @@ static bool bnxt_support_dropped(u16 advertising, u16 supported)
return ((supported | diff) != supported);
}
+static bool bnxt_support_speed_dropped(struct bnxt_link_info *link_info)
+{
+ /* Check if any advertised speeds are no longer supported. The caller
+ * holds the link_lock mutex, so we can modify link_info settings.
+ */
+ if (bnxt_support_dropped(link_info->advertising,
+ link_info->support_auto_speeds)) {
+ link_info->advertising = link_info->support_auto_speeds;
+ return true;
+ }
+ if (bnxt_support_dropped(link_info->advertising_pam4,
+ link_info->support_pam4_auto_speeds)) {
+ link_info->advertising_pam4 = link_info->support_pam4_auto_speeds;
+ return true;
+ }
+ return false;
+}
+
int bnxt_update_link(struct bnxt *bp, bool chng_link_state)
{
struct bnxt_link_info *link_info = &bp->link_info;
struct hwrm_port_phy_qcfg_output *resp;
struct hwrm_port_phy_qcfg_input *req;
u8 link_state = link_info->link_state;
- bool support_changed = false;
+ bool support_changed;
int rc;
rc = hwrm_req_init(bp, req, HWRM_PORT_PHY_QCFG);
@@ -9748,19 +9867,7 @@ int bnxt_update_link(struct bnxt *bp, bool chng_link_state)
if (!BNXT_PHY_CFG_ABLE(bp))
return 0;
- /* Check if any advertised speeds are no longer supported. The caller
- * holds the link_lock mutex, so we can modify link_info settings.
- */
- if (bnxt_support_dropped(link_info->advertising,
- link_info->support_auto_speeds)) {
- link_info->advertising = link_info->support_auto_speeds;
- support_changed = true;
- }
- if (bnxt_support_dropped(link_info->advertising_pam4,
- link_info->support_pam4_auto_speeds)) {
- link_info->advertising_pam4 = link_info->support_pam4_auto_speeds;
- support_changed = true;
- }
+ support_changed = bnxt_support_speed_dropped(link_info);
if (support_changed && (link_info->autoneg & BNXT_AUTONEG_SPEED))
bnxt_hwrm_set_link_setting(bp, true, false);
return 0;
@@ -10250,79 +10357,6 @@ static void bnxt_get_wol_settings(struct bnxt *bp)
} while (handle && handle != 0xffff);
}
-#ifdef CONFIG_BNXT_HWMON
-static ssize_t bnxt_show_temp(struct device *dev,
- struct device_attribute *devattr, char *buf)
-{
- struct hwrm_temp_monitor_query_output *resp;
- struct hwrm_temp_monitor_query_input *req;
- struct bnxt *bp = dev_get_drvdata(dev);
- u32 len = 0;
- int rc;
-
- rc = hwrm_req_init(bp, req, HWRM_TEMP_MONITOR_QUERY);
- if (rc)
- return rc;
- resp = hwrm_req_hold(bp, req);
- rc = hwrm_req_send(bp, req);
- if (!rc)
- len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */
- hwrm_req_drop(bp, req);
- if (rc)
- return rc;
- return len;
-}
-static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0);
-
-static struct attribute *bnxt_attrs[] = {
- &sensor_dev_attr_temp1_input.dev_attr.attr,
- NULL
-};
-ATTRIBUTE_GROUPS(bnxt);
-
-static void bnxt_hwmon_close(struct bnxt *bp)
-{
- if (bp->hwmon_dev) {
- hwmon_device_unregister(bp->hwmon_dev);
- bp->hwmon_dev = NULL;
- }
-}
-
-static void bnxt_hwmon_open(struct bnxt *bp)
-{
- struct hwrm_temp_monitor_query_input *req;
- struct pci_dev *pdev = bp->pdev;
- int rc;
-
- rc = hwrm_req_init(bp, req, HWRM_TEMP_MONITOR_QUERY);
- if (!rc)
- rc = hwrm_req_send_silent(bp, req);
- if (rc == -EACCES || rc == -EOPNOTSUPP) {
- bnxt_hwmon_close(bp);
- return;
- }
-
- if (bp->hwmon_dev)
- return;
-
- bp->hwmon_dev = hwmon_device_register_with_groups(&pdev->dev,
- DRV_MODULE_NAME, bp,
- bnxt_groups);
- if (IS_ERR(bp->hwmon_dev)) {
- bp->hwmon_dev = NULL;
- dev_warn(&pdev->dev, "Cannot register hwmon device\n");
- }
-}
-#else
-static void bnxt_hwmon_close(struct bnxt *bp)
-{
-}
-
-static void bnxt_hwmon_open(struct bnxt *bp)
-{
-}
-#endif
-
static bool bnxt_eee_config_ok(struct bnxt *bp)
{
struct ethtool_eee *eee = &bp->eee;
@@ -10374,19 +10408,14 @@ static int bnxt_update_phy_setting(struct bnxt *bp)
if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
if (BNXT_AUTO_MODE(link_info->auto_mode))
update_link = true;
- if (link_info->req_signal_mode == BNXT_SIG_MODE_NRZ &&
- link_info->req_link_speed != link_info->force_link_speed)
- update_link = true;
- else if (link_info->req_signal_mode == BNXT_SIG_MODE_PAM4 &&
- link_info->req_link_speed != link_info->force_pam4_link_speed)
+ if (bnxt_force_speed_updated(link_info))
update_link = true;
if (link_info->req_duplex != link_info->duplex_setting)
update_link = true;
} else {
if (link_info->auto_mode == BNXT_LINK_AUTO_NONE)
update_link = true;
- if (link_info->advertising != link_info->auto_link_speeds ||
- link_info->advertising_pam4 != link_info->auto_pam4_link_speeds)
+ if (bnxt_auto_speed_updated(link_info))
update_link = true;
}
@@ -10651,7 +10680,6 @@ static int bnxt_open(struct net_device *dev)
bnxt_reenable_sriov(bp);
}
}
- bnxt_hwmon_open(bp);
}
return rc;
@@ -10736,7 +10764,6 @@ static int bnxt_close(struct net_device *dev)
{
struct bnxt *bp = netdev_priv(dev);
- bnxt_hwmon_close(bp);
bnxt_close_nic(bp, true, true);
bnxt_hwrm_shutdown_link(bp);
bnxt_hwrm_if_change(bp, false);
@@ -12006,16 +12033,9 @@ static void bnxt_init_ethtool_link_settings(struct bnxt *bp)
} else {
link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
}
- link_info->advertising = link_info->auto_link_speeds;
- link_info->advertising_pam4 = link_info->auto_pam4_link_speeds;
+ bnxt_set_auto_speed(link_info);
} else {
- link_info->req_link_speed = link_info->force_link_speed;
- link_info->req_signal_mode = BNXT_SIG_MODE_NRZ;
- if (link_info->force_pam4_link_speed) {
- link_info->req_link_speed =
- link_info->force_pam4_link_speed;
- link_info->req_signal_mode = BNXT_SIG_MODE_PAM4;
- }
+ bnxt_set_force_speed(link_info);
link_info->req_duplex = link_info->duplex_setting;
}
if (link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL)
@@ -12109,6 +12129,9 @@ static void bnxt_sp_task(struct work_struct *work)
if (test_and_clear_bit(BNXT_FW_ECHO_REQUEST_SP_EVENT, &bp->sp_event))
bnxt_fw_echo_reply(bp);
+ if (test_and_clear_bit(BNXT_THERMAL_THRESHOLD_SP_EVENT, &bp->sp_event))
+ bnxt_hwmon_notify_event(bp);
+
/* These functions below will clear BNXT_STATE_IN_SP_TASK. They
* must be the last functions to be called before exiting.
*/
@@ -12237,6 +12260,20 @@ static void bnxt_init_dflt_coal(struct bnxt *bp)
bp->stats_coal_ticks = BNXT_DEF_STATS_COAL_TICKS;
}
+/* FW that pre-reserves 1 VNIC per function */
+static bool bnxt_fw_pre_resv_vnics(struct bnxt *bp)
+{
+ u16 fw_maj = BNXT_FW_MAJ(bp), fw_bld = BNXT_FW_BLD(bp);
+
+ if (!(bp->flags & BNXT_FLAG_CHIP_P5) &&
+ (fw_maj > 218 || (fw_maj == 218 && fw_bld >= 18)))
+ return true;
+ if ((bp->flags & BNXT_FLAG_CHIP_P5) &&
+ (fw_maj > 216 || (fw_maj == 216 && fw_bld >= 172)))
+ return true;
+ return false;
+}
+
static int bnxt_fw_init_one_p1(struct bnxt *bp)
{
int rc;
@@ -12293,6 +12330,9 @@ static int bnxt_fw_init_one_p2(struct bnxt *bp)
if (rc)
return -ENODEV;
+ if (bnxt_fw_pre_resv_vnics(bp))
+ bp->fw_cap |= BNXT_FW_CAP_PRE_RESV_VNICS;
+
bnxt_hwrm_func_qcfg(bp);
bnxt_hwrm_vnic_qcaps(bp);
bnxt_hwrm_port_led_qcaps(bp);
@@ -12300,6 +12340,7 @@ static int bnxt_fw_init_one_p2(struct bnxt *bp)
if (bp->fw_cap & BNXT_FW_CAP_PTP)
__bnxt_hwrm_ptp_qcfg(bp);
bnxt_dcb_init(bp);
+ bnxt_hwmon_init(bp);
return 0;
}
@@ -13205,6 +13246,7 @@ static void bnxt_remove_one(struct pci_dev *pdev)
bnxt_clear_int_mode(bp);
bnxt_hwrm_func_drv_unrgtr(bp);
bnxt_free_hwrm_resources(bp);
+ bnxt_hwmon_uninit(bp);
bnxt_ethtool_free(bp);
bnxt_dcb_free(bp);
kfree(bp->ptp_cfg);
@@ -13801,6 +13843,7 @@ init_err_dl:
init_err_pci_clean:
bnxt_hwrm_func_drv_unrgtr(bp);
bnxt_free_hwrm_resources(bp);
+ bnxt_hwmon_uninit(bp);
bnxt_ethtool_free(bp);
bnxt_ptp_clear(bp);
kfree(bp->ptp_cfg);
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 84cbcfa61bc1..e702dbc3e6b1 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1295,6 +1295,7 @@ struct bnxt_link_info {
u8 req_signal_mode;
#define BNXT_SIG_MODE_NRZ PORT_PHY_QCFG_RESP_SIGNAL_MODE_NRZ
#define BNXT_SIG_MODE_PAM4 PORT_PHY_QCFG_RESP_SIGNAL_MODE_PAM4
+#define BNXT_SIG_MODE_MAX (PORT_PHY_QCFG_RESP_SIGNAL_MODE_LAST + 1)
u8 req_duplex;
u8 req_flow_ctrl;
u16 req_link_speed;
@@ -2013,6 +2014,9 @@ struct bnxt {
#define BNXT_FW_CAP_RING_MONITOR BIT_ULL(30)
#define BNXT_FW_CAP_DBG_QCAPS BIT_ULL(31)
#define BNXT_FW_CAP_PTP BIT_ULL(32)
+ #define BNXT_FW_CAP_THRESHOLD_TEMP_SUPPORTED BIT_ULL(33)
+ #define BNXT_FW_CAP_DFLT_VLAN_TPID_PCP BIT_ULL(34)
+ #define BNXT_FW_CAP_PRE_RESV_VNICS BIT_ULL(35)
u32 fw_dbg_cap;
@@ -2053,6 +2057,7 @@ struct bnxt {
#define BNXT_FW_VER_CODE(maj, min, bld, rsv) \
((u64)(maj) << 48 | (u64)(min) << 32 | (u64)(bld) << 16 | (rsv))
#define BNXT_FW_MAJ(bp) ((bp)->fw_ver_code >> 48)
+#define BNXT_FW_BLD(bp) (((bp)->fw_ver_code >> 16) & 0xffff)
u16 vxlan_fw_dst_port_id;
u16 nge_fw_dst_port_id;
@@ -2090,6 +2095,7 @@ struct bnxt {
#define BNXT_FW_RESET_NOTIFY_SP_EVENT 18
#define BNXT_FW_EXCEPTION_SP_EVENT 19
#define BNXT_LINK_CFG_CHANGE_SP_EVENT 21
+#define BNXT_THERMAL_THRESHOLD_SP_EVENT 22
#define BNXT_FW_ECHO_REQUEST_SP_EVENT 23
struct delayed_work fw_reset_task;
@@ -2185,7 +2191,14 @@ struct bnxt {
struct bnxt_tc_info *tc_info;
struct list_head tc_indr_block_list;
struct dentry *debugfs_pdev;
+#ifdef CONFIG_BNXT_HWMON
struct device *hwmon_dev;
+ u8 warn_thresh_temp;
+ u8 crit_thresh_temp;
+ u8 fatal_thresh_temp;
+ u8 shutdown_thresh_temp;
+#endif
+ u32 thermal_threshold_type;
enum board_idx board_idx;
};
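The new BNXT_FW_BLD() macro above complements BNXT_FW_MAJ() for the bnxt_fw_pre_resv_vnics() check earlier in bnxt.c: fw_ver_code packs major/minor/build/reserved into 16-bit fields of a u64, and the gate is "build >= X on major Y, or any newer major". A short worked example, with illustrative values only:

/* Worked example: firmware 218.0.18.0
 *
 *   fw_ver_code = BNXT_FW_VER_CODE(218, 0, 18, 0)
 *               = 218 << 48 | 0 << 32 | 18 << 16 | 0
 *
 *   BNXT_FW_MAJ(bp) = fw_ver_code >> 48            = 218
 *   BNXT_FW_BLD(bp) = (fw_ver_code >> 16) & 0xffff =  18
 *
 * so the non-P5 test in bnxt_fw_pre_resv_vnics(),
 *   fw_maj > 218 || (fw_maj == 218 && fw_bld >= 18),
 * is true from build 218.0.18 onwards.
 */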
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
index 8b3e7697390f..f302dac56599 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.c
@@ -62,7 +62,7 @@ static int bnxt_hwrm_remote_dev_reset_set(struct bnxt *bp, bool remote_reset)
if (~bp->fw_cap & BNXT_FW_CAP_HOT_RESET_IF)
return -EOPNOTSUPP;
- rc = hwrm_req_init(bp, req, HWRM_FUNC_CFG);
+ rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
if (rc)
return rc;
@@ -104,20 +104,21 @@ static int bnxt_fw_diagnose(struct devlink_health_reporter *reporter,
struct bnxt *bp = devlink_health_reporter_priv(reporter);
struct bnxt_fw_health *h = bp->fw_health;
u32 fw_status, fw_resets;
- int rc;
- if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state))
- return devlink_fmsg_string_pair_put(fmsg, "Status", "recovering");
+ if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state)) {
+ devlink_fmsg_string_pair_put(fmsg, "Status", "recovering");
+ return 0;
+ }
- if (!h->status_reliable)
- return devlink_fmsg_string_pair_put(fmsg, "Status", "unknown");
+ if (!h->status_reliable) {
+ devlink_fmsg_string_pair_put(fmsg, "Status", "unknown");
+ return 0;
+ }
mutex_lock(&h->lock);
fw_status = bnxt_fw_health_readl(bp, BNXT_FW_HEALTH_REG);
if (BNXT_FW_IS_BOOTING(fw_status)) {
- rc = devlink_fmsg_string_pair_put(fmsg, "Status", "initializing");
- if (rc)
- goto unlock;
+ devlink_fmsg_string_pair_put(fmsg, "Status", "initializing");
} else if (h->severity || fw_status != BNXT_FW_STATUS_HEALTHY) {
if (!h->severity) {
h->severity = SEVERITY_FATAL;
@@ -126,58 +127,35 @@ static int bnxt_fw_diagnose(struct devlink_health_reporter *reporter,
devlink_health_report(h->fw_reporter,
"FW error diagnosed", h);
}
- rc = devlink_fmsg_string_pair_put(fmsg, "Status", "error");
- if (rc)
- goto unlock;
- rc = devlink_fmsg_u32_pair_put(fmsg, "Syndrome", fw_status);
- if (rc)
- goto unlock;
+ devlink_fmsg_string_pair_put(fmsg, "Status", "error");
+ devlink_fmsg_u32_pair_put(fmsg, "Syndrome", fw_status);
} else {
- rc = devlink_fmsg_string_pair_put(fmsg, "Status", "healthy");
- if (rc)
- goto unlock;
+ devlink_fmsg_string_pair_put(fmsg, "Status", "healthy");
}
- rc = devlink_fmsg_string_pair_put(fmsg, "Severity",
- bnxt_health_severity_str(h->severity));
- if (rc)
- goto unlock;
+ devlink_fmsg_string_pair_put(fmsg, "Severity",
+ bnxt_health_severity_str(h->severity));
if (h->severity) {
- rc = devlink_fmsg_string_pair_put(fmsg, "Remedy",
- bnxt_health_remedy_str(h->remedy));
- if (rc)
- goto unlock;
- if (h->remedy == REMEDY_DEVLINK_RECOVER) {
- rc = devlink_fmsg_string_pair_put(fmsg, "Impact",
- "traffic+ntuple_cfg");
- if (rc)
- goto unlock;
- }
+ devlink_fmsg_string_pair_put(fmsg, "Remedy",
+ bnxt_health_remedy_str(h->remedy));
+ if (h->remedy == REMEDY_DEVLINK_RECOVER)
+ devlink_fmsg_string_pair_put(fmsg, "Impact",
+ "traffic+ntuple_cfg");
}
-unlock:
mutex_unlock(&h->lock);
- if (rc || !h->resets_reliable)
- return rc;
+ if (!h->resets_reliable)
+ return 0;
fw_resets = bnxt_fw_health_readl(bp, BNXT_FW_RESET_CNT_REG);
- rc = devlink_fmsg_u32_pair_put(fmsg, "Resets", fw_resets);
- if (rc)
- return rc;
- rc = devlink_fmsg_u32_pair_put(fmsg, "Arrests", h->arrests);
- if (rc)
- return rc;
- rc = devlink_fmsg_u32_pair_put(fmsg, "Survivals", h->survivals);
- if (rc)
- return rc;
- rc = devlink_fmsg_u32_pair_put(fmsg, "Discoveries", h->discoveries);
- if (rc)
- return rc;
- rc = devlink_fmsg_u32_pair_put(fmsg, "Fatalities", h->fatalities);
- if (rc)
- return rc;
- return devlink_fmsg_u32_pair_put(fmsg, "Diagnoses", h->diagnoses);
+ devlink_fmsg_u32_pair_put(fmsg, "Resets", fw_resets);
+ devlink_fmsg_u32_pair_put(fmsg, "Arrests", h->arrests);
+ devlink_fmsg_u32_pair_put(fmsg, "Survivals", h->survivals);
+ devlink_fmsg_u32_pair_put(fmsg, "Discoveries", h->discoveries);
+ devlink_fmsg_u32_pair_put(fmsg, "Fatalities", h->fatalities);
+ devlink_fmsg_u32_pair_put(fmsg, "Diagnoses", h->diagnoses);
+ return 0;
}
static int bnxt_fw_dump(struct devlink_health_reporter *reporter,
@@ -203,19 +181,12 @@ static int bnxt_fw_dump(struct devlink_health_reporter *reporter,
rc = bnxt_get_coredump(bp, BNXT_DUMP_LIVE, data, &dump_len);
if (!rc) {
- rc = devlink_fmsg_pair_nest_start(fmsg, "core");
- if (rc)
- goto exit;
- rc = devlink_fmsg_binary_pair_put(fmsg, "data", data, dump_len);
- if (rc)
- goto exit;
- rc = devlink_fmsg_u32_pair_put(fmsg, "size", dump_len);
- if (rc)
- goto exit;
- rc = devlink_fmsg_pair_nest_end(fmsg);
+ devlink_fmsg_pair_nest_start(fmsg, "core");
+ devlink_fmsg_binary_pair_put(fmsg, "data", data, dump_len);
+ devlink_fmsg_u32_pair_put(fmsg, "size", dump_len);
+ devlink_fmsg_pair_nest_end(fmsg);
}
-exit:
vfree(data);
return rc;
}
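
The bnxt_devlink.c hunks above drop every per-call rc check around the devlink_fmsg_* helpers, which is consistent with those helpers now returning void and latching any formatting error inside the fmsg for the devlink core to report. A minimal sketch of the resulting calling pattern, assuming exactly that void-returning API; the callback name and the reported fields below are invented for illustration, not taken from the driver:

/* Sketch only, not driver code: the callback and field names are
 * hypothetical.  Assumes the devlink_fmsg_* pair helpers return void,
 * as the removal of the rc checks in the hunks above suggests.
 */
#include <net/devlink.h>

static int example_fw_diagnose(struct devlink_health_reporter *reporter,
			       struct devlink_fmsg *fmsg,
			       struct netlink_ext_ack *extack)
{
	/* Emit the key/value pairs unconditionally; any formatting error
	 * is carried inside fmsg and surfaced by the devlink core.
	 */
	devlink_fmsg_string_pair_put(fmsg, "Status", "healthy");
	devlink_fmsg_u32_pair_put(fmsg, "Resets", 0);
	return 0;
}

Under that convention a diagnose or dump callback only returns a negative errno for failures it detects itself (as bnxt_fw_dump still does for the coredump read), not for message formatting.
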
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 547247d98eba..f3f384773ac0 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -8,6 +8,7 @@
* the Free Software Foundation.
*/
+#include <linux/bitops.h>
#include <linux/ctype.h>
#include <linux/stringify.h>
#include <linux/ethtool.h>
@@ -534,6 +535,7 @@ static int bnxt_get_num_ring_stats(struct bnxt *bp)
static int bnxt_get_num_stats(struct bnxt *bp)
{
int num_stats = bnxt_get_num_ring_stats(bp);
+ int len;
num_stats += BNXT_NUM_RING_ERR_STATS;
@@ -541,8 +543,12 @@ static int bnxt_get_num_stats(struct bnxt *bp)
num_stats += BNXT_NUM_PORT_STATS;
if (bp->flags & BNXT_FLAG_PORT_STATS_EXT) {
- num_stats += bp->fw_rx_stats_ext_size +
- bp->fw_tx_stats_ext_size;
+ len = min_t(int, bp->fw_rx_stats_ext_size,
+ ARRAY_SIZE(bnxt_port_stats_ext_arr));
+ num_stats += len;
+ len = min_t(int, bp->fw_tx_stats_ext_size,
+ ARRAY_SIZE(bnxt_tx_port_stats_ext_arr));
+ num_stats += len;
if (bp->pri2cos_valid)
num_stats += BNXT_NUM_STATS_PRI;
}
@@ -652,12 +658,17 @@ skip_ring_stats:
if (bp->flags & BNXT_FLAG_PORT_STATS_EXT) {
u64 *rx_port_stats_ext = bp->rx_port_stats_ext.sw_stats;
u64 *tx_port_stats_ext = bp->tx_port_stats_ext.sw_stats;
+ u32 len;
- for (i = 0; i < bp->fw_rx_stats_ext_size; i++, j++) {
+ len = min_t(u32, bp->fw_rx_stats_ext_size,
+ ARRAY_SIZE(bnxt_port_stats_ext_arr));
+ for (i = 0; i < len; i++, j++) {
buf[j] = *(rx_port_stats_ext +
bnxt_port_stats_ext_arr[i].offset);
}
- for (i = 0; i < bp->fw_tx_stats_ext_size; i++, j++) {
+ len = min_t(u32, bp->fw_tx_stats_ext_size,
+ ARRAY_SIZE(bnxt_tx_port_stats_ext_arr));
+ for (i = 0; i < len; i++, j++) {
buf[j] = *(tx_port_stats_ext +
bnxt_tx_port_stats_ext_arr[i].offset);
}
@@ -756,11 +767,17 @@ skip_tpa_stats:
}
}
if (bp->flags & BNXT_FLAG_PORT_STATS_EXT) {
- for (i = 0; i < bp->fw_rx_stats_ext_size; i++) {
+ u32 len;
+
+ len = min_t(u32, bp->fw_rx_stats_ext_size,
+ ARRAY_SIZE(bnxt_port_stats_ext_arr));
+ for (i = 0; i < len; i++) {
strcpy(buf, bnxt_port_stats_ext_arr[i].string);
buf += ETH_GSTRING_LEN;
}
- for (i = 0; i < bp->fw_tx_stats_ext_size; i++) {
+ len = min_t(u32, bp->fw_tx_stats_ext_size,
+ ARRAY_SIZE(bnxt_tx_port_stats_ext_arr));
+ for (i = 0; i < len; i++) {
strcpy(buf,
bnxt_tx_port_stats_ext_arr[i].string);
buf += ETH_GSTRING_LEN;
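
The three hunks above clamp the firmware-reported extended-stats counts with min_t() against ARRAY_SIZE() of the driver's descriptor arrays, so firmware that reports more counters than the driver knows about can no longer walk past the end of those arrays; the same clamped length is used for counting the stats, copying their values and copying their strings, keeping all three in step. A tiny standalone sketch of the same clamping idiom, with every name invented for illustration:

/* Sketch only: clamp a device-reported count to the size of a local
 * descriptor array before iterating, the idiom the hunks above apply
 * via min_t()/ARRAY_SIZE().  All names here are hypothetical.
 */
#include <stdio.h>

#define EX_ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))
#define EX_MIN(a, b)		((a) < (b) ? (a) : (b))

static const char * const ex_stat_names[] = { "rx_bytes", "rx_pkts" };

int main(void)
{
	unsigned int fw_reported = 5;	/* device may report more entries */
	unsigned int len = EX_MIN(fw_reported,
				  EX_ARRAY_SIZE(ex_stat_names));

	for (unsigned int i = 0; i < len; i++)	/* never walks past the names */
		printf("stat %u: %s\n", i, ex_stat_names[i]);
	return 0;
}
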
@@ -1506,94 +1523,388 @@ u32 _bnxt_fw_to_ethtool_adv_spds(u16 fw_speeds, u8 fw_pause)
return speed_mask;
}
-#define BNXT_FW_TO_ETHTOOL_SPDS(fw_speeds, fw_pause, lk_ksettings, name)\
-{ \
- if ((fw_speeds) & BNXT_LINK_SPEED_MSK_100MB) \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- 100baseT_Full); \
- if ((fw_speeds) & BNXT_LINK_SPEED_MSK_1GB) \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- 1000baseT_Full); \
- if ((fw_speeds) & BNXT_LINK_SPEED_MSK_10GB) \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- 10000baseT_Full); \
- if ((fw_speeds) & BNXT_LINK_SPEED_MSK_25GB) \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- 25000baseCR_Full); \
- if ((fw_speeds) & BNXT_LINK_SPEED_MSK_40GB) \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- 40000baseCR4_Full);\
- if ((fw_speeds) & BNXT_LINK_SPEED_MSK_50GB) \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- 50000baseCR2_Full);\
- if ((fw_speeds) & BNXT_LINK_SPEED_MSK_100GB) \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- 100000baseCR4_Full);\
- if ((fw_pause) & BNXT_LINK_PAUSE_RX) { \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- Pause); \
- if (!((fw_pause) & BNXT_LINK_PAUSE_TX)) \
- ethtool_link_ksettings_add_link_mode( \
- lk_ksettings, name, Asym_Pause);\
- } else if ((fw_pause) & BNXT_LINK_PAUSE_TX) { \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- Asym_Pause); \
- } \
+enum bnxt_media_type {
+ BNXT_MEDIA_UNKNOWN = 0,
+ BNXT_MEDIA_TP,
+ BNXT_MEDIA_CR,
+ BNXT_MEDIA_SR,
+ BNXT_MEDIA_LR_ER_FR,
+ BNXT_MEDIA_KR,
+ BNXT_MEDIA_KX,
+ BNXT_MEDIA_X,
+ __BNXT_MEDIA_END,
+};
+
+static const enum bnxt_media_type bnxt_phy_types[] = {
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_BASECR] = BNXT_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_BASEKR4] = BNXT_MEDIA_KR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_BASELR] = BNXT_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_BASESR] = BNXT_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_BASEKR2] = BNXT_MEDIA_KR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_BASEKX] = BNXT_MEDIA_KX,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_BASEKR] = BNXT_MEDIA_KR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_BASET] = BNXT_MEDIA_TP,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_BASETE] = BNXT_MEDIA_TP,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_25G_BASECR_CA_L] = BNXT_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_25G_BASECR_CA_S] = BNXT_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_25G_BASECR_CA_N] = BNXT_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_25G_BASESR] = BNXT_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASECR4] = BNXT_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASESR4] = BNXT_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASELR4] = BNXT_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASEER4] = BNXT_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASESR10] = BNXT_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_40G_BASECR4] = BNXT_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_40G_BASESR4] = BNXT_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_40G_BASELR4] = BNXT_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_40G_BASEER4] = BNXT_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_40G_ACTIVE_CABLE] = BNXT_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_1G_BASET] = BNXT_MEDIA_TP,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_1G_BASESX] = BNXT_MEDIA_X,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_1G_BASECX] = BNXT_MEDIA_X,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASECR4] = BNXT_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASESR4] = BNXT_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASELR4] = BNXT_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_200G_BASEER4] = BNXT_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_50G_BASECR] = BNXT_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_50G_BASESR] = BNXT_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_50G_BASELR] = BNXT_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_50G_BASEER] = BNXT_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASECR2] = BNXT_MEDIA_CR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASESR2] = BNXT_MEDIA_SR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASELR2] = BNXT_MEDIA_LR_ER_FR,
+ [PORT_PHY_QCFG_RESP_PHY_TYPE_100G_BASEER2] = BNXT_MEDIA_LR_ER_FR,
+};
+
+static enum bnxt_media_type
+bnxt_get_media(struct bnxt_link_info *link_info)
+{
+ switch (link_info->media_type) {
+ case PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP:
+ return BNXT_MEDIA_TP;
+ case PORT_PHY_QCFG_RESP_MEDIA_TYPE_DAC:
+ return BNXT_MEDIA_CR;
+ default:
+ if (link_info->phy_type < ARRAY_SIZE(bnxt_phy_types))
+ return bnxt_phy_types[link_info->phy_type];
+ return BNXT_MEDIA_UNKNOWN;
+ }
+}
+
+enum bnxt_link_speed_indices {
+ BNXT_LINK_SPEED_UNKNOWN = 0,
+ BNXT_LINK_SPEED_100MB_IDX,
+ BNXT_LINK_SPEED_1GB_IDX,
+ BNXT_LINK_SPEED_10GB_IDX,
+ BNXT_LINK_SPEED_25GB_IDX,
+ BNXT_LINK_SPEED_40GB_IDX,
+ BNXT_LINK_SPEED_50GB_IDX,
+ BNXT_LINK_SPEED_100GB_IDX,
+ BNXT_LINK_SPEED_200GB_IDX,
+ __BNXT_LINK_SPEED_END
+};
+
+static enum bnxt_link_speed_indices bnxt_fw_speed_idx(u16 speed)
+{
+ switch (speed) {
+ case BNXT_LINK_SPEED_100MB: return BNXT_LINK_SPEED_100MB_IDX;
+ case BNXT_LINK_SPEED_1GB: return BNXT_LINK_SPEED_1GB_IDX;
+ case BNXT_LINK_SPEED_10GB: return BNXT_LINK_SPEED_10GB_IDX;
+ case BNXT_LINK_SPEED_25GB: return BNXT_LINK_SPEED_25GB_IDX;
+ case BNXT_LINK_SPEED_40GB: return BNXT_LINK_SPEED_40GB_IDX;
+ case BNXT_LINK_SPEED_50GB: return BNXT_LINK_SPEED_50GB_IDX;
+ case BNXT_LINK_SPEED_100GB: return BNXT_LINK_SPEED_100GB_IDX;
+ case BNXT_LINK_SPEED_200GB: return BNXT_LINK_SPEED_200GB_IDX;
+ default: return BNXT_LINK_SPEED_UNKNOWN;
+ }
+}
+
+static const enum ethtool_link_mode_bit_indices
+bnxt_link_modes[__BNXT_LINK_SPEED_END][BNXT_SIG_MODE_MAX][__BNXT_MEDIA_END] = {
+ [BNXT_LINK_SPEED_100MB_IDX] = {
+ {
+ [BNXT_MEDIA_TP] = ETHTOOL_LINK_MODE_100baseT_Full_BIT,
+ },
+ },
+ [BNXT_LINK_SPEED_1GB_IDX] = {
+ {
+ [BNXT_MEDIA_TP] = ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
+ /* historically baseT, but DAC is more correctly baseX */
+ [BNXT_MEDIA_CR] = ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
+ [BNXT_MEDIA_KX] = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
+ [BNXT_MEDIA_X] = ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
+ [BNXT_MEDIA_KR] = ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
+ },
+ },
+ [BNXT_LINK_SPEED_10GB_IDX] = {
+ {
+ [BNXT_MEDIA_TP] = ETHTOOL_LINK_MODE_10000baseT_Full_BIT,
+ [BNXT_MEDIA_CR] = ETHTOOL_LINK_MODE_10000baseCR_Full_BIT,
+ [BNXT_MEDIA_SR] = ETHTOOL_LINK_MODE_10000baseSR_Full_BIT,
+ [BNXT_MEDIA_LR_ER_FR] = ETHTOOL_LINK_MODE_10000baseLR_Full_BIT,
+ [BNXT_MEDIA_KR] = ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
+ [BNXT_MEDIA_KX] = ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT,
+ },
+ },
+ [BNXT_LINK_SPEED_25GB_IDX] = {
+ {
+ [BNXT_MEDIA_CR] = ETHTOOL_LINK_MODE_25000baseCR_Full_BIT,
+ [BNXT_MEDIA_SR] = ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
+ [BNXT_MEDIA_KR] = ETHTOOL_LINK_MODE_25000baseKR_Full_BIT,
+ },
+ },
+ [BNXT_LINK_SPEED_40GB_IDX] = {
+ {
+ [BNXT_MEDIA_CR] = ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT,
+ [BNXT_MEDIA_SR] = ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT,
+ [BNXT_MEDIA_LR_ER_FR] = ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT,
+ [BNXT_MEDIA_KR] = ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT,
+ },
+ },
+ [BNXT_LINK_SPEED_50GB_IDX] = {
+ [BNXT_SIG_MODE_NRZ] = {
+ [BNXT_MEDIA_CR] = ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT,
+ [BNXT_MEDIA_SR] = ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT,
+ [BNXT_MEDIA_KR] = ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT,
+ },
+ [BNXT_SIG_MODE_PAM4] = {
+ [BNXT_MEDIA_CR] = ETHTOOL_LINK_MODE_50000baseCR_Full_BIT,
+ [BNXT_MEDIA_SR] = ETHTOOL_LINK_MODE_50000baseSR_Full_BIT,
+ [BNXT_MEDIA_LR_ER_FR] = ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT,
+ [BNXT_MEDIA_KR] = ETHTOOL_LINK_MODE_50000baseKR_Full_BIT,
+ },
+ },
+ [BNXT_LINK_SPEED_100GB_IDX] = {
+ [BNXT_SIG_MODE_NRZ] = {
+ [BNXT_MEDIA_CR] = ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT,
+ [BNXT_MEDIA_SR] = ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT,
+ [BNXT_MEDIA_LR_ER_FR] = ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
+ [BNXT_MEDIA_KR] = ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
+ },
+ [BNXT_SIG_MODE_PAM4] = {
+ [BNXT_MEDIA_CR] = ETHTOOL_LINK_MODE_100000baseCR2_Full_BIT,
+ [BNXT_MEDIA_SR] = ETHTOOL_LINK_MODE_100000baseSR2_Full_BIT,
+ [BNXT_MEDIA_LR_ER_FR] = ETHTOOL_LINK_MODE_100000baseLR2_ER2_FR2_Full_BIT,
+ [BNXT_MEDIA_KR] = ETHTOOL_LINK_MODE_100000baseKR2_Full_BIT,
+ },
+ },
+ [BNXT_LINK_SPEED_200GB_IDX] = {
+ [BNXT_SIG_MODE_PAM4] = {
+ [BNXT_MEDIA_CR] = ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT,
+ [BNXT_MEDIA_SR] = ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT,
+ [BNXT_MEDIA_LR_ER_FR] = ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT,
+ [BNXT_MEDIA_KR] = ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT,
+ },
+ },
+};
+
+#define BNXT_LINK_MODE_UNKNOWN -1
+
+static enum ethtool_link_mode_bit_indices
+bnxt_get_link_mode(struct bnxt_link_info *link_info)
+{
+ enum ethtool_link_mode_bit_indices link_mode;
+ enum bnxt_link_speed_indices speed;
+ enum bnxt_media_type media;
+ u8 sig_mode;
+
+ if (link_info->phy_link_status != BNXT_LINK_LINK)
+ return BNXT_LINK_MODE_UNKNOWN;
+
+ media = bnxt_get_media(link_info);
+ if (BNXT_AUTO_MODE(link_info->auto_mode)) {
+ speed = bnxt_fw_speed_idx(link_info->link_speed);
+ sig_mode = link_info->active_fec_sig_mode &
+ PORT_PHY_QCFG_RESP_SIGNAL_MODE_MASK;
+ } else {
+ speed = bnxt_fw_speed_idx(link_info->req_link_speed);
+ sig_mode = link_info->req_signal_mode;
+ }
+ if (sig_mode >= BNXT_SIG_MODE_MAX)
+ return BNXT_LINK_MODE_UNKNOWN;
+
+ /* Note ETHTOOL_LINK_MODE_10baseT_Half_BIT == 0 is a legal Linux
+ * link mode, but since no such devices exist, the zeroes in the
+ * map can be conveniently used to represent unknown link modes.
+ */
+ link_mode = bnxt_link_modes[speed][sig_mode][media];
+ if (!link_mode)
+ return BNXT_LINK_MODE_UNKNOWN;
+
+ switch (link_mode) {
+ case ETHTOOL_LINK_MODE_100baseT_Full_BIT:
+ if (~link_info->duplex & BNXT_LINK_DUPLEX_FULL)
+ link_mode = ETHTOOL_LINK_MODE_100baseT_Half_BIT;
+ break;
+ case ETHTOOL_LINK_MODE_1000baseT_Full_BIT:
+ if (~link_info->duplex & BNXT_LINK_DUPLEX_FULL)
+ link_mode = ETHTOOL_LINK_MODE_1000baseT_Half_BIT;
+ break;
+ default:
+ break;
+ }
+
+ return link_mode;
+}
+
+static void bnxt_get_ethtool_modes(struct bnxt_link_info *link_info,
+ struct ethtool_link_ksettings *lk_ksettings)
+{
+ struct bnxt *bp = container_of(link_info, struct bnxt, link_info);
+
+ if (!(bp->phy_flags & BNXT_PHY_FL_NO_PAUSE)) {
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ lk_ksettings->link_modes.supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ lk_ksettings->link_modes.supported);
+ }
+
+ if (link_info->support_auto_speeds || link_info->support_pam4_auto_speeds)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ lk_ksettings->link_modes.supported);
+
+ if (~link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL)
+ return;
+
+ if (link_info->auto_pause_setting & BNXT_LINK_PAUSE_RX)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ lk_ksettings->link_modes.advertising);
+ if (hweight8(link_info->auto_pause_setting & BNXT_LINK_PAUSE_BOTH) == 1)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ lk_ksettings->link_modes.advertising);
+ if (link_info->lp_pause & BNXT_LINK_PAUSE_RX)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Pause_BIT,
+ lk_ksettings->link_modes.lp_advertising);
+ if (hweight8(link_info->lp_pause & BNXT_LINK_PAUSE_BOTH) == 1)
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,
+ lk_ksettings->link_modes.lp_advertising);
}
-#define BNXT_ETHTOOL_TO_FW_SPDS(fw_speeds, lk_ksettings, name) \
-{ \
- if (ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 100baseT_Full) || \
- ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 100baseT_Half)) \
- (fw_speeds) |= BNXT_LINK_SPEED_MSK_100MB; \
- if (ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 1000baseT_Full) || \
- ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 1000baseT_Half)) \
- (fw_speeds) |= BNXT_LINK_SPEED_MSK_1GB; \
- if (ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 10000baseT_Full)) \
- (fw_speeds) |= BNXT_LINK_SPEED_MSK_10GB; \
- if (ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 25000baseCR_Full)) \
- (fw_speeds) |= BNXT_LINK_SPEED_MSK_25GB; \
- if (ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 40000baseCR4_Full)) \
- (fw_speeds) |= BNXT_LINK_SPEED_MSK_40GB; \
- if (ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 50000baseCR2_Full)) \
- (fw_speeds) |= BNXT_LINK_SPEED_MSK_50GB; \
- if (ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 100000baseCR4_Full)) \
- (fw_speeds) |= BNXT_LINK_SPEED_MSK_100GB; \
+static const u16 bnxt_nrz_speed_masks[] = {
+ [BNXT_LINK_SPEED_100MB_IDX] = BNXT_LINK_SPEED_MSK_100MB,
+ [BNXT_LINK_SPEED_1GB_IDX] = BNXT_LINK_SPEED_MSK_1GB,
+ [BNXT_LINK_SPEED_10GB_IDX] = BNXT_LINK_SPEED_MSK_10GB,
+ [BNXT_LINK_SPEED_25GB_IDX] = BNXT_LINK_SPEED_MSK_25GB,
+ [BNXT_LINK_SPEED_40GB_IDX] = BNXT_LINK_SPEED_MSK_40GB,
+ [BNXT_LINK_SPEED_50GB_IDX] = BNXT_LINK_SPEED_MSK_50GB,
+ [BNXT_LINK_SPEED_100GB_IDX] = BNXT_LINK_SPEED_MSK_100GB,
+ [__BNXT_LINK_SPEED_END - 1] = 0 /* make any legal speed a valid index */
+};
+
+static const u16 bnxt_pam4_speed_masks[] = {
+ [BNXT_LINK_SPEED_50GB_IDX] = BNXT_LINK_PAM4_SPEED_MSK_50GB,
+ [BNXT_LINK_SPEED_100GB_IDX] = BNXT_LINK_PAM4_SPEED_MSK_100GB,
+ [BNXT_LINK_SPEED_200GB_IDX] = BNXT_LINK_PAM4_SPEED_MSK_200GB,
+};
+
+static enum bnxt_link_speed_indices
+bnxt_encoding_speed_idx(u8 sig_mode, u16 speed_msk)
+{
+ const u16 *speeds;
+ int idx, len;
+
+ switch (sig_mode) {
+ case BNXT_SIG_MODE_NRZ:
+ speeds = bnxt_nrz_speed_masks;
+ len = ARRAY_SIZE(bnxt_nrz_speed_masks);
+ break;
+ case BNXT_SIG_MODE_PAM4:
+ speeds = bnxt_pam4_speed_masks;
+ len = ARRAY_SIZE(bnxt_pam4_speed_masks);
+ break;
+ default:
+ return BNXT_LINK_SPEED_UNKNOWN;
+ }
+
+ for (idx = 0; idx < len; idx++) {
+ if (speeds[idx] == speed_msk)
+ return idx;
+ }
+
+ return BNXT_LINK_SPEED_UNKNOWN;
}
-#define BNXT_FW_TO_ETHTOOL_PAM4_SPDS(fw_speeds, lk_ksettings, name) \
-{ \
- if ((fw_speeds) & BNXT_LINK_PAM4_SPEED_MSK_50GB) \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- 50000baseCR_Full); \
- if ((fw_speeds) & BNXT_LINK_PAM4_SPEED_MSK_100GB) \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- 100000baseCR2_Full);\
- if ((fw_speeds) & BNXT_LINK_PAM4_SPEED_MSK_200GB) \
- ethtool_link_ksettings_add_link_mode(lk_ksettings, name,\
- 200000baseCR4_Full);\
+#define BNXT_FW_SPEED_MSK_BITS 16
+
+static void
+__bnxt_get_ethtool_speeds(unsigned long fw_mask, enum bnxt_media_type media,
+ u8 sig_mode, unsigned long *et_mask)
+{
+ enum ethtool_link_mode_bit_indices link_mode;
+ enum bnxt_link_speed_indices speed;
+ u8 bit;
+
+ for_each_set_bit(bit, &fw_mask, BNXT_FW_SPEED_MSK_BITS) {
+ speed = bnxt_encoding_speed_idx(sig_mode, 1 << bit);
+ if (!speed)
+ continue;
+
+ link_mode = bnxt_link_modes[speed][sig_mode][media];
+ if (!link_mode)
+ continue;
+
+ linkmode_set_bit(link_mode, et_mask);
+ }
}
-#define BNXT_ETHTOOL_TO_FW_PAM4_SPDS(fw_speeds, lk_ksettings, name) \
-{ \
- if (ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 50000baseCR_Full)) \
- (fw_speeds) |= BNXT_LINK_PAM4_SPEED_MSK_50GB; \
- if (ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 100000baseCR2_Full)) \
- (fw_speeds) |= BNXT_LINK_PAM4_SPEED_MSK_100GB; \
- if (ethtool_link_ksettings_test_link_mode(lk_ksettings, name, \
- 200000baseCR4_Full)) \
- (fw_speeds) |= BNXT_LINK_PAM4_SPEED_MSK_200GB; \
+static void
+bnxt_get_ethtool_speeds(unsigned long fw_mask, enum bnxt_media_type media,
+ u8 sig_mode, unsigned long *et_mask)
+{
+ if (media) {
+ __bnxt_get_ethtool_speeds(fw_mask, media, sig_mode, et_mask);
+ return;
+ }
+
+ /* list speeds for all media if unknown */
+ for (media = 1; media < __BNXT_MEDIA_END; media++)
+ __bnxt_get_ethtool_speeds(fw_mask, media, sig_mode, et_mask);
+}
+
+static void bnxt_update_speed(u32 *delta, bool installed_media, u16 *speeds,
+ u16 speed_msk, const unsigned long *et_mask,
+ enum ethtool_link_mode_bit_indices mode)
+{
+ bool mode_desired = linkmode_test_bit(mode, et_mask);
+
+ if (!mode)
+ return;
+
+ /* enabled speeds for installed media should override */
+ if (installed_media && mode_desired) {
+ *speeds |= speed_msk;
+ *delta |= speed_msk;
+ return;
+ }
+
+ /* many to one mapping, only allow one change per fw_speed bit */
+ if (!(*delta & speed_msk) && (mode_desired == !(*speeds & speed_msk))) {
+ *speeds ^= speed_msk;
+ *delta |= speed_msk;
+ }
+}
+
+static void bnxt_set_ethtool_speeds(struct bnxt_link_info *link_info,
+ const unsigned long *et_mask)
+{
+ enum bnxt_media_type media = bnxt_get_media(link_info);
+ u32 delta_pam4 = 0;
+ u32 delta_nrz = 0;
+ int i, m;
+
+ for (i = 1; i < __BNXT_LINK_SPEED_END; i++) {
+ /* accept any legal media from user */
+ for (m = 1; m < __BNXT_MEDIA_END; m++) {
+ bnxt_update_speed(&delta_nrz, m == media,
+ &link_info->advertising,
+ bnxt_nrz_speed_masks[i], et_mask,
+ bnxt_link_modes[i][BNXT_SIG_MODE_NRZ][m]);
+ bnxt_update_speed(&delta_pam4, m == media,
+ &link_info->advertising_pam4,
+ bnxt_pam4_speed_masks[i], et_mask,
+ bnxt_link_modes[i][BNXT_SIG_MODE_PAM4][m]);
+ }
+ }
}
static void bnxt_fw_to_ethtool_advertised_fec(struct bnxt_link_info *link_info,
@@ -1617,36 +1928,6 @@ static void bnxt_fw_to_ethtool_advertised_fec(struct bnxt_link_info *link_info,
lk_ksettings->link_modes.advertising);
}
-static void bnxt_fw_to_ethtool_advertised_spds(struct bnxt_link_info *link_info,
- struct ethtool_link_ksettings *lk_ksettings)
-{
- u16 fw_speeds = link_info->advertising;
- u8 fw_pause = 0;
-
- if (link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL)
- fw_pause = link_info->auto_pause_setting;
-
- BNXT_FW_TO_ETHTOOL_SPDS(fw_speeds, fw_pause, lk_ksettings, advertising);
- fw_speeds = link_info->advertising_pam4;
- BNXT_FW_TO_ETHTOOL_PAM4_SPDS(fw_speeds, lk_ksettings, advertising);
- bnxt_fw_to_ethtool_advertised_fec(link_info, lk_ksettings);
-}
-
-static void bnxt_fw_to_ethtool_lp_adv(struct bnxt_link_info *link_info,
- struct ethtool_link_ksettings *lk_ksettings)
-{
- u16 fw_speeds = link_info->lp_auto_link_speeds;
- u8 fw_pause = 0;
-
- if (link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL)
- fw_pause = link_info->lp_pause;
-
- BNXT_FW_TO_ETHTOOL_SPDS(fw_speeds, fw_pause, lk_ksettings,
- lp_advertising);
- fw_speeds = link_info->lp_auto_pam4_link_speeds;
- BNXT_FW_TO_ETHTOOL_PAM4_SPDS(fw_speeds, lk_ksettings, lp_advertising);
-}
-
static void bnxt_fw_to_ethtool_support_fec(struct bnxt_link_info *link_info,
struct ethtool_link_ksettings *lk_ksettings)
{
@@ -1668,30 +1949,6 @@ static void bnxt_fw_to_ethtool_support_fec(struct bnxt_link_info *link_info,
lk_ksettings->link_modes.supported);
}
-static void bnxt_fw_to_ethtool_support_spds(struct bnxt_link_info *link_info,
- struct ethtool_link_ksettings *lk_ksettings)
-{
- struct bnxt *bp = container_of(link_info, struct bnxt, link_info);
- u16 fw_speeds = link_info->support_speeds;
-
- BNXT_FW_TO_ETHTOOL_SPDS(fw_speeds, 0, lk_ksettings, supported);
- fw_speeds = link_info->support_pam4_speeds;
- BNXT_FW_TO_ETHTOOL_PAM4_SPDS(fw_speeds, lk_ksettings, supported);
-
- if (!(bp->phy_flags & BNXT_PHY_FL_NO_PAUSE)) {
- ethtool_link_ksettings_add_link_mode(lk_ksettings, supported,
- Pause);
- ethtool_link_ksettings_add_link_mode(lk_ksettings, supported,
- Asym_Pause);
- }
-
- if (link_info->support_auto_speeds ||
- link_info->support_pam4_auto_speeds)
- ethtool_link_ksettings_add_link_mode(lk_ksettings, supported,
- Autoneg);
- bnxt_fw_to_ethtool_support_fec(link_info, lk_ksettings);
-}
-
u32 bnxt_fw_to_ethtool_speed(u16 fw_link_speed)
{
switch (fw_link_speed) {
@@ -1720,60 +1977,95 @@ u32 bnxt_fw_to_ethtool_speed(u16 fw_link_speed)
}
}
+static void bnxt_get_default_speeds(struct ethtool_link_ksettings *lk_ksettings,
+ struct bnxt_link_info *link_info)
+{
+ struct ethtool_link_settings *base = &lk_ksettings->base;
+
+ if (link_info->link_state == BNXT_LINK_STATE_UP) {
+ base->speed = bnxt_fw_to_ethtool_speed(link_info->link_speed);
+ base->duplex = DUPLEX_HALF;
+ if (link_info->duplex & BNXT_LINK_DUPLEX_FULL)
+ base->duplex = DUPLEX_FULL;
+ } else if (!link_info->autoneg) {
+ base->speed = bnxt_fw_to_ethtool_speed(link_info->req_link_speed);
+ base->duplex = DUPLEX_HALF;
+ if (link_info->req_duplex == BNXT_LINK_DUPLEX_FULL)
+ base->duplex = DUPLEX_FULL;
+ }
+}
+
static int bnxt_get_link_ksettings(struct net_device *dev,
struct ethtool_link_ksettings *lk_ksettings)
{
- struct bnxt *bp = netdev_priv(dev);
- struct bnxt_link_info *link_info = &bp->link_info;
struct ethtool_link_settings *base = &lk_ksettings->base;
- u32 ethtool_speed;
+ enum ethtool_link_mode_bit_indices link_mode;
+ struct bnxt *bp = netdev_priv(dev);
+ struct bnxt_link_info *link_info;
+ enum bnxt_media_type media;
+ ethtool_link_ksettings_zero_link_mode(lk_ksettings, lp_advertising);
+ ethtool_link_ksettings_zero_link_mode(lk_ksettings, advertising);
ethtool_link_ksettings_zero_link_mode(lk_ksettings, supported);
+ base->duplex = DUPLEX_UNKNOWN;
+ base->speed = SPEED_UNKNOWN;
+ link_info = &bp->link_info;
+
mutex_lock(&bp->link_lock);
- bnxt_fw_to_ethtool_support_spds(link_info, lk_ksettings);
+ bnxt_get_ethtool_modes(link_info, lk_ksettings);
+ media = bnxt_get_media(link_info);
+ bnxt_get_ethtool_speeds(link_info->support_speeds,
+ media, BNXT_SIG_MODE_NRZ,
+ lk_ksettings->link_modes.supported);
+ bnxt_get_ethtool_speeds(link_info->support_pam4_speeds,
+ media, BNXT_SIG_MODE_PAM4,
+ lk_ksettings->link_modes.supported);
+ bnxt_fw_to_ethtool_support_fec(link_info, lk_ksettings);
+ link_mode = bnxt_get_link_mode(link_info);
+ if (link_mode != BNXT_LINK_MODE_UNKNOWN)
+ ethtool_params_from_link_mode(lk_ksettings, link_mode);
+ else
+ bnxt_get_default_speeds(lk_ksettings, link_info);
- ethtool_link_ksettings_zero_link_mode(lk_ksettings, advertising);
if (link_info->autoneg) {
- bnxt_fw_to_ethtool_advertised_spds(link_info, lk_ksettings);
- ethtool_link_ksettings_add_link_mode(lk_ksettings,
- advertising, Autoneg);
+ bnxt_fw_to_ethtool_advertised_fec(link_info, lk_ksettings);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
+ lk_ksettings->link_modes.advertising);
base->autoneg = AUTONEG_ENABLE;
- base->duplex = DUPLEX_UNKNOWN;
+ bnxt_get_ethtool_speeds(link_info->advertising,
+ media, BNXT_SIG_MODE_NRZ,
+ lk_ksettings->link_modes.advertising);
+ bnxt_get_ethtool_speeds(link_info->advertising_pam4,
+ media, BNXT_SIG_MODE_PAM4,
+ lk_ksettings->link_modes.advertising);
if (link_info->phy_link_status == BNXT_LINK_LINK) {
- bnxt_fw_to_ethtool_lp_adv(link_info, lk_ksettings);
- if (link_info->duplex & BNXT_LINK_DUPLEX_FULL)
- base->duplex = DUPLEX_FULL;
- else
- base->duplex = DUPLEX_HALF;
+ bnxt_get_ethtool_speeds(link_info->lp_auto_link_speeds,
+ media, BNXT_SIG_MODE_NRZ,
+ lk_ksettings->link_modes.lp_advertising);
+ bnxt_get_ethtool_speeds(link_info->lp_auto_pam4_link_speeds,
+ media, BNXT_SIG_MODE_PAM4,
+ lk_ksettings->link_modes.lp_advertising);
}
- ethtool_speed = bnxt_fw_to_ethtool_speed(link_info->link_speed);
} else {
base->autoneg = AUTONEG_DISABLE;
- ethtool_speed =
- bnxt_fw_to_ethtool_speed(link_info->req_link_speed);
- base->duplex = DUPLEX_HALF;
- if (link_info->req_duplex == BNXT_LINK_DUPLEX_FULL)
- base->duplex = DUPLEX_FULL;
}
- base->speed = ethtool_speed;
base->port = PORT_NONE;
if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP) {
base->port = PORT_TP;
- ethtool_link_ksettings_add_link_mode(lk_ksettings, supported,
- TP);
- ethtool_link_ksettings_add_link_mode(lk_ksettings, advertising,
- TP);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
+ lk_ksettings->link_modes.supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
+ lk_ksettings->link_modes.advertising);
} else {
- ethtool_link_ksettings_add_link_mode(lk_ksettings, supported,
- FIBRE);
- ethtool_link_ksettings_add_link_mode(lk_ksettings, advertising,
- FIBRE);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+ lk_ksettings->link_modes.supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+ lk_ksettings->link_modes.advertising);
if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_DAC)
base->port = PORT_DA;
- else if (link_info->media_type ==
- PORT_PHY_QCFG_RESP_MEDIA_TYPE_FIBRE)
+ else
base->port = PORT_FIBRE;
}
base->phy_address = link_info->phy_addr;
@@ -1782,13 +2074,15 @@ static int bnxt_get_link_ksettings(struct net_device *dev,
return 0;
}
-static int bnxt_force_link_speed(struct net_device *dev, u32 ethtool_speed)
+static int
+bnxt_force_link_speed(struct net_device *dev, u32 ethtool_speed, u32 lanes)
{
struct bnxt *bp = netdev_priv(dev);
struct bnxt_link_info *link_info = &bp->link_info;
u16 support_pam4_spds = link_info->support_pam4_speeds;
u16 support_spds = link_info->support_speeds;
u8 sig_mode = BNXT_SIG_MODE_NRZ;
+ u32 lanes_needed = 1;
u16 fw_speed = 0;
switch (ethtool_speed) {
@@ -1809,37 +2103,46 @@ static int bnxt_force_link_speed(struct net_device *dev, u32 ethtool_speed)
fw_speed = PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_10GB;
break;
case SPEED_20000:
- if (support_spds & BNXT_LINK_SPEED_MSK_20GB)
+ if (support_spds & BNXT_LINK_SPEED_MSK_20GB) {
fw_speed = PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_20GB;
+ lanes_needed = 2;
+ }
break;
case SPEED_25000:
if (support_spds & BNXT_LINK_SPEED_MSK_25GB)
fw_speed = PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_25GB;
break;
case SPEED_40000:
- if (support_spds & BNXT_LINK_SPEED_MSK_40GB)
+ if (support_spds & BNXT_LINK_SPEED_MSK_40GB) {
fw_speed = PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_40GB;
+ lanes_needed = 4;
+ }
break;
case SPEED_50000:
- if (support_spds & BNXT_LINK_SPEED_MSK_50GB) {
+ if ((support_spds & BNXT_LINK_SPEED_MSK_50GB) && lanes != 1) {
fw_speed = PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_50GB;
+ lanes_needed = 2;
} else if (support_pam4_spds & BNXT_LINK_PAM4_SPEED_MSK_50GB) {
fw_speed = PORT_PHY_CFG_REQ_FORCE_PAM4_LINK_SPEED_50GB;
sig_mode = BNXT_SIG_MODE_PAM4;
}
break;
case SPEED_100000:
- if (support_spds & BNXT_LINK_SPEED_MSK_100GB) {
+ if ((support_spds & BNXT_LINK_SPEED_MSK_100GB) &&
+ lanes != 2 && lanes != 1) {
fw_speed = PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_100GB;
+ lanes_needed = 4;
} else if (support_pam4_spds & BNXT_LINK_PAM4_SPEED_MSK_100GB) {
fw_speed = PORT_PHY_CFG_REQ_FORCE_PAM4_LINK_SPEED_100GB;
sig_mode = BNXT_SIG_MODE_PAM4;
+ lanes_needed = 2;
}
break;
case SPEED_200000:
if (support_pam4_spds & BNXT_LINK_PAM4_SPEED_MSK_200GB) {
fw_speed = PORT_PHY_CFG_REQ_FORCE_PAM4_LINK_SPEED_200GB;
sig_mode = BNXT_SIG_MODE_PAM4;
+ lanes_needed = 4;
}
break;
}
@@ -1849,6 +2152,11 @@ static int bnxt_force_link_speed(struct net_device *dev, u32 ethtool_speed)
return -EINVAL;
}
+ if (lanes && lanes != lanes_needed) {
+ netdev_err(dev, "unsupported number of lanes for speed\n");
+ return -EINVAL;
+ }
+
if (link_info->req_link_speed == fw_speed &&
link_info->req_signal_mode == sig_mode &&
link_info->autoneg == 0)
@@ -1893,7 +2201,7 @@ static int bnxt_set_link_ksettings(struct net_device *dev,
struct bnxt_link_info *link_info = &bp->link_info;
const struct ethtool_link_settings *base = &lk_ksettings->base;
bool set_pause = false;
- u32 speed;
+ u32 speed, lanes = 0;
int rc = 0;
if (!BNXT_PHY_CFG_ABLE(bp))
@@ -1901,12 +2209,8 @@ static int bnxt_set_link_ksettings(struct net_device *dev,
mutex_lock(&bp->link_lock);
if (base->autoneg == AUTONEG_ENABLE) {
- link_info->advertising = 0;
- link_info->advertising_pam4 = 0;
- BNXT_ETHTOOL_TO_FW_SPDS(link_info->advertising, lk_ksettings,
- advertising);
- BNXT_ETHTOOL_TO_FW_PAM4_SPDS(link_info->advertising_pam4,
- lk_ksettings, advertising);
+ bnxt_set_ethtool_speeds(link_info,
+ lk_ksettings->link_modes.advertising);
link_info->autoneg |= BNXT_AUTONEG_SPEED;
if (!link_info->advertising && !link_info->advertising_pam4) {
link_info->advertising = link_info->support_auto_speeds;
@@ -1934,7 +2238,8 @@ static int bnxt_set_link_ksettings(struct net_device *dev,
goto set_setting_exit;
}
speed = base->speed;
- rc = bnxt_force_link_speed(dev, speed);
+ lanes = lk_ksettings->lanes;
+ rc = bnxt_force_link_speed(dev, speed, lanes);
if (rc) {
if (rc == -EALREADY)
rc = 0;
@@ -4140,6 +4445,7 @@ void bnxt_ethtool_free(struct bnxt *bp)
}
const struct ethtool_ops bnxt_ethtool_ops = {
+ .cap_link_lanes_supported = 1,
.supported_coalesce_params = ETHTOOL_COALESCE_USECS |
ETHTOOL_COALESCE_MAX_FRAMES |
ETHTOOL_COALESCE_USECS_IRQ |
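
The ethtool conversion above replaces the open-coded BNXT_FW_TO_ETHTOOL_SPDS-style macros with a single three-dimensional table, bnxt_link_modes[speed][sig_mode][media], where a zero entry stands for "no such mode" (safe because, per the in-code comment, the real bit-0 link mode is never produced from this table). A minimal, self-contained sketch of that table shape and lookup convention; every identifier below is invented for illustration:

/* Sketch only, not driver code: a sparse 3-D lookup table built with
 * designated initializers, where an untouched (zero) slot means the
 * speed/signalling/media combination has no link mode.
 */
#include <stdio.h>

enum ex_speed { EX_SPEED_UNKNOWN, EX_SPEED_50G, __EX_SPEED_END };
enum ex_sig   { EX_SIG_NRZ, EX_SIG_PAM4, __EX_SIG_END };
enum ex_media { EX_MEDIA_UNKNOWN, EX_MEDIA_CR, EX_MEDIA_SR, __EX_MEDIA_END };

static const int ex_modes[__EX_SPEED_END][__EX_SIG_END][__EX_MEDIA_END] = {
	[EX_SPEED_50G] = {
		[EX_SIG_NRZ]  = { [EX_MEDIA_CR] = 101, [EX_MEDIA_SR] = 102 },
		[EX_SIG_PAM4] = { [EX_MEDIA_CR] = 201, [EX_MEDIA_SR] = 202 },
	},
};

int main(void)
{
	int mode = ex_modes[EX_SPEED_50G][EX_SIG_PAM4][EX_MEDIA_CR];

	if (!mode)		/* zero == unknown/unsupported combination */
		puts("unknown link mode");
	else
		printf("link mode bit %d\n", mode);
	return 0;
}

Designated initializers keep the table sparse: only valid speed/signalling/media combinations need an entry, and everything else defaults to zero, which both the link-mode lookup and the speed-advertising paths treat as "unknown".
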
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
index 3ae8e8af8ab3..d5fad5a3cdd1 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
@@ -2,7 +2,7 @@
*
* Copyright (c) 2014-2016 Broadcom Corporation
* Copyright (c) 2014-2018 Broadcom Limited
- * Copyright (c) 2018-2022 Broadcom Inc.
+ * Copyright (c) 2018-2023 Broadcom Inc.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -191,6 +191,11 @@ struct cmd_nums {
#define HWRM_QUEUE_VLANPRI2PRI_CFG 0x85UL
#define HWRM_QUEUE_GLOBAL_CFG 0x86UL
#define HWRM_QUEUE_GLOBAL_QCFG 0x87UL
+ #define HWRM_QUEUE_ADPTV_QOS_RX_FEATURE_QCFG 0x88UL
+ #define HWRM_QUEUE_ADPTV_QOS_RX_FEATURE_CFG 0x89UL
+ #define HWRM_QUEUE_ADPTV_QOS_TX_FEATURE_QCFG 0x8aUL
+ #define HWRM_QUEUE_ADPTV_QOS_TX_FEATURE_CFG 0x8bUL
+ #define HWRM_QUEUE_QCAPS 0x8cUL
#define HWRM_CFA_L2_FILTER_ALLOC 0x90UL
#define HWRM_CFA_L2_FILTER_FREE 0x91UL
#define HWRM_CFA_L2_FILTER_CFG 0x92UL
@@ -315,6 +320,7 @@ struct cmd_nums {
#define HWRM_CFA_LAG_GROUP_MEMBER_UNRGTR 0x127UL
#define HWRM_CFA_TLS_FILTER_ALLOC 0x128UL
#define HWRM_CFA_TLS_FILTER_FREE 0x129UL
+ #define HWRM_CFA_RELEASE_AFM_FUNC 0x12aUL
#define HWRM_ENGINE_CKV_STATUS 0x12eUL
#define HWRM_ENGINE_CKV_CKEK_ADD 0x12fUL
#define HWRM_ENGINE_CKV_CKEK_DELETE 0x130UL
@@ -383,6 +389,9 @@ struct cmd_nums {
#define HWRM_FUNC_DBR_RECOVERY_COMPLETED 0x1aaUL
#define HWRM_FUNC_SYNCE_CFG 0x1abUL
#define HWRM_FUNC_SYNCE_QCFG 0x1acUL
+ #define HWRM_FUNC_KEY_CTX_FREE 0x1adUL
+ #define HWRM_FUNC_LAG_MODE_CFG 0x1aeUL
+ #define HWRM_FUNC_LAG_MODE_QCFG 0x1afUL
#define HWRM_SELFTEST_QLIST 0x200UL
#define HWRM_SELFTEST_EXEC 0x201UL
#define HWRM_SELFTEST_IRQ 0x202UL
@@ -408,10 +417,10 @@ struct cmd_nums {
#define HWRM_MFG_SELFTEST_QLIST 0x216UL
#define HWRM_MFG_SELFTEST_EXEC 0x217UL
#define HWRM_STAT_GENERIC_QSTATS 0x218UL
+ #define HWRM_MFG_PRVSN_EXPORT_CERT 0x219UL
#define HWRM_TF 0x2bcUL
#define HWRM_TF_VERSION_GET 0x2bdUL
#define HWRM_TF_SESSION_OPEN 0x2c6UL
- #define HWRM_TF_SESSION_ATTACH 0x2c7UL
#define HWRM_TF_SESSION_REGISTER 0x2c8UL
#define HWRM_TF_SESSION_UNREGISTER 0x2c9UL
#define HWRM_TF_SESSION_CLOSE 0x2caUL
@@ -426,14 +435,6 @@ struct cmd_nums {
#define HWRM_TF_TBL_TYPE_GET 0x2daUL
#define HWRM_TF_TBL_TYPE_SET 0x2dbUL
#define HWRM_TF_TBL_TYPE_BULK_GET 0x2dcUL
- #define HWRM_TF_CTXT_MEM_ALLOC 0x2e2UL
- #define HWRM_TF_CTXT_MEM_FREE 0x2e3UL
- #define HWRM_TF_CTXT_MEM_RGTR 0x2e4UL
- #define HWRM_TF_CTXT_MEM_UNRGTR 0x2e5UL
- #define HWRM_TF_EXT_EM_QCAPS 0x2e6UL
- #define HWRM_TF_EXT_EM_OP 0x2e7UL
- #define HWRM_TF_EXT_EM_CFG 0x2e8UL
- #define HWRM_TF_EXT_EM_QCFG 0x2e9UL
#define HWRM_TF_EM_INSERT 0x2eaUL
#define HWRM_TF_EM_DELETE 0x2ebUL
#define HWRM_TF_EM_HASH_INSERT 0x2ecUL
@@ -465,6 +466,14 @@ struct cmd_nums {
#define HWRM_TFC_IDX_TBL_GET 0x390UL
#define HWRM_TFC_IDX_TBL_FREE 0x391UL
#define HWRM_TFC_GLOBAL_ID_ALLOC 0x392UL
+ #define HWRM_TFC_TCAM_SET 0x393UL
+ #define HWRM_TFC_TCAM_GET 0x394UL
+ #define HWRM_TFC_TCAM_ALLOC 0x395UL
+ #define HWRM_TFC_TCAM_ALLOC_SET 0x396UL
+ #define HWRM_TFC_TCAM_FREE 0x397UL
+ #define HWRM_TFC_IF_TBL_SET 0x398UL
+ #define HWRM_TFC_IF_TBL_GET 0x399UL
+ #define HWRM_TFC_TBL_SCOPE_CONFIG_GET 0x39aUL
#define HWRM_SV 0x400UL
#define HWRM_DBG_READ_DIRECT 0xff10UL
#define HWRM_DBG_READ_INDIRECT 0xff11UL
@@ -494,6 +503,8 @@ struct cmd_nums {
#define HWRM_DBG_USEQ_RUN 0xff29UL
#define HWRM_DBG_USEQ_DELIVERY_REQ 0xff2aUL
#define HWRM_DBG_USEQ_RESP_HDR 0xff2bUL
+ #define HWRM_NVM_GET_VPD_FIELD_INFO 0xffeaUL
+ #define HWRM_NVM_SET_VPD_FIELD_INFO 0xffebUL
#define HWRM_NVM_DEFRAG 0xffecUL
#define HWRM_NVM_REQ_ARBITRATION 0xffedUL
#define HWRM_NVM_FACTORY_DEFAULTS 0xffeeUL
@@ -540,6 +551,7 @@ struct ret_codes {
#define HWRM_ERR_CODE_BUSY 0x10UL
#define HWRM_ERR_CODE_RESOURCE_LOCKED 0x11UL
#define HWRM_ERR_CODE_PF_UNAVAILABLE 0x12UL
+ #define HWRM_ERR_CODE_ENTITY_NOT_PRESENT 0x13UL
#define HWRM_ERR_CODE_TLV_ENCAPSULATED_RESPONSE 0x8000UL
#define HWRM_ERR_CODE_UNKNOWN_ERR 0xfffeUL
#define HWRM_ERR_CODE_CMD_NOT_SUPPORTED 0xffffUL
@@ -571,8 +583,8 @@ struct hwrm_err_output {
#define HWRM_VERSION_MAJOR 1
#define HWRM_VERSION_MINOR 10
#define HWRM_VERSION_UPDATE 2
-#define HWRM_VERSION_RSVD 118
-#define HWRM_VERSION_STR "1.10.2.118"
+#define HWRM_VERSION_RSVD 171
+#define HWRM_VERSION_STR "1.10.2.171"
/* hwrm_ver_get_input (size:192b/24B) */
struct hwrm_ver_get_input {
@@ -761,51 +773,53 @@ struct hwrm_async_event_cmpl {
#define ASYNC_EVENT_CMPL_TYPE_HWRM_ASYNC_EVENT 0x2eUL
#define ASYNC_EVENT_CMPL_TYPE_LAST ASYNC_EVENT_CMPL_TYPE_HWRM_ASYNC_EVENT
__le16 event_id;
- #define ASYNC_EVENT_CMPL_EVENT_ID_LINK_STATUS_CHANGE 0x0UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_LINK_MTU_CHANGE 0x1UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CHANGE 0x2UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_DCB_CONFIG_CHANGE 0x3UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_PORT_CONN_NOT_ALLOWED 0x4UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CFG_NOT_ALLOWED 0x5UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CFG_CHANGE 0x6UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_PORT_PHY_CFG_CHANGE 0x7UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_RESET_NOTIFY 0x8UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_ERROR_RECOVERY 0x9UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_RING_MONITOR_MSG 0xaUL
- #define ASYNC_EVENT_CMPL_EVENT_ID_FUNC_DRVR_UNLOAD 0x10UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_FUNC_DRVR_LOAD 0x11UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_FUNC_FLR_PROC_CMPLT 0x12UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_PF_DRVR_UNLOAD 0x20UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_PF_DRVR_LOAD 0x21UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_VF_FLR 0x30UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_VF_MAC_ADDR_CHANGE 0x31UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_PF_VF_COMM_STATUS_CHANGE 0x32UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_VF_CFG_CHANGE 0x33UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_LLFC_PFC_CHANGE 0x34UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_DEFAULT_VNIC_CHANGE 0x35UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_HW_FLOW_AGED 0x36UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_DEBUG_NOTIFICATION 0x37UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_EEM_CACHE_FLUSH_REQ 0x38UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_EEM_CACHE_FLUSH_DONE 0x39UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_TCP_FLAG_ACTION_CHANGE 0x3aUL
- #define ASYNC_EVENT_CMPL_EVENT_ID_EEM_FLOW_ACTIVE 0x3bUL
- #define ASYNC_EVENT_CMPL_EVENT_ID_EEM_CFG_CHANGE 0x3cUL
- #define ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_DEFAULT_VNIC_CHANGE 0x3dUL
- #define ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_LINK_STATUS_CHANGE 0x3eUL
- #define ASYNC_EVENT_CMPL_EVENT_ID_QUIESCE_DONE 0x3fUL
- #define ASYNC_EVENT_CMPL_EVENT_ID_DEFERRED_RESPONSE 0x40UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE 0x41UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_ECHO_REQUEST 0x42UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_PHC_UPDATE 0x43UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_PPS_TIMESTAMP 0x44UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_ERROR_REPORT 0x45UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_DOORBELL_PACING_THRESHOLD 0x46UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_RSS_CHANGE 0x47UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_DOORBELL_PACING_NQ_UPDATE 0x48UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_MAX_RGTR_EVENT_ID 0x49UL
- #define ASYNC_EVENT_CMPL_EVENT_ID_FW_TRACE_MSG 0xfeUL
- #define ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR 0xffUL
- #define ASYNC_EVENT_CMPL_EVENT_ID_LAST ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR
+ #define ASYNC_EVENT_CMPL_EVENT_ID_LINK_STATUS_CHANGE 0x0UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_LINK_MTU_CHANGE 0x1UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CHANGE 0x2UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_DCB_CONFIG_CHANGE 0x3UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_PORT_CONN_NOT_ALLOWED 0x4UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CFG_NOT_ALLOWED 0x5UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CFG_CHANGE 0x6UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_PORT_PHY_CFG_CHANGE 0x7UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_RESET_NOTIFY 0x8UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_ERROR_RECOVERY 0x9UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_RING_MONITOR_MSG 0xaUL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_FUNC_DRVR_UNLOAD 0x10UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_FUNC_DRVR_LOAD 0x11UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_FUNC_FLR_PROC_CMPLT 0x12UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_PF_DRVR_UNLOAD 0x20UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_PF_DRVR_LOAD 0x21UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_VF_FLR 0x30UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_VF_MAC_ADDR_CHANGE 0x31UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_PF_VF_COMM_STATUS_CHANGE 0x32UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_VF_CFG_CHANGE 0x33UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_LLFC_PFC_CHANGE 0x34UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_DEFAULT_VNIC_CHANGE 0x35UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_HW_FLOW_AGED 0x36UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_DEBUG_NOTIFICATION 0x37UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_EEM_CACHE_FLUSH_REQ 0x38UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_EEM_CACHE_FLUSH_DONE 0x39UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_TCP_FLAG_ACTION_CHANGE 0x3aUL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_EEM_FLOW_ACTIVE 0x3bUL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_EEM_CFG_CHANGE 0x3cUL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_DEFAULT_VNIC_CHANGE 0x3dUL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_TFLIB_LINK_STATUS_CHANGE 0x3eUL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_QUIESCE_DONE 0x3fUL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_DEFERRED_RESPONSE 0x40UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_PFC_WATCHDOG_CFG_CHANGE 0x41UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_ECHO_REQUEST 0x42UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_PHC_UPDATE 0x43UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_PPS_TIMESTAMP 0x44UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_ERROR_REPORT 0x45UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_DOORBELL_PACING_THRESHOLD 0x46UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_RSS_CHANGE 0x47UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_DOORBELL_PACING_NQ_UPDATE 0x48UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_HW_DOORBELL_RECOVERY_READ_ERROR 0x49UL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_CTX_ERROR 0x4aUL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_MAX_RGTR_EVENT_ID 0x4bUL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_FW_TRACE_MSG 0xfeUL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR 0xffUL
+ #define ASYNC_EVENT_CMPL_EVENT_ID_LAST ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR
__le32 event_data2;
u8 opaque_v;
#define ASYNC_EVENT_CMPL_V 0x1UL
@@ -1011,6 +1025,7 @@ struct hwrm_async_event_cmpl_vf_cfg_change {
#define ASYNC_EVENT_CMPL_VF_CFG_CHANGE_EVENT_DATA1_DFLT_MAC_ADDR_CHANGE 0x4UL
#define ASYNC_EVENT_CMPL_VF_CFG_CHANGE_EVENT_DATA1_DFLT_VLAN_CHANGE 0x8UL
#define ASYNC_EVENT_CMPL_VF_CFG_CHANGE_EVENT_DATA1_TRUSTED_VF_CFG_CHANGE 0x10UL
+ #define ASYNC_EVENT_CMPL_VF_CFG_CHANGE_EVENT_DATA1_TF_OWNERSHIP_RELEASE 0x20UL
};
/* hwrm_async_event_cmpl_default_vnic_change (size:128b/16B) */
@@ -1402,6 +1417,45 @@ struct hwrm_async_event_cmpl_error_report_doorbell_drop_threshold {
#define ASYNC_EVENT_CMPL_ERROR_REPORT_DOORBELL_DROP_THRESHOLD_EVENT_DATA1_EPOCH_SFT 8
};
+/* hwrm_async_event_cmpl_error_report_thermal (size:128b/16B) */
+struct hwrm_async_event_cmpl_error_report_thermal {
+ __le16 type;
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_TYPE_MASK 0x3fUL
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_TYPE_SFT 0
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_TYPE_HWRM_ASYNC_EVENT 0x2eUL
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_TYPE_LAST ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_TYPE_HWRM_ASYNC_EVENT
+ __le16 event_id;
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_ID_ERROR_REPORT 0x45UL
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_ID_LAST ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_ID_ERROR_REPORT
+ __le32 event_data2;
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA2_CURRENT_TEMP_MASK 0xffUL
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA2_CURRENT_TEMP_SFT 0
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA2_THRESHOLD_TEMP_MASK 0xff00UL
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA2_THRESHOLD_TEMP_SFT 8
+ u8 opaque_v;
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_V 0x1UL
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_OPAQUE_MASK 0xfeUL
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_OPAQUE_SFT 1
+ u8 timestamp_lo;
+ __le16 timestamp_hi;
+ __le32 event_data1;
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_ERROR_TYPE_MASK 0xffUL
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_ERROR_TYPE_SFT 0
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_ERROR_TYPE_THERMAL_EVENT 0x5UL
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_ERROR_TYPE_LAST ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_ERROR_TYPE_THERMAL_EVENT
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_MASK 0x700UL
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_SFT 8
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_WARN (0x0UL << 8)
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_CRITICAL (0x1UL << 8)
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_FATAL (0x2UL << 8)
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_SHUTDOWN (0x3UL << 8)
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_LAST ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_SHUTDOWN
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_TRANSITION_DIR 0x800UL
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_TRANSITION_DIR_DECREASING (0x0UL << 11)
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_TRANSITION_DIR_INCREASING (0x1UL << 11)
+ #define ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_TRANSITION_DIR_LAST ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_TRANSITION_DIR_INCREASING
+};
+
/* hwrm_func_reset_input (size:192b/24B) */
struct hwrm_func_reset_input {
__le16 req_type;
@@ -1502,7 +1556,7 @@ struct hwrm_func_vf_free_output {
u8 valid;
};
-/* hwrm_func_vf_cfg_input (size:448b/56B) */
+/* hwrm_func_vf_cfg_input (size:576b/72B) */
struct hwrm_func_vf_cfg_input {
__le16 req_type;
__le16 cmpl_ring;
@@ -1510,20 +1564,22 @@ struct hwrm_func_vf_cfg_input {
__le16 target_id;
__le64 resp_addr;
__le32 enables;
- #define FUNC_VF_CFG_REQ_ENABLES_MTU 0x1UL
- #define FUNC_VF_CFG_REQ_ENABLES_GUEST_VLAN 0x2UL
- #define FUNC_VF_CFG_REQ_ENABLES_ASYNC_EVENT_CR 0x4UL
- #define FUNC_VF_CFG_REQ_ENABLES_DFLT_MAC_ADDR 0x8UL
- #define FUNC_VF_CFG_REQ_ENABLES_NUM_RSSCOS_CTXS 0x10UL
- #define FUNC_VF_CFG_REQ_ENABLES_NUM_CMPL_RINGS 0x20UL
- #define FUNC_VF_CFG_REQ_ENABLES_NUM_TX_RINGS 0x40UL
- #define FUNC_VF_CFG_REQ_ENABLES_NUM_RX_RINGS 0x80UL
- #define FUNC_VF_CFG_REQ_ENABLES_NUM_L2_CTXS 0x100UL
- #define FUNC_VF_CFG_REQ_ENABLES_NUM_VNICS 0x200UL
- #define FUNC_VF_CFG_REQ_ENABLES_NUM_STAT_CTXS 0x400UL
- #define FUNC_VF_CFG_REQ_ENABLES_NUM_HW_RING_GRPS 0x800UL
- #define FUNC_VF_CFG_REQ_ENABLES_NUM_TX_KEY_CTXS 0x1000UL
- #define FUNC_VF_CFG_REQ_ENABLES_NUM_RX_KEY_CTXS 0x2000UL
+ #define FUNC_VF_CFG_REQ_ENABLES_MTU 0x1UL
+ #define FUNC_VF_CFG_REQ_ENABLES_GUEST_VLAN 0x2UL
+ #define FUNC_VF_CFG_REQ_ENABLES_ASYNC_EVENT_CR 0x4UL
+ #define FUNC_VF_CFG_REQ_ENABLES_DFLT_MAC_ADDR 0x8UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_RSSCOS_CTXS 0x10UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_CMPL_RINGS 0x20UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_TX_RINGS 0x40UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_RX_RINGS 0x80UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_L2_CTXS 0x100UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_VNICS 0x200UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_STAT_CTXS 0x400UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_HW_RING_GRPS 0x800UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_KTLS_TX_KEY_CTXS 0x1000UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_KTLS_RX_KEY_CTXS 0x2000UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_QUIC_TX_KEY_CTXS 0x4000UL
+ #define FUNC_VF_CFG_REQ_ENABLES_NUM_QUIC_RX_KEY_CTXS 0x8000UL
__le16 mtu;
__le16 guest_vlan;
__le16 async_event_cr;
@@ -1547,8 +1603,12 @@ struct hwrm_func_vf_cfg_input {
__le16 num_vnics;
__le16 num_stat_ctxs;
__le16 num_hw_ring_grps;
- __le16 num_tx_key_ctxs;
- __le16 num_rx_key_ctxs;
+ __le32 num_ktls_tx_key_ctxs;
+ __le32 num_ktls_rx_key_ctxs;
+ __le16 num_msix;
+ u8 unused[2];
+ __le32 num_quic_tx_key_ctxs;
+ __le32 num_quic_rx_key_ctxs;
};
/* hwrm_func_vf_cfg_output (size:128b/16B) */
@@ -1572,7 +1632,7 @@ struct hwrm_func_qcaps_input {
u8 unused_0[6];
};
-/* hwrm_func_qcaps_output (size:768b/96B) */
+/* hwrm_func_qcaps_output (size:896b/112B) */
struct hwrm_func_qcaps_output {
__le16 error_code;
__le16 req_type;
@@ -1686,6 +1746,11 @@ struct hwrm_func_qcaps_output {
#define FUNC_QCAPS_RESP_FLAGS_EXT2_SYNCE_SUPPORTED 0x80UL
#define FUNC_QCAPS_RESP_FLAGS_EXT2_DBR_PACING_V0_SUPPORTED 0x100UL
#define FUNC_QCAPS_RESP_FLAGS_EXT2_TX_PKT_TS_CMPL_SUPPORTED 0x200UL
+ #define FUNC_QCAPS_RESP_FLAGS_EXT2_HW_LAG_SUPPORTED 0x400UL
+ #define FUNC_QCAPS_RESP_FLAGS_EXT2_ON_CHIP_CTX_SUPPORTED 0x800UL
+ #define FUNC_QCAPS_RESP_FLAGS_EXT2_STEERING_TAG_SUPPORTED 0x1000UL
+ #define FUNC_QCAPS_RESP_FLAGS_EXT2_ENHANCED_VF_SCALE_SUPPORTED 0x2000UL
+ #define FUNC_QCAPS_RESP_FLAGS_EXT2_KEY_XID_PARTITION_SUPPORTED 0x4000UL
__le16 tunnel_disable_flag;
#define FUNC_QCAPS_RESP_TUNNEL_DISABLE_FLAG_DISABLE_VXLAN 0x1UL
#define FUNC_QCAPS_RESP_TUNNEL_DISABLE_FLAG_DISABLE_NGE 0x2UL
@@ -1695,7 +1760,15 @@ struct hwrm_func_qcaps_output {
#define FUNC_QCAPS_RESP_TUNNEL_DISABLE_FLAG_DISABLE_IPINIP 0x20UL
#define FUNC_QCAPS_RESP_TUNNEL_DISABLE_FLAG_DISABLE_MPLS 0x40UL
#define FUNC_QCAPS_RESP_TUNNEL_DISABLE_FLAG_DISABLE_PPPOE 0x80UL
+ u8 key_xid_partition_cap;
+ #define FUNC_QCAPS_RESP_KEY_XID_PARTITION_CAP_TKC 0x1UL
+ #define FUNC_QCAPS_RESP_KEY_XID_PARTITION_CAP_RKC 0x2UL
+ #define FUNC_QCAPS_RESP_KEY_XID_PARTITION_CAP_QUIC_TKC 0x4UL
+ #define FUNC_QCAPS_RESP_KEY_XID_PARTITION_CAP_QUIC_RKC 0x8UL
u8 unused_1;
+ u8 device_serial_number[8];
+ __le16 ctxs_per_partition;
+ u8 unused_2[5];
u8 valid;
};
@@ -1710,7 +1783,7 @@ struct hwrm_func_qcfg_input {
u8 unused_0[6];
};
-/* hwrm_func_qcfg_output (size:896b/112B) */
+/* hwrm_func_qcfg_output (size:1024b/128B) */
struct hwrm_func_qcfg_output {
__le16 error_code;
__le16 req_type;
@@ -1870,19 +1943,24 @@ struct hwrm_func_qcfg_output {
#define FUNC_QCFG_RESP_PARTITION_MAX_BW_BW_VALUE_UNIT_PERCENT1_100 (0x1UL << 29)
#define FUNC_QCFG_RESP_PARTITION_MAX_BW_BW_VALUE_UNIT_LAST FUNC_QCFG_RESP_PARTITION_MAX_BW_BW_VALUE_UNIT_PERCENT1_100
__le16 host_mtu;
- __le16 alloc_tx_key_ctxs;
- __le16 alloc_rx_key_ctxs;
+ u8 unused_3[2];
+ u8 unused_4[2];
u8 port_kdnet_mode;
#define FUNC_QCFG_RESP_PORT_KDNET_MODE_DISABLED 0x0UL
#define FUNC_QCFG_RESP_PORT_KDNET_MODE_ENABLED 0x1UL
#define FUNC_QCFG_RESP_PORT_KDNET_MODE_LAST FUNC_QCFG_RESP_PORT_KDNET_MODE_ENABLED
u8 kdnet_pcie_function;
__le16 port_kdnet_fid;
- u8 unused_3;
+ u8 unused_5[2];
+ __le32 alloc_tx_key_ctxs;
+ __le32 alloc_rx_key_ctxs;
+ u8 lag_id;
+ u8 parif;
+ u8 unused_6[5];
u8 valid;
};
-/* hwrm_func_cfg_input (size:960b/120B) */
+/* hwrm_func_cfg_input (size:1088b/136B) */
struct hwrm_func_cfg_input {
__le16 req_type;
__le16 cmpl_ring;
@@ -2061,8 +2139,7 @@ struct hwrm_func_cfg_input {
#define FUNC_CFG_REQ_PARTITION_MAX_BW_BW_VALUE_UNIT_LAST FUNC_CFG_REQ_PARTITION_MAX_BW_BW_VALUE_UNIT_PERCENT1_100
__be16 tpid;
__le16 host_mtu;
- __le16 num_tx_key_ctxs;
- __le16 num_rx_key_ctxs;
+ u8 unused_0[4];
__le32 enables2;
#define FUNC_CFG_REQ_ENABLES2_KDNET 0x1UL
#define FUNC_CFG_REQ_ENABLES2_DB_PAGE_SIZE 0x2UL
@@ -2083,7 +2160,12 @@ struct hwrm_func_cfg_input {
#define FUNC_CFG_REQ_DB_PAGE_SIZE_2MB 0x9UL
#define FUNC_CFG_REQ_DB_PAGE_SIZE_4MB 0xaUL
#define FUNC_CFG_REQ_DB_PAGE_SIZE_LAST FUNC_CFG_REQ_DB_PAGE_SIZE_4MB
- u8 unused_0[6];
+ u8 unused_1[2];
+ __le32 num_ktls_tx_key_ctxs;
+ __le32 num_ktls_rx_key_ctxs;
+ __le32 num_quic_tx_key_ctxs;
+ __le32 num_quic_rx_key_ctxs;
+ __le32 unused_2;
};
/* hwrm_func_cfg_output (size:128b/16B) */
@@ -2390,7 +2472,11 @@ struct hwrm_func_drv_qver_input {
__le64 resp_addr;
__le32 reserved;
__le16 fid;
- u8 unused_0[2];
+ u8 driver_type;
+ #define FUNC_DRV_QVER_REQ_DRIVER_TYPE_L2 0x0UL
+ #define FUNC_DRV_QVER_REQ_DRIVER_TYPE_ROCE 0x1UL
+ #define FUNC_DRV_QVER_REQ_DRIVER_TYPE_LAST FUNC_DRV_QVER_REQ_DRIVER_TYPE_ROCE
+ u8 unused_0;
};
/* hwrm_func_drv_qver_output (size:256b/32B) */
@@ -2435,7 +2521,7 @@ struct hwrm_func_resource_qcaps_input {
u8 unused_0[6];
};
-/* hwrm_func_resource_qcaps_output (size:512b/64B) */
+/* hwrm_func_resource_qcaps_output (size:704b/88B) */
struct hwrm_func_resource_qcaps_output {
__le16 error_code;
__le16 req_type;
@@ -2467,15 +2553,20 @@ struct hwrm_func_resource_qcaps_output {
__le16 max_tx_scheduler_inputs;
__le16 flags;
#define FUNC_RESOURCE_QCAPS_RESP_FLAGS_MIN_GUARANTEED 0x1UL
- __le16 min_tx_key_ctxs;
- __le16 max_tx_key_ctxs;
- __le16 min_rx_key_ctxs;
- __le16 max_rx_key_ctxs;
- u8 unused_0[5];
+ __le16 min_msix;
+ __le32 min_ktls_tx_key_ctxs;
+ __le32 max_ktls_tx_key_ctxs;
+ __le32 min_ktls_rx_key_ctxs;
+ __le32 max_ktls_rx_key_ctxs;
+ __le32 min_quic_tx_key_ctxs;
+ __le32 max_quic_tx_key_ctxs;
+ __le32 min_quic_rx_key_ctxs;
+ __le32 max_quic_rx_key_ctxs;
+ u8 unused_0[3];
u8 valid;
};
-/* hwrm_func_vf_resource_cfg_input (size:512b/64B) */
+/* hwrm_func_vf_resource_cfg_input (size:704b/88B) */
struct hwrm_func_vf_resource_cfg_input {
__le16 req_type;
__le16 cmpl_ring;
@@ -2502,14 +2593,18 @@ struct hwrm_func_vf_resource_cfg_input {
__le16 max_hw_ring_grps;
__le16 flags;
#define FUNC_VF_RESOURCE_CFG_REQ_FLAGS_MIN_GUARANTEED 0x1UL
- __le16 min_tx_key_ctxs;
- __le16 max_tx_key_ctxs;
- __le16 min_rx_key_ctxs;
- __le16 max_rx_key_ctxs;
- u8 unused_0[2];
-};
-
-/* hwrm_func_vf_resource_cfg_output (size:256b/32B) */
+ __le16 min_msix;
+ __le32 min_ktls_tx_key_ctxs;
+ __le32 max_ktls_tx_key_ctxs;
+ __le32 min_ktls_rx_key_ctxs;
+ __le32 max_ktls_rx_key_ctxs;
+ __le32 min_quic_tx_key_ctxs;
+ __le32 max_quic_tx_key_ctxs;
+ __le32 min_quic_rx_key_ctxs;
+ __le32 max_quic_rx_key_ctxs;
+};
+
+/* hwrm_func_vf_resource_cfg_output (size:320b/40B) */
struct hwrm_func_vf_resource_cfg_output {
__le16 error_code;
__le16 req_type;
@@ -2523,9 +2618,9 @@ struct hwrm_func_vf_resource_cfg_output {
__le16 reserved_vnics;
__le16 reserved_stat_ctx;
__le16 reserved_hw_ring_grps;
- __le16 reserved_tx_key_ctxs;
- __le16 reserved_rx_key_ctxs;
- u8 unused_0[3];
+ __le32 reserved_tx_key_ctxs;
+ __le32 reserved_rx_key_ctxs;
+ u8 unused_0[7];
u8 valid;
};
@@ -2592,7 +2687,8 @@ struct hwrm_func_backing_store_qcaps_output {
__le16 rkc_entry_size;
__le32 tkc_max_entries;
__le32 rkc_max_entries;
- u8 rsvd1[7];
+ __le16 fast_qpmd_qp_num_entries;
+ u8 rsvd1[5];
u8 valid;
};
@@ -2630,27 +2726,28 @@ struct hwrm_func_backing_store_cfg_input {
#define FUNC_BACKING_STORE_CFG_REQ_FLAGS_PREBOOT_MODE 0x1UL
#define FUNC_BACKING_STORE_CFG_REQ_FLAGS_MRAV_RESERVATION_SPLIT 0x2UL
__le32 enables;
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_QP 0x1UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_SRQ 0x2UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_CQ 0x4UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_VNIC 0x8UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_STAT 0x10UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP 0x20UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING0 0x40UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING1 0x80UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING2 0x100UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING3 0x200UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING4 0x400UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING5 0x800UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING6 0x1000UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING7 0x2000UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_MRAV 0x4000UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TIM 0x8000UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING8 0x10000UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING9 0x20000UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING10 0x40000UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TKC 0x80000UL
- #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_RKC 0x100000UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_QP 0x1UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_SRQ 0x2UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_CQ 0x4UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_VNIC 0x8UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_STAT 0x10UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_SP 0x20UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING0 0x40UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING1 0x80UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING2 0x100UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING3 0x200UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING4 0x400UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING5 0x800UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING6 0x1000UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING7 0x2000UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_MRAV 0x4000UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TIM 0x8000UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING8 0x10000UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING9 0x20000UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TQM_RING10 0x40000UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_TKC 0x80000UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_RKC 0x100000UL
+ #define FUNC_BACKING_STORE_CFG_REQ_ENABLES_QP_FAST_QPMD 0x200000UL
u8 qpc_pg_size_qpc_lvl;
#define FUNC_BACKING_STORE_CFG_REQ_QPC_LVL_MASK 0xfUL
#define FUNC_BACKING_STORE_CFG_REQ_QPC_LVL_SFT 0
@@ -3047,7 +3144,7 @@ struct hwrm_func_backing_store_cfg_input {
#define FUNC_BACKING_STORE_CFG_REQ_RKC_PG_SIZE_PG_8M (0x4UL << 4)
#define FUNC_BACKING_STORE_CFG_REQ_RKC_PG_SIZE_PG_1G (0x5UL << 4)
#define FUNC_BACKING_STORE_CFG_REQ_RKC_PG_SIZE_LAST FUNC_BACKING_STORE_CFG_REQ_RKC_PG_SIZE_PG_1G
- u8 rsvd[2];
+ __le16 qp_num_fast_qpmd_entries;
};
/* hwrm_func_backing_store_cfg_output (size:128b/16B) */
@@ -3477,6 +3574,8 @@ struct hwrm_func_backing_store_cfg_v2_input {
#define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_CQ_DB_SHADOW 0x19UL
#define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_QUIC_TKC 0x1aUL
#define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_QUIC_RKC 0x1bUL
+ #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_TBL_SCOPE 0x1cUL
+ #define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_XID_PARTITION 0x1dUL
#define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_INVALID 0xffffUL
#define FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_LAST FUNC_BACKING_STORE_CFG_V2_REQ_TYPE_INVALID
__le16 instance;
@@ -3546,6 +3645,8 @@ struct hwrm_func_backing_store_qcfg_v2_input {
#define FUNC_BACKING_STORE_QCFG_V2_REQ_TYPE_CQ_DB_SHADOW 0x19UL
#define FUNC_BACKING_STORE_QCFG_V2_REQ_TYPE_QUIC_TKC 0x1aUL
#define FUNC_BACKING_STORE_QCFG_V2_REQ_TYPE_QUIC_RKC 0x1bUL
+ #define FUNC_BACKING_STORE_QCFG_V2_REQ_TYPE_TBL_SCOPE 0x1cUL
+ #define FUNC_BACKING_STORE_QCFG_V2_REQ_TYPE_XID_PARTITION 0x1dUL
#define FUNC_BACKING_STORE_QCFG_V2_REQ_TYPE_INVALID 0xffffUL
#define FUNC_BACKING_STORE_QCFG_V2_REQ_TYPE_LAST FUNC_BACKING_STORE_QCFG_V2_REQ_TYPE_INVALID
__le16 instance;
@@ -3559,22 +3660,24 @@ struct hwrm_func_backing_store_qcfg_v2_output {
__le16 seq_id;
__le16 resp_len;
__le16 type;
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_QP 0x0UL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_SRQ 0x1UL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_CQ 0x2UL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_VNIC 0x3UL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_STAT 0x4UL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_SP_TQM_RING 0x5UL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_FP_TQM_RING 0x6UL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_MRAV 0xeUL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_TIM 0xfUL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_TKC 0x13UL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_RKC 0x14UL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_MP_TQM_RING 0x15UL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_QUIC_TKC 0x1aUL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_QUIC_RKC 0x1bUL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_INVALID 0xffffUL
- #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_LAST FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_INVALID
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_QP 0x0UL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_SRQ 0x1UL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_CQ 0x2UL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_VNIC 0x3UL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_STAT 0x4UL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_SP_TQM_RING 0x5UL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_FP_TQM_RING 0x6UL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_MRAV 0xeUL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_TIM 0xfUL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_TKC 0x13UL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_RKC 0x14UL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_MP_TQM_RING 0x15UL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_QUIC_TKC 0x1aUL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_QUIC_RKC 0x1bUL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_TBL_SCOPE 0x1cUL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_XID_PARTITION 0x1dUL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_INVALID 0xffffUL
+ #define FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_LAST FUNC_BACKING_STORE_QCFG_V2_RESP_TYPE_INVALID
__le16 instance;
__le32 flags;
__le64 page_dir;
@@ -3609,7 +3712,8 @@ struct hwrm_func_backing_store_qcfg_v2_output {
struct qpc_split_entries {
__le32 qp_num_l2_entries;
__le32 qp_num_qp1_entries;
- __le32 rsvd[2];
+ __le32 qp_num_fast_qpmd_entries;
+ __le32 rsvd;
};
/* srq_split_entries (size:128b/16B) */
@@ -3666,6 +3770,8 @@ struct hwrm_func_backing_store_qcaps_v2_input {
#define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_CQ_DB_SHADOW 0x19UL
#define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_QUIC_TKC 0x1aUL
#define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_QUIC_RKC 0x1bUL
+ #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_TBL_SCOPE 0x1cUL
+ #define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_XID_PARTITION 0x1dUL
#define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_INVALID 0xffffUL
#define FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_LAST FUNC_BACKING_STORE_QCAPS_V2_REQ_TYPE_INVALID
u8 rsvd[6];
@@ -3696,13 +3802,16 @@ struct hwrm_func_backing_store_qcaps_v2_output {
#define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_CQ_DB_SHADOW 0x19UL
#define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_QUIC_TKC 0x1aUL
#define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_QUIC_RKC 0x1bUL
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_TBL_SCOPE 0x1cUL
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_XID_PARTITION 0x1dUL
#define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_INVALID 0xffffUL
#define FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_LAST FUNC_BACKING_STORE_QCAPS_V2_RESP_TYPE_INVALID
__le16 entry_size;
__le32 flags;
- #define FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_ENABLE_CTX_KIND_INIT 0x1UL
- #define FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_TYPE_VALID 0x2UL
- #define FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_DRIVER_MANAGED_MEMORY 0x4UL
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_ENABLE_CTX_KIND_INIT 0x1UL
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_TYPE_VALID 0x2UL
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_DRIVER_MANAGED_MEMORY 0x4UL
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_FLAGS_ROCE_QP_PSEUDO_STATIC_ALLOC 0x8UL
__le32 instance_bit_map;
u8 ctx_init_value;
u8 ctx_init_offset;
@@ -3712,7 +3821,13 @@ struct hwrm_func_backing_store_qcaps_v2_output {
__le32 min_num_entries;
__le16 next_valid_type;
u8 subtype_valid_cnt;
- u8 rsvd2;
+ u8 exact_cnt_bit_map;
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_0_EXACT 0x1UL
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_1_EXACT 0x2UL
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_2_EXACT 0x4UL
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_3_EXACT 0x8UL
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_EXACT_CNT_BIT_MAP_UNUSED_MASK 0xf0UL
+ #define FUNC_BACKING_STORE_QCAPS_V2_RESP_EXACT_CNT_BIT_MAP_UNUSED_SFT 4
__le32 split_entry_0;
__le32 split_entry_1;
__le32 split_entry_2;
@@ -4599,7 +4714,7 @@ struct tx_port_stats_ext {
__le64 pfc_pri7_tx_transitions;
};
-/* rx_port_stats_ext (size:3776b/472B) */
+/* rx_port_stats_ext (size:3904b/488B) */
struct rx_port_stats_ext {
__le64 link_down_events;
__le64 continuous_pause_events;
@@ -4660,6 +4775,8 @@ struct rx_port_stats_ext {
__le64 rx_discard_packets_cos7;
__le64 rx_fec_corrected_blocks;
__le64 rx_fec_uncorrectable_blocks;
+ __le64 rx_filter_miss;
+ __le64 rx_fec_symbol_err;
};
/* hwrm_port_qstats_ext_input (size:320b/40B) */
@@ -6092,6 +6209,7 @@ struct hwrm_vnic_cfg_input {
#define VNIC_CFG_REQ_FLAGS_ROCE_ONLY_VNIC_MODE 0x10UL
#define VNIC_CFG_REQ_FLAGS_RSS_DFLT_CR_MODE 0x20UL
#define VNIC_CFG_REQ_FLAGS_ROCE_MIRRORING_CAPABLE_VNIC_MODE 0x40UL
+ #define VNIC_CFG_REQ_FLAGS_PORTCOS_MAPPING_MODE 0x80UL
__le32 enables;
#define VNIC_CFG_REQ_ENABLES_DFLT_RING_GRP 0x1UL
#define VNIC_CFG_REQ_ENABLES_RSS_RULE 0x2UL
@@ -6181,12 +6299,16 @@ struct hwrm_vnic_qcaps_output {
#define VNIC_QCAPS_RESP_FLAGS_RSS_IPSEC_AH_SPI_IPV6_CAP 0x800000UL
#define VNIC_QCAPS_RESP_FLAGS_RSS_IPSEC_ESP_SPI_IPV6_CAP 0x1000000UL
#define VNIC_QCAPS_RESP_FLAGS_OUTERMOST_RSS_TRUSTED_VF_CAP 0x2000000UL
+ #define VNIC_QCAPS_RESP_FLAGS_PORTCOS_MAPPING_MODE 0x4000000UL
+ #define VNIC_QCAPS_RESP_FLAGS_RSS_PROF_TCAM_MODE_ENABLED 0x8000000UL
+ #define VNIC_QCAPS_RESP_FLAGS_VNIC_RSS_HASH_MODE_CAP 0x10000000UL
+ #define VNIC_QCAPS_RESP_FLAGS_HW_TUNNEL_TPA_CAP 0x20000000UL
__le16 max_aggs_supported;
u8 unused_1[5];
u8 valid;
};
-/* hwrm_vnic_tpa_cfg_input (size:320b/40B) */
+/* hwrm_vnic_tpa_cfg_input (size:384b/48B) */
struct hwrm_vnic_tpa_cfg_input {
__le16 req_type;
__le16 cmpl_ring;
@@ -6208,6 +6330,7 @@ struct hwrm_vnic_tpa_cfg_input {
#define VNIC_TPA_CFG_REQ_ENABLES_MAX_AGGS 0x2UL
#define VNIC_TPA_CFG_REQ_ENABLES_MAX_AGG_TIMER 0x4UL
#define VNIC_TPA_CFG_REQ_ENABLES_MIN_AGG_LEN 0x8UL
+ #define VNIC_TPA_CFG_REQ_ENABLES_TNL_TPA_EN 0x10UL
__le16 vnic_id;
__le16 max_agg_segs;
#define VNIC_TPA_CFG_REQ_MAX_AGG_SEGS_1 0x0UL
@@ -6227,6 +6350,25 @@ struct hwrm_vnic_tpa_cfg_input {
u8 unused_0[2];
__le32 max_agg_timer;
__le32 min_agg_len;
+ __le32 tnl_tpa_en_bitmap;
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_VXLAN 0x1UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_GENEVE 0x2UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_NVGRE 0x4UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_GRE 0x8UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_IPV4 0x10UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_IPV6 0x20UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_VXLAN_GPE 0x40UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_VXLAN_CUST1 0x80UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_GRE_CUST1 0x100UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_UPAR1 0x200UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_UPAR2 0x400UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_UPAR3 0x800UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_UPAR4 0x1000UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_UPAR5 0x2000UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_UPAR6 0x4000UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_UPAR7 0x8000UL
+ #define VNIC_TPA_CFG_REQ_TNL_TPA_EN_BITMAP_UPAR8 0x10000UL
+ u8 unused_1[4];
};
/* hwrm_vnic_tpa_cfg_output (size:128b/16B) */
@@ -6282,7 +6424,25 @@ struct hwrm_vnic_tpa_qcfg_output {
#define VNIC_TPA_QCFG_RESP_MAX_AGGS_LAST VNIC_TPA_QCFG_RESP_MAX_AGGS_MAX
__le32 max_agg_timer;
__le32 min_agg_len;
- u8 unused_0[7];
+ __le32 tnl_tpa_en_bitmap;
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_VXLAN 0x1UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_GENEVE 0x2UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_NVGRE 0x4UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_GRE 0x8UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_IPV4 0x10UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_IPV6 0x20UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_VXLAN_GPE 0x40UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_VXLAN_CUST1 0x80UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_GRE_CUST1 0x100UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_UPAR1 0x200UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_UPAR2 0x400UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_UPAR3 0x800UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_UPAR4 0x1000UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_UPAR5 0x2000UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_UPAR6 0x4000UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_UPAR7 0x8000UL
+ #define VNIC_TPA_QCFG_RESP_TNL_TPA_EN_BITMAP_UPAR8 0x10000UL
+ u8 unused_0[3];
u8 valid;
};
@@ -6317,8 +6477,9 @@ struct hwrm_vnic_rss_cfg_input {
__le64 hash_key_tbl_addr;
__le16 rss_ctx_idx;
u8 flags;
- #define VNIC_RSS_CFG_REQ_FLAGS_HASH_TYPE_INCLUDE 0x1UL
- #define VNIC_RSS_CFG_REQ_FLAGS_HASH_TYPE_EXCLUDE 0x2UL
+ #define VNIC_RSS_CFG_REQ_FLAGS_HASH_TYPE_INCLUDE 0x1UL
+ #define VNIC_RSS_CFG_REQ_FLAGS_HASH_TYPE_EXCLUDE 0x2UL
+ #define VNIC_RSS_CFG_REQ_FLAGS_IPSEC_HASH_TYPE_CFG_SUPPORT 0x4UL
u8 ring_select_mode;
#define VNIC_RSS_CFG_REQ_RING_SELECT_MODE_TOEPLITZ 0x0UL
#define VNIC_RSS_CFG_REQ_RING_SELECT_MODE_XOR 0x1UL
@@ -6480,14 +6641,15 @@ struct hwrm_ring_alloc_input {
__le16 target_id;
__le64 resp_addr;
__le32 enables;
- #define RING_ALLOC_REQ_ENABLES_RING_ARB_CFG 0x2UL
- #define RING_ALLOC_REQ_ENABLES_STAT_CTX_ID_VALID 0x8UL
- #define RING_ALLOC_REQ_ENABLES_MAX_BW_VALID 0x20UL
- #define RING_ALLOC_REQ_ENABLES_RX_RING_ID_VALID 0x40UL
- #define RING_ALLOC_REQ_ENABLES_NQ_RING_ID_VALID 0x80UL
- #define RING_ALLOC_REQ_ENABLES_RX_BUF_SIZE_VALID 0x100UL
- #define RING_ALLOC_REQ_ENABLES_SCHQ_ID 0x200UL
- #define RING_ALLOC_REQ_ENABLES_MPC_CHNLS_TYPE 0x400UL
+ #define RING_ALLOC_REQ_ENABLES_RING_ARB_CFG 0x2UL
+ #define RING_ALLOC_REQ_ENABLES_STAT_CTX_ID_VALID 0x8UL
+ #define RING_ALLOC_REQ_ENABLES_MAX_BW_VALID 0x20UL
+ #define RING_ALLOC_REQ_ENABLES_RX_RING_ID_VALID 0x40UL
+ #define RING_ALLOC_REQ_ENABLES_NQ_RING_ID_VALID 0x80UL
+ #define RING_ALLOC_REQ_ENABLES_RX_BUF_SIZE_VALID 0x100UL
+ #define RING_ALLOC_REQ_ENABLES_SCHQ_ID 0x200UL
+ #define RING_ALLOC_REQ_ENABLES_MPC_CHNLS_TYPE 0x400UL
+ #define RING_ALLOC_REQ_ENABLES_STEERING_TAG_VALID 0x800UL
u8 ring_type;
#define RING_ALLOC_REQ_RING_TYPE_L2_CMPL 0x0UL
#define RING_ALLOC_REQ_RING_TYPE_TX 0x1UL
@@ -6541,7 +6703,7 @@ struct hwrm_ring_alloc_input {
#define RING_ALLOC_REQ_RING_ARB_CFG_RSVD_SFT 4
#define RING_ALLOC_REQ_RING_ARB_CFG_ARB_POLICY_PARAM_MASK 0xff00UL
#define RING_ALLOC_REQ_RING_ARB_CFG_ARB_POLICY_PARAM_SFT 8
- __le16 unused_3;
+ __le16 steering_tag;
__le32 reserved3;
__le32 stat_ctx_id;
__le32 reserved4;
@@ -6917,6 +7079,7 @@ struct hwrm_cfa_l2_filter_alloc_input {
#define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL
#define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL
#define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL
+ #define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL
#define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL
#define CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_L2_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL
u8 unused_4;
@@ -7099,6 +7262,7 @@ struct hwrm_cfa_tunnel_filter_alloc_input {
#define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL
#define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL
#define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL
+ #define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL
#define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL
#define CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_TUNNEL_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL
u8 tunnel_flags;
@@ -7233,7 +7397,8 @@ struct hwrm_cfa_encap_record_alloc_input {
#define CFA_ENCAP_RECORD_ALLOC_REQ_ENCAP_TYPE_IPGRE_V1 0xaUL
#define CFA_ENCAP_RECORD_ALLOC_REQ_ENCAP_TYPE_L2_ETYPE 0xbUL
#define CFA_ENCAP_RECORD_ALLOC_REQ_ENCAP_TYPE_VXLAN_GPE_V6 0xcUL
- #define CFA_ENCAP_RECORD_ALLOC_REQ_ENCAP_TYPE_LAST CFA_ENCAP_RECORD_ALLOC_REQ_ENCAP_TYPE_VXLAN_GPE_V6
+ #define CFA_ENCAP_RECORD_ALLOC_REQ_ENCAP_TYPE_VXLAN_GPE 0x10UL
+ #define CFA_ENCAP_RECORD_ALLOC_REQ_ENCAP_TYPE_LAST CFA_ENCAP_RECORD_ALLOC_REQ_ENCAP_TYPE_VXLAN_GPE
u8 unused_0[3];
__le32 encap_data[20];
};
@@ -7338,6 +7503,7 @@ struct hwrm_cfa_ntuple_filter_alloc_input {
#define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL
#define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL
#define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL
+ #define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL
#define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL
#define CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_NTUPLE_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL
u8 pri_hint;
@@ -7485,6 +7651,7 @@ struct hwrm_cfa_decap_filter_alloc_input {
#define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL
#define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL
#define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL
+ #define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL
#define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL
#define CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_DECAP_FILTER_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL
u8 unused_0;
@@ -7628,6 +7795,7 @@ struct hwrm_cfa_flow_alloc_input {
#define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_IPGRE_V1 0xaUL
#define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_L2_ETYPE 0xbUL
#define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL
+ #define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL
#define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL 0xffUL
#define CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_LAST CFA_FLOW_ALLOC_REQ_TUNNEL_TYPE_ANYTUNNEL
};
@@ -8053,8 +8221,11 @@ struct hwrm_tunnel_dst_port_query_input {
#define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL
#define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_CUSTOM_GRE 0xdUL
#define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_ECPRI 0xeUL
- #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_ECPRI
- u8 unused_0[7];
+ #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_SRV6 0xfUL
+ #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL
+ #define TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_QUERY_REQ_TUNNEL_TYPE_VXLAN_GPE
+ u8 tunnel_next_proto;
+ u8 unused_0[6];
};
/* hwrm_tunnel_dst_port_query_output (size:128b/16B) */
@@ -8094,10 +8265,12 @@ struct hwrm_tunnel_dst_port_alloc_input {
#define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL
#define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_CUSTOM_GRE 0xdUL
#define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_ECPRI 0xeUL
- #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_ECPRI
- u8 unused_0;
+ #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_SRV6 0xfUL
+ #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL
+ #define TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_ALLOC_REQ_TUNNEL_TYPE_VXLAN_GPE
+ u8 tunnel_next_proto;
__be16 tunnel_dst_port_val;
- u8 unused_1[4];
+ u8 unused_0[4];
};
/* hwrm_tunnel_dst_port_alloc_output (size:128b/16B) */
@@ -8141,10 +8314,12 @@ struct hwrm_tunnel_dst_port_free_input {
#define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_GPE_V6 0xcUL
#define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_CUSTOM_GRE 0xdUL
#define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_ECPRI 0xeUL
- #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_ECPRI
- u8 unused_0;
+ #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_SRV6 0xfUL
+ #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_GPE 0x10UL
+ #define TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_LAST TUNNEL_DST_PORT_FREE_REQ_TUNNEL_TYPE_VXLAN_GPE
+ u8 tunnel_next_proto;
__le16 tunnel_dst_port_id;
- u8 unused_1[4];
+ u8 unused_0[4];
};
/* hwrm_tunnel_dst_port_free_output (size:128b/16B) */
@@ -8212,7 +8387,7 @@ struct ctx_hw_stats_ext {
__le64 rx_tpa_events;
};
-/* hwrm_stat_ctx_alloc_input (size:256b/32B) */
+/* hwrm_stat_ctx_alloc_input (size:320b/40B) */
struct hwrm_stat_ctx_alloc_input {
__le16 req_type;
__le16 cmpl_ring;
@@ -8225,6 +8400,10 @@ struct hwrm_stat_ctx_alloc_input {
#define STAT_CTX_ALLOC_REQ_STAT_CTX_FLAGS_ROCE 0x1UL
u8 unused_0;
__le16 stats_dma_length;
+ __le16 flags;
+ #define STAT_CTX_ALLOC_REQ_FLAGS_STEERING_TAG_VALID 0x1UL
+ __le16 steering_tag;
+ __le32 unused_1;
};
/* hwrm_stat_ctx_alloc_output (size:128b/16B) */
@@ -8432,7 +8611,7 @@ struct hwrm_stat_generic_qstats_output {
u8 valid;
};
-/* generic_sw_hw_stats (size:1216b/152B) */
+/* generic_sw_hw_stats (size:1408b/176B) */
struct generic_sw_hw_stats {
__le64 pcie_statistics_tx_tlp;
__le64 pcie_statistics_rx_tlp;
@@ -8453,6 +8632,9 @@ struct generic_sw_hw_stats {
__le64 cache_miss_count_cfcs;
__le64 cache_miss_count_cfcc;
__le64 cache_miss_count_cfcm;
+ __le64 hw_db_recov_dbs_dropped;
+ __le64 hw_db_recov_drops_serviced;
+ __le64 hw_db_recov_dbs_recovered;
};
/* hwrm_fw_reset_input (size:192b/24B) */
@@ -8876,7 +9058,7 @@ struct hwrm_temp_monitor_query_input {
__le64 resp_addr;
};
-/* hwrm_temp_monitor_query_output (size:128b/16B) */
+/* hwrm_temp_monitor_query_output (size:192b/24B) */
struct hwrm_temp_monitor_query_output {
__le16 error_code;
__le16 req_type;
@@ -8886,14 +9068,20 @@ struct hwrm_temp_monitor_query_output {
u8 phy_temp;
u8 om_temp;
u8 flags;
- #define TEMP_MONITOR_QUERY_RESP_FLAGS_TEMP_NOT_AVAILABLE 0x1UL
- #define TEMP_MONITOR_QUERY_RESP_FLAGS_PHY_TEMP_NOT_AVAILABLE 0x2UL
- #define TEMP_MONITOR_QUERY_RESP_FLAGS_OM_NOT_PRESENT 0x4UL
- #define TEMP_MONITOR_QUERY_RESP_FLAGS_OM_TEMP_NOT_AVAILABLE 0x8UL
- #define TEMP_MONITOR_QUERY_RESP_FLAGS_EXT_TEMP_FIELDS_AVAILABLE 0x10UL
+ #define TEMP_MONITOR_QUERY_RESP_FLAGS_TEMP_NOT_AVAILABLE 0x1UL
+ #define TEMP_MONITOR_QUERY_RESP_FLAGS_PHY_TEMP_NOT_AVAILABLE 0x2UL
+ #define TEMP_MONITOR_QUERY_RESP_FLAGS_OM_NOT_PRESENT 0x4UL
+ #define TEMP_MONITOR_QUERY_RESP_FLAGS_OM_TEMP_NOT_AVAILABLE 0x8UL
+ #define TEMP_MONITOR_QUERY_RESP_FLAGS_EXT_TEMP_FIELDS_AVAILABLE 0x10UL
+ #define TEMP_MONITOR_QUERY_RESP_FLAGS_THRESHOLD_VALUES_AVAILABLE 0x20UL
u8 temp2;
u8 phy_temp2;
u8 om_temp2;
+ u8 warn_threshold;
+ u8 critical_threshold;
+ u8 fatal_threshold;
+ u8 shutdown_threshold;
+ u8 unused_0[4];
u8 valid;
};
@@ -9317,7 +9505,8 @@ struct hwrm_dbg_ring_info_get_output {
__le32 producer_index;
__le32 consumer_index;
__le32 cag_vector_ctrl;
- u8 unused_0[3];
+ __le16 st_tag;
+ u8 unused_0;
u8 valid;
};
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwmon.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwmon.c
new file mode 100644
index 000000000000..669d24ba0e87
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwmon.c
@@ -0,0 +1,241 @@
+/* Broadcom NetXtreme-C/E network driver.
+ *
+ * Copyright (c) 2023 Broadcom Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation.
+ */
+
+#include <linux/dev_printk.h>
+#include <linux/errno.h>
+#include <linux/hwmon.h>
+#include <linux/hwmon-sysfs.h>
+#include <linux/pci.h>
+
+#include "bnxt_hsi.h"
+#include "bnxt.h"
+#include "bnxt_hwrm.h"
+#include "bnxt_hwmon.h"
+
+void bnxt_hwmon_notify_event(struct bnxt *bp)
+{
+ u32 attr;
+
+ if (!bp->hwmon_dev)
+ return;
+
+ switch (bp->thermal_threshold_type) {
+ case ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_WARN:
+ attr = hwmon_temp_max_alarm;
+ break;
+ case ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_CRITICAL:
+ attr = hwmon_temp_crit_alarm;
+ break;
+ case ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_FATAL:
+ case ASYNC_EVENT_CMPL_ERROR_REPORT_THERMAL_EVENT_DATA1_THRESHOLD_TYPE_SHUTDOWN:
+ attr = hwmon_temp_emergency_alarm;
+ break;
+ default:
+ return;
+ }
+
+ hwmon_notify_event(&bp->pdev->dev, hwmon_temp, attr, 0);
+}
+
+static int bnxt_hwrm_temp_query(struct bnxt *bp, u8 *temp)
+{
+ struct hwrm_temp_monitor_query_output *resp;
+ struct hwrm_temp_monitor_query_input *req;
+ int rc;
+
+ rc = hwrm_req_init(bp, req, HWRM_TEMP_MONITOR_QUERY);
+ if (rc)
+ return rc;
+ resp = hwrm_req_hold(bp, req);
+ rc = hwrm_req_send_silent(bp, req);
+ if (rc)
+ goto drop_req;
+
+ if (temp) {
+ *temp = resp->temp;
+ } else if (resp->flags &
+ TEMP_MONITOR_QUERY_RESP_FLAGS_THRESHOLD_VALUES_AVAILABLE) {
+ bp->fw_cap |= BNXT_FW_CAP_THRESHOLD_TEMP_SUPPORTED;
+ bp->warn_thresh_temp = resp->warn_threshold;
+ bp->crit_thresh_temp = resp->critical_threshold;
+ bp->fatal_thresh_temp = resp->fatal_threshold;
+ bp->shutdown_thresh_temp = resp->shutdown_threshold;
+ }
+drop_req:
+ hwrm_req_drop(bp, req);
+ return rc;
+}
+
+static umode_t bnxt_hwmon_is_visible(const void *_data, enum hwmon_sensor_types type,
+ u32 attr, int channel)
+{
+ const struct bnxt *bp = _data;
+
+ if (type != hwmon_temp)
+ return 0;
+
+ switch (attr) {
+ case hwmon_temp_input:
+ return 0444;
+ case hwmon_temp_max:
+ case hwmon_temp_crit:
+ case hwmon_temp_emergency:
+ case hwmon_temp_max_alarm:
+ case hwmon_temp_crit_alarm:
+ case hwmon_temp_emergency_alarm:
+ if (!(bp->fw_cap & BNXT_FW_CAP_THRESHOLD_TEMP_SUPPORTED))
+ return 0;
+ return 0444;
+ default:
+ return 0;
+ }
+}
+
+static int bnxt_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
+ int channel, long *val)
+{
+ struct bnxt *bp = dev_get_drvdata(dev);
+ u8 temp = 0;
+ int rc;
+
+ switch (attr) {
+ case hwmon_temp_input:
+ rc = bnxt_hwrm_temp_query(bp, &temp);
+ if (!rc)
+ *val = temp * 1000;
+ return rc;
+ case hwmon_temp_max:
+ *val = bp->warn_thresh_temp * 1000;
+ return 0;
+ case hwmon_temp_crit:
+ *val = bp->crit_thresh_temp * 1000;
+ return 0;
+ case hwmon_temp_emergency:
+ *val = bp->fatal_thresh_temp * 1000;
+ return 0;
+ case hwmon_temp_max_alarm:
+ rc = bnxt_hwrm_temp_query(bp, &temp);
+ if (!rc)
+ *val = temp >= bp->warn_thresh_temp;
+ return rc;
+ case hwmon_temp_crit_alarm:
+ rc = bnxt_hwrm_temp_query(bp, &temp);
+ if (!rc)
+ *val = temp >= bp->crit_thresh_temp;
+ return rc;
+ case hwmon_temp_emergency_alarm:
+ rc = bnxt_hwrm_temp_query(bp, &temp);
+ if (!rc)
+ *val = temp >= bp->fatal_thresh_temp;
+ return rc;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static const struct hwmon_channel_info *bnxt_hwmon_info[] = {
+ HWMON_CHANNEL_INFO(temp, HWMON_T_INPUT | HWMON_T_MAX | HWMON_T_CRIT |
+ HWMON_T_EMERGENCY | HWMON_T_MAX_ALARM |
+ HWMON_T_CRIT_ALARM | HWMON_T_EMERGENCY_ALARM),
+ NULL
+};
+
+static const struct hwmon_ops bnxt_hwmon_ops = {
+ .is_visible = bnxt_hwmon_is_visible,
+ .read = bnxt_hwmon_read,
+};
+
+static const struct hwmon_chip_info bnxt_hwmon_chip_info = {
+ .ops = &bnxt_hwmon_ops,
+ .info = bnxt_hwmon_info,
+};
+
+static ssize_t temp1_shutdown_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct bnxt *bp = dev_get_drvdata(dev);
+
+ return sysfs_emit(buf, "%u\n", bp->shutdown_thresh_temp * 1000);
+}
+
+static ssize_t temp1_shutdown_alarm_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct bnxt *bp = dev_get_drvdata(dev);
+ u8 temp;
+ int rc;
+
+ rc = bnxt_hwrm_temp_query(bp, &temp);
+ if (rc)
+ return -EIO;
+
+ return sysfs_emit(buf, "%u\n", temp >= bp->shutdown_thresh_temp);
+}
+
+static DEVICE_ATTR_RO(temp1_shutdown);
+static DEVICE_ATTR_RO(temp1_shutdown_alarm);
+
+static struct attribute *bnxt_temp_extra_attrs[] = {
+ &dev_attr_temp1_shutdown.attr,
+ &dev_attr_temp1_shutdown_alarm.attr,
+ NULL,
+};
+
+static umode_t bnxt_temp_extra_attrs_visible(struct kobject *kobj,
+ struct attribute *attr, int index)
+{
+ struct device *dev = kobj_to_dev(kobj);
+ struct bnxt *bp = dev_get_drvdata(dev);
+
+ /* Shutdown temperature setting in NVM is optional */
+ if (!(bp->fw_cap & BNXT_FW_CAP_THRESHOLD_TEMP_SUPPORTED) ||
+ !bp->shutdown_thresh_temp)
+ return 0;
+
+ return attr->mode;
+}
+
+static const struct attribute_group bnxt_temp_extra_group = {
+ .attrs = bnxt_temp_extra_attrs,
+ .is_visible = bnxt_temp_extra_attrs_visible,
+};
+__ATTRIBUTE_GROUPS(bnxt_temp_extra);
+
+void bnxt_hwmon_uninit(struct bnxt *bp)
+{
+ if (bp->hwmon_dev) {
+ hwmon_device_unregister(bp->hwmon_dev);
+ bp->hwmon_dev = NULL;
+ }
+}
+
+void bnxt_hwmon_init(struct bnxt *bp)
+{
+ struct pci_dev *pdev = bp->pdev;
+ int rc;
+
+ /* temp1_xxx is the only sensor; don't register the device if the query will fail */
+ rc = bnxt_hwrm_temp_query(bp, NULL);
+ if (rc == -EACCES || rc == -EOPNOTSUPP) {
+ bnxt_hwmon_uninit(bp);
+ return;
+ }
+
+ if (bp->hwmon_dev)
+ return;
+
+ bp->hwmon_dev = hwmon_device_register_with_info(&pdev->dev,
+ DRV_MODULE_NAME, bp,
+ &bnxt_hwmon_chip_info,
+ bnxt_temp_extra_groups);
+ if (IS_ERR(bp->hwmon_dev)) {
+ bp->hwmon_dev = NULL;
+ dev_warn(&pdev->dev, "Cannot register hwmon device\n");
+ }
+}
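
The new bnxt_hwmon.c exposes the firmware temperature and the warn/critical/fatal thresholds through the standard hwmon sysfs names (temp1_input, temp1_max, temp1_crit, temp1_emergency) plus the driver-specific temp1_shutdown attributes, all in millidegrees Celsius per the "*val = temp * 1000" convention in bnxt_hwmon_read() above. A minimal userspace sketch for reading them, not part of the patch; the hwmon0 index is an assumption and varies per system:

#include <stdio.h>

static long read_milli_celsius(const char *path)
{
	FILE *f = fopen(path, "r");
	long val = -1;

	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	/* "hwmon0" is a placeholder; look up the bnxt entry by device in practice */
	printf("temp1_input:    %ld\n", read_milli_celsius("/sys/class/hwmon/hwmon0/temp1_input"));
	printf("temp1_max:      %ld\n", read_milli_celsius("/sys/class/hwmon/hwmon0/temp1_max"));
	printf("temp1_shutdown: %ld\n", read_milli_celsius("/sys/class/hwmon/hwmon0/temp1_shutdown"));
	return 0;
}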
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwmon.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwmon.h
new file mode 100644
index 000000000000..de54a562e06a
--- /dev/null
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwmon.h
@@ -0,0 +1,30 @@
+/* Broadcom NetXtreme-C/E network driver.
+ *
+ * Copyright (c) 2023 Broadcom Limited
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation.
+ */
+
+#ifndef BNXT_HWMON_H
+#define BNXT_HWMON_H
+
+#ifdef CONFIG_BNXT_HWMON
+void bnxt_hwmon_notify_event(struct bnxt *bp);
+void bnxt_hwmon_uninit(struct bnxt *bp);
+void bnxt_hwmon_init(struct bnxt *bp);
+#else
+static inline void bnxt_hwmon_notify_event(struct bnxt *bp)
+{
+}
+
+static inline void bnxt_hwmon_uninit(struct bnxt *bp)
+{
+}
+
+static inline void bnxt_hwmon_init(struct bnxt *bp)
+{
+}
+#endif
+#endif
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c
index 132442f16fe6..1df3d56cc4b5 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.c
@@ -485,6 +485,8 @@ static int __hwrm_send(struct bnxt *bp, struct bnxt_hwrm_ctx *ctx)
if (msg_len > BNXT_HWRM_MAX_REQ_LEN &&
msg_len > bp->hwrm_max_ext_req_len) {
+ netdev_warn(bp->dev, "oversized hwrm request, req_type 0x%x",
+ req_type);
rc = -E2BIG;
goto exit;
}
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h
index c98032e38188..15ca51b5d204 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hwrm.h
@@ -137,4 +137,18 @@ int hwrm_req_send_silent(struct bnxt *bp, void *req);
int hwrm_req_replace(struct bnxt *bp, void *req, void *new_req, u32 len);
void hwrm_req_alloc_flags(struct bnxt *bp, void *req, gfp_t flags);
void *hwrm_req_dma_slice(struct bnxt *bp, void *req, u32 size, dma_addr_t *dma);
+
+/* Older devices can only support req length of 128.
+ * HWRM_FUNC_CFG requests which don't need fields starting at
+ * num_quic_tx_key_ctxs can use this helper to avoid getting -E2BIG.
+ */
+static inline int
+bnxt_hwrm_func_cfg_short_req_init(struct bnxt *bp,
+ struct hwrm_func_cfg_input **req)
+{
+ u32 req_len;
+
+ req_len = min_t(u32, sizeof(**req), bp->hwrm_max_ext_req_len);
+ return __hwrm_req_init(bp, (void **)req, HWRM_FUNC_CFG, req_len);
+}
#endif
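
A sketch of how a caller uses the new helper, mirroring the hwrm_req_init() -> bnxt_hwrm_func_cfg_short_req_init() substitutions in the bnxt_sriov.c hunks below. Only the request-init call changes: the request length is capped at bp->hwrm_max_ext_req_len, so firmware that only accepts the shorter HWRM_FUNC_CFG layout no longer trips the oversized-request check in __hwrm_send() and returns -E2BIG. bnxt_set_vf_fid_sketch() is an illustrative name, not a real driver function:

static int bnxt_set_vf_fid_sketch(struct bnxt *bp, u16 fw_fid, u32 flags)
{
	struct hwrm_func_cfg_input *req;
	int rc;

	/* was: hwrm_req_init(bp, req, HWRM_FUNC_CFG) */
	rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
	if (rc)
		return rc;

	/* only fields before num_quic_tx_key_ctxs may be set on the short layout */
	req->fid = cpu_to_le16(fw_fid);
	req->flags = cpu_to_le32(flags);
	return hwrm_req_send(bp, req);
}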
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
index dde327f2c57e..c722b3b41730 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
@@ -95,7 +95,7 @@ int bnxt_set_vf_spoofchk(struct net_device *dev, int vf_id, bool setting)
/*TODO: if the driver supports VLAN filter on guest VLAN,
* the spoof check should also include vlan anti-spoofing
*/
- rc = hwrm_req_init(bp, req, HWRM_FUNC_CFG);
+ rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
if (!rc) {
req->fid = cpu_to_le16(vf->fw_fid);
req->flags = cpu_to_le32(func_flags);
@@ -146,7 +146,7 @@ static int bnxt_hwrm_set_trusted_vf(struct bnxt *bp, struct bnxt_vf_info *vf)
if (!(bp->fw_cap & BNXT_FW_CAP_TRUSTED_VF))
return 0;
- rc = hwrm_req_init(bp, req, HWRM_FUNC_CFG);
+ rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
if (rc)
return rc;
@@ -232,7 +232,7 @@ int bnxt_set_vf_mac(struct net_device *dev, int vf_id, u8 *mac)
}
vf = &bp->pf.vf[vf_id];
- rc = hwrm_req_init(bp, req, HWRM_FUNC_CFG);
+ rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
if (rc)
return rc;
@@ -274,7 +274,7 @@ int bnxt_set_vf_vlan(struct net_device *dev, int vf_id, u16 vlan_id, u8 qos,
if (vlan_tag == vf->vlan)
return 0;
- rc = hwrm_req_init(bp, req, HWRM_FUNC_CFG);
+ rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
if (!rc) {
req->fid = cpu_to_le16(vf->fw_fid);
req->dflt_vlan = cpu_to_le16(vlan_tag);
@@ -314,7 +314,7 @@ int bnxt_set_vf_bw(struct net_device *dev, int vf_id, int min_tx_rate,
}
if (min_tx_rate == vf->min_tx_rate && max_tx_rate == vf->max_tx_rate)
return 0;
- rc = hwrm_req_init(bp, req, HWRM_FUNC_CFG);
+ rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
if (!rc) {
req->fid = cpu_to_le16(vf->fw_fid);
req->enables = cpu_to_le32(FUNC_CFG_REQ_ENABLES_MAX_BW |
@@ -491,7 +491,7 @@ static int __bnxt_set_vf_params(struct bnxt *bp, int vf_id)
struct bnxt_vf_info *vf;
int rc;
- rc = hwrm_req_init(bp, req, HWRM_FUNC_CFG);
+ rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
if (rc)
return rc;
@@ -550,7 +550,6 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs, bool reset)
vf_rx_rings = hw_resc->max_rx_rings - bp->rx_nr_rings;
vf_tx_rings = hw_resc->max_tx_rings - bp->tx_nr_rings;
vf_vnics = hw_resc->max_vnics - bp->nr_vnics;
- vf_vnics = min_t(u16, vf_vnics, vf_rx_rings);
vf_rss = hw_resc->max_rsscos_ctxs - bp->rsscos_nr_ctxs;
req->min_rsscos_ctx = cpu_to_le16(BNXT_VF_MIN_RSS_CTX);
@@ -572,11 +571,20 @@ static int bnxt_hwrm_func_vf_resc_cfg(struct bnxt *bp, int num_vfs, bool reset)
vf_cp_rings /= num_vfs;
vf_tx_rings /= num_vfs;
vf_rx_rings /= num_vfs;
- vf_vnics /= num_vfs;
+ if ((bp->fw_cap & BNXT_FW_CAP_PRE_RESV_VNICS) &&
+ vf_vnics >= pf->max_vfs) {
+ /* Take into account that FW has pre-reserved 1 VNIC for
+ * each pf->max_vfs.
+ */
+ vf_vnics = (vf_vnics - pf->max_vfs + num_vfs) / num_vfs;
+ } else {
+ vf_vnics /= num_vfs;
+ }
vf_stat_ctx /= num_vfs;
vf_ring_grps /= num_vfs;
vf_rss /= num_vfs;
+ vf_vnics = min_t(u16, vf_vnics, vf_rx_rings);
req->min_cmpl_rings = cpu_to_le16(vf_cp_rings);
req->min_tx_rings = cpu_to_le16(vf_tx_rings);
req->min_rx_rings = cpu_to_le16(vf_rx_rings);
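
A worked example of the split above, with hypothetical numbers. When BNXT_FW_CAP_PRE_RESV_VNICS is set, the firmware has already set aside one VNIC for each of pf->max_vfs possible VFs, so (vf_vnics - max_vfs + num_vfs) / num_vfs hands each enabled VF its pre-reserved VNIC plus an equal share of the remainder, instead of promising it VNICs that are held for VFs which are not being enabled:

#include <stdio.h>

int main(void)
{
	unsigned int vf_vnics = 128;	/* free VNICs available to VFs (hypothetical) */
	unsigned int max_vfs  = 64;	/* pf->max_vfs: one VNIC pre-reserved per possible VF */
	unsigned int num_vfs  = 8;	/* VFs actually being enabled */

	unsigned int plain    = vf_vnics / num_vfs;
	unsigned int adjusted = (vf_vnics - max_vfs + num_vfs) / num_vfs;

	printf("plain split          : %u VNICs per VF\n", plain);	/* 16 */
	printf("pre-reservation aware: %u VNICs per VF\n", adjusted);	/* 9 = 1 reserved + 64/8 */
	return 0;
}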
@@ -645,7 +653,7 @@ static int bnxt_hwrm_func_cfg(struct bnxt *bp, int num_vfs)
u32 mtu, i;
int rc;
- rc = hwrm_req_init(bp, req, HWRM_FUNC_CFG);
+ rc = bnxt_hwrm_func_cfg_short_req_init(bp, &req);
if (rc)
return rc;
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index 24bade875ca6..9282403d1bf6 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -3247,23 +3247,6 @@ static irqreturn_t bcmgenet_wol_isr(int irq, void *dev_id)
return IRQ_HANDLED;
}
-#ifdef CONFIG_NET_POLL_CONTROLLER
-static void bcmgenet_poll_controller(struct net_device *dev)
-{
- struct bcmgenet_priv *priv = netdev_priv(dev);
-
- /* Invoke the main RX/TX interrupt handler */
- disable_irq(priv->irq0);
- bcmgenet_isr0(priv->irq0, priv);
- enable_irq(priv->irq0);
-
- /* And the interrupt handler for RX/TX priority queues */
- disable_irq(priv->irq1);
- bcmgenet_isr1(priv->irq1, priv);
- enable_irq(priv->irq1);
-}
-#endif
-
static void bcmgenet_umac_reset(struct bcmgenet_priv *priv)
{
u32 reg;
@@ -3720,9 +3703,6 @@ static const struct net_device_ops bcmgenet_netdev_ops = {
.ndo_set_mac_address = bcmgenet_set_mac_addr,
.ndo_eth_ioctl = phy_do_ioctl_running,
.ndo_set_features = bcmgenet_set_features,
-#ifdef CONFIG_NET_POLL_CONTROLLER
- .ndo_poll_controller = bcmgenet_poll_controller,
-#endif
.ndo_get_stats = bcmgenet_get_stats,
.ndo_change_carrier = bcmgenet_change_carrier,
};
@@ -4164,7 +4144,7 @@ err:
return err;
}
-static int bcmgenet_remove(struct platform_device *pdev)
+static void bcmgenet_remove(struct platform_device *pdev)
{
struct bcmgenet_priv *priv = dev_to_priv(&pdev->dev);
@@ -4172,8 +4152,6 @@ static int bcmgenet_remove(struct platform_device *pdev)
unregister_netdev(priv->dev);
bcmgenet_mii_exit(priv->dev);
free_netdev(priv->dev);
-
- return 0;
}
static void bcmgenet_shutdown(struct platform_device *pdev)
@@ -4352,7 +4330,7 @@ MODULE_DEVICE_TABLE(acpi, genet_acpi_match);
static struct platform_driver bcmgenet_driver = {
.probe = bcmgenet_probe,
- .remove = bcmgenet_remove,
+ .remove_new = bcmgenet_remove,
.shutdown = bcmgenet_shutdown,
.driver = {
.name = "bcmgenet",
diff --git a/drivers/net/ethernet/broadcom/sb1250-mac.c b/drivers/net/ethernet/broadcom/sb1250-mac.c
index 3a6763c5e8b3..fcf8485f3446 100644
--- a/drivers/net/ethernet/broadcom/sb1250-mac.c
+++ b/drivers/net/ethernet/broadcom/sb1250-mac.c
@@ -2593,7 +2593,7 @@ out_out:
return err;
}
-static int sbmac_remove(struct platform_device *pldev)
+static void sbmac_remove(struct platform_device *pldev)
{
struct net_device *dev = platform_get_drvdata(pldev);
struct sbmac_softc *sc = netdev_priv(dev);
@@ -2604,13 +2604,11 @@ static int sbmac_remove(struct platform_device *pldev)
mdiobus_free(sc->mii_bus);
iounmap(sc->sbm_base);
free_netdev(dev);
-
- return 0;
}
static struct platform_driver sbmac_driver = {
.probe = sbmac_probe,
- .remove = sbmac_remove,
+ .remove_new = sbmac_remove,
.driver = {
.name = sbmac_string,
},
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index 14b311196b8f..59896bbf9e73 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -6314,6 +6314,46 @@ err_out:
return -EOPNOTSUPP;
}
+static void tg3_hwclock_to_timestamp(struct tg3 *tp, u64 hwclock,
+ struct skb_shared_hwtstamps *timestamp)
+{
+ memset(timestamp, 0, sizeof(struct skb_shared_hwtstamps));
+ timestamp->hwtstamp = ns_to_ktime((hwclock & TG3_TSTAMP_MASK) +
+ tp->ptp_adjust);
+}
+
+static void tg3_read_tx_tstamp(struct tg3 *tp, u64 *hwclock)
+{
+ *hwclock = tr32(TG3_TX_TSTAMP_LSB);
+ *hwclock |= (u64)tr32(TG3_TX_TSTAMP_MSB) << 32;
+}
+
+static long tg3_ptp_ts_aux_work(struct ptp_clock_info *ptp)
+{
+ struct tg3 *tp = container_of(ptp, struct tg3, ptp_info);
+ struct skb_shared_hwtstamps timestamp;
+ u64 hwclock;
+
+ if (tp->ptp_txts_retrycnt > 2)
+ goto done;
+
+ tg3_read_tx_tstamp(tp, &hwclock);
+
+ if (hwclock != tp->pre_tx_ts) {
+ tg3_hwclock_to_timestamp(tp, hwclock, &timestamp);
+ skb_tstamp_tx(tp->tx_tstamp_skb, &timestamp);
+ goto done;
+ }
+ tp->ptp_txts_retrycnt++;
+ return HZ / 10;
+done:
+ dev_consume_skb_any(tp->tx_tstamp_skb);
+ tp->tx_tstamp_skb = NULL;
+ tp->ptp_txts_retrycnt = 0;
+ tp->pre_tx_ts = 0;
+ return -1;
+}
+
static const struct ptp_clock_info tg3_ptp_caps = {
.owner = THIS_MODULE,
.name = "tg3 clock",
@@ -6325,19 +6365,12 @@ static const struct ptp_clock_info tg3_ptp_caps = {
.pps = 0,
.adjfine = tg3_ptp_adjfine,
.adjtime = tg3_ptp_adjtime,
+ .do_aux_work = tg3_ptp_ts_aux_work,
.gettimex64 = tg3_ptp_gettimex,
.settime64 = tg3_ptp_settime,
.enable = tg3_ptp_enable,
};
-static void tg3_hwclock_to_timestamp(struct tg3 *tp, u64 hwclock,
- struct skb_shared_hwtstamps *timestamp)
-{
- memset(timestamp, 0, sizeof(struct skb_shared_hwtstamps));
- timestamp->hwtstamp = ns_to_ktime((hwclock & TG3_TSTAMP_MASK) +
- tp->ptp_adjust);
-}
-
/* tp->lock must be held */
static void tg3_ptp_init(struct tg3 *tp)
{
@@ -6368,6 +6401,8 @@ static void tg3_ptp_fini(struct tg3 *tp)
ptp_clock_unregister(tp->ptp_clock);
tp->ptp_clock = NULL;
tp->ptp_adjust = 0;
+ dev_consume_skb_any(tp->tx_tstamp_skb);
+ tp->tx_tstamp_skb = NULL;
}
static inline int tg3_irq_sync(struct tg3 *tp)
@@ -6538,6 +6573,7 @@ static void tg3_tx(struct tg3_napi *tnapi)
while (sw_idx != hw_idx) {
struct tg3_tx_ring_info *ri = &tnapi->tx_buffers[sw_idx];
+ bool complete_skb_later = false;
struct sk_buff *skb = ri->skb;
int i, tx_bug = 0;
@@ -6548,12 +6584,17 @@ static void tg3_tx(struct tg3_napi *tnapi)
if (tnapi->tx_ring[sw_idx].len_flags & TXD_FLAG_HWTSTAMP) {
struct skb_shared_hwtstamps timestamp;
- u64 hwclock = tr32(TG3_TX_TSTAMP_LSB);
- hwclock |= (u64)tr32(TG3_TX_TSTAMP_MSB) << 32;
-
- tg3_hwclock_to_timestamp(tp, hwclock, &timestamp);
+ u64 hwclock;
- skb_tstamp_tx(skb, &timestamp);
+ tg3_read_tx_tstamp(tp, &hwclock);
+ if (hwclock != tp->pre_tx_ts) {
+ tg3_hwclock_to_timestamp(tp, hwclock, &timestamp);
+ skb_tstamp_tx(skb, &timestamp);
+ tp->pre_tx_ts = 0;
+ } else {
+ tp->tx_tstamp_skb = skb;
+ complete_skb_later = true;
+ }
}
dma_unmap_single(&tp->pdev->dev, dma_unmap_addr(ri, mapping),
@@ -6591,7 +6632,10 @@ static void tg3_tx(struct tg3_napi *tnapi)
pkts_compl++;
bytes_compl += skb->len;
- dev_consume_skb_any(skb);
+ if (!complete_skb_later)
+ dev_consume_skb_any(skb);
+ else
+ ptp_schedule_worker(tp->ptp_clock, 0);
if (unlikely(tx_bug)) {
tg3_tx_recover(tp);
@@ -8028,8 +8072,13 @@ static netdev_tx_t tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
if ((unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) &&
tg3_flag(tp, TX_TSTAMP_EN)) {
- skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
- base_flags |= TXD_FLAG_HWTSTAMP;
+ tg3_full_lock(tp, 0);
+ if (!tp->pre_tx_ts) {
+ skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ base_flags |= TXD_FLAG_HWTSTAMP;
+ tg3_read_tx_tstamp(tp, &tp->pre_tx_ts);
+ }
+ tg3_full_unlock(tp);
}
len = skb_headlen(skb);
diff --git a/drivers/net/ethernet/broadcom/tg3.h b/drivers/net/ethernet/broadcom/tg3.h
index 1000c894064f..ae5c01bd1110 100644
--- a/drivers/net/ethernet/broadcom/tg3.h
+++ b/drivers/net/ethernet/broadcom/tg3.h
@@ -3190,6 +3190,7 @@ struct tg3 {
struct ptp_clock_info ptp_info;
struct ptp_clock *ptp_clock;
s64 ptp_adjust;
+ u8 ptp_txts_retrycnt;
/* begin "tx thread" cacheline section */
void (*write32_tx_mbox) (struct tg3 *, u32,
@@ -3372,6 +3373,8 @@ struct tg3 {
struct tg3_hw_stats *hw_stats;
dma_addr_t stats_mapping;
struct work_struct reset_task;
+ struct sk_buff *tx_tstamp_skb;
+ u64 pre_tx_ts;
int nvram_lock_cnt;
u32 nvram_size;
diff --git a/drivers/net/ethernet/brocade/bna/bfa_ioc.c b/drivers/net/ethernet/brocade/bna/bfa_ioc.c
index b07522ac3e74..9c80ab07a735 100644
--- a/drivers/net/ethernet/brocade/bna/bfa_ioc.c
+++ b/drivers/net/ethernet/brocade/bna/bfa_ioc.c
@@ -2839,7 +2839,7 @@ bfa_ioc_get_adapter_optrom_ver(struct bfa_ioc *ioc, char *optrom_ver)
static void
bfa_ioc_get_adapter_manufacturer(struct bfa_ioc *ioc, char *manufacturer)
{
- strncpy(manufacturer, BFA_MFG_NAME, BFA_ADAPTER_MFG_NAME_LEN);
+ strscpy_pad(manufacturer, BFA_MFG_NAME, BFA_ADAPTER_MFG_NAME_LEN);
}
static void
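
The strncpy() -> strscpy()/strscpy_pad() conversions here and in the liquidio hunks below change termination semantics: strncpy() leaves the destination unterminated when the source fills the buffer, while strscpy() always NUL-terminates and reports truncation (-E2BIG), with strscpy_pad() additionally zero-filling the tail. A standalone sketch of that behaviour; local_strscpy() only mimics the kernel function for the demo and is not its implementation:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>

static ssize_t local_strscpy(char *dst, const char *src, size_t size)
{
	size_t len = strnlen(src, size);

	if (!size)
		return -1;
	if (len == size) {			/* source does not fit: truncate, terminate */
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';
		return -1;			/* kernel strscpy() returns -E2BIG here */
	}
	memcpy(dst, src, len + 1);		/* fits: copy including the NUL */
	return (ssize_t)len;
}

int main(void)
{
	char a[8], b[8];
	const char *src = "0123456789";		/* longer than either buffer */

	strncpy(a, src, sizeof(a));		/* a holds 8 bytes, no terminator */
	local_strscpy(b, src, sizeof(b));	/* b is "0123456" plus NUL */

	printf("strscpy-style result: \"%s\"\n", b);
	printf("strncpy terminated?   %s\n",
	       strnlen(a, sizeof(a)) < sizeof(a) ? "yes" : "no");
	return 0;
}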
diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
index b940dcd3ace6..cebae0f418f2 100644
--- a/drivers/net/ethernet/cadence/macb_main.c
+++ b/drivers/net/ethernet/cadence/macb_main.c
@@ -5156,7 +5156,7 @@ err_disable_clocks:
return err;
}
-static int macb_remove(struct platform_device *pdev)
+static void macb_remove(struct platform_device *pdev)
{
struct net_device *dev;
struct macb *bp;
@@ -5181,8 +5181,6 @@ static int macb_remove(struct platform_device *pdev)
phylink_destroy(bp->phylink);
free_netdev(dev);
}
-
- return 0;
}
static int __maybe_unused macb_suspend(struct device *dev)
@@ -5398,7 +5396,7 @@ static const struct dev_pm_ops macb_pm_ops = {
static struct platform_driver macb_driver = {
.probe = macb_probe,
- .remove = macb_remove,
+ .remove_new = macb_remove,
.driver = {
.name = "macb",
.of_match_table = of_match_ptr(macb_dt_ids),
diff --git a/drivers/net/ethernet/calxeda/xgmac.c b/drivers/net/ethernet/calxeda/xgmac.c
index f4f87dfa9687..5e97f1e4e38e 100644
--- a/drivers/net/ethernet/calxeda/xgmac.c
+++ b/drivers/net/ethernet/calxeda/xgmac.c
@@ -1820,7 +1820,7 @@ err_alloc:
* changes the link status, releases the DMA descriptor rings,
* unregisters the MDIO bus and unmaps the allocated memory.
*/
-static int xgmac_remove(struct platform_device *pdev)
+static void xgmac_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct xgmac_priv *priv = netdev_priv(ndev);
@@ -1840,8 +1840,6 @@ static int xgmac_remove(struct platform_device *pdev)
release_mem_region(res->start, resource_size(res));
free_netdev(ndev);
-
- return 0;
}
#ifdef CONFIG_PM_SLEEP
@@ -1921,7 +1919,7 @@ static struct platform_driver xgmac_driver = {
.pm = &xgmac_pm_ops,
},
.probe = xgmac_probe,
- .remove = xgmac_remove,
+ .remove_new = xgmac_remove,
};
module_platform_driver(xgmac_driver);
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_ethtool.c b/drivers/net/ethernet/cavium/liquidio/lio_ethtool.c
index 9d56181a301f..d3e07b6ed5e1 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_ethtool.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_ethtool.c
@@ -442,10 +442,11 @@ lio_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo)
oct = lio->oct_dev;
memset(drvinfo, 0, sizeof(struct ethtool_drvinfo));
- strcpy(drvinfo->driver, "liquidio");
- strncpy(drvinfo->fw_version, oct->fw_info.liquidio_firmware_version,
- ETHTOOL_FWVERS_LEN);
- strncpy(drvinfo->bus_info, pci_name(oct->pci_dev), 32);
+ strscpy(drvinfo->driver, "liquidio", sizeof(drvinfo->driver));
+ strscpy(drvinfo->fw_version, oct->fw_info.liquidio_firmware_version,
+ sizeof(drvinfo->fw_version));
+ strscpy(drvinfo->bus_info, pci_name(oct->pci_dev),
+ sizeof(drvinfo->bus_info));
}
static void
@@ -458,10 +459,11 @@ lio_get_vf_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo)
oct = lio->oct_dev;
memset(drvinfo, 0, sizeof(struct ethtool_drvinfo));
- strcpy(drvinfo->driver, "liquidio_vf");
- strncpy(drvinfo->fw_version, oct->fw_info.liquidio_firmware_version,
- ETHTOOL_FWVERS_LEN);
- strncpy(drvinfo->bus_info, pci_name(oct->pci_dev), 32);
+ strscpy(drvinfo->driver, "liquidio_vf", sizeof(drvinfo->driver));
+ strscpy(drvinfo->fw_version, oct->fw_info.liquidio_firmware_version,
+ sizeof(drvinfo->fw_version));
+ strscpy(drvinfo->bus_info, pci_name(oct->pci_dev),
+ sizeof(drvinfo->bus_info));
}
static int
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
index 100daadbea2a..34f02a8ec2ca 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
@@ -1689,7 +1689,7 @@ static int load_firmware(struct octeon_device *oct)
if (fw_type_is_auto()) {
tmp_fw_type = LIO_FW_NAME_TYPE_NIC;
- strncpy(fw_type, tmp_fw_type, sizeof(fw_type));
+ strscpy_pad(fw_type, tmp_fw_type, sizeof(fw_type));
} else {
tmp_fw_type = fw_type;
}
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c b/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c
index 600de587d7a9..aa6c0dfb6f1c 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_vf_rep.c
@@ -638,7 +638,8 @@ lio_vf_rep_netdev_event(struct notifier_block *nb,
memset(&rep_cfg, 0, sizeof(rep_cfg));
rep_cfg.req_type = LIO_VF_REP_REQ_DEVNAME;
rep_cfg.ifidx = vf_rep->ifidx;
- strncpy(rep_cfg.rep_name.name, ndev->name, LIO_IF_NAME_SIZE);
+ strscpy(rep_cfg.rep_name.name, ndev->name,
+ sizeof(rep_cfg.rep_name.name));
ret = lio_vf_rep_send_soft_command(oct, &rep_cfg,
sizeof(rep_cfg), NULL, 0);
diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_device.c b/drivers/net/ethernet/cavium/liquidio/octeon_device.c
index 364f4f912dc2..6b6cb73482d7 100644
--- a/drivers/net/ethernet/cavium/liquidio/octeon_device.c
+++ b/drivers/net/ethernet/cavium/liquidio/octeon_device.c
@@ -1217,10 +1217,10 @@ int octeon_core_drv_init(struct octeon_recv_info *recv_info, void *buf)
goto core_drv_init_err;
}
- strncpy(app_name,
+ strscpy(app_name,
get_oct_app_string(
(u32)recv_pkt->rh.r_core_drv_init.app_mode),
- sizeof(app_name) - 1);
+ sizeof(app_name));
oct->app_mode = (u32)recv_pkt->rh.r_core_drv_init.app_mode;
if (recv_pkt->rh.r_core_drv_init.app_mode == CVM_DRV_NIC_APP) {
oct->fw_info.max_nic_ports =
@@ -1257,9 +1257,10 @@ int octeon_core_drv_init(struct octeon_recv_info *recv_info, void *buf)
memcpy(cs, get_rbd(
recv_pkt->buffer_ptr[0]) + OCT_DROQ_INFO_SIZE, sizeof(*cs));
- strncpy(oct->boardinfo.name, cs->boardname, OCT_BOARD_NAME);
- strncpy(oct->boardinfo.serial_number, cs->board_serial_number,
- OCT_SERIAL_LEN);
+ strscpy(oct->boardinfo.name, cs->boardname,
+ sizeof(oct->boardinfo.name));
+ strscpy(oct->boardinfo.serial_number, cs->board_serial_number,
+ sizeof(oct->boardinfo.serial_number));
octeon_swap_8B_data((u64 *)cs, (sizeof(*cs) >> 3));
diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
index edde0b8fa49c..007d4b06819e 100644
--- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
@@ -1521,7 +1521,7 @@ err:
return result;
}
-static int octeon_mgmt_remove(struct platform_device *pdev)
+static void octeon_mgmt_remove(struct platform_device *pdev)
{
struct net_device *netdev = platform_get_drvdata(pdev);
struct octeon_mgmt *p = netdev_priv(netdev);
@@ -1529,7 +1529,6 @@ static int octeon_mgmt_remove(struct platform_device *pdev)
unregister_netdev(netdev);
of_node_put(p->phy_np);
free_netdev(netdev);
- return 0;
}
static const struct of_device_id octeon_mgmt_match[] = {
@@ -1546,7 +1545,7 @@ static struct platform_driver octeon_mgmt_driver = {
.of_match_table = octeon_mgmt_match,
},
.probe = octeon_mgmt_probe,
- .remove = octeon_mgmt_remove,
+ .remove_new = octeon_mgmt_remove,
};
module_platform_driver(octeon_mgmt_driver);
diff --git a/drivers/net/ethernet/chelsio/cxgb3/l2t.h b/drivers/net/ethernet/chelsio/cxgb3/l2t.h
index ea75f275023f..646ca0bc25bd 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/l2t.h
+++ b/drivers/net/ethernet/chelsio/cxgb3/l2t.h
@@ -76,7 +76,7 @@ struct l2t_data {
atomic_t nfree; /* number of free entries */
rwlock_t lock;
struct rcu_head rcu_head; /* to handle rcu cleanup */
- struct l2t_entry l2tab[];
+ struct l2t_entry l2tab[] __counted_by(nentries);
};
typedef void (*arp_failure_handler_func)(struct t3cdev * dev,
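
The __counted_by() annotations added to this flexible-array member and to the cxgb4 clip_tbl, u32 jump table, l2t, sched and smt structures below tell the compiler which field holds the array's element count, so FORTIFY_SOURCE and UBSAN array-bounds checks can use the runtime size. A self-contained sketch of the pattern; the fallback #define mirrors the kernel's compiler_attributes.h handling for toolchains without the attribute, and l2_table/l2_table_alloc are illustrative names:

#include <stdlib.h>

#ifndef __has_attribute
#define __has_attribute(x) 0
#endif
#if __has_attribute(__counted_by__)
#define __counted_by(member) __attribute__((__counted_by__(member)))
#else
#define __counted_by(member)
#endif

struct l2_table {
	unsigned int nentries;			/* element count for l2tab[] */
	int l2tab[] __counted_by(nentries);	/* bounds checks read nentries */
};

static struct l2_table *l2_table_alloc(unsigned int n)
{
	struct l2_table *t = calloc(1, sizeof(*t) + n * sizeof(t->l2tab[0]));

	if (t)
		t->nentries = n;		/* keep the counter in sync with the allocation */
	return t;
}

int main(void)
{
	struct l2_table *t = l2_table_alloc(16);

	if (t)
		t->l2tab[15] = 1;		/* in bounds; index 16 would trip UBSAN with the attribute */
	free(t);
	return 0;
}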
diff --git a/drivers/net/ethernet/chelsio/cxgb3/sge.c b/drivers/net/ethernet/chelsio/cxgb3/sge.c
index 2e9a74fe0970..6268f96cb4aa 100644
--- a/drivers/net/ethernet/chelsio/cxgb3/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb3/sge.c
@@ -2501,14 +2501,6 @@ static int napi_rx_handler(struct napi_struct *napi, int budget)
return work_done;
}
-/*
- * Returns true if the device is already scheduled for polling.
- */
-static inline int napi_is_scheduled(struct napi_struct *napi)
-{
- return test_bit(NAPI_STATE_SCHED, &napi->state);
-}
-
/**
* process_pure_responses - process pure responses from a response queue
* @adap: the adapter
@@ -2674,12 +2666,7 @@ static int rspq_check_napi(struct sge_qset *qs)
{
struct sge_rspq *q = &qs->rspq;
- if (!napi_is_scheduled(&qs->napi) &&
- is_new_response(&q->desc[q->cidx], q)) {
- napi_schedule(&qs->napi);
- return 1;
- }
- return 0;
+ return is_new_response(&q->desc[q->cidx], q) && napi_schedule(&qs->napi);
}
/*
diff --git a/drivers/net/ethernet/chelsio/cxgb4/clip_tbl.h b/drivers/net/ethernet/chelsio/cxgb4/clip_tbl.h
index 290c1058069a..847c7fc2bbd9 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/clip_tbl.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/clip_tbl.h
@@ -29,7 +29,7 @@ struct clip_tbl {
atomic_t nfree;
struct list_head ce_free_head;
void *cl_list;
- struct list_head hash_list[];
+ struct list_head hash_list[] __counted_by(clipt_size);
};
enum {
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32_parse.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32_parse.h
index f59dd4b2ae6f..9050568a034c 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32_parse.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_u32_parse.h
@@ -331,6 +331,6 @@ struct cxgb4_link {
struct cxgb4_tc_u32_table {
unsigned int size; /* number of entries in table */
- struct cxgb4_link table[]; /* Jump table */
+ struct cxgb4_link table[] __counted_by(size); /* Jump table */
};
#endif /* __CXGB4_TC_U32_PARSE_H */
diff --git a/drivers/net/ethernet/chelsio/cxgb4/l2t.c b/drivers/net/ethernet/chelsio/cxgb4/l2t.c
index a10a6862a9a4..1e5f5b1a22a6 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/l2t.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/l2t.c
@@ -59,7 +59,7 @@ struct l2t_data {
rwlock_t lock;
atomic_t nfree; /* number of free entries */
struct l2t_entry *rover; /* starting point for next allocation */
- struct l2t_entry l2tab[]; /* MUST BE LAST */
+ struct l2t_entry l2tab[] __counted_by(l2t_size); /* MUST BE LAST */
};
static inline unsigned int vlan_prio(const struct l2t_entry *e)
diff --git a/drivers/net/ethernet/chelsio/cxgb4/sched.h b/drivers/net/ethernet/chelsio/cxgb4/sched.h
index 5f8b871d79af..6b3c778815f0 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/sched.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/sched.h
@@ -82,7 +82,7 @@ struct sched_class {
struct sched_table { /* per port scheduling table */
u8 sched_size;
- struct sched_class tab[];
+ struct sched_class tab[] __counted_by(sched_size);
};
static inline bool can_sched(struct net_device *dev)
diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
index 98dd78551d89..b5ff2e1a9975 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
@@ -4261,7 +4261,7 @@ static void sge_rx_timer_cb(struct timer_list *t)
if (fl_starving(adap, fl)) {
rxq = container_of(fl, struct sge_eth_rxq, fl);
- if (napi_reschedule(&rxq->rspq.napi))
+ if (napi_schedule(&rxq->rspq.napi))
fl->starving++;
else
set_bit(id, s->starving_fl);
diff --git a/drivers/net/ethernet/chelsio/cxgb4/smt.h b/drivers/net/ethernet/chelsio/cxgb4/smt.h
index 541249d78914..109c1dff563a 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/smt.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/smt.h
@@ -66,7 +66,7 @@ struct smt_entry {
struct smt_data {
unsigned int smt_size;
rwlock_t lock;
- struct smt_entry smtab[];
+ struct smt_entry smtab[] __counted_by(smt_size);
};
struct smt_data *t4_init_smt(void);
diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c
index 2d0cf76fb3c5..5b1d746e6563 100644
--- a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c
@@ -2094,7 +2094,7 @@ static void sge_rx_timer_cb(struct timer_list *t)
struct sge_eth_rxq *rxq;
rxq = container_of(fl, struct sge_eth_rxq, fl);
- if (napi_reschedule(&rxq->rspq.napi))
+ if (napi_schedule(&rxq->rspq.napi))
fl->starving++;
else
set_bit(id, s->starving_fl);
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
index bcdc7fc2f427..6482728794dd 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.c
@@ -361,9 +361,7 @@ static void chcr_ktls_dev_del(struct net_device *netdev,
struct tls_context *tls_ctx,
enum tls_offload_ctx_dir direction)
{
- struct chcr_ktls_ofld_ctx_tx *tx_ctx =
- chcr_get_ktls_tx_context(tls_ctx);
- struct chcr_ktls_info *tx_info = tx_ctx->chcr_info;
+ struct chcr_ktls_info *tx_info = chcr_get_ktls_tx_info(tls_ctx);
struct ch_ktls_port_stats_debug *port_stats;
struct chcr_ktls_uld_ctx *u_ctx;
@@ -396,7 +394,7 @@ static void chcr_ktls_dev_del(struct net_device *netdev,
port_stats = &tx_info->adap->ch_ktls_stats.ktls_port[tx_info->port_id];
atomic64_inc(&port_stats->ktls_tx_connection_close);
kvfree(tx_info);
- tx_ctx->chcr_info = NULL;
+ chcr_set_ktls_tx_info(tls_ctx, NULL);
/* release module refcount */
module_put(THIS_MODULE);
}
@@ -417,7 +415,6 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
{
struct tls_context *tls_ctx = tls_get_ctx(sk);
struct ch_ktls_port_stats_debug *port_stats;
- struct chcr_ktls_ofld_ctx_tx *tx_ctx;
struct chcr_ktls_uld_ctx *u_ctx;
struct chcr_ktls_info *tx_info;
struct dst_entry *dst;
@@ -427,8 +424,6 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
u8 daaddr[16];
int ret = -1;
- tx_ctx = chcr_get_ktls_tx_context(tls_ctx);
-
pi = netdev_priv(netdev);
adap = pi->adapter;
port_stats = &adap->ch_ktls_stats.ktls_port[pi->port_id];
@@ -440,7 +435,7 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
goto out;
}
- if (tx_ctx->chcr_info)
+ if (chcr_get_ktls_tx_info(tls_ctx))
goto out;
if (u_ctx && u_ctx->detach)
@@ -566,7 +561,7 @@ static int chcr_ktls_dev_add(struct net_device *netdev, struct sock *sk,
goto free_tid;
atomic64_inc(&port_stats->ktls_tx_ctx);
- tx_ctx->chcr_info = tx_info;
+ chcr_set_ktls_tx_info(tls_ctx, tx_info);
return 0;
@@ -647,7 +642,7 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
{
const struct cpl_act_open_rpl *p = (void *)input;
struct chcr_ktls_info *tx_info = NULL;
- struct chcr_ktls_ofld_ctx_tx *tx_ctx;
+ struct tls_offload_context_tx *tx_ctx;
struct chcr_ktls_uld_ctx *u_ctx;
unsigned int atid, tid, status;
struct tls_context *tls_ctx;
@@ -686,7 +681,7 @@ static int chcr_ktls_cpl_act_open_rpl(struct adapter *adap,
cxgb4_insert_tid(t, tx_info, tx_info->tid, tx_info->ip_family);
/* Adding tid */
tls_ctx = tls_get_ctx(tx_info->sk);
- tx_ctx = chcr_get_ktls_tx_context(tls_ctx);
+ tx_ctx = tls_offload_ctx_tx(tls_ctx);
u_ctx = adap->uld[CXGB4_ULD_KTLS].handle;
if (u_ctx) {
ret = xa_insert_bh(&u_ctx->tid_list, tid, tx_ctx,
@@ -1924,7 +1919,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
{
u32 tls_end_offset, tcp_seq, skb_data_len, skb_offset;
struct ch_ktls_port_stats_debug *port_stats;
- struct chcr_ktls_ofld_ctx_tx *tx_ctx;
+ struct tls_offload_context_tx *tx_ctx;
struct ch_ktls_stats_debug *stats;
struct tcphdr *th = tcp_hdr(skb);
int data_len, qidx, ret = 0, mss;
@@ -1944,6 +1939,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
mss = skb_is_gso(skb) ? skb_shinfo(skb)->gso_size : data_len;
tls_ctx = tls_get_ctx(skb->sk);
+ tx_ctx = tls_offload_ctx_tx(tls_ctx);
tls_netdev = rcu_dereference_bh(tls_ctx->netdev);
/* Don't quit on NULL: if tls_device_down is running in parallel,
* netdev might become NULL, even if tls_is_skb_tx_device_offloaded was
@@ -1952,8 +1948,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
if (unlikely(tls_netdev && tls_netdev != dev))
goto out;
- tx_ctx = chcr_get_ktls_tx_context(tls_ctx);
- tx_info = tx_ctx->chcr_info;
+ tx_info = chcr_get_ktls_tx_info(tls_ctx);
if (unlikely(!tx_info))
goto out;
@@ -1979,19 +1974,19 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
* we will send the complete record again.
*/
- spin_lock_irqsave(&tx_ctx->base.lock, flags);
+ spin_lock_irqsave(&tx_ctx->lock, flags);
do {
cxgb4_reclaim_completed_tx(adap, &q->q, true);
/* fetch the tls record */
- record = tls_get_record(&tx_ctx->base, tcp_seq,
+ record = tls_get_record(tx_ctx, tcp_seq,
&tx_info->record_no);
/* By the time packet reached to us, ACK is received, and record
* won't be found in that case, handle it gracefully.
*/
if (unlikely(!record)) {
- spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
+ spin_unlock_irqrestore(&tx_ctx->lock, flags);
atomic64_inc(&port_stats->ktls_tx_drop_no_sync_data);
goto out;
}
@@ -2015,7 +2010,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
tls_end_offset !=
record->len);
if (ret) {
- spin_unlock_irqrestore(&tx_ctx->base.lock,
+ spin_unlock_irqrestore(&tx_ctx->lock,
flags);
goto out;
}
@@ -2046,7 +2041,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
/* free the refcount taken earlier */
if (tls_end_offset < data_len)
dev_kfree_skb_any(skb);
- spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
+ spin_unlock_irqrestore(&tx_ctx->lock, flags);
goto out;
}
@@ -2082,7 +2077,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
/* if any failure, come out from the loop. */
if (ret) {
- spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
+ spin_unlock_irqrestore(&tx_ctx->lock, flags);
if (th->fin)
dev_kfree_skb_any(skb);
@@ -2097,7 +2092,7 @@ static int chcr_ktls_xmit(struct sk_buff *skb, struct net_device *dev)
} while (data_len > 0);
- spin_unlock_irqrestore(&tx_ctx->base.lock, flags);
+ spin_unlock_irqrestore(&tx_ctx->lock, flags);
atomic64_inc(&port_stats->ktls_tx_encrypted_packets);
atomic64_add(skb_data_len, &port_stats->ktls_tx_encrypted_bytes);
@@ -2185,17 +2180,17 @@ static void clear_conn_resources(struct chcr_ktls_info *tx_info)
static void ch_ktls_reset_all_conn(struct chcr_ktls_uld_ctx *u_ctx)
{
struct ch_ktls_port_stats_debug *port_stats;
- struct chcr_ktls_ofld_ctx_tx *tx_ctx;
+ struct tls_offload_context_tx *tx_ctx;
struct chcr_ktls_info *tx_info;
unsigned long index;
xa_for_each(&u_ctx->tid_list, index, tx_ctx) {
- tx_info = tx_ctx->chcr_info;
+ tx_info = __chcr_get_ktls_tx_info(tx_ctx);
clear_conn_resources(tx_info);
port_stats = &tx_info->adap->ch_ktls_stats.ktls_port[tx_info->port_id];
atomic64_inc(&port_stats->ktls_tx_connection_close);
kvfree(tx_info);
- tx_ctx->chcr_info = NULL;
+ memset(tx_ctx->driver_state, 0, TLS_DRIVER_STATE_SIZE_TX);
/* release module refcount */
module_put(THIS_MODULE);
}
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h
index 10572dc55365..dbbba92bf540 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h
+++ b/drivers/net/ethernet/chelsio/inline_crypto/ch_ktls/chcr_ktls.h
@@ -67,8 +67,7 @@ struct chcr_ktls_info {
bool pending_close;
};
-struct chcr_ktls_ofld_ctx_tx {
- struct tls_offload_context_tx base;
+struct chcr_ktls_ctx_tx {
struct chcr_ktls_info *chcr_info;
};
@@ -79,14 +78,33 @@ struct chcr_ktls_uld_ctx {
bool detach;
};
-static inline struct chcr_ktls_ofld_ctx_tx *
-chcr_get_ktls_tx_context(struct tls_context *tls_ctx)
+static inline struct chcr_ktls_info *
+__chcr_get_ktls_tx_info(struct tls_offload_context_tx *octx)
{
- BUILD_BUG_ON(sizeof(struct chcr_ktls_ofld_ctx_tx) >
- TLS_OFFLOAD_CONTEXT_SIZE_TX);
- return container_of(tls_offload_ctx_tx(tls_ctx),
- struct chcr_ktls_ofld_ctx_tx,
- base);
+ struct chcr_ktls_ctx_tx *priv_ctx;
+
+ BUILD_BUG_ON(sizeof(struct chcr_ktls_ctx_tx) > TLS_DRIVER_STATE_SIZE_TX);
+ priv_ctx = (struct chcr_ktls_ctx_tx *)octx->driver_state;
+ return priv_ctx->chcr_info;
+}
+
+static inline struct chcr_ktls_info *
+chcr_get_ktls_tx_info(struct tls_context *tls_ctx)
+{
+ struct chcr_ktls_ctx_tx *priv_ctx;
+
+ BUILD_BUG_ON(sizeof(struct chcr_ktls_ctx_tx) > TLS_DRIVER_STATE_SIZE_TX);
+ priv_ctx = (struct chcr_ktls_ctx_tx *)__tls_driver_ctx(tls_ctx, TLS_OFFLOAD_CTX_DIR_TX);
+ return priv_ctx->chcr_info;
+}
+
+static inline void
+chcr_set_ktls_tx_info(struct tls_context *tls_ctx, struct chcr_ktls_info *chcr_info)
+{
+ struct chcr_ktls_ctx_tx *priv_ctx;
+
+ priv_ctx = __tls_driver_ctx(tls_ctx, TLS_OFFLOAD_CTX_DIR_TX);
+ priv_ctx->chcr_info = chcr_info;
}
static inline int chcr_get_first_rx_qid(struct adapter *adap)
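For illustration (commentary, not part of the patch): instead of wrapping struct tls_offload_context_tx in a driver-private structure and using container_of(), the driver now keeps its per-connection pointer in the TLS core's driver_state scratch area, reached via __tls_driver_ctx() and bounded by TLS_DRIVER_STATE_SIZE_TX. A minimal sketch of the same accessor pattern for a hypothetical "foo" driver (all foo_* names are illustrative):

#include <linux/build_bug.h>
#include <net/tls.h>

struct foo_conn;				/* hypothetical per-connection state */

struct foo_tls_priv {
	struct foo_conn *conn;
};

static inline struct foo_tls_priv *foo_tls_priv(struct tls_context *tls_ctx)
{
	/* the private struct must fit in the TX driver_state area */
	BUILD_BUG_ON(sizeof(struct foo_tls_priv) > TLS_DRIVER_STATE_SIZE_TX);
	return (struct foo_tls_priv *)__tls_driver_ctx(tls_ctx,
						       TLS_OFFLOAD_CTX_DIR_TX);
}

The benefit, as in the chcr conversion above, is that the driver no longer needs its own context type layered on top of the core's TX offload context.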
diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
index 7750702900fa..6f6525983130 100644
--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
+++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_cm.c
@@ -2259,7 +2259,7 @@ static void chtls_rx_ack(struct sock *sk, struct sk_buff *skb)
if (tp->snd_una != snd_una) {
tp->snd_una = snd_una;
- tp->rcv_tstamp = tcp_time_stamp(tp);
+ tp->rcv_tstamp = tcp_jiffies32;
if (tp->snd_una == tp->snd_nxt &&
!csk_flag_nochk(csk, CSK_TX_FAILOVER))
csk_reset_flag(csk, CSK_TX_WAIT_IDLE);
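A note on the chtls change above (not part of the patch): tp->rcv_tstamp feeds the stack's keepalive/idle accounting, which measures elapsed time against the jiffies-based TCP clock, so the offload ACK path now stamps it with tcp_jiffies32 to keep the units consistent. For reference, assuming the current header definitions:

/* include/net/tcp.h: 32-bit jiffies snapshot used for fields like rcv_tstamp */
#define tcp_jiffies32 ((u32)jiffies)

/* keepalive-style consumers compute idle time in the same unit, roughly: */
u32 idle = tcp_jiffies32 - tp->rcv_tstamp;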
diff --git a/drivers/net/ethernet/cirrus/cs89x0.c b/drivers/net/ethernet/cirrus/cs89x0.c
index d323c5c23521..0a21a10a791c 100644
--- a/drivers/net/ethernet/cirrus/cs89x0.c
+++ b/drivers/net/ethernet/cirrus/cs89x0.c
@@ -1879,7 +1879,7 @@ free:
return err;
}
-static int cs89x0_platform_remove(struct platform_device *pdev)
+static void cs89x0_platform_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
@@ -1889,7 +1889,6 @@ static int cs89x0_platform_remove(struct platform_device *pdev)
*/
unregister_netdev(dev);
free_netdev(dev);
- return 0;
}
static const struct of_device_id __maybe_unused cs89x0_match[] = {
@@ -1904,7 +1903,7 @@ static struct platform_driver cs89x0_driver = {
.name = DRV_NAME,
.of_match_table = of_match_ptr(cs89x0_match),
},
- .remove = cs89x0_platform_remove,
+ .remove_new = cs89x0_platform_remove,
};
module_platform_driver_probe(cs89x0_driver, cs89x0_platform_probe);
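The cs89x0 hunk above is the first of many conversions in this pull to the void-returning platform remove callback: .remove_new() receives the same struct platform_device but returns nothing, reflecting that the platform core cannot meaningfully act on an error at remove time. A minimal sketch of the shape every converted driver below follows (the "foo" driver is hypothetical):

#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	/* acquire resources, platform_set_drvdata(pdev, ...) */
	return 0;
}

static void foo_remove(struct platform_device *pdev)
{
	/* release resources; there is no error code to return any more */
}

static struct platform_driver foo_driver = {
	.probe      = foo_probe,
	.remove_new = foo_remove,
	.driver     = { .name = "foo" },
};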
diff --git a/drivers/net/ethernet/cirrus/ep93xx_eth.c b/drivers/net/ethernet/cirrus/ep93xx_eth.c
index 8627ab19d470..1c2a540db13d 100644
--- a/drivers/net/ethernet/cirrus/ep93xx_eth.c
+++ b/drivers/net/ethernet/cirrus/ep93xx_eth.c
@@ -757,7 +757,7 @@ static struct net_device *ep93xx_dev_alloc(struct ep93xx_eth_data *data)
}
-static int ep93xx_eth_remove(struct platform_device *pdev)
+static void ep93xx_eth_remove(struct platform_device *pdev)
{
struct net_device *dev;
struct ep93xx_priv *ep;
@@ -765,7 +765,7 @@ static int ep93xx_eth_remove(struct platform_device *pdev)
dev = platform_get_drvdata(pdev);
if (dev == NULL)
- return 0;
+ return;
ep = netdev_priv(dev);
@@ -782,8 +782,6 @@ static int ep93xx_eth_remove(struct platform_device *pdev)
}
free_netdev(dev);
-
- return 0;
}
static int ep93xx_eth_probe(struct platform_device *pdev)
@@ -862,7 +860,7 @@ err_out:
static struct platform_driver ep93xx_eth_driver = {
.probe = ep93xx_eth_probe,
- .remove = ep93xx_eth_remove,
+ .remove_new = ep93xx_eth_remove,
.driver = {
.name = "ep93xx-eth",
},
diff --git a/drivers/net/ethernet/cirrus/mac89x0.c b/drivers/net/ethernet/cirrus/mac89x0.c
index 21a70b1f0ac5..887876f35f10 100644
--- a/drivers/net/ethernet/cirrus/mac89x0.c
+++ b/drivers/net/ethernet/cirrus/mac89x0.c
@@ -556,19 +556,18 @@ static int set_mac_address(struct net_device *dev, void *addr)
MODULE_LICENSE("GPL");
-static int mac89x0_device_remove(struct platform_device *pdev)
+static void mac89x0_device_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
unregister_netdev(dev);
nubus_writew(0, dev->base_addr + ADD_PORT);
free_netdev(dev);
- return 0;
}
static struct platform_driver mac89x0_platform_driver = {
.probe = mac89x0_device_probe,
- .remove = mac89x0_device_remove,
+ .remove_new = mac89x0_device_remove,
.driver = {
.name = "mac89x0",
},
diff --git a/drivers/net/ethernet/cortina/gemini.c b/drivers/net/ethernet/cortina/gemini.c
index a8b9d1a3e4d5..5423fe26b4ef 100644
--- a/drivers/net/ethernet/cortina/gemini.c
+++ b/drivers/net/ethernet/cortina/gemini.c
@@ -2518,13 +2518,11 @@ unprepare:
return ret;
}
-static int gemini_ethernet_port_remove(struct platform_device *pdev)
+static void gemini_ethernet_port_remove(struct platform_device *pdev)
{
struct gemini_ethernet_port *port = platform_get_drvdata(pdev);
gemini_port_remove(port);
-
- return 0;
}
static const struct of_device_id gemini_ethernet_port_of_match[] = {
@@ -2541,7 +2539,7 @@ static struct platform_driver gemini_ethernet_port_driver = {
.of_match_table = gemini_ethernet_port_of_match,
},
.probe = gemini_ethernet_port_probe,
- .remove = gemini_ethernet_port_remove,
+ .remove_new = gemini_ethernet_port_remove,
};
static int gemini_ethernet_probe(struct platform_device *pdev)
@@ -2583,14 +2581,12 @@ static int gemini_ethernet_probe(struct platform_device *pdev)
return devm_of_platform_populate(dev);
}
-static int gemini_ethernet_remove(struct platform_device *pdev)
+static void gemini_ethernet_remove(struct platform_device *pdev)
{
struct gemini_ethernet *geth = platform_get_drvdata(pdev);
geth_cleanup_freeq(geth);
geth->initialized = false;
-
- return 0;
}
static const struct of_device_id gemini_ethernet_of_match[] = {
@@ -2607,7 +2603,7 @@ static struct platform_driver gemini_ethernet_driver = {
.of_match_table = gemini_ethernet_of_match,
},
.probe = gemini_ethernet_probe,
- .remove = gemini_ethernet_remove,
+ .remove_new = gemini_ethernet_remove,
};
static int __init gemini_ethernet_module_init(void)
diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
index 05a89ab6766c..150cc94ae9f8 100644
--- a/drivers/net/ethernet/davicom/dm9000.c
+++ b/drivers/net/ethernet/davicom/dm9000.c
@@ -1770,8 +1770,7 @@ static const struct dev_pm_ops dm9000_drv_pm_ops = {
.resume = dm9000_drv_resume,
};
-static int
-dm9000_drv_remove(struct platform_device *pdev)
+static void dm9000_drv_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct board_info *dm = to_dm9000_board(ndev);
@@ -1783,7 +1782,6 @@ dm9000_drv_remove(struct platform_device *pdev)
regulator_disable(dm->power_supply);
dev_dbg(&pdev->dev, "released and freed device\n");
- return 0;
}
#ifdef CONFIG_OF
@@ -1801,7 +1799,7 @@ static struct platform_driver dm9000_driver = {
.of_match_table = of_match_ptr(dm9000_of_matches),
},
.probe = dm9000_probe,
- .remove = dm9000_drv_remove,
+ .remove_new = dm9000_drv_remove,
};
module_platform_driver(dm9000_driver);
diff --git a/drivers/net/ethernet/dec/tulip/tulip.h b/drivers/net/ethernet/dec/tulip/tulip.h
index 0ed598dc7569..bd786dfbc066 100644
--- a/drivers/net/ethernet/dec/tulip/tulip.h
+++ b/drivers/net/ethernet/dec/tulip/tulip.h
@@ -381,7 +381,7 @@ struct mediatable {
unsigned has_reset:6;
u32 csr15dir;
u32 csr15val; /* 21143 NWay setting. */
- struct medialeaf mleaf[];
+ struct medialeaf mleaf[] __counted_by(leafcount);
};
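The __counted_by() annotation added to struct mediatable (and to the enetc and hns structures later in this diff) ties a flexible array to the member holding its element count, so CONFIG_FORTIFY_SOURCE and UBSAN bounds checking can police accesses at run time; the one contract is that the counter is assigned before the array is indexed. A small self-contained sketch of the idiom (hypothetical structure, not from this patch):

#include <linux/overflow.h>
#include <linux/slab.h>

struct fw_table {
	u32 n;
	u32 vals[] __counted_by(n);	/* bounds-checked against n */
};

static struct fw_table *fw_table_alloc(u32 n)
{
	struct fw_table *t = kzalloc(struct_size(t, vals, n), GFP_KERNEL);

	if (t)
		t->n = n;		/* set the counter before touching vals[] */
	return t;
}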
diff --git a/drivers/net/ethernet/dnet.c b/drivers/net/ethernet/dnet.c
index 151ca9573be9..2a18df3605f1 100644
--- a/drivers/net/ethernet/dnet.c
+++ b/drivers/net/ethernet/dnet.c
@@ -841,7 +841,7 @@ err_out_free_dev:
return err;
}
-static int dnet_remove(struct platform_device *pdev)
+static void dnet_remove(struct platform_device *pdev)
{
struct net_device *dev;
@@ -859,13 +859,11 @@ static int dnet_remove(struct platform_device *pdev)
free_irq(dev->irq, dev);
free_netdev(dev);
}
-
- return 0;
}
static struct platform_driver dnet_driver = {
.probe = dnet_probe,
- .remove = dnet_remove,
+ .remove_new = dnet_remove,
.driver = {
.name = "dnet",
},
diff --git a/drivers/net/ethernet/engleder/tsnep.h b/drivers/net/ethernet/engleder/tsnep.h
index 6e14c918e3fb..f188fba021a6 100644
--- a/drivers/net/ethernet/engleder/tsnep.h
+++ b/drivers/net/ethernet/engleder/tsnep.h
@@ -143,7 +143,7 @@ struct tsnep_rx {
struct tsnep_queue {
struct tsnep_adapter *adapter;
- char name[IFNAMSIZ + 9];
+ char name[IFNAMSIZ + 16];
struct tsnep_tx *tx;
struct tsnep_rx *rx;
diff --git a/drivers/net/ethernet/engleder/tsnep_hw.h b/drivers/net/ethernet/engleder/tsnep_hw.h
index 55e1caf193a6..64c97eb66f67 100644
--- a/drivers/net/ethernet/engleder/tsnep_hw.h
+++ b/drivers/net/ethernet/engleder/tsnep_hw.h
@@ -181,6 +181,8 @@ struct tsnep_gcl_operation {
#define TSNEP_DESC_SIZE 256
#define TSNEP_DESC_SIZE_DATA_AFTER 2048
#define TSNEP_DESC_OFFSET 128
+#define TSNEP_DESC_SIZE_DATA_AFTER_INLINE (64 - sizeof(struct tsnep_tx_desc) + \
+ sizeof_field(struct tsnep_tx_desc, tx))
#define TSNEP_DESC_OWNER_COUNTER_MASK 0xC0000000
#define TSNEP_DESC_OWNER_COUNTER_SHIFT 30
#define TSNEP_DESC_LENGTH_MASK 0x00003FFF
diff --git a/drivers/net/ethernet/engleder/tsnep_main.c b/drivers/net/ethernet/engleder/tsnep_main.c
index 8b992dc9bb52..df40c720e7b2 100644
--- a/drivers/net/ethernet/engleder/tsnep_main.c
+++ b/drivers/net/ethernet/engleder/tsnep_main.c
@@ -51,12 +51,22 @@
#define TSNEP_COALESCE_USECS_MAX ((ECM_INT_DELAY_MASK >> ECM_INT_DELAY_SHIFT) * \
ECM_INT_DELAY_BASE_US + ECM_INT_DELAY_BASE_US - 1)
-#define TSNEP_TX_TYPE_SKB BIT(0)
-#define TSNEP_TX_TYPE_SKB_FRAG BIT(1)
-#define TSNEP_TX_TYPE_XDP_TX BIT(2)
-#define TSNEP_TX_TYPE_XDP_NDO BIT(3)
-#define TSNEP_TX_TYPE_XDP (TSNEP_TX_TYPE_XDP_TX | TSNEP_TX_TYPE_XDP_NDO)
-#define TSNEP_TX_TYPE_XSK BIT(4)
+/* mapping type */
+#define TSNEP_TX_TYPE_MAP BIT(0)
+#define TSNEP_TX_TYPE_MAP_PAGE BIT(1)
+#define TSNEP_TX_TYPE_INLINE BIT(2)
+/* buffer type */
+#define TSNEP_TX_TYPE_SKB BIT(8)
+#define TSNEP_TX_TYPE_SKB_MAP (TSNEP_TX_TYPE_SKB | TSNEP_TX_TYPE_MAP)
+#define TSNEP_TX_TYPE_SKB_INLINE (TSNEP_TX_TYPE_SKB | TSNEP_TX_TYPE_INLINE)
+#define TSNEP_TX_TYPE_SKB_FRAG BIT(9)
+#define TSNEP_TX_TYPE_SKB_FRAG_MAP_PAGE (TSNEP_TX_TYPE_SKB_FRAG | TSNEP_TX_TYPE_MAP_PAGE)
+#define TSNEP_TX_TYPE_SKB_FRAG_INLINE (TSNEP_TX_TYPE_SKB_FRAG | TSNEP_TX_TYPE_INLINE)
+#define TSNEP_TX_TYPE_XDP_TX BIT(10)
+#define TSNEP_TX_TYPE_XDP_NDO BIT(11)
+#define TSNEP_TX_TYPE_XDP_NDO_MAP_PAGE (TSNEP_TX_TYPE_XDP_NDO | TSNEP_TX_TYPE_MAP_PAGE)
+#define TSNEP_TX_TYPE_XDP (TSNEP_TX_TYPE_XDP_TX | TSNEP_TX_TYPE_XDP_NDO)
+#define TSNEP_TX_TYPE_XSK BIT(12)
#define TSNEP_XDP_TX BIT(0)
#define TSNEP_XDP_REDIRECT BIT(1)
@@ -416,6 +426,8 @@ static void tsnep_tx_activate(struct tsnep_tx *tx, int index, int length,
entry->properties |= TSNEP_TX_DESC_OWNER_USER_FLAG;
entry->desc->more_properties =
__cpu_to_le32(entry->len & TSNEP_DESC_LENGTH_MASK);
+ if (entry->type & TSNEP_TX_TYPE_INLINE)
+ entry->properties |= TSNEP_TX_DESC_DATA_AFTER_DESC_FLAG;
/* descriptor properties shall be written last, because valid data is
* signaled there
@@ -433,39 +445,79 @@ static int tsnep_tx_desc_available(struct tsnep_tx *tx)
return tx->read - tx->write - 1;
}
+static int tsnep_tx_map_frag(skb_frag_t *frag, struct tsnep_tx_entry *entry,
+ struct device *dmadev, dma_addr_t *dma)
+{
+ unsigned int len;
+ int mapped;
+
+ len = skb_frag_size(frag);
+ if (likely(len > TSNEP_DESC_SIZE_DATA_AFTER_INLINE)) {
+ *dma = skb_frag_dma_map(dmadev, frag, 0, len, DMA_TO_DEVICE);
+ if (dma_mapping_error(dmadev, *dma))
+ return -ENOMEM;
+ entry->type = TSNEP_TX_TYPE_SKB_FRAG_MAP_PAGE;
+ mapped = 1;
+ } else {
+ void *fragdata = skb_frag_address_safe(frag);
+
+ if (likely(fragdata)) {
+ memcpy(&entry->desc->tx, fragdata, len);
+ } else {
+ struct page *page = skb_frag_page(frag);
+
+ fragdata = kmap_local_page(page);
+ memcpy(&entry->desc->tx, fragdata + skb_frag_off(frag),
+ len);
+ kunmap_local(fragdata);
+ }
+ entry->type = TSNEP_TX_TYPE_SKB_FRAG_INLINE;
+ mapped = 0;
+ }
+
+ return mapped;
+}
+
static int tsnep_tx_map(struct sk_buff *skb, struct tsnep_tx *tx, int count)
{
struct device *dmadev = tx->adapter->dmadev;
struct tsnep_tx_entry *entry;
unsigned int len;
- dma_addr_t dma;
int map_len = 0;
- int i;
+ dma_addr_t dma;
+ int i, mapped;
for (i = 0; i < count; i++) {
entry = &tx->entry[(tx->write + i) & TSNEP_RING_MASK];
if (!i) {
len = skb_headlen(skb);
- dma = dma_map_single(dmadev, skb->data, len,
- DMA_TO_DEVICE);
-
- entry->type = TSNEP_TX_TYPE_SKB;
+ if (likely(len > TSNEP_DESC_SIZE_DATA_AFTER_INLINE)) {
+ dma = dma_map_single(dmadev, skb->data, len,
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(dmadev, dma))
+ return -ENOMEM;
+ entry->type = TSNEP_TX_TYPE_SKB_MAP;
+ mapped = 1;
+ } else {
+ memcpy(&entry->desc->tx, skb->data, len);
+ entry->type = TSNEP_TX_TYPE_SKB_INLINE;
+ mapped = 0;
+ }
} else {
- len = skb_frag_size(&skb_shinfo(skb)->frags[i - 1]);
- dma = skb_frag_dma_map(dmadev,
- &skb_shinfo(skb)->frags[i - 1],
- 0, len, DMA_TO_DEVICE);
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i - 1];
- entry->type = TSNEP_TX_TYPE_SKB_FRAG;
+ len = skb_frag_size(frag);
+ mapped = tsnep_tx_map_frag(frag, entry, dmadev, &dma);
+ if (mapped < 0)
+ return mapped;
}
- if (dma_mapping_error(dmadev, dma))
- return -ENOMEM;
entry->len = len;
- dma_unmap_addr_set(entry, dma, dma);
-
- entry->desc->tx = __cpu_to_le64(dma);
+ if (likely(mapped)) {
+ dma_unmap_addr_set(entry, dma, dma);
+ entry->desc->tx = __cpu_to_le64(dma);
+ }
map_len += len;
}
@@ -484,13 +536,12 @@ static int tsnep_tx_unmap(struct tsnep_tx *tx, int index, int count)
entry = &tx->entry[(index + i) & TSNEP_RING_MASK];
if (entry->len) {
- if (entry->type & TSNEP_TX_TYPE_SKB)
+ if (entry->type & TSNEP_TX_TYPE_MAP)
dma_unmap_single(dmadev,
dma_unmap_addr(entry, dma),
dma_unmap_len(entry, len),
DMA_TO_DEVICE);
- else if (entry->type &
- (TSNEP_TX_TYPE_SKB_FRAG | TSNEP_TX_TYPE_XDP_NDO))
+ else if (entry->type & TSNEP_TX_TYPE_MAP_PAGE)
dma_unmap_page(dmadev,
dma_unmap_addr(entry, dma),
dma_unmap_len(entry, len),
@@ -586,7 +637,7 @@ static int tsnep_xdp_tx_map(struct xdp_frame *xdpf, struct tsnep_tx *tx,
if (dma_mapping_error(dmadev, dma))
return -ENOMEM;
- entry->type = TSNEP_TX_TYPE_XDP_NDO;
+ entry->type = TSNEP_TX_TYPE_XDP_NDO_MAP_PAGE;
} else {
page = unlikely(frag) ? skb_frag_page(frag) :
virt_to_page(xdpf->data);
@@ -1779,14 +1830,14 @@ static int tsnep_request_irq(struct tsnep_queue *queue, bool first)
dev = queue->adapter;
} else {
if (queue->tx && queue->rx)
- sprintf(queue->name, "%s-txrx-%d", name,
- queue->rx->queue_index);
+ snprintf(queue->name, sizeof(queue->name), "%s-txrx-%d",
+ name, queue->rx->queue_index);
else if (queue->tx)
- sprintf(queue->name, "%s-tx-%d", name,
- queue->tx->queue_index);
+ snprintf(queue->name, sizeof(queue->name), "%s-tx-%d",
+ name, queue->tx->queue_index);
else
- sprintf(queue->name, "%s-rx-%d", name,
- queue->rx->queue_index);
+ snprintf(queue->name, sizeof(queue->name), "%s-rx-%d",
+ name, queue->rx->queue_index);
handler = tsnep_irq_txrx;
dev = queue;
}
@@ -2587,7 +2638,7 @@ mdio_init_failed:
return retval;
}
-static int tsnep_remove(struct platform_device *pdev)
+static void tsnep_remove(struct platform_device *pdev)
{
struct tsnep_adapter *adapter = platform_get_drvdata(pdev);
@@ -2603,8 +2654,6 @@ static int tsnep_remove(struct platform_device *pdev)
mdiobus_unregister(adapter->mdiobus);
tsnep_disable_irq(adapter, ECM_INT_ALL);
-
- return 0;
}
static const struct of_device_id tsnep_of_match[] = {
@@ -2619,7 +2668,7 @@ static struct platform_driver tsnep_driver = {
.of_match_table = tsnep_of_match,
},
.probe = tsnep_probe,
- .remove = tsnep_remove,
+ .remove_new = tsnep_remove,
};
module_platform_driver(tsnep_driver);
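To summarize the tsnep TX rework above (illustrative paraphrase, not additional patch content): entry->type now carries a buffer type (skb, frag, XDP, XSK) and, separately, a mapping type, so tsnep_tx_unmap() only has to look at how the buffer was attached. Payloads small enough to fit in the data-after-descriptor area are copied inline and need no DMA unmap; larger ones are DMA-mapped as before. The per-buffer decision boils down to:

/* condensed from tsnep_tx_map() above; constants and types as in the patch */
if (len > TSNEP_DESC_SIZE_DATA_AFTER_INLINE) {
	dma = dma_map_single(dmadev, data, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dmadev, dma))
		return -ENOMEM;
	entry->type = TSNEP_TX_TYPE_SKB_MAP;	/* dma_unmap_single() later */
} else {
	memcpy(&entry->desc->tx, data, len);	/* inlined, nothing to unmap */
	entry->type = TSNEP_TX_TYPE_SKB_INLINE;
}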
diff --git a/drivers/net/ethernet/ethoc.c b/drivers/net/ethernet/ethoc.c
index 95cbad198b4b..ad41c9019018 100644
--- a/drivers/net/ethernet/ethoc.c
+++ b/drivers/net/ethernet/ethoc.c
@@ -1254,7 +1254,7 @@ out:
* ethoc_remove - shutdown OpenCores ethernet MAC
* @pdev: platform device
*/
-static int ethoc_remove(struct platform_device *pdev)
+static void ethoc_remove(struct platform_device *pdev)
{
struct net_device *netdev = platform_get_drvdata(pdev);
struct ethoc *priv = netdev_priv(netdev);
@@ -1271,8 +1271,6 @@ static int ethoc_remove(struct platform_device *pdev)
unregister_netdev(netdev);
free_netdev(netdev);
}
-
- return 0;
}
#ifdef CONFIG_PM
@@ -1298,7 +1296,7 @@ MODULE_DEVICE_TABLE(of, ethoc_match);
static struct platform_driver ethoc_driver = {
.probe = ethoc_probe,
- .remove = ethoc_remove,
+ .remove_new = ethoc_remove,
.suspend = ethoc_suspend,
.resume = ethoc_resume,
.driver = {
diff --git a/drivers/net/ethernet/ezchip/nps_enet.c b/drivers/net/ethernet/ezchip/nps_enet.c
index edf000e7bab4..4d7184d46824 100644
--- a/drivers/net/ethernet/ezchip/nps_enet.c
+++ b/drivers/net/ethernet/ezchip/nps_enet.c
@@ -198,7 +198,7 @@ static int nps_enet_poll(struct napi_struct *napi, int budget)
*/
if (nps_enet_is_tx_pending(priv)) {
nps_enet_reg_set(priv, NPS_ENET_REG_BUF_INT_ENABLE, 0);
- napi_reschedule(napi);
+ napi_schedule(napi);
}
}
diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
index 9135b918dd49..fddfd1dd5070 100644
--- a/drivers/net/ethernet/faraday/ftgmac100.c
+++ b/drivers/net/ethernet/faraday/ftgmac100.c
@@ -2012,7 +2012,7 @@ err_alloc_etherdev:
return err;
}
-static int ftgmac100_remove(struct platform_device *pdev)
+static void ftgmac100_remove(struct platform_device *pdev)
{
struct net_device *netdev;
struct ftgmac100 *priv;
@@ -2040,7 +2040,6 @@ static int ftgmac100_remove(struct platform_device *pdev)
netif_napi_del(&priv->napi);
free_netdev(netdev);
- return 0;
}
static const struct of_device_id ftgmac100_of_match[] = {
@@ -2051,7 +2050,7 @@ MODULE_DEVICE_TABLE(of, ftgmac100_of_match);
static struct platform_driver ftgmac100_driver = {
.probe = ftgmac100_probe,
- .remove = ftgmac100_remove,
+ .remove_new = ftgmac100_remove,
.driver = {
.name = DRV_NAME,
.of_match_table = ftgmac100_of_match,
diff --git a/drivers/net/ethernet/faraday/ftmac100.c b/drivers/net/ethernet/faraday/ftmac100.c
index 183069581bc0..003bc9a45c65 100644
--- a/drivers/net/ethernet/faraday/ftmac100.c
+++ b/drivers/net/ethernet/faraday/ftmac100.c
@@ -1219,7 +1219,7 @@ err_alloc_etherdev:
return err;
}
-static int ftmac100_remove(struct platform_device *pdev)
+static void ftmac100_remove(struct platform_device *pdev)
{
struct net_device *netdev;
struct ftmac100 *priv;
@@ -1234,7 +1234,6 @@ static int ftmac100_remove(struct platform_device *pdev)
netif_napi_del(&priv->napi);
free_netdev(netdev);
- return 0;
}
static const struct of_device_id ftmac100_of_ids[] = {
@@ -1244,7 +1243,7 @@ static const struct of_device_id ftmac100_of_ids[] = {
static struct platform_driver ftmac100_driver = {
.probe = ftmac100_probe,
- .remove = ftmac100_remove,
+ .remove_new = ftmac100_remove,
.driver = {
.name = DRV_NAME,
.of_match_table = ftmac100_of_ids
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 35461165de0d..30bec47bc665 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -1655,7 +1655,7 @@ out:
rx_ring->stats.bytes += rx_byte_cnt;
if (xdp_redirect_frm_cnt)
- xdp_do_flush_map();
+ xdp_do_flush();
if (xdp_tx_frm_cnt)
enetc_update_tx_ring_tail(tx_ring);
diff --git a/drivers/net/ethernet/freescale/enetc/enetc.h b/drivers/net/ethernet/freescale/enetc/enetc.h
index 7439739cd81a..a9c2ff22431c 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.h
+++ b/drivers/net/ethernet/freescale/enetc/enetc.h
@@ -297,7 +297,7 @@ struct enetc_int_vector {
char name[ENETC_INT_NAME_MAX];
struct enetc_bdr rx_ring;
- struct enetc_bdr tx_ring[];
+ struct enetc_bdr tx_ring[] __counted_by(count_tx_rings);
} ____cacheline_aligned_in_smp;
struct enetc_cls_rule {
diff --git a/drivers/net/ethernet/freescale/enetc/enetc_qos.c b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
index 2513b44056c1..b65da49dd926 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc_qos.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc_qos.c
@@ -443,7 +443,7 @@ struct enetc_psfp_gate {
u32 num_entries;
refcount_t refcount;
struct hlist_node node;
- struct action_gate_entry entries[];
+ struct action_gate_entry entries[] __counted_by(num_entries);
};
/* Only enable the green color frame now
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 77c8e9cfb445..032c15b541ff 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -52,11 +52,11 @@
#include <linux/clk.h>
#include <linux/crc32.h>
#include <linux/platform_device.h>
+#include <linux/property.h>
#include <linux/mdio.h>
#include <linux/phy.h>
#include <linux/fec.h>
#include <linux/of.h>
-#include <linux/of_device.h>
#include <linux/of_mdio.h>
#include <linux/of_net.h>
#include <linux/regulator/consumer.h>
@@ -187,65 +187,22 @@ static struct platform_device_id fec_devtype[] = {
.name = DRIVER_NAME,
.driver_data = 0,
}, {
- .name = "imx25-fec",
- .driver_data = (kernel_ulong_t)&fec_imx25_info,
- }, {
- .name = "imx27-fec",
- .driver_data = (kernel_ulong_t)&fec_imx27_info,
- }, {
- .name = "imx28-fec",
- .driver_data = (kernel_ulong_t)&fec_imx28_info,
- }, {
- .name = "imx6q-fec",
- .driver_data = (kernel_ulong_t)&fec_imx6q_info,
- }, {
- .name = "mvf600-fec",
- .driver_data = (kernel_ulong_t)&fec_mvf600_info,
- }, {
- .name = "imx6sx-fec",
- .driver_data = (kernel_ulong_t)&fec_imx6x_info,
- }, {
- .name = "imx6ul-fec",
- .driver_data = (kernel_ulong_t)&fec_imx6ul_info,
- }, {
- .name = "imx8mq-fec",
- .driver_data = (kernel_ulong_t)&fec_imx8mq_info,
- }, {
- .name = "imx8qm-fec",
- .driver_data = (kernel_ulong_t)&fec_imx8qm_info,
- }, {
- .name = "s32v234-fec",
- .driver_data = (kernel_ulong_t)&fec_s32v234_info,
- }, {
/* sentinel */
}
};
MODULE_DEVICE_TABLE(platform, fec_devtype);
-enum imx_fec_type {
- IMX25_FEC = 1, /* runs on i.mx25/50/53 */
- IMX27_FEC, /* runs on i.mx27/35/51 */
- IMX28_FEC,
- IMX6Q_FEC,
- MVF600_FEC,
- IMX6SX_FEC,
- IMX6UL_FEC,
- IMX8MQ_FEC,
- IMX8QM_FEC,
- S32V234_FEC,
-};
-
static const struct of_device_id fec_dt_ids[] = {
- { .compatible = "fsl,imx25-fec", .data = &fec_devtype[IMX25_FEC], },
- { .compatible = "fsl,imx27-fec", .data = &fec_devtype[IMX27_FEC], },
- { .compatible = "fsl,imx28-fec", .data = &fec_devtype[IMX28_FEC], },
- { .compatible = "fsl,imx6q-fec", .data = &fec_devtype[IMX6Q_FEC], },
- { .compatible = "fsl,mvf600-fec", .data = &fec_devtype[MVF600_FEC], },
- { .compatible = "fsl,imx6sx-fec", .data = &fec_devtype[IMX6SX_FEC], },
- { .compatible = "fsl,imx6ul-fec", .data = &fec_devtype[IMX6UL_FEC], },
- { .compatible = "fsl,imx8mq-fec", .data = &fec_devtype[IMX8MQ_FEC], },
- { .compatible = "fsl,imx8qm-fec", .data = &fec_devtype[IMX8QM_FEC], },
- { .compatible = "fsl,s32v234-fec", .data = &fec_devtype[S32V234_FEC], },
+ { .compatible = "fsl,imx25-fec", .data = &fec_imx25_info, },
+ { .compatible = "fsl,imx27-fec", .data = &fec_imx27_info, },
+ { .compatible = "fsl,imx28-fec", .data = &fec_imx28_info, },
+ { .compatible = "fsl,imx6q-fec", .data = &fec_imx6q_info, },
+ { .compatible = "fsl,mvf600-fec", .data = &fec_mvf600_info, },
+ { .compatible = "fsl,imx6sx-fec", .data = &fec_imx6x_info, },
+ { .compatible = "fsl,imx6ul-fec", .data = &fec_imx6ul_info, },
+ { .compatible = "fsl,imx8mq-fec", .data = &fec_imx8mq_info, },
+ { .compatible = "fsl,imx8qm-fec", .data = &fec_imx8qm_info, },
+ { .compatible = "fsl,s32v234-fec", .data = &fec_s32v234_info, },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, fec_dt_ids);
@@ -1832,7 +1789,7 @@ rx_processing_done:
rxq->bd.cur = bdp;
if (xdp_result & FEC_ENET_XDP_REDIR)
- xdp_do_flush_map();
+ xdp_do_flush();
return pkt_received;
}
@@ -2907,12 +2864,10 @@ static void fec_enet_get_strings(struct net_device *netdev,
switch (stringset) {
case ETH_SS_STATS:
for (i = 0; i < ARRAY_SIZE(fec_stats); i++) {
- memcpy(data, fec_stats[i].name, ETH_GSTRING_LEN);
- data += ETH_GSTRING_LEN;
+ ethtool_sprintf(&data, "%s", fec_stats[i].name);
}
for (i = 0; i < ARRAY_SIZE(fec_xdp_stat_strs); i++) {
- strncpy(data, fec_xdp_stat_strs[i], ETH_GSTRING_LEN);
- data += ETH_GSTRING_LEN;
+ ethtool_sprintf(&data, "%s", fec_xdp_stat_strs[i]);
}
page_pool_ethtool_stats_get_strings(data);
@@ -4292,14 +4247,13 @@ fec_probe(struct platform_device *pdev)
phy_interface_t interface;
struct net_device *ndev;
int i, irq, ret = 0;
- const struct of_device_id *of_id;
static int dev_id;
struct device_node *np = pdev->dev.of_node, *phy_node;
int num_tx_qs;
int num_rx_qs;
char irq_name[8];
int irq_cnt;
- struct fec_devinfo *dev_info;
+ const struct fec_devinfo *dev_info;
fec_enet_get_queue_num(pdev, &num_tx_qs, &num_rx_qs);
@@ -4314,10 +4268,9 @@ fec_probe(struct platform_device *pdev)
/* setup board info structure */
fep = netdev_priv(ndev);
- of_id = of_match_device(fec_dt_ids, &pdev->dev);
- if (of_id)
- pdev->id_entry = of_id->data;
- dev_info = (struct fec_devinfo *)pdev->id_entry->driver_data;
+ dev_info = device_get_match_data(&pdev->dev);
+ if (!dev_info)
+ dev_info = (const struct fec_devinfo *)pdev->id_entry->driver_data;
if (dev_info)
fep->quirks = dev_info->quirks;
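The fec change above, like the fs_enet and fsl_pq_mdio changes further down, replaces of_match_device() with device_get_match_data(), which hands back the matching entry's .data pointer directly for OF and ACPI firmware nodes without pulling in of_device.h; fec keeps a fallback to the legacy platform_device_id table for non-DT probing. A minimal probe-side sketch (hypothetical driver and data type):

#include <linux/platform_device.h>
#include <linux/property.h>

struct foo_devinfo {			/* hypothetical per-SoC data */
	unsigned long quirks;
};

static int foo_probe(struct platform_device *pdev)
{
	const struct foo_devinfo *info = device_get_match_data(&pdev->dev);

	if (!info)
		return -ENODEV;		/* no match data for this binding */
	/* apply info->quirks ... */
	return 0;
}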
diff --git a/drivers/net/ethernet/freescale/fman/fman_memac.c b/drivers/net/ethernet/freescale/fman/fman_memac.c
index 3b75cc543be9..9ba15d3183d7 100644
--- a/drivers/net/ethernet/freescale/fman/fman_memac.c
+++ b/drivers/net/ethernet/freescale/fman/fman_memac.c
@@ -618,18 +618,17 @@ static int memac_accept_rx_pause_frames(struct fman_mac *memac, bool en)
return 0;
}
-static void memac_validate(struct phylink_config *config,
- unsigned long *supported,
- struct phylink_link_state *state)
+static unsigned long memac_get_caps(struct phylink_config *config,
+ phy_interface_t interface)
{
struct fman_mac *memac = fman_config_to_mac(config)->fman_mac;
unsigned long caps = config->mac_capabilities;
- if (phy_interface_mode_is_rgmii(state->interface) &&
+ if (phy_interface_mode_is_rgmii(interface) &&
memac->rgmii_no_half_duplex)
caps &= ~(MAC_10HD | MAC_100HD);
- phylink_validate_mask_caps(supported, state, caps);
+ return caps;
}
/**
@@ -776,7 +775,7 @@ static void memac_link_down(struct phylink_config *config, unsigned int mode,
}
static const struct phylink_mac_ops memac_mac_ops = {
- .validate = memac_validate,
+ .mac_get_caps = memac_get_caps,
.mac_select_pcs = memac_select_pcs,
.mac_prepare = memac_prepare,
.mac_config = memac_mac_config,
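The memac conversion above moves from phylink's .validate hook to .mac_get_caps, which simply returns the MAC capability mask for the given PHY interface mode and lets phylink derive the supported link modes itself. A sketch of the callback shape (hypothetical MAC with a no-half-duplex-on-RGMII restriction, mirroring the hunk):

#include <linux/phy.h>
#include <linux/phylink.h>

static unsigned long foo_mac_get_caps(struct phylink_config *config,
				      phy_interface_t interface)
{
	unsigned long caps = config->mac_capabilities;

	if (phy_interface_mode_is_rgmii(interface))
		caps &= ~(MAC_10HD | MAC_100HD);	/* drop half duplex on RGMII */

	return caps;
}

static const struct phylink_mac_ops foo_mac_ops = {
	.mac_get_caps	= foo_mac_get_caps,
	/* .mac_config, .mac_link_up, ... as before */
};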
diff --git a/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c b/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
index a6dfc8807d3d..cf392faa6105 100644
--- a/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
+++ b/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
@@ -35,10 +35,9 @@
#include <linux/fs.h>
#include <linux/platform_device.h>
#include <linux/phy.h>
+#include <linux/property.h>
#include <linux/of.h>
#include <linux/of_mdio.h>
-#include <linux/of_platform.h>
-#include <linux/of_gpio.h>
#include <linux/of_net.h>
#include <linux/pgtable.h>
@@ -884,9 +883,9 @@ static const struct ethtool_ops fs_ethtool_ops = {
/**************************************************************************************/
#ifdef CONFIG_FS_ENET_HAS_FEC
-#define IS_FEC(match) ((match)->data == &fs_fec_ops)
+#define IS_FEC(ops) ((ops) == &fs_fec_ops)
#else
-#define IS_FEC(match) 0
+#define IS_FEC(ops) 0
#endif
static const struct net_device_ops fs_enet_netdev_ops = {
@@ -903,10 +902,9 @@ static const struct net_device_ops fs_enet_netdev_ops = {
#endif
};
-static const struct of_device_id fs_enet_match[];
static int fs_enet_probe(struct platform_device *ofdev)
{
- const struct of_device_id *match;
+ const struct fs_ops *ops;
struct net_device *ndev;
struct fs_enet_private *fep;
struct fs_platform_info *fpi;
@@ -916,15 +914,15 @@ static int fs_enet_probe(struct platform_device *ofdev)
const char *phy_connection_type;
int privsize, len, ret = -ENODEV;
- match = of_match_device(fs_enet_match, &ofdev->dev);
- if (!match)
+ ops = device_get_match_data(&ofdev->dev);
+ if (!ops)
return -EINVAL;
fpi = kzalloc(sizeof(*fpi), GFP_KERNEL);
if (!fpi)
return -ENOMEM;
- if (!IS_FEC(match)) {
+ if (!IS_FEC(ops)) {
data = of_get_property(ofdev->dev.of_node, "fsl,cpm-command", &len);
if (!data || len != 4)
goto out_free_fpi;
@@ -986,7 +984,7 @@ static int fs_enet_probe(struct platform_device *ofdev)
fep->dev = &ofdev->dev;
fep->ndev = ndev;
fep->fpi = fpi;
- fep->ops = match->data;
+ fep->ops = ops;
ret = fep->ops->setup_data(ndev);
if (ret)
diff --git a/drivers/net/ethernet/freescale/fs_enet/mii-fec.c b/drivers/net/ethernet/freescale/fs_enet/mii-fec.c
index a1e777a4b75f..7bb69727952a 100644
--- a/drivers/net/ethernet/freescale/fs_enet/mii-fec.c
+++ b/drivers/net/ethernet/freescale/fs_enet/mii-fec.c
@@ -30,9 +30,10 @@
#include <linux/ethtool.h>
#include <linux/bitops.h>
#include <linux/platform_device.h>
+#include <linux/property.h>
+#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_mdio.h>
-#include <linux/of_platform.h>
#include <linux/pgtable.h>
#include <asm/irq.h>
@@ -96,20 +97,15 @@ static int fs_enet_fec_mii_write(struct mii_bus *bus, int phy_id, int location,
}
-static const struct of_device_id fs_enet_mdio_fec_match[];
static int fs_enet_mdio_probe(struct platform_device *ofdev)
{
- const struct of_device_id *match;
struct resource res;
struct mii_bus *new_bus;
struct fec_info *fec;
int (*get_bus_freq)(struct device *);
int ret = -ENOMEM, clock, speed;
- match = of_match_device(fs_enet_mdio_fec_match, &ofdev->dev);
- if (!match)
- return -EINVAL;
- get_bus_freq = match->data;
+ get_bus_freq = device_get_match_data(&ofdev->dev);
new_bus = mdiobus_alloc();
if (!new_bus)
diff --git a/drivers/net/ethernet/freescale/fsl_pq_mdio.c b/drivers/net/ethernet/freescale/fsl_pq_mdio.c
index eee675a25b2c..70dd982a5edc 100644
--- a/drivers/net/ethernet/freescale/fsl_pq_mdio.c
+++ b/drivers/net/ethernet/freescale/fsl_pq_mdio.c
@@ -19,9 +19,10 @@
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/mii.h>
+#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_mdio.h>
-#include <linux/of_device.h>
+#include <linux/property.h>
#include <asm/io.h>
#if IS_ENABLED(CONFIG_UCC_GETH)
@@ -407,8 +408,6 @@ static void set_tbipa(const u32 tbipa_val, struct platform_device *pdev,
static int fsl_pq_mdio_probe(struct platform_device *pdev)
{
- const struct of_device_id *id =
- of_match_device(fsl_pq_mdio_match, &pdev->dev);
const struct fsl_pq_mdio_data *data;
struct device_node *np = pdev->dev.of_node;
struct resource res;
@@ -417,15 +416,12 @@ static int fsl_pq_mdio_probe(struct platform_device *pdev)
struct mii_bus *new_bus;
int err;
- if (!id) {
+ data = device_get_match_data(&pdev->dev);
+ if (!data) {
dev_err(&pdev->dev, "Failed to match device\n");
return -ENODEV;
}
- data = id->data;
-
- dev_dbg(&pdev->dev, "found %s compatible node\n", id->compatible);
-
new_bus = mdiobus_alloc_size(sizeof(*priv));
if (!new_bus)
return -ENOMEM;
diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
index 5704b5f57cd0..276f996f95dc 100644
--- a/drivers/net/ethernet/google/gve/gve_main.c
+++ b/drivers/net/ethernet/google/gve/gve_main.c
@@ -190,7 +190,7 @@ static int gve_alloc_stats_report(struct gve_priv *priv)
rx_stats_num = (GVE_RX_STATS_REPORT_NUM + NIC_RX_STATS_REPORT_NUM) *
priv->rx_cfg.num_queues;
priv->stats_report_len = struct_size(priv->stats_report, stats,
- tx_stats_num + rx_stats_num);
+ size_add(tx_stats_num, rx_stats_num));
priv->stats_report =
dma_alloc_coherent(&priv->pdev->dev, priv->stats_report_len,
&priv->stats_report_bus, GFP_KERNEL);
@@ -281,7 +281,7 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
if (block->rx)
reschedule |= gve_rx_work_pending(block->rx);
- if (reschedule && napi_reschedule(napi))
+ if (reschedule && napi_schedule(napi))
iowrite32be(GVE_IRQ_MASK, irq_doorbell);
}
return work_done;
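Two small things happen in the gve hunks above: napi_reschedule() is replaced by napi_schedule(), which (as used here) returns whether the NAPI was actually scheduled, and the stats-report sizing uses size_add() so the element-count arithmetic saturates instead of wrapping. A self-contained sketch of the saturating size math (hypothetical struct):

#include <linux/overflow.h>

struct report {
	u32 n;
	u64 stats[];
};

static size_t report_bytes(struct report *r, size_t tx, size_t rx)
{
	/* size_add() returns SIZE_MAX on overflow and struct_size()
	 * propagates that, so the eventual allocation fails cleanly
	 * rather than being undersized.
	 */
	return struct_size(r, stats, size_add(tx, rx));
}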
diff --git a/drivers/net/ethernet/hisilicon/hip04_eth.c b/drivers/net/ethernet/hisilicon/hip04_eth.c
index ecf92a5d56bb..b91e7a06b97f 100644
--- a/drivers/net/ethernet/hisilicon/hip04_eth.c
+++ b/drivers/net/ethernet/hisilicon/hip04_eth.c
@@ -1021,7 +1021,7 @@ init_fail:
return ret;
}
-static int hip04_remove(struct platform_device *pdev)
+static void hip04_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct hip04_priv *priv = netdev_priv(ndev);
@@ -1035,8 +1035,6 @@ static int hip04_remove(struct platform_device *pdev)
of_node_put(priv->phy_node);
cancel_work_sync(&priv->tx_timeout_task);
free_netdev(ndev);
-
- return 0;
}
static const struct of_device_id hip04_mac_match[] = {
@@ -1048,7 +1046,7 @@ MODULE_DEVICE_TABLE(of, hip04_mac_match);
static struct platform_driver hip04_mac_driver = {
.probe = hip04_mac_probe,
- .remove = hip04_remove,
+ .remove_new = hip04_remove,
.driver = {
.name = DRV_NAME,
.of_match_table = hip04_mac_match,
diff --git a/drivers/net/ethernet/hisilicon/hisi_femac.c b/drivers/net/ethernet/hisilicon/hisi_femac.c
index cb7b0293fe85..2406263c9dd3 100644
--- a/drivers/net/ethernet/hisilicon/hisi_femac.c
+++ b/drivers/net/ethernet/hisilicon/hisi_femac.c
@@ -893,7 +893,7 @@ out_free_netdev:
return ret;
}
-static int hisi_femac_drv_remove(struct platform_device *pdev)
+static void hisi_femac_drv_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct hisi_femac_priv *priv = netdev_priv(ndev);
@@ -904,8 +904,6 @@ static int hisi_femac_drv_remove(struct platform_device *pdev)
phy_disconnect(ndev->phydev);
clk_disable_unprepare(priv->clk);
free_netdev(ndev);
-
- return 0;
}
#ifdef CONFIG_PM
@@ -961,7 +959,7 @@ static struct platform_driver hisi_femac_driver = {
.of_match_table = hisi_femac_match,
},
.probe = hisi_femac_drv_probe,
- .remove = hisi_femac_drv_remove,
+ .remove_new = hisi_femac_drv_remove,
#ifdef CONFIG_PM
.suspend = hisi_femac_drv_suspend,
.resume = hisi_femac_drv_resume,
diff --git a/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c b/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c
index 26d22bb04b87..1a972b093a42 100644
--- a/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c
+++ b/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c
@@ -7,7 +7,8 @@
#include <linux/interrupt.h>
#include <linux/etherdevice.h>
#include <linux/platform_device.h>
-#include <linux/of_device.h>
+#include <linux/property.h>
+#include <linux/of.h>
#include <linux/of_net.h>
#include <linux/of_mdio.h>
#include <linux/reset.h>
@@ -1094,7 +1095,6 @@ static int hix5hd2_dev_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct device_node *node = dev->of_node;
- const struct of_device_id *of_id = NULL;
struct net_device *ndev;
struct hix5hd2_priv *priv;
struct mii_bus *bus;
@@ -1110,12 +1110,7 @@ static int hix5hd2_dev_probe(struct platform_device *pdev)
priv->dev = dev;
priv->netdev = ndev;
- of_id = of_match_device(hix5hd2_of_match, dev);
- if (!of_id) {
- ret = -EINVAL;
- goto out_free_netdev;
- }
- priv->hw_cap = (unsigned long)of_id->data;
+ priv->hw_cap = (unsigned long)device_get_match_data(dev);
priv->base = devm_platform_ioremap_resource(pdev, 0);
if (IS_ERR(priv->base)) {
@@ -1282,7 +1277,7 @@ out_free_netdev:
return ret;
}
-static int hix5hd2_dev_remove(struct platform_device *pdev)
+static void hix5hd2_dev_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct hix5hd2_priv *priv = netdev_priv(ndev);
@@ -1298,8 +1293,6 @@ static int hix5hd2_dev_remove(struct platform_device *pdev)
of_node_put(priv->phy_node);
cancel_work_sync(&priv->tx_timeout_task);
free_netdev(ndev);
-
- return 0;
}
static const struct of_device_id hix5hd2_of_match[] = {
@@ -1319,7 +1312,7 @@ static struct platform_driver hix5hd2_dev_driver = {
.of_match_table = hix5hd2_of_match,
},
.probe = hix5hd2_dev_probe,
- .remove = hix5hd2_dev_remove,
+ .remove_new = hix5hd2_dev_remove,
};
module_platform_driver(hix5hd2_dev_driver);
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
index fcaf5132b865..1b67da1f6fa8 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
@@ -3007,7 +3007,7 @@ free_dev:
* hns_dsaf_remove - remove dsaf dev
* @pdev: dasf platform device
*/
-static int hns_dsaf_remove(struct platform_device *pdev)
+static void hns_dsaf_remove(struct platform_device *pdev)
{
struct dsaf_device *dsaf_dev = dev_get_drvdata(&pdev->dev);
@@ -3020,8 +3020,6 @@ static int hns_dsaf_remove(struct platform_device *pdev)
hns_dsaf_free(dsaf_dev);
hns_dsaf_free_dev(dsaf_dev);
-
- return 0;
}
static const struct of_device_id g_dsaf_match[] = {
@@ -3033,7 +3031,7 @@ MODULE_DEVICE_TABLE(of, g_dsaf_match);
static struct platform_driver g_dsaf_driver = {
.probe = hns_dsaf_probe,
- .remove = hns_dsaf_remove,
+ .remove_new = hns_dsaf_remove,
.driver = {
.name = DSAF_DRV_NAME,
.of_match_table = g_dsaf_match,
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
index 0f0e16f9afc0..7e00231c1acf 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
@@ -92,7 +92,7 @@ struct ppe_common_cb {
u8 comm_index; /*ppe_common index*/
u32 ppe_num;
- struct hns_ppe_cb ppe_cb[];
+ struct hns_ppe_cb ppe_cb[] __counted_by(ppe_num);
};
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
index a9f805925699..c1e9b6997853 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
@@ -108,7 +108,7 @@ struct rcb_common_cb {
u32 ring_num;
u32 desc_num; /* desc num per queue*/
- struct ring_pair_cb ring_pair_cb[];
+ struct ring_pair_cb ring_pair_cb[] __counted_by(ring_num);
};
int hns_rcb_buf_size2type(u32 buf_size);
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
index 7cf10d1e2b31..0900abf5c508 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
@@ -2384,7 +2384,7 @@ out_read_prop_fail:
return ret;
}
-static int hns_nic_dev_remove(struct platform_device *pdev)
+static void hns_nic_dev_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct hns_nic_priv *priv = netdev_priv(ndev);
@@ -2413,7 +2413,6 @@ static int hns_nic_dev_remove(struct platform_device *pdev)
of_node_put(to_of_node(priv->fwnode));
free_netdev(ndev);
- return 0;
}
static const struct of_device_id hns_enet_of_match[] = {
@@ -2431,7 +2430,7 @@ static struct platform_driver hns_nic_dev_driver = {
.acpi_match_table = ACPI_PTR(hns_enet_acpi_match),
},
.probe = hns_nic_dev_probe,
- .remove = hns_nic_dev_remove,
+ .remove_new = hns_nic_dev_remove,
};
module_platform_driver(hns_nic_dev_driver);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
index aaf1f42624a7..d7e175a9cb49 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
@@ -103,6 +103,7 @@ enum HNAE3_DEV_CAP_BITS {
HNAE3_DEV_SUPPORT_LANE_NUM_B,
HNAE3_DEV_SUPPORT_WOL_B,
HNAE3_DEV_SUPPORT_TM_FLUSH_B,
+ HNAE3_DEV_SUPPORT_VF_FAULT_B,
};
#define hnae3_ae_dev_fd_supported(ae_dev) \
@@ -177,6 +178,9 @@ enum HNAE3_DEV_CAP_BITS {
#define hnae3_ae_dev_tm_flush_supported(hdev) \
test_bit(HNAE3_DEV_SUPPORT_TM_FLUSH_B, (hdev)->ae_dev->caps)
+#define hnae3_ae_dev_vf_fault_supported(ae_dev) \
+ test_bit(HNAE3_DEV_SUPPORT_VF_FAULT_B, (ae_dev)->caps)
+
enum HNAE3_PF_CAP_BITS {
HNAE3_PF_SUPPORT_VLAN_FLTR_MDF_B = 0,
};
@@ -271,6 +275,7 @@ enum hnae3_reset_type {
HNAE3_GLOBAL_RESET,
HNAE3_IMP_RESET,
HNAE3_NONE_RESET,
+ HNAE3_VF_EXP_RESET,
HNAE3_MAX_RESET,
};
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c
index dcecb23daac6..d92ad6082d8e 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c
@@ -157,6 +157,7 @@ static const struct hclge_comm_caps_bit_map hclge_pf_cmd_caps[] = {
{HCLGE_COMM_CAP_LANE_NUM_B, HNAE3_DEV_SUPPORT_LANE_NUM_B},
{HCLGE_COMM_CAP_WOL_B, HNAE3_DEV_SUPPORT_WOL_B},
{HCLGE_COMM_CAP_TM_FLUSH_B, HNAE3_DEV_SUPPORT_TM_FLUSH_B},
+ {HCLGE_COMM_CAP_VF_FAULT_B, HNAE3_DEV_SUPPORT_VF_FAULT_B},
};
static const struct hclge_comm_caps_bit_map hclge_vf_cmd_caps[] = {
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h
index 2b7197ce0ae8..533c19d25e4f 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h
@@ -93,6 +93,7 @@ enum hclge_opcode_type {
HCLGE_OPC_DFX_SSU_REG_2 = 0x004F,
HCLGE_OPC_QUERY_DEV_SPECS = 0x0050,
+ HCLGE_OPC_GET_QUEUE_ERR_VF = 0x0067,
/* MAC command */
HCLGE_OPC_CONFIG_MAC_MODE = 0x0301,
@@ -348,6 +349,7 @@ enum HCLGE_COMM_CAP_BITS {
HCLGE_COMM_CAP_GRO_B = 20,
HCLGE_COMM_CAP_FD_B = 21,
HCLGE_COMM_CAP_FEC_STATS_B = 25,
+ HCLGE_COMM_CAP_VF_FAULT_B = 26,
HCLGE_COMM_CAP_LANE_NUM_B = 27,
HCLGE_COMM_CAP_WOL_B = 28,
HCLGE_COMM_CAP_TM_FLUSH_B = 31,
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
index b8508533878b..0b138635bafa 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
@@ -414,6 +414,9 @@ static struct hns3_dbg_cap_info hns3_dbg_cap[] = {
}, {
.name = "support tm flush",
.cap_bit = HNAE3_DEV_SUPPORT_TM_FLUSH_B,
+ }, {
+ .name = "support vf fault detect",
+ .cap_bit = HNAE3_DEV_SUPPORT_VF_FAULT_B,
}
};
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
index cf50368441b7..06117502001f 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
@@ -4940,8 +4940,7 @@ static void hns3_put_ring_config(struct hns3_nic_priv *priv)
static void hns3_alloc_page_pool(struct hns3_enet_ring *ring)
{
struct page_pool_params pp_params = {
- .flags = PP_FLAG_DMA_MAP | PP_FLAG_PAGE_FRAG |
- PP_FLAG_DMA_SYNC_DEV,
+ .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
.order = hns3_page_order(ring),
.pool_size = ring->desc_num * hns3_buf_size(ring) /
(PAGE_SIZE << hns3_page_order(ring)),
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
index 3f35227ef1fa..d63e114f93d0 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.c
@@ -1301,10 +1301,12 @@ static const struct hclge_hw_type_id hclge_hw_type_id_st[] = {
.msg = "tqp_int_ecc_error"
}, {
.type_id = PF_ABNORMAL_INT_ERROR,
- .msg = "pf_abnormal_int_error"
+ .msg = "pf_abnormal_int_error",
+ .cause_by_vf = true
}, {
.type_id = MPF_ABNORMAL_INT_ERROR,
- .msg = "mpf_abnormal_int_error"
+ .msg = "mpf_abnormal_int_error",
+ .cause_by_vf = true
}, {
.type_id = COMMON_ERROR,
.msg = "common_error"
@@ -2759,7 +2761,7 @@ void hclge_handle_occurred_error(struct hclge_dev *hdev)
hclge_handle_error_info_log(ae_dev);
}
-static void
+static bool
hclge_handle_error_type_reg_log(struct device *dev,
struct hclge_mod_err_info *mod_info,
struct hclge_type_reg_err_info *type_reg_info)
@@ -2770,6 +2772,7 @@ hclge_handle_error_type_reg_log(struct device *dev,
u8 mod_id, total_module, type_id, total_type, i, is_ras;
u8 index_module = MODULE_NONE;
u8 index_type = NONE_ERROR;
+ bool cause_by_vf = false;
mod_id = mod_info->mod_id;
type_id = type_reg_info->type_id & HCLGE_ERR_TYPE_MASK;
@@ -2788,6 +2791,7 @@ hclge_handle_error_type_reg_log(struct device *dev,
for (i = 0; i < total_type; i++) {
if (type_id == hclge_hw_type_id_st[i].type_id) {
index_type = i;
+ cause_by_vf = hclge_hw_type_id_st[i].cause_by_vf;
break;
}
}
@@ -2805,6 +2809,8 @@ hclge_handle_error_type_reg_log(struct device *dev,
dev_err(dev, "reg_value:\n");
for (i = 0; i < type_reg_info->reg_num; i++)
dev_err(dev, "0x%08x\n", type_reg_info->hclge_reg[i]);
+
+ return cause_by_vf;
}
static void hclge_handle_error_module_log(struct hnae3_ae_dev *ae_dev,
@@ -2815,6 +2821,7 @@ static void hclge_handle_error_module_log(struct hnae3_ae_dev *ae_dev,
struct device *dev = &hdev->pdev->dev;
struct hclge_mod_err_info *mod_info;
struct hclge_sum_err_info *sum_info;
+ bool cause_by_vf = false;
u8 mod_num, err_num, i;
u32 offset = 0;
@@ -2843,12 +2850,16 @@ static void hclge_handle_error_module_log(struct hnae3_ae_dev *ae_dev,
type_reg_info = (struct hclge_type_reg_err_info *)
&buf[offset++];
- hclge_handle_error_type_reg_log(dev, mod_info,
- type_reg_info);
+ if (hclge_handle_error_type_reg_log(dev, mod_info,
+ type_reg_info))
+ cause_by_vf = true;
offset += type_reg_info->reg_num;
}
}
+
+ if (hnae3_ae_dev_vf_fault_supported(hdev->ae_dev) && cause_by_vf)
+ set_bit(HNAE3_VF_EXP_RESET, &ae_dev->hw_err_reset_req);
}
static int hclge_query_all_err_bd_num(struct hclge_dev *hdev, u32 *bd_num)
@@ -2940,3 +2951,98 @@ err_desc:
out:
return ret;
}
+
+static bool hclge_reset_vf_in_bitmap(struct hclge_dev *hdev,
+ unsigned long *bitmap)
+{
+ struct hclge_vport *vport;
+ bool exist_set = false;
+ int func_id;
+ int ret;
+
+ func_id = find_first_bit(bitmap, HCLGE_VPORT_NUM);
+ if (func_id == PF_VPORT_ID)
+ return false;
+
+ while (func_id != HCLGE_VPORT_NUM) {
+ vport = hclge_get_vf_vport(hdev,
+ func_id - HCLGE_VF_VPORT_START_NUM);
+ if (!vport) {
+ dev_err(&hdev->pdev->dev, "invalid func id(%d)\n",
+ func_id);
+ return false;
+ }
+
+ dev_info(&hdev->pdev->dev, "do function %d recovery.", func_id);
+
+ ret = hclge_reset_tqp(&vport->nic);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "failed to reset tqp, ret = %d.", ret);
+ return false;
+ }
+
+ ret = hclge_inform_vf_reset(vport, HNAE3_VF_FUNC_RESET);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "failed to reset func %d, ret = %d.",
+ func_id, ret);
+ return false;
+ }
+
+ exist_set = true;
+ clear_bit(func_id, bitmap);
+ func_id = find_first_bit(bitmap, HCLGE_VPORT_NUM);
+ }
+
+ return exist_set;
+}
+
+static void hclge_get_vf_fault_bitmap(struct hclge_desc *desc,
+ unsigned long *bitmap)
+{
+#define HCLGE_FIR_FAULT_BYTES 24
+#define HCLGE_SEC_FAULT_BYTES 8
+
+ u8 *buff;
+
+ BUILD_BUG_ON(HCLGE_FIR_FAULT_BYTES + HCLGE_SEC_FAULT_BYTES !=
+ BITS_TO_BYTES(HCLGE_VPORT_NUM));
+
+ memcpy(bitmap, desc[0].data, HCLGE_FIR_FAULT_BYTES);
+ buff = (u8 *)bitmap + HCLGE_FIR_FAULT_BYTES;
+ memcpy(buff, desc[1].data, HCLGE_SEC_FAULT_BYTES);
+}
+
+int hclge_handle_vf_queue_err_ras(struct hclge_dev *hdev)
+{
+ unsigned long vf_fault_bitmap[BITS_TO_LONGS(HCLGE_VPORT_NUM)];
+ struct hclge_desc desc[2];
+ bool cause_by_vf = false;
+ int ret;
+
+ if (!test_and_clear_bit(HNAE3_VF_EXP_RESET,
+ &hdev->ae_dev->hw_err_reset_req) ||
+ !hnae3_ae_dev_vf_fault_supported(hdev->ae_dev))
+ return 0;
+
+ hclge_comm_cmd_setup_basic_desc(&desc[0], HCLGE_OPC_GET_QUEUE_ERR_VF,
+ true);
+ desc[0].flag |= cpu_to_le16(HCLGE_COMM_CMD_FLAG_NEXT);
+ hclge_comm_cmd_setup_basic_desc(&desc[1], HCLGE_OPC_GET_QUEUE_ERR_VF,
+ true);
+
+ ret = hclge_comm_cmd_send(&hdev->hw.hw, desc, 2);
+ if (ret) {
+ dev_err(&hdev->pdev->dev,
+ "failed to get vf bitmap, ret = %d.\n", ret);
+ return ret;
+ }
+ hclge_get_vf_fault_bitmap(desc, vf_fault_bitmap);
+
+ cause_by_vf = hclge_reset_vf_in_bitmap(hdev, vf_fault_bitmap);
+ if (cause_by_vf)
+ hdev->ae_dev->hw_err_reset_req = 0;
+
+ return 0;
+}
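Summary of the new handler above (commentary, not patch content): hclge_handle_vf_queue_err_ras() reads a per-function fault bitmap out of two command descriptors, resets the queues of every implicated VF and asks each one to perform a function-level reset; if the faults could all be attributed to VFs, the broader hardware reset request is dropped. The clear-and-refind loop is equivalent to the standard set-bit iterator, e.g. (illustrative only, reusing the driver's constants):

unsigned int vf;

for_each_set_bit(vf, vf_fault_bitmap, HCLGE_VPORT_NUM) {
	if (vf == PF_VPORT_ID)		/* PF bit set: nothing VF-side to reset */
		break;
	/* reset the VF's TQPs and send it HNAE3_VF_FUNC_RESET */
}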
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
index 86be6fb32990..68b738affa66 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_err.h
@@ -196,6 +196,7 @@ struct hclge_hw_module_id {
struct hclge_hw_type_id {
enum hclge_err_type_list type_id;
const char *msg;
+ bool cause_by_vf; /* indicate whether the error may come from a VF exception */
};
struct hclge_sum_err_info {
@@ -228,4 +229,5 @@ int hclge_handle_hw_msix_error(struct hclge_dev *hdev,
unsigned long *reset_requests);
int hclge_handle_error_info_log(struct hnae3_ae_dev *ae_dev);
int hclge_handle_mac_tnl(struct hclge_dev *hdev);
+int hclge_handle_vf_queue_err_ras(struct hclge_dev *hdev);
#endif
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
index c42574e29747..66e5807903a0 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
@@ -881,8 +881,8 @@ static const struct hclge_speed_bit_map speed_bit_map[] = {
{HCLGE_MAC_SPEED_10G, HCLGE_SUPPORT_10G_BIT},
{HCLGE_MAC_SPEED_25G, HCLGE_SUPPORT_25G_BIT},
{HCLGE_MAC_SPEED_40G, HCLGE_SUPPORT_40G_BIT},
- {HCLGE_MAC_SPEED_50G, HCLGE_SUPPORT_50G_BIT},
- {HCLGE_MAC_SPEED_100G, HCLGE_SUPPORT_100G_BIT},
+ {HCLGE_MAC_SPEED_50G, HCLGE_SUPPORT_50G_BITS},
+ {HCLGE_MAC_SPEED_100G, HCLGE_SUPPORT_100G_BITS},
{HCLGE_MAC_SPEED_200G, HCLGE_SUPPORT_200G_BIT},
};
@@ -939,100 +939,98 @@ static void hclge_update_fec_support(struct hclge_mac *mac)
mac->supported);
}
+static const struct hclge_link_mode_bmap hclge_sr_link_mode_bmap[8] = {
+ {HCLGE_SUPPORT_10G_BIT, ETHTOOL_LINK_MODE_10000baseSR_Full_BIT},
+ {HCLGE_SUPPORT_25G_BIT, ETHTOOL_LINK_MODE_25000baseSR_Full_BIT},
+ {HCLGE_SUPPORT_40G_BIT, ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT},
+ {HCLGE_SUPPORT_50G_R2_BIT, ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT},
+ {HCLGE_SUPPORT_50G_R1_BIT, ETHTOOL_LINK_MODE_50000baseSR_Full_BIT},
+ {HCLGE_SUPPORT_100G_R4_BIT, ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT},
+ {HCLGE_SUPPORT_100G_R2_BIT, ETHTOOL_LINK_MODE_100000baseSR2_Full_BIT},
+ {HCLGE_SUPPORT_200G_BIT, ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT},
+};
+
+static const struct hclge_link_mode_bmap hclge_lr_link_mode_bmap[6] = {
+ {HCLGE_SUPPORT_10G_BIT, ETHTOOL_LINK_MODE_10000baseLR_Full_BIT},
+ {HCLGE_SUPPORT_40G_BIT, ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT},
+ {HCLGE_SUPPORT_50G_R1_BIT, ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT},
+ {HCLGE_SUPPORT_100G_R4_BIT,
+ ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT},
+ {HCLGE_SUPPORT_100G_R2_BIT,
+ ETHTOOL_LINK_MODE_100000baseLR2_ER2_FR2_Full_BIT},
+ {HCLGE_SUPPORT_200G_BIT,
+ ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT},
+};
+
+static const struct hclge_link_mode_bmap hclge_cr_link_mode_bmap[8] = {
+ {HCLGE_SUPPORT_10G_BIT, ETHTOOL_LINK_MODE_10000baseCR_Full_BIT},
+ {HCLGE_SUPPORT_25G_BIT, ETHTOOL_LINK_MODE_25000baseCR_Full_BIT},
+ {HCLGE_SUPPORT_40G_BIT, ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT},
+ {HCLGE_SUPPORT_50G_R2_BIT, ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT},
+ {HCLGE_SUPPORT_50G_R1_BIT, ETHTOOL_LINK_MODE_50000baseCR_Full_BIT},
+ {HCLGE_SUPPORT_100G_R4_BIT, ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT},
+ {HCLGE_SUPPORT_100G_R2_BIT, ETHTOOL_LINK_MODE_100000baseCR2_Full_BIT},
+ {HCLGE_SUPPORT_200G_BIT, ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT},
+};
+
+static const struct hclge_link_mode_bmap hclge_kr_link_mode_bmap[9] = {
+ {HCLGE_SUPPORT_1G_BIT, ETHTOOL_LINK_MODE_1000baseKX_Full_BIT},
+ {HCLGE_SUPPORT_10G_BIT, ETHTOOL_LINK_MODE_10000baseKR_Full_BIT},
+ {HCLGE_SUPPORT_25G_BIT, ETHTOOL_LINK_MODE_25000baseKR_Full_BIT},
+ {HCLGE_SUPPORT_40G_BIT, ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT},
+ {HCLGE_SUPPORT_50G_R2_BIT, ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT},
+ {HCLGE_SUPPORT_50G_R1_BIT, ETHTOOL_LINK_MODE_50000baseKR_Full_BIT},
+ {HCLGE_SUPPORT_100G_R4_BIT, ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT},
+ {HCLGE_SUPPORT_100G_R2_BIT, ETHTOOL_LINK_MODE_100000baseKR2_Full_BIT},
+ {HCLGE_SUPPORT_200G_BIT, ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT},
+};
+
static void hclge_convert_setting_sr(u16 speed_ability,
unsigned long *link_mode)
{
- if (speed_ability & HCLGE_SUPPORT_10G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseSR_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_25G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_40G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_50G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_100G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_200G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT,
- link_mode);
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(hclge_sr_link_mode_bmap); i++) {
+ if (speed_ability & hclge_sr_link_mode_bmap[i].support_bit)
+ linkmode_set_bit(hclge_sr_link_mode_bmap[i].link_mode,
+ link_mode);
+ }
}
static void hclge_convert_setting_lr(u16 speed_ability,
unsigned long *link_mode)
{
- if (speed_ability & HCLGE_SUPPORT_10G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseLR_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_25G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_50G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_50000baseLR_ER_FR_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_40G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_100G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_200G_BIT)
- linkmode_set_bit(
- ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT,
- link_mode);
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(hclge_lr_link_mode_bmap); i++) {
+ if (speed_ability & hclge_lr_link_mode_bmap[i].support_bit)
+ linkmode_set_bit(hclge_lr_link_mode_bmap[i].link_mode,
+ link_mode);
+ }
}
static void hclge_convert_setting_cr(u16 speed_ability,
unsigned long *link_mode)
{
- if (speed_ability & HCLGE_SUPPORT_10G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseCR_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_25G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_40G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_50G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_100G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_200G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT,
- link_mode);
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(hclge_cr_link_mode_bmap); i++) {
+ if (speed_ability & hclge_cr_link_mode_bmap[i].support_bit)
+ linkmode_set_bit(hclge_cr_link_mode_bmap[i].link_mode,
+ link_mode);
+ }
}
static void hclge_convert_setting_kr(u16 speed_ability,
unsigned long *link_mode)
{
- if (speed_ability & HCLGE_SUPPORT_1G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_10G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_25G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_40G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_50G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_100G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
- link_mode);
- if (speed_ability & HCLGE_SUPPORT_200G_BIT)
- linkmode_set_bit(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT,
- link_mode);
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(hclge_kr_link_mode_bmap); i++) {
+ if (speed_ability & hclge_kr_link_mode_bmap[i].support_bit)
+ linkmode_set_bit(hclge_kr_link_mode_bmap[i].link_mode,
+ link_mode);
+ }
}
static void hclge_convert_setting_fec(struct hclge_mac *mac)
@@ -1158,10 +1156,10 @@ static u32 hclge_get_max_speed(u16 speed_ability)
if (speed_ability & HCLGE_SUPPORT_200G_BIT)
return HCLGE_MAC_SPEED_200G;
- if (speed_ability & HCLGE_SUPPORT_100G_BIT)
+ if (speed_ability & HCLGE_SUPPORT_100G_BITS)
return HCLGE_MAC_SPEED_100G;
- if (speed_ability & HCLGE_SUPPORT_50G_BIT)
+ if (speed_ability & HCLGE_SUPPORT_50G_BITS)
return HCLGE_MAC_SPEED_50G;
if (speed_ability & HCLGE_SUPPORT_40G_BIT)
@@ -3424,7 +3422,7 @@ static int hclge_get_status(struct hnae3_handle *handle)
return hdev->hw.mac.link;
}
-static struct hclge_vport *hclge_get_vf_vport(struct hclge_dev *hdev, int vf)
+struct hclge_vport *hclge_get_vf_vport(struct hclge_dev *hdev, int vf)
{
if (!pci_num_vf(hdev->pdev)) {
dev_err(&hdev->pdev->dev,
@@ -4468,6 +4466,7 @@ static void hclge_handle_err_recovery(struct hclge_dev *hdev)
if (hclge_find_error_source(hdev)) {
hclge_handle_error_info_log(ae_dev);
hclge_handle_mac_tnl(hdev);
+ hclge_handle_vf_queue_err_ras(hdev);
}
hclge_handle_err_reset_request(hdev);
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
index 7bc2049b723d..51979cf71262 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h
@@ -185,15 +185,22 @@ enum HLCGE_PORT_TYPE {
#define HCLGE_SUPPORT_1G_BIT BIT(0)
#define HCLGE_SUPPORT_10G_BIT BIT(1)
#define HCLGE_SUPPORT_25G_BIT BIT(2)
-#define HCLGE_SUPPORT_50G_BIT BIT(3)
-#define HCLGE_SUPPORT_100G_BIT BIT(4)
+#define HCLGE_SUPPORT_50G_R2_BIT BIT(3)
+#define HCLGE_SUPPORT_100G_R4_BIT BIT(4)
/* to be compatible with existing boards */
#define HCLGE_SUPPORT_40G_BIT BIT(5)
#define HCLGE_SUPPORT_100M_BIT BIT(6)
#define HCLGE_SUPPORT_10M_BIT BIT(7)
#define HCLGE_SUPPORT_200G_BIT BIT(8)
+#define HCLGE_SUPPORT_50G_R1_BIT BIT(9)
+#define HCLGE_SUPPORT_100G_R2_BIT BIT(10)
+
#define HCLGE_SUPPORT_GE \
(HCLGE_SUPPORT_1G_BIT | HCLGE_SUPPORT_100M_BIT | HCLGE_SUPPORT_10M_BIT)
+#define HCLGE_SUPPORT_50G_BITS \
+ (HCLGE_SUPPORT_50G_R2_BIT | HCLGE_SUPPORT_50G_R1_BIT)
+#define HCLGE_SUPPORT_100G_BITS \
+ (HCLGE_SUPPORT_100G_R4_BIT | HCLGE_SUPPORT_100G_R2_BIT)
enum HCLGE_DEV_STATE {
HCLGE_STATE_REINITING,
@@ -1076,6 +1083,11 @@ struct hclge_mac_speed_map {
u32 speed_fw; /* speed defined in firmware */
};
+struct hclge_link_mode_bmap {
+ u16 support_bit;
+ enum ethtool_link_mode_bit_indices link_mode;
+};
+
int hclge_set_vport_promisc_mode(struct hclge_vport *vport, bool en_uc_pmc,
bool en_mc_pmc, bool en_bc_pmc);
int hclge_add_uc_addr_common(struct hclge_vport *vport,
@@ -1146,4 +1158,6 @@ int hclge_dbg_dump_rst_info(struct hclge_dev *hdev, char *buf, int len);
int hclge_push_vf_link_status(struct hclge_vport *vport);
int hclge_enable_vport_vlan_filter(struct hclge_vport *vport, bool request_en);
int hclge_mac_update_stats(struct hclge_dev *hdev);
+struct hclge_vport *hclge_get_vf_vport(struct hclge_dev *hdev, int vf);
+int hclge_inform_vf_reset(struct hclge_vport *vport, u16 reset_type);
#endif
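
The hunks above replace the per-speed if-chains in hclge_convert_setting_{sr,lr,cr,kr}() with lookup tables of struct hclge_link_mode_bmap, and split the 50G/100G ability bits into per-lane R1/R2/R4 variants. A minimal, self-contained sketch of the same table-driven pattern (hypothetical two-entry table, not the driver's full mapping):

#include <linux/bits.h>
#include <linux/ethtool.h>
#include <linux/kernel.h>
#include <linux/linkmode.h>
#include <linux/types.h>

struct demo_link_mode_bmap {
	u16 support_bit;
	enum ethtool_link_mode_bit_indices link_mode;
};

static const struct demo_link_mode_bmap demo_sr_bmap[] = {
	{ BIT(1), ETHTOOL_LINK_MODE_10000baseSR_Full_BIT },	/* 10G */
	{ BIT(2), ETHTOOL_LINK_MODE_25000baseSR_Full_BIT },	/* 25G */
};

/* one generic loop replaces one "if (ability & BIT)" per supported speed */
static void demo_convert_setting_sr(u16 speed_ability, unsigned long *link_mode)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(demo_sr_bmap); i++)
		if (speed_ability & demo_sr_bmap[i].support_bit)
			linkmode_set_bit(demo_sr_bmap[i].link_mode, link_mode);
}
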
diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
index 04ff9bf12185..4b0d07ca2505 100644
--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
+++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c
@@ -124,7 +124,7 @@ static int hclge_send_mbx_msg(struct hclge_vport *vport, u8 *msg, u16 msg_len,
return status;
}
-static int hclge_inform_vf_reset(struct hclge_vport *vport, u16 reset_type)
+int hclge_inform_vf_reset(struct hclge_vport *vport, u16 reset_type)
{
__le16 msg_data;
u8 dest_vfid;
diff --git a/drivers/net/ethernet/hisilicon/hns_mdio.c b/drivers/net/ethernet/hisilicon/hns_mdio.c
index 409a89d80220..ed73707176c1 100644
--- a/drivers/net/ethernet/hisilicon/hns_mdio.c
+++ b/drivers/net/ethernet/hisilicon/hns_mdio.c
@@ -610,7 +610,7 @@ static int hns_mdio_probe(struct platform_device *pdev)
*
* Return 0 on success, negative on failure
*/
-static int hns_mdio_remove(struct platform_device *pdev)
+static void hns_mdio_remove(struct platform_device *pdev)
{
struct mii_bus *bus;
@@ -618,7 +618,6 @@ static int hns_mdio_remove(struct platform_device *pdev)
mdiobus_unregister(bus);
platform_set_drvdata(pdev, NULL);
- return 0;
}
static const struct of_device_id hns_mdio_match[] = {
@@ -636,7 +635,7 @@ MODULE_DEVICE_TABLE(acpi, hns_mdio_acpi_match);
static struct platform_driver hns_mdio_driver = {
.probe = hns_mdio_probe,
- .remove = hns_mdio_remove,
+ .remove_new = hns_mdio_remove,
.driver = {
.name = MDIO_DRV_NAME,
.of_match_table = hns_mdio_match,
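
hns_mdio (and several drivers further down) move from the int-returning .remove callback to the void-returning .remove_new one; the platform core ignored the returned error anyway, so nothing is lost. A minimal sketch of the converted shape, with hypothetical names:

#include <linux/module.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
	/* acquire resources, register sub-devices, ... */
	return 0;
}

static void demo_remove(struct platform_device *pdev)
{
	/* undo probe; no return value, since nothing useful can be reported */
}

static struct platform_driver demo_driver = {
	.probe		= demo_probe,
	.remove_new	= demo_remove,	/* note: .remove_new, not .remove */
	.driver		= {
		.name	= "demo-mdio",
	},
};
module_platform_driver(demo_driver);
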
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_devlink.c b/drivers/net/ethernet/huawei/hinic/hinic_devlink.c
index 1749d26f4bef..03e42512a2d5 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_devlink.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_devlink.c
@@ -315,136 +315,76 @@ void hinic_devlink_unregister(struct hinic_devlink_priv *priv)
devlink_unregister(devlink);
}
-static int chip_fault_show(struct devlink_fmsg *fmsg,
- struct hinic_fault_event *event)
+static void chip_fault_show(struct devlink_fmsg *fmsg,
+ struct hinic_fault_event *event)
{
const char * const level_str[FAULT_LEVEL_MAX + 1] = {
"fatal", "reset", "flr", "general", "suggestion", "Unknown"};
u8 fault_level;
- int err;
fault_level = (event->event.chip.err_level < FAULT_LEVEL_MAX) ?
event->event.chip.err_level : FAULT_LEVEL_MAX;
- if (fault_level == FAULT_LEVEL_SERIOUS_FLR) {
- err = devlink_fmsg_u32_pair_put(fmsg, "Function level err func_id",
- (u32)event->event.chip.func_id);
- if (err)
- return err;
- }
-
- err = devlink_fmsg_u8_pair_put(fmsg, "module_id", event->event.chip.node_id);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "err_type", (u32)event->event.chip.err_type);
- if (err)
- return err;
-
- err = devlink_fmsg_string_pair_put(fmsg, "err_level", level_str[fault_level]);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "err_csr_addr",
- event->event.chip.err_csr_addr);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "err_csr_value",
- event->event.chip.err_csr_value);
- if (err)
- return err;
-
- return 0;
+ if (fault_level == FAULT_LEVEL_SERIOUS_FLR)
+ devlink_fmsg_u32_pair_put(fmsg, "Function level err func_id",
+ (u32)event->event.chip.func_id);
+ devlink_fmsg_u8_pair_put(fmsg, "module_id", event->event.chip.node_id);
+ devlink_fmsg_u32_pair_put(fmsg, "err_type", (u32)event->event.chip.err_type);
+ devlink_fmsg_string_pair_put(fmsg, "err_level", level_str[fault_level]);
+ devlink_fmsg_u32_pair_put(fmsg, "err_csr_addr",
+ event->event.chip.err_csr_addr);
+ devlink_fmsg_u32_pair_put(fmsg, "err_csr_value",
+ event->event.chip.err_csr_value);
}
-static int fault_report_show(struct devlink_fmsg *fmsg,
- struct hinic_fault_event *event)
+static void fault_report_show(struct devlink_fmsg *fmsg,
+ struct hinic_fault_event *event)
{
const char * const type_str[FAULT_TYPE_MAX + 1] = {
"chip", "ucode", "mem rd timeout", "mem wr timeout",
"reg rd timeout", "reg wr timeout", "phy fault", "Unknown"};
u8 fault_type;
- int err;
fault_type = (event->type < FAULT_TYPE_MAX) ? event->type : FAULT_TYPE_MAX;
- err = devlink_fmsg_string_pair_put(fmsg, "Fault type", type_str[fault_type]);
- if (err)
- return err;
-
- err = devlink_fmsg_binary_pair_put(fmsg, "Fault raw data",
- event->event.val, sizeof(event->event.val));
- if (err)
- return err;
+ devlink_fmsg_string_pair_put(fmsg, "Fault type", type_str[fault_type]);
+ devlink_fmsg_binary_pair_put(fmsg, "Fault raw data", event->event.val,
+ sizeof(event->event.val));
switch (event->type) {
case FAULT_TYPE_CHIP:
- err = chip_fault_show(fmsg, event);
- if (err)
- return err;
+ chip_fault_show(fmsg, event);
break;
case FAULT_TYPE_UCODE:
- err = devlink_fmsg_u8_pair_put(fmsg, "Cause_id", event->event.ucode.cause_id);
- if (err)
- return err;
- err = devlink_fmsg_u8_pair_put(fmsg, "core_id", event->event.ucode.core_id);
- if (err)
- return err;
- err = devlink_fmsg_u8_pair_put(fmsg, "c_id", event->event.ucode.c_id);
- if (err)
- return err;
- err = devlink_fmsg_u8_pair_put(fmsg, "epc", event->event.ucode.epc);
- if (err)
- return err;
+ devlink_fmsg_u8_pair_put(fmsg, "Cause_id", event->event.ucode.cause_id);
+ devlink_fmsg_u8_pair_put(fmsg, "core_id", event->event.ucode.core_id);
+ devlink_fmsg_u8_pair_put(fmsg, "c_id", event->event.ucode.c_id);
+ devlink_fmsg_u8_pair_put(fmsg, "epc", event->event.ucode.epc);
break;
case FAULT_TYPE_MEM_RD_TIMEOUT:
case FAULT_TYPE_MEM_WR_TIMEOUT:
- err = devlink_fmsg_u32_pair_put(fmsg, "Err_csr_ctrl",
- event->event.mem_timeout.err_csr_ctrl);
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "err_csr_data",
- event->event.mem_timeout.err_csr_data);
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "ctrl_tab",
- event->event.mem_timeout.ctrl_tab);
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "mem_index",
- event->event.mem_timeout.mem_index);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "Err_csr_ctrl",
+ event->event.mem_timeout.err_csr_ctrl);
+ devlink_fmsg_u32_pair_put(fmsg, "err_csr_data",
+ event->event.mem_timeout.err_csr_data);
+ devlink_fmsg_u32_pair_put(fmsg, "ctrl_tab",
+ event->event.mem_timeout.ctrl_tab);
+ devlink_fmsg_u32_pair_put(fmsg, "mem_index",
+ event->event.mem_timeout.mem_index);
break;
case FAULT_TYPE_REG_RD_TIMEOUT:
case FAULT_TYPE_REG_WR_TIMEOUT:
- err = devlink_fmsg_u32_pair_put(fmsg, "Err_csr", event->event.reg_timeout.err_csr);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "Err_csr", event->event.reg_timeout.err_csr);
break;
case FAULT_TYPE_PHY_FAULT:
- err = devlink_fmsg_u8_pair_put(fmsg, "Op_type", event->event.phy_fault.op_type);
- if (err)
- return err;
- err = devlink_fmsg_u8_pair_put(fmsg, "port_id", event->event.phy_fault.port_id);
- if (err)
- return err;
- err = devlink_fmsg_u8_pair_put(fmsg, "dev_ad", event->event.phy_fault.dev_ad);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "csr_addr", event->event.phy_fault.csr_addr);
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "op_data", event->event.phy_fault.op_data);
- if (err)
- return err;
+ devlink_fmsg_u8_pair_put(fmsg, "Op_type", event->event.phy_fault.op_type);
+ devlink_fmsg_u8_pair_put(fmsg, "port_id", event->event.phy_fault.port_id);
+ devlink_fmsg_u8_pair_put(fmsg, "dev_ad", event->event.phy_fault.dev_ad);
+ devlink_fmsg_u32_pair_put(fmsg, "csr_addr", event->event.phy_fault.csr_addr);
+ devlink_fmsg_u32_pair_put(fmsg, "op_data", event->event.phy_fault.op_data);
break;
default:
break;
}
-
- return 0;
}
static int hinic_hw_reporter_dump(struct devlink_health_reporter *reporter,
@@ -452,75 +392,30 @@ static int hinic_hw_reporter_dump(struct devlink_health_reporter *reporter,
struct netlink_ext_ack *extack)
{
if (priv_ctx)
- return fault_report_show(fmsg, priv_ctx);
+ fault_report_show(fmsg, priv_ctx);
return 0;
}
-static int mgmt_watchdog_report_show(struct devlink_fmsg *fmsg,
- struct hinic_mgmt_watchdog_info *watchdog_info)
+static void mgmt_watchdog_report_show(struct devlink_fmsg *fmsg,
+ struct hinic_mgmt_watchdog_info *winfo)
{
- int err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "Mgmt deadloop time_h", watchdog_info->curr_time_h);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "time_l", watchdog_info->curr_time_l);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "task_id", watchdog_info->task_id);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "sp", watchdog_info->sp);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "stack_current_used", watchdog_info->curr_used);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "peak_used", watchdog_info->peak_used);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "\n Overflow_flag", watchdog_info->is_overflow);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "stack_top", watchdog_info->stack_top);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "stack_bottom", watchdog_info->stack_bottom);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "mgmt_pc", watchdog_info->pc);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "lr", watchdog_info->lr);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "cpsr", watchdog_info->cpsr);
- if (err)
- return err;
-
- err = devlink_fmsg_binary_pair_put(fmsg, "Mgmt register info",
- watchdog_info->reg, sizeof(watchdog_info->reg));
- if (err)
- return err;
-
- err = devlink_fmsg_binary_pair_put(fmsg, "Mgmt dump stack(start from sp)",
- watchdog_info->data, sizeof(watchdog_info->data));
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_u32_pair_put(fmsg, "Mgmt deadloop time_h", winfo->curr_time_h);
+ devlink_fmsg_u32_pair_put(fmsg, "time_l", winfo->curr_time_l);
+ devlink_fmsg_u32_pair_put(fmsg, "task_id", winfo->task_id);
+ devlink_fmsg_u32_pair_put(fmsg, "sp", winfo->sp);
+ devlink_fmsg_u32_pair_put(fmsg, "stack_current_used", winfo->curr_used);
+ devlink_fmsg_u32_pair_put(fmsg, "peak_used", winfo->peak_used);
+ devlink_fmsg_u32_pair_put(fmsg, "\n Overflow_flag", winfo->is_overflow);
+ devlink_fmsg_u32_pair_put(fmsg, "stack_top", winfo->stack_top);
+ devlink_fmsg_u32_pair_put(fmsg, "stack_bottom", winfo->stack_bottom);
+ devlink_fmsg_u32_pair_put(fmsg, "mgmt_pc", winfo->pc);
+ devlink_fmsg_u32_pair_put(fmsg, "lr", winfo->lr);
+ devlink_fmsg_u32_pair_put(fmsg, "cpsr", winfo->cpsr);
+ devlink_fmsg_binary_pair_put(fmsg, "Mgmt register info", winfo->reg,
+ sizeof(winfo->reg));
+ devlink_fmsg_binary_pair_put(fmsg, "Mgmt dump stack(start from sp)",
+ winfo->data, sizeof(winfo->data));
}
static int hinic_fw_reporter_dump(struct devlink_health_reporter *reporter,
@@ -528,7 +423,7 @@ static int hinic_fw_reporter_dump(struct devlink_health_reporter *reporter,
struct netlink_ext_ack *extack)
{
if (priv_ctx)
- return mgmt_watchdog_report_show(fmsg, priv_ctx);
+ mgmt_watchdog_report_show(fmsg, priv_ctx);
return 0;
}
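
The hinic devlink changes rely on the devlink_fmsg_*_pair_put() helpers having become void in this cycle: an error is recorded inside the struct devlink_fmsg and surfaced once when the message is consumed, so the per-call checks can go. A minimal health-reporter dump sketch in that style, with hypothetical key names:

#include <net/devlink.h>

static int demo_reporter_dump(struct devlink_health_reporter *reporter,
			      struct devlink_fmsg *fmsg, void *priv_ctx,
			      struct netlink_ext_ack *extack)
{
	/* no "if (err) return err" after each put anymore */
	devlink_fmsg_string_pair_put(fmsg, "event", "demo");
	devlink_fmsg_u32_pair_put(fmsg, "count", 42);
	return 0;
}
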
diff --git a/drivers/net/ethernet/huawei/hinic/hinic_tx.c b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
index ad47ac51a139..9b60966736db 100644
--- a/drivers/net/ethernet/huawei/hinic/hinic_tx.c
+++ b/drivers/net/ethernet/huawei/hinic/hinic_tx.c
@@ -861,7 +861,7 @@ int hinic_init_txq(struct hinic_txq *txq, struct hinic_sq *sq,
struct hinic_qp *qp = container_of(sq, struct hinic_qp, sq);
struct hinic_dev *nic_dev = netdev_priv(netdev);
struct hinic_hwdev *hwdev = nic_dev->hwdev;
- int err, irqname_len;
+ int err;
txq->netdev = netdev;
txq->sq = sq;
@@ -882,15 +882,13 @@ int hinic_init_txq(struct hinic_txq *txq, struct hinic_sq *sq,
goto err_alloc_free_sges;
}
- irqname_len = snprintf(NULL, 0, "%s_txq%d", netdev->name, qp->q_id) + 1;
- txq->irq_name = devm_kzalloc(&netdev->dev, irqname_len, GFP_KERNEL);
+ txq->irq_name = devm_kasprintf(&netdev->dev, GFP_KERNEL, "%s_txq%d",
+ netdev->name, qp->q_id);
if (!txq->irq_name) {
err = -ENOMEM;
goto err_alloc_irqname;
}
- sprintf(txq->irq_name, "%s_txq%d", netdev->name, qp->q_id);
-
err = hinic_hwdev_hw_ci_addr_set(hwdev, sq, CI_UPDATE_NO_PENDING,
CI_UPDATE_NO_COALESC);
if (err)
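
The hinic_tx hunk folds a measuring snprintf(NULL, 0, ...), a devm_kzalloc() and a sprintf() into a single devm_kasprintf(), which sizes, allocates and formats in one step and is released automatically with the device. A minimal sketch of the idiom (hypothetical helper; callers still need to check for NULL):

#include <linux/device.h>
#include <linux/gfp.h>

static char *demo_irq_name(struct device *dev, const char *base, int qid)
{
	/* device-managed allocation: freed with the device, no explicit kfree() */
	return devm_kasprintf(dev, GFP_KERNEL, "%s_txq%d", base, qid);
}
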
diff --git a/drivers/net/ethernet/i825xx/sni_82596.c b/drivers/net/ethernet/i825xx/sni_82596.c
index 54bb4d9a0d1e..813403c2628f 100644
--- a/drivers/net/ethernet/i825xx/sni_82596.c
+++ b/drivers/net/ethernet/i825xx/sni_82596.c
@@ -153,7 +153,7 @@ probe_failed_free_mpu:
return retval;
}
-static int sni_82596_driver_remove(struct platform_device *pdev)
+static void sni_82596_driver_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct i596_private *lp = netdev_priv(dev);
@@ -164,12 +164,11 @@ static int sni_82596_driver_remove(struct platform_device *pdev)
iounmap(lp->ca);
iounmap(lp->mpu_port);
free_netdev (dev);
- return 0;
}
static struct platform_driver sni_82596_driver = {
.probe = sni_82596_probe,
- .remove = sni_82596_driver_remove,
+ .remove_new = sni_82596_driver_remove,
.driver = {
.name = sni_82596_string,
},
diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c
index 0a56e9752464..1e29e5c9a2df 100644
--- a/drivers/net/ethernet/ibm/ehea/ehea_main.c
+++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c
@@ -90,7 +90,7 @@ static struct ehea_bcmc_reg_array ehea_bcmc_regs;
static int ehea_probe_adapter(struct platform_device *dev);
-static int ehea_remove(struct platform_device *dev);
+static void ehea_remove(struct platform_device *dev);
static const struct of_device_id ehea_module_device_table[] = {
{
@@ -121,7 +121,7 @@ static struct platform_driver ehea_driver = {
.of_match_table = ehea_device_table,
},
.probe = ehea_probe_adapter,
- .remove = ehea_remove,
+ .remove_new = ehea_remove,
};
void ehea_dump(void *adr, int len, char *msg)
@@ -900,7 +900,7 @@ static int ehea_poll(struct napi_struct *napi, int budget)
if (!cqe && !cqe_skb)
return rx;
- if (!napi_reschedule(napi))
+ if (!napi_schedule(napi))
return rx;
cqe_skb = ehea_proc_cqes(pr, EHEA_POLL_MAX_CQES);
@@ -3471,7 +3471,7 @@ out:
return ret;
}
-static int ehea_remove(struct platform_device *dev)
+static void ehea_remove(struct platform_device *dev)
{
struct ehea_adapter *adapter = platform_get_drvdata(dev);
int i;
@@ -3492,8 +3492,6 @@ static int ehea_remove(struct platform_device *dev)
list_del(&adapter->list);
ehea_update_firmware_handles();
-
- return 0;
}
static int check_module_parm(void)
diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c
index 0c314bf97480..e6e47b1842ea 100644
--- a/drivers/net/ethernet/ibm/emac/core.c
+++ b/drivers/net/ethernet/ibm/emac/core.c
@@ -3253,7 +3253,7 @@ static int emac_probe(struct platform_device *ofdev)
return err;
}
-static int emac_remove(struct platform_device *ofdev)
+static void emac_remove(struct platform_device *ofdev)
{
struct emac_instance *dev = platform_get_drvdata(ofdev);
@@ -3290,8 +3290,6 @@ static int emac_remove(struct platform_device *ofdev)
irq_dispose_mapping(dev->emac_irq);
free_netdev(dev->ndev);
-
- return 0;
}
/* XXX Features in here should be replaced by properties... */
@@ -3319,7 +3317,7 @@ static struct platform_driver emac_driver = {
.of_match_table = emac_match,
},
.probe = emac_probe,
- .remove = emac_remove,
+ .remove_new = emac_remove,
};
static void __init emac_make_bootlist(void)
diff --git a/drivers/net/ethernet/ibm/emac/mal.c b/drivers/net/ethernet/ibm/emac/mal.c
index c3236b59e7e9..2439f7e96e05 100644
--- a/drivers/net/ethernet/ibm/emac/mal.c
+++ b/drivers/net/ethernet/ibm/emac/mal.c
@@ -442,7 +442,7 @@ static int mal_poll(struct napi_struct *napi, int budget)
if (unlikely(mc->ops->peek_rx(mc->dev) ||
test_bit(MAL_COMMAC_RX_STOPPED, &mc->flags))) {
MAL_DBG2(mal, "rotting packet" NL);
- if (!napi_reschedule(napi))
+ if (!napi_schedule(napi))
goto more_work;
spin_lock_irqsave(&mal->lock, flags);
@@ -711,7 +711,7 @@ static int mal_probe(struct platform_device *ofdev)
return err;
}
-static int mal_remove(struct platform_device *ofdev)
+static void mal_remove(struct platform_device *ofdev)
{
struct mal_instance *mal = platform_get_drvdata(ofdev);
@@ -740,8 +740,6 @@ static int mal_remove(struct platform_device *ofdev)
NUM_RX_BUFF * mal->num_rx_chans), mal->bd_virt,
mal->bd_dma);
kfree(mal);
-
- return 0;
}
static const struct of_device_id mal_platform_match[] =
@@ -770,7 +768,7 @@ static struct platform_driver mal_of_driver = {
.of_match_table = mal_platform_match,
},
.probe = mal_probe,
- .remove = mal_remove,
+ .remove_new = mal_remove,
};
int __init mal_init(void)
diff --git a/drivers/net/ethernet/ibm/emac/rgmii.c b/drivers/net/ethernet/ibm/emac/rgmii.c
index fd437f986edf..e1712fdc3c31 100644
--- a/drivers/net/ethernet/ibm/emac/rgmii.c
+++ b/drivers/net/ethernet/ibm/emac/rgmii.c
@@ -273,7 +273,7 @@ static int rgmii_probe(struct platform_device *ofdev)
return rc;
}
-static int rgmii_remove(struct platform_device *ofdev)
+static void rgmii_remove(struct platform_device *ofdev)
{
struct rgmii_instance *dev = platform_get_drvdata(ofdev);
@@ -281,8 +281,6 @@ static int rgmii_remove(struct platform_device *ofdev)
iounmap(dev->base);
kfree(dev);
-
- return 0;
}
static const struct of_device_id rgmii_match[] =
@@ -302,7 +300,7 @@ static struct platform_driver rgmii_driver = {
.of_match_table = rgmii_match,
},
.probe = rgmii_probe,
- .remove = rgmii_remove,
+ .remove_new = rgmii_remove,
};
int __init rgmii_init(void)
diff --git a/drivers/net/ethernet/ibm/emac/tah.c b/drivers/net/ethernet/ibm/emac/tah.c
index aae9a88d95d7..fa3488258ca2 100644
--- a/drivers/net/ethernet/ibm/emac/tah.c
+++ b/drivers/net/ethernet/ibm/emac/tah.c
@@ -130,7 +130,7 @@ static int tah_probe(struct platform_device *ofdev)
return rc;
}
-static int tah_remove(struct platform_device *ofdev)
+static void tah_remove(struct platform_device *ofdev)
{
struct tah_instance *dev = platform_get_drvdata(ofdev);
@@ -138,8 +138,6 @@ static int tah_remove(struct platform_device *ofdev)
iounmap(dev->base);
kfree(dev);
-
- return 0;
}
static const struct of_device_id tah_match[] =
@@ -160,7 +158,7 @@ static struct platform_driver tah_driver = {
.of_match_table = tah_match,
},
.probe = tah_probe,
- .remove = tah_remove,
+ .remove_new = tah_remove,
};
int __init tah_init(void)
diff --git a/drivers/net/ethernet/ibm/emac/zmii.c b/drivers/net/ethernet/ibm/emac/zmii.c
index 6337388ee5f4..26e86cdee2f6 100644
--- a/drivers/net/ethernet/ibm/emac/zmii.c
+++ b/drivers/net/ethernet/ibm/emac/zmii.c
@@ -278,7 +278,7 @@ static int zmii_probe(struct platform_device *ofdev)
return rc;
}
-static int zmii_remove(struct platform_device *ofdev)
+static void zmii_remove(struct platform_device *ofdev)
{
struct zmii_instance *dev = platform_get_drvdata(ofdev);
@@ -286,8 +286,6 @@ static int zmii_remove(struct platform_device *ofdev)
iounmap(dev->base);
kfree(dev);
-
- return 0;
}
static const struct of_device_id zmii_match[] =
@@ -308,7 +306,7 @@ static struct platform_driver zmii_driver = {
.of_match_table = zmii_match,
},
.probe = zmii_probe,
- .remove = zmii_remove,
+ .remove_new = zmii_remove,
};
int __init zmii_init(void)
diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
index a8d79ee350f8..b5aef0b29efe 100644
--- a/drivers/net/ethernet/ibm/ibmveth.c
+++ b/drivers/net/ethernet/ibm/ibmveth.c
@@ -1432,7 +1432,7 @@ static int ibmveth_poll(struct napi_struct *napi, int budget)
BUG_ON(lpar_rc != H_SUCCESS);
if (ibmveth_rxq_pending_buffer(adapter) &&
- napi_reschedule(napi)) {
+ napi_schedule(napi)) {
lpar_rc = h_vio_signal(adapter->vdev->unit_address,
VIO_IRQ_DISABLE);
}
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index cdf5251e5679..30c47b8470ad 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -3519,7 +3519,7 @@ restart_poll:
if (napi_complete_done(napi, frames_processed)) {
enable_scrq_irq(adapter, rx_scrq);
if (pending_scrq(adapter, rx_scrq)) {
- if (napi_reschedule(napi)) {
+ if (napi_schedule(napi)) {
disable_scrq_irq(adapter, rx_scrq);
goto restart_poll;
}
@@ -5247,7 +5247,8 @@ static void handle_vpd_rsp(union ibmvnic_crq *crq,
/* copy firmware version string from vpd into adapter */
if ((substr + 3 + fw_level_len) <
(adapter->vpd->buff + adapter->vpd->len)) {
- strncpy((char *)adapter->fw_version, substr + 3, fw_level_len);
+ strscpy(adapter->fw_version, substr + 3,
+ sizeof(adapter->fw_version));
} else {
dev_info(dev, "FW substr extrapolated VPD buff\n");
}
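
The ehea, emac/mal, ibmveth and ibmvnic hunks switch napi_reschedule() to napi_schedule(), which now also reports whether the NAPI instance was actually queued, making the wrapper redundant. A minimal poll-routine sketch of the re-arm pattern, with hypothetical stand-ins for the drivers' ring and IRQ accessors:

#include <linux/netdevice.h>

/* hypothetical stand-ins for the driver's real ring/irq helpers */
static int demo_clean(struct napi_struct *napi, int budget) { return 0; }
static bool demo_more_work(struct napi_struct *napi) { return false; }
static void demo_irq_enable(struct napi_struct *napi) { }
static void demo_irq_disable(struct napi_struct *napi) { }

static int demo_poll(struct napi_struct *napi, int budget)
{
	int work = demo_clean(napi, budget);

	if (work < budget && napi_complete_done(napi, work)) {
		demo_irq_enable(napi);
		/* packets that slipped in after the last check: poll again */
		if (demo_more_work(napi) && napi_schedule(napi))
			demo_irq_disable(napi);
	}
	return work;
}
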
diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index 9bc0a9519899..06ddd7147c7f 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -225,6 +225,7 @@ config I40E
depends on PTP_1588_CLOCK_OPTIONAL
depends on PCI
select AUXILIARY_BUS
+ select NET_DEVLINK
help
This driver supports Intel(R) Ethernet Controller XL710 Family of
devices. For more information on how to identify your adapter, go
@@ -284,6 +285,7 @@ config ICE
select DIMLIB
select NET_DEVLINK
select PLDMFW
+ select DPLL
help
This driver supports Intel(R) Ethernet Connection E800 Series of
devices. For more information on how to identify your adapter, go
@@ -355,5 +357,17 @@ config IGC
To compile this driver as a module, choose M here. The module
will be called igc.
+config IDPF
+ tristate "Intel(R) Infrastructure Data Path Function Support"
+ depends on PCI_MSI
+ select DIMLIB
+ select PAGE_POOL
+ select PAGE_POOL_STATS
+ help
+ This driver supports Intel(R) Infrastructure Data Path Function
+ devices.
+
+ To compile this driver as a module, choose M here. The module
+ will be called idpf.
endif # NET_VENDOR_INTEL
diff --git a/drivers/net/ethernet/intel/Makefile b/drivers/net/ethernet/intel/Makefile
index d80d04132073..dacb481ee5b1 100644
--- a/drivers/net/ethernet/intel/Makefile
+++ b/drivers/net/ethernet/intel/Makefile
@@ -15,3 +15,4 @@ obj-$(CONFIG_I40E) += i40e/
obj-$(CONFIG_IAVF) += iavf/
obj-$(CONFIG_FM10K) += fm10k/
obj-$(CONFIG_ICE) += ice/
+obj-$(CONFIG_IDPF) += idpf/
diff --git a/drivers/net/ethernet/intel/e100.c b/drivers/net/ethernet/intel/e100.c
index d3fdc290937f..01f0f12035ca 100644
--- a/drivers/net/ethernet/intel/e100.c
+++ b/drivers/net/ethernet/intel/e100.c
@@ -2841,7 +2841,7 @@ static int e100_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
netdev->netdev_ops = &e100_netdev_ops;
netdev->ethtool_ops = &e100_ethtool_ops;
netdev->watchdog_timeo = E100_WATCHDOG_PERIOD;
- strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
+ strscpy(netdev->name, pci_name(pdev), sizeof(netdev->name));
nic = netdev_priv(netdev);
netif_napi_add_weight(netdev, &nic->napi, e100_poll, E100_NAPI_WEIGHT);
diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
index da6e303ad99b..1d1e93686af2 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
@@ -1014,7 +1014,7 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
netdev->watchdog_timeo = 5 * HZ;
netif_napi_add(netdev, &adapter->napi, e1000_clean);
- strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
+ strscpy(netdev->name, pci_name(pdev), sizeof(netdev->name));
adapter->bd_number = cards_found;
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c b/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c
index d53369e30040..13a05604dcc0 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c
@@ -448,10 +448,10 @@ static void fm10k_get_drvinfo(struct net_device *dev,
{
struct fm10k_intfc *interface = netdev_priv(dev);
- strncpy(info->driver, fm10k_driver_name,
- sizeof(info->driver) - 1);
- strncpy(info->bus_info, pci_name(interface->pdev),
- sizeof(info->bus_info) - 1);
+ strscpy(info->driver, fm10k_driver_name,
+ sizeof(info->driver));
+ strscpy(info->bus_info, pci_name(interface->pdev),
+ sizeof(info->bus_info));
}
static void fm10k_get_pauseparam(struct net_device *dev,
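
The e100, e1000, fm10k and ibmvnic hunks replace strncpy() with strscpy(), which takes the full destination size, always NUL-terminates, and truncates safely. A minimal sketch of the difference with a hypothetical fixed-size field:

#include <linux/string.h>

struct demo_drvinfo {
	char driver[32];
};

static void demo_fill_drvinfo(struct demo_drvinfo *info, const char *name)
{
	/* strncpy() needed "sizeof(...) - 1" plus manual termination;
	 * strscpy() guarantees a trailing NUL and reports truncation.
	 */
	strscpy(info->driver, name, sizeof(info->driver));
}
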
diff --git a/drivers/net/ethernet/intel/i40e/Makefile b/drivers/net/ethernet/intel/i40e/Makefile
index 2f21b3e89fd0..cad93f323bd5 100644
--- a/drivers/net/ethernet/intel/i40e/Makefile
+++ b/drivers/net/ethernet/intel/i40e/Makefile
@@ -24,6 +24,7 @@ i40e-objs := i40e_main.o \
i40e_ddp.o \
i40e_client.o \
i40e_virtchnl_pf.o \
- i40e_xsk.o
+ i40e_xsk.o \
+ i40e_devlink.o
i40e-$(CONFIG_I40E_DCB) += i40e_dcb.o i40e_dcb_nl.o
diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 55bb0b5310d5..1bf424ac3145 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -4,47 +4,21 @@
#ifndef _I40E_H_
#define _I40E_H_
-#include <net/tcp.h>
-#include <net/udp.h>
-#include <linux/types.h>
-#include <linux/errno.h>
-#include <linux/module.h>
#include <linux/pci.h>
-#include <linux/netdevice.h>
-#include <linux/ioport.h>
-#include <linux/iommu.h>
-#include <linux/slab.h>
-#include <linux/list.h>
-#include <linux/hashtable.h>
-#include <linux/string.h>
-#include <linux/in.h>
-#include <linux/ip.h>
-#include <linux/sctp.h>
-#include <linux/pkt_sched.h>
-#include <linux/ipv6.h>
-#include <net/checksum.h>
-#include <net/ip6_checksum.h>
-#include <linux/ethtool.h>
-#include <linux/if_vlan.h>
-#include <linux/if_macvlan.h>
-#include <linux/if_bridge.h>
-#include <linux/clocksource.h>
-#include <linux/net_tstamp.h>
#include <linux/ptp_clock_kernel.h>
+#include <linux/types.h>
+#include <linux/avf/virtchnl.h>
+#include <linux/net/intel/i40e_client.h>
+#include <net/devlink.h>
#include <net/pkt_cls.h>
-#include <net/pkt_sched.h>
-#include <net/tc_act/tc_gact.h>
-#include <net/tc_act/tc_mirred.h>
#include <net/udp_tunnel.h>
-#include <net/xdp_sock.h>
-#include <linux/bitfield.h>
-#include "i40e_type.h"
+#include "i40e_dcb.h"
+#include "i40e_debug.h"
+#include "i40e_devlink.h"
+#include "i40e_io.h"
#include "i40e_prototype.h"
-#include <linux/net/intel/i40e_client.h>
-#include <linux/avf/virtchnl.h>
-#include "i40e_virtchnl_pf.h"
+#include "i40e_register.h"
#include "i40e_txrx.h"
-#include "i40e_dcb.h"
/* Useful i40e defaults */
#define I40E_MAX_VEB 16
@@ -75,23 +49,19 @@
#define I40E_QUEUE_WAIT_RETRY_LIMIT 10
#define I40E_INT_NAME_STR_LEN (IFNAMSIZ + 16)
-#define I40E_NVM_VERSION_LO_SHIFT 0
-#define I40E_NVM_VERSION_LO_MASK (0xff << I40E_NVM_VERSION_LO_SHIFT)
-#define I40E_NVM_VERSION_HI_SHIFT 12
-#define I40E_NVM_VERSION_HI_MASK (0xf << I40E_NVM_VERSION_HI_SHIFT)
-#define I40E_OEM_VER_BUILD_MASK 0xffff
-#define I40E_OEM_VER_PATCH_MASK 0xff
-#define I40E_OEM_VER_BUILD_SHIFT 8
-#define I40E_OEM_VER_SHIFT 24
#define I40E_PHY_DEBUG_ALL \
(I40E_AQ_PHY_DEBUG_DISABLE_LINK_FW | \
I40E_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW)
#define I40E_OEM_EETRACK_ID 0xffffffff
-#define I40E_OEM_GEN_SHIFT 24
-#define I40E_OEM_SNAP_MASK 0x00ff0000
-#define I40E_OEM_SNAP_SHIFT 16
-#define I40E_OEM_RELEASE_MASK 0x0000ffff
+#define I40E_NVM_VERSION_LO_MASK GENMASK(7, 0)
+#define I40E_NVM_VERSION_HI_MASK GENMASK(15, 12)
+#define I40E_OEM_VER_BUILD_MASK GENMASK(23, 8)
+#define I40E_OEM_VER_PATCH_MASK GENMASK(7, 0)
+#define I40E_OEM_VER_MASK GENMASK(31, 24)
+#define I40E_OEM_GEN_MASK GENMASK(31, 24)
+#define I40E_OEM_SNAP_MASK GENMASK(23, 16)
+#define I40E_OEM_RELEASE_MASK GENMASK(15, 0)
#define I40E_RX_DESC(R, i) \
(&(((union i40e_rx_desc *)((R)->desc))[i]))
@@ -323,29 +293,6 @@ struct i40e_udp_port_config {
u8 filter_index;
};
-#define I40_DDP_FLASH_REGION 100
-#define I40E_PROFILE_INFO_SIZE 48
-#define I40E_MAX_PROFILE_NUM 16
-#define I40E_PROFILE_LIST_SIZE \
- (I40E_PROFILE_INFO_SIZE * I40E_MAX_PROFILE_NUM + 4)
-#define I40E_DDP_PROFILE_PATH "intel/i40e/ddp/"
-#define I40E_DDP_PROFILE_NAME_MAX 64
-
-int i40e_ddp_load(struct net_device *netdev, const u8 *data, size_t size,
- bool is_add);
-int i40e_ddp_flash(struct net_device *netdev, struct ethtool_flash *flash);
-
-struct i40e_ddp_profile_list {
- u32 p_count;
- struct i40e_profile_info p_info[];
-};
-
-struct i40e_ddp_old_profile_list {
- struct list_head list;
- size_t old_ddp_size;
- u8 old_ddp_buf[];
-};
-
/* macros related to FLX_PIT */
#define I40E_FLEX_SET_FSIZE(fsize) (((fsize) << \
I40E_PRTQF_FLX_PIT_FSIZE_SHIFT) & \
@@ -462,6 +409,7 @@ static inline const u8 *i40e_channel_mac(struct i40e_channel *ch)
/* struct that defines the Ethernet device */
struct i40e_pf {
struct pci_dev *pdev;
+ struct devlink_port devlink_port;
struct i40e_hw hw;
DECLARE_BITMAP(state, __I40E_STATE_SIZE__);
struct msix_entry *msix_entries;
@@ -1002,43 +950,104 @@ struct i40e_device {
};
/**
- * i40e_nvm_version_str - format the NVM version strings
+ * i40e_info_nvm_ver - format the NVM version string
* @hw: ptr to the hardware info
+ * @buf: string buffer to store
+ * @len: buffer size
+ *
+ * Formats NVM version string as:
+ * <gen>.<snap>.<release> when eetrackid == I40E_OEM_EETRACK_ID
+ * <nvm_major>.<nvm_minor> otherwise
**/
-static inline char *i40e_nvm_version_str(struct i40e_hw *hw)
+static inline void i40e_info_nvm_ver(struct i40e_hw *hw, char *buf, size_t len)
{
- static char buf[32];
- u32 full_ver;
+ struct i40e_nvm_info *nvm = &hw->nvm;
- full_ver = hw->nvm.oem_ver;
-
- if (hw->nvm.eetrack == I40E_OEM_EETRACK_ID) {
+ if (nvm->eetrack == I40E_OEM_EETRACK_ID) {
+ u32 full_ver = nvm->oem_ver;
u8 gen, snap;
u16 release;
- gen = (u8)(full_ver >> I40E_OEM_GEN_SHIFT);
- snap = (u8)((full_ver & I40E_OEM_SNAP_MASK) >>
- I40E_OEM_SNAP_SHIFT);
- release = (u16)(full_ver & I40E_OEM_RELEASE_MASK);
-
- snprintf(buf, sizeof(buf), "%x.%x.%x", gen, snap, release);
+ gen = FIELD_GET(I40E_OEM_GEN_MASK, full_ver);
+ snap = FIELD_GET(I40E_OEM_SNAP_MASK, full_ver);
+ release = FIELD_GET(I40E_OEM_RELEASE_MASK, full_ver);
+ snprintf(buf, len, "%x.%x.%x", gen, snap, release);
} else {
- u8 ver, patch;
+ u8 major, minor;
+
+ major = FIELD_GET(I40E_NVM_VERSION_HI_MASK, nvm->version);
+ minor = FIELD_GET(I40E_NVM_VERSION_LO_MASK, nvm->version);
+ snprintf(buf, len, "%x.%02x", major, minor);
+ }
+}
+
+/**
+ * i40e_info_eetrack - format the EETrackID string
+ * @hw: ptr to the hardware info
+ * @buf: string buffer to store
+ * @len: buffer size
+ *
+ * Returns the hexadecimally formatted EETrackID if it differs from
+ * I40E_OEM_EETRACK_ID, or an empty string otherwise.
+ **/
+static inline void i40e_info_eetrack(struct i40e_hw *hw, char *buf, size_t len)
+{
+ struct i40e_nvm_info *nvm = &hw->nvm;
+
+ buf[0] = '\0';
+ if (nvm->eetrack != I40E_OEM_EETRACK_ID)
+ snprintf(buf, len, "0x%08x", nvm->eetrack);
+}
+
+/**
+ * i40e_info_civd_ver - format the combo image version string
+ * @hw: ptr to the hardware info
+ * @buf: string buffer to store
+ * @len: buffer size
+ *
+ * Returns the formatted combo image version if the adapter's EETrackID
+ * differs from I40E_OEM_EETRACK_ID, or an empty string otherwise.
+ **/
+static inline void i40e_info_civd_ver(struct i40e_hw *hw, char *buf, size_t len)
+{
+ struct i40e_nvm_info *nvm = &hw->nvm;
+
+ buf[0] = '\0';
+ if (nvm->eetrack != I40E_OEM_EETRACK_ID) {
+ u32 full_ver = nvm->oem_ver;
+ u8 major, minor;
u16 build;
- ver = (u8)(full_ver >> I40E_OEM_VER_SHIFT);
- build = (u16)((full_ver >> I40E_OEM_VER_BUILD_SHIFT) &
- I40E_OEM_VER_BUILD_MASK);
- patch = (u8)(full_ver & I40E_OEM_VER_PATCH_MASK);
-
- snprintf(buf, sizeof(buf),
- "%x.%02x 0x%x %d.%d.%d",
- (hw->nvm.version & I40E_NVM_VERSION_HI_MASK) >>
- I40E_NVM_VERSION_HI_SHIFT,
- (hw->nvm.version & I40E_NVM_VERSION_LO_MASK) >>
- I40E_NVM_VERSION_LO_SHIFT,
- hw->nvm.eetrack, ver, build, patch);
+ major = FIELD_GET(I40E_OEM_VER_MASK, full_ver);
+ build = FIELD_GET(I40E_OEM_VER_BUILD_MASK, full_ver);
+ minor = FIELD_GET(I40E_OEM_VER_PATCH_MASK, full_ver);
+ snprintf(buf, len, "%d.%d.%d", major, build, minor);
}
+}
+
+/**
+ * i40e_nvm_version_str - format the NVM version strings
+ * @hw: ptr to the hardware info
+ * @buf: string buffer to store
+ * @len: buffer size
+ **/
+static inline char *i40e_nvm_version_str(struct i40e_hw *hw, char *buf,
+ size_t len)
+{
+ char ver[16] = " ";
+
+ /* Get NVM version */
+ i40e_info_nvm_ver(hw, buf, len);
+
+ /* Append EETrackID if provided */
+ i40e_info_eetrack(hw, &ver[1], sizeof(ver) - 1);
+ if (strlen(ver) > 1)
+ strlcat(buf, ver, len);
+
+ /* Append combo image version if provided */
+ i40e_info_civd_ver(hw, &ver[1], sizeof(ver) - 1);
+ if (strlen(ver) > 1)
+ strlcat(buf, ver, len);
return buf;
}
@@ -1321,4 +1330,15 @@ static inline u32 i40e_is_tc_mqprio_enabled(struct i40e_pf *pf)
return pf->flags & I40E_FLAG_TC_MQPRIO;
}
+/**
+ * i40e_hw_to_pf - get pf pointer from the hardware structure
+ * @hw: pointer to the device HW structure
+ **/
+static inline struct i40e_pf *i40e_hw_to_pf(struct i40e_hw *hw)
+{
+ return container_of(hw, struct i40e_pf, hw);
+}
+
+struct device *i40e_hw_to_dev(struct i40e_hw *hw);
+
#endif /* _I40E_H_ */
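
The i40e.h hunk swaps hand-rolled shift/mask pairs for GENMASK() plus FIELD_GET(), which derives the shift from the mask at compile time. A small sketch of the equivalence for the NVM version fields, using the same bit layout as the new masks:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

#define DEMO_NVM_VERSION_LO_MASK	GENMASK(7, 0)
#define DEMO_NVM_VERSION_HI_MASK	GENMASK(15, 12)

static void demo_split_nvm_version(u16 version, u8 *major, u8 *minor)
{
	/* FIELD_GET(GENMASK(15, 12), v) == (v & 0xf000) >> 12 */
	*major = FIELD_GET(DEMO_NVM_VERSION_HI_MASK, version);
	*minor = FIELD_GET(DEMO_NVM_VERSION_LO_MASK, version);
}
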
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.c b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
index 100eb77b8dfe..9ce6e633cc2f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
@@ -1,9 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
-#include "i40e_type.h"
+#include <linux/delay.h>
+#include "i40e_alloc.h"
#include "i40e_register.h"
-#include "i40e_adminq.h"
#include "i40e_prototype.h"
static void i40e_resume_aq(struct i40e_hw *hw);
@@ -51,7 +51,6 @@ static int i40e_alloc_adminq_asq_ring(struct i40e_hw *hw)
int ret_code;
ret_code = i40e_allocate_dma_mem(hw, &hw->aq.asq.desc_buf,
- i40e_mem_atq_ring,
(hw->aq.num_asq_entries *
sizeof(struct i40e_aq_desc)),
I40E_ADMINQ_DESC_ALIGNMENT);
@@ -78,7 +77,6 @@ static int i40e_alloc_adminq_arq_ring(struct i40e_hw *hw)
int ret_code;
ret_code = i40e_allocate_dma_mem(hw, &hw->aq.arq.desc_buf,
- i40e_mem_arq_ring,
(hw->aq.num_arq_entries *
sizeof(struct i40e_aq_desc)),
I40E_ADMINQ_DESC_ALIGNMENT);
@@ -136,7 +134,6 @@ static int i40e_alloc_arq_bufs(struct i40e_hw *hw)
for (i = 0; i < hw->aq.num_arq_entries; i++) {
bi = &hw->aq.arq.r.arq_bi[i];
ret_code = i40e_allocate_dma_mem(hw, bi,
- i40e_mem_arq_buf,
hw->aq.arq_buf_size,
I40E_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
@@ -198,7 +195,6 @@ static int i40e_alloc_asq_bufs(struct i40e_hw *hw)
for (i = 0; i < hw->aq.num_asq_entries; i++) {
bi = &hw->aq.asq.r.asq_bi[i];
ret_code = i40e_allocate_dma_mem(hw, bi,
- i40e_mem_asq_buf,
hw->aq.asq_buf_size,
I40E_ADMINQ_DESC_ALIGNMENT);
if (ret_code)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.h b/drivers/net/ethernet/intel/i40e/i40e_adminq.h
index 267f2e0a21ce..80125bea80a2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.h
@@ -4,7 +4,8 @@
#ifndef _I40E_ADMINQ_H_
#define _I40E_ADMINQ_H_
-#include "i40e_osdep.h"
+#include <linux/mutex.h>
+#include "i40e_alloc.h"
#include "i40e_adminq_cmd.h"
#define I40E_ADMINQ_DESC(R, i) \
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
index 3357d65a906b..18a1c3b6d72c 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
@@ -4,6 +4,8 @@
#ifndef _I40E_ADMINQ_CMD_H_
#define _I40E_ADMINQ_CMD_H_
+#include <linux/bits.h>
+
/* This header file defines the i40e Admin Queue commands and is shared between
* i40e Firmware and Software.
*
diff --git a/drivers/net/ethernet/intel/i40e/i40e_alloc.h b/drivers/net/ethernet/intel/i40e/i40e_alloc.h
index a6c9a9e343d1..e0dde326255d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_alloc.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_alloc.h
@@ -4,25 +4,25 @@
#ifndef _I40E_ALLOC_H_
#define _I40E_ALLOC_H_
+#include <linux/types.h>
+
struct i40e_hw;
-/* Memory allocation types */
-enum i40e_memory_type {
- i40e_mem_arq_buf = 0, /* ARQ indirect command buffer */
- i40e_mem_asq_buf = 1,
- i40e_mem_atq_buf = 2, /* ATQ indirect command buffer */
- i40e_mem_arq_ring = 3, /* ARQ descriptor ring */
- i40e_mem_atq_ring = 4, /* ATQ descriptor ring */
- i40e_mem_pd = 5, /* Page Descriptor */
- i40e_mem_bp = 6, /* Backing Page - 4KB */
- i40e_mem_bp_jumbo = 7, /* Backing Page - > 4KB */
- i40e_mem_reserved
+/* memory allocation tracking */
+struct i40e_dma_mem {
+ void *va;
+ dma_addr_t pa;
+ u32 size;
+};
+
+struct i40e_virt_mem {
+ void *va;
+ u32 size;
};
/* prototype for functions used for dynamic memory allocation */
int i40e_allocate_dma_mem(struct i40e_hw *hw,
struct i40e_dma_mem *mem,
- enum i40e_memory_type type,
u64 size, u32 alignment);
int i40e_free_dma_mem(struct i40e_hw *hw,
struct i40e_dma_mem *mem);
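
With the i40e memory-type enum gone, i40e_allocate_dma_mem() takes only a size and an alignment, as the adminq hunks above show. A minimal sketch of a call against the new prototype (hypothetical ring parameters):

#include <linux/types.h>
#include "i40e_alloc.h"

static int demo_alloc_ring(struct i40e_hw *hw, struct i40e_dma_mem *ring,
			   u16 entries, u16 entry_size)
{
	/* the old "enum i40e_memory_type" argument is simply dropped */
	return i40e_allocate_dma_mem(hw, ring, (u64)entries * entry_size, 4096);
}
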
diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.c b/drivers/net/ethernet/intel/i40e/i40e_client.c
index 639c5a1ca853..306758428aef 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_client.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_client.c
@@ -6,7 +6,6 @@
#include <linux/net/intel/i40e_client.h>
#include "i40e.h"
-#include "i40e_prototype.h"
static LIST_HEAD(i40e_devices);
static DEFINE_MUTEX(i40e_device_mutex);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
index 1b493854f522..d7e24d661724 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
@@ -1,11 +1,14 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2021 Intel Corporation. */
-#include "i40e.h"
-#include "i40e_type.h"
-#include "i40e_adminq.h"
-#include "i40e_prototype.h"
#include <linux/avf/virtchnl.h>
+#include <linux/delay.h>
+#include <linux/etherdevice.h>
+#include <linux/pci.h>
+#include "i40e_adminq_cmd.h"
+#include "i40e_devids.h"
+#include "i40e_prototype.h"
+#include "i40e_register.h"
/**
* i40e_set_mac_type - Sets MAC type
@@ -818,62 +821,72 @@ void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable)
}
/**
- * i40e_read_pba_string - Reads part number string from EEPROM
+ * i40e_get_pba_string - Reads part number string from EEPROM
* @hw: pointer to hardware structure
- * @pba_num: stores the part number string from the EEPROM
- * @pba_num_size: part number string buffer length
*
- * Reads the part number string from the EEPROM.
+ * Reads the part number string from the EEPROM, stores it in a newly
+ * allocated buffer, and saves the resulting pointer in the
+ * i40e_hw->pba_id field.
**/
-int i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
- u32 pba_num_size)
+void i40e_get_pba_string(struct i40e_hw *hw)
{
+#define I40E_NVM_PBA_FLAGS_BLK_PRESENT 0xFAFA
u16 pba_word = 0;
u16 pba_size = 0;
u16 pba_ptr = 0;
- int status = 0;
- u16 i = 0;
+ int status;
+ char *ptr;
+ u16 i;
status = i40e_read_nvm_word(hw, I40E_SR_PBA_FLAGS, &pba_word);
- if (status || (pba_word != 0xFAFA)) {
- hw_dbg(hw, "Failed to read PBA flags or flag is invalid.\n");
- return status;
+ if (status) {
+ hw_dbg(hw, "Failed to read PBA flags.\n");
+ return;
+ }
+ if (pba_word != I40E_NVM_PBA_FLAGS_BLK_PRESENT) {
+ hw_dbg(hw, "PBA block is not present.\n");
+ return;
}
status = i40e_read_nvm_word(hw, I40E_SR_PBA_BLOCK_PTR, &pba_ptr);
if (status) {
hw_dbg(hw, "Failed to read PBA Block pointer.\n");
- return status;
+ return;
}
status = i40e_read_nvm_word(hw, pba_ptr, &pba_size);
if (status) {
hw_dbg(hw, "Failed to read PBA Block size.\n");
- return status;
+ return;
}
/* Subtract one to get PBA word count (PBA Size word is included in
- * total size)
+ * total size) and advance pointer to first PBA word.
*/
pba_size--;
- if (pba_num_size < (((u32)pba_size * 2) + 1)) {
- hw_dbg(hw, "Buffer too small for PBA data.\n");
- return -EINVAL;
+ pba_ptr++;
+ if (!pba_size) {
+ hw_dbg(hw, "PBA ID is empty.\n");
+ return;
}
+ ptr = devm_kzalloc(i40e_hw_to_dev(hw), pba_size * 2 + 1, GFP_KERNEL);
+ if (!ptr)
+ return;
+ hw->pba_id = ptr;
+
for (i = 0; i < pba_size; i++) {
- status = i40e_read_nvm_word(hw, (pba_ptr + 1) + i, &pba_word);
+ status = i40e_read_nvm_word(hw, pba_ptr + i, &pba_word);
if (status) {
hw_dbg(hw, "Failed to read PBA Block word %d.\n", i);
- return status;
+ devm_kfree(i40e_hw_to_dev(hw), hw->pba_id);
+ hw->pba_id = NULL;
+ return;
}
- pba_num[(i * 2)] = (pba_word >> 8) & 0xFF;
- pba_num[(i * 2) + 1] = pba_word & 0xFF;
+ *ptr++ = (pba_word >> 8) & 0xFF;
+ *ptr++ = pba_word & 0xFF;
}
- pba_num[(pba_size * 2)] = '\0';
-
- return status;
}
/**
diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb.c b/drivers/net/ethernet/intel/i40e/i40e_dcb.c
index f81e744c0fb3..68602fc375f6 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_dcb.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_dcb.c
@@ -1,9 +1,9 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2021 Intel Corporation. */
-#include "i40e_adminq.h"
-#include "i40e_prototype.h"
+#include "i40e_alloc.h"
#include "i40e_dcb.h"
+#include "i40e_prototype.h"
/**
* i40e_get_dcbx_status
diff --git a/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c b/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c
index 195421d863ab..077a95dad32c 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_dcb_nl.c
@@ -2,8 +2,8 @@
/* Copyright(c) 2013 - 2021 Intel Corporation. */
#ifdef CONFIG_I40E_DCB
-#include "i40e.h"
#include <net/dcbnl.h>
+#include "i40e.h"
#define I40E_DCBNL_STATUS_SUCCESS 0
#define I40E_DCBNL_STATUS_ERROR 1
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ddp.c b/drivers/net/ethernet/intel/i40e/i40e_ddp.c
index 0e72abd178ae..cf25bfc5dc3f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ddp.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ddp.c
@@ -1,9 +1,27 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
+#include <linux/firmware.h>
#include "i40e.h"
-#include <linux/firmware.h>
+#define I40_DDP_FLASH_REGION 100
+#define I40E_PROFILE_INFO_SIZE 48
+#define I40E_MAX_PROFILE_NUM 16
+#define I40E_PROFILE_LIST_SIZE \
+ (I40E_PROFILE_INFO_SIZE * I40E_MAX_PROFILE_NUM + 4)
+#define I40E_DDP_PROFILE_PATH "intel/i40e/ddp/"
+#define I40E_DDP_PROFILE_NAME_MAX 64
+
+struct i40e_ddp_profile_list {
+ u32 p_count;
+ struct i40e_profile_info p_info[];
+};
+
+struct i40e_ddp_old_profile_list {
+ struct list_head list;
+ size_t old_ddp_size;
+ u8 old_ddp_buf[];
+};
/**
 * i40e_ddp_profiles_eq - checks if DDP profiles are equivalent
@@ -261,8 +279,8 @@ static bool i40e_ddp_is_pkg_hdr_valid(struct net_device *netdev,
* Checks correctness and loads DDP profile to the NIC. The function is
* also used for rolling back previously loaded profile.
**/
-int i40e_ddp_load(struct net_device *netdev, const u8 *data, size_t size,
- bool is_add)
+static int i40e_ddp_load(struct net_device *netdev, const u8 *data, size_t size,
+ bool is_add)
{
u8 profile_info_sec[sizeof(struct i40e_profile_section_header) +
sizeof(struct i40e_profile_info)];
@@ -438,10 +456,9 @@ int i40e_ddp_flash(struct net_device *netdev, struct ethtool_flash *flash)
char profile_name[sizeof(I40E_DDP_PROFILE_PATH)
+ I40E_DDP_PROFILE_NAME_MAX];
- profile_name[sizeof(profile_name) - 1] = 0;
- strncpy(profile_name, I40E_DDP_PROFILE_PATH,
- sizeof(profile_name) - 1);
- strncat(profile_name, flash->data, I40E_DDP_PROFILE_NAME_MAX);
+ scnprintf(profile_name, sizeof(profile_name), "%s%s",
+ I40E_DDP_PROFILE_PATH, flash->data);
+
/* Load DDP recipe. */
status = request_firmware(&ddp_config, profile_name,
&netdev->dev);
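
i40e_ddp_flash() now builds the firmware path with one scnprintf() rather than strncpy() plus strncat(), making termination and truncation explicit. A minimal sketch of the idiom with a hypothetical prefix:

#include <linux/kernel.h>

#define DEMO_DDP_PATH	"intel/i40e/ddp/"

static void demo_build_fw_path(char *buf, size_t len, const char *name)
{
	/* scnprintf() always NUL-terminates and returns the bytes written */
	scnprintf(buf, len, "%s%s", DEMO_DDP_PATH, name);
}
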
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debug.h b/drivers/net/ethernet/intel/i40e/i40e_debug.h
new file mode 100644
index 000000000000..27ebc72d8bfe
--- /dev/null
+++ b/drivers/net/ethernet/intel/i40e/i40e_debug.h
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2023 Intel Corporation. */
+
+#ifndef _I40E_DEBUG_H_
+#define _I40E_DEBUG_H_
+
+#include <linux/dev_printk.h>
+
+/* debug masks - set these bits in hw->debug_mask to control output */
+enum i40e_debug_mask {
+ I40E_DEBUG_INIT = 0x00000001,
+ I40E_DEBUG_RELEASE = 0x00000002,
+
+ I40E_DEBUG_LINK = 0x00000010,
+ I40E_DEBUG_PHY = 0x00000020,
+ I40E_DEBUG_HMC = 0x00000040,
+ I40E_DEBUG_NVM = 0x00000080,
+ I40E_DEBUG_LAN = 0x00000100,
+ I40E_DEBUG_FLOW = 0x00000200,
+ I40E_DEBUG_DCB = 0x00000400,
+ I40E_DEBUG_DIAG = 0x00000800,
+ I40E_DEBUG_FD = 0x00001000,
+ I40E_DEBUG_PACKAGE = 0x00002000,
+ I40E_DEBUG_IWARP = 0x00F00000,
+ I40E_DEBUG_AQ_MESSAGE = 0x01000000,
+ I40E_DEBUG_AQ_DESCRIPTOR = 0x02000000,
+ I40E_DEBUG_AQ_DESC_BUFFER = 0x04000000,
+ I40E_DEBUG_AQ_COMMAND = 0x06000000,
+ I40E_DEBUG_AQ = 0x0F000000,
+
+ I40E_DEBUG_USER = 0xF0000000,
+
+ I40E_DEBUG_ALL = 0xFFFFFFFF
+};
+
+struct i40e_hw;
+struct device *i40e_hw_to_dev(struct i40e_hw *hw);
+
+#define hw_dbg(hw, S, A...) dev_dbg(i40e_hw_to_dev(hw), S, ##A)
+
+#define i40e_debug(h, m, s, ...) \
+do { \
+ if (((m) & (h)->debug_mask)) \
+ dev_info(i40e_hw_to_dev(h), s, ##__VA_ARGS__); \
+} while (0)
+
+#endif /* _I40E_DEBUG_H_ */
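
The new i40e_debug.h moves the debug-mask bits and the hw_dbg()/i40e_debug() helpers out of the legacy osdep header. A minimal usage sketch, assuming a hw whose debug_mask has I40E_DEBUG_NVM set:

#include <linux/types.h>
#include "i40e_type.h"
#include "i40e_debug.h"

static void demo_log_nvm_read(struct i40e_hw *hw, u16 offset)
{
	/* printed only when I40E_DEBUG_NVM is set in hw->debug_mask */
	i40e_debug(hw, I40E_DEBUG_NVM, "reading NVM word at offset 0x%x\n",
		   offset);
	hw_dbg(hw, "NVM read done\n");	/* plain dev_dbg() on the PF device */
}
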
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index 1a497cb07710..999c9708def5 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -5,8 +5,9 @@
#include <linux/fs.h>
#include <linux/debugfs.h>
-
+#include <linux/if_bridge.h>
#include "i40e.h"
+#include "i40e_virtchnl_pf.h"
static struct dentry *i40e_dbg_root;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_devlink.c b/drivers/net/ethernet/intel/i40e/i40e_devlink.c
new file mode 100644
index 000000000000..74bc111b4849
--- /dev/null
+++ b/drivers/net/ethernet/intel/i40e/i40e_devlink.c
@@ -0,0 +1,236 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright(c) 2023 Intel Corporation. */
+
+#include <net/devlink.h>
+#include "i40e.h"
+#include "i40e_devlink.h"
+
+static void i40e_info_get_dsn(struct i40e_pf *pf, char *buf, size_t len)
+{
+ u8 dsn[8];
+
+ put_unaligned_be64(pci_get_dsn(pf->pdev), dsn);
+
+ snprintf(buf, len, "%8phD", dsn);
+}
+
+static void i40e_info_fw_mgmt(struct i40e_hw *hw, char *buf, size_t len)
+{
+ struct i40e_adminq_info *aq = &hw->aq;
+
+ snprintf(buf, len, "%u.%u", aq->fw_maj_ver, aq->fw_min_ver);
+}
+
+static void i40e_info_fw_mgmt_build(struct i40e_hw *hw, char *buf, size_t len)
+{
+ struct i40e_adminq_info *aq = &hw->aq;
+
+ snprintf(buf, len, "%05d", aq->fw_build);
+}
+
+static void i40e_info_fw_api(struct i40e_hw *hw, char *buf, size_t len)
+{
+ struct i40e_adminq_info *aq = &hw->aq;
+
+ snprintf(buf, len, "%u.%u", aq->api_maj_ver, aq->api_min_ver);
+}
+
+static void i40e_info_pba(struct i40e_hw *hw, char *buf, size_t len)
+{
+ buf[0] = '\0';
+ if (hw->pba_id)
+ strscpy(buf, hw->pba_id, len);
+}
+
+enum i40e_devlink_version_type {
+ I40E_DL_VERSION_FIXED,
+ I40E_DL_VERSION_RUNNING,
+};
+
+static int i40e_devlink_info_put(struct devlink_info_req *req,
+ enum i40e_devlink_version_type type,
+ const char *key, const char *value)
+{
+ if (!strlen(value))
+ return 0;
+
+ switch (type) {
+ case I40E_DL_VERSION_FIXED:
+ return devlink_info_version_fixed_put(req, key, value);
+ case I40E_DL_VERSION_RUNNING:
+ return devlink_info_version_running_put(req, key, value);
+ }
+ return 0;
+}
+
+static int i40e_devlink_info_get(struct devlink *dl,
+ struct devlink_info_req *req,
+ struct netlink_ext_ack *extack)
+{
+ struct i40e_pf *pf = devlink_priv(dl);
+ struct i40e_hw *hw = &pf->hw;
+ char buf[32];
+ int err;
+
+ i40e_info_get_dsn(pf, buf, sizeof(buf));
+ err = devlink_info_serial_number_put(req, buf);
+ if (err)
+ return err;
+
+ i40e_info_fw_mgmt(hw, buf, sizeof(buf));
+ err = i40e_devlink_info_put(req, I40E_DL_VERSION_RUNNING,
+ DEVLINK_INFO_VERSION_GENERIC_FW_MGMT, buf);
+ if (err)
+ return err;
+
+ i40e_info_fw_mgmt_build(hw, buf, sizeof(buf));
+ err = i40e_devlink_info_put(req, I40E_DL_VERSION_RUNNING,
+ "fw.mgmt.build", buf);
+ if (err)
+ return err;
+
+ i40e_info_fw_api(hw, buf, sizeof(buf));
+ err = i40e_devlink_info_put(req, I40E_DL_VERSION_RUNNING,
+ DEVLINK_INFO_VERSION_GENERIC_FW_MGMT_API,
+ buf);
+ if (err)
+ return err;
+
+ i40e_info_nvm_ver(hw, buf, sizeof(buf));
+ err = i40e_devlink_info_put(req, I40E_DL_VERSION_RUNNING,
+ "fw.psid.api", buf);
+ if (err)
+ return err;
+
+ i40e_info_eetrack(hw, buf, sizeof(buf));
+ err = i40e_devlink_info_put(req, I40E_DL_VERSION_RUNNING,
+ DEVLINK_INFO_VERSION_GENERIC_FW_BUNDLE_ID,
+ buf);
+ if (err)
+ return err;
+
+ i40e_info_civd_ver(hw, buf, sizeof(buf));
+ err = i40e_devlink_info_put(req, I40E_DL_VERSION_RUNNING,
+ DEVLINK_INFO_VERSION_GENERIC_FW_UNDI, buf);
+ if (err)
+ return err;
+
+ i40e_info_pba(hw, buf, sizeof(buf));
+ err = i40e_devlink_info_put(req, I40E_DL_VERSION_FIXED,
+ DEVLINK_INFO_VERSION_GENERIC_BOARD_ID, buf);
+
+ return err;
+}
+
+static const struct devlink_ops i40e_devlink_ops = {
+ .info_get = i40e_devlink_info_get,
+};
+
+/**
+ * i40e_alloc_pf - Allocate devlink and return i40e_pf structure pointer
+ * @dev: the device to allocate for
+ *
+ * Allocate a devlink instance for this device and return the private
+ * area as the i40e_pf structure.
+ **/
+struct i40e_pf *i40e_alloc_pf(struct device *dev)
+{
+ struct devlink *devlink;
+
+ devlink = devlink_alloc(&i40e_devlink_ops, sizeof(struct i40e_pf), dev);
+ if (!devlink)
+ return NULL;
+
+ return devlink_priv(devlink);
+}
+
+/**
+ * i40e_free_pf - Free i40e_pf structure and associated devlink
+ * @pf: the PF structure
+ *
+ * Free i40e_pf structure and devlink allocated by devlink_alloc.
+ **/
+void i40e_free_pf(struct i40e_pf *pf)
+{
+ struct devlink *devlink = priv_to_devlink(pf);
+
+ devlink_free(devlink);
+}
+
+/**
+ * i40e_devlink_register - Register devlink interface for this PF
+ * @pf: the PF to register the devlink for.
+ *
+ * Register the devlink instance associated with this physical function.
+ **/
+void i40e_devlink_register(struct i40e_pf *pf)
+{
+ devlink_register(priv_to_devlink(pf));
+}
+
+/**
+ * i40e_devlink_unregister - Unregister devlink resources for this PF.
+ * @pf: the PF structure to cleanup
+ *
+ * Releases resources used by devlink and cleans up associated memory.
+ **/
+void i40e_devlink_unregister(struct i40e_pf *pf)
+{
+ devlink_unregister(priv_to_devlink(pf));
+}
+
+/**
+ * i40e_devlink_set_switch_id - Set unique switch id based on pci dsn
+ * @pf: the PF to create a devlink port for
+ * @ppid: struct with switch id information
+ */
+static void i40e_devlink_set_switch_id(struct i40e_pf *pf,
+ struct netdev_phys_item_id *ppid)
+{
+ u64 id = pci_get_dsn(pf->pdev);
+
+ ppid->id_len = sizeof(id);
+ put_unaligned_be64(id, &ppid->id);
+}
+
+/**
+ * i40e_devlink_create_port - Create a devlink port for this PF
+ * @pf: the PF to create a port for
+ *
+ * Create and register a devlink_port for this PF. Note that although each
+ * physical function is connected to a separate devlink instance, the port
+ * will still be numbered according to the physical function id.
+ *
+ * Return: zero on success or an error code on failure.
+ **/
+int i40e_devlink_create_port(struct i40e_pf *pf)
+{
+ struct devlink *devlink = priv_to_devlink(pf);
+ struct devlink_port_attrs attrs = {};
+ struct device *dev = &pf->pdev->dev;
+ int err;
+
+ attrs.flavour = DEVLINK_PORT_FLAVOUR_PHYSICAL;
+ attrs.phys.port_number = pf->hw.pf_id;
+ i40e_devlink_set_switch_id(pf, &attrs.switch_id);
+ devlink_port_attrs_set(&pf->devlink_port, &attrs);
+ err = devlink_port_register(devlink, &pf->devlink_port, pf->hw.pf_id);
+ if (err) {
+ dev_err(dev, "devlink_port_register failed: %d\n", err);
+ return err;
+ }
+
+ return 0;
+}
+
+/**
+ * i40e_devlink_destroy_port - Destroy the devlink_port for this PF
+ * @pf: the PF to cleanup
+ *
+ * Unregisters the devlink_port structure associated with this PF.
+ **/
+void i40e_devlink_destroy_port(struct i40e_pf *pf)
+{
+ devlink_port_type_clear(&pf->devlink_port);
+ devlink_port_unregister(&pf->devlink_port);
+}
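Taken together, these helpers follow the standard devlink lifecycle. A minimal sketch of the intended call order, based on the i40e_probe()/i40e_remove()/i40e_vsi_setup() hunks later in this patch (the example_* names are illustrative and error unwinding is omitted):

#include "i40e.h"
#include "i40e_devlink.h"

static int example_probe(struct pci_dev *pdev)
{
	struct i40e_pf *pf;

	/* devlink_alloc() + devlink_priv() */
	pf = i40e_alloc_pf(&pdev->dev);
	if (!pf)
		return -ENOMEM;

	/* ... HW bring-up; setting up the main VSI also calls
	 * i40e_devlink_create_port() and SET_NETDEV_DEVLINK_PORT() ...
	 */

	/* last: make the instance visible, e.g. to "devlink dev info" */
	i40e_devlink_register(pf);
	return 0;
}

static void example_remove(struct i40e_pf *pf)
{
	i40e_devlink_unregister(pf);	/* first step of teardown */
	/* ... releasing the main VSI destroys the devlink port ... */
	i40e_free_pf(pf);		/* devlink_free() */
}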
diff --git a/drivers/net/ethernet/intel/i40e/i40e_devlink.h b/drivers/net/ethernet/intel/i40e/i40e_devlink.h
new file mode 100644
index 000000000000..469fb3d2ee25
--- /dev/null
+++ b/drivers/net/ethernet/intel/i40e/i40e_devlink.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2023, Intel Corporation. */
+
+#ifndef _I40E_DEVLINK_H_
+#define _I40E_DEVLINK_H_
+
+#include <linux/device.h>
+
+struct i40e_pf;
+
+struct i40e_pf *i40e_alloc_pf(struct device *dev);
+void i40e_free_pf(struct i40e_pf *pf);
+void i40e_devlink_register(struct i40e_pf *pf);
+void i40e_devlink_unregister(struct i40e_pf *pf);
+int i40e_devlink_create_port(struct i40e_pf *pf);
+void i40e_devlink_destroy_port(struct i40e_pf *pf);
+
+#endif /* _I40E_DEVLINK_H_ */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_diag.h b/drivers/net/ethernet/intel/i40e/i40e_diag.h
index c3ce5f35211f..ece3a6b9a5c6 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_diag.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_diag.h
@@ -4,7 +4,10 @@
#ifndef _I40E_DIAG_H_
#define _I40E_DIAG_H_
-#include "i40e_type.h"
+#include "i40e_adminq_cmd.h"
+
+/* forward-declare the HW struct for the compiler */
+struct i40e_hw;
enum i40e_lb_mode {
I40E_LB_MODE_NONE = 0x0,
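The same trick (a bare struct declaration instead of pulling in i40e_type.h) recurs throughout this series. A minimal illustration of why it suffices for headers that only pass pointers around (example_diag_test() is a hypothetical prototype):

/* An incomplete type is enough when the header never dereferences it. */
struct i40e_hw;

int example_diag_test(struct i40e_hw *hw);

Only the .c files that actually look inside struct i40e_hw need the full definition from i40e_type.h.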
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index bd1321bf7e26..fd7163128c4d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -3,9 +3,10 @@
/* ethtool support for i40e */
-#include "i40e.h"
+#include "i40e_devids.h"
#include "i40e_diag.h"
#include "i40e_txrx_common.h"
+#include "i40e_virtchnl_pf.h"
/* ethtool statistics helpers */
@@ -245,6 +246,7 @@ static const struct i40e_stats i40e_gstrings_net_stats[] = {
I40E_NETDEV_STAT(rx_errors),
I40E_NETDEV_STAT(tx_errors),
I40E_NETDEV_STAT(rx_dropped),
+ I40E_NETDEV_STAT(rx_missed_errors),
I40E_NETDEV_STAT(tx_dropped),
I40E_NETDEV_STAT(collisions),
I40E_NETDEV_STAT(rx_length_errors),
@@ -321,7 +323,7 @@ static const struct i40e_stats i40e_gstrings_stats[] = {
I40E_PF_STAT("port.rx_broadcast", stats.eth.rx_broadcast),
I40E_PF_STAT("port.tx_broadcast", stats.eth.tx_broadcast),
I40E_PF_STAT("port.tx_errors", stats.eth.tx_errors),
- I40E_PF_STAT("port.rx_dropped", stats.eth.rx_discards),
+ I40E_PF_STAT("port.rx_discards", stats.eth.rx_discards),
I40E_PF_STAT("port.tx_dropped_link_down", stats.tx_dropped_link_down),
I40E_PF_STAT("port.rx_crc_errors", stats.crc_errors),
I40E_PF_STAT("port.illegal_bytes", stats.illegal_bytes),
@@ -2004,8 +2006,8 @@ static void i40e_get_drvinfo(struct net_device *netdev,
struct i40e_pf *pf = vsi->back;
strscpy(drvinfo->driver, i40e_driver_name, sizeof(drvinfo->driver));
- strscpy(drvinfo->fw_version, i40e_nvm_version_str(&pf->hw),
- sizeof(drvinfo->fw_version));
+ i40e_nvm_version_str(&pf->hw, drvinfo->fw_version,
+ sizeof(drvinfo->fw_version));
strscpy(drvinfo->bus_info, pci_name(pf->pdev),
sizeof(drvinfo->bus_info));
drvinfo->n_priv_flags = I40E_PRIV_FLAGS_STR_LEN;
@@ -2512,11 +2514,13 @@ static void i40e_get_priv_flag_strings(struct net_device *netdev, u8 *data)
u8 *p = data;
for (i = 0; i < I40E_PRIV_FLAGS_STR_LEN; i++)
- ethtool_sprintf(&p, i40e_gstrings_priv_flags[i].flag_string);
+ ethtool_sprintf(&p, "%s",
+ i40e_gstrings_priv_flags[i].flag_string);
if (pf->hw.pf_id != 0)
return;
for (i = 0; i < I40E_GL_PRIV_FLAGS_STR_LEN; i++)
- ethtool_sprintf(&p, i40e_gl_gstrings_priv_flags[i].flag_string);
+ ethtool_sprintf(&p, "%s",
+ i40e_gl_gstrings_priv_flags[i].flag_string);
}
static void i40e_get_strings(struct net_device *netdev, u32 stringset,
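The ethtool_sprintf() changes above route each flag name through a literal "%s" instead of using the name itself as the format string. A contrived sketch of why that matters (the flag name below is hypothetical):

#include <linux/ethtool.h>

static void example_fill_strings(u8 *data)
{
	/* hypothetical flag name containing a '%' */
	const char *flag = "weird-%-flag";
	u8 *p = data;

	/* Before: ethtool_sprintf(&p, flag); would interpret the name as a
	 * format string (-Wformat-security, stray '%' conversions).
	 * After: the name is copied verbatim.
	 */
	ethtool_sprintf(&p, "%s", flag);
}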
diff --git a/drivers/net/ethernet/intel/i40e/i40e_hmc.c b/drivers/net/ethernet/intel/i40e/i40e_hmc.c
index 96ee63aca7a1..1742624ca62e 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_hmc.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_hmc.c
@@ -1,10 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
-#include "i40e.h"
-#include "i40e_osdep.h"
-#include "i40e_register.h"
#include "i40e_alloc.h"
+#include "i40e_debug.h"
#include "i40e_hmc.h"
#include "i40e_type.h"
@@ -22,7 +20,6 @@ int i40e_add_sd_table_entry(struct i40e_hw *hw,
enum i40e_sd_entry_type type,
u64 direct_mode_sz)
{
- enum i40e_memory_type mem_type __attribute__((unused));
struct i40e_hmc_sd_entry *sd_entry;
bool dma_mem_alloc_done = false;
struct i40e_dma_mem mem;
@@ -43,16 +40,13 @@ int i40e_add_sd_table_entry(struct i40e_hw *hw,
sd_entry = &hmc_info->sd_table.sd_entry[sd_index];
if (!sd_entry->valid) {
- if (I40E_SD_TYPE_PAGED == type) {
- mem_type = i40e_mem_pd;
+ if (type == I40E_SD_TYPE_PAGED)
alloc_len = I40E_HMC_PAGED_BP_SIZE;
- } else {
- mem_type = i40e_mem_bp_jumbo;
+ else
alloc_len = direct_mode_sz;
- }
/* allocate a 4K pd page or 2M backing page */
- ret_code = i40e_allocate_dma_mem(hw, &mem, mem_type, alloc_len,
+ ret_code = i40e_allocate_dma_mem(hw, &mem, alloc_len,
I40E_HMC_PD_BP_BUF_ALIGNMENT);
if (ret_code)
goto exit;
@@ -140,7 +134,7 @@ int i40e_add_pd_table_entry(struct i40e_hw *hw,
page = rsrc_pg;
} else {
/* allocate a 4K backing page */
- ret_code = i40e_allocate_dma_mem(hw, page, i40e_mem_bp,
+ ret_code = i40e_allocate_dma_mem(hw, page,
I40E_HMC_PAGED_BP_SIZE,
I40E_HMC_PD_BP_BUF_ALIGNMENT);
if (ret_code)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_hmc.h b/drivers/net/ethernet/intel/i40e/i40e_hmc.h
index 9960da07a573..480e3a883cc7 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_hmc.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_hmc.h
@@ -4,6 +4,10 @@
#ifndef _I40E_HMC_H_
#define _I40E_HMC_H_
+#include "i40e_alloc.h"
+#include "i40e_io.h"
+#include "i40e_register.h"
+
#define I40E_HMC_MAX_BP_COUNT 512
/* forward-declare the HW struct for the compiler */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_io.h b/drivers/net/ethernet/intel/i40e/i40e_io.h
new file mode 100644
index 000000000000..2a2ed9a1d476
--- /dev/null
+++ b/drivers/net/ethernet/intel/i40e/i40e_io.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright(c) 2023 Intel Corporation. */
+
+#ifndef _I40E_IO_H_
+#define _I40E_IO_H_
+
+/* get readq/writeq support for 32 bit kernels, use the low-first version */
+#include <linux/io-64-nonatomic-lo-hi.h>
+
+#define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg)))
+#define rd32(a, reg) readl((a)->hw_addr + (reg))
+
+#define rd64(a, reg) readq((a)->hw_addr + (reg))
+#define i40e_flush(a) readl((a)->hw_addr + I40E_GLGEN_STAT)
+
+#endif /* _I40E_IO_H_ */
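These macros are a straight move of the register accessors out of the now-deleted i40e_osdep.h; usage is unchanged. For example, the subsystem-ID fallback later in this patch reads a CSR relative to the mapped BAR in hw->hw_addr:

	u16 subsys_id = (u16)(rd32(hw, I40E_PFPCI_SUBSYSID) & USHRT_MAX);

Writes take the value last and can be flushed with a posted read (I40E_EXAMPLE_REG is a hypothetical register name used purely for illustration):

	wr32(hw, I40E_EXAMPLE_REG, 0);
	i40e_flush(hw);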
diff --git a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c
index 474365bf0648..beaaf5c309d5 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.c
@@ -1,13 +1,10 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
-#include "i40e.h"
-#include "i40e_osdep.h"
-#include "i40e_register.h"
-#include "i40e_type.h"
-#include "i40e_hmc.h"
+#include "i40e_alloc.h"
+#include "i40e_debug.h"
#include "i40e_lan_hmc.h"
-#include "i40e_prototype.h"
+#include "i40e_type.h"
/* lan specific interface functions */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h
index 9f960404c2b3..305a276953b0 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_lan_hmc.h
@@ -4,6 +4,8 @@
#ifndef _I40E_LAN_HMC_H_
#define _I40E_LAN_HMC_H_
+#include "i40e_hmc.h"
+
/* forward-declare the HW struct for the compiler */
struct i40e_hw;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index de7fd43dc11c..3157d14d9b12 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -1,19 +1,22 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2021 Intel Corporation. */
-#include <linux/etherdevice.h>
-#include <linux/of_net.h>
-#include <linux/pci.h>
-#include <linux/bpf.h>
#include <generated/utsrelease.h>
#include <linux/crash_dump.h>
+#include <linux/if_bridge.h>
+#include <linux/if_macvlan.h>
+#include <linux/module.h>
+#include <net/pkt_cls.h>
+#include <net/xdp_sock_drv.h>
/* Local includes */
#include "i40e.h"
+#include "i40e_devids.h"
#include "i40e_diag.h"
+#include "i40e_lan_hmc.h"
+#include "i40e_virtchnl_pf.h"
#include "i40e_xsk.h"
-#include <net/udp_tunnel.h>
-#include <net/xdp_sock_drv.h>
+
/* All i40e tracepoints are defined by the include below, which
* must be included exactly once across the whole kernel with
* CREATE_TRACE_POINTS defined
@@ -120,16 +123,27 @@ static void netdev_hw_addr_refcnt(struct i40e_mac_filter *f,
}
/**
- * i40e_allocate_dma_mem_d - OS specific memory alloc for shared code
+ * i40e_hw_to_dev - get device pointer from the hardware structure
+ * @hw: pointer to the device HW structure
+ **/
+struct device *i40e_hw_to_dev(struct i40e_hw *hw)
+{
+ struct i40e_pf *pf = i40e_hw_to_pf(hw);
+
+ return &pf->pdev->dev;
+}
+
+/**
+ * i40e_allocate_dma_mem - OS specific memory alloc for shared code
* @hw: pointer to the HW structure
* @mem: ptr to mem struct to fill out
* @size: size of memory requested
* @alignment: what to align the allocation to
**/
-int i40e_allocate_dma_mem_d(struct i40e_hw *hw, struct i40e_dma_mem *mem,
- u64 size, u32 alignment)
+int i40e_allocate_dma_mem(struct i40e_hw *hw, struct i40e_dma_mem *mem,
+ u64 size, u32 alignment)
{
- struct i40e_pf *pf = (struct i40e_pf *)hw->back;
+ struct i40e_pf *pf = i40e_hw_to_pf(hw);
mem->size = ALIGN(size, alignment);
mem->va = dma_alloc_coherent(&pf->pdev->dev, mem->size, &mem->pa,
@@ -141,13 +155,13 @@ int i40e_allocate_dma_mem_d(struct i40e_hw *hw, struct i40e_dma_mem *mem,
}
/**
- * i40e_free_dma_mem_d - OS specific memory free for shared code
+ * i40e_free_dma_mem - OS specific memory free for shared code
* @hw: pointer to the HW structure
* @mem: ptr to mem struct to free
**/
-int i40e_free_dma_mem_d(struct i40e_hw *hw, struct i40e_dma_mem *mem)
+int i40e_free_dma_mem(struct i40e_hw *hw, struct i40e_dma_mem *mem)
{
- struct i40e_pf *pf = (struct i40e_pf *)hw->back;
+ struct i40e_pf *pf = i40e_hw_to_pf(hw);
dma_free_coherent(&pf->pdev->dev, mem->size, mem->va, mem->pa);
mem->va = NULL;
@@ -158,13 +172,13 @@ int i40e_free_dma_mem_d(struct i40e_hw *hw, struct i40e_dma_mem *mem)
}
/**
- * i40e_allocate_virt_mem_d - OS specific memory alloc for shared code
+ * i40e_allocate_virt_mem - OS specific memory alloc for shared code
* @hw: pointer to the HW structure
* @mem: ptr to mem struct to fill out
* @size: size of memory requested
**/
-int i40e_allocate_virt_mem_d(struct i40e_hw *hw, struct i40e_virt_mem *mem,
- u32 size)
+int i40e_allocate_virt_mem(struct i40e_hw *hw, struct i40e_virt_mem *mem,
+ u32 size)
{
mem->size = size;
mem->va = kzalloc(size, GFP_KERNEL);
@@ -176,11 +190,11 @@ int i40e_allocate_virt_mem_d(struct i40e_hw *hw, struct i40e_virt_mem *mem,
}
/**
- * i40e_free_virt_mem_d - OS specific memory free for shared code
+ * i40e_free_virt_mem - OS specific memory free for shared code
* @hw: pointer to the HW structure
* @mem: ptr to mem struct to free
**/
-int i40e_free_virt_mem_d(struct i40e_hw *hw, struct i40e_virt_mem *mem)
+int i40e_free_virt_mem(struct i40e_hw *hw, struct i40e_virt_mem *mem)
{
/* it's ok to kfree a NULL pointer */
kfree(mem->va);
@@ -489,6 +503,7 @@ static void i40e_get_netdev_stats_struct(struct net_device *netdev,
stats->tx_dropped = vsi_stats->tx_dropped;
stats->rx_errors = vsi_stats->rx_errors;
stats->rx_dropped = vsi_stats->rx_dropped;
+ stats->rx_missed_errors = vsi_stats->rx_missed_errors;
stats->rx_crc_errors = vsi_stats->rx_crc_errors;
stats->rx_length_errors = vsi_stats->rx_length_errors;
}
@@ -680,17 +695,13 @@ i40e_stats_update_rx_discards(struct i40e_vsi *vsi, struct i40e_hw *hw,
struct i40e_eth_stats *stat_offset,
struct i40e_eth_stats *stat)
{
- u64 rx_rdpc, rx_rxerr;
-
i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx), offset_loaded,
- &stat_offset->rx_discards, &rx_rdpc);
+ &stat_offset->rx_discards, &stat->rx_discards);
i40e_stat_update64(hw,
I40E_GL_RXERR1H(i40e_compute_pci_to_hw_id(vsi, hw)),
I40E_GL_RXERR1L(i40e_compute_pci_to_hw_id(vsi, hw)),
offset_loaded, &stat_offset->rx_discards_other,
- &rx_rxerr);
-
- stat->rx_discards = rx_rdpc + rx_rxerr;
+ &stat->rx_discards_other);
}
/**
@@ -712,9 +723,6 @@ void i40e_update_eth_stats(struct i40e_vsi *vsi)
i40e_stat_update32(hw, I40E_GLV_TEPC(stat_idx),
vsi->stat_offsets_loaded,
&oes->tx_errors, &es->tx_errors);
- i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx),
- vsi->stat_offsets_loaded,
- &oes->rx_discards, &es->rx_discards);
i40e_stat_update32(hw, I40E_GLV_RUPP(stat_idx),
vsi->stat_offsets_loaded,
&oes->rx_unknown_protocol, &es->rx_unknown_protocol);
@@ -971,8 +979,10 @@ static void i40e_update_vsi_stats(struct i40e_vsi *vsi)
ns->tx_errors = es->tx_errors;
ons->multicast = oes->rx_multicast;
ns->multicast = es->rx_multicast;
- ons->rx_dropped = oes->rx_discards;
- ns->rx_dropped = es->rx_discards;
+ ons->rx_dropped = oes->rx_discards_other;
+ ns->rx_dropped = es->rx_discards_other;
+ ons->rx_missed_errors = oes->rx_discards;
+ ns->rx_missed_errors = es->rx_discards;
ons->tx_dropped = oes->tx_discards;
ns->tx_dropped = es->tx_discards;
@@ -10788,7 +10798,9 @@ static void i40e_get_oem_version(struct i40e_hw *hw)
&gen_snap);
i40e_read_nvm_word(hw, block_offset + I40E_NVM_OEM_RELEASE_OFFSET,
&release);
- hw->nvm.oem_ver = (gen_snap << I40E_OEM_SNAP_SHIFT) | release;
+ hw->nvm.oem_ver =
+ FIELD_PREP(I40E_OEM_GEN_MASK | I40E_OEM_SNAP_MASK, gen_snap) |
+ FIELD_PREP(I40E_OEM_RELEASE_MASK, release);
hw->nvm.eetrack = I40E_OEM_EETRACK_ID;
}
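FIELD_PREP() shifts a value into the bit positions described by its mask, so the new form is equivalent to the old open-coded shift provided the OEM masks cover bits 31:16 (gen/snap) and 15:0 (release); the mask definitions themselves are not in these hunks. A worked miniature under that assumption (the EXAMPLE_* names are stand-ins):

#include <linux/bitfield.h>
#include <linux/bits.h>

#define EXAMPLE_GEN_SNAP_MASK	GENMASK(31, 16)
#define EXAMPLE_RELEASE_MASK	GENMASK(15, 0)

	u16 gen_snap = 0x0102, release = 0x0304;
	u32 oem_ver = FIELD_PREP(EXAMPLE_GEN_SNAP_MASK, gen_snap) |
		      FIELD_PREP(EXAMPLE_RELEASE_MASK, release);
	/* oem_ver == 0x01020304, i.e. (gen_snap << 16) | release */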
@@ -14201,6 +14213,8 @@ int i40e_vsi_release(struct i40e_vsi *vsi)
}
set_bit(__I40E_VSI_RELEASING, vsi->state);
uplink_seid = vsi->uplink_seid;
+ if (vsi->type == I40E_VSI_MAIN)
+ i40e_devlink_destroy_port(pf);
if (vsi->type != I40E_VSI_SRIOV) {
if (vsi->netdev_registered) {
vsi->netdev_registered = false;
@@ -14388,6 +14402,8 @@ static struct i40e_vsi *i40e_vsi_reinit_setup(struct i40e_vsi *vsi)
err_rings:
i40e_vsi_free_q_vectors(vsi);
+ if (vsi->type == I40E_VSI_MAIN)
+ i40e_devlink_destroy_port(pf);
if (vsi->netdev_registered) {
vsi->netdev_registered = false;
unregister_netdev(vsi->netdev);
@@ -14534,9 +14550,15 @@ struct i40e_vsi *i40e_vsi_setup(struct i40e_pf *pf, u8 type,
ret = i40e_netif_set_realnum_tx_rx_queues(vsi);
if (ret)
goto err_netdev;
+ if (vsi->type == I40E_VSI_MAIN) {
+ ret = i40e_devlink_create_port(pf);
+ if (ret)
+ goto err_netdev;
+ SET_NETDEV_DEVLINK_PORT(vsi->netdev, &pf->devlink_port);
+ }
ret = register_netdev(vsi->netdev);
if (ret)
- goto err_netdev;
+ goto err_dl_port;
vsi->netdev_registered = true;
netif_carrier_off(vsi->netdev);
#ifdef CONFIG_I40E_DCB
@@ -14579,6 +14601,9 @@ err_msix:
free_netdev(vsi->netdev);
vsi->netdev = NULL;
}
+err_dl_port:
+ if (vsi->type == I40E_VSI_MAIN)
+ i40e_devlink_destroy_port(pf);
err_netdev:
i40e_aq_delete_element(&pf->hw, vsi->seid, NULL);
err_vsi:
@@ -15609,7 +15634,7 @@ err_switch_setup:
iounmap(hw->hw_addr);
pci_release_mem_regions(pf->pdev);
pci_disable_device(pf->pdev);
- kfree(pf);
+ i40e_free_pf(pf);
return err;
}
@@ -15623,10 +15648,10 @@ err_switch_setup:
**/
static inline void i40e_set_subsystem_device_id(struct i40e_hw *hw)
{
- struct pci_dev *pdev = ((struct i40e_pf *)hw->back)->pdev;
+ struct i40e_pf *pf = i40e_hw_to_pf(hw);
- hw->subsystem_device_id = pdev->subsystem_device ?
- pdev->subsystem_device :
+ hw->subsystem_device_id = pf->pdev->subsystem_device ?
+ pf->pdev->subsystem_device :
(ushort)(rd32(hw, I40E_PFPCI_SUBSYSID) & USHRT_MAX);
}
@@ -15651,6 +15676,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
struct i40e_hw *hw;
static u16 pfs_found;
u16 wol_nvm_bits;
+ char nvm_ver[32];
u16 link_status;
#ifdef CONFIG_I40E_DCB
int status;
@@ -15686,7 +15712,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
* the Admin Queue structures and then querying for the
* device's current profile information.
*/
- pf = kzalloc(sizeof(*pf), GFP_KERNEL);
+ pf = i40e_alloc_pf(&pdev->dev);
if (!pf) {
err = -ENOMEM;
goto err_pf_alloc;
@@ -15696,7 +15722,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
set_bit(__I40E_DOWN, pf->state);
hw = &pf->hw;
- hw->back = pf;
pf->ioremap_len = min_t(int, pci_resource_len(pdev, 0),
I40E_MAX_CSR_SPACE);
@@ -15821,13 +15846,15 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_pf_reset;
}
i40e_get_oem_version(hw);
+ i40e_get_pba_string(hw);
/* provide nvm, fw, api versions, vendor:device id, subsys vendor:device id */
+ i40e_nvm_version_str(hw, nvm_ver, sizeof(nvm_ver));
dev_info(&pdev->dev, "fw %d.%d.%05d api %d.%d nvm %s [%04x:%04x] [%04x:%04x]\n",
hw->aq.fw_maj_ver, hw->aq.fw_min_ver, hw->aq.fw_build,
- hw->aq.api_maj_ver, hw->aq.api_min_ver,
- i40e_nvm_version_str(hw), hw->vendor_id, hw->device_id,
- hw->subsystem_vendor_id, hw->subsystem_device_id);
+ hw->aq.api_maj_ver, hw->aq.api_min_ver, nvm_ver,
+ hw->vendor_id, hw->device_id, hw->subsystem_vendor_id,
+ hw->subsystem_device_id);
if (hw->aq.api_maj_ver == I40E_FW_API_VERSION_MAJOR &&
hw->aq.api_min_ver > I40E_FW_MINOR_VERSION(hw))
@@ -16214,6 +16241,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* print a string summarizing features */
i40e_print_features(pf);
+ i40e_devlink_register(pf);
+
return 0;
/* Unwind what we've done if something failed in the setup */
@@ -16234,7 +16263,7 @@ err_adminq_setup:
err_pf_reset:
iounmap(hw->hw_addr);
err_ioremap:
- kfree(pf);
+ i40e_free_pf(pf);
err_pf_alloc:
pci_release_mem_regions(pdev);
err_pci_reg:
@@ -16259,6 +16288,8 @@ static void i40e_remove(struct pci_dev *pdev)
int ret_code;
int i;
+ i40e_devlink_unregister(pf);
+
i40e_dbg_pf_exit(pf);
i40e_ptp_stop(pf);
@@ -16320,11 +16351,15 @@ static void i40e_remove(struct pci_dev *pdev)
i40e_switch_branch_release(pf->veb[i]);
}
- /* Now we can shutdown the PF's VSI, just before we kill
+ /* Now we can shutdown the PF's VSIs, just before we kill
* adminq and hmc.
*/
- if (pf->vsi[pf->lan_vsi])
- i40e_vsi_release(pf->vsi[pf->lan_vsi]);
+ for (i = pf->num_alloc_vsi; i--;)
+ if (pf->vsi[i]) {
+ i40e_vsi_close(pf->vsi[i]);
+ i40e_vsi_release(pf->vsi[i]);
+ pf->vsi[i] = NULL;
+ }
i40e_cloud_filter_exit(pf);
@@ -16380,7 +16415,7 @@ unmap:
kfree(pf->vsi);
iounmap(hw->hw_addr);
- kfree(pf);
+ i40e_free_pf(pf);
pci_release_mem_regions(pdev);
pci_disable_device(pdev);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_nvm.c b/drivers/net/ethernet/intel/i40e/i40e_nvm.c
index 07a46adeab38..77cdbfc19d47 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_nvm.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_nvm.c
@@ -1,6 +1,8 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
+#include <linux/delay.h>
+#include "i40e_alloc.h"
#include "i40e_prototype.h"
/**
diff --git a/drivers/net/ethernet/intel/i40e/i40e_osdep.h b/drivers/net/ethernet/intel/i40e/i40e_osdep.h
deleted file mode 100644
index 2bd4de03dafa..000000000000
--- a/drivers/net/ethernet/intel/i40e/i40e_osdep.h
+++ /dev/null
@@ -1,59 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2013 - 2018 Intel Corporation. */
-
-#ifndef _I40E_OSDEP_H_
-#define _I40E_OSDEP_H_
-
-#include <linux/types.h>
-#include <linux/if_ether.h>
-#include <linux/if_vlan.h>
-#include <linux/tcp.h>
-#include <linux/pci.h>
-#include <linux/highuid.h>
-
-/* get readq/writeq support for 32 bit kernels, use the low-first version */
-#include <linux/io-64-nonatomic-lo-hi.h>
-
-/* File to be the magic between shared code and
- * actual OS primitives
- */
-
-#define hw_dbg(hw, S, A...) \
-do { \
- dev_dbg(&((struct i40e_pf *)hw->back)->pdev->dev, S, ##A); \
-} while (0)
-
-#define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg)))
-#define rd32(a, reg) readl((a)->hw_addr + (reg))
-
-#define rd64(a, reg) readq((a)->hw_addr + (reg))
-#define i40e_flush(a) readl((a)->hw_addr + I40E_GLGEN_STAT)
-
-/* memory allocation tracking */
-struct i40e_dma_mem {
- void *va;
- dma_addr_t pa;
- u32 size;
-};
-
-#define i40e_allocate_dma_mem(h, m, unused, s, a) \
- i40e_allocate_dma_mem_d(h, m, s, a)
-#define i40e_free_dma_mem(h, m) i40e_free_dma_mem_d(h, m)
-
-struct i40e_virt_mem {
- void *va;
- u32 size;
-};
-
-#define i40e_allocate_virt_mem(h, m, s) i40e_allocate_virt_mem_d(h, m, s)
-#define i40e_free_virt_mem(h, m) i40e_free_virt_mem_d(h, m)
-
-#define i40e_debug(h, m, s, ...) \
-do { \
- if (((m) & (h)->debug_mask)) \
- pr_info("i40e %02x:%02x.%x " s, \
- (h)->bus.bus_id, (h)->bus.device, \
- (h)->bus.func, ##__VA_ARGS__); \
-} while (0)
-
-#endif /* _I40E_OSDEP_H_ */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
index 3eeee224f1fb..001162042050 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
@@ -4,9 +4,10 @@
#ifndef _I40E_PROTOTYPE_H_
#define _I40E_PROTOTYPE_H_
-#include "i40e_type.h"
-#include "i40e_alloc.h"
+#include <linux/ethtool.h>
#include <linux/avf/virtchnl.h>
+#include "i40e_debug.h"
+#include "i40e_type.h"
/* Prototypes for shared code functions that are not in
* the standard function pointer structures. These are
@@ -340,8 +341,7 @@ i40e_aq_configure_partition_bw(struct i40e_hw *hw,
struct i40e_aqc_configure_partition_bw_data *bw_data,
struct i40e_asq_cmd_details *cmd_details);
int i40e_get_port_mac_addr(struct i40e_hw *hw, u8 *mac_addr);
-int i40e_read_pba_string(struct i40e_hw *hw, u8 *pba_num,
- u32 pba_num_size);
+void i40e_get_pba_string(struct i40e_hw *hw);
void i40e_pre_tx_queue_cfg(struct i40e_hw *hw, u32 queue, bool enable);
/* prototype for functions used for NVM access */
int i40e_init_nvm(struct i40e_hw *hw);
@@ -497,4 +497,8 @@ int
i40e_add_pinfo_to_list(struct i40e_hw *hw,
struct i40e_profile_segment *profile,
u8 *profile_info_sec, u32 track_id);
+
+/* i40e_ddp */
+int i40e_ddp_flash(struct net_device *netdev, struct ethtool_flash *flash);
+
#endif /* _I40E_PROTOTYPE_H_ */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ptp.c b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
index 8a26811140b4..20b77398f060 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
@@ -1,9 +1,10 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
-#include "i40e.h"
#include <linux/ptp_classify.h>
#include <linux/posix-clock.h>
+#include "i40e.h"
+#include "i40e_devids.h"
/* The XL710 timesync is very much like Intel's 82599 design when it comes to
* the fundamental clock design. However, the clock operations are much simpler
diff --git a/drivers/net/ethernet/intel/i40e/i40e_register.h b/drivers/net/ethernet/intel/i40e/i40e_register.h
index 7339003aa17c..f408fcf23ce8 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_register.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_register.h
@@ -4,6 +4,9 @@
#ifndef _I40E_REGISTER_H_
#define _I40E_REGISTER_H_
+/* I40E_MASK is a macro used on 32 bit registers */
+#define I40E_MASK(mask, shift) ((u32)(mask) << (shift))
+
#define I40E_GL_ATQLEN_ATQCRIT_SHIFT 30
#define I40E_GL_ATQLEN_ATQCRIT_MASK I40E_MASK(0x1, I40E_GL_ATQLEN_ATQCRIT_SHIFT)
#define I40E_PF_ARQBAH 0x00080180 /* Reset: EMPR */
@@ -202,7 +205,9 @@
#define I40E_GLGEN_MSCA_DEVADD_SHIFT 16
#define I40E_GLGEN_MSCA_PHYADD_SHIFT 21
#define I40E_GLGEN_MSCA_OPCODE_SHIFT 26
+#define I40E_GLGEN_MSCA_OPCODE_MASK(_i) I40E_MASK(_i, I40E_GLGEN_MSCA_OPCODE_SHIFT)
#define I40E_GLGEN_MSCA_STCODE_SHIFT 28
+#define I40E_GLGEN_MSCA_STCODE_MASK I40E_MASK(0x1, I40E_GLGEN_MSCA_STCODE_SHIFT)
#define I40E_GLGEN_MSCA_MDICMD_SHIFT 30
#define I40E_GLGEN_MSCA_MDICMD_MASK I40E_MASK(0x1, I40E_GLGEN_MSCA_MDICMD_SHIFT)
#define I40E_GLGEN_MSCA_MDIINPROGEN_SHIFT 31
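With I40E_MASK() now hosted here and the OPCODE mask parameterised, the new macros expand to exactly the values of the per-clause definitions they replace in i40e_type.h further down. For example:

	/* I40E_GLGEN_MSCA_OPCODE_MASK(3)
	 *	== I40E_MASK(3, I40E_GLGEN_MSCA_OPCODE_SHIFT)
	 *	== (u32)3 << 26
	 *	== 0x0c000000
	 * which is the old I40E_MDIO_CLAUSE45_OPCODE_READ_MASK value.
	 */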
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index b047c587629b..dd410b15000f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -1,14 +1,13 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright(c) 2013 - 2018 Intel Corporation. */
-#include <linux/prefetch.h>
#include <linux/bpf_trace.h>
+#include <linux/prefetch.h>
+#include <linux/sctp.h>
#include <net/mpls.h>
#include <net/xdp.h>
-#include "i40e.h"
-#include "i40e_trace.h"
-#include "i40e_prototype.h"
#include "i40e_txrx_common.h"
+#include "i40e_trace.h"
#include "i40e_xsk.h"
#define I40E_TXD_CMD (I40E_TX_DESC_CMD_EOP | I40E_TX_DESC_CMD_RS)
@@ -2405,7 +2404,7 @@ void i40e_update_rx_stats(struct i40e_ring *rx_ring,
void i40e_finalize_xdp_rx(struct i40e_ring *rx_ring, unsigned int xdp_res)
{
if (xdp_res & I40E_XDP_REDIR)
- xdp_do_flush_map();
+ xdp_do_flush();
if (xdp_res & I40E_XDP_TX) {
struct i40e_ring *xdp_ring =
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index 900b0d9ede9f..421fe5675584 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -5,6 +5,7 @@
#define _I40E_TXRX_H_
#include <net/xdp.h>
+#include "i40e_type.h"
/* Interrupt Throttling and Rate Limiting Goodies */
#define I40E_DEFAULT_IRQ_WORK 256
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h b/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h
index 8c5118c8baaf..e26807fd2123 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx_common.h
@@ -4,6 +4,8 @@
#ifndef I40E_TXRX_COMMON_
#define I40E_TXRX_COMMON_
+#include "i40e.h"
+
int i40e_xmit_xdp_tx_ring(struct xdp_buff *xdp, struct i40e_ring *xdp_ring);
void i40e_clean_programming_status(struct i40e_ring *rx_ring, u64 qword0_raw,
u64 qword1);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
index 232131bedc3e..aff6dc6afbe2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
@@ -4,15 +4,9 @@
#ifndef _I40E_TYPE_H_
#define _I40E_TYPE_H_
-#include "i40e_osdep.h"
-#include "i40e_register.h"
+#include <uapi/linux/if_ether.h>
#include "i40e_adminq.h"
#include "i40e_hmc.h"
-#include "i40e_lan_hmc.h"
-#include "i40e_devids.h"
-
-/* I40E_MASK is a macro used on 32 bit registers */
-#define I40E_MASK(mask, shift) ((u32)(mask) << (shift))
#define I40E_MAX_VSI_QP 16
#define I40E_MAX_VF_VSI 4
@@ -43,48 +37,14 @@ typedef void (*I40E_ADMINQ_CALLBACK)(struct i40e_hw *, struct i40e_aq_desc *);
#define I40E_QTX_CTL_VM_QUEUE 0x1
#define I40E_QTX_CTL_PF_QUEUE 0x2
-/* debug masks - set these bits in hw->debug_mask to control output */
-enum i40e_debug_mask {
- I40E_DEBUG_INIT = 0x00000001,
- I40E_DEBUG_RELEASE = 0x00000002,
-
- I40E_DEBUG_LINK = 0x00000010,
- I40E_DEBUG_PHY = 0x00000020,
- I40E_DEBUG_HMC = 0x00000040,
- I40E_DEBUG_NVM = 0x00000080,
- I40E_DEBUG_LAN = 0x00000100,
- I40E_DEBUG_FLOW = 0x00000200,
- I40E_DEBUG_DCB = 0x00000400,
- I40E_DEBUG_DIAG = 0x00000800,
- I40E_DEBUG_FD = 0x00001000,
- I40E_DEBUG_PACKAGE = 0x00002000,
- I40E_DEBUG_IWARP = 0x00F00000,
- I40E_DEBUG_AQ_MESSAGE = 0x01000000,
- I40E_DEBUG_AQ_DESCRIPTOR = 0x02000000,
- I40E_DEBUG_AQ_DESC_BUFFER = 0x04000000,
- I40E_DEBUG_AQ_COMMAND = 0x06000000,
- I40E_DEBUG_AQ = 0x0F000000,
-
- I40E_DEBUG_USER = 0xF0000000,
-
- I40E_DEBUG_ALL = 0xFFFFFFFF
-};
-
-#define I40E_MDIO_CLAUSE22_STCODE_MASK I40E_MASK(1, \
- I40E_GLGEN_MSCA_STCODE_SHIFT)
-#define I40E_MDIO_CLAUSE22_OPCODE_WRITE_MASK I40E_MASK(1, \
- I40E_GLGEN_MSCA_OPCODE_SHIFT)
-#define I40E_MDIO_CLAUSE22_OPCODE_READ_MASK I40E_MASK(2, \
- I40E_GLGEN_MSCA_OPCODE_SHIFT)
-
-#define I40E_MDIO_CLAUSE45_STCODE_MASK I40E_MASK(0, \
- I40E_GLGEN_MSCA_STCODE_SHIFT)
-#define I40E_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK I40E_MASK(0, \
- I40E_GLGEN_MSCA_OPCODE_SHIFT)
-#define I40E_MDIO_CLAUSE45_OPCODE_WRITE_MASK I40E_MASK(1, \
- I40E_GLGEN_MSCA_OPCODE_SHIFT)
-#define I40E_MDIO_CLAUSE45_OPCODE_READ_MASK I40E_MASK(3, \
- I40E_GLGEN_MSCA_OPCODE_SHIFT)
+#define I40E_MDIO_CLAUSE22_STCODE_MASK I40E_GLGEN_MSCA_STCODE_MASK
+#define I40E_MDIO_CLAUSE22_OPCODE_WRITE_MASK I40E_GLGEN_MSCA_OPCODE_MASK(1)
+#define I40E_MDIO_CLAUSE22_OPCODE_READ_MASK I40E_GLGEN_MSCA_OPCODE_MASK(2)
+
+#define I40E_MDIO_CLAUSE45_STCODE_MASK I40E_GLGEN_MSCA_STCODE_MASK
+#define I40E_MDIO_CLAUSE45_OPCODE_ADDRESS_MASK I40E_GLGEN_MSCA_OPCODE_MASK(0)
+#define I40E_MDIO_CLAUSE45_OPCODE_WRITE_MASK I40E_GLGEN_MSCA_OPCODE_MASK(1)
+#define I40E_MDIO_CLAUSE45_OPCODE_READ_MASK I40E_GLGEN_MSCA_OPCODE_MASK(3)
#define I40E_PHY_COM_REG_PAGE 0x1E
#define I40E_PHY_LED_LINK_MODE_MASK 0xF0
@@ -525,7 +485,6 @@ struct i40e_dcbx_config {
/* Port hardware description */
struct i40e_hw {
u8 __iomem *hw_addr;
- void *back;
/* subsystem structs */
struct i40e_phy_info phy;
@@ -534,6 +493,9 @@ struct i40e_hw {
struct i40e_nvm_info nvm;
struct i40e_fc_info fc;
+ /* PBA ID */
+ const char *pba_id;
+
/* pci info */
u16 device_id;
u16 vendor_id;
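Dropping the hw->back pointer above works because struct i40e_hw is embedded in struct i40e_pf, so the owning PF can be recovered positionally. The i40e_hw_to_pf() helper used throughout this patch is added elsewhere (in i40e.h, not shown in these hunks); presumably it is a container_of() wrapper along these lines:

#include <linux/container_of.h>

/* Sketch only; assumes struct i40e_pf embeds its i40e_hw as the 'hw' member. */
static inline struct i40e_pf *i40e_hw_to_pf(struct i40e_hw *hw)
{
	return container_of(hw, struct i40e_pf, hw);
}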
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
index d3d6415553ed..08d7edccfb8d 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
@@ -2,6 +2,8 @@
/* Copyright(c) 2013 - 2018 Intel Corporation. */
#include "i40e.h"
+#include "i40e_lan_hmc.h"
+#include "i40e_virtchnl_pf.h"
/*********************notification routines***********************/
@@ -4916,7 +4918,7 @@ int i40e_get_vf_stats(struct net_device *netdev, int vf_id,
vf_stats->tx_bytes = stats->tx_bytes;
vf_stats->broadcast = stats->rx_broadcast;
vf_stats->multicast = stats->rx_multicast;
- vf_stats->rx_dropped = stats->rx_discards;
+ vf_stats->rx_dropped = stats->rx_discards + stats->rx_discards_other;
vf_stats->tx_dropped = stats->tx_discards;
return 0;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
index 895b8feb2567..2ee0f8a23248 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
@@ -4,7 +4,9 @@
#ifndef _I40E_VIRTCHNL_PF_H_
#define _I40E_VIRTCHNL_PF_H_
-#include "i40e.h"
+#include <linux/avf/virtchnl.h>
+#include <linux/netdevice.h>
+#include "i40e_type.h"
#define I40E_MAX_VLANID 4095
diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.c b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
index 7d991e4d9b89..e99fa854d17f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.c
@@ -2,11 +2,7 @@
/* Copyright(c) 2018 Intel Corporation. */
#include <linux/bpf_trace.h>
-#include <linux/stringify.h>
#include <net/xdp_sock_drv.h>
-#include <net/xdp.h>
-
-#include "i40e.h"
#include "i40e_txrx_common.h"
#include "i40e_xsk.h"
diff --git a/drivers/net/ethernet/intel/i40e/i40e_xsk.h b/drivers/net/ethernet/intel/i40e/i40e_xsk.h
index 821df248f8be..ef156fad52f2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_xsk.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_xsk.h
@@ -4,6 +4,8 @@
#ifndef _I40E_XSK_H_
#define _I40E_XSK_H_
+#include <linux/types.h>
+
/* This value should match the pragma in the loop_unrolled_for
* macro. Why 4? It is strictly empirical. It seems to be a good
* compromise between the advantage of having simultaneous outstanding
@@ -20,7 +22,9 @@
#define loop_unrolled_for for
#endif
+struct i40e_ring;
struct i40e_vsi;
+struct net_device;
struct xsk_buff_pool;
int i40e_queue_pair_disable(struct i40e_vsi *vsi, int queue_pair);
diff --git a/drivers/net/ethernet/intel/iavf/Makefile b/drivers/net/ethernet/intel/iavf/Makefile
index 9c3e45c54d01..2d154a4e2fd7 100644
--- a/drivers/net/ethernet/intel/iavf/Makefile
+++ b/drivers/net/ethernet/intel/iavf/Makefile
@@ -13,4 +13,4 @@ obj-$(CONFIG_IAVF) += iavf.o
iavf-objs := iavf_main.o iavf_ethtool.o iavf_virtchnl.o iavf_fdir.o \
iavf_adv_rss.o \
- iavf_txrx.o iavf_common.o iavf_adminq.o iavf_client.o
+ iavf_txrx.o iavf_common.o iavf_adminq.o
diff --git a/drivers/net/ethernet/intel/iavf/iavf.h b/drivers/net/ethernet/intel/iavf/iavf.h
index e110ba346185..e7ab89dc883a 100644
--- a/drivers/net/ethernet/intel/iavf/iavf.h
+++ b/drivers/net/ethernet/intel/iavf/iavf.h
@@ -63,7 +63,6 @@ struct iavf_vsi {
DECLARE_BITMAP(state, __IAVF_VSI_STATE_SIZE__);
int base_vector;
u16 qs_handle;
- void *priv; /* client driver data reference. */
};
/* How many Rx Buffers do we bundle into one write to the hardware ? */
@@ -256,7 +255,6 @@ struct iavf_adapter {
struct work_struct reset_task;
struct work_struct adminq_task;
struct work_struct finish_config;
- struct delayed_work client_task;
wait_queue_head_t down_waitqueue;
wait_queue_head_t reset_waitqueue;
wait_queue_head_t vc_waitqueue;
@@ -265,7 +263,6 @@ struct iavf_adapter {
int num_vlan_filters;
struct list_head mac_filter_list;
struct mutex crit_lock;
- struct mutex client_lock;
/* Lock to protect accesses to MAC and VLAN lists */
spinlock_t mac_vlan_list_lock;
char misc_vector_name[IFNAMSIZ + 9];
@@ -282,10 +279,6 @@ struct iavf_adapter {
u64 hw_csum_rx_error;
u32 rx_desc_count;
int num_msix_vectors;
- int num_rdma_msix;
- int rdma_base_vector;
- u32 client_pending;
- struct iavf_client_instance *cinst;
struct msix_entry *msix_entries;
u32 flags;
@@ -294,12 +287,6 @@ struct iavf_adapter {
#define IAVF_FLAG_RESET_PENDING BIT(4)
#define IAVF_FLAG_RESET_NEEDED BIT(5)
#define IAVF_FLAG_WB_ON_ITR_CAPABLE BIT(6)
-#define IAVF_FLAG_SERVICE_CLIENT_REQUESTED BIT(9)
-#define IAVF_FLAG_CLIENT_NEEDS_OPEN BIT(10)
-#define IAVF_FLAG_CLIENT_NEEDS_CLOSE BIT(11)
-#define IAVF_FLAG_CLIENT_NEEDS_L2_PARAMS BIT(12)
-#define IAVF_FLAG_PROMISC_ON BIT(13)
-#define IAVF_FLAG_ALLMULTI_ON BIT(14)
#define IAVF_FLAG_LEGACY_RX BIT(15)
#define IAVF_FLAG_REINIT_ITR_NEEDED BIT(16)
#define IAVF_FLAG_QUEUES_DISABLED BIT(17)
@@ -325,10 +312,7 @@ struct iavf_adapter {
#define IAVF_FLAG_AQ_SET_HENA BIT_ULL(12)
#define IAVF_FLAG_AQ_SET_RSS_KEY BIT_ULL(13)
#define IAVF_FLAG_AQ_SET_RSS_LUT BIT_ULL(14)
-#define IAVF_FLAG_AQ_REQUEST_PROMISC BIT_ULL(15)
-#define IAVF_FLAG_AQ_RELEASE_PROMISC BIT_ULL(16)
-#define IAVF_FLAG_AQ_REQUEST_ALLMULTI BIT_ULL(17)
-#define IAVF_FLAG_AQ_RELEASE_ALLMULTI BIT_ULL(18)
+#define IAVF_FLAG_AQ_CONFIGURE_PROMISC_MODE BIT_ULL(15)
#define IAVF_FLAG_AQ_ENABLE_VLAN_STRIPPING BIT_ULL(19)
#define IAVF_FLAG_AQ_DISABLE_VLAN_STRIPPING BIT_ULL(20)
#define IAVF_FLAG_AQ_ENABLE_CHANNELS BIT_ULL(21)
@@ -365,6 +349,12 @@ struct iavf_adapter {
(IAVF_EXTENDED_CAP_SEND_VLAN_V2 | \
IAVF_EXTENDED_CAP_RECV_VLAN_V2)
+ /* Lock to prevent possible clobbering of
+ * current_netdev_promisc_flags
+ */
+ spinlock_t current_netdev_promisc_flags_lock;
+ netdev_features_t current_netdev_promisc_flags;
+
/* OS defined structs */
struct net_device *netdev;
struct pci_dev *pdev;
@@ -376,7 +366,6 @@ struct iavf_adapter {
unsigned long crit_section;
struct delayed_work watchdog_task;
- bool netdev_registered;
bool link_up;
enum virtchnl_link_speed link_speed;
/* This is only populated if the VIRTCHNL_VF_CAP_ADV_LINK_SPEED is set
@@ -388,11 +377,6 @@ struct iavf_adapter {
u32 link_speed_mbps;
enum virtchnl_ops current_op;
-#define CLIENT_ALLOWED(_a) ((_a)->vf_res ? \
- (_a)->vf_res->vf_cap_flags & \
- VIRTCHNL_VF_OFFLOAD_RDMA : \
- 0)
-#define CLIENT_ENABLED(_a) ((_a)->cinst)
/* RSS by the PF should be preferred over RSS via other methods. */
#define RSS_PF(_a) ((_a)->vf_res->vf_cap_flags & \
VIRTCHNL_VF_OFFLOAD_RSS_PF)
@@ -405,6 +389,8 @@ struct iavf_adapter {
VIRTCHNL_VF_OFFLOAD_VLAN)
#define VLAN_V2_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \
VIRTCHNL_VF_OFFLOAD_VLAN_V2)
+#define CRC_OFFLOAD_ALLOWED(_a) ((_a)->vf_res->vf_cap_flags & \
+ VIRTCHNL_VF_OFFLOAD_CRC)
#define VLAN_V2_FILTERING_ALLOWED(_a) \
(VLAN_V2_ALLOWED((_a)) && \
((_a)->vlan_v2_caps.filtering.filtering_support.outer || \
@@ -458,12 +444,6 @@ struct iavf_adapter {
/* Ethtool Private Flags */
-/* lan device, used by client interface */
-struct iavf_device {
- struct list_head list;
- struct iavf_adapter *vf;
-};
-
/* needed by iavf_ethtool.c */
extern char iavf_driver_name[];
@@ -551,7 +531,8 @@ void iavf_add_ether_addrs(struct iavf_adapter *adapter);
void iavf_del_ether_addrs(struct iavf_adapter *adapter);
void iavf_add_vlans(struct iavf_adapter *adapter);
void iavf_del_vlans(struct iavf_adapter *adapter);
-void iavf_set_promiscuous(struct iavf_adapter *adapter, int flags);
+void iavf_set_promiscuous(struct iavf_adapter *adapter);
+bool iavf_promiscuous_mode_changed(struct iavf_adapter *adapter);
void iavf_request_stats(struct iavf_adapter *adapter);
int iavf_request_reset(struct iavf_adapter *adapter);
void iavf_get_hena(struct iavf_adapter *adapter);
@@ -566,11 +547,6 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
int iavf_config_rss(struct iavf_adapter *adapter);
int iavf_lan_add_device(struct iavf_adapter *adapter);
int iavf_lan_del_device(struct iavf_adapter *adapter);
-void iavf_client_subtask(struct iavf_adapter *adapter);
-void iavf_notify_client_message(struct iavf_vsi *vsi, u8 *msg, u16 len);
-void iavf_notify_client_l2_params(struct iavf_vsi *vsi);
-void iavf_notify_client_open(struct iavf_vsi *vsi);
-void iavf_notify_client_close(struct iavf_vsi *vsi, bool reset);
void iavf_enable_channels(struct iavf_adapter *adapter);
void iavf_disable_channels(struct iavf_adapter *adapter);
void iavf_add_cloud_filter(struct iavf_adapter *adapter);
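The four promiscuous/allmulti request flags collapse into a single IAVF_FLAG_AQ_CONFIGURE_PROMISC_MODE request plus a cached copy of the netdev flags guarded by current_netdev_promisc_flags_lock. A rough sketch of that caching pattern, not the driver's actual implementation (example_promisc_changed() is hypothetical and details may differ):

#include <linux/netdevice.h>
#include <linux/spinlock.h>

static bool example_promisc_changed(struct iavf_adapter *adapter,
				    unsigned int netdev_flags)
{
	bool changed;

	spin_lock_bh(&adapter->current_netdev_promisc_flags_lock);
	/* did IFF_PROMISC or IFF_ALLMULTI flip since the last request? */
	changed = (adapter->current_netdev_promisc_flags ^ netdev_flags) &
		  (IFF_PROMISC | IFF_ALLMULTI);
	if (changed)
		adapter->current_netdev_promisc_flags =
			netdev_flags & (IFF_PROMISC | IFF_ALLMULTI);
	spin_unlock_bh(&adapter->current_netdev_promisc_flags_lock);

	return changed;
}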
diff --git a/drivers/net/ethernet/intel/iavf/iavf_client.c b/drivers/net/ethernet/intel/iavf/iavf_client.c
deleted file mode 100644
index e6051b6355aa..000000000000
--- a/drivers/net/ethernet/intel/iavf/iavf_client.c
+++ /dev/null
@@ -1,578 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/* Copyright(c) 2013 - 2018 Intel Corporation. */
-
-#include <linux/list.h>
-#include <linux/errno.h>
-
-#include "iavf.h"
-#include "iavf_prototype.h"
-#include "iavf_client.h"
-
-static
-const char iavf_client_interface_version_str[] = IAVF_CLIENT_VERSION_STR;
-static struct iavf_client *vf_registered_client;
-static LIST_HEAD(iavf_devices);
-static DEFINE_MUTEX(iavf_device_mutex);
-
-static u32 iavf_client_virtchnl_send(struct iavf_info *ldev,
- struct iavf_client *client,
- u8 *msg, u16 len);
-
-static int iavf_client_setup_qvlist(struct iavf_info *ldev,
- struct iavf_client *client,
- struct iavf_qvlist_info *qvlist_info);
-
-static struct iavf_ops iavf_lan_ops = {
- .virtchnl_send = iavf_client_virtchnl_send,
- .setup_qvlist = iavf_client_setup_qvlist,
-};
-
-/**
- * iavf_client_get_params - retrieve relevant client parameters
- * @vsi: VSI with parameters
- * @params: client param struct
- **/
-static
-void iavf_client_get_params(struct iavf_vsi *vsi, struct iavf_params *params)
-{
- int i;
-
- memset(params, 0, sizeof(struct iavf_params));
- params->mtu = vsi->netdev->mtu;
- params->link_up = vsi->back->link_up;
-
- for (i = 0; i < IAVF_MAX_USER_PRIORITY; i++) {
- params->qos.prio_qos[i].tc = 0;
- params->qos.prio_qos[i].qs_handle = vsi->qs_handle;
- }
-}
-
-/**
- * iavf_notify_client_message - call the client message receive callback
- * @vsi: the VSI associated with this client
- * @msg: message buffer
- * @len: length of message
- *
- * If there is a client to this VSI, call the client
- **/
-void iavf_notify_client_message(struct iavf_vsi *vsi, u8 *msg, u16 len)
-{
- struct iavf_client_instance *cinst;
-
- if (!vsi)
- return;
-
- cinst = vsi->back->cinst;
- if (!cinst || !cinst->client || !cinst->client->ops ||
- !cinst->client->ops->virtchnl_receive) {
- dev_dbg(&vsi->back->pdev->dev,
- "Cannot locate client instance virtchnl_receive function\n");
- return;
- }
- cinst->client->ops->virtchnl_receive(&cinst->lan_info, cinst->client,
- msg, len);
-}
-
-/**
- * iavf_notify_client_l2_params - call the client notify callback
- * @vsi: the VSI with l2 param changes
- *
- * If there is a client to this VSI, call the client
- **/
-void iavf_notify_client_l2_params(struct iavf_vsi *vsi)
-{
- struct iavf_client_instance *cinst;
- struct iavf_params params;
-
- if (!vsi)
- return;
-
- cinst = vsi->back->cinst;
-
- if (!cinst || !cinst->client || !cinst->client->ops ||
- !cinst->client->ops->l2_param_change) {
- dev_dbg(&vsi->back->pdev->dev,
- "Cannot locate client instance l2_param_change function\n");
- return;
- }
- iavf_client_get_params(vsi, &params);
- cinst->lan_info.params = params;
- cinst->client->ops->l2_param_change(&cinst->lan_info, cinst->client,
- &params);
-}
-
-/**
- * iavf_notify_client_open - call the client open callback
- * @vsi: the VSI with netdev opened
- *
- * If there is a client to this netdev, call the client with open
- **/
-void iavf_notify_client_open(struct iavf_vsi *vsi)
-{
- struct iavf_adapter *adapter = vsi->back;
- struct iavf_client_instance *cinst = adapter->cinst;
- int ret;
-
- if (!cinst || !cinst->client || !cinst->client->ops ||
- !cinst->client->ops->open) {
- dev_dbg(&vsi->back->pdev->dev,
- "Cannot locate client instance open function\n");
- return;
- }
- if (!(test_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state))) {
- ret = cinst->client->ops->open(&cinst->lan_info, cinst->client);
- if (!ret)
- set_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state);
- }
-}
-
-/**
- * iavf_client_release_qvlist - send a message to the PF to release rdma qv map
- * @ldev: pointer to L2 context.
- *
- * Return 0 on success or < 0 on error
- **/
-static int iavf_client_release_qvlist(struct iavf_info *ldev)
-{
- struct iavf_adapter *adapter = ldev->vf;
- enum iavf_status err;
-
- if (adapter->aq_required)
- return -EAGAIN;
-
- err = iavf_aq_send_msg_to_pf(&adapter->hw,
- VIRTCHNL_OP_RELEASE_RDMA_IRQ_MAP,
- IAVF_SUCCESS, NULL, 0, NULL);
-
- if (err)
- dev_err(&adapter->pdev->dev,
- "Unable to send RDMA vector release message to PF, error %d, aq status %d\n",
- err, adapter->hw.aq.asq_last_status);
-
- return err;
-}
-
-/**
- * iavf_notify_client_close - call the client close callback
- * @vsi: the VSI with netdev closed
- * @reset: true when close called due to reset pending
- *
- * If there is a client to this netdev, call the client with close
- **/
-void iavf_notify_client_close(struct iavf_vsi *vsi, bool reset)
-{
- struct iavf_adapter *adapter = vsi->back;
- struct iavf_client_instance *cinst = adapter->cinst;
-
- if (!cinst || !cinst->client || !cinst->client->ops ||
- !cinst->client->ops->close) {
- dev_dbg(&vsi->back->pdev->dev,
- "Cannot locate client instance close function\n");
- return;
- }
- cinst->client->ops->close(&cinst->lan_info, cinst->client, reset);
- iavf_client_release_qvlist(&cinst->lan_info);
- clear_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state);
-}
-
-/**
- * iavf_client_add_instance - add a client instance to the instance list
- * @adapter: pointer to the board struct
- *
- * Returns cinst ptr on success, NULL on failure
- **/
-static struct iavf_client_instance *
-iavf_client_add_instance(struct iavf_adapter *adapter)
-{
- struct iavf_client_instance *cinst = NULL;
- struct iavf_vsi *vsi = &adapter->vsi;
- struct netdev_hw_addr *mac = NULL;
- struct iavf_params params;
-
- if (!vf_registered_client)
- goto out;
-
- if (adapter->cinst) {
- cinst = adapter->cinst;
- goto out;
- }
-
- cinst = kzalloc(sizeof(*cinst), GFP_KERNEL);
- if (!cinst)
- goto out;
-
- cinst->lan_info.vf = (void *)adapter;
- cinst->lan_info.netdev = vsi->netdev;
- cinst->lan_info.pcidev = adapter->pdev;
- cinst->lan_info.fid = 0;
- cinst->lan_info.ftype = IAVF_CLIENT_FTYPE_VF;
- cinst->lan_info.hw_addr = adapter->hw.hw_addr;
- cinst->lan_info.ops = &iavf_lan_ops;
- cinst->lan_info.version.major = IAVF_CLIENT_VERSION_MAJOR;
- cinst->lan_info.version.minor = IAVF_CLIENT_VERSION_MINOR;
- cinst->lan_info.version.build = IAVF_CLIENT_VERSION_BUILD;
- iavf_client_get_params(vsi, &params);
- cinst->lan_info.params = params;
- set_bit(__IAVF_CLIENT_INSTANCE_NONE, &cinst->state);
-
- cinst->lan_info.msix_count = adapter->num_rdma_msix;
- cinst->lan_info.msix_entries =
- &adapter->msix_entries[adapter->rdma_base_vector];
-
- mac = list_first_entry(&cinst->lan_info.netdev->dev_addrs.list,
- struct netdev_hw_addr, list);
- if (mac)
- ether_addr_copy(cinst->lan_info.lanmac, mac->addr);
- else
- dev_err(&adapter->pdev->dev, "MAC address list is empty!\n");
-
- cinst->client = vf_registered_client;
- adapter->cinst = cinst;
-out:
- return cinst;
-}
-
-/**
- * iavf_client_del_instance - removes a client instance from the list
- * @adapter: pointer to the board struct
- *
- **/
-static
-void iavf_client_del_instance(struct iavf_adapter *adapter)
-{
- kfree(adapter->cinst);
- adapter->cinst = NULL;
-}
-
-/**
- * iavf_client_subtask - client maintenance work
- * @adapter: board private structure
- **/
-void iavf_client_subtask(struct iavf_adapter *adapter)
-{
- struct iavf_client *client = vf_registered_client;
- struct iavf_client_instance *cinst;
- int ret = 0;
-
- if (adapter->state < __IAVF_DOWN)
- return;
-
- /* first check client is registered */
- if (!client)
- return;
-
- /* Add the client instance to the instance list */
- cinst = iavf_client_add_instance(adapter);
- if (!cinst)
- return;
-
- dev_info(&adapter->pdev->dev, "Added instance of Client %s\n",
- client->name);
-
- if (!test_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state)) {
- /* Send an Open request to the client */
-
- if (client->ops && client->ops->open)
- ret = client->ops->open(&cinst->lan_info, client);
- if (!ret)
- set_bit(__IAVF_CLIENT_INSTANCE_OPENED,
- &cinst->state);
- else
- /* remove client instance */
- iavf_client_del_instance(adapter);
- }
-}
-
-/**
- * iavf_lan_add_device - add a lan device struct to the list of lan devices
- * @adapter: pointer to the board struct
- *
- * Returns 0 on success or none 0 on error
- **/
-int iavf_lan_add_device(struct iavf_adapter *adapter)
-{
- struct iavf_device *ldev;
- int ret = 0;
-
- mutex_lock(&iavf_device_mutex);
- list_for_each_entry(ldev, &iavf_devices, list) {
- if (ldev->vf == adapter) {
- ret = -EEXIST;
- goto out;
- }
- }
- ldev = kzalloc(sizeof(*ldev), GFP_KERNEL);
- if (!ldev) {
- ret = -ENOMEM;
- goto out;
- }
- ldev->vf = adapter;
- INIT_LIST_HEAD(&ldev->list);
- list_add(&ldev->list, &iavf_devices);
- dev_info(&adapter->pdev->dev, "Added LAN device bus=0x%02x dev=0x%02x func=0x%02x\n",
- adapter->hw.bus.bus_id, adapter->hw.bus.device,
- adapter->hw.bus.func);
-
- /* Since in some cases register may have happened before a device gets
- * added, we can schedule a subtask to go initiate the clients.
- */
- adapter->flags |= IAVF_FLAG_SERVICE_CLIENT_REQUESTED;
-
-out:
- mutex_unlock(&iavf_device_mutex);
- return ret;
-}
-
-/**
- * iavf_lan_del_device - removes a lan device from the device list
- * @adapter: pointer to the board struct
- *
- * Returns 0 on success or non-0 on error
- **/
-int iavf_lan_del_device(struct iavf_adapter *adapter)
-{
- struct iavf_device *ldev, *tmp;
- int ret = -ENODEV;
-
- mutex_lock(&iavf_device_mutex);
- list_for_each_entry_safe(ldev, tmp, &iavf_devices, list) {
- if (ldev->vf == adapter) {
- dev_info(&adapter->pdev->dev,
- "Deleted LAN device bus=0x%02x dev=0x%02x func=0x%02x\n",
- adapter->hw.bus.bus_id, adapter->hw.bus.device,
- adapter->hw.bus.func);
- list_del(&ldev->list);
- kfree(ldev);
- ret = 0;
- break;
- }
- }
-
- mutex_unlock(&iavf_device_mutex);
- return ret;
-}
-
-/**
- * iavf_client_release - release client specific resources
- * @client: pointer to the registered client
- *
- **/
-static void iavf_client_release(struct iavf_client *client)
-{
- struct iavf_client_instance *cinst;
- struct iavf_device *ldev;
- struct iavf_adapter *adapter;
-
- mutex_lock(&iavf_device_mutex);
- list_for_each_entry(ldev, &iavf_devices, list) {
- adapter = ldev->vf;
- cinst = adapter->cinst;
- if (!cinst)
- continue;
- if (test_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state)) {
- if (client->ops && client->ops->close)
- client->ops->close(&cinst->lan_info, client,
- false);
- iavf_client_release_qvlist(&cinst->lan_info);
- clear_bit(__IAVF_CLIENT_INSTANCE_OPENED, &cinst->state);
-
- dev_warn(&adapter->pdev->dev,
- "Client %s instance closed\n", client->name);
- }
- /* delete the client instance */
- iavf_client_del_instance(adapter);
- dev_info(&adapter->pdev->dev, "Deleted client instance of Client %s\n",
- client->name);
- }
- mutex_unlock(&iavf_device_mutex);
-}
-
-/**
- * iavf_client_prepare - prepare client specific resources
- * @client: pointer to the registered client
- *
- **/
-static void iavf_client_prepare(struct iavf_client *client)
-{
- struct iavf_device *ldev;
- struct iavf_adapter *adapter;
-
- mutex_lock(&iavf_device_mutex);
- list_for_each_entry(ldev, &iavf_devices, list) {
- adapter = ldev->vf;
- /* Signal the watchdog to service the client */
- adapter->flags |= IAVF_FLAG_SERVICE_CLIENT_REQUESTED;
- }
- mutex_unlock(&iavf_device_mutex);
-}
-
-/**
- * iavf_client_virtchnl_send - send a message to the PF instance
- * @ldev: pointer to L2 context.
- * @client: Client pointer.
- * @msg: pointer to message buffer
- * @len: message length
- *
- * Return 0 on success or < 0 on error
- **/
-static u32 iavf_client_virtchnl_send(struct iavf_info *ldev,
- struct iavf_client *client,
- u8 *msg, u16 len)
-{
- struct iavf_adapter *adapter = ldev->vf;
- enum iavf_status err;
-
- if (adapter->aq_required)
- return -EAGAIN;
-
- err = iavf_aq_send_msg_to_pf(&adapter->hw, VIRTCHNL_OP_RDMA,
- IAVF_SUCCESS, msg, len, NULL);
- if (err)
- dev_err(&adapter->pdev->dev, "Unable to send RDMA message to PF, error %d, aq status %d\n",
- err, adapter->hw.aq.asq_last_status);
-
- return err;
-}
-
-/**
- * iavf_client_setup_qvlist - send a message to the PF to setup rdma qv map
- * @ldev: pointer to L2 context.
- * @client: Client pointer.
- * @qvlist_info: queue and vector list
- *
- * Return 0 on success or < 0 on error
- **/
-static int iavf_client_setup_qvlist(struct iavf_info *ldev,
- struct iavf_client *client,
- struct iavf_qvlist_info *qvlist_info)
-{
- struct virtchnl_rdma_qvlist_info *v_qvlist_info;
- struct iavf_adapter *adapter = ldev->vf;
- struct iavf_qv_info *qv_info;
- enum iavf_status err;
- u32 v_idx, i;
- size_t msg_size;
-
- if (adapter->aq_required)
- return -EAGAIN;
-
- /* A quick check on whether the vectors belong to the client */
- for (i = 0; i < qvlist_info->num_vectors; i++) {
- qv_info = &qvlist_info->qv_info[i];
- if (!qv_info)
- continue;
- v_idx = qv_info->v_idx;
- if ((v_idx >=
- (adapter->rdma_base_vector + adapter->num_rdma_msix)) ||
- (v_idx < adapter->rdma_base_vector))
- return -EINVAL;
- }
-
- v_qvlist_info = (struct virtchnl_rdma_qvlist_info *)qvlist_info;
- msg_size = virtchnl_struct_size(v_qvlist_info, qv_info,
- v_qvlist_info->num_vectors);
-
- adapter->client_pending |= BIT(VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP);
- err = iavf_aq_send_msg_to_pf(&adapter->hw,
- VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP, IAVF_SUCCESS,
- (u8 *)v_qvlist_info, msg_size, NULL);
-
- if (err) {
- dev_err(&adapter->pdev->dev,
- "Unable to send RDMA vector config message to PF, error %d, aq status %d\n",
- err, adapter->hw.aq.asq_last_status);
- goto out;
- }
-
- err = -EBUSY;
- for (i = 0; i < 5; i++) {
- msleep(100);
- if (!(adapter->client_pending &
- BIT(VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP))) {
- err = 0;
- break;
- }
- }
-out:
- return err;
-}
-
-/**
- * iavf_register_client - Register a iavf client driver with the L2 driver
- * @client: pointer to the iavf_client struct
- *
- * Returns 0 on success or non-0 on error
- **/
-int iavf_register_client(struct iavf_client *client)
-{
- int ret = 0;
-
- if (!client) {
- ret = -EIO;
- goto out;
- }
-
- if (strlen(client->name) == 0) {
- pr_info("iavf: Failed to register client with no name\n");
- ret = -EIO;
- goto out;
- }
-
- if (vf_registered_client) {
- pr_info("iavf: Client %s has already been registered!\n",
- client->name);
- ret = -EEXIST;
- goto out;
- }
-
- if ((client->version.major != IAVF_CLIENT_VERSION_MAJOR) ||
- (client->version.minor != IAVF_CLIENT_VERSION_MINOR)) {
- pr_info("iavf: Failed to register client %s due to mismatched client interface version\n",
- client->name);
- pr_info("Client is using version: %02d.%02d.%02d while LAN driver supports %s\n",
- client->version.major, client->version.minor,
- client->version.build,
- iavf_client_interface_version_str);
- ret = -EIO;
- goto out;
- }
-
- vf_registered_client = client;
-
- iavf_client_prepare(client);
-
- pr_info("iavf: Registered client %s with return code %d\n",
- client->name, ret);
-out:
- return ret;
-}
-EXPORT_SYMBOL(iavf_register_client);
-
-/**
- * iavf_unregister_client - Unregister a iavf client driver with the L2 driver
- * @client: pointer to the iavf_client struct
- *
- * Returns 0 on success or non-0 on error
- **/
-int iavf_unregister_client(struct iavf_client *client)
-{
- int ret = 0;
-
- /* When a unregister request comes through we would have to send
- * a close for each of the client instances that were opened.
- * client_release function is called to handle this.
- */
- iavf_client_release(client);
-
- if (vf_registered_client != client) {
- pr_info("iavf: Client %s has not been registered\n",
- client->name);
- ret = -ENODEV;
- goto out;
- }
- vf_registered_client = NULL;
- pr_info("iavf: Unregistered client %s\n", client->name);
-out:
- return ret;
-}
-EXPORT_SYMBOL(iavf_unregister_client);
diff --git a/drivers/net/ethernet/intel/iavf/iavf_client.h b/drivers/net/ethernet/intel/iavf/iavf_client.h
deleted file mode 100644
index 500269bc0f5b..000000000000
--- a/drivers/net/ethernet/intel/iavf/iavf_client.h
+++ /dev/null
@@ -1,169 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright(c) 2013 - 2018 Intel Corporation. */
-
-#ifndef _IAVF_CLIENT_H_
-#define _IAVF_CLIENT_H_
-
-#define IAVF_CLIENT_STR_LENGTH 10
-
-/* Client interface version should be updated anytime there is a change in the
- * existing APIs or data structures.
- */
-#define IAVF_CLIENT_VERSION_MAJOR 0
-#define IAVF_CLIENT_VERSION_MINOR 01
-#define IAVF_CLIENT_VERSION_BUILD 00
-#define IAVF_CLIENT_VERSION_STR \
- __stringify(IAVF_CLIENT_VERSION_MAJOR) "." \
- __stringify(IAVF_CLIENT_VERSION_MINOR) "." \
- __stringify(IAVF_CLIENT_VERSION_BUILD)
-
-struct iavf_client_version {
- u8 major;
- u8 minor;
- u8 build;
- u8 rsvd;
-};
-
-enum iavf_client_state {
- __IAVF_CLIENT_NULL,
- __IAVF_CLIENT_REGISTERED
-};
-
-enum iavf_client_instance_state {
- __IAVF_CLIENT_INSTANCE_NONE,
- __IAVF_CLIENT_INSTANCE_OPENED,
-};
-
-struct iavf_ops;
-struct iavf_client;
-
-/* HW does not define a type value for AEQ; only for RX/TX and CEQ.
- * In order for us to keep the interface simple, SW will define a
- * unique type value for AEQ.
- */
-#define IAVF_QUEUE_TYPE_PE_AEQ 0x80
-#define IAVF_QUEUE_INVALID_IDX 0xFFFF
-
-struct iavf_qv_info {
- u32 v_idx; /* msix_vector */
- u16 ceq_idx;
- u16 aeq_idx;
- u8 itr_idx;
-};
-
-struct iavf_qvlist_info {
- u32 num_vectors;
- struct iavf_qv_info qv_info[];
-};
-
-#define IAVF_CLIENT_MSIX_ALL 0xFFFFFFFF
-
-/* set of LAN parameters useful for clients managed by LAN */
-
-/* Struct to hold per priority info */
-struct iavf_prio_qos_params {
- u16 qs_handle; /* qs handle for prio */
- u8 tc; /* TC mapped to prio */
- u8 reserved;
-};
-
-#define IAVF_CLIENT_MAX_USER_PRIORITY 8
-/* Struct to hold Client QoS */
-struct iavf_qos_params {
- struct iavf_prio_qos_params prio_qos[IAVF_CLIENT_MAX_USER_PRIORITY];
-};
-
-struct iavf_params {
- struct iavf_qos_params qos;
- u16 mtu;
- u16 link_up; /* boolean */
-};
-
-/* Structure to hold LAN device info for a client device */
-struct iavf_info {
- struct iavf_client_version version;
- u8 lanmac[6];
- struct net_device *netdev;
- struct pci_dev *pcidev;
- u8 __iomem *hw_addr;
- u8 fid; /* function id, PF id or VF id */
-#define IAVF_CLIENT_FTYPE_PF 0
-#define IAVF_CLIENT_FTYPE_VF 1
- u8 ftype; /* function type, PF or VF */
- void *vf; /* cast to iavf_adapter */
-
- /* All L2 params that could change during the life span of the device
- * and needs to be communicated to the client when they change
- */
- struct iavf_params params;
- struct iavf_ops *ops;
-
- u16 msix_count; /* number of msix vectors*/
- /* Array down below will be dynamically allocated based on msix_count */
- struct msix_entry *msix_entries;
- u16 itr_index; /* Which ITR index the PE driver is suppose to use */
-};
-
-struct iavf_ops {
- /* setup_q_vector_list enables queues with a particular vector */
- int (*setup_qvlist)(struct iavf_info *ldev, struct iavf_client *client,
- struct iavf_qvlist_info *qv_info);
-
- u32 (*virtchnl_send)(struct iavf_info *ldev, struct iavf_client *client,
- u8 *msg, u16 len);
-
- /* If the PE Engine is unresponsive, RDMA driver can request a reset.*/
- void (*request_reset)(struct iavf_info *ldev,
- struct iavf_client *client);
-};
-
-struct iavf_client_ops {
- /* Should be called from register_client() or whenever the driver is
- * ready to create a specific client instance.
- */
- int (*open)(struct iavf_info *ldev, struct iavf_client *client);
-
- /* Should be closed when netdev is unavailable or when unregister
- * call comes in. If the close happens due to a reset, set the reset
- * bit to true.
- */
- void (*close)(struct iavf_info *ldev, struct iavf_client *client,
- bool reset);
-
- /* called when some l2 managed parameters changes - mss */
- void (*l2_param_change)(struct iavf_info *ldev,
- struct iavf_client *client,
- struct iavf_params *params);
-
- /* called when a message is received from the PF */
- int (*virtchnl_receive)(struct iavf_info *ldev,
- struct iavf_client *client,
- u8 *msg, u16 len);
-};
-
-/* Client device */
-struct iavf_client_instance {
- struct list_head list;
- struct iavf_info lan_info;
- struct iavf_client *client;
- unsigned long state;
-};
-
-struct iavf_client {
- struct list_head list; /* list of registered clients */
- char name[IAVF_CLIENT_STR_LENGTH];
- struct iavf_client_version version;
- unsigned long state; /* client state */
- atomic_t ref_cnt; /* Count of all the client devices of this kind */
- u32 flags;
-#define IAVF_CLIENT_FLAGS_LAUNCH_ON_PROBE BIT(0)
-#define IAVF_TX_FLAGS_NOTIFY_OTHER_EVENTS BIT(2)
- u8 type;
-#define IAVF_CLIENT_RDMA 0
- struct iavf_client_ops *ops; /* client ops provided by the client */
-};
-
-/* used by clients */
-int iavf_register_client(struct iavf_client *client);
-int iavf_unregister_client(struct iavf_client *client);
-#endif /* _IAVF_CLIENT_H_ */
diff --git a/drivers/net/ethernet/intel/iavf/iavf_common.c b/drivers/net/ethernet/intel/iavf/iavf_common.c
index 1afd761d8052..8091e6feca01 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_common.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_common.c
@@ -7,38 +7,6 @@
#include <linux/avf/virtchnl.h>
/**
- * iavf_set_mac_type - Sets MAC type
- * @hw: pointer to the HW structure
- *
- * This function sets the mac type of the adapter based on the
- * vendor ID and device ID stored in the hw structure.
- **/
-enum iavf_status iavf_set_mac_type(struct iavf_hw *hw)
-{
- enum iavf_status status = 0;
-
- if (hw->vendor_id == PCI_VENDOR_ID_INTEL) {
- switch (hw->device_id) {
- case IAVF_DEV_ID_X722_VF:
- hw->mac.type = IAVF_MAC_X722_VF;
- break;
- case IAVF_DEV_ID_VF:
- case IAVF_DEV_ID_VF_HV:
- case IAVF_DEV_ID_ADAPTIVE_VF:
- hw->mac.type = IAVF_MAC_VF;
- break;
- default:
- hw->mac.type = IAVF_MAC_GENERIC;
- break;
- }
- } else {
- status = IAVF_ERR_DEVICE_NOT_SUPPORTED;
- }
-
- return status;
-}
-
-/**
* iavf_aq_str - convert AQ err code to a string
* @hw: pointer to the HW structure
* @aq_err: the AQ error code to convert
diff --git a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
index 90397293525f..6f236d1a6444 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_ethtool.c
@@ -395,11 +395,9 @@ static void iavf_get_priv_flag_strings(struct net_device *netdev, u8 *data)
{
unsigned int i;
- for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++) {
- snprintf(data, ETH_GSTRING_LEN, "%s",
- iavf_gstrings_priv_flags[i].flag_string);
- data += ETH_GSTRING_LEN;
- }
+ for (i = 0; i < IAVF_PRIV_FLAGS_STR_LEN; i++)
+ ethtool_sprintf(&data, "%s",
+ iavf_gstrings_priv_flags[i].flag_string);
}
/**
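For reference, ethtool_sprintf() folds the open-coded snprintf-and-advance loop into one helper that formats into a fixed ETH_GSTRING_LEN slot and bumps the destination pointer. A minimal standalone sketch of that slot-and-advance idiom, with an illustrative slot size and a made-up helper name:

#include <stdio.h>

#define GSTRING_LEN 32  /* illustrative; the kernel uses ETH_GSTRING_LEN */

/* Format a string into the current fixed-size slot, then advance. */
static void string_slot_printf(char **data, const char *s)
{
        snprintf(*data, GSTRING_LEN, "%s", s);
        *data += GSTRING_LEN;   /* move to the next slot */
}

int main(void)
{
        char table[2 * GSTRING_LEN] = { 0 };
        char *p = table;

        string_slot_printf(&p, "legacy-rx");
        string_slot_printf(&p, "another-flag");
        printf("%s %s\n", table, table + GSTRING_LEN);
        return 0;
}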
diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index b3434dbc90d6..c862ebcd2e39 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -3,7 +3,6 @@
#include "iavf.h"
#include "iavf_prototype.h"
-#include "iavf_client.h"
/* All iavf tracepoints are defined by the include below, which must
* be included exactly once across the whole kernel with
* CREATE_TRACE_POINTS defined
@@ -1187,6 +1186,16 @@ static int iavf_addr_unsync(struct net_device *netdev, const u8 *addr)
}
/**
+ * iavf_promiscuous_mode_changed - check if promiscuous mode bits changed
+ * @adapter: device specific adapter
+ */
+bool iavf_promiscuous_mode_changed(struct iavf_adapter *adapter)
+{
+ return (adapter->current_netdev_promisc_flags ^ adapter->netdev->flags) &
+ (IFF_PROMISC | IFF_ALLMULTI);
+}
+
+/**
* iavf_set_rx_mode - NDO callback to set the netdev filters
* @netdev: network interface device structure
**/
@@ -1199,19 +1208,10 @@ static void iavf_set_rx_mode(struct net_device *netdev)
__dev_mc_sync(netdev, iavf_addr_sync, iavf_addr_unsync);
spin_unlock_bh(&adapter->mac_vlan_list_lock);
- if (netdev->flags & IFF_PROMISC &&
- !(adapter->flags & IAVF_FLAG_PROMISC_ON))
- adapter->aq_required |= IAVF_FLAG_AQ_REQUEST_PROMISC;
- else if (!(netdev->flags & IFF_PROMISC) &&
- adapter->flags & IAVF_FLAG_PROMISC_ON)
- adapter->aq_required |= IAVF_FLAG_AQ_RELEASE_PROMISC;
-
- if (netdev->flags & IFF_ALLMULTI &&
- !(adapter->flags & IAVF_FLAG_ALLMULTI_ON))
- adapter->aq_required |= IAVF_FLAG_AQ_REQUEST_ALLMULTI;
- else if (!(netdev->flags & IFF_ALLMULTI) &&
- adapter->flags & IAVF_FLAG_ALLMULTI_ON)
- adapter->aq_required |= IAVF_FLAG_AQ_RELEASE_ALLMULTI;
+ spin_lock_bh(&adapter->current_netdev_promisc_flags_lock);
+ if (iavf_promiscuous_mode_changed(adapter))
+ adapter->aq_required |= IAVF_FLAG_AQ_CONFIGURE_PROMISC_MODE;
+ spin_unlock_bh(&adapter->current_netdev_promisc_flags_lock);
}
/**
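The rewritten iavf_set_rx_mode() no longer juggles separate request/release flags; it caches the last-applied promiscuous bits and requests one reconfiguration whenever they diverge from netdev->flags. A standalone sketch of the XOR-and-mask test behind iavf_promiscuous_mode_changed(), with illustrative flag values:

#include <stdbool.h>
#include <stdio.h>

#define IFF_PROMISC  0x100      /* illustrative values for the sketch */
#define IFF_ALLMULTI 0x200

static bool promisc_mode_changed(unsigned int cached, unsigned int now)
{
        /* XOR exposes the bits that differ; the mask keeps only the two of interest */
        return (cached ^ now) & (IFF_PROMISC | IFF_ALLMULTI);
}

int main(void)
{
        printf("%d\n", promisc_mode_changed(0, IFF_ALLMULTI));          /* 1: allmulti toggled */
        printf("%d\n", promisc_mode_changed(IFF_PROMISC, IFF_PROMISC)); /* 0: no change */
        return 0;
}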
@@ -1275,7 +1275,7 @@ static void iavf_configure(struct iavf_adapter *adapter)
* iavf_up_complete - Finish the last steps of bringing up a connection
* @adapter: board private structure
*
- * Expects to be called while holding the __IAVF_IN_CRITICAL_TASK bit lock.
+ * Expects to be called while holding crit_lock.
**/
static void iavf_up_complete(struct iavf_adapter *adapter)
{
@@ -1285,8 +1285,6 @@ static void iavf_up_complete(struct iavf_adapter *adapter)
iavf_napi_enable_all(adapter);
adapter->aq_required |= IAVF_FLAG_AQ_ENABLE_QUEUES;
- if (CLIENT_ENABLED(adapter))
- adapter->flags |= IAVF_FLAG_CLIENT_NEEDS_OPEN;
mod_delayed_work(adapter->wq, &adapter->watchdog_task, 0);
}
@@ -1399,7 +1397,7 @@ static void iavf_clear_adv_rss_conf(struct iavf_adapter *adapter)
* iavf_down - Shutdown the connection processing
* @adapter: board private structure
*
- * Expects to be called while holding the __IAVF_IN_CRITICAL_TASK bit lock.
+ * Expects to be called while holding crit_lock.
**/
void iavf_down(struct iavf_adapter *adapter)
{
@@ -1419,8 +1417,10 @@ void iavf_down(struct iavf_adapter *adapter)
iavf_clear_fdir_filters(adapter);
iavf_clear_adv_rss_conf(adapter);
- if (!(adapter->flags & IAVF_FLAG_PF_COMMS_FAILED) &&
- !(test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section))) {
+ if (adapter->flags & IAVF_FLAG_PF_COMMS_FAILED)
+ return;
+
+ if (!test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section)) {
/* cancel any current operation */
adapter->current_op = VIRTCHNL_OP_UNKNOWN;
/* Schedule operations to close down the HW. Don't wait
@@ -1952,6 +1952,17 @@ err_alloc_queues:
}
/**
+ * iavf_free_interrupt_scheme - Undo what iavf_init_interrupt_scheme does
+ * @adapter: board private structure
+ **/
+static void iavf_free_interrupt_scheme(struct iavf_adapter *adapter)
+{
+ iavf_free_q_vectors(adapter);
+ iavf_reset_interrupt_capability(adapter);
+ iavf_free_queues(adapter);
+}
+
+/**
* iavf_free_rss - Free memory used by RSS structs
* @adapter: board private structure
**/
@@ -1979,11 +1990,9 @@ static int iavf_reinit_interrupt_scheme(struct iavf_adapter *adapter, bool runni
if (running)
iavf_free_traffic_irqs(adapter);
iavf_free_misc_irq(adapter);
- iavf_reset_interrupt_capability(adapter);
- iavf_free_q_vectors(adapter);
- iavf_free_queues(adapter);
+ iavf_free_interrupt_scheme(adapter);
- err = iavf_init_interrupt_scheme(adapter);
+ err = iavf_init_interrupt_scheme(adapter);
if (err)
goto err;
@@ -2018,7 +2027,7 @@ static void iavf_finish_config(struct work_struct *work)
mutex_lock(&adapter->crit_lock);
if ((adapter->flags & IAVF_FLAG_SETUP_NETDEV_FEATURES) &&
- adapter->netdev_registered &&
+ adapter->netdev->reg_state == NETREG_REGISTERED &&
!test_bit(__IAVF_IN_REMOVE_TASK, &adapter->crit_section)) {
netdev_update_features(adapter->netdev);
adapter->flags &= ~IAVF_FLAG_SETUP_NETDEV_FEATURES;
@@ -2026,7 +2035,7 @@ static void iavf_finish_config(struct work_struct *work)
switch (adapter->state) {
case __IAVF_DOWN:
- if (!adapter->netdev_registered) {
+ if (adapter->netdev->reg_state != NETREG_REGISTERED) {
err = register_netdevice(adapter->netdev);
if (err) {
dev_err(&adapter->pdev->dev, "Unable to register netdev (%d)\n",
@@ -2040,7 +2049,6 @@ static void iavf_finish_config(struct work_struct *work)
__IAVF_INIT_CONFIG_ADAPTER);
goto out;
}
- adapter->netdev_registered = true;
}
/* Set the real number of queues when reset occurs while
@@ -2162,19 +2170,8 @@ static int iavf_process_aq_command(struct iavf_adapter *adapter)
return 0;
}
- if (adapter->aq_required & IAVF_FLAG_AQ_REQUEST_PROMISC) {
- iavf_set_promiscuous(adapter, FLAG_VF_UNICAST_PROMISC |
- FLAG_VF_MULTICAST_PROMISC);
- return 0;
- }
-
- if (adapter->aq_required & IAVF_FLAG_AQ_REQUEST_ALLMULTI) {
- iavf_set_promiscuous(adapter, FLAG_VF_MULTICAST_PROMISC);
- return 0;
- }
- if ((adapter->aq_required & IAVF_FLAG_AQ_RELEASE_PROMISC) ||
- (adapter->aq_required & IAVF_FLAG_AQ_RELEASE_ALLMULTI)) {
- iavf_set_promiscuous(adapter, 0);
+ if (adapter->aq_required & IAVF_FLAG_AQ_CONFIGURE_PROMISC_MODE) {
+ iavf_set_promiscuous(adapter);
return 0;
}
@@ -2366,11 +2363,6 @@ static void iavf_startup(struct iavf_adapter *adapter)
/* driver loaded, probe complete */
adapter->flags &= ~IAVF_FLAG_PF_COMMS_FAILED;
adapter->flags &= ~IAVF_FLAG_RESET_PENDING;
- status = iavf_set_mac_type(hw);
- if (status) {
- dev_err(&pdev->dev, "Failed to set MAC type (%d)\n", status);
- goto err;
- }
ret = iavf_check_reset_complete(hw);
if (ret) {
@@ -2711,12 +2703,6 @@ static void iavf_init_config_adapter(struct iavf_adapter *adapter)
adapter->link_up = false;
netif_tx_stop_all_queues(netdev);
- if (CLIENT_ALLOWED(adapter)) {
- err = iavf_lan_add_device(adapter);
- if (err)
- dev_info(&pdev->dev, "Failed to add VF to client API service list: %d\n",
- err);
- }
dev_info(&pdev->dev, "MAC address: %pM\n", adapter->hw.mac.addr);
if (netdev->features & NETIF_F_GRO)
dev_info(&pdev->dev, "GRO is enabled\n");
@@ -2913,7 +2899,6 @@ static void iavf_watchdog_task(struct work_struct *work)
return;
}
- schedule_delayed_work(&adapter->client_task, msecs_to_jiffies(5));
mutex_unlock(&adapter->crit_lock);
restart_watchdog:
if (adapter->state >= __IAVF_DOWN)
@@ -2982,9 +2967,7 @@ static void iavf_disable_vf(struct iavf_adapter *adapter)
spin_unlock_bh(&adapter->cloud_filter_list_lock);
iavf_free_misc_irq(adapter);
- iavf_reset_interrupt_capability(adapter);
- iavf_free_q_vectors(adapter);
- iavf_free_queues(adapter);
+ iavf_free_interrupt_scheme(adapter);
memset(adapter->vf_res, 0, IAVF_VIRTCHNL_VF_RESOURCE_SIZE);
iavf_shutdown_adminq(&adapter->hw);
adapter->flags &= ~IAVF_FLAG_RESET_PENDING;
@@ -3026,16 +3009,6 @@ static void iavf_reset_task(struct work_struct *work)
return;
}
- while (!mutex_trylock(&adapter->client_lock))
- usleep_range(500, 1000);
- if (CLIENT_ENABLED(adapter)) {
- adapter->flags &= ~(IAVF_FLAG_CLIENT_NEEDS_OPEN |
- IAVF_FLAG_CLIENT_NEEDS_CLOSE |
- IAVF_FLAG_CLIENT_NEEDS_L2_PARAMS |
- IAVF_FLAG_SERVICE_CLIENT_REQUESTED);
- cancel_delayed_work_sync(&adapter->client_task);
- iavf_notify_client_close(&adapter->vsi, true);
- }
iavf_misc_irq_disable(adapter);
if (adapter->flags & IAVF_FLAG_RESET_NEEDED) {
adapter->flags &= ~IAVF_FLAG_RESET_NEEDED;
@@ -3079,7 +3052,6 @@ static void iavf_reset_task(struct work_struct *work)
dev_err(&adapter->pdev->dev, "Reset never finished (%x)\n",
reg_val);
iavf_disable_vf(adapter);
- mutex_unlock(&adapter->client_lock);
mutex_unlock(&adapter->crit_lock);
return; /* Do not attempt to reinit. It's dead, Jim. */
}
@@ -3218,7 +3190,6 @@ continue_reset:
adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
wake_up(&adapter->reset_waitqueue);
- mutex_unlock(&adapter->client_lock);
mutex_unlock(&adapter->crit_lock);
return;
@@ -3229,7 +3200,6 @@ reset_err:
}
iavf_disable_vf(adapter);
- mutex_unlock(&adapter->client_lock);
mutex_unlock(&adapter->crit_lock);
dev_err(&adapter->pdev->dev, "failed to allocate resources during reinit\n");
}
@@ -3329,48 +3299,6 @@ out:
}
/**
- * iavf_client_task - worker thread to perform client work
- * @work: pointer to work_struct containing our data
- *
- * This task handles client interactions. Because client calls can be
- * reentrant, we can't handle them in the watchdog.
- **/
-static void iavf_client_task(struct work_struct *work)
-{
- struct iavf_adapter *adapter =
- container_of(work, struct iavf_adapter, client_task.work);
-
- /* If we can't get the client bit, just give up. We'll be rescheduled
- * later.
- */
-
- if (!mutex_trylock(&adapter->client_lock))
- return;
-
- if (adapter->flags & IAVF_FLAG_SERVICE_CLIENT_REQUESTED) {
- iavf_client_subtask(adapter);
- adapter->flags &= ~IAVF_FLAG_SERVICE_CLIENT_REQUESTED;
- goto out;
- }
- if (adapter->flags & IAVF_FLAG_CLIENT_NEEDS_L2_PARAMS) {
- iavf_notify_client_l2_params(&adapter->vsi);
- adapter->flags &= ~IAVF_FLAG_CLIENT_NEEDS_L2_PARAMS;
- goto out;
- }
- if (adapter->flags & IAVF_FLAG_CLIENT_NEEDS_CLOSE) {
- iavf_notify_client_close(&adapter->vsi, false);
- adapter->flags &= ~IAVF_FLAG_CLIENT_NEEDS_CLOSE;
- goto out;
- }
- if (adapter->flags & IAVF_FLAG_CLIENT_NEEDS_OPEN) {
- iavf_notify_client_open(&adapter->vsi);
- adapter->flags &= ~IAVF_FLAG_CLIENT_NEEDS_OPEN;
- }
-out:
- mutex_unlock(&adapter->client_lock);
-}
-
-/**
* iavf_free_all_tx_resources - Free Tx Resources for All Queues
* @adapter: board private structure
*
@@ -4302,8 +4230,6 @@ static int iavf_close(struct net_device *netdev)
}
set_bit(__IAVF_VSI_DOWN, adapter->vsi.state);
- if (CLIENT_ENABLED(adapter))
- adapter->flags |= IAVF_FLAG_CLIENT_NEEDS_CLOSE;
/* We cannot send IAVF_FLAG_AQ_GET_OFFLOAD_VLAN_V2_CAPS before
* IAVF_FLAG_AQ_DISABLE_QUEUES because in such case there is rtnl
* deadlock with adminq_task() until iavf_close timeouts. We must send
@@ -4372,10 +4298,6 @@ static int iavf_change_mtu(struct net_device *netdev, int new_mtu)
netdev_dbg(netdev, "changing MTU from %d to %d\n",
netdev->mtu, new_mtu);
netdev->mtu = new_mtu;
- if (CLIENT_ENABLED(adapter)) {
- iavf_notify_client_l2_params(&adapter->vsi);
- adapter->flags |= IAVF_FLAG_SERVICE_CLIENT_REQUESTED;
- }
if (netif_running(netdev)) {
iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
@@ -4410,6 +4332,9 @@ static int iavf_set_features(struct net_device *netdev,
(features & NETIF_VLAN_OFFLOAD_FEATURES))
iavf_set_vlan_offload_features(adapter, netdev->features,
features);
+ if (CRC_OFFLOAD_ALLOWED(adapter) &&
+ ((netdev->features & NETIF_F_RXFCS) ^ (features & NETIF_F_RXFCS)))
+ iavf_schedule_reset(adapter, IAVF_FLAG_RESET_NEEDED);
return 0;
}
@@ -4531,6 +4456,9 @@ iavf_get_netdev_vlan_hw_features(struct iavf_adapter *adapter)
}
}
+ if (CRC_OFFLOAD_ALLOWED(adapter))
+ hw_features |= NETIF_F_RXFCS;
+
return hw_features;
}
@@ -4695,6 +4623,55 @@ iavf_fix_netdev_vlan_features(struct iavf_adapter *adapter,
}
/**
+ * iavf_fix_strip_features - fix NETDEV CRC and VLAN strip features
+ * @adapter: board private structure
+ * @requested_features: stack requested NETDEV features
+ *
+ * Returns fixed-up features bits
+ **/
+static netdev_features_t
+iavf_fix_strip_features(struct iavf_adapter *adapter,
+ netdev_features_t requested_features)
+{
+ struct net_device *netdev = adapter->netdev;
+ bool crc_offload_req, is_vlan_strip;
+ netdev_features_t vlan_strip;
+ int num_non_zero_vlan;
+
+ crc_offload_req = CRC_OFFLOAD_ALLOWED(adapter) &&
+ (requested_features & NETIF_F_RXFCS);
+ num_non_zero_vlan = iavf_get_num_vlans_added(adapter);
+ vlan_strip = (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_STAG_RX);
+ is_vlan_strip = requested_features & vlan_strip;
+
+ if (!crc_offload_req)
+ return requested_features;
+
+ if (!num_non_zero_vlan && (netdev->features & vlan_strip) &&
+ !(netdev->features & NETIF_F_RXFCS) && is_vlan_strip) {
+ requested_features &= ~vlan_strip;
+ netdev_info(netdev, "Disabling VLAN stripping as FCS/CRC stripping is also disabled and there is no VLAN configured\n");
+ return requested_features;
+ }
+
+ if ((netdev->features & NETIF_F_RXFCS) && is_vlan_strip) {
+ requested_features &= ~vlan_strip;
+ if (!(netdev->features & vlan_strip))
+ netdev_info(netdev, "To enable VLAN stripping, first need to enable FCS/CRC stripping");
+
+ return requested_features;
+ }
+
+ if (num_non_zero_vlan && is_vlan_strip &&
+ !(netdev->features & NETIF_F_RXFCS)) {
+ requested_features &= ~NETIF_F_RXFCS;
+ netdev_info(netdev, "To disable FCS/CRC stripping, first need to disable VLAN stripping");
+ }
+
+ return requested_features;
+}
+
+/**
* iavf_fix_features - fix up the netdev feature bits
* @netdev: our net device
* @features: desired feature bits
@@ -4706,7 +4683,9 @@ static netdev_features_t iavf_fix_features(struct net_device *netdev,
{
struct iavf_adapter *adapter = netdev_priv(netdev);
- return iavf_fix_netdev_vlan_features(adapter, features);
+ features = iavf_fix_netdev_vlan_features(adapter, features);
+
+ return iavf_fix_strip_features(adapter, features);
}
static const struct net_device_ops iavf_netdev_ops = {
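iavf_fix_features() now runs two fix-up passes back to back: the VLAN pass first, then the new strip/CRC pass, each taking the requested bits and returning an adjusted set. A standalone sketch of that chaining pattern, with made-up feature bits and one example constraint:

#include <stdio.h>

typedef unsigned long long features_t;

#define F_RXFCS      (1ULL << 0)        /* illustrative bits */
#define F_VLAN_STRIP (1ULL << 1)

static features_t fix_vlan(features_t req)
{
        return req;     /* VLAN-specific adjustments would go here */
}

static features_t fix_strip(features_t req)
{
        /* Example constraint: keeping the CRC implies no VLAN stripping */
        if (req & F_RXFCS)
                req &= ~F_VLAN_STRIP;
        return req;
}

int main(void)
{
        features_t req = F_RXFCS | F_VLAN_STRIP;

        req = fix_vlan(req);
        req = fix_strip(req);
        printf("%#llx\n", req); /* F_VLAN_STRIP cleared */
        return 0;
}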
@@ -4743,7 +4722,7 @@ static int iavf_check_reset_complete(struct iavf_hw *hw)
if ((rstat == VIRTCHNL_VFR_VFACTIVE) ||
(rstat == VIRTCHNL_VFR_COMPLETED))
return 0;
- usleep_range(10, 20);
+ msleep(IAVF_RESET_WAIT_MS);
}
return -EBUSY;
}
@@ -4962,7 +4941,6 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
* and destroy them only once in remove
*/
mutex_init(&adapter->crit_lock);
- mutex_init(&adapter->client_lock);
mutex_init(&hw->aq.asq_mutex);
mutex_init(&hw->aq.arq_mutex);
@@ -4970,6 +4948,7 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
spin_lock_init(&adapter->cloud_filter_list_lock);
spin_lock_init(&adapter->fdir_fltr_lock);
spin_lock_init(&adapter->adv_rss_lock);
+ spin_lock_init(&adapter->current_netdev_promisc_flags_lock);
INIT_LIST_HEAD(&adapter->mac_filter_list);
INIT_LIST_HEAD(&adapter->vlan_filter_list);
@@ -4981,7 +4960,6 @@ static int iavf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
INIT_WORK(&adapter->adminq_task, iavf_adminq_task);
INIT_WORK(&adapter->finish_config, iavf_finish_config);
INIT_DELAYED_WORK(&adapter->watchdog_task, iavf_watchdog_task);
- INIT_DELAYED_WORK(&adapter->client_task, iavf_client_task);
/* Setup the wait queue for indicating transition to down status */
init_waitqueue_head(&adapter->down_waitqueue);
@@ -5022,8 +5000,7 @@ static int __maybe_unused iavf_suspend(struct device *dev_d)
netif_device_detach(netdev);
- while (!mutex_trylock(&adapter->crit_lock))
- usleep_range(500, 1000);
+ mutex_lock(&adapter->crit_lock);
if (netif_running(netdev)) {
rtnl_lock();
@@ -5094,7 +5071,6 @@ static void iavf_remove(struct pci_dev *pdev)
struct iavf_mac_filter *f, *ftmp;
struct net_device *netdev;
struct iavf_hw *hw;
- int err;
netdev = adapter->netdev;
hw = &adapter->hw;
@@ -5125,19 +5101,8 @@ static void iavf_remove(struct pci_dev *pdev)
cancel_delayed_work_sync(&adapter->watchdog_task);
cancel_work_sync(&adapter->finish_config);
- rtnl_lock();
- if (adapter->netdev_registered) {
- unregister_netdevice(netdev);
- adapter->netdev_registered = false;
- }
- rtnl_unlock();
-
- if (CLIENT_ALLOWED(adapter)) {
- err = iavf_lan_del_device(adapter);
- if (err)
- dev_warn(&pdev->dev, "Failed to delete client device: %d\n",
- err);
- }
+ if (netdev->reg_state == NETREG_REGISTERED)
+ unregister_netdev(netdev);
mutex_lock(&adapter->crit_lock);
dev_info(&adapter->pdev->dev, "Removing device\n");
@@ -5156,7 +5121,6 @@ static void iavf_remove(struct pci_dev *pdev)
cancel_work_sync(&adapter->reset_task);
cancel_delayed_work_sync(&adapter->watchdog_task);
cancel_work_sync(&adapter->adminq_task);
- cancel_delayed_work_sync(&adapter->client_task);
adapter->aq_required = 0;
adapter->flags &= ~IAVF_FLAG_REINIT_ITR_NEEDED;
@@ -5164,9 +5128,7 @@ static void iavf_remove(struct pci_dev *pdev)
iavf_free_all_tx_resources(adapter);
iavf_free_all_rx_resources(adapter);
iavf_free_misc_irq(adapter);
-
- iavf_reset_interrupt_capability(adapter);
- iavf_free_q_vectors(adapter);
+ iavf_free_interrupt_scheme(adapter);
iavf_free_rss(adapter);
@@ -5176,13 +5138,11 @@ static void iavf_remove(struct pci_dev *pdev)
/* destroy the locks only once, here */
mutex_destroy(&hw->aq.arq_mutex);
mutex_destroy(&hw->aq.asq_mutex);
- mutex_destroy(&adapter->client_lock);
mutex_unlock(&adapter->crit_lock);
mutex_destroy(&adapter->crit_lock);
iounmap(hw->hw_addr);
pci_release_regions(pdev);
- iavf_free_queues(adapter);
kfree(adapter->vf_res);
spin_lock_bh(&adapter->mac_vlan_list_lock);
/* If we got removed before an up/down sequence, we've got a filter
diff --git a/drivers/net/ethernet/intel/iavf/iavf_prototype.h b/drivers/net/ethernet/intel/iavf/iavf_prototype.h
index 940cb4203fbe..4a48e6171405 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_prototype.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_prototype.h
@@ -45,8 +45,6 @@ enum iavf_status iavf_aq_set_rss_lut(struct iavf_hw *hw, u16 seid,
enum iavf_status iavf_aq_set_rss_key(struct iavf_hw *hw, u16 seid,
struct iavf_aqc_get_set_rss_key_data *key);
-enum iavf_status iavf_set_mac_type(struct iavf_hw *hw);
-
extern struct iavf_rx_ptype_decoded iavf_ptype_lookup[];
static inline struct iavf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index 8c5f6096b002..d64c4997136b 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -7,8 +7,8 @@
#include "iavf_trace.h"
#include "iavf_prototype.h"
-static inline __le64 build_ctob(u32 td_cmd, u32 td_offset, unsigned int size,
- u32 td_tag)
+static __le64 build_ctob(u32 td_cmd, u32 td_offset, unsigned int size,
+ u32 td_tag)
{
return cpu_to_le64(IAVF_TX_DESC_DTYPE_DATA |
((u64)td_cmd << IAVF_TXD_QW1_CMD_SHIFT) |
@@ -370,8 +370,8 @@ static void iavf_enable_wb_on_itr(struct iavf_vsi *vsi,
q_vector->arm_wb_state = true;
}
-static inline bool iavf_container_is_rx(struct iavf_q_vector *q_vector,
- struct iavf_ring_container *rc)
+static bool iavf_container_is_rx(struct iavf_q_vector *q_vector,
+ struct iavf_ring_container *rc)
{
return &q_vector->rx == rc;
}
@@ -806,7 +806,7 @@ err:
* @rx_ring: ring to bump
* @val: new head index
**/
-static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
+static void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
{
rx_ring->next_to_use = val;
@@ -828,7 +828,7 @@ static inline void iavf_release_rx_desc(struct iavf_ring *rx_ring, u32 val)
*
* Returns the offset value for ring into the data buffer.
*/
-static inline unsigned int iavf_rx_offset(struct iavf_ring *rx_ring)
+static unsigned int iavf_rx_offset(struct iavf_ring *rx_ring)
{
return ring_uses_build_skb(rx_ring) ? IAVF_SKB_PAD : 0;
}
@@ -977,9 +977,9 @@ no_buffers:
* @skb: skb currently being received and modified
* @rx_desc: the receive descriptor
**/
-static inline void iavf_rx_checksum(struct iavf_vsi *vsi,
- struct sk_buff *skb,
- union iavf_rx_desc *rx_desc)
+static void iavf_rx_checksum(struct iavf_vsi *vsi,
+ struct sk_buff *skb,
+ union iavf_rx_desc *rx_desc)
{
struct iavf_rx_ptype_decoded decoded;
u32 rx_error, rx_status;
@@ -1061,7 +1061,7 @@ checksum_fail:
*
* Returns a hash type to be used by skb_set_hash
**/
-static inline int iavf_ptype_to_htype(u8 ptype)
+static int iavf_ptype_to_htype(u8 ptype)
{
struct iavf_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype);
@@ -1085,10 +1085,10 @@ static inline int iavf_ptype_to_htype(u8 ptype)
* @skb: skb currently being received and modified
* @rx_ptype: Rx packet type
**/
-static inline void iavf_rx_hash(struct iavf_ring *ring,
- union iavf_rx_desc *rx_desc,
- struct sk_buff *skb,
- u8 rx_ptype)
+static void iavf_rx_hash(struct iavf_ring *ring,
+ union iavf_rx_desc *rx_desc,
+ struct sk_buff *skb,
+ u8 rx_ptype)
{
u32 hash;
const __le64 rss_mask =
@@ -1115,10 +1115,10 @@ static inline void iavf_rx_hash(struct iavf_ring *ring,
* order to populate the hash, checksum, VLAN, protocol, and
* other fields within the skb.
**/
-static inline
-void iavf_process_skb_fields(struct iavf_ring *rx_ring,
- union iavf_rx_desc *rx_desc, struct sk_buff *skb,
- u8 rx_ptype)
+static void
+iavf_process_skb_fields(struct iavf_ring *rx_ring,
+ union iavf_rx_desc *rx_desc, struct sk_buff *skb,
+ u8 rx_ptype)
{
iavf_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
@@ -1662,8 +1662,8 @@ static inline u32 iavf_buildreg_itr(const int type, u16 itr)
* @q_vector: q_vector for which itr is being updated and interrupt enabled
*
**/
-static inline void iavf_update_enable_itr(struct iavf_vsi *vsi,
- struct iavf_q_vector *q_vector)
+static void iavf_update_enable_itr(struct iavf_vsi *vsi,
+ struct iavf_q_vector *q_vector)
{
struct iavf_hw *hw = &vsi->back->hw;
u32 intval;
@@ -2275,9 +2275,9 @@ int __iavf_maybe_stop_tx(struct iavf_ring *tx_ring, int size)
* @td_cmd: the command field in the descriptor
* @td_offset: offset for checksum or crc
**/
-static inline void iavf_tx_map(struct iavf_ring *tx_ring, struct sk_buff *skb,
- struct iavf_tx_buffer *first, u32 tx_flags,
- const u8 hdr_len, u32 td_cmd, u32 td_offset)
+static void iavf_tx_map(struct iavf_ring *tx_ring, struct sk_buff *skb,
+ struct iavf_tx_buffer *first, u32 tx_flags,
+ const u8 hdr_len, u32 td_cmd, u32 td_offset)
{
unsigned int data_len = skb->data_len;
unsigned int size = skb_headlen(skb);
diff --git a/drivers/net/ethernet/intel/iavf/iavf_type.h b/drivers/net/ethernet/intel/iavf/iavf_type.h
index 9f1f523807c4..2b6a207fa441 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_type.h
+++ b/drivers/net/ethernet/intel/iavf/iavf_type.h
@@ -69,15 +69,6 @@ enum iavf_debug_mask {
* the Firmware and AdminQ are intended to insulate the driver from most of the
* future changes, but these structures will also do part of the job.
*/
-enum iavf_mac_type {
- IAVF_MAC_UNKNOWN = 0,
- IAVF_MAC_XL710,
- IAVF_MAC_VF,
- IAVF_MAC_X722,
- IAVF_MAC_X722_VF,
- IAVF_MAC_GENERIC,
-};
-
enum iavf_vsi_type {
IAVF_VSI_MAIN = 0,
IAVF_VSI_VMDQ1 = 1,
@@ -110,11 +101,8 @@ struct iavf_hw_capabilities {
};
struct iavf_mac_info {
- enum iavf_mac_type type;
u8 addr[ETH_ALEN];
u8 perm_addr[ETH_ALEN];
- u8 san_addr[ETH_ALEN];
- u16 max_fcoeq;
};
/* PCI bus types */
diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index f9727e9c3d63..64c4443dbef9 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -3,7 +3,6 @@
#include "iavf.h"
#include "iavf_prototype.h"
-#include "iavf_client.h"
/**
* iavf_send_pf_msg
@@ -142,6 +141,7 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter)
VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 |
VIRTCHNL_VF_OFFLOAD_ENCAP |
VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
+ VIRTCHNL_VF_OFFLOAD_CRC |
VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM |
VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
VIRTCHNL_VF_OFFLOAD_ADQ |
@@ -312,6 +312,9 @@ void iavf_configure_queues(struct iavf_adapter *adapter)
vqpi->rxq.databuffer_size =
ALIGN(adapter->rx_rings[i].rx_buf_len,
BIT_ULL(IAVF_RXQ_CTX_DBUFF_SHIFT));
+ if (CRC_OFFLOAD_ALLOWED(adapter))
+ vqpi->rxq.crc_disable = !!(adapter->netdev->features &
+ NETIF_F_RXFCS);
vqpi++;
}
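The crc_disable field is a single byte, so the requested NETIF_F_RXFCS bit (which can sit high in the feature mask) is collapsed to 0/1 with the double-negation idiom before assignment. A standalone sketch of why that matters, with an illustrative bit position:

#include <stdint.h>
#include <stdio.h>

#define F_RXFCS (1ULL << 40)    /* illustrative high feature bit */

int main(void)
{
        uint64_t features = F_RXFCS;
        uint8_t crc_disable = !!(features & F_RXFCS);   /* 1, not a truncated 0 */

        printf("%d\n", crc_disable);
        return 0;
}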
@@ -936,14 +939,14 @@ void iavf_del_vlans(struct iavf_adapter *adapter)
/**
* iavf_set_promiscuous
* @adapter: adapter structure
- * @flags: bitmask to control unicast/multicast promiscuous.
*
* Request that the PF enable promiscuous mode for our VSI.
**/
-void iavf_set_promiscuous(struct iavf_adapter *adapter, int flags)
+void iavf_set_promiscuous(struct iavf_adapter *adapter)
{
+ struct net_device *netdev = adapter->netdev;
struct virtchnl_promisc_info vpi;
- int promisc_all;
+ unsigned int flags;
if (adapter->current_op != VIRTCHNL_OP_UNKNOWN) {
/* bail because we already have a command pending */
@@ -952,36 +955,57 @@ void iavf_set_promiscuous(struct iavf_adapter *adapter, int flags)
return;
}
- promisc_all = FLAG_VF_UNICAST_PROMISC |
- FLAG_VF_MULTICAST_PROMISC;
- if ((flags & promisc_all) == promisc_all) {
- adapter->flags |= IAVF_FLAG_PROMISC_ON;
- adapter->aq_required &= ~IAVF_FLAG_AQ_REQUEST_PROMISC;
- dev_info(&adapter->pdev->dev, "Entering promiscuous mode\n");
- }
+ /* prevent changes to promiscuous flags */
+ spin_lock_bh(&adapter->current_netdev_promisc_flags_lock);
- if (flags & FLAG_VF_MULTICAST_PROMISC) {
- adapter->flags |= IAVF_FLAG_ALLMULTI_ON;
- adapter->aq_required &= ~IAVF_FLAG_AQ_REQUEST_ALLMULTI;
- dev_info(&adapter->pdev->dev, "%s is entering multicast promiscuous mode\n",
- adapter->netdev->name);
+ /* sanity check to prevent duplicate AQ calls */
+ if (!iavf_promiscuous_mode_changed(adapter)) {
+ adapter->aq_required &= ~IAVF_FLAG_AQ_CONFIGURE_PROMISC_MODE;
+ dev_dbg(&adapter->pdev->dev, "No change in promiscuous mode\n");
+ /* allow changes to promiscuous flags */
+ spin_unlock_bh(&adapter->current_netdev_promisc_flags_lock);
+ return;
}
- if (!flags) {
- if (adapter->flags & IAVF_FLAG_PROMISC_ON) {
- adapter->flags &= ~IAVF_FLAG_PROMISC_ON;
- adapter->aq_required &= ~IAVF_FLAG_AQ_RELEASE_PROMISC;
- dev_info(&adapter->pdev->dev, "Leaving promiscuous mode\n");
- }
+ /* there are 2 bits, but only 3 states */
+ if (!(netdev->flags & IFF_PROMISC) &&
+ netdev->flags & IFF_ALLMULTI) {
+ /* State 1 - only multicast promiscuous mode enabled
+ * - !IFF_PROMISC && IFF_ALLMULTI
+ */
+ flags = FLAG_VF_MULTICAST_PROMISC;
+ adapter->current_netdev_promisc_flags |= IFF_ALLMULTI;
+ adapter->current_netdev_promisc_flags &= ~IFF_PROMISC;
+ dev_info(&adapter->pdev->dev, "Entering multicast promiscuous mode\n");
+ } else if (!(netdev->flags & IFF_PROMISC) &&
+ !(netdev->flags & IFF_ALLMULTI)) {
+ /* State 2 - unicast/multicast promiscuous mode disabled
+ * - !IFF_PROMISC && !IFF_ALLMULTI
+ */
+ flags = 0;
+ adapter->current_netdev_promisc_flags &=
+ ~(IFF_PROMISC | IFF_ALLMULTI);
+ dev_info(&adapter->pdev->dev, "Leaving promiscuous mode\n");
+ } else {
+ /* State 3 - unicast/multicast promiscuous mode enabled
+ * - IFF_PROMISC && IFF_ALLMULTI
+ * - IFF_PROMISC && !IFF_ALLMULTI
+ */
+ flags = FLAG_VF_UNICAST_PROMISC | FLAG_VF_MULTICAST_PROMISC;
+ adapter->current_netdev_promisc_flags |= IFF_PROMISC;
+ if (netdev->flags & IFF_ALLMULTI)
+ adapter->current_netdev_promisc_flags |= IFF_ALLMULTI;
+ else
+ adapter->current_netdev_promisc_flags &= ~IFF_ALLMULTI;
- if (adapter->flags & IAVF_FLAG_ALLMULTI_ON) {
- adapter->flags &= ~IAVF_FLAG_ALLMULTI_ON;
- adapter->aq_required &= ~IAVF_FLAG_AQ_RELEASE_ALLMULTI;
- dev_info(&adapter->pdev->dev, "%s is leaving multicast promiscuous mode\n",
- adapter->netdev->name);
- }
+ dev_info(&adapter->pdev->dev, "Entering promiscuous mode\n");
}
+ adapter->aq_required &= ~IAVF_FLAG_AQ_CONFIGURE_PROMISC_MODE;
+
+ /* allow changes to promiscuous flags */
+ spin_unlock_bh(&adapter->current_netdev_promisc_flags_lock);
+
adapter->current_op = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
vpi.vsi_id = adapter->vsi_res->vsi_id;
vpi.flags = flags;
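iavf_set_promiscuous() now derives the virtchnl request directly from netdev->flags: IFF_PROMISC wins and enables both unicast and multicast promiscuity, IFF_ALLMULTI alone enables multicast only, and neither clears both. A standalone sketch of that mapping with illustrative constants:

#include <stdio.h>

#define IFF_PROMISC            0x1      /* illustrative values */
#define IFF_ALLMULTI           0x2
#define VF_UNICAST_PROMISC     0x1
#define VF_MULTICAST_PROMISC   0x2

static unsigned int promisc_request(unsigned int netdev_flags)
{
        if (netdev_flags & IFF_PROMISC)         /* state 3: full promiscuous */
                return VF_UNICAST_PROMISC | VF_MULTICAST_PROMISC;
        if (netdev_flags & IFF_ALLMULTI)        /* state 1: multicast only */
                return VF_MULTICAST_PROMISC;
        return 0;                               /* state 2: promiscuous off */
}

int main(void)
{
        printf("%x %x %x\n", promisc_request(IFF_PROMISC),
               promisc_request(IFF_ALLMULTI), promisc_request(0));
        return 0;
}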
@@ -1353,8 +1377,6 @@ void iavf_disable_vlan_insertion_v2(struct iavf_adapter *adapter, u16 tpid)
VIRTCHNL_OP_DISABLE_VLAN_INSERTION_V2);
}
-#define IAVF_MAX_SPEED_STRLEN 13
-
/**
* iavf_print_link_message - print link up or down
* @adapter: adapter structure
@@ -1372,10 +1394,6 @@ static void iavf_print_link_message(struct iavf_adapter *adapter)
return;
}
- speed = kzalloc(IAVF_MAX_SPEED_STRLEN, GFP_KERNEL);
- if (!speed)
- return;
-
if (ADV_LINK_SUPPORT(adapter)) {
link_speed_mbps = adapter->link_speed_mbps;
goto print_link_msg;
@@ -1413,17 +1431,17 @@ static void iavf_print_link_message(struct iavf_adapter *adapter)
print_link_msg:
if (link_speed_mbps > SPEED_1000) {
- if (link_speed_mbps == SPEED_2500)
- snprintf(speed, IAVF_MAX_SPEED_STRLEN, "2.5 Gbps");
- else
+ if (link_speed_mbps == SPEED_2500) {
+ speed = kasprintf(GFP_KERNEL, "%s", "2.5 Gbps");
+ } else {
/* convert to Gbps inline */
- snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%d %s",
- link_speed_mbps / 1000, "Gbps");
+ speed = kasprintf(GFP_KERNEL, "%d Gbps",
+ link_speed_mbps / 1000);
+ }
} else if (link_speed_mbps == SPEED_UNKNOWN) {
- snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%s", "Unknown Mbps");
+ speed = kasprintf(GFP_KERNEL, "%s", "Unknown Mbps");
} else {
- snprintf(speed, IAVF_MAX_SPEED_STRLEN, "%d %s",
- link_speed_mbps, "Mbps");
+ speed = kasprintf(GFP_KERNEL, "%d Mbps", link_speed_mbps);
}
netdev_info(netdev, "NIC Link is Up Speed is %s Full Duplex\n", speed);
@@ -2290,19 +2308,6 @@ void iavf_virtchnl_completion(struct iavf_adapter *adapter,
if (v_opcode != adapter->current_op)
return;
break;
- case VIRTCHNL_OP_RDMA:
- /* Gobble zero-length replies from the PF. They indicate that
- * a previous message was received OK, and the client doesn't
- * care about that.
- */
- if (msglen && CLIENT_ENABLED(adapter))
- iavf_notify_client_message(&adapter->vsi, msg, msglen);
- break;
-
- case VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP:
- adapter->client_pending &=
- ~(BIT(VIRTCHNL_OP_CONFIG_RDMA_IRQ_MAP));
- break;
case VIRTCHNL_OP_GET_RSS_HENA_CAPS: {
struct virtchnl_rss_hena *vrh = (struct virtchnl_rss_hena *)msg;
diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile
index 960277d78e09..0679907980f7 100644
--- a/drivers/net/ethernet/intel/ice/Makefile
+++ b/drivers/net/ethernet/intel/ice/Makefile
@@ -43,7 +43,7 @@ ice-$(CONFIG_PCI_IOV) += \
ice_vf_mbx.o \
ice_vf_vsi_vlan_ops.o \
ice_vf_lib.o
-ice-$(CONFIG_PTP_1588_CLOCK) += ice_ptp.o ice_ptp_hw.o
+ice-$(CONFIG_PTP_1588_CLOCK) += ice_ptp.o ice_ptp_hw.o ice_dpll.o
ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o
ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o
ice-$(CONFIG_XDP_SOCKETS) += ice_xsk.o
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 5022b036ca4f..351e0d36df44 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -76,6 +76,7 @@
#include "ice_vsi_vlan_ops.h"
#include "ice_gnss.h"
#include "ice_irq.h"
+#include "ice_dpll.h"
#define ICE_BAR0 0
#define ICE_REQ_DESC_MULTIPLE 32
@@ -195,10 +196,13 @@
#define ice_pf_to_dev(pf) (&((pf)->pdev->dev))
+#define ice_pf_src_tmr_owned(pf) ((pf)->hw.func_caps.ts_func_info.src_tmr_owned)
+
enum ice_feature {
ICE_F_DSCP,
- ICE_F_PTP_EXTTS,
+ ICE_F_PHY_RCLK,
ICE_F_SMA_CTRL,
+ ICE_F_CGU,
ICE_F_GNSS,
ICE_F_ROCE_LAG,
ICE_F_SRIOV_LAG,
@@ -508,6 +512,7 @@ enum ice_pf_flags {
ICE_FLAG_UNPLUG_AUX_DEV,
ICE_FLAG_MTU_CHANGED,
ICE_FLAG_GNSS, /* GNSS successfully initialized */
+ ICE_FLAG_DPLL, /* SyncE/PTP dplls initialized */
ICE_PF_FLAGS_NBITS /* must be last */
};
@@ -549,6 +554,8 @@ struct ice_pf {
* MSIX vectors allowed on this PF.
*/
u16 sriov_base_vector;
+ unsigned long *sriov_irq_bm; /* bitmap to track irq usage */
+ u16 sriov_irq_size; /* size of the irq_bm bitmap */
u16 ctrl_vsi_idx; /* control VSI index in pf->vsi array */
@@ -640,6 +647,7 @@ struct ice_pf {
#define ICE_VF_AGG_NODE_ID_START 65
#define ICE_MAX_VF_AGG_NODES 32
struct ice_agg_node vf_agg_node[ICE_MAX_VF_AGG_NODES];
+ struct ice_dplls dplls;
};
extern struct workqueue_struct *ice_lag_wq;
@@ -668,6 +676,18 @@ static inline bool ice_vector_ch_enabled(struct ice_q_vector *qv)
}
/**
+ * ice_ptp_pf_handles_tx_interrupt - Check if PF handles Tx interrupt
+ * @pf: Board private structure
+ *
+ * Return true if this PF should respond to the Tx timestamp interrupt
+ * indication in the miscellaneous OICR interrupt handler.
+ */
+static inline bool ice_ptp_pf_handles_tx_interrupt(struct ice_pf *pf)
+{
+ return pf->ptp.tx_interrupt_mode != ICE_PTP_TX_INTERRUPT_NONE;
+}
+
+/**
* ice_irq_dynamic_ena - Enable default interrupt generation settings
* @hw: pointer to HW struct
* @vsi: pointer to VSI struct, can be NULL
@@ -942,6 +962,7 @@ int ice_stop(struct net_device *netdev);
void ice_service_task_schedule(struct ice_pf *pf);
int ice_load(struct ice_pf *pf);
void ice_unload(struct ice_pf *pf);
+void ice_adv_lnk_speed_maps_init(void);
/**
* ice_set_rdma_cap - enable RDMA support
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 29f7a9852aec..d7fdb7ba7268 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -1099,7 +1099,15 @@ struct ice_aqc_get_phy_caps {
#define ICE_PHY_TYPE_HIGH_100G_CAUI2 BIT_ULL(2)
#define ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC BIT_ULL(3)
#define ICE_PHY_TYPE_HIGH_100G_AUI2 BIT_ULL(4)
-#define ICE_PHY_TYPE_HIGH_MAX_INDEX 4
+#define ICE_PHY_TYPE_HIGH_200G_CR4_PAM4 BIT_ULL(5)
+#define ICE_PHY_TYPE_HIGH_200G_SR4 BIT_ULL(6)
+#define ICE_PHY_TYPE_HIGH_200G_FR4 BIT_ULL(7)
+#define ICE_PHY_TYPE_HIGH_200G_LR4 BIT_ULL(8)
+#define ICE_PHY_TYPE_HIGH_200G_DR4 BIT_ULL(9)
+#define ICE_PHY_TYPE_HIGH_200G_KR4_PAM4 BIT_ULL(10)
+#define ICE_PHY_TYPE_HIGH_200G_AUI4_AOC_ACC BIT_ULL(11)
+#define ICE_PHY_TYPE_HIGH_200G_AUI4 BIT_ULL(12)
+#define ICE_PHY_TYPE_HIGH_MAX_INDEX 12
struct ice_aqc_get_phy_caps_data {
__le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
@@ -1319,11 +1327,41 @@ struct ice_aqc_get_link_status_data {
#define ICE_AQ_LINK_SPEED_40GB BIT(8)
#define ICE_AQ_LINK_SPEED_50GB BIT(9)
#define ICE_AQ_LINK_SPEED_100GB BIT(10)
+#define ICE_AQ_LINK_SPEED_200GB BIT(11)
#define ICE_AQ_LINK_SPEED_UNKNOWN BIT(15)
- __le32 reserved3; /* Aligns next field to 8-byte boundary */
- __le64 phy_type_low; /* Use values from ICE_PHY_TYPE_LOW_* */
- __le64 phy_type_high; /* Use values from ICE_PHY_TYPE_HIGH_* */
-};
+ /* Aligns next field to 8-byte boundary */
+ __le16 reserved3;
+ u8 ext_fec_status;
+ /* RS 272 FEC enabled */
+#define ICE_AQ_LINK_RS_272_FEC_EN BIT(0)
+ u8 reserved4;
+ /* Use values from ICE_PHY_TYPE_LOW_* */
+ __le64 phy_type_low;
+ /* Use values from ICE_PHY_TYPE_HIGH_* */
+ __le64 phy_type_high;
+#define ICE_AQC_LS_DATA_SIZE_V1 \
+ offsetofend(struct ice_aqc_get_link_status_data, phy_type_high)
+ /* Get link status v2 link partner data */
+ __le64 lp_phy_type_low;
+ __le64 lp_phy_type_high;
+ u8 lp_fec_adv;
+#define ICE_AQ_LINK_LP_10G_KR_FEC_CAP BIT(0)
+#define ICE_AQ_LINK_LP_25G_KR_FEC_CAP BIT(1)
+#define ICE_AQ_LINK_LP_RS_528_FEC_CAP BIT(2)
+#define ICE_AQ_LINK_LP_50G_KR_272_FEC_CAP BIT(3)
+#define ICE_AQ_LINK_LP_100G_KR_272_FEC_CAP BIT(4)
+#define ICE_AQ_LINK_LP_200G_KR_272_FEC_CAP BIT(5)
+ u8 lp_fec_req;
+#define ICE_AQ_LINK_LP_10G_KR_FEC_REQ BIT(0)
+#define ICE_AQ_LINK_LP_25G_KR_FEC_REQ BIT(1)
+#define ICE_AQ_LINK_LP_RS_528_FEC_REQ BIT(2)
+#define ICE_AQ_LINK_LP_KR_272_FEC_REQ BIT(3)
+ u8 lp_flowcontrol;
+#define ICE_AQ_LINK_LP_PAUSE_ADV BIT(0)
+#define ICE_AQ_LINK_LP_ASM_DIR_ADV BIT(1)
+#define ICE_AQC_LS_DATA_SIZE_V2 \
+ offsetofend(struct ice_aqc_get_link_status_data, lp_flowcontrol)
+} __packed;
/* Set event mask command (direct 0x0613) */
struct ice_aqc_set_event_mask {
@@ -1351,6 +1389,30 @@ struct ice_aqc_set_mac_lb {
u8 reserved[15];
};
+/* Set PHY recovered clock output (direct 0x0630) */
+struct ice_aqc_set_phy_rec_clk_out {
+ u8 phy_output;
+ u8 port_num;
+#define ICE_AQC_SET_PHY_REC_CLK_OUT_CURR_PORT 0xFF
+ u8 flags;
+#define ICE_AQC_SET_PHY_REC_CLK_OUT_OUT_EN BIT(0)
+ u8 rsvd;
+ __le32 freq;
+ u8 rsvd2[6];
+ __le16 node_handle;
+};
+
+/* Get PHY recovered clock output (direct 0x0631) */
+struct ice_aqc_get_phy_rec_clk_out {
+ u8 phy_output;
+ u8 port_num;
+#define ICE_AQC_GET_PHY_REC_CLK_OUT_CURR_PORT 0xFF
+ u8 flags;
+#define ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN BIT(0)
+ u8 rsvd[11];
+ __le16 node_handle;
+};
+
struct ice_aqc_link_topo_params {
u8 lport_num;
u8 lport_num_valid;
@@ -1367,6 +1429,9 @@ struct ice_aqc_link_topo_params {
#define ICE_AQC_LINK_TOPO_NODE_TYPE_CAGE 6
#define ICE_AQC_LINK_TOPO_NODE_TYPE_MEZZ 7
#define ICE_AQC_LINK_TOPO_NODE_TYPE_ID_EEPROM 8
+#define ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL 9
+#define ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_MUX 10
+#define ICE_AQC_LINK_TOPO_NODE_TYPE_GPS 11
#define ICE_AQC_LINK_TOPO_NODE_CTX_S 4
#define ICE_AQC_LINK_TOPO_NODE_CTX_M \
(0xF << ICE_AQC_LINK_TOPO_NODE_CTX_S)
@@ -1403,8 +1468,13 @@ struct ice_aqc_link_topo_addr {
struct ice_aqc_get_link_topo {
struct ice_aqc_link_topo_addr addr;
u8 node_part_num;
-#define ICE_AQC_GET_LINK_TOPO_NODE_NR_PCA9575 0x21
-#define ICE_AQC_GET_LINK_TOPO_NODE_NR_C827 0x31
+#define ICE_AQC_GET_LINK_TOPO_NODE_NR_PCA9575 0x21
+#define ICE_AQC_GET_LINK_TOPO_NODE_NR_ZL30632_80032 0x24
+#define ICE_AQC_GET_LINK_TOPO_NODE_NR_SI5383_5384 0x25
+#define ICE_AQC_GET_LINK_TOPO_NODE_NR_E822_PHY 0x30
+#define ICE_AQC_GET_LINK_TOPO_NODE_NR_C827 0x31
+#define ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_CLK_MUX 0x47
+#define ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_GPS 0x48
u8 rsvd[9];
};
@@ -2125,6 +2195,193 @@ struct ice_aqc_get_pkg_info_resp {
struct ice_aqc_get_pkg_info pkg_info[];
};
+/* Get CGU abilities command response data structure (indirect 0x0C61) */
+struct ice_aqc_get_cgu_abilities {
+ u8 num_inputs;
+ u8 num_outputs;
+ u8 pps_dpll_idx;
+ u8 eec_dpll_idx;
+ __le32 max_in_freq;
+ __le32 max_in_phase_adj;
+ __le32 max_out_freq;
+ __le32 max_out_phase_adj;
+ u8 cgu_part_num;
+ u8 rsvd[3];
+};
+
+/* Set CGU input config (direct 0x0C62) */
+struct ice_aqc_set_cgu_input_config {
+ u8 input_idx;
+ u8 flags1;
+#define ICE_AQC_SET_CGU_IN_CFG_FLG1_UPDATE_FREQ BIT(6)
+#define ICE_AQC_SET_CGU_IN_CFG_FLG1_UPDATE_DELAY BIT(7)
+ u8 flags2;
+#define ICE_AQC_SET_CGU_IN_CFG_FLG2_INPUT_EN BIT(5)
+#define ICE_AQC_SET_CGU_IN_CFG_FLG2_ESYNC_EN BIT(6)
+ u8 rsvd;
+ __le32 freq;
+ __le32 phase_delay;
+ u8 rsvd2[2];
+ __le16 node_handle;
+};
+
+/* Get CGU input config response descriptor structure (direct 0x0C63) */
+struct ice_aqc_get_cgu_input_config {
+ u8 input_idx;
+ u8 status;
+#define ICE_AQC_GET_CGU_IN_CFG_STATUS_LOS BIT(0)
+#define ICE_AQC_GET_CGU_IN_CFG_STATUS_SCM_FAIL BIT(1)
+#define ICE_AQC_GET_CGU_IN_CFG_STATUS_CFM_FAIL BIT(2)
+#define ICE_AQC_GET_CGU_IN_CFG_STATUS_GST_FAIL BIT(3)
+#define ICE_AQC_GET_CGU_IN_CFG_STATUS_PFM_FAIL BIT(4)
+#define ICE_AQC_GET_CGU_IN_CFG_STATUS_ESYNC_FAIL BIT(6)
+#define ICE_AQC_GET_CGU_IN_CFG_STATUS_ESYNC_CAP BIT(7)
+ u8 type;
+#define ICE_AQC_GET_CGU_IN_CFG_TYPE_READ_ONLY BIT(0)
+#define ICE_AQC_GET_CGU_IN_CFG_TYPE_GPS BIT(4)
+#define ICE_AQC_GET_CGU_IN_CFG_TYPE_EXTERNAL BIT(5)
+#define ICE_AQC_GET_CGU_IN_CFG_TYPE_PHY BIT(6)
+ u8 flags1;
+#define ICE_AQC_GET_CGU_IN_CFG_FLG1_PHASE_DELAY_SUPP BIT(0)
+#define ICE_AQC_GET_CGU_IN_CFG_FLG1_1PPS_SUPP BIT(2)
+#define ICE_AQC_GET_CGU_IN_CFG_FLG1_10MHZ_SUPP BIT(3)
+#define ICE_AQC_GET_CGU_IN_CFG_FLG1_ANYFREQ BIT(7)
+ __le32 freq;
+ __le32 phase_delay;
+ u8 flags2;
+#define ICE_AQC_GET_CGU_IN_CFG_FLG2_INPUT_EN BIT(5)
+#define ICE_AQC_GET_CGU_IN_CFG_FLG2_ESYNC_EN BIT(6)
+ u8 rsvd[1];
+ __le16 node_handle;
+};
+
+/* Set CGU output config (direct 0x0C64) */
+struct ice_aqc_set_cgu_output_config {
+ u8 output_idx;
+ u8 flags;
+#define ICE_AQC_SET_CGU_OUT_CFG_OUT_EN BIT(0)
+#define ICE_AQC_SET_CGU_OUT_CFG_ESYNC_EN BIT(1)
+#define ICE_AQC_SET_CGU_OUT_CFG_UPDATE_FREQ BIT(2)
+#define ICE_AQC_SET_CGU_OUT_CFG_UPDATE_PHASE BIT(3)
+#define ICE_AQC_SET_CGU_OUT_CFG_UPDATE_SRC_SEL BIT(4)
+ u8 src_sel;
+#define ICE_AQC_SET_CGU_OUT_CFG_DPLL_SRC_SEL ICE_M(0x1F, 0)
+ u8 rsvd;
+ __le32 freq;
+ __le32 phase_delay;
+ u8 rsvd2[2];
+ __le16 node_handle;
+};
+
+/* Get CGU output config (direct 0x0C65) */
+struct ice_aqc_get_cgu_output_config {
+ u8 output_idx;
+ u8 flags;
+#define ICE_AQC_GET_CGU_OUT_CFG_OUT_EN BIT(0)
+#define ICE_AQC_GET_CGU_OUT_CFG_ESYNC_EN BIT(1)
+#define ICE_AQC_GET_CGU_OUT_CFG_ESYNC_ABILITY BIT(2)
+ u8 src_sel;
+#define ICE_AQC_GET_CGU_OUT_CFG_DPLL_SRC_SEL_SHIFT 0
+#define ICE_AQC_GET_CGU_OUT_CFG_DPLL_SRC_SEL \
+ ICE_M(0x1F, ICE_AQC_GET_CGU_OUT_CFG_DPLL_SRC_SEL_SHIFT)
+#define ICE_AQC_GET_CGU_OUT_CFG_DPLL_MODE_SHIFT 5
+#define ICE_AQC_GET_CGU_OUT_CFG_DPLL_MODE \
+ ICE_M(0x7, ICE_AQC_GET_CGU_OUT_CFG_DPLL_MODE_SHIFT)
+ u8 rsvd;
+ __le32 freq;
+ __le32 src_freq;
+ u8 rsvd2[2];
+ __le16 node_handle;
+};
+
+/* Get CGU DPLL status (direct 0x0C66) */
+struct ice_aqc_get_cgu_dpll_status {
+ u8 dpll_num;
+ u8 ref_state;
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_LOS BIT(0)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_SCM BIT(1)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_CFM BIT(2)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_GST BIT(3)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_PFM BIT(4)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_FAST_LOCK_EN BIT(5)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_REF_SW_ESYNC BIT(6)
+ u8 dpll_state;
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_LOCK BIT(0)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_HO BIT(1)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_HO_READY BIT(2)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_FLHIT BIT(5)
+#define ICE_AQC_GET_CGU_DPLL_STATUS_STATE_PSLHIT BIT(7)
+ u8 config;
+#define ICE_AQC_GET_CGU_DPLL_CONFIG_CLK_REF_SEL ICE_M(0x1F, 0)
+#define ICE_AQC_GET_CGU_DPLL_CONFIG_MODE_SHIFT 5
+#define ICE_AQC_GET_CGU_DPLL_CONFIG_MODE \
+ ICE_M(0x7, ICE_AQC_GET_CGU_DPLL_CONFIG_MODE_SHIFT)
+#define ICE_AQC_GET_CGU_DPLL_CONFIG_MODE_FREERUN 0
+#define ICE_AQC_GET_CGU_DPLL_CONFIG_MODE_AUTOMATIC \
+ ICE_M(0x3, ICE_AQC_GET_CGU_DPLL_CONFIG_MODE_SHIFT)
+ __le32 phase_offset_h;
+ __le32 phase_offset_l;
+ u8 eec_mode;
+#define ICE_AQC_GET_CGU_DPLL_STATUS_EEC_MODE_1 0xA
+#define ICE_AQC_GET_CGU_DPLL_STATUS_EEC_MODE_2 0xB
+#define ICE_AQC_GET_CGU_DPLL_STATUS_EEC_MODE_UNKNOWN 0xF
+ u8 rsvd[1];
+ __le16 node_handle;
+};
+
+/* Set CGU DPLL config (direct 0x0C67) */
+struct ice_aqc_set_cgu_dpll_config {
+ u8 dpll_num;
+ u8 ref_state;
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_REF_SW_LOS BIT(0)
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_REF_SW_SCM BIT(1)
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_REF_SW_CFM BIT(2)
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_REF_SW_GST BIT(3)
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_REF_SW_PFM BIT(4)
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_REF_FLOCK_EN BIT(5)
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_REF_SW_ESYNC BIT(6)
+ u8 rsvd;
+ u8 config;
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_CLK_REF_SEL ICE_M(0x1F, 0)
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_MODE_SHIFT 5
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_MODE \
+ ICE_M(0x7, ICE_AQC_SET_CGU_DPLL_CONFIG_MODE_SHIFT)
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_MODE_FREERUN 0
+#define ICE_AQC_SET_CGU_DPLL_CONFIG_MODE_AUTOMATIC \
+ ICE_M(0x3, ICE_AQC_SET_CGU_DPLL_CONFIG_MODE_SHIFT)
+ u8 rsvd2[8];
+ u8 eec_mode;
+ u8 rsvd3[1];
+ __le16 node_handle;
+};
+
+/* Set CGU reference priority (direct 0x0C68) */
+struct ice_aqc_set_cgu_ref_prio {
+ u8 dpll_num;
+ u8 ref_idx;
+ u8 ref_priority;
+ u8 rsvd[11];
+ __le16 node_handle;
+};
+
+/* Get CGU reference priority (direct 0x0C69) */
+struct ice_aqc_get_cgu_ref_prio {
+ u8 dpll_num;
+ u8 ref_idx;
+ u8 ref_priority; /* Valid only in response */
+ u8 rsvd[13];
+};
+
+/* Get CGU info (direct 0x0C6A) */
+struct ice_aqc_get_cgu_info {
+ __le32 cgu_id;
+ __le32 cgu_cfg_ver;
+ __le32 cgu_fw_ver;
+ u8 node_part_num;
+ u8 dev_rev;
+ __le16 node_handle;
+};
+
/* Driver Shared Parameters (direct, 0x0C90) */
struct ice_aqc_driver_shared_params {
u8 set_or_get_op;
@@ -2139,16 +2396,6 @@ struct ice_aqc_driver_shared_params {
__le32 addr_low;
};
-enum ice_aqc_driver_params {
- /* OS clock index for PTP timer Domain 0 */
- ICE_AQC_DRIVER_PARAM_CLK_IDX_TMR0 = 0,
- /* OS clock index for PTP timer Domain 1 */
- ICE_AQC_DRIVER_PARAM_CLK_IDX_TMR1,
-
- /* Add new parameters above */
- ICE_AQC_DRIVER_PARAM_MAX = 16,
-};
-
/* Lan Queue Overflow Event (direct, 0x1001) */
struct ice_aqc_event_lan_overflow {
__le32 prtdcb_ruptq;
@@ -2194,6 +2441,8 @@ struct ice_aq_desc {
struct ice_aqc_get_phy_caps get_phy;
struct ice_aqc_set_phy_cfg set_phy;
struct ice_aqc_restart_an restart_an;
+ struct ice_aqc_set_phy_rec_clk_out set_phy_rec_clk_out;
+ struct ice_aqc_get_phy_rec_clk_out get_phy_rec_clk_out;
struct ice_aqc_gpio read_write_gpio;
struct ice_aqc_sff_eeprom read_write_sff_param;
struct ice_aqc_set_port_id_led set_port_id_led;
@@ -2234,6 +2483,15 @@ struct ice_aq_desc {
struct ice_aqc_fw_logging fw_logging;
struct ice_aqc_get_clear_fw_log get_clear_fw_log;
struct ice_aqc_download_pkg download_pkg;
+ struct ice_aqc_set_cgu_input_config set_cgu_input_config;
+ struct ice_aqc_get_cgu_input_config get_cgu_input_config;
+ struct ice_aqc_set_cgu_output_config set_cgu_output_config;
+ struct ice_aqc_get_cgu_output_config get_cgu_output_config;
+ struct ice_aqc_get_cgu_dpll_status get_cgu_dpll_status;
+ struct ice_aqc_set_cgu_dpll_config set_cgu_dpll_config;
+ struct ice_aqc_set_cgu_ref_prio set_cgu_ref_prio;
+ struct ice_aqc_get_cgu_ref_prio get_cgu_ref_prio;
+ struct ice_aqc_get_cgu_info get_cgu_info;
struct ice_aqc_driver_shared_params drv_shared_params;
struct ice_aqc_set_mac_lb set_mac_lb;
struct ice_aqc_alloc_free_res_cmd sw_res_ctrl;
@@ -2358,6 +2616,8 @@ enum ice_adminq_opc {
ice_aqc_opc_get_link_status = 0x0607,
ice_aqc_opc_set_event_mask = 0x0613,
ice_aqc_opc_set_mac_lb = 0x0620,
+ ice_aqc_opc_set_phy_rec_clk_out = 0x0630,
+ ice_aqc_opc_get_phy_rec_clk_out = 0x0631,
ice_aqc_opc_get_link_topo = 0x06E0,
ice_aqc_opc_read_i2c = 0x06E2,
ice_aqc_opc_write_i2c = 0x06E3,
@@ -2413,6 +2673,18 @@ enum ice_adminq_opc {
ice_aqc_opc_update_pkg = 0x0C42,
ice_aqc_opc_get_pkg_info_list = 0x0C43,
+ /* 1588/SyncE commands/events */
+ ice_aqc_opc_get_cgu_abilities = 0x0C61,
+ ice_aqc_opc_set_cgu_input_config = 0x0C62,
+ ice_aqc_opc_get_cgu_input_config = 0x0C63,
+ ice_aqc_opc_set_cgu_output_config = 0x0C64,
+ ice_aqc_opc_get_cgu_output_config = 0x0C65,
+ ice_aqc_opc_get_cgu_dpll_status = 0x0C66,
+ ice_aqc_opc_set_cgu_dpll_config = 0x0C67,
+ ice_aqc_opc_set_cgu_ref_prio = 0x0C68,
+ ice_aqc_opc_get_cgu_ref_prio = 0x0C69,
+ ice_aqc_opc_get_cgu_info = 0x0C6A,
+
ice_aqc_opc_driver_shared_params = 0x0C90,
/* Standalone Commands/Events */
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 80deca45ab59..9a6c25f98632 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
-/* Copyright (c) 2018, Intel Corporation. */
+/* Copyright (c) 2018-2023, Intel Corporation. */
#include "ice_common.h"
#include "ice_sched.h"
@@ -8,6 +8,7 @@
#include "ice_ptp_hw.h"
#define ICE_PF_RESET_WAIT_COUNT 300
+#define ICE_MAX_NETLIST_SIZE 10
static const char * const ice_link_mode_str_low[] = {
[0] = "100BASE_TX",
@@ -153,6 +154,12 @@ static int ice_set_mac_type(struct ice_hw *hw)
case ICE_DEV_ID_E823L_SFP:
hw->mac_type = ICE_MAC_GENERIC;
break;
+ case ICE_DEV_ID_E830_BACKPLANE:
+ case ICE_DEV_ID_E830_QSFP56:
+ case ICE_DEV_ID_E830_SFP:
+ case ICE_DEV_ID_E830_SFP_DD:
+ hw->mac_type = ICE_MAC_E830;
+ break;
default:
hw->mac_type = ICE_MAC_UNKNOWN;
break;
@@ -436,6 +443,80 @@ ice_aq_get_link_topo_handle(struct ice_port_info *pi, u8 node_type,
}
/**
+ * ice_aq_get_netlist_node
+ * @hw: pointer to the hw struct
+ * @cmd: get_link_topo AQ structure
+ * @node_part_number: output node part number if node found
+ * @node_handle: output node handle parameter if node found
+ *
+ * Get netlist node handle.
+ */
+int
+ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd,
+ u8 *node_part_number, u16 *node_handle)
+{
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_topo);
+ desc.params.get_link_topo = *cmd;
+
+ if (ice_aq_send_cmd(hw, &desc, NULL, 0, NULL))
+ return -EINTR;
+
+ if (node_handle)
+ *node_handle =
+ le16_to_cpu(desc.params.get_link_topo.addr.handle);
+ if (node_part_number)
+ *node_part_number = desc.params.get_link_topo.node_part_num;
+
+ return 0;
+}
+
+/**
+ * ice_find_netlist_node
+ * @hw: pointer to the hw struct
+ * @node_type_ctx: type of netlist node to look for
+ * @node_part_number: node part number to look for
+ * @node_handle: output parameter if node found - optional
+ *
+ * Scan the netlist for a node handle of the given node type and part number.
+ *
+ * If node_handle is non-NULL it will be modified on function exit. It is only
+ * valid if the function returns zero, and should be ignored on any non-zero
+ * return value.
+ *
+ * Returns: 0 if the node is found, -ENOENT if no handle was found, and
+ * a negative error code on failure to access the AQ.
+ */
+static int ice_find_netlist_node(struct ice_hw *hw, u8 node_type_ctx,
+ u8 node_part_number, u16 *node_handle)
+{
+ u8 idx;
+
+ for (idx = 0; idx < ICE_MAX_NETLIST_SIZE; idx++) {
+ struct ice_aqc_get_link_topo cmd = {};
+ u8 rec_node_part_number;
+ int status;
+
+ cmd.addr.topo_params.node_type_ctx =
+ FIELD_PREP(ICE_AQC_LINK_TOPO_NODE_TYPE_M,
+ node_type_ctx);
+ cmd.addr.topo_params.index = idx;
+
+ status = ice_aq_get_netlist_node(hw, &cmd,
+ &rec_node_part_number,
+ node_handle);
+ if (status)
+ return status;
+
+ if (rec_node_part_number == node_part_number)
+ return 0;
+ }
+
+ return -ENOENT;
+}
+
+/**
* ice_is_media_cage_present
* @pi: port information structure
*
@@ -571,6 +652,24 @@ static enum ice_media_type ice_get_media_type(struct ice_port_info *pi)
}
/**
+ * ice_get_link_status_datalen
+ * @hw: pointer to the HW struct
+ *
+ * Returns the data length for the Get Link Status AQ command, which is larger
+ * for newer adapter families handled by the ice driver.
+ */
+static u16 ice_get_link_status_datalen(struct ice_hw *hw)
+{
+ switch (hw->mac_type) {
+ case ICE_MAC_E830:
+ return ICE_AQC_LS_DATA_SIZE_V2;
+ case ICE_MAC_E810:
+ default:
+ return ICE_AQC_LS_DATA_SIZE_V1;
+ }
+}
+
+/**
* ice_aq_get_link_info
* @pi: port information structure
* @ena_lse: enable/disable LinkStatusEvent reporting
@@ -608,8 +707,8 @@ ice_aq_get_link_info(struct ice_port_info *pi, bool ena_lse,
resp->cmd_flags = cpu_to_le16(cmd_flags);
resp->lport_num = pi->lport;
- status = ice_aq_send_cmd(hw, &desc, &link_data, sizeof(link_data), cd);
-
+ status = ice_aq_send_cmd(hw, &desc, &link_data,
+ ice_get_link_status_datalen(hw), cd);
if (status)
return status;
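The Get Link Status response now has two valid lengths, and ice_get_link_status_datalen() picks one by MAC type; the V1/V2 size macros are simply offsetofend() over the last member of each layout. A standalone sketch of that sizing trick over a simplified, illustrative layout:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define offsetofend(TYPE, MEMBER) \
        (offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

struct link_status {            /* simplified stand-in layout */
        uint64_t phy_type_low;
        uint64_t phy_type_high; /* end of the v1 layout */
        uint64_t lp_phy_type_low;
        uint8_t  lp_flowcontrol;        /* end of the v2 layout */
};

#define LS_DATA_SIZE_V1 offsetofend(struct link_status, phy_type_high)
#define LS_DATA_SIZE_V2 offsetofend(struct link_status, lp_flowcontrol)

int main(void)
{
        printf("v1=%zu v2=%zu\n", LS_DATA_SIZE_V1, LS_DATA_SIZE_V2);
        return 0;
}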
@@ -684,8 +783,7 @@ static void
ice_fill_tx_timer_and_fc_thresh(struct ice_hw *hw,
struct ice_aqc_set_mac_cfg *cmd)
{
- u16 fc_thres_val, tx_timer_val;
- u32 val;
+ u32 val, fc_thres_m;
/* We read back the transmit timer and FC threshold value of
* LFC. Thus, we will use index =
@@ -694,19 +792,32 @@ ice_fill_tx_timer_and_fc_thresh(struct ice_hw *hw,
* Also, because we are operating on transmit timer and FC
* threshold of LFC, we don't turn on any bit in tx_tmr_priority
*/
-#define IDX_OF_LFC PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX
-
- /* Retrieve the transmit timer */
- val = rd32(hw, PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(IDX_OF_LFC));
- tx_timer_val = val &
- PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M;
- cmd->tx_tmr_value = cpu_to_le16(tx_timer_val);
-
- /* Retrieve the FC threshold */
- val = rd32(hw, PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(IDX_OF_LFC));
- fc_thres_val = val & PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M;
-
- cmd->fc_refresh_threshold = cpu_to_le16(fc_thres_val);
+#define E800_IDX_OF_LFC E800_PRTMAC_HSEC_CTL_TX_PS_QNT_MAX
+#define E800_REFRESH_TMR E800_PRTMAC_HSEC_CTL_TX_PS_RFSH_TMR
+
+ if (hw->mac_type == ICE_MAC_E830) {
+ /* Retrieve the transmit timer */
+ val = rd32(hw, E830_PRTMAC_CL01_PS_QNT);
+ cmd->tx_tmr_value =
+ le16_encode_bits(val, E830_PRTMAC_CL01_PS_QNT_CL0_M);
+
+ /* Retrieve the fc threshold */
+ val = rd32(hw, E830_PRTMAC_CL01_QNT_THR);
+ fc_thres_m = E830_PRTMAC_CL01_QNT_THR_CL0_M;
+ } else {
+ /* Retrieve the transmit timer */
+ val = rd32(hw,
+ E800_PRTMAC_HSEC_CTL_TX_PS_QNT(E800_IDX_OF_LFC));
+ cmd->tx_tmr_value =
+ le16_encode_bits(val,
+ E800_PRTMAC_HSEC_CTL_TX_PS_QNT_M);
+
+ /* Retrieve the fc threshold */
+ val = rd32(hw,
+ E800_REFRESH_TMR(E800_IDX_OF_LFC));
+ fc_thres_m = E800_PRTMAC_HSEC_CTL_TX_PS_RFSH_TMR_M;
+ }
+ cmd->fc_refresh_threshold = le16_encode_bits(val, fc_thres_m);
}
/**
@@ -2389,16 +2500,21 @@ ice_parse_1588_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p,
static void
ice_parse_fdir_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p)
{
- u32 reg_val, val;
+ u32 reg_val, gsize, bsize;
reg_val = rd32(hw, GLQF_FD_SIZE);
- val = (reg_val & GLQF_FD_SIZE_FD_GSIZE_M) >>
- GLQF_FD_SIZE_FD_GSIZE_S;
- func_p->fd_fltr_guar =
- ice_get_num_per_func(hw, val);
- val = (reg_val & GLQF_FD_SIZE_FD_BSIZE_M) >>
- GLQF_FD_SIZE_FD_BSIZE_S;
- func_p->fd_fltr_best_effort = val;
+ switch (hw->mac_type) {
+ case ICE_MAC_E830:
+ gsize = FIELD_GET(E830_GLQF_FD_SIZE_FD_GSIZE_M, reg_val);
+ bsize = FIELD_GET(E830_GLQF_FD_SIZE_FD_BSIZE_M, reg_val);
+ break;
+ case ICE_MAC_E810:
+ default:
+ gsize = FIELD_GET(E800_GLQF_FD_SIZE_FD_GSIZE_M, reg_val);
+ bsize = FIELD_GET(E800_GLQF_FD_SIZE_FD_BSIZE_M, reg_val);
+ }
+ func_p->fd_fltr_guar = ice_get_num_per_func(hw, gsize);
+ func_p->fd_fltr_best_effort = bsize;
ice_debug(hw, ICE_DBG_INIT, "func caps: fd_fltr_guar = %d\n",
func_p->fd_fltr_guar);
@@ -2655,33 +2771,6 @@ ice_parse_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p,
}
/**
- * ice_aq_get_netlist_node
- * @hw: pointer to the hw struct
- * @cmd: get_link_topo AQ structure
- * @node_part_number: output node part number if node found
- * @node_handle: output node handle parameter if node found
- */
-static int
-ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd,
- u8 *node_part_number, u16 *node_handle)
-{
- struct ice_aq_desc desc;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_link_topo);
- desc.params.get_link_topo = *cmd;
-
- if (ice_aq_send_cmd(hw, &desc, NULL, 0, NULL))
- return -EIO;
-
- if (node_handle)
- *node_handle = le16_to_cpu(desc.params.get_link_topo.addr.handle);
- if (node_part_number)
- *node_part_number = desc.params.get_link_topo.node_part_num;
-
- return 0;
-}
-
-/**
* ice_is_pf_c827 - check if pf contains c827 phy
* @hw: pointer to the hw struct
*/
@@ -2716,6 +2805,82 @@ bool ice_is_pf_c827(struct ice_hw *hw)
}
/**
+ * ice_is_phy_rclk_in_netlist
+ * @hw: pointer to the hw struct
+ *
+ * Check if the PHY Recovered Clock device is present in the netlist
+ */
+bool ice_is_phy_rclk_in_netlist(struct ice_hw *hw)
+{
+ if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_C827, NULL) &&
+ ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_E822_PHY, NULL))
+ return false;
+
+ return true;
+}
+
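
One reading aid for the netlist checks above and below: ice_find_netlist_node() follows the usual 0-on-success convention, so chaining two lookups with && is true only when both of them fail. A minimal standalone sketch of that convention, with invented names:

#include <stdio.h>

/* Stand-in for a lookup that returns 0 when the node is found and a
 * negative errno otherwise (invented, for illustration only). */
static int toy_find_node(int present)
{
	return present ? 0 : -19;	/* -ENODEV */
}

/* Same shape as the checks above: the device counts as present when at
 * least one of the candidate lookups succeeds. */
static int toy_is_present(int a_present, int b_present)
{
	if (toy_find_node(a_present) && toy_find_node(b_present))
		return 0;		/* both lookups failed */
	return 1;
}

int main(void)
{
	printf("%d %d %d\n",
	       toy_is_present(0, 0),	/* 0: neither node found     */
	       toy_is_present(1, 0),	/* 1: first candidate found  */
	       toy_is_present(0, 1));	/* 1: second candidate found */
	return 0;
}
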
+/**
+ * ice_is_clock_mux_in_netlist
+ * @hw: pointer to the hw struct
+ *
+ * Check if the Clock Multiplexer device is present in the netlist
+ */
+bool ice_is_clock_mux_in_netlist(struct ice_hw *hw)
+{
+ if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_MUX,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_CLK_MUX,
+ NULL))
+ return false;
+
+ return true;
+}
+
+/**
+ * ice_is_cgu_in_netlist - check for CGU presence
+ * @hw: pointer to the hw struct
+ *
+ * Check if the Clock Generation Unit (CGU) device is present in the netlist.
+ * Save the CGU part number in the hw structure for later use.
+ * Return:
+ * * true - cgu is present
+ * * false - cgu is not present
+ */
+bool ice_is_cgu_in_netlist(struct ice_hw *hw)
+{
+ if (!ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_ZL30632_80032,
+ NULL)) {
+ hw->cgu_part_number = ICE_AQC_GET_LINK_TOPO_NODE_NR_ZL30632_80032;
+ return true;
+ } else if (!ice_find_netlist_node(hw,
+ ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_SI5383_5384,
+ NULL)) {
+ hw->cgu_part_number = ICE_AQC_GET_LINK_TOPO_NODE_NR_SI5383_5384;
+ return true;
+ }
+
+ return false;
+}
+
+/**
+ * ice_is_gps_in_netlist
+ * @hw: pointer to the hw struct
+ *
+ * Check if the GPS generic device is present in the netlist
+ */
+bool ice_is_gps_in_netlist(struct ice_hw *hw)
+{
+ if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_GPS,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_GPS, NULL))
+ return false;
+
+ return true;
+}
+
+/**
* ice_aq_list_caps - query function/device capabilities
* @hw: pointer to the HW struct
* @buf: a buffer to hold the capabilities
@@ -4726,11 +4891,11 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
enum ice_disq_rst_src rst_src, u16 vmvf_num,
struct ice_sq_cd *cd)
{
- struct ice_aqc_dis_txq_item *qg_list;
+ DEFINE_FLEX(struct ice_aqc_dis_txq_item, qg_list, q_id, 1);
+ u16 i, buf_size = __struct_size(qg_list);
struct ice_q_ctx *q_ctx;
int status = -ENOENT;
struct ice_hw *hw;
- u16 i, buf_size;
if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
return -EIO;
@@ -4748,11 +4913,6 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
return -EIO;
}
- buf_size = struct_size(qg_list, q_id, 1);
- qg_list = kzalloc(buf_size, GFP_KERNEL);
- if (!qg_list)
- return -ENOMEM;
-
mutex_lock(&pi->sched_lock);
for (i = 0; i < num_queues; i++) {
@@ -4785,7 +4945,6 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
q_ctx->q_teid = ICE_INVAL_TEID;
}
mutex_unlock(&pi->sched_lock);
- kfree(qg_list);
return status;
}
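
This conversion (and the matching ones below) replaces a kzalloc()/kfree() pair with DEFINE_FLEX(), which gives a flexible-array struct a correctly sized on-stack buffer, with __struct_size() reporting the total size. A rough userspace approximation of the pattern, using an invented stand-in type rather than the real kernel macro:

#include <stdint.h>
#include <stdio.h>

struct demo_txq_item {			/* stand-in for ice_aqc_dis_txq_item */
	uint32_t parent_teid;
	uint16_t num_qs;
	uint16_t q_id[];		/* flexible array member */
};

/* Reserve stack storage for the header plus 'count' trailing elements and
 * expose it through a typed pointer (the real kernel macro wraps the
 * storage in a union to the same effect). */
#define DEMO_DEFINE_FLEX(type, name, member, count)			\
	_Alignas(type) unsigned char name##_storage[sizeof(type) +	\
		sizeof(((type *)0)->member[0]) * (count)] = { 0 };	\
	type *name = (type *)name##_storage

int main(void)
{
	DEMO_DEFINE_FLEX(struct demo_txq_item, qg_list, q_id, 1);
	size_t buf_size = sizeof(qg_list_storage);	/* cf. __struct_size() */

	qg_list->num_qs = 1;
	qg_list->q_id[0] = 42;
	printf("on-stack buffer: %zu bytes, q_id[0] = %u\n",
	       buf_size, (unsigned)qg_list->q_id[0]);
	return 0;
}
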
@@ -4954,10 +5113,10 @@ int
ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid,
u16 *q_id)
{
- struct ice_aqc_dis_txq_item *qg_list;
+ DEFINE_FLEX(struct ice_aqc_dis_txq_item, qg_list, q_id, 1);
+ u16 qg_size = __struct_size(qg_list);
struct ice_hw *hw;
int status = 0;
- u16 qg_size;
int i;
if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
@@ -4965,11 +5124,6 @@ ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid,
hw = pi->hw;
- qg_size = struct_size(qg_list, q_id, 1);
- qg_list = kzalloc(qg_size, GFP_KERNEL);
- if (!qg_list)
- return -ENOMEM;
-
mutex_lock(&pi->sched_lock);
for (i = 0; i < count; i++) {
@@ -4994,7 +5148,395 @@ ice_dis_vsi_rdma_qset(struct ice_port_info *pi, u16 count, u32 *qset_teid,
}
mutex_unlock(&pi->sched_lock);
- kfree(qg_list);
+ return status;
+}
+
+/**
+ * ice_aq_get_cgu_abilities - get cgu abilities
+ * @hw: pointer to the HW struct
+ * @abilities: CGU abilities
+ *
+ * Get CGU abilities (0x0C61)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_get_cgu_abilities(struct ice_hw *hw,
+ struct ice_aqc_get_cgu_abilities *abilities)
+{
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_cgu_abilities);
+ return ice_aq_send_cmd(hw, &desc, abilities, sizeof(*abilities), NULL);
+}
+
+/**
+ * ice_aq_set_input_pin_cfg - set input pin config
+ * @hw: pointer to the HW struct
+ * @input_idx: Input index
+ * @flags1: Input flags
+ * @flags2: Input flags
+ * @freq: Frequency in Hz
+ * @phase_delay: Delay in ps
+ *
+ * Set CGU input config (0x0C62)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_set_input_pin_cfg(struct ice_hw *hw, u8 input_idx, u8 flags1, u8 flags2,
+ u32 freq, s32 phase_delay)
+{
+ struct ice_aqc_set_cgu_input_config *cmd;
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_cgu_input_config);
+ cmd = &desc.params.set_cgu_input_config;
+ cmd->input_idx = input_idx;
+ cmd->flags1 = flags1;
+ cmd->flags2 = flags2;
+ cmd->freq = cpu_to_le32(freq);
+ cmd->phase_delay = cpu_to_le32(phase_delay);
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_get_input_pin_cfg - get input pin config
+ * @hw: pointer to the HW struct
+ * @input_idx: Input index
+ * @status: Pin status
+ * @type: Pin type
+ * @flags1: Input flags
+ * @flags2: Input flags
+ * @freq: Frequency in Hz
+ * @phase_delay: Delay in ps
+ *
+ * Get CGU input config (0x0C63)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_get_input_pin_cfg(struct ice_hw *hw, u8 input_idx, u8 *status, u8 *type,
+ u8 *flags1, u8 *flags2, u32 *freq, s32 *phase_delay)
+{
+ struct ice_aqc_get_cgu_input_config *cmd;
+ struct ice_aq_desc desc;
+ int ret;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_cgu_input_config);
+ cmd = &desc.params.get_cgu_input_config;
+ cmd->input_idx = input_idx;
+
+ ret = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ if (!ret) {
+ if (status)
+ *status = cmd->status;
+ if (type)
+ *type = cmd->type;
+ if (flags1)
+ *flags1 = cmd->flags1;
+ if (flags2)
+ *flags2 = cmd->flags2;
+ if (freq)
+ *freq = le32_to_cpu(cmd->freq);
+ if (phase_delay)
+ *phase_delay = le32_to_cpu(cmd->phase_delay);
+ }
+
+ return ret;
+}
+
+/**
+ * ice_aq_set_output_pin_cfg - set output pin config
+ * @hw: pointer to the HW struct
+ * @output_idx: Output index
+ * @flags: Output flags
+ * @src_sel: Index of DPLL block
+ * @freq: Output frequency
+ * @phase_delay: Output phase compensation
+ *
+ * Set CGU output config (0x0C64)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_set_output_pin_cfg(struct ice_hw *hw, u8 output_idx, u8 flags,
+ u8 src_sel, u32 freq, s32 phase_delay)
+{
+ struct ice_aqc_set_cgu_output_config *cmd;
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_cgu_output_config);
+ cmd = &desc.params.set_cgu_output_config;
+ cmd->output_idx = output_idx;
+ cmd->flags = flags;
+ cmd->src_sel = src_sel;
+ cmd->freq = cpu_to_le32(freq);
+ cmd->phase_delay = cpu_to_le32(phase_delay);
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_get_output_pin_cfg - get output pin config
+ * @hw: pointer to the HW struct
+ * @output_idx: Output index
+ * @flags: Output flags
+ * @src_sel: Internal DPLL source
+ * @freq: Output frequency
+ * @src_freq: Source frequency
+ *
+ * Get CGU output config (0x0C65)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_get_output_pin_cfg(struct ice_hw *hw, u8 output_idx, u8 *flags,
+ u8 *src_sel, u32 *freq, u32 *src_freq)
+{
+ struct ice_aqc_get_cgu_output_config *cmd;
+ struct ice_aq_desc desc;
+ int ret;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_cgu_output_config);
+ cmd = &desc.params.get_cgu_output_config;
+ cmd->output_idx = output_idx;
+
+ ret = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ if (!ret) {
+ if (flags)
+ *flags = cmd->flags;
+ if (src_sel)
+ *src_sel = cmd->src_sel;
+ if (freq)
+ *freq = le32_to_cpu(cmd->freq);
+ if (src_freq)
+ *src_freq = le32_to_cpu(cmd->src_freq);
+ }
+
+ return ret;
+}
+
+/**
+ * ice_aq_get_cgu_dpll_status - get dpll status
+ * @hw: pointer to the HW struct
+ * @dpll_num: DPLL index
+ * @ref_state: Reference clock state
+ * @config: current DPLL config
+ * @dpll_state: current DPLL state
+ * @phase_offset: Phase offset in ns
+ * @eec_mode: EEC_mode
+ *
+ * Get CGU DPLL status (0x0C66)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_get_cgu_dpll_status(struct ice_hw *hw, u8 dpll_num, u8 *ref_state,
+ u8 *dpll_state, u8 *config, s64 *phase_offset,
+ u8 *eec_mode)
+{
+ struct ice_aqc_get_cgu_dpll_status *cmd;
+ const s64 nsec_per_psec = 1000LL;
+ struct ice_aq_desc desc;
+ int status;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_cgu_dpll_status);
+ cmd = &desc.params.get_cgu_dpll_status;
+ cmd->dpll_num = dpll_num;
+
+ status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ if (!status) {
+ *ref_state = cmd->ref_state;
+ *dpll_state = cmd->dpll_state;
+ *config = cmd->config;
+ *phase_offset = le32_to_cpu(cmd->phase_offset_h);
+ *phase_offset <<= 32;
+ *phase_offset += le32_to_cpu(cmd->phase_offset_l);
+ *phase_offset = div64_s64(sign_extend64(*phase_offset, 47),
+ nsec_per_psec);
+ *eec_mode = cmd->eec_mode;
+ }
+
+ return status;
+}
+
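
The phase offset handling above stitches a 48-bit signed picosecond value back together from two little-endian 32-bit words, sign-extends it from bit 47, and scales it to nanoseconds. A standalone sketch of that arithmetic with made-up input values (demo_sign_extend64() mirrors the kernel helper of the same name):

#include <stdint.h>
#include <stdio.h>

/* Mirrors the kernel's sign_extend64(): treat bit 'index' as the sign bit. */
static int64_t demo_sign_extend64(uint64_t value, int index)
{
	int shift = 63 - index;

	return (int64_t)(value << shift) >> shift;
}

int main(void)
{
	uint32_t phase_offset_h = 0x0000FFFFu;	/* pretend AQ response words */
	uint32_t phase_offset_l = 0xFFFF0000u;
	int64_t ps, ns;

	ps = ((int64_t)phase_offset_h << 32) | phase_offset_l;
	ps = demo_sign_extend64((uint64_t)ps, 47);	/* 48-bit signed value */
	ns = ps / 1000;				/* picoseconds -> nanoseconds */
	printf("phase offset: %lld ps = %lld ns\n",
	       (long long)ps, (long long)ns);
	return 0;
}
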
+/**
+ * ice_aq_set_cgu_dpll_config - set dpll config
+ * @hw: pointer to the HW struct
+ * @dpll_num: DPLL index
+ * @ref_state: Reference clock state
+ * @config: DPLL config
+ * @eec_mode: EEC mode
+ *
+ * Set CGU DPLL config (0x0C67)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_set_cgu_dpll_config(struct ice_hw *hw, u8 dpll_num, u8 ref_state,
+ u8 config, u8 eec_mode)
+{
+ struct ice_aqc_set_cgu_dpll_config *cmd;
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_cgu_dpll_config);
+ cmd = &desc.params.set_cgu_dpll_config;
+ cmd->dpll_num = dpll_num;
+ cmd->ref_state = ref_state;
+ cmd->config = config;
+ cmd->eec_mode = eec_mode;
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_set_cgu_ref_prio - set input reference priority
+ * @hw: pointer to the HW struct
+ * @dpll_num: DPLL index
+ * @ref_idx: Reference pin index
+ * @ref_priority: Reference input priority
+ *
+ * Set CGU reference priority (0x0C68)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_set_cgu_ref_prio(struct ice_hw *hw, u8 dpll_num, u8 ref_idx,
+ u8 ref_priority)
+{
+ struct ice_aqc_set_cgu_ref_prio *cmd;
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_cgu_ref_prio);
+ cmd = &desc.params.set_cgu_ref_prio;
+ cmd->dpll_num = dpll_num;
+ cmd->ref_idx = ref_idx;
+ cmd->ref_priority = ref_priority;
+
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+}
+
+/**
+ * ice_aq_get_cgu_ref_prio - get input reference priority
+ * @hw: pointer to the HW struct
+ * @dpll_num: DPLL index
+ * @ref_idx: Reference pin index
+ * @ref_prio: Reference input priority
+ *
+ * Get CGU reference priority (0x0C69)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_get_cgu_ref_prio(struct ice_hw *hw, u8 dpll_num, u8 ref_idx,
+ u8 *ref_prio)
+{
+ struct ice_aqc_get_cgu_ref_prio *cmd;
+ struct ice_aq_desc desc;
+ int status;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_cgu_ref_prio);
+ cmd = &desc.params.get_cgu_ref_prio;
+ cmd->dpll_num = dpll_num;
+ cmd->ref_idx = ref_idx;
+
+ status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ if (!status)
+ *ref_prio = cmd->ref_priority;
+
+ return status;
+}
+
+/**
+ * ice_aq_get_cgu_info - get cgu info
+ * @hw: pointer to the HW struct
+ * @cgu_id: CGU ID
+ * @cgu_cfg_ver: CGU config version
+ * @cgu_fw_ver: CGU firmware version
+ *
+ * Get CGU info (0x0C6A)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_get_cgu_info(struct ice_hw *hw, u32 *cgu_id, u32 *cgu_cfg_ver,
+ u32 *cgu_fw_ver)
+{
+ struct ice_aqc_get_cgu_info *cmd;
+ struct ice_aq_desc desc;
+ int status;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_cgu_info);
+ cmd = &desc.params.get_cgu_info;
+
+ status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ if (!status) {
+ *cgu_id = le32_to_cpu(cmd->cgu_id);
+ *cgu_cfg_ver = le32_to_cpu(cmd->cgu_cfg_ver);
+ *cgu_fw_ver = le32_to_cpu(cmd->cgu_fw_ver);
+ }
+
+ return status;
+}
+
+/**
+ * ice_aq_set_phy_rec_clk_out - set RCLK phy out
+ * @hw: pointer to the HW struct
+ * @phy_output: PHY reference clock output pin
+ * @enable: GPIO state to be applied
+ * @freq: PHY output frequency
+ *
+ * Set phy recovered clock as reference (0x0630)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_set_phy_rec_clk_out(struct ice_hw *hw, u8 phy_output, bool enable,
+ u32 *freq)
+{
+ struct ice_aqc_set_phy_rec_clk_out *cmd;
+ struct ice_aq_desc desc;
+ int status;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_set_phy_rec_clk_out);
+ cmd = &desc.params.set_phy_rec_clk_out;
+ cmd->phy_output = phy_output;
+ cmd->port_num = ICE_AQC_SET_PHY_REC_CLK_OUT_CURR_PORT;
+ cmd->flags = enable & ICE_AQC_SET_PHY_REC_CLK_OUT_OUT_EN;
+ cmd->freq = cpu_to_le32(*freq);
+
+ status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ if (!status)
+ *freq = le32_to_cpu(cmd->freq);
+
+ return status;
+}
+
+/**
+ * ice_aq_get_phy_rec_clk_out - get phy recovered signal info
+ * @hw: pointer to the HW struct
+ * @phy_output: PHY reference clock output pin
+ * @port_num: Port number
+ * @flags: PHY flags
+ * @node_handle: PHY node handle
+ *
+ * Get PHY recovered clock output info (0x0631)
+ * Return: 0 on success or negative value on failure.
+ */
+int
+ice_aq_get_phy_rec_clk_out(struct ice_hw *hw, u8 *phy_output, u8 *port_num,
+ u8 *flags, u16 *node_handle)
+{
+ struct ice_aqc_get_phy_rec_clk_out *cmd;
+ struct ice_aq_desc desc;
+ int status;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_phy_rec_clk_out);
+ cmd = &desc.params.get_phy_rec_clk_out;
+ cmd->phy_output = *phy_output;
+
+ status = ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ if (!status) {
+ *phy_output = cmd->phy_output;
+ if (port_num)
+ *port_num = cmd->port_num;
+ if (flags)
+ *flags = cmd->flags;
+ if (node_handle)
+ *node_handle = le16_to_cpu(cmd->node_handle);
+ }
+
return status;
}
@@ -5268,81 +5810,6 @@ ice_aq_write_i2c(struct ice_hw *hw, struct ice_aqc_link_topo_addr topo_addr,
}
/**
- * ice_aq_set_driver_param - Set driver parameter to share via firmware
- * @hw: pointer to the HW struct
- * @idx: parameter index to set
- * @value: the value to set the parameter to
- * @cd: pointer to command details structure or NULL
- *
- * Set the value of one of the software defined parameters. All PFs connected
- * to this device can read the value using ice_aq_get_driver_param.
- *
- * Note that firmware provides no synchronization or locking, and will not
- * save the parameter value during a device reset. It is expected that
- * a single PF will write the parameter value, while all other PFs will only
- * read it.
- */
-int
-ice_aq_set_driver_param(struct ice_hw *hw, enum ice_aqc_driver_params idx,
- u32 value, struct ice_sq_cd *cd)
-{
- struct ice_aqc_driver_shared_params *cmd;
- struct ice_aq_desc desc;
-
- if (idx >= ICE_AQC_DRIVER_PARAM_MAX)
- return -EIO;
-
- cmd = &desc.params.drv_shared_params;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_driver_shared_params);
-
- cmd->set_or_get_op = ICE_AQC_DRIVER_PARAM_SET;
- cmd->param_indx = idx;
- cmd->param_val = cpu_to_le32(value);
-
- return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
-}
-
-/**
- * ice_aq_get_driver_param - Get driver parameter shared via firmware
- * @hw: pointer to the HW struct
- * @idx: parameter index to set
- * @value: storage to return the shared parameter
- * @cd: pointer to command details structure or NULL
- *
- * Get the value of one of the software defined parameters.
- *
- * Note that firmware provides no synchronization or locking. It is expected
- * that only a single PF will write a given parameter.
- */
-int
-ice_aq_get_driver_param(struct ice_hw *hw, enum ice_aqc_driver_params idx,
- u32 *value, struct ice_sq_cd *cd)
-{
- struct ice_aqc_driver_shared_params *cmd;
- struct ice_aq_desc desc;
- int status;
-
- if (idx >= ICE_AQC_DRIVER_PARAM_MAX)
- return -EIO;
-
- cmd = &desc.params.drv_shared_params;
-
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_driver_shared_params);
-
- cmd->set_or_get_op = ICE_AQC_DRIVER_PARAM_GET;
- cmd->param_indx = idx;
-
- status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
- if (status)
- return status;
-
- *value = le32_to_cpu(cmd->param_val);
-
- return 0;
-}
-
-/**
* ice_aq_set_gpio
* @hw: pointer to the hw struct
* @gpio_ctrl_handle: GPIO controller node handle
@@ -5643,6 +6110,7 @@ static const u32 ice_aq_to_link_speed[] = {
SPEED_40000,
SPEED_50000,
SPEED_100000, /* BIT(10) */
+ SPEED_200000,
};
/**
diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
index 226b81f97a92..31fdcac33986 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.h
+++ b/drivers/net/ethernet/intel/ice/ice_common.h
@@ -93,6 +93,13 @@ ice_aq_get_phy_caps(struct ice_port_info *pi, bool qual_mods, u8 report_mode,
struct ice_aqc_get_phy_caps_data *caps,
struct ice_sq_cd *cd);
bool ice_is_pf_c827(struct ice_hw *hw);
+bool ice_is_phy_rclk_in_netlist(struct ice_hw *hw);
+bool ice_is_clock_mux_in_netlist(struct ice_hw *hw);
+bool ice_is_cgu_in_netlist(struct ice_hw *hw);
+bool ice_is_gps_in_netlist(struct ice_hw *hw);
+int
+ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd,
+ u8 *node_part_number, u16 *node_handle);
int
ice_aq_list_caps(struct ice_hw *hw, void *buf, u16 buf_size, u32 *cap_count,
enum ice_adminq_opc opc, struct ice_sq_cd *cd);
@@ -196,6 +203,44 @@ void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf);
struct ice_q_ctx *
ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle);
int ice_sbq_rw_reg(struct ice_hw *hw, struct ice_sbq_msg_input *in);
+int
+ice_aq_get_cgu_abilities(struct ice_hw *hw,
+ struct ice_aqc_get_cgu_abilities *abilities);
+int
+ice_aq_set_input_pin_cfg(struct ice_hw *hw, u8 input_idx, u8 flags1, u8 flags2,
+ u32 freq, s32 phase_delay);
+int
+ice_aq_get_input_pin_cfg(struct ice_hw *hw, u8 input_idx, u8 *status, u8 *type,
+ u8 *flags1, u8 *flags2, u32 *freq, s32 *phase_delay);
+int
+ice_aq_set_output_pin_cfg(struct ice_hw *hw, u8 output_idx, u8 flags,
+ u8 src_sel, u32 freq, s32 phase_delay);
+int
+ice_aq_get_output_pin_cfg(struct ice_hw *hw, u8 output_idx, u8 *flags,
+ u8 *src_sel, u32 *freq, u32 *src_freq);
+int
+ice_aq_get_cgu_dpll_status(struct ice_hw *hw, u8 dpll_num, u8 *ref_state,
+ u8 *dpll_state, u8 *config, s64 *phase_offset,
+ u8 *eec_mode);
+int
+ice_aq_set_cgu_dpll_config(struct ice_hw *hw, u8 dpll_num, u8 ref_state,
+ u8 config, u8 eec_mode);
+int
+ice_aq_set_cgu_ref_prio(struct ice_hw *hw, u8 dpll_num, u8 ref_idx,
+ u8 ref_priority);
+int
+ice_aq_get_cgu_ref_prio(struct ice_hw *hw, u8 dpll_num, u8 ref_idx,
+ u8 *ref_prio);
+int
+ice_aq_get_cgu_info(struct ice_hw *hw, u32 *cgu_id, u32 *cgu_cfg_ver,
+ u32 *cgu_fw_ver);
+
+int
+ice_aq_set_phy_rec_clk_out(struct ice_hw *hw, u8 phy_output, bool enable,
+ u32 *freq);
+int
+ice_aq_get_phy_rec_clk_out(struct ice_hw *hw, u8 *phy_output, u8 *port_num,
+ u8 *flags, u16 *node_handle);
void
ice_stat_update40(struct ice_hw *hw, u32 reg, bool prev_stat_loaded,
u64 *prev_stat, u64 *cur_stat);
@@ -208,12 +253,6 @@ int
ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
struct ice_aqc_txsched_elem_data *buf);
int
-ice_aq_set_driver_param(struct ice_hw *hw, enum ice_aqc_driver_params idx,
- u32 value, struct ice_sq_cd *cd);
-int
-ice_aq_get_driver_param(struct ice_hw *hw, enum ice_aqc_driver_params idx,
- u32 *value, struct ice_sq_cd *cd);
-int
ice_aq_set_gpio(struct ice_hw *hw, u16 gpio_ctrl_handle, u8 pin_idx, bool value,
struct ice_sq_cd *cd);
int
diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.c b/drivers/net/ethernet/intel/ice/ice_ddp.c
index b27ec93638b6..cfb1580f5850 100644
--- a/drivers/net/ethernet/intel/ice/ice_ddp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ddp.c
@@ -1201,23 +1201,120 @@ ice_aq_download_pkg(struct ice_hw *hw, struct ice_buf_hdr *pkg_buf,
}
/**
- * ice_dwnld_cfg_bufs
+ * ice_get_pkg_seg_by_idx
+ * @pkg_hdr: pointer to the package header to be searched
+ * @idx: index of segment
+ */
+static struct ice_generic_seg_hdr *
+ice_get_pkg_seg_by_idx(struct ice_pkg_hdr *pkg_hdr, u32 idx)
+{
+ if (idx < le32_to_cpu(pkg_hdr->seg_count))
+ return (struct ice_generic_seg_hdr *)
+ ((u8 *)pkg_hdr +
+ le32_to_cpu(pkg_hdr->seg_offset[idx]));
+
+ return NULL;
+}
+
+/**
+ * ice_is_signing_seg_at_idx - determine if segment is a signing segment
+ * @pkg_hdr: pointer to package header
+ * @idx: segment index
+ */
+static bool ice_is_signing_seg_at_idx(struct ice_pkg_hdr *pkg_hdr, u32 idx)
+{
+ struct ice_generic_seg_hdr *seg;
+
+ seg = ice_get_pkg_seg_by_idx(pkg_hdr, idx);
+ if (!seg)
+ return false;
+
+ return le32_to_cpu(seg->seg_type) == SEGMENT_TYPE_SIGNING;
+}
+
+/**
+ * ice_is_signing_seg_type_at_idx
+ * @pkg_hdr: pointer to package header
+ * @idx: segment index
+ * @seg_id: segment id that is expected
+ * @sign_type: signing type
+ *
+ * Determine if a segment is a signing segment of the correct type
+ */
+static bool
+ice_is_signing_seg_type_at_idx(struct ice_pkg_hdr *pkg_hdr, u32 idx,
+ u32 seg_id, u32 sign_type)
+{
+ struct ice_sign_seg *seg;
+
+ if (!ice_is_signing_seg_at_idx(pkg_hdr, idx))
+ return false;
+
+ seg = (struct ice_sign_seg *)ice_get_pkg_seg_by_idx(pkg_hdr, idx);
+
+ if (seg && le32_to_cpu(seg->seg_id) == seg_id &&
+ le32_to_cpu(seg->sign_type) == sign_type)
+ return true;
+
+ return false;
+}
+
+/**
+ * ice_is_buffer_metadata - determine if package buffer is a metadata buffer
+ * @buf: pointer to buffer header
+ */
+static bool ice_is_buffer_metadata(struct ice_buf_hdr *buf)
+{
+ if (le32_to_cpu(buf->section_entry[0].type) & ICE_METADATA_BUF)
+ return true;
+
+ return false;
+}
+
+/**
+ * ice_is_last_download_buffer
+ * @buf: pointer to current buffer header
+ * @idx: index of the buffer in the current sequence
+ * @count: the buffer count in the current sequence
+ *
+ * Note: this routine should only be called if the buffer is not the last buffer
+ */
+static bool
+ice_is_last_download_buffer(struct ice_buf_hdr *buf, u32 idx, u32 count)
+{
+ struct ice_buf *next_buf;
+
+ if ((idx + 1) == count)
+ return true;
+
+ /* A set metadata flag in the next buffer will signal that the current
+ * buffer will be the last buffer downloaded
+ */
+ next_buf = ((struct ice_buf *)buf) + 1;
+
+ return ice_is_buffer_metadata((struct ice_buf_hdr *)next_buf);
+}
+
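
The helpers above factor out the rule that used to live inline in ice_dwnld_cfg_bufs(): a buffer counts as the last download buffer if it is the final one in the sequence, or if the buffer after it carries the metadata flag. A toy standalone model of that walk, with invented types:

#include <stdbool.h>
#include <stdio.h>

struct toy_buf {
	bool is_metadata;		/* models ICE_METADATA_BUF in section 0 */
};

/* Same shape as ice_is_last_download_buffer(): the final buffer, or any
 * buffer followed by a metadata buffer, is the last one to download. */
static bool toy_is_last(const struct toy_buf *bufs, unsigned int idx,
			unsigned int count)
{
	if (idx + 1 == count)
		return true;
	return bufs[idx + 1].is_metadata;
}

int main(void)
{
	struct toy_buf bufs[] = { {false}, {false}, {true}, {true} };
	unsigned int i, count = sizeof(bufs) / sizeof(bufs[0]);

	for (i = 0; i < count; i++) {
		if (bufs[i].is_metadata)
			break;		/* metadata: nothing left to send */
		printf("download buf %u%s\n", i,
		       toy_is_last(bufs, i, count) ? " (last)" : "");
	}
	return 0;
}
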
+/**
+ * ice_dwnld_cfg_bufs_no_lock
* @hw: pointer to the hardware structure
* @bufs: pointer to an array of buffers
- * @count: the number of buffers in the array
+ * @start: buffer index of first buffer to download
+ * @count: the number of buffers to download
+ * @indicate_last: if true, then set last buffer flag on last buffer download
*
- * Obtains global config lock and downloads the package configuration buffers
- * to the firmware. Metadata buffers are skipped, and the first metadata buffer
- * found indicates that the rest of the buffers are all metadata buffers.
+ * Downloads package configuration buffers to the firmware. Metadata buffers
+ * are skipped, and the first metadata buffer found indicates that the rest
+ * of the buffers are all metadata buffers.
*/
-static enum ice_ddp_state ice_dwnld_cfg_bufs(struct ice_hw *hw,
- struct ice_buf *bufs, u32 count)
+static enum ice_ddp_state
+ice_dwnld_cfg_bufs_no_lock(struct ice_hw *hw, struct ice_buf *bufs, u32 start,
+ u32 count, bool indicate_last)
{
enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS;
struct ice_buf_hdr *bh;
enum ice_aq_err err;
u32 offset, info, i;
- int status;
if (!bufs || !count)
return ICE_DDP_PKG_ERR;
@@ -1226,43 +1323,25 @@ static enum ice_ddp_state ice_dwnld_cfg_bufs(struct ice_hw *hw,
* then there are no buffers to be downloaded, and the operation is
* considered a success.
*/
- bh = (struct ice_buf_hdr *)bufs;
+ bh = (struct ice_buf_hdr *)(bufs + start);
if (le32_to_cpu(bh->section_entry[0].type) & ICE_METADATA_BUF)
return ICE_DDP_PKG_SUCCESS;
- status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE);
- if (status) {
- if (status == -EALREADY)
- return ICE_DDP_PKG_ALREADY_LOADED;
- return ice_map_aq_err_to_ddp_state(hw->adminq.sq_last_status);
- }
-
for (i = 0; i < count; i++) {
- bool last = ((i + 1) == count);
+ bool last = false;
+ int status;
- if (!last) {
- /* check next buffer for metadata flag */
- bh = (struct ice_buf_hdr *)(bufs + i + 1);
+ bh = (struct ice_buf_hdr *)(bufs + start + i);
- /* A set metadata flag in the next buffer will signal
- * that the current buffer will be the last buffer
- * downloaded
- */
- if (le16_to_cpu(bh->section_count))
- if (le32_to_cpu(bh->section_entry[0].type) &
- ICE_METADATA_BUF)
- last = true;
- }
-
- bh = (struct ice_buf_hdr *)(bufs + i);
+ if (indicate_last)
+ last = ice_is_last_download_buffer(bh, i, count);
status = ice_aq_download_pkg(hw, bh, ICE_PKG_BUF_SIZE, last,
&offset, &info, NULL);
/* Save AQ status from download package */
if (status) {
- ice_debug(hw, ICE_DBG_PKG,
- "Pkg download failed: err %d off %d inf %d\n",
+ ice_debug(hw, ICE_DBG_PKG, "Pkg download failed: err %d off %d inf %d\n",
status, offset, info);
err = hw->adminq.sq_last_status;
state = ice_map_aq_err_to_ddp_state(err);
@@ -1273,72 +1352,196 @@ static enum ice_ddp_state ice_dwnld_cfg_bufs(struct ice_hw *hw,
break;
}
- if (!status) {
- status = ice_set_vlan_mode(hw);
- if (status)
- ice_debug(hw, ICE_DBG_PKG,
- "Failed to set VLAN mode: err %d\n", status);
+ return state;
+}
+
+/**
+ * ice_download_pkg_sig_seg - download a signature segment
+ * @hw: pointer to the hardware structure
+ * @seg: pointer to signature segment
+ */
+static enum ice_ddp_state
+ice_download_pkg_sig_seg(struct ice_hw *hw, struct ice_sign_seg *seg)
+{
+ return ice_dwnld_cfg_bufs_no_lock(hw, seg->buf_tbl.buf_array, 0,
+ le32_to_cpu(seg->buf_tbl.buf_count),
+ false);
+}
+
+/**
+ * ice_download_pkg_config_seg - download a config segment
+ * @hw: pointer to the hardware structure
+ * @pkg_hdr: pointer to package header
+ * @idx: segment index
+ * @start: starting buffer
+ * @count: buffer count
+ *
+ * Note: idx must reference an ICE segment
+ */
+static enum ice_ddp_state
+ice_download_pkg_config_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr,
+ u32 idx, u32 start, u32 count)
+{
+ struct ice_buf_table *bufs;
+ struct ice_seg *seg;
+ u32 buf_count;
+
+ seg = (struct ice_seg *)ice_get_pkg_seg_by_idx(pkg_hdr, idx);
+ if (!seg)
+ return ICE_DDP_PKG_ERR;
+
+ bufs = ice_find_buf_table(seg);
+ buf_count = le32_to_cpu(bufs->buf_count);
+
+ if (start >= buf_count || start + count > buf_count)
+ return ICE_DDP_PKG_ERR;
+
+ return ice_dwnld_cfg_bufs_no_lock(hw, bufs->buf_array, start, count,
+ true);
+}
+
+/**
+ * ice_dwnld_sign_and_cfg_segs - download a signing segment and config segment
+ * @hw: pointer to the hardware structure
+ * @pkg_hdr: pointer to package header
+ * @idx: segment index (must be a signature segment)
+ *
+ * Note: idx must reference a signature segment
+ */
+static enum ice_ddp_state
+ice_dwnld_sign_and_cfg_segs(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr,
+ u32 idx)
+{
+ enum ice_ddp_state state;
+ struct ice_sign_seg *seg;
+ u32 conf_idx;
+ u32 start;
+ u32 count;
+
+ seg = (struct ice_sign_seg *)ice_get_pkg_seg_by_idx(pkg_hdr, idx);
+ if (!seg) {
+ state = ICE_DDP_PKG_ERR;
+ goto exit;
}
- ice_release_global_cfg_lock(hw);
+ conf_idx = le32_to_cpu(seg->signed_seg_idx);
+ start = le32_to_cpu(seg->signed_buf_start);
+ count = le32_to_cpu(seg->signed_buf_count);
+ state = ice_download_pkg_sig_seg(hw, seg);
+ if (state)
+ goto exit;
+
+ state = ice_download_pkg_config_seg(hw, pkg_hdr, conf_idx, start,
+ count);
+
+exit:
return state;
}
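
In the signed-package flow above, each signature segment points at the configuration segment it covers (signed_seg_idx) and at the buffer window inside it that was actually signed ([start, start + count)). A tiny standalone sketch of that indirection, with invented values:

#include <stdio.h>

/* Invented, simplified view of struct ice_sign_seg's bookkeeping fields. */
struct toy_sign_seg {
	unsigned int signed_seg_idx;	/* which config segment it covers  */
	unsigned int signed_buf_start;	/* first signed buffer in that seg */
	unsigned int signed_buf_count;	/* number of signed buffers        */
};

int main(void)
{
	struct toy_sign_seg sig = {
		.signed_seg_idx = 2,
		.signed_buf_start = 0,
		.signed_buf_count = 8,
	};

	/* Step 1: send the signature segment's own buffers.
	 * Step 2: send only the covered window of the config segment. */
	printf("config segment %u: download buffers [%u, %u)\n",
	       sig.signed_seg_idx, sig.signed_buf_start,
	       sig.signed_buf_start + sig.signed_buf_count);
	return 0;
}
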
/**
- * ice_aq_get_pkg_info_list
+ * ice_match_signing_seg - determine if a matching signing segment exists
+ * @pkg_hdr: pointer to package header
+ * @seg_id: segment id that is expected
+ * @sign_type: signing type
+ */
+static bool
+ice_match_signing_seg(struct ice_pkg_hdr *pkg_hdr, u32 seg_id, u32 sign_type)
+{
+ u32 i;
+
+ for (i = 0; i < le32_to_cpu(pkg_hdr->seg_count); i++) {
+ if (ice_is_signing_seg_type_at_idx(pkg_hdr, i, seg_id,
+ sign_type))
+ return true;
+ }
+
+ return false;
+}
+
+/**
+ * ice_post_dwnld_pkg_actions - perform post download package actions
* @hw: pointer to the hardware structure
- * @pkg_info: the buffer which will receive the information list
- * @buf_size: the size of the pkg_info information buffer
- * @cd: pointer to command details structure or NULL
- *
- * Get Package Info List (0x0C43)
*/
-static int ice_aq_get_pkg_info_list(struct ice_hw *hw,
- struct ice_aqc_get_pkg_info_resp *pkg_info,
- u16 buf_size, struct ice_sq_cd *cd)
+static enum ice_ddp_state
+ice_post_dwnld_pkg_actions(struct ice_hw *hw)
{
- struct ice_aq_desc desc;
+ int status;
- ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_pkg_info_list);
+ status = ice_set_vlan_mode(hw);
+ if (status) {
+ ice_debug(hw, ICE_DBG_PKG, "Failed to set VLAN mode: err %d\n",
+ status);
+ return ICE_DDP_PKG_ERR;
+ }
- return ice_aq_send_cmd(hw, &desc, pkg_info, buf_size, cd);
+ return ICE_DDP_PKG_SUCCESS;
}
/**
* ice_download_pkg
* @hw: pointer to the hardware structure
- * @ice_seg: pointer to the segment of the package to be downloaded
+ * @pkg_hdr: pointer to package header
*
* Handles the download of a complete package.
*/
-static enum ice_ddp_state ice_download_pkg(struct ice_hw *hw,
- struct ice_seg *ice_seg)
+static enum ice_ddp_state
+ice_download_pkg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
{
- struct ice_buf_table *ice_buf_tbl;
+ enum ice_aq_err aq_err = hw->adminq.sq_last_status;
+ enum ice_ddp_state state = ICE_DDP_PKG_ERR;
int status;
+ u32 i;
- ice_debug(hw, ICE_DBG_PKG, "Segment format version: %d.%d.%d.%d\n",
- ice_seg->hdr.seg_format_ver.major,
- ice_seg->hdr.seg_format_ver.minor,
- ice_seg->hdr.seg_format_ver.update,
- ice_seg->hdr.seg_format_ver.draft);
+ ice_debug(hw, ICE_DBG_INIT, "Segment ID %d\n", hw->pkg_seg_id);
+ ice_debug(hw, ICE_DBG_INIT, "Signature type %d\n", hw->pkg_sign_type);
- ice_debug(hw, ICE_DBG_PKG, "Seg: type 0x%X, size %d, name %s\n",
- le32_to_cpu(ice_seg->hdr.seg_type),
- le32_to_cpu(ice_seg->hdr.seg_size), ice_seg->hdr.seg_id);
+ status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE);
+ if (status) {
+ if (status == -EALREADY)
+ state = ICE_DDP_PKG_ALREADY_LOADED;
+ else
+ state = ice_map_aq_err_to_ddp_state(aq_err);
+ return state;
+ }
- ice_buf_tbl = ice_find_buf_table(ice_seg);
+ for (i = 0; i < le32_to_cpu(pkg_hdr->seg_count); i++) {
+ if (!ice_is_signing_seg_type_at_idx(pkg_hdr, i, hw->pkg_seg_id,
+ hw->pkg_sign_type))
+ continue;
- ice_debug(hw, ICE_DBG_PKG, "Seg buf count: %d\n",
- le32_to_cpu(ice_buf_tbl->buf_count));
+ state = ice_dwnld_sign_and_cfg_segs(hw, pkg_hdr, i);
+ if (state)
+ break;
+ }
- status = ice_dwnld_cfg_bufs(hw, ice_buf_tbl->buf_array,
- le32_to_cpu(ice_buf_tbl->buf_count));
+ if (!state)
+ state = ice_post_dwnld_pkg_actions(hw);
+ ice_release_global_cfg_lock(hw);
ice_post_pkg_dwnld_vlan_mode_cfg(hw);
- return status;
+ return state;
+}
+
+/**
+ * ice_aq_get_pkg_info_list
+ * @hw: pointer to the hardware structure
+ * @pkg_info: the buffer which will receive the information list
+ * @buf_size: the size of the pkg_info information buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Get Package Info List (0x0C43)
+ */
+static int ice_aq_get_pkg_info_list(struct ice_hw *hw,
+ struct ice_aqc_get_pkg_info_resp *pkg_info,
+ u16 buf_size, struct ice_sq_cd *cd)
+{
+ struct ice_aq_desc desc;
+
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_pkg_info_list);
+
+ return ice_aq_send_cmd(hw, &desc, pkg_info, buf_size, cd);
}
/**
@@ -1498,6 +1701,73 @@ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type,
}
/**
+ * ice_has_signing_seg - determine if package has a signing segment
+ * @hw: pointer to the hardware structure
+ * @pkg_hdr: pointer to the driver's package hdr
+ */
+static bool ice_has_signing_seg(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
+{
+ struct ice_generic_seg_hdr *seg_hdr;
+
+ seg_hdr = (struct ice_generic_seg_hdr *)
+ ice_find_seg_in_pkg(hw, SEGMENT_TYPE_SIGNING, pkg_hdr);
+
+ return seg_hdr ? true : false;
+}
+
+/**
+ * ice_get_pkg_segment_id - get correct package segment id, based on device
+ * @mac_type: MAC type of the device
+ */
+static u32 ice_get_pkg_segment_id(enum ice_mac_type mac_type)
+{
+ u32 seg_id;
+
+ switch (mac_type) {
+ case ICE_MAC_E830:
+ seg_id = SEGMENT_TYPE_ICE_E830;
+ break;
+ case ICE_MAC_GENERIC:
+ default:
+ seg_id = SEGMENT_TYPE_ICE_E810;
+ break;
+ }
+
+ return seg_id;
+}
+
+/**
+ * ice_get_pkg_sign_type - get package segment sign type, based on device
+ * @mac_type: MAC type of the device
+ */
+static u32 ice_get_pkg_sign_type(enum ice_mac_type mac_type)
+{
+ u32 sign_type;
+
+ switch (mac_type) {
+ case ICE_MAC_E830:
+ sign_type = SEGMENT_SIGN_TYPE_RSA3K_SBB;
+ break;
+ case ICE_MAC_GENERIC:
+ default:
+ sign_type = SEGMENT_SIGN_TYPE_RSA2K;
+ break;
+ }
+
+ return sign_type;
+}
+
+/**
+ * ice_get_signing_req - get correct package requirements, based on device
+ * @hw: pointer to the hardware structure
+ */
+static void ice_get_signing_req(struct ice_hw *hw)
+{
+ hw->pkg_seg_id = ice_get_pkg_segment_id(hw->mac_type);
+ hw->pkg_sign_type = ice_get_pkg_sign_type(hw->mac_type);
+}
+
+/**
* ice_init_pkg_info
* @hw: pointer to the hardware structure
* @pkg_hdr: pointer to the driver's package hdr
@@ -1512,7 +1782,14 @@ static enum ice_ddp_state ice_init_pkg_info(struct ice_hw *hw,
if (!pkg_hdr)
return ICE_DDP_PKG_ERR;
- seg_hdr = ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg_hdr);
+ hw->pkg_has_signing_seg = ice_has_signing_seg(hw, pkg_hdr);
+ ice_get_signing_req(hw);
+
+ ice_debug(hw, ICE_DBG_INIT, "Pkg using segment id: 0x%08X\n",
+ hw->pkg_seg_id);
+
+ seg_hdr = (struct ice_generic_seg_hdr *)
+ ice_find_seg_in_pkg(hw, hw->pkg_seg_id, pkg_hdr);
if (seg_hdr) {
struct ice_meta_sect *meta;
struct ice_pkg_enum state;
@@ -1560,21 +1837,14 @@ static enum ice_ddp_state ice_init_pkg_info(struct ice_hw *hw,
*/
static enum ice_ddp_state ice_get_pkg_info(struct ice_hw *hw)
{
- enum ice_ddp_state state = ICE_DDP_PKG_SUCCESS;
- struct ice_aqc_get_pkg_info_resp *pkg_info;
- u16 size;
+ DEFINE_FLEX(struct ice_aqc_get_pkg_info_resp, pkg_info, pkg_info,
+ ICE_PKG_CNT);
+ u16 size = __struct_size(pkg_info);
u32 i;
- size = struct_size(pkg_info, pkg_info, ICE_PKG_CNT);
- pkg_info = kzalloc(size, GFP_KERNEL);
- if (!pkg_info)
+ if (ice_aq_get_pkg_info_list(hw, pkg_info, size, NULL))
return ICE_DDP_PKG_ERR;
- if (ice_aq_get_pkg_info_list(hw, pkg_info, size, NULL)) {
- state = ICE_DDP_PKG_ERR;
- goto init_pkg_free_alloc;
- }
-
for (i = 0; i < le32_to_cpu(pkg_info->count); i++) {
#define ICE_PKG_FLAG_COUNT 4
char flags[ICE_PKG_FLAG_COUNT + 1] = { 0 };
@@ -1604,10 +1874,7 @@ static enum ice_ddp_state ice_get_pkg_info(struct ice_hw *hw)
pkg_info->pkg_info[i].name, flags);
}
-init_pkg_free_alloc:
- kfree(pkg_info);
-
- return state;
+ return ICE_DDP_PKG_SUCCESS;
}
/**
@@ -1622,9 +1889,10 @@ static enum ice_ddp_state ice_chk_pkg_compat(struct ice_hw *hw,
struct ice_pkg_hdr *ospkg,
struct ice_seg **seg)
{
- struct ice_aqc_get_pkg_info_resp *pkg;
+ DEFINE_FLEX(struct ice_aqc_get_pkg_info_resp, pkg, pkg_info,
+ ICE_PKG_CNT);
+ u16 size = __struct_size(pkg);
enum ice_ddp_state state;
- u16 size;
u32 i;
/* Check package version compatibility */
@@ -1635,7 +1903,7 @@ static enum ice_ddp_state ice_chk_pkg_compat(struct ice_hw *hw,
}
/* find ICE segment in given package */
- *seg = (struct ice_seg *)ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE,
+ *seg = (struct ice_seg *)ice_find_seg_in_pkg(hw, hw->pkg_seg_id,
ospkg);
if (!*seg) {
ice_debug(hw, ICE_DBG_INIT, "no ice segment in package.\n");
@@ -1643,15 +1911,8 @@ static enum ice_ddp_state ice_chk_pkg_compat(struct ice_hw *hw,
}
/* Check if FW is compatible with the OS package */
- size = struct_size(pkg, pkg_info, ICE_PKG_CNT);
- pkg = kzalloc(size, GFP_KERNEL);
- if (!pkg)
- return ICE_DDP_PKG_ERR;
-
- if (ice_aq_get_pkg_info_list(hw, pkg, size, NULL)) {
- state = ICE_DDP_PKG_LOAD_ERROR;
- goto fw_ddp_compat_free_alloc;
- }
+ if (ice_aq_get_pkg_info_list(hw, pkg, size, NULL))
+ return ICE_DDP_PKG_LOAD_ERROR;
for (i = 0; i < le32_to_cpu(pkg->count); i++) {
/* loop till we find the NVM package */
@@ -1668,8 +1929,7 @@ static enum ice_ddp_state ice_chk_pkg_compat(struct ice_hw *hw,
/* done processing NVM package so break */
break;
}
-fw_ddp_compat_free_alloc:
- kfree(pkg);
+
return state;
}
@@ -1809,6 +2069,11 @@ enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
if (state)
return state;
+ /* must be a matching segment */
+ if (hw->pkg_has_signing_seg &&
+ !ice_match_signing_seg(pkg, hw->pkg_seg_id, hw->pkg_sign_type))
+ return ICE_DDP_PKG_ERR;
+
/* before downloading the package, check package version for
* compatibility with driver
*/
@@ -1818,7 +2083,7 @@ enum ice_ddp_state ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
/* initialize package hints and then download package */
ice_init_pkg_hints(hw, seg);
- state = ice_download_pkg(hw, seg);
+ state = ice_download_pkg(hw, pkg);
if (state == ICE_DDP_PKG_ALREADY_LOADED) {
ice_debug(hw, ICE_DBG_INIT,
"package previously loaded - no work.\n");
diff --git a/drivers/net/ethernet/intel/ice/ice_ddp.h b/drivers/net/ethernet/intel/ice/ice_ddp.h
index abb5f32f2ef4..ff66c2ffb1a2 100644
--- a/drivers/net/ethernet/intel/ice/ice_ddp.h
+++ b/drivers/net/ethernet/intel/ice/ice_ddp.h
@@ -98,10 +98,21 @@ struct ice_pkg_hdr {
__le32 seg_offset[];
};
+/* Package signing algorithm types */
+#define SEGMENT_SIGN_TYPE_INVALID 0x00000000
+#define SEGMENT_SIGN_TYPE_RSA2K 0x00000001
+#define SEGMENT_SIGN_TYPE_RSA3K 0x00000002
+#define SEGMENT_SIGN_TYPE_RSA3K_SBB 0x00000003 /* Secure Boot Block */
+#define SEGMENT_SIGN_TYPE_RSA3K_E825 0x00000005
+
/* generic segment */
struct ice_generic_seg_hdr {
-#define SEGMENT_TYPE_METADATA 0x00000001
-#define SEGMENT_TYPE_ICE 0x00000010
+#define SEGMENT_TYPE_INVALID 0x00000000
+#define SEGMENT_TYPE_METADATA 0x00000001
+#define SEGMENT_TYPE_ICE_E810 0x00000010
+#define SEGMENT_TYPE_SIGNING 0x00001001
+#define SEGMENT_TYPE_ICE_RUN_TIME_CFG 0x00000020
+#define SEGMENT_TYPE_ICE_E830 0x00000017
__le32 seg_type;
struct ice_pkg_ver seg_format_ver;
__le32 seg_size;
@@ -163,6 +174,18 @@ struct ice_global_metadata_seg {
#define ICE_MIN_S_SZ 1
#define ICE_MAX_S_SZ 4084
+struct ice_sign_seg {
+ struct ice_generic_seg_hdr hdr;
+ __le32 seg_id;
+ __le32 sign_type;
+ __le32 signed_seg_idx;
+ __le32 signed_buf_start;
+ __le32 signed_buf_count;
+#define ICE_SIGN_SEG_RESERVED_COUNT 44
+ u8 reserved[ICE_SIGN_SEG_RESERVED_COUNT];
+ struct ice_buf_table buf_tbl;
+};
+
/* section information */
struct ice_section_entry {
__le32 type;
diff --git a/drivers/net/ethernet/intel/ice/ice_devids.h b/drivers/net/ethernet/intel/ice/ice_devids.h
index 6d560d1c74a4..a2d384dbfc76 100644
--- a/drivers/net/ethernet/intel/ice/ice_devids.h
+++ b/drivers/net/ethernet/intel/ice/ice_devids.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2018, Intel Corporation. */
+/* Copyright (c) 2018-2023, Intel Corporation. */
#ifndef _ICE_DEVIDS_H_
#define _ICE_DEVIDS_H_
@@ -16,6 +16,14 @@
#define ICE_DEV_ID_E823L_1GBE 0x124F
/* Intel(R) Ethernet Connection E823-L for QSFP */
#define ICE_DEV_ID_E823L_QSFP 0x151D
+/* Intel(R) Ethernet Controller E830-C for backplane */
+#define ICE_DEV_ID_E830_BACKPLANE 0x12D1
+/* Intel(R) Ethernet Controller E830-C for QSFP */
+#define ICE_DEV_ID_E830_QSFP56 0x12D2
+/* Intel(R) Ethernet Controller E830-C for SFP */
+#define ICE_DEV_ID_E830_SFP 0x12D3
+/* Intel(R) Ethernet Controller E830-C for SFP-DD */
+#define ICE_DEV_ID_E830_SFP_DD 0x12D4
/* Intel(R) Ethernet Controller E810-C for backplane */
#define ICE_DEV_ID_E810C_BACKPLANE 0x1591
/* Intel(R) Ethernet Controller E810-C for QSFP */
diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c
new file mode 100644
index 000000000000..835c419ccc74
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_dpll.c
@@ -0,0 +1,2120 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2022, Intel Corporation. */
+
+#include "ice.h"
+#include "ice_lib.h"
+#include "ice_trace.h"
+#include <linux/dpll.h>
+
+#define ICE_CGU_STATE_ACQ_ERR_THRESHOLD 50
+#define ICE_DPLL_PIN_IDX_INVALID 0xff
+#define ICE_DPLL_RCLK_NUM_PER_PF 1
+
+/**
+ * enum ice_dpll_pin_type - enumerate ice pin types:
+ * @ICE_DPLL_PIN_INVALID: invalid pin type
+ * @ICE_DPLL_PIN_TYPE_INPUT: input pin
+ * @ICE_DPLL_PIN_TYPE_OUTPUT: output pin
+ * @ICE_DPLL_PIN_TYPE_RCLK_INPUT: recovery clock input pin
+ */
+enum ice_dpll_pin_type {
+ ICE_DPLL_PIN_INVALID,
+ ICE_DPLL_PIN_TYPE_INPUT,
+ ICE_DPLL_PIN_TYPE_OUTPUT,
+ ICE_DPLL_PIN_TYPE_RCLK_INPUT,
+};
+
+static const char * const pin_type_name[] = {
+ [ICE_DPLL_PIN_TYPE_INPUT] = "input",
+ [ICE_DPLL_PIN_TYPE_OUTPUT] = "output",
+ [ICE_DPLL_PIN_TYPE_RCLK_INPUT] = "rclk-input",
+};
+
+/**
+ * ice_dpll_pin_freq_set - set pin's frequency
+ * @pf: private board structure
+ * @pin: pointer to a pin
+ * @pin_type: type of pin being configured
+ * @freq: frequency to be set
+ * @extack: error reporting
+ *
+ * Set requested frequency on a pin.
+ *
+ * Context: Called under pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error on AQ or wrong pin type given
+ */
+static int
+ice_dpll_pin_freq_set(struct ice_pf *pf, struct ice_dpll_pin *pin,
+ enum ice_dpll_pin_type pin_type, const u32 freq,
+ struct netlink_ext_ack *extack)
+{
+ u8 flags;
+ int ret;
+
+ switch (pin_type) {
+ case ICE_DPLL_PIN_TYPE_INPUT:
+ flags = ICE_AQC_SET_CGU_IN_CFG_FLG1_UPDATE_FREQ;
+ ret = ice_aq_set_input_pin_cfg(&pf->hw, pin->idx, flags,
+ pin->flags[0], freq, 0);
+ break;
+ case ICE_DPLL_PIN_TYPE_OUTPUT:
+ flags = ICE_AQC_SET_CGU_OUT_CFG_UPDATE_FREQ;
+ ret = ice_aq_set_output_pin_cfg(&pf->hw, pin->idx, flags,
+ 0, freq, 0);
+ break;
+ default:
+ return -EINVAL;
+ }
+ if (ret) {
+ NL_SET_ERR_MSG_FMT(extack,
+ "err:%d %s failed to set pin freq:%u on pin:%u\n",
+ ret,
+ ice_aq_str(pf->hw.adminq.sq_last_status),
+ freq, pin->idx);
+ return ret;
+ }
+ pin->freq = freq;
+
+ return 0;
+}
+
+/**
+ * ice_dpll_frequency_set - wrapper for pin callback for set frequency
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: pointer to dpll
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @frequency: frequency to be set
+ * @extack: error reporting
+ * @pin_type: type of pin being configured
+ *
+ * Wraps internal set frequency command on a pin.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error pin not found or couldn't set in hw
+ */
+static int
+ice_dpll_frequency_set(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ const u32 frequency,
+ struct netlink_ext_ack *extack,
+ enum ice_dpll_pin_type pin_type)
+{
+ struct ice_dpll_pin *p = pin_priv;
+ struct ice_dpll *d = dpll_priv;
+ struct ice_pf *pf = d->pf;
+ int ret;
+
+ mutex_lock(&pf->dplls.lock);
+ ret = ice_dpll_pin_freq_set(pf, p, pin_type, frequency, extack);
+ mutex_unlock(&pf->dplls.lock);
+
+ return ret;
+}
+
+/**
+ * ice_dpll_input_frequency_set - input pin callback for set frequency
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: pointer to dpll
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @frequency: frequency to be set
+ * @extack: error reporting
+ *
+ * Wraps internal set frequency command on a pin.
+ *
+ * Context: Calls a function which acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error pin not found or couldn't set in hw
+ */
+static int
+ice_dpll_input_frequency_set(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ u64 frequency, struct netlink_ext_ack *extack)
+{
+ return ice_dpll_frequency_set(pin, pin_priv, dpll, dpll_priv, frequency,
+ extack, ICE_DPLL_PIN_TYPE_INPUT);
+}
+
+/**
+ * ice_dpll_output_frequency_set - output pin callback for set frequency
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: pointer to dpll
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @frequency: frequency to be set
+ * @extack: error reporting
+ *
+ * Wraps internal set frequency command on a pin.
+ *
+ * Context: Calls a function which acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error pin not found or couldn't set in hw
+ */
+static int
+ice_dpll_output_frequency_set(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ u64 frequency, struct netlink_ext_ack *extack)
+{
+ return ice_dpll_frequency_set(pin, pin_priv, dpll, dpll_priv, frequency,
+ extack, ICE_DPLL_PIN_TYPE_OUTPUT);
+}
+
+/**
+ * ice_dpll_frequency_get - wrapper for pin callback for get frequency
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: pointer to dpll
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @frequency: on success holds pin's frequency
+ * @extack: error reporting
+ * @pin_type: type of pin being configured
+ *
+ * Wraps internal get frequency command of a pin.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error pin not found or couldn't get from hw
+ */
+static int
+ice_dpll_frequency_get(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ u64 *frequency, struct netlink_ext_ack *extack,
+ enum ice_dpll_pin_type pin_type)
+{
+ struct ice_dpll_pin *p = pin_priv;
+ struct ice_dpll *d = dpll_priv;
+ struct ice_pf *pf = d->pf;
+
+ mutex_lock(&pf->dplls.lock);
+ *frequency = p->freq;
+ mutex_unlock(&pf->dplls.lock);
+
+ return 0;
+}
+
+/**
+ * ice_dpll_input_frequency_get - input pin callback for get frequency
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: pointer to dpll
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @frequency: on success holds pin's frequency
+ * @extack: error reporting
+ *
+ * Wraps internal get frequency command of an input pin.
+ *
+ * Context: Calls a function which acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error pin not found or couldn't get from hw
+ */
+static int
+ice_dpll_input_frequency_get(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ u64 *frequency, struct netlink_ext_ack *extack)
+{
+ return ice_dpll_frequency_get(pin, pin_priv, dpll, dpll_priv, frequency,
+ extack, ICE_DPLL_PIN_TYPE_INPUT);
+}
+
+/**
+ * ice_dpll_output_frequency_get - output pin callback for get frequency
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: pointer to dpll
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @frequency: on success holds pin's frequency
+ * @extack: error reporting
+ *
+ * Wraps internal get frequency command of a pin.
+ *
+ * Context: Calls a function which acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error pin not found or couldn't get from hw
+ */
+static int
+ice_dpll_output_frequency_get(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ u64 *frequency, struct netlink_ext_ack *extack)
+{
+ return ice_dpll_frequency_get(pin, pin_priv, dpll, dpll_priv, frequency,
+ extack, ICE_DPLL_PIN_TYPE_OUTPUT);
+}
+
+/**
+ * ice_dpll_pin_enable - enable a pin on dplls
+ * @hw: board private hw structure
+ * @pin: pointer to a pin
+ * @pin_type: type of pin being enabled
+ * @extack: error reporting
+ *
+ * Enable a pin on both dplls. Store current state in pin->flags.
+ *
+ * Context: Called under pf->dplls.lock
+ * Return:
+ * * 0 - OK
+ * * negative - error
+ */
+static int
+ice_dpll_pin_enable(struct ice_hw *hw, struct ice_dpll_pin *pin,
+ enum ice_dpll_pin_type pin_type,
+ struct netlink_ext_ack *extack)
+{
+ u8 flags = 0;
+ int ret;
+
+ switch (pin_type) {
+ case ICE_DPLL_PIN_TYPE_INPUT:
+ if (pin->flags[0] & ICE_AQC_GET_CGU_IN_CFG_FLG2_ESYNC_EN)
+ flags |= ICE_AQC_SET_CGU_IN_CFG_FLG2_ESYNC_EN;
+ flags |= ICE_AQC_SET_CGU_IN_CFG_FLG2_INPUT_EN;
+ ret = ice_aq_set_input_pin_cfg(hw, pin->idx, 0, flags, 0, 0);
+ break;
+ case ICE_DPLL_PIN_TYPE_OUTPUT:
+ if (pin->flags[0] & ICE_AQC_GET_CGU_OUT_CFG_ESYNC_EN)
+ flags |= ICE_AQC_SET_CGU_OUT_CFG_ESYNC_EN;
+ flags |= ICE_AQC_SET_CGU_OUT_CFG_OUT_EN;
+ ret = ice_aq_set_output_pin_cfg(hw, pin->idx, flags, 0, 0, 0);
+ break;
+ default:
+ return -EINVAL;
+ }
+ if (ret)
+ NL_SET_ERR_MSG_FMT(extack,
+ "err:%d %s failed to enable %s pin:%u\n",
+ ret, ice_aq_str(hw->adminq.sq_last_status),
+ pin_type_name[pin_type], pin->idx);
+
+ return ret;
+}
+
+/**
+ * ice_dpll_pin_disable - disable a pin on dplls
+ * @hw: board private hw structure
+ * @pin: pointer to a pin
+ * @pin_type: type of pin being disabled
+ * @extack: error reporting
+ *
+ * Disable a pin on both dplls. Store current state in pin->flags.
+ *
+ * Context: Called under pf->dplls.lock
+ * Return:
+ * * 0 - OK
+ * * negative - error
+ */
+static int
+ice_dpll_pin_disable(struct ice_hw *hw, struct ice_dpll_pin *pin,
+ enum ice_dpll_pin_type pin_type,
+ struct netlink_ext_ack *extack)
+{
+ u8 flags = 0;
+ int ret;
+
+ switch (pin_type) {
+ case ICE_DPLL_PIN_TYPE_INPUT:
+ if (pin->flags[0] & ICE_AQC_GET_CGU_IN_CFG_FLG2_ESYNC_EN)
+ flags |= ICE_AQC_SET_CGU_IN_CFG_FLG2_ESYNC_EN;
+ ret = ice_aq_set_input_pin_cfg(hw, pin->idx, 0, flags, 0, 0);
+ break;
+ case ICE_DPLL_PIN_TYPE_OUTPUT:
+ if (pin->flags[0] & ICE_AQC_GET_CGU_OUT_CFG_ESYNC_EN)
+ flags |= ICE_AQC_SET_CGU_OUT_CFG_ESYNC_EN;
+ ret = ice_aq_set_output_pin_cfg(hw, pin->idx, flags, 0, 0, 0);
+ break;
+ default:
+ return -EINVAL;
+ }
+ if (ret)
+ NL_SET_ERR_MSG_FMT(extack,
+ "err:%d %s failed to disable %s pin:%u\n",
+ ret, ice_aq_str(hw->adminq.sq_last_status),
+ pin_type_name[pin_type], pin->idx);
+
+ return ret;
+}
+
+/**
+ * ice_dpll_pin_state_update - update pin's state
+ * @pf: private board struct
+ * @pin: structure with pin attributes to be updated
+ * @pin_type: type of pin being updated
+ * @extack: error reporting
+ *
+ * Determine the pin's current state and frequency, then update the struct
+ * holding the pin info. For input pins the state is tracked separately for
+ * each dpll; for rclk pins it is tracked separately for each parent.
+ *
+ * Context: Called under pf->dplls.lock
+ * Return:
+ * * 0 - OK
+ * * negative - error
+ */
+static int
+ice_dpll_pin_state_update(struct ice_pf *pf, struct ice_dpll_pin *pin,
+ enum ice_dpll_pin_type pin_type,
+ struct netlink_ext_ack *extack)
+{
+ u8 parent, port_num = ICE_AQC_SET_PHY_REC_CLK_OUT_CURR_PORT;
+ int ret;
+
+ switch (pin_type) {
+ case ICE_DPLL_PIN_TYPE_INPUT:
+ ret = ice_aq_get_input_pin_cfg(&pf->hw, pin->idx, NULL, NULL,
+ NULL, &pin->flags[0],
+ &pin->freq, NULL);
+ if (ret)
+ goto err;
+ if (ICE_AQC_GET_CGU_IN_CFG_FLG2_INPUT_EN & pin->flags[0]) {
+ if (pin->pin) {
+ pin->state[pf->dplls.eec.dpll_idx] =
+ pin->pin == pf->dplls.eec.active_input ?
+ DPLL_PIN_STATE_CONNECTED :
+ DPLL_PIN_STATE_SELECTABLE;
+ pin->state[pf->dplls.pps.dpll_idx] =
+ pin->pin == pf->dplls.pps.active_input ?
+ DPLL_PIN_STATE_CONNECTED :
+ DPLL_PIN_STATE_SELECTABLE;
+ } else {
+ pin->state[pf->dplls.eec.dpll_idx] =
+ DPLL_PIN_STATE_SELECTABLE;
+ pin->state[pf->dplls.pps.dpll_idx] =
+ DPLL_PIN_STATE_SELECTABLE;
+ }
+ } else {
+ pin->state[pf->dplls.eec.dpll_idx] =
+ DPLL_PIN_STATE_DISCONNECTED;
+ pin->state[pf->dplls.pps.dpll_idx] =
+ DPLL_PIN_STATE_DISCONNECTED;
+ }
+ break;
+ case ICE_DPLL_PIN_TYPE_OUTPUT:
+ ret = ice_aq_get_output_pin_cfg(&pf->hw, pin->idx,
+ &pin->flags[0], NULL,
+ &pin->freq, NULL);
+ if (ret)
+ goto err;
+ if (ICE_AQC_SET_CGU_OUT_CFG_OUT_EN & pin->flags[0])
+ pin->state[0] = DPLL_PIN_STATE_CONNECTED;
+ else
+ pin->state[0] = DPLL_PIN_STATE_DISCONNECTED;
+ break;
+ case ICE_DPLL_PIN_TYPE_RCLK_INPUT:
+ for (parent = 0; parent < pf->dplls.rclk.num_parents;
+ parent++) {
+ u8 p = parent;
+
+ ret = ice_aq_get_phy_rec_clk_out(&pf->hw, &p,
+ &port_num,
+ &pin->flags[parent],
+ NULL);
+ if (ret)
+ goto err;
+ if (ICE_AQC_GET_PHY_REC_CLK_OUT_OUT_EN &
+ pin->flags[parent])
+ pin->state[parent] = DPLL_PIN_STATE_CONNECTED;
+ else
+ pin->state[parent] =
+ DPLL_PIN_STATE_DISCONNECTED;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+err:
+ if (extack)
+ NL_SET_ERR_MSG_FMT(extack,
+ "err:%d %s failed to update %s pin:%u\n",
+ ret,
+ ice_aq_str(pf->hw.adminq.sq_last_status),
+ pin_type_name[pin_type], pin->idx);
+ else
+ dev_err_ratelimited(ice_pf_to_dev(pf),
+ "err:%d %s failed to update %s pin:%u\n",
+ ret,
+ ice_aq_str(pf->hw.adminq.sq_last_status),
+ pin_type_name[pin_type], pin->idx);
+ return ret;
+}
+
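
For input pins, the state update above boils down to: a disabled pin is DISCONNECTED on both dplls, while an enabled pin is CONNECTED on the dpll that currently uses it as its active input and SELECTABLE otherwise. A minimal standalone restatement of that mapping, with invented names:

#include <stdbool.h>
#include <stdio.h>

enum toy_pin_state { TOY_DISCONNECTED, TOY_SELECTABLE, TOY_CONNECTED };

/* Per-dpll state of an input pin, derived from the AQ enable flag and
 * from whether this pin is the dpll's currently active input. */
static enum toy_pin_state
toy_input_state(bool input_en, bool active_on_this_dpll)
{
	if (!input_en)
		return TOY_DISCONNECTED;
	return active_on_this_dpll ? TOY_CONNECTED : TOY_SELECTABLE;
}

int main(void)
{
	printf("%d %d %d\n",
	       toy_input_state(false, false),	/* 0: DISCONNECTED */
	       toy_input_state(true, false),	/* 1: SELECTABLE   */
	       toy_input_state(true, true));	/* 2: CONNECTED    */
	return 0;
}
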
+/**
+ * ice_dpll_hw_input_prio_set - set input priority value in hardware
+ * @pf: board private structure
+ * @dpll: ice dpll pointer
+ * @pin: ice pin pointer
+ * @prio: priority value being set on a dpll
+ * @extack: error reporting
+ *
+ * Internal wrapper for setting the priority in the hardware.
+ *
+ * Context: Called under pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - failure
+ */
+static int
+ice_dpll_hw_input_prio_set(struct ice_pf *pf, struct ice_dpll *dpll,
+ struct ice_dpll_pin *pin, const u32 prio,
+ struct netlink_ext_ack *extack)
+{
+ int ret;
+
+ ret = ice_aq_set_cgu_ref_prio(&pf->hw, dpll->dpll_idx, pin->idx,
+ (u8)prio);
+ if (ret)
+ NL_SET_ERR_MSG_FMT(extack,
+ "err:%d %s failed to set pin prio:%u on pin:%u\n",
+ ret,
+ ice_aq_str(pf->hw.adminq.sq_last_status),
+ prio, pin->idx);
+ else
+ dpll->input_prio[pin->idx] = prio;
+
+ return ret;
+}
+
+/**
+ * ice_dpll_lock_status_get - get dpll lock status callback
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @status: on success holds dpll's lock status
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback, provides dpll's lock status.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - failure
+ */
+static int
+ice_dpll_lock_status_get(const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_lock_status *status,
+ struct netlink_ext_ack *extack)
+{
+ struct ice_dpll *d = dpll_priv;
+ struct ice_pf *pf = d->pf;
+
+ mutex_lock(&pf->dplls.lock);
+ *status = d->dpll_state;
+ mutex_unlock(&pf->dplls.lock);
+
+ return 0;
+}
+
+/**
+ * ice_dpll_mode_supported - check if dpll's working mode is supported
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @mode: mode to be checked for support
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Reports whether the given working mode is
+ * supported by the dpll.
+ *
+ * Return:
+ * * true - mode is supported
+ * * false - mode is not supported
+ */
+static bool ice_dpll_mode_supported(const struct dpll_device *dpll,
+ void *dpll_priv,
+ enum dpll_mode mode,
+ struct netlink_ext_ack *extack)
+{
+ if (mode == DPLL_MODE_AUTOMATIC)
+ return true;
+
+ return false;
+}
+
+/**
+ * ice_dpll_mode_get - get dpll's working mode
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @mode: on success holds current working mode of dpll
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Provides working mode of dpll.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - failure
+ */
+static int ice_dpll_mode_get(const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_mode *mode,
+ struct netlink_ext_ack *extack)
+{
+ struct ice_dpll *d = dpll_priv;
+ struct ice_pf *pf = d->pf;
+
+ mutex_lock(&pf->dplls.lock);
+ *mode = d->mode;
+ mutex_unlock(&pf->dplls.lock);
+
+ return 0;
+}
+
+/**
+ * ice_dpll_pin_state_set - set pin's state on dpll
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @enable: if pin shall be enabled
+ * @extack: error reporting
+ * @pin_type: type of a pin
+ *
+ * Set pin state on a pin.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - OK or no change required
+ * * negative - error
+ */
+static int
+ice_dpll_pin_state_set(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ bool enable, struct netlink_ext_ack *extack,
+ enum ice_dpll_pin_type pin_type)
+{
+ struct ice_dpll_pin *p = pin_priv;
+ struct ice_dpll *d = dpll_priv;
+ struct ice_pf *pf = d->pf;
+ int ret;
+
+ mutex_lock(&pf->dplls.lock);
+ if (enable)
+ ret = ice_dpll_pin_enable(&pf->hw, p, pin_type, extack);
+ else
+ ret = ice_dpll_pin_disable(&pf->hw, p, pin_type, extack);
+ if (!ret)
+ ret = ice_dpll_pin_state_update(pf, p, pin_type, extack);
+ mutex_unlock(&pf->dplls.lock);
+
+ return ret;
+}
+
+/**
+ * ice_dpll_output_state_set - enable/disable output pin on dpll device
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: dpll being configured
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @state: state of pin to be set
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Set given state on output type pin.
+ *
+ * Context: Calls a function which acquires pf->dplls.lock
+ * Return:
+ * * 0 - successfully enabled mode
+ * * negative - failed to enable mode
+ */
+static int
+ice_dpll_output_state_set(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_pin_state state,
+ struct netlink_ext_ack *extack)
+{
+ bool enable = state == DPLL_PIN_STATE_CONNECTED;
+
+ return ice_dpll_pin_state_set(pin, pin_priv, dpll, dpll_priv, enable,
+ extack, ICE_DPLL_PIN_TYPE_OUTPUT);
+}
+
+/**
+ * ice_dpll_input_state_set - enable/disable input pin on dpll device
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: dpll being configured
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @state: state of pin to be set
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Enables given mode on input type pin.
+ *
+ * Context: Calls a function which acquires pf->dplls.lock
+ * Return:
+ * * 0 - successfully enabled mode
+ * * negative - failed to enable mode
+ */
+static int
+ice_dpll_input_state_set(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_pin_state state,
+ struct netlink_ext_ack *extack)
+{
+ bool enable = state == DPLL_PIN_STATE_SELECTABLE;
+
+ return ice_dpll_pin_state_set(pin, pin_priv, dpll, dpll_priv, enable,
+ extack, ICE_DPLL_PIN_TYPE_INPUT);
+}
+
+/**
+ * ice_dpll_pin_state_get - get pin's state on dpll
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @state: on success holds state of the pin
+ * @extack: error reporting
+ * @pin_type: type of questioned pin
+ *
+ * Determine a pin's state and return it to the caller.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - failed to get state
+ */
+static int
+ice_dpll_pin_state_get(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_pin_state *state,
+ struct netlink_ext_ack *extack,
+ enum ice_dpll_pin_type pin_type)
+{
+ struct ice_dpll_pin *p = pin_priv;
+ struct ice_dpll *d = dpll_priv;
+ struct ice_pf *pf = d->pf;
+ int ret;
+
+ mutex_lock(&pf->dplls.lock);
+ ret = ice_dpll_pin_state_update(pf, p, pin_type, extack);
+ if (ret)
+ goto unlock;
+ if (pin_type == ICE_DPLL_PIN_TYPE_INPUT)
+ *state = p->state[d->dpll_idx];
+ else if (pin_type == ICE_DPLL_PIN_TYPE_OUTPUT)
+ *state = p->state[0];
+ ret = 0;
+unlock:
+ mutex_unlock(&pf->dplls.lock);
+
+ return ret;
+}
+
+/**
+ * ice_dpll_output_state_get - get output pin state on dpll device
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @state: on success holds state of the pin
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Check state of a pin.
+ *
+ * Context: Calls a function which acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - failed to get state
+ */
+static int
+ice_dpll_output_state_get(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_pin_state *state,
+ struct netlink_ext_ack *extack)
+{
+ return ice_dpll_pin_state_get(pin, pin_priv, dpll, dpll_priv, state,
+ extack, ICE_DPLL_PIN_TYPE_OUTPUT);
+}
+
+/**
+ * ice_dpll_input_state_get - get input pin state on dpll device
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @state: on success holds state of the pin
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Check state of an input pin.
+ *
+ * Context: Calls a function which acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - failed to get state
+ */
+static int
+ice_dpll_input_state_get(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_pin_state *state,
+ struct netlink_ext_ack *extack)
+{
+ return ice_dpll_pin_state_get(pin, pin_priv, dpll, dpll_priv, state,
+ extack, ICE_DPLL_PIN_TYPE_INPUT);
+}
+
+/**
+ * ice_dpll_input_prio_get - get dpll's input prio
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @prio: on success - returns input priority on dpll
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Handler for getting priority of an input pin.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - failure
+ */
+static int
+ice_dpll_input_prio_get(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ u32 *prio, struct netlink_ext_ack *extack)
+{
+ struct ice_dpll_pin *p = pin_priv;
+ struct ice_dpll *d = dpll_priv;
+ struct ice_pf *pf = d->pf;
+
+ mutex_lock(&pf->dplls.lock);
+ *prio = d->input_prio[p->idx];
+ mutex_unlock(&pf->dplls.lock);
+
+ return 0;
+}
+
+/**
+ * ice_dpll_input_prio_set - set dpll input prio
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @prio: input priority to be set on dpll
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Handler for setting priority of an input pin.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - failure
+ */
+static int
+ice_dpll_input_prio_set(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ u32 prio, struct netlink_ext_ack *extack)
+{
+ struct ice_dpll_pin *p = pin_priv;
+ struct ice_dpll *d = dpll_priv;
+ struct ice_pf *pf = d->pf;
+ int ret;
+
+ if (prio > ICE_DPLL_PRIO_MAX) {
+ NL_SET_ERR_MSG_FMT(extack, "prio out of supported range 0-%d",
+ ICE_DPLL_PRIO_MAX);
+ return -EINVAL;
+ }
+
+ mutex_lock(&pf->dplls.lock);
+ ret = ice_dpll_hw_input_prio_set(pf, d, p, prio, extack);
+ mutex_unlock(&pf->dplls.lock);
+
+ return ret;
+}
+
+/**
+ * ice_dpll_input_direction - callback for get input pin direction
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @direction: holds input pin direction
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Handler for getting direction of an input pin.
+ *
+ * Return:
+ * * 0 - success
+ */
+static int
+ice_dpll_input_direction(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_pin_direction *direction,
+ struct netlink_ext_ack *extack)
+{
+ *direction = DPLL_PIN_DIRECTION_INPUT;
+
+ return 0;
+}
+
+/**
+ * ice_dpll_output_direction - callback for get output pin direction
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @direction: holds output pin direction
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Handler for getting direction of an output pin.
+ *
+ * Return:
+ * * 0 - success
+ */
+static int
+ice_dpll_output_direction(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_pin_direction *direction,
+ struct netlink_ext_ack *extack)
+{
+ *direction = DPLL_PIN_DIRECTION_OUTPUT;
+
+ return 0;
+}
+
+/**
+ * ice_dpll_pin_phase_adjust_get - callback for get pin phase adjust value
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @phase_adjust: on success holds pin phase_adjust value
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Handler for getting phase adjust value of a pin.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error
+ */
+static int
+ice_dpll_pin_phase_adjust_get(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ s32 *phase_adjust,
+ struct netlink_ext_ack *extack)
+{
+ struct ice_dpll_pin *p = pin_priv;
+ struct ice_pf *pf = p->pf;
+
+ mutex_lock(&pf->dplls.lock);
+ *phase_adjust = p->phase_adjust;
+ mutex_unlock(&pf->dplls.lock);
+
+ return 0;
+}
+
+/**
+ * ice_dpll_pin_phase_adjust_set - helper for setting a pin phase adjust value
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @phase_adjust: phase_adjust to be set
+ * @extack: error reporting
+ * @type: type of a pin
+ *
+ * Helper for dpll subsystem callback. Handler for setting phase adjust value
+ * of a pin.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error
+ */
+static int
+ice_dpll_pin_phase_adjust_set(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ s32 phase_adjust,
+ struct netlink_ext_ack *extack,
+ enum ice_dpll_pin_type type)
+{
+ struct ice_dpll_pin *p = pin_priv;
+ struct ice_dpll *d = dpll_priv;
+ struct ice_pf *pf = d->pf;
+ u8 flag, flags_en = 0;
+ int ret;
+
+ mutex_lock(&pf->dplls.lock);
+ switch (type) {
+ case ICE_DPLL_PIN_TYPE_INPUT:
+ flag = ICE_AQC_SET_CGU_IN_CFG_FLG1_UPDATE_DELAY;
+ if (p->flags[0] & ICE_AQC_GET_CGU_IN_CFG_FLG2_ESYNC_EN)
+ flags_en |= ICE_AQC_SET_CGU_IN_CFG_FLG2_ESYNC_EN;
+ if (p->flags[0] & ICE_AQC_GET_CGU_IN_CFG_FLG2_INPUT_EN)
+ flags_en |= ICE_AQC_SET_CGU_IN_CFG_FLG2_INPUT_EN;
+ ret = ice_aq_set_input_pin_cfg(&pf->hw, p->idx, flag, flags_en,
+ 0, phase_adjust);
+ break;
+ case ICE_DPLL_PIN_TYPE_OUTPUT:
+ flag = ICE_AQC_SET_CGU_OUT_CFG_UPDATE_PHASE;
+ if (p->flags[0] & ICE_AQC_GET_CGU_OUT_CFG_OUT_EN)
+ flag |= ICE_AQC_SET_CGU_OUT_CFG_OUT_EN;
+ if (p->flags[0] & ICE_AQC_GET_CGU_OUT_CFG_ESYNC_EN)
+ flag |= ICE_AQC_SET_CGU_OUT_CFG_ESYNC_EN;
+ ret = ice_aq_set_output_pin_cfg(&pf->hw, p->idx, flag, 0, 0,
+ phase_adjust);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ if (!ret)
+ p->phase_adjust = phase_adjust;
+ mutex_unlock(&pf->dplls.lock);
+ if (ret)
+ NL_SET_ERR_MSG_FMT(extack,
+ "err:%d %s failed to set pin phase_adjust:%d for pin:%u on dpll:%u\n",
+ ret,
+ ice_aq_str(pf->hw.adminq.sq_last_status),
+ phase_adjust, p->idx, d->dpll_idx);
+
+ return ret;
+}
+
+/**
+ * ice_dpll_input_phase_adjust_set - callback for set input pin phase adjust
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @phase_adjust: phase_adjust to be set
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Wraps a handler for setting phase adjust on input
+ * pin.
+ *
+ * Context: Calls a function which acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error
+ */
+static int
+ice_dpll_input_phase_adjust_set(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ s32 phase_adjust,
+ struct netlink_ext_ack *extack)
+{
+ return ice_dpll_pin_phase_adjust_set(pin, pin_priv, dpll, dpll_priv,
+ phase_adjust, extack,
+ ICE_DPLL_PIN_TYPE_INPUT);
+}
+
+/**
+ * ice_dpll_output_phase_adjust_set - callback for set output pin phase adjust
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @phase_adjust: phase_adjust to be set
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Wraps a handler for setting phase adjust on output
+ * pin.
+ *
+ * Context: Calls a function which acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error
+ */
+static int
+ice_dpll_output_phase_adjust_set(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ s32 phase_adjust,
+ struct netlink_ext_ack *extack)
+{
+ return ice_dpll_pin_phase_adjust_set(pin, pin_priv, dpll, dpll_priv,
+ phase_adjust, extack,
+ ICE_DPLL_PIN_TYPE_OUTPUT);
+}
+
+#define ICE_DPLL_PHASE_OFFSET_DIVIDER 100
+#define ICE_DPLL_PHASE_OFFSET_FACTOR \
+ (DPLL_PHASE_OFFSET_DIVIDER / ICE_DPLL_PHASE_OFFSET_DIVIDER)
+/**
+ * ice_dpll_phase_offset_get - callback for get dpll phase shift value
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @dpll: registered dpll pointer
+ * @dpll_priv: private data pointer passed on dpll registration
+ * @phase_offset: on success holds pin phase_offset value
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback. Handler for getting phase shift value between
+ * dpll's input and output.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - error
+ */
+static int
+ice_dpll_phase_offset_get(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ s64 *phase_offset, struct netlink_ext_ack *extack)
+{
+ struct ice_dpll *d = dpll_priv;
+ struct ice_pf *pf = d->pf;
+
+ mutex_lock(&pf->dplls.lock);
+ if (d->active_input == pin)
+ *phase_offset = d->phase_offset * ICE_DPLL_PHASE_OFFSET_FACTOR;
+ else
+ *phase_offset = 0;
+ mutex_unlock(&pf->dplls.lock);
+
+ return 0;
+}
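/*
 * Illustrative sketch only - not part of the patch above. It mirrors the
 * rescaling done in ice_dpll_phase_offset_get(): the firmware value is
 * multiplied by ICE_DPLL_PHASE_OFFSET_FACTOR to match the dpll subsystem's
 * finer-grained representation. The uapi divider value of 1000 used below
 * is an assumption made for the sake of the example.
 */
#include <stdio.h>

#define EXAMPLE_DPLL_PHASE_OFFSET_DIVIDER	1000	/* assumed uapi value */
#define EXAMPLE_ICE_PHASE_OFFSET_DIVIDER	100
#define EXAMPLE_PHASE_OFFSET_FACTOR \
	(EXAMPLE_DPLL_PHASE_OFFSET_DIVIDER / EXAMPLE_ICE_PHASE_OFFSET_DIVIDER)

int main(void)
{
	long long fw_phase_offset = -123;	/* hypothetical firmware readout */

	/* prints -1230: the value handed back to the dpll subsystem */
	printf("%lld\n", fw_phase_offset * EXAMPLE_PHASE_OFFSET_FACTOR);
	return 0;
}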
+
+/**
+ * ice_dpll_rclk_state_on_pin_set - set a state on rclk pin
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @parent_pin: pin parent pointer
+ * @parent_pin_priv: parent private data pointer passed on pin registration
+ * @state: state to be set on pin
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback, set the state of a rclk pin on a parent pin.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - failure
+ */
+static int
+ice_dpll_rclk_state_on_pin_set(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_pin *parent_pin,
+ void *parent_pin_priv,
+ enum dpll_pin_state state,
+ struct netlink_ext_ack *extack)
+{
+ struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv;
+ bool enable = state == DPLL_PIN_STATE_CONNECTED;
+ struct ice_pf *pf = p->pf;
+ int ret = -EINVAL;
+ u32 hw_idx;
+
+ mutex_lock(&pf->dplls.lock);
+ hw_idx = parent->idx - pf->dplls.base_rclk_idx;
+ if (hw_idx >= pf->dplls.num_inputs)
+ goto unlock;
+
+ if ((enable && p->state[hw_idx] == DPLL_PIN_STATE_CONNECTED) ||
+ (!enable && p->state[hw_idx] == DPLL_PIN_STATE_DISCONNECTED)) {
+ NL_SET_ERR_MSG_FMT(extack,
+ "pin:%u state:%u on parent:%u already set",
+ p->idx, state, parent->idx);
+ goto unlock;
+ }
+ ret = ice_aq_set_phy_rec_clk_out(&pf->hw, hw_idx, enable,
+ &p->freq);
+ if (ret)
+ NL_SET_ERR_MSG_FMT(extack,
+ "err:%d %s failed to set pin state:%u for pin:%u on parent:%u\n",
+ ret,
+ ice_aq_str(pf->hw.adminq.sq_last_status),
+ state, p->idx, parent->idx);
+unlock:
+ mutex_unlock(&pf->dplls.lock);
+
+ return ret;
+}
+
+/**
+ * ice_dpll_rclk_state_on_pin_get - get a state of rclk pin
+ * @pin: pointer to a pin
+ * @pin_priv: private data pointer passed on pin registration
+ * @parent_pin: pin parent pointer
+ * @parent_pin_priv: pin parent priv data pointer passed on pin registration
+ * @state: on success holds pin state on parent pin
+ * @extack: error reporting
+ *
+ * Dpll subsystem callback, get the state of a recovered clock pin.
+ *
+ * Context: Acquires pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - failure
+ */
+static int
+ice_dpll_rclk_state_on_pin_get(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_pin *parent_pin,
+ void *parent_pin_priv,
+ enum dpll_pin_state *state,
+ struct netlink_ext_ack *extack)
+{
+ struct ice_dpll_pin *p = pin_priv, *parent = parent_pin_priv;
+ struct ice_pf *pf = p->pf;
+ int ret = -EINVAL;
+ u32 hw_idx;
+
+ mutex_lock(&pf->dplls.lock);
+ hw_idx = parent->idx - pf->dplls.base_rclk_idx;
+ if (hw_idx >= pf->dplls.num_inputs)
+ goto unlock;
+
+ ret = ice_dpll_pin_state_update(pf, p, ICE_DPLL_PIN_TYPE_RCLK_INPUT,
+ extack);
+ if (ret)
+ goto unlock;
+
+ *state = p->state[hw_idx];
+ ret = 0;
+unlock:
+ mutex_unlock(&pf->dplls.lock);
+
+ return ret;
+}
+
+static const struct dpll_pin_ops ice_dpll_rclk_ops = {
+ .state_on_pin_set = ice_dpll_rclk_state_on_pin_set,
+ .state_on_pin_get = ice_dpll_rclk_state_on_pin_get,
+ .direction_get = ice_dpll_input_direction,
+};
+
+static const struct dpll_pin_ops ice_dpll_input_ops = {
+ .frequency_get = ice_dpll_input_frequency_get,
+ .frequency_set = ice_dpll_input_frequency_set,
+ .state_on_dpll_get = ice_dpll_input_state_get,
+ .state_on_dpll_set = ice_dpll_input_state_set,
+ .prio_get = ice_dpll_input_prio_get,
+ .prio_set = ice_dpll_input_prio_set,
+ .direction_get = ice_dpll_input_direction,
+ .phase_adjust_get = ice_dpll_pin_phase_adjust_get,
+ .phase_adjust_set = ice_dpll_input_phase_adjust_set,
+ .phase_offset_get = ice_dpll_phase_offset_get,
+};
+
+static const struct dpll_pin_ops ice_dpll_output_ops = {
+ .frequency_get = ice_dpll_output_frequency_get,
+ .frequency_set = ice_dpll_output_frequency_set,
+ .state_on_dpll_get = ice_dpll_output_state_get,
+ .state_on_dpll_set = ice_dpll_output_state_set,
+ .direction_get = ice_dpll_output_direction,
+ .phase_adjust_get = ice_dpll_pin_phase_adjust_get,
+ .phase_adjust_set = ice_dpll_output_phase_adjust_set,
+};
+
+static const struct dpll_device_ops ice_dpll_ops = {
+ .lock_status_get = ice_dpll_lock_status_get,
+ .mode_supported = ice_dpll_mode_supported,
+ .mode_get = ice_dpll_mode_get,
+};
+
+/**
+ * ice_generate_clock_id - generates unique clock_id for registering dpll.
+ * @pf: board private structure
+ *
+ * Generates unique (per board) clock_id for allocation and search of dpll
+ * devices in Linux dpll subsystem.
+ *
+ * Return: generated clock id for the board
+ */
+static u64 ice_generate_clock_id(struct ice_pf *pf)
+{
+ return pci_get_dsn(pf->pdev);
+}
+
+/**
+ * ice_dpll_notify_changes - notify dpll subsystem about changes
+ * @d: pointer to dpll
+ *
+ * Once a change is detected, an appropriate event is submitted to the dpll subsystem.
+ */
+static void ice_dpll_notify_changes(struct ice_dpll *d)
+{
+ bool pin_notified = false;
+
+ if (d->prev_dpll_state != d->dpll_state) {
+ d->prev_dpll_state = d->dpll_state;
+ dpll_device_change_ntf(d->dpll);
+ }
+ if (d->prev_input != d->active_input) {
+ if (d->prev_input)
+ dpll_pin_change_ntf(d->prev_input);
+ d->prev_input = d->active_input;
+ if (d->active_input) {
+ dpll_pin_change_ntf(d->active_input);
+ pin_notified = true;
+ }
+ }
+ if (d->prev_phase_offset != d->phase_offset) {
+ d->prev_phase_offset = d->phase_offset;
+ if (!pin_notified && d->active_input)
+ dpll_pin_change_ntf(d->active_input);
+ }
+}
+
+/**
+ * ice_dpll_update_state - update dpll state
+ * @pf: pf private structure
+ * @d: pointer to queried dpll device
+ * @init: if function called on initialization of ice dpll
+ *
+ * Poll current state of dpll from hw and update ice_dpll struct.
+ *
+ * Context: Called by kworker under pf->dplls.lock
+ * Return:
+ * * 0 - success
+ * * negative - AQ failure
+ */
+static int
+ice_dpll_update_state(struct ice_pf *pf, struct ice_dpll *d, bool init)
+{
+ struct ice_dpll_pin *p = NULL;
+ int ret;
+
+ ret = ice_get_cgu_state(&pf->hw, d->dpll_idx, d->prev_dpll_state,
+ &d->input_idx, &d->ref_state, &d->eec_mode,
+ &d->phase_offset, &d->dpll_state);
+
+ dev_dbg(ice_pf_to_dev(pf),
+ "update dpll=%d, prev_src_idx:%u, src_idx:%u, state:%d, prev:%d mode:%d\n",
+ d->dpll_idx, d->prev_input_idx, d->input_idx,
+ d->dpll_state, d->prev_dpll_state, d->mode);
+ if (ret) {
+ dev_err(ice_pf_to_dev(pf),
+ "update dpll=%d state failed, ret=%d %s\n",
+ d->dpll_idx, ret,
+ ice_aq_str(pf->hw.adminq.sq_last_status));
+ return ret;
+ }
+ if (init) {
+ if (d->dpll_state == DPLL_LOCK_STATUS_LOCKED ||
+ d->dpll_state == DPLL_LOCK_STATUS_LOCKED_HO_ACQ)
+ d->active_input = pf->dplls.inputs[d->input_idx].pin;
+ p = &pf->dplls.inputs[d->input_idx];
+ return ice_dpll_pin_state_update(pf, p,
+ ICE_DPLL_PIN_TYPE_INPUT, NULL);
+ }
+ if (d->dpll_state == DPLL_LOCK_STATUS_HOLDOVER ||
+ d->dpll_state == DPLL_LOCK_STATUS_UNLOCKED) {
+ d->active_input = NULL;
+ if (d->input_idx != ICE_DPLL_PIN_IDX_INVALID)
+ p = &pf->dplls.inputs[d->input_idx];
+ d->prev_input_idx = ICE_DPLL_PIN_IDX_INVALID;
+ d->input_idx = ICE_DPLL_PIN_IDX_INVALID;
+ if (!p)
+ return 0;
+ ret = ice_dpll_pin_state_update(pf, p,
+ ICE_DPLL_PIN_TYPE_INPUT, NULL);
+ } else if (d->input_idx != d->prev_input_idx) {
+ if (d->prev_input_idx != ICE_DPLL_PIN_IDX_INVALID) {
+ p = &pf->dplls.inputs[d->prev_input_idx];
+ ice_dpll_pin_state_update(pf, p,
+ ICE_DPLL_PIN_TYPE_INPUT,
+ NULL);
+ }
+ if (d->input_idx != ICE_DPLL_PIN_IDX_INVALID) {
+ p = &pf->dplls.inputs[d->input_idx];
+ d->active_input = p->pin;
+ ice_dpll_pin_state_update(pf, p,
+ ICE_DPLL_PIN_TYPE_INPUT,
+ NULL);
+ }
+ d->prev_input_idx = d->input_idx;
+ }
+
+ return ret;
+}
+
+/**
+ * ice_dpll_periodic_work - DPLLs periodic worker
+ * @work: pointer to kthread_work structure
+ *
+ * DPLLs periodic worker is responsible for polling state of dpll.
+ * Context: Holds pf->dplls.lock
+ */
+static void ice_dpll_periodic_work(struct kthread_work *work)
+{
+ struct ice_dplls *d = container_of(work, struct ice_dplls, work.work);
+ struct ice_pf *pf = container_of(d, struct ice_pf, dplls);
+ struct ice_dpll *de = &pf->dplls.eec;
+ struct ice_dpll *dp = &pf->dplls.pps;
+ int ret;
+
+ mutex_lock(&pf->dplls.lock);
+ ret = ice_dpll_update_state(pf, de, false);
+ if (!ret)
+ ret = ice_dpll_update_state(pf, dp, false);
+ if (ret) {
+ d->cgu_state_acq_err_num++;
+ /* stop rescheduling this worker */
+ if (d->cgu_state_acq_err_num >
+ ICE_CGU_STATE_ACQ_ERR_THRESHOLD) {
+ dev_err(ice_pf_to_dev(pf),
+ "EEC/PPS DPLLs periodic work disabled\n");
+ mutex_unlock(&pf->dplls.lock);
+ return;
+ }
+ }
+ mutex_unlock(&pf->dplls.lock);
+ ice_dpll_notify_changes(de);
+ ice_dpll_notify_changes(dp);
+
+ /* Run twice a second or reschedule if update failed */
+ kthread_queue_delayed_work(d->kworker, &d->work,
+ ret ? msecs_to_jiffies(10) :
+ msecs_to_jiffies(500));
+}
+
+/**
+ * ice_dpll_release_pins - release pins resources from dpll subsystem
+ * @pins: pointer to pins array
+ * @count: number of pins
+ *
+ * Release resources of given pins array in the dpll subsystem.
+ */
+static void ice_dpll_release_pins(struct ice_dpll_pin *pins, int count)
+{
+ int i;
+
+ for (i = 0; i < count; i++)
+ dpll_pin_put(pins[i].pin);
+}
+
+/**
+ * ice_dpll_get_pins - get pins from dpll subsystem
+ * @pf: board private structure
+ * @pins: pointer to pins array
+ * @start_idx: get starts from this pin idx value
+ * @count: number of pins
+ * @clock_id: clock_id of dpll device
+ *
+ * Get (allocate) pins in the dpll subsystem and store them in the pin field
+ * of the given pins array.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - allocation failure reason
+ */
+static int
+ice_dpll_get_pins(struct ice_pf *pf, struct ice_dpll_pin *pins,
+ int start_idx, int count, u64 clock_id)
+{
+ int i, ret;
+
+ for (i = 0; i < count; i++) {
+ pins[i].pin = dpll_pin_get(clock_id, i + start_idx, THIS_MODULE,
+ &pins[i].prop);
+ if (IS_ERR(pins[i].pin)) {
+ ret = PTR_ERR(pins[i].pin);
+ goto release_pins;
+ }
+ }
+
+ return 0;
+
+release_pins:
+ while (--i >= 0)
+ dpll_pin_put(pins[i].pin);
+ return ret;
+}
+
+/**
+ * ice_dpll_unregister_pins - unregister pins from a dpll
+ * @dpll: dpll device pointer
+ * @pins: pointer to pins array
+ * @ops: callback ops registered with the pins
+ * @count: number of pins
+ *
+ * Unregister pins of a given array from the given dpll device registered in
+ * the dpll subsystem.
+ */
+static void
+ice_dpll_unregister_pins(struct dpll_device *dpll, struct ice_dpll_pin *pins,
+ const struct dpll_pin_ops *ops, int count)
+{
+ int i;
+
+ for (i = 0; i < count; i++)
+ dpll_pin_unregister(dpll, pins[i].pin, ops, &pins[i]);
+}
+
+/**
+ * ice_dpll_register_pins - register pins with a dpll
+ * @dpll: dpll pointer to register pins with
+ * @pins: pointer to pins array
+ * @ops: callback ops registered with the pins
+ * @count: number of pins
+ *
+ * Register pins of a given array with the given dpll in the dpll subsystem.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - registration failure reason
+ */
+static int
+ice_dpll_register_pins(struct dpll_device *dpll, struct ice_dpll_pin *pins,
+ const struct dpll_pin_ops *ops, int count)
+{
+ int ret, i;
+
+ for (i = 0; i < count; i++) {
+ ret = dpll_pin_register(dpll, pins[i].pin, ops, &pins[i]);
+ if (ret)
+ goto unregister_pins;
+ }
+
+ return 0;
+
+unregister_pins:
+ while (--i >= 0)
+ dpll_pin_unregister(dpll, pins[i].pin, ops, &pins[i]);
+ return ret;
+}
+
+/**
+ * ice_dpll_deinit_direct_pins - deinitialize direct pins
+ * @cgu: if cgu is present and controlled by this NIC
+ * @pins: pointer to pins array
+ * @count: number of pins
+ * @ops: callback ops registered with the pins
+ * @first: dpll device pointer
+ * @second: dpll device pointer
+ *
+ * If cgu is owned, unregister pins from the given dplls.
+ * Release pin resources to the dpll subsystem.
+ */
+static void
+ice_dpll_deinit_direct_pins(bool cgu, struct ice_dpll_pin *pins, int count,
+ const struct dpll_pin_ops *ops,
+ struct dpll_device *first,
+ struct dpll_device *second)
+{
+ if (cgu) {
+ ice_dpll_unregister_pins(first, pins, ops, count);
+ ice_dpll_unregister_pins(second, pins, ops, count);
+ }
+ ice_dpll_release_pins(pins, count);
+}
+
+/**
+ * ice_dpll_init_direct_pins - initialize direct pins
+ * @pf: board private structure
+ * @cgu: if cgu is present and controlled by this NIC
+ * @pins: pointer to pins array
+ * @start_idx: on which index shall allocation start in dpll subsystem
+ * @count: number of pins
+ * @ops: callback ops registered with the pins
+ * @first: dpll device pointer
+ * @second: dpll device pointer
+ *
+ * Allocate directly connected pins of a given array in dpll subsystem.
+ * If cgu is owned, register allocated pins with the given dplls.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - registration failure reason
+ */
+static int
+ice_dpll_init_direct_pins(struct ice_pf *pf, bool cgu,
+ struct ice_dpll_pin *pins, int start_idx, int count,
+ const struct dpll_pin_ops *ops,
+ struct dpll_device *first, struct dpll_device *second)
+{
+ int ret;
+
+ ret = ice_dpll_get_pins(pf, pins, start_idx, count, pf->dplls.clock_id);
+ if (ret)
+ return ret;
+ if (cgu) {
+ ret = ice_dpll_register_pins(first, pins, ops, count);
+ if (ret)
+ goto release_pins;
+ ret = ice_dpll_register_pins(second, pins, ops, count);
+ if (ret)
+ goto unregister_first;
+ }
+
+ return 0;
+
+unregister_first:
+ ice_dpll_unregister_pins(first, pins, ops, count);
+release_pins:
+ ice_dpll_release_pins(pins, count);
+ return ret;
+}
+
+/**
+ * ice_dpll_deinit_rclk_pin - release rclk pin resources
+ * @pf: board private structure
+ *
+ * Deregister rclk pin from parent pins and release resources in dpll subsystem.
+ */
+static void ice_dpll_deinit_rclk_pin(struct ice_pf *pf)
+{
+ struct ice_dpll_pin *rclk = &pf->dplls.rclk;
+ struct ice_vsi *vsi = ice_get_main_vsi(pf);
+ struct dpll_pin *parent;
+ int i;
+
+ for (i = 0; i < rclk->num_parents; i++) {
+ parent = pf->dplls.inputs[rclk->parent_idx[i]].pin;
+ if (!parent)
+ continue;
+ dpll_pin_on_pin_unregister(parent, rclk->pin,
+ &ice_dpll_rclk_ops, rclk);
+ }
+ if (WARN_ON_ONCE(!vsi || !vsi->netdev))
+ return;
+ netdev_dpll_pin_clear(vsi->netdev);
+ dpll_pin_put(rclk->pin);
+}
+
+/**
+ * ice_dpll_init_rclk_pins - initialize recovered clock pin
+ * @pf: board private structure
+ * @pin: pin to register
+ * @start_idx: on which index shall allocation start in dpll subsystem
+ * @ops: callback ops registered with the pins
+ *
+ * Allocate a resource for the recovered clock pin in the dpll subsystem.
+ * Register the pin with the parent pins listed in its pin info. Register the
+ * pin with the pf's main vsi netdev.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - registration failure reason
+ */
+static int
+ice_dpll_init_rclk_pins(struct ice_pf *pf, struct ice_dpll_pin *pin,
+ int start_idx, const struct dpll_pin_ops *ops)
+{
+ struct ice_vsi *vsi = ice_get_main_vsi(pf);
+ struct dpll_pin *parent;
+ int ret, i;
+
+ ret = ice_dpll_get_pins(pf, pin, start_idx, ICE_DPLL_RCLK_NUM_PER_PF,
+ pf->dplls.clock_id);
+ if (ret)
+ return ret;
+ for (i = 0; i < pf->dplls.rclk.num_parents; i++) {
+ parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[i]].pin;
+ if (!parent) {
+ ret = -ENODEV;
+ goto unregister_pins;
+ }
+ ret = dpll_pin_on_pin_register(parent, pf->dplls.rclk.pin,
+ ops, &pf->dplls.rclk);
+ if (ret)
+ goto unregister_pins;
+ }
+ if (WARN_ON((!vsi || !vsi->netdev)))
+ return -EINVAL;
+ netdev_dpll_pin_set(vsi->netdev, pf->dplls.rclk.pin);
+
+ return 0;
+
+unregister_pins:
+ while (i) {
+ parent = pf->dplls.inputs[pf->dplls.rclk.parent_idx[--i]].pin;
+ dpll_pin_on_pin_unregister(parent, pf->dplls.rclk.pin,
+ &ice_dpll_rclk_ops, &pf->dplls.rclk);
+ }
+ ice_dpll_release_pins(pin, ICE_DPLL_RCLK_NUM_PER_PF);
+ return ret;
+}
+
+/**
+ * ice_dpll_deinit_pins - deinitialize direct pins
+ * @pf: board private structure
+ * @cgu: if cgu is controlled by this pf
+ *
+ * If cgu is owned, unregister directly connected pins from the dplls.
+ * Release resources of directly connected pins from the dpll subsystem.
+ */
+static void ice_dpll_deinit_pins(struct ice_pf *pf, bool cgu)
+{
+ struct ice_dpll_pin *outputs = pf->dplls.outputs;
+ struct ice_dpll_pin *inputs = pf->dplls.inputs;
+ int num_outputs = pf->dplls.num_outputs;
+ int num_inputs = pf->dplls.num_inputs;
+ struct ice_dplls *d = &pf->dplls;
+ struct ice_dpll *de = &d->eec;
+ struct ice_dpll *dp = &d->pps;
+
+ ice_dpll_deinit_rclk_pin(pf);
+ if (cgu) {
+ ice_dpll_unregister_pins(dp->dpll, inputs, &ice_dpll_input_ops,
+ num_inputs);
+ ice_dpll_unregister_pins(de->dpll, inputs, &ice_dpll_input_ops,
+ num_inputs);
+ }
+ ice_dpll_release_pins(inputs, num_inputs);
+ if (cgu) {
+ ice_dpll_unregister_pins(dp->dpll, outputs,
+ &ice_dpll_output_ops, num_outputs);
+ ice_dpll_unregister_pins(de->dpll, outputs,
+ &ice_dpll_output_ops, num_outputs);
+ ice_dpll_release_pins(outputs, num_outputs);
+ }
+}
+
+/**
+ * ice_dpll_init_pins - init pins and register pins with a dplls
+ * @pf: board private structure
+ * @cgu: if cgu is present and controlled by this NIC
+ *
+ * Initialize directly connected pf's pins within pf's dplls in a Linux dpll
+ * subsystem.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - initialization failure reason
+ */
+static int ice_dpll_init_pins(struct ice_pf *pf, bool cgu)
+{
+ u32 rclk_idx;
+ int ret;
+
+ ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.inputs, 0,
+ pf->dplls.num_inputs,
+ &ice_dpll_input_ops,
+ pf->dplls.eec.dpll, pf->dplls.pps.dpll);
+ if (ret)
+ return ret;
+ if (cgu) {
+ ret = ice_dpll_init_direct_pins(pf, cgu, pf->dplls.outputs,
+ pf->dplls.num_inputs,
+ pf->dplls.num_outputs,
+ &ice_dpll_output_ops,
+ pf->dplls.eec.dpll,
+ pf->dplls.pps.dpll);
+ if (ret)
+ goto deinit_inputs;
+ }
+ rclk_idx = pf->dplls.num_inputs + pf->dplls.num_outputs + pf->hw.pf_id;
+ ret = ice_dpll_init_rclk_pins(pf, &pf->dplls.rclk, rclk_idx,
+ &ice_dpll_rclk_ops);
+ if (ret)
+ goto deinit_outputs;
+
+ return 0;
+deinit_outputs:
+ ice_dpll_deinit_direct_pins(cgu, pf->dplls.outputs,
+ pf->dplls.num_outputs,
+ &ice_dpll_output_ops, pf->dplls.pps.dpll,
+ pf->dplls.eec.dpll);
+deinit_inputs:
+ ice_dpll_deinit_direct_pins(cgu, pf->dplls.inputs, pf->dplls.num_inputs,
+ &ice_dpll_input_ops, pf->dplls.pps.dpll,
+ pf->dplls.eec.dpll);
+ return ret;
+}
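/*
 * Illustrative sketch only - not part of the patch above. It works through
 * the pin index namespace used by ice_dpll_init_pins(): inputs occupy
 * [0, num_inputs), outputs follow, and each PF's recovered-clock pin gets a
 * unique index after both. The counts and pf_id below are hypothetical;
 * real values come from the CGU abilities query.
 */
#include <stdio.h>

int main(void)
{
	unsigned int num_inputs = 4, num_outputs = 8;	/* hypothetical */
	unsigned int pf_id = 1;				/* hypothetical */
	unsigned int rclk_idx = num_inputs + num_outputs + pf_id;

	/* prints 13: start_idx passed to ice_dpll_init_rclk_pins() for PF1 */
	printf("rclk pin index: %u\n", rclk_idx);
	return 0;
}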
+
+/**
+ * ice_dpll_deinit_dpll - deinitialize dpll device
+ * @pf: board private structure
+ * @d: pointer to ice_dpll
+ * @cgu: if cgu is present and controlled by this NIC
+ *
+ * If cgu is owned, unregister the dpll from the dpll subsystem.
+ * Release resources of dpll device from dpll subsystem.
+ */
+static void
+ice_dpll_deinit_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu)
+{
+ if (cgu)
+ dpll_device_unregister(d->dpll, &ice_dpll_ops, d);
+ dpll_device_put(d->dpll);
+}
+
+/**
+ * ice_dpll_init_dpll - initialize dpll device in dpll subsystem
+ * @pf: board private structure
+ * @d: dpll to be initialized
+ * @cgu: if cgu is present and controlled by this NIC
+ * @type: type of dpll being initialized
+ *
+ * Allocate a dpll instance for this board in the dpll subsystem. If cgu is
+ * controlled by this NIC, register the dpll with the callback ops.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - initialization failure reason
+ */
+static int
+ice_dpll_init_dpll(struct ice_pf *pf, struct ice_dpll *d, bool cgu,
+ enum dpll_type type)
+{
+ u64 clock_id = pf->dplls.clock_id;
+ int ret;
+
+ d->dpll = dpll_device_get(clock_id, d->dpll_idx, THIS_MODULE);
+ if (IS_ERR(d->dpll)) {
+ ret = PTR_ERR(d->dpll);
+ dev_err(ice_pf_to_dev(pf),
+ "dpll_device_get failed (%p) err=%d\n", d, ret);
+ return ret;
+ }
+ d->pf = pf;
+ if (cgu) {
+ ret = dpll_device_register(d->dpll, type, &ice_dpll_ops, d);
+ if (ret) {
+ dpll_device_put(d->dpll);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * ice_dpll_deinit_worker - deinitialize dpll kworker
+ * @pf: board private structure
+ *
+ * Stop dpll's kworker and release its resources.
+ */
+static void ice_dpll_deinit_worker(struct ice_pf *pf)
+{
+ struct ice_dplls *d = &pf->dplls;
+
+ kthread_cancel_delayed_work_sync(&d->work);
+ kthread_destroy_worker(d->kworker);
+}
+
+/**
+ * ice_dpll_init_worker - Initialize DPLLs periodic worker
+ * @pf: board private structure
+ *
+ * Create and start DPLLs periodic worker.
+ *
+ * Context: Shall be called after pf->dplls.lock is initialized.
+ * Return:
+ * * 0 - success
+ * * negative - create worker failure
+ */
+static int ice_dpll_init_worker(struct ice_pf *pf)
+{
+ struct ice_dplls *d = &pf->dplls;
+ struct kthread_worker *kworker;
+
+ ice_dpll_update_state(pf, &d->eec, true);
+ ice_dpll_update_state(pf, &d->pps, true);
+ kthread_init_delayed_work(&d->work, ice_dpll_periodic_work);
+ kworker = kthread_create_worker(0, "ice-dplls-%s",
+ dev_name(ice_pf_to_dev(pf)));
+ if (IS_ERR(kworker))
+ return PTR_ERR(kworker);
+ d->kworker = kworker;
+ d->cgu_state_acq_err_num = 0;
+ kthread_queue_delayed_work(d->kworker, &d->work, 0);
+
+ return 0;
+}
+
+/**
+ * ice_dpll_init_info_direct_pins - initializes direct pins info
+ * @pf: board private structure
+ * @pin_type: type of pins being initialized
+ *
+ * Init information for directly connected pins, cache them in pf's pins
+ * structures.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - init failure reason
+ */
+static int
+ice_dpll_init_info_direct_pins(struct ice_pf *pf,
+ enum ice_dpll_pin_type pin_type)
+{
+ struct ice_dpll *de = &pf->dplls.eec, *dp = &pf->dplls.pps;
+ int num_pins, i, ret = -EINVAL;
+ struct ice_hw *hw = &pf->hw;
+ struct ice_dpll_pin *pins;
+ u8 freq_supp_num;
+ bool input;
+
+ switch (pin_type) {
+ case ICE_DPLL_PIN_TYPE_INPUT:
+ pins = pf->dplls.inputs;
+ num_pins = pf->dplls.num_inputs;
+ input = true;
+ break;
+ case ICE_DPLL_PIN_TYPE_OUTPUT:
+ pins = pf->dplls.outputs;
+ num_pins = pf->dplls.num_outputs;
+ input = false;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ for (i = 0; i < num_pins; i++) {
+ pins[i].idx = i;
+ pins[i].prop.board_label = ice_cgu_get_pin_name(hw, i, input);
+ pins[i].prop.type = ice_cgu_get_pin_type(hw, i, input);
+ if (input) {
+ ret = ice_aq_get_cgu_ref_prio(hw, de->dpll_idx, i,
+ &de->input_prio[i]);
+ if (ret)
+ return ret;
+ ret = ice_aq_get_cgu_ref_prio(hw, dp->dpll_idx, i,
+ &dp->input_prio[i]);
+ if (ret)
+ return ret;
+ pins[i].prop.capabilities |=
+ DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE;
+ pins[i].prop.phase_range.min =
+ pf->dplls.input_phase_adj_max;
+ pins[i].prop.phase_range.max =
+ -pf->dplls.input_phase_adj_max;
+ } else {
+ pins[i].prop.phase_range.min =
+ pf->dplls.output_phase_adj_max;
+ pins[i].prop.phase_range.max =
+ -pf->dplls.output_phase_adj_max;
+ }
+ pins[i].prop.capabilities |=
+ DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE;
+ ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL);
+ if (ret)
+ return ret;
+ pins[i].prop.freq_supported =
+ ice_cgu_get_pin_freq_supp(hw, i, input, &freq_supp_num);
+ pins[i].prop.freq_supported_num = freq_supp_num;
+ pins[i].pf = pf;
+ }
+
+ return ret;
+}
+
+/**
+ * ice_dpll_init_info_rclk_pin - initializes rclk pin information
+ * @pf: board private structure
+ *
+ * Init information for the rclk pin, cache it in pf->dplls.rclk.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - init failure reason
+ */
+static int ice_dpll_init_info_rclk_pin(struct ice_pf *pf)
+{
+ struct ice_dpll_pin *pin = &pf->dplls.rclk;
+
+ pin->prop.type = DPLL_PIN_TYPE_SYNCE_ETH_PORT;
+ pin->prop.capabilities |= DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE;
+ pin->pf = pf;
+
+ return ice_dpll_pin_state_update(pf, pin,
+ ICE_DPLL_PIN_TYPE_RCLK_INPUT, NULL);
+}
+
+/**
+ * ice_dpll_init_pins_info - init pins info wrapper
+ * @pf: board private structure
+ * @pin_type: type of pins being initialized
+ *
+ * Wraps functions for pin initialization.
+ *
+ * Return:
+ * * 0 - success
+ * * negative - init failure reason
+ */
+static int
+ice_dpll_init_pins_info(struct ice_pf *pf, enum ice_dpll_pin_type pin_type)
+{
+ switch (pin_type) {
+ case ICE_DPLL_PIN_TYPE_INPUT:
+ case ICE_DPLL_PIN_TYPE_OUTPUT:
+ return ice_dpll_init_info_direct_pins(pf, pin_type);
+ case ICE_DPLL_PIN_TYPE_RCLK_INPUT:
+ return ice_dpll_init_info_rclk_pin(pf);
+ default:
+ return -EINVAL;
+ }
+}
+
+/**
+ * ice_dpll_deinit_info - release memory allocated for pins info
+ * @pf: board private structure
+ *
+ * Release memory allocated for pins by ice_dpll_init_info function.
+ */
+static void ice_dpll_deinit_info(struct ice_pf *pf)
+{
+ kfree(pf->dplls.inputs);
+ kfree(pf->dplls.outputs);
+ kfree(pf->dplls.eec.input_prio);
+ kfree(pf->dplls.pps.input_prio);
+}
+
+/**
+ * ice_dpll_init_info - prepare pf's dpll information structure
+ * @pf: board private structure
+ * @cgu: if cgu is present and controlled by this NIC
+ *
+ * Acquire (from HW) and set basic dpll information (on pf->dplls struct).
+ *
+ * Return:
+ * * 0 - success
+ * * negative - init failure reason
+ */
+static int ice_dpll_init_info(struct ice_pf *pf, bool cgu)
+{
+ struct ice_aqc_get_cgu_abilities abilities;
+ struct ice_dpll *de = &pf->dplls.eec;
+ struct ice_dpll *dp = &pf->dplls.pps;
+ struct ice_dplls *d = &pf->dplls;
+ struct ice_hw *hw = &pf->hw;
+ int ret, alloc_size, i;
+
+ d->clock_id = ice_generate_clock_id(pf);
+ ret = ice_aq_get_cgu_abilities(hw, &abilities);
+ if (ret) {
+ dev_err(ice_pf_to_dev(pf),
+ "err:%d %s failed to read cgu abilities\n",
+ ret, ice_aq_str(hw->adminq.sq_last_status));
+ return ret;
+ }
+
+ de->dpll_idx = abilities.eec_dpll_idx;
+ dp->dpll_idx = abilities.pps_dpll_idx;
+ d->num_inputs = abilities.num_inputs;
+ d->num_outputs = abilities.num_outputs;
+ d->input_phase_adj_max = le32_to_cpu(abilities.max_in_phase_adj);
+ d->output_phase_adj_max = le32_to_cpu(abilities.max_out_phase_adj);
+
+ alloc_size = sizeof(*d->inputs) * d->num_inputs;
+ d->inputs = kzalloc(alloc_size, GFP_KERNEL);
+ if (!d->inputs)
+ return -ENOMEM;
+
+ alloc_size = sizeof(*de->input_prio) * d->num_inputs;
+ de->input_prio = kzalloc(alloc_size, GFP_KERNEL);
+ if (!de->input_prio)
+ return -ENOMEM;
+
+ dp->input_prio = kzalloc(alloc_size, GFP_KERNEL);
+ if (!dp->input_prio)
+ return -ENOMEM;
+
+ ret = ice_dpll_init_pins_info(pf, ICE_DPLL_PIN_TYPE_INPUT);
+ if (ret)
+ goto deinit_info;
+
+ if (cgu) {
+ alloc_size = sizeof(*d->outputs) * d->num_outputs;
+ d->outputs = kzalloc(alloc_size, GFP_KERNEL);
+ if (!d->outputs) {
+ ret = -ENOMEM;
+ goto deinit_info;
+ }
+
+ ret = ice_dpll_init_pins_info(pf, ICE_DPLL_PIN_TYPE_OUTPUT);
+ if (ret)
+ goto deinit_info;
+ }
+
+ ret = ice_get_cgu_rclk_pin_info(&pf->hw, &d->base_rclk_idx,
+ &pf->dplls.rclk.num_parents);
+ if (ret)
+ return ret;
+ for (i = 0; i < pf->dplls.rclk.num_parents; i++)
+ pf->dplls.rclk.parent_idx[i] = d->base_rclk_idx + i;
+ ret = ice_dpll_init_pins_info(pf, ICE_DPLL_PIN_TYPE_RCLK_INPUT);
+ if (ret)
+ return ret;
+ de->mode = DPLL_MODE_AUTOMATIC;
+ dp->mode = DPLL_MODE_AUTOMATIC;
+
+ dev_dbg(ice_pf_to_dev(pf),
+ "%s - success, inputs:%u, outputs:%u rclk-parents:%u\n",
+ __func__, d->num_inputs, d->num_outputs, d->rclk.num_parents);
+
+ return 0;
+
+deinit_info:
+ dev_err(ice_pf_to_dev(pf),
+ "%s - fail: d->inputs:%p, de->input_prio:%p, dp->input_prio:%p, d->outputs:%p\n",
+ __func__, d->inputs, de->input_prio,
+ dp->input_prio, d->outputs);
+ ice_dpll_deinit_info(pf);
+ return ret;
+}
+
+/**
+ * ice_dpll_deinit - Disable the driver/HW support for the dpll subsystem and
+ * the dpll device.
+ * @pf: board private structure
+ *
+ * Handles the cleanup work required after dpll initialization: unregistering
+ * the dplls and pins, and freeing all of the resources that were used for
+ * handling them.
+ *
+ * Context: Destroys pf->dplls.lock mutex. Call only if ICE_FLAG_DPLL was set.
+ */
+void ice_dpll_deinit(struct ice_pf *pf)
+{
+ bool cgu = ice_is_feature_supported(pf, ICE_F_CGU);
+
+ clear_bit(ICE_FLAG_DPLL, pf->flags);
+ if (cgu)
+ ice_dpll_deinit_worker(pf);
+
+ ice_dpll_deinit_pins(pf, cgu);
+ ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu);
+ ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu);
+ ice_dpll_deinit_info(pf);
+ mutex_destroy(&pf->dplls.lock);
+}
+
+/**
+ * ice_dpll_init - initialize support for dpll subsystem
+ * @pf: board private structure
+ *
+ * Set up the device dplls, register them and their connected pins within the
+ * Linux dpll subsystem. Allow userspace to obtain the state of the DPLLs and
+ * to request DPLL configuration changes.
+ *
+ * Context: Initializes pf->dplls.lock mutex.
+ */
+void ice_dpll_init(struct ice_pf *pf)
+{
+ bool cgu = ice_is_feature_supported(pf, ICE_F_CGU);
+ struct ice_dplls *d = &pf->dplls;
+ int err = 0;
+
+ err = ice_dpll_init_info(pf, cgu);
+ if (err)
+ goto err_exit;
+ err = ice_dpll_init_dpll(pf, &pf->dplls.eec, cgu, DPLL_TYPE_EEC);
+ if (err)
+ goto deinit_info;
+ err = ice_dpll_init_dpll(pf, &pf->dplls.pps, cgu, DPLL_TYPE_PPS);
+ if (err)
+ goto deinit_eec;
+ err = ice_dpll_init_pins(pf, cgu);
+ if (err)
+ goto deinit_pps;
+ mutex_init(&d->lock);
+ if (cgu) {
+ err = ice_dpll_init_worker(pf);
+ if (err)
+ goto deinit_pins;
+ }
+ set_bit(ICE_FLAG_DPLL, pf->flags);
+
+ return;
+
+deinit_pins:
+ ice_dpll_deinit_pins(pf, cgu);
+deinit_pps:
+ ice_dpll_deinit_dpll(pf, &pf->dplls.pps, cgu);
+deinit_eec:
+ ice_dpll_deinit_dpll(pf, &pf->dplls.eec, cgu);
+deinit_info:
+ ice_dpll_deinit_info(pf);
+err_exit:
+ mutex_destroy(&d->lock);
+ dev_warn(ice_pf_to_dev(pf), "DPLLs init failure err:%d\n", err);
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.h b/drivers/net/ethernet/intel/ice/ice_dpll.h
new file mode 100644
index 000000000000..bb32b6d88373
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_dpll.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2022, Intel Corporation. */
+
+#ifndef _ICE_DPLL_H_
+#define _ICE_DPLL_H_
+
+#include "ice.h"
+
+#define ICE_DPLL_PRIO_MAX 0xF
+#define ICE_DPLL_RCLK_NUM_MAX 4
+
+/** ice_dpll_pin - store info about pins
+ * @pin: dpll pin structure
+ * @pf: pointer to pf, which has registered the dpll_pin
+ * @idx: ice pin private idx
+ * @num_parents: holds number of parent pins
+ * @parent_idx: holds indexes of parent pins
+ * @flags: pin flags returned from HW
+ * @state: state of a pin
+ * @prop: pin properties
+ * @freq: current frequency of a pin
+ * @phase_adjust: current phase adjust value
+ */
+struct ice_dpll_pin {
+ struct dpll_pin *pin;
+ struct ice_pf *pf;
+ u8 idx;
+ u8 num_parents;
+ u8 parent_idx[ICE_DPLL_RCLK_NUM_MAX];
+ u8 flags[ICE_DPLL_RCLK_NUM_MAX];
+ u8 state[ICE_DPLL_RCLK_NUM_MAX];
+ struct dpll_pin_properties prop;
+ u32 freq;
+ s32 phase_adjust;
+};
+
+/** ice_dpll - store info required for DPLL control
+ * @dpll: pointer to dpll dev
+ * @pf: pointer to pf, which has registered the dpll_device
+ * @dpll_idx: index of dpll on the NIC
+ * @input_idx: currently selected input index
+ * @prev_input_idx: previously selected input index
+ * @ref_state: state of dpll reference signals
+ * @eec_mode: eec_mode dpll is configured for
+ * @phase_offset: phase offset of active pin vs dpll signal
+ * @prev_phase_offset: previous phase offset of active pin vs dpll signal
+ * @input_prio: priorities of each input
+ * @dpll_state: current dpll sync state
+ * @prev_dpll_state: last dpll sync state
+ * @active_input: pointer to active input pin
+ * @prev_input: pointer to previous active input pin
+ */
+struct ice_dpll {
+ struct dpll_device *dpll;
+ struct ice_pf *pf;
+ u8 dpll_idx;
+ u8 input_idx;
+ u8 prev_input_idx;
+ u8 ref_state;
+ u8 eec_mode;
+ s64 phase_offset;
+ s64 prev_phase_offset;
+ u8 *input_prio;
+ enum dpll_lock_status dpll_state;
+ enum dpll_lock_status prev_dpll_state;
+ enum dpll_mode mode;
+ struct dpll_pin *active_input;
+ struct dpll_pin *prev_input;
+};
+
+/** ice_dplls - store info required for CGU (clock generation unit)
+ * @kworker: periodic worker
+ * @work: periodic work
+ * @lock: locks access to configuration of a dpll
+ * @eec: pointer to EEC dpll dev
+ * @pps: pointer to PPS dpll dev
+ * @inputs: input pins pointer
+ * @outputs: output pins pointer
+ * @rclk: recovered pins pointer
+ * @num_inputs: number of input pins available on dpll
+ * @num_outputs: number of output pins available on dpll
+ * @cgu_state_acq_err_num: number of errors returned during periodic work
+ * @base_rclk_idx: idx of first pin used for clock recovery pins
+ * @clock_id: clock_id of dplls
+ * @input_phase_adj_max: max phase adjust value for input pins
+ * @output_phase_adj_max: max phase adjust value for output pins
+ */
+struct ice_dplls {
+ struct kthread_worker *kworker;
+ struct kthread_delayed_work work;
+ struct mutex lock;
+ struct ice_dpll eec;
+ struct ice_dpll pps;
+ struct ice_dpll_pin *inputs;
+ struct ice_dpll_pin *outputs;
+ struct ice_dpll_pin rclk;
+ u8 num_inputs;
+ u8 num_outputs;
+ int cgu_state_acq_err_num;
+ u8 base_rclk_idx;
+ u64 clock_id;
+ s32 input_phase_adj_max;
+ s32 output_phase_adj_max;
+};
+
+#if IS_ENABLED(CONFIG_PTP_1588_CLOCK)
+void ice_dpll_init(struct ice_pf *pf);
+void ice_dpll_deinit(struct ice_pf *pf);
+#else
+static inline void ice_dpll_init(struct ice_pf *pf) { }
+static inline void ice_dpll_deinit(struct ice_pf *pf) { }
+#endif
+
+#endif
diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
index 67bfd1f61cdd..6ae0269bdf73 100644
--- a/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
+++ b/drivers/net/ethernet/intel/ice/ice_eswitch_br.c
@@ -73,7 +73,7 @@ ice_eswitch_br_ingress_rule_setup(struct ice_adv_rule_info *rule_info,
rule_info->sw_act.vsi_handle = vf_vsi_idx;
rule_info->sw_act.flag |= ICE_FLTR_RX;
rule_info->sw_act.src = pf_id;
- rule_info->priority = 5;
+ rule_info->priority = 2;
}
static void
@@ -84,7 +84,7 @@ ice_eswitch_br_egress_rule_setup(struct ice_adv_rule_info *rule_info,
rule_info->sw_act.flag |= ICE_FLTR_TX;
rule_info->flags_info.act = ICE_SINGLE_ACT_LAN_ENABLE;
rule_info->flags_info.act_valid = true;
- rule_info->priority = 5;
+ rule_info->priority = 2;
}
static int
@@ -207,7 +207,7 @@ ice_eswitch_br_guard_rule_create(struct ice_hw *hw, u16 vsi_idx,
rule_info.allow_pass_l2 = true;
rule_info.sw_act.vsi_handle = vsi_idx;
rule_info.sw_act.fltr_act = ICE_NOP;
- rule_info.priority = 5;
+ rule_info.priority = 2;
err = ice_add_adv_rule(hw, list, lkups_cnt, &rule_info, rule);
if (err)
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index ad4d4702129f..a34083567e6f 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -345,6 +345,88 @@ static const struct ice_priv_flag ice_gstrings_priv_flags[] = {
#define ICE_PRIV_FLAG_ARRAY_SIZE ARRAY_SIZE(ice_gstrings_priv_flags)
+static const u32 ice_adv_lnk_speed_100[] __initconst = {
+ ETHTOOL_LINK_MODE_100baseT_Full_BIT,
+};
+
+static const u32 ice_adv_lnk_speed_1000[] __initconst = {
+ ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
+ ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
+ ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
+};
+
+static const u32 ice_adv_lnk_speed_2500[] __initconst = {
+ ETHTOOL_LINK_MODE_2500baseT_Full_BIT,
+ ETHTOOL_LINK_MODE_2500baseX_Full_BIT,
+};
+
+static const u32 ice_adv_lnk_speed_5000[] __initconst = {
+ ETHTOOL_LINK_MODE_5000baseT_Full_BIT,
+};
+
+static const u32 ice_adv_lnk_speed_10000[] __initconst = {
+ ETHTOOL_LINK_MODE_10000baseT_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseSR_Full_BIT,
+ ETHTOOL_LINK_MODE_10000baseLR_Full_BIT,
+};
+
+static const u32 ice_adv_lnk_speed_25000[] __initconst = {
+ ETHTOOL_LINK_MODE_25000baseCR_Full_BIT,
+ ETHTOOL_LINK_MODE_25000baseSR_Full_BIT,
+ ETHTOOL_LINK_MODE_25000baseKR_Full_BIT,
+};
+
+static const u32 ice_adv_lnk_speed_40000[] __initconst = {
+ ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT,
+ ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT,
+};
+
+static const u32 ice_adv_lnk_speed_50000[] __initconst = {
+ ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT,
+ ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT,
+};
+
+static const u32 ice_adv_lnk_speed_100000[] __initconst = {
+ ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseCR2_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseSR2_Full_BIT,
+ ETHTOOL_LINK_MODE_100000baseKR2_Full_BIT,
+};
+
+static const u32 ice_adv_lnk_speed_200000[] __initconst = {
+ ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT,
+ ETHTOOL_LINK_MODE_200000baseCR4_Full_BIT,
+};
+
+static struct ethtool_forced_speed_map ice_adv_lnk_speed_maps[] __ro_after_init = {
+ ETHTOOL_FORCED_SPEED_MAP(ice_adv_lnk_speed, 100),
+ ETHTOOL_FORCED_SPEED_MAP(ice_adv_lnk_speed, 1000),
+ ETHTOOL_FORCED_SPEED_MAP(ice_adv_lnk_speed, 2500),
+ ETHTOOL_FORCED_SPEED_MAP(ice_adv_lnk_speed, 5000),
+ ETHTOOL_FORCED_SPEED_MAP(ice_adv_lnk_speed, 10000),
+ ETHTOOL_FORCED_SPEED_MAP(ice_adv_lnk_speed, 25000),
+ ETHTOOL_FORCED_SPEED_MAP(ice_adv_lnk_speed, 40000),
+ ETHTOOL_FORCED_SPEED_MAP(ice_adv_lnk_speed, 50000),
+ ETHTOOL_FORCED_SPEED_MAP(ice_adv_lnk_speed, 100000),
+ ETHTOOL_FORCED_SPEED_MAP(ice_adv_lnk_speed, 200000),
+};
+
+void __init ice_adv_lnk_speed_maps_init(void)
+{
+ ethtool_forced_speed_maps_init(ice_adv_lnk_speed_maps,
+ ARRAY_SIZE(ice_adv_lnk_speed_maps));
+}
+
static void
__ice_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *drvinfo,
struct ice_vsi *vsi)
@@ -1060,7 +1142,7 @@ __ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data,
switch (stringset) {
case ETH_SS_STATS:
for (i = 0; i < ICE_VSI_STATS_LEN; i++)
- ethtool_sprintf(&p,
+ ethtool_sprintf(&p, "%s",
ice_gstrings_vsi_stats[i].stat_string);
if (ice_is_port_repr_netdev(netdev))
@@ -1080,7 +1162,7 @@ __ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data,
return;
for (i = 0; i < ICE_PF_STATS_LEN; i++)
- ethtool_sprintf(&p,
+ ethtool_sprintf(&p, "%s",
ice_gstrings_pf_stats[i].stat_string);
for (i = 0; i < ICE_MAX_USER_PRIORITY; i++) {
@@ -1097,7 +1179,8 @@ __ice_get_strings(struct net_device *netdev, u32 stringset, u8 *data,
break;
case ETH_SS_PRIV_FLAGS:
for (i = 0; i < ICE_PRIV_FLAG_ARRAY_SIZE; i++)
- ethtool_sprintf(&p, ice_gstrings_priv_flags[i].name);
+ ethtool_sprintf(&p, "%s",
+ ice_gstrings_priv_flags[i].name);
break;
default:
break;
@@ -1638,6 +1721,15 @@ ice_get_ethtool_stats(struct net_device *netdev,
ICE_PHY_TYPE_HIGH_100G_AUI2_AOC_ACC | \
ICE_PHY_TYPE_HIGH_100G_AUI2)
+#define ICE_PHY_TYPE_HIGH_MASK_200G (ICE_PHY_TYPE_HIGH_200G_CR4_PAM4 | \
+ ICE_PHY_TYPE_HIGH_200G_SR4 | \
+ ICE_PHY_TYPE_HIGH_200G_FR4 | \
+ ICE_PHY_TYPE_HIGH_200G_LR4 | \
+ ICE_PHY_TYPE_HIGH_200G_DR4 | \
+ ICE_PHY_TYPE_HIGH_200G_KR4_PAM4 | \
+ ICE_PHY_TYPE_HIGH_200G_AUI4_AOC_ACC | \
+ ICE_PHY_TYPE_HIGH_200G_AUI4)
+
/**
* ice_mask_min_supported_speeds
* @hw: pointer to the HW structure
@@ -1652,8 +1744,9 @@ ice_mask_min_supported_speeds(struct ice_hw *hw,
u64 phy_types_high, u64 *phy_types_low)
{
/* if QSFP connection with 100G speed, minimum supported speed is 25G */
- if (*phy_types_low & ICE_PHY_TYPE_LOW_MASK_100G ||
- phy_types_high & ICE_PHY_TYPE_HIGH_MASK_100G)
+ if ((*phy_types_low & ICE_PHY_TYPE_LOW_MASK_100G) ||
+ (phy_types_high & ICE_PHY_TYPE_HIGH_MASK_100G) ||
+ (phy_types_high & ICE_PHY_TYPE_HIGH_MASK_200G))
*phy_types_low &= ~ICE_PHY_TYPE_LOW_MASK_MIN_25G;
else if (!ice_is_100m_speed_supported(hw))
*phy_types_low &= ~ICE_PHY_TYPE_LOW_MASK_MIN_1G;
@@ -1796,6 +1889,9 @@ ice_get_settings_link_up(struct ethtool_link_ksettings *ks,
ice_phy_type_to_ethtool(netdev, ks);
switch (link_info->link_speed) {
+ case ICE_AQ_LINK_SPEED_200GB:
+ ks->base.speed = SPEED_200000;
+ break;
case ICE_AQ_LINK_SPEED_100GB:
ks->base.speed = SPEED_100000;
break;
@@ -2008,79 +2104,69 @@ done:
}
/**
+ * ice_speed_to_aq_link - Get AQ link speed by Ethtool forced speed
+ * @speed: ethtool forced speed
+ */
+static u16 ice_speed_to_aq_link(int speed)
+{
+ int aq_speed;
+
+ switch (speed) {
+ case SPEED_10:
+ aq_speed = ICE_AQ_LINK_SPEED_10MB;
+ break;
+ case SPEED_100:
+ aq_speed = ICE_AQ_LINK_SPEED_100MB;
+ break;
+ case SPEED_1000:
+ aq_speed = ICE_AQ_LINK_SPEED_1000MB;
+ break;
+ case SPEED_2500:
+ aq_speed = ICE_AQ_LINK_SPEED_2500MB;
+ break;
+ case SPEED_5000:
+ aq_speed = ICE_AQ_LINK_SPEED_5GB;
+ break;
+ case SPEED_10000:
+ aq_speed = ICE_AQ_LINK_SPEED_10GB;
+ break;
+ case SPEED_20000:
+ aq_speed = ICE_AQ_LINK_SPEED_20GB;
+ break;
+ case SPEED_25000:
+ aq_speed = ICE_AQ_LINK_SPEED_25GB;
+ break;
+ case SPEED_40000:
+ aq_speed = ICE_AQ_LINK_SPEED_40GB;
+ break;
+ case SPEED_50000:
+ aq_speed = ICE_AQ_LINK_SPEED_50GB;
+ break;
+ case SPEED_100000:
+ aq_speed = ICE_AQ_LINK_SPEED_100GB;
+ break;
+ default:
+ aq_speed = ICE_AQ_LINK_SPEED_UNKNOWN;
+ break;
+ }
+ return aq_speed;
+}
+
+/**
* ice_ksettings_find_adv_link_speed - Find advertising link speed
* @ks: ethtool ksettings
*/
static u16
ice_ksettings_find_adv_link_speed(const struct ethtool_link_ksettings *ks)
{
+ const struct ethtool_forced_speed_map *map;
u16 adv_link_speed = 0;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
- 100baseT_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_100MB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
- 1000baseX_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 1000baseT_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 1000baseKX_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_1000MB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
- 2500baseT_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 2500baseX_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_2500MB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
- 5000baseT_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_5GB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
- 10000baseT_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 10000baseKR_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 10000baseSR_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 10000baseLR_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_10GB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
- 25000baseCR_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 25000baseSR_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 25000baseKR_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_25GB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
- 40000baseCR4_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 40000baseSR4_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 40000baseLR4_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 40000baseKR4_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_40GB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
- 50000baseCR2_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 50000baseKR2_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 50000baseSR2_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_50GB;
- if (ethtool_link_ksettings_test_link_mode(ks, advertising,
- 100000baseCR4_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 100000baseSR4_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 100000baseLR4_ER4_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 100000baseKR4_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 100000baseCR2_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 100000baseSR2_Full) ||
- ethtool_link_ksettings_test_link_mode(ks, advertising,
- 100000baseKR2_Full))
- adv_link_speed |= ICE_AQ_LINK_SPEED_100GB;
+ for (u32 i = 0; i < ARRAY_SIZE(ice_adv_lnk_speed_maps); i++) {
+ map = ice_adv_lnk_speed_maps + i;
+ if (linkmode_intersects(ks->link_modes.advertising, map->caps))
+ adv_link_speed |= ice_speed_to_aq_link(map->speed);
+ }
return adv_link_speed;
}
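The removed if/else chain is replaced by a walk over ice_adv_lnk_speed_maps: each struct ethtool_forced_speed_map entry pairs a speed with a linkmode bitmap (caps), and linkmode_intersects() tests it against the advertised modes. The table and its init helper are defined elsewhere in the driver (ice_adv_lnk_speed_maps_init() is called from ice_module_init() further down); a minimal sketch of the pattern, with illustrative entries only and a guessed helper body:

	static struct ethtool_forced_speed_map ice_adv_lnk_speed_maps[] = {
		{ .speed = SPEED_100000 },
		/* ... one entry per advertisable speed ... */
	};

	void ice_adv_lnk_speed_maps_init(void)
	{
		struct ethtool_forced_speed_map *map = &ice_adv_lnk_speed_maps[0];

		/* every 100G link mode counts toward the SPEED_100000 entry */
		linkmode_set_bit(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT, map->caps);
		linkmode_set_bit(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT, map->caps);
		linkmode_set_bit(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT, map->caps);
	}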
@@ -3285,7 +3371,7 @@ ice_get_ts_info(struct net_device *dev, struct ethtool_ts_info *info)
SOF_TIMESTAMPING_RX_HARDWARE |
SOF_TIMESTAMPING_RAW_HARDWARE;
- info->phc_index = ice_get_ptp_clock_index(pf);
+ info->phc_index = ice_ptp_clock_index(pf);
info->tx_types = BIT(HWTSTAMP_TX_OFF) | BIT(HWTSTAMP_TX_ON);
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.h b/drivers/net/ethernet/intel/ice/ice_ethtool.h
index b403ee79cd5e..b88e3da06f13 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.h
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.h
@@ -100,6 +100,14 @@ phy_type_high_lkup[] = {
[2] = ICE_PHY_TYPE(100GB, 100000baseCR2_Full),
[3] = ICE_PHY_TYPE(100GB, 100000baseSR2_Full),
[4] = ICE_PHY_TYPE(100GB, 100000baseCR2_Full),
+ [5] = ICE_PHY_TYPE(200GB, 200000baseCR4_Full),
+ [6] = ICE_PHY_TYPE(200GB, 200000baseSR4_Full),
+ [7] = ICE_PHY_TYPE(200GB, 200000baseLR4_ER4_FR4_Full),
+ [8] = ICE_PHY_TYPE(200GB, 200000baseLR4_ER4_FR4_Full),
+ [9] = ICE_PHY_TYPE(200GB, 200000baseDR4_Full),
+ [10] = ICE_PHY_TYPE(200GB, 200000baseKR4_Full),
+ [11] = ICE_PHY_TYPE(200GB, 200000baseSR4_Full),
+ [12] = ICE_PHY_TYPE(200GB, 200000baseCR4_Full),
};
#endif /* !_ICE_ETHTOOL_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
index 8c6e13f87b7d..d151e5bacfec 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2018-2020, Intel Corporation. */
+/* Copyright (C) 2018-2023, Intel Corporation. */
/* flow director ethtool support for ice */
@@ -540,16 +540,24 @@ static int ice_fdir_num_avail_fltr(struct ice_hw *hw, struct ice_vsi *vsi)
/* total guaranteed filters assigned to this VSI */
num_guar = vsi->num_gfltr;
- /* minus the guaranteed filters programed by this VSI */
- num_guar -= (rd32(hw, VSIQF_FD_CNT(vsi_num)) &
- VSIQF_FD_CNT_FD_GCNT_M) >> VSIQF_FD_CNT_FD_GCNT_S;
-
/* total global best effort filters */
num_be = hw->func_caps.fd_fltr_best_effort;
- /* minus the global best effort filters programmed */
- num_be -= (rd32(hw, GLQF_FD_CNT) & GLQF_FD_CNT_FD_BCNT_M) >>
- GLQF_FD_CNT_FD_BCNT_S;
+ /* Subtract the number of programmed filters from the global values */
+ switch (hw->mac_type) {
+ case ICE_MAC_E830:
+ num_guar -= FIELD_GET(E830_VSIQF_FD_CNT_FD_GCNT_M,
+ rd32(hw, VSIQF_FD_CNT(vsi_num)));
+ num_be -= FIELD_GET(E830_GLQF_FD_CNT_FD_BCNT_M,
+ rd32(hw, GLQF_FD_CNT));
+ break;
+ case ICE_MAC_E810:
+ default:
+ num_guar -= FIELD_GET(E800_VSIQF_FD_CNT_FD_GCNT_M,
+ rd32(hw, VSIQF_FD_CNT(vsi_num)));
+ num_be -= FIELD_GET(E800_GLQF_FD_CNT_FD_BCNT_M,
+ rd32(hw, GLQF_FD_CNT));
+ }
return num_guar + num_be;
}
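The open-coded "(reg & MASK) >> SHIFT" extraction gives way to FIELD_GET() over GENMASK()-defined masks, with the mask chosen per MAC type because E830 widens the filter counters (see the ice_hw_autogen.h changes below). An equivalence sketch using the E800 field layout from this patch:

	#include <linux/bitfield.h>
	#include <linux/bits.h>
	#include <linux/types.h>

	/* GENMASK(13, 0) is E800_VSIQF_FD_CNT_FD_GCNT_M in the new header */
	static inline u16 e800_fd_gcnt(u32 vsiqf_fd_cnt)
	{
		/* replaces (reg & VSIQF_FD_CNT_FD_GCNT_M) >> VSIQF_FD_CNT_FD_GCNT_S */
		return FIELD_GET(GENMASK(13, 0), vsiqf_fd_cnt);
	}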
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.c b/drivers/net/ethernet/intel/ice/ice_flow.c
index 85cca572c22a..fb8b925aaf8b 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.c
+++ b/drivers/net/ethernet/intel/ice/ice_flow.c
@@ -1318,7 +1318,6 @@ ice_flow_rem_entry_sync(struct ice_hw *hw, enum ice_block __always_unused blk,
list_del(&entry->l_entry);
- devm_kfree(ice_hw_to_dev(hw), entry->entry);
devm_kfree(ice_hw_to_dev(hw), entry);
return 0;
@@ -1645,10 +1644,8 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
*entry_h = ICE_FLOW_ENTRY_HNDL(e);
out:
- if (status && e) {
- devm_kfree(ice_hw_to_dev(hw), e->entry);
+ if (status)
devm_kfree(ice_hw_to_dev(hw), e);
- }
return status;
}
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.h b/drivers/net/ethernet/intel/ice/ice_flow.h
index b465d27d9b80..96923ef0a5a8 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.h
+++ b/drivers/net/ethernet/intel/ice/ice_flow.h
@@ -350,11 +350,8 @@ struct ice_flow_entry {
u64 id;
struct ice_flow_prof *prof;
- /* Flow entry's content */
- void *entry;
enum ice_flow_priority priority;
u16 vsi_handle;
- u16 entry_sz;
};
#define ICE_FLOW_ENTRY_HNDL(e) ((u64)(uintptr_t)e)
diff --git a/drivers/net/ethernet/intel/ice/ice_gnss.c b/drivers/net/ethernet/intel/ice/ice_gnss.c
index 75c9de675f20..c8ea1af51ad3 100644
--- a/drivers/net/ethernet/intel/ice/ice_gnss.c
+++ b/drivers/net/ethernet/intel/ice/ice_gnss.c
@@ -389,6 +389,9 @@ bool ice_gnss_is_gps_present(struct ice_hw *hw)
if (!hw->func_caps.ts_func_info.src_tmr_owned)
return false;
+ if (!ice_is_gps_in_netlist(hw))
+ return false;
+
#if IS_ENABLED(CONFIG_PTP_1588_CLOCK)
if (ice_is_e810t(hw)) {
int err;
diff --git a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
index 531cc2194741..86936b758ade 100644
--- a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+++ b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2018, Intel Corporation. */
+/* Copyright (c) 2018-2023, Intel Corporation. */
/* Machine-generated file */
@@ -231,6 +231,7 @@
#define PFINT_SB_CTL 0x0016B600
#define PFINT_SB_CTL_MSIX_INDX_M ICE_M(0x7FF, 0)
#define PFINT_SB_CTL_CAUSE_ENA_M BIT(30)
+#define PFINT_TSYN_MSK 0x0016C980
#define QINT_RQCTL(_QRX) (0x00150000 + ((_QRX) * 4))
#define QINT_RQCTL_MSIX_INDX_S 0
#define QINT_RQCTL_MSIX_INDX_M ICE_M(0x7FF, 0)
@@ -284,11 +285,11 @@
#define VPLAN_TX_QBASE_VFNUMQ_M ICE_M(0xFF, 16)
#define VPLAN_TXQ_MAPENA(_VF) (0x00073800 + ((_VF) * 4))
#define VPLAN_TXQ_MAPENA_TX_ENA_M BIT(0)
-#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA(_i) (0x001E36E0 + ((_i) * 32))
-#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_MAX_INDEX 8
-#define PRTMAC_HSEC_CTL_TX_PAUSE_QUANTA_HSEC_CTL_TX_PAUSE_QUANTA_M ICE_M(0xFFFF, 0)
-#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER(_i) (0x001E3800 + ((_i) * 32))
-#define PRTMAC_HSEC_CTL_TX_PAUSE_REFRESH_TIMER_M ICE_M(0xFFFF, 0)
+#define E800_PRTMAC_HSEC_CTL_TX_PS_QNT(_i) (0x001E36E0 + ((_i) * 32))
+#define E800_PRTMAC_HSEC_CTL_TX_PS_QNT_MAX 8
+#define E800_PRTMAC_HSEC_CTL_TX_PS_QNT_M GENMASK(15, 0)
+#define E800_PRTMAC_HSEC_CTL_TX_PS_RFSH_TMR(_i) (0x001E3800 + ((_i) * 32))
+#define E800_PRTMAC_HSEC_CTL_TX_PS_RFSH_TMR_M GENMASK(15, 0)
#define GL_MDCK_TX_TDPU 0x00049348
#define GL_MDCK_TX_TDPU_RCU_ANTISPOOF_ITR_DIS_M BIT(1)
#define GL_MDET_RX 0x00294C00
@@ -311,7 +312,11 @@
#define GL_MDET_TX_PQM_MAL_TYPE_S 26
#define GL_MDET_TX_PQM_MAL_TYPE_M ICE_M(0x1F, 26)
#define GL_MDET_TX_PQM_VALID_M BIT(31)
-#define GL_MDET_TX_TCLAN 0x000FC068
+#define GL_MDET_TX_TCLAN_BY_MAC(hw) \
+ ((hw)->mac_type == ICE_MAC_E830 ? E830_GL_MDET_TX_TCLAN : \
+ E800_GL_MDET_TX_TCLAN)
+#define E800_GL_MDET_TX_TCLAN 0x000FC068
+#define E830_GL_MDET_TX_TCLAN 0x000FCCC0
#define GL_MDET_TX_TCLAN_QNUM_S 0
#define GL_MDET_TX_TCLAN_QNUM_M ICE_M(0x7FFF, 0)
#define GL_MDET_TX_TCLAN_VF_NUM_S 15
@@ -325,7 +330,11 @@
#define PF_MDET_RX_VALID_M BIT(0)
#define PF_MDET_TX_PQM 0x002D2C80
#define PF_MDET_TX_PQM_VALID_M BIT(0)
-#define PF_MDET_TX_TCLAN 0x000FC000
+#define PF_MDET_TX_TCLAN_BY_MAC(hw) \
+ ((hw)->mac_type == ICE_MAC_E830 ? E830_PF_MDET_TX_TCLAN : \
+ E800_PF_MDET_TX_TCLAN)
+#define E800_PF_MDET_TX_TCLAN 0x000FC000
+#define E830_PF_MDET_TX_TCLAN 0x000FCC00
#define PF_MDET_TX_TCLAN_VALID_M BIT(0)
#define VP_MDET_RX(_VF) (0x00294400 + ((_VF) * 4))
#define VP_MDET_RX_VALID_M BIT(0)
@@ -335,6 +344,8 @@
#define VP_MDET_TX_TCLAN_VALID_M BIT(0)
#define VP_MDET_TX_TDPU(_VF) (0x00040000 + ((_VF) * 4))
#define VP_MDET_TX_TDPU_VALID_M BIT(0)
+#define E800_GL_MNG_FWSM_FW_MODES_M GENMASK(2, 0)
+#define E830_GL_MNG_FWSM_FW_MODES_M GENMASK(1, 0)
#define GL_MNG_FWSM 0x000B6134
#define GL_MNG_FWSM_FW_LOADING_M BIT(30)
#define GLNVM_FLA 0x000B6108
@@ -363,13 +374,18 @@
#define GL_PWR_MODE_CTL_CAR_MAX_BW_S 30
#define GL_PWR_MODE_CTL_CAR_MAX_BW_M ICE_M(0x3, 30)
#define GLQF_FD_CNT 0x00460018
+#define E800_GLQF_FD_CNT_FD_GCNT_M GENMASK(14, 0)
+#define E830_GLQF_FD_CNT_FD_GCNT_M GENMASK(15, 0)
#define GLQF_FD_CNT_FD_BCNT_S 16
-#define GLQF_FD_CNT_FD_BCNT_M ICE_M(0x7FFF, 16)
+#define E800_GLQF_FD_CNT_FD_BCNT_M GENMASK(30, 16)
+#define E830_GLQF_FD_CNT_FD_BCNT_M GENMASK(31, 16)
#define GLQF_FD_SIZE 0x00460010
#define GLQF_FD_SIZE_FD_GSIZE_S 0
-#define GLQF_FD_SIZE_FD_GSIZE_M ICE_M(0x7FFF, 0)
+#define E800_GLQF_FD_SIZE_FD_GSIZE_M GENMASK(14, 0)
+#define E830_GLQF_FD_SIZE_FD_GSIZE_M GENMASK(15, 0)
#define GLQF_FD_SIZE_FD_BSIZE_S 16
-#define GLQF_FD_SIZE_FD_BSIZE_M ICE_M(0x7FFF, 16)
+#define E800_GLQF_FD_SIZE_FD_BSIZE_M GENMASK(30, 16)
+#define E830_GLQF_FD_SIZE_FD_BSIZE_M GENMASK(31, 16)
#define GLQF_FDINSET(_i, _j) (0x00412000 + ((_i) * 4 + (_j) * 512))
#define GLQF_FDMASK(_i) (0x00410800 + ((_i) * 4))
#define GLQF_FDMASK_MAX_INDEX 31
@@ -388,6 +404,10 @@
#define GLQF_HMASK_SEL(_i) (0x00410000 + ((_i) * 4))
#define GLQF_HMASK_SEL_MAX_INDEX 127
#define GLQF_HMASK_SEL_MASK_SEL_S 0
+#define E800_PFQF_FD_CNT_FD_GCNT_M GENMASK(14, 0)
+#define E830_PFQF_FD_CNT_FD_GCNT_M GENMASK(15, 0)
+#define E800_PFQF_FD_CNT_FD_BCNT_M GENMASK(30, 16)
+#define E830_PFQF_FD_CNT_FD_BCNT_M GENMASK(31, 16)
#define PFQF_FD_ENA 0x0043A000
#define PFQF_FD_ENA_FD_ENA_M BIT(0)
#define PFQF_FD_SIZE 0x00460100
@@ -478,6 +498,7 @@
#define GLTSYN_SYNC_DLAY 0x00088818
#define GLTSYN_TGT_H_0(_i) (0x00088930 + ((_i) * 4))
#define GLTSYN_TGT_L_0(_i) (0x00088928 + ((_i) * 4))
+#define GLTSYN_TIME_0(_i) (0x000888C8 + ((_i) * 4))
#define GLTSYN_TIME_H(_i) (0x000888D8 + ((_i) * 4))
#define GLTSYN_TIME_L(_i) (0x000888D0 + ((_i) * 4))
#define PFHH_SEM 0x000A4200 /* Reset Source: PFR */
@@ -486,9 +507,11 @@
#define PFTSYN_SEM_BUSY_M BIT(0)
#define VSIQF_FD_CNT(_VSI) (0x00464000 + ((_VSI) * 4))
#define VSIQF_FD_CNT_FD_GCNT_S 0
-#define VSIQF_FD_CNT_FD_GCNT_M ICE_M(0x3FFF, 0)
+#define E800_VSIQF_FD_CNT_FD_GCNT_M GENMASK(13, 0)
+#define E830_VSIQF_FD_CNT_FD_GCNT_M GENMASK(15, 0)
#define VSIQF_FD_CNT_FD_BCNT_S 16
-#define VSIQF_FD_CNT_FD_BCNT_M ICE_M(0x3FFF, 16)
+#define E800_VSIQF_FD_CNT_FD_BCNT_M GENMASK(29, 16)
+#define E830_VSIQF_FD_CNT_FD_BCNT_M GENMASK(31, 16)
#define VSIQF_FD_SIZE(_VSI) (0x00462000 + ((_VSI) * 4))
#define VSIQF_HKEY_MAX_INDEX 12
#define PFPM_APM 0x000B8080
@@ -500,6 +523,10 @@
#define PFPM_WUS_MAG_M BIT(1)
#define PFPM_WUS_MNG_M BIT(3)
#define PFPM_WUS_FW_RST_WK_M BIT(31)
+#define E830_PRTMAC_CL01_PS_QNT 0x001E32A0
+#define E830_PRTMAC_CL01_PS_QNT_CL0_M GENMASK(15, 0)
+#define E830_PRTMAC_CL01_QNT_THR 0x001E3320
+#define E830_PRTMAC_CL01_QNT_THR_CL0_M GENMASK(15, 0)
#define VFINT_DYN_CTLN(_i) (0x00003800 + ((_i) * 4))
#define VFINT_DYN_CTLN_CLEARPBA_M BIT(1)
diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
index 7b1256992dcf..b980f89dc892 100644
--- a/drivers/net/ethernet/intel/ice/ice_lag.c
+++ b/drivers/net/ethernet/intel/ice/ice_lag.c
@@ -19,8 +19,11 @@ static const u8 lacp_train_pkt[LACP_TRAIN_PKT_LEN] = { 0, 0, 0, 0, 0, 0,
static const u8 ice_dflt_vsi_rcp[ICE_RECIPE_LEN] = {
0x05, 0, 0, 0, 0x20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0x85, 0, 0x01, 0, 0, 0, 0xff, 0xff, 0x08, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0x30, 0, 0, 0, 0, 0, 0, 0, 0, 0,
- 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
+ 0, 0, 0, 0, 0, 0, 0x30 };
+static const u8 ice_lport_rcp[ICE_RECIPE_LEN] = {
+ 0x05, 0, 0, 0, 0x20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0x85, 0, 0x16, 0, 0, 0, 0xff, 0xff, 0x07, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0x30 };
/**
* ice_lag_set_primary - set PF LAG state as Primary
@@ -173,18 +176,22 @@ static struct ice_lag *ice_lag_find_primary(struct ice_lag *lag)
}
/**
- * ice_lag_cfg_dflt_fltr - Add/Remove default VSI rule for LAG
+ * ice_lag_cfg_fltr - Add/Remove rule for LAG
* @lag: lag struct for local interface
+ * @act: rule action
+ * @recipe_id: recipe id for the new rule
+ * @rule_idx: pointer to rule index
* @add: boolean on whether we are adding filters
*/
static int
-ice_lag_cfg_dflt_fltr(struct ice_lag *lag, bool add)
+ice_lag_cfg_fltr(struct ice_lag *lag, u32 act, u16 recipe_id, u16 *rule_idx,
+ bool add)
{
struct ice_sw_rule_lkup_rx_tx *s_rule;
u16 s_rule_sz, vsi_num;
struct ice_hw *hw;
- u32 act, opc;
u8 *eth_hdr;
+ u32 opc;
int err;
hw = &lag->pf->hw;
@@ -193,7 +200,7 @@ ice_lag_cfg_dflt_fltr(struct ice_lag *lag, bool add)
s_rule_sz = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE(s_rule);
s_rule = kzalloc(s_rule_sz, GFP_KERNEL);
if (!s_rule) {
- dev_err(ice_pf_to_dev(lag->pf), "error allocating rule for LAG default VSI\n");
+ dev_err(ice_pf_to_dev(lag->pf), "error allocating rule for LAG\n");
return -ENOMEM;
}
@@ -201,19 +208,17 @@ ice_lag_cfg_dflt_fltr(struct ice_lag *lag, bool add)
eth_hdr = s_rule->hdr_data;
ice_fill_eth_hdr(eth_hdr);
- act = (vsi_num << ICE_SINGLE_ACT_VSI_ID_S) &
+ act |= (vsi_num << ICE_SINGLE_ACT_VSI_ID_S) &
ICE_SINGLE_ACT_VSI_ID_M;
- act |= ICE_SINGLE_ACT_VSI_FORWARDING |
- ICE_SINGLE_ACT_VALID_BIT | ICE_SINGLE_ACT_LAN_ENABLE;
s_rule->hdr.type = cpu_to_le16(ICE_AQC_SW_RULES_T_LKUP_RX);
- s_rule->recipe_id = cpu_to_le16(lag->pf_recipe);
+ s_rule->recipe_id = cpu_to_le16(recipe_id);
s_rule->src = cpu_to_le16(hw->port_info->lport);
s_rule->act = cpu_to_le32(act);
s_rule->hdr_len = cpu_to_le16(DUMMY_ETH_HDR_LEN);
opc = ice_aqc_opc_add_sw_rules;
} else {
- s_rule->index = cpu_to_le16(lag->pf_rule_id);
+ s_rule->index = cpu_to_le16(*rule_idx);
opc = ice_aqc_opc_remove_sw_rules;
}
@@ -222,9 +227,9 @@ ice_lag_cfg_dflt_fltr(struct ice_lag *lag, bool add)
goto dflt_fltr_free;
if (add)
- lag->pf_rule_id = le16_to_cpu(s_rule->index);
+ *rule_idx = le16_to_cpu(s_rule->index);
else
- lag->pf_rule_id = 0;
+ *rule_idx = 0;
dflt_fltr_free:
kfree(s_rule);
@@ -232,6 +237,37 @@ dflt_fltr_free:
}
/**
+ * ice_lag_cfg_dflt_fltr - Add/Remove default VSI rule for LAG
+ * @lag: lag struct for local interface
+ * @add: boolean on whether to add filter
+ */
+static int
+ice_lag_cfg_dflt_fltr(struct ice_lag *lag, bool add)
+{
+ u32 act = ICE_SINGLE_ACT_VSI_FORWARDING |
+ ICE_SINGLE_ACT_VALID_BIT | ICE_SINGLE_ACT_LAN_ENABLE;
+
+ return ice_lag_cfg_fltr(lag, act, lag->pf_recipe,
+ &lag->pf_rule_id, add);
+}
+
+/**
+ * ice_lag_cfg_drop_fltr - Add/Remove lport drop rule
+ * @lag: lag struct for local interface
+ * @add: boolean on whether to add filter
+ */
+static int
+ice_lag_cfg_drop_fltr(struct ice_lag *lag, bool add)
+{
+ u32 act = ICE_SINGLE_ACT_VSI_FORWARDING |
+ ICE_SINGLE_ACT_VALID_BIT |
+ ICE_SINGLE_ACT_DROP;
+
+ return ice_lag_cfg_fltr(lag, act, lag->lport_recipe,
+ &lag->lport_rule_idx, add);
+}
+
+/**
* ice_lag_cfg_pf_fltrs - set filters up for new active port
* @lag: local interfaces lag struct
* @ptr: opaque data containing notifier event
@@ -257,13 +293,18 @@ ice_lag_cfg_pf_fltrs(struct ice_lag *lag, void *ptr)
if (bonding_info->slave.state && lag->pf_rule_id) {
if (ice_lag_cfg_dflt_fltr(lag, false))
dev_err(dev, "Error removing old default VSI filter\n");
+ if (ice_lag_cfg_drop_fltr(lag, true))
+ dev_err(dev, "Error adding new drop filter\n");
return;
}
/* interface becoming active - add new default VSI rule */
- if (!bonding_info->slave.state && !lag->pf_rule_id)
+ if (!bonding_info->slave.state && !lag->pf_rule_id) {
if (ice_lag_cfg_dflt_fltr(lag, true))
dev_err(dev, "Error adding new default VSI filter\n");
+ if (lag->lport_rule_idx && ice_lag_cfg_drop_fltr(lag, false))
+ dev_err(dev, "Error removing old drop filter\n");
+ }
}
/**
@@ -430,10 +471,11 @@ static void
ice_lag_move_vf_node_tc(struct ice_lag *lag, u8 oldport, u8 newport,
u16 vsi_num, u8 tc)
{
- u16 numq, valq, buf_size, num_moved, qbuf_size;
+ DEFINE_FLEX(struct ice_aqc_move_elem, buf, teid, 1);
struct device *dev = ice_pf_to_dev(lag->pf);
+ u16 numq, valq, num_moved, qbuf_size;
+ u16 buf_size = __struct_size(buf);
struct ice_aqc_cfg_txqs_buf *qbuf;
- struct ice_aqc_move_elem *buf;
struct ice_sched_node *n_prt;
struct ice_hw *new_hw = NULL;
__le32 teid, parent_teid;
@@ -505,26 +547,17 @@ qbuf_none:
goto resume_traffic;
/* Move Vf's VSI node for this TC to newport's scheduler tree */
- buf_size = struct_size(buf, teid, 1);
- buf = kzalloc(buf_size, GFP_KERNEL);
- if (!buf) {
- dev_warn(dev, "Failure to alloc memory for VF node failover\n");
- goto resume_traffic;
- }
-
buf->hdr.src_parent_teid = parent_teid;
buf->hdr.dest_parent_teid = n_prt->info.node_teid;
buf->hdr.num_elems = cpu_to_le16(1);
buf->hdr.mode = ICE_AQC_MOVE_ELEM_MODE_KEEP_OWN;
buf->teid[0] = teid;
- if (ice_aq_move_sched_elems(&lag->pf->hw, 1, buf, buf_size, &num_moved,
- NULL))
+ if (ice_aq_move_sched_elems(&lag->pf->hw, buf, buf_size, &num_moved))
dev_warn(dev, "Failure to move VF nodes for failover\n");
else
ice_sched_update_parent(n_prt, ctx->sched.vsi_node[tc]);
- kfree(buf);
goto resume_traffic;
qbuf_err:
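The heap allocation for the one-element ice_aqc_move_elem buffer is gone: DEFINE_FLEX() declares the struct with its single-entry teid[] flexible array on the stack (zero-initialized), and __struct_size() reports the total size passed to firmware, so the kzalloc()/kfree() pair and its failure path disappear. The pattern in isolation (a sketch, not additional driver code):

	DEFINE_FLEX(struct ice_aqc_move_elem, buf, teid, 1);	/* on-stack, zeroed */
	u16 buf_size = __struct_size(buf);			/* header + teid[1] */

	buf->hdr.num_elems = cpu_to_le16(1);
	buf->teid[0] = teid;
	/* buf/buf_size go straight to ice_aq_move_sched_elems(); nothing to free */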
@@ -755,10 +788,11 @@ static void
ice_lag_reclaim_vf_tc(struct ice_lag *lag, struct ice_hw *src_hw, u16 vsi_num,
u8 tc)
{
- u16 numq, valq, buf_size, num_moved, qbuf_size;
+ DEFINE_FLEX(struct ice_aqc_move_elem, buf, teid, 1);
struct device *dev = ice_pf_to_dev(lag->pf);
+ u16 numq, valq, num_moved, qbuf_size;
+ u16 buf_size = __struct_size(buf);
struct ice_aqc_cfg_txqs_buf *qbuf;
- struct ice_aqc_move_elem *buf;
struct ice_sched_node *n_prt;
__le32 teid, parent_teid;
struct ice_vsi_ctx *ctx;
@@ -820,26 +854,17 @@ reclaim_none:
goto resume_reclaim;
/* Move node to new parent */
- buf_size = struct_size(buf, teid, 1);
- buf = kzalloc(buf_size, GFP_KERNEL);
- if (!buf) {
- dev_warn(dev, "Failure to alloc memory for VF node failover\n");
- goto resume_reclaim;
- }
-
buf->hdr.src_parent_teid = parent_teid;
buf->hdr.dest_parent_teid = n_prt->info.node_teid;
buf->hdr.num_elems = cpu_to_le16(1);
buf->hdr.mode = ICE_AQC_MOVE_ELEM_MODE_KEEP_OWN;
buf->teid[0] = teid;
- if (ice_aq_move_sched_elems(&lag->pf->hw, 1, buf, buf_size, &num_moved,
- NULL))
+ if (ice_aq_move_sched_elems(&lag->pf->hw, buf, buf_size, &num_moved))
dev_warn(dev, "Failure to move VF nodes for LAG reclaim\n");
else
ice_sched_update_parent(n_prt, ctx->sched.vsi_node[tc]);
- kfree(buf);
goto resume_reclaim;
reclaim_qerr:
@@ -1195,6 +1220,7 @@ static void ice_lag_changeupper_event(struct ice_lag *lag, void *ptr)
swid = primary_lag->pf->hw.port_info->sw_id;
ice_lag_set_swid(swid, lag, true);
ice_lag_add_prune_list(primary_lag, lag->pf);
+ ice_lag_cfg_drop_fltr(lag, true);
}
/* add filter for primary control packets */
ice_lag_cfg_cp_fltr(lag, true);
@@ -1792,10 +1818,11 @@ static void
ice_lag_move_vf_nodes_tc_sync(struct ice_lag *lag, struct ice_hw *dest_hw,
u16 vsi_num, u8 tc)
{
- u16 numq, valq, buf_size, num_moved, qbuf_size;
+ DEFINE_FLEX(struct ice_aqc_move_elem, buf, teid, 1);
struct device *dev = ice_pf_to_dev(lag->pf);
+ u16 numq, valq, num_moved, qbuf_size;
+ u16 buf_size = __struct_size(buf);
struct ice_aqc_cfg_txqs_buf *qbuf;
- struct ice_aqc_move_elem *buf;
struct ice_sched_node *n_prt;
__le32 teid, parent_teid;
struct ice_vsi_ctx *ctx;
@@ -1853,26 +1880,17 @@ sync_none:
goto resume_sync;
/* Move node to new parent */
- buf_size = struct_size(buf, teid, 1);
- buf = kzalloc(buf_size, GFP_KERNEL);
- if (!buf) {
- dev_warn(dev, "Failure to alloc for VF node move in reset rebuild\n");
- goto resume_sync;
- }
-
buf->hdr.src_parent_teid = parent_teid;
buf->hdr.dest_parent_teid = n_prt->info.node_teid;
buf->hdr.num_elems = cpu_to_le16(1);
buf->hdr.mode = ICE_AQC_MOVE_ELEM_MODE_KEEP_OWN;
buf->teid[0] = teid;
- if (ice_aq_move_sched_elems(&lag->pf->hw, 1, buf, buf_size, &num_moved,
- NULL))
+ if (ice_aq_move_sched_elems(&lag->pf->hw, buf, buf_size, &num_moved))
dev_warn(dev, "Failure to move VF nodes for LAG reset rebuild\n");
else
ice_sched_update_parent(n_prt, ctx->sched.vsi_node[tc]);
- kfree(buf);
goto resume_sync;
sync_qerr:
@@ -1953,11 +1971,16 @@ int ice_init_lag(struct ice_pf *pf)
goto lag_error;
}
- err = ice_create_lag_recipe(&pf->hw, &lag->pf_recipe, ice_dflt_vsi_rcp,
- 1);
+ err = ice_create_lag_recipe(&pf->hw, &lag->pf_recipe,
+ ice_dflt_vsi_rcp, 1);
if (err)
goto lag_error;
+ err = ice_create_lag_recipe(&pf->hw, &lag->lport_recipe,
+ ice_lport_rcp, 3);
+ if (err)
+ goto free_rcp_res;
+
/* associate recipes to profiles */
for (n = 0; n < ICE_PROFID_IPV6_GTPU_IPV6_TCP_INNER; n++) {
err = ice_aq_get_recipe_to_profile(&pf->hw, n,
@@ -1966,7 +1989,8 @@ int ice_init_lag(struct ice_pf *pf)
continue;
if (recipe_bits & BIT(ICE_SW_LKUP_DFLT)) {
- recipe_bits |= BIT(lag->pf_recipe);
+ recipe_bits |= BIT(lag->pf_recipe) |
+ BIT(lag->lport_recipe);
ice_aq_map_recipe_to_profile(&pf->hw, n,
(u8 *)&recipe_bits, NULL);
}
@@ -1977,6 +2001,9 @@ int ice_init_lag(struct ice_pf *pf)
dev_dbg(dev, "INIT LAG complete\n");
return 0;
+free_rcp_res:
+ ice_free_hw_res(&pf->hw, ICE_AQC_RES_TYPE_RECIPE, 1,
+ &pf->lag->pf_recipe);
lag_error:
kfree(lag);
pf->lag = NULL;
@@ -2006,6 +2033,8 @@ void ice_deinit_lag(struct ice_pf *pf)
ice_free_hw_res(&pf->hw, ICE_AQC_RES_TYPE_RECIPE, 1,
&pf->lag->pf_recipe);
+ ice_free_hw_res(&pf->hw, ICE_AQC_RES_TYPE_RECIPE, 1,
+ &pf->lag->lport_recipe);
kfree(lag);
diff --git a/drivers/net/ethernet/intel/ice/ice_lag.h b/drivers/net/ethernet/intel/ice/ice_lag.h
index facb6c894b6d..9557e8605a07 100644
--- a/drivers/net/ethernet/intel/ice/ice_lag.h
+++ b/drivers/net/ethernet/intel/ice/ice_lag.h
@@ -39,8 +39,10 @@ struct ice_lag {
u8 bonded:1; /* currently bonded */
u8 primary:1; /* this is primary */
u16 pf_recipe;
+ u16 lport_recipe;
u16 pf_rule_id;
u16 cp_rule_idx;
+ u16 lport_rule_idx;
u8 role;
};
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 73bbf06a76db..4b1e56396293 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -229,7 +229,7 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi)
* of queues vectors, subtract 1 (ICE_NONQ_VECS_VF) from the
* original vector count
*/
- vsi->num_q_vectors = pf->vfs.num_msix_per - ICE_NONQ_VECS_VF;
+ vsi->num_q_vectors = vf->num_msix - ICE_NONQ_VECS_VF;
break;
case ICE_VSI_CTRL:
vsi->alloc_txq = 1;
@@ -1831,21 +1831,14 @@ int ice_vsi_cfg_single_rxq(struct ice_vsi *vsi, u16 q_idx)
int ice_vsi_cfg_single_txq(struct ice_vsi *vsi, struct ice_tx_ring **tx_rings, u16 q_idx)
{
- struct ice_aqc_add_tx_qgrp *qg_buf;
- int err;
+ DEFINE_FLEX(struct ice_aqc_add_tx_qgrp, qg_buf, txqs, 1);
if (q_idx >= vsi->alloc_txq || !tx_rings || !tx_rings[q_idx])
return -EINVAL;
- qg_buf = kzalloc(struct_size(qg_buf, txqs, 1), GFP_KERNEL);
- if (!qg_buf)
- return -ENOMEM;
-
qg_buf->num_txqs = 1;
- err = ice_vsi_cfg_txq(vsi, tx_rings[q_idx], qg_buf);
- kfree(qg_buf);
- return err;
+ return ice_vsi_cfg_txq(vsi, tx_rings[q_idx], qg_buf);
}
/**
@@ -1887,24 +1880,18 @@ setup_rings:
static int
ice_vsi_cfg_txqs(struct ice_vsi *vsi, struct ice_tx_ring **rings, u16 count)
{
- struct ice_aqc_add_tx_qgrp *qg_buf;
- u16 q_idx = 0;
+ DEFINE_FLEX(struct ice_aqc_add_tx_qgrp, qg_buf, txqs, 1);
int err = 0;
-
- qg_buf = kzalloc(struct_size(qg_buf, txqs, 1), GFP_KERNEL);
- if (!qg_buf)
- return -ENOMEM;
+ u16 q_idx;
qg_buf->num_txqs = 1;
for (q_idx = 0; q_idx < count; q_idx++) {
err = ice_vsi_cfg_txq(vsi, rings[q_idx], qg_buf);
if (err)
- goto err_cfg_txqs;
+ break;
}
-err_cfg_txqs:
- kfree(qg_buf);
return err;
}
@@ -3990,13 +3977,21 @@ void ice_init_feature_support(struct ice_pf *pf)
case ICE_DEV_ID_E810C_BACKPLANE:
case ICE_DEV_ID_E810C_QSFP:
case ICE_DEV_ID_E810C_SFP:
+ case ICE_DEV_ID_E810_XXV_BACKPLANE:
+ case ICE_DEV_ID_E810_XXV_QSFP:
+ case ICE_DEV_ID_E810_XXV_SFP:
ice_set_feature_support(pf, ICE_F_DSCP);
- ice_set_feature_support(pf, ICE_F_PTP_EXTTS);
- if (ice_is_e810t(&pf->hw)) {
+ if (ice_is_phy_rclk_in_netlist(&pf->hw))
+ ice_set_feature_support(pf, ICE_F_PHY_RCLK);
+ /* If we don't own the timer - don't enable other caps */
+ if (!ice_pf_src_tmr_owned(pf))
+ break;
+ if (ice_is_cgu_in_netlist(&pf->hw))
+ ice_set_feature_support(pf, ICE_F_CGU);
+ if (ice_is_clock_mux_in_netlist(&pf->hw))
ice_set_feature_support(pf, ICE_F_SMA_CTRL);
- if (ice_gnss_is_gps_present(&pf->hw))
- ice_set_feature_support(pf, ICE_F_GNSS);
- }
+ if (ice_gnss_is_gps_present(&pf->hw))
+ ice_set_feature_support(pf, ICE_F_GNSS);
break;
default:
break;
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 7784135160fd..6607fa6fe556 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
-/* Copyright (c) 2018, Intel Corporation. */
+/* Copyright (c) 2018-2023, Intel Corporation. */
/* Intel(R) Ethernet Connection E800 Series Linux Driver */
@@ -1759,7 +1759,7 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
wr32(hw, GL_MDET_TX_PQM, 0xffffffff);
}
- reg = rd32(hw, GL_MDET_TX_TCLAN);
+ reg = rd32(hw, GL_MDET_TX_TCLAN_BY_MAC(hw));
if (reg & GL_MDET_TX_TCLAN_VALID_M) {
u8 pf_num = (reg & GL_MDET_TX_TCLAN_PF_NUM_M) >>
GL_MDET_TX_TCLAN_PF_NUM_S;
@@ -1773,7 +1773,7 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
if (netif_msg_tx_err(pf))
dev_info(dev, "Malicious Driver Detection event %d on TX queue %d PF# %d VF# %d\n",
event, queue, pf_num, vf_num);
- wr32(hw, GL_MDET_TX_TCLAN, 0xffffffff);
+ wr32(hw, GL_MDET_TX_TCLAN_BY_MAC(hw), U32_MAX);
}
reg = rd32(hw, GL_MDET_RX);
@@ -1801,9 +1801,9 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
dev_info(dev, "Malicious Driver Detection event TX_PQM detected on PF\n");
}
- reg = rd32(hw, PF_MDET_TX_TCLAN);
+ reg = rd32(hw, PF_MDET_TX_TCLAN_BY_MAC(hw));
if (reg & PF_MDET_TX_TCLAN_VALID_M) {
- wr32(hw, PF_MDET_TX_TCLAN, 0xFFFF);
+ wr32(hw, PF_MDET_TX_TCLAN_BY_MAC(hw), 0xffff);
if (netif_msg_tx_err(pf))
dev_info(dev, "Malicious Driver Detection event TX_TCLAN detected on PF\n");
}
@@ -3150,7 +3150,7 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)
if (oicr & PFINT_OICR_TSYN_TX_M) {
ena_mask &= ~PFINT_OICR_TSYN_TX_M;
- if (!hw->reset_ongoing)
+ if (!hw->reset_ongoing && ice_ptp_pf_handles_tx_interrupt(pf))
set_bit(ICE_MISC_THREAD_TX_TSTAMP, pf->misc_thread);
}
@@ -3160,7 +3160,7 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)
ena_mask &= ~PFINT_OICR_TSYN_EVNT_M;
- if (hw->func_caps.ts_func_info.src_tmr_owned) {
+ if (ice_pf_src_tmr_owned(pf)) {
/* Save EVENTs from GLTSYN register */
pf->ptp.ext_ts_irq |= gltsyn_stat &
(GLTSYN_STAT_EVENT0_M |
@@ -3871,7 +3871,8 @@ static void ice_set_pf_caps(struct ice_pf *pf)
}
clear_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags);
- if (func_caps->common_cap.ieee_1588)
+ if (func_caps->common_cap.ieee_1588 &&
+ !(pf->hw.mac_type == ICE_MAC_E830))
set_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags);
pf->max_pf_txqs = func_caps->common_cap.num_txq;
@@ -4666,6 +4667,10 @@ static void ice_init_features(struct ice_pf *pf)
if (ice_is_feature_supported(pf, ICE_F_GNSS))
ice_gnss_init(pf);
+ if (ice_is_feature_supported(pf, ICE_F_CGU) ||
+ ice_is_feature_supported(pf, ICE_F_PHY_RCLK))
+ ice_dpll_init(pf);
+
/* Note: Flow director init failure is non-fatal to load */
if (ice_init_fdir(pf))
dev_err(dev, "could not initialize flow director\n");
@@ -4695,6 +4700,8 @@ static void ice_deinit_features(struct ice_pf *pf)
ice_gnss_exit(pf);
if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
ice_ptp_release(pf);
+ if (test_bit(ICE_FLAG_DPLL, pf->flags))
+ ice_dpll_deinit(pf);
}
static void ice_init_wakeup(struct ice_pf *pf)
@@ -5535,7 +5542,7 @@ static void ice_pci_err_resume(struct pci_dev *pdev)
return;
}
- ice_restore_all_vfs_msi_state(pdev);
+ ice_restore_all_vfs_msi_state(pf);
ice_do_reset(pf, ICE_RESET_PFR);
ice_service_task_restart(pf);
@@ -5578,34 +5585,38 @@ static void ice_pci_err_reset_done(struct pci_dev *pdev)
* Class, Class Mask, private data (not used) }
*/
static const struct pci_device_id ice_pci_tbl[] = {
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_BACKPLANE), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_QSFP), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_SFP), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_BACKPLANE), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_QSFP), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_SFP), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_BACKPLANE), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_QSFP), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_SFP), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_10G_BASE_T), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_SGMII), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822C_BACKPLANE), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822C_QSFP), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822C_SFP), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822C_10G_BASE_T), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822C_SGMII), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822L_BACKPLANE), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822L_SFP), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822L_10G_BASE_T), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822L_SGMII), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823L_BACKPLANE), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823L_SFP), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823L_10G_BASE_T), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823L_1GBE), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823L_QSFP), 0 },
- { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822_SI_DFLT), 0 },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_BACKPLANE) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_QSFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810C_SFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_BACKPLANE) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_QSFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E810_XXV_SFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_BACKPLANE) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_QSFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_SFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_10G_BASE_T) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823C_SGMII) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822C_BACKPLANE) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822C_QSFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822C_SFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822C_10G_BASE_T) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822C_SGMII) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822L_BACKPLANE) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822L_SFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822L_10G_BASE_T) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822L_SGMII) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823L_BACKPLANE) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823L_SFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823L_10G_BASE_T) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823L_1GBE) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E823L_QSFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E822_SI_DFLT) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E830_BACKPLANE) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E830_QSFP56) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E830_SFP) },
+ { PCI_VDEVICE(INTEL, ICE_DEV_ID_E830_SFP_DD) },
/* required last entry */
- { 0, }
+ {}
};
MODULE_DEVICE_TABLE(pci, ice_pci_tbl);
@@ -5629,6 +5640,8 @@ static struct pci_driver ice_driver = {
#endif /* CONFIG_PM */
.shutdown = ice_shutdown,
.sriov_configure = ice_sriov_configure,
+ .sriov_get_vf_total_msix = ice_sriov_get_vf_total_msix,
+ .sriov_set_msix_vec_count = ice_sriov_set_msix_vec_count,
.err_handler = &ice_pci_err_handler
};
@@ -5645,6 +5658,8 @@ static int __init ice_module_init(void)
pr_info("%s\n", ice_driver_string);
pr_info("%s\n", ice_copyright);
+ ice_adv_lnk_speed_maps_init();
+
ice_wq = alloc_workqueue("%s", 0, 0, KBUILD_MODNAME);
if (!ice_wq) {
pr_err("Failed to create workqueue\n");
@@ -7387,8 +7402,13 @@ static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
}
/* configure PTP timestamping after VSI rebuild */
- if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags))
- ice_ptp_cfg_timestamp(pf, false);
+ if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags)) {
+ if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_SELF)
+ ice_ptp_cfg_timestamp(pf, false);
+ else if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_ALL)
+ /* for E82x, the PHC owner always needs to have interrupts enabled */
+ ice_ptp_cfg_timestamp(pf, true);
+ }
err = ice_vsi_rebuild_by_type(pf, ICE_VSI_SWITCHDEV_CTRL);
if (err) {
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
index 81d96a40d5a7..1eddcbe89b0c 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
+++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
@@ -39,8 +39,8 @@ ice_get_sma_config_e810t(struct ice_hw *hw, struct ptp_pin_desc *ptp_pins)
/* initialize with defaults */
for (i = 0; i < NUM_PTP_PINS_E810T; i++) {
- snprintf(ptp_pins[i].name, sizeof(ptp_pins[i].name),
- "%s", ice_pin_desc_e810t[i].name);
+ strscpy(ptp_pins[i].name, ice_pin_desc_e810t[i].name,
+ sizeof(ptp_pins[i].name));
ptp_pins[i].index = ice_pin_desc_e810t[i].index;
ptp_pins[i].func = ice_pin_desc_e810t[i].func;
ptp_pins[i].chan = ice_pin_desc_e810t[i].chan;
@@ -256,6 +256,24 @@ ice_verify_pin_e810t(struct ptp_clock_info *info, unsigned int pin,
}
/**
+ * ice_ptp_configure_tx_tstamp - Enable or disable Tx timestamp interrupt
+ * @pf: The PF pointer to search in
+ * @on: bool value for whether timestamp interrupt is enabled or disabled
+ */
+static void ice_ptp_configure_tx_tstamp(struct ice_pf *pf, bool on)
+{
+ u32 val;
+
+ /* Configure the Tx timestamp interrupt */
+ val = rd32(&pf->hw, PFINT_OICR_ENA);
+ if (on)
+ val |= PFINT_OICR_TSYN_TX_M;
+ else
+ val &= ~PFINT_OICR_TSYN_TX_M;
+ wr32(&pf->hw, PFINT_OICR_ENA, val);
+}
+
+/**
* ice_set_tx_tstamp - Enable or disable Tx timestamping
* @pf: The PF pointer to search in
* @on: bool value for whether timestamps are enabled or disabled
@@ -263,7 +281,6 @@ ice_verify_pin_e810t(struct ptp_clock_info *info, unsigned int pin,
static void ice_set_tx_tstamp(struct ice_pf *pf, bool on)
{
struct ice_vsi *vsi;
- u32 val;
u16 i;
vsi = ice_get_main_vsi(pf);
@@ -277,13 +294,8 @@ static void ice_set_tx_tstamp(struct ice_pf *pf, bool on)
vsi->tx_rings[i]->ptp_tx = on;
}
- /* Configure the Tx timestamp interrupt */
- val = rd32(&pf->hw, PFINT_OICR_ENA);
- if (on)
- val |= PFINT_OICR_TSYN_TX_M;
- else
- val &= ~PFINT_OICR_TSYN_TX_M;
- wr32(&pf->hw, PFINT_OICR_ENA, val);
+ if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_SELF)
+ ice_ptp_configure_tx_tstamp(pf, on);
pf->ptp.tstamp_config.tx_type = on ? HWTSTAMP_TX_ON : HWTSTAMP_TX_OFF;
}
@@ -328,131 +340,6 @@ void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena)
}
/**
- * ice_get_ptp_clock_index - Get the PTP clock index
- * @pf: the PF pointer
- *
- * Determine the clock index of the PTP clock associated with this device. If
- * this is the PF controlling the clock, just use the local access to the
- * clock device pointer.
- *
- * Otherwise, read from the driver shared parameters to determine the clock
- * index value.
- *
- * Returns: the index of the PTP clock associated with this device, or -1 if
- * there is no associated clock.
- */
-int ice_get_ptp_clock_index(struct ice_pf *pf)
-{
- struct device *dev = ice_pf_to_dev(pf);
- enum ice_aqc_driver_params param_idx;
- struct ice_hw *hw = &pf->hw;
- u8 tmr_idx;
- u32 value;
- int err;
-
- /* Use the ptp_clock structure if we're the main PF */
- if (pf->ptp.clock)
- return ptp_clock_index(pf->ptp.clock);
-
- tmr_idx = hw->func_caps.ts_func_info.tmr_index_assoc;
- if (!tmr_idx)
- param_idx = ICE_AQC_DRIVER_PARAM_CLK_IDX_TMR0;
- else
- param_idx = ICE_AQC_DRIVER_PARAM_CLK_IDX_TMR1;
-
- err = ice_aq_get_driver_param(hw, param_idx, &value, NULL);
- if (err) {
- dev_err(dev, "Failed to read PTP clock index parameter, err %d aq_err %s\n",
- err, ice_aq_str(hw->adminq.sq_last_status));
- return -1;
- }
-
- /* The PTP clock index is an integer, and will be between 0 and
- * INT_MAX. The highest bit of the driver shared parameter is used to
- * indicate whether or not the currently stored clock index is valid.
- */
- if (!(value & PTP_SHARED_CLK_IDX_VALID))
- return -1;
-
- return value & ~PTP_SHARED_CLK_IDX_VALID;
-}
-
-/**
- * ice_set_ptp_clock_index - Set the PTP clock index
- * @pf: the PF pointer
- *
- * Set the PTP clock index for this device into the shared driver parameters,
- * so that other PFs associated with this device can read it.
- *
- * If the PF is unable to store the clock index, it will log an error, but
- * will continue operating PTP.
- */
-static void ice_set_ptp_clock_index(struct ice_pf *pf)
-{
- struct device *dev = ice_pf_to_dev(pf);
- enum ice_aqc_driver_params param_idx;
- struct ice_hw *hw = &pf->hw;
- u8 tmr_idx;
- u32 value;
- int err;
-
- if (!pf->ptp.clock)
- return;
-
- tmr_idx = hw->func_caps.ts_func_info.tmr_index_assoc;
- if (!tmr_idx)
- param_idx = ICE_AQC_DRIVER_PARAM_CLK_IDX_TMR0;
- else
- param_idx = ICE_AQC_DRIVER_PARAM_CLK_IDX_TMR1;
-
- value = (u32)ptp_clock_index(pf->ptp.clock);
- if (value > INT_MAX) {
- dev_err(dev, "PTP Clock index is too large to store\n");
- return;
- }
- value |= PTP_SHARED_CLK_IDX_VALID;
-
- err = ice_aq_set_driver_param(hw, param_idx, value, NULL);
- if (err) {
- dev_err(dev, "Failed to set PTP clock index parameter, err %d aq_err %s\n",
- err, ice_aq_str(hw->adminq.sq_last_status));
- }
-}
-
-/**
- * ice_clear_ptp_clock_index - Clear the PTP clock index
- * @pf: the PF pointer
- *
- * Clear the PTP clock index for this device. Must be called when
- * unregistering the PTP clock, in order to ensure other PFs stop reporting
- * a clock object that no longer exists.
- */
-static void ice_clear_ptp_clock_index(struct ice_pf *pf)
-{
- struct device *dev = ice_pf_to_dev(pf);
- enum ice_aqc_driver_params param_idx;
- struct ice_hw *hw = &pf->hw;
- u8 tmr_idx;
- int err;
-
- /* Do not clear the index if we don't own the timer */
- if (!hw->func_caps.ts_func_info.src_tmr_owned)
- return;
-
- tmr_idx = hw->func_caps.ts_func_info.tmr_index_assoc;
- if (!tmr_idx)
- param_idx = ICE_AQC_DRIVER_PARAM_CLK_IDX_TMR0;
- else
- param_idx = ICE_AQC_DRIVER_PARAM_CLK_IDX_TMR1;
-
- err = ice_aq_set_driver_param(hw, param_idx, 0, NULL);
- if (err) {
- dev_dbg(dev, "Failed to clear PTP clock index parameter, err %d aq_err %s\n",
- err, ice_aq_str(hw->adminq.sq_last_status));
- }
-}
-
-/**
* ice_ptp_read_src_clk_reg - Read the source clock register
* @pf: Board private structure
* @sts: Optional parameter for holding a pair of system timestamps from
@@ -674,9 +561,6 @@ static void ice_ptp_process_tx_tstamp(struct ice_ptp_tx *tx)
int err;
u8 idx;
- if (!tx->init)
- return;
-
ptp_port = container_of(tx, struct ice_ptp_port, tx);
pf = ptp_port_to_pf(ptp_port);
hw = &pf->hw;
@@ -775,6 +659,39 @@ skip_ts_read:
}
/**
+ * ice_ptp_tx_tstamp_owner - Process Tx timestamps for all ports on the device
+ * @pf: Board private structure
+ */
+static enum ice_tx_tstamp_work ice_ptp_tx_tstamp_owner(struct ice_pf *pf)
+{
+ struct ice_ptp_port *port;
+ unsigned int i;
+
+ mutex_lock(&pf->ptp.ports_owner.lock);
+ list_for_each_entry(port, &pf->ptp.ports_owner.ports, list_member) {
+ struct ice_ptp_tx *tx = &port->tx;
+
+ if (!tx || !tx->init)
+ continue;
+
+ ice_ptp_process_tx_tstamp(tx);
+ }
+ mutex_unlock(&pf->ptp.ports_owner.lock);
+
+ for (i = 0; i < ICE_MAX_QUAD; i++) {
+ u64 tstamp_ready;
+ int err;
+
+ /* Read the Tx ready status first */
+ err = ice_get_phy_tx_tstamp_ready(&pf->hw, i, &tstamp_ready);
+ if (err || tstamp_ready)
+ return ICE_TX_TSTAMP_WORK_PENDING;
+ }
+
+ return ICE_TX_TSTAMP_WORK_DONE;
+}
+
+/**
* ice_ptp_tx_tstamp - Process Tx timestamps for this function.
* @tx: Tx tracking structure to initialize
*
@@ -1366,6 +1283,7 @@ out_unlock:
void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
{
struct ice_ptp_port *ptp_port;
+ struct ice_hw *hw = &pf->hw;
if (!test_bit(ICE_FLAG_PTP, pf->flags))
return;
@@ -1380,11 +1298,16 @@ void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
/* Update cached link status for this port immediately */
ptp_port->link_up = linkup;
- /* E810 devices do not need to reconfigure the PHY */
- if (ice_is_e810(&pf->hw))
+ switch (hw->phy_model) {
+ case ICE_PHY_E810:
+ /* Do not reconfigure E810 PHY */
return;
-
- ice_ptp_port_phy_restart(ptp_port);
+ case ICE_PHY_E822:
+ ice_ptp_port_phy_restart(ptp_port);
+ return;
+ default:
+ dev_warn(ice_pf_to_dev(pf), "%s: Unknown PHY type\n", __func__);
+ }
}
/**
@@ -1441,6 +1364,24 @@ static void ice_ptp_reset_phy_timestamping(struct ice_pf *pf)
}
/**
+ * ice_ptp_restart_all_phy - Restart all PHYs to recalibrate timestamping
+ * @pf: Board private structure
+ */
+static void ice_ptp_restart_all_phy(struct ice_pf *pf)
+{
+ struct list_head *entry;
+
+ list_for_each(entry, &pf->ptp.ports_owner.ports) {
+ struct ice_ptp_port *port = list_entry(entry,
+ struct ice_ptp_port,
+ list_member);
+
+ if (port->link_up)
+ ice_ptp_port_phy_restart(port);
+ }
+}
+
+/**
* ice_ptp_adjfine - Adjust clock increment rate
* @info: the driver's PTP info structure
* @scaled_ppm: Parts per million with 16-bit fractional field
@@ -1877,9 +1818,9 @@ ice_ptp_settime64(struct ptp_clock_info *info, const struct timespec64 *ts)
/* Reenable periodic outputs */
ice_ptp_enable_all_clkout(pf);
- /* Recalibrate and re-enable timestamp block */
- if (pf->ptp.port.link_up)
- ice_ptp_port_phy_restart(&pf->ptp.port);
+ /* Recalibrate and re-enable timestamp blocks for E822/E823 */
+ if (hw->phy_model == ICE_PHY_E822)
+ ice_ptp_restart_all_phy(pf);
exit:
if (err) {
dev_err(ice_pf_to_dev(pf), "PTP failed to set time %d\n", err);
@@ -1976,21 +1917,32 @@ ice_ptp_get_syncdevicetime(ktime_t *device,
u32 hh_lock, hh_art_ctl;
int i;
- /* Get the HW lock */
- hh_lock = rd32(hw, PFHH_SEM + (PFTSYN_SEM_BYTES * hw->pf_id));
+#define MAX_HH_HW_LOCK_TRIES 5
+#define MAX_HH_CTL_LOCK_TRIES 100
+
+ for (i = 0; i < MAX_HH_HW_LOCK_TRIES; i++) {
+ /* Get the HW lock */
+ hh_lock = rd32(hw, PFHH_SEM + (PFTSYN_SEM_BYTES * hw->pf_id));
+ if (hh_lock & PFHH_SEM_BUSY_M) {
+ usleep_range(10000, 15000);
+ continue;
+ }
+ break;
+ }
if (hh_lock & PFHH_SEM_BUSY_M) {
dev_err(ice_pf_to_dev(pf), "PTP failed to get hh lock\n");
- return -EFAULT;
+ return -EBUSY;
}
+ /* Program cmd to master timer */
+ ice_ptp_src_cmd(hw, ICE_PTP_READ_TIME);
+
/* Start the ART and device clock sync sequence */
hh_art_ctl = rd32(hw, GLHH_ART_CTL);
hh_art_ctl = hh_art_ctl | GLHH_ART_CTL_ACTIVE_M;
wr32(hw, GLHH_ART_CTL, hh_art_ctl);
-#define MAX_HH_LOCK_TRIES 100
-
- for (i = 0; i < MAX_HH_LOCK_TRIES; i++) {
+ for (i = 0; i < MAX_HH_CTL_LOCK_TRIES; i++) {
/* Wait for sync to complete */
hh_art_ctl = rd32(hw, GLHH_ART_CTL);
if (hh_art_ctl & GLHH_ART_CTL_ACTIVE_M) {
@@ -2014,19 +1966,23 @@ ice_ptp_get_syncdevicetime(ktime_t *device,
break;
}
}
+
+ /* Clear the master timer */
+ ice_ptp_src_cmd(hw, ICE_PTP_NOP);
+
/* Release HW lock */
hh_lock = rd32(hw, PFHH_SEM + (PFTSYN_SEM_BYTES * hw->pf_id));
hh_lock = hh_lock & ~PFHH_SEM_BUSY_M;
wr32(hw, PFHH_SEM + (PFTSYN_SEM_BYTES * hw->pf_id), hh_lock);
- if (i == MAX_HH_LOCK_TRIES)
+ if (i == MAX_HH_CTL_LOCK_TRIES)
return -ETIMEDOUT;
return 0;
}
/**
- * ice_ptp_getcrosststamp_e822 - Capture a device cross timestamp
+ * ice_ptp_getcrosststamp_e82x - Capture a device cross timestamp
* @info: the driver's PTP info structure
* @cts: The memory to fill the cross timestamp info
*
@@ -2034,14 +1990,14 @@ ice_ptp_get_syncdevicetime(ktime_t *device,
* clock. Fill the cross timestamp information and report it back to the
* caller.
*
- * This is only valid for E822 devices which have support for generating the
- * cross timestamp via PCIe PTM.
+ * This is only valid for E822 and E823 devices which have support for
+ * generating the cross timestamp via PCIe PTM.
*
* In order to correctly correlate the ART timestamp back to the TSC time, the
* CPU must have X86_FEATURE_TSC_KNOWN_FREQ.
*/
static int
-ice_ptp_getcrosststamp_e822(struct ptp_clock_info *info,
+ice_ptp_getcrosststamp_e82x(struct ptp_clock_info *info,
struct system_device_crosststamp *cts)
{
struct ice_pf *pf = ptp_info_to_pf(info);
@@ -2246,18 +2202,20 @@ ice_ptp_setup_sma_pins_e810t(struct ice_pf *pf, struct ptp_clock_info *info)
static void
ice_ptp_setup_pins_e810(struct ice_pf *pf, struct ptp_clock_info *info)
{
- info->n_per_out = N_PER_OUT_E810;
-
- if (ice_is_feature_supported(pf, ICE_F_PTP_EXTTS))
- info->n_ext_ts = N_EXT_TS_E810;
-
if (ice_is_feature_supported(pf, ICE_F_SMA_CTRL)) {
info->n_ext_ts = N_EXT_TS_E810;
+ info->n_per_out = N_PER_OUT_E810T;
info->n_pins = NUM_PTP_PINS_E810T;
info->verify = ice_verify_pin_e810t;
/* Complete setup of the SMA pins */
ice_ptp_setup_sma_pins_e810t(pf, info);
+ } else if (ice_is_e810t(&pf->hw)) {
+ info->n_ext_ts = N_EXT_TS_NO_SMA_E810T;
+ info->n_per_out = N_PER_OUT_NO_SMA_E810T;
+ } else {
+ info->n_per_out = N_PER_OUT_E810;
+ info->n_ext_ts = N_EXT_TS_E810;
}
}
@@ -2275,22 +2233,22 @@ ice_ptp_setup_pins_e823(struct ice_pf *pf, struct ptp_clock_info *info)
}
/**
- * ice_ptp_set_funcs_e822 - Set specialized functions for E822 support
+ * ice_ptp_set_funcs_e82x - Set specialized functions for E82x support
* @pf: Board private structure
* @info: PTP info to fill
*
- * Assign functions to the PTP capabiltiies structure for E822 devices.
+ * Assign functions to the PTP capabilities structure for E82x devices.
* Functions which operate across all device families should be set directly
- * in ice_ptp_set_caps. Only add functions here which are distinct for E822
+ * in ice_ptp_set_caps. Only add functions here which are distinct for E82x
* devices.
*/
static void
-ice_ptp_set_funcs_e822(struct ice_pf *pf, struct ptp_clock_info *info)
+ice_ptp_set_funcs_e82x(struct ice_pf *pf, struct ptp_clock_info *info)
{
#ifdef CONFIG_ICE_HWTS
if (boot_cpu_has(X86_FEATURE_ART) &&
boot_cpu_has(X86_FEATURE_TSC_KNOWN_FREQ))
- info->getcrosststamp = ice_ptp_getcrosststamp_e822;
+ info->getcrosststamp = ice_ptp_getcrosststamp_e82x;
#endif /* CONFIG_ICE_HWTS */
}
@@ -2324,6 +2282,8 @@ ice_ptp_set_funcs_e810(struct ice_pf *pf, struct ptp_clock_info *info)
static void
ice_ptp_set_funcs_e823(struct ice_pf *pf, struct ptp_clock_info *info)
{
+ ice_ptp_set_funcs_e82x(pf, info);
+
info->enable = ice_ptp_gpio_enable_e823;
ice_ptp_setup_pins_e823(pf, info);
}
@@ -2351,7 +2311,7 @@ static void ice_ptp_set_caps(struct ice_pf *pf)
else if (ice_is_e823(&pf->hw))
ice_ptp_set_funcs_e823(pf, info);
else
- ice_ptp_set_funcs_e822(pf, info);
+ ice_ptp_set_funcs_e82x(pf, info);
}
/**
@@ -2366,7 +2326,6 @@ static void ice_ptp_set_caps(struct ice_pf *pf)
static long ice_ptp_create_clock(struct ice_pf *pf)
{
struct ptp_clock_info *info;
- struct ptp_clock *clock;
struct device *dev;
/* No need to create a clock device if we already have one */
@@ -2379,11 +2338,11 @@ static long ice_ptp_create_clock(struct ice_pf *pf)
dev = ice_pf_to_dev(pf);
/* Attempt to register the clock before enabling the hardware. */
- clock = ptp_clock_register(info, dev);
- if (IS_ERR(clock))
- return PTR_ERR(clock);
-
- pf->ptp.clock = clock;
+ pf->ptp.clock = ptp_clock_register(info, dev);
+ if (IS_ERR(pf->ptp.clock)) {
+ dev_err(ice_pf_to_dev(pf), "Failed to register PTP clock device");
+ return PTR_ERR(pf->ptp.clock);
+ }
return 0;
}
@@ -2440,7 +2399,21 @@ s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb)
*/
enum ice_tx_tstamp_work ice_ptp_process_ts(struct ice_pf *pf)
{
- return ice_ptp_tx_tstamp(&pf->ptp.port.tx);
+ switch (pf->ptp.tx_interrupt_mode) {
+ case ICE_PTP_TX_INTERRUPT_NONE:
+ /* This device has the clock owner handle timestamps for it */
+ return ICE_TX_TSTAMP_WORK_DONE;
+ case ICE_PTP_TX_INTERRUPT_SELF:
+ /* This device handles its own timestamps */
+ return ice_ptp_tx_tstamp(&pf->ptp.port.tx);
+ case ICE_PTP_TX_INTERRUPT_ALL:
+ /* This device handles timestamps for all ports */
+ return ice_ptp_tx_tstamp_owner(pf);
+ default:
+ WARN_ONCE(1, "Unexpected Tx timestamp interrupt mode %u\n",
+ pf->ptp.tx_interrupt_mode);
+ return ICE_TX_TSTAMP_WORK_DONE;
+ }
}
static void ice_ptp_periodic_work(struct kthread_work *work)
@@ -2474,7 +2447,7 @@ void ice_ptp_reset(struct ice_pf *pf)
if (test_bit(ICE_PFR_REQ, pf->state))
goto pfr;
- if (!hw->func_caps.ts_func_info.src_tmr_owned)
+ if (!ice_pf_src_tmr_owned(pf))
goto reset_ts;
err = ice_ptp_init_phc(hw);
@@ -2550,6 +2523,209 @@ err:
}
/**
+ * ice_ptp_aux_dev_to_aux_pf - Get auxiliary PF handle for the auxiliary device
+ * @aux_dev: auxiliary device to get the auxiliary PF for
+ */
+static struct ice_pf *
+ice_ptp_aux_dev_to_aux_pf(struct auxiliary_device *aux_dev)
+{
+ struct ice_ptp_port *aux_port;
+ struct ice_ptp *aux_ptp;
+
+ aux_port = container_of(aux_dev, struct ice_ptp_port, aux_dev);
+ aux_ptp = container_of(aux_port, struct ice_ptp, port);
+
+ return container_of(aux_ptp, struct ice_pf, ptp);
+}
+
+/**
+ * ice_ptp_aux_dev_to_owner_pf - Get PF handle for the auxiliary device
+ * @aux_dev: auxiliary device to get the PF for
+ */
+static struct ice_pf *
+ice_ptp_aux_dev_to_owner_pf(struct auxiliary_device *aux_dev)
+{
+ struct ice_ptp_port_owner *ports_owner;
+ struct auxiliary_driver *aux_drv;
+ struct ice_ptp *owner_ptp;
+
+ if (!aux_dev->dev.driver)
+ return NULL;
+
+ aux_drv = to_auxiliary_drv(aux_dev->dev.driver);
+ ports_owner = container_of(aux_drv, struct ice_ptp_port_owner,
+ aux_driver);
+ owner_ptp = container_of(ports_owner, struct ice_ptp, ports_owner);
+ return container_of(owner_ptp, struct ice_pf, ptp);
+}
+
+/**
+ * ice_ptp_auxbus_probe - Probe auxiliary devices
+ * @aux_dev: PF's auxiliary device
+ * @id: Auxiliary device ID
+ */
+static int ice_ptp_auxbus_probe(struct auxiliary_device *aux_dev,
+ const struct auxiliary_device_id *id)
+{
+ struct ice_pf *owner_pf = ice_ptp_aux_dev_to_owner_pf(aux_dev);
+ struct ice_pf *aux_pf = ice_ptp_aux_dev_to_aux_pf(aux_dev);
+
+ if (WARN_ON(!owner_pf))
+ return -ENODEV;
+
+ INIT_LIST_HEAD(&aux_pf->ptp.port.list_member);
+ mutex_lock(&owner_pf->ptp.ports_owner.lock);
+ list_add(&aux_pf->ptp.port.list_member,
+ &owner_pf->ptp.ports_owner.ports);
+ mutex_unlock(&owner_pf->ptp.ports_owner.lock);
+
+ return 0;
+}
+
+/**
+ * ice_ptp_auxbus_remove - Remove auxiliary devices from the bus
+ * @aux_dev: PF's auxiliary device
+ */
+static void ice_ptp_auxbus_remove(struct auxiliary_device *aux_dev)
+{
+ struct ice_pf *owner_pf = ice_ptp_aux_dev_to_owner_pf(aux_dev);
+ struct ice_pf *aux_pf = ice_ptp_aux_dev_to_aux_pf(aux_dev);
+
+ mutex_lock(&owner_pf->ptp.ports_owner.lock);
+ list_del(&aux_pf->ptp.port.list_member);
+ mutex_unlock(&owner_pf->ptp.ports_owner.lock);
+}
+
+/**
+ * ice_ptp_auxbus_shutdown
+ * @aux_dev: PF's auxiliary device
+ */
+static void ice_ptp_auxbus_shutdown(struct auxiliary_device *aux_dev)
+{
+ /* Doing nothing here, but handle to auxbus driver must be satisfied */
+}
+
+/**
+ * ice_ptp_auxbus_suspend
+ * @aux_dev: PF's auxiliary device
+ * @state: power management state indicator
+ */
+static int
+ice_ptp_auxbus_suspend(struct auxiliary_device *aux_dev, pm_message_t state)
+{
+ /* Doing nothing here, but handle to auxbus driver must be satisfied */
+ return 0;
+}
+
+/**
+ * ice_ptp_auxbus_resume
+ * @aux_dev: PF's auxiliary device
+ */
+static int ice_ptp_auxbus_resume(struct auxiliary_device *aux_dev)
+{
+ /* Doing nothing here, but handle to auxbus driver must be satisfied */
+ return 0;
+}
+
+/**
+ * ice_ptp_auxbus_create_id_table - Create auxiliary device ID table
+ * @pf: Board private structure
+ * @name: auxiliary bus driver name
+ */
+static struct auxiliary_device_id *
+ice_ptp_auxbus_create_id_table(struct ice_pf *pf, const char *name)
+{
+ struct auxiliary_device_id *ids;
+
+ /* Second id left empty to terminate the array */
+ ids = devm_kcalloc(ice_pf_to_dev(pf), 2,
+ sizeof(struct auxiliary_device_id), GFP_KERNEL);
+ if (!ids)
+ return NULL;
+
+ snprintf(ids[0].name, sizeof(ids[0].name), "ice.%s", name);
+
+ return ids;
+}
+
+/**
+ * ice_ptp_register_auxbus_driver - Register PTP auxiliary bus driver
+ * @pf: Board private structure
+ */
+static int ice_ptp_register_auxbus_driver(struct ice_pf *pf)
+{
+ struct auxiliary_driver *aux_driver;
+ struct ice_ptp *ptp;
+ struct device *dev;
+ char *name;
+ int err;
+
+ ptp = &pf->ptp;
+ dev = ice_pf_to_dev(pf);
+ aux_driver = &ptp->ports_owner.aux_driver;
+ INIT_LIST_HEAD(&ptp->ports_owner.ports);
+ mutex_init(&ptp->ports_owner.lock);
+ name = devm_kasprintf(dev, GFP_KERNEL, "ptp_aux_dev_%u_%u_clk%u",
+ pf->pdev->bus->number, PCI_SLOT(pf->pdev->devfn),
+ ice_get_ptp_src_clock_index(&pf->hw));
+
+ aux_driver->name = name;
+ aux_driver->shutdown = ice_ptp_auxbus_shutdown;
+ aux_driver->suspend = ice_ptp_auxbus_suspend;
+ aux_driver->remove = ice_ptp_auxbus_remove;
+ aux_driver->resume = ice_ptp_auxbus_resume;
+ aux_driver->probe = ice_ptp_auxbus_probe;
+ aux_driver->id_table = ice_ptp_auxbus_create_id_table(pf, name);
+ if (!aux_driver->id_table)
+ return -ENOMEM;
+
+ err = auxiliary_driver_register(aux_driver);
+ if (err) {
+ devm_kfree(dev, aux_driver->id_table);
+ dev_err(dev, "Failed registering aux_driver, name <%s>\n",
+ name);
+ }
+
+ return err;
+}
+
+/**
+ * ice_ptp_unregister_auxbus_driver - Unregister PTP auxiliary bus driver
+ * @pf: Board private structure
+ */
+static void ice_ptp_unregister_auxbus_driver(struct ice_pf *pf)
+{
+ struct auxiliary_driver *aux_driver = &pf->ptp.ports_owner.aux_driver;
+
+ auxiliary_driver_unregister(aux_driver);
+ devm_kfree(ice_pf_to_dev(pf), aux_driver->id_table);
+
+ mutex_destroy(&pf->ptp.ports_owner.lock);
+}
+
+/**
+ * ice_ptp_clock_index - Get the PTP clock index for this device
+ * @pf: Board private structure
+ *
+ * Returns: the PTP clock index associated with this PF, or -1 if no PTP clock
+ * is associated.
+ */
+int ice_ptp_clock_index(struct ice_pf *pf)
+{
+ struct auxiliary_device *aux_dev;
+ struct ice_pf *owner_pf;
+ struct ptp_clock *clock;
+
+ aux_dev = &pf->ptp.port.aux_dev;
+ owner_pf = ice_ptp_aux_dev_to_owner_pf(aux_dev);
+ if (!owner_pf)
+ return -1;
+ clock = owner_pf->ptp.clock;
+
+ return clock ? ptp_clock_index(clock) : -1;
+}
+
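With the driver-shared-parameter clock index gone, ownership is expressed through the auxiliary bus: the PF that owns the source timer registers an auxiliary driver, every PF adds its port as an auxiliary device named from the PCI bus/slot and clock index, and the resulting match lets ice_ptp_auxbus_probe() link each port into the owner's ports_owner.ports list. ice_ptp_clock_index() then walks from the port's aux_dev to the owner and reads its registered clock. The accessors above imply roughly this owner-side structure (the real definition lives in ice_ptp.h, not in this hunk):

	struct ice_ptp_port_owner {
		struct auxiliary_driver aux_driver;	/* registered by the clock owner */
		struct list_head ports;			/* ice_ptp_port list_member entries */
		struct mutex lock;			/* protects the ports list */
	};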
+/**
* ice_ptp_prepare_for_reset - Prepare PTP for reset
* @pf: Board private structure
*/
@@ -2627,7 +2803,15 @@ static int ice_ptp_init_owner(struct ice_pf *pf)
/* Release the global hardware lock */
ice_ptp_unlock(hw);
- if (!ice_is_e810(hw)) {
+ if (pf->ptp.tx_interrupt_mode == ICE_PTP_TX_INTERRUPT_ALL) {
+ /* The clock owner for this device type handles the timestamp
+ * interrupt for all ports.
+ */
+ ice_ptp_configure_tx_tstamp(pf, true);
+
+ /* React to interrupts from all quads for E82x */
+ wr32(hw, PFINT_TSYN_MSK + (0x4 * hw->pf_id), (u32)0x1f);
+
/* Enable quad interrupts */
err = ice_ptp_tx_ena_intr(pf, true, itr);
if (err)
@@ -2639,11 +2823,15 @@ static int ice_ptp_init_owner(struct ice_pf *pf)
if (err)
goto err_clk;
- /* Store the PTP clock index for other PFs */
- ice_set_ptp_clock_index(pf);
+ err = ice_ptp_register_auxbus_driver(pf);
+ if (err) {
+ dev_err(ice_pf_to_dev(pf), "Failed to register PTP auxbus driver");
+ goto err_aux;
+ }
return 0;
-
+err_aux:
+ ptp_clock_unregister(pf->ptp.clock);
err_clk:
pf->ptp.clock = NULL;
err_exit:
@@ -2685,14 +2873,124 @@ static int ice_ptp_init_work(struct ice_pf *pf, struct ice_ptp *ptp)
*/
static int ice_ptp_init_port(struct ice_pf *pf, struct ice_ptp_port *ptp_port)
{
+ struct ice_hw *hw = &pf->hw;
+
mutex_init(&ptp_port->ps_lock);
- if (ice_is_e810(&pf->hw))
+ switch (hw->phy_model) {
+ case ICE_PHY_E810:
return ice_ptp_init_tx_e810(pf, &ptp_port->tx);
+ case ICE_PHY_E822:
+ /* Non-owner PFs don't react to any interrupts on E82x,
+ * neither on their own quad nor on others
+ */
+ if (!ice_ptp_pf_handles_tx_interrupt(pf)) {
+ ice_ptp_configure_tx_tstamp(pf, false);
+ wr32(hw, PFINT_TSYN_MSK + (0x4 * hw->pf_id), (u32)0x0);
+ }
+ kthread_init_delayed_work(&ptp_port->ov_work,
+ ice_ptp_wait_for_offsets);
+
+ return ice_ptp_init_tx_e822(pf, &ptp_port->tx,
+ ptp_port->port_num);
+ default:
+ return -ENODEV;
+ }
+}
+
+/**
+ * ice_ptp_release_auxbus_device - Release callback for the PTP auxbus device
+ * @dev: device that utilizes the auxbus
+ */
+static void ice_ptp_release_auxbus_device(struct device *dev)
+{
+ /* Doing nothing here, but a release handler for the auxbus device must be provided */
+}
+
+/**
+ * ice_ptp_create_auxbus_device - Create PTP auxiliary bus device
+ * @pf: Board private structure
+ */
+static int ice_ptp_create_auxbus_device(struct ice_pf *pf)
+{
+ struct auxiliary_device *aux_dev;
+ struct ice_ptp *ptp;
+ struct device *dev;
+ char *name;
+ int err;
+ u32 id;
- kthread_init_delayed_work(&ptp_port->ov_work,
- ice_ptp_wait_for_offsets);
- return ice_ptp_init_tx_e822(pf, &ptp_port->tx, ptp_port->port_num);
+ ptp = &pf->ptp;
+ id = ptp->port.port_num;
+ dev = ice_pf_to_dev(pf);
+
+ aux_dev = &ptp->port.aux_dev;
+
+ name = devm_kasprintf(dev, GFP_KERNEL, "ptp_aux_dev_%u_%u_clk%u",
+ pf->pdev->bus->number, PCI_SLOT(pf->pdev->devfn),
+ ice_get_ptp_src_clock_index(&pf->hw));
+
+ aux_dev->name = name;
+ aux_dev->id = id;
+ aux_dev->dev.release = ice_ptp_release_auxbus_device;
+ aux_dev->dev.parent = dev;
+
+ err = auxiliary_device_init(aux_dev);
+ if (err)
+ goto aux_err;
+
+ err = auxiliary_device_add(aux_dev);
+ if (err) {
+ auxiliary_device_uninit(aux_dev);
+ goto aux_err;
+ }
+
+ return 0;
+aux_err:
+ dev_err(dev, "Failed to create PTP auxiliary bus device <%s>\n", name);
+ devm_kfree(dev, name);
+ return err;
+}
+
+/**
+ * ice_ptp_remove_auxbus_device - Remove PTP auxiliary bus device
+ * @pf: Board private structure
+ */
+static void ice_ptp_remove_auxbus_device(struct ice_pf *pf)
+{
+ struct auxiliary_device *aux_dev = &pf->ptp.port.aux_dev;
+
+ auxiliary_device_delete(aux_dev);
+ auxiliary_device_uninit(aux_dev);
+
+ memset(aux_dev, 0, sizeof(*aux_dev));
+}
+
+/**
+ * ice_ptp_init_tx_interrupt_mode - Initialize device Tx interrupt mode
+ * @pf: Board private structure
+ *
+ * Initialize the Tx timestamp interrupt mode for this device. For most device
+ * types, each PF processes the interrupt and manages its own timestamps. For
+ * E822-based devices, only the clock owner processes the timestamps. Other
+ * PFs disable the interrupt and do not process their own timestamps.
+ */
+static void ice_ptp_init_tx_interrupt_mode(struct ice_pf *pf)
+{
+ switch (pf->hw.phy_model) {
+ case ICE_PHY_E822:
+ /* E822 based PHY has the clock owner process the interrupt
+ * for all ports.
+ */
+ if (ice_pf_src_tmr_owned(pf))
+ pf->ptp.tx_interrupt_mode = ICE_PTP_TX_INTERRUPT_ALL;
+ else
+ pf->ptp.tx_interrupt_mode = ICE_PTP_TX_INTERRUPT_NONE;
+ break;
+ default:
+ /* other PHY types handle their own Tx interrupt */
+ pf->ptp.tx_interrupt_mode = ICE_PTP_TX_INTERRUPT_SELF;
+ }
}
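
The mode selection above boils down to: the E822 clock owner handles everyone's Tx timestamp interrupts, other E822 PFs handle none, and every other PHY handles its own. A small standalone sketch of that decision table; the enum names mirror the ones added to ice_ptp.h and the boolean is a stand-in for ice_pf_src_tmr_owned():

    #include <stdbool.h>
    #include <stdio.h>

    enum phy_model { PHY_E810, PHY_E822 };
    enum tx_irq_mode { TX_IRQ_NONE, TX_IRQ_SELF, TX_IRQ_ALL };

    /* Stand-in for the owner check; in the driver this comes from the
     * timestamping function capabilities.
     */
    static enum tx_irq_mode pick_tx_irq_mode(enum phy_model model,
                                             bool src_tmr_owned)
    {
        if (model == PHY_E822)
            return src_tmr_owned ? TX_IRQ_ALL : TX_IRQ_NONE;
        return TX_IRQ_SELF; /* other PHYs handle their own interrupt */
    }

    int main(void)
    {
        printf("E822 owner:     %d\n", pick_tx_irq_mode(PHY_E822, true));
        printf("E822 non-owner: %d\n", pick_tx_irq_mode(PHY_E822, false));
        printf("E810:           %d\n", pick_tx_irq_mode(PHY_E810, false));
        return 0;
    }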
/**
@@ -2713,10 +3011,14 @@ void ice_ptp_init(struct ice_pf *pf)
struct ice_hw *hw = &pf->hw;
int err;
+ ice_ptp_init_phy_model(hw);
+
+ ice_ptp_init_tx_interrupt_mode(pf);
+
/* If this function owns the clock hardware, it must allocate and
* configure the PTP clock device to represent it.
*/
- if (hw->func_caps.ts_func_info.src_tmr_owned) {
+ if (ice_pf_src_tmr_owned(pf)) {
err = ice_ptp_init_owner(pf);
if (err)
goto err;
@@ -2735,6 +3037,10 @@ void ice_ptp_init(struct ice_pf *pf)
if (err)
goto err;
+ err = ice_ptp_create_auxbus_device(pf);
+ if (err)
+ goto err;
+
dev_info(ice_pf_to_dev(pf), "PTP init successful\n");
return;
@@ -2763,6 +3069,8 @@ void ice_ptp_release(struct ice_pf *pf)
/* Disable timestamping for both Tx and Rx */
ice_ptp_cfg_timestamp(pf, false);
+ ice_ptp_remove_auxbus_device(pf);
+
ice_ptp_release_tx_tracker(pf, &pf->ptp.port.tx);
clear_bit(ICE_FLAG_PTP, pf->flags);
@@ -2782,9 +3090,10 @@ void ice_ptp_release(struct ice_pf *pf)
/* Disable periodic outputs */
ice_ptp_disable_all_clkout(pf);
- ice_clear_ptp_clock_index(pf);
ptp_clock_unregister(pf->ptp.clock);
pf->ptp.clock = NULL;
+ ice_ptp_unregister_auxbus_driver(pf);
+
dev_info(ice_pf_to_dev(pf), "Removed PTP clock\n");
}
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.h b/drivers/net/ethernet/intel/ice/ice_ptp.h
index 995a57019ba7..8f6f94392756 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp.h
+++ b/drivers/net/ethernet/intel/ice/ice_ptp.h
@@ -157,7 +157,9 @@ struct ice_ptp_tx {
* ready for PTP functionality. It is used to track the port initialization
* and determine when the port's PHY offset is valid.
*
+ * @list_member: list member structure of auxiliary device
* @tx: Tx timestamp tracking for this port
+ * @aux_dev: auxiliary device associated with this port
* @ov_work: delayed work task for tracking when PHY offset is valid
* @ps_lock: mutex used to protect the overall PTP PHY start procedure
* @link_up: indicates whether the link is up
@@ -165,7 +167,9 @@ struct ice_ptp_tx {
* @port_num: the port number this structure represents
*/
struct ice_ptp_port {
+ struct list_head list_member;
struct ice_ptp_tx tx;
+ struct auxiliary_device aux_dev;
struct kthread_delayed_work ov_work;
struct mutex ps_lock; /* protects overall PTP PHY start procedure */
bool link_up;
@@ -173,11 +177,35 @@ struct ice_ptp_port {
u8 port_num;
};
+enum ice_ptp_tx_interrupt {
+ ICE_PTP_TX_INTERRUPT_NONE = 0,
+ ICE_PTP_TX_INTERRUPT_SELF,
+ ICE_PTP_TX_INTERRUPT_ALL,
+};
+
+/**
+ * struct ice_ptp_port_owner - data used to handle the PTP clock owner info
+ *
+ * This structure contains data necessary for the PTP clock owner to correctly
+ * handle the timestamping feature for all attached ports.
+ *
+ * @aux_driver: the structure carrying the auxiliary driver information
+ * @ports: list of ports handled by this port owner
+ * @lock: protect access to ports list
+ */
+struct ice_ptp_port_owner {
+ struct auxiliary_driver aux_driver;
+ struct list_head ports;
+ struct mutex lock;
+};
+
#define GLTSYN_TGT_H_IDX_MAX 4
/**
* struct ice_ptp - data used for integrating with CONFIG_PTP_1588_CLOCK
+ * @tx_interrupt_mode: the TX interrupt mode for the PTP clock
* @port: data for the PHY port initialization procedure
+ * @ports_owner: data for the auxiliary driver owner
* @work: delayed work function for periodic tasks
* @cached_phc_time: a cached copy of the PHC time for timestamp extension
* @cached_phc_jiffies: jiffies when cached_phc_time was last updated
@@ -197,7 +225,9 @@ struct ice_ptp_port {
* @late_cached_phc_updates: number of times cached PHC update is late
*/
struct ice_ptp {
+ enum ice_ptp_tx_interrupt tx_interrupt_mode;
struct ice_ptp_port port;
+ struct ice_ptp_port_owner ports_owner;
struct kthread_delayed_work work;
u64 cached_phc_time;
unsigned long cached_phc_jiffies;
@@ -258,11 +288,11 @@ struct ice_ptp {
#define ETH_GLTSYN_ENA(_i) (0x03000348 + ((_i) * 4))
#if IS_ENABLED(CONFIG_PTP_1588_CLOCK)
+int ice_ptp_clock_index(struct ice_pf *pf);
struct ice_pf;
int ice_ptp_set_ts_config(struct ice_pf *pf, struct ifreq *ifr);
int ice_ptp_get_ts_config(struct ice_pf *pf, struct ifreq *ifr);
void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena);
-int ice_get_ptp_clock_index(struct ice_pf *pf);
void ice_ptp_extts_event(struct ice_pf *pf);
s8 ice_ptp_request_ts(struct ice_ptp_tx *tx, struct sk_buff *skb);
@@ -288,10 +318,6 @@ static inline int ice_ptp_get_ts_config(struct ice_pf *pf, struct ifreq *ifr)
}
static inline void ice_ptp_cfg_timestamp(struct ice_pf *pf, bool ena) { }
-static inline int ice_get_ptp_clock_index(struct ice_pf *pf)
-{
- return -1;
-}
static inline void ice_ptp_extts_event(struct ice_pf *pf) { }
static inline s8
@@ -314,5 +340,10 @@ static inline void ice_ptp_release(struct ice_pf *pf) { }
static inline void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
{
}
+
+static inline int ice_ptp_clock_index(struct ice_pf *pf)
+{
+ return -1;
+}
#endif /* IS_ENABLED(CONFIG_PTP_1588_CLOCK) */
#endif /* _ICE_PTP_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
index f818dd215c05..6d573908de7a 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
@@ -7,6 +7,132 @@
#include "ice_ptp_consts.h"
#include "ice_cgu_regs.h"
+static struct dpll_pin_frequency ice_cgu_pin_freq_common[] = {
+ DPLL_PIN_FREQUENCY_1PPS,
+ DPLL_PIN_FREQUENCY_10MHZ,
+};
+
+static struct dpll_pin_frequency ice_cgu_pin_freq_1_hz[] = {
+ DPLL_PIN_FREQUENCY_1PPS,
+};
+
+static struct dpll_pin_frequency ice_cgu_pin_freq_10_mhz[] = {
+ DPLL_PIN_FREQUENCY_10MHZ,
+};
+
+static const struct ice_cgu_pin_desc ice_e810t_sfp_cgu_inputs[] = {
+ { "CVL-SDP22", ZL_REF0P, DPLL_PIN_TYPE_INT_OSCILLATOR,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "CVL-SDP20", ZL_REF0N, DPLL_PIN_TYPE_INT_OSCILLATOR,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "C827_0-RCLKA", ZL_REF1P, DPLL_PIN_TYPE_MUX, 0, },
+ { "C827_0-RCLKB", ZL_REF1N, DPLL_PIN_TYPE_MUX, 0, },
+ { "SMA1", ZL_REF3P, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "SMA2/U.FL2", ZL_REF3N, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "GNSS-1PPS", ZL_REF4P, DPLL_PIN_TYPE_GNSS,
+ ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz },
+ { "OCXO", ZL_REF4N, DPLL_PIN_TYPE_INT_OSCILLATOR, 0, },
+};
+
+static const struct ice_cgu_pin_desc ice_e810t_qsfp_cgu_inputs[] = {
+ { "CVL-SDP22", ZL_REF0P, DPLL_PIN_TYPE_INT_OSCILLATOR,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "CVL-SDP20", ZL_REF0N, DPLL_PIN_TYPE_INT_OSCILLATOR,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "C827_0-RCLKA", ZL_REF1P, DPLL_PIN_TYPE_MUX, },
+ { "C827_0-RCLKB", ZL_REF1N, DPLL_PIN_TYPE_MUX, },
+ { "C827_1-RCLKA", ZL_REF2P, DPLL_PIN_TYPE_MUX, },
+ { "C827_1-RCLKB", ZL_REF2N, DPLL_PIN_TYPE_MUX, },
+ { "SMA1", ZL_REF3P, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "SMA2/U.FL2", ZL_REF3N, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "GNSS-1PPS", ZL_REF4P, DPLL_PIN_TYPE_GNSS,
+ ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz },
+ { "OCXO", ZL_REF4N, DPLL_PIN_TYPE_INT_OSCILLATOR, },
+};
+
+static const struct ice_cgu_pin_desc ice_e810t_sfp_cgu_outputs[] = {
+ { "REF-SMA1", ZL_OUT0, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "REF-SMA2/U.FL2", ZL_OUT1, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "PHY-CLK", ZL_OUT2, DPLL_PIN_TYPE_SYNCE_ETH_PORT, },
+ { "MAC-CLK", ZL_OUT3, DPLL_PIN_TYPE_SYNCE_ETH_PORT, },
+ { "CVL-SDP21", ZL_OUT4, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz },
+ { "CVL-SDP23", ZL_OUT5, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz },
+};
+
+static const struct ice_cgu_pin_desc ice_e810t_qsfp_cgu_outputs[] = {
+ { "REF-SMA1", ZL_OUT0, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "REF-SMA2/U.FL2", ZL_OUT1, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "PHY-CLK", ZL_OUT2, DPLL_PIN_TYPE_SYNCE_ETH_PORT, 0 },
+ { "PHY2-CLK", ZL_OUT3, DPLL_PIN_TYPE_SYNCE_ETH_PORT, 0 },
+ { "MAC-CLK", ZL_OUT4, DPLL_PIN_TYPE_SYNCE_ETH_PORT, 0 },
+ { "CVL-SDP21", ZL_OUT5, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz },
+ { "CVL-SDP23", ZL_OUT6, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz },
+};
+
+static const struct ice_cgu_pin_desc ice_e823_si_cgu_inputs[] = {
+ { "NONE", SI_REF0P, 0, 0 },
+ { "NONE", SI_REF0N, 0, 0 },
+ { "SYNCE0_DP", SI_REF1P, DPLL_PIN_TYPE_MUX, 0 },
+ { "SYNCE0_DN", SI_REF1N, DPLL_PIN_TYPE_MUX, 0 },
+ { "EXT_CLK_SYNC", SI_REF2P, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "NONE", SI_REF2N, 0, 0 },
+ { "EXT_PPS_OUT", SI_REF3, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "INT_PPS_OUT", SI_REF4, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+};
+
+static const struct ice_cgu_pin_desc ice_e823_si_cgu_outputs[] = {
+ { "1588-TIME_SYNC", SI_OUT0, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "PHY-CLK", SI_OUT1, DPLL_PIN_TYPE_SYNCE_ETH_PORT, 0 },
+ { "10MHZ-SMA2", SI_OUT2, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_10_mhz), ice_cgu_pin_freq_10_mhz },
+ { "PPS-SMA1", SI_OUT3, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+};
+
+static const struct ice_cgu_pin_desc ice_e823_zl_cgu_inputs[] = {
+ { "NONE", ZL_REF0P, 0, 0 },
+ { "INT_PPS_OUT", ZL_REF0N, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz },
+ { "SYNCE0_DP", ZL_REF1P, DPLL_PIN_TYPE_MUX, 0 },
+ { "SYNCE0_DN", ZL_REF1N, DPLL_PIN_TYPE_MUX, 0 },
+ { "NONE", ZL_REF2P, 0, 0 },
+ { "NONE", ZL_REF2N, 0, 0 },
+ { "EXT_CLK_SYNC", ZL_REF3P, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "NONE", ZL_REF3N, 0, 0 },
+ { "EXT_PPS_OUT", ZL_REF4P, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz },
+ { "OCXO", ZL_REF4N, DPLL_PIN_TYPE_INT_OSCILLATOR, 0 },
+};
+
+static const struct ice_cgu_pin_desc ice_e823_zl_cgu_outputs[] = {
+ { "PPS-SMA1", ZL_OUT0, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_1_hz), ice_cgu_pin_freq_1_hz },
+ { "10MHZ-SMA2", ZL_OUT1, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_10_mhz), ice_cgu_pin_freq_10_mhz },
+ { "PHY-CLK", ZL_OUT2, DPLL_PIN_TYPE_SYNCE_ETH_PORT, 0 },
+ { "1588-TIME_REF", ZL_OUT3, DPLL_PIN_TYPE_SYNCE_ETH_PORT, 0 },
+ { "CPK-TIME_SYNC", ZL_OUT4, DPLL_PIN_TYPE_EXT,
+ ARRAY_SIZE(ice_cgu_pin_freq_common), ice_cgu_pin_freq_common },
+ { "NONE", ZL_OUT5, 0, 0 },
+};
+
/* Low level functions for interacting with and managing the device clock used
* for the Precision Time Protocol.
*
@@ -107,7 +233,7 @@ static u64 ice_ptp_read_src_incval(struct ice_hw *hw)
*
* Prepare the source timer for an upcoming timer sync command.
*/
-static void ice_ptp_src_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
+void ice_ptp_src_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
{
u32 cmd_val;
u8 tmr_idx;
@@ -116,19 +242,19 @@ static void ice_ptp_src_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
cmd_val = tmr_idx << SEL_CPK_SRC;
switch (cmd) {
- case INIT_TIME:
+ case ICE_PTP_INIT_TIME:
cmd_val |= GLTSYN_CMD_INIT_TIME;
break;
- case INIT_INCVAL:
+ case ICE_PTP_INIT_INCVAL:
cmd_val |= GLTSYN_CMD_INIT_INCVAL;
break;
- case ADJ_TIME:
+ case ICE_PTP_ADJ_TIME:
cmd_val |= GLTSYN_CMD_ADJ_TIME;
break;
- case ADJ_TIME_AT_TIME:
+ case ICE_PTP_ADJ_TIME_AT_TIME:
cmd_val |= GLTSYN_CMD_ADJ_INIT_TIME;
break;
- case READ_TIME:
+ case ICE_PTP_READ_TIME:
cmd_val |= GLTSYN_CMD_READ_TIME;
break;
case ICE_PTP_NOP:
@@ -168,9 +294,9 @@ ice_fill_phy_msg_e822(struct ice_sbq_msg_input *msg, u8 port, u16 offset)
{
int phy_port, phy, quadtype;
- phy_port = port % ICE_PORTS_PER_PHY;
- phy = port / ICE_PORTS_PER_PHY;
- quadtype = (port / ICE_PORTS_PER_QUAD) % ICE_NUM_QUAD_TYPE;
+ phy_port = port % ICE_PORTS_PER_PHY_E822;
+ phy = port / ICE_PORTS_PER_PHY_E822;
+ quadtype = (port / ICE_PORTS_PER_QUAD) % ICE_QUADS_PER_PHY_E822;
if (quadtype == 0) {
msg->msg_addr_low = P_Q0_L(P_0_BASE + offset, phy_port);
@@ -495,20 +621,25 @@ ice_write_64b_phy_reg_e822(struct ice_hw *hw, u8 port, u16 low_addr, u64 val)
* Fill a message buffer for accessing a register in a quad shared between
* multiple PHYs.
*/
-static void
+static int
ice_fill_quad_msg_e822(struct ice_sbq_msg_input *msg, u8 quad, u16 offset)
{
u32 addr;
+ if (quad >= ICE_MAX_QUAD)
+ return -EINVAL;
+
msg->dest_dev = rmn_0;
- if ((quad % ICE_NUM_QUAD_TYPE) == 0)
+ if ((quad % ICE_QUADS_PER_PHY_E822) == 0)
addr = Q_0_BASE + offset;
else
addr = Q_1_BASE + offset;
msg->msg_addr_low = lower_16_bits(addr);
msg->msg_addr_high = upper_16_bits(addr);
+
+ return 0;
}
/**
@@ -527,10 +658,10 @@ ice_read_quad_reg_e822(struct ice_hw *hw, u8 quad, u16 offset, u32 *val)
struct ice_sbq_msg_input msg = {0};
int err;
- if (quad >= ICE_MAX_QUAD)
- return -EINVAL;
+ err = ice_fill_quad_msg_e822(&msg, quad, offset);
+ if (err)
+ return err;
- ice_fill_quad_msg_e822(&msg, quad, offset);
msg.opcode = ice_sbq_msg_rd;
err = ice_sbq_rw_reg(hw, &msg);
@@ -561,10 +692,10 @@ ice_write_quad_reg_e822(struct ice_hw *hw, u8 quad, u16 offset, u32 val)
struct ice_sbq_msg_input msg = {0};
int err;
- if (quad >= ICE_MAX_QUAD)
- return -EINVAL;
+ err = ice_fill_quad_msg_e822(&msg, quad, offset);
+ if (err)
+ return err;
- ice_fill_quad_msg_e822(&msg, quad, offset);
msg.opcode = ice_sbq_msg_wr;
msg.data = val;
@@ -628,29 +759,32 @@ ice_read_phy_tstamp_e822(struct ice_hw *hw, u8 quad, u8 idx, u64 *tstamp)
* @quad: the quad to read from
* @idx: the timestamp index to reset
*
- * Clear a timestamp, resetting its valid bit, from the PHY quad block that is
- * shared between the internal PHYs on the E822 devices.
+ * Read the timestamp out of the quad to clear its timestamp status bit from
+ * the PHY quad block that is shared between the internal PHYs of the E822
+ * devices.
+ *
+ * Note that unlike E810, software cannot directly write to the quad memory
+ * bank registers. E822 relies on the ice_get_phy_tx_tstamp_ready() function
+ * to determine which timestamps are valid. Reading a timestamp auto-clears
+ * the valid bit.
+ *
+ * To directly clear the contents of the timestamp block entirely, discarding
+ * all timestamp data at once, software should instead use
+ * ice_ptp_reset_ts_memory_quad_e822().
+ *
+ * This function should only be called on an idx whose bit is set according to
+ * ice_get_phy_tx_tstamp_ready().
*/
static int
ice_clear_phy_tstamp_e822(struct ice_hw *hw, u8 quad, u8 idx)
{
- u16 lo_addr, hi_addr;
+ u64 unused_tstamp;
int err;
- lo_addr = (u16)TS_L(Q_REG_TX_MEMORY_BANK_START, idx);
- hi_addr = (u16)TS_H(Q_REG_TX_MEMORY_BANK_START, idx);
-
- err = ice_write_quad_reg_e822(hw, quad, lo_addr, 0);
+ err = ice_read_phy_tstamp_e822(hw, quad, idx, &unused_tstamp);
if (err) {
- ice_debug(hw, ICE_DBG_PTP, "Failed to clear low PTP timestamp register, err %d\n",
- err);
- return err;
- }
-
- err = ice_write_quad_reg_e822(hw, quad, hi_addr, 0);
- if (err) {
- ice_debug(hw, ICE_DBG_PTP, "Failed to clear high PTP timestamp register, err %d\n",
- err);
+ ice_debug(hw, ICE_DBG_PTP, "Failed to read the timestamp register for quad %u, idx %u, err %d\n",
+ quad, idx, err);
return err;
}
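
As the rewritten comment explains, on E822 the valid bit is cleared as a side effect of reading the quad memory, so "clearing" a single timestamp is just a read whose result is thrown away. A rough userspace sketch of that read-to-clear pattern, with the register access stubbed out:

    #include <stdint.h>
    #include <stdio.h>

    /* Stubbed "read timestamp" that also clears a per-index valid bit,
     * mimicking the auto-clear-on-read behaviour described above.
     */
    static uint64_t tstamp_mem[64];
    static uint64_t tstamp_ready; /* one valid bit per index */

    static int read_tstamp(uint8_t idx, uint64_t *val)
    {
        *val = tstamp_mem[idx];
        tstamp_ready &= ~(1ULL << idx); /* reading clears the valid bit */
        return 0;
    }

    static int clear_tstamp(uint8_t idx)
    {
        uint64_t unused;

        return read_tstamp(idx, &unused); /* discard the value */
    }

    int main(void)
    {
        tstamp_mem[3] = 0xdeadbeef;
        tstamp_ready = 1ULL << 3;
        clear_tstamp(3);
        printf("ready bitmap after clear: %#llx\n",
               (unsigned long long)tstamp_ready);
        return 0;
    }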
@@ -1025,7 +1159,7 @@ static int ice_ptp_init_phc_e822(struct ice_hw *hw)
* @time: Time to initialize the PHY port clocks to
*
* Program the PHY port registers with a new initial time value. The port
- * clock will be initialized once the driver issues an INIT_TIME sync
+ * clock will be initialized once the driver issues an ICE_PTP_INIT_TIME sync
* command. The time value is the upper 32 bits of the PHY timer, usually in
* units of nominal nanoseconds.
*/
@@ -1074,7 +1208,7 @@ exit_err:
*
* Program the port for an atomic adjustment by writing the Tx and Rx timer
* registers. The atomic adjustment won't be completed until the driver issues
- * an ADJ_TIME command.
+ * an ICE_PTP_ADJ_TIME command.
*
* Note that time is not in units of nanoseconds. It is in clock time
* including the lower sub-nanosecond portion of the port timer.
@@ -1127,7 +1261,7 @@ exit_err:
*
* Prepare the PHY ports for an atomic time adjustment by programming the PHY
* Tx and Rx port registers. The actual adjustment is completed by issuing an
- * ADJ_TIME or ADJ_TIME_AT_TIME sync command.
+ * ICE_PTP_ADJ_TIME or ICE_PTP_ADJ_TIME_AT_TIME sync command.
*/
static int
ice_ptp_prep_phy_adj_e822(struct ice_hw *hw, s32 adj)
@@ -1162,7 +1296,7 @@ ice_ptp_prep_phy_adj_e822(struct ice_hw *hw, s32 adj)
*
* Prepare each of the PHY ports for a new increment value by programming the
* port's TIMETUS registers. The new increment value will be updated after
- * issuing an INIT_INCVAL command.
+ * issuing an ICE_PTP_INIT_INCVAL command.
*/
static int
ice_ptp_prep_phy_incval_e822(struct ice_hw *hw, u64 incval)
@@ -1248,19 +1382,19 @@ ice_ptp_write_port_cmd_e822(struct ice_hw *hw, u8 port, enum ice_ptp_tmr_cmd cmd
tmr_idx = ice_get_ptp_src_clock_index(hw);
cmd_val = tmr_idx << SEL_PHY_SRC;
switch (cmd) {
- case INIT_TIME:
+ case ICE_PTP_INIT_TIME:
cmd_val |= PHY_CMD_INIT_TIME;
break;
- case INIT_INCVAL:
+ case ICE_PTP_INIT_INCVAL:
cmd_val |= PHY_CMD_INIT_INCVAL;
break;
- case ADJ_TIME:
+ case ICE_PTP_ADJ_TIME:
cmd_val |= PHY_CMD_ADJ_TIME;
break;
- case READ_TIME:
+ case ICE_PTP_READ_TIME:
cmd_val |= PHY_CMD_READ_TIME;
break;
- case ADJ_TIME_AT_TIME:
+ case ICE_PTP_ADJ_TIME_AT_TIME:
cmd_val |= PHY_CMD_ADJ_TIME_AT_TIME;
break;
case ICE_PTP_NOP:
@@ -2196,8 +2330,8 @@ int ice_phy_cfg_rx_offset_e822(struct ice_hw *hw, u8 port)
* @phy_time: on return, the 64bit PHY timer value
* @phc_time: on return, the lower 64bits of PHC time
*
- * Issue a READ_TIME timer command to simultaneously capture the PHY and PHC
- * timer values.
+ * Issue a ICE_PTP_READ_TIME timer command to simultaneously capture the PHY
+ * and PHC timer values.
*/
static int
ice_read_phy_and_phc_time_e822(struct ice_hw *hw, u8 port, u64 *phy_time,
@@ -2210,15 +2344,15 @@ ice_read_phy_and_phc_time_e822(struct ice_hw *hw, u8 port, u64 *phy_time,
tmr_idx = ice_get_ptp_src_clock_index(hw);
- /* Prepare the PHC timer for a READ_TIME capture command */
- ice_ptp_src_cmd(hw, READ_TIME);
+ /* Prepare the PHC timer for a ICE_PTP_READ_TIME capture command */
+ ice_ptp_src_cmd(hw, ICE_PTP_READ_TIME);
- /* Prepare the PHY timer for a READ_TIME capture command */
- err = ice_ptp_one_port_cmd(hw, port, READ_TIME);
+ /* Prepare the PHY timer for a ICE_PTP_READ_TIME capture command */
+ err = ice_ptp_one_port_cmd(hw, port, ICE_PTP_READ_TIME);
if (err)
return err;
- /* Issue the sync to start the READ_TIME capture */
+ /* Issue the sync to start the ICE_PTP_READ_TIME capture */
ice_ptp_exec_tmr_cmd(hw);
/* Read the captured PHC time from the shadow time registers */
@@ -2252,10 +2386,11 @@ ice_read_phy_and_phc_time_e822(struct ice_hw *hw, u8 port, u64 *phy_time,
* @port: the PHY port to synchronize
*
* Perform an adjustment to ensure that the PHY and PHC timers are in sync.
- * This is done by issuing a READ_TIME command which triggers a simultaneous
- * read of the PHY timer and PHC timer. Then we use the difference to
- * calculate an appropriate 2s complement addition to add to the PHY timer in
- * order to ensure it reads the same value as the primary PHC timer.
+ * This is done by issuing a ICE_PTP_READ_TIME command which triggers a
+ * simultaneous read of the PHY timer and PHC timer. Then we use the
+ * difference to calculate an appropriate 2s complement addition to add
+ * to the PHY timer in order to ensure it reads the same value as the
+ * primary PHC timer.
*/
static int ice_sync_phy_timer_e822(struct ice_hw *hw, u8 port)
{
@@ -2285,7 +2420,7 @@ static int ice_sync_phy_timer_e822(struct ice_hw *hw, u8 port)
if (err)
goto err_unlock;
- err = ice_ptp_one_port_cmd(hw, port, ADJ_TIME);
+ err = ice_ptp_one_port_cmd(hw, port, ICE_PTP_ADJ_TIME);
if (err)
goto err_unlock;
@@ -2408,7 +2543,7 @@ int ice_start_phy_timer_e822(struct ice_hw *hw, u8 port)
if (err)
return err;
- err = ice_ptp_one_port_cmd(hw, port, INIT_INCVAL);
+ err = ice_ptp_one_port_cmd(hw, port, ICE_PTP_INIT_INCVAL);
if (err)
return err;
@@ -2436,7 +2571,7 @@ int ice_start_phy_timer_e822(struct ice_hw *hw, u8 port)
if (err)
return err;
- err = ice_ptp_one_port_cmd(hw, port, INIT_INCVAL);
+ err = ice_ptp_one_port_cmd(hw, port, ICE_PTP_INIT_INCVAL);
if (err)
return err;
@@ -2685,28 +2820,39 @@ ice_read_phy_tstamp_e810(struct ice_hw *hw, u8 lport, u8 idx, u64 *tstamp)
* @lport: the lport to read from
* @idx: the timestamp index to reset
*
- * Clear a timestamp, resetting its valid bit, from the timestamp block of the
- * external PHY on the E810 device.
+ * Read the timestamp and then forcibly overwrite its value to clear the valid
+ * bit from the timestamp block of the external PHY on the E810 device.
+ *
+ * This function should only be called on an idx whose bit is set according to
+ * ice_get_phy_tx_tstamp_ready().
*/
static int ice_clear_phy_tstamp_e810(struct ice_hw *hw, u8 lport, u8 idx)
{
u32 lo_addr, hi_addr;
+ u64 unused_tstamp;
int err;
+ err = ice_read_phy_tstamp_e810(hw, lport, idx, &unused_tstamp);
+ if (err) {
+ ice_debug(hw, ICE_DBG_PTP, "Failed to read the timestamp register for lport %u, idx %u, err %d\n",
+ lport, idx, err);
+ return err;
+ }
+
lo_addr = TS_EXT(LOW_TX_MEMORY_BANK_START, lport, idx);
hi_addr = TS_EXT(HIGH_TX_MEMORY_BANK_START, lport, idx);
err = ice_write_phy_reg_e810(hw, lo_addr, 0);
if (err) {
- ice_debug(hw, ICE_DBG_PTP, "Failed to clear low PTP timestamp register, err %d\n",
- err);
+ ice_debug(hw, ICE_DBG_PTP, "Failed to clear low PTP timestamp register for lport %u, idx %u, err %d\n",
+ lport, idx, err);
return err;
}
err = ice_write_phy_reg_e810(hw, hi_addr, 0);
if (err) {
- ice_debug(hw, ICE_DBG_PTP, "Failed to clear high PTP timestamp register, err %d\n",
- err);
+ ice_debug(hw, ICE_DBG_PTP, "Failed to clear high PTP timestamp register for lport %u, idx %u, err %d\n",
+ lport, idx, err);
return err;
}
@@ -2757,7 +2903,7 @@ static int ice_ptp_init_phc_e810(struct ice_hw *hw)
*
* Program the PHY port ETH_GLTSYN_SHTIME registers in preparation setting the
* initial clock time. The time will not actually be programmed until the
- * driver issues an INIT_TIME command.
+ * driver issues an ICE_PTP_INIT_TIME command.
*
* The time value is the upper 32 bits of the PHY timer, usually in units of
* nominal nanoseconds.
@@ -2792,7 +2938,7 @@ static int ice_ptp_prep_phy_time_e810(struct ice_hw *hw, u32 time)
*
* Prepare the PHY port for an atomic adjustment by programming the PHY
* ETH_GLTSYN_SHADJ_L and ETH_GLTSYN_SHADJ_H registers. The actual adjustment
- * is completed by issuing an ADJ_TIME sync command.
+ * is completed by issuing an ICE_PTP_ADJ_TIME sync command.
*
* The adjustment value only contains the portion used for the upper 32bits of
* the PHY timer, usually in units of nominal nanoseconds. Negative
@@ -2832,7 +2978,7 @@ static int ice_ptp_prep_phy_adj_e810(struct ice_hw *hw, s32 adj)
*
* Prepare the PHY port for a new increment value by programming the PHY
* ETH_GLTSYN_SHADJ_L and ETH_GLTSYN_SHADJ_H registers. The actual change is
- * completed by issuing an INIT_INCVAL command.
+ * completed by issuing an ICE_PTP_INIT_INCVAL command.
*/
static int ice_ptp_prep_phy_incval_e810(struct ice_hw *hw, u64 incval)
{
@@ -2875,19 +3021,19 @@ static int ice_ptp_port_cmd_e810(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
int err;
switch (cmd) {
- case INIT_TIME:
+ case ICE_PTP_INIT_TIME:
cmd_val = GLTSYN_CMD_INIT_TIME;
break;
- case INIT_INCVAL:
+ case ICE_PTP_INIT_INCVAL:
cmd_val = GLTSYN_CMD_INIT_INCVAL;
break;
- case ADJ_TIME:
+ case ICE_PTP_ADJ_TIME:
cmd_val = GLTSYN_CMD_ADJ_TIME;
break;
- case READ_TIME:
+ case ICE_PTP_READ_TIME:
cmd_val = GLTSYN_CMD_READ_TIME;
break;
- case ADJ_TIME_AT_TIME:
+ case ICE_PTP_ADJ_TIME_AT_TIME:
cmd_val = GLTSYN_CMD_ADJ_INIT_TIME;
break;
case ICE_PTP_NOP:
@@ -3150,6 +3296,21 @@ void ice_ptp_unlock(struct ice_hw *hw)
}
/**
+ * ice_ptp_init_phy_model - Initialize hw->phy_model based on device type
+ * @hw: pointer to the HW structure
+ *
+ * Determine the PHY model for the device, and initialize hw->phy_model
+ * for use by other functions.
+ */
+void ice_ptp_init_phy_model(struct ice_hw *hw)
+{
+ if (ice_is_e810(hw))
+ hw->phy_model = ICE_PHY_E810;
+ else
+ hw->phy_model = ICE_PHY_E822;
+}
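
Caching the PHY model once in hw->phy_model lets the rest of the file replace repeated ice_is_e810() checks with switch statements that return an error for unknown hardware, as the following hunks do. A minimal sketch of that dispatch pattern with simplified names:

    #include <errno.h>
    #include <stdio.h>

    enum phy_model { PHY_UNKNOWN, PHY_E810, PHY_E822 };

    static int init_phc_e810(void) { return 0; }
    static int init_phc_e822(void) { return 0; }

    /* Dispatch on the cached PHY model; unknown models get -EOPNOTSUPP
     * instead of silently falling into an "else" branch.
     */
    static int init_phc(enum phy_model model)
    {
        switch (model) {
        case PHY_E810:
            return init_phc_e810();
        case PHY_E822:
            return init_phc_e822();
        default:
            return -EOPNOTSUPP;
        }
    }

    int main(void)
    {
        printf("unknown model -> %d\n", init_phc(PHY_UNKNOWN));
        return 0;
    }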
+
+/**
* ice_ptp_tmr_cmd - Prepare and trigger a timer sync command
* @hw: pointer to HW struct
* @cmd: the command to issue
@@ -3167,10 +3328,17 @@ static int ice_ptp_tmr_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
ice_ptp_src_cmd(hw, cmd);
/* Next, prepare the ports */
- if (ice_is_e810(hw))
+ switch (hw->phy_model) {
+ case ICE_PHY_E810:
err = ice_ptp_port_cmd_e810(hw, cmd);
- else
+ break;
+ case ICE_PHY_E822:
err = ice_ptp_port_cmd_e822(hw, cmd);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ }
+
if (err) {
ice_debug(hw, ICE_DBG_PTP, "Failed to prepare PHY ports for timer command %u, err %d\n",
cmd, err);
@@ -3212,14 +3380,21 @@ int ice_ptp_init_time(struct ice_hw *hw, u64 time)
/* PHY timers */
/* Fill Rx and Tx ports and send msg to PHY */
- if (ice_is_e810(hw))
+ switch (hw->phy_model) {
+ case ICE_PHY_E810:
err = ice_ptp_prep_phy_time_e810(hw, time & 0xFFFFFFFF);
- else
+ break;
+ case ICE_PHY_E822:
err = ice_ptp_prep_phy_time_e822(hw, time & 0xFFFFFFFF);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ }
+
if (err)
return err;
- return ice_ptp_tmr_cmd(hw, INIT_TIME);
+ return ice_ptp_tmr_cmd(hw, ICE_PTP_INIT_TIME);
}
/**
@@ -3232,8 +3407,8 @@ int ice_ptp_init_time(struct ice_hw *hw, u64 time)
*
* 1) Write the increment value to the source timer shadow registers
* 2) Write the increment value to the PHY timer shadow registers
- * 3) Issue an INIT_INCVAL timer command to synchronously switch both the
- * source and port timers to the new increment value at the next clock
+ * 3) Issue an ICE_PTP_INIT_INCVAL timer command to synchronously switch both
+ * the source and port timers to the new increment value at the next clock
* cycle.
*/
int ice_ptp_write_incval(struct ice_hw *hw, u64 incval)
@@ -3247,14 +3422,21 @@ int ice_ptp_write_incval(struct ice_hw *hw, u64 incval)
wr32(hw, GLTSYN_SHADJ_L(tmr_idx), lower_32_bits(incval));
wr32(hw, GLTSYN_SHADJ_H(tmr_idx), upper_32_bits(incval));
- if (ice_is_e810(hw))
+ switch (hw->phy_model) {
+ case ICE_PHY_E810:
err = ice_ptp_prep_phy_incval_e810(hw, incval);
- else
+ break;
+ case ICE_PHY_E822:
err = ice_ptp_prep_phy_incval_e822(hw, incval);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ }
+
if (err)
return err;
- return ice_ptp_tmr_cmd(hw, INIT_INCVAL);
+ return ice_ptp_tmr_cmd(hw, ICE_PTP_INIT_INCVAL);
}
/**
@@ -3288,8 +3470,8 @@ int ice_ptp_write_incval_locked(struct ice_hw *hw, u64 incval)
*
* 1) Write the adjustment to the source timer shadow registers
* 2) Write the adjustment to the PHY timer shadow registers
- * 3) Issue an ADJ_TIME timer command to synchronously apply the adjustment to
- * both the source and port timers at the next clock cycle.
+ * 3) Issue an ICE_PTP_ADJ_TIME timer command to synchronously apply the
+ * adjustment to both the source and port timers at the next clock cycle.
*/
int ice_ptp_adj_clock(struct ice_hw *hw, s32 adj)
{
@@ -3299,21 +3481,28 @@ int ice_ptp_adj_clock(struct ice_hw *hw, s32 adj)
tmr_idx = hw->func_caps.ts_func_info.tmr_index_owned;
/* Write the desired clock adjustment into the GLTSYN_SHADJ register.
- * For an ADJ_TIME command, this set of registers represents the value
- * to add to the clock time. It supports subtraction by interpreting
- * the value as a 2's complement integer.
+ * For an ICE_PTP_ADJ_TIME command, this set of registers represents
+ * the value to add to the clock time. It supports subtraction by
+ * interpreting the value as a 2's complement integer.
*/
wr32(hw, GLTSYN_SHADJ_L(tmr_idx), 0);
wr32(hw, GLTSYN_SHADJ_H(tmr_idx), adj);
- if (ice_is_e810(hw))
+ switch (hw->phy_model) {
+ case ICE_PHY_E810:
err = ice_ptp_prep_phy_adj_e810(hw, adj);
- else
+ break;
+ case ICE_PHY_E822:
err = ice_ptp_prep_phy_adj_e822(hw, adj);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ }
+
if (err)
return err;
- return ice_ptp_tmr_cmd(hw, ADJ_TIME);
+ return ice_ptp_tmr_cmd(hw, ICE_PTP_ADJ_TIME);
}
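
The adjustment written to GLTSYN_SHADJ is a signed 32-bit value interpreted in two's complement, so a negative value subtracts from the timer once the ICE_PTP_ADJ_TIME command executes. A small arithmetic sketch of how such an addend behaves against a 64-bit nanosecond counter:

    #include <stdint.h>
    #include <stdio.h>

    /* Apply a signed 32-bit adjustment to a 64-bit timer value the way a
     * two's complement addend does: sign-extend, then wrap-around add.
     */
    static uint64_t apply_adj(uint64_t timer, int32_t adj)
    {
        return timer + (uint64_t)(int64_t)adj;
    }

    int main(void)
    {
        uint64_t t = 1000000000ULL; /* 1 second, in nanoseconds */

        printf("+500 -> %llu\n", (unsigned long long)apply_adj(t, 500));
        printf("-500 -> %llu\n", (unsigned long long)apply_adj(t, -500));
        return 0;
    }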
/**
@@ -3329,10 +3518,14 @@ int ice_ptp_adj_clock(struct ice_hw *hw, s32 adj)
*/
int ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp)
{
- if (ice_is_e810(hw))
+ switch (hw->phy_model) {
+ case ICE_PHY_E810:
return ice_read_phy_tstamp_e810(hw, block, idx, tstamp);
- else
+ case ICE_PHY_E822:
return ice_read_phy_tstamp_e822(hw, block, idx, tstamp);
+ default:
+ return -EOPNOTSUPP;
+ }
}
/**
@@ -3341,16 +3534,71 @@ int ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp)
* @block: the block to read from
* @idx: the timestamp index to reset
*
- * Clear a timestamp, resetting its valid bit, from the timestamp block. For
- * E822 devices, the block is the quad to clear from. For E810 devices, the
- * block is the logical port to clear from.
+ * Clear a timestamp from the timestamp block, discarding its value without
+ * returning it. This resets the memory status bit for the timestamp index
+ * allowing it to be reused for another timestamp in the future.
+ *
+ * For E822 devices, the block number is the PHY quad to clear from. For E810
+ * devices, the block number is the logical port to clear from.
+ *
+ * This function must only be called on a timestamp index whose valid bit is
+ * set according to ice_get_phy_tx_tstamp_ready().
*/
int ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx)
{
- if (ice_is_e810(hw))
+ switch (hw->phy_model) {
+ case ICE_PHY_E810:
return ice_clear_phy_tstamp_e810(hw, block, idx);
- else
+ case ICE_PHY_E822:
return ice_clear_phy_tstamp_e822(hw, block, idx);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+/**
+ * ice_get_pf_c827_idx - find and return the C827 index for the current pf
+ * @hw: pointer to the hw struct
+ * @idx: index of the found C827 PHY
+ * Return:
+ * * 0 - success
+ * * negative - failure
+ */
+static int ice_get_pf_c827_idx(struct ice_hw *hw, u8 *idx)
+{
+ struct ice_aqc_get_link_topo cmd;
+ u8 node_part_number;
+ u16 node_handle;
+ int status;
+ u8 ctx;
+
+ if (hw->mac_type != ICE_MAC_E810)
+ return -ENODEV;
+
+ if (hw->device_id != ICE_DEV_ID_E810C_QSFP) {
+ *idx = C827_0;
+ return 0;
+ }
+
+ memset(&cmd, 0, sizeof(cmd));
+
+ ctx = ICE_AQC_LINK_TOPO_NODE_TYPE_PHY << ICE_AQC_LINK_TOPO_NODE_TYPE_S;
+ ctx |= ICE_AQC_LINK_TOPO_NODE_CTX_PORT << ICE_AQC_LINK_TOPO_NODE_CTX_S;
+ cmd.addr.topo_params.node_type_ctx = ctx;
+
+ status = ice_aq_get_netlist_node(hw, &cmd, &node_part_number,
+ &node_handle);
+ if (status || node_part_number != ICE_AQC_GET_LINK_TOPO_NODE_NR_C827)
+ return -ENOENT;
+
+ if (node_handle == E810C_QSFP_C827_0_HANDLE)
+ *idx = C827_0;
+ else if (node_handle == E810C_QSFP_C827_1_HANDLE)
+ *idx = C827_1;
+ else
+ return -EIO;
+
+ return 0;
}
/**
@@ -3359,10 +3607,14 @@ int ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx)
*/
void ice_ptp_reset_ts_memory(struct ice_hw *hw)
{
- if (ice_is_e810(hw))
+ switch (hw->phy_model) {
+ case ICE_PHY_E822:
+ ice_ptp_reset_ts_memory_e822(hw);
+ break;
+ case ICE_PHY_E810:
+ default:
return;
-
- ice_ptp_reset_ts_memory_e822(hw);
+ }
}
/**
@@ -3381,10 +3633,14 @@ int ice_ptp_init_phc(struct ice_hw *hw)
/* Clear event err indications for auxiliary pins */
(void)rd32(hw, GLTSYN_STAT(src_idx));
- if (ice_is_e810(hw))
+ switch (hw->phy_model) {
+ case ICE_PHY_E810:
return ice_ptp_init_phc_e810(hw);
- else
+ case ICE_PHY_E822:
return ice_ptp_init_phc_e822(hw);
+ default:
+ return -EOPNOTSUPP;
+ }
}
/**
@@ -3400,10 +3656,308 @@ int ice_ptp_init_phc(struct ice_hw *hw)
*/
int ice_get_phy_tx_tstamp_ready(struct ice_hw *hw, u8 block, u64 *tstamp_ready)
{
- if (ice_is_e810(hw))
+ switch (hw->phy_model) {
+ case ICE_PHY_E810:
return ice_get_phy_tx_tstamp_ready_e810(hw, block,
tstamp_ready);
- else
+ case ICE_PHY_E822:
return ice_get_phy_tx_tstamp_ready_e822(hw, block,
tstamp_ready);
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+/**
+ * ice_cgu_get_pin_desc_e823 - get pin description array
+ * @hw: pointer to the hw struct
+ * @input: true if the request is for an input pin, false for an output pin
+ * @size: number of inputs/outputs
+ *
+ * Return: pointer to the pin description array associated with the given hw.
+ */
+static const struct ice_cgu_pin_desc *
+ice_cgu_get_pin_desc_e823(struct ice_hw *hw, bool input, int *size)
+{
+ static const struct ice_cgu_pin_desc *t;
+
+ if (hw->cgu_part_number ==
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_ZL30632_80032) {
+ if (input) {
+ t = ice_e823_zl_cgu_inputs;
+ *size = ARRAY_SIZE(ice_e823_zl_cgu_inputs);
+ } else {
+ t = ice_e823_zl_cgu_outputs;
+ *size = ARRAY_SIZE(ice_e823_zl_cgu_outputs);
+ }
+ } else if (hw->cgu_part_number ==
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_SI5383_5384) {
+ if (input) {
+ t = ice_e823_si_cgu_inputs;
+ *size = ARRAY_SIZE(ice_e823_si_cgu_inputs);
+ } else {
+ t = ice_e823_si_cgu_outputs;
+ *size = ARRAY_SIZE(ice_e823_si_cgu_outputs);
+ }
+ } else {
+ t = NULL;
+ *size = 0;
+ }
+
+ return t;
+}
+
+/**
+ * ice_cgu_get_pin_desc - get pin description array
+ * @hw: pointer to the hw struct
+ * @input: true if the request is for input pins, false for output pins
+ * @size: size of the array returned by the function
+ *
+ * Return: pointer to the pin description array associated with the given hw.
+ */
+static const struct ice_cgu_pin_desc *
+ice_cgu_get_pin_desc(struct ice_hw *hw, bool input, int *size)
+{
+ const struct ice_cgu_pin_desc *t = NULL;
+
+ switch (hw->device_id) {
+ case ICE_DEV_ID_E810C_SFP:
+ if (input) {
+ t = ice_e810t_sfp_cgu_inputs;
+ *size = ARRAY_SIZE(ice_e810t_sfp_cgu_inputs);
+ } else {
+ t = ice_e810t_sfp_cgu_outputs;
+ *size = ARRAY_SIZE(ice_e810t_sfp_cgu_outputs);
+ }
+ break;
+ case ICE_DEV_ID_E810C_QSFP:
+ if (input) {
+ t = ice_e810t_qsfp_cgu_inputs;
+ *size = ARRAY_SIZE(ice_e810t_qsfp_cgu_inputs);
+ } else {
+ t = ice_e810t_qsfp_cgu_outputs;
+ *size = ARRAY_SIZE(ice_e810t_qsfp_cgu_outputs);
+ }
+ break;
+ case ICE_DEV_ID_E823L_10G_BASE_T:
+ case ICE_DEV_ID_E823L_1GBE:
+ case ICE_DEV_ID_E823L_BACKPLANE:
+ case ICE_DEV_ID_E823L_QSFP:
+ case ICE_DEV_ID_E823L_SFP:
+ case ICE_DEV_ID_E823C_10G_BASE_T:
+ case ICE_DEV_ID_E823C_BACKPLANE:
+ case ICE_DEV_ID_E823C_QSFP:
+ case ICE_DEV_ID_E823C_SFP:
+ case ICE_DEV_ID_E823C_SGMII:
+ t = ice_cgu_get_pin_desc_e823(hw, input, size);
+ break;
+ default:
+ break;
+ }
+
+ return t;
+}
+
+/**
+ * ice_cgu_get_pin_type - get pin's type
+ * @hw: pointer to the hw struct
+ * @pin: pin index
+ * @input: true if the request is for an input pin, false for an output pin
+ *
+ * Return: type of a pin.
+ */
+enum dpll_pin_type ice_cgu_get_pin_type(struct ice_hw *hw, u8 pin, bool input)
+{
+ const struct ice_cgu_pin_desc *t;
+ int t_size;
+
+ t = ice_cgu_get_pin_desc(hw, input, &t_size);
+
+ if (!t)
+ return 0;
+
+ if (pin >= t_size)
+ return 0;
+
+ return t[pin].type;
+}
+
+/**
+ * ice_cgu_get_pin_freq_supp - get pin's supported frequency
+ * @hw: pointer to the hw struct
+ * @pin: pin index
+ * @input: true if the request is for an input pin, false for an output pin
+ * @num: output number of supported frequencies
+ *
+ * Get the number of supported frequencies and the supported-frequency array
+ * for the pin.
+ *
+ * Return: array of supported frequencies for the given pin.
+ */
+struct dpll_pin_frequency *
+ice_cgu_get_pin_freq_supp(struct ice_hw *hw, u8 pin, bool input, u8 *num)
+{
+ const struct ice_cgu_pin_desc *t;
+ int t_size;
+
+ *num = 0;
+ t = ice_cgu_get_pin_desc(hw, input, &t_size);
+ if (!t)
+ return NULL;
+ if (pin >= t_size)
+ return NULL;
+ *num = t[pin].freq_supp_num;
+
+ return t[pin].freq_supp;
+}
+
+/**
+ * ice_cgu_get_pin_name - get pin's name
+ * @hw: pointer to the hw struct
+ * @pin: pin index
+ * @input: true if the request is for an input pin, false for an output pin
+ *
+ * Return:
+ * * null terminated char array with name
+ * * NULL in case of failure
+ */
+const char *ice_cgu_get_pin_name(struct ice_hw *hw, u8 pin, bool input)
+{
+ const struct ice_cgu_pin_desc *t;
+ int t_size;
+
+ t = ice_cgu_get_pin_desc(hw, input, &t_size);
+
+ if (!t)
+ return NULL;
+
+ if (pin >= t_size)
+ return NULL;
+
+ return t[pin].name;
+}
+
+/**
+ * ice_get_cgu_state - get the state of the DPLL
+ * @hw: pointer to the hw struct
+ * @dpll_idx: Index of internal DPLL unit
+ * @last_dpll_state: last known state of DPLL
+ * @pin: pointer to a buffer for returning currently active pin
+ * @ref_state: reference clock state
+ * @eec_mode: eec mode of the DPLL
+ * @phase_offset: pointer to a buffer for returning phase offset
+ * @dpll_state: state of the DPLL (output)
+ *
+ * This function will read the state of the DPLL (dpll_idx). Non-null
+ * 'pin', 'ref_state', 'eec_mode' and 'phase_offset' parameters are used to
+ * retrieve the currently active pin, reference state, EEC mode and phase
+ * offset respectively.
+ *
+ * Return: 0 on success, negative error code otherwise; the DPLL state is
+ * returned through @dpll_state.
+ */
+int ice_get_cgu_state(struct ice_hw *hw, u8 dpll_idx,
+ enum dpll_lock_status last_dpll_state, u8 *pin,
+ u8 *ref_state, u8 *eec_mode, s64 *phase_offset,
+ enum dpll_lock_status *dpll_state)
+{
+ u8 hw_ref_state, hw_dpll_state, hw_eec_mode, hw_config;
+ s64 hw_phase_offset;
+ int status;
+
+ status = ice_aq_get_cgu_dpll_status(hw, dpll_idx, &hw_ref_state,
+ &hw_dpll_state, &hw_config,
+ &hw_phase_offset, &hw_eec_mode);
+ if (status)
+ return status;
+
+ if (pin)
+ /* current ref pin in dpll_state_refsel_status_X register */
+ *pin = hw_config & ICE_AQC_GET_CGU_DPLL_CONFIG_CLK_REF_SEL;
+ if (phase_offset)
+ *phase_offset = hw_phase_offset;
+ if (ref_state)
+ *ref_state = hw_ref_state;
+ if (eec_mode)
+ *eec_mode = hw_eec_mode;
+ if (!dpll_state)
+ return 0;
+
+ /* According to the ZL DPLL documentation, once the state reaches
+ * LOCKED_HO_ACQ it never returns to FREERUN, which aligns with the
+ * ITU-T G.781 Recommendation. We cannot rely on the hardware to
+ * report HOLDOVER, as the HO memory is cleared while switching to
+ * another reference, so it is inferred from the last known state.
+ * Only when the previous state was "LOCKED without HO_ACQ" or
+ * already unlocked do we actually fall back to FREERUN.
+ */
+ if (hw_dpll_state & ICE_AQC_GET_CGU_DPLL_STATUS_STATE_LOCK) {
+ if (hw_dpll_state & ICE_AQC_GET_CGU_DPLL_STATUS_STATE_HO_READY)
+ *dpll_state = DPLL_LOCK_STATUS_LOCKED_HO_ACQ;
+ else
+ *dpll_state = DPLL_LOCK_STATUS_LOCKED;
+ } else if (last_dpll_state == DPLL_LOCK_STATUS_LOCKED_HO_ACQ ||
+ last_dpll_state == DPLL_LOCK_STATUS_HOLDOVER) {
+ *dpll_state = DPLL_LOCK_STATUS_HOLDOVER;
+ } else {
+ *dpll_state = DPLL_LOCK_STATUS_UNLOCKED;
+ }
+
+ return 0;
+}
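
The lock-state handling above boils down to: locked now reports LOCKED or LOCKED_HO_ACQ depending on HO_READY; unlocked after LOCKED_HO_ACQ or HOLDOVER reports HOLDOVER; anything else reports UNLOCKED. A compact sketch of that decision with simplified stand-in names for the status bits:

    #include <stdbool.h>
    #include <stdio.h>

    enum lock_status { UNLOCKED, LOCKED, LOCKED_HO_ACQ, HOLDOVER };

    /* Mirror of the decision made above: "locked" and "ho_ready" stand in
     * for the STATE_LOCK and STATE_HO_READY status bits.
     */
    static enum lock_status next_state(bool locked, bool ho_ready,
                                       enum lock_status last)
    {
        if (locked)
            return ho_ready ? LOCKED_HO_ACQ : LOCKED;
        if (last == LOCKED_HO_ACQ || last == HOLDOVER)
            return HOLDOVER;
        return UNLOCKED;
    }

    int main(void)
    {
        /* lock lost after LOCKED_HO_ACQ is reported as HOLDOVER */
        printf("%d\n", next_state(false, false, LOCKED_HO_ACQ));
        /* lock lost after plain LOCKED falls back to UNLOCKED */
        printf("%d\n", next_state(false, false, LOCKED));
        return 0;
    }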
+
+/**
+ * ice_get_cgu_rclk_pin_info - get info on available recovered clock pins
+ * @hw: pointer to the hw struct
+ * @base_idx: returns index of first recovered clock pin on device
+ * @pin_num: returns number of recovered clock pins available on device
+ *
+ * Based on the hw, provide the caller with info about the recovered clock
+ * pins available on the board.
+ *
+ * Return:
+ * * 0 - success, information is valid
+ * * negative - failure, information is not valid
+ */
+int ice_get_cgu_rclk_pin_info(struct ice_hw *hw, u8 *base_idx, u8 *pin_num)
+{
+ u8 phy_idx;
+ int ret;
+
+ switch (hw->device_id) {
+ case ICE_DEV_ID_E810C_SFP:
+ case ICE_DEV_ID_E810C_QSFP:
+
+ ret = ice_get_pf_c827_idx(hw, &phy_idx);
+ if (ret)
+ return ret;
+ *base_idx = E810T_CGU_INPUT_C827(phy_idx, ICE_RCLKA_PIN);
+ *pin_num = ICE_E810_RCLK_PINS_NUM;
+ ret = 0;
+ break;
+ case ICE_DEV_ID_E823L_10G_BASE_T:
+ case ICE_DEV_ID_E823L_1GBE:
+ case ICE_DEV_ID_E823L_BACKPLANE:
+ case ICE_DEV_ID_E823L_QSFP:
+ case ICE_DEV_ID_E823L_SFP:
+ case ICE_DEV_ID_E823C_10G_BASE_T:
+ case ICE_DEV_ID_E823C_BACKPLANE:
+ case ICE_DEV_ID_E823C_QSFP:
+ case ICE_DEV_ID_E823C_SFP:
+ case ICE_DEV_ID_E823C_SGMII:
+ *pin_num = ICE_E822_RCLK_PINS_NUM;
+ ret = 0;
+ if (hw->cgu_part_number ==
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_ZL30632_80032)
+ *base_idx = ZL_REF1P;
+ else if (hw->cgu_part_number ==
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_SI5383_5384)
+ *base_idx = SI_REF1P;
+ else
+ ret = -ENODEV;
+
+ break;
+ default:
+ ret = -ENODEV;
+ break;
+ }
+
+ return ret;
}
diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
index 9aa10b0426fd..36aeeef99ec0 100644
--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
@@ -3,13 +3,14 @@
#ifndef _ICE_PTP_HW_H_
#define _ICE_PTP_HW_H_
+#include <linux/dpll.h>
enum ice_ptp_tmr_cmd {
- INIT_TIME,
- INIT_INCVAL,
- ADJ_TIME,
- ADJ_TIME_AT_TIME,
- READ_TIME,
+ ICE_PTP_INIT_TIME,
+ ICE_PTP_INIT_INCVAL,
+ ICE_PTP_ADJ_TIME,
+ ICE_PTP_ADJ_TIME_AT_TIME,
+ ICE_PTP_READ_TIME,
ICE_PTP_NOP,
};
@@ -110,6 +111,77 @@ struct ice_cgu_pll_params_e822 {
u32 post_pll_div;
};
+#define E810C_QSFP_C827_0_HANDLE 2
+#define E810C_QSFP_C827_1_HANDLE 3
+enum ice_e810_c827_idx {
+ C827_0,
+ C827_1
+};
+
+enum ice_phy_rclk_pins {
+ ICE_RCLKA_PIN = 0, /* SCL pin */
+ ICE_RCLKB_PIN, /* SDA pin */
+};
+
+#define ICE_E810_RCLK_PINS_NUM (ICE_RCLKB_PIN + 1)
+#define ICE_E822_RCLK_PINS_NUM (ICE_RCLKA_PIN + 1)
+#define E810T_CGU_INPUT_C827(_phy, _pin) ((_phy) * ICE_E810_RCLK_PINS_NUM + \
+ (_pin) + ZL_REF1P)
+
+enum ice_zl_cgu_in_pins {
+ ZL_REF0P = 0,
+ ZL_REF0N,
+ ZL_REF1P,
+ ZL_REF1N,
+ ZL_REF2P,
+ ZL_REF2N,
+ ZL_REF3P,
+ ZL_REF3N,
+ ZL_REF4P,
+ ZL_REF4N,
+ NUM_ZL_CGU_INPUT_PINS
+};
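
With ZL_REF1P equal to 2 in the enum above, E810T_CGU_INPUT_C827() maps each C827 PHY's RCLKA/RCLKB pair onto consecutive ZL reference inputs starting at ZL_REF1P, which matches the C827_x-RCLK{A,B} rows in the CGU input tables earlier in this patch. A quick standalone check of that arithmetic:

    #include <stdio.h>

    /* Values taken from the enums and macros in this hunk. */
    enum { C827_0, C827_1 };
    enum { ICE_RCLKA_PIN, ICE_RCLKB_PIN };
    #define ICE_E810_RCLK_PINS_NUM 2
    #define ZL_REF1P 2
    #define E810T_CGU_INPUT_C827(_phy, _pin) \
        ((_phy) * ICE_E810_RCLK_PINS_NUM + (_pin) + ZL_REF1P)

    int main(void)
    {
        /* Expected: 2 (ZL_REF1P), 3 (ZL_REF1N), 4 (ZL_REF2P), 5 (ZL_REF2N) */
        printf("%d %d %d %d\n",
               E810T_CGU_INPUT_C827(C827_0, ICE_RCLKA_PIN),
               E810T_CGU_INPUT_C827(C827_0, ICE_RCLKB_PIN),
               E810T_CGU_INPUT_C827(C827_1, ICE_RCLKA_PIN),
               E810T_CGU_INPUT_C827(C827_1, ICE_RCLKB_PIN));
        return 0;
    }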
+
+enum ice_zl_cgu_out_pins {
+ ZL_OUT0 = 0,
+ ZL_OUT1,
+ ZL_OUT2,
+ ZL_OUT3,
+ ZL_OUT4,
+ ZL_OUT5,
+ ZL_OUT6,
+ NUM_ZL_CGU_OUTPUT_PINS
+};
+
+enum ice_si_cgu_in_pins {
+ SI_REF0P = 0,
+ SI_REF0N,
+ SI_REF1P,
+ SI_REF1N,
+ SI_REF2P,
+ SI_REF2N,
+ SI_REF3,
+ SI_REF4,
+ NUM_SI_CGU_INPUT_PINS
+};
+
+enum ice_si_cgu_out_pins {
+ SI_OUT0 = 0,
+ SI_OUT1,
+ SI_OUT2,
+ SI_OUT3,
+ SI_OUT4,
+ NUM_SI_CGU_OUTPUT_PINS
+};
+
+struct ice_cgu_pin_desc {
+ char *name;
+ u8 index;
+ enum dpll_pin_type type;
+ u32 freq_supp_num;
+ struct dpll_pin_frequency *freq_supp;
+};
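
The pin helpers in ice_ptp_hw.c all follow the same pattern against this descriptor type: fetch the board-specific table, bounds-check the pin index, then hand back a single field. A small self-contained sketch of that lookup shape; the two-entry table is made up for illustration:

    #include <stdio.h>

    struct pin_desc {
        const char *name;
        unsigned char index;
        unsigned int freq_supp_num;
    };

    /* Hypothetical two-entry table standing in for the per-board arrays. */
    static const struct pin_desc demo_pins[] = {
        { "SMA1", 0, 2 },
        { "GNSS-1PPS", 1, 1 },
    };

    /* Same shape as ice_cgu_get_pin_name(): a NULL table or out-of-range
     * pin index yields NULL rather than reading past the array.
     */
    static const char *pin_name(const struct pin_desc *t, size_t t_size,
                                unsigned int pin)
    {
        if (!t || pin >= t_size)
            return NULL;
        return t[pin].name;
    }

    int main(void)
    {
        size_t n = sizeof(demo_pins) / sizeof(demo_pins[0]);

        printf("pin 1: %s\n", pin_name(demo_pins, n, 1)); /* GNSS-1PPS */
        if (!pin_name(demo_pins, n, 7))
            printf("pin 7: out of range -> NULL\n");
        return 0;
    }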
+
extern const struct
ice_cgu_pll_params_e822 e822_cgu_params[NUM_ICE_TIME_REF_FREQ];
@@ -131,6 +203,7 @@ extern const struct ice_vernier_info_e822 e822_vernier[NUM_ICE_PTP_LNK_SPD];
u8 ice_get_ptp_src_clock_index(struct ice_hw *hw);
bool ice_ptp_lock(struct ice_hw *hw);
void ice_ptp_unlock(struct ice_hw *hw);
+void ice_ptp_src_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd);
int ice_ptp_init_time(struct ice_hw *hw, u64 time);
int ice_ptp_write_incval(struct ice_hw *hw, u64 incval);
int ice_ptp_write_incval_locked(struct ice_hw *hw, u64 incval);
@@ -197,6 +270,18 @@ int ice_ptp_init_phy_e810(struct ice_hw *hw);
int ice_read_sma_ctrl_e810t(struct ice_hw *hw, u8 *data);
int ice_write_sma_ctrl_e810t(struct ice_hw *hw, u8 data);
int ice_read_pca9575_reg_e810t(struct ice_hw *hw, u8 offset, u8 *data);
+bool ice_is_pca9575_present(struct ice_hw *hw);
+enum dpll_pin_type ice_cgu_get_pin_type(struct ice_hw *hw, u8 pin, bool input);
+struct dpll_pin_frequency *
+ice_cgu_get_pin_freq_supp(struct ice_hw *hw, u8 pin, bool input, u8 *num);
+const char *ice_cgu_get_pin_name(struct ice_hw *hw, u8 pin, bool input);
+int ice_get_cgu_state(struct ice_hw *hw, u8 dpll_idx,
+ enum dpll_lock_status last_dpll_state, u8 *pin,
+ u8 *ref_state, u8 *eec_mode, s64 *phase_offset,
+ enum dpll_lock_status *dpll_state);
+int ice_get_cgu_rclk_pin_info(struct ice_hw *hw, u8 *base_idx, u8 *pin_num);
+
+void ice_ptp_init_phy_model(struct ice_hw *hw);
#define PFTSYN_SEM_BYTES 4
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index c0533d7b66b9..2f4a621254e8 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -229,29 +229,22 @@ ice_aq_delete_sched_elems(struct ice_hw *hw, u16 grps_req,
* ice_sched_remove_elems - remove nodes from HW
* @hw: pointer to the HW struct
* @parent: pointer to the parent node
- * @num_nodes: number of nodes
- * @node_teids: array of node teids to be deleted
+ * @node_teid: node teid to be deleted
*
* This function remove nodes from HW
*/
static int
ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
- u16 num_nodes, u32 *node_teids)
+ u32 node_teid)
{
- struct ice_aqc_delete_elem *buf;
- u16 i, num_groups_removed = 0;
- u16 buf_size;
+ DEFINE_FLEX(struct ice_aqc_delete_elem, buf, teid, 1);
+ u16 buf_size = __struct_size(buf);
+ u16 num_groups_removed = 0;
int status;
- buf_size = struct_size(buf, teid, num_nodes);
- buf = devm_kzalloc(ice_hw_to_dev(hw), buf_size, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
-
buf->hdr.parent_teid = parent->info.node_teid;
- buf->hdr.num_elems = cpu_to_le16(num_nodes);
- for (i = 0; i < num_nodes; i++)
- buf->teid[i] = cpu_to_le32(node_teids[i]);
+ buf->hdr.num_elems = cpu_to_le16(1);
+ buf->teid[0] = cpu_to_le32(node_teid);
status = ice_aq_delete_sched_elems(hw, 1, buf, buf_size,
&num_groups_removed, NULL);
@@ -259,7 +252,6 @@ ice_sched_remove_elems(struct ice_hw *hw, struct ice_sched_node *parent,
ice_debug(hw, ICE_DBG_SCHED, "remove node failed FW error %d\n",
hw->adminq.sq_last_status);
- devm_kfree(ice_hw_to_dev(hw), buf);
return status;
}
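
DEFINE_FLEX here swaps the heap allocation for an on-stack buffer sized for exactly one flexible-array element, which is why the error paths no longer need devm_kfree(). A rough userspace approximation of that on-stack single-element pattern, assuming nothing about the macro's real implementation beyond what the call site shows:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct delete_elem_hdr {
        uint32_t parent_teid;
        uint16_t num_elems;
    };

    struct delete_elem {
        struct delete_elem_hdr hdr;
        uint32_t teid[]; /* flexible array member */
    };

    int main(void)
    {
        /* On-stack storage sized for the header plus exactly one teid
         * entry; a rough stand-in for what the DEFINE_FLEX() call site
         * above provides, not a copy of the kernel macro.
         */
        union {
            struct delete_elem elem;
            unsigned char bytes[sizeof(struct delete_elem) + sizeof(uint32_t)];
        } storage;
        struct delete_elem *buf = &storage.elem;
        size_t buf_size = sizeof(storage.bytes);

        memset(&storage, 0, sizeof(storage));
        buf->hdr.num_elems = 1;
        buf->teid[0] = 0x1234;

        printf("on-stack buffer size: %zu bytes\n", buf_size);
        return 0;
    }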
@@ -326,7 +318,7 @@ void ice_free_sched_node(struct ice_port_info *pi, struct ice_sched_node *node)
node->info.data.elem_type != ICE_AQC_ELEM_TYPE_LEAF) {
u32 teid = le32_to_cpu(node->info.node_teid);
- ice_sched_remove_elems(hw, node->parent, 1, &teid);
+ ice_sched_remove_elems(hw, node->parent, teid);
}
parent = node->parent;
/* root has no parent */
@@ -437,24 +429,20 @@ ice_aq_cfg_sched_elems(struct ice_hw *hw, u16 elems_req,
}
/**
- * ice_aq_move_sched_elems - move scheduler elements
+ * ice_aq_move_sched_elems - move scheduler elements (just one group)
* @hw: pointer to the HW struct
- * @grps_req: number of groups to move
* @buf: pointer to buffer
* @buf_size: buffer size in bytes
* @grps_movd: returns total number of groups moved
- * @cd: pointer to command details structure or NULL
*
* Move scheduling elements (0x0408)
*/
int
-ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
- struct ice_aqc_move_elem *buf, u16 buf_size,
- u16 *grps_movd, struct ice_sq_cd *cd)
+ice_aq_move_sched_elems(struct ice_hw *hw, struct ice_aqc_move_elem *buf,
+ u16 buf_size, u16 *grps_movd)
{
return ice_aqc_send_sched_elem_cmd(hw, ice_aqc_opc_move_sched_elems,
- grps_req, (void *)buf, buf_size,
- grps_movd, cd);
+ 1, buf, buf_size, grps_movd, NULL);
}
/**
@@ -1193,7 +1181,7 @@ static void ice_rm_dflt_leaf_node(struct ice_port_info *pi)
int status;
/* remove the default leaf node */
- status = ice_sched_remove_elems(pi->hw, node->parent, 1, &teid);
+ status = ice_sched_remove_elems(pi->hw, node->parent, teid);
if (!status)
ice_free_sched_node(pi, node);
}
@@ -2232,12 +2220,12 @@ int
ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent,
u16 num_items, u32 *list)
{
- struct ice_aqc_move_elem *buf;
+ DEFINE_FLEX(struct ice_aqc_move_elem, buf, teid, 1);
+ u16 buf_len = __struct_size(buf);
struct ice_sched_node *node;
u16 i, grps_movd = 0;
struct ice_hw *hw;
int status = 0;
- u16 buf_len;
hw = pi->hw;
@@ -2249,35 +2237,27 @@ ice_sched_move_nodes(struct ice_port_info *pi, struct ice_sched_node *parent,
hw->max_children[parent->tx_sched_layer])
return -ENOSPC;
- buf_len = struct_size(buf, teid, 1);
- buf = kzalloc(buf_len, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
-
for (i = 0; i < num_items; i++) {
node = ice_sched_find_node_by_teid(pi->root, list[i]);
if (!node) {
status = -EINVAL;
- goto move_err_exit;
+ break;
}
buf->hdr.src_parent_teid = node->info.parent_teid;
buf->hdr.dest_parent_teid = parent->info.node_teid;
buf->teid[0] = node->info.node_teid;
buf->hdr.num_elems = cpu_to_le16(1);
- status = ice_aq_move_sched_elems(hw, 1, buf, buf_len,
- &grps_movd, NULL);
+ status = ice_aq_move_sched_elems(hw, buf, buf_len, &grps_movd);
if (status && grps_movd != 1) {
status = -EIO;
- goto move_err_exit;
+ break;
}
/* update the SW DB */
ice_sched_update_parent(parent, node);
}
-move_err_exit:
- kfree(buf);
return status;
}
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.h b/drivers/net/ethernet/intel/ice/ice_sched.h
index 0055d9330c07..1aef05ea5a57 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.h
+++ b/drivers/net/ethernet/intel/ice/ice_sched.h
@@ -161,10 +161,8 @@ ice_sched_add_nodes_to_layer(struct ice_port_info *pi,
u16 *num_nodes_added);
void ice_sched_replay_agg_vsi_preinit(struct ice_hw *hw);
void ice_sched_replay_agg(struct ice_hw *hw);
-int
-ice_aq_move_sched_elems(struct ice_hw *hw, u16 grps_req,
- struct ice_aqc_move_elem *buf, u16 buf_size,
- u16 *grps_movd, struct ice_sq_cd *cd);
+int ice_aq_move_sched_elems(struct ice_hw *hw, struct ice_aqc_move_elem *buf,
+ u16 buf_size, u16 *grps_movd);
int ice_replay_vsi_agg(struct ice_hw *hw, u16 vsi_handle);
int ice_sched_replay_q_bw(struct ice_port_info *pi, struct ice_q_ctx *q_ctx);
#endif /* _ICE_SCHED_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index 31314e7540f8..2a5e6616cc0a 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -64,7 +64,7 @@ static void ice_free_vf_res(struct ice_vf *vf)
vf->num_mac = 0;
}
- last_vector_idx = vf->first_vector_idx + pf->vfs.num_msix_per - 1;
+ last_vector_idx = vf->first_vector_idx + vf->num_msix - 1;
/* clear VF MDD event information */
memset(&vf->mdd_tx_events, 0, sizeof(vf->mdd_tx_events));
@@ -102,7 +102,7 @@ static void ice_dis_vf_mappings(struct ice_vf *vf)
wr32(hw, VPINT_ALLOC_PCI(vf->vf_id), 0);
first = vf->first_vector_idx;
- last = first + pf->vfs.num_msix_per - 1;
+ last = first + vf->num_msix - 1;
for (v = first; v <= last; v++) {
u32 reg;
@@ -138,6 +138,8 @@ static int ice_sriov_free_msix_res(struct ice_pf *pf)
if (!pf)
return -EINVAL;
+ bitmap_free(pf->sriov_irq_bm);
+ pf->sriov_irq_size = 0;
pf->sriov_base_vector = 0;
return 0;
@@ -244,22 +246,6 @@ static struct ice_vsi *ice_vf_vsi_setup(struct ice_vf *vf)
return vsi;
}
-/**
- * ice_calc_vf_first_vector_idx - Calculate MSIX vector index in the PF space
- * @pf: pointer to PF structure
- * @vf: pointer to VF that the first MSIX vector index is being calculated for
- *
- * This returns the first MSIX vector index in PF space that is used by this VF.
- * This index is used when accessing PF relative registers such as
- * GLINT_VECT2FUNC and GLINT_DYN_CTL.
- * This will always be the OICR index in the AVF driver so any functionality
- * using vf->first_vector_idx for queue configuration will have to increment by
- * 1 to avoid meddling with the OICR index.
- */
-static int ice_calc_vf_first_vector_idx(struct ice_pf *pf, struct ice_vf *vf)
-{
- return pf->sriov_base_vector + vf->vf_id * pf->vfs.num_msix_per;
-}
/**
* ice_ena_vf_msix_mappings - enable VF MSIX mappings in hardware
@@ -280,12 +266,12 @@ static void ice_ena_vf_msix_mappings(struct ice_vf *vf)
hw = &pf->hw;
pf_based_first_msix = vf->first_vector_idx;
- pf_based_last_msix = (pf_based_first_msix + pf->vfs.num_msix_per) - 1;
+ pf_based_last_msix = (pf_based_first_msix + vf->num_msix) - 1;
device_based_first_msix = pf_based_first_msix +
pf->hw.func_caps.common_cap.msix_vector_first_id;
device_based_last_msix =
- (device_based_first_msix + pf->vfs.num_msix_per) - 1;
+ (device_based_first_msix + vf->num_msix) - 1;
device_based_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
reg = (((device_based_first_msix << VPINT_ALLOC_FIRST_S) &
@@ -527,6 +513,52 @@ static int ice_set_per_vf_res(struct ice_pf *pf, u16 num_vfs)
}
/**
+ * ice_sriov_get_irqs - get irqs for the SR-IOV use case
+ * @pf: pointer to PF structure
+ * @needed: number of irqs to get
+ *
+ * This returns the first MSI-X vector index in PF space that is used by this
+ * VF. This index is used when accessing PF relative registers such as
+ * GLINT_VECT2FUNC and GLINT_DYN_CTL.
+ * This will always be the OICR index in the AVF driver, so any functionality
+ * using vf->first_vector_idx for queue configuration will have to increment
+ * by 1 to avoid meddling with the OICR index.
+ *
+ * Only SR-IOV specific vectors are tracked in sriov_irq_bm. SR-IOV vectors
+ * are allocated from the end of the global irq index: the first bit in
+ * sriov_irq_bm corresponds to the last irq index, and so on. This simplifies
+ * extending the SR-IOV vectors. They are always located between
+ * sriov_base_vector and the last irq index, and sriov_base_vector can move
+ * while the pool grows or shrinks.
+ */
+static int ice_sriov_get_irqs(struct ice_pf *pf, u16 needed)
+{
+ int res = bitmap_find_next_zero_area(pf->sriov_irq_bm,
+ pf->sriov_irq_size, 0, needed, 0);
+ /* conversion from number in bitmap to global irq index */
+ int index = pf->sriov_irq_size - res - needed;
+
+ if (res >= pf->sriov_irq_size || index < pf->sriov_base_vector)
+ return -ENOENT;
+
+ bitmap_set(pf->sriov_irq_bm, res, needed);
+ return index;
+}
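
The bitmap bookkeeping runs backwards: bit 0 of sriov_irq_bm stands for the last global irq index, so a free area found at bitmap position res of length needed maps to global index size - res - needed, and freeing inverts that with bm_i = size - first_vector_idx - num_msix. A tiny standalone check of those conversions, with made-up sizes and no kernel bitmap API:

    #include <stdio.h>

    /* Mirror of the index conversions used above, with a made-up size. */
    #define IRQ_SIZE 64 /* stand-in for pf->sriov_irq_size */

    static int bitmap_pos_to_irq(int res, int needed)
    {
        return IRQ_SIZE - res - needed; /* allocation path */
    }

    static int irq_to_bitmap_pos(int first_vector_idx, int num_msix)
    {
        return IRQ_SIZE - first_vector_idx - num_msix; /* free path */
    }

    int main(void)
    {
        int needed = 4;
        int res = 0; /* first free area found in the bitmap */
        int first = bitmap_pos_to_irq(res, needed);

        printf("allocated irq index: %d\n", first); /* 60 for a 64-entry map */
        printf("round-trip bitmap pos: %d\n", irq_to_bitmap_pos(first, needed));
        return 0;
    }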
+
+/**
+ * ice_sriov_free_irqs - free irqs used by the VF
+ * @pf: pointer to PF structure
+ * @vf: pointer to VF structure
+ */
+static void ice_sriov_free_irqs(struct ice_pf *pf, struct ice_vf *vf)
+{
+ /* Move back from first vector index to first index in bitmap */
+ int bm_i = pf->sriov_irq_size - vf->first_vector_idx - vf->num_msix;
+
+ bitmap_clear(pf->sriov_irq_bm, bm_i, vf->num_msix);
+ vf->first_vector_idx = 0;
+}
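The bitmap above is indexed in reverse, so a worked example may help; the numbers below are illustrative only and not taken from any specific device. With sriov_irq_size = 256, a request for 16 vectors that lands at bitmap offset 0 resolves to global index 256 - 0 - 16 = 240, i.e. the block sits at the very end of the global irq space, and freeing it converts back with the inverse calculation:

    /* Standalone illustration (userspace C) of the index conversion used by
     * ice_sriov_get_irqs() and ice_sriov_free_irqs(); all numbers are assumed
     * for the example.
     */
    #include <stdio.h>

    int main(void)
    {
    	unsigned int sriov_irq_size = 256;  /* assumed size of the vector space */
    	unsigned int res = 0;               /* offset found in the bitmap */
    	unsigned int needed = 16;           /* vectors requested */

    	/* conversion done on allocation */
    	unsigned int first_vector_idx = sriov_irq_size - res - needed;
    	/* inverse conversion done on free */
    	unsigned int bm_i = sriov_irq_size - first_vector_idx - needed;

    	printf("global index %u, bitmap index %u\n", first_vector_idx, bm_i);
    	return 0;   /* prints: global index 240, bitmap index 0 */
    }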
+
+/**
* ice_init_vf_vsi_res - initialize/setup VF VSI resources
* @vf: VF to initialize/setup the VSI for
*
@@ -539,7 +571,9 @@ static int ice_init_vf_vsi_res(struct ice_vf *vf)
struct ice_vsi *vsi;
int err;
- vf->first_vector_idx = ice_calc_vf_first_vector_idx(pf, vf);
+ vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
+ if (vf->first_vector_idx < 0)
+ return -ENOMEM;
vsi = ice_vf_vsi_setup(vf);
if (!vsi)
@@ -789,14 +823,19 @@ static const struct ice_vf_ops ice_sriov_vf_ops = {
*/
static int ice_create_vf_entries(struct ice_pf *pf, u16 num_vfs)
{
+ struct pci_dev *pdev = pf->pdev;
struct ice_vfs *vfs = &pf->vfs;
+ struct pci_dev *vfdev = NULL;
struct ice_vf *vf;
- u16 vf_id;
- int err;
+ u16 vf_pdev_id;
+ int err, pos;
lockdep_assert_held(&vfs->table_lock);
- for (vf_id = 0; vf_id < num_vfs; vf_id++) {
+ pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV);
+ pci_read_config_word(pdev, pos + PCI_SRIOV_VF_DID, &vf_pdev_id);
+
+ for (u16 vf_id = 0; vf_id < num_vfs; vf_id++) {
vf = kzalloc(sizeof(*vf), GFP_KERNEL);
if (!vf) {
err = -ENOMEM;
@@ -812,11 +851,28 @@ static int ice_create_vf_entries(struct ice_pf *pf, u16 num_vfs)
ice_initialize_vf_entry(vf);
+ do {
+ vfdev = pci_get_device(pdev->vendor, vf_pdev_id, vfdev);
+ } while (vfdev && vfdev->physfn != pdev);
+ vf->vfdev = vfdev;
vf->vf_sw_id = pf->first_sw;
+ pci_dev_get(vfdev);
+
+ /* set default number of MSI-X */
+ vf->num_msix = pf->vfs.num_msix_per;
+ vf->num_vf_qs = pf->vfs.num_qps_per;
+ ice_vc_set_default_allowlist(vf);
+
hash_add_rcu(vfs->table, &vf->entry, vf_id);
}
+ /* pci_get_device() drops the reference of the previous vfdev on each
+ * call, but the vfdev returned by the last iteration is never released
+ * that way, so put it manually here to balance the reference taken
+ * inside the loop.
+ */
+ pci_dev_put(vfdev);
+
return 0;
err_free_entries:
@@ -831,10 +887,16 @@ err_free_entries:
*/
static int ice_ena_vfs(struct ice_pf *pf, u16 num_vfs)
{
+ int total_vectors = pf->hw.func_caps.common_cap.num_msix_vectors;
struct device *dev = ice_pf_to_dev(pf);
struct ice_hw *hw = &pf->hw;
int ret;
+ pf->sriov_irq_bm = bitmap_zalloc(total_vectors, GFP_KERNEL);
+ if (!pf->sriov_irq_bm)
+ return -ENOMEM;
+ pf->sriov_irq_size = total_vectors;
+
/* Disable global interrupt 0 so we don't try to handle the VFLR. */
wr32(hw, GLINT_DYN_CTL(pf->oicr_irq.index),
ICE_ITR_NONE << GLINT_DYN_CTL_ITR_INDX_S);
@@ -893,6 +955,7 @@ err_unroll_intr:
/* rearm interrupts here */
ice_irq_dynamic_ena(hw, NULL, NULL);
clear_bit(ICE_OICR_INTR_DIS, pf->state);
+ bitmap_free(pf->sriov_irq_bm);
return ret;
}
@@ -957,6 +1020,175 @@ static int ice_check_sriov_allowed(struct ice_pf *pf)
}
/**
+ * ice_sriov_get_vf_total_msix - return number of MSI-X used by VFs
+ * @pdev: pointer to pci_dev struct
+ *
+ * The function is called via sysfs ops
+ */
+u32 ice_sriov_get_vf_total_msix(struct pci_dev *pdev)
+{
+ struct ice_pf *pf = pci_get_drvdata(pdev);
+
+ return pf->sriov_irq_size - ice_get_max_used_msix_vector(pf);
+}
+
+static int ice_sriov_move_base_vector(struct ice_pf *pf, int move)
+{
+ if (pf->sriov_base_vector - move < ice_get_max_used_msix_vector(pf))
+ return -ENOMEM;
+
+ pf->sriov_base_vector -= move;
+ return 0;
+}
+
+static void ice_sriov_remap_vectors(struct ice_pf *pf, u16 restricted_id)
+{
+ u16 vf_ids[ICE_MAX_SRIOV_VFS];
+ struct ice_vf *tmp_vf;
+ int to_remap = 0, bkt;
+
+ /* For better irqs usage try to remap irqs of VFs
+ * that aren't running yet
+ */
+ ice_for_each_vf(pf, bkt, tmp_vf) {
+ /* skip VF which is changing the number of MSI-X */
+ if (restricted_id == tmp_vf->vf_id ||
+ test_bit(ICE_VF_STATE_ACTIVE, tmp_vf->vf_states))
+ continue;
+
+ ice_dis_vf_mappings(tmp_vf);
+ ice_sriov_free_irqs(pf, tmp_vf);
+
+ vf_ids[to_remap] = tmp_vf->vf_id;
+ to_remap += 1;
+ }
+
+ for (int i = 0; i < to_remap; i++) {
+ tmp_vf = ice_get_vf_by_id(pf, vf_ids[i]);
+ if (!tmp_vf)
+ continue;
+
+ tmp_vf->first_vector_idx =
+ ice_sriov_get_irqs(pf, tmp_vf->num_msix);
+ /* there is no need to rebuild VSI as we are only changing the
+ * vector indexes not amount of MSI-X or queues
+ */
+ ice_ena_vf_mappings(tmp_vf);
+ ice_put_vf(tmp_vf);
+ }
+}
+
+/**
+ * ice_sriov_set_msix_vec_count
+ * @vf_dev: pointer to pci_dev struct of VF device
+ * @msix_vec_count: new value for MSI-X amount on this VF
+ *
+ * Set requested MSI-X, queues and registers for @vf_dev.
+ *
+ * First do some sanity checks, e.g. whether any VFs exist and whether the new
+ * value is valid. Then disable the old mapping (MSI-X and queue registers),
+ * change the MSI-X and queue counts, rebuild the VSI and enable the new
+ * mapping.
+ *
+ * If possible (i.e. no driver is bound to the VF), also remap the other VFs to
+ * keep the irq register usage contiguous.
+ */
+int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
+{
+ struct pci_dev *pdev = pci_physfn(vf_dev);
+ struct ice_pf *pf = pci_get_drvdata(pdev);
+ u16 prev_msix, prev_queues, queues;
+ bool needs_rebuild = false;
+ struct ice_vf *vf;
+ int id;
+
+ if (!ice_get_num_vfs(pf))
+ return -ENOENT;
+
+ if (!msix_vec_count)
+ return 0;
+
+ queues = msix_vec_count;
+ /* add 1 MSI-X for OICR */
+ msix_vec_count += 1;
+
+ if (queues > min(ice_get_avail_txq_count(pf),
+ ice_get_avail_rxq_count(pf)))
+ return -EINVAL;
+
+ if (msix_vec_count < ICE_MIN_INTR_PER_VF)
+ return -EINVAL;
+
+ /* Translate the PCI VF devfn into the VF ID */
+ for (id = 0; id < pci_num_vf(pdev); id++) {
+ if (vf_dev->devfn == pci_iov_virtfn_devfn(pdev, id))
+ break;
+ }
+
+ if (id == pci_num_vf(pdev))
+ return -ENOENT;
+
+ vf = ice_get_vf_by_id(pf, id);
+
+ if (!vf)
+ return -ENOENT;
+
+ prev_msix = vf->num_msix;
+ prev_queues = vf->num_vf_qs;
+
+ if (ice_sriov_move_base_vector(pf, msix_vec_count - prev_msix)) {
+ ice_put_vf(vf);
+ return -ENOSPC;
+ }
+
+ ice_dis_vf_mappings(vf);
+ ice_sriov_free_irqs(pf, vf);
+
+ /* Remap all VFs besides the one that is being configured now */
+ ice_sriov_remap_vectors(pf, vf->vf_id);
+
+ vf->num_msix = msix_vec_count;
+ vf->num_vf_qs = queues;
+ vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
+ if (vf->first_vector_idx < 0)
+ goto unroll;
+
+ ice_vf_vsi_release(vf);
+ if (vf->vf_ops->create_vsi(vf)) {
+ /* Try to rebuild with previous values */
+ needs_rebuild = true;
+ goto unroll;
+ }
+
+ dev_info(ice_pf_to_dev(pf),
+ "Changing VF %d resources to %d vectors and %d queues\n",
+ vf->vf_id, vf->num_msix, vf->num_vf_qs);
+
+ ice_ena_vf_mappings(vf);
+ ice_put_vf(vf);
+
+ return 0;
+
+unroll:
+ dev_info(ice_pf_to_dev(pf),
+ "Can't set %d vectors on VF %d, falling back to %d\n",
+ vf->num_msix, vf->vf_id, prev_msix);
+
+ vf->num_msix = prev_msix;
+ vf->num_vf_qs = prev_queues;
+ vf->first_vector_idx = ice_sriov_get_irqs(pf, vf->num_msix);
+ if (vf->first_vector_idx < 0)
+ return -EINVAL;
+
+ if (needs_rebuild)
+ vf->vf_ops->create_vsi(vf);
+
+ ice_ena_vf_mappings(vf);
+ ice_put_vf(vf);
+
+ return -EINVAL;
+}
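These two entry points are designed to back the PCI core's per-VF MSI-X sysfs attributes (sriov_vf_total_msix on the PF and sriov_vf_msix_count on each VF). The ice_main.c hunk that registers them is not part of this excerpt, so the wiring below is a sketch of the typical hookup rather than the exact change:

    /* Sketch only: hooking the callbacks above into the PF driver's
     * struct pci_driver so the PCI core exposes the sriov_vf_total_msix /
     * sriov_vf_msix_count sysfs attributes. The real registration in
     * ice_main.c is not shown in this excerpt.
     */
    #include <linux/pci.h>

    static struct pci_driver ice_driver_sketch = {
    	.name                     = KBUILD_MODNAME,
    	.sriov_configure          = ice_sriov_configure,
    	.sriov_get_vf_total_msix  = ice_sriov_get_vf_total_msix,
    	.sriov_set_msix_vec_count = ice_sriov_set_msix_vec_count,
    };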
+
+/**
* ice_sriov_configure - Enable or change number of VFs via sysfs
* @pdev: pointer to a pci_dev structure
* @num_vfs: number of VFs to allocate or 0 to free VFs
@@ -1709,31 +1941,16 @@ void ice_print_vfs_mdd_events(struct ice_pf *pf)
/**
* ice_restore_all_vfs_msi_state - restore VF MSI state after PF FLR
- * @pdev: pointer to a pci_dev structure
+ * @pf: pointer to the PF structure
*
* Called when recovering from a PF FLR to restore interrupt capability to
* the VFs.
*/
-void ice_restore_all_vfs_msi_state(struct pci_dev *pdev)
+void ice_restore_all_vfs_msi_state(struct ice_pf *pf)
{
- u16 vf_id;
- int pos;
-
- if (!pci_num_vf(pdev))
- return;
+ struct ice_vf *vf;
+ u32 bkt;
- pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_SRIOV);
- if (pos) {
- struct pci_dev *vfdev;
-
- pci_read_config_word(pdev, pos + PCI_SRIOV_VF_DID,
- &vf_id);
- vfdev = pci_get_device(pdev->vendor, vf_id, NULL);
- while (vfdev) {
- if (vfdev->is_virtfn && vfdev->physfn == pdev)
- pci_restore_msi_state(vfdev);
- vfdev = pci_get_device(pdev->vendor, vf_id,
- vfdev);
- }
- }
+ ice_for_each_vf(pf, bkt, vf)
+ pci_restore_msi_state(vf->vfdev);
}
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.h b/drivers/net/ethernet/intel/ice/ice_sriov.h
index 346cb2666f3a..8488df38b586 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.h
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.h
@@ -33,7 +33,7 @@ int
ice_get_vf_cfg(struct net_device *netdev, int vf_id, struct ifla_vf_info *ivi);
void ice_free_vfs(struct ice_pf *pf);
-void ice_restore_all_vfs_msi_state(struct pci_dev *pdev);
+void ice_restore_all_vfs_msi_state(struct ice_pf *pf);
int
ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos,
@@ -60,6 +60,8 @@ void ice_print_vfs_mdd_events(struct ice_pf *pf);
void ice_print_vf_rx_mdd_event(struct ice_vf *vf);
bool
ice_vc_validate_pattern(struct ice_vf *vf, struct virtchnl_proto_hdrs *proto);
+u32 ice_sriov_get_vf_total_msix(struct pci_dev *pdev);
+int ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count);
#else /* CONFIG_PCI_IOV */
static inline void ice_process_vflr_event(struct ice_pf *pf) { }
static inline void ice_free_vfs(struct ice_pf *pf) { }
@@ -67,7 +69,7 @@ static inline
void ice_vf_lan_overflow_event(struct ice_pf *pf, struct ice_rq_event_info *event) { }
static inline void ice_print_vfs_mdd_events(struct ice_pf *pf) { }
static inline void ice_print_vf_rx_mdd_event(struct ice_vf *vf) { }
-static inline void ice_restore_all_vfs_msi_state(struct pci_dev *pdev) { }
+static inline void ice_restore_all_vfs_msi_state(struct ice_pf *pf) { }
static inline int
ice_sriov_configure(struct pci_dev __always_unused *pdev,
@@ -142,5 +144,16 @@ ice_get_vf_stats(struct net_device __always_unused *netdev,
{
return -EOPNOTSUPP;
}
+
+static inline u32 ice_sriov_get_vf_total_msix(struct pci_dev *pdev)
+{
+ return 0;
+}
+
+static inline int
+ice_sriov_set_msix_vec_count(struct pci_dev *vf_dev, int msix_vec_count)
+{
+ return -EOPNOTSUPP;
+}
#endif /* CONFIG_PCI_IOV */
#endif /* _ICE_SRIOV_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
index 2f77b684ff76..ee19f3aa3d19 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -1812,15 +1812,11 @@ ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
enum ice_sw_lkup_type lkup_type,
enum ice_adminq_opc opc)
{
- struct ice_aqc_alloc_free_res_elem *sw_buf;
+ DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, sw_buf, elem, 1);
+ u16 buf_len = __struct_size(sw_buf);
struct ice_aqc_res_elem *vsi_ele;
- u16 buf_len;
int status;
- buf_len = struct_size(sw_buf, elem, 1);
- sw_buf = devm_kzalloc(ice_hw_to_dev(hw), buf_len, GFP_KERNEL);
- if (!sw_buf)
- return -ENOMEM;
sw_buf->num_elems = cpu_to_le16(1);
if (lkup_type == ICE_SW_LKUP_MAC ||
@@ -1840,8 +1836,7 @@ ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
sw_buf->res_type =
cpu_to_le16(ICE_AQC_RES_TYPE_VSI_LIST_PRUNE);
} else {
- status = -EINVAL;
- goto ice_aq_alloc_free_vsi_list_exit;
+ return -EINVAL;
}
if (opc == ice_aqc_opc_free_res)
@@ -1849,16 +1844,14 @@ ice_aq_alloc_free_vsi_list(struct ice_hw *hw, u16 *vsi_list_id,
status = ice_aq_alloc_free_res(hw, sw_buf, buf_len, opc);
if (status)
- goto ice_aq_alloc_free_vsi_list_exit;
+ return status;
if (opc == ice_aqc_opc_alloc_res) {
vsi_ele = &sw_buf->elem[0];
*vsi_list_id = le16_to_cpu(vsi_ele->e.sw_resp);
}
-ice_aq_alloc_free_vsi_list_exit:
- devm_kfree(ice_hw_to_dev(hw), sw_buf);
- return status;
+ return 0;
}
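All of the conversions in this file follow the same pattern: a one-element flexible-array buffer that used to be kzalloc()'d is now declared on the stack with DEFINE_FLEX() from linux/overflow.h and sized with __struct_size() (provided by the core fortify/string helpers), which is what lets the kfree()-based exit labels disappear. A minimal sketch of the pattern with a made-up structure, not a driver type:

    /* Sketch of the DEFINE_FLEX()/__struct_size() pattern used above, with a
     * hypothetical structure instead of an admin-queue type.
     */
    #include <linux/overflow.h>	/* DEFINE_FLEX() */
    #include <linux/types.h>

    struct demo_buf {
    	u16 num_elems;
    	u16 elem[];		/* flexible array member */
    };

    static u16 demo_fill(void)
    {
    	/* on-stack, zero-initialized buffer with room for one elem[] entry */
    	DEFINE_FLEX(struct demo_buf, buf, elem, 1);
    	u16 buf_len = __struct_size(buf);

    	buf->num_elems = 1;
    	buf->elem[0] = 0xabcd;

    	return buf_len;		/* no kfree() needed on any exit path */
    }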
/**
@@ -2088,15 +2081,10 @@ ice_aq_get_recipe_to_profile(struct ice_hw *hw, u32 profile_id, u8 *r_bitmap,
*/
int ice_alloc_recipe(struct ice_hw *hw, u16 *rid)
{
- struct ice_aqc_alloc_free_res_elem *sw_buf;
- u16 buf_len;
+ DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, sw_buf, elem, 1);
+ u16 buf_len = __struct_size(sw_buf);
int status;
- buf_len = struct_size(sw_buf, elem, 1);
- sw_buf = kzalloc(buf_len, GFP_KERNEL);
- if (!sw_buf)
- return -ENOMEM;
-
sw_buf->num_elems = cpu_to_le16(1);
sw_buf->res_type = cpu_to_le16((ICE_AQC_RES_TYPE_RECIPE <<
ICE_AQC_RES_TYPE_S) |
@@ -2105,7 +2093,6 @@ int ice_alloc_recipe(struct ice_hw *hw, u16 *rid)
ice_aqc_opc_alloc_res);
if (!status)
*rid = le16_to_cpu(sw_buf->elem[0].e.sw_resp);
- kfree(sw_buf);
return status;
}
@@ -4434,28 +4421,19 @@ int
ice_alloc_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
u16 *counter_id)
{
- struct ice_aqc_alloc_free_res_elem *buf;
- u16 buf_len;
+ DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, buf, elem, 1);
+ u16 buf_len = __struct_size(buf);
int status;
- /* Allocate resource */
- buf_len = struct_size(buf, elem, 1);
- buf = kzalloc(buf_len, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
-
buf->num_elems = cpu_to_le16(num_items);
buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) &
ICE_AQC_RES_TYPE_M) | alloc_shared);
status = ice_aq_alloc_free_res(hw, buf, buf_len, ice_aqc_opc_alloc_res);
if (status)
- goto exit;
+ return status;
*counter_id = le16_to_cpu(buf->elem[0].e.sw_resp);
-
-exit:
- kfree(buf);
return status;
}
@@ -4471,16 +4449,10 @@ int
ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
u16 counter_id)
{
- struct ice_aqc_alloc_free_res_elem *buf;
- u16 buf_len;
+ DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, buf, elem, 1);
+ u16 buf_len = __struct_size(buf);
int status;
- /* Free resource */
- buf_len = struct_size(buf, elem, 1);
- buf = kzalloc(buf_len, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
-
buf->num_elems = cpu_to_le16(num_items);
buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) &
ICE_AQC_RES_TYPE_M) | alloc_shared);
@@ -4490,7 +4462,6 @@ ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
if (status)
ice_debug(hw, ICE_DBG_SW, "counter resource could not be freed\n");
- kfree(buf);
return status;
}
@@ -4508,15 +4479,10 @@ ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
*/
int ice_share_res(struct ice_hw *hw, u16 type, u8 shared, u16 res_id)
{
- struct ice_aqc_alloc_free_res_elem *buf;
- u16 buf_len;
+ DEFINE_FLEX(struct ice_aqc_alloc_free_res_elem, buf, elem, 1);
+ u16 buf_len = __struct_size(buf);
int status;
- buf_len = struct_size(buf, elem, 1);
- buf = kzalloc(buf_len, GFP_KERNEL);
- if (!buf)
- return -ENOMEM;
-
buf->num_elems = cpu_to_le16(1);
if (shared)
buf->res_type = cpu_to_le16(((type << ICE_AQC_RES_TYPE_S) &
@@ -4534,7 +4500,6 @@ int ice_share_res(struct ice_hw *hw, u16 type, u8 shared, u16 res_id)
ice_debug(hw, ICE_DBG_SW, "Could not set resource type %u id %u to %s\n",
type, res_id, shared ? "SHARED" : "DEDICATED");
- kfree(buf);
return status;
}
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
index c8322fb6f2b3..7e06373e14d9 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.c
@@ -450,7 +450,7 @@ void ice_finalize_xdp_rx(struct ice_tx_ring *xdp_ring, unsigned int xdp_res,
struct ice_tx_buf *tx_buf = &xdp_ring->tx_buf[first_idx];
if (xdp_res & ICE_XDP_REDIR)
- xdp_do_flush_map();
+ xdp_do_flush();
if (xdp_res & ICE_XDP_TX) {
if (static_branch_unlikely(&ice_xdp_locking_key))
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index 5e353b0cbe6f..a18ca0ff879f 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: GPL-2.0 */
-/* Copyright (c) 2018, Intel Corporation. */
+/* Copyright (c) 2018-2023, Intel Corporation. */
#ifndef _ICE_TYPE_H_
#define _ICE_TYPE_H_
@@ -129,6 +129,7 @@ enum ice_set_fc_aq_failures {
enum ice_mac_type {
ICE_MAC_UNKNOWN = 0,
ICE_MAC_E810,
+ ICE_MAC_E830,
ICE_MAC_GENERIC,
};
@@ -822,6 +823,13 @@ struct ice_mbx_data {
u16 async_watermark_val;
};
+/* PHY model */
+enum ice_phy_model {
+ ICE_PHY_UNSUP = -1,
+ ICE_PHY_E810 = 1,
+ ICE_PHY_E822,
+};
+
/* Port hardware description */
struct ice_hw {
u8 __iomem *hw_addr;
@@ -843,6 +851,7 @@ struct ice_hw {
u8 revision_id;
u8 pf_id; /* device profile info */
+ enum ice_phy_model phy_model;
u16 max_burst_size; /* driver sets this value */
@@ -901,17 +910,20 @@ struct ice_hw {
/* INTRL granularity in 1 us */
u8 intrl_gran;
-#define ICE_PHY_PER_NAC 1
-#define ICE_MAX_QUAD 2
-#define ICE_NUM_QUAD_TYPE 2
-#define ICE_PORTS_PER_QUAD 4
-#define ICE_PHY_0_LAST_QUAD 1
-#define ICE_PORTS_PER_PHY 8
-#define ICE_NUM_EXTERNAL_PORTS ICE_PORTS_PER_PHY
+#define ICE_PHY_PER_NAC_E822 1
+#define ICE_MAX_QUAD 2
+#define ICE_QUADS_PER_PHY_E822 2
+#define ICE_PORTS_PER_PHY_E822 8
+#define ICE_PORTS_PER_QUAD 4
+#define ICE_PORTS_PER_PHY_E810 4
+#define ICE_NUM_EXTERNAL_PORTS (ICE_MAX_QUAD * ICE_PORTS_PER_QUAD)
/* Active package version (currently active) */
struct ice_pkg_ver active_pkg_ver;
+ u32 pkg_seg_id;
+ u32 pkg_sign_type;
u32 active_track_id;
+ u8 pkg_has_signing_seg:1;
u8 active_pkg_name[ICE_PKG_NAME_SIZE];
u8 active_pkg_in_nvm;
@@ -965,6 +977,7 @@ struct ice_hw {
DECLARE_BITMAP(hw_ptype, ICE_FLOW_PTYPE_MAX);
u8 dvm_ena;
u16 io_expander_handle;
+ u8 cgu_part_number;
};
/* Statistics collected by each port, VSI, VEB, and S-channel */
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
index 24e4f4d897b6..aca1f2ea5034 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
@@ -56,6 +56,8 @@ static void ice_release_vf(struct kref *ref)
{
struct ice_vf *vf = container_of(ref, struct ice_vf, refcnt);
+ pci_dev_put(vf->vfdev);
+
vf->vf_ops->free(vf);
}
diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.h b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
index 48fea6fa0362..93c774f2f437 100644
--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.h
@@ -72,7 +72,7 @@ struct ice_vfs {
struct mutex table_lock; /* Lock for protecting the hash table */
u16 num_supported; /* max supported VFs on this PF */
u16 num_qps_per; /* number of queue pairs per VF */
- u16 num_msix_per; /* number of MSI-X vectors per VF */
+ u16 num_msix_per; /* default MSI-X vectors per VF */
unsigned long last_printed_mdd_jiffies; /* MDD message rate limit */
};
@@ -82,7 +82,7 @@ struct ice_vf {
struct rcu_head rcu;
struct kref refcnt;
struct ice_pf *pf;
-
+ struct pci_dev *vfdev;
/* Used during virtchnl message handling and NDO ops against the VF
* that will trigger a VFR
*/
@@ -123,6 +123,9 @@ struct ice_vf {
u8 num_req_qs; /* num of queue pairs requested by VF */
u16 num_mac;
u16 num_vf_qs; /* num of queue configured per VF */
+ u8 vlan_strip_ena; /* Outer and Inner VLAN strip enable */
+#define ICE_INNER_VLAN_STRIP_ENA BIT(0)
+#define ICE_OUTER_VLAN_STRIP_ENA BIT(1)
struct ice_mdd_vf_events mdd_rx_events;
struct ice_mdd_vf_events mdd_tx_events;
DECLARE_BITMAP(opcodes_allowlist, VIRTCHNL_OP_MAX);
@@ -133,6 +136,8 @@ struct ice_vf {
/* devlink port data */
struct devlink_port devlink_port;
+
+ u16 num_msix; /* num of MSI-X configured on this VF */
};
/* Flags for controlling behavior of ice_reset_vf */
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
index db97353efd06..cdf17b1e2f25 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
@@ -486,6 +486,9 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_REQ_QUEUES)
vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_REQ_QUEUES;
+ if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_CRC)
+ vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_CRC;
+
if (vf->driver_caps & VIRTCHNL_VF_CAP_ADV_LINK_SPEED)
vfres->vf_cap_flags |= VIRTCHNL_VF_CAP_ADV_LINK_SPEED;
@@ -498,7 +501,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
vfres->num_vsis = 1;
/* Tx and Rx queue are equal for VF */
vfres->num_queue_pairs = vsi->num_txq;
- vfres->max_vectors = vf->pf->vfs.num_msix_per;
+ vfres->max_vectors = vf->num_msix;
vfres->rss_key_size = ICE_VSIQF_HKEY_ARRAY_SIZE;
vfres->rss_lut_size = ICE_LUT_VSI_SIZE;
vfres->max_mtu = ice_vc_get_max_frame_size(vf);
@@ -1621,6 +1624,15 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
}
for (i = 0; i < qci->num_queue_pairs; i++) {
+ if (!qci->qpair[i].rxq.crc_disable)
+ continue;
+
+ if (!(vf->driver_caps & VIRTCHNL_VF_OFFLOAD_CRC) ||
+ vf->vlan_strip_ena)
+ goto error_param;
+ }
+
+ for (i = 0; i < qci->num_queue_pairs; i++) {
qpi = &qci->qpair[i];
if (qpi->txq.vsi_id != qci->vsi_id ||
qpi->rxq.vsi_id != qci->vsi_id ||
@@ -1666,6 +1678,13 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
vsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr;
vsi->rx_rings[i]->count = qpi->rxq.ring_len;
+ if (qpi->rxq.crc_disable)
+ vsi->rx_rings[q_idx]->flags |=
+ ICE_RX_FLAGS_CRC_STRIP_DIS;
+ else
+ vsi->rx_rings[q_idx]->flags &=
+ ~ICE_RX_FLAGS_CRC_STRIP_DIS;
+
if (qpi->rxq.databuffer_size != 0 &&
(qpi->rxq.databuffer_size > ((16 * 1024) - 128) ||
qpi->rxq.databuffer_size < 1024))
@@ -2411,6 +2430,21 @@ static int ice_vc_remove_vlan_msg(struct ice_vf *vf, u8 *msg)
}
/**
+ * ice_vsi_is_rxq_crc_strip_dis - check if Rx queue CRC strip is disabled or not
+ * @vsi: pointer to the VF VSI info
+ */
+static bool ice_vsi_is_rxq_crc_strip_dis(struct ice_vsi *vsi)
+{
+ unsigned int i;
+
+ ice_for_each_alloc_rxq(vsi, i)
+ if (vsi->rx_rings[i]->flags & ICE_RX_FLAGS_CRC_STRIP_DIS)
+ return true;
+
+ return false;
+}
+
+/**
* ice_vc_ena_vlan_stripping
* @vf: pointer to the VF info
*
@@ -2439,6 +2473,8 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf)
if (vsi->inner_vlan_ops.ena_stripping(vsi, ETH_P_8021Q))
v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ else
+ vf->vlan_strip_ena |= ICE_INNER_VLAN_STRIP_ENA;
error_param:
return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING,
@@ -2474,6 +2510,8 @@ static int ice_vc_dis_vlan_stripping(struct ice_vf *vf)
if (vsi->inner_vlan_ops.dis_stripping(vsi))
v_ret = VIRTCHNL_STATUS_ERR_PARAM;
+ else
+ vf->vlan_strip_ena &= ~ICE_INNER_VLAN_STRIP_ENA;
error_param:
return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING,
@@ -2651,6 +2689,8 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf)
{
struct ice_vsi *vsi = ice_get_vf_vsi(vf);
+ vf->vlan_strip_ena = 0;
+
if (!vsi)
return -EINVAL;
@@ -2660,10 +2700,16 @@ static int ice_vf_init_vlan_stripping(struct ice_vf *vf)
if (ice_vf_is_port_vlan_ena(vf) && !ice_is_dvm_ena(&vsi->back->hw))
return 0;
- if (ice_vf_vlan_offload_ena(vf->driver_caps))
- return vsi->inner_vlan_ops.ena_stripping(vsi, ETH_P_8021Q);
- else
- return vsi->inner_vlan_ops.dis_stripping(vsi);
+ if (ice_vf_vlan_offload_ena(vf->driver_caps)) {
+ int err;
+
+ err = vsi->inner_vlan_ops.ena_stripping(vsi, ETH_P_8021Q);
+ if (!err)
+ vf->vlan_strip_ena |= ICE_INNER_VLAN_STRIP_ENA;
+ return err;
+ }
+
+ return vsi->inner_vlan_ops.dis_stripping(vsi);
}
static u16 ice_vc_get_max_vlan_fltrs(struct ice_vf *vf)
@@ -3437,6 +3483,11 @@ static int ice_vc_ena_vlan_stripping_v2_msg(struct ice_vf *vf, u8 *msg)
goto out;
}
+ if (ice_vsi_is_rxq_crc_strip_dis(vsi)) {
+ v_ret = VIRTCHNL_STATUS_ERR_NOT_SUPPORTED;
+ goto out;
+ }
+
ethertype_setting = strip_msg->outer_ethertype_setting;
if (ethertype_setting) {
if (ice_vc_ena_vlan_offload(vsi,
@@ -3457,6 +3508,8 @@ static int ice_vc_ena_vlan_stripping_v2_msg(struct ice_vf *vf, u8 *msg)
* enabled, is extracted in L2TAG1.
*/
ice_vsi_update_l2tsel(vsi, l2tsel);
+
+ vf->vlan_strip_ena |= ICE_OUTER_VLAN_STRIP_ENA;
}
}
@@ -3468,6 +3521,9 @@ static int ice_vc_ena_vlan_stripping_v2_msg(struct ice_vf *vf, u8 *msg)
goto out;
}
+ if (ethertype_setting)
+ vf->vlan_strip_ena |= ICE_INNER_VLAN_STRIP_ENA;
+
out:
return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2,
v_ret, NULL, 0);
@@ -3529,6 +3585,8 @@ static int ice_vc_dis_vlan_stripping_v2_msg(struct ice_vf *vf, u8 *msg)
* in L2TAG1.
*/
ice_vsi_update_l2tsel(vsi, l2tsel);
+
+ vf->vlan_strip_ena &= ~ICE_OUTER_VLAN_STRIP_ENA;
}
}
@@ -3538,6 +3596,9 @@ static int ice_vc_dis_vlan_stripping_v2_msg(struct ice_vf *vf, u8 *msg)
goto out;
}
+ if (ethertype_setting)
+ vf->vlan_strip_ena &= ~ICE_INNER_VLAN_STRIP_ENA;
+
out:
return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2,
v_ret, NULL, 0);
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
index daa6a1e894cf..24b23b7ef04a 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_fdir.c
@@ -1,5 +1,5 @@
// SPDX-License-Identifier: GPL-2.0
-/* Copyright (C) 2021, Intel Corporation. */
+/* Copyright (C) 2021-2023, Intel Corporation. */
#include "ice.h"
#include "ice_base.h"
@@ -1422,8 +1422,8 @@ ice_vc_fdir_irq_handler(struct ice_vsi *ctrl_vsi,
*/
static void ice_vf_fdir_dump_info(struct ice_vf *vf)
{
+ u32 fd_size, fd_cnt, fd_size_g, fd_cnt_g, fd_size_b, fd_cnt_b;
struct ice_vsi *vf_vsi;
- u32 fd_size, fd_cnt;
struct device *dev;
struct ice_pf *pf;
struct ice_hw *hw;
@@ -1442,12 +1442,25 @@ static void ice_vf_fdir_dump_info(struct ice_vf *vf)
fd_size = rd32(hw, VSIQF_FD_SIZE(vsi_num));
fd_cnt = rd32(hw, VSIQF_FD_CNT(vsi_num));
- dev_dbg(dev, "VF %d: space allocated: guar:0x%x, be:0x%x, space consumed: guar:0x%x, be:0x%x\n",
- vf->vf_id,
- (fd_size & VSIQF_FD_CNT_FD_GCNT_M) >> VSIQF_FD_CNT_FD_GCNT_S,
- (fd_size & VSIQF_FD_CNT_FD_BCNT_M) >> VSIQF_FD_CNT_FD_BCNT_S,
- (fd_cnt & VSIQF_FD_CNT_FD_GCNT_M) >> VSIQF_FD_CNT_FD_GCNT_S,
- (fd_cnt & VSIQF_FD_CNT_FD_BCNT_M) >> VSIQF_FD_CNT_FD_BCNT_S);
+ switch (hw->mac_type) {
+ case ICE_MAC_E830:
+ fd_size_g = FIELD_GET(E830_VSIQF_FD_CNT_FD_GCNT_M, fd_size);
+ fd_size_b = FIELD_GET(E830_VSIQF_FD_CNT_FD_BCNT_M, fd_size);
+ fd_cnt_g = FIELD_GET(E830_VSIQF_FD_CNT_FD_GCNT_M, fd_cnt);
+ fd_cnt_b = FIELD_GET(E830_VSIQF_FD_CNT_FD_BCNT_M, fd_cnt);
+ break;
+ case ICE_MAC_E810:
+ default:
+ fd_size_g = FIELD_GET(E800_VSIQF_FD_CNT_FD_GCNT_M, fd_size);
+ fd_size_b = FIELD_GET(E800_VSIQF_FD_CNT_FD_BCNT_M, fd_size);
+ fd_cnt_g = FIELD_GET(E800_VSIQF_FD_CNT_FD_GCNT_M, fd_cnt);
+ fd_cnt_b = FIELD_GET(E800_VSIQF_FD_CNT_FD_BCNT_M, fd_cnt);
+ }
+
+ dev_dbg(dev, "VF %d: Size in the FD table: guaranteed:0x%x, best effort:0x%x\n",
+ vf->vf_id, fd_size_g, fd_size_b);
+ dev_dbg(dev, "VF %d: Filter counter in the FD table: guaranteed:0x%x, best effort:0x%x\n",
+ vf->vf_id, fd_cnt_g, fd_cnt_b);
}
/**
diff --git a/drivers/net/ethernet/intel/ice/ice_xsk.c b/drivers/net/ethernet/intel/ice/ice_xsk.c
index 2a3f0834e139..99954508184f 100644
--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
+++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
@@ -217,21 +217,16 @@ static int ice_qp_dis(struct ice_vsi *vsi, u16 q_idx)
*/
static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
{
- struct ice_aqc_add_tx_qgrp *qg_buf;
+ DEFINE_FLEX(struct ice_aqc_add_tx_qgrp, qg_buf, txqs, 1);
+ u16 size = __struct_size(qg_buf);
struct ice_q_vector *q_vector;
struct ice_tx_ring *tx_ring;
struct ice_rx_ring *rx_ring;
- u16 size;
int err;
if (q_idx >= vsi->num_rxq || q_idx >= vsi->num_txq)
return -EINVAL;
- size = struct_size(qg_buf, txqs, 1);
- qg_buf = kzalloc(size, GFP_KERNEL);
- if (!qg_buf)
- return -ENOMEM;
-
qg_buf->num_txqs = 1;
tx_ring = vsi->tx_rings[q_idx];
@@ -240,7 +235,7 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
err = ice_vsi_cfg_txq(vsi, tx_ring, qg_buf);
if (err)
- goto free_buf;
+ return err;
if (ice_is_xdp_ena_vsi(vsi)) {
struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx];
@@ -249,29 +244,28 @@ static int ice_qp_ena(struct ice_vsi *vsi, u16 q_idx)
qg_buf->num_txqs = 1;
err = ice_vsi_cfg_txq(vsi, xdp_ring, qg_buf);
if (err)
- goto free_buf;
+ return err;
ice_set_ring_xdp(xdp_ring);
ice_tx_xsk_pool(vsi, q_idx);
}
err = ice_vsi_cfg_rxq(rx_ring);
if (err)
- goto free_buf;
+ return err;
ice_qvec_cfg_msix(vsi, q_vector);
err = ice_vsi_ctrl_one_rx_ring(vsi, true, q_idx, true);
if (err)
- goto free_buf;
+ return err;
clear_bit(ICE_CFG_BUSY, vsi->state);
ice_qvec_toggle_napi(vsi, q_vector, true);
ice_qvec_ena_irq(vsi, q_vector);
netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx));
-free_buf:
- kfree(qg_buf);
- return err;
+
+ return 0;
}
/**
diff --git a/drivers/net/ethernet/intel/idpf/Makefile b/drivers/net/ethernet/intel/idpf/Makefile
new file mode 100644
index 000000000000..6844ead2f3ac
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/Makefile
@@ -0,0 +1,18 @@
+# SPDX-License-Identifier: GPL-2.0-only
+# Copyright (C) 2023 Intel Corporation
+
+# Makefile for Intel(R) Infrastructure Data Path Function Linux Driver
+
+obj-$(CONFIG_IDPF) += idpf.o
+
+idpf-y := \
+ idpf_controlq.o \
+ idpf_controlq_setup.o \
+ idpf_dev.o \
+ idpf_ethtool.o \
+ idpf_lib.o \
+ idpf_main.o \
+ idpf_singleq_txrx.o \
+ idpf_txrx.o \
+ idpf_virtchnl.o \
+ idpf_vf_dev.o
diff --git a/drivers/net/ethernet/intel/idpf/idpf.h b/drivers/net/ethernet/intel/idpf/idpf.h
new file mode 100644
index 000000000000..bee73353b56a
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf.h
@@ -0,0 +1,968 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _IDPF_H_
+#define _IDPF_H_
+
+/* Forward declaration */
+struct idpf_adapter;
+struct idpf_vport;
+struct idpf_vport_max_q;
+
+#include <net/pkt_sched.h>
+#include <linux/aer.h>
+#include <linux/etherdevice.h>
+#include <linux/pci.h>
+#include <linux/bitfield.h>
+#include <linux/sctp.h>
+#include <linux/ethtool.h>
+#include <net/gro.h>
+#include <linux/dim.h>
+
+#include "virtchnl2.h"
+#include "idpf_lan_txrx.h"
+#include "idpf_txrx.h"
+#include "idpf_controlq.h"
+
+#define GETMAXVAL(num_bits) GENMASK((num_bits) - 1, 0)
+
+#define IDPF_NO_FREE_SLOT 0xffff
+
+/* Default Mailbox settings */
+#define IDPF_NUM_FILTERS_PER_MSG 20
+#define IDPF_NUM_DFLT_MBX_Q 2 /* includes both TX and RX */
+#define IDPF_DFLT_MBX_Q_LEN 64
+#define IDPF_DFLT_MBX_ID -1
+/* maximum number of times to try before resetting mailbox */
+#define IDPF_MB_MAX_ERR 20
+#define IDPF_NUM_CHUNKS_PER_MSG(struct_sz, chunk_sz) \
+ ((IDPF_CTLQ_MAX_BUF_LEN - (struct_sz)) / (chunk_sz))
+#define IDPF_WAIT_FOR_EVENT_TIMEO_MIN 2000
+#define IDPF_WAIT_FOR_EVENT_TIMEO 60000
+
+#define IDPF_MAX_WAIT 500
+
+/* available message levels */
+#define IDPF_AVAIL_NETIF_M (NETIF_MSG_DRV | NETIF_MSG_PROBE | NETIF_MSG_LINK)
+
+#define IDPF_DIM_PROFILE_SLOTS 5
+
+#define IDPF_VIRTCHNL_VERSION_MAJOR VIRTCHNL2_VERSION_MAJOR_2
+#define IDPF_VIRTCHNL_VERSION_MINOR VIRTCHNL2_VERSION_MINOR_0
+
+/**
+ * struct idpf_mac_filter
+ * @list: list member field
+ * @macaddr: MAC address
+ * @remove: filter should be removed (virtchnl)
+ * @add: filter should be added (virtchnl)
+ */
+struct idpf_mac_filter {
+ struct list_head list;
+ u8 macaddr[ETH_ALEN];
+ bool remove;
+ bool add;
+};
+
+/**
+ * enum idpf_state - State machine to handle bring up
+ * @__IDPF_STARTUP: Start the state machine
+ * @__IDPF_VER_CHECK: Negotiate virtchnl version
+ * @__IDPF_GET_CAPS: Negotiate capabilities
+ * @__IDPF_INIT_SW: Init based on given capabilities
+ * @__IDPF_STATE_LAST: Must be last, used to determine size
+ */
+enum idpf_state {
+ __IDPF_STARTUP,
+ __IDPF_VER_CHECK,
+ __IDPF_GET_CAPS,
+ __IDPF_INIT_SW,
+ __IDPF_STATE_LAST,
+};
+
+/**
+ * enum idpf_flags - Hard reset causes.
+ * @IDPF_HR_FUNC_RESET: Hard reset when TxRx timeout
+ * @IDPF_HR_DRV_LOAD: Set on driver load for a clean HW
+ * @IDPF_HR_RESET_IN_PROG: Reset in progress
+ * @IDPF_REMOVE_IN_PROG: Driver remove in progress
+ * @IDPF_MB_INTR_MODE: Mailbox in interrupt mode
+ * @IDPF_FLAGS_NBITS: Must be last
+ */
+enum idpf_flags {
+ IDPF_HR_FUNC_RESET,
+ IDPF_HR_DRV_LOAD,
+ IDPF_HR_RESET_IN_PROG,
+ IDPF_REMOVE_IN_PROG,
+ IDPF_MB_INTR_MODE,
+ IDPF_FLAGS_NBITS,
+};
+
+/**
+ * enum idpf_cap_field - Offsets into capabilities struct for specific caps
+ * @IDPF_BASE_CAPS: generic base capabilities
+ * @IDPF_CSUM_CAPS: checksum offload capabilities
+ * @IDPF_SEG_CAPS: segmentation offload capabilities
+ * @IDPF_RSS_CAPS: RSS offload capabilities
+ * @IDPF_HSPLIT_CAPS: Header split capabilities
+ * @IDPF_RSC_CAPS: RSC offload capabilities
+ * @IDPF_OTHER_CAPS: miscellaneous offloads
+ *
+ * Used when checking for a specific capability flag. Since the different
+ * capability sets are not mutually exclusive numerically, the caller must
+ * specify which type of capability they are checking for.
+ */
+enum idpf_cap_field {
+ IDPF_BASE_CAPS = -1,
+ IDPF_CSUM_CAPS = offsetof(struct virtchnl2_get_capabilities,
+ csum_caps),
+ IDPF_SEG_CAPS = offsetof(struct virtchnl2_get_capabilities,
+ seg_caps),
+ IDPF_RSS_CAPS = offsetof(struct virtchnl2_get_capabilities,
+ rss_caps),
+ IDPF_HSPLIT_CAPS = offsetof(struct virtchnl2_get_capabilities,
+ hsplit_caps),
+ IDPF_RSC_CAPS = offsetof(struct virtchnl2_get_capabilities,
+ rsc_caps),
+ IDPF_OTHER_CAPS = offsetof(struct virtchnl2_get_capabilities,
+ other_caps),
+};
+
+/**
+ * enum idpf_vport_state - Current vport state
+ * @__IDPF_VPORT_DOWN: Vport is down
+ * @__IDPF_VPORT_UP: Vport is up
+ * @__IDPF_VPORT_STATE_LAST: Must be last, number of states
+ */
+enum idpf_vport_state {
+ __IDPF_VPORT_DOWN,
+ __IDPF_VPORT_UP,
+ __IDPF_VPORT_STATE_LAST,
+};
+
+/**
+ * struct idpf_netdev_priv - Struct to store vport back pointer
+ * @adapter: Adapter back pointer
+ * @vport: Vport back pointer
+ * @vport_id: Vport identifier
+ * @vport_idx: Relative vport index
+ * @state: See enum idpf_vport_state
+ * @netstats: Packet and byte stats
+ * @stats_lock: Lock to protect stats update
+ */
+struct idpf_netdev_priv {
+ struct idpf_adapter *adapter;
+ struct idpf_vport *vport;
+ u32 vport_id;
+ u16 vport_idx;
+ enum idpf_vport_state state;
+ struct rtnl_link_stats64 netstats;
+ spinlock_t stats_lock;
+};
+
+/**
+ * struct idpf_reset_reg - Reset register offsets/masks
+ * @rstat: Reset status register
+ * @rstat_m: Reset status mask
+ */
+struct idpf_reset_reg {
+ void __iomem *rstat;
+ u32 rstat_m;
+};
+
+/**
+ * struct idpf_vport_max_q - Queue limits
+ * @max_rxq: Maximum number of RX queues supported
+ * @max_txq: Maximum number of TX queues supported
+ * @max_bufq: In splitq, maximum number of buffer queues supported
+ * @max_complq: In splitq, maximum number of completion queues supported
+ */
+struct idpf_vport_max_q {
+ u16 max_rxq;
+ u16 max_txq;
+ u16 max_bufq;
+ u16 max_complq;
+};
+
+/**
+ * struct idpf_reg_ops - Device specific register operation function pointers
+ * @ctlq_reg_init: Mailbox control queue register initialization
+ * @intr_reg_init: Traffic interrupt register initialization
+ * @mb_intr_reg_init: Mailbox interrupt register initialization
+ * @reset_reg_init: Reset register initialization
+ * @trigger_reset: Trigger a reset to occur
+ */
+struct idpf_reg_ops {
+ void (*ctlq_reg_init)(struct idpf_ctlq_create_info *cq);
+ int (*intr_reg_init)(struct idpf_vport *vport);
+ void (*mb_intr_reg_init)(struct idpf_adapter *adapter);
+ void (*reset_reg_init)(struct idpf_adapter *adapter);
+ void (*trigger_reset)(struct idpf_adapter *adapter,
+ enum idpf_flags trig_cause);
+};
+
+/**
+ * struct idpf_dev_ops - Device specific operations
+ * @reg_ops: Register operations
+ */
+struct idpf_dev_ops {
+ struct idpf_reg_ops reg_ops;
+};
+
+/* These macros allow us to generate an enum and a matching char * array of
+ * stringified enums that are always in sync. Checkpatch issues a bogus
+ * complex-macro warning here, but it does not apply: these are never used as
+ * a statement and are only used to define the enum and the array.
+ */
+#define IDPF_FOREACH_VPORT_VC_STATE(STATE) \
+ STATE(IDPF_VC_CREATE_VPORT) \
+ STATE(IDPF_VC_CREATE_VPORT_ERR) \
+ STATE(IDPF_VC_ENA_VPORT) \
+ STATE(IDPF_VC_ENA_VPORT_ERR) \
+ STATE(IDPF_VC_DIS_VPORT) \
+ STATE(IDPF_VC_DIS_VPORT_ERR) \
+ STATE(IDPF_VC_DESTROY_VPORT) \
+ STATE(IDPF_VC_DESTROY_VPORT_ERR) \
+ STATE(IDPF_VC_CONFIG_TXQ) \
+ STATE(IDPF_VC_CONFIG_TXQ_ERR) \
+ STATE(IDPF_VC_CONFIG_RXQ) \
+ STATE(IDPF_VC_CONFIG_RXQ_ERR) \
+ STATE(IDPF_VC_ENA_QUEUES) \
+ STATE(IDPF_VC_ENA_QUEUES_ERR) \
+ STATE(IDPF_VC_DIS_QUEUES) \
+ STATE(IDPF_VC_DIS_QUEUES_ERR) \
+ STATE(IDPF_VC_MAP_IRQ) \
+ STATE(IDPF_VC_MAP_IRQ_ERR) \
+ STATE(IDPF_VC_UNMAP_IRQ) \
+ STATE(IDPF_VC_UNMAP_IRQ_ERR) \
+ STATE(IDPF_VC_ADD_QUEUES) \
+ STATE(IDPF_VC_ADD_QUEUES_ERR) \
+ STATE(IDPF_VC_DEL_QUEUES) \
+ STATE(IDPF_VC_DEL_QUEUES_ERR) \
+ STATE(IDPF_VC_ALLOC_VECTORS) \
+ STATE(IDPF_VC_ALLOC_VECTORS_ERR) \
+ STATE(IDPF_VC_DEALLOC_VECTORS) \
+ STATE(IDPF_VC_DEALLOC_VECTORS_ERR) \
+ STATE(IDPF_VC_SET_SRIOV_VFS) \
+ STATE(IDPF_VC_SET_SRIOV_VFS_ERR) \
+ STATE(IDPF_VC_GET_RSS_LUT) \
+ STATE(IDPF_VC_GET_RSS_LUT_ERR) \
+ STATE(IDPF_VC_SET_RSS_LUT) \
+ STATE(IDPF_VC_SET_RSS_LUT_ERR) \
+ STATE(IDPF_VC_GET_RSS_KEY) \
+ STATE(IDPF_VC_GET_RSS_KEY_ERR) \
+ STATE(IDPF_VC_SET_RSS_KEY) \
+ STATE(IDPF_VC_SET_RSS_KEY_ERR) \
+ STATE(IDPF_VC_GET_STATS) \
+ STATE(IDPF_VC_GET_STATS_ERR) \
+ STATE(IDPF_VC_ADD_MAC_ADDR) \
+ STATE(IDPF_VC_ADD_MAC_ADDR_ERR) \
+ STATE(IDPF_VC_DEL_MAC_ADDR) \
+ STATE(IDPF_VC_DEL_MAC_ADDR_ERR) \
+ STATE(IDPF_VC_GET_PTYPE_INFO) \
+ STATE(IDPF_VC_GET_PTYPE_INFO_ERR) \
+ STATE(IDPF_VC_LOOPBACK_STATE) \
+ STATE(IDPF_VC_LOOPBACK_STATE_ERR) \
+ STATE(IDPF_VC_NBITS)
+
+#define IDPF_GEN_ENUM(ENUM) ENUM,
+#define IDPF_GEN_STRING(STRING) #STRING,
+
+enum idpf_vport_vc_state {
+ IDPF_FOREACH_VPORT_VC_STATE(IDPF_GEN_ENUM)
+};
+
+extern const char * const idpf_vport_vc_state_str[];
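For reference, the string array declared above is generated from the very same state list; a minimal sketch of the matching definition (the actual .c file carrying it is not part of this excerpt):

    /* Sketch: expanding the same X-macro list with IDPF_GEN_STRING keeps the
     * string array in sync with the enum; the real definition lives in one of
     * the driver's .c files, which is not shown here.
     */
    const char * const idpf_vport_vc_state_str[] = {
    	IDPF_FOREACH_VPORT_VC_STATE(IDPF_GEN_STRING)
    };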
+
+/**
+ * enum idpf_vport_reset_cause - Vport soft reset causes
+ * @IDPF_SR_Q_CHANGE: Soft reset queue change
+ * @IDPF_SR_Q_DESC_CHANGE: Soft reset descriptor change
+ * @IDPF_SR_MTU_CHANGE: Soft reset MTU change
+ * @IDPF_SR_RSC_CHANGE: Soft reset RSC change
+ */
+enum idpf_vport_reset_cause {
+ IDPF_SR_Q_CHANGE,
+ IDPF_SR_Q_DESC_CHANGE,
+ IDPF_SR_MTU_CHANGE,
+ IDPF_SR_RSC_CHANGE,
+};
+
+/**
+ * enum idpf_vport_flags - Vport flags
+ * @IDPF_VPORT_DEL_QUEUES: To send delete queues message
+ * @IDPF_VPORT_SW_MARKER: Indicates that processing of the TX pipe drain
+ * software marker packets is done
+ * @IDPF_VPORT_FLAGS_NBITS: Must be last
+ */
+enum idpf_vport_flags {
+ IDPF_VPORT_DEL_QUEUES,
+ IDPF_VPORT_SW_MARKER,
+ IDPF_VPORT_FLAGS_NBITS,
+};
+
+struct idpf_port_stats {
+ struct u64_stats_sync stats_sync;
+ u64_stats_t rx_hw_csum_err;
+ u64_stats_t rx_hsplit;
+ u64_stats_t rx_hsplit_hbo;
+ u64_stats_t rx_bad_descs;
+ u64_stats_t tx_linearize;
+ u64_stats_t tx_busy;
+ u64_stats_t tx_drops;
+ u64_stats_t tx_dma_map_errs;
+ struct virtchnl2_vport_stats vport_stats;
+};
+
+/**
+ * struct idpf_vport - Handle for netdevices and queue resources
+ * @num_txq: Number of allocated TX queues
+ * @num_complq: Number of allocated completion queues
+ * @txq_desc_count: TX queue descriptor count
+ * @complq_desc_count: Completion queue descriptor count
+ * @compln_clean_budget: Work budget for completion clean
+ * @num_txq_grp: Number of TX queue groups
+ * @txq_grps: Array of TX queue groups
+ * @txq_model: Split queue or single queue queuing model
+ * @txqs: Used only in hotpath to get to the right queue very fast
+ * @crc_enable: Enable CRC insertion offload
+ * @num_rxq: Number of allocated RX queues
+ * @num_bufq: Number of allocated buffer queues
+ * @rxq_desc_count: RX queue descriptor count. *MUST* have enough descriptors
+ * to complete all buffer descriptors for all buffer queues in
+ * the worst case.
+ * @num_bufqs_per_qgrp: Buffer queues per RX queue in a given grouping
+ * @bufq_desc_count: Buffer queue descriptor count
+ * @bufq_size: Size of buffers in ring (e.g. 2K, 4K, etc)
+ * @num_rxq_grp: Number of RX queue groups
+ * @rxq_grps: Array of RX queue groups. Number of groups times RX queues per
+ * group yields the total number of RX queues.
+ * @rxq_model: Splitq queue or single queue queuing model
+ * @rx_ptype_lkup: Lookup table for ptypes on RX
+ * @adapter: back pointer to associated adapter
+ * @netdev: Associated net_device. Each vport should have one and only one
+ * associated netdev.
+ * @flags: See enum idpf_vport_flags
+ * @vport_type: Default SRIOV, SIOV, etc.
+ * @vport_id: Device given vport identifier
+ * @idx: Software index in adapter vports struct
+ * @default_vport: Use this vport if one isn't specified
+ * @base_rxd: True if the driver should use base descriptors instead of flex
+ * @num_q_vectors: Number of IRQ vectors allocated
+ * @q_vectors: Array of queue vectors
+ * @q_vector_idxs: Starting index of queue vectors
+ * @max_mtu: device given max possible MTU
+ * @default_mac_addr: device will give a default MAC to use
+ * @rx_itr_profile: RX profiles for Dynamic Interrupt Moderation
+ * @tx_itr_profile: TX profiles for Dynamic Interrupt Moderation
+ * @port_stats: per port csum, header split, and other offload stats
+ * @link_up: True if link is up
+ * @link_speed_mbps: Link speed in mbps
+ * @vc_msg: Virtchnl message buffer
+ * @vc_state: Virtchnl message state
+ * @vchnl_wq: Wait queue for virtchnl messages
+ * @sw_marker_wq: workqueue for marker packets
+ * @vc_buf_lock: Lock to protect virtchnl buffer
+ */
+struct idpf_vport {
+ u16 num_txq;
+ u16 num_complq;
+ u32 txq_desc_count;
+ u32 complq_desc_count;
+ u32 compln_clean_budget;
+ u16 num_txq_grp;
+ struct idpf_txq_group *txq_grps;
+ u32 txq_model;
+ struct idpf_queue **txqs;
+ bool crc_enable;
+
+ u16 num_rxq;
+ u16 num_bufq;
+ u32 rxq_desc_count;
+ u8 num_bufqs_per_qgrp;
+ u32 bufq_desc_count[IDPF_MAX_BUFQS_PER_RXQ_GRP];
+ u32 bufq_size[IDPF_MAX_BUFQS_PER_RXQ_GRP];
+ u16 num_rxq_grp;
+ struct idpf_rxq_group *rxq_grps;
+ u32 rxq_model;
+ struct idpf_rx_ptype_decoded rx_ptype_lkup[IDPF_RX_MAX_PTYPE];
+
+ struct idpf_adapter *adapter;
+ struct net_device *netdev;
+ DECLARE_BITMAP(flags, IDPF_VPORT_FLAGS_NBITS);
+ u16 vport_type;
+ u32 vport_id;
+ u16 idx;
+ bool default_vport;
+ bool base_rxd;
+
+ u16 num_q_vectors;
+ struct idpf_q_vector *q_vectors;
+ u16 *q_vector_idxs;
+ u16 max_mtu;
+ u8 default_mac_addr[ETH_ALEN];
+ u16 rx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
+ u16 tx_itr_profile[IDPF_DIM_PROFILE_SLOTS];
+ struct idpf_port_stats port_stats;
+
+ bool link_up;
+ u32 link_speed_mbps;
+
+ char vc_msg[IDPF_CTLQ_MAX_BUF_LEN];
+ DECLARE_BITMAP(vc_state, IDPF_VC_NBITS);
+
+ wait_queue_head_t vchnl_wq;
+ wait_queue_head_t sw_marker_wq;
+ struct mutex vc_buf_lock;
+};
+
+/**
+ * enum idpf_user_flags
+ * @__IDPF_PROMISC_UC: Unicast promiscuous mode
+ * @__IDPF_PROMISC_MC: Multicast promiscuous mode
+ * @__IDPF_USER_FLAGS_NBITS: Must be last
+ */
+enum idpf_user_flags {
+ __IDPF_PROMISC_UC = 32,
+ __IDPF_PROMISC_MC,
+
+ __IDPF_USER_FLAGS_NBITS,
+};
+
+/**
+ * struct idpf_rss_data - Associated RSS data
+ * @rss_key_size: Size of RSS hash key
+ * @rss_key: RSS hash key
+ * @rss_lut_size: Size of RSS lookup table
+ * @rss_lut: RSS lookup table
+ * @cached_lut: Used to restore previously init RSS lut
+ */
+struct idpf_rss_data {
+ u16 rss_key_size;
+ u8 *rss_key;
+ u16 rss_lut_size;
+ u32 *rss_lut;
+ u32 *cached_lut;
+};
+
+/**
+ * struct idpf_vport_user_config_data - User defined configuration values for
+ * each vport.
+ * @rss_data: See struct idpf_rss_data
+ * @num_req_tx_qs: Number of user requested TX queues through ethtool
+ * @num_req_rx_qs: Number of user requested RX queues through ethtool
+ * @num_req_txq_desc: Number of user requested TX queue descriptors through
+ * ethtool
+ * @num_req_rxq_desc: Number of user requested RX queue descriptors through
+ * ethtool
+ * @user_flags: User toggled config flags
+ * @mac_filter_list: List of MAC filters
+ *
+ * Used to restore configuration after a reset as the vport will get wiped.
+ */
+struct idpf_vport_user_config_data {
+ struct idpf_rss_data rss_data;
+ u16 num_req_tx_qs;
+ u16 num_req_rx_qs;
+ u32 num_req_txq_desc;
+ u32 num_req_rxq_desc;
+ DECLARE_BITMAP(user_flags, __IDPF_USER_FLAGS_NBITS);
+ struct list_head mac_filter_list;
+};
+
+/**
+ * enum idpf_vport_config_flags - Vport config flags
+ * @IDPF_VPORT_REG_NETDEV: Register netdev
+ * @IDPF_VPORT_UP_REQUESTED: Set if interface up is requested on core reset
+ * @IDPF_VPORT_ADD_MAC_REQ: Asynchronous add ether address in flight
+ * @IDPF_VPORT_DEL_MAC_REQ: Asynchronous delete ether address in flight
+ * @IDPF_VPORT_CONFIG_FLAGS_NBITS: Must be last
+ */
+enum idpf_vport_config_flags {
+ IDPF_VPORT_REG_NETDEV,
+ IDPF_VPORT_UP_REQUESTED,
+ IDPF_VPORT_ADD_MAC_REQ,
+ IDPF_VPORT_DEL_MAC_REQ,
+ IDPF_VPORT_CONFIG_FLAGS_NBITS,
+};
+
+/**
+ * struct idpf_avail_queue_info
+ * @avail_rxq: Available RX queues
+ * @avail_txq: Available TX queues
+ * @avail_bufq: Available buffer queues
+ * @avail_complq: Available completion queues
+ *
+ * Maintain total queues available after allocating max queues to each vport.
+ */
+struct idpf_avail_queue_info {
+ u16 avail_rxq;
+ u16 avail_txq;
+ u16 avail_bufq;
+ u16 avail_complq;
+};
+
+/**
+ * struct idpf_vector_info - Utility structure to pass function arguments as a
+ * structure
+ * @num_req_vecs: Vectors required based on the number of queues updated by the
+ * user via ethtool
+ * @num_curr_vecs: Current number of vectors, must be >= @num_req_vecs
+ * @index: Relative starting index for vectors
+ * @default_vport: Vectors are for default vport
+ */
+struct idpf_vector_info {
+ u16 num_req_vecs;
+ u16 num_curr_vecs;
+ u16 index;
+ bool default_vport;
+};
+
+/**
+ * struct idpf_vector_lifo - Stack to maintain vector indexes used for vector
+ * distribution algorithm
+ * @top: Points to stack top i.e. next available vector index
+ * @base: Always points to start of the free pool
+ * @size: Total size of the vector stack
+ * @vec_idx: Array to store all the vector indexes
+ *
+ * Vector stack maintains all the relative vector indexes at the *adapter*
+ * level. This stack is divided into two parts: a 'default pool' and a 'free
+ * pool'. The vector distribution algorithm gives priority to default vports
+ * so that at least IDPF_MIN_Q_VEC vectors are allocated per default vport,
+ * and the relative vector indexes for those are maintained in the default
+ * pool. The free pool contains all the unallocated vector indexes, which can
+ * be allocated on demand. The mailbox vector index is maintained in the
+ * default pool of the stack.
+ */
+struct idpf_vector_lifo {
+ u16 top;
+ u16 base;
+ u16 size;
+ u16 *vec_idx;
+};
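As a purely illustrative reading of the fields above (this is not the driver's implementation), popping the next free relative vector index from such a stack could look like the sketch below; it only assumes that @top indexes the next available entry in @vec_idx and that the stack is exhausted once @top reaches @size:

    /* Illustrative only: LIFO pop based on the field description above, not
     * the driver's actual helper.
     */
    static int demo_vector_lifo_pop(struct idpf_vector_lifo *stack)
    {
    	if (stack->top == stack->size)
    		return -1;	/* no free vector indexes left */

    	return stack->vec_idx[stack->top++];
    }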
+
+/**
+ * struct idpf_vport_config - Vport configuration data
+ * @user_config: see struct idpf_vport_user_config_data
+ * @max_q: Maximum possible queues
+ * @req_qs_chunks: Queue chunk data for requested queues
+ * @mac_filter_list_lock: Lock to protect mac filters
+ * @flags: See enum idpf_vport_config_flags
+ */
+struct idpf_vport_config {
+ struct idpf_vport_user_config_data user_config;
+ struct idpf_vport_max_q max_q;
+ void *req_qs_chunks;
+ spinlock_t mac_filter_list_lock;
+ DECLARE_BITMAP(flags, IDPF_VPORT_CONFIG_FLAGS_NBITS);
+};
+
+/**
+ * struct idpf_adapter - Device data struct generated on probe
+ * @pdev: PCI device struct given on probe
+ * @virt_ver_maj: Virtchnl version major
+ * @virt_ver_min: Virtchnl version minor
+ * @msg_enable: Debug message level enabled
+ * @mb_wait_count: Number of mailbox initialization attempts
+ * @state: Init state machine
+ * @flags: See enum idpf_flags
+ * @reset_reg: See struct idpf_reset_reg
+ * @hw: Device access data
+ * @num_req_msix: Requested number of MSIX vectors
+ * @num_avail_msix: Available number of MSIX vectors
+ * @num_msix_entries: Number of entries in MSIX table
+ * @msix_entries: MSIX table
+ * @req_vec_chunks: Requested vector chunk data
+ * @mb_vector: Mailbox vector data
+ * @vector_stack: Stack to store the msix vector indexes
+ * @irq_mb_handler: Handler for hard interrupt for mailbox
+ * @tx_timeout_count: Number of TX timeouts that have occurred
+ * @avail_queues: Device given queue limits
+ * @vports: Array to store vports created by the driver
+ * @netdevs: Associated Vport netdevs
+ * @vport_params_reqd: Vport params requested
+ * @vport_params_recvd: Vport params received
+ * @vport_ids: Array of device given vport identifiers
+ * @vport_config: Vport config parameters
+ * @max_vports: Maximum vports that can be allocated
+ * @num_alloc_vports: Current number of vports allocated
+ * @next_vport: Next free slot in the vports array - 0-based!
+ * @init_task: Initialization task
+ * @init_wq: Workqueue for initialization task
+ * @serv_task: Periodically recurring maintenance task
+ * @serv_wq: Workqueue for service task
+ * @mbx_task: Task to handle mailbox interrupts
+ * @mbx_wq: Workqueue for mailbox responses
+ * @vc_event_task: Task to handle out of band virtchnl event notifications
+ * @vc_event_wq: Workqueue for virtchnl events
+ * @stats_task: Periodic statistics retrieval task
+ * @stats_wq: Workqueue for statistics task
+ * @caps: Negotiated capabilities with device
+ * @vchnl_wq: Wait queue for virtchnl messages
+ * @vc_state: Virtchnl message state
+ * @vc_msg: Virtchnl message buffer
+ * @dev_ops: See idpf_dev_ops
+ * @num_vfs: Number of VFs allocated through sysfs. The PF does not directly
+ * talk to the VFs; this count is only used to initialize them
+ * @crc_enable: Enable CRC insertion offload
+ * @req_tx_splitq: TX split or single queue model to request
+ * @req_rx_splitq: RX split or single queue model to request
+ * @vport_ctrl_lock: Lock to protect the vport control flow
+ * @vector_lock: Lock to protect vector distribution
+ * @queue_lock: Lock to protect queue distribution
+ * @vc_buf_lock: Lock to protect virtchnl buffer
+ */
+struct idpf_adapter {
+ struct pci_dev *pdev;
+ u32 virt_ver_maj;
+ u32 virt_ver_min;
+
+ u32 msg_enable;
+ u32 mb_wait_count;
+ enum idpf_state state;
+ DECLARE_BITMAP(flags, IDPF_FLAGS_NBITS);
+ struct idpf_reset_reg reset_reg;
+ struct idpf_hw hw;
+ u16 num_req_msix;
+ u16 num_avail_msix;
+ u16 num_msix_entries;
+ struct msix_entry *msix_entries;
+ struct virtchnl2_alloc_vectors *req_vec_chunks;
+ struct idpf_q_vector mb_vector;
+ struct idpf_vector_lifo vector_stack;
+ irqreturn_t (*irq_mb_handler)(int irq, void *data);
+
+ u32 tx_timeout_count;
+ struct idpf_avail_queue_info avail_queues;
+ struct idpf_vport **vports;
+ struct net_device **netdevs;
+ struct virtchnl2_create_vport **vport_params_reqd;
+ struct virtchnl2_create_vport **vport_params_recvd;
+ u32 *vport_ids;
+
+ struct idpf_vport_config **vport_config;
+ u16 max_vports;
+ u16 num_alloc_vports;
+ u16 next_vport;
+
+ struct delayed_work init_task;
+ struct workqueue_struct *init_wq;
+ struct delayed_work serv_task;
+ struct workqueue_struct *serv_wq;
+ struct delayed_work mbx_task;
+ struct workqueue_struct *mbx_wq;
+ struct delayed_work vc_event_task;
+ struct workqueue_struct *vc_event_wq;
+ struct delayed_work stats_task;
+ struct workqueue_struct *stats_wq;
+ struct virtchnl2_get_capabilities caps;
+
+ wait_queue_head_t vchnl_wq;
+ DECLARE_BITMAP(vc_state, IDPF_VC_NBITS);
+ char vc_msg[IDPF_CTLQ_MAX_BUF_LEN];
+ struct idpf_dev_ops dev_ops;
+ int num_vfs;
+ bool crc_enable;
+ bool req_tx_splitq;
+ bool req_rx_splitq;
+
+ struct mutex vport_ctrl_lock;
+ struct mutex vector_lock;
+ struct mutex queue_lock;
+ struct mutex vc_buf_lock;
+};
+
+/**
+ * idpf_is_queue_model_split - check if queue model is split
+ * @q_model: queue model single or split
+ *
+ * Returns true if queue model is split else false
+ */
+static inline int idpf_is_queue_model_split(u16 q_model)
+{
+ return q_model == VIRTCHNL2_QUEUE_MODEL_SPLIT;
+}
+
+#define idpf_is_cap_ena(adapter, field, flag) \
+ idpf_is_capability_ena(adapter, false, field, flag)
+#define idpf_is_cap_ena_all(adapter, field, flag) \
+ idpf_is_capability_ena(adapter, true, field, flag)
+
+bool idpf_is_capability_ena(struct idpf_adapter *adapter, bool all,
+ enum idpf_cap_field field, u64 flag);
+
+#define IDPF_CAP_RSS (\
+ VIRTCHNL2_CAP_RSS_IPV4_TCP |\
+ VIRTCHNL2_CAP_RSS_IPV4_UDP |\
+ VIRTCHNL2_CAP_RSS_IPV4_SCTP |\
+ VIRTCHNL2_CAP_RSS_IPV4_OTHER |\
+ VIRTCHNL2_CAP_RSS_IPV6_TCP |\
+ VIRTCHNL2_CAP_RSS_IPV6_UDP |\
+ VIRTCHNL2_CAP_RSS_IPV6_SCTP |\
+ VIRTCHNL2_CAP_RSS_IPV6_OTHER)
+
+#define IDPF_CAP_RSC (\
+ VIRTCHNL2_CAP_RSC_IPV4_TCP |\
+ VIRTCHNL2_CAP_RSC_IPV6_TCP)
+
+#define IDPF_CAP_HSPLIT (\
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 |\
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6)
+
+#define IDPF_CAP_RX_CSUM_L4V4 (\
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |\
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP)
+
+#define IDPF_CAP_RX_CSUM_L4V6 (\
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |\
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP)
+
+#define IDPF_CAP_RX_CSUM (\
+ VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |\
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |\
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP |\
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |\
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP)
+
+#define IDPF_CAP_SCTP_CSUM (\
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |\
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |\
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |\
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP)
+
+#define IDPF_CAP_TUNNEL_TX_CSUM (\
+ VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL |\
+ VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL)
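Putting the pieces together, a capability query always pairs one of the idpf_cap_field offsets with a flag group such as the ones defined above; an illustrative call site (hypothetical helper, not taken from the driver):

    /* Hypothetical helper: check that every RSS flag in IDPF_CAP_RSS was
     * advertised in the negotiated capabilities.
     */
    static bool demo_adapter_has_full_rss(struct idpf_adapter *adapter)
    {
    	return idpf_is_cap_ena_all(adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS);
    }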
+
+/**
+ * idpf_get_reserved_vecs - Get reserved vectors
+ * @adapter: private data struct
+ */
+static inline u16 idpf_get_reserved_vecs(struct idpf_adapter *adapter)
+{
+ return le16_to_cpu(adapter->caps.num_allocated_vectors);
+}
+
+/**
+ * idpf_get_default_vports - Get default number of vports
+ * @adapter: private data struct
+ */
+static inline u16 idpf_get_default_vports(struct idpf_adapter *adapter)
+{
+ return le16_to_cpu(adapter->caps.default_num_vports);
+}
+
+/**
+ * idpf_get_max_vports - Get max number of vports
+ * @adapter: private data struct
+ */
+static inline u16 idpf_get_max_vports(struct idpf_adapter *adapter)
+{
+ return le16_to_cpu(adapter->caps.max_vports);
+}
+
+/**
+ * idpf_get_max_tx_bufs - Get max scatter-gather buffers supported by the device
+ * @adapter: private data struct
+ */
+static inline unsigned int idpf_get_max_tx_bufs(struct idpf_adapter *adapter)
+{
+ return adapter->caps.max_sg_bufs_per_tx_pkt;
+}
+
+/**
+ * idpf_get_min_tx_pkt_len - Get min packet length supported by the device
+ * @adapter: private data struct
+ */
+static inline u8 idpf_get_min_tx_pkt_len(struct idpf_adapter *adapter)
+{
+ u8 pkt_len = adapter->caps.min_sso_packet_len;
+
+ return pkt_len ? pkt_len : IDPF_TX_MIN_PKT_LEN;
+}
+
+/**
+ * idpf_get_reg_addr - Get BAR0 register address
+ * @adapter: private data struct
+ * @reg_offset: register offset value
+ *
+ * Based on the register offset, return the actual BAR0 register address
+ */
+static inline void __iomem *idpf_get_reg_addr(struct idpf_adapter *adapter,
+ resource_size_t reg_offset)
+{
+ return (void __iomem *)(adapter->hw.hw_addr + reg_offset);
+}
+
+/**
+ * idpf_is_reset_detected - check if we were reset at some point
+ * @adapter: driver specific private structure
+ *
+ * Returns true if we are either in reset currently or were previously reset.
+ */
+static inline bool idpf_is_reset_detected(struct idpf_adapter *adapter)
+{
+ if (!adapter->hw.arq)
+ return true;
+
+ return !(readl(idpf_get_reg_addr(adapter, adapter->hw.arq->reg.len)) &
+ adapter->hw.arq->reg.len_mask);
+}
+
+/**
+ * idpf_is_reset_in_prog - check if reset is in progress
+ * @adapter: driver specific private structure
+ *
+ * Returns true if hard reset is in progress, false otherwise
+ */
+static inline bool idpf_is_reset_in_prog(struct idpf_adapter *adapter)
+{
+ return (test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags) ||
+ test_bit(IDPF_HR_FUNC_RESET, adapter->flags) ||
+ test_bit(IDPF_HR_DRV_LOAD, adapter->flags));
+}
+
+/**
+ * idpf_netdev_to_vport - get a vport handle from a netdev
+ * @netdev: network interface device structure
+ */
+static inline struct idpf_vport *idpf_netdev_to_vport(struct net_device *netdev)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+
+ return np->vport;
+}
+
+/**
+ * idpf_netdev_to_adapter - Get adapter handle from a netdev
+ * @netdev: Network interface device structure
+ */
+static inline struct idpf_adapter *idpf_netdev_to_adapter(struct net_device *netdev)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+
+ return np->adapter;
+}
+
+/**
+ * idpf_is_feature_ena - Determine if a particular feature is enabled
+ * @vport: Vport to check
+ * @feature: Netdev flag to check
+ *
+ * Returns true or false if a particular feature is enabled.
+ */
+static inline bool idpf_is_feature_ena(const struct idpf_vport *vport,
+ netdev_features_t feature)
+{
+ return vport->netdev->features & feature;
+}
+
+/**
+ * idpf_get_max_tx_hdr_size - Get the size of the TX header
+ * @adapter: Driver specific private structure
+ */
+static inline u16 idpf_get_max_tx_hdr_size(struct idpf_adapter *adapter)
+{
+ return le16_to_cpu(adapter->caps.max_tx_hdr_size);
+}
+
+/**
+ * idpf_vport_ctrl_lock - Acquire the vport control lock
+ * @netdev: Network interface device structure
+ *
+ * This lock should be used by non-datapath code to protect against vport
+ * destruction.
+ */
+static inline void idpf_vport_ctrl_lock(struct net_device *netdev)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+
+ mutex_lock(&np->adapter->vport_ctrl_lock);
+}
+
+/**
+ * idpf_vport_ctrl_unlock - Release the vport control lock
+ * @netdev: Network interface device structure
+ */
+static inline void idpf_vport_ctrl_unlock(struct net_device *netdev)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+
+ mutex_unlock(&np->adapter->vport_ctrl_lock);
+}
+
+void idpf_statistics_task(struct work_struct *work);
+void idpf_init_task(struct work_struct *work);
+void idpf_service_task(struct work_struct *work);
+void idpf_mbx_task(struct work_struct *work);
+void idpf_vc_event_task(struct work_struct *work);
+void idpf_dev_ops_init(struct idpf_adapter *adapter);
+void idpf_vf_dev_ops_init(struct idpf_adapter *adapter);
+int idpf_vport_adjust_qs(struct idpf_vport *vport);
+int idpf_init_dflt_mbx(struct idpf_adapter *adapter);
+void idpf_deinit_dflt_mbx(struct idpf_adapter *adapter);
+int idpf_vc_core_init(struct idpf_adapter *adapter);
+void idpf_vc_core_deinit(struct idpf_adapter *adapter);
+int idpf_intr_req(struct idpf_adapter *adapter);
+void idpf_intr_rel(struct idpf_adapter *adapter);
+int idpf_get_reg_intr_vecs(struct idpf_vport *vport,
+ struct idpf_vec_regs *reg_vals);
+u16 idpf_get_max_tx_hdr_size(struct idpf_adapter *adapter);
+int idpf_send_delete_queues_msg(struct idpf_vport *vport);
+int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q,
+ u16 num_complq, u16 num_rx_q, u16 num_rx_bufq);
+int idpf_initiate_soft_reset(struct idpf_vport *vport,
+ enum idpf_vport_reset_cause reset_cause);
+int idpf_send_enable_vport_msg(struct idpf_vport *vport);
+int idpf_send_disable_vport_msg(struct idpf_vport *vport);
+int idpf_send_destroy_vport_msg(struct idpf_vport *vport);
+int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport);
+int idpf_send_ena_dis_loopback_msg(struct idpf_vport *vport);
+int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get);
+int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get);
+int idpf_send_dealloc_vectors_msg(struct idpf_adapter *adapter);
+int idpf_send_alloc_vectors_msg(struct idpf_adapter *adapter, u16 num_vectors);
+void idpf_deinit_task(struct idpf_adapter *adapter);
+int idpf_req_rel_vector_indexes(struct idpf_adapter *adapter,
+ u16 *q_vector_idxs,
+ struct idpf_vector_info *vec_info);
+int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport);
+int idpf_send_get_stats_msg(struct idpf_vport *vport);
+int idpf_get_vec_ids(struct idpf_adapter *adapter,
+ u16 *vecids, int num_vecids,
+ struct virtchnl2_vector_chunks *chunks);
+int idpf_recv_mb_msg(struct idpf_adapter *adapter, u32 op,
+ void *msg, int msg_size);
+int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op,
+ u16 msg_size, u8 *msg);
+void idpf_set_ethtool_ops(struct net_device *netdev);
+int idpf_vport_alloc_max_qs(struct idpf_adapter *adapter,
+ struct idpf_vport_max_q *max_q);
+void idpf_vport_dealloc_max_qs(struct idpf_adapter *adapter,
+ struct idpf_vport_max_q *max_q);
+int idpf_add_del_mac_filters(struct idpf_vport *vport,
+ struct idpf_netdev_priv *np,
+ bool add, bool async);
+int idpf_set_promiscuous(struct idpf_adapter *adapter,
+ struct idpf_vport_user_config_data *config_data,
+ u32 vport_id);
+int idpf_send_disable_queues_msg(struct idpf_vport *vport);
+void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q);
+u32 idpf_get_vport_id(struct idpf_vport *vport);
+int idpf_vport_queue_ids_init(struct idpf_vport *vport);
+int idpf_queue_reg_init(struct idpf_vport *vport);
+int idpf_send_config_queues_msg(struct idpf_vport *vport);
+int idpf_send_enable_queues_msg(struct idpf_vport *vport);
+int idpf_send_create_vport_msg(struct idpf_adapter *adapter,
+ struct idpf_vport_max_q *max_q);
+int idpf_check_supported_desc_ids(struct idpf_vport *vport);
+void idpf_vport_intr_write_itr(struct idpf_q_vector *q_vector,
+ u16 itr, bool tx);
+int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map);
+int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs);
+int idpf_sriov_configure(struct pci_dev *pdev, int num_vfs);
+
+#endif /* !_IDPF_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq.c b/drivers/net/ethernet/intel/idpf/idpf_controlq.c
new file mode 100644
index 000000000000..c7f43d2fcd13
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_controlq.c
@@ -0,0 +1,621 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2023 Intel Corporation */
+
+#include "idpf_controlq.h"
+
+/**
+ * idpf_ctlq_setup_regs - initialize control queue registers
+ * @cq: pointer to the specific control queue
+ * @q_create_info: struct containing creation info for the queue
+ */
+static void idpf_ctlq_setup_regs(struct idpf_ctlq_info *cq,
+ struct idpf_ctlq_create_info *q_create_info)
+{
+ /* set control queue registers in our local struct */
+ cq->reg.head = q_create_info->reg.head;
+ cq->reg.tail = q_create_info->reg.tail;
+ cq->reg.len = q_create_info->reg.len;
+ cq->reg.bah = q_create_info->reg.bah;
+ cq->reg.bal = q_create_info->reg.bal;
+ cq->reg.len_mask = q_create_info->reg.len_mask;
+ cq->reg.len_ena_mask = q_create_info->reg.len_ena_mask;
+ cq->reg.head_mask = q_create_info->reg.head_mask;
+}
+
+/**
+ * idpf_ctlq_init_regs - Initialize control queue registers
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ * @is_rxq: true if receive control queue, false otherwise
+ *
+ * Initialize registers. The caller is expected to have already initialized the
+ * descriptor ring memory and buffer memory
+ */
+static void idpf_ctlq_init_regs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+ bool is_rxq)
+{
+ /* Update tail to post pre-allocated buffers for rx queues */
+ if (is_rxq)
+ wr32(hw, cq->reg.tail, (u32)(cq->ring_size - 1));
+
+ /* For non-Mailbox control queues only TAIL needs to be set */
+ if (cq->q_id != -1)
+ return;
+
+ /* Clear Head for both send and receive */
+ wr32(hw, cq->reg.head, 0);
+
+ /* set starting point */
+ wr32(hw, cq->reg.bal, lower_32_bits(cq->desc_ring.pa));
+ wr32(hw, cq->reg.bah, upper_32_bits(cq->desc_ring.pa));
+ wr32(hw, cq->reg.len, (cq->ring_size | cq->reg.len_ena_mask));
+}
+
+/**
+ * idpf_ctlq_init_rxq_bufs - populate receive queue descriptors with buf
+ * @cq: pointer to the specific Control queue
+ *
+ * Record the address of the receive queue DMA buffers in the descriptors.
+ * The buffers must have been previously allocated.
+ */
+static void idpf_ctlq_init_rxq_bufs(struct idpf_ctlq_info *cq)
+{
+ int i;
+
+ for (i = 0; i < cq->ring_size; i++) {
+ struct idpf_ctlq_desc *desc = IDPF_CTLQ_DESC(cq, i);
+ struct idpf_dma_mem *bi = cq->bi.rx_buff[i];
+
+ /* No buffer to post to descriptor, continue */
+ if (!bi)
+ continue;
+
+ desc->flags =
+ cpu_to_le16(IDPF_CTLQ_FLAG_BUF | IDPF_CTLQ_FLAG_RD);
+ desc->opcode = 0;
+ desc->datalen = cpu_to_le16(bi->size);
+ desc->ret_val = 0;
+ desc->v_opcode_dtype = 0;
+ desc->v_retval = 0;
+ desc->params.indirect.addr_high =
+ cpu_to_le32(upper_32_bits(bi->pa));
+ desc->params.indirect.addr_low =
+ cpu_to_le32(lower_32_bits(bi->pa));
+ desc->params.indirect.param0 = 0;
+ desc->params.indirect.sw_cookie = 0;
+ desc->params.indirect.v_flags = 0;
+ }
+}
+
+/**
+ * idpf_ctlq_shutdown - shutdown the CQ
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * The main shutdown routine for any control queue
+ */
+static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
+{
+ mutex_lock(&cq->cq_lock);
+
+ /* free ring buffers and the ring itself */
+ idpf_ctlq_dealloc_ring_res(hw, cq);
+
+ /* Set ring_size to 0 to indicate uninitialized queue */
+ cq->ring_size = 0;
+
+ mutex_unlock(&cq->cq_lock);
+ mutex_destroy(&cq->cq_lock);
+}
+
+/**
+ * idpf_ctlq_add - add one control queue
+ * @hw: pointer to hardware struct
+ * @qinfo: info for queue to be created
+ * @cq_out: (output) double pointer to control queue to be created
+ *
+ * Allocate and initialize a control queue and add it to the control queue list.
+ * The cq parameter will be allocated/initialized and passed back to the caller
+ * if no errors occur.
+ *
+ * Note: idpf_ctlq_init must be called prior to any calls to idpf_ctlq_add
+ */
+int idpf_ctlq_add(struct idpf_hw *hw,
+ struct idpf_ctlq_create_info *qinfo,
+ struct idpf_ctlq_info **cq_out)
+{
+ struct idpf_ctlq_info *cq;
+ bool is_rxq = false;
+ int err;
+
+ cq = kzalloc(sizeof(*cq), GFP_KERNEL);
+ if (!cq)
+ return -ENOMEM;
+
+ cq->cq_type = qinfo->type;
+ cq->q_id = qinfo->id;
+ cq->buf_size = qinfo->buf_size;
+ cq->ring_size = qinfo->len;
+
+ cq->next_to_use = 0;
+ cq->next_to_clean = 0;
+ cq->next_to_post = cq->ring_size - 1;
+
+ switch (qinfo->type) {
+ case IDPF_CTLQ_TYPE_MAILBOX_RX:
+ is_rxq = true;
+ fallthrough;
+ case IDPF_CTLQ_TYPE_MAILBOX_TX:
+ err = idpf_ctlq_alloc_ring_res(hw, cq);
+ break;
+ default:
+ err = -EBADR;
+ break;
+ }
+
+ if (err)
+ goto init_free_q;
+
+ if (is_rxq) {
+ idpf_ctlq_init_rxq_bufs(cq);
+ } else {
+ /* Allocate the array of msg pointers for TX queues */
+ cq->bi.tx_msg = kcalloc(qinfo->len,
+ sizeof(struct idpf_ctlq_msg *),
+ GFP_KERNEL);
+ if (!cq->bi.tx_msg) {
+ err = -ENOMEM;
+ goto init_dealloc_q_mem;
+ }
+ }
+
+ idpf_ctlq_setup_regs(cq, qinfo);
+
+ idpf_ctlq_init_regs(hw, cq, is_rxq);
+
+ mutex_init(&cq->cq_lock);
+
+ list_add(&cq->cq_list, &hw->cq_list_head);
+
+ *cq_out = cq;
+
+ return 0;
+
+init_dealloc_q_mem:
+ /* free ring buffers and the ring itself */
+ idpf_ctlq_dealloc_ring_res(hw, cq);
+init_free_q:
+ kfree(cq);
+
+ return err;
+}
+
+/**
+ * idpf_ctlq_remove - deallocate and remove specified control queue
+ * @hw: pointer to hardware struct
+ * @cq: pointer to control queue to be removed
+ */
+void idpf_ctlq_remove(struct idpf_hw *hw,
+ struct idpf_ctlq_info *cq)
+{
+ list_del(&cq->cq_list);
+ idpf_ctlq_shutdown(hw, cq);
+ kfree(cq);
+}
+
+/**
+ * idpf_ctlq_init - main initialization routine for all control queues
+ * @hw: pointer to hardware struct
+ * @num_q: number of queues to initialize
+ * @q_info: array of structs containing info for each queue to be initialized
+ *
+ * This initializes any number and any type of control queues. This is an all
+ * or nothing routine; if one fails, all previously allocated queues will be
+ * destroyed. This must be called prior to using the individual add/remove
+ * APIs.
+ */
+int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
+ struct idpf_ctlq_create_info *q_info)
+{
+ struct idpf_ctlq_info *cq, *tmp;
+ int err;
+ int i;
+
+ INIT_LIST_HEAD(&hw->cq_list_head);
+
+ for (i = 0; i < num_q; i++) {
+ struct idpf_ctlq_create_info *qinfo = q_info + i;
+
+ err = idpf_ctlq_add(hw, qinfo, &cq);
+ if (err)
+ goto init_destroy_qs;
+ }
+
+ return 0;
+
+init_destroy_qs:
+ list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list)
+ idpf_ctlq_remove(hw, cq);
+
+ return err;
+}
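For orientation, a minimal caller sketch (editorial, not part of this patch) of how the two default mailbox queues might be described and handed to idpf_ctlq_init(); the ring length is an illustrative assumption, and the .reg offsets are left to a device-specific helper such as the ctlq_reg_init() callback shown later in idpf_dev.c.

/* Editorial sketch, assumptions as noted above: ring length of 64 is
 * illustrative; .reg offsets come from a device-specific helper.
 */
static int example_init_dflt_mbx(struct idpf_hw *hw)
{
	struct idpf_ctlq_create_info q_info[] = {
		{
			.type = IDPF_CTLQ_TYPE_MAILBOX_TX,
			.id = -1,		/* -1 selects the default mailbox */
			.len = 64,		/* illustrative ring length */
			.buf_size = IDPF_CTLQ_MAX_BUF_LEN,
		},
		{
			.type = IDPF_CTLQ_TYPE_MAILBOX_RX,
			.id = -1,
			.len = 64,
			.buf_size = IDPF_CTLQ_MAX_BUF_LEN,
		},
	};

	return idpf_ctlq_init(hw, ARRAY_SIZE(q_info), q_info);
}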
+
+/**
+ * idpf_ctlq_deinit - destroy all control queues
+ * @hw: pointer to hw struct
+ */
+void idpf_ctlq_deinit(struct idpf_hw *hw)
+{
+ struct idpf_ctlq_info *cq, *tmp;
+
+ list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list)
+ idpf_ctlq_remove(hw, cq);
+}
+
+/**
+ * idpf_ctlq_send - send command to Control Queue (CQ)
+ * @hw: pointer to hw struct
+ * @cq: handle to control queue struct to send on
+ * @num_q_msg: number of messages to send on control queue
+ * @q_msg: pointer to array of queue messages to be sent
+ *
+ * The caller is expected to allocate DMAable buffers and pass them to the
+ * send routine via the q_msg struct / control queue specific data struct.
+ * The control queue will hold a reference to each send message until
+ * the completion for that message has been cleaned.
+ */
+int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+ u16 num_q_msg, struct idpf_ctlq_msg q_msg[])
+{
+ struct idpf_ctlq_desc *desc;
+ int num_desc_avail;
+ int err = 0;
+ int i;
+
+ mutex_lock(&cq->cq_lock);
+
+ /* Ensure there are enough descriptors to send all messages */
+ num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq);
+ if (num_desc_avail == 0 || num_desc_avail < num_q_msg) {
+ err = -ENOSPC;
+ goto err_unlock;
+ }
+
+ for (i = 0; i < num_q_msg; i++) {
+ struct idpf_ctlq_msg *msg = &q_msg[i];
+
+ desc = IDPF_CTLQ_DESC(cq, cq->next_to_use);
+
+ desc->opcode = cpu_to_le16(msg->opcode);
+ desc->pfid_vfid = cpu_to_le16(msg->func_id);
+
+ desc->v_opcode_dtype = cpu_to_le32(msg->cookie.mbx.chnl_opcode);
+ desc->v_retval = cpu_to_le32(msg->cookie.mbx.chnl_retval);
+
+ desc->flags = cpu_to_le16((msg->host_id & IDPF_HOST_ID_MASK) <<
+ IDPF_CTLQ_FLAG_HOST_ID_S);
+ if (msg->data_len) {
+ struct idpf_dma_mem *buff = msg->ctx.indirect.payload;
+
+ desc->datalen |= cpu_to_le16(msg->data_len);
+ desc->flags |= cpu_to_le16(IDPF_CTLQ_FLAG_BUF);
+ desc->flags |= cpu_to_le16(IDPF_CTLQ_FLAG_RD);
+
+ /* Update the address values in the desc with the pa
+ * value for respective buffer
+ */
+ desc->params.indirect.addr_high =
+ cpu_to_le32(upper_32_bits(buff->pa));
+ desc->params.indirect.addr_low =
+ cpu_to_le32(lower_32_bits(buff->pa));
+
+ memcpy(&desc->params, msg->ctx.indirect.context,
+ IDPF_INDIRECT_CTX_SIZE);
+ } else {
+ memcpy(&desc->params, msg->ctx.direct,
+ IDPF_DIRECT_CTX_SIZE);
+ }
+
+ /* Store buffer info */
+ cq->bi.tx_msg[cq->next_to_use] = msg;
+
+ (cq->next_to_use)++;
+ if (cq->next_to_use == cq->ring_size)
+ cq->next_to_use = 0;
+ }
+
+ /* Force memory write to complete before letting hardware
+ * know that there are new descriptors to fetch.
+ */
+ dma_wmb();
+
+ wr32(hw, cq->reg.tail, cq->next_to_use);
+
+err_unlock:
+ mutex_unlock(&cq->cq_lock);
+
+ return err;
+}
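For illustration, a hedged caller-side sketch (editorial, not part of this patch) that sends one direct, payload-less message on the send mailbox; the chnl_opcode value is a placeholder, and the message object must outlive the call since the queue keeps a reference until idpf_ctlq_clean_sq() returns it.

/* Editorial sketch: send a single direct (no DMA payload) mailbox message. */
static int example_send_direct_msg(struct idpf_hw *hw, struct idpf_ctlq_msg *msg)
{
	msg->opcode = idpf_mbq_opc_send_msg_to_cp;
	msg->func_id = 0;			/* target function id */
	msg->data_len = 0;			/* direct command */
	msg->cookie.mbx.chnl_opcode = 0;	/* placeholder virtchnl opcode */
	msg->cookie.mbx.chnl_retval = 0;

	return idpf_ctlq_send(hw, hw->asq, 1, msg);
}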
+
+/**
+ * idpf_ctlq_clean_sq - reclaim send descriptors on HW write back for the
+ * requested queue
+ * @cq: pointer to the specific Control queue
+ * @clean_count: (input|output) number of descriptors to clean as input, and
+ * number of descriptors actually cleaned as output
+ * @msg_status: (output) pointer to msg pointer array to be populated; needs
+ * to be allocated by caller
+ *
+ * Fills msg_status with pointers to the original ctlq_msgs sent on the
+ * cleaned descriptors, along with the status reported for each; any message
+ * that failed to send will have a non-zero status. The caller is expected to
+ * free the original ctlq_msgs and free or reuse the DMA buffers.
+ */
+int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
+ struct idpf_ctlq_msg *msg_status[])
+{
+ struct idpf_ctlq_desc *desc;
+ u16 i, num_to_clean;
+ u16 ntc, desc_err;
+
+ if (*clean_count == 0)
+ return 0;
+ if (*clean_count > cq->ring_size)
+ return -EBADR;
+
+ mutex_lock(&cq->cq_lock);
+
+ ntc = cq->next_to_clean;
+
+ num_to_clean = *clean_count;
+
+ for (i = 0; i < num_to_clean; i++) {
+ /* Fetch next descriptor and check if marked as done */
+ desc = IDPF_CTLQ_DESC(cq, ntc);
+ if (!(le16_to_cpu(desc->flags) & IDPF_CTLQ_FLAG_DD))
+ break;
+
+ /* strip off FW internal code */
+ desc_err = le16_to_cpu(desc->ret_val) & 0xff;
+
+ msg_status[i] = cq->bi.tx_msg[ntc];
+ msg_status[i]->status = desc_err;
+
+ cq->bi.tx_msg[ntc] = NULL;
+
+ /* Zero out any stale data */
+ memset(desc, 0, sizeof(*desc));
+
+ ntc++;
+ if (ntc == cq->ring_size)
+ ntc = 0;
+ }
+
+ cq->next_to_clean = ntc;
+
+ mutex_unlock(&cq->cq_lock);
+
+ /* Return number of descriptors actually cleaned */
+ *clean_count = i;
+
+ return 0;
+}
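A possible completion-side sketch (editorial): reclaim a handful of completed send descriptors and check the status reported for each.

/* Editorial sketch: reclaim up to four completed send descriptors. */
static void example_clean_send_queue(struct idpf_ctlq_info *asq)
{
	struct idpf_ctlq_msg *done[4];
	u16 count = ARRAY_SIZE(done);
	u16 i;

	if (idpf_ctlq_clean_sq(asq, &count, done))
		return;

	for (i = 0; i < count; i++) {
		if (done[i]->status)
			pr_debug("ctlq msg failed, status %u\n", done[i]->status);
		/* the caller now owns done[i] and any attached DMA buffer */
	}
}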
+
+/**
+ * idpf_ctlq_post_rx_buffs - post buffers to descriptor ring
+ * @hw: pointer to hw struct
+ * @cq: pointer to control queue handle
+ * @buff_count: (input|output) input is number of buffers caller is trying to
+ * return; output is number of buffers that were not posted
+ * @buffs: array of pointers to dma mem structs to be given to hardware
+ *
+ * The caller uses this function to return DMA buffers to the descriptor ring
+ * after consuming them; buff_count is the number of buffers being returned.
+ *
+ * Note: this function must be called after every receive call, even when
+ * there are no DMA buffers to return (i.e. buff_count = 0, buffs = NULL), in
+ * order to support direct commands.
+ */
+int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq,
+ u16 *buff_count, struct idpf_dma_mem **buffs)
+{
+ struct idpf_ctlq_desc *desc;
+ u16 ntp = cq->next_to_post;
+ bool buffs_avail = false;
+ u16 tbp = ntp + 1;
+ int i = 0;
+
+ if (*buff_count > cq->ring_size)
+ return -EBADR;
+
+ if (*buff_count > 0)
+ buffs_avail = true;
+
+ mutex_lock(&cq->cq_lock);
+
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+
+ if (tbp == cq->next_to_clean)
+ /* Nothing to do */
+ goto post_buffs_out;
+
+ /* Post buffers for as many as provided or up until the last one used */
+ while (ntp != cq->next_to_clean) {
+ desc = IDPF_CTLQ_DESC(cq, ntp);
+
+ if (cq->bi.rx_buff[ntp])
+ goto fill_desc;
+ if (!buffs_avail) {
+ /* If the caller hasn't given us any buffers or
+ * there are none left, search the ring itself
+ * for an available buffer to move to this
+ * entry starting at the next entry in the ring
+ */
+ tbp = ntp + 1;
+
+ /* Wrap ring if necessary */
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+
+ while (tbp != cq->next_to_clean) {
+ if (cq->bi.rx_buff[tbp]) {
+ cq->bi.rx_buff[ntp] =
+ cq->bi.rx_buff[tbp];
+ cq->bi.rx_buff[tbp] = NULL;
+
+ /* Found a buffer, no need to
+ * search anymore
+ */
+ break;
+ }
+
+ /* Wrap ring if necessary */
+ tbp++;
+ if (tbp >= cq->ring_size)
+ tbp = 0;
+ }
+
+ if (tbp == cq->next_to_clean)
+ goto post_buffs_out;
+ } else {
+ /* Give back pointer to DMA buffer */
+ cq->bi.rx_buff[ntp] = buffs[i];
+ i++;
+
+ if (i >= *buff_count)
+ buffs_avail = false;
+ }
+
+fill_desc:
+ desc->flags =
+ cpu_to_le16(IDPF_CTLQ_FLAG_BUF | IDPF_CTLQ_FLAG_RD);
+
+ /* Post buffers to descriptor */
+ desc->datalen = cpu_to_le16(cq->bi.rx_buff[ntp]->size);
+ desc->params.indirect.addr_high =
+ cpu_to_le32(upper_32_bits(cq->bi.rx_buff[ntp]->pa));
+ desc->params.indirect.addr_low =
+ cpu_to_le32(lower_32_bits(cq->bi.rx_buff[ntp]->pa));
+
+ ntp++;
+ if (ntp == cq->ring_size)
+ ntp = 0;
+ }
+
+post_buffs_out:
+ /* Only update tail if buffers were actually posted */
+ if (cq->next_to_post != ntp) {
+ if (ntp)
+ /* Update next_to_post to ntp - 1 since current ntp
+ * will not have a buffer
+ */
+ cq->next_to_post = ntp - 1;
+ else
+ /* Wrap to end of ring since current ntp is 0 */
+ cq->next_to_post = cq->ring_size - 1;
+
+ wr32(hw, cq->reg.tail, cq->next_to_post);
+ }
+
+ mutex_unlock(&cq->cq_lock);
+
+ /* return the number of buffers that were not posted */
+ *buff_count = *buff_count - i;
+
+ return 0;
+}
+
+/**
+ * idpf_ctlq_recv - receive control queue messages
+ * @cq: pointer to control queue handle to receive on
+ * @num_q_msg: (input|output) input number of messages that should be received;
+ * output number of messages actually received
+ * @q_msg: (output) array of received control queue messages on this q;
+ * needs to be pre-allocated by caller for as many messages as requested
+ *
+ * Called by interrupt handler or polling mechanism. Caller is expected
+ * to free buffers
+ */
+int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+ struct idpf_ctlq_msg *q_msg)
+{
+ u16 num_to_clean, ntc, flags;
+ struct idpf_ctlq_desc *desc;
+ int err = 0;
+ u16 i;
+
+ if (*num_q_msg == 0)
+ return 0;
+ else if (*num_q_msg > cq->ring_size)
+ return -EBADR;
+
+ /* take the lock before we start messing with the ring */
+ mutex_lock(&cq->cq_lock);
+
+ ntc = cq->next_to_clean;
+
+ num_to_clean = *num_q_msg;
+
+ for (i = 0; i < num_to_clean; i++) {
+ /* Fetch next descriptor and check if marked as done */
+ desc = IDPF_CTLQ_DESC(cq, ntc);
+ flags = le16_to_cpu(desc->flags);
+
+ if (!(flags & IDPF_CTLQ_FLAG_DD))
+ break;
+
+ q_msg[i].vmvf_type = (flags &
+ (IDPF_CTLQ_FLAG_FTYPE_VM |
+ IDPF_CTLQ_FLAG_FTYPE_PF)) >>
+ IDPF_CTLQ_FLAG_FTYPE_S;
+
+ if (flags & IDPF_CTLQ_FLAG_ERR)
+ err = -EBADMSG;
+
+ q_msg[i].cookie.mbx.chnl_opcode =
+ le32_to_cpu(desc->v_opcode_dtype);
+ q_msg[i].cookie.mbx.chnl_retval =
+ le32_to_cpu(desc->v_retval);
+
+ q_msg[i].opcode = le16_to_cpu(desc->opcode);
+ q_msg[i].data_len = le16_to_cpu(desc->datalen);
+ q_msg[i].status = le16_to_cpu(desc->ret_val);
+
+ if (desc->datalen) {
+ memcpy(q_msg[i].ctx.indirect.context,
+ &desc->params.indirect, IDPF_INDIRECT_CTX_SIZE);
+
+ /* Assign pointer to dma buffer to ctlq_msg array
+ * to be given to upper layer
+ */
+ q_msg[i].ctx.indirect.payload = cq->bi.rx_buff[ntc];
+
+ /* Zero out pointer to DMA buffer info;
+ * will be repopulated by post buffers API
+ */
+ cq->bi.rx_buff[ntc] = NULL;
+ } else {
+ memcpy(q_msg[i].ctx.direct, desc->params.raw,
+ IDPF_DIRECT_CTX_SIZE);
+ }
+
+ /* Zero out stale data in descriptor */
+ memset(desc, 0, sizeof(struct idpf_ctlq_desc));
+
+ ntc++;
+ if (ntc == cq->ring_size)
+ ntc = 0;
+ }
+
+ cq->next_to_clean = ntc;
+
+ mutex_unlock(&cq->cq_lock);
+
+ *num_q_msg = i;
+ if (*num_q_msg == 0)
+ err = -ENOMSG;
+
+ return err;
+}
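Tying receive and buffer posting together, a hedged polling sketch (editorial, not part of this patch):

/* Editorial sketch: drain up to four messages from the receive mailbox and
 * hand the consumed DMA buffers back in a single post call. The post call is
 * made even when no buffers were consumed, per the idpf_ctlq_post_rx_buffs()
 * note above.
 */
static void example_poll_mailbox(struct idpf_hw *hw, struct idpf_ctlq_info *arq)
{
	struct idpf_dma_mem *bufs[4];
	struct idpf_ctlq_msg msgs[4];
	u16 nr = ARRAY_SIZE(msgs);
	u16 nbuf = 0;
	u16 i;

	if (idpf_ctlq_recv(arq, &nr, msgs) == -ENOMSG)
		nr = 0;

	for (i = 0; i < nr; i++) {
		/* ... act on msgs[i].cookie.mbx.chnl_opcode and payload ... */
		if (msgs[i].data_len)
			bufs[nbuf++] = msgs[i].ctx.indirect.payload;
	}

	idpf_ctlq_post_rx_buffs(hw, arq, &nbuf, nbuf ? bufs : NULL);
}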
diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq.h b/drivers/net/ethernet/intel/idpf/idpf_controlq.h
new file mode 100644
index 000000000000..c1aba09e9856
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_controlq.h
@@ -0,0 +1,130 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _IDPF_CONTROLQ_H_
+#define _IDPF_CONTROLQ_H_
+
+#include <linux/slab.h>
+
+#include "idpf_controlq_api.h"
+
+/* Maximum buffer length for all control queue types */
+#define IDPF_CTLQ_MAX_BUF_LEN 4096
+
+#define IDPF_CTLQ_DESC(R, i) \
+ (&(((struct idpf_ctlq_desc *)((R)->desc_ring.va))[i]))
+
+#define IDPF_CTLQ_DESC_UNUSED(R) \
+ ((u16)((((R)->next_to_clean > (R)->next_to_use) ? 0 : (R)->ring_size) + \
+ (R)->next_to_clean - (R)->next_to_use - 1))
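A worked example of the unused-descriptor arithmetic (editorial, not part of the patch):

/* With ring_size = 64: next_to_clean = 14, next_to_use = 10 gives
 * 0 + 14 - 10 - 1 = 3 unused slots; next_to_clean = 10, next_to_use = 14
 * wraps as 64 + 10 - 14 - 1 = 59. The trailing "- 1" keeps one slot
 * permanently unused so a full ring is distinguishable from an empty one.
 */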
+
+/* Control Queue default settings */
+#define IDPF_CTRL_SQ_CMD_TIMEOUT 250 /* msecs */
+
+struct idpf_ctlq_desc {
+ /* Control queue descriptor flags */
+ __le16 flags;
+ /* Control queue message opcode */
+ __le16 opcode;
+ __le16 datalen; /* 0 for direct commands */
+ union {
+ __le16 ret_val;
+ __le16 pfid_vfid;
+#define IDPF_CTLQ_DESC_VF_ID_S 0
+#define IDPF_CTLQ_DESC_VF_ID_M (0x7FF << IDPF_CTLQ_DESC_VF_ID_S)
+#define IDPF_CTLQ_DESC_PF_ID_S 11
+#define IDPF_CTLQ_DESC_PF_ID_M (0x1F << IDPF_CTLQ_DESC_PF_ID_S)
+ };
+
+ /* Virtchnl message opcode and virtchnl descriptor type
+ * v_opcode=[27:0], v_dtype=[31:28]
+ */
+ __le32 v_opcode_dtype;
+ /* Virtchnl return value */
+ __le32 v_retval;
+ union {
+ struct {
+ __le32 param0;
+ __le32 param1;
+ __le32 param2;
+ __le32 param3;
+ } direct;
+ struct {
+ __le32 param0;
+ __le16 sw_cookie;
+ /* Virtchnl flags */
+ __le16 v_flags;
+ __le32 addr_high;
+ __le32 addr_low;
+ } indirect;
+ u8 raw[16];
+ } params;
+};
+
+/* Flags sub-structure
+ * |0 |1 |2 |3 |4 |5 |6 |7 |8 |9 |10 |11 |12 |13 |14 |15 |
+ * |DD |CMP|ERR| * RSV * |FTYPE | *RSV* |RD |VFC|BUF| HOST_ID |
+ */
+/* command flags and offsets */
+#define IDPF_CTLQ_FLAG_DD_S 0
+#define IDPF_CTLQ_FLAG_CMP_S 1
+#define IDPF_CTLQ_FLAG_ERR_S 2
+#define IDPF_CTLQ_FLAG_FTYPE_S 6
+#define IDPF_CTLQ_FLAG_RD_S 10
+#define IDPF_CTLQ_FLAG_VFC_S 11
+#define IDPF_CTLQ_FLAG_BUF_S 12
+#define IDPF_CTLQ_FLAG_HOST_ID_S 13
+
+#define IDPF_CTLQ_FLAG_DD BIT(IDPF_CTLQ_FLAG_DD_S) /* 0x1 */
+#define IDPF_CTLQ_FLAG_CMP BIT(IDPF_CTLQ_FLAG_CMP_S) /* 0x2 */
+#define IDPF_CTLQ_FLAG_ERR BIT(IDPF_CTLQ_FLAG_ERR_S) /* 0x4 */
+#define IDPF_CTLQ_FLAG_FTYPE_VM BIT(IDPF_CTLQ_FLAG_FTYPE_S) /* 0x40 */
+#define IDPF_CTLQ_FLAG_FTYPE_PF BIT(IDPF_CTLQ_FLAG_FTYPE_S + 1) /* 0x80 */
+#define IDPF_CTLQ_FLAG_RD BIT(IDPF_CTLQ_FLAG_RD_S) /* 0x400 */
+#define IDPF_CTLQ_FLAG_VFC BIT(IDPF_CTLQ_FLAG_VFC_S) /* 0x800 */
+#define IDPF_CTLQ_FLAG_BUF BIT(IDPF_CTLQ_FLAG_BUF_S) /* 0x1000 */
+
+/* Host ID is a special field that has 3b and not a 1b flag */
+#define IDPF_CTLQ_FLAG_HOST_ID_M MAKE_MASK(0x7000UL, IDPF_CTLQ_FLAG_HOST_ID_S)
+
+struct idpf_mbxq_desc {
+ u8 pad[8]; /* CTLQ flags/opcode/len/retval fields */
+ u32 chnl_opcode; /* avoid confusion with desc->opcode */
+ u32 chnl_retval; /* ditto for desc->retval */
+ u32 pf_vf_id; /* used by CP when sending to PF */
+};
+
+/* Define the driver hardware struct to replace other control structs as
+ * needed. Align to ctlq_hw_info.
+ */
+struct idpf_hw {
+ void __iomem *hw_addr;
+ resource_size_t hw_addr_len;
+
+ struct idpf_adapter *back;
+
+ /* control queue - send and receive */
+ struct idpf_ctlq_info *asq;
+ struct idpf_ctlq_info *arq;
+
+ /* pci info */
+ u16 device_id;
+ u16 vendor_id;
+ u16 subsystem_device_id;
+ u16 subsystem_vendor_id;
+ u8 revision_id;
+ bool adapter_stopped;
+
+ struct list_head cq_list_head;
+};
+
+int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw,
+ struct idpf_ctlq_info *cq);
+
+void idpf_ctlq_dealloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq);
+
+/* prototype for functions used for dynamic memory allocation */
+void *idpf_alloc_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem,
+ u64 size);
+void idpf_free_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem);
+#endif /* _IDPF_CONTROLQ_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq_api.h b/drivers/net/ethernet/intel/idpf/idpf_controlq_api.h
new file mode 100644
index 000000000000..8dee098bbfb0
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_controlq_api.h
@@ -0,0 +1,169 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _IDPF_CONTROLQ_API_H_
+#define _IDPF_CONTROLQ_API_H_
+
+#include "idpf_mem.h"
+
+struct idpf_hw;
+
+/* Used for queue init, response and events */
+enum idpf_ctlq_type {
+ IDPF_CTLQ_TYPE_MAILBOX_TX = 0,
+ IDPF_CTLQ_TYPE_MAILBOX_RX = 1,
+ IDPF_CTLQ_TYPE_CONFIG_TX = 2,
+ IDPF_CTLQ_TYPE_CONFIG_RX = 3,
+ IDPF_CTLQ_TYPE_EVENT_RX = 4,
+ IDPF_CTLQ_TYPE_RDMA_TX = 5,
+ IDPF_CTLQ_TYPE_RDMA_RX = 6,
+ IDPF_CTLQ_TYPE_RDMA_COMPL = 7
+};
+
+/* Generic Control Queue Structures */
+struct idpf_ctlq_reg {
+ /* used for queue tracking */
+ u32 head;
+ u32 tail;
+ /* Below applies only to default mb (if present) */
+ u32 len;
+ u32 bah;
+ u32 bal;
+ u32 len_mask;
+ u32 len_ena_mask;
+ u32 head_mask;
+};
+
+/* Generic queue msg structure */
+struct idpf_ctlq_msg {
+ u8 vmvf_type; /* represents the source of the message on recv */
+#define IDPF_VMVF_TYPE_VF 0
+#define IDPF_VMVF_TYPE_VM 1
+#define IDPF_VMVF_TYPE_PF 2
+ u8 host_id;
+ /* 3b field used only when sending a message to CP - to be used in
+ * combination with target func_id to route the message
+ */
+#define IDPF_HOST_ID_MASK 0x7
+
+ u16 opcode;
+ u16 data_len; /* data_len = 0 when no payload is attached */
+ union {
+ u16 func_id; /* when sending a message */
+ u16 status; /* when receiving a message */
+ };
+ union {
+ struct {
+ u32 chnl_opcode;
+ u32 chnl_retval;
+ } mbx;
+ } cookie;
+ union {
+#define IDPF_DIRECT_CTX_SIZE 16
+#define IDPF_INDIRECT_CTX_SIZE 8
+ /* 16 bytes of context can be provided or 8 bytes of context
+ * plus the address of a DMA buffer
+ */
+ u8 direct[IDPF_DIRECT_CTX_SIZE];
+ struct {
+ u8 context[IDPF_INDIRECT_CTX_SIZE];
+ struct idpf_dma_mem *payload;
+ } indirect;
+ } ctx;
+};
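As a counterpart to the direct-command sketch earlier, a hedged sketch (editorial) of filling in an indirect message, assuming the DMA buffer was allocated beforehand (e.g. via idpf_alloc_dma_mem()):

/* Editorial sketch: attach a pre-allocated DMA buffer holding 'len' bytes
 * to a message as an indirect payload. A non-zero data_len is what selects
 * the indirect context in idpf_ctlq_send().
 */
static inline void example_fill_indirect_msg(struct idpf_ctlq_msg *msg,
					     struct idpf_dma_mem *buf, u16 len)
{
	msg->data_len = len;			/* payload size in buf->va */
	msg->cookie.mbx.chnl_opcode = 0;	/* placeholder virtchnl opcode */
	msg->ctx.indirect.payload = buf;	/* DMA buffer given to the queue */
}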
+
+/* Generic queue info structures */
+/* MB, CONFIG and EVENT q do not have extended info */
+struct idpf_ctlq_create_info {
+ enum idpf_ctlq_type type;
+ int id; /* absolute queue offset passed as input
+ * -1 for default mailbox if present
+ */
+ u16 len; /* Queue length passed as input */
+ u16 buf_size; /* buffer size passed as input */
+ u64 base_address; /* output, HPA of the Queue start */
+ struct idpf_ctlq_reg reg; /* registers accessed by ctlqs */
+
+ int ext_info_size;
+ void *ext_info; /* Specific to q type */
+};
+
+/* Control Queue information */
+struct idpf_ctlq_info {
+ struct list_head cq_list;
+
+ enum idpf_ctlq_type cq_type;
+ int q_id;
+ struct mutex cq_lock; /* control queue lock */
+ /* used for interrupt processing */
+ u16 next_to_use;
+ u16 next_to_clean;
+ u16 next_to_post; /* starting descriptor to post buffers
+ * to after recv
+ */
+
+ struct idpf_dma_mem desc_ring; /* descriptor ring memory
+ * idpf_dma_mem is defined in idpf_mem.h
+ */
+ union {
+ struct idpf_dma_mem **rx_buff;
+ struct idpf_ctlq_msg **tx_msg;
+ } bi;
+
+ u16 buf_size; /* queue buffer size */
+ u16 ring_size; /* Number of descriptors */
+ struct idpf_ctlq_reg reg; /* registers accessed by ctlqs */
+};
+
+/**
+ * enum idpf_mbx_opc - PF/VF mailbox commands
+ * @idpf_mbq_opc_send_msg_to_cp: used by PF or VF to send a message to its CP
+ */
+enum idpf_mbx_opc {
+ idpf_mbq_opc_send_msg_to_cp = 0x0801,
+};
+
+/* API supported for control queue management */
+/* Will init all required control queues, including the default mailbox.
+ * "q_info" is an array of create_info structs with one entry per control
+ * queue to be created.
+ */
+int idpf_ctlq_init(struct idpf_hw *hw, u8 num_q,
+ struct idpf_ctlq_create_info *q_info);
+
+/* Allocate and initialize a single control queue, which will be added to the
+ * control queue list; returns a handle to the created control queue
+ */
+int idpf_ctlq_add(struct idpf_hw *hw,
+ struct idpf_ctlq_create_info *qinfo,
+ struct idpf_ctlq_info **cq);
+
+/* Deinitialize and deallocate a single control queue */
+void idpf_ctlq_remove(struct idpf_hw *hw,
+ struct idpf_ctlq_info *cq);
+
+/* Sends messages to HW; the queue holds a reference to each message until
+ * its completion has been cleaned by idpf_ctlq_clean_sq()
+ */
+int idpf_ctlq_send(struct idpf_hw *hw,
+ struct idpf_ctlq_info *cq,
+ u16 num_q_msg,
+ struct idpf_ctlq_msg q_msg[]);
+
+/* Receives messages; called from the interrupt handler or from polling
+ * initiated by the app/process. The caller is expected to free the buffers.
+ */
+int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+ struct idpf_ctlq_msg *q_msg);
+
+/* Reclaims send descriptors on HW write back */
+int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
+ struct idpf_ctlq_msg *msg_status[]);
+
+/* Indicate RX buffers are done being processed */
+int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw,
+ struct idpf_ctlq_info *cq,
+ u16 *buff_count,
+ struct idpf_dma_mem **buffs);
+
+/* Will destroy all control queues, including the default mailbox */
+void idpf_ctlq_deinit(struct idpf_hw *hw);
+
+#endif /* _IDPF_CONTROLQ_API_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq_setup.c b/drivers/net/ethernet/intel/idpf/idpf_controlq_setup.c
new file mode 100644
index 000000000000..a942a6385d06
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_controlq_setup.c
@@ -0,0 +1,171 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2023 Intel Corporation */
+
+#include "idpf_controlq.h"
+
+/**
+ * idpf_ctlq_alloc_desc_ring - Allocate Control Queue (CQ) rings
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ */
+static int idpf_ctlq_alloc_desc_ring(struct idpf_hw *hw,
+ struct idpf_ctlq_info *cq)
+{
+ size_t size = cq->ring_size * sizeof(struct idpf_ctlq_desc);
+
+ cq->desc_ring.va = idpf_alloc_dma_mem(hw, &cq->desc_ring, size);
+ if (!cq->desc_ring.va)
+ return -ENOMEM;
+
+ return 0;
+}
+
+/**
+ * idpf_ctlq_alloc_bufs - Allocate Control Queue (CQ) buffers
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Allocate the buffer head for all control queues, and if it's a receive
+ * queue, allocate DMA buffers
+ */
+static int idpf_ctlq_alloc_bufs(struct idpf_hw *hw,
+ struct idpf_ctlq_info *cq)
+{
+ int i;
+
+ /* Do not allocate DMA buffers for transmit queues */
+ if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
+ return 0;
+
+ /* We'll be allocating the buffer info memory first, then we can
+ * allocate the mapped buffers for the event processing
+ */
+ cq->bi.rx_buff = kcalloc(cq->ring_size, sizeof(struct idpf_dma_mem *),
+ GFP_KERNEL);
+ if (!cq->bi.rx_buff)
+ return -ENOMEM;
+
+ /* allocate the mapped buffers (except for the last one) */
+ for (i = 0; i < cq->ring_size - 1; i++) {
+ struct idpf_dma_mem *bi;
+ int num = 1; /* number of idpf_dma_mem to be allocated */
+
+ cq->bi.rx_buff[i] = kcalloc(num, sizeof(struct idpf_dma_mem),
+ GFP_KERNEL);
+ if (!cq->bi.rx_buff[i])
+ goto unwind_alloc_cq_bufs;
+
+ bi = cq->bi.rx_buff[i];
+
+ bi->va = idpf_alloc_dma_mem(hw, bi, cq->buf_size);
+ if (!bi->va) {
+ /* unwind will not free the failed entry */
+ kfree(cq->bi.rx_buff[i]);
+ goto unwind_alloc_cq_bufs;
+ }
+ }
+
+ return 0;
+
+unwind_alloc_cq_bufs:
+ /* don't try to free the one that failed... */
+ i--;
+ for (; i >= 0; i--) {
+ idpf_free_dma_mem(hw, cq->bi.rx_buff[i]);
+ kfree(cq->bi.rx_buff[i]);
+ }
+ kfree(cq->bi.rx_buff);
+
+ return -ENOMEM;
+}
+
+/**
+ * idpf_ctlq_free_desc_ring - Free Control Queue (CQ) rings
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * This assumes the posted send buffers have already been cleaned
+ * and de-allocated
+ */
+static void idpf_ctlq_free_desc_ring(struct idpf_hw *hw,
+ struct idpf_ctlq_info *cq)
+{
+ idpf_free_dma_mem(hw, &cq->desc_ring);
+}
+
+/**
+ * idpf_ctlq_free_bufs - Free CQ buffer info elements
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Free the DMA buffers for RX queues, and DMA buffer header for both RX and TX
+ * queues. The upper layers are expected to manage freeing of TX DMA buffers
+ */
+static void idpf_ctlq_free_bufs(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
+{
+ void *bi;
+
+ if (cq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_RX) {
+ int i;
+
+ /* free DMA buffers for rx queues */
+ for (i = 0; i < cq->ring_size; i++) {
+ if (cq->bi.rx_buff[i]) {
+ idpf_free_dma_mem(hw, cq->bi.rx_buff[i]);
+ kfree(cq->bi.rx_buff[i]);
+ }
+ }
+
+ bi = (void *)cq->bi.rx_buff;
+ } else {
+ bi = (void *)cq->bi.tx_msg;
+ }
+
+ /* free the buffer header */
+ kfree(bi);
+}
+
+/**
+ * idpf_ctlq_dealloc_ring_res - Free memory allocated for control queue
+ * @hw: pointer to hw struct
+ * @cq: pointer to the specific Control queue
+ *
+ * Free the memory used by the ring, buffers and other related structures
+ */
+void idpf_ctlq_dealloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
+{
+ /* free ring buffers and the ring itself */
+ idpf_ctlq_free_bufs(hw, cq);
+ idpf_ctlq_free_desc_ring(hw, cq);
+}
+
+/**
+ * idpf_ctlq_alloc_ring_res - allocate memory for descriptor ring and bufs
+ * @hw: pointer to hw struct
+ * @cq: pointer to control queue struct
+ *
+ * Do *NOT* hold cq_lock when calling this as the memory allocation routines
+ * called are not going to be atomic context safe
+ */
+int idpf_ctlq_alloc_ring_res(struct idpf_hw *hw, struct idpf_ctlq_info *cq)
+{
+ int err;
+
+ /* allocate the ring memory */
+ err = idpf_ctlq_alloc_desc_ring(hw, cq);
+ if (err)
+ return err;
+
+ /* allocate buffers in the rings */
+ err = idpf_ctlq_alloc_bufs(hw, cq);
+ if (err)
+ goto idpf_init_cq_free_ring;
+
+ /* success! */
+ return 0;
+
+idpf_init_cq_free_ring:
+ idpf_free_dma_mem(hw, &cq->desc_ring);
+
+ return err;
+}
diff --git a/drivers/net/ethernet/intel/idpf/idpf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_dev.c
new file mode 100644
index 000000000000..34ad1ac46b78
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_dev.c
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2023 Intel Corporation */
+
+#include "idpf.h"
+#include "idpf_lan_pf_regs.h"
+
+#define IDPF_PF_ITR_IDX_SPACING 0x4
+
+/**
+ * idpf_ctlq_reg_init - initialize default mailbox registers
+ * @cq: pointer to the array of control queue create info structs
+ */
+static void idpf_ctlq_reg_init(struct idpf_ctlq_create_info *cq)
+{
+ int i;
+
+ for (i = 0; i < IDPF_NUM_DFLT_MBX_Q; i++) {
+ struct idpf_ctlq_create_info *ccq = cq + i;
+
+ switch (ccq->type) {
+ case IDPF_CTLQ_TYPE_MAILBOX_TX:
+ /* set head and tail registers in our local struct */
+ ccq->reg.head = PF_FW_ATQH;
+ ccq->reg.tail = PF_FW_ATQT;
+ ccq->reg.len = PF_FW_ATQLEN;
+ ccq->reg.bah = PF_FW_ATQBAH;
+ ccq->reg.bal = PF_FW_ATQBAL;
+ ccq->reg.len_mask = PF_FW_ATQLEN_ATQLEN_M;
+ ccq->reg.len_ena_mask = PF_FW_ATQLEN_ATQENABLE_M;
+ ccq->reg.head_mask = PF_FW_ATQH_ATQH_M;
+ break;
+ case IDPF_CTLQ_TYPE_MAILBOX_RX:
+ /* set head and tail registers in our local struct */
+ ccq->reg.head = PF_FW_ARQH;
+ ccq->reg.tail = PF_FW_ARQT;
+ ccq->reg.len = PF_FW_ARQLEN;
+ ccq->reg.bah = PF_FW_ARQBAH;
+ ccq->reg.bal = PF_FW_ARQBAL;
+ ccq->reg.len_mask = PF_FW_ARQLEN_ARQLEN_M;
+ ccq->reg.len_ena_mask = PF_FW_ARQLEN_ARQENABLE_M;
+ ccq->reg.head_mask = PF_FW_ARQH_ARQH_M;
+ break;
+ default:
+ break;
+ }
+ }
+}
+
+/**
+ * idpf_mb_intr_reg_init - Initialize mailbox interrupt register
+ * @adapter: adapter structure
+ */
+static void idpf_mb_intr_reg_init(struct idpf_adapter *adapter)
+{
+ struct idpf_intr_reg *intr = &adapter->mb_vector.intr_reg;
+ u32 dyn_ctl = le32_to_cpu(adapter->caps.mailbox_dyn_ctl);
+
+ intr->dyn_ctl = idpf_get_reg_addr(adapter, dyn_ctl);
+ intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M;
+ intr->dyn_ctl_itridx_m = PF_GLINT_DYN_CTL_ITR_INDX_M;
+ intr->icr_ena = idpf_get_reg_addr(adapter, PF_INT_DIR_OICR_ENA);
+ intr->icr_ena_ctlq_m = PF_INT_DIR_OICR_ENA_M;
+}
+
+/**
+ * idpf_intr_reg_init - Initialize interrupt registers
+ * @vport: virtual port structure
+ */
+static int idpf_intr_reg_init(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ int num_vecs = vport->num_q_vectors;
+ struct idpf_vec_regs *reg_vals;
+ int num_regs, i, err = 0;
+ u32 rx_itr, tx_itr;
+ u16 total_vecs;
+
+ total_vecs = idpf_get_reserved_vecs(vport->adapter);
+ reg_vals = kcalloc(total_vecs, sizeof(struct idpf_vec_regs),
+ GFP_KERNEL);
+ if (!reg_vals)
+ return -ENOMEM;
+
+ num_regs = idpf_get_reg_intr_vecs(vport, reg_vals);
+ if (num_regs < num_vecs) {
+ err = -EINVAL;
+ goto free_reg_vals;
+ }
+
+ for (i = 0; i < num_vecs; i++) {
+ struct idpf_q_vector *q_vector = &vport->q_vectors[i];
+ u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC;
+ struct idpf_intr_reg *intr = &q_vector->intr_reg;
+ u32 spacing;
+
+ intr->dyn_ctl = idpf_get_reg_addr(adapter,
+ reg_vals[vec_id].dyn_ctl_reg);
+ intr->dyn_ctl_intena_m = PF_GLINT_DYN_CTL_INTENA_M;
+ intr->dyn_ctl_itridx_s = PF_GLINT_DYN_CTL_ITR_INDX_S;
+ intr->dyn_ctl_intrvl_s = PF_GLINT_DYN_CTL_INTERVAL_S;
+
+ spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing,
+ IDPF_PF_ITR_IDX_SPACING);
+ rx_itr = PF_GLINT_ITR_ADDR(VIRTCHNL2_ITR_IDX_0,
+ reg_vals[vec_id].itrn_reg,
+ spacing);
+ tx_itr = PF_GLINT_ITR_ADDR(VIRTCHNL2_ITR_IDX_1,
+ reg_vals[vec_id].itrn_reg,
+ spacing);
+ intr->rx_itr = idpf_get_reg_addr(adapter, rx_itr);
+ intr->tx_itr = idpf_get_reg_addr(adapter, tx_itr);
+ }
+
+free_reg_vals:
+ kfree(reg_vals);
+
+ return err;
+}
+
+/**
+ * idpf_reset_reg_init - Initialize reset registers
+ * @adapter: Driver specific private structure
+ */
+static void idpf_reset_reg_init(struct idpf_adapter *adapter)
+{
+ adapter->reset_reg.rstat = idpf_get_reg_addr(adapter, PFGEN_RSTAT);
+ adapter->reset_reg.rstat_m = PFGEN_RSTAT_PFR_STATE_M;
+}
+
+/**
+ * idpf_trigger_reset - trigger reset
+ * @adapter: Driver specific private structure
+ * @trig_cause: Reason to trigger a reset
+ */
+static void idpf_trigger_reset(struct idpf_adapter *adapter,
+ enum idpf_flags __always_unused trig_cause)
+{
+ u32 reset_reg;
+
+ reset_reg = readl(idpf_get_reg_addr(adapter, PFGEN_CTRL));
+ writel(reset_reg | PFGEN_CTRL_PFSWR,
+ idpf_get_reg_addr(adapter, PFGEN_CTRL));
+}
+
+/**
+ * idpf_reg_ops_init - Initialize register API function pointers
+ * @adapter: Driver specific private structure
+ */
+static void idpf_reg_ops_init(struct idpf_adapter *adapter)
+{
+ adapter->dev_ops.reg_ops.ctlq_reg_init = idpf_ctlq_reg_init;
+ adapter->dev_ops.reg_ops.intr_reg_init = idpf_intr_reg_init;
+ adapter->dev_ops.reg_ops.mb_intr_reg_init = idpf_mb_intr_reg_init;
+ adapter->dev_ops.reg_ops.reset_reg_init = idpf_reset_reg_init;
+ adapter->dev_ops.reg_ops.trigger_reset = idpf_trigger_reset;
+}
+
+/**
+ * idpf_dev_ops_init - Initialize device API function pointers
+ * @adapter: Driver specific private structure
+ */
+void idpf_dev_ops_init(struct idpf_adapter *adapter)
+{
+ idpf_reg_ops_init(adapter);
+}
diff --git a/drivers/net/ethernet/intel/idpf/idpf_devids.h b/drivers/net/ethernet/intel/idpf/idpf_devids.h
new file mode 100644
index 000000000000..5154a52ae61c
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_devids.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _IDPF_DEVIDS_H_
+#define _IDPF_DEVIDS_H_
+
+#define IDPF_DEV_ID_PF 0x1452
+#define IDPF_DEV_ID_VF 0x145C
+
+#endif /* _IDPF_DEVIDS_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
new file mode 100644
index 000000000000..52ea38669f85
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c
@@ -0,0 +1,1369 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2023 Intel Corporation */
+
+#include "idpf.h"
+
+/**
+ * idpf_get_rxnfc - command to get RX flow classification rules
+ * @netdev: network interface device structure
+ * @cmd: ethtool rxnfc command
+ * @rule_locs: pointer to store rule locations
+ *
+ * Returns 0 if the command is supported, -EOPNOTSUPP otherwise.
+ */
+static int idpf_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd,
+ u32 __always_unused *rule_locs)
+{
+ struct idpf_vport *vport;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ switch (cmd->cmd) {
+ case ETHTOOL_GRXRINGS:
+ cmd->data = vport->num_rxq;
+ idpf_vport_ctrl_unlock(netdev);
+
+ return 0;
+ default:
+ break;
+ }
+
+ idpf_vport_ctrl_unlock(netdev);
+
+ return -EOPNOTSUPP;
+}
+
+/**
+ * idpf_get_rxfh_key_size - get the RSS hash key size
+ * @netdev: network interface device structure
+ *
+ * Returns the key size on success, error value on failure.
+ */
+static u32 idpf_get_rxfh_key_size(struct net_device *netdev)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_vport_user_config_data *user_config;
+
+ if (!idpf_is_cap_ena_all(np->adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS))
+ return -EOPNOTSUPP;
+
+ user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
+
+ return user_config->rss_data.rss_key_size;
+}
+
+/**
+ * idpf_get_rxfh_indir_size - get the rx flow hash indirection table size
+ * @netdev: network interface device structure
+ *
+ * Returns the table size on success, error value on failure.
+ */
+static u32 idpf_get_rxfh_indir_size(struct net_device *netdev)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_vport_user_config_data *user_config;
+
+ if (!idpf_is_cap_ena_all(np->adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS))
+ return -EOPNOTSUPP;
+
+ user_config = &np->adapter->vport_config[np->vport_idx]->user_config;
+
+ return user_config->rss_data.rss_lut_size;
+}
+
+/**
+ * idpf_get_rxfh - get the rx flow hash indirection table
+ * @netdev: network interface device structure
+ * @indir: indirection table
+ * @key: hash key
+ * @hfunc: hash function in use
+ *
+ * Reads the driver's cached indirection table and RSS key. Returns 0 on
+ * success, negative on failure.
+ */
+static int idpf_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
+ u8 *hfunc)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_rss_data *rss_data;
+ struct idpf_adapter *adapter;
+ int err = 0;
+ u16 i;
+
+ idpf_vport_ctrl_lock(netdev);
+
+ adapter = np->adapter;
+
+ if (!idpf_is_cap_ena_all(adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS)) {
+ err = -EOPNOTSUPP;
+ goto unlock_mutex;
+ }
+
+ rss_data = &adapter->vport_config[np->vport_idx]->user_config.rss_data;
+ if (np->state != __IDPF_VPORT_UP)
+ goto unlock_mutex;
+
+ if (hfunc)
+ *hfunc = ETH_RSS_HASH_TOP;
+
+ if (key)
+ memcpy(key, rss_data->rss_key, rss_data->rss_key_size);
+
+ if (indir) {
+ for (i = 0; i < rss_data->rss_lut_size; i++)
+ indir[i] = rss_data->rss_lut[i];
+ }
+
+unlock_mutex:
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+}
+
+/**
+ * idpf_set_rxfh - set the rx flow hash indirection table
+ * @netdev: network interface device structure
+ * @indir: indirection table
+ * @key: hash key
+ * @hfunc: hash function to use
+ *
+ * Returns 0 after programming the table, negative on failure.
+ */
+static int idpf_set_rxfh(struct net_device *netdev, const u32 *indir,
+ const u8 *key, const u8 hfunc)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_rss_data *rss_data;
+ struct idpf_adapter *adapter;
+ struct idpf_vport *vport;
+ int err = 0;
+ u16 lut;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ adapter = vport->adapter;
+
+ if (!idpf_is_cap_ena_all(adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS)) {
+ err = -EOPNOTSUPP;
+ goto unlock_mutex;
+ }
+
+ rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
+ if (np->state != __IDPF_VPORT_UP)
+ goto unlock_mutex;
+
+ if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP) {
+ err = -EOPNOTSUPP;
+ goto unlock_mutex;
+ }
+
+ if (key)
+ memcpy(rss_data->rss_key, key, rss_data->rss_key_size);
+
+ if (indir) {
+ for (lut = 0; lut < rss_data->rss_lut_size; lut++)
+ rss_data->rss_lut[lut] = indir[lut];
+ }
+
+ err = idpf_config_rss(vport);
+
+unlock_mutex:
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+}
+
+/**
+ * idpf_get_channels - get the number of channels supported by the device
+ * @netdev: network interface device structure
+ * @ch: channel information structure
+ *
+ * Report the maximum number of TX and RX channels, plus one extra 'other'
+ * channel to account for the mailbox queue.
+ */
+static void idpf_get_channels(struct net_device *netdev,
+ struct ethtool_channels *ch)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_vport_config *vport_config;
+ u16 num_txq, num_rxq;
+ u16 combined;
+
+ vport_config = np->adapter->vport_config[np->vport_idx];
+
+ num_txq = vport_config->user_config.num_req_tx_qs;
+ num_rxq = vport_config->user_config.num_req_rx_qs;
+
+ combined = min(num_txq, num_rxq);
+
+ /* Report maximum channels */
+ ch->max_combined = min_t(u16, vport_config->max_q.max_txq,
+ vport_config->max_q.max_rxq);
+ ch->max_rx = vport_config->max_q.max_rxq;
+ ch->max_tx = vport_config->max_q.max_txq;
+
+ ch->max_other = IDPF_MAX_MBXQ;
+ ch->other_count = IDPF_MAX_MBXQ;
+
+ ch->combined_count = combined;
+ ch->rx_count = num_rxq - combined;
+ ch->tx_count = num_txq - combined;
+}
+
+/**
+ * idpf_set_channels - set the new channel count
+ * @netdev: network interface device structure
+ * @ch: channel information structure
+ *
+ * Negotiate a new number of channels with CP. Returns 0 on success, negative
+ * on failure.
+ */
+static int idpf_set_channels(struct net_device *netdev,
+ struct ethtool_channels *ch)
+{
+ struct idpf_vport_config *vport_config;
+ u16 combined, num_txq, num_rxq;
+ unsigned int num_req_tx_q;
+ unsigned int num_req_rx_q;
+ struct idpf_vport *vport;
+ struct device *dev;
+ int err = 0;
+ u16 idx;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ idx = vport->idx;
+ vport_config = vport->adapter->vport_config[idx];
+
+ num_txq = vport_config->user_config.num_req_tx_qs;
+ num_rxq = vport_config->user_config.num_req_rx_qs;
+
+ combined = min(num_txq, num_rxq);
+
+ /* These checks handle the case where the user didn't specify a
+ * particular value on the command line but we still get a non-zero
+ * value via get_channels(); see the do_schannels() routine in
+ * ethtool.c of the user-space ethtool sources.
+ */
+ if (ch->combined_count == combined)
+ ch->combined_count = 0;
+ if (ch->combined_count && ch->rx_count == num_rxq - combined)
+ ch->rx_count = 0;
+ if (ch->combined_count && ch->tx_count == num_txq - combined)
+ ch->tx_count = 0;
+
+ num_req_tx_q = ch->combined_count + ch->tx_count;
+ num_req_rx_q = ch->combined_count + ch->rx_count;
+
+ dev = &vport->adapter->pdev->dev;
+ /* It's possible to specify a number of queues that exceeds the max.
+ * The stack checks max combined_count and max [tx|rx]_count, but not
+ * max combined_count + [tx|rx]_count. These checks should catch that.
+ */
+ if (num_req_tx_q > vport_config->max_q.max_txq) {
+ dev_info(dev, "Maximum TX queues is %d\n",
+ vport_config->max_q.max_txq);
+ err = -EINVAL;
+ goto unlock_mutex;
+ }
+ if (num_req_rx_q > vport_config->max_q.max_rxq) {
+ dev_info(dev, "Maximum RX queues is %d\n",
+ vport_config->max_q.max_rxq);
+ err = -EINVAL;
+ goto unlock_mutex;
+ }
+
+ if (num_req_tx_q == num_txq && num_req_rx_q == num_rxq)
+ goto unlock_mutex;
+
+ vport_config->user_config.num_req_tx_qs = num_req_tx_q;
+ vport_config->user_config.num_req_rx_qs = num_req_rx_q;
+
+ err = idpf_initiate_soft_reset(vport, IDPF_SR_Q_CHANGE);
+ if (err) {
+ /* roll back queue change */
+ vport_config->user_config.num_req_tx_qs = num_txq;
+ vport_config->user_config.num_req_rx_qs = num_rxq;
+ }
+
+unlock_mutex:
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+}
+
+/**
+ * idpf_get_ringparam - Get ring parameters
+ * @netdev: network interface device structure
+ * @ring: ethtool ringparam structure
+ * @kring: unused
+ * @ext_ack: unused
+ *
+ * Returns current ring parameters. TX and RX rings are reported separately,
+ * but the number of rings is not reported.
+ */
+static void idpf_get_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring,
+ struct kernel_ethtool_ringparam *kring,
+ struct netlink_ext_ack *ext_ack)
+{
+ struct idpf_vport *vport;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ ring->rx_max_pending = IDPF_MAX_RXQ_DESC;
+ ring->tx_max_pending = IDPF_MAX_TXQ_DESC;
+ ring->rx_pending = vport->rxq_desc_count;
+ ring->tx_pending = vport->txq_desc_count;
+
+ idpf_vport_ctrl_unlock(netdev);
+}
+
+/**
+ * idpf_set_ringparam - Set ring parameters
+ * @netdev: network interface device structure
+ * @ring: ethtool ringparam structure
+ * @kring: unused
+ * @ext_ack: unused
+ *
+ * Sets ring parameters. TX and RX rings are controlled separately, but the
+ * number of rings is not specified, so all rings get the same settings.
+ */
+static int idpf_set_ringparam(struct net_device *netdev,
+ struct ethtool_ringparam *ring,
+ struct kernel_ethtool_ringparam *kring,
+ struct netlink_ext_ack *ext_ack)
+{
+ struct idpf_vport_user_config_data *config_data;
+ u32 new_rx_count, new_tx_count;
+ struct idpf_vport *vport;
+ int i, err = 0;
+ u16 idx;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ idx = vport->idx;
+
+ if (ring->tx_pending < IDPF_MIN_TXQ_DESC) {
+ netdev_err(netdev, "Descriptors requested (Tx: %u) is less than min supported (%u)\n",
+ ring->tx_pending,
+ IDPF_MIN_TXQ_DESC);
+ err = -EINVAL;
+ goto unlock_mutex;
+ }
+
+ if (ring->rx_pending < IDPF_MIN_RXQ_DESC) {
+ netdev_err(netdev, "Descriptors requested (Rx: %u) is less than min supported (%u)\n",
+ ring->rx_pending,
+ IDPF_MIN_RXQ_DESC);
+ err = -EINVAL;
+ goto unlock_mutex;
+ }
+
+ new_rx_count = ALIGN(ring->rx_pending, IDPF_REQ_RXQ_DESC_MULTIPLE);
+ if (new_rx_count != ring->rx_pending)
+ netdev_info(netdev, "Requested Rx descriptor count rounded up to %u\n",
+ new_rx_count);
+
+ new_tx_count = ALIGN(ring->tx_pending, IDPF_REQ_DESC_MULTIPLE);
+ if (new_tx_count != ring->tx_pending)
+ netdev_info(netdev, "Requested Tx descriptor count rounded up to %u\n",
+ new_tx_count);
+
+ if (new_tx_count == vport->txq_desc_count &&
+ new_rx_count == vport->rxq_desc_count)
+ goto unlock_mutex;
+
+ config_data = &vport->adapter->vport_config[idx]->user_config;
+ config_data->num_req_txq_desc = new_tx_count;
+ config_data->num_req_rxq_desc = new_rx_count;
+
+ /* Since we adjusted the RX completion queue count, the RX buffer queue
+ * descriptor count needs to be adjusted as well
+ */
+ for (i = 0; i < vport->num_bufqs_per_qgrp; i++)
+ vport->bufq_desc_count[i] =
+ IDPF_RX_BUFQ_DESC_COUNT(new_rx_count,
+ vport->num_bufqs_per_qgrp);
+
+ err = idpf_initiate_soft_reset(vport, IDPF_SR_Q_DESC_CHANGE);
+
+unlock_mutex:
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+}
+
+/**
+ * struct idpf_stats - definition for an ethtool statistic
+ * @stat_string: statistic name to display in ethtool -S output
+ * @sizeof_stat: the sizeof() the stat, must be no greater than sizeof(u64)
+ * @stat_offset: offsetof() the stat from a base pointer
+ *
+ * This structure defines a statistic to be added to the ethtool stats buffer.
+ * It defines a statistic as offset from a common base pointer. Stats should
+ * be defined in constant arrays using the IDPF_STAT macro, with every element
+ * of the array using the same _type for calculating the sizeof_stat and
+ * stat_offset.
+ *
+ * The @sizeof_stat is expected to be sizeof(u8), sizeof(u16), sizeof(u32) or
+ * sizeof(u64). Other sizes are not expected and will produce a WARN_ONCE from
+ * the idpf_add_ethtool_stat() helper function.
+ *
+ * The @stat_string is interpreted as a format string, allowing formatted
+ * values to be inserted while looping over multiple structures for a given
+ * statistics array. Thus, every statistic string in an array should have the
+ * same type and number of format specifiers, to be formatted by variadic
+ * arguments to the idpf_add_stat_string() helper function.
+ */
+struct idpf_stats {
+ char stat_string[ETH_GSTRING_LEN];
+ int sizeof_stat;
+ int stat_offset;
+};
+
+/* Helper macro to define an idpf_stat structure with proper size and type.
+ * Use this when defining constant statistics arrays. Note that @_type expects
+ * only a type name and is used multiple times.
+ */
+#define IDPF_STAT(_type, _name, _stat) { \
+ .stat_string = _name, \
+ .sizeof_stat = sizeof_field(_type, _stat), \
+ .stat_offset = offsetof(_type, _stat) \
+}
+
+/* Helper macro for defining some statistics related to queues */
+#define IDPF_QUEUE_STAT(_name, _stat) \
+ IDPF_STAT(struct idpf_queue, _name, _stat)
+
+/* Stats associated with a Tx queue */
+static const struct idpf_stats idpf_gstrings_tx_queue_stats[] = {
+ IDPF_QUEUE_STAT("pkts", q_stats.tx.packets),
+ IDPF_QUEUE_STAT("bytes", q_stats.tx.bytes),
+ IDPF_QUEUE_STAT("lso_pkts", q_stats.tx.lso_pkts),
+};
+
+/* Stats associated with an Rx queue */
+static const struct idpf_stats idpf_gstrings_rx_queue_stats[] = {
+ IDPF_QUEUE_STAT("pkts", q_stats.rx.packets),
+ IDPF_QUEUE_STAT("bytes", q_stats.rx.bytes),
+ IDPF_QUEUE_STAT("rx_gro_hw_pkts", q_stats.rx.rsc_pkts),
+};
+
+#define IDPF_TX_QUEUE_STATS_LEN ARRAY_SIZE(idpf_gstrings_tx_queue_stats)
+#define IDPF_RX_QUEUE_STATS_LEN ARRAY_SIZE(idpf_gstrings_rx_queue_stats)
+
+#define IDPF_PORT_STAT(_name, _stat) \
+ IDPF_STAT(struct idpf_vport, _name, _stat)
+
+static const struct idpf_stats idpf_gstrings_port_stats[] = {
+ IDPF_PORT_STAT("rx-csum_errors", port_stats.rx_hw_csum_err),
+ IDPF_PORT_STAT("rx-hsplit", port_stats.rx_hsplit),
+ IDPF_PORT_STAT("rx-hsplit_hbo", port_stats.rx_hsplit_hbo),
+ IDPF_PORT_STAT("rx-bad_descs", port_stats.rx_bad_descs),
+ IDPF_PORT_STAT("tx-skb_drops", port_stats.tx_drops),
+ IDPF_PORT_STAT("tx-dma_map_errs", port_stats.tx_dma_map_errs),
+ IDPF_PORT_STAT("tx-linearized_pkts", port_stats.tx_linearize),
+ IDPF_PORT_STAT("tx-busy_events", port_stats.tx_busy),
+ IDPF_PORT_STAT("rx-unicast_pkts", port_stats.vport_stats.rx_unicast),
+ IDPF_PORT_STAT("rx-multicast_pkts", port_stats.vport_stats.rx_multicast),
+ IDPF_PORT_STAT("rx-broadcast_pkts", port_stats.vport_stats.rx_broadcast),
+ IDPF_PORT_STAT("rx-unknown_protocol", port_stats.vport_stats.rx_unknown_protocol),
+ IDPF_PORT_STAT("tx-unicast_pkts", port_stats.vport_stats.tx_unicast),
+ IDPF_PORT_STAT("tx-multicast_pkts", port_stats.vport_stats.tx_multicast),
+ IDPF_PORT_STAT("tx-broadcast_pkts", port_stats.vport_stats.tx_broadcast),
+};
+
+#define IDPF_PORT_STATS_LEN ARRAY_SIZE(idpf_gstrings_port_stats)
+
+/**
+ * __idpf_add_qstat_strings - copy stat strings into ethtool buffer
+ * @p: ethtool supplied buffer
+ * @stats: stat definitions array
+ * @size: size of the stats array
+ * @type: stat type
+ * @idx: stat index
+ *
+ * Format and copy the strings described by stats into the buffer pointed at
+ * by p.
+ */
+static void __idpf_add_qstat_strings(u8 **p, const struct idpf_stats *stats,
+ const unsigned int size, const char *type,
+ unsigned int idx)
+{
+ unsigned int i;
+
+ for (i = 0; i < size; i++)
+ ethtool_sprintf(p, "%s_q-%u_%s",
+ type, idx, stats[i].stat_string);
+}
+
+/**
+ * idpf_add_qstat_strings - Copy queue stat strings into ethtool buffer
+ * @p: ethtool supplied buffer
+ * @stats: stat definitions array
+ * @type: stat type
+ * @idx: stat idx
+ *
+ * Format and copy the strings described by the const static stats value into
+ * the buffer pointed at by p.
+ *
+ * The parameter @stats is evaluated twice, so parameters with side effects
+ * should be avoided. Additionally, stats must be an array such that
+ * ARRAY_SIZE can be called on it.
+ */
+#define idpf_add_qstat_strings(p, stats, type, idx) \
+ __idpf_add_qstat_strings(p, stats, ARRAY_SIZE(stats), type, idx)
+
+/**
+ * idpf_add_stat_strings - Copy port stat strings into ethtool buffer
+ * @p: ethtool buffer
+ * @stats: struct to copy from
+ * @size: size of stats array to copy from
+ */
+static void idpf_add_stat_strings(u8 **p, const struct idpf_stats *stats,
+ const unsigned int size)
+{
+ unsigned int i;
+
+ for (i = 0; i < size; i++)
+ ethtool_sprintf(p, "%s", stats[i].stat_string);
+}
+
+/**
+ * idpf_get_stat_strings - Get stat strings
+ * @netdev: network interface device structure
+ * @data: buffer for string data
+ *
+ * Builds the statistics string table
+ */
+static void idpf_get_stat_strings(struct net_device *netdev, u8 *data)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_vport_config *vport_config;
+ unsigned int i;
+
+ idpf_add_stat_strings(&data, idpf_gstrings_port_stats,
+ IDPF_PORT_STATS_LEN);
+
+ vport_config = np->adapter->vport_config[np->vport_idx];
+ /* It's critical that we always report a constant number of strings and
+ * that the strings are reported in the same order regardless of how
+ * many queues are actually in use.
+ */
+ for (i = 0; i < vport_config->max_q.max_txq; i++)
+ idpf_add_qstat_strings(&data, idpf_gstrings_tx_queue_stats,
+ "tx", i);
+
+ for (i = 0; i < vport_config->max_q.max_rxq; i++)
+ idpf_add_qstat_strings(&data, idpf_gstrings_rx_queue_stats,
+ "rx", i);
+
+ page_pool_ethtool_stats_get_strings(data);
+}
+
+/**
+ * idpf_get_strings - Get string set
+ * @netdev: network interface device structure
+ * @sset: id of string set
+ * @data: buffer for string data
+ *
+ * Builds string tables for various string sets
+ */
+static void idpf_get_strings(struct net_device *netdev, u32 sset, u8 *data)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ idpf_get_stat_strings(netdev, data);
+ break;
+ default:
+ break;
+ }
+}
+
+/**
+ * idpf_get_sset_count - Get length of string set
+ * @netdev: network interface device structure
+ * @sset: id of string set
+ *
+ * Reports size of various string tables.
+ */
+static int idpf_get_sset_count(struct net_device *netdev, int sset)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_vport_config *vport_config;
+ u16 max_txq, max_rxq;
+ unsigned int size;
+
+ if (sset != ETH_SS_STATS)
+ return -EINVAL;
+
+ vport_config = np->adapter->vport_config[np->vport_idx];
+ /* This size reported back here *must* be constant throughout the
+ * lifecycle of the netdevice, i.e. we must report the maximum length
+ * even for queues that don't technically exist. This is because this
+ * userspace API uses three separate ioctl calls to get stats data but has
+ * no way to communicate back to userspace when that size has changed, which
+ * can typically happen as a result of changing the number of queues. If the
+ * number/order of stats changes in the middle
+ * of this call chain it will lead to userspace crashing/accessing bad
+ * data through buffer under/overflow.
+ */
+ max_txq = vport_config->max_q.max_txq;
+ max_rxq = vport_config->max_q.max_rxq;
+
+ size = IDPF_PORT_STATS_LEN + (IDPF_TX_QUEUE_STATS_LEN * max_txq) +
+ (IDPF_RX_QUEUE_STATS_LEN * max_rxq);
+ size += page_pool_ethtool_stats_get_count();
+
+ return size;
+}
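+
+/* Worked example, for illustration only (the queue counts are hypothetical):
+ * with max_txq = 8 and max_rxq = 8 the reported count is
+ *
+ *	IDPF_PORT_STATS_LEN (15) +
+ *	IDPF_TX_QUEUE_STATS_LEN (3) * 8 +
+ *	IDPF_RX_QUEUE_STATS_LEN (3) * 8 +
+ *	page_pool_ethtool_stats_get_count()
+ *
+ * i.e. 63 plus the page pool stats, and it stays fixed even when fewer
+ * queues are currently active.
+ */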
+
+/**
+ * idpf_add_one_ethtool_stat - copy the stat into the supplied buffer
+ * @data: location to store the stat value
+ * @pstat: old stat pointer to copy from
+ * @stat: the stat definition
+ *
+ * Copies the stat data defined by the pointer and stat structure pair into
+ * the memory supplied as data. If the pointer is null, data will be zero'd.
+ */
+static void idpf_add_one_ethtool_stat(u64 *data, void *pstat,
+ const struct idpf_stats *stat)
+{
+ char *p;
+
+ if (!pstat) {
+ /* Ensure that the ethtool data buffer is zero'd for any stats
+ * which don't have a valid pointer.
+ */
+ *data = 0;
+ return;
+ }
+
+ p = (char *)pstat + stat->stat_offset;
+ switch (stat->sizeof_stat) {
+ case sizeof(u64):
+ *data = *((u64 *)p);
+ break;
+ case sizeof(u32):
+ *data = *((u32 *)p);
+ break;
+ case sizeof(u16):
+ *data = *((u16 *)p);
+ break;
+ case sizeof(u8):
+ *data = *((u8 *)p);
+ break;
+ default:
+ WARN_ONCE(1, "unexpected stat size for %s",
+ stat->stat_string);
+ *data = 0;
+ }
+}
+
+/**
+ * idpf_add_queue_stats - copy queue statistics into supplied buffer
+ * @data: ethtool stats buffer
+ * @q: the queue to copy
+ *
+ * Queue statistics must be copied while protected by u64_stats_fetch_begin,
+ * so we can't directly use idpf_add_ethtool_stats. Assumes that queue stats
+ * are defined in idpf_gstrings_queue_stats. If the queue pointer is null,
+ * zero out the queue stat values and update the data pointer. Otherwise
+ * safely copy the stats from the queue into the supplied buffer and update
+ * the data pointer when finished.
+ *
+ * This function expects to be called while under rcu_read_lock().
+ */
+static void idpf_add_queue_stats(u64 **data, struct idpf_queue *q)
+{
+ const struct idpf_stats *stats;
+ unsigned int start;
+ unsigned int size;
+ unsigned int i;
+
+ if (q->q_type == VIRTCHNL2_QUEUE_TYPE_RX) {
+ size = IDPF_RX_QUEUE_STATS_LEN;
+ stats = idpf_gstrings_rx_queue_stats;
+ } else {
+ size = IDPF_TX_QUEUE_STATS_LEN;
+ stats = idpf_gstrings_tx_queue_stats;
+ }
+
+ /* To avoid invalid statistics values, ensure that we keep retrying
+ * the copy until we get a consistent value according to
+ * u64_stats_fetch_retry.
+ */
+ do {
+ start = u64_stats_fetch_begin(&q->stats_sync);
+ for (i = 0; i < size; i++)
+ idpf_add_one_ethtool_stat(&(*data)[i], q, &stats[i]);
+ } while (u64_stats_fetch_retry(&q->stats_sync, start));
+
+ /* Once we successfully copy the stats in, update the data pointer */
+ *data += size;
+}
+
+/**
+ * idpf_add_empty_queue_stats - Add stats for a non-existent queue
+ * @data: pointer to data buffer
+ * @qtype: type of data queue
+ *
+ * We must report a constant length of stats back to userspace regardless of
+ * how many queues are actually in use because stats collection happens over
+ * three separate ioctls and there's no way to notify userspace the size
+ * changed between those calls. This adds zeroed entries to the data buffer
+ * since we don't have a real queue to refer to for this stats slot.
+ */
+static void idpf_add_empty_queue_stats(u64 **data, u16 qtype)
+{
+ unsigned int i;
+ int stats_len;
+
+ if (qtype == VIRTCHNL2_QUEUE_TYPE_RX)
+ stats_len = IDPF_RX_QUEUE_STATS_LEN;
+ else
+ stats_len = IDPF_TX_QUEUE_STATS_LEN;
+
+ for (i = 0; i < stats_len; i++)
+ (*data)[i] = 0;
+ *data += stats_len;
+}
+
+/**
+ * idpf_add_port_stats - Copy port stats into ethtool buffer
+ * @vport: virtual port struct
+ * @data: ethtool buffer to copy into
+ */
+static void idpf_add_port_stats(struct idpf_vport *vport, u64 **data)
+{
+ unsigned int size = IDPF_PORT_STATS_LEN;
+ unsigned int start;
+ unsigned int i;
+
+ do {
+ start = u64_stats_fetch_begin(&vport->port_stats.stats_sync);
+ for (i = 0; i < size; i++)
+ idpf_add_one_ethtool_stat(&(*data)[i], vport,
+ &idpf_gstrings_port_stats[i]);
+ } while (u64_stats_fetch_retry(&vport->port_stats.stats_sync, start));
+
+ *data += size;
+}
+
+/**
+ * idpf_collect_queue_stats - accumulate various per queue stats
+ * into port level stats
+ * @vport: pointer to vport struct
+ **/
+static void idpf_collect_queue_stats(struct idpf_vport *vport)
+{
+ struct idpf_port_stats *pstats = &vport->port_stats;
+ int i, j;
+
+ /* zero out port stats since they're actually tracked in per
+ * queue stats; this is only for reporting
+ */
+ u64_stats_update_begin(&pstats->stats_sync);
+ u64_stats_set(&pstats->rx_hw_csum_err, 0);
+ u64_stats_set(&pstats->rx_hsplit, 0);
+ u64_stats_set(&pstats->rx_hsplit_hbo, 0);
+ u64_stats_set(&pstats->rx_bad_descs, 0);
+ u64_stats_set(&pstats->tx_linearize, 0);
+ u64_stats_set(&pstats->tx_busy, 0);
+ u64_stats_set(&pstats->tx_drops, 0);
+ u64_stats_set(&pstats->tx_dma_map_errs, 0);
+ u64_stats_update_end(&pstats->stats_sync);
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i];
+ u16 num_rxq;
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ num_rxq = rxq_grp->splitq.num_rxq_sets;
+ else
+ num_rxq = rxq_grp->singleq.num_rxq;
+
+ for (j = 0; j < num_rxq; j++) {
+ u64 hw_csum_err, hsplit, hsplit_hbo, bad_descs;
+ struct idpf_rx_queue_stats *stats;
+ struct idpf_queue *rxq;
+ unsigned int start;
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ rxq = &rxq_grp->splitq.rxq_sets[j]->rxq;
+ else
+ rxq = rxq_grp->singleq.rxqs[j];
+
+ if (!rxq)
+ continue;
+
+ do {
+ start = u64_stats_fetch_begin(&rxq->stats_sync);
+
+ stats = &rxq->q_stats.rx;
+ hw_csum_err = u64_stats_read(&stats->hw_csum_err);
+ hsplit = u64_stats_read(&stats->hsplit_pkts);
+ hsplit_hbo = u64_stats_read(&stats->hsplit_buf_ovf);
+ bad_descs = u64_stats_read(&stats->bad_descs);
+ } while (u64_stats_fetch_retry(&rxq->stats_sync, start));
+
+ u64_stats_update_begin(&pstats->stats_sync);
+ u64_stats_add(&pstats->rx_hw_csum_err, hw_csum_err);
+ u64_stats_add(&pstats->rx_hsplit, hsplit);
+ u64_stats_add(&pstats->rx_hsplit_hbo, hsplit_hbo);
+ u64_stats_add(&pstats->rx_bad_descs, bad_descs);
+ u64_stats_update_end(&pstats->stats_sync);
+ }
+ }
+
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
+
+ for (j = 0; j < txq_grp->num_txq; j++) {
+ u64 linearize, qbusy, skb_drops, dma_map_errs;
+ struct idpf_queue *txq = txq_grp->txqs[j];
+ struct idpf_tx_queue_stats *stats;
+ unsigned int start;
+
+ if (!txq)
+ continue;
+
+ do {
+ start = u64_stats_fetch_begin(&txq->stats_sync);
+
+ stats = &txq->q_stats.tx;
+ linearize = u64_stats_read(&stats->linearize);
+ qbusy = u64_stats_read(&stats->q_busy);
+ skb_drops = u64_stats_read(&stats->skb_drops);
+ dma_map_errs = u64_stats_read(&stats->dma_map_errs);
+ } while (u64_stats_fetch_retry(&txq->stats_sync, start));
+
+ u64_stats_update_begin(&pstats->stats_sync);
+ u64_stats_add(&pstats->tx_linearize, linearize);
+ u64_stats_add(&pstats->tx_busy, qbusy);
+ u64_stats_add(&pstats->tx_drops, skb_drops);
+ u64_stats_add(&pstats->tx_dma_map_errs, dma_map_errs);
+ u64_stats_update_end(&pstats->stats_sync);
+ }
+ }
+}
+
+/**
+ * idpf_get_ethtool_stats - report device statistics
+ * @netdev: network interface device structure
+ * @stats: ethtool statistics structure
+ * @data: pointer to data buffer
+ *
+ * All statistics are added to the data buffer as an array of u64.
+ */
+static void idpf_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats __always_unused *stats,
+ u64 *data)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_vport_config *vport_config;
+ struct page_pool_stats pp_stats = { };
+ struct idpf_vport *vport;
+ unsigned int total = 0;
+ unsigned int i, j;
+ bool is_splitq;
+ u16 qtype;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ if (np->state != __IDPF_VPORT_UP) {
+ idpf_vport_ctrl_unlock(netdev);
+
+ return;
+ }
+
+ rcu_read_lock();
+
+ idpf_collect_queue_stats(vport);
+ idpf_add_port_stats(vport, &data);
+
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
+
+ qtype = VIRTCHNL2_QUEUE_TYPE_TX;
+
+ for (j = 0; j < txq_grp->num_txq; j++, total++) {
+ struct idpf_queue *txq = txq_grp->txqs[j];
+
+ if (!txq)
+ idpf_add_empty_queue_stats(&data, qtype);
+ else
+ idpf_add_queue_stats(&data, txq);
+ }
+ }
+
+ vport_config = vport->adapter->vport_config[vport->idx];
+ /* It is critical we provide a constant number of stats back to
+ * userspace regardless of how many queues are actually in use because
+ * there is no way to inform userspace the size has changed between
+ * ioctl calls. This will fill in any missing stats with zero.
+ */
+ for (; total < vport_config->max_q.max_txq; total++)
+ idpf_add_empty_queue_stats(&data, VIRTCHNL2_QUEUE_TYPE_TX);
+ total = 0;
+
+ is_splitq = idpf_is_queue_model_split(vport->rxq_model);
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rxq_grp = &vport->rxq_grps[i];
+ u16 num_rxq;
+
+ qtype = VIRTCHNL2_QUEUE_TYPE_RX;
+
+ if (is_splitq)
+ num_rxq = rxq_grp->splitq.num_rxq_sets;
+ else
+ num_rxq = rxq_grp->singleq.num_rxq;
+
+ for (j = 0; j < num_rxq; j++, total++) {
+ struct idpf_queue *rxq;
+
+ if (is_splitq)
+ rxq = &rxq_grp->splitq.rxq_sets[j]->rxq;
+ else
+ rxq = rxq_grp->singleq.rxqs[j];
+ if (!rxq)
+ idpf_add_empty_queue_stats(&data, qtype);
+ else
+ idpf_add_queue_stats(&data, rxq);
+
+ /* In splitq mode, don't get page pool stats here since
+ * the pools are attached to the buffer queues
+ */
+ if (is_splitq)
+ continue;
+
+ if (rxq)
+ page_pool_get_stats(rxq->pp, &pp_stats);
+ }
+ }
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ struct idpf_queue *rxbufq =
+ &vport->rxq_grps[i].splitq.bufq_sets[j].bufq;
+
+ page_pool_get_stats(rxbufq->pp, &pp_stats);
+ }
+ }
+
+ for (; total < vport_config->max_q.max_rxq; total++)
+ idpf_add_empty_queue_stats(&data, VIRTCHNL2_QUEUE_TYPE_RX);
+
+ page_pool_ethtool_stats_get(data, &pp_stats);
+
+ rcu_read_unlock();
+
+ idpf_vport_ctrl_unlock(netdev);
+}
+
+/**
+ * idpf_find_rxq - find rxq from q index
+ * @vport: virtual port associated with the queue
+ * @q_num: q index used to find queue
+ *
+ * returns pointer to rx queue
+ */
+static struct idpf_queue *idpf_find_rxq(struct idpf_vport *vport, int q_num)
+{
+ int q_grp, q_idx;
+
+ if (!idpf_is_queue_model_split(vport->rxq_model))
+ return vport->rxq_grps->singleq.rxqs[q_num];
+
+ q_grp = q_num / IDPF_DFLT_SPLITQ_RXQ_PER_GROUP;
+ q_idx = q_num % IDPF_DFLT_SPLITQ_RXQ_PER_GROUP;
+
+ return &vport->rxq_grps[q_grp].splitq.rxq_sets[q_idx]->rxq;
+}
+
+/**
+ * idpf_find_txq - find txq from q index
+ * @vport: virtual port associated with the queue
+ * @q_num: q index used to find queue
+ *
+ * returns pointer to tx queue
+ */
+static struct idpf_queue *idpf_find_txq(struct idpf_vport *vport, int q_num)
+{
+ int q_grp;
+
+ if (!idpf_is_queue_model_split(vport->txq_model))
+ return vport->txqs[q_num];
+
+ q_grp = q_num / IDPF_DFLT_SPLITQ_TXQ_PER_GROUP;
+
+ return vport->txq_grps[q_grp].complq;
+}
+
+/**
+ * __idpf_get_q_coalesce - get ITR values for specific queue
+ * @ec: ethtool structure to fill with driver's coalesce settings
+ * @q: Rx or Tx queue
+ */
+static void __idpf_get_q_coalesce(struct ethtool_coalesce *ec,
+ struct idpf_queue *q)
+{
+ if (q->q_type == VIRTCHNL2_QUEUE_TYPE_RX) {
+ ec->use_adaptive_rx_coalesce =
+ IDPF_ITR_IS_DYNAMIC(q->q_vector->rx_intr_mode);
+ ec->rx_coalesce_usecs = q->q_vector->rx_itr_value;
+ } else {
+ ec->use_adaptive_tx_coalesce =
+ IDPF_ITR_IS_DYNAMIC(q->q_vector->tx_intr_mode);
+ ec->tx_coalesce_usecs = q->q_vector->tx_itr_value;
+ }
+}
+
+/**
+ * idpf_get_q_coalesce - get ITR values for specific queue
+ * @netdev: pointer to the netdev associated with this query
+ * @ec: coalesce settings to program the device with
+ * @q_num: update ITR/INTRL (coalesce) settings for this queue number/index
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int idpf_get_q_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *ec,
+ u32 q_num)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_vport *vport;
+ int err = 0;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ if (np->state != __IDPF_VPORT_UP)
+ goto unlock_mutex;
+
+ if (q_num >= vport->num_rxq && q_num >= vport->num_txq) {
+ err = -EINVAL;
+ goto unlock_mutex;
+ }
+
+ if (q_num < vport->num_rxq)
+ __idpf_get_q_coalesce(ec, idpf_find_rxq(vport, q_num));
+
+ if (q_num < vport->num_txq)
+ __idpf_get_q_coalesce(ec, idpf_find_txq(vport, q_num));
+
+unlock_mutex:
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+}
+
+/**
+ * idpf_get_coalesce - get ITR values as requested by user
+ * @netdev: pointer to the netdev associated with this query
+ * @ec: coalesce settings to be filled
+ * @kec: unused
+ * @extack: unused
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int idpf_get_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *ec,
+ struct kernel_ethtool_coalesce *kec,
+ struct netlink_ext_ack *extack)
+{
+ /* Return coalesce based on queue number zero */
+ return idpf_get_q_coalesce(netdev, ec, 0);
+}
+
+/**
+ * idpf_get_per_q_coalesce - get ITR values as requested by user
+ * @netdev: pointer to the netdev associated with this query
+ * @q_num: queue for which the ITR values have to be retrieved
+ * @ec: coalesce settings to be filled
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int idpf_get_per_q_coalesce(struct net_device *netdev, u32 q_num,
+ struct ethtool_coalesce *ec)
+{
+ return idpf_get_q_coalesce(netdev, ec, q_num);
+}
+
+/**
+ * __idpf_set_q_coalesce - set ITR values for specific queue
+ * @ec: ethtool structure from user to update ITR settings
+ * @q: queue for which ITR values have to be set
+ * @is_rxq: is queue type rx
+ *
+ * Returns 0 on success, negative otherwise.
+ */
+static int __idpf_set_q_coalesce(struct ethtool_coalesce *ec,
+ struct idpf_queue *q, bool is_rxq)
+{
+ u32 use_adaptive_coalesce, coalesce_usecs;
+ struct idpf_q_vector *qv = q->q_vector;
+ bool is_dim_ena = false;
+ u16 itr_val;
+
+ if (is_rxq) {
+ is_dim_ena = IDPF_ITR_IS_DYNAMIC(qv->rx_intr_mode);
+ use_adaptive_coalesce = ec->use_adaptive_rx_coalesce;
+ coalesce_usecs = ec->rx_coalesce_usecs;
+ itr_val = qv->rx_itr_value;
+ } else {
+ is_dim_ena = IDPF_ITR_IS_DYNAMIC(qv->tx_intr_mode);
+ use_adaptive_coalesce = ec->use_adaptive_tx_coalesce;
+ coalesce_usecs = ec->tx_coalesce_usecs;
+ itr_val = qv->tx_itr_value;
+ }
+ if (coalesce_usecs != itr_val && use_adaptive_coalesce) {
+ netdev_err(q->vport->netdev, "Cannot set coalesce usecs if adaptive enabled\n");
+
+ return -EINVAL;
+ }
+
+ if (is_dim_ena && use_adaptive_coalesce)
+ return 0;
+
+ if (coalesce_usecs > IDPF_ITR_MAX) {
+ netdev_err(q->vport->netdev,
+ "Invalid value, %d-usecs range is 0-%d\n",
+ coalesce_usecs, IDPF_ITR_MAX);
+
+ return -EINVAL;
+ }
+
+ if (coalesce_usecs % 2) {
+ coalesce_usecs--;
+ netdev_info(q->vport->netdev,
+ "HW only supports even ITR values, ITR rounded to %d\n",
+ coalesce_usecs);
+ }
+
+ if (is_rxq) {
+ qv->rx_itr_value = coalesce_usecs;
+ if (use_adaptive_coalesce) {
+ qv->rx_intr_mode = IDPF_ITR_DYNAMIC;
+ } else {
+ qv->rx_intr_mode = !IDPF_ITR_DYNAMIC;
+ idpf_vport_intr_write_itr(qv, qv->rx_itr_value,
+ false);
+ }
+ } else {
+ qv->tx_itr_value = coalesce_usecs;
+ if (use_adaptive_coalesce) {
+ qv->tx_intr_mode = IDPF_ITR_DYNAMIC;
+ } else {
+ qv->tx_intr_mode = !IDPF_ITR_DYNAMIC;
+ idpf_vport_intr_write_itr(qv, qv->tx_itr_value, true);
+ }
+ }
+
+ /* The update of static/dynamic ITR will be taken care of when the
+ * interrupt fires
+ */
+ return 0;
+}
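+
+/* Behavioural sketch, for illustration only: with adaptive coalescing
+ * disabled, a request such as "ethtool -C <iface> rx-usecs 25" reaches this
+ * helper with coalesce_usecs = 25; since the value is odd it is rounded down
+ * to 24 before being programmed via idpf_vport_intr_write_itr().
+ */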
+
+/**
+ * idpf_set_q_coalesce - set ITR values for specific queue
+ * @vport: vport associated with the queue that needs updating
+ * @ec: coalesce settings to program the device with
+ * @q_num: update ITR/INTRL (coalesce) settings for this queue number/index
+ * @is_rxq: is queue type rx
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int idpf_set_q_coalesce(struct idpf_vport *vport,
+ struct ethtool_coalesce *ec,
+ int q_num, bool is_rxq)
+{
+ struct idpf_queue *q;
+
+ q = is_rxq ? idpf_find_rxq(vport, q_num) : idpf_find_txq(vport, q_num);
+
+ if (q && __idpf_set_q_coalesce(ec, q, is_rxq))
+ return -EINVAL;
+
+ return 0;
+}
+
+/**
+ * idpf_set_coalesce - set ITR values as requested by user
+ * @netdev: pointer to the netdev associated with this query
+ * @ec: coalesce settings to program the device with
+ * @kec: unused
+ * @extack: unused
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int idpf_set_coalesce(struct net_device *netdev,
+ struct ethtool_coalesce *ec,
+ struct kernel_ethtool_coalesce *kec,
+ struct netlink_ext_ack *extack)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_vport *vport;
+ int i, err = 0;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ if (np->state != __IDPF_VPORT_UP)
+ goto unlock_mutex;
+
+ for (i = 0; i < vport->num_txq; i++) {
+ err = idpf_set_q_coalesce(vport, ec, i, false);
+ if (err)
+ goto unlock_mutex;
+ }
+
+ for (i = 0; i < vport->num_rxq; i++) {
+ err = idpf_set_q_coalesce(vport, ec, i, true);
+ if (err)
+ goto unlock_mutex;
+ }
+
+unlock_mutex:
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+}
+
+/**
+ * idpf_set_per_q_coalesce - set ITR values as requested by user
+ * @netdev: pointer to the netdev associated with this query
+ * @q_num: queue for which the ITR values have to be set
+ * @ec: coalesce settings to program the device with
+ *
+ * Return 0 on success, and negative on failure
+ */
+static int idpf_set_per_q_coalesce(struct net_device *netdev, u32 q_num,
+ struct ethtool_coalesce *ec)
+{
+ struct idpf_vport *vport;
+ int err;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ err = idpf_set_q_coalesce(vport, ec, q_num, false);
+ if (err) {
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+ }
+
+ err = idpf_set_q_coalesce(vport, ec, q_num, true);
+
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+}
+
+/**
+ * idpf_get_msglevel - Get debug message level
+ * @netdev: network interface device structure
+ *
+ * Returns current debug message level.
+ */
+static u32 idpf_get_msglevel(struct net_device *netdev)
+{
+ struct idpf_adapter *adapter = idpf_netdev_to_adapter(netdev);
+
+ return adapter->msg_enable;
+}
+
+/**
+ * idpf_set_msglevel - Set debug message level
+ * @netdev: network interface device structure
+ * @data: message level
+ *
+ * Set current debug message level. Higher values cause the driver to
+ * be noisier.
+ */
+static void idpf_set_msglevel(struct net_device *netdev, u32 data)
+{
+ struct idpf_adapter *adapter = idpf_netdev_to_adapter(netdev);
+
+ adapter->msg_enable = data;
+}
+
+/**
+ * idpf_get_link_ksettings - Get Link Speed and Duplex settings
+ * @netdev: network interface device structure
+ * @cmd: ethtool command
+ *
+ * Reports speed/duplex settings.
+ **/
+static int idpf_get_link_ksettings(struct net_device *netdev,
+ struct ethtool_link_ksettings *cmd)
+{
+ struct idpf_vport *vport;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ ethtool_link_ksettings_zero_link_mode(cmd, supported);
+ cmd->base.autoneg = AUTONEG_DISABLE;
+ cmd->base.port = PORT_NONE;
+ if (vport->link_up) {
+ cmd->base.duplex = DUPLEX_FULL;
+ cmd->base.speed = vport->link_speed_mbps;
+ } else {
+ cmd->base.duplex = DUPLEX_UNKNOWN;
+ cmd->base.speed = SPEED_UNKNOWN;
+ }
+
+ idpf_vport_ctrl_unlock(netdev);
+
+ return 0;
+}
+
+static const struct ethtool_ops idpf_ethtool_ops = {
+ .supported_coalesce_params = ETHTOOL_COALESCE_USECS |
+ ETHTOOL_COALESCE_USE_ADAPTIVE,
+ .get_msglevel = idpf_get_msglevel,
+ .set_msglevel = idpf_set_msglevel,
+ .get_link = ethtool_op_get_link,
+ .get_coalesce = idpf_get_coalesce,
+ .set_coalesce = idpf_set_coalesce,
+ .get_per_queue_coalesce = idpf_get_per_q_coalesce,
+ .set_per_queue_coalesce = idpf_set_per_q_coalesce,
+ .get_ethtool_stats = idpf_get_ethtool_stats,
+ .get_strings = idpf_get_strings,
+ .get_sset_count = idpf_get_sset_count,
+ .get_channels = idpf_get_channels,
+ .get_rxnfc = idpf_get_rxnfc,
+ .get_rxfh_key_size = idpf_get_rxfh_key_size,
+ .get_rxfh_indir_size = idpf_get_rxfh_indir_size,
+ .get_rxfh = idpf_get_rxfh,
+ .set_rxfh = idpf_set_rxfh,
+ .set_channels = idpf_set_channels,
+ .get_ringparam = idpf_get_ringparam,
+ .set_ringparam = idpf_set_ringparam,
+ .get_link_ksettings = idpf_get_link_ksettings,
+};
+
+/**
+ * idpf_set_ethtool_ops - Initialize ethtool ops struct
+ * @netdev: network interface device structure
+ *
+ * Sets ethtool ops struct in our netdev so that ethtool can call
+ * our functions.
+ */
+void idpf_set_ethtool_ops(struct net_device *netdev)
+{
+ netdev->ethtool_ops = &idpf_ethtool_ops;
+}
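+
+/* Usage sketch (hypothetical call site, not taken from this patch): the ops
+ * table above is attached during netdev configuration, roughly as
+ *
+ *	idpf_set_ethtool_ops(netdev);
+ *	...
+ *	register_netdev(netdev);
+ *
+ * after which "ethtool -S <iface>" exercises get_sset_count(), get_strings()
+ * and get_ethtool_stats() in that order to read the tables defined above.
+ */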
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lan_pf_regs.h b/drivers/net/ethernet/intel/idpf/idpf_lan_pf_regs.h
new file mode 100644
index 000000000000..24edb8a6ec2e
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_lan_pf_regs.h
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _IDPF_LAN_PF_REGS_H_
+#define _IDPF_LAN_PF_REGS_H_
+
+/* Receive queues */
+#define PF_QRX_BASE 0x00000000
+#define PF_QRX_TAIL(_QRX) (PF_QRX_BASE + (((_QRX) * 0x1000)))
+#define PF_QRX_BUFFQ_BASE 0x03000000
+#define PF_QRX_BUFFQ_TAIL(_QRX) (PF_QRX_BUFFQ_BASE + (((_QRX) * 0x1000)))
+
+/* Transmit queues */
+#define PF_QTX_BASE 0x05000000
+#define PF_QTX_COMM_DBELL(_DBQM) (PF_QTX_BASE + ((_DBQM) * 0x1000))
+
+/* Control(PF Mailbox) Queue */
+#define PF_FW_BASE 0x08400000
+
+#define PF_FW_ARQBAL (PF_FW_BASE)
+#define PF_FW_ARQBAH (PF_FW_BASE + 0x4)
+#define PF_FW_ARQLEN (PF_FW_BASE + 0x8)
+#define PF_FW_ARQLEN_ARQLEN_S 0
+#define PF_FW_ARQLEN_ARQLEN_M GENMASK(12, 0)
+#define PF_FW_ARQLEN_ARQVFE_S 28
+#define PF_FW_ARQLEN_ARQVFE_M BIT(PF_FW_ARQLEN_ARQVFE_S)
+#define PF_FW_ARQLEN_ARQOVFL_S 29
+#define PF_FW_ARQLEN_ARQOVFL_M BIT(PF_FW_ARQLEN_ARQOVFL_S)
+#define PF_FW_ARQLEN_ARQCRIT_S 30
+#define PF_FW_ARQLEN_ARQCRIT_M BIT(PF_FW_ARQLEN_ARQCRIT_S)
+#define PF_FW_ARQLEN_ARQENABLE_S 31
+#define PF_FW_ARQLEN_ARQENABLE_M BIT(PF_FW_ARQLEN_ARQENABLE_S)
+#define PF_FW_ARQH (PF_FW_BASE + 0xC)
+#define PF_FW_ARQH_ARQH_S 0
+#define PF_FW_ARQH_ARQH_M GENMASK(12, 0)
+#define PF_FW_ARQT (PF_FW_BASE + 0x10)
+
+#define PF_FW_ATQBAL (PF_FW_BASE + 0x14)
+#define PF_FW_ATQBAH (PF_FW_BASE + 0x18)
+#define PF_FW_ATQLEN (PF_FW_BASE + 0x1C)
+#define PF_FW_ATQLEN_ATQLEN_S 0
+#define PF_FW_ATQLEN_ATQLEN_M GENMASK(9, 0)
+#define PF_FW_ATQLEN_ATQVFE_S 28
+#define PF_FW_ATQLEN_ATQVFE_M BIT(PF_FW_ATQLEN_ATQVFE_S)
+#define PF_FW_ATQLEN_ATQOVFL_S 29
+#define PF_FW_ATQLEN_ATQOVFL_M BIT(PF_FW_ATQLEN_ATQOVFL_S)
+#define PF_FW_ATQLEN_ATQCRIT_S 30
+#define PF_FW_ATQLEN_ATQCRIT_M BIT(PF_FW_ATQLEN_ATQCRIT_S)
+#define PF_FW_ATQLEN_ATQENABLE_S 31
+#define PF_FW_ATQLEN_ATQENABLE_M BIT(PF_FW_ATQLEN_ATQENABLE_S)
+#define PF_FW_ATQH (PF_FW_BASE + 0x20)
+#define PF_FW_ATQH_ATQH_S 0
+#define PF_FW_ATQH_ATQH_M GENMASK(9, 0)
+#define PF_FW_ATQT (PF_FW_BASE + 0x24)
+
+/* Interrupts */
+#define PF_GLINT_BASE 0x08900000
+#define PF_GLINT_DYN_CTL(_INT) (PF_GLINT_BASE + ((_INT) * 0x1000))
+#define PF_GLINT_DYN_CTL_INTENA_S 0
+#define PF_GLINT_DYN_CTL_INTENA_M BIT(PF_GLINT_DYN_CTL_INTENA_S)
+#define PF_GLINT_DYN_CTL_CLEARPBA_S 1
+#define PF_GLINT_DYN_CTL_CLEARPBA_M BIT(PF_GLINT_DYN_CTL_CLEARPBA_S)
+#define PF_GLINT_DYN_CTL_SWINT_TRIG_S 2
+#define PF_GLINT_DYN_CTL_SWINT_TRIG_M BIT(PF_GLINT_DYN_CTL_SWINT_TRIG_S)
+#define PF_GLINT_DYN_CTL_ITR_INDX_S 3
+#define PF_GLINT_DYN_CTL_ITR_INDX_M GENMASK(4, 3)
+#define PF_GLINT_DYN_CTL_INTERVAL_S 5
+#define PF_GLINT_DYN_CTL_INTERVAL_M BIT(PF_GLINT_DYN_CTL_INTERVAL_S)
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S 24
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_ENA_S)
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_S 25
+#define PF_GLINT_DYN_CTL_SW_ITR_INDX_M BIT(PF_GLINT_DYN_CTL_SW_ITR_INDX_S)
+#define PF_GLINT_DYN_CTL_WB_ON_ITR_S 30
+#define PF_GLINT_DYN_CTL_WB_ON_ITR_M BIT(PF_GLINT_DYN_CTL_WB_ON_ITR_S)
+#define PF_GLINT_DYN_CTL_INTENA_MSK_S 31
+#define PF_GLINT_DYN_CTL_INTENA_MSK_M BIT(PF_GLINT_DYN_CTL_INTENA_MSK_S)
+/* _ITR is ITR index, _INT is interrupt index, _itrn_indx_spacing is
+ * spacing b/w itrn registers of the same vector.
+ */
+#define PF_GLINT_ITR_ADDR(_ITR, _reg_start, _itrn_indx_spacing) \
+ ((_reg_start) + ((_ITR) * (_itrn_indx_spacing)))
+/* For PF, itrn_indx_spacing is 4 and itrn_reg_spacing is 0x1000 */
+#define PF_GLINT_ITR(_ITR, _INT) \
+ (PF_GLINT_BASE + (((_ITR) + 1) * 4) + ((_INT) * 0x1000))
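+/* Example expansion, for illustration only: for interrupt vector 2 and ITR
+ * index 1,
+ *
+ *	PF_GLINT_ITR(1, 2) = 0x08900000 + ((1 + 1) * 4) + (2 * 0x1000)
+ *			   = 0x08902008
+ */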
+#define PF_GLINT_ITR_MAX_INDEX 2
+#define PF_GLINT_ITR_INTERVAL_S 0
+#define PF_GLINT_ITR_INTERVAL_M GENMASK(11, 0)
+
+/* Generic registers */
+#define PF_INT_DIR_OICR_ENA 0x08406000
+#define PF_INT_DIR_OICR_ENA_S 0
+#define PF_INT_DIR_OICR_ENA_M GENMASK(31, 0)
+#define PF_INT_DIR_OICR 0x08406004
+#define PF_INT_DIR_OICR_TSYN_EVNT 0
+#define PF_INT_DIR_OICR_PHY_TS_0 BIT(1)
+#define PF_INT_DIR_OICR_PHY_TS_1 BIT(2)
+#define PF_INT_DIR_OICR_CAUSE 0x08406008
+#define PF_INT_DIR_OICR_CAUSE_CAUSE_S 0
+#define PF_INT_DIR_OICR_CAUSE_CAUSE_M GENMASK(31, 0)
+#define PF_INT_PBA_CLEAR 0x0840600C
+
+#define PF_FUNC_RID 0x08406010
+#define PF_FUNC_RID_FUNCTION_NUMBER_S 0
+#define PF_FUNC_RID_FUNCTION_NUMBER_M GENMASK(2, 0)
+#define PF_FUNC_RID_DEVICE_NUMBER_S 3
+#define PF_FUNC_RID_DEVICE_NUMBER_M GENMASK(7, 3)
+#define PF_FUNC_RID_BUS_NUMBER_S 8
+#define PF_FUNC_RID_BUS_NUMBER_M GENMASK(15, 8)
+
+/* Reset registers */
+#define PFGEN_RTRIG 0x08407000
+#define PFGEN_RTRIG_CORER_S 0
+#define PFGEN_RTRIG_CORER_M BIT(0)
+#define PFGEN_RTRIG_LINKR_S 1
+#define PFGEN_RTRIG_LINKR_M BIT(1)
+#define PFGEN_RTRIG_IMCR_S 2
+#define PFGEN_RTRIG_IMCR_M BIT(2)
+#define PFGEN_RSTAT 0x08407008 /* PFR Status */
+#define PFGEN_RSTAT_PFR_STATE_S 0
+#define PFGEN_RSTAT_PFR_STATE_M GENMASK(1, 0)
+#define PFGEN_CTRL 0x0840700C
+#define PFGEN_CTRL_PFSWR BIT(0)
+
+#endif
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h
new file mode 100644
index 000000000000..a5752dcab888
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_lan_txrx.h
@@ -0,0 +1,293 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _IDPF_LAN_TXRX_H_
+#define _IDPF_LAN_TXRX_H_
+
+enum idpf_rss_hash {
+ IDPF_HASH_INVALID = 0,
+ /* Values 1 - 28 are reserved for future use */
+ IDPF_HASH_NONF_UNICAST_IPV4_UDP = 29,
+ IDPF_HASH_NONF_MULTICAST_IPV4_UDP,
+ IDPF_HASH_NONF_IPV4_UDP,
+ IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK,
+ IDPF_HASH_NONF_IPV4_TCP,
+ IDPF_HASH_NONF_IPV4_SCTP,
+ IDPF_HASH_NONF_IPV4_OTHER,
+ IDPF_HASH_FRAG_IPV4,
+ /* Values 37-38 are reserved */
+ IDPF_HASH_NONF_UNICAST_IPV6_UDP = 39,
+ IDPF_HASH_NONF_MULTICAST_IPV6_UDP,
+ IDPF_HASH_NONF_IPV6_UDP,
+ IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK,
+ IDPF_HASH_NONF_IPV6_TCP,
+ IDPF_HASH_NONF_IPV6_SCTP,
+ IDPF_HASH_NONF_IPV6_OTHER,
+ IDPF_HASH_FRAG_IPV6,
+ IDPF_HASH_NONF_RSVD47,
+ IDPF_HASH_NONF_FCOE_OX,
+ IDPF_HASH_NONF_FCOE_RX,
+ IDPF_HASH_NONF_FCOE_OTHER,
+ /* Values 51-62 are reserved */
+ IDPF_HASH_L2_PAYLOAD = 63,
+
+ IDPF_HASH_MAX
+};
+
+/* Supported RSS offloads */
+#define IDPF_DEFAULT_RSS_HASH \
+ (BIT_ULL(IDPF_HASH_NONF_IPV4_UDP) | \
+ BIT_ULL(IDPF_HASH_NONF_IPV4_SCTP) | \
+ BIT_ULL(IDPF_HASH_NONF_IPV4_TCP) | \
+ BIT_ULL(IDPF_HASH_NONF_IPV4_OTHER) | \
+ BIT_ULL(IDPF_HASH_FRAG_IPV4) | \
+ BIT_ULL(IDPF_HASH_NONF_IPV6_UDP) | \
+ BIT_ULL(IDPF_HASH_NONF_IPV6_TCP) | \
+ BIT_ULL(IDPF_HASH_NONF_IPV6_SCTP) | \
+ BIT_ULL(IDPF_HASH_NONF_IPV6_OTHER) | \
+ BIT_ULL(IDPF_HASH_FRAG_IPV6) | \
+ BIT_ULL(IDPF_HASH_L2_PAYLOAD))
+
+#define IDPF_DEFAULT_RSS_HASH_EXPANDED (IDPF_DEFAULT_RSS_HASH | \
+ BIT_ULL(IDPF_HASH_NONF_IPV4_TCP_SYN_NO_ACK) | \
+ BIT_ULL(IDPF_HASH_NONF_UNICAST_IPV4_UDP) | \
+ BIT_ULL(IDPF_HASH_NONF_MULTICAST_IPV4_UDP) | \
+ BIT_ULL(IDPF_HASH_NONF_IPV6_TCP_SYN_NO_ACK) | \
+ BIT_ULL(IDPF_HASH_NONF_UNICAST_IPV6_UDP) | \
+ BIT_ULL(IDPF_HASH_NONF_MULTICAST_IPV6_UDP))
+
+/* For idpf_splitq_base_tx_compl_desc */
+#define IDPF_TXD_COMPLQ_GEN_S 15
+#define IDPF_TXD_COMPLQ_GEN_M BIT_ULL(IDPF_TXD_COMPLQ_GEN_S)
+#define IDPF_TXD_COMPLQ_COMPL_TYPE_S 11
+#define IDPF_TXD_COMPLQ_COMPL_TYPE_M GENMASK_ULL(13, 11)
+#define IDPF_TXD_COMPLQ_QID_S 0
+#define IDPF_TXD_COMPLQ_QID_M GENMASK_ULL(9, 0)
+
+/* For base mode TX descriptors */
+
+#define IDPF_TXD_CTX_QW0_TUNN_L4T_CS_S 23
+#define IDPF_TXD_CTX_QW0_TUNN_L4T_CS_M BIT_ULL(IDPF_TXD_CTX_QW0_TUNN_L4T_CS_S)
+#define IDPF_TXD_CTX_QW0_TUNN_DECTTL_S 19
+#define IDPF_TXD_CTX_QW0_TUNN_DECTTL_M \
+ (0xFULL << IDPF_TXD_CTX_QW0_TUNN_DECTTL_S)
+#define IDPF_TXD_CTX_QW0_TUNN_NATLEN_S 12
+#define IDPF_TXD_CTX_QW0_TUNN_NATLEN_M \
+ (0x7FULL << IDPF_TXD_CTX_QW0_TUNN_NATLEN_S)
+#define IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_S 11
+#define IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_M \
+ BIT_ULL(IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_S)
+#define IDPF_TXD_CTX_EIP_NOINC_IPID_CONST \
+ IDPF_TXD_CTX_QW0_TUNN_EIP_NOINC_M
+#define IDPF_TXD_CTX_QW0_TUNN_NATT_S 9
+#define IDPF_TXD_CTX_QW0_TUNN_NATT_M (0x3ULL << IDPF_TXD_CTX_QW0_TUNN_NATT_S)
+#define IDPF_TXD_CTX_UDP_TUNNELING BIT_ULL(IDPF_TXD_CTX_QW0_TUNN_NATT_S)
+#define IDPF_TXD_CTX_GRE_TUNNELING (0x2ULL << IDPF_TXD_CTX_QW0_TUNN_NATT_S)
+#define IDPF_TXD_CTX_QW0_TUNN_EXT_IPLEN_S 2
+#define IDPF_TXD_CTX_QW0_TUNN_EXT_IPLEN_M \
+ (0x3FULL << IDPF_TXD_CTX_QW0_TUNN_EXT_IPLEN_S)
+#define IDPF_TXD_CTX_QW0_TUNN_EXT_IP_S 0
+#define IDPF_TXD_CTX_QW0_TUNN_EXT_IP_M \
+ (0x3ULL << IDPF_TXD_CTX_QW0_TUNN_EXT_IP_S)
+
+#define IDPF_TXD_CTX_QW1_MSS_S 50
+#define IDPF_TXD_CTX_QW1_MSS_M GENMASK_ULL(63, 50)
+#define IDPF_TXD_CTX_QW1_TSO_LEN_S 30
+#define IDPF_TXD_CTX_QW1_TSO_LEN_M GENMASK_ULL(47, 30)
+#define IDPF_TXD_CTX_QW1_CMD_S 4
+#define IDPF_TXD_CTX_QW1_CMD_M GENMASK_ULL(15, 4)
+#define IDPF_TXD_CTX_QW1_DTYPE_S 0
+#define IDPF_TXD_CTX_QW1_DTYPE_M GENMASK_ULL(3, 0)
+#define IDPF_TXD_QW1_L2TAG1_S 48
+#define IDPF_TXD_QW1_L2TAG1_M GENMASK_ULL(63, 48)
+#define IDPF_TXD_QW1_TX_BUF_SZ_S 34
+#define IDPF_TXD_QW1_TX_BUF_SZ_M GENMASK_ULL(47, 34)
+#define IDPF_TXD_QW1_OFFSET_S 16
+#define IDPF_TXD_QW1_OFFSET_M GENMASK_ULL(33, 16)
+#define IDPF_TXD_QW1_CMD_S 4
+#define IDPF_TXD_QW1_CMD_M GENMASK_ULL(15, 4)
+#define IDPF_TXD_QW1_DTYPE_S 0
+#define IDPF_TXD_QW1_DTYPE_M GENMASK_ULL(3, 0)
+
+/* TX Completion Descriptor Completion Types */
+#define IDPF_TXD_COMPLT_ITR_FLUSH 0
+/* Descriptor completion type 1 is reserved */
+#define IDPF_TXD_COMPLT_RS 2
+/* Descriptor completion type 3 is reserved */
+#define IDPF_TXD_COMPLT_RE 4
+#define IDPF_TXD_COMPLT_SW_MARKER 5
+
+enum idpf_tx_desc_dtype_value {
+ IDPF_TX_DESC_DTYPE_DATA = 0,
+ IDPF_TX_DESC_DTYPE_CTX = 1,
+ /* DTYPE 2 is reserved
+ * DTYPE 3 is free for future use
+ * DTYPE 4 is reserved
+ */
+ IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX = 5,
+ /* DTYPE 6 is reserved */
+ IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 = 7,
+ /* DTYPE 8, 9 are free for future use
+ * DTYPE 10 is reserved
+ * DTYPE 11 is free for future use
+ */
+ IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE = 12,
+ /* DTYPE 13, 14 are free for future use */
+
+ /* DESC_DONE - HW has completed write-back of descriptor */
+ IDPF_TX_DESC_DTYPE_DESC_DONE = 15,
+};
+
+enum idpf_tx_ctx_desc_cmd_bits {
+ IDPF_TX_CTX_DESC_TSO = 0x01,
+ IDPF_TX_CTX_DESC_TSYN = 0x02,
+ IDPF_TX_CTX_DESC_IL2TAG2 = 0x04,
+ IDPF_TX_CTX_DESC_RSVD = 0x08,
+ IDPF_TX_CTX_DESC_SWTCH_NOTAG = 0x00,
+ IDPF_TX_CTX_DESC_SWTCH_UPLINK = 0x10,
+ IDPF_TX_CTX_DESC_SWTCH_LOCAL = 0x20,
+ IDPF_TX_CTX_DESC_SWTCH_VSI = 0x30,
+ IDPF_TX_CTX_DESC_FILT_AU_EN = 0x40,
+ IDPF_TX_CTX_DESC_FILT_AU_EVICT = 0x80,
+ IDPF_TX_CTX_DESC_RSVD1 = 0xF00
+};
+
+enum idpf_tx_desc_len_fields {
+ /* Note: These are predefined bit offsets */
+ IDPF_TX_DESC_LEN_MACLEN_S = 0, /* 7 BITS */
+ IDPF_TX_DESC_LEN_IPLEN_S = 7, /* 7 BITS */
+ IDPF_TX_DESC_LEN_L4_LEN_S = 14 /* 4 BITS */
+};
+
+enum idpf_tx_base_desc_cmd_bits {
+ IDPF_TX_DESC_CMD_EOP = BIT(0),
+ IDPF_TX_DESC_CMD_RS = BIT(1),
+ /* only on VFs else RSVD */
+ IDPF_TX_DESC_CMD_ICRC = BIT(2),
+ IDPF_TX_DESC_CMD_IL2TAG1 = BIT(3),
+ IDPF_TX_DESC_CMD_RSVD1 = BIT(4),
+ IDPF_TX_DESC_CMD_IIPT_IPV6 = BIT(5),
+ IDPF_TX_DESC_CMD_IIPT_IPV4 = BIT(6),
+ IDPF_TX_DESC_CMD_IIPT_IPV4_CSUM = GENMASK(6, 5),
+ IDPF_TX_DESC_CMD_RSVD2 = BIT(7),
+ IDPF_TX_DESC_CMD_L4T_EOFT_TCP = BIT(8),
+ IDPF_TX_DESC_CMD_L4T_EOFT_SCTP = BIT(9),
+ IDPF_TX_DESC_CMD_L4T_EOFT_UDP = GENMASK(9, 8),
+ IDPF_TX_DESC_CMD_RSVD3 = BIT(10),
+ IDPF_TX_DESC_CMD_RSVD4 = BIT(11),
+};
+
+/* Transmit descriptors */
+/* splitq tx buf, singleq tx buf and singleq compl desc */
+struct idpf_base_tx_desc {
+ __le64 buf_addr; /* Address of descriptor's data buf */
+ __le64 qw1; /* type_cmd_offset_bsz_l2tag1 */
+}; /* read used with buffer queues */
+
+struct idpf_splitq_tx_compl_desc {
+ /* qid=[10:0] comptype=[13:11] rsvd=[14] gen=[15] */
+ __le16 qid_comptype_gen;
+ union {
+ __le16 q_head; /* Queue head */
+ __le16 compl_tag; /* Completion tag */
+ } q_head_compl_tag;
+ u8 ts[3];
+ u8 rsvd; /* Reserved */
+}; /* writeback used with completion queues */
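+
+/* Decode sketch, for illustration only (not part of the driver): given a
+ * struct idpf_splitq_tx_compl_desc *desc, the queue id, completion type and
+ * generation bit can be pulled out of qid_comptype_gen with the
+ * IDPF_TXD_COMPLQ_* masks above, e.g.
+ *
+ *	u16 qcg = le16_to_cpu(desc->qid_comptype_gen);
+ *	u16 qid = FIELD_GET(IDPF_TXD_COMPLQ_QID_M, qcg);
+ *	u16 ctype = FIELD_GET(IDPF_TXD_COMPLQ_COMPL_TYPE_M, qcg);
+ *	bool gen = FIELD_GET(IDPF_TXD_COMPLQ_GEN_M, qcg);
+ *
+ * FIELD_GET() here comes from <linux/bitfield.h>.
+ */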
+
+/* Context descriptors */
+struct idpf_base_tx_ctx_desc {
+ struct {
+ __le32 tunneling_params;
+ __le16 l2tag2;
+ __le16 rsvd1;
+ } qw0;
+ __le64 qw1; /* type_cmd_tlen_mss/rt_hint */
+};
+
+/* Common cmd field defines for all desc except Flex Flow Scheduler (0x0C) */
+enum idpf_tx_flex_desc_cmd_bits {
+ IDPF_TX_FLEX_DESC_CMD_EOP = BIT(0),
+ IDPF_TX_FLEX_DESC_CMD_RS = BIT(1),
+ IDPF_TX_FLEX_DESC_CMD_RE = BIT(2),
+ IDPF_TX_FLEX_DESC_CMD_IL2TAG1 = BIT(3),
+ IDPF_TX_FLEX_DESC_CMD_DUMMY = BIT(4),
+ IDPF_TX_FLEX_DESC_CMD_CS_EN = BIT(5),
+ IDPF_TX_FLEX_DESC_CMD_FILT_AU_EN = BIT(6),
+ IDPF_TX_FLEX_DESC_CMD_FILT_AU_EVICT = BIT(7),
+};
+
+struct idpf_flex_tx_desc {
+ __le64 buf_addr; /* Packet buffer address */
+ struct {
+#define IDPF_FLEX_TXD_QW1_DTYPE_S 0
+#define IDPF_FLEX_TXD_QW1_DTYPE_M GENMASK(4, 0)
+#define IDPF_FLEX_TXD_QW1_CMD_S 5
+#define IDPF_FLEX_TXD_QW1_CMD_M GENMASK(15, 5)
+ __le16 cmd_dtype;
+ /* DTYPE=IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2 (0x07) */
+ struct {
+ __le16 l2tag1;
+ __le16 l2tag2;
+ } l2tags;
+ __le16 buf_size;
+ } qw1;
+};
+
+struct idpf_flex_tx_sched_desc {
+ __le64 buf_addr; /* Packet buffer address */
+
+ /* DTYPE = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE_16B (0x0C) */
+ struct {
+ u8 cmd_dtype;
+#define IDPF_TXD_FLEX_FLOW_DTYPE_M GENMASK(4, 0)
+#define IDPF_TXD_FLEX_FLOW_CMD_EOP BIT(5)
+#define IDPF_TXD_FLEX_FLOW_CMD_CS_EN BIT(6)
+#define IDPF_TXD_FLEX_FLOW_CMD_RE BIT(7)
+
+ /* [23:23] Horizon Overflow bit, [22:0] timestamp */
+ u8 ts[3];
+#define IDPF_TXD_FLOW_SCH_HORIZON_OVERFLOW_M BIT(7)
+
+ __le16 compl_tag;
+ __le16 rxr_bufsize;
+#define IDPF_TXD_FLEX_FLOW_RXR BIT(14)
+#define IDPF_TXD_FLEX_FLOW_BUFSIZE_M GENMASK(13, 0)
+ } qw1;
+};
+
+/* Common cmd fields for all flex context descriptors
+ * Note: these defines already account for the 5 bit dtype in the cmd_dtype
+ * field
+ */
+enum idpf_tx_flex_ctx_desc_cmd_bits {
+ IDPF_TX_FLEX_CTX_DESC_CMD_TSO = BIT(5),
+ IDPF_TX_FLEX_CTX_DESC_CMD_TSYN_EN = BIT(6),
+ IDPF_TX_FLEX_CTX_DESC_CMD_L2TAG2 = BIT(7),
+ IDPF_TX_FLEX_CTX_DESC_CMD_SWTCH_UPLNK = BIT(9),
+ IDPF_TX_FLEX_CTX_DESC_CMD_SWTCH_LOCAL = BIT(10),
+ IDPF_TX_FLEX_CTX_DESC_CMD_SWTCH_TARGETVSI = GENMASK(10, 9),
+};
+
+/* Standard flex descriptor TSO context quad word */
+struct idpf_flex_tx_tso_ctx_qw {
+ __le32 flex_tlen;
+#define IDPF_TXD_FLEX_CTX_TLEN_M GENMASK(17, 0)
+#define IDPF_TXD_FLEX_TSO_CTX_FLEX_S 24
+ __le16 mss_rt;
+#define IDPF_TXD_FLEX_CTX_MSS_RT_M GENMASK(13, 0)
+ u8 hdr_len;
+ u8 flex;
+};
+
+struct idpf_flex_tx_ctx_desc {
+ /* DTYPE = IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX (0x05) */
+ struct {
+ struct idpf_flex_tx_tso_ctx_qw qw0;
+ struct {
+ __le16 cmd_dtype;
+ u8 flex[6];
+ } qw1;
+ } tso;
+};
+#endif /* _IDPF_LAN_TXRX_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lan_vf_regs.h b/drivers/net/ethernet/intel/idpf/idpf_lan_vf_regs.h
new file mode 100644
index 000000000000..3d73b6c76863
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_lan_vf_regs.h
@@ -0,0 +1,128 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _IDPF_LAN_VF_REGS_H_
+#define _IDPF_LAN_VF_REGS_H_
+
+/* Reset */
+#define VFGEN_RSTAT 0x00008800
+#define VFGEN_RSTAT_VFR_STATE_S 0
+#define VFGEN_RSTAT_VFR_STATE_M GENMASK(1, 0)
+
+/* Control(VF Mailbox) Queue */
+#define VF_BASE 0x00006000
+
+#define VF_ATQBAL (VF_BASE + 0x1C00)
+#define VF_ATQBAH (VF_BASE + 0x1800)
+#define VF_ATQLEN (VF_BASE + 0x0800)
+#define VF_ATQLEN_ATQLEN_S 0
+#define VF_ATQLEN_ATQLEN_M GENMASK(9, 0)
+#define VF_ATQLEN_ATQVFE_S 28
+#define VF_ATQLEN_ATQVFE_M BIT(VF_ATQLEN_ATQVFE_S)
+#define VF_ATQLEN_ATQOVFL_S 29
+#define VF_ATQLEN_ATQOVFL_M BIT(VF_ATQLEN_ATQOVFL_S)
+#define VF_ATQLEN_ATQCRIT_S 30
+#define VF_ATQLEN_ATQCRIT_M BIT(VF_ATQLEN_ATQCRIT_S)
+#define VF_ATQLEN_ATQENABLE_S 31
+#define VF_ATQLEN_ATQENABLE_M BIT(VF_ATQLEN_ATQENABLE_S)
+#define VF_ATQH (VF_BASE + 0x0400)
+#define VF_ATQH_ATQH_S 0
+#define VF_ATQH_ATQH_M GENMASK(9, 0)
+#define VF_ATQT (VF_BASE + 0x2400)
+
+#define VF_ARQBAL (VF_BASE + 0x0C00)
+#define VF_ARQBAH (VF_BASE)
+#define VF_ARQLEN (VF_BASE + 0x2000)
+#define VF_ARQLEN_ARQLEN_S 0
+#define VF_ARQLEN_ARQLEN_M GENMASK(9, 0)
+#define VF_ARQLEN_ARQVFE_S 28
+#define VF_ARQLEN_ARQVFE_M BIT(VF_ARQLEN_ARQVFE_S)
+#define VF_ARQLEN_ARQOVFL_S 29
+#define VF_ARQLEN_ARQOVFL_M BIT(VF_ARQLEN_ARQOVFL_S)
+#define VF_ARQLEN_ARQCRIT_S 30
+#define VF_ARQLEN_ARQCRIT_M BIT(VF_ARQLEN_ARQCRIT_S)
+#define VF_ARQLEN_ARQENABLE_S 31
+#define VF_ARQLEN_ARQENABLE_M BIT(VF_ARQLEN_ARQENABLE_S)
+#define VF_ARQH (VF_BASE + 0x1400)
+#define VF_ARQH_ARQH_S 0
+#define VF_ARQH_ARQH_M GENMASK(12, 0)
+#define VF_ARQT (VF_BASE + 0x1000)
+
+/* Transmit queues */
+#define VF_QTX_TAIL_BASE 0x00000000
+#define VF_QTX_TAIL(_QTX) (VF_QTX_TAIL_BASE + (_QTX) * 0x4)
+#define VF_QTX_TAIL_EXT_BASE 0x00040000
+#define VF_QTX_TAIL_EXT(_QTX) (VF_QTX_TAIL_EXT_BASE + ((_QTX) * 4))
+
+/* Receive queues */
+#define VF_QRX_TAIL_BASE 0x00002000
+#define VF_QRX_TAIL(_QRX) (VF_QRX_TAIL_BASE + ((_QRX) * 4))
+#define VF_QRX_TAIL_EXT_BASE 0x00050000
+#define VF_QRX_TAIL_EXT(_QRX) (VF_QRX_TAIL_EXT_BASE + ((_QRX) * 4))
+#define VF_QRXB_TAIL_BASE 0x00060000
+#define VF_QRXB_TAIL(_QRX) (VF_QRXB_TAIL_BASE + ((_QRX) * 4))
+
+/* Interrupts */
+#define VF_INT_DYN_CTL0 0x00005C00
+#define VF_INT_DYN_CTL0_INTENA_S 0
+#define VF_INT_DYN_CTL0_INTENA_M BIT(VF_INT_DYN_CTL0_INTENA_S)
+#define VF_INT_DYN_CTL0_ITR_INDX_S 3
+#define VF_INT_DYN_CTL0_ITR_INDX_M GENMASK(4, 3)
+#define VF_INT_DYN_CTLN(_INT) (0x00003800 + ((_INT) * 4))
+#define VF_INT_DYN_CTLN_EXT(_INT) (0x00070000 + ((_INT) * 4))
+#define VF_INT_DYN_CTLN_INTENA_S 0
+#define VF_INT_DYN_CTLN_INTENA_M BIT(VF_INT_DYN_CTLN_INTENA_S)
+#define VF_INT_DYN_CTLN_CLEARPBA_S 1
+#define VF_INT_DYN_CTLN_CLEARPBA_M BIT(VF_INT_DYN_CTLN_CLEARPBA_S)
+#define VF_INT_DYN_CTLN_SWINT_TRIG_S 2
+#define VF_INT_DYN_CTLN_SWINT_TRIG_M BIT(VF_INT_DYN_CTLN_SWINT_TRIG_S)
+#define VF_INT_DYN_CTLN_ITR_INDX_S 3
+#define VF_INT_DYN_CTLN_ITR_INDX_M GENMASK(4, 3)
+#define VF_INT_DYN_CTLN_INTERVAL_S 5
+#define VF_INT_DYN_CTLN_INTERVAL_M BIT(VF_INT_DYN_CTLN_INTERVAL_S)
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S 24
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_ENA_S)
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_S 25
+#define VF_INT_DYN_CTLN_SW_ITR_INDX_M BIT(VF_INT_DYN_CTLN_SW_ITR_INDX_S)
+#define VF_INT_DYN_CTLN_WB_ON_ITR_S 30
+#define VF_INT_DYN_CTLN_WB_ON_ITR_M BIT(VF_INT_DYN_CTLN_WB_ON_ITR_S)
+#define VF_INT_DYN_CTLN_INTENA_MSK_S 31
+#define VF_INT_DYN_CTLN_INTENA_MSK_M BIT(VF_INT_DYN_CTLN_INTENA_MSK_S)
+/* _ITR is ITR index, _INT is interrupt index, _itrn_indx_spacing is spacing
+ * b/w itrn registers of the same vector
+ */
+#define VF_INT_ITR0(_ITR) (0x00004C00 + ((_ITR) * 4))
+#define VF_INT_ITRN_ADDR(_ITR, _reg_start, _itrn_indx_spacing) \
+ ((_reg_start) + ((_ITR) * (_itrn_indx_spacing)))
+/* For VF with 16 vector support, itrn_reg_spacing is 0x4, itrn_indx_spacing
+ * is 0x40 and base register offset is 0x00002800
+ */
+#define VF_INT_ITRN(_INT, _ITR) \
+ (0x00002800 + ((_INT) * 4) + ((_ITR) * 0x40))
+/* For VF with 64 vector support, itrn_reg_spacing is 0x4, itrn_indx_spacing
+ * is 0x100 and base register offset is 0x00002C00
+ */
+#define VF_INT_ITRN_64(_INT, _ITR) \
+ (0x00002C00 + ((_INT) * 4) + ((_ITR) * 0x100))
+/* For VF with 2k vector support, itrn_reg_spacing is 0x4, itrn_indx_spacing
+ * is 0x2000 and base register offset is 0x00072000
+ */
+#define VF_INT_ITRN_2K(_INT, _ITR) \
+ (0x00072000 + ((_INT) * 4) + ((_ITR) * 0x2000))
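+/* Example expansion, for illustration only: with the 16-vector layout, for
+ * interrupt vector 3 and ITR index 1,
+ *
+ *	VF_INT_ITRN(3, 1) = 0x00002800 + (3 * 4) + (1 * 0x40) = 0x0000284C
+ */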
+#define VF_INT_ITRN_MAX_INDEX 2
+#define VF_INT_ITRN_INTERVAL_S 0
+#define VF_INT_ITRN_INTERVAL_M GENMASK(11, 0)
+#define VF_INT_PBA_CLEAR 0x00008900
+
+#define VF_INT_ICR0_ENA1 0x00005000
+#define VF_INT_ICR0_ENA1_ADMINQ_S 30
+#define VF_INT_ICR0_ENA1_ADMINQ_M BIT(VF_INT_ICR0_ENA1_ADMINQ_S)
+#define VF_INT_ICR0_ENA1_RSVD_S 31
+#define VF_INT_ICR01 0x00004800
+#define VF_QF_HENA(_i) (0x0000C400 + ((_i) * 4))
+#define VF_QF_HENA_MAX_INDX 1
+#define VF_QF_HKEY(_i) (0x0000CC00 + ((_i) * 4))
+#define VF_QF_HKEY_MAX_INDX 12
+#define VF_QF_HLUT(_i) (0x0000D000 + ((_i) * 4))
+#define VF_QF_HLUT_MAX_INDX 15
+#endif
diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
new file mode 100644
index 000000000000..19809b0ddcd9
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
@@ -0,0 +1,2379 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2023 Intel Corporation */
+
+#include "idpf.h"
+
+static const struct net_device_ops idpf_netdev_ops_splitq;
+static const struct net_device_ops idpf_netdev_ops_singleq;
+
+const char * const idpf_vport_vc_state_str[] = {
+ IDPF_FOREACH_VPORT_VC_STATE(IDPF_GEN_STRING)
+};
+
+/**
+ * idpf_init_vector_stack - Fill the MSIX vector stack with vector index
+ * @adapter: private data struct
+ *
+ * Return 0 on success, error on failure
+ */
+static int idpf_init_vector_stack(struct idpf_adapter *adapter)
+{
+ struct idpf_vector_lifo *stack;
+ u16 min_vec;
+ u32 i;
+
+ mutex_lock(&adapter->vector_lock);
+ min_vec = adapter->num_msix_entries - adapter->num_avail_msix;
+ stack = &adapter->vector_stack;
+ stack->size = adapter->num_msix_entries;
+ /* set the base and top to point at the start of the 'free pool' to
+ * distribute the unused vectors on an on-demand basis
+ */
+ stack->base = min_vec;
+ stack->top = min_vec;
+
+ stack->vec_idx = kcalloc(stack->size, sizeof(u16), GFP_KERNEL);
+ if (!stack->vec_idx) {
+ mutex_unlock(&adapter->vector_lock);
+
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < stack->size; i++)
+ stack->vec_idx[i] = i;
+
+ mutex_unlock(&adapter->vector_lock);
+
+ return 0;
+}
+
+/**
+ * idpf_deinit_vector_stack - zero out the MSIX vector stack
+ * @adapter: private data struct
+ */
+static void idpf_deinit_vector_stack(struct idpf_adapter *adapter)
+{
+ struct idpf_vector_lifo *stack;
+
+ mutex_lock(&adapter->vector_lock);
+ stack = &adapter->vector_stack;
+ kfree(stack->vec_idx);
+ stack->vec_idx = NULL;
+ mutex_unlock(&adapter->vector_lock);
+}
+
+/**
+ * idpf_mb_intr_rel_irq - Free the IRQ association with the OS
+ * @adapter: adapter structure
+ *
+ * This will also disable interrupt mode and queue up the mailbox task. The
+ * mailbox task will reschedule itself if not in interrupt mode.
+ */
+static void idpf_mb_intr_rel_irq(struct idpf_adapter *adapter)
+{
+ clear_bit(IDPF_MB_INTR_MODE, adapter->flags);
+ free_irq(adapter->msix_entries[0].vector, adapter);
+ queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task, 0);
+}
+
+/**
+ * idpf_intr_rel - Release interrupt capabilities and free memory
+ * @adapter: adapter to disable interrupts on
+ */
+void idpf_intr_rel(struct idpf_adapter *adapter)
+{
+ int err;
+
+ if (!adapter->msix_entries)
+ return;
+
+ idpf_mb_intr_rel_irq(adapter);
+ pci_free_irq_vectors(adapter->pdev);
+
+ err = idpf_send_dealloc_vectors_msg(adapter);
+ if (err)
+ dev_err(&adapter->pdev->dev,
+ "Failed to deallocate vectors: %d\n", err);
+
+ idpf_deinit_vector_stack(adapter);
+ kfree(adapter->msix_entries);
+ adapter->msix_entries = NULL;
+}
+
+/**
+ * idpf_mb_intr_clean - Interrupt handler for the mailbox
+ * @irq: interrupt number
+ * @data: pointer to the adapter structure
+ */
+static irqreturn_t idpf_mb_intr_clean(int __always_unused irq, void *data)
+{
+ struct idpf_adapter *adapter = (struct idpf_adapter *)data;
+
+ queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task, 0);
+
+ return IRQ_HANDLED;
+}
+
+/**
+ * idpf_mb_irq_enable - Enable MSIX interrupt for the mailbox
+ * @adapter: adapter to get the hardware address for register write
+ */
+static void idpf_mb_irq_enable(struct idpf_adapter *adapter)
+{
+ struct idpf_intr_reg *intr = &adapter->mb_vector.intr_reg;
+ u32 val;
+
+ val = intr->dyn_ctl_intena_m | intr->dyn_ctl_itridx_m;
+ writel(val, intr->dyn_ctl);
+ writel(intr->icr_ena_ctlq_m, intr->icr_ena);
+}
+
+/**
+ * idpf_mb_intr_req_irq - Request irq for the mailbox interrupt
+ * @adapter: adapter structure to pass to the mailbox irq handler
+ */
+static int idpf_mb_intr_req_irq(struct idpf_adapter *adapter)
+{
+ struct idpf_q_vector *mb_vector = &adapter->mb_vector;
+ int irq_num, mb_vidx = 0, err;
+
+ irq_num = adapter->msix_entries[mb_vidx].vector;
+ mb_vector->name = kasprintf(GFP_KERNEL, "%s-%s-%d",
+ dev_driver_string(&adapter->pdev->dev),
+ "Mailbox", mb_vidx);
+ err = request_irq(irq_num, adapter->irq_mb_handler, 0,
+ mb_vector->name, adapter);
+ if (err) {
+ dev_err(&adapter->pdev->dev,
+ "IRQ request for mailbox failed, error: %d\n", err);
+
+ return err;
+ }
+
+ set_bit(IDPF_MB_INTR_MODE, adapter->flags);
+
+ return 0;
+}
+
+/**
+ * idpf_set_mb_vec_id - Set vector index for mailbox
+ * @adapter: adapter structure to access the vector chunks
+ *
+ * The first vector id in the requested vector chunks from the CP is for
+ * the mailbox
+ */
+static void idpf_set_mb_vec_id(struct idpf_adapter *adapter)
+{
+ if (adapter->req_vec_chunks)
+ adapter->mb_vector.v_idx =
+ le16_to_cpu(adapter->caps.mailbox_vector_id);
+ else
+ adapter->mb_vector.v_idx = 0;
+}
+
+/**
+ * idpf_mb_intr_init - Initialize the mailbox interrupt
+ * @adapter: adapter structure to store the mailbox vector
+ */
+static int idpf_mb_intr_init(struct idpf_adapter *adapter)
+{
+ adapter->dev_ops.reg_ops.mb_intr_reg_init(adapter);
+ adapter->irq_mb_handler = idpf_mb_intr_clean;
+
+ return idpf_mb_intr_req_irq(adapter);
+}
+
+/**
+ * idpf_vector_lifo_push - push MSIX vector index onto stack
+ * @adapter: private data struct
+ * @vec_idx: vector index to store
+ */
+static int idpf_vector_lifo_push(struct idpf_adapter *adapter, u16 vec_idx)
+{
+ struct idpf_vector_lifo *stack = &adapter->vector_stack;
+
+ lockdep_assert_held(&adapter->vector_lock);
+
+ if (stack->top == stack->base) {
+ dev_err(&adapter->pdev->dev, "Exceeded the vector stack limit: %d\n",
+ stack->top);
+ return -EINVAL;
+ }
+
+ stack->vec_idx[--stack->top] = vec_idx;
+
+ return 0;
+}
+
+/**
+ * idpf_vector_lifo_pop - pop MSIX vector index from stack
+ * @adapter: private data struct
+ */
+static int idpf_vector_lifo_pop(struct idpf_adapter *adapter)
+{
+ struct idpf_vector_lifo *stack = &adapter->vector_stack;
+
+ lockdep_assert_held(&adapter->vector_lock);
+
+ if (stack->top == stack->size) {
+ dev_err(&adapter->pdev->dev, "No interrupt vectors are available to distribute!\n");
+
+ return -EINVAL;
+ }
+
+ return stack->vec_idx[stack->top++];
+}
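+
+/* Behavioural sketch, for illustration only: with base = top = 3 and
+ * size = 8, three consecutive pops hand out vec_idx[3], vec_idx[4] and
+ * vec_idx[5]; pushing one of those indexes back stores it at the new top,
+ * so the most recently freed vector index is the first one handed out again.
+ */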
+
+/**
+ * idpf_vector_stash - Store the vector indexes onto the stack
+ * @adapter: private data struct
+ * @q_vector_idxs: vector index array
+ * @vec_info: info related to the number of vectors
+ *
+ * This function is a no-op if there are no vector indexes to be stashed
+ */
+static void idpf_vector_stash(struct idpf_adapter *adapter, u16 *q_vector_idxs,
+ struct idpf_vector_info *vec_info)
+{
+ int i, base = 0;
+ u16 vec_idx;
+
+ lockdep_assert_held(&adapter->vector_lock);
+
+ if (!vec_info->num_curr_vecs)
+ return;
+
+ /* For default vports, no need to stash vectors allocated from the
+ * default pool onto the stack
+ */
+ if (vec_info->default_vport)
+ base = IDPF_MIN_Q_VEC;
+
+ for (i = vec_info->num_curr_vecs - 1; i >= base ; i--) {
+ vec_idx = q_vector_idxs[i];
+ idpf_vector_lifo_push(adapter, vec_idx);
+ adapter->num_avail_msix++;
+ }
+}
+
+/**
+ * idpf_req_rel_vector_indexes - Request or release MSIX vector indexes
+ * @adapter: driver specific private structure
+ * @q_vector_idxs: vector index array
+ * @vec_info: info related to the number of vectors
+ *
+ * This is the core function to distribute the MSIX vectors acquired from the
+ * OS. It expects the caller to pass the number of vectors required as well
+ * as the number previously allocated. First, it stashes the previously
+ * allocated vector indexes onto the stack and then figures out if it can
+ * allocate the requested
+ * vectors. It can wait on acquiring the mutex lock. If the caller passes 0 as
+ * requested vectors, then this function just stashes the already allocated
+ * vectors and returns 0.
+ *
+ * Returns the actual number of vectors allocated on success, error value on
+ * failure. If 0 is returned, it implies the stack has no vectors to allocate,
+ * which is also a failure case for the caller.
+ */
+int idpf_req_rel_vector_indexes(struct idpf_adapter *adapter,
+ u16 *q_vector_idxs,
+ struct idpf_vector_info *vec_info)
+{
+ u16 num_req_vecs, num_alloc_vecs = 0, max_vecs;
+ struct idpf_vector_lifo *stack;
+ int i, j, vecid;
+
+ mutex_lock(&adapter->vector_lock);
+ stack = &adapter->vector_stack;
+ num_req_vecs = vec_info->num_req_vecs;
+
+ /* Stash interrupt vector indexes onto the stack if required */
+ idpf_vector_stash(adapter, q_vector_idxs, vec_info);
+
+ if (!num_req_vecs)
+ goto rel_lock;
+
+ if (vec_info->default_vport) {
+ /* IDPF_MIN_Q_VEC vectors per default vport are set aside in the
+ * default pool of the stack; use them for the default vports
+ */
+ j = vec_info->index * IDPF_MIN_Q_VEC + IDPF_MBX_Q_VEC;
+ for (i = 0; i < IDPF_MIN_Q_VEC; i++) {
+ q_vector_idxs[num_alloc_vecs++] = stack->vec_idx[j++];
+ num_req_vecs--;
+ }
+ }
+
+ /* Find out if the stack has enough vectors to allocate */
+ max_vecs = min(adapter->num_avail_msix, num_req_vecs);
+
+ for (j = 0; j < max_vecs; j++) {
+ vecid = idpf_vector_lifo_pop(adapter);
+ q_vector_idxs[num_alloc_vecs++] = vecid;
+ }
+ adapter->num_avail_msix -= max_vecs;
+
+rel_lock:
+ mutex_unlock(&adapter->vector_lock);
+
+ return num_alloc_vecs;
+}
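+
+/* Worked example, for illustration only; IDPF_MBX_Q_VEC and IDPF_MIN_Q_VEC
+ * are assumed to be 1 and 4 here, their real values are defined elsewhere:
+ * for default vport index 0 requesting 6 vectors, the first 4 come from the
+ * reserved slots vec_idx[1]..vec_idx[4] and the remaining 2 are popped from
+ * the free pool, capped by num_avail_msix.
+ */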
+
+/**
+ * idpf_intr_req - Request interrupt capabilities
+ * @adapter: adapter to enable interrupts on
+ *
+ * Returns 0 on success, negative on failure
+ */
+int idpf_intr_req(struct idpf_adapter *adapter)
+{
+ u16 default_vports = idpf_get_default_vports(adapter);
+ int num_q_vecs, total_vecs, num_vec_ids;
+ int min_vectors, v_actual, err;
+ unsigned int vector;
+ u16 *vecids;
+
+ total_vecs = idpf_get_reserved_vecs(adapter);
+ num_q_vecs = total_vecs - IDPF_MBX_Q_VEC;
+
+ err = idpf_send_alloc_vectors_msg(adapter, num_q_vecs);
+ if (err) {
+ dev_err(&adapter->pdev->dev,
+ "Failed to allocate %d vectors: %d\n", num_q_vecs, err);
+
+ return -EAGAIN;
+ }
+
+ min_vectors = IDPF_MBX_Q_VEC + IDPF_MIN_Q_VEC * default_vports;
+ v_actual = pci_alloc_irq_vectors(adapter->pdev, min_vectors,
+ total_vecs, PCI_IRQ_MSIX);
+ if (v_actual < min_vectors) {
+ dev_err(&adapter->pdev->dev, "Failed to allocate MSIX vectors: %d\n",
+ v_actual);
+ err = -EAGAIN;
+ goto send_dealloc_vecs;
+ }
+
+ adapter->msix_entries = kcalloc(v_actual, sizeof(struct msix_entry),
+ GFP_KERNEL);
+
+ if (!adapter->msix_entries) {
+ err = -ENOMEM;
+ goto free_irq;
+ }
+
+ idpf_set_mb_vec_id(adapter);
+
+ vecids = kcalloc(total_vecs, sizeof(u16), GFP_KERNEL);
+ if (!vecids) {
+ err = -ENOMEM;
+ goto free_msix;
+ }
+
+ if (adapter->req_vec_chunks) {
+ struct virtchnl2_vector_chunks *vchunks;
+ struct virtchnl2_alloc_vectors *ac;
+
+ ac = adapter->req_vec_chunks;
+ vchunks = &ac->vchunks;
+
+ num_vec_ids = idpf_get_vec_ids(adapter, vecids, total_vecs,
+ vchunks);
+ if (num_vec_ids < v_actual) {
+ err = -EINVAL;
+ goto free_vecids;
+ }
+ } else {
+ int i;
+
+ for (i = 0; i < v_actual; i++)
+ vecids[i] = i;
+ }
+
+ for (vector = 0; vector < v_actual; vector++) {
+ adapter->msix_entries[vector].entry = vecids[vector];
+ adapter->msix_entries[vector].vector =
+ pci_irq_vector(adapter->pdev, vector);
+ }
+
+ adapter->num_req_msix = total_vecs;
+ adapter->num_msix_entries = v_actual;
+ /* 'num_avail_msix' is used to distribute excess vectors to the vports
+ * after considering the minimum vectors required for each default
+ * vport
+ */
+ adapter->num_avail_msix = v_actual - min_vectors;
+
+ /* Fill MSIX vector lifo stack with vector indexes */
+ err = idpf_init_vector_stack(adapter);
+ if (err)
+ goto free_vecids;
+
+ err = idpf_mb_intr_init(adapter);
+ if (err)
+ goto deinit_vec_stack;
+ idpf_mb_irq_enable(adapter);
+ kfree(vecids);
+
+ return 0;
+
+deinit_vec_stack:
+ idpf_deinit_vector_stack(adapter);
+free_vecids:
+ kfree(vecids);
+free_msix:
+ kfree(adapter->msix_entries);
+ adapter->msix_entries = NULL;
+free_irq:
+ pci_free_irq_vectors(adapter->pdev);
+send_dealloc_vecs:
+ idpf_send_dealloc_vectors_msg(adapter);
+
+ return err;
+}
+
+/**
+ * idpf_find_mac_filter - Search filter list for specific mac filter
+ * @vconfig: Vport config structure
+ * @macaddr: The MAC address
+ *
+ * Returns ptr to the filter object or NULL. Must be called while holding the
+ * mac_filter_list_lock.
+ **/
+static struct idpf_mac_filter *idpf_find_mac_filter(struct idpf_vport_config *vconfig,
+ const u8 *macaddr)
+{
+ struct idpf_mac_filter *f;
+
+ if (!macaddr)
+ return NULL;
+
+ list_for_each_entry(f, &vconfig->user_config.mac_filter_list, list) {
+ if (ether_addr_equal(macaddr, f->macaddr))
+ return f;
+ }
+
+ return NULL;
+}
+
+/**
+ * __idpf_del_mac_filter - Delete a MAC filter from the filter list
+ * @vport_config: Vport config structure
+ * @macaddr: The MAC address
+ *
+ * Returns 0 on success, error value on failure
+ **/
+static int __idpf_del_mac_filter(struct idpf_vport_config *vport_config,
+ const u8 *macaddr)
+{
+ struct idpf_mac_filter *f;
+
+ spin_lock_bh(&vport_config->mac_filter_list_lock);
+ f = idpf_find_mac_filter(vport_config, macaddr);
+ if (f) {
+ list_del(&f->list);
+ kfree(f);
+ }
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+
+ return 0;
+}
+
+/**
+ * idpf_del_mac_filter - Delete a MAC filter from the filter list
+ * @vport: Main vport structure
+ * @np: Netdev private structure
+ * @macaddr: The MAC address
+ * @async: Don't wait for return message
+ *
+ * Removes filter from list and if interface is up, tells hardware about the
+ * removed filter.
+ **/
+static int idpf_del_mac_filter(struct idpf_vport *vport,
+ struct idpf_netdev_priv *np,
+ const u8 *macaddr, bool async)
+{
+ struct idpf_vport_config *vport_config;
+ struct idpf_mac_filter *f;
+
+ vport_config = np->adapter->vport_config[np->vport_idx];
+
+ spin_lock_bh(&vport_config->mac_filter_list_lock);
+ f = idpf_find_mac_filter(vport_config, macaddr);
+ if (f) {
+ f->remove = true;
+ } else {
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+
+ return -EINVAL;
+ }
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+
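+ /* If the interface is up, tell hardware about the removal (the filter
+ * was flagged 'remove' above) before dropping the entry from the
+ * software list below.
+ */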
+ if (np->state == __IDPF_VPORT_UP) {
+ int err;
+
+ err = idpf_add_del_mac_filters(vport, np, false, async);
+ if (err)
+ return err;
+ }
+
+ return __idpf_del_mac_filter(vport_config, macaddr);
+}
+
+/**
+ * __idpf_add_mac_filter - Add mac filter helper function
+ * @vport_config: Vport config structure
+ * @macaddr: Address to add
+ *
+ * Takes mac_filter_list_lock spinlock to add new filter to list.
+ */
+static int __idpf_add_mac_filter(struct idpf_vport_config *vport_config,
+ const u8 *macaddr)
+{
+ struct idpf_mac_filter *f;
+
+ spin_lock_bh(&vport_config->mac_filter_list_lock);
+
+ f = idpf_find_mac_filter(vport_config, macaddr);
+ if (f) {
+ f->remove = false;
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+
+ return 0;
+ }
+
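+ /* GFP_ATOMIC: the filter list spinlock is held, so this allocation
+ * must not sleep.
+ */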
+ f = kzalloc(sizeof(*f), GFP_ATOMIC);
+ if (!f) {
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+
+ return -ENOMEM;
+ }
+
+ ether_addr_copy(f->macaddr, macaddr);
+ list_add_tail(&f->list, &vport_config->user_config.mac_filter_list);
+ f->add = true;
+
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+
+ return 0;
+}
+
+/**
+ * idpf_add_mac_filter - Add a mac filter to the filter list
+ * @vport: Main vport structure
+ * @np: Netdev private structure
+ * @macaddr: The MAC address
+ * @async: Don't wait for return message
+ *
+ * Returns 0 on success or error on failure. If interface is up, we'll also
+ * send the virtchnl message to tell hardware about the filter.
+ **/
+static int idpf_add_mac_filter(struct idpf_vport *vport,
+ struct idpf_netdev_priv *np,
+ const u8 *macaddr, bool async)
+{
+ struct idpf_vport_config *vport_config;
+ int err;
+
+ vport_config = np->adapter->vport_config[np->vport_idx];
+ err = __idpf_add_mac_filter(vport_config, macaddr);
+ if (err)
+ return err;
+
+ if (np->state == __IDPF_VPORT_UP)
+ err = idpf_add_del_mac_filters(vport, np, true, async);
+
+ return err;
+}
+
+/**
+ * idpf_del_all_mac_filters - Delete all MAC filters in list
+ * @vport: main vport struct
+ *
+ * Takes mac_filter_list_lock spinlock. Deletes all filters
+ */
+static void idpf_del_all_mac_filters(struct idpf_vport *vport)
+{
+ struct idpf_vport_config *vport_config;
+ struct idpf_mac_filter *f, *ftmp;
+
+ vport_config = vport->adapter->vport_config[vport->idx];
+ spin_lock_bh(&vport_config->mac_filter_list_lock);
+
+ list_for_each_entry_safe(f, ftmp, &vport_config->user_config.mac_filter_list,
+ list) {
+ list_del(&f->list);
+ kfree(f);
+ }
+
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+}
+
+/**
+ * idpf_restore_mac_filters - Re-add all MAC filters in list
+ * @vport: main vport struct
+ *
+ * Takes mac_filter_list_lock spinlock. Sets add field to true for filters to
+ * resync filters back to HW.
+ */
+static void idpf_restore_mac_filters(struct idpf_vport *vport)
+{
+ struct idpf_vport_config *vport_config;
+ struct idpf_mac_filter *f;
+
+ vport_config = vport->adapter->vport_config[vport->idx];
+ spin_lock_bh(&vport_config->mac_filter_list_lock);
+
+ list_for_each_entry(f, &vport_config->user_config.mac_filter_list, list)
+ f->add = true;
+
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+
+ idpf_add_del_mac_filters(vport, netdev_priv(vport->netdev),
+ true, false);
+}
+
+/**
+ * idpf_remove_mac_filters - Remove all MAC filters in list
+ * @vport: main vport struct
+ *
+ * Takes mac_filter_list_lock spinlock. Sets remove field to true for filters
+ * to remove filters in HW.
+ */
+static void idpf_remove_mac_filters(struct idpf_vport *vport)
+{
+ struct idpf_vport_config *vport_config;
+ struct idpf_mac_filter *f;
+
+ vport_config = vport->adapter->vport_config[vport->idx];
+ spin_lock_bh(&vport_config->mac_filter_list_lock);
+
+ list_for_each_entry(f, &vport_config->user_config.mac_filter_list, list)
+ f->remove = true;
+
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+
+ idpf_add_del_mac_filters(vport, netdev_priv(vport->netdev),
+ false, false);
+}
+
+/**
+ * idpf_deinit_mac_addr - deinitialize mac address for vport
+ * @vport: main vport structure
+ */
+static void idpf_deinit_mac_addr(struct idpf_vport *vport)
+{
+ struct idpf_vport_config *vport_config;
+ struct idpf_mac_filter *f;
+
+ vport_config = vport->adapter->vport_config[vport->idx];
+
+ spin_lock_bh(&vport_config->mac_filter_list_lock);
+
+ f = idpf_find_mac_filter(vport_config, vport->default_mac_addr);
+ if (f) {
+ list_del(&f->list);
+ kfree(f);
+ }
+
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+}
+
+/**
+ * idpf_init_mac_addr - initialize mac address for vport
+ * @vport: main vport structure
+ * @netdev: pointer to netdev struct associated with this vport
+ */
+static int idpf_init_mac_addr(struct idpf_vport *vport,
+ struct net_device *netdev)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_adapter *adapter = vport->adapter;
+ int err;
+
+ if (is_valid_ether_addr(vport->default_mac_addr)) {
+ eth_hw_addr_set(netdev, vport->default_mac_addr);
+ ether_addr_copy(netdev->perm_addr, vport->default_mac_addr);
+
+ return idpf_add_mac_filter(vport, np, vport->default_mac_addr,
+ false);
+ }
+
+ if (!idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS,
+ VIRTCHNL2_CAP_MACFILTER)) {
+ dev_err(&adapter->pdev->dev,
+ "MAC address is not provided and capability is not set\n");
+
+ return -EINVAL;
+ }
+
+ eth_hw_addr_random(netdev);
+ err = idpf_add_mac_filter(vport, np, netdev->dev_addr, false);
+ if (err)
+ return err;
+
+ dev_info(&adapter->pdev->dev, "Invalid MAC address %pM, using random %pM\n",
+ vport->default_mac_addr, netdev->dev_addr);
+ ether_addr_copy(vport->default_mac_addr, netdev->dev_addr);
+
+ return 0;
+}
+
+/**
+ * idpf_cfg_netdev - Allocate, configure and register a netdev
+ * @vport: main vport structure
+ *
+ * Returns 0 on success, negative value on failure.
+ */
+static int idpf_cfg_netdev(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct idpf_vport_config *vport_config;
+ netdev_features_t dflt_features;
+ netdev_features_t offloads = 0;
+ struct idpf_netdev_priv *np;
+ struct net_device *netdev;
+ u16 idx = vport->idx;
+ int err;
+
+ vport_config = adapter->vport_config[idx];
+
+ /* It's possible we already have a netdev allocated and registered for
+ * this vport
+ */
+ if (test_bit(IDPF_VPORT_REG_NETDEV, vport_config->flags)) {
+ netdev = adapter->netdevs[idx];
+ np = netdev_priv(netdev);
+ np->vport = vport;
+ np->vport_idx = vport->idx;
+ np->vport_id = vport->vport_id;
+ vport->netdev = netdev;
+
+ return idpf_init_mac_addr(vport, netdev);
+ }
+
+ netdev = alloc_etherdev_mqs(sizeof(struct idpf_netdev_priv),
+ vport_config->max_q.max_txq,
+ vport_config->max_q.max_rxq);
+ if (!netdev)
+ return -ENOMEM;
+
+ vport->netdev = netdev;
+ np = netdev_priv(netdev);
+ np->vport = vport;
+ np->adapter = adapter;
+ np->vport_idx = vport->idx;
+ np->vport_id = vport->vport_id;
+
+ spin_lock_init(&np->stats_lock);
+
+ err = idpf_init_mac_addr(vport, netdev);
+ if (err) {
+ free_netdev(vport->netdev);
+ vport->netdev = NULL;
+
+ return err;
+ }
+
+ /* assign netdev_ops */
+ if (idpf_is_queue_model_split(vport->txq_model))
+ netdev->netdev_ops = &idpf_netdev_ops_splitq;
+ else
+ netdev->netdev_ops = &idpf_netdev_ops_singleq;
+
+ /* setup watchdog timeout value to be 5 second */
+ netdev->watchdog_timeo = 5 * HZ;
+
+ /* configure default MTU size */
+ netdev->min_mtu = ETH_MIN_MTU;
+ netdev->max_mtu = vport->max_mtu;
+
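+ /* Start from a baseline feature set; capability-gated features are
+ * ORed in below.
+ */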
+ dflt_features = NETIF_F_SG |
+ NETIF_F_HIGHDMA;
+
+ if (idpf_is_cap_ena_all(adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS))
+ dflt_features |= NETIF_F_RXHASH;
+ if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM_L4V4))
+ dflt_features |= NETIF_F_IP_CSUM;
+ if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM_L4V6))
+ dflt_features |= NETIF_F_IPV6_CSUM;
+ if (idpf_is_cap_ena(adapter, IDPF_CSUM_CAPS, IDPF_CAP_RX_CSUM))
+ dflt_features |= NETIF_F_RXCSUM;
+ if (idpf_is_cap_ena_all(adapter, IDPF_CSUM_CAPS, IDPF_CAP_SCTP_CSUM))
+ dflt_features |= NETIF_F_SCTP_CRC;
+
+ if (idpf_is_cap_ena(adapter, IDPF_SEG_CAPS, VIRTCHNL2_CAP_SEG_IPV4_TCP))
+ dflt_features |= NETIF_F_TSO;
+ if (idpf_is_cap_ena(adapter, IDPF_SEG_CAPS, VIRTCHNL2_CAP_SEG_IPV6_TCP))
+ dflt_features |= NETIF_F_TSO6;
+ if (idpf_is_cap_ena_all(adapter, IDPF_SEG_CAPS,
+ VIRTCHNL2_CAP_SEG_IPV4_UDP |
+ VIRTCHNL2_CAP_SEG_IPV6_UDP))
+ dflt_features |= NETIF_F_GSO_UDP_L4;
+ if (idpf_is_cap_ena_all(adapter, IDPF_RSC_CAPS, IDPF_CAP_RSC))
+ offloads |= NETIF_F_GRO_HW;
+ /* advertise to stack only if offloads for encapsulated packets are
+ * supported
+ */
+ if (idpf_is_cap_ena(vport->adapter, IDPF_SEG_CAPS,
+ VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL)) {
+ offloads |= NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_GSO_GRE |
+ NETIF_F_GSO_GRE_CSUM |
+ NETIF_F_GSO_PARTIAL |
+ NETIF_F_GSO_UDP_TUNNEL_CSUM |
+ NETIF_F_GSO_IPXIP4 |
+ NETIF_F_GSO_IPXIP6 |
+ 0;
+
+ if (!idpf_is_cap_ena_all(vport->adapter, IDPF_CSUM_CAPS,
+ IDPF_CAP_TUNNEL_TX_CSUM))
+ netdev->gso_partial_features |=
+ NETIF_F_GSO_UDP_TUNNEL_CSUM;
+
+ netdev->gso_partial_features |= NETIF_F_GSO_GRE_CSUM;
+ offloads |= NETIF_F_TSO_MANGLEID;
+ }
+ if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_LOOPBACK))
+ offloads |= NETIF_F_LOOPBACK;
+
+ netdev->features |= dflt_features;
+ netdev->hw_features |= dflt_features | offloads;
+ netdev->hw_enc_features |= dflt_features | offloads;
+ idpf_set_ethtool_ops(netdev);
+ SET_NETDEV_DEV(netdev, &adapter->pdev->dev);
+
+ /* carrier off on init to avoid Tx hangs */
+ netif_carrier_off(netdev);
+
+ /* make sure transmit queues start off as stopped */
+ netif_tx_stop_all_queues(netdev);
+
+ /* The vport can be arbitrarily released so we need to also track
+ * netdevs in the adapter struct
+ */
+ adapter->netdevs[idx] = netdev;
+
+ return 0;
+}
+
+/**
+ * idpf_get_free_slot - find the next free (NULL) slot index in the vports array
+ * @adapter: adapter in which to look for a free vport slot
+ */
+static int idpf_get_free_slot(struct idpf_adapter *adapter)
+{
+ unsigned int i;
+
+ for (i = 0; i < adapter->max_vports; i++) {
+ if (!adapter->vports[i])
+ return i;
+ }
+
+ return IDPF_NO_FREE_SLOT;
+}
+
+/**
+ * idpf_remove_features - Turn off feature configs
+ * @vport: virtual port structure
+ */
+static void idpf_remove_features(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+
+ if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_MACFILTER))
+ idpf_remove_mac_filters(vport);
+}
+
+/**
+ * idpf_vport_stop - Disable a vport
+ * @vport: vport to disable
+ */
+static void idpf_vport_stop(struct idpf_vport *vport)
+{
+ struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+
+ if (np->state <= __IDPF_VPORT_DOWN)
+ return;
+
+ netif_carrier_off(vport->netdev);
+ netif_tx_disable(vport->netdev);
+
+ idpf_send_disable_vport_msg(vport);
+ idpf_send_disable_queues_msg(vport);
+ idpf_send_map_unmap_queue_vector_msg(vport, false);
+ /* Normally we ask for queues in create_vport, but if the number of
+ * initially requested queues has changed, for example via ethtool
+ * set channels, we do delete queues and then add the queues back
+ * instead of deleting and reallocating the vport.
+ */
+ if (test_and_clear_bit(IDPF_VPORT_DEL_QUEUES, vport->flags))
+ idpf_send_delete_queues_msg(vport);
+
+ idpf_remove_features(vport);
+
+ vport->link_up = false;
+ idpf_vport_intr_deinit(vport);
+ idpf_vport_intr_rel(vport);
+ idpf_vport_queues_rel(vport);
+ np->state = __IDPF_VPORT_DOWN;
+}
+
+/**
+ * idpf_stop - Disables a network interface
+ * @netdev: network interface device structure
+ *
+ * The stop entry point is called when an interface is de-activated by the OS,
+ * and the netdevice enters the DOWN state. The hardware is still under the
+ * driver's control, but the netdev interface is disabled.
+ *
+ * Returns success only - not allowed to fail
+ */
+static int idpf_stop(struct net_device *netdev)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_vport *vport;
+
+ if (test_bit(IDPF_REMOVE_IN_PROG, np->adapter->flags))
+ return 0;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ idpf_vport_stop(vport);
+
+ idpf_vport_ctrl_unlock(netdev);
+
+ return 0;
+}
+
+/**
+ * idpf_decfg_netdev - Unregister the netdev
+ * @vport: vport for which netdev to be unregistered
+ */
+static void idpf_decfg_netdev(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+
+ unregister_netdev(vport->netdev);
+ free_netdev(vport->netdev);
+ vport->netdev = NULL;
+
+ adapter->netdevs[vport->idx] = NULL;
+}
+
+/**
+ * idpf_vport_rel - Delete a vport and free its resources
+ * @vport: the vport being removed
+ */
+static void idpf_vport_rel(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct idpf_vport_config *vport_config;
+ struct idpf_vector_info vec_info;
+ struct idpf_rss_data *rss_data;
+ struct idpf_vport_max_q max_q;
+ u16 idx = vport->idx;
+ int i;
+
+ vport_config = adapter->vport_config[vport->idx];
+ idpf_deinit_rss(vport);
+ rss_data = &vport_config->user_config.rss_data;
+ kfree(rss_data->rss_key);
+ rss_data->rss_key = NULL;
+
+ idpf_send_destroy_vport_msg(vport);
+
+ /* Set all bits as we don't know on which vc_state the vport vchnl_wq
+ * is waiting, and wake up the virtchnl workqueue even if it is
+ * waiting for a response, as we are going down
+ */
+ for (i = 0; i < IDPF_VC_NBITS; i++)
+ set_bit(i, vport->vc_state);
+ wake_up(&vport->vchnl_wq);
+
+ mutex_destroy(&vport->vc_buf_lock);
+
+ /* Clear all the bits */
+ for (i = 0; i < IDPF_VC_NBITS; i++)
+ clear_bit(i, vport->vc_state);
+
+ /* Release all max queues allocated to the adapter's pool */
+ max_q.max_rxq = vport_config->max_q.max_rxq;
+ max_q.max_txq = vport_config->max_q.max_txq;
+ max_q.max_bufq = vport_config->max_q.max_bufq;
+ max_q.max_complq = vport_config->max_q.max_complq;
+ idpf_vport_dealloc_max_qs(adapter, &max_q);
+
+ /* Release all the allocated vectors on the stack */
+ vec_info.num_req_vecs = 0;
+ vec_info.num_curr_vecs = vport->num_q_vectors;
+ vec_info.default_vport = vport->default_vport;
+
+ idpf_req_rel_vector_indexes(adapter, vport->q_vector_idxs, &vec_info);
+
+ kfree(vport->q_vector_idxs);
+ vport->q_vector_idxs = NULL;
+
+ kfree(adapter->vport_params_recvd[idx]);
+ adapter->vport_params_recvd[idx] = NULL;
+ kfree(adapter->vport_params_reqd[idx]);
+ adapter->vport_params_reqd[idx] = NULL;
+ if (adapter->vport_config[idx]) {
+ kfree(adapter->vport_config[idx]->req_qs_chunks);
+ adapter->vport_config[idx]->req_qs_chunks = NULL;
+ }
+ kfree(vport);
+ adapter->num_alloc_vports--;
+}
+
+/**
+ * idpf_vport_dealloc - cleanup and release a given vport
+ * @vport: pointer to idpf vport structure
+ *
+ * returns nothing
+ */
+static void idpf_vport_dealloc(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ unsigned int i = vport->idx;
+
+ idpf_deinit_mac_addr(vport);
+ idpf_vport_stop(vport);
+
+ if (!test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags))
+ idpf_decfg_netdev(vport);
+ if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
+ idpf_del_all_mac_filters(vport);
+
+ if (adapter->netdevs[i]) {
+ struct idpf_netdev_priv *np = netdev_priv(adapter->netdevs[i]);
+
+ np->vport = NULL;
+ }
+
+ idpf_vport_rel(vport);
+
+ adapter->vports[i] = NULL;
+ adapter->next_vport = idpf_get_free_slot(adapter);
+}
+
+/**
+ * idpf_vport_alloc - Allocates the next available struct vport in the adapter
+ * @adapter: board private structure
+ * @max_q: vport max queue info
+ *
+ * returns a pointer to a vport on success, NULL on failure.
+ */
+static struct idpf_vport *idpf_vport_alloc(struct idpf_adapter *adapter,
+ struct idpf_vport_max_q *max_q)
+{
+ struct idpf_rss_data *rss_data;
+ u16 idx = adapter->next_vport;
+ struct idpf_vport *vport;
+ u16 num_max_q;
+
+ if (idx == IDPF_NO_FREE_SLOT)
+ return NULL;
+
+ vport = kzalloc(sizeof(*vport), GFP_KERNEL);
+ if (!vport)
+ return vport;
+
+ if (!adapter->vport_config[idx]) {
+ struct idpf_vport_config *vport_config;
+
+ vport_config = kzalloc(sizeof(*vport_config), GFP_KERNEL);
+ if (!vport_config) {
+ kfree(vport);
+
+ return NULL;
+ }
+
+ adapter->vport_config[idx] = vport_config;
+ }
+
+ vport->idx = idx;
+ vport->adapter = adapter;
+ vport->compln_clean_budget = IDPF_TX_COMPLQ_CLEAN_BUDGET;
+ vport->default_vport = adapter->num_alloc_vports <
+ idpf_get_default_vports(adapter);
+
+ num_max_q = max(max_q->max_txq, max_q->max_rxq);
+ vport->q_vector_idxs = kcalloc(num_max_q, sizeof(u16), GFP_KERNEL);
+ if (!vport->q_vector_idxs) {
+ kfree(vport);
+
+ return NULL;
+ }
+ idpf_vport_init(vport, max_q);
+
+ /* This alloc is done separately from the LUT because it's not strictly
+ * dependent on how many queues we have. If we change number of queues
+ * and soft reset we'll need a new LUT but the key can remain the same
+ * for as long as the vport exists.
+ */
+ rss_data = &adapter->vport_config[idx]->user_config.rss_data;
+ rss_data->rss_key = kzalloc(rss_data->rss_key_size, GFP_KERNEL);
+ if (!rss_data->rss_key) {
+ kfree(vport);
+
+ return NULL;
+ }
+ /* Initialize default rss key */
+ netdev_rss_key_fill((void *)rss_data->rss_key, rss_data->rss_key_size);
+
+ /* fill vport slot in the adapter struct */
+ adapter->vports[idx] = vport;
+ adapter->vport_ids[idx] = idpf_get_vport_id(vport);
+
+ adapter->num_alloc_vports++;
+ /* prepare adapter->next_vport for next use */
+ adapter->next_vport = idpf_get_free_slot(adapter);
+
+ return vport;
+}
+
+/**
+ * idpf_get_stats64 - get statistics for network device structure
+ * @netdev: network interface device structure
+ * @stats: main device statistics structure
+ */
+static void idpf_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+
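+ /* Stats are refreshed asynchronously over the mailbox by
+ * idpf_statistics_task; return the latest snapshot under the lock.
+ */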
+ spin_lock_bh(&np->stats_lock);
+ *stats = np->netstats;
+ spin_unlock_bh(&np->stats_lock);
+}
+
+/**
+ * idpf_statistics_task - Delayed task to get statistics over mailbox
+ * @work: work_struct handle to our data
+ */
+void idpf_statistics_task(struct work_struct *work)
+{
+ struct idpf_adapter *adapter;
+ int i;
+
+ adapter = container_of(work, struct idpf_adapter, stats_task.work);
+
+ for (i = 0; i < adapter->max_vports; i++) {
+ struct idpf_vport *vport = adapter->vports[i];
+
+ if (vport && !test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags))
+ idpf_send_get_stats_msg(vport);
+ }
+
+ queue_delayed_work(adapter->stats_wq, &adapter->stats_task,
+ msecs_to_jiffies(10000));
+}
+
+/**
+ * idpf_mbx_task - Delayed task to handle mailbox responses
+ * @work: work_struct handle
+ */
+void idpf_mbx_task(struct work_struct *work)
+{
+ struct idpf_adapter *adapter;
+
+ adapter = container_of(work, struct idpf_adapter, mbx_task.work);
+
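+ /* In interrupt mode, re-arm the mailbox interrupt; otherwise keep
+ * polling the mailbox every 300 ms.
+ */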
+ if (test_bit(IDPF_MB_INTR_MODE, adapter->flags))
+ idpf_mb_irq_enable(adapter);
+ else
+ queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task,
+ msecs_to_jiffies(300));
+
+ idpf_recv_mb_msg(adapter, VIRTCHNL2_OP_UNKNOWN, NULL, 0);
+}
+
+/**
+ * idpf_service_task - Delayed task for handling mailbox responses
+ * @work: work_struct handle to our data
+ *
+ */
+void idpf_service_task(struct work_struct *work)
+{
+ struct idpf_adapter *adapter;
+
+ adapter = container_of(work, struct idpf_adapter, serv_task.work);
+
+ if (idpf_is_reset_detected(adapter) &&
+ !idpf_is_reset_in_prog(adapter) &&
+ !test_bit(IDPF_REMOVE_IN_PROG, adapter->flags)) {
+ dev_info(&adapter->pdev->dev, "HW reset detected\n");
+ set_bit(IDPF_HR_FUNC_RESET, adapter->flags);
+ queue_delayed_work(adapter->vc_event_wq,
+ &adapter->vc_event_task,
+ msecs_to_jiffies(10));
+ }
+
+ queue_delayed_work(adapter->serv_wq, &adapter->serv_task,
+ msecs_to_jiffies(300));
+}
+
+/**
+ * idpf_restore_features - Restore feature configs
+ * @vport: virtual port structure
+ */
+static void idpf_restore_features(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+
+ if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_MACFILTER))
+ idpf_restore_mac_filters(vport);
+}
+
+/**
+ * idpf_set_real_num_queues - set number of queues for netdev
+ * @vport: virtual port structure
+ *
+ * Returns 0 on success, negative on failure.
+ */
+static int idpf_set_real_num_queues(struct idpf_vport *vport)
+{
+ int err;
+
+ err = netif_set_real_num_rx_queues(vport->netdev, vport->num_rxq);
+ if (err)
+ return err;
+
+ return netif_set_real_num_tx_queues(vport->netdev, vport->num_txq);
+}
+
+/**
+ * idpf_up_complete - Complete interface up sequence
+ * @vport: virtual port structure
+ *
+ * Returns 0 on success, negative on failure.
+ */
+static int idpf_up_complete(struct idpf_vport *vport)
+{
+ struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+
+ if (vport->link_up && !netif_carrier_ok(vport->netdev)) {
+ netif_carrier_on(vport->netdev);
+ netif_tx_start_all_queues(vport->netdev);
+ }
+
+ np->state = __IDPF_VPORT_UP;
+
+ return 0;
+}
+
+/**
+ * idpf_rx_init_buf_tail - Write initial buffer ring tail value
+ * @vport: virtual port struct
+ */
+static void idpf_rx_init_buf_tail(struct idpf_vport *vport)
+{
+ int i, j;
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *grp = &vport->rxq_grps[i];
+
+ if (idpf_is_queue_model_split(vport->rxq_model)) {
+ for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ struct idpf_queue *q =
+ &grp->splitq.bufq_sets[j].bufq;
+
+ writel(q->next_to_alloc, q->tail);
+ }
+ } else {
+ for (j = 0; j < grp->singleq.num_rxq; j++) {
+ struct idpf_queue *q =
+ grp->singleq.rxqs[j];
+
+ writel(q->next_to_alloc, q->tail);
+ }
+ }
+ }
+}
+
+/**
+ * idpf_vport_open - Bring up a vport
+ * @vport: vport to bring up
+ * @alloc_res: allocate queue resources
+ */
+static int idpf_vport_open(struct idpf_vport *vport, bool alloc_res)
+{
+ struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+ struct idpf_adapter *adapter = vport->adapter;
+ struct idpf_vport_config *vport_config;
+ int err;
+
+ if (np->state != __IDPF_VPORT_DOWN)
+ return -EBUSY;
+
+ /* we do not allow interface up just yet */
+ netif_carrier_off(vport->netdev);
+
+ if (alloc_res) {
+ err = idpf_vport_queues_alloc(vport);
+ if (err)
+ return err;
+ }
+
+ err = idpf_vport_intr_alloc(vport);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to allocate interrupts for vport %u: %d\n",
+ vport->vport_id, err);
+ goto queues_rel;
+ }
+
+ err = idpf_vport_queue_ids_init(vport);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to initialize queue ids for vport %u: %d\n",
+ vport->vport_id, err);
+ goto intr_rel;
+ }
+
+ err = idpf_vport_intr_init(vport);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to initialize interrupts for vport %u: %d\n",
+ vport->vport_id, err);
+ goto intr_rel;
+ }
+
+ err = idpf_rx_bufs_init_all(vport);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to initialize RX buffers for vport %u: %d\n",
+ vport->vport_id, err);
+ goto intr_rel;
+ }
+
+ err = idpf_queue_reg_init(vport);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to initialize queue registers for vport %u: %d\n",
+ vport->vport_id, err);
+ goto intr_rel;
+ }
+
+ idpf_rx_init_buf_tail(vport);
+
+ err = idpf_send_config_queues_msg(vport);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to configure queues for vport %u, %d\n",
+ vport->vport_id, err);
+ goto intr_deinit;
+ }
+
+ err = idpf_send_map_unmap_queue_vector_msg(vport, true);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to map queue vectors for vport %u: %d\n",
+ vport->vport_id, err);
+ goto intr_deinit;
+ }
+
+ err = idpf_send_enable_queues_msg(vport);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to enable queues for vport %u: %d\n",
+ vport->vport_id, err);
+ goto unmap_queue_vectors;
+ }
+
+ err = idpf_send_enable_vport_msg(vport);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to enable vport %u: %d\n",
+ vport->vport_id, err);
+ err = -EAGAIN;
+ goto disable_queues;
+ }
+
+ idpf_restore_features(vport);
+
+ vport_config = adapter->vport_config[vport->idx];
+ if (vport_config->user_config.rss_data.rss_lut)
+ err = idpf_config_rss(vport);
+ else
+ err = idpf_init_rss(vport);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to initialize RSS for vport %u: %d\n",
+ vport->vport_id, err);
+ goto disable_vport;
+ }
+
+ err = idpf_up_complete(vport);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to complete interface up for vport %u: %d\n",
+ vport->vport_id, err);
+ goto deinit_rss;
+ }
+
+ return 0;
+
+deinit_rss:
+ idpf_deinit_rss(vport);
+disable_vport:
+ idpf_send_disable_vport_msg(vport);
+disable_queues:
+ idpf_send_disable_queues_msg(vport);
+unmap_queue_vectors:
+ idpf_send_map_unmap_queue_vector_msg(vport, false);
+intr_deinit:
+ idpf_vport_intr_deinit(vport);
+intr_rel:
+ idpf_vport_intr_rel(vport);
+queues_rel:
+ idpf_vport_queues_rel(vport);
+
+ return err;
+}
+
+/**
+ * idpf_init_task - Delayed initialization task
+ * @work: work_struct handle to our data
+ *
+ * Init task finishes up pending work started in probe. Due to the asynchronous
+ * nature of the driver's communication with hardware, we may have to wait
+ * several milliseconds to get a response. Instead of busy polling in probe,
+ * pulling it out into a delayed work task prevents us from bogging down the
+ * whole system waiting for a response from hardware.
+ */
+void idpf_init_task(struct work_struct *work)
+{
+ struct idpf_vport_config *vport_config;
+ struct idpf_vport_max_q max_q;
+ struct idpf_adapter *adapter;
+ struct idpf_netdev_priv *np;
+ struct idpf_vport *vport;
+ u16 num_default_vports;
+ struct pci_dev *pdev;
+ bool default_vport;
+ int index, err;
+
+ adapter = container_of(work, struct idpf_adapter, init_task.work);
+
+ num_default_vports = idpf_get_default_vports(adapter);
+ if (adapter->num_alloc_vports < num_default_vports)
+ default_vport = true;
+ else
+ default_vport = false;
+
+ err = idpf_vport_alloc_max_qs(adapter, &max_q);
+ if (err)
+ goto unwind_vports;
+
+ err = idpf_send_create_vport_msg(adapter, &max_q);
+ if (err) {
+ idpf_vport_dealloc_max_qs(adapter, &max_q);
+ goto unwind_vports;
+ }
+
+ pdev = adapter->pdev;
+ vport = idpf_vport_alloc(adapter, &max_q);
+ if (!vport) {
+ err = -EFAULT;
+ dev_err(&pdev->dev, "failed to allocate vport: %d\n",
+ err);
+ idpf_vport_dealloc_max_qs(adapter, &max_q);
+ goto unwind_vports;
+ }
+
+ index = vport->idx;
+ vport_config = adapter->vport_config[index];
+
+ init_waitqueue_head(&vport->sw_marker_wq);
+ init_waitqueue_head(&vport->vchnl_wq);
+
+ mutex_init(&vport->vc_buf_lock);
+ spin_lock_init(&vport_config->mac_filter_list_lock);
+
+ INIT_LIST_HEAD(&vport_config->user_config.mac_filter_list);
+
+ err = idpf_check_supported_desc_ids(vport);
+ if (err) {
+ dev_err(&pdev->dev, "failed to get required descriptor ids\n");
+ goto cfg_netdev_err;
+ }
+
+ if (idpf_cfg_netdev(vport))
+ goto cfg_netdev_err;
+
+ err = idpf_send_get_rx_ptype_msg(vport);
+ if (err)
+ goto handle_err;
+
+ /* Once state is put into DOWN, driver is ready for dev_open */
+ np = netdev_priv(vport->netdev);
+ np->state = __IDPF_VPORT_DOWN;
+ if (test_and_clear_bit(IDPF_VPORT_UP_REQUESTED, vport_config->flags))
+ idpf_vport_open(vport, true);
+
+ /* Re-queue the 'idpf_init_task' work item and return until all the
+ * default vports are created
+ */
+ if (adapter->num_alloc_vports < num_default_vports) {
+ queue_delayed_work(adapter->init_wq, &adapter->init_task,
+ msecs_to_jiffies(5 * (adapter->pdev->devfn & 0x07)));
+
+ return;
+ }
+
+ for (index = 0; index < adapter->max_vports; index++) {
+ if (adapter->netdevs[index] &&
+ !test_bit(IDPF_VPORT_REG_NETDEV,
+ adapter->vport_config[index]->flags)) {
+ register_netdev(adapter->netdevs[index]);
+ set_bit(IDPF_VPORT_REG_NETDEV,
+ adapter->vport_config[index]->flags);
+ }
+ }
+
+ /* As all the required vports are created, clear the reset flag
+ * unconditionally here in case we were in reset and the link was down.
+ */
+ clear_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
+ /* Start the statistics task now */
+ queue_delayed_work(adapter->stats_wq, &adapter->stats_task,
+ msecs_to_jiffies(10 * (pdev->devfn & 0x07)));
+
+ return;
+
+handle_err:
+ idpf_decfg_netdev(vport);
+cfg_netdev_err:
+ idpf_vport_rel(vport);
+ adapter->vports[index] = NULL;
+unwind_vports:
+ if (default_vport) {
+ for (index = 0; index < adapter->max_vports; index++) {
+ if (adapter->vports[index])
+ idpf_vport_dealloc(adapter->vports[index]);
+ }
+ }
+ clear_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
+}
+
+/**
+ * idpf_sriov_ena - Enable or change number of VFs
+ * @adapter: private data struct
+ * @num_vfs: number of VFs to allocate
+ */
+static int idpf_sriov_ena(struct idpf_adapter *adapter, int num_vfs)
+{
+ struct device *dev = &adapter->pdev->dev;
+ int err;
+
+ err = idpf_send_set_sriov_vfs_msg(adapter, num_vfs);
+ if (err) {
+ dev_err(dev, "Failed to allocate VFs: %d\n", err);
+
+ return err;
+ }
+
+ err = pci_enable_sriov(adapter->pdev, num_vfs);
+ if (err) {
+ idpf_send_set_sriov_vfs_msg(adapter, 0);
+ dev_err(dev, "Failed to enable SR-IOV: %d\n", err);
+
+ return err;
+ }
+
+ adapter->num_vfs = num_vfs;
+
+ return num_vfs;
+}
+
+/**
+ * idpf_sriov_configure - Configure the requested VFs
+ * @pdev: pointer to a pci_dev structure
+ * @num_vfs: number of vfs to allocate
+ *
+ * Enable or change the number of VFs. Called when the user updates the number
+ * of VFs in sysfs.
+ **/
+int idpf_sriov_configure(struct pci_dev *pdev, int num_vfs)
+{
+ struct idpf_adapter *adapter = pci_get_drvdata(pdev);
+
+ if (!idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_SRIOV)) {
+ dev_info(&pdev->dev, "SR-IOV is not supported on this device\n");
+
+ return -EOPNOTSUPP;
+ }
+
+ if (num_vfs)
+ return idpf_sriov_ena(adapter, num_vfs);
+
+ if (pci_vfs_assigned(pdev)) {
+ dev_warn(&pdev->dev, "Unable to free VFs because some are assigned to VMs\n");
+
+ return -EBUSY;
+ }
+
+ pci_disable_sriov(adapter->pdev);
+ idpf_send_set_sriov_vfs_msg(adapter, 0);
+ adapter->num_vfs = 0;
+
+ return 0;
+}
+
+/**
+ * idpf_deinit_task - Device deinit routine
+ * @adapter: Driver specific private structure
+ *
+ * Extended remove logic which will be used for
+ * hard reset as well
+ */
+void idpf_deinit_task(struct idpf_adapter *adapter)
+{
+ unsigned int i;
+
+ /* Wait until the init_task is done, otherwise this thread might release
+ * the resources first and the init thread might end up in a bad state
+ */
+ cancel_delayed_work_sync(&adapter->init_task);
+
+ if (!adapter->vports)
+ return;
+
+ cancel_delayed_work_sync(&adapter->stats_task);
+
+ for (i = 0; i < adapter->max_vports; i++) {
+ if (adapter->vports[i])
+ idpf_vport_dealloc(adapter->vports[i]);
+ }
+}
+
+/**
+ * idpf_check_reset_complete - check that reset is complete
+ * @hw: pointer to hw struct
+ * @reset_reg: struct with reset registers
+ *
+ * Returns 0 if device is ready to use, or -EBUSY if it's in reset.
+ **/
+static int idpf_check_reset_complete(struct idpf_hw *hw,
+ struct idpf_reset_reg *reset_reg)
+{
+ struct idpf_adapter *adapter = hw->back;
+ int i;
+
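+ /* Poll the reset status register for roughly 10-20 seconds
+ * (2000 iterations of a 5-10 ms sleep).
+ */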
+ for (i = 0; i < 2000; i++) {
+ u32 reg_val = readl(reset_reg->rstat);
+
+ /* 0xFFFFFFFF might be read if the other side hasn't cleared the
+ * register for us yet; it is not a valid value for the register,
+ * so treat such a read as invalid.
+ */
+ if (reg_val != 0xFFFFFFFF && (reg_val & reset_reg->rstat_m))
+ return 0;
+
+ usleep_range(5000, 10000);
+ }
+
+ dev_warn(&adapter->pdev->dev, "Device reset timeout!\n");
+ /* Clear the reset flag unconditionally here since the reset
+ * technically isn't in progress anymore from the driver's perspective
+ */
+ clear_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
+
+ return -EBUSY;
+}
+
+/**
+ * idpf_set_vport_state - Set the vport state to be after the reset
+ * @adapter: Driver specific private structure
+ */
+static void idpf_set_vport_state(struct idpf_adapter *adapter)
+{
+ u16 i;
+
+ for (i = 0; i < adapter->max_vports; i++) {
+ struct idpf_netdev_priv *np;
+
+ if (!adapter->netdevs[i])
+ continue;
+
+ np = netdev_priv(adapter->netdevs[i]);
+ if (np->state == __IDPF_VPORT_UP)
+ set_bit(IDPF_VPORT_UP_REQUESTED,
+ adapter->vport_config[i]->flags);
+ }
+}
+
+/**
+ * idpf_init_hard_reset - Initiate a hardware reset
+ * @adapter: Driver specific private structure
+ *
+ * Deallocate the vports and all the resources associated with them and
+ * reallocate. Also reinitialize the mailbox. Return 0 on success,
+ * negative on failure.
+ */
+static int idpf_init_hard_reset(struct idpf_adapter *adapter)
+{
+ struct idpf_reg_ops *reg_ops = &adapter->dev_ops.reg_ops;
+ struct device *dev = &adapter->pdev->dev;
+ struct net_device *netdev;
+ int err;
+ u16 i;
+
+ mutex_lock(&adapter->vport_ctrl_lock);
+
+ dev_info(dev, "Device HW Reset initiated\n");
+
+ /* Avoid TX hangs on reset */
+ for (i = 0; i < adapter->max_vports; i++) {
+ netdev = adapter->netdevs[i];
+ if (!netdev)
+ continue;
+
+ netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
+ }
+
+ /* Prepare for reset */
+ if (test_and_clear_bit(IDPF_HR_DRV_LOAD, adapter->flags)) {
+ reg_ops->trigger_reset(adapter, IDPF_HR_DRV_LOAD);
+ } else if (test_and_clear_bit(IDPF_HR_FUNC_RESET, adapter->flags)) {
+ bool is_reset = idpf_is_reset_detected(adapter);
+
+ idpf_set_vport_state(adapter);
+ idpf_vc_core_deinit(adapter);
+ if (!is_reset)
+ reg_ops->trigger_reset(adapter, IDPF_HR_FUNC_RESET);
+ idpf_deinit_dflt_mbx(adapter);
+ } else {
+ dev_err(dev, "Unhandled hard reset cause\n");
+ err = -EBADRQC;
+ goto unlock_mutex;
+ }
+
+ /* Wait for reset to complete */
+ err = idpf_check_reset_complete(&adapter->hw, &adapter->reset_reg);
+ if (err) {
+ dev_err(dev, "The driver was unable to contact the device's firmware. Check that the FW is running. Driver state= 0x%x\n",
+ adapter->state);
+ goto unlock_mutex;
+ }
+
+ /* Reset is complete and so start building the driver resources again */
+ err = idpf_init_dflt_mbx(adapter);
+ if (err) {
+ dev_err(dev, "Failed to initialize default mailbox: %d\n", err);
+ goto unlock_mutex;
+ }
+
+ /* Initialize the state machine, also allocate memory and request
+ * resources
+ */
+ err = idpf_vc_core_init(adapter);
+ if (err) {
+ idpf_deinit_dflt_mbx(adapter);
+ goto unlock_mutex;
+ }
+
+ /* Wait till all the vports are initialized to release the reset lock,
+ * else user space callbacks may access uninitialized vports
+ */
+ while (test_bit(IDPF_HR_RESET_IN_PROG, adapter->flags))
+ msleep(100);
+
+unlock_mutex:
+ mutex_unlock(&adapter->vport_ctrl_lock);
+
+ return err;
+}
+
+/**
+ * idpf_vc_event_task - Handle virtchannel event logic
+ * @work: work queue struct
+ */
+void idpf_vc_event_task(struct work_struct *work)
+{
+ struct idpf_adapter *adapter;
+
+ adapter = container_of(work, struct idpf_adapter, vc_event_task.work);
+
+ if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
+ return;
+
+ if (test_bit(IDPF_HR_FUNC_RESET, adapter->flags) ||
+ test_bit(IDPF_HR_DRV_LOAD, adapter->flags)) {
+ set_bit(IDPF_HR_RESET_IN_PROG, adapter->flags);
+ idpf_init_hard_reset(adapter);
+ }
+}
+
+/**
+ * idpf_initiate_soft_reset - Initiate a software reset
+ * @vport: virtual port data struct
+ * @reset_cause: reason for the soft reset
+ *
+ * Soft reset only reallocs vport queue resources. Returns 0 on success,
+ * negative on failure.
+ */
+int idpf_initiate_soft_reset(struct idpf_vport *vport,
+ enum idpf_vport_reset_cause reset_cause)
+{
+ struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+ enum idpf_vport_state current_state = np->state;
+ struct idpf_adapter *adapter = vport->adapter;
+ struct idpf_vport *new_vport;
+ int err, i;
+
+ /* If the system is low on memory, we can end up in bad state if we
+ * free all the memory for queue resources and try to allocate them
+ * again. Instead, we can pre-allocate the new resources before doing
+ * anything and bailing if the alloc fails.
+ *
+ * Make a clone of the existing vport to mimic its current
+ * configuration, then modify the new structure with any requested
+ * changes. Once the allocation of the new resources is done, stop the
+ * existing vport and copy the configuration to the main vport. If an
+ * error occurred, the existing vport will be untouched.
+ */
+ new_vport = kzalloc(sizeof(*vport), GFP_KERNEL);
+ if (!new_vport)
+ return -ENOMEM;
+
+ /* This purposely avoids copying the end of the struct because it
+ * contains wait_queues and mutexes and other stuff we don't want to
+ * mess with. Nothing below should use those variables from new_vport
+ * and should instead always refer to them in vport if they need to.
+ */
+ memcpy(new_vport, vport, offsetof(struct idpf_vport, vc_state));
+
+ /* Adjust resource parameters prior to reallocating resources */
+ switch (reset_cause) {
+ case IDPF_SR_Q_CHANGE:
+ err = idpf_vport_adjust_qs(new_vport);
+ if (err)
+ goto free_vport;
+ break;
+ case IDPF_SR_Q_DESC_CHANGE:
+ /* Update queue parameters before allocating resources */
+ idpf_vport_calc_num_q_desc(new_vport);
+ break;
+ case IDPF_SR_MTU_CHANGE:
+ case IDPF_SR_RSC_CHANGE:
+ break;
+ default:
+ dev_err(&adapter->pdev->dev, "Unhandled soft reset cause\n");
+ err = -EINVAL;
+ goto free_vport;
+ }
+
+ err = idpf_vport_queues_alloc(new_vport);
+ if (err)
+ goto free_vport;
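+
+ /* The new queue resources are allocated; only now tear down the
+ * existing ones.
+ */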
+ if (current_state <= __IDPF_VPORT_DOWN) {
+ idpf_send_delete_queues_msg(vport);
+ } else {
+ set_bit(IDPF_VPORT_DEL_QUEUES, vport->flags);
+ idpf_vport_stop(vport);
+ }
+
+ idpf_deinit_rss(vport);
+ /* We're passing in vport here because we need its wait_queue
+ * to send a message, and it should be getting all the vport
+ * config data out of the adapter. But we need to be careful not
+ * to add code in add_queues that changes the vport config within
+ * vport itself, as it will be wiped by the memcpy later.
+ */
+ err = idpf_send_add_queues_msg(vport, new_vport->num_txq,
+ new_vport->num_complq,
+ new_vport->num_rxq,
+ new_vport->num_bufq);
+ if (err)
+ goto err_reset;
+
+ /* Same comment as above regarding avoiding copying the wait_queues and
+ * mutexes applies here. We do not want to mess with those if possible.
+ */
+ memcpy(vport, new_vport, offsetof(struct idpf_vport, vc_state));
+
+ /* Since idpf_vport_queues_alloc was called with new_vport, the queue
+ * back pointers are currently pointing to the local new_vport. Reset
+ * the back pointers to the original vport here
+ */
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ int j;
+
+ tx_qgrp->vport = vport;
+ for (j = 0; j < tx_qgrp->num_txq; j++)
+ tx_qgrp->txqs[j]->vport = vport;
+
+ if (idpf_is_queue_model_split(vport->txq_model))
+ tx_qgrp->complq->vport = vport;
+ }
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ struct idpf_queue *q;
+ u16 num_rxq;
+ int j;
+
+ rx_qgrp->vport = vport;
+ for (j = 0; j < vport->num_bufqs_per_qgrp; j++)
+ rx_qgrp->splitq.bufq_sets[j].bufq.vport = vport;
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ num_rxq = rx_qgrp->splitq.num_rxq_sets;
+ else
+ num_rxq = rx_qgrp->singleq.num_rxq;
+
+ for (j = 0; j < num_rxq; j++) {
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ else
+ q = rx_qgrp->singleq.rxqs[j];
+ q->vport = vport;
+ }
+ }
+
+ if (reset_cause == IDPF_SR_Q_CHANGE)
+ idpf_vport_alloc_vec_indexes(vport);
+
+ err = idpf_set_real_num_queues(vport);
+ if (err)
+ goto err_reset;
+
+ if (current_state == __IDPF_VPORT_UP)
+ err = idpf_vport_open(vport, false);
+
+ kfree(new_vport);
+
+ return err;
+
+err_reset:
+ idpf_vport_queues_rel(new_vport);
+free_vport:
+ kfree(new_vport);
+
+ return err;
+}
+
+/**
+ * idpf_addr_sync - Callback for dev_(mc|uc)_sync to add address
+ * @netdev: the netdevice
+ * @addr: address to add
+ *
+ * Called by __dev_(mc|uc)_sync when an address needs to be added. We call
+ * __dev_(uc|mc)_sync from .set_rx_mode. Kernel takes addr_list_lock spinlock
+ * meaning we cannot sleep in this context. Due to this, we have to add the
+ * filter and send the virtchnl message asynchronously without waiting for the
+ * response from the other side. We won't know whether or not the operation
+ * actually succeeded until we get the message back. Returns 0 on success,
+ * negative on failure.
+ */
+static int idpf_addr_sync(struct net_device *netdev, const u8 *addr)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+
+ return idpf_add_mac_filter(np->vport, np, addr, true);
+}
+
+/**
+ * idpf_addr_unsync - Callback for dev_(mc|uc)_sync to remove address
+ * @netdev: the netdevice
+ * @addr: address to remove
+ *
+ * Called by __dev_(mc|uc)_sync when an address needs to be removed. We call
+ * __dev_(uc|mc)_sync from .set_rx_mode. Kernel takes addr_list_lock spinlock
+ * meaning we cannot sleep in this context. Due to this we have to delete the
+ * filter and send the virtchnl message asynchronously without waiting for the
+ * return from the other side. We won't know whether or not the operation
+ * actually succeeded until we get the message back. Returns 0 on success,
+ * negative on failure.
+ */
+static int idpf_addr_unsync(struct net_device *netdev, const u8 *addr)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+
+ /* Under some circumstances, we might receive a request to delete
+ * our own device address from our uc list. Because we store the
+ * device address in the VSI's MAC filter list, we need to ignore
+ * such requests and not delete our device address from this list.
+ */
+ if (ether_addr_equal(addr, netdev->dev_addr))
+ return 0;
+
+ idpf_del_mac_filter(np->vport, np, addr, true);
+
+ return 0;
+}
+
+/**
+ * idpf_set_rx_mode - NDO callback to set the netdev filters
+ * @netdev: network interface device structure
+ *
+ * Stack takes addr_list_lock spinlock before calling our .set_rx_mode. We
+ * cannot sleep in this context.
+ */
+static void idpf_set_rx_mode(struct net_device *netdev)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_vport_user_config_data *config_data;
+ struct idpf_adapter *adapter;
+ bool changed = false;
+ struct device *dev;
+ int err;
+
+ adapter = np->adapter;
+ dev = &adapter->pdev->dev;
+
+ if (idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_MACFILTER)) {
+ __dev_uc_sync(netdev, idpf_addr_sync, idpf_addr_unsync);
+ __dev_mc_sync(netdev, idpf_addr_sync, idpf_addr_unsync);
+ }
+
+ if (!idpf_is_cap_ena(adapter, IDPF_OTHER_CAPS, VIRTCHNL2_CAP_PROMISC))
+ return;
+
+ config_data = &adapter->vport_config[np->vport_idx]->user_config;
+ /* IFF_PROMISC enables both unicast and multicast promiscuous,
+ * while IFF_ALLMULTI only enables multicast such that:
+ *
+ * promisc + allmulti = unicast | multicast
+ * promisc + !allmulti = unicast | multicast
+ * !promisc + allmulti = multicast
+ */
+ if ((netdev->flags & IFF_PROMISC) &&
+ !test_and_set_bit(__IDPF_PROMISC_UC, config_data->user_flags)) {
+ changed = true;
+ dev_info(&adapter->pdev->dev, "Entering promiscuous mode\n");
+ if (!test_and_set_bit(__IDPF_PROMISC_MC, adapter->flags))
+ dev_info(dev, "Entering multicast promiscuous mode\n");
+ }
+
+ if (!(netdev->flags & IFF_PROMISC) &&
+ test_and_clear_bit(__IDPF_PROMISC_UC, config_data->user_flags)) {
+ changed = true;
+ dev_info(dev, "Leaving promiscuous mode\n");
+ }
+
+ if (netdev->flags & IFF_ALLMULTI &&
+ !test_and_set_bit(__IDPF_PROMISC_MC, config_data->user_flags)) {
+ changed = true;
+ dev_info(dev, "Entering multicast promiscuous mode\n");
+ }
+
+ if (!(netdev->flags & (IFF_ALLMULTI | IFF_PROMISC)) &&
+ test_and_clear_bit(__IDPF_PROMISC_MC, config_data->user_flags)) {
+ changed = true;
+ dev_info(dev, "Leaving multicast promiscuous mode\n");
+ }
+
+ if (!changed)
+ return;
+
+ err = idpf_set_promiscuous(adapter, config_data, np->vport_id);
+ if (err)
+ dev_err(dev, "Failed to set promiscuous mode: %d\n", err);
+}
+
+/**
+ * idpf_vport_manage_rss_lut - disable/enable RSS
+ * @vport: the vport being changed
+ *
+ * In the event of disable request for RSS, this function will zero out RSS
+ * LUT, while in the event of enable request for RSS, it will reconfigure RSS
+ * LUT with the default LUT configuration.
+ */
+static int idpf_vport_manage_rss_lut(struct idpf_vport *vport)
+{
+ bool ena = idpf_is_feature_ena(vport, NETIF_F_RXHASH);
+ struct idpf_rss_data *rss_data;
+ u16 idx = vport->idx;
+ int lut_size;
+
+ rss_data = &vport->adapter->vport_config[idx]->user_config.rss_data;
+ lut_size = rss_data->rss_lut_size * sizeof(u32);
+
+ if (ena) {
+ /* This will contain the default or user configured LUT */
+ memcpy(rss_data->rss_lut, rss_data->cached_lut, lut_size);
+ } else {
+ /* Save a copy of the current LUT to be restored later if
+ * requested.
+ */
+ memcpy(rss_data->cached_lut, rss_data->rss_lut, lut_size);
+
+ /* Zero out the current LUT to disable */
+ memset(rss_data->rss_lut, 0, lut_size);
+ }
+
+ return idpf_config_rss(vport);
+}
+
+/**
+ * idpf_set_features - set the netdev feature flags
+ * @netdev: ptr to the netdev being adjusted
+ * @features: the feature set that the stack is suggesting
+ */
+static int idpf_set_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ netdev_features_t changed = netdev->features ^ features;
+ struct idpf_adapter *adapter;
+ struct idpf_vport *vport;
+ int err = 0;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ adapter = vport->adapter;
+
+ if (idpf_is_reset_in_prog(adapter)) {
+ dev_err(&adapter->pdev->dev, "Device is resetting, changing netdev features temporarily unavailable.\n");
+ err = -EBUSY;
+ goto unlock_mutex;
+ }
+
+ if (changed & NETIF_F_RXHASH) {
+ netdev->features ^= NETIF_F_RXHASH;
+ err = idpf_vport_manage_rss_lut(vport);
+ if (err)
+ goto unlock_mutex;
+ }
+
+ if (changed & NETIF_F_GRO_HW) {
+ netdev->features ^= NETIF_F_GRO_HW;
+ err = idpf_initiate_soft_reset(vport, IDPF_SR_RSC_CHANGE);
+ if (err)
+ goto unlock_mutex;
+ }
+
+ if (changed & NETIF_F_LOOPBACK) {
+ netdev->features ^= NETIF_F_LOOPBACK;
+ err = idpf_send_ena_dis_loopback_msg(vport);
+ }
+
+unlock_mutex:
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+}
+
+/**
+ * idpf_open - Called when a network interface becomes active
+ * @netdev: network interface device structure
+ *
+ * The open entry point is called when a network interface is made
+ * active by the system (IFF_UP). At this point all resources needed
+ * for transmit and receive operations are allocated, the interrupt
+ * handler is registered with the OS, the netdev watchdog is enabled,
+ * and the stack is notified that the interface is ready.
+ *
+ * Returns 0 on success, negative value on failure
+ */
+static int idpf_open(struct net_device *netdev)
+{
+ struct idpf_vport *vport;
+ int err;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ err = idpf_vport_open(vport, true);
+
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+}
+
+/**
+ * idpf_change_mtu - NDO callback to change the MTU
+ * @netdev: network interface device structure
+ * @new_mtu: new value for maximum frame size
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_change_mtu(struct net_device *netdev, int new_mtu)
+{
+ struct idpf_vport *vport;
+ int err;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
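+ /* A soft reset rebuilds the queue resources so the data path picks up
+ * the new MTU.
+ */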
+ netdev->mtu = new_mtu;
+
+ err = idpf_initiate_soft_reset(vport, IDPF_SR_MTU_CHANGE);
+
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+}
+
+/**
+ * idpf_features_check - Validate packet conforms to limits
+ * @skb: skb buffer
+ * @netdev: This port's netdev
+ * @features: Offload features that the stack believes apply
+ */
+static netdev_features_t idpf_features_check(struct sk_buff *skb,
+ struct net_device *netdev,
+ netdev_features_t features)
+{
+ struct idpf_vport *vport = idpf_netdev_to_vport(netdev);
+ struct idpf_adapter *adapter = vport->adapter;
+ size_t len;
+
+ /* No point in doing any of this if neither checksum nor GSO are
+ * being requested for this frame. We can rule out both by just
+ * checking for CHECKSUM_PARTIAL
+ */
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return features;
+
+ /* We cannot support GSO if the MSS is going to be less than
+ * 88 bytes. If it is then we need to drop support for GSO.
+ */
+ if (skb_is_gso(skb) &&
+ (skb_shinfo(skb)->gso_size < IDPF_TX_TSO_MIN_MSS))
+ features &= ~NETIF_F_GSO_MASK;
+
+ /* Ensure MACLEN is <= 126 bytes (63 words) and not an odd size */
+ len = skb_network_offset(skb);
+ if (unlikely(len & ~(126)))
+ goto unsupported;
+
+ len = skb_network_header_len(skb);
+ if (unlikely(len > idpf_get_max_tx_hdr_size(adapter)))
+ goto unsupported;
+
+ if (!skb->encapsulation)
+ return features;
+
+ /* L4TUNLEN can support 127 words */
+ len = skb_inner_network_header(skb) - skb_transport_header(skb);
+ if (unlikely(len & ~(127 * 2)))
+ goto unsupported;
+
+ /* IPLEN can support at most 127 dwords */
+ len = skb_inner_network_header_len(skb);
+ if (unlikely(len > idpf_get_max_tx_hdr_size(adapter)))
+ goto unsupported;
+
+ /* No need to validate L4LEN as TCP is the only protocol with a
+ * flexible value, and we support all possible values supported
+ * by TCP, which is at most 15 dwords
+ */
+
+ return features;
+
+unsupported:
+ return features & ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
+}
+
+/**
+ * idpf_set_mac - NDO callback to set port mac address
+ * @netdev: network interface device structure
+ * @p: pointer to an address structure
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int idpf_set_mac(struct net_device *netdev, void *p)
+{
+ struct idpf_netdev_priv *np = netdev_priv(netdev);
+ struct idpf_vport_config *vport_config;
+ struct sockaddr *addr = p;
+ struct idpf_vport *vport;
+ int err = 0;
+
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
+ if (!idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS,
+ VIRTCHNL2_CAP_MACFILTER)) {
+ dev_info(&vport->adapter->pdev->dev, "Setting MAC address is not supported\n");
+ err = -EOPNOTSUPP;
+ goto unlock_mutex;
+ }
+
+ if (!is_valid_ether_addr(addr->sa_data)) {
+ dev_info(&vport->adapter->pdev->dev, "Invalid MAC address: %pM\n",
+ addr->sa_data);
+ err = -EADDRNOTAVAIL;
+ goto unlock_mutex;
+ }
+
+ if (ether_addr_equal(netdev->dev_addr, addr->sa_data))
+ goto unlock_mutex;
+
+ vport_config = vport->adapter->vport_config[vport->idx];
+ err = idpf_add_mac_filter(vport, np, addr->sa_data, false);
+ if (err) {
+ __idpf_del_mac_filter(vport_config, addr->sa_data);
+ goto unlock_mutex;
+ }
+
+ if (is_valid_ether_addr(vport->default_mac_addr))
+ idpf_del_mac_filter(vport, np, vport->default_mac_addr, false);
+
+ ether_addr_copy(vport->default_mac_addr, addr->sa_data);
+ eth_hw_addr_set(netdev, addr->sa_data);
+
+unlock_mutex:
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+}
+
+/**
+ * idpf_alloc_dma_mem - Allocate dma memory
+ * @hw: pointer to hw struct
+ * @mem: pointer to dma_mem struct
+ * @size: size of the memory to allocate
+ */
+void *idpf_alloc_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem, u64 size)
+{
+ struct idpf_adapter *adapter = hw->back;
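+ /* Round the request up to a 4 KB boundary */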
+ size_t sz = ALIGN(size, 4096);
+
+ mem->va = dma_alloc_coherent(&adapter->pdev->dev, sz,
+ &mem->pa, GFP_KERNEL);
+ mem->size = sz;
+
+ return mem->va;
+}
+
+/**
+ * idpf_free_dma_mem - Free the allocated dma memory
+ * @hw: pointer to hw struct
+ * @mem: pointer to dma_mem struct
+ */
+void idpf_free_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem)
+{
+ struct idpf_adapter *adapter = hw->back;
+
+ dma_free_coherent(&adapter->pdev->dev, mem->size,
+ mem->va, mem->pa);
+ mem->size = 0;
+ mem->va = NULL;
+ mem->pa = 0;
+}
+
+static const struct net_device_ops idpf_netdev_ops_splitq = {
+ .ndo_open = idpf_open,
+ .ndo_stop = idpf_stop,
+ .ndo_start_xmit = idpf_tx_splitq_start,
+ .ndo_features_check = idpf_features_check,
+ .ndo_set_rx_mode = idpf_set_rx_mode,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = idpf_set_mac,
+ .ndo_change_mtu = idpf_change_mtu,
+ .ndo_get_stats64 = idpf_get_stats64,
+ .ndo_set_features = idpf_set_features,
+ .ndo_tx_timeout = idpf_tx_timeout,
+};
+
+static const struct net_device_ops idpf_netdev_ops_singleq = {
+ .ndo_open = idpf_open,
+ .ndo_stop = idpf_stop,
+ .ndo_start_xmit = idpf_tx_singleq_start,
+ .ndo_features_check = idpf_features_check,
+ .ndo_set_rx_mode = idpf_set_rx_mode,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = idpf_set_mac,
+ .ndo_change_mtu = idpf_change_mtu,
+ .ndo_get_stats64 = idpf_get_stats64,
+ .ndo_set_features = idpf_set_features,
+ .ndo_tx_timeout = idpf_tx_timeout,
+};
diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c
new file mode 100644
index 000000000000..e1febc74cefd
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_main.c
@@ -0,0 +1,279 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2023 Intel Corporation */
+
+#include "idpf.h"
+#include "idpf_devids.h"
+
+#define DRV_SUMMARY "Intel(R) Infrastructure Data Path Function Linux Driver"
+
+MODULE_DESCRIPTION(DRV_SUMMARY);
+MODULE_LICENSE("GPL");
+
+/**
+ * idpf_remove - Device removal routine
+ * @pdev: PCI device information struct
+ */
+static void idpf_remove(struct pci_dev *pdev)
+{
+ struct idpf_adapter *adapter = pci_get_drvdata(pdev);
+ int i;
+
+ set_bit(IDPF_REMOVE_IN_PROG, adapter->flags);
+
+ /* Wait until vc_event_task is done before checking whether any hard
+ * reset is in progress; otherwise we may release the resources while
+ * the thread doing the hard reset continues down the init path and
+ * ends up in a bad state.
+ */
+ cancel_delayed_work_sync(&adapter->vc_event_task);
+ if (adapter->num_vfs)
+ idpf_sriov_configure(pdev, 0);
+
+ idpf_vc_core_deinit(adapter);
+ /* Be a good citizen and leave the device clean on exit */
+ adapter->dev_ops.reg_ops.trigger_reset(adapter, IDPF_HR_FUNC_RESET);
+ idpf_deinit_dflt_mbx(adapter);
+
+ if (!adapter->netdevs)
+ goto destroy_wqs;
+
+ /* There are some cases where it's possible to still have netdevs
+ * registered with the stack at this point, e.g. if the driver detected
+ * a HW reset and rmmod is called before it fully recovers. Unregister
+ * any stale netdevs here.
+ */
+ for (i = 0; i < adapter->max_vports; i++) {
+ if (!adapter->netdevs[i])
+ continue;
+ if (adapter->netdevs[i]->reg_state != NETREG_UNINITIALIZED)
+ unregister_netdev(adapter->netdevs[i]);
+ free_netdev(adapter->netdevs[i]);
+ adapter->netdevs[i] = NULL;
+ }
+
+destroy_wqs:
+ destroy_workqueue(adapter->init_wq);
+ destroy_workqueue(adapter->serv_wq);
+ destroy_workqueue(adapter->mbx_wq);
+ destroy_workqueue(adapter->stats_wq);
+ destroy_workqueue(adapter->vc_event_wq);
+
+ for (i = 0; i < adapter->max_vports; i++) {
+ kfree(adapter->vport_config[i]);
+ adapter->vport_config[i] = NULL;
+ }
+ kfree(adapter->vport_config);
+ adapter->vport_config = NULL;
+ kfree(adapter->netdevs);
+ adapter->netdevs = NULL;
+
+ mutex_destroy(&adapter->vport_ctrl_lock);
+ mutex_destroy(&adapter->vector_lock);
+ mutex_destroy(&adapter->queue_lock);
+ mutex_destroy(&adapter->vc_buf_lock);
+
+ pci_set_drvdata(pdev, NULL);
+ kfree(adapter);
+}
+
+/**
+ * idpf_shutdown - PCI callback for shutting down device
+ * @pdev: PCI device information struct
+ */
+static void idpf_shutdown(struct pci_dev *pdev)
+{
+ idpf_remove(pdev);
+
+ if (system_state == SYSTEM_POWER_OFF)
+ pci_set_power_state(pdev, PCI_D3hot);
+}
+
+/**
+ * idpf_cfg_hw - Initialize HW struct
+ * @adapter: adapter to setup hw struct for
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_cfg_hw(struct idpf_adapter *adapter)
+{
+ struct pci_dev *pdev = adapter->pdev;
+ struct idpf_hw *hw = &adapter->hw;
+
+ hw->hw_addr = pcim_iomap_table(pdev)[0];
+ if (!hw->hw_addr) {
+ pci_err(pdev, "failed to allocate PCI iomap table\n");
+
+ return -ENOMEM;
+ }
+
+ hw->back = adapter;
+
+ return 0;
+}
+
+/**
+ * idpf_probe - Device initialization routine
+ * @pdev: PCI device information struct
+ * @ent: entry in idpf_pci_tbl
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ struct device *dev = &pdev->dev;
+ struct idpf_adapter *adapter;
+ int err;
+
+ adapter = kzalloc(sizeof(*adapter), GFP_KERNEL);
+ if (!adapter)
+ return -ENOMEM;
+
+ adapter->req_tx_splitq = true;
+ adapter->req_rx_splitq = true;
+
+ switch (ent->device) {
+ case IDPF_DEV_ID_PF:
+ idpf_dev_ops_init(adapter);
+ break;
+ case IDPF_DEV_ID_VF:
+ idpf_vf_dev_ops_init(adapter);
+ adapter->crc_enable = true;
+ break;
+ default:
+ err = -ENODEV;
+ dev_err(&pdev->dev, "Unexpected dev ID 0x%x in idpf probe\n",
+ ent->device);
+ goto err_free;
+ }
+
+ adapter->pdev = pdev;
+ err = pcim_enable_device(pdev);
+ if (err)
+ goto err_free;
+
+ err = pcim_iomap_regions(pdev, BIT(0), pci_name(pdev));
+ if (err) {
+ pci_err(pdev, "pcim_iomap_regions failed %pe\n", ERR_PTR(err));
+
+ goto err_free;
+ }
+
+ /* set up for high or low dma */
+ err = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ if (err) {
+ pci_err(pdev, "DMA configuration failed: %pe\n", ERR_PTR(err));
+
+ goto err_free;
+ }
+
+ pci_set_master(pdev);
+ pci_set_drvdata(pdev, adapter);
+
+ adapter->init_wq = alloc_workqueue("%s-%s-init", 0, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->init_wq) {
+ dev_err(dev, "Failed to allocate init workqueue\n");
+ err = -ENOMEM;
+ goto err_free;
+ }
+
+ adapter->serv_wq = alloc_workqueue("%s-%s-service", 0, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->serv_wq) {
+ dev_err(dev, "Failed to allocate service workqueue\n");
+ err = -ENOMEM;
+ goto err_serv_wq_alloc;
+ }
+
+ adapter->mbx_wq = alloc_workqueue("%s-%s-mbx", 0, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->mbx_wq) {
+ dev_err(dev, "Failed to allocate mailbox workqueue\n");
+ err = -ENOMEM;
+ goto err_mbx_wq_alloc;
+ }
+
+ adapter->stats_wq = alloc_workqueue("%s-%s-stats", 0, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->stats_wq) {
+ dev_err(dev, "Failed to allocate workqueue\n");
+ err = -ENOMEM;
+ goto err_stats_wq_alloc;
+ }
+
+ adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event", 0, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->vc_event_wq) {
+ dev_err(dev, "Failed to allocate virtchnl event workqueue\n");
+ err = -ENOMEM;
+ goto err_vc_event_wq_alloc;
+ }
+
+ /* setup msglvl */
+ adapter->msg_enable = netif_msg_init(-1, IDPF_AVAIL_NETIF_M);
+
+ err = idpf_cfg_hw(adapter);
+ if (err) {
+ dev_err(dev, "Failed to configure HW structure for adapter: %d\n",
+ err);
+ goto err_cfg_hw;
+ }
+
+ mutex_init(&adapter->vport_ctrl_lock);
+ mutex_init(&adapter->vector_lock);
+ mutex_init(&adapter->queue_lock);
+ mutex_init(&adapter->vc_buf_lock);
+
+ init_waitqueue_head(&adapter->vchnl_wq);
+
+ INIT_DELAYED_WORK(&adapter->init_task, idpf_init_task);
+ INIT_DELAYED_WORK(&adapter->serv_task, idpf_service_task);
+ INIT_DELAYED_WORK(&adapter->mbx_task, idpf_mbx_task);
+ INIT_DELAYED_WORK(&adapter->stats_task, idpf_statistics_task);
+ INIT_DELAYED_WORK(&adapter->vc_event_task, idpf_vc_event_task);
+
+ adapter->dev_ops.reg_ops.reset_reg_init(adapter);
+ set_bit(IDPF_HR_DRV_LOAD, adapter->flags);
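+ /* Stagger the initial reset/mailbox handling across functions of the
+ * same device by delaying proportionally to the PCI function number.
+ */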
+ queue_delayed_work(adapter->vc_event_wq, &adapter->vc_event_task,
+ msecs_to_jiffies(10 * (pdev->devfn & 0x07)));
+
+ return 0;
+
+err_cfg_hw:
+ destroy_workqueue(adapter->vc_event_wq);
+err_vc_event_wq_alloc:
+ destroy_workqueue(adapter->stats_wq);
+err_stats_wq_alloc:
+ destroy_workqueue(adapter->mbx_wq);
+err_mbx_wq_alloc:
+ destroy_workqueue(adapter->serv_wq);
+err_serv_wq_alloc:
+ destroy_workqueue(adapter->init_wq);
+err_free:
+ kfree(adapter);
+ return err;
+}
+
+/* idpf_pci_tbl - PCI device ID table for the idpf driver */
+static const struct pci_device_id idpf_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, IDPF_DEV_ID_PF)},
+ { PCI_VDEVICE(INTEL, IDPF_DEV_ID_VF)},
+ { /* Sentinel */ }
+};
+MODULE_DEVICE_TABLE(pci, idpf_pci_tbl);
+
+static struct pci_driver idpf_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = idpf_pci_tbl,
+ .probe = idpf_probe,
+ .sriov_configure = idpf_sriov_configure,
+ .remove = idpf_remove,
+ .shutdown = idpf_shutdown,
+};
+module_pci_driver(idpf_driver);
diff --git a/drivers/net/ethernet/intel/idpf/idpf_mem.h b/drivers/net/ethernet/intel/idpf/idpf_mem.h
new file mode 100644
index 000000000000..b21a04fccf0f
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_mem.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _IDPF_MEM_H_
+#define _IDPF_MEM_H_
+
+#include <linux/io.h>
+
+struct idpf_dma_mem {
+ void *va;
+ dma_addr_t pa;
+ size_t size;
+};
+
+#define wr32(a, reg, value) writel((value), ((a)->hw_addr + (reg)))
+#define rd32(a, reg) readl((a)->hw_addr + (reg))
+#define wr64(a, reg, value) writeq((value), ((a)->hw_addr + (reg)))
+#define rd64(a, reg) readq((a)->hw_addr + (reg))
+
+#endif /* _IDPF_MEM_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
new file mode 100644
index 000000000000..81288a17da2a
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_singleq_txrx.c
@@ -0,0 +1,1183 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2023 Intel Corporation */
+
+#include "idpf.h"
+
+/**
+ * idpf_tx_singleq_csum - Enable tx checksum offloads
+ * @skb: pointer to skb
+ * @off: pointer to struct that holds offload parameters
+ *
+ * Returns 1 if checksum offload was configured, 0 if no offload is needed
+ * (or a software checksum was used instead), or a negative error if the
+ * offload cannot be performed.
+ */
+static int idpf_tx_singleq_csum(struct sk_buff *skb,
+ struct idpf_tx_offload_params *off)
+{
+ u32 l4_len, l3_len, l2_len;
+ union {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+ } ip;
+ union {
+ struct tcphdr *tcp;
+ unsigned char *hdr;
+ } l4;
+ u32 offset, cmd = 0;
+ u8 l4_proto = 0;
+ __be16 frag_off;
+ bool is_tso;
+
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 0;
+
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_transport_header(skb);
+
+ /* compute outer L2 header size */
+ l2_len = ip.hdr - skb->data;
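+ /* The MACLEN field is programmed in units of 2 bytes, hence l2_len / 2 */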
+ offset = FIELD_PREP(0x3F << IDPF_TX_DESC_LEN_MACLEN_S, l2_len / 2);
+ is_tso = !!(off->tx_flags & IDPF_TX_FLAGS_TSO);
+ if (skb->encapsulation) {
+ u32 tunnel = 0;
+
+ /* define outer network header type */
+ if (off->tx_flags & IDPF_TX_FLAGS_IPV4) {
+ /* The stack computes the IP header already, the only
+ * time we need the hardware to recompute it is in the
+ * case of TSO.
+ */
+ tunnel |= is_tso ?
+ IDPF_TX_CTX_EXT_IP_IPV4 :
+ IDPF_TX_CTX_EXT_IP_IPV4_NO_CSUM;
+
+ l4_proto = ip.v4->protocol;
+ } else if (off->tx_flags & IDPF_TX_FLAGS_IPV6) {
+ tunnel |= IDPF_TX_CTX_EXT_IP_IPV6;
+
+ l4_proto = ip.v6->nexthdr;
+ if (ipv6_ext_hdr(l4_proto))
+ ipv6_skip_exthdr(skb, skb_network_offset(skb) +
+ sizeof(*ip.v6),
+ &l4_proto, &frag_off);
+ }
+
+ /* define outer transport */
+ switch (l4_proto) {
+ case IPPROTO_UDP:
+ tunnel |= IDPF_TXD_CTX_UDP_TUNNELING;
+ break;
+ case IPPROTO_GRE:
+ tunnel |= IDPF_TXD_CTX_GRE_TUNNELING;
+ break;
+ case IPPROTO_IPIP:
+ case IPPROTO_IPV6:
+ l4.hdr = skb_inner_network_header(skb);
+ break;
+ default:
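+ /* Unknown tunnel L4 protocol: fall back to a software checksum
+ * unless this is a TSO packet, which the caller will then drop.
+ */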
+ if (is_tso)
+ return -1;
+
+ skb_checksum_help(skb);
+
+ return 0;
+ }
+ off->tx_flags |= IDPF_TX_FLAGS_TUNNEL;
+
+ /* compute outer L3 header size */
+ tunnel |= FIELD_PREP(IDPF_TXD_CTX_QW0_TUNN_EXT_IPLEN_M,
+ (l4.hdr - ip.hdr) / 4);
+
+ /* switch IP header pointer from outer to inner header */
+ ip.hdr = skb_inner_network_header(skb);
+
+ /* compute tunnel header size */
+ tunnel |= FIELD_PREP(IDPF_TXD_CTX_QW0_TUNN_NATLEN_M,
+ (ip.hdr - l4.hdr) / 2);
+
+ /* indicate if we need to offload outer UDP header */
+ if (is_tso &&
+ !(skb_shinfo(skb)->gso_type & SKB_GSO_PARTIAL) &&
+ (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM))
+ tunnel |= IDPF_TXD_CTX_QW0_TUNN_L4T_CS_M;
+
+ /* record tunnel offload values */
+ off->cd_tunneling |= tunnel;
+
+ /* switch L4 header pointer from outer to inner */
+ l4.hdr = skb_inner_transport_header(skb);
+ l4_proto = 0;
+
+ /* reset type as we transition from outer to inner headers */
+ off->tx_flags &= ~(IDPF_TX_FLAGS_IPV4 | IDPF_TX_FLAGS_IPV6);
+ if (ip.v4->version == 4)
+ off->tx_flags |= IDPF_TX_FLAGS_IPV4;
+ if (ip.v6->version == 6)
+ off->tx_flags |= IDPF_TX_FLAGS_IPV6;
+ }
+
+ /* Enable IP checksum offloads */
+ if (off->tx_flags & IDPF_TX_FLAGS_IPV4) {
+ l4_proto = ip.v4->protocol;
+ /* See comment above regarding need for HW to recompute IP
+ * header checksum in the case of TSO.
+ */
+ if (is_tso)
+ cmd |= IDPF_TX_DESC_CMD_IIPT_IPV4_CSUM;
+ else
+ cmd |= IDPF_TX_DESC_CMD_IIPT_IPV4;
+
+ } else if (off->tx_flags & IDPF_TX_FLAGS_IPV6) {
+ cmd |= IDPF_TX_DESC_CMD_IIPT_IPV6;
+ l4_proto = ip.v6->nexthdr;
+ if (ipv6_ext_hdr(l4_proto))
+ ipv6_skip_exthdr(skb, skb_network_offset(skb) +
+ sizeof(*ip.v6), &l4_proto,
+ &frag_off);
+ } else {
+ return -1;
+ }
+
+ /* compute inner L3 header size */
+ l3_len = l4.hdr - ip.hdr;
+ offset |= (l3_len / 4) << IDPF_TX_DESC_LEN_IPLEN_S;
+
+ /* Enable L4 checksum offloads */
+ switch (l4_proto) {
+ case IPPROTO_TCP:
+ /* enable checksum offloads */
+ cmd |= IDPF_TX_DESC_CMD_L4T_EOFT_TCP;
+ l4_len = l4.tcp->doff;
+ break;
+ case IPPROTO_UDP:
+ /* enable UDP checksum offload */
+ cmd |= IDPF_TX_DESC_CMD_L4T_EOFT_UDP;
+ l4_len = sizeof(struct udphdr) >> 2;
+ break;
+ case IPPROTO_SCTP:
+ /* enable SCTP checksum offload */
+ cmd |= IDPF_TX_DESC_CMD_L4T_EOFT_SCTP;
+ l4_len = sizeof(struct sctphdr) >> 2;
+ break;
+ default:
+ if (is_tso)
+ return -1;
+
+ skb_checksum_help(skb);
+
+ return 0;
+ }
+
+ offset |= l4_len << IDPF_TX_DESC_LEN_L4_LEN_S;
+ off->td_cmd |= cmd;
+ off->hdr_offsets |= offset;
+
+ return 1;
+}
+
+/**
+ * idpf_tx_singleq_map - Build the Tx base descriptor
+ * @tx_q: queue to send buffer on
+ * @first: first buffer info buffer to use
+ * @offloads: pointer to struct that holds offload parameters
+ *
+ * This function walks the skb data pointed to by *first, obtains a DMA
+ * address for each memory location, and programs the address and length
+ * into base mode transmit descriptors.
+ */
+static void idpf_tx_singleq_map(struct idpf_queue *tx_q,
+ struct idpf_tx_buf *first,
+ struct idpf_tx_offload_params *offloads)
+{
+ u32 offsets = offloads->hdr_offsets;
+ struct idpf_tx_buf *tx_buf = first;
+ struct idpf_base_tx_desc *tx_desc;
+ struct sk_buff *skb = first->skb;
+ u64 td_cmd = offloads->td_cmd;
+ unsigned int data_len, size;
+ u16 i = tx_q->next_to_use;
+ struct netdev_queue *nq;
+ skb_frag_t *frag;
+ dma_addr_t dma;
+ u64 td_tag = 0;
+
+ data_len = skb->data_len;
+ size = skb_headlen(skb);
+
+ tx_desc = IDPF_BASE_TX_DESC(tx_q, i);
+
+ dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE);
+
+ /* write each descriptor with CRC bit */
+ if (tx_q->vport->crc_enable)
+ td_cmd |= IDPF_TX_DESC_CMD_ICRC;
+
+ for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+ unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
+
+ if (dma_mapping_error(tx_q->dev, dma))
+ return idpf_tx_dma_map_error(tx_q, skb, first, i);
+
+ /* record length, and DMA address */
+ dma_unmap_len_set(tx_buf, len, size);
+ dma_unmap_addr_set(tx_buf, dma, dma);
+
+ /* align size to end of page */
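+ /* '-dma & (IDPF_TX_MAX_READ_REQ_SIZE - 1)' is the distance from dma
+ * up to the next IDPF_TX_MAX_READ_REQ_SIZE boundary, so the first
+ * chunk ends on that boundary and later chunks stay aligned.
+ */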
+ max_data += -dma & (IDPF_TX_MAX_READ_REQ_SIZE - 1);
+ tx_desc->buf_addr = cpu_to_le64(dma);
+
+ /* account for data chunks larger than the hardware
+ * can handle
+ */
+ while (unlikely(size > IDPF_TX_MAX_DESC_DATA)) {
+ tx_desc->qw1 = idpf_tx_singleq_build_ctob(td_cmd,
+ offsets,
+ max_data,
+ td_tag);
+ tx_desc++;
+ i++;
+
+ if (i == tx_q->desc_count) {
+ tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
+ i = 0;
+ }
+
+ dma += max_data;
+ size -= max_data;
+
+ max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
+ tx_desc->buf_addr = cpu_to_le64(dma);
+ }
+
+ if (!data_len)
+ break;
+
+ tx_desc->qw1 = idpf_tx_singleq_build_ctob(td_cmd, offsets,
+ size, td_tag);
+ tx_desc++;
+ i++;
+
+ if (i == tx_q->desc_count) {
+ tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
+ i = 0;
+ }
+
+ size = skb_frag_size(frag);
+ data_len -= size;
+
+ dma = skb_frag_dma_map(tx_q->dev, frag, 0, size,
+ DMA_TO_DEVICE);
+
+ tx_buf = &tx_q->tx_buf[i];
+ }
+
+ skb_tx_timestamp(first->skb);
+
+ /* write last descriptor with RS and EOP bits */
+ td_cmd |= (u64)(IDPF_TX_DESC_CMD_EOP | IDPF_TX_DESC_CMD_RS);
+
+ tx_desc->qw1 = idpf_tx_singleq_build_ctob(td_cmd, offsets,
+ size, td_tag);
+
+ IDPF_SINGLEQ_BUMP_RING_IDX(tx_q, i);
+
+ /* set next_to_watch value indicating a packet is present */
+ first->next_to_watch = tx_desc;
+
+ nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+ netdev_tx_sent_queue(nq, first->bytecount);
+
+ idpf_tx_buf_hw_update(tx_q, i, netdev_xmit_more());
+}
+
+/**
+ * idpf_tx_singleq_get_ctx_desc - grab next desc and update buffer ring
+ * @txq: queue to put context descriptor on
+ *
+ * Since the TX buffer ring mimics the descriptor ring, update the TX buffer
+ * ring entry to reflect that this index holds a context descriptor
+ */
+static struct idpf_base_tx_ctx_desc *
+idpf_tx_singleq_get_ctx_desc(struct idpf_queue *txq)
+{
+ struct idpf_base_tx_ctx_desc *ctx_desc;
+ int ntu = txq->next_to_use;
+
+ memset(&txq->tx_buf[ntu], 0, sizeof(struct idpf_tx_buf));
+ txq->tx_buf[ntu].ctx_entry = true;
+
+ ctx_desc = IDPF_BASE_TX_CTX_DESC(txq, ntu);
+
+ IDPF_SINGLEQ_BUMP_RING_IDX(txq, ntu);
+ txq->next_to_use = ntu;
+
+ return ctx_desc;
+}
+
+/**
+ * idpf_tx_singleq_build_ctx_desc - populate context descriptor
+ * @txq: queue to send buffer on
+ * @offload: offload parameter structure
+ **/
+static void idpf_tx_singleq_build_ctx_desc(struct idpf_queue *txq,
+ struct idpf_tx_offload_params *offload)
+{
+ struct idpf_base_tx_ctx_desc *desc = idpf_tx_singleq_get_ctx_desc(txq);
+ u64 qw1 = (u64)IDPF_TX_DESC_DTYPE_CTX;
+
+ if (offload->tso_segs) {
+ qw1 |= IDPF_TX_CTX_DESC_TSO << IDPF_TXD_CTX_QW1_CMD_S;
+ qw1 |= ((u64)offload->tso_len << IDPF_TXD_CTX_QW1_TSO_LEN_S) &
+ IDPF_TXD_CTX_QW1_TSO_LEN_M;
+ qw1 |= ((u64)offload->mss << IDPF_TXD_CTX_QW1_MSS_S) &
+ IDPF_TXD_CTX_QW1_MSS_M;
+
+ u64_stats_update_begin(&txq->stats_sync);
+ u64_stats_inc(&txq->q_stats.tx.lso_pkts);
+ u64_stats_update_end(&txq->stats_sync);
+ }
+
+ desc->qw0.tunneling_params = cpu_to_le32(offload->cd_tunneling);
+
+ desc->qw0.l2tag2 = 0;
+ desc->qw0.rsvd1 = 0;
+ desc->qw1 = cpu_to_le64(qw1);
+}
+
+/**
+ * idpf_tx_singleq_frame - Sends buffer on Tx ring using base descriptors
+ * @skb: send buffer
+ * @tx_q: queue to send buffer on
+ *
+ * Returns NETDEV_TX_OK if sent, else an error code
+ */
+static netdev_tx_t idpf_tx_singleq_frame(struct sk_buff *skb,
+ struct idpf_queue *tx_q)
+{
+ struct idpf_tx_offload_params offload = { };
+ struct idpf_tx_buf *first;
+ unsigned int count;
+ __be16 protocol;
+ int csum, tso;
+
+ count = idpf_tx_desc_count_required(tx_q, skb);
+ if (unlikely(!count))
+ return idpf_tx_drop_skb(tx_q, skb);
+
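+ /* Make sure there is room for the data descriptors plus a possible
+ * context descriptor and some extra slack before committing to
+ * transmit; otherwise stop the queue and report busy.
+ */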
+ if (idpf_tx_maybe_stop_common(tx_q,
+ count + IDPF_TX_DESCS_PER_CACHE_LINE +
+ IDPF_TX_DESCS_FOR_CTX)) {
+ idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+
+ return NETDEV_TX_BUSY;
+ }
+
+ protocol = vlan_get_protocol(skb);
+ if (protocol == htons(ETH_P_IP))
+ offload.tx_flags |= IDPF_TX_FLAGS_IPV4;
+ else if (protocol == htons(ETH_P_IPV6))
+ offload.tx_flags |= IDPF_TX_FLAGS_IPV6;
+
+ tso = idpf_tso(skb, &offload);
+ if (tso < 0)
+ goto out_drop;
+
+ csum = idpf_tx_singleq_csum(skb, &offload);
+ if (csum < 0)
+ goto out_drop;
+
+ if (tso || offload.cd_tunneling)
+ idpf_tx_singleq_build_ctx_desc(tx_q, &offload);
+
+ /* record the location of the first descriptor for this packet */
+ first = &tx_q->tx_buf[tx_q->next_to_use];
+ first->skb = skb;
+
+ if (tso) {
+ first->gso_segs = offload.tso_segs;
+ first->bytecount = skb->len + ((first->gso_segs - 1) * offload.tso_hdr_len);
+ } else {
+ first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN);
+ first->gso_segs = 1;
+ }
+ idpf_tx_singleq_map(tx_q, first, &offload);
+
+ return NETDEV_TX_OK;
+
+out_drop:
+ return idpf_tx_drop_skb(tx_q, skb);
+}
+
+/**
+ * idpf_tx_singleq_start - Selects the right Tx queue to send buffer
+ * @skb: send buffer
+ * @netdev: network interface device structure
+ *
+ * Returns NETDEV_TX_OK if sent, else an error code
+ */
+netdev_tx_t idpf_tx_singleq_start(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ struct idpf_vport *vport = idpf_netdev_to_vport(netdev);
+ struct idpf_queue *tx_q;
+
+ tx_q = vport->txqs[skb_get_queue_mapping(skb)];
+
+ /* The hardware can't handle really short frames, so pad them here in
+ * software; hardware padding only works beyond this point
+ */
+ if (skb_put_padto(skb, IDPF_TX_MIN_PKT_LEN)) {
+ idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+
+ return NETDEV_TX_OK;
+ }
+
+ return idpf_tx_singleq_frame(skb, tx_q);
+}
+
+/**
+ * idpf_tx_singleq_clean - Reclaim resources from queue
+ * @tx_q: Tx queue to clean
+ * @napi_budget: Used to determine if we are in netpoll
+ * @cleaned: returns number of packets cleaned
+ *
+ * Returns true if the completion budget was not exhausted, false otherwise.
+ */
+static bool idpf_tx_singleq_clean(struct idpf_queue *tx_q, int napi_budget,
+ int *cleaned)
+{
+ unsigned int budget = tx_q->vport->compln_clean_budget;
+ unsigned int total_bytes = 0, total_pkts = 0;
+ struct idpf_base_tx_desc *tx_desc;
+ s16 ntc = tx_q->next_to_clean;
+ struct idpf_netdev_priv *np;
+ struct idpf_tx_buf *tx_buf;
+ struct idpf_vport *vport;
+ struct netdev_queue *nq;
+ bool dont_wake;
+
+ tx_desc = IDPF_BASE_TX_DESC(tx_q, ntc);
+ tx_buf = &tx_q->tx_buf[ntc];
+ ntc -= tx_q->desc_count;
+
+ do {
+ struct idpf_base_tx_desc *eop_desc;
+
+ /* If this entry in the ring was used as a context descriptor,
+ * its corresponding entry in the buffer ring will indicate as
+ * much. We can skip this descriptor since there is no buffer
+ * to clean.
+ */
+ if (tx_buf->ctx_entry) {
+ /* Clear this flag here to avoid stale flag values when
+ * this buffer is used for actual data in the future.
+ * There are cases where the tx_buf struct / the flags
+ * field will not be cleared before being reused.
+ */
+ tx_buf->ctx_entry = false;
+ goto fetch_next_txq_desc;
+ }
+
+ /* if next_to_watch is not set then no work pending */
+ eop_desc = (struct idpf_base_tx_desc *)tx_buf->next_to_watch;
+ if (!eop_desc)
+ break;
+
+ /* prevent any other reads prior to eop_desc */
+ smp_rmb();
+
+ /* if the descriptor isn't done, no work yet to do */
+ if (!(eop_desc->qw1 &
+ cpu_to_le64(IDPF_TX_DESC_DTYPE_DESC_DONE)))
+ break;
+
+ /* clear next_to_watch to prevent false hangs */
+ tx_buf->next_to_watch = NULL;
+
+ /* update the statistics for this packet */
+ total_bytes += tx_buf->bytecount;
+ total_pkts += tx_buf->gso_segs;
+
+ napi_consume_skb(tx_buf->skb, napi_budget);
+
+ /* unmap skb header data */
+ dma_unmap_single(tx_q->dev,
+ dma_unmap_addr(tx_buf, dma),
+ dma_unmap_len(tx_buf, len),
+ DMA_TO_DEVICE);
+
+ /* clear tx_buf data */
+ tx_buf->skb = NULL;
+ dma_unmap_len_set(tx_buf, len, 0);
+
+ /* unmap remaining buffers */
+ while (tx_desc != eop_desc) {
+ tx_buf++;
+ tx_desc++;
+ ntc++;
+ if (unlikely(!ntc)) {
+ ntc -= tx_q->desc_count;
+ tx_buf = tx_q->tx_buf;
+ tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
+ }
+
+ /* unmap any remaining paged data */
+ if (dma_unmap_len(tx_buf, len)) {
+ dma_unmap_page(tx_q->dev,
+ dma_unmap_addr(tx_buf, dma),
+ dma_unmap_len(tx_buf, len),
+ DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buf, len, 0);
+ }
+ }
+
+ /* update budget only if we did something */
+ budget--;
+
+fetch_next_txq_desc:
+ tx_buf++;
+ tx_desc++;
+ ntc++;
+ if (unlikely(!ntc)) {
+ ntc -= tx_q->desc_count;
+ tx_buf = tx_q->tx_buf;
+ tx_desc = IDPF_BASE_TX_DESC(tx_q, 0);
+ }
+ } while (likely(budget));
+
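+ /* ntc was biased negative above so that '!ntc' detects a ring wrap;
+ * add the descriptor count back to restore the true index.
+ */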
+ ntc += tx_q->desc_count;
+ tx_q->next_to_clean = ntc;
+
+ *cleaned += total_pkts;
+
+ u64_stats_update_begin(&tx_q->stats_sync);
+ u64_stats_add(&tx_q->q_stats.tx.packets, total_pkts);
+ u64_stats_add(&tx_q->q_stats.tx.bytes, total_bytes);
+ u64_stats_update_end(&tx_q->stats_sync);
+
+ vport = tx_q->vport;
+ np = netdev_priv(vport->netdev);
+ nq = netdev_get_tx_queue(vport->netdev, tx_q->idx);
+
+ dont_wake = np->state != __IDPF_VPORT_UP ||
+ !netif_carrier_ok(vport->netdev);
+ __netif_txq_completed_wake(nq, total_pkts, total_bytes,
+ IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH,
+ dont_wake);
+
+ return !!budget;
+}
+
+/**
+ * idpf_tx_singleq_clean_all - Clean all Tx queues
+ * @q_vec: queue vector
+ * @budget: Used to determine if we are in netpoll
+ * @cleaned: returns number of packets cleaned
+ *
+ * Returns false if clean is not complete else returns true
+ */
+static bool idpf_tx_singleq_clean_all(struct idpf_q_vector *q_vec, int budget,
+ int *cleaned)
+{
+ u16 num_txq = q_vec->num_txq;
+ bool clean_complete = true;
+ int i, budget_per_q;
+
+ budget_per_q = num_txq ? max(budget / num_txq, 1) : 0;
+ for (i = 0; i < num_txq; i++) {
+ struct idpf_queue *q;
+
+ q = q_vec->tx[i];
+ clean_complete &= idpf_tx_singleq_clean(q, budget_per_q,
+ cleaned);
+ }
+
+ return clean_complete;
+}
+
+/**
+ * idpf_rx_singleq_test_staterr - tests bits in Rx descriptor
+ * status and error fields
+ * @rx_desc: pointer to receive descriptor (in le64 format)
+ * @stat_err_bits: value to mask
+ *
+ * This function does some fast chicanery in order to return the
+ * value of the mask which is really only used for boolean tests.
+ * The status_error_ptype_len doesn't need to be shifted because it begins
+ * at offset zero.
+ */
+static bool idpf_rx_singleq_test_staterr(const union virtchnl2_rx_desc *rx_desc,
+ const u64 stat_err_bits)
+{
+ return !!(rx_desc->base_wb.qword1.status_error_ptype_len &
+ cpu_to_le64(stat_err_bits));
+}
+
+/**
+ * idpf_rx_singleq_is_non_eop - process handling of non-EOP buffers
+ * @rxq: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ * @skb: Current socket buffer containing buffer in progress
+ * @ntc: next to clean
+ *
+ * Returns true if the descriptor is not the last buffer of the packet
+ */
+static bool idpf_rx_singleq_is_non_eop(struct idpf_queue *rxq,
+ union virtchnl2_rx_desc *rx_desc,
+ struct sk_buff *skb, u16 ntc)
+{
+ /* if we are the last buffer then there is nothing else to do */
+ if (likely(idpf_rx_singleq_test_staterr(rx_desc, IDPF_RXD_EOF_SINGLEQ)))
+ return false;
+
+ return true;
+}
+
+/**
+ * idpf_rx_singleq_csum - Indicate in skb if checksum is good
+ * @rxq: Rx ring being processed
+ * @skb: skb currently being received and modified
+ * @csum_bits: checksum bits from descriptor
+ * @ptype: the packet type decoded by hardware
+ *
+ * skb->protocol must be set before this function is called
+ */
+static void idpf_rx_singleq_csum(struct idpf_queue *rxq, struct sk_buff *skb,
+ struct idpf_rx_csum_decoded *csum_bits,
+ u16 ptype)
+{
+ struct idpf_rx_ptype_decoded decoded;
+ bool ipv4, ipv6;
+
+ /* check if Rx checksum is enabled */
+ if (unlikely(!(rxq->vport->netdev->features & NETIF_F_RXCSUM)))
+ return;
+
+ /* check if HW has decoded the packet and checksum */
+ if (unlikely(!(csum_bits->l3l4p)))
+ return;
+
+ decoded = rxq->vport->rx_ptype_lkup[ptype];
+ if (unlikely(!(decoded.known && decoded.outer_ip)))
+ return;
+
+ ipv4 = IDPF_RX_PTYPE_TO_IPV(&decoded, IDPF_RX_PTYPE_OUTER_IPV4);
+ ipv6 = IDPF_RX_PTYPE_TO_IPV(&decoded, IDPF_RX_PTYPE_OUTER_IPV6);
+
+ /* Check if there were any checksum errors */
+ if (unlikely(ipv4 && (csum_bits->ipe || csum_bits->eipe)))
+ goto checksum_fail;
+
+ /* The device cannot checksum packets with certain IPv6 extension
+ * headers, as indicated by the IPV6EXADD bit
+ */
+ if (unlikely(ipv6 && csum_bits->ipv6exadd))
+ return;
+
+ /* check for L4 errors and handle packets that were not able to be
+ * checksummed due to arrival speed
+ */
+ if (unlikely(csum_bits->l4e))
+ goto checksum_fail;
+
+ if (unlikely(csum_bits->nat && csum_bits->eudpe))
+ goto checksum_fail;
+
+ /* Handle packets that were not able to be checksummed due to arrival
+ * speed, in this case the stack can compute the csum.
+ */
+ if (unlikely(csum_bits->pprs))
+ return;
+
+ /* If there is an outer header present that might contain a checksum
+ * we need to bump the checksum level by 1 to reflect the fact that
+ * we are indicating we validated the inner checksum.
+ */
+ if (decoded.tunnel_type >= IDPF_RX_PTYPE_TUNNEL_IP_GRENAT)
+ skb->csum_level = 1;
+
+ /* Only report checksum unnecessary for ICMP, TCP, UDP, or SCTP */
+ switch (decoded.inner_prot) {
+ case IDPF_RX_PTYPE_INNER_PROT_ICMP:
+ case IDPF_RX_PTYPE_INNER_PROT_TCP:
+ case IDPF_RX_PTYPE_INNER_PROT_UDP:
+ case IDPF_RX_PTYPE_INNER_PROT_SCTP:
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ return;
+ default:
+ return;
+ }
+
+checksum_fail:
+ u64_stats_update_begin(&rxq->stats_sync);
+ u64_stats_inc(&rxq->q_stats.rx.hw_csum_err);
+ u64_stats_update_end(&rxq->stats_sync);
+}
+
+/**
+ * idpf_rx_singleq_base_csum - Indicate in skb if hw indicated a good cksum
+ * @rx_q: Rx completion queue
+ * @skb: skb currently being received and modified
+ * @rx_desc: the receive descriptor
+ * @ptype: Rx packet type
+ *
+ * This function only operates on the VIRTCHNL2_RXDID_1_32B_BASE_M base 32-byte
+ * descriptor writeback format.
+ **/
+static void idpf_rx_singleq_base_csum(struct idpf_queue *rx_q,
+ struct sk_buff *skb,
+ union virtchnl2_rx_desc *rx_desc,
+ u16 ptype)
+{
+ struct idpf_rx_csum_decoded csum_bits;
+ u32 rx_error, rx_status;
+ u64 qword;
+
+ qword = le64_to_cpu(rx_desc->base_wb.qword1.status_error_ptype_len);
+
+ rx_status = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M, qword);
+ rx_error = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M, qword);
+
+ csum_bits.ipe = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_M, rx_error);
+ csum_bits.eipe = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_M,
+ rx_error);
+ csum_bits.l4e = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_M, rx_error);
+ csum_bits.pprs = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_M,
+ rx_error);
+ csum_bits.l3l4p = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_M,
+ rx_status);
+ csum_bits.ipv6exadd = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_M,
+ rx_status);
+ csum_bits.nat = 0;
+ csum_bits.eudpe = 0;
+
+ idpf_rx_singleq_csum(rx_q, skb, &csum_bits, ptype);
+}
+
+/**
+ * idpf_rx_singleq_flex_csum - Indicate in skb if hw indicated a good cksum
+ * @rx_q: Rx completion queue
+ * @skb: skb currently being received and modified
+ * @rx_desc: the receive descriptor
+ * @ptype: Rx packet type
+ *
+ * This function only operates on the VIRTCHNL2_RXDID_2_FLEX_SQ_NIC flexible
+ * descriptor writeback format.
+ **/
+static void idpf_rx_singleq_flex_csum(struct idpf_queue *rx_q,
+ struct sk_buff *skb,
+ union virtchnl2_rx_desc *rx_desc,
+ u16 ptype)
+{
+ struct idpf_rx_csum_decoded csum_bits;
+ u16 rx_status0, rx_status1;
+
+ rx_status0 = le16_to_cpu(rx_desc->flex_nic_wb.status_error0);
+ rx_status1 = le16_to_cpu(rx_desc->flex_nic_wb.status_error1);
+
+ csum_bits.ipe = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_M,
+ rx_status0);
+ csum_bits.eipe = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_M,
+ rx_status0);
+ csum_bits.l4e = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_M,
+ rx_status0);
+ csum_bits.eudpe = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_M,
+ rx_status0);
+ csum_bits.l3l4p = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_M,
+ rx_status0);
+ csum_bits.ipv6exadd = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_M,
+ rx_status0);
+ csum_bits.nat = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_M,
+ rx_status1);
+ csum_bits.pprs = 0;
+
+ idpf_rx_singleq_csum(rx_q, skb, &csum_bits, ptype);
+}
+
+/**
+ * idpf_rx_singleq_base_hash - set the hash value in the skb
+ * @rx_q: Rx completion queue
+ * @skb: skb currently being received and modified
+ * @rx_desc: specific descriptor
+ * @decoded: Decoded Rx packet type related fields
+ *
+ * This function only operates on the VIRTCHNL2_RXDID_1_32B_BASE_M base 32-byte
+ * descriptor writeback format.
+ **/
+static void idpf_rx_singleq_base_hash(struct idpf_queue *rx_q,
+ struct sk_buff *skb,
+ union virtchnl2_rx_desc *rx_desc,
+ struct idpf_rx_ptype_decoded *decoded)
+{
+ u64 mask, qw1;
+
+ if (unlikely(!(rx_q->vport->netdev->features & NETIF_F_RXHASH)))
+ return;
+
+ mask = VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH_M;
+ qw1 = le64_to_cpu(rx_desc->base_wb.qword1.status_error_ptype_len);
+
+ if (FIELD_GET(mask, qw1) == mask) {
+ u32 hash = le32_to_cpu(rx_desc->base_wb.qword0.hi_dword.rss);
+
+ skb_set_hash(skb, hash, idpf_ptype_to_htype(decoded));
+ }
+}
+
+/**
+ * idpf_rx_singleq_flex_hash - set the hash value in the skb
+ * @rx_q: Rx completion queue
+ * @skb: skb currently being received and modified
+ * @rx_desc: specific descriptor
+ * @decoded: Decoded Rx packet type related fields
+ *
+ * This function only operates on the VIRTCHNL2_RXDID_2_FLEX_SQ_NIC flexible
+ * descriptor writeback format.
+ **/
+static void idpf_rx_singleq_flex_hash(struct idpf_queue *rx_q,
+ struct sk_buff *skb,
+ union virtchnl2_rx_desc *rx_desc,
+ struct idpf_rx_ptype_decoded *decoded)
+{
+ if (unlikely(!(rx_q->vport->netdev->features & NETIF_F_RXHASH)))
+ return;
+
+ if (FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_M,
+ le16_to_cpu(rx_desc->flex_nic_wb.status_error0)))
+ skb_set_hash(skb, le32_to_cpu(rx_desc->flex_nic_wb.rss_hash),
+ idpf_ptype_to_htype(decoded));
+}
+
+/**
+ * idpf_rx_singleq_process_skb_fields - Populate skb header fields from Rx
+ * descriptor
+ * @rx_q: Rx ring being processed
+ * @skb: pointer to current skb being populated
+ * @rx_desc: descriptor for skb
+ * @ptype: packet type
+ *
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, VLAN, protocol, and
+ * other fields within the skb.
+ */
+static void idpf_rx_singleq_process_skb_fields(struct idpf_queue *rx_q,
+ struct sk_buff *skb,
+ union virtchnl2_rx_desc *rx_desc,
+ u16 ptype)
+{
+ struct idpf_rx_ptype_decoded decoded =
+ rx_q->vport->rx_ptype_lkup[ptype];
+
+ /* modifies the skb - consumes the enet header */
+ skb->protocol = eth_type_trans(skb, rx_q->vport->netdev);
+
+ /* Check if we're using base mode descriptor IDs */
+ if (rx_q->rxdids == VIRTCHNL2_RXDID_1_32B_BASE_M) {
+ idpf_rx_singleq_base_hash(rx_q, skb, rx_desc, &decoded);
+ idpf_rx_singleq_base_csum(rx_q, skb, rx_desc, ptype);
+ } else {
+ idpf_rx_singleq_flex_hash(rx_q, skb, rx_desc, &decoded);
+ idpf_rx_singleq_flex_csum(rx_q, skb, rx_desc, ptype);
+ }
+}
+
+/**
+ * idpf_rx_singleq_buf_hw_alloc_all - Replace used receive buffers
+ * @rx_q: queue for which the hw buffers are allocated
+ * @cleaned_count: number of buffers to replace
+ *
+ * Returns false if all allocations were successful, true if any fail
+ */
+bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rx_q,
+ u16 cleaned_count)
+{
+ struct virtchnl2_singleq_rx_buf_desc *desc;
+ u16 nta = rx_q->next_to_alloc;
+ struct idpf_rx_buf *buf;
+
+ if (!cleaned_count)
+ return false;
+
+ desc = IDPF_SINGLEQ_RX_BUF_DESC(rx_q, nta);
+ buf = &rx_q->rx_buf.buf[nta];
+
+ do {
+ dma_addr_t addr;
+
+ addr = idpf_alloc_page(rx_q->pp, buf, rx_q->rx_buf_size);
+ if (unlikely(addr == DMA_MAPPING_ERROR))
+ break;
+
+ /* Refresh the desc even if buffer_addrs didn't change
+ * because each write-back erases this info.
+ */
+ desc->pkt_addr = cpu_to_le64(addr);
+ desc->hdr_addr = 0;
+ desc++;
+
+ buf++;
+ nta++;
+ if (unlikely(nta == rx_q->desc_count)) {
+ desc = IDPF_SINGLEQ_RX_BUF_DESC(rx_q, 0);
+ buf = rx_q->rx_buf.buf;
+ nta = 0;
+ }
+
+ cleaned_count--;
+ } while (cleaned_count);
+
+ if (rx_q->next_to_alloc != nta) {
+ idpf_rx_buf_hw_update(rx_q, nta);
+ rx_q->next_to_alloc = nta;
+ }
+
+ return !!cleaned_count;
+}
+
+/**
+ * idpf_rx_singleq_extract_base_fields - Extract fields from the Rx descriptor
+ * @rx_q: Rx descriptor queue
+ * @rx_desc: the descriptor to process
+ * @fields: storage for extracted values
+ *
+ * Decode the Rx descriptor and extract relevant information including the
+ * size and Rx packet type.
+ *
+ * This function only operates on the VIRTCHNL2_RXDID_1_32B_BASE_M base 32-byte
+ * descriptor writeback format.
+ */
+static void idpf_rx_singleq_extract_base_fields(struct idpf_queue *rx_q,
+ union virtchnl2_rx_desc *rx_desc,
+ struct idpf_rx_extracted *fields)
+{
+ u64 qword;
+
+ qword = le64_to_cpu(rx_desc->base_wb.qword1.status_error_ptype_len);
+
+ fields->size = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M, qword);
+ fields->rx_ptype = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M, qword);
+}
+
+/**
+ * idpf_rx_singleq_extract_flex_fields - Extract fields from the Rx descriptor
+ * @rx_q: Rx descriptor queue
+ * @rx_desc: the descriptor to process
+ * @fields: storage for extracted values
+ *
+ * Decode the Rx descriptor and extract relevant information including the
+ * size and Rx packet type.
+ *
+ * This function only operates on the VIRTCHNL2_RXDID_2_FLEX_SQ_NIC flexible
+ * descriptor writeback format.
+ */
+static void idpf_rx_singleq_extract_flex_fields(struct idpf_queue *rx_q,
+ union virtchnl2_rx_desc *rx_desc,
+ struct idpf_rx_extracted *fields)
+{
+ fields->size = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M,
+ le16_to_cpu(rx_desc->flex_nic_wb.pkt_len));
+ fields->rx_ptype = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_PTYPE_M,
+ le16_to_cpu(rx_desc->flex_nic_wb.ptype_flex_flags0));
+}
+
+/**
+ * idpf_rx_singleq_extract_fields - Extract fields from the Rx descriptor
+ * @rx_q: Rx descriptor queue
+ * @rx_desc: the descriptor to process
+ * @fields: storage for extracted values
+ *
+ */
+static void idpf_rx_singleq_extract_fields(struct idpf_queue *rx_q,
+ union virtchnl2_rx_desc *rx_desc,
+ struct idpf_rx_extracted *fields)
+{
+ if (rx_q->rxdids == VIRTCHNL2_RXDID_1_32B_BASE_M)
+ idpf_rx_singleq_extract_base_fields(rx_q, rx_desc, fields);
+ else
+ idpf_rx_singleq_extract_flex_fields(rx_q, rx_desc, fields);
+}
+
+/**
+ * idpf_rx_singleq_clean - Reclaim resources after receive completes
+ * @rx_q: rx queue to clean
+ * @budget: Total limit on number of packets to process
+ *
+ * Returns the number of packets cleaned, or the full budget if a buffer
+ * allocation failure occurred (to guarantee another pass through this routine)
+ */
+static int idpf_rx_singleq_clean(struct idpf_queue *rx_q, int budget)
+{
+ unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
+ struct sk_buff *skb = rx_q->skb;
+ u16 ntc = rx_q->next_to_clean;
+ u16 cleaned_count = 0;
+ bool failure = false;
+
+ /* Process Rx packets bounded by budget */
+ while (likely(total_rx_pkts < (unsigned int)budget)) {
+ struct idpf_rx_extracted fields = { };
+ union virtchnl2_rx_desc *rx_desc;
+ struct idpf_rx_buf *rx_buf;
+
+ /* get the Rx desc from Rx queue based on 'next_to_clean' */
+ rx_desc = IDPF_RX_DESC(rx_q, ntc);
+
+ /* status_error_ptype_len will always be zero for unused
+ * descriptors because it's cleared in cleanup and overlaps
+ * with hdr_addr, which is always zero because packet split
+ * isn't used. If the hardware wrote DD then the length will
+ * be non-zero.
+ */
+#define IDPF_RXD_DD VIRTCHNL2_RX_BASE_DESC_STATUS_DD_M
+ if (!idpf_rx_singleq_test_staterr(rx_desc,
+ IDPF_RXD_DD))
+ break;
+
+ /* This memory barrier is needed to keep us from reading
+ * any other fields out of the rx_desc
+ */
+ dma_rmb();
+
+ idpf_rx_singleq_extract_fields(rx_q, rx_desc, &fields);
+
+ rx_buf = &rx_q->rx_buf.buf[ntc];
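+ /* A zero-length buffer carries no packet data; just drop our
+ * reference on the page and advance past the descriptor.
+ */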
+ if (!fields.size) {
+ idpf_rx_put_page(rx_buf);
+ goto skip_data;
+ }
+
+ idpf_rx_sync_for_cpu(rx_buf, fields.size);
+ skb = rx_q->skb;
+ if (skb)
+ idpf_rx_add_frag(rx_buf, skb, fields.size);
+ else
+ skb = idpf_rx_construct_skb(rx_q, rx_buf, fields.size);
+
+ /* exit if we failed to retrieve a buffer */
+ if (!skb)
+ break;
+
+skip_data:
+ IDPF_SINGLEQ_BUMP_RING_IDX(rx_q, ntc);
+
+ cleaned_count++;
+
+ /* skip if it is non EOP desc */
+ if (idpf_rx_singleq_is_non_eop(rx_q, rx_desc, skb, ntc))
+ continue;
+
+#define IDPF_RXD_ERR_S FIELD_PREP(VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M, \
+ VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_M)
+ if (unlikely(idpf_rx_singleq_test_staterr(rx_desc,
+ IDPF_RXD_ERR_S))) {
+ dev_kfree_skb_any(skb);
+ skb = NULL;
+ continue;
+ }
+
+ /* pad skb if needed (to make valid ethernet frame) */
+ if (eth_skb_pad(skb)) {
+ skb = NULL;
+ continue;
+ }
+
+ /* probably a little skewed due to removing CRC */
+ total_rx_bytes += skb->len;
+
+ /* protocol */
+ idpf_rx_singleq_process_skb_fields(rx_q, skb,
+ rx_desc, fields.rx_ptype);
+
+ /* send completed skb up the stack */
+ napi_gro_receive(&rx_q->q_vector->napi, skb);
+ skb = NULL;
+
+ /* update budget accounting */
+ total_rx_pkts++;
+ }
+
+ rx_q->skb = skb;
+
+ rx_q->next_to_clean = ntc;
+
+ if (cleaned_count)
+ failure = idpf_rx_singleq_buf_hw_alloc_all(rx_q, cleaned_count);
+
+ u64_stats_update_begin(&rx_q->stats_sync);
+ u64_stats_add(&rx_q->q_stats.rx.packets, total_rx_pkts);
+ u64_stats_add(&rx_q->q_stats.rx.bytes, total_rx_bytes);
+ u64_stats_update_end(&rx_q->stats_sync);
+
+ /* guarantee a trip back through this routine if there was a failure */
+ return failure ? budget : (int)total_rx_pkts;
+}
+
+/**
+ * idpf_rx_singleq_clean_all - Clean all Rx queues
+ * @q_vec: queue vector
+ * @budget: Used to determine if we are in netpoll
+ * @cleaned: returns number of packets cleaned
+ *
+ * Returns false if clean is not complete else returns true
+ */
+static bool idpf_rx_singleq_clean_all(struct idpf_q_vector *q_vec, int budget,
+ int *cleaned)
+{
+ u16 num_rxq = q_vec->num_rxq;
+ bool clean_complete = true;
+ int budget_per_q, i;
+
+ /* We attempt to distribute budget to each Rx queue fairly, but don't
+ * allow the budget to go below 1 because that would exit polling early.
+ */
+ budget_per_q = num_rxq ? max(budget / num_rxq, 1) : 0;
+ for (i = 0; i < num_rxq; i++) {
+ struct idpf_queue *rxq = q_vec->rx[i];
+ int pkts_cleaned_per_q;
+
+ pkts_cleaned_per_q = idpf_rx_singleq_clean(rxq, budget_per_q);
+
+ /* if we clean as many as budgeted, we must not be done */
+ if (pkts_cleaned_per_q >= budget_per_q)
+ clean_complete = false;
+ *cleaned += pkts_cleaned_per_q;
+ }
+
+ return clean_complete;
+}
+
+/**
+ * idpf_vport_singleq_napi_poll - NAPI handler
+ * @napi: struct from which you get q_vector
+ * @budget: budget provided by stack
+ */
+int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget)
+{
+ struct idpf_q_vector *q_vector =
+ container_of(napi, struct idpf_q_vector, napi);
+ bool clean_complete;
+ int work_done = 0;
+
+ /* Handle case where we are called by netpoll with a budget of 0 */
+ if (budget <= 0) {
+ idpf_tx_singleq_clean_all(q_vector, budget, &work_done);
+
+ return budget;
+ }
+
+ clean_complete = idpf_rx_singleq_clean_all(q_vector, budget,
+ &work_done);
+ clean_complete &= idpf_tx_singleq_clean_all(q_vector, budget,
+ &work_done);
+
+ /* If work not completed, return budget and polling will return */
+ if (!clean_complete)
+ return budget;
+
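+ /* Report strictly less than the full budget to napi_complete_done()
+ * so polling can be completed correctly below.
+ */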
+ work_done = min_t(int, work_done, budget - 1);
+
+ /* Exit the polling mode, but don't re-enable interrupts if stack might
+ * poll us due to busy-polling
+ */
+ if (likely(napi_complete_done(napi, work_done)))
+ idpf_vport_intr_update_itr_ena_irq(q_vector);
+
+ return work_done;
+}
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
new file mode 100644
index 000000000000..5e1ef70d54fe
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
@@ -0,0 +1,4292 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2023 Intel Corporation */
+
+#include "idpf.h"
+
+/**
+ * idpf_buf_lifo_push - push a buffer pointer onto stack
+ * @stack: pointer to stack struct
+ * @buf: pointer to buf to push
+ *
+ * Returns 0 on success, negative on failure
+ **/
+static int idpf_buf_lifo_push(struct idpf_buf_lifo *stack,
+ struct idpf_tx_stash *buf)
+{
+ if (unlikely(stack->top == stack->size))
+ return -ENOSPC;
+
+ stack->bufs[stack->top++] = buf;
+
+ return 0;
+}
+
+/**
+ * idpf_buf_lifo_pop - pop a buffer pointer from stack
+ * @stack: pointer to stack struct
+ **/
+static struct idpf_tx_stash *idpf_buf_lifo_pop(struct idpf_buf_lifo *stack)
+{
+ if (unlikely(!stack->top))
+ return NULL;
+
+ return stack->bufs[--stack->top];
+}
+
+/**
+ * idpf_tx_timeout - Respond to a Tx Hang
+ * @netdev: network interface device structure
+ * @txqueue: TX queue
+ */
+void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue)
+{
+ struct idpf_adapter *adapter = idpf_netdev_to_adapter(netdev);
+
+ adapter->tx_timeout_count++;
+
+ netdev_err(netdev, "Detected Tx timeout: Count %d, Queue %d\n",
+ adapter->tx_timeout_count, txqueue);
+ if (!idpf_is_reset_in_prog(adapter)) {
+ set_bit(IDPF_HR_FUNC_RESET, adapter->flags);
+ queue_delayed_work(adapter->vc_event_wq,
+ &adapter->vc_event_task,
+ msecs_to_jiffies(10));
+ }
+}
+
+/**
+ * idpf_tx_buf_rel - Release a Tx buffer
+ * @tx_q: the queue that owns the buffer
+ * @tx_buf: the buffer to free
+ */
+static void idpf_tx_buf_rel(struct idpf_queue *tx_q, struct idpf_tx_buf *tx_buf)
+{
+ if (tx_buf->skb) {
+ if (dma_unmap_len(tx_buf, len))
+ dma_unmap_single(tx_q->dev,
+ dma_unmap_addr(tx_buf, dma),
+ dma_unmap_len(tx_buf, len),
+ DMA_TO_DEVICE);
+ dev_kfree_skb_any(tx_buf->skb);
+ } else if (dma_unmap_len(tx_buf, len)) {
+ dma_unmap_page(tx_q->dev,
+ dma_unmap_addr(tx_buf, dma),
+ dma_unmap_len(tx_buf, len),
+ DMA_TO_DEVICE);
+ }
+
+ tx_buf->next_to_watch = NULL;
+ tx_buf->skb = NULL;
+ tx_buf->compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG;
+ dma_unmap_len_set(tx_buf, len, 0);
+}
+
+/**
+ * idpf_tx_buf_rel_all - Free any empty Tx buffers
+ * @txq: queue to be cleaned
+ */
+static void idpf_tx_buf_rel_all(struct idpf_queue *txq)
+{
+ u16 i;
+
+ /* Buffers already cleared, nothing to do */
+ if (!txq->tx_buf)
+ return;
+
+ /* Free all the Tx buffer sk_buffs */
+ for (i = 0; i < txq->desc_count; i++)
+ idpf_tx_buf_rel(txq, &txq->tx_buf[i]);
+
+ kfree(txq->tx_buf);
+ txq->tx_buf = NULL;
+
+ if (!txq->buf_stack.bufs)
+ return;
+
+ for (i = 0; i < txq->buf_stack.size; i++)
+ kfree(txq->buf_stack.bufs[i]);
+
+ kfree(txq->buf_stack.bufs);
+ txq->buf_stack.bufs = NULL;
+}
+
+/**
+ * idpf_tx_desc_rel - Free Tx resources per queue
+ * @txq: Tx descriptor ring for a specific queue
+ * @bufq: buffer q or completion q
+ *
+ * Free all transmit software resources
+ */
+static void idpf_tx_desc_rel(struct idpf_queue *txq, bool bufq)
+{
+ if (bufq)
+ idpf_tx_buf_rel_all(txq);
+
+ if (!txq->desc_ring)
+ return;
+
+ dmam_free_coherent(txq->dev, txq->size, txq->desc_ring, txq->dma);
+ txq->desc_ring = NULL;
+ txq->next_to_alloc = 0;
+ txq->next_to_use = 0;
+ txq->next_to_clean = 0;
+}
+
+/**
+ * idpf_tx_desc_rel_all - Free Tx Resources for All Queues
+ * @vport: virtual port structure
+ *
+ * Free all transmit software resources
+ */
+static void idpf_tx_desc_rel_all(struct idpf_vport *vport)
+{
+ int i, j;
+
+ if (!vport->txq_grps)
+ return;
+
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
+
+ for (j = 0; j < txq_grp->num_txq; j++)
+ idpf_tx_desc_rel(txq_grp->txqs[j], true);
+
+ if (idpf_is_queue_model_split(vport->txq_model))
+ idpf_tx_desc_rel(txq_grp->complq, false);
+ }
+}
+
+/**
+ * idpf_tx_buf_alloc_all - Allocate memory for all buffer resources
+ * @tx_q: queue for which the buffers are allocated
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_tx_buf_alloc_all(struct idpf_queue *tx_q)
+{
+ int buf_size;
+ int i;
+
+ /* Allocate bookkeeping buffers only. Buffers to be supplied to HW
+ * are allocated by the kernel network stack and received as part of the skb
+ */
+ buf_size = sizeof(struct idpf_tx_buf) * tx_q->desc_count;
+ tx_q->tx_buf = kzalloc(buf_size, GFP_KERNEL);
+ if (!tx_q->tx_buf)
+ return -ENOMEM;
+
+ /* Initialize tx_bufs with invalid completion tags */
+ for (i = 0; i < tx_q->desc_count; i++)
+ tx_q->tx_buf[i].compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG;
+
+ /* Initialize tx buf stack for out-of-order completions if
+ * flow scheduling offload is enabled
+ */
+ tx_q->buf_stack.bufs =
+ kcalloc(tx_q->desc_count, sizeof(struct idpf_tx_stash *),
+ GFP_KERNEL);
+ if (!tx_q->buf_stack.bufs)
+ return -ENOMEM;
+
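+ /* Start with a full stack; the stash entries themselves are
+ * allocated just below.
+ */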
+ tx_q->buf_stack.size = tx_q->desc_count;
+ tx_q->buf_stack.top = tx_q->desc_count;
+
+ for (i = 0; i < tx_q->desc_count; i++) {
+ tx_q->buf_stack.bufs[i] = kzalloc(sizeof(*tx_q->buf_stack.bufs[i]),
+ GFP_KERNEL);
+ if (!tx_q->buf_stack.bufs[i])
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+/**
+ * idpf_tx_desc_alloc - Allocate the Tx descriptors
+ * @tx_q: the tx ring to set up
+ * @bufq: buffer or completion queue
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_tx_desc_alloc(struct idpf_queue *tx_q, bool bufq)
+{
+ struct device *dev = tx_q->dev;
+ u32 desc_sz;
+ int err;
+
+ if (bufq) {
+ err = idpf_tx_buf_alloc_all(tx_q);
+ if (err)
+ goto err_alloc;
+
+ desc_sz = sizeof(struct idpf_base_tx_desc);
+ } else {
+ desc_sz = sizeof(struct idpf_splitq_tx_compl_desc);
+ }
+
+ tx_q->size = tx_q->desc_count * desc_sz;
+
+ /* Allocate descriptors and also round up to nearest 4K */
+ tx_q->size = ALIGN(tx_q->size, 4096);
+ tx_q->desc_ring = dmam_alloc_coherent(dev, tx_q->size, &tx_q->dma,
+ GFP_KERNEL);
+ if (!tx_q->desc_ring) {
+ dev_err(dev, "Unable to allocate memory for the Tx descriptor ring, size=%d\n",
+ tx_q->size);
+ err = -ENOMEM;
+ goto err_alloc;
+ }
+
+ tx_q->next_to_alloc = 0;
+ tx_q->next_to_use = 0;
+ tx_q->next_to_clean = 0;
+ set_bit(__IDPF_Q_GEN_CHK, tx_q->flags);
+
+ return 0;
+
+err_alloc:
+ idpf_tx_desc_rel(tx_q, bufq);
+
+ return err;
+}
+
+/**
+ * idpf_tx_desc_alloc_all - allocate all queues Tx resources
+ * @vport: virtual port private structure
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_tx_desc_alloc_all(struct idpf_vport *vport)
+{
+ struct device *dev = &vport->adapter->pdev->dev;
+ int err = 0;
+ int i, j;
+
+ /* Set up buffer queues. In the single queue model, buffer queues and
+ * completion queues are the same
+ */
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ for (j = 0; j < vport->txq_grps[i].num_txq; j++) {
+ struct idpf_queue *txq = vport->txq_grps[i].txqs[j];
+ u8 gen_bits = 0;
+ u16 bufidx_mask;
+
+ err = idpf_tx_desc_alloc(txq, true);
+ if (err) {
+ dev_err(dev, "Allocation for Tx Queue %u failed\n",
+ i);
+ goto err_out;
+ }
+
+ if (!idpf_is_queue_model_split(vport->txq_model))
+ continue;
+
+ txq->compl_tag_cur_gen = 0;
+
+ /* Determine the number of bits in the bufid
+ * mask and add one to get the start of the
+ * generation bits
+ */
+ bufidx_mask = txq->desc_count - 1;
+ while (bufidx_mask >> 1) {
+ txq->compl_tag_gen_s++;
+ bufidx_mask = bufidx_mask >> 1;
+ }
+ txq->compl_tag_gen_s++;
+
+ gen_bits = IDPF_TX_SPLITQ_COMPL_TAG_WIDTH -
+ txq->compl_tag_gen_s;
+ txq->compl_tag_gen_max = GETMAXVAL(gen_bits);
+
+ /* Set bufid mask based on location of first
+ * gen bit; it cannot simply be the descriptor
+ * ring size-1 since we can have size values
+ * where not all of those bits are set.
+ */
+ txq->compl_tag_bufid_m =
+ GETMAXVAL(txq->compl_tag_gen_s);
+ }
+
+ if (!idpf_is_queue_model_split(vport->txq_model))
+ continue;
+
+ /* Setup completion queues */
+ err = idpf_tx_desc_alloc(vport->txq_grps[i].complq, false);
+ if (err) {
+ dev_err(dev, "Allocation for Tx Completion Queue %u failed\n",
+ i);
+ goto err_out;
+ }
+ }
+
+err_out:
+ if (err)
+ idpf_tx_desc_rel_all(vport);
+
+ return err;
+}
+
+/**
+ * idpf_rx_page_rel - Release an rx buffer page
+ * @rxq: the queue that owns the buffer
+ * @rx_buf: the buffer to free
+ */
+static void idpf_rx_page_rel(struct idpf_queue *rxq, struct idpf_rx_buf *rx_buf)
+{
+ if (unlikely(!rx_buf->page))
+ return;
+
+ page_pool_put_full_page(rxq->pp, rx_buf->page, false);
+
+ rx_buf->page = NULL;
+ rx_buf->page_offset = 0;
+}
+
+/**
+ * idpf_rx_hdr_buf_rel_all - Release header buffer memory
+ * @rxq: queue to use
+ */
+static void idpf_rx_hdr_buf_rel_all(struct idpf_queue *rxq)
+{
+ struct idpf_adapter *adapter = rxq->vport->adapter;
+
+ dma_free_coherent(&adapter->pdev->dev,
+ rxq->desc_count * IDPF_HDR_BUF_SIZE,
+ rxq->rx_buf.hdr_buf_va,
+ rxq->rx_buf.hdr_buf_pa);
+ rxq->rx_buf.hdr_buf_va = NULL;
+}
+
+/**
+ * idpf_rx_buf_rel_all - Free all Rx buffer resources for a queue
+ * @rxq: queue to be cleaned
+ */
+static void idpf_rx_buf_rel_all(struct idpf_queue *rxq)
+{
+ u16 i;
+
+ /* queue already cleared, nothing to do */
+ if (!rxq->rx_buf.buf)
+ return;
+
+ /* Free all the bufs allocated and given to hw on Rx queue */
+ for (i = 0; i < rxq->desc_count; i++)
+ idpf_rx_page_rel(rxq, &rxq->rx_buf.buf[i]);
+
+ if (rxq->rx_hsplit_en)
+ idpf_rx_hdr_buf_rel_all(rxq);
+
+ page_pool_destroy(rxq->pp);
+ rxq->pp = NULL;
+
+ kfree(rxq->rx_buf.buf);
+ rxq->rx_buf.buf = NULL;
+}
+
+/**
+ * idpf_rx_desc_rel - Free a specific Rx q resources
+ * @rxq: queue to clean the resources from
+ * @bufq: buffer q or completion q
+ * @q_model: single or split q model
+ *
+ * Free a specific rx queue resources
+ */
+static void idpf_rx_desc_rel(struct idpf_queue *rxq, bool bufq, s32 q_model)
+{
+ if (!rxq)
+ return;
+
+ if (!bufq && idpf_is_queue_model_split(q_model) && rxq->skb) {
+ dev_kfree_skb_any(rxq->skb);
+ rxq->skb = NULL;
+ }
+
+ if (bufq || !idpf_is_queue_model_split(q_model))
+ idpf_rx_buf_rel_all(rxq);
+
+ rxq->next_to_alloc = 0;
+ rxq->next_to_clean = 0;
+ rxq->next_to_use = 0;
+ if (!rxq->desc_ring)
+ return;
+
+ dmam_free_coherent(rxq->dev, rxq->size, rxq->desc_ring, rxq->dma);
+ rxq->desc_ring = NULL;
+}
+
+/**
+ * idpf_rx_desc_rel_all - Free Rx Resources for All Queues
+ * @vport: virtual port structure
+ *
+ * Free all rx queues resources
+ */
+static void idpf_rx_desc_rel_all(struct idpf_vport *vport)
+{
+ struct idpf_rxq_group *rx_qgrp;
+ u16 num_rxq;
+ int i, j;
+
+ if (!vport->rxq_grps)
+ return;
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ rx_qgrp = &vport->rxq_grps[i];
+
+ if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ for (j = 0; j < rx_qgrp->singleq.num_rxq; j++)
+ idpf_rx_desc_rel(rx_qgrp->singleq.rxqs[j],
+ false, vport->rxq_model);
+ continue;
+ }
+
+ num_rxq = rx_qgrp->splitq.num_rxq_sets;
+ for (j = 0; j < num_rxq; j++)
+ idpf_rx_desc_rel(&rx_qgrp->splitq.rxq_sets[j]->rxq,
+ false, vport->rxq_model);
+
+ if (!rx_qgrp->splitq.bufq_sets)
+ continue;
+
+ for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ struct idpf_bufq_set *bufq_set =
+ &rx_qgrp->splitq.bufq_sets[j];
+
+ idpf_rx_desc_rel(&bufq_set->bufq, true,
+ vport->rxq_model);
+ }
+ }
+}
+
+/**
+ * idpf_rx_buf_hw_update - Store the new tail and head values
+ * @rxq: queue to bump
+ * @val: new head index
+ */
+void idpf_rx_buf_hw_update(struct idpf_queue *rxq, u32 val)
+{
+ rxq->next_to_use = val;
+
+ if (unlikely(!rxq->tail))
+ return;
+
+ /* writel has an implicit memory barrier */
+ writel(val, rxq->tail);
+}
+
+/**
+ * idpf_rx_hdr_buf_alloc_all - Allocate memory for header buffers
+ * @rxq: ring to use
+ *
+ * Returns 0 on success, negative on failure.
+ */
+static int idpf_rx_hdr_buf_alloc_all(struct idpf_queue *rxq)
+{
+ struct idpf_adapter *adapter = rxq->vport->adapter;
+
+ rxq->rx_buf.hdr_buf_va =
+ dma_alloc_coherent(&adapter->pdev->dev,
+ IDPF_HDR_BUF_SIZE * rxq->desc_count,
+ &rxq->rx_buf.hdr_buf_pa,
+ GFP_KERNEL);
+ if (!rxq->rx_buf.hdr_buf_va)
+ return -ENOMEM;
+
+ return 0;
+}
+
+/**
+ * idpf_rx_post_buf_refill - Post buffer id to refill queue
+ * @refillq: refill queue to post to
+ * @buf_id: buffer id to post
+ */
+static void idpf_rx_post_buf_refill(struct idpf_sw_queue *refillq, u16 buf_id)
+{
+ u16 nta = refillq->next_to_alloc;
+
+ /* store the buffer ID and the SW maintained GEN bit to the refillq */
+ refillq->ring[nta] =
+ ((buf_id << IDPF_RX_BI_BUFID_S) & IDPF_RX_BI_BUFID_M) |
+ (!!(test_bit(__IDPF_Q_GEN_CHK, refillq->flags)) <<
+ IDPF_RX_BI_GEN_S);
+
+ if (unlikely(++nta == refillq->desc_count)) {
+ nta = 0;
+ change_bit(__IDPF_Q_GEN_CHK, refillq->flags);
+ }
+ refillq->next_to_alloc = nta;
+}
+
+/**
+ * idpf_rx_post_buf_desc - Post buffer to bufq descriptor ring
+ * @bufq: buffer queue to post to
+ * @buf_id: buffer id to post
+ *
+ * Returns false if buffer could not be allocated, true otherwise.
+ */
+static bool idpf_rx_post_buf_desc(struct idpf_queue *bufq, u16 buf_id)
+{
+ struct virtchnl2_splitq_rx_buf_desc *splitq_rx_desc = NULL;
+ u16 nta = bufq->next_to_alloc;
+ struct idpf_rx_buf *buf;
+ dma_addr_t addr;
+
+ splitq_rx_desc = IDPF_SPLITQ_RX_BUF_DESC(bufq, nta);
+ buf = &bufq->rx_buf.buf[buf_id];
+
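+ /* With header split enabled, header buffers come from one contiguous
+ * DMA region indexed by buffer id.
+ */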
+ if (bufq->rx_hsplit_en) {
+ splitq_rx_desc->hdr_addr =
+ cpu_to_le64(bufq->rx_buf.hdr_buf_pa +
+ (u32)buf_id * IDPF_HDR_BUF_SIZE);
+ }
+
+ addr = idpf_alloc_page(bufq->pp, buf, bufq->rx_buf_size);
+ if (unlikely(addr == DMA_MAPPING_ERROR))
+ return false;
+
+ splitq_rx_desc->pkt_addr = cpu_to_le64(addr);
+ splitq_rx_desc->qword0.buf_id = cpu_to_le16(buf_id);
+
+ nta++;
+ if (unlikely(nta == bufq->desc_count))
+ nta = 0;
+ bufq->next_to_alloc = nta;
+
+ return true;
+}
+
+/**
+ * idpf_rx_post_init_bufs - Post initial buffers to bufq
+ * @bufq: buffer queue to post working set to
+ * @working_set: number of buffers to put in working set
+ *
+ * Returns true if @working_set bufs were posted successfully, false otherwise.
+ */
+static bool idpf_rx_post_init_bufs(struct idpf_queue *bufq, u16 working_set)
+{
+ int i;
+
+ for (i = 0; i < working_set; i++) {
+ if (!idpf_rx_post_buf_desc(bufq, i))
+ return false;
+ }
+
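+ /* Tail updates are done in whole buffer strides, so round
+ * next_to_alloc down to a stride boundary before writing it.
+ */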
+ idpf_rx_buf_hw_update(bufq,
+ bufq->next_to_alloc & ~(bufq->rx_buf_stride - 1));
+
+ return true;
+}
+
+/**
+ * idpf_rx_create_page_pool - Create a page pool
+ * @rxbufq: RX queue to create page pool for
+ *
+ * Returns &page_pool on success, casted -errno on failure
+ */
+static struct page_pool *idpf_rx_create_page_pool(struct idpf_queue *rxbufq)
+{
+ struct page_pool_params pp = {
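+ /* Have the page pool do DMA mapping and sync-for-device on our behalf */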
+ .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
+ .order = 0,
+ .pool_size = rxbufq->desc_count,
+ .nid = NUMA_NO_NODE,
+ .dev = rxbufq->vport->netdev->dev.parent,
+ .max_len = PAGE_SIZE,
+ .dma_dir = DMA_FROM_DEVICE,
+ .offset = 0,
+ };
+
+ return page_pool_create(&pp);
+}
+
+/**
+ * idpf_rx_buf_alloc_all - Allocate memory for all buffer resources
+ * @rxbufq: queue for which the buffers are allocated; equivalent to
+ * rxq when operating in singleq mode
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_rx_buf_alloc_all(struct idpf_queue *rxbufq)
+{
+ int err = 0;
+
+ /* Allocate bookkeeping buffers */
+ rxbufq->rx_buf.buf = kcalloc(rxbufq->desc_count,
+ sizeof(struct idpf_rx_buf), GFP_KERNEL);
+ if (!rxbufq->rx_buf.buf) {
+ err = -ENOMEM;
+ goto rx_buf_alloc_all_out;
+ }
+
+ if (rxbufq->rx_hsplit_en) {
+ err = idpf_rx_hdr_buf_alloc_all(rxbufq);
+ if (err)
+ goto rx_buf_alloc_all_out;
+ }
+
+ /* Allocate buffers to be given to HW. */
+ if (idpf_is_queue_model_split(rxbufq->vport->rxq_model)) {
+ int working_set = IDPF_RX_BUFQ_WORKING_SET(rxbufq);
+
+ if (!idpf_rx_post_init_bufs(rxbufq, working_set))
+ err = -ENOMEM;
+ } else {
+ if (idpf_rx_singleq_buf_hw_alloc_all(rxbufq,
+ rxbufq->desc_count - 1))
+ err = -ENOMEM;
+ }
+
+rx_buf_alloc_all_out:
+ if (err)
+ idpf_rx_buf_rel_all(rxbufq);
+
+ return err;
+}
+
+/**
+ * idpf_rx_bufs_init - Initialize page pool, allocate rx bufs, and post to HW
+ * @rxbufq: RX queue to create page pool for
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_rx_bufs_init(struct idpf_queue *rxbufq)
+{
+ struct page_pool *pool;
+
+ pool = idpf_rx_create_page_pool(rxbufq);
+ if (IS_ERR(pool))
+ return PTR_ERR(pool);
+
+ rxbufq->pp = pool;
+
+ return idpf_rx_buf_alloc_all(rxbufq);
+}
+
+/**
+ * idpf_rx_bufs_init_all - Initialize all RX bufs
+ * @vport: virtual port struct
+ *
+ * Returns 0 on success, negative on failure
+ */
+int idpf_rx_bufs_init_all(struct idpf_vport *vport)
+{
+ struct idpf_rxq_group *rx_qgrp;
+ struct idpf_queue *q;
+ int i, j, err;
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ rx_qgrp = &vport->rxq_grps[i];
+
+ /* Allocate bufs for the rxq itself in singleq */
+ if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ int num_rxq = rx_qgrp->singleq.num_rxq;
+
+ for (j = 0; j < num_rxq; j++) {
+ q = rx_qgrp->singleq.rxqs[j];
+ err = idpf_rx_bufs_init(q);
+ if (err)
+ return err;
+ }
+
+ continue;
+ }
+
+ /* Otherwise, allocate bufs for the buffer queues */
+ for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+ err = idpf_rx_bufs_init(q);
+ if (err)
+ return err;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * idpf_rx_desc_alloc - Allocate queue Rx resources
+ * @rxq: Rx queue for which the resources are setup
+ * @bufq: buffer or completion queue
+ * @q_model: single or split queue model
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_rx_desc_alloc(struct idpf_queue *rxq, bool bufq, s32 q_model)
+{
+ struct device *dev = rxq->dev;
+
+ if (bufq)
+ rxq->size = rxq->desc_count *
+ sizeof(struct virtchnl2_splitq_rx_buf_desc);
+ else
+ rxq->size = rxq->desc_count *
+ sizeof(union virtchnl2_rx_desc);
+
+ /* Allocate descriptors and also round up to nearest 4K */
+ rxq->size = ALIGN(rxq->size, 4096);
+ rxq->desc_ring = dmam_alloc_coherent(dev, rxq->size,
+ &rxq->dma, GFP_KERNEL);
+ if (!rxq->desc_ring) {
+ dev_err(dev, "Unable to allocate memory for the Rx descriptor ring, size=%d\n",
+ rxq->size);
+ return -ENOMEM;
+ }
+
+ rxq->next_to_alloc = 0;
+ rxq->next_to_clean = 0;
+ rxq->next_to_use = 0;
+ set_bit(__IDPF_Q_GEN_CHK, rxq->flags);
+
+ return 0;
+}
+
+/**
+ * idpf_rx_desc_alloc_all - allocate all RX queues resources
+ * @vport: virtual port structure
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_rx_desc_alloc_all(struct idpf_vport *vport)
+{
+ struct device *dev = &vport->adapter->pdev->dev;
+ struct idpf_rxq_group *rx_qgrp;
+ struct idpf_queue *q;
+ int i, j, err;
+ u16 num_rxq;
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ rx_qgrp = &vport->rxq_grps[i];
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ num_rxq = rx_qgrp->splitq.num_rxq_sets;
+ else
+ num_rxq = rx_qgrp->singleq.num_rxq;
+
+ for (j = 0; j < num_rxq; j++) {
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ else
+ q = rx_qgrp->singleq.rxqs[j];
+ err = idpf_rx_desc_alloc(q, false, vport->rxq_model);
+ if (err) {
+ dev_err(dev, "Memory allocation for Rx Queue %u failed\n",
+ i);
+ goto err_out;
+ }
+ }
+
+ if (!idpf_is_queue_model_split(vport->rxq_model))
+ continue;
+
+ for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+ err = idpf_rx_desc_alloc(q, true, vport->rxq_model);
+ if (err) {
+ dev_err(dev, "Memory allocation for Rx Buffer Queue %u failed\n",
+ i);
+ goto err_out;
+ }
+ }
+ }
+
+ return 0;
+
+err_out:
+ idpf_rx_desc_rel_all(vport);
+
+ return err;
+}
+
+/**
+ * idpf_txq_group_rel - Release all resources for txq groups
+ * @vport: vport to release txq groups on
+ */
+static void idpf_txq_group_rel(struct idpf_vport *vport)
+{
+ int i, j;
+
+ if (!vport->txq_grps)
+ return;
+
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *txq_grp = &vport->txq_grps[i];
+
+ for (j = 0; j < txq_grp->num_txq; j++) {
+ kfree(txq_grp->txqs[j]);
+ txq_grp->txqs[j] = NULL;
+ }
+ kfree(txq_grp->complq);
+ txq_grp->complq = NULL;
+ }
+ kfree(vport->txq_grps);
+ vport->txq_grps = NULL;
+}
+
+/**
+ * idpf_rxq_sw_queue_rel - Release software queue resources
+ * @rx_qgrp: rx queue group with software queues
+ */
+static void idpf_rxq_sw_queue_rel(struct idpf_rxq_group *rx_qgrp)
+{
+ int i, j;
+
+ for (i = 0; i < rx_qgrp->vport->num_bufqs_per_qgrp; i++) {
+ struct idpf_bufq_set *bufq_set = &rx_qgrp->splitq.bufq_sets[i];
+
+ for (j = 0; j < bufq_set->num_refillqs; j++) {
+ kfree(bufq_set->refillqs[j].ring);
+ bufq_set->refillqs[j].ring = NULL;
+ }
+ kfree(bufq_set->refillqs);
+ bufq_set->refillqs = NULL;
+ }
+}
+
+/**
+ * idpf_rxq_group_rel - Release all resources for rxq groups
+ * @vport: vport to release rxq groups on
+ */
+static void idpf_rxq_group_rel(struct idpf_vport *vport)
+{
+ int i;
+
+ if (!vport->rxq_grps)
+ return;
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ u16 num_rxq;
+ int j;
+
+ if (idpf_is_queue_model_split(vport->rxq_model)) {
+ num_rxq = rx_qgrp->splitq.num_rxq_sets;
+ for (j = 0; j < num_rxq; j++) {
+ kfree(rx_qgrp->splitq.rxq_sets[j]);
+ rx_qgrp->splitq.rxq_sets[j] = NULL;
+ }
+
+ idpf_rxq_sw_queue_rel(rx_qgrp);
+ kfree(rx_qgrp->splitq.bufq_sets);
+ rx_qgrp->splitq.bufq_sets = NULL;
+ } else {
+ num_rxq = rx_qgrp->singleq.num_rxq;
+ for (j = 0; j < num_rxq; j++) {
+ kfree(rx_qgrp->singleq.rxqs[j]);
+ rx_qgrp->singleq.rxqs[j] = NULL;
+ }
+ }
+ }
+ kfree(vport->rxq_grps);
+ vport->rxq_grps = NULL;
+}
+
+/**
+ * idpf_vport_queue_grp_rel_all - Release all queue groups
+ * @vport: vport to release queue groups for
+ */
+static void idpf_vport_queue_grp_rel_all(struct idpf_vport *vport)
+{
+ idpf_txq_group_rel(vport);
+ idpf_rxq_group_rel(vport);
+}
+
+/**
+ * idpf_vport_queues_rel - Free memory for all queues
+ * @vport: virtual port
+ *
+ * Free the memory allocated for queues associated to a vport
+ */
+void idpf_vport_queues_rel(struct idpf_vport *vport)
+{
+ idpf_tx_desc_rel_all(vport);
+ idpf_rx_desc_rel_all(vport);
+ idpf_vport_queue_grp_rel_all(vport);
+
+ kfree(vport->txqs);
+ vport->txqs = NULL;
+}
+
+/**
+ * idpf_vport_init_fast_path_txqs - Initialize fast path txq array
+ * @vport: vport to init txqs on
+ *
+ * We get a queue index from skb->queue_mapping and we need a fast way to
+ * dereference the queue from queue groups. This allows us to quickly pull a
+ * txq based on a queue index.
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_vport_init_fast_path_txqs(struct idpf_vport *vport)
+{
+ int i, j, k = 0;
+
+ vport->txqs = kcalloc(vport->num_txq, sizeof(struct idpf_queue *),
+ GFP_KERNEL);
+
+ if (!vport->txqs)
+ return -ENOMEM;
+
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_grp = &vport->txq_grps[i];
+
+ for (j = 0; j < tx_grp->num_txq; j++, k++) {
+ vport->txqs[k] = tx_grp->txqs[j];
+ vport->txqs[k]->idx = k;
+ }
+ }
+
+ return 0;
+}
+
+/**
+ * idpf_vport_init_num_qs - Initialize number of queues
+ * @vport: vport to initialize queues
+ * @vport_msg: data to be filled into vport
+ */
+void idpf_vport_init_num_qs(struct idpf_vport *vport,
+ struct virtchnl2_create_vport *vport_msg)
+{
+ struct idpf_vport_user_config_data *config_data;
+ u16 idx = vport->idx;
+
+ config_data = &vport->adapter->vport_config[idx]->user_config;
+ vport->num_txq = le16_to_cpu(vport_msg->num_tx_q);
+ vport->num_rxq = le16_to_cpu(vport_msg->num_rx_q);
+ /* The number of txqs and rxqs in config data will be zero only in the
+ * driver load path, and we don't update them thereafter.
+ */
+ if (!config_data->num_req_tx_qs && !config_data->num_req_rx_qs) {
+ config_data->num_req_tx_qs = le16_to_cpu(vport_msg->num_tx_q);
+ config_data->num_req_rx_qs = le16_to_cpu(vport_msg->num_rx_q);
+ }
+
+ if (idpf_is_queue_model_split(vport->txq_model))
+ vport->num_complq = le16_to_cpu(vport_msg->num_tx_complq);
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ vport->num_bufq = le16_to_cpu(vport_msg->num_rx_bufq);
+
+ /* Adjust number of buffer queues per Rx queue group. */
+ if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ vport->num_bufqs_per_qgrp = 0;
+ vport->bufq_size[0] = IDPF_RX_BUF_2048;
+
+ return;
+ }
+
+ vport->num_bufqs_per_qgrp = IDPF_MAX_BUFQS_PER_RXQ_GRP;
+ /* Bufq[0] default buffer size is 4K
+ * Bufq[1] default buffer size is 2K
+ */
+ vport->bufq_size[0] = IDPF_RX_BUF_4096;
+ vport->bufq_size[1] = IDPF_RX_BUF_2048;
+}
+
+/**
+ * idpf_vport_calc_num_q_desc - Calculate number of queue descriptors
+ * @vport: vport to calculate q descriptors for
+ */
+void idpf_vport_calc_num_q_desc(struct idpf_vport *vport)
+{
+ struct idpf_vport_user_config_data *config_data;
+ int num_bufqs = vport->num_bufqs_per_qgrp;
+ u32 num_req_txq_desc, num_req_rxq_desc;
+ u16 idx = vport->idx;
+ int i;
+
+ config_data = &vport->adapter->vport_config[idx]->user_config;
+ num_req_txq_desc = config_data->num_req_txq_desc;
+ num_req_rxq_desc = config_data->num_req_rxq_desc;
+
+ vport->complq_desc_count = 0;
+ if (num_req_txq_desc) {
+ vport->txq_desc_count = num_req_txq_desc;
+ if (idpf_is_queue_model_split(vport->txq_model)) {
+ vport->complq_desc_count = num_req_txq_desc;
+ if (vport->complq_desc_count < IDPF_MIN_TXQ_COMPLQ_DESC)
+ vport->complq_desc_count =
+ IDPF_MIN_TXQ_COMPLQ_DESC;
+ }
+ } else {
+ vport->txq_desc_count = IDPF_DFLT_TX_Q_DESC_COUNT;
+ if (idpf_is_queue_model_split(vport->txq_model))
+ vport->complq_desc_count =
+ IDPF_DFLT_TX_COMPLQ_DESC_COUNT;
+ }
+
+ if (num_req_rxq_desc)
+ vport->rxq_desc_count = num_req_rxq_desc;
+ else
+ vport->rxq_desc_count = IDPF_DFLT_RX_Q_DESC_COUNT;
+
+ for (i = 0; i < num_bufqs; i++) {
+ if (!vport->bufq_desc_count[i])
+ vport->bufq_desc_count[i] =
+ IDPF_RX_BUFQ_DESC_COUNT(vport->rxq_desc_count,
+ num_bufqs);
+ }
+}
+
+/**
+ * idpf_vport_calc_total_qs - Calculate total number of queues
+ * @adapter: private data struct
+ * @vport_idx: vport idx to retrieve vport pointer
+ * @vport_msg: message to fill with data
+ * @max_q: vport max queue info
+ *
+ * Return 0 on success, error value on failure.
+ */
+int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_idx,
+ struct virtchnl2_create_vport *vport_msg,
+ struct idpf_vport_max_q *max_q)
+{
+ int dflt_splitq_txq_grps = 0, dflt_singleq_txqs = 0;
+ int dflt_splitq_rxq_grps = 0, dflt_singleq_rxqs = 0;
+ u16 num_req_tx_qs = 0, num_req_rx_qs = 0;
+ struct idpf_vport_config *vport_config;
+ u16 num_txq_grps, num_rxq_grps;
+ u32 num_qs;
+
+ vport_config = adapter->vport_config[vport_idx];
+ if (vport_config) {
+ num_req_tx_qs = vport_config->user_config.num_req_tx_qs;
+ num_req_rx_qs = vport_config->user_config.num_req_rx_qs;
+ } else {
+ int num_cpus;
+
+ /* Restrict num of queues to cpus online as a default
+ * configuration to give best performance. User can always
+ * override to a max number of queues via ethtool.
+ */
+ num_cpus = num_online_cpus();
+
+ dflt_splitq_txq_grps = min_t(int, max_q->max_txq, num_cpus);
+ dflt_singleq_txqs = min_t(int, max_q->max_txq, num_cpus);
+ dflt_splitq_rxq_grps = min_t(int, max_q->max_rxq, num_cpus);
+ dflt_singleq_rxqs = min_t(int, max_q->max_rxq, num_cpus);
+ }
+
+ if (idpf_is_queue_model_split(le16_to_cpu(vport_msg->txq_model))) {
+ num_txq_grps = num_req_tx_qs ? num_req_tx_qs : dflt_splitq_txq_grps;
+ vport_msg->num_tx_complq = cpu_to_le16(num_txq_grps *
+ IDPF_COMPLQ_PER_GROUP);
+ vport_msg->num_tx_q = cpu_to_le16(num_txq_grps *
+ IDPF_DFLT_SPLITQ_TXQ_PER_GROUP);
+ } else {
+ num_txq_grps = IDPF_DFLT_SINGLEQ_TX_Q_GROUPS;
+ num_qs = num_txq_grps * (num_req_tx_qs ? num_req_tx_qs :
+ dflt_singleq_txqs);
+ vport_msg->num_tx_q = cpu_to_le16(num_qs);
+ vport_msg->num_tx_complq = 0;
+ }
+ if (idpf_is_queue_model_split(le16_to_cpu(vport_msg->rxq_model))) {
+ num_rxq_grps = num_req_rx_qs ? num_req_rx_qs : dflt_splitq_rxq_grps;
+ vport_msg->num_rx_bufq = cpu_to_le16(num_rxq_grps *
+ IDPF_MAX_BUFQS_PER_RXQ_GRP);
+ vport_msg->num_rx_q = cpu_to_le16(num_rxq_grps *
+ IDPF_DFLT_SPLITQ_RXQ_PER_GROUP);
+ } else {
+ num_rxq_grps = IDPF_DFLT_SINGLEQ_RX_Q_GROUPS;
+ num_qs = num_rxq_grps * (num_req_rx_qs ? num_req_rx_qs :
+ dflt_singleq_rxqs);
+ vport_msg->num_rx_q = cpu_to_le16(num_qs);
+ vport_msg->num_rx_bufq = 0;
+ }
+
+ return 0;
+}
+
+/**
+ * idpf_vport_calc_num_q_groups - Calculate number of queue groups
+ * @vport: vport to calculate q groups for
+ */
+void idpf_vport_calc_num_q_groups(struct idpf_vport *vport)
+{
+ if (idpf_is_queue_model_split(vport->txq_model))
+ vport->num_txq_grp = vport->num_txq;
+ else
+ vport->num_txq_grp = IDPF_DFLT_SINGLEQ_TX_Q_GROUPS;
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ vport->num_rxq_grp = vport->num_rxq;
+ else
+ vport->num_rxq_grp = IDPF_DFLT_SINGLEQ_RX_Q_GROUPS;
+}
+
+/**
+ * idpf_vport_calc_numq_per_grp - Calculate number of queues per group
+ * @vport: vport to calculate queues for
+ * @num_txq: return parameter for number of TX queues
+ * @num_rxq: return parameter for number of RX queues
+ */
+static void idpf_vport_calc_numq_per_grp(struct idpf_vport *vport,
+ u16 *num_txq, u16 *num_rxq)
+{
+ if (idpf_is_queue_model_split(vport->txq_model))
+ *num_txq = IDPF_DFLT_SPLITQ_TXQ_PER_GROUP;
+ else
+ *num_txq = vport->num_txq;
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ *num_rxq = IDPF_DFLT_SPLITQ_RXQ_PER_GROUP;
+ else
+ *num_rxq = vport->num_rxq;
+}
+
+/**
+ * idpf_rxq_set_descids - set the descids supported by this queue
+ * @vport: virtual port data structure
+ * @q: rx queue for which descids are set
+ */
+static void idpf_rxq_set_descids(struct idpf_vport *vport, struct idpf_queue *q)
+{
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ q->rxdids = VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M;
+ } else {
+ if (vport->base_rxd)
+ q->rxdids = VIRTCHNL2_RXDID_1_32B_BASE_M;
+ else
+ q->rxdids = VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M;
+ }
+}
+
+/**
+ * idpf_txq_group_alloc - Allocate all txq group resources
+ * @vport: vport to allocate txq groups for
+ * @num_txq: number of txqs to allocate for each group
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_txq_group_alloc(struct idpf_vport *vport, u16 num_txq)
+{
+ bool flow_sch_en;
+ int err, i;
+
+ vport->txq_grps = kcalloc(vport->num_txq_grp,
+ sizeof(*vport->txq_grps), GFP_KERNEL);
+ if (!vport->txq_grps)
+ return -ENOMEM;
+
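+ /* Flow-based scheduling is used when the device does not report the
+ * splitq queue-based scheduling capability.
+ */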
+ flow_sch_en = !idpf_is_cap_ena(vport->adapter, IDPF_OTHER_CAPS,
+ VIRTCHNL2_CAP_SPLITQ_QSCHED);
+
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ struct idpf_adapter *adapter = vport->adapter;
+ int j;
+
+ tx_qgrp->vport = vport;
+ tx_qgrp->num_txq = num_txq;
+
+ for (j = 0; j < tx_qgrp->num_txq; j++) {
+ tx_qgrp->txqs[j] = kzalloc(sizeof(*tx_qgrp->txqs[j]),
+ GFP_KERNEL);
+ if (!tx_qgrp->txqs[j]) {
+ err = -ENOMEM;
+ goto err_alloc;
+ }
+ }
+
+ for (j = 0; j < tx_qgrp->num_txq; j++) {
+ struct idpf_queue *q = tx_qgrp->txqs[j];
+
+ q->dev = &adapter->pdev->dev;
+ q->desc_count = vport->txq_desc_count;
+ q->tx_max_bufs = idpf_get_max_tx_bufs(adapter);
+ q->tx_min_pkt_len = idpf_get_min_tx_pkt_len(adapter);
+ q->vport = vport;
+ q->txq_grp = tx_qgrp;
+ hash_init(q->sched_buf_hash);
+
+ if (flow_sch_en)
+ set_bit(__IDPF_Q_FLOW_SCH_EN, q->flags);
+ }
+
+ if (!idpf_is_queue_model_split(vport->txq_model))
+ continue;
+
+ tx_qgrp->complq = kcalloc(IDPF_COMPLQ_PER_GROUP,
+ sizeof(*tx_qgrp->complq),
+ GFP_KERNEL);
+ if (!tx_qgrp->complq) {
+ err = -ENOMEM;
+ goto err_alloc;
+ }
+
+ tx_qgrp->complq->dev = &adapter->pdev->dev;
+ tx_qgrp->complq->desc_count = vport->complq_desc_count;
+ tx_qgrp->complq->vport = vport;
+ tx_qgrp->complq->txq_grp = tx_qgrp;
+
+ if (flow_sch_en)
+ __set_bit(__IDPF_Q_FLOW_SCH_EN, tx_qgrp->complq->flags);
+ }
+
+ return 0;
+
+err_alloc:
+ idpf_txq_group_rel(vport);
+
+ return err;
+}
+
+/**
+ * idpf_rxq_group_alloc - Allocate all rxq group resources
+ * @vport: vport to allocate rxq groups for
+ * @num_rxq: number of rxqs to allocate for each group
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_rxq_group_alloc(struct idpf_vport *vport, u16 num_rxq)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct idpf_queue *q;
+ int i, k, err = 0;
+
+ vport->rxq_grps = kcalloc(vport->num_rxq_grp,
+ sizeof(struct idpf_rxq_group), GFP_KERNEL);
+ if (!vport->rxq_grps)
+ return -ENOMEM;
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ int j;
+
+ rx_qgrp->vport = vport;
+ if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ rx_qgrp->singleq.num_rxq = num_rxq;
+ for (j = 0; j < num_rxq; j++) {
+ rx_qgrp->singleq.rxqs[j] =
+ kzalloc(sizeof(*rx_qgrp->singleq.rxqs[j]),
+ GFP_KERNEL);
+ if (!rx_qgrp->singleq.rxqs[j]) {
+ err = -ENOMEM;
+ goto err_alloc;
+ }
+ }
+ goto skip_splitq_rx_init;
+ }
+ rx_qgrp->splitq.num_rxq_sets = num_rxq;
+
+ for (j = 0; j < num_rxq; j++) {
+ rx_qgrp->splitq.rxq_sets[j] =
+ kzalloc(sizeof(struct idpf_rxq_set),
+ GFP_KERNEL);
+ if (!rx_qgrp->splitq.rxq_sets[j]) {
+ err = -ENOMEM;
+ goto err_alloc;
+ }
+ }
+
+ rx_qgrp->splitq.bufq_sets = kcalloc(vport->num_bufqs_per_qgrp,
+ sizeof(struct idpf_bufq_set),
+ GFP_KERNEL);
+ if (!rx_qgrp->splitq.bufq_sets) {
+ err = -ENOMEM;
+ goto err_alloc;
+ }
+
+ for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ struct idpf_bufq_set *bufq_set =
+ &rx_qgrp->splitq.bufq_sets[j];
+ int swq_size = sizeof(struct idpf_sw_queue);
+
+ q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+ q->dev = &adapter->pdev->dev;
+ q->desc_count = vport->bufq_desc_count[j];
+ q->vport = vport;
+ q->rxq_grp = rx_qgrp;
+ q->idx = j;
+ q->rx_buf_size = vport->bufq_size[j];
+ q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK;
+ q->rx_buf_stride = IDPF_RX_BUF_STRIDE;
+ if (idpf_is_cap_ena_all(adapter, IDPF_HSPLIT_CAPS,
+ IDPF_CAP_HSPLIT) &&
+ idpf_is_queue_model_split(vport->rxq_model)) {
+ q->rx_hsplit_en = true;
+ q->rx_hbuf_size = IDPF_HDR_BUF_SIZE;
+ }
+
+ bufq_set->num_refillqs = num_rxq;
+ bufq_set->refillqs = kcalloc(num_rxq, swq_size,
+ GFP_KERNEL);
+ if (!bufq_set->refillqs) {
+ err = -ENOMEM;
+ goto err_alloc;
+ }
+ for (k = 0; k < bufq_set->num_refillqs; k++) {
+ struct idpf_sw_queue *refillq =
+ &bufq_set->refillqs[k];
+
+ refillq->dev = &vport->adapter->pdev->dev;
+ refillq->desc_count =
+ vport->bufq_desc_count[j];
+ set_bit(__IDPF_Q_GEN_CHK, refillq->flags);
+ set_bit(__IDPF_RFLQ_GEN_CHK, refillq->flags);
+ refillq->ring = kcalloc(refillq->desc_count,
+ sizeof(u16),
+ GFP_KERNEL);
+ if (!refillq->ring) {
+ err = -ENOMEM;
+ goto err_alloc;
+ }
+ }
+ }
+
+skip_splitq_rx_init:
+ for (j = 0; j < num_rxq; j++) {
+ if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ q = rx_qgrp->singleq.rxqs[j];
+ goto setup_rxq;
+ }
+ q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ rx_qgrp->splitq.rxq_sets[j]->refillq0 =
+ &rx_qgrp->splitq.bufq_sets[0].refillqs[j];
+ if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP)
+ rx_qgrp->splitq.rxq_sets[j]->refillq1 =
+ &rx_qgrp->splitq.bufq_sets[1].refillqs[j];
+
+ if (idpf_is_cap_ena_all(adapter, IDPF_HSPLIT_CAPS,
+ IDPF_CAP_HSPLIT) &&
+ idpf_is_queue_model_split(vport->rxq_model)) {
+ q->rx_hsplit_en = true;
+ q->rx_hbuf_size = IDPF_HDR_BUF_SIZE;
+ }
+
+setup_rxq:
+ q->dev = &adapter->pdev->dev;
+ q->desc_count = vport->rxq_desc_count;
+ q->vport = vport;
+ q->rxq_grp = rx_qgrp;
+ q->idx = (i * num_rxq) + j;
+ /* In splitq mode, RXQ buffer size should be
+ * set to that of the first buffer queue
+ * associated with this RXQ
+ */
+ q->rx_buf_size = vport->bufq_size[0];
+ q->rx_buffer_low_watermark = IDPF_LOW_WATERMARK;
+ q->rx_max_pkt_size = vport->netdev->mtu +
+ IDPF_PACKET_HDR_PAD;
+ idpf_rxq_set_descids(vport, q);
+ }
+ }
+
+err_alloc:
+ if (err)
+ idpf_rxq_group_rel(vport);
+
+ return err;
+}
+
+/**
+ * idpf_vport_queue_grp_alloc_all - Allocate all queue groups/resources
+ * @vport: vport with qgrps to allocate
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_vport_queue_grp_alloc_all(struct idpf_vport *vport)
+{
+ u16 num_txq, num_rxq;
+ int err;
+
+ idpf_vport_calc_numq_per_grp(vport, &num_txq, &num_rxq);
+
+ err = idpf_txq_group_alloc(vport, num_txq);
+ if (err)
+ goto err_out;
+
+ err = idpf_rxq_group_alloc(vport, num_rxq);
+ if (err)
+ goto err_out;
+
+ return 0;
+
+err_out:
+ idpf_vport_queue_grp_rel_all(vport);
+
+ return err;
+}
+
+/**
+ * idpf_vport_queues_alloc - Allocate memory for all queues
+ * @vport: virtual port
+ *
+ * Allocate memory for queues associated with a vport. Returns 0 on success,
+ * negative on failure.
+ */
+int idpf_vport_queues_alloc(struct idpf_vport *vport)
+{
+ int err;
+
+ err = idpf_vport_queue_grp_alloc_all(vport);
+ if (err)
+ goto err_out;
+
+ err = idpf_tx_desc_alloc_all(vport);
+ if (err)
+ goto err_out;
+
+ err = idpf_rx_desc_alloc_all(vport);
+ if (err)
+ goto err_out;
+
+ err = idpf_vport_init_fast_path_txqs(vport);
+ if (err)
+ goto err_out;
+
+ return 0;
+
+err_out:
+ idpf_vport_queues_rel(vport);
+
+ return err;
+}
+
+/**
+ * idpf_tx_handle_sw_marker - Handle queue marker packet
+ * @tx_q: tx queue to handle software marker
+ */
+static void idpf_tx_handle_sw_marker(struct idpf_queue *tx_q)
+{
+ struct idpf_vport *vport = tx_q->vport;
+ int i;
+
+ clear_bit(__IDPF_Q_SW_MARKER, tx_q->flags);
+ /* Hardware must write marker packets to all queues associated with
+ * completion queues, so check if all queues have received marker packets.
+ */
+ for (i = 0; i < vport->num_txq; i++)
+ /* If we're still waiting on any other TXQ marker completions,
+ * just return now since we cannot wake up the marker_wq yet.
+ */
+ if (test_bit(__IDPF_Q_SW_MARKER, vport->txqs[i]->flags))
+ return;
+
+ /* Drain complete */
+ set_bit(IDPF_VPORT_SW_MARKER, vport->flags);
+ wake_up(&vport->sw_marker_wq);
+}
+
+/**
+ * idpf_tx_splitq_clean_hdr - Clean TX buffer resources for header portion of
+ * packet
+ * @tx_q: tx queue to clean buffer from
+ * @tx_buf: buffer to be cleaned
+ * @cleaned: pointer to stats struct to track cleaned packets/bytes
+ * @napi_budget: Used to determine if we are in netpoll
+ */
+static void idpf_tx_splitq_clean_hdr(struct idpf_queue *tx_q,
+ struct idpf_tx_buf *tx_buf,
+ struct idpf_cleaned_stats *cleaned,
+ int napi_budget)
+{
+ napi_consume_skb(tx_buf->skb, napi_budget);
+
+ if (dma_unmap_len(tx_buf, len)) {
+ dma_unmap_single(tx_q->dev,
+ dma_unmap_addr(tx_buf, dma),
+ dma_unmap_len(tx_buf, len),
+ DMA_TO_DEVICE);
+
+ dma_unmap_len_set(tx_buf, len, 0);
+ }
+
+ /* clear tx_buf data */
+ tx_buf->skb = NULL;
+
+ cleaned->bytes += tx_buf->bytecount;
+ cleaned->packets += tx_buf->gso_segs;
+}
+
+/**
+ * idpf_tx_clean_stashed_bufs - clean bufs that were stored for
+ * out of order completions
+ * @txq: queue to clean
+ * @compl_tag: completion tag of packet to clean (from completion descriptor)
+ * @cleaned: pointer to stats struct to track cleaned packets/bytes
+ * @budget: Used to determine if we are in netpoll
+ */
+static void idpf_tx_clean_stashed_bufs(struct idpf_queue *txq, u16 compl_tag,
+ struct idpf_cleaned_stats *cleaned,
+ int budget)
+{
+ struct idpf_tx_stash *stash;
+ struct hlist_node *tmp_buf;
+
+ /* Buffer completion */
+ hash_for_each_possible_safe(txq->sched_buf_hash, stash, tmp_buf,
+ hlist, compl_tag) {
+ if (unlikely(stash->buf.compl_tag != (int)compl_tag))
+ continue;
+
+ if (stash->buf.skb) {
+ idpf_tx_splitq_clean_hdr(txq, &stash->buf, cleaned,
+ budget);
+ } else if (dma_unmap_len(&stash->buf, len)) {
+ dma_unmap_page(txq->dev,
+ dma_unmap_addr(&stash->buf, dma),
+ dma_unmap_len(&stash->buf, len),
+ DMA_TO_DEVICE);
+ dma_unmap_len_set(&stash->buf, len, 0);
+ }
+
+ /* Push shadow buf back onto stack */
+ idpf_buf_lifo_push(&txq->buf_stack, stash);
+
+ hash_del(&stash->hlist);
+ }
+}
+
+/**
+ * idpf_stash_flow_sch_buffers - store buffer parameter info to be freed at a
+ * later time (only relevant for flow scheduling mode)
+ * @txq: Tx queue to clean
+ * @tx_buf: buffer to store
+ *
+ * Returns 0 on success, -ENOMEM if no stash buffers are available
+ */
+static int idpf_stash_flow_sch_buffers(struct idpf_queue *txq,
+ struct idpf_tx_buf *tx_buf)
+{
+ struct idpf_tx_stash *stash;
+
+ if (unlikely(!dma_unmap_addr(tx_buf, dma) &&
+ !dma_unmap_len(tx_buf, len)))
+ return 0;
+
+ stash = idpf_buf_lifo_pop(&txq->buf_stack);
+ if (unlikely(!stash)) {
+ net_err_ratelimited("%s: No out-of-order TX buffers left!\n",
+ txq->vport->netdev->name);
+
+ return -ENOMEM;
+ }
+
+ /* Store buffer params in shadow buffer */
+ stash->buf.skb = tx_buf->skb;
+ stash->buf.bytecount = tx_buf->bytecount;
+ stash->buf.gso_segs = tx_buf->gso_segs;
+ dma_unmap_addr_set(&stash->buf, dma, dma_unmap_addr(tx_buf, dma));
+ dma_unmap_len_set(&stash->buf, len, dma_unmap_len(tx_buf, len));
+ stash->buf.compl_tag = tx_buf->compl_tag;
+
+ /* Add buffer to buf_hash table to be freed later */
+ hash_add(txq->sched_buf_hash, &stash->hlist, stash->buf.compl_tag);
+
+ memset(tx_buf, 0, sizeof(struct idpf_tx_buf));
+
+ /* Invalidate the on-ring buffer's completion tag so the clean path skips it */
+ tx_buf->compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG;
+
+ return 0;
+}
+
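+/* Bump a next_to_clean index that idpf_tx_splitq_clean() keeps biased by
+ * -desc_count: reaching zero means the real index just wrapped past the end
+ * of the ring, so re-bias it and restart the buffer/descriptor pointers at
+ * index 0.
+ */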
+#define idpf_tx_splitq_clean_bump_ntc(txq, ntc, desc, buf) \
+do { \
+ (ntc)++; \
+ if (unlikely(!(ntc))) { \
+ ntc -= (txq)->desc_count; \
+ buf = (txq)->tx_buf; \
+ desc = IDPF_FLEX_TX_DESC(txq, 0); \
+ } else { \
+ (buf)++; \
+ (desc)++; \
+ } \
+} while (0)
+
+/**
+ * idpf_tx_splitq_clean - Reclaim resources from buffer queue
+ * @tx_q: Tx queue to clean
+ * @end: queue index until which it should be cleaned
+ * @napi_budget: Used to determine if we are in netpoll
+ * @cleaned: pointer to stats struct to track cleaned packets/bytes
+ * @descs_only: true if queue is using flow-based scheduling and should
+ * not clean buffers at this time
+ *
+ * Cleans the queue descriptor ring. If the queue is using queue-based
+ * scheduling, the buffers will be cleaned as well. If the queue is using
+ * flow-based scheduling, only the descriptors are cleaned at this time.
+ * Separate packet completion events will be reported on the completion queue,
+ * and the buffers will be cleaned separately. The stats are not updated from
+ * this function when using flow-based scheduling.
+ */
+static void idpf_tx_splitq_clean(struct idpf_queue *tx_q, u16 end,
+ int napi_budget,
+ struct idpf_cleaned_stats *cleaned,
+ bool descs_only)
+{
+ union idpf_tx_flex_desc *next_pending_desc = NULL;
+ union idpf_tx_flex_desc *tx_desc;
+ s16 ntc = tx_q->next_to_clean;
+ struct idpf_tx_buf *tx_buf;
+
+ tx_desc = IDPF_FLEX_TX_DESC(tx_q, ntc);
+ next_pending_desc = IDPF_FLEX_TX_DESC(tx_q, end);
+ tx_buf = &tx_q->tx_buf[ntc];
+ ntc -= tx_q->desc_count;
+
+ while (tx_desc != next_pending_desc) {
+ union idpf_tx_flex_desc *eop_desc;
+
+ /* If this entry in the ring was used as a context descriptor,
+ * its corresponding entry in the buffer ring will have an
+ * invalid completion tag since no buffer was used. We can
+ * skip this descriptor since there is no buffer to clean.
+ */
+ if (unlikely(tx_buf->compl_tag == IDPF_SPLITQ_TX_INVAL_COMPL_TAG))
+ goto fetch_next_txq_desc;
+
+ eop_desc = (union idpf_tx_flex_desc *)tx_buf->next_to_watch;
+
+ /* clear next_to_watch to prevent false hangs */
+ tx_buf->next_to_watch = NULL;
+
+ if (descs_only) {
+ if (idpf_stash_flow_sch_buffers(tx_q, tx_buf))
+ goto tx_splitq_clean_out;
+
+ while (tx_desc != eop_desc) {
+ idpf_tx_splitq_clean_bump_ntc(tx_q, ntc,
+ tx_desc, tx_buf);
+
+ if (dma_unmap_len(tx_buf, len)) {
+ if (idpf_stash_flow_sch_buffers(tx_q,
+ tx_buf))
+ goto tx_splitq_clean_out;
+ }
+ }
+ } else {
+ idpf_tx_splitq_clean_hdr(tx_q, tx_buf, cleaned,
+ napi_budget);
+
+ /* unmap remaining buffers */
+ while (tx_desc != eop_desc) {
+ idpf_tx_splitq_clean_bump_ntc(tx_q, ntc,
+ tx_desc, tx_buf);
+
+ /* unmap any remaining paged data */
+ if (dma_unmap_len(tx_buf, len)) {
+ dma_unmap_page(tx_q->dev,
+ dma_unmap_addr(tx_buf, dma),
+ dma_unmap_len(tx_buf, len),
+ DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buf, len, 0);
+ }
+ }
+ }
+
+fetch_next_txq_desc:
+ idpf_tx_splitq_clean_bump_ntc(tx_q, ntc, tx_desc, tx_buf);
+ }
+
+tx_splitq_clean_out:
+ ntc += tx_q->desc_count;
+ tx_q->next_to_clean = ntc;
+}
+
+#define idpf_tx_clean_buf_ring_bump_ntc(txq, ntc, buf) \
+do { \
+ (buf)++; \
+ (ntc)++; \
+ if (unlikely((ntc) == (txq)->desc_count)) { \
+ buf = (txq)->tx_buf; \
+ ntc = 0; \
+ } \
+} while (0)
+
+/**
+ * idpf_tx_clean_buf_ring - clean flow scheduling TX queue buffers
+ * @txq: queue to clean
+ * @compl_tag: completion tag of packet to clean (from completion descriptor)
+ * @cleaned: pointer to stats struct to track cleaned packets/bytes
+ * @budget: Used to determine if we are in netpoll
+ *
+ * Cleans all buffers on the TX buffer ring associated with the input
+ * completion tag; any buffers still on the ring between the old next_to_clean
+ * and the start of the cleaned packet are stashed in the hash table to be
+ * cleaned later. The byte/segment counts for the cleaned packet are
+ * accumulated in the cleaned stats struct. Returns false if nothing was
+ * cleaned from the ring for this completion tag, true otherwise.
+ */
+static bool idpf_tx_clean_buf_ring(struct idpf_queue *txq, u16 compl_tag,
+ struct idpf_cleaned_stats *cleaned,
+ int budget)
+{
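+ /* The low-order bits of the completion tag carry the ring index of the
+ * packet's first buffer, as encoded in idpf_tx_splitq_map().
+ */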
+ u16 idx = compl_tag & txq->compl_tag_bufid_m;
+ struct idpf_tx_buf *tx_buf = NULL;
+ u16 ntc = txq->next_to_clean;
+ u16 num_descs_cleaned = 0;
+ u16 orig_idx = idx;
+
+ tx_buf = &txq->tx_buf[idx];
+
+ while (tx_buf->compl_tag == (int)compl_tag) {
+ if (tx_buf->skb) {
+ idpf_tx_splitq_clean_hdr(txq, tx_buf, cleaned, budget);
+ } else if (dma_unmap_len(tx_buf, len)) {
+ dma_unmap_page(txq->dev,
+ dma_unmap_addr(tx_buf, dma),
+ dma_unmap_len(tx_buf, len),
+ DMA_TO_DEVICE);
+ dma_unmap_len_set(tx_buf, len, 0);
+ }
+
+ memset(tx_buf, 0, sizeof(struct idpf_tx_buf));
+ tx_buf->compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG;
+
+ num_descs_cleaned++;
+ idpf_tx_clean_buf_ring_bump_ntc(txq, idx, tx_buf);
+ }
+
+ /* If we didn't clean anything on the ring for this completion, there's
+ * nothing more to do.
+ */
+ if (unlikely(!num_descs_cleaned))
+ return false;
+
+ /* Otherwise, if we did clean a packet on the ring directly, it's safe
+ * to assume that the descriptors starting from the original
+ * next_to_clean up until the previously cleaned packet can be reused.
+ * Therefore, we will go back in the ring and stash any buffers still
+ * in the ring into the hash table to be cleaned later.
+ */
+ tx_buf = &txq->tx_buf[ntc];
+ while (tx_buf != &txq->tx_buf[orig_idx]) {
+ idpf_stash_flow_sch_buffers(txq, tx_buf);
+ idpf_tx_clean_buf_ring_bump_ntc(txq, ntc, tx_buf);
+ }
+
+ /* Finally, update next_to_clean to reflect the work that was just done
+ * on the ring, if any. If the packet was only cleaned from the hash
+ * table, the ring is not impacted and we returned early above, so
+ * next_to_clean is only ever advanced to the updated idx here.
+ */
+ txq->next_to_clean = idx;
+
+ return true;
+}
+
+/**
+ * idpf_tx_handle_rs_completion - clean a single packet and all of its buffers
+ * whether on the buffer ring or in the hash table
+ * @txq: Tx ring to clean
+ * @desc: pointer to completion queue descriptor to extract completion
+ * information from
+ * @cleaned: pointer to stats struct to track cleaned packets/bytes
+ * @budget: Used to determine if we are in netpoll
+ *
+ * The bytes/packets cleaned are accumulated in the cleaned stats struct.
+ */
+static void idpf_tx_handle_rs_completion(struct idpf_queue *txq,
+ struct idpf_splitq_tx_compl_desc *desc,
+ struct idpf_cleaned_stats *cleaned,
+ int budget)
+{
+ u16 compl_tag;
+
+ if (!test_bit(__IDPF_Q_FLOW_SCH_EN, txq->flags)) {
+ u16 head = le16_to_cpu(desc->q_head_compl_tag.q_head);
+
+ return idpf_tx_splitq_clean(txq, head, budget, cleaned, false);
+ }
+
+ compl_tag = le16_to_cpu(desc->q_head_compl_tag.compl_tag);
+
+ /* If we didn't clean anything on the ring, this packet must be
+ * in the hash table. Go clean it there.
+ */
+ if (!idpf_tx_clean_buf_ring(txq, compl_tag, cleaned, budget))
+ idpf_tx_clean_stashed_bufs(txq, compl_tag, cleaned, budget);
+}
+
+/**
+ * idpf_tx_clean_complq - Reclaim resources on completion queue
+ * @complq: Tx ring to clean
+ * @budget: Used to determine if we are in netpoll
+ * @cleaned: returns number of packets cleaned
+ *
+ * Returns true if there's any budget left (i.e. the clean is finished)
+ */
+static bool idpf_tx_clean_complq(struct idpf_queue *complq, int budget,
+ int *cleaned)
+{
+ struct idpf_splitq_tx_compl_desc *tx_desc;
+ struct idpf_vport *vport = complq->vport;
+ s16 ntc = complq->next_to_clean;
+ struct idpf_netdev_priv *np;
+ unsigned int complq_budget;
+ bool complq_ok = true;
+ int i;
+
+ complq_budget = vport->compln_clean_budget;
+ tx_desc = IDPF_SPLITQ_TX_COMPLQ_DESC(complq, ntc);
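+ /* Keep next_to_clean biased by -desc_count while walking the ring so
+ * the wrap check below reduces to a test for zero.
+ */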
+ ntc -= complq->desc_count;
+
+ do {
+ struct idpf_cleaned_stats cleaned_stats = { };
+ struct idpf_queue *tx_q;
+ int rel_tx_qid;
+ u16 hw_head;
+ u8 ctype; /* completion type */
+ u16 gen;
+
+ /* if the descriptor isn't done, no work yet to do */
+ gen = (le16_to_cpu(tx_desc->qid_comptype_gen) &
+ IDPF_TXD_COMPLQ_GEN_M) >> IDPF_TXD_COMPLQ_GEN_S;
+ if (test_bit(__IDPF_Q_GEN_CHK, complq->flags) != gen)
+ break;
+
+ /* Find necessary info of TX queue to clean buffers */
+ rel_tx_qid = (le16_to_cpu(tx_desc->qid_comptype_gen) &
+ IDPF_TXD_COMPLQ_QID_M) >> IDPF_TXD_COMPLQ_QID_S;
+ if (rel_tx_qid >= complq->txq_grp->num_txq ||
+ !complq->txq_grp->txqs[rel_tx_qid]) {
+ dev_err(&complq->vport->adapter->pdev->dev,
+ "TxQ not found\n");
+ goto fetch_next_desc;
+ }
+ tx_q = complq->txq_grp->txqs[rel_tx_qid];
+
+ /* Determine completion type */
+ ctype = (le16_to_cpu(tx_desc->qid_comptype_gen) &
+ IDPF_TXD_COMPLQ_COMPL_TYPE_M) >>
+ IDPF_TXD_COMPLQ_COMPL_TYPE_S;
+ switch (ctype) {
+ case IDPF_TXD_COMPLT_RE:
+ hw_head = le16_to_cpu(tx_desc->q_head_compl_tag.q_head);
+
+ idpf_tx_splitq_clean(tx_q, hw_head, budget,
+ &cleaned_stats, true);
+ break;
+ case IDPF_TXD_COMPLT_RS:
+ idpf_tx_handle_rs_completion(tx_q, tx_desc,
+ &cleaned_stats, budget);
+ break;
+ case IDPF_TXD_COMPLT_SW_MARKER:
+ idpf_tx_handle_sw_marker(tx_q);
+ break;
+ default:
+ dev_err(&tx_q->vport->adapter->pdev->dev,
+ "Unknown TX completion type: %d\n",
+ ctype);
+ goto fetch_next_desc;
+ }
+
+ u64_stats_update_begin(&tx_q->stats_sync);
+ u64_stats_add(&tx_q->q_stats.tx.packets, cleaned_stats.packets);
+ u64_stats_add(&tx_q->q_stats.tx.bytes, cleaned_stats.bytes);
+ tx_q->cleaned_pkts += cleaned_stats.packets;
+ tx_q->cleaned_bytes += cleaned_stats.bytes;
+ complq->num_completions++;
+ u64_stats_update_end(&tx_q->stats_sync);
+
+fetch_next_desc:
+ tx_desc++;
+ ntc++;
+ if (unlikely(!ntc)) {
+ ntc -= complq->desc_count;
+ tx_desc = IDPF_SPLITQ_TX_COMPLQ_DESC(complq, 0);
+ change_bit(__IDPF_Q_GEN_CHK, complq->flags);
+ }
+
+ prefetch(tx_desc);
+
+ /* update budget accounting */
+ complq_budget--;
+ } while (likely(complq_budget));
+
+ /* Store the state of the complq to be used later in deciding if a
+ * TXQ can be started again
+ */
+ if (unlikely(IDPF_TX_COMPLQ_PENDING(complq->txq_grp) >
+ IDPF_TX_COMPLQ_OVERFLOW_THRESH(complq)))
+ complq_ok = false;
+
+ np = netdev_priv(complq->vport->netdev);
+ for (i = 0; i < complq->txq_grp->num_txq; ++i) {
+ struct idpf_queue *tx_q = complq->txq_grp->txqs[i];
+ struct netdev_queue *nq;
+ bool dont_wake;
+
+ /* We didn't clean anything on this queue, move along */
+ if (!tx_q->cleaned_bytes)
+ continue;
+
+ *cleaned += tx_q->cleaned_pkts;
+
+ /* Update BQL */
+ nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+
+ dont_wake = !complq_ok || IDPF_TX_BUF_RSV_LOW(tx_q) ||
+ np->state != __IDPF_VPORT_UP ||
+ !netif_carrier_ok(tx_q->vport->netdev);
+ /* Check if the TXQ needs to and can be restarted */
+ __netif_txq_completed_wake(nq, tx_q->cleaned_pkts, tx_q->cleaned_bytes,
+ IDPF_DESC_UNUSED(tx_q), IDPF_TX_WAKE_THRESH,
+ dont_wake);
+
+ /* Reset cleaned stats for the next time this queue is
+ * cleaned
+ */
+ tx_q->cleaned_bytes = 0;
+ tx_q->cleaned_pkts = 0;
+ }
+
+ ntc += complq->desc_count;
+ complq->next_to_clean = ntc;
+
+ return !!complq_budget;
+}
+
+/**
+ * idpf_tx_splitq_build_ctb - populate command tag and size for queue
+ * based scheduling descriptors
+ * @desc: descriptor to populate
+ * @params: pointer to tx params struct
+ * @td_cmd: command to be filled in desc
+ * @size: size of buffer
+ */
+void idpf_tx_splitq_build_ctb(union idpf_tx_flex_desc *desc,
+ struct idpf_tx_splitq_params *params,
+ u16 td_cmd, u16 size)
+{
+ desc->q.qw1.cmd_dtype =
+ cpu_to_le16(params->dtype & IDPF_FLEX_TXD_QW1_DTYPE_M);
+ desc->q.qw1.cmd_dtype |=
+ cpu_to_le16((td_cmd << IDPF_FLEX_TXD_QW1_CMD_S) &
+ IDPF_FLEX_TXD_QW1_CMD_M);
+ desc->q.qw1.buf_size = cpu_to_le16((u16)size);
+ desc->q.qw1.l2tags.l2tag1 = cpu_to_le16(params->td_tag);
+}
+
+/**
+ * idpf_tx_splitq_build_flow_desc - populate command tag and size for flow
+ * scheduling descriptors
+ * @desc: descriptor to populate
+ * @params: pointer to tx params struct
+ * @td_cmd: command to be filled in desc
+ * @size: size of buffer
+ */
+void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
+ struct idpf_tx_splitq_params *params,
+ u16 td_cmd, u16 size)
+{
+ desc->flow.qw1.cmd_dtype = (u16)params->dtype | td_cmd;
+ desc->flow.qw1.rxr_bufsize = cpu_to_le16((u16)size);
+ desc->flow.qw1.compl_tag = cpu_to_le16(params->compl_tag);
+}
+
+/**
+ * idpf_tx_maybe_stop_common - 1st level check for common Tx stop conditions
+ * @tx_q: the queue to be checked
+ * @size: number of descriptors we want to assure is available
+ *
+ * Returns 0 if stop is not needed
+ */
+int idpf_tx_maybe_stop_common(struct idpf_queue *tx_q, unsigned int size)
+{
+ struct netdev_queue *nq;
+
+ if (likely(IDPF_DESC_UNUSED(tx_q) >= size))
+ return 0;
+
+ u64_stats_update_begin(&tx_q->stats_sync);
+ u64_stats_inc(&tx_q->q_stats.tx.q_busy);
+ u64_stats_update_end(&tx_q->stats_sync);
+
+ nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+
+ return netif_txq_maybe_stop(nq, IDPF_DESC_UNUSED(tx_q), size, size);
+}
+
+/**
+ * idpf_tx_maybe_stop_splitq - 1st level check for Tx splitq stop conditions
+ * @tx_q: the queue to be checked
+ * @descs_needed: number of descriptors required for this packet
+ *
+ * Returns 0 if stop is not needed
+ */
+static int idpf_tx_maybe_stop_splitq(struct idpf_queue *tx_q,
+ unsigned int descs_needed)
+{
+ if (idpf_tx_maybe_stop_common(tx_q, descs_needed))
+ goto splitq_stop;
+
+ /* If there are too many outstanding completions expected on the
+ * completion queue, stop the TX queue to give the device some time to
+ * catch up
+ */
+ if (unlikely(IDPF_TX_COMPLQ_PENDING(tx_q->txq_grp) >
+ IDPF_TX_COMPLQ_OVERFLOW_THRESH(tx_q->txq_grp->complq)))
+ goto splitq_stop;
+
+ /* Also check for available bookkeeping buffers; if we are low, stop
+ * the queue to wait for more completions
+ */
+ if (unlikely(IDPF_TX_BUF_RSV_LOW(tx_q)))
+ goto splitq_stop;
+
+ return 0;
+
+splitq_stop:
+ u64_stats_update_begin(&tx_q->stats_sync);
+ u64_stats_inc(&tx_q->q_stats.tx.q_busy);
+ u64_stats_update_end(&tx_q->stats_sync);
+ netif_stop_subqueue(tx_q->vport->netdev, tx_q->idx);
+
+ return -EBUSY;
+}
+
+/**
+ * idpf_tx_buf_hw_update - Store the new tail value
+ * @tx_q: queue to bump
+ * @val: new tail index
+ * @xmit_more: more skb's pending
+ *
+ * The naming here is special in that 'hw' signals that this function is about
+ * to do a register write to update our queue status. We know this can only
+ * mean tail here as HW should be owning head for TX.
+ */
+void idpf_tx_buf_hw_update(struct idpf_queue *tx_q, u32 val,
+ bool xmit_more)
+{
+ struct netdev_queue *nq;
+
+ nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+ tx_q->next_to_use = val;
+
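+ /* Opportunistically stop the queue if fewer than IDPF_TX_DESC_NEEDED
+ * descriptors remain before notifying HW of the new tail.
+ */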
+ idpf_tx_maybe_stop_common(tx_q, IDPF_TX_DESC_NEEDED);
+
+ /* Force memory writes to complete before letting h/w
+ * know there are new descriptors to fetch. (Only
+ * applicable for weak-ordered memory model archs,
+ * such as IA-64).
+ */
+ wmb();
+
+ /* notify HW of packet */
+ if (netif_xmit_stopped(nq) || !xmit_more)
+ writel(val, tx_q->tail);
+}
+
+/**
+ * idpf_tx_desc_count_required - calculate number of Tx descriptors needed
+ * @txq: queue to send buffer on
+ * @skb: send buffer
+ *
+ * Returns number of data descriptors needed for this skb.
+ */
+unsigned int idpf_tx_desc_count_required(struct idpf_queue *txq,
+ struct sk_buff *skb)
+{
+ const struct skb_shared_info *shinfo;
+ unsigned int count = 0, i;
+
+ count += !!skb_headlen(skb);
+
+ if (!skb_is_nonlinear(skb))
+ return count;
+
+ shinfo = skb_shinfo(skb);
+ for (i = 0; i < shinfo->nr_frags; i++) {
+ unsigned int size;
+
+ size = skb_frag_size(&shinfo->frags[i]);
+
+ /* We only need to use the idpf_size_to_txd_count check if the
+ * fragment is going to span multiple descriptors,
+ * i.e. size >= 16K.
+ */
+ if (size >= SZ_16K)
+ count += idpf_size_to_txd_count(size);
+ else
+ count++;
+ }
+
+ if (idpf_chk_linearize(skb, txq->tx_max_bufs, count)) {
+ if (__skb_linearize(skb))
+ return 0;
+
+ count = idpf_size_to_txd_count(skb->len);
+ u64_stats_update_begin(&txq->stats_sync);
+ u64_stats_inc(&txq->q_stats.tx.linearize);
+ u64_stats_update_end(&txq->stats_sync);
+ }
+
+ return count;
+}
+
+/**
+ * idpf_tx_dma_map_error - handle TX DMA map errors
+ * @txq: queue to send buffer on
+ * @skb: send buffer
+ * @first: original first buffer info buffer for packet
+ * @idx: starting point on ring to unwind
+ */
+void idpf_tx_dma_map_error(struct idpf_queue *txq, struct sk_buff *skb,
+ struct idpf_tx_buf *first, u16 idx)
+{
+ u64_stats_update_begin(&txq->stats_sync);
+ u64_stats_inc(&txq->q_stats.tx.dma_map_errs);
+ u64_stats_update_end(&txq->stats_sync);
+
+ /* clear dma mappings for failed tx_buf map */
+ for (;;) {
+ struct idpf_tx_buf *tx_buf;
+
+ tx_buf = &txq->tx_buf[idx];
+ idpf_tx_buf_rel(txq, tx_buf);
+ if (tx_buf == first)
+ break;
+ if (idx == 0)
+ idx = txq->desc_count;
+ idx--;
+ }
+
+ if (skb_is_gso(skb)) {
+ union idpf_tx_flex_desc *tx_desc;
+
+ /* If we failed a DMA mapping for a TSO packet, we will have
+ * used one additional descriptor for a context
+ * descriptor. Reset that here.
+ */
+ tx_desc = IDPF_FLEX_TX_DESC(txq, idx);
+ memset(tx_desc, 0, sizeof(struct idpf_flex_tx_ctx_desc));
+ if (idx == 0)
+ idx = txq->desc_count;
+ idx--;
+ }
+
+ /* Update tail in case netdev_xmit_more was previously true */
+ idpf_tx_buf_hw_update(txq, idx, false);
+}
+
+/**
+ * idpf_tx_splitq_bump_ntu - adjust NTU and generation
+ * @txq: the tx ring to wrap
+ * @ntu: ring index to bump
+ */
+static unsigned int idpf_tx_splitq_bump_ntu(struct idpf_queue *txq, u16 ntu)
+{
+ ntu++;
+
+ if (ntu == txq->desc_count) {
+ ntu = 0;
+ txq->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(txq);
+ }
+
+ return ntu;
+}
+
+/**
+ * idpf_tx_splitq_map - Build the Tx flex descriptor
+ * @tx_q: queue to send buffer on
+ * @params: pointer to splitq params struct
+ * @first: first buffer info buffer to use
+ *
+ * This function loops over the skb data pointed to by *first
+ * and gets a physical address for each memory location and programs
+ * it and the length into the transmit flex descriptor.
+ */
+static void idpf_tx_splitq_map(struct idpf_queue *tx_q,
+ struct idpf_tx_splitq_params *params,
+ struct idpf_tx_buf *first)
+{
+ union idpf_tx_flex_desc *tx_desc;
+ unsigned int data_len, size;
+ struct idpf_tx_buf *tx_buf;
+ u16 i = tx_q->next_to_use;
+ struct netdev_queue *nq;
+ struct sk_buff *skb;
+ skb_frag_t *frag;
+ u16 td_cmd = 0;
+ dma_addr_t dma;
+
+ skb = first->skb;
+
+ td_cmd = params->offload.td_cmd;
+
+ data_len = skb->data_len;
+ size = skb_headlen(skb);
+
+ tx_desc = IDPF_FLEX_TX_DESC(tx_q, i);
+
+ dma = dma_map_single(tx_q->dev, skb->data, size, DMA_TO_DEVICE);
+
+ tx_buf = first;
+
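+ /* The completion tag combines the current tag generation (upper bits)
+ * with the packet's starting ring index (lower bits) so the buffers can
+ * be located again when the out-of-order completion arrives.
+ */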
+ params->compl_tag =
+ (tx_q->compl_tag_cur_gen << tx_q->compl_tag_gen_s) | i;
+
+ for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+ unsigned int max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
+
+ if (dma_mapping_error(tx_q->dev, dma))
+ return idpf_tx_dma_map_error(tx_q, skb, first, i);
+
+ tx_buf->compl_tag = params->compl_tag;
+
+ /* record length, and DMA address */
+ dma_unmap_len_set(tx_buf, len, size);
+ dma_unmap_addr_set(tx_buf, dma, dma);
+
+ /* buf_addr is in same location for both desc types */
+ tx_desc->q.buf_addr = cpu_to_le64(dma);
+
+ /* The stack can send us fragments that are too large for a
+ * single descriptor i.e. frag size > 16K-1. We will need to
+ * split the fragment across multiple descriptors in this case.
+ * To adhere to HW alignment restrictions, the fragment needs
+ * to be split such that the first chunk ends on a 4K boundary
+ * and all subsequent chunks start on a 4K boundary. We still
+ * want to send as much data as possible though, so our
+ * intermediate descriptor chunk size will be 12K.
+ *
+ * For example, consider a 32K fragment mapped to DMA addr 2600.
+ * ------------------------------------------------------------
+ * | frag_size = 32K |
+ * ------------------------------------------------------------
+ * |2600 |16384 |28672
+ *
+ * 3 descriptors will be used for this fragment. The HW expects
+ * the descriptors to contain the following:
+ * ------------------------------------------------------------
+ * | size = 13784 | size = 12K | size = 6696 |
+ * | dma = 2600 | dma = 16384 | dma = 28672 |
+ * ------------------------------------------------------------
+ *
+ * We need to first adjust the max_data for the first chunk so
+ * that it ends on a 4K boundary. By negating the value of the
+ * DMA address and taking only the low order bits, we're
+ * effectively calculating
+ * 4K - (DMA addr lower order bits) =
+ * bytes to next boundary.
+ *
+ * Add that to our base aligned max_data (12K) and we have
+ * our first chunk size. In the example above,
+ * 13784 = 12K + (4096-2600)
+ *
+ * After guaranteeing the first chunk ends on a 4K boundary, we
+ * will give the intermediate descriptors 12K chunks and
+ * whatever is left to the final descriptor. This ensures that
+ * all descriptors used for the remaining chunks of the
+ * fragment start on a 4K boundary and we use as few
+ * descriptors as possible.
+ */
+ max_data += -dma & (IDPF_TX_MAX_READ_REQ_SIZE - 1);
+ while (unlikely(size > IDPF_TX_MAX_DESC_DATA)) {
+ idpf_tx_splitq_build_desc(tx_desc, params, td_cmd,
+ max_data);
+
+ tx_desc++;
+ i++;
+
+ if (i == tx_q->desc_count) {
+ tx_desc = IDPF_FLEX_TX_DESC(tx_q, 0);
+ i = 0;
+ tx_q->compl_tag_cur_gen =
+ IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q);
+ }
+
+ /* Since this packet has a buffer that is going to span
+ * multiple descriptors, it's going to leave holes in
+ * the TX buffer ring. To ensure these holes do not
+ * cause issues in the cleaning routines, we will clear
+ * them of any stale data and assign them the same
+ * completion tag as the current packet. Then when the
+ * packet is being cleaned, the cleaning routines will
+ * simply pass over these holes and finish cleaning the
+ * rest of the packet.
+ */
+ memset(&tx_q->tx_buf[i], 0, sizeof(struct idpf_tx_buf));
+ tx_q->tx_buf[i].compl_tag = params->compl_tag;
+
+ /* Adjust the DMA offset and the remaining size of the
+ * fragment. On the first iteration of this loop,
+ * max_data will be >= 12K and <= 16K-1. On any
+ * subsequent iteration of this loop, max_data will
+ * always be 12K.
+ */
+ dma += max_data;
+ size -= max_data;
+
+ /* Reset max_data since remaining chunks will be 12K
+ * at most
+ */
+ max_data = IDPF_TX_MAX_DESC_DATA_ALIGNED;
+
+ /* buf_addr is in same location for both desc types */
+ tx_desc->q.buf_addr = cpu_to_le64(dma);
+ }
+
+ if (!data_len)
+ break;
+
+ idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size);
+ tx_desc++;
+ i++;
+
+ if (i == tx_q->desc_count) {
+ tx_desc = IDPF_FLEX_TX_DESC(tx_q, 0);
+ i = 0;
+ tx_q->compl_tag_cur_gen = IDPF_TX_ADJ_COMPL_TAG_GEN(tx_q);
+ }
+
+ size = skb_frag_size(frag);
+ data_len -= size;
+
+ dma = skb_frag_dma_map(tx_q->dev, frag, 0, size,
+ DMA_TO_DEVICE);
+
+ tx_buf = &tx_q->tx_buf[i];
+ }
+
+ /* record SW timestamp if HW timestamp is not available */
+ skb_tx_timestamp(skb);
+
+ /* write last descriptor with RS and EOP bits */
+ td_cmd |= params->eop_cmd;
+ idpf_tx_splitq_build_desc(tx_desc, params, td_cmd, size);
+ i = idpf_tx_splitq_bump_ntu(tx_q, i);
+
+ /* set next_to_watch value indicating a packet is present */
+ first->next_to_watch = tx_desc;
+
+ tx_q->txq_grp->num_completions_pending++;
+
+ /* record bytecount for BQL */
+ nq = netdev_get_tx_queue(tx_q->vport->netdev, tx_q->idx);
+ netdev_tx_sent_queue(nq, first->bytecount);
+
+ idpf_tx_buf_hw_update(tx_q, i, netdev_xmit_more());
+}
+
+/**
+ * idpf_tso - computes mss and TSO length to prepare for TSO
+ * @skb: pointer to skb
+ * @off: pointer to struct that holds offload parameters
+ *
+ * Returns error (negative) if TSO was requested but cannot be applied to the
+ * given skb, 0 if TSO does not apply to the given skb, or 1 otherwise.
+ */
+int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off)
+{
+ const struct skb_shared_info *shinfo = skb_shinfo(skb);
+ union {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+ } ip;
+ union {
+ struct tcphdr *tcp;
+ struct udphdr *udp;
+ unsigned char *hdr;
+ } l4;
+ u32 paylen, l4_start;
+ int err;
+
+ if (!shinfo->gso_size)
+ return 0;
+
+ err = skb_cow_head(skb, 0);
+ if (err < 0)
+ return err;
+
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_transport_header(skb);
+
+ /* initialize outer IP header fields */
+ if (ip.v4->version == 4) {
+ ip.v4->tot_len = 0;
+ ip.v4->check = 0;
+ } else if (ip.v6->version == 6) {
+ ip.v6->payload_len = 0;
+ }
+
+ l4_start = skb_transport_offset(skb);
+
+ /* remove payload length from checksum */
+ paylen = skb->len - l4_start;
+
+ switch (shinfo->gso_type & ~SKB_GSO_DODGY) {
+ case SKB_GSO_TCPV4:
+ case SKB_GSO_TCPV6:
+ csum_replace_by_diff(&l4.tcp->check,
+ (__force __wsum)htonl(paylen));
+ off->tso_hdr_len = __tcp_hdrlen(l4.tcp) + l4_start;
+ break;
+ case SKB_GSO_UDP_L4:
+ csum_replace_by_diff(&l4.udp->check,
+ (__force __wsum)htonl(paylen));
+ /* compute length of segmentation header */
+ off->tso_hdr_len = sizeof(struct udphdr) + l4_start;
+ l4.udp->len = htons(shinfo->gso_size + sizeof(struct udphdr));
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ off->tso_len = skb->len - off->tso_hdr_len;
+ off->mss = shinfo->gso_size;
+ off->tso_segs = shinfo->gso_segs;
+
+ off->tx_flags |= IDPF_TX_FLAGS_TSO;
+
+ return 1;
+}
+
+/**
+ * __idpf_chk_linearize - Check skb is not using too many buffers
+ * @skb: send buffer
+ * @max_bufs: maximum number of buffers
+ *
+ * For TSO we need to count the TSO header and segment payload separately. As
+ * such we need to check cases where we have max_bufs-1 fragments or more as we
+ * can potentially require max_bufs+1 DMA transactions, 1 for the TSO header, 1
+ * for the segment payload in the first descriptor, and another max_bufs-1 for
+ * the fragments.
+ */
+static bool __idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs)
+{
+ const struct skb_shared_info *shinfo = skb_shinfo(skb);
+ const skb_frag_t *frag, *stale;
+ int nr_frags, sum;
+
+ /* no need to check if number of frags is less than max_bufs - 1 */
+ nr_frags = shinfo->nr_frags;
+ if (nr_frags < (max_bufs - 1))
+ return false;
+
+ /* We need to walk through the list and validate that each group
+ * of max_bufs-2 fragments totals at least gso_size.
+ */
+ nr_frags -= max_bufs - 2;
+ frag = &shinfo->frags[0];
+
+ /* Initialize sum to 1 - gso_size, i.e. -(gso_size - 1). We use
+ * this as the worst case scenario in which the frag ahead of us only
+ * provides one byte which is why we are limited to max_bufs-2
+ * descriptors for a single transmit as the header and previous
+ * fragment are already consuming 2 descriptors.
+ */
+ sum = 1 - shinfo->gso_size;
+
+ /* Add size of frags 0 through 4 to create our initial sum */
+ sum += skb_frag_size(frag++);
+ sum += skb_frag_size(frag++);
+ sum += skb_frag_size(frag++);
+ sum += skb_frag_size(frag++);
+ sum += skb_frag_size(frag++);
+
+ /* Walk through fragments adding latest fragment, testing it, and
+ * then removing stale fragments from the sum.
+ */
+ for (stale = &shinfo->frags[0];; stale++) {
+ int stale_size = skb_frag_size(stale);
+
+ sum += skb_frag_size(frag++);
+
+ /* The stale fragment may present us with a smaller
+ * descriptor than the actual fragment size. To account
+ * for that we need to remove all the data on the front and
+ * figure out what the remainder would be in the last
+ * descriptor associated with the fragment.
+ */
+ if (stale_size > IDPF_TX_MAX_DESC_DATA) {
+ int align_pad = -(skb_frag_off(stale)) &
+ (IDPF_TX_MAX_READ_REQ_SIZE - 1);
+
+ sum -= align_pad;
+ stale_size -= align_pad;
+
+ do {
+ sum -= IDPF_TX_MAX_DESC_DATA_ALIGNED;
+ stale_size -= IDPF_TX_MAX_DESC_DATA_ALIGNED;
+ } while (stale_size > IDPF_TX_MAX_DESC_DATA);
+ }
+
+ /* if sum is negative we failed to make sufficient progress */
+ if (sum < 0)
+ return true;
+
+ if (!nr_frags--)
+ break;
+
+ sum -= stale_size;
+ }
+
+ return false;
+}
+
+/**
+ * idpf_chk_linearize - Check if skb exceeds max descriptors per packet
+ * @skb: send buffer
+ * @max_bufs: maximum scatter gather buffers for single packet
+ * @count: number of buffers this packet needs
+ *
+ * Make sure we don't exceed maximum scatter gather buffers for a single
+ * packet. We have to do some special checking around the boundary (max_bufs-1)
+ * if TSO is on since we need to count the TSO header and payload separately.
+ * E.g.: a packet with 7 fragments can require 9 DMA transactions; 1 for TSO
+ * header, 1 for segment payload, and then 7 for the fragments.
+ */
+bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
+ unsigned int count)
+{
+ if (likely(count < max_bufs))
+ return false;
+ if (skb_is_gso(skb))
+ return __idpf_chk_linearize(skb, max_bufs);
+
+ return count > max_bufs;
+}
+
+/**
+ * idpf_tx_splitq_get_ctx_desc - grab next desc and update buffer ring
+ * @txq: queue to put context descriptor on
+ *
+ * Since the TX buffer ring mimics the descriptor ring, update the tx buffer
+ * ring entry to reflect that this index is a context descriptor
+ */
+static struct idpf_flex_tx_ctx_desc *
+idpf_tx_splitq_get_ctx_desc(struct idpf_queue *txq)
+{
+ struct idpf_flex_tx_ctx_desc *desc;
+ int i = txq->next_to_use;
+
+ memset(&txq->tx_buf[i], 0, sizeof(struct idpf_tx_buf));
+ txq->tx_buf[i].compl_tag = IDPF_SPLITQ_TX_INVAL_COMPL_TAG;
+
+ /* grab the next descriptor */
+ desc = IDPF_FLEX_TX_CTX_DESC(txq, i);
+ txq->next_to_use = idpf_tx_splitq_bump_ntu(txq, i);
+
+ return desc;
+}
+
+/**
+ * idpf_tx_drop_skb - free the SKB and bump tail if necessary
+ * @tx_q: queue to send buffer on
+ * @skb: pointer to skb
+ */
+netdev_tx_t idpf_tx_drop_skb(struct idpf_queue *tx_q, struct sk_buff *skb)
+{
+ u64_stats_update_begin(&tx_q->stats_sync);
+ u64_stats_inc(&tx_q->q_stats.tx.skb_drops);
+ u64_stats_update_end(&tx_q->stats_sync);
+
+ idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+
+ dev_kfree_skb(skb);
+
+ return NETDEV_TX_OK;
+}
+
+/**
+ * idpf_tx_splitq_frame - Sends buffer on Tx ring using flex descriptors
+ * @skb: send buffer
+ * @tx_q: queue to send buffer on
+ *
+ * Returns NETDEV_TX_OK if sent, else an error code
+ */
+static netdev_tx_t idpf_tx_splitq_frame(struct sk_buff *skb,
+ struct idpf_queue *tx_q)
+{
+ struct idpf_tx_splitq_params tx_params = { };
+ struct idpf_tx_buf *first;
+ unsigned int count;
+ int tso;
+
+ count = idpf_tx_desc_count_required(tx_q, skb);
+ if (unlikely(!count))
+ return idpf_tx_drop_skb(tx_q, skb);
+
+ tso = idpf_tso(skb, &tx_params.offload);
+ if (unlikely(tso < 0))
+ return idpf_tx_drop_skb(tx_q, skb);
+
+ /* Check for splitq specific TX resources */
+ count += (IDPF_TX_DESCS_PER_CACHE_LINE + tso);
+ if (idpf_tx_maybe_stop_splitq(tx_q, count)) {
+ idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+
+ return NETDEV_TX_BUSY;
+ }
+
+ if (tso) {
+ /* If tso is needed, set up context desc */
+ struct idpf_flex_tx_ctx_desc *ctx_desc =
+ idpf_tx_splitq_get_ctx_desc(tx_q);
+
+ ctx_desc->tso.qw1.cmd_dtype =
+ cpu_to_le16(IDPF_TX_DESC_DTYPE_FLEX_TSO_CTX |
+ IDPF_TX_FLEX_CTX_DESC_CMD_TSO);
+ ctx_desc->tso.qw0.flex_tlen =
+ cpu_to_le32(tx_params.offload.tso_len &
+ IDPF_TXD_FLEX_CTX_TLEN_M);
+ ctx_desc->tso.qw0.mss_rt =
+ cpu_to_le16(tx_params.offload.mss &
+ IDPF_TXD_FLEX_CTX_MSS_RT_M);
+ ctx_desc->tso.qw0.hdr_len = tx_params.offload.tso_hdr_len;
+
+ u64_stats_update_begin(&tx_q->stats_sync);
+ u64_stats_inc(&tx_q->q_stats.tx.lso_pkts);
+ u64_stats_update_end(&tx_q->stats_sync);
+ }
+
+ /* record the location of the first descriptor for this packet */
+ first = &tx_q->tx_buf[tx_q->next_to_use];
+ first->skb = skb;
+
+ if (tso) {
+ first->gso_segs = tx_params.offload.tso_segs;
+ first->bytecount = skb->len +
+ ((first->gso_segs - 1) * tx_params.offload.tso_hdr_len);
+ } else {
+ first->gso_segs = 1;
+ first->bytecount = max_t(unsigned int, skb->len, ETH_ZLEN);
+ }
+
+ if (test_bit(__IDPF_Q_FLOW_SCH_EN, tx_q->flags)) {
+ tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_FLOW_SCHE;
+ tx_params.eop_cmd = IDPF_TXD_FLEX_FLOW_CMD_EOP;
+ /* Set the RE bit to catch any packets that may have not been
+ * stashed during RS completion cleaning. MIN_GAP is set to
+ * MIN_RING size to ensure it will be set at least once each
+ * time around the ring.
+ */
+ if (!(tx_q->next_to_use % IDPF_TX_SPLITQ_RE_MIN_GAP)) {
+ tx_params.eop_cmd |= IDPF_TXD_FLEX_FLOW_CMD_RE;
+ tx_q->txq_grp->num_completions_pending++;
+ }
+
+ if (skb->ip_summed == CHECKSUM_PARTIAL)
+ tx_params.offload.td_cmd |= IDPF_TXD_FLEX_FLOW_CMD_CS_EN;
+
+ } else {
+ tx_params.dtype = IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2;
+ tx_params.eop_cmd = IDPF_TXD_LAST_DESC_CMD;
+
+ if (skb->ip_summed == CHECKSUM_PARTIAL)
+ tx_params.offload.td_cmd |= IDPF_TX_FLEX_DESC_CMD_CS_EN;
+ }
+
+ idpf_tx_splitq_map(tx_q, &tx_params, first);
+
+ return NETDEV_TX_OK;
+}
+
+/**
+ * idpf_tx_splitq_start - Selects the right Tx queue to send buffer
+ * @skb: send buffer
+ * @netdev: network interface device structure
+ *
+ * Returns NETDEV_TX_OK if sent, else an error code
+ */
+netdev_tx_t idpf_tx_splitq_start(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ struct idpf_vport *vport = idpf_netdev_to_vport(netdev);
+ struct idpf_queue *tx_q;
+
+ if (unlikely(skb_get_queue_mapping(skb) >= vport->num_txq)) {
+ dev_kfree_skb_any(skb);
+
+ return NETDEV_TX_OK;
+ }
+
+ tx_q = vport->txqs[skb_get_queue_mapping(skb)];
+
+ /* hardware can't handle really short frames, hardware padding works
+ * beyond this point
+ */
+ if (skb_put_padto(skb, tx_q->tx_min_pkt_len)) {
+ idpf_tx_buf_hw_update(tx_q, tx_q->next_to_use, false);
+
+ return NETDEV_TX_OK;
+ }
+
+ return idpf_tx_splitq_frame(skb, tx_q);
+}
+
+/**
+ * idpf_ptype_to_htype - get a hash type
+ * @decoded: Decoded Rx packet type related fields
+ *
+ * Returns appropriate hash type (such as PKT_HASH_TYPE_L2/L3/L4) to be used by
+ * skb_set_hash based on the PTYPE parsed by the HW Rx pipeline, which is
+ * part of the Rx descriptor.
+ */
+enum pkt_hash_types idpf_ptype_to_htype(const struct idpf_rx_ptype_decoded *decoded)
+{
+ if (!decoded->known)
+ return PKT_HASH_TYPE_NONE;
+ if (decoded->payload_layer == IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY2 &&
+ decoded->inner_prot)
+ return PKT_HASH_TYPE_L4;
+ if (decoded->payload_layer == IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY2 &&
+ decoded->outer_ip)
+ return PKT_HASH_TYPE_L3;
+ if (decoded->outer_ip == IDPF_RX_PTYPE_OUTER_L2)
+ return PKT_HASH_TYPE_L2;
+
+ return PKT_HASH_TYPE_NONE;
+}
+
+/**
+ * idpf_rx_hash - set the hash value in the skb
+ * @rxq: Rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being populated
+ * @rx_desc: Receive descriptor
+ * @decoded: Decoded Rx packet type related fields
+ */
+static void idpf_rx_hash(struct idpf_queue *rxq, struct sk_buff *skb,
+ struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
+ struct idpf_rx_ptype_decoded *decoded)
+{
+ u32 hash;
+
+ if (unlikely(!idpf_is_feature_ena(rxq->vport, NETIF_F_RXHASH)))
+ return;
+
+ hash = le16_to_cpu(rx_desc->hash1) |
+ (rx_desc->ff2_mirrid_hash2.hash2 << 16) |
+ (rx_desc->hash3 << 24);
+
+ skb_set_hash(skb, hash, idpf_ptype_to_htype(decoded));
+}
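+
+/* Illustrative note (not part of the driver logic): the 32-bit hash handed to
+ * skb_set_hash() above is assembled from three descriptor fields:
+ *
+ *   bits 15:0  - hash1 (16-bit, little endian in the descriptor)
+ *   bits 23:16 - hash2 (8-bit)
+ *   bits 31:24 - hash3 (8-bit)
+ */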
+
+/**
+ * idpf_rx_csum - Indicate in skb if checksum is good
+ * @rxq: Rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being populated
+ * @csum_bits: checksum fields extracted from the descriptor
+ * @decoded: Decoded Rx packet type related fields
+ *
+ * skb->protocol must be set before this function is called
+ */
+static void idpf_rx_csum(struct idpf_queue *rxq, struct sk_buff *skb,
+ struct idpf_rx_csum_decoded *csum_bits,
+ struct idpf_rx_ptype_decoded *decoded)
+{
+ bool ipv4, ipv6;
+
+ /* check if Rx checksum is enabled */
+ if (unlikely(!idpf_is_feature_ena(rxq->vport, NETIF_F_RXCSUM)))
+ return;
+
+ /* check if HW has decoded the packet and checksum */
+ if (!(csum_bits->l3l4p))
+ return;
+
+ ipv4 = IDPF_RX_PTYPE_TO_IPV(decoded, IDPF_RX_PTYPE_OUTER_IPV4);
+ ipv6 = IDPF_RX_PTYPE_TO_IPV(decoded, IDPF_RX_PTYPE_OUTER_IPV6);
+
+ if (ipv4 && (csum_bits->ipe || csum_bits->eipe))
+ goto checksum_fail;
+
+ if (ipv6 && csum_bits->ipv6exadd)
+ return;
+
+ /* check for L4 errors and handle packets that were not able to be
+ * checksummed
+ */
+ if (csum_bits->l4e)
+ goto checksum_fail;
+
+ /* Only report checksum unnecessary for ICMP, TCP, UDP, or SCTP */
+ switch (decoded->inner_prot) {
+ case IDPF_RX_PTYPE_INNER_PROT_ICMP:
+ case IDPF_RX_PTYPE_INNER_PROT_TCP:
+ case IDPF_RX_PTYPE_INNER_PROT_UDP:
+ if (!csum_bits->raw_csum_inv) {
+ u16 csum = csum_bits->raw_csum;
+
+ skb->csum = csum_unfold((__force __sum16)~swab16(csum));
+ skb->ip_summed = CHECKSUM_COMPLETE;
+ } else {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ }
+ break;
+ case IDPF_RX_PTYPE_INNER_PROT_SCTP:
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ break;
+ default:
+ break;
+ }
+
+ return;
+
+checksum_fail:
+ u64_stats_update_begin(&rxq->stats_sync);
+ u64_stats_inc(&rxq->q_stats.rx.hw_csum_err);
+ u64_stats_update_end(&rxq->stats_sync);
+}
+
+/**
+ * idpf_rx_splitq_extract_csum_bits - Extract checksum bits from descriptor
+ * @rx_desc: receive descriptor
+ * @csum: structure to extract checksum fields
+ */
+static void idpf_rx_splitq_extract_csum_bits(struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
+ struct idpf_rx_csum_decoded *csum)
+{
+ u8 qword0, qword1;
+
+ qword0 = rx_desc->status_err0_qw0;
+ qword1 = rx_desc->status_err0_qw1;
+
+ csum->ipe = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_M,
+ qword1);
+ csum->eipe = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_M,
+ qword1);
+ csum->l4e = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_M,
+ qword1);
+ csum->l3l4p = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_M,
+ qword1);
+ csum->ipv6exadd = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_M,
+ qword0);
+ csum->raw_csum_inv = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_RAW_CSUM_INV_M,
+ le16_to_cpu(rx_desc->ptype_err_fflags0));
+ csum->raw_csum = le16_to_cpu(rx_desc->misc.raw_cs);
+}
+
+/**
+ * idpf_rx_rsc - Set the RSC fields in the skb
+ * @rxq: Rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being populated
+ * @rx_desc: Receive descriptor
+ * @decoded: Decoded Rx packet type related fields
+ *
+ * Return 0 on success and error code on failure
+ *
+ * Populate the skb fields with the total number of RSC segments, RSC payload
+ * length and packet type.
+ */
+static int idpf_rx_rsc(struct idpf_queue *rxq, struct sk_buff *skb,
+ struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc,
+ struct idpf_rx_ptype_decoded *decoded)
+{
+ u16 rsc_segments, rsc_seg_len;
+ bool ipv4, ipv6;
+ int len;
+
+ if (unlikely(!decoded->outer_ip))
+ return -EINVAL;
+
+ rsc_seg_len = le16_to_cpu(rx_desc->misc.rscseglen);
+ if (unlikely(!rsc_seg_len))
+ return -EINVAL;
+
+ ipv4 = IDPF_RX_PTYPE_TO_IPV(decoded, IDPF_RX_PTYPE_OUTER_IPV4);
+ ipv6 = IDPF_RX_PTYPE_TO_IPV(decoded, IDPF_RX_PTYPE_OUTER_IPV6);
+
+ if (unlikely(!(ipv4 ^ ipv6)))
+ return -EINVAL;
+
+ rsc_segments = DIV_ROUND_UP(skb->data_len, rsc_seg_len);
+ if (unlikely(rsc_segments == 1))
+ return 0;
+
+ NAPI_GRO_CB(skb)->count = rsc_segments;
+ skb_shinfo(skb)->gso_size = rsc_seg_len;
+
+ skb_reset_network_header(skb);
+ len = skb->len - skb_transport_offset(skb);
+
+ if (ipv4) {
+ struct iphdr *ipv4h = ip_hdr(skb);
+
+ skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4;
+
+ /* Reset and set transport header offset in skb */
+ skb_set_transport_header(skb, sizeof(struct iphdr));
+
+ /* Compute the TCP pseudo header checksum */
+ tcp_hdr(skb)->check =
+ ~tcp_v4_check(len, ipv4h->saddr, ipv4h->daddr, 0);
+ } else {
+ struct ipv6hdr *ipv6h = ipv6_hdr(skb);
+
+ skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
+ skb_set_transport_header(skb, sizeof(struct ipv6hdr));
+ tcp_hdr(skb)->check =
+ ~tcp_v6_check(len, &ipv6h->saddr, &ipv6h->daddr, 0);
+ }
+
+ tcp_gro_complete(skb);
+
+ u64_stats_update_begin(&rxq->stats_sync);
+ u64_stats_inc(&rxq->q_stats.rx.rsc_pkts);
+ u64_stats_update_end(&rxq->stats_sync);
+
+ return 0;
+}
+
+/**
+ * idpf_rx_process_skb_fields - Populate skb header fields from Rx descriptor
+ * @rxq: Rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being populated
+ * @rx_desc: Receive descriptor
+ *
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, protocol, and
+ * other fields within the skb.
+ */
+static int idpf_rx_process_skb_fields(struct idpf_queue *rxq,
+ struct sk_buff *skb,
+ struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+{
+ struct idpf_rx_csum_decoded csum_bits = { };
+ struct idpf_rx_ptype_decoded decoded;
+ u16 rx_ptype;
+
+ rx_ptype = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M,
+ le16_to_cpu(rx_desc->ptype_err_fflags0));
+
+ decoded = rxq->vport->rx_ptype_lkup[rx_ptype];
+ /* If we don't know the ptype we can't do anything else with it. Just
+ * pass it up the stack as-is.
+ */
+ if (!decoded.known)
+ return 0;
+
+ /* process RSS/hash */
+ idpf_rx_hash(rxq, skb, rx_desc, &decoded);
+
+ skb->protocol = eth_type_trans(skb, rxq->vport->netdev);
+
+ if (FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M,
+ le16_to_cpu(rx_desc->hdrlen_flags)))
+ return idpf_rx_rsc(rxq, skb, rx_desc, &decoded);
+
+ idpf_rx_splitq_extract_csum_bits(rx_desc, &csum_bits);
+ idpf_rx_csum(rxq, skb, &csum_bits, &decoded);
+
+ return 0;
+}
+
+/**
+ * idpf_rx_add_frag - Add contents of Rx buffer to sk_buff as a frag
+ * @rx_buf: buffer containing page to add
+ * @skb: sk_buff to place the data into
+ * @size: packet length from rx_desc
+ *
+ * This function will add the data contained in rx_buf->page to the skb.
+ * It will just attach the page as a frag to the skb.
+ * The function will then update the page offset.
+ */
+void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb,
+ unsigned int size)
+{
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page,
+ rx_buf->page_offset, size, rx_buf->truesize);
+
+ rx_buf->page = NULL;
+}
+
+/**
+ * idpf_rx_construct_skb - Allocate skb and populate it
+ * @rxq: Rx descriptor queue
+ * @rx_buf: Rx buffer to pull data from
+ * @size: the length of the packet
+ *
+ * This function allocates an skb. It then populates it with the page
+ * data from the current receive descriptor, taking care to set up the
+ * skb correctly.
+ */
+struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq,
+ struct idpf_rx_buf *rx_buf,
+ unsigned int size)
+{
+ unsigned int headlen;
+ struct sk_buff *skb;
+ void *va;
+
+ va = page_address(rx_buf->page) + rx_buf->page_offset;
+
+ /* prefetch first cache line of first page */
+ net_prefetch(va);
+ /* allocate a skb to store the frags */
+ skb = __napi_alloc_skb(&rxq->q_vector->napi, IDPF_RX_HDR_SIZE,
+ GFP_ATOMIC);
+ if (unlikely(!skb)) {
+ idpf_rx_put_page(rx_buf);
+
+ return NULL;
+ }
+
+ skb_record_rx_queue(skb, rxq->idx);
+ skb_mark_for_recycle(skb);
+
+ /* Determine available headroom for copy */
+ headlen = size;
+ if (headlen > IDPF_RX_HDR_SIZE)
+ headlen = eth_get_headlen(skb->dev, va, IDPF_RX_HDR_SIZE);
+
+ /* align pull length to size of long to optimize memcpy performance */
+ memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long)));
+
+ /* if we exhaust the linear part then add what is left as a frag */
+ size -= headlen;
+ if (!size) {
+ idpf_rx_put_page(rx_buf);
+
+ return skb;
+ }
+
+ skb_add_rx_frag(skb, 0, rx_buf->page, rx_buf->page_offset + headlen,
+ size, rx_buf->truesize);
+
+ /* Since we're giving the page to the stack, clear our reference to it.
+ * We'll get a new one during buffer posting.
+ */
+ rx_buf->page = NULL;
+
+ return skb;
+}
+
+/**
+ * idpf_rx_hdr_construct_skb - Allocate skb and populate it from header buffer
+ * @rxq: Rx descriptor queue
+ * @va: Rx buffer to pull data from
+ * @size: the length of the packet
+ *
+ * This function allocates an skb. It then populates it with the page data from
+ * the current receive descriptor, taking care to set up the skb correctly.
+ * This specifically uses a header buffer to start building the skb.
+ */
+static struct sk_buff *idpf_rx_hdr_construct_skb(struct idpf_queue *rxq,
+ const void *va,
+ unsigned int size)
+{
+ struct sk_buff *skb;
+
+ /* allocate a skb to store the frags */
+ skb = __napi_alloc_skb(&rxq->q_vector->napi, size, GFP_ATOMIC);
+ if (unlikely(!skb))
+ return NULL;
+
+ skb_record_rx_queue(skb, rxq->idx);
+
+ memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
+
+ /* More than likely, a payload fragment, which will use a page from the
+ * page_pool, will be added to the SKB, so mark it for recycle
+ * preemptively. If not, it's inconsequential.
+ */
+ skb_mark_for_recycle(skb);
+
+ return skb;
+}
+
+/**
+ * idpf_rx_splitq_test_staterr - tests bits in Rx descriptor
+ * status and error fields
+ * @stat_err_field: field from descriptor to test bits in
+ * @stat_err_bits: value to mask
+ *
+ * Returns true if any of the given status/error bits are set in the field.
+ */
+static bool idpf_rx_splitq_test_staterr(const u8 stat_err_field,
+ const u8 stat_err_bits)
+{
+ return !!(stat_err_field & stat_err_bits);
+}
+
+/**
+ * idpf_rx_splitq_is_eop - process handling of EOP buffers
+ * @rx_desc: Rx descriptor for current buffer
+ *
+ * If the buffer is an EOP buffer, this function exits returning true,
+ * otherwise return false indicating that this is in fact a non-EOP buffer.
+ */
+static bool idpf_rx_splitq_is_eop(struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc)
+{
+ /* if we are the last buffer then there is nothing else to do */
+ return likely(idpf_rx_splitq_test_staterr(rx_desc->status_err0_qw1,
+ IDPF_RXD_EOF_SPLITQ));
+}
+
+/**
+ * idpf_rx_splitq_clean - Clean completed descriptors from Rx queue
+ * @rxq: Rx descriptor queue to retrieve receive buffer queue
+ * @budget: Total limit on number of packets to process
+ *
+ * This function provides a "bounce buffer" approach to Rx interrupt
+ * processing. The advantage to this is that on systems that have
+ * expensive overhead for IOMMU access this provides a means of avoiding
+ * it by maintaining the mapping of the page to the system.
+ *
+ * Returns amount of work completed
+ */
+static int idpf_rx_splitq_clean(struct idpf_queue *rxq, int budget)
+{
+ int total_rx_bytes = 0, total_rx_pkts = 0;
+ struct idpf_queue *rx_bufq = NULL;
+ struct sk_buff *skb = rxq->skb;
+ u16 ntc = rxq->next_to_clean;
+
+ /* Process Rx packets bounded by budget */
+ while (likely(total_rx_pkts < budget)) {
+ struct virtchnl2_rx_flex_desc_adv_nic_3 *rx_desc;
+ struct idpf_sw_queue *refillq = NULL;
+ struct idpf_rxq_set *rxq_set = NULL;
+ struct idpf_rx_buf *rx_buf = NULL;
+ union virtchnl2_rx_desc *desc;
+ unsigned int pkt_len = 0;
+ unsigned int hdr_len = 0;
+ u16 gen_id, buf_id = 0;
+ /* Header buffer overflow only valid for header split */
+ bool hbo = false;
+ int bufq_id;
+ u8 rxdid;
+
+ /* get the Rx desc from Rx queue based on 'next_to_clean' */
+ desc = IDPF_RX_DESC(rxq, ntc);
+ rx_desc = (struct virtchnl2_rx_flex_desc_adv_nic_3 *)desc;
+
+ /* This memory barrier is needed to keep us from reading
+ * any other fields out of the rx_desc
+ */
+ dma_rmb();
+
+ /* if the descriptor isn't done, no work yet to do */
+ gen_id = le16_to_cpu(rx_desc->pktlen_gen_bufq_id);
+ gen_id = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M, gen_id);
+
+ if (test_bit(__IDPF_Q_GEN_CHK, rxq->flags) != gen_id)
+ break;
+
+ rxdid = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M,
+ rx_desc->rxdid_ucast);
+ if (rxdid != VIRTCHNL2_RXDID_2_FLEX_SPLITQ) {
+ IDPF_RX_BUMP_NTC(rxq, ntc);
+ u64_stats_update_begin(&rxq->stats_sync);
+ u64_stats_inc(&rxq->q_stats.rx.bad_descs);
+ u64_stats_update_end(&rxq->stats_sync);
+ continue;
+ }
+
+ pkt_len = le16_to_cpu(rx_desc->pktlen_gen_bufq_id);
+ pkt_len = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M,
+ pkt_len);
+
+ hbo = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_M,
+ rx_desc->status_err0_qw1);
+
+ if (unlikely(hbo)) {
+ /* If a header buffer overflow occurs, i.e. the header is
+ * too large to fit in the header split buffer, HW will
+ * put the entire packet, including headers, in the
+ * data/payload buffer.
+ */
+ u64_stats_update_begin(&rxq->stats_sync);
+ u64_stats_inc(&rxq->q_stats.rx.hsplit_buf_ovf);
+ u64_stats_update_end(&rxq->stats_sync);
+ goto bypass_hsplit;
+ }
+
+ hdr_len = le16_to_cpu(rx_desc->hdrlen_flags);
+ hdr_len = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M,
+ hdr_len);
+
+bypass_hsplit:
+ bufq_id = le16_to_cpu(rx_desc->pktlen_gen_bufq_id);
+ bufq_id = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M,
+ bufq_id);
+
+ rxq_set = container_of(rxq, struct idpf_rxq_set, rxq);
+ if (!bufq_id)
+ refillq = rxq_set->refillq0;
+ else
+ refillq = rxq_set->refillq1;
+
+ /* retrieve buffer from the rxq */
+ rx_bufq = &rxq->rxq_grp->splitq.bufq_sets[bufq_id].bufq;
+
+ buf_id = le16_to_cpu(rx_desc->buf_id);
+
+ rx_buf = &rx_bufq->rx_buf.buf[buf_id];
+
+ if (hdr_len) {
+ const void *va = (u8 *)rx_bufq->rx_buf.hdr_buf_va +
+ (u32)buf_id * IDPF_HDR_BUF_SIZE;
+
+ skb = idpf_rx_hdr_construct_skb(rxq, va, hdr_len);
+ u64_stats_update_begin(&rxq->stats_sync);
+ u64_stats_inc(&rxq->q_stats.rx.hsplit_pkts);
+ u64_stats_update_end(&rxq->stats_sync);
+ }
+
+ if (pkt_len) {
+ idpf_rx_sync_for_cpu(rx_buf, pkt_len);
+ if (skb)
+ idpf_rx_add_frag(rx_buf, skb, pkt_len);
+ else
+ skb = idpf_rx_construct_skb(rxq, rx_buf,
+ pkt_len);
+ } else {
+ idpf_rx_put_page(rx_buf);
+ }
+
+ /* exit if we failed to retrieve a buffer */
+ if (!skb)
+ break;
+
+ idpf_rx_post_buf_refill(refillq, buf_id);
+
+ IDPF_RX_BUMP_NTC(rxq, ntc);
+ /* skip if it is non EOP desc */
+ if (!idpf_rx_splitq_is_eop(rx_desc))
+ continue;
+
+ /* pad skb if needed (to make valid ethernet frame) */
+ if (eth_skb_pad(skb)) {
+ skb = NULL;
+ continue;
+ }
+
+ /* probably a little skewed due to removing CRC */
+ total_rx_bytes += skb->len;
+
+ /* protocol */
+ if (unlikely(idpf_rx_process_skb_fields(rxq, skb, rx_desc))) {
+ dev_kfree_skb_any(skb);
+ skb = NULL;
+ continue;
+ }
+
+ /* send completed skb up the stack */
+ napi_gro_receive(&rxq->q_vector->napi, skb);
+ skb = NULL;
+
+ /* update budget accounting */
+ total_rx_pkts++;
+ }
+
+ rxq->next_to_clean = ntc;
+
+ rxq->skb = skb;
+ u64_stats_update_begin(&rxq->stats_sync);
+ u64_stats_add(&rxq->q_stats.rx.packets, total_rx_pkts);
+ u64_stats_add(&rxq->q_stats.rx.bytes, total_rx_bytes);
+ u64_stats_update_end(&rxq->stats_sync);
+
+ /* guarantee a trip back through this routine if there was a failure */
+ return total_rx_pkts;
+}
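+
+/* Illustrative note (not part of the driver logic): per-descriptor flow of the
+ * clean loop above, as a rough sketch:
+ *
+ *   gen bit != SW gen bit                  -> no new writebacks, stop
+ *   rxdid != VIRTCHNL2_RXDID_2_FLEX_SPLITQ -> count bad descriptor, skip
+ *   header split buffer used               -> build skb from the header buffer
+ *   payload length != 0                    -> add the page as a frag (or build skb)
+ *   recycle buf_id via the refill queue and bump next_to_clean
+ *   on EOP: pad, populate skb fields, hand off via napi_gro_receive()
+ */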
+
+/**
+ * idpf_rx_update_bufq_desc - Update buffer queue descriptor
+ * @bufq: Pointer to the buffer queue
+ * @refill_desc: SW Refill queue descriptor containing buffer ID
+ * @buf_desc: Buffer queue descriptor
+ *
+ * Return 0 on success and negative on failure.
+ */
+static int idpf_rx_update_bufq_desc(struct idpf_queue *bufq, u16 refill_desc,
+ struct virtchnl2_splitq_rx_buf_desc *buf_desc)
+{
+ struct idpf_rx_buf *buf;
+ dma_addr_t addr;
+ u16 buf_id;
+
+ buf_id = FIELD_GET(IDPF_RX_BI_BUFID_M, refill_desc);
+
+ buf = &bufq->rx_buf.buf[buf_id];
+
+ addr = idpf_alloc_page(bufq->pp, buf, bufq->rx_buf_size);
+ if (unlikely(addr == DMA_MAPPING_ERROR))
+ return -ENOMEM;
+
+ buf_desc->pkt_addr = cpu_to_le64(addr);
+ buf_desc->qword0.buf_id = cpu_to_le16(buf_id);
+
+ if (!bufq->rx_hsplit_en)
+ return 0;
+
+ buf_desc->hdr_addr = cpu_to_le64(bufq->rx_buf.hdr_buf_pa +
+ (u32)buf_id * IDPF_HDR_BUF_SIZE);
+
+ return 0;
+}
+
+/**
+ * idpf_rx_clean_refillq - Clean refill queue buffers
+ * @bufq: buffer queue to post buffers back to
+ * @refillq: refill queue to clean
+ *
+ * This function takes care of the buffer refill management
+ */
+static void idpf_rx_clean_refillq(struct idpf_queue *bufq,
+ struct idpf_sw_queue *refillq)
+{
+ struct virtchnl2_splitq_rx_buf_desc *buf_desc;
+ u16 bufq_nta = bufq->next_to_alloc;
+ u16 ntc = refillq->next_to_clean;
+ int cleaned = 0;
+ u16 gen;
+
+ buf_desc = IDPF_SPLITQ_RX_BUF_DESC(bufq, bufq_nta);
+
+ /* make sure we stop at ring wrap in the unlikely case ring is full */
+ while (likely(cleaned < refillq->desc_count)) {
+ u16 refill_desc = IDPF_SPLITQ_RX_BI_DESC(refillq, ntc);
+ bool failure;
+
+ gen = FIELD_GET(IDPF_RX_BI_GEN_M, refill_desc);
+ if (test_bit(__IDPF_RFLQ_GEN_CHK, refillq->flags) != gen)
+ break;
+
+ failure = idpf_rx_update_bufq_desc(bufq, refill_desc,
+ buf_desc);
+ if (failure)
+ break;
+
+ if (unlikely(++ntc == refillq->desc_count)) {
+ change_bit(__IDPF_RFLQ_GEN_CHK, refillq->flags);
+ ntc = 0;
+ }
+
+ if (unlikely(++bufq_nta == bufq->desc_count)) {
+ buf_desc = IDPF_SPLITQ_RX_BUF_DESC(bufq, 0);
+ bufq_nta = 0;
+ } else {
+ buf_desc++;
+ }
+
+ cleaned++;
+ }
+
+ if (!cleaned)
+ return;
+
+ /* We want to limit how many transactions on the bus we trigger with
+ * tail writes so we only do it in strides. It's also important we
+ * align the write to a multiple of 8 as required by HW.
+ */
+ if (((bufq->next_to_use <= bufq_nta ? 0 : bufq->desc_count) +
+ bufq_nta - bufq->next_to_use) >= IDPF_RX_BUF_POST_STRIDE)
+ idpf_rx_buf_hw_update(bufq, ALIGN_DOWN(bufq_nta,
+ IDPF_RX_BUF_POST_STRIDE));
+
+ /* update next to alloc since we have filled the ring */
+ refillq->next_to_clean = ntc;
+ bufq->next_to_alloc = bufq_nta;
+}
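+
+/* Illustrative note (not part of the driver logic): a worked example of the
+ * stride check above with IDPF_RX_BUF_POST_STRIDE == 16. If next_to_use == 10
+ * and next_to_alloc has advanced to 43, the distance is 0 + 43 - 10 = 33,
+ * which is >= 16, so the tail is written at ALIGN_DOWN(43, 16) == 32; the
+ * remaining descriptors are posted once another full stride accumulates.
+ */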
+
+/**
+ * idpf_rx_clean_refillq_all - Clean all refill queues
+ * @bufq: buffer queue with refill queues
+ *
+ * Iterates through all refill queues assigned to the buffer queue assigned to
+ * this vector.
+ */
+static void idpf_rx_clean_refillq_all(struct idpf_queue *bufq)
+{
+ struct idpf_bufq_set *bufq_set;
+ int i;
+
+ bufq_set = container_of(bufq, struct idpf_bufq_set, bufq);
+ for (i = 0; i < bufq_set->num_refillqs; i++)
+ idpf_rx_clean_refillq(bufq, &bufq_set->refillqs[i]);
+}
+
+/**
+ * idpf_vport_intr_clean_queues - MSIX mode Interrupt Handler
+ * @irq: interrupt number
+ * @data: pointer to a q_vector
+ *
+ * Schedules NAPI on the queue vector and returns IRQ_HANDLED.
+ */
+static irqreturn_t idpf_vport_intr_clean_queues(int __always_unused irq,
+ void *data)
+{
+ struct idpf_q_vector *q_vector = (struct idpf_q_vector *)data;
+
+ q_vector->total_events++;
+ napi_schedule(&q_vector->napi);
+
+ return IRQ_HANDLED;
+}
+
+/**
+ * idpf_vport_intr_napi_del_all - Unregister napi for all q_vectors in vport
+ * @vport: virtual port structure
+ *
+ */
+static void idpf_vport_intr_napi_del_all(struct idpf_vport *vport)
+{
+ u16 v_idx;
+
+ for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++)
+ netif_napi_del(&vport->q_vectors[v_idx].napi);
+}
+
+/**
+ * idpf_vport_intr_napi_dis_all - Disable NAPI for all q_vectors in the vport
+ * @vport: main vport structure
+ */
+static void idpf_vport_intr_napi_dis_all(struct idpf_vport *vport)
+{
+ int v_idx;
+
+ for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++)
+ napi_disable(&vport->q_vectors[v_idx].napi);
+}
+
+/**
+ * idpf_vport_intr_rel - Free memory allocated for interrupt vectors
+ * @vport: virtual port
+ *
+ * Free the memory allocated for interrupt vectors associated with a vport
+ */
+void idpf_vport_intr_rel(struct idpf_vport *vport)
+{
+ int i, j, v_idx;
+
+ for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
+ struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx];
+
+ kfree(q_vector->bufq);
+ q_vector->bufq = NULL;
+ kfree(q_vector->tx);
+ q_vector->tx = NULL;
+ kfree(q_vector->rx);
+ q_vector->rx = NULL;
+ }
+
+ /* Clean up the mapping of queues to vectors */
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ for (j = 0; j < rx_qgrp->splitq.num_rxq_sets; j++)
+ rx_qgrp->splitq.rxq_sets[j]->rxq.q_vector = NULL;
+ else
+ for (j = 0; j < rx_qgrp->singleq.num_rxq; j++)
+ rx_qgrp->singleq.rxqs[j]->q_vector = NULL;
+ }
+
+ if (idpf_is_queue_model_split(vport->txq_model))
+ for (i = 0; i < vport->num_txq_grp; i++)
+ vport->txq_grps[i].complq->q_vector = NULL;
+ else
+ for (i = 0; i < vport->num_txq_grp; i++)
+ for (j = 0; j < vport->txq_grps[i].num_txq; j++)
+ vport->txq_grps[i].txqs[j]->q_vector = NULL;
+
+ kfree(vport->q_vectors);
+ vport->q_vectors = NULL;
+}
+
+/**
+ * idpf_vport_intr_rel_irq - Free the IRQ association with the OS
+ * @vport: main vport structure
+ */
+static void idpf_vport_intr_rel_irq(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ int vector;
+
+ for (vector = 0; vector < vport->num_q_vectors; vector++) {
+ struct idpf_q_vector *q_vector = &vport->q_vectors[vector];
+ int irq_num, vidx;
+
+ /* free only the irqs that were actually requested */
+ if (!q_vector)
+ continue;
+
+ vidx = vport->q_vector_idxs[vector];
+ irq_num = adapter->msix_entries[vidx].vector;
+
+ /* clear the affinity_mask in the IRQ descriptor */
+ irq_set_affinity_hint(irq_num, NULL);
+ free_irq(irq_num, q_vector);
+ }
+}
+
+/**
+ * idpf_vport_intr_dis_irq_all - Disable all interrupts
+ * @vport: main vport structure
+ */
+static void idpf_vport_intr_dis_irq_all(struct idpf_vport *vport)
+{
+ struct idpf_q_vector *q_vector = vport->q_vectors;
+ int q_idx;
+
+ for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++)
+ writel(0, q_vector[q_idx].intr_reg.dyn_ctl);
+}
+
+/**
+ * idpf_vport_intr_buildreg_itr - Enable default interrupt generation settings
+ * @q_vector: pointer to q_vector
+ * @type: itr index
+ * @itr: itr value
+ */
+static u32 idpf_vport_intr_buildreg_itr(struct idpf_q_vector *q_vector,
+ const int type, u16 itr)
+{
+ u32 itr_val;
+
+ itr &= IDPF_ITR_MASK;
+ /* Don't clear PBA because that can cause lost interrupts that
+ * came in while we were cleaning/polling
+ */
+ itr_val = q_vector->intr_reg.dyn_ctl_intena_m |
+ (type << q_vector->intr_reg.dyn_ctl_itridx_s) |
+ (itr << (q_vector->intr_reg.dyn_ctl_intrvl_s - 1));
+
+ return itr_val;
+}
+
+/**
+ * idpf_update_dim_sample - Update dim sample with packets and bytes
+ * @q_vector: the vector associated with the interrupt
+ * @dim_sample: dim sample to update
+ * @dim: dim instance structure
+ * @packets: total packets
+ * @bytes: total bytes
+ *
+ * Update the dim sample with the packets and bytes which are passed to this
+ * function. Set the dim state appropriately if the dim settings get stale.
+ */
+static void idpf_update_dim_sample(struct idpf_q_vector *q_vector,
+ struct dim_sample *dim_sample,
+ struct dim *dim, u64 packets, u64 bytes)
+{
+ dim_update_sample(q_vector->total_events, packets, bytes, dim_sample);
+ dim_sample->comp_ctr = 0;
+
+ /* if dim settings get stale, like when not updated for 1 second or
+ * longer, force it to start again. This addresses the frequent case
+ * of an idle queue being switched to by the scheduler.
+ */
+ if (ktime_ms_delta(dim_sample->time, dim->start_sample.time) >= HZ)
+ dim->state = DIM_START_MEASURE;
+}
+
+/**
+ * idpf_net_dim - Update net DIM algorithm
+ * @q_vector: the vector associated with the interrupt
+ *
+ * Create a DIM sample and notify net_dim() so that it can possibly decide
+ * a new ITR value based on incoming packets, bytes, and interrupts.
+ *
+ * This function is a no-op if the queue is not configured to dynamic ITR.
+ */
+static void idpf_net_dim(struct idpf_q_vector *q_vector)
+{
+ struct dim_sample dim_sample = { };
+ u64 packets, bytes;
+ u32 i;
+
+ if (!IDPF_ITR_IS_DYNAMIC(q_vector->tx_intr_mode))
+ goto check_rx_itr;
+
+ for (i = 0, packets = 0, bytes = 0; i < q_vector->num_txq; i++) {
+ struct idpf_queue *txq = q_vector->tx[i];
+ unsigned int start;
+
+ do {
+ start = u64_stats_fetch_begin(&txq->stats_sync);
+ packets += u64_stats_read(&txq->q_stats.tx.packets);
+ bytes += u64_stats_read(&txq->q_stats.tx.bytes);
+ } while (u64_stats_fetch_retry(&txq->stats_sync, start));
+ }
+
+ idpf_update_dim_sample(q_vector, &dim_sample, &q_vector->tx_dim,
+ packets, bytes);
+ net_dim(&q_vector->tx_dim, dim_sample);
+
+check_rx_itr:
+ if (!IDPF_ITR_IS_DYNAMIC(q_vector->rx_intr_mode))
+ return;
+
+ for (i = 0, packets = 0, bytes = 0; i < q_vector->num_rxq; i++) {
+ struct idpf_queue *rxq = q_vector->rx[i];
+ unsigned int start;
+
+ do {
+ start = u64_stats_fetch_begin(&rxq->stats_sync);
+ packets += u64_stats_read(&rxq->q_stats.rx.packets);
+ bytes += u64_stats_read(&rxq->q_stats.rx.bytes);
+ } while (u64_stats_fetch_retry(&rxq->stats_sync, start));
+ }
+
+ idpf_update_dim_sample(q_vector, &dim_sample, &q_vector->rx_dim,
+ packets, bytes);
+ net_dim(&q_vector->rx_dim, dim_sample);
+}
+
+/**
+ * idpf_vport_intr_update_itr_ena_irq - Update itr and re-enable MSIX interrupt
+ * @q_vector: q_vector for which itr is being updated and interrupt enabled
+ *
+ * Update the net_dim() algorithm and re-enable the interrupt associated with
+ * this vector.
+ */
+void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector)
+{
+ u32 intval;
+
+ /* net_dim() updates ITR out-of-band using a work item */
+ idpf_net_dim(q_vector);
+
+ intval = idpf_vport_intr_buildreg_itr(q_vector,
+ IDPF_NO_ITR_UPDATE_IDX, 0);
+
+ writel(intval, q_vector->intr_reg.dyn_ctl);
+}
+
+/**
+ * idpf_vport_intr_req_irq - Request IRQs for the vport's MSI-X vectors from the OS
+ * @vport: main vport structure
+ * @basename: name for the vector
+ */
+static int idpf_vport_intr_req_irq(struct idpf_vport *vport, char *basename)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ int vector, err, irq_num, vidx;
+ const char *vec_name;
+
+ for (vector = 0; vector < vport->num_q_vectors; vector++) {
+ struct idpf_q_vector *q_vector = &vport->q_vectors[vector];
+
+ vidx = vport->q_vector_idxs[vector];
+ irq_num = adapter->msix_entries[vidx].vector;
+
+ if (q_vector->num_rxq && q_vector->num_txq)
+ vec_name = "TxRx";
+ else if (q_vector->num_rxq)
+ vec_name = "Rx";
+ else if (q_vector->num_txq)
+ vec_name = "Tx";
+ else
+ continue;
+
+ q_vector->name = kasprintf(GFP_KERNEL, "%s-%s-%d",
+ basename, vec_name, vidx);
+
+ err = request_irq(irq_num, idpf_vport_intr_clean_queues, 0,
+ q_vector->name, q_vector);
+ if (err) {
+ netdev_err(vport->netdev,
+ "Request_irq failed, error: %d\n", err);
+ goto free_q_irqs;
+ }
+ /* assign the mask for this irq */
+ irq_set_affinity_hint(irq_num, &q_vector->affinity_mask);
+ }
+
+ return 0;
+
+free_q_irqs:
+ while (--vector >= 0) {
+ vidx = vport->q_vector_idxs[vector];
+ irq_num = adapter->msix_entries[vidx].vector;
+ free_irq(irq_num, &vport->q_vectors[vector]);
+ }
+
+ return err;
+}
+
+/**
+ * idpf_vport_intr_write_itr - Write ITR value to the ITR register
+ * @q_vector: q_vector structure
+ * @itr: Interrupt throttling rate
+ * @tx: Tx or Rx ITR
+ */
+void idpf_vport_intr_write_itr(struct idpf_q_vector *q_vector, u16 itr, bool tx)
+{
+ struct idpf_intr_reg *intr_reg;
+
+ if (tx && !q_vector->tx)
+ return;
+ else if (!tx && !q_vector->rx)
+ return;
+
+ intr_reg = &q_vector->intr_reg;
+ writel(ITR_REG_ALIGN(itr) >> IDPF_ITR_GRAN_S,
+ tx ? intr_reg->tx_itr : intr_reg->rx_itr);
+}
+
+/**
+ * idpf_vport_intr_ena_irq_all - Enable IRQ for the given vport
+ * @vport: main vport structure
+ */
+static void idpf_vport_intr_ena_irq_all(struct idpf_vport *vport)
+{
+ bool dynamic;
+ int q_idx;
+ u16 itr;
+
+ for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) {
+ struct idpf_q_vector *qv = &vport->q_vectors[q_idx];
+
+ /* Set the initial ITR values */
+ if (qv->num_txq) {
+ dynamic = IDPF_ITR_IS_DYNAMIC(qv->tx_intr_mode);
+ itr = vport->tx_itr_profile[qv->tx_dim.profile_ix];
+ idpf_vport_intr_write_itr(qv, dynamic ?
+ itr : qv->tx_itr_value,
+ true);
+ }
+
+ if (qv->num_rxq) {
+ dynamic = IDPF_ITR_IS_DYNAMIC(qv->rx_intr_mode);
+ itr = vport->rx_itr_profile[qv->rx_dim.profile_ix];
+ idpf_vport_intr_write_itr(qv, dynamic ?
+ itr : qv->rx_itr_value,
+ false);
+ }
+
+ if (qv->num_txq || qv->num_rxq)
+ idpf_vport_intr_update_itr_ena_irq(qv);
+ }
+}
+
+/**
+ * idpf_vport_intr_deinit - Release all vector associations for the vport
+ * @vport: main vport structure
+ */
+void idpf_vport_intr_deinit(struct idpf_vport *vport)
+{
+ idpf_vport_intr_napi_dis_all(vport);
+ idpf_vport_intr_napi_del_all(vport);
+ idpf_vport_intr_dis_irq_all(vport);
+ idpf_vport_intr_rel_irq(vport);
+}
+
+/**
+ * idpf_tx_dim_work - Call back from the stack
+ * @work: work queue structure
+ */
+static void idpf_tx_dim_work(struct work_struct *work)
+{
+ struct idpf_q_vector *q_vector;
+ struct idpf_vport *vport;
+ struct dim *dim;
+ u16 itr;
+
+ dim = container_of(work, struct dim, work);
+ q_vector = container_of(dim, struct idpf_q_vector, tx_dim);
+ vport = q_vector->vport;
+
+ if (dim->profile_ix >= ARRAY_SIZE(vport->tx_itr_profile))
+ dim->profile_ix = ARRAY_SIZE(vport->tx_itr_profile) - 1;
+
+ /* look up the values in our local table */
+ itr = vport->tx_itr_profile[dim->profile_ix];
+
+ idpf_vport_intr_write_itr(q_vector, itr, true);
+
+ dim->state = DIM_START_MEASURE;
+}
+
+/**
+ * idpf_rx_dim_work - Call back from the stack
+ * @work: work queue structure
+ */
+static void idpf_rx_dim_work(struct work_struct *work)
+{
+ struct idpf_q_vector *q_vector;
+ struct idpf_vport *vport;
+ struct dim *dim;
+ u16 itr;
+
+ dim = container_of(work, struct dim, work);
+ q_vector = container_of(dim, struct idpf_q_vector, rx_dim);
+ vport = q_vector->vport;
+
+ if (dim->profile_ix >= ARRAY_SIZE(vport->rx_itr_profile))
+ dim->profile_ix = ARRAY_SIZE(vport->rx_itr_profile) - 1;
+
+ /* look up the values in our local table */
+ itr = vport->rx_itr_profile[dim->profile_ix];
+
+ idpf_vport_intr_write_itr(q_vector, itr, false);
+
+ dim->state = DIM_START_MEASURE;
+}
+
+/**
+ * idpf_init_dim - Set up dynamic interrupt moderation
+ * @qv: q_vector structure
+ */
+static void idpf_init_dim(struct idpf_q_vector *qv)
+{
+ INIT_WORK(&qv->tx_dim.work, idpf_tx_dim_work);
+ qv->tx_dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
+ qv->tx_dim.profile_ix = IDPF_DIM_DEFAULT_PROFILE_IX;
+
+ INIT_WORK(&qv->rx_dim.work, idpf_rx_dim_work);
+ qv->rx_dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
+ qv->rx_dim.profile_ix = IDPF_DIM_DEFAULT_PROFILE_IX;
+}
+
+/**
+ * idpf_vport_intr_napi_ena_all - Enable NAPI for all q_vectors in the vport
+ * @vport: main vport structure
+ */
+static void idpf_vport_intr_napi_ena_all(struct idpf_vport *vport)
+{
+ int q_idx;
+
+ for (q_idx = 0; q_idx < vport->num_q_vectors; q_idx++) {
+ struct idpf_q_vector *q_vector = &vport->q_vectors[q_idx];
+
+ idpf_init_dim(q_vector);
+ napi_enable(&q_vector->napi);
+ }
+}
+
+/**
+ * idpf_tx_splitq_clean_all - Clean completion queues
+ * @q_vec: queue vector
+ * @budget: Used to determine if we are in netpoll
+ * @cleaned: returns number of packets cleaned
+ *
+ * Returns false if clean is not complete else returns true
+ */
+static bool idpf_tx_splitq_clean_all(struct idpf_q_vector *q_vec,
+ int budget, int *cleaned)
+{
+ u16 num_txq = q_vec->num_txq;
+ bool clean_complete = true;
+ int i, budget_per_q;
+
+ if (unlikely(!num_txq))
+ return true;
+
+ budget_per_q = DIV_ROUND_UP(budget, num_txq);
+ for (i = 0; i < num_txq; i++)
+ clean_complete &= idpf_tx_clean_complq(q_vec->tx[i],
+ budget_per_q, cleaned);
+
+ return clean_complete;
+}
+
+/**
+ * idpf_rx_splitq_clean_all - Clean completion queues
+ * @q_vec: queue vector
+ * @budget: Used to determine if we are in netpoll
+ * @cleaned: returns number of packets cleaned
+ *
+ * Returns false if clean is not complete else returns true
+ */
+static bool idpf_rx_splitq_clean_all(struct idpf_q_vector *q_vec, int budget,
+ int *cleaned)
+{
+ u16 num_rxq = q_vec->num_rxq;
+ bool clean_complete = true;
+ int pkts_cleaned = 0;
+ int i, budget_per_q;
+
+ /* We attempt to distribute budget to each Rx queue fairly, but don't
+ * allow the budget to go below 1 because that would exit polling early.
+ */
+ budget_per_q = num_rxq ? max(budget / num_rxq, 1) : 0;
+ for (i = 0; i < num_rxq; i++) {
+ struct idpf_queue *rxq = q_vec->rx[i];
+ int pkts_cleaned_per_q;
+
+ pkts_cleaned_per_q = idpf_rx_splitq_clean(rxq, budget_per_q);
+ /* if we clean as many as budgeted, we must not be done */
+ if (pkts_cleaned_per_q >= budget_per_q)
+ clean_complete = false;
+ pkts_cleaned += pkts_cleaned_per_q;
+ }
+ *cleaned = pkts_cleaned;
+
+ for (i = 0; i < q_vec->num_bufq; i++)
+ idpf_rx_clean_refillq_all(q_vec->bufq[i]);
+
+ return clean_complete;
+}
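+
+/* Illustrative note (not part of the driver logic): with a NAPI budget of 64
+ * and three Rx queues, budget_per_q = max(64 / 3, 1) = 21. If any queue
+ * consumes its full 21-packet share, clean_complete is cleared and the poll
+ * routine below returns the full budget so NAPI polls again.
+ */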
+
+/**
+ * idpf_vport_splitq_napi_poll - NAPI handler
+ * @napi: struct from which you get q_vector
+ * @budget: budget provided by stack
+ */
+static int idpf_vport_splitq_napi_poll(struct napi_struct *napi, int budget)
+{
+ struct idpf_q_vector *q_vector =
+ container_of(napi, struct idpf_q_vector, napi);
+ bool clean_complete;
+ int work_done = 0;
+
+ /* Handle case where we are called by netpoll with a budget of 0 */
+ if (unlikely(!budget)) {
+ idpf_tx_splitq_clean_all(q_vector, budget, &work_done);
+
+ return 0;
+ }
+
+ clean_complete = idpf_rx_splitq_clean_all(q_vector, budget, &work_done);
+ clean_complete &= idpf_tx_splitq_clean_all(q_vector, budget, &work_done);
+
+ /* If work not completed, return budget and polling will return */
+ if (!clean_complete)
+ return budget;
+
+ work_done = min_t(int, work_done, budget - 1);
+
+ /* Exit the polling mode, but don't re-enable interrupts if stack might
+ * poll us due to busy-polling
+ */
+ if (likely(napi_complete_done(napi, work_done)))
+ idpf_vport_intr_update_itr_ena_irq(q_vector);
+
+ /* Switch to poll mode in the tear-down path after sending disable
+ * queues virtchnl message, as the interrupts will be disabled after
+ * that
+ */
+ if (unlikely(q_vector->num_txq && test_bit(__IDPF_Q_POLL_MODE,
+ q_vector->tx[0]->flags)))
+ return budget;
+ else
+ return work_done;
+}
+
+/**
+ * idpf_vport_intr_map_vector_to_qs - Map vectors to queues
+ * @vport: virtual port
+ *
+ * Maps Rx, buffer, and Tx queues to their assigned interrupt vectors
+ */
+static void idpf_vport_intr_map_vector_to_qs(struct idpf_vport *vport)
+{
+ u16 num_txq_grp = vport->num_txq_grp;
+ int i, j, qv_idx, bufq_vidx = 0;
+ struct idpf_rxq_group *rx_qgrp;
+ struct idpf_txq_group *tx_qgrp;
+ struct idpf_queue *q, *bufq;
+ u16 q_index;
+
+ for (i = 0, qv_idx = 0; i < vport->num_rxq_grp; i++) {
+ u16 num_rxq;
+
+ rx_qgrp = &vport->rxq_grps[i];
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ num_rxq = rx_qgrp->splitq.num_rxq_sets;
+ else
+ num_rxq = rx_qgrp->singleq.num_rxq;
+
+ for (j = 0; j < num_rxq; j++) {
+ if (qv_idx >= vport->num_q_vectors)
+ qv_idx = 0;
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ else
+ q = rx_qgrp->singleq.rxqs[j];
+ q->q_vector = &vport->q_vectors[qv_idx];
+ q_index = q->q_vector->num_rxq;
+ q->q_vector->rx[q_index] = q;
+ q->q_vector->num_rxq++;
+ qv_idx++;
+ }
+
+ if (idpf_is_queue_model_split(vport->rxq_model)) {
+ for (j = 0; j < vport->num_bufqs_per_qgrp; j++) {
+ bufq = &rx_qgrp->splitq.bufq_sets[j].bufq;
+ bufq->q_vector = &vport->q_vectors[bufq_vidx];
+ q_index = bufq->q_vector->num_bufq;
+ bufq->q_vector->bufq[q_index] = bufq;
+ bufq->q_vector->num_bufq++;
+ }
+ if (++bufq_vidx >= vport->num_q_vectors)
+ bufq_vidx = 0;
+ }
+ }
+
+ for (i = 0, qv_idx = 0; i < num_txq_grp; i++) {
+ u16 num_txq;
+
+ tx_qgrp = &vport->txq_grps[i];
+ num_txq = tx_qgrp->num_txq;
+
+ if (idpf_is_queue_model_split(vport->txq_model)) {
+ if (qv_idx >= vport->num_q_vectors)
+ qv_idx = 0;
+
+ q = tx_qgrp->complq;
+ q->q_vector = &vport->q_vectors[qv_idx];
+ q_index = q->q_vector->num_txq;
+ q->q_vector->tx[q_index] = q;
+ q->q_vector->num_txq++;
+ qv_idx++;
+ } else {
+ for (j = 0; j < num_txq; j++) {
+ if (qv_idx >= vport->num_q_vectors)
+ qv_idx = 0;
+
+ q = tx_qgrp->txqs[j];
+ q->q_vector = &vport->q_vectors[qv_idx];
+ q_index = q->q_vector->num_txq;
+ q->q_vector->tx[q_index] = q;
+ q->q_vector->num_txq++;
+
+ qv_idx++;
+ }
+ }
+ }
+}
+
+/**
+ * idpf_vport_intr_init_vec_idx - Initialize the vector indexes
+ * @vport: virtual port
+ *
+ * Initialize vector indexes with values returned over mailbox
+ */
+static int idpf_vport_intr_init_vec_idx(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_alloc_vectors *ac;
+ u16 *vecids, total_vecs;
+ int i;
+
+ ac = adapter->req_vec_chunks;
+ if (!ac) {
+ for (i = 0; i < vport->num_q_vectors; i++)
+ vport->q_vectors[i].v_idx = vport->q_vector_idxs[i];
+
+ return 0;
+ }
+
+ total_vecs = idpf_get_reserved_vecs(adapter);
+ vecids = kcalloc(total_vecs, sizeof(u16), GFP_KERNEL);
+ if (!vecids)
+ return -ENOMEM;
+
+ idpf_get_vec_ids(adapter, vecids, total_vecs, &ac->vchunks);
+
+ for (i = 0; i < vport->num_q_vectors; i++)
+ vport->q_vectors[i].v_idx = vecids[vport->q_vector_idxs[i]];
+
+ kfree(vecids);
+
+ return 0;
+}
+
+/**
+ * idpf_vport_intr_napi_add_all - Register NAPI handler for all q_vectors
+ * @vport: virtual port structure
+ */
+static void idpf_vport_intr_napi_add_all(struct idpf_vport *vport)
+{
+ int (*napi_poll)(struct napi_struct *napi, int budget);
+ u16 v_idx;
+
+ if (idpf_is_queue_model_split(vport->txq_model))
+ napi_poll = idpf_vport_splitq_napi_poll;
+ else
+ napi_poll = idpf_vport_singleq_napi_poll;
+
+ for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
+ struct idpf_q_vector *q_vector = &vport->q_vectors[v_idx];
+
+ netif_napi_add(vport->netdev, &q_vector->napi, napi_poll);
+
+ /* only set affinity_mask if the CPU is online */
+ if (cpu_online(v_idx))
+ cpumask_set_cpu(v_idx, &q_vector->affinity_mask);
+ }
+}
+
+/**
+ * idpf_vport_intr_alloc - Allocate memory for interrupt vectors
+ * @vport: virtual port
+ *
+ * We allocate one q_vector per queue interrupt. If allocation fails we
+ * return -ENOMEM.
+ */
+int idpf_vport_intr_alloc(struct idpf_vport *vport)
+{
+ u16 txqs_per_vector, rxqs_per_vector, bufqs_per_vector;
+ struct idpf_q_vector *q_vector;
+ int v_idx, err;
+
+ vport->q_vectors = kcalloc(vport->num_q_vectors,
+ sizeof(struct idpf_q_vector), GFP_KERNEL);
+ if (!vport->q_vectors)
+ return -ENOMEM;
+
+ txqs_per_vector = DIV_ROUND_UP(vport->num_txq, vport->num_q_vectors);
+ rxqs_per_vector = DIV_ROUND_UP(vport->num_rxq, vport->num_q_vectors);
+ bufqs_per_vector = vport->num_bufqs_per_qgrp *
+ DIV_ROUND_UP(vport->num_rxq_grp,
+ vport->num_q_vectors);
+
+ for (v_idx = 0; v_idx < vport->num_q_vectors; v_idx++) {
+ q_vector = &vport->q_vectors[v_idx];
+ q_vector->vport = vport;
+
+ q_vector->tx_itr_value = IDPF_ITR_TX_DEF;
+ q_vector->tx_intr_mode = IDPF_ITR_DYNAMIC;
+ q_vector->tx_itr_idx = VIRTCHNL2_ITR_IDX_1;
+
+ q_vector->rx_itr_value = IDPF_ITR_RX_DEF;
+ q_vector->rx_intr_mode = IDPF_ITR_DYNAMIC;
+ q_vector->rx_itr_idx = VIRTCHNL2_ITR_IDX_0;
+
+ q_vector->tx = kcalloc(txqs_per_vector,
+ sizeof(struct idpf_queue *),
+ GFP_KERNEL);
+ if (!q_vector->tx) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ q_vector->rx = kcalloc(rxqs_per_vector,
+ sizeof(struct idpf_queue *),
+ GFP_KERNEL);
+ if (!q_vector->rx) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ if (!idpf_is_queue_model_split(vport->rxq_model))
+ continue;
+
+ q_vector->bufq = kcalloc(bufqs_per_vector,
+ sizeof(struct idpf_queue *),
+ GFP_KERNEL);
+ if (!q_vector->bufq) {
+ err = -ENOMEM;
+ goto error;
+ }
+ }
+
+ return 0;
+
+error:
+ idpf_vport_intr_rel(vport);
+
+ return err;
+}
+
+/**
+ * idpf_vport_intr_init - Setup all vectors for the given vport
+ * @vport: virtual port
+ *
+ * Returns 0 on success or negative on failure
+ */
+int idpf_vport_intr_init(struct idpf_vport *vport)
+{
+ char *int_name;
+ int err;
+
+ err = idpf_vport_intr_init_vec_idx(vport);
+ if (err)
+ return err;
+
+ idpf_vport_intr_map_vector_to_qs(vport);
+ idpf_vport_intr_napi_add_all(vport);
+ idpf_vport_intr_napi_ena_all(vport);
+
+ err = vport->adapter->dev_ops.reg_ops.intr_reg_init(vport);
+ if (err)
+ goto unroll_vectors_alloc;
+
+ int_name = kasprintf(GFP_KERNEL, "%s-%s",
+ dev_driver_string(&vport->adapter->pdev->dev),
+ vport->netdev->name);
+
+ err = idpf_vport_intr_req_irq(vport, int_name);
+ if (err)
+ goto unroll_vectors_alloc;
+
+ idpf_vport_intr_ena_irq_all(vport);
+
+ return 0;
+
+unroll_vectors_alloc:
+ idpf_vport_intr_napi_dis_all(vport);
+ idpf_vport_intr_napi_del_all(vport);
+
+ return err;
+}
+
+/**
+ * idpf_config_rss - Send virtchnl messages to configure RSS
+ * @vport: virtual port
+ *
+ * Return 0 on success, negative on failure
+ */
+int idpf_config_rss(struct idpf_vport *vport)
+{
+ int err;
+
+ err = idpf_send_get_set_rss_key_msg(vport, false);
+ if (err)
+ return err;
+
+ return idpf_send_get_set_rss_lut_msg(vport, false);
+}
+
+/**
+ * idpf_fill_dflt_rss_lut - Fill the indirection table with the default values
+ * @vport: virtual port structure
+ */
+static void idpf_fill_dflt_rss_lut(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ u16 num_active_rxq = vport->num_rxq;
+ struct idpf_rss_data *rss_data;
+ int i;
+
+ rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
+
+ for (i = 0; i < rss_data->rss_lut_size; i++) {
+ rss_data->rss_lut[i] = i % num_active_rxq;
+ rss_data->cached_lut[i] = rss_data->rss_lut[i];
+ }
+}
+
+/**
+ * idpf_init_rss - Allocate and initialize RSS resources
+ * @vport: virtual port
+ *
+ * Return 0 on success, negative on failure
+ */
+int idpf_init_rss(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct idpf_rss_data *rss_data;
+ u32 lut_size;
+
+ rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
+
+ lut_size = rss_data->rss_lut_size * sizeof(u32);
+ rss_data->rss_lut = kzalloc(lut_size, GFP_KERNEL);
+ if (!rss_data->rss_lut)
+ return -ENOMEM;
+
+ rss_data->cached_lut = kzalloc(lut_size, GFP_KERNEL);
+ if (!rss_data->cached_lut) {
+ kfree(rss_data->rss_lut);
+ rss_data->rss_lut = NULL;
+
+ return -ENOMEM;
+ }
+
+ /* Fill the default RSS lut values */
+ idpf_fill_dflt_rss_lut(vport);
+
+ return idpf_config_rss(vport);
+}
+
+/**
+ * idpf_deinit_rss - Release RSS resources
+ * @vport: virtual port
+ */
+void idpf_deinit_rss(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct idpf_rss_data *rss_data;
+
+ rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
+ kfree(rss_data->cached_lut);
+ rss_data->cached_lut = NULL;
+ kfree(rss_data->rss_lut);
+ rss_data->rss_lut = NULL;
+}
diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
new file mode 100644
index 000000000000..df76493faa75
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
@@ -0,0 +1,1023 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _IDPF_TXRX_H_
+#define _IDPF_TXRX_H_
+
+#include <net/page_pool/helpers.h>
+#include <net/tcp.h>
+#include <net/netdev_queues.h>
+
+#define IDPF_LARGE_MAX_Q 256
+#define IDPF_MAX_Q 16
+#define IDPF_MIN_Q 2
+/* Mailbox Queue */
+#define IDPF_MAX_MBXQ 1
+
+#define IDPF_MIN_TXQ_DESC 64
+#define IDPF_MIN_RXQ_DESC 64
+#define IDPF_MIN_TXQ_COMPLQ_DESC 256
+#define IDPF_MAX_QIDS 256
+
+/* Number of descriptors in a queue should be a multiple of 32. RX queue
+ * descriptors alone should be a multiple of IDPF_REQ_RXQ_DESC_MULTIPLE
+ * to achieve BufQ descriptors aligned to 32
+ */
+#define IDPF_REQ_DESC_MULTIPLE 32
+#define IDPF_REQ_RXQ_DESC_MULTIPLE (IDPF_MAX_BUFQS_PER_RXQ_GRP * 32)
+#define IDPF_MIN_TX_DESC_NEEDED (MAX_SKB_FRAGS + 6)
+#define IDPF_TX_WAKE_THRESH ((u16)IDPF_MIN_TX_DESC_NEEDED * 2)
+
+#define IDPF_MAX_DESCS 8160
+#define IDPF_MAX_TXQ_DESC ALIGN_DOWN(IDPF_MAX_DESCS, IDPF_REQ_DESC_MULTIPLE)
+#define IDPF_MAX_RXQ_DESC ALIGN_DOWN(IDPF_MAX_DESCS, IDPF_REQ_RXQ_DESC_MULTIPLE)
+#define MIN_SUPPORT_TXDID (\
+ VIRTCHNL2_TXDID_FLEX_FLOW_SCHED |\
+ VIRTCHNL2_TXDID_FLEX_TSO_CTX)
+
+#define IDPF_DFLT_SINGLEQ_TX_Q_GROUPS 1
+#define IDPF_DFLT_SINGLEQ_RX_Q_GROUPS 1
+#define IDPF_DFLT_SINGLEQ_TXQ_PER_GROUP 4
+#define IDPF_DFLT_SINGLEQ_RXQ_PER_GROUP 4
+
+#define IDPF_COMPLQ_PER_GROUP 1
+#define IDPF_SINGLE_BUFQ_PER_RXQ_GRP 1
+#define IDPF_MAX_BUFQS_PER_RXQ_GRP 2
+#define IDPF_BUFQ2_ENA 1
+#define IDPF_NUMQ_PER_CHUNK 1
+
+#define IDPF_DFLT_SPLITQ_TXQ_PER_GROUP 1
+#define IDPF_DFLT_SPLITQ_RXQ_PER_GROUP 1
+
+/* Default vector sharing */
+#define IDPF_MBX_Q_VEC 1
+#define IDPF_MIN_Q_VEC 1
+
+#define IDPF_DFLT_TX_Q_DESC_COUNT 512
+#define IDPF_DFLT_TX_COMPLQ_DESC_COUNT 512
+#define IDPF_DFLT_RX_Q_DESC_COUNT 512
+
+/* IMPORTANT: We absolutely _cannot_ have more buffers in the system than a
+ * given RX completion queue has descriptors. This includes _ALL_ buffer
+ * queues. E.g.: If you have two buffer queues of 512 descriptors and buffers,
+ * you have a total of 1024 buffers so your RX queue _must_ have at least that
+ * many descriptors. This macro divides a given number of RX descriptors by
+ * number of buffer queues to calculate how many descriptors each buffer queue
+ * can have without overrunning the RX queue.
+ *
+ * If you give hardware more buffers than completion descriptors what will
+ * happen is that if hardware gets a chance to post more than ring wrap of
+ * descriptors before SW gets an interrupt and overwrites SW head, the gen bit
+ * in the descriptor will be wrong. Any overwritten descriptors' buffers will
+ * be gone forever and SW has no reasonable way to tell that this has happened.
+ * From SW perspective, when we finally get an interrupt, it looks like we're
+ * still waiting for descriptor to be done, stalling forever.
+ */
+#define IDPF_RX_BUFQ_DESC_COUNT(RXD, NUM_BUFQ) ((RXD) / (NUM_BUFQ))
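+
+/* Illustrative note: using the example from the comment above, an RX queue of
+ * 1024 descriptors fed by two buffer queues gives
+ * IDPF_RX_BUFQ_DESC_COUNT(1024, 2) == 512 descriptors per buffer queue, so the
+ * total number of posted buffers never exceeds the completion ring size.
+ */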
+
+#define IDPF_RX_BUFQ_WORKING_SET(rxq) ((rxq)->desc_count - 1)
+
+#define IDPF_RX_BUMP_NTC(rxq, ntc) \
+do { \
+ if (unlikely(++(ntc) == (rxq)->desc_count)) { \
+ ntc = 0; \
+ change_bit(__IDPF_Q_GEN_CHK, (rxq)->flags); \
+ } \
+} while (0)
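+
+/* Illustrative note: idpf_rx_splitq_clean() compares each descriptor's gen bit
+ * against __IDPF_Q_GEN_CHK to detect new writebacks. Bumping next_to_clean
+ * past the end of the ring flips the SW gen bit here, so on the next pass
+ * around the ring the opposite gen value marks fresh descriptors.
+ */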
+
+#define IDPF_SINGLEQ_BUMP_RING_IDX(q, idx) \
+do { \
+ if (unlikely(++(idx) == (q)->desc_count)) \
+ idx = 0; \
+} while (0)
+
+#define IDPF_RX_HDR_SIZE 256
+#define IDPF_RX_BUF_2048 2048
+#define IDPF_RX_BUF_4096 4096
+#define IDPF_RX_BUF_STRIDE 32
+#define IDPF_RX_BUF_POST_STRIDE 16
+#define IDPF_LOW_WATERMARK 64
+/* Size of header buffer specifically for header split */
+#define IDPF_HDR_BUF_SIZE 256
+#define IDPF_PACKET_HDR_PAD \
+ (ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN * 2)
+#define IDPF_TX_TSO_MIN_MSS 88
+
+/* Minimum number of descriptors between 2 descriptors with the RE bit set;
+ * only relevant in flow scheduling mode
+ */
+#define IDPF_TX_SPLITQ_RE_MIN_GAP 64
+
+#define IDPF_RX_BI_BUFID_S 0
+#define IDPF_RX_BI_BUFID_M GENMASK(14, 0)
+#define IDPF_RX_BI_GEN_S 15
+#define IDPF_RX_BI_GEN_M BIT(IDPF_RX_BI_GEN_S)
+#define IDPF_RXD_EOF_SPLITQ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M
+#define IDPF_RXD_EOF_SINGLEQ VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M
+
+#define IDPF_SINGLEQ_RX_BUF_DESC(rxq, i) \
+ (&(((struct virtchnl2_singleq_rx_buf_desc *)((rxq)->desc_ring))[i]))
+#define IDPF_SPLITQ_RX_BUF_DESC(rxq, i) \
+ (&(((struct virtchnl2_splitq_rx_buf_desc *)((rxq)->desc_ring))[i]))
+#define IDPF_SPLITQ_RX_BI_DESC(rxq, i) ((((rxq)->ring))[i])
+
+#define IDPF_BASE_TX_DESC(txq, i) \
+ (&(((struct idpf_base_tx_desc *)((txq)->desc_ring))[i]))
+#define IDPF_BASE_TX_CTX_DESC(txq, i) \
+ (&(((struct idpf_base_tx_ctx_desc *)((txq)->desc_ring))[i]))
+#define IDPF_SPLITQ_TX_COMPLQ_DESC(txcq, i) \
+ (&(((struct idpf_splitq_tx_compl_desc *)((txcq)->desc_ring))[i]))
+
+#define IDPF_FLEX_TX_DESC(txq, i) \
+ (&(((union idpf_tx_flex_desc *)((txq)->desc_ring))[i]))
+#define IDPF_FLEX_TX_CTX_DESC(txq, i) \
+ (&(((struct idpf_flex_tx_ctx_desc *)((txq)->desc_ring))[i]))
+
+#define IDPF_DESC_UNUSED(txq) \
+ ((((txq)->next_to_clean > (txq)->next_to_use) ? 0 : (txq)->desc_count) + \
+ (txq)->next_to_clean - (txq)->next_to_use - 1)
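+
+/* Illustrative note: a worked example of IDPF_DESC_UNUSED() on a 512-entry
+ * ring:
+ *
+ *   next_to_clean = 5,  next_to_use = 10  ->  512 + 5 - 10 - 1 = 506 free
+ *   next_to_clean = 20, next_to_use = 10  ->    0 + 20 - 10 - 1 = 9 free
+ *
+ * The trailing -1 keeps one slot in reserve, the usual way to distinguish a
+ * full ring from an empty one.
+ */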
+
+#define IDPF_TX_BUF_RSV_UNUSED(txq) ((txq)->buf_stack.top)
+#define IDPF_TX_BUF_RSV_LOW(txq) (IDPF_TX_BUF_RSV_UNUSED(txq) < \
+ (txq)->desc_count >> 2)
+
+#define IDPF_TX_COMPLQ_OVERFLOW_THRESH(txcq) ((txcq)->desc_count >> 1)
+/* Determine the absolute number of completions pending, i.e. the number of
+ * completions that are expected to arrive on the TX completion queue.
+ */
+#define IDPF_TX_COMPLQ_PENDING(txq) \
+ (((txq)->num_completions_pending >= (txq)->complq->num_completions ? \
+ 0 : U64_MAX) + \
+ (txq)->num_completions_pending - (txq)->complq->num_completions)
+
+#define IDPF_TX_SPLITQ_COMPL_TAG_WIDTH 16
+#define IDPF_SPLITQ_TX_INVAL_COMPL_TAG -1
+/* Adjust the generation for the completion tag and wrap if necessary */
+#define IDPF_TX_ADJ_COMPL_TAG_GEN(txq) \
+ ((++(txq)->compl_tag_cur_gen) >= (txq)->compl_tag_gen_max ? \
+ 0 : (txq)->compl_tag_cur_gen)
+
+#define IDPF_TXD_LAST_DESC_CMD (IDPF_TX_DESC_CMD_EOP | IDPF_TX_DESC_CMD_RS)
+
+#define IDPF_TX_FLAGS_TSO BIT(0)
+#define IDPF_TX_FLAGS_IPV4 BIT(1)
+#define IDPF_TX_FLAGS_IPV6 BIT(2)
+#define IDPF_TX_FLAGS_TUNNEL BIT(3)
+
+union idpf_tx_flex_desc {
+ struct idpf_flex_tx_desc q; /* queue based scheduling */
+ struct idpf_flex_tx_sched_desc flow; /* flow based scheduling */
+};
+
+/**
+ * struct idpf_tx_buf
+ * @next_to_watch: Next descriptor to clean
+ * @skb: Pointer to the skb
+ * @dma: DMA address
+ * @len: DMA length
+ * @bytecount: Number of bytes
+ * @gso_segs: Number of GSO segments
+ * @compl_tag: Splitq only, unique identifier for a buffer. Used to compare
+ * with completion tag returned in buffer completion event.
+ * Because the completion tag is expected to be the same in all
+ * data descriptors for a given packet, and a single packet can
+ * span multiple buffers, we need this field to track all
+ * buffers associated with this completion tag independently of
+ * the buf_id. The tag consists of a N bit buf_id and M upper
+ * order "generation bits". See compl_tag_bufid_m and
+ * compl_tag_gen_s in struct idpf_queue. We'll use a value of -1
+ * to indicate the tag is not valid.
+ * @ctx_entry: Singleq only. Used to indicate the corresponding entry
+ * in the descriptor ring was used for a context descriptor and
+ * this buffer entry should be skipped.
+ */
+struct idpf_tx_buf {
+ void *next_to_watch;
+ struct sk_buff *skb;
+ DEFINE_DMA_UNMAP_ADDR(dma);
+ DEFINE_DMA_UNMAP_LEN(len);
+ unsigned int bytecount;
+ unsigned short gso_segs;
+
+ union {
+ int compl_tag;
+
+ bool ctx_entry;
+ };
+};
+
+struct idpf_tx_stash {
+ struct hlist_node hlist;
+ struct idpf_tx_buf buf;
+};
+
+/**
+ * struct idpf_buf_lifo - LIFO for managing OOO completions
+ * @top: Used to know how many buffers are left
+ * @size: Total size of LIFO
+ * @bufs: Backing array
+ */
+struct idpf_buf_lifo {
+ u16 top;
+ u16 size;
+ struct idpf_tx_stash **bufs;
+};
+
+/**
+ * struct idpf_tx_offload_params - Offload parameters for a given packet
+ * @tx_flags: Feature flags enabled for this packet
+ * @hdr_offsets: Offset parameter for single queue model
+ * @cd_tunneling: Type of tunneling enabled for single queue model
+ * @tso_len: Total length of payload to segment
+ * @mss: Segment size
+ * @tso_segs: Number of segments to be sent
+ * @tso_hdr_len: Length of headers to be duplicated
+ * @td_cmd: Command field to be inserted into descriptor
+ */
+struct idpf_tx_offload_params {
+ u32 tx_flags;
+
+ u32 hdr_offsets;
+ u32 cd_tunneling;
+
+ u32 tso_len;
+ u16 mss;
+ u16 tso_segs;
+ u16 tso_hdr_len;
+
+ u16 td_cmd;
+};
+
+/**
+ * struct idpf_tx_splitq_params
+ * @dtype: General descriptor info
+ * @eop_cmd: Type of EOP
+ * @compl_tag: Associated tag for completion
+ * @td_tag: Descriptor tunneling tag
+ * @offload: Offload parameters
+ */
+struct idpf_tx_splitq_params {
+ enum idpf_tx_desc_dtype_value dtype;
+ u16 eop_cmd;
+ union {
+ u16 compl_tag;
+ u16 td_tag;
+ };
+
+ struct idpf_tx_offload_params offload;
+};
+
+enum idpf_tx_ctx_desc_eipt_offload {
+ IDPF_TX_CTX_EXT_IP_NONE = 0x0,
+ IDPF_TX_CTX_EXT_IP_IPV6 = 0x1,
+ IDPF_TX_CTX_EXT_IP_IPV4_NO_CSUM = 0x2,
+ IDPF_TX_CTX_EXT_IP_IPV4 = 0x3
+};
+
+/* Checksum offload bits decoded from the receive descriptor. */
+struct idpf_rx_csum_decoded {
+ u32 l3l4p : 1;
+ u32 ipe : 1;
+ u32 eipe : 1;
+ u32 eudpe : 1;
+ u32 ipv6exadd : 1;
+ u32 l4e : 1;
+ u32 pprs : 1;
+ u32 nat : 1;
+ u32 raw_csum_inv : 1;
+ u32 raw_csum : 16;
+};
+
+struct idpf_rx_extracted {
+ unsigned int size;
+ u16 rx_ptype;
+};
+
+#define IDPF_TX_COMPLQ_CLEAN_BUDGET 256
+#define IDPF_TX_MIN_PKT_LEN 17
+#define IDPF_TX_DESCS_FOR_SKB_DATA_PTR 1
+#define IDPF_TX_DESCS_PER_CACHE_LINE (L1_CACHE_BYTES / \
+ sizeof(struct idpf_flex_tx_desc))
+#define IDPF_TX_DESCS_FOR_CTX 1
+/* TX descriptors needed, worst case */
+#define IDPF_TX_DESC_NEEDED (MAX_SKB_FRAGS + IDPF_TX_DESCS_FOR_CTX + \
+ IDPF_TX_DESCS_PER_CACHE_LINE + \
+ IDPF_TX_DESCS_FOR_SKB_DATA_PTR)
+
+/* The size limit for a transmit buffer in a descriptor is (16K - 1).
+ * In order to align with the read requests we will align the value to
+ * the nearest 4K which represents our maximum read request size.
+ */
+#define IDPF_TX_MAX_READ_REQ_SIZE SZ_4K
+#define IDPF_TX_MAX_DESC_DATA (SZ_16K - 1)
+#define IDPF_TX_MAX_DESC_DATA_ALIGNED \
+ ALIGN_DOWN(IDPF_TX_MAX_DESC_DATA, IDPF_TX_MAX_READ_REQ_SIZE)
+
+#define IDPF_RX_DMA_ATTR \
+ (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
+#define IDPF_RX_DESC(rxq, i) \
+ (&(((union virtchnl2_rx_desc *)((rxq)->desc_ring))[i]))
+
+struct idpf_rx_buf {
+ struct page *page;
+ unsigned int page_offset;
+ u16 truesize;
+};
+
+#define IDPF_RX_MAX_PTYPE_PROTO_IDS 32
+#define IDPF_RX_MAX_PTYPE_SZ (sizeof(struct virtchnl2_ptype) + \
+ (sizeof(u16) * IDPF_RX_MAX_PTYPE_PROTO_IDS))
+#define IDPF_RX_PTYPE_HDR_SZ sizeof(struct virtchnl2_get_ptype_info)
+#define IDPF_RX_MAX_PTYPES_PER_BUF \
+ DIV_ROUND_DOWN_ULL((IDPF_CTLQ_MAX_BUF_LEN - IDPF_RX_PTYPE_HDR_SZ), \
+ IDPF_RX_MAX_PTYPE_SZ)
+
+#define IDPF_GET_PTYPE_SIZE(p) struct_size((p), proto_id, (p)->proto_id_count)
+
+#define IDPF_TUN_IP_GRE (\
+ IDPF_PTYPE_TUNNEL_IP |\
+ IDPF_PTYPE_TUNNEL_IP_GRENAT)
+
+#define IDPF_TUN_IP_GRE_MAC (\
+ IDPF_TUN_IP_GRE |\
+ IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC)
+
+#define IDPF_RX_MAX_PTYPE 1024
+#define IDPF_RX_MAX_BASE_PTYPE 256
+#define IDPF_INVALID_PTYPE_ID 0xFFFF
+
+/* Packet type non-ip values */
+enum idpf_rx_ptype_l2 {
+ IDPF_RX_PTYPE_L2_RESERVED = 0,
+ IDPF_RX_PTYPE_L2_MAC_PAY2 = 1,
+ IDPF_RX_PTYPE_L2_TIMESYNC_PAY2 = 2,
+ IDPF_RX_PTYPE_L2_FIP_PAY2 = 3,
+ IDPF_RX_PTYPE_L2_OUI_PAY2 = 4,
+ IDPF_RX_PTYPE_L2_MACCNTRL_PAY2 = 5,
+ IDPF_RX_PTYPE_L2_LLDP_PAY2 = 6,
+ IDPF_RX_PTYPE_L2_ECP_PAY2 = 7,
+ IDPF_RX_PTYPE_L2_EVB_PAY2 = 8,
+ IDPF_RX_PTYPE_L2_QCN_PAY2 = 9,
+ IDPF_RX_PTYPE_L2_EAPOL_PAY2 = 10,
+ IDPF_RX_PTYPE_L2_ARP = 11,
+};
+
+enum idpf_rx_ptype_outer_ip {
+ IDPF_RX_PTYPE_OUTER_L2 = 0,
+ IDPF_RX_PTYPE_OUTER_IP = 1,
+};
+
+#define IDPF_RX_PTYPE_TO_IPV(ptype, ipv) \
+ (((ptype)->outer_ip == IDPF_RX_PTYPE_OUTER_IP) && \
+ ((ptype)->outer_ip_ver == (ipv)))
+
+enum idpf_rx_ptype_outer_ip_ver {
+ IDPF_RX_PTYPE_OUTER_NONE = 0,
+ IDPF_RX_PTYPE_OUTER_IPV4 = 1,
+ IDPF_RX_PTYPE_OUTER_IPV6 = 2,
+};
+
+enum idpf_rx_ptype_outer_fragmented {
+ IDPF_RX_PTYPE_NOT_FRAG = 0,
+ IDPF_RX_PTYPE_FRAG = 1,
+};
+
+enum idpf_rx_ptype_tunnel_type {
+ IDPF_RX_PTYPE_TUNNEL_NONE = 0,
+ IDPF_RX_PTYPE_TUNNEL_IP_IP = 1,
+ IDPF_RX_PTYPE_TUNNEL_IP_GRENAT = 2,
+ IDPF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC = 3,
+ IDPF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC_VLAN = 4,
+};
+
+enum idpf_rx_ptype_tunnel_end_prot {
+ IDPF_RX_PTYPE_TUNNEL_END_NONE = 0,
+ IDPF_RX_PTYPE_TUNNEL_END_IPV4 = 1,
+ IDPF_RX_PTYPE_TUNNEL_END_IPV6 = 2,
+};
+
+enum idpf_rx_ptype_inner_prot {
+ IDPF_RX_PTYPE_INNER_PROT_NONE = 0,
+ IDPF_RX_PTYPE_INNER_PROT_UDP = 1,
+ IDPF_RX_PTYPE_INNER_PROT_TCP = 2,
+ IDPF_RX_PTYPE_INNER_PROT_SCTP = 3,
+ IDPF_RX_PTYPE_INNER_PROT_ICMP = 4,
+ IDPF_RX_PTYPE_INNER_PROT_TIMESYNC = 5,
+};
+
+enum idpf_rx_ptype_payload_layer {
+ IDPF_RX_PTYPE_PAYLOAD_LAYER_NONE = 0,
+ IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY2 = 1,
+ IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY3 = 2,
+ IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY4 = 3,
+};
+
+enum idpf_tunnel_state {
+ IDPF_PTYPE_TUNNEL_IP = BIT(0),
+ IDPF_PTYPE_TUNNEL_IP_GRENAT = BIT(1),
+ IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC = BIT(2),
+};
+
+struct idpf_ptype_state {
+ bool outer_ip;
+ bool outer_frag;
+ u8 tunnel_state;
+};
+
+struct idpf_rx_ptype_decoded {
+ u32 ptype:10;
+ u32 known:1;
+ u32 outer_ip:1;
+ u32 outer_ip_ver:2;
+ u32 outer_frag:1;
+ u32 tunnel_type:3;
+ u32 tunnel_end_prot:2;
+ u32 tunnel_end_frag:1;
+ u32 inner_prot:4;
+ u32 payload_layer:3;
+};
+
+/**
+ * enum idpf_queue_flags_t
+ * @__IDPF_Q_GEN_CHK: Queues operating in splitq mode use a generation bit to
+ * identify new descriptor writebacks on the ring. HW sets
+ * the gen bit to 1 on the first writeback of any given
+ * descriptor. After the ring wraps, HW sets the gen bit of
+ * those descriptors to 0, and continues flipping
+ * 0->1 or 1->0 on each ring wrap. SW maintains its own
+ * gen bit to know what value will indicate writebacks on
+ * the next pass around the ring. E.g. it is initialized
+ * to 1 and knows that reading a gen bit of 1 in any
+ * descriptor on the initial pass of the ring indicates a
+ * writeback. It also flips on every ring wrap.
+ * @__IDPF_RFLQ_GEN_CHK: Refill queues are SW only, so Q_GEN acts as the HW bit
+ * and RFLQ_GEN is the SW bit.
+ * @__IDPF_Q_FLOW_SCH_EN: Enable flow scheduling
+ * @__IDPF_Q_SW_MARKER: Used to indicate TX queue marker completions
+ * @__IDPF_Q_POLL_MODE: Enable poll mode
+ * @__IDPF_Q_FLAGS_NBITS: Must be last
+ */
+enum idpf_queue_flags_t {
+ __IDPF_Q_GEN_CHK,
+ __IDPF_RFLQ_GEN_CHK,
+ __IDPF_Q_FLOW_SCH_EN,
+ __IDPF_Q_SW_MARKER,
+ __IDPF_Q_POLL_MODE,
+
+ __IDPF_Q_FLAGS_NBITS,
+};
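
The generation-bit handshake described for @__IDPF_Q_GEN_CHK can be pictured with a small standalone sketch. Everything below (the demo_* names, the ring size, the boolean gen field) is an illustrative assumption and not part of the driver:

/* Minimal userspace model of the splitq generation-bit scheme: SW keeps its
 * own expected gen value and flips it on every ring wrap, so a descriptor
 * whose gen bit matches the expected value is a new writeback.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_RING_SIZE 8

struct demo_ring {
        bool desc_gen[DEMO_RING_SIZE];  /* gen bit HW wrote into each descriptor */
        uint16_t next_to_clean;         /* SW position on the ring */
        bool sw_gen;                    /* gen value SW expects for new writebacks */
};

/* True if the descriptor at next_to_clean is a fresh writeback. */
static bool demo_desc_is_new(const struct demo_ring *r)
{
        return r->desc_gen[r->next_to_clean] == r->sw_gen;
}

static void demo_advance(struct demo_ring *r)
{
        if (++r->next_to_clean == DEMO_RING_SIZE) {
                r->next_to_clean = 0;
                r->sw_gen = !r->sw_gen;         /* flip on every ring wrap */
        }
}

int main(void)
{
        struct demo_ring r = { .sw_gen = true };
        int i;

        /* Pretend HW wrote back the first three descriptors with gen = 1 */
        for (i = 0; i < 3; i++)
                r.desc_gen[i] = true;

        while (demo_desc_is_new(&r)) {
                printf("descriptor %u is a new writeback\n",
                       (unsigned int)r.next_to_clean);
                demo_advance(&r);
        }

        return 0;
}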
+
+/**
+ * struct idpf_vec_regs
+ * @dyn_ctl_reg: Dynamic control interrupt register offset
+ * @itrn_reg: Interrupt Throttling Rate register offset
+ * @itrn_index_spacing: Register spacing between ITR registers of the same
+ * vector
+ */
+struct idpf_vec_regs {
+ u32 dyn_ctl_reg;
+ u32 itrn_reg;
+ u32 itrn_index_spacing;
+};
+
+/**
+ * struct idpf_intr_reg
+ * @dyn_ctl: Dynamic control interrupt register
+ * @dyn_ctl_intena_m: Mask for dyn_ctl interrupt enable
+ * @dyn_ctl_itridx_s: Register bit offset for ITR index
+ * @dyn_ctl_itridx_m: Mask for ITR index
+ * @dyn_ctl_intrvl_s: Register bit offset for ITR interval
+ * @rx_itr: RX ITR register
+ * @tx_itr: TX ITR register
+ * @icr_ena: Interrupt cause register offset
+ * @icr_ena_ctlq_m: Mask for ICR
+ */
+struct idpf_intr_reg {
+ void __iomem *dyn_ctl;
+ u32 dyn_ctl_intena_m;
+ u32 dyn_ctl_itridx_s;
+ u32 dyn_ctl_itridx_m;
+ u32 dyn_ctl_intrvl_s;
+ void __iomem *rx_itr;
+ void __iomem *tx_itr;
+ void __iomem *icr_ena;
+ u32 icr_ena_ctlq_m;
+};
+
+/**
+ * struct idpf_q_vector
+ * @vport: Vport back pointer
+ * @affinity_mask: CPU affinity mask
+ * @napi: napi handler
+ * @v_idx: Vector index
+ * @intr_reg: See struct idpf_intr_reg
+ * @num_txq: Number of TX queues
+ * @tx: Array of TX queues to service
+ * @tx_dim: Data for TX net_dim algorithm
+ * @tx_itr_value: TX interrupt throttling rate
+ * @tx_intr_mode: Dynamic ITR or not
+ * @tx_itr_idx: TX ITR index
+ * @num_rxq: Number of RX queues
+ * @rx: Array of RX queues to service
+ * @rx_dim: Data for RX net_dim algorithm
+ * @rx_itr_value: RX interrupt throttling rate
+ * @rx_intr_mode: Dynamic ITR or not
+ * @rx_itr_idx: RX ITR index
+ * @num_bufq: Number of buffer queues
+ * @bufq: Array of buffer queues to service
+ * @total_events: Number of interrupts processed
+ * @name: Queue vector name
+ */
+struct idpf_q_vector {
+ struct idpf_vport *vport;
+ cpumask_t affinity_mask;
+ struct napi_struct napi;
+ u16 v_idx;
+ struct idpf_intr_reg intr_reg;
+
+ u16 num_txq;
+ struct idpf_queue **tx;
+ struct dim tx_dim;
+ u16 tx_itr_value;
+ bool tx_intr_mode;
+ u32 tx_itr_idx;
+
+ u16 num_rxq;
+ struct idpf_queue **rx;
+ struct dim rx_dim;
+ u16 rx_itr_value;
+ bool rx_intr_mode;
+ u32 rx_itr_idx;
+
+ u16 num_bufq;
+ struct idpf_queue **bufq;
+
+ u16 total_events;
+ char *name;
+};
+
+struct idpf_rx_queue_stats {
+ u64_stats_t packets;
+ u64_stats_t bytes;
+ u64_stats_t rsc_pkts;
+ u64_stats_t hw_csum_err;
+ u64_stats_t hsplit_pkts;
+ u64_stats_t hsplit_buf_ovf;
+ u64_stats_t bad_descs;
+};
+
+struct idpf_tx_queue_stats {
+ u64_stats_t packets;
+ u64_stats_t bytes;
+ u64_stats_t lso_pkts;
+ u64_stats_t linearize;
+ u64_stats_t q_busy;
+ u64_stats_t skb_drops;
+ u64_stats_t dma_map_errs;
+};
+
+struct idpf_cleaned_stats {
+ u32 packets;
+ u32 bytes;
+};
+
+union idpf_queue_stats {
+ struct idpf_rx_queue_stats rx;
+ struct idpf_tx_queue_stats tx;
+};
+
+#define IDPF_ITR_DYNAMIC 1
+#define IDPF_ITR_MAX 0x1FE0
+#define IDPF_ITR_20K 0x0032
+#define IDPF_ITR_GRAN_S 1 /* Assume ITR granularity is 2us */
+#define IDPF_ITR_MASK 0x1FFE /* ITR register value alignment mask */
+#define ITR_REG_ALIGN(setting) ((setting) & IDPF_ITR_MASK)
+#define IDPF_ITR_IS_DYNAMIC(itr_mode) (itr_mode)
+#define IDPF_ITR_TX_DEF IDPF_ITR_20K
+#define IDPF_ITR_RX_DEF IDPF_ITR_20K
+/* Index used for 'No ITR' update in DYN_CTL register */
+#define IDPF_NO_ITR_UPDATE_IDX 3
+#define IDPF_ITR_IDX_SPACING(spacing, dflt) (spacing ? spacing : dflt)
+#define IDPF_DIM_DEFAULT_PROFILE_IX 1
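
A rough standalone illustration of what the alignment mask above does; the DEMO_* names and sample settings are assumptions made for the example, not driver code:

/* Applying the same 0x1FFE mask clears bit 0, i.e. rounds an ITR setting
 * down to the assumed 2us register granularity.
 */
#include <stdio.h>

#define DEMO_ITR_MASK           0x1FFEu
#define DEMO_REG_ALIGN(setting) ((setting) & DEMO_ITR_MASK)

int main(void)
{
        /* An odd setting such as 0x0033 is rounded down to 0x0032 */
        printf("0x%04x -> 0x%04x\n", 0x0033u, DEMO_REG_ALIGN(0x0033u));
        /* A value like the 20K default (0x0032) is already aligned */
        printf("0x%04x -> 0x%04x\n", 0x0032u, DEMO_REG_ALIGN(0x0032u));

        return 0;
}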
+
+/**
+ * struct idpf_queue
+ * @dev: Device back pointer for DMA mapping
+ * @vport: Back pointer to associated vport
+ * @txq_grp: See struct idpf_txq_group
+ * @rxq_grp: See struct idpf_rxq_group
+ * @idx: For buffer queue, it is used as group id, either 0 or 1. On clean,
+ * buffer queue uses this index to determine which group of refill queues
+ * to clean.
+ * For TX queue, it is used as index to map between TX queue group and
+ * hot path TX pointers stored in vport. Used in both singleq/splitq.
+ * For RX queue, it is used as an index into the total RX queues across
+ * groups and is used for skb reporting.
+ * @tail: Tail offset. Used for both queue models single and split. In splitq
+ * model relevant only for TX queue and RX queue.
+ * @tx_buf: See struct idpf_tx_buf
+ * @rx_buf: Struct with RX buffer related members
+ * @rx_buf.buf: See struct idpf_rx_buf
+ * @rx_buf.hdr_buf_pa: DMA handle
+ * @rx_buf.hdr_buf_va: Virtual address
+ * @pp: Page pool pointer
+ * @skb: Pointer to the skb
+ * @q_type: Queue type (TX, RX, TX completion, RX buffer)
+ * @q_id: Queue id
+ * @desc_count: Number of descriptors
+ * @next_to_use: Next descriptor to use. Relevant in both split & single txq
+ * and bufq.
+ * @next_to_clean: Next descriptor to clean. In split queue model, only
+ * relevant to TX completion queue and RX queue.
+ * @next_to_alloc: RX buffer to allocate at. Used only for RX. In splitq model
+ * only relevant to RX queue.
+ * @flags: See enum idpf_queue_flags_t
+ * @q_stats: See union idpf_queue_stats
+ * @stats_sync: See struct u64_stats_sync
+ * @cleaned_bytes: Splitq only, TXQ only: When a TX completion is received on
+ * the TX completion queue, it can be for any TXQ associated
+ * with that completion queue. This means we can clean up to
+ * N TXQs during a single call to clean the completion queue.
+ * cleaned_bytes|pkts tracks the clean stats per TXQ during
+ * that single call to clean the completion queue. By doing so,
+ * we can update BQL with aggregate cleaned stats for each TXQ
+ * only once at the end of the cleaning routine.
+ * @cleaned_pkts: Number of packets cleaned for the above said case
+ * @rx_hsplit_en: RX headsplit enable
+ * @rx_hbuf_size: Header buffer size
+ * @rx_buf_size: Buffer size
+ * @rx_max_pkt_size: RX max packet size
+ * @rx_buf_stride: RX buffer stride
+ * @rx_buffer_low_watermark: RX buffer low watermark
+ * @rxdids: Supported RX descriptor ids
+ * @q_vector: Backreference to associated vector
+ * @size: Length of descriptor ring in bytes
+ * @dma: Physical address of ring
+ * @desc_ring: Descriptor ring memory
+ * @tx_max_bufs: Max buffers that can be transmitted with scatter-gather
+ * @tx_min_pkt_len: Min supported packet length
+ * @num_completions: Only relevant for TX completion queue. It tracks the
+ * number of completions received to compare against the
+ * number of completions pending, as accumulated by the
+ * TX queues.
+ * @buf_stack: Stack of empty buffers to store buffer info for out of order
+ * buffer completions. See struct idpf_buf_lifo.
+ * @compl_tag_bufid_m: Completion tag buffer id mask
+ * @compl_tag_gen_s: Completion tag generation bit
+ * The format of the completion tag will change based on the TXQ
+ * descriptor ring size so that we can maintain roughly the same level
+ * of "uniqueness" across all descriptor sizes. For example, if the
+ * TXQ descriptor ring size is 64 (the minimum size supported), the
+ * completion tag will be formatted as below:
+ * 15 6 5 0
+ * --------------------------------
+ * | GEN=0-1023 |IDX = 0-63|
+ * --------------------------------
+ *
+ * This gives us 64*1024 = 65536 possible unique values. Similarly, if
+ * the TXQ descriptor ring size is 8160 (the maximum size supported),
+ * the completion tag will be formatted as below:
+ * 15 13 12 0
+ * --------------------------------
+ * |GEN | IDX = 0-8159 |
+ * --------------------------------
+ *
+ * This gives us 8*8160 = 65280 possible unique values.
+ * @compl_tag_cur_gen: Used to keep track of current completion tag generation
+ * @compl_tag_gen_max: To determine when compl_tag_cur_gen should be reset
+ * @sched_buf_hash: Hash table to store buffers
+ */
+struct idpf_queue {
+ struct device *dev;
+ struct idpf_vport *vport;
+ union {
+ struct idpf_txq_group *txq_grp;
+ struct idpf_rxq_group *rxq_grp;
+ };
+ u16 idx;
+ void __iomem *tail;
+ union {
+ struct idpf_tx_buf *tx_buf;
+ struct {
+ struct idpf_rx_buf *buf;
+ dma_addr_t hdr_buf_pa;
+ void *hdr_buf_va;
+ } rx_buf;
+ };
+ struct page_pool *pp;
+ struct sk_buff *skb;
+ u16 q_type;
+ u32 q_id;
+ u16 desc_count;
+
+ u16 next_to_use;
+ u16 next_to_clean;
+ u16 next_to_alloc;
+ DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
+
+ union idpf_queue_stats q_stats;
+ struct u64_stats_sync stats_sync;
+
+ u32 cleaned_bytes;
+ u16 cleaned_pkts;
+
+ bool rx_hsplit_en;
+ u16 rx_hbuf_size;
+ u16 rx_buf_size;
+ u16 rx_max_pkt_size;
+ u16 rx_buf_stride;
+ u8 rx_buffer_low_watermark;
+ u64 rxdids;
+ struct idpf_q_vector *q_vector;
+ unsigned int size;
+ dma_addr_t dma;
+ void *desc_ring;
+
+ u16 tx_max_bufs;
+ u8 tx_min_pkt_len;
+
+ u32 num_completions;
+
+ struct idpf_buf_lifo buf_stack;
+
+ u16 compl_tag_bufid_m;
+ u16 compl_tag_gen_s;
+
+ u16 compl_tag_cur_gen;
+ u16 compl_tag_gen_max;
+
+ DECLARE_HASHTABLE(sched_buf_hash, 12);
+} ____cacheline_internodealigned_in_smp;
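
The completion-tag split documented above (6 index bits and 10 generation bits for the minimum 64-entry ring) can be sketched in isolation; the demo_* names and helpers are hypothetical and only mirror the documented layout:

#include <stdint.h>
#include <stdio.h>

#define DEMO_RING_SIZE  64
#define DEMO_IDX_BITS   6                       /* log2(DEMO_RING_SIZE) */
#define DEMO_IDX_MASK   (DEMO_RING_SIZE - 1)    /* 0x003F */
#define DEMO_GEN_MASK   (0xFFFFu & ~DEMO_IDX_MASK)

/* Pack a generation count and a ring index into one 16-bit tag. */
static uint16_t demo_tag_pack(uint16_t gen, uint16_t idx)
{
        return (uint16_t)((gen << DEMO_IDX_BITS) | (idx & DEMO_IDX_MASK));
}

int main(void)
{
        uint16_t tag = demo_tag_pack(1023, 63); /* max gen, max idx */

        /* 1024 generations x 64 indices = 65536 unique tags, as noted above */
        printf("tag=0x%04x gen=%u idx=%u\n", (unsigned int)tag,
               (unsigned int)((tag & DEMO_GEN_MASK) >> DEMO_IDX_BITS),
               (unsigned int)(tag & DEMO_IDX_MASK));

        return 0;
}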
+
+/**
+ * struct idpf_sw_queue
+ * @next_to_clean: Next descriptor to clean
+ * @next_to_alloc: Buffer to allocate at
+ * @flags: See enum idpf_queue_flags_t
+ * @ring: Pointer to the ring
+ * @desc_count: Descriptor count
+ * @dev: Device back pointer for DMA mapping
+ *
+ * Software queues are used in splitq mode to manage buffers between the rxq
+ * producer and the bufq consumer. These are required in order to maintain a
+ * lockless buffer management system and are strictly software only constructs.
+ */
+struct idpf_sw_queue {
+ u16 next_to_clean;
+ u16 next_to_alloc;
+ DECLARE_BITMAP(flags, __IDPF_Q_FLAGS_NBITS);
+ u16 *ring;
+ u16 desc_count;
+ struct device *dev;
+} ____cacheline_internodealigned_in_smp;
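
One way such a lockless single-producer/single-consumer refill ring can be modelled is sketched below. This is only a conceptual model under the assumption that each entry carries its own validity marker; it is not the driver's actual refill queue implementation:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEMO_RING_ENTRIES 4

struct demo_refill_entry {
        uint16_t buf_id;
        bool valid;
};

struct demo_refillq {
        struct demo_refill_entry ring[DEMO_RING_ENTRIES];
        uint16_t next_to_use;   /* producer (rxq) index */
        uint16_t next_to_clean; /* consumer (bufq) index */
};

/* Producer side: an rxq hands back a used buffer id. */
static bool demo_refillq_put(struct demo_refillq *q, uint16_t buf_id)
{
        struct demo_refill_entry *e = &q->ring[q->next_to_use];

        if (e->valid)           /* ring full */
                return false;
        e->buf_id = buf_id;
        e->valid = true;
        q->next_to_use = (q->next_to_use + 1) % DEMO_RING_ENTRIES;
        return true;
}

/* Consumer side: a bufq pulls a buffer id to repost to hardware. */
static bool demo_refillq_get(struct demo_refillq *q, uint16_t *buf_id)
{
        struct demo_refill_entry *e = &q->ring[q->next_to_clean];

        if (!e->valid)          /* ring empty */
                return false;
        *buf_id = e->buf_id;
        e->valid = false;
        q->next_to_clean = (q->next_to_clean + 1) % DEMO_RING_ENTRIES;
        return true;
}

int main(void)
{
        struct demo_refillq q = { 0 };
        uint16_t id;

        demo_refillq_put(&q, 7);
        demo_refillq_put(&q, 8);
        while (demo_refillq_get(&q, &id))
                printf("bufq reclaimed buffer id %u\n", (unsigned int)id);

        return 0;
}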
+
+/**
+ * struct idpf_rxq_set
+ * @rxq: RX queue
+ * @refillq0: Pointer to refill queue 0
+ * @refillq1: Pointer to refill queue 1
+ *
+ * Splitq only. idpf_rxq_set associates an rxq with an array of refillqs.
+ * Each rxq needs a refillq to return used buffers back to the respective bufq.
+ * Bufqs then clean these refillqs for buffers to give to hardware.
+ */
+struct idpf_rxq_set {
+ struct idpf_queue rxq;
+ struct idpf_sw_queue *refillq0;
+ struct idpf_sw_queue *refillq1;
+};
+
+/**
+ * struct idpf_bufq_set
+ * @bufq: Buffer queue
+ * @num_refillqs: Number of refill queues. This is always equal to num_rxq_sets
+ * in idpf_rxq_group.
+ * @refillqs: Pointer to refill queues array.
+ *
+ * Splitq only. idpf_bufq_set associates a bufq with an array of refillqs.
+ * In this bufq_set, there will be one refillq for each rxq in this rxq_group.
+ * Used buffers received by rxqs will be put on refillqs which bufqs will
+ * clean to return new buffers back to hardware.
+ *
+ * Buffers needed by some number of rxqs associated in this rxq_group are
+ * managed by at most two bufqs (depending on performance configuration).
+ */
+struct idpf_bufq_set {
+ struct idpf_queue bufq;
+ int num_refillqs;
+ struct idpf_sw_queue *refillqs;
+};
+
+/**
+ * struct idpf_rxq_group
+ * @vport: Vport back pointer
+ * @singleq: Struct with single queue related members
+ * @singleq.num_rxq: Number of RX queues associated
+ * @singleq.rxqs: Array of RX queue pointers
+ * @splitq: Struct with split queue related members
+ * @splitq.num_rxq_sets: Number of RX queue sets
+ * @splitq.rxq_sets: Array of RX queue sets
+ * @splitq.bufq_sets: Buffer queue set pointer
+ *
+ * In singleq mode, an rxq_group is simply an array of rxqs. In splitq, a
+ * rxq_group contains all the rxqs, bufqs and refillqs needed to
+ * manage buffers in splitq mode.
+ */
+struct idpf_rxq_group {
+ struct idpf_vport *vport;
+
+ union {
+ struct {
+ u16 num_rxq;
+ struct idpf_queue *rxqs[IDPF_LARGE_MAX_Q];
+ } singleq;
+ struct {
+ u16 num_rxq_sets;
+ struct idpf_rxq_set *rxq_sets[IDPF_LARGE_MAX_Q];
+ struct idpf_bufq_set *bufq_sets;
+ } splitq;
+ };
+};
+
+/**
+ * struct idpf_txq_group
+ * @vport: Vport back pointer
+ * @num_txq: Number of TX queues associated
+ * @txqs: Array of TX queue pointers
+ * @complq: Associated completion queue pointer, split queue only
+ * @num_completions_pending: Total number of completions pending for the
+ * completion queue, accumulated for all TX queues
+ * associated with that completion queue.
+ *
+ * Between singleq and splitq, a txq_group is largely the same except for the
+ * complq. In splitq a single complq is responsible for handling completions
+ * for some number of txqs associated in this txq_group.
+ */
+struct idpf_txq_group {
+ struct idpf_vport *vport;
+
+ u16 num_txq;
+ struct idpf_queue *txqs[IDPF_LARGE_MAX_Q];
+
+ struct idpf_queue *complq;
+
+ u32 num_completions_pending;
+};
+
+/**
+ * idpf_size_to_txd_count - Get number of descriptors needed for large Tx frag
+ * @size: transmit request size in bytes
+ *
+ * In the case where a large frag (>= 16K) needs to be split across multiple
+ * descriptors, we need to assume that we can have no more than 12K of data
+ * per descriptor due to hardware alignment restrictions (4K alignment).
+ */
+static inline u32 idpf_size_to_txd_count(unsigned int size)
+{
+ return DIV_ROUND_UP(size, IDPF_TX_MAX_DESC_DATA_ALIGNED);
+}
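
A quick worked example of this calculation, assuming the 12K per-descriptor limit mentioned above (the DEMO_* names and values are illustrative, not driver code):

#include <stdio.h>

#define DEMO_MAX_DESC_DATA_ALIGNED (12 * 1024)
#define DEMO_DIV_ROUND_UP(n, d)    (((n) + (d) - 1) / (d))

int main(void)
{
        unsigned int frag = 16 * 1024;

        /* 16K of data split at 12K per descriptor needs 2 descriptors */
        printf("%u bytes -> %u descriptors\n", frag,
               DEMO_DIV_ROUND_UP(frag, DEMO_MAX_DESC_DATA_ALIGNED));

        return 0;
}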
+
+/**
+ * idpf_tx_singleq_build_ctob - populate command tag offset and size
+ * @td_cmd: Command to be filled in desc
+ * @td_offset: Offset to be filled in desc
+ * @size: Size of the buffer
+ * @td_tag: td tag to be filled
+ *
+ * Returns the 64 bit value populated with the input parameters
+ */
+static inline __le64 idpf_tx_singleq_build_ctob(u64 td_cmd, u64 td_offset,
+ unsigned int size, u64 td_tag)
+{
+ return cpu_to_le64(IDPF_TX_DESC_DTYPE_DATA |
+ (td_cmd << IDPF_TXD_QW1_CMD_S) |
+ (td_offset << IDPF_TXD_QW1_OFFSET_S) |
+ ((u64)size << IDPF_TXD_QW1_TX_BUF_SZ_S) |
+ (td_tag << IDPF_TXD_QW1_L2TAG1_S));
+}
+
+void idpf_tx_splitq_build_ctb(union idpf_tx_flex_desc *desc,
+ struct idpf_tx_splitq_params *params,
+ u16 td_cmd, u16 size);
+void idpf_tx_splitq_build_flow_desc(union idpf_tx_flex_desc *desc,
+ struct idpf_tx_splitq_params *params,
+ u16 td_cmd, u16 size);
+/**
+ * idpf_tx_splitq_build_desc - determine which type of data descriptor to build
+ * @desc: descriptor to populate
+ * @params: pointer to tx params struct
+ * @td_cmd: command to be filled in desc
+ * @size: size of buffer
+ */
+static inline void idpf_tx_splitq_build_desc(union idpf_tx_flex_desc *desc,
+ struct idpf_tx_splitq_params *params,
+ u16 td_cmd, u16 size)
+{
+ if (params->dtype == IDPF_TX_DESC_DTYPE_FLEX_L2TAG1_L2TAG2)
+ idpf_tx_splitq_build_ctb(desc, params, td_cmd, size);
+ else
+ idpf_tx_splitq_build_flow_desc(desc, params, td_cmd, size);
+}
+
+/**
+ * idpf_alloc_page - Allocate a new RX buffer from the page pool
+ * @pool: page_pool to allocate from
+ * @buf: metadata struct to populate with page info
+ * @buf_size: 2K or 4K
+ *
+ * Returns &dma_addr_t to be passed to HW for Rx, %DMA_MAPPING_ERROR otherwise.
+ */
+static inline dma_addr_t idpf_alloc_page(struct page_pool *pool,
+ struct idpf_rx_buf *buf,
+ unsigned int buf_size)
+{
+ if (buf_size == IDPF_RX_BUF_2048)
+ buf->page = page_pool_dev_alloc_frag(pool, &buf->page_offset,
+ buf_size);
+ else
+ buf->page = page_pool_dev_alloc_pages(pool);
+
+ if (!buf->page)
+ return DMA_MAPPING_ERROR;
+
+ buf->truesize = buf_size;
+
+ return page_pool_get_dma_addr(buf->page) + buf->page_offset +
+ pool->p.offset;
+}
+
+/**
+ * idpf_rx_put_page - Return RX buffer page to pool
+ * @rx_buf: RX buffer metadata struct
+ */
+static inline void idpf_rx_put_page(struct idpf_rx_buf *rx_buf)
+{
+ page_pool_put_page(rx_buf->page->pp, rx_buf->page,
+ rx_buf->truesize, true);
+ rx_buf->page = NULL;
+}
+
+/**
+ * idpf_rx_sync_for_cpu - Synchronize DMA buffer
+ * @rx_buf: RX buffer metadata struct
+ * @len: frame length from descriptor
+ */
+static inline void idpf_rx_sync_for_cpu(struct idpf_rx_buf *rx_buf, u32 len)
+{
+ struct page *page = rx_buf->page;
+ struct page_pool *pp = page->pp;
+
+ dma_sync_single_range_for_cpu(pp->p.dev,
+ page_pool_get_dma_addr(page),
+ rx_buf->page_offset + pp->p.offset, len,
+ page_pool_get_dma_dir(pp));
+}
+
+int idpf_vport_singleq_napi_poll(struct napi_struct *napi, int budget);
+void idpf_vport_init_num_qs(struct idpf_vport *vport,
+ struct virtchnl2_create_vport *vport_msg);
+void idpf_vport_calc_num_q_desc(struct idpf_vport *vport);
+int idpf_vport_calc_total_qs(struct idpf_adapter *adapter, u16 vport_index,
+ struct virtchnl2_create_vport *vport_msg,
+ struct idpf_vport_max_q *max_q);
+void idpf_vport_calc_num_q_groups(struct idpf_vport *vport);
+int idpf_vport_queues_alloc(struct idpf_vport *vport);
+void idpf_vport_queues_rel(struct idpf_vport *vport);
+void idpf_vport_intr_rel(struct idpf_vport *vport);
+int idpf_vport_intr_alloc(struct idpf_vport *vport);
+void idpf_vport_intr_update_itr_ena_irq(struct idpf_q_vector *q_vector);
+void idpf_vport_intr_deinit(struct idpf_vport *vport);
+int idpf_vport_intr_init(struct idpf_vport *vport);
+enum pkt_hash_types idpf_ptype_to_htype(const struct idpf_rx_ptype_decoded *decoded);
+int idpf_config_rss(struct idpf_vport *vport);
+int idpf_init_rss(struct idpf_vport *vport);
+void idpf_deinit_rss(struct idpf_vport *vport);
+int idpf_rx_bufs_init_all(struct idpf_vport *vport);
+void idpf_rx_add_frag(struct idpf_rx_buf *rx_buf, struct sk_buff *skb,
+ unsigned int size);
+struct sk_buff *idpf_rx_construct_skb(struct idpf_queue *rxq,
+ struct idpf_rx_buf *rx_buf,
+ unsigned int size);
+bool idpf_init_rx_buf_hw_alloc(struct idpf_queue *rxq, struct idpf_rx_buf *buf);
+void idpf_rx_buf_hw_update(struct idpf_queue *rxq, u32 val);
+void idpf_tx_buf_hw_update(struct idpf_queue *tx_q, u32 val,
+ bool xmit_more);
+unsigned int idpf_size_to_txd_count(unsigned int size);
+netdev_tx_t idpf_tx_drop_skb(struct idpf_queue *tx_q, struct sk_buff *skb);
+void idpf_tx_dma_map_error(struct idpf_queue *txq, struct sk_buff *skb,
+ struct idpf_tx_buf *first, u16 ring_idx);
+unsigned int idpf_tx_desc_count_required(struct idpf_queue *txq,
+ struct sk_buff *skb);
+bool idpf_chk_linearize(struct sk_buff *skb, unsigned int max_bufs,
+ unsigned int count);
+int idpf_tx_maybe_stop_common(struct idpf_queue *tx_q, unsigned int size);
+void idpf_tx_timeout(struct net_device *netdev, unsigned int txqueue);
+netdev_tx_t idpf_tx_splitq_start(struct sk_buff *skb,
+ struct net_device *netdev);
+netdev_tx_t idpf_tx_singleq_start(struct sk_buff *skb,
+ struct net_device *netdev);
+bool idpf_rx_singleq_buf_hw_alloc_all(struct idpf_queue *rxq,
+ u16 cleaned_count);
+int idpf_tso(struct sk_buff *skb, struct idpf_tx_offload_params *off);
+
+#endif /* !_IDPF_TXRX_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
new file mode 100644
index 000000000000..8ade4e3a9fe1
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_vf_dev.c
@@ -0,0 +1,163 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2023 Intel Corporation */
+
+#include "idpf.h"
+#include "idpf_lan_vf_regs.h"
+
+#define IDPF_VF_ITR_IDX_SPACING 0x40
+
+/**
+ * idpf_vf_ctlq_reg_init - initialize default mailbox registers
+ * @cq: pointer to the array of create control queues
+ */
+static void idpf_vf_ctlq_reg_init(struct idpf_ctlq_create_info *cq)
+{
+ int i;
+
+ for (i = 0; i < IDPF_NUM_DFLT_MBX_Q; i++) {
+ struct idpf_ctlq_create_info *ccq = cq + i;
+
+ switch (ccq->type) {
+ case IDPF_CTLQ_TYPE_MAILBOX_TX:
+ /* set head and tail registers in our local struct */
+ ccq->reg.head = VF_ATQH;
+ ccq->reg.tail = VF_ATQT;
+ ccq->reg.len = VF_ATQLEN;
+ ccq->reg.bah = VF_ATQBAH;
+ ccq->reg.bal = VF_ATQBAL;
+ ccq->reg.len_mask = VF_ATQLEN_ATQLEN_M;
+ ccq->reg.len_ena_mask = VF_ATQLEN_ATQENABLE_M;
+ ccq->reg.head_mask = VF_ATQH_ATQH_M;
+ break;
+ case IDPF_CTLQ_TYPE_MAILBOX_RX:
+ /* set head and tail registers in our local struct */
+ ccq->reg.head = VF_ARQH;
+ ccq->reg.tail = VF_ARQT;
+ ccq->reg.len = VF_ARQLEN;
+ ccq->reg.bah = VF_ARQBAH;
+ ccq->reg.bal = VF_ARQBAL;
+ ccq->reg.len_mask = VF_ARQLEN_ARQLEN_M;
+ ccq->reg.len_ena_mask = VF_ARQLEN_ARQENABLE_M;
+ ccq->reg.head_mask = VF_ARQH_ARQH_M;
+ break;
+ default:
+ break;
+ }
+ }
+}
+
+/**
+ * idpf_vf_mb_intr_reg_init - Initialize the mailbox register
+ * @adapter: adapter structure
+ */
+static void idpf_vf_mb_intr_reg_init(struct idpf_adapter *adapter)
+{
+ struct idpf_intr_reg *intr = &adapter->mb_vector.intr_reg;
+ u32 dyn_ctl = le32_to_cpu(adapter->caps.mailbox_dyn_ctl);
+
+ intr->dyn_ctl = idpf_get_reg_addr(adapter, dyn_ctl);
+ intr->dyn_ctl_intena_m = VF_INT_DYN_CTL0_INTENA_M;
+ intr->dyn_ctl_itridx_m = VF_INT_DYN_CTL0_ITR_INDX_M;
+ intr->icr_ena = idpf_get_reg_addr(adapter, VF_INT_ICR0_ENA1);
+ intr->icr_ena_ctlq_m = VF_INT_ICR0_ENA1_ADMINQ_M;
+}
+
+/**
+ * idpf_vf_intr_reg_init - Initialize interrupt registers
+ * @vport: virtual port structure
+ */
+static int idpf_vf_intr_reg_init(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ int num_vecs = vport->num_q_vectors;
+ struct idpf_vec_regs *reg_vals;
+ int num_regs, i, err = 0;
+ u32 rx_itr, tx_itr;
+ u16 total_vecs;
+
+ total_vecs = idpf_get_reserved_vecs(vport->adapter);
+ reg_vals = kcalloc(total_vecs, sizeof(struct idpf_vec_regs),
+ GFP_KERNEL);
+ if (!reg_vals)
+ return -ENOMEM;
+
+ num_regs = idpf_get_reg_intr_vecs(vport, reg_vals);
+ if (num_regs < num_vecs) {
+ err = -EINVAL;
+ goto free_reg_vals;
+ }
+
+ for (i = 0; i < num_vecs; i++) {
+ struct idpf_q_vector *q_vector = &vport->q_vectors[i];
+ u16 vec_id = vport->q_vector_idxs[i] - IDPF_MBX_Q_VEC;
+ struct idpf_intr_reg *intr = &q_vector->intr_reg;
+ u32 spacing;
+
+ intr->dyn_ctl = idpf_get_reg_addr(adapter,
+ reg_vals[vec_id].dyn_ctl_reg);
+ intr->dyn_ctl_intena_m = VF_INT_DYN_CTLN_INTENA_M;
+ intr->dyn_ctl_itridx_s = VF_INT_DYN_CTLN_ITR_INDX_S;
+
+ spacing = IDPF_ITR_IDX_SPACING(reg_vals[vec_id].itrn_index_spacing,
+ IDPF_VF_ITR_IDX_SPACING);
+ rx_itr = VF_INT_ITRN_ADDR(VIRTCHNL2_ITR_IDX_0,
+ reg_vals[vec_id].itrn_reg,
+ spacing);
+ tx_itr = VF_INT_ITRN_ADDR(VIRTCHNL2_ITR_IDX_1,
+ reg_vals[vec_id].itrn_reg,
+ spacing);
+ intr->rx_itr = idpf_get_reg_addr(adapter, rx_itr);
+ intr->tx_itr = idpf_get_reg_addr(adapter, tx_itr);
+ }
+
+free_reg_vals:
+ kfree(reg_vals);
+
+ return err;
+}
+
+/**
+ * idpf_vf_reset_reg_init - Initialize reset registers
+ * @adapter: Driver specific private structure
+ */
+static void idpf_vf_reset_reg_init(struct idpf_adapter *adapter)
+{
+ adapter->reset_reg.rstat = idpf_get_reg_addr(adapter, VFGEN_RSTAT);
+ adapter->reset_reg.rstat_m = VFGEN_RSTAT_VFR_STATE_M;
+}
+
+/**
+ * idpf_vf_trigger_reset - trigger reset
+ * @adapter: Driver specific private structure
+ * @trig_cause: Reason to trigger a reset
+ */
+static void idpf_vf_trigger_reset(struct idpf_adapter *adapter,
+ enum idpf_flags trig_cause)
+{
+ /* Do not send VIRTCHNL2_OP_RESET_VF message on driver unload */
+ if (trig_cause == IDPF_HR_FUNC_RESET &&
+ !test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
+ idpf_send_mb_msg(adapter, VIRTCHNL2_OP_RESET_VF, 0, NULL);
+}
+
+/**
+ * idpf_vf_reg_ops_init - Initialize register API function pointers
+ * @adapter: Driver specific private structure
+ */
+static void idpf_vf_reg_ops_init(struct idpf_adapter *adapter)
+{
+ adapter->dev_ops.reg_ops.ctlq_reg_init = idpf_vf_ctlq_reg_init;
+ adapter->dev_ops.reg_ops.intr_reg_init = idpf_vf_intr_reg_init;
+ adapter->dev_ops.reg_ops.mb_intr_reg_init = idpf_vf_mb_intr_reg_init;
+ adapter->dev_ops.reg_ops.reset_reg_init = idpf_vf_reset_reg_init;
+ adapter->dev_ops.reg_ops.trigger_reset = idpf_vf_trigger_reset;
+}
+
+/**
+ * idpf_vf_dev_ops_init - Initialize device API function pointers
+ * @adapter: Driver specific private structure
+ */
+void idpf_vf_dev_ops_init(struct idpf_adapter *adapter)
+{
+ idpf_vf_reg_ops_init(adapter);
+}
diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
new file mode 100644
index 000000000000..2c1b051fdc0d
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
@@ -0,0 +1,3798 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (C) 2023 Intel Corporation */
+
+#include "idpf.h"
+
+/**
+ * idpf_recv_event_msg - Receive virtchnl event message
+ * @vport: virtual port structure
+ * @ctlq_msg: message to copy from
+ *
+ * Receive virtchnl event message
+ */
+static void idpf_recv_event_msg(struct idpf_vport *vport,
+ struct idpf_ctlq_msg *ctlq_msg)
+{
+ struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+ struct virtchnl2_event *v2e;
+ bool link_status;
+ u32 event;
+
+ v2e = (struct virtchnl2_event *)ctlq_msg->ctx.indirect.payload->va;
+ event = le32_to_cpu(v2e->event);
+
+ switch (event) {
+ case VIRTCHNL2_EVENT_LINK_CHANGE:
+ vport->link_speed_mbps = le32_to_cpu(v2e->link_speed);
+ link_status = v2e->link_status;
+
+ if (vport->link_up == link_status)
+ break;
+
+ vport->link_up = link_status;
+ if (np->state == __IDPF_VPORT_UP) {
+ if (vport->link_up) {
+ netif_carrier_on(vport->netdev);
+ netif_tx_start_all_queues(vport->netdev);
+ } else {
+ netif_tx_stop_all_queues(vport->netdev);
+ netif_carrier_off(vport->netdev);
+ }
+ }
+ break;
+ default:
+ dev_err(&vport->adapter->pdev->dev,
+ "Unknown event %d from PF\n", event);
+ break;
+ }
+}
+
+/**
+ * idpf_mb_clean - Reclaim the send mailbox queue entries
+ * @adapter: Driver specific private structure
+ *
+ * Reclaim the send mailbox queue entries to be used to send further messages
+ *
+ * Returns 0 on success, negative on failure
+ */
+static int idpf_mb_clean(struct idpf_adapter *adapter)
+{
+ u16 i, num_q_msg = IDPF_DFLT_MBX_Q_LEN;
+ struct idpf_ctlq_msg **q_msg;
+ struct idpf_dma_mem *dma_mem;
+ int err;
+
+ q_msg = kcalloc(num_q_msg, sizeof(struct idpf_ctlq_msg *), GFP_ATOMIC);
+ if (!q_msg)
+ return -ENOMEM;
+
+ err = idpf_ctlq_clean_sq(adapter->hw.asq, &num_q_msg, q_msg);
+ if (err)
+ goto err_kfree;
+
+ for (i = 0; i < num_q_msg; i++) {
+ if (!q_msg[i])
+ continue;
+ dma_mem = q_msg[i]->ctx.indirect.payload;
+ if (dma_mem)
+ dma_free_coherent(&adapter->pdev->dev, dma_mem->size,
+ dma_mem->va, dma_mem->pa);
+ kfree(q_msg[i]);
+ kfree(dma_mem);
+ }
+
+err_kfree:
+ kfree(q_msg);
+
+ return err;
+}
+
+/**
+ * idpf_send_mb_msg - Send message over mailbox
+ * @adapter: Driver specific private structure
+ * @op: virtchnl opcode
+ * @msg_size: size of the payload
+ * @msg: pointer to buffer holding the payload
+ *
+ * Prepares the control queue message and initiates the send API
+ *
+ * Returns 0 on success, negative on failure
+ */
+int idpf_send_mb_msg(struct idpf_adapter *adapter, u32 op,
+ u16 msg_size, u8 *msg)
+{
+ struct idpf_ctlq_msg *ctlq_msg;
+ struct idpf_dma_mem *dma_mem;
+ int err;
+
+ /* If we are here and a reset is detected, nothing much can be
+ * done. This thread should silently abort and is expected to
+ * be corrected by a new run, either by user or driver flows,
+ * after the reset.
+ */
+ if (idpf_is_reset_detected(adapter))
+ return 0;
+
+ err = idpf_mb_clean(adapter);
+ if (err)
+ return err;
+
+ ctlq_msg = kzalloc(sizeof(*ctlq_msg), GFP_ATOMIC);
+ if (!ctlq_msg)
+ return -ENOMEM;
+
+ dma_mem = kzalloc(sizeof(*dma_mem), GFP_ATOMIC);
+ if (!dma_mem) {
+ err = -ENOMEM;
+ goto dma_mem_error;
+ }
+
+ ctlq_msg->opcode = idpf_mbq_opc_send_msg_to_cp;
+ ctlq_msg->func_id = 0;
+ ctlq_msg->data_len = msg_size;
+ ctlq_msg->cookie.mbx.chnl_opcode = op;
+ ctlq_msg->cookie.mbx.chnl_retval = 0;
+ dma_mem->size = IDPF_CTLQ_MAX_BUF_LEN;
+ dma_mem->va = dma_alloc_coherent(&adapter->pdev->dev, dma_mem->size,
+ &dma_mem->pa, GFP_ATOMIC);
+ if (!dma_mem->va) {
+ err = -ENOMEM;
+ goto dma_alloc_error;
+ }
+ memcpy(dma_mem->va, msg, msg_size);
+ ctlq_msg->ctx.indirect.payload = dma_mem;
+
+ err = idpf_ctlq_send(&adapter->hw, adapter->hw.asq, 1, ctlq_msg);
+ if (err)
+ goto send_error;
+
+ return 0;
+
+send_error:
+ dma_free_coherent(&adapter->pdev->dev, dma_mem->size, dma_mem->va,
+ dma_mem->pa);
+dma_alloc_error:
+ kfree(dma_mem);
+dma_mem_error:
+ kfree(ctlq_msg);
+
+ return err;
+}
+
+/**
+ * idpf_find_vport - Find vport pointer from control queue message
+ * @adapter: driver specific private structure
+ * @vport: address of vport pointer to copy the vport from the adapter's vport list
+ * @ctlq_msg: control queue message
+ *
+ * Returns 0 on success, error value on failure. This function also checks
+ * the opcodes that expect to receive a payload and returns an error if that
+ * is not the case.
+ */
+static int idpf_find_vport(struct idpf_adapter *adapter,
+ struct idpf_vport **vport,
+ struct idpf_ctlq_msg *ctlq_msg)
+{
+ bool no_op = false, vid_found = false;
+ int i, err = 0;
+ char *vc_msg;
+ u32 v_id;
+
+ vc_msg = kcalloc(IDPF_CTLQ_MAX_BUF_LEN, sizeof(char), GFP_KERNEL);
+ if (!vc_msg)
+ return -ENOMEM;
+
+ if (ctlq_msg->data_len) {
+ size_t payload_size = ctlq_msg->ctx.indirect.payload->size;
+
+ if (!payload_size) {
+ dev_err(&adapter->pdev->dev, "Failed to receive payload buffer\n");
+ kfree(vc_msg);
+
+ return -EINVAL;
+ }
+
+ memcpy(vc_msg, ctlq_msg->ctx.indirect.payload->va,
+ min_t(size_t, payload_size, IDPF_CTLQ_MAX_BUF_LEN));
+ }
+
+ switch (ctlq_msg->cookie.mbx.chnl_opcode) {
+ case VIRTCHNL2_OP_VERSION:
+ case VIRTCHNL2_OP_GET_CAPS:
+ case VIRTCHNL2_OP_CREATE_VPORT:
+ case VIRTCHNL2_OP_SET_SRIOV_VFS:
+ case VIRTCHNL2_OP_ALLOC_VECTORS:
+ case VIRTCHNL2_OP_DEALLOC_VECTORS:
+ case VIRTCHNL2_OP_GET_PTYPE_INFO:
+ goto free_vc_msg;
+ case VIRTCHNL2_OP_ENABLE_VPORT:
+ case VIRTCHNL2_OP_DISABLE_VPORT:
+ case VIRTCHNL2_OP_DESTROY_VPORT:
+ v_id = le32_to_cpu(((struct virtchnl2_vport *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+ v_id = le32_to_cpu(((struct virtchnl2_config_tx_queues *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+ v_id = le32_to_cpu(((struct virtchnl2_config_rx_queues *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ case VIRTCHNL2_OP_DEL_QUEUES:
+ v_id = le32_to_cpu(((struct virtchnl2_del_ena_dis_queues *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_ADD_QUEUES:
+ v_id = le32_to_cpu(((struct virtchnl2_add_queues *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+ case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+ v_id = le32_to_cpu(((struct virtchnl2_queue_vector_maps *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_GET_STATS:
+ v_id = le32_to_cpu(((struct virtchnl2_vport_stats *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_GET_RSS_LUT:
+ case VIRTCHNL2_OP_SET_RSS_LUT:
+ v_id = le32_to_cpu(((struct virtchnl2_rss_lut *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_GET_RSS_KEY:
+ case VIRTCHNL2_OP_SET_RSS_KEY:
+ v_id = le32_to_cpu(((struct virtchnl2_rss_key *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_EVENT:
+ v_id = le32_to_cpu(((struct virtchnl2_event *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_LOOPBACK:
+ v_id = le32_to_cpu(((struct virtchnl2_loopback *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE:
+ v_id = le32_to_cpu(((struct virtchnl2_promisc_info *)vc_msg)->vport_id);
+ break;
+ case VIRTCHNL2_OP_ADD_MAC_ADDR:
+ case VIRTCHNL2_OP_DEL_MAC_ADDR:
+ v_id = le32_to_cpu(((struct virtchnl2_mac_addr_list *)vc_msg)->vport_id);
+ break;
+ default:
+ no_op = true;
+ break;
+ }
+
+ if (no_op)
+ goto free_vc_msg;
+
+ for (i = 0; i < idpf_get_max_vports(adapter); i++) {
+ if (adapter->vport_ids[i] == v_id) {
+ vid_found = true;
+ break;
+ }
+ }
+
+ if (vid_found)
+ *vport = adapter->vports[i];
+ else
+ err = -EINVAL;
+
+free_vc_msg:
+ kfree(vc_msg);
+
+ return err;
+}
+
+/**
+ * idpf_copy_data_to_vc_buf - Copy the virtchnl response data into the buffer.
+ * @adapter: driver specific private structure
+ * @vport: virtual port structure
+ * @ctlq_msg: msg to copy from
+ * @err_enum: err bit to set on error
+ *
+ * Copies the payload from ctlq_msg into virtchnl buffer. Returns 0 on success,
+ * negative on failure.
+ */
+static int idpf_copy_data_to_vc_buf(struct idpf_adapter *adapter,
+ struct idpf_vport *vport,
+ struct idpf_ctlq_msg *ctlq_msg,
+ enum idpf_vport_vc_state err_enum)
+{
+ if (ctlq_msg->cookie.mbx.chnl_retval) {
+ if (vport)
+ set_bit(err_enum, vport->vc_state);
+ else
+ set_bit(err_enum, adapter->vc_state);
+
+ return -EINVAL;
+ }
+
+ if (vport)
+ memcpy(vport->vc_msg, ctlq_msg->ctx.indirect.payload->va,
+ min_t(int, ctlq_msg->ctx.indirect.payload->size,
+ IDPF_CTLQ_MAX_BUF_LEN));
+ else
+ memcpy(adapter->vc_msg, ctlq_msg->ctx.indirect.payload->va,
+ min_t(int, ctlq_msg->ctx.indirect.payload->size,
+ IDPF_CTLQ_MAX_BUF_LEN));
+
+ return 0;
+}
+
+/**
+ * idpf_recv_vchnl_op - helper function with common logic when handling the
+ * reception of VIRTCHNL OPs.
+ * @adapter: driver specific private structure
+ * @vport: virtual port structure
+ * @ctlq_msg: msg to copy from
+ * @state: state bit used on timeout check
+ * @err_state: err bit to set on error
+ */
+static void idpf_recv_vchnl_op(struct idpf_adapter *adapter,
+ struct idpf_vport *vport,
+ struct idpf_ctlq_msg *ctlq_msg,
+ enum idpf_vport_vc_state state,
+ enum idpf_vport_vc_state err_state)
+{
+ wait_queue_head_t *vchnl_wq;
+ int err;
+
+ if (vport)
+ vchnl_wq = &vport->vchnl_wq;
+ else
+ vchnl_wq = &adapter->vchnl_wq;
+
+ err = idpf_copy_data_to_vc_buf(adapter, vport, ctlq_msg, err_state);
+ if (wq_has_sleeper(vchnl_wq)) {
+ if (vport)
+ set_bit(state, vport->vc_state);
+ else
+ set_bit(state, adapter->vc_state);
+
+ wake_up(vchnl_wq);
+ } else {
+ if (!err) {
+ dev_warn(&adapter->pdev->dev, "opcode %d received without waiting thread\n",
+ ctlq_msg->cookie.mbx.chnl_opcode);
+ } else {
+ /* Clear the errors since there is no sleeper to pass
+ * them on
+ */
+ if (vport)
+ clear_bit(err_state, vport->vc_state);
+ else
+ clear_bit(err_state, adapter->vc_state);
+ }
+ }
+}
+
+/**
+ * idpf_recv_mb_msg - Receive message over mailbox
+ * @adapter: Driver specific private structure
+ * @op: virtchannel operation code
+ * @msg: Received message holding buffer
+ * @msg_size: message size
+ *
+ * Receives a control queue message and posts the receive buffer. Returns 0
+ * on success and negative on failure.
+ */
+int idpf_recv_mb_msg(struct idpf_adapter *adapter, u32 op,
+ void *msg, int msg_size)
+{
+ struct idpf_vport *vport = NULL;
+ struct idpf_ctlq_msg ctlq_msg;
+ struct idpf_dma_mem *dma_mem;
+ bool work_done = false;
+ int num_retry = 2000;
+ u16 num_q_msg;
+ int err;
+
+ while (1) {
+ struct idpf_vport_config *vport_config;
+ int payload_size = 0;
+
+ /* Try to get one message */
+ num_q_msg = 1;
+ dma_mem = NULL;
+ err = idpf_ctlq_recv(adapter->hw.arq, &num_q_msg, &ctlq_msg);
+ /* If no message then decide if we have to retry based on
+ * opcode
+ */
+ if (err || !num_q_msg) {
+ /* num_retry is increased to account for responses
+ * delayed by a large number of VF mailbox messages.
+ * If the mailbox message is received from the other
+ * side, we come out of the sleep cycle immediately;
+ * otherwise we wait for more time.
+ */
+ if (!op || !num_retry--)
+ break;
+ if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags)) {
+ err = -EIO;
+ break;
+ }
+ msleep(20);
+ continue;
+ }
+
+ /* If we are here, a message was received. Check if we are
+ * looking for a specific message based on opcode. If it is
+ * different, ignore it and post buffers
+ */
+ if (op && ctlq_msg.cookie.mbx.chnl_opcode != op)
+ goto post_buffs;
+
+ err = idpf_find_vport(adapter, &vport, &ctlq_msg);
+ if (err)
+ goto post_buffs;
+
+ if (ctlq_msg.data_len)
+ payload_size = ctlq_msg.ctx.indirect.payload->size;
+
+ /* All conditions are met. Either a message requested is
+ * received or we received a message to be processed
+ */
+ switch (ctlq_msg.cookie.mbx.chnl_opcode) {
+ case VIRTCHNL2_OP_VERSION:
+ case VIRTCHNL2_OP_GET_CAPS:
+ if (ctlq_msg.cookie.mbx.chnl_retval) {
+ dev_err(&adapter->pdev->dev, "Failure initializing, vc op: %u retval: %u\n",
+ ctlq_msg.cookie.mbx.chnl_opcode,
+ ctlq_msg.cookie.mbx.chnl_retval);
+ err = -EBADMSG;
+ } else if (msg) {
+ memcpy(msg, ctlq_msg.ctx.indirect.payload->va,
+ min_t(int, payload_size, msg_size));
+ }
+ work_done = true;
+ break;
+ case VIRTCHNL2_OP_CREATE_VPORT:
+ idpf_recv_vchnl_op(adapter, NULL, &ctlq_msg,
+ IDPF_VC_CREATE_VPORT,
+ IDPF_VC_CREATE_VPORT_ERR);
+ break;
+ case VIRTCHNL2_OP_ENABLE_VPORT:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_ENA_VPORT,
+ IDPF_VC_ENA_VPORT_ERR);
+ break;
+ case VIRTCHNL2_OP_DISABLE_VPORT:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_DIS_VPORT,
+ IDPF_VC_DIS_VPORT_ERR);
+ break;
+ case VIRTCHNL2_OP_DESTROY_VPORT:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_DESTROY_VPORT,
+ IDPF_VC_DESTROY_VPORT_ERR);
+ break;
+ case VIRTCHNL2_OP_CONFIG_TX_QUEUES:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_CONFIG_TXQ,
+ IDPF_VC_CONFIG_TXQ_ERR);
+ break;
+ case VIRTCHNL2_OP_CONFIG_RX_QUEUES:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_CONFIG_RXQ,
+ IDPF_VC_CONFIG_RXQ_ERR);
+ break;
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_ENA_QUEUES,
+ IDPF_VC_ENA_QUEUES_ERR);
+ break;
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_DIS_QUEUES,
+ IDPF_VC_DIS_QUEUES_ERR);
+ break;
+ case VIRTCHNL2_OP_ADD_QUEUES:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_ADD_QUEUES,
+ IDPF_VC_ADD_QUEUES_ERR);
+ break;
+ case VIRTCHNL2_OP_DEL_QUEUES:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_DEL_QUEUES,
+ IDPF_VC_DEL_QUEUES_ERR);
+ break;
+ case VIRTCHNL2_OP_MAP_QUEUE_VECTOR:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_MAP_IRQ,
+ IDPF_VC_MAP_IRQ_ERR);
+ break;
+ case VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_UNMAP_IRQ,
+ IDPF_VC_UNMAP_IRQ_ERR);
+ break;
+ case VIRTCHNL2_OP_GET_STATS:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_GET_STATS,
+ IDPF_VC_GET_STATS_ERR);
+ break;
+ case VIRTCHNL2_OP_GET_RSS_LUT:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_GET_RSS_LUT,
+ IDPF_VC_GET_RSS_LUT_ERR);
+ break;
+ case VIRTCHNL2_OP_SET_RSS_LUT:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_SET_RSS_LUT,
+ IDPF_VC_SET_RSS_LUT_ERR);
+ break;
+ case VIRTCHNL2_OP_GET_RSS_KEY:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_GET_RSS_KEY,
+ IDPF_VC_GET_RSS_KEY_ERR);
+ break;
+ case VIRTCHNL2_OP_SET_RSS_KEY:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_SET_RSS_KEY,
+ IDPF_VC_SET_RSS_KEY_ERR);
+ break;
+ case VIRTCHNL2_OP_SET_SRIOV_VFS:
+ idpf_recv_vchnl_op(adapter, NULL, &ctlq_msg,
+ IDPF_VC_SET_SRIOV_VFS,
+ IDPF_VC_SET_SRIOV_VFS_ERR);
+ break;
+ case VIRTCHNL2_OP_ALLOC_VECTORS:
+ idpf_recv_vchnl_op(adapter, NULL, &ctlq_msg,
+ IDPF_VC_ALLOC_VECTORS,
+ IDPF_VC_ALLOC_VECTORS_ERR);
+ break;
+ case VIRTCHNL2_OP_DEALLOC_VECTORS:
+ idpf_recv_vchnl_op(adapter, NULL, &ctlq_msg,
+ IDPF_VC_DEALLOC_VECTORS,
+ IDPF_VC_DEALLOC_VECTORS_ERR);
+ break;
+ case VIRTCHNL2_OP_GET_PTYPE_INFO:
+ idpf_recv_vchnl_op(adapter, NULL, &ctlq_msg,
+ IDPF_VC_GET_PTYPE_INFO,
+ IDPF_VC_GET_PTYPE_INFO_ERR);
+ break;
+ case VIRTCHNL2_OP_LOOPBACK:
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_LOOPBACK_STATE,
+ IDPF_VC_LOOPBACK_STATE_ERR);
+ break;
+ case VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE:
+ /* This message can only be sent asynchronously. As
+ * such we'll have lost the context in which it was
+ * called and thus can only really report if it looks
+ * like an error occurred. Don't bother setting ERR bit
+ * or waking chnl_wq since no work queue will be waiting
+ * to read the message.
+ */
+ if (ctlq_msg.cookie.mbx.chnl_retval) {
+ dev_err(&adapter->pdev->dev, "Failed to set promiscuous mode: %d\n",
+ ctlq_msg.cookie.mbx.chnl_retval);
+ }
+ break;
+ case VIRTCHNL2_OP_ADD_MAC_ADDR:
+ vport_config = adapter->vport_config[vport->idx];
+ if (test_and_clear_bit(IDPF_VPORT_ADD_MAC_REQ,
+ vport_config->flags)) {
+ /* Message was sent asynchronously. We don't
+ * normally print errors here, instead
+ * prefer to handle errors in the function
+ * calling wait_for_event. However, if
+ * asynchronous, the context in which the
+ * message was sent is lost. We can't really do
+ * anything about it at this point, but we
+ * should at a minimum indicate that it looks
+ * like something went wrong. Also don't bother
+ * setting ERR bit or waking vchnl_wq since no
+ * one will be waiting to read the async
+ * message.
+ */
+ if (ctlq_msg.cookie.mbx.chnl_retval)
+ dev_err(&adapter->pdev->dev, "Failed to add MAC address: %d\n",
+ ctlq_msg.cookie.mbx.chnl_retval);
+ break;
+ }
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_ADD_MAC_ADDR,
+ IDPF_VC_ADD_MAC_ADDR_ERR);
+ break;
+ case VIRTCHNL2_OP_DEL_MAC_ADDR:
+ vport_config = adapter->vport_config[vport->idx];
+ if (test_and_clear_bit(IDPF_VPORT_DEL_MAC_REQ,
+ vport_config->flags)) {
+ /* Message was sent asynchronously like the
+ * VIRTCHNL2_OP_ADD_MAC_ADDR
+ */
+ if (ctlq_msg.cookie.mbx.chnl_retval)
+ dev_err(&adapter->pdev->dev, "Failed to delete MAC address: %d\n",
+ ctlq_msg.cookie.mbx.chnl_retval);
+ break;
+ }
+ idpf_recv_vchnl_op(adapter, vport, &ctlq_msg,
+ IDPF_VC_DEL_MAC_ADDR,
+ IDPF_VC_DEL_MAC_ADDR_ERR);
+ break;
+ case VIRTCHNL2_OP_EVENT:
+ idpf_recv_event_msg(vport, &ctlq_msg);
+ break;
+ default:
+ dev_warn(&adapter->pdev->dev,
+ "Unhandled virtchnl response %d\n",
+ ctlq_msg.cookie.mbx.chnl_opcode);
+ break;
+ }
+
+post_buffs:
+ if (ctlq_msg.data_len)
+ dma_mem = ctlq_msg.ctx.indirect.payload;
+ else
+ num_q_msg = 0;
+
+ err = idpf_ctlq_post_rx_buffs(&adapter->hw, adapter->hw.arq,
+ &num_q_msg, &dma_mem);
+ /* If post failed clear the only buffer we supplied */
+ if (err && dma_mem)
+ dma_free_coherent(&adapter->pdev->dev, dma_mem->size,
+ dma_mem->va, dma_mem->pa);
+
+ /* Applies only if we are looking for a specific opcode */
+ if (work_done)
+ break;
+ }
+
+ return err;
+}
+
+/**
+ * __idpf_wait_for_event - wrapper function for waiting on a virtchannel response
+ * @adapter: Driver private data structure
+ * @vport: virtual port structure
+ * @state: check on state upon timeout
+ * @err_check: check if this specific error bit is set
+ * @timeout: Max time to wait
+ *
+ * Checks if state is set upon expiry of timeout. Returns 0 on success,
+ * negative on failure.
+ */
+static int __idpf_wait_for_event(struct idpf_adapter *adapter,
+ struct idpf_vport *vport,
+ enum idpf_vport_vc_state state,
+ enum idpf_vport_vc_state err_check,
+ int timeout)
+{
+ int time_to_wait, num_waits;
+ wait_queue_head_t *vchnl_wq;
+ unsigned long *vc_state;
+
+ time_to_wait = ((timeout <= IDPF_MAX_WAIT) ? timeout : IDPF_MAX_WAIT);
+ num_waits = ((timeout <= IDPF_MAX_WAIT) ? 1 : timeout / IDPF_MAX_WAIT);
+
+ if (vport) {
+ vchnl_wq = &vport->vchnl_wq;
+ vc_state = vport->vc_state;
+ } else {
+ vchnl_wq = &adapter->vchnl_wq;
+ vc_state = adapter->vc_state;
+ }
+
+ while (num_waits) {
+ int event;
+
+ /* If we are here and a reset is detected, do not wait but
+ * return. Reset timing is out of the driver's control. So
+ * while we are cleaning resources as part of the reset, if
+ * the underlying HW mailbox is gone, waiting on mailbox
+ * messages is not meaningful.
+ */
+ if (idpf_is_reset_detected(adapter))
+ return 0;
+
+ event = wait_event_timeout(*vchnl_wq,
+ test_and_clear_bit(state, vc_state),
+ msecs_to_jiffies(time_to_wait));
+ if (event) {
+ if (test_and_clear_bit(err_check, vc_state)) {
+ dev_err(&adapter->pdev->dev, "VC response error %s\n",
+ idpf_vport_vc_state_str[err_check]);
+
+ return -EINVAL;
+ }
+
+ return 0;
+ }
+ num_waits--;
+ }
+
+ /* Timeout occurred */
+ dev_err(&adapter->pdev->dev, "VC timeout, state = %s\n",
+ idpf_vport_vc_state_str[state]);
+
+ return -ETIMEDOUT;
+}
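
The way the requested timeout is chopped into repeated shorter waits can be shown with a tiny standalone sketch; DEMO_MAX_WAIT stands in for IDPF_MAX_WAIT, whose real value is defined elsewhere in the driver, so the numbers here are assumptions:

#include <stdio.h>

#define DEMO_MAX_WAIT 500       /* hypothetical ms cap for a single wait */

static void demo_split(int timeout)
{
        int time_to_wait = (timeout <= DEMO_MAX_WAIT) ? timeout : DEMO_MAX_WAIT;
        int num_waits = (timeout <= DEMO_MAX_WAIT) ? 1 : timeout / DEMO_MAX_WAIT;

        printf("timeout %d ms -> %d wait(s) of %d ms each\n",
               timeout, num_waits, time_to_wait);
}

int main(void)
{
        demo_split(300);        /* 1 wait of 300 ms */
        demo_split(2000);       /* 4 waits of 500 ms */

        return 0;
}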
+
+/**
+ * idpf_min_wait_for_event - wait for virtchannel response
+ * @adapter: Driver private data structure
+ * @vport: virtual port structure
+ * @state: check on state upon timeout
+ * @err_check: check if this specific error bit is set
+ *
+ * Returns 0 on success, negative on failure.
+ */
+static int idpf_min_wait_for_event(struct idpf_adapter *adapter,
+ struct idpf_vport *vport,
+ enum idpf_vport_vc_state state,
+ enum idpf_vport_vc_state err_check)
+{
+ return __idpf_wait_for_event(adapter, vport, state, err_check,
+ IDPF_WAIT_FOR_EVENT_TIMEO_MIN);
+}
+
+/**
+ * idpf_wait_for_event - wait for virtchannel response
+ * @adapter: Driver private data structure
+ * @vport: virtual port structure
+ * @state: check on state upon timeout after 500ms
+ * @err_check: check if this specific error bit is set
+ *
+ * Returns 0 on success, negative on failure.
+ */
+static int idpf_wait_for_event(struct idpf_adapter *adapter,
+ struct idpf_vport *vport,
+ enum idpf_vport_vc_state state,
+ enum idpf_vport_vc_state err_check)
+{
+ /* The timeout is increased in the __IDPF_INIT_SW flow to account for a
+ * large number of VF mailbox message responses. When a message is
+ * received on the mailbox, this thread is woken up by idpf_recv_mb_msg
+ * before the timeout expires. Only in the error case, i.e. if no
+ * message is received on the mailbox, do we wait for the complete
+ * timeout, which is less likely to happen.
+ */
+ return __idpf_wait_for_event(adapter, vport, state, err_check,
+ IDPF_WAIT_FOR_EVENT_TIMEO);
+}
+
+/**
+ * idpf_wait_for_marker_event - wait for software marker response
+ * @vport: virtual port data structure
+ *
+ * Returns 0 on success, negative on failure.
+ **/
+static int idpf_wait_for_marker_event(struct idpf_vport *vport)
+{
+ int event;
+ int i;
+
+ for (i = 0; i < vport->num_txq; i++)
+ set_bit(__IDPF_Q_SW_MARKER, vport->txqs[i]->flags);
+
+ event = wait_event_timeout(vport->sw_marker_wq,
+ test_and_clear_bit(IDPF_VPORT_SW_MARKER,
+ vport->flags),
+ msecs_to_jiffies(500));
+
+ for (i = 0; i < vport->num_txq; i++)
+ clear_bit(__IDPF_Q_POLL_MODE, vport->txqs[i]->flags);
+
+ if (event)
+ return 0;
+
+ dev_warn(&vport->adapter->pdev->dev, "Failed to receive marker packets\n");
+
+ return -ETIMEDOUT;
+}
+
+/**
+ * idpf_send_ver_msg - send virtchnl version message
+ * @adapter: Driver specific private structure
+ *
+ * Send virtchnl version message. Returns 0 on success, negative on failure.
+ */
+static int idpf_send_ver_msg(struct idpf_adapter *adapter)
+{
+ struct virtchnl2_version_info vvi;
+
+ if (adapter->virt_ver_maj) {
+ vvi.major = cpu_to_le32(adapter->virt_ver_maj);
+ vvi.minor = cpu_to_le32(adapter->virt_ver_min);
+ } else {
+ vvi.major = cpu_to_le32(IDPF_VIRTCHNL_VERSION_MAJOR);
+ vvi.minor = cpu_to_le32(IDPF_VIRTCHNL_VERSION_MINOR);
+ }
+
+ return idpf_send_mb_msg(adapter, VIRTCHNL2_OP_VERSION, sizeof(vvi),
+ (u8 *)&vvi);
+}
+
+/**
+ * idpf_recv_ver_msg - Receive virtchnl version message
+ * @adapter: Driver specific private structure
+ *
+ * Receive virtchnl version message. Returns 0 on success, -EAGAIN if we need
+ * to send version message again, otherwise negative on failure.
+ */
+static int idpf_recv_ver_msg(struct idpf_adapter *adapter)
+{
+ struct virtchnl2_version_info vvi;
+ u32 major, minor;
+ int err;
+
+ err = idpf_recv_mb_msg(adapter, VIRTCHNL2_OP_VERSION, &vvi,
+ sizeof(vvi));
+ if (err)
+ return err;
+
+ major = le32_to_cpu(vvi.major);
+ minor = le32_to_cpu(vvi.minor);
+
+ if (major > IDPF_VIRTCHNL_VERSION_MAJOR) {
+ dev_warn(&adapter->pdev->dev,
+ "Virtchnl major version (%d) greater than supported\n",
+ major);
+
+ return -EINVAL;
+ }
+
+ if (major == IDPF_VIRTCHNL_VERSION_MAJOR &&
+ minor > IDPF_VIRTCHNL_VERSION_MINOR)
+ dev_warn(&adapter->pdev->dev,
+ "Virtchnl minor version (%d) didn't match\n", minor);
+
+ /* If we have a mismatch, resend version to update receiver on what
+ * version we will use.
+ */
+ if (!adapter->virt_ver_maj &&
+ major != IDPF_VIRTCHNL_VERSION_MAJOR &&
+ minor != IDPF_VIRTCHNL_VERSION_MINOR)
+ err = -EAGAIN;
+
+ adapter->virt_ver_maj = major;
+ adapter->virt_ver_min = minor;
+
+ return err;
+}
+
+/**
+ * idpf_send_get_caps_msg - Send virtchnl get capabilities message
+ * @adapter: Driver specific private structure
+ *
+ * Send virtchnl get capabilities message. Returns 0 on success, negative on
+ * failure.
+ */
+static int idpf_send_get_caps_msg(struct idpf_adapter *adapter)
+{
+ struct virtchnl2_get_capabilities caps = { };
+
+ caps.csum_caps =
+ cpu_to_le32(VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP |
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP |
+ VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP |
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP |
+ VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL |
+ VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL |
+ VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL |
+ VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL |
+ VIRTCHNL2_CAP_RX_CSUM_GENERIC);
+
+ caps.seg_caps =
+ cpu_to_le32(VIRTCHNL2_CAP_SEG_IPV4_TCP |
+ VIRTCHNL2_CAP_SEG_IPV4_UDP |
+ VIRTCHNL2_CAP_SEG_IPV4_SCTP |
+ VIRTCHNL2_CAP_SEG_IPV6_TCP |
+ VIRTCHNL2_CAP_SEG_IPV6_UDP |
+ VIRTCHNL2_CAP_SEG_IPV6_SCTP |
+ VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL);
+
+ caps.rss_caps =
+ cpu_to_le64(VIRTCHNL2_CAP_RSS_IPV4_TCP |
+ VIRTCHNL2_CAP_RSS_IPV4_UDP |
+ VIRTCHNL2_CAP_RSS_IPV4_SCTP |
+ VIRTCHNL2_CAP_RSS_IPV4_OTHER |
+ VIRTCHNL2_CAP_RSS_IPV6_TCP |
+ VIRTCHNL2_CAP_RSS_IPV6_UDP |
+ VIRTCHNL2_CAP_RSS_IPV6_SCTP |
+ VIRTCHNL2_CAP_RSS_IPV6_OTHER);
+
+ caps.hsplit_caps =
+ cpu_to_le32(VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 |
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6);
+
+ caps.rsc_caps =
+ cpu_to_le32(VIRTCHNL2_CAP_RSC_IPV4_TCP |
+ VIRTCHNL2_CAP_RSC_IPV6_TCP);
+
+ caps.other_caps =
+ cpu_to_le64(VIRTCHNL2_CAP_SRIOV |
+ VIRTCHNL2_CAP_MACFILTER |
+ VIRTCHNL2_CAP_SPLITQ_QSCHED |
+ VIRTCHNL2_CAP_PROMISC |
+ VIRTCHNL2_CAP_LOOPBACK);
+
+ return idpf_send_mb_msg(adapter, VIRTCHNL2_OP_GET_CAPS, sizeof(caps),
+ (u8 *)&caps);
+}
+
+/**
+ * idpf_recv_get_caps_msg - Receive virtchnl get capabilities message
+ * @adapter: Driver specific private structure
+ *
+ * Receive virtchnl get capabilities message. Returns 0 on success, negative on
+ * failure.
+ */
+static int idpf_recv_get_caps_msg(struct idpf_adapter *adapter)
+{
+ return idpf_recv_mb_msg(adapter, VIRTCHNL2_OP_GET_CAPS, &adapter->caps,
+ sizeof(struct virtchnl2_get_capabilities));
+}
+
+/**
+ * idpf_vport_alloc_max_qs - Allocate max queues for a vport
+ * @adapter: Driver specific private structure
+ * @max_q: vport max queue structure
+ */
+int idpf_vport_alloc_max_qs(struct idpf_adapter *adapter,
+ struct idpf_vport_max_q *max_q)
+{
+ struct idpf_avail_queue_info *avail_queues = &adapter->avail_queues;
+ struct virtchnl2_get_capabilities *caps = &adapter->caps;
+ u16 default_vports = idpf_get_default_vports(adapter);
+ int max_rx_q, max_tx_q;
+
+ mutex_lock(&adapter->queue_lock);
+
+ max_rx_q = le16_to_cpu(caps->max_rx_q) / default_vports;
+ max_tx_q = le16_to_cpu(caps->max_tx_q) / default_vports;
+ if (adapter->num_alloc_vports < default_vports) {
+ max_q->max_rxq = min_t(u16, max_rx_q, IDPF_MAX_Q);
+ max_q->max_txq = min_t(u16, max_tx_q, IDPF_MAX_Q);
+ } else {
+ max_q->max_rxq = IDPF_MIN_Q;
+ max_q->max_txq = IDPF_MIN_Q;
+ }
+ max_q->max_bufq = max_q->max_rxq * IDPF_MAX_BUFQS_PER_RXQ_GRP;
+ max_q->max_complq = max_q->max_txq;
+
+ if (avail_queues->avail_rxq < max_q->max_rxq ||
+ avail_queues->avail_txq < max_q->max_txq ||
+ avail_queues->avail_bufq < max_q->max_bufq ||
+ avail_queues->avail_complq < max_q->max_complq) {
+ mutex_unlock(&adapter->queue_lock);
+
+ return -EINVAL;
+ }
+
+ avail_queues->avail_rxq -= max_q->max_rxq;
+ avail_queues->avail_txq -= max_q->max_txq;
+ avail_queues->avail_bufq -= max_q->max_bufq;
+ avail_queues->avail_complq -= max_q->max_complq;
+
+ mutex_unlock(&adapter->queue_lock);
+
+ return 0;
+}
+
+/**
+ * idpf_vport_dealloc_max_qs - Deallocate max queues of a vport
+ * @adapter: Driver specific private structure
+ * @max_q: vport max queue structure
+ */
+void idpf_vport_dealloc_max_qs(struct idpf_adapter *adapter,
+ struct idpf_vport_max_q *max_q)
+{
+ struct idpf_avail_queue_info *avail_queues;
+
+ mutex_lock(&adapter->queue_lock);
+ avail_queues = &adapter->avail_queues;
+
+ avail_queues->avail_rxq += max_q->max_rxq;
+ avail_queues->avail_txq += max_q->max_txq;
+ avail_queues->avail_bufq += max_q->max_bufq;
+ avail_queues->avail_complq += max_q->max_complq;
+
+ mutex_unlock(&adapter->queue_lock);
+}
+
+/**
+ * idpf_init_avail_queues - Initialize available queues on the device
+ * @adapter: Driver specific private structure
+ */
+static void idpf_init_avail_queues(struct idpf_adapter *adapter)
+{
+ struct idpf_avail_queue_info *avail_queues = &adapter->avail_queues;
+ struct virtchnl2_get_capabilities *caps = &adapter->caps;
+
+ avail_queues->avail_rxq = le16_to_cpu(caps->max_rx_q);
+ avail_queues->avail_txq = le16_to_cpu(caps->max_tx_q);
+ avail_queues->avail_bufq = le16_to_cpu(caps->max_rx_bufq);
+ avail_queues->avail_complq = le16_to_cpu(caps->max_tx_complq);
+}
+
+/**
+ * idpf_get_reg_intr_vecs - Get vector queue register offset
+ * @vport: virtual port structure
+ * @reg_vals: Register offsets to store in
+ *
+ * Returns number of registers that got populated
+ */
+int idpf_get_reg_intr_vecs(struct idpf_vport *vport,
+ struct idpf_vec_regs *reg_vals)
+{
+ struct virtchnl2_vector_chunks *chunks;
+ struct idpf_vec_regs reg_val;
+ u16 num_vchunks, num_vec;
+ int num_regs = 0, i, j;
+
+ chunks = &vport->adapter->req_vec_chunks->vchunks;
+ num_vchunks = le16_to_cpu(chunks->num_vchunks);
+
+ for (j = 0; j < num_vchunks; j++) {
+ struct virtchnl2_vector_chunk *chunk;
+ u32 dynctl_reg_spacing;
+ u32 itrn_reg_spacing;
+
+ chunk = &chunks->vchunks[j];
+ num_vec = le16_to_cpu(chunk->num_vectors);
+ reg_val.dyn_ctl_reg = le32_to_cpu(chunk->dynctl_reg_start);
+ reg_val.itrn_reg = le32_to_cpu(chunk->itrn_reg_start);
+ reg_val.itrn_index_spacing = le32_to_cpu(chunk->itrn_index_spacing);
+
+ dynctl_reg_spacing = le32_to_cpu(chunk->dynctl_reg_spacing);
+ itrn_reg_spacing = le32_to_cpu(chunk->itrn_reg_spacing);
+
+ for (i = 0; i < num_vec; i++) {
+ reg_vals[num_regs].dyn_ctl_reg = reg_val.dyn_ctl_reg;
+ reg_vals[num_regs].itrn_reg = reg_val.itrn_reg;
+ reg_vals[num_regs].itrn_index_spacing =
+ reg_val.itrn_index_spacing;
+
+ reg_val.dyn_ctl_reg += dynctl_reg_spacing;
+ reg_val.itrn_reg += itrn_reg_spacing;
+ num_regs++;
+ }
+ }
+
+ return num_regs;
+}
+
+/**
+ * idpf_vport_get_q_reg - Get the queue registers for the vport
+ * @reg_vals: register values needing to be set
+ * @num_regs: amount we expect to fill
+ * @q_type: queue model
+ * @chunks: queue regs received over mailbox
+ *
+ * This function parses the queue register offsets from the queue register
+ * chunk information, with a specific queue type and stores it into the array
+ * passed as an argument. It returns the actual number of queue registers that
+ * are filled.
+ */
+static int idpf_vport_get_q_reg(u32 *reg_vals, int num_regs, u32 q_type,
+ struct virtchnl2_queue_reg_chunks *chunks)
+{
+ u16 num_chunks = le16_to_cpu(chunks->num_chunks);
+ int reg_filled = 0, i;
+ u32 reg_val;
+
+ while (num_chunks--) {
+ struct virtchnl2_queue_reg_chunk *chunk;
+ u16 num_q;
+
+ chunk = &chunks->chunks[num_chunks];
+ if (le32_to_cpu(chunk->type) != q_type)
+ continue;
+
+ num_q = le32_to_cpu(chunk->num_queues);
+ reg_val = le64_to_cpu(chunk->qtail_reg_start);
+ for (i = 0; i < num_q && reg_filled < num_regs ; i++) {
+ reg_vals[reg_filled++] = reg_val;
+ reg_val += le32_to_cpu(chunk->qtail_reg_spacing);
+ }
+ }
+
+ return reg_filled;
+}
+
+/**
+ * __idpf_queue_reg_init - initialize queue registers
+ * @vport: virtual port structure
+ * @reg_vals: registers we are initializing
+ * @num_regs: how many registers there are in total
+ * @q_type: queue model
+ *
+ * Return number of queues that are initialized
+ */
+static int __idpf_queue_reg_init(struct idpf_vport *vport, u32 *reg_vals,
+ int num_regs, u32 q_type)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct idpf_queue *q;
+ int i, j, k = 0;
+
+ switch (q_type) {
+ case VIRTCHNL2_QUEUE_TYPE_TX:
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+
+ for (j = 0; j < tx_qgrp->num_txq && k < num_regs; j++, k++)
+ tx_qgrp->txqs[j]->tail =
+ idpf_get_reg_addr(adapter, reg_vals[k]);
+ }
+ break;
+ case VIRTCHNL2_QUEUE_TYPE_RX:
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ u16 num_rxq = rx_qgrp->singleq.num_rxq;
+
+ for (j = 0; j < num_rxq && k < num_regs; j++, k++) {
+ q = rx_qgrp->singleq.rxqs[j];
+ q->tail = idpf_get_reg_addr(adapter,
+ reg_vals[k]);
+ }
+ }
+ break;
+ case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ u8 num_bufqs = vport->num_bufqs_per_qgrp;
+
+ for (j = 0; j < num_bufqs && k < num_regs; j++, k++) {
+ q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+ q->tail = idpf_get_reg_addr(adapter,
+ reg_vals[k]);
+ }
+ }
+ break;
+ default:
+ break;
+ }
+
+ return k;
+}
+
+/**
+ * idpf_queue_reg_init - initialize queue registers
+ * @vport: virtual port structure
+ *
+ * Return 0 on success, negative on failure
+ */
+int idpf_queue_reg_init(struct idpf_vport *vport)
+{
+ struct virtchnl2_create_vport *vport_params;
+ struct virtchnl2_queue_reg_chunks *chunks;
+ struct idpf_vport_config *vport_config;
+ u16 vport_idx = vport->idx;
+ int num_regs, ret = 0;
+ u32 *reg_vals;
+
+ /* We should never deal with more than 256 queues of the same type */
+ reg_vals = kzalloc(sizeof(void *) * IDPF_LARGE_MAX_Q, GFP_KERNEL);
+ if (!reg_vals)
+ return -ENOMEM;
+
+ vport_config = vport->adapter->vport_config[vport_idx];
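+ /* If queues were added after vport creation, the register chunks from
+ * the add_queues response take precedence over the chunks from the
+ * original create_vport response.
+ */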
+ if (vport_config->req_qs_chunks) {
+ struct virtchnl2_add_queues *vc_aq =
+ (struct virtchnl2_add_queues *)vport_config->req_qs_chunks;
+ chunks = &vc_aq->chunks;
+ } else {
+ vport_params = vport->adapter->vport_params_recvd[vport_idx];
+ chunks = &vport_params->chunks;
+ }
+
+ /* Initialize Tx queue tail register address */
+ num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q,
+ VIRTCHNL2_QUEUE_TYPE_TX,
+ chunks);
+ if (num_regs < vport->num_txq) {
+ ret = -EINVAL;
+ goto free_reg_vals;
+ }
+
+ num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs,
+ VIRTCHNL2_QUEUE_TYPE_TX);
+ if (num_regs < vport->num_txq) {
+ ret = -EINVAL;
+ goto free_reg_vals;
+ }
+
+ /* Initialize Rx/buffer queue tail register address based on Rx queue
+ * model
+ */
+ if (idpf_is_queue_model_split(vport->rxq_model)) {
+ num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q,
+ VIRTCHNL2_QUEUE_TYPE_RX_BUFFER,
+ chunks);
+ if (num_regs < vport->num_bufq) {
+ ret = -EINVAL;
+ goto free_reg_vals;
+ }
+
+ num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs,
+ VIRTCHNL2_QUEUE_TYPE_RX_BUFFER);
+ if (num_regs < vport->num_bufq) {
+ ret = -EINVAL;
+ goto free_reg_vals;
+ }
+ } else {
+ num_regs = idpf_vport_get_q_reg(reg_vals, IDPF_LARGE_MAX_Q,
+ VIRTCHNL2_QUEUE_TYPE_RX,
+ chunks);
+ if (num_regs < vport->num_rxq) {
+ ret = -EINVAL;
+ goto free_reg_vals;
+ }
+
+ num_regs = __idpf_queue_reg_init(vport, reg_vals, num_regs,
+ VIRTCHNL2_QUEUE_TYPE_RX);
+ if (num_regs < vport->num_rxq) {
+ ret = -EINVAL;
+ goto free_reg_vals;
+ }
+ }
+
+free_reg_vals:
+ kfree(reg_vals);
+
+ return ret;
+}
+
+/**
+ * idpf_send_create_vport_msg - Send virtchnl create vport message
+ * @adapter: Driver specific private structure
+ * @max_q: vport max queue info
+ *
+ * Send virtchnl create vport message
+ *
+ * Returns 0 on success, negative on failure
+ */
+int idpf_send_create_vport_msg(struct idpf_adapter *adapter,
+ struct idpf_vport_max_q *max_q)
+{
+ struct virtchnl2_create_vport *vport_msg;
+ u16 idx = adapter->next_vport;
+ int err, buf_size;
+
+ buf_size = sizeof(struct virtchnl2_create_vport);
+ if (!adapter->vport_params_reqd[idx]) {
+ adapter->vport_params_reqd[idx] = kzalloc(buf_size,
+ GFP_KERNEL);
+ if (!adapter->vport_params_reqd[idx])
+ return -ENOMEM;
+ }
+
+ vport_msg = adapter->vport_params_reqd[idx];
+ vport_msg->vport_type = cpu_to_le16(VIRTCHNL2_VPORT_TYPE_DEFAULT);
+ vport_msg->vport_index = cpu_to_le16(idx);
+
+ if (adapter->req_tx_splitq)
+ vport_msg->txq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+ else
+ vport_msg->txq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+
+ if (adapter->req_rx_splitq)
+ vport_msg->rxq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SPLIT);
+ else
+ vport_msg->rxq_model = cpu_to_le16(VIRTCHNL2_QUEUE_MODEL_SINGLE);
+
+ err = idpf_vport_calc_total_qs(adapter, idx, vport_msg, max_q);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Enough queues are not available");
+
+ return err;
+ }
+
+ mutex_lock(&adapter->vc_buf_lock);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_CREATE_VPORT, buf_size,
+ (u8 *)vport_msg);
+ if (err)
+ goto rel_lock;
+
+ err = idpf_wait_for_event(adapter, NULL, IDPF_VC_CREATE_VPORT,
+ IDPF_VC_CREATE_VPORT_ERR);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to receive create vport message");
+
+ goto rel_lock;
+ }
+
+ if (!adapter->vport_params_recvd[idx]) {
+ adapter->vport_params_recvd[idx] = kzalloc(IDPF_CTLQ_MAX_BUF_LEN,
+ GFP_KERNEL);
+ if (!adapter->vport_params_recvd[idx]) {
+ err = -ENOMEM;
+ goto rel_lock;
+ }
+ }
+
+ vport_msg = adapter->vport_params_recvd[idx];
+ memcpy(vport_msg, adapter->vc_msg, IDPF_CTLQ_MAX_BUF_LEN);
+
+rel_lock:
+ mutex_unlock(&adapter->vc_buf_lock);
+
+ return err;
+}
+
+/**
+ * idpf_check_supported_desc_ids - Verify we have required descriptor support
+ * @vport: virtual port structure
+ *
+ * Return 0 on success, error on failure
+ */
+int idpf_check_supported_desc_ids(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_create_vport *vport_msg;
+ u64 rx_desc_ids, tx_desc_ids;
+
+ vport_msg = adapter->vport_params_recvd[vport->idx];
+
+ rx_desc_ids = le64_to_cpu(vport_msg->rx_desc_ids);
+ tx_desc_ids = le64_to_cpu(vport_msg->tx_desc_ids);
+
+ if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+ if (!(rx_desc_ids & VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M)) {
+ dev_info(&adapter->pdev->dev, "Minimum RX descriptor support not provided, using the default\n");
+ vport_msg->rx_desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M);
+ }
+ } else {
+ if (!(rx_desc_ids & VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M))
+ vport->base_rxd = true;
+ }
+
+ if (vport->txq_model != VIRTCHNL2_QUEUE_MODEL_SPLIT)
+ return 0;
+
+ if ((tx_desc_ids & MIN_SUPPORT_TXDID) != MIN_SUPPORT_TXDID) {
+ dev_info(&adapter->pdev->dev, "Minimum TX descriptor support not provided, using the default\n");
+ vport_msg->tx_desc_ids = cpu_to_le64(MIN_SUPPORT_TXDID);
+ }
+
+ return 0;
+}
+
+/**
+ * idpf_send_destroy_vport_msg - Send virtchnl destroy vport message
+ * @vport: virtual port data structure
+ *
+ * Send virtchnl destroy vport message. Returns 0 on success, negative on
+ * failure.
+ */
+int idpf_send_destroy_vport_msg(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_vport v_id;
+ int err;
+
+ v_id.vport_id = cpu_to_le32(vport->vport_id);
+
+ mutex_lock(&vport->vc_buf_lock);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_DESTROY_VPORT,
+ sizeof(v_id), (u8 *)&v_id);
+ if (err)
+ goto rel_lock;
+
+ err = idpf_min_wait_for_event(adapter, vport, IDPF_VC_DESTROY_VPORT,
+ IDPF_VC_DESTROY_VPORT_ERR);
+
+rel_lock:
+ mutex_unlock(&vport->vc_buf_lock);
+
+ return err;
+}
+
+/**
+ * idpf_send_enable_vport_msg - Send virtchnl enable vport message
+ * @vport: virtual port data structure
+ *
+ * Send enable vport virtchnl message. Returns 0 on success, negative on
+ * failure.
+ */
+int idpf_send_enable_vport_msg(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_vport v_id;
+ int err;
+
+ v_id.vport_id = cpu_to_le32(vport->vport_id);
+
+ mutex_lock(&vport->vc_buf_lock);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_ENABLE_VPORT,
+ sizeof(v_id), (u8 *)&v_id);
+ if (err)
+ goto rel_lock;
+
+ err = idpf_wait_for_event(adapter, vport, IDPF_VC_ENA_VPORT,
+ IDPF_VC_ENA_VPORT_ERR);
+
+rel_lock:
+ mutex_unlock(&vport->vc_buf_lock);
+
+ return err;
+}
+
+/**
+ * idpf_send_disable_vport_msg - Send virtchnl disable vport message
+ * @vport: virtual port data structure
+ *
+ * Send disable vport virtchnl message. Returns 0 on success, negative on
+ * failure.
+ */
+int idpf_send_disable_vport_msg(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_vport v_id;
+ int err;
+
+ v_id.vport_id = cpu_to_le32(vport->vport_id);
+
+ mutex_lock(&vport->vc_buf_lock);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_DISABLE_VPORT,
+ sizeof(v_id), (u8 *)&v_id);
+ if (err)
+ goto rel_lock;
+
+ err = idpf_min_wait_for_event(adapter, vport, IDPF_VC_DIS_VPORT,
+ IDPF_VC_DIS_VPORT_ERR);
+
+rel_lock:
+ mutex_unlock(&vport->vc_buf_lock);
+
+ return err;
+}
+
+/**
+ * idpf_send_config_tx_queues_msg - Send virtchnl config tx queues message
+ * @vport: virtual port data structure
+ *
+ * Send config tx queues virtchnl message. Returns 0 on success, negative on
+ * failure.
+ */
+static int idpf_send_config_tx_queues_msg(struct idpf_vport *vport)
+{
+ struct virtchnl2_config_tx_queues *ctq;
+ u32 config_sz, chunk_sz, buf_sz;
+ int totqs, num_msgs, num_chunks;
+ struct virtchnl2_txq_info *qi;
+ int err = 0, i, k = 0;
+
+ totqs = vport->num_txq + vport->num_complq;
+ qi = kcalloc(totqs, sizeof(struct virtchnl2_txq_info), GFP_KERNEL);
+ if (!qi)
+ return -ENOMEM;
+
+ /* Populate the queue info buffer with all queue context info */
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+ int j, sched_mode;
+
+ for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
+ qi[k].queue_id =
+ cpu_to_le32(tx_qgrp->txqs[j]->q_id);
+ qi[k].model =
+ cpu_to_le16(vport->txq_model);
+ qi[k].type =
+ cpu_to_le32(tx_qgrp->txqs[j]->q_type);
+ qi[k].ring_len =
+ cpu_to_le16(tx_qgrp->txqs[j]->desc_count);
+ qi[k].dma_ring_addr =
+ cpu_to_le64(tx_qgrp->txqs[j]->dma);
+ if (idpf_is_queue_model_split(vport->txq_model)) {
+ struct idpf_queue *q = tx_qgrp->txqs[j];
+
+ qi[k].tx_compl_queue_id =
+ cpu_to_le16(tx_qgrp->complq->q_id);
+ qi[k].relative_queue_id = cpu_to_le16(j);
+
+ if (test_bit(__IDPF_Q_FLOW_SCH_EN, q->flags))
+ qi[k].sched_mode =
+ cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_FLOW);
+ else
+ qi[k].sched_mode =
+ cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_QUEUE);
+ } else {
+ qi[k].sched_mode =
+ cpu_to_le16(VIRTCHNL2_TXQ_SCHED_MODE_QUEUE);
+ }
+ }
+
+ if (!idpf_is_queue_model_split(vport->txq_model))
+ continue;
+
+ qi[k].queue_id = cpu_to_le32(tx_qgrp->complq->q_id);
+ qi[k].model = cpu_to_le16(vport->txq_model);
+ qi[k].type = cpu_to_le32(tx_qgrp->complq->q_type);
+ qi[k].ring_len = cpu_to_le16(tx_qgrp->complq->desc_count);
+ qi[k].dma_ring_addr = cpu_to_le64(tx_qgrp->complq->dma);
+
+ if (test_bit(__IDPF_Q_FLOW_SCH_EN, tx_qgrp->complq->flags))
+ sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_FLOW;
+ else
+ sched_mode = VIRTCHNL2_TXQ_SCHED_MODE_QUEUE;
+ qi[k].sched_mode = cpu_to_le16(sched_mode);
+
+ k++;
+ }
+
+ /* Make sure accounting agrees */
+ if (k != totqs) {
+ err = -EINVAL;
+ goto error;
+ }
+
+ /* Chunk up the queue contexts into multiple messages to avoid
+ * sending a control queue message buffer that is too large
+ */
+ config_sz = sizeof(struct virtchnl2_config_tx_queues);
+ chunk_sz = sizeof(struct virtchnl2_txq_info);
+
+ num_chunks = min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz),
+ totqs);
+ num_msgs = DIV_ROUND_UP(totqs, num_chunks);
+
+ buf_sz = struct_size(ctq, qinfo, num_chunks);
+ ctq = kzalloc(buf_sz, GFP_KERNEL);
+ if (!ctq) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ mutex_lock(&vport->vc_buf_lock);
+
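+ /* Send the queue contexts in num_msgs mailbox messages; each message
+ * carries at most num_chunks qinfo entries and is acked by the device
+ * before the next one is sent.
+ */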
+ for (i = 0, k = 0; i < num_msgs; i++) {
+ memset(ctq, 0, buf_sz);
+ ctq->vport_id = cpu_to_le32(vport->vport_id);
+ ctq->num_qinfo = cpu_to_le16(num_chunks);
+ memcpy(ctq->qinfo, &qi[k], chunk_sz * num_chunks);
+
+ err = idpf_send_mb_msg(vport->adapter,
+ VIRTCHNL2_OP_CONFIG_TX_QUEUES,
+ buf_sz, (u8 *)ctq);
+ if (err)
+ goto mbx_error;
+
+ err = idpf_wait_for_event(vport->adapter, vport,
+ IDPF_VC_CONFIG_TXQ,
+ IDPF_VC_CONFIG_TXQ_ERR);
+ if (err)
+ goto mbx_error;
+
+ k += num_chunks;
+ totqs -= num_chunks;
+ num_chunks = min(num_chunks, totqs);
+ /* Recalculate buffer size */
+ buf_sz = struct_size(ctq, qinfo, num_chunks);
+ }
+
+mbx_error:
+ mutex_unlock(&vport->vc_buf_lock);
+ kfree(ctq);
+error:
+ kfree(qi);
+
+ return err;
+}
+
+/**
+ * idpf_send_config_rx_queues_msg - Send virtchnl config rx queues message
+ * @vport: virtual port data structure
+ *
+ * Send config rx queues virtchnl message. Returns 0 on success, negative on
+ * failure.
+ */
+static int idpf_send_config_rx_queues_msg(struct idpf_vport *vport)
+{
+ struct virtchnl2_config_rx_queues *crq;
+ u32 config_sz, chunk_sz, buf_sz;
+ int totqs, num_msgs, num_chunks;
+ struct virtchnl2_rxq_info *qi;
+ int err = 0, i, k = 0;
+
+ totqs = vport->num_rxq + vport->num_bufq;
+ qi = kcalloc(totqs, sizeof(struct virtchnl2_rxq_info), GFP_KERNEL);
+ if (!qi)
+ return -ENOMEM;
+
+ /* Populate the queue info buffer with all queue context info */
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ u16 num_rxq;
+ int j;
+
+ if (!idpf_is_queue_model_split(vport->rxq_model))
+ goto setup_rxqs;
+
+ for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) {
+ struct idpf_queue *bufq =
+ &rx_qgrp->splitq.bufq_sets[j].bufq;
+
+ qi[k].queue_id = cpu_to_le32(bufq->q_id);
+ qi[k].model = cpu_to_le16(vport->rxq_model);
+ qi[k].type = cpu_to_le32(bufq->q_type);
+ qi[k].desc_ids = cpu_to_le64(VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M);
+ qi[k].ring_len = cpu_to_le16(bufq->desc_count);
+ qi[k].dma_ring_addr = cpu_to_le64(bufq->dma);
+ qi[k].data_buffer_size = cpu_to_le32(bufq->rx_buf_size);
+ qi[k].buffer_notif_stride = bufq->rx_buf_stride;
+ qi[k].rx_buffer_low_watermark =
+ cpu_to_le16(bufq->rx_buffer_low_watermark);
+ if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW))
+ qi[k].qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC);
+ }
+
+setup_rxqs:
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ num_rxq = rx_qgrp->splitq.num_rxq_sets;
+ else
+ num_rxq = rx_qgrp->singleq.num_rxq;
+
+ for (j = 0; j < num_rxq; j++, k++) {
+ struct idpf_queue *rxq;
+
+ if (!idpf_is_queue_model_split(vport->rxq_model)) {
+ rxq = rx_qgrp->singleq.rxqs[j];
+ goto common_qi_fields;
+ }
+ rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ qi[k].rx_bufq1_id =
+ cpu_to_le16(rxq->rxq_grp->splitq.bufq_sets[0].bufq.q_id);
+ if (vport->num_bufqs_per_qgrp > IDPF_SINGLE_BUFQ_PER_RXQ_GRP) {
+ qi[k].bufq2_ena = IDPF_BUFQ2_ENA;
+ qi[k].rx_bufq2_id =
+ cpu_to_le16(rxq->rxq_grp->splitq.bufq_sets[1].bufq.q_id);
+ }
+ qi[k].rx_buffer_low_watermark =
+ cpu_to_le16(rxq->rx_buffer_low_watermark);
+ if (idpf_is_feature_ena(vport, NETIF_F_GRO_HW))
+ qi[k].qflags |= cpu_to_le16(VIRTCHNL2_RXQ_RSC);
+
+common_qi_fields:
+ if (rxq->rx_hsplit_en) {
+ qi[k].qflags |=
+ cpu_to_le16(VIRTCHNL2_RXQ_HDR_SPLIT);
+ qi[k].hdr_buffer_size =
+ cpu_to_le16(rxq->rx_hbuf_size);
+ }
+ qi[k].queue_id = cpu_to_le32(rxq->q_id);
+ qi[k].model = cpu_to_le16(vport->rxq_model);
+ qi[k].type = cpu_to_le32(rxq->q_type);
+ qi[k].ring_len = cpu_to_le16(rxq->desc_count);
+ qi[k].dma_ring_addr = cpu_to_le64(rxq->dma);
+ qi[k].max_pkt_size = cpu_to_le32(rxq->rx_max_pkt_size);
+ qi[k].data_buffer_size = cpu_to_le32(rxq->rx_buf_size);
+ qi[k].qflags |=
+ cpu_to_le16(VIRTCHNL2_RX_DESC_SIZE_32BYTE);
+ qi[k].desc_ids = cpu_to_le64(rxq->rxdids);
+ }
+ }
+
+ /* Make sure accounting agrees */
+ if (k != totqs) {
+ err = -EINVAL;
+ goto error;
+ }
+
+ /* Chunk up the queue contexts into multiple messages to avoid
+ * sending a control queue message buffer that is too large
+ */
+ config_sz = sizeof(struct virtchnl2_config_rx_queues);
+ chunk_sz = sizeof(struct virtchnl2_rxq_info);
+
+ num_chunks = min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz),
+ totqs);
+ num_msgs = DIV_ROUND_UP(totqs, num_chunks);
+
+ buf_sz = struct_size(crq, qinfo, num_chunks);
+ crq = kzalloc(buf_sz, GFP_KERNEL);
+ if (!crq) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ mutex_lock(&vport->vc_buf_lock);
+
+ for (i = 0, k = 0; i < num_msgs; i++) {
+ memset(crq, 0, buf_sz);
+ crq->vport_id = cpu_to_le32(vport->vport_id);
+ crq->num_qinfo = cpu_to_le16(num_chunks);
+ memcpy(crq->qinfo, &qi[k], chunk_sz * num_chunks);
+
+ err = idpf_send_mb_msg(vport->adapter,
+ VIRTCHNL2_OP_CONFIG_RX_QUEUES,
+ buf_sz, (u8 *)crq);
+ if (err)
+ goto mbx_error;
+
+ err = idpf_wait_for_event(vport->adapter, vport,
+ IDPF_VC_CONFIG_RXQ,
+ IDPF_VC_CONFIG_RXQ_ERR);
+ if (err)
+ goto mbx_error;
+
+ k += num_chunks;
+ totqs -= num_chunks;
+ num_chunks = min(num_chunks, totqs);
+ /* Recalculate buffer size */
+ buf_sz = struct_size(crq, qinfo, num_chunks);
+ }
+
+mbx_error:
+ mutex_unlock(&vport->vc_buf_lock);
+ kfree(crq);
+error:
+ kfree(qi);
+
+ return err;
+}
+
+/**
+ * idpf_send_ena_dis_queues_msg - Send virtchnl enable or disable
+ * queues message
+ * @vport: virtual port data structure
+ * @vc_op: virtchnl op code to send
+ *
+ * Send enable or disable queues virtchnl message. Returns 0 on success,
+ * negative on failure.
+ */
+static int idpf_send_ena_dis_queues_msg(struct idpf_vport *vport, u32 vc_op)
+{
+ u32 num_msgs, num_chunks, num_txq, num_rxq, num_q;
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_del_ena_dis_queues *eq;
+ struct virtchnl2_queue_chunks *qcs;
+ struct virtchnl2_queue_chunk *qc;
+ u32 config_sz, chunk_sz, buf_sz;
+ int i, j, k = 0, err = 0;
+
+ /* validate virtchnl op */
+ switch (vc_op) {
+ case VIRTCHNL2_OP_ENABLE_QUEUES:
+ case VIRTCHNL2_OP_DISABLE_QUEUES:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ num_txq = vport->num_txq + vport->num_complq;
+ num_rxq = vport->num_rxq + vport->num_bufq;
+ num_q = num_txq + num_rxq;
+ buf_sz = sizeof(struct virtchnl2_queue_chunk) * num_q;
+ qc = kzalloc(buf_sz, GFP_KERNEL);
+ if (!qc)
+ return -ENOMEM;
+
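+ /* Fill the chunk array in a fixed order: Tx queues, completion queues
+ * (split model only), Rx queues, then buffer queues (split model only).
+ * The accounting checks below rely on this ordering.
+ */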
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+
+ for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
+ qc[k].type = cpu_to_le32(tx_qgrp->txqs[j]->q_type);
+ qc[k].start_queue_id = cpu_to_le32(tx_qgrp->txqs[j]->q_id);
+ qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
+ }
+ }
+ if (vport->num_txq != k) {
+ err = -EINVAL;
+ goto error;
+ }
+
+ if (!idpf_is_queue_model_split(vport->txq_model))
+ goto setup_rx;
+
+ for (i = 0; i < vport->num_txq_grp; i++, k++) {
+ struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+
+ qc[k].type = cpu_to_le32(tx_qgrp->complq->q_type);
+ qc[k].start_queue_id = cpu_to_le32(tx_qgrp->complq->q_id);
+ qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
+ }
+ if (vport->num_complq != (k - vport->num_txq)) {
+ err = -EINVAL;
+ goto error;
+ }
+
+setup_rx:
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ num_rxq = rx_qgrp->splitq.num_rxq_sets;
+ else
+ num_rxq = rx_qgrp->singleq.num_rxq;
+
+ for (j = 0; j < num_rxq; j++, k++) {
+ if (idpf_is_queue_model_split(vport->rxq_model)) {
+ qc[k].start_queue_id =
+ cpu_to_le32(rx_qgrp->splitq.rxq_sets[j]->rxq.q_id);
+ qc[k].type =
+ cpu_to_le32(rx_qgrp->splitq.rxq_sets[j]->rxq.q_type);
+ } else {
+ qc[k].start_queue_id =
+ cpu_to_le32(rx_qgrp->singleq.rxqs[j]->q_id);
+ qc[k].type =
+ cpu_to_le32(rx_qgrp->singleq.rxqs[j]->q_type);
+ }
+ qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
+ }
+ }
+ if (vport->num_rxq != k - (vport->num_txq + vport->num_complq)) {
+ err = -EINVAL;
+ goto error;
+ }
+
+ if (!idpf_is_queue_model_split(vport->rxq_model))
+ goto send_msg;
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+
+ for (j = 0; j < vport->num_bufqs_per_qgrp; j++, k++) {
+ struct idpf_queue *q;
+
+ q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+ qc[k].type = cpu_to_le32(q->q_type);
+ qc[k].start_queue_id = cpu_to_le32(q->q_id);
+ qc[k].num_queues = cpu_to_le32(IDPF_NUMQ_PER_CHUNK);
+ }
+ }
+ if (vport->num_bufq != k - (vport->num_txq +
+ vport->num_complq +
+ vport->num_rxq)) {
+ err = -EINVAL;
+ goto error;
+ }
+
+send_msg:
+ /* Chunk up the queue info into multiple messages */
+ config_sz = sizeof(struct virtchnl2_del_ena_dis_queues);
+ chunk_sz = sizeof(struct virtchnl2_queue_chunk);
+
+ num_chunks = min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz),
+ num_q);
+ num_msgs = DIV_ROUND_UP(num_q, num_chunks);
+
+ buf_sz = struct_size(eq, chunks.chunks, num_chunks);
+ eq = kzalloc(buf_sz, GFP_KERNEL);
+ if (!eq) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ mutex_lock(&vport->vc_buf_lock);
+
+ for (i = 0, k = 0; i < num_msgs; i++) {
+ memset(eq, 0, buf_sz);
+ eq->vport_id = cpu_to_le32(vport->vport_id);
+ eq->chunks.num_chunks = cpu_to_le16(num_chunks);
+ qcs = &eq->chunks;
+ memcpy(qcs->chunks, &qc[k], chunk_sz * num_chunks);
+
+ err = idpf_send_mb_msg(adapter, vc_op, buf_sz, (u8 *)eq);
+ if (err)
+ goto mbx_error;
+
+ if (vc_op == VIRTCHNL2_OP_ENABLE_QUEUES)
+ err = idpf_wait_for_event(adapter, vport,
+ IDPF_VC_ENA_QUEUES,
+ IDPF_VC_ENA_QUEUES_ERR);
+ else
+ err = idpf_min_wait_for_event(adapter, vport,
+ IDPF_VC_DIS_QUEUES,
+ IDPF_VC_DIS_QUEUES_ERR);
+ if (err)
+ goto mbx_error;
+
+ k += num_chunks;
+ num_q -= num_chunks;
+ num_chunks = min(num_chunks, num_q);
+ /* Recalculate buffer size */
+ buf_sz = struct_size(eq, chunks.chunks, num_chunks);
+ }
+
+mbx_error:
+ mutex_unlock(&vport->vc_buf_lock);
+ kfree(eq);
+error:
+ kfree(qc);
+
+ return err;
+}
+
+/**
+ * idpf_send_map_unmap_queue_vector_msg - Send virtchnl map or unmap queue
+ * vector message
+ * @vport: virtual port data structure
+ * @map: true for map and false for unmap
+ *
+ * Send map or unmap queue vector virtchnl message. Returns 0 on success,
+ * negative on failure.
+ */
+int idpf_send_map_unmap_queue_vector_msg(struct idpf_vport *vport, bool map)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_queue_vector_maps *vqvm;
+ struct virtchnl2_queue_vector *vqv;
+ u32 config_sz, chunk_sz, buf_sz;
+ u32 num_msgs, num_chunks, num_q;
+ int i, j, k = 0, err = 0;
+
+ num_q = vport->num_txq + vport->num_rxq;
+
+ buf_sz = sizeof(struct virtchnl2_queue_vector) * num_q;
+ vqv = kzalloc(buf_sz, GFP_KERNEL);
+ if (!vqv)
+ return -ENOMEM;
+
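+ /* In the split queue model, Tx queues are serviced by their completion
+ * queue's vector, so report that vector and ITR index instead of a
+ * per-txq one.
+ */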
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+
+ for (j = 0; j < tx_qgrp->num_txq; j++, k++) {
+ vqv[k].queue_type = cpu_to_le32(tx_qgrp->txqs[j]->q_type);
+ vqv[k].queue_id = cpu_to_le32(tx_qgrp->txqs[j]->q_id);
+
+ if (idpf_is_queue_model_split(vport->txq_model)) {
+ vqv[k].vector_id =
+ cpu_to_le16(tx_qgrp->complq->q_vector->v_idx);
+ vqv[k].itr_idx =
+ cpu_to_le32(tx_qgrp->complq->q_vector->tx_itr_idx);
+ } else {
+ vqv[k].vector_id =
+ cpu_to_le16(tx_qgrp->txqs[j]->q_vector->v_idx);
+ vqv[k].itr_idx =
+ cpu_to_le32(tx_qgrp->txqs[j]->q_vector->tx_itr_idx);
+ }
+ }
+ }
+
+ if (vport->num_txq != k) {
+ err = -EINVAL;
+ goto error;
+ }
+
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ u16 num_rxq;
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ num_rxq = rx_qgrp->splitq.num_rxq_sets;
+ else
+ num_rxq = rx_qgrp->singleq.num_rxq;
+
+ for (j = 0; j < num_rxq; j++, k++) {
+ struct idpf_queue *rxq;
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ rxq = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ else
+ rxq = rx_qgrp->singleq.rxqs[j];
+
+ vqv[k].queue_type = cpu_to_le32(rxq->q_type);
+ vqv[k].queue_id = cpu_to_le32(rxq->q_id);
+ vqv[k].vector_id = cpu_to_le16(rxq->q_vector->v_idx);
+ vqv[k].itr_idx = cpu_to_le32(rxq->q_vector->rx_itr_idx);
+ }
+ }
+
+ if (idpf_is_queue_model_split(vport->txq_model)) {
+ if (vport->num_rxq != k - vport->num_complq) {
+ err = -EINVAL;
+ goto error;
+ }
+ } else {
+ if (vport->num_rxq != k - vport->num_txq) {
+ err = -EINVAL;
+ goto error;
+ }
+ }
+
+ /* Chunk up the vector info into multiple messages */
+ config_sz = sizeof(struct virtchnl2_queue_vector_maps);
+ chunk_sz = sizeof(struct virtchnl2_queue_vector);
+
+ num_chunks = min_t(u32, IDPF_NUM_CHUNKS_PER_MSG(config_sz, chunk_sz),
+ num_q);
+ num_msgs = DIV_ROUND_UP(num_q, num_chunks);
+
+ buf_sz = struct_size(vqvm, qv_maps, num_chunks);
+ vqvm = kzalloc(buf_sz, GFP_KERNEL);
+ if (!vqvm) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ mutex_lock(&vport->vc_buf_lock);
+
+ for (i = 0, k = 0; i < num_msgs; i++) {
+ memset(vqvm, 0, buf_sz);
+ vqvm->vport_id = cpu_to_le32(vport->vport_id);
+ vqvm->num_qv_maps = cpu_to_le16(num_chunks);
+ memcpy(vqvm->qv_maps, &vqv[k], chunk_sz * num_chunks);
+
+ if (map) {
+ err = idpf_send_mb_msg(adapter,
+ VIRTCHNL2_OP_MAP_QUEUE_VECTOR,
+ buf_sz, (u8 *)vqvm);
+ if (!err)
+ err = idpf_wait_for_event(adapter, vport,
+ IDPF_VC_MAP_IRQ,
+ IDPF_VC_MAP_IRQ_ERR);
+ } else {
+ err = idpf_send_mb_msg(adapter,
+ VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR,
+ buf_sz, (u8 *)vqvm);
+ if (!err)
+ err =
+ idpf_min_wait_for_event(adapter, vport,
+ IDPF_VC_UNMAP_IRQ,
+ IDPF_VC_UNMAP_IRQ_ERR);
+ }
+ if (err)
+ goto mbx_error;
+
+ k += num_chunks;
+ num_q -= num_chunks;
+ num_chunks = min(num_chunks, num_q);
+ /* Recalculate buffer size */
+ buf_sz = struct_size(vqvm, qv_maps, num_chunks);
+ }
+
+mbx_error:
+ mutex_unlock(&vport->vc_buf_lock);
+ kfree(vqvm);
+error:
+ kfree(vqv);
+
+ return err;
+}
+
+/**
+ * idpf_send_enable_queues_msg - send enable queues virtchnl message
+ * @vport: Virtual port private data structure
+ *
+ * Will send enable queues virtchnl message. Returns 0 on success, negative on
+ * failure.
+ */
+int idpf_send_enable_queues_msg(struct idpf_vport *vport)
+{
+ return idpf_send_ena_dis_queues_msg(vport, VIRTCHNL2_OP_ENABLE_QUEUES);
+}
+
+/**
+ * idpf_send_disable_queues_msg - send disable queues virtchnl message
+ * @vport: Virtual port private data structure
+ *
+ * Will send disable queues virtchnl message. Returns 0 on success, negative
+ * on failure.
+ */
+int idpf_send_disable_queues_msg(struct idpf_vport *vport)
+{
+ int err, i;
+
+ err = idpf_send_ena_dis_queues_msg(vport, VIRTCHNL2_OP_DISABLE_QUEUES);
+ if (err)
+ return err;
+
+ /* switch to poll mode as interrupts will be disabled after disable
+ * queues virtchnl message is sent
+ */
+ for (i = 0; i < vport->num_txq; i++)
+ set_bit(__IDPF_Q_POLL_MODE, vport->txqs[i]->flags);
+
+ /* schedule the napi to receive all the marker packets */
+ for (i = 0; i < vport->num_q_vectors; i++)
+ napi_schedule(&vport->q_vectors[i].napi);
+
+ return idpf_wait_for_marker_event(vport);
+}
+
+/**
+ * idpf_convert_reg_to_queue_chunks - Copy queue chunk information into the
+ * destination chunk structure
+ * @dchunks: Destination chunks to store data to
+ * @schunks: Source chunks to copy data from
+ * @num_chunks: number of chunks to copy
+ */
+static void idpf_convert_reg_to_queue_chunks(struct virtchnl2_queue_chunk *dchunks,
+ struct virtchnl2_queue_reg_chunk *schunks,
+ u16 num_chunks)
+{
+ u16 i;
+
+ for (i = 0; i < num_chunks; i++) {
+ dchunks[i].type = schunks[i].type;
+ dchunks[i].start_queue_id = schunks[i].start_queue_id;
+ dchunks[i].num_queues = schunks[i].num_queues;
+ }
+}
+
+/**
+ * idpf_send_delete_queues_msg - send delete queues virtchnl message
+ * @vport: Virtual port private data structure
+ *
+ * Will send delete queues virtchnl message. Return 0 on success, negative on
+ * failure.
+ */
+int idpf_send_delete_queues_msg(struct idpf_vport *vport)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_create_vport *vport_params;
+ struct virtchnl2_queue_reg_chunks *chunks;
+ struct virtchnl2_del_ena_dis_queues *eq;
+ struct idpf_vport_config *vport_config;
+ u16 vport_idx = vport->idx;
+ int buf_size, err;
+ u16 num_chunks;
+
+ vport_config = adapter->vport_config[vport_idx];
+ if (vport_config->req_qs_chunks) {
+ struct virtchnl2_add_queues *vc_aq =
+ (struct virtchnl2_add_queues *)vport_config->req_qs_chunks;
+ chunks = &vc_aq->chunks;
+ } else {
+ vport_params = adapter->vport_params_recvd[vport_idx];
+ chunks = &vport_params->chunks;
+ }
+
+ num_chunks = le16_to_cpu(chunks->num_chunks);
+ buf_size = struct_size(eq, chunks.chunks, num_chunks);
+
+ eq = kzalloc(buf_size, GFP_KERNEL);
+ if (!eq)
+ return -ENOMEM;
+
+ eq->vport_id = cpu_to_le32(vport->vport_id);
+ eq->chunks.num_chunks = cpu_to_le16(num_chunks);
+
+ idpf_convert_reg_to_queue_chunks(eq->chunks.chunks, chunks->chunks,
+ num_chunks);
+
+ mutex_lock(&vport->vc_buf_lock);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_DEL_QUEUES,
+ buf_size, (u8 *)eq);
+ if (err)
+ goto rel_lock;
+
+ err = idpf_min_wait_for_event(adapter, vport, IDPF_VC_DEL_QUEUES,
+ IDPF_VC_DEL_QUEUES_ERR);
+
+rel_lock:
+ mutex_unlock(&vport->vc_buf_lock);
+ kfree(eq);
+
+ return err;
+}
+
+/**
+ * idpf_send_config_queues_msg - Send config queues virtchnl message
+ * @vport: Virtual port private data structure
+ *
+ * Will send config queues virtchnl message. Returns 0 on success, negative on
+ * failure.
+ */
+int idpf_send_config_queues_msg(struct idpf_vport *vport)
+{
+ int err;
+
+ err = idpf_send_config_tx_queues_msg(vport);
+ if (err)
+ return err;
+
+ return idpf_send_config_rx_queues_msg(vport);
+}
+
+/**
+ * idpf_send_add_queues_msg - Send virtchnl add queues message
+ * @vport: Virtual port private data structure
+ * @num_tx_q: number of transmit queues
+ * @num_complq: number of transmit completion queues
+ * @num_rx_q: number of receive queues
+ * @num_rx_bufq: number of receive buffer queues
+ *
+ * Returns 0 on success, negative on failure. vport _MUST_ be const here as
+ * we should not change any fields within vport itself in this function.
+ */
+int idpf_send_add_queues_msg(const struct idpf_vport *vport, u16 num_tx_q,
+ u16 num_complq, u16 num_rx_q, u16 num_rx_bufq)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct idpf_vport_config *vport_config;
+ struct virtchnl2_add_queues aq = { };
+ struct virtchnl2_add_queues *vc_msg;
+ u16 vport_idx = vport->idx;
+ int size, err;
+
+ vport_config = adapter->vport_config[vport_idx];
+
+ aq.vport_id = cpu_to_le32(vport->vport_id);
+ aq.num_tx_q = cpu_to_le16(num_tx_q);
+ aq.num_tx_complq = cpu_to_le16(num_complq);
+ aq.num_rx_q = cpu_to_le16(num_rx_q);
+ aq.num_rx_bufq = cpu_to_le16(num_rx_bufq);
+
+ mutex_lock(&((struct idpf_vport *)vport)->vc_buf_lock);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_ADD_QUEUES,
+ sizeof(struct virtchnl2_add_queues), (u8 *)&aq);
+ if (err)
+ goto rel_lock;
+
+ /* We want vport to be const to prevent incidental code changes making
+ * changes to the vport config. We're making a special exception here
+ * to discard const to use the virtchnl.
+ */
+ err = idpf_wait_for_event(adapter, (struct idpf_vport *)vport,
+ IDPF_VC_ADD_QUEUES, IDPF_VC_ADD_QUEUES_ERR);
+ if (err)
+ goto rel_lock;
+
+ kfree(vport_config->req_qs_chunks);
+ vport_config->req_qs_chunks = NULL;
+
+ vc_msg = (struct virtchnl2_add_queues *)vport->vc_msg;
+ /* Compare the queue counts in the response with the requested counts */
+ if (le16_to_cpu(vc_msg->num_tx_q) != num_tx_q ||
+ le16_to_cpu(vc_msg->num_rx_q) != num_rx_q ||
+ le16_to_cpu(vc_msg->num_tx_complq) != num_complq ||
+ le16_to_cpu(vc_msg->num_rx_bufq) != num_rx_bufq) {
+ err = -EINVAL;
+ goto rel_lock;
+ }
+
+ size = struct_size(vc_msg, chunks.chunks,
+ le16_to_cpu(vc_msg->chunks.num_chunks));
+ vport_config->req_qs_chunks = kmemdup(vc_msg, size, GFP_KERNEL);
+ if (!vport_config->req_qs_chunks) {
+ err = -ENOMEM;
+ goto rel_lock;
+ }
+
+rel_lock:
+ mutex_unlock(&((struct idpf_vport *)vport)->vc_buf_lock);
+
+ return err;
+}
+
+/**
+ * idpf_send_alloc_vectors_msg - Send virtchnl alloc vectors message
+ * @adapter: Driver specific private structure
+ * @num_vectors: number of vectors to be allocated
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int idpf_send_alloc_vectors_msg(struct idpf_adapter *adapter, u16 num_vectors)
+{
+ struct virtchnl2_alloc_vectors *alloc_vec, *rcvd_vec;
+ struct virtchnl2_alloc_vectors ac = { };
+ u16 num_vchunks;
+ int size, err;
+
+ ac.num_vectors = cpu_to_le16(num_vectors);
+
+ mutex_lock(&adapter->vc_buf_lock);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_ALLOC_VECTORS,
+ sizeof(ac), (u8 *)&ac);
+ if (err)
+ goto rel_lock;
+
+ err = idpf_wait_for_event(adapter, NULL, IDPF_VC_ALLOC_VECTORS,
+ IDPF_VC_ALLOC_VECTORS_ERR);
+ if (err)
+ goto rel_lock;
+
+ rcvd_vec = (struct virtchnl2_alloc_vectors *)adapter->vc_msg;
+ num_vchunks = le16_to_cpu(rcvd_vec->vchunks.num_vchunks);
+
+ size = struct_size(rcvd_vec, vchunks.vchunks, num_vchunks);
+ if (size > sizeof(adapter->vc_msg)) {
+ err = -EINVAL;
+ goto rel_lock;
+ }
+
+ kfree(adapter->req_vec_chunks);
+ adapter->req_vec_chunks = NULL;
+ adapter->req_vec_chunks = kmemdup(adapter->vc_msg, size, GFP_KERNEL);
+ if (!adapter->req_vec_chunks) {
+ err = -ENOMEM;
+ goto rel_lock;
+ }
+
+ alloc_vec = adapter->req_vec_chunks;
+ if (le16_to_cpu(alloc_vec->num_vectors) < num_vectors) {
+ kfree(adapter->req_vec_chunks);
+ adapter->req_vec_chunks = NULL;
+ err = -EINVAL;
+ }
+
+rel_lock:
+ mutex_unlock(&adapter->vc_buf_lock);
+
+ return err;
+}
+
+/**
+ * idpf_send_dealloc_vectors_msg - Send virtchnl deallocate vectors message
+ * @adapter: Driver specific private structure
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int idpf_send_dealloc_vectors_msg(struct idpf_adapter *adapter)
+{
+ struct virtchnl2_alloc_vectors *ac = adapter->req_vec_chunks;
+ struct virtchnl2_vector_chunks *vcs = &ac->vchunks;
+ int buf_size, err;
+
+ buf_size = struct_size(vcs, vchunks, le16_to_cpu(vcs->num_vchunks));
+
+ mutex_lock(&adapter->vc_buf_lock);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_DEALLOC_VECTORS, buf_size,
+ (u8 *)vcs);
+ if (err)
+ goto rel_lock;
+
+ err = idpf_min_wait_for_event(adapter, NULL, IDPF_VC_DEALLOC_VECTORS,
+ IDPF_VC_DEALLOC_VECTORS_ERR);
+ if (err)
+ goto rel_lock;
+
+ kfree(adapter->req_vec_chunks);
+ adapter->req_vec_chunks = NULL;
+
+rel_lock:
+ mutex_unlock(&adapter->vc_buf_lock);
+
+ return err;
+}
+
+/**
+ * idpf_get_max_vfs - Get max number of VFs supported
+ * @adapter: Driver specific private structure
+ *
+ * Returns max number of VFs
+ */
+static int idpf_get_max_vfs(struct idpf_adapter *adapter)
+{
+ return le16_to_cpu(adapter->caps.max_sriov_vfs);
+}
+
+/**
+ * idpf_send_set_sriov_vfs_msg - Send virtchnl set sriov vfs message
+ * @adapter: Driver specific private structure
+ * @num_vfs: number of virtual functions to be created
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int idpf_send_set_sriov_vfs_msg(struct idpf_adapter *adapter, u16 num_vfs)
+{
+ struct virtchnl2_sriov_vfs_info svi = { };
+ int err;
+
+ svi.num_vfs = cpu_to_le16(num_vfs);
+
+ mutex_lock(&adapter->vc_buf_lock);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_SET_SRIOV_VFS,
+ sizeof(svi), (u8 *)&svi);
+ if (err)
+ goto rel_lock;
+
+ err = idpf_wait_for_event(adapter, NULL, IDPF_VC_SET_SRIOV_VFS,
+ IDPF_VC_SET_SRIOV_VFS_ERR);
+
+rel_lock:
+ mutex_unlock(&adapter->vc_buf_lock);
+
+ return err;
+}
+
+/**
+ * idpf_send_get_stats_msg - Send virtchnl get statistics message
+ * @vport: vport to get stats for
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int idpf_send_get_stats_msg(struct idpf_vport *vport)
+{
+ struct idpf_netdev_priv *np = netdev_priv(vport->netdev);
+ struct rtnl_link_stats64 *netstats = &np->netstats;
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_vport_stats stats_msg = { };
+ struct virtchnl2_vport_stats *stats;
+ int err;
+
+ /* Don't send get_stats message if the link is down */
+ if (np->state <= __IDPF_VPORT_DOWN)
+ return 0;
+
+ stats_msg.vport_id = cpu_to_le32(vport->vport_id);
+
+ mutex_lock(&vport->vc_buf_lock);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_GET_STATS,
+ sizeof(struct virtchnl2_vport_stats),
+ (u8 *)&stats_msg);
+ if (err)
+ goto rel_lock;
+
+ err = idpf_wait_for_event(adapter, vport, IDPF_VC_GET_STATS,
+ IDPF_VC_GET_STATS_ERR);
+ if (err)
+ goto rel_lock;
+
+ stats = (struct virtchnl2_vport_stats *)vport->vc_msg;
+
+ spin_lock_bh(&np->stats_lock);
+
+ netstats->rx_packets = le64_to_cpu(stats->rx_unicast) +
+ le64_to_cpu(stats->rx_multicast) +
+ le64_to_cpu(stats->rx_broadcast);
+ netstats->rx_bytes = le64_to_cpu(stats->rx_bytes);
+ netstats->rx_dropped = le64_to_cpu(stats->rx_discards);
+ netstats->rx_over_errors = le64_to_cpu(stats->rx_overflow_drop);
+ netstats->rx_length_errors = le64_to_cpu(stats->rx_invalid_frame_length);
+
+ netstats->tx_packets = le64_to_cpu(stats->tx_unicast) +
+ le64_to_cpu(stats->tx_multicast) +
+ le64_to_cpu(stats->tx_broadcast);
+ netstats->tx_bytes = le64_to_cpu(stats->tx_bytes);
+ netstats->tx_errors = le64_to_cpu(stats->tx_errors);
+ netstats->tx_dropped = le64_to_cpu(stats->tx_discards);
+
+ vport->port_stats.vport_stats = *stats;
+
+ spin_unlock_bh(&np->stats_lock);
+
+rel_lock:
+ mutex_unlock(&vport->vc_buf_lock);
+
+ return err;
+}
+
+/**
+ * idpf_send_get_set_rss_lut_msg - Send virtchnl get or set rss lut message
+ * @vport: virtual port data structure
+ * @get: flag to get or set the RSS lookup table
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int idpf_send_get_set_rss_lut_msg(struct idpf_vport *vport, bool get)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_rss_lut *recv_rl;
+ struct idpf_rss_data *rss_data;
+ struct virtchnl2_rss_lut *rl;
+ int buf_size, lut_buf_size;
+ int i, err;
+
+ rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
+ buf_size = struct_size(rl, lut, rss_data->rss_lut_size);
+ rl = kzalloc(buf_size, GFP_KERNEL);
+ if (!rl)
+ return -ENOMEM;
+
+ rl->vport_id = cpu_to_le32(vport->vport_id);
+ mutex_lock(&vport->vc_buf_lock);
+
+ if (!get) {
+ rl->lut_entries = cpu_to_le16(rss_data->rss_lut_size);
+ for (i = 0; i < rss_data->rss_lut_size; i++)
+ rl->lut[i] = cpu_to_le32(rss_data->rss_lut[i]);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_SET_RSS_LUT,
+ buf_size, (u8 *)rl);
+ if (err)
+ goto free_mem;
+
+ err = idpf_wait_for_event(adapter, vport, IDPF_VC_SET_RSS_LUT,
+ IDPF_VC_SET_RSS_LUT_ERR);
+
+ goto free_mem;
+ }
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_GET_RSS_LUT,
+ buf_size, (u8 *)rl);
+ if (err)
+ goto free_mem;
+
+ err = idpf_wait_for_event(adapter, vport, IDPF_VC_GET_RSS_LUT,
+ IDPF_VC_GET_RSS_LUT_ERR);
+ if (err)
+ goto free_mem;
+
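+ /* If the device reports a LUT size different from the cached one,
+ * resize the local LUT buffer before copying the new table.
+ */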
+ recv_rl = (struct virtchnl2_rss_lut *)vport->vc_msg;
+ if (rss_data->rss_lut_size == le16_to_cpu(recv_rl->lut_entries))
+ goto do_memcpy;
+
+ rss_data->rss_lut_size = le16_to_cpu(recv_rl->lut_entries);
+ kfree(rss_data->rss_lut);
+
+ lut_buf_size = rss_data->rss_lut_size * sizeof(u32);
+ rss_data->rss_lut = kzalloc(lut_buf_size, GFP_KERNEL);
+ if (!rss_data->rss_lut) {
+ rss_data->rss_lut_size = 0;
+ err = -ENOMEM;
+ goto free_mem;
+ }
+
+do_memcpy:
+ memcpy(rss_data->rss_lut, vport->vc_msg, rss_data->rss_lut_size);
+free_mem:
+ mutex_unlock(&vport->vc_buf_lock);
+ kfree(rl);
+
+ return err;
+}
+
+/**
+ * idpf_send_get_set_rss_key_msg - Send virtchnl get or set rss key message
+ * @vport: virtual port data structure
+ * @get: flag to get or set the RSS key
+ *
+ * Returns 0 on success, negative on failure
+ */
+int idpf_send_get_set_rss_key_msg(struct idpf_vport *vport, bool get)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_rss_key *recv_rk;
+ struct idpf_rss_data *rss_data;
+ struct virtchnl2_rss_key *rk;
+ int i, buf_size, err;
+
+ rss_data = &adapter->vport_config[vport->idx]->user_config.rss_data;
+ buf_size = struct_size(rk, key_flex, rss_data->rss_key_size);
+ rk = kzalloc(buf_size, GFP_KERNEL);
+ if (!rk)
+ return -ENOMEM;
+
+ rk->vport_id = cpu_to_le32(vport->vport_id);
+ mutex_lock(&vport->vc_buf_lock);
+
+ if (get) {
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_GET_RSS_KEY,
+ buf_size, (u8 *)rk);
+ if (err)
+ goto error;
+
+ err = idpf_wait_for_event(adapter, vport, IDPF_VC_GET_RSS_KEY,
+ IDPF_VC_GET_RSS_KEY_ERR);
+ if (err)
+ goto error;
+
+ recv_rk = (struct virtchnl2_rss_key *)vport->vc_msg;
+ if (rss_data->rss_key_size !=
+ le16_to_cpu(recv_rk->key_len)) {
+ rss_data->rss_key_size =
+ min_t(u16, NETDEV_RSS_KEY_LEN,
+ le16_to_cpu(recv_rk->key_len));
+ kfree(rss_data->rss_key);
+ rss_data->rss_key = kzalloc(rss_data->rss_key_size,
+ GFP_KERNEL);
+ if (!rss_data->rss_key) {
+ rss_data->rss_key_size = 0;
+ err = -ENOMEM;
+ goto error;
+ }
+ }
+ memcpy(rss_data->rss_key, recv_rk->key_flex,
+ rss_data->rss_key_size);
+ } else {
+ rk->key_len = cpu_to_le16(rss_data->rss_key_size);
+ for (i = 0; i < rss_data->rss_key_size; i++)
+ rk->key_flex[i] = rss_data->rss_key[i];
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_SET_RSS_KEY,
+ buf_size, (u8 *)rk);
+ if (err)
+ goto error;
+
+ err = idpf_wait_for_event(adapter, vport, IDPF_VC_SET_RSS_KEY,
+ IDPF_VC_SET_RSS_KEY_ERR);
+ }
+
+error:
+ mutex_unlock(&vport->vc_buf_lock);
+ kfree(rk);
+
+ return err;
+}
+
+/**
+ * idpf_fill_ptype_lookup - Fill L3 specific fields in ptype lookup table
+ * @ptype: ptype lookup table
+ * @pstate: state machine for ptype lookup table
+ * @ipv4: true for IPv4, false for IPv6
+ * @frag: fragmentation allowed
+ */
+static void idpf_fill_ptype_lookup(struct idpf_rx_ptype_decoded *ptype,
+ struct idpf_ptype_state *pstate,
+ bool ipv4, bool frag)
+{
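+ /* The first IP header seen becomes the outer header; a second one
+ * marks an IP-in-IP tunnel.
+ */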
+ if (!pstate->outer_ip || !pstate->outer_frag) {
+ ptype->outer_ip = IDPF_RX_PTYPE_OUTER_IP;
+ pstate->outer_ip = true;
+
+ if (ipv4)
+ ptype->outer_ip_ver = IDPF_RX_PTYPE_OUTER_IPV4;
+ else
+ ptype->outer_ip_ver = IDPF_RX_PTYPE_OUTER_IPV6;
+
+ if (frag) {
+ ptype->outer_frag = IDPF_RX_PTYPE_FRAG;
+ pstate->outer_frag = true;
+ }
+ } else {
+ ptype->tunnel_type = IDPF_RX_PTYPE_TUNNEL_IP_IP;
+ pstate->tunnel_state = IDPF_PTYPE_TUNNEL_IP;
+
+ if (ipv4)
+ ptype->tunnel_end_prot =
+ IDPF_RX_PTYPE_TUNNEL_END_IPV4;
+ else
+ ptype->tunnel_end_prot =
+ IDPF_RX_PTYPE_TUNNEL_END_IPV6;
+
+ if (frag)
+ ptype->tunnel_end_frag = IDPF_RX_PTYPE_FRAG;
+ }
+}
+
+/**
+ * idpf_send_get_rx_ptype_msg - Send virtchnl for ptype info
+ * @vport: virtual port data structure
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int idpf_send_get_rx_ptype_msg(struct idpf_vport *vport)
+{
+ struct idpf_rx_ptype_decoded *ptype_lkup = vport->rx_ptype_lkup;
+ struct virtchnl2_get_ptype_info get_ptype_info;
+ int max_ptype, ptypes_recvd = 0, ptype_offset;
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_get_ptype_info *ptype_info;
+ u16 next_ptype_id = 0;
+ int err = 0, i, j, k;
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ max_ptype = IDPF_RX_MAX_PTYPE;
+ else
+ max_ptype = IDPF_RX_MAX_BASE_PTYPE;
+
+ memset(vport->rx_ptype_lkup, 0, sizeof(vport->rx_ptype_lkup));
+
+ ptype_info = kzalloc(IDPF_CTLQ_MAX_BUF_LEN, GFP_KERNEL);
+ if (!ptype_info)
+ return -ENOMEM;
+
+ mutex_lock(&adapter->vc_buf_lock);
+
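+ /* Request ptype info in windows of at most IDPF_RX_MAX_PTYPES_PER_BUF
+ * entries until max_ptype entries have been parsed or the device marks
+ * the end of the list with IDPF_INVALID_PTYPE_ID.
+ */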
+ while (next_ptype_id < max_ptype) {
+ get_ptype_info.start_ptype_id = cpu_to_le16(next_ptype_id);
+
+ if ((next_ptype_id + IDPF_RX_MAX_PTYPES_PER_BUF) > max_ptype)
+ get_ptype_info.num_ptypes =
+ cpu_to_le16(max_ptype - next_ptype_id);
+ else
+ get_ptype_info.num_ptypes =
+ cpu_to_le16(IDPF_RX_MAX_PTYPES_PER_BUF);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_GET_PTYPE_INFO,
+ sizeof(struct virtchnl2_get_ptype_info),
+ (u8 *)&get_ptype_info);
+ if (err)
+ goto vc_buf_unlock;
+
+ err = idpf_wait_for_event(adapter, NULL, IDPF_VC_GET_PTYPE_INFO,
+ IDPF_VC_GET_PTYPE_INFO_ERR);
+ if (err)
+ goto vc_buf_unlock;
+
+ memcpy(ptype_info, adapter->vc_msg, IDPF_CTLQ_MAX_BUF_LEN);
+
+ ptypes_recvd += le16_to_cpu(ptype_info->num_ptypes);
+ if (ptypes_recvd > max_ptype) {
+ err = -EINVAL;
+ goto vc_buf_unlock;
+ }
+
+ next_ptype_id = le16_to_cpu(get_ptype_info.start_ptype_id) +
+ le16_to_cpu(get_ptype_info.num_ptypes);
+
+ ptype_offset = IDPF_RX_PTYPE_HDR_SZ;
+
+ for (i = 0; i < le16_to_cpu(ptype_info->num_ptypes); i++) {
+ struct idpf_ptype_state pstate = { };
+ struct virtchnl2_ptype *ptype;
+ u16 id;
+
+ ptype = (struct virtchnl2_ptype *)
+ ((u8 *)ptype_info + ptype_offset);
+
+ ptype_offset += IDPF_GET_PTYPE_SIZE(ptype);
+ if (ptype_offset > IDPF_CTLQ_MAX_BUF_LEN) {
+ err = -EINVAL;
+ goto vc_buf_unlock;
+ }
+
+ /* 0xFFFF indicates end of ptypes */
+ if (le16_to_cpu(ptype->ptype_id_10) ==
+ IDPF_INVALID_PTYPE_ID) {
+ err = 0;
+ goto vc_buf_unlock;
+ }
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ k = le16_to_cpu(ptype->ptype_id_10);
+ else
+ k = ptype->ptype_id_8;
+
+ if (ptype->proto_id_count)
+ ptype_lkup[k].known = 1;
+
+ for (j = 0; j < ptype->proto_id_count; j++) {
+ id = le16_to_cpu(ptype->proto_id[j]);
+ switch (id) {
+ case VIRTCHNL2_PROTO_HDR_GRE:
+ if (pstate.tunnel_state ==
+ IDPF_PTYPE_TUNNEL_IP) {
+ ptype_lkup[k].tunnel_type =
+ IDPF_RX_PTYPE_TUNNEL_IP_GRENAT;
+ pstate.tunnel_state |=
+ IDPF_PTYPE_TUNNEL_IP_GRENAT;
+ }
+ break;
+ case VIRTCHNL2_PROTO_HDR_MAC:
+ ptype_lkup[k].outer_ip =
+ IDPF_RX_PTYPE_OUTER_L2;
+ if (pstate.tunnel_state ==
+ IDPF_TUN_IP_GRE) {
+ ptype_lkup[k].tunnel_type =
+ IDPF_RX_PTYPE_TUNNEL_IP_GRENAT_MAC;
+ pstate.tunnel_state |=
+ IDPF_PTYPE_TUNNEL_IP_GRENAT_MAC;
+ }
+ break;
+ case VIRTCHNL2_PROTO_HDR_IPV4:
+ idpf_fill_ptype_lookup(&ptype_lkup[k],
+ &pstate, true,
+ false);
+ break;
+ case VIRTCHNL2_PROTO_HDR_IPV6:
+ idpf_fill_ptype_lookup(&ptype_lkup[k],
+ &pstate, false,
+ false);
+ break;
+ case VIRTCHNL2_PROTO_HDR_IPV4_FRAG:
+ idpf_fill_ptype_lookup(&ptype_lkup[k],
+ &pstate, true,
+ true);
+ break;
+ case VIRTCHNL2_PROTO_HDR_IPV6_FRAG:
+ idpf_fill_ptype_lookup(&ptype_lkup[k],
+ &pstate, false,
+ true);
+ break;
+ case VIRTCHNL2_PROTO_HDR_UDP:
+ ptype_lkup[k].inner_prot =
+ IDPF_RX_PTYPE_INNER_PROT_UDP;
+ break;
+ case VIRTCHNL2_PROTO_HDR_TCP:
+ ptype_lkup[k].inner_prot =
+ IDPF_RX_PTYPE_INNER_PROT_TCP;
+ break;
+ case VIRTCHNL2_PROTO_HDR_SCTP:
+ ptype_lkup[k].inner_prot =
+ IDPF_RX_PTYPE_INNER_PROT_SCTP;
+ break;
+ case VIRTCHNL2_PROTO_HDR_ICMP:
+ ptype_lkup[k].inner_prot =
+ IDPF_RX_PTYPE_INNER_PROT_ICMP;
+ break;
+ case VIRTCHNL2_PROTO_HDR_PAY:
+ ptype_lkup[k].payload_layer =
+ IDPF_RX_PTYPE_PAYLOAD_LAYER_PAY2;
+ break;
+ case VIRTCHNL2_PROTO_HDR_ICMPV6:
+ case VIRTCHNL2_PROTO_HDR_IPV6_EH:
+ case VIRTCHNL2_PROTO_HDR_PRE_MAC:
+ case VIRTCHNL2_PROTO_HDR_POST_MAC:
+ case VIRTCHNL2_PROTO_HDR_ETHERTYPE:
+ case VIRTCHNL2_PROTO_HDR_SVLAN:
+ case VIRTCHNL2_PROTO_HDR_CVLAN:
+ case VIRTCHNL2_PROTO_HDR_MPLS:
+ case VIRTCHNL2_PROTO_HDR_MMPLS:
+ case VIRTCHNL2_PROTO_HDR_PTP:
+ case VIRTCHNL2_PROTO_HDR_CTRL:
+ case VIRTCHNL2_PROTO_HDR_LLDP:
+ case VIRTCHNL2_PROTO_HDR_ARP:
+ case VIRTCHNL2_PROTO_HDR_ECP:
+ case VIRTCHNL2_PROTO_HDR_EAPOL:
+ case VIRTCHNL2_PROTO_HDR_PPPOD:
+ case VIRTCHNL2_PROTO_HDR_PPPOE:
+ case VIRTCHNL2_PROTO_HDR_IGMP:
+ case VIRTCHNL2_PROTO_HDR_AH:
+ case VIRTCHNL2_PROTO_HDR_ESP:
+ case VIRTCHNL2_PROTO_HDR_IKE:
+ case VIRTCHNL2_PROTO_HDR_NATT_KEEP:
+ case VIRTCHNL2_PROTO_HDR_L2TPV2:
+ case VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL:
+ case VIRTCHNL2_PROTO_HDR_L2TPV3:
+ case VIRTCHNL2_PROTO_HDR_GTP:
+ case VIRTCHNL2_PROTO_HDR_GTP_EH:
+ case VIRTCHNL2_PROTO_HDR_GTPCV2:
+ case VIRTCHNL2_PROTO_HDR_GTPC_TEID:
+ case VIRTCHNL2_PROTO_HDR_GTPU:
+ case VIRTCHNL2_PROTO_HDR_GTPU_UL:
+ case VIRTCHNL2_PROTO_HDR_GTPU_DL:
+ case VIRTCHNL2_PROTO_HDR_ECPRI:
+ case VIRTCHNL2_PROTO_HDR_VRRP:
+ case VIRTCHNL2_PROTO_HDR_OSPF:
+ case VIRTCHNL2_PROTO_HDR_TUN:
+ case VIRTCHNL2_PROTO_HDR_NVGRE:
+ case VIRTCHNL2_PROTO_HDR_VXLAN:
+ case VIRTCHNL2_PROTO_HDR_VXLAN_GPE:
+ case VIRTCHNL2_PROTO_HDR_GENEVE:
+ case VIRTCHNL2_PROTO_HDR_NSH:
+ case VIRTCHNL2_PROTO_HDR_QUIC:
+ case VIRTCHNL2_PROTO_HDR_PFCP:
+ case VIRTCHNL2_PROTO_HDR_PFCP_NODE:
+ case VIRTCHNL2_PROTO_HDR_PFCP_SESSION:
+ case VIRTCHNL2_PROTO_HDR_RTP:
+ case VIRTCHNL2_PROTO_HDR_NO_PROTO:
+ break;
+ default:
+ break;
+ }
+ }
+ }
+ }
+
+vc_buf_unlock:
+ mutex_unlock(&adapter->vc_buf_lock);
+ kfree(ptype_info);
+
+ return err;
+}
+
+/**
+ * idpf_send_ena_dis_loopback_msg - Send virtchnl enable/disable loopback
+ * message
+ * @vport: virtual port data structure
+ *
+ * Returns 0 on success, negative on failure.
+ */
+int idpf_send_ena_dis_loopback_msg(struct idpf_vport *vport)
+{
+ struct virtchnl2_loopback loopback;
+ int err;
+
+ loopback.vport_id = cpu_to_le32(vport->vport_id);
+ loopback.enable = idpf_is_feature_ena(vport, NETIF_F_LOOPBACK);
+
+ mutex_lock(&vport->vc_buf_lock);
+
+ err = idpf_send_mb_msg(vport->adapter, VIRTCHNL2_OP_LOOPBACK,
+ sizeof(loopback), (u8 *)&loopback);
+ if (err)
+ goto rel_lock;
+
+ err = idpf_wait_for_event(vport->adapter, vport,
+ IDPF_VC_LOOPBACK_STATE,
+ IDPF_VC_LOOPBACK_STATE_ERR);
+
+rel_lock:
+ mutex_unlock(&vport->vc_buf_lock);
+
+ return err;
+}
+
+/**
+ * idpf_find_ctlq - Given a type and id, find ctlq info
+ * @hw: hardware struct
+ * @type: type of ctrlq to find
+ * @id: ctlq id to find
+ *
+ * Returns pointer to found ctlq info struct, NULL otherwise.
+ */
+static struct idpf_ctlq_info *idpf_find_ctlq(struct idpf_hw *hw,
+ enum idpf_ctlq_type type, int id)
+{
+ struct idpf_ctlq_info *cq, *tmp;
+
+ list_for_each_entry_safe(cq, tmp, &hw->cq_list_head, cq_list)
+ if (cq->q_id == id && cq->cq_type == type)
+ return cq;
+
+ return NULL;
+}
+
+/**
+ * idpf_init_dflt_mbx - Setup default mailbox parameters and make request
+ * @adapter: adapter info struct
+ *
+ * Returns 0 on success, negative otherwise
+ */
+int idpf_init_dflt_mbx(struct idpf_adapter *adapter)
+{
+ struct idpf_ctlq_create_info ctlq_info[] = {
+ {
+ .type = IDPF_CTLQ_TYPE_MAILBOX_TX,
+ .id = IDPF_DFLT_MBX_ID,
+ .len = IDPF_DFLT_MBX_Q_LEN,
+ .buf_size = IDPF_CTLQ_MAX_BUF_LEN
+ },
+ {
+ .type = IDPF_CTLQ_TYPE_MAILBOX_RX,
+ .id = IDPF_DFLT_MBX_ID,
+ .len = IDPF_DFLT_MBX_Q_LEN,
+ .buf_size = IDPF_CTLQ_MAX_BUF_LEN
+ }
+ };
+ struct idpf_hw *hw = &adapter->hw;
+ int err;
+
+ adapter->dev_ops.reg_ops.ctlq_reg_init(ctlq_info);
+
+ err = idpf_ctlq_init(hw, IDPF_NUM_DFLT_MBX_Q, ctlq_info);
+ if (err)
+ return err;
+
+ hw->asq = idpf_find_ctlq(hw, IDPF_CTLQ_TYPE_MAILBOX_TX,
+ IDPF_DFLT_MBX_ID);
+ hw->arq = idpf_find_ctlq(hw, IDPF_CTLQ_TYPE_MAILBOX_RX,
+ IDPF_DFLT_MBX_ID);
+
+ if (!hw->asq || !hw->arq) {
+ idpf_ctlq_deinit(hw);
+
+ return -ENOENT;
+ }
+
+ adapter->state = __IDPF_STARTUP;
+
+ return 0;
+}
+
+/**
+ * idpf_deinit_dflt_mbx - Free up ctlqs setup
+ * @adapter: Driver specific private data structure
+ */
+void idpf_deinit_dflt_mbx(struct idpf_adapter *adapter)
+{
+ if (adapter->hw.arq && adapter->hw.asq) {
+ idpf_mb_clean(adapter);
+ idpf_ctlq_deinit(&adapter->hw);
+ }
+ adapter->hw.arq = NULL;
+ adapter->hw.asq = NULL;
+}
+
+/**
+ * idpf_vport_params_buf_rel - Release memory for mailbox resources
+ * @adapter: Driver specific private data structure
+ *
+ * Will release the memory used to hold the vport parameters received over the
+ * mailbox
+ */
+static void idpf_vport_params_buf_rel(struct idpf_adapter *adapter)
+{
+ kfree(adapter->vport_params_recvd);
+ adapter->vport_params_recvd = NULL;
+ kfree(adapter->vport_params_reqd);
+ adapter->vport_params_reqd = NULL;
+ kfree(adapter->vport_ids);
+ adapter->vport_ids = NULL;
+}
+
+/**
+ * idpf_vport_params_buf_alloc - Allocate memory for mailbox resources
+ * @adapter: Driver specific private data structure
+ *
+ * Will allocate memory to hold the vport parameters received over the mailbox
+ */
+static int idpf_vport_params_buf_alloc(struct idpf_adapter *adapter)
+{
+ u16 num_max_vports = idpf_get_max_vports(adapter);
+
+ adapter->vport_params_reqd = kcalloc(num_max_vports,
+ sizeof(*adapter->vport_params_reqd),
+ GFP_KERNEL);
+ if (!adapter->vport_params_reqd)
+ return -ENOMEM;
+
+ adapter->vport_params_recvd = kcalloc(num_max_vports,
+ sizeof(*adapter->vport_params_recvd),
+ GFP_KERNEL);
+ if (!adapter->vport_params_recvd)
+ goto err_mem;
+
+ adapter->vport_ids = kcalloc(num_max_vports, sizeof(u32), GFP_KERNEL);
+ if (!adapter->vport_ids)
+ goto err_mem;
+
+ if (adapter->vport_config)
+ return 0;
+
+ adapter->vport_config = kcalloc(num_max_vports,
+ sizeof(*adapter->vport_config),
+ GFP_KERNEL);
+ if (!adapter->vport_config)
+ goto err_mem;
+
+ return 0;
+
+err_mem:
+ idpf_vport_params_buf_rel(adapter);
+
+ return -ENOMEM;
+}
+
+/**
+ * idpf_vc_core_init - Initialize state machine and get driver specific
+ * resources
+ * @adapter: Driver specific private structure
+ *
+ * This function will initialize the state machine and request all necessary
+ * resources required by the device driver. Once the state machine is
+ * initialized, it allocates memory to store vport-specific information and
+ * requests the required interrupts.
+ *
+ * Returns 0 on success, -EAGAIN if the function will be called again,
+ * otherwise negative on failure.
+ */
+int idpf_vc_core_init(struct idpf_adapter *adapter)
+{
+ int task_delay = 30;
+ u16 num_max_vports;
+ int err = 0;
+
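+ /* Walk the startup state machine: negotiate the virtchnl version,
+ * then request device capabilities. Failures (other than a fatal -EIO
+ * on version receive) fall through to init_failed, which resets to
+ * __IDPF_STARTUP and schedules a retry.
+ */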
+ while (adapter->state != __IDPF_INIT_SW) {
+ switch (adapter->state) {
+ case __IDPF_STARTUP:
+ if (idpf_send_ver_msg(adapter))
+ goto init_failed;
+ adapter->state = __IDPF_VER_CHECK;
+ goto restart;
+ case __IDPF_VER_CHECK:
+ err = idpf_recv_ver_msg(adapter);
+ if (err == -EIO) {
+ return err;
+ } else if (err == -EAGAIN) {
+ adapter->state = __IDPF_STARTUP;
+ goto restart;
+ } else if (err) {
+ goto init_failed;
+ }
+ if (idpf_send_get_caps_msg(adapter))
+ goto init_failed;
+ adapter->state = __IDPF_GET_CAPS;
+ goto restart;
+ case __IDPF_GET_CAPS:
+ if (idpf_recv_get_caps_msg(adapter))
+ goto init_failed;
+ adapter->state = __IDPF_INIT_SW;
+ break;
+ default:
+ dev_err(&adapter->pdev->dev, "Device is in bad state: %d\n",
+ adapter->state);
+ goto init_failed;
+ }
+ break;
+restart:
+ /* Give enough time before proceeding further with the
+ * state machine
+ */
+ msleep(task_delay);
+ }
+
+ pci_sriov_set_totalvfs(adapter->pdev, idpf_get_max_vfs(adapter));
+ num_max_vports = idpf_get_max_vports(adapter);
+ adapter->max_vports = num_max_vports;
+ adapter->vports = kcalloc(num_max_vports, sizeof(*adapter->vports),
+ GFP_KERNEL);
+ if (!adapter->vports)
+ return -ENOMEM;
+
+ if (!adapter->netdevs) {
+ adapter->netdevs = kcalloc(num_max_vports,
+ sizeof(struct net_device *),
+ GFP_KERNEL);
+ if (!adapter->netdevs) {
+ err = -ENOMEM;
+ goto err_netdev_alloc;
+ }
+ }
+
+ err = idpf_vport_params_buf_alloc(adapter);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "Failed to alloc vport params buffer: %d\n",
+ err);
+ goto err_netdev_alloc;
+ }
+
+ /* Start the mailbox task before requesting vectors. This will ensure
+ * the vector information response from the mailbox is handled
+ */
+ queue_delayed_work(adapter->mbx_wq, &adapter->mbx_task, 0);
+
+ queue_delayed_work(adapter->serv_wq, &adapter->serv_task,
+ msecs_to_jiffies(5 * (adapter->pdev->devfn & 0x07)));
+
+ err = idpf_intr_req(adapter);
+ if (err) {
+ dev_err(&adapter->pdev->dev, "failed to enable interrupt vectors: %d\n",
+ err);
+ goto err_intr_req;
+ }
+
+ idpf_init_avail_queues(adapter);
+
+ /* Skew the delay for init tasks for each function based on fn number
+ * to prevent every function from making the same call simultaneously.
+ */
+ queue_delayed_work(adapter->init_wq, &adapter->init_task,
+ msecs_to_jiffies(5 * (adapter->pdev->devfn & 0x07)));
+
+ goto no_err;
+
+err_intr_req:
+ cancel_delayed_work_sync(&adapter->serv_task);
+ cancel_delayed_work_sync(&adapter->mbx_task);
+ idpf_vport_params_buf_rel(adapter);
+err_netdev_alloc:
+ kfree(adapter->vports);
+ adapter->vports = NULL;
+no_err:
+ return err;
+
+init_failed:
+ /* Don't retry if we're trying to go down, just bail. */
+ if (test_bit(IDPF_REMOVE_IN_PROG, adapter->flags))
+ return err;
+
+ if (++adapter->mb_wait_count > IDPF_MB_MAX_ERR) {
+ dev_err(&adapter->pdev->dev, "Failed to establish mailbox communications with hardware\n");
+
+ return -EFAULT;
+ }
+ /* If we reached here, the mailbox queue initialization register writes
+ * might not have taken effect. Retry initializing the mailbox.
+ */
+ adapter->state = __IDPF_STARTUP;
+ idpf_deinit_dflt_mbx(adapter);
+ set_bit(IDPF_HR_DRV_LOAD, adapter->flags);
+ queue_delayed_work(adapter->vc_event_wq, &adapter->vc_event_task,
+ msecs_to_jiffies(task_delay));
+
+ return -EAGAIN;
+}
+
+/**
+ * idpf_vc_core_deinit - Device deinit routine
+ * @adapter: Driver specific private structure
+ *
+ */
+void idpf_vc_core_deinit(struct idpf_adapter *adapter)
+{
+ int i;
+
+ idpf_deinit_task(adapter);
+ idpf_intr_rel(adapter);
+ /* Set all bits as we don't know which vc_state the vchnl_wq is
+ * waiting on, and wake up the virtchnl workqueue even if it is waiting
+ * for a response, as we are going down
+ */
+ for (i = 0; i < IDPF_VC_NBITS; i++)
+ set_bit(i, adapter->vc_state);
+ wake_up(&adapter->vchnl_wq);
+
+ cancel_delayed_work_sync(&adapter->serv_task);
+ cancel_delayed_work_sync(&adapter->mbx_task);
+
+ idpf_vport_params_buf_rel(adapter);
+
+ /* Clear all the bits */
+ for (i = 0; i < IDPF_VC_NBITS; i++)
+ clear_bit(i, adapter->vc_state);
+
+ kfree(adapter->vports);
+ adapter->vports = NULL;
+}
+
+/**
+ * idpf_vport_alloc_vec_indexes - Get relative vector indexes
+ * @vport: virtual port data struct
+ *
+ * This function requests the vector information required for the vport and
+ * stores the vector indexes received from the 'global vector distribution'
+ * in the vport's queue vectors array.
+ *
+ * Return 0 on success, error on failure
+ */
+int idpf_vport_alloc_vec_indexes(struct idpf_vport *vport)
+{
+ struct idpf_vector_info vec_info;
+ int num_alloc_vecs;
+
+ vec_info.num_curr_vecs = vport->num_q_vectors;
+ vec_info.num_req_vecs = max(vport->num_txq, vport->num_rxq);
+ vec_info.default_vport = vport->default_vport;
+ vec_info.index = vport->idx;
+
+ num_alloc_vecs = idpf_req_rel_vector_indexes(vport->adapter,
+ vport->q_vector_idxs,
+ &vec_info);
+ if (num_alloc_vecs <= 0) {
+ dev_err(&vport->adapter->pdev->dev, "Vector distribution failed: %d\n",
+ num_alloc_vecs);
+ return -EINVAL;
+ }
+
+ vport->num_q_vectors = num_alloc_vecs;
+
+ return 0;
+}
+
+/**
+ * idpf_vport_init - Initialize virtual port
+ * @vport: virtual port to be initialized
+ * @max_q: vport max queue info
+ *
+ * Will initialize vport with the info received through MB earlier
+ */
+void idpf_vport_init(struct idpf_vport *vport, struct idpf_vport_max_q *max_q)
+{
+ struct idpf_adapter *adapter = vport->adapter;
+ struct virtchnl2_create_vport *vport_msg;
+ struct idpf_vport_config *vport_config;
+ u16 tx_itr[] = {2, 8, 64, 128, 256};
+ u16 rx_itr[] = {2, 8, 32, 96, 128};
+ struct idpf_rss_data *rss_data;
+ u16 idx = vport->idx;
+
+ vport_config = adapter->vport_config[idx];
+ rss_data = &vport_config->user_config.rss_data;
+ vport_msg = adapter->vport_params_recvd[idx];
+
+ vport_config->max_q.max_txq = max_q->max_txq;
+ vport_config->max_q.max_rxq = max_q->max_rxq;
+ vport_config->max_q.max_complq = max_q->max_complq;
+ vport_config->max_q.max_bufq = max_q->max_bufq;
+
+ vport->txq_model = le16_to_cpu(vport_msg->txq_model);
+ vport->rxq_model = le16_to_cpu(vport_msg->rxq_model);
+ vport->vport_type = le16_to_cpu(vport_msg->vport_type);
+ vport->vport_id = le32_to_cpu(vport_msg->vport_id);
+
+ rss_data->rss_key_size = min_t(u16, NETDEV_RSS_KEY_LEN,
+ le16_to_cpu(vport_msg->rss_key_size));
+ rss_data->rss_lut_size = le16_to_cpu(vport_msg->rss_lut_size);
+
+ ether_addr_copy(vport->default_mac_addr, vport_msg->default_mac_addr);
+ vport->max_mtu = le16_to_cpu(vport_msg->max_mtu) - IDPF_PACKET_HDR_PAD;
+
+ /* Initialize Tx and Rx profiles for Dynamic Interrupt Moderation */
+ memcpy(vport->rx_itr_profile, rx_itr, IDPF_DIM_PROFILE_SLOTS);
+ memcpy(vport->tx_itr_profile, tx_itr, IDPF_DIM_PROFILE_SLOTS);
+
+ idpf_vport_init_num_qs(vport, vport_msg);
+ idpf_vport_calc_num_q_desc(vport);
+ idpf_vport_calc_num_q_groups(vport);
+ idpf_vport_alloc_vec_indexes(vport);
+
+ vport->crc_enable = adapter->crc_enable;
+}
+
+/**
+ * idpf_get_vec_ids - Initialize vector id from Mailbox parameters
+ * @adapter: adapter structure to get the mailbox vector id
+ * @vecids: Array of vector ids
+ * @num_vecids: number of vector ids
+ * @chunks: vector ids received over mailbox
+ *
+ * Fills the first entry with the mailbox vector id received via the
+ * get capabilities exchange, then fills the data queue vector ids with
+ * the ids received as mailbox parameters.
+ * Returns number of ids filled.
+ */
+int idpf_get_vec_ids(struct idpf_adapter *adapter,
+ u16 *vecids, int num_vecids,
+ struct virtchnl2_vector_chunks *chunks)
+{
+ u16 num_chunks = le16_to_cpu(chunks->num_vchunks);
+ int num_vecid_filled = 0;
+ int i, j;
+
+ vecids[num_vecid_filled] = adapter->mb_vector.v_idx;
+ num_vecid_filled++;
+
+ for (j = 0; j < num_chunks; j++) {
+ struct virtchnl2_vector_chunk *chunk;
+ u16 start_vecid, num_vec;
+
+ chunk = &chunks->vchunks[j];
+ num_vec = le16_to_cpu(chunk->num_vectors);
+ start_vecid = le16_to_cpu(chunk->start_vector_id);
+
+ for (i = 0; i < num_vec; i++) {
+ if ((num_vecid_filled + i) < num_vecids) {
+ vecids[num_vecid_filled + i] = start_vecid;
+ start_vecid++;
+ } else {
+ break;
+ }
+ }
+ num_vecid_filled = num_vecid_filled + i;
+ }
+
+ return num_vecid_filled;
+}
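+
+/* Minimal standalone sketch (not driver code) of the chunk-expansion logic
+ * above: each chunk carries a start id and a count, and the ids are
+ * flattened into a caller-provided array without overrunning it. The names
+ * here (struct vec_chunk, expand_chunks) are hypothetical stand-ins for the
+ * virtchnl2 structures.
+ */
+#include <stdint.h>
+#include <stdio.h>
+
+struct vec_chunk {
+        uint16_t start_vector_id;
+        uint16_t num_vectors;
+};
+
+static int expand_chunks(uint16_t *ids, int capacity,
+                         const struct vec_chunk *chunks, int num_chunks)
+{
+        int filled = 0;
+
+        for (int j = 0; j < num_chunks; j++) {
+                uint16_t id = chunks[j].start_vector_id;
+
+                for (int i = 0; i < chunks[j].num_vectors && filled < capacity; i++)
+                        ids[filled++] = id++;
+        }
+
+        return filled;
+}
+
+int main(void)
+{
+        struct vec_chunk chunks[] = { { 4, 3 }, { 64, 2 } };
+        uint16_t ids[8];
+        int n = expand_chunks(ids, 8, chunks, 2);
+
+        for (int i = 0; i < n; i++)
+                printf("vector %d -> id %u\n", i, (unsigned)ids[i]);
+        return 0;
+}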
+
+/**
+ * idpf_vport_get_queue_ids - Initialize queue id from Mailbox parameters
+ * @qids: Array of queue ids
+ * @num_qids: number of queue ids
+ * @q_type: queue model
+ * @chunks: queue ids received over mailbox
+ *
+ * Will initialize all queue ids with ids received as mailbox parameters
+ * Returns number of ids filled
+ */
+static int idpf_vport_get_queue_ids(u32 *qids, int num_qids, u16 q_type,
+ struct virtchnl2_queue_reg_chunks *chunks)
+{
+ u16 num_chunks = le16_to_cpu(chunks->num_chunks);
+ u32 num_q_id_filled = 0, i;
+ u32 start_q_id, num_q;
+
+ while (num_chunks--) {
+ struct virtchnl2_queue_reg_chunk *chunk;
+
+ chunk = &chunks->chunks[num_chunks];
+ if (le32_to_cpu(chunk->type) != q_type)
+ continue;
+
+ num_q = le32_to_cpu(chunk->num_queues);
+ start_q_id = le32_to_cpu(chunk->start_queue_id);
+
+ for (i = 0; i < num_q; i++) {
+ if ((num_q_id_filled + i) < num_qids) {
+ qids[num_q_id_filled + i] = start_q_id;
+ start_q_id++;
+ } else {
+ break;
+ }
+ }
+ num_q_id_filled = num_q_id_filled + i;
+ }
+
+ return num_q_id_filled;
+}
+
+/**
+ * __idpf_vport_queue_ids_init - Initialize queue ids from Mailbox parameters
+ * @vport: virtual port for which the queues ids are initialized
+ * @qids: queue ids
+ * @num_qids: number of queue ids
+ * @q_type: type of queue
+ *
+ * Will initialize all queue ids with ids received as mailbox
+ * parameters. Returns number of queue ids initialized.
+ */
+static int __idpf_vport_queue_ids_init(struct idpf_vport *vport,
+ const u32 *qids,
+ int num_qids,
+ u32 q_type)
+{
+ struct idpf_queue *q;
+ int i, j, k = 0;
+
+ switch (q_type) {
+ case VIRTCHNL2_QUEUE_TYPE_TX:
+ for (i = 0; i < vport->num_txq_grp; i++) {
+ struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+
+ for (j = 0; j < tx_qgrp->num_txq && k < num_qids; j++, k++) {
+ tx_qgrp->txqs[j]->q_id = qids[k];
+ tx_qgrp->txqs[j]->q_type =
+ VIRTCHNL2_QUEUE_TYPE_TX;
+ }
+ }
+ break;
+ case VIRTCHNL2_QUEUE_TYPE_RX:
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ u16 num_rxq;
+
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ num_rxq = rx_qgrp->splitq.num_rxq_sets;
+ else
+ num_rxq = rx_qgrp->singleq.num_rxq;
+
+ for (j = 0; j < num_rxq && k < num_qids; j++, k++) {
+ if (idpf_is_queue_model_split(vport->rxq_model))
+ q = &rx_qgrp->splitq.rxq_sets[j]->rxq;
+ else
+ q = rx_qgrp->singleq.rxqs[j];
+ q->q_id = qids[k];
+ q->q_type = VIRTCHNL2_QUEUE_TYPE_RX;
+ }
+ }
+ break;
+ case VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION:
+ for (i = 0; i < vport->num_txq_grp && k < num_qids; i++, k++) {
+ struct idpf_txq_group *tx_qgrp = &vport->txq_grps[i];
+
+ tx_qgrp->complq->q_id = qids[k];
+ tx_qgrp->complq->q_type =
+ VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ }
+ break;
+ case VIRTCHNL2_QUEUE_TYPE_RX_BUFFER:
+ for (i = 0; i < vport->num_rxq_grp; i++) {
+ struct idpf_rxq_group *rx_qgrp = &vport->rxq_grps[i];
+ u8 num_bufqs = vport->num_bufqs_per_qgrp;
+
+ for (j = 0; j < num_bufqs && k < num_qids; j++, k++) {
+ q = &rx_qgrp->splitq.bufq_sets[j].bufq;
+ q->q_id = qids[k];
+ q->q_type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ }
+ }
+ break;
+ default:
+ break;
+ }
+
+ return k;
+}
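+
+/* Minimal standalone sketch (not driver code) of handing out a flat id array
+ * across queue groups with a single running index, as done above. The group
+ * sizes and ids are made up; the caller checks that the number of ids
+ * consumed matches what it expected.
+ */
+#include <stdint.h>
+#include <stdio.h>
+
+int main(void)
+{
+        const uint32_t qids[] = { 10, 11, 12, 13, 14, 15 };
+        const int group_sizes[] = { 2, 3, 1 };  /* queues per group */
+        uint32_t assigned[3][3] = { { 0 } };
+        int k = 0;
+
+        for (int g = 0; g < 3; g++)
+                for (int q = 0; q < group_sizes[g] && k < 6; q++, k++)
+                        assigned[g][q] = qids[k];
+
+        /* The driver returns an error if fewer ids were consumed than queues. */
+        printf("assigned %d of 6 ids, first of group 1 is %u\n",
+               k, (unsigned)assigned[1][0]);
+        return 0;
+}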
+
+/**
+ * idpf_vport_queue_ids_init - Initialize queue ids from Mailbox parameters
+ * @vport: virtual port for which the queues ids are initialized
+ *
+ * Will initialize all queue ids with ids received as mailbox parameters.
+ * Returns 0 on success, negative if all the queues are not initialized.
+ */
+int idpf_vport_queue_ids_init(struct idpf_vport *vport)
+{
+ struct virtchnl2_create_vport *vport_params;
+ struct virtchnl2_queue_reg_chunks *chunks;
+ struct idpf_vport_config *vport_config;
+ u16 vport_idx = vport->idx;
+ int num_ids, err = 0;
+ u16 q_type;
+ u32 *qids;
+
+ vport_config = vport->adapter->vport_config[vport_idx];
+ if (vport_config->req_qs_chunks) {
+ struct virtchnl2_add_queues *vc_aq =
+ (struct virtchnl2_add_queues *)vport_config->req_qs_chunks;
+ chunks = &vc_aq->chunks;
+ } else {
+ vport_params = vport->adapter->vport_params_recvd[vport_idx];
+ chunks = &vport_params->chunks;
+ }
+
+ qids = kcalloc(IDPF_MAX_QIDS, sizeof(u32), GFP_KERNEL);
+ if (!qids)
+ return -ENOMEM;
+
+ num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS,
+ VIRTCHNL2_QUEUE_TYPE_TX,
+ chunks);
+ if (num_ids < vport->num_txq) {
+ err = -EINVAL;
+ goto mem_rel;
+ }
+ num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids,
+ VIRTCHNL2_QUEUE_TYPE_TX);
+ if (num_ids < vport->num_txq) {
+ err = -EINVAL;
+ goto mem_rel;
+ }
+
+ num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS,
+ VIRTCHNL2_QUEUE_TYPE_RX,
+ chunks);
+ if (num_ids < vport->num_rxq) {
+ err = -EINVAL;
+ goto mem_rel;
+ }
+ num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids,
+ VIRTCHNL2_QUEUE_TYPE_RX);
+ if (num_ids < vport->num_rxq) {
+ err = -EINVAL;
+ goto mem_rel;
+ }
+
+ if (!idpf_is_queue_model_split(vport->txq_model))
+ goto check_rxq;
+
+ q_type = VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION;
+ num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, q_type, chunks);
+ if (num_ids < vport->num_complq) {
+ err = -EINVAL;
+ goto mem_rel;
+ }
+ num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, q_type);
+ if (num_ids < vport->num_complq) {
+ err = -EINVAL;
+ goto mem_rel;
+ }
+
+check_rxq:
+ if (!idpf_is_queue_model_split(vport->rxq_model))
+ goto mem_rel;
+
+ q_type = VIRTCHNL2_QUEUE_TYPE_RX_BUFFER;
+ num_ids = idpf_vport_get_queue_ids(qids, IDPF_MAX_QIDS, q_type, chunks);
+ if (num_ids < vport->num_bufq) {
+ err = -EINVAL;
+ goto mem_rel;
+ }
+ num_ids = __idpf_vport_queue_ids_init(vport, qids, num_ids, q_type);
+ if (num_ids < vport->num_bufq)
+ err = -EINVAL;
+
+mem_rel:
+ kfree(qids);
+
+ return err;
+}
+
+/**
+ * idpf_vport_adjust_qs - Adjust to new requested queues
+ * @vport: virtual port data struct
+ *
+ * Renegotiate queues. Returns 0 on success, negative on failure.
+ */
+int idpf_vport_adjust_qs(struct idpf_vport *vport)
+{
+ struct virtchnl2_create_vport vport_msg;
+ int err;
+
+ vport_msg.txq_model = cpu_to_le16(vport->txq_model);
+ vport_msg.rxq_model = cpu_to_le16(vport->rxq_model);
+ err = idpf_vport_calc_total_qs(vport->adapter, vport->idx, &vport_msg,
+ NULL);
+ if (err)
+ return err;
+
+ idpf_vport_init_num_qs(vport, &vport_msg);
+ idpf_vport_calc_num_q_groups(vport);
+
+ return 0;
+}
+
+/**
+ * idpf_is_capability_ena - Default implementation of capability checking
+ * @adapter: Private data struct
+ * @all: all or one flag
+ * @field: caps field to check for flags
+ * @flag: flag to check
+ *
+ * Return true if the checked capability flags are enabled: with @all set,
+ * every flag in @flag must be set; otherwise any one flag is enough.
+ */
+bool idpf_is_capability_ena(struct idpf_adapter *adapter, bool all,
+ enum idpf_cap_field field, u64 flag)
+{
+ u8 *caps = (u8 *)&adapter->caps;
+ u32 *cap_field;
+
+ if (!caps)
+ return false;
+
+ if (field == IDPF_BASE_CAPS)
+ return false;
+
+ cap_field = (u32 *)(caps + field);
+
+ if (all)
+ return (*cap_field & flag) == flag;
+ else
+ return !!(*cap_field & flag);
+}
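+
+/* Standalone sketch (not driver code) of the "all vs. any" flag test above:
+ * with all set, every bit in the mask must be present in the capability
+ * field; otherwise a single matching bit is enough. Values are made up.
+ */
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+
+static bool cap_ena(uint32_t cap_field, bool all, uint32_t mask)
+{
+        return all ? (cap_field & mask) == mask : (cap_field & mask) != 0;
+}
+
+int main(void)
+{
+        uint32_t caps = 0x5;                            /* bits 0 and 2 set */
+
+        printf("%d\n", cap_ena(caps, true, 0x5));       /* 1: both bits present */
+        printf("%d\n", cap_ena(caps, true, 0x7));       /* 0: bit 1 missing */
+        printf("%d\n", cap_ena(caps, false, 0x7));      /* 1: at least one bit present */
+        return 0;
+}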
+
+/**
+ * idpf_get_vport_id: Get vport id
+ * @vport: virtual port structure
+ *
+ * Return vport id from the adapter persistent data
+ */
+u32 idpf_get_vport_id(struct idpf_vport *vport)
+{
+ struct virtchnl2_create_vport *vport_msg;
+
+ vport_msg = vport->adapter->vport_params_recvd[vport->idx];
+
+ return le32_to_cpu(vport_msg->vport_id);
+}
+
+/**
+ * idpf_add_del_mac_filters - Add/del mac filters
+ * @vport: Virtual port data structure
+ * @np: Netdev private structure
+ * @add: Add or delete flag
+ * @async: Don't wait for return message
+ *
+ * Returns 0 on success, error on failure.
+ **/
+int idpf_add_del_mac_filters(struct idpf_vport *vport,
+ struct idpf_netdev_priv *np,
+ bool add, bool async)
+{
+ struct virtchnl2_mac_addr_list *ma_list = NULL;
+ struct idpf_adapter *adapter = np->adapter;
+ struct idpf_vport_config *vport_config;
+ enum idpf_vport_config_flags mac_flag;
+ struct pci_dev *pdev = adapter->pdev;
+ enum idpf_vport_vc_state vc, vc_err;
+ struct virtchnl2_mac_addr *mac_addr;
+ struct idpf_mac_filter *f, *tmp;
+ u32 num_msgs, total_filters = 0;
+ int i = 0, k, err = 0;
+ u32 vop;
+
+ vport_config = adapter->vport_config[np->vport_idx];
+ spin_lock_bh(&vport_config->mac_filter_list_lock);
+
+ /* Find the number of newly added filters */
+ list_for_each_entry(f, &vport_config->user_config.mac_filter_list,
+ list) {
+ if (add && f->add)
+ total_filters++;
+ else if (!add && f->remove)
+ total_filters++;
+ }
+
+ if (!total_filters) {
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+
+ return 0;
+ }
+
+ /* Fill all the new filters into virtchannel message */
+ mac_addr = kcalloc(total_filters, sizeof(struct virtchnl2_mac_addr),
+ GFP_ATOMIC);
+ if (!mac_addr) {
+ err = -ENOMEM;
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+ goto error;
+ }
+
+ list_for_each_entry_safe(f, tmp, &vport_config->user_config.mac_filter_list,
+ list) {
+ if (add && f->add) {
+ ether_addr_copy(mac_addr[i].addr, f->macaddr);
+ i++;
+ f->add = false;
+ if (i == total_filters)
+ break;
+ }
+ if (!add && f->remove) {
+ ether_addr_copy(mac_addr[i].addr, f->macaddr);
+ i++;
+ f->remove = false;
+ if (i == total_filters)
+ break;
+ }
+ }
+
+ spin_unlock_bh(&vport_config->mac_filter_list_lock);
+
+ if (add) {
+ vop = VIRTCHNL2_OP_ADD_MAC_ADDR;
+ vc = IDPF_VC_ADD_MAC_ADDR;
+ vc_err = IDPF_VC_ADD_MAC_ADDR_ERR;
+ mac_flag = IDPF_VPORT_ADD_MAC_REQ;
+ } else {
+ vop = VIRTCHNL2_OP_DEL_MAC_ADDR;
+ vc = IDPF_VC_DEL_MAC_ADDR;
+ vc_err = IDPF_VC_DEL_MAC_ADDR_ERR;
+ mac_flag = IDPF_VPORT_DEL_MAC_REQ;
+ }
+
+ /* Chunk up the filters into multiple messages to avoid
+ * sending a control queue message buffer that is too large
+ */
+ num_msgs = DIV_ROUND_UP(total_filters, IDPF_NUM_FILTERS_PER_MSG);
+
+ if (!async)
+ mutex_lock(&vport->vc_buf_lock);
+
+ for (i = 0, k = 0; i < num_msgs; i++) {
+ u32 entries_size, buf_size, num_entries;
+
+ num_entries = min_t(u32, total_filters,
+ IDPF_NUM_FILTERS_PER_MSG);
+ entries_size = sizeof(struct virtchnl2_mac_addr) * num_entries;
+ buf_size = struct_size(ma_list, mac_addr_list, num_entries);
+
+ if (!ma_list || num_entries != IDPF_NUM_FILTERS_PER_MSG) {
+ kfree(ma_list);
+ ma_list = kzalloc(buf_size, GFP_ATOMIC);
+ if (!ma_list) {
+ err = -ENOMEM;
+ goto list_prep_error;
+ }
+ } else {
+ memset(ma_list, 0, buf_size);
+ }
+
+ ma_list->vport_id = cpu_to_le32(np->vport_id);
+ ma_list->num_mac_addr = cpu_to_le16(num_entries);
+ memcpy(ma_list->mac_addr_list, &mac_addr[k], entries_size);
+
+ if (async)
+ set_bit(mac_flag, vport_config->flags);
+
+ err = idpf_send_mb_msg(adapter, vop, buf_size, (u8 *)ma_list);
+ if (err)
+ goto mbx_error;
+
+ if (!async) {
+ err = idpf_wait_for_event(adapter, vport, vc, vc_err);
+ if (err)
+ goto mbx_error;
+ }
+
+ k += num_entries;
+ total_filters -= num_entries;
+ }
+
+mbx_error:
+ if (!async)
+ mutex_unlock(&vport->vc_buf_lock);
+ kfree(ma_list);
+list_prep_error:
+ kfree(mac_addr);
+error:
+ if (err)
+ dev_err(&pdev->dev, "Failed to add or del mac filters %d", err);
+
+ return err;
+}
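+
+/* Standalone sketch (not driver code) of the message-chunking math above:
+ * the filters are split into ceil(total / per_msg) mailbox messages and each
+ * message buffer is sized as the list header plus its entry array. The sizes
+ * below are illustrative, not the real wire-format sizes.
+ */
+#include <stddef.h>
+#include <stdio.h>
+
+#define PER_MSG    8   /* stand-in for IDPF_NUM_FILTERS_PER_MSG */
+#define HDR_SIZE   8   /* stand-in for the list header size */
+#define ENTRY_SIZE 8   /* stand-in for sizeof(struct virtchnl2_mac_addr) */
+
+int main(void)
+{
+        unsigned int total = 19;
+        unsigned int num_msgs = (total + PER_MSG - 1) / PER_MSG;  /* DIV_ROUND_UP */
+
+        for (unsigned int i = 0; i < num_msgs; i++) {
+                unsigned int entries = total < PER_MSG ? total : PER_MSG;
+                size_t buf_size = HDR_SIZE + (size_t)entries * ENTRY_SIZE;
+
+                printf("msg %u: %u entries, %zu bytes\n", i, entries, buf_size);
+                total -= entries;
+        }
+        return 0;
+}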
+
+/**
+ * idpf_set_promiscuous - set promiscuous and send message to mailbox
+ * @adapter: Driver specific private structure
+ * @config_data: Vport specific config data
+ * @vport_id: Vport identifier
+ *
+ * Request to enable promiscuous mode for the vport. The message is sent
+ * asynchronously and does not wait for a response. Returns 0 on success,
+ * negative on failure.
+ */
+int idpf_set_promiscuous(struct idpf_adapter *adapter,
+ struct idpf_vport_user_config_data *config_data,
+ u32 vport_id)
+{
+ struct virtchnl2_promisc_info vpi;
+ u16 flags = 0;
+ int err;
+
+ if (test_bit(__IDPF_PROMISC_UC, config_data->user_flags))
+ flags |= VIRTCHNL2_UNICAST_PROMISC;
+ if (test_bit(__IDPF_PROMISC_MC, config_data->user_flags))
+ flags |= VIRTCHNL2_MULTICAST_PROMISC;
+
+ vpi.vport_id = cpu_to_le32(vport_id);
+ vpi.flags = cpu_to_le16(flags);
+
+ err = idpf_send_mb_msg(adapter, VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE,
+ sizeof(struct virtchnl2_promisc_info),
+ (u8 *)&vpi);
+
+ return err;
+}
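+
+/* Standalone sketch (not driver code) of building the promiscuous-mode
+ * message above in little-endian byte order without kernel helpers. The flag
+ * values mirror enum virtchnl2_promisc_flags; the layout (4-byte vport id,
+ * 2-byte flags, 2 bytes of padding) follows struct virtchnl2_promisc_info.
+ */
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+
+#define UNICAST_PROMISC   (1u << 0)
+#define MULTICAST_PROMISC (1u << 1)
+
+static void put_le32(uint8_t *p, uint32_t v)
+{
+        p[0] = v; p[1] = v >> 8; p[2] = v >> 16; p[3] = v >> 24;
+}
+
+static void put_le16(uint8_t *p, uint16_t v)
+{
+        p[0] = v; p[1] = v >> 8;
+}
+
+int main(void)
+{
+        bool uc = true, mc = false;
+        uint16_t flags = (uc ? UNICAST_PROMISC : 0) | (mc ? MULTICAST_PROMISC : 0);
+        uint8_t msg[8] = { 0 };
+
+        put_le32(msg, 7);               /* vport id */
+        put_le16(msg + 4, flags);       /* promisc flags; bytes 6-7 stay as padding */
+
+        for (int i = 0; i < 8; i++)
+                printf("%02x ", msg[i]);
+        printf("\n");
+        return 0;
+}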
diff --git a/drivers/net/ethernet/intel/idpf/virtchnl2.h b/drivers/net/ethernet/intel/idpf/virtchnl2.h
new file mode 100644
index 000000000000..07e72c72d156
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/virtchnl2.h
@@ -0,0 +1,1273 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _VIRTCHNL2_H_
+#define _VIRTCHNL2_H_
+
+/* All opcodes associated with virtchnl2 are prefixed with virtchnl2 or
+ * VIRTCHNL2. Any future opcodes, offloads/capabilities, structures,
+ * and defines must be prefixed with virtchnl2 or VIRTCHNL2 to avoid confusion.
+ *
+ * PF/VF uses the virtchnl2 interface defined in this header file to communicate
+ * with the device Control Plane (CP). The driver and the CP may run on different
+ * platforms with different endianness. To avoid byte order discrepancies,
+ * all the structures in this header follow little-endian format.
+ *
+ * This is an interface definition file where existing enums and their values
+ * must remain unchanged over time, so we specify explicit values for all enums.
+ */
+
+#include "virtchnl2_lan_desc.h"
+
+/* This macro is used to generate compilation errors if a structure
+ * is not exactly the correct length.
+ */
+#define VIRTCHNL2_CHECK_STRUCT_LEN(n, X) \
+ static_assert((n) == sizeof(struct X))
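+
+/* Standalone illustration (not part of this header) of the same compile-time
+ * size check using C11 static_assert: if a field or padding change alters the
+ * wire size, the build fails instead of silently breaking the ABI. The struct
+ * below is a made-up example.
+ */
+#include <assert.h>
+#include <stdint.h>
+
+struct example_wire_msg {
+        uint32_t id;
+        uint16_t flags;
+        uint8_t pad[2];
+};
+
+static_assert(sizeof(struct example_wire_msg) == 8,
+              "example_wire_msg must stay 8 bytes on the wire");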
+
+/* A new major set of opcodes is introduced here, leaving room for old
+ * misc opcodes to be added in the future. These opcodes may only be
+ * used if both the PF and VF have successfully negotiated the VIRTCHNL
+ * version as 2.0 during the VIRTCHNL2_OP_VERSION exchange.
+ */
+enum virtchnl2_op {
+ VIRTCHNL2_OP_UNKNOWN = 0,
+ VIRTCHNL2_OP_VERSION = 1,
+ VIRTCHNL2_OP_GET_CAPS = 500,
+ VIRTCHNL2_OP_CREATE_VPORT = 501,
+ VIRTCHNL2_OP_DESTROY_VPORT = 502,
+ VIRTCHNL2_OP_ENABLE_VPORT = 503,
+ VIRTCHNL2_OP_DISABLE_VPORT = 504,
+ VIRTCHNL2_OP_CONFIG_TX_QUEUES = 505,
+ VIRTCHNL2_OP_CONFIG_RX_QUEUES = 506,
+ VIRTCHNL2_OP_ENABLE_QUEUES = 507,
+ VIRTCHNL2_OP_DISABLE_QUEUES = 508,
+ VIRTCHNL2_OP_ADD_QUEUES = 509,
+ VIRTCHNL2_OP_DEL_QUEUES = 510,
+ VIRTCHNL2_OP_MAP_QUEUE_VECTOR = 511,
+ VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR = 512,
+ VIRTCHNL2_OP_GET_RSS_KEY = 513,
+ VIRTCHNL2_OP_SET_RSS_KEY = 514,
+ VIRTCHNL2_OP_GET_RSS_LUT = 515,
+ VIRTCHNL2_OP_SET_RSS_LUT = 516,
+ VIRTCHNL2_OP_GET_RSS_HASH = 517,
+ VIRTCHNL2_OP_SET_RSS_HASH = 518,
+ VIRTCHNL2_OP_SET_SRIOV_VFS = 519,
+ VIRTCHNL2_OP_ALLOC_VECTORS = 520,
+ VIRTCHNL2_OP_DEALLOC_VECTORS = 521,
+ VIRTCHNL2_OP_EVENT = 522,
+ VIRTCHNL2_OP_GET_STATS = 523,
+ VIRTCHNL2_OP_RESET_VF = 524,
+ VIRTCHNL2_OP_GET_EDT_CAPS = 525,
+ VIRTCHNL2_OP_GET_PTYPE_INFO = 526,
+ /* Opcode 527 and 528 are reserved for VIRTCHNL2_OP_GET_PTYPE_ID and
+ * VIRTCHNL2_OP_GET_PTYPE_INFO_RAW.
+ * Opcodes 529, 530, 531, 532 and 533 are reserved.
+ */
+ VIRTCHNL2_OP_LOOPBACK = 534,
+ VIRTCHNL2_OP_ADD_MAC_ADDR = 535,
+ VIRTCHNL2_OP_DEL_MAC_ADDR = 536,
+ VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE = 537,
+};
+
+/**
+ * enum virtchnl2_vport_type - Type of virtual port.
+ * @VIRTCHNL2_VPORT_TYPE_DEFAULT: Default virtual port type.
+ */
+enum virtchnl2_vport_type {
+ VIRTCHNL2_VPORT_TYPE_DEFAULT = 0,
+};
+
+/**
+ * enum virtchnl2_queue_model - Type of queue model.
+ * @VIRTCHNL2_QUEUE_MODEL_SINGLE: Single queue model.
+ * @VIRTCHNL2_QUEUE_MODEL_SPLIT: Split queue model.
+ *
+ * In the single queue model, the same transmit descriptor queue is used by
+ * software to post descriptors to hardware and by hardware to post completed
+ * descriptors to software.
+ * Likewise, the same receive descriptor queue is used by hardware to post
+ * completions to software and by software to post buffers to hardware.
+ *
+ * In the split queue model, hardware uses transmit completion queues to post
+ * descriptor/buffer completions to software, while software uses transmit
+ * descriptor queues to post descriptors to hardware.
+ * Likewise, hardware posts descriptor completions to the receive descriptor
+ * queue, while software uses receive buffer queues to post buffers to hardware.
+ */
+enum virtchnl2_queue_model {
+ VIRTCHNL2_QUEUE_MODEL_SINGLE = 0,
+ VIRTCHNL2_QUEUE_MODEL_SPLIT = 1,
+};
+
+/* Checksum offload capability flags */
+enum virtchnl2_cap_txrx_csum {
+ VIRTCHNL2_CAP_TX_CSUM_L3_IPV4 = BIT(0),
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_TCP = BIT(1),
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_UDP = BIT(2),
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV4_SCTP = BIT(3),
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_TCP = BIT(4),
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_UDP = BIT(5),
+ VIRTCHNL2_CAP_TX_CSUM_L4_IPV6_SCTP = BIT(6),
+ VIRTCHNL2_CAP_TX_CSUM_GENERIC = BIT(7),
+ VIRTCHNL2_CAP_RX_CSUM_L3_IPV4 = BIT(8),
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_TCP = BIT(9),
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_UDP = BIT(10),
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV4_SCTP = BIT(11),
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_TCP = BIT(12),
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_UDP = BIT(13),
+ VIRTCHNL2_CAP_RX_CSUM_L4_IPV6_SCTP = BIT(14),
+ VIRTCHNL2_CAP_RX_CSUM_GENERIC = BIT(15),
+ VIRTCHNL2_CAP_TX_CSUM_L3_SINGLE_TUNNEL = BIT(16),
+ VIRTCHNL2_CAP_TX_CSUM_L3_DOUBLE_TUNNEL = BIT(17),
+ VIRTCHNL2_CAP_RX_CSUM_L3_SINGLE_TUNNEL = BIT(18),
+ VIRTCHNL2_CAP_RX_CSUM_L3_DOUBLE_TUNNEL = BIT(19),
+ VIRTCHNL2_CAP_TX_CSUM_L4_SINGLE_TUNNEL = BIT(20),
+ VIRTCHNL2_CAP_TX_CSUM_L4_DOUBLE_TUNNEL = BIT(21),
+ VIRTCHNL2_CAP_RX_CSUM_L4_SINGLE_TUNNEL = BIT(22),
+ VIRTCHNL2_CAP_RX_CSUM_L4_DOUBLE_TUNNEL = BIT(23),
+};
+
+/* Segmentation offload capability flags */
+enum virtchnl2_cap_seg {
+ VIRTCHNL2_CAP_SEG_IPV4_TCP = BIT(0),
+ VIRTCHNL2_CAP_SEG_IPV4_UDP = BIT(1),
+ VIRTCHNL2_CAP_SEG_IPV4_SCTP = BIT(2),
+ VIRTCHNL2_CAP_SEG_IPV6_TCP = BIT(3),
+ VIRTCHNL2_CAP_SEG_IPV6_UDP = BIT(4),
+ VIRTCHNL2_CAP_SEG_IPV6_SCTP = BIT(5),
+ VIRTCHNL2_CAP_SEG_GENERIC = BIT(6),
+ VIRTCHNL2_CAP_SEG_TX_SINGLE_TUNNEL = BIT(7),
+ VIRTCHNL2_CAP_SEG_TX_DOUBLE_TUNNEL = BIT(8),
+};
+
+/* Receive Side Scaling Flow type capability flags */
+enum virtchnl2_cap_rss {
+ VIRTCHNL2_CAP_RSS_IPV4_TCP = BIT(0),
+ VIRTCHNL2_CAP_RSS_IPV4_UDP = BIT(1),
+ VIRTCHNL2_CAP_RSS_IPV4_SCTP = BIT(2),
+ VIRTCHNL2_CAP_RSS_IPV4_OTHER = BIT(3),
+ VIRTCHNL2_CAP_RSS_IPV6_TCP = BIT(4),
+ VIRTCHNL2_CAP_RSS_IPV6_UDP = BIT(5),
+ VIRTCHNL2_CAP_RSS_IPV6_SCTP = BIT(6),
+ VIRTCHNL2_CAP_RSS_IPV6_OTHER = BIT(7),
+ VIRTCHNL2_CAP_RSS_IPV4_AH = BIT(8),
+ VIRTCHNL2_CAP_RSS_IPV4_ESP = BIT(9),
+ VIRTCHNL2_CAP_RSS_IPV4_AH_ESP = BIT(10),
+ VIRTCHNL2_CAP_RSS_IPV6_AH = BIT(11),
+ VIRTCHNL2_CAP_RSS_IPV6_ESP = BIT(12),
+ VIRTCHNL2_CAP_RSS_IPV6_AH_ESP = BIT(13),
+};
+
+/* Header split capability flags */
+enum virtchnl2_cap_rx_hsplit_at {
+ /* for prepended metadata */
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L2 = BIT(0),
+ /* all VLANs go into header buffer */
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L3 = BIT(1),
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V4 = BIT(2),
+ VIRTCHNL2_CAP_RX_HSPLIT_AT_L4V6 = BIT(3),
+};
+
+/* Receive Side Coalescing offload capability flags */
+enum virtchnl2_cap_rsc {
+ VIRTCHNL2_CAP_RSC_IPV4_TCP = BIT(0),
+ VIRTCHNL2_CAP_RSC_IPV4_SCTP = BIT(1),
+ VIRTCHNL2_CAP_RSC_IPV6_TCP = BIT(2),
+ VIRTCHNL2_CAP_RSC_IPV6_SCTP = BIT(3),
+};
+
+/* Other capability flags */
+enum virtchnl2_cap_other {
+ VIRTCHNL2_CAP_RDMA = BIT_ULL(0),
+ VIRTCHNL2_CAP_SRIOV = BIT_ULL(1),
+ VIRTCHNL2_CAP_MACFILTER = BIT_ULL(2),
+ VIRTCHNL2_CAP_FLOW_DIRECTOR = BIT_ULL(3),
+ /* Queue based scheduling using split queue model */
+ VIRTCHNL2_CAP_SPLITQ_QSCHED = BIT_ULL(4),
+ VIRTCHNL2_CAP_CRC = BIT_ULL(5),
+ VIRTCHNL2_CAP_ADQ = BIT_ULL(6),
+ VIRTCHNL2_CAP_WB_ON_ITR = BIT_ULL(7),
+ VIRTCHNL2_CAP_PROMISC = BIT_ULL(8),
+ VIRTCHNL2_CAP_LINK_SPEED = BIT_ULL(9),
+ VIRTCHNL2_CAP_INLINE_IPSEC = BIT_ULL(10),
+ VIRTCHNL2_CAP_LARGE_NUM_QUEUES = BIT_ULL(11),
+ VIRTCHNL2_CAP_VLAN = BIT_ULL(12),
+ VIRTCHNL2_CAP_PTP = BIT_ULL(13),
+ /* EDT: Earliest Departure Time capability used for Timing Wheel */
+ VIRTCHNL2_CAP_EDT = BIT_ULL(14),
+ VIRTCHNL2_CAP_ADV_RSS = BIT_ULL(15),
+ VIRTCHNL2_CAP_FDIR = BIT_ULL(16),
+ VIRTCHNL2_CAP_RX_FLEX_DESC = BIT_ULL(17),
+ VIRTCHNL2_CAP_PTYPE = BIT_ULL(18),
+ VIRTCHNL2_CAP_LOOPBACK = BIT_ULL(19),
+ /* Other capability 20 is reserved */
+
+ /* this must be the last capability */
+ VIRTCHNL2_CAP_OEM = BIT_ULL(63),
+};
+
+/* underlying device type */
+enum virtchl2_device_type {
+ VIRTCHNL2_MEV_DEVICE = 0,
+};
+
+/**
+ * enum virtchnl2_txq_sched_mode - Transmit Queue Scheduling Modes.
+ * @VIRTCHNL2_TXQ_SCHED_MODE_QUEUE: Queue mode is the legacy mode, i.e. in-order
+ * completions where descriptors and buffers
+ * are completed at the same time.
+ * @VIRTCHNL2_TXQ_SCHED_MODE_FLOW: Flow scheduling mode allows for out of order
+ * packet processing where descriptors are
+ * cleaned in order, but buffers can be
+ * completed out of order.
+ */
+enum virtchnl2_txq_sched_mode {
+ VIRTCHNL2_TXQ_SCHED_MODE_QUEUE = 0,
+ VIRTCHNL2_TXQ_SCHED_MODE_FLOW = 1,
+};
+
+/**
+ * enum virtchnl2_rxq_flags - Receive Queue Feature flags.
+ * @VIRTCHNL2_RXQ_RSC: Rx queue RSC flag.
+ * @VIRTCHNL2_RXQ_HDR_SPLIT: Rx queue header split flag.
+ * @VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK: When set, packet descriptors are flushed
+ * by hardware immediately after processing
+ * each packet.
+ * @VIRTCHNL2_RX_DESC_SIZE_16BYTE: Rx queue 16 byte descriptor size.
+ * @VIRTCHNL2_RX_DESC_SIZE_32BYTE: Rx queue 32 byte descriptor size.
+ */
+enum virtchnl2_rxq_flags {
+ VIRTCHNL2_RXQ_RSC = BIT(0),
+ VIRTCHNL2_RXQ_HDR_SPLIT = BIT(1),
+ VIRTCHNL2_RXQ_IMMEDIATE_WRITE_BACK = BIT(2),
+ VIRTCHNL2_RX_DESC_SIZE_16BYTE = BIT(3),
+ VIRTCHNL2_RX_DESC_SIZE_32BYTE = BIT(4),
+};
+
+/* Type of RSS algorithm */
+enum virtchnl2_rss_alg {
+ VIRTCHNL2_RSS_ALG_TOEPLITZ_ASYMMETRIC = 0,
+ VIRTCHNL2_RSS_ALG_R_ASYMMETRIC = 1,
+ VIRTCHNL2_RSS_ALG_TOEPLITZ_SYMMETRIC = 2,
+ VIRTCHNL2_RSS_ALG_XOR_SYMMETRIC = 3,
+};
+
+/* Type of event */
+enum virtchnl2_event_codes {
+ VIRTCHNL2_EVENT_UNKNOWN = 0,
+ VIRTCHNL2_EVENT_LINK_CHANGE = 1,
+ /* Event type 2, 3 are reserved */
+};
+
+/* Transmit and Receive queue types are valid in legacy as well as split queue
+ * models. With the split queue model, two additional types are introduced:
+ * TX_COMPLETION and RX_BUFFER. In the split queue model, the receive queue
+ * type corresponds to the queue where hardware posts completions.
+ */
+enum virtchnl2_queue_type {
+ VIRTCHNL2_QUEUE_TYPE_TX = 0,
+ VIRTCHNL2_QUEUE_TYPE_RX = 1,
+ VIRTCHNL2_QUEUE_TYPE_TX_COMPLETION = 2,
+ VIRTCHNL2_QUEUE_TYPE_RX_BUFFER = 3,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_TX = 4,
+ VIRTCHNL2_QUEUE_TYPE_CONFIG_RX = 5,
+ /* Queue types 6, 7, 8, 9 are reserved */
+ VIRTCHNL2_QUEUE_TYPE_MBX_TX = 10,
+ VIRTCHNL2_QUEUE_TYPE_MBX_RX = 11,
+};
+
+/* Interrupt throttling rate index */
+enum virtchnl2_itr_idx {
+ VIRTCHNL2_ITR_IDX_0 = 0,
+ VIRTCHNL2_ITR_IDX_1 = 1,
+};
+
+/**
+ * enum virtchnl2_mac_addr_type - MAC address types.
+ * @VIRTCHNL2_MAC_ADDR_PRIMARY: PF/VF driver should set this type for the
+ * primary/device unicast MAC address filter for
+ * VIRTCHNL2_OP_ADD_MAC_ADDR and
+ * VIRTCHNL2_OP_DEL_MAC_ADDR. This allows for the
+ * underlying control plane function to accurately
+ * track the MAC address and for VM/function reset.
+ *
+ * @VIRTCHNL2_MAC_ADDR_EXTRA: PF/VF driver should set this type for any extra
+ * unicast and/or multicast filters that are being
+ * added/deleted via VIRTCHNL2_OP_ADD_MAC_ADDR or
+ * VIRTCHNL2_OP_DEL_MAC_ADDR.
+ */
+enum virtchnl2_mac_addr_type {
+ VIRTCHNL2_MAC_ADDR_PRIMARY = 1,
+ VIRTCHNL2_MAC_ADDR_EXTRA = 2,
+};
+
+/* Flags used for promiscuous mode */
+enum virtchnl2_promisc_flags {
+ VIRTCHNL2_UNICAST_PROMISC = BIT(0),
+ VIRTCHNL2_MULTICAST_PROMISC = BIT(1),
+};
+
+/* Protocol header type within a packet segment. A segment consists of one or
+ * more protocol headers that make up a logical group of protocol headers. Each
+ * logical group of protocol headers encapsulates or is encapsulated using/by
+ * tunneling or encapsulation protocols for network virtualization.
+ */
+enum virtchnl2_proto_hdr_type {
+ /* VIRTCHNL2_PROTO_HDR_ANY is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_ANY = 0,
+ VIRTCHNL2_PROTO_HDR_PRE_MAC = 1,
+ /* VIRTCHNL2_PROTO_HDR_MAC is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_MAC = 2,
+ VIRTCHNL2_PROTO_HDR_POST_MAC = 3,
+ VIRTCHNL2_PROTO_HDR_ETHERTYPE = 4,
+ VIRTCHNL2_PROTO_HDR_VLAN = 5,
+ VIRTCHNL2_PROTO_HDR_SVLAN = 6,
+ VIRTCHNL2_PROTO_HDR_CVLAN = 7,
+ VIRTCHNL2_PROTO_HDR_MPLS = 8,
+ VIRTCHNL2_PROTO_HDR_UMPLS = 9,
+ VIRTCHNL2_PROTO_HDR_MMPLS = 10,
+ VIRTCHNL2_PROTO_HDR_PTP = 11,
+ VIRTCHNL2_PROTO_HDR_CTRL = 12,
+ VIRTCHNL2_PROTO_HDR_LLDP = 13,
+ VIRTCHNL2_PROTO_HDR_ARP = 14,
+ VIRTCHNL2_PROTO_HDR_ECP = 15,
+ VIRTCHNL2_PROTO_HDR_EAPOL = 16,
+ VIRTCHNL2_PROTO_HDR_PPPOD = 17,
+ VIRTCHNL2_PROTO_HDR_PPPOE = 18,
+ /* VIRTCHNL2_PROTO_HDR_IPV4 is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_IPV4 = 19,
+ /* IPv4 and IPv6 Fragment header types are only associated with
+ * VIRTCHNL2_PROTO_HDR_IPV4 and VIRTCHNL2_PROTO_HDR_IPV6 respectively,
+ * and cannot be used independently.
+ */
+ /* VIRTCHNL2_PROTO_HDR_IPV4_FRAG is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_IPV4_FRAG = 20,
+ /* VIRTCHNL2_PROTO_HDR_IPV6 is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_IPV6 = 21,
+ /* VIRTCHNL2_PROTO_HDR_IPV6_FRAG is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_IPV6_FRAG = 22,
+ VIRTCHNL2_PROTO_HDR_IPV6_EH = 23,
+ /* VIRTCHNL2_PROTO_HDR_UDP is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_UDP = 24,
+ /* VIRTCHNL2_PROTO_HDR_TCP is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_TCP = 25,
+ /* VIRTCHNL2_PROTO_HDR_SCTP is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_SCTP = 26,
+ /* VIRTCHNL2_PROTO_HDR_ICMP is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_ICMP = 27,
+ /* VIRTCHNL2_PROTO_HDR_ICMPV6 is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_ICMPV6 = 28,
+ VIRTCHNL2_PROTO_HDR_IGMP = 29,
+ VIRTCHNL2_PROTO_HDR_AH = 30,
+ VIRTCHNL2_PROTO_HDR_ESP = 31,
+ VIRTCHNL2_PROTO_HDR_IKE = 32,
+ VIRTCHNL2_PROTO_HDR_NATT_KEEP = 33,
+ /* VIRTCHNL2_PROTO_HDR_PAY is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_PAY = 34,
+ VIRTCHNL2_PROTO_HDR_L2TPV2 = 35,
+ VIRTCHNL2_PROTO_HDR_L2TPV2_CONTROL = 36,
+ VIRTCHNL2_PROTO_HDR_L2TPV3 = 37,
+ VIRTCHNL2_PROTO_HDR_GTP = 38,
+ VIRTCHNL2_PROTO_HDR_GTP_EH = 39,
+ VIRTCHNL2_PROTO_HDR_GTPCV2 = 40,
+ VIRTCHNL2_PROTO_HDR_GTPC_TEID = 41,
+ VIRTCHNL2_PROTO_HDR_GTPU = 42,
+ VIRTCHNL2_PROTO_HDR_GTPU_UL = 43,
+ VIRTCHNL2_PROTO_HDR_GTPU_DL = 44,
+ VIRTCHNL2_PROTO_HDR_ECPRI = 45,
+ VIRTCHNL2_PROTO_HDR_VRRP = 46,
+ VIRTCHNL2_PROTO_HDR_OSPF = 47,
+ /* VIRTCHNL2_PROTO_HDR_TUN is a mandatory protocol id */
+ VIRTCHNL2_PROTO_HDR_TUN = 48,
+ VIRTCHNL2_PROTO_HDR_GRE = 49,
+ VIRTCHNL2_PROTO_HDR_NVGRE = 50,
+ VIRTCHNL2_PROTO_HDR_VXLAN = 51,
+ VIRTCHNL2_PROTO_HDR_VXLAN_GPE = 52,
+ VIRTCHNL2_PROTO_HDR_GENEVE = 53,
+ VIRTCHNL2_PROTO_HDR_NSH = 54,
+ VIRTCHNL2_PROTO_HDR_QUIC = 55,
+ VIRTCHNL2_PROTO_HDR_PFCP = 56,
+ VIRTCHNL2_PROTO_HDR_PFCP_NODE = 57,
+ VIRTCHNL2_PROTO_HDR_PFCP_SESSION = 58,
+ VIRTCHNL2_PROTO_HDR_RTP = 59,
+ VIRTCHNL2_PROTO_HDR_ROCE = 60,
+ VIRTCHNL2_PROTO_HDR_ROCEV1 = 61,
+ VIRTCHNL2_PROTO_HDR_ROCEV2 = 62,
+ /* Protocol ids up to 32767 are reserved.
+ * 32768 - 65534 are used for user defined protocol ids.
+ * VIRTCHNL2_PROTO_HDR_NO_PROTO is a mandatory protocol id.
+ */
+ VIRTCHNL2_PROTO_HDR_NO_PROTO = 65535,
+};
+
+enum virtchl2_version {
+ VIRTCHNL2_VERSION_MINOR_0 = 0,
+ VIRTCHNL2_VERSION_MAJOR_2 = 2,
+};
+
+/**
+ * struct virtchnl2_edt_caps - Get EDT granularity and time horizon.
+ * @tstamp_granularity_ns: Timestamp granularity in nanoseconds.
+ * @time_horizon_ns: Total time window in nanoseconds.
+ *
+ * Associated with VIRTCHNL2_OP_GET_EDT_CAPS.
+ */
+struct virtchnl2_edt_caps {
+ __le64 tstamp_granularity_ns;
+ __le64 time_horizon_ns;
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_edt_caps);
+
+/**
+ * struct virtchnl2_version_info - Version information.
+ * @major: Major version.
+ * @minor: Minor version.
+ *
+ * PF/VF posts its version number to the CP. CP responds with its version number
+ * in the same format, along with a return code.
+ * If there is a major version mismatch, then the PF/VF cannot operate.
+ * If there is a minor version mismatch, then the PF/VF can operate but should
+ * add a warning to the system log.
+ *
+ * This version opcode MUST always be specified as == 1, regardless of other
+ * changes in the API. The CP must always respond to this message without
+ * error regardless of version mismatch.
+ *
+ * Associated with VIRTCHNL2_OP_VERSION.
+ */
+struct virtchnl2_version_info {
+ __le32 major;
+ __le32 minor;
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_version_info);
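+
+/* Standalone sketch (not driver code) of the negotiation rule described
+ * above: a major-version mismatch is fatal, a minor-version mismatch only
+ * warrants a warning. The version numbers are illustrative.
+ */
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+
+static bool version_compatible(uint32_t drv_major, uint32_t drv_minor,
+                               uint32_t cp_major, uint32_t cp_minor)
+{
+        if (drv_major != cp_major) {
+                fprintf(stderr, "major mismatch (%u vs %u), cannot operate\n",
+                        drv_major, cp_major);
+                return false;
+        }
+        if (drv_minor != cp_minor)
+                fprintf(stderr, "minor mismatch (%u vs %u), continuing with a warning\n",
+                        drv_minor, cp_minor);
+        return true;
+}
+
+int main(void)
+{
+        return version_compatible(2, 0, 2, 1) ? 0 : 1;
+}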
+
+/**
+ * struct virtchnl2_get_capabilities - Capabilities info.
+ * @csum_caps: See enum virtchnl2_cap_txrx_csum.
+ * @seg_caps: See enum virtchnl2_cap_seg.
+ * @hsplit_caps: See enum virtchnl2_cap_rx_hsplit_at.
+ * @rsc_caps: See enum virtchnl2_cap_rsc.
+ * @rss_caps: See enum virtchnl2_cap_rss.
+ * @other_caps: See enum virtchnl2_cap_other.
+ * @mailbox_dyn_ctl: DYN_CTL register offset and vector id for mailbox
+ * provided by CP.
+ * @mailbox_vector_id: Mailbox vector id.
+ * @num_allocated_vectors: Maximum number of allocated vectors for the device.
+ * @max_rx_q: Maximum number of supported Rx queues.
+ * @max_tx_q: Maximum number of supported Tx queues.
+ * @max_rx_bufq: Maximum number of supported buffer queues.
+ * @max_tx_complq: Maximum number of supported completion queues.
+ * @max_sriov_vfs: The PF sends the maximum VFs it is requesting. The CP
+ * responds with the maximum VFs granted.
+ * @max_vports: Maximum number of vports that can be supported.
+ * @default_num_vports: Default number of vports driver should allocate on load.
+ * @max_tx_hdr_size: Max header length hardware can parse/checksum, in bytes.
+ * @max_sg_bufs_per_tx_pkt: Max number of scatter gather buffers that can be
+ * sent per transmit packet without needing to be
+ * linearized.
+ * @pad: Padding.
+ * @reserved: Reserved.
+ * @device_type: See enum virtchl2_device_type.
+ * @min_sso_packet_len: Min packet length supported by device for single
+ * segment offload.
+ * @max_hdr_buf_per_lso: Max number of header buffers that can be used for
+ * an LSO.
+ * @pad1: Padding for future extensions.
+ *
+ * Dataplane driver sends this message to CP to negotiate capabilities and
+ * provides a virtchnl2_get_capabilities structure with its desired
+ * capabilities, max_sriov_vfs and num_allocated_vectors.
+ * CP responds with a virtchnl2_get_capabilities structure updated
+ * with allowed capabilities and the other fields as below.
+ * If PF sets max_sriov_vfs as 0, CP will respond with max number of VFs
+ * that can be created by this PF. For any other value 'n', CP responds
+ * with max_sriov_vfs set to min(n, x) where x is the max number of VFs
+ * allowed by CP's policy. max_sriov_vfs is not applicable for VFs.
+ * If dataplane driver sets num_allocated_vectors as 0, CP will respond with 1
+ * which is default vector associated with the default mailbox. For any other
+ * value 'n', CP responds with a value <= n based on the CP's policy of
+ * max number of vectors for a PF.
+ * CP will respond with the vector ID of mailbox allocated to the PF in
+ * mailbox_vector_id and the number of itr index registers in itr_idx_map.
+ * It also responds with default number of vports that the dataplane driver
+ * should comeup with in default_num_vports and maximum number of vports that
+ * can be supported in max_vports.
+ *
+ * Associated with VIRTCHNL2_OP_GET_CAPS.
+ */
+struct virtchnl2_get_capabilities {
+ __le32 csum_caps;
+ __le32 seg_caps;
+ __le32 hsplit_caps;
+ __le32 rsc_caps;
+ __le64 rss_caps;
+ __le64 other_caps;
+ __le32 mailbox_dyn_ctl;
+ __le16 mailbox_vector_id;
+ __le16 num_allocated_vectors;
+ __le16 max_rx_q;
+ __le16 max_tx_q;
+ __le16 max_rx_bufq;
+ __le16 max_tx_complq;
+ __le16 max_sriov_vfs;
+ __le16 max_vports;
+ __le16 default_num_vports;
+ __le16 max_tx_hdr_size;
+ u8 max_sg_bufs_per_tx_pkt;
+ u8 pad[3];
+ u8 reserved[4];
+ __le32 device_type;
+ u8 min_sso_packet_len;
+ u8 max_hdr_buf_per_lso;
+ u8 pad1[10];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(80, virtchnl2_get_capabilities);
+
+/**
+ * struct virtchnl2_queue_reg_chunk - Single queue chunk.
+ * @type: See enum virtchnl2_queue_type.
+ * @start_queue_id: Start Queue ID.
+ * @num_queues: Number of queues in the chunk.
+ * @pad: Padding.
+ * @qtail_reg_start: Queue tail register offset.
+ * @qtail_reg_spacing: Queue tail register spacing.
+ * @pad1: Padding for future extensions.
+ */
+struct virtchnl2_queue_reg_chunk {
+ __le32 type;
+ __le32 start_queue_id;
+ __le32 num_queues;
+ __le32 pad;
+ __le64 qtail_reg_start;
+ __le32 qtail_reg_spacing;
+ u8 pad1[4];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_queue_reg_chunk);
+
+/**
+ * struct virtchnl2_queue_reg_chunks - Specify several chunks of contiguous
+ * queues.
+ * @num_chunks: Number of chunks.
+ * @pad: Padding.
+ * @chunks: Chunks of queue info.
+ */
+struct virtchnl2_queue_reg_chunks {
+ __le16 num_chunks;
+ u8 pad[6];
+ struct virtchnl2_queue_reg_chunk chunks[];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_reg_chunks);
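+
+/* Standalone sketch (not driver code) of sizing a message that ends in a
+ * flexible array, as the chunk structures above do: the allocation is the
+ * fixed header plus num_chunks trailing elements, which is what the kernel's
+ * struct_size() helper computes. The field layout here is simplified.
+ */
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+struct chunk {
+        uint32_t start_queue_id;
+        uint32_t num_queues;
+};
+
+struct chunks_msg {
+        uint16_t num_chunks;
+        uint8_t pad[6];
+        struct chunk chunks[];  /* flexible array member */
+};
+
+int main(void)
+{
+        uint16_t n = 3;
+        size_t size = sizeof(struct chunks_msg) + (size_t)n * sizeof(struct chunk);
+        struct chunks_msg *msg = calloc(1, size);
+
+        if (!msg)
+                return 1;
+        msg->num_chunks = n;
+        printf("message size for %u chunks: %zu bytes\n", (unsigned)n, size);
+        free(msg);
+        return 0;
+}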
+
+/**
+ * struct virtchnl2_create_vport - Create vport config info.
+ * @vport_type: See enum virtchnl2_vport_type.
+ * @txq_model: See virtchnl2_queue_model.
+ * @rxq_model: See virtchnl2_queue_model.
+ * @num_tx_q: Number of Tx queues.
+ * @num_tx_complq: Valid only if txq_model is split queue.
+ * @num_rx_q: Number of Rx queues.
+ * @num_rx_bufq: Valid only if rxq_model is split queue.
+ * @default_rx_q: Relative receive queue index to be used as default.
+ * @vport_index: Used to align PF and CP in case of default multiple vports,
+ * it is filled by the PF and CP returns the same value, to
+ * enable the driver to support multiple asynchronous parallel
+ * CREATE_VPORT requests and associate a response to a specific
+ * request.
+ * @max_mtu: Max MTU. CP populates this field on response.
+ * @vport_id: Vport id. CP populates this field on response.
+ * @default_mac_addr: Default MAC address.
+ * @pad: Padding.
+ * @rx_desc_ids: See VIRTCHNL2_RX_DESC_IDS definitions.
+ * @tx_desc_ids: See VIRTCHNL2_TX_DESC_IDS definitions.
+ * @pad1: Padding.
+ * @rss_algorithm: RSS algorithm.
+ * @rss_key_size: RSS key size.
+ * @rss_lut_size: RSS LUT size.
+ * @rx_split_pos: See enum virtchnl2_cap_rx_hsplit_at.
+ * @pad2: Padding.
+ * @chunks: Chunks of contiguous queues.
+ *
+ * PF sends this message to CP to create a vport by filling in required
+ * fields of virtchnl2_create_vport structure.
+ * CP responds with the updated virtchnl2_create_vport structure containing the
+ * necessary fields followed by chunks which in turn will have an array of
+ * num_chunks entries of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_CREATE_VPORT.
+ */
+struct virtchnl2_create_vport {
+ __le16 vport_type;
+ __le16 txq_model;
+ __le16 rxq_model;
+ __le16 num_tx_q;
+ __le16 num_tx_complq;
+ __le16 num_rx_q;
+ __le16 num_rx_bufq;
+ __le16 default_rx_q;
+ __le16 vport_index;
+ /* CP populates the following fields on response */
+ __le16 max_mtu;
+ __le32 vport_id;
+ u8 default_mac_addr[ETH_ALEN];
+ __le16 pad;
+ __le64 rx_desc_ids;
+ __le64 tx_desc_ids;
+ u8 pad1[72];
+ __le32 rss_algorithm;
+ __le16 rss_key_size;
+ __le16 rss_lut_size;
+ __le32 rx_split_pos;
+ u8 pad2[20];
+ struct virtchnl2_queue_reg_chunks chunks;
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(160, virtchnl2_create_vport);
+
+/**
+ * struct virtchnl2_vport - Vport ID info.
+ * @vport_id: Vport id.
+ * @pad: Padding for future extensions.
+ *
+ * PF sends this message to CP to destroy, enable or disable a vport by filling
+ * in the vport_id in virtchnl2_vport structure.
+ * CP responds with the status of the requested operation.
+ *
+ * Associated with VIRTCHNL2_OP_DESTROY_VPORT, VIRTCHNL2_OP_ENABLE_VPORT,
+ * VIRTCHNL2_OP_DISABLE_VPORT.
+ */
+struct virtchnl2_vport {
+ __le32 vport_id;
+ u8 pad[4];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_vport);
+
+/**
+ * struct virtchnl2_txq_info - Transmit queue config info
+ * @dma_ring_addr: DMA address.
+ * @type: See enum virtchnl2_queue_type.
+ * @queue_id: Queue ID.
+ * @relative_queue_id: Valid only if queue model is split and type is transmit
+ * queue. Used in many to one mapping of transmit queues to
+ * completion queue.
+ * @model: See enum virtchnl2_queue_model.
+ * @sched_mode: See enum virtchnl2_txq_sched_mode.
+ * @qflags: TX queue feature flags.
+ * @ring_len: Ring length.
+ * @tx_compl_queue_id: Valid only if queue model is split and type is transmit
+ * queue.
+ * @peer_type: Valid only if queue type is VIRTCHNL2_QUEUE_TYPE_MBX_TX.
+ * @peer_rx_queue_id: Valid only if queue type is CONFIG_TX and used to deliver
+ * messages for the respective CONFIG_TX queue.
+ * @pad: Padding.
+ * @egress_pasid: Egress PASID info.
+ * @egress_hdr_pasid: Egress header PASID.
+ * @egress_buf_pasid: Egress buffer PASID.
+ * @pad1: Padding for future extensions.
+ */
+struct virtchnl2_txq_info {
+ __le64 dma_ring_addr;
+ __le32 type;
+ __le32 queue_id;
+ __le16 relative_queue_id;
+ __le16 model;
+ __le16 sched_mode;
+ __le16 qflags;
+ __le16 ring_len;
+ __le16 tx_compl_queue_id;
+ __le16 peer_type;
+ __le16 peer_rx_queue_id;
+ u8 pad[4];
+ __le32 egress_pasid;
+ __le32 egress_hdr_pasid;
+ __le32 egress_buf_pasid;
+ u8 pad1[8];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(56, virtchnl2_txq_info);
+
+/**
+ * struct virtchnl2_config_tx_queues - TX queue config.
+ * @vport_id: Vport id.
+ * @num_qinfo: Number of virtchnl2_txq_info structs.
+ * @pad: Padding.
+ * @qinfo: Tx queues config info.
+ *
+ * PF sends this message to set up parameters for one or more transmit queues.
+ * This message contains an array of num_qinfo instances of virtchnl2_txq_info
+ * structures. CP configures requested queues and returns a status code. If
+ * num_qinfo specified is greater than the number of queues associated with the
+ * vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_TX_QUEUES.
+ */
+struct virtchnl2_config_tx_queues {
+ __le32 vport_id;
+ __le16 num_qinfo;
+ u8 pad[10];
+ struct virtchnl2_txq_info qinfo[];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_config_tx_queues);
+
+/**
+ * struct virtchnl2_rxq_info - Receive queue config info.
+ * @desc_ids: See VIRTCHNL2_RX_DESC_IDS definitions.
+ * @dma_ring_addr: DMA address.
+ * @type: See enum virtchnl2_queue_type.
+ * @queue_id: Queue id.
+ * @model: See enum virtchnl2_queue_model.
+ * @hdr_buffer_size: Header buffer size.
+ * @data_buffer_size: Data buffer size.
+ * @max_pkt_size: Max packet size.
+ * @ring_len: Ring length.
+ * @buffer_notif_stride: Buffer notification stride in units of 32-descriptors.
+ * This field must be a power of 2.
+ * @pad: Padding.
+ * @dma_head_wb_addr: Applicable only for receive buffer queues.
+ * @qflags: Applicable only for receive completion queues.
+ * See enum virtchnl2_rxq_flags.
+ * @rx_buffer_low_watermark: Rx buffer low watermark.
+ * @rx_bufq1_id: Buffer queue index of the first buffer queue associated with
+ * the Rx queue. Valid only in split queue model.
+ * @rx_bufq2_id: Buffer queue index of the second buffer queue associated with
+ * the Rx queue. Valid only in split queue model.
+ * @bufq2_ena: It indicates if there is a second buffer, rx_bufq2_id is valid
+ * only if this field is set.
+ * @pad1: Padding.
+ * @ingress_pasid: Ingress PASID.
+ * @ingress_hdr_pasid: Ingress PASID header.
+ * @ingress_buf_pasid: Ingress PASID buffer.
+ * @pad2: Padding for future extensions.
+ */
+struct virtchnl2_rxq_info {
+ __le64 desc_ids;
+ __le64 dma_ring_addr;
+ __le32 type;
+ __le32 queue_id;
+ __le16 model;
+ __le16 hdr_buffer_size;
+ __le32 data_buffer_size;
+ __le32 max_pkt_size;
+ __le16 ring_len;
+ u8 buffer_notif_stride;
+ u8 pad;
+ __le64 dma_head_wb_addr;
+ __le16 qflags;
+ __le16 rx_buffer_low_watermark;
+ __le16 rx_bufq1_id;
+ __le16 rx_bufq2_id;
+ u8 bufq2_ena;
+ u8 pad1[3];
+ __le32 ingress_pasid;
+ __le32 ingress_hdr_pasid;
+ __le32 ingress_buf_pasid;
+ u8 pad2[16];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(88, virtchnl2_rxq_info);
+
+/**
+ * struct virtchnl2_config_rx_queues - Rx queues config.
+ * @vport_id: Vport id.
+ * @num_qinfo: Number of instances.
+ * @pad: Padding.
+ * @qinfo: Rx queues config info.
+ *
+ * PF sends this message to set up parameters for one or more receive queues.
+ * This message contains an array of num_qinfo instances of virtchnl2_rxq_info
+ * structures. CP configures requested queues and returns a status code.
+ * If the number of queues specified is greater than the number of queues
+ * associated with the vport, an error is returned and no queues are configured.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_RX_QUEUES.
+ */
+struct virtchnl2_config_rx_queues {
+ __le32 vport_id;
+ __le16 num_qinfo;
+ u8 pad[18];
+ struct virtchnl2_rxq_info qinfo[];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_config_rx_queues);
+
+/**
+ * struct virtchnl2_add_queues - data for VIRTCHNL2_OP_ADD_QUEUES.
+ * @vport_id: Vport id.
+ * @num_tx_q: Number of Tx queues.
+ * @num_tx_complq: Number of Tx completion queues.
+ * @num_rx_q: Number of Rx queues.
+ * @num_rx_bufq: Number of Rx buffer queues.
+ * @pad: Padding.
+ * @chunks: Chunks of contiguous queues.
+ *
+ * PF sends this message to request additional transmit/receive queues beyond
+ * the ones that were assigned via CREATE_VPORT request. virtchnl2_add_queues
+ * structure is used to specify the number of each type of queues.
+ * CP responds with the same structure with the actual number of queues assigned
+ * followed by num_chunks of virtchnl2_queue_chunk structures.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_QUEUES.
+ */
+struct virtchnl2_add_queues {
+ __le32 vport_id;
+ __le16 num_tx_q;
+ __le16 num_tx_complq;
+ __le16 num_rx_q;
+ __le16 num_rx_bufq;
+ u8 pad[4];
+ struct virtchnl2_queue_reg_chunks chunks;
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_add_queues);
+
+/**
+ * struct virtchnl2_vector_chunk - Structure to specify a chunk of contiguous
+ * interrupt vectors.
+ * @start_vector_id: Start vector id.
+ * @start_evv_id: Start EVV id.
+ * @num_vectors: Number of vectors.
+ * @pad: Padding.
+ * @dynctl_reg_start: DYN_CTL register offset.
+ * @dynctl_reg_spacing: register spacing between DYN_CTL registers of 2
+ * consecutive vectors.
+ * @itrn_reg_start: ITRN register offset.
+ * @itrn_reg_spacing: Register spacing between ITRN registers of 2
+ * consecutive vectors.
+ * @itrn_index_spacing: Register spacing between itrn registers of the same
+ * vector where n=0..2.
+ * @pad1: Padding for future extensions.
+ *
+ * Register offsets and spacing provided by CP.
+ * Dynamic control registers are used for enabling/disabling/re-enabling
+ * interrupts and updating interrupt rates in the hotpath. Any changes
+ * to interrupt rates in the dynamic control registers will be reflected
+ * in the interrupt throttling rate registers.
+ * itrn registers are used to update interrupt rates for specific
+ * interrupt indices without modifying the state of the interrupt.
+ */
+struct virtchnl2_vector_chunk {
+ __le16 start_vector_id;
+ __le16 start_evv_id;
+ __le16 num_vectors;
+ __le16 pad;
+ __le32 dynctl_reg_start;
+ __le32 dynctl_reg_spacing;
+ __le32 itrn_reg_start;
+ __le32 itrn_reg_spacing;
+ __le32 itrn_index_spacing;
+ u8 pad1[4];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_vector_chunk);
+
+/**
+ * struct virtchnl2_vector_chunks - chunks of contiguous interrupt vectors.
+ * @num_vchunks: number of vector chunks.
+ * @pad: Padding.
+ * @vchunks: Chunks of contiguous vector info.
+ *
+ * PF sends virtchnl2_vector_chunks struct to specify the vectors it is giving
+ * away. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_DEALLOC_VECTORS.
+ */
+struct virtchnl2_vector_chunks {
+ __le16 num_vchunks;
+ u8 pad[14];
+ struct virtchnl2_vector_chunk vchunks[];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_vector_chunks);
+
+/**
+ * struct virtchnl2_alloc_vectors - vector allocation info.
+ * @num_vectors: Number of vectors.
+ * @pad: Padding.
+ * @vchunks: Chunks of contiguous vector info.
+ *
+ * PF sends this message to request additional interrupt vectors beyond the
+ * ones that were assigned via GET_CAPS request. virtchnl2_alloc_vectors
+ * structure is used to specify the number of vectors requested. CP responds
+ * with the same structure with the actual number of vectors assigned followed
+ * by virtchnl2_vector_chunks structure identifying the vector ids.
+ *
+ * Associated with VIRTCHNL2_OP_ALLOC_VECTORS.
+ */
+struct virtchnl2_alloc_vectors {
+ __le16 num_vectors;
+ u8 pad[14];
+ struct virtchnl2_vector_chunks vchunks;
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(32, virtchnl2_alloc_vectors);
+
+/**
+ * struct virtchnl2_rss_lut - RSS LUT info.
+ * @vport_id: Vport id.
+ * @lut_entries_start: Start of LUT entries.
+ * @lut_entries: Number of LUT entries.
+ * @pad: Padding.
+ * @lut: RSS lookup table.
+ *
+ * PF sends this message to get or set RSS lookup table. Only supported if
+ * both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_LUT and VIRTCHNL2_OP_SET_RSS_LUT.
+ */
+struct virtchnl2_rss_lut {
+ __le32 vport_id;
+ __le16 lut_entries_start;
+ __le16 lut_entries;
+ u8 pad[4];
+ __le32 lut[];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(12, virtchnl2_rss_lut);
+
+/**
+ * struct virtchnl2_rss_hash - RSS hash info.
+ * @ptype_groups: Packet type groups bitmap.
+ * @vport_id: Vport id.
+ * @pad: Padding for future extensions.
+ *
+ * PF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the CP sets these to all possible traffic types that the
+ * hardware supports. The PF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ * Only supported if both PF and CP drivers set the VIRTCHNL2_CAP_RSS bit
+ * during configuration negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_HASH and VIRTCHNL2_OP_SET_RSS_HASH
+ */
+struct virtchnl2_rss_hash {
+ __le64 ptype_groups;
+ __le32 vport_id;
+ u8 pad[4];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_rss_hash);
+
+/**
+ * struct virtchnl2_sriov_vfs_info - VFs info.
+ * @num_vfs: Number of VFs.
+ * @pad: Padding for future extensions.
+ *
+ * This message is used to set the number of SRIOV VFs to be created. The
+ * actual allocation of resources for the VFs in terms of vports, queues and
+ * interrupts is done by the CP. When this call completes, the IDPF driver
+ * calls pci_enable_sriov to let the OS instantiate the SRIOV PCIE devices.
+ * Setting the number of VFs to 0 destroys all the VFs of this function.
+ *
+ * Associated with VIRTCHNL2_OP_SET_SRIOV_VFS.
+ */
+struct virtchnl2_sriov_vfs_info {
+ __le16 num_vfs;
+ __le16 pad;
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(4, virtchnl2_sriov_vfs_info);
+
+/**
+ * struct virtchnl2_ptype - Packet type info.
+ * @ptype_id_10: 10-bit packet type.
+ * @ptype_id_8: 8-bit packet type.
+ * @proto_id_count: Number of protocol ids the packet supports, maximum of 32
+ * protocol ids are supported.
+ * @pad: Padding.
+ * @proto_id: proto_id_count decides the allocation of protocol id array.
+ * See enum virtchnl2_proto_hdr_type.
+ *
+ * Based on the descriptor type the PF supports, CP fills ptype_id_10 or
+ * ptype_id_8 for the flex and base descriptor respectively. If the ptype_id_10
+ * value is set to 0xFFFF, the PF should treat this ptype as a dummy one; it is
+ * the last ptype.
+ */
+struct virtchnl2_ptype {
+ __le16 ptype_id_10;
+ u8 ptype_id_8;
+ u8 proto_id_count;
+ __le16 pad;
+ __le16 proto_id[];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(6, virtchnl2_ptype);
+
+/**
+ * struct virtchnl2_get_ptype_info - Packet type info.
+ * @start_ptype_id: Starting ptype ID.
+ * @num_ptypes: Number of packet types from start_ptype_id.
+ * @pad: Padding for future extensions.
+ *
+ * The total number of supported packet types is based on the descriptor type.
+ * For the flex descriptor, it is 1024 (10-bit ptype), and for the base
+ * descriptor, it is 256 (8-bit ptype). Send this message to the CP by
+ * populating the 'start_ptype_id' and the 'num_ptypes'. CP responds with the
+ * 'start_ptype_id', 'num_ptypes', and the array of ptype (virtchnl2_ptype) that
+ * are added at the end of the 'virtchnl2_get_ptype_info' message (note: there
+ * is no specific field for the ptypes; they are appended at the end of the
+ * ptype info message and PF/VF is expected to extract them accordingly.
+ * This is done because the compiler does not allow nested flexible
+ * array fields).
+ *
+ * If all the ptypes don't fit into one mailbox buffer, CP splits the
+ * ptype info into multiple messages, where each message will have its own
+ * 'start_ptype_id', 'num_ptypes', and the ptype array itself. When CP is done
+ * updating all the ptype information extracted from the package (the number of
+ * ptypes extracted might be less than what PF/VF expects), it will append a
+ * dummy ptype (which has 'ptype_id_10' of 'struct virtchnl2_ptype' as 0xFFFF)
+ * to the ptype array.
+ *
+ * PF/VF is expected to receive multiple VIRTCHNL2_OP_GET_PTYPE_INFO messages.
+ *
+ * Associated with VIRTCHNL2_OP_GET_PTYPE_INFO.
+ */
+struct virtchnl2_get_ptype_info {
+ __le16 start_ptype_id;
+ __le16 num_ptypes;
+ __le32 pad;
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_get_ptype_info);
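+
+/* Standalone sketch (not driver code) of consuming the ptype stream described
+ * above: records are read until the dummy entry whose 10-bit ptype id is
+ * 0xFFFF, which marks the end of the list. The record layout is simplified
+ * to the id plus its protocol-id count.
+ */
+#include <stdint.h>
+#include <stdio.h>
+
+struct ptype_rec {
+        uint16_t ptype_id_10;
+        uint8_t proto_id_count;
+};
+
+int main(void)
+{
+        const struct ptype_rec stream[] = {
+                { 1, 2 }, { 23, 3 }, { 0xFFFF, 0 },     /* 0xFFFF terminates */
+        };
+
+        for (size_t i = 0; i < sizeof(stream) / sizeof(stream[0]); i++) {
+                if (stream[i].ptype_id_10 == 0xFFFF) {
+                        printf("end of ptype list\n");
+                        break;
+                }
+                printf("ptype %u with %u protocol ids\n",
+                       (unsigned)stream[i].ptype_id_10,
+                       (unsigned)stream[i].proto_id_count);
+        }
+        return 0;
+}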
+
+/**
+ * struct virtchnl2_vport_stats - Vport statistics.
+ * @vport_id: Vport id.
+ * @pad: Padding.
+ * @rx_bytes: Received bytes.
+ * @rx_unicast: Received unicast packets.
+ * @rx_multicast: Received multicast packets.
+ * @rx_broadcast: Received broadcast packets.
+ * @rx_discards: Discarded packets on receive.
+ * @rx_errors: Receive errors.
+ * @rx_unknown_protocol: Unknown protocol.
+ * @tx_bytes: Transmitted bytes.
+ * @tx_unicast: Transmitted unicast packets.
+ * @tx_multicast: Transmitted multicast packets.
+ * @tx_broadcast: Transmitted broadcast packets.
+ * @tx_discards: Discarded packets on transmit.
+ * @tx_errors: Transmit errors.
+ * @rx_invalid_frame_length: Packets with invalid frame length.
+ * @rx_overflow_drop: Packets dropped on buffer overflow.
+ *
+ * PF/VF sends this message to CP to get the updated stats by specifying the
+ * vport_id. CP responds with stats in struct virtchnl2_vport_stats.
+ *
+ * Associated with VIRTCHNL2_OP_GET_STATS.
+ */
+struct virtchnl2_vport_stats {
+ __le32 vport_id;
+ u8 pad[4];
+ __le64 rx_bytes;
+ __le64 rx_unicast;
+ __le64 rx_multicast;
+ __le64 rx_broadcast;
+ __le64 rx_discards;
+ __le64 rx_errors;
+ __le64 rx_unknown_protocol;
+ __le64 tx_bytes;
+ __le64 tx_unicast;
+ __le64 tx_multicast;
+ __le64 tx_broadcast;
+ __le64 tx_discards;
+ __le64 tx_errors;
+ __le64 rx_invalid_frame_length;
+ __le64 rx_overflow_drop;
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(128, virtchnl2_vport_stats);
+
+/**
+ * struct virtchnl2_event - Event info.
+ * @event: Event opcode. See enum virtchnl2_event_codes.
+ * @link_speed: Link_speed provided in Mbps.
+ * @vport_id: Vport ID.
+ * @link_status: Link status.
+ * @pad: Padding.
+ * @reserved: Reserved.
+ *
+ * CP sends this message to inform the PF/VF driver of events that may affect
+ * it. No direct response is expected from the driver, though it may generate
+ * other messages in response to this one.
+ *
+ * Associated with VIRTCHNL2_OP_EVENT.
+ */
+struct virtchnl2_event {
+ __le32 event;
+ __le32 link_speed;
+ __le32 vport_id;
+ u8 link_status;
+ u8 pad;
+ __le16 reserved;
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_event);
+
+/**
+ * struct virtchnl2_rss_key - RSS key info.
+ * @vport_id: Vport id.
+ * @key_len: Length of RSS key.
+ * @pad: Padding.
+ * @key_flex: RSS hash key, packed bytes.
+ *
+ * PF/VF sends this message to get or set RSS key. Only supported if both
+ * PF/VF and CP drivers set the VIRTCHNL2_CAP_RSS bit during configuration
+ * negotiation.
+ *
+ * Associated with VIRTCHNL2_OP_GET_RSS_KEY and VIRTCHNL2_OP_SET_RSS_KEY.
+ */
+struct virtchnl2_rss_key {
+ __le32 vport_id;
+ __le16 key_len;
+ u8 pad;
+ __DECLARE_FLEX_ARRAY(u8, key_flex);
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_rss_key);
+
+/**
+ * struct virtchnl2_queue_chunk - chunk of contiguous queues
+ * @type: See enum virtchnl2_queue_type.
+ * @start_queue_id: Starting queue id.
+ * @num_queues: Number of queues.
+ * @pad: Padding for future extensions.
+ */
+struct virtchnl2_queue_chunk {
+ __le32 type;
+ __le32 start_queue_id;
+ __le32 num_queues;
+ u8 pad[4];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_chunk);
+
+/* struct virtchnl2_queue_chunks - chunks of contiguous queues
+ * @num_chunks: Number of chunks.
+ * @pad: Padding.
+ * @chunks: Chunks of contiguous queues info.
+ */
+struct virtchnl2_queue_chunks {
+ __le16 num_chunks;
+ u8 pad[6];
+ struct virtchnl2_queue_chunk chunks[];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_queue_chunks);
+
+/**
+ * struct virtchnl2_del_ena_dis_queues - Enable/disable queues info.
+ * @vport_id: Vport id.
+ * @pad: Padding.
+ * @chunks: Chunks of contiguous queues info.
+ *
+ * PF sends these messages to enable, disable or delete queues specified in
+ * chunks. PF sends virtchnl2_del_ena_dis_queues struct to specify the queues
+ * to be enabled/disabled/deleted. Also applicable to single queue receive or
+ * transmit. CP performs requested action and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_ENABLE_QUEUES, VIRTCHNL2_OP_DISABLE_QUEUES and
+ * VIRTCHNL2_OP_DEL_QUEUES.
+ */
+struct virtchnl2_del_ena_dis_queues {
+ __le32 vport_id;
+ u8 pad[4];
+ struct virtchnl2_queue_chunks chunks;
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_del_ena_dis_queues);
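
Since virtchnl2_queue_chunks itself ends in a flexible array, the message size for VIRTCHNL2_OP_ENABLE_QUEUES/VIRTCHNL2_OP_DISABLE_QUEUES grows with the chunk count. A rough sketch of the sizing; the helper name and allocation flags are assumptions, and the caller is expected to fill in each chunk.

static struct virtchnl2_del_ena_dis_queues *
build_ena_dis_queues_msg(u32 vport_id, u16 num_chunks, size_t *msg_len)
{
	struct virtchnl2_del_ena_dis_queues *eq;

	/* Fixed header plus num_chunks trailing virtchnl2_queue_chunk entries. */
	*msg_len = struct_size(eq, chunks.chunks, num_chunks);
	eq = kzalloc(*msg_len, GFP_KERNEL);
	if (!eq)
		return NULL;

	eq->vport_id = cpu_to_le32(vport_id);
	eq->chunks.num_chunks = cpu_to_le16(num_chunks);
	/* eq->chunks.chunks[i].type/start_queue_id/num_queues set by caller. */
	return eq;
}
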
+
+/**
+ * struct virtchnl2_queue_vector - Queue to vector mapping.
+ * @queue_id: Queue id.
+ * @vector_id: Vector id.
+ * @pad: Padding.
+ * @itr_idx: See enum virtchnl2_itr_idx.
+ * @queue_type: See enum virtchnl2_queue_type.
+ * @pad1: Padding for future extensions.
+ */
+struct virtchnl2_queue_vector {
+ __le32 queue_id;
+ __le16 vector_id;
+ u8 pad[2];
+ __le32 itr_idx;
+ __le32 queue_type;
+ u8 pad1[8];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(24, virtchnl2_queue_vector);
+
+/**
+ * struct virtchnl2_queue_vector_maps - Map/unmap queues info.
+ * @vport_id: Vport id.
+ * @num_qv_maps: Number of queue vector maps.
+ * @pad: Padding.
+ * @qv_maps: Queue to vector maps.
+ *
+ * PF sends this message to map or unmap queues to vectors and interrupt
+ * throttling rate index registers. External data buffer contains
+ * virtchnl2_queue_vector_maps structure that contains num_qv_maps of
+ * virtchnl2_queue_vector structures. CP maps the requested queue vector maps
+ * after validating the queue and vector ids and returns a status code.
+ *
+ * Associated with VIRTCHNL2_OP_MAP_QUEUE_VECTOR and
+ * VIRTCHNL2_OP_UNMAP_QUEUE_VECTOR.
+ */
+struct virtchnl2_queue_vector_maps {
+ __le32 vport_id;
+ __le16 num_qv_maps;
+ u8 pad[10];
+ struct virtchnl2_queue_vector qv_maps[];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(16, virtchnl2_queue_vector_maps);
+
+/**
+ * struct virtchnl2_loopback - Loopback info.
+ * @vport_id: Vport id.
+ * @enable: Enable/disable.
+ * @pad: Padding for future extensions.
+ *
+ * PF/VF sends this message to transition to/from the loopback state. Setting
+ * the 'enable' to 1 enables the loopback state and setting 'enable' to 0
+ * disables it. CP configures the state to loopback and returns status.
+ *
+ * Associated with VIRTCHNL2_OP_LOOPBACK.
+ */
+struct virtchnl2_loopback {
+ __le32 vport_id;
+ u8 enable;
+ u8 pad[3];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_loopback);
+
+/* struct virtchnl2_mac_addr - MAC address info.
+ * @addr: MAC address.
+ * @type: MAC type. See enum virtchnl2_mac_addr_type.
+ * @pad: Padding for future extensions.
+ */
+struct virtchnl2_mac_addr {
+ u8 addr[ETH_ALEN];
+ u8 type;
+ u8 pad;
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_mac_addr);
+
+/**
+ * struct virtchnl2_mac_addr_list - List of MAC addresses.
+ * @vport_id: Vport id.
+ * @num_mac_addr: Number of MAC addresses.
+ * @pad: Padding.
+ * @mac_addr_list: List with MAC address info.
+ *
+ * PF/VF driver uses this structure to send a list of MAC addresses to be
+ * added/deleted to the CP, which performs the action and returns the
+ * status.
+ *
+ * Associated with VIRTCHNL2_OP_ADD_MAC_ADDR and VIRTCHNL2_OP_DEL_MAC_ADDR.
+ */
+struct virtchnl2_mac_addr_list {
+ __le32 vport_id;
+ __le16 num_mac_addr;
+ u8 pad[2];
+ struct virtchnl2_mac_addr mac_addr_list[];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_mac_addr_list);
+
+/**
+ * struct virtchnl2_promisc_info - Promisc type info.
+ * @vport_id: Vport id.
+ * @flags: See enum virtchnl2_promisc_flags.
+ * @pad: Padding for future extensions.
+ *
+ * PF/VF sends the vport id and promiscuous flags to the CP, which performs
+ * the action and returns the status.
+ *
+ * Associated with VIRTCHNL2_OP_CONFIG_PROMISCUOUS_MODE.
+ */
+struct virtchnl2_promisc_info {
+ __le32 vport_id;
+ /* See VIRTCHNL2_PROMISC_FLAGS definitions */
+ __le16 flags;
+ u8 pad[2];
+};
+VIRTCHNL2_CHECK_STRUCT_LEN(8, virtchnl2_promisc_info);
+
+#endif /* _VIRTCHNL_2_H_ */
diff --git a/drivers/net/ethernet/intel/idpf/virtchnl2_lan_desc.h b/drivers/net/ethernet/intel/idpf/virtchnl2_lan_desc.h
new file mode 100644
index 000000000000..f1b577f1c452
--- /dev/null
+++ b/drivers/net/ethernet/intel/idpf/virtchnl2_lan_desc.h
@@ -0,0 +1,451 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright (C) 2023 Intel Corporation */
+
+#ifndef _VIRTCHNL2_LAN_DESC_H_
+#define _VIRTCHNL2_LAN_DESC_H_
+
+#include <linux/bits.h>
+
+/* This is an interface definition file where existing enums and their values
+ * must remain unchanged over time, so we specify explicit values for all enums.
+ */
+
+/* Transmit descriptor ID flags
+ */
+enum virtchnl2_tx_desc_ids {
+ VIRTCHNL2_TXDID_DATA = BIT(0),
+ VIRTCHNL2_TXDID_CTX = BIT(1),
+ /* TXDID bit 2 is reserved
+ * TXDID bit 3 is free for future use
+ * TXDID bit 4 is reserved
+ */
+ VIRTCHNL2_TXDID_FLEX_TSO_CTX = BIT(5),
+ /* TXDID bit 6 is reserved */
+ VIRTCHNL2_TXDID_FLEX_L2TAG1_L2TAG2 = BIT(7),
+ /* TXDID bits 8 and 9 are free for future use
+ * TXDID bit 10 is reserved
+ * TXDID bit 11 is free for future use
+ */
+ VIRTCHNL2_TXDID_FLEX_FLOW_SCHED = BIT(12),
+ /* TXDID bits 13 and 14 are free for future use */
+ VIRTCHNL2_TXDID_DESC_DONE = BIT(15),
+};
+
+/* Receive descriptor IDs */
+enum virtchnl2_rx_desc_ids {
+ VIRTCHNL2_RXDID_1_32B_BASE = 1,
+ /* FLEX_SQ_NIC and FLEX_SPLITQ share desc ids because they can be
+ * differentiated based on queue model; e.g. single queue model can
+ * only use FLEX_SQ_NIC and split queue model can only use FLEX_SPLITQ
+ * for DID 2.
+ */
+ VIRTCHNL2_RXDID_2_FLEX_SPLITQ = 2,
+ VIRTCHNL2_RXDID_2_FLEX_SQ_NIC = VIRTCHNL2_RXDID_2_FLEX_SPLITQ,
+ /* 3 through 6 are reserved */
+ VIRTCHNL2_RXDID_7_HW_RSVD = 7,
+ /* 8 through 15 are free */
+};
+
+/* Receive descriptor ID bitmasks */
+#define VIRTCHNL2_RXDID_M(bit) BIT_ULL(VIRTCHNL2_RXDID_##bit)
+
+enum virtchnl2_rx_desc_id_bitmasks {
+ VIRTCHNL2_RXDID_1_32B_BASE_M = VIRTCHNL2_RXDID_M(1_32B_BASE),
+ VIRTCHNL2_RXDID_2_FLEX_SPLITQ_M = VIRTCHNL2_RXDID_M(2_FLEX_SPLITQ),
+ VIRTCHNL2_RXDID_2_FLEX_SQ_NIC_M = VIRTCHNL2_RXDID_M(2_FLEX_SQ_NIC),
+ VIRTCHNL2_RXDID_7_HW_RSVD_M = VIRTCHNL2_RXDID_M(7_HW_RSVD),
+};
+
+/* For splitq virtchnl2_rx_flex_desc_adv_nic_3 desc members */
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RXDID_M GENMASK(3, 0)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_UMBCAST_M GENMASK(7, 6)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_PTYPE_M GENMASK(9, 0)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RAW_CSUM_INV_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RAW_CSUM_INV_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RAW_CSUM_INV_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF0_M GENMASK(15, 13)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M GENMASK(13, 0)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S 14
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S 15
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_BUFQ_ID_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_HDR_M GENMASK(9, 0)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S 10
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S 11
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_SPH_S)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_S 12
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_FF1_M GENMASK(14, 12)
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S 15
+#define VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_M \
+ BIT_ULL(VIRTCHNL2_RX_FLEX_DESC_ADV_MISS_S)
+
+/* Bitmasks for splitq virtchnl2_rx_flex_desc_adv_nic_3 */
+enum virtchl2_rx_flex_desc_adv_status_error_0_qw1_bits {
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_DD_M = BIT(0),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_EOF_M = BIT(1),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_HBO_M = BIT(2),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L3L4P_M = BIT(3),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_IPE_M = BIT(4),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_L4E_M = BIT(5),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EIPE_M = BIT(6),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XSUM_EUDPE_M = BIT(7),
+};
+
+/* Bitmasks for splitq virtchnl2_rx_flex_desc_adv_nic_3 */
+enum virtchnl2_rx_flex_desc_adv_status_error_0_qw0_bits {
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_LPBK_M = BIT(0),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_IPV6EXADD_M = BIT(1),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RXE_M = BIT(2),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_CRCP_M = BIT(3),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_RSS_VALID_M = BIT(4),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_L2TAG1P_M = BIT(5),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD0_VALID_M = BIT(6),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS0_XTRMD1_VALID_M = BIT(7),
+};
+
+/* Bitmasks for splitq virtchnl2_rx_flex_desc_adv_nic_3 */
+enum virtchnl2_rx_flex_desc_adv_status_error_1_bits {
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_RSVD_M = GENMASK(1, 0),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_ATRAEFAIL_M = BIT(2),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_L2TAG2P_M = BIT(3),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD2_VALID_M = BIT(4),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD3_VALID_M = BIT(5),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD4_VALID_M = BIT(6),
+ VIRTCHNL2_RX_FLEX_DESC_ADV_STATUS1_XTRMD5_VALID_M = BIT(7),
+};
+
+/* For singleq (flex) virtchnl2_rx_flex_desc fields
+ * For virtchnl2_rx_flex_desc.ptype_flex_flags0 member
+ */
+#define VIRTCHNL2_RX_FLEX_DESC_PTYPE_M GENMASK(9, 0)
+
+/* For virtchnl2_rx_flex_desc.pkt_len member */
+#define VIRTCHNL2_RX_FLEX_DESC_PKT_LEN_M GENMASK(13, 0)
+
+/* Bitmasks for singleq (flex) virtchnl2_rx_flex_desc */
+enum virtchnl2_rx_flex_desc_status_error_0_bits {
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_DD_M = BIT(0),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_EOF_M = BIT(1),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_HBO_M = BIT(2),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_L3L4P_M = BIT(3),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_IPE_M = BIT(4),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_L4E_M = BIT(5),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EIPE_M = BIT(6),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_XSUM_EUDPE_M = BIT(7),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_LPBK_M = BIT(8),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_IPV6EXADD_M = BIT(9),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_RXE_M = BIT(10),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_CRCP_M = BIT(11),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_RSS_VALID_M = BIT(12),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_L2TAG1P_M = BIT(13),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD0_VALID_M = BIT(14),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS0_XTRMD1_VALID_M = BIT(15),
+};
+
+/* Bitmasks for singleq (flex) virtchnl2_rx_flex_desc */
+enum virtchnl2_rx_flex_desc_status_error_1_bits {
+ VIRTCHNL2_RX_FLEX_DESC_STATUS1_CPM_M = GENMASK(3, 0),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS1_NAT_M = BIT(4),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS1_CRYPTO_M = BIT(5),
+ /* [10:6] reserved */
+ VIRTCHNL2_RX_FLEX_DESC_STATUS1_L2TAG2P_M = BIT(11),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD2_VALID_M = BIT(12),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD3_VALID_M = BIT(13),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD4_VALID_M = BIT(14),
+ VIRTCHNL2_RX_FLEX_DESC_STATUS1_XTRMD5_VALID_M = BIT(15),
+};
+
+/* For virtchnl2_rx_flex_desc.ts_low member */
+#define VIRTCHNL2_RX_FLEX_TSTAMP_VALID BIT(0)
+
+/* For singleq (non flex) virtchnl2_singleq_base_rx_desc legacy desc members */
+#define VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M GENMASK_ULL(51, 38)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M GENMASK_ULL(37, 30)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_ERROR_M GENMASK_ULL(26, 19)
+#define VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M GENMASK_ULL(18, 0)
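
A hedged sketch of decoding the 64-bit little-endian QW1 word with the masks above; FIELD_GET() is from <linux/bitfield.h> and virtchnl2_singleq_base_rx_desc is defined further down in this file. The function name is a placeholder.

#include <linux/bitfield.h>

static void example_decode_base_qw1(const struct virtchnl2_singleq_base_rx_desc *rxd)
{
	u64 qw1 = le64_to_cpu(rxd->qword1.status_error_ptype_len);
	u32 status = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_STATUS_M, qw1);
	u16 pkt_len = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_LEN_PBUF_M, qw1);
	u16 ptype = FIELD_GET(VIRTCHNL2_RX_BASE_DESC_QW1_PTYPE_M, qw1);

	/* Descriptor contents are only valid once DD is set. */
	if (!(status & VIRTCHNL2_RX_BASE_DESC_STATUS_DD_M))
		return;

	pr_debug("rx: len=%u ptype=%u\n", pkt_len, ptype);
}
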
+
+/* Bitmasks for singleq (base) virtchnl2_rx_base_desc */
+enum virtchnl2_rx_base_desc_status_bits {
+ VIRTCHNL2_RX_BASE_DESC_STATUS_DD_M = BIT(0),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_EOF_M = BIT(1),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_L2TAG1P_M = BIT(2),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_L3L4P_M = BIT(3),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_CRCP_M = BIT(4),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD_M = GENMASK(7, 5),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_EXT_UDP_0_M = BIT(8),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_UMBCAST_M = GENMASK(10, 9),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_FLM_M = BIT(11),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_FLTSTAT_M = GENMASK(13, 12),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_LPBK_M = BIT(14),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_IPV6EXADD_M = BIT(15),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_RSVD1_M = GENMASK(17, 16),
+ VIRTCHNL2_RX_BASE_DESC_STATUS_INT_UDP_0_M = BIT(18),
+};
+
+/* Bitmasks for singleq (base) virtchnl2_rx_base_desc */
+enum virtchnl2_rx_base_desc_error_bits {
+ VIRTCHNL2_RX_BASE_DESC_ERROR_RXE_M = BIT(0),
+ VIRTCHNL2_RX_BASE_DESC_ERROR_ATRAEFAIL_M = BIT(1),
+ VIRTCHNL2_RX_BASE_DESC_ERROR_HBO_M = BIT(2),
+ VIRTCHNL2_RX_BASE_DESC_ERROR_L3L4E_M = GENMASK(5, 3),
+ VIRTCHNL2_RX_BASE_DESC_ERROR_IPE_M = BIT(3),
+ VIRTCHNL2_RX_BASE_DESC_ERROR_L4E_M = BIT(4),
+ VIRTCHNL2_RX_BASE_DESC_ERROR_EIPE_M = BIT(5),
+ VIRTCHNL2_RX_BASE_DESC_ERROR_OVERSIZE_M = BIT(6),
+ VIRTCHNL2_RX_BASE_DESC_ERROR_PPRS_M = BIT(7),
+};
+
+/* Bitmasks for singleq (base) virtchnl2_rx_base_desc */
+#define VIRTCHNL2_RX_BASE_DESC_FLTSTAT_RSS_HASH_M GENMASK(13, 12)
+
+/**
+ * struct virtchnl2_splitq_rx_buf_desc - SplitQ RX buffer descriptor format
+ * @qword0: RX buffer struct.
+ * @qword0.buf_id: Buffer identifier.
+ * @qword0.rsvd0: Reserved.
+ * @qword0.rsvd1: Reserved.
+ * @pkt_addr: Packet buffer address.
+ * @hdr_addr: Header buffer address.
+ * @rsvd2: Reserved.
+ *
+ * Receive Descriptors
+ * SplitQ buffer
+ * | 16| 0|
+ * ----------------------------------------------------------------
+ * | RSV | Buffer ID |
+ * ----------------------------------------------------------------
+ * | Rx packet buffer address |
+ * ----------------------------------------------------------------
+ * | Rx header buffer address |
+ * ----------------------------------------------------------------
+ * | RSV |
+ * ----------------------------------------------------------------
+ * | 0|
+ */
+struct virtchnl2_splitq_rx_buf_desc {
+ struct {
+ __le16 buf_id;
+ __le16 rsvd0;
+ __le32 rsvd1;
+ } qword0;
+ __le64 pkt_addr;
+ __le64 hdr_addr;
+ __le64 rsvd2;
+};
+
+/**
+ * struct virtchnl2_singleq_rx_buf_desc - SingleQ RX buffer descriptor format.
+ * @pkt_addr: Packet buffer address.
+ * @hdr_addr: Header buffer address.
+ * @rsvd1: Reserved.
+ * @rsvd2: Reserved.
+ *
+ * SingleQ buffer
+ * | 0|
+ * ----------------------------------------------------------------
+ * | Rx packet buffer address |
+ * ----------------------------------------------------------------
+ * | Rx header buffer address |
+ * ----------------------------------------------------------------
+ * | RSV |
+ * ----------------------------------------------------------------
+ * | RSV |
+ * ----------------------------------------------------------------
+ * | 0|
+ */
+struct virtchnl2_singleq_rx_buf_desc {
+ __le64 pkt_addr;
+ __le64 hdr_addr;
+ __le64 rsvd1;
+ __le64 rsvd2;
+};
+
+/**
+ * struct virtchnl2_singleq_base_rx_desc - RX descriptor writeback format.
+ * @qword0: First quad word struct.
+ * @qword0.lo_dword: Lower dual word struct.
+ * @qword0.lo_dword.mirroring_status: Mirrored packet status.
+ * @qword0.lo_dword.l2tag1: Stripped L2 tag from the received packet.
+ * @qword0.hi_dword: High dual word union.
+ * @qword0.hi_dword.rss: RSS hash.
+ * @qword0.hi_dword.fd_id: Flow director filter id.
+ * @qword1: Second quad word struct.
+ * @qword1.status_error_ptype_len: Status/error/PTYPE/length.
+ * @qword2: Third quad word struct.
+ * @qword2.ext_status: Extended status.
+ * @qword2.rsvd: Reserved.
+ * @qword2.l2tag2_1: Extracted L2 tag 2 from the packet.
+ * @qword2.l2tag2_2: Reserved.
+ * @qword3: Fourth quad word struct.
+ * @qword3.reserved: Reserved.
+ * @qword3.fd_id: Flow director filter id.
+ *
+ * Profile ID 0x1, SingleQ, base writeback format
+ */
+struct virtchnl2_singleq_base_rx_desc {
+ struct {
+ struct {
+ __le16 mirroring_status;
+ __le16 l2tag1;
+ } lo_dword;
+ union {
+ __le32 rss;
+ __le32 fd_id;
+ } hi_dword;
+ } qword0;
+ struct {
+ __le64 status_error_ptype_len;
+ } qword1;
+ struct {
+ __le16 ext_status;
+ __le16 rsvd;
+ __le16 l2tag2_1;
+ __le16 l2tag2_2;
+ } qword2;
+ struct {
+ __le32 reserved;
+ __le32 fd_id;
+ } qword3;
+};
+
+/**
+ * struct virtchnl2_rx_flex_desc_nic - RX descriptor writeback format.
+ *
+ * @rxdid: Descriptor builder profile id.
+ * @mir_id_umb_cast: umb_cast=[7:6], mirror=[5:0]
+ * @ptype_flex_flags0: ff0=[15:10], ptype=[9:0]
+ * @pkt_len: Packet length, [15:14] are reserved.
+ * @hdr_len_sph_flex_flags1: ff1/ext=[15:12], sph=[11], header=[10:0].
+ * @status_error0: Status/Error section 0.
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @rss_hash: RSS hash.
+ * @status_error1: Status/Error section 1.
+ * @flexi_flags2: Flexible flags section 2.
+ * @ts_low: Lower word of timestamp value.
+ * @l2tag2_1st: First L2TAG2.
+ * @l2tag2_2nd: Second L2TAG2.
+ * @flow_id: Flow id.
+ * @flex_ts: Timestamp and flexible flow id union.
+ * @flex_ts.ts_high: Timestamp higher word of the timestamp value.
+ * @flex_ts.flex.rsvd: Reserved.
+ * @flex_ts.flex.flow_id_ipv6: IPv6 flow id.
+ *
+ * Profile ID 0x2, SingleQ, flex writeback format
+ */
+struct virtchnl2_rx_flex_desc_nic {
+ /* Qword 0 */
+ u8 rxdid;
+ u8 mir_id_umb_cast;
+ __le16 ptype_flex_flags0;
+ __le16 pkt_len;
+ __le16 hdr_len_sph_flex_flags1;
+ /* Qword 1 */
+ __le16 status_error0;
+ __le16 l2tag1;
+ __le32 rss_hash;
+ /* Qword 2 */
+ __le16 status_error1;
+ u8 flexi_flags2;
+ u8 ts_low;
+ __le16 l2tag2_1st;
+ __le16 l2tag2_2nd;
+ /* Qword 3 */
+ __le32 flow_id;
+ union {
+ struct {
+ __le16 rsvd;
+ __le16 flow_id_ipv6;
+ } flex;
+ __le32 ts_high;
+ } flex_ts;
+};
+
+/**
+ * struct virtchnl2_rx_flex_desc_adv_nic_3 - RX descriptor writeback format.
+ * @rxdid_ucast: ucast=[7:6], rsvd=[5:4], profile_id=[3:0].
+ * @status_err0_qw0: Status/Error section 0 in quad word 0.
+ * @ptype_err_fflags0: ff0=[15:12], udp_len_err=[11], ip_hdr_err=[10],
+ * ptype=[9:0].
+ * @pktlen_gen_bufq_id: bufq_id=[15] only in splitq, gen=[14] only in splitq,
+ * plen=[13:0].
+ * @hdrlen_flags: miss_prepend=[15], trunc_mirr=[14], int_udp_0=[13],
+ * ext_udp0=[12], sph=[11] only in splitq, rsc=[10]
+ * only in splitq, header=[9:0].
+ * @status_err0_qw1: Status/Error section 0 in quad word 1.
+ * @status_err1: Status/Error section 1.
+ * @fflags1: Flexible flags section 1.
+ * @ts_low: Lower word of timestamp value.
+ * @buf_id: Buffer identifier. Only in splitq mode.
+ * @misc: Union.
+ * @misc.raw_cs: Raw checksum.
+ * @misc.l2tag1: Stripped L2 tag from the received packet
+ * @misc.rscseglen: RSC segment length.
+ * @hash1: Lower bits of Rx hash value.
+ * @ff2_mirrid_hash2: Union.
+ * @ff2_mirrid_hash2.fflags2: Flexible flags section 2.
+ * @ff2_mirrid_hash2.mirrorid: Mirror id.
+ * @ff2_mirrid_hash2.rscseglen: RSC segment length.
+ * @hash3: Upper bits of Rx hash value.
+ * @l2tag2: Extracted L2 tag 2 from the packet.
+ * @fmd4: Flexible metadata container 4.
+ * @l2tag1: Stripped L2 tag from the received packet
+ * @fmd6: Flexible metadata container 6.
+ * @ts_high: Timestamp higher word of the timestamp value.
+ *
+ * Profile ID 0x2, SplitQ, flex writeback format
+ *
+ * Flex-field 0: BufferID
+ * Flex-field 1: Raw checksum/L2TAG1/RSC Seg Len (determined by HW)
+ * Flex-field 2: Hash[15:0]
+ * Flex-flags 2: Hash[23:16]
+ * Flex-field 3: L2TAG2
+ * Flex-field 5: L2TAG1
+ * Flex-field 7: Timestamp (upper 32 bits)
+ */
+struct virtchnl2_rx_flex_desc_adv_nic_3 {
+ /* Qword 0 */
+ u8 rxdid_ucast;
+ u8 status_err0_qw0;
+ __le16 ptype_err_fflags0;
+ __le16 pktlen_gen_bufq_id;
+ __le16 hdrlen_flags;
+ /* Qword 1 */
+ u8 status_err0_qw1;
+ u8 status_err1;
+ u8 fflags1;
+ u8 ts_low;
+ __le16 buf_id;
+ union {
+ __le16 raw_cs;
+ __le16 l2tag1;
+ __le16 rscseglen;
+ } misc;
+ /* Qword 2 */
+ __le16 hash1;
+ union {
+ u8 fflags2;
+ u8 mirrorid;
+ u8 hash2;
+ } ff2_mirrid_hash2;
+ u8 hash3;
+ __le16 l2tag2;
+ __le16 fmd4;
+ /* Qword 3 */
+ __le16 l2tag1;
+ __le16 fmd6;
+ __le32 ts_high;
+};
+
+/* Common union for accessing descriptor format structs */
+union virtchnl2_rx_desc {
+ struct virtchnl2_singleq_base_rx_desc base_wb;
+ struct virtchnl2_rx_flex_desc_nic flex_nic_wb;
+ struct virtchnl2_rx_flex_desc_adv_nic_3 flex_adv_nic_3_wb;
+};
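
A sketch of how a splitq completion path might use the GEN and LEN_PBUF masks on the 16-bit pktlen_gen_bufq_id word; the function shape and the expected_gen bookkeeping are assumptions, not code from this series.

static bool example_splitq_desc_ready(const union virtchnl2_rx_desc *desc,
				      bool expected_gen, u16 *pkt_len)
{
	u16 qw = le16_to_cpu(desc->flex_adv_nic_3_wb.pktlen_gen_bufq_id);

	/* The generation bit flips each time the completion ring wraps. */
	if (FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_GEN_M, qw) != expected_gen)
		return false;

	*pkt_len = FIELD_GET(VIRTCHNL2_RX_FLEX_DESC_ADV_LEN_PBUF_M, qw);
	return true;
}
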
+
+#endif /* _VIRTCHNL2_LAN_DESC_H_ */
diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
index 4ee849985e2b..16d2a55d5e17 100644
--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
+++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
@@ -2356,10 +2356,10 @@ static void igb_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
break;
case ETH_SS_STATS:
for (i = 0; i < IGB_GLOBAL_STATS_LEN; i++)
- ethtool_sprintf(&p,
+ ethtool_sprintf(&p, "%s",
igb_gstrings_stats[i].stat_string);
for (i = 0; i < IGB_NETDEV_STATS_LEN; i++)
- ethtool_sprintf(&p,
+ ethtool_sprintf(&p, "%s",
igb_gstrings_net_stats[i].stat_string);
for (i = 0; i < adapter->num_tx_queues; i++) {
ethtool_sprintf(&p, "tx_queue_%u_packets", i);
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 76b34cee1da3..b2295caa2f0a 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -3069,6 +3069,7 @@ void igb_set_fw_version(struct igb_adapter *adapter)
{
struct e1000_hw *hw = &adapter->hw;
struct e1000_fw_version fw;
+ char *lbuf;
igb_get_fw_version(hw, &fw);
@@ -3076,36 +3077,34 @@ void igb_set_fw_version(struct igb_adapter *adapter)
case e1000_i210:
case e1000_i211:
if (!(igb_get_flash_presence_i210(hw))) {
- snprintf(adapter->fw_version,
- sizeof(adapter->fw_version),
- "%2d.%2d-%d",
- fw.invm_major, fw.invm_minor,
- fw.invm_img_type);
+ lbuf = kasprintf(GFP_KERNEL, "%2d.%2d-%d",
+ fw.invm_major, fw.invm_minor,
+ fw.invm_img_type);
break;
}
fallthrough;
default:
- /* if option is rom valid, display its version too */
+ /* if option rom is valid, display its version too */
if (fw.or_valid) {
- snprintf(adapter->fw_version,
- sizeof(adapter->fw_version),
- "%d.%d, 0x%08x, %d.%d.%d",
- fw.eep_major, fw.eep_minor, fw.etrack_id,
- fw.or_major, fw.or_build, fw.or_patch);
+ lbuf = kasprintf(GFP_KERNEL, "%d.%d, 0x%08x, %d.%d.%d",
+ fw.eep_major, fw.eep_minor,
+ fw.etrack_id, fw.or_major, fw.or_build,
+ fw.or_patch);
/* no option rom */
} else if (fw.etrack_id != 0X0000) {
- snprintf(adapter->fw_version,
- sizeof(adapter->fw_version),
- "%d.%d, 0x%08x",
- fw.eep_major, fw.eep_minor, fw.etrack_id);
+ lbuf = kasprintf(GFP_KERNEL, "%d.%d, 0x%08x",
+ fw.eep_major, fw.eep_minor,
+ fw.etrack_id);
} else {
- snprintf(adapter->fw_version,
- sizeof(adapter->fw_version),
- "%d.%d.%d",
- fw.eep_major, fw.eep_minor, fw.eep_build);
+ lbuf = kasprintf(GFP_KERNEL, "%d.%d.%d", fw.eep_major,
+ fw.eep_minor, fw.eep_build);
}
break;
}
+
+ /* the truncation happens here if it doesn't fit */
+ strscpy(adapter->fw_version, lbuf, sizeof(adapter->fw_version));
+ kfree(lbuf);
}
/**
@@ -3264,7 +3263,7 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
igb_set_ethtool_ops(netdev);
netdev->watchdog_timeo = 5 * HZ;
- strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
+ strscpy(netdev->name, pci_name(pdev), sizeof(netdev->name));
netdev->mem_start = pci_resource_start(pdev, 0);
netdev->mem_end = pci_resource_end(pdev, 0);
@@ -7857,8 +7856,8 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
{
struct pci_dev *pdev = adapter->pdev;
struct vf_data_storage *vf_data = &adapter->vf_data[vf];
- struct list_head *pos;
- struct vf_mac_filter *entry = NULL;
+ struct vf_mac_filter *entry;
+ bool found = false;
int ret = 0;
if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
@@ -7878,8 +7877,7 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
switch (info) {
case E1000_VF_MAC_FILTER_CLR:
/* remove all unicast MAC filters related to the current VF */
- list_for_each(pos, &adapter->vf_macs.l) {
- entry = list_entry(pos, struct vf_mac_filter, l);
+ list_for_each_entry(entry, &adapter->vf_macs.l, l) {
if (entry->vf == vf) {
entry->vf = -1;
entry->free = true;
@@ -7889,13 +7887,14 @@ static int igb_set_vf_mac_filter(struct igb_adapter *adapter, const int vf,
break;
case E1000_VF_MAC_FILTER_ADD:
/* try to find empty slot in the list */
- list_for_each(pos, &adapter->vf_macs.l) {
- entry = list_entry(pos, struct vf_mac_filter, l);
- if (entry->free)
+ list_for_each_entry(entry, &adapter->vf_macs.l, l) {
+ if (entry->free) {
+ found = true;
break;
+ }
}
- if (entry && entry->free) {
+ if (found) {
entry->free = false;
entry->vf = vf;
ether_addr_copy(entry->vf_mac, addr);
diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c
index 7ff2752dd763..fd712585af27 100644
--- a/drivers/net/ethernet/intel/igbvf/netdev.c
+++ b/drivers/net/ethernet/intel/igbvf/netdev.c
@@ -2785,7 +2785,7 @@ static int igbvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
igbvf_set_ethtool_ops(netdev);
netdev->watchdog_timeo = 5 * HZ;
- strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
+ strscpy(netdev->name, pci_name(pdev), sizeof(netdev->name));
adapter->bd_number = cards_found++;
diff --git a/drivers/net/ethernet/intel/igc/igc_ethtool.c b/drivers/net/ethernet/intel/igc/igc_ethtool.c
index dd8a9d27a167..785eaa8e0ba8 100644
--- a/drivers/net/ethernet/intel/igc/igc_ethtool.c
+++ b/drivers/net/ethernet/intel/igc/igc_ethtool.c
@@ -773,9 +773,10 @@ static void igc_ethtool_get_strings(struct net_device *netdev, u32 stringset,
break;
case ETH_SS_STATS:
for (i = 0; i < IGC_GLOBAL_STATS_LEN; i++)
- ethtool_sprintf(&p, igc_gstrings_stats[i].stat_string);
+ ethtool_sprintf(&p, "%s",
+ igc_gstrings_stats[i].stat_string);
for (i = 0; i < IGC_NETDEV_STATS_LEN; i++)
- ethtool_sprintf(&p,
+ ethtool_sprintf(&p, "%s",
igc_gstrings_net_stats[i].stat_string);
for (i = 0; i < adapter->num_tx_queues; i++) {
ethtool_sprintf(&p, "tx_queue_%u_packets", i);
diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
index 98de34d0ce07..e9bb403bbacf 100644
--- a/drivers/net/ethernet/intel/igc/igc_main.c
+++ b/drivers/net/ethernet/intel/igc/igc_main.c
@@ -6935,7 +6935,7 @@ static int igc_probe(struct pci_dev *pdev,
*/
igc_get_hw_control(adapter);
- strncpy(netdev->name, "eth%d", IFNAMSIZ);
+ strscpy(netdev->name, "eth%d", sizeof(netdev->name));
err = register_netdev(netdev);
if (err)
goto err_register;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index 0bbad4a5cc2f..4dd897806fa5 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -1413,11 +1413,11 @@ static void ixgbe_get_strings(struct net_device *netdev, u32 stringset,
switch (stringset) {
case ETH_SS_TEST:
for (i = 0; i < IXGBE_TEST_LEN; i++)
- ethtool_sprintf(&p, ixgbe_gstrings_test[i]);
+ ethtool_sprintf(&p, "%s", ixgbe_gstrings_test[i]);
break;
case ETH_SS_STATS:
for (i = 0; i < IXGBE_GLOBAL_STATS_LEN; i++)
- ethtool_sprintf(&p,
+ ethtool_sprintf(&p, "%s",
ixgbe_gstrings_stats[i].stat_string);
for (i = 0; i < netdev->num_tx_queues; i++) {
ethtool_sprintf(&p, "tx_queue_%u_packets", i);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index dd03b017dfc5..94bde2cad0f4 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2421,7 +2421,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
}
if (xdp_xmit & IXGBE_XDP_REDIR)
- xdp_do_flush_map();
+ xdp_do_flush();
if (xdp_xmit & IXGBE_XDP_TX) {
struct ixgbe_ring *ring = ixgbe_determine_xdp_ring(adapter);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index ea88ac04ab9a..9cfdfa8a4355 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -640,12 +640,11 @@ static int ixgbe_set_vf_macvlan(struct ixgbe_adapter *adapter,
int vf, int index, unsigned char *mac_addr)
{
struct vf_macvlans *entry;
- struct list_head *pos;
+ bool found = false;
int retval = 0;
if (index <= 1) {
- list_for_each(pos, &adapter->vf_mvs.l) {
- entry = list_entry(pos, struct vf_macvlans, l);
+ list_for_each_entry(entry, &adapter->vf_mvs.l, l) {
if (entry->vf == vf) {
entry->vf = -1;
entry->free = true;
@@ -663,23 +662,22 @@ static int ixgbe_set_vf_macvlan(struct ixgbe_adapter *adapter,
if (!index)
return 0;
- entry = NULL;
-
- list_for_each(pos, &adapter->vf_mvs.l) {
- entry = list_entry(pos, struct vf_macvlans, l);
- if (entry->free)
+ list_for_each_entry(entry, &adapter->vf_mvs.l, l) {
+ if (entry->free) {
+ found = true;
break;
+ }
}
/*
* If we traversed the entire list and didn't find a free entry
- * then we're out of space on the RAR table. Also entry may
- * be NULL because the original memory allocation for the list
- * failed, which is not fatal but does mean we can't support
- * VF requests for MACVLAN because we couldn't allocate
- * memory for the list management required.
+ * then we're out of space on the RAR table. It's also possible
+ * for the &adapter->vf_mvs.l list to be empty because the original
+ * memory allocation for the list failed, which is not fatal but does
+ * mean we can't support VF requests for MACVLAN because we couldn't
+ * allocate memory for the list management required.
*/
- if (!entry || !entry->free)
+ if (!found)
return -ENOSPC;
retval = ixgbe_add_mac_filter(adapter, mac_addr, vf);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
index 1703c640a434..59798bc33298 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_xsk.c
@@ -351,7 +351,7 @@ construct_skb:
}
if (xdp_xmit & IXGBE_XDP_REDIR)
- xdp_do_flush_map();
+ xdp_do_flush();
if (xdp_xmit & IXGBE_XDP_TX) {
struct ixgbe_ring *ring = ixgbe_determine_xdp_ring(adapter);
diff --git a/drivers/net/ethernet/korina.c b/drivers/net/ethernet/korina.c
index 5f6ae11212ae..81cf3361a1e5 100644
--- a/drivers/net/ethernet/korina.c
+++ b/drivers/net/ethernet/korina.c
@@ -1380,13 +1380,11 @@ static int korina_probe(struct platform_device *pdev)
return rc;
}
-static int korina_remove(struct platform_device *pdev)
+static void korina_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
unregister_netdev(dev);
-
- return 0;
}
#ifdef CONFIG_OF
@@ -1405,7 +1403,7 @@ static struct platform_driver korina_driver = {
.of_match_table = of_match_ptr(korina_match),
},
.probe = korina_probe,
- .remove = korina_remove,
+ .remove_new = korina_remove,
};
module_platform_driver(korina_driver);
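
Several platform drivers below move from .remove to .remove_new, whose callback returns void since the platform core cannot do anything useful with an error at remove time. A minimal self-contained sketch of the converted shape; the driver name and callbacks are placeholders.

#include <linux/module.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	return 0;
}

static void example_remove(struct platform_device *pdev)
{
	/* Tear down unconditionally; there is no error code to return. */
}

static struct platform_driver example_driver = {
	.probe = example_probe,
	.remove_new = example_remove,
	.driver = {
		.name = "example",
	},
};
module_platform_driver(example_driver);

MODULE_LICENSE("GPL");
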
diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c
index f5961bdcc480..1d5b7bb6380f 100644
--- a/drivers/net/ethernet/lantiq_etop.c
+++ b/drivers/net/ethernet/lantiq_etop.c
@@ -721,8 +721,7 @@ err_out:
return err;
}
-static int
-ltq_etop_remove(struct platform_device *pdev)
+static void ltq_etop_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
@@ -732,11 +731,10 @@ ltq_etop_remove(struct platform_device *pdev)
ltq_etop_mdio_cleanup(dev);
unregister_netdev(dev);
}
- return 0;
}
static struct platform_driver ltq_mii_driver = {
- .remove = ltq_etop_remove,
+ .remove_new = ltq_etop_remove,
.driver = {
.name = "ltq_etop",
},
diff --git a/drivers/net/ethernet/lantiq_xrx200.c b/drivers/net/ethernet/lantiq_xrx200.c
index 8d646c7f8c82..8bd4def3622e 100644
--- a/drivers/net/ethernet/lantiq_xrx200.c
+++ b/drivers/net/ethernet/lantiq_xrx200.c
@@ -641,7 +641,7 @@ err_uninit_dma:
return err;
}
-static int xrx200_remove(struct platform_device *pdev)
+static void xrx200_remove(struct platform_device *pdev)
{
struct xrx200_priv *priv = platform_get_drvdata(pdev);
struct net_device *net_dev = priv->net_dev;
@@ -659,8 +659,6 @@ static int xrx200_remove(struct platform_device *pdev)
/* shut down hardware */
xrx200_hw_cleanup(priv);
-
- return 0;
}
static const struct of_device_id xrx200_match[] = {
@@ -671,7 +669,7 @@ MODULE_DEVICE_TABLE(of, xrx200_match);
static struct platform_driver xrx200_driver = {
.probe = xrx200_probe,
- .remove = xrx200_remove,
+ .remove_new = xrx200_remove,
.driver = {
.name = "lantiq,xrx200-net",
.of_match_table = xrx200_match,
diff --git a/drivers/net/ethernet/litex/litex_liteeth.c b/drivers/net/ethernet/litex/litex_liteeth.c
index ffa96059079c..5182fe737c37 100644
--- a/drivers/net/ethernet/litex/litex_liteeth.c
+++ b/drivers/net/ethernet/litex/litex_liteeth.c
@@ -294,13 +294,11 @@ static int liteeth_probe(struct platform_device *pdev)
return 0;
}
-static int liteeth_remove(struct platform_device *pdev)
+static void liteeth_remove(struct platform_device *pdev)
{
struct net_device *netdev = platform_get_drvdata(pdev);
unregister_netdev(netdev);
-
- return 0;
}
static const struct of_device_id liteeth_of_match[] = {
@@ -311,7 +309,7 @@ MODULE_DEVICE_TABLE(of, liteeth_of_match);
static struct platform_driver liteeth_driver = {
.probe = liteeth_probe,
- .remove = liteeth_remove,
+ .remove_new = liteeth_remove,
.driver = {
.name = DRV_NAME,
.of_match_table = liteeth_of_match,
diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c
index 3b129a1c3381..f0bdc06d253d 100644
--- a/drivers/net/ethernet/marvell/mv643xx_eth.c
+++ b/drivers/net/ethernet/marvell/mv643xx_eth.c
@@ -2892,19 +2892,18 @@ err_put_clk:
return ret;
}
-static int mv643xx_eth_shared_remove(struct platform_device *pdev)
+static void mv643xx_eth_shared_remove(struct platform_device *pdev)
{
struct mv643xx_eth_shared_private *msp = platform_get_drvdata(pdev);
mv643xx_eth_shared_of_remove();
if (!IS_ERR(msp->clk))
clk_disable_unprepare(msp->clk);
- return 0;
}
static struct platform_driver mv643xx_eth_shared_driver = {
.probe = mv643xx_eth_shared_probe,
- .remove = mv643xx_eth_shared_remove,
+ .remove_new = mv643xx_eth_shared_remove,
.driver = {
.name = MV643XX_ETH_SHARED_NAME,
.of_match_table = of_match_ptr(mv643xx_eth_shared_ids),
@@ -3279,7 +3278,7 @@ out:
return err;
}
-static int mv643xx_eth_remove(struct platform_device *pdev)
+static void mv643xx_eth_remove(struct platform_device *pdev)
{
struct mv643xx_eth_private *mp = platform_get_drvdata(pdev);
struct net_device *dev = mp->dev;
@@ -3293,8 +3292,6 @@ static int mv643xx_eth_remove(struct platform_device *pdev)
clk_disable_unprepare(mp->clk);
free_netdev(mp->dev);
-
- return 0;
}
static void mv643xx_eth_shutdown(struct platform_device *pdev)
@@ -3311,7 +3308,7 @@ static void mv643xx_eth_shutdown(struct platform_device *pdev)
static struct platform_driver mv643xx_eth_driver = {
.probe = mv643xx_eth_probe,
- .remove = mv643xx_eth_remove,
+ .remove_new = mv643xx_eth_remove,
.shutdown = mv643xx_eth_shutdown,
.driver = {
.name = MV643XX_ETH_NAME,
diff --git a/drivers/net/ethernet/marvell/mvmdio.c b/drivers/net/ethernet/marvell/mvmdio.c
index 674913184ebf..89f26402f8fb 100644
--- a/drivers/net/ethernet/marvell/mvmdio.c
+++ b/drivers/net/ethernet/marvell/mvmdio.c
@@ -388,7 +388,7 @@ out_clk:
return ret;
}
-static int orion_mdio_remove(struct platform_device *pdev)
+static void orion_mdio_remove(struct platform_device *pdev)
{
struct mii_bus *bus = platform_get_drvdata(pdev);
struct orion_mdio_dev *dev = bus->priv;
@@ -404,8 +404,6 @@ static int orion_mdio_remove(struct platform_device *pdev)
clk_disable_unprepare(dev->clk[i]);
clk_put(dev->clk[i]);
}
-
- return 0;
}
static const struct of_device_id orion_mdio_match[] = {
@@ -426,7 +424,7 @@ MODULE_DEVICE_TABLE(acpi, orion_mdio_acpi_match);
static struct platform_driver orion_mdio_driver = {
.probe = orion_mdio_probe,
- .remove = orion_mdio_remove,
+ .remove_new = orion_mdio_remove,
.driver = {
.name = "orion-mdio",
.of_match_table = orion_mdio_match,
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index d483b8c00ec0..90817136808d 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2520,7 +2520,7 @@ next:
mvneta_xdp_put_buff(pp, rxq, &xdp_buf, -1);
if (ps.xdp_redirect)
- xdp_do_flush_map();
+ xdp_do_flush();
if (ps.rx_packets)
mvneta_update_stats(pp, &ps);
@@ -5725,7 +5725,7 @@ err_free_irq:
}
/* Device removal routine */
-static int mvneta_remove(struct platform_device *pdev)
+static void mvneta_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct mvneta_port *pp = netdev_priv(dev);
@@ -5744,8 +5744,6 @@ static int mvneta_remove(struct platform_device *pdev)
1 << pp->id);
mvneta_bm_put(pp->bm_priv);
}
-
- return 0;
}
#ifdef CONFIG_PM_SLEEP
@@ -5871,7 +5869,7 @@ MODULE_DEVICE_TABLE(of, mvneta_match);
static struct platform_driver mvneta_driver = {
.probe = mvneta_probe,
- .remove = mvneta_remove,
+ .remove_new = mvneta_remove,
.driver = {
.name = MVNETA_DRIVER_NAME,
.of_match_table = mvneta_match,
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.c b/drivers/net/ethernet/marvell/mvneta_bm.c
index 46c942ef2287..3f46a0fed048 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.c
+++ b/drivers/net/ethernet/marvell/mvneta_bm.c
@@ -457,7 +457,7 @@ err_clk:
return err;
}
-static int mvneta_bm_remove(struct platform_device *pdev)
+static void mvneta_bm_remove(struct platform_device *pdev)
{
struct mvneta_bm *priv = platform_get_drvdata(pdev);
u8 all_ports_map = 0xff;
@@ -475,8 +475,6 @@ static int mvneta_bm_remove(struct platform_device *pdev)
mvneta_bm_write(priv, MVNETA_BM_COMMAND_REG, MVNETA_BM_STOP_MASK);
clk_disable_unprepare(priv->clk);
-
- return 0;
}
static const struct of_device_id mvneta_bm_match[] = {
@@ -487,7 +485,7 @@ MODULE_DEVICE_TABLE(of, mvneta_bm_match);
static struct platform_driver mvneta_bm_driver = {
.probe = mvneta_bm_probe,
- .remove = mvneta_bm_remove,
+ .remove_new = mvneta_bm_remove,
.driver = {
.name = MVNETA_BM_DRIVER_NAME,
.of_match_table = mvneta_bm_match,
diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
index 21c3f9b015c8..93137606869e 100644
--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
@@ -4027,7 +4027,7 @@ err_drop_frame:
}
if (xdp_ret & MVPP2_XDP_REDIR)
- xdp_do_flush_map();
+ xdp_do_flush();
if (ps.rx_packets) {
struct mvpp2_pcpu_stats *stats = this_cpu_ptr(port->stats);
@@ -5831,7 +5831,7 @@ static int mvpp2_multi_queue_vectors_init(struct mvpp2_port *port,
v->type = MVPP2_QUEUE_VECTOR_SHARED;
if (port->flags & MVPP2_F_DT_COMPAT)
- strncpy(irqname, "rx-shared", sizeof(irqname));
+ strscpy(irqname, "rx-shared", sizeof(irqname));
}
if (port_node)
@@ -7662,7 +7662,7 @@ err_pp_clk:
return err;
}
-static int mvpp2_remove(struct platform_device *pdev)
+static void mvpp2_remove(struct platform_device *pdev)
{
struct mvpp2 *priv = platform_get_drvdata(pdev);
struct fwnode_handle *fwnode = pdev->dev.fwnode;
@@ -7700,15 +7700,13 @@ static int mvpp2_remove(struct platform_device *pdev)
}
if (is_acpi_node(port_fwnode))
- return 0;
+ return;
clk_disable_unprepare(priv->axi_clk);
clk_disable_unprepare(priv->mg_core_clk);
clk_disable_unprepare(priv->mg_clk);
clk_disable_unprepare(priv->pp_clk);
clk_disable_unprepare(priv->gop_clk);
-
- return 0;
}
static const struct of_device_id mvpp2_match[] = {
@@ -7734,7 +7732,7 @@ MODULE_DEVICE_TABLE(acpi, mvpp2_acpi_match);
static struct platform_driver mvpp2_driver = {
.probe = mvpp2_probe,
- .remove = mvpp2_remove,
+ .remove_new = mvpp2_remove,
.driver = {
.name = MVPP2_DRIVER_NAME,
.of_match_table = mvpp2_match,
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_cn9k_pf.c b/drivers/net/ethernet/marvell/octeon_ep/octep_cn9k_pf.c
index 90c3a419932d..d4ee2454675b 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_cn9k_pf.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_cn9k_pf.c
@@ -16,9 +16,6 @@
#define CTRL_MBOX_MAX_PF 128
#define CTRL_MBOX_SZ ((size_t)(0x400000 / CTRL_MBOX_MAX_PF))
-#define FW_HB_INTERVAL_IN_SECS 1
-#define FW_HB_MISS_COUNT 10
-
/* Names of Hardware non-queue generic interrupts */
static char *cn93_non_ioq_msix_names[] = {
"epf_ire_rint",
@@ -250,12 +247,11 @@ static void octep_init_config_cn93_pf(struct octep_device *oct)
link = PCI_DEVFN(PCI_SLOT(oct->pdev->devfn), link);
}
conf->ctrl_mbox_cfg.barmem_addr = (void __iomem *)oct->mmio[2].hw_addr +
- (0x400000ull * 7) +
+ CN93_PEM_BAR4_INDEX_OFFSET +
(link * CTRL_MBOX_SZ);
- conf->hb_interval = FW_HB_INTERVAL_IN_SECS;
- conf->max_hb_miss_cnt = FW_HB_MISS_COUNT;
-
+ conf->fw_info.hb_interval = OCTEP_DEFAULT_FW_HB_INTERVAL;
+ conf->fw_info.hb_miss_count = OCTEP_DEFAULT_FW_HB_MISS_COUNT;
}
/* Setup registers for a hardware Tx Queue */
@@ -373,34 +369,40 @@ static void octep_setup_mbox_regs_cn93_pf(struct octep_device *oct, int q_no)
mbox->mbox_read_reg = oct->mmio[0].hw_addr + CN93_SDP_R_MBOX_VF_PF_DATA(q_no);
}
-/* Process non-ioq interrupts required to keep pf interface running.
- * OEI_RINT is needed for control mailbox
- */
-static bool octep_poll_non_ioq_interrupts_cn93_pf(struct octep_device *oct)
-{
- bool handled = false;
- u64 reg0;
-
- /* Check for OEI INTR */
- reg0 = octep_read_csr64(oct, CN93_SDP_EPF_OEI_RINT);
- if (reg0) {
- dev_info(&oct->pdev->dev,
- "Received OEI_RINT intr: 0x%llx\n",
- reg0);
- octep_write_csr64(oct, CN93_SDP_EPF_OEI_RINT, reg0);
- if (reg0 & CN93_SDP_EPF_OEI_RINT_DATA_BIT_MBOX)
+/* Poll OEI events like heartbeat */
+static void octep_poll_oei_cn93_pf(struct octep_device *oct)
+{
+ u64 reg;
+
+ reg = octep_read_csr64(oct, CN93_SDP_EPF_OEI_RINT);
+ if (reg) {
+ octep_write_csr64(oct, CN93_SDP_EPF_OEI_RINT, reg);
+ if (reg & CN93_SDP_EPF_OEI_RINT_DATA_BIT_MBOX)
queue_work(octep_wq, &oct->ctrl_mbox_task);
- else if (reg0 & CN93_SDP_EPF_OEI_RINT_DATA_BIT_HBEAT)
+ else if (reg & CN93_SDP_EPF_OEI_RINT_DATA_BIT_HBEAT)
atomic_set(&oct->hb_miss_cnt, 0);
-
- handled = true;
}
+}
+
+/* OEI interrupt handler */
+static irqreturn_t octep_oei_intr_handler_cn93_pf(void *dev)
+{
+ struct octep_device *oct = (struct octep_device *)dev;
+
+ octep_poll_oei_cn93_pf(oct);
+ return IRQ_HANDLED;
+}
- return handled;
+/* Process non-ioq interrupts required to keep pf interface running.
+ * OEI_RINT is needed for control mailbox
+ */
+static void octep_poll_non_ioq_interrupts_cn93_pf(struct octep_device *oct)
+{
+ octep_poll_oei_cn93_pf(oct);
}
-/* Interrupts handler for all non-queue generic interrupts. */
-static irqreturn_t octep_non_ioq_intr_handler_cn93_pf(void *dev)
+/* Interrupt handler for input ring error interrupts. */
+static irqreturn_t octep_ire_intr_handler_cn93_pf(void *dev)
{
struct octep_device *oct = (struct octep_device *)dev;
struct pci_dev *pdev = oct->pdev;
@@ -425,8 +427,17 @@ static irqreturn_t octep_non_ioq_intr_handler_cn93_pf(void *dev)
reg_val);
}
}
- goto irq_handled;
}
+ return IRQ_HANDLED;
+}
+
+/* Interrupt handler for output ring error interrupts. */
+static irqreturn_t octep_ore_intr_handler_cn93_pf(void *dev)
+{
+ struct octep_device *oct = (struct octep_device *)dev;
+ struct pci_dev *pdev = oct->pdev;
+ u64 reg_val = 0;
+ int i = 0;
/* Check for ORERR INTR */
reg_val = octep_read_csr64(oct, CN93_SDP_EPF_ORERR_RINT);
@@ -444,9 +455,16 @@ static irqreturn_t octep_non_ioq_intr_handler_cn93_pf(void *dev)
reg_val);
}
}
-
- goto irq_handled;
}
+ return IRQ_HANDLED;
+}
+
+/* Interrupt handler for vf input ring error interrupts. */
+static irqreturn_t octep_vfire_intr_handler_cn93_pf(void *dev)
+{
+ struct octep_device *oct = (struct octep_device *)dev;
+ struct pci_dev *pdev = oct->pdev;
+ u64 reg_val = 0;
/* Check for VFIRE INTR */
reg_val = octep_read_csr64(oct, CN93_SDP_EPF_VFIRE_RINT(0));
@@ -454,8 +472,16 @@ static irqreturn_t octep_non_ioq_intr_handler_cn93_pf(void *dev)
dev_info(&pdev->dev,
"Received VFIRE_RINT intr: 0x%llx\n", reg_val);
octep_write_csr64(oct, CN93_SDP_EPF_VFIRE_RINT(0), reg_val);
- goto irq_handled;
}
+ return IRQ_HANDLED;
+}
+
+/* Interrupt handler for vf output ring error interrupts. */
+static irqreturn_t octep_vfore_intr_handler_cn93_pf(void *dev)
+{
+ struct octep_device *oct = (struct octep_device *)dev;
+ struct pci_dev *pdev = oct->pdev;
+ u64 reg_val = 0;
/* Check for VFORE INTR */
reg_val = octep_read_csr64(oct, CN93_SDP_EPF_VFORE_RINT(0));
@@ -463,19 +489,30 @@ static irqreturn_t octep_non_ioq_intr_handler_cn93_pf(void *dev)
dev_info(&pdev->dev,
"Received VFORE_RINT intr: 0x%llx\n", reg_val);
octep_write_csr64(oct, CN93_SDP_EPF_VFORE_RINT(0), reg_val);
- goto irq_handled;
}
+ return IRQ_HANDLED;
+}
- /* Check for MBOX INTR and OEI INTR */
- if (octep_poll_non_ioq_interrupts_cn93_pf(oct))
- goto irq_handled;
+/* Interrupt handler for dpi dma related interrupts. */
+static irqreturn_t octep_dma_intr_handler_cn93_pf(void *dev)
+{
+ struct octep_device *oct = (struct octep_device *)dev;
+ u64 reg_val = 0;
/* Check for DMA INTR */
reg_val = octep_read_csr64(oct, CN93_SDP_EPF_DMA_RINT);
if (reg_val) {
octep_write_csr64(oct, CN93_SDP_EPF_DMA_RINT, reg_val);
- goto irq_handled;
}
+ return IRQ_HANDLED;
+}
+
+/* Interrupt handler for dpi dma transaction error interrupts for VFs */
+static irqreturn_t octep_dma_vf_intr_handler_cn93_pf(void *dev)
+{
+ struct octep_device *oct = (struct octep_device *)dev;
+ struct pci_dev *pdev = oct->pdev;
+ u64 reg_val = 0;
/* Check for DMA VF INTR */
reg_val = octep_read_csr64(oct, CN93_SDP_EPF_DMA_VF_RINT(0));
@@ -483,8 +520,16 @@ static irqreturn_t octep_non_ioq_intr_handler_cn93_pf(void *dev)
dev_info(&pdev->dev,
"Received DMA_VF_RINT intr: 0x%llx\n", reg_val);
octep_write_csr64(oct, CN93_SDP_EPF_DMA_VF_RINT(0), reg_val);
- goto irq_handled;
}
+ return IRQ_HANDLED;
+}
+
+/* Interrupt handler for pp transaction error interrupts for VFs */
+static irqreturn_t octep_pp_vf_intr_handler_cn93_pf(void *dev)
+{
+ struct octep_device *oct = (struct octep_device *)dev;
+ struct pci_dev *pdev = oct->pdev;
+ u64 reg_val = 0;
/* Check for PPVF INTR */
reg_val = octep_read_csr64(oct, CN93_SDP_EPF_PP_VF_RINT(0));
@@ -492,8 +537,16 @@ static irqreturn_t octep_non_ioq_intr_handler_cn93_pf(void *dev)
dev_info(&pdev->dev,
"Received PP_VF_RINT intr: 0x%llx\n", reg_val);
octep_write_csr64(oct, CN93_SDP_EPF_PP_VF_RINT(0), reg_val);
- goto irq_handled;
}
+ return IRQ_HANDLED;
+}
+
+/* Interrupt handler for mac related interrupts. */
+static irqreturn_t octep_misc_intr_handler_cn93_pf(void *dev)
+{
+ struct octep_device *oct = (struct octep_device *)dev;
+ struct pci_dev *pdev = oct->pdev;
+ u64 reg_val = 0;
/* Check for MISC INTR */
reg_val = octep_read_csr64(oct, CN93_SDP_EPF_MISC_RINT);
@@ -501,11 +554,17 @@ static irqreturn_t octep_non_ioq_intr_handler_cn93_pf(void *dev)
dev_info(&pdev->dev,
"Received MISC_RINT intr: 0x%llx\n", reg_val);
octep_write_csr64(oct, CN93_SDP_EPF_MISC_RINT, reg_val);
- goto irq_handled;
}
+ return IRQ_HANDLED;
+}
+
+/* Interrupts handler for all reserved interrupts. */
+static irqreturn_t octep_rsvd_intr_handler_cn93_pf(void *dev)
+{
+ struct octep_device *oct = (struct octep_device *)dev;
+ struct pci_dev *pdev = oct->pdev;
dev_info(&pdev->dev, "Reserved interrupts raised; Ignore\n");
-irq_handled:
return IRQ_HANDLED;
}
@@ -569,8 +628,15 @@ static void octep_enable_interrupts_cn93_pf(struct octep_device *oct)
octep_write_csr64(oct, CN93_SDP_EPF_IRERR_RINT_ENA_W1S, intr_mask);
octep_write_csr64(oct, CN93_SDP_EPF_ORERR_RINT_ENA_W1S, intr_mask);
octep_write_csr64(oct, CN93_SDP_EPF_OEI_RINT_ENA_W1S, -1ULL);
+
+ octep_write_csr64(oct, CN93_SDP_EPF_VFIRE_RINT_ENA_W1S(0), -1ULL);
+ octep_write_csr64(oct, CN93_SDP_EPF_VFORE_RINT_ENA_W1S(0), -1ULL);
+
octep_write_csr64(oct, CN93_SDP_EPF_MISC_RINT_ENA_W1S, intr_mask);
octep_write_csr64(oct, CN93_SDP_EPF_DMA_RINT_ENA_W1S, intr_mask);
+
+ octep_write_csr64(oct, CN93_SDP_EPF_DMA_VF_RINT_ENA_W1S(0), -1ULL);
+ octep_write_csr64(oct, CN93_SDP_EPF_PP_VF_RINT_ENA_W1S(0), -1ULL);
}
/* Disable all interrupts */
@@ -588,8 +654,15 @@ static void octep_disable_interrupts_cn93_pf(struct octep_device *oct)
octep_write_csr64(oct, CN93_SDP_EPF_IRERR_RINT_ENA_W1C, intr_mask);
octep_write_csr64(oct, CN93_SDP_EPF_ORERR_RINT_ENA_W1C, intr_mask);
octep_write_csr64(oct, CN93_SDP_EPF_OEI_RINT_ENA_W1C, -1ULL);
+
+ octep_write_csr64(oct, CN93_SDP_EPF_VFIRE_RINT_ENA_W1C(0), -1ULL);
+ octep_write_csr64(oct, CN93_SDP_EPF_VFORE_RINT_ENA_W1C(0), -1ULL);
+
octep_write_csr64(oct, CN93_SDP_EPF_MISC_RINT_ENA_W1C, intr_mask);
octep_write_csr64(oct, CN93_SDP_EPF_DMA_RINT_ENA_W1C, intr_mask);
+
+ octep_write_csr64(oct, CN93_SDP_EPF_DMA_VF_RINT_ENA_W1C(0), -1ULL);
+ octep_write_csr64(oct, CN93_SDP_EPF_PP_VF_RINT_ENA_W1C(0), -1ULL);
}
/* Get new Octeon Read Index: index of descriptor that Octeon reads next. */
@@ -722,7 +795,16 @@ void octep_device_setup_cn93_pf(struct octep_device *oct)
oct->hw_ops.setup_oq_regs = octep_setup_oq_regs_cn93_pf;
oct->hw_ops.setup_mbox_regs = octep_setup_mbox_regs_cn93_pf;
- oct->hw_ops.non_ioq_intr_handler = octep_non_ioq_intr_handler_cn93_pf;
+ oct->hw_ops.oei_intr_handler = octep_oei_intr_handler_cn93_pf;
+ oct->hw_ops.ire_intr_handler = octep_ire_intr_handler_cn93_pf;
+ oct->hw_ops.ore_intr_handler = octep_ore_intr_handler_cn93_pf;
+ oct->hw_ops.vfire_intr_handler = octep_vfire_intr_handler_cn93_pf;
+ oct->hw_ops.vfore_intr_handler = octep_vfore_intr_handler_cn93_pf;
+ oct->hw_ops.dma_intr_handler = octep_dma_intr_handler_cn93_pf;
+ oct->hw_ops.dma_vf_intr_handler = octep_dma_vf_intr_handler_cn93_pf;
+ oct->hw_ops.pp_vf_intr_handler = octep_pp_vf_intr_handler_cn93_pf;
+ oct->hw_ops.misc_intr_handler = octep_misc_intr_handler_cn93_pf;
+ oct->hw_ops.rsvd_intr_handler = octep_rsvd_intr_handler_cn93_pf;
oct->hw_ops.ioq_intr_handler = octep_ioq_intr_handler_cn93_pf;
oct->hw_ops.soft_reset = octep_soft_reset_cn93_pf;
oct->hw_ops.reinit_regs = octep_reinit_regs_cn93_pf;
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_config.h b/drivers/net/ethernet/marvell/octeon_ep/octep_config.h
index df7cd39d9fce..1622a6ebf036 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_config.h
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_config.h
@@ -49,6 +49,11 @@
/* Default MTU */
#define OCTEP_DEFAULT_MTU 1500
+/* pf heartbeat interval in milliseconds */
+#define OCTEP_DEFAULT_FW_HB_INTERVAL 1000
+/* pf heartbeat miss count */
+#define OCTEP_DEFAULT_FW_HB_MISS_COUNT 20
+
/* Macros to get octeon config params */
#define CFG_GET_IQ_CFG(cfg) ((cfg)->iq)
#define CFG_GET_IQ_NUM_DESC(cfg) ((cfg)->iq.num_descs)
@@ -181,6 +186,16 @@ struct octep_ctrl_mbox_config {
void __iomem *barmem_addr;
};
+/* Info from firmware */
+struct octep_fw_info {
+ /* interface pkind */
+ u16 pkind;
+ /* heartbeat interval in milliseconds */
+ u16 hb_interval;
+ /* heartbeat miss count */
+ u16 hb_miss_count;
+};
+
/* Data Structure to hold configuration limits and active config */
struct octep_config {
/* Input Queue attributes. */
@@ -201,10 +216,7 @@ struct octep_config {
/* ctrl mbox config */
struct octep_ctrl_mbox_config ctrl_mbox_cfg;
- /* Configured maximum heartbeat miss count */
- u32 max_hb_miss_cnt;
-
- /* Configured firmware heartbeat interval in secs */
- u32 hb_interval;
+ /* fw info */
+ struct octep_fw_info fw_info;
};
#endif /* _OCTEP_CONFIG_H_ */
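
With hb_interval now carried in milliseconds rather than seconds, the heartbeat work can be rescheduled without a * 1000 conversion. A sketch under that assumption; octep_main.h definitions (octep_wq, struct octep_device) are taken as given and the function name is a placeholder.

static void example_resched_heartbeat(struct octep_device *oct)
{
	/* fw_info.hb_interval is in milliseconds, per the define above. */
	queue_delayed_work(octep_wq, &oct->hb_task,
			   msecs_to_jiffies(oct->conf->fw_info.hb_interval));
}
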
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c b/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c
index 17bfd5cdf462..0594607a2585 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.c
@@ -26,7 +26,7 @@ static atomic_t ctrl_net_msg_id;
/* Control plane version in which OCTEP_CTRL_NET_H2F_CMD was added */
static const u32 octep_ctrl_net_h2f_cmd_versions[OCTEP_CTRL_NET_H2F_CMD_MAX] = {
- [OCTEP_CTRL_NET_H2F_CMD_INVALID ... OCTEP_CTRL_NET_H2F_CMD_LINK_INFO] =
+ [OCTEP_CTRL_NET_H2F_CMD_INVALID ... OCTEP_CTRL_NET_H2F_CMD_GET_INFO] =
OCTEP_CP_VERSION(1, 0, 0)
};
@@ -353,6 +353,28 @@ void octep_ctrl_net_recv_fw_messages(struct octep_device *oct)
}
}
+int octep_ctrl_net_get_info(struct octep_device *oct, int vfid,
+ struct octep_fw_info *info)
+{
+ struct octep_ctrl_net_wait_data d = {0};
+ struct octep_ctrl_net_h2f_resp *resp;
+ struct octep_ctrl_net_h2f_req *req;
+ int err;
+
+ req = &d.data.req;
+ init_send_req(&d.msg, req, 0, vfid);
+ req->hdr.s.cmd = OCTEP_CTRL_NET_H2F_CMD_GET_INFO;
+ req->link_info.cmd = OCTEP_CTRL_NET_CMD_GET;
+ err = octep_send_mbox_req(oct, &d, true);
+ if (err < 0)
+ return err;
+
+ resp = &d.data.resp;
+ memcpy(info, &resp->info.fw_info, sizeof(struct octep_fw_info));
+
+ return 0;
+}
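
A sketch of how a PF init path might use the new helper to populate oct->conf->fw_info; the caller name, error handling and the vfid value passed here are assumptions (this hunk does not show the PF-side vfid convention).

static int example_fetch_fw_info(struct octep_device *oct)
{
	int err;

	err = octep_ctrl_net_get_info(oct, 0, &oct->conf->fw_info);
	if (err < 0)
		dev_err(&oct->pdev->dev, "Failed to get fw info: %d\n", err);

	return err;
}
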
+
int octep_ctrl_net_uninit(struct octep_device *oct)
{
struct octep_ctrl_net_wait_data *pos, *n;
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.h b/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.h
index 1c2ef4ee31d9..b330f370131b 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.h
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_ctrl_net.h
@@ -41,6 +41,7 @@ enum octep_ctrl_net_h2f_cmd {
OCTEP_CTRL_NET_H2F_CMD_LINK_STATUS,
OCTEP_CTRL_NET_H2F_CMD_RX_STATE,
OCTEP_CTRL_NET_H2F_CMD_LINK_INFO,
+ OCTEP_CTRL_NET_H2F_CMD_GET_INFO,
OCTEP_CTRL_NET_H2F_CMD_MAX
};
@@ -161,6 +162,11 @@ struct octep_ctrl_net_h2f_resp_cmd_state {
u16 state;
};
+/* get info response */
+struct octep_ctrl_net_h2f_resp_cmd_get_info {
+ struct octep_fw_info fw_info;
+};
+
/* Host to fw response data */
struct octep_ctrl_net_h2f_resp {
union octep_ctrl_net_resp_hdr hdr;
@@ -171,6 +177,7 @@ struct octep_ctrl_net_h2f_resp {
struct octep_ctrl_net_h2f_resp_cmd_state link;
struct octep_ctrl_net_h2f_resp_cmd_state rx;
struct octep_ctrl_net_link_info link_info;
+ struct octep_ctrl_net_h2f_resp_cmd_get_info info;
};
} __packed;
@@ -330,6 +337,17 @@ int octep_ctrl_net_set_link_info(struct octep_device *oct,
*/
void octep_ctrl_net_recv_fw_messages(struct octep_device *oct);
+/** Get info from firmware.
+ *
+ * @param oct: non-null pointer to struct octep_device.
+ * @param vfid: Index of virtual function.
+ * @param info: non-null pointer to struct octep_fw_info.
+ *
+ * return value: 0 on success, -errno on failure.
+ */
+int octep_ctrl_net_get_info(struct octep_device *oct, int vfid,
+ struct octep_fw_info *info);
+
/** Uninitialize data for ctrl net.
*
* @param oct: non-null pointer to struct octep_device.
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
index 5b46ca47c8e5..552970c7dec0 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
@@ -155,18 +155,153 @@ static void octep_disable_msix(struct octep_device *oct)
}
/**
- * octep_non_ioq_intr_handler() - common handler for all generic interrupts.
+ * octep_oei_intr_handler() - common handler for output endpoint interrupts.
*
* @irq: Interrupt number.
* @data: interrupt data.
*
- * this is common handler for all non-queue (generic) interrupts.
+ * this is common handler for all output endpoint interrupts.
+ */
+static irqreturn_t octep_oei_intr_handler(int irq, void *data)
+{
+ struct octep_device *oct = data;
+
+ return oct->hw_ops.oei_intr_handler(oct);
+}
+
+/**
+ * octep_ire_intr_handler() - common handler for input ring error interrupts.
+ *
+ * @irq: Interrupt number.
+ * @data: interrupt data.
+ *
+ * this is common handler for input ring error interrupts.
+ */
+static irqreturn_t octep_ire_intr_handler(int irq, void *data)
+{
+ struct octep_device *oct = data;
+
+ return oct->hw_ops.ire_intr_handler(oct);
+}
+
+/**
+ * octep_ore_intr_handler() - common handler for output ring error interrupts.
+ *
+ * @irq: Interrupt number.
+ * @data: interrupt data.
+ *
+ * this is common handler for output ring error interrupts.
+ */
+static irqreturn_t octep_ore_intr_handler(int irq, void *data)
+{
+ struct octep_device *oct = data;
+
+ return oct->hw_ops.ore_intr_handler(oct);
+}
+
+/**
+ * octep_vfire_intr_handler() - common handler for vf input ring error interrupts.
+ *
+ * @irq: Interrupt number.
+ * @data: interrupt data.
+ *
+ * this is common handler for vf input ring error interrupts.
+ */
+static irqreturn_t octep_vfire_intr_handler(int irq, void *data)
+{
+ struct octep_device *oct = data;
+
+ return oct->hw_ops.vfire_intr_handler(oct);
+}
+
+/**
+ * octep_vfore_intr_handler() - common handler for vf output ring error interrupts.
+ *
+ * @irq: Interrupt number.
+ * @data: interrupt data.
+ *
+ * this is common handler for vf output ring error interrupts.
+ */
+static irqreturn_t octep_vfore_intr_handler(int irq, void *data)
+{
+ struct octep_device *oct = data;
+
+ return oct->hw_ops.vfore_intr_handler(oct);
+}
+
+/**
+ * octep_dma_intr_handler() - common handler for dpi dma related interrupts.
+ *
+ * @irq: Interrupt number.
+ * @data: interrupt data.
+ *
+ * this is common handler for dpi dma related interrupts.
+ */
+static irqreturn_t octep_dma_intr_handler(int irq, void *data)
+{
+ struct octep_device *oct = data;
+
+ return oct->hw_ops.dma_intr_handler(oct);
+}
+
+/**
+ * octep_dma_vf_intr_handler() - common handler for dpi dma transaction error interrupts for VFs.
+ *
+ * @irq: Interrupt number.
+ * @data: interrupt data.
+ *
+ * this is common handler for dpi dma transaction error interrupts for VFs.
*/
-static irqreturn_t octep_non_ioq_intr_handler(int irq, void *data)
+static irqreturn_t octep_dma_vf_intr_handler(int irq, void *data)
{
struct octep_device *oct = data;
- return oct->hw_ops.non_ioq_intr_handler(oct);
+ return oct->hw_ops.dma_vf_intr_handler(oct);
+}
+
+/**
+ * octep_pp_vf_intr_handler() - common handler for pp transaction error interrupts for VFs.
+ *
+ * @irq: Interrupt number.
+ * @data: interrupt data.
+ *
+ * this is common handler for pp transaction error interrupts for VFs.
+ */
+static irqreturn_t octep_pp_vf_intr_handler(int irq, void *data)
+{
+ struct octep_device *oct = data;
+
+ return oct->hw_ops.pp_vf_intr_handler(oct);
+}
+
+/**
+ * octep_misc_intr_handler() - common handler for mac related interrupts.
+ *
+ * @irq: Interrupt number.
+ * @data: interrupt data.
+ *
+ * this is common handler for mac related interrupts.
+ */
+static irqreturn_t octep_misc_intr_handler(int irq, void *data)
+{
+ struct octep_device *oct = data;
+
+ return oct->hw_ops.misc_intr_handler(oct);
+}
+
+/**
+ * octep_rsvd_intr_handler() - common handler for reserved interrupts (future use).
+ *
+ * @irq: Interrupt number.
+ * @data: interrupt data.
+ *
+ * this is common handler for all reserved interrupts.
+ */
+static irqreturn_t octep_rsvd_intr_handler(int irq, void *data)
+{
+ struct octep_device *oct = data;
+
+ return oct->hw_ops.rsvd_intr_handler(oct);
}
/**
@@ -222,9 +357,57 @@ static int octep_request_irqs(struct octep_device *oct)
snprintf(irq_name, OCTEP_MSIX_NAME_SIZE,
"%s-%s", netdev->name, non_ioq_msix_names[i]);
- ret = request_irq(msix_entry->vector,
- octep_non_ioq_intr_handler, 0,
- irq_name, oct);
+ if (!strncmp(non_ioq_msix_names[i], "epf_oei_rint",
+ strlen("epf_oei_rint"))) {
+ ret = request_irq(msix_entry->vector,
+ octep_oei_intr_handler, 0,
+ irq_name, oct);
+ } else if (!strncmp(non_ioq_msix_names[i], "epf_ire_rint",
+ strlen("epf_ire_rint"))) {
+ ret = request_irq(msix_entry->vector,
+ octep_ire_intr_handler, 0,
+ irq_name, oct);
+ } else if (!strncmp(non_ioq_msix_names[i], "epf_ore_rint",
+ strlen("epf_ore_rint"))) {
+ ret = request_irq(msix_entry->vector,
+ octep_ore_intr_handler, 0,
+ irq_name, oct);
+ } else if (!strncmp(non_ioq_msix_names[i], "epf_vfire_rint",
+ strlen("epf_vfire_rint"))) {
+ ret = request_irq(msix_entry->vector,
+ octep_vfire_intr_handler, 0,
+ irq_name, oct);
+ } else if (!strncmp(non_ioq_msix_names[i], "epf_vfore_rint",
+ strlen("epf_vfore_rint"))) {
+ ret = request_irq(msix_entry->vector,
+ octep_vfore_intr_handler, 0,
+ irq_name, oct);
+ } else if (!strncmp(non_ioq_msix_names[i], "epf_dma_rint",
+ strlen("epf_dma_rint"))) {
+ ret = request_irq(msix_entry->vector,
+ octep_dma_intr_handler, 0,
+ irq_name, oct);
+ } else if (!strncmp(non_ioq_msix_names[i], "epf_dma_vf_rint",
+ strlen("epf_dma_vf_rint"))) {
+ ret = request_irq(msix_entry->vector,
+ octep_dma_vf_intr_handler, 0,
+ irq_name, oct);
+ } else if (!strncmp(non_ioq_msix_names[i], "epf_pp_vf_rint",
+ strlen("epf_pp_vf_rint"))) {
+ ret = request_irq(msix_entry->vector,
+ octep_pp_vf_intr_handler, 0,
+ irq_name, oct);
+ } else if (!strncmp(non_ioq_msix_names[i], "epf_misc_rint",
+ strlen("epf_misc_rint"))) {
+ ret = request_irq(msix_entry->vector,
+ octep_misc_intr_handler, 0,
+ irq_name, oct);
+ } else {
+ ret = request_irq(msix_entry->vector,
+ octep_rsvd_intr_handler, 0,
+ irq_name, oct);
+ }
+
if (ret) {
netdev_err(netdev,
"request_irq failed for %s; err=%d",
@@ -917,9 +1100,9 @@ static void octep_hb_timeout_task(struct work_struct *work)
int miss_cnt;
miss_cnt = atomic_inc_return(&oct->hb_miss_cnt);
- if (miss_cnt < oct->conf->max_hb_miss_cnt) {
+ if (miss_cnt < oct->conf->fw_info.hb_miss_count) {
queue_delayed_work(octep_wq, &oct->hb_task,
- msecs_to_jiffies(oct->conf->hb_interval * 1000));
+ msecs_to_jiffies(oct->conf->fw_info.hb_interval));
return;
}
@@ -1012,8 +1195,7 @@ int octep_device_setup(struct octep_device *oct)
atomic_set(&oct->hb_miss_cnt, 0);
INIT_DELAYED_WORK(&oct->hb_task, octep_hb_timeout_task);
- queue_delayed_work(octep_wq, &oct->hb_task,
- msecs_to_jiffies(oct->conf->hb_interval * 1000));
+
return 0;
unsupported_dev:
@@ -1142,6 +1324,15 @@ static int octep_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
dev_err(&pdev->dev, "Device setup failed\n");
goto err_octep_config;
}
+
+ octep_ctrl_net_get_info(octep_dev, OCTEP_CTRL_NET_INVALID_VFID,
+ &octep_dev->conf->fw_info);
+ dev_info(&octep_dev->pdev->dev, "Heartbeat interval %u msecs Heartbeat miss count %u\n",
+ octep_dev->conf->fw_info.hb_interval,
+ octep_dev->conf->fw_info.hb_miss_count);
+ queue_delayed_work(octep_wq, &octep_dev->hb_task,
+ msecs_to_jiffies(octep_dev->conf->fw_info.hb_interval));
+
INIT_WORK(&octep_dev->tx_timeout_task, octep_tx_timeout_task);
INIT_WORK(&octep_dev->ctrl_mbox_task, octep_ctrl_mbox_task);
INIT_DELAYED_WORK(&octep_dev->intr_poll_task, octep_intr_poll_task);
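The request_irq() hunk above routes each non-IOQ MSI-X vector to its own handler through a chain of strncmp() comparisons on the vector name. A lookup table is a common alternative for this kind of name-to-handler dispatch; the standalone sketch below uses hypothetical names and handlers (not driver symbols) to show the shape of that approach.

#include <stdio.h>
#include <string.h>

/* Illustrative only: map a non-IOQ MSI-X name prefix to a handler with a
 * lookup table instead of an if/else strncmp chain. Names and handlers
 * here are hypothetical stand-ins, not driver symbols.
 */
typedef void (*intr_handler_t)(void *data);

static void oei_handler(void *data)  { (void)data; puts("oei");  }
static void ire_handler(void *data)  { (void)data; puts("ire");  }
static void rsvd_handler(void *data) { (void)data; puts("rsvd"); }

static const struct {
	const char *prefix;
	intr_handler_t handler;
} non_ioq_intr_map[] = {
	{ "epf_oei_rint", oei_handler },
	{ "epf_ire_rint", ire_handler },
};

static intr_handler_t lookup_handler(const char *name)
{
	size_t i;

	for (i = 0; i < sizeof(non_ioq_intr_map) / sizeof(non_ioq_intr_map[0]); i++)
		if (!strncmp(name, non_ioq_intr_map[i].prefix,
			     strlen(non_ioq_intr_map[i].prefix)))
			return non_ioq_intr_map[i].handler;
	return rsvd_handler;	/* unrecognized names fall back to the reserved handler */
}

int main(void)
{
	lookup_handler("epf_ire_rint")(NULL);	/* prints "ire" */
	lookup_handler("epf_foo_rint")(NULL);	/* prints "rsvd" */
	return 0;
}

Either form ends in one request_irq() call per vector; the table merely keeps the name-to-handler mapping data-driven.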
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.h b/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
index e0907a719133..6df902ebb7f3 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
@@ -65,7 +65,16 @@ struct octep_hw_ops {
void (*setup_oq_regs)(struct octep_device *oct, int q);
void (*setup_mbox_regs)(struct octep_device *oct, int mbox);
- irqreturn_t (*non_ioq_intr_handler)(void *ioq_vector);
+ irqreturn_t (*oei_intr_handler)(void *ioq_vector);
+ irqreturn_t (*ire_intr_handler)(void *ioq_vector);
+ irqreturn_t (*ore_intr_handler)(void *ioq_vector);
+ irqreturn_t (*vfire_intr_handler)(void *ioq_vector);
+ irqreturn_t (*vfore_intr_handler)(void *ioq_vector);
+ irqreturn_t (*dma_intr_handler)(void *ioq_vector);
+ irqreturn_t (*dma_vf_intr_handler)(void *ioq_vector);
+ irqreturn_t (*pp_vf_intr_handler)(void *ioq_vector);
+ irqreturn_t (*misc_intr_handler)(void *ioq_vector);
+ irqreturn_t (*rsvd_intr_handler)(void *ioq_vector);
irqreturn_t (*ioq_intr_handler)(void *ioq_vector);
int (*soft_reset)(struct octep_device *oct);
void (*reinit_regs)(struct octep_device *oct);
@@ -73,7 +82,7 @@ struct octep_hw_ops {
void (*enable_interrupts)(struct octep_device *oct);
void (*disable_interrupts)(struct octep_device *oct);
- bool (*poll_non_ioq_interrupts)(struct octep_device *oct);
+ void (*poll_non_ioq_interrupts)(struct octep_device *oct);
void (*enable_io_queues)(struct octep_device *oct);
void (*disable_io_queues)(struct octep_device *oct);
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h b/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h
index b25c3093dc7b..0a43983e9101 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h
@@ -370,4 +370,8 @@
/* bit 1 for firmware heartbeat interrupt */
#define CN93_SDP_EPF_OEI_RINT_DATA_BIT_HBEAT BIT_ULL(1)
+#define CN93_PEM_BAR4_INDEX 7
+#define CN93_PEM_BAR4_INDEX_SIZE 0x400000ULL
+#define CN93_PEM_BAR4_INDEX_OFFSET (CN93_PEM_BAR4_INDEX * CN93_PEM_BAR4_INDEX_SIZE)
+
#endif /* _OCTEP_REGS_CN9K_PF_H_ */
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h
index 782a24f27f3e..49feae80d7d2 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h
@@ -20,6 +20,7 @@ struct octep_oq_desc_hw {
dma_addr_t buffer_ptr;
u64 info_ptr;
};
+static_assert(sizeof(struct octep_oq_desc_hw) == 16);
#define OCTEP_OQ_DESC_SIZE (sizeof(struct octep_oq_desc_hw))
@@ -39,6 +40,7 @@ struct octep_oq_resp_hw_ext {
/* checksum verified. */
u64 csum_verified:2;
};
+static_assert(sizeof(struct octep_oq_resp_hw_ext) == 8);
#define OCTEP_OQ_RESP_HW_EXT_SIZE (sizeof(struct octep_oq_resp_hw_ext))
@@ -50,6 +52,7 @@ struct octep_oq_resp_hw {
/* The Length of the packet. */
__be64 length;
};
+static_assert(sizeof(struct octep_oq_resp_hw) == 8);
#define OCTEP_OQ_RESP_HW_SIZE (sizeof(struct octep_oq_resp_hw))
diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
index 21e75ff9f5e7..86c98b13fc44 100644
--- a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
+++ b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
@@ -36,6 +36,7 @@ struct octep_tx_sglist_desc {
u16 len[4];
dma_addr_t dma_ptr[4];
};
+static_assert(sizeof(struct octep_tx_sglist_desc) == 40);
/* Each Scatter/Gather entry sent to hardware holds four pointers.
* So, number of entries required is (MAX_SKB_FRAGS + 1)/4, where '+1'
@@ -239,6 +240,7 @@ struct octep_instr_hdr {
/* Reserved3 */
u64 reserved3:1;
};
+static_assert(sizeof(struct octep_instr_hdr) == 8);
/* Hardware Tx completion response header */
struct octep_instr_resp_hdr {
@@ -263,6 +265,7 @@ struct octep_instr_resp_hdr {
/* Opcode for the return packet */
u64 opcode:16;
};
+static_assert(sizeof(struct octep_instr_resp_hdr) == 8);
/* 64-byte Tx instruction format.
* Format of instruction for a 64-byte mode input queue.
@@ -293,6 +296,7 @@ struct octep_tx_desc_hw {
/* Additional headers available in a 64-byte instruction. */
u64 exhdr[4];
};
+static_assert(sizeof(struct octep_tx_desc_hw) == 64);
#define OCTEP_IQ_DESC_SIZE (sizeof(struct octep_tx_desc_hw))
#endif /* _OCTEP_TX_H_ */
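The static_assert() lines added to octep_rx.h and octep_tx.h pin the size of hardware-facing descriptor layouts at compile time, so an accidental field or padding change breaks the build instead of silently corrupting DMA descriptors. A minimal standalone sketch of the same guard, using a hypothetical 8-byte header and assuming GCC/Clang bit-field packing:

#include <assert.h>	/* static_assert (C11) */
#include <stdint.h>

/* Hypothetical 8-byte hardware header: 48-bit address + 16-bit length. */
struct demo_hw_hdr {
	uint64_t addr:48;
	uint64_t len:16;
};
static_assert(sizeof(struct demo_hw_hdr) == 8,
	      "demo_hw_hdr must stay 8 bytes to match the hardware layout");

int main(void) { return 0; }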
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
index e06f77ad6106..6c70c8498690 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
@@ -1218,8 +1218,6 @@ static inline void link_status_user_format(u64 lstat,
struct cgx_link_user_info *linfo,
struct cgx *cgx, u8 lmac_id)
{
- const char *lmac_string;
-
linfo->link_up = FIELD_GET(RESP_LINKSTAT_UP, lstat);
linfo->full_duplex = FIELD_GET(RESP_LINKSTAT_FDUPLEX, lstat);
linfo->speed = cgx_speed_mbps[FIELD_GET(RESP_LINKSTAT_SPEED, lstat)];
@@ -1230,12 +1228,12 @@ static inline void link_status_user_format(u64 lstat,
if (linfo->lmac_type_id >= LMAC_MODE_MAX) {
dev_err(&cgx->pdev->dev, "Unknown lmac_type_id %d reported by firmware on cgx port%d:%d",
linfo->lmac_type_id, cgx->cgx_id, lmac_id);
- strncpy(linfo->lmac_type, "Unknown", LMACTYPE_STR_LEN - 1);
+ strscpy(linfo->lmac_type, "Unknown", sizeof(linfo->lmac_type));
return;
}
- lmac_string = cgx_lmactype_string[linfo->lmac_type_id];
- strncpy(linfo->lmac_type, lmac_string, LMACTYPE_STR_LEN - 1);
+ strscpy(linfo->lmac_type, cgx_lmactype_string[linfo->lmac_type_id],
+ sizeof(linfo->lmac_type));
}
/* Hardware event handlers */
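The cgx.c hunk swaps strncpy() for strscpy() because strncpy() does not NUL-terminate the destination when the source fills the buffer, while strscpy() always terminates and reports truncation. strscpy() is kernel-only; the user-space sketch below approximates the same guarantee with snprintf() to show the difference:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char dst[8];

	/* strncpy() does not NUL-terminate when the source fills the buffer. */
	strncpy(dst, "12345678-too-long", sizeof(dst));
	/* dst now holds "12345678" with no terminator; reading it as a string
	 * would be undefined, which is the bug class strscpy() removes.
	 */

	/* A bounded copy that always terminates, approximating strscpy(): */
	snprintf(dst, sizeof(dst), "%s", "12345678-too-long");
	printf("%s\n", dst);	/* "1234567" - truncated but terminated */
	return 0;
}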
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index 6b5b06c2b4e9..6845556581c3 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -1473,6 +1473,12 @@ struct flow_msg {
u8 next_header;
};
__be16 vlan_itci;
+#define OTX2_FLOWER_MASK_MPLS_LB GENMASK(31, 12)
+#define OTX2_FLOWER_MASK_MPLS_TC GENMASK(11, 9)
+#define OTX2_FLOWER_MASK_MPLS_BOS BIT(8)
+#define OTX2_FLOWER_MASK_MPLS_TTL GENMASK(7, 0)
+#define OTX2_FLOWER_MASK_MPLS_NON_TTL GENMASK(31, 8)
+ u32 mpls_lse[4];
};
struct npc_install_flow_req {
@@ -1574,7 +1580,7 @@ enum ptp_op {
PTP_OP_GET_CLOCK = 1,
PTP_OP_GET_TSTMP = 2,
PTP_OP_SET_THRESH = 3,
- PTP_OP_EXTTS_ON = 4,
+ PTP_OP_PPS_ON = 4,
PTP_OP_ADJTIME = 5,
PTP_OP_SET_CLOCK = 6,
};
@@ -1584,7 +1590,8 @@ struct ptp_req {
u8 op;
s64 scaled_ppm;
u64 thresh;
- int extts_on;
+ u64 period;
+ int pps_on;
s64 delta;
u64 clk;
};
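The OTX2_FLOWER_MASK_MPLS_* masks added above encode the standard MPLS label stack entry layout: Label in bits 31:12, TC in bits 11:9, bottom-of-stack in bit 8 and TTL in bits 7:0. A small user-space sketch of that decomposition, with plain shifts standing in for the kernel's FIELD_GET()/GENMASK() helpers (linux/bitfield.h, linux/bits.h):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t lse = 0x000c81fe;	/* example: label 200, TC 0, BoS 1, TTL 254 */

	printf("label %u tc %u bos %u ttl %u\n",
	       (lse >> 12) & 0xfffff,	/* bits 31:12 */
	       (lse >> 9) & 0x7,	/* bits 11:9  */
	       (lse >> 8) & 0x1,	/* bit  8     */
	       lse & 0xff);		/* bits 7:0   */
	return 0;
}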
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/npc.h b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
index de9fbd98dfb7..ab3e39eef2eb 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/npc.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/npc.h
@@ -206,6 +206,14 @@ enum key_fields {
NPC_SPORT_SCTP,
NPC_DPORT_SCTP,
NPC_IPSEC_SPI,
+ NPC_MPLS1_LBTCBOS,
+ NPC_MPLS1_TTL,
+ NPC_MPLS2_LBTCBOS,
+ NPC_MPLS2_TTL,
+ NPC_MPLS3_LBTCBOS,
+ NPC_MPLS3_TTL,
+ NPC_MPLS4_LBTCBOS,
+ NPC_MPLS4_TTL,
NPC_HEADER_FIELDS_MAX,
NPC_CHAN = NPC_HEADER_FIELDS_MAX, /* Valid when Rx */
NPC_PF_FUNC, /* Valid when Tx */
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/ptp.c b/drivers/net/ethernet/marvell/octeontx2/af/ptp.c
index ffbd22797163..bcc96eed2481 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/ptp.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/ptp.c
@@ -46,6 +46,7 @@
#define PTP_PPS_HI_INCR 0xF60ULL
#define PTP_PPS_LO_INCR 0xF68ULL
+#define PTP_PPS_THRESH_LO 0xF50ULL
#define PTP_PPS_THRESH_HI 0xF58ULL
#define PTP_CLOCK_LO 0xF08ULL
@@ -411,29 +412,12 @@ void ptp_start(struct rvu *rvu, u64 sclk, u32 ext_clk_freq, u32 extts)
}
clock_cfg |= PTP_CLOCK_CFG_PTP_EN;
- clock_cfg |= PTP_CLOCK_CFG_PPS_EN | PTP_CLOCK_CFG_PPS_INV;
writeq(clock_cfg, ptp->reg_base + PTP_CLOCK_CFG);
clock_cfg = readq(ptp->reg_base + PTP_CLOCK_CFG);
clock_cfg &= ~PTP_CLOCK_CFG_ATOMIC_OP_MASK;
clock_cfg |= (ATOMIC_SET << 26);
writeq(clock_cfg, ptp->reg_base + PTP_CLOCK_CFG);
- /* Set 50% duty cycle for 1Hz output */
- writeq(0x1dcd650000000000, ptp->reg_base + PTP_PPS_HI_INCR);
- writeq(0x1dcd650000000000, ptp->reg_base + PTP_PPS_LO_INCR);
- if (cn10k_ptp_errata(ptp)) {
- /* The ptp_clock_hi rollsover to zero once clock cycle before it
- * reaches one second boundary. so, program the pps_lo_incr in
- * such a way that the pps threshold value comparison at one
- * second boundary will succeed and pps edge changes. After each
- * one second boundary, the hrtimer handler will be invoked and
- * reprograms the pps threshold value.
- */
- ptp->clock_period = NSEC_PER_SEC / ptp->clock_rate;
- writeq((0x1dcd6500ULL - ptp->clock_period) << 32,
- ptp->reg_base + PTP_PPS_LO_INCR);
- }
-
if (cn10k_ptp_errata(ptp))
clock_comp = ptp_calc_adjusted_comp(ptp->clock_rate);
else
@@ -465,20 +449,68 @@ static int ptp_set_thresh(struct ptp *ptp, u64 thresh)
return 0;
}
-static int ptp_extts_on(struct ptp *ptp, int on)
+static int ptp_config_hrtimer(struct ptp *ptp, int on)
{
u64 ptp_clock_hi;
- if (cn10k_ptp_errata(ptp)) {
- if (on) {
- ptp_clock_hi = readq(ptp->reg_base + PTP_CLOCK_HI);
- ptp_hrtimer_start(ptp, (ktime_t)ptp_clock_hi);
- } else {
- if (hrtimer_active(&ptp->hrtimer))
- hrtimer_cancel(&ptp->hrtimer);
+ if (on) {
+ ptp_clock_hi = readq(ptp->reg_base + PTP_CLOCK_HI);
+ ptp_hrtimer_start(ptp, (ktime_t)ptp_clock_hi);
+ } else {
+ if (hrtimer_active(&ptp->hrtimer))
+ hrtimer_cancel(&ptp->hrtimer);
+ }
+
+ return 0;
+}
+
+static int ptp_pps_on(struct ptp *ptp, int on, u64 period)
+{
+ u64 clock_cfg;
+
+ clock_cfg = readq(ptp->reg_base + PTP_CLOCK_CFG);
+ if (on) {
+ if (cn10k_ptp_errata(ptp) && period != NSEC_PER_SEC) {
+ dev_err(&ptp->pdev->dev, "Supports max period value as 1 second\n");
+ return -EINVAL;
}
+
+ if (period > (8 * NSEC_PER_SEC)) {
+ dev_err(&ptp->pdev->dev, "Supports max period as 8 seconds\n");
+ return -EINVAL;
+ }
+
+ clock_cfg |= PTP_CLOCK_CFG_PPS_EN | PTP_CLOCK_CFG_PPS_INV;
+ writeq(clock_cfg, ptp->reg_base + PTP_CLOCK_CFG);
+
+ writeq(0, ptp->reg_base + PTP_PPS_THRESH_HI);
+ writeq(0, ptp->reg_base + PTP_PPS_THRESH_LO);
+
+ /* Configure high/low phase time */
+ period = period / 2;
+ writeq(((u64)period << 32), ptp->reg_base + PTP_PPS_HI_INCR);
+ writeq(((u64)period << 32), ptp->reg_base + PTP_PPS_LO_INCR);
+ } else {
+ clock_cfg &= ~(PTP_CLOCK_CFG_PPS_EN | PTP_CLOCK_CFG_PPS_INV);
+ writeq(clock_cfg, ptp->reg_base + PTP_CLOCK_CFG);
}
+ if (on && cn10k_ptp_errata(ptp)) {
+ /* The ptp_clock_hi rolls over to zero one clock cycle before it
+ * reaches the one second boundary. So, program the pps_lo_incr in
+ * such a way that the pps threshold value comparison at one
+ * second boundary will succeed and pps edge changes. After each
+ * one second boundary, the hrtimer handler will be invoked and
+ * reprograms the pps threshold value.
+ */
+ ptp->clock_period = NSEC_PER_SEC / ptp->clock_rate;
+ writeq((0x1dcd6500ULL - ptp->clock_period) << 32,
+ ptp->reg_base + PTP_PPS_LO_INCR);
+ }
+
+ if (cn10k_ptp_errata(ptp))
+ ptp_config_hrtimer(ptp, on);
+
return 0;
}
@@ -613,8 +645,8 @@ int rvu_mbox_handler_ptp_op(struct rvu *rvu, struct ptp_req *req,
case PTP_OP_SET_THRESH:
err = ptp_set_thresh(rvu->ptp, req->thresh);
break;
- case PTP_OP_EXTTS_ON:
- err = ptp_extts_on(rvu->ptp, req->extts_on);
+ case PTP_OP_PPS_ON:
+ err = ptp_pps_on(rvu->ptp, req->pps_on, req->period);
break;
case PTP_OP_ADJTIME:
ptp_atomic_adjtime(rvu->ptp, req->delta);
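In the ptp_pps_on() hunk above, the requested period is halved and written into the upper 32 bits of the PPS HI/LO increment registers to get a 50% duty cycle. A simplified sketch of that computation (the cn10k errata adjustment of PTP_PPS_LO_INCR is omitted); for a one second period it reproduces the 0x1dcd650000000000 constant that ptp_start() used to write unconditionally:

#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

int main(void)
{
	uint64_t period = NSEC_PER_SEC;		/* 1 Hz output */
	uint64_t half = period / 2;		/* high phase == low phase */
	uint64_t hi_incr = half << 32;		/* value for PTP_PPS_HI_INCR */
	uint64_t lo_incr = half << 32;		/* value for PTP_PPS_LO_INCR */

	printf("hi_incr=0x%016llx lo_incr=0x%016llx\n",
	       (unsigned long long)hi_incr, (unsigned long long)lo_incr);
	return 0;
}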
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
index f2b1edf1bb43..15a319684ed3 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
@@ -756,12 +756,11 @@ static int rvu_cgx_ptp_rx_cfg(struct rvu *rvu, u16 pcifunc, bool enable)
if (!is_mac_feature_supported(rvu, pf, RVU_LMAC_FEAT_PTP))
return 0;
- /* This msg is expected only from PFs that are mapped to CGX LMACs,
+ /* This msg is expected only from PF/VFs that are mapped to CGX/RPM LMACs,
* if received from other PF/VF simply ACK, nothing to do.
*/
- if ((pcifunc & RVU_PFVF_FUNC_MASK) ||
- !is_pf_cgxmapped(rvu, pf))
- return -ENODEV;
+ if (!is_pf_cgxmapped(rvu, pf))
+ return -EPERM;
rvu_get_cgx_lmac_id(rvu->pf2cgxlmac_map[pf], &cgx_id, &lmac_id);
cgxd = rvu_cgx_pdata(cgx_id, rvu);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
index d30e84803481..bd817ee88735 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_debugfs.c
@@ -2756,6 +2756,27 @@ static int rvu_dbg_npc_rx_miss_stats_display(struct seq_file *filp,
RVU_DEBUG_SEQ_FOPS(npc_rx_miss_act, npc_rx_miss_stats_display, NULL);
+#define RVU_DBG_PRINT_MPLS_TTL(pkt, mask) \
+do { \
+ seq_printf(s, "%ld ", FIELD_GET(OTX2_FLOWER_MASK_MPLS_TTL, pkt)); \
+ seq_printf(s, "mask 0x%lx\n", \
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_TTL, mask)); \
+} while (0)
+
+#define RVU_DBG_PRINT_MPLS_LBTCBOS(_pkt, _mask) \
+do { \
+ typeof(_pkt) (pkt) = (_pkt); \
+ typeof(_mask) (mask) = (_mask); \
+ seq_printf(s, "%ld %ld %ld\n", \
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_LB, pkt), \
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_TC, pkt), \
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_BOS, pkt)); \
+ seq_printf(s, "\tmask 0x%lx 0x%lx 0x%lx\n", \
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_LB, mask), \
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_TC, mask), \
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_BOS, mask)); \
+} while (0)
+
static void rvu_dbg_npc_mcam_show_flows(struct seq_file *s,
struct rvu_npc_mcam_rule *rule)
{
@@ -2836,6 +2857,38 @@ static void rvu_dbg_npc_mcam_show_flows(struct seq_file *s,
seq_printf(s, "0x%x ", ntohl(rule->packet.spi));
seq_printf(s, "mask 0x%x\n", ntohl(rule->mask.spi));
break;
+ case NPC_MPLS1_LBTCBOS:
+ RVU_DBG_PRINT_MPLS_LBTCBOS(rule->packet.mpls_lse[0],
+ rule->mask.mpls_lse[0]);
+ break;
+ case NPC_MPLS1_TTL:
+ RVU_DBG_PRINT_MPLS_TTL(rule->packet.mpls_lse[0],
+ rule->mask.mpls_lse[0]);
+ break;
+ case NPC_MPLS2_LBTCBOS:
+ RVU_DBG_PRINT_MPLS_LBTCBOS(rule->packet.mpls_lse[1],
+ rule->mask.mpls_lse[1]);
+ break;
+ case NPC_MPLS2_TTL:
+ RVU_DBG_PRINT_MPLS_TTL(rule->packet.mpls_lse[1],
+ rule->mask.mpls_lse[1]);
+ break;
+ case NPC_MPLS3_LBTCBOS:
+ RVU_DBG_PRINT_MPLS_LBTCBOS(rule->packet.mpls_lse[2],
+ rule->mask.mpls_lse[2]);
+ break;
+ case NPC_MPLS3_TTL:
+ RVU_DBG_PRINT_MPLS_TTL(rule->packet.mpls_lse[2],
+ rule->mask.mpls_lse[2]);
+ break;
+ case NPC_MPLS4_LBTCBOS:
+ RVU_DBG_PRINT_MPLS_LBTCBOS(rule->packet.mpls_lse[3],
+ rule->mask.mpls_lse[3]);
+ break;
+ case NPC_MPLS4_TTL:
+ RVU_DBG_PRINT_MPLS_TTL(rule->packet.mpls_lse[3],
+ rule->mask.mpls_lse[3]);
+ break;
default:
seq_puts(s, "\n");
break;
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
index 41df5ac23f92..c70932625d0d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
@@ -14,26 +14,16 @@
#define DRV_NAME "octeontx2-af"
-static int rvu_report_pair_start(struct devlink_fmsg *fmsg, const char *name)
+static void rvu_report_pair_start(struct devlink_fmsg *fmsg, const char *name)
{
- int err;
-
- err = devlink_fmsg_pair_nest_start(fmsg, name);
- if (err)
- return err;
-
- return devlink_fmsg_obj_nest_start(fmsg);
+ devlink_fmsg_pair_nest_start(fmsg, name);
+ devlink_fmsg_obj_nest_start(fmsg);
}
-static int rvu_report_pair_end(struct devlink_fmsg *fmsg)
+static void rvu_report_pair_end(struct devlink_fmsg *fmsg)
{
- int err;
-
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- return err;
-
- return devlink_fmsg_pair_nest_end(fmsg);
+ devlink_fmsg_obj_nest_end(fmsg);
+ devlink_fmsg_pair_nest_end(fmsg);
}
static bool rvu_common_request_irq(struct rvu *rvu, int offset,
@@ -284,175 +274,81 @@ static int rvu_nix_report_show(struct devlink_fmsg *fmsg, void *ctx,
{
struct rvu_nix_event_ctx *nix_event_context;
u64 intr_val;
- int err;
nix_event_context = ctx;
switch (health_reporter) {
case NIX_AF_RVU_INTR:
intr_val = nix_event_context->nix_af_rvu_int;
- err = rvu_report_pair_start(fmsg, "NIX_AF_RVU");
- if (err)
- return err;
- err = devlink_fmsg_u64_pair_put(fmsg, "\tNIX RVU Interrupt Reg ",
- nix_event_context->nix_af_rvu_int);
- if (err)
- return err;
- if (intr_val & BIT_ULL(0)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tUnmap Slot Error");
- if (err)
- return err;
- }
- err = rvu_report_pair_end(fmsg);
- if (err)
- return err;
+ rvu_report_pair_start(fmsg, "NIX_AF_RVU");
+ devlink_fmsg_u64_pair_put(fmsg, "\tNIX RVU Interrupt Reg ",
+ nix_event_context->nix_af_rvu_int);
+ if (intr_val & BIT_ULL(0))
+ devlink_fmsg_string_put(fmsg, "\n\tUnmap Slot Error");
+ rvu_report_pair_end(fmsg);
break;
case NIX_AF_RVU_GEN:
intr_val = nix_event_context->nix_af_rvu_gen;
- err = rvu_report_pair_start(fmsg, "NIX_AF_GENERAL");
- if (err)
- return err;
- err = devlink_fmsg_u64_pair_put(fmsg, "\tNIX General Interrupt Reg ",
- nix_event_context->nix_af_rvu_gen);
- if (err)
- return err;
- if (intr_val & BIT_ULL(0)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tRx multicast pkt drop");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(1)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tRx mirror pkt drop");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(4)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tSMQ flush done");
- if (err)
- return err;
- }
- err = rvu_report_pair_end(fmsg);
- if (err)
- return err;
+ rvu_report_pair_start(fmsg, "NIX_AF_GENERAL");
+ devlink_fmsg_u64_pair_put(fmsg, "\tNIX General Interrupt Reg ",
+ nix_event_context->nix_af_rvu_gen);
+ if (intr_val & BIT_ULL(0))
+ devlink_fmsg_string_put(fmsg, "\n\tRx multicast pkt drop");
+ if (intr_val & BIT_ULL(1))
+ devlink_fmsg_string_put(fmsg, "\n\tRx mirror pkt drop");
+ if (intr_val & BIT_ULL(4))
+ devlink_fmsg_string_put(fmsg, "\n\tSMQ flush done");
+ rvu_report_pair_end(fmsg);
break;
case NIX_AF_RVU_ERR:
intr_val = nix_event_context->nix_af_rvu_err;
- err = rvu_report_pair_start(fmsg, "NIX_AF_ERR");
- if (err)
- return err;
- err = devlink_fmsg_u64_pair_put(fmsg, "\tNIX Error Interrupt Reg ",
- nix_event_context->nix_af_rvu_err);
- if (err)
- return err;
- if (intr_val & BIT_ULL(14)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFault on NIX_AQ_INST_S read");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(13)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFault on NIX_AQ_RES_S write");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(12)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tAQ Doorbell Error");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(6)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tRx on unmapped PF_FUNC");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(5)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tRx multicast replication error");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(4)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFault on NIX_RX_MCE_S read");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(3)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFault on multicast WQE read");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(2)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFault on mirror WQE read");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(1)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFault on mirror pkt write");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(0)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFault on multicast pkt write");
- if (err)
- return err;
- }
- err = rvu_report_pair_end(fmsg);
- if (err)
- return err;
+ rvu_report_pair_start(fmsg, "NIX_AF_ERR");
+ devlink_fmsg_u64_pair_put(fmsg, "\tNIX Error Interrupt Reg ",
+ nix_event_context->nix_af_rvu_err);
+ if (intr_val & BIT_ULL(14))
+ devlink_fmsg_string_put(fmsg, "\n\tFault on NIX_AQ_INST_S read");
+ if (intr_val & BIT_ULL(13))
+ devlink_fmsg_string_put(fmsg, "\n\tFault on NIX_AQ_RES_S write");
+ if (intr_val & BIT_ULL(12))
+ devlink_fmsg_string_put(fmsg, "\n\tAQ Doorbell Error");
+ if (intr_val & BIT_ULL(6))
+ devlink_fmsg_string_put(fmsg, "\n\tRx on unmapped PF_FUNC");
+ if (intr_val & BIT_ULL(5))
+ devlink_fmsg_string_put(fmsg, "\n\tRx multicast replication error");
+ if (intr_val & BIT_ULL(4))
+ devlink_fmsg_string_put(fmsg, "\n\tFault on NIX_RX_MCE_S read");
+ if (intr_val & BIT_ULL(3))
+ devlink_fmsg_string_put(fmsg, "\n\tFault on multicast WQE read");
+ if (intr_val & BIT_ULL(2))
+ devlink_fmsg_string_put(fmsg, "\n\tFault on mirror WQE read");
+ if (intr_val & BIT_ULL(1))
+ devlink_fmsg_string_put(fmsg, "\n\tFault on mirror pkt write");
+ if (intr_val & BIT_ULL(0))
+ devlink_fmsg_string_put(fmsg, "\n\tFault on multicast pkt write");
+ rvu_report_pair_end(fmsg);
break;
case NIX_AF_RVU_RAS:
intr_val = nix_event_context->nix_af_rvu_err;
- err = rvu_report_pair_start(fmsg, "NIX_AF_RAS");
- if (err)
- return err;
- err = devlink_fmsg_u64_pair_put(fmsg, "\tNIX RAS Interrupt Reg ",
- nix_event_context->nix_af_rvu_err);
- if (err)
- return err;
- err = devlink_fmsg_string_put(fmsg, "\n\tPoison Data on:");
- if (err)
- return err;
- if (intr_val & BIT_ULL(34)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tNIX_AQ_INST_S");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(33)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tNIX_AQ_RES_S");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(32)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tHW ctx");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(4)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tPacket from mirror buffer");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(3)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tPacket from multicast buffer");
-
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(2)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tWQE read from mirror buffer");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(1)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tWQE read from multicast buffer");
- if (err)
- return err;
- }
- if (intr_val & BIT_ULL(0)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tNIX_RX_MCE_S read");
- if (err)
- return err;
- }
- err = rvu_report_pair_end(fmsg);
- if (err)
- return err;
+ rvu_report_pair_start(fmsg, "NIX_AF_RAS");
+ devlink_fmsg_u64_pair_put(fmsg, "\tNIX RAS Interrupt Reg ",
+ nix_event_context->nix_af_rvu_err);
+ devlink_fmsg_string_put(fmsg, "\n\tPoison Data on:");
+ if (intr_val & BIT_ULL(34))
+ devlink_fmsg_string_put(fmsg, "\n\tNIX_AQ_INST_S");
+ if (intr_val & BIT_ULL(33))
+ devlink_fmsg_string_put(fmsg, "\n\tNIX_AQ_RES_S");
+ if (intr_val & BIT_ULL(32))
+ devlink_fmsg_string_put(fmsg, "\n\tHW ctx");
+ if (intr_val & BIT_ULL(4))
+ devlink_fmsg_string_put(fmsg, "\n\tPacket from mirror buffer");
+ if (intr_val & BIT_ULL(3))
+ devlink_fmsg_string_put(fmsg, "\n\tPacket from multicast buffer");
+ if (intr_val & BIT_ULL(2))
+ devlink_fmsg_string_put(fmsg, "\n\tWQE read from mirror buffer");
+ if (intr_val & BIT_ULL(1))
+ devlink_fmsg_string_put(fmsg, "\n\tWQE read from multicast buffer");
+ if (intr_val & BIT_ULL(0))
+ devlink_fmsg_string_put(fmsg, "\n\tNIX_RX_MCE_S read");
+ rvu_report_pair_end(fmsg);
break;
default:
return -EINVAL;
@@ -922,181 +818,87 @@ static int rvu_npa_report_show(struct devlink_fmsg *fmsg, void *ctx,
struct rvu_npa_event_ctx *npa_event_context;
unsigned int alloc_dis, free_dis;
u64 intr_val;
- int err;
npa_event_context = ctx;
switch (health_reporter) {
case NPA_AF_RVU_GEN:
intr_val = npa_event_context->npa_af_rvu_gen;
- err = rvu_report_pair_start(fmsg, "NPA_AF_GENERAL");
- if (err)
- return err;
- err = devlink_fmsg_u64_pair_put(fmsg, "\tNPA General Interrupt Reg ",
- npa_event_context->npa_af_rvu_gen);
- if (err)
- return err;
- if (intr_val & BIT_ULL(32)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tUnmap PF Error");
- if (err)
- return err;
- }
+ rvu_report_pair_start(fmsg, "NPA_AF_GENERAL");
+ devlink_fmsg_u64_pair_put(fmsg, "\tNPA General Interrupt Reg ",
+ npa_event_context->npa_af_rvu_gen);
+ if (intr_val & BIT_ULL(32))
+ devlink_fmsg_string_put(fmsg, "\n\tUnmap PF Error");
free_dis = FIELD_GET(GENMASK(15, 0), intr_val);
- if (free_dis & BIT(NPA_INPQ_NIX0_RX)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tNIX0: free disabled RX");
- if (err)
- return err;
- }
- if (free_dis & BIT(NPA_INPQ_NIX0_TX)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tNIX0:free disabled TX");
- if (err)
- return err;
- }
- if (free_dis & BIT(NPA_INPQ_NIX1_RX)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tNIX1: free disabled RX");
- if (err)
- return err;
- }
- if (free_dis & BIT(NPA_INPQ_NIX1_TX)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tNIX1:free disabled TX");
- if (err)
- return err;
- }
- if (free_dis & BIT(NPA_INPQ_SSO)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFree Disabled for SSO");
- if (err)
- return err;
- }
- if (free_dis & BIT(NPA_INPQ_TIM)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFree Disabled for TIM");
- if (err)
- return err;
- }
- if (free_dis & BIT(NPA_INPQ_DPI)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFree Disabled for DPI");
- if (err)
- return err;
- }
- if (free_dis & BIT(NPA_INPQ_AURA_OP)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFree Disabled for AURA");
- if (err)
- return err;
- }
+ if (free_dis & BIT(NPA_INPQ_NIX0_RX))
+ devlink_fmsg_string_put(fmsg, "\n\tNIX0: free disabled RX");
+ if (free_dis & BIT(NPA_INPQ_NIX0_TX))
+ devlink_fmsg_string_put(fmsg, "\n\tNIX0:free disabled TX");
+ if (free_dis & BIT(NPA_INPQ_NIX1_RX))
+ devlink_fmsg_string_put(fmsg, "\n\tNIX1: free disabled RX");
+ if (free_dis & BIT(NPA_INPQ_NIX1_TX))
+ devlink_fmsg_string_put(fmsg, "\n\tNIX1:free disabled TX");
+ if (free_dis & BIT(NPA_INPQ_SSO))
+ devlink_fmsg_string_put(fmsg, "\n\tFree Disabled for SSO");
+ if (free_dis & BIT(NPA_INPQ_TIM))
+ devlink_fmsg_string_put(fmsg, "\n\tFree Disabled for TIM");
+ if (free_dis & BIT(NPA_INPQ_DPI))
+ devlink_fmsg_string_put(fmsg, "\n\tFree Disabled for DPI");
+ if (free_dis & BIT(NPA_INPQ_AURA_OP))
+ devlink_fmsg_string_put(fmsg, "\n\tFree Disabled for AURA");
alloc_dis = FIELD_GET(GENMASK(31, 16), intr_val);
- if (alloc_dis & BIT(NPA_INPQ_NIX0_RX)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tNIX0: alloc disabled RX");
- if (err)
- return err;
- }
- if (alloc_dis & BIT(NPA_INPQ_NIX0_TX)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tNIX0:alloc disabled TX");
- if (err)
- return err;
- }
- if (alloc_dis & BIT(NPA_INPQ_NIX1_RX)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tNIX1: alloc disabled RX");
- if (err)
- return err;
- }
- if (alloc_dis & BIT(NPA_INPQ_NIX1_TX)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tNIX1:alloc disabled TX");
- if (err)
- return err;
- }
- if (alloc_dis & BIT(NPA_INPQ_SSO)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tAlloc Disabled for SSO");
- if (err)
- return err;
- }
- if (alloc_dis & BIT(NPA_INPQ_TIM)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tAlloc Disabled for TIM");
- if (err)
- return err;
- }
- if (alloc_dis & BIT(NPA_INPQ_DPI)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tAlloc Disabled for DPI");
- if (err)
- return err;
- }
- if (alloc_dis & BIT(NPA_INPQ_AURA_OP)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tAlloc Disabled for AURA");
- if (err)
- return err;
- }
- err = rvu_report_pair_end(fmsg);
- if (err)
- return err;
+ if (alloc_dis & BIT(NPA_INPQ_NIX0_RX))
+ devlink_fmsg_string_put(fmsg, "\n\tNIX0: alloc disabled RX");
+ if (alloc_dis & BIT(NPA_INPQ_NIX0_TX))
+ devlink_fmsg_string_put(fmsg, "\n\tNIX0:alloc disabled TX");
+ if (alloc_dis & BIT(NPA_INPQ_NIX1_RX))
+ devlink_fmsg_string_put(fmsg, "\n\tNIX1: alloc disabled RX");
+ if (alloc_dis & BIT(NPA_INPQ_NIX1_TX))
+ devlink_fmsg_string_put(fmsg, "\n\tNIX1:alloc disabled TX");
+ if (alloc_dis & BIT(NPA_INPQ_SSO))
+ devlink_fmsg_string_put(fmsg, "\n\tAlloc Disabled for SSO");
+ if (alloc_dis & BIT(NPA_INPQ_TIM))
+ devlink_fmsg_string_put(fmsg, "\n\tAlloc Disabled for TIM");
+ if (alloc_dis & BIT(NPA_INPQ_DPI))
+ devlink_fmsg_string_put(fmsg, "\n\tAlloc Disabled for DPI");
+ if (alloc_dis & BIT(NPA_INPQ_AURA_OP))
+ devlink_fmsg_string_put(fmsg, "\n\tAlloc Disabled for AURA");
+
+ rvu_report_pair_end(fmsg);
break;
case NPA_AF_RVU_ERR:
- err = rvu_report_pair_start(fmsg, "NPA_AF_ERR");
- if (err)
- return err;
- err = devlink_fmsg_u64_pair_put(fmsg, "\tNPA Error Interrupt Reg ",
- npa_event_context->npa_af_rvu_err);
- if (err)
- return err;
-
- if (npa_event_context->npa_af_rvu_err & BIT_ULL(14)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFault on NPA_AQ_INST_S read");
- if (err)
- return err;
- }
- if (npa_event_context->npa_af_rvu_err & BIT_ULL(13)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tFault on NPA_AQ_RES_S write");
- if (err)
- return err;
- }
- if (npa_event_context->npa_af_rvu_err & BIT_ULL(12)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tAQ Doorbell Error");
- if (err)
- return err;
- }
- err = rvu_report_pair_end(fmsg);
- if (err)
- return err;
+ rvu_report_pair_start(fmsg, "NPA_AF_ERR");
+ devlink_fmsg_u64_pair_put(fmsg, "\tNPA Error Interrupt Reg ",
+ npa_event_context->npa_af_rvu_err);
+ if (npa_event_context->npa_af_rvu_err & BIT_ULL(14))
+ devlink_fmsg_string_put(fmsg, "\n\tFault on NPA_AQ_INST_S read");
+ if (npa_event_context->npa_af_rvu_err & BIT_ULL(13))
+ devlink_fmsg_string_put(fmsg, "\n\tFault on NPA_AQ_RES_S write");
+ if (npa_event_context->npa_af_rvu_err & BIT_ULL(12))
+ devlink_fmsg_string_put(fmsg, "\n\tAQ Doorbell Error");
+ rvu_report_pair_end(fmsg);
break;
case NPA_AF_RVU_RAS:
- err = rvu_report_pair_start(fmsg, "NPA_AF_RVU_RAS");
- if (err)
- return err;
- err = devlink_fmsg_u64_pair_put(fmsg, "\tNPA RAS Interrupt Reg ",
- npa_event_context->npa_af_rvu_ras);
- if (err)
- return err;
- if (npa_event_context->npa_af_rvu_ras & BIT_ULL(34)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tPoison data on NPA_AQ_INST_S");
- if (err)
- return err;
- }
- if (npa_event_context->npa_af_rvu_ras & BIT_ULL(33)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tPoison data on NPA_AQ_RES_S");
- if (err)
- return err;
- }
- if (npa_event_context->npa_af_rvu_ras & BIT_ULL(32)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tPoison data on HW context");
- if (err)
- return err;
- }
- err = rvu_report_pair_end(fmsg);
- if (err)
- return err;
+ rvu_report_pair_start(fmsg, "NPA_AF_RVU_RAS");
+ devlink_fmsg_u64_pair_put(fmsg, "\tNPA RAS Interrupt Reg ",
+ npa_event_context->npa_af_rvu_ras);
+ if (npa_event_context->npa_af_rvu_ras & BIT_ULL(34))
+ devlink_fmsg_string_put(fmsg, "\n\tPoison data on NPA_AQ_INST_S");
+ if (npa_event_context->npa_af_rvu_ras & BIT_ULL(33))
+ devlink_fmsg_string_put(fmsg, "\n\tPoison data on NPA_AQ_RES_S");
+ if (npa_event_context->npa_af_rvu_ras & BIT_ULL(32))
+ devlink_fmsg_string_put(fmsg, "\n\tPoison data on HW context");
+ rvu_report_pair_end(fmsg);
break;
case NPA_AF_RVU_INTR:
- err = rvu_report_pair_start(fmsg, "NPA_AF_RVU");
- if (err)
- return err;
- err = devlink_fmsg_u64_pair_put(fmsg, "\tNPA RVU Interrupt Reg ",
- npa_event_context->npa_af_rvu_int);
- if (err)
- return err;
- if (npa_event_context->npa_af_rvu_int & BIT_ULL(0)) {
- err = devlink_fmsg_string_put(fmsg, "\n\tUnmap Slot Error");
- if (err)
- return err;
- }
- return rvu_report_pair_end(fmsg);
+ rvu_report_pair_start(fmsg, "NPA_AF_RVU");
+ devlink_fmsg_u64_pair_put(fmsg, "\tNPA RVU Interrupt Reg ",
+ npa_event_context->npa_af_rvu_int);
+ if (npa_event_context->npa_af_rvu_int & BIT_ULL(0))
+ devlink_fmsg_string_put(fmsg, "\n\tUnmap Slot Error");
+ rvu_report_pair_end(fmsg);
+ break;
default:
return -EINVAL;
}
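The rvu_devlink.c conversion drops the per-call error checks because the devlink fmsg helpers now return void and latch failures internally, to be reported once when the message is completed. The sketch below uses a hypothetical builder type (not the devlink API) to illustrate that latch-the-first-error pattern:

#include <errno.h>
#include <stdio.h>

/* Hypothetical message builder: setters return void and record the first
 * failure; the caller checks a single status at the end.
 */
struct msg_builder {
	int err;	/* first error seen, 0 if none */
	int used;
	char buf[64];
};

static void msg_put(struct msg_builder *m, const char *s)
{
	int n;

	if (m->err)
		return;		/* already failed - later puts are no-ops */
	n = snprintf(m->buf + m->used, sizeof(m->buf) - m->used, "%s", s);
	if (n < 0 || (size_t)n >= sizeof(m->buf) - m->used)
		m->err = -EMSGSIZE;
	else
		m->used += n;
}

int main(void)
{
	struct msg_builder m = { 0 };

	msg_put(&m, "NIX_AF_RVU: ");
	msg_put(&m, "Unmap Slot Error");
	if (m.err)		/* one check instead of one per put */
		return 1;
	puts(m.buf);
	return 0;
}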
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
index 237f82082ebe..114e4ec21802 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_npc_fs.c
@@ -43,6 +43,14 @@ static const char * const npc_flow_names[] = {
[NPC_DPORT_SCTP] = "sctp destination port",
[NPC_LXMB] = "Mcast/Bcast header ",
[NPC_IPSEC_SPI] = "SPI ",
+ [NPC_MPLS1_LBTCBOS] = "lse depth 1 label tc bos",
+ [NPC_MPLS1_TTL] = "lse depth 1 ttl",
+ [NPC_MPLS2_LBTCBOS] = "lse depth 2 label tc bos",
+ [NPC_MPLS2_TTL] = "lse depth 2 ttl",
+ [NPC_MPLS3_LBTCBOS] = "lse depth 3 label tc bos",
+ [NPC_MPLS3_TTL] = "lse depth 3 ttl",
+ [NPC_MPLS4_LBTCBOS] = "lse depth 4 label tc bos",
+ [NPC_MPLS4_TTL] = "lse depth 4 ttl",
[NPC_UNKNOWN] = "unknown",
};
@@ -528,6 +536,14 @@ do { \
NPC_SCAN_HDR(NPC_IPSEC_SPI, NPC_LID_LD, NPC_LT_LD_AH, 4, 4);
NPC_SCAN_HDR(NPC_IPSEC_SPI, NPC_LID_LE, NPC_LT_LE_ESP, 0, 4);
+ NPC_SCAN_HDR(NPC_MPLS1_LBTCBOS, NPC_LID_LC, NPC_LT_LC_MPLS, 0, 3);
+ NPC_SCAN_HDR(NPC_MPLS1_TTL, NPC_LID_LC, NPC_LT_LC_MPLS, 3, 1);
+ NPC_SCAN_HDR(NPC_MPLS2_LBTCBOS, NPC_LID_LC, NPC_LT_LC_MPLS, 4, 3);
+ NPC_SCAN_HDR(NPC_MPLS2_TTL, NPC_LID_LC, NPC_LT_LC_MPLS, 7, 1);
+ NPC_SCAN_HDR(NPC_MPLS3_LBTCBOS, NPC_LID_LC, NPC_LT_LC_MPLS, 8, 3);
+ NPC_SCAN_HDR(NPC_MPLS3_TTL, NPC_LID_LC, NPC_LT_LC_MPLS, 11, 1);
+ NPC_SCAN_HDR(NPC_MPLS4_LBTCBOS, NPC_LID_LC, NPC_LT_LC_MPLS, 12, 3);
+ NPC_SCAN_HDR(NPC_MPLS4_TTL, NPC_LID_LC, NPC_LT_LC_MPLS, 15, 1);
/* SMAC follows the DMAC(which is 6 bytes) */
NPC_SCAN_HDR(NPC_SMAC, NPC_LID_LA, la_ltype, la_start + 6, 6);
@@ -593,6 +609,11 @@ static void npc_set_features(struct rvu *rvu, int blkaddr, u8 intf)
/* for L2M/L2B/L3M/L3B, check if the type is present in the key */
if (npc_check_field(rvu, blkaddr, NPC_LXMB, intf))
*features |= BIT_ULL(NPC_LXMB);
+
+ for (hdr = NPC_MPLS1_LBTCBOS; hdr <= NPC_MPLS4_TTL; hdr++) {
+ if (npc_check_field(rvu, blkaddr, hdr, intf))
+ *features |= BIT_ULL(hdr);
+ }
}
/* Scan key extraction profile and record how fields of our interest
@@ -959,6 +980,47 @@ do { \
NPC_WRITE_FLOW(NPC_INNER_VID, vlan_itci, ntohs(pkt->vlan_itci), 0,
ntohs(mask->vlan_itci), 0);
+ NPC_WRITE_FLOW(NPC_MPLS1_LBTCBOS, mpls_lse,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_NON_TTL,
+ pkt->mpls_lse[0]), 0,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_NON_TTL,
+ mask->mpls_lse[0]), 0);
+ NPC_WRITE_FLOW(NPC_MPLS1_TTL, mpls_lse,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_TTL,
+ pkt->mpls_lse[0]), 0,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_TTL,
+ mask->mpls_lse[0]), 0);
+ NPC_WRITE_FLOW(NPC_MPLS2_LBTCBOS, mpls_lse,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_NON_TTL,
+ pkt->mpls_lse[1]), 0,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_NON_TTL,
+ mask->mpls_lse[1]), 0);
+ NPC_WRITE_FLOW(NPC_MPLS2_TTL, mpls_lse,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_TTL,
+ pkt->mpls_lse[1]), 0,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_TTL,
+ mask->mpls_lse[1]), 0);
+ NPC_WRITE_FLOW(NPC_MPLS3_LBTCBOS, mpls_lse,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_NON_TTL,
+ pkt->mpls_lse[2]), 0,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_NON_TTL,
+ mask->mpls_lse[2]), 0);
+ NPC_WRITE_FLOW(NPC_MPLS3_TTL, mpls_lse,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_TTL,
+ pkt->mpls_lse[2]), 0,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_TTL,
+ mask->mpls_lse[2]), 0);
+ NPC_WRITE_FLOW(NPC_MPLS4_LBTCBOS, mpls_lse,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_NON_TTL,
+ pkt->mpls_lse[3]), 0,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_NON_TTL,
+ mask->mpls_lse[3]), 0);
+ NPC_WRITE_FLOW(NPC_MPLS4_TTL, mpls_lse,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_TTL,
+ pkt->mpls_lse[3]), 0,
+ FIELD_GET(OTX2_FLOWER_MASK_MPLS_TTL,
+ mask->mpls_lse[3]), 0);
+
NPC_WRITE_FLOW(NPC_IPFRAG_IPV6, next_header, pkt->next_header, 0,
mask->next_header, 0);
npc_update_ipv6_flow(rvu, entry, features, pkt, mask, output, intf);
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index 818ce76185b2..1a42bfded872 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -1404,7 +1404,7 @@ int otx2_pool_init(struct otx2_nic *pfvf, u16 pool_id,
}
pp_params.order = get_order(buf_size);
- pp_params.flags = PP_FLAG_PAGE_FRAG | PP_FLAG_DMA_MAP;
+ pp_params.flags = PP_FLAG_DMA_MAP;
pp_params.pool_size = min(OTX2_PAGE_POOL_SZ, numptrs);
pp_params.nid = NUMA_NO_NODE;
pp_params.dev = pfvf->dev;
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ptp.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ptp.c
index 3a72b0793d4a..63130ba37e9d 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ptp.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ptp.c
@@ -175,7 +175,7 @@ static int ptp_set_thresh(struct otx2_ptp *ptp, u64 thresh)
return otx2_sync_mbox_msg(&ptp->nic->mbox);
}
-static int ptp_extts_on(struct otx2_ptp *ptp, int on)
+static int ptp_pps_on(struct otx2_ptp *ptp, int on, u64 period)
{
struct ptp_req *req;
@@ -186,8 +186,9 @@ static int ptp_extts_on(struct otx2_ptp *ptp, int on)
if (!req)
return -ENOMEM;
- req->op = PTP_OP_EXTTS_ON;
- req->extts_on = on;
+ req->op = PTP_OP_PPS_ON;
+ req->pps_on = on;
+ req->period = period;
return otx2_sync_mbox_msg(&ptp->nic->mbox);
}
@@ -276,8 +277,8 @@ static int otx2_ptp_verify_pin(struct ptp_clock_info *ptp, unsigned int pin,
switch (func) {
case PTP_PF_NONE:
case PTP_PF_EXTTS:
- break;
case PTP_PF_PEROUT:
+ break;
case PTP_PF_PHYSYNC:
return -1;
}
@@ -340,6 +341,7 @@ static int otx2_ptp_enable(struct ptp_clock_info *ptp_info,
{
struct otx2_ptp *ptp = container_of(ptp_info, struct otx2_ptp,
ptp_info);
+ u64 period = 0;
int pin;
if (!ptp->nic)
@@ -351,12 +353,24 @@ static int otx2_ptp_enable(struct ptp_clock_info *ptp_info,
rq->extts.index);
if (pin < 0)
return -EBUSY;
- if (on) {
- ptp_extts_on(ptp, on);
+ if (on)
schedule_delayed_work(&ptp->extts_work, msecs_to_jiffies(200));
- } else {
- ptp_extts_on(ptp, on);
+ else
cancel_delayed_work_sync(&ptp->extts_work);
+
+ return 0;
+ case PTP_CLK_REQ_PEROUT:
+ if (rq->perout.flags)
+ return -EOPNOTSUPP;
+
+ if (rq->perout.index >= ptp_info->n_pins)
+ return -EINVAL;
+ if (on) {
+ period = rq->perout.period.sec * NSEC_PER_SEC +
+ rq->perout.period.nsec;
+ ptp_pps_on(ptp, on, period);
+ } else {
+ ptp_pps_on(ptp, on, period);
}
return 0;
default:
@@ -411,6 +425,7 @@ int otx2_ptp_init(struct otx2_nic *pfvf)
.name = "OcteonTX2 PTP",
.max_adj = 1000000000ull,
.n_ext_ts = 1,
+ .n_per_out = 1,
.n_pins = 1,
.pps = 0,
.pin_config = &ptp_ptr->extts_config,
diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
index fab9d85bfb37..8a5e3987a482 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_tc.c
@@ -27,6 +27,8 @@
#define CN10K_TLX_BURST_MANTISSA GENMASK_ULL(43, 29)
#define CN10K_TLX_BURST_EXPONENT GENMASK_ULL(47, 44)
+#define OTX2_UNSUPP_LSE_DEPTH GENMASK(6, 4)
+
struct otx2_tc_flow_stats {
u64 bytes;
u64 pkts;
@@ -519,6 +521,7 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic, struct otx2_tc_flow *node,
BIT_ULL(FLOW_DISSECTOR_KEY_IPV6_ADDRS) |
BIT_ULL(FLOW_DISSECTOR_KEY_PORTS) |
BIT(FLOW_DISSECTOR_KEY_IPSEC) |
+ BIT_ULL(FLOW_DISSECTOR_KEY_MPLS) |
BIT_ULL(FLOW_DISSECTOR_KEY_IP)))) {
netdev_info(nic->netdev, "unsupported flow used key 0x%llx",
dissector->used_keys);
@@ -738,6 +741,61 @@ static int otx2_tc_prepare_flow(struct otx2_nic *nic, struct otx2_tc_flow *node,
}
}
+ if (flow_rule_match_key(rule, FLOW_DISSECTOR_KEY_MPLS)) {
+ struct flow_match_mpls match;
+ u8 bit;
+
+ flow_rule_match_mpls(rule, &match);
+
+ if (match.mask->used_lses & OTX2_UNSUPP_LSE_DEPTH) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "unsupported LSE depth for MPLS match offload");
+ return -EOPNOTSUPP;
+ }
+
+ for_each_set_bit(bit, (unsigned long *)&match.mask->used_lses,
+ FLOW_DIS_MPLS_MAX) {
+ /* check if any of the fields LABEL,TC,BOS are set */
+ if (*((u32 *)&match.mask->ls[bit]) &
+ OTX2_FLOWER_MASK_MPLS_NON_TTL) {
+ /* Hardware will capture 4 byte MPLS header into
+ * two fields NPC_MPLSX_LBTCBOS and NPC_MPLSX_TTL.
+ * Derive the associated NPC key based on header
+ * index and offset.
+ */
+
+ req->features |= BIT_ULL(NPC_MPLS1_LBTCBOS +
+ 2 * bit);
+ flow_spec->mpls_lse[bit] =
+ FIELD_PREP(OTX2_FLOWER_MASK_MPLS_LB,
+ match.key->ls[bit].mpls_label) |
+ FIELD_PREP(OTX2_FLOWER_MASK_MPLS_TC,
+ match.key->ls[bit].mpls_tc) |
+ FIELD_PREP(OTX2_FLOWER_MASK_MPLS_BOS,
+ match.key->ls[bit].mpls_bos);
+
+ flow_mask->mpls_lse[bit] =
+ FIELD_PREP(OTX2_FLOWER_MASK_MPLS_LB,
+ match.mask->ls[bit].mpls_label) |
+ FIELD_PREP(OTX2_FLOWER_MASK_MPLS_TC,
+ match.mask->ls[bit].mpls_tc) |
+ FIELD_PREP(OTX2_FLOWER_MASK_MPLS_BOS,
+ match.mask->ls[bit].mpls_bos);
+ }
+
+ if (match.mask->ls[bit].mpls_ttl) {
+ req->features |= BIT_ULL(NPC_MPLS1_TTL +
+ 2 * bit);
+ flow_spec->mpls_lse[bit] |=
+ FIELD_PREP(OTX2_FLOWER_MASK_MPLS_TTL,
+ match.key->ls[bit].mpls_ttl);
+ flow_mask->mpls_lse[bit] |=
+ FIELD_PREP(OTX2_FLOWER_MASK_MPLS_TTL,
+ match.mask->ls[bit].mpls_ttl);
+ }
+ }
+ }
+
return otx2_tc_parse_actions(nic, &rule->action, req, f, node);
}
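otx2_tc_prepare_flow() selects the NPC key for an MPLS match as NPC_MPLS1_LBTCBOS + 2 * bit (and NPC_MPLS1_TTL + 2 * bit), which works because the npc.h enum adds two consecutive entries per LSE depth. A standalone sketch with an illustrative mirror of that ordering (values are not the real enum constants):

#include <stdio.h>

enum demo_npc_key {
	DEMO_MPLS1_LBTCBOS,
	DEMO_MPLS1_TTL,
	DEMO_MPLS2_LBTCBOS,
	DEMO_MPLS2_TTL,
	DEMO_MPLS3_LBTCBOS,
	DEMO_MPLS3_TTL,
	DEMO_MPLS4_LBTCBOS,
	DEMO_MPLS4_TTL,
};

int main(void)
{
	/* Each LSE depth contributes an LBTCBOS key followed by a TTL key,
	 * so a stride of 2 walks from depth 0 to depth 3.
	 */
	for (int depth = 0; depth < 4; depth++)
		printf("depth %d -> lbtcbos key %d, ttl key %d\n", depth,
		       DEMO_MPLS1_LBTCBOS + 2 * depth,
		       DEMO_MPLS1_TTL + 2 * depth);
	return 0;
}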
diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
index d5691b6a2bc5..dd6ca2e4fd51 100644
--- a/drivers/net/ethernet/marvell/pxa168_eth.c
+++ b/drivers/net/ethernet/marvell/pxa168_eth.c
@@ -1528,7 +1528,7 @@ err_clk:
return err;
}
-static int pxa168_eth_remove(struct platform_device *pdev)
+static void pxa168_eth_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct pxa168_eth_private *pep = netdev_priv(dev);
@@ -1547,7 +1547,6 @@ static int pxa168_eth_remove(struct platform_device *pdev)
mdiobus_free(pep->smi_bus);
unregister_netdev(dev);
free_netdev(dev);
- return 0;
}
static void pxa168_eth_shutdown(struct platform_device *pdev)
@@ -1580,7 +1579,7 @@ MODULE_DEVICE_TABLE(of, pxa168_eth_of_match);
static struct platform_driver pxa168_eth_driver = {
.probe = pxa168_eth_probe,
- .remove = pxa168_eth_remove,
+ .remove_new = pxa168_eth_remove,
.shutdown = pxa168_eth_shutdown,
.resume = pxa168_eth_resume,
.suspend = pxa168_eth_suspend,
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index 20afe79f380a..3cf6589cfdac 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -197,6 +197,7 @@ static const struct mtk_reg_map mt7988_reg_map = {
.wdma_base = {
[0] = 0x4800,
[1] = 0x4c00,
+ [2] = 0x5000,
},
.pse_iq_sta = 0x0180,
.pse_oq_sta = 0x01a0,
@@ -2210,7 +2211,7 @@ rx_done:
net_dim(&eth->rx_dim, dim_sample);
if (xdp_flush)
- xdp_do_flush_map();
+ xdp_do_flush();
return done;
}
@@ -3328,7 +3329,7 @@ static int mtk_device_event(struct notifier_block *n, unsigned long event, void
return NOTIFY_DONE;
found:
- if (!dsa_slave_dev_check(dev))
+ if (!dsa_user_dev_check(dev))
return NOTIFY_DONE;
if (__ethtool_get_link_ksettings(dev, &s))
@@ -5002,7 +5003,7 @@ err_destroy_sgmii:
return err;
}
-static int mtk_remove(struct platform_device *pdev)
+static void mtk_remove(struct platform_device *pdev)
{
struct mtk_eth *eth = platform_get_drvdata(pdev);
struct mtk_mac *mac;
@@ -5024,8 +5025,6 @@ static int mtk_remove(struct platform_device *pdev)
netif_napi_del(&eth->rx_napi);
mtk_cleanup(eth);
mtk_mdio_cleanup(eth);
-
- return 0;
}
static const struct mtk_soc_data mt2701_data = {
@@ -5226,7 +5225,7 @@ MODULE_DEVICE_TABLE(of, of_mtk_match);
static struct platform_driver mtk_driver = {
.probe = mtk_probe,
- .remove = mtk_remove,
+ .remove_new = mtk_remove,
.driver = {
.name = "mtk_soc_eth",
.of_match_table = of_mtk_match,
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 403219d987ef..9ae3b8a71d0e 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -1132,7 +1132,7 @@ struct mtk_reg_map {
u32 gdm1_cnt;
u32 gdma_to_ppe;
u32 ppe_base;
- u32 wdma_base[2];
+ u32 wdma_base[3];
u32 pse_iq_sta;
u32 pse_oq_sta;
};
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
index 86f32f486043..b2a5d9c3733d 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
@@ -425,7 +425,8 @@ int mtk_foe_entry_set_pppoe(struct mtk_eth *eth, struct mtk_foe_entry *entry,
}
int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry,
- int wdma_idx, int txq, int bss, int wcid)
+ int wdma_idx, int txq, int bss, int wcid,
+ bool amsdu_en)
{
struct mtk_foe_mac_info *l2 = mtk_foe_entry_l2(eth, entry);
u32 *ib2 = mtk_foe_entry_ib2(eth, entry);
@@ -437,6 +438,7 @@ int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry,
MTK_FOE_IB2_WDMA_WINFO_V2;
l2->w3info = FIELD_PREP(MTK_FOE_WINFO_WCID_V3, wcid) |
FIELD_PREP(MTK_FOE_WINFO_BSS_V3, bss);
+ l2->amsdu = FIELD_PREP(MTK_FOE_WINFO_AMSDU_EN, amsdu_en);
break;
case 2:
*ib2 &= ~MTK_FOE_IB2_PORT_MG_V2;
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.h b/drivers/net/ethernet/mediatek/mtk_ppe.h
index e3d0ec72bc69..691806bca372 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe.h
+++ b/drivers/net/ethernet/mediatek/mtk_ppe.h
@@ -88,13 +88,13 @@ enum {
#define MTK_FOE_WINFO_BSS_V3 GENMASK(23, 16)
#define MTK_FOE_WINFO_WCID_V3 GENMASK(15, 0)
-#define MTK_FOE_WINFO_PAO_USR_INFO GENMASK(15, 0)
-#define MTK_FOE_WINFO_PAO_TID GENMASK(19, 16)
-#define MTK_FOE_WINFO_PAO_IS_FIXEDRATE BIT(20)
-#define MTK_FOE_WINFO_PAO_IS_PRIOR BIT(21)
-#define MTK_FOE_WINFO_PAO_IS_SP BIT(22)
-#define MTK_FOE_WINFO_PAO_HF BIT(23)
-#define MTK_FOE_WINFO_PAO_AMSDU_EN BIT(24)
+#define MTK_FOE_WINFO_AMSDU_USR_INFO GENMASK(15, 0)
+#define MTK_FOE_WINFO_AMSDU_TID GENMASK(19, 16)
+#define MTK_FOE_WINFO_AMSDU_IS_FIXEDRATE BIT(20)
+#define MTK_FOE_WINFO_AMSDU_IS_PRIOR BIT(21)
+#define MTK_FOE_WINFO_AMSDU_IS_SP BIT(22)
+#define MTK_FOE_WINFO_AMSDU_HF BIT(23)
+#define MTK_FOE_WINFO_AMSDU_EN BIT(24)
enum {
MTK_FOE_STATE_INVALID,
@@ -123,7 +123,7 @@ struct mtk_foe_mac_info {
/* netsys_v3 */
u32 w3info;
- u32 wpao;
+ u32 amsdu;
};
/* software-only entry type */
@@ -392,7 +392,8 @@ int mtk_foe_entry_set_vlan(struct mtk_eth *eth, struct mtk_foe_entry *entry,
int mtk_foe_entry_set_pppoe(struct mtk_eth *eth, struct mtk_foe_entry *entry,
int sid);
int mtk_foe_entry_set_wdma(struct mtk_eth *eth, struct mtk_foe_entry *entry,
- int wdma_idx, int txq, int bss, int wcid);
+ int wdma_idx, int txq, int bss, int wcid,
+ bool amsdu_en);
int mtk_foe_entry_set_queue(struct mtk_eth *eth, struct mtk_foe_entry *entry,
unsigned int queue);
int mtk_foe_entry_commit(struct mtk_ppe *ppe, struct mtk_flow_entry *entry);
diff --git a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
index a4efbeb16208..fbb5e9d5af13 100644
--- a/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
+++ b/drivers/net/ethernet/mediatek/mtk_ppe_offload.c
@@ -111,6 +111,7 @@ mtk_flow_get_wdma_info(struct net_device *dev, const u8 *addr, struct mtk_wdma_i
info->queue = path->mtk_wdma.queue;
info->bss = path->mtk_wdma.bss;
info->wcid = path->mtk_wdma.wcid;
+ info->amsdu = path->mtk_wdma.amsdu;
return 0;
}
@@ -174,7 +175,7 @@ mtk_flow_get_dsa_port(struct net_device **dev)
if (dp->cpu_dp->tag_ops->proto != DSA_TAG_PROTO_MTK)
return -ENODEV;
- *dev = dsa_port_to_master(dp);
+ *dev = dsa_port_to_conduit(dp);
return dp->index;
#else
@@ -192,14 +193,17 @@ mtk_flow_set_output_device(struct mtk_eth *eth, struct mtk_foe_entry *foe,
if (mtk_flow_get_wdma_info(dev, dest_mac, &info) == 0) {
mtk_foe_entry_set_wdma(eth, foe, info.wdma_idx, info.queue,
- info.bss, info.wcid);
+ info.bss, info.wcid, info.amsdu);
if (mtk_is_netsys_v2_or_greater(eth)) {
switch (info.wdma_idx) {
case 0:
- pse_port = 8;
+ pse_port = PSE_WDMA0_PORT;
break;
case 1:
- pse_port = 9;
+ pse_port = PSE_WDMA1_PORT;
+ break;
+ case 2:
+ pse_port = PSE_WDMA2_PORT;
break;
default:
return -EINVAL;
diff --git a/drivers/net/ethernet/mediatek/mtk_wed.c b/drivers/net/ethernet/mediatek/mtk_wed.c
index 94376aa2b34c..9a6744c0d458 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed.c
@@ -17,17 +17,21 @@
#include <net/flow_offload.h>
#include <net/pkt_cls.h>
#include "mtk_eth_soc.h"
-#include "mtk_wed_regs.h"
#include "mtk_wed.h"
#include "mtk_ppe.h"
#include "mtk_wed_wo.h"
#define MTK_PCIE_BASE(n) (0x1a143000 + (n) * 0x2000)
-#define MTK_WED_PKT_SIZE 1900
+#define MTK_WED_PKT_SIZE 1920
#define MTK_WED_BUF_SIZE 2048
+#define MTK_WED_PAGE_BUF_SIZE 128
#define MTK_WED_BUF_PER_PAGE (PAGE_SIZE / 2048)
+#define MTK_WED_RX_BUF_PER_PAGE (PAGE_SIZE / MTK_WED_PAGE_BUF_SIZE)
#define MTK_WED_RX_RING_SIZE 1536
+#define MTK_WED_RX_PG_BM_CNT 8192
+#define MTK_WED_AMSDU_BUF_SIZE (PAGE_SIZE << 4)
+#define MTK_WED_AMSDU_NPAGES 32
#define MTK_WED_TX_RING_SIZE 2048
#define MTK_WED_WDMA_RING_SIZE 1024
@@ -41,7 +45,10 @@
#define MTK_WED_RRO_QUE_CNT 8192
#define MTK_WED_MIOD_ENTRY_CNT 128
-static struct mtk_wed_hw *hw_list[2];
+#define MTK_WED_TX_BM_DMA_SIZE 65536
+#define MTK_WED_TX_BM_PKT_CNT 32768
+
+static struct mtk_wed_hw *hw_list[3];
static DEFINE_MUTEX(hw_lock);
struct mtk_wed_flow_block_priv {
@@ -49,6 +56,39 @@ struct mtk_wed_flow_block_priv {
struct net_device *dev;
};
+static const struct mtk_wed_soc_data mt7622_data = {
+ .regmap = {
+ .tx_bm_tkid = 0x088,
+ .wpdma_rx_ring0 = 0x770,
+ .reset_idx_tx_mask = GENMASK(3, 0),
+ .reset_idx_rx_mask = GENMASK(17, 16),
+ },
+ .tx_ring_desc_size = sizeof(struct mtk_wdma_desc),
+ .wdma_desc_size = sizeof(struct mtk_wdma_desc),
+};
+
+static const struct mtk_wed_soc_data mt7986_data = {
+ .regmap = {
+ .tx_bm_tkid = 0x0c8,
+ .wpdma_rx_ring0 = 0x770,
+ .reset_idx_tx_mask = GENMASK(1, 0),
+ .reset_idx_rx_mask = GENMASK(7, 6),
+ },
+ .tx_ring_desc_size = sizeof(struct mtk_wdma_desc),
+ .wdma_desc_size = 2 * sizeof(struct mtk_wdma_desc),
+};
+
+static const struct mtk_wed_soc_data mt7988_data = {
+ .regmap = {
+ .tx_bm_tkid = 0x0c8,
+ .wpdma_rx_ring0 = 0x7d0,
+ .reset_idx_tx_mask = GENMASK(1, 0),
+ .reset_idx_rx_mask = GENMASK(7, 6),
+ },
+ .tx_ring_desc_size = sizeof(struct mtk_wed_bm_desc),
+ .wdma_desc_size = 2 * sizeof(struct mtk_wdma_desc),
+};
+
static void
wed_m32(struct mtk_wed_device *dev, u32 reg, u32 mask, u32 val)
{
@@ -109,6 +149,90 @@ mtk_wdma_read_reset(struct mtk_wed_device *dev)
return wdma_r32(dev, MTK_WDMA_GLO_CFG);
}
+static void
+mtk_wdma_v3_rx_reset(struct mtk_wed_device *dev)
+{
+ u32 status;
+
+ if (!mtk_wed_is_v3_or_greater(dev->hw))
+ return;
+
+ wdma_clr(dev, MTK_WDMA_PREF_TX_CFG, MTK_WDMA_PREF_TX_CFG_PREF_EN);
+ wdma_clr(dev, MTK_WDMA_PREF_RX_CFG, MTK_WDMA_PREF_RX_CFG_PREF_EN);
+
+ if (read_poll_timeout(wdma_r32, status,
+ !(status & MTK_WDMA_PREF_TX_CFG_PREF_BUSY),
+ 0, 10000, false, dev, MTK_WDMA_PREF_TX_CFG))
+ dev_err(dev->hw->dev, "rx reset failed\n");
+
+ if (read_poll_timeout(wdma_r32, status,
+ !(status & MTK_WDMA_PREF_RX_CFG_PREF_BUSY),
+ 0, 10000, false, dev, MTK_WDMA_PREF_RX_CFG))
+ dev_err(dev->hw->dev, "rx reset failed\n");
+
+ wdma_clr(dev, MTK_WDMA_WRBK_TX_CFG, MTK_WDMA_WRBK_TX_CFG_WRBK_EN);
+ wdma_clr(dev, MTK_WDMA_WRBK_RX_CFG, MTK_WDMA_WRBK_RX_CFG_WRBK_EN);
+
+ if (read_poll_timeout(wdma_r32, status,
+ !(status & MTK_WDMA_WRBK_TX_CFG_WRBK_BUSY),
+ 0, 10000, false, dev, MTK_WDMA_WRBK_TX_CFG))
+ dev_err(dev->hw->dev, "rx reset failed\n");
+
+ if (read_poll_timeout(wdma_r32, status,
+ !(status & MTK_WDMA_WRBK_RX_CFG_WRBK_BUSY),
+ 0, 10000, false, dev, MTK_WDMA_WRBK_RX_CFG))
+ dev_err(dev->hw->dev, "rx reset failed\n");
+
+ /* prefetch FIFO */
+ wdma_w32(dev, MTK_WDMA_PREF_RX_FIFO_CFG,
+ MTK_WDMA_PREF_RX_FIFO_CFG_RING0_CLEAR |
+ MTK_WDMA_PREF_RX_FIFO_CFG_RING1_CLEAR);
+ wdma_clr(dev, MTK_WDMA_PREF_RX_FIFO_CFG,
+ MTK_WDMA_PREF_RX_FIFO_CFG_RING0_CLEAR |
+ MTK_WDMA_PREF_RX_FIFO_CFG_RING1_CLEAR);
+
+ /* core FIFO */
+ wdma_w32(dev, MTK_WDMA_XDMA_RX_FIFO_CFG,
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_PAR_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_CMD_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_DMAD_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_ARR_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_LEN_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_WID_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_BID_FIFO_CLEAR);
+ wdma_clr(dev, MTK_WDMA_XDMA_RX_FIFO_CFG,
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_PAR_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_CMD_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_DMAD_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_ARR_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_LEN_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_WID_FIFO_CLEAR |
+ MTK_WDMA_XDMA_RX_FIFO_CFG_RX_BID_FIFO_CLEAR);
+
+ /* writeback FIFO */
+ wdma_w32(dev, MTK_WDMA_WRBK_RX_FIFO_CFG(0),
+ MTK_WDMA_WRBK_RX_FIFO_CFG_RING_CLEAR);
+ wdma_w32(dev, MTK_WDMA_WRBK_RX_FIFO_CFG(1),
+ MTK_WDMA_WRBK_RX_FIFO_CFG_RING_CLEAR);
+
+ wdma_clr(dev, MTK_WDMA_WRBK_RX_FIFO_CFG(0),
+ MTK_WDMA_WRBK_RX_FIFO_CFG_RING_CLEAR);
+ wdma_clr(dev, MTK_WDMA_WRBK_RX_FIFO_CFG(1),
+ MTK_WDMA_WRBK_RX_FIFO_CFG_RING_CLEAR);
+
+ /* prefetch ring status */
+ wdma_w32(dev, MTK_WDMA_PREF_SIDX_CFG,
+ MTK_WDMA_PREF_SIDX_CFG_RX_RING_CLEAR);
+ wdma_clr(dev, MTK_WDMA_PREF_SIDX_CFG,
+ MTK_WDMA_PREF_SIDX_CFG_RX_RING_CLEAR);
+
+ /* writeback ring status */
+ wdma_w32(dev, MTK_WDMA_WRBK_SIDX_CFG,
+ MTK_WDMA_WRBK_SIDX_CFG_RX_RING_CLEAR);
+ wdma_clr(dev, MTK_WDMA_WRBK_SIDX_CFG,
+ MTK_WDMA_WRBK_SIDX_CFG_RX_RING_CLEAR);
+}
+
static int
mtk_wdma_rx_reset(struct mtk_wed_device *dev)
{
@@ -121,6 +245,7 @@ mtk_wdma_rx_reset(struct mtk_wed_device *dev)
if (ret)
dev_err(dev->hw->dev, "rx reset failed\n");
+ mtk_wdma_v3_rx_reset(dev);
wdma_w32(dev, MTK_WDMA_RESET_IDX, MTK_WDMA_RESET_IDX_RX);
wdma_w32(dev, MTK_WDMA_RESET_IDX, 0);
@@ -135,6 +260,101 @@ mtk_wdma_rx_reset(struct mtk_wed_device *dev)
return ret;
}
+static u32
+mtk_wed_check_busy(struct mtk_wed_device *dev, u32 reg, u32 mask)
+{
+ return !!(wed_r32(dev, reg) & mask);
+}
+
+static int
+mtk_wed_poll_busy(struct mtk_wed_device *dev, u32 reg, u32 mask)
+{
+ int sleep = 15000;
+ int timeout = 100 * sleep;
+ u32 val;
+
+ return read_poll_timeout(mtk_wed_check_busy, val, !val, sleep,
+ timeout, false, dev, reg, mask);
+}
+
+static void
+mtk_wdma_v3_tx_reset(struct mtk_wed_device *dev)
+{
+ u32 status;
+
+ if (!mtk_wed_is_v3_or_greater(dev->hw))
+ return;
+
+ wdma_clr(dev, MTK_WDMA_PREF_TX_CFG, MTK_WDMA_PREF_TX_CFG_PREF_EN);
+ wdma_clr(dev, MTK_WDMA_PREF_RX_CFG, MTK_WDMA_PREF_RX_CFG_PREF_EN);
+
+ if (read_poll_timeout(wdma_r32, status,
+ !(status & MTK_WDMA_PREF_TX_CFG_PREF_BUSY),
+ 0, 10000, false, dev, MTK_WDMA_PREF_TX_CFG))
+ dev_err(dev->hw->dev, "tx reset failed\n");
+
+ if (read_poll_timeout(wdma_r32, status,
+ !(status & MTK_WDMA_PREF_RX_CFG_PREF_BUSY),
+ 0, 10000, false, dev, MTK_WDMA_PREF_RX_CFG))
+ dev_err(dev->hw->dev, "tx reset failed\n");
+
+ wdma_clr(dev, MTK_WDMA_WRBK_TX_CFG, MTK_WDMA_WRBK_TX_CFG_WRBK_EN);
+ wdma_clr(dev, MTK_WDMA_WRBK_RX_CFG, MTK_WDMA_WRBK_RX_CFG_WRBK_EN);
+
+ if (read_poll_timeout(wdma_r32, status,
+ !(status & MTK_WDMA_WRBK_TX_CFG_WRBK_BUSY),
+ 0, 10000, false, dev, MTK_WDMA_WRBK_TX_CFG))
+ dev_err(dev->hw->dev, "tx reset failed\n");
+
+ if (read_poll_timeout(wdma_r32, status,
+ !(status & MTK_WDMA_WRBK_RX_CFG_WRBK_BUSY),
+ 0, 10000, false, dev, MTK_WDMA_WRBK_RX_CFG))
+ dev_err(dev->hw->dev, "tx reset failed\n");
+
+ /* prefetch FIFO */
+ wdma_w32(dev, MTK_WDMA_PREF_TX_FIFO_CFG,
+ MTK_WDMA_PREF_TX_FIFO_CFG_RING0_CLEAR |
+ MTK_WDMA_PREF_TX_FIFO_CFG_RING1_CLEAR);
+ wdma_clr(dev, MTK_WDMA_PREF_TX_FIFO_CFG,
+ MTK_WDMA_PREF_TX_FIFO_CFG_RING0_CLEAR |
+ MTK_WDMA_PREF_TX_FIFO_CFG_RING1_CLEAR);
+
+ /* core FIFO */
+ wdma_w32(dev, MTK_WDMA_XDMA_TX_FIFO_CFG,
+ MTK_WDMA_XDMA_TX_FIFO_CFG_TX_PAR_FIFO_CLEAR |
+ MTK_WDMA_XDMA_TX_FIFO_CFG_TX_CMD_FIFO_CLEAR |
+ MTK_WDMA_XDMA_TX_FIFO_CFG_TX_DMAD_FIFO_CLEAR |
+ MTK_WDMA_XDMA_TX_FIFO_CFG_TX_ARR_FIFO_CLEAR);
+ wdma_clr(dev, MTK_WDMA_XDMA_TX_FIFO_CFG,
+ MTK_WDMA_XDMA_TX_FIFO_CFG_TX_PAR_FIFO_CLEAR |
+ MTK_WDMA_XDMA_TX_FIFO_CFG_TX_CMD_FIFO_CLEAR |
+ MTK_WDMA_XDMA_TX_FIFO_CFG_TX_DMAD_FIFO_CLEAR |
+ MTK_WDMA_XDMA_TX_FIFO_CFG_TX_ARR_FIFO_CLEAR);
+
+ /* writeback FIFO */
+ wdma_w32(dev, MTK_WDMA_WRBK_TX_FIFO_CFG(0),
+ MTK_WDMA_WRBK_TX_FIFO_CFG_RING_CLEAR);
+ wdma_w32(dev, MTK_WDMA_WRBK_TX_FIFO_CFG(1),
+ MTK_WDMA_WRBK_TX_FIFO_CFG_RING_CLEAR);
+
+ wdma_clr(dev, MTK_WDMA_WRBK_TX_FIFO_CFG(0),
+ MTK_WDMA_WRBK_TX_FIFO_CFG_RING_CLEAR);
+ wdma_clr(dev, MTK_WDMA_WRBK_TX_FIFO_CFG(1),
+ MTK_WDMA_WRBK_TX_FIFO_CFG_RING_CLEAR);
+
+ /* prefetch ring status */
+ wdma_w32(dev, MTK_WDMA_PREF_SIDX_CFG,
+ MTK_WDMA_PREF_SIDX_CFG_TX_RING_CLEAR);
+ wdma_clr(dev, MTK_WDMA_PREF_SIDX_CFG,
+ MTK_WDMA_PREF_SIDX_CFG_TX_RING_CLEAR);
+
+ /* writeback ring status */
+ wdma_w32(dev, MTK_WDMA_WRBK_SIDX_CFG,
+ MTK_WDMA_WRBK_SIDX_CFG_TX_RING_CLEAR);
+ wdma_clr(dev, MTK_WDMA_WRBK_SIDX_CFG,
+ MTK_WDMA_WRBK_SIDX_CFG_TX_RING_CLEAR);
+}
+
static void
mtk_wdma_tx_reset(struct mtk_wed_device *dev)
{
@@ -146,6 +366,7 @@ mtk_wdma_tx_reset(struct mtk_wed_device *dev)
!(status & mask), 0, 10000))
dev_err(dev->hw->dev, "tx reset failed\n");
+ mtk_wdma_v3_tx_reset(dev);
wdma_w32(dev, MTK_WDMA_RESET_IDX, MTK_WDMA_RESET_IDX_TX);
wdma_w32(dev, MTK_WDMA_RESET_IDX, 0);
@@ -278,7 +499,7 @@ mtk_wed_assign(struct mtk_wed_device *dev)
if (!hw->wed_dev)
goto out;
- if (hw->version == 1)
+ if (mtk_wed_is_v1(hw))
return NULL;
/* MT7986 WED devices do not have any pcie slot restrictions */
@@ -298,35 +519,152 @@ out:
}
static int
+mtk_wed_amsdu_buffer_alloc(struct mtk_wed_device *dev)
+{
+ struct mtk_wed_hw *hw = dev->hw;
+ struct mtk_wed_amsdu *wed_amsdu;
+ int i;
+
+ if (!mtk_wed_is_v3_or_greater(hw))
+ return 0;
+
+ wed_amsdu = devm_kcalloc(hw->dev, MTK_WED_AMSDU_NPAGES,
+ sizeof(*wed_amsdu), GFP_KERNEL);
+ if (!wed_amsdu)
+ return -ENOMEM;
+
+ for (i = 0; i < MTK_WED_AMSDU_NPAGES; i++) {
+ void *ptr;
+
+ /* each segment is 64K */
+ ptr = (void *)__get_free_pages(GFP_KERNEL | __GFP_NOWARN |
+ __GFP_ZERO | __GFP_COMP |
+ GFP_DMA32,
+ get_order(MTK_WED_AMSDU_BUF_SIZE));
+ if (!ptr)
+ goto error;
+
+ wed_amsdu[i].txd = ptr;
+ wed_amsdu[i].txd_phy = dma_map_single(hw->dev, ptr,
+ MTK_WED_AMSDU_BUF_SIZE,
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(hw->dev, wed_amsdu[i].txd_phy))
+ goto error;
+ }
+ dev->hw->wed_amsdu = wed_amsdu;
+
+ return 0;
+
+error:
+ for (i--; i >= 0; i--)
+ dma_unmap_single(hw->dev, wed_amsdu[i].txd_phy,
+ MTK_WED_AMSDU_BUF_SIZE, DMA_TO_DEVICE);
+ return -ENOMEM;
+}
+
+static void
+mtk_wed_amsdu_free_buffer(struct mtk_wed_device *dev)
+{
+ struct mtk_wed_amsdu *wed_amsdu = dev->hw->wed_amsdu;
+ int i;
+
+ if (!wed_amsdu)
+ return;
+
+ for (i = 0; i < MTK_WED_AMSDU_NPAGES; i++) {
+ dma_unmap_single(dev->hw->dev, wed_amsdu[i].txd_phy,
+ MTK_WED_AMSDU_BUF_SIZE, DMA_TO_DEVICE);
+ free_pages((unsigned long)wed_amsdu[i].txd,
+ get_order(MTK_WED_AMSDU_BUF_SIZE));
+ }
+}
+
+static int
+mtk_wed_amsdu_init(struct mtk_wed_device *dev)
+{
+ struct mtk_wed_amsdu *wed_amsdu = dev->hw->wed_amsdu;
+ int i, ret;
+
+ if (!wed_amsdu)
+ return 0;
+
+ for (i = 0; i < MTK_WED_AMSDU_NPAGES; i++)
+ wed_w32(dev, MTK_WED_AMSDU_HIFTXD_BASE_L(i),
+ wed_amsdu[i].txd_phy);
+
+ /* init all sta parameters */
+ wed_w32(dev, MTK_WED_AMSDU_STA_INFO_INIT, MTK_WED_AMSDU_STA_RMVL |
+ MTK_WED_AMSDU_STA_WTBL_HDRT_MODE |
+ FIELD_PREP(MTK_WED_AMSDU_STA_MAX_AMSDU_LEN,
+ dev->wlan.amsdu_max_len >> 8) |
+ FIELD_PREP(MTK_WED_AMSDU_STA_MAX_AMSDU_NUM,
+ dev->wlan.amsdu_max_subframes));
+
+ wed_w32(dev, MTK_WED_AMSDU_STA_INFO, MTK_WED_AMSDU_STA_INFO_DO_INIT);
+
+ ret = mtk_wed_poll_busy(dev, MTK_WED_AMSDU_STA_INFO,
+ MTK_WED_AMSDU_STA_INFO_DO_INIT);
+ if (ret) {
+ dev_err(dev->hw->dev, "amsdu initialization failed\n");
+ return ret;
+ }
+
+ /* init partial amsdu offload txd src */
+ wed_set(dev, MTK_WED_AMSDU_HIFTXD_CFG,
+ FIELD_PREP(MTK_WED_AMSDU_HIFTXD_SRC, dev->hw->index));
+
+ /* init qmem */
+ wed_set(dev, MTK_WED_AMSDU_PSE, MTK_WED_AMSDU_PSE_RESET);
+ ret = mtk_wed_poll_busy(dev, MTK_WED_MON_AMSDU_QMEM_STS1, BIT(29));
+ if (ret) {
+ pr_info("%s: amsdu qmem initialization failed\n", __func__);
+ return ret;
+ }
+
+ /* Eagle E1 PCIE1 tx ring 22 flow control issue */
+ if (dev->wlan.id == 0x7991)
+ wed_clr(dev, MTK_WED_AMSDU_FIFO, MTK_WED_AMSDU_IS_PRIOR0_RING);
+
+ wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_TX_AMSDU_EN);
+
+ return 0;
+}
+
+static int
mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev)
{
- struct mtk_wdma_desc *desc;
- dma_addr_t desc_phys;
- void **page_list;
+ u32 desc_size = dev->hw->soc->tx_ring_desc_size;
+ int i, page_idx = 0, n_pages, ring_size;
int token = dev->wlan.token_start;
- int ring_size;
- int n_pages;
- int i, page_idx;
+ struct mtk_wed_buf *page_list;
+ dma_addr_t desc_phys;
+ void *desc_ptr;
- ring_size = dev->wlan.nbuf & ~(MTK_WED_BUF_PER_PAGE - 1);
- n_pages = ring_size / MTK_WED_BUF_PER_PAGE;
+ if (!mtk_wed_is_v3_or_greater(dev->hw)) {
+ ring_size = dev->wlan.nbuf & ~(MTK_WED_BUF_PER_PAGE - 1);
+ dev->tx_buf_ring.size = ring_size;
+ } else {
+ dev->tx_buf_ring.size = MTK_WED_TX_BM_DMA_SIZE;
+ ring_size = MTK_WED_TX_BM_PKT_CNT;
+ }
+ n_pages = dev->tx_buf_ring.size / MTK_WED_BUF_PER_PAGE;
page_list = kcalloc(n_pages, sizeof(*page_list), GFP_KERNEL);
if (!page_list)
return -ENOMEM;
- dev->tx_buf_ring.size = ring_size;
dev->tx_buf_ring.pages = page_list;
- desc = dma_alloc_coherent(dev->hw->dev, ring_size * sizeof(*desc),
- &desc_phys, GFP_KERNEL);
- if (!desc)
+ desc_ptr = dma_alloc_coherent(dev->hw->dev,
+ dev->tx_buf_ring.size * desc_size,
+ &desc_phys, GFP_KERNEL);
+ if (!desc_ptr)
return -ENOMEM;
- dev->tx_buf_ring.desc = desc;
+ dev->tx_buf_ring.desc = desc_ptr;
dev->tx_buf_ring.desc_phys = desc_phys;
- for (i = 0, page_idx = 0; i < ring_size; i += MTK_WED_BUF_PER_PAGE) {
+ for (i = 0; i < ring_size; i += MTK_WED_BUF_PER_PAGE) {
dma_addr_t page_phys, buf_phys;
struct page *page;
void *buf;
@@ -343,7 +681,8 @@ mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev)
return -ENOMEM;
}
- page_list[page_idx++] = page;
+ page_list[page_idx].p = page;
+ page_list[page_idx++].phy_addr = page_phys;
dma_sync_single_for_cpu(dev->hw->dev, page_phys, PAGE_SIZE,
DMA_BIDIRECTIONAL);
@@ -351,28 +690,31 @@ mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev)
buf_phys = page_phys;
for (s = 0; s < MTK_WED_BUF_PER_PAGE; s++) {
- u32 txd_size;
- u32 ctrl;
-
- txd_size = dev->wlan.init_buf(buf, buf_phys, token++);
+ struct mtk_wdma_desc *desc = desc_ptr;
desc->buf0 = cpu_to_le32(buf_phys);
- desc->buf1 = cpu_to_le32(buf_phys + txd_size);
-
- if (dev->hw->version == 1)
- ctrl = FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN0, txd_size) |
- FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN1,
- MTK_WED_BUF_SIZE - txd_size) |
- MTK_WDMA_DESC_CTRL_LAST_SEG1;
- else
- ctrl = FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN0, txd_size) |
- FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN1_V2,
- MTK_WED_BUF_SIZE - txd_size) |
- MTK_WDMA_DESC_CTRL_LAST_SEG0;
- desc->ctrl = cpu_to_le32(ctrl);
- desc->info = 0;
- desc++;
-
+ if (!mtk_wed_is_v3_or_greater(dev->hw)) {
+ u32 txd_size, ctrl;
+
+ txd_size = dev->wlan.init_buf(buf, buf_phys,
+ token++);
+ desc->buf1 = cpu_to_le32(buf_phys + txd_size);
+ ctrl = FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN0, txd_size);
+ if (mtk_wed_is_v1(dev->hw))
+ ctrl |= MTK_WDMA_DESC_CTRL_LAST_SEG1 |
+ FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN1,
+ MTK_WED_BUF_SIZE - txd_size);
+ else
+ ctrl |= MTK_WDMA_DESC_CTRL_LAST_SEG0 |
+ FIELD_PREP(MTK_WDMA_DESC_CTRL_LEN1_V2,
+ MTK_WED_BUF_SIZE - txd_size);
+ desc->ctrl = cpu_to_le32(ctrl);
+ desc->info = 0;
+ } else {
+ desc->ctrl = cpu_to_le32(token << 16);
+ }
+
+ desc_ptr += desc_size;
buf += MTK_WED_BUF_SIZE;
buf_phys += MTK_WED_BUF_SIZE;
}
@@ -387,42 +729,103 @@ mtk_wed_tx_buffer_alloc(struct mtk_wed_device *dev)
static void
mtk_wed_free_tx_buffer(struct mtk_wed_device *dev)
{
- struct mtk_wdma_desc *desc = dev->tx_buf_ring.desc;
- void **page_list = dev->tx_buf_ring.pages;
- int page_idx;
- int i;
+ struct mtk_wed_buf *page_list = dev->tx_buf_ring.pages;
+ struct mtk_wed_hw *hw = dev->hw;
+ int i, page_idx = 0;
if (!page_list)
return;
- if (!desc)
+ if (!dev->tx_buf_ring.desc)
goto free_pagelist;
- for (i = 0, page_idx = 0; i < dev->tx_buf_ring.size;
- i += MTK_WED_BUF_PER_PAGE) {
- void *page = page_list[page_idx++];
- dma_addr_t buf_addr;
+ for (i = 0; i < dev->tx_buf_ring.size; i += MTK_WED_BUF_PER_PAGE) {
+ dma_addr_t page_phy = page_list[page_idx].phy_addr;
+ void *page = page_list[page_idx++].p;
if (!page)
break;
- buf_addr = le32_to_cpu(desc[i].buf0);
- dma_unmap_page(dev->hw->dev, buf_addr, PAGE_SIZE,
+ dma_unmap_page(dev->hw->dev, page_phy, PAGE_SIZE,
DMA_BIDIRECTIONAL);
__free_page(page);
}
- dma_free_coherent(dev->hw->dev, dev->tx_buf_ring.size * sizeof(*desc),
- desc, dev->tx_buf_ring.desc_phys);
+ dma_free_coherent(dev->hw->dev,
+ dev->tx_buf_ring.size * hw->soc->tx_ring_desc_size,
+ dev->tx_buf_ring.desc,
+ dev->tx_buf_ring.desc_phys);
free_pagelist:
kfree(page_list);
}
static int
+mtk_wed_hwrro_buffer_alloc(struct mtk_wed_device *dev)
+{
+ int n_pages = MTK_WED_RX_PG_BM_CNT / MTK_WED_RX_BUF_PER_PAGE;
+ struct mtk_wed_buf *page_list;
+ struct mtk_wed_bm_desc *desc;
+ dma_addr_t desc_phys;
+ int i, page_idx = 0;
+
+ if (!dev->wlan.hw_rro)
+ return 0;
+
+ page_list = kcalloc(n_pages, sizeof(*page_list), GFP_KERNEL);
+ if (!page_list)
+ return -ENOMEM;
+
+ dev->hw_rro.size = dev->wlan.rx_nbuf & ~(MTK_WED_BUF_PER_PAGE - 1);
+ dev->hw_rro.pages = page_list;
+ desc = dma_alloc_coherent(dev->hw->dev,
+ dev->wlan.rx_nbuf * sizeof(*desc),
+ &desc_phys, GFP_KERNEL);
+ if (!desc)
+ return -ENOMEM;
+
+ dev->hw_rro.desc = desc;
+ dev->hw_rro.desc_phys = desc_phys;
+
+ for (i = 0; i < MTK_WED_RX_PG_BM_CNT; i += MTK_WED_RX_BUF_PER_PAGE) {
+ dma_addr_t page_phys, buf_phys;
+ struct page *page;
+ int s;
+
+ page = __dev_alloc_page(GFP_KERNEL);
+ if (!page)
+ return -ENOMEM;
+
+ page_phys = dma_map_page(dev->hw->dev, page, 0, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(dev->hw->dev, page_phys)) {
+ __free_page(page);
+ return -ENOMEM;
+ }
+
+ page_list[page_idx].p = page;
+ page_list[page_idx++].phy_addr = page_phys;
+ dma_sync_single_for_cpu(dev->hw->dev, page_phys, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+
+ buf_phys = page_phys;
+ for (s = 0; s < MTK_WED_RX_BUF_PER_PAGE; s++) {
+ desc->buf0 = cpu_to_le32(buf_phys);
+ buf_phys += MTK_WED_PAGE_BUF_SIZE;
+ desc++;
+ }
+
+ dma_sync_single_for_device(dev->hw->dev, page_phys, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+ }
+
+ return 0;
+}
+
+static int
mtk_wed_rx_buffer_alloc(struct mtk_wed_device *dev)
{
- struct mtk_rxbm_desc *desc;
+ struct mtk_wed_bm_desc *desc;
dma_addr_t desc_phys;
dev->rx_buf_ring.size = dev->wlan.rx_nbuf;
@@ -436,13 +839,48 @@ mtk_wed_rx_buffer_alloc(struct mtk_wed_device *dev)
dev->rx_buf_ring.desc_phys = desc_phys;
dev->wlan.init_rx_buf(dev, dev->wlan.rx_npkt);
- return 0;
+ return mtk_wed_hwrro_buffer_alloc(dev);
+}
+
+static void
+mtk_wed_hwrro_free_buffer(struct mtk_wed_device *dev)
+{
+ struct mtk_wed_buf *page_list = dev->hw_rro.pages;
+ struct mtk_wed_bm_desc *desc = dev->hw_rro.desc;
+ int i, page_idx = 0;
+
+ if (!dev->wlan.hw_rro)
+ return;
+
+ if (!page_list)
+ return;
+
+ if (!desc)
+ goto free_pagelist;
+
+ for (i = 0; i < MTK_WED_RX_PG_BM_CNT; i += MTK_WED_RX_BUF_PER_PAGE) {
+ dma_addr_t buf_addr = page_list[page_idx].phy_addr;
+ void *page = page_list[page_idx++].p;
+
+ if (!page)
+ break;
+
+ dma_unmap_page(dev->hw->dev, buf_addr, PAGE_SIZE,
+ DMA_BIDIRECTIONAL);
+ __free_page(page);
+ }
+
+ dma_free_coherent(dev->hw->dev, dev->hw_rro.size * sizeof(*desc),
+ desc, dev->hw_rro.desc_phys);
+
+free_pagelist:
+ kfree(page_list);
}
static void
mtk_wed_free_rx_buffer(struct mtk_wed_device *dev)
{
- struct mtk_rxbm_desc *desc = dev->rx_buf_ring.desc;
+ struct mtk_wed_bm_desc *desc = dev->rx_buf_ring.desc;
if (!desc)
return;
@@ -450,6 +888,28 @@ mtk_wed_free_rx_buffer(struct mtk_wed_device *dev)
dev->wlan.release_rx_buf(dev);
dma_free_coherent(dev->hw->dev, dev->rx_buf_ring.size * sizeof(*desc),
desc, dev->rx_buf_ring.desc_phys);
+
+ mtk_wed_hwrro_free_buffer(dev);
+}
+
+static void
+mtk_wed_hwrro_init(struct mtk_wed_device *dev)
+{
+ if (!mtk_wed_get_rx_capa(dev) || !dev->wlan.hw_rro)
+ return;
+
+ wed_set(dev, MTK_WED_RRO_PG_BM_RX_DMAM,
+ FIELD_PREP(MTK_WED_RRO_PG_BM_RX_SDL0, 128));
+
+ wed_w32(dev, MTK_WED_RRO_PG_BM_BASE, dev->hw_rro.desc_phys);
+
+ wed_w32(dev, MTK_WED_RRO_PG_BM_INIT_PTR,
+ MTK_WED_RRO_PG_BM_INIT_SW_TAIL_IDX |
+ FIELD_PREP(MTK_WED_RRO_PG_BM_SW_TAIL_IDX,
+ MTK_WED_RX_PG_BM_CNT));
+
+ /* enable rx_page_bm to fetch dmad */
+ wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_WED_RX_PG_BM_EN);
}
static void
@@ -463,6 +923,8 @@ mtk_wed_rx_buffer_hw_init(struct mtk_wed_device *dev)
wed_w32(dev, MTK_WED_RX_BM_DYN_ALLOC_TH,
FIELD_PREP(MTK_WED_RX_BM_DYN_ALLOC_TH_H, 0xffff));
wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_WED_RX_BM_EN);
+
+ mtk_wed_hwrro_init(dev);
}
static void
@@ -498,13 +960,23 @@ mtk_wed_set_ext_int(struct mtk_wed_device *dev, bool en)
{
u32 mask = MTK_WED_EXT_INT_STATUS_ERROR_MASK;
- if (dev->hw->version == 1)
+ switch (dev->hw->version) {
+ case 1:
mask |= MTK_WED_EXT_INT_STATUS_TX_DRV_R_RESP_ERR;
- else
+ break;
+ case 2:
mask |= MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH |
MTK_WED_EXT_INT_STATUS_RX_FBUF_HI_TH |
MTK_WED_EXT_INT_STATUS_RX_DRV_COHERENT |
MTK_WED_EXT_INT_STATUS_TX_DMA_W_RESP_ERR;
+ break;
+ case 3:
+ mask = MTK_WED_EXT_INT_STATUS_RX_DRV_COHERENT |
+ MTK_WED_EXT_INT_STATUS_TKID_WO_PYLD;
+ break;
+ default:
+ break;
+ }
if (!dev->hw->num_flows)
mask &= ~MTK_WED_EXT_INT_STATUS_TKID_WO_PYLD;
@@ -516,6 +988,9 @@ mtk_wed_set_ext_int(struct mtk_wed_device *dev, bool en)
static void
mtk_wed_set_512_support(struct mtk_wed_device *dev, bool enable)
{
+ if (!mtk_wed_is_v2(dev->hw))
+ return;
+
if (enable) {
wed_w32(dev, MTK_WED_TXDP_CTRL, MTK_WED_TXDP_DW9_OVERWR);
wed_w32(dev, MTK_WED_TXP_DW1,
@@ -527,22 +1002,15 @@ mtk_wed_set_512_support(struct mtk_wed_device *dev, bool enable)
}
}
-#define MTK_WFMDA_RX_DMA_EN BIT(2)
-static void
-mtk_wed_check_wfdma_rx_fill(struct mtk_wed_device *dev, int idx)
+static int
+mtk_wed_check_wfdma_rx_fill(struct mtk_wed_device *dev,
+ struct mtk_wed_ring *ring)
{
- u32 val;
int i;
- if (!(dev->rx_ring[idx].flags & MTK_WED_RING_CONFIGURED))
- return; /* queue is not configured by mt76 */
-
for (i = 0; i < 3; i++) {
- u32 cur_idx;
+ u32 cur_idx = readl(ring->wpdma + MTK_WED_RING_OFS_CPU_IDX);
- cur_idx = wed_r32(dev,
- MTK_WED_WPDMA_RING_RX_DATA(idx) +
- MTK_WED_RING_OFS_CPU_IDX);
if (cur_idx == MTK_WED_RX_RING_SIZE - 1)
break;
@@ -551,12 +1019,10 @@ mtk_wed_check_wfdma_rx_fill(struct mtk_wed_device *dev, int idx)
if (i == 3) {
dev_err(dev->hw->dev, "rx dma enable failed\n");
- return;
+ return -ETIMEDOUT;
}
- val = wifi_r32(dev, dev->wlan.wpdma_rx_glo - dev->wlan.phy_base) |
- MTK_WFMDA_RX_DMA_EN;
- wifi_w32(dev, dev->wlan.wpdma_rx_glo - dev->wlan.phy_base, val);
+ return 0;
}
static void
@@ -577,7 +1043,7 @@ mtk_wed_dma_disable(struct mtk_wed_device *dev)
MTK_WDMA_GLO_CFG_RX_INFO1_PRERES |
MTK_WDMA_GLO_CFG_RX_INFO2_PRERES);
- if (dev->hw->version == 1) {
+ if (mtk_wed_is_v1(dev->hw)) {
regmap_write(dev->hw->mirror, dev->hw->index * 4, 0);
wdma_clr(dev, MTK_WDMA_GLO_CFG,
MTK_WDMA_GLO_CFG_RX_INFO3_PRERES);
@@ -590,6 +1056,14 @@ mtk_wed_dma_disable(struct mtk_wed_device *dev)
MTK_WED_WPDMA_RX_D_RX_DRV_EN);
wed_clr(dev, MTK_WED_WDMA_GLO_CFG,
MTK_WED_WDMA_GLO_CFG_TX_DDONE_CHK);
+
+ if (mtk_wed_is_v3_or_greater(dev->hw) &&
+ mtk_wed_get_rx_capa(dev)) {
+ wdma_clr(dev, MTK_WDMA_PREF_TX_CFG,
+ MTK_WDMA_PREF_TX_CFG_PREF_EN);
+ wdma_clr(dev, MTK_WDMA_PREF_RX_CFG,
+ MTK_WDMA_PREF_RX_CFG_PREF_EN);
+ }
}
mtk_wed_set_512_support(dev, false);
@@ -606,7 +1080,7 @@ mtk_wed_stop(struct mtk_wed_device *dev)
wdma_w32(dev, MTK_WDMA_INT_GRP2, 0);
wed_w32(dev, MTK_WED_WPDMA_INT_MASK, 0);
- if (dev->hw->version == 1)
+ if (!mtk_wed_get_rx_capa(dev))
return;
wed_w32(dev, MTK_WED_EXT_INT_MASK1, 0);
@@ -625,13 +1099,21 @@ mtk_wed_deinit(struct mtk_wed_device *dev)
MTK_WED_CTRL_WED_TX_BM_EN |
MTK_WED_CTRL_WED_TX_FREE_AGENT_EN);
- if (dev->hw->version == 1)
+ if (mtk_wed_is_v1(dev->hw))
return;
wed_clr(dev, MTK_WED_CTRL,
MTK_WED_CTRL_RX_ROUTE_QM_EN |
MTK_WED_CTRL_WED_RX_BM_EN |
MTK_WED_CTRL_RX_RRO_QM_EN);
+
+ if (mtk_wed_is_v3_or_greater(dev->hw)) {
+ wed_clr(dev, MTK_WED_CTRL, MTK_WED_CTRL_TX_AMSDU_EN);
+ wed_clr(dev, MTK_WED_RESET, MTK_WED_RESET_TX_AMSDU);
+ wed_clr(dev, MTK_WED_PCIE_INT_CTRL,
+ MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA |
+ MTK_WED_PCIE_INT_CTRL_MSK_IRQ_FILTER);
+ }
}
static void
@@ -643,6 +1125,7 @@ __mtk_wed_detach(struct mtk_wed_device *dev)
mtk_wdma_rx_reset(dev);
mtk_wed_reset(dev, MTK_WED_RESET_WED);
+ mtk_wed_amsdu_free_buffer(dev);
mtk_wed_free_tx_buffer(dev);
mtk_wed_free_tx_rings(dev);
@@ -681,21 +1164,37 @@ mtk_wed_detach(struct mtk_wed_device *dev)
mutex_unlock(&hw_lock);
}
-#define PCIE_BASE_ADDR0 0x11280000
static void
mtk_wed_bus_init(struct mtk_wed_device *dev)
{
switch (dev->wlan.bus_type) {
case MTK_WED_BUS_PCIE: {
struct device_node *np = dev->hw->eth->dev->of_node;
- struct regmap *regs;
- regs = syscon_regmap_lookup_by_phandle(np,
- "mediatek,wed-pcie");
- if (IS_ERR(regs))
- break;
+ if (mtk_wed_is_v2(dev->hw)) {
+ struct regmap *regs;
+
+ regs = syscon_regmap_lookup_by_phandle(np,
+ "mediatek,wed-pcie");
+ if (IS_ERR(regs))
+ break;
- regmap_update_bits(regs, 0, BIT(0), BIT(0));
+ regmap_update_bits(regs, 0, BIT(0), BIT(0));
+ }
+
+ if (dev->wlan.msi) {
+ wed_w32(dev, MTK_WED_PCIE_CFG_INTM,
+ dev->hw->pcie_base | 0xc08);
+ wed_w32(dev, MTK_WED_PCIE_CFG_BASE,
+ dev->hw->pcie_base | 0xc04);
+ wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, BIT(8));
+ } else {
+ wed_w32(dev, MTK_WED_PCIE_CFG_INTM,
+ dev->hw->pcie_base | 0x180);
+ wed_w32(dev, MTK_WED_PCIE_CFG_BASE,
+ dev->hw->pcie_base | 0x184);
+ wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, BIT(24));
+ }
wed_w32(dev, MTK_WED_PCIE_INT_CTRL,
FIELD_PREP(MTK_WED_PCIE_INT_CTRL_POLL_EN, 2));
@@ -703,19 +1202,9 @@ mtk_wed_bus_init(struct mtk_wed_device *dev)
/* pcie interrupt control: pola/source selection */
wed_set(dev, MTK_WED_PCIE_INT_CTRL,
MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA |
- FIELD_PREP(MTK_WED_PCIE_INT_CTRL_SRC_SEL, 1));
- wed_r32(dev, MTK_WED_PCIE_INT_CTRL);
-
- wed_w32(dev, MTK_WED_PCIE_CFG_INTM, PCIE_BASE_ADDR0 | 0x180);
- wed_w32(dev, MTK_WED_PCIE_CFG_BASE, PCIE_BASE_ADDR0 | 0x184);
-
- /* pcie interrupt status trigger register */
- wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER, BIT(24));
- wed_r32(dev, MTK_WED_PCIE_INT_TRIGGER);
-
- /* pola setting */
- wed_set(dev, MTK_WED_PCIE_INT_CTRL,
- MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA);
+ MTK_WED_PCIE_INT_CTRL_MSK_IRQ_FILTER |
+ FIELD_PREP(MTK_WED_PCIE_INT_CTRL_SRC_SEL,
+ dev->hw->index));
break;
}
case MTK_WED_BUS_AXI:
@@ -731,38 +1220,55 @@ mtk_wed_bus_init(struct mtk_wed_device *dev)
static void
mtk_wed_set_wpdma(struct mtk_wed_device *dev)
{
- if (dev->hw->version == 1) {
- wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_phys);
- } else {
- mtk_wed_bus_init(dev);
+ int i;
- wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_int);
- wed_w32(dev, MTK_WED_WPDMA_CFG_INT_MASK, dev->wlan.wpdma_mask);
- wed_w32(dev, MTK_WED_WPDMA_CFG_TX, dev->wlan.wpdma_tx);
- wed_w32(dev, MTK_WED_WPDMA_CFG_TX_FREE, dev->wlan.wpdma_txfree);
- wed_w32(dev, MTK_WED_WPDMA_RX_GLO_CFG, dev->wlan.wpdma_rx_glo);
- wed_w32(dev, MTK_WED_WPDMA_RX_RING, dev->wlan.wpdma_rx);
+ if (mtk_wed_is_v1(dev->hw)) {
+ wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_phys);
+ return;
}
+
+ mtk_wed_bus_init(dev);
+
+ wed_w32(dev, MTK_WED_WPDMA_CFG_BASE, dev->wlan.wpdma_int);
+ wed_w32(dev, MTK_WED_WPDMA_CFG_INT_MASK, dev->wlan.wpdma_mask);
+ wed_w32(dev, MTK_WED_WPDMA_CFG_TX, dev->wlan.wpdma_tx);
+ wed_w32(dev, MTK_WED_WPDMA_CFG_TX_FREE, dev->wlan.wpdma_txfree);
+
+ if (!mtk_wed_get_rx_capa(dev))
+ return;
+
+ wed_w32(dev, MTK_WED_WPDMA_RX_GLO_CFG, dev->wlan.wpdma_rx_glo);
+ wed_w32(dev, dev->hw->soc->regmap.wpdma_rx_ring0, dev->wlan.wpdma_rx);
+
+ if (!dev->wlan.hw_rro)
+ return;
+
+ wed_w32(dev, MTK_WED_RRO_RX_D_CFG(0), dev->wlan.wpdma_rx_rro[0]);
+ wed_w32(dev, MTK_WED_RRO_RX_D_CFG(1), dev->wlan.wpdma_rx_rro[1]);
+ for (i = 0; i < MTK_WED_RX_PAGE_QUEUES; i++)
+ wed_w32(dev, MTK_WED_RRO_MSDU_PG_RING_CFG(i),
+ dev->wlan.wpdma_rx_pg + i * 0x10);
}
static void
mtk_wed_hw_init_early(struct mtk_wed_device *dev)
{
- u32 mask, set;
+ u32 set = FIELD_PREP(MTK_WED_WDMA_GLO_CFG_BT_SIZE, 2);
+ u32 mask = MTK_WED_WDMA_GLO_CFG_BT_SIZE;
mtk_wed_deinit(dev);
mtk_wed_reset(dev, MTK_WED_RESET_WED);
mtk_wed_set_wpdma(dev);
- mask = MTK_WED_WDMA_GLO_CFG_BT_SIZE |
- MTK_WED_WDMA_GLO_CFG_DYNAMIC_DMAD_RECYCLE |
- MTK_WED_WDMA_GLO_CFG_RX_DIS_FSM_AUTO_IDLE;
- set = FIELD_PREP(MTK_WED_WDMA_GLO_CFG_BT_SIZE, 2) |
- MTK_WED_WDMA_GLO_CFG_DYNAMIC_SKIP_DMAD_PREP |
- MTK_WED_WDMA_GLO_CFG_IDLE_DMAD_SUPPLY;
+ if (!mtk_wed_is_v3_or_greater(dev->hw)) {
+ mask |= MTK_WED_WDMA_GLO_CFG_DYNAMIC_DMAD_RECYCLE |
+ MTK_WED_WDMA_GLO_CFG_RX_DIS_FSM_AUTO_IDLE;
+ set |= MTK_WED_WDMA_GLO_CFG_DYNAMIC_SKIP_DMAD_PREP |
+ MTK_WED_WDMA_GLO_CFG_IDLE_DMAD_SUPPLY;
+ }
wed_m32(dev, MTK_WED_WDMA_GLO_CFG, mask, set);
- if (dev->hw->version == 1) {
+ if (mtk_wed_is_v1(dev->hw)) {
u32 offset = dev->hw->index ? 0x04000400 : 0;
wdma_set(dev, MTK_WDMA_GLO_CFG,
@@ -907,11 +1413,18 @@ mtk_wed_route_qm_hw_init(struct mtk_wed_device *dev)
}
/* configure RX_ROUTE_QM */
- wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_Q_RST);
- wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_TXDMAD_FPORT);
- wed_set(dev, MTK_WED_RTQM_GLO_CFG,
- FIELD_PREP(MTK_WED_RTQM_TXDMAD_FPORT, 0x3 + dev->hw->index));
- wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_Q_RST);
+ if (mtk_wed_is_v2(dev->hw)) {
+ wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_Q_RST);
+ wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_TXDMAD_FPORT);
+ wed_set(dev, MTK_WED_RTQM_GLO_CFG,
+ FIELD_PREP(MTK_WED_RTQM_TXDMAD_FPORT,
+ 0x3 + dev->hw->index));
+ wed_clr(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_Q_RST);
+ } else {
+ wed_set(dev, MTK_WED_RTQM_ENQ_CFG0,
+ FIELD_PREP(MTK_WED_RTQM_ENQ_CFG_TXDMAD_FPORT,
+ 0x3 + dev->hw->index));
+ }
/* enable RX_ROUTE_QM */
wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_RX_ROUTE_QM_EN);
}
@@ -924,34 +1437,30 @@ mtk_wed_hw_init(struct mtk_wed_device *dev)
dev->init_done = true;
mtk_wed_set_ext_int(dev, false);
- wed_w32(dev, MTK_WED_TX_BM_CTRL,
- MTK_WED_TX_BM_CTRL_PAUSE |
- FIELD_PREP(MTK_WED_TX_BM_CTRL_VLD_GRP_NUM,
- dev->tx_buf_ring.size / 128) |
- FIELD_PREP(MTK_WED_TX_BM_CTRL_RSV_GRP_NUM,
- MTK_WED_TX_RING_SIZE / 256));
wed_w32(dev, MTK_WED_TX_BM_BASE, dev->tx_buf_ring.desc_phys);
-
wed_w32(dev, MTK_WED_TX_BM_BUF_LEN, MTK_WED_PKT_SIZE);
- if (dev->hw->version == 1) {
- wed_w32(dev, MTK_WED_TX_BM_TKID,
- FIELD_PREP(MTK_WED_TX_BM_TKID_START,
- dev->wlan.token_start) |
- FIELD_PREP(MTK_WED_TX_BM_TKID_END,
- dev->wlan.token_start +
- dev->wlan.nbuf - 1));
+ if (mtk_wed_is_v1(dev->hw)) {
+ wed_w32(dev, MTK_WED_TX_BM_CTRL,
+ MTK_WED_TX_BM_CTRL_PAUSE |
+ FIELD_PREP(MTK_WED_TX_BM_CTRL_VLD_GRP_NUM,
+ dev->tx_buf_ring.size / 128) |
+ FIELD_PREP(MTK_WED_TX_BM_CTRL_RSV_GRP_NUM,
+ MTK_WED_TX_RING_SIZE / 256));
wed_w32(dev, MTK_WED_TX_BM_DYN_THR,
FIELD_PREP(MTK_WED_TX_BM_DYN_THR_LO, 1) |
MTK_WED_TX_BM_DYN_THR_HI);
- } else {
- wed_w32(dev, MTK_WED_TX_BM_TKID_V2,
- FIELD_PREP(MTK_WED_TX_BM_TKID_START,
- dev->wlan.token_start) |
- FIELD_PREP(MTK_WED_TX_BM_TKID_END,
- dev->wlan.token_start +
- dev->wlan.nbuf - 1));
+ } else if (mtk_wed_is_v2(dev->hw)) {
+ wed_w32(dev, MTK_WED_TX_BM_CTRL,
+ MTK_WED_TX_BM_CTRL_PAUSE |
+ FIELD_PREP(MTK_WED_TX_BM_CTRL_VLD_GRP_NUM,
+ dev->tx_buf_ring.size / 128) |
+ FIELD_PREP(MTK_WED_TX_BM_CTRL_RSV_GRP_NUM,
+ MTK_WED_TX_RING_SIZE / 256));
+ wed_w32(dev, MTK_WED_TX_TKID_DYN_THR,
+ FIELD_PREP(MTK_WED_TX_TKID_DYN_THR_LO, 0) |
+ MTK_WED_TX_TKID_DYN_THR_HI);
wed_w32(dev, MTK_WED_TX_BM_DYN_THR,
FIELD_PREP(MTK_WED_TX_BM_DYN_THR_LO_V2, 0) |
MTK_WED_TX_BM_DYN_THR_HI_V2);
@@ -961,31 +1470,71 @@ mtk_wed_hw_init(struct mtk_wed_device *dev)
dev->tx_buf_ring.size / 128) |
FIELD_PREP(MTK_WED_TX_TKID_CTRL_RSV_GRP_NUM,
dev->tx_buf_ring.size / 128));
- wed_w32(dev, MTK_WED_TX_TKID_DYN_THR,
- FIELD_PREP(MTK_WED_TX_TKID_DYN_THR_LO, 0) |
- MTK_WED_TX_TKID_DYN_THR_HI);
}
+ wed_w32(dev, dev->hw->soc->regmap.tx_bm_tkid,
+ FIELD_PREP(MTK_WED_TX_BM_TKID_START, dev->wlan.token_start) |
+ FIELD_PREP(MTK_WED_TX_BM_TKID_END,
+ dev->wlan.token_start + dev->wlan.nbuf - 1));
+
mtk_wed_reset(dev, MTK_WED_RESET_TX_BM);
- if (dev->hw->version == 1) {
+ if (mtk_wed_is_v3_or_greater(dev->hw)) {
+ /* switch to new bm architecture */
+ wed_clr(dev, MTK_WED_TX_BM_CTRL,
+ MTK_WED_TX_BM_CTRL_LEGACY_EN);
+
+ wed_w32(dev, MTK_WED_TX_TKID_CTRL,
+ MTK_WED_TX_TKID_CTRL_PAUSE |
+ FIELD_PREP(MTK_WED_TX_TKID_CTRL_VLD_GRP_NUM_V3,
+ dev->wlan.nbuf / 128) |
+ FIELD_PREP(MTK_WED_TX_TKID_CTRL_RSV_GRP_NUM_V3,
+ dev->wlan.nbuf / 128));
+ /* return SKBID + SDP back to bm */
+ wed_set(dev, MTK_WED_TX_TKID_CTRL,
+ MTK_WED_TX_TKID_CTRL_FREE_FORMAT);
+
+ wed_w32(dev, MTK_WED_TX_BM_INIT_PTR,
+ MTK_WED_TX_BM_PKT_CNT |
+ MTK_WED_TX_BM_INIT_SW_TAIL_IDX);
+ }
+
+ if (mtk_wed_is_v1(dev->hw)) {
wed_set(dev, MTK_WED_CTRL,
MTK_WED_CTRL_WED_TX_BM_EN |
MTK_WED_CTRL_WED_TX_FREE_AGENT_EN);
- } else {
- wed_clr(dev, MTK_WED_TX_TKID_CTRL, MTK_WED_TX_TKID_CTRL_PAUSE);
+ } else if (mtk_wed_get_rx_capa(dev)) {
/* rx hw init */
wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX,
MTK_WED_WPDMA_RX_D_RST_CRX_IDX |
MTK_WED_WPDMA_RX_D_RST_DRV_IDX);
wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX, 0);
+ /* reset prefetch index of ring */
+ wed_set(dev, MTK_WED_WPDMA_RX_D_PREF_RX0_SIDX,
+ MTK_WED_WPDMA_RX_D_PREF_SIDX_IDX_CLR);
+ wed_clr(dev, MTK_WED_WPDMA_RX_D_PREF_RX0_SIDX,
+ MTK_WED_WPDMA_RX_D_PREF_SIDX_IDX_CLR);
+
+ wed_set(dev, MTK_WED_WPDMA_RX_D_PREF_RX1_SIDX,
+ MTK_WED_WPDMA_RX_D_PREF_SIDX_IDX_CLR);
+ wed_clr(dev, MTK_WED_WPDMA_RX_D_PREF_RX1_SIDX,
+ MTK_WED_WPDMA_RX_D_PREF_SIDX_IDX_CLR);
+
+ /* reset prefetch FIFO of ring */
+ wed_set(dev, MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG,
+ MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG_R0_CLR |
+ MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG_R1_CLR);
+ wed_w32(dev, MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG, 0);
+
mtk_wed_rx_buffer_hw_init(dev);
mtk_wed_rro_hw_init(dev);
mtk_wed_route_qm_hw_init(dev);
}
wed_clr(dev, MTK_WED_TX_BM_CTRL, MTK_WED_TX_BM_CTRL_PAUSE);
+ if (!mtk_wed_is_v1(dev->hw))
+ wed_clr(dev, MTK_WED_TX_TKID_CTRL, MTK_WED_TX_TKID_CTRL_PAUSE);
}
static void
@@ -1008,23 +1557,6 @@ mtk_wed_ring_reset(struct mtk_wed_ring *ring, int size, bool tx)
}
}
-static u32
-mtk_wed_check_busy(struct mtk_wed_device *dev, u32 reg, u32 mask)
-{
- return !!(wed_r32(dev, reg) & mask);
-}
-
-static int
-mtk_wed_poll_busy(struct mtk_wed_device *dev, u32 reg, u32 mask)
-{
- int sleep = 15000;
- int timeout = 100 * sleep;
- u32 val;
-
- return read_poll_timeout(mtk_wed_check_busy, val, !val, sleep,
- timeout, false, dev, reg, mask);
-}
-
static int
mtk_wed_rx_reset(struct mtk_wed_device *dev)
{
@@ -1038,13 +1570,33 @@ mtk_wed_rx_reset(struct mtk_wed_device *dev)
if (ret)
return ret;
+ if (dev->wlan.hw_rro) {
+ wed_clr(dev, MTK_WED_CTRL, MTK_WED_CTRL_WED_RX_IND_CMD_EN);
+ mtk_wed_poll_busy(dev, MTK_WED_RRO_RX_HW_STS,
+ MTK_WED_RX_IND_CMD_BUSY);
+ mtk_wed_reset(dev, MTK_WED_RESET_RRO_RX_TO_PG);
+ }
+
wed_clr(dev, MTK_WED_WPDMA_RX_D_GLO_CFG, MTK_WED_WPDMA_RX_D_RX_DRV_EN);
ret = mtk_wed_poll_busy(dev, MTK_WED_WPDMA_RX_D_GLO_CFG,
MTK_WED_WPDMA_RX_D_RX_DRV_BUSY);
+ if (!ret && mtk_wed_is_v3_or_greater(dev->hw))
+ ret = mtk_wed_poll_busy(dev, MTK_WED_WPDMA_RX_D_PREF_CFG,
+ MTK_WED_WPDMA_RX_D_PREF_BUSY);
if (ret) {
mtk_wed_reset(dev, MTK_WED_RESET_WPDMA_INT_AGENT);
mtk_wed_reset(dev, MTK_WED_RESET_WPDMA_RX_D_DRV);
} else {
+ if (mtk_wed_is_v3_or_greater(dev->hw)) {
+ /* 1.a. disable prefetch HW */
+ wed_clr(dev, MTK_WED_WPDMA_RX_D_PREF_CFG,
+ MTK_WED_WPDMA_RX_D_PREF_EN);
+ mtk_wed_poll_busy(dev, MTK_WED_WPDMA_RX_D_PREF_CFG,
+ MTK_WED_WPDMA_RX_D_PREF_BUSY);
+ wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX,
+ MTK_WED_WPDMA_RX_D_RST_DRV_IDX_ALL);
+ }
+
wed_w32(dev, MTK_WED_WPDMA_RX_D_RST_IDX,
MTK_WED_WPDMA_RX_D_RST_CRX_IDX |
MTK_WED_WPDMA_RX_D_RST_DRV_IDX);
@@ -1072,23 +1624,52 @@ mtk_wed_rx_reset(struct mtk_wed_device *dev)
wed_w32(dev, MTK_WED_RROQM_RST_IDX, 0);
}
+ if (dev->wlan.hw_rro) {
+ /* disable rro msdu page drv */
+ wed_clr(dev, MTK_WED_RRO_MSDU_PG_RING2_CFG,
+ MTK_WED_RRO_MSDU_PG_DRV_EN);
+
+ /* disable rro data drv */
+ wed_clr(dev, MTK_WED_RRO_RX_D_CFG(2), MTK_WED_RRO_RX_D_DRV_EN);
+
+ /* rro msdu page drv reset */
+ wed_w32(dev, MTK_WED_RRO_MSDU_PG_RING2_CFG,
+ MTK_WED_RRO_MSDU_PG_DRV_CLR);
+ mtk_wed_poll_busy(dev, MTK_WED_RRO_MSDU_PG_RING2_CFG,
+ MTK_WED_RRO_MSDU_PG_DRV_CLR);
+
+ /* rro data drv reset */
+ wed_w32(dev, MTK_WED_RRO_RX_D_CFG(2),
+ MTK_WED_RRO_RX_D_DRV_CLR);
+ mtk_wed_poll_busy(dev, MTK_WED_RRO_RX_D_CFG(2),
+ MTK_WED_RRO_RX_D_DRV_CLR);
+ }
+
/* reset route qm */
wed_clr(dev, MTK_WED_CTRL, MTK_WED_CTRL_RX_ROUTE_QM_EN);
ret = mtk_wed_poll_busy(dev, MTK_WED_CTRL,
MTK_WED_CTRL_RX_ROUTE_QM_BUSY);
- if (ret)
+ if (ret) {
mtk_wed_reset(dev, MTK_WED_RESET_RX_ROUTE_QM);
- else
- wed_set(dev, MTK_WED_RTQM_GLO_CFG,
- MTK_WED_RTQM_Q_RST);
+ } else if (mtk_wed_is_v3_or_greater(dev->hw)) {
+ wed_set(dev, MTK_WED_RTQM_RST, BIT(0));
+ wed_clr(dev, MTK_WED_RTQM_RST, BIT(0));
+ mtk_wed_reset(dev, MTK_WED_RESET_RX_ROUTE_QM);
+ } else {
+ wed_set(dev, MTK_WED_RTQM_GLO_CFG, MTK_WED_RTQM_Q_RST);
+ }
/* reset tx wdma */
mtk_wdma_tx_reset(dev);
/* reset tx wdma drv */
wed_clr(dev, MTK_WED_WDMA_GLO_CFG, MTK_WED_WDMA_GLO_CFG_TX_DRV_EN);
- mtk_wed_poll_busy(dev, MTK_WED_CTRL,
- MTK_WED_CTRL_WDMA_INT_AGENT_BUSY);
+ if (mtk_wed_is_v3_or_greater(dev->hw))
+ mtk_wed_poll_busy(dev, MTK_WED_WPDMA_STATUS,
+ MTK_WED_WPDMA_STATUS_TX_DRV);
+ else
+ mtk_wed_poll_busy(dev, MTK_WED_CTRL,
+ MTK_WED_CTRL_WDMA_INT_AGENT_BUSY);
mtk_wed_reset(dev, MTK_WED_RESET_WDMA_TX_DRV);
/* reset wed rx dma */
@@ -1098,13 +1679,8 @@ mtk_wed_rx_reset(struct mtk_wed_device *dev)
if (ret) {
mtk_wed_reset(dev, MTK_WED_RESET_WED_RX_DMA);
} else {
- struct mtk_eth *eth = dev->hw->eth;
-
- if (mtk_is_netsys_v2_or_greater(eth))
- wed_set(dev, MTK_WED_RESET_IDX,
- MTK_WED_RESET_IDX_RX_V2);
- else
- wed_set(dev, MTK_WED_RESET_IDX, MTK_WED_RESET_IDX_RX);
+ wed_set(dev, MTK_WED_RESET_IDX,
+ dev->hw->soc->regmap.reset_idx_rx_mask);
wed_w32(dev, MTK_WED_RESET_IDX, 0);
}
@@ -1114,6 +1690,14 @@ mtk_wed_rx_reset(struct mtk_wed_device *dev)
MTK_WED_CTRL_WED_RX_BM_BUSY);
mtk_wed_reset(dev, MTK_WED_RESET_RX_BM);
+ if (dev->wlan.hw_rro) {
+ wed_clr(dev, MTK_WED_CTRL, MTK_WED_CTRL_WED_RX_PG_BM_EN);
+ mtk_wed_poll_busy(dev, MTK_WED_CTRL,
+ MTK_WED_CTRL_WED_RX_PG_BM_BUSY);
+ wed_set(dev, MTK_WED_RESET, MTK_WED_RESET_RX_PG_BM);
+ wed_clr(dev, MTK_WED_RESET, MTK_WED_RESET_RX_PG_BM);
+ }
+
/* wo change to enable state */
val = MTK_WED_WO_STATE_ENABLE;
ret = mtk_wed_mcu_send_msg(wo, MTK_WED_MODULE_ID_WO,
@@ -1131,6 +1715,7 @@ mtk_wed_rx_reset(struct mtk_wed_device *dev)
false);
}
mtk_wed_free_rx_buffer(dev);
+ mtk_wed_hwrro_free_buffer(dev);
return 0;
}
@@ -1157,21 +1742,48 @@ mtk_wed_reset_dma(struct mtk_wed_device *dev)
if (busy) {
mtk_wed_reset(dev, MTK_WED_RESET_WED_TX_DMA);
} else {
- wed_w32(dev, MTK_WED_RESET_IDX, MTK_WED_RESET_IDX_TX);
+ wed_w32(dev, MTK_WED_RESET_IDX,
+ dev->hw->soc->regmap.reset_idx_tx_mask);
wed_w32(dev, MTK_WED_RESET_IDX, 0);
}
/* 2. reset WDMA rx DMA */
busy = !!mtk_wdma_rx_reset(dev);
- wed_clr(dev, MTK_WED_WDMA_GLO_CFG, MTK_WED_WDMA_GLO_CFG_RX_DRV_EN);
+ if (mtk_wed_is_v3_or_greater(dev->hw)) {
+ val = MTK_WED_WDMA_GLO_CFG_RX_DIS_FSM_AUTO_IDLE |
+ wed_r32(dev, MTK_WED_WDMA_GLO_CFG);
+ val &= ~MTK_WED_WDMA_GLO_CFG_RX_DRV_EN;
+ wed_w32(dev, MTK_WED_WDMA_GLO_CFG, val);
+ } else {
+ wed_clr(dev, MTK_WED_WDMA_GLO_CFG,
+ MTK_WED_WDMA_GLO_CFG_RX_DRV_EN);
+ }
+
if (!busy)
busy = mtk_wed_poll_busy(dev, MTK_WED_WDMA_GLO_CFG,
MTK_WED_WDMA_GLO_CFG_RX_DRV_BUSY);
+ if (!busy && mtk_wed_is_v3_or_greater(dev->hw))
+ busy = mtk_wed_poll_busy(dev, MTK_WED_WDMA_RX_PREF_CFG,
+ MTK_WED_WDMA_RX_PREF_BUSY);
if (busy) {
mtk_wed_reset(dev, MTK_WED_RESET_WDMA_INT_AGENT);
mtk_wed_reset(dev, MTK_WED_RESET_WDMA_RX_DRV);
} else {
+ if (mtk_wed_is_v3_or_greater(dev->hw)) {
+ /* 1.a. disable prefetch HW */
+ wed_clr(dev, MTK_WED_WDMA_RX_PREF_CFG,
+ MTK_WED_WDMA_RX_PREF_EN);
+ mtk_wed_poll_busy(dev, MTK_WED_WDMA_RX_PREF_CFG,
+ MTK_WED_WDMA_RX_PREF_BUSY);
+ wed_clr(dev, MTK_WED_WDMA_RX_PREF_CFG,
+ MTK_WED_WDMA_RX_PREF_DDONE2_EN);
+
+ /* 2. Reset dma index */
+ wed_w32(dev, MTK_WED_WDMA_RESET_IDX,
+ MTK_WED_WDMA_RESET_IDX_RX_ALL);
+ }
+
wed_w32(dev, MTK_WED_WDMA_RESET_IDX,
MTK_WED_WDMA_RESET_IDX_RX | MTK_WED_WDMA_RESET_IDX_DRV);
wed_w32(dev, MTK_WED_WDMA_RESET_IDX, 0);
@@ -1187,8 +1799,13 @@ mtk_wed_reset_dma(struct mtk_wed_device *dev)
wed_clr(dev, MTK_WED_CTRL, MTK_WED_CTRL_WED_TX_FREE_AGENT_EN);
for (i = 0; i < 100; i++) {
- val = wed_r32(dev, MTK_WED_TX_BM_INTF);
- if (FIELD_GET(MTK_WED_TX_BM_INTF_TKFIFO_FDEP, val) == 0x40)
+ if (mtk_wed_is_v1(dev->hw))
+ val = FIELD_GET(MTK_WED_TX_BM_INTF_TKFIFO_FDEP,
+ wed_r32(dev, MTK_WED_TX_BM_INTF));
+ else
+ val = FIELD_GET(MTK_WED_TX_TKID_INTF_TKFIFO_FDEP,
+ wed_r32(dev, MTK_WED_TX_TKID_INTF));
+ if (val == 0x40)
break;
}
@@ -1210,6 +1827,8 @@ mtk_wed_reset_dma(struct mtk_wed_device *dev)
mtk_wed_reset(dev, MTK_WED_RESET_WPDMA_INT_AGENT);
mtk_wed_reset(dev, MTK_WED_RESET_WPDMA_TX_DRV);
mtk_wed_reset(dev, MTK_WED_RESET_WPDMA_RX_DRV);
+ if (mtk_wed_is_v3_or_greater(dev->hw))
+ wed_w32(dev, MTK_WED_RX1_CTRL2, 0);
} else {
wed_w32(dev, MTK_WED_WPDMA_RESET_IDX,
MTK_WED_WPDMA_RESET_IDX_TX |
@@ -1218,7 +1837,7 @@ mtk_wed_reset_dma(struct mtk_wed_device *dev)
}
dev->init_done = false;
- if (dev->hw->version == 1)
+ if (mtk_wed_is_v1(dev->hw))
return;
if (!busy) {
@@ -1226,7 +1845,14 @@ mtk_wed_reset_dma(struct mtk_wed_device *dev)
wed_w32(dev, MTK_WED_RESET_IDX, 0);
}
- mtk_wed_rx_reset(dev);
+ if (mtk_wed_is_v3_or_greater(dev->hw)) {
+ /* reset amsdu engine */
+ wed_clr(dev, MTK_WED_CTRL, MTK_WED_CTRL_TX_AMSDU_EN);
+ mtk_wed_reset(dev, MTK_WED_RESET_TX_AMSDU);
+ }
+
+ if (mtk_wed_get_rx_capa(dev))
+ mtk_wed_rx_reset(dev);
}
static int
@@ -1249,7 +1875,6 @@ static int
mtk_wed_wdma_rx_ring_setup(struct mtk_wed_device *dev, int idx, int size,
bool reset)
{
- u32 desc_size = sizeof(struct mtk_wdma_desc) * dev->hw->version;
struct mtk_wed_ring *wdma;
if (idx >= ARRAY_SIZE(dev->rx_wdma))
@@ -1257,7 +1882,7 @@ mtk_wed_wdma_rx_ring_setup(struct mtk_wed_device *dev, int idx, int size,
wdma = &dev->rx_wdma[idx];
if (!reset && mtk_wed_ring_alloc(dev, wdma, MTK_WED_WDMA_RING_SIZE,
- desc_size, true))
+ dev->hw->soc->wdma_desc_size, true))
return -ENOMEM;
wdma_w32(dev, MTK_WDMA_RING_RX(idx) + MTK_WED_RING_OFS_BASE,
@@ -1278,7 +1903,6 @@ static int
mtk_wed_wdma_tx_ring_setup(struct mtk_wed_device *dev, int idx, int size,
bool reset)
{
- u32 desc_size = sizeof(struct mtk_wdma_desc) * dev->hw->version;
struct mtk_wed_ring *wdma;
if (idx >= ARRAY_SIZE(dev->tx_wdma))
@@ -1286,9 +1910,27 @@ mtk_wed_wdma_tx_ring_setup(struct mtk_wed_device *dev, int idx, int size,
wdma = &dev->tx_wdma[idx];
if (!reset && mtk_wed_ring_alloc(dev, wdma, MTK_WED_WDMA_RING_SIZE,
- desc_size, true))
+ dev->hw->soc->wdma_desc_size, true))
return -ENOMEM;
+ if (mtk_wed_is_v3_or_greater(dev->hw)) {
+ struct mtk_wdma_desc *desc = wdma->desc;
+ int i;
+
+ for (i = 0; i < MTK_WED_WDMA_RING_SIZE; i++) {
+ desc->buf0 = 0;
+ desc->ctrl = cpu_to_le32(MTK_WDMA_DESC_CTRL_DMA_DONE);
+ desc->buf1 = 0;
+ desc->info = cpu_to_le32(MTK_WDMA_TXD0_DESC_INFO_DMA_DONE);
+ desc++;
+ desc->buf0 = 0;
+ desc->ctrl = cpu_to_le32(MTK_WDMA_DESC_CTRL_DMA_DONE);
+ desc->buf1 = 0;
+ desc->info = cpu_to_le32(MTK_WDMA_TXD1_DESC_INFO_DMA_DONE);
+ desc++;
+ }
+ }
+
wdma_w32(dev, MTK_WDMA_RING_TX(idx) + MTK_WED_RING_OFS_BASE,
wdma->desc_phys);
wdma_w32(dev, MTK_WDMA_RING_TX(idx) + MTK_WED_RING_OFS_COUNT,
@@ -1344,7 +1986,7 @@ mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask)
MTK_WED_CTRL_WED_TX_BM_EN |
MTK_WED_CTRL_WED_TX_FREE_AGENT_EN);
- if (dev->hw->version == 1) {
+ if (mtk_wed_is_v1(dev->hw)) {
wed_w32(dev, MTK_WED_PCIE_INT_TRIGGER,
MTK_WED_PCIE_INT_TRIGGER_STATUS);
@@ -1354,8 +1996,9 @@ mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask)
wed_clr(dev, MTK_WED_WDMA_INT_CTRL, wdma_mask);
} else {
- wdma_mask |= FIELD_PREP(MTK_WDMA_INT_MASK_TX_DONE,
- GENMASK(1, 0));
+ if (mtk_wed_is_v3_or_greater(dev->hw))
+ wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_TX_TKID_ALI_EN);
+
+ /* initial tx interrupt trigger */
wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_TX,
MTK_WED_WPDMA_INT_CTRL_TX0_DONE_EN |
@@ -1374,15 +2017,20 @@ mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask)
FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_TX_FREE_DONE_TRIG,
dev->wlan.txfree_tbit));
- wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_RX,
- MTK_WED_WPDMA_INT_CTRL_RX0_EN |
- MTK_WED_WPDMA_INT_CTRL_RX0_CLR |
- MTK_WED_WPDMA_INT_CTRL_RX1_EN |
- MTK_WED_WPDMA_INT_CTRL_RX1_CLR |
- FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RX0_DONE_TRIG,
- dev->wlan.rx_tbit[0]) |
- FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RX1_DONE_TRIG,
- dev->wlan.rx_tbit[1]));
+ if (mtk_wed_get_rx_capa(dev)) {
+ wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_RX,
+ MTK_WED_WPDMA_INT_CTRL_RX0_EN |
+ MTK_WED_WPDMA_INT_CTRL_RX0_CLR |
+ MTK_WED_WPDMA_INT_CTRL_RX1_EN |
+ MTK_WED_WPDMA_INT_CTRL_RX1_CLR |
+ FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RX0_DONE_TRIG,
+ dev->wlan.rx_tbit[0]) |
+ FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RX1_DONE_TRIG,
+ dev->wlan.rx_tbit[1]));
+
+ wdma_mask |= FIELD_PREP(MTK_WDMA_INT_MASK_TX_DONE,
+ GENMASK(1, 0));
+ }
wed_w32(dev, MTK_WED_WDMA_INT_CLR, wdma_mask);
wed_set(dev, MTK_WED_WDMA_INT_CTRL,
@@ -1398,55 +2046,280 @@ mtk_wed_configure_irq(struct mtk_wed_device *dev, u32 irq_mask)
wed_w32(dev, MTK_WED_INT_MASK, irq_mask);
}
+#define MTK_WFMDA_RX_DMA_EN BIT(2)
static void
mtk_wed_dma_enable(struct mtk_wed_device *dev)
{
- wed_set(dev, MTK_WED_WPDMA_INT_CTRL, MTK_WED_WPDMA_INT_CTRL_SUBRT_ADV);
+ int i;
+
+ if (!mtk_wed_is_v3_or_greater(dev->hw)) {
+ wed_set(dev, MTK_WED_WPDMA_INT_CTRL,
+ MTK_WED_WPDMA_INT_CTRL_SUBRT_ADV);
+ wed_set(dev, MTK_WED_WPDMA_GLO_CFG,
+ MTK_WED_WPDMA_GLO_CFG_TX_DRV_EN |
+ MTK_WED_WPDMA_GLO_CFG_RX_DRV_EN);
+ wdma_set(dev, MTK_WDMA_GLO_CFG,
+ MTK_WDMA_GLO_CFG_TX_DMA_EN |
+ MTK_WDMA_GLO_CFG_RX_INFO1_PRERES |
+ MTK_WDMA_GLO_CFG_RX_INFO2_PRERES);
+ wed_set(dev, MTK_WED_WPDMA_CTRL, MTK_WED_WPDMA_CTRL_SDL1_FIXED);
+ } else {
+ wed_set(dev, MTK_WED_WPDMA_GLO_CFG,
+ MTK_WED_WPDMA_GLO_CFG_TX_DRV_EN |
+ MTK_WED_WPDMA_GLO_CFG_RX_DRV_EN |
+ MTK_WED_WPDMA_GLO_CFG_RX_DDONE2_WR);
+ wdma_set(dev, MTK_WDMA_GLO_CFG, MTK_WDMA_GLO_CFG_TX_DMA_EN);
+ }
wed_set(dev, MTK_WED_GLO_CFG,
MTK_WED_GLO_CFG_TX_DMA_EN |
MTK_WED_GLO_CFG_RX_DMA_EN);
- wed_set(dev, MTK_WED_WPDMA_GLO_CFG,
- MTK_WED_WPDMA_GLO_CFG_TX_DRV_EN |
- MTK_WED_WPDMA_GLO_CFG_RX_DRV_EN);
+
wed_set(dev, MTK_WED_WDMA_GLO_CFG,
MTK_WED_WDMA_GLO_CFG_RX_DRV_EN);
- wdma_set(dev, MTK_WDMA_GLO_CFG,
- MTK_WDMA_GLO_CFG_TX_DMA_EN |
- MTK_WDMA_GLO_CFG_RX_INFO1_PRERES |
- MTK_WDMA_GLO_CFG_RX_INFO2_PRERES);
-
- if (dev->hw->version == 1) {
+ if (mtk_wed_is_v1(dev->hw)) {
wdma_set(dev, MTK_WDMA_GLO_CFG,
MTK_WDMA_GLO_CFG_RX_INFO3_PRERES);
- } else {
- int i;
+ return;
+ }
- wed_set(dev, MTK_WED_WPDMA_CTRL,
- MTK_WED_WPDMA_CTRL_SDL1_FIXED);
+ wed_set(dev, MTK_WED_WPDMA_GLO_CFG,
+ MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_PKT_PROC |
+ MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_CRX_SYNC);
- wed_set(dev, MTK_WED_WDMA_GLO_CFG,
- MTK_WED_WDMA_GLO_CFG_TX_DRV_EN |
- MTK_WED_WDMA_GLO_CFG_TX_DDONE_CHK);
+ if (mtk_wed_is_v3_or_greater(dev->hw)) {
+ wed_set(dev, MTK_WED_WDMA_RX_PREF_CFG,
+ FIELD_PREP(MTK_WED_WDMA_RX_PREF_BURST_SIZE, 0x10) |
+ FIELD_PREP(MTK_WED_WDMA_RX_PREF_LOW_THRES, 0x8));
+ wed_clr(dev, MTK_WED_WDMA_RX_PREF_CFG,
+ MTK_WED_WDMA_RX_PREF_DDONE2_EN);
+ wed_set(dev, MTK_WED_WDMA_RX_PREF_CFG, MTK_WED_WDMA_RX_PREF_EN);
+ wed_clr(dev, MTK_WED_WPDMA_GLO_CFG,
+ MTK_WED_WPDMA_GLO_CFG_TX_DDONE_CHK_LAST);
wed_set(dev, MTK_WED_WPDMA_GLO_CFG,
- MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_PKT_PROC |
- MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_CRX_SYNC);
+ MTK_WED_WPDMA_GLO_CFG_TX_DDONE_CHK |
+ MTK_WED_WPDMA_GLO_CFG_RX_DRV_EVENT_PKT_FMT_CHK |
+ MTK_WED_WPDMA_GLO_CFG_RX_DRV_UNS_VER_FORCE_4);
- wed_clr(dev, MTK_WED_WPDMA_GLO_CFG,
- MTK_WED_WPDMA_GLO_CFG_TX_TKID_KEEP |
- MTK_WED_WPDMA_GLO_CFG_TX_DMAD_DW3_PREV);
+ wdma_set(dev, MTK_WDMA_PREF_RX_CFG, MTK_WDMA_PREF_RX_CFG_PREF_EN);
+ wdma_set(dev, MTK_WDMA_WRBK_RX_CFG, MTK_WDMA_WRBK_RX_CFG_WRBK_EN);
+ }
- wed_set(dev, MTK_WED_WPDMA_RX_D_GLO_CFG,
- MTK_WED_WPDMA_RX_D_RX_DRV_EN |
- FIELD_PREP(MTK_WED_WPDMA_RX_D_RXD_READ_LEN, 0x18) |
- FIELD_PREP(MTK_WED_WPDMA_RX_D_INIT_PHASE_RXEN_SEL,
- 0x2));
+ wed_clr(dev, MTK_WED_WPDMA_GLO_CFG,
+ MTK_WED_WPDMA_GLO_CFG_TX_TKID_KEEP |
+ MTK_WED_WPDMA_GLO_CFG_TX_DMAD_DW3_PREV);
+
+ if (!mtk_wed_get_rx_capa(dev))
+ return;
+
+ wed_set(dev, MTK_WED_WDMA_GLO_CFG,
+ MTK_WED_WDMA_GLO_CFG_TX_DRV_EN |
+ MTK_WED_WDMA_GLO_CFG_TX_DDONE_CHK);
+
+ wed_clr(dev, MTK_WED_WPDMA_RX_D_GLO_CFG, MTK_WED_WPDMA_RX_D_RXD_READ_LEN);
+ wed_set(dev, MTK_WED_WPDMA_RX_D_GLO_CFG,
+ MTK_WED_WPDMA_RX_D_RX_DRV_EN |
+ FIELD_PREP(MTK_WED_WPDMA_RX_D_RXD_READ_LEN, 0x18) |
+ FIELD_PREP(MTK_WED_WPDMA_RX_D_INIT_PHASE_RXEN_SEL, 0x2));
+
+ if (mtk_wed_is_v3_or_greater(dev->hw)) {
+ wed_set(dev, MTK_WED_WPDMA_RX_D_PREF_CFG,
+ MTK_WED_WPDMA_RX_D_PREF_EN |
+ FIELD_PREP(MTK_WED_WPDMA_RX_D_PREF_BURST_SIZE, 0x10) |
+ FIELD_PREP(MTK_WED_WPDMA_RX_D_PREF_LOW_THRES, 0x8));
+
+ wed_set(dev, MTK_WED_RRO_RX_D_CFG(2), MTK_WED_RRO_RX_D_DRV_EN);
+ wdma_set(dev, MTK_WDMA_PREF_TX_CFG, MTK_WDMA_PREF_TX_CFG_PREF_EN);
+ wdma_set(dev, MTK_WDMA_WRBK_TX_CFG, MTK_WDMA_WRBK_TX_CFG_WRBK_EN);
+ }
+
+ for (i = 0; i < MTK_WED_RX_QUEUES; i++) {
+ struct mtk_wed_ring *ring = &dev->rx_ring[i];
+ u32 val;
+
+ if (!(ring->flags & MTK_WED_RING_CONFIGURED))
+ continue; /* queue is not configured by mt76 */
+
+ if (mtk_wed_check_wfdma_rx_fill(dev, ring)) {
+ dev_err(dev->hw->dev,
+ "rx_ring(%d) dma enable failed\n", i);
+ continue;
+ }
+
+ val = wifi_r32(dev,
+ dev->wlan.wpdma_rx_glo -
+ dev->wlan.phy_base) | MTK_WFMDA_RX_DMA_EN;
+ wifi_w32(dev,
+ dev->wlan.wpdma_rx_glo - dev->wlan.phy_base,
+ val);
+ }
+}
+
+static void
+mtk_wed_start_hw_rro(struct mtk_wed_device *dev, u32 irq_mask, bool reset)
+{
+ int i;
+
+ wed_w32(dev, MTK_WED_WPDMA_INT_MASK, irq_mask);
+ wed_w32(dev, MTK_WED_INT_MASK, irq_mask);
+
+ if (!mtk_wed_get_rx_capa(dev) || !dev->wlan.hw_rro)
+ return;
+
+ if (reset) {
+ wed_set(dev, MTK_WED_RRO_MSDU_PG_RING2_CFG,
+ MTK_WED_RRO_MSDU_PG_DRV_EN);
+ return;
+ }
+
+ wed_set(dev, MTK_WED_RRO_RX_D_CFG(2), MTK_WED_RRO_MSDU_PG_DRV_CLR);
+ wed_w32(dev, MTK_WED_RRO_MSDU_PG_RING2_CFG,
+ MTK_WED_RRO_MSDU_PG_DRV_CLR);
+
+ wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_RRO_RX,
+ MTK_WED_WPDMA_INT_CTRL_RRO_RX0_EN |
+ MTK_WED_WPDMA_INT_CTRL_RRO_RX0_CLR |
+ MTK_WED_WPDMA_INT_CTRL_RRO_RX1_EN |
+ MTK_WED_WPDMA_INT_CTRL_RRO_RX1_CLR |
+ FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RRO_RX0_DONE_TRIG,
+ dev->wlan.rro_rx_tbit[0]) |
+ FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RRO_RX1_DONE_TRIG,
+ dev->wlan.rro_rx_tbit[1]));
+
+ wed_w32(dev, MTK_WED_WPDMA_INT_CTRL_RRO_MSDU_PG,
+ MTK_WED_WPDMA_INT_CTRL_RRO_PG0_EN |
+ MTK_WED_WPDMA_INT_CTRL_RRO_PG0_CLR |
+ MTK_WED_WPDMA_INT_CTRL_RRO_PG1_EN |
+ MTK_WED_WPDMA_INT_CTRL_RRO_PG1_CLR |
+ MTK_WED_WPDMA_INT_CTRL_RRO_PG2_EN |
+ MTK_WED_WPDMA_INT_CTRL_RRO_PG2_CLR |
+ FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RRO_PG0_DONE_TRIG,
+ dev->wlan.rx_pg_tbit[0]) |
+ FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RRO_PG1_DONE_TRIG,
+ dev->wlan.rx_pg_tbit[1]) |
+ FIELD_PREP(MTK_WED_WPDMA_INT_CTRL_RRO_PG2_DONE_TRIG,
+ dev->wlan.rx_pg_tbit[2]));
+
+ /* RRO_MSDU_PG_RING2_CFG1_FLD_DRV_EN should be enabled after
+ * WM FWDL is completed; otherwise the RRO_MSDU_PG ring may be broken
+ */
+ wed_set(dev, MTK_WED_RRO_MSDU_PG_RING2_CFG,
+ MTK_WED_RRO_MSDU_PG_DRV_EN);
+
+ for (i = 0; i < MTK_WED_RX_QUEUES; i++) {
+ struct mtk_wed_ring *ring = &dev->rx_rro_ring[i];
+
+ if (!(ring->flags & MTK_WED_RING_CONFIGURED))
+ continue;
+
+ if (mtk_wed_check_wfdma_rx_fill(dev, ring))
+ dev_err(dev->hw->dev,
+ "rx_rro_ring(%d) initialization failed\n", i);
+ }
+
+ for (i = 0; i < MTK_WED_RX_PAGE_QUEUES; i++) {
+ struct mtk_wed_ring *ring = &dev->rx_page_ring[i];
+
+ if (!(ring->flags & MTK_WED_RING_CONFIGURED))
+ continue;
+
+ if (mtk_wed_check_wfdma_rx_fill(dev, ring))
+ dev_err(dev->hw->dev,
+ "rx_page_ring(%d) initialization failed\n", i);
+ }
+}
+
+static void
+mtk_wed_rro_rx_ring_setup(struct mtk_wed_device *dev, int idx,
+ void __iomem *regs)
+{
+ struct mtk_wed_ring *ring = &dev->rx_rro_ring[idx];
+
+ ring->wpdma = regs;
+ wed_w32(dev, MTK_WED_RRO_RX_D_RX(idx) + MTK_WED_RING_OFS_BASE,
+ readl(regs));
+ wed_w32(dev, MTK_WED_RRO_RX_D_RX(idx) + MTK_WED_RING_OFS_COUNT,
+ readl(regs + MTK_WED_RING_OFS_COUNT));
+ ring->flags |= MTK_WED_RING_CONFIGURED;
+}
+
+static void
+mtk_wed_msdu_pg_rx_ring_setup(struct mtk_wed_device *dev, int idx, void __iomem *regs)
+{
+ struct mtk_wed_ring *ring = &dev->rx_page_ring[idx];
+
+ ring->wpdma = regs;
+ wed_w32(dev, MTK_WED_RRO_MSDU_PG_CTRL0(idx) + MTK_WED_RING_OFS_BASE,
+ readl(regs));
+ wed_w32(dev, MTK_WED_RRO_MSDU_PG_CTRL0(idx) + MTK_WED_RING_OFS_COUNT,
+ readl(regs + MTK_WED_RING_OFS_COUNT));
+ ring->flags |= MTK_WED_RING_CONFIGURED;
+}
+
+static int
+mtk_wed_ind_rx_ring_setup(struct mtk_wed_device *dev, void __iomem *regs)
+{
+ struct mtk_wed_ring *ring = &dev->ind_cmd_ring;
+ u32 val = readl(regs + MTK_WED_RING_OFS_COUNT);
+ int i, count = 0;
- for (i = 0; i < MTK_WED_RX_QUEUES; i++)
- mtk_wed_check_wfdma_rx_fill(dev, i);
+ ring->wpdma = regs;
+ wed_w32(dev, MTK_WED_IND_CMD_RX_CTRL1 + MTK_WED_RING_OFS_BASE,
+ readl(regs) & 0xfffffff0);
+
+ wed_w32(dev, MTK_WED_IND_CMD_RX_CTRL1 + MTK_WED_RING_OFS_COUNT,
+ readl(regs + MTK_WED_RING_OFS_COUNT));
+
+ /* ack sn cr */
+ wed_w32(dev, MTK_WED_RRO_CFG0, dev->wlan.phy_base +
+ dev->wlan.ind_cmd.ack_sn_addr);
+ wed_w32(dev, MTK_WED_RRO_CFG1,
+ FIELD_PREP(MTK_WED_RRO_CFG1_MAX_WIN_SZ,
+ dev->wlan.ind_cmd.win_size) |
+ FIELD_PREP(MTK_WED_RRO_CFG1_PARTICL_SE_ID,
+ dev->wlan.ind_cmd.particular_sid));
+
+ /* particular session addr element */
+ wed_w32(dev, MTK_WED_ADDR_ELEM_CFG0,
+ dev->wlan.ind_cmd.particular_se_phys);
+
+ for (i = 0; i < dev->wlan.ind_cmd.se_group_nums; i++) {
+ wed_w32(dev, MTK_WED_RADDR_ELEM_TBL_WDATA,
+ dev->wlan.ind_cmd.addr_elem_phys[i] >> 4);
+ wed_w32(dev, MTK_WED_ADDR_ELEM_TBL_CFG,
+ MTK_WED_ADDR_ELEM_TBL_WR | (i & 0x7f));
+
+ val = wed_r32(dev, MTK_WED_ADDR_ELEM_TBL_CFG);
+ while (!(val & MTK_WED_ADDR_ELEM_TBL_WR_RDY) && count++ < 100)
+ val = wed_r32(dev, MTK_WED_ADDR_ELEM_TBL_CFG);
+ if (count >= 100)
+ dev_err(dev->hw->dev,
+ "write ba session base failed\n");
+ }
+
+ /* pn check init */
+ for (i = 0; i < dev->wlan.ind_cmd.particular_sid; i++) {
+ wed_w32(dev, MTK_WED_PN_CHECK_WDATA_M,
+ MTK_WED_PN_CHECK_IS_FIRST);
+
+ wed_w32(dev, MTK_WED_PN_CHECK_CFG, MTK_WED_PN_CHECK_WR |
+ FIELD_PREP(MTK_WED_PN_CHECK_SE_ID, i));
+
+ count = 0;
+ val = wed_r32(dev, MTK_WED_PN_CHECK_CFG);
+ while (!(val & MTK_WED_PN_CHECK_WR_RDY) && count++ < 100)
+ val = wed_r32(dev, MTK_WED_PN_CHECK_CFG);
+ if (count >= 100)
+ dev_err(dev->hw->dev,
+ "session(%d) initialization failed\n", i);
}
+
+ wed_w32(dev, MTK_WED_RX_IND_CMD_CNT0, MTK_WED_RX_IND_CMD_DBG_CNT_EN);
+ wed_set(dev, MTK_WED_CTRL, MTK_WED_CTRL_WED_RX_IND_CMD_EN);
+
+ return 0;
}
static void
@@ -1466,14 +2339,14 @@ mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask)
mtk_wed_set_ext_int(dev, true);
- if (dev->hw->version == 1) {
+ if (mtk_wed_is_v1(dev->hw)) {
u32 val = dev->wlan.wpdma_phys | MTK_PCIE_MIRROR_MAP_EN |
FIELD_PREP(MTK_PCIE_MIRROR_MAP_WED_ID,
dev->hw->index);
val |= BIT(0) | (BIT(1) * !!dev->hw->index);
regmap_write(dev->hw->mirror, dev->hw->index * 4, val);
- } else {
+ } else if (mtk_wed_get_rx_capa(dev)) {
/* driver set mid ready and only once */
wed_w32(dev, MTK_WED_EXT_INT_MASK1,
MTK_WED_EXT_INT_STATUS_WPDMA_MID_RDY);
@@ -1483,12 +2356,18 @@ mtk_wed_start(struct mtk_wed_device *dev, u32 irq_mask)
wed_r32(dev, MTK_WED_EXT_INT_MASK1);
wed_r32(dev, MTK_WED_EXT_INT_MASK2);
+ if (mtk_wed_is_v3_or_greater(dev->hw)) {
+ wed_w32(dev, MTK_WED_EXT_INT_MASK3,
+ MTK_WED_EXT_INT_STATUS_WPDMA_MID_RDY);
+ wed_r32(dev, MTK_WED_EXT_INT_MASK3);
+ }
+
if (mtk_wed_rro_cfg(dev))
return;
-
}
mtk_wed_set_512_support(dev, dev->wlan.wcid_512);
+ mtk_wed_amsdu_init(dev);
mtk_wed_dma_enable(dev);
dev->running = true;
@@ -1535,6 +2414,7 @@ mtk_wed_attach(struct mtk_wed_device *dev)
dev->irq = hw->irq;
dev->wdma_idx = hw->index;
dev->version = hw->version;
+ dev->hw->pcie_base = mtk_wed_get_pcie_base(dev);
if (hw->eth->dma_dev == hw->eth->dev &&
of_dma_is_coherent(hw->eth->dev->of_node))
@@ -1544,6 +2424,10 @@ mtk_wed_attach(struct mtk_wed_device *dev)
if (ret)
goto out;
+ ret = mtk_wed_amsdu_buffer_alloc(dev);
+ if (ret)
+ goto out;
+
if (mtk_wed_get_rx_capa(dev)) {
ret = mtk_wed_rro_alloc(dev);
if (ret)
@@ -1551,13 +2435,14 @@ mtk_wed_attach(struct mtk_wed_device *dev)
}
mtk_wed_hw_init_early(dev);
- if (hw->version == 1) {
+ if (mtk_wed_is_v1(hw))
regmap_update_bits(hw->hifsys, HIFSYS_DMA_AG_MAP,
BIT(hw->index), 0);
- } else {
+ else
dev->rev_id = wed_r32(dev, MTK_WED_REV_ID);
+
+ if (mtk_wed_get_rx_capa(dev))
ret = mtk_wed_wo_init(hw);
- }
out:
if (ret) {
dev_err(dev->hw->dev, "failed to attach wed device\n");
@@ -1601,6 +2486,23 @@ mtk_wed_tx_ring_setup(struct mtk_wed_device *dev, int idx, void __iomem *regs,
ring->reg_base = MTK_WED_RING_TX(idx);
ring->wpdma = regs;
+ if (mtk_wed_is_v3_or_greater(dev->hw) && idx == 1) {
+ /* reset prefetch index */
+ wed_set(dev, MTK_WED_WDMA_RX_PREF_CFG,
+ MTK_WED_WDMA_RX_PREF_RX0_SIDX_CLR |
+ MTK_WED_WDMA_RX_PREF_RX1_SIDX_CLR);
+
+ wed_clr(dev, MTK_WED_WDMA_RX_PREF_CFG,
+ MTK_WED_WDMA_RX_PREF_RX0_SIDX_CLR |
+ MTK_WED_WDMA_RX_PREF_RX1_SIDX_CLR);
+
+ /* reset prefetch FIFO */
+ wed_w32(dev, MTK_WED_WDMA_RX_PREF_FIFO_CFG,
+ MTK_WED_WDMA_RX_PREF_FIFO_RX0_CLR |
+ MTK_WED_WDMA_RX_PREF_FIFO_RX1_CLR);
+ wed_w32(dev, MTK_WED_WDMA_RX_PREF_FIFO_CFG, 0);
+ }
+
/* WED -> WPDMA */
wpdma_tx_w32(dev, idx, MTK_WED_RING_OFS_BASE, ring->desc_phys);
wpdma_tx_w32(dev, idx, MTK_WED_RING_OFS_COUNT, MTK_WED_TX_RING_SIZE);
@@ -1619,7 +2521,7 @@ static int
mtk_wed_txfree_ring_setup(struct mtk_wed_device *dev, void __iomem *regs)
{
struct mtk_wed_ring *ring = &dev->txfree_ring;
- int i, index = dev->hw->version == 1;
+ int i, index = mtk_wed_is_v1(dev->hw);
/*
* For txfree event handling, the same DMA ring is shared between WED
@@ -1675,15 +2577,13 @@ mtk_wed_rx_ring_setup(struct mtk_wed_device *dev, int idx, void __iomem *regs,
static u32
mtk_wed_irq_get(struct mtk_wed_device *dev, u32 mask)
{
- u32 val, ext_mask = MTK_WED_EXT_INT_STATUS_ERROR_MASK;
+ u32 val, ext_mask;
- if (dev->hw->version == 1)
- ext_mask |= MTK_WED_EXT_INT_STATUS_TX_DRV_R_RESP_ERR;
+ if (mtk_wed_is_v3_or_greater(dev->hw))
+ ext_mask = MTK_WED_EXT_INT_STATUS_RX_DRV_COHERENT |
+ MTK_WED_EXT_INT_STATUS_TKID_WO_PYLD;
else
- ext_mask |= MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH |
- MTK_WED_EXT_INT_STATUS_RX_FBUF_HI_TH |
- MTK_WED_EXT_INT_STATUS_RX_DRV_COHERENT |
- MTK_WED_EXT_INT_STATUS_TX_DMA_W_RESP_ERR;
+ ext_mask = MTK_WED_EXT_INT_STATUS_ERROR_MASK;
val = wed_r32(dev, MTK_WED_EXT_INT_STATUS);
wed_w32(dev, MTK_WED_EXT_INT_STATUS, val);
@@ -1713,19 +2613,20 @@ mtk_wed_irq_set_mask(struct mtk_wed_device *dev, u32 mask)
int mtk_wed_flow_add(int index)
{
struct mtk_wed_hw *hw = hw_list[index];
- int ret;
+ int ret = 0;
- if (!hw || !hw->wed_dev)
- return -ENODEV;
+ mutex_lock(&hw_lock);
- if (hw->num_flows) {
- hw->num_flows++;
- return 0;
+ if (!hw || !hw->wed_dev) {
+ ret = -ENODEV;
+ goto out;
}
- mutex_lock(&hw_lock);
- if (!hw->wed_dev) {
- ret = -ENODEV;
+ if (!hw->wed_dev->wlan.offload_enable)
+ goto out;
+
+ if (hw->num_flows) {
+ hw->num_flows++;
goto out;
}
@@ -1744,14 +2645,15 @@ void mtk_wed_flow_remove(int index)
{
struct mtk_wed_hw *hw = hw_list[index];
- if (!hw)
- return;
+ mutex_lock(&hw_lock);
- if (--hw->num_flows)
- return;
+ if (!hw || !hw->wed_dev)
+ goto out;
- mutex_lock(&hw_lock);
- if (!hw->wed_dev)
+ if (!hw->wed_dev->wlan.offload_disable)
+ goto out;
+
+ if (--hw->num_flows)
goto out;
hw->wed_dev->wlan.offload_disable(hw->wed_dev);
@@ -1842,7 +2744,7 @@ mtk_wed_setup_tc(struct mtk_wed_device *wed, struct net_device *dev,
{
struct mtk_wed_hw *hw = wed->hw;
- if (hw->version < 2)
+ if (mtk_wed_is_v1(hw))
return -EOPNOTSUPP;
switch (type) {
@@ -1874,6 +2776,10 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth,
.detach = mtk_wed_detach,
.ppe_check = mtk_wed_ppe_check,
.setup_tc = mtk_wed_setup_tc,
+ .start_hw_rro = mtk_wed_start_hw_rro,
+ .rro_rx_ring_setup = mtk_wed_rro_rx_ring_setup,
+ .msdu_pg_rx_ring_setup = mtk_wed_msdu_pg_rx_ring_setup,
+ .ind_rx_ring_setup = mtk_wed_ind_rx_ring_setup,
};
struct device_node *eth_np = eth->dev->of_node;
struct platform_device *pdev;
@@ -1916,9 +2822,17 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth,
hw->wdma = wdma;
hw->index = index;
hw->irq = irq;
- hw->version = mtk_is_netsys_v1(eth) ? 1 : 2;
+ hw->version = eth->soc->version;
- if (hw->version == 1) {
+ switch (hw->version) {
+ case 2:
+ hw->soc = &mt7986_data;
+ break;
+ case 3:
+ hw->soc = &mt7988_data;
+ break;
+ default:
+ case 1:
hw->mirror = syscon_regmap_lookup_by_phandle(eth_np,
"mediatek,pcie-mirror");
hw->hifsys = syscon_regmap_lookup_by_phandle(eth_np,
@@ -1932,6 +2846,8 @@ void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth,
regmap_write(hw->mirror, 0, 0);
regmap_write(hw->mirror, 4, 0);
}
+ hw->soc = &mt7622_data;
+ break;
}
mtk_wed_hw_add_debugfs(hw);
diff --git a/drivers/net/ethernet/mediatek/mtk_wed.h b/drivers/net/ethernet/mediatek/mtk_wed.h
index 43ab77eaf683..c1f0479d7a71 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed.h
+++ b/drivers/net/ethernet/mediatek/mtk_wed.h
@@ -9,10 +9,29 @@
#include <linux/regmap.h>
#include <linux/netdevice.h>
+#include "mtk_wed_regs.h"
+
struct mtk_eth;
struct mtk_wed_wo;
+struct mtk_wed_soc_data {
+ struct {
+ u32 tx_bm_tkid;
+ u32 wpdma_rx_ring0;
+ u32 reset_idx_tx_mask;
+ u32 reset_idx_rx_mask;
+ } regmap;
+ u32 tx_ring_desc_size;
+ u32 wdma_desc_size;
+};
+
+struct mtk_wed_amsdu {
+ void *txd;
+ dma_addr_t txd_phy;
+};
+
struct mtk_wed_hw {
+ const struct mtk_wed_soc_data *soc;
struct device_node *node;
struct mtk_eth *eth;
struct regmap *regs;
@@ -24,6 +43,8 @@ struct mtk_wed_hw {
struct dentry *debugfs_dir;
struct mtk_wed_device *wed_dev;
struct mtk_wed_wo *wed_wo;
+ struct mtk_wed_amsdu *wed_amsdu;
+ u32 pcie_base;
u32 debugfs_reg;
u32 num_flows;
u8 version;
@@ -37,9 +58,30 @@ struct mtk_wdma_info {
u8 queue;
u16 wcid;
u8 bss;
+ u8 amsdu;
};
#ifdef CONFIG_NET_MEDIATEK_SOC_WED
+static inline bool mtk_wed_is_v1(struct mtk_wed_hw *hw)
+{
+ return hw->version == 1;
+}
+
+static inline bool mtk_wed_is_v2(struct mtk_wed_hw *hw)
+{
+ return hw->version == 2;
+}
+
+static inline bool mtk_wed_is_v3(struct mtk_wed_hw *hw)
+{
+ return hw->version == 3;
+}
+
+static inline bool mtk_wed_is_v3_or_greater(struct mtk_wed_hw *hw)
+{
+ return hw->version > 2;
+}
+
static inline void
wed_w32(struct mtk_wed_device *dev, u32 reg, u32 val)
{
@@ -122,6 +164,21 @@ wpdma_txfree_w32(struct mtk_wed_device *dev, u32 reg, u32 val)
writel(val, dev->txfree_ring.wpdma + reg);
}
+static inline u32 mtk_wed_get_pcie_base(struct mtk_wed_device *dev)
+{
+ if (!mtk_wed_is_v3_or_greater(dev->hw))
+ return MTK_WED_PCIE_BASE;
+
+ switch (dev->hw->index) {
+ case 1:
+ return MTK_WED_PCIE_BASE1;
+ case 2:
+ return MTK_WED_PCIE_BASE2;
+ default:
+ return MTK_WED_PCIE_BASE0;
+ }
+}
+
void mtk_wed_add_hw(struct device_node *np, struct mtk_eth *eth,
void __iomem *wdma, phys_addr_t wdma_phy,
int index);
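
The new inline helpers in this header gate behaviour on hw->version, and mtk_wed_get_pcie_base() additionally picks a per-instance PCIe base on v3 hardware. Below is a self-contained model of that selection using the base addresses defined in mtk_wed_regs.h later in this patch; it is illustrative only, not driver code.

/* Simplified, self-contained model of the per-version PCIe base selection;
 * the numeric values mirror the defines added to mtk_wed_regs.h below.
 */
#include <stdint.h>
#include <stdio.h>

#define MTK_WED_PCIE_BASE   0x11280000u  /* v1/v2 */
#define MTK_WED_PCIE_BASE0  0x11300000u  /* v3, WED instance 0 */
#define MTK_WED_PCIE_BASE1  0x11310000u  /* v3, WED instance 1 */
#define MTK_WED_PCIE_BASE2  0x11290000u  /* v3, WED instance 2 */

static uint32_t pcie_base(int version, int index)
{
	if (version < 3)
		return MTK_WED_PCIE_BASE;

	switch (index) {
	case 1:  return MTK_WED_PCIE_BASE1;
	case 2:  return MTK_WED_PCIE_BASE2;
	default: return MTK_WED_PCIE_BASE0;
	}
}

int main(void)
{
	printf("v2 idx0: %#x\n", (unsigned int)pcie_base(2, 0)); /* 0x11280000 */
	printf("v3 idx1: %#x\n", (unsigned int)pcie_base(3, 1)); /* 0x11310000 */
	return 0;
}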
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c b/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c
index e24afeaea0da..781c691473e1 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_debugfs.c
@@ -11,6 +11,7 @@ struct reg_dump {
u16 offset;
u8 type;
u8 base;
+ u32 mask;
};
enum {
@@ -25,6 +26,8 @@ enum {
#define DUMP_STR(_str) { _str, 0, DUMP_TYPE_STRING }
#define DUMP_REG(_reg, ...) { #_reg, MTK_##_reg, __VA_ARGS__ }
+#define DUMP_REG_MASK(_reg, _mask) \
+ { #_mask, MTK_##_reg, DUMP_TYPE_WED, 0, MTK_##_mask }
#define DUMP_RING(_prefix, _base, ...) \
{ _prefix " BASE", _base, __VA_ARGS__ }, \
{ _prefix " CNT", _base + 0x4, __VA_ARGS__ }, \
@@ -32,6 +35,7 @@ enum {
{ _prefix " DIDX", _base + 0xc, __VA_ARGS__ }
#define DUMP_WED(_reg) DUMP_REG(_reg, DUMP_TYPE_WED)
+#define DUMP_WED_MASK(_reg, _mask) DUMP_REG_MASK(_reg, _mask)
#define DUMP_WED_RING(_base) DUMP_RING(#_base, MTK_##_base, DUMP_TYPE_WED)
#define DUMP_WDMA(_reg) DUMP_REG(_reg, DUMP_TYPE_WDMA)
@@ -151,7 +155,7 @@ DEFINE_SHOW_ATTRIBUTE(wed_txinfo);
static int
wed_rxinfo_show(struct seq_file *s, void *data)
{
- static const struct reg_dump regs[] = {
+ static const struct reg_dump regs_common[] = {
DUMP_STR("WPDMA RX"),
DUMP_WPDMA_RX_RING(0),
DUMP_WPDMA_RX_RING(1),
@@ -169,7 +173,7 @@ wed_rxinfo_show(struct seq_file *s, void *data)
DUMP_WED_RING(WED_RING_RX_DATA(0)),
DUMP_WED_RING(WED_RING_RX_DATA(1)),
- DUMP_STR("WED RRO"),
+ DUMP_STR("WED WO RRO"),
DUMP_WED_RRO_RING(WED_RROQM_MIOD_CTRL0),
DUMP_WED(WED_RROQM_MID_MIB),
DUMP_WED(WED_RROQM_MOD_MIB),
@@ -180,17 +184,6 @@ wed_rxinfo_show(struct seq_file *s, void *data)
DUMP_WED(WED_RROQM_FDBK_ANC_MIB),
DUMP_WED(WED_RROQM_FDBK_ANC2H_MIB),
- DUMP_STR("WED Route QM"),
- DUMP_WED(WED_RTQM_R2H_MIB(0)),
- DUMP_WED(WED_RTQM_R2Q_MIB(0)),
- DUMP_WED(WED_RTQM_Q2H_MIB(0)),
- DUMP_WED(WED_RTQM_R2H_MIB(1)),
- DUMP_WED(WED_RTQM_R2Q_MIB(1)),
- DUMP_WED(WED_RTQM_Q2H_MIB(1)),
- DUMP_WED(WED_RTQM_Q2N_MIB),
- DUMP_WED(WED_RTQM_Q2B_MIB),
- DUMP_WED(WED_RTQM_PFDBK_MIB),
-
DUMP_STR("WED WDMA TX"),
DUMP_WED(WED_WDMA_TX_MIB),
DUMP_WED_RING(WED_WDMA_RING_TX),
@@ -211,6 +204,287 @@ wed_rxinfo_show(struct seq_file *s, void *data)
DUMP_WED(WED_RX_BM_INTF),
DUMP_WED(WED_RX_BM_ERR_STS),
};
+ static const struct reg_dump regs_wed_v2[] = {
+ DUMP_STR("WED Route QM"),
+ DUMP_WED(WED_RTQM_R2H_MIB(0)),
+ DUMP_WED(WED_RTQM_R2Q_MIB(0)),
+ DUMP_WED(WED_RTQM_Q2H_MIB(0)),
+ DUMP_WED(WED_RTQM_R2H_MIB(1)),
+ DUMP_WED(WED_RTQM_R2Q_MIB(1)),
+ DUMP_WED(WED_RTQM_Q2H_MIB(1)),
+ DUMP_WED(WED_RTQM_Q2N_MIB),
+ DUMP_WED(WED_RTQM_Q2B_MIB),
+ DUMP_WED(WED_RTQM_PFDBK_MIB),
+ };
+ static const struct reg_dump regs_wed_v3[] = {
+ DUMP_STR("WED RX RRO DATA"),
+ DUMP_WED_RING(WED_RRO_RX_D_RX(0)),
+ DUMP_WED_RING(WED_RRO_RX_D_RX(1)),
+
+ DUMP_STR("WED RX MSDU PAGE"),
+ DUMP_WED_RING(WED_RRO_MSDU_PG_CTRL0(0)),
+ DUMP_WED_RING(WED_RRO_MSDU_PG_CTRL0(1)),
+ DUMP_WED_RING(WED_RRO_MSDU_PG_CTRL0(2)),
+
+ DUMP_STR("WED RX IND CMD"),
+ DUMP_WED(WED_IND_CMD_RX_CTRL1),
+ DUMP_WED_MASK(WED_IND_CMD_RX_CTRL2, WED_IND_CMD_MAX_CNT),
+ DUMP_WED_MASK(WED_IND_CMD_RX_CTRL0, WED_IND_CMD_PROC_IDX),
+ DUMP_WED_MASK(RRO_IND_CMD_SIGNATURE, RRO_IND_CMD_DMA_IDX),
+ DUMP_WED_MASK(WED_IND_CMD_RX_CTRL0, WED_IND_CMD_MAGIC_CNT),
+ DUMP_WED_MASK(RRO_IND_CMD_SIGNATURE, RRO_IND_CMD_MAGIC_CNT),
+ DUMP_WED_MASK(WED_IND_CMD_RX_CTRL0,
+ WED_IND_CMD_PREFETCH_FREE_CNT),
+ DUMP_WED_MASK(WED_RRO_CFG1, WED_RRO_CFG1_PARTICL_SE_ID),
+
+ DUMP_STR("WED ADDR ELEM"),
+ DUMP_WED(WED_ADDR_ELEM_CFG0),
+ DUMP_WED_MASK(WED_ADDR_ELEM_CFG1,
+ WED_ADDR_ELEM_PREFETCH_FREE_CNT),
+
+ DUMP_STR("WED Route QM"),
+ DUMP_WED(WED_RTQM_ENQ_I2Q_DMAD_CNT),
+ DUMP_WED(WED_RTQM_ENQ_I2N_DMAD_CNT),
+ DUMP_WED(WED_RTQM_ENQ_I2Q_PKT_CNT),
+ DUMP_WED(WED_RTQM_ENQ_I2N_PKT_CNT),
+ DUMP_WED(WED_RTQM_ENQ_USED_ENTRY_CNT),
+ DUMP_WED(WED_RTQM_ENQ_ERR_CNT),
+
+ DUMP_WED(WED_RTQM_DEQ_DMAD_CNT),
+ DUMP_WED(WED_RTQM_DEQ_Q2I_DMAD_CNT),
+ DUMP_WED(WED_RTQM_DEQ_PKT_CNT),
+ DUMP_WED(WED_RTQM_DEQ_Q2I_PKT_CNT),
+ DUMP_WED(WED_RTQM_DEQ_USED_PFDBK_CNT),
+ DUMP_WED(WED_RTQM_DEQ_ERR_CNT),
+ };
+ struct mtk_wed_hw *hw = s->private;
+ struct mtk_wed_device *dev = hw->wed_dev;
+
+ if (dev) {
+ dump_wed_regs(s, dev, regs_common, ARRAY_SIZE(regs_common));
+ if (mtk_wed_is_v2(hw))
+ dump_wed_regs(s, dev,
+ regs_wed_v2, ARRAY_SIZE(regs_wed_v2));
+ else
+ dump_wed_regs(s, dev,
+ regs_wed_v3, ARRAY_SIZE(regs_wed_v3));
+ }
+
+ return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(wed_rxinfo);
+
+static int
+wed_amsdu_show(struct seq_file *s, void *data)
+{
+ static const struct reg_dump regs[] = {
+ DUMP_STR("WED AMDSU INFO"),
+ DUMP_WED(WED_MON_AMSDU_FIFO_DMAD),
+
+ DUMP_STR("WED AMDSU ENG0 INFO"),
+ DUMP_WED(WED_MON_AMSDU_ENG_DMAD(0)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QFPL(0)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENI(0)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENO(0)),
+ DUMP_WED(WED_MON_AMSDU_ENG_MERG(0)),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(0),
+ WED_AMSDU_ENG_MAX_PL_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(0),
+ WED_AMSDU_ENG_MAX_QGPP_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(0),
+ WED_AMSDU_ENG_CUR_ENTRY),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(0),
+ WED_AMSDU_ENG_MAX_BUF_MERGED),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(0),
+ WED_AMSDU_ENG_MAX_MSDU_MERGED),
+
+ DUMP_STR("WED AMDSU ENG1 INFO"),
+ DUMP_WED(WED_MON_AMSDU_ENG_DMAD(1)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QFPL(1)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENI(1)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENO(1)),
+ DUMP_WED(WED_MON_AMSDU_ENG_MERG(1)),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(1),
+ WED_AMSDU_ENG_MAX_PL_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(1),
+ WED_AMSDU_ENG_MAX_QGPP_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(1),
+ WED_AMSDU_ENG_CUR_ENTRY),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(2),
+ WED_AMSDU_ENG_MAX_BUF_MERGED),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(2),
+ WED_AMSDU_ENG_MAX_MSDU_MERGED),
+
+ DUMP_STR("WED AMDSU ENG2 INFO"),
+ DUMP_WED(WED_MON_AMSDU_ENG_DMAD(2)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QFPL(2)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENI(2)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENO(2)),
+ DUMP_WED(WED_MON_AMSDU_ENG_MERG(2)),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(2),
+ WED_AMSDU_ENG_MAX_PL_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(2),
+ WED_AMSDU_ENG_MAX_QGPP_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(2),
+ WED_AMSDU_ENG_CUR_ENTRY),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(2),
+ WED_AMSDU_ENG_MAX_BUF_MERGED),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(2),
+ WED_AMSDU_ENG_MAX_MSDU_MERGED),
+
+ DUMP_STR("WED AMDSU ENG3 INFO"),
+ DUMP_WED(WED_MON_AMSDU_ENG_DMAD(3)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QFPL(3)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENI(3)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENO(3)),
+ DUMP_WED(WED_MON_AMSDU_ENG_MERG(3)),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(3),
+ WED_AMSDU_ENG_MAX_PL_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(3),
+ WED_AMSDU_ENG_MAX_QGPP_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(3),
+ WED_AMSDU_ENG_CUR_ENTRY),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(3),
+ WED_AMSDU_ENG_MAX_BUF_MERGED),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(3),
+ WED_AMSDU_ENG_MAX_MSDU_MERGED),
+
+ DUMP_STR("WED AMDSU ENG4 INFO"),
+ DUMP_WED(WED_MON_AMSDU_ENG_DMAD(4)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QFPL(4)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENI(4)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENO(4)),
+ DUMP_WED(WED_MON_AMSDU_ENG_MERG(4)),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(4),
+ WED_AMSDU_ENG_MAX_PL_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(4),
+ WED_AMSDU_ENG_MAX_QGPP_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(4),
+ WED_AMSDU_ENG_CUR_ENTRY),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(4),
+ WED_AMSDU_ENG_MAX_BUF_MERGED),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(4),
+ WED_AMSDU_ENG_MAX_MSDU_MERGED),
+
+ DUMP_STR("WED AMDSU ENG5 INFO"),
+ DUMP_WED(WED_MON_AMSDU_ENG_DMAD(5)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QFPL(5)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENI(5)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENO(5)),
+ DUMP_WED(WED_MON_AMSDU_ENG_MERG(5)),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(5),
+ WED_AMSDU_ENG_MAX_PL_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(5),
+ WED_AMSDU_ENG_MAX_QGPP_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(5),
+ WED_AMSDU_ENG_CUR_ENTRY),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(5),
+ WED_AMSDU_ENG_MAX_BUF_MERGED),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(5),
+ WED_AMSDU_ENG_MAX_MSDU_MERGED),
+
+ DUMP_STR("WED AMDSU ENG6 INFO"),
+ DUMP_WED(WED_MON_AMSDU_ENG_DMAD(6)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QFPL(6)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENI(6)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENO(6)),
+ DUMP_WED(WED_MON_AMSDU_ENG_MERG(6)),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(6),
+ WED_AMSDU_ENG_MAX_PL_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(6),
+ WED_AMSDU_ENG_MAX_QGPP_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(6),
+ WED_AMSDU_ENG_CUR_ENTRY),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(6),
+ WED_AMSDU_ENG_MAX_BUF_MERGED),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(6),
+ WED_AMSDU_ENG_MAX_MSDU_MERGED),
+
+ DUMP_STR("WED AMDSU ENG7 INFO"),
+ DUMP_WED(WED_MON_AMSDU_ENG_DMAD(7)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QFPL(7)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENI(7)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENO(7)),
+ DUMP_WED(WED_MON_AMSDU_ENG_MERG(7)),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(7),
+ WED_AMSDU_ENG_MAX_PL_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(7),
+ WED_AMSDU_ENG_MAX_QGPP_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(7),
+ WED_AMSDU_ENG_CUR_ENTRY),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(7),
+ WED_AMSDU_ENG_MAX_BUF_MERGED),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(4),
+ WED_AMSDU_ENG_MAX_MSDU_MERGED),
+
+ DUMP_STR("WED AMDSU ENG8 INFO"),
+ DUMP_WED(WED_MON_AMSDU_ENG_DMAD(8)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QFPL(8)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENI(8)),
+ DUMP_WED(WED_MON_AMSDU_ENG_QENO(8)),
+ DUMP_WED(WED_MON_AMSDU_ENG_MERG(8)),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(8),
+ WED_AMSDU_ENG_MAX_PL_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT8(8),
+ WED_AMSDU_ENG_MAX_QGPP_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(8),
+ WED_AMSDU_ENG_CUR_ENTRY),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(8),
+ WED_AMSDU_ENG_MAX_BUF_MERGED),
+ DUMP_WED_MASK(WED_MON_AMSDU_ENG_CNT9(8),
+ WED_AMSDU_ENG_MAX_MSDU_MERGED),
+
+ DUMP_STR("WED QMEM INFO"),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(0), WED_AMSDU_QMEM_FQ_CNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(0), WED_AMSDU_QMEM_SP_QCNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(1), WED_AMSDU_QMEM_TID0_QCNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(1), WED_AMSDU_QMEM_TID1_QCNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(2), WED_AMSDU_QMEM_TID2_QCNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(2), WED_AMSDU_QMEM_TID3_QCNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(3), WED_AMSDU_QMEM_TID4_QCNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(3), WED_AMSDU_QMEM_TID5_QCNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(4), WED_AMSDU_QMEM_TID6_QCNT),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_CNT(4), WED_AMSDU_QMEM_TID7_QCNT),
+
+ DUMP_STR("WED QMEM HEAD INFO"),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(0), WED_AMSDU_QMEM_FQ_HEAD),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(0), WED_AMSDU_QMEM_SP_QHEAD),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(1), WED_AMSDU_QMEM_TID0_QHEAD),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(1), WED_AMSDU_QMEM_TID1_QHEAD),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(2), WED_AMSDU_QMEM_TID2_QHEAD),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(2), WED_AMSDU_QMEM_TID3_QHEAD),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(3), WED_AMSDU_QMEM_TID4_QHEAD),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(3), WED_AMSDU_QMEM_TID5_QHEAD),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(4), WED_AMSDU_QMEM_TID6_QHEAD),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(4), WED_AMSDU_QMEM_TID7_QHEAD),
+
+ DUMP_STR("WED QMEM TAIL INFO"),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(5), WED_AMSDU_QMEM_FQ_TAIL),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(5), WED_AMSDU_QMEM_SP_QTAIL),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(6), WED_AMSDU_QMEM_TID0_QTAIL),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(6), WED_AMSDU_QMEM_TID1_QTAIL),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(7), WED_AMSDU_QMEM_TID2_QTAIL),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(7), WED_AMSDU_QMEM_TID3_QTAIL),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(8), WED_AMSDU_QMEM_TID4_QTAIL),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(8), WED_AMSDU_QMEM_TID5_QTAIL),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(9), WED_AMSDU_QMEM_TID6_QTAIL),
+ DUMP_WED_MASK(WED_MON_AMSDU_QMEM_PTR(9), WED_AMSDU_QMEM_TID7_QTAIL),
+
+ DUMP_STR("WED HIFTXD MSDU INFO"),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(1)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(2)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(3)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(4)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(5)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(6)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(7)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(8)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(9)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(10)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(11)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(12)),
+ DUMP_WED(WED_MON_AMSDU_HIFTXD_FETCH_MSDU(13)),
+ };
struct mtk_wed_hw *hw = s->private;
struct mtk_wed_device *dev = hw->wed_dev;
@@ -219,7 +493,94 @@ wed_rxinfo_show(struct seq_file *s, void *data)
return 0;
}
-DEFINE_SHOW_ATTRIBUTE(wed_rxinfo);
+DEFINE_SHOW_ATTRIBUTE(wed_amsdu);
+
+static int
+wed_rtqm_show(struct seq_file *s, void *data)
+{
+ static const struct reg_dump regs[] = {
+ DUMP_STR("WED Route QM IGRS0(N2H + Recycle)"),
+ DUMP_WED(WED_RTQM_IGRS0_I2HW_DMAD_CNT),
+ DUMP_WED(WED_RTQM_IGRS0_I2H_DMAD_CNT(0)),
+ DUMP_WED(WED_RTQM_IGRS0_I2H_DMAD_CNT(1)),
+ DUMP_WED(WED_RTQM_IGRS0_I2HW_PKT_CNT),
+ DUMP_WED(WED_RTQM_IGRS0_I2H_PKT_CNT(0)),
+ DUMP_WED(WED_RTQM_IGRS0_I2H_PKT_CNT(0)),
+ DUMP_WED(WED_RTQM_IGRS0_FDROP_CNT),
+
+ DUMP_STR("WED Route QM IGRS1(Legacy)"),
+ DUMP_WED(WED_RTQM_IGRS1_I2HW_DMAD_CNT),
+ DUMP_WED(WED_RTQM_IGRS1_I2H_DMAD_CNT(0)),
+ DUMP_WED(WED_RTQM_IGRS1_I2H_DMAD_CNT(1)),
+ DUMP_WED(WED_RTQM_IGRS1_I2HW_PKT_CNT),
+ DUMP_WED(WED_RTQM_IGRS1_I2H_PKT_CNT(0)),
+ DUMP_WED(WED_RTQM_IGRS1_I2H_PKT_CNT(1)),
+ DUMP_WED(WED_RTQM_IGRS1_FDROP_CNT),
+
+ DUMP_STR("WED Route QM IGRS2(RRO3.0)"),
+ DUMP_WED(WED_RTQM_IGRS2_I2HW_DMAD_CNT),
+ DUMP_WED(WED_RTQM_IGRS2_I2H_DMAD_CNT(0)),
+ DUMP_WED(WED_RTQM_IGRS2_I2H_DMAD_CNT(1)),
+ DUMP_WED(WED_RTQM_IGRS2_I2HW_PKT_CNT),
+ DUMP_WED(WED_RTQM_IGRS2_I2H_PKT_CNT(0)),
+ DUMP_WED(WED_RTQM_IGRS2_I2H_PKT_CNT(1)),
+ DUMP_WED(WED_RTQM_IGRS2_FDROP_CNT),
+
+ DUMP_STR("WED Route QM IGRS3(DEBUG)"),
+ DUMP_WED(WED_RTQM_IGRS2_I2HW_DMAD_CNT),
+ DUMP_WED(WED_RTQM_IGRS3_I2H_DMAD_CNT(0)),
+ DUMP_WED(WED_RTQM_IGRS3_I2H_DMAD_CNT(1)),
+ DUMP_WED(WED_RTQM_IGRS3_I2HW_PKT_CNT),
+ DUMP_WED(WED_RTQM_IGRS3_I2H_PKT_CNT(0)),
+ DUMP_WED(WED_RTQM_IGRS3_I2H_PKT_CNT(1)),
+ DUMP_WED(WED_RTQM_IGRS3_FDROP_CNT),
+ };
+ struct mtk_wed_hw *hw = s->private;
+ struct mtk_wed_device *dev = hw->wed_dev;
+
+ if (dev)
+ dump_wed_regs(s, dev, regs, ARRAY_SIZE(regs));
+
+ return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(wed_rtqm);
+
+static int
+wed_rro_show(struct seq_file *s, void *data)
+{
+ static const struct reg_dump regs[] = {
+ DUMP_STR("RRO/IND CMD CNT"),
+ DUMP_WED(WED_RX_IND_CMD_CNT(1)),
+ DUMP_WED(WED_RX_IND_CMD_CNT(2)),
+ DUMP_WED(WED_RX_IND_CMD_CNT(3)),
+ DUMP_WED(WED_RX_IND_CMD_CNT(4)),
+ DUMP_WED(WED_RX_IND_CMD_CNT(5)),
+ DUMP_WED(WED_RX_IND_CMD_CNT(6)),
+ DUMP_WED(WED_RX_IND_CMD_CNT(7)),
+ DUMP_WED(WED_RX_IND_CMD_CNT(8)),
+ DUMP_WED_MASK(WED_RX_IND_CMD_CNT(9),
+ WED_IND_CMD_MAGIC_CNT_FAIL_CNT),
+
+ DUMP_WED(WED_RX_ADDR_ELEM_CNT(0)),
+ DUMP_WED_MASK(WED_RX_ADDR_ELEM_CNT(1),
+ WED_ADDR_ELEM_SIG_FAIL_CNT),
+ DUMP_WED(WED_RX_MSDU_PG_CNT(1)),
+ DUMP_WED(WED_RX_MSDU_PG_CNT(2)),
+ DUMP_WED(WED_RX_MSDU_PG_CNT(3)),
+ DUMP_WED(WED_RX_MSDU_PG_CNT(4)),
+ DUMP_WED(WED_RX_MSDU_PG_CNT(5)),
+ DUMP_WED_MASK(WED_RX_PN_CHK_CNT,
+ WED_PN_CHK_FAIL_CNT),
+ };
+ struct mtk_wed_hw *hw = s->private;
+ struct mtk_wed_device *dev = hw->wed_dev;
+
+ if (dev)
+ dump_wed_regs(s, dev, regs, ARRAY_SIZE(regs));
+
+ return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(wed_rro);
static int
mtk_wed_reg_set(void *data, u64 val)
@@ -261,7 +622,16 @@ void mtk_wed_hw_add_debugfs(struct mtk_wed_hw *hw)
debugfs_create_u32("regidx", 0600, dir, &hw->debugfs_reg);
debugfs_create_file_unsafe("regval", 0600, dir, hw, &fops_regval);
debugfs_create_file_unsafe("txinfo", 0400, dir, hw, &wed_txinfo_fops);
- if (hw->version != 1)
+ if (!mtk_wed_is_v1(hw)) {
debugfs_create_file_unsafe("rxinfo", 0400, dir, hw,
&wed_rxinfo_fops);
+ if (mtk_wed_is_v3_or_greater(hw)) {
+ debugfs_create_file_unsafe("amsdu", 0400, dir, hw,
+ &wed_amsdu_fops);
+ debugfs_create_file_unsafe("rtqm", 0400, dir, hw,
+ &wed_rtqm_fops);
+ debugfs_create_file_unsafe("rro", 0400, dir, hw,
+ &wed_rro_fops);
+ }
+ }
}
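
The reg_dump entries gained a mask field via DUMP_WED_MASK(), so a single register read can be reported as several named sub-fields. dump_wed_regs() itself is not part of this hunk, so the extraction shown below is an assumption; it mimics the kernel's FIELD_GET() with a standalone helper and a made-up register value.

/* Illustrative only: how a reg_dump entry with a non-zero mask is likely
 * consumed. The mask matches MTK_WED_IND_CMD_MAX_CNT (GENMASK(11, 0)) from
 * mtk_wed_regs.h; the register value is an example, not a real readout.
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t field_get(uint32_t mask, uint32_t val)
{
	return (val & mask) >> __builtin_ctz(mask);	/* shift field to bit 0 */
}

int main(void)
{
	uint32_t reg = 0x000a0123;			/* example raw value */
	uint32_t max_cnt = field_get(0x00000fffu, reg);

	printf("IND_CMD max cnt: %u\n", (unsigned int)max_cnt);	/* 291 */
	return 0;
}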
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c
index 071ed3dea860..ea0884186d76 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_mcu.c
+++ b/drivers/net/ethernet/mediatek/mtk_wed_mcu.c
@@ -16,14 +16,30 @@
#include "mtk_wed_wo.h"
#include "mtk_wed.h"
-static u32 wo_r32(struct mtk_wed_wo *wo, u32 reg)
+static struct mtk_wed_wo_memory_region mem_region[] = {
+ [MTK_WED_WO_REGION_EMI] = {
+ .name = "wo-emi",
+ },
+ [MTK_WED_WO_REGION_ILM] = {
+ .name = "wo-ilm",
+ },
+ [MTK_WED_WO_REGION_DATA] = {
+ .name = "wo-data",
+ .shared = true,
+ },
+ [MTK_WED_WO_REGION_BOOT] = {
+ .name = "wo-boot",
+ },
+};
+
+static u32 wo_r32(u32 reg)
{
- return readl(wo->boot.addr + reg);
+ return readl(mem_region[MTK_WED_WO_REGION_BOOT].addr + reg);
}
-static void wo_w32(struct mtk_wed_wo *wo, u32 reg, u32 val)
+static void wo_w32(u32 reg, u32 val)
{
- writel(val, wo->boot.addr + reg);
+ writel(val, mem_region[MTK_WED_WO_REGION_BOOT].addr + reg);
}
static struct sk_buff *
@@ -68,6 +84,9 @@ mtk_wed_update_rx_stats(struct mtk_wed_device *wed, struct sk_buff *skb)
struct mtk_wed_wo_rx_stats *stats;
int i;
+ if (!wed->wlan.update_wo_rx_stats)
+ return;
+
if (count * sizeof(*stats) > skb->len - sizeof(u32))
return;
@@ -204,7 +223,7 @@ int mtk_wed_mcu_msg_update(struct mtk_wed_device *dev, int id, void *data,
{
struct mtk_wed_wo *wo = dev->hw->wed_wo;
- if (dev->hw->version == 1)
+ if (!mtk_wed_get_rx_capa(dev))
return 0;
if (WARN_ON(!wo))
@@ -215,19 +234,13 @@ int mtk_wed_mcu_msg_update(struct mtk_wed_device *dev, int id, void *data,
}
static int
-mtk_wed_get_memory_region(struct mtk_wed_wo *wo,
+mtk_wed_get_memory_region(struct mtk_wed_hw *hw, int index,
struct mtk_wed_wo_memory_region *region)
{
struct reserved_mem *rmem;
struct device_node *np;
- int index;
- index = of_property_match_string(wo->hw->node, "memory-region-names",
- region->name);
- if (index < 0)
- return index;
-
- np = of_parse_phandle(wo->hw->node, "memory-region", index);
+ np = of_parse_phandle(hw->node, "memory-region", index);
if (!np)
return -ENODEV;
@@ -239,14 +252,13 @@ mtk_wed_get_memory_region(struct mtk_wed_wo *wo,
region->phy_addr = rmem->base;
region->size = rmem->size;
- region->addr = devm_ioremap(wo->hw->dev, region->phy_addr, region->size);
+ region->addr = devm_ioremap(hw->dev, region->phy_addr, region->size);
return !region->addr ? -EINVAL : 0;
}
static int
-mtk_wed_mcu_run_firmware(struct mtk_wed_wo *wo, const struct firmware *fw,
- struct mtk_wed_wo_memory_region *region)
+mtk_wed_mcu_run_firmware(struct mtk_wed_wo *wo, const struct firmware *fw)
{
const u8 *first_region_ptr, *region_ptr, *trailer_ptr, *ptr = fw->data;
const struct mtk_wed_fw_trailer *trailer;
@@ -259,50 +271,46 @@ mtk_wed_mcu_run_firmware(struct mtk_wed_wo *wo, const struct firmware *fw,
while (region_ptr < trailer_ptr) {
u32 length;
+ int i;
fw_region = (const struct mtk_wed_fw_region *)region_ptr;
length = le32_to_cpu(fw_region->len);
-
- if (region->phy_addr != le32_to_cpu(fw_region->addr))
+ if (first_region_ptr < ptr + length)
goto next;
- if (region->size < length)
- goto next;
+ for (i = 0; i < ARRAY_SIZE(mem_region); i++) {
+ struct mtk_wed_wo_memory_region *region;
- if (first_region_ptr < ptr + length)
- goto next;
+ region = &mem_region[i];
+ if (region->phy_addr != le32_to_cpu(fw_region->addr))
+ continue;
+
+ if (region->size < length)
+ continue;
- if (region->shared && region->consumed)
- return 0;
+ if (region->shared && region->consumed)
+ break;
- if (!region->shared || !region->consumed) {
- memcpy_toio(region->addr, ptr, length);
- region->consumed = true;
- return 0;
+ if (!region->shared || !region->consumed) {
+ memcpy_toio(region->addr, ptr, length);
+ region->consumed = true;
+ break;
+ }
}
+
+ if (i == ARRAY_SIZE(mem_region))
+ return -EINVAL;
next:
region_ptr += sizeof(*fw_region);
ptr += length;
}
- return -EINVAL;
+ return 0;
}
static int
mtk_wed_mcu_load_firmware(struct mtk_wed_wo *wo)
{
- static struct mtk_wed_wo_memory_region mem_region[] = {
- [MTK_WED_WO_REGION_EMI] = {
- .name = "wo-emi",
- },
- [MTK_WED_WO_REGION_ILM] = {
- .name = "wo-ilm",
- },
- [MTK_WED_WO_REGION_DATA] = {
- .name = "wo-data",
- .shared = true,
- },
- };
const struct mtk_wed_fw_trailer *trailer;
const struct firmware *fw;
const char *fw_name;
@@ -311,25 +319,38 @@ mtk_wed_mcu_load_firmware(struct mtk_wed_wo *wo)
/* load firmware region metadata */
for (i = 0; i < ARRAY_SIZE(mem_region); i++) {
- ret = mtk_wed_get_memory_region(wo, &mem_region[i]);
+ int index = of_property_match_string(wo->hw->node,
+ "memory-region-names",
+ mem_region[i].name);
+ if (index < 0)
+ continue;
+
+ ret = mtk_wed_get_memory_region(wo->hw, index, &mem_region[i]);
if (ret)
return ret;
}
- wo->boot.name = "wo-boot";
- ret = mtk_wed_get_memory_region(wo, &wo->boot);
- if (ret)
- return ret;
-
/* set dummy cr */
wed_w32(wo->hw->wed_dev, MTK_WED_SCR0 + 4 * MTK_WED_DUMMY_CR_FWDL,
wo->hw->index + 1);
/* load firmware */
- if (of_device_is_compatible(wo->hw->node, "mediatek,mt7981-wed"))
- fw_name = MT7981_FIRMWARE_WO;
- else
- fw_name = wo->hw->index ? MT7986_FIRMWARE_WO1 : MT7986_FIRMWARE_WO0;
+ switch (wo->hw->version) {
+ case 2:
+ if (of_device_is_compatible(wo->hw->node,
+ "mediatek,mt7981-wed"))
+ fw_name = MT7981_FIRMWARE_WO;
+ else
+ fw_name = wo->hw->index ? MT7986_FIRMWARE_WO1
+ : MT7986_FIRMWARE_WO0;
+ break;
+ case 3:
+ fw_name = wo->hw->index ? MT7988_FIRMWARE_WO1
+ : MT7988_FIRMWARE_WO0;
+ break;
+ default:
+ return -EINVAL;
+ }
ret = request_firmware(&fw, fw_name, wo->hw->dev);
if (ret)
@@ -343,23 +364,22 @@ mtk_wed_mcu_load_firmware(struct mtk_wed_wo *wo)
dev_info(wo->hw->dev, "MTK WED WO Chip ID %02x Region %d\n",
trailer->chip_id, trailer->num_region);
- for (i = 0; i < ARRAY_SIZE(mem_region); i++) {
- ret = mtk_wed_mcu_run_firmware(wo, fw, &mem_region[i]);
- if (ret)
- goto out;
- }
+ ret = mtk_wed_mcu_run_firmware(wo, fw);
+ if (ret)
+ goto out;
/* set the start address */
- boot_cr = wo->hw->index ? MTK_WO_MCU_CFG_LS_WA_BOOT_ADDR_ADDR
- : MTK_WO_MCU_CFG_LS_WM_BOOT_ADDR_ADDR;
- wo_w32(wo, boot_cr, mem_region[MTK_WED_WO_REGION_EMI].phy_addr >> 16);
+ if (!mtk_wed_is_v3_or_greater(wo->hw) && wo->hw->index)
+ boot_cr = MTK_WO_MCU_CFG_LS_WA_BOOT_ADDR_ADDR;
+ else
+ boot_cr = MTK_WO_MCU_CFG_LS_WM_BOOT_ADDR_ADDR;
+ wo_w32(boot_cr, mem_region[MTK_WED_WO_REGION_EMI].phy_addr >> 16);
/* wo firmware reset */
- wo_w32(wo, MTK_WO_MCU_CFG_LS_WF_MCCR_CLR_ADDR, 0xc00);
+ wo_w32(MTK_WO_MCU_CFG_LS_WF_MCCR_CLR_ADDR, 0xc00);
- val = wo_r32(wo, MTK_WO_MCU_CFG_LS_WF_MCU_CFG_WM_WA_ADDR);
- val |= wo->hw->index ? MTK_WO_MCU_CFG_LS_WF_WM_WA_WA_CPU_RSTB_MASK
- : MTK_WO_MCU_CFG_LS_WF_WM_WA_WM_CPU_RSTB_MASK;
- wo_w32(wo, MTK_WO_MCU_CFG_LS_WF_MCU_CFG_WM_WA_ADDR, val);
+ val = wo_r32(MTK_WO_MCU_CFG_LS_WF_MCU_CFG_WM_WA_ADDR) |
+ MTK_WO_MCU_CFG_LS_WF_WM_WA_WM_CPU_RSTB_MASK;
+ wo_w32(MTK_WO_MCU_CFG_LS_WF_MCU_CFG_WM_WA_ADDR, val);
out:
release_firmware(fw);
@@ -393,3 +413,5 @@ int mtk_wed_mcu_init(struct mtk_wed_wo *wo)
MODULE_FIRMWARE(MT7981_FIRMWARE_WO);
MODULE_FIRMWARE(MT7986_FIRMWARE_WO0);
MODULE_FIRMWARE(MT7986_FIRMWARE_WO1);
+MODULE_FIRMWARE(MT7988_FIRMWARE_WO0);
+MODULE_FIRMWARE(MT7988_FIRMWARE_WO1);
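
mtk_wed_mcu_run_firmware() now matches every firmware chunk against the shared mem_region[] table instead of being invoked once per region. A simplified standalone model of that matching loop is sketched below; the field names follow the diff, while endianness conversion, the trailer walk and the iomem copy are omitted for brevity.

/* Simplified model of the region-matching loop introduced above: each
 * firmware chunk is copied into whichever reserved-memory region matches
 * its load address and fits its length.
 */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct wo_region {
	unsigned long phy_addr;
	size_t size;
	void *addr;		/* mapped destination */
	bool shared;
	bool consumed;
};

static int load_chunk(struct wo_region *regions, int n_regions,
		      unsigned long load_addr, const void *data, size_t len)
{
	int i;

	for (i = 0; i < n_regions; i++) {
		struct wo_region *r = &regions[i];

		if (r->phy_addr != load_addr || r->size < len)
			continue;
		if (r->shared && r->consumed)
			break;		/* shared region already populated */

		memcpy(r->addr, data, len);
		r->consumed = true;
		break;
	}

	return i == n_regions ? -1 : 0;	/* -1: no region accepts this chunk */
}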
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_regs.h b/drivers/net/ethernet/mediatek/mtk_wed_regs.h
index 47ea69feb3b2..c71190924816 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_regs.h
+++ b/drivers/net/ethernet/mediatek/mtk_wed_regs.h
@@ -13,6 +13,9 @@
#define MTK_WDMA_DESC_CTRL_LAST_SEG0 BIT(30)
#define MTK_WDMA_DESC_CTRL_DMA_DONE BIT(31)
+#define MTK_WDMA_TXD0_DESC_INFO_DMA_DONE BIT(29)
+#define MTK_WDMA_TXD1_DESC_INFO_DMA_DONE BIT(31)
+
struct mtk_wdma_desc {
__le32 buf0;
__le32 ctrl;
@@ -25,6 +28,8 @@ struct mtk_wdma_desc {
#define MTK_WED_RESET 0x008
#define MTK_WED_RESET_TX_BM BIT(0)
#define MTK_WED_RESET_RX_BM BIT(1)
+#define MTK_WED_RESET_RX_PG_BM BIT(2)
+#define MTK_WED_RESET_RRO_RX_TO_PG BIT(3)
#define MTK_WED_RESET_TX_FREE_AGENT BIT(4)
#define MTK_WED_RESET_WPDMA_TX_DRV BIT(8)
#define MTK_WED_RESET_WPDMA_RX_DRV BIT(9)
@@ -37,6 +42,7 @@ struct mtk_wdma_desc {
#define MTK_WED_RESET_WDMA_INT_AGENT BIT(19)
#define MTK_WED_RESET_RX_RRO_QM BIT(20)
#define MTK_WED_RESET_RX_ROUTE_QM BIT(21)
+#define MTK_WED_RESET_TX_AMSDU BIT(22)
#define MTK_WED_RESET_WED BIT(31)
#define MTK_WED_CTRL 0x00c
@@ -44,6 +50,9 @@ struct mtk_wdma_desc {
#define MTK_WED_CTRL_WPDMA_INT_AGENT_BUSY BIT(1)
#define MTK_WED_CTRL_WDMA_INT_AGENT_EN BIT(2)
#define MTK_WED_CTRL_WDMA_INT_AGENT_BUSY BIT(3)
+#define MTK_WED_CTRL_WED_RX_IND_CMD_EN BIT(5)
+#define MTK_WED_CTRL_WED_RX_PG_BM_EN BIT(6)
+#define MTK_WED_CTRL_WED_RX_PG_BM_BUSY BIT(7)
#define MTK_WED_CTRL_WED_TX_BM_EN BIT(8)
#define MTK_WED_CTRL_WED_TX_BM_BUSY BIT(9)
#define MTK_WED_CTRL_WED_TX_FREE_AGENT_EN BIT(10)
@@ -54,9 +63,14 @@ struct mtk_wdma_desc {
#define MTK_WED_CTRL_RX_RRO_QM_BUSY BIT(15)
#define MTK_WED_CTRL_RX_ROUTE_QM_EN BIT(16)
#define MTK_WED_CTRL_RX_ROUTE_QM_BUSY BIT(17)
+#define MTK_WED_CTRL_TX_TKID_ALI_EN BIT(20)
+#define MTK_WED_CTRL_TX_TKID_ALI_BUSY BIT(21)
+#define MTK_WED_CTRL_TX_AMSDU_EN BIT(22)
+#define MTK_WED_CTRL_TX_AMSDU_BUSY BIT(23)
#define MTK_WED_CTRL_FINAL_DIDX_READ BIT(24)
#define MTK_WED_CTRL_ETH_DMAD_FMT BIT(25)
#define MTK_WED_CTRL_MIB_READ_CLEAR BIT(28)
+#define MTK_WED_CTRL_FLD_MIB_RD_CLR BIT(28)
#define MTK_WED_EXT_INT_STATUS 0x020
#define MTK_WED_EXT_INT_STATUS_TF_LEN_ERR BIT(0)
@@ -64,8 +78,8 @@ struct mtk_wdma_desc {
#define MTK_WED_EXT_INT_STATUS_TKID_TITO_INVALID BIT(4)
#define MTK_WED_EXT_INT_STATUS_TX_FBUF_LO_TH BIT(8)
#define MTK_WED_EXT_INT_STATUS_TX_FBUF_HI_TH BIT(9)
-#define MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH BIT(12)
-#define MTK_WED_EXT_INT_STATUS_RX_FBUF_HI_TH BIT(13)
+#define MTK_WED_EXT_INT_STATUS_RX_FBUF_LO_TH BIT(10) /* wed v2 */
+#define MTK_WED_EXT_INT_STATUS_RX_FBUF_HI_TH BIT(11) /* wed v2 */
#define MTK_WED_EXT_INT_STATUS_RX_DRV_R_RESP_ERR BIT(16)
#define MTK_WED_EXT_INT_STATUS_RX_DRV_W_RESP_ERR BIT(17)
#define MTK_WED_EXT_INT_STATUS_RX_DRV_COHERENT BIT(18)
@@ -89,19 +103,26 @@ struct mtk_wdma_desc {
#define MTK_WED_EXT_INT_MASK 0x028
#define MTK_WED_EXT_INT_MASK1 0x02c
#define MTK_WED_EXT_INT_MASK2 0x030
+#define MTK_WED_EXT_INT_MASK3 0x034
#define MTK_WED_STATUS 0x060
#define MTK_WED_STATUS_TX GENMASK(15, 8)
+#define MTK_WED_WPDMA_STATUS 0x068
+#define MTK_WED_WPDMA_STATUS_TX_DRV GENMASK(15, 8)
+
#define MTK_WED_TX_BM_CTRL 0x080
#define MTK_WED_TX_BM_CTRL_VLD_GRP_NUM GENMASK(6, 0)
#define MTK_WED_TX_BM_CTRL_RSV_GRP_NUM GENMASK(22, 16)
+#define MTK_WED_TX_BM_CTRL_LEGACY_EN BIT(26)
+#define MTK_WED_TX_TKID_CTRL_FREE_FORMAT BIT(27)
#define MTK_WED_TX_BM_CTRL_PAUSE BIT(28)
#define MTK_WED_TX_BM_BASE 0x084
+#define MTK_WED_TX_BM_INIT_PTR 0x088
+#define MTK_WED_TX_BM_SW_TAIL_IDX GENMASK(16, 0)
+#define MTK_WED_TX_BM_INIT_SW_TAIL_IDX BIT(16)
-#define MTK_WED_TX_BM_TKID 0x088
-#define MTK_WED_TX_BM_TKID_V2 0x0c8
#define MTK_WED_TX_BM_TKID_START GENMASK(15, 0)
#define MTK_WED_TX_BM_TKID_END GENMASK(31, 16)
@@ -124,6 +145,12 @@ struct mtk_wdma_desc {
#define MTK_WED_TX_TKID_CTRL_RSV_GRP_NUM GENMASK(22, 16)
#define MTK_WED_TX_TKID_CTRL_PAUSE BIT(28)
+#define MTK_WED_TX_TKID_INTF 0x0dc
+#define MTK_WED_TX_TKID_INTF_TKFIFO_FDEP GENMASK(25, 16)
+
+#define MTK_WED_TX_TKID_CTRL_VLD_GRP_NUM_V3 GENMASK(7, 0)
+#define MTK_WED_TX_TKID_CTRL_RSV_GRP_NUM_V3 GENMASK(23, 16)
+
#define MTK_WED_TX_TKID_DYN_THR 0x0e0
#define MTK_WED_TX_TKID_DYN_THR_LO GENMASK(6, 0)
#define MTK_WED_TX_TKID_DYN_THR_HI GENMASK(22, 16)
@@ -160,9 +187,6 @@ struct mtk_wdma_desc {
#define MTK_WED_GLO_CFG_RX_2B_OFFSET BIT(31)
#define MTK_WED_RESET_IDX 0x20c
-#define MTK_WED_RESET_IDX_TX GENMASK(3, 0)
-#define MTK_WED_RESET_IDX_RX GENMASK(17, 16)
-#define MTK_WED_RESET_IDX_RX_V2 GENMASK(7, 6)
#define MTK_WED_RESET_WPDMA_IDX_RX GENMASK(31, 30)
#define MTK_WED_TX_MIB(_n) (0x2a0 + (_n) * 4)
@@ -174,6 +198,7 @@ struct mtk_wdma_desc {
#define MTK_WED_RING_RX_DATA(_n) (0x420 + (_n) * 0x10)
#define MTK_WED_SCR0 0x3c0
+#define MTK_WED_RX1_CTRL2 0x418
#define MTK_WED_WPDMA_INT_TRIGGER 0x504
#define MTK_WED_WPDMA_INT_TRIGGER_RX_DONE BIT(1)
#define MTK_WED_WPDMA_INT_TRIGGER_TX_DONE GENMASK(5, 4)
@@ -204,12 +229,15 @@ struct mtk_wdma_desc {
#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_R1_PKT_PROC BIT(5)
#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_R0_CRX_SYNC BIT(6)
#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_R1_CRX_SYNC BIT(7)
-#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_EVENT_PKT_FMT_VER GENMASK(18, 16)
+#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_EVENT_PKT_FMT_VER GENMASK(15, 12)
+#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_UNS_VER_FORCE_4 BIT(18)
#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_UNSUPPORT_FMT BIT(19)
-#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_UEVENT_PKT_FMT_CHK BIT(20)
+#define MTK_WED_WPDMA_GLO_CFG_RX_DRV_EVENT_PKT_FMT_CHK BIT(20)
#define MTK_WED_WPDMA_GLO_CFG_RX_DDONE2_WR BIT(21)
#define MTK_WED_WPDMA_GLO_CFG_TX_TKID_KEEP BIT(24)
+#define MTK_WED_WPDMA_GLO_CFG_TX_DDONE_CHK_LAST BIT(25)
#define MTK_WED_WPDMA_GLO_CFG_TX_DMAD_DW3_PREV BIT(28)
+#define MTK_WED_WPDMA_GLO_CFG_TX_DDONE_CHK BIT(30)
#define MTK_WED_WPDMA_RESET_IDX 0x50c
#define MTK_WED_WPDMA_RESET_IDX_TX GENMASK(3, 0)
@@ -255,9 +283,10 @@ struct mtk_wdma_desc {
#define MTK_WED_PCIE_INT_TRIGGER_STATUS BIT(16)
#define MTK_WED_PCIE_INT_CTRL 0x57c
-#define MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA BIT(20)
-#define MTK_WED_PCIE_INT_CTRL_SRC_SEL GENMASK(17, 16)
#define MTK_WED_PCIE_INT_CTRL_POLL_EN GENMASK(13, 12)
+#define MTK_WED_PCIE_INT_CTRL_SRC_SEL GENMASK(17, 16)
+#define MTK_WED_PCIE_INT_CTRL_MSK_EN_POLA BIT(20)
+#define MTK_WED_PCIE_INT_CTRL_MSK_IRQ_FILTER BIT(21)
#define MTK_WED_WPDMA_CFG_BASE 0x580
#define MTK_WED_WPDMA_CFG_INT_MASK 0x584
@@ -283,15 +312,30 @@ struct mtk_wdma_desc {
#define MTK_WED_WPDMA_RX_D_RST_IDX 0x760
#define MTK_WED_WPDMA_RX_D_RST_CRX_IDX GENMASK(17, 16)
+#define MTK_WED_WPDMA_RX_D_RST_DRV_IDX_ALL BIT(20)
#define MTK_WED_WPDMA_RX_D_RST_DRV_IDX GENMASK(25, 24)
#define MTK_WED_WPDMA_RX_GLO_CFG 0x76c
-#define MTK_WED_WPDMA_RX_RING 0x770
#define MTK_WED_WPDMA_RX_D_MIB(_n) (0x774 + (_n) * 4)
#define MTK_WED_WPDMA_RX_D_PROCESSED_MIB(_n) (0x784 + (_n) * 4)
#define MTK_WED_WPDMA_RX_D_COHERENT_MIB 0x78c
+#define MTK_WED_WPDMA_RX_D_PREF_CFG 0x7b4
+#define MTK_WED_WPDMA_RX_D_PREF_EN BIT(0)
+#define MTK_WED_WPDMA_RX_D_PREF_BUSY BIT(1)
+#define MTK_WED_WPDMA_RX_D_PREF_BURST_SIZE GENMASK(12, 8)
+#define MTK_WED_WPDMA_RX_D_PREF_LOW_THRES GENMASK(21, 16)
+
+#define MTK_WED_WPDMA_RX_D_PREF_RX0_SIDX 0x7b8
+#define MTK_WED_WPDMA_RX_D_PREF_SIDX_IDX_CLR BIT(15)
+
+#define MTK_WED_WPDMA_RX_D_PREF_RX1_SIDX 0x7bc
+
+#define MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG 0x7c0
+#define MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG_R0_CLR BIT(0)
+#define MTK_WED_WPDMA_RX_D_PREF_FIFO_CFG_R1_CLR BIT(16)
+
#define MTK_WED_WDMA_RING_TX 0x800
#define MTK_WED_WDMA_TX_MIB 0x810
@@ -299,6 +343,20 @@ struct mtk_wdma_desc {
#define MTK_WED_WDMA_RING_RX(_n) (0x900 + (_n) * 0x10)
#define MTK_WED_WDMA_RX_THRES(_n) (0x940 + (_n) * 0x4)
+#define MTK_WED_WDMA_RX_PREF_CFG 0x950
+#define MTK_WED_WDMA_RX_PREF_EN BIT(0)
+#define MTK_WED_WDMA_RX_PREF_BUSY BIT(1)
+#define MTK_WED_WDMA_RX_PREF_BURST_SIZE GENMASK(12, 8)
+#define MTK_WED_WDMA_RX_PREF_LOW_THRES GENMASK(21, 16)
+#define MTK_WED_WDMA_RX_PREF_RX0_SIDX_CLR BIT(24)
+#define MTK_WED_WDMA_RX_PREF_RX1_SIDX_CLR BIT(25)
+#define MTK_WED_WDMA_RX_PREF_DDONE2_EN BIT(26)
+#define MTK_WED_WDMA_RX_PREF_DDONE2_BUSY BIT(27)
+
+#define MTK_WED_WDMA_RX_PREF_FIFO_CFG 0x95C
+#define MTK_WED_WDMA_RX_PREF_FIFO_RX0_CLR BIT(0)
+#define MTK_WED_WDMA_RX_PREF_FIFO_RX1_CLR BIT(16)
+
#define MTK_WED_WDMA_GLO_CFG 0xa04
#define MTK_WED_WDMA_GLO_CFG_TX_DRV_EN BIT(0)
#define MTK_WED_WDMA_GLO_CFG_TX_DDONE_CHK BIT(1)
@@ -322,6 +380,7 @@ struct mtk_wdma_desc {
#define MTK_WED_WDMA_RESET_IDX 0xa08
#define MTK_WED_WDMA_RESET_IDX_RX GENMASK(17, 16)
+#define MTK_WED_WDMA_RESET_IDX_RX_ALL BIT(20)
#define MTK_WED_WDMA_RESET_IDX_DRV GENMASK(25, 24)
#define MTK_WED_WDMA_INT_CLR 0xa24
@@ -331,6 +390,7 @@ struct mtk_wdma_desc {
#define MTK_WED_WDMA_INT_TRIGGER_RX_DONE GENMASK(17, 16)
#define MTK_WED_WDMA_INT_CTRL 0xa2c
+#define MTK_WED_WDMA_INT_POLL_PRD GENMASK(7, 0)
#define MTK_WED_WDMA_INT_CTRL_POLL_SRC_SEL GENMASK(17, 16)
#define MTK_WED_WDMA_CFG_BASE 0xaa0
@@ -391,9 +451,62 @@ struct mtk_wdma_desc {
#define MTK_WDMA_INT_MASK_RX_DELAY BIT(30)
#define MTK_WDMA_INT_MASK_RX_COHERENT BIT(31)
+#define MTK_WDMA_XDMA_TX_FIFO_CFG 0x238
+#define MTK_WDMA_XDMA_TX_FIFO_CFG_TX_PAR_FIFO_CLEAR BIT(0)
+#define MTK_WDMA_XDMA_TX_FIFO_CFG_TX_CMD_FIFO_CLEAR BIT(4)
+#define MTK_WDMA_XDMA_TX_FIFO_CFG_TX_DMAD_FIFO_CLEAR BIT(8)
+#define MTK_WDMA_XDMA_TX_FIFO_CFG_TX_ARR_FIFO_CLEAR BIT(12)
+
+#define MTK_WDMA_XDMA_RX_FIFO_CFG 0x23c
+#define MTK_WDMA_XDMA_RX_FIFO_CFG_RX_PAR_FIFO_CLEAR BIT(0)
+#define MTK_WDMA_XDMA_RX_FIFO_CFG_RX_CMD_FIFO_CLEAR BIT(4)
+#define MTK_WDMA_XDMA_RX_FIFO_CFG_RX_DMAD_FIFO_CLEAR BIT(8)
+#define MTK_WDMA_XDMA_RX_FIFO_CFG_RX_ARR_FIFO_CLEAR BIT(12)
+#define MTK_WDMA_XDMA_RX_FIFO_CFG_RX_LEN_FIFO_CLEAR BIT(15)
+#define MTK_WDMA_XDMA_RX_FIFO_CFG_RX_WID_FIFO_CLEAR BIT(18)
+#define MTK_WDMA_XDMA_RX_FIFO_CFG_RX_BID_FIFO_CLEAR BIT(21)
+
#define MTK_WDMA_INT_GRP1 0x250
#define MTK_WDMA_INT_GRP2 0x254
+#define MTK_WDMA_PREF_TX_CFG 0x2d0
+#define MTK_WDMA_PREF_TX_CFG_PREF_EN BIT(0)
+#define MTK_WDMA_PREF_TX_CFG_PREF_BUSY BIT(1)
+
+#define MTK_WDMA_PREF_RX_CFG 0x2dc
+#define MTK_WDMA_PREF_RX_CFG_PREF_EN BIT(0)
+#define MTK_WDMA_PREF_RX_CFG_PREF_BUSY BIT(1)
+
+#define MTK_WDMA_PREF_RX_FIFO_CFG 0x2e0
+#define MTK_WDMA_PREF_RX_FIFO_CFG_RING0_CLEAR BIT(0)
+#define MTK_WDMA_PREF_RX_FIFO_CFG_RING1_CLEAR BIT(16)
+
+#define MTK_WDMA_PREF_TX_FIFO_CFG 0x2d4
+#define MTK_WDMA_PREF_TX_FIFO_CFG_RING0_CLEAR BIT(0)
+#define MTK_WDMA_PREF_TX_FIFO_CFG_RING1_CLEAR BIT(16)
+
+#define MTK_WDMA_PREF_SIDX_CFG 0x2e4
+#define MTK_WDMA_PREF_SIDX_CFG_TX_RING_CLEAR GENMASK(3, 0)
+#define MTK_WDMA_PREF_SIDX_CFG_RX_RING_CLEAR GENMASK(5, 4)
+
+#define MTK_WDMA_WRBK_TX_CFG 0x300
+#define MTK_WDMA_WRBK_TX_CFG_WRBK_BUSY BIT(0)
+#define MTK_WDMA_WRBK_TX_CFG_WRBK_EN BIT(30)
+
+#define MTK_WDMA_WRBK_TX_FIFO_CFG(_n) (0x304 + (_n) * 0x4)
+#define MTK_WDMA_WRBK_TX_FIFO_CFG_RING_CLEAR BIT(0)
+
+#define MTK_WDMA_WRBK_RX_CFG 0x344
+#define MTK_WDMA_WRBK_RX_CFG_WRBK_BUSY BIT(0)
+#define MTK_WDMA_WRBK_RX_CFG_WRBK_EN BIT(30)
+
+#define MTK_WDMA_WRBK_RX_FIFO_CFG(_n) (0x348 + (_n) * 0x4)
+#define MTK_WDMA_WRBK_RX_FIFO_CFG_RING_CLEAR BIT(0)
+
+#define MTK_WDMA_WRBK_SIDX_CFG 0x388
+#define MTK_WDMA_WRBK_SIDX_CFG_TX_RING_CLEAR GENMASK(3, 0)
+#define MTK_WDMA_WRBK_SIDX_CFG_RX_RING_CLEAR GENMASK(5, 4)
+
#define MTK_PCIE_MIRROR_MAP(n) ((n) ? 0x4 : 0x0)
#define MTK_PCIE_MIRROR_MAP_EN BIT(0)
#define MTK_PCIE_MIRROR_MAP_WED_ID BIT(1)
@@ -407,6 +520,32 @@ struct mtk_wdma_desc {
#define MTK_WED_RTQM_Q_DBG_BYPASS BIT(5)
#define MTK_WED_RTQM_TXDMAD_FPORT GENMASK(23, 20)
+#define MTK_WED_RTQM_RST 0xb04
+
+#define MTK_WED_RTQM_IGRS0_I2HW_DMAD_CNT 0xb1c
+#define MTK_WED_RTQM_IGRS0_I2H_DMAD_CNT(_n) (0xb20 + (_n) * 0x4)
+#define MTK_WED_RTQM_IGRS0_I2HW_PKT_CNT 0xb28
+#define MTK_WED_RTQM_IGRS0_I2H_PKT_CNT(_n) (0xb2c + (_n) * 0x4)
+#define MTK_WED_RTQM_IGRS0_FDROP_CNT 0xb34
+
+#define MTK_WED_RTQM_IGRS1_I2HW_DMAD_CNT 0xb44
+#define MTK_WED_RTQM_IGRS1_I2H_DMAD_CNT(_n) (0xb48 + (_n) * 0x4)
+#define MTK_WED_RTQM_IGRS1_I2HW_PKT_CNT 0xb50
+#define MTK_WED_RTQM_IGRS1_I2H_PKT_CNT(_n) (0xb54 + (_n) * 0x4)
+#define MTK_WED_RTQM_IGRS1_FDROP_CNT 0xb5c
+
+#define MTK_WED_RTQM_IGRS2_I2HW_DMAD_CNT 0xb6c
+#define MTK_WED_RTQM_IGRS2_I2H_DMAD_CNT(_n) (0xb70 + (_n) * 0x4)
+#define MTK_WED_RTQM_IGRS2_I2HW_PKT_CNT 0xb78
+#define MTK_WED_RTQM_IGRS2_I2H_PKT_CNT(_n) (0xb7c + (_n) * 0x4)
+#define MTK_WED_RTQM_IGRS2_FDROP_CNT 0xb84
+
+#define MTK_WED_RTQM_IGRS3_I2HW_DMAD_CNT 0xb94
+#define MTK_WED_RTQM_IGRS3_I2H_DMAD_CNT(_n) (0xb98 + (_n) * 0x4)
+#define MTK_WED_RTQM_IGRS3_I2HW_PKT_CNT 0xba0
+#define MTK_WED_RTQM_IGRS3_I2H_PKT_CNT(_n) (0xba4 + (_n) * 0x4)
+#define MTK_WED_RTQM_IGRS3_FDROP_CNT 0xbac
+
#define MTK_WED_RTQM_R2H_MIB(_n) (0xb70 + (_n) * 0x4)
#define MTK_WED_RTQM_R2Q_MIB(_n) (0xb78 + (_n) * 0x4)
#define MTK_WED_RTQM_Q2N_MIB 0xb80
@@ -415,6 +554,24 @@ struct mtk_wdma_desc {
#define MTK_WED_RTQM_Q2B_MIB 0xb8c
#define MTK_WED_RTQM_PFDBK_MIB 0xb90
+#define MTK_WED_RTQM_ENQ_CFG0 0xbb8
+#define MTK_WED_RTQM_ENQ_CFG_TXDMAD_FPORT GENMASK(15, 12)
+
+#define MTK_WED_RTQM_FDROP_MIB 0xb84
+#define MTK_WED_RTQM_ENQ_I2Q_DMAD_CNT 0xbbc
+#define MTK_WED_RTQM_ENQ_I2N_DMAD_CNT 0xbc0
+#define MTK_WED_RTQM_ENQ_I2Q_PKT_CNT 0xbc4
+#define MTK_WED_RTQM_ENQ_I2N_PKT_CNT 0xbc8
+#define MTK_WED_RTQM_ENQ_USED_ENTRY_CNT 0xbcc
+#define MTK_WED_RTQM_ENQ_ERR_CNT 0xbd0
+
+#define MTK_WED_RTQM_DEQ_DMAD_CNT 0xbd8
+#define MTK_WED_RTQM_DEQ_Q2I_DMAD_CNT 0xbdc
+#define MTK_WED_RTQM_DEQ_PKT_CNT 0xbe0
+#define MTK_WED_RTQM_DEQ_Q2I_PKT_CNT 0xbe4
+#define MTK_WED_RTQM_DEQ_USED_PFDBK_CNT 0xbe8
+#define MTK_WED_RTQM_DEQ_ERR_CNT 0xbec
+
#define MTK_WED_RROQM_GLO_CFG 0xc04
#define MTK_WED_RROQM_RST_IDX 0xc08
#define MTK_WED_RROQM_RST_IDX_MIOD BIT(0)
@@ -464,7 +621,195 @@ struct mtk_wdma_desc {
#define MTK_WED_RX_BM_INTF 0xd9c
#define MTK_WED_RX_BM_ERR_STS 0xda8
+#define MTK_RRO_IND_CMD_SIGNATURE 0xe00
+#define MTK_RRO_IND_CMD_DMA_IDX GENMASK(11, 0)
+#define MTK_RRO_IND_CMD_MAGIC_CNT GENMASK(30, 28)
+
+#define MTK_WED_IND_CMD_RX_CTRL0 0xe04
+#define MTK_WED_IND_CMD_PROC_IDX GENMASK(11, 0)
+#define MTK_WED_IND_CMD_PREFETCH_FREE_CNT GENMASK(19, 16)
+#define MTK_WED_IND_CMD_MAGIC_CNT GENMASK(30, 28)
+
+#define MTK_WED_IND_CMD_RX_CTRL1 0xe08
+#define MTK_WED_IND_CMD_RX_CTRL2 0xe0c
+#define MTK_WED_IND_CMD_MAX_CNT GENMASK(11, 0)
+#define MTK_WED_IND_CMD_BASE_M GENMASK(19, 16)
+
+#define MTK_WED_RRO_CFG0 0xe10
+#define MTK_WED_RRO_CFG1 0xe14
+#define MTK_WED_RRO_CFG1_MAX_WIN_SZ GENMASK(31, 29)
+#define MTK_WED_RRO_CFG1_ACK_SN_BASE_M GENMASK(19, 16)
+#define MTK_WED_RRO_CFG1_PARTICL_SE_ID GENMASK(11, 0)
+
+#define MTK_WED_ADDR_ELEM_CFG0 0xe18
+#define MTK_WED_ADDR_ELEM_CFG1 0xe1c
+#define MTK_WED_ADDR_ELEM_PREFETCH_FREE_CNT GENMASK(19, 16)
+
+#define MTK_WED_ADDR_ELEM_TBL_CFG 0xe20
+#define MTK_WED_ADDR_ELEM_TBL_OFFSET GENMASK(6, 0)
+#define MTK_WED_ADDR_ELEM_TBL_RD_RDY BIT(28)
+#define MTK_WED_ADDR_ELEM_TBL_WR_RDY BIT(29)
+#define MTK_WED_ADDR_ELEM_TBL_RD BIT(30)
+#define MTK_WED_ADDR_ELEM_TBL_WR BIT(31)
+
+#define MTK_WED_RADDR_ELEM_TBL_WDATA 0xe24
+#define MTK_WED_RADDR_ELEM_TBL_RDATA 0xe28
+
+#define MTK_WED_PN_CHECK_CFG 0xe30
+#define MTK_WED_PN_CHECK_SE_ID GENMASK(11, 0)
+#define MTK_WED_PN_CHECK_RD_RDY BIT(28)
+#define MTK_WED_PN_CHECK_WR_RDY BIT(29)
+#define MTK_WED_PN_CHECK_RD BIT(30)
+#define MTK_WED_PN_CHECK_WR BIT(31)
+
+#define MTK_WED_PN_CHECK_WDATA_M 0xe38
+#define MTK_WED_PN_CHECK_IS_FIRST BIT(17)
+
+#define MTK_WED_RRO_MSDU_PG_RING_CFG(_n) (0xe44 + (_n) * 0x8)
+
+#define MTK_WED_RRO_MSDU_PG_RING2_CFG 0xe58
+#define MTK_WED_RRO_MSDU_PG_DRV_CLR BIT(26)
+#define MTK_WED_RRO_MSDU_PG_DRV_EN BIT(31)
+
+#define MTK_WED_RRO_MSDU_PG_CTRL0(_n) (0xe5c + (_n) * 0xc)
+#define MTK_WED_RRO_MSDU_PG_CTRL1(_n) (0xe60 + (_n) * 0xc)
+#define MTK_WED_RRO_MSDU_PG_CTRL2(_n) (0xe64 + (_n) * 0xc)
+
+#define MTK_WED_RRO_RX_D_RX(_n) (0xe80 + (_n) * 0x10)
+
+#define MTK_WED_RRO_RX_MAGIC_CNT BIT(13)
+
+#define MTK_WED_RRO_RX_D_CFG(_n) (0xea0 + (_n) * 0x4)
+#define MTK_WED_RRO_RX_D_DRV_CLR BIT(26)
+#define MTK_WED_RRO_RX_D_DRV_EN BIT(31)
+
+#define MTK_WED_RRO_PG_BM_RX_DMAM 0xeb0
+#define MTK_WED_RRO_PG_BM_RX_SDL0 GENMASK(13, 0)
+
+#define MTK_WED_RRO_PG_BM_BASE 0xeb4
+#define MTK_WED_RRO_PG_BM_INIT_PTR 0xeb8
+#define MTK_WED_RRO_PG_BM_SW_TAIL_IDX GENMASK(15, 0)
+#define MTK_WED_RRO_PG_BM_INIT_SW_TAIL_IDX BIT(16)
+
+#define MTK_WED_WPDMA_INT_CTRL_RRO_RX 0xeec
+#define MTK_WED_WPDMA_INT_CTRL_RRO_RX0_EN BIT(0)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_RX0_CLR BIT(1)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_RX0_DONE_TRIG GENMASK(6, 2)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_RX1_EN BIT(8)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_RX1_CLR BIT(9)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_RX1_DONE_TRIG GENMASK(14, 10)
+
+#define MTK_WED_WPDMA_INT_CTRL_RRO_MSDU_PG 0xef4
+#define MTK_WED_WPDMA_INT_CTRL_RRO_PG0_EN BIT(0)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_PG0_CLR BIT(1)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_PG0_DONE_TRIG GENMASK(6, 2)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_PG1_EN BIT(8)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_PG1_CLR BIT(9)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_PG1_DONE_TRIG GENMASK(14, 10)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_PG2_EN BIT(16)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_PG2_CLR BIT(17)
+#define MTK_WED_WPDMA_INT_CTRL_RRO_PG2_DONE_TRIG GENMASK(22, 18)
+
+#define MTK_WED_RRO_RX_HW_STS 0xf00
+#define MTK_WED_RX_IND_CMD_BUSY GENMASK(31, 0)
+
+#define MTK_WED_RX_IND_CMD_CNT0 0xf20
+#define MTK_WED_RX_IND_CMD_DBG_CNT_EN BIT(31)
+
+#define MTK_WED_RX_IND_CMD_CNT(_n) (0xf20 + (_n) * 0x4)
+#define MTK_WED_IND_CMD_MAGIC_CNT_FAIL_CNT GENMASK(15, 0)
+
+#define MTK_WED_RX_ADDR_ELEM_CNT(_n) (0xf48 + (_n) * 0x4)
+#define MTK_WED_ADDR_ELEM_SIG_FAIL_CNT GENMASK(15, 0)
+#define MTK_WED_ADDR_ELEM_FIRST_SIG_FAIL_CNT GENMASK(31, 16)
+#define MTK_WED_ADDR_ELEM_ACKSN_CNT GENMASK(27, 0)
+
+#define MTK_WED_RX_MSDU_PG_CNT(_n) (0xf5c + (_n) * 0x4)
+
+#define MTK_WED_RX_PN_CHK_CNT 0xf70
+#define MTK_WED_PN_CHK_FAIL_CNT GENMASK(15, 0)
+
#define MTK_WED_WOCPU_VIEW_MIOD_BASE 0x8000
#define MTK_WED_PCIE_INT_MASK 0x0
+#define MTK_WED_AMSDU_FIFO 0x1800
+#define MTK_WED_AMSDU_IS_PRIOR0_RING BIT(10)
+
+#define MTK_WED_AMSDU_STA_INFO 0x01810
+#define MTK_WED_AMSDU_STA_INFO_DO_INIT BIT(0)
+#define MTK_WED_AMSDU_STA_INFO_SET_INIT BIT(1)
+
+#define MTK_WED_AMSDU_STA_INFO_INIT 0x01814
+#define MTK_WED_AMSDU_STA_WTBL_HDRT_MODE BIT(0)
+#define MTK_WED_AMSDU_STA_RMVL BIT(1)
+#define MTK_WED_AMSDU_STA_MAX_AMSDU_LEN GENMASK(7, 2)
+#define MTK_WED_AMSDU_STA_MAX_AMSDU_NUM GENMASK(11, 8)
+
+#define MTK_WED_AMSDU_HIFTXD_BASE_L(_n) (0x1980 + (_n) * 0x4)
+
+#define MTK_WED_AMSDU_PSE 0x1910
+#define MTK_WED_AMSDU_PSE_RESET BIT(16)
+
+#define MTK_WED_AMSDU_HIFTXD_CFG 0x1968
+#define MTK_WED_AMSDU_HIFTXD_SRC GENMASK(16, 15)
+
+#define MTK_WED_MON_AMSDU_FIFO_DMAD 0x1a34
+
+#define MTK_WED_MON_AMSDU_ENG_DMAD(_n) (0x1a80 + (_n) * 0x50)
+#define MTK_WED_MON_AMSDU_ENG_QFPL(_n) (0x1a84 + (_n) * 0x50)
+#define MTK_WED_MON_AMSDU_ENG_QENI(_n) (0x1a88 + (_n) * 0x50)
+#define MTK_WED_MON_AMSDU_ENG_QENO(_n) (0x1a8c + (_n) * 0x50)
+#define MTK_WED_MON_AMSDU_ENG_MERG(_n) (0x1a90 + (_n) * 0x50)
+
+#define MTK_WED_MON_AMSDU_ENG_CNT8(_n) (0x1a94 + (_n) * 0x50)
+#define MTK_WED_AMSDU_ENG_MAX_QGPP_CNT GENMASK(10, 0)
+#define MTK_WED_AMSDU_ENG_MAX_PL_CNT GENMASK(27, 16)
+
+#define MTK_WED_MON_AMSDU_ENG_CNT9(_n) (0x1a98 + (_n) * 0x50)
+#define MTK_WED_AMSDU_ENG_CUR_ENTRY GENMASK(10, 0)
+#define MTK_WED_AMSDU_ENG_MAX_BUF_MERGED GENMASK(20, 16)
+#define MTK_WED_AMSDU_ENG_MAX_MSDU_MERGED GENMASK(28, 24)
+
+#define MTK_WED_MON_AMSDU_QMEM_STS1 0x1e04
+
+#define MTK_WED_MON_AMSDU_QMEM_CNT(_n) (0x1e0c + (_n) * 0x4)
+#define MTK_WED_AMSDU_QMEM_FQ_CNT GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_SP_QCNT GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID0_QCNT GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID1_QCNT GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID2_QCNT GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID3_QCNT GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID4_QCNT GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID5_QCNT GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID6_QCNT GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID7_QCNT GENMASK(11, 0)
+
+#define MTK_WED_MON_AMSDU_QMEM_PTR(_n) (0x1e20 + (_n) * 0x4)
+#define MTK_WED_AMSDU_QMEM_FQ_HEAD GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_SP_QHEAD GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID0_QHEAD GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID1_QHEAD GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID2_QHEAD GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID3_QHEAD GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID4_QHEAD GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID5_QHEAD GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID6_QHEAD GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID7_QHEAD GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_FQ_TAIL GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_SP_QTAIL GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID0_QTAIL GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID1_QTAIL GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID2_QTAIL GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID3_QTAIL GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID4_QTAIL GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID5_QTAIL GENMASK(11, 0)
+#define MTK_WED_AMSDU_QMEM_TID6_QTAIL GENMASK(27, 16)
+#define MTK_WED_AMSDU_QMEM_TID7_QTAIL GENMASK(11, 0)
+
+#define MTK_WED_MON_AMSDU_HIFTXD_FETCH_MSDU(_n) (0x1ec4 + (_n) * 0x4)
+
+#define MTK_WED_PCIE_BASE 0x11280000
+#define MTK_WED_PCIE_BASE0 0x11300000
+#define MTK_WED_PCIE_BASE1 0x11310000
+#define MTK_WED_PCIE_BASE2 0x11290000
#endif
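
Most of the new definitions above are bitfields meant to be composed with FIELD_PREP()/FIELD_GET(). The standalone example below builds MTK_WED_WPDMA_RX_D_PREF_CFG from its new fields (PREF_EN = BIT(0), BURST_SIZE = GENMASK(12, 8), LOW_THRES = GENMASK(21, 16)); the burst-size and threshold values are arbitrary examples, not values taken from the driver.

/* Illustrative composition of the RX prefetch config word from the new
 * field definitions; userspace re-implementations of BIT/GENMASK/FIELD_PREP
 * are used so the snippet stands alone.
 */
#include <stdint.h>
#include <stdio.h>

#define BIT(n)		(1u << (n))
#define GENMASK(h, l)	(((~0u) >> (31 - (h))) & ~((1u << (l)) - 1u))

static uint32_t field_prep(uint32_t mask, uint32_t val)
{
	return (val << __builtin_ctz(mask)) & mask;
}

int main(void)
{
	uint32_t cfg = BIT(0) |				/* PREF_EN */
		       field_prep(GENMASK(12, 8), 8) |	/* example burst size */
		       field_prep(GENMASK(21, 16), 4);	/* example low threshold */

	printf("RX_D_PREF_CFG = %#x\n", (unsigned int)cfg);	/* 0x40801 */
	return 0;
}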
diff --git a/drivers/net/ethernet/mediatek/mtk_wed_wo.h b/drivers/net/ethernet/mediatek/mtk_wed_wo.h
index 7a1a2a28f1ac..87a67fa3868d 100644
--- a/drivers/net/ethernet/mediatek/mtk_wed_wo.h
+++ b/drivers/net/ethernet/mediatek/mtk_wed_wo.h
@@ -91,6 +91,8 @@ enum mtk_wed_dummy_cr_idx {
#define MT7981_FIRMWARE_WO "mediatek/mt7981_wo.bin"
#define MT7986_FIRMWARE_WO0 "mediatek/mt7986_wo_0.bin"
#define MT7986_FIRMWARE_WO1 "mediatek/mt7986_wo_1.bin"
+#define MT7988_FIRMWARE_WO0 "mediatek/mt7988_wo_0.bin"
+#define MT7988_FIRMWARE_WO1 "mediatek/mt7988_wo_1.bin"
#define MTK_WO_MCU_CFG_LS_BASE 0
#define MTK_WO_MCU_CFG_LS_HW_VER_ADDR (MTK_WO_MCU_CFG_LS_BASE + 0x000)
@@ -228,7 +230,6 @@ struct mtk_wed_wo_queue {
struct mtk_wed_wo {
struct mtk_wed_hw *hw;
- struct mtk_wed_wo_memory_region boot;
struct mtk_wed_wo_queue q_tx;
struct mtk_wed_wo_queue q_rx;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index 332472fe4990..a09b6e05337d 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -400,7 +400,7 @@ void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv)
for (ring = 0; ring < priv->rx_ring_num; ring++) {
if (mlx4_en_is_ring_empty(priv->rx_ring[ring])) {
local_bh_disable();
- napi_reschedule(&priv->rx_cq[ring]->napi);
+ napi_schedule(&priv->rx_cq[ring]->napi);
local_bh_enable();
}
}
diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
index fe48d20d6118..0005d9e2c2d6 100644
--- a/drivers/net/ethernet/mellanox/mlx4/fw.c
+++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
@@ -1967,7 +1967,7 @@ int mlx4_INIT_HCA(struct mlx4_dev *dev, struct mlx4_init_hca_param *param)
if (dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_DRIVER_VERSION_TO_FW) {
u8 *dst = (u8 *)(inbox + INIT_HCA_DRIVER_VERSION_OFFSET / 4);
- strncpy(dst, DRV_NAME_FOR_FW, INIT_HCA_DRIVER_VERSION_SZ - 1);
+ strscpy(dst, DRV_NAME_FOR_FW, INIT_HCA_DRIVER_VERSION_SZ);
mlx4_dbg(dev, "Reporting Driver Version to FW: %s\n", dst);
}
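
The strncpy() to strscpy() switch matters because strscpy() always NUL-terminates the destination, which the following mlx5_dbg("%s", dst) relies on. Below is a standalone approximation of that behaviour; it ignores strscpy()'s -E2BIG return on truncation and is not the kernel implementation.

/* Minimal model of strscpy(): copy at most size-1 bytes and always
 * terminate, unlike strncpy(), which leaves dst unterminated when the
 * source fills the buffer.
 */
#include <stdio.h>
#include <string.h>

static size_t strscpy_model(char *dst, const char *src, size_t size)
{
	size_t len = strnlen(src, size - 1);

	memcpy(dst, src, len);
	dst[len] = '\0';		/* always terminated */
	return len;
}

int main(void)
{
	char buf[8];

	strscpy_model(buf, "mlx5_core driver", sizeof(buf));
	printf("%s\n", buf);		/* prints "mlx5_co" */
	return 0;
}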
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
index c4f4de82e29e..685335832a93 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
@@ -189,3 +189,11 @@ config MLX5_SF_MANAGER
port is managed through devlink. A subfunction supports RDMA, netdevice
and vdpa device. It is similar to a SRIOV VF but it doesn't require
SRIOV support.
+
+config MLX5_DPLL
+ tristate "Mellanox 5th generation network adapters (ConnectX series) DPLL support"
+ depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE
+ select DPLL
+ help
+ DPLL support in Mellanox Technologies ConnectX NICs.
+
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index 7e94caca4888..c44870b175f9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -128,3 +128,6 @@ mlx5_core-$(CONFIG_MLX5_SF) += sf/vhca_event.o sf/dev/dev.o sf/dev/driver.o irq_
# SF manager
#
mlx5_core-$(CONFIG_MLX5_SF_MANAGER) += sf/cmd.o sf/hw_table.o sf/devlink.o
+
+obj-$(CONFIG_MLX5_DPLL) += mlx5_dpll.o
+mlx5_dpll-y := dpll.o
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index c22b0ad0c870..f8f0a712c943 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -525,6 +525,7 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
case MLX5_CMD_OP_SAVE_VHCA_STATE:
case MLX5_CMD_OP_LOAD_VHCA_STATE:
case MLX5_CMD_OP_SYNC_CRYPTO:
+ case MLX5_CMD_OP_ALLOW_OTHER_VHCA_ACCESS:
*status = MLX5_DRIVER_STATUS_ABORTED;
*synd = MLX5_DRIVER_SYND;
return -ENOLINK;
@@ -728,6 +729,7 @@ const char *mlx5_command_str(int command)
MLX5_COMMAND_STR_CASE(SAVE_VHCA_STATE);
MLX5_COMMAND_STR_CASE(LOAD_VHCA_STATE);
MLX5_COMMAND_STR_CASE(SYNC_CRYPTO);
+ MLX5_COMMAND_STR_CASE(ALLOW_OTHER_VHCA_ACCESS);
default: return "unknown command opcode";
}
}
@@ -2090,6 +2092,74 @@ int mlx5_cmd_exec_cb(struct mlx5_async_ctx *ctx, void *in, int in_size,
}
EXPORT_SYMBOL(mlx5_cmd_exec_cb);
+int mlx5_cmd_allow_other_vhca_access(struct mlx5_core_dev *dev,
+ struct mlx5_cmd_allow_other_vhca_access_attr *attr)
+{
+ u32 out[MLX5_ST_SZ_DW(allow_other_vhca_access_out)] = {};
+ u32 in[MLX5_ST_SZ_DW(allow_other_vhca_access_in)] = {};
+ void *key;
+
+ MLX5_SET(allow_other_vhca_access_in,
+ in, opcode, MLX5_CMD_OP_ALLOW_OTHER_VHCA_ACCESS);
+ MLX5_SET(allow_other_vhca_access_in,
+ in, object_type_to_be_accessed, attr->obj_type);
+ MLX5_SET(allow_other_vhca_access_in,
+ in, object_id_to_be_accessed, attr->obj_id);
+
+ key = MLX5_ADDR_OF(allow_other_vhca_access_in, in, access_key);
+ memcpy(key, attr->access_key, sizeof(attr->access_key));
+
+ return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
+}
+
+int mlx5_cmd_alias_obj_create(struct mlx5_core_dev *dev,
+ struct mlx5_cmd_alias_obj_create_attr *alias_attr,
+ u32 *obj_id)
+{
+ u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {};
+ u32 in[MLX5_ST_SZ_DW(create_alias_obj_in)] = {};
+ void *param;
+ void *attr;
+ void *key;
+ int ret;
+
+ attr = MLX5_ADDR_OF(create_alias_obj_in, in, hdr);
+ MLX5_SET(general_obj_in_cmd_hdr,
+ attr, opcode, MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr,
+ attr, obj_type, alias_attr->obj_type);
+ param = MLX5_ADDR_OF(general_obj_in_cmd_hdr, in, op_param);
+ MLX5_SET(general_obj_create_param, param, alias_object, 1);
+
+ attr = MLX5_ADDR_OF(create_alias_obj_in, in, alias_ctx);
+ MLX5_SET(alias_context, attr, vhca_id_to_be_accessed, alias_attr->vhca_id);
+ MLX5_SET(alias_context, attr, object_id_to_be_accessed, alias_attr->obj_id);
+
+ key = MLX5_ADDR_OF(alias_context, attr, access_key);
+ memcpy(key, alias_attr->access_key, sizeof(alias_attr->access_key));
+
+ ret = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
+ if (ret)
+ return ret;
+
+ *obj_id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+
+ return 0;
+}
+
+int mlx5_cmd_alias_obj_destroy(struct mlx5_core_dev *dev, u32 obj_id,
+ u16 obj_type)
+{
+ u32 out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {};
+ u32 in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {};
+
+ MLX5_SET(general_obj_in_cmd_hdr, in, opcode, MLX5_CMD_OP_DESTROY_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, in, obj_type, obj_type);
+ MLX5_SET(general_obj_in_cmd_hdr, in, obj_id, obj_id);
+
+ return mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
+}
+
static void destroy_msg_cache(struct mlx5_core_dev *dev)
{
struct cmd_msg_cache *ch;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
index 7909f378dc93..cf0477f53dc4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/dev.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/dev.c
@@ -38,8 +38,6 @@
#include "devlink.h"
#include "lag/lag.h"
-/* intf dev list mutex */
-static DEFINE_MUTEX(mlx5_intf_mutex);
static DEFINE_IDA(mlx5_adev_ida);
static bool is_eth_rep_supported(struct mlx5_core_dev *dev)
@@ -206,6 +204,19 @@ static bool is_ib_enabled(struct mlx5_core_dev *dev)
return err ? false : val.vbool;
}
+static bool is_dpll_supported(struct mlx5_core_dev *dev)
+{
+ if (!IS_ENABLED(CONFIG_MLX5_DPLL))
+ return false;
+
+ if (!MLX5_CAP_MCAM_REG2(dev, synce_registers)) {
+ mlx5_core_warn(dev, "Missing SyncE capability\n");
+ return false;
+ }
+
+ return true;
+}
+
enum {
MLX5_INTERFACE_PROTOCOL_ETH,
MLX5_INTERFACE_PROTOCOL_ETH_REP,
@@ -215,6 +226,8 @@ enum {
MLX5_INTERFACE_PROTOCOL_MPIB,
MLX5_INTERFACE_PROTOCOL_VNET,
+
+ MLX5_INTERFACE_PROTOCOL_DPLL,
};
static const struct mlx5_adev_device {
@@ -237,6 +250,8 @@ static const struct mlx5_adev_device {
.is_supported = &is_ib_rep_supported },
[MLX5_INTERFACE_PROTOCOL_MPIB] = { .suffix = "multiport",
.is_supported = &is_mp_supported },
+ [MLX5_INTERFACE_PROTOCOL_DPLL] = { .suffix = "dpll",
+ .is_supported = &is_dpll_supported },
};
int mlx5_adev_idx_alloc(void)
@@ -320,9 +335,9 @@ static void del_adev(struct auxiliary_device *adev)
void mlx5_dev_set_lightweight(struct mlx5_core_dev *dev)
{
- mutex_lock(&mlx5_intf_mutex);
+ mlx5_devcom_comp_lock(dev->priv.hca_devcom_comp);
dev->priv.flags |= MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV;
- mutex_unlock(&mlx5_intf_mutex);
+ mlx5_devcom_comp_unlock(dev->priv.hca_devcom_comp);
}
bool mlx5_dev_is_lightweight(struct mlx5_core_dev *dev)
@@ -338,7 +353,7 @@ int mlx5_attach_device(struct mlx5_core_dev *dev)
int ret = 0, i;
devl_assert_locked(priv_to_devlink(dev));
- mutex_lock(&mlx5_intf_mutex);
+ mlx5_devcom_comp_lock(dev->priv.hca_devcom_comp);
priv->flags &= ~MLX5_PRIV_FLAGS_DETACH;
for (i = 0; i < ARRAY_SIZE(mlx5_adev_devices); i++) {
if (!priv->adev[i]) {
@@ -383,7 +398,7 @@ int mlx5_attach_device(struct mlx5_core_dev *dev)
break;
}
}
- mutex_unlock(&mlx5_intf_mutex);
+ mlx5_devcom_comp_unlock(dev->priv.hca_devcom_comp);
return ret;
}
@@ -396,7 +411,7 @@ void mlx5_detach_device(struct mlx5_core_dev *dev, bool suspend)
int i;
devl_assert_locked(priv_to_devlink(dev));
- mutex_lock(&mlx5_intf_mutex);
+ mlx5_devcom_comp_lock(dev->priv.hca_devcom_comp);
for (i = ARRAY_SIZE(mlx5_adev_devices) - 1; i >= 0; i--) {
if (!priv->adev[i])
continue;
@@ -426,7 +441,7 @@ skip_suspend:
priv->adev[i] = NULL;
}
priv->flags |= MLX5_PRIV_FLAGS_DETACH;
- mutex_unlock(&mlx5_intf_mutex);
+ mlx5_devcom_comp_unlock(dev->priv.hca_devcom_comp);
}
int mlx5_register_device(struct mlx5_core_dev *dev)
@@ -434,10 +449,10 @@ int mlx5_register_device(struct mlx5_core_dev *dev)
int ret;
devl_assert_locked(priv_to_devlink(dev));
- mutex_lock(&mlx5_intf_mutex);
+ mlx5_devcom_comp_lock(dev->priv.hca_devcom_comp);
dev->priv.flags &= ~MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV;
ret = mlx5_rescan_drivers_locked(dev);
- mutex_unlock(&mlx5_intf_mutex);
+ mlx5_devcom_comp_unlock(dev->priv.hca_devcom_comp);
if (ret)
mlx5_unregister_device(dev);
@@ -447,10 +462,10 @@ int mlx5_register_device(struct mlx5_core_dev *dev)
void mlx5_unregister_device(struct mlx5_core_dev *dev)
{
devl_assert_locked(priv_to_devlink(dev));
- mutex_lock(&mlx5_intf_mutex);
+ mlx5_devcom_comp_lock(dev->priv.hca_devcom_comp);
dev->priv.flags = MLX5_PRIV_FLAGS_DISABLE_ALL_ADEV;
mlx5_rescan_drivers_locked(dev);
- mutex_unlock(&mlx5_intf_mutex);
+ mlx5_devcom_comp_unlock(dev->priv.hca_devcom_comp);
}
static int add_drivers(struct mlx5_core_dev *dev)
@@ -528,7 +543,6 @@ int mlx5_rescan_drivers_locked(struct mlx5_core_dev *dev)
{
struct mlx5_priv *priv = &dev->priv;
- lockdep_assert_held(&mlx5_intf_mutex);
if (priv->flags & MLX5_PRIV_FLAGS_DETACH)
return 0;
@@ -548,85 +562,3 @@ bool mlx5_same_hw_devs(struct mlx5_core_dev *dev, struct mlx5_core_dev *peer_dev
return (fsystem_guid && psystem_guid && fsystem_guid == psystem_guid);
}
-
-static u32 mlx5_gen_pci_id(const struct mlx5_core_dev *dev)
-{
- return (u32)((pci_domain_nr(dev->pdev->bus) << 16) |
- (dev->pdev->bus->number << 8) |
- PCI_SLOT(dev->pdev->devfn));
-}
-
-static int _next_phys_dev(struct mlx5_core_dev *mdev,
- const struct mlx5_core_dev *curr)
-{
- if (!mlx5_core_is_pf(mdev))
- return 0;
-
- if (mdev == curr)
- return 0;
-
- if (!mlx5_same_hw_devs(mdev, (struct mlx5_core_dev *)curr) &&
- mlx5_gen_pci_id(mdev) != mlx5_gen_pci_id(curr))
- return 0;
-
- return 1;
-}
-
-static void *pci_get_other_drvdata(struct device *this, struct device *other)
-{
- if (this->driver != other->driver)
- return NULL;
-
- return pci_get_drvdata(to_pci_dev(other));
-}
-
-static int next_phys_dev_lag(struct device *dev, const void *data)
-{
- struct mlx5_core_dev *mdev, *this = (struct mlx5_core_dev *)data;
-
- mdev = pci_get_other_drvdata(this->device, dev);
- if (!mdev)
- return 0;
-
- if (!mlx5_lag_is_supported(mdev))
- return 0;
-
- return _next_phys_dev(mdev, data);
-}
-
-static struct mlx5_core_dev *mlx5_get_next_dev(struct mlx5_core_dev *dev,
- int (*match)(struct device *dev, const void *data))
-{
- struct device *next;
-
- if (!mlx5_core_is_pf(dev))
- return NULL;
-
- next = bus_find_device(&pci_bus_type, NULL, dev, match);
- if (!next)
- return NULL;
-
- put_device(next);
- return pci_get_drvdata(to_pci_dev(next));
-}
-
-/* Must be called with intf_mutex held */
-struct mlx5_core_dev *mlx5_get_next_phys_dev_lag(struct mlx5_core_dev *dev)
-{
- lockdep_assert_held(&mlx5_intf_mutex);
- return mlx5_get_next_dev(dev, &next_phys_dev_lag);
-}
-
-void mlx5_dev_list_lock(void)
-{
- mutex_lock(&mlx5_intf_mutex);
-}
-void mlx5_dev_list_unlock(void)
-{
- mutex_unlock(&mlx5_intf_mutex);
-}
-
-int mlx5_dev_list_trylock(void)
-{
- return mutex_trylock(&mlx5_intf_mutex);
-}
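The dev.c hunk above drops the driver-global mlx5_intf_mutex and its mlx5_dev_list_lock()/unlock()/trylock() wrappers in favour of locking dev->priv.hca_devcom_comp, which appears to narrow the critical section from "every mlx5 device in the system" to the devcom group the device belongs to. A toy user-space sketch of that shift — moving a lock from file scope into the shared group object — is below; the structure and function names are illustrative only and are not the mlx5 or devcom API.

#include <pthread.h>
#include <stdio.h>

/* Before: one file-scope lock serializes every device in the driver. */
static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;

/* After: the lock lives in the group shared by related devices, so
 * unrelated groups no longer contend with each other.
 */
struct dev_group {
	pthread_mutex_t lock;
	int ndevs;
};

struct toy_dev {
	struct dev_group *group;
	const char *name;
};

static void attach_device_global(struct toy_dev *dev)
{
	pthread_mutex_lock(&global_lock);
	dev->group->ndevs++;		/* every attach in the system serializes here */
	pthread_mutex_unlock(&global_lock);
}

static void attach_device_grouped(struct toy_dev *dev)
{
	pthread_mutex_lock(&dev->group->lock);
	dev->group->ndevs++;		/* only devices of the same group serialize */
	pthread_mutex_unlock(&dev->group->lock);
}

int main(void)
{
	struct dev_group grp = { .ndevs = 0 };
	struct toy_dev d = { .group = &grp, .name = "pf0" };

	pthread_mutex_init(&grp.lock, NULL);
	attach_device_global(&d);
	attach_device_grouped(&d);
	printf("%s attached twice, group now has %d registrations\n", d.name, grp.ndevs);
	return 0;
}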
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
index af8460bb257b..3e064234f6fe 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
@@ -138,7 +138,6 @@ static int mlx5_devlink_reload_down(struct devlink *devlink, bool netns_change,
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
struct pci_dev *pdev = dev->pdev;
- bool sf_dev_allocated;
int ret = 0;
if (mlx5_dev_is_lightweight(dev)) {
@@ -148,16 +147,6 @@ static int mlx5_devlink_reload_down(struct devlink *devlink, bool netns_change,
return 0;
}
- sf_dev_allocated = mlx5_sf_dev_allocated(dev);
- if (sf_dev_allocated) {
- /* Reload results in deleting SF device which further results in
- * unregistering devlink instance while holding devlink_mutext.
- * Hence, do not support reload.
- */
- NL_SET_ERR_MSG_MOD(extack, "reload is unsupported when SFs are allocated");
- return -EOPNOTSUPP;
- }
-
if (mlx5_lag_is_active(dev)) {
NL_SET_ERR_MSG_MOD(extack, "reload is unsupported in Lag mode");
return -EOPNOTSUPP;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
index ad789349c06e..76d27d2ee40c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
@@ -889,36 +889,16 @@ int mlx5_fw_tracer_trigger_core_dump_general(struct mlx5_core_dev *dev)
return 0;
}
-static int
+static void
mlx5_devlink_fmsg_fill_trace(struct devlink_fmsg *fmsg,
struct mlx5_fw_trace_data *trace_data)
{
- int err;
-
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_u64_pair_put(fmsg, "timestamp", trace_data->timestamp);
- if (err)
- return err;
-
- err = devlink_fmsg_bool_pair_put(fmsg, "lost", trace_data->lost);
- if (err)
- return err;
-
- err = devlink_fmsg_u8_pair_put(fmsg, "event_id", trace_data->event_id);
- if (err)
- return err;
-
- err = devlink_fmsg_string_pair_put(fmsg, "msg", trace_data->msg);
- if (err)
- return err;
-
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- return err;
- return 0;
+ devlink_fmsg_obj_nest_start(fmsg);
+ devlink_fmsg_u64_pair_put(fmsg, "timestamp", trace_data->timestamp);
+ devlink_fmsg_bool_pair_put(fmsg, "lost", trace_data->lost);
+ devlink_fmsg_u8_pair_put(fmsg, "event_id", trace_data->event_id);
+ devlink_fmsg_string_pair_put(fmsg, "msg", trace_data->msg);
+ devlink_fmsg_obj_nest_end(fmsg);
}
int mlx5_fw_tracer_get_saved_traces_objects(struct mlx5_fw_tracer *tracer,
@@ -927,7 +907,6 @@ int mlx5_fw_tracer_get_saved_traces_objects(struct mlx5_fw_tracer *tracer,
struct mlx5_fw_trace_data *straces = tracer->st_arr.straces;
u32 index, start_index, end_index;
u32 saved_traces_index;
- int err;
if (!straces[0].timestamp)
return -ENOMSG;
@@ -940,22 +919,18 @@ int mlx5_fw_tracer_get_saved_traces_objects(struct mlx5_fw_tracer *tracer,
start_index = 0;
end_index = (saved_traces_index - 1) & (SAVED_TRACES_NUM - 1);
- err = devlink_fmsg_arr_pair_nest_start(fmsg, "dump fw traces");
- if (err)
- goto unlock;
+ devlink_fmsg_arr_pair_nest_start(fmsg, "dump fw traces");
index = start_index;
while (index != end_index) {
- err = mlx5_devlink_fmsg_fill_trace(fmsg, &straces[index]);
- if (err)
- goto unlock;
+ mlx5_devlink_fmsg_fill_trace(fmsg, &straces[index]);
index = (index + 1) & (SAVED_TRACES_NUM - 1);
}
- err = devlink_fmsg_arr_pair_nest_end(fmsg);
-unlock:
+ devlink_fmsg_arr_pair_nest_end(fmsg);
mutex_unlock(&tracer->st_arr.lock);
- return err;
+
+ return 0;
}
static void mlx5_fw_tracer_update_db(struct work_struct *work)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.c
index e869c65d8e90..c7216e84ef8c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.c
@@ -13,106 +13,55 @@ struct mlx5_vnic_diag_stats {
__be64 query_vnic_env_out[MLX5_ST_SZ_QW(query_vnic_env_out)];
};
-int mlx5_reporter_vnic_diagnose_counters(struct mlx5_core_dev *dev,
- struct devlink_fmsg *fmsg,
- u16 vport_num, bool other_vport)
+void mlx5_reporter_vnic_diagnose_counters(struct mlx5_core_dev *dev,
+ struct devlink_fmsg *fmsg,
+ u16 vport_num, bool other_vport)
{
u32 in[MLX5_ST_SZ_DW(query_vnic_env_in)] = {};
struct mlx5_vnic_diag_stats vnic;
- int err;
MLX5_SET(query_vnic_env_in, in, opcode, MLX5_CMD_OP_QUERY_VNIC_ENV);
MLX5_SET(query_vnic_env_in, in, vport_number, vport_num);
MLX5_SET(query_vnic_env_in, in, other_vport, !!other_vport);
- err = mlx5_cmd_exec_inout(dev, query_vnic_env, in, &vnic.query_vnic_env_out);
- if (err)
- return err;
+ mlx5_cmd_exec_inout(dev, query_vnic_env, in, &vnic.query_vnic_env_out);
- err = devlink_fmsg_pair_nest_start(fmsg, "vNIC env counters");
- if (err)
- return err;
-
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
+ devlink_fmsg_pair_nest_start(fmsg, "vNIC env counters");
+ devlink_fmsg_obj_nest_start(fmsg);
if (MLX5_CAP_GEN(dev, vnic_env_queue_counters)) {
- err = devlink_fmsg_u32_pair_put(fmsg, "total_error_queues",
- VNIC_ENV_GET(&vnic, total_error_queues));
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "send_queue_priority_update_flow",
- VNIC_ENV_GET(&vnic,
- send_queue_priority_update_flow));
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "total_error_queues",
+ VNIC_ENV_GET(&vnic, total_error_queues));
+ devlink_fmsg_u32_pair_put(fmsg, "send_queue_priority_update_flow",
+ VNIC_ENV_GET(&vnic, send_queue_priority_update_flow));
}
-
if (MLX5_CAP_GEN(dev, eq_overrun_count)) {
- err = devlink_fmsg_u32_pair_put(fmsg, "comp_eq_overrun",
- VNIC_ENV_GET(&vnic, comp_eq_overrun));
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "async_eq_overrun",
- VNIC_ENV_GET(&vnic, async_eq_overrun));
- if (err)
- return err;
- }
-
- if (MLX5_CAP_GEN(dev, vnic_env_cq_overrun)) {
- err = devlink_fmsg_u32_pair_put(fmsg, "cq_overrun",
- VNIC_ENV_GET(&vnic, cq_overrun));
- if (err)
- return err;
- }
-
- if (MLX5_CAP_GEN(dev, invalid_command_count)) {
- err = devlink_fmsg_u32_pair_put(fmsg, "invalid_command",
- VNIC_ENV_GET(&vnic, invalid_command));
- if (err)
- return err;
- }
-
- if (MLX5_CAP_GEN(dev, quota_exceeded_count)) {
- err = devlink_fmsg_u32_pair_put(fmsg, "quota_exceeded_command",
- VNIC_ENV_GET(&vnic, quota_exceeded_command));
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "comp_eq_overrun",
+ VNIC_ENV_GET(&vnic, comp_eq_overrun));
+ devlink_fmsg_u32_pair_put(fmsg, "async_eq_overrun",
+ VNIC_ENV_GET(&vnic, async_eq_overrun));
}
-
- if (MLX5_CAP_GEN(dev, nic_receive_steering_discard)) {
- err = devlink_fmsg_u64_pair_put(fmsg, "nic_receive_steering_discard",
- VNIC_ENV_GET64(&vnic,
- nic_receive_steering_discard));
- if (err)
- return err;
- }
-
+ if (MLX5_CAP_GEN(dev, vnic_env_cq_overrun))
+ devlink_fmsg_u32_pair_put(fmsg, "cq_overrun",
+ VNIC_ENV_GET(&vnic, cq_overrun));
+ if (MLX5_CAP_GEN(dev, invalid_command_count))
+ devlink_fmsg_u32_pair_put(fmsg, "invalid_command",
+ VNIC_ENV_GET(&vnic, invalid_command));
+ if (MLX5_CAP_GEN(dev, quota_exceeded_count))
+ devlink_fmsg_u32_pair_put(fmsg, "quota_exceeded_command",
+ VNIC_ENV_GET(&vnic, quota_exceeded_command));
+ if (MLX5_CAP_GEN(dev, nic_receive_steering_discard))
+ devlink_fmsg_u64_pair_put(fmsg, "nic_receive_steering_discard",
+ VNIC_ENV_GET64(&vnic, nic_receive_steering_discard));
if (MLX5_CAP_GEN(dev, vnic_env_cnt_steering_fail)) {
- err = devlink_fmsg_u64_pair_put(fmsg, "generated_pkt_steering_fail",
- VNIC_ENV_GET64(&vnic,
- generated_pkt_steering_fail));
- if (err)
- return err;
-
- err = devlink_fmsg_u64_pair_put(fmsg, "handled_pkt_steering_fail",
- VNIC_ENV_GET64(&vnic, handled_pkt_steering_fail));
- if (err)
- return err;
+ devlink_fmsg_u64_pair_put(fmsg, "generated_pkt_steering_fail",
+ VNIC_ENV_GET64(&vnic, generated_pkt_steering_fail));
+ devlink_fmsg_u64_pair_put(fmsg, "handled_pkt_steering_fail",
+ VNIC_ENV_GET64(&vnic, handled_pkt_steering_fail));
}
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_pair_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_obj_nest_end(fmsg);
+ devlink_fmsg_pair_nest_end(fmsg);
}
static int mlx5_reporter_vnic_diagnose(struct devlink_health_reporter *reporter,
@@ -121,7 +70,8 @@ static int mlx5_reporter_vnic_diagnose(struct devlink_health_reporter *reporter,
{
struct mlx5_core_dev *dev = devlink_health_reporter_priv(reporter);
- return mlx5_reporter_vnic_diagnose_counters(dev, fmsg, 0, false);
+ mlx5_reporter_vnic_diagnose_counters(dev, fmsg, 0, false);
+ return 0;
}
static const struct devlink_health_reporter_ops mlx5_reporter_vnic_ops = {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.h b/drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.h
index eba87a39e9b1..fbc31256f7fe 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/reporter_vnic.h
@@ -9,8 +9,8 @@
void mlx5_reporter_vnic_create(struct mlx5_core_dev *dev);
void mlx5_reporter_vnic_destroy(struct mlx5_core_dev *dev);
-int mlx5_reporter_vnic_diagnose_counters(struct mlx5_core_dev *dev,
- struct devlink_fmsg *fmsg,
- u16 vport_num, bool other_vport);
+void mlx5_reporter_vnic_diagnose_counters(struct mlx5_core_dev *dev,
+ struct devlink_fmsg *fmsg,
+ u16 vport_num, bool other_vport);
#endif /* __MLX5_REPORTER_VNIC_H */
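The fw_tracer.c and reporter_vnic.c hunks above — and the en/health.c and RX/TX reporter hunks that follow — are one mechanical conversion: the devlink_fmsg_* helpers now return void instead of an error, so every "err = ...; if (err) return err;" ladder collapses into straight-line calls and the reporter callbacks simply return 0. Presumably the fmsg object records the first failure internally and devlink checks it once when the message is completed. The minimal user-space sketch below illustrates that error-latching idea only; the toy_fmsg type and functions are illustrative, not the actual devlink implementation.

#include <stdio.h>
#include <errno.h>

/* Toy formatter that records the first error internally, so callers can
 * chain "put" operations without checking each return value.
 */
struct toy_fmsg {
	int err;	/* first error seen, 0 if none */
	int depth;	/* open nesting level, for indentation */
};

static void toy_fmsg_nest_start(struct toy_fmsg *fmsg)
{
	if (fmsg->err)
		return;		/* earlier failure: subsequent calls are no-ops */
	fmsg->depth++;
}

static void toy_fmsg_u32_pair_put(struct toy_fmsg *fmsg, const char *name, unsigned int val)
{
	if (fmsg->err)
		return;
	if (printf("%*s%s: %u\n", fmsg->depth * 2, "", name, val) < 0)
		fmsg->err = -EIO;	/* latch the first failure */
}

static void toy_fmsg_nest_end(struct toy_fmsg *fmsg)
{
	if (fmsg->err)
		return;
	fmsg->depth--;
}

int main(void)
{
	struct toy_fmsg fmsg = { 0 };

	/* Straight-line usage, mirroring the shape of the converted kernel code. */
	toy_fmsg_nest_start(&fmsg);
	toy_fmsg_u32_pair_put(&fmsg, "cqn", 7);
	toy_fmsg_u32_pair_put(&fmsg, "size", 1024);
	toy_fmsg_nest_end(&fmsg);

	/* The error, if any, is checked exactly once at the end. */
	return fmsg.err ? 1 : 0;
}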
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/dpll.c b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c
new file mode 100644
index 000000000000..2cd81bb32c66
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/dpll.c
@@ -0,0 +1,432 @@
+// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB
+/* Copyright (c) 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
+
+#include <linux/dpll.h>
+#include <linux/mlx5/driver.h>
+
+/* This structure represents a reference to a DPLL; one is created
+ * per mdev instance.
+ */
+struct mlx5_dpll {
+ struct dpll_device *dpll;
+ struct dpll_pin *dpll_pin;
+ struct mlx5_core_dev *mdev;
+ struct workqueue_struct *wq;
+ struct delayed_work work;
+ struct {
+ bool valid;
+ enum dpll_lock_status lock_status;
+ enum dpll_pin_state pin_state;
+ } last;
+ struct notifier_block mdev_nb;
+ struct net_device *tracking_netdev;
+};
+
+static int mlx5_dpll_clock_id_get(struct mlx5_core_dev *mdev, u64 *clock_id)
+{
+ u32 out[MLX5_ST_SZ_DW(msecq_reg)] = {};
+ u32 in[MLX5_ST_SZ_DW(msecq_reg)] = {};
+ int err;
+
+ err = mlx5_core_access_reg(mdev, in, sizeof(in), out, sizeof(out),
+ MLX5_REG_MSECQ, 0, 0);
+ if (err)
+ return err;
+ *clock_id = MLX5_GET64(msecq_reg, out, local_clock_identity);
+ return 0;
+}
+
+static int
+mlx5_dpll_synce_status_get(struct mlx5_core_dev *mdev,
+ enum mlx5_msees_admin_status *admin_status,
+ enum mlx5_msees_oper_status *oper_status,
+ bool *ho_acq)
+{
+ u32 out[MLX5_ST_SZ_DW(msees_reg)] = {};
+ u32 in[MLX5_ST_SZ_DW(msees_reg)] = {};
+ int err;
+
+ err = mlx5_core_access_reg(mdev, in, sizeof(in), out, sizeof(out),
+ MLX5_REG_MSEES, 0, 0);
+ if (err)
+ return err;
+ if (admin_status)
+ *admin_status = MLX5_GET(msees_reg, out, admin_status);
+ *oper_status = MLX5_GET(msees_reg, out, oper_status);
+ if (ho_acq)
+ *ho_acq = MLX5_GET(msees_reg, out, ho_acq);
+ return 0;
+}
+
+static int
+mlx5_dpll_synce_status_set(struct mlx5_core_dev *mdev,
+ enum mlx5_msees_admin_status admin_status)
+{
+ u32 out[MLX5_ST_SZ_DW(msees_reg)] = {};
+ u32 in[MLX5_ST_SZ_DW(msees_reg)] = {};
+
+ MLX5_SET(msees_reg, in, field_select,
+ MLX5_MSEES_FIELD_SELECT_ENABLE |
+ MLX5_MSEES_FIELD_SELECT_ADMIN_STATUS);
+ MLX5_SET(msees_reg, in, admin_status, admin_status);
+ return mlx5_core_access_reg(mdev, in, sizeof(in), out, sizeof(out),
+ MLX5_REG_MSEES, 0, 1);
+}
+
+static enum dpll_lock_status
+mlx5_dpll_lock_status_get(enum mlx5_msees_oper_status oper_status, bool ho_acq)
+{
+ switch (oper_status) {
+ case MLX5_MSEES_OPER_STATUS_SELF_TRACK:
+ fallthrough;
+ case MLX5_MSEES_OPER_STATUS_OTHER_TRACK:
+ return ho_acq ? DPLL_LOCK_STATUS_LOCKED_HO_ACQ :
+ DPLL_LOCK_STATUS_LOCKED;
+ case MLX5_MSEES_OPER_STATUS_HOLDOVER:
+ fallthrough;
+ case MLX5_MSEES_OPER_STATUS_FAIL_HOLDOVER:
+ return DPLL_LOCK_STATUS_HOLDOVER;
+ default:
+ return DPLL_LOCK_STATUS_UNLOCKED;
+ }
+}
+
+static enum dpll_pin_state
+mlx5_dpll_pin_state_get(enum mlx5_msees_admin_status admin_status,
+ enum mlx5_msees_oper_status oper_status)
+{
+ return (admin_status == MLX5_MSEES_ADMIN_STATUS_TRACK &&
+ (oper_status == MLX5_MSEES_OPER_STATUS_SELF_TRACK ||
+ oper_status == MLX5_MSEES_OPER_STATUS_OTHER_TRACK)) ?
+ DPLL_PIN_STATE_CONNECTED : DPLL_PIN_STATE_DISCONNECTED;
+}
+
+static int mlx5_dpll_device_lock_status_get(const struct dpll_device *dpll,
+ void *priv,
+ enum dpll_lock_status *status,
+ struct netlink_ext_ack *extack)
+{
+ enum mlx5_msees_oper_status oper_status;
+ struct mlx5_dpll *mdpll = priv;
+ bool ho_acq;
+ int err;
+
+ err = mlx5_dpll_synce_status_get(mdpll->mdev, NULL,
+ &oper_status, &ho_acq);
+ if (err)
+ return err;
+
+ *status = mlx5_dpll_lock_status_get(oper_status, ho_acq);
+ return 0;
+}
+
+static int mlx5_dpll_device_mode_get(const struct dpll_device *dpll,
+ void *priv, enum dpll_mode *mode,
+ struct netlink_ext_ack *extack)
+{
+ *mode = DPLL_MODE_MANUAL;
+ return 0;
+}
+
+static bool mlx5_dpll_device_mode_supported(const struct dpll_device *dpll,
+ void *priv,
+ enum dpll_mode mode,
+ struct netlink_ext_ack *extack)
+{
+ return mode == DPLL_MODE_MANUAL;
+}
+
+static const struct dpll_device_ops mlx5_dpll_device_ops = {
+ .lock_status_get = mlx5_dpll_device_lock_status_get,
+ .mode_get = mlx5_dpll_device_mode_get,
+ .mode_supported = mlx5_dpll_device_mode_supported,
+};
+
+static int mlx5_dpll_pin_direction_get(const struct dpll_pin *pin,
+ void *pin_priv,
+ const struct dpll_device *dpll,
+ void *dpll_priv,
+ enum dpll_pin_direction *direction,
+ struct netlink_ext_ack *extack)
+{
+ *direction = DPLL_PIN_DIRECTION_INPUT;
+ return 0;
+}
+
+static int mlx5_dpll_state_on_dpll_get(const struct dpll_pin *pin,
+ void *pin_priv,
+ const struct dpll_device *dpll,
+ void *dpll_priv,
+ enum dpll_pin_state *state,
+ struct netlink_ext_ack *extack)
+{
+ enum mlx5_msees_admin_status admin_status;
+ enum mlx5_msees_oper_status oper_status;
+ struct mlx5_dpll *mdpll = pin_priv;
+ int err;
+
+ err = mlx5_dpll_synce_status_get(mdpll->mdev, &admin_status,
+ &oper_status, NULL);
+ if (err)
+ return err;
+ *state = mlx5_dpll_pin_state_get(admin_status, oper_status);
+ return 0;
+}
+
+static int mlx5_dpll_state_on_dpll_set(const struct dpll_pin *pin,
+ void *pin_priv,
+ const struct dpll_device *dpll,
+ void *dpll_priv,
+ enum dpll_pin_state state,
+ struct netlink_ext_ack *extack)
+{
+ struct mlx5_dpll *mdpll = pin_priv;
+
+ return mlx5_dpll_synce_status_set(mdpll->mdev,
+ state == DPLL_PIN_STATE_CONNECTED ?
+ MLX5_MSEES_ADMIN_STATUS_TRACK :
+ MLX5_MSEES_ADMIN_STATUS_FREE_RUNNING);
+}
+
+static const struct dpll_pin_ops mlx5_dpll_pins_ops = {
+ .direction_get = mlx5_dpll_pin_direction_get,
+ .state_on_dpll_get = mlx5_dpll_state_on_dpll_get,
+ .state_on_dpll_set = mlx5_dpll_state_on_dpll_set,
+};
+
+static const struct dpll_pin_properties mlx5_dpll_pin_properties = {
+ .type = DPLL_PIN_TYPE_SYNCE_ETH_PORT,
+ .capabilities = DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE,
+};
+
+#define MLX5_DPLL_PERIODIC_WORK_INTERVAL 500 /* ms */
+
+static void mlx5_dpll_periodic_work_queue(struct mlx5_dpll *mdpll)
+{
+ queue_delayed_work(mdpll->wq, &mdpll->work,
+ msecs_to_jiffies(MLX5_DPLL_PERIODIC_WORK_INTERVAL));
+}
+
+static void mlx5_dpll_periodic_work(struct work_struct *work)
+{
+ struct mlx5_dpll *mdpll = container_of(work, struct mlx5_dpll,
+ work.work);
+ enum mlx5_msees_admin_status admin_status;
+ enum mlx5_msees_oper_status oper_status;
+ enum dpll_lock_status lock_status;
+ enum dpll_pin_state pin_state;
+ bool ho_acq;
+ int err;
+
+ err = mlx5_dpll_synce_status_get(mdpll->mdev, &admin_status,
+ &oper_status, &ho_acq);
+ if (err)
+ goto err_out;
+ lock_status = mlx5_dpll_lock_status_get(oper_status, ho_acq);
+ pin_state = mlx5_dpll_pin_state_get(admin_status, oper_status);
+
+ if (!mdpll->last.valid)
+ goto invalid_out;
+
+ if (mdpll->last.lock_status != lock_status)
+ dpll_device_change_ntf(mdpll->dpll);
+ if (mdpll->last.pin_state != pin_state)
+ dpll_pin_change_ntf(mdpll->dpll_pin);
+
+invalid_out:
+ mdpll->last.lock_status = lock_status;
+ mdpll->last.pin_state = pin_state;
+ mdpll->last.valid = true;
+err_out:
+ mlx5_dpll_periodic_work_queue(mdpll);
+}
+
+static void mlx5_dpll_netdev_dpll_pin_set(struct mlx5_dpll *mdpll,
+ struct net_device *netdev)
+{
+ if (mdpll->tracking_netdev)
+ return;
+ netdev_dpll_pin_set(netdev, mdpll->dpll_pin);
+ mdpll->tracking_netdev = netdev;
+}
+
+static void mlx5_dpll_netdev_dpll_pin_clear(struct mlx5_dpll *mdpll)
+{
+ if (!mdpll->tracking_netdev)
+ return;
+ netdev_dpll_pin_clear(mdpll->tracking_netdev);
+ mdpll->tracking_netdev = NULL;
+}
+
+static int mlx5_dpll_mdev_notifier_event(struct notifier_block *nb,
+ unsigned long event, void *data)
+{
+ struct mlx5_dpll *mdpll = container_of(nb, struct mlx5_dpll, mdev_nb);
+ struct net_device *netdev = data;
+
+ switch (event) {
+ case MLX5_DRIVER_EVENT_UPLINK_NETDEV:
+ if (netdev)
+ mlx5_dpll_netdev_dpll_pin_set(mdpll, netdev);
+ else
+ mlx5_dpll_netdev_dpll_pin_clear(mdpll);
+ break;
+ default:
+ return NOTIFY_DONE;
+ }
+
+ return NOTIFY_OK;
+}
+
+static void mlx5_dpll_mdev_netdev_track(struct mlx5_dpll *mdpll,
+ struct mlx5_core_dev *mdev)
+{
+ mdpll->mdev_nb.notifier_call = mlx5_dpll_mdev_notifier_event;
+ mlx5_blocking_notifier_register(mdev, &mdpll->mdev_nb);
+ mlx5_core_uplink_netdev_event_replay(mdev);
+}
+
+static void mlx5_dpll_mdev_netdev_untrack(struct mlx5_dpll *mdpll,
+ struct mlx5_core_dev *mdev)
+{
+ mlx5_blocking_notifier_unregister(mdev, &mdpll->mdev_nb);
+ mlx5_dpll_netdev_dpll_pin_clear(mdpll);
+}
+
+static int mlx5_dpll_probe(struct auxiliary_device *adev,
+ const struct auxiliary_device_id *id)
+{
+ struct mlx5_adev *edev = container_of(adev, struct mlx5_adev, adev);
+ struct mlx5_core_dev *mdev = edev->mdev;
+ struct mlx5_dpll *mdpll;
+ u64 clock_id;
+ int err;
+
+ err = mlx5_dpll_synce_status_set(mdev,
+ MLX5_MSEES_ADMIN_STATUS_FREE_RUNNING);
+ if (err)
+ return err;
+
+ err = mlx5_dpll_clock_id_get(mdev, &clock_id);
+ if (err)
+ return err;
+
+ mdpll = kzalloc(sizeof(*mdpll), GFP_KERNEL);
+ if (!mdpll)
+ return -ENOMEM;
+ mdpll->mdev = mdev;
+ auxiliary_set_drvdata(adev, mdpll);
+
+ /* Multiple mdev instances might share one DPLL device. */
+ mdpll->dpll = dpll_device_get(clock_id, 0, THIS_MODULE);
+ if (IS_ERR(mdpll->dpll)) {
+ err = PTR_ERR(mdpll->dpll);
+ goto err_free_mdpll;
+ }
+
+ err = dpll_device_register(mdpll->dpll, DPLL_TYPE_EEC,
+ &mlx5_dpll_device_ops, mdpll);
+ if (err)
+ goto err_put_dpll_device;
+
+ /* Multiple mdev instances might share one DPLL pin. */
+ mdpll->dpll_pin = dpll_pin_get(clock_id, mlx5_get_dev_index(mdev),
+ THIS_MODULE, &mlx5_dpll_pin_properties);
+ if (IS_ERR(mdpll->dpll_pin)) {
+ err = PTR_ERR(mdpll->dpll_pin);
+ goto err_unregister_dpll_device;
+ }
+
+ err = dpll_pin_register(mdpll->dpll, mdpll->dpll_pin,
+ &mlx5_dpll_pins_ops, mdpll);
+ if (err)
+ goto err_put_dpll_pin;
+
+ mdpll->wq = create_singlethread_workqueue("mlx5_dpll");
+ if (!mdpll->wq) {
+ err = -ENOMEM;
+ goto err_unregister_dpll_pin;
+ }
+
+ mlx5_dpll_mdev_netdev_track(mdpll, mdev);
+
+ INIT_DELAYED_WORK(&mdpll->work, &mlx5_dpll_periodic_work);
+ mlx5_dpll_periodic_work_queue(mdpll);
+
+ return 0;
+
+err_unregister_dpll_pin:
+ dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin,
+ &mlx5_dpll_pins_ops, mdpll);
+err_put_dpll_pin:
+ dpll_pin_put(mdpll->dpll_pin);
+err_unregister_dpll_device:
+ dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll);
+err_put_dpll_device:
+ dpll_device_put(mdpll->dpll);
+err_free_mdpll:
+ kfree(mdpll);
+ return err;
+}
+
+static void mlx5_dpll_remove(struct auxiliary_device *adev)
+{
+ struct mlx5_dpll *mdpll = auxiliary_get_drvdata(adev);
+ struct mlx5_core_dev *mdev = mdpll->mdev;
+
+ cancel_delayed_work(&mdpll->work);
+ mlx5_dpll_mdev_netdev_untrack(mdpll, mdev);
+ destroy_workqueue(mdpll->wq);
+ dpll_pin_unregister(mdpll->dpll, mdpll->dpll_pin,
+ &mlx5_dpll_pins_ops, mdpll);
+ dpll_pin_put(mdpll->dpll_pin);
+ dpll_device_unregister(mdpll->dpll, &mlx5_dpll_device_ops, mdpll);
+ dpll_device_put(mdpll->dpll);
+ kfree(mdpll);
+
+ mlx5_dpll_synce_status_set(mdev,
+ MLX5_MSEES_ADMIN_STATUS_FREE_RUNNING);
+}
+
+static int mlx5_dpll_suspend(struct auxiliary_device *adev, pm_message_t state)
+{
+ return 0;
+}
+
+static int mlx5_dpll_resume(struct auxiliary_device *adev)
+{
+ return 0;
+}
+
+static const struct auxiliary_device_id mlx5_dpll_id_table[] = {
+ { .name = MLX5_ADEV_NAME ".dpll", },
+ {},
+};
+
+MODULE_DEVICE_TABLE(auxiliary, mlx5_dpll_id_table);
+
+static struct auxiliary_driver mlx5_dpll_driver = {
+ .name = "dpll",
+ .probe = mlx5_dpll_probe,
+ .remove = mlx5_dpll_remove,
+ .suspend = mlx5_dpll_suspend,
+ .resume = mlx5_dpll_resume,
+ .id_table = mlx5_dpll_id_table,
+};
+
+static int __init mlx5_dpll_init(void)
+{
+ return auxiliary_driver_register(&mlx5_dpll_driver);
+}
+
+static void __exit mlx5_dpll_exit(void)
+{
+ auxiliary_driver_unregister(&mlx5_dpll_driver);
+}
+
+module_init(mlx5_dpll_init);
+module_exit(mlx5_dpll_exit);
+
+MODULE_AUTHOR("Jiri Pirko <jiri@nvidia.com>");
+MODULE_DESCRIPTION("Mellanox 5th generation network adapters (ConnectX series) DPLL driver");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 86f2690c5e01..b2a5da9739d2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -141,7 +141,7 @@ struct page_pool;
#define MLX5E_PARAMS_DEFAULT_MIN_RX_WQES_MPW 0x2
#define MLX5E_MIN_NUM_CHANNELS 0x1
-#define MLX5E_MAX_NUM_CHANNELS (MLX5E_INDIR_RQT_SIZE / 2)
+#define MLX5E_MAX_NUM_CHANNELS 256
#define MLX5E_TX_CQ_POLL_BUDGET 128
#define MLX5E_TX_XSK_POLL_BUDGET 64
#define MLX5E_SQ_RECOVER_MIN_INTERVAL 500 /* msecs */
@@ -168,6 +168,13 @@ struct page_pool;
#define mlx5e_state_dereference(priv, p) \
rcu_dereference_protected((p), lockdep_is_held(&(priv)->state_lock))
+enum mlx5e_devcom_events {
+ MPV_DEVCOM_MASTER_UP,
+ MPV_DEVCOM_MASTER_DOWN,
+ MPV_DEVCOM_IPSEC_MASTER_UP,
+ MPV_DEVCOM_IPSEC_MASTER_DOWN,
+};
+
static inline u8 mlx5e_get_num_lag_ports(struct mlx5_core_dev *mdev)
{
if (mlx5_lag_is_lacp_owner(mdev))
@@ -193,7 +200,8 @@ static inline int mlx5e_get_max_num_channels(struct mlx5_core_dev *mdev)
{
return is_kdump_kernel() ?
MLX5E_MIN_NUM_CHANNELS :
- min_t(int, mlx5_comp_vectors_max(mdev), MLX5E_MAX_NUM_CHANNELS);
+ min3(mlx5_comp_vectors_max(mdev), (u32)MLX5E_MAX_NUM_CHANNELS,
+ (u32)(1 << MLX5_CAP_GEN(mdev, log_max_rqt_size)));
}
/* The maximum WQE size can be retrieved by max_wqe_sz_sq in
@@ -936,6 +944,7 @@ struct mlx5e_priv {
struct mlx5e_htb *htb;
struct mlx5e_mqprio_rl *mqprio_rl;
struct dentry *dfs_root;
+ struct mlx5_devcom_comp_dev *devcom;
};
struct mlx5e_dev {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c
index c6b6e290fd79..0b1ac6e5c890 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/devlink.c
@@ -12,11 +12,19 @@ struct mlx5e_dev *mlx5e_create_devlink(struct device *dev,
{
struct mlx5e_dev *mlx5e_dev;
struct devlink *devlink;
+ int err;
devlink = devlink_alloc_ns(&mlx5e_devlink_ops, sizeof(*mlx5e_dev),
devlink_net(priv_to_devlink(mdev)), dev);
if (!devlink)
return ERR_PTR(-ENOMEM);
+
+ err = devl_nested_devlink_set(priv_to_devlink(mdev), devlink);
+ if (err) {
+ devlink_free(devlink);
+ return ERR_PTR(err);
+ }
+
devlink_register(devlink);
return devlink_priv(devlink);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
index e5a44b0b9616..4d6225e0eec7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/fs.h
@@ -150,7 +150,6 @@ struct mlx5e_flow_steering *mlx5e_fs_init(const struct mlx5e_profile *profile,
struct dentry *dfs_root);
void mlx5e_fs_cleanup(struct mlx5e_flow_steering *fs);
struct mlx5e_vlan_table *mlx5e_fs_get_vlan(struct mlx5e_flow_steering *fs);
-void mlx5e_fs_set_tc(struct mlx5e_flow_steering *fs, struct mlx5e_tc_table *tc);
struct mlx5e_tc_table *mlx5e_fs_get_tc(struct mlx5e_flow_steering *fs);
struct mlx5e_l2_table *mlx5e_fs_get_l2(struct mlx5e_flow_steering *fs);
struct mlx5_flow_namespace *mlx5e_fs_get_ns(struct mlx5e_flow_steering *fs, bool egress);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
index 6f4e6c34b2a2..81523825faa2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.c
@@ -5,134 +5,59 @@
#include "lib/eq.h"
#include "lib/mlx5.h"
-int mlx5e_health_fmsg_named_obj_nest_start(struct devlink_fmsg *fmsg, char *name)
+void mlx5e_health_fmsg_named_obj_nest_start(struct devlink_fmsg *fmsg, char *name)
{
- int err;
-
- err = devlink_fmsg_pair_nest_start(fmsg, name);
- if (err)
- return err;
-
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_pair_nest_start(fmsg, name);
+ devlink_fmsg_obj_nest_start(fmsg);
}
-int mlx5e_health_fmsg_named_obj_nest_end(struct devlink_fmsg *fmsg)
+void mlx5e_health_fmsg_named_obj_nest_end(struct devlink_fmsg *fmsg)
{
- int err;
-
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_pair_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_obj_nest_end(fmsg);
+ devlink_fmsg_pair_nest_end(fmsg);
}
-int mlx5e_health_cq_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg)
+void mlx5e_health_cq_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg)
{
u32 out[MLX5_ST_SZ_DW(query_cq_out)] = {};
u8 hw_status;
void *cqc;
- int err;
-
- err = mlx5_core_query_cq(cq->mdev, &cq->mcq, out);
- if (err)
- return err;
+ mlx5_core_query_cq(cq->mdev, &cq->mcq, out);
cqc = MLX5_ADDR_OF(query_cq_out, out, cq_context);
hw_status = MLX5_GET(cqc, cqc, status);
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "CQ");
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "cqn", cq->mcq.cqn);
- if (err)
- return err;
-
- err = devlink_fmsg_u8_pair_put(fmsg, "HW status", hw_status);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "ci", mlx5_cqwq_get_ci(&cq->wq));
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "size", mlx5_cqwq_get_size(&cq->wq));
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "CQ");
+ devlink_fmsg_u32_pair_put(fmsg, "cqn", cq->mcq.cqn);
+ devlink_fmsg_u8_pair_put(fmsg, "HW status", hw_status);
+ devlink_fmsg_u32_pair_put(fmsg, "ci", mlx5_cqwq_get_ci(&cq->wq));
+ devlink_fmsg_u32_pair_put(fmsg, "size", mlx5_cqwq_get_size(&cq->wq));
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
-int mlx5e_health_cq_common_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg)
+void mlx5e_health_cq_common_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg)
{
u8 cq_log_stride;
u32 cq_sz;
- int err;
cq_sz = mlx5_cqwq_get_size(&cq->wq);
cq_log_stride = mlx5_cqwq_get_log_stride_size(&cq->wq);
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "CQ");
- if (err)
- return err;
-
- err = devlink_fmsg_u64_pair_put(fmsg, "stride size", BIT(cq_log_stride));
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "size", cq_sz);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "CQ");
+ devlink_fmsg_u64_pair_put(fmsg, "stride size", BIT(cq_log_stride));
+ devlink_fmsg_u32_pair_put(fmsg, "size", cq_sz);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
-int mlx5e_health_eq_diag_fmsg(struct mlx5_eq_comp *eq, struct devlink_fmsg *fmsg)
+void mlx5e_health_eq_diag_fmsg(struct mlx5_eq_comp *eq, struct devlink_fmsg *fmsg)
{
- int err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "EQ");
- if (err)
- return err;
-
- err = devlink_fmsg_u8_pair_put(fmsg, "eqn", eq->core.eqn);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "irqn", eq->core.irqn);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "vecidx", eq->core.vecidx);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "ci", eq->core.cons_index);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "size", eq_get_size(&eq->core));
- if (err)
- return err;
-
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "EQ");
+ devlink_fmsg_u8_pair_put(fmsg, "eqn", eq->core.eqn);
+ devlink_fmsg_u32_pair_put(fmsg, "irqn", eq->core.irqn);
+ devlink_fmsg_u32_pair_put(fmsg, "vecidx", eq->core.vecidx);
+ devlink_fmsg_u32_pair_put(fmsg, "ci", eq->core.cons_index);
+ devlink_fmsg_u32_pair_put(fmsg, "size", eq_get_size(&eq->core));
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
void mlx5e_health_create_reporters(struct mlx5e_priv *priv)
@@ -235,23 +160,19 @@ int mlx5e_health_report(struct mlx5e_priv *priv,
}
#define MLX5_HEALTH_DEVLINK_MAX_SIZE 1024
-static int mlx5e_health_rsc_fmsg_binary(struct devlink_fmsg *fmsg,
- const void *value, u32 value_len)
+static void mlx5e_health_rsc_fmsg_binary(struct devlink_fmsg *fmsg,
+ const void *value, u32 value_len)
{
u32 data_size;
- int err = 0;
u32 offset;
for (offset = 0; offset < value_len; offset += data_size) {
data_size = value_len - offset;
if (data_size > MLX5_HEALTH_DEVLINK_MAX_SIZE)
data_size = MLX5_HEALTH_DEVLINK_MAX_SIZE;
- err = devlink_fmsg_binary_put(fmsg, value + offset, data_size);
- if (err)
- break;
+ devlink_fmsg_binary_put(fmsg, value + offset, data_size);
}
- return err;
}
int mlx5e_health_rsc_fmsg_dump(struct mlx5e_priv *priv, struct mlx5_rsc_key *key,
@@ -259,9 +180,8 @@ int mlx5e_health_rsc_fmsg_dump(struct mlx5e_priv *priv, struct mlx5_rsc_key *key
{
struct mlx5_core_dev *mdev = priv->mdev;
struct mlx5_rsc_dump_cmd *cmd;
+ int cmd_err, err = 0;
struct page *page;
- int cmd_err, err;
- int end_err;
int size;
if (IS_ERR_OR_NULL(mdev->rsc_dump))
@@ -271,9 +191,7 @@ int mlx5e_health_rsc_fmsg_dump(struct mlx5e_priv *priv, struct mlx5_rsc_key *key
if (!page)
return -ENOMEM;
- err = devlink_fmsg_binary_pair_nest_start(fmsg, "data");
- if (err)
- goto free_page;
+ devlink_fmsg_binary_pair_nest_start(fmsg, "data");
cmd = mlx5_rsc_dump_cmd_create(mdev, key);
if (IS_ERR(cmd)) {
@@ -288,52 +206,31 @@ int mlx5e_health_rsc_fmsg_dump(struct mlx5e_priv *priv, struct mlx5_rsc_key *key
goto destroy_cmd;
}
- err = mlx5e_health_rsc_fmsg_binary(fmsg, page_address(page), size);
- if (err)
- goto destroy_cmd;
-
+ mlx5e_health_rsc_fmsg_binary(fmsg, page_address(page), size);
} while (cmd_err > 0);
destroy_cmd:
mlx5_rsc_dump_cmd_destroy(cmd);
- end_err = devlink_fmsg_binary_pair_nest_end(fmsg);
- if (end_err)
- err = end_err;
+ devlink_fmsg_binary_pair_nest_end(fmsg);
free_page:
__free_page(page);
return err;
}
-int mlx5e_health_queue_dump(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg,
- int queue_idx, char *lbl)
+void mlx5e_health_queue_dump(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg,
+ int queue_idx, char *lbl)
{
struct mlx5_rsc_key key = {};
- int err;
key.rsc = MLX5_SGMT_TYPE_FULL_QPC;
key.index1 = queue_idx;
key.size = PAGE_SIZE;
key.num_of_obj1 = 1;
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, lbl);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "index", queue_idx);
- if (err)
- return err;
-
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- return devlink_fmsg_obj_nest_end(fmsg);
+ devlink_fmsg_obj_nest_start(fmsg);
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, lbl);
+ devlink_fmsg_u32_pair_put(fmsg, "index", queue_idx);
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ devlink_fmsg_obj_nest_end(fmsg);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
index 415840c3ef84..84be3dd6f747 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/health.h
@@ -20,11 +20,11 @@ void mlx5e_reporter_tx_err_cqe(struct mlx5e_txqsq *sq);
int mlx5e_reporter_tx_timeout(struct mlx5e_txqsq *sq);
void mlx5e_reporter_tx_ptpsq_unhealthy(struct mlx5e_ptpsq *ptpsq);
-int mlx5e_health_cq_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg);
-int mlx5e_health_cq_common_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg);
-int mlx5e_health_eq_diag_fmsg(struct mlx5_eq_comp *eq, struct devlink_fmsg *fmsg);
-int mlx5e_health_fmsg_named_obj_nest_start(struct devlink_fmsg *fmsg, char *name);
-int mlx5e_health_fmsg_named_obj_nest_end(struct devlink_fmsg *fmsg);
+void mlx5e_health_cq_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg);
+void mlx5e_health_cq_common_diag_fmsg(struct mlx5e_cq *cq, struct devlink_fmsg *fmsg);
+void mlx5e_health_eq_diag_fmsg(struct mlx5_eq_comp *eq, struct devlink_fmsg *fmsg);
+void mlx5e_health_fmsg_named_obj_nest_start(struct devlink_fmsg *fmsg, char *name);
+void mlx5e_health_fmsg_named_obj_nest_end(struct devlink_fmsg *fmsg);
void mlx5e_reporter_rx_create(struct mlx5e_priv *priv);
void mlx5e_reporter_rx_destroy(struct mlx5e_priv *priv);
@@ -54,6 +54,6 @@ void mlx5e_health_destroy_reporters(struct mlx5e_priv *priv);
void mlx5e_health_channels_update(struct mlx5e_priv *priv);
int mlx5e_health_rsc_fmsg_dump(struct mlx5e_priv *priv, struct mlx5_rsc_key *key,
struct devlink_fmsg *fmsg);
-int mlx5e_health_queue_dump(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg,
- int queue_idx, char *lbl);
+void mlx5e_health_queue_dump(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg,
+ int queue_idx, char *lbl);
#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
index e8eea9ffd5eb..fea8c0a5fe89 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_rx.c
@@ -199,78 +199,38 @@ static int mlx5e_rx_reporter_recover(struct devlink_health_reporter *reporter,
mlx5e_health_recover_channels(priv);
}
-static int mlx5e_reporter_icosq_diagnose(struct mlx5e_icosq *icosq, u8 hw_state,
- struct devlink_fmsg *fmsg)
+static void mlx5e_reporter_icosq_diagnose(struct mlx5e_icosq *icosq, u8 hw_state,
+ struct devlink_fmsg *fmsg)
{
- int err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "ICOSQ");
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "sqn", icosq->sqn);
- if (err)
- return err;
-
- err = devlink_fmsg_u8_pair_put(fmsg, "HW state", hw_state);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "cc", icosq->cc);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "pc", icosq->pc);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "WQE size",
- mlx5_wq_cyc_get_size(&icosq->wq));
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "CQ");
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "cqn", icosq->cq.mcq.cqn);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "cc", icosq->cq.wq.cc);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "size", mlx5_cqwq_get_size(&icosq->cq.wq));
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "ICOSQ");
+ devlink_fmsg_u32_pair_put(fmsg, "sqn", icosq->sqn);
+ devlink_fmsg_u8_pair_put(fmsg, "HW state", hw_state);
+ devlink_fmsg_u32_pair_put(fmsg, "cc", icosq->cc);
+ devlink_fmsg_u32_pair_put(fmsg, "pc", icosq->pc);
+ devlink_fmsg_u32_pair_put(fmsg, "WQE size", mlx5_wq_cyc_get_size(&icosq->wq));
+
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "CQ");
+ devlink_fmsg_u32_pair_put(fmsg, "cqn", icosq->cq.mcq.cqn);
+ devlink_fmsg_u32_pair_put(fmsg, "cc", icosq->cq.wq.cc);
+ devlink_fmsg_u32_pair_put(fmsg, "size", mlx5_cqwq_get_size(&icosq->cq.wq));
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
-static int mlx5e_health_rq_put_sw_state(struct devlink_fmsg *fmsg, struct mlx5e_rq *rq)
+static void mlx5e_health_rq_put_sw_state(struct devlink_fmsg *fmsg, struct mlx5e_rq *rq)
{
- int err;
int i;
BUILD_BUG_ON_MSG(ARRAY_SIZE(rq_sw_state_type_name) != MLX5E_NUM_RQ_STATES,
"rq_sw_state_type_name string array must be consistent with MLX5E_RQ_STATE_* enum in en.h");
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SW State");
- if (err)
- return err;
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SW State");
- for (i = 0; i < ARRAY_SIZE(rq_sw_state_type_name); ++i) {
- err = devlink_fmsg_u32_pair_put(fmsg, rq_sw_state_type_name[i],
- test_bit(i, &rq->state));
- if (err)
- return err;
- }
+ for (i = 0; i < ARRAY_SIZE(rq_sw_state_type_name); ++i)
+ devlink_fmsg_u32_pair_put(fmsg, rq_sw_state_type_name[i],
+ test_bit(i, &rq->state));
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
static int
@@ -291,184 +251,93 @@ mlx5e_rx_reporter_build_diagnose_output_rq_common(struct mlx5e_rq *rq,
wq_head = mlx5e_rqwq_get_head(rq);
wqe_counter = mlx5e_rqwq_get_wqe_counter(rq);
- err = devlink_fmsg_u32_pair_put(fmsg, "rqn", rq->rqn);
- if (err)
- return err;
-
- err = devlink_fmsg_u8_pair_put(fmsg, "HW state", hw_state);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "WQE counter", wqe_counter);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "posted WQEs", wqes_sz);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "cc", wq_head);
- if (err)
- return err;
-
- err = mlx5e_health_rq_put_sw_state(fmsg, rq);
- if (err)
- return err;
-
- err = mlx5e_health_cq_diag_fmsg(&rq->cq, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_eq_diag_fmsg(rq->cq.mcq.eq, fmsg);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "rqn", rq->rqn);
+ devlink_fmsg_u8_pair_put(fmsg, "HW state", hw_state);
+ devlink_fmsg_u32_pair_put(fmsg, "WQE counter", wqe_counter);
+ devlink_fmsg_u32_pair_put(fmsg, "posted WQEs", wqes_sz);
+ devlink_fmsg_u32_pair_put(fmsg, "cc", wq_head);
+ mlx5e_health_rq_put_sw_state(fmsg, rq);
+ mlx5e_health_cq_diag_fmsg(&rq->cq, fmsg);
+ mlx5e_health_eq_diag_fmsg(rq->cq.mcq.eq, fmsg);
if (rq->icosq) {
struct mlx5e_icosq *icosq = rq->icosq;
u8 icosq_hw_state;
+ int err;
err = mlx5_core_query_sq_state(rq->mdev, icosq->sqn, &icosq_hw_state);
if (err)
return err;
- err = mlx5e_reporter_icosq_diagnose(icosq, icosq_hw_state, fmsg);
- if (err)
- return err;
+ mlx5e_reporter_icosq_diagnose(icosq, icosq_hw_state, fmsg);
}
return 0;
}
-static int mlx5e_rx_reporter_build_diagnose_output(struct mlx5e_rq *rq,
- struct devlink_fmsg *fmsg)
+static void mlx5e_rx_reporter_build_diagnose_output(struct mlx5e_rq *rq,
+ struct devlink_fmsg *fmsg)
{
- int err;
-
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "channel ix", rq->ix);
- if (err)
- return err;
-
- err = mlx5e_rx_reporter_build_diagnose_output_rq_common(rq, fmsg);
- if (err)
- return err;
-
- return devlink_fmsg_obj_nest_end(fmsg);
+ devlink_fmsg_obj_nest_start(fmsg);
+ devlink_fmsg_u32_pair_put(fmsg, "channel ix", rq->ix);
+ mlx5e_rx_reporter_build_diagnose_output_rq_common(rq, fmsg);
+ devlink_fmsg_obj_nest_end(fmsg);
}
-static int mlx5e_rx_reporter_diagnose_generic_rq(struct mlx5e_rq *rq,
- struct devlink_fmsg *fmsg)
+static void mlx5e_rx_reporter_diagnose_generic_rq(struct mlx5e_rq *rq,
+ struct devlink_fmsg *fmsg)
{
struct mlx5e_priv *priv = rq->priv;
struct mlx5e_params *params;
u32 rq_stride, rq_sz;
bool real_time;
- int err;
params = &priv->channels.params;
rq_sz = mlx5e_rqwq_get_size(rq);
real_time = mlx5_is_real_time_rq(priv->mdev);
rq_stride = BIT(mlx5e_mpwqe_get_log_stride_size(priv->mdev, params, NULL));
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "RQ");
- if (err)
- return err;
-
- err = devlink_fmsg_u8_pair_put(fmsg, "type", params->rq_wq_type);
- if (err)
- return err;
-
- err = devlink_fmsg_u64_pair_put(fmsg, "stride size", rq_stride);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "size", rq_sz);
- if (err)
- return err;
-
- err = devlink_fmsg_string_pair_put(fmsg, "ts_format", real_time ? "RT" : "FRC");
- if (err)
- return err;
-
- err = mlx5e_health_cq_common_diag_fmsg(&rq->cq, fmsg);
- if (err)
- return err;
-
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "RQ");
+ devlink_fmsg_u8_pair_put(fmsg, "type", params->rq_wq_type);
+ devlink_fmsg_u64_pair_put(fmsg, "stride size", rq_stride);
+ devlink_fmsg_u32_pair_put(fmsg, "size", rq_sz);
+ devlink_fmsg_string_pair_put(fmsg, "ts_format", real_time ? "RT" : "FRC");
+ mlx5e_health_cq_common_diag_fmsg(&rq->cq, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
-static int
+static void
mlx5e_rx_reporter_diagnose_common_ptp_config(struct mlx5e_priv *priv, struct mlx5e_ptp *ptp_ch,
struct devlink_fmsg *fmsg)
{
- int err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "PTP");
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "filter_type", priv->tstamp.rx_filter);
- if (err)
- return err;
-
- err = mlx5e_rx_reporter_diagnose_generic_rq(&ptp_ch->rq, fmsg);
- if (err)
- return err;
-
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "PTP");
+ devlink_fmsg_u32_pair_put(fmsg, "filter_type", priv->tstamp.rx_filter);
+ mlx5e_rx_reporter_diagnose_generic_rq(&ptp_ch->rq, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
-static int
+static void
mlx5e_rx_reporter_diagnose_common_config(struct devlink_health_reporter *reporter,
struct devlink_fmsg *fmsg)
{
struct mlx5e_priv *priv = devlink_health_reporter_priv(reporter);
struct mlx5e_rq *generic_rq = &priv->channels.c[0]->rq;
struct mlx5e_ptp *ptp_ch = priv->channels.ptp;
- int err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "Common config");
- if (err)
- return err;
-
- err = mlx5e_rx_reporter_diagnose_generic_rq(generic_rq, fmsg);
- if (err)
- return err;
- if (ptp_ch && test_bit(MLX5E_PTP_STATE_RX, ptp_ch->state)) {
- err = mlx5e_rx_reporter_diagnose_common_ptp_config(priv, ptp_ch, fmsg);
- if (err)
- return err;
- }
-
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "Common config");
+ mlx5e_rx_reporter_diagnose_generic_rq(generic_rq, fmsg);
+ if (ptp_ch && test_bit(MLX5E_PTP_STATE_RX, ptp_ch->state))
+ mlx5e_rx_reporter_diagnose_common_ptp_config(priv, ptp_ch, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
-static int mlx5e_rx_reporter_build_diagnose_output_ptp_rq(struct mlx5e_rq *rq,
- struct devlink_fmsg *fmsg)
+static void mlx5e_rx_reporter_build_diagnose_output_ptp_rq(struct mlx5e_rq *rq,
+ struct devlink_fmsg *fmsg)
{
- int err;
-
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_string_pair_put(fmsg, "channel", "ptp");
- if (err)
- return err;
-
- err = mlx5e_rx_reporter_build_diagnose_output_rq_common(rq, fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_obj_nest_start(fmsg);
+ devlink_fmsg_string_pair_put(fmsg, "channel", "ptp");
+ mlx5e_rx_reporter_build_diagnose_output_rq_common(rq, fmsg);
+ devlink_fmsg_obj_nest_end(fmsg);
}
static int mlx5e_rx_reporter_diagnose(struct devlink_health_reporter *reporter,
@@ -477,20 +346,15 @@ static int mlx5e_rx_reporter_diagnose(struct devlink_health_reporter *reporter,
{
struct mlx5e_priv *priv = devlink_health_reporter_priv(reporter);
struct mlx5e_ptp *ptp_ch = priv->channels.ptp;
- int i, err = 0;
+ int i;
mutex_lock(&priv->state_lock);
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
goto unlock;
- err = mlx5e_rx_reporter_diagnose_common_config(reporter, fmsg);
- if (err)
- goto unlock;
-
- err = devlink_fmsg_arr_pair_nest_start(fmsg, "RQs");
- if (err)
- goto unlock;
+ mlx5e_rx_reporter_diagnose_common_config(reporter, fmsg);
+ devlink_fmsg_arr_pair_nest_start(fmsg, "RQs");
for (i = 0; i < priv->channels.num; i++) {
struct mlx5e_channel *c = priv->channels.c[i];
@@ -499,19 +363,14 @@ static int mlx5e_rx_reporter_diagnose(struct devlink_health_reporter *reporter,
rq = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state) ?
&c->xskrq : &c->rq;
- err = mlx5e_rx_reporter_build_diagnose_output(rq, fmsg);
- if (err)
- goto unlock;
- }
- if (ptp_ch && test_bit(MLX5E_PTP_STATE_RX, ptp_ch->state)) {
- err = mlx5e_rx_reporter_build_diagnose_output_ptp_rq(&ptp_ch->rq, fmsg);
- if (err)
- goto unlock;
+ mlx5e_rx_reporter_build_diagnose_output(rq, fmsg);
}
- err = devlink_fmsg_arr_pair_nest_end(fmsg);
+ if (ptp_ch && test_bit(MLX5E_PTP_STATE_RX, ptp_ch->state))
+ mlx5e_rx_reporter_build_diagnose_output_ptp_rq(&ptp_ch->rq, fmsg);
+ devlink_fmsg_arr_pair_nest_end(fmsg);
unlock:
mutex_unlock(&priv->state_lock);
- return err;
+ return 0;
}
static int mlx5e_rx_reporter_dump_icosq(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg,
@@ -519,61 +378,34 @@ static int mlx5e_rx_reporter_dump_icosq(struct mlx5e_priv *priv, struct devlink_
{
struct mlx5e_txqsq *icosq = ctx;
struct mlx5_rsc_key key = {};
- int err;
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
return 0;
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SX Slice");
- if (err)
- return err;
-
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SX Slice");
key.size = PAGE_SIZE;
key.rsc = MLX5_SGMT_TYPE_SX_SLICE_ALL;
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "ICOSQ");
- if (err)
- return err;
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "QPC");
- if (err)
- return err;
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "ICOSQ");
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "QPC");
key.rsc = MLX5_SGMT_TYPE_FULL_QPC;
key.index1 = icosq->sqn;
key.num_of_obj1 = 1;
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "send_buff");
- if (err)
- return err;
-
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "send_buff");
key.rsc = MLX5_SGMT_TYPE_SND_BUFF;
key.num_of_obj2 = MLX5_RSC_DUMP_ALL;
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ return 0;
}
static int mlx5e_rx_reporter_dump_rq(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg,
@@ -581,60 +413,34 @@ static int mlx5e_rx_reporter_dump_rq(struct mlx5e_priv *priv, struct devlink_fms
{
struct mlx5_rsc_key key = {};
struct mlx5e_rq *rq = ctx;
- int err;
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
return 0;
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "RX Slice");
- if (err)
- return err;
-
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "RX Slice");
key.size = PAGE_SIZE;
key.rsc = MLX5_SGMT_TYPE_RX_SLICE_ALL;
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "RQ");
- if (err)
- return err;
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "QPC");
- if (err)
- return err;
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "RQ");
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "QPC");
key.rsc = MLX5_SGMT_TYPE_FULL_QPC;
key.index1 = rq->rqn;
key.num_of_obj1 = 1;
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "receive_buff");
- if (err)
- return err;
-
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "receive_buff");
key.rsc = MLX5_SGMT_TYPE_RCV_BUFF;
key.num_of_obj2 = MLX5_RSC_DUMP_ALL;
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ return 0;
}
static int mlx5e_rx_reporter_dump_all_rqs(struct mlx5e_priv *priv,
@@ -642,44 +448,28 @@ static int mlx5e_rx_reporter_dump_all_rqs(struct mlx5e_priv *priv,
{
struct mlx5e_ptp *ptp_ch = priv->channels.ptp;
struct mlx5_rsc_key key = {};
- int i, err;
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
return 0;
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "RX Slice");
- if (err)
- return err;
-
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "RX Slice");
key.size = PAGE_SIZE;
key.rsc = MLX5_SGMT_TYPE_RX_SLICE_ALL;
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_arr_pair_nest_start(fmsg, "RQs");
- if (err)
- return err;
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ devlink_fmsg_arr_pair_nest_start(fmsg, "RQs");
- for (i = 0; i < priv->channels.num; i++) {
+ for (int i = 0; i < priv->channels.num; i++) {
struct mlx5e_rq *rq = &priv->channels.c[i]->rq;
- err = mlx5e_health_queue_dump(priv, fmsg, rq->rqn, "RQ");
- if (err)
- return err;
+ mlx5e_health_queue_dump(priv, fmsg, rq->rqn, "RQ");
}
- if (ptp_ch && test_bit(MLX5E_PTP_STATE_RX, ptp_ch->state)) {
- err = mlx5e_health_queue_dump(priv, fmsg, ptp_ch->rq.rqn, "PTP RQ");
- if (err)
- return err;
- }
+ if (ptp_ch && test_bit(MLX5E_PTP_STATE_RX, ptp_ch->state))
+ mlx5e_health_queue_dump(priv, fmsg, ptp_ch->rq.rqn, "PTP RQ");
- return devlink_fmsg_arr_pair_nest_end(fmsg);
+ devlink_fmsg_arr_pair_nest_end(fmsg);
+ return 0;
}
static int mlx5e_rx_reporter_dump_from_ctx(struct mlx5e_priv *priv,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
index ff8242f67c54..6b44ddce14e9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/reporter_tx.c
@@ -50,25 +50,19 @@ static void mlx5e_reset_txqsq_cc_pc(struct mlx5e_txqsq *sq)
sq->pc = 0;
}
-static int mlx5e_health_sq_put_sw_state(struct devlink_fmsg *fmsg, struct mlx5e_txqsq *sq)
+static void mlx5e_health_sq_put_sw_state(struct devlink_fmsg *fmsg, struct mlx5e_txqsq *sq)
{
- int err;
int i;
BUILD_BUG_ON_MSG(ARRAY_SIZE(sq_sw_state_type_name) != MLX5E_NUM_SQ_STATES,
"sq_sw_state_type_name string array must be consistent with MLX5E_SQ_STATE_* enum in en.h");
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SW State");
- if (err)
- return err;
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SW State");
- for (i = 0; i < ARRAY_SIZE(sq_sw_state_type_name); ++i) {
- err = devlink_fmsg_u32_pair_put(fmsg, sq_sw_state_type_name[i],
- test_bit(i, &sq->state));
- if (err)
- return err;
- }
+ for (i = 0; i < ARRAY_SIZE(sq_sw_state_type_name); ++i)
+ devlink_fmsg_u32_pair_put(fmsg, sq_sw_state_type_name[i],
+ test_bit(i, &sq->state));
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
static int mlx5e_tx_reporter_err_cqe_recover(void *ctx)
@@ -220,7 +214,7 @@ static int mlx5e_tx_reporter_recover(struct devlink_health_reporter *reporter,
mlx5e_health_recover_channels(priv);
}
-static int
+static void
mlx5e_tx_reporter_build_diagnose_output_sq_common(struct devlink_fmsg *fmsg,
struct mlx5e_txqsq *sq, int tc)
{
@@ -229,164 +223,71 @@ mlx5e_tx_reporter_build_diagnose_output_sq_common(struct devlink_fmsg *fmsg,
u8 state;
int err;
- err = mlx5_core_query_sq_state(priv->mdev, sq->sqn, &state);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "tc", tc);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "txq ix", sq->txq_ix);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "sqn", sq->sqn);
- if (err)
- return err;
-
- err = devlink_fmsg_u8_pair_put(fmsg, "HW state", state);
- if (err)
- return err;
-
- err = devlink_fmsg_bool_pair_put(fmsg, "stopped", stopped);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "tc", tc);
+ devlink_fmsg_u32_pair_put(fmsg, "txq ix", sq->txq_ix);
+ devlink_fmsg_u32_pair_put(fmsg, "sqn", sq->sqn);
- err = devlink_fmsg_u32_pair_put(fmsg, "cc", sq->cc);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "pc", sq->pc);
- if (err)
- return err;
-
- err = mlx5e_health_sq_put_sw_state(fmsg, sq);
- if (err)
- return err;
-
- err = mlx5e_health_cq_diag_fmsg(&sq->cq, fmsg);
- if (err)
- return err;
-
- return mlx5e_health_eq_diag_fmsg(sq->cq.mcq.eq, fmsg);
+ err = mlx5_core_query_sq_state(priv->mdev, sq->sqn, &state);
+ if (!err)
+ devlink_fmsg_u8_pair_put(fmsg, "HW state", state);
+
+ devlink_fmsg_bool_pair_put(fmsg, "stopped", stopped);
+ devlink_fmsg_u32_pair_put(fmsg, "cc", sq->cc);
+ devlink_fmsg_u32_pair_put(fmsg, "pc", sq->pc);
+ mlx5e_health_sq_put_sw_state(fmsg, sq);
+ mlx5e_health_cq_diag_fmsg(&sq->cq, fmsg);
+ mlx5e_health_eq_diag_fmsg(sq->cq.mcq.eq, fmsg);
}
-static int
+static void
mlx5e_tx_reporter_build_diagnose_output(struct devlink_fmsg *fmsg,
struct mlx5e_txqsq *sq, int tc)
{
- int err;
-
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "channel ix", sq->ch_ix);
- if (err)
- return err;
-
- err = mlx5e_tx_reporter_build_diagnose_output_sq_common(fmsg, sq, tc);
- if (err)
- return err;
-
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_obj_nest_start(fmsg);
+ devlink_fmsg_u32_pair_put(fmsg, "channel ix", sq->ch_ix);
+ mlx5e_tx_reporter_build_diagnose_output_sq_common(fmsg, sq, tc);
+ devlink_fmsg_obj_nest_end(fmsg);
}
-static int
+static void
mlx5e_tx_reporter_build_diagnose_output_ptpsq(struct devlink_fmsg *fmsg,
struct mlx5e_ptpsq *ptpsq, int tc)
{
- int err;
-
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_string_pair_put(fmsg, "channel", "ptp");
- if (err)
- return err;
-
- err = mlx5e_tx_reporter_build_diagnose_output_sq_common(fmsg, &ptpsq->txqsq, tc);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "Port TS");
- if (err)
- return err;
-
- err = mlx5e_health_cq_diag_fmsg(&ptpsq->ts_cq, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_obj_nest_start(fmsg);
+ devlink_fmsg_string_pair_put(fmsg, "channel", "ptp");
+ mlx5e_tx_reporter_build_diagnose_output_sq_common(fmsg, &ptpsq->txqsq, tc);
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "Port TS");
+ mlx5e_health_cq_diag_fmsg(&ptpsq->ts_cq, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ devlink_fmsg_obj_nest_end(fmsg);
}
-static int
+static void
mlx5e_tx_reporter_diagnose_generic_txqsq(struct devlink_fmsg *fmsg,
struct mlx5e_txqsq *txqsq)
{
- u32 sq_stride, sq_sz;
- bool real_time;
- int err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SQ");
- if (err)
- return err;
-
- real_time = mlx5_is_real_time_sq(txqsq->mdev);
- sq_sz = mlx5_wq_cyc_get_size(&txqsq->wq);
- sq_stride = MLX5_SEND_WQE_BB;
-
- err = devlink_fmsg_u64_pair_put(fmsg, "stride size", sq_stride);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_pair_put(fmsg, "size", sq_sz);
- if (err)
- return err;
-
- err = devlink_fmsg_string_pair_put(fmsg, "ts_format", real_time ? "RT" : "FRC");
- if (err)
- return err;
-
- err = mlx5e_health_cq_common_diag_fmsg(&txqsq->cq, fmsg);
- if (err)
- return err;
-
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ bool real_time = mlx5_is_real_time_sq(txqsq->mdev);
+ u32 sq_sz = mlx5_wq_cyc_get_size(&txqsq->wq);
+ u32 sq_stride = MLX5_SEND_WQE_BB;
+
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SQ");
+ devlink_fmsg_u64_pair_put(fmsg, "stride size", sq_stride);
+ devlink_fmsg_u32_pair_put(fmsg, "size", sq_sz);
+ devlink_fmsg_string_pair_put(fmsg, "ts_format", real_time ? "RT" : "FRC");
+ mlx5e_health_cq_common_diag_fmsg(&txqsq->cq, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
-static int
+static void
mlx5e_tx_reporter_diagnose_generic_tx_port_ts(struct devlink_fmsg *fmsg,
struct mlx5e_ptpsq *ptpsq)
{
- int err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "Port TS");
- if (err)
- return err;
-
- err = mlx5e_health_cq_common_diag_fmsg(&ptpsq->ts_cq, fmsg);
- if (err)
- return err;
-
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "Port TS");
+ mlx5e_health_cq_common_diag_fmsg(&ptpsq->ts_cq, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
-static int
+static void
mlx5e_tx_reporter_diagnose_common_config(struct devlink_health_reporter *reporter,
struct devlink_fmsg *fmsg)
{
@@ -394,39 +295,20 @@ mlx5e_tx_reporter_diagnose_common_config(struct devlink_health_reporter *reporte
struct mlx5e_txqsq *generic_sq = priv->txq2sq[0];
struct mlx5e_ptp *ptp_ch = priv->channels.ptp;
struct mlx5e_ptpsq *generic_ptpsq;
- int err;
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "Common Config");
- if (err)
- return err;
-
- err = mlx5e_tx_reporter_diagnose_generic_txqsq(fmsg, generic_sq);
- if (err)
- return err;
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "Common Config");
+ mlx5e_tx_reporter_diagnose_generic_txqsq(fmsg, generic_sq);
if (!ptp_ch || !test_bit(MLX5E_PTP_STATE_TX, ptp_ch->state))
goto out;
generic_ptpsq = &ptp_ch->ptpsq[0];
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "PTP");
- if (err)
- return err;
-
- err = mlx5e_tx_reporter_diagnose_generic_txqsq(fmsg, &generic_ptpsq->txqsq);
- if (err)
- return err;
-
- err = mlx5e_tx_reporter_diagnose_generic_tx_port_ts(fmsg, generic_ptpsq);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "PTP");
+ mlx5e_tx_reporter_diagnose_generic_txqsq(fmsg, &generic_ptpsq->txqsq);
+ mlx5e_tx_reporter_diagnose_generic_tx_port_ts(fmsg, generic_ptpsq);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
out:
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
}
static int mlx5e_tx_reporter_diagnose(struct devlink_health_reporter *reporter,
@@ -436,20 +318,15 @@ static int mlx5e_tx_reporter_diagnose(struct devlink_health_reporter *reporter,
struct mlx5e_priv *priv = devlink_health_reporter_priv(reporter);
struct mlx5e_ptp *ptp_ch = priv->channels.ptp;
- int i, tc, err = 0;
+ int i, tc;
mutex_lock(&priv->state_lock);
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
goto unlock;
- err = mlx5e_tx_reporter_diagnose_common_config(reporter, fmsg);
- if (err)
- goto unlock;
-
- err = devlink_fmsg_arr_pair_nest_start(fmsg, "SQs");
- if (err)
- goto unlock;
+ mlx5e_tx_reporter_diagnose_common_config(reporter, fmsg);
+ devlink_fmsg_arr_pair_nest_start(fmsg, "SQs");
for (i = 0; i < priv->channels.num; i++) {
struct mlx5e_channel *c = priv->channels.c[i];
@@ -457,31 +334,23 @@ static int mlx5e_tx_reporter_diagnose(struct devlink_health_reporter *reporter,
for (tc = 0; tc < mlx5e_get_dcb_num_tc(&priv->channels.params); tc++) {
struct mlx5e_txqsq *sq = &c->sq[tc];
- err = mlx5e_tx_reporter_build_diagnose_output(fmsg, sq, tc);
- if (err)
- goto unlock;
+ mlx5e_tx_reporter_build_diagnose_output(fmsg, sq, tc);
}
}
if (!ptp_ch || !test_bit(MLX5E_PTP_STATE_TX, ptp_ch->state))
goto close_sqs_nest;
- for (tc = 0; tc < mlx5e_get_dcb_num_tc(&priv->channels.params); tc++) {
- err = mlx5e_tx_reporter_build_diagnose_output_ptpsq(fmsg,
- &ptp_ch->ptpsq[tc],
- tc);
- if (err)
- goto unlock;
- }
+ for (tc = 0; tc < mlx5e_get_dcb_num_tc(&priv->channels.params); tc++)
+ mlx5e_tx_reporter_build_diagnose_output_ptpsq(fmsg,
+ &ptp_ch->ptpsq[tc],
+ tc);
close_sqs_nest:
- err = devlink_fmsg_arr_pair_nest_end(fmsg);
- if (err)
- goto unlock;
-
+ devlink_fmsg_arr_pair_nest_end(fmsg);
unlock:
mutex_unlock(&priv->state_lock);
- return err;
+ return 0;
}
static int mlx5e_tx_reporter_dump_sq(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg,
@@ -489,60 +358,33 @@ static int mlx5e_tx_reporter_dump_sq(struct mlx5e_priv *priv, struct devlink_fms
{
struct mlx5_rsc_key key = {};
struct mlx5e_txqsq *sq = ctx;
- int err;
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
return 0;
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SX Slice");
- if (err)
- return err;
-
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SX Slice");
key.size = PAGE_SIZE;
key.rsc = MLX5_SGMT_TYPE_SX_SLICE_ALL;
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SQ");
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "QPC");
- if (err)
- return err;
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SQ");
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "QPC");
key.rsc = MLX5_SGMT_TYPE_FULL_QPC;
key.index1 = sq->sqn;
key.num_of_obj1 = 1;
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "send_buff");
- if (err)
- return err;
-
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "send_buff");
key.rsc = MLX5_SGMT_TYPE_SND_BUFF;
key.num_of_obj2 = MLX5_RSC_DUMP_ALL;
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- return mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ return 0;
}
static int mlx5e_tx_reporter_timeout_dump(struct mlx5e_priv *priv, struct devlink_fmsg *fmsg,
@@ -567,28 +409,17 @@ static int mlx5e_tx_reporter_dump_all_sqs(struct mlx5e_priv *priv,
{
struct mlx5e_ptp *ptp_ch = priv->channels.ptp;
struct mlx5_rsc_key key = {};
- int i, tc, err;
+ int i, tc;
if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
return 0;
- err = mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SX Slice");
- if (err)
- return err;
-
+ mlx5e_health_fmsg_named_obj_nest_start(fmsg, "SX Slice");
key.size = PAGE_SIZE;
key.rsc = MLX5_SGMT_TYPE_SX_SLICE_ALL;
- err = mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
- if (err)
- return err;
-
- err = mlx5e_health_fmsg_named_obj_nest_end(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_arr_pair_nest_start(fmsg, "SQs");
- if (err)
- return err;
+ mlx5e_health_rsc_fmsg_dump(priv, &key, fmsg);
+ mlx5e_health_fmsg_named_obj_nest_end(fmsg);
+ devlink_fmsg_arr_pair_nest_start(fmsg, "SQs");
for (i = 0; i < priv->channels.num; i++) {
struct mlx5e_channel *c = priv->channels.c[i];
@@ -596,9 +427,7 @@ static int mlx5e_tx_reporter_dump_all_sqs(struct mlx5e_priv *priv,
for (tc = 0; tc < mlx5e_get_dcb_num_tc(&priv->channels.params); tc++) {
struct mlx5e_txqsq *sq = &c->sq[tc];
- err = mlx5e_health_queue_dump(priv, fmsg, sq->sqn, "SQ");
- if (err)
- return err;
+ mlx5e_health_queue_dump(priv, fmsg, sq->sqn, "SQ");
}
}
@@ -606,13 +435,12 @@ static int mlx5e_tx_reporter_dump_all_sqs(struct mlx5e_priv *priv,
for (tc = 0; tc < mlx5e_get_dcb_num_tc(&priv->channels.params); tc++) {
struct mlx5e_txqsq *sq = &ptp_ch->ptpsq[tc].txqsq;
- err = mlx5e_health_queue_dump(priv, fmsg, sq->sqn, "PTP SQ");
- if (err)
- return err;
+ mlx5e_health_queue_dump(priv, fmsg, sq->sqn, "PTP SQ");
}
}
- return devlink_fmsg_arr_pair_nest_end(fmsg);
+ devlink_fmsg_arr_pair_nest_end(fmsg);
+ return 0;
}
static int mlx5e_tx_reporter_dump_from_ctx(struct mlx5e_priv *priv,
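Both reporter files above follow the same conversion: the devlink_fmsg_* and mlx5e_health_* helpers no longer return an error to be checked at every call site, so the diagnose/dump callbacks shrink to straight-line emitters that return 0 and leave error tracking to devlink's fmsg machinery. A minimal sketch of the resulting callback shape (hypothetical reporter, placeholder values, not driver code):

static int example_reporter_diagnose(struct devlink_health_reporter *reporter,
                                     struct devlink_fmsg *fmsg,
                                     struct netlink_ext_ack *extack)
{
        /* The fmsg helpers are now void; no per-call error handling is needed. */
        devlink_fmsg_obj_nest_start(fmsg);
        devlink_fmsg_u32_pair_put(fmsg, "queue ix", 0);
        devlink_fmsg_bool_pair_put(fmsg, "stopped", false);
        devlink_fmsg_obj_nest_end(fmsg);
        return 0;
}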
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rqt.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rqt.c
index b915fb29dd2c..7b8ff7a71003 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rqt.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rqt.c
@@ -9,7 +9,7 @@ void mlx5e_rss_params_indir_init_uniform(struct mlx5e_rss_params_indir *indir,
{
unsigned int i;
- for (i = 0; i < MLX5E_INDIR_RQT_SIZE; i++)
+ for (i = 0; i < indir->actual_table_size; i++)
indir->table[i] = i % num_channels;
}
@@ -45,9 +45,9 @@ static int mlx5e_rqt_init(struct mlx5e_rqt *rqt, struct mlx5_core_dev *mdev,
}
int mlx5e_rqt_init_direct(struct mlx5e_rqt *rqt, struct mlx5_core_dev *mdev,
- bool indir_enabled, u32 init_rqn)
+ bool indir_enabled, u32 init_rqn, u32 indir_table_size)
{
- u16 max_size = indir_enabled ? MLX5E_INDIR_RQT_SIZE : 1;
+ u16 max_size = indir_enabled ? indir_table_size : 1;
return mlx5e_rqt_init(rqt, mdev, max_size, &init_rqn, 1);
}
@@ -68,11 +68,11 @@ static int mlx5e_calc_indir_rqns(u32 *rss_rqns, u32 *rqns, unsigned int num_rqns
{
unsigned int i;
- for (i = 0; i < MLX5E_INDIR_RQT_SIZE; i++) {
+ for (i = 0; i < indir->actual_table_size; i++) {
unsigned int ix = i;
if (hfunc == ETH_RSS_HASH_XOR)
- ix = mlx5e_bits_invert(ix, ilog2(MLX5E_INDIR_RQT_SIZE));
+ ix = mlx5e_bits_invert(ix, ilog2(indir->actual_table_size));
ix = indir->table[ix];
@@ -94,7 +94,7 @@ int mlx5e_rqt_init_indir(struct mlx5e_rqt *rqt, struct mlx5_core_dev *mdev,
u32 *rss_rqns;
int err;
- rss_rqns = kvmalloc_array(MLX5E_INDIR_RQT_SIZE, sizeof(*rss_rqns), GFP_KERNEL);
+ rss_rqns = kvmalloc_array(indir->actual_table_size, sizeof(*rss_rqns), GFP_KERNEL);
if (!rss_rqns)
return -ENOMEM;
@@ -102,13 +102,25 @@ int mlx5e_rqt_init_indir(struct mlx5e_rqt *rqt, struct mlx5_core_dev *mdev,
if (err)
goto out;
- err = mlx5e_rqt_init(rqt, mdev, MLX5E_INDIR_RQT_SIZE, rss_rqns, MLX5E_INDIR_RQT_SIZE);
+ err = mlx5e_rqt_init(rqt, mdev, indir->max_table_size, rss_rqns,
+ indir->actual_table_size);
out:
kvfree(rss_rqns);
return err;
}
+#define MLX5E_UNIFORM_SPREAD_RQT_FACTOR 2
+
+u32 mlx5e_rqt_size(struct mlx5_core_dev *mdev, unsigned int num_channels)
+{
+ u32 rqt_size = max_t(u32, MLX5E_INDIR_MIN_RQT_SIZE,
+ roundup_pow_of_two(num_channels * MLX5E_UNIFORM_SPREAD_RQT_FACTOR));
+ u32 max_cap_rqt_size = 1 << MLX5_CAP_GEN(mdev, log_max_rqt_size);
+
+ return min_t(u32, rqt_size, max_cap_rqt_size);
+}
+
void mlx5e_rqt_destroy(struct mlx5e_rqt *rqt)
{
mlx5_core_destroy_rqt(rqt->mdev, rqt->rqtn);
@@ -151,10 +163,10 @@ int mlx5e_rqt_redirect_indir(struct mlx5e_rqt *rqt, u32 *rqns, unsigned int num_
u32 *rss_rqns;
int err;
- if (WARN_ON(rqt->size != MLX5E_INDIR_RQT_SIZE))
+ if (WARN_ON(rqt->size != indir->max_table_size))
return -EINVAL;
- rss_rqns = kvmalloc_array(MLX5E_INDIR_RQT_SIZE, sizeof(*rss_rqns), GFP_KERNEL);
+ rss_rqns = kvmalloc_array(indir->actual_table_size, sizeof(*rss_rqns), GFP_KERNEL);
if (!rss_rqns)
return -ENOMEM;
@@ -162,7 +174,7 @@ int mlx5e_rqt_redirect_indir(struct mlx5e_rqt *rqt, u32 *rqns, unsigned int num_
if (err)
goto out;
- err = mlx5e_rqt_redirect(rqt, rss_rqns, MLX5E_INDIR_RQT_SIZE);
+ err = mlx5e_rqt_redirect(rqt, rss_rqns, indir->actual_table_size);
out:
kvfree(rss_rqns);
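For a concrete feel of the sizing rule introduced by mlx5e_rqt_size() above: the table grows with the channel count (times a spread factor of 2, rounded up to a power of two) but never drops below 256 entries and never exceeds the device cap. A self-contained userspace sketch of just that arithmetic follows; the cap value is an assumed example, whereas the driver reads it from log_max_rqt_size:

#include <stdio.h>
#include <stdint.h>

static uint32_t example_roundup_pow_of_two(uint32_t v)
{
        uint32_t r = 1;

        while (r < v)
                r <<= 1;
        return r;
}

/* size = min(max(256, roundup_pow_of_two(2 * num_channels)), device_cap) */
static uint32_t example_rqt_size(unsigned int num_channels, uint32_t device_cap)
{
        uint32_t rqt_size = example_roundup_pow_of_two(2 * num_channels);

        if (rqt_size < 256)
                rqt_size = 256;
        return rqt_size < device_cap ? rqt_size : device_cap;
}

int main(void)
{
        /* 24 channels hits the 256 floor; 200 channels rounds up to 512. */
        printf("%u %u\n", example_rqt_size(24, 2048), example_rqt_size(200, 2048));
        return 0;
}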
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rqt.h b/drivers/net/ethernet/mellanox/mlx5/core/en/rqt.h
index 60c985a12f24..77fba3ebd18d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rqt.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rqt.h
@@ -6,12 +6,14 @@
#include <linux/kernel.h>
-#define MLX5E_INDIR_RQT_SIZE (1 << 8)
+#define MLX5E_INDIR_MIN_RQT_SIZE (BIT(8))
struct mlx5_core_dev;
struct mlx5e_rss_params_indir {
- u32 table[MLX5E_INDIR_RQT_SIZE];
+ u32 *table;
+ u32 actual_table_size;
+ u32 max_table_size;
};
void mlx5e_rss_params_indir_init_uniform(struct mlx5e_rss_params_indir *indir,
@@ -24,7 +26,7 @@ struct mlx5e_rqt {
};
int mlx5e_rqt_init_direct(struct mlx5e_rqt *rqt, struct mlx5_core_dev *mdev,
- bool indir_enabled, u32 init_rqn);
+ bool indir_enabled, u32 init_rqn, u32 indir_table_size);
int mlx5e_rqt_init_indir(struct mlx5e_rqt *rqt, struct mlx5_core_dev *mdev,
u32 *rqns, unsigned int num_rqns,
u8 hfunc, struct mlx5e_rss_params_indir *indir);
@@ -35,6 +37,7 @@ static inline u32 mlx5e_rqt_get_rqtn(struct mlx5e_rqt *rqt)
return rqt->rqtn;
}
+u32 mlx5e_rqt_size(struct mlx5_core_dev *mdev, unsigned int num_channels);
int mlx5e_rqt_redirect_direct(struct mlx5e_rqt *rqt, u32 rqn);
int mlx5e_rqt_redirect_indir(struct mlx5e_rqt *rqt, u32 *rqns, unsigned int num_rqns,
u8 hfunc, struct mlx5e_rss_params_indir *indir);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rss.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rss.c
index 7f93426b88b3..c1545a2e8d6d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rss.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rss.c
@@ -81,14 +81,75 @@ struct mlx5e_rss {
refcount_t refcnt;
};
-struct mlx5e_rss *mlx5e_rss_alloc(void)
+void mlx5e_rss_params_indir_modify_actual_size(struct mlx5e_rss *rss, u32 num_channels)
{
- return kvzalloc(sizeof(struct mlx5e_rss), GFP_KERNEL);
+ rss->indir.actual_table_size = mlx5e_rqt_size(rss->mdev, num_channels);
}
-void mlx5e_rss_free(struct mlx5e_rss *rss)
+int mlx5e_rss_params_indir_init(struct mlx5e_rss_params_indir *indir, struct mlx5_core_dev *mdev,
+ u32 actual_table_size, u32 max_table_size)
{
+ indir->table = kvmalloc_array(max_table_size, sizeof(*indir->table), GFP_KERNEL);
+ if (!indir->table)
+ return -ENOMEM;
+
+ indir->max_table_size = max_table_size;
+ indir->actual_table_size = actual_table_size;
+
+ return 0;
+}
+
+void mlx5e_rss_params_indir_cleanup(struct mlx5e_rss_params_indir *indir)
+{
+ kvfree(indir->table);
+}
+
+static int mlx5e_rss_copy(struct mlx5e_rss *to, const struct mlx5e_rss *from)
+{
+ u32 *dst_indir_table;
+
+ if (to->indir.actual_table_size != from->indir.actual_table_size ||
+ to->indir.max_table_size != from->indir.max_table_size) {
+ mlx5e_rss_warn(to->mdev,
+ "Failed to copy RSS due to size mismatch, src (actual %u, max %u) != dst (actual %u, max %u)\n",
+ from->indir.actual_table_size, from->indir.max_table_size,
+ to->indir.actual_table_size, to->indir.max_table_size);
+ return -EINVAL;
+ }
+
+ dst_indir_table = to->indir.table;
+ *to = *from;
+ to->indir.table = dst_indir_table;
+ memcpy(to->indir.table, from->indir.table,
+ from->indir.actual_table_size * sizeof(*from->indir.table));
+ return 0;
+}
+
+static struct mlx5e_rss *mlx5e_rss_init_copy(const struct mlx5e_rss *from)
+{
+ struct mlx5e_rss *rss;
+ int err;
+
+ rss = kvzalloc(sizeof(*rss), GFP_KERNEL);
+ if (!rss)
+ return ERR_PTR(-ENOMEM);
+
+ err = mlx5e_rss_params_indir_init(&rss->indir, from->mdev, from->indir.actual_table_size,
+ from->indir.max_table_size);
+ if (err)
+ goto err_free_rss;
+
+ err = mlx5e_rss_copy(rss, from);
+ if (err)
+ goto err_free_indir;
+
+ return rss;
+
+err_free_indir:
+ mlx5e_rss_params_indir_cleanup(&rss->indir);
+err_free_rss:
kvfree(rss);
+ return ERR_PTR(err);
}
static void mlx5e_rss_params_init(struct mlx5e_rss *rss)
@@ -282,28 +343,43 @@ static int mlx5e_rss_update_tirs(struct mlx5e_rss *rss)
return retval;
}
-int mlx5e_rss_init_no_tirs(struct mlx5e_rss *rss, struct mlx5_core_dev *mdev,
- bool inner_ft_support, u32 drop_rqn)
+static int mlx5e_rss_init_no_tirs(struct mlx5e_rss *rss)
{
- rss->mdev = mdev;
- rss->inner_ft_support = inner_ft_support;
- rss->drop_rqn = drop_rqn;
-
mlx5e_rss_params_init(rss);
refcount_set(&rss->refcnt, 1);
- return mlx5e_rqt_init_direct(&rss->rqt, mdev, true, drop_rqn);
+ return mlx5e_rqt_init_direct(&rss->rqt, rss->mdev, true,
+ rss->drop_rqn, rss->indir.max_table_size);
}
-int mlx5e_rss_init(struct mlx5e_rss *rss, struct mlx5_core_dev *mdev,
- bool inner_ft_support, u32 drop_rqn,
- const struct mlx5e_packet_merge_param *init_pkt_merge_param)
+struct mlx5e_rss *mlx5e_rss_init(struct mlx5_core_dev *mdev, bool inner_ft_support, u32 drop_rqn,
+ const struct mlx5e_packet_merge_param *init_pkt_merge_param,
+ enum mlx5e_rss_init_type type, unsigned int nch,
+ unsigned int max_nch)
{
+ struct mlx5e_rss *rss;
int err;
- err = mlx5e_rss_init_no_tirs(rss, mdev, inner_ft_support, drop_rqn);
+ rss = kvzalloc(sizeof(*rss), GFP_KERNEL);
+ if (!rss)
+ return ERR_PTR(-ENOMEM);
+
+ err = mlx5e_rss_params_indir_init(&rss->indir, mdev,
+ mlx5e_rqt_size(mdev, nch),
+ mlx5e_rqt_size(mdev, max_nch));
+ if (err)
+ goto err_free_rss;
+
+ rss->mdev = mdev;
+ rss->inner_ft_support = inner_ft_support;
+ rss->drop_rqn = drop_rqn;
+
+ err = mlx5e_rss_init_no_tirs(rss);
if (err)
- goto err_out;
+ goto err_free_indir;
+
+ if (type == MLX5E_RSS_INIT_NO_TIRS)
+ goto out;
err = mlx5e_rss_create_tirs(rss, init_pkt_merge_param, false);
if (err)
@@ -315,14 +391,18 @@ int mlx5e_rss_init(struct mlx5e_rss *rss, struct mlx5_core_dev *mdev,
goto err_destroy_tirs;
}
- return 0;
+out:
+ return rss;
err_destroy_tirs:
mlx5e_rss_destroy_tirs(rss, false);
err_destroy_rqt:
mlx5e_rqt_destroy(&rss->rqt);
-err_out:
- return err;
+err_free_indir:
+ mlx5e_rss_params_indir_cleanup(&rss->indir);
+err_free_rss:
+ kvfree(rss);
+ return ERR_PTR(err);
}
int mlx5e_rss_cleanup(struct mlx5e_rss *rss)
@@ -336,6 +416,8 @@ int mlx5e_rss_cleanup(struct mlx5e_rss *rss)
mlx5e_rss_destroy_tirs(rss, true);
mlx5e_rqt_destroy(&rss->rqt);
+ mlx5e_rss_params_indir_cleanup(&rss->indir);
+ kvfree(rss);
return 0;
}
@@ -470,11 +552,9 @@ inner_tir:
int mlx5e_rss_get_rxfh(struct mlx5e_rss *rss, u32 *indir, u8 *key, u8 *hfunc)
{
- unsigned int i;
-
if (indir)
- for (i = 0; i < MLX5E_INDIR_RQT_SIZE; i++)
- indir[i] = rss->indir.table[i];
+ memcpy(indir, rss->indir.table,
+ rss->indir.actual_table_size * sizeof(*rss->indir.table));
if (key)
memcpy(key, rss->hash.toeplitz_hash_key,
@@ -495,11 +575,9 @@ int mlx5e_rss_set_rxfh(struct mlx5e_rss *rss, const u32 *indir,
struct mlx5e_rss *old_rss;
int err = 0;
- old_rss = mlx5e_rss_alloc();
- if (!old_rss)
- return -ENOMEM;
-
- *old_rss = *rss;
+ old_rss = mlx5e_rss_init_copy(rss);
+ if (IS_ERR(old_rss))
+ return PTR_ERR(old_rss);
if (hfunc && *hfunc != rss->hash.hfunc) {
switch (*hfunc) {
@@ -523,18 +601,16 @@ int mlx5e_rss_set_rxfh(struct mlx5e_rss *rss, const u32 *indir,
}
if (indir) {
- unsigned int i;
-
changed_indir = true;
- for (i = 0; i < MLX5E_INDIR_RQT_SIZE; i++)
- rss->indir.table[i] = indir[i];
+ memcpy(rss->indir.table, indir,
+ rss->indir.actual_table_size * sizeof(*rss->indir.table));
}
if (changed_indir && rss->enabled) {
err = mlx5e_rss_apply(rss, rqns, num_rqns);
if (err) {
- *rss = *old_rss;
+ mlx5e_rss_copy(rss, old_rss);
goto out;
}
}
@@ -543,7 +619,9 @@ int mlx5e_rss_set_rxfh(struct mlx5e_rss *rss, const u32 *indir,
mlx5e_rss_update_tirs(rss);
out:
- mlx5e_rss_free(old_rss);
+ mlx5e_rss_params_indir_cleanup(&old_rss->indir);
+ kvfree(old_rss);
+
return err;
}
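The rollback path in mlx5e_rss_set_rxfh() above relies on mlx5e_rss_init_copy()/mlx5e_rss_copy(), whose key trick is a plain struct assignment that must not clobber the destination's separately allocated indirection table. A stripped-down sketch of that idiom with example types (not the driver structs; the table sizes are assumed to match, as the real code verifies before copying):

#include <string.h>

struct example_rss {
        unsigned int table_size;
        unsigned int *table;    /* allocated separately from the struct */
        unsigned int hfunc;
};

static void example_rss_copy(struct example_rss *to, const struct example_rss *from)
{
        unsigned int *dst_table = to->table;

        *to = *from;            /* copies scalars such as hfunc and table_size */
        to->table = dst_table;  /* keep our own buffer ... */
        memcpy(to->table, from->table,
               from->table_size * sizeof(*from->table)); /* ... then clone its contents */
}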
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rss.h b/drivers/net/ethernet/mellanox/mlx5/core/en/rss.h
index c6b216416344..d1d0bc350e92 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rss.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rss.h
@@ -8,18 +8,24 @@
#include "tir.h"
#include "fs.h"
+enum mlx5e_rss_init_type {
+ MLX5E_RSS_INIT_NO_TIRS = 0,
+ MLX5E_RSS_INIT_TIRS
+};
+
struct mlx5e_rss_params_traffic_type
mlx5e_rss_get_default_tt_config(enum mlx5_traffic_types tt);
struct mlx5e_rss;
-struct mlx5e_rss *mlx5e_rss_alloc(void);
-void mlx5e_rss_free(struct mlx5e_rss *rss);
-int mlx5e_rss_init(struct mlx5e_rss *rss, struct mlx5_core_dev *mdev,
- bool inner_ft_support, u32 drop_rqn,
- const struct mlx5e_packet_merge_param *init_pkt_merge_param);
-int mlx5e_rss_init_no_tirs(struct mlx5e_rss *rss, struct mlx5_core_dev *mdev,
- bool inner_ft_support, u32 drop_rqn);
+int mlx5e_rss_params_indir_init(struct mlx5e_rss_params_indir *indir, struct mlx5_core_dev *mdev,
+ u32 actual_table_size, u32 max_table_size);
+void mlx5e_rss_params_indir_cleanup(struct mlx5e_rss_params_indir *indir);
+void mlx5e_rss_params_indir_modify_actual_size(struct mlx5e_rss *rss, u32 num_channels);
+struct mlx5e_rss *mlx5e_rss_init(struct mlx5_core_dev *mdev, bool inner_ft_support, u32 drop_rqn,
+ const struct mlx5e_packet_merge_param *init_pkt_merge_param,
+ enum mlx5e_rss_init_type type, unsigned int nch,
+ unsigned int max_nch);
int mlx5e_rss_cleanup(struct mlx5e_rss *rss);
void mlx5e_rss_refcnt_inc(struct mlx5e_rss *rss);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.c
index 56e6b8c7501f..b23e224e3763 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.c
@@ -18,7 +18,7 @@ struct mlx5e_rx_res {
struct mlx5e_rss *rss[MLX5E_MAX_NUM_RSS];
bool rss_active;
- u32 rss_rqns[MLX5E_INDIR_RQT_SIZE];
+ u32 *rss_rqns;
unsigned int rss_nch;
struct {
@@ -34,41 +34,42 @@ struct mlx5e_rx_res {
/* API for rx_res_rss_* */
+void mlx5e_rx_res_rss_update_num_channels(struct mlx5e_rx_res *res, u32 nch)
+{
+ int i;
+
+ for (i = 0; i < MLX5E_MAX_NUM_RSS; i++) {
+ if (res->rss[i])
+ mlx5e_rss_params_indir_modify_actual_size(res->rss[i], nch);
+ }
+}
+
static int mlx5e_rx_res_rss_init_def(struct mlx5e_rx_res *res,
unsigned int init_nch)
{
bool inner_ft_support = res->features & MLX5E_RX_RES_FEATURE_INNER_FT;
struct mlx5e_rss *rss;
- int err;
if (WARN_ON(res->rss[0]))
return -EINVAL;
- rss = mlx5e_rss_alloc();
- if (!rss)
- return -ENOMEM;
-
- err = mlx5e_rss_init(rss, res->mdev, inner_ft_support, res->drop_rqn,
- &res->pkt_merge_param);
- if (err)
- goto err_rss_free;
+ rss = mlx5e_rss_init(res->mdev, inner_ft_support, res->drop_rqn,
+ &res->pkt_merge_param, MLX5E_RSS_INIT_TIRS, init_nch, res->max_nch);
+ if (IS_ERR(rss))
+ return PTR_ERR(rss);
mlx5e_rss_set_indir_uniform(rss, init_nch);
res->rss[0] = rss;
return 0;
-
-err_rss_free:
- mlx5e_rss_free(rss);
- return err;
}
int mlx5e_rx_res_rss_init(struct mlx5e_rx_res *res, u32 *rss_idx, unsigned int init_nch)
{
bool inner_ft_support = res->features & MLX5E_RX_RES_FEATURE_INNER_FT;
struct mlx5e_rss *rss;
- int err, i;
+ int i;
for (i = 1; i < MLX5E_MAX_NUM_RSS; i++)
if (!res->rss[i])
@@ -77,13 +78,11 @@ int mlx5e_rx_res_rss_init(struct mlx5e_rx_res *res, u32 *rss_idx, unsigned int i
if (i == MLX5E_MAX_NUM_RSS)
return -ENOSPC;
- rss = mlx5e_rss_alloc();
- if (!rss)
- return -ENOMEM;
-
- err = mlx5e_rss_init_no_tirs(rss, res->mdev, inner_ft_support, res->drop_rqn);
- if (err)
- goto err_rss_free;
+ rss = mlx5e_rss_init(res->mdev, inner_ft_support, res->drop_rqn,
+ &res->pkt_merge_param, MLX5E_RSS_INIT_NO_TIRS, init_nch,
+ res->max_nch);
+ if (IS_ERR(rss))
+ return PTR_ERR(rss);
mlx5e_rss_set_indir_uniform(rss, init_nch);
if (res->rss_active)
@@ -93,10 +92,6 @@ int mlx5e_rx_res_rss_init(struct mlx5e_rx_res *res, u32 *rss_idx, unsigned int i
*rss_idx = i;
return 0;
-
-err_rss_free:
- mlx5e_rss_free(rss);
- return err;
}
static int __mlx5e_rx_res_rss_destroy(struct mlx5e_rx_res *res, u32 rss_idx)
@@ -108,7 +103,6 @@ static int __mlx5e_rx_res_rss_destroy(struct mlx5e_rx_res *res, u32 rss_idx)
if (err)
return err;
- mlx5e_rss_free(rss);
res->rss[rss_idx] = NULL;
return 0;
@@ -284,9 +278,27 @@ struct mlx5e_rss *mlx5e_rx_res_rss_get(struct mlx5e_rx_res *res, u32 rss_idx)
/* End of API rx_res_rss_* */
-struct mlx5e_rx_res *mlx5e_rx_res_alloc(void)
+static void mlx5e_rx_res_free(struct mlx5e_rx_res *res)
{
- return kvzalloc(sizeof(struct mlx5e_rx_res), GFP_KERNEL);
+ kvfree(res->rss_rqns);
+ kvfree(res);
+}
+
+static struct mlx5e_rx_res *mlx5e_rx_res_alloc(struct mlx5_core_dev *mdev, unsigned int max_nch)
+{
+ struct mlx5e_rx_res *rx_res;
+
+ rx_res = kvzalloc(sizeof(*rx_res), GFP_KERNEL);
+ if (!rx_res)
+ return NULL;
+
+ rx_res->rss_rqns = kvcalloc(max_nch, sizeof(*rx_res->rss_rqns), GFP_KERNEL);
+ if (!rx_res->rss_rqns) {
+ kvfree(rx_res);
+ return NULL;
+ }
+
+ return rx_res;
}
static int mlx5e_rx_res_channels_init(struct mlx5e_rx_res *res)
@@ -308,7 +320,8 @@ static int mlx5e_rx_res_channels_init(struct mlx5e_rx_res *res)
for (ix = 0; ix < res->max_nch; ix++) {
err = mlx5e_rqt_init_direct(&res->channels[ix].direct_rqt,
- res->mdev, false, res->drop_rqn);
+ res->mdev, false, res->drop_rqn,
+ mlx5e_rqt_size(res->mdev, res->max_nch));
if (err) {
mlx5_core_warn(res->mdev, "Failed to create a direct RQT: err = %d, ix = %u\n",
err, ix);
@@ -362,7 +375,8 @@ static int mlx5e_rx_res_ptp_init(struct mlx5e_rx_res *res)
if (!builder)
return -ENOMEM;
- err = mlx5e_rqt_init_direct(&res->ptp.rqt, res->mdev, false, res->drop_rqn);
+ err = mlx5e_rqt_init_direct(&res->ptp.rqt, res->mdev, false, res->drop_rqn,
+ mlx5e_rqt_size(res->mdev, res->max_nch));
if (err)
goto out;
@@ -404,13 +418,19 @@ static void mlx5e_rx_res_ptp_destroy(struct mlx5e_rx_res *res)
mlx5e_rqt_destroy(&res->ptp.rqt);
}
-int mlx5e_rx_res_init(struct mlx5e_rx_res *res, struct mlx5_core_dev *mdev,
- enum mlx5e_rx_res_features features, unsigned int max_nch,
- u32 drop_rqn, const struct mlx5e_packet_merge_param *init_pkt_merge_param,
- unsigned int init_nch)
+struct mlx5e_rx_res *
+mlx5e_rx_res_create(struct mlx5_core_dev *mdev, enum mlx5e_rx_res_features features,
+ unsigned int max_nch, u32 drop_rqn,
+ const struct mlx5e_packet_merge_param *init_pkt_merge_param,
+ unsigned int init_nch)
{
+ struct mlx5e_rx_res *res;
int err;
+ res = mlx5e_rx_res_alloc(mdev, max_nch);
+ if (!res)
+ return ERR_PTR(-ENOMEM);
+
res->mdev = mdev;
res->features = features;
res->max_nch = max_nch;
@@ -421,7 +441,7 @@ int mlx5e_rx_res_init(struct mlx5e_rx_res *res, struct mlx5_core_dev *mdev,
err = mlx5e_rx_res_rss_init_def(res, init_nch);
if (err)
- goto err_out;
+ goto err_rx_res_free;
err = mlx5e_rx_res_channels_init(res);
if (err)
@@ -431,14 +451,15 @@ int mlx5e_rx_res_init(struct mlx5e_rx_res *res, struct mlx5_core_dev *mdev,
if (err)
goto err_channels_destroy;
- return 0;
+ return res;
err_channels_destroy:
mlx5e_rx_res_channels_destroy(res);
err_rss_destroy:
__mlx5e_rx_res_rss_destroy(res, 0);
-err_out:
- return err;
+err_rx_res_free:
+ mlx5e_rx_res_free(res);
+ return ERR_PTR(err);
}
void mlx5e_rx_res_destroy(struct mlx5e_rx_res *res)
@@ -446,11 +467,7 @@ void mlx5e_rx_res_destroy(struct mlx5e_rx_res *res)
mlx5e_rx_res_ptp_destroy(res);
mlx5e_rx_res_channels_destroy(res);
mlx5e_rx_res_rss_destroy_all(res);
-}
-
-void mlx5e_rx_res_free(struct mlx5e_rx_res *res)
-{
- kvfree(res);
+ mlx5e_rx_res_free(res);
}
u32 mlx5e_rx_res_get_tirn_direct(struct mlx5e_rx_res *res, unsigned int ix)
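The rx_res API above collapses the old alloc() + init() pair (and the matching free()) into a single create()/destroy() pair that reports failure through ERR_PTR(). Roughly the caller shape that en_main.c and en_rep.c adopt later in this series; the enclosing function here is a hypothetical sketch, not driver code:

static int example_init_rx_res(struct mlx5e_priv *priv, unsigned int init_nch)
{
        struct mlx5e_rx_res *res;

        res = mlx5e_rx_res_create(priv->mdev, MLX5E_RX_RES_FEATURE_PTP,
                                  priv->max_nch, priv->drop_rq.rqn,
                                  &priv->channels.params.packet_merge, init_nch);
        if (IS_ERR(res))
                return PTR_ERR(res); /* nothing to free, create() cleaned up after itself */

        priv->rx_res = res;
        return 0;
        /* teardown is the mirror image: mlx5e_rx_res_destroy() now also frees the object */
}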
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.h b/drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.h
index 580fe8bc3cd2..82aaba8a82b3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rx_res.h
@@ -21,13 +21,12 @@ enum mlx5e_rx_res_features {
};
/* Setup */
-struct mlx5e_rx_res *mlx5e_rx_res_alloc(void);
-int mlx5e_rx_res_init(struct mlx5e_rx_res *res, struct mlx5_core_dev *mdev,
- enum mlx5e_rx_res_features features, unsigned int max_nch,
- u32 drop_rqn, const struct mlx5e_packet_merge_param *init_pkt_merge_param,
- unsigned int init_nch);
+struct mlx5e_rx_res *
+mlx5e_rx_res_create(struct mlx5_core_dev *mdev, enum mlx5e_rx_res_features features,
+ unsigned int max_nch, u32 drop_rqn,
+ const struct mlx5e_packet_merge_param *init_pkt_merge_param,
+ unsigned int init_nch);
void mlx5e_rx_res_destroy(struct mlx5e_rx_res *res);
-void mlx5e_rx_res_free(struct mlx5e_rx_res *res);
/* TIRN getters for flow steering */
u32 mlx5e_rx_res_get_tirn_direct(struct mlx5e_rx_res *res, unsigned int ix);
@@ -60,6 +59,7 @@ int mlx5e_rx_res_rss_destroy(struct mlx5e_rx_res *res, u32 rss_idx);
int mlx5e_rx_res_rss_cnt(struct mlx5e_rx_res *res);
int mlx5e_rx_res_rss_index(struct mlx5e_rx_res *res, struct mlx5e_rss *rss);
struct mlx5e_rss *mlx5e_rx_res_rss_get(struct mlx5e_rx_res *res, u32 rss_idx);
+void mlx5e_rx_res_rss_update_num_channels(struct mlx5e_rx_res *res, u32 nch);
/* Workaround for hairpin */
struct mlx5e_rss_params_hash mlx5e_rx_res_get_current_hash(struct mlx5e_rx_res *res);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
index 8bed17d8fe56..7decc81ed33a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
@@ -893,7 +893,7 @@ void mlx5e_xdp_rx_poll_complete(struct mlx5e_rq *rq)
mlx5e_xmit_xdp_doorbell(xdpsq);
if (test_bit(MLX5E_RQ_FLAG_XDP_REDIRECT, rq->flags)) {
- xdp_do_flush_map();
+ xdp_do_flush();
__clear_bit(MLX5E_RQ_FLAG_XDP_REDIRECT, rq->flags);
}
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
index 7d4ceb9b9c16..655496598c68 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
@@ -56,7 +56,7 @@ static struct mlx5e_ipsec_pol_entry *to_ipsec_pol_entry(struct xfrm_policy *x)
return (struct mlx5e_ipsec_pol_entry *)x->xdo.offload_handle;
}
-static void mlx5e_ipsec_handle_tx_limit(struct work_struct *_work)
+static void mlx5e_ipsec_handle_sw_limits(struct work_struct *_work)
{
struct mlx5e_ipsec_dwork *dwork =
container_of(_work, struct mlx5e_ipsec_dwork, dwork.work);
@@ -486,9 +486,15 @@ static int mlx5e_xfrm_validate_state(struct mlx5_core_dev *mdev,
return -EINVAL;
}
- if (x->lft.hard_byte_limit != XFRM_INF ||
- x->lft.soft_byte_limit != XFRM_INF) {
- NL_SET_ERR_MSG_MOD(extack, "Device doesn't support limits in bytes");
+ if (x->lft.soft_byte_limit >= x->lft.hard_byte_limit &&
+ x->lft.hard_byte_limit != XFRM_INF) {
+ /* XFRM stack doesn't prevent such configuration :(. */
+ NL_SET_ERR_MSG_MOD(extack, "Hard byte limit must be greater than soft one");
+ return -EINVAL;
+ }
+
+ if (!x->lft.soft_byte_limit || !x->lft.hard_byte_limit) {
+ NL_SET_ERR_MSG_MOD(extack, "Soft/hard byte limits can't be 0");
return -EINVAL;
}
@@ -624,11 +630,10 @@ static int mlx5e_ipsec_create_dwork(struct mlx5e_ipsec_sa_entry *sa_entry)
if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
return 0;
- if (x->xso.dir != XFRM_DEV_OFFLOAD_OUT)
- return 0;
-
if (x->lft.soft_packet_limit == XFRM_INF &&
- x->lft.hard_packet_limit == XFRM_INF)
+ x->lft.hard_packet_limit == XFRM_INF &&
+ x->lft.soft_byte_limit == XFRM_INF &&
+ x->lft.hard_byte_limit == XFRM_INF)
return 0;
dwork = kzalloc(sizeof(*dwork), GFP_KERNEL);
@@ -636,7 +641,7 @@ static int mlx5e_ipsec_create_dwork(struct mlx5e_ipsec_sa_entry *sa_entry)
return -ENOMEM;
dwork->sa_entry = sa_entry;
- INIT_DELAYED_WORK(&dwork->dwork, mlx5e_ipsec_handle_tx_limit);
+ INIT_DELAYED_WORK(&dwork->dwork, mlx5e_ipsec_handle_sw_limits);
sa_entry->dwork = dwork;
return 0;
}
@@ -850,6 +855,7 @@ void mlx5e_ipsec_init(struct mlx5e_priv *priv)
xa_init_flags(&ipsec->sadb, XA_FLAGS_ALLOC);
ipsec->mdev = priv->mdev;
+ init_completion(&ipsec->comp);
ipsec->wq = alloc_workqueue("mlx5e_ipsec: %s", WQ_UNBOUND, 0,
priv->netdev->name);
if (!ipsec->wq)
@@ -870,7 +876,7 @@ void mlx5e_ipsec_init(struct mlx5e_priv *priv)
}
ipsec->is_uplink_rep = mlx5e_is_uplink_rep(priv);
- ret = mlx5e_accel_ipsec_fs_init(ipsec);
+ ret = mlx5e_accel_ipsec_fs_init(ipsec, &priv->devcom);
if (ret)
goto err_fs_init;
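The validation change above replaces the blanket "no byte limits" rejection with two sanity checks, so byte-based soft/hard lifetimes can now be offloaded. The accepted and rejected combinations reduce to a small predicate, sketched here on the assumption that XFRM_INF is the all-ones sentinel from the XFRM uapi header:

#include <linux/xfrm.h> /* XFRM_INF */
#include <stdbool.h>

static bool example_byte_limits_ok(__u64 soft, __u64 hard)
{
        if (hard != XFRM_INF && soft >= hard)
                return false;   /* a finite hard limit must exceed the soft one */
        if (!soft || !hard)
                return false;   /* zero limits are rejected outright */
        return true;            /* includes soft = hard = XFRM_INF (no dwork needed) */
}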
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
index 9e7c42c2f77b..8f4a37bceaf4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.h
@@ -38,6 +38,7 @@
#include <net/xfrm.h>
#include <linux/idr.h>
#include "lib/aso.h"
+#include "lib/devcom.h"
#define MLX5E_IPSEC_SADB_RX_BITS 10
#define MLX5E_IPSEC_ESN_SCOPE_MID 0x80000000L
@@ -221,12 +222,20 @@ struct mlx5e_ipsec_tx_create_attr {
enum mlx5_flow_namespace_type chains_ns;
};
+struct mlx5e_ipsec_mpv_work {
+ int event;
+ struct work_struct work;
+ struct mlx5e_priv *slave_priv;
+ struct mlx5e_priv *master_priv;
+};
+
struct mlx5e_ipsec {
struct mlx5_core_dev *mdev;
struct xarray sadb;
struct mlx5e_ipsec_sw_stats sw_stats;
struct mlx5e_ipsec_hw_stats hw_stats;
struct workqueue_struct *wq;
+ struct completion comp;
struct mlx5e_flow_steering *fs;
struct mlx5e_ipsec_rx *rx_ipv4;
struct mlx5e_ipsec_rx *rx_ipv6;
@@ -238,6 +247,7 @@ struct mlx5e_ipsec {
struct notifier_block netevent_nb;
struct mlx5_ipsec_fs *roce;
u8 is_uplink_rep: 1;
+ struct mlx5e_ipsec_mpv_work mpv_work;
};
struct mlx5e_ipsec_esn_state {
@@ -302,7 +312,7 @@ void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv);
void mlx5e_ipsec_build_netdev(struct mlx5e_priv *priv);
void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec);
-int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec);
+int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec, struct mlx5_devcom_comp_dev **devcom);
int mlx5e_accel_ipsec_fs_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry);
void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_ipsec_sa_entry *sa_entry);
int mlx5e_accel_ipsec_fs_add_pol(struct mlx5e_ipsec_pol_entry *pol_entry);
@@ -328,6 +338,10 @@ void mlx5e_accel_ipsec_fs_read_stats(struct mlx5e_priv *priv,
void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry,
struct mlx5_accel_esp_xfrm_attrs *attrs);
+void mlx5e_ipsec_handle_mpv_event(int event, struct mlx5e_priv *slave_priv,
+ struct mlx5e_priv *master_priv);
+void mlx5e_ipsec_send_event(struct mlx5e_priv *priv, int event);
+
static inline struct mlx5_core_dev *
mlx5e_ipsec_sa2dev(struct mlx5e_ipsec_sa_entry *sa_entry)
{
@@ -363,6 +377,15 @@ static inline u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev)
{
return 0;
}
+
+static inline void mlx5e_ipsec_handle_mpv_event(int event, struct mlx5e_priv *slave_priv,
+ struct mlx5e_priv *master_priv)
+{
+}
+
+static inline void mlx5e_ipsec_send_event(struct mlx5e_priv *priv, int event)
+{
+}
#endif
#endif /* __MLX5E_IPSEC_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
index 7dba4221993f..f41c976dc33f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
@@ -229,6 +229,83 @@ out:
return err;
}
+static void handle_ipsec_rx_bringup(struct mlx5e_ipsec *ipsec, u32 family)
+{
+ struct mlx5e_ipsec_rx *rx = ipsec_rx(ipsec, family, XFRM_DEV_OFFLOAD_PACKET);
+ struct mlx5_flow_namespace *ns = mlx5e_fs_get_ns(ipsec->fs, false);
+ struct mlx5_flow_destination old_dest, new_dest;
+
+ old_dest = mlx5_ttc_get_default_dest(mlx5e_fs_get_ttc(ipsec->fs, false),
+ family2tt(family));
+
+ mlx5_ipsec_fs_roce_rx_create(ipsec->mdev, ipsec->roce, ns, &old_dest, family,
+ MLX5E_ACCEL_FS_ESP_FT_ROCE_LEVEL, MLX5E_NIC_PRIO);
+
+ new_dest.ft = mlx5_ipsec_fs_roce_ft_get(ipsec->roce, family);
+ new_dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ mlx5_modify_rule_destination(rx->status.rule, &new_dest, &old_dest);
+ mlx5_modify_rule_destination(rx->sa.rule, &new_dest, &old_dest);
+}
+
+static void handle_ipsec_rx_cleanup(struct mlx5e_ipsec *ipsec, u32 family)
+{
+ struct mlx5e_ipsec_rx *rx = ipsec_rx(ipsec, family, XFRM_DEV_OFFLOAD_PACKET);
+ struct mlx5_flow_destination old_dest, new_dest;
+
+ old_dest.ft = mlx5_ipsec_fs_roce_ft_get(ipsec->roce, family);
+ old_dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ new_dest = mlx5_ttc_get_default_dest(mlx5e_fs_get_ttc(ipsec->fs, false),
+ family2tt(family));
+ mlx5_modify_rule_destination(rx->sa.rule, &new_dest, &old_dest);
+ mlx5_modify_rule_destination(rx->status.rule, &new_dest, &old_dest);
+
+ mlx5_ipsec_fs_roce_rx_destroy(ipsec->roce, family, ipsec->mdev);
+}
+
+static void ipsec_mpv_work_handler(struct work_struct *_work)
+{
+ struct mlx5e_ipsec_mpv_work *work = container_of(_work, struct mlx5e_ipsec_mpv_work, work);
+ struct mlx5e_ipsec *ipsec = work->slave_priv->ipsec;
+
+ switch (work->event) {
+ case MPV_DEVCOM_IPSEC_MASTER_UP:
+ mutex_lock(&ipsec->tx->ft.mutex);
+ if (ipsec->tx->ft.refcnt)
+ mlx5_ipsec_fs_roce_tx_create(ipsec->mdev, ipsec->roce, ipsec->tx->ft.pol,
+ true);
+ mutex_unlock(&ipsec->tx->ft.mutex);
+
+ mutex_lock(&ipsec->rx_ipv4->ft.mutex);
+ if (ipsec->rx_ipv4->ft.refcnt)
+ handle_ipsec_rx_bringup(ipsec, AF_INET);
+ mutex_unlock(&ipsec->rx_ipv4->ft.mutex);
+
+ mutex_lock(&ipsec->rx_ipv6->ft.mutex);
+ if (ipsec->rx_ipv6->ft.refcnt)
+ handle_ipsec_rx_bringup(ipsec, AF_INET6);
+ mutex_unlock(&ipsec->rx_ipv6->ft.mutex);
+ break;
+ case MPV_DEVCOM_IPSEC_MASTER_DOWN:
+ mutex_lock(&ipsec->tx->ft.mutex);
+ if (ipsec->tx->ft.refcnt)
+ mlx5_ipsec_fs_roce_tx_destroy(ipsec->roce, ipsec->mdev);
+ mutex_unlock(&ipsec->tx->ft.mutex);
+
+ mutex_lock(&ipsec->rx_ipv4->ft.mutex);
+ if (ipsec->rx_ipv4->ft.refcnt)
+ handle_ipsec_rx_cleanup(ipsec, AF_INET);
+ mutex_unlock(&ipsec->rx_ipv4->ft.mutex);
+
+ mutex_lock(&ipsec->rx_ipv6->ft.mutex);
+ if (ipsec->rx_ipv6->ft.refcnt)
+ handle_ipsec_rx_cleanup(ipsec, AF_INET6);
+ mutex_unlock(&ipsec->rx_ipv6->ft.mutex);
+ break;
+ }
+
+ complete(&work->master_priv->ipsec->comp);
+}
+
static void ipsec_rx_ft_disconnect(struct mlx5e_ipsec *ipsec, u32 family)
{
struct mlx5_ttc_table *ttc = mlx5e_fs_get_ttc(ipsec->fs, false);
@@ -264,7 +341,7 @@ static void rx_destroy(struct mlx5_core_dev *mdev, struct mlx5e_ipsec *ipsec,
}
mlx5_destroy_flow_table(rx->ft.status);
- mlx5_ipsec_fs_roce_rx_destroy(ipsec->roce, family);
+ mlx5_ipsec_fs_roce_rx_destroy(ipsec->roce, family, mdev);
}
static void ipsec_rx_create_attr_set(struct mlx5e_ipsec *ipsec,
@@ -422,7 +499,7 @@ err_fs_ft:
err_add:
mlx5_destroy_flow_table(rx->ft.status);
err_fs_ft_status:
- mlx5_ipsec_fs_roce_rx_destroy(ipsec->roce, family);
+ mlx5_ipsec_fs_roce_rx_destroy(ipsec->roce, family, mdev);
return err;
}
@@ -562,7 +639,7 @@ err_rule:
static void tx_destroy(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_tx *tx,
struct mlx5_ipsec_fs *roce)
{
- mlx5_ipsec_fs_roce_tx_destroy(roce);
+ mlx5_ipsec_fs_roce_tx_destroy(roce, ipsec->mdev);
if (tx->chains) {
ipsec_chains_destroy(tx->chains);
} else {
@@ -665,7 +742,7 @@ static int tx_create(struct mlx5e_ipsec *ipsec, struct mlx5e_ipsec_tx *tx,
}
connect_roce:
- err = mlx5_ipsec_fs_roce_tx_create(mdev, roce, tx->ft.pol);
+ err = mlx5_ipsec_fs_roce_tx_create(mdev, roce, tx->ft.pol, false);
if (err)
goto err_roce;
return 0;
@@ -1249,15 +1326,17 @@ static int rx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry)
setup_fte_no_frags(spec);
setup_fte_upper_proto_match(spec, &attrs->upspec);
- if (rx != ipsec->rx_esw)
- err = setup_modify_header(ipsec, attrs->type,
- sa_entry->ipsec_obj_id | BIT(31),
- XFRM_DEV_OFFLOAD_IN, &flow_act);
- else
- err = mlx5_esw_ipsec_rx_setup_modify_header(sa_entry, &flow_act);
+ if (!attrs->drop) {
+ if (rx != ipsec->rx_esw)
+ err = setup_modify_header(ipsec, attrs->type,
+ sa_entry->ipsec_obj_id | BIT(31),
+ XFRM_DEV_OFFLOAD_IN, &flow_act);
+ else
+ err = mlx5_esw_ipsec_rx_setup_modify_header(sa_entry, &flow_act);
- if (err)
- goto err_mod_header;
+ if (err)
+ goto err_mod_header;
+ }
switch (attrs->type) {
case XFRM_DEV_OFFLOAD_PACKET:
@@ -1307,7 +1386,8 @@ err_add_cnt:
if (flow_act.pkt_reformat)
mlx5_packet_reformat_dealloc(mdev, flow_act.pkt_reformat);
err_pkt_reformat:
- mlx5_modify_header_dealloc(mdev, flow_act.modify_hdr);
+ if (flow_act.modify_hdr)
+ mlx5_modify_header_dealloc(mdev, flow_act.modify_hdr);
err_mod_header:
kvfree(spec);
err_alloc:
@@ -1805,7 +1885,8 @@ void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_ipsec_sa_entry *sa_entry)
return;
}
- mlx5_modify_header_dealloc(mdev, ipsec_rule->modify_hdr);
+ if (ipsec_rule->modify_hdr)
+ mlx5_modify_header_dealloc(mdev, ipsec_rule->modify_hdr);
mlx5_esw_ipsec_rx_id_mapping_remove(sa_entry);
rx_ft_put(sa_entry->ipsec, sa_entry->attrs.family, sa_entry->attrs.type);
}
@@ -1888,7 +1969,8 @@ void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec)
}
}
-int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec)
+int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec,
+ struct mlx5_devcom_comp_dev **devcom)
{
struct mlx5_core_dev *mdev = ipsec->mdev;
struct mlx5_flow_namespace *ns, *ns_esw;
@@ -1940,7 +2022,9 @@ int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec)
ipsec->tx_esw->ns = ns_esw;
xa_init_flags(&ipsec->rx_esw->ipsec_obj_id_map, XA_FLAGS_ALLOC1);
} else if (mlx5_ipsec_device_caps(mdev) & MLX5_IPSEC_CAP_ROCE) {
- ipsec->roce = mlx5_ipsec_fs_roce_init(mdev);
+ ipsec->roce = mlx5_ipsec_fs_roce_init(mdev, devcom);
+ } else {
+ mlx5_core_warn(mdev, "IPsec was initialized without RoCE support\n");
}
return 0;
@@ -1987,3 +2071,33 @@ bool mlx5e_ipsec_fs_tunnel_enabled(struct mlx5e_ipsec_sa_entry *sa_entry)
return rx->allow_tunnel_mode;
}
+
+void mlx5e_ipsec_handle_mpv_event(int event, struct mlx5e_priv *slave_priv,
+ struct mlx5e_priv *master_priv)
+{
+ struct mlx5e_ipsec_mpv_work *work;
+
+ reinit_completion(&master_priv->ipsec->comp);
+
+ if (!slave_priv->ipsec) {
+ complete(&master_priv->ipsec->comp);
+ return;
+ }
+
+ work = &slave_priv->ipsec->mpv_work;
+
+ INIT_WORK(&work->work, ipsec_mpv_work_handler);
+ work->event = event;
+ work->slave_priv = slave_priv;
+ work->master_priv = master_priv;
+ queue_work(slave_priv->ipsec->wq, &work->work);
+}
+
+void mlx5e_ipsec_send_event(struct mlx5e_priv *priv, int event)
+{
+ if (!priv->ipsec)
+ return; /* IPsec not supported */
+
+ mlx5_devcom_send_event(priv->devcom, event, event, priv);
+ wait_for_completion(&priv->ipsec->comp);
+}
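mlx5e_ipsec_send_event() and mlx5e_ipsec_handle_mpv_event() above implement a send-then-wait handshake: the caller fires a devcom event, the peer queues ipsec_mpv_work_handler() on its IPsec workqueue, and the worker signals the caller's completion once the RoCE tables have been moved. The generic shape of that handshake, stripped of the IPsec specifics (example names, not the driver code):

struct example_event_work {
        struct work_struct work;
        struct completion *done;        /* owned and init_completion()'ed by the sender */
        int event;
};

static void example_event_handler(struct work_struct *w)
{
        struct example_event_work *ew = container_of(w, struct example_event_work, work);

        /* ... act on ew->event ... */
        complete(ew->done);             /* unblock the sender */
}

static void example_send_event_and_wait(struct workqueue_struct *wq,
                                        struct example_event_work *ew,
                                        struct completion *done, int event)
{
        reinit_completion(done);
        ew->event = event;
        ew->done = done;
        INIT_WORK(&ew->work, example_event_handler);
        queue_work(wq, &ew->work);
        wait_for_completion(done);      /* returns once the peer's worker is done */
}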
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
index 3245d1c9d539..a91f772dc981 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
@@ -5,6 +5,7 @@
#include "en.h"
#include "ipsec.h"
#include "lib/crypto.h"
+#include "lib/ipsec_fs_roce.h"
enum {
MLX5_IPSEC_ASO_REMOVE_FLOW_PKT_CNT_OFFSET,
@@ -63,7 +64,7 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev)
caps |= MLX5_IPSEC_CAP_ESPINUDP;
}
- if (mlx5_get_roce_state(mdev) &&
+ if (mlx5_get_roce_state(mdev) && mlx5_ipsec_fs_is_mpv_roce_supported(mdev) &&
MLX5_CAP_GEN_2(mdev, flow_table_type_2_type) & MLX5_FT_NIC_RX_2_NIC_RX_RDMA &&
MLX5_CAP_GEN_2(mdev, flow_table_type_2_type) & MLX5_FT_NIC_TX_RDMA_2_NIC_TX)
caps |= MLX5_IPSEC_CAP_ROCE;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h
index 9ee014a8ad24..2ed99772f168 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_rxtx.h
@@ -54,7 +54,6 @@ struct mlx5e_accel_tx_ipsec_state {
#ifdef CONFIG_MLX5_EN_IPSEC
-void mlx5e_ipsec_inverse_table_init(void);
void mlx5e_ipsec_set_iv_esn(struct sk_buff *skb, struct xfrm_state *x,
struct xfrm_offload *xo);
void mlx5e_ipsec_set_iv(struct sk_buff *skb, struct xfrm_state *x,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
index dff02434ff45..215261a69255 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
@@ -1247,7 +1247,7 @@ static u32 mlx5e_get_rxfh_key_size(struct net_device *netdev)
u32 mlx5e_ethtool_get_rxfh_indir_size(struct mlx5e_priv *priv)
{
- return MLX5E_INDIR_RQT_SIZE;
+ return mlx5e_rqt_size(priv->mdev, priv->channels.params.num_channels);
}
static u32 mlx5e_get_rxfh_indir_size(struct net_device *netdev)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
index 934b0d5ce1b3..777d311d44ef 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
@@ -1283,9 +1283,7 @@ static int mlx5e_create_inner_ttc_table(struct mlx5e_flow_steering *fs,
mlx5e_set_inner_ttc_params(fs, rx_res, &ttc_params);
fs->inner_ttc = mlx5_create_inner_ttc_table(fs->mdev,
&ttc_params);
- if (IS_ERR(fs->inner_ttc))
- return PTR_ERR(fs->inner_ttc);
- return 0;
+ return PTR_ERR_OR_ZERO(fs->inner_ttc);
}
int mlx5e_create_ttc_table(struct mlx5e_flow_steering *fs,
@@ -1295,9 +1293,7 @@ int mlx5e_create_ttc_table(struct mlx5e_flow_steering *fs,
mlx5e_set_ttc_params(fs, rx_res, &ttc_params, true);
fs->ttc = mlx5_create_ttc_table(fs->mdev, &ttc_params);
- if (IS_ERR(fs->ttc))
- return PTR_ERR(fs->ttc);
- return 0;
+ return PTR_ERR_OR_ZERO(fs->ttc);
}
int mlx5e_create_flow_steering(struct mlx5e_flow_steering *fs,
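The two en_fs.c hunks above are a pure simplification: PTR_ERR_OR_ZERO(), defined alongside IS_ERR()/PTR_ERR() in the kernel's err.h, folds the error-pointer check into one expression, roughly equivalent to:

static inline long example_ptr_err_or_zero(const void *ptr)
{
        if (IS_ERR(ptr))
                return PTR_ERR(ptr);
        return 0;
}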
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index acb40770cf0c..ea58c6917433 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -69,6 +69,7 @@
#include "en/htb.h"
#include "qos.h"
#include "en/trap.h"
+#include "lib/devcom.h"
bool mlx5e_check_fragmented_striding_rq_cap(struct mlx5_core_dev *mdev, u8 page_shift,
enum mlx5e_mpwrq_umr_mode umr_mode)
@@ -178,6 +179,61 @@ static void mlx5e_disable_async_events(struct mlx5e_priv *priv)
mlx5_notifier_unregister(priv->mdev, &priv->events_nb);
}
+static int mlx5e_devcom_event_mpv(int event, void *my_data, void *event_data)
+{
+ struct mlx5e_priv *slave_priv = my_data;
+
+ switch (event) {
+ case MPV_DEVCOM_MASTER_UP:
+ mlx5_devcom_comp_set_ready(slave_priv->devcom, true);
+ break;
+ case MPV_DEVCOM_MASTER_DOWN:
+ /* no need for comp set ready false since we unregister after
+ * and it hurts cleanup flow.
+ */
+ break;
+ case MPV_DEVCOM_IPSEC_MASTER_UP:
+ case MPV_DEVCOM_IPSEC_MASTER_DOWN:
+ mlx5e_ipsec_handle_mpv_event(event, my_data, event_data);
+ break;
+ }
+
+ return 0;
+}
+
+static int mlx5e_devcom_init_mpv(struct mlx5e_priv *priv, u64 *data)
+{
+ priv->devcom = mlx5_devcom_register_component(priv->mdev->priv.devc,
+ MLX5_DEVCOM_MPV,
+ *data,
+ mlx5e_devcom_event_mpv,
+ priv);
+ if (IS_ERR_OR_NULL(priv->devcom))
+ return -EOPNOTSUPP;
+
+ if (mlx5_core_is_mp_master(priv->mdev)) {
+ mlx5_devcom_send_event(priv->devcom, MPV_DEVCOM_MASTER_UP,
+ MPV_DEVCOM_MASTER_UP, priv);
+ mlx5e_ipsec_send_event(priv, MPV_DEVCOM_IPSEC_MASTER_UP);
+ }
+
+ return 0;
+}
+
+static void mlx5e_devcom_cleanup_mpv(struct mlx5e_priv *priv)
+{
+ if (IS_ERR_OR_NULL(priv->devcom))
+ return;
+
+ if (mlx5_core_is_mp_master(priv->mdev)) {
+ mlx5_devcom_send_event(priv->devcom, MPV_DEVCOM_MASTER_DOWN,
+ MPV_DEVCOM_MASTER_DOWN, priv);
+ mlx5e_ipsec_send_event(priv, MPV_DEVCOM_IPSEC_MASTER_DOWN);
+ }
+
+ mlx5_devcom_unregister_component(priv->devcom);
+}
+
static int blocking_event(struct notifier_block *nb, unsigned long event, void *data)
{
struct mlx5e_priv *priv = container_of(nb, struct mlx5e_priv, blocking_events_nb);
@@ -192,6 +248,13 @@ static int blocking_event(struct notifier_block *nb, unsigned long event, void *
return NOTIFY_BAD;
}
break;
+ case MLX5_DRIVER_EVENT_AFFILIATION_DONE:
+ if (mlx5e_devcom_init_mpv(priv, data))
+ return NOTIFY_BAD;
+ break;
+ case MLX5_DRIVER_EVENT_AFFILIATION_REMOVED:
+ mlx5e_devcom_cleanup_mpv(priv);
+ break;
default:
return NOTIFY_DONE;
}
@@ -834,7 +897,7 @@ static int mlx5e_alloc_rq(struct mlx5e_params *params,
struct page_pool_params pp_params = { 0 };
pp_params.order = 0;
- pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV | PP_FLAG_PAGE_FRAG;
+ pp_params.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
pp_params.pool_size = pool_size;
pp_params.nid = node;
pp_params.dev = rq->pdev;
@@ -2885,8 +2948,12 @@ static int mlx5e_num_channels_changed(struct mlx5e_priv *priv)
mlx5e_set_default_xps_cpumasks(priv, &priv->channels.params);
/* This function may be called on attach, before priv->rx_res is created. */
- if (!netif_is_rxfh_configured(priv->netdev) && priv->rx_res)
- mlx5e_rx_res_rss_set_indir_uniform(priv->rx_res, count);
+ if (priv->rx_res) {
+ mlx5e_rx_res_rss_update_num_channels(priv->rx_res, count);
+
+ if (!netif_is_rxfh_configured(priv->netdev))
+ mlx5e_rx_res_rss_set_indir_uniform(priv->rx_res, count);
+ }
return 0;
}
@@ -5326,10 +5393,6 @@ static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
enum mlx5e_rx_res_features features;
int err;
- priv->rx_res = mlx5e_rx_res_alloc();
- if (!priv->rx_res)
- return -ENOMEM;
-
mlx5e_create_q_counters(priv);
err = mlx5e_open_drop_rq(priv, &priv->drop_rq);
@@ -5341,12 +5404,16 @@ static int mlx5e_init_nic_rx(struct mlx5e_priv *priv)
features = MLX5E_RX_RES_FEATURE_PTP;
if (mlx5_tunnel_inner_ft_supported(mdev))
features |= MLX5E_RX_RES_FEATURE_INNER_FT;
- err = mlx5e_rx_res_init(priv->rx_res, priv->mdev, features,
- priv->max_nch, priv->drop_rq.rqn,
- &priv->channels.params.packet_merge,
- priv->channels.params.num_channels);
- if (err)
+
+ priv->rx_res = mlx5e_rx_res_create(priv->mdev, features, priv->max_nch, priv->drop_rq.rqn,
+ &priv->channels.params.packet_merge,
+ priv->channels.params.num_channels);
+ if (IS_ERR(priv->rx_res)) {
+ err = PTR_ERR(priv->rx_res);
+ priv->rx_res = NULL;
+ mlx5_core_err(mdev, "create rx resources failed, %d\n", err);
goto err_close_drop_rq;
+ }
err = mlx5e_create_flow_steering(priv->fs, priv->rx_res, priv->profile,
priv->netdev);
@@ -5376,12 +5443,11 @@ err_destroy_flow_steering:
priv->profile);
err_destroy_rx_res:
mlx5e_rx_res_destroy(priv->rx_res);
+ priv->rx_res = NULL;
err_close_drop_rq:
mlx5e_close_drop_rq(&priv->drop_rq);
err_destroy_q_counters:
mlx5e_destroy_q_counters(priv);
- mlx5e_rx_res_free(priv->rx_res);
- priv->rx_res = NULL;
return err;
}
@@ -5392,10 +5458,9 @@ static void mlx5e_cleanup_nic_rx(struct mlx5e_priv *priv)
mlx5e_destroy_flow_steering(priv->fs, !!(priv->netdev->hw_features & NETIF_F_NTUPLE),
priv->profile);
mlx5e_rx_res_destroy(priv->rx_res);
+ priv->rx_res = NULL;
mlx5e_close_drop_rq(&priv->drop_rq);
mlx5e_destroy_q_counters(priv);
- mlx5e_rx_res_free(priv->rx_res);
- priv->rx_res = NULL;
}
static void mlx5e_set_mqprio_rl(struct mlx5e_priv *priv)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
index fd1cce542b68..693e55b010d9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
@@ -1006,26 +1006,22 @@ static int mlx5e_init_rep_rx(struct mlx5e_priv *priv)
struct mlx5_core_dev *mdev = priv->mdev;
int err;
- priv->rx_res = mlx5e_rx_res_alloc();
- if (!priv->rx_res) {
- err = -ENOMEM;
- goto err_free_fs;
- }
-
mlx5e_fs_init_l2_addr(priv->fs, priv->netdev);
err = mlx5e_open_drop_rq(priv, &priv->drop_rq);
if (err) {
mlx5_core_err(mdev, "open drop rq failed, %d\n", err);
- goto err_rx_res_free;
+ goto err_free_fs;
}
- err = mlx5e_rx_res_init(priv->rx_res, priv->mdev, 0,
- priv->max_nch, priv->drop_rq.rqn,
- &priv->channels.params.packet_merge,
- priv->channels.params.num_channels);
- if (err)
+ priv->rx_res = mlx5e_rx_res_create(priv->mdev, 0, priv->max_nch, priv->drop_rq.rqn,
+ &priv->channels.params.packet_merge,
+ priv->channels.params.num_channels);
+ if (IS_ERR(priv->rx_res)) {
+ err = PTR_ERR(priv->rx_res);
+ mlx5_core_err(mdev, "Create rx resources failed, err=%d\n", err);
goto err_close_drop_rq;
+ }
err = mlx5e_create_rep_ttc_table(priv);
if (err)
@@ -1049,11 +1045,9 @@ err_destroy_ttc_table:
mlx5_destroy_ttc_table(mlx5e_fs_get_ttc(priv->fs, false));
err_destroy_rx_res:
mlx5e_rx_res_destroy(priv->rx_res);
+ priv->rx_res = ERR_PTR(-EINVAL);
err_close_drop_rq:
mlx5e_close_drop_rq(&priv->drop_rq);
-err_rx_res_free:
- mlx5e_rx_res_free(priv->rx_res);
- priv->rx_res = NULL;
err_free_fs:
mlx5e_fs_cleanup(priv->fs);
priv->fs = NULL;
@@ -1067,9 +1061,8 @@ static void mlx5e_cleanup_rep_rx(struct mlx5e_priv *priv)
mlx5e_destroy_rep_root_ft(priv);
mlx5_destroy_ttc_table(mlx5e_fs_get_ttc(priv->fs, false));
mlx5e_rx_res_destroy(priv->rx_res);
+ priv->rx_res = ERR_PTR(-EINVAL);
mlx5e_close_drop_rq(&priv->drop_rq);
- mlx5e_rx_res_free(priv->rx_res);
- priv->rx_res = NULL;
}
static void mlx5e_rep_mpesw_work(struct work_struct *work)
@@ -1363,8 +1356,9 @@ mlx5e_rep_vnic_reporter_diagnose(struct devlink_health_reporter *reporter,
struct mlx5e_rep_priv *rpriv = devlink_health_reporter_priv(reporter);
struct mlx5_eswitch_rep *rep = rpriv->rep;
- return mlx5_reporter_vnic_diagnose_counters(rep->esw->dev, fmsg,
- rep->vport, true);
+ mlx5_reporter_vnic_diagnose_counters(rep->esw->dev, fmsg, rep->vport,
+ true);
+ return 0;
}
static const struct devlink_health_reporter_ops mlx5_rep_vnic_reporter_ops = {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index c8590483ddc6..9a5a5c2c7da9 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -753,19 +753,21 @@ static int mlx5e_hairpin_create_indirect_rqt(struct mlx5e_hairpin *hp)
{
struct mlx5e_priv *priv = hp->func_priv;
struct mlx5_core_dev *mdev = priv->mdev;
- struct mlx5e_rss_params_indir *indir;
+ struct mlx5e_rss_params_indir indir;
int err;
- indir = kvmalloc(sizeof(*indir), GFP_KERNEL);
- if (!indir)
- return -ENOMEM;
+ err = mlx5e_rss_params_indir_init(&indir, mdev,
+ mlx5e_rqt_size(mdev, hp->num_channels),
+ mlx5e_rqt_size(mdev, priv->max_nch));
+ if (err)
+ return err;
- mlx5e_rss_params_indir_init_uniform(indir, hp->num_channels);
+ mlx5e_rss_params_indir_init_uniform(&indir, hp->num_channels);
err = mlx5e_rqt_init_indir(&hp->indir_rqt, mdev, hp->pair->rqn, hp->num_channels,
mlx5e_rx_res_get_current_hash(priv->rx_res).hfunc,
- indir);
+ &indir);
- kvfree(indir);
+ mlx5e_rss_params_indir_cleanup(&indir);
return err;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c
index 7a01714b3780..a7ed87e9d842 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/bridge_mcast.c
@@ -78,6 +78,8 @@ mlx5_esw_bridge_mdb_flow_create(u16 esw_owner_vhca_id, struct mlx5_esw_bridge_md
xa_for_each(&entry->ports, idx, port) {
dests[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
dests[i].ft = port->mcast.ft;
+ if (port->vport_num == MLX5_VPORT_UPLINK)
+ dests[i].ft->flags |= MLX5_FLOW_TABLE_UPLINK_VPORT;
i++;
}
@@ -585,10 +587,6 @@ mlx5_esw_bridge_mcast_vlan_flow_create(u16 vlan_proto, struct mlx5_esw_bridge_po
if (!rule_spec)
return ERR_PTR(-ENOMEM);
- if (MLX5_CAP_ESW_FLOWTABLE(bridge->br_offloads->esw->dev, flow_source) &&
- port->vport_num == MLX5_VPORT_UPLINK)
- rule_spec->flow_context.flow_source =
- MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT;
rule_spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
flow_act.action |= MLX5_FLOW_CONTEXT_ACTION_PACKET_REFORMAT;
@@ -660,11 +658,6 @@ mlx5_esw_bridge_mcast_fwd_flow_create(struct mlx5_esw_bridge_port *port)
if (!rule_spec)
return ERR_PTR(-ENOMEM);
- if (MLX5_CAP_ESW_FLOWTABLE(bridge->br_offloads->esw->dev, flow_source) &&
- port->vport_num == MLX5_VPORT_UPLINK)
- rule_spec->flow_context.flow_source =
- MLX5_FLOW_CONTEXT_FLOW_SOURCE_LOCAL_VPORT;
-
if (MLX5_CAP_ESW(bridge->br_offloads->esw->dev, merged_eswitch)) {
dest.vport.flags = MLX5_FLOW_DEST_VPORT_VHCA_ID;
dest.vport.vhca_id = port->esw_owner_vhca_id;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
index 1887a24ee414..d2ebe56c3977 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/qos.c
@@ -2,6 +2,7 @@
/* Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
#include "eswitch.h"
+#include "lib/mlx5.h"
#include "esw/qos.h"
#include "en/port.h"
#define CREATE_TRACE_POINTS
@@ -701,10 +702,75 @@ int mlx5_esw_qos_set_vport_rate(struct mlx5_eswitch *esw, struct mlx5_vport *vpo
return err;
}
+static u32 mlx5_esw_qos_lag_link_speed_get_locked(struct mlx5_core_dev *mdev)
+{
+ struct ethtool_link_ksettings lksettings;
+ struct net_device *slave, *master;
+ u32 speed = SPEED_UNKNOWN;
+
+ /* Lock ensures a stable reference to master and slave netdevice
+ * while port speed of master is queried.
+ */
+ ASSERT_RTNL();
+
+ slave = mlx5_uplink_netdev_get(mdev);
+ if (!slave)
+ goto out;
+
+ master = netdev_master_upper_dev_get(slave);
+ if (master && !__ethtool_get_link_ksettings(master, &lksettings))
+ speed = lksettings.base.speed;
+
+out:
+ return speed;
+}
+
+static int mlx5_esw_qos_max_link_speed_get(struct mlx5_core_dev *mdev, u32 *link_speed_max,
+ bool hold_rtnl_lock, struct netlink_ext_ack *extack)
+{
+ int err;
+
+ if (!mlx5_lag_is_active(mdev))
+ goto skip_lag;
+
+ if (hold_rtnl_lock)
+ rtnl_lock();
+
+ *link_speed_max = mlx5_esw_qos_lag_link_speed_get_locked(mdev);
+
+ if (hold_rtnl_lock)
+ rtnl_unlock();
+
+ if (*link_speed_max != (u32)SPEED_UNKNOWN)
+ return 0;
+
+skip_lag:
+ err = mlx5_port_max_linkspeed(mdev, link_speed_max);
+ if (err)
+ NL_SET_ERR_MSG_MOD(extack, "Failed to get link maximum speed");
+
+ return err;
+}
+
+static int mlx5_esw_qos_link_speed_verify(struct mlx5_core_dev *mdev,
+ const char *name, u32 link_speed_max,
+ u64 value, struct netlink_ext_ack *extack)
+{
+ if (value > link_speed_max) {
+ pr_err("%s rate value %lluMbps exceed link maximum speed %u.\n",
+ name, value, link_speed_max);
+ NL_SET_ERR_MSG_MOD(extack, "TX rate value exceed link maximum speed");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32 rate_mbps)
{
u32 ctx[MLX5_ST_SZ_DW(scheduling_context)] = {};
struct mlx5_vport *vport;
+ u32 link_speed_max;
u32 bitmask;
int err;
@@ -712,6 +778,17 @@ int mlx5_esw_qos_modify_vport_rate(struct mlx5_eswitch *esw, u16 vport_num, u32
if (IS_ERR(vport))
return PTR_ERR(vport);
+ if (rate_mbps) {
+ err = mlx5_esw_qos_max_link_speed_get(esw->dev, &link_speed_max, false, NULL);
+ if (err)
+ return err;
+
+ err = mlx5_esw_qos_link_speed_verify(esw->dev, "Police",
+ link_speed_max, rate_mbps, NULL);
+ if (err)
+ return err;
+ }
+
mutex_lock(&esw->state_lock);
if (!vport->qos.enabled) {
/* Eswitch QoS wasn't enabled yet. Enable it and vport QoS. */
@@ -744,12 +821,6 @@ static int esw_qos_devlink_rate_to_mbps(struct mlx5_core_dev *mdev, const char *
u64 value;
int err;
- err = mlx5_port_max_linkspeed(mdev, &link_speed_max);
- if (err) {
- NL_SET_ERR_MSG_MOD(extack, "Failed to get link maximum speed");
- return err;
- }
-
value = div_u64_rem(*rate, MLX5_LINKSPEED_UNIT, &remainder);
if (remainder) {
pr_err("%s rate value %lluBps not in link speed units of 1Mbps.\n",
@@ -758,12 +829,13 @@ static int esw_qos_devlink_rate_to_mbps(struct mlx5_core_dev *mdev, const char *
return -EINVAL;
}
- if (value > link_speed_max) {
- pr_err("%s rate value %lluMbps exceed link maximum speed %u.\n",
- name, value, link_speed_max);
- NL_SET_ERR_MSG_MOD(extack, "TX rate value exceed link maximum speed");
- return -EINVAL;
- }
+ err = mlx5_esw_qos_max_link_speed_get(mdev, &link_speed_max, true, extack);
+ if (err)
+ return err;
+
+ err = mlx5_esw_qos_link_speed_verify(mdev, name, link_speed_max, value, extack);
+ if (err)
+ return err;
*rate = value;
return 0;
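The two hunks above split the rate validation into reusable steps: when LAG is active, the maximum is taken from the bond master's link speed (queried under RTNL via __ethtool_get_link_ksettings()), otherwise mlx5_port_max_linkspeed() is used, and the requested rate is then checked against it. A minimal sketch of how the two new helpers compose inside esw/qos.c, using only the signatures shown above (mdev, value and extack are assumed to be in scope; the "tx_max" name is illustrative):

        u32 link_speed_max;
        int err;

        /* hold_rtnl_lock = true: take rtnl_lock internally while querying the
         * bond master's speed; falls back to mlx5_port_max_linkspeed().
         */
        err = mlx5_esw_qos_max_link_speed_get(mdev, &link_speed_max, true, extack);
        if (err)
                return err;

        /* Reject rates above the link speed, with an extack message. */
        err = mlx5_esw_qos_link_speed_verify(mdev, "tx_max", link_speed_max,
                                             value, extack);
        if (err)
                return err;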
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/events.c b/drivers/net/ethernet/mellanox/mlx5/core/events.c
index 3ec892d51f57..d91ea53eb394 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/events.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/events.c
@@ -441,8 +441,3 @@ int mlx5_blocking_notifier_call_chain(struct mlx5_core_dev *dev, unsigned int ev
return blocking_notifier_call_chain(&events->sw_nh, event, data);
}
-
-void mlx5_events_work_enqueue(struct mlx5_core_dev *dev, struct work_struct *work)
-{
- queue_work(dev->priv.events->wq, work);
-}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index a13b9c2bd144..e6bfa7e4f146 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -114,9 +114,9 @@
#define ETHTOOL_NUM_PRIOS 11
#define ETHTOOL_MIN_LEVEL (KERNEL_MIN_LEVEL + ETHTOOL_NUM_PRIOS)
/* Promiscuous, Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}}, IPsec policy,
- * IPsec RoCE policy
+ * {IPsec RoCE MPV,Alias table},IPsec RoCE policy
*/
-#define KERNEL_NIC_PRIO_NUM_LEVELS 9
+#define KERNEL_NIC_PRIO_NUM_LEVELS 11
#define KERNEL_NIC_NUM_PRIOS 1
/* One more level for tc */
#define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 1)
@@ -137,7 +137,7 @@
#define LAG_MIN_LEVEL (OFFLOADS_MIN_LEVEL + KERNEL_RX_MACSEC_MIN_LEVEL + 1)
#define KERNEL_TX_IPSEC_NUM_PRIOS 1
-#define KERNEL_TX_IPSEC_NUM_LEVELS 3
+#define KERNEL_TX_IPSEC_NUM_LEVELS 4
#define KERNEL_TX_IPSEC_MIN_LEVEL (KERNEL_TX_IPSEC_NUM_LEVELS)
#define KERNEL_TX_MACSEC_NUM_PRIOS 1
@@ -231,7 +231,7 @@ enum {
};
#define RDMA_RX_IPSEC_NUM_PRIOS 1
-#define RDMA_RX_IPSEC_NUM_LEVELS 2
+#define RDMA_RX_IPSEC_NUM_LEVELS 4
#define RDMA_RX_IPSEC_MIN_LEVEL (RDMA_RX_IPSEC_NUM_LEVELS)
#define RDMA_RX_BYPASS_MIN_LEVEL MLX5_BY_PASS_NUM_REGULAR_PRIOS
@@ -288,7 +288,7 @@ enum {
#define RDMA_TX_BYPASS_MIN_LEVEL MLX5_BY_PASS_NUM_PRIOS
#define RDMA_TX_COUNTERS_MIN_LEVEL (RDMA_TX_BYPASS_MIN_LEVEL + 1)
-#define RDMA_TX_IPSEC_NUM_PRIOS 1
+#define RDMA_TX_IPSEC_NUM_PRIOS 2
#define RDMA_TX_IPSEC_PRIO_NUM_LEVELS 1
#define RDMA_TX_IPSEC_MIN_LEVEL (RDMA_TX_COUNTERS_MIN_LEVEL + RDMA_TX_IPSEC_NUM_PRIOS)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
index 2fb2598b775e..8ff6dc9bc803 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
@@ -365,6 +365,8 @@ static const char *hsynd_str(u8 synd)
return "FFSER error";
case MLX5_INITIAL_SEG_HEALTH_SYNDROME_HIGH_TEMP_ERR:
return "High temperature";
+ case MLX5_INITIAL_SEG_HEALTH_SYNDROME_ICM_PCI_POISONED_ERR:
+ return "ICM fetch PCI data poisoned error";
default:
return "unrecognized error";
}
@@ -448,14 +450,15 @@ mlx5_fw_reporter_diagnose(struct devlink_health_reporter *reporter,
struct mlx5_core_dev *dev = devlink_health_reporter_priv(reporter);
struct mlx5_core_health *health = &dev->priv.health;
struct health_buffer __iomem *h = health->health;
- u8 synd;
- int err;
+ u8 synd = ioread8(&h->synd);
- synd = ioread8(&h->synd);
- err = devlink_fmsg_u8_pair_put(fmsg, "Syndrome", synd);
- if (err || !synd)
- return err;
- return devlink_fmsg_string_pair_put(fmsg, "Description", hsynd_str(synd));
+ if (!synd)
+ return 0;
+
+ devlink_fmsg_u8_pair_put(fmsg, "Syndrome", synd);
+ devlink_fmsg_string_pair_put(fmsg, "Description", hsynd_str(synd));
+
+ return 0;
}
struct mlx5_fw_reporter_ctx {
@@ -463,94 +466,47 @@ struct mlx5_fw_reporter_ctx {
int miss_counter;
};
-static int
+static void
mlx5_fw_reporter_ctx_pairs_put(struct devlink_fmsg *fmsg,
struct mlx5_fw_reporter_ctx *fw_reporter_ctx)
{
- int err;
-
- err = devlink_fmsg_u8_pair_put(fmsg, "syndrome",
- fw_reporter_ctx->err_synd);
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "fw_miss_counter",
- fw_reporter_ctx->miss_counter);
- if (err)
- return err;
- return 0;
+ devlink_fmsg_u8_pair_put(fmsg, "syndrome", fw_reporter_ctx->err_synd);
+ devlink_fmsg_u32_pair_put(fmsg, "fw_miss_counter", fw_reporter_ctx->miss_counter);
}
-static int
+static void
mlx5_fw_reporter_heath_buffer_data_put(struct mlx5_core_dev *dev,
struct devlink_fmsg *fmsg)
{
struct mlx5_core_health *health = &dev->priv.health;
struct health_buffer __iomem *h = health->health;
u8 rfr_severity;
- int err;
int i;
if (!ioread8(&h->synd))
- return 0;
-
- err = devlink_fmsg_pair_nest_start(fmsg, "health buffer");
- if (err)
- return err;
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
- err = devlink_fmsg_arr_pair_nest_start(fmsg, "assert_var");
- if (err)
- return err;
+ return;
- for (i = 0; i < ARRAY_SIZE(h->assert_var); i++) {
- err = devlink_fmsg_u32_put(fmsg, ioread32be(h->assert_var + i));
- if (err)
- return err;
- }
- err = devlink_fmsg_arr_pair_nest_end(fmsg);
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "assert_exit_ptr",
- ioread32be(&h->assert_exit_ptr));
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "assert_callra",
- ioread32be(&h->assert_callra));
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "time", ioread32be(&h->time));
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "hw_id", ioread32be(&h->hw_id));
- if (err)
- return err;
+ devlink_fmsg_pair_nest_start(fmsg, "health buffer");
+ devlink_fmsg_obj_nest_start(fmsg);
+ devlink_fmsg_arr_pair_nest_start(fmsg, "assert_var");
+ for (i = 0; i < ARRAY_SIZE(h->assert_var); i++)
+ devlink_fmsg_u32_put(fmsg, ioread32be(h->assert_var + i));
+ devlink_fmsg_arr_pair_nest_end(fmsg);
+ devlink_fmsg_u32_pair_put(fmsg, "assert_exit_ptr",
+ ioread32be(&h->assert_exit_ptr));
+ devlink_fmsg_u32_pair_put(fmsg, "assert_callra",
+ ioread32be(&h->assert_callra));
+ devlink_fmsg_u32_pair_put(fmsg, "time", ioread32be(&h->time));
+ devlink_fmsg_u32_pair_put(fmsg, "hw_id", ioread32be(&h->hw_id));
rfr_severity = ioread8(&h->rfr_severity);
- err = devlink_fmsg_u8_pair_put(fmsg, "rfr", mlx5_health_get_rfr(rfr_severity));
- if (err)
- return err;
- err = devlink_fmsg_u8_pair_put(fmsg, "severity", mlx5_health_get_severity(rfr_severity));
- if (err)
- return err;
- err = devlink_fmsg_u8_pair_put(fmsg, "irisc_index",
- ioread8(&h->irisc_index));
- if (err)
- return err;
- err = devlink_fmsg_u8_pair_put(fmsg, "synd", ioread8(&h->synd));
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "ext_synd",
- ioread16be(&h->ext_synd));
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "raw_fw_ver",
- ioread32be(&h->fw_ver));
- if (err)
- return err;
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- return err;
- return devlink_fmsg_pair_nest_end(fmsg);
+ devlink_fmsg_u8_pair_put(fmsg, "rfr", mlx5_health_get_rfr(rfr_severity));
+ devlink_fmsg_u8_pair_put(fmsg, "severity", mlx5_health_get_severity(rfr_severity));
+ devlink_fmsg_u8_pair_put(fmsg, "irisc_index", ioread8(&h->irisc_index));
+ devlink_fmsg_u8_pair_put(fmsg, "synd", ioread8(&h->synd));
+ devlink_fmsg_u32_pair_put(fmsg, "ext_synd", ioread16be(&h->ext_synd));
+ devlink_fmsg_u32_pair_put(fmsg, "raw_fw_ver", ioread32be(&h->fw_ver));
+ devlink_fmsg_obj_nest_end(fmsg);
+ devlink_fmsg_pair_nest_end(fmsg);
}
static int
@@ -568,14 +524,11 @@ mlx5_fw_reporter_dump(struct devlink_health_reporter *reporter,
if (priv_ctx) {
struct mlx5_fw_reporter_ctx *fw_reporter_ctx = priv_ctx;
- err = mlx5_fw_reporter_ctx_pairs_put(fmsg, fw_reporter_ctx);
- if (err)
- return err;
+ mlx5_fw_reporter_ctx_pairs_put(fmsg, fw_reporter_ctx);
}
- err = mlx5_fw_reporter_heath_buffer_data_put(dev, fmsg);
- if (err)
- return err;
+ mlx5_fw_reporter_heath_buffer_data_put(dev, fmsg);
+
return mlx5_fw_tracer_get_saved_traces_objects(dev->tracer, fmsg);
}
@@ -641,12 +594,10 @@ mlx5_fw_fatal_reporter_dump(struct devlink_health_reporter *reporter,
if (priv_ctx) {
struct mlx5_fw_reporter_ctx *fw_reporter_ctx = priv_ctx;
- err = mlx5_fw_reporter_ctx_pairs_put(fmsg, fw_reporter_ctx);
- if (err)
- goto free_data;
+ mlx5_fw_reporter_ctx_pairs_put(fmsg, fw_reporter_ctx);
}
- err = devlink_fmsg_binary_pair_put(fmsg, "crdump_data", cr_data, crdump_size);
+ devlink_fmsg_binary_pair_put(fmsg, "crdump_data", cr_data, crdump_size);
free_data:
kvfree(cr_data);
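These health.c hunks track the devlink change that makes the devlink_fmsg_*() helpers return void: reporter callbacks now just emit their fields and return 0, with any formatting failure presumably recorded inside the fmsg by devlink itself. A minimal sketch of a diagnose callback in the new style (the reporter name and emitted values are illustrative):

        #include <net/devlink.h>

        static int
        example_reporter_diagnose(struct devlink_health_reporter *reporter,
                                  struct devlink_fmsg *fmsg,
                                  struct netlink_ext_ack *extack)
        {
                /* The pair_put helpers are void now; no per-call error checks. */
                devlink_fmsg_u8_pair_put(fmsg, "Syndrome", 0);
                devlink_fmsg_string_pair_put(fmsg, "Description", "ok");
                return 0;
        }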
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
index baa7ef812313..2bf77a5251b4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
@@ -418,12 +418,6 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv)
return -ENOMEM;
}
- priv->rx_res = mlx5e_rx_res_alloc();
- if (!priv->rx_res) {
- err = -ENOMEM;
- goto err_free_fs;
- }
-
mlx5e_create_q_counters(priv);
err = mlx5e_open_drop_rq(priv, &priv->drop_rq);
@@ -432,12 +426,13 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv)
goto err_destroy_q_counters;
}
- err = mlx5e_rx_res_init(priv->rx_res, priv->mdev, 0,
- priv->max_nch, priv->drop_rq.rqn,
- &priv->channels.params.packet_merge,
- priv->channels.params.num_channels);
- if (err)
+ priv->rx_res = mlx5e_rx_res_create(priv->mdev, 0, priv->max_nch, priv->drop_rq.rqn,
+ &priv->channels.params.packet_merge,
+ priv->channels.params.num_channels);
+ if (IS_ERR(priv->rx_res)) {
+ err = PTR_ERR(priv->rx_res);
goto err_close_drop_rq;
+ }
err = mlx5i_create_flow_steering(priv);
if (err)
@@ -447,13 +442,11 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv)
err_destroy_rx_res:
mlx5e_rx_res_destroy(priv->rx_res);
+ priv->rx_res = ERR_PTR(-EINVAL);
err_close_drop_rq:
mlx5e_close_drop_rq(&priv->drop_rq);
err_destroy_q_counters:
mlx5e_destroy_q_counters(priv);
- mlx5e_rx_res_free(priv->rx_res);
- priv->rx_res = NULL;
-err_free_fs:
mlx5e_fs_cleanup(priv->fs);
return err;
}
@@ -462,10 +455,9 @@ static void mlx5i_cleanup_rx(struct mlx5e_priv *priv)
{
mlx5i_destroy_flow_steering(priv);
mlx5e_rx_res_destroy(priv->rx_res);
+ priv->rx_res = ERR_PTR(-EINVAL);
mlx5e_close_drop_rq(&priv->drop_rq);
mlx5e_destroy_q_counters(priv);
- mlx5e_rx_res_free(priv->rx_res);
- priv->rx_res = NULL;
mlx5e_fs_cleanup(priv->fs);
}
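The en_rep.c and ipoib.c hunks above converge on the same rx_res lifecycle: the separate mlx5e_rx_res_alloc() + mlx5e_rx_res_init() pair is replaced by mlx5e_rx_res_create(), which returns either a valid pointer or an ERR_PTR, and after mlx5e_rx_res_destroy() the field is parked at ERR_PTR(-EINVAL) rather than freed and NULLed. Condensed (sketch only, mirroring the hunks above):

        priv->rx_res = mlx5e_rx_res_create(priv->mdev, 0, priv->max_nch,
                                           priv->drop_rq.rqn,
                                           &priv->channels.params.packet_merge,
                                           priv->channels.params.num_channels);
        if (IS_ERR(priv->rx_res))
                return PTR_ERR(priv->rx_res);
        /* ... */
        mlx5e_rx_res_destroy(priv->rx_res);
        priv->rx_res = ERR_PTR(-EINVAL);        /* no separate free step anymore */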
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
index af3fac090b82..d14459e5c04f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
@@ -943,6 +943,26 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
}
}
+/* The last mdev to unregister will destroy the workqueue before removing the
+ * devcom component, and as all the mdevs use the same devcom component we are
+ * guaranteed that the devcom is valid while the calling work is running.
+ */
+struct mlx5_devcom_comp_dev *mlx5_lag_get_devcom_comp(struct mlx5_lag *ldev)
+{
+ struct mlx5_devcom_comp_dev *devcom = NULL;
+ int i;
+
+ mutex_lock(&ldev->lock);
+ for (i = 0; i < ldev->ports; i++) {
+ if (ldev->pf[i].dev) {
+ devcom = ldev->pf[i].dev->priv.hca_devcom_comp;
+ break;
+ }
+ }
+ mutex_unlock(&ldev->lock);
+ return devcom;
+}
+
static void mlx5_queue_bond_work(struct mlx5_lag *ldev, unsigned long delay)
{
queue_delayed_work(ldev->wq, &ldev->bond_work, delay);
@@ -953,9 +973,14 @@ static void mlx5_do_bond_work(struct work_struct *work)
struct delayed_work *delayed_work = to_delayed_work(work);
struct mlx5_lag *ldev = container_of(delayed_work, struct mlx5_lag,
bond_work);
+ struct mlx5_devcom_comp_dev *devcom;
int status;
- status = mlx5_dev_list_trylock();
+ devcom = mlx5_lag_get_devcom_comp(ldev);
+ if (!devcom)
+ return;
+
+ status = mlx5_devcom_comp_trylock(devcom);
if (!status) {
mlx5_queue_bond_work(ldev, HZ);
return;
@@ -964,14 +989,14 @@ static void mlx5_do_bond_work(struct work_struct *work)
mutex_lock(&ldev->lock);
if (ldev->mode_changes_in_progress) {
mutex_unlock(&ldev->lock);
- mlx5_dev_list_unlock();
+ mlx5_devcom_comp_unlock(devcom);
mlx5_queue_bond_work(ldev, HZ);
return;
}
mlx5_do_bond(ldev);
mutex_unlock(&ldev->lock);
- mlx5_dev_list_unlock();
+ mlx5_devcom_comp_unlock(devcom);
}
static int mlx5_handle_changeupper_event(struct mlx5_lag *ldev,
@@ -1212,13 +1237,14 @@ static void mlx5_ldev_remove_mdev(struct mlx5_lag *ldev,
dev->priv.lag = NULL;
}
-/* Must be called with intf_mutex held */
+/* Must be called with HCA devcom component lock held */
static int __mlx5_lag_dev_add_mdev(struct mlx5_core_dev *dev)
{
+ struct mlx5_devcom_comp_dev *pos = NULL;
struct mlx5_lag *ldev = NULL;
struct mlx5_core_dev *tmp_dev;
- tmp_dev = mlx5_get_next_phys_dev_lag(dev);
+ tmp_dev = mlx5_devcom_get_next_peer_data(dev->priv.hca_devcom_comp, &pos);
if (tmp_dev)
ldev = mlx5_lag_dev(tmp_dev);
@@ -1275,10 +1301,13 @@ void mlx5_lag_add_mdev(struct mlx5_core_dev *dev)
if (!mlx5_lag_is_supported(dev))
return;
+ if (IS_ERR_OR_NULL(dev->priv.hca_devcom_comp))
+ return;
+
recheck:
- mlx5_dev_list_lock();
+ mlx5_devcom_comp_lock(dev->priv.hca_devcom_comp);
err = __mlx5_lag_dev_add_mdev(dev);
- mlx5_dev_list_unlock();
+ mlx5_devcom_comp_unlock(dev->priv.hca_devcom_comp);
if (err) {
msleep(100);
@@ -1431,7 +1460,7 @@ void mlx5_lag_disable_change(struct mlx5_core_dev *dev)
if (!ldev)
return;
- mlx5_dev_list_lock();
+ mlx5_devcom_comp_lock(dev->priv.hca_devcom_comp);
mutex_lock(&ldev->lock);
ldev->mode_changes_in_progress++;
@@ -1439,7 +1468,7 @@ void mlx5_lag_disable_change(struct mlx5_core_dev *dev)
mlx5_disable_lag(ldev);
mutex_unlock(&ldev->lock);
- mlx5_dev_list_unlock();
+ mlx5_devcom_comp_unlock(dev->priv.hca_devcom_comp);
}
void mlx5_lag_enable_change(struct mlx5_core_dev *dev)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
index 481e92f39fe6..50fcb1eee574 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
@@ -112,6 +112,7 @@ void mlx5_disable_lag(struct mlx5_lag *ldev);
void mlx5_lag_remove_devices(struct mlx5_lag *ldev);
int mlx5_deactivate_lag(struct mlx5_lag *ldev);
void mlx5_lag_add_devices(struct mlx5_lag *ldev);
+struct mlx5_devcom_comp_dev *mlx5_lag_get_devcom_comp(struct mlx5_lag *ldev);
static inline bool mlx5_lag_is_supported(struct mlx5_core_dev *dev)
{
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c
index 4bf15391525c..82889f30506e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c
@@ -65,12 +65,12 @@ err_metadata:
return err;
}
-#define MLX5_LAG_MPESW_OFFLOADS_SUPPORTED_PORTS 2
+#define MLX5_LAG_MPESW_OFFLOADS_SUPPORTED_PORTS 4
static int enable_mpesw(struct mlx5_lag *ldev)
{
struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
- struct mlx5_core_dev *dev1 = ldev->pf[MLX5_LAG_P2].dev;
int err;
+ int i;
if (ldev->mode != MLX5_LAG_MODE_NONE)
return -EINVAL;
@@ -98,11 +98,11 @@ static int enable_mpesw(struct mlx5_lag *ldev)
dev0->priv.flags &= ~MLX5_PRIV_FLAGS_DISABLE_IB_ADEV;
mlx5_rescan_drivers_locked(dev0);
- err = mlx5_eswitch_reload_reps(dev0->priv.eswitch);
- if (!err)
- err = mlx5_eswitch_reload_reps(dev1->priv.eswitch);
- if (err)
- goto err_rescan_drivers;
+ for (i = 0; i < ldev->ports; i++) {
+ err = mlx5_eswitch_reload_reps(ldev->pf[i].dev->priv.eswitch);
+ if (err)
+ goto err_rescan_drivers;
+ }
return 0;
@@ -112,8 +112,8 @@ err_rescan_drivers:
mlx5_deactivate_lag(ldev);
err_add_devices:
mlx5_lag_add_devices(ldev);
- mlx5_eswitch_reload_reps(dev0->priv.eswitch);
- mlx5_eswitch_reload_reps(dev1->priv.eswitch);
+ for (i = 0; i < ldev->ports; i++)
+ mlx5_eswitch_reload_reps(ldev->pf[i].dev->priv.eswitch);
mlx5_mpesw_metadata_cleanup(ldev);
return err;
}
@@ -129,9 +129,14 @@ static void disable_mpesw(struct mlx5_lag *ldev)
static void mlx5_mpesw_work(struct work_struct *work)
{
struct mlx5_mpesw_work_st *mpesww = container_of(work, struct mlx5_mpesw_work_st, work);
+ struct mlx5_devcom_comp_dev *devcom;
struct mlx5_lag *ldev = mpesww->lag;
- mlx5_dev_list_lock();
+ devcom = mlx5_lag_get_devcom_comp(ldev);
+ if (!devcom)
+ return;
+
+ mlx5_devcom_comp_lock(devcom);
mutex_lock(&ldev->lock);
if (ldev->mode_changes_in_progress) {
mpesww->result = -EAGAIN;
@@ -144,7 +149,7 @@ static void mlx5_mpesw_work(struct work_struct *work)
disable_mpesw(ldev);
unlock:
mutex_unlock(&ldev->lock);
- mlx5_dev_list_unlock();
+ mlx5_devcom_comp_unlock(devcom);
complete(&mpesww->comp);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
index 7d9bbb494d95..101b3bb90863 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
@@ -507,10 +507,7 @@ static int mlx5_lag_create_ttc_table(struct mlx5_lag *ldev)
mlx5_lag_set_outer_ttc_params(ldev, &ttc_params);
port_sel->outer.ttc = mlx5_create_ttc_table(dev, &ttc_params);
- if (IS_ERR(port_sel->outer.ttc))
- return PTR_ERR(port_sel->outer.ttc);
-
- return 0;
+ return PTR_ERR_OR_ZERO(port_sel->outer.ttc);
}
static int mlx5_lag_create_inner_ttc_table(struct mlx5_lag *ldev)
@@ -521,10 +518,7 @@ static int mlx5_lag_create_inner_ttc_table(struct mlx5_lag *ldev)
mlx5_lag_set_inner_ttc_params(ldev, &ttc_params);
port_sel->inner.ttc = mlx5_create_inner_ttc_table(dev, &ttc_params);
- if (IS_ERR(port_sel->inner.ttc))
- return PTR_ERR(port_sel->inner.ttc);
-
- return 0;
+ return PTR_ERR_OR_ZERO(port_sel->inner.ttc);
}
int mlx5_lag_port_sel_create(struct mlx5_lag *ldev,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
index 00e67910e3ee..e8e50563e956 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.c
@@ -31,6 +31,7 @@ struct mlx5_devcom_comp {
struct kref ref;
bool ready;
struct rw_semaphore sem;
+ struct lock_class_key lock_key;
};
struct mlx5_devcom_comp_dev {
@@ -119,6 +120,8 @@ mlx5_devcom_comp_alloc(u64 id, u64 key, mlx5_devcom_event_handler_t handler)
comp->key = key;
comp->handler = handler;
init_rwsem(&comp->sem);
+ lockdep_register_key(&comp->lock_key);
+ lockdep_set_class(&comp->sem, &comp->lock_key);
kref_init(&comp->ref);
INIT_LIST_HEAD(&comp->comp_dev_list_head);
@@ -133,6 +136,7 @@ mlx5_devcom_comp_release(struct kref *ref)
mutex_lock(&comp_list_lock);
list_del(&comp->comp_list);
mutex_unlock(&comp_list_lock);
+ lockdep_unregister_key(&comp->lock_key);
kfree(comp);
}
@@ -383,3 +387,24 @@ void *mlx5_devcom_get_next_peer_data_rcu(struct mlx5_devcom_comp_dev *devcom,
*pos = tmp;
return data;
}
+
+void mlx5_devcom_comp_lock(struct mlx5_devcom_comp_dev *devcom)
+{
+ if (IS_ERR_OR_NULL(devcom))
+ return;
+ down_write(&devcom->comp->sem);
+}
+
+void mlx5_devcom_comp_unlock(struct mlx5_devcom_comp_dev *devcom)
+{
+ if (IS_ERR_OR_NULL(devcom))
+ return;
+ up_write(&devcom->comp->sem);
+}
+
+int mlx5_devcom_comp_trylock(struct mlx5_devcom_comp_dev *devcom)
+{
+ if (IS_ERR_OR_NULL(devcom))
+ return 0;
+ return down_write_trylock(&devcom->comp->sem);
+}
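Together with the lag.c and mpesw.c hunks, these helpers replace the global mlx5_dev_list_lock()/trylock()/unlock() with a per-HCA devcom component lock, and they tolerate an IS_ERR_OR_NULL component so callers need no extra guards. The work-handler pattern, condensed from mlx5_do_bond_work() above (sketch only):

        struct mlx5_devcom_comp_dev *devcom = mlx5_lag_get_devcom_comp(ldev);

        if (!devcom)                    /* last mdev already unregistered */
                return;

        if (!mlx5_devcom_comp_trylock(devcom)) {
                mlx5_queue_bond_work(ldev, HZ); /* retry later instead of blocking */
                return;
        }
        /* ... bond / MPESW work under the component lock ... */
        mlx5_devcom_comp_unlock(devcom);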
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
index 8389ac0af708..fc23bbef87b4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/devcom.h
@@ -8,6 +8,8 @@
enum mlx5_devcom_component {
MLX5_DEVCOM_ESW_OFFLOADS,
+ MLX5_DEVCOM_MPV,
+ MLX5_DEVCOM_HCA_PORTS,
MLX5_DEVCOM_NUM_COMPONENTS,
};
@@ -51,4 +53,8 @@ void *mlx5_devcom_get_next_peer_data_rcu(struct mlx5_devcom_comp_dev *devcom,
data; \
data = mlx5_devcom_get_next_peer_data_rcu(devcom, &pos))
+void mlx5_devcom_comp_lock(struct mlx5_devcom_comp_dev *devcom);
+void mlx5_devcom_comp_unlock(struct mlx5_devcom_comp_dev *devcom);
+int mlx5_devcom_comp_trylock(struct mlx5_devcom_comp_dev *devcom);
+
#endif /* __LIB_MLX5_DEVCOM_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
index 69a75459775d..4b7f7131c560 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/eq.h
@@ -85,7 +85,6 @@ void mlx5_eq_del_cq(struct mlx5_eq *eq, struct mlx5_core_cq *cq);
struct mlx5_eq_comp *mlx5_eqn2comp_eq(struct mlx5_core_dev *dev, int eqn);
struct mlx5_eq *mlx5_get_async_eq(struct mlx5_core_dev *dev);
void mlx5_cq_tasklet_cb(struct tasklet_struct *t);
-struct cpumask *mlx5_eq_comp_cpumask(struct mlx5_core_dev *dev, int ix);
u32 mlx5_eq_poll_irq_disabled(struct mlx5_eq_comp *eq);
void mlx5_cmd_eq_recover(struct mlx5_core_dev *dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c
index 6e3f178d6f84..234cd00f71a1 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.c
@@ -2,8 +2,11 @@
/* Copyright (c) 2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved. */
#include "fs_core.h"
+#include "fs_cmd.h"
+#include "en.h"
#include "lib/ipsec_fs_roce.h"
#include "mlx5_core.h"
+#include <linux/random.h>
struct mlx5_ipsec_miss {
struct mlx5_flow_group *group;
@@ -15,6 +18,12 @@ struct mlx5_ipsec_rx_roce {
struct mlx5_flow_table *ft;
struct mlx5_flow_handle *rule;
struct mlx5_ipsec_miss roce_miss;
+ struct mlx5_flow_table *nic_master_ft;
+ struct mlx5_flow_group *nic_master_group;
+ struct mlx5_flow_handle *nic_master_rule;
+ struct mlx5_flow_table *goto_alias_ft;
+ u32 alias_id;
+ char key[ACCESS_KEY_LEN];
struct mlx5_flow_table *ft_rdma;
struct mlx5_flow_namespace *ns_rdma;
@@ -24,6 +33,9 @@ struct mlx5_ipsec_tx_roce {
struct mlx5_flow_group *g;
struct mlx5_flow_table *ft;
struct mlx5_flow_handle *rule;
+ struct mlx5_flow_table *goto_alias_ft;
+ u32 alias_id;
+ char key[ACCESS_KEY_LEN];
struct mlx5_flow_namespace *ns;
};
@@ -31,6 +43,7 @@ struct mlx5_ipsec_fs {
struct mlx5_ipsec_rx_roce ipv4_rx;
struct mlx5_ipsec_rx_roce ipv6_rx;
struct mlx5_ipsec_tx_roce tx;
+ struct mlx5_devcom_comp_dev **devcom;
};
static void ipsec_fs_roce_setup_udp_dport(struct mlx5_flow_spec *spec,
@@ -43,11 +56,83 @@ static void ipsec_fs_roce_setup_udp_dport(struct mlx5_flow_spec *spec,
MLX5_SET(fte_match_param, spec->match_value, outer_headers.udp_dport, dport);
}
+static bool ipsec_fs_create_alias_supported_one(struct mlx5_core_dev *mdev)
+{
+ u64 obj_allowed = MLX5_CAP_GEN_2_64(mdev, allowed_object_for_other_vhca_access);
+ u32 obj_supp = MLX5_CAP_GEN_2(mdev, cross_vhca_object_to_object_supported);
+
+ if (!(obj_supp &
+ MLX5_CROSS_VHCA_OBJ_TO_OBJ_SUPPORTED_LOCAL_FLOW_TABLE_TO_REMOTE_FLOW_TABLE_MISS))
+ return false;
+
+ if (!(obj_allowed & MLX5_ALLOWED_OBJ_FOR_OTHER_VHCA_ACCESS_FLOW_TABLE))
+ return false;
+
+ return true;
+}
+
+static bool ipsec_fs_create_alias_supported(struct mlx5_core_dev *mdev,
+ struct mlx5_core_dev *master_mdev)
+{
+ if (ipsec_fs_create_alias_supported_one(mdev) &&
+ ipsec_fs_create_alias_supported_one(master_mdev))
+ return true;
+
+ return false;
+}
+
+static int ipsec_fs_create_aliased_ft(struct mlx5_core_dev *ibv_owner,
+ struct mlx5_core_dev *ibv_allowed,
+ struct mlx5_flow_table *ft,
+ u32 *obj_id, char *alias_key, bool from_event)
+{
+ u32 aliased_object_id = (ft->type << FT_ID_FT_TYPE_OFFSET) | ft->id;
+ u16 vhca_id_to_be_accessed = MLX5_CAP_GEN(ibv_owner, vhca_id);
+ struct mlx5_cmd_allow_other_vhca_access_attr allow_attr = {};
+ struct mlx5_cmd_alias_obj_create_attr alias_attr = {};
+ int ret;
+ int i;
+
+ if (!ipsec_fs_create_alias_supported(ibv_owner, ibv_allowed))
+ return -EOPNOTSUPP;
+
+ for (i = 0; i < ACCESS_KEY_LEN; i++)
+ if (!from_event)
+ alias_key[i] = get_random_u64() & 0xFF;
+
+ memcpy(allow_attr.access_key, alias_key, ACCESS_KEY_LEN);
+ allow_attr.obj_type = MLX5_GENERAL_OBJECT_TYPES_FLOW_TABLE_ALIAS;
+ allow_attr.obj_id = aliased_object_id;
+
+ if (!from_event) {
+ ret = mlx5_cmd_allow_other_vhca_access(ibv_owner, &allow_attr);
+ if (ret) {
+ mlx5_core_err(ibv_owner, "Failed to allow other vhca access err=%d\n",
+ ret);
+ return ret;
+ }
+ }
+
+ memcpy(alias_attr.access_key, alias_key, ACCESS_KEY_LEN);
+ alias_attr.obj_id = aliased_object_id;
+ alias_attr.obj_type = MLX5_GENERAL_OBJECT_TYPES_FLOW_TABLE_ALIAS;
+ alias_attr.vhca_id = vhca_id_to_be_accessed;
+ ret = mlx5_cmd_alias_obj_create(ibv_allowed, &alias_attr, obj_id);
+ if (ret) {
+ mlx5_core_err(ibv_allowed, "Failed to create alias object err=%d\n",
+ ret);
+ return ret;
+ }
+
+ return 0;
+}
+
static int
ipsec_fs_roce_rx_rule_setup(struct mlx5_core_dev *mdev,
struct mlx5_flow_destination *default_dst,
struct mlx5_ipsec_rx_roce *roce)
{
+ bool is_mpv_slave = mlx5_core_is_mp_slave(mdev);
struct mlx5_flow_destination dst = {};
MLX5_DECLARE_FLOW_ACT(flow_act);
struct mlx5_flow_handle *rule;
@@ -61,14 +146,19 @@ ipsec_fs_roce_rx_rule_setup(struct mlx5_core_dev *mdev,
ipsec_fs_roce_setup_udp_dport(spec, ROCE_V2_UDP_DPORT);
flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
- dst.type = MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE;
- dst.ft = roce->ft_rdma;
+ if (is_mpv_slave) {
+ dst.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ dst.ft = roce->goto_alias_ft;
+ } else {
+ dst.type = MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE;
+ dst.ft = roce->ft_rdma;
+ }
rule = mlx5_add_flow_rules(roce->ft, spec, &flow_act, &dst, 1);
if (IS_ERR(rule)) {
err = PTR_ERR(rule);
mlx5_core_err(mdev, "Fail to add RX RoCE IPsec rule err=%d\n",
err);
- goto fail_add_rule;
+ goto out;
}
roce->rule = rule;
@@ -84,12 +174,30 @@ ipsec_fs_roce_rx_rule_setup(struct mlx5_core_dev *mdev,
roce->roce_miss.rule = rule;
+ if (!is_mpv_slave)
+ goto out;
+
+ flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+ dst.type = MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE;
+ dst.ft = roce->ft_rdma;
+ rule = mlx5_add_flow_rules(roce->nic_master_ft, NULL, &flow_act, &dst,
+ 1);
+ if (IS_ERR(rule)) {
+ err = PTR_ERR(rule);
+ mlx5_core_err(mdev, "Fail to add RX RoCE IPsec rule for alias err=%d\n",
+ err);
+ goto fail_add_nic_master_rule;
+ }
+ roce->nic_master_rule = rule;
+
kvfree(spec);
return 0;
+fail_add_nic_master_rule:
+ mlx5_del_flow_rules(roce->roce_miss.rule);
fail_add_default_rule:
mlx5_del_flow_rules(roce->rule);
-fail_add_rule:
+out:
kvfree(spec);
return err;
}
@@ -120,25 +228,373 @@ out:
return err;
}
-void mlx5_ipsec_fs_roce_tx_destroy(struct mlx5_ipsec_fs *ipsec_roce)
+static int ipsec_fs_roce_tx_mpv_rule_setup(struct mlx5_core_dev *mdev,
+ struct mlx5_ipsec_tx_roce *roce,
+ struct mlx5_flow_table *pol_ft)
{
+ struct mlx5_flow_destination dst = {};
+ MLX5_DECLARE_FLOW_ACT(flow_act);
+ struct mlx5_flow_handle *rule;
+ struct mlx5_flow_spec *spec;
+ int err = 0;
+
+ spec = kvzalloc(sizeof(*spec), GFP_KERNEL);
+ if (!spec)
+ return -ENOMEM;
+
+ spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS;
+ MLX5_SET_TO_ONES(fte_match_param, spec->match_criteria, misc_parameters.source_vhca_port);
+ MLX5_SET(fte_match_param, spec->match_value, misc_parameters.source_vhca_port,
+ MLX5_CAP_GEN(mdev, native_port_num));
+
+ flow_act.action = MLX5_FLOW_CONTEXT_ACTION_FWD_DEST;
+ dst.type = MLX5_FLOW_DESTINATION_TYPE_TABLE_TYPE;
+ dst.ft = roce->goto_alias_ft;
+ rule = mlx5_add_flow_rules(roce->ft, spec, &flow_act, &dst, 1);
+ if (IS_ERR(rule)) {
+ err = PTR_ERR(rule);
+ mlx5_core_err(mdev, "Fail to add TX RoCE IPsec rule err=%d\n",
+ err);
+ goto out;
+ }
+ roce->rule = rule;
+
+ /* No need for a miss rule: on miss we go to the next PRIO, where the
+ * master, if configured, will catch the traffic and steer it to its
+ * encryption table.
+ */
+
+out:
+ kvfree(spec);
+ return err;
+}
+
+#define MLX5_TX_ROCE_GROUP_SIZE BIT(0)
+#define MLX5_IPSEC_RDMA_TX_FT_LEVEL 0
+#define MLX5_IPSEC_NIC_GOTO_ALIAS_FT_LEVEL 3 /* Since last used level in NIC ipsec is 2 */
+
+static int ipsec_fs_roce_tx_mpv_create_ft(struct mlx5_core_dev *mdev,
+ struct mlx5_ipsec_tx_roce *roce,
+ struct mlx5_flow_table *pol_ft,
+ struct mlx5e_priv *peer_priv,
+ bool from_event)
+{
+ struct mlx5_flow_namespace *roce_ns, *nic_ns;
+ struct mlx5_flow_table_attr ft_attr = {};
+ struct mlx5_flow_table next_ft;
+ struct mlx5_flow_table *ft;
+ int err;
+
+ roce_ns = mlx5_get_flow_namespace(peer_priv->mdev, MLX5_FLOW_NAMESPACE_RDMA_TX_IPSEC);
+ if (!roce_ns)
+ return -EOPNOTSUPP;
+
+ nic_ns = mlx5_get_flow_namespace(peer_priv->mdev, MLX5_FLOW_NAMESPACE_EGRESS_IPSEC);
+ if (!nic_ns)
+ return -EOPNOTSUPP;
+
+ err = ipsec_fs_create_aliased_ft(mdev, peer_priv->mdev, pol_ft, &roce->alias_id, roce->key,
+ from_event);
+ if (err)
+ return err;
+
+ next_ft.id = roce->alias_id;
+ ft_attr.max_fte = 1;
+ ft_attr.next_ft = &next_ft;
+ ft_attr.level = MLX5_IPSEC_NIC_GOTO_ALIAS_FT_LEVEL;
+ ft_attr.flags = MLX5_FLOW_TABLE_UNMANAGED;
+ ft = mlx5_create_flow_table(nic_ns, &ft_attr);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec goto alias ft err=%d\n", err);
+ goto destroy_alias;
+ }
+
+ roce->goto_alias_ft = ft;
+
+ memset(&ft_attr, 0, sizeof(ft_attr));
+ ft_attr.max_fte = 1;
+ ft_attr.level = MLX5_IPSEC_RDMA_TX_FT_LEVEL;
+ ft = mlx5_create_flow_table(roce_ns, &ft_attr);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec tx ft err=%d\n", err);
+ goto destroy_alias_ft;
+ }
+
+ roce->ft = ft;
+
+ return 0;
+
+destroy_alias_ft:
+ mlx5_destroy_flow_table(roce->goto_alias_ft);
+destroy_alias:
+ mlx5_cmd_alias_obj_destroy(peer_priv->mdev, roce->alias_id,
+ MLX5_GENERAL_OBJECT_TYPES_FLOW_TABLE_ALIAS);
+ return err;
+}
+
+static int ipsec_fs_roce_tx_mpv_create_group_rules(struct mlx5_core_dev *mdev,
+ struct mlx5_ipsec_tx_roce *roce,
+ struct mlx5_flow_table *pol_ft,
+ u32 *in)
+{
+ struct mlx5_flow_group *g;
+ int ix = 0;
+ int err;
+ u8 *mc;
+
+ mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+ MLX5_SET_TO_ONES(fte_match_param, mc, misc_parameters.source_vhca_port);
+ MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_MISC_PARAMETERS);
+
+ MLX5_SET_CFG(in, start_flow_index, ix);
+ ix += MLX5_TX_ROCE_GROUP_SIZE;
+ MLX5_SET_CFG(in, end_flow_index, ix - 1);
+ g = mlx5_create_flow_group(roce->ft, in);
+ if (IS_ERR(g)) {
+ err = PTR_ERR(g);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec tx group err=%d\n", err);
+ return err;
+ }
+ roce->g = g;
+
+ err = ipsec_fs_roce_tx_mpv_rule_setup(mdev, roce, pol_ft);
+ if (err) {
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec tx rules err=%d\n", err);
+ goto destroy_group;
+ }
+
+ return 0;
+
+destroy_group:
+ mlx5_destroy_flow_group(roce->g);
+ return err;
+}
+
+static int ipsec_fs_roce_tx_mpv_create(struct mlx5_core_dev *mdev,
+ struct mlx5_ipsec_fs *ipsec_roce,
+ struct mlx5_flow_table *pol_ft,
+ u32 *in, bool from_event)
+{
+ struct mlx5_devcom_comp_dev *tmp = NULL;
+ struct mlx5_ipsec_tx_roce *roce;
+ struct mlx5e_priv *peer_priv;
+ int err;
+
+ if (!mlx5_devcom_for_each_peer_begin(*ipsec_roce->devcom))
+ return -EOPNOTSUPP;
+
+ peer_priv = mlx5_devcom_get_next_peer_data(*ipsec_roce->devcom, &tmp);
+ if (!peer_priv) {
+ err = -EOPNOTSUPP;
+ goto release_peer;
+ }
+
+ roce = &ipsec_roce->tx;
+
+ err = ipsec_fs_roce_tx_mpv_create_ft(mdev, roce, pol_ft, peer_priv, from_event);
+ if (err) {
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec tables err=%d\n", err);
+ goto release_peer;
+ }
+
+ err = ipsec_fs_roce_tx_mpv_create_group_rules(mdev, roce, pol_ft, in);
+ if (err) {
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec tx group/rule err=%d\n", err);
+ goto destroy_tables;
+ }
+
+ mlx5_devcom_for_each_peer_end(*ipsec_roce->devcom);
+ return 0;
+
+destroy_tables:
+ mlx5_destroy_flow_table(roce->ft);
+ mlx5_destroy_flow_table(roce->goto_alias_ft);
+ mlx5_cmd_alias_obj_destroy(peer_priv->mdev, roce->alias_id,
+ MLX5_GENERAL_OBJECT_TYPES_FLOW_TABLE_ALIAS);
+release_peer:
+ mlx5_devcom_for_each_peer_end(*ipsec_roce->devcom);
+ return err;
+}
+
+static void roce_rx_mpv_destroy_tables(struct mlx5_core_dev *mdev, struct mlx5_ipsec_rx_roce *roce)
+{
+ mlx5_destroy_flow_table(roce->goto_alias_ft);
+ mlx5_cmd_alias_obj_destroy(mdev, roce->alias_id,
+ MLX5_GENERAL_OBJECT_TYPES_FLOW_TABLE_ALIAS);
+ mlx5_destroy_flow_group(roce->nic_master_group);
+ mlx5_destroy_flow_table(roce->nic_master_ft);
+}
+
+#define MLX5_RX_ROCE_GROUP_SIZE BIT(0)
+#define MLX5_IPSEC_RX_IPV4_FT_LEVEL 3
+#define MLX5_IPSEC_RX_IPV6_FT_LEVEL 2
+
+static int ipsec_fs_roce_rx_mpv_create(struct mlx5_core_dev *mdev,
+ struct mlx5_ipsec_fs *ipsec_roce,
+ struct mlx5_flow_namespace *ns,
+ u32 family, u32 level, u32 prio)
+{
+ struct mlx5_flow_namespace *roce_ns, *nic_ns;
+ struct mlx5_flow_table_attr ft_attr = {};
+ struct mlx5_devcom_comp_dev *tmp = NULL;
+ struct mlx5_ipsec_rx_roce *roce;
+ struct mlx5_flow_table next_ft;
+ struct mlx5_flow_table *ft;
+ struct mlx5_flow_group *g;
+ struct mlx5e_priv *peer_priv;
+ int ix = 0;
+ u32 *in;
+ int err;
+
+ roce = (family == AF_INET) ? &ipsec_roce->ipv4_rx :
+ &ipsec_roce->ipv6_rx;
+
+ if (!mlx5_devcom_for_each_peer_begin(*ipsec_roce->devcom))
+ return -EOPNOTSUPP;
+
+ peer_priv = mlx5_devcom_get_next_peer_data(*ipsec_roce->devcom, &tmp);
+ if (!peer_priv) {
+ err = -EOPNOTSUPP;
+ goto release_peer;
+ }
+
+ roce_ns = mlx5_get_flow_namespace(peer_priv->mdev, MLX5_FLOW_NAMESPACE_RDMA_RX_IPSEC);
+ if (!roce_ns) {
+ err = -EOPNOTSUPP;
+ goto release_peer;
+ }
+
+ nic_ns = mlx5_get_flow_namespace(peer_priv->mdev, MLX5_FLOW_NAMESPACE_KERNEL);
+ if (!nic_ns) {
+ err = -EOPNOTSUPP;
+ goto release_peer;
+ }
+
+ in = kvzalloc(MLX5_ST_SZ_BYTES(create_flow_group_in), GFP_KERNEL);
+ if (!in) {
+ err = -ENOMEM;
+ goto release_peer;
+ }
+
+ ft_attr.level = (family == AF_INET) ? MLX5_IPSEC_RX_IPV4_FT_LEVEL :
+ MLX5_IPSEC_RX_IPV6_FT_LEVEL;
+ ft_attr.max_fte = 1;
+ ft = mlx5_create_flow_table(roce_ns, &ft_attr);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec rx ft at rdma master err=%d\n", err);
+ goto free_in;
+ }
+
+ roce->ft_rdma = ft;
+
+ ft_attr.max_fte = 1;
+ ft_attr.prio = prio;
+ ft_attr.level = level + 2;
+ ft = mlx5_create_flow_table(nic_ns, &ft_attr);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec rx ft at NIC master err=%d\n", err);
+ goto destroy_ft_rdma;
+ }
+ roce->nic_master_ft = ft;
+
+ MLX5_SET_CFG(in, start_flow_index, ix);
+ ix += 1;
+ MLX5_SET_CFG(in, end_flow_index, ix - 1);
+ g = mlx5_create_flow_group(roce->nic_master_ft, in);
+ if (IS_ERR(g)) {
+ err = PTR_ERR(g);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec rx group aliased err=%d\n", err);
+ goto destroy_nic_master_ft;
+ }
+ roce->nic_master_group = g;
+
+ err = ipsec_fs_create_aliased_ft(peer_priv->mdev, mdev, roce->nic_master_ft,
+ &roce->alias_id, roce->key, false);
+ if (err) {
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec rx alias FT err=%d\n", err);
+ goto destroy_group;
+ }
+
+ next_ft.id = roce->alias_id;
+ ft_attr.max_fte = 1;
+ ft_attr.prio = prio;
+ ft_attr.level = roce->ft->level + 1;
+ ft_attr.flags = MLX5_FLOW_TABLE_UNMANAGED;
+ ft_attr.next_ft = &next_ft;
+ ft = mlx5_create_flow_table(ns, &ft_attr);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec rx ft at NIC slave err=%d\n", err);
+ goto destroy_alias;
+ }
+ roce->goto_alias_ft = ft;
+
+ kvfree(in);
+ mlx5_devcom_for_each_peer_end(*ipsec_roce->devcom);
+ return 0;
+
+destroy_alias:
+ mlx5_cmd_alias_obj_destroy(mdev, roce->alias_id,
+ MLX5_GENERAL_OBJECT_TYPES_FLOW_TABLE_ALIAS);
+destroy_group:
+ mlx5_destroy_flow_group(roce->nic_master_group);
+destroy_nic_master_ft:
+ mlx5_destroy_flow_table(roce->nic_master_ft);
+destroy_ft_rdma:
+ mlx5_destroy_flow_table(roce->ft_rdma);
+free_in:
+ kvfree(in);
+release_peer:
+ mlx5_devcom_for_each_peer_end(*ipsec_roce->devcom);
+ return err;
+}
+
+void mlx5_ipsec_fs_roce_tx_destroy(struct mlx5_ipsec_fs *ipsec_roce,
+ struct mlx5_core_dev *mdev)
+{
+ struct mlx5_devcom_comp_dev *tmp = NULL;
struct mlx5_ipsec_tx_roce *tx_roce;
+ struct mlx5e_priv *peer_priv;
if (!ipsec_roce)
return;
tx_roce = &ipsec_roce->tx;
+ if (!tx_roce->ft)
+ return; /* In case RoCE was already cleaned up by the MPV event flow */
+
mlx5_del_flow_rules(tx_roce->rule);
mlx5_destroy_flow_group(tx_roce->g);
mlx5_destroy_flow_table(tx_roce->ft);
-}
-#define MLX5_TX_ROCE_GROUP_SIZE BIT(0)
+ if (!mlx5_core_is_mp_slave(mdev))
+ return;
+
+ if (!mlx5_devcom_for_each_peer_begin(*ipsec_roce->devcom))
+ return;
+
+ peer_priv = mlx5_devcom_get_next_peer_data(*ipsec_roce->devcom, &tmp);
+ if (!peer_priv) {
+ mlx5_devcom_for_each_peer_end(*ipsec_roce->devcom);
+ return;
+ }
+
+ mlx5_destroy_flow_table(tx_roce->goto_alias_ft);
+ mlx5_cmd_alias_obj_destroy(peer_priv->mdev, tx_roce->alias_id,
+ MLX5_GENERAL_OBJECT_TYPES_FLOW_TABLE_ALIAS);
+ mlx5_devcom_for_each_peer_end(*ipsec_roce->devcom);
+ tx_roce->ft = NULL;
+}
int mlx5_ipsec_fs_roce_tx_create(struct mlx5_core_dev *mdev,
struct mlx5_ipsec_fs *ipsec_roce,
- struct mlx5_flow_table *pol_ft)
+ struct mlx5_flow_table *pol_ft,
+ bool from_event)
{
struct mlx5_flow_table_attr ft_attr = {};
struct mlx5_ipsec_tx_roce *roce;
@@ -157,7 +613,14 @@ int mlx5_ipsec_fs_roce_tx_create(struct mlx5_core_dev *mdev,
if (!in)
return -ENOMEM;
+ if (mlx5_core_is_mp_slave(mdev)) {
+ err = ipsec_fs_roce_tx_mpv_create(mdev, ipsec_roce, pol_ft, in, from_event);
+ goto free_in;
+ }
+
ft_attr.max_fte = 1;
+ ft_attr.prio = 1;
+ ft_attr.level = MLX5_IPSEC_RDMA_TX_FT_LEVEL;
ft = mlx5_create_flow_table(roce->ns, &ft_attr);
if (IS_ERR(ft)) {
err = PTR_ERR(ft);
@@ -209,8 +672,10 @@ struct mlx5_flow_table *mlx5_ipsec_fs_roce_ft_get(struct mlx5_ipsec_fs *ipsec_ro
return rx_roce->ft;
}
-void mlx5_ipsec_fs_roce_rx_destroy(struct mlx5_ipsec_fs *ipsec_roce, u32 family)
+void mlx5_ipsec_fs_roce_rx_destroy(struct mlx5_ipsec_fs *ipsec_roce, u32 family,
+ struct mlx5_core_dev *mdev)
{
+ bool is_mpv_slave = mlx5_core_is_mp_slave(mdev);
struct mlx5_ipsec_rx_roce *rx_roce;
if (!ipsec_roce)
@@ -218,23 +683,29 @@ void mlx5_ipsec_fs_roce_rx_destroy(struct mlx5_ipsec_fs *ipsec_roce, u32 family)
rx_roce = (family == AF_INET) ? &ipsec_roce->ipv4_rx :
&ipsec_roce->ipv6_rx;
+ if (!rx_roce->ft)
+ return; /* In case RoCE was already cleaned up by the MPV event flow */
+ if (is_mpv_slave)
+ mlx5_del_flow_rules(rx_roce->nic_master_rule);
mlx5_del_flow_rules(rx_roce->roce_miss.rule);
mlx5_del_flow_rules(rx_roce->rule);
+ if (is_mpv_slave)
+ roce_rx_mpv_destroy_tables(mdev, rx_roce);
mlx5_destroy_flow_table(rx_roce->ft_rdma);
mlx5_destroy_flow_group(rx_roce->roce_miss.group);
mlx5_destroy_flow_group(rx_roce->g);
mlx5_destroy_flow_table(rx_roce->ft);
+ rx_roce->ft = NULL;
}
-#define MLX5_RX_ROCE_GROUP_SIZE BIT(0)
-
int mlx5_ipsec_fs_roce_rx_create(struct mlx5_core_dev *mdev,
struct mlx5_ipsec_fs *ipsec_roce,
struct mlx5_flow_namespace *ns,
struct mlx5_flow_destination *default_dst,
u32 family, u32 level, u32 prio)
{
+ bool is_mpv_slave = mlx5_core_is_mp_slave(mdev);
struct mlx5_flow_table_attr ft_attr = {};
struct mlx5_ipsec_rx_roce *roce;
struct mlx5_flow_table *ft;
@@ -298,18 +769,28 @@ int mlx5_ipsec_fs_roce_rx_create(struct mlx5_core_dev *mdev,
}
roce->roce_miss.group = g;
- memset(&ft_attr, 0, sizeof(ft_attr));
- if (family == AF_INET)
- ft_attr.level = 1;
- ft = mlx5_create_flow_table(roce->ns_rdma, &ft_attr);
- if (IS_ERR(ft)) {
- err = PTR_ERR(ft);
- mlx5_core_err(mdev, "Fail to create RoCE IPsec rx ft at rdma err=%d\n", err);
- goto fail_rdma_table;
+ if (is_mpv_slave) {
+ err = ipsec_fs_roce_rx_mpv_create(mdev, ipsec_roce, ns, family, level, prio);
+ if (err) {
+ mlx5_core_err(mdev, "Fail to create RoCE IPsec rx alias err=%d\n", err);
+ goto fail_mpv_create;
+ }
+ } else {
+ memset(&ft_attr, 0, sizeof(ft_attr));
+ if (family == AF_INET)
+ ft_attr.level = 1;
+ ft_attr.max_fte = 1;
+ ft = mlx5_create_flow_table(roce->ns_rdma, &ft_attr);
+ if (IS_ERR(ft)) {
+ err = PTR_ERR(ft);
+ mlx5_core_err(mdev,
+ "Fail to create RoCE IPsec rx ft at rdma err=%d\n", err);
+ goto fail_rdma_table;
+ }
+
+ roce->ft_rdma = ft;
}
- roce->ft_rdma = ft;
-
err = ipsec_fs_roce_rx_rule_setup(mdev, default_dst, roce);
if (err) {
mlx5_core_err(mdev, "Fail to create RoCE IPsec rx rules err=%d\n", err);
@@ -320,7 +801,10 @@ int mlx5_ipsec_fs_roce_rx_create(struct mlx5_core_dev *mdev,
return 0;
fail_setup_rule:
+ if (is_mpv_slave)
+ roce_rx_mpv_destroy_tables(mdev, roce);
mlx5_destroy_flow_table(roce->ft_rdma);
+fail_mpv_create:
fail_rdma_table:
mlx5_destroy_flow_group(roce->roce_miss.group);
fail_mgroup:
@@ -332,12 +816,24 @@ fail_nomem:
return err;
}
+bool mlx5_ipsec_fs_is_mpv_roce_supported(struct mlx5_core_dev *mdev)
+{
+ if (!mlx5_core_mp_enabled(mdev))
+ return true;
+
+ if (ipsec_fs_create_alias_supported_one(mdev))
+ return true;
+
+ return false;
+}
+
void mlx5_ipsec_fs_roce_cleanup(struct mlx5_ipsec_fs *ipsec_roce)
{
kfree(ipsec_roce);
}
-struct mlx5_ipsec_fs *mlx5_ipsec_fs_roce_init(struct mlx5_core_dev *mdev)
+struct mlx5_ipsec_fs *mlx5_ipsec_fs_roce_init(struct mlx5_core_dev *mdev,
+ struct mlx5_devcom_comp_dev **devcom)
{
struct mlx5_ipsec_fs *roce_ipsec;
struct mlx5_flow_namespace *ns;
@@ -363,6 +859,8 @@ struct mlx5_ipsec_fs *mlx5_ipsec_fs_roce_init(struct mlx5_core_dev *mdev)
roce_ipsec->tx.ns = ns;
+ roce_ipsec->devcom = devcom;
+
return roce_ipsec;
err_tx:
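The core of the new MPV plumbing above is ipsec_fs_create_aliased_ft(): the flow-table owner grants cross-VHCA access under a 32-byte key (freshly randomized unless called from the MPV event path), and the peer then creates an alias object with the same key, referring to the owner's table by vhca_id and encoded table id. Stripped to the two firmware commands, under the same assumptions as the code above (owner_mdev, peer_mdev, ft and key in scope):

        u32 obj_id = (ft->type << FT_ID_FT_TYPE_OFFSET) | ft->id;
        struct mlx5_cmd_allow_other_vhca_access_attr allow = {
                .obj_type = MLX5_GENERAL_OBJECT_TYPES_FLOW_TABLE_ALIAS,
                .obj_id = obj_id,
        };
        struct mlx5_cmd_alias_obj_create_attr alias = {
                .obj_type = MLX5_GENERAL_OBJECT_TYPES_FLOW_TABLE_ALIAS,
                .obj_id = obj_id,
                .vhca_id = MLX5_CAP_GEN(owner_mdev, vhca_id),
        };
        u32 alias_id;
        int err;

        memcpy(allow.access_key, key, ACCESS_KEY_LEN);
        memcpy(alias.access_key, key, ACCESS_KEY_LEN);

        err = mlx5_cmd_allow_other_vhca_access(owner_mdev, &allow);
        if (!err)
                err = mlx5_cmd_alias_obj_create(peer_mdev, &alias, &alias_id);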
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h
index 9712d705fe48..2a1af78309fe 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/ipsec_fs_roce.h
@@ -4,22 +4,28 @@
#ifndef __MLX5_LIB_IPSEC_H__
#define __MLX5_LIB_IPSEC_H__
+#include "lib/devcom.h"
+
struct mlx5_ipsec_fs;
struct mlx5_flow_table *
mlx5_ipsec_fs_roce_ft_get(struct mlx5_ipsec_fs *ipsec_roce, u32 family);
void mlx5_ipsec_fs_roce_rx_destroy(struct mlx5_ipsec_fs *ipsec_roce,
- u32 family);
+ u32 family, struct mlx5_core_dev *mdev);
int mlx5_ipsec_fs_roce_rx_create(struct mlx5_core_dev *mdev,
struct mlx5_ipsec_fs *ipsec_roce,
struct mlx5_flow_namespace *ns,
struct mlx5_flow_destination *default_dst,
u32 family, u32 level, u32 prio);
-void mlx5_ipsec_fs_roce_tx_destroy(struct mlx5_ipsec_fs *ipsec_roce);
+void mlx5_ipsec_fs_roce_tx_destroy(struct mlx5_ipsec_fs *ipsec_roce,
+ struct mlx5_core_dev *mdev);
int mlx5_ipsec_fs_roce_tx_create(struct mlx5_core_dev *mdev,
struct mlx5_ipsec_fs *ipsec_roce,
- struct mlx5_flow_table *pol_ft);
+ struct mlx5_flow_table *pol_ft,
+ bool from_event);
void mlx5_ipsec_fs_roce_cleanup(struct mlx5_ipsec_fs *ipsec_roce);
-struct mlx5_ipsec_fs *mlx5_ipsec_fs_roce_init(struct mlx5_core_dev *mdev);
+struct mlx5_ipsec_fs *mlx5_ipsec_fs_roce_init(struct mlx5_core_dev *mdev,
+ struct mlx5_devcom_comp_dev **devcom);
+bool mlx5_ipsec_fs_is_mpv_roce_supported(struct mlx5_core_dev *mdev);
#endif /* __MLX5_LIB_IPSEC_H__ */
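For the caller side (elsewhere in this series), the widened prototypes translate into roughly the following call shape; the variables are illustrative and the devcom argument is presumably the MLX5_DEVCOM_MPV component added in devcom.h above:

        roce_ipsec = mlx5_ipsec_fs_roce_init(mdev, &devcom);

        err = mlx5_ipsec_fs_roce_tx_create(mdev, roce_ipsec, pol_ft,
                                           false /* not from an MPV event */);
        /* ... RX side and datapath setup ... */
        mlx5_ipsec_fs_roce_tx_destroy(roce_ipsec, mdev);
        mlx5_ipsec_fs_roce_rx_destroy(roce_ipsec, AF_INET, mdev);
        mlx5_ipsec_fs_roce_cleanup(roce_ipsec);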
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 15561965d2af..a17152c1cbb2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -73,6 +73,7 @@
#include "sf/sf.h"
#include "mlx5_irq.h"
#include "hwmon.h"
+#include "lag/lag.h"
MODULE_AUTHOR("Eli Cohen <eli@mellanox.com>");
MODULE_DESCRIPTION("Mellanox 5th generation network adapters (ConnectX series) core driver");
@@ -361,6 +362,12 @@ void mlx5_core_uplink_netdev_event_replay(struct mlx5_core_dev *dev)
}
EXPORT_SYMBOL(mlx5_core_uplink_netdev_event_replay);
+void mlx5_core_mp_event_replay(struct mlx5_core_dev *dev, u32 event, void *data)
+{
+ mlx5_blocking_notifier_call_chain(dev, event, data);
+}
+EXPORT_SYMBOL(mlx5_core_mp_event_replay);
+
int mlx5_core_get_caps_mode(struct mlx5_core_dev *dev, enum mlx5_cap_type cap_type,
enum mlx5_cap_mode cap_mode)
{
@@ -946,6 +953,27 @@ static void mlx5_pci_close(struct mlx5_core_dev *dev)
mlx5_pci_disable_device(dev);
}
+static void mlx5_register_hca_devcom_comp(struct mlx5_core_dev *dev)
+{
+ /* This component is used to sync adding core_dev to lag_dev and to sync
+ * changes of mlx5_adev_devices between LAG layer and other layers.
+ */
+ if (!mlx5_lag_is_supported(dev))
+ return;
+
+ dev->priv.hca_devcom_comp =
+ mlx5_devcom_register_component(dev->priv.devc, MLX5_DEVCOM_HCA_PORTS,
+ mlx5_query_nic_system_image_guid(dev),
+ NULL, dev);
+ if (IS_ERR_OR_NULL(dev->priv.hca_devcom_comp))
+ mlx5_core_err(dev, "Failed to register devcom HCA component\n");
+}
+
+static void mlx5_unregister_hca_devcom_comp(struct mlx5_core_dev *dev)
+{
+ mlx5_devcom_unregister_component(dev->priv.hca_devcom_comp);
+}
+
static int mlx5_init_once(struct mlx5_core_dev *dev)
{
int err;
@@ -954,6 +982,7 @@ static int mlx5_init_once(struct mlx5_core_dev *dev)
if (IS_ERR(dev->priv.devc))
mlx5_core_warn(dev, "failed to register devcom device %ld\n",
PTR_ERR(dev->priv.devc));
+ mlx5_register_hca_devcom_comp(dev);
err = mlx5_query_board_id(dev);
if (err) {
@@ -1088,6 +1117,7 @@ err_eq_cleanup:
err_irq_cleanup:
mlx5_irq_table_cleanup(dev);
err_devcom:
+ mlx5_unregister_hca_devcom_comp(dev);
mlx5_devcom_unregister_device(dev->priv.devc);
return err;
@@ -1117,6 +1147,7 @@ static void mlx5_cleanup_once(struct mlx5_core_dev *dev)
mlx5_events_cleanup(dev);
mlx5_eq_table_cleanup(dev);
mlx5_irq_table_cleanup(dev);
+ mlx5_unregister_hca_devcom_comp(dev);
mlx5_devcom_unregister_device(dev->priv.devc);
}
@@ -1405,9 +1436,9 @@ err_irq_table:
static void mlx5_unload(struct mlx5_core_dev *dev)
{
+ mlx5_eswitch_disable(dev->priv.eswitch);
mlx5_devlink_traps_unregister(priv_to_devlink(dev));
mlx5_sf_dev_table_destroy(dev);
- mlx5_eswitch_disable(dev->priv.eswitch);
mlx5_sriov_detach(dev);
mlx5_lag_remove_mdev(dev);
mlx5_ec_cleanup(dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index 124352459c23..6b14e347d914 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -41,6 +41,7 @@
#include <linux/mlx5/cq.h>
#include <linux/mlx5/fs.h>
#include <linux/mlx5/driver.h>
+#include "lib/devcom.h"
extern uint mlx5_core_debug_mask;
@@ -97,6 +98,22 @@ do { \
__func__, __LINE__, current->pid, \
##__VA_ARGS__)
+#define ACCESS_KEY_LEN 32
+#define FT_ID_FT_TYPE_OFFSET 24
+
+struct mlx5_cmd_allow_other_vhca_access_attr {
+ u16 obj_type;
+ u32 obj_id;
+ u8 access_key[ACCESS_KEY_LEN];
+};
+
+struct mlx5_cmd_alias_obj_create_attr {
+ u32 obj_id;
+ u16 vhca_id;
+ u16 obj_type;
+ u8 access_key[ACCESS_KEY_LEN];
+};
+
static inline void mlx5_printk(struct mlx5_core_dev *dev, int level, const char *format, ...)
{
struct device *device = dev->device;
@@ -143,6 +160,8 @@ enum mlx5_semaphore_space_address {
#define MLX5_DEFAULT_PROF 2
#define MLX5_SF_PROF 3
+#define MLX5_NUM_FW_CMD_THREADS 8
+#define MLX5_DEV_MAX_WQS MLX5_NUM_FW_CMD_THREADS
static inline int mlx5_flexible_inlen(struct mlx5_core_dev *dev, size_t fixed,
size_t item_size, size_t num_items,
@@ -248,10 +267,6 @@ int mlx5_register_device(struct mlx5_core_dev *dev);
void mlx5_unregister_device(struct mlx5_core_dev *dev);
void mlx5_dev_set_lightweight(struct mlx5_core_dev *dev);
bool mlx5_dev_is_lightweight(struct mlx5_core_dev *dev);
-struct mlx5_core_dev *mlx5_get_next_phys_dev_lag(struct mlx5_core_dev *dev);
-void mlx5_dev_list_lock(void);
-void mlx5_dev_list_unlock(void);
-int mlx5_dev_list_trylock(void);
void mlx5_fw_reporters_create(struct mlx5_core_dev *dev);
int mlx5_query_mtpps(struct mlx5_core_dev *dev, u32 *mtpps, u32 mtpps_size);
@@ -290,14 +305,12 @@ static inline int mlx5_rescan_drivers(struct mlx5_core_dev *dev)
{
int ret;
- mlx5_dev_list_lock();
+ mlx5_devcom_comp_lock(dev->priv.hca_devcom_comp);
ret = mlx5_rescan_drivers_locked(dev);
- mlx5_dev_list_unlock();
+ mlx5_devcom_comp_unlock(dev->priv.hca_devcom_comp);
return ret;
}
-void mlx5_lag_update(struct mlx5_core_dev *dev);
-
enum {
MLX5_NIC_IFC_FULL = 0,
MLX5_NIC_IFC_DISABLED = 1,
@@ -331,7 +344,6 @@ int mlx5_vport_set_other_func_cap(struct mlx5_core_dev *dev, const void *hca_cap
#define mlx5_vport_get_other_func_general_cap(dev, vport, out) \
mlx5_vport_get_other_func_cap(dev, vport, out, MLX5_CAP_GENERAL)
-void mlx5_events_work_enqueue(struct mlx5_core_dev *dev, struct work_struct *work);
static inline u32 mlx5_sriov_get_vf_total_msix(struct pci_dev *pdev)
{
struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
@@ -343,6 +355,12 @@ bool mlx5_eth_supported(struct mlx5_core_dev *dev);
bool mlx5_rdma_supported(struct mlx5_core_dev *dev);
bool mlx5_vnet_supported(struct mlx5_core_dev *dev);
bool mlx5_same_hw_devs(struct mlx5_core_dev *dev, struct mlx5_core_dev *peer_dev);
+int mlx5_cmd_allow_other_vhca_access(struct mlx5_core_dev *dev,
+ struct mlx5_cmd_allow_other_vhca_access_attr *attr);
+int mlx5_cmd_alias_obj_create(struct mlx5_core_dev *dev,
+ struct mlx5_cmd_alias_obj_create_attr *alias_attr,
+ u32 *obj_id);
+int mlx5_cmd_alias_obj_destroy(struct mlx5_core_dev *dev, u32 obj_id, u16 obj_type);
static inline u16 mlx5_core_ec_vf_vport_base(const struct mlx5_core_dev *dev)
{
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c
index 05e148db9889..c93492b67788 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.c
@@ -14,17 +14,22 @@
struct mlx5_sf_dev_table {
struct xarray devices;
- unsigned int max_sfs;
phys_addr_t base_address;
u64 sf_bar_length;
struct notifier_block nb;
- struct mutex table_lock; /* Serializes sf life cycle and vhca state change handler */
struct workqueue_struct *active_wq;
struct work_struct work;
u8 stop_active_wq:1;
struct mlx5_core_dev *dev;
};
+struct mlx5_sf_dev_active_work_ctx {
+ struct work_struct work;
+ struct mlx5_vhca_state_event event;
+ struct mlx5_sf_dev_table *table;
+ int sf_index;
+};
+
static bool mlx5_sf_dev_supported(const struct mlx5_core_dev *dev)
{
return MLX5_CAP_GEN(dev, sf) && mlx5_vhca_event_supported(dev);
@@ -110,12 +115,6 @@ static void mlx5_sf_dev_add(struct mlx5_core_dev *dev, u16 sf_index, u16 fn_id,
sf_dev->parent_mdev = dev;
sf_dev->fn_id = fn_id;
- if (!table->max_sfs) {
- mlx5_adev_idx_free(id);
- kfree(sf_dev);
- err = -EOPNOTSUPP;
- goto add_err;
- }
sf_dev->bar_base_addr = table->base_address + (sf_index * table->sf_bar_length);
trace_mlx5_sf_dev_add(dev, sf_dev, id);
@@ -172,7 +171,6 @@ mlx5_sf_dev_state_change_handler(struct notifier_block *nb, unsigned long event_
return 0;
sf_index = event->function_id - base_id;
- mutex_lock(&table->table_lock);
sf_dev = xa_load(&table->devices, sf_index);
switch (event->new_vhca_state) {
case MLX5_VHCA_STATE_INVALID:
@@ -196,7 +194,6 @@ mlx5_sf_dev_state_change_handler(struct notifier_block *nb, unsigned long event_
default:
break;
}
- mutex_unlock(&table->table_lock);
return 0;
}
@@ -221,15 +218,44 @@ static int mlx5_sf_dev_vhca_arm_all(struct mlx5_sf_dev_table *table)
return 0;
}
-static void mlx5_sf_dev_add_active_work(struct work_struct *work)
+static void mlx5_sf_dev_add_active_work(struct work_struct *_work)
{
- struct mlx5_sf_dev_table *table = container_of(work, struct mlx5_sf_dev_table, work);
+ struct mlx5_sf_dev_active_work_ctx *work_ctx;
+
+ work_ctx = container_of(_work, struct mlx5_sf_dev_active_work_ctx, work);
+ if (work_ctx->table->stop_active_wq)
+ goto out;
+ /* Don't probe a device which is already probed */
+ if (!xa_load(&work_ctx->table->devices, work_ctx->sf_index))
+ mlx5_sf_dev_add(work_ctx->table->dev, work_ctx->sf_index,
+ work_ctx->event.function_id, work_ctx->event.sw_function_id);
+ /* There is a race where SF got inactive after the query
+ * above. e.g.: the query returns that the state of the
+ * SF is active, and after that the eswitch manager set it to
+ * inactive.
+ * This case cannot be managed in SW, since the probing of the
+ * SF is on one system, and the inactivation is on a different
+ * system.
+ * If the inactive is done after the SF perform init_hca(),
+ * the SF will fully probe and then removed. If it was
+ * done before init_hca(), the SF probe will fail.
+ */
+out:
+ kfree(work_ctx);
+}
+
+/* In case SFs are generated externally, probe active SFs */
+static void mlx5_sf_dev_queue_active_works(struct work_struct *_work)
+{
+ struct mlx5_sf_dev_table *table = container_of(_work, struct mlx5_sf_dev_table, work);
u32 out[MLX5_ST_SZ_DW(query_vhca_state_out)] = {};
+ struct mlx5_sf_dev_active_work_ctx *work_ctx;
struct mlx5_core_dev *dev = table->dev;
u16 max_functions;
u16 function_id;
u16 sw_func_id;
int err = 0;
+ int wq_idx;
u8 state;
int i;
@@ -249,27 +275,22 @@ static void mlx5_sf_dev_add_active_work(struct work_struct *work)
continue;
sw_func_id = MLX5_GET(query_vhca_state_out, out, vhca_state_context.sw_function_id);
- mutex_lock(&table->table_lock);
- /* Don't probe device which is already probe */
- if (!xa_load(&table->devices, i))
- mlx5_sf_dev_add(dev, i, function_id, sw_func_id);
- /* There is a race where SF got inactive after the query
- * above. e.g.: the query returns that the state of the
- * SF is active, and after that the eswitch manager set it to
- * inactive.
- * This case cannot be managed in SW, since the probing of the
- * SF is on one system, and the inactivation is on a different
- * system.
- * If the inactive is done after the SF perform init_hca(),
- * the SF will fully probe and then removed. If it was
- * done before init_hca(), the SF probe will fail.
- */
- mutex_unlock(&table->table_lock);
+ work_ctx = kzalloc(sizeof(*work_ctx), GFP_KERNEL);
+ if (!work_ctx)
+ return;
+
+ INIT_WORK(&work_ctx->work, &mlx5_sf_dev_add_active_work);
+ work_ctx->event.function_id = function_id;
+ work_ctx->event.sw_function_id = sw_func_id;
+ work_ctx->table = table;
+ work_ctx->sf_index = i;
+ wq_idx = work_ctx->event.function_id % MLX5_DEV_MAX_WQS;
+ mlx5_vhca_events_work_enqueue(dev, wq_idx, &work_ctx->work);
}
}
/* In case SFs are generated externally, probe active SFs */
-static int mlx5_sf_dev_queue_active_work(struct mlx5_sf_dev_table *table)
+static int mlx5_sf_dev_create_active_works(struct mlx5_sf_dev_table *table)
{
if (MLX5_CAP_GEN(table->dev, eswitch_manager))
return 0; /* the table is local */
@@ -280,12 +301,12 @@ static int mlx5_sf_dev_queue_active_work(struct mlx5_sf_dev_table *table)
table->active_wq = create_singlethread_workqueue("mlx5_active_sf");
if (!table->active_wq)
return -ENOMEM;
- INIT_WORK(&table->work, &mlx5_sf_dev_add_active_work);
+ INIT_WORK(&table->work, &mlx5_sf_dev_queue_active_works);
queue_work(table->active_wq, &table->work);
return 0;
}
-static void mlx5_sf_dev_destroy_active_work(struct mlx5_sf_dev_table *table)
+static void mlx5_sf_dev_destroy_active_works(struct mlx5_sf_dev_table *table)
{
if (table->active_wq) {
table->stop_active_wq = true;
@@ -296,7 +317,6 @@ static void mlx5_sf_dev_destroy_active_work(struct mlx5_sf_dev_table *table)
void mlx5_sf_dev_table_create(struct mlx5_core_dev *dev)
{
struct mlx5_sf_dev_table *table;
- unsigned int max_sfs;
int err;
if (!mlx5_sf_dev_supported(dev))
@@ -310,37 +330,30 @@ void mlx5_sf_dev_table_create(struct mlx5_core_dev *dev)
table->nb.notifier_call = mlx5_sf_dev_state_change_handler;
table->dev = dev;
- if (MLX5_CAP_GEN(dev, max_num_sf))
- max_sfs = MLX5_CAP_GEN(dev, max_num_sf);
- else
- max_sfs = 1 << MLX5_CAP_GEN(dev, log_max_sf);
table->sf_bar_length = 1 << (MLX5_CAP_GEN(dev, log_min_sf_size) + 12);
table->base_address = pci_resource_start(dev->pdev, 2);
- table->max_sfs = max_sfs;
xa_init(&table->devices);
- mutex_init(&table->table_lock);
dev->priv.sf_dev_table = table;
err = mlx5_vhca_event_notifier_register(dev, &table->nb);
if (err)
goto vhca_err;
- err = mlx5_sf_dev_queue_active_work(table);
+ err = mlx5_sf_dev_create_active_works(table);
if (err)
goto add_active_err;
err = mlx5_sf_dev_vhca_arm_all(table);
if (err)
goto arm_err;
- mlx5_core_dbg(dev, "SF DEV: max sf devices=%d\n", max_sfs);
return;
arm_err:
- mlx5_sf_dev_destroy_active_work(table);
+ mlx5_sf_dev_destroy_active_works(table);
add_active_err:
mlx5_vhca_event_notifier_unregister(dev, &table->nb);
+ mlx5_vhca_event_work_queues_flush(dev);
vhca_err:
- table->max_sfs = 0;
kfree(table);
dev->priv.sf_dev_table = NULL;
table_err:
@@ -365,9 +378,9 @@ void mlx5_sf_dev_table_destroy(struct mlx5_core_dev *dev)
if (!table)
return;
- mlx5_sf_dev_destroy_active_work(table);
+ mlx5_sf_dev_destroy_active_works(table);
mlx5_vhca_event_notifier_unregister(dev, &table->nb);
- mutex_destroy(&table->table_lock);
+ mlx5_vhca_event_work_queues_flush(dev);
/* Now that event handler is not running, it is safe to destroy
* the sf device without race.
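/*
 * Illustrative sketch (not part of the patch above) of the dispatch pattern
 * the dev.c hunks adopt: each SF event gets its own work context and is
 * queued on one of several single-threaded workqueues selected by
 * function_id, so events for the same SF stay ordered while different SFs
 * are handled in parallel. struct sketch_event, sketch_handle_event() and
 * SKETCH_MAX_WQS are made-up names; the workqueues are assumed to have been
 * created with create_singlethread_workqueue() beforehand.
 */
#include <linux/types.h>
#include <linux/workqueue.h>
#include <linux/slab.h>

#define SKETCH_MAX_WQS 4

struct sketch_event {
	u16 function_id;
};

struct sketch_work_ctx {
	struct work_struct work;
	struct sketch_event event;
};

static struct workqueue_struct *sketch_wqs[SKETCH_MAX_WQS];

static void sketch_work_fn(struct work_struct *work)
{
	struct sketch_work_ctx *ctx =
		container_of(work, struct sketch_work_ctx, work);

	/* Handle the event for ctx->event.function_id here. */
	kfree(ctx);
}

static int sketch_enqueue(u16 function_id)
{
	struct sketch_work_ctx *ctx;
	int wq_idx;

	ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
	if (!ctx)
		return -ENOMEM;

	INIT_WORK(&ctx->work, sketch_work_fn);
	ctx->event.function_id = function_id;
	/* The same function_id always hashes to the same single-threaded queue. */
	wq_idx = function_id % SKETCH_MAX_WQS;
	queue_work(sketch_wqs[wq_idx], &ctx->work);
	return 0;
}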
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.h b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.h
index 2a66a427ef15..b99131e95e37 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/dev.h
@@ -19,6 +19,12 @@ struct mlx5_sf_dev {
u16 fn_id;
};
+struct mlx5_sf_peer_devlink_event_ctx {
+ u16 fn_id;
+ struct devlink *devlink;
+ int err;
+};
+
void mlx5_sf_dev_table_create(struct mlx5_core_dev *dev);
void mlx5_sf_dev_table_destroy(struct mlx5_core_dev *dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
index 8fe82f1191bb..169c2c68ed5c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/dev/driver.c
@@ -8,6 +8,20 @@
#include "dev.h"
#include "devlink.h"
+static int mlx5_core_peer_devlink_set(struct mlx5_sf_dev *sf_dev, struct devlink *devlink)
+{
+ struct mlx5_sf_peer_devlink_event_ctx event_ctx = {
+ .fn_id = sf_dev->fn_id,
+ .devlink = devlink,
+ };
+ int ret;
+
+ ret = mlx5_blocking_notifier_call_chain(sf_dev->parent_mdev,
+ MLX5_DRIVER_EVENT_SF_PEER_DEVLINK,
+ &event_ctx);
+ return ret == NOTIFY_OK ? event_ctx.err : 0;
+}
+
static int mlx5_sf_dev_probe(struct auxiliary_device *adev, const struct auxiliary_device_id *id)
{
struct mlx5_sf_dev *sf_dev = container_of(adev, struct mlx5_sf_dev, adev);
@@ -54,9 +68,21 @@ static int mlx5_sf_dev_probe(struct auxiliary_device *adev, const struct auxilia
mlx5_core_warn(mdev, "mlx5_init_one err=%d\n", err);
goto init_one_err;
}
+
+ err = mlx5_core_peer_devlink_set(sf_dev, devlink);
+ if (err) {
+ mlx5_core_warn(mdev, "mlx5_core_peer_devlink_set err=%d\n", err);
+ goto peer_devlink_set_err;
+ }
+
devlink_register(devlink);
return 0;
+peer_devlink_set_err:
+ if (mlx5_dev_is_lightweight(sf_dev->mdev))
+ mlx5_uninit_one_light(sf_dev->mdev);
+ else
+ mlx5_uninit_one(sf_dev->mdev);
init_one_err:
iounmap(mdev->iseg);
remap_err:
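/*
 * Sketch of the request/response idiom used by mlx5_core_peer_devlink_set()
 * above: the caller packs its arguments plus an 'err' slot into a context
 * struct, sends it down a blocking notifier chain, and treats the request as
 * handled only when a listener returned NOTIFY_OK. Only generic notifier
 * APIs are used; the chain head, event value and field names are
 * illustrative, and sketch_nb would be attached with
 * blocking_notifier_chain_register(&sketch_chain, &sketch_nb).
 */
#include <linux/notifier.h>

static BLOCKING_NOTIFIER_HEAD(sketch_chain);

#define SKETCH_EVENT_SET_PEER 1

struct sketch_event_ctx {
	int request;
	int err;	/* filled in by the listener */
};

static int sketch_listener(struct notifier_block *nb, unsigned long event,
			   void *data)
{
	struct sketch_event_ctx *ctx = data;

	if (event != SKETCH_EVENT_SET_PEER)
		return NOTIFY_DONE;

	ctx->err = 0;	/* do the actual work and record its result */
	return NOTIFY_OK;
}

static struct notifier_block sketch_nb = {
	.notifier_call = sketch_listener,
};

static int sketch_request(int request)
{
	struct sketch_event_ctx ctx = { .request = request };
	int ret;

	ret = blocking_notifier_call_chain(&sketch_chain,
					   SKETCH_EVENT_SET_PEER, &ctx);
	/* If nobody handled the event, fall back to "no error", as above. */
	return ret == NOTIFY_OK ? ctx.err : 0;
}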
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
index e34a8f88c518..6c11e075cab0 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
@@ -20,43 +20,36 @@ struct mlx5_sf {
u16 hw_state;
};
+static void *mlx5_sf_by_dl_port(struct devlink_port *dl_port)
+{
+ struct mlx5_devlink_port *mlx5_dl_port = mlx5_devlink_port_get(dl_port);
+
+ return container_of(mlx5_dl_port, struct mlx5_sf, dl_port);
+}
+
struct mlx5_sf_table {
struct mlx5_core_dev *dev; /* To refer from notifier context. */
- struct xarray port_indices; /* port index based lookup. */
- refcount_t refcount;
- struct completion disable_complete;
+ struct xarray function_ids; /* function id based lookup. */
struct mutex sf_state_lock; /* Serializes sf state among user cmds & vhca event handler. */
struct notifier_block esw_nb;
struct notifier_block vhca_nb;
+ struct notifier_block mdev_nb;
};
static struct mlx5_sf *
-mlx5_sf_lookup_by_index(struct mlx5_sf_table *table, unsigned int port_index)
-{
- return xa_load(&table->port_indices, port_index);
-}
-
-static struct mlx5_sf *
mlx5_sf_lookup_by_function_id(struct mlx5_sf_table *table, unsigned int fn_id)
{
- unsigned long index;
- struct mlx5_sf *sf;
-
- xa_for_each(&table->port_indices, index, sf) {
- if (sf->hw_fn_id == fn_id)
- return sf;
- }
- return NULL;
+ return xa_load(&table->function_ids, fn_id);
}
-static int mlx5_sf_id_insert(struct mlx5_sf_table *table, struct mlx5_sf *sf)
+static int mlx5_sf_function_id_insert(struct mlx5_sf_table *table, struct mlx5_sf *sf)
{
- return xa_insert(&table->port_indices, sf->port_index, sf, GFP_KERNEL);
+ return xa_insert(&table->function_ids, sf->hw_fn_id, sf, GFP_KERNEL);
}
-static void mlx5_sf_id_erase(struct mlx5_sf_table *table, struct mlx5_sf *sf)
+static void mlx5_sf_function_id_erase(struct mlx5_sf_table *table, struct mlx5_sf *sf)
{
- xa_erase(&table->port_indices, sf->port_index);
+ xa_erase(&table->function_ids, sf->hw_fn_id);
}
static struct mlx5_sf *
@@ -93,7 +86,7 @@ mlx5_sf_alloc(struct mlx5_sf_table *table, struct mlx5_eswitch *esw,
sf->hw_state = MLX5_VHCA_STATE_ALLOCATED;
sf->controller = controller;
- err = mlx5_sf_id_insert(table, sf);
+ err = mlx5_sf_function_id_insert(table, sf);
if (err)
goto insert_err;
@@ -111,28 +104,11 @@ id_err:
static void mlx5_sf_free(struct mlx5_sf_table *table, struct mlx5_sf *sf)
{
- mlx5_sf_id_erase(table, sf);
mlx5_sf_hw_table_sf_free(table->dev, sf->controller, sf->id);
trace_mlx5_sf_free(table->dev, sf->port_index, sf->controller, sf->hw_fn_id);
kfree(sf);
}
-static struct mlx5_sf_table *mlx5_sf_table_try_get(struct mlx5_core_dev *dev)
-{
- struct mlx5_sf_table *table = dev->priv.sf_table;
-
- if (!table)
- return NULL;
-
- return refcount_inc_not_zero(&table->refcount) ? table : NULL;
-}
-
-static void mlx5_sf_table_put(struct mlx5_sf_table *table)
-{
- if (refcount_dec_and_test(&table->refcount))
- complete(&table->disable_complete);
-}
-
static enum devlink_port_fn_state mlx5_sf_to_devlink_state(u8 hw_state)
{
switch (hw_state) {
@@ -172,26 +148,14 @@ int mlx5_devlink_sf_port_fn_state_get(struct devlink_port *dl_port,
struct netlink_ext_ack *extack)
{
struct mlx5_core_dev *dev = devlink_priv(dl_port->devlink);
- struct mlx5_sf_table *table;
- struct mlx5_sf *sf;
- int err = 0;
-
- table = mlx5_sf_table_try_get(dev);
- if (!table)
- return -EOPNOTSUPP;
+ struct mlx5_sf_table *table = dev->priv.sf_table;
+ struct mlx5_sf *sf = mlx5_sf_by_dl_port(dl_port);
- sf = mlx5_sf_lookup_by_index(table, dl_port->index);
- if (!sf) {
- err = -EOPNOTSUPP;
- goto sf_err;
- }
mutex_lock(&table->sf_state_lock);
*state = mlx5_sf_to_devlink_state(sf->hw_state);
*opstate = mlx5_sf_to_devlink_opstate(sf->hw_state);
mutex_unlock(&table->sf_state_lock);
-sf_err:
- mlx5_sf_table_put(table);
- return err;
+ return 0;
}
static int mlx5_sf_activate(struct mlx5_core_dev *dev, struct mlx5_sf *sf,
@@ -257,26 +221,10 @@ int mlx5_devlink_sf_port_fn_state_set(struct devlink_port *dl_port,
struct netlink_ext_ack *extack)
{
struct mlx5_core_dev *dev = devlink_priv(dl_port->devlink);
- struct mlx5_sf_table *table;
- struct mlx5_sf *sf;
- int err;
-
- table = mlx5_sf_table_try_get(dev);
- if (!table) {
- NL_SET_ERR_MSG_MOD(extack,
- "Port state set is only supported in eswitch switchdev mode or SF ports are disabled.");
- return -EOPNOTSUPP;
- }
- sf = mlx5_sf_lookup_by_index(table, dl_port->index);
- if (!sf) {
- err = -ENODEV;
- goto out;
- }
+ struct mlx5_sf_table *table = dev->priv.sf_table;
+ struct mlx5_sf *sf = mlx5_sf_by_dl_port(dl_port);
- err = mlx5_sf_state_set(dev, table, sf, state, extack);
-out:
- mlx5_sf_table_put(table);
- return err;
+ return mlx5_sf_state_set(dev, table, sf, state, extack);
}
static int mlx5_sf_add(struct mlx5_core_dev *dev, struct mlx5_sf_table *table,
@@ -335,32 +283,45 @@ mlx5_sf_new_check_attr(struct mlx5_core_dev *dev, const struct devlink_port_new_
return 0;
}
+static bool mlx5_sf_table_supported(const struct mlx5_core_dev *dev)
+{
+ return dev->priv.eswitch && MLX5_ESWITCH_MANAGER(dev) &&
+ mlx5_sf_hw_table_supported(dev);
+}
+
int mlx5_devlink_sf_port_new(struct devlink *devlink,
const struct devlink_port_new_attrs *new_attr,
struct netlink_ext_ack *extack,
struct devlink_port **dl_port)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
- struct mlx5_sf_table *table;
+ struct mlx5_sf_table *table = dev->priv.sf_table;
int err;
err = mlx5_sf_new_check_attr(dev, new_attr, extack);
if (err)
return err;
- table = mlx5_sf_table_try_get(dev);
- if (!table) {
+ if (!mlx5_sf_table_supported(dev)) {
+ NL_SET_ERR_MSG_MOD(extack, "SF ports are not supported.");
+ return -EOPNOTSUPP;
+ }
+
+ if (!is_mdev_switchdev_mode(dev)) {
NL_SET_ERR_MSG_MOD(extack,
- "Port add is only supported in eswitch switchdev mode or SF ports are disabled.");
+ "SF ports are only supported in eswitch switchdev mode.");
return -EOPNOTSUPP;
}
- err = mlx5_sf_add(dev, table, new_attr, extack, dl_port);
- mlx5_sf_table_put(table);
- return err;
+
+ return mlx5_sf_add(dev, table, new_attr, extack, dl_port);
}
static void mlx5_sf_dealloc(struct mlx5_sf_table *table, struct mlx5_sf *sf)
{
+ mutex_lock(&table->sf_state_lock);
+
+ mlx5_sf_function_id_erase(table, sf);
+
if (sf->hw_state == MLX5_VHCA_STATE_ALLOCATED) {
mlx5_sf_free(table, sf);
} else if (mlx5_sf_is_active(sf)) {
@@ -376,6 +337,16 @@ static void mlx5_sf_dealloc(struct mlx5_sf_table *table, struct mlx5_sf *sf)
mlx5_sf_hw_table_sf_deferred_free(table->dev, sf->controller, sf->id);
kfree(sf);
}
+
+ mutex_unlock(&table->sf_state_lock);
+}
+
+static void mlx5_sf_del(struct mlx5_sf_table *table, struct mlx5_sf *sf)
+{
+ struct mlx5_eswitch *esw = table->dev->priv.eswitch;
+
+ mlx5_eswitch_unload_sf_vport(esw, sf->hw_fn_id);
+ mlx5_sf_dealloc(table, sf);
}
int mlx5_devlink_sf_port_del(struct devlink *devlink,
@@ -383,32 +354,11 @@ int mlx5_devlink_sf_port_del(struct devlink *devlink,
struct netlink_ext_ack *extack)
{
struct mlx5_core_dev *dev = devlink_priv(devlink);
- struct mlx5_eswitch *esw = dev->priv.eswitch;
- struct mlx5_sf_table *table;
- struct mlx5_sf *sf;
- int err = 0;
-
- table = mlx5_sf_table_try_get(dev);
- if (!table) {
- NL_SET_ERR_MSG_MOD(extack,
- "Port del is only supported in eswitch switchdev mode or SF ports are disabled.");
- return -EOPNOTSUPP;
- }
- sf = mlx5_sf_lookup_by_index(table, dl_port->index);
- if (!sf) {
- err = -ENODEV;
- goto sf_err;
- }
-
- mlx5_eswitch_unload_sf_vport(esw, sf->hw_fn_id);
- mlx5_sf_id_erase(table, sf);
+ struct mlx5_sf_table *table = dev->priv.sf_table;
+ struct mlx5_sf *sf = mlx5_sf_by_dl_port(dl_port);
- mutex_lock(&table->sf_state_lock);
- mlx5_sf_dealloc(table, sf);
- mutex_unlock(&table->sf_state_lock);
-sf_err:
- mlx5_sf_table_put(table);
- return err;
+ mlx5_sf_del(table, sf);
+ return 0;
}
static bool mlx5_sf_state_update_check(const struct mlx5_sf *sf, u8 new_state)
@@ -433,14 +383,10 @@ static int mlx5_sf_vhca_event(struct notifier_block *nb, unsigned long opcode, v
bool update = false;
struct mlx5_sf *sf;
- table = mlx5_sf_table_try_get(table->dev);
- if (!table)
- return 0;
-
mutex_lock(&table->sf_state_lock);
sf = mlx5_sf_lookup_by_function_id(table, event->function_id);
if (!sf)
- goto sf_err;
+ goto unlock;
/* When driver is attached or detached to a function, an event
* notifies such state change.
@@ -450,46 +396,18 @@ static int mlx5_sf_vhca_event(struct notifier_block *nb, unsigned long opcode, v
sf->hw_state = event->new_vhca_state;
trace_mlx5_sf_update_state(table->dev, sf->port_index, sf->controller,
sf->hw_fn_id, sf->hw_state);
-sf_err:
+unlock:
mutex_unlock(&table->sf_state_lock);
- mlx5_sf_table_put(table);
return 0;
}
-static void mlx5_sf_table_enable(struct mlx5_sf_table *table)
-{
- init_completion(&table->disable_complete);
- refcount_set(&table->refcount, 1);
-}
-
-static void mlx5_sf_deactivate_all(struct mlx5_sf_table *table)
+static void mlx5_sf_del_all(struct mlx5_sf_table *table)
{
- struct mlx5_eswitch *esw = table->dev->priv.eswitch;
unsigned long index;
struct mlx5_sf *sf;
- /* At this point, no new user commands can start and no vhca event can
- * arrive. It is safe to destroy all user created SFs.
- */
- xa_for_each(&table->port_indices, index, sf) {
- mlx5_eswitch_unload_sf_vport(esw, sf->hw_fn_id);
- mlx5_sf_id_erase(table, sf);
- mlx5_sf_dealloc(table, sf);
- }
-}
-
-static void mlx5_sf_table_disable(struct mlx5_sf_table *table)
-{
- if (!refcount_read(&table->refcount))
- return;
-
- /* Balances with refcount_set; drop the reference so that new user cmd cannot start
- * and new vhca event handler cannot run.
- */
- mlx5_sf_table_put(table);
- wait_for_completion(&table->disable_complete);
-
- mlx5_sf_deactivate_all(table);
+ xa_for_each(&table->function_ids, index, sf)
+ mlx5_sf_del(table, sf);
}
static int mlx5_sf_esw_event(struct notifier_block *nb, unsigned long event, void *data)
@@ -498,11 +416,8 @@ static int mlx5_sf_esw_event(struct notifier_block *nb, unsigned long event, voi
const struct mlx5_esw_event_info *mode = data;
switch (mode->new_mode) {
- case MLX5_ESWITCH_OFFLOADS:
- mlx5_sf_table_enable(table);
- break;
case MLX5_ESWITCH_LEGACY:
- mlx5_sf_table_disable(table);
+ mlx5_sf_del_all(table);
break;
default:
break;
@@ -511,10 +426,29 @@ static int mlx5_sf_esw_event(struct notifier_block *nb, unsigned long event, voi
return 0;
}
-static bool mlx5_sf_table_supported(const struct mlx5_core_dev *dev)
+static int mlx5_sf_mdev_event(struct notifier_block *nb, unsigned long event, void *data)
{
- return dev->priv.eswitch && MLX5_ESWITCH_MANAGER(dev) &&
- mlx5_sf_hw_table_supported(dev);
+ struct mlx5_sf_table *table = container_of(nb, struct mlx5_sf_table, mdev_nb);
+ struct mlx5_sf_peer_devlink_event_ctx *event_ctx = data;
+ int ret = NOTIFY_DONE;
+ struct mlx5_sf *sf;
+
+ if (event != MLX5_DRIVER_EVENT_SF_PEER_DEVLINK)
+ return NOTIFY_DONE;
+
+
+ mutex_lock(&table->sf_state_lock);
+ sf = mlx5_sf_lookup_by_function_id(table, event_ctx->fn_id);
+ if (!sf)
+ goto out;
+
+ event_ctx->err = devl_port_fn_devlink_set(&sf->dl_port.dl_port,
+ event_ctx->devlink);
+
+ ret = NOTIFY_OK;
+out:
+ mutex_unlock(&table->sf_state_lock);
+ return ret;
}
int mlx5_sf_table_init(struct mlx5_core_dev *dev)
@@ -531,9 +465,8 @@ int mlx5_sf_table_init(struct mlx5_core_dev *dev)
mutex_init(&table->sf_state_lock);
table->dev = dev;
- xa_init(&table->port_indices);
+ xa_init(&table->function_ids);
dev->priv.sf_table = table;
- refcount_set(&table->refcount, 0);
table->esw_nb.notifier_call = mlx5_sf_esw_event;
err = mlx5_esw_event_notifier_register(dev->priv.eswitch, &table->esw_nb);
if (err)
@@ -544,6 +477,9 @@ int mlx5_sf_table_init(struct mlx5_core_dev *dev)
if (err)
goto vhca_err;
+ table->mdev_nb.notifier_call = mlx5_sf_mdev_event;
+ mlx5_blocking_notifier_register(dev, &table->mdev_nb);
+
return 0;
vhca_err:
@@ -562,10 +498,10 @@ void mlx5_sf_table_cleanup(struct mlx5_core_dev *dev)
if (!table)
return;
+ mlx5_blocking_notifier_unregister(dev, &table->mdev_nb);
mlx5_vhca_event_notifier_unregister(table->dev, &table->vhca_nb);
mlx5_esw_event_notifier_unregister(dev->priv.eswitch, &table->esw_nb);
- WARN_ON(refcount_read(&table->refcount));
mutex_destroy(&table->sf_state_lock);
- WARN_ON(!xa_empty(&table->port_indices));
+ WARN_ON(!xa_empty(&table->function_ids));
kfree(table);
}
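/*
 * Sketch of the xarray usage that replaces the linear port-index walk in the
 * devlink.c hunks above: SF entries are stored keyed directly by hardware
 * function id, so the vhca event handler can use a single xa_load() instead
 * of an xa_for_each() scan. struct sketch_sf is an illustrative stand-in.
 */
#include <linux/types.h>
#include <linux/xarray.h>

struct sketch_sf {
	u16 hw_fn_id;
};

static DEFINE_XARRAY(sketch_function_ids);

static int sketch_sf_insert(struct sketch_sf *sf)
{
	return xa_insert(&sketch_function_ids, sf->hw_fn_id, sf, GFP_KERNEL);
}

static struct sketch_sf *sketch_sf_lookup(u16 fn_id)
{
	return xa_load(&sketch_function_ids, fn_id);
}

static void sketch_sf_erase(struct sketch_sf *sf)
{
	xa_erase(&sketch_function_ids, sf->hw_fn_id);
}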
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.c
index d908fba968f0..cda01ba441ae 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.c
@@ -21,6 +21,15 @@ struct mlx5_vhca_event_work {
struct mlx5_vhca_state_event event;
};
+struct mlx5_vhca_event_handler {
+ struct workqueue_struct *wq;
+};
+
+struct mlx5_vhca_events {
+ struct mlx5_core_dev *dev;
+ struct mlx5_vhca_event_handler handler[MLX5_DEV_MAX_WQS];
+};
+
int mlx5_cmd_query_vhca_state(struct mlx5_core_dev *dev, u16 function_id, u32 *out, u32 outlen)
{
u32 in[MLX5_ST_SZ_DW(query_vhca_state_in)] = {};
@@ -99,6 +108,11 @@ static void mlx5_vhca_state_work_handler(struct work_struct *_work)
kfree(work);
}
+void mlx5_vhca_events_work_enqueue(struct mlx5_core_dev *dev, int idx, struct work_struct *work)
+{
+ queue_work(dev->priv.vhca_events->handler[idx].wq, work);
+}
+
static int
mlx5_vhca_state_change_notifier(struct notifier_block *nb, unsigned long type, void *data)
{
@@ -106,6 +120,7 @@ mlx5_vhca_state_change_notifier(struct notifier_block *nb, unsigned long type, v
mlx5_nb_cof(nb, struct mlx5_vhca_state_notifier, nb);
struct mlx5_vhca_event_work *work;
struct mlx5_eqe *eqe = data;
+ int wq_idx;
work = kzalloc(sizeof(*work), GFP_ATOMIC);
if (!work)
@@ -113,7 +128,8 @@ mlx5_vhca_state_change_notifier(struct notifier_block *nb, unsigned long type, v
INIT_WORK(&work->work, &mlx5_vhca_state_work_handler);
work->notifier = notifier;
work->event.function_id = be16_to_cpu(eqe->data.vhca_state.function_id);
- mlx5_events_work_enqueue(notifier->dev, &work->work);
+ wq_idx = work->event.function_id % MLX5_DEV_MAX_WQS;
+ mlx5_vhca_events_work_enqueue(notifier->dev, wq_idx, &work->work);
return NOTIFY_OK;
}
@@ -132,28 +148,75 @@ void mlx5_vhca_state_cap_handle(struct mlx5_core_dev *dev, void *set_hca_cap)
int mlx5_vhca_event_init(struct mlx5_core_dev *dev)
{
struct mlx5_vhca_state_notifier *notifier;
+ char wq_name[MLX5_CMD_WQ_MAX_NAME];
+ struct mlx5_vhca_events *events;
+ int err, i;
if (!mlx5_vhca_event_supported(dev))
return 0;
- notifier = kzalloc(sizeof(*notifier), GFP_KERNEL);
- if (!notifier)
+ events = kzalloc(sizeof(*events), GFP_KERNEL);
+ if (!events)
return -ENOMEM;
+ events->dev = dev;
+ dev->priv.vhca_events = events;
+ for (i = 0; i < MLX5_DEV_MAX_WQS; i++) {
+ snprintf(wq_name, MLX5_CMD_WQ_MAX_NAME, "mlx5_vhca_event%d", i);
+ events->handler[i].wq = create_singlethread_workqueue(wq_name);
+ if (!events->handler[i].wq) {
+ err = -ENOMEM;
+ goto err_create_wq;
+ }
+ }
+
+ notifier = kzalloc(sizeof(*notifier), GFP_KERNEL);
+ if (!notifier) {
+ err = -ENOMEM;
+ goto err_notifier;
+ }
+
dev->priv.vhca_state_notifier = notifier;
notifier->dev = dev;
BLOCKING_INIT_NOTIFIER_HEAD(&notifier->n_head);
MLX5_NB_INIT(&notifier->nb, mlx5_vhca_state_change_notifier, VHCA_STATE_CHANGE);
return 0;
+
+err_notifier:
+err_create_wq:
+ for (--i; i >= 0; i--)
+ destroy_workqueue(events->handler[i].wq);
+ kfree(events);
+ return err;
+}
+
+void mlx5_vhca_event_work_queues_flush(struct mlx5_core_dev *dev)
+{
+ struct mlx5_vhca_events *vhca_events;
+ int i;
+
+ if (!mlx5_vhca_event_supported(dev))
+ return;
+
+ vhca_events = dev->priv.vhca_events;
+ for (i = 0; i < MLX5_DEV_MAX_WQS; i++)
+ flush_workqueue(vhca_events->handler[i].wq);
}
void mlx5_vhca_event_cleanup(struct mlx5_core_dev *dev)
{
+ struct mlx5_vhca_events *vhca_events;
+ int i;
+
if (!mlx5_vhca_event_supported(dev))
return;
kfree(dev->priv.vhca_state_notifier);
dev->priv.vhca_state_notifier = NULL;
+ vhca_events = dev->priv.vhca_events;
+ for (i = 0; i < MLX5_DEV_MAX_WQS; i++)
+ destroy_workqueue(vhca_events->handler[i].wq);
+ kvfree(vhca_events);
}
void mlx5_vhca_event_start(struct mlx5_core_dev *dev)
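/*
 * Minimal sketch of the error-unwind idiom used in mlx5_vhca_event_init()
 * above: workqueues created in a loop are destroyed in reverse order when a
 * later step fails, so only successfully created resources are torn down.
 * SKETCH_NUM_WQS and the sketch_* names are illustrative.
 */
#include <linux/workqueue.h>

#define SKETCH_NUM_WQS 4

static struct workqueue_struct *sketch_wq[SKETCH_NUM_WQS];

static int sketch_init(void)
{
	int err, i;

	for (i = 0; i < SKETCH_NUM_WQS; i++) {
		sketch_wq[i] = create_singlethread_workqueue("sketch_wq");
		if (!sketch_wq[i]) {
			err = -ENOMEM;
			goto err_create_wq;
		}
	}
	return 0;

err_create_wq:
	/* Destroy only the queues that were successfully created. */
	for (--i; i >= 0; i--)
		destroy_workqueue(sketch_wq[i]);
	return err;
}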
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.h b/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.h
index 013cdfe90616..1725ba64f8af 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/vhca_event.h
@@ -28,6 +28,9 @@ int mlx5_modify_vhca_sw_id(struct mlx5_core_dev *dev, u16 function_id, u32 sw_fn
int mlx5_vhca_event_arm(struct mlx5_core_dev *dev, u16 function_id);
int mlx5_cmd_query_vhca_state(struct mlx5_core_dev *dev, u16 function_id,
u32 *out, u32 outlen);
+void mlx5_vhca_events_work_enqueue(struct mlx5_core_dev *dev, int idx, struct work_struct *work);
+void mlx5_vhca_event_work_queues_flush(struct mlx5_core_dev *dev);
+
#else
static inline void mlx5_vhca_state_cap_handle(struct mlx5_core_dev *dev, void *set_hca_cap)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
index 5b83da08692d..6ea88a581804 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_action.c
@@ -55,6 +55,12 @@ static const char *dr_action_id_to_str(enum mlx5dr_action_type action_id)
return action_type_to_str[action_id];
}
+static bool mlx5dr_action_supp_fwd_fdb_multi_ft(struct mlx5_core_dev *dev)
+{
+ return (MLX5_CAP_ESW_FLOWTABLE(dev, fdb_multi_path_any_table_limit_regc) ||
+ MLX5_CAP_ESW_FLOWTABLE(dev, fdb_multi_path_any_table));
+}
+
static const enum dr_action_valid_state
next_action_state[DR_ACTION_DOMAIN_MAX][DR_ACTION_STATE_MAX][DR_ACTION_TYP_MAX] = {
[DR_ACTION_DOMAIN_NIC_INGRESS] = {
@@ -1163,12 +1169,16 @@ mlx5dr_action_create_mult_dest_tbl(struct mlx5dr_domain *dmn,
bool ignore_flow_level,
u32 flow_source)
{
+ struct mlx5dr_cmd_flow_destination_hw_info tmp_hw_dest;
struct mlx5dr_cmd_flow_destination_hw_info *hw_dests;
struct mlx5dr_action **ref_actions;
struct mlx5dr_action *action;
bool reformat_req = false;
+ bool is_ft_wire = false;
+ u16 num_dst_ft = 0;
u32 num_of_ref = 0;
u32 ref_act_cnt;
+ u16 last_dest;
int ret;
int i;
@@ -1210,11 +1220,22 @@ mlx5dr_action_create_mult_dest_tbl(struct mlx5dr_domain *dmn,
break;
case DR_ACTION_TYP_FT:
+ if (num_dst_ft &&
+ !mlx5dr_action_supp_fwd_fdb_multi_ft(dmn->mdev)) {
+ mlx5dr_dbg(dmn, "multiple FT destinations not supported\n");
+ goto free_ref_actions;
+ }
+ num_dst_ft++;
hw_dests[i].type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
- if (dest_action->dest_tbl->is_fw_tbl)
+ if (dest_action->dest_tbl->is_fw_tbl) {
hw_dests[i].ft_id = dest_action->dest_tbl->fw_tbl.id;
- else
+ } else {
hw_dests[i].ft_id = dest_action->dest_tbl->tbl->table_id;
+ if (dest_action->dest_tbl->is_wire_ft) {
+ is_ft_wire = true;
+ last_dest = i;
+ }
+ }
break;
default:
@@ -1223,6 +1244,16 @@ mlx5dr_action_create_mult_dest_tbl(struct mlx5dr_domain *dmn,
}
}
+ /* In multi-destination, the FW iterates over the destinations in the RX,
+ * except for the last one, which is handled in the TX.
+ * So, if one of the FT targets is the wire, put it at the end of the dest list.
+ */
+ if (is_ft_wire && num_dst_ft > 1) {
+ tmp_hw_dest = hw_dests[last_dest];
+ hw_dests[last_dest] = hw_dests[num_of_dests - 1];
+ hw_dests[num_of_dests - 1] = tmp_hw_dest;
+ }
+
action = dr_action_create_generic(DR_ACTION_TYP_FT);
if (!action)
goto free_ref_actions;
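/*
 * Userspace sketch of the destination reordering done in the dr_action.c
 * hunk above: if one of the flow-table destinations is the wire (uplink)
 * table, it is swapped to the end of the array, since FW processes only the
 * last destination in the TX path. struct sketch_dest is illustrative.
 */
#include <stdbool.h>
#include <stddef.h>

struct sketch_dest {
	unsigned int ft_id;
	bool is_wire;
};

static void sketch_move_wire_last(struct sketch_dest *dests, size_t num)
{
	struct sketch_dest tmp;
	size_t i, wire = num;

	for (i = 0; i < num; i++)
		if (dests[i].is_wire)
			wire = i;

	if (wire == num || wire == num - 1)
		return;	/* no wire destination, or already last */

	tmp = dests[wire];
	dests[wire] = dests[num - 1];
	dests[num - 1] = tmp;
}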
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
index 6c59de3e28f6..81eff6c410ce 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_types.h
@@ -436,10 +436,6 @@ void mlx5dr_ste_build_mpls(struct mlx5dr_ste_ctx *ste_ctx,
struct mlx5dr_ste_build *sb,
struct mlx5dr_match_param *mask,
bool inner, bool rx);
-void mlx5dr_ste_build_tnl_mpls(struct mlx5dr_ste_ctx *ste_ctx,
- struct mlx5dr_ste_build *sb,
- struct mlx5dr_match_param *mask,
- bool inner, bool rx);
void mlx5dr_ste_build_tnl_mpls_over_gre(struct mlx5dr_ste_ctx *ste_ctx,
struct mlx5dr_ste_build *sb,
struct mlx5dr_match_param *mask,
@@ -1064,6 +1060,7 @@ struct mlx5dr_action_sampler {
struct mlx5dr_action_dest_tbl {
u8 is_fw_tbl:1;
+ u8 is_wire_ft:1;
union {
struct mlx5dr_table *tbl;
struct {
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
index 14f6df88b1f9..50c2554c9ccf 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/fs_dr.c
@@ -209,10 +209,17 @@ static struct mlx5dr_action *create_ft_action(struct mlx5dr_domain *domain,
struct mlx5_flow_rule *dst)
{
struct mlx5_flow_table *dest_ft = dst->dest_attr.ft;
+ struct mlx5dr_action *tbl_action;
if (mlx5dr_is_fw_table(dest_ft))
return mlx5dr_action_create_dest_flow_fw_table(domain, dest_ft);
- return mlx5dr_action_create_dest_table(dest_ft->fs_dr_table.dr_table);
+
+ tbl_action = mlx5dr_action_create_dest_table(dest_ft->fs_dr_table.dr_table);
+ if (tbl_action)
+ tbl_action->dest_tbl->is_wire_ft =
+ dest_ft->flags & MLX5_FLOW_TABLE_UPLINK_VPORT ? 1 : 0;
+
+ return tbl_action;
}
static struct mlx5dr_action *create_range_action(struct mlx5dr_domain *domain,
diff --git a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
index 694de9513b9f..954ba0826c61 100644
--- a/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
+++ b/drivers/net/ethernet/mellanox/mlxbf_gige/mlxbf_gige_main.c
@@ -471,7 +471,7 @@ out:
return err;
}
-static int mlxbf_gige_remove(struct platform_device *pdev)
+static void mlxbf_gige_remove(struct platform_device *pdev)
{
struct mlxbf_gige *priv = platform_get_drvdata(pdev);
@@ -479,8 +479,6 @@ static int mlxbf_gige_remove(struct platform_device *pdev)
phy_disconnect(priv->netdev->phydev);
mlxbf_gige_mdio_remove(priv);
platform_set_drvdata(pdev, NULL);
-
- return 0;
}
static void mlxbf_gige_shutdown(struct platform_device *pdev)
@@ -499,7 +497,7 @@ MODULE_DEVICE_TABLE(acpi, mlxbf_gige_acpi_match);
static struct platform_driver mlxbf_gige_driver = {
.probe = mlxbf_gige_probe,
- .remove = mlxbf_gige_remove,
+ .remove_new = mlxbf_gige_remove,
.shutdown = mlxbf_gige_shutdown,
.driver = {
.name = KBUILD_MODNAME,
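/*
 * Sketch of the .remove_new convention the mlxbf_gige hunk above converts
 * to: the new remove callback returns void, since the platform core ignored
 * the old int return value anyway. Everything except the standard
 * struct platform_driver fields is an illustrative placeholder.
 */
#include <linux/module.h>
#include <linux/platform_device.h>

static int sketch_probe(struct platform_device *pdev)
{
	return 0;
}

static void sketch_remove(struct platform_device *pdev)
{
	/* Tear down; no error can be reported at remove time. */
}

static struct platform_driver sketch_driver = {
	.probe = sketch_probe,
	.remove_new = sketch_remove,
	.driver = {
		.name = "sketch",
	},
};
module_platform_driver(sketch_driver);

MODULE_LICENSE("GPL");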
diff --git a/drivers/net/ethernet/mellanox/mlxsw/cmd.h b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
index 09bef04b11d1..e827c78be114 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/cmd.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/cmd.h
@@ -276,6 +276,12 @@ MLXSW_ITEM32(cmd_mbox, query_fw, fw_month, 0x14, 8, 8);
*/
MLXSW_ITEM32(cmd_mbox, query_fw, fw_day, 0x14, 0, 8);
+/* cmd_mbox_query_fw_lag_mode_support
+ * 0: CONFIG_PROFILE.lag_mode is not supported by FW
+ * 1: CONFIG_PROFILE.lag_mode is supported by FW
+ */
+MLXSW_ITEM32(cmd_mbox, query_fw, lag_mode_support, 0x18, 1, 1);
+
/* cmd_mbox_query_fw_clr_int_base_offset
* Clear Interrupt register's offset from clr_int_bar register
* in PCI address space.
@@ -659,42 +665,48 @@ MLXSW_ITEM32(cmd_mbox, config_profile,
*/
MLXSW_ITEM32(cmd_mbox, config_profile, set_ar_sec, 0x0C, 15, 1);
-/* cmd_mbox_config_set_ubridge
+/* cmd_mbox_config_profile_set_ubridge
* Capability bit. Setting a bit to 1 configures the profile
* according to the mailbox contents.
*/
MLXSW_ITEM32(cmd_mbox, config_profile, set_ubridge, 0x0C, 22, 1);
-/* cmd_mbox_config_set_kvd_linear_size
+/* cmd_mbox_config_profile_set_kvd_linear_size
* Capability bit. Setting a bit to 1 configures the profile
* according to the mailbox contents.
*/
MLXSW_ITEM32(cmd_mbox, config_profile, set_kvd_linear_size, 0x0C, 24, 1);
-/* cmd_mbox_config_set_kvd_hash_single_size
+/* cmd_mbox_config_profile_set_kvd_hash_single_size
* Capability bit. Setting a bit to 1 configures the profile
* according to the mailbox contents.
*/
MLXSW_ITEM32(cmd_mbox, config_profile, set_kvd_hash_single_size, 0x0C, 25, 1);
-/* cmd_mbox_config_set_kvd_hash_double_size
+/* cmd_mbox_config_profile_set_kvd_hash_double_size
* Capability bit. Setting a bit to 1 configures the profile
* according to the mailbox contents.
*/
MLXSW_ITEM32(cmd_mbox, config_profile, set_kvd_hash_double_size, 0x0C, 26, 1);
-/* cmd_mbox_config_set_cqe_version
+/* cmd_mbox_config_profile_set_cqe_version
* Capability bit. Setting a bit to 1 configures the profile
* according to the mailbox contents.
*/
MLXSW_ITEM32(cmd_mbox, config_profile, set_cqe_version, 0x08, 0, 1);
-/* cmd_mbox_config_set_cqe_time_stamp_type
+/* cmd_mbox_config_profile_set_cqe_time_stamp_type
* Capability bit. Setting a bit to 1 configures the profile
* according to the mailbox contents.
*/
MLXSW_ITEM32(cmd_mbox, config_profile, set_cqe_time_stamp_type, 0x08, 2, 1);
+/* cmd_mbox_config_profile_set_lag_mode
+ * Capability bit. Setting a bit to 1 configures the lag_mode
+ * according to the mailbox contents.
+ */
+MLXSW_ITEM32(cmd_mbox, config_profile, set_lag_mode, 0x08, 7, 1);
+
/* cmd_mbox_config_profile_max_vepa_channels
* Maximum number of VEPA channels per port (0 through 16)
* 0 - multi-channel VEPA is disabled
@@ -840,6 +852,21 @@ MLXSW_ITEM32(cmd_mbox, config_profile, arn, 0x50, 31, 1);
*/
MLXSW_ITEM32(cmd_mbox, config_profile, ubridge, 0x50, 4, 1);
+enum mlxsw_cmd_mbox_config_profile_lag_mode {
+ /* FW manages PGT LAG table */
+ MLXSW_CMD_MBOX_CONFIG_PROFILE_LAG_MODE_FW,
+ /* SW manages PGT LAG table */
+ MLXSW_CMD_MBOX_CONFIG_PROFILE_LAG_MODE_SW,
+};
+
+/* cmd_mbox_config_profile_lag_mode
+ * LAG mode
+ * Configured if set_lag_mode is set
+ * Supported from Spectrum-2 and above.
+ * Supported only when ubridge = 1
+ */
+MLXSW_ITEM32(cmd_mbox, config_profile, lag_mode, 0x50, 3, 1);
+
/* cmd_mbox_config_kvd_linear_size
* KVD Linear Size
* Valid for Spectrum only
@@ -847,7 +874,7 @@ MLXSW_ITEM32(cmd_mbox, config_profile, ubridge, 0x50, 4, 1);
*/
MLXSW_ITEM32(cmd_mbox, config_profile, kvd_linear_size, 0x54, 0, 24);
-/* cmd_mbox_config_kvd_hash_single_size
+/* cmd_mbox_config_profile_kvd_hash_single_size
* KVD Hash single-entries size
* Valid for Spectrum only
* Allowed values are 128*N where N=0 or higher
@@ -856,7 +883,7 @@ MLXSW_ITEM32(cmd_mbox, config_profile, kvd_linear_size, 0x54, 0, 24);
*/
MLXSW_ITEM32(cmd_mbox, config_profile, kvd_hash_single_size, 0x58, 0, 24);
-/* cmd_mbox_config_kvd_hash_double_size
+/* cmd_mbox_config_profile_kvd_hash_double_size
* KVD Hash double-entries size (units of single-size entries)
* Valid for Spectrum only
* Allowed values are 128*N where N=0 or higher
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
index 1ccf3b73ed72..f23421f038f3 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
@@ -204,6 +204,13 @@ int mlxsw_core_max_lag(struct mlxsw_core *mlxsw_core, u16 *p_max_lag)
}
EXPORT_SYMBOL(mlxsw_core_max_lag);
+enum mlxsw_cmd_mbox_config_profile_lag_mode
+mlxsw_core_lag_mode(struct mlxsw_core *mlxsw_core)
+{
+ return mlxsw_core->bus->lag_mode(mlxsw_core->bus_priv);
+}
+EXPORT_SYMBOL(mlxsw_core_lag_mode);
+
void *mlxsw_core_driver_priv(struct mlxsw_core *mlxsw_core)
{
return mlxsw_core->driver_priv;
@@ -1792,122 +1799,78 @@ static void mlxsw_core_health_listener_func(const struct mlxsw_reg_info *reg,
static const struct mlxsw_listener mlxsw_core_health_listener =
MLXSW_CORE_EVENTL(mlxsw_core_health_listener_func, MFDE);
-static int
+static void
mlxsw_core_health_fw_fatal_dump_fatal_cause(const char *mfde_pl,
struct devlink_fmsg *fmsg)
{
u32 val, tile_v;
- int err;
val = mlxsw_reg_mfde_fatal_cause_id_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "cause_id", val);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "cause_id", val);
tile_v = mlxsw_reg_mfde_fatal_cause_tile_v_get(mfde_pl);
if (tile_v) {
val = mlxsw_reg_mfde_fatal_cause_tile_index_get(mfde_pl);
- err = devlink_fmsg_u8_pair_put(fmsg, "tile_index", val);
- if (err)
- return err;
+ devlink_fmsg_u8_pair_put(fmsg, "tile_index", val);
}
-
- return 0;
}
-static int
+static void
mlxsw_core_health_fw_fatal_dump_fw_assert(const char *mfde_pl,
struct devlink_fmsg *fmsg)
{
u32 val, tile_v;
- int err;
val = mlxsw_reg_mfde_fw_assert_var0_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "var0", val);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "var0", val);
val = mlxsw_reg_mfde_fw_assert_var1_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "var1", val);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "var1", val);
val = mlxsw_reg_mfde_fw_assert_var2_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "var2", val);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "var2", val);
val = mlxsw_reg_mfde_fw_assert_var3_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "var3", val);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "var3", val);
val = mlxsw_reg_mfde_fw_assert_var4_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "var4", val);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "var4", val);
val = mlxsw_reg_mfde_fw_assert_existptr_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "existptr", val);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "existptr", val);
val = mlxsw_reg_mfde_fw_assert_callra_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "callra", val);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "callra", val);
val = mlxsw_reg_mfde_fw_assert_oe_get(mfde_pl);
- err = devlink_fmsg_bool_pair_put(fmsg, "old_event", val);
- if (err)
- return err;
+ devlink_fmsg_bool_pair_put(fmsg, "old_event", val);
tile_v = mlxsw_reg_mfde_fw_assert_tile_v_get(mfde_pl);
if (tile_v) {
val = mlxsw_reg_mfde_fw_assert_tile_index_get(mfde_pl);
- err = devlink_fmsg_u8_pair_put(fmsg, "tile_index", val);
- if (err)
- return err;
+ devlink_fmsg_u8_pair_put(fmsg, "tile_index", val);
}
val = mlxsw_reg_mfde_fw_assert_ext_synd_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "ext_synd", val);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_u32_pair_put(fmsg, "ext_synd", val);
}
-static int
+static void
mlxsw_core_health_fw_fatal_dump_kvd_im_stop(const char *mfde_pl,
struct devlink_fmsg *fmsg)
{
u32 val;
- int err;
val = mlxsw_reg_mfde_kvd_im_stop_oe_get(mfde_pl);
- err = devlink_fmsg_bool_pair_put(fmsg, "old_event", val);
- if (err)
- return err;
+ devlink_fmsg_bool_pair_put(fmsg, "old_event", val);
val = mlxsw_reg_mfde_kvd_im_stop_pipes_mask_get(mfde_pl);
- return devlink_fmsg_u32_pair_put(fmsg, "pipes_mask", val);
+ devlink_fmsg_u32_pair_put(fmsg, "pipes_mask", val);
}
-static int
+static void
mlxsw_core_health_fw_fatal_dump_crspace_to(const char *mfde_pl,
struct devlink_fmsg *fmsg)
{
u32 val;
- int err;
val = mlxsw_reg_mfde_crspace_to_log_address_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "log_address", val);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "log_address", val);
val = mlxsw_reg_mfde_crspace_to_oe_get(mfde_pl);
- err = devlink_fmsg_bool_pair_put(fmsg, "old_event", val);
- if (err)
- return err;
+ devlink_fmsg_bool_pair_put(fmsg, "old_event", val);
val = mlxsw_reg_mfde_crspace_to_log_id_get(mfde_pl);
- err = devlink_fmsg_u8_pair_put(fmsg, "log_irisc_id", val);
- if (err)
- return err;
+ devlink_fmsg_u8_pair_put(fmsg, "log_irisc_id", val);
val = mlxsw_reg_mfde_crspace_to_log_ip_get(mfde_pl);
- err = devlink_fmsg_u64_pair_put(fmsg, "log_ip", val);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_u64_pair_put(fmsg, "log_ip", val);
}
static int mlxsw_core_health_fw_fatal_dump(struct devlink_health_reporter *reporter,
@@ -1918,24 +1881,17 @@ static int mlxsw_core_health_fw_fatal_dump(struct devlink_health_reporter *repor
char *val_str;
u8 event_id;
u32 val;
- int err;
if (!priv_ctx)
/* User-triggered dumps are not possible */
return -EOPNOTSUPP;
val = mlxsw_reg_mfde_irisc_id_get(mfde_pl);
- err = devlink_fmsg_u8_pair_put(fmsg, "irisc_id", val);
- if (err)
- return err;
- err = devlink_fmsg_arr_pair_nest_start(fmsg, "event");
- if (err)
- return err;
+ devlink_fmsg_u8_pair_put(fmsg, "irisc_id", val);
+ devlink_fmsg_arr_pair_nest_start(fmsg, "event");
event_id = mlxsw_reg_mfde_event_id_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "id", event_id);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "id", event_id);
switch (event_id) {
case MLXSW_REG_MFDE_EVENT_ID_CRSPACE_TO:
val_str = "CR space timeout";
@@ -1955,24 +1911,13 @@ static int mlxsw_core_health_fw_fatal_dump(struct devlink_health_reporter *repor
default:
val_str = NULL;
}
- if (val_str) {
- err = devlink_fmsg_string_pair_put(fmsg, "desc", val_str);
- if (err)
- return err;
- }
-
- err = devlink_fmsg_arr_pair_nest_end(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_arr_pair_nest_start(fmsg, "severity");
- if (err)
- return err;
+ if (val_str)
+ devlink_fmsg_string_pair_put(fmsg, "desc", val_str);
+ devlink_fmsg_arr_pair_nest_end(fmsg);
+ devlink_fmsg_arr_pair_nest_start(fmsg, "severity");
val = mlxsw_reg_mfde_severity_get(mfde_pl);
- err = devlink_fmsg_u8_pair_put(fmsg, "id", val);
- if (err)
- return err;
+ devlink_fmsg_u8_pair_put(fmsg, "id", val);
switch (val) {
case MLXSW_REG_MFDE_SEVERITY_FATL:
val_str = "Fatal";
@@ -1986,15 +1931,9 @@ static int mlxsw_core_health_fw_fatal_dump(struct devlink_health_reporter *repor
default:
val_str = NULL;
}
- if (val_str) {
- err = devlink_fmsg_string_pair_put(fmsg, "desc", val_str);
- if (err)
- return err;
- }
-
- err = devlink_fmsg_arr_pair_nest_end(fmsg);
- if (err)
- return err;
+ if (val_str)
+ devlink_fmsg_string_pair_put(fmsg, "desc", val_str);
+ devlink_fmsg_arr_pair_nest_end(fmsg);
val = mlxsw_reg_mfde_method_get(mfde_pl);
switch (val) {
@@ -2007,16 +1946,11 @@ static int mlxsw_core_health_fw_fatal_dump(struct devlink_health_reporter *repor
default:
val_str = NULL;
}
- if (val_str) {
- err = devlink_fmsg_string_pair_put(fmsg, "method", val_str);
- if (err)
- return err;
- }
+ if (val_str)
+ devlink_fmsg_string_pair_put(fmsg, "method", val_str);
val = mlxsw_reg_mfde_long_process_get(mfde_pl);
- err = devlink_fmsg_bool_pair_put(fmsg, "long_process", val);
- if (err)
- return err;
+ devlink_fmsg_bool_pair_put(fmsg, "long_process", val);
val = mlxsw_reg_mfde_command_type_get(mfde_pl);
switch (val) {
@@ -2032,29 +1966,25 @@ static int mlxsw_core_health_fw_fatal_dump(struct devlink_health_reporter *repor
default:
val_str = NULL;
}
- if (val_str) {
- err = devlink_fmsg_string_pair_put(fmsg, "command_type", val_str);
- if (err)
- return err;
- }
+ if (val_str)
+ devlink_fmsg_string_pair_put(fmsg, "command_type", val_str);
val = mlxsw_reg_mfde_reg_attr_id_get(mfde_pl);
- err = devlink_fmsg_u32_pair_put(fmsg, "reg_attr_id", val);
- if (err)
- return err;
+ devlink_fmsg_u32_pair_put(fmsg, "reg_attr_id", val);
switch (event_id) {
case MLXSW_REG_MFDE_EVENT_ID_CRSPACE_TO:
- return mlxsw_core_health_fw_fatal_dump_crspace_to(mfde_pl,
- fmsg);
+ mlxsw_core_health_fw_fatal_dump_crspace_to(mfde_pl, fmsg);
+ break;
case MLXSW_REG_MFDE_EVENT_ID_KVD_IM_STOP:
- return mlxsw_core_health_fw_fatal_dump_kvd_im_stop(mfde_pl,
- fmsg);
+ mlxsw_core_health_fw_fatal_dump_kvd_im_stop(mfde_pl, fmsg);
+ break;
case MLXSW_REG_MFDE_EVENT_ID_FW_ASSERT:
- return mlxsw_core_health_fw_fatal_dump_fw_assert(mfde_pl, fmsg);
+ mlxsw_core_health_fw_fatal_dump_fw_assert(mfde_pl, fmsg);
+ break;
case MLXSW_REG_MFDE_EVENT_ID_FATAL_CAUSE:
- return mlxsw_core_health_fw_fatal_dump_fatal_cause(mfde_pl,
- fmsg);
+ mlxsw_core_health_fw_fatal_dump_fatal_cause(mfde_pl, fmsg);
+ break;
}
return 0;
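/*
 * Sketch of the devlink_fmsg style the core.c hunks above convert to: the
 * formatted-message helpers no longer return errors, so health-dump
 * callbacks can emit their fields as straight-line code. The field names
 * here are illustrative; the devlink_fmsg_*_pair_put() calls are the same
 * ones used above.
 */
#include <net/devlink.h>

static void sketch_dump_fields(struct devlink_fmsg *fmsg, u32 cause_id,
			       bool old_event)
{
	devlink_fmsg_u32_pair_put(fmsg, "cause_id", cause_id);
	devlink_fmsg_bool_pair_put(fmsg, "old_event", old_event);
}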
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h
index e5474d3e34db..764d14bd5bc0 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.h
@@ -36,6 +36,8 @@ struct mlxsw_fw_rev;
unsigned int mlxsw_core_max_ports(const struct mlxsw_core *mlxsw_core);
int mlxsw_core_max_lag(struct mlxsw_core *mlxsw_core, u16 *p_max_lag);
+enum mlxsw_cmd_mbox_config_profile_lag_mode
+mlxsw_core_lag_mode(struct mlxsw_core *mlxsw_core);
void *mlxsw_core_driver_priv(struct mlxsw_core *mlxsw_core);
@@ -335,6 +337,7 @@ struct mlxsw_config_profile {
u8 kvd_hash_single_parts;
u8 kvd_hash_double_parts;
u8 cqe_time_stamp_type;
+ bool lag_mode_prefer_sw;
struct mlxsw_swid_config swid_config[MLXSW_CONFIG_PROFILE_SWID_COUNT];
};
@@ -485,6 +488,7 @@ struct mlxsw_bus {
u32 (*read_frc_l)(void *bus_priv);
u32 (*read_utc_sec)(void *bus_priv);
u32 (*read_utc_nsec)(void *bus_priv);
+ enum mlxsw_cmd_mbox_config_profile_lag_mode (*lag_mode)(void *bus_priv);
u8 features;
};
@@ -624,7 +628,7 @@ struct mlxsw_linecards {
struct mlxsw_linecard_types_info *types_info;
struct list_head event_ops_list;
struct mutex event_ops_list_lock; /* Locks accesses to event ops list */
- struct mlxsw_linecard linecards[];
+ struct mlxsw_linecard linecards[] __counted_by(count);
};
static inline struct mlxsw_linecard *
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
index 70f9b5e85a26..0d5e6f9b466e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
@@ -32,8 +32,7 @@ static const struct mlxsw_afk_element_info mlxsw_afk_element_infos[] = {
MLXSW_AFK_ELEMENT_INFO_U32(IP_TTL_, 0x18, 0, 8),
MLXSW_AFK_ELEMENT_INFO_U32(IP_ECN, 0x18, 9, 2),
MLXSW_AFK_ELEMENT_INFO_U32(IP_DSCP, 0x18, 11, 6),
- MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_MSB, 0x18, 17, 4),
- MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_LSB, 0x18, 21, 8),
+ MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER, 0x18, 17, 12),
MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_96_127, 0x20, 4),
MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_64_95, 0x24, 4),
MLXSW_AFK_ELEMENT_INFO_BUF(SRC_IP_32_63, 0x28, 4),
@@ -44,6 +43,9 @@ static const struct mlxsw_afk_element_info mlxsw_afk_element_infos[] = {
MLXSW_AFK_ELEMENT_INFO_BUF(DST_IP_0_31, 0x3C, 4),
MLXSW_AFK_ELEMENT_INFO_U32(FDB_MISS, 0x40, 0, 1),
MLXSW_AFK_ELEMENT_INFO_U32(L4_PORT_RANGE, 0x40, 1, 16),
+ MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_0_3, 0x40, 17, 4),
+ MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_4_7, 0x40, 21, 4),
+ MLXSW_AFK_ELEMENT_INFO_U32(VIRT_ROUTER_MSB, 0x40, 25, 4),
};
struct mlxsw_afk {
@@ -136,6 +138,7 @@ mlxsw_afk_key_info_find(struct mlxsw_afk *mlxsw_afk,
struct mlxsw_afk_picker {
DECLARE_BITMAP(element, MLXSW_AFK_ELEMENT_MAX);
+ DECLARE_BITMAP(chosen_element, MLXSW_AFK_ELEMENT_MAX);
unsigned int total;
};
@@ -206,7 +209,7 @@ static int mlxsw_afk_picker_key_info_add(struct mlxsw_afk *mlxsw_afk,
if (key_info->blocks_count == mlxsw_afk->max_blocks)
return -EINVAL;
- for_each_set_bit(element, picker[block_index].element,
+ for_each_set_bit(element, picker[block_index].chosen_element,
MLXSW_AFK_ELEMENT_MAX) {
key_info->element_to_block[element] = key_info->blocks_count;
mlxsw_afk_element_usage_add(&key_info->elusage, element);
@@ -218,11 +221,43 @@ static int mlxsw_afk_picker_key_info_add(struct mlxsw_afk *mlxsw_afk,
return 0;
}
+static int mlxsw_afk_keys_fill(struct mlxsw_afk *mlxsw_afk,
+ unsigned long *chosen_blocks_bm,
+ struct mlxsw_afk_picker *picker,
+ struct mlxsw_afk_key_info *key_info)
+{
+ int i, err;
+
+ /* First fill only key blocks with high_entropy. */
+ for_each_set_bit(i, chosen_blocks_bm, mlxsw_afk->blocks_count) {
+ if (!mlxsw_afk->blocks[i].high_entropy)
+ continue;
+
+ err = mlxsw_afk_picker_key_info_add(mlxsw_afk, picker, i,
+ key_info);
+ if (err)
+ return err;
+ __clear_bit(i, chosen_blocks_bm);
+ }
+
+ /* Fill the rest of key blocks. */
+ for_each_set_bit(i, chosen_blocks_bm, mlxsw_afk->blocks_count) {
+ err = mlxsw_afk_picker_key_info_add(mlxsw_afk, picker, i,
+ key_info);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
static int mlxsw_afk_picker(struct mlxsw_afk *mlxsw_afk,
struct mlxsw_afk_key_info *key_info,
struct mlxsw_afk_element_usage *elusage)
{
+ DECLARE_BITMAP(elusage_chosen, MLXSW_AFK_ELEMENT_MAX) = {0};
struct mlxsw_afk_picker *picker;
+ unsigned long *chosen_blocks_bm;
enum mlxsw_afk_element element;
int err;
@@ -230,6 +265,12 @@ static int mlxsw_afk_picker(struct mlxsw_afk *mlxsw_afk,
if (!picker)
return -ENOMEM;
+ chosen_blocks_bm = bitmap_zalloc(mlxsw_afk->blocks_count, GFP_KERNEL);
+ if (!chosen_blocks_bm) {
+ err = -ENOMEM;
+ goto err_bitmap_alloc;
+ }
+
/* Since the same elements could be present in multiple blocks,
* we must find out optimal block list in order to make the
* block count as low as possible.
@@ -254,15 +295,26 @@ static int mlxsw_afk_picker(struct mlxsw_afk *mlxsw_afk,
err = block_index;
goto out;
}
- err = mlxsw_afk_picker_key_info_add(mlxsw_afk, picker,
- block_index, key_info);
- if (err)
- goto out;
+
+ __set_bit(block_index, chosen_blocks_bm);
+
+ bitmap_copy(picker[block_index].chosen_element,
+ picker[block_index].element, MLXSW_AFK_ELEMENT_MAX);
+
+ bitmap_or(elusage_chosen, elusage_chosen,
+ picker[block_index].chosen_element,
+ MLXSW_AFK_ELEMENT_MAX);
+
mlxsw_afk_picker_subtract_hits(mlxsw_afk, picker, block_index);
- } while (!mlxsw_afk_key_info_elements_eq(key_info, elusage));
- err = 0;
+ } while (!bitmap_equal(elusage_chosen, elusage->usage,
+ MLXSW_AFK_ELEMENT_MAX));
+
+ err = mlxsw_afk_keys_fill(mlxsw_afk, chosen_blocks_bm, picker,
+ key_info);
out:
+ bitmap_free(chosen_blocks_bm);
+err_bitmap_alloc:
kfree(picker);
return err;
}
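/*
 * Sketch of the two-pass selection added in mlxsw_afk_keys_fill() above:
 * from a bitmap of chosen blocks, the "high entropy" blocks are placed into
 * the key first, then everything that remains. The block array, fill_one()
 * callback and SKETCH_MAX_BLOCKS are illustrative.
 */
#include <linux/bitmap.h>
#include <linux/bitops.h>

#define SKETCH_MAX_BLOCKS 32

struct sketch_block {
	bool high_entropy;
};

static int sketch_fill_keys(struct sketch_block *blocks,
			    unsigned long *chosen_bm,
			    int (*fill_one)(int block_index))
{
	int i, err;

	/* First pass: only blocks marked as high entropy. */
	for_each_set_bit(i, chosen_bm, SKETCH_MAX_BLOCKS) {
		if (!blocks[i].high_entropy)
			continue;
		err = fill_one(i);
		if (err)
			return err;
		__clear_bit(i, chosen_bm);
	}

	/* Second pass: everything that is still set. */
	for_each_set_bit(i, chosen_bm, SKETCH_MAX_BLOCKS) {
		err = fill_one(i);
		if (err)
			return err;
	}
	return 0;
}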
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
index 2eac7582c31a..98a05598178b 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
@@ -33,10 +33,12 @@ enum mlxsw_afk_element {
MLXSW_AFK_ELEMENT_IP_TTL_,
MLXSW_AFK_ELEMENT_IP_ECN,
MLXSW_AFK_ELEMENT_IP_DSCP,
- MLXSW_AFK_ELEMENT_VIRT_ROUTER_MSB,
- MLXSW_AFK_ELEMENT_VIRT_ROUTER_LSB,
+ MLXSW_AFK_ELEMENT_VIRT_ROUTER,
MLXSW_AFK_ELEMENT_FDB_MISS,
MLXSW_AFK_ELEMENT_L4_PORT_RANGE,
+ MLXSW_AFK_ELEMENT_VIRT_ROUTER_0_3,
+ MLXSW_AFK_ELEMENT_VIRT_ROUTER_4_7,
+ MLXSW_AFK_ELEMENT_VIRT_ROUTER_MSB,
MLXSW_AFK_ELEMENT_MAX,
};
@@ -117,6 +119,7 @@ struct mlxsw_afk_block {
u16 encoding; /* block ID */
struct mlxsw_afk_element_inst *instances;
unsigned int instances_count;
+ bool high_entropy;
};
#define MLXSW_AFK_BLOCK(_encoding, _instances) \
@@ -126,6 +129,14 @@ struct mlxsw_afk_block {
.instances_count = ARRAY_SIZE(_instances), \
}
+#define MLXSW_AFK_BLOCK_HIGH_ENTROPY(_encoding, _instances) \
+ { \
+ .encoding = _encoding, \
+ .instances = _instances, \
+ .instances_count = ARRAY_SIZE(_instances), \
+ .high_entropy = true, \
+ }
+
struct mlxsw_afk_element_usage {
DECLARE_BITMAP(usage, MLXSW_AFK_ELEMENT_MAX);
};
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_env.c b/drivers/net/ethernet/mellanox/mlxsw/core_env.c
index d637c0348fa1..53b150b7ae4e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_env.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_env.c
@@ -34,7 +34,7 @@ struct mlxsw_env {
u8 num_of_slots; /* Including the main board. */
u8 max_eeprom_len; /* Maximum module EEPROM transaction length. */
struct mutex line_cards_lock; /* Protects line cards. */
- struct mlxsw_env_line_card *line_cards[];
+ struct mlxsw_env_line_card *line_cards[] __counted_by(num_of_slots);
};
static bool __mlxsw_env_linecard_is_active(struct mlxsw_env *mlxsw_env,
@@ -775,7 +775,7 @@ static int mlxsw_env_module_has_temp_sensor(struct mlxsw_core *mlxsw_core,
int err;
mlxsw_reg_mtbr_pack(mtbr_pl, slot_index,
- MLXSW_REG_MTBR_BASE_MODULE_INDEX + module, 1);
+ MLXSW_REG_MTBR_BASE_MODULE_INDEX + module);
err = mlxsw_reg_query(mlxsw_core, MLXSW_REG(mtbr), mtbr_pl);
if (err)
return err;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
index 0fd290d776ff..9c12e1feb643 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_hwmon.c
@@ -293,7 +293,7 @@ static ssize_t mlxsw_hwmon_module_temp_fault_show(struct device *dev,
module = mlxsw_hwmon_attr->type_index - mlxsw_hwmon_dev->sensor_count;
mlxsw_reg_mtbr_pack(mtbr_pl, mlxsw_hwmon_dev->slot_index,
- MLXSW_REG_MTBR_BASE_MODULE_INDEX + module, 1);
+ MLXSW_REG_MTBR_BASE_MODULE_INDEX + module);
err = mlxsw_reg_query(mlxsw_hwmon->core, MLXSW_REG(mtbr), mtbr_pl);
if (err) {
dev_err(dev, "Failed to query module temperature sensor\n");
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_linecard_dev.c b/drivers/net/ethernet/mellanox/mlxsw/core_linecard_dev.c
index af37e650a8ad..e8d6fe35bf36 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_linecard_dev.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_linecard_dev.c
@@ -132,6 +132,7 @@ static int mlxsw_linecard_bdev_probe(struct auxiliary_device *adev,
struct mlxsw_linecard *linecard = linecard_bdev->linecard;
struct mlxsw_linecard_dev *linecard_dev;
struct devlink *devlink;
+ int err;
devlink = devlink_alloc(&mlxsw_linecard_dev_devlink_ops,
sizeof(*linecard_dev), &adev->dev);
@@ -141,8 +142,12 @@ static int mlxsw_linecard_bdev_probe(struct auxiliary_device *adev,
linecard_dev->linecard = linecard_bdev->linecard;
linecard_bdev->linecard_dev = linecard_dev;
+ err = devlink_linecard_nested_dl_set(linecard->devlink_linecard, devlink);
+ if (err) {
+ devlink_free(devlink);
+ return err;
+ }
devlink_register(devlink);
- devlink_linecard_nested_dl_set(linecard->devlink_linecard, devlink);
return 0;
}
@@ -151,9 +156,7 @@ static void mlxsw_linecard_bdev_remove(struct auxiliary_device *adev)
struct mlxsw_linecard_bdev *linecard_bdev =
container_of(adev, struct mlxsw_linecard_bdev, adev);
struct devlink *devlink = priv_to_devlink(linecard_bdev->linecard_dev);
- struct mlxsw_linecard *linecard = linecard_bdev->linecard;
- devlink_linecard_nested_dl_set(linecard->devlink_linecard, NULL);
devlink_unregister(devlink);
devlink_free(devlink);
}
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
index 70d7fff24fa2..f1b48d6615f6 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
@@ -31,6 +31,7 @@
/* External cooling devices, allowed for binding to mlxsw thermal zones. */
static char * const mlxsw_thermal_external_allowed_cdev[] = {
"mlxreg_fan",
+ "emc2305",
};
struct mlxsw_cooling_states {
@@ -535,7 +536,7 @@ mlxsw_thermal_modules_fini(struct mlxsw_thermal *thermal,
static int
mlxsw_thermal_gearbox_tz_init(struct mlxsw_thermal_module *gearbox_tz)
{
- char tz_name[THERMAL_NAME_LENGTH];
+ char tz_name[40];
int ret;
if (gearbox_tz->slot_index)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/i2c.c b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
index d23f293e285c..1e150ce1c73a 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/i2c.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/i2c.c
@@ -424,9 +424,7 @@ mlxsw_i2c_cmd(struct device *dev, u16 opcode, u32 in_mod, size_t in_mbox_size,
if (in_mbox) {
reg_size = mlxsw_i2c_get_reg_size(in_mbox);
- num = reg_size / mlxsw_i2c->block_size;
- if (reg_size % mlxsw_i2c->block_size)
- num++;
+ num = DIV_ROUND_UP(reg_size, mlxsw_i2c->block_size);
if (mutex_lock_interruptible(&mlxsw_i2c->cmd.lock) < 0) {
dev_err(&client->dev, "Could not acquire lock");
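/*
 * The i2c.c hunk above replaces an open-coded round-up with the kernel's
 * DIV_ROUND_UP helper. A quick standalone illustration of the equivalence
 * (values are arbitrary):
 */
#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int reg_size = 260, block_size = 32;
	unsigned int num = reg_size / block_size;

	if (reg_size % block_size)
		num++;

	/* Both expressions yield 9. */
	printf("%u %u\n", num, DIV_ROUND_UP(reg_size, block_size));
	return 0;
}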
diff --git a/drivers/net/ethernet/mellanox/mlxsw/pci.c b/drivers/net/ethernet/mellanox/mlxsw/pci.c
index 51eea1f0529c..e4b25e187467 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/pci.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/pci.c
@@ -105,6 +105,8 @@ struct mlxsw_pci {
u64 free_running_clock_offset;
u64 utc_sec_offset;
u64 utc_nsec_offset;
+ bool lag_mode_support;
+ enum mlxsw_cmd_mbox_config_profile_lag_mode lag_mode;
struct mlxsw_pci_queue_type_group queues[MLXSW_PCI_QUEUE_TYPE_COUNT];
u32 doorbell_offset;
struct mlxsw_core *core;
@@ -352,14 +354,15 @@ static void mlxsw_pci_wqe_frag_unmap(struct mlxsw_pci *mlxsw_pci, char *wqe,
}
static int mlxsw_pci_rdq_skb_alloc(struct mlxsw_pci *mlxsw_pci,
- struct mlxsw_pci_queue_elem_info *elem_info)
+ struct mlxsw_pci_queue_elem_info *elem_info,
+ gfp_t gfp)
{
size_t buf_len = MLXSW_PORT_MAX_MTU;
char *wqe = elem_info->elem;
struct sk_buff *skb;
int err;
- skb = netdev_alloc_skb_ip_align(NULL, buf_len);
+ skb = __netdev_alloc_skb_ip_align(NULL, buf_len, gfp);
if (!skb)
return -ENOMEM;
@@ -420,7 +423,7 @@ static int mlxsw_pci_rdq_init(struct mlxsw_pci *mlxsw_pci, char *mbox,
for (i = 0; i < q->count; i++) {
elem_info = mlxsw_pci_queue_elem_info_producer_get(q);
BUG_ON(!elem_info);
- err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info);
+ err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info, GFP_KERNEL);
if (err)
goto rollback;
/* Everything is set up, ring doorbell to pass elem to HW */
@@ -640,7 +643,7 @@ static void mlxsw_pci_cqe_rdq_handle(struct mlxsw_pci *mlxsw_pci,
if (q->consumer_counter++ != consumer_counter_limit)
dev_dbg_ratelimited(&pdev->dev, "Consumer counter does not match limit in RDQ\n");
- err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info);
+ err = mlxsw_pci_rdq_skb_alloc(mlxsw_pci, elem_info, GFP_ATOMIC);
if (err) {
dev_err_ratelimited(&pdev->dev, "Failed to alloc skb for RDQ\n");
goto out;
@@ -1311,6 +1314,16 @@ static int mlxsw_pci_config_profile(struct mlxsw_pci *mlxsw_pci, char *mbox,
profile->cqe_time_stamp_type);
}
+ if (profile->lag_mode_prefer_sw && mlxsw_pci->lag_mode_support) {
+ enum mlxsw_cmd_mbox_config_profile_lag_mode lag_mode =
+ MLXSW_CMD_MBOX_CONFIG_PROFILE_LAG_MODE_SW;
+
+ mlxsw_cmd_mbox_config_profile_set_lag_mode_set(mbox, 1);
+ mlxsw_cmd_mbox_config_profile_lag_mode_set(mbox, lag_mode);
+ mlxsw_pci->lag_mode = lag_mode;
+ } else {
+ mlxsw_pci->lag_mode = MLXSW_CMD_MBOX_CONFIG_PROFILE_LAG_MODE_FW;
+ }
return mlxsw_cmd_config_profile_set(mlxsw_pci->core, mbox);
}
@@ -1586,6 +1599,8 @@ static int mlxsw_pci_init(void *bus_priv, struct mlxsw_core *mlxsw_core,
mlxsw_pci->utc_nsec_offset =
mlxsw_cmd_mbox_query_fw_utc_nsec_offset_get(mbox);
+ mlxsw_pci->lag_mode_support =
+ mlxsw_cmd_mbox_query_fw_lag_mode_support_get(mbox);
num_pages = mlxsw_cmd_mbox_query_fw_fw_pages_get(mbox);
err = mlxsw_pci_fw_area_init(mlxsw_pci, mbox, num_pages);
if (err)
@@ -1618,9 +1633,8 @@ static int mlxsw_pci_init(void *bus_priv, struct mlxsw_core *mlxsw_core,
if (err)
goto err_config_profile;
- /* Some resources depend on unified bridge model, which is configured
- * as part of config_profile. Query the resources again to get correct
- * values.
+ /* Some resources depend on details of config_profile, such as unified
+ * bridge model. Query the resources again to get correct values.
*/
err = mlxsw_core_resources_query(mlxsw_core, mbox, res);
if (err)
@@ -1895,6 +1909,14 @@ static u32 mlxsw_pci_read_utc_nsec(void *bus_priv)
return mlxsw_pci_read32_off(mlxsw_pci, mlxsw_pci->utc_nsec_offset);
}
+static enum mlxsw_cmd_mbox_config_profile_lag_mode
+mlxsw_pci_lag_mode(void *bus_priv)
+{
+ struct mlxsw_pci *mlxsw_pci = bus_priv;
+
+ return mlxsw_pci->lag_mode;
+}
+
static const struct mlxsw_bus mlxsw_pci_bus = {
.kind = "pci",
.init = mlxsw_pci_init,
@@ -1906,6 +1928,7 @@ static const struct mlxsw_bus mlxsw_pci_bus = {
.read_frc_l = mlxsw_pci_read_frc_l,
.read_utc_sec = mlxsw_pci_read_utc_sec,
.read_utc_nsec = mlxsw_pci_read_utc_nsec,
+ .lag_mode = mlxsw_pci_lag_mode,
.features = MLXSW_BUS_F_TXRX | MLXSW_BUS_F_RESET,
};
diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index ae556ddd7624..25b294fdeb3d 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -38,18 +38,18 @@ static const struct mlxsw_reg_info mlxsw_reg_##_name = { \
MLXSW_REG_DEFINE(sgcr, MLXSW_REG_SGCR_ID, MLXSW_REG_SGCR_LEN);
-/* reg_sgcr_llb
- * Link Local Broadcast (Default=0)
- * When set, all Link Local packets (224.0.0.X) will be treated as broadcast
- * packets and ignore the IGMP snooping entries.
+/* reg_sgcr_lag_lookup_pgt_base
+ * Base address used for lookup in PGT table
+ * Supported when CONFIG_PROFILE.lag_mode = 1
+ * Note: when IGCR.ddd_lag_mode=0, the address shall be aligned to 8 entries.
* Access: RW
*/
-MLXSW_ITEM32(reg, sgcr, llb, 0x04, 0, 1);
+MLXSW_ITEM32(reg, sgcr, lag_lookup_pgt_base, 0x0C, 0, 16);
-static inline void mlxsw_reg_sgcr_pack(char *payload, bool llb)
+static inline void mlxsw_reg_sgcr_pack(char *payload, u16 lag_lookup_pgt_base)
{
MLXSW_REG_ZERO(sgcr, payload);
- mlxsw_reg_sgcr_llb_set(payload, !!llb);
+ mlxsw_reg_sgcr_lag_lookup_pgt_base_set(payload, lag_lookup_pgt_base);
}
/* SPAD - Switch Physical Address Register
@@ -9551,7 +9551,7 @@ MLXSW_ITEM_BIT_ARRAY(reg, mtwe, sensor_warning, 0x0, 0x10, 1);
#define MLXSW_REG_MTBR_ID 0x900F
#define MLXSW_REG_MTBR_BASE_LEN 0x10 /* base length, without records */
#define MLXSW_REG_MTBR_REC_LEN 0x04 /* record length */
-#define MLXSW_REG_MTBR_REC_MAX_COUNT 47 /* firmware limitation */
+#define MLXSW_REG_MTBR_REC_MAX_COUNT 1
#define MLXSW_REG_MTBR_LEN (MLXSW_REG_MTBR_BASE_LEN + \
MLXSW_REG_MTBR_REC_LEN * \
MLXSW_REG_MTBR_REC_MAX_COUNT)
@@ -9597,12 +9597,12 @@ MLXSW_ITEM32_INDEXED(reg, mtbr, rec_temp, MLXSW_REG_MTBR_BASE_LEN, 0, 16,
MLXSW_REG_MTBR_REC_LEN, 0x00, false);
static inline void mlxsw_reg_mtbr_pack(char *payload, u8 slot_index,
- u16 base_sensor_index, u8 num_rec)
+ u16 base_sensor_index)
{
MLXSW_REG_ZERO(mtbr, payload);
mlxsw_reg_mtbr_slot_index_set(payload, slot_index);
mlxsw_reg_mtbr_base_sensor_index_set(payload, base_sensor_index);
- mlxsw_reg_mtbr_num_rec_set(payload, num_rec);
+ mlxsw_reg_mtbr_num_rec_set(payload, 1);
}
/* Error codes from temperature reading */
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index 9dbd5edff0b0..cec72d99d9c9 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -2692,6 +2692,63 @@ static void mlxsw_sp_traps_fini(struct mlxsw_sp *mlxsw_sp)
kfree(mlxsw_sp->trap);
}
+static int mlxsw_sp_lag_pgt_init(struct mlxsw_sp *mlxsw_sp)
+{
+ char sgcr_pl[MLXSW_REG_SGCR_LEN];
+ u16 max_lag;
+ int err;
+
+ if (mlxsw_core_lag_mode(mlxsw_sp->core) !=
+ MLXSW_CMD_MBOX_CONFIG_PROFILE_LAG_MODE_SW)
+ return 0;
+
+ err = mlxsw_core_max_lag(mlxsw_sp->core, &max_lag);
+ if (err)
+ return err;
+
+ /* In DDD mode, which we by default use, each LAG entry is 8 PGT
+ * entries. The LAG table address needs to be 8-aligned, but that ought
+ * to be the case, since the LAG table is allocated first.
+ */
+ err = mlxsw_sp_pgt_mid_alloc_range(mlxsw_sp, &mlxsw_sp->lag_pgt_base,
+ max_lag * 8);
+ if (err)
+ return err;
+ if (WARN_ON_ONCE(mlxsw_sp->lag_pgt_base % 8)) {
+ err = -EINVAL;
+ goto err_mid_alloc_range;
+ }
+
+ mlxsw_reg_sgcr_pack(sgcr_pl, mlxsw_sp->lag_pgt_base);
+ err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sgcr), sgcr_pl);
+ if (err)
+ goto err_mid_alloc_range;
+
+ return 0;
+
+err_mid_alloc_range:
+ mlxsw_sp_pgt_mid_free_range(mlxsw_sp, mlxsw_sp->lag_pgt_base,
+ max_lag * 8);
+ return err;
+}
+
+static void mlxsw_sp_lag_pgt_fini(struct mlxsw_sp *mlxsw_sp)
+{
+ u16 max_lag;
+ int err;
+
+ if (mlxsw_core_lag_mode(mlxsw_sp->core) !=
+ MLXSW_CMD_MBOX_CONFIG_PROFILE_LAG_MODE_SW)
+ return;
+
+ err = mlxsw_core_max_lag(mlxsw_sp->core, &max_lag);
+ if (err)
+ return;
+
+ mlxsw_sp_pgt_mid_free_range(mlxsw_sp, mlxsw_sp->lag_pgt_base,
+ max_lag * 8);
+}
+
#define MLXSW_SP_LAG_SEED_INIT 0xcafecafe
static int mlxsw_sp_lag_init(struct mlxsw_sp *mlxsw_sp)
@@ -2723,16 +2780,27 @@ static int mlxsw_sp_lag_init(struct mlxsw_sp *mlxsw_sp)
if (!MLXSW_CORE_RES_VALID(mlxsw_sp->core, MAX_LAG_MEMBERS))
return -EIO;
+ err = mlxsw_sp_lag_pgt_init(mlxsw_sp);
+ if (err)
+ return err;
+
mlxsw_sp->lags = kcalloc(max_lag, sizeof(struct mlxsw_sp_upper),
GFP_KERNEL);
- if (!mlxsw_sp->lags)
- return -ENOMEM;
+ if (!mlxsw_sp->lags) {
+ err = -ENOMEM;
+ goto err_kcalloc;
+ }
return 0;
+
+err_kcalloc:
+ mlxsw_sp_lag_pgt_fini(mlxsw_sp);
+ return err;
}
static void mlxsw_sp_lag_fini(struct mlxsw_sp *mlxsw_sp)
{
+ mlxsw_sp_lag_pgt_fini(mlxsw_sp);
kfree(mlxsw_sp->lags);
}
@@ -3113,6 +3181,15 @@ static int mlxsw_sp_init(struct mlxsw_core *mlxsw_core,
goto err_pgt_init;
}
+ /* Initialize before FIDs so that the LAG table is at the start of PGT
+ * and 8-aligned without overallocation.
+ */
+ err = mlxsw_sp_lag_init(mlxsw_sp);
+ if (err) {
+ dev_err(mlxsw_sp->bus_info->dev, "Failed to initialize LAG\n");
+ goto err_lag_init;
+ }
+
err = mlxsw_sp_fids_init(mlxsw_sp);
if (err) {
dev_err(mlxsw_sp->bus_info->dev, "Failed to initialize FIDs\n");
@@ -3143,12 +3220,6 @@ static int mlxsw_sp_init(struct mlxsw_core *mlxsw_core,
goto err_buffers_init;
}
- err = mlxsw_sp_lag_init(mlxsw_sp);
- if (err) {
- dev_err(mlxsw_sp->bus_info->dev, "Failed to initialize LAG\n");
- goto err_lag_init;
- }
-
/* Initialize SPAN before router and switchdev, so that those components
* can call mlxsw_sp_span_respin().
*/
@@ -3300,8 +3371,6 @@ err_counter_pool_init:
err_switchdev_init:
mlxsw_sp_span_fini(mlxsw_sp);
err_span_init:
- mlxsw_sp_lag_fini(mlxsw_sp);
-err_lag_init:
mlxsw_sp_buffers_fini(mlxsw_sp);
err_buffers_init:
mlxsw_sp_devlink_traps_fini(mlxsw_sp);
@@ -3312,6 +3381,8 @@ err_traps_init:
err_policers_init:
mlxsw_sp_fids_fini(mlxsw_sp);
err_fids_init:
+ mlxsw_sp_lag_fini(mlxsw_sp);
+err_lag_init:
mlxsw_sp_pgt_fini(mlxsw_sp);
err_pgt_init:
mlxsw_sp_kvdl_fini(mlxsw_sp);
@@ -3477,12 +3548,12 @@ static void mlxsw_sp_fini(struct mlxsw_core *mlxsw_core)
mlxsw_sp_counter_pool_fini(mlxsw_sp);
mlxsw_sp_switchdev_fini(mlxsw_sp);
mlxsw_sp_span_fini(mlxsw_sp);
- mlxsw_sp_lag_fini(mlxsw_sp);
mlxsw_sp_buffers_fini(mlxsw_sp);
mlxsw_sp_devlink_traps_fini(mlxsw_sp);
mlxsw_sp_traps_fini(mlxsw_sp);
mlxsw_sp_policers_fini(mlxsw_sp);
mlxsw_sp_fids_fini(mlxsw_sp);
+ mlxsw_sp_lag_fini(mlxsw_sp);
mlxsw_sp_pgt_fini(mlxsw_sp);
mlxsw_sp_kvdl_fini(mlxsw_sp);
mlxsw_sp_parsing_fini(mlxsw_sp);
@@ -3526,6 +3597,7 @@ static const struct mlxsw_config_profile mlxsw_sp2_config_profile = {
},
.used_cqe_time_stamp_type = 1,
.cqe_time_stamp_type = MLXSW_CMD_MBOX_CONFIG_PROFILE_CQE_TIME_STAMP_TYPE_UTC,
+ .lag_mode_prefer_sw = true,
};
/* Reduce number of LAGs from full capacity (256) to the maximum supported LAGs
@@ -3553,6 +3625,7 @@ static const struct mlxsw_config_profile mlxsw_sp4_config_profile = {
},
.used_cqe_time_stamp_type = 1,
.cqe_time_stamp_type = MLXSW_CMD_MBOX_CONFIG_PROFILE_CQE_TIME_STAMP_TYPE_UTC,
+ .lag_mode_prefer_sw = true,
};
static void
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index 02ca2871b6f9..c70333b460ea 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -212,6 +212,7 @@ struct mlxsw_sp {
struct mutex ipv6_addr_ht_lock; /* Protects ipv6_addr_ht */
struct mlxsw_sp_pgt *pgt;
bool pgt_smpe_index_valid;
+ u16 lag_pgt_base;
};
struct mlxsw_sp_ptp_ops {
@@ -1480,7 +1481,7 @@ int mlxsw_sp_policer_resources_register(struct mlxsw_core *mlxsw_core);
/* spectrum_pgt.c */
int mlxsw_sp_pgt_mid_alloc(struct mlxsw_sp *mlxsw_sp, u16 *p_mid);
void mlxsw_sp_pgt_mid_free(struct mlxsw_sp *mlxsw_sp, u16 mid_base);
-int mlxsw_sp_pgt_mid_alloc_range(struct mlxsw_sp *mlxsw_sp, u16 mid_base,
+int mlxsw_sp_pgt_mid_alloc_range(struct mlxsw_sp *mlxsw_sp, u16 *mid_base,
u16 count);
void mlxsw_sp_pgt_mid_free_range(struct mlxsw_sp *mlxsw_sp, u16 mid_base,
u16 count);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum2_mr_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum2_mr_tcam.c
index b1178b7a7f51..99eeafdc8d1e 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum2_mr_tcam.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum2_mr_tcam.c
@@ -45,8 +45,7 @@ static int mlxsw_sp2_mr_tcam_bind_group(struct mlxsw_sp *mlxsw_sp,
}
static const enum mlxsw_afk_element mlxsw_sp2_mr_tcam_usage_ipv4[] = {
- MLXSW_AFK_ELEMENT_VIRT_ROUTER_MSB,
- MLXSW_AFK_ELEMENT_VIRT_ROUTER_LSB,
+ MLXSW_AFK_ELEMENT_VIRT_ROUTER,
MLXSW_AFK_ELEMENT_SRC_IP_0_31,
MLXSW_AFK_ELEMENT_DST_IP_0_31,
};
@@ -89,8 +88,9 @@ static void mlxsw_sp2_mr_tcam_ipv4_fini(struct mlxsw_sp2_mr_tcam *mr_tcam)
}
static const enum mlxsw_afk_element mlxsw_sp2_mr_tcam_usage_ipv6[] = {
+ MLXSW_AFK_ELEMENT_VIRT_ROUTER_0_3,
+ MLXSW_AFK_ELEMENT_VIRT_ROUTER_4_7,
MLXSW_AFK_ELEMENT_VIRT_ROUTER_MSB,
- MLXSW_AFK_ELEMENT_VIRT_ROUTER_LSB,
MLXSW_AFK_ELEMENT_SRC_IP_96_127,
MLXSW_AFK_ELEMENT_SRC_IP_64_95,
MLXSW_AFK_ELEMENT_SRC_IP_32_63,
@@ -142,6 +142,8 @@ static void
mlxsw_sp2_mr_tcam_rule_parse4(struct mlxsw_sp_acl_rule_info *rulei,
struct mlxsw_sp_mr_route_key *key)
{
+ mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_VIRT_ROUTER,
+ key->vrid, GENMASK(11, 0));
mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_0_31,
(char *) &key->source.addr4,
(char *) &key->source_mask.addr4, 4);
@@ -154,6 +156,13 @@ static void
mlxsw_sp2_mr_tcam_rule_parse6(struct mlxsw_sp_acl_rule_info *rulei,
struct mlxsw_sp_mr_route_key *key)
{
+ mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_VIRT_ROUTER_0_3,
+ key->vrid, GENMASK(3, 0));
+ mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_VIRT_ROUTER_4_7,
+ key->vrid >> 4, GENMASK(3, 0));
+ mlxsw_sp_acl_rulei_keymask_u32(rulei,
+ MLXSW_AFK_ELEMENT_VIRT_ROUTER_MSB,
+ key->vrid >> 8, GENMASK(3, 0));
mlxsw_sp_acl_rulei_keymask_buf(rulei, MLXSW_AFK_ELEMENT_SRC_IP_96_127,
&key->source.addr6.s6_addr[0x0],
&key->source_mask.addr6.s6_addr[0x0], 4);
@@ -189,11 +198,6 @@ mlxsw_sp2_mr_tcam_rule_parse(struct mlxsw_sp_acl_rule *rule,
rulei = mlxsw_sp_acl_rule_rulei(rule);
rulei->priority = priority;
- mlxsw_sp_acl_rulei_keymask_u32(rulei, MLXSW_AFK_ELEMENT_VIRT_ROUTER_LSB,
- key->vrid, GENMASK(7, 0));
- mlxsw_sp_acl_rulei_keymask_u32(rulei,
- MLXSW_AFK_ELEMENT_VIRT_ROUTER_MSB,
- key->vrid >> 8, GENMASK(3, 0));
switch (key->proto) {
case MLXSW_SP_L3_PROTO_IPV4:
return mlxsw_sp2_mr_tcam_rule_parse4(rulei, key);
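
The IPv4 key now carries the 12-bit virtual-router ID in a single VIRT_ROUTER element, while the IPv6 key splits it across three 4-bit elements (bits 0-3, 4-7 and 8-11), which is what the shifts and GENMASK() masks above express. A small sketch of that decomposition; the helper and its parameters are illustrative, not driver code:

#include <linux/bits.h>
#include <linux/types.h>

/* Split a 12-bit VRID into the three nibbles fed to the IPv6 key elements. */
static void example_vrid_split(u16 vrid, u8 *vr_0_3, u8 *vr_4_7, u8 *vr_msb)
{
	*vr_0_3 = vrid & GENMASK(3, 0);		/* VIRT_ROUTER_0_3 */
	*vr_4_7 = (vrid >> 4) & GENMASK(3, 0);	/* VIRT_ROUTER_4_7 */
	*vr_msb = (vrid >> 8) & GENMASK(3, 0);	/* VIRT_ROUTER_MSB */
}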
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
index e2aced7ab454..95f63fcf4ba1 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_bloom_filter.c
@@ -496,7 +496,7 @@ mlxsw_sp_acl_bf_init(struct mlxsw_sp *mlxsw_sp, unsigned int num_erp_banks)
* is 2^ACL_MAX_BF_LOG
*/
bf_bank_size = 1 << MLXSW_CORE_RES_GET(mlxsw_sp->core, ACL_MAX_BF_LOG);
- bf = kzalloc(struct_size(bf, refcnt, bf_bank_size * num_erp_banks),
+ bf = kzalloc(struct_size(bf, refcnt, size_mul(bf_bank_size, num_erp_banks)),
GFP_KERNEL);
if (!bf)
return ERR_PTR(-ENOMEM);
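
struct_size() already guards the addition of the header and the flexible array, and size_mul() makes the element-count product saturate instead of wrapping, so an oversized bank count fails the allocation rather than producing an undersized buffer. A minimal sketch of the pattern with a hypothetical structure:

#include <linux/overflow.h>
#include <linux/refcount.h>
#include <linux/slab.h>

struct example_bf {
	unsigned int num_banks;
	refcount_t refcnt[];
};

static struct example_bf *example_bf_alloc(size_t bank_size, size_t num_banks)
{
	struct example_bf *bf;

	/* size_mul() saturates at SIZE_MAX on overflow, so kzalloc() fails
	 * cleanly instead of allocating too little memory.
	 */
	bf = kzalloc(struct_size(bf, refcnt, size_mul(bank_size, num_banks)),
		     GFP_KERNEL);
	return bf;
}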
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
index cb746a43b24b..eaad78605602 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
@@ -171,20 +171,22 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_2[] = {
MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x04, 16, 8),
};
-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_4[] = {
- MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_LSB, 0x04, 24, 8),
- MLXSW_AFK_ELEMENT_INST_EXT_U32(VIRT_ROUTER_MSB, 0x00, 0, 3, 0, true),
+static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_5[] = {
+ MLXSW_AFK_ELEMENT_INST_EXT_U32(VIRT_ROUTER, 0x04, 20, 11, 0, true),
};
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_0[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_0_3, 0x00, 0, 4),
MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_32_63, 0x04, 4),
};
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_1[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_4_7, 0x00, 0, 4),
MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_64_95, 0x04, 4),
};
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2[] = {
+ MLXSW_AFK_ELEMENT_INST_EXT_U32(VIRT_ROUTER_MSB, 0x00, 0, 3, 0, true),
MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_96_127, 0x04, 4),
};
@@ -220,7 +222,7 @@ static const struct mlxsw_afk_block mlxsw_sp2_afk_blocks[] = {
MLXSW_AFK_BLOCK(0x38, mlxsw_sp_afk_element_info_ipv4_0),
MLXSW_AFK_BLOCK(0x39, mlxsw_sp_afk_element_info_ipv4_1),
MLXSW_AFK_BLOCK(0x3A, mlxsw_sp_afk_element_info_ipv4_2),
- MLXSW_AFK_BLOCK(0x3C, mlxsw_sp_afk_element_info_ipv4_4),
+ MLXSW_AFK_BLOCK(0x3D, mlxsw_sp_afk_element_info_ipv4_5),
MLXSW_AFK_BLOCK(0x40, mlxsw_sp_afk_element_info_ipv6_0),
MLXSW_AFK_BLOCK(0x41, mlxsw_sp_afk_element_info_ipv6_1),
MLXSW_AFK_BLOCK(0x42, mlxsw_sp_afk_element_info_ipv6_2),
@@ -322,33 +324,33 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_5b[] = {
MLXSW_AFK_ELEMENT_INST_EXT_U32(SRC_SYS_PORT, 0x04, 0, 9, -1, true), /* RX_ACL_SYSTEM_PORT */
};
-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_4b[] = {
- MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_LSB, 0x04, 13, 8),
- MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_MSB, 0x04, 21, 4),
+static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_5b[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER, 0x04, 20, 12),
};
static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2b[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_MSB, 0x00, 0, 4),
MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_96_127, 0x04, 4),
};
static const struct mlxsw_afk_block mlxsw_sp4_afk_blocks[] = {
- MLXSW_AFK_BLOCK(0x10, mlxsw_sp_afk_element_info_mac_0),
- MLXSW_AFK_BLOCK(0x11, mlxsw_sp_afk_element_info_mac_1),
+ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x10, mlxsw_sp_afk_element_info_mac_0),
+ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x11, mlxsw_sp_afk_element_info_mac_1),
MLXSW_AFK_BLOCK(0x12, mlxsw_sp_afk_element_info_mac_2),
MLXSW_AFK_BLOCK(0x13, mlxsw_sp_afk_element_info_mac_3),
MLXSW_AFK_BLOCK(0x14, mlxsw_sp_afk_element_info_mac_4),
- MLXSW_AFK_BLOCK(0x1A, mlxsw_sp_afk_element_info_mac_5b),
- MLXSW_AFK_BLOCK(0x38, mlxsw_sp_afk_element_info_ipv4_0),
- MLXSW_AFK_BLOCK(0x39, mlxsw_sp_afk_element_info_ipv4_1),
+ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x1A, mlxsw_sp_afk_element_info_mac_5b),
+ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x38, mlxsw_sp_afk_element_info_ipv4_0),
+ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x39, mlxsw_sp_afk_element_info_ipv4_1),
MLXSW_AFK_BLOCK(0x3A, mlxsw_sp_afk_element_info_ipv4_2),
- MLXSW_AFK_BLOCK(0x35, mlxsw_sp_afk_element_info_ipv4_4b),
+ MLXSW_AFK_BLOCK(0x36, mlxsw_sp_afk_element_info_ipv4_5b),
MLXSW_AFK_BLOCK(0x40, mlxsw_sp_afk_element_info_ipv6_0),
MLXSW_AFK_BLOCK(0x41, mlxsw_sp_afk_element_info_ipv6_1),
MLXSW_AFK_BLOCK(0x47, mlxsw_sp_afk_element_info_ipv6_2b),
MLXSW_AFK_BLOCK(0x43, mlxsw_sp_afk_element_info_ipv6_3),
MLXSW_AFK_BLOCK(0x44, mlxsw_sp_afk_element_info_ipv6_4),
MLXSW_AFK_BLOCK(0x45, mlxsw_sp_afk_element_info_ipv6_5),
- MLXSW_AFK_BLOCK(0x90, mlxsw_sp_afk_element_info_l4_0),
+ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x90, mlxsw_sp_afk_element_info_l4_0),
MLXSW_AFK_BLOCK(0x92, mlxsw_sp_afk_element_info_l4_2),
};
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.c
index ee59c79156e4..50e591420bd9 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_cnt.c
@@ -24,7 +24,7 @@ struct mlxsw_sp_counter_pool {
spinlock_t counter_pool_lock; /* Protects counter pool allocations */
atomic_t active_entries_count;
unsigned int sub_pools_count;
- struct mlxsw_sp_counter_sub_pool sub_pools[];
+ struct mlxsw_sp_counter_sub_pool sub_pools[] __counted_by(sub_pools_count);
};
static const struct mlxsw_sp_counter_sub_pool mlxsw_sp_counter_sub_pools[] = {
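
__counted_by() ties a flexible array to the member that stores its element count, so FORTIFY_SOURCE and UBSAN bounds checks can cover accesses to it. A short sketch of how such a structure is typically annotated and allocated; the names are placeholders:

#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/types.h>

struct example_pool {
	unsigned int sub_pools_count;
	u32 sub_pools[] __counted_by(sub_pools_count);
};

static struct example_pool *example_pool_alloc(unsigned int count)
{
	struct example_pool *pool;

	pool = kzalloc(struct_size(pool, sub_pools, count), GFP_KERNEL);
	if (!pool)
		return NULL;
	/* Set the counter before indexing the array so the compiler-visible
	 * bound matches the allocation.
	 */
	pool->sub_pools_count = count;
	return pool;
}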
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
index 472830d07ac1..0f29e9c19411 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
@@ -619,7 +619,7 @@ static void mlxsw_sp_port_get_tc_strings(u8 **p, int tc)
int i;
for (i = 0; i < MLXSW_SP_PORT_HW_TC_STATS_LEN; i++) {
- snprintf(*p, ETH_GSTRING_LEN, "%.29s_%.1d",
+ snprintf(*p, ETH_GSTRING_LEN, "%.28s_%d",
mlxsw_sp_port_hw_tc_stats[i].str, tc);
*p += ETH_GSTRING_LEN;
}
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c
index 9df098474743..e954b8cd2ee8 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_fid.c
@@ -321,6 +321,14 @@ mlxsw_sp_fid_family_num_fids(const struct mlxsw_sp_fid_family *fid_family)
}
static u16
+mlxsw_sp_fid_family_pgt_size(const struct mlxsw_sp_fid_family *fid_family)
+{
+ u16 num_fids = mlxsw_sp_fid_family_num_fids(fid_family);
+
+ return num_fids * fid_family->nr_flood_tables;
+}
+
+static u16
mlxsw_sp_fid_flood_table_mid(const struct mlxsw_sp_fid_family *fid_family,
const struct mlxsw_sp_flood_table *flood_table,
u16 fid_offset)
@@ -1068,8 +1076,6 @@ static const struct mlxsw_sp_fid_ops mlxsw_sp_fid_8021d_ops = {
#define MLXSW_SP_FID_8021Q_MAX (VLAN_N_VID - 2)
#define MLXSW_SP_FID_RFID_MAX (11 * 1024)
-#define MLXSW_SP_FID_8021Q_PGT_BASE 0
-#define MLXSW_SP_FID_8021D_PGT_BASE (3 * MLXSW_SP_FID_8021Q_MAX)
static const struct mlxsw_sp_flood_table mlxsw_sp_fid_8021d_flood_tables[] = {
{
@@ -1434,7 +1440,6 @@ static const struct mlxsw_sp_fid_family mlxsw_sp1_fid_8021q_family = {
.ops = &mlxsw_sp_fid_8021q_ops,
.flood_rsp = false,
.bridge_type = MLXSW_REG_BRIDGE_TYPE_0,
- .pgt_base = MLXSW_SP_FID_8021Q_PGT_BASE,
.smpe_index_valid = false,
};
@@ -1448,7 +1453,6 @@ static const struct mlxsw_sp_fid_family mlxsw_sp1_fid_8021d_family = {
.rif_type = MLXSW_SP_RIF_TYPE_FID,
.ops = &mlxsw_sp_fid_8021d_ops,
.bridge_type = MLXSW_REG_BRIDGE_TYPE_1,
- .pgt_base = MLXSW_SP_FID_8021D_PGT_BASE,
.smpe_index_valid = false,
};
@@ -1490,7 +1494,6 @@ static const struct mlxsw_sp_fid_family mlxsw_sp2_fid_8021q_family = {
.ops = &mlxsw_sp_fid_8021q_ops,
.flood_rsp = false,
.bridge_type = MLXSW_REG_BRIDGE_TYPE_0,
- .pgt_base = MLXSW_SP_FID_8021Q_PGT_BASE,
.smpe_index_valid = true,
};
@@ -1504,7 +1507,6 @@ static const struct mlxsw_sp_fid_family mlxsw_sp2_fid_8021d_family = {
.rif_type = MLXSW_SP_RIF_TYPE_FID,
.ops = &mlxsw_sp_fid_8021d_ops,
.bridge_type = MLXSW_REG_BRIDGE_TYPE_1,
- .pgt_base = MLXSW_SP_FID_8021D_PGT_BASE,
.smpe_index_valid = true,
};
@@ -1654,14 +1656,10 @@ mlxsw_sp_fid_flood_table_init(struct mlxsw_sp_fid_family *fid_family,
enum mlxsw_sp_flood_type packet_type = flood_table->packet_type;
struct mlxsw_sp *mlxsw_sp = fid_family->mlxsw_sp;
const int *sfgc_packet_types;
- u16 num_fids, mid_base;
+ u16 mid_base;
int err, i;
mid_base = mlxsw_sp_fid_flood_table_mid(fid_family, flood_table, 0);
- num_fids = mlxsw_sp_fid_family_num_fids(fid_family);
- err = mlxsw_sp_pgt_mid_alloc_range(mlxsw_sp, mid_base, num_fids);
- if (err)
- return err;
sfgc_packet_types = mlxsw_sp_packet_type_sfgc_types[packet_type];
for (i = 0; i < MLXSW_REG_SFGC_TYPE_MAX; i++) {
@@ -1675,57 +1673,56 @@ mlxsw_sp_fid_flood_table_init(struct mlxsw_sp_fid_family *fid_family,
err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfgc), sfgc_pl);
if (err)
- goto err_reg_write;
+ return err;
}
return 0;
-
-err_reg_write:
- mlxsw_sp_pgt_mid_free_range(mlxsw_sp, mid_base, num_fids);
- return err;
-}
-
-static void
-mlxsw_sp_fid_flood_table_fini(struct mlxsw_sp_fid_family *fid_family,
- const struct mlxsw_sp_flood_table *flood_table)
-{
- struct mlxsw_sp *mlxsw_sp = fid_family->mlxsw_sp;
- u16 num_fids, mid_base;
-
- mid_base = mlxsw_sp_fid_flood_table_mid(fid_family, flood_table, 0);
- num_fids = mlxsw_sp_fid_family_num_fids(fid_family);
- mlxsw_sp_pgt_mid_free_range(mlxsw_sp, mid_base, num_fids);
}
static int
mlxsw_sp_fid_flood_tables_init(struct mlxsw_sp_fid_family *fid_family)
{
+ struct mlxsw_sp *mlxsw_sp = fid_family->mlxsw_sp;
+ u16 pgt_size;
+ int err;
int i;
+ if (!fid_family->nr_flood_tables)
+ return 0;
+
+ pgt_size = mlxsw_sp_fid_family_pgt_size(fid_family);
+ err = mlxsw_sp_pgt_mid_alloc_range(mlxsw_sp, &fid_family->pgt_base,
+ pgt_size);
+ if (err)
+ return err;
+
for (i = 0; i < fid_family->nr_flood_tables; i++) {
const struct mlxsw_sp_flood_table *flood_table;
- int err;
flood_table = &fid_family->flood_tables[i];
err = mlxsw_sp_fid_flood_table_init(fid_family, flood_table);
if (err)
- return err;
+ goto err_flood_table_init;
}
return 0;
+
+err_flood_table_init:
+ mlxsw_sp_pgt_mid_free_range(mlxsw_sp, fid_family->pgt_base, pgt_size);
+ return err;
}
static void
mlxsw_sp_fid_flood_tables_fini(struct mlxsw_sp_fid_family *fid_family)
{
- int i;
+ struct mlxsw_sp *mlxsw_sp = fid_family->mlxsw_sp;
+ u16 pgt_size;
- for (i = 0; i < fid_family->nr_flood_tables; i++) {
- const struct mlxsw_sp_flood_table *flood_table;
+ if (!fid_family->nr_flood_tables)
+ return;
- flood_table = &fid_family->flood_tables[i];
- mlxsw_sp_fid_flood_table_fini(fid_family, flood_table);
- }
+ pgt_size = mlxsw_sp_fid_family_pgt_size(fid_family);
+ mlxsw_sp_pgt_mid_free_range(mlxsw_sp, fid_family->pgt_base, pgt_size);
}
static int mlxsw_sp_fid_family_register(struct mlxsw_sp *mlxsw_sp,
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_pgt.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_pgt.c
index 7dd3dba0fa83..4ef81bac17d6 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_pgt.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_pgt.c
@@ -54,25 +54,15 @@ void mlxsw_sp_pgt_mid_free(struct mlxsw_sp *mlxsw_sp, u16 mid_base)
mutex_unlock(&mlxsw_sp->pgt->lock);
}
-int
-mlxsw_sp_pgt_mid_alloc_range(struct mlxsw_sp *mlxsw_sp, u16 mid_base, u16 count)
+int mlxsw_sp_pgt_mid_alloc_range(struct mlxsw_sp *mlxsw_sp, u16 *p_mid_base,
+ u16 count)
{
- unsigned int idr_cursor;
+ unsigned int mid_base;
int i, err;
mutex_lock(&mlxsw_sp->pgt->lock);
- /* This function is supposed to be called several times as part of
- * driver init, in specific order. Verify that the mid_index is the
- * first free index in the idr, to be able to free the indexes in case
- * of error.
- */
- idr_cursor = idr_get_cursor(&mlxsw_sp->pgt->pgt_idr);
- if (WARN_ON(idr_cursor != mid_base)) {
- err = -EINVAL;
- goto err_idr_cursor;
- }
-
+ mid_base = idr_get_cursor(&mlxsw_sp->pgt->pgt_idr);
for (i = 0; i < count; i++) {
err = idr_alloc_cyclic(&mlxsw_sp->pgt->pgt_idr, NULL,
mid_base, mid_base + count, GFP_KERNEL);
@@ -81,12 +71,12 @@ mlxsw_sp_pgt_mid_alloc_range(struct mlxsw_sp *mlxsw_sp, u16 mid_base, u16 count)
}
mutex_unlock(&mlxsw_sp->pgt->lock);
+ *p_mid_base = mid_base;
return 0;
err_idr_alloc_cyclic:
for (i--; i >= 0; i--)
idr_remove(&mlxsw_sp->pgt->pgt_idr, mid_base + i);
-err_idr_cursor:
mutex_unlock(&mlxsw_sp->pgt->lock);
return err;
}
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
index debd2c466f11..82a95125d9ca 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
@@ -3107,7 +3107,7 @@ struct mlxsw_sp_nexthop_group_info {
gateway:1, /* routes using the group use a gateway */
is_resilient:1;
struct list_head list; /* member in nh_res_grp_list */
- struct mlxsw_sp_nexthop nexthops[];
+ struct mlxsw_sp_nexthop nexthops[] __counted_by(count);
};
static struct mlxsw_sp_rif *
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
index b3472fb94617..af50ff9e5f26 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
@@ -31,7 +31,7 @@ struct mlxsw_sp_span {
refcount_t policer_id_base_ref_count;
atomic_t active_entries_count;
int entries_count;
- struct mlxsw_sp_span_entry entries[];
+ struct mlxsw_sp_span_entry entries[] __counted_by(entries_count);
};
struct mlxsw_sp_span_analyzed_port {
diff --git a/drivers/net/ethernet/micrel/ks8842.c b/drivers/net/ethernet/micrel/ks8842.c
index c11b118dc415..ddd87ef71caf 100644
--- a/drivers/net/ethernet/micrel/ks8842.c
+++ b/drivers/net/ethernet/micrel/ks8842.c
@@ -1228,7 +1228,7 @@ err_mem_region:
return err;
}
-static int ks8842_remove(struct platform_device *pdev)
+static void ks8842_remove(struct platform_device *pdev)
{
struct net_device *netdev = platform_get_drvdata(pdev);
struct ks8842_adapter *adapter = netdev_priv(netdev);
@@ -1239,7 +1239,6 @@ static int ks8842_remove(struct platform_device *pdev)
iounmap(adapter->hw_addr);
free_netdev(netdev);
release_mem_region(iomem->start, resource_size(iomem));
- return 0;
}
@@ -1248,7 +1247,7 @@ static struct platform_driver ks8842_platform_driver = {
.name = DRV_NAME,
},
.probe = ks8842_probe,
- .remove = ks8842_remove,
+ .remove_new = ks8842_remove,
};
module_platform_driver(ks8842_platform_driver);
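
The platform-driver updates in this pull follow one pattern: .remove() callbacks that only ever returned 0 are converted to .remove_new(), which returns void, so the core cannot misinterpret a stale error code during unbind. A minimal sketch of the converted shape, using placeholder "foo" names:

#include <linux/module.h>
#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	return 0;
}

static void foo_remove(struct platform_device *pdev)
{
	/* Tear down resources; there is no return value to get wrong. */
}

static struct platform_driver foo_driver = {
	.probe		= foo_probe,
	.remove_new	= foo_remove,
	.driver		= {
		.name	= "foo",
	},
};
module_platform_driver(foo_driver);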
diff --git a/drivers/net/ethernet/micrel/ks8851_par.c b/drivers/net/ethernet/micrel/ks8851_par.c
index 7f49042484bd..2a7f29854267 100644
--- a/drivers/net/ethernet/micrel/ks8851_par.c
+++ b/drivers/net/ethernet/micrel/ks8851_par.c
@@ -327,11 +327,9 @@ static int ks8851_probe_par(struct platform_device *pdev)
return ks8851_probe_common(netdev, dev, msg_enable);
}
-static int ks8851_remove_par(struct platform_device *pdev)
+static void ks8851_remove_par(struct platform_device *pdev)
{
ks8851_remove_common(&pdev->dev);
-
- return 0;
}
static const struct of_device_id ks8851_match_table[] = {
@@ -347,7 +345,7 @@ static struct platform_driver ks8851_driver = {
.pm = &ks8851_pm_ops,
},
.probe = ks8851_probe_par,
- .remove = ks8851_remove_par,
+ .remove_new = ks8851_remove_par,
};
module_platform_driver(ks8851_driver);
diff --git a/drivers/net/ethernet/microchip/lan743x_ethtool.c b/drivers/net/ethernet/microchip/lan743x_ethtool.c
index 2db5949b4c7e..6961cfc55fb9 100644
--- a/drivers/net/ethernet/microchip/lan743x_ethtool.c
+++ b/drivers/net/ethernet/microchip/lan743x_ethtool.c
@@ -1047,7 +1047,8 @@ static int lan743x_ethtool_get_ts_info(struct net_device *netdev,
BIT(HWTSTAMP_TX_ON) |
BIT(HWTSTAMP_TX_ONESTEP_SYNC);
ts_info->rx_filters = BIT(HWTSTAMP_FILTER_NONE) |
- BIT(HWTSTAMP_FILTER_ALL);
+ BIT(HWTSTAMP_FILTER_ALL) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_EVENT);
return 0;
}
diff --git a/drivers/net/ethernet/microchip/lan743x_main.c b/drivers/net/ethernet/microchip/lan743x_main.c
index c81cdeb4d4e7..45e209a7d083 100644
--- a/drivers/net/ethernet/microchip/lan743x_main.c
+++ b/drivers/net/ethernet/microchip/lan743x_main.c
@@ -1466,9 +1466,15 @@ static void lan743x_phy_link_status_change(struct net_device *netdev)
static void lan743x_phy_close(struct lan743x_adapter *adapter)
{
struct net_device *netdev = adapter->netdev;
+ struct phy_device *phydev = netdev->phydev;
phy_stop(netdev->phydev);
phy_disconnect(netdev->phydev);
+
+ /* using phydev here as phy_disconnect NULLs netdev->phydev */
+ if (phy_is_pseudo_fixed_link(phydev))
+ fixed_phy_unregister(phydev);
+
}
static void lan743x_phy_interface_select(struct lan743x_adapter *adapter)
@@ -1864,6 +1870,50 @@ static int lan743x_tx_get_avail_desc(struct lan743x_tx *tx)
return last_head - last_tail - 1;
}
+static void lan743x_rx_cfg_b_tstamp_config(struct lan743x_adapter *adapter,
+ int rx_ts_config)
+{
+ int channel_number;
+ int index;
+ u32 data;
+
+ for (index = 0; index < LAN743X_USED_RX_CHANNELS; index++) {
+ channel_number = adapter->rx[index].channel_number;
+ data = lan743x_csr_read(adapter, RX_CFG_B(channel_number));
+ data &= RX_CFG_B_TS_MASK_;
+ data |= rx_ts_config;
+ lan743x_csr_write(adapter, RX_CFG_B(channel_number),
+ data);
+ }
+}
+
+int lan743x_rx_set_tstamp_mode(struct lan743x_adapter *adapter,
+ int rx_filter)
+{
+ u32 data;
+
+ switch (rx_filter) {
+ case HWTSTAMP_FILTER_PTP_V2_EVENT:
+ lan743x_rx_cfg_b_tstamp_config(adapter,
+ RX_CFG_B_TS_DESCR_EN_);
+ data = lan743x_csr_read(adapter, PTP_RX_TS_CFG);
+ data |= PTP_RX_TS_CFG_EVENT_MSGS_;
+ lan743x_csr_write(adapter, PTP_RX_TS_CFG, data);
+ break;
+ case HWTSTAMP_FILTER_NONE:
+ lan743x_rx_cfg_b_tstamp_config(adapter,
+ RX_CFG_B_TS_NONE_);
+ break;
+ case HWTSTAMP_FILTER_ALL:
+ lan743x_rx_cfg_b_tstamp_config(adapter,
+ RX_CFG_B_TS_ALL_RX_);
+ break;
+ default:
+ return -ERANGE;
+ }
+ return 0;
+}
+
void lan743x_tx_set_timestamping_mode(struct lan743x_tx *tx,
bool enable_timestamping,
bool enable_onestep_sync)
@@ -2938,7 +2988,6 @@ static int lan743x_rx_open(struct lan743x_rx *rx)
data |= RX_CFG_B_RX_PAD_2_;
data &= ~RX_CFG_B_RX_RING_LEN_MASK_;
data |= ((rx->ring_size) & RX_CFG_B_RX_RING_LEN_MASK_);
- data |= RX_CFG_B_TS_ALL_RX_;
if (!(adapter->csr.flags & LAN743X_CSR_FLAG_IS_A0))
data |= RX_CFG_B_RDMABL_512_;
diff --git a/drivers/net/ethernet/microchip/lan743x_main.h b/drivers/net/ethernet/microchip/lan743x_main.h
index 52609fc13ad9..b648461787d2 100644
--- a/drivers/net/ethernet/microchip/lan743x_main.h
+++ b/drivers/net/ethernet/microchip/lan743x_main.h
@@ -522,6 +522,8 @@
(((u32)(rx_latency)) & 0x0000FFFF)
#define PTP_CAP_INFO (0x0A60)
#define PTP_CAP_INFO_TX_TS_CNT_GET_(reg_val) (((reg_val) & 0x00000070) >> 4)
+#define PTP_RX_TS_CFG (0x0A68)
+#define PTP_RX_TS_CFG_EVENT_MSGS_ GENMASK(3, 0)
#define PTP_TX_MOD (0x0AA4)
#define PTP_TX_MOD_TX_PTP_SYNC_TS_INSERT_ (0x10000000)
@@ -657,6 +659,9 @@
#define RX_CFG_B(channel) (0xC44 + ((channel) << 6))
#define RX_CFG_B_TS_ALL_RX_ BIT(29)
+#define RX_CFG_B_TS_DESCR_EN_ BIT(28)
+#define RX_CFG_B_TS_NONE_ 0
+#define RX_CFG_B_TS_MASK_ (0xCFFFFFFF)
#define RX_CFG_B_RX_PAD_MASK_ (0x03000000)
#define RX_CFG_B_RX_PAD_0_ (0x00000000)
#define RX_CFG_B_RX_PAD_2_ (0x02000000)
@@ -991,6 +996,9 @@ struct lan743x_rx {
struct sk_buff *skb_head, *skb_tail;
};
+int lan743x_rx_set_tstamp_mode(struct lan743x_adapter *adapter,
+ int rx_filter);
+
/* SGMII Link Speed Duplex status */
enum lan743x_sgmii_lsd {
POWER_DOWN = 0,
diff --git a/drivers/net/ethernet/microchip/lan743x_ptp.c b/drivers/net/ethernet/microchip/lan743x_ptp.c
index 39e1066ecd5f..2f04bc77a118 100644
--- a/drivers/net/ethernet/microchip/lan743x_ptp.c
+++ b/drivers/net/ethernet/microchip/lan743x_ptp.c
@@ -1493,6 +1493,10 @@ int lan743x_ptp_open(struct lan743x_adapter *adapter)
temp = lan743x_csr_read(adapter, PTP_TX_MOD2);
temp |= PTP_TX_MOD2_TX_PTP_CLR_UDPV4_CHKSUM_;
lan743x_csr_write(adapter, PTP_TX_MOD2, temp);
+
+ /* Default Timestamping */
+ lan743x_rx_set_tstamp_mode(adapter, HWTSTAMP_FILTER_NONE);
+
lan743x_ptp_enable(adapter);
lan743x_csr_write(adapter, INT_EN_SET, INT_BIT_1588_);
lan743x_csr_write(adapter, PTP_INT_EN_SET,
@@ -1653,6 +1657,9 @@ static void lan743x_ptp_disable(struct lan743x_adapter *adapter)
{
struct lan743x_ptp *ptp = &adapter->ptp;
+ /* Disable Timestamping */
+ lan743x_rx_set_tstamp_mode(adapter, HWTSTAMP_FILTER_NONE);
+
mutex_lock(&ptp->command_lock);
if (!lan743x_ptp_is_enabled(adapter)) {
netif_warn(adapter, drv, adapter->netdev,
@@ -1785,6 +1792,8 @@ int lan743x_ptp_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
break;
}
+ ret = lan743x_rx_set_tstamp_mode(adapter, config.rx_filter);
+
if (!ret)
return copy_to_user(ifr->ifr_data, &config,
sizeof(config)) ? -EFAULT : 0;
diff --git a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
index 0d6e79af2410..2635ef8958c8 100644
--- a/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
+++ b/drivers/net/ethernet/microchip/lan966x/lan966x_main.c
@@ -671,7 +671,6 @@ static irqreturn_t lan966x_xtr_irq_handler(int irq, void *args)
skb = netdev_alloc_skb(dev, len);
if (unlikely(!skb)) {
netdev_err(dev, "Unable to allocate sk_buff\n");
- err = -ENOMEM;
break;
}
buf_len = len - ETH_FCS_LEN;
@@ -1261,7 +1260,7 @@ cleanup_ports:
return err;
}
-static int lan966x_remove(struct platform_device *pdev)
+static void lan966x_remove(struct platform_device *pdev)
{
struct lan966x *lan966x = platform_get_drvdata(pdev);
@@ -1280,13 +1279,11 @@ static int lan966x_remove(struct platform_device *pdev)
lan966x_ptp_deinit(lan966x);
debugfs_remove_recursive(lan966x->debugfs_root);
-
- return 0;
}
static struct platform_driver lan966x_driver = {
.probe = lan966x_probe,
- .remove = lan966x_remove,
+ .remove_new = lan966x_remove,
.driver = {
.name = "lan966x-switch",
.of_match_table = lan966x_match,
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_ethtool.c b/drivers/net/ethernet/microchip/sparx5/sparx5_ethtool.c
index 01f3a3a41cdb..37d2584b48a7 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_ethtool.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_ethtool.c
@@ -1012,8 +1012,7 @@ static void sparx5_get_sset_strings(struct net_device *ndev, u32 sset, u8 *data)
return;
for (idx = 0; idx < sparx5->num_ethtool_stats; idx++)
- strncpy(data + idx * ETH_GSTRING_LEN,
- sparx5->stats_layout[idx], ETH_GSTRING_LEN);
+ ethtool_sprintf(&data, "%s", sparx5->stats_layout[idx]);
}
static void sparx5_get_sset_data(struct net_device *ndev,
diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
index dc9af480bfea..d1f7fc8b1b71 100644
--- a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
+++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
@@ -911,7 +911,7 @@ cleanup_pnode:
return err;
}
-static int mchp_sparx5_remove(struct platform_device *pdev)
+static void mchp_sparx5_remove(struct platform_device *pdev)
{
struct sparx5 *sparx5 = platform_get_drvdata(pdev);
@@ -931,8 +931,6 @@ static int mchp_sparx5_remove(struct platform_device *pdev)
/* Unregister netdevs */
sparx5_unregister_notifier_blocks(sparx5);
destroy_workqueue(sparx5->mact_queue);
-
- return 0;
}
static const struct of_device_id mchp_sparx5_match[] = {
@@ -943,7 +941,7 @@ MODULE_DEVICE_TABLE(of, mchp_sparx5_match);
static struct platform_driver mchp_sparx5_driver = {
.probe = mchp_sparx5_probe,
- .remove = mchp_sparx5_remove,
+ .remove_new = mchp_sparx5_remove,
.driver = {
.name = "sparx5-switch",
.of_match_table = mchp_sparx5_match,
diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c b/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c
index c2c3397c5898..59bfbda29bb3 100644
--- a/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c
+++ b/drivers/net/ethernet/microchip/vcap/vcap_api_debugfs.c
@@ -300,7 +300,7 @@ static int vcap_show_admin(struct vcap_control *vctrl,
vcap_show_admin_info(vctrl, admin, out);
list_for_each_entry(elem, &admin->rules, list) {
vrule = vcap_decode_rule(elem);
- if (IS_ERR_OR_NULL(vrule)) {
+ if (IS_ERR(vrule)) {
ret = PTR_ERR(vrule);
break;
}
diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
index 48ea4aeeea5d..fc3d2903a80f 100644
--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
+++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
@@ -2687,8 +2687,9 @@ static int mana_probe_port(struct mana_context *ac, int port_idx,
ndev->features = ndev->hw_features | NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_HW_VLAN_CTAG_RX;
ndev->vlan_features = ndev->features;
- ndev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
- NETDEV_XDP_ACT_NDO_XMIT;
+ xdp_set_features_flag(ndev, NETDEV_XDP_ACT_BASIC |
+ NETDEV_XDP_ACT_REDIRECT |
+ NETDEV_XDP_ACT_NDO_XMIT);
err = register_netdev(ndev);
if (err) {
diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
index 3da99b62797d..96dc69e7141f 100644
--- a/drivers/net/ethernet/moxa/moxart_ether.c
+++ b/drivers/net/ethernet/moxa/moxart_ether.c
@@ -558,7 +558,7 @@ irq_map_fail:
return ret;
}
-static int moxart_remove(struct platform_device *pdev)
+static void moxart_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
@@ -566,8 +566,6 @@ static int moxart_remove(struct platform_device *pdev)
devm_free_irq(&pdev->dev, ndev->irq, ndev);
moxart_mac_free_memory(ndev);
free_netdev(ndev);
-
- return 0;
}
static const struct of_device_id moxart_mac_match[] = {
@@ -578,7 +576,7 @@ MODULE_DEVICE_TABLE(of, moxart_mac_match);
static struct platform_driver moxart_mac_driver = {
.probe = moxart_mac_probe,
- .remove = moxart_remove,
+ .remove_new = moxart_remove,
.driver = {
.name = "moxart-ethernet",
.of_match_table = moxart_mac_match,
diff --git a/drivers/net/ethernet/mscc/ocelot_vsc7514.c b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
index 151b42465348..993212c3a7da 100644
--- a/drivers/net/ethernet/mscc/ocelot_vsc7514.c
+++ b/drivers/net/ethernet/mscc/ocelot_vsc7514.c
@@ -392,7 +392,7 @@ out_free_devlink:
return err;
}
-static int mscc_ocelot_remove(struct platform_device *pdev)
+static void mscc_ocelot_remove(struct platform_device *pdev)
{
struct ocelot *ocelot = platform_get_drvdata(pdev);
@@ -408,13 +408,11 @@ static int mscc_ocelot_remove(struct platform_device *pdev)
unregister_switchdev_notifier(&ocelot_switchdev_nb);
unregister_netdevice_notifier(&ocelot_netdevice_nb);
devlink_free(ocelot->devlink);
-
- return 0;
}
static struct platform_driver mscc_ocelot_driver = {
.probe = mscc_ocelot_probe,
- .remove = mscc_ocelot_remove,
+ .remove_new = mscc_ocelot_remove,
.driver = {
.name = "ocelot-switch",
.of_match_table = mscc_ocelot_match,
diff --git a/drivers/net/ethernet/natsemi/jazzsonic.c b/drivers/net/ethernet/natsemi/jazzsonic.c
index 3f371faeb6d0..2b6e097df28f 100644
--- a/drivers/net/ethernet/natsemi/jazzsonic.c
+++ b/drivers/net/ethernet/natsemi/jazzsonic.c
@@ -227,7 +227,7 @@ MODULE_ALIAS("platform:jazzsonic");
#include "sonic.c"
-static int jazz_sonic_device_remove(struct platform_device *pdev)
+static void jazz_sonic_device_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct sonic_local* lp = netdev_priv(dev);
@@ -237,13 +237,11 @@ static int jazz_sonic_device_remove(struct platform_device *pdev)
lp->descriptors, lp->descriptors_laddr);
release_mem_region(dev->base_addr, SONIC_MEM_SIZE);
free_netdev(dev);
-
- return 0;
}
static struct platform_driver jazz_sonic_driver = {
.probe = jazz_sonic_probe,
- .remove = jazz_sonic_device_remove,
+ .remove_new = jazz_sonic_device_remove,
.driver = {
.name = jazz_sonic_string,
},
diff --git a/drivers/net/ethernet/natsemi/macsonic.c b/drivers/net/ethernet/natsemi/macsonic.c
index b16f7c830f9b..2fc63860dbdb 100644
--- a/drivers/net/ethernet/natsemi/macsonic.c
+++ b/drivers/net/ethernet/natsemi/macsonic.c
@@ -532,7 +532,7 @@ MODULE_ALIAS("platform:macsonic");
#include "sonic.c"
-static int mac_sonic_platform_remove(struct platform_device *pdev)
+static void mac_sonic_platform_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct sonic_local* lp = netdev_priv(dev);
@@ -541,13 +541,11 @@ static int mac_sonic_platform_remove(struct platform_device *pdev)
dma_free_coherent(lp->device, SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode),
lp->descriptors, lp->descriptors_laddr);
free_netdev(dev);
-
- return 0;
}
static struct platform_driver mac_sonic_platform_driver = {
.probe = mac_sonic_platform_probe,
- .remove = mac_sonic_platform_remove,
+ .remove_new = mac_sonic_platform_remove,
.driver = {
.name = "macsonic",
},
diff --git a/drivers/net/ethernet/natsemi/xtsonic.c b/drivers/net/ethernet/natsemi/xtsonic.c
index 52fef34d43f9..8943e7244310 100644
--- a/drivers/net/ethernet/natsemi/xtsonic.c
+++ b/drivers/net/ethernet/natsemi/xtsonic.c
@@ -249,7 +249,7 @@ MODULE_DESCRIPTION("Xtensa XT2000 SONIC ethernet driver");
#include "sonic.c"
-static int xtsonic_device_remove(struct platform_device *pdev)
+static void xtsonic_device_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct sonic_local *lp = netdev_priv(dev);
@@ -260,13 +260,11 @@ static int xtsonic_device_remove(struct platform_device *pdev)
lp->descriptors, lp->descriptors_laddr);
release_region (dev->base_addr, SONIC_MEM_SIZE);
free_netdev(dev);
-
- return 0;
}
static struct platform_driver xtsonic_driver = {
.probe = xtsonic_probe,
- .remove = xtsonic_device_remove,
+ .remove_new = xtsonic_device_remove,
.driver = {
.name = xtsonic_string,
},
diff --git a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
index b1f026b81dea..cc54faca2283 100644
--- a/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
+++ b/drivers/net/ethernet/netronome/nfp/crypto/ipsec.c
@@ -378,6 +378,34 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x,
/* Encryption */
switch (x->props.ealgo) {
case SADB_EALG_NONE:
+ /* The xfrm descriptor for CHACHA20_POLY1305 does not set the algorithm id, so it
+ * carries the default value SADB_EALG_NONE. In the SADB_EALG_NONE branch, the
+ * driver uses the algorithm name to identify CHACHA20_POLY1305.
+ */
+ if (x->aead && !strcmp(x->aead->alg_name, "rfc7539esp(chacha20,poly1305)")) {
+ if (nn->pdev->device != PCI_DEVICE_ID_NFP3800) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "Unsupported encryption algorithm for offload");
+ return -EINVAL;
+ }
+ if (x->aead->alg_icv_len != 128) {
+ NL_SET_ERR_MSG_MOD(extack,
+ "ICV must be 128bit with CHACHA20_POLY1305");
+ return -EINVAL;
+ }
+
+ /* Aead->alg_key_len includes 32-bit salt */
+ if (x->aead->alg_key_len - 32 != 256) {
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported CHACHA20 key length");
+ return -EINVAL;
+ }
+
+ /* The cipher mode is not configured for CHACHA20 */
+ cfg->ctrl_word.hash = NFP_IPSEC_HASH_POLY1305_128;
+ cfg->ctrl_word.cipher = NFP_IPSEC_CIPHER_CHACHA20;
+ break;
+ }
+ fallthrough;
case SADB_EALG_NULL:
cfg->ctrl_word.cimode = NFP_IPSEC_CIMODE_CBC;
cfg->ctrl_word.cipher = NFP_IPSEC_CIPHER_NULL;
@@ -427,6 +455,7 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x,
}
if (x->aead) {
+ int key_offset = 0;
int salt_len = 4;
key_len = DIV_ROUND_UP(x->aead->alg_key_len, BITS_PER_BYTE);
@@ -437,9 +466,19 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x,
return -EINVAL;
}
- for (i = 0; i < key_len / sizeof(cfg->ciph_key[0]) ; i++)
- cfg->ciph_key[i] = get_unaligned_be32(x->aead->alg_key +
- sizeof(cfg->ciph_key[0]) * i);
+ /* CHACHA20's key order needs to be adjusted to match the hardware design.
+ * Other's key order: {K0, K1, K2, K3, K4, K5, K6, K7}
+ * CHACHA20's key order: {K4, K5, K6, K7, K0, K1, K2, K3}
+ */
+ if (!strcmp(x->aead->alg_name, "rfc7539esp(chacha20,poly1305)"))
+ key_offset = key_len / sizeof(cfg->ciph_key[0]) >> 1;
+
+ for (i = 0; i < key_len / sizeof(cfg->ciph_key[0]); i++) {
+ int index = (i + key_offset) % (key_len / sizeof(cfg->ciph_key[0]));
+
+ cfg->ciph_key[index] = get_unaligned_be32(x->aead->alg_key +
+ sizeof(cfg->ciph_key[0]) * i);
+ }
/* Load up the salt */
cfg->aesgcm_fields.salt = get_unaligned_be32(x->aead->alg_key + key_len);
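
The rotation described above amounts to writing key word i into slot (i + key_offset) % nwords, with key_offset equal to half the word count; for an eight-word key that yields {K4, K5, K6, K7, K0, K1, K2, K3}. A standalone sketch of that index math, detached from the nfp structures:

#include <linux/types.h>

/* Rotate key words by half the key length, e.g. nwords = 8, offset = 4:
 * src {K0..K7} becomes dst {K4, K5, K6, K7, K0, K1, K2, K3}.
 */
static void example_rotate_key_words(const u32 *src, u32 *dst, int nwords)
{
	int key_offset = nwords >> 1;
	int i;

	for (i = 0; i < nwords; i++)
		dst[(i + key_offset) % nwords] = src[i];
}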
diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c
index 0cc026b0aefd..17381bfc15d7 100644
--- a/drivers/net/ethernet/netronome/nfp/nfd3/dp.c
+++ b/drivers/net/ethernet/netronome/nfp/nfd3/dp.c
@@ -1070,7 +1070,7 @@ static int nfp_nfd3_rx(struct nfp_net_rx_ring *rx_ring, int budget)
nfp_repr_inc_rx_stats(netdev, pkt_len);
}
- skb = build_skb(rxbuf->frag, true_bufsz);
+ skb = napi_build_skb(rxbuf->frag, true_bufsz);
if (unlikely(!skb)) {
nfp_nfd3_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL);
continue;
diff --git a/drivers/net/ethernet/netronome/nfp/nfd3/xsk.c b/drivers/net/ethernet/netronome/nfp/nfd3/xsk.c
index 5d9db8c2a5b4..45be6954d5aa 100644
--- a/drivers/net/ethernet/netronome/nfp/nfd3/xsk.c
+++ b/drivers/net/ethernet/netronome/nfp/nfd3/xsk.c
@@ -256,7 +256,7 @@ nfp_nfd3_xsk_rx(struct nfp_net_rx_ring *rx_ring, int budget,
nfp_net_xsk_rx_ring_fill_freelist(r_vec->rx_ring);
if (xdp_redir)
- xdp_do_flush_map();
+ xdp_do_flush();
if (tx_ring->wr_ptr_add)
nfp_net_tx_xmit_more_flush(tx_ring);
diff --git a/drivers/net/ethernet/netronome/nfp/nfdk/dp.c b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c
index 33b6d74adb4b..8d78c6faefa8 100644
--- a/drivers/net/ethernet/netronome/nfp/nfdk/dp.c
+++ b/drivers/net/ethernet/netronome/nfp/nfdk/dp.c
@@ -1189,7 +1189,7 @@ static int nfp_nfdk_rx(struct nfp_net_rx_ring *rx_ring, int budget)
nfp_repr_inc_rx_stats(netdev, pkt_len);
}
- skb = build_skb(rxbuf->frag, true_bufsz);
+ skb = napi_build_skb(rxbuf->frag, true_bufsz);
if (unlikely(!skb)) {
nfp_nfdk_rx_drop(dp, r_vec, rx_ring, rxbuf, NULL);
continue;
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.h b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.h
index 48a74accbbd3..77bf4198dbde 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_repr.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_repr.h
@@ -18,7 +18,7 @@ struct nfp_port;
*/
struct nfp_reprs {
unsigned int num_reprs;
- struct net_device __rcu *reprs[];
+ struct net_device __rcu *reprs[] __counted_by(num_reprs);
};
/**
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h
index 6e044ac04917..00264af13b49 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.h
@@ -241,7 +241,7 @@ struct nfp_eth_table {
u64 link_modes_supp[2];
u64 link_modes_ad[2];
- } ports[];
+ } ports[] __counted_by(count);
};
struct nfp_eth_table *nfp_eth_read_ports(struct nfp_cpp *cpp);
diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_resource.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_resource.c
index ce7492a6a98f..279ea0b56955 100644
--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_resource.c
+++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_resource.c
@@ -159,7 +159,7 @@ nfp_resource_acquire(struct nfp_cpp *cpp, const char *name)
if (!res)
return ERR_PTR(-ENOMEM);
- strncpy(res->name, name, NFP_RESOURCE_ENTRY_NAME_SZ);
+ strscpy(res->name, name, sizeof(res->name));
dev_mutex = nfp_cpp_mutex_alloc(cpp, NFP_RESOURCE_TBL_TARGET,
NFP_RESOURCE_TBL_BASE,
diff --git a/drivers/net/ethernet/ni/nixge.c b/drivers/net/ethernet/ni/nixge.c
index ba27bbc68f85..fa1f78b03cb2 100644
--- a/drivers/net/ethernet/ni/nixge.c
+++ b/drivers/net/ethernet/ni/nixge.c
@@ -683,7 +683,7 @@ static int nixge_poll(struct napi_struct *napi, int budget)
if (status & (XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK)) {
/* If there's more, reschedule, but clear */
nixge_dma_write_reg(priv, XAXIDMA_RX_SR_OFFSET, status);
- napi_reschedule(napi);
+ napi_schedule(napi);
} else {
/* if not, turn on RX IRQs again ... */
cr = nixge_dma_read_reg(priv, XAXIDMA_RX_CR_OFFSET);
@@ -755,8 +755,7 @@ static irqreturn_t nixge_rx_irq(int irq, void *_ndev)
cr &= ~(XAXIDMA_IRQ_IOC_MASK | XAXIDMA_IRQ_DELAY_MASK);
nixge_dma_write_reg(priv, XAXIDMA_RX_CR_OFFSET, cr);
- if (napi_schedule_prep(&priv->napi))
- __napi_schedule(&priv->napi);
+ napi_schedule(&priv->napi);
goto out;
}
if (!(status & XAXIDMA_IRQ_ALL_MASK)) {
@@ -1397,7 +1396,7 @@ free_netdev:
return err;
}
-static int nixge_remove(struct platform_device *pdev)
+static void nixge_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct nixge_priv *priv = netdev_priv(ndev);
@@ -1412,13 +1411,11 @@ static int nixge_remove(struct platform_device *pdev)
mdiobus_unregister(priv->mii_bus);
free_netdev(ndev);
-
- return 0;
}
static struct platform_driver nixge_driver = {
.probe = nixge_probe,
- .remove = nixge_remove,
+ .remove_new = nixge_remove,
.driver = {
.name = "nixge",
.of_match_table = nixge_dt_ids,
diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c
index 1a4a272f4c5c..dd3e58a1319c 100644
--- a/drivers/net/ethernet/nxp/lpc_eth.c
+++ b/drivers/net/ethernet/nxp/lpc_eth.c
@@ -1417,7 +1417,7 @@ err_exit:
return ret;
}
-static int lpc_eth_drv_remove(struct platform_device *pdev)
+static void lpc_eth_drv_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct netdata_local *pldat = netdev_priv(ndev);
@@ -1436,8 +1436,6 @@ static int lpc_eth_drv_remove(struct platform_device *pdev)
clk_disable_unprepare(pldat->clk);
clk_put(pldat->clk);
free_netdev(ndev);
-
- return 0;
}
#ifdef CONFIG_PM
@@ -1505,7 +1503,7 @@ MODULE_DEVICE_TABLE(of, lpc_eth_match);
static struct platform_driver lpc_eth_driver = {
.probe = lpc_eth_drv_probe,
- .remove = lpc_eth_drv_remove,
+ .remove_new = lpc_eth_drv_remove,
#ifdef CONFIG_PM
.suspend = lpc_eth_drv_suspend,
.resume = lpc_eth_drv_resume,
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.h b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
index aae4131f146a..1dbc3cb50b1d 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.h
+++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.h
@@ -7,6 +7,7 @@
#include <linux/atomic.h>
#include <linux/mutex.h>
#include <linux/workqueue.h>
+#include <linux/skbuff.h>
#include "ionic_if.h"
#include "ionic_regs.h"
@@ -217,7 +218,7 @@ struct ionic_desc_info {
};
unsigned int bytes;
unsigned int nbufs;
- struct ionic_buf_info bufs[IONIC_MAX_FRAGS];
+ struct ionic_buf_info bufs[MAX_SKB_FRAGS + 1];
ionic_desc_cb cb;
void *cb_arg;
};
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
index 2c3e36b2dd7f..edc14730ce88 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
@@ -3831,6 +3831,18 @@ static void ionic_lif_queue_identify(struct ionic_lif *lif)
qtype, qti->max_sg_elems);
dev_dbg(ionic->dev, " qtype[%d].sg_desc_stride = %d\n",
qtype, qti->sg_desc_stride);
+
+ if (qti->max_sg_elems >= IONIC_MAX_FRAGS) {
+ qti->max_sg_elems = IONIC_MAX_FRAGS - 1;
+ dev_dbg(ionic->dev, "limiting qtype %d max_sg_elems to IONIC_MAX_FRAGS-1 %d\n",
+ qtype, qti->max_sg_elems);
+ }
+
+ if (qti->max_sg_elems > MAX_SKB_FRAGS) {
+ qti->max_sg_elems = MAX_SKB_FRAGS;
+ dev_dbg(ionic->dev, "limiting qtype %d max_sg_elems to MAX_SKB_FRAGS %d\n",
+ qtype, qti->max_sg_elems);
+ }
}
}
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_main.c b/drivers/net/ethernet/pensando/ionic/ionic_main.c
index 1dc79cecc5cc..835577392178 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_main.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_main.c
@@ -554,8 +554,8 @@ int ionic_identify(struct ionic *ionic)
memset(ident, 0, sizeof(*ident));
ident->drv.os_type = cpu_to_le32(IONIC_OS_TYPE_LINUX);
- strncpy(ident->drv.driver_ver_str, UTS_RELEASE,
- sizeof(ident->drv.driver_ver_str) - 1);
+ strscpy(ident->drv.driver_ver_str, UTS_RELEASE,
+ sizeof(ident->drv.driver_ver_str));
mutex_lock(&ionic->dev_cmd_lock);
diff --git a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
index 44466e8c5d77..ccc1b1d407e4 100644
--- a/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
+++ b/drivers/net/ethernet/pensando/ionic/ionic_txrx.c
@@ -1243,25 +1243,84 @@ static int ionic_tx(struct ionic_queue *q, struct sk_buff *skb)
static int ionic_tx_descs_needed(struct ionic_queue *q, struct sk_buff *skb)
{
struct ionic_tx_stats *stats = q_to_tx_stats(q);
+ bool too_many_frags = false;
+ skb_frag_t *frag;
+ int desc_bufs;
+ int chunk_len;
+ int frag_rem;
+ int tso_rem;
+ int seg_rem;
+ bool encap;
+ int hdrlen;
int ndescs;
int err;
/* Each desc is mss long max, so a descriptor for each gso_seg */
- if (skb_is_gso(skb))
+ if (skb_is_gso(skb)) {
ndescs = skb_shinfo(skb)->gso_segs;
- else
+ } else {
ndescs = 1;
+ if (skb_shinfo(skb)->nr_frags > q->max_sg_elems) {
+ too_many_frags = true;
+ goto linearize;
+ }
+ }
- /* If non-TSO, just need 1 desc and nr_frags sg elems */
- if (skb_shinfo(skb)->nr_frags <= q->max_sg_elems)
+ /* If non-TSO, or no frags to check, we're done */
+ if (!skb_is_gso(skb) || !skb_shinfo(skb)->nr_frags)
return ndescs;
- /* Too many frags, so linearize */
- err = skb_linearize(skb);
- if (err)
- return err;
+ /* We need to scan the skb to be sure that none of the MTU-sized
+ * packets in the TSO will require more sgs per descriptor than we
+ * can support. We loop through the frags, add up the lengths for
+ * a packet, and count the number of sgs used per packet.
+ */
+ tso_rem = skb->len;
+ frag = skb_shinfo(skb)->frags;
+ encap = skb->encapsulation;
+
+ /* start with just hdr in first part of first descriptor */
+ if (encap)
+ hdrlen = skb_inner_tcp_all_headers(skb);
+ else
+ hdrlen = skb_tcp_all_headers(skb);
+ seg_rem = min_t(int, tso_rem, hdrlen + skb_shinfo(skb)->gso_size);
+ frag_rem = hdrlen;
+
+ while (tso_rem > 0) {
+ desc_bufs = 0;
+ while (seg_rem > 0) {
+ desc_bufs++;
+
+ /* We add the +1 because we can take buffers for one
+ * more than we have SGs: one for the initial desc data
+ * in addition to the SG segments that might follow.
+ */
+ if (desc_bufs > q->max_sg_elems + 1) {
+ too_many_frags = true;
+ goto linearize;
+ }
+
+ if (frag_rem == 0) {
+ frag_rem = skb_frag_size(frag);
+ frag++;
+ }
+ chunk_len = min(frag_rem, seg_rem);
+ frag_rem -= chunk_len;
+ tso_rem -= chunk_len;
+ seg_rem -= chunk_len;
+ }
+
+ seg_rem = min_t(int, tso_rem, skb_shinfo(skb)->gso_size);
+ }
- stats->linearize++;
+linearize:
+ if (too_many_frags) {
+ err = skb_linearize(skb);
+ if (err)
+ return err;
+ stats->linearize++;
+ }
return ndescs;
}
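
The new scan walks every MTU-sized segment of the TSO and counts how many buffers it would need, linearizing only when some segment exceeds max_sg_elems + 1 (the extra one being the descriptor's initial data slot). A simplified standalone model of that accounting, which assumes all payload lives in page fragments and uses made-up parameters rather than skb state:

#include <linux/minmax.h>

static bool example_tso_fits(const int *frag_lens, int nfrags, int hdrlen,
			     int gso_size, int max_sg_elems)
{
	int tso_rem = hdrlen, seg_rem, frag_rem, desc_bufs;
	int frag_idx = 0, i;

	for (i = 0; i < nfrags; i++)
		tso_rem += frag_lens[i];	/* total bytes to send */

	/* The first segment carries the headers plus one gso_size of payload. */
	seg_rem = min(tso_rem, hdrlen + gso_size);
	frag_rem = hdrlen;			/* headers fill the first buffer */

	while (tso_rem > 0) {
		desc_bufs = 0;
		while (seg_rem > 0) {
			int chunk;

			/* One more buffer than SG entries is allowed: the
			 * descriptor's inline data slot plus its SG list.
			 */
			if (++desc_bufs > max_sg_elems + 1)
				return false;	/* caller must linearize */

			if (!frag_rem)
				frag_rem = frag_lens[frag_idx++];
			chunk = min(frag_rem, seg_rem);
			frag_rem -= chunk;
			tso_rem -= chunk;
			seg_rem -= chunk;
		}
		seg_rem = min(tso_rem, gso_size);
	}
	return true;
}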
diff --git a/drivers/net/ethernet/qlogic/qed/qed_debug.c b/drivers/net/ethernet/qlogic/qed/qed_debug.c
index cdcead614e9f..f67be4b8ad43 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_debug.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_debug.c
@@ -3204,8 +3204,8 @@ static u32 qed_grc_dump_big_ram(struct qed_hwfn *p_hwfn,
BIT(big_ram->is_256b_bit_offset[dev_data->chip_id]) ? 256
: 128;
- strncpy(type_name, big_ram->instance_name, BIG_RAM_NAME_LEN);
- strncpy(mem_name, big_ram->instance_name, BIG_RAM_NAME_LEN);
+ memcpy(type_name, big_ram->instance_name, BIG_RAM_NAME_LEN);
+ memcpy(mem_name, big_ram->instance_name, BIG_RAM_NAME_LEN);
/* Dump memory header */
offset += qed_grc_dump_mem_hdr(p_hwfn,
@@ -6359,8 +6359,7 @@ static void qed_read_str_from_buf(void *buf, u32 *offset, u32 size, char *dest)
{
const char *source_str = &((const char *)buf)[*offset];
- strncpy(dest, source_str, size);
- dest[size - 1] = '\0';
+ strscpy(dest, source_str, size);
*offset += size;
}
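
strscpy() always NUL-terminates the destination and returns -E2BIG when the source does not fit, which is why the manual dest[size - 1] = '\0' can go away. A small sketch of the pattern; the buffer and name are hypothetical:

#include <linux/printk.h>
#include <linux/string.h>

static void example_copy_name(char *dest, size_t dest_size, const char *src)
{
	/* Unlike strncpy(), strscpy() guarantees termination and reports
	 * truncation through a negative return value.
	 */
	if (strscpy(dest, src, dest_size) < 0)
		pr_warn("string '%s' truncated to %zu bytes\n", src, dest_size);
}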
diff --git a/drivers/net/ethernet/qlogic/qed/qed_devlink.c b/drivers/net/ethernet/qlogic/qed/qed_devlink.c
index be5cc8b79bd5..dad8e617c393 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_devlink.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_devlink.c
@@ -66,12 +66,12 @@ qed_fw_fatal_reporter_dump(struct devlink_health_reporter *reporter,
return err;
}
- err = devlink_fmsg_binary_pair_put(fmsg, "dump_data",
- p_dbg_data_buf, dbg_data_buf_size);
+ devlink_fmsg_binary_pair_put(fmsg, "dump_data", p_dbg_data_buf,
+ dbg_data_buf_size);
vfree(p_dbg_data_buf);
- return err;
+ return 0;
}
static int
diff --git a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
index 95820cf1cd6c..b6b849a079ed 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
@@ -201,21 +201,6 @@ static const char qede_tests_str_arr[QEDE_ETHTOOL_TEST_MAX][ETH_GSTRING_LEN] = {
/* Forced speed capabilities maps */
-struct qede_forced_speed_map {
- u32 speed;
- __ETHTOOL_DECLARE_LINK_MODE_MASK(caps);
-
- const u32 *cap_arr;
- u32 arr_size;
-};
-
-#define QEDE_FORCED_SPEED_MAP(value) \
-{ \
- .speed = SPEED_##value, \
- .cap_arr = qede_forced_speed_##value, \
- .arr_size = ARRAY_SIZE(qede_forced_speed_##value), \
-}
-
static const u32 qede_forced_speed_1000[] __initconst = {
ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
ETHTOOL_LINK_MODE_1000baseKX_Full_BIT,
@@ -263,28 +248,21 @@ static const u32 qede_forced_speed_100000[] __initconst = {
ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT,
};
-static struct qede_forced_speed_map qede_forced_speed_maps[] __ro_after_init = {
- QEDE_FORCED_SPEED_MAP(1000),
- QEDE_FORCED_SPEED_MAP(10000),
- QEDE_FORCED_SPEED_MAP(20000),
- QEDE_FORCED_SPEED_MAP(25000),
- QEDE_FORCED_SPEED_MAP(40000),
- QEDE_FORCED_SPEED_MAP(50000),
- QEDE_FORCED_SPEED_MAP(100000),
+static struct ethtool_forced_speed_map
+qede_forced_speed_maps[] __ro_after_init = {
+ ETHTOOL_FORCED_SPEED_MAP(qede_forced_speed, 1000),
+ ETHTOOL_FORCED_SPEED_MAP(qede_forced_speed, 10000),
+ ETHTOOL_FORCED_SPEED_MAP(qede_forced_speed, 20000),
+ ETHTOOL_FORCED_SPEED_MAP(qede_forced_speed, 25000),
+ ETHTOOL_FORCED_SPEED_MAP(qede_forced_speed, 40000),
+ ETHTOOL_FORCED_SPEED_MAP(qede_forced_speed, 50000),
+ ETHTOOL_FORCED_SPEED_MAP(qede_forced_speed, 100000),
};
void __init qede_forced_speed_maps_init(void)
{
- struct qede_forced_speed_map *map;
- u32 i;
-
- for (i = 0; i < ARRAY_SIZE(qede_forced_speed_maps); i++) {
- map = qede_forced_speed_maps + i;
-
- linkmode_set_bit_array(map->cap_arr, map->arr_size, map->caps);
- map->cap_arr = NULL;
- map->arr_size = 0;
- }
+ ethtool_forced_speed_maps_init(qede_forced_speed_maps,
+ ARRAY_SIZE(qede_forced_speed_maps));
}
/* Ethtool callbacks */
@@ -564,8 +542,8 @@ static int qede_set_link_ksettings(struct net_device *dev,
const struct ethtool_link_ksettings *cmd)
{
const struct ethtool_link_settings *base = &cmd->base;
+ const struct ethtool_forced_speed_map *map;
struct qede_dev *edev = netdev_priv(dev);
- const struct qede_forced_speed_map *map;
struct qed_link_output current_link;
struct qed_link_params params;
u32 i;
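The qede hunks above retire the driver-local qede_forced_speed_map struct and macro in favour of the shared ethtool_forced_speed_map helpers used elsewhere in this pull. A minimal sketch of how another driver might adopt the same pattern; the speed table contents and the drv_ names are illustrative, and the header locations are assumed rather than taken from the patch:

#include <linux/cache.h>
#include <linux/ethtool.h>
#include <linux/init.h>
#include <linux/kernel.h>

/* per-speed tables of link-mode bits, consumed once at init time */
static const u32 drv_forced_speed_10000[] __initconst = {
        ETHTOOL_LINK_MODE_10000baseT_Full_BIT,
        ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
};

static struct ethtool_forced_speed_map
drv_forced_speed_maps[] __ro_after_init = {
        ETHTOOL_FORCED_SPEED_MAP(drv_forced_speed, 10000),
};

static void __init drv_forced_speed_maps_init(void)
{
        /* folds each cap_arr into the linkmode bitmap and clears the
         * scratch pointers, replacing the old hand-rolled loop */
        ethtool_forced_speed_maps_init(drv_forced_speed_maps,
                                       ARRAY_SIZE(drv_forced_speed_maps));
}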
diff --git a/drivers/net/ethernet/qualcomm/emac/emac.c b/drivers/net/ethernet/qualcomm/emac/emac.c
index 19bb16daf4e7..3270df72541b 100644
--- a/drivers/net/ethernet/qualcomm/emac/emac.c
+++ b/drivers/net/ethernet/qualcomm/emac/emac.c
@@ -718,7 +718,7 @@ err_undo_netdev:
return ret;
}
-static int emac_remove(struct platform_device *pdev)
+static void emac_remove(struct platform_device *pdev)
{
struct net_device *netdev = dev_get_drvdata(&pdev->dev);
struct emac_adapter *adpt = netdev_priv(netdev);
@@ -742,8 +742,6 @@ static int emac_remove(struct platform_device *pdev)
iounmap(adpt->phy.base);
free_netdev(netdev);
-
- return 0;
}
static void emac_shutdown(struct platform_device *pdev)
@@ -762,7 +760,7 @@ static void emac_shutdown(struct platform_device *pdev)
static struct platform_driver emac_platform_driver = {
.probe = emac_probe,
- .remove = emac_remove,
+ .remove_new = emac_remove,
.driver = {
.name = "qcom-emac",
.of_match_table = emac_dt_match,
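This is the first of many conversions in this pull to the void-returning platform-driver remove callback: the int returned by .remove was effectively ignored by the driver core, so drivers move to .remove_new, which cannot pretend an error was handled. A minimal sketch of the pattern, with hypothetical driver names:

#include <linux/module.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
        /* acquire resources, register the netdev, ... */
        return 0;
}

static void example_remove(struct platform_device *pdev)
{
        /* undo everything from probe; nothing useful can be returned,
         * the device is going away regardless */
}

static struct platform_driver example_driver = {
        .probe          = example_probe,
        .remove_new     = example_remove,       /* was: .remove, returning int */
        .driver = {
                .name = "example",
        },
};
module_platform_driver(example_driver);
MODULE_LICENSE("GPL");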
diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
index 361b90007148..a987defb575c 100644
--- a/drivers/net/ethernet/realtek/r8169_main.c
+++ b/drivers/net/ethernet/realtek/r8169_main.c
@@ -4596,7 +4596,11 @@ static void r8169_phylink_handler(struct net_device *ndev)
if (netif_carrier_ok(ndev)) {
rtl_link_chg_patch(tp);
pm_request_resume(d);
+ netif_wake_queue(tp->dev);
} else {
+ /* In a few cases rx is broken after link-down otherwise */

+ if (rtl_is_8125(tp))
+ rtl_reset_work(tp);
pm_runtime_idle(d);
}
diff --git a/drivers/net/ethernet/renesas/Kconfig b/drivers/net/ethernet/renesas/Kconfig
index 3ceb57408ed0..8ef5b0241e64 100644
--- a/drivers/net/ethernet/renesas/Kconfig
+++ b/drivers/net/ethernet/renesas/Kconfig
@@ -1,6 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
#
-# Renesas device configuration
+# Renesas network device configuration
#
config NET_VENDOR_RENESAS
@@ -25,9 +25,6 @@ config SH_ETH
select PHYLIB
help
Renesas SuperH Ethernet device driver.
- This driver supporting CPUs are:
- - SH7619, SH7710, SH7712, SH7724, SH7734, SH7763, SH7757,
- R8A7740, R8A774x, R8A777x and R8A779x.
config RAVB
tristate "Renesas Ethernet AVB support"
@@ -39,8 +36,6 @@ config RAVB
select PHYLIB
help
Renesas Ethernet AVB device driver.
- This driver supports the following SoCs:
- - R8A779x.
config RENESAS_ETHER_SWITCH
tristate "Renesas Ethernet Switch support"
@@ -51,7 +46,5 @@ config RENESAS_ETHER_SWITCH
select PHYLINK
help
Renesas Ethernet Switch device driver.
- This driver supports the following SoCs:
- - R8A779Fx.
endif # NET_VENDOR_RENESAS
diff --git a/drivers/net/ethernet/renesas/Makefile b/drivers/net/ethernet/renesas/Makefile
index 592005893464..e8fd85b5fe8f 100644
--- a/drivers/net/ethernet/renesas/Makefile
+++ b/drivers/net/ethernet/renesas/Makefile
@@ -1,14 +1,12 @@
# SPDX-License-Identifier: GPL-2.0
#
-# Makefile for the Renesas device drivers.
+# Makefile for the Renesas network device drivers
#
obj-$(CONFIG_SH_ETH) += sh_eth.o
ravb-objs := ravb_main.o ravb_ptp.o
-
obj-$(CONFIG_RAVB) += ravb.o
rswitch_drv-objs := rswitch.o rcar_gen4_ptp.o
-
obj-$(CONFIG_RENESAS_ETHER_SWITCH) += rswitch_drv.o
diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index 0ef0b88b7145..c70cff80cc99 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -2880,7 +2880,7 @@ out_release:
return error;
}
-static int ravb_remove(struct platform_device *pdev)
+static void ravb_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct ravb_private *priv = netdev_priv(ndev);
@@ -2907,8 +2907,6 @@ static int ravb_remove(struct platform_device *pdev)
reset_control_assert(priv->rstc);
free_netdev(ndev);
platform_set_drvdata(pdev, NULL);
-
- return 0;
}
static int ravb_wol_setup(struct net_device *ndev)
@@ -3046,7 +3044,7 @@ static const struct dev_pm_ops ravb_dev_pm_ops = {
static struct platform_driver ravb_driver = {
.probe = ravb_probe,
- .remove = ravb_remove,
+ .remove_new = ravb_remove,
.driver = {
.name = "ravb",
.pm = &ravb_dev_pm_ops,
diff --git a/drivers/net/ethernet/renesas/rswitch.c b/drivers/net/ethernet/renesas/rswitch.c
index 0fc0b6bea753..43a7795d6591 100644
--- a/drivers/net/ethernet/renesas/rswitch.c
+++ b/drivers/net/ethernet/renesas/rswitch.c
@@ -17,6 +17,7 @@
#include <linux/of_net.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>
+#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/rtnetlink.h>
#include <linux/slab.h>
@@ -1315,6 +1316,7 @@ static int rswitch_phy_device_init(struct rswitch_device *rdev)
if (!phydev)
goto out;
__set_bit(rdev->etha->phy_interface, phydev->host_interfaces);
+ phydev->mac_managed_pm = true;
phydev = of_phy_connect(rdev->ndev, phy, rswitch_adjust_link, 0,
rdev->etha->phy_interface);
@@ -1405,7 +1407,8 @@ static void rswitch_ether_port_deinit_one(struct rswitch_device *rdev)
static int rswitch_ether_port_init_all(struct rswitch_private *priv)
{
- int i, err;
+ unsigned int i;
+ int err;
rswitch_for_each_enabled_port(priv, i) {
err = rswitch_ether_port_init_one(priv->rdev[i]);
@@ -1786,7 +1789,8 @@ static void rswitch_device_free(struct rswitch_private *priv, int index)
static int rswitch_init(struct rswitch_private *priv)
{
- int i, err;
+ unsigned int i;
+ int err;
for (i = 0; i < RSWITCH_NUM_PORTS; i++)
rswitch_etha_init(priv, i);
@@ -1816,7 +1820,7 @@ static int rswitch_init(struct rswitch_private *priv)
for (i = 0; i < RSWITCH_NUM_PORTS; i++) {
err = rswitch_device_alloc(priv, i);
if (err < 0) {
- for (i--; i >= 0; i--)
+ for (; i-- > 0; )
rswitch_device_free(priv, i);
goto err_device_alloc;
}
@@ -1959,7 +1963,7 @@ static int renesas_eth_sw_probe(struct platform_device *pdev)
static void rswitch_deinit(struct rswitch_private *priv)
{
- int i;
+ unsigned int i;
rswitch_gwca_hw_deinit(priv);
rcar_gen4_ptp_unregister(priv->ptp_priv);
@@ -1981,7 +1985,7 @@ static void rswitch_deinit(struct rswitch_private *priv)
rswitch_clock_disable(priv);
}
-static int renesas_eth_sw_remove(struct platform_device *pdev)
+static void renesas_eth_sw_remove(struct platform_device *pdev)
{
struct rswitch_private *priv = platform_get_drvdata(pdev);
@@ -1991,15 +1995,54 @@ static int renesas_eth_sw_remove(struct platform_device *pdev)
pm_runtime_disable(&pdev->dev);
platform_set_drvdata(pdev, NULL);
+}
+
+static int renesas_eth_sw_suspend(struct device *dev)
+{
+ struct rswitch_private *priv = dev_get_drvdata(dev);
+ struct net_device *ndev;
+ unsigned int i;
+
+ rswitch_for_each_enabled_port(priv, i) {
+ ndev = priv->rdev[i]->ndev;
+ if (netif_running(ndev)) {
+ netif_device_detach(ndev);
+ rswitch_stop(ndev);
+ }
+ if (priv->rdev[i]->serdes->init_count)
+ phy_exit(priv->rdev[i]->serdes);
+ }
+
+ return 0;
+}
+
+static int renesas_eth_sw_resume(struct device *dev)
+{
+ struct rswitch_private *priv = dev_get_drvdata(dev);
+ struct net_device *ndev;
+ unsigned int i;
+
+ rswitch_for_each_enabled_port(priv, i) {
+ phy_init(priv->rdev[i]->serdes);
+ ndev = priv->rdev[i]->ndev;
+ if (netif_running(ndev)) {
+ rswitch_open(ndev);
+ netif_device_attach(ndev);
+ }
+ }
return 0;
}
+static DEFINE_SIMPLE_DEV_PM_OPS(renesas_eth_sw_pm_ops, renesas_eth_sw_suspend,
+ renesas_eth_sw_resume);
+
static struct platform_driver renesas_eth_sw_driver_platform = {
.probe = renesas_eth_sw_probe,
- .remove = renesas_eth_sw_remove,
+ .remove_new = renesas_eth_sw_remove,
.driver = {
.name = "renesas_eth_sw",
+ .pm = pm_sleep_ptr(&renesas_eth_sw_pm_ops),
.of_match_table = renesas_eth_sw_of_table,
}
};
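Beyond the remove_new conversion, the rswitch hunk above adds system sleep support with DEFINE_SIMPLE_DEV_PM_OPS() and pm_sleep_ptr(), so the callbacks can be wired up without #ifdef CONFIG_PM_SLEEP or __maybe_unused annotations; pm_sleep_ptr() resolves to NULL when sleep support is compiled out. A condensed sketch of that wiring, with hypothetical names:

#include <linux/platform_device.h>
#include <linux/pm.h>

static int example_suspend(struct device *dev)
{
        /* detach the interface and quiesce the hardware */
        return 0;
}

static int example_resume(struct device *dev)
{
        /* re-init the hardware and re-attach the interface */
        return 0;
}

static DEFINE_SIMPLE_DEV_PM_OPS(example_pm_ops,
                                example_suspend, example_resume);

static struct platform_driver example_driver = {
        /* probe/remove omitted for brevity */
        .driver = {
                .name = "example",
                /* NULL when CONFIG_PM_SLEEP is disabled */
                .pm = pm_sleep_ptr(&example_pm_ops),
        },
};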
diff --git a/drivers/net/ethernet/renesas/rswitch.h b/drivers/net/ethernet/renesas/rswitch.h
index 04f49a7a5843..27c9d3872c0e 100644
--- a/drivers/net/ethernet/renesas/rswitch.h
+++ b/drivers/net/ethernet/renesas/rswitch.h
@@ -20,7 +20,7 @@
else
#define rswitch_for_each_enabled_port_continue_reverse(priv, i) \
- for (i--; i >= 0; i--) \
+ for (; i-- > 0; ) \
if (priv->rdev[i]->disabled) \
continue; \
else
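Several rswitch loop counters become unsigned in these hunks, so the usual "for (i--; i >= 0; i--)" unwind would never terminate (an unsigned i is always >= 0). The "for (; i-- > 0; )" form tests before decrementing and works for either signedness. A tiny sketch of the unwind pattern, assuming a hypothetical per-port init/teardown pair:

struct example_priv;
int init_one_port(struct example_priv *priv, unsigned int i);
void deinit_one_port(struct example_priv *priv, unsigned int i);

/* undo ports [0, i) after port i failed to initialise */
static int init_all_ports(struct example_priv *priv, unsigned int nports)
{
        unsigned int i;
        int err;

        for (i = 0; i < nports; i++) {
                err = init_one_port(priv, i);
                if (err) {
                        /* i-- > 0 tests i, then decrements; the body runs
                         * for i-1 ... 0 and the loop stops cleanly even
                         * though i is unsigned */
                        for (; i-- > 0; )
                                deinit_one_port(priv, i);
                        return err;
                }
        }
        return 0;
}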
diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
index 274ea16c0a1f..475e1e8c1d35 100644
--- a/drivers/net/ethernet/renesas/sh_eth.c
+++ b/drivers/net/ethernet/renesas/sh_eth.c
@@ -3431,7 +3431,7 @@ out_release:
return ret;
}
-static int sh_eth_drv_remove(struct platform_device *pdev)
+static void sh_eth_drv_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct sh_eth_private *mdp = netdev_priv(ndev);
@@ -3441,8 +3441,6 @@ static int sh_eth_drv_remove(struct platform_device *pdev)
sh_mdio_release(mdp);
pm_runtime_disable(&pdev->dev);
free_netdev(ndev);
-
- return 0;
}
#ifdef CONFIG_PM
@@ -3562,7 +3560,7 @@ MODULE_DEVICE_TABLE(platform, sh_eth_id_table);
static struct platform_driver sh_eth_driver = {
.probe = sh_eth_drv_probe,
- .remove = sh_eth_drv_remove,
+ .remove_new = sh_eth_drv_remove,
.id_table = sh_eth_id_table,
.driver = {
.name = CARDNAME,
diff --git a/drivers/net/ethernet/samsung/sxgbe/sxgbe_platform.c b/drivers/net/ethernet/samsung/sxgbe/sxgbe_platform.c
index fb59ff94509a..e6e130dbe1de 100644
--- a/drivers/net/ethernet/samsung/sxgbe/sxgbe_platform.c
+++ b/drivers/net/ethernet/samsung/sxgbe/sxgbe_platform.c
@@ -169,13 +169,11 @@ err_out:
* Description: this function calls the main driver code to free the net
* resources and calls the platform's hook to release the resources (e.g. mem).
*/
-static int sxgbe_platform_remove(struct platform_device *pdev)
+static void sxgbe_platform_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
sxgbe_drv_remove(ndev);
-
- return 0;
}
#ifdef CONFIG_PM
@@ -226,7 +224,7 @@ MODULE_DEVICE_TABLE(of, sxgbe_dt_ids);
static struct platform_driver sxgbe_platform_driver = {
.probe = sxgbe_platform_probe,
- .remove = sxgbe_platform_remove,
+ .remove_new = sxgbe_platform_remove,
.driver = {
.name = SXGBE_RESOURCE_NAME,
.pm = &sxgbe_platform_pm_ops,
diff --git a/drivers/net/ethernet/seeq/sgiseeq.c b/drivers/net/ethernet/seeq/sgiseeq.c
index 96065dfc747b..76356dadf233 100644
--- a/drivers/net/ethernet/seeq/sgiseeq.c
+++ b/drivers/net/ethernet/seeq/sgiseeq.c
@@ -819,7 +819,7 @@ err_out:
return err;
}
-static int sgiseeq_remove(struct platform_device *pdev)
+static void sgiseeq_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct sgiseeq_private *sp = netdev_priv(dev);
@@ -828,13 +828,11 @@ static int sgiseeq_remove(struct platform_device *pdev)
dma_free_noncoherent(&pdev->dev, sizeof(*sp->srings), sp->srings,
sp->srings_dma, DMA_BIDIRECTIONAL);
free_netdev(dev);
-
- return 0;
}
static struct platform_driver sgiseeq_driver = {
.probe = sgiseeq_probe,
- .remove = sgiseeq_remove,
+ .remove_new = sgiseeq_remove,
.driver = {
.name = "sgiseeq",
}
diff --git a/drivers/net/ethernet/sfc/efx_channels.c b/drivers/net/ethernet/sfc/efx_channels.c
index 8d2d7ea2ebef..c9e17a8208a9 100644
--- a/drivers/net/ethernet/sfc/efx_channels.c
+++ b/drivers/net/ethernet/sfc/efx_channels.c
@@ -1260,7 +1260,7 @@ static int efx_poll(struct napi_struct *napi, int budget)
spent = efx_process_channel(channel, budget);
- xdp_do_flush_map();
+ xdp_do_flush();
if (spent < budget) {
if (efx_channel_has_rx_queue(channel) &&
diff --git a/drivers/net/ethernet/sfc/mae.c b/drivers/net/ethernet/sfc/mae.c
index c3e2b4a21d10..10709d828a63 100644
--- a/drivers/net/ethernet/sfc/mae.c
+++ b/drivers/net/ethernet/sfc/mae.c
@@ -1291,10 +1291,11 @@ int efx_mae_alloc_action_set(struct efx_nic *efx, struct efx_tc_action_set *act)
size_t outlen;
int rc;
- MCDI_POPULATE_DWORD_4(inbuf, MAE_ACTION_SET_ALLOC_IN_FLAGS,
+ MCDI_POPULATE_DWORD_5(inbuf, MAE_ACTION_SET_ALLOC_IN_FLAGS,
MAE_ACTION_SET_ALLOC_IN_VLAN_PUSH, act->vlan_push,
MAE_ACTION_SET_ALLOC_IN_VLAN_POP, act->vlan_pop,
MAE_ACTION_SET_ALLOC_IN_DECAP, act->decap,
+ MAE_ACTION_SET_ALLOC_IN_DO_NAT, act->do_nat,
MAE_ACTION_SET_ALLOC_IN_DO_DECR_IP_TTL,
act->do_ttl_dec);
@@ -1705,8 +1706,10 @@ static int efx_mae_insert_lhs_outer_rule(struct efx_nic *efx,
/* action */
act = &rule->lhs_act;
- MCDI_SET_DWORD(inbuf, MAE_OUTER_RULE_INSERT_IN_ENCAP_TYPE,
- MAE_MCDI_ENCAP_TYPE_NONE);
+ rc = efx_mae_encap_type_to_mae_type(act->tun_type);
+ if (rc < 0)
+ return rc;
+ MCDI_SET_DWORD(inbuf, MAE_OUTER_RULE_INSERT_IN_ENCAP_TYPE, rc);
/* We always inhibit CT lookup on TCP_INTERESTING_FLAGS, since the
* SW path needs to process the packet to update the conntrack tables
* on connection establishment (SYN) or termination (FIN, RST).
@@ -1734,9 +1737,60 @@ static int efx_mae_insert_lhs_outer_rule(struct efx_nic *efx,
return 0;
}
+static int efx_mae_populate_match_criteria(MCDI_DECLARE_STRUCT_PTR(match_crit),
+ const struct efx_tc_match *match);
+
+static int efx_mae_insert_lhs_action_rule(struct efx_nic *efx,
+ struct efx_tc_lhs_rule *rule,
+ u32 prio)
+{
+ MCDI_DECLARE_BUF(inbuf, MC_CMD_MAE_ACTION_RULE_INSERT_IN_LEN(MAE_FIELD_MASK_VALUE_PAIRS_V2_LEN));
+ MCDI_DECLARE_BUF(outbuf, MC_CMD_MAE_ACTION_RULE_INSERT_OUT_LEN);
+ struct efx_tc_lhs_action *act = &rule->lhs_act;
+ MCDI_DECLARE_STRUCT_PTR(match_crit);
+ MCDI_DECLARE_STRUCT_PTR(response);
+ size_t outlen;
+ int rc;
+
+ match_crit = _MCDI_DWORD(inbuf, MAE_ACTION_RULE_INSERT_IN_MATCH_CRITERIA);
+ response = _MCDI_DWORD(inbuf, MAE_ACTION_RULE_INSERT_IN_RESPONSE);
+ MCDI_STRUCT_SET_DWORD(response, MAE_ACTION_RULE_RESPONSE_ASL_ID,
+ MC_CMD_MAE_ACTION_SET_LIST_ALLOC_OUT_ACTION_SET_LIST_ID_NULL);
+ MCDI_STRUCT_SET_DWORD(response, MAE_ACTION_RULE_RESPONSE_AS_ID,
+ MC_CMD_MAE_ACTION_SET_ALLOC_OUT_ACTION_SET_ID_NULL);
+ EFX_POPULATE_DWORD_5(*_MCDI_STRUCT_DWORD(response, MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL),
+ MAE_ACTION_RULE_RESPONSE_DO_CT, !!act->zone,
+ MAE_ACTION_RULE_RESPONSE_DO_RECIRC,
+ act->rid && !act->zone,
+ MAE_ACTION_RULE_RESPONSE_CT_VNI_MODE,
+ MAE_CT_VNI_MODE_ZERO,
+ MAE_ACTION_RULE_RESPONSE_RECIRC_ID,
+ act->rid ? act->rid->fw_id : 0,
+ MAE_ACTION_RULE_RESPONSE_CT_DOMAIN,
+ act->zone ? act->zone->zone : 0);
+ MCDI_STRUCT_SET_DWORD(response, MAE_ACTION_RULE_RESPONSE_COUNTER_ID,
+ act->count ? act->count->cnt->fw_id :
+ MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NULL);
+ MCDI_SET_DWORD(inbuf, MAE_ACTION_RULE_INSERT_IN_PRIO, prio);
+ rc = efx_mae_populate_match_criteria(match_crit, &rule->match);
+ if (rc)
+ return rc;
+
+ rc = efx_mcdi_rpc(efx, MC_CMD_MAE_ACTION_RULE_INSERT, inbuf, sizeof(inbuf),
+ outbuf, sizeof(outbuf), &outlen);
+ if (rc)
+ return rc;
+ if (outlen < sizeof(outbuf))
+ return -EIO;
+ rule->fw_id = MCDI_DWORD(outbuf, MAE_ACTION_RULE_INSERT_OUT_AR_ID);
+ return 0;
+}
+
int efx_mae_insert_lhs_rule(struct efx_nic *efx, struct efx_tc_lhs_rule *rule,
u32 prio)
{
+ if (rule->is_ar)
+ return efx_mae_insert_lhs_action_rule(efx, rule, prio);
return efx_mae_insert_lhs_outer_rule(efx, rule, prio);
}
@@ -1770,6 +1824,8 @@ static int efx_mae_remove_lhs_outer_rule(struct efx_nic *efx,
int efx_mae_remove_lhs_rule(struct efx_nic *efx, struct efx_tc_lhs_rule *rule)
{
+ if (rule->is_ar)
+ return efx_mae_delete_rule(efx, rule->fw_id);
return efx_mae_remove_lhs_outer_rule(efx, rule);
}
diff --git a/drivers/net/ethernet/sfc/mcdi.c b/drivers/net/ethernet/sfc/mcdi.c
index d23da9627338..76578502226e 100644
--- a/drivers/net/ethernet/sfc/mcdi.c
+++ b/drivers/net/ethernet/sfc/mcdi.c
@@ -2205,10 +2205,9 @@ int efx_mcdi_nvram_metadata(struct efx_nic *efx, unsigned int type,
goto out_free;
}
- strncpy(desc,
+ strscpy(desc,
MCDI_PTR(outbuf, NVRAM_METADATA_OUT_DESCRIPTION),
MC_CMD_NVRAM_METADATA_OUT_DESCRIPTION_NUM(outlen));
- desc[MC_CMD_NVRAM_METADATA_OUT_DESCRIPTION_NUM(outlen)] = '\0';
} else {
desc[0] = '\0';
}
diff --git a/drivers/net/ethernet/sfc/ptp.c b/drivers/net/ethernet/sfc/ptp.c
index f54200f03e15..b04fdbb8aece 100644
--- a/drivers/net/ethernet/sfc/ptp.c
+++ b/drivers/net/ethernet/sfc/ptp.c
@@ -108,11 +108,17 @@
#define PTP_MIN_LENGTH 63
#define PTP_ADDR_IPV4 0xe0000181 /* 224.0.1.129 */
-#define PTP_ADDR_IPV6 {0xff, 0x0e, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, \
- 0, 0x01, 0x81} /* ff0e::181 */
+
+/* ff0e::181 */
+static const struct in6_addr ptp_addr_ipv6 = { { {
+ 0xff, 0x0e, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x01, 0x81 } } };
+
+/* 01-1B-19-00-00-00 */
+static const u8 ptp_addr_ether[ETH_ALEN] __aligned(2) = {
+ 0x01, 0x1b, 0x19, 0x00, 0x00, 0x00 };
+
#define PTP_EVENT_PORT 319
#define PTP_GENERAL_PORT 320
-#define PTP_ADDR_ETHER {0x01, 0x1b, 0x19, 0, 0, 0} /* 01-1B-19-00-00-00 */
/* Annoyingly the format of the version numbers is different between
* versions 1 and 2 so it isn't possible to simply look for 1 or 2.
@@ -1296,7 +1302,7 @@ static int efx_ptp_insert_ipv4_filter(struct efx_nic *efx,
static int efx_ptp_insert_ipv6_filter(struct efx_nic *efx,
struct list_head *filter_list,
- struct in6_addr *addr, u16 port,
+ const struct in6_addr *addr, u16 port,
unsigned long expiry)
{
struct efx_filter_spec spec;
@@ -1309,11 +1315,10 @@ static int efx_ptp_insert_ipv6_filter(struct efx_nic *efx,
static int efx_ptp_insert_eth_multicast_filter(struct efx_nic *efx)
{
struct efx_ptp_data *ptp = efx->ptp_data;
- const u8 addr[ETH_ALEN] = PTP_ADDR_ETHER;
struct efx_filter_spec spec;
efx_ptp_init_filter(efx, &spec);
- efx_filter_set_eth_local(&spec, EFX_FILTER_VID_UNSPEC, addr);
+ efx_filter_set_eth_local(&spec, EFX_FILTER_VID_UNSPEC, ptp_addr_ether);
spec.match_flags |= EFX_FILTER_MATCH_ETHER_TYPE;
spec.ether_type = htons(ETH_P_1588);
return efx_ptp_insert_filter(efx, &ptp->rxfilters_mcast, &spec, 0);
@@ -1346,15 +1351,13 @@ static int efx_ptp_insert_multicast_filters(struct efx_nic *efx)
* PTP over IPv6 and Ethernet
*/
if (efx_ptp_use_mac_tx_timestamps(efx)) {
- struct in6_addr ipv6_addr = {{PTP_ADDR_IPV6}};
-
rc = efx_ptp_insert_ipv6_filter(efx, &ptp->rxfilters_mcast,
- &ipv6_addr, PTP_EVENT_PORT, 0);
+ &ptp_addr_ipv6, PTP_EVENT_PORT, 0);
if (rc < 0)
goto fail;
rc = efx_ptp_insert_ipv6_filter(efx, &ptp->rxfilters_mcast,
- &ipv6_addr, PTP_GENERAL_PORT, 0);
+ &ptp_addr_ipv6, PTP_GENERAL_PORT, 0);
if (rc < 0)
goto fail;
@@ -1379,9 +1382,7 @@ static bool efx_ptp_valid_unicast_event_pkt(struct sk_buff *skb)
ip_hdr(skb)->protocol == IPPROTO_UDP &&
udp_hdr(skb)->source == htons(PTP_EVENT_PORT);
} else if (skb->protocol == htons(ETH_P_IPV6)) {
- struct in6_addr mcast_addr = {{PTP_ADDR_IPV6}};
-
- return !ipv6_addr_equal(&ipv6_hdr(skb)->daddr, &mcast_addr) &&
+ return !ipv6_addr_equal(&ipv6_hdr(skb)->daddr, &ptp_addr_ipv6) &&
ipv6_hdr(skb)->nexthdr == IPPROTO_UDP &&
udp_hdr(skb)->source == htons(PTP_EVENT_PORT);
}
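The ptp.c hunk above replaces the PTP_ADDR_IPV6 / PTP_ADDR_ETHER initialiser macros with static const objects, so the addresses are built once and can be passed around by pointer instead of being re-materialised on the stack at each use. A small sketch of the same idea; the address value is the one from the hunk, the helper name is illustrative:

#include <linux/ipv6.h>
#include <linux/types.h>
#include <net/ipv6.h>

/* ff0e::181, built once at compile time */
static const struct in6_addr ptp_primary_mcast = { { {
        0xff, 0x0e, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0x01, 0x81 } } };

static bool is_ptp_primary_daddr(const struct ipv6hdr *ip6)
{
        /* compare against the shared constant rather than a per-call copy */
        return ipv6_addr_equal(&ip6->daddr, &ptp_primary_mcast);
}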
diff --git a/drivers/net/ethernet/sfc/siena/efx_channels.c b/drivers/net/ethernet/sfc/siena/efx_channels.c
index 1776f7f8a7a9..a7346e965bfe 100644
--- a/drivers/net/ethernet/sfc/siena/efx_channels.c
+++ b/drivers/net/ethernet/sfc/siena/efx_channels.c
@@ -1285,7 +1285,7 @@ static int efx_poll(struct napi_struct *napi, int budget)
spent = efx_process_channel(channel, budget);
- xdp_do_flush_map();
+ xdp_do_flush();
if (spent < budget) {
if (efx_channel_has_rx_queue(channel) &&
diff --git a/drivers/net/ethernet/sfc/tc.c b/drivers/net/ethernet/sfc/tc.c
index 30ebef88248d..82e8891a619a 100644
--- a/drivers/net/ethernet/sfc/tc.c
+++ b/drivers/net/ethernet/sfc/tc.c
@@ -642,6 +642,15 @@ static int efx_tc_flower_record_encap_match(struct efx_nic *efx,
return -EEXIST;
}
break;
+ case EFX_TC_EM_PSEUDO_OR:
+ /* old EM corresponds to an OR that has to be unique
+ * (it must not overlap with any other OR, whether
+ * direct-EM or pseudo).
+ */
+ NL_SET_ERR_MSG_FMT_MOD(extack,
+ "%s encap match conflicts with existing pseudo(OR) entry",
+ em_type ? "Pseudo" : "Direct");
+ return -EEXIST;
default: /* Unrecognised pseudo-type. Just say no */
NL_SET_ERR_MSG_FMT_MOD(extack,
"%s encap match conflicts with existing pseudo(%d) entry",
@@ -872,6 +881,93 @@ static bool efx_tc_rule_is_lhs_rule(struct flow_rule *fr,
return false;
}
+/* A foreign LHS rule has matches on enc_ keys at the TC layer (including an
+ * implied match on enc_ip_proto UDP). Translate these into non-enc_ keys,
+ * so that we can use the same MAE machinery as local LHS rules (and so that
+ * the lhs_rules entries have uniform semantics). It may seem odd to do it
+ * this way round, given that the corresponding fields in the MAE MCDIs are
+ * all ENC_, but (a) we don't have enc_L2 or enc_ip_proto in struct
+ * efx_tc_match_fields and (b) semantically an LHS rule doesn't have inner
+ * fields so it's just matching on *the* header rather than the outer header.
+ * Make sure that the non-enc_ keys were not already being matched on, as that
+ * would imply a rule that needed a triple lookup. (Hardware can do that,
+ * with OR-AR-CT-AR, but it halves packet rate so we avoid it where possible;
+ * see efx_tc_flower_flhs_needs_ar().)
+ */
+static int efx_tc_flower_translate_flhs_match(struct efx_tc_match *match)
+{
+ int rc = 0;
+
+#define COPY_MASK_AND_VALUE(_key, _ekey) ({ \
+ if (match->mask._key) { \
+ rc = -EOPNOTSUPP; \
+ } else { \
+ match->mask._key = match->mask._ekey; \
+ match->mask._ekey = 0; \
+ match->value._key = match->value._ekey; \
+ match->value._ekey = 0; \
+ } \
+ rc; \
+})
+#define COPY_FROM_ENC(_key) COPY_MASK_AND_VALUE(_key, enc_##_key)
+ if (match->mask.ip_proto)
+ return -EOPNOTSUPP;
+ match->mask.ip_proto = ~0;
+ match->value.ip_proto = IPPROTO_UDP;
+ if (COPY_FROM_ENC(src_ip) || COPY_FROM_ENC(dst_ip))
+ return rc;
+#ifdef CONFIG_IPV6
+ if (!ipv6_addr_any(&match->mask.src_ip6))
+ return -EOPNOTSUPP;
+ match->mask.src_ip6 = match->mask.enc_src_ip6;
+ memset(&match->mask.enc_src_ip6, 0, sizeof(struct in6_addr));
+ if (!ipv6_addr_any(&match->mask.dst_ip6))
+ return -EOPNOTSUPP;
+ match->mask.dst_ip6 = match->mask.enc_dst_ip6;
+ memset(&match->mask.enc_dst_ip6, 0, sizeof(struct in6_addr));
+#endif
+ if (COPY_FROM_ENC(ip_tos) || COPY_FROM_ENC(ip_ttl))
+ return rc;
+ /* should really copy enc_ip_frag but we don't have that in
+ * parse_match yet
+ */
+ if (COPY_MASK_AND_VALUE(l4_sport, enc_sport) ||
+ COPY_MASK_AND_VALUE(l4_dport, enc_dport))
+ return rc;
+ return 0;
+#undef COPY_FROM_ENC
+#undef COPY_MASK_AND_VALUE
+}
+
+/* If a foreign LHS rule wants to match on keys that are only available after
+ * encap header identification and parsing, then it can't be done in the Outer
+ * Rule lookup, because that lookup determines the encap type used to parse
+ * beyond the outer headers. Thus, such rules must use the OR-AR-CT-AR lookup
+ * sequence, with an EM (struct efx_tc_encap_match) in the OR step.
+ * Return true iff the passed match requires this.
+ */
+static bool efx_tc_flower_flhs_needs_ar(struct efx_tc_match *match)
+{
+ /* matches on inner-header keys can't be done in OR */
+ return match->mask.eth_proto ||
+ match->mask.vlan_tci[0] || match->mask.vlan_tci[1] ||
+ match->mask.vlan_proto[0] || match->mask.vlan_proto[1] ||
+ memchr_inv(match->mask.eth_saddr, 0, ETH_ALEN) ||
+ memchr_inv(match->mask.eth_daddr, 0, ETH_ALEN) ||
+ match->mask.ip_proto ||
+ match->mask.ip_tos || match->mask.ip_ttl ||
+ match->mask.src_ip || match->mask.dst_ip ||
+#ifdef CONFIG_IPV6
+ !ipv6_addr_any(&match->mask.src_ip6) ||
+ !ipv6_addr_any(&match->mask.dst_ip6) ||
+#endif
+ match->mask.ip_frag || match->mask.ip_firstfrag ||
+ match->mask.l4_sport || match->mask.l4_dport ||
+ match->mask.tcp_flags ||
+ /* nor can VNI */
+ match->mask.enc_keyid;
+}
+
static int efx_tc_flower_handle_lhs_actions(struct efx_nic *efx,
struct flow_cls_offload *tc,
struct flow_rule *fr,
@@ -882,9 +978,12 @@ static int efx_tc_flower_handle_lhs_actions(struct efx_nic *efx,
struct netlink_ext_ack *extack = tc->common.extack;
struct efx_tc_lhs_action *act = &rule->lhs_act;
const struct flow_action_entry *fa;
+ enum efx_tc_counter_type ctype;
bool pipe = true;
int i;
+ ctype = rule->is_ar ? EFX_TC_COUNTER_TYPE_AR : EFX_TC_COUNTER_TYPE_OR;
+
flow_action_for_each(i, fa, &fr->action) {
struct efx_tc_ct_zone *ct_zone;
struct efx_tc_recirc_id *rid;
@@ -917,7 +1016,7 @@ static int efx_tc_flower_handle_lhs_actions(struct efx_nic *efx,
return -EOPNOTSUPP;
}
cnt = efx_tc_flower_get_counter_index(efx, tc->cookie,
- EFX_TC_COUNTER_TYPE_OR);
+ ctype);
if (IS_ERR(cnt)) {
NL_SET_ERR_MSG_MOD(extack, "Failed to obtain a counter");
return PTR_ERR(cnt);
@@ -1354,6 +1453,222 @@ static int efx_tc_incomplete_mangle(struct efx_tc_mangler_state *mung,
return 0;
}
+static int efx_tc_flower_replace_foreign_lhs_ar(struct efx_nic *efx,
+ struct flow_cls_offload *tc,
+ struct flow_rule *fr,
+ struct efx_tc_match *match,
+ struct net_device *net_dev)
+{
+ struct netlink_ext_ack *extack = tc->common.extack;
+ struct efx_tc_lhs_rule *rule, *old;
+ enum efx_encap_type type;
+ int rc;
+
+ type = efx_tc_indr_netdev_type(net_dev);
+ if (type == EFX_ENCAP_TYPE_NONE) {
+ NL_SET_ERR_MSG_MOD(extack, "Egress encap match on unsupported tunnel device");
+ return -EOPNOTSUPP;
+ }
+
+ rc = efx_mae_check_encap_type_supported(efx, type);
+ if (rc) {
+ NL_SET_ERR_MSG_FMT_MOD(extack,
+ "Firmware reports no support for %s encap match",
+ efx_tc_encap_type_name(type));
+ return rc;
+ }
+ /* This is an Action Rule, so it needs a separate Encap Match in the
+ * Outer Rule table. Insert that now.
+ */
+ rc = efx_tc_flower_record_encap_match(efx, match, type,
+ EFX_TC_EM_DIRECT, 0, 0, extack);
+ if (rc)
+ return rc;
+
+ match->mask.recirc_id = 0xff;
+ if (match->mask.ct_state_trk && match->value.ct_state_trk) {
+ NL_SET_ERR_MSG_MOD(extack, "LHS rule can never match +trk");
+ rc = -EOPNOTSUPP;
+ goto release_encap_match;
+ }
+ /* LHS rules are always -trk, so we don't need to match on that */
+ match->mask.ct_state_trk = 0;
+ match->value.ct_state_trk = 0;
+ /* We must inhibit match on TCP SYN/FIN/RST, so that SW can see
+ * the packet and update the conntrack table.
+ * Outer Rules will do that with CT_TCP_FLAGS_INHIBIT, but Action
+ * Rules don't have that; instead they support matching on
+ * TCP_SYN_FIN_RST (aka TCP_INTERESTING_FLAGS), so use that.
+ * This is only strictly needed if there will be a DO_CT action,
+ * which we don't know yet, but typically there will be and it's
+ * simpler not to bother checking here.
+ */
+ match->mask.tcp_syn_fin_rst = true;
+
+ rc = efx_mae_match_check_caps(efx, &match->mask, extack);
+ if (rc)
+ goto release_encap_match;
+
+ rule = kzalloc(sizeof(*rule), GFP_USER);
+ if (!rule) {
+ rc = -ENOMEM;
+ goto release_encap_match;
+ }
+ rule->cookie = tc->cookie;
+ rule->is_ar = true;
+ old = rhashtable_lookup_get_insert_fast(&efx->tc->lhs_rule_ht,
+ &rule->linkage,
+ efx_tc_lhs_rule_ht_params);
+ if (old) {
+ netif_dbg(efx, drv, efx->net_dev,
+ "Already offloaded rule (cookie %lx)\n", tc->cookie);
+ rc = -EEXIST;
+ NL_SET_ERR_MSG_MOD(extack, "Rule already offloaded");
+ goto release;
+ }
+
+ /* Parse actions */
+ rc = efx_tc_flower_handle_lhs_actions(efx, tc, fr, net_dev, rule);
+ if (rc)
+ goto release;
+
+ rule->match = *match;
+ rule->lhs_act.tun_type = type;
+
+ rc = efx_mae_insert_lhs_rule(efx, rule, EFX_TC_PRIO_TC);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to insert rule in hw");
+ goto release;
+ }
+ netif_dbg(efx, drv, efx->net_dev,
+ "Successfully parsed lhs rule (cookie %lx)\n",
+ tc->cookie);
+ return 0;
+
+release:
+ efx_tc_flower_release_lhs_actions(efx, &rule->lhs_act);
+ if (!old)
+ rhashtable_remove_fast(&efx->tc->lhs_rule_ht, &rule->linkage,
+ efx_tc_lhs_rule_ht_params);
+ kfree(rule);
+release_encap_match:
+ if (match->encap)
+ efx_tc_flower_release_encap_match(efx, match->encap);
+ return rc;
+}
+
+static int efx_tc_flower_replace_foreign_lhs(struct efx_nic *efx,
+ struct flow_cls_offload *tc,
+ struct flow_rule *fr,
+ struct efx_tc_match *match,
+ struct net_device *net_dev)
+{
+ struct netlink_ext_ack *extack = tc->common.extack;
+ struct efx_tc_lhs_rule *rule, *old;
+ enum efx_encap_type type;
+ int rc;
+
+ if (tc->common.chain_index) {
+ NL_SET_ERR_MSG_MOD(extack, "LHS rule only allowed in chain 0");
+ return -EOPNOTSUPP;
+ }
+
+ if (!efx_tc_match_is_encap(&match->mask)) {
+ /* This is not a tunnel decap rule, ignore it */
+ netif_dbg(efx, drv, efx->net_dev, "Ignoring foreign LHS filter without encap match\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (efx_tc_flower_flhs_needs_ar(match))
+ return efx_tc_flower_replace_foreign_lhs_ar(efx, tc, fr, match,
+ net_dev);
+
+ type = efx_tc_indr_netdev_type(net_dev);
+ if (type == EFX_ENCAP_TYPE_NONE) {
+ NL_SET_ERR_MSG_MOD(extack, "Egress encap match on unsupported tunnel device\n");
+ return -EOPNOTSUPP;
+ }
+
+ rc = efx_mae_check_encap_type_supported(efx, type);
+ if (rc) {
+ NL_SET_ERR_MSG_FMT_MOD(extack,
+ "Firmware reports no support for %s encap match",
+ efx_tc_encap_type_name(type));
+ return rc;
+ }
+ /* Reserve the outer tuple with a pseudo Encap Match */
+ rc = efx_tc_flower_record_encap_match(efx, match, type,
+ EFX_TC_EM_PSEUDO_OR, 0, 0,
+ extack);
+ if (rc)
+ return rc;
+
+ if (match->mask.ct_state_trk && match->value.ct_state_trk) {
+ NL_SET_ERR_MSG_MOD(extack, "LHS rule can never match +trk");
+ rc = -EOPNOTSUPP;
+ goto release_encap_match;
+ }
+ /* LHS rules are always -trk, so we don't need to match on that */
+ match->mask.ct_state_trk = 0;
+ match->value.ct_state_trk = 0;
+
+ rc = efx_tc_flower_translate_flhs_match(match);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack, "LHS rule cannot match on inner fields");
+ goto release_encap_match;
+ }
+
+ rc = efx_mae_match_check_caps_lhs(efx, &match->mask, extack);
+ if (rc)
+ goto release_encap_match;
+
+ rule = kzalloc(sizeof(*rule), GFP_USER);
+ if (!rule) {
+ rc = -ENOMEM;
+ goto release_encap_match;
+ }
+ rule->cookie = tc->cookie;
+ old = rhashtable_lookup_get_insert_fast(&efx->tc->lhs_rule_ht,
+ &rule->linkage,
+ efx_tc_lhs_rule_ht_params);
+ if (old) {
+ netif_dbg(efx, drv, efx->net_dev,
+ "Already offloaded rule (cookie %lx)\n", tc->cookie);
+ rc = -EEXIST;
+ NL_SET_ERR_MSG_MOD(extack, "Rule already offloaded");
+ goto release;
+ }
+
+ /* Parse actions */
+ rc = efx_tc_flower_handle_lhs_actions(efx, tc, fr, net_dev, rule);
+ if (rc)
+ goto release;
+
+ rule->match = *match;
+ rule->lhs_act.tun_type = type;
+
+ rc = efx_mae_insert_lhs_rule(efx, rule, EFX_TC_PRIO_TC);
+ if (rc) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to insert rule in hw");
+ goto release;
+ }
+ netif_dbg(efx, drv, efx->net_dev,
+ "Successfully parsed lhs rule (cookie %lx)\n",
+ tc->cookie);
+ return 0;
+
+release:
+ efx_tc_flower_release_lhs_actions(efx, &rule->lhs_act);
+ if (!old)
+ rhashtable_remove_fast(&efx->tc->lhs_rule_ht, &rule->linkage,
+ efx_tc_lhs_rule_ht_params);
+ kfree(rule);
+release_encap_match:
+ if (match->encap)
+ efx_tc_flower_release_encap_match(efx, match->encap);
+ return rc;
+}
+
static int efx_tc_flower_replace_foreign(struct efx_nic *efx,
struct net_device *net_dev,
struct flow_cls_offload *tc)
@@ -1371,7 +1686,7 @@ static int efx_tc_flower_replace_foreign(struct efx_nic *efx,
/* Parse match */
memset(&match, 0, sizeof(match));
- rc = efx_tc_flower_parse_match(efx, fr, &match, NULL);
+ rc = efx_tc_flower_parse_match(efx, fr, &match, extack);
if (rc)
return rc;
/* The rule as given to us doesn't specify a source netdevice.
@@ -1387,6 +1702,10 @@ static int efx_tc_flower_replace_foreign(struct efx_nic *efx,
match.value.ingress_port = rc;
match.mask.ingress_port = ~0;
+ if (efx_tc_rule_is_lhs_rule(fr, &match))
+ return efx_tc_flower_replace_foreign_lhs(efx, tc, fr, &match,
+ net_dev);
+
if (tc->common.chain_index) {
struct efx_tc_recirc_id *rid;
@@ -1416,6 +1735,7 @@ static int efx_tc_flower_replace_foreign(struct efx_nic *efx,
if (match.mask.ct_state_est && !match.value.ct_state_est) {
if (match.value.tcp_syn_fin_rst) {
/* Can't offload this combination */
+ NL_SET_ERR_MSG_MOD(extack, "TCP flags and -est conflict for offload");
rc = -EOPNOTSUPP;
goto release;
}
@@ -1442,7 +1762,7 @@ static int efx_tc_flower_replace_foreign(struct efx_nic *efx,
goto release;
}
- rc = efx_mae_match_check_caps(efx, &match.mask, NULL);
+ rc = efx_mae_match_check_caps(efx, &match.mask, extack);
if (rc)
goto release;
@@ -1470,7 +1790,7 @@ static int efx_tc_flower_replace_foreign(struct efx_nic *efx,
extack);
if (rc)
goto release;
- } else {
+ } else if (!tc->common.chain_index) {
/* This is not a tunnel decap rule, ignore it */
netif_dbg(efx, drv, efx->net_dev,
"Ignoring foreign filter without encap match\n");
@@ -1530,6 +1850,7 @@ static int efx_tc_flower_replace_foreign(struct efx_nic *efx,
goto release;
}
if (!efx_tc_flower_action_order_ok(act, EFX_TC_AO_COUNT)) {
+ NL_SET_ERR_MSG_MOD(extack, "Count action violates action order (can't happen)");
rc = -EOPNOTSUPP;
goto release;
}
@@ -2136,6 +2457,14 @@ static int efx_tc_flower_replace(struct efx_nic *efx,
NL_SET_ERR_MSG_MOD(extack, "Cannot offload tunnel decap action without tunnel device");
rc = -EOPNOTSUPP;
goto release;
+ case FLOW_ACTION_CT:
+ if (fa->ct.action != TCA_CT_ACT_NAT) {
+ rc = -EOPNOTSUPP;
+ NL_SET_ERR_MSG_FMT_MOD(extack, "Can only offload CT 'nat' action in RHS rules, not %d", fa->ct.action);
+ goto release;
+ }
+ act->do_nat = 1;
+ break;
default:
NL_SET_ERR_MSG_FMT_MOD(extack, "Unhandled action %u",
fa->id);
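COPY_MASK_AND_VALUE() in the tc.c hunk above relies on the GNU C statement-expression extension: the ({ ... }) block is an expression whose value is its last statement, which lets the macro both move a mask/value pair and yield an error code the caller can test in an if (). A stripped-down sketch of the idiom with illustrative names, not the driver macro itself:

#include <linux/errno.h>

/* move src into dst, but refuse if dst is already in use; the whole
 * ({ ... }) block evaluates to 0 or -EBUSY */
#define MOVE_FIELD(dst, src) ({                 \
        int __rc = 0;                           \
        if (dst)                                \
                __rc = -EBUSY;                  \
        else {                                  \
                dst = src;                      \
                src = 0;                        \
        }                                       \
        __rc;                                   \
})

/* usage: chain the moves and bail out on the first conflict */
static int move_both(int *a_dst, int *a_src, int *b_dst, int *b_src)
{
        int rc;

        rc = MOVE_FIELD(*a_dst, *a_src);
        if (rc)
                return rc;
        return MOVE_FIELD(*b_dst, *b_src);
}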
diff --git a/drivers/net/ethernet/sfc/tc.h b/drivers/net/ethernet/sfc/tc.h
index 4dd2c378fd9f..7b5190078bee 100644
--- a/drivers/net/ethernet/sfc/tc.h
+++ b/drivers/net/ethernet/sfc/tc.h
@@ -48,6 +48,7 @@ struct efx_tc_encap_action; /* see tc_encap_actions.h */
* @vlan_push: the number of vlan headers to push
* @vlan_pop: the number of vlan headers to pop
* @decap: used to indicate a tunnel header decapsulation should take place
+ * @do_nat: perform NAT/NPT with values returned by conntrack match
* @do_ttl_dec: used to indicate IP TTL / Hop Limit should be decremented
* @deliver: used to indicate a deliver action should take place
* @vlan_tci: tci fields for vlan push actions
@@ -68,6 +69,7 @@ struct efx_tc_action_set {
u16 vlan_push:2;
u16 vlan_pop:2;
u16 decap:1;
+ u16 do_nat:1;
u16 do_ttl_dec:1;
u16 deliver:1;
__be16 vlan_tci[2];
@@ -140,10 +142,14 @@ static inline bool efx_tc_match_is_encap(const struct efx_tc_match_fields *mask)
* The pseudo encap match may be referenced again by an encap match
* with different values for these fields, but all masks must match the
* first (stored in our child_* fields).
+ * @EFX_TC_EM_PSEUDO_OR: registered by an fLHS rule that fits in the OR
+ * table. The &struct efx_tc_lhs_rule already holds the HW OR entry.
+ * Only one reference to this encap match may exist.
*/
enum efx_tc_em_pseudo_type {
EFX_TC_EM_DIRECT,
EFX_TC_EM_PSEUDO_MASK,
+ EFX_TC_EM_PSEUDO_OR,
};
struct efx_tc_encap_match {
@@ -183,6 +189,7 @@ struct efx_tc_action_set_list {
};
struct efx_tc_lhs_action {
+ enum efx_encap_type tun_type;
struct efx_tc_recirc_id *rid;
struct efx_tc_ct_zone *zone;
struct efx_tc_counter_index *count;
@@ -203,6 +210,7 @@ struct efx_tc_lhs_rule {
struct efx_tc_lhs_action lhs_act;
struct rhash_head linkage;
u32 fw_id;
+ bool is_ar; /* Action Rule (for OR-AR-CT-AR sequence) */
};
enum efx_tc_rule_prios {
diff --git a/drivers/net/ethernet/sfc/tc_conntrack.c b/drivers/net/ethernet/sfc/tc_conntrack.c
index 44bb57670340..d90206f27161 100644
--- a/drivers/net/ethernet/sfc/tc_conntrack.c
+++ b/drivers/net/ethernet/sfc/tc_conntrack.c
@@ -276,10 +276,84 @@ static int efx_tc_ct_parse_match(struct efx_nic *efx, struct flow_rule *fr,
return 0;
}
+/**
+ * struct efx_tc_ct_mangler_state - tracks which fields have been pedited
+ *
+ * @ipv4: IP source or destination addr has been set
+ * @tcpudp: TCP/UDP source or destination port has been set
+ */
+struct efx_tc_ct_mangler_state {
+ u8 ipv4:1;
+ u8 tcpudp:1;
+};
+
+static int efx_tc_ct_mangle(struct efx_nic *efx, struct efx_tc_ct_entry *conn,
+ const struct flow_action_entry *fa,
+ struct efx_tc_ct_mangler_state *mung)
+{
+ /* Is this the first mangle we've processed for this rule? */
+ bool first = !(mung->ipv4 || mung->tcpudp);
+ bool dnat = false;
+
+ switch (fa->mangle.htype) {
+ case FLOW_ACT_MANGLE_HDR_TYPE_IP4:
+ switch (fa->mangle.offset) {
+ case offsetof(struct iphdr, daddr):
+ dnat = true;
+ fallthrough;
+ case offsetof(struct iphdr, saddr):
+ if (fa->mangle.mask)
+ return -EOPNOTSUPP;
+ conn->nat_ip = htonl(fa->mangle.val);
+ mung->ipv4 = 1;
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+ break;
+ case FLOW_ACT_MANGLE_HDR_TYPE_TCP:
+ case FLOW_ACT_MANGLE_HDR_TYPE_UDP:
+ /* Both struct tcphdr and struct udphdr start with
+ * __be16 source;
+ * __be16 dest;
+ * so we can use the same code for both.
+ */
+ switch (fa->mangle.offset) {
+ case offsetof(struct tcphdr, dest):
+ BUILD_BUG_ON(offsetof(struct tcphdr, dest) !=
+ offsetof(struct udphdr, dest));
+ dnat = true;
+ fallthrough;
+ case offsetof(struct tcphdr, source):
+ BUILD_BUG_ON(offsetof(struct tcphdr, source) !=
+ offsetof(struct udphdr, source));
+ if (~fa->mangle.mask != 0xffff)
+ return -EOPNOTSUPP;
+ conn->l4_natport = htons(fa->mangle.val);
+ mung->tcpudp = 1;
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+ /* first mangle tells us whether this is SNAT or DNAT;
+ * subsequent mangles must match that
+ */
+ if (first)
+ conn->dnat = dnat;
+ else if (conn->dnat != dnat)
+ return -EOPNOTSUPP;
+ return 0;
+}
+
static int efx_tc_ct_replace(struct efx_tc_ct_zone *ct_zone,
struct flow_cls_offload *tc)
{
struct flow_rule *fr = flow_cls_offload_flow_rule(tc);
+ struct efx_tc_ct_mangler_state mung = {};
struct efx_tc_ct_entry *conn, *old;
struct efx_nic *efx = ct_zone->efx;
const struct flow_action_entry *fa;
@@ -326,6 +400,17 @@ static int efx_tc_ct_replace(struct efx_tc_ct_zone *ct_zone,
goto release;
}
break;
+ case FLOW_ACTION_MANGLE:
+ if (conn->eth_proto != htons(ETH_P_IP)) {
+ netif_dbg(efx, drv, efx->net_dev,
+ "NAT only supported for IPv4\n");
+ rc = -EOPNOTSUPP;
+ goto release;
+ }
+ rc = efx_tc_ct_mangle(efx, conn, fa, &mung);
+ if (rc)
+ goto release;
+ break;
default:
netif_dbg(efx, drv, efx->net_dev,
"Unhandled action %u for conntrack\n", fa->id);
@@ -335,8 +420,10 @@ static int efx_tc_ct_replace(struct efx_tc_ct_zone *ct_zone,
}
/* fill in defaults for unmangled values */
- conn->nat_ip = conn->dnat ? conn->dst_ip : conn->src_ip;
- conn->l4_natport = conn->dnat ? conn->l4_dport : conn->l4_sport;
+ if (!mung.ipv4)
+ conn->nat_ip = conn->dnat ? conn->dst_ip : conn->src_ip;
+ if (!mung.tcpudp)
+ conn->l4_natport = conn->dnat ? conn->l4_dport : conn->l4_sport;
cnt = efx_tc_flower_allocate_counter(efx, EFX_TC_COUNTER_TYPE_CT);
if (IS_ERR(cnt)) {
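efx_tc_ct_mangle() above infers the NAT direction from which header field the pedit touches: an offset of daddr/dest means DNAT, saddr/source means SNAT, and every later mangle must agree with the first. A compact sketch of that offset-based classification; the struct and field names are the real UAPI ones, the helper functions are hypothetical:

#include <linux/errno.h>
#include <linux/ip.h>
#include <linux/stddef.h>
#include <linux/tcp.h>
#include <linux/types.h>
#include <linux/udp.h>

/* returns 1 for DNAT, 0 for SNAT, -EOPNOTSUPP for anything else */
static int classify_ipv4_mangle(u32 offset)
{
        switch (offset) {
        case offsetof(struct iphdr, daddr):
                return 1;
        case offsetof(struct iphdr, saddr):
                return 0;
        default:
                return -EOPNOTSUPP;
        }
}

static int classify_l4_mangle(u32 offset)
{
        /* tcphdr and udphdr share the source/dest layout, which is what
         * the BUILD_BUG_ON()s in the hunk assert */
        switch (offset) {
        case offsetof(struct tcphdr, dest):
                return 1;
        case offsetof(struct tcphdr, source):
                return 0;
        default:
                return -EOPNOTSUPP;
        }
}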
diff --git a/drivers/net/ethernet/sgi/ioc3-eth.c b/drivers/net/ethernet/sgi/ioc3-eth.c
index 8fc3f5272fa7..98d0b561a057 100644
--- a/drivers/net/ethernet/sgi/ioc3-eth.c
+++ b/drivers/net/ethernet/sgi/ioc3-eth.c
@@ -962,7 +962,7 @@ out_free:
return err;
}
-static int ioc3eth_remove(struct platform_device *pdev)
+static void ioc3eth_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct ioc3_private *ip = netdev_priv(dev);
@@ -973,8 +973,6 @@ static int ioc3eth_remove(struct platform_device *pdev)
unregister_netdev(dev);
del_timer_sync(&ip->ioc3_timer);
free_netdev(dev);
-
- return 0;
}
@@ -1275,7 +1273,7 @@ static void ioc3_set_multicast_list(struct net_device *dev)
static struct platform_driver ioc3eth_driver = {
.probe = ioc3eth_probe,
- .remove = ioc3eth_remove,
+ .remove_new = ioc3eth_remove,
.driver = {
.name = "ioc3-eth",
}
diff --git a/drivers/net/ethernet/sgi/meth.c b/drivers/net/ethernet/sgi/meth.c
index 6d850ea2b94c..18b6f93d875e 100644
--- a/drivers/net/ethernet/sgi/meth.c
+++ b/drivers/net/ethernet/sgi/meth.c
@@ -854,19 +854,17 @@ static int meth_probe(struct platform_device *pdev)
return 0;
}
-static int meth_remove(struct platform_device *pdev)
+static void meth_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
unregister_netdev(dev);
free_netdev(dev);
-
- return 0;
}
static struct platform_driver meth_driver = {
.probe = meth_probe,
- .remove = meth_remove,
+ .remove_new = meth_remove,
.driver = {
.name = "meth",
}
diff --git a/drivers/net/ethernet/smsc/smc91x.c b/drivers/net/ethernet/smsc/smc91x.c
index 032eccf8eb42..758347616535 100644
--- a/drivers/net/ethernet/smsc/smc91x.c
+++ b/drivers/net/ethernet/smsc/smc91x.c
@@ -2411,7 +2411,7 @@ static int smc_drv_probe(struct platform_device *pdev)
return ret;
}
-static int smc_drv_remove(struct platform_device *pdev)
+static void smc_drv_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct smc_local *lp = netdev_priv(ndev);
@@ -2436,8 +2436,6 @@ static int smc_drv_remove(struct platform_device *pdev)
release_mem_region(res->start, SMC_IO_EXTENT);
free_netdev(ndev);
-
- return 0;
}
static int smc_drv_suspend(struct device *dev)
@@ -2480,7 +2478,7 @@ static const struct dev_pm_ops smc_drv_pm_ops = {
static struct platform_driver smc_driver = {
.probe = smc_drv_probe,
- .remove = smc_drv_remove,
+ .remove_new = smc_drv_remove,
.driver = {
.name = CARDNAME,
.pm = &smc_drv_pm_ops,
diff --git a/drivers/net/ethernet/smsc/smsc911x.c b/drivers/net/ethernet/smsc/smsc911x.c
index cb590db625e8..31cb7d0166f0 100644
--- a/drivers/net/ethernet/smsc/smsc911x.c
+++ b/drivers/net/ethernet/smsc/smsc911x.c
@@ -2314,7 +2314,7 @@ static int smsc911x_init(struct net_device *dev)
return 0;
}
-static int smsc911x_drv_remove(struct platform_device *pdev)
+static void smsc911x_drv_remove(struct platform_device *pdev)
{
struct net_device *dev;
struct smsc911x_data *pdata;
@@ -2348,8 +2348,6 @@ static int smsc911x_drv_remove(struct platform_device *pdev)
free_netdev(dev);
pm_runtime_disable(&pdev->dev);
-
- return 0;
}
/* standard register access */
@@ -2668,7 +2666,7 @@ MODULE_DEVICE_TABLE(acpi, smsc911x_acpi_match);
static struct platform_driver smsc911x_driver = {
.probe = smsc911x_drv_probe,
- .remove = smsc911x_drv_remove,
+ .remove_new = smsc911x_drv_remove,
.driver = {
.name = SMSC_CHIPNAME,
.pm = SMSC911X_PM_OPS,
diff --git a/drivers/net/ethernet/socionext/netsec.c b/drivers/net/ethernet/socionext/netsec.c
index f358ea003193..0891e9e49ecb 100644
--- a/drivers/net/ethernet/socionext/netsec.c
+++ b/drivers/net/ethernet/socionext/netsec.c
@@ -780,7 +780,7 @@ static void netsec_finalize_xdp_rx(struct netsec_priv *priv, u32 xdp_res,
u16 pkts)
{
if (xdp_res & NETSEC_XDP_REDIR)
- xdp_do_flush_map();
+ xdp_do_flush();
if (xdp_res & NETSEC_XDP_TX)
netsec_xdp_ring_tx_db(priv, pkts);
@@ -2150,7 +2150,7 @@ free_ndev:
return ret;
}
-static int netsec_remove(struct platform_device *pdev)
+static void netsec_remove(struct platform_device *pdev)
{
struct netsec_priv *priv = platform_get_drvdata(pdev);
@@ -2162,8 +2162,6 @@ static int netsec_remove(struct platform_device *pdev)
pm_runtime_disable(&pdev->dev);
free_netdev(priv->ndev);
-
- return 0;
}
#ifdef CONFIG_PM
@@ -2211,7 +2209,7 @@ MODULE_DEVICE_TABLE(acpi, netsec_acpi_ids);
static struct platform_driver netsec_driver = {
.probe = netsec_probe,
- .remove = netsec_remove,
+ .remove_new = netsec_remove,
.driver = {
.name = "netsec",
.pm = &netsec_pm_ops,
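The sfc, siena and netsec hunks in this pull switch from xdp_do_flush_map() to xdp_do_flush(); the two names have pointed at the same flush for some time and the _map spelling is being phased out, since the flush covers all redirect targets, not just maps. A sketch of where the call sits in a NAPI poll loop; the rx helper is a hypothetical stand-in:

#include <linux/filter.h>
#include <linux/netdevice.h>

int example_process_rx(struct napi_struct *napi, int budget);

static int example_poll(struct napi_struct *napi, int budget)
{
        int work_done = example_process_rx(napi, budget);

        /* flush any XDP_REDIRECT work queued while walking the ring;
         * formerly spelled xdp_do_flush_map() */
        xdp_do_flush();

        if (work_done < budget)
                napi_complete_done(napi, work_done);
        return work_done;
}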
diff --git a/drivers/net/ethernet/socionext/sni_ave.c b/drivers/net/ethernet/socionext/sni_ave.c
index 4838d2383a43..eed24e67c5a6 100644
--- a/drivers/net/ethernet/socionext/sni_ave.c
+++ b/drivers/net/ethernet/socionext/sni_ave.c
@@ -1719,7 +1719,7 @@ out_del_napi:
return ret;
}
-static int ave_remove(struct platform_device *pdev)
+static void ave_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct ave_private *priv = netdev_priv(ndev);
@@ -1727,8 +1727,6 @@ static int ave_remove(struct platform_device *pdev)
unregister_netdev(ndev);
netif_napi_del(&priv->napi_rx);
netif_napi_del(&priv->napi_tx);
-
- return 0;
}
#ifdef CONFIG_PM_SLEEP
@@ -1976,7 +1974,7 @@ MODULE_DEVICE_TABLE(of, of_ave_match);
static struct platform_driver ave_driver = {
.probe = ave_probe,
- .remove = ave_remove,
+ .remove_new = ave_remove,
.driver = {
.name = "ave",
.pm = AVE_PM_OPS,
diff --git a/drivers/net/ethernet/stmicro/stmmac/Kconfig b/drivers/net/ethernet/stmicro/stmmac/Kconfig
index 06c6871f8788..a2b9e289aa36 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Kconfig
+++ b/drivers/net/ethernet/stmicro/stmmac/Kconfig
@@ -239,6 +239,17 @@ config DWMAC_INTEL_PLAT
the stmmac device driver. This driver is used for the Intel Keem Bay
SoC.
+config DWMAC_LOONGSON1
+ tristate "Loongson1 GMAC support"
+ default MACH_LOONGSON32
+ depends on OF && (MACH_LOONGSON32 || COMPILE_TEST)
+ help
+ Support for the Ethernet controller on the Loongson1 SoC.
+
+ This selects Loongson1 SoC glue layer support for the stmmac
+ device driver. This driver is used for Loongson1-based boards
+ like Loongson LS1B/LS1C.
+
config DWMAC_TEGRA
tristate "NVIDIA Tegra MGBE support"
depends on ARCH_TEGRA || COMPILE_TEST
diff --git a/drivers/net/ethernet/stmicro/stmmac/Makefile b/drivers/net/ethernet/stmicro/stmmac/Makefile
index 5b57aee19267..80e598bd4255 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Makefile
+++ b/drivers/net/ethernet/stmicro/stmmac/Makefile
@@ -29,6 +29,7 @@ obj-$(CONFIG_DWMAC_SUNXI) += dwmac-sunxi.o
obj-$(CONFIG_DWMAC_SUN8I) += dwmac-sun8i.o
obj-$(CONFIG_DWMAC_DWC_QOS_ETH) += dwmac-dwc-qos-eth.o
obj-$(CONFIG_DWMAC_INTEL_PLAT) += dwmac-intel-plat.o
+obj-$(CONFIG_DWMAC_LOONGSON1) += dwmac-loongson1.o
obj-$(CONFIG_DWMAC_GENERIC) += dwmac-generic.o
obj-$(CONFIG_DWMAC_IMX8) += dwmac-imx.o
obj-$(CONFIG_DWMAC_TEGRA) += dwmac-tegra.o
diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
index 1e996c29043d..e3f650e88f82 100644
--- a/drivers/net/ethernet/stmicro/stmmac/common.h
+++ b/drivers/net/ethernet/stmicro/stmmac/common.h
@@ -293,7 +293,7 @@ struct stmmac_safety_stats {
#define MIN_DMA_RIWT 0x10
#define DEF_DMA_RIWT 0xa0
/* Tx coalesce parameters */
-#define STMMAC_COAL_TX_TIMER 1000
+#define STMMAC_COAL_TX_TIMER 5000
#define STMMAC_MAX_COAL_TX_TICK 100000
#define STMMAC_TX_MAX_FRAMES 256
#define STMMAC_TX_FRAMES 25
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-anarion.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-anarion.c
index 58a7f08e8d78..643ee6d8d4dd 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-anarion.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-anarion.c
@@ -115,7 +115,7 @@ static int anarion_dwmac_probe(struct platform_device *pdev)
if (IS_ERR(gmac))
return PTR_ERR(gmac);
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
@@ -124,13 +124,7 @@ static int anarion_dwmac_probe(struct platform_device *pdev)
anarion_gmac_init(pdev, gmac);
plat_dat->bsp_priv = gmac;
- ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
- if (ret) {
- stmmac_remove_config_dt(pdev, plat_dat);
- return ret;
- }
-
- return 0;
+ return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
}
static const struct of_device_id anarion_dwmac_match[] = {
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c
index 61ebf36da13d..ec924c6c76c6 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-dwc-qos-eth.c
@@ -435,15 +435,14 @@ static int dwc_eth_dwmac_probe(struct platform_device *pdev)
if (IS_ERR(stmmac_res.addr))
return PTR_ERR(stmmac_res.addr);
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
ret = data->probe(pdev, plat_dat, &stmmac_res);
if (ret < 0) {
dev_err_probe(&pdev->dev, ret, "failed to probe subdriver\n");
-
- goto remove_config;
+ return ret;
}
ret = dwc_eth_dwmac_config_dt(pdev, plat_dat);
@@ -458,25 +457,17 @@ static int dwc_eth_dwmac_probe(struct platform_device *pdev)
remove:
data->remove(pdev);
-remove_config:
- stmmac_remove_config_dt(pdev, plat_dat);
return ret;
}
static void dwc_eth_dwmac_remove(struct platform_device *pdev)
{
- struct net_device *ndev = platform_get_drvdata(pdev);
- struct stmmac_priv *priv = netdev_priv(ndev);
- const struct dwc_eth_dwmac_data *data;
-
- data = device_get_match_data(&pdev->dev);
+ const struct dwc_eth_dwmac_data *data = device_get_match_data(&pdev->dev);
stmmac_dvr_remove(&pdev->dev);
data->remove(pdev);
-
- stmmac_remove_config_dt(pdev, priv->plat);
}
static const struct of_device_id dwc_eth_dwmac_match[] = {
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
index 20fc455b3337..598eff926815 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-generic.c
@@ -27,7 +27,7 @@ static int dwmac_generic_probe(struct platform_device *pdev)
return ret;
if (pdev->dev.of_node) {
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat)) {
dev_err(&pdev->dev, "dt configuration failed\n");
return PTR_ERR(plat_dat);
@@ -46,17 +46,7 @@ static int dwmac_generic_probe(struct platform_device *pdev)
plat_dat->unicast_filter_entries = 1;
}
- ret = stmmac_pltfr_probe(pdev, plat_dat, &stmmac_res);
- if (ret)
- goto err_remove_config_dt;
-
- return 0;
-
-err_remove_config_dt:
- if (pdev->dev.of_node)
- stmmac_remove_config_dt(pdev, plat_dat);
-
- return ret;
+ return devm_stmmac_pltfr_probe(pdev, plat_dat, &stmmac_res);
}
static const struct of_device_id dwmac_generic_match[] = {
@@ -77,7 +67,6 @@ MODULE_DEVICE_TABLE(of, dwmac_generic_match);
static struct platform_driver dwmac_generic_driver = {
.probe = dwmac_generic_probe,
- .remove_new = stmmac_pltfr_remove,
.driver = {
.name = STMMAC_RESOURCE_NAME,
.pm = &stmmac_pltfr_pm_ops,
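The stmmac glue-driver hunks in this pull convert to devm_stmmac_probe_config_dt() (and, in dwmac-generic, devm_stmmac_pltfr_probe()), so the DT configuration is released automatically on probe failure and on remove and the err_remove_config_dt unwind ladders disappear. A sketch of what a converted glue probe reduces to; the glue driver's own names are hypothetical, the stmmac calls are the ones used above:

#include <linux/platform_device.h>

#include "stmmac.h"
#include "stmmac_platform.h"

static int example_dwmac_probe(struct platform_device *pdev)
{
        struct plat_stmmacenet_data *plat_dat;
        struct stmmac_resources stmmac_res;
        int ret;

        ret = stmmac_get_platform_resources(pdev, &stmmac_res);
        if (ret)
                return ret;

        /* devm variant: no stmmac_remove_config_dt() on any error path */
        plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
        if (IS_ERR(plat_dat))
                return PTR_ERR(plat_dat);

        /* ... glue-specific setup of plat_dat goes here ... */

        return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
}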
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c
index df34e34cc14f..8f730ada71f9 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c
@@ -331,15 +331,14 @@ static int imx_dwmac_probe(struct platform_device *pdev)
if (!dwmac)
return -ENOMEM;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
data = of_device_get_match_data(&pdev->dev);
if (!data) {
dev_err(&pdev->dev, "failed to get match data\n");
- ret = -EINVAL;
- goto err_match_data;
+ return -EINVAL;
}
dwmac->ops = data;
@@ -348,7 +347,7 @@ static int imx_dwmac_probe(struct platform_device *pdev)
ret = imx_dwmac_parse_dt(dwmac, &pdev->dev);
if (ret) {
dev_err(&pdev->dev, "failed to parse OF data\n");
- goto err_parse_dt;
+ return ret;
}
if (data->flags & STMMAC_FLAG_HWTSTAMP_CORRECT_LATENCY)
@@ -365,7 +364,7 @@ static int imx_dwmac_probe(struct platform_device *pdev)
ret = imx_dwmac_clks_config(dwmac, true);
if (ret)
- goto err_clks_config;
+ return ret;
ret = imx_dwmac_init(pdev, dwmac);
if (ret)
@@ -385,10 +384,6 @@ err_drv_probe:
imx_dwmac_exit(pdev, plat_dat->bsp_priv);
err_dwmac_init:
imx_dwmac_clks_config(dwmac, false);
-err_clks_config:
-err_parse_dt:
-err_match_data:
- stmmac_remove_config_dt(pdev, plat_dat);
return ret;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ingenic.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ingenic.c
index 0a20c3d24722..19c93b998fb3 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ingenic.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ingenic.c
@@ -241,29 +241,25 @@ static int ingenic_mac_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
mac = devm_kzalloc(&pdev->dev, sizeof(*mac), GFP_KERNEL);
- if (!mac) {
- ret = -ENOMEM;
- goto err_remove_config_dt;
- }
+ if (!mac)
+ return -ENOMEM;
data = of_device_get_match_data(&pdev->dev);
if (!data) {
dev_err(&pdev->dev, "No of match data provided\n");
- ret = -EINVAL;
- goto err_remove_config_dt;
+ return -EINVAL;
}
/* Get MAC PHY control register */
mac->regmap = syscon_regmap_lookup_by_phandle(pdev->dev.of_node, "mode-reg");
if (IS_ERR(mac->regmap)) {
dev_err(&pdev->dev, "%s: Failed to get syscon regmap\n", __func__);
- ret = PTR_ERR(mac->regmap);
- goto err_remove_config_dt;
+ return PTR_ERR(mac->regmap);
}
if (!of_property_read_u32(pdev->dev.of_node, "tx-clk-delay-ps", &tx_delay_ps)) {
@@ -272,8 +268,7 @@ static int ingenic_mac_probe(struct platform_device *pdev)
mac->tx_delay = tx_delay_ps * 1000;
} else {
dev_err(&pdev->dev, "Invalid TX clock delay: %dps\n", tx_delay_ps);
- ret = -EINVAL;
- goto err_remove_config_dt;
+ return -EINVAL;
}
}
@@ -283,8 +278,7 @@ static int ingenic_mac_probe(struct platform_device *pdev)
mac->rx_delay = rx_delay_ps * 1000;
} else {
dev_err(&pdev->dev, "Invalid RX clock delay: %dps\n", rx_delay_ps);
- ret = -EINVAL;
- goto err_remove_config_dt;
+ return -EINVAL;
}
}
@@ -295,18 +289,9 @@ static int ingenic_mac_probe(struct platform_device *pdev)
ret = ingenic_mac_init(plat_dat);
if (ret)
- goto err_remove_config_dt;
-
- ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
- if (ret)
- goto err_remove_config_dt;
-
- return 0;
-
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
+ return ret;
- return ret;
+ return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
}
#ifdef CONFIG_PM_SLEEP
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
index d352a14f9d48..d68f0c4e7835 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel-plat.c
@@ -7,8 +7,8 @@
#include <linux/ethtool.h>
#include <linux/module.h>
#include <linux/of.h>
-#include <linux/of_device.h>
#include <linux/platform_device.h>
+#include <linux/property.h>
#include <linux/stmmac.h>
#include "dwmac4.h"
@@ -76,7 +76,6 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
{
struct plat_stmmacenet_data *plat_dat;
struct stmmac_resources stmmac_res;
- const struct of_device_id *match;
struct intel_dwmac *dwmac;
unsigned long rate;
int ret;
@@ -85,35 +84,29 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat)) {
dev_err(&pdev->dev, "dt configuration failed\n");
return PTR_ERR(plat_dat);
}
dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
- if (!dwmac) {
- ret = -ENOMEM;
- goto err_remove_config_dt;
- }
+ if (!dwmac)
+ return -ENOMEM;
dwmac->dev = &pdev->dev;
dwmac->tx_clk = NULL;
- match = of_match_device(intel_eth_plat_match, &pdev->dev);
- if (match && match->data) {
- dwmac->data = (const struct intel_dwmac_data *)match->data;
-
+ dwmac->data = device_get_match_data(&pdev->dev);
+ if (dwmac->data) {
if (dwmac->data->fix_mac_speed)
plat_dat->fix_mac_speed = dwmac->data->fix_mac_speed;
/* Enable TX clock */
if (dwmac->data->tx_clk_en) {
dwmac->tx_clk = devm_clk_get(&pdev->dev, "tx_clk");
- if (IS_ERR(dwmac->tx_clk)) {
- ret = PTR_ERR(dwmac->tx_clk);
- goto err_remove_config_dt;
- }
+ if (IS_ERR(dwmac->tx_clk))
+ return PTR_ERR(dwmac->tx_clk);
clk_prepare_enable(dwmac->tx_clk);
@@ -126,7 +119,7 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
if (ret) {
dev_err(&pdev->dev,
"Failed to set tx_clk\n");
- goto err_remove_config_dt;
+ return ret;
}
}
}
@@ -140,7 +133,7 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
if (ret) {
dev_err(&pdev->dev,
"Failed to set clk_ptp_ref\n");
- goto err_remove_config_dt;
+ return ret;
}
}
}
@@ -158,15 +151,10 @@ static int intel_eth_plat_probe(struct platform_device *pdev)
ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
if (ret) {
clk_disable_unprepare(dwmac->tx_clk);
- goto err_remove_config_dt;
+ return ret;
}
return 0;
-
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
-
- return ret;
}
static void intel_eth_plat_remove(struct platform_device *pdev)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
index a3a249c63598..60283543ffc8 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-intel.c
@@ -605,7 +605,6 @@ static int intel_mgbe_common_data(struct pci_dev *pdev,
plat->mdio_bus_data->phy_mask |= 1 << INTEL_MGBE_XPCS_ADDR;
plat->int_snapshot_num = AUX_SNAPSHOT1;
- plat->ext_snapshot_num = AUX_SNAPSHOT0;
plat->crosststamp = intel_crosststamp;
plat->flags &= ~STMMAC_FLAG_INT_SNAPSHOT_EN;
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
index 9b0200749109..281687d7083b 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
@@ -384,22 +384,20 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
if (val)
return val;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
gmac = devm_kzalloc(dev, sizeof(*gmac), GFP_KERNEL);
- if (!gmac) {
- err = -ENOMEM;
- goto err_remove_config_dt;
- }
+ if (!gmac)
+ return -ENOMEM;
gmac->pdev = pdev;
err = ipq806x_gmac_of_parse(gmac);
if (err) {
dev_err(dev, "device tree parsing error\n");
- goto err_remove_config_dt;
+ return err;
}
regmap_write(gmac->qsgmii_csr, QSGMII_PCS_CAL_LCKDT_CTL,
@@ -459,11 +457,11 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
if (gmac->phy_mode == PHY_INTERFACE_MODE_SGMII) {
err = ipq806x_gmac_configure_qsgmii_params(gmac);
if (err)
- goto err_remove_config_dt;
+ return err;
err = ipq806x_gmac_configure_qsgmii_pcs_speed(gmac);
if (err)
- goto err_remove_config_dt;
+ return err;
}
plat_dat->has_gmac = true;
@@ -473,21 +471,12 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
plat_dat->tx_fifo_size = 8192;
plat_dat->rx_fifo_size = 8192;
- err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
- if (err)
- goto err_remove_config_dt;
-
- return 0;
+ return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
err_unsupported_phy:
dev_err(&pdev->dev, "Unsupported PHY mode: \"%s\"\n",
phy_modes(gmac->phy_mode));
- err = -EINVAL;
-
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
-
- return err;
+ return -EINVAL;
}
static const struct of_device_id ipq806x_gmac_dwmac_match[] = {
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson1.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson1.c
new file mode 100644
index 000000000000..3e86810717d3
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson1.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Loongson-1 DWMAC glue layer
+ *
+ * Copyright (C) 2011-2023 Keguang Zhang <keguang.zhang@gmail.com>
+ */
+
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/phy.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+
+#include "stmmac.h"
+#include "stmmac_platform.h"
+
+#define LS1B_GMAC0_BASE (0x1fe10000)
+#define LS1B_GMAC1_BASE (0x1fe20000)
+
+/* Loongson-1 SYSCON Registers */
+#define LS1X_SYSCON0 (0x0)
+#define LS1X_SYSCON1 (0x4)
+
+/* Loongson-1B SYSCON Register Bits */
+#define GMAC1_USE_UART1 BIT(4)
+#define GMAC1_USE_UART0 BIT(3)
+
+#define GMAC1_SHUT BIT(13)
+#define GMAC0_SHUT BIT(12)
+
+#define GMAC1_USE_TXCLK BIT(3)
+#define GMAC0_USE_TXCLK BIT(2)
+#define GMAC1_USE_PWM23 BIT(1)
+#define GMAC0_USE_PWM01 BIT(0)
+
+/* Loongson-1C SYSCON Register Bits */
+#define GMAC_SHUT BIT(6)
+
+#define PHY_INTF_SELI GENMASK(30, 28)
+#define PHY_INTF_MII FIELD_PREP(PHY_INTF_SELI, 0)
+#define PHY_INTF_RMII FIELD_PREP(PHY_INTF_SELI, 4)
+
+struct ls1x_dwmac {
+ struct plat_stmmacenet_data *plat_dat;
+ struct regmap *regmap;
+};
+
+static int ls1b_dwmac_syscon_init(struct platform_device *pdev, void *priv)
+{
+ struct ls1x_dwmac *dwmac = priv;
+ struct plat_stmmacenet_data *plat = dwmac->plat_dat;
+ struct regmap *regmap = dwmac->regmap;
+ struct resource *res;
+ unsigned long reg_base;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ dev_err(&pdev->dev, "Could not get IO_MEM resources\n");
+ return -EINVAL;
+ }
+ reg_base = (unsigned long)res->start;
+
+ if (reg_base == LS1B_GMAC0_BASE) {
+ switch (plat->phy_interface) {
+ case PHY_INTERFACE_MODE_RGMII_ID:
+ regmap_update_bits(regmap, LS1X_SYSCON0,
+ GMAC0_USE_TXCLK | GMAC0_USE_PWM01,
+ 0);
+ break;
+ case PHY_INTERFACE_MODE_MII:
+ regmap_update_bits(regmap, LS1X_SYSCON0,
+ GMAC0_USE_TXCLK | GMAC0_USE_PWM01,
+ GMAC0_USE_TXCLK | GMAC0_USE_PWM01);
+ break;
+ default:
+ dev_err(&pdev->dev, "Unsupported PHY mode %u\n",
+ plat->phy_interface);
+ return -EOPNOTSUPP;
+ }
+
+ regmap_update_bits(regmap, LS1X_SYSCON0, GMAC0_SHUT, 0);
+ } else if (reg_base == LS1B_GMAC1_BASE) {
+ regmap_update_bits(regmap, LS1X_SYSCON0,
+ GMAC1_USE_UART1 | GMAC1_USE_UART0,
+ GMAC1_USE_UART1 | GMAC1_USE_UART0);
+
+ switch (plat->phy_interface) {
+ case PHY_INTERFACE_MODE_RGMII_ID:
+ regmap_update_bits(regmap, LS1X_SYSCON1,
+ GMAC1_USE_TXCLK | GMAC1_USE_PWM23,
+ 0);
+
+ break;
+ case PHY_INTERFACE_MODE_MII:
+ regmap_update_bits(regmap, LS1X_SYSCON1,
+ GMAC1_USE_TXCLK | GMAC1_USE_PWM23,
+ GMAC1_USE_TXCLK | GMAC1_USE_PWM23);
+ break;
+ default:
+ dev_err(&pdev->dev, "Unsupported PHY mode %u\n",
+ plat->phy_interface);
+ return -EOPNOTSUPP;
+ }
+
+ regmap_update_bits(regmap, LS1X_SYSCON1, GMAC1_SHUT, 0);
+ } else {
+ dev_err(&pdev->dev, "Invalid Ethernet MAC base address %lx",
+ reg_base);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int ls1c_dwmac_syscon_init(struct platform_device *pdev, void *priv)
+{
+ struct ls1x_dwmac *dwmac = priv;
+ struct plat_stmmacenet_data *plat = dwmac->plat_dat;
+ struct regmap *regmap = dwmac->regmap;
+
+ switch (plat->phy_interface) {
+ case PHY_INTERFACE_MODE_MII:
+ regmap_update_bits(regmap, LS1X_SYSCON1, PHY_INTF_SELI,
+ PHY_INTF_MII);
+ break;
+ case PHY_INTERFACE_MODE_RMII:
+ regmap_update_bits(regmap, LS1X_SYSCON1, PHY_INTF_SELI,
+ PHY_INTF_RMII);
+ break;
+ default:
+ dev_err(&pdev->dev, "Unsupported PHY-mode %u\n",
+ plat->phy_interface);
+ return -EOPNOTSUPP;
+ }
+
+ regmap_update_bits(regmap, LS1X_SYSCON0, GMAC0_SHUT, 0);
+
+ return 0;
+}
+
+static int ls1x_dwmac_probe(struct platform_device *pdev)
+{
+ struct plat_stmmacenet_data *plat_dat;
+ struct stmmac_resources stmmac_res;
+ struct regmap *regmap;
+ struct ls1x_dwmac *dwmac;
+ int (*init)(struct platform_device *pdev, void *priv);
+ int ret;
+
+ ret = stmmac_get_platform_resources(pdev, &stmmac_res);
+ if (ret)
+ return ret;
+
+ /* Probe syscon */
+ regmap = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
+ "loongson,ls1-syscon");
+ if (IS_ERR(regmap))
+ return dev_err_probe(&pdev->dev, PTR_ERR(regmap),
+ "Unable to find syscon\n");
+
+ init = of_device_get_match_data(&pdev->dev);
+ if (!init) {
+ dev_err(&pdev->dev, "No of match data provided\n");
+ return -EINVAL;
+ }
+
+ dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
+ if (!dwmac)
+ return -ENOMEM;
+
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ if (IS_ERR(plat_dat))
+ return dev_err_probe(&pdev->dev, PTR_ERR(plat_dat),
+ "dt configuration failed\n");
+
+ plat_dat->bsp_priv = dwmac;
+ plat_dat->init = init;
+ dwmac->plat_dat = plat_dat;
+ dwmac->regmap = regmap;
+
+ return devm_stmmac_pltfr_probe(pdev, plat_dat, &stmmac_res);
+}
+
+static const struct of_device_id ls1x_dwmac_match[] = {
+ {
+ .compatible = "loongson,ls1b-gmac",
+ .data = &ls1b_dwmac_syscon_init,
+ },
+ {
+ .compatible = "loongson,ls1c-emac",
+ .data = &ls1c_dwmac_syscon_init,
+ },
+ { }
+};
+MODULE_DEVICE_TABLE(of, ls1x_dwmac_match);
+
+static struct platform_driver ls1x_dwmac_driver = {
+ .probe = ls1x_dwmac_probe,
+ .driver = {
+ .name = "loongson1-dwmac",
+ .of_match_table = ls1x_dwmac_match,
+ },
+};
+module_platform_driver(ls1x_dwmac_driver);
+
+MODULE_AUTHOR("Keguang Zhang <keguang.zhang@gmail.com>");
+MODULE_DESCRIPTION("Loongson-1 DWMAC glue layer");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-lpc18xx.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-lpc18xx.c
index d0aa674ce705..4c810d8f5bea 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-lpc18xx.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-lpc18xx.c
@@ -37,7 +37,7 @@ static int lpc18xx_dwmac_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
@@ -46,8 +46,7 @@ static int lpc18xx_dwmac_probe(struct platform_device *pdev)
reg = syscon_regmap_lookup_by_compatible("nxp,lpc1850-creg");
if (IS_ERR(reg)) {
dev_err(&pdev->dev, "syscon lookup failed\n");
- ret = PTR_ERR(reg);
- goto err_remove_config_dt;
+ return PTR_ERR(reg);
}
if (plat_dat->mac_interface == PHY_INTERFACE_MODE_MII) {
@@ -56,23 +55,13 @@ static int lpc18xx_dwmac_probe(struct platform_device *pdev)
ethmode = LPC18XX_CREG_CREG6_ETHMODE_RMII;
} else {
dev_err(&pdev->dev, "Only MII and RMII mode supported\n");
- ret = -EINVAL;
- goto err_remove_config_dt;
+ return -EINVAL;
}
regmap_update_bits(reg, LPC18XX_CREG_CREG6,
LPC18XX_CREG_CREG6_ETHMODE_MASK, ethmode);
- ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
- if (ret)
- goto err_remove_config_dt;
-
- return 0;
-
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
-
- return ret;
+ return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
}
static const struct of_device_id lpc18xx_dwmac_match[] = {
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
index cd796ec04132..2a9132d6d743 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-mediatek.c
@@ -656,7 +656,7 @@ static int mediatek_dwmac_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
@@ -665,7 +665,7 @@ static int mediatek_dwmac_probe(struct platform_device *pdev)
ret = mediatek_dwmac_clks_config(priv_plat, true);
if (ret)
- goto err_remove_config_dt;
+ return ret;
ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
if (ret)
@@ -675,8 +675,6 @@ static int mediatek_dwmac_probe(struct platform_device *pdev)
err_drv_probe:
mediatek_dwmac_clks_config(priv_plat, false);
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
return ret;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c
index 959f88c6da16..a16bfa9089ea 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson.c
@@ -52,35 +52,22 @@ static int meson6_dwmac_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
- if (!dwmac) {
- ret = -ENOMEM;
- goto err_remove_config_dt;
- }
+ if (!dwmac)
+ return -ENOMEM;
dwmac->reg = devm_platform_ioremap_resource(pdev, 1);
- if (IS_ERR(dwmac->reg)) {
- ret = PTR_ERR(dwmac->reg);
- goto err_remove_config_dt;
- }
+ if (IS_ERR(dwmac->reg))
+ return PTR_ERR(dwmac->reg);
plat_dat->bsp_priv = dwmac;
plat_dat->fix_mac_speed = meson6_dwmac_fix_mac_speed;
- ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
- if (ret)
- goto err_remove_config_dt;
-
- return 0;
-
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
-
- return ret;
+ return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
}
static const struct of_device_id meson6_dwmac_match[] = {
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
index 0b159dc0d5f6..b23944aa344e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
@@ -400,33 +400,27 @@ static int meson8b_dwmac_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
- if (!dwmac) {
- ret = -ENOMEM;
- goto err_remove_config_dt;
- }
+ if (!dwmac)
+ return -ENOMEM;
dwmac->data = (const struct meson8b_dwmac_data *)
of_device_get_match_data(&pdev->dev);
- if (!dwmac->data) {
- ret = -EINVAL;
- goto err_remove_config_dt;
- }
+ if (!dwmac->data)
+ return -EINVAL;
dwmac->regs = devm_platform_ioremap_resource(pdev, 1);
- if (IS_ERR(dwmac->regs)) {
- ret = PTR_ERR(dwmac->regs);
- goto err_remove_config_dt;
- }
+ if (IS_ERR(dwmac->regs))
+ return PTR_ERR(dwmac->regs);
dwmac->dev = &pdev->dev;
ret = of_get_phy_mode(pdev->dev.of_node, &dwmac->phy_mode);
if (ret) {
dev_err(&pdev->dev, "missing phy-mode property\n");
- goto err_remove_config_dt;
+ return ret;
}
/* use 2ns as fallback since this value was previously hardcoded */
@@ -448,53 +442,40 @@ static int meson8b_dwmac_probe(struct platform_device *pdev)
if (dwmac->rx_delay_ps > 3000 || dwmac->rx_delay_ps % 200) {
dev_err(dwmac->dev,
"The RGMII RX delay range is 0..3000ps in 200ps steps");
- ret = -EINVAL;
- goto err_remove_config_dt;
+ return -EINVAL;
}
} else {
if (dwmac->rx_delay_ps != 0 && dwmac->rx_delay_ps != 2000) {
dev_err(dwmac->dev,
"The only allowed RGMII RX delays values are: 0ps, 2000ps");
- ret = -EINVAL;
- goto err_remove_config_dt;
+ return -EINVAL;
}
}
dwmac->timing_adj_clk = devm_clk_get_optional(dwmac->dev,
"timing-adjustment");
- if (IS_ERR(dwmac->timing_adj_clk)) {
- ret = PTR_ERR(dwmac->timing_adj_clk);
- goto err_remove_config_dt;
- }
+ if (IS_ERR(dwmac->timing_adj_clk))
+ return PTR_ERR(dwmac->timing_adj_clk);
ret = meson8b_init_rgmii_delays(dwmac);
if (ret)
- goto err_remove_config_dt;
+ return ret;
ret = meson8b_init_rgmii_tx_clk(dwmac);
if (ret)
- goto err_remove_config_dt;
+ return ret;
ret = dwmac->data->set_phy_mode(dwmac);
if (ret)
- goto err_remove_config_dt;
+ return ret;
ret = meson8b_init_prg_eth(dwmac);
if (ret)
- goto err_remove_config_dt;
+ return ret;
plat_dat->bsp_priv = dwmac;
- ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
- if (ret)
- goto err_remove_config_dt;
-
- return 0;
-
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
-
- return ret;
+ return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
}
static const struct meson8b_dwmac_data meson8b_dwmac_data = {
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
index d920a50dd16c..382e8de1255d 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-rk.c
@@ -1824,7 +1824,7 @@ static int rk_gmac_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
@@ -1836,18 +1836,16 @@ static int rk_gmac_probe(struct platform_device *pdev)
plat_dat->fix_mac_speed = rk_fix_speed;
plat_dat->bsp_priv = rk_gmac_setup(pdev, plat_dat, data);
- if (IS_ERR(plat_dat->bsp_priv)) {
- ret = PTR_ERR(plat_dat->bsp_priv);
- goto err_remove_config_dt;
- }
+ if (IS_ERR(plat_dat->bsp_priv))
+ return PTR_ERR(plat_dat->bsp_priv);
ret = rk_gmac_clk_init(plat_dat);
if (ret)
- goto err_remove_config_dt;
+ return ret;
ret = rk_gmac_powerup(plat_dat->bsp_priv);
if (ret)
- goto err_remove_config_dt;
+ return ret;
ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
if (ret)
@@ -1857,8 +1855,6 @@ static int rk_gmac_probe(struct platform_device *pdev)
err_gmac_powerdown:
rk_gmac_powerdown(plat_dat->bsp_priv);
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
return ret;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
index 9bf102bbc6a0..ba2ce776bd4d 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
@@ -400,21 +400,19 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
dwmac = devm_kzalloc(dev, sizeof(*dwmac), GFP_KERNEL);
- if (!dwmac) {
- ret = -ENOMEM;
- goto err_remove_config_dt;
- }
+ if (!dwmac)
+ return -ENOMEM;
dwmac->stmmac_ocp_rst = devm_reset_control_get_optional(dev, "stmmaceth-ocp");
if (IS_ERR(dwmac->stmmac_ocp_rst)) {
ret = PTR_ERR(dwmac->stmmac_ocp_rst);
dev_err(dev, "error getting reset control of ocp %d\n", ret);
- goto err_remove_config_dt;
+ return ret;
}
reset_control_deassert(dwmac->stmmac_ocp_rst);
@@ -422,7 +420,7 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
ret = socfpga_dwmac_parse_data(dwmac, dev);
if (ret) {
dev_err(dev, "Unable to parse OF data\n");
- goto err_remove_config_dt;
+ return ret;
}
dwmac->ops = ops;
@@ -431,7 +429,7 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
if (ret)
- goto err_remove_config_dt;
+ return ret;
ndev = platform_get_drvdata(pdev);
stpriv = netdev_priv(ndev);
@@ -492,8 +490,6 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
err_dvr_remove:
stmmac_dvr_remove(&pdev->dev);
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
return ret;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-starfive.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-starfive.c
index 9289bb87c3e3..5d630affb4d1 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-starfive.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-starfive.c
@@ -105,7 +105,7 @@ static int starfive_dwmac_probe(struct platform_device *pdev)
return dev_err_probe(&pdev->dev, err,
"failed to get resources\n");
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return dev_err_probe(&pdev->dev, PTR_ERR(plat_dat),
"dt configuration failed\n");
@@ -141,13 +141,7 @@ static int starfive_dwmac_probe(struct platform_device *pdev)
if (err)
return err;
- err = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
- if (err) {
- stmmac_remove_config_dt(pdev, plat_dat);
- return err;
- }
-
- return 0;
+ return stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
}
static const struct of_device_id starfive_dwmac_match[] = {
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c
index 0d653bbb931b..4445cddc4cbe 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sti.c
@@ -273,20 +273,18 @@ static int sti_dwmac_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
- if (!dwmac) {
- ret = -ENOMEM;
- goto err_remove_config_dt;
- }
+ if (!dwmac)
+ return -ENOMEM;
ret = sti_dwmac_parse_data(dwmac, pdev);
if (ret) {
dev_err(&pdev->dev, "Unable to parse OF data\n");
- goto err_remove_config_dt;
+ return ret;
}
dwmac->fix_retime_src = data->fix_retime_src;
@@ -296,7 +294,7 @@ static int sti_dwmac_probe(struct platform_device *pdev)
ret = clk_prepare_enable(dwmac->clk);
if (ret)
- goto err_remove_config_dt;
+ return ret;
ret = sti_dwmac_set_mode(dwmac);
if (ret)
@@ -310,8 +308,6 @@ static int sti_dwmac_probe(struct platform_device *pdev)
disable_clk:
clk_disable_unprepare(dwmac->clk);
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
return ret;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c
index a0e276783e65..c92dfc4ecf57 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-stm32.c
@@ -98,7 +98,6 @@ struct stm32_dwmac {
struct stm32_ops {
int (*set_mode)(struct plat_stmmacenet_data *plat_dat);
- int (*clk_prepare)(struct stm32_dwmac *dwmac, bool prepare);
int (*suspend)(struct stm32_dwmac *dwmac);
void (*resume)(struct stm32_dwmac *dwmac);
int (*parse_data)(struct stm32_dwmac *dwmac,
@@ -107,62 +106,55 @@ struct stm32_ops {
bool clk_rx_enable_in_suspend;
};
-static int stm32_dwmac_init(struct plat_stmmacenet_data *plat_dat)
+static int stm32_dwmac_clk_enable(struct stm32_dwmac *dwmac, bool resume)
{
- struct stm32_dwmac *dwmac = plat_dat->bsp_priv;
int ret;
- if (dwmac->ops->set_mode) {
- ret = dwmac->ops->set_mode(plat_dat);
- if (ret)
- return ret;
- }
-
ret = clk_prepare_enable(dwmac->clk_tx);
if (ret)
- return ret;
+ goto err_clk_tx;
- if (!dwmac->ops->clk_rx_enable_in_suspend ||
- !dwmac->dev->power.is_suspended) {
+ if (!dwmac->ops->clk_rx_enable_in_suspend || !resume) {
ret = clk_prepare_enable(dwmac->clk_rx);
- if (ret) {
- clk_disable_unprepare(dwmac->clk_tx);
- return ret;
- }
+ if (ret)
+ goto err_clk_rx;
}
- if (dwmac->ops->clk_prepare) {
- ret = dwmac->ops->clk_prepare(dwmac, true);
- if (ret) {
- clk_disable_unprepare(dwmac->clk_rx);
- clk_disable_unprepare(dwmac->clk_tx);
- }
+ ret = clk_prepare_enable(dwmac->syscfg_clk);
+ if (ret)
+ goto err_syscfg_clk;
+
+ if (dwmac->enable_eth_ck) {
+ ret = clk_prepare_enable(dwmac->clk_eth_ck);
+ if (ret)
+ goto err_clk_eth_ck;
}
return ret;
+
+err_clk_eth_ck:
+ clk_disable_unprepare(dwmac->syscfg_clk);
+err_syscfg_clk:
+ if (!dwmac->ops->clk_rx_enable_in_suspend || !resume)
+ clk_disable_unprepare(dwmac->clk_rx);
+err_clk_rx:
+ clk_disable_unprepare(dwmac->clk_tx);
+err_clk_tx:
+ return ret;
}
-static int stm32mp1_clk_prepare(struct stm32_dwmac *dwmac, bool prepare)
+static int stm32_dwmac_init(struct plat_stmmacenet_data *plat_dat, bool resume)
{
- int ret = 0;
+ struct stm32_dwmac *dwmac = plat_dat->bsp_priv;
+ int ret;
- if (prepare) {
- ret = clk_prepare_enable(dwmac->syscfg_clk);
+ if (dwmac->ops->set_mode) {
+ ret = dwmac->ops->set_mode(plat_dat);
if (ret)
return ret;
- if (dwmac->enable_eth_ck) {
- ret = clk_prepare_enable(dwmac->clk_eth_ck);
- if (ret) {
- clk_disable_unprepare(dwmac->syscfg_clk);
- return ret;
- }
- }
- } else {
- clk_disable_unprepare(dwmac->syscfg_clk);
- if (dwmac->enable_eth_ck)
- clk_disable_unprepare(dwmac->clk_eth_ck);
}
- return ret;
+
+ return stm32_dwmac_clk_enable(dwmac, resume);
}
static int stm32mp1_set_mode(struct plat_stmmacenet_data *plat_dat)
@@ -252,13 +244,15 @@ static int stm32mcu_set_mode(struct plat_stmmacenet_data *plat_dat)
dwmac->ops->syscfg_eth_mask, val << 23);
}
-static void stm32_dwmac_clk_disable(struct stm32_dwmac *dwmac)
+static void stm32_dwmac_clk_disable(struct stm32_dwmac *dwmac, bool suspend)
{
clk_disable_unprepare(dwmac->clk_tx);
- clk_disable_unprepare(dwmac->clk_rx);
+ if (!dwmac->ops->clk_rx_enable_in_suspend || !suspend)
+ clk_disable_unprepare(dwmac->clk_rx);
- if (dwmac->ops->clk_prepare)
- dwmac->ops->clk_prepare(dwmac, false);
+ clk_disable_unprepare(dwmac->syscfg_clk);
+ if (dwmac->enable_eth_ck)
+ clk_disable_unprepare(dwmac->clk_eth_ck);
}
static int stm32_dwmac_parse_data(struct stm32_dwmac *dwmac,
@@ -372,21 +366,18 @@ static int stm32_dwmac_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
- if (!dwmac) {
- ret = -ENOMEM;
- goto err_remove_config_dt;
- }
+ if (!dwmac)
+ return -ENOMEM;
data = of_device_get_match_data(&pdev->dev);
if (!data) {
dev_err(&pdev->dev, "no of match data provided\n");
- ret = -EINVAL;
- goto err_remove_config_dt;
+ return -EINVAL;
}
dwmac->ops = data;
@@ -395,14 +386,14 @@ static int stm32_dwmac_probe(struct platform_device *pdev)
ret = stm32_dwmac_parse_data(dwmac, &pdev->dev);
if (ret) {
dev_err(&pdev->dev, "Unable to parse OF data\n");
- goto err_remove_config_dt;
+ return ret;
}
plat_dat->bsp_priv = dwmac;
- ret = stm32_dwmac_init(plat_dat);
+ ret = stm32_dwmac_init(plat_dat, false);
if (ret)
- goto err_remove_config_dt;
+ return ret;
ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
if (ret)
@@ -411,9 +402,7 @@ static int stm32_dwmac_probe(struct platform_device *pdev)
return 0;
err_clk_disable:
- stm32_dwmac_clk_disable(dwmac);
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
+ stm32_dwmac_clk_disable(dwmac, false);
return ret;
}
@@ -426,7 +415,7 @@ static void stm32_dwmac_remove(struct platform_device *pdev)
stmmac_dvr_remove(&pdev->dev);
- stm32_dwmac_clk_disable(priv->plat->bsp_priv);
+ stm32_dwmac_clk_disable(dwmac, false);
if (dwmac->irq_pwr_wakeup >= 0) {
dev_pm_clear_wake_irq(&pdev->dev);
@@ -436,18 +425,7 @@ static void stm32_dwmac_remove(struct platform_device *pdev)
static int stm32mp1_suspend(struct stm32_dwmac *dwmac)
{
- int ret = 0;
-
- ret = clk_prepare_enable(dwmac->clk_ethstp);
- if (ret)
- return ret;
-
- clk_disable_unprepare(dwmac->clk_tx);
- clk_disable_unprepare(dwmac->syscfg_clk);
- if (dwmac->enable_eth_ck)
- clk_disable_unprepare(dwmac->clk_eth_ck);
-
- return ret;
+ return clk_prepare_enable(dwmac->clk_ethstp);
}
static void stm32mp1_resume(struct stm32_dwmac *dwmac)
@@ -455,14 +433,6 @@ static void stm32mp1_resume(struct stm32_dwmac *dwmac)
clk_disable_unprepare(dwmac->clk_ethstp);
}
-static int stm32mcu_suspend(struct stm32_dwmac *dwmac)
-{
- clk_disable_unprepare(dwmac->clk_tx);
- clk_disable_unprepare(dwmac->clk_rx);
-
- return 0;
-}
-
#ifdef CONFIG_PM_SLEEP
static int stm32_dwmac_suspend(struct device *dev)
{
@@ -473,6 +443,10 @@ static int stm32_dwmac_suspend(struct device *dev)
int ret;
ret = stmmac_suspend(dev);
+ if (ret)
+ return ret;
+
+ stm32_dwmac_clk_disable(dwmac, true);
if (dwmac->ops->suspend)
ret = dwmac->ops->suspend(dwmac);
@@ -490,7 +464,7 @@ static int stm32_dwmac_resume(struct device *dev)
if (dwmac->ops->resume)
dwmac->ops->resume(dwmac);
- ret = stm32_dwmac_init(priv->plat);
+ ret = stm32_dwmac_init(priv->plat, true);
if (ret)
return ret;
@@ -505,13 +479,11 @@ static SIMPLE_DEV_PM_OPS(stm32_dwmac_pm_ops,
static struct stm32_ops stm32mcu_dwmac_data = {
.set_mode = stm32mcu_set_mode,
- .suspend = stm32mcu_suspend,
.syscfg_eth_mask = SYSCFG_MCU_ETH_MASK
};
static struct stm32_ops stm32mp1_dwmac_data = {
.set_mode = stm32mp1_set_mode,
- .clk_prepare = stm32mp1_clk_prepare,
.suspend = stm32mp1_suspend,
.resume = stm32mp1_resume,
.parse_data = stm32mp1_parse_data,
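stm32_dwmac_clk_enable() above folds the old clk_prepare helper into a single enable path with a goto unwind ladder. The sketch below is only a generic reminder of that idiom, not the stm32 code itself; the clock arguments are placeholders.

#include <linux/clk.h>

/* Enable three clocks, unwinding in reverse order on failure. */
static int demo_enable_three_clocks(struct clk *a, struct clk *b, struct clk *c)
{
        int ret;

        ret = clk_prepare_enable(a);
        if (ret)
                return ret;

        ret = clk_prepare_enable(b);
        if (ret)
                goto err_b;

        ret = clk_prepare_enable(c);
        if (ret)
                goto err_c;

        return 0;

err_c:
        clk_disable_unprepare(b);
err_b:
        clk_disable_unprepare(a);
        return ret;
}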
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
index 465ff1fd4785..137741b94122 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sun8i.c
@@ -1224,7 +1224,7 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
if (ret)
return -EINVAL;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
@@ -1244,7 +1244,7 @@ static int sun8i_dwmac_probe(struct platform_device *pdev)
ret = sun8i_dwmac_set_syscon(&pdev->dev, plat_dat);
if (ret)
- goto dwmac_deconfig;
+ return ret;
ret = sun8i_dwmac_init(pdev, plat_dat->bsp_priv);
if (ret)
@@ -1295,8 +1295,6 @@ dwmac_exit:
sun8i_dwmac_exit(pdev, gmac);
dwmac_syscon:
sun8i_dwmac_unset_syscon(gmac);
-dwmac_deconfig:
- stmmac_remove_config_dt(pdev, plat_dat);
return ret;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
index beceeae579bf..2653a9f0958c 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
@@ -108,36 +108,31 @@ static int sun7i_gmac_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
gmac = devm_kzalloc(dev, sizeof(*gmac), GFP_KERNEL);
- if (!gmac) {
- ret = -ENOMEM;
- goto err_remove_config_dt;
- }
+ if (!gmac)
+ return -ENOMEM;
ret = of_get_phy_mode(dev->of_node, &gmac->interface);
if (ret && ret != -ENODEV) {
dev_err(dev, "Can't get phy-mode\n");
- goto err_remove_config_dt;
+ return ret;
}
gmac->tx_clk = devm_clk_get(dev, "allwinner_gmac_tx");
if (IS_ERR(gmac->tx_clk)) {
dev_err(dev, "could not get tx clock\n");
- ret = PTR_ERR(gmac->tx_clk);
- goto err_remove_config_dt;
+ return PTR_ERR(gmac->tx_clk);
}
/* Optional regulator for PHY */
gmac->regulator = devm_regulator_get_optional(dev, "phy");
if (IS_ERR(gmac->regulator)) {
- if (PTR_ERR(gmac->regulator) == -EPROBE_DEFER) {
- ret = -EPROBE_DEFER;
- goto err_remove_config_dt;
- }
+ if (PTR_ERR(gmac->regulator) == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
dev_info(dev, "no regulator found\n");
gmac->regulator = NULL;
}
@@ -155,7 +150,7 @@ static int sun7i_gmac_probe(struct platform_device *pdev)
ret = sun7i_gmac_init(pdev, plat_dat->bsp_priv);
if (ret)
- goto err_remove_config_dt;
+ return ret;
ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
if (ret)
@@ -165,8 +160,6 @@ static int sun7i_gmac_probe(struct platform_device *pdev)
err_gmac_exit:
sun7i_gmac_exit(pdev, plat_dat->bsp_priv);
-err_remove_config_dt:
- stmmac_remove_config_dt(pdev, plat_dat);
return ret;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
index e0f3cbd36852..362f85136c3e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
@@ -284,7 +284,7 @@ static int tegra_mgbe_probe(struct platform_device *pdev)
if (err < 0)
goto disable_clks;
- plat = stmmac_probe_config_dt(pdev, res.mac);
+ plat = devm_stmmac_probe_config_dt(pdev, res.mac);
if (IS_ERR(plat)) {
err = PTR_ERR(plat);
goto disable_clks;
@@ -303,7 +303,7 @@ static int tegra_mgbe_probe(struct platform_device *pdev)
GFP_KERNEL);
if (!plat->mdio_bus_data) {
err = -ENOMEM;
- goto remove;
+ goto disable_clks;
}
}
@@ -321,7 +321,7 @@ static int tegra_mgbe_probe(struct platform_device *pdev)
500, 500 * 2000);
if (err < 0) {
dev_err(mgbe->dev, "timeout waiting for TX lane to become enabled\n");
- goto remove;
+ goto disable_clks;
}
plat->serdes_powerup = mgbe_uphy_lane_bringup_serdes_up;
@@ -342,12 +342,10 @@ static int tegra_mgbe_probe(struct platform_device *pdev)
err = stmmac_dvr_probe(&pdev->dev, plat, &res);
if (err < 0)
- goto remove;
+ goto disable_clks;
return 0;
-remove:
- stmmac_remove_config_dt(pdev, plat);
disable_clks:
clk_bulk_disable_unprepare(ARRAY_SIZE(mgbe_clks), mgbe->clks);
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
index 22d113fb8e09..a5a5cfa989c6 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-visconti.c
@@ -220,15 +220,13 @@ static int visconti_eth_dwmac_probe(struct platform_device *pdev)
if (ret)
return ret;
- plat_dat = stmmac_probe_config_dt(pdev, stmmac_res.mac);
+ plat_dat = devm_stmmac_probe_config_dt(pdev, stmmac_res.mac);
if (IS_ERR(plat_dat))
return PTR_ERR(plat_dat);
dwmac = devm_kzalloc(&pdev->dev, sizeof(*dwmac), GFP_KERNEL);
- if (!dwmac) {
- ret = -ENOMEM;
- goto remove_config;
- }
+ if (!dwmac)
+ return -ENOMEM;
spin_lock_init(&dwmac->lock);
dwmac->reg = stmmac_res.addr;
@@ -238,7 +236,7 @@ static int visconti_eth_dwmac_probe(struct platform_device *pdev)
ret = visconti_eth_clock_probe(pdev, plat_dat);
if (ret)
- goto remove_config;
+ return ret;
visconti_eth_init_hw(pdev, plat_dat);
@@ -252,22 +250,14 @@ static int visconti_eth_dwmac_probe(struct platform_device *pdev)
remove:
visconti_eth_clock_remove(pdev);
-remove_config:
- stmmac_remove_config_dt(pdev, plat_dat);
return ret;
}
static void visconti_eth_dwmac_remove(struct platform_device *pdev)
{
- struct net_device *ndev = platform_get_drvdata(pdev);
- struct stmmac_priv *priv = netdev_priv(ndev);
-
stmmac_pltfr_remove(pdev);
-
visconti_eth_clock_remove(pdev);
-
- stmmac_remove_config_dt(pdev, priv->plat);
}
static const struct of_device_id visconti_eth_dwmac_match[] = {
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
index 6aa5c0556d22..f628411ae4ae 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
@@ -981,7 +981,7 @@ static int __stmmac_set_coalesce(struct net_device *dev,
else if (queue >= max_cnt)
return -EINVAL;
- if (priv->use_riwt && (ec->rx_coalesce_usecs > 0)) {
+ if (priv->use_riwt) {
rx_riwt = stmmac_usec2riwt(ec->rx_coalesce_usecs, priv);
if ((rx_riwt > MAX_DMA_RIWT) || (rx_riwt < MIN_DMA_RIWT))
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 5801f4d50f95..3e50fd53a617 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -2551,9 +2551,13 @@ static void stmmac_bump_dma_threshold(struct stmmac_priv *priv, u32 chan)
* @priv: driver private structure
* @budget: napi budget limiting this functions packet handling
* @queue: TX queue index
+ * @pending_packets: signal to arm the TX coal timer
* Description: it reclaims the transmit resources after transmission completes.
+ * If some packets still need to be handled due to TX coalescing, set
+ * pending_packets to true to make NAPI arm the TX coal timer.
*/
-static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
+static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue,
+ bool *pending_packets)
{
struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
struct stmmac_txq_stats *txq_stats = &priv->xstats.txq_stats[queue];
@@ -2714,7 +2718,7 @@ static int stmmac_tx_clean(struct stmmac_priv *priv, int budget, u32 queue)
/* We still have pending packets, let's call for a new scheduling */
if (tx_q->dirty_tx != tx_q->cur_tx)
- stmmac_tx_timer_arm(priv, queue);
+ *pending_packets = true;
flags = u64_stats_update_begin_irqsave(&txq_stats->syncp);
txq_stats->tx_packets += tx_packets;
@@ -3004,13 +3008,25 @@ static void stmmac_tx_timer_arm(struct stmmac_priv *priv, u32 queue)
{
struct stmmac_tx_queue *tx_q = &priv->dma_conf.tx_queue[queue];
u32 tx_coal_timer = priv->tx_coal_timer[queue];
+ struct stmmac_channel *ch;
+ struct napi_struct *napi;
if (!tx_coal_timer)
return;
- hrtimer_start(&tx_q->txtimer,
- STMMAC_COAL_TIMER(tx_coal_timer),
- HRTIMER_MODE_REL);
+ ch = &priv->channel[tx_q->queue_index];
+ napi = tx_q->xsk_pool ? &ch->rxtx_napi : &ch->tx_napi;
+
+ /* Arm timer only if napi is not already scheduled.
+ * If napi is already scheduled, try to cancel any pending timer instead;
+ * it will be armed again from the next scheduled napi poll.
+ */
+ if (unlikely(!napi_is_scheduled(napi)))
+ hrtimer_start(&tx_q->txtimer,
+ STMMAC_COAL_TIMER(tx_coal_timer),
+ HRTIMER_MODE_REL);
+ else
+ hrtimer_try_to_cancel(&tx_q->txtimer);
}
/**
@@ -4415,6 +4431,16 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
WARN_ON(tx_q->tx_skbuff[first_entry]);
csum_insertion = (skb->ip_summed == CHECKSUM_PARTIAL);
+ /* DWMAC IPs can be synthesized to support tx coe only for a few tx
+ * queues. In that case, checksum offloading for those queues that don't
+ * support tx coe needs to fall back to software checksum calculation.
+ */
+ if (csum_insertion &&
+ priv->plat->tx_queues_cfg[queue].coe_unsupported) {
+ if (unlikely(skb_checksum_help(skb)))
+ goto dma_map_err;
+ csum_insertion = !csum_insertion;
+ }
if (likely(priv->extend_desc))
desc = (struct dma_desc *)(tx_q->dma_etx + entry);
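The per-queue checksum fallback added in this hunk hinges on skb_checksum_help() resolving a CHECKSUM_PARTIAL skb in software before the descriptors are prepared. A minimal sketch of that fallback in isolation, assuming a plain boolean for the queue capability (the demo_* name is hypothetical):

#include <linux/errno.h>
#include <linux/skbuff.h>

/* Fall back to a software checksum when this TX queue lacks COE. */
static int demo_tx_checksum(struct sk_buff *skb, bool queue_has_coe)
{
        if (skb->ip_summed == CHECKSUM_PARTIAL && !queue_has_coe) {
                /* Resolves the pending checksum in software. */
                if (skb_checksum_help(skb))
                        return -ENOMEM; /* treat like a mapping error and drop */
        }

        return 0;
}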
@@ -5558,6 +5584,7 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget)
container_of(napi, struct stmmac_channel, tx_napi);
struct stmmac_priv *priv = ch->priv_data;
struct stmmac_txq_stats *txq_stats;
+ bool pending_packets = false;
u32 chan = ch->index;
unsigned long flags;
int work_done;
@@ -5567,7 +5594,7 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget)
txq_stats->napi_poll++;
u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
- work_done = stmmac_tx_clean(priv, budget, chan);
+ work_done = stmmac_tx_clean(priv, budget, chan, &pending_packets);
work_done = min(work_done, budget);
if (work_done < budget && napi_complete_done(napi, work_done)) {
@@ -5578,6 +5605,10 @@ static int stmmac_napi_poll_tx(struct napi_struct *napi, int budget)
spin_unlock_irqrestore(&ch->lock, flags);
}
+ /* TX still has packets to handle, check if we need to arm the tx timer */
+ if (pending_packets)
+ stmmac_tx_timer_arm(priv, chan);
+
return work_done;
}
@@ -5586,6 +5617,7 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
struct stmmac_channel *ch =
container_of(napi, struct stmmac_channel, rxtx_napi);
struct stmmac_priv *priv = ch->priv_data;
+ bool tx_pending_packets = false;
int rx_done, tx_done, rxtx_done;
struct stmmac_rxq_stats *rxq_stats;
struct stmmac_txq_stats *txq_stats;
@@ -5602,7 +5634,7 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
txq_stats->napi_poll++;
u64_stats_update_end_irqrestore(&txq_stats->syncp, flags);
- tx_done = stmmac_tx_clean(priv, budget, chan);
+ tx_done = stmmac_tx_clean(priv, budget, chan, &tx_pending_packets);
tx_done = min(tx_done, budget);
rx_done = stmmac_rx_zc(priv, budget, chan);
@@ -5627,6 +5659,10 @@ static int stmmac_napi_poll_rxtx(struct napi_struct *napi, int budget)
spin_unlock_irqrestore(&ch->lock, flags);
}
+ /* TX still has packets to handle, check if we need to arm the tx timer */
+ if (tx_pending_packets)
+ stmmac_tx_timer_arm(priv, chan);
+
return min(rxtx_done, budget - 1);
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
index 2f0678f15fb7..1ffde555da47 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
@@ -276,6 +276,9 @@ static int stmmac_mtl_setup(struct platform_device *pdev,
plat->tx_queues_cfg[queue].use_prio = true;
}
+ plat->tx_queues_cfg[queue].coe_unsupported =
+ of_property_read_bool(q_node, "snps,coe-unsupported");
+
queue++;
}
if (queue != plat->tx_queues_to_use) {
@@ -385,6 +388,22 @@ static int stmmac_of_get_mac_mode(struct device_node *np)
}
/**
+ * stmmac_remove_config_dt - undo the effects of stmmac_probe_config_dt()
+ * @pdev: platform_device structure
+ * @plat: driver data platform structure
+ *
+ * Release resources claimed by stmmac_probe_config_dt().
+ */
+static void stmmac_remove_config_dt(struct platform_device *pdev,
+ struct plat_stmmacenet_data *plat)
+{
+ clk_disable_unprepare(plat->stmmac_clk);
+ clk_disable_unprepare(plat->pclk);
+ of_node_put(plat->phy_node);
+ of_node_put(plat->mdio_node);
+}
+
+/**
* stmmac_probe_config_dt - parse device-tree driver parameters
* @pdev: platform_device structure
* @mac: MAC address to use
@@ -392,7 +411,7 @@ static int stmmac_of_get_mac_mode(struct device_node *np)
* this function is to read the driver parameters from device-tree and
* set some private fields that will be used by the main at runtime.
*/
-struct plat_stmmacenet_data *
+static struct plat_stmmacenet_data *
stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
{
struct device_node *np = pdev->dev.of_node;
@@ -662,43 +681,14 @@ devm_stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
return plat;
}
-
-/**
- * stmmac_remove_config_dt - undo the effects of stmmac_probe_config_dt()
- * @pdev: platform_device structure
- * @plat: driver data platform structure
- *
- * Release resources claimed by stmmac_probe_config_dt().
- */
-void stmmac_remove_config_dt(struct platform_device *pdev,
- struct plat_stmmacenet_data *plat)
-{
- clk_disable_unprepare(plat->stmmac_clk);
- clk_disable_unprepare(plat->pclk);
- of_node_put(plat->phy_node);
- of_node_put(plat->mdio_node);
-}
#else
struct plat_stmmacenet_data *
-stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
-{
- return ERR_PTR(-EINVAL);
-}
-
-struct plat_stmmacenet_data *
devm_stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
{
return ERR_PTR(-EINVAL);
}
-
-void stmmac_remove_config_dt(struct platform_device *pdev,
- struct plat_stmmacenet_data *plat)
-{
-}
#endif /* CONFIG_OF */
-EXPORT_SYMBOL_GPL(stmmac_probe_config_dt);
EXPORT_SYMBOL_GPL(devm_stmmac_probe_config_dt);
-EXPORT_SYMBOL_GPL(stmmac_remove_config_dt);
int stmmac_get_platform_resources(struct platform_device *pdev,
struct stmmac_resources *stmmac_res)
@@ -807,7 +797,7 @@ static void devm_stmmac_pltfr_remove(void *data)
{
struct platform_device *pdev = data;
- stmmac_pltfr_remove_no_dt(pdev);
+ stmmac_pltfr_remove(pdev);
}
/**
@@ -834,12 +824,12 @@ int devm_stmmac_pltfr_probe(struct platform_device *pdev,
EXPORT_SYMBOL_GPL(devm_stmmac_pltfr_probe);
/**
- * stmmac_pltfr_remove_no_dt
+ * stmmac_pltfr_remove
* @pdev: pointer to the platform device
* Description: This undoes the effects of stmmac_pltfr_probe() by removing the
* driver and calling the platform's exit() callback.
*/
-void stmmac_pltfr_remove_no_dt(struct platform_device *pdev)
+void stmmac_pltfr_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct stmmac_priv *priv = netdev_priv(ndev);
@@ -848,23 +838,6 @@ void stmmac_pltfr_remove_no_dt(struct platform_device *pdev)
stmmac_dvr_remove(&pdev->dev);
stmmac_pltfr_exit(pdev, plat);
}
-EXPORT_SYMBOL_GPL(stmmac_pltfr_remove_no_dt);
-
-/**
- * stmmac_pltfr_remove
- * @pdev: platform device pointer
- * Description: this function calls the main to free the net resources
- * and calls the platforms hook and release the resources (e.g. mem).
- */
-void stmmac_pltfr_remove(struct platform_device *pdev)
-{
- struct net_device *ndev = platform_get_drvdata(pdev);
- struct stmmac_priv *priv = netdev_priv(ndev);
- struct plat_stmmacenet_data *plat = priv->plat;
-
- stmmac_pltfr_remove_no_dt(pdev);
- stmmac_remove_config_dt(pdev, plat);
-}
EXPORT_SYMBOL_GPL(stmmac_pltfr_remove);
/**
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.h b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.h
index c5565b2a70ac..bb6fc7e59aed 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.h
@@ -12,11 +12,7 @@
#include "stmmac.h"
struct plat_stmmacenet_data *
-stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac);
-struct plat_stmmacenet_data *
devm_stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac);
-void stmmac_remove_config_dt(struct platform_device *pdev,
- struct plat_stmmacenet_data *plat);
int stmmac_get_platform_resources(struct platform_device *pdev,
struct stmmac_resources *stmmac_res);
@@ -32,7 +28,6 @@ int stmmac_pltfr_probe(struct platform_device *pdev,
int devm_stmmac_pltfr_probe(struct platform_device *pdev,
struct plat_stmmacenet_data *plat,
struct stmmac_resources *res);
-void stmmac_pltfr_remove_no_dt(struct platform_device *pdev);
void stmmac_pltfr_remove(struct platform_device *pdev);
extern const struct dev_pm_ops stmmac_pltfr_pm_ops;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
index 3d7825cb30bb..bffa5c017032 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.c
@@ -81,7 +81,7 @@ static int stmmac_adjust_time(struct ptp_clock_info *ptp, s64 delta)
stmmac_adjust_systime(priv, priv->ptpaddr, sec, nsec, neg_adj, xmac);
write_unlock_irqrestore(&priv->ptp_lock, flags);
- /* Caculate new basetime and re-configured EST after PTP time adjust. */
+ /* Calculate new basetime and re-configure EST after the PTP time adjustment. */
if (est_rst) {
struct timespec64 current_time, time;
ktime_t current_time_ns, basetime;
@@ -191,26 +191,33 @@ static int stmmac_enable(struct ptp_clock_info *ptp,
priv->systime_flags);
write_unlock_irqrestore(&priv->ptp_lock, flags);
break;
- case PTP_CLK_REQ_EXTTS:
- if (on)
- priv->plat->flags |= STMMAC_FLAG_EXT_SNAPSHOT_EN;
- else
- priv->plat->flags &= ~STMMAC_FLAG_EXT_SNAPSHOT_EN;
+ case PTP_CLK_REQ_EXTTS: {
+ u8 channel;
+
mutex_lock(&priv->aux_ts_lock);
acr_value = readl(ptpaddr + PTP_ACR);
+ channel = ilog2(FIELD_GET(PTP_ACR_MASK, acr_value));
acr_value &= ~PTP_ACR_MASK;
+
if (on) {
+ if (FIELD_GET(PTP_ACR_MASK, acr_value)) {
+ netdev_err(priv->dev,
+ "Cannot enable auxiliary snapshot %d as auxiliary snapshot %d is already enabled",
+ rq->extts.index, channel);
+ mutex_unlock(&priv->aux_ts_lock);
+ return -EBUSY;
+ }
+
+ priv->plat->flags |= STMMAC_FLAG_EXT_SNAPSHOT_EN;
+
/* Enable External snapshot trigger */
- acr_value |= priv->plat->ext_snapshot_num;
+ acr_value |= PTP_ACR_ATSEN(rq->extts.index);
acr_value |= PTP_ACR_ATSFC;
- netdev_dbg(priv->dev, "Auxiliary Snapshot %d enabled.\n",
- priv->plat->ext_snapshot_num >>
- PTP_ACR_ATSEN_SHIFT);
} else {
- netdev_dbg(priv->dev, "Auxiliary Snapshot %d disabled.\n",
- priv->plat->ext_snapshot_num >>
- PTP_ACR_ATSEN_SHIFT);
+ priv->plat->flags &= ~STMMAC_FLAG_EXT_SNAPSHOT_EN;
}
+ netdev_dbg(priv->dev, "Auxiliary Snapshot %d %s.\n",
+ rq->extts.index, on ? "enabled" : "disabled");
writel(acr_value, ptpaddr + PTP_ACR);
mutex_unlock(&priv->aux_ts_lock);
/* wait for auxts fifo clear to finish */
@@ -218,6 +225,7 @@ static int stmmac_enable(struct ptp_clock_info *ptp,
!(acr_value & PTP_ACR_ATSFC),
10, 10000);
break;
+ }
default:
break;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
index d1fe4b46f162..fce3fba2ffd2 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ptp.h
@@ -79,7 +79,7 @@
#define PTP_ACR_ATSEN1 BIT(5) /* Auxiliary Snapshot 1 Enable */
#define PTP_ACR_ATSEN2 BIT(6) /* Auxiliary Snapshot 2 Enable */
#define PTP_ACR_ATSEN3 BIT(7) /* Auxiliary Snapshot 3 Enable */
-#define PTP_ACR_ATSEN_SHIFT 5 /* Auxiliary Snapshot shift */
+#define PTP_ACR_ATSEN(index) (PTP_ACR_ATSEN0 << (index))
#define PTP_ACR_MASK GENMASK(7, 4) /* Aux Snapshot Mask */
#define PMC_ART_VALUE0 0x01 /* PMC_ART[15:0] timer value */
#define PMC_ART_VALUE1 0x02 /* PMC_ART[31:16] timer value */
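The header change above turns the fixed ATSEN shift into an indexed macro. A quick expansion check, using DEMO_* copies of the defines so the real header is not restated:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/log2.h>
#include <linux/types.h>

#define DEMO_PTP_ACR_ATSEN0     BIT(4)
#define DEMO_PTP_ACR_ATSEN(i)   (DEMO_PTP_ACR_ATSEN0 << (i))    /* ATSEN(2) == BIT(6), the old ATSEN2 */
#define DEMO_PTP_ACR_MASK       GENMASK(7, 4)

/* How stmmac_enable() recovers the already-enabled channel from PTP_ACR:
 * with ATSEN(2) set, FIELD_GET() yields 4 and ilog2(4) == 2.
 */
static unsigned int demo_acr_channel(u32 acr_value)
{
        return ilog2(FIELD_GET(DEMO_PTP_ACR_MASK, acr_value));
}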
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
index f9e43fc32ee8..3ca1c2a816ff 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_selftests.c
@@ -802,7 +802,7 @@ static int stmmac_test_flowctrl(struct stmmac_priv *priv)
stmmac_start_rx(priv, priv->ioaddr, i);
local_bh_disable();
- napi_reschedule(&ch->rx_napi);
+ napi_schedule(&ch->rx_napi);
local_bh_enable();
}
diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
index 011d74087f86..21431f43e4c2 100644
--- a/drivers/net/ethernet/sun/niu.c
+++ b/drivers/net/ethernet/sun/niu.c
@@ -10132,7 +10132,7 @@ err_out:
return err;
}
-static int niu_of_remove(struct platform_device *op)
+static void niu_of_remove(struct platform_device *op)
{
struct net_device *dev = platform_get_drvdata(op);
@@ -10165,7 +10165,6 @@ static int niu_of_remove(struct platform_device *op)
free_netdev(dev);
}
- return 0;
}
static const struct of_device_id niu_match[] = {
@@ -10183,7 +10182,7 @@ static struct platform_driver niu_of_driver = {
.of_match_table = niu_match,
},
.probe = niu_of_probe,
- .remove = niu_of_remove,
+ .remove_new = niu_of_remove,
};
#endif /* CONFIG_SPARC64 */
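The niu change above, and the sunbmac/sunqe/spl2sw conversions that follow, are all the same mechanical switch to the void-returning .remove_new callback: the remove function drops its int return and the trailing "return 0;" disappears. A minimal shape of such a driver, with demo_* as placeholder names:

#include <linux/module.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
        return 0;
}

/* .remove_new returns void, so there is no return value to get wrong. */
static void demo_remove(struct platform_device *pdev)
{
        /* teardown only */
}

static struct platform_driver demo_driver = {
        .probe          = demo_probe,
        .remove_new     = demo_remove,
        .driver = {
                .name = "demo",
        },
};
module_platform_driver(demo_driver);

MODULE_LICENSE("GPL");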
diff --git a/drivers/net/ethernet/sun/sunbmac.c b/drivers/net/ethernet/sun/sunbmac.c
index cc34d92d2e3d..16c86b13c185 100644
--- a/drivers/net/ethernet/sun/sunbmac.c
+++ b/drivers/net/ethernet/sun/sunbmac.c
@@ -1234,7 +1234,7 @@ static int bigmac_sbus_probe(struct platform_device *op)
return bigmac_ether_init(op, qec_op);
}
-static int bigmac_sbus_remove(struct platform_device *op)
+static void bigmac_sbus_remove(struct platform_device *op)
{
struct bigmac *bp = platform_get_drvdata(op);
struct device *parent = op->dev.parent;
@@ -1255,8 +1255,6 @@ static int bigmac_sbus_remove(struct platform_device *op)
bp->bblock_dvma);
free_netdev(net_dev);
-
- return 0;
}
static const struct of_device_id bigmac_sbus_match[] = {
@@ -1274,7 +1272,7 @@ static struct platform_driver bigmac_sbus_driver = {
.of_match_table = bigmac_sbus_match,
},
.probe = bigmac_sbus_probe,
- .remove = bigmac_sbus_remove,
+ .remove_new = bigmac_sbus_remove,
};
module_platform_driver(bigmac_sbus_driver);
diff --git a/drivers/net/ethernet/sun/sunqe.c b/drivers/net/ethernet/sun/sunqe.c
index b37360f44972..aedd13c94225 100644
--- a/drivers/net/ethernet/sun/sunqe.c
+++ b/drivers/net/ethernet/sun/sunqe.c
@@ -933,7 +933,7 @@ static int qec_sbus_probe(struct platform_device *op)
return qec_ether_init(op);
}
-static int qec_sbus_remove(struct platform_device *op)
+static void qec_sbus_remove(struct platform_device *op)
{
struct sunqe *qp = platform_get_drvdata(op);
struct net_device *net_dev = qp->dev;
@@ -948,8 +948,6 @@ static int qec_sbus_remove(struct platform_device *op)
qp->buffers, qp->buffers_dvma);
free_netdev(net_dev);
-
- return 0;
}
static const struct of_device_id qec_sbus_match[] = {
@@ -967,7 +965,7 @@ static struct platform_driver qec_sbus_driver = {
.of_match_table = qec_sbus_match,
},
.probe = qec_sbus_probe,
- .remove = qec_sbus_remove,
+ .remove_new = qec_sbus_remove,
};
static int __init qec_init(void)
diff --git a/drivers/net/ethernet/sunplus/spl2sw_driver.c b/drivers/net/ethernet/sunplus/spl2sw_driver.c
index c499a14314f1..391a1bc7f446 100644
--- a/drivers/net/ethernet/sunplus/spl2sw_driver.c
+++ b/drivers/net/ethernet/sunplus/spl2sw_driver.c
@@ -511,7 +511,7 @@ out_clk_disable:
return ret;
}
-static int spl2sw_remove(struct platform_device *pdev)
+static void spl2sw_remove(struct platform_device *pdev)
{
struct spl2sw_common *comm;
int i;
@@ -538,8 +538,6 @@ static int spl2sw_remove(struct platform_device *pdev)
spl2sw_mdio_remove(comm);
clk_disable_unprepare(comm->clk);
-
- return 0;
}
static const struct of_device_id spl2sw_of_match[] = {
@@ -551,7 +549,7 @@ MODULE_DEVICE_TABLE(of, spl2sw_of_match);
static struct platform_driver spl2sw_driver = {
.probe = spl2sw_probe,
- .remove = spl2sw_remove,
+ .remove_new = spl2sw_remove,
.driver = {
.name = "sp7021_emac",
.of_match_table = spl2sw_of_match,
diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
index cac61f5d3fd4..e60b557d59b9 100644
--- a/drivers/net/ethernet/ti/Kconfig
+++ b/drivers/net/ethernet/ti/Kconfig
@@ -6,7 +6,7 @@
config NET_VENDOR_TI
bool "Texas Instruments (TI) devices"
default y
- depends on PCI || EISA || AR7 || ARCH_DAVINCI || ARCH_OMAP2PLUS || ARCH_KEYSTONE || ARCH_K3
+ depends on PCI || EISA || ARCH_DAVINCI || ARCH_OMAP2PLUS || ARCH_KEYSTONE || ARCH_K3
help
If you have a network (Ethernet) card belonging to this class, say Y.
@@ -180,13 +180,6 @@ config TLAN
Please email feedback to <torben.mathiasen@compaq.com>.
-config CPMAC
- tristate "TI AR7 CPMAC Ethernet support"
- depends on AR7
- select PHYLIB
- help
- TI AR7 CPMAC Ethernet support
-
config TI_ICSSG_PRUETH
tristate "TI Gigabit PRU Ethernet driver"
select PHYLIB
diff --git a/drivers/net/ethernet/ti/Makefile b/drivers/net/ethernet/ti/Makefile
index 67bed861f31d..27de1d697134 100644
--- a/drivers/net/ethernet/ti/Makefile
+++ b/drivers/net/ethernet/ti/Makefile
@@ -8,7 +8,6 @@ obj-$(CONFIG_TI_DAVINCI_EMAC) += cpsw-common.o
obj-$(CONFIG_TI_CPSW_SWITCHDEV) += cpsw-common.o
obj-$(CONFIG_TLAN) += tlan.o
-obj-$(CONFIG_CPMAC) += cpmac.o
obj-$(CONFIG_TI_DAVINCI_EMAC) += ti_davinci_emac.o
ti_davinci_emac-y := davinci_emac.o davinci_cpdma.o
obj-$(CONFIG_TI_DAVINCI_MDIO) += davinci_mdio.o
diff --git a/drivers/net/ethernet/ti/cpmac.c b/drivers/net/ethernet/ti/cpmac.c
deleted file mode 100644
index 80eeeb463c4f..000000000000
--- a/drivers/net/ethernet/ti/cpmac.c
+++ /dev/null
@@ -1,1251 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0+
-/*
- * Copyright (C) 2006, 2007 Eugene Konev
- *
- */
-
-#include <linux/module.h>
-#include <linux/interrupt.h>
-#include <linux/moduleparam.h>
-
-#include <linux/sched.h>
-#include <linux/kernel.h>
-#include <linux/slab.h>
-#include <linux/errno.h>
-#include <linux/types.h>
-#include <linux/delay.h>
-
-#include <linux/netdevice.h>
-#include <linux/if_vlan.h>
-#include <linux/etherdevice.h>
-#include <linux/ethtool.h>
-#include <linux/skbuff.h>
-#include <linux/mii.h>
-#include <linux/phy.h>
-#include <linux/phy_fixed.h>
-#include <linux/platform_device.h>
-#include <linux/dma-mapping.h>
-#include <linux/clk.h>
-#include <linux/gpio.h>
-#include <linux/atomic.h>
-
-#include <asm/mach-ar7/ar7.h>
-
-MODULE_AUTHOR("Eugene Konev <ejka@imfi.kspu.ru>");
-MODULE_DESCRIPTION("TI AR7 ethernet driver (CPMAC)");
-MODULE_LICENSE("GPL");
-MODULE_ALIAS("platform:cpmac");
-
-static int debug_level = 8;
-static int dumb_switch;
-
-/* Next 2 are only used in cpmac_probe, so it's pointless to change them */
-module_param(debug_level, int, 0444);
-module_param(dumb_switch, int, 0444);
-
-MODULE_PARM_DESC(debug_level, "Number of NETIF_MSG bits to enable");
-MODULE_PARM_DESC(dumb_switch, "Assume switch is not connected to MDIO bus");
-
-#define CPMAC_VERSION "0.5.2"
-/* frame size + 802.1q tag + FCS size */
-#define CPMAC_SKB_SIZE (ETH_FRAME_LEN + ETH_FCS_LEN + VLAN_HLEN)
-#define CPMAC_QUEUES 8
-
-/* Ethernet registers */
-#define CPMAC_TX_CONTROL 0x0004
-#define CPMAC_TX_TEARDOWN 0x0008
-#define CPMAC_RX_CONTROL 0x0014
-#define CPMAC_RX_TEARDOWN 0x0018
-#define CPMAC_MBP 0x0100
-#define MBP_RXPASSCRC 0x40000000
-#define MBP_RXQOS 0x20000000
-#define MBP_RXNOCHAIN 0x10000000
-#define MBP_RXCMF 0x01000000
-#define MBP_RXSHORT 0x00800000
-#define MBP_RXCEF 0x00400000
-#define MBP_RXPROMISC 0x00200000
-#define MBP_PROMISCCHAN(channel) (((channel) & 0x7) << 16)
-#define MBP_RXBCAST 0x00002000
-#define MBP_BCASTCHAN(channel) (((channel) & 0x7) << 8)
-#define MBP_RXMCAST 0x00000020
-#define MBP_MCASTCHAN(channel) ((channel) & 0x7)
-#define CPMAC_UNICAST_ENABLE 0x0104
-#define CPMAC_UNICAST_CLEAR 0x0108
-#define CPMAC_MAX_LENGTH 0x010c
-#define CPMAC_BUFFER_OFFSET 0x0110
-#define CPMAC_MAC_CONTROL 0x0160
-#define MAC_TXPTYPE 0x00000200
-#define MAC_TXPACE 0x00000040
-#define MAC_MII 0x00000020
-#define MAC_TXFLOW 0x00000010
-#define MAC_RXFLOW 0x00000008
-#define MAC_MTEST 0x00000004
-#define MAC_LOOPBACK 0x00000002
-#define MAC_FDX 0x00000001
-#define CPMAC_MAC_STATUS 0x0164
-#define MAC_STATUS_QOS 0x00000004
-#define MAC_STATUS_RXFLOW 0x00000002
-#define MAC_STATUS_TXFLOW 0x00000001
-#define CPMAC_TX_INT_ENABLE 0x0178
-#define CPMAC_TX_INT_CLEAR 0x017c
-#define CPMAC_MAC_INT_VECTOR 0x0180
-#define MAC_INT_STATUS 0x00080000
-#define MAC_INT_HOST 0x00040000
-#define MAC_INT_RX 0x00020000
-#define MAC_INT_TX 0x00010000
-#define CPMAC_MAC_EOI_VECTOR 0x0184
-#define CPMAC_RX_INT_ENABLE 0x0198
-#define CPMAC_RX_INT_CLEAR 0x019c
-#define CPMAC_MAC_INT_ENABLE 0x01a8
-#define CPMAC_MAC_INT_CLEAR 0x01ac
-#define CPMAC_MAC_ADDR_LO(channel) (0x01b0 + (channel) * 4)
-#define CPMAC_MAC_ADDR_MID 0x01d0
-#define CPMAC_MAC_ADDR_HI 0x01d4
-#define CPMAC_MAC_HASH_LO 0x01d8
-#define CPMAC_MAC_HASH_HI 0x01dc
-#define CPMAC_TX_PTR(channel) (0x0600 + (channel) * 4)
-#define CPMAC_RX_PTR(channel) (0x0620 + (channel) * 4)
-#define CPMAC_TX_ACK(channel) (0x0640 + (channel) * 4)
-#define CPMAC_RX_ACK(channel) (0x0660 + (channel) * 4)
-#define CPMAC_REG_END 0x0680
-
-/* Rx/Tx statistics
- * TODO: use some of them to fill stats in cpmac_stats()
- */
-#define CPMAC_STATS_RX_GOOD 0x0200
-#define CPMAC_STATS_RX_BCAST 0x0204
-#define CPMAC_STATS_RX_MCAST 0x0208
-#define CPMAC_STATS_RX_PAUSE 0x020c
-#define CPMAC_STATS_RX_CRC 0x0210
-#define CPMAC_STATS_RX_ALIGN 0x0214
-#define CPMAC_STATS_RX_OVER 0x0218
-#define CPMAC_STATS_RX_JABBER 0x021c
-#define CPMAC_STATS_RX_UNDER 0x0220
-#define CPMAC_STATS_RX_FRAG 0x0224
-#define CPMAC_STATS_RX_FILTER 0x0228
-#define CPMAC_STATS_RX_QOSFILTER 0x022c
-#define CPMAC_STATS_RX_OCTETS 0x0230
-
-#define CPMAC_STATS_TX_GOOD 0x0234
-#define CPMAC_STATS_TX_BCAST 0x0238
-#define CPMAC_STATS_TX_MCAST 0x023c
-#define CPMAC_STATS_TX_PAUSE 0x0240
-#define CPMAC_STATS_TX_DEFER 0x0244
-#define CPMAC_STATS_TX_COLLISION 0x0248
-#define CPMAC_STATS_TX_SINGLECOLL 0x024c
-#define CPMAC_STATS_TX_MULTICOLL 0x0250
-#define CPMAC_STATS_TX_EXCESSCOLL 0x0254
-#define CPMAC_STATS_TX_LATECOLL 0x0258
-#define CPMAC_STATS_TX_UNDERRUN 0x025c
-#define CPMAC_STATS_TX_CARRIERSENSE 0x0260
-#define CPMAC_STATS_TX_OCTETS 0x0264
-
-#define cpmac_read(base, reg) (readl((void __iomem *)(base) + (reg)))
-#define cpmac_write(base, reg, val) (writel(val, (void __iomem *)(base) + \
- (reg)))
-
-/* MDIO bus */
-#define CPMAC_MDIO_VERSION 0x0000
-#define CPMAC_MDIO_CONTROL 0x0004
-#define MDIOC_IDLE 0x80000000
-#define MDIOC_ENABLE 0x40000000
-#define MDIOC_PREAMBLE 0x00100000
-#define MDIOC_FAULT 0x00080000
-#define MDIOC_FAULTDETECT 0x00040000
-#define MDIOC_INTTEST 0x00020000
-#define MDIOC_CLKDIV(div) ((div) & 0xff)
-#define CPMAC_MDIO_ALIVE 0x0008
-#define CPMAC_MDIO_LINK 0x000c
-#define CPMAC_MDIO_ACCESS(channel) (0x0080 + (channel) * 8)
-#define MDIO_BUSY 0x80000000
-#define MDIO_WRITE 0x40000000
-#define MDIO_REG(reg) (((reg) & 0x1f) << 21)
-#define MDIO_PHY(phy) (((phy) & 0x1f) << 16)
-#define MDIO_DATA(data) ((data) & 0xffff)
-#define CPMAC_MDIO_PHYSEL(channel) (0x0084 + (channel) * 8)
-#define PHYSEL_LINKSEL 0x00000040
-#define PHYSEL_LINKINT 0x00000020
-
-struct cpmac_desc {
- u32 hw_next;
- u32 hw_data;
- u16 buflen;
- u16 bufflags;
- u16 datalen;
- u16 dataflags;
-#define CPMAC_SOP 0x8000
-#define CPMAC_EOP 0x4000
-#define CPMAC_OWN 0x2000
-#define CPMAC_EOQ 0x1000
- struct sk_buff *skb;
- struct cpmac_desc *next;
- struct cpmac_desc *prev;
- dma_addr_t mapping;
- dma_addr_t data_mapping;
-};
-
-struct cpmac_priv {
- spinlock_t lock;
- spinlock_t rx_lock;
- struct cpmac_desc *rx_head;
- int ring_size;
- struct cpmac_desc *desc_ring;
- dma_addr_t dma_ring;
- void __iomem *regs;
- struct mii_bus *mii_bus;
- char phy_name[MII_BUS_ID_SIZE + 3];
- int oldlink, oldspeed, oldduplex;
- u32 msg_enable;
- struct net_device *dev;
- struct work_struct reset_work;
- struct platform_device *pdev;
- struct napi_struct napi;
- atomic_t reset_pending;
-};
-
-static irqreturn_t cpmac_irq(int, void *);
-static void cpmac_hw_start(struct net_device *dev);
-static void cpmac_hw_stop(struct net_device *dev);
-static int cpmac_stop(struct net_device *dev);
-static int cpmac_open(struct net_device *dev);
-
-static void cpmac_dump_regs(struct net_device *dev)
-{
- int i;
- struct cpmac_priv *priv = netdev_priv(dev);
-
- for (i = 0; i < CPMAC_REG_END; i += 4) {
- if (i % 16 == 0) {
- if (i)
- printk("\n");
- printk("%s: reg[%p]:", dev->name, priv->regs + i);
- }
- printk(" %08x", cpmac_read(priv->regs, i));
- }
- printk("\n");
-}
-
-static void cpmac_dump_desc(struct net_device *dev, struct cpmac_desc *desc)
-{
- int i;
-
- printk("%s: desc[%p]:", dev->name, desc);
- for (i = 0; i < sizeof(*desc) / 4; i++)
- printk(" %08x", ((u32 *)desc)[i]);
- printk("\n");
-}
-
-static void cpmac_dump_all_desc(struct net_device *dev)
-{
- struct cpmac_priv *priv = netdev_priv(dev);
- struct cpmac_desc *dump = priv->rx_head;
-
- do {
- cpmac_dump_desc(dev, dump);
- dump = dump->next;
- } while (dump != priv->rx_head);
-}
-
-static void cpmac_dump_skb(struct net_device *dev, struct sk_buff *skb)
-{
- int i;
-
- printk("%s: skb 0x%p, len=%d\n", dev->name, skb, skb->len);
- for (i = 0; i < skb->len; i++) {
- if (i % 16 == 0) {
- if (i)
- printk("\n");
- printk("%s: data[%p]:", dev->name, skb->data + i);
- }
- printk(" %02x", ((u8 *)skb->data)[i]);
- }
- printk("\n");
-}
-
-static int cpmac_mdio_read(struct mii_bus *bus, int phy_id, int reg)
-{
- u32 val;
-
- while (cpmac_read(bus->priv, CPMAC_MDIO_ACCESS(0)) & MDIO_BUSY)
- cpu_relax();
- cpmac_write(bus->priv, CPMAC_MDIO_ACCESS(0), MDIO_BUSY | MDIO_REG(reg) |
- MDIO_PHY(phy_id));
- while ((val = cpmac_read(bus->priv, CPMAC_MDIO_ACCESS(0))) & MDIO_BUSY)
- cpu_relax();
-
- return MDIO_DATA(val);
-}
-
-static int cpmac_mdio_write(struct mii_bus *bus, int phy_id,
- int reg, u16 val)
-{
- while (cpmac_read(bus->priv, CPMAC_MDIO_ACCESS(0)) & MDIO_BUSY)
- cpu_relax();
- cpmac_write(bus->priv, CPMAC_MDIO_ACCESS(0), MDIO_BUSY | MDIO_WRITE |
- MDIO_REG(reg) | MDIO_PHY(phy_id) | MDIO_DATA(val));
-
- return 0;
-}
-
-static int cpmac_mdio_reset(struct mii_bus *bus)
-{
- struct clk *cpmac_clk;
-
- cpmac_clk = clk_get(&bus->dev, "cpmac");
- if (IS_ERR(cpmac_clk)) {
- pr_err("unable to get cpmac clock\n");
- return -1;
- }
- ar7_device_reset(AR7_RESET_BIT_MDIO);
- cpmac_write(bus->priv, CPMAC_MDIO_CONTROL, MDIOC_ENABLE |
- MDIOC_CLKDIV(clk_get_rate(cpmac_clk) / 2200000 - 1));
-
- return 0;
-}
-
-static struct mii_bus *cpmac_mii;
-
-static void cpmac_set_multicast_list(struct net_device *dev)
-{
- struct netdev_hw_addr *ha;
- u8 tmp;
- u32 mbp, bit, hash[2] = { 0, };
- struct cpmac_priv *priv = netdev_priv(dev);
-
- mbp = cpmac_read(priv->regs, CPMAC_MBP);
- if (dev->flags & IFF_PROMISC) {
- cpmac_write(priv->regs, CPMAC_MBP, (mbp & ~MBP_PROMISCCHAN(0)) |
- MBP_RXPROMISC);
- } else {
- cpmac_write(priv->regs, CPMAC_MBP, mbp & ~MBP_RXPROMISC);
- if (dev->flags & IFF_ALLMULTI) {
- /* enable all multicast mode */
- cpmac_write(priv->regs, CPMAC_MAC_HASH_LO, 0xffffffff);
- cpmac_write(priv->regs, CPMAC_MAC_HASH_HI, 0xffffffff);
- } else {
- /* cpmac uses some strange mac address hashing
- * (not crc32)
- */
- netdev_for_each_mc_addr(ha, dev) {
- bit = 0;
- tmp = ha->addr[0];
- bit ^= (tmp >> 2) ^ (tmp << 4);
- tmp = ha->addr[1];
- bit ^= (tmp >> 4) ^ (tmp << 2);
- tmp = ha->addr[2];
- bit ^= (tmp >> 6) ^ tmp;
- tmp = ha->addr[3];
- bit ^= (tmp >> 2) ^ (tmp << 4);
- tmp = ha->addr[4];
- bit ^= (tmp >> 4) ^ (tmp << 2);
- tmp = ha->addr[5];
- bit ^= (tmp >> 6) ^ tmp;
- bit &= 0x3f;
- hash[bit / 32] |= 1 << (bit % 32);
- }
-
- cpmac_write(priv->regs, CPMAC_MAC_HASH_LO, hash[0]);
- cpmac_write(priv->regs, CPMAC_MAC_HASH_HI, hash[1]);
- }
- }
-}
-
-static struct sk_buff *cpmac_rx_one(struct cpmac_priv *priv,
- struct cpmac_desc *desc)
-{
- struct sk_buff *skb, *result = NULL;
-
- if (unlikely(netif_msg_hw(priv)))
- cpmac_dump_desc(priv->dev, desc);
- cpmac_write(priv->regs, CPMAC_RX_ACK(0), (u32)desc->mapping);
- if (unlikely(!desc->datalen)) {
- if (netif_msg_rx_err(priv) && net_ratelimit())
- netdev_warn(priv->dev, "rx: spurious interrupt\n");
-
- return NULL;
- }
-
- skb = netdev_alloc_skb_ip_align(priv->dev, CPMAC_SKB_SIZE);
- if (likely(skb)) {
- skb_put(desc->skb, desc->datalen);
- desc->skb->protocol = eth_type_trans(desc->skb, priv->dev);
- skb_checksum_none_assert(desc->skb);
- priv->dev->stats.rx_packets++;
- priv->dev->stats.rx_bytes += desc->datalen;
- result = desc->skb;
- dma_unmap_single(&priv->dev->dev, desc->data_mapping,
- CPMAC_SKB_SIZE, DMA_FROM_DEVICE);
- desc->skb = skb;
- desc->data_mapping = dma_map_single(&priv->dev->dev, skb->data,
- CPMAC_SKB_SIZE,
- DMA_FROM_DEVICE);
- desc->hw_data = (u32)desc->data_mapping;
- if (unlikely(netif_msg_pktdata(priv))) {
- netdev_dbg(priv->dev, "received packet:\n");
- cpmac_dump_skb(priv->dev, result);
- }
- } else {
- if (netif_msg_rx_err(priv) && net_ratelimit())
- netdev_warn(priv->dev,
- "low on skbs, dropping packet\n");
-
- priv->dev->stats.rx_dropped++;
- }
-
- desc->buflen = CPMAC_SKB_SIZE;
- desc->dataflags = CPMAC_OWN;
-
- return result;
-}
-
-static int cpmac_poll(struct napi_struct *napi, int budget)
-{
- struct sk_buff *skb;
- struct cpmac_desc *desc, *restart;
- struct cpmac_priv *priv = container_of(napi, struct cpmac_priv, napi);
- int received = 0, processed = 0;
-
- spin_lock(&priv->rx_lock);
- if (unlikely(!priv->rx_head)) {
- if (netif_msg_rx_err(priv) && net_ratelimit())
- netdev_warn(priv->dev, "rx: polling, but no queue\n");
-
- spin_unlock(&priv->rx_lock);
- napi_complete(napi);
- return 0;
- }
-
- desc = priv->rx_head;
- restart = NULL;
- while (((desc->dataflags & CPMAC_OWN) == 0) && (received < budget)) {
- processed++;
-
- if ((desc->dataflags & CPMAC_EOQ) != 0) {
- /* The last update to eoq->hw_next didn't happen
- * soon enough, and the receiver stopped here.
- * Remember this descriptor so we can restart
- * the receiver after freeing some space.
- */
- if (unlikely(restart)) {
- if (netif_msg_rx_err(priv))
- netdev_err(priv->dev, "poll found a"
- " duplicate EOQ: %p and %p\n",
- restart, desc);
- goto fatal_error;
- }
-
- restart = desc->next;
- }
-
- skb = cpmac_rx_one(priv, desc);
- if (likely(skb)) {
- netif_receive_skb(skb);
- received++;
- }
- desc = desc->next;
- }
-
- if (desc != priv->rx_head) {
- /* We freed some buffers, but not the whole ring,
- * add what we did free to the rx list
- */
- desc->prev->hw_next = (u32)0;
- priv->rx_head->prev->hw_next = priv->rx_head->mapping;
- }
-
- /* Optimization: If we did not actually process an EOQ (perhaps because
- * of quota limits), check to see if the tail of the queue has EOQ set.
- * We should immediately restart in that case so that the receiver can
- * restart and run in parallel with more packet processing.
- * This lets us handle slightly larger bursts before running
- * out of ring space (assuming dev->weight < ring_size)
- */
-
- if (!restart &&
- (priv->rx_head->prev->dataflags & (CPMAC_OWN|CPMAC_EOQ))
- == CPMAC_EOQ &&
- (priv->rx_head->dataflags & CPMAC_OWN) != 0) {
- /* reset EOQ so the poll loop (above) doesn't try to
- * restart this when it eventually gets to this descriptor.
- */
- priv->rx_head->prev->dataflags &= ~CPMAC_EOQ;
- restart = priv->rx_head;
- }
-
- if (restart) {
- priv->dev->stats.rx_errors++;
- priv->dev->stats.rx_fifo_errors++;
- if (netif_msg_rx_err(priv) && net_ratelimit())
- netdev_warn(priv->dev, "rx dma ring overrun\n");
-
- if (unlikely((restart->dataflags & CPMAC_OWN) == 0)) {
- if (netif_msg_drv(priv))
- netdev_err(priv->dev, "cpmac_poll is trying "
- "to restart rx from a descriptor "
- "that's not free: %p\n", restart);
- goto fatal_error;
- }
-
- cpmac_write(priv->regs, CPMAC_RX_PTR(0), restart->mapping);
- }
-
- priv->rx_head = desc;
- spin_unlock(&priv->rx_lock);
- if (unlikely(netif_msg_rx_status(priv)))
- netdev_dbg(priv->dev, "poll processed %d packets\n", received);
-
- if (processed == 0) {
- /* we ran out of packets to read,
- * revert to interrupt-driven mode
- */
- napi_complete(napi);
- cpmac_write(priv->regs, CPMAC_RX_INT_ENABLE, 1);
- return 0;
- }
-
- return 1;
-
-fatal_error:
- /* Something went horribly wrong.
- * Reset hardware to try to recover rather than wedging.
- */
- if (netif_msg_drv(priv)) {
- netdev_err(priv->dev, "cpmac_poll is confused. "
- "Resetting hardware\n");
- cpmac_dump_all_desc(priv->dev);
- netdev_dbg(priv->dev, "RX_PTR(0)=0x%08x RX_ACK(0)=0x%08x\n",
- cpmac_read(priv->regs, CPMAC_RX_PTR(0)),
- cpmac_read(priv->regs, CPMAC_RX_ACK(0)));
- }
-
- spin_unlock(&priv->rx_lock);
- napi_complete(napi);
- netif_tx_stop_all_queues(priv->dev);
- napi_disable(&priv->napi);
-
- atomic_inc(&priv->reset_pending);
- cpmac_hw_stop(priv->dev);
- if (!schedule_work(&priv->reset_work))
- atomic_dec(&priv->reset_pending);
-
- return 0;
-
-}
-
-static netdev_tx_t cpmac_start_xmit(struct sk_buff *skb, struct net_device *dev)
-{
- int queue;
- unsigned int len;
- struct cpmac_desc *desc;
- struct cpmac_priv *priv = netdev_priv(dev);
-
- if (unlikely(atomic_read(&priv->reset_pending)))
- return NETDEV_TX_BUSY;
-
- if (unlikely(skb_padto(skb, ETH_ZLEN)))
- return NETDEV_TX_OK;
-
- len = max_t(unsigned int, skb->len, ETH_ZLEN);
- queue = skb_get_queue_mapping(skb);
- netif_stop_subqueue(dev, queue);
-
- desc = &priv->desc_ring[queue];
- if (unlikely(desc->dataflags & CPMAC_OWN)) {
- if (netif_msg_tx_err(priv) && net_ratelimit())
- netdev_warn(dev, "tx dma ring full\n");
-
- return NETDEV_TX_BUSY;
- }
-
- spin_lock(&priv->lock);
- spin_unlock(&priv->lock);
- desc->dataflags = CPMAC_SOP | CPMAC_EOP | CPMAC_OWN;
- desc->skb = skb;
- desc->data_mapping = dma_map_single(&dev->dev, skb->data, len,
- DMA_TO_DEVICE);
- desc->hw_data = (u32)desc->data_mapping;
- desc->datalen = len;
- desc->buflen = len;
- if (unlikely(netif_msg_tx_queued(priv)))
- netdev_dbg(dev, "sending 0x%p, len=%d\n", skb, skb->len);
- if (unlikely(netif_msg_hw(priv)))
- cpmac_dump_desc(dev, desc);
- if (unlikely(netif_msg_pktdata(priv)))
- cpmac_dump_skb(dev, skb);
- cpmac_write(priv->regs, CPMAC_TX_PTR(queue), (u32)desc->mapping);
-
- return NETDEV_TX_OK;
-}
-
-static void cpmac_end_xmit(struct net_device *dev, int queue)
-{
- struct cpmac_desc *desc;
- struct cpmac_priv *priv = netdev_priv(dev);
-
- desc = &priv->desc_ring[queue];
- cpmac_write(priv->regs, CPMAC_TX_ACK(queue), (u32)desc->mapping);
- if (likely(desc->skb)) {
- spin_lock(&priv->lock);
- dev->stats.tx_packets++;
- dev->stats.tx_bytes += desc->skb->len;
- spin_unlock(&priv->lock);
- dma_unmap_single(&dev->dev, desc->data_mapping, desc->skb->len,
- DMA_TO_DEVICE);
-
- if (unlikely(netif_msg_tx_done(priv)))
- netdev_dbg(dev, "sent 0x%p, len=%d\n",
- desc->skb, desc->skb->len);
-
- dev_consume_skb_irq(desc->skb);
- desc->skb = NULL;
- if (__netif_subqueue_stopped(dev, queue))
- netif_wake_subqueue(dev, queue);
- } else {
- if (netif_msg_tx_err(priv) && net_ratelimit())
- netdev_warn(dev, "end_xmit: spurious interrupt\n");
- if (__netif_subqueue_stopped(dev, queue))
- netif_wake_subqueue(dev, queue);
- }
-}
-
-static void cpmac_hw_stop(struct net_device *dev)
-{
- int i;
- struct cpmac_priv *priv = netdev_priv(dev);
- struct plat_cpmac_data *pdata = dev_get_platdata(&priv->pdev->dev);
-
- ar7_device_reset(pdata->reset_bit);
- cpmac_write(priv->regs, CPMAC_RX_CONTROL,
- cpmac_read(priv->regs, CPMAC_RX_CONTROL) & ~1);
- cpmac_write(priv->regs, CPMAC_TX_CONTROL,
- cpmac_read(priv->regs, CPMAC_TX_CONTROL) & ~1);
- for (i = 0; i < 8; i++) {
- cpmac_write(priv->regs, CPMAC_TX_PTR(i), 0);
- cpmac_write(priv->regs, CPMAC_RX_PTR(i), 0);
- }
- cpmac_write(priv->regs, CPMAC_UNICAST_CLEAR, 0xff);
- cpmac_write(priv->regs, CPMAC_RX_INT_CLEAR, 0xff);
- cpmac_write(priv->regs, CPMAC_TX_INT_CLEAR, 0xff);
- cpmac_write(priv->regs, CPMAC_MAC_INT_CLEAR, 0xff);
- cpmac_write(priv->regs, CPMAC_MAC_CONTROL,
- cpmac_read(priv->regs, CPMAC_MAC_CONTROL) & ~MAC_MII);
-}
-
-static void cpmac_hw_start(struct net_device *dev)
-{
- int i;
- struct cpmac_priv *priv = netdev_priv(dev);
- struct plat_cpmac_data *pdata = dev_get_platdata(&priv->pdev->dev);
-
- ar7_device_reset(pdata->reset_bit);
- for (i = 0; i < 8; i++) {
- cpmac_write(priv->regs, CPMAC_TX_PTR(i), 0);
- cpmac_write(priv->regs, CPMAC_RX_PTR(i), 0);
- }
- cpmac_write(priv->regs, CPMAC_RX_PTR(0), priv->rx_head->mapping);
-
- cpmac_write(priv->regs, CPMAC_MBP, MBP_RXSHORT | MBP_RXBCAST |
- MBP_RXMCAST);
- cpmac_write(priv->regs, CPMAC_BUFFER_OFFSET, 0);
- for (i = 0; i < 8; i++)
- cpmac_write(priv->regs, CPMAC_MAC_ADDR_LO(i), dev->dev_addr[5]);
- cpmac_write(priv->regs, CPMAC_MAC_ADDR_MID, dev->dev_addr[4]);
- cpmac_write(priv->regs, CPMAC_MAC_ADDR_HI, dev->dev_addr[0] |
- (dev->dev_addr[1] << 8) | (dev->dev_addr[2] << 16) |
- (dev->dev_addr[3] << 24));
- cpmac_write(priv->regs, CPMAC_MAX_LENGTH, CPMAC_SKB_SIZE);
- cpmac_write(priv->regs, CPMAC_UNICAST_CLEAR, 0xff);
- cpmac_write(priv->regs, CPMAC_RX_INT_CLEAR, 0xff);
- cpmac_write(priv->regs, CPMAC_TX_INT_CLEAR, 0xff);
- cpmac_write(priv->regs, CPMAC_MAC_INT_CLEAR, 0xff);
- cpmac_write(priv->regs, CPMAC_UNICAST_ENABLE, 1);
- cpmac_write(priv->regs, CPMAC_RX_INT_ENABLE, 1);
- cpmac_write(priv->regs, CPMAC_TX_INT_ENABLE, 0xff);
- cpmac_write(priv->regs, CPMAC_MAC_INT_ENABLE, 3);
-
- cpmac_write(priv->regs, CPMAC_RX_CONTROL,
- cpmac_read(priv->regs, CPMAC_RX_CONTROL) | 1);
- cpmac_write(priv->regs, CPMAC_TX_CONTROL,
- cpmac_read(priv->regs, CPMAC_TX_CONTROL) | 1);
- cpmac_write(priv->regs, CPMAC_MAC_CONTROL,
- cpmac_read(priv->regs, CPMAC_MAC_CONTROL) | MAC_MII |
- MAC_FDX);
-}
-
-static void cpmac_clear_rx(struct net_device *dev)
-{
- struct cpmac_priv *priv = netdev_priv(dev);
- struct cpmac_desc *desc;
- int i;
-
- if (unlikely(!priv->rx_head))
- return;
- desc = priv->rx_head;
- for (i = 0; i < priv->ring_size; i++) {
- if ((desc->dataflags & CPMAC_OWN) == 0) {
- if (netif_msg_rx_err(priv) && net_ratelimit())
- netdev_warn(dev, "packet dropped\n");
- if (unlikely(netif_msg_hw(priv)))
- cpmac_dump_desc(dev, desc);
- desc->dataflags = CPMAC_OWN;
- dev->stats.rx_dropped++;
- }
- desc->hw_next = desc->next->mapping;
- desc = desc->next;
- }
- priv->rx_head->prev->hw_next = 0;
-}
-
-static void cpmac_clear_tx(struct net_device *dev)
-{
- struct cpmac_priv *priv = netdev_priv(dev);
- int i;
-
- if (unlikely(!priv->desc_ring))
- return;
- for (i = 0; i < CPMAC_QUEUES; i++) {
- priv->desc_ring[i].dataflags = 0;
- if (priv->desc_ring[i].skb) {
- dev_kfree_skb_any(priv->desc_ring[i].skb);
- priv->desc_ring[i].skb = NULL;
- }
- }
-}
-
-static void cpmac_hw_error(struct work_struct *work)
-{
- struct cpmac_priv *priv =
- container_of(work, struct cpmac_priv, reset_work);
-
- spin_lock(&priv->rx_lock);
- cpmac_clear_rx(priv->dev);
- spin_unlock(&priv->rx_lock);
- cpmac_clear_tx(priv->dev);
- cpmac_hw_start(priv->dev);
- barrier();
- atomic_dec(&priv->reset_pending);
-
- netif_tx_wake_all_queues(priv->dev);
- cpmac_write(priv->regs, CPMAC_MAC_INT_ENABLE, 3);
-}
-
-static void cpmac_check_status(struct net_device *dev)
-{
- struct cpmac_priv *priv = netdev_priv(dev);
-
- u32 macstatus = cpmac_read(priv->regs, CPMAC_MAC_STATUS);
- int rx_channel = (macstatus >> 8) & 7;
- int rx_code = (macstatus >> 12) & 15;
- int tx_channel = (macstatus >> 16) & 7;
- int tx_code = (macstatus >> 20) & 15;
-
- if (rx_code || tx_code) {
- if (netif_msg_drv(priv) && net_ratelimit()) {
- /* Can't find any documentation on what these
- * error codes actually are. So just log them and hope..
- */
- if (rx_code)
- netdev_warn(dev, "host error %d on rx "
- "channel %d (macstatus %08x), resetting\n",
- rx_code, rx_channel, macstatus);
- if (tx_code)
- netdev_warn(dev, "host error %d on tx "
- "channel %d (macstatus %08x), resetting\n",
- tx_code, tx_channel, macstatus);
- }
-
- netif_tx_stop_all_queues(dev);
- cpmac_hw_stop(dev);
- if (schedule_work(&priv->reset_work))
- atomic_inc(&priv->reset_pending);
- if (unlikely(netif_msg_hw(priv)))
- cpmac_dump_regs(dev);
- }
- cpmac_write(priv->regs, CPMAC_MAC_INT_CLEAR, 0xff);
-}
-
-static irqreturn_t cpmac_irq(int irq, void *dev_id)
-{
- struct net_device *dev = dev_id;
- struct cpmac_priv *priv;
- int queue;
- u32 status;
-
- priv = netdev_priv(dev);
-
- status = cpmac_read(priv->regs, CPMAC_MAC_INT_VECTOR);
-
- if (unlikely(netif_msg_intr(priv)))
- netdev_dbg(dev, "interrupt status: 0x%08x\n", status);
-
- if (status & MAC_INT_TX)
- cpmac_end_xmit(dev, (status & 7));
-
- if (status & MAC_INT_RX) {
- queue = (status >> 8) & 7;
- if (napi_schedule_prep(&priv->napi)) {
- cpmac_write(priv->regs, CPMAC_RX_INT_CLEAR, 1 << queue);
- __napi_schedule(&priv->napi);
- }
- }
-
- cpmac_write(priv->regs, CPMAC_MAC_EOI_VECTOR, 0);
-
- if (unlikely(status & (MAC_INT_HOST | MAC_INT_STATUS)))
- cpmac_check_status(dev);
-
- return IRQ_HANDLED;
-}
-
-static void cpmac_tx_timeout(struct net_device *dev, unsigned int txqueue)
-{
- struct cpmac_priv *priv = netdev_priv(dev);
-
- spin_lock(&priv->lock);
- dev->stats.tx_errors++;
- spin_unlock(&priv->lock);
- if (netif_msg_tx_err(priv) && net_ratelimit())
- netdev_warn(dev, "transmit timeout\n");
-
- atomic_inc(&priv->reset_pending);
- barrier();
- cpmac_clear_tx(dev);
- barrier();
- atomic_dec(&priv->reset_pending);
-
- netif_tx_wake_all_queues(priv->dev);
-}
-
-static void cpmac_get_ringparam(struct net_device *dev,
- struct ethtool_ringparam *ring,
- struct kernel_ethtool_ringparam *kernel_ring,
- struct netlink_ext_ack *extack)
-{
- struct cpmac_priv *priv = netdev_priv(dev);
-
- ring->rx_max_pending = 1024;
- ring->rx_mini_max_pending = 1;
- ring->rx_jumbo_max_pending = 1;
- ring->tx_max_pending = 1;
-
- ring->rx_pending = priv->ring_size;
- ring->rx_mini_pending = 1;
- ring->rx_jumbo_pending = 1;
- ring->tx_pending = 1;
-}
-
-static int cpmac_set_ringparam(struct net_device *dev,
- struct ethtool_ringparam *ring,
- struct kernel_ethtool_ringparam *kernel_ring,
- struct netlink_ext_ack *extack)
-{
- struct cpmac_priv *priv = netdev_priv(dev);
-
- if (netif_running(dev))
- return -EBUSY;
- priv->ring_size = ring->rx_pending;
-
- return 0;
-}
-
-static void cpmac_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *info)
-{
- strscpy(info->driver, "cpmac", sizeof(info->driver));
- strscpy(info->version, CPMAC_VERSION, sizeof(info->version));
- snprintf(info->bus_info, sizeof(info->bus_info), "%s", "cpmac");
-}
-
-static const struct ethtool_ops cpmac_ethtool_ops = {
- .get_drvinfo = cpmac_get_drvinfo,
- .get_link = ethtool_op_get_link,
- .get_ringparam = cpmac_get_ringparam,
- .set_ringparam = cpmac_set_ringparam,
- .get_link_ksettings = phy_ethtool_get_link_ksettings,
- .set_link_ksettings = phy_ethtool_set_link_ksettings,
-};
-
-static void cpmac_adjust_link(struct net_device *dev)
-{
- struct cpmac_priv *priv = netdev_priv(dev);
- int new_state = 0;
-
- spin_lock(&priv->lock);
- if (dev->phydev->link) {
- netif_tx_start_all_queues(dev);
- if (dev->phydev->duplex != priv->oldduplex) {
- new_state = 1;
- priv->oldduplex = dev->phydev->duplex;
- }
-
- if (dev->phydev->speed != priv->oldspeed) {
- new_state = 1;
- priv->oldspeed = dev->phydev->speed;
- }
-
- if (!priv->oldlink) {
- new_state = 1;
- priv->oldlink = 1;
- }
- } else if (priv->oldlink) {
- new_state = 1;
- priv->oldlink = 0;
- priv->oldspeed = 0;
- priv->oldduplex = -1;
- }
-
- if (new_state && netif_msg_link(priv) && net_ratelimit())
- phy_print_status(dev->phydev);
-
- spin_unlock(&priv->lock);
-}
-
-static int cpmac_open(struct net_device *dev)
-{
- int i, size, res;
- struct cpmac_priv *priv = netdev_priv(dev);
- struct resource *mem;
- struct cpmac_desc *desc;
- struct sk_buff *skb;
-
- mem = platform_get_resource_byname(priv->pdev, IORESOURCE_MEM, "regs");
- if (!request_mem_region(mem->start, resource_size(mem), dev->name)) {
- if (netif_msg_drv(priv))
- netdev_err(dev, "failed to request registers\n");
-
- res = -ENXIO;
- goto fail_reserve;
- }
-
- priv->regs = ioremap(mem->start, resource_size(mem));
- if (!priv->regs) {
- if (netif_msg_drv(priv))
- netdev_err(dev, "failed to remap registers\n");
-
- res = -ENXIO;
- goto fail_remap;
- }
-
- size = priv->ring_size + CPMAC_QUEUES;
- priv->desc_ring = dma_alloc_coherent(&dev->dev,
- sizeof(struct cpmac_desc) * size,
- &priv->dma_ring,
- GFP_KERNEL);
- if (!priv->desc_ring) {
- res = -ENOMEM;
- goto fail_alloc;
- }
-
- for (i = 0; i < size; i++)
- priv->desc_ring[i].mapping = priv->dma_ring + sizeof(*desc) * i;
-
- priv->rx_head = &priv->desc_ring[CPMAC_QUEUES];
- for (i = 0, desc = priv->rx_head; i < priv->ring_size; i++, desc++) {
- skb = netdev_alloc_skb_ip_align(dev, CPMAC_SKB_SIZE);
- if (unlikely(!skb)) {
- res = -ENOMEM;
- goto fail_desc;
- }
- desc->skb = skb;
- desc->data_mapping = dma_map_single(&dev->dev, skb->data,
- CPMAC_SKB_SIZE,
- DMA_FROM_DEVICE);
- desc->hw_data = (u32)desc->data_mapping;
- desc->buflen = CPMAC_SKB_SIZE;
- desc->dataflags = CPMAC_OWN;
- desc->next = &priv->rx_head[(i + 1) % priv->ring_size];
- desc->next->prev = desc;
- desc->hw_next = (u32)desc->next->mapping;
- }
-
- priv->rx_head->prev->hw_next = (u32)0;
-
- res = request_irq(dev->irq, cpmac_irq, IRQF_SHARED, dev->name, dev);
- if (res) {
- if (netif_msg_drv(priv))
- netdev_err(dev, "failed to obtain irq\n");
-
- goto fail_irq;
- }
-
- atomic_set(&priv->reset_pending, 0);
- INIT_WORK(&priv->reset_work, cpmac_hw_error);
- cpmac_hw_start(dev);
-
- napi_enable(&priv->napi);
- phy_start(dev->phydev);
-
- return 0;
-
-fail_irq:
-fail_desc:
- for (i = 0; i < priv->ring_size; i++) {
- if (priv->rx_head[i].skb) {
- dma_unmap_single(&dev->dev,
- priv->rx_head[i].data_mapping,
- CPMAC_SKB_SIZE,
- DMA_FROM_DEVICE);
- kfree_skb(priv->rx_head[i].skb);
- }
- }
- dma_free_coherent(&dev->dev, sizeof(struct cpmac_desc) * size,
- priv->desc_ring, priv->dma_ring);
-
-fail_alloc:
- iounmap(priv->regs);
-
-fail_remap:
- release_mem_region(mem->start, resource_size(mem));
-
-fail_reserve:
- return res;
-}
-
-static int cpmac_stop(struct net_device *dev)
-{
- int i;
- struct cpmac_priv *priv = netdev_priv(dev);
- struct resource *mem;
-
- netif_tx_stop_all_queues(dev);
-
- cancel_work_sync(&priv->reset_work);
- napi_disable(&priv->napi);
- phy_stop(dev->phydev);
-
- cpmac_hw_stop(dev);
-
- for (i = 0; i < 8; i++)
- cpmac_write(priv->regs, CPMAC_TX_PTR(i), 0);
- cpmac_write(priv->regs, CPMAC_RX_PTR(0), 0);
- cpmac_write(priv->regs, CPMAC_MBP, 0);
-
- free_irq(dev->irq, dev);
- iounmap(priv->regs);
- mem = platform_get_resource_byname(priv->pdev, IORESOURCE_MEM, "regs");
- release_mem_region(mem->start, resource_size(mem));
- priv->rx_head = &priv->desc_ring[CPMAC_QUEUES];
- for (i = 0; i < priv->ring_size; i++) {
- if (priv->rx_head[i].skb) {
- dma_unmap_single(&dev->dev,
- priv->rx_head[i].data_mapping,
- CPMAC_SKB_SIZE,
- DMA_FROM_DEVICE);
- kfree_skb(priv->rx_head[i].skb);
- }
- }
-
- dma_free_coherent(&dev->dev, sizeof(struct cpmac_desc) *
- (CPMAC_QUEUES + priv->ring_size),
- priv->desc_ring, priv->dma_ring);
-
- return 0;
-}
-
-static const struct net_device_ops cpmac_netdev_ops = {
- .ndo_open = cpmac_open,
- .ndo_stop = cpmac_stop,
- .ndo_start_xmit = cpmac_start_xmit,
- .ndo_tx_timeout = cpmac_tx_timeout,
- .ndo_set_rx_mode = cpmac_set_multicast_list,
- .ndo_eth_ioctl = phy_do_ioctl_running,
- .ndo_validate_addr = eth_validate_addr,
- .ndo_set_mac_address = eth_mac_addr,
-};
-
-static int external_switch;
-
-static int cpmac_probe(struct platform_device *pdev)
-{
- int rc, phy_id;
- char mdio_bus_id[MII_BUS_ID_SIZE];
- struct resource *mem;
- struct cpmac_priv *priv;
- struct net_device *dev;
- struct plat_cpmac_data *pdata;
- struct phy_device *phydev = NULL;
-
- pdata = dev_get_platdata(&pdev->dev);
-
- if (external_switch || dumb_switch) {
- strncpy(mdio_bus_id, "fixed-0", MII_BUS_ID_SIZE); /* fixed phys bus */
- phy_id = pdev->id;
- } else {
- for (phy_id = 0; phy_id < PHY_MAX_ADDR; phy_id++) {
- if (!(pdata->phy_mask & (1 << phy_id)))
- continue;
- if (!mdiobus_get_phy(cpmac_mii, phy_id))
- continue;
- strncpy(mdio_bus_id, cpmac_mii->id, MII_BUS_ID_SIZE);
- break;
- }
- }
-
- if (phy_id == PHY_MAX_ADDR) {
- dev_err(&pdev->dev, "no PHY present, falling back "
- "to switch on MDIO bus 0\n");
- strncpy(mdio_bus_id, "fixed-0", MII_BUS_ID_SIZE); /* fixed phys bus */
- phy_id = pdev->id;
- }
- mdio_bus_id[sizeof(mdio_bus_id) - 1] = '\0';
-
- dev = alloc_etherdev_mq(sizeof(*priv), CPMAC_QUEUES);
- if (!dev)
- return -ENOMEM;
-
- SET_NETDEV_DEV(dev, &pdev->dev);
- platform_set_drvdata(pdev, dev);
- priv = netdev_priv(dev);
-
- priv->pdev = pdev;
- mem = platform_get_resource_byname(pdev, IORESOURCE_MEM, "regs");
- if (!mem) {
- rc = -ENODEV;
- goto fail;
- }
-
- dev->irq = platform_get_irq_byname(pdev, "irq");
-
- dev->netdev_ops = &cpmac_netdev_ops;
- dev->ethtool_ops = &cpmac_ethtool_ops;
-
- netif_napi_add(dev, &priv->napi, cpmac_poll);
-
- spin_lock_init(&priv->lock);
- spin_lock_init(&priv->rx_lock);
- priv->dev = dev;
- priv->ring_size = 64;
- priv->msg_enable = netif_msg_init(debug_level, 0xff);
- eth_hw_addr_set(dev, pdata->dev_addr);
-
- snprintf(priv->phy_name, MII_BUS_ID_SIZE, PHY_ID_FMT,
- mdio_bus_id, phy_id);
-
- phydev = phy_connect(dev, priv->phy_name, cpmac_adjust_link,
- PHY_INTERFACE_MODE_MII);
-
- if (IS_ERR(phydev)) {
- if (netif_msg_drv(priv))
- dev_err(&pdev->dev, "Could not attach to PHY\n");
-
- rc = PTR_ERR(phydev);
- goto fail;
- }
-
- rc = register_netdev(dev);
- if (rc) {
- dev_err(&pdev->dev, "Could not register net device\n");
- goto fail;
- }
-
- if (netif_msg_probe(priv)) {
- dev_info(&pdev->dev, "regs: %p, irq: %d, phy: %s, "
- "mac: %pM\n", (void *)mem->start, dev->irq,
- priv->phy_name, dev->dev_addr);
- }
-
- return 0;
-
-fail:
- free_netdev(dev);
- return rc;
-}
-
-static int cpmac_remove(struct platform_device *pdev)
-{
- struct net_device *dev = platform_get_drvdata(pdev);
-
- unregister_netdev(dev);
- free_netdev(dev);
-
- return 0;
-}
-
-static struct platform_driver cpmac_driver = {
- .driver = {
- .name = "cpmac",
- },
- .probe = cpmac_probe,
- .remove = cpmac_remove,
-};
-
-int __init cpmac_init(void)
-{
- u32 mask;
- int i, res;
-
- cpmac_mii = mdiobus_alloc();
- if (cpmac_mii == NULL)
- return -ENOMEM;
-
- cpmac_mii->name = "cpmac-mii";
- cpmac_mii->read = cpmac_mdio_read;
- cpmac_mii->write = cpmac_mdio_write;
- cpmac_mii->reset = cpmac_mdio_reset;
-
- cpmac_mii->priv = ioremap(AR7_REGS_MDIO, 256);
-
- if (!cpmac_mii->priv) {
- pr_err("Can't ioremap mdio registers\n");
- res = -ENXIO;
- goto fail_alloc;
- }
-
- /* FIXME: unhardcode gpio&reset bits */
- ar7_gpio_disable(26);
- ar7_gpio_disable(27);
- ar7_device_reset(AR7_RESET_BIT_CPMAC_LO);
- ar7_device_reset(AR7_RESET_BIT_CPMAC_HI);
- ar7_device_reset(AR7_RESET_BIT_EPHY);
-
- cpmac_mii->reset(cpmac_mii);
-
- for (i = 0; i < 300; i++) {
- mask = cpmac_read(cpmac_mii->priv, CPMAC_MDIO_ALIVE);
- if (mask)
- break;
- else
- msleep(10);
- }
-
- mask &= 0x7fffffff;
- if (mask & (mask - 1)) {
- external_switch = 1;
- mask = 0;
- }
-
- cpmac_mii->phy_mask = ~(mask | 0x80000000);
- snprintf(cpmac_mii->id, MII_BUS_ID_SIZE, "cpmac-1");
-
- res = mdiobus_register(cpmac_mii);
- if (res)
- goto fail_mii;
-
- res = platform_driver_register(&cpmac_driver);
- if (res)
- goto fail_cpmac;
-
- return 0;
-
-fail_cpmac:
- mdiobus_unregister(cpmac_mii);
-
-fail_mii:
- iounmap(cpmac_mii->priv);
-
-fail_alloc:
- mdiobus_free(cpmac_mii);
-
- return res;
-}
-
-void __exit cpmac_exit(void)
-{
- platform_driver_unregister(&cpmac_driver);
- mdiobus_unregister(cpmac_mii);
- iounmap(cpmac_mii->priv);
- mdiobus_free(cpmac_mii);
-}
-
-module_init(cpmac_init);
-module_exit(cpmac_exit);
diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/cpsw_priv.c
index 0ec85635dfd6..764ed298b570 100644
--- a/drivers/net/ethernet/ti/cpsw_priv.c
+++ b/drivers/net/ethernet/ti/cpsw_priv.c
@@ -1360,7 +1360,7 @@ int cpsw_run_xdp(struct cpsw_priv *priv, int ch, struct xdp_buff *xdp,
* particular hardware is sharing a common queue, so the
* incoming device might change per packet.
*/
- xdp_do_flush_map();
+ xdp_do_flush();
break;
default:
bpf_warn_invalid_xdp_action(ndev, prog, act);
diff --git a/drivers/net/ethernet/ti/davinci_emac.c b/drivers/net/ethernet/ti/davinci_emac.c
index 2eb9d5a32588..b0950a318c42 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -38,6 +38,7 @@
#include <linux/dma-mapping.h>
#include <linux/clk.h>
#include <linux/platform_device.h>
+#include <linux/property.h>
#include <linux/regmap.h>
#include <linux/semaphore.h>
#include <linux/phy.h>
@@ -47,10 +48,7 @@
#include <linux/pm_runtime.h>
#include <linux/davinci_emac.h>
#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/of_device.h>
#include <linux/of_mdio.h>
-#include <linux/of_irq.h>
#include <linux/of_net.h>
#include <linux/mfd/syscon.h>
@@ -1726,13 +1724,10 @@ static const struct net_device_ops emac_netdev_ops = {
#endif
};
-static const struct of_device_id davinci_emac_of_match[];
-
static struct emac_platform_data *
davinci_emac_of_get_pdata(struct platform_device *pdev, struct emac_priv *priv)
{
struct device_node *np;
- const struct of_device_id *match;
const struct emac_platform_data *auxdata;
struct emac_platform_data *pdata = NULL;
@@ -1779,9 +1774,8 @@ davinci_emac_of_get_pdata(struct platform_device *pdev, struct emac_priv *priv)
pdata->interrupt_disable = auxdata->interrupt_disable;
}
- match = of_match_device(davinci_emac_of_match, &pdev->dev);
- if (match && match->data) {
- auxdata = match->data;
+ auxdata = device_get_match_data(&pdev->dev);
+ if (auxdata) {
pdata->version = auxdata->version;
pdata->hw_ram_addr = auxdata->hw_ram_addr;
}
@@ -1934,18 +1928,20 @@ static int davinci_emac_probe(struct platform_device *pdev)
goto err_free_rxchan;
ndev->irq = rc;
- rc = davinci_emac_try_get_mac(pdev, res_ctrl ? 0 : 1, priv->mac_addr);
- if (!rc)
- eth_hw_addr_set(ndev, priv->mac_addr);
-
+	/* If the MAC address is not present, read it from the SoC registers */
if (!is_valid_ether_addr(priv->mac_addr)) {
- /* Use random MAC if still none obtained. */
- eth_hw_addr_random(ndev);
- memcpy(priv->mac_addr, ndev->dev_addr, ndev->addr_len);
- dev_warn(&pdev->dev, "using random MAC addr: %pM\n",
- priv->mac_addr);
+ rc = davinci_emac_try_get_mac(pdev, res_ctrl ? 0 : 1, priv->mac_addr);
+ if (!rc)
+ eth_hw_addr_set(ndev, priv->mac_addr);
+
+ if (!is_valid_ether_addr(priv->mac_addr)) {
+ /* Use random MAC if still none obtained. */
+ eth_hw_addr_random(ndev);
+ memcpy(priv->mac_addr, ndev->dev_addr, ndev->addr_len);
+ dev_warn(&pdev->dev, "using random MAC addr: %pM\n",
+ priv->mac_addr);
+ }
}
-
ndev->netdev_ops = &emac_netdev_ops;
ndev->ethtool_ops = &ethtool_ops;
netif_napi_add(ndev, &priv->napi, emac_poll);
@@ -2002,7 +1998,7 @@ err_free_netdev:
* Called when removing the device driver. We disable clock usage and release
* the resources taken up by the driver and unregister network device
*/
-static int davinci_emac_remove(struct platform_device *pdev)
+static void davinci_emac_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct emac_priv *priv = netdev_priv(ndev);
@@ -2022,8 +2018,6 @@ static int davinci_emac_remove(struct platform_device *pdev)
if (of_phy_is_fixed_link(np))
of_phy_deregister_fixed_link(np);
free_netdev(ndev);
-
- return 0;
}
static int davinci_emac_suspend(struct device *dev)
@@ -2076,7 +2070,7 @@ static struct platform_driver davinci_emac_driver = {
.of_match_table = davinci_emac_of_match,
},
.probe = davinci_emac_probe,
- .remove = davinci_emac_remove,
+ .remove_new = davinci_emac_remove,
};
/**
diff --git a/drivers/net/ethernet/ti/davinci_mdio.c b/drivers/net/ethernet/ti/davinci_mdio.c
index 89b6d23e9937..628c87dc1d28 100644
--- a/drivers/net/ethernet/ti/davinci_mdio.c
+++ b/drivers/net/ethernet/ti/davinci_mdio.c
@@ -673,7 +673,7 @@ bail_out:
return ret;
}
-static int davinci_mdio_remove(struct platform_device *pdev)
+static void davinci_mdio_remove(struct platform_device *pdev)
{
struct davinci_mdio_data *data = platform_get_drvdata(pdev);
@@ -686,8 +686,6 @@ static int davinci_mdio_remove(struct platform_device *pdev)
pm_runtime_dont_use_autosuspend(&pdev->dev);
pm_runtime_disable(&pdev->dev);
-
- return 0;
}
#ifdef CONFIG_PM
@@ -766,7 +764,7 @@ static struct platform_driver davinci_mdio_driver = {
.of_match_table = of_match_ptr(davinci_mdio_of_mtable),
},
.probe = davinci_mdio_probe,
- .remove = davinci_mdio_remove,
+ .remove_new = davinci_mdio_remove,
};
static int __init davinci_mdio_init(void)
diff --git a/drivers/net/ethernet/ti/icssg/icssg_config.c b/drivers/net/ethernet/ti/icssg/icssg_config.c
index b272361e378f..99de8a40ed60 100644
--- a/drivers/net/ethernet/ti/icssg/icssg_config.c
+++ b/drivers/net/ethernet/ti/icssg/icssg_config.c
@@ -433,6 +433,17 @@ int emac_set_port_state(struct prueth_emac *emac,
return ret;
}
+void icssg_config_half_duplex(struct prueth_emac *emac)
+{
+ u32 val;
+
+ if (!emac->half_duplex)
+ return;
+
+ val = get_random_u32();
+ writel(val, emac->dram.va + HD_RAND_SEED_OFFSET);
+}
+
void icssg_config_set_speed(struct prueth_emac *emac)
{
u8 fw_speed;
@@ -453,5 +464,8 @@ void icssg_config_set_speed(struct prueth_emac *emac)
return;
}
+ if (emac->duplex == DUPLEX_HALF)
+ fw_speed |= FW_LINK_SPEED_HD;
+
writeb(fw_speed, emac->dram.va + PORT_LINK_SPEED_OFFSET);
}
diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
index 4914d0ef58e9..6c4b64227ac8 100644
--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
@@ -19,11 +19,11 @@
#include <linux/mfd/syscon.h>
#include <linux/module.h>
#include <linux/of.h>
-#include <linux/of_irq.h>
#include <linux/of_mdio.h>
#include <linux/of_net.h>
-#include <linux/of_platform.h>
+#include <linux/platform_device.h>
#include <linux/phy.h>
+#include <linux/property.h>
#include <linux/remoteproc/pruss.h>
#include <linux/regmap.h>
#include <linux/remoteproc.h>
@@ -1029,6 +1029,8 @@ static void emac_adjust_link(struct net_device *ndev)
* values
*/
if (emac->link) {
+ if (emac->duplex == DUPLEX_HALF)
+ icssg_config_half_duplex(emac);
/* Set the RGMII cfg for gig en and full duplex */
icssg_update_rgmii_cfg(prueth->miig_rt, emac);
@@ -1147,9 +1149,13 @@ static int emac_phy_connect(struct prueth_emac *emac)
return -ENODEV;
}
+ if (!emac->half_duplex) {
+ dev_dbg(prueth->dev, "half duplex mode is not supported\n");
+ phy_remove_link_mode(ndev->phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT);
+ phy_remove_link_mode(ndev->phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);
+ }
+
/* remove unsupported modes */
- phy_remove_link_mode(ndev->phydev, ETHTOOL_LINK_MODE_10baseT_Half_BIT);
- phy_remove_link_mode(ndev->phydev, ETHTOOL_LINK_MODE_100baseT_Half_BIT);
phy_remove_link_mode(ndev->phydev, ETHTOOL_LINK_MODE_1000baseT_Half_BIT);
phy_remove_link_mode(ndev->phydev, ETHTOOL_LINK_MODE_Pause_BIT);
phy_remove_link_mode(ndev->phydev, ETHTOOL_LINK_MODE_Asym_Pause_BIT);
@@ -1653,6 +1659,19 @@ static void emac_ndo_get_stats64(struct net_device *ndev,
stats->tx_dropped = ndev->stats.tx_dropped;
}
+static int emac_ndo_get_phys_port_name(struct net_device *ndev, char *name,
+ size_t len)
+{
+ struct prueth_emac *emac = netdev_priv(ndev);
+ int ret;
+
+ ret = snprintf(name, len, "p%d", emac->port_id);
+ if (ret >= len)
+ return -EINVAL;
+
+ return 0;
+}
+
static const struct net_device_ops emac_netdev_ops = {
.ndo_open = emac_ndo_open,
.ndo_stop = emac_ndo_stop,
@@ -1663,6 +1682,7 @@ static const struct net_device_ops emac_netdev_ops = {
.ndo_set_rx_mode = emac_ndo_set_rx_mode,
.ndo_eth_ioctl = emac_ndo_ioctl,
.ndo_get_stats64 = emac_ndo_get_stats64,
+ .ndo_get_phys_port_name = emac_ndo_get_phys_port_name,
};
/* get emac_port corresponding to eth_node name */
@@ -1928,8 +1948,6 @@ static void prueth_put_cores(struct prueth *prueth, int slice)
pru_rproc_put(prueth->pru[slice]);
}
-static const struct of_device_id prueth_dt_match[];
-
static int prueth_probe(struct platform_device *pdev)
{
struct device_node *eth_node, *eth_ports_node;
@@ -1938,7 +1956,6 @@ static int prueth_probe(struct platform_device *pdev)
struct genpool_data_align gp_data = {
.align = SZ_64K,
};
- const struct of_device_id *match;
struct device *dev = &pdev->dev;
struct device_node *np;
struct prueth *prueth;
@@ -1948,17 +1965,13 @@ static int prueth_probe(struct platform_device *pdev)
np = dev->of_node;
- match = of_match_device(prueth_dt_match, dev);
- if (!match)
- return -ENODEV;
-
prueth = devm_kzalloc(dev, sizeof(*prueth), GFP_KERNEL);
if (!prueth)
return -ENOMEM;
dev_set_drvdata(dev, prueth);
prueth->pdev = pdev;
- prueth->pdata = *(const struct prueth_pdata *)match->data;
+ prueth->pdata = *(const struct prueth_pdata *)device_get_match_data(dev);
prueth->dev = dev;
eth_ports_node = of_get_child_by_name(np, "ethernet-ports");
@@ -2113,6 +2126,10 @@ static int prueth_probe(struct platform_device *pdev)
eth0_node->name);
goto exit_iep;
}
+
+ if (of_find_property(eth0_node, "ti,half-duplex-capable", NULL))
+ prueth->emac[PRUETH_MAC0]->half_duplex = 1;
+
prueth->emac[PRUETH_MAC0]->iep = prueth->iep0;
}
@@ -2124,6 +2141,9 @@ static int prueth_probe(struct platform_device *pdev)
goto netdev_exit;
}
+ if (of_find_property(eth1_node, "ti,half-duplex-capable", NULL))
+ prueth->emac[PRUETH_MAC1]->half_duplex = 1;
+
prueth->emac[PRUETH_MAC1]->iep = prueth->iep0;
}
@@ -2313,8 +2333,13 @@ static const struct prueth_pdata am654_icssg_pdata = {
.quirk_10m_link_issue = 1,
};
+static const struct prueth_pdata am64x_icssg_pdata = {
+ .fdqring_mode = K3_RINGACC_RING_MODE_RING,
+};
+
static const struct of_device_id prueth_dt_match[] = {
{ .compatible = "ti,am654-icssg-prueth", .data = &am654_icssg_pdata },
+ { .compatible = "ti,am642-icssg-prueth", .data = &am64x_icssg_pdata },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, prueth_dt_match);
diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.h b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
index 3fe80a8758d3..8b6d6b497010 100644
--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
@@ -145,6 +145,7 @@ struct prueth_emac {
struct icss_iep *iep;
unsigned int rx_ts_enabled : 1;
unsigned int tx_ts_enabled : 1;
+ unsigned int half_duplex : 1;
/* DMA related */
struct prueth_tx_chn tx_chns[PRUETH_MAX_TX_QUEUES];
@@ -271,6 +272,7 @@ int icssg_config(struct prueth *prueth, struct prueth_emac *emac,
int emac_set_port_state(struct prueth_emac *emac,
enum icssg_port_state_cmd state);
void icssg_config_set_speed(struct prueth_emac *emac);
+void icssg_config_half_duplex(struct prueth_emac *emac);
/* Buffer queue helpers */
int icssg_queue_pop(struct prueth *prueth, u8 queue);
diff --git a/drivers/net/ethernet/ti/netcp_core.c b/drivers/net/ethernet/ti/netcp_core.c
index d829113c16ee..11b90e1da0c6 100644
--- a/drivers/net/ethernet/ti/netcp_core.c
+++ b/drivers/net/ethernet/ti/netcp_core.c
@@ -2228,7 +2228,7 @@ probe_quit:
return ret;
}
-static int netcp_remove(struct platform_device *pdev)
+static void netcp_remove(struct platform_device *pdev)
{
struct netcp_device *netcp_device = platform_get_drvdata(pdev);
struct netcp_intf *netcp_intf, *netcp_tmp;
@@ -2256,7 +2256,6 @@ static int netcp_remove(struct platform_device *pdev)
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
platform_set_drvdata(pdev, NULL);
- return 0;
}
static const struct of_device_id of_match[] = {
@@ -2271,7 +2270,7 @@ static struct platform_driver netcp_driver = {
.of_match_table = of_match,
},
.probe = netcp_probe,
- .remove = netcp_remove,
+ .remove_new = netcp_remove,
};
module_platform_driver(netcp_driver);
diff --git a/drivers/net/ethernet/ti/netcp_ethss.c b/drivers/net/ethernet/ti/netcp_ethss.c
index 2adf82a32bf6..02cb6474f6dc 100644
--- a/drivers/net/ethernet/ti/netcp_ethss.c
+++ b/drivers/net/ethernet/ti/netcp_ethss.c
@@ -1735,8 +1735,8 @@ static const struct netcp_ethtool_stat xgbe10_et_stats[] = {
static void keystone_get_drvinfo(struct net_device *ndev,
struct ethtool_drvinfo *info)
{
- strncpy(info->driver, NETCP_DRIVER_NAME, sizeof(info->driver));
- strncpy(info->version, NETCP_DRIVER_VERSION, sizeof(info->version));
+ strscpy(info->driver, NETCP_DRIVER_NAME, sizeof(info->driver));
+ strscpy(info->version, NETCP_DRIVER_VERSION, sizeof(info->version));
}
static u32 keystone_get_msglevel(struct net_device *ndev)
diff --git a/drivers/net/ethernet/toshiba/spider_net.c b/drivers/net/ethernet/toshiba/spider_net.c
index 50d7eacfec58..87e67121477c 100644
--- a/drivers/net/ethernet/toshiba/spider_net.c
+++ b/drivers/net/ethernet/toshiba/spider_net.c
@@ -2332,7 +2332,7 @@ spider_net_alloc_card(void)
struct spider_net_card *card;
netdev = alloc_etherdev(struct_size(card, darray,
- tx_descriptors + rx_descriptors));
+ size_add(tx_descriptors, rx_descriptors)));
if (!netdev)
return NULL;
diff --git a/drivers/net/ethernet/toshiba/tc35815.c b/drivers/net/ethernet/toshiba/tc35815.c
index 14cf6ecf6d0d..6e3758dfbdbd 100644
--- a/drivers/net/ethernet/toshiba/tc35815.c
+++ b/drivers/net/ethernet/toshiba/tc35815.c
@@ -1434,14 +1434,10 @@ static irqreturn_t tc35815_interrupt(int irq, void *dev_id)
u32 dmactl = tc_readl(&tr->DMA_Ctl);
if (!(dmactl & DMA_IntMask)) {
- /* disable interrupts */
- tc_writel(dmactl | DMA_IntMask, &tr->DMA_Ctl);
- if (napi_schedule_prep(&lp->napi))
+ if (napi_schedule_prep(&lp->napi)) {
+ /* disable interrupts */
+ tc_writel(dmactl | DMA_IntMask, &tr->DMA_Ctl);
__napi_schedule(&lp->napi);
- else {
- printk(KERN_ERR "%s: interrupt taken in poll\n",
- dev->name);
- BUG();
}
(void)tc_readl(&tr->Int_Src); /* flush */
return IRQ_HANDLED;
diff --git a/drivers/net/ethernet/tundra/tsi108_eth.c b/drivers/net/ethernet/tundra/tsi108_eth.c
index d09d352e1c0a..554aff7c8f3b 100644
--- a/drivers/net/ethernet/tundra/tsi108_eth.c
+++ b/drivers/net/ethernet/tundra/tsi108_eth.c
@@ -1660,7 +1660,7 @@ static void tsi108_timed_checker(struct timer_list *t)
mod_timer(&data->timer, jiffies + CHECK_PHY_INTERVAL);
}
-static int tsi108_ether_remove(struct platform_device *pdev)
+static void tsi108_ether_remove(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct tsi108_prv_data *priv = netdev_priv(dev);
@@ -1670,15 +1670,13 @@ static int tsi108_ether_remove(struct platform_device *pdev)
iounmap(priv->regs);
iounmap(priv->phyregs);
free_netdev(dev);
-
- return 0;
}
/* Structure for a device driver */
static struct platform_driver tsi_eth_driver = {
.probe = tsi108_init_one,
- .remove = tsi108_ether_remove,
+ .remove_new = tsi108_ether_remove,
.driver = {
.name = "tsi-ethernet",
},
diff --git a/drivers/net/ethernet/via/via-rhine.c b/drivers/net/ethernet/via/via-rhine.c
index 3e09e5036490..e80c02948801 100644
--- a/drivers/net/ethernet/via/via-rhine.c
+++ b/drivers/net/ethernet/via/via-rhine.c
@@ -2443,7 +2443,7 @@ static void rhine_remove_one_pci(struct pci_dev *pdev)
pci_disable_device(pdev);
}
-static int rhine_remove_one_platform(struct platform_device *pdev)
+static void rhine_remove_one_platform(struct platform_device *pdev)
{
struct net_device *dev = platform_get_drvdata(pdev);
struct rhine_private *rp = netdev_priv(dev);
@@ -2453,8 +2453,6 @@ static int rhine_remove_one_platform(struct platform_device *pdev)
iounmap(rp->base);
free_netdev(dev);
-
- return 0;
}
static void rhine_shutdown_pci(struct pci_dev *pdev)
@@ -2572,7 +2570,7 @@ static struct pci_driver rhine_driver_pci = {
static struct platform_driver rhine_driver_platform = {
.probe = rhine_init_one_platform,
- .remove = rhine_remove_one_platform,
+ .remove_new = rhine_remove_one_platform,
.driver = {
.name = DRV_NAME,
.of_match_table = rhine_of_tbl,
diff --git a/drivers/net/ethernet/via/via-velocity.c b/drivers/net/ethernet/via/via-velocity.c
index 731f689412e6..1c6b2a9bba08 100644
--- a/drivers/net/ethernet/via/via-velocity.c
+++ b/drivers/net/ethernet/via/via-velocity.c
@@ -2957,11 +2957,9 @@ static int velocity_platform_probe(struct platform_device *pdev)
return velocity_probe(&pdev->dev, irq, info, BUS_PLATFORM);
}
-static int velocity_platform_remove(struct platform_device *pdev)
+static void velocity_platform_remove(struct platform_device *pdev)
{
velocity_remove(&pdev->dev);
-
- return 0;
}
#ifdef CONFIG_PM_SLEEP
@@ -3249,7 +3247,7 @@ static struct pci_driver velocity_pci_driver = {
static struct platform_driver velocity_platform_driver = {
.probe = velocity_platform_probe,
- .remove = velocity_platform_remove,
+ .remove_new = velocity_platform_remove,
.driver = {
.name = "via-velocity",
.of_match_table = velocity_of_ids,
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_ethtool.c b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.c
index 93cb6f2294e7..ddc5f6d20b9c 100644
--- a/drivers/net/ethernet/wangxun/libwx/wx_ethtool.c
+++ b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.c
@@ -3,9 +3,171 @@
#include <linux/pci.h>
#include <linux/phy.h>
+#include <linux/ethtool.h>
#include "wx_type.h"
#include "wx_ethtool.h"
+#include "wx_hw.h"
+
+struct wx_stats {
+ char stat_string[ETH_GSTRING_LEN];
+ size_t sizeof_stat;
+ off_t stat_offset;
+};
+
+#define WX_STAT(str, m) { \
+ .stat_string = str, \
+ .sizeof_stat = sizeof(((struct wx *)0)->m), \
+ .stat_offset = offsetof(struct wx, m) }
+
+static const struct wx_stats wx_gstrings_stats[] = {
+ WX_STAT("rx_dma_pkts", stats.gprc),
+ WX_STAT("tx_dma_pkts", stats.gptc),
+ WX_STAT("rx_dma_bytes", stats.gorc),
+ WX_STAT("tx_dma_bytes", stats.gotc),
+ WX_STAT("rx_total_pkts", stats.tpr),
+ WX_STAT("tx_total_pkts", stats.tpt),
+ WX_STAT("rx_long_length_count", stats.roc),
+ WX_STAT("rx_short_length_count", stats.ruc),
+ WX_STAT("os2bmc_rx_by_bmc", stats.o2bgptc),
+ WX_STAT("os2bmc_tx_by_bmc", stats.b2ospc),
+ WX_STAT("os2bmc_tx_by_host", stats.o2bspc),
+ WX_STAT("os2bmc_rx_by_host", stats.b2ogprc),
+ WX_STAT("rx_no_dma_resources", stats.rdmdrop),
+ WX_STAT("tx_busy", tx_busy),
+ WX_STAT("non_eop_descs", non_eop_descs),
+ WX_STAT("tx_restart_queue", restart_queue),
+ WX_STAT("rx_csum_offload_good_count", hw_csum_rx_good),
+ WX_STAT("rx_csum_offload_errors", hw_csum_rx_error),
+ WX_STAT("alloc_rx_buff_failed", alloc_rx_buff_failed),
+};
+
+/* The driver allocates num_tx_queues and num_rx_queues symmetrically, so
+ * we set num_rx_queues to evaluate to num_tx_queues. This is used because
+ * we do not have a good way to get the max number of rx queues with
+ * CONFIG_RPS disabled.
+ */
+#define WX_NUM_RX_QUEUES netdev->num_tx_queues
+#define WX_NUM_TX_QUEUES netdev->num_tx_queues
+
+#define WX_QUEUE_STATS_LEN ( \
+ (WX_NUM_TX_QUEUES + WX_NUM_RX_QUEUES) * \
+ (sizeof(struct wx_queue_stats) / sizeof(u64)))
+#define WX_GLOBAL_STATS_LEN ARRAY_SIZE(wx_gstrings_stats)
+#define WX_STATS_LEN (WX_GLOBAL_STATS_LEN + WX_QUEUE_STATS_LEN)
+
+int wx_get_sset_count(struct net_device *netdev, int sset)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ return WX_STATS_LEN;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+EXPORT_SYMBOL(wx_get_sset_count);
+
+void wx_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
+{
+ u8 *p = data;
+ int i;
+
+ switch (stringset) {
+ case ETH_SS_STATS:
+ for (i = 0; i < WX_GLOBAL_STATS_LEN; i++)
+ ethtool_sprintf(&p, wx_gstrings_stats[i].stat_string);
+ for (i = 0; i < netdev->num_tx_queues; i++) {
+ ethtool_sprintf(&p, "tx_queue_%u_packets", i);
+ ethtool_sprintf(&p, "tx_queue_%u_bytes", i);
+ }
+ for (i = 0; i < WX_NUM_RX_QUEUES; i++) {
+ ethtool_sprintf(&p, "rx_queue_%u_packets", i);
+ ethtool_sprintf(&p, "rx_queue_%u_bytes", i);
+ }
+ break;
+ }
+}
+EXPORT_SYMBOL(wx_get_strings);
+
+void wx_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data)
+{
+ struct wx *wx = netdev_priv(netdev);
+ struct wx_ring *ring;
+ unsigned int start;
+ int i, j;
+ char *p;
+
+ wx_update_stats(wx);
+
+ for (i = 0; i < WX_GLOBAL_STATS_LEN; i++) {
+ p = (char *)wx + wx_gstrings_stats[i].stat_offset;
+ data[i] = (wx_gstrings_stats[i].sizeof_stat ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ }
+
+ for (j = 0; j < netdev->num_tx_queues; j++) {
+ ring = wx->tx_ring[j];
+ if (!ring) {
+ data[i++] = 0;
+ data[i++] = 0;
+ continue;
+ }
+
+ do {
+ start = u64_stats_fetch_begin(&ring->syncp);
+ data[i] = ring->stats.packets;
+ data[i + 1] = ring->stats.bytes;
+ } while (u64_stats_fetch_retry(&ring->syncp, start));
+ i += 2;
+ }
+ for (j = 0; j < WX_NUM_RX_QUEUES; j++) {
+ ring = wx->rx_ring[j];
+ if (!ring) {
+ data[i++] = 0;
+ data[i++] = 0;
+ continue;
+ }
+
+ do {
+ start = u64_stats_fetch_begin(&ring->syncp);
+ data[i] = ring->stats.packets;
+ data[i + 1] = ring->stats.bytes;
+ } while (u64_stats_fetch_retry(&ring->syncp, start));
+ i += 2;
+ }
+}
+EXPORT_SYMBOL(wx_get_ethtool_stats);
+
+void wx_get_mac_stats(struct net_device *netdev,
+ struct ethtool_eth_mac_stats *mac_stats)
+{
+ struct wx *wx = netdev_priv(netdev);
+ struct wx_hw_stats *hwstats;
+
+ wx_update_stats(wx);
+
+ hwstats = &wx->stats;
+ mac_stats->MulticastFramesXmittedOK = hwstats->mptc;
+ mac_stats->BroadcastFramesXmittedOK = hwstats->bptc;
+ mac_stats->MulticastFramesReceivedOK = hwstats->mprc;
+ mac_stats->BroadcastFramesReceivedOK = hwstats->bprc;
+}
+EXPORT_SYMBOL(wx_get_mac_stats);
+
+void wx_get_pause_stats(struct net_device *netdev,
+ struct ethtool_pause_stats *stats)
+{
+ struct wx *wx = netdev_priv(netdev);
+ struct wx_hw_stats *hwstats;
+
+ wx_update_stats(wx);
+
+ hwstats = &wx->stats;
+ stats->tx_pause_frames = hwstats->lxontxc + hwstats->lxofftxc;
+ stats->rx_pause_frames = hwstats->lxonoffrxc;
+}
+EXPORT_SYMBOL(wx_get_pause_stats);
void wx_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *info)
{
@@ -14,5 +176,12 @@ void wx_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *info)
strscpy(info->driver, wx->driver_name, sizeof(info->driver));
strscpy(info->fw_version, wx->eeprom_id, sizeof(info->fw_version));
strscpy(info->bus_info, pci_name(wx->pdev), sizeof(info->bus_info));
+ if (wx->num_tx_queues <= WX_NUM_TX_QUEUES) {
+ info->n_stats = WX_STATS_LEN -
+ (WX_NUM_TX_QUEUES - wx->num_tx_queues) *
+ (sizeof(struct wx_queue_stats) / sizeof(u64)) * 2;
+ } else {
+ info->n_stats = WX_STATS_LEN;
+ }
}
EXPORT_SYMBOL(wx_get_drvinfo);
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_ethtool.h b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.h
index e85538c69454..16d1a09369a6 100644
--- a/drivers/net/ethernet/wangxun/libwx/wx_ethtool.h
+++ b/drivers/net/ethernet/wangxun/libwx/wx_ethtool.h
@@ -4,5 +4,13 @@
#ifndef _WX_ETHTOOL_H_
#define _WX_ETHTOOL_H_
+int wx_get_sset_count(struct net_device *netdev, int sset);
+void wx_get_strings(struct net_device *netdev, u32 stringset, u8 *data);
+void wx_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats *stats, u64 *data);
+void wx_get_mac_stats(struct net_device *netdev,
+ struct ethtool_eth_mac_stats *mac_stats);
+void wx_get_pause_stats(struct net_device *netdev,
+ struct ethtool_pause_stats *stats);
void wx_get_drvinfo(struct net_device *netdev, struct ethtool_drvinfo *info);
#endif /* _WX_ETHTOOL_H_ */
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.c b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
index 85dc16faca54..a3c5de9d547a 100644
--- a/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
@@ -12,6 +12,98 @@
#include "wx_lib.h"
#include "wx_hw.h"
+static int wx_phy_read_reg_mdi(struct mii_bus *bus, int phy_addr, int devnum, int regnum)
+{
+ struct wx *wx = bus->priv;
+ u32 command, val;
+ int ret;
+
+ /* setup and write the address cycle command */
+ command = WX_MSCA_RA(regnum) |
+ WX_MSCA_PA(phy_addr) |
+ WX_MSCA_DA(devnum);
+ wr32(wx, WX_MSCA, command);
+
+ command = WX_MSCC_CMD(WX_MSCA_CMD_READ) | WX_MSCC_BUSY;
+ if (wx->mac.type == wx_mac_em)
+ command |= WX_MDIO_CLK(6);
+ wr32(wx, WX_MSCC, command);
+
+ /* wait to complete */
+ ret = read_poll_timeout(rd32, val, !(val & WX_MSCC_BUSY), 1000,
+ 100000, false, wx, WX_MSCC);
+ if (ret) {
+ wx_err(wx, "Mdio read c22 command did not complete.\n");
+ return ret;
+ }
+
+ return (u16)rd32(wx, WX_MSCC);
+}
+
+static int wx_phy_write_reg_mdi(struct mii_bus *bus, int phy_addr,
+ int devnum, int regnum, u16 value)
+{
+ struct wx *wx = bus->priv;
+ u32 command, val;
+ int ret;
+
+ /* setup and write the address cycle command */
+ command = WX_MSCA_RA(regnum) |
+ WX_MSCA_PA(phy_addr) |
+ WX_MSCA_DA(devnum);
+ wr32(wx, WX_MSCA, command);
+
+ command = value | WX_MSCC_CMD(WX_MSCA_CMD_WRITE) | WX_MSCC_BUSY;
+ if (wx->mac.type == wx_mac_em)
+ command |= WX_MDIO_CLK(6);
+ wr32(wx, WX_MSCC, command);
+
+ /* wait to complete */
+ ret = read_poll_timeout(rd32, val, !(val & WX_MSCC_BUSY), 1000,
+ 100000, false, wx, WX_MSCC);
+ if (ret)
+ wx_err(wx, "Mdio write c22 command did not complete.\n");
+
+ return ret;
+}
+
+int wx_phy_read_reg_mdi_c22(struct mii_bus *bus, int phy_addr, int regnum)
+{
+ struct wx *wx = bus->priv;
+
+ wr32(wx, WX_MDIO_CLAUSE_SELECT, 0xF);
+ return wx_phy_read_reg_mdi(bus, phy_addr, 0, regnum);
+}
+EXPORT_SYMBOL(wx_phy_read_reg_mdi_c22);
+
+int wx_phy_write_reg_mdi_c22(struct mii_bus *bus, int phy_addr, int regnum, u16 value)
+{
+ struct wx *wx = bus->priv;
+
+ wr32(wx, WX_MDIO_CLAUSE_SELECT, 0xF);
+ return wx_phy_write_reg_mdi(bus, phy_addr, 0, regnum, value);
+}
+EXPORT_SYMBOL(wx_phy_write_reg_mdi_c22);
+
+int wx_phy_read_reg_mdi_c45(struct mii_bus *bus, int phy_addr, int devnum, int regnum)
+{
+ struct wx *wx = bus->priv;
+
+ wr32(wx, WX_MDIO_CLAUSE_SELECT, 0);
+ return wx_phy_read_reg_mdi(bus, phy_addr, devnum, regnum);
+}
+EXPORT_SYMBOL(wx_phy_read_reg_mdi_c45);
+
+int wx_phy_write_reg_mdi_c45(struct mii_bus *bus, int phy_addr,
+ int devnum, int regnum, u16 value)
+{
+ struct wx *wx = bus->priv;
+
+ wr32(wx, WX_MDIO_CLAUSE_SELECT, 0);
+ return wx_phy_write_reg_mdi(bus, phy_addr, devnum, regnum, value);
+}
+EXPORT_SYMBOL(wx_phy_write_reg_mdi_c45);
+
static void wx_intr_disable(struct wx *wx, u64 qmask)
{
u32 mask;
@@ -1910,6 +2002,105 @@ int wx_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid)
EXPORT_SYMBOL(wx_vlan_rx_kill_vid);
/**
+ * wx_update_stats - Update the board statistics counters.
+ * @wx: board private structure
+ **/
+void wx_update_stats(struct wx *wx)
+{
+ struct wx_hw_stats *hwstats = &wx->stats;
+
+ u64 non_eop_descs = 0, alloc_rx_buff_failed = 0;
+ u64 hw_csum_rx_good = 0, hw_csum_rx_error = 0;
+ u64 restart_queue = 0, tx_busy = 0;
+ u32 i;
+
+ /* gather some stats to the wx struct that are per queue */
+ for (i = 0; i < wx->num_rx_queues; i++) {
+ struct wx_ring *rx_ring = wx->rx_ring[i];
+
+ non_eop_descs += rx_ring->rx_stats.non_eop_descs;
+ alloc_rx_buff_failed += rx_ring->rx_stats.alloc_rx_buff_failed;
+ hw_csum_rx_good += rx_ring->rx_stats.csum_good_cnt;
+ hw_csum_rx_error += rx_ring->rx_stats.csum_err;
+ }
+ wx->non_eop_descs = non_eop_descs;
+ wx->alloc_rx_buff_failed = alloc_rx_buff_failed;
+ wx->hw_csum_rx_error = hw_csum_rx_error;
+ wx->hw_csum_rx_good = hw_csum_rx_good;
+
+ for (i = 0; i < wx->num_tx_queues; i++) {
+ struct wx_ring *tx_ring = wx->tx_ring[i];
+
+ restart_queue += tx_ring->tx_stats.restart_queue;
+ tx_busy += tx_ring->tx_stats.tx_busy;
+ }
+ wx->restart_queue = restart_queue;
+ wx->tx_busy = tx_busy;
+
+ hwstats->gprc += rd32(wx, WX_RDM_PKT_CNT);
+ hwstats->gptc += rd32(wx, WX_TDM_PKT_CNT);
+ hwstats->gorc += rd64(wx, WX_RDM_BYTE_CNT_LSB);
+ hwstats->gotc += rd64(wx, WX_TDM_BYTE_CNT_LSB);
+ hwstats->tpr += rd64(wx, WX_RX_FRAME_CNT_GOOD_BAD_L);
+ hwstats->tpt += rd64(wx, WX_TX_FRAME_CNT_GOOD_BAD_L);
+ hwstats->crcerrs += rd64(wx, WX_RX_CRC_ERROR_FRAMES_L);
+ hwstats->rlec += rd64(wx, WX_RX_LEN_ERROR_FRAMES_L);
+ hwstats->bprc += rd64(wx, WX_RX_BC_FRAMES_GOOD_L);
+ hwstats->bptc += rd64(wx, WX_TX_BC_FRAMES_GOOD_L);
+ hwstats->mprc += rd64(wx, WX_RX_MC_FRAMES_GOOD_L);
+ hwstats->mptc += rd64(wx, WX_TX_MC_FRAMES_GOOD_L);
+ hwstats->roc += rd32(wx, WX_RX_OVERSIZE_FRAMES_GOOD);
+ hwstats->ruc += rd32(wx, WX_RX_UNDERSIZE_FRAMES_GOOD);
+ hwstats->lxonoffrxc += rd32(wx, WX_MAC_LXONOFFRXC);
+ hwstats->lxontxc += rd32(wx, WX_RDB_LXONTXC);
+ hwstats->lxofftxc += rd32(wx, WX_RDB_LXOFFTXC);
+ hwstats->o2bgptc += rd32(wx, WX_TDM_OS2BMC_CNT);
+ hwstats->b2ospc += rd32(wx, WX_MNG_BMC2OS_CNT);
+ hwstats->o2bspc += rd32(wx, WX_MNG_OS2BMC_CNT);
+ hwstats->b2ogprc += rd32(wx, WX_RDM_BMC2OS_CNT);
+ hwstats->rdmdrop += rd32(wx, WX_RDM_DRP_PKT);
+
+ for (i = 0; i < wx->mac.max_rx_queues; i++)
+ hwstats->qmprc += rd32(wx, WX_PX_MPRC(i));
+}
+EXPORT_SYMBOL(wx_update_stats);
+
+/**
+ * wx_clear_hw_cntrs - Generic clear hardware counters
+ * @wx: board private structure
+ *
+ * Clears all hardware statistics counters by reading them from the hardware.
+ * Statistics counters are clear on read.
+ **/
+void wx_clear_hw_cntrs(struct wx *wx)
+{
+ u16 i = 0;
+
+ for (i = 0; i < wx->mac.max_rx_queues; i++)
+ wr32(wx, WX_PX_MPRC(i), 0);
+
+ rd32(wx, WX_RDM_PKT_CNT);
+ rd32(wx, WX_TDM_PKT_CNT);
+ rd64(wx, WX_RDM_BYTE_CNT_LSB);
+ rd32(wx, WX_TDM_BYTE_CNT_LSB);
+ rd32(wx, WX_RDM_DRP_PKT);
+ rd32(wx, WX_RX_UNDERSIZE_FRAMES_GOOD);
+ rd32(wx, WX_RX_OVERSIZE_FRAMES_GOOD);
+ rd64(wx, WX_RX_FRAME_CNT_GOOD_BAD_L);
+ rd64(wx, WX_TX_FRAME_CNT_GOOD_BAD_L);
+ rd64(wx, WX_RX_MC_FRAMES_GOOD_L);
+ rd64(wx, WX_TX_MC_FRAMES_GOOD_L);
+ rd64(wx, WX_RX_BC_FRAMES_GOOD_L);
+ rd64(wx, WX_TX_BC_FRAMES_GOOD_L);
+ rd64(wx, WX_RX_CRC_ERROR_FRAMES_L);
+ rd64(wx, WX_RX_LEN_ERROR_FRAMES_L);
+ rd32(wx, WX_RDB_LXONTXC);
+ rd32(wx, WX_RDB_LXOFFTXC);
+ rd32(wx, WX_MAC_LXONOFFRXC);
+}
+EXPORT_SYMBOL(wx_clear_hw_cntrs);
+
+/**
* wx_start_hw - Prepare hardware for Tx/Rx
* @wx: pointer to hardware structure
*
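
The four exported helpers above are shaped to drop straight into the mii_bus callback slots: the Clause 22 pair matches ->read/->write and the Clause 45 pair matches ->read_c45/->write_c45, with the clause selection handled internally through WX_MDIO_CLAUSE_SELECT. A minimal registration sketch, assuming the usual struct wx::pdev back-pointer and <linux/phy.h> in scope; the real registrations are in the ngbe_mdio.c and txgbe_phy.c hunks further down, which also handle the internal-PHY cases:

    /* Hedged sketch: hook the shared libwx MDIO accessors up to an mii_bus. */
    static int wx_sketch_mdio_init(struct wx *wx)
    {
            struct mii_bus *mii_bus;

            mii_bus = devm_mdiobus_alloc(&wx->pdev->dev);
            if (!mii_bus)
                    return -ENOMEM;

            mii_bus->name = "wx_mii_bus";
            mii_bus->priv = wx;                            /* helpers fetch this via bus->priv */
            mii_bus->read = wx_phy_read_reg_mdi_c22;       /* Clause 22 */
            mii_bus->write = wx_phy_write_reg_mdi_c22;
            mii_bus->read_c45 = wx_phy_read_reg_mdi_c45;   /* Clause 45 */
            mii_bus->write_c45 = wx_phy_write_reg_mdi_c45;
            snprintf(mii_bus->id, MII_BUS_ID_SIZE, "wx-%x", pci_dev_id(wx->pdev));

            return devm_mdiobus_register(&wx->pdev->dev, mii_bus);
    }
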
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.h b/drivers/net/ethernet/wangxun/libwx/wx_hw.h
index 0b3447bc6f2f..12c20a7c364d 100644
--- a/drivers/net/ethernet/wangxun/libwx/wx_hw.h
+++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.h
@@ -4,6 +4,13 @@
#ifndef _WX_HW_H_
#define _WX_HW_H_
+#include <linux/phy.h>
+
+int wx_phy_read_reg_mdi_c22(struct mii_bus *bus, int phy_addr, int regnum);
+int wx_phy_write_reg_mdi_c22(struct mii_bus *bus, int phy_addr, int regnum, u16 value);
+int wx_phy_read_reg_mdi_c45(struct mii_bus *bus, int phy_addr, int devnum, int regnum);
+int wx_phy_write_reg_mdi_c45(struct mii_bus *bus, int phy_addr,
+ int devnum, int regnum, u16 value);
void wx_intr_enable(struct wx *wx, u64 qmask);
void wx_irq_disable(struct wx *wx);
int wx_check_flash_load(struct wx *wx, u32 check_bit);
@@ -34,5 +41,7 @@ int wx_get_pcie_msix_counts(struct wx *wx, u16 *msix_count, u16 max_msix_count);
int wx_sw_init(struct wx *wx);
int wx_vlan_rx_add_vid(struct net_device *netdev, __be16 proto, u16 vid);
int wx_vlan_rx_kill_vid(struct net_device *netdev, __be16 proto, u16 vid);
+void wx_update_stats(struct wx *wx);
+void wx_clear_hw_cntrs(struct wx *wx);
#endif /* _WX_HW_H_ */
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
index e04d4a5eed7b..2823861e5a92 100644
--- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
@@ -488,6 +488,7 @@ static bool wx_is_non_eop(struct wx_ring *rx_ring,
return false;
rx_ring->rx_buffer_info[ntc].skb = skb;
+ rx_ring->rx_stats.non_eop_descs++;
return true;
}
@@ -721,6 +722,7 @@ static int wx_clean_rx_irq(struct wx_q_vector *q_vector,
/* exit if we failed to retrieve a buffer */
if (!skb) {
+ rx_ring->rx_stats.alloc_rx_buff_failed++;
rx_buffer->pagecnt_bias++;
break;
}
@@ -877,9 +879,11 @@ static bool wx_clean_tx_irq(struct wx_q_vector *q_vector,
if (__netif_subqueue_stopped(tx_ring->netdev,
tx_ring->queue_index) &&
- netif_running(tx_ring->netdev))
+ netif_running(tx_ring->netdev)) {
netif_wake_subqueue(tx_ring->netdev,
tx_ring->queue_index);
+ ++tx_ring->tx_stats.restart_queue;
+ }
}
return !!budget;
@@ -956,6 +960,7 @@ static int wx_maybe_stop_tx(struct wx_ring *tx_ring, u16 size)
/* A reprieve! - use start_queue because it doesn't call schedule */
netif_start_subqueue(tx_ring->netdev, tx_ring->queue_index);
+ ++tx_ring->tx_stats.restart_queue;
return 0;
}
@@ -1533,8 +1538,10 @@ static netdev_tx_t wx_xmit_frame_ring(struct sk_buff *skb,
count += TXD_USE_COUNT(skb_frag_size(&skb_shinfo(skb)->
frags[f]));
- if (wx_maybe_stop_tx(tx_ring, count + 3))
+ if (wx_maybe_stop_tx(tx_ring, count + 3)) {
+ tx_ring->tx_stats.tx_busy++;
return NETDEV_TX_BUSY;
+ }
/* record the location of the first descriptor for this packet */
first = &tx_ring->tx_buffer_info[tx_ring->next_to_use];
@@ -2665,8 +2672,11 @@ void wx_get_stats64(struct net_device *netdev,
struct rtnl_link_stats64 *stats)
{
struct wx *wx = netdev_priv(netdev);
+ struct wx_hw_stats *hwstats;
int i;
+ wx_update_stats(wx);
+
rcu_read_lock();
for (i = 0; i < wx->num_rx_queues; i++) {
struct wx_ring *ring = READ_ONCE(wx->rx_ring[i]);
@@ -2702,6 +2712,12 @@ void wx_get_stats64(struct net_device *netdev,
}
rcu_read_unlock();
+
+ hwstats = &wx->stats;
+ stats->rx_errors = hwstats->crcerrs + hwstats->rlec;
+ stats->multicast = hwstats->qmprc;
+ stats->rx_length_errors = hwstats->rlec;
+ stats->rx_crc_errors = hwstats->crcerrs;
}
EXPORT_SYMBOL(wx_get_stats64);
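
wx_get_stats64() above only works because the MAC counters are clear-on-read: every rd32()/rd64() in wx_update_stats() returns the delta since the previous read, so the driver accumulates into struct wx_hw_stats instead of overwriting it, and wx_clear_hw_cntrs() can zero everything simply by reading the registers once. A condensed illustration of the pattern, using fields and registers already defined in these hunks:

    /* Clear-on-read accumulation: reading the register also resets it,
     * so '+=' maintains the running totals that ndo_get_stats64 reports.
     */
    static void wx_sketch_accumulate(struct wx *wx)
    {
            struct wx_hw_stats *hwstats = &wx->stats;

            hwstats->gprc += rd32(wx, WX_RDM_PKT_CNT);                /* rx packet delta */
            hwstats->crcerrs += rd64(wx, WX_RX_CRC_ERROR_FRAMES_L);   /* CRC error delta */
    }

    /* ...and the totals, never the raw registers, feed the netdev stats: */
    /*     stats->rx_crc_errors = hwstats->crcerrs;                       */
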
diff --git a/drivers/net/ethernet/wangxun/libwx/wx_type.h b/drivers/net/ethernet/wangxun/libwx/wx_type.h
index c5cbd177ef62..165e82de772e 100644
--- a/drivers/net/ethernet/wangxun/libwx/wx_type.h
+++ b/drivers/net/ethernet/wangxun/libwx/wx_type.h
@@ -59,6 +59,25 @@
#define WX_TS_ALARM_ST_DALARM BIT(1)
#define WX_TS_ALARM_ST_ALARM BIT(0)
+/* statistic */
+#define WX_TX_FRAME_CNT_GOOD_BAD_L 0x1181C
+#define WX_TX_BC_FRAMES_GOOD_L 0x11824
+#define WX_TX_MC_FRAMES_GOOD_L 0x1182C
+#define WX_RX_FRAME_CNT_GOOD_BAD_L 0x11900
+#define WX_RX_BC_FRAMES_GOOD_L 0x11918
+#define WX_RX_MC_FRAMES_GOOD_L 0x11920
+#define WX_RX_CRC_ERROR_FRAMES_L 0x11928
+#define WX_RX_LEN_ERROR_FRAMES_L 0x11978
+#define WX_RX_UNDERSIZE_FRAMES_GOOD 0x11938
+#define WX_RX_OVERSIZE_FRAMES_GOOD 0x1193C
+#define WX_MAC_LXONOFFRXC 0x11E0C
+
+/*********************** Receive DMA registers **************************/
+#define WX_RDM_DRP_PKT 0x12500
+#define WX_RDM_PKT_CNT 0x12504
+#define WX_RDM_BYTE_CNT_LSB 0x12508
+#define WX_RDM_BMC2OS_CNT 0x12510
+
/************************* Port Registers ************************************/
/* port cfg Registers */
#define WX_CFG_PORT_CTL 0x14400
@@ -94,6 +113,9 @@
#define WX_TDM_CTL_TE BIT(0) /* Transmit Enable */
#define WX_TDM_PB_THRE(_i) (0x18020 + ((_i) * 4))
#define WX_TDM_RP_IDX 0x1820C
+#define WX_TDM_PKT_CNT 0x18308
+#define WX_TDM_BYTE_CNT_LSB 0x1830C
+#define WX_TDM_OS2BMC_CNT 0x18314
#define WX_TDM_RP_RATE 0x18404
/***************************** RDB registers *********************************/
@@ -106,6 +128,8 @@
/* statistic */
#define WX_RDB_PFCMACDAL 0x19210
#define WX_RDB_PFCMACDAH 0x19214
+#define WX_RDB_LXOFFTXC 0x19218
+#define WX_RDB_LXONTXC 0x1921C
/* ring assignment */
#define WX_RDB_PL_CFG(_i) (0x19300 + ((_i) * 4))
#define WX_RDB_PL_CFG_L4HDR BIT(1)
@@ -218,6 +242,8 @@
#define WX_MNG_MBOX_CTL 0x1E044
#define WX_MNG_MBOX_CTL_SWRDY BIT(0)
#define WX_MNG_MBOX_CTL_FWRDY BIT(2)
+#define WX_MNG_BMC2OS_CNT 0x1E090
+#define WX_MNG_OS2BMC_CNT 0x1E094
/************************************* ETH MAC *****************************/
#define WX_MAC_TX_CFG 0x11000
@@ -251,6 +277,7 @@ enum WX_MSCA_CMD_value {
#define WX_MSCC_SADDR BIT(18)
#define WX_MSCC_BUSY BIT(22)
#define WX_MDIO_CLK(v) FIELD_PREP(GENMASK(21, 19), v)
+#define WX_MDIO_CLAUSE_SELECT 0x11220
#define WX_MMC_CONTROL 0x11800
#define WX_MMC_CONTROL_RSTONRD BIT(2) /* reset on read */
@@ -300,6 +327,7 @@ enum WX_MSCA_CMD_value {
#define WX_PX_RR_WP(_i) (0x01008 + ((_i) * 0x40))
#define WX_PX_RR_RP(_i) (0x0100C + ((_i) * 0x40))
#define WX_PX_RR_CFG(_i) (0x01010 + ((_i) * 0x40))
+#define WX_PX_MPRC(_i) (0x01020 + ((_i) * 0x40))
/* PX_RR_CFG bit definitions */
#define WX_PX_RR_CFG_VLAN BIT(31)
#define WX_PX_RR_CFG_SPLIT_MODE BIT(26)
@@ -767,9 +795,16 @@ struct wx_queue_stats {
u64 bytes;
};
+struct wx_tx_queue_stats {
+ u64 restart_queue;
+ u64 tx_busy;
+};
+
struct wx_rx_queue_stats {
+ u64 non_eop_descs;
u64 csum_good_cnt;
u64 csum_err;
+ u64 alloc_rx_buff_failed;
};
/* iterator for handling rings in ring container */
@@ -813,6 +848,7 @@ struct wx_ring {
struct wx_queue_stats stats;
struct u64_stats_sync syncp;
union {
+ struct wx_tx_queue_stats tx_stats;
struct wx_rx_queue_stats rx_stats;
};
} ____cacheline_internodealigned_in_smp;
@@ -844,6 +880,33 @@ enum wx_isb_idx {
WX_ISB_MAX
};
+/* Statistics counters collected by the MAC */
+struct wx_hw_stats {
+ u64 gprc;
+ u64 gptc;
+ u64 gorc;
+ u64 gotc;
+ u64 tpr;
+ u64 tpt;
+ u64 bprc;
+ u64 bptc;
+ u64 mprc;
+ u64 mptc;
+ u64 roc;
+ u64 ruc;
+ u64 lxonoffrxc;
+ u64 lxontxc;
+ u64 lxofftxc;
+ u64 o2bgptc;
+ u64 b2ospc;
+ u64 o2bspc;
+ u64 b2ogprc;
+ u64 rdmdrop;
+ u64 crcerrs;
+ u64 rlec;
+ u64 qmprc;
+};
+
struct wx {
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
@@ -919,6 +982,14 @@ struct wx {
u32 wol;
u16 bd_number;
+
+ struct wx_hw_stats stats;
+ u64 tx_busy;
+ u64 non_eop_descs;
+ u64 restart_queue;
+ u64 hw_csum_rx_good;
+ u64 hw_csum_rx_error;
+ u64 alloc_rx_buff_failed;
};
#define WX_INTR_ALL (~0ULL)
@@ -952,6 +1023,17 @@ wr32m(struct wx *wx, u32 reg, u32 mask, u32 field)
wr32(wx, reg, val);
}
+static inline u64
+rd64(struct wx *wx, u32 reg)
+{
+ u64 lsb, msb;
+
+ lsb = rd32(wx, reg);
+ msb = rd32(wx, reg + 4);
+
+ return (lsb | msb << 32);
+}
+
/* On some domestic CPU platforms, sometimes IO is not synchronized with
* flushing memory, here use readl() to flush PCI read and write.
*/
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c
index ec0e869e9aac..afbdf6919071 100644
--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_ethtool.c
@@ -49,6 +49,11 @@ static const struct ethtool_ops ngbe_ethtool_ops = {
.nway_reset = phy_ethtool_nway_reset,
.get_wol = ngbe_get_wol,
.set_wol = ngbe_set_wol,
+ .get_sset_count = wx_get_sset_count,
+ .get_strings = wx_get_strings,
+ .get_ethtool_stats = wx_get_ethtool_stats,
+ .get_eth_mac_stats = wx_get_mac_stats,
+ .get_pause_stats = wx_get_pause_stats,
};
void ngbe_set_ethtool_ops(struct net_device *netdev)
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c
index 6562a2de9527..6459bc1d7c22 100644
--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_hw.c
@@ -85,6 +85,8 @@ int ngbe_reset_hw(struct wx *wx)
}
ngbe_reset_misc(wx);
+ wx_clear_hw_cntrs(wx);
+
/* Store the permanent mac address */
wx_get_mac_addr(wx, wx->mac.perm_addr);
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
index 2b431db6085a..3d43f808c86b 100644
--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
@@ -332,6 +332,8 @@ static void ngbe_disable_device(struct wx *wx)
wr32(wx, WX_PX_TR_CFG(reg_idx), WX_PX_TR_CFG_SWFLSH);
}
+
+ wx_update_stats(wx);
}
static void ngbe_down(struct wx *wx)
@@ -677,11 +679,6 @@ static int ngbe_probe(struct pci_dev *pdev,
pci_set_drvdata(pdev, wx);
- netif_info(wx, probe, netdev,
- "PHY: %s, PBA No: Wang Xun GbE Family Controller\n",
- wx->mac_type == em_mac_type_mdi ? "Internal" : "External");
- netif_info(wx, probe, netdev, "%pM\n", netdev->dev_addr);
-
return 0;
err_register:
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c
index 591f5b7b6da6..6302ecca71bb 100644
--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_mdio.c
@@ -29,117 +29,6 @@ static int ngbe_phy_write_reg_internal(struct mii_bus *bus, int phy_addr, int re
return 0;
}
-static int ngbe_phy_read_reg_mdi_c22(struct mii_bus *bus, int phy_addr, int regnum)
-{
- u32 command, val, device_type = 0;
- struct wx *wx = bus->priv;
- int ret;
-
- wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0xF);
- /* setup and write the address cycle command */
- command = WX_MSCA_RA(regnum) |
- WX_MSCA_PA(phy_addr) |
- WX_MSCA_DA(device_type);
- wr32(wx, WX_MSCA, command);
- command = WX_MSCC_CMD(WX_MSCA_CMD_READ) |
- WX_MSCC_BUSY |
- WX_MDIO_CLK(6);
- wr32(wx, WX_MSCC, command);
-
- /* wait to complete */
- ret = read_poll_timeout(rd32, val, !(val & WX_MSCC_BUSY), 1000,
- 100000, false, wx, WX_MSCC);
- if (ret) {
- wx_err(wx, "Mdio read c22 command did not complete.\n");
- return ret;
- }
-
- return (u16)rd32(wx, WX_MSCC);
-}
-
-static int ngbe_phy_write_reg_mdi_c22(struct mii_bus *bus, int phy_addr, int regnum, u16 value)
-{
- u32 command, val, device_type = 0;
- struct wx *wx = bus->priv;
- int ret;
-
- wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0xF);
- /* setup and write the address cycle command */
- command = WX_MSCA_RA(regnum) |
- WX_MSCA_PA(phy_addr) |
- WX_MSCA_DA(device_type);
- wr32(wx, WX_MSCA, command);
- command = value |
- WX_MSCC_CMD(WX_MSCA_CMD_WRITE) |
- WX_MSCC_BUSY |
- WX_MDIO_CLK(6);
- wr32(wx, WX_MSCC, command);
-
- /* wait to complete */
- ret = read_poll_timeout(rd32, val, !(val & WX_MSCC_BUSY), 1000,
- 100000, false, wx, WX_MSCC);
- if (ret)
- wx_err(wx, "Mdio write c22 command did not complete.\n");
-
- return ret;
-}
-
-static int ngbe_phy_read_reg_mdi_c45(struct mii_bus *bus, int phy_addr, int devnum, int regnum)
-{
- struct wx *wx = bus->priv;
- u32 val, command;
- int ret;
-
- wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0x0);
- /* setup and write the address cycle command */
- command = WX_MSCA_RA(regnum) |
- WX_MSCA_PA(phy_addr) |
- WX_MSCA_DA(devnum);
- wr32(wx, WX_MSCA, command);
- command = WX_MSCC_CMD(WX_MSCA_CMD_READ) |
- WX_MSCC_BUSY |
- WX_MDIO_CLK(6);
- wr32(wx, WX_MSCC, command);
-
- /* wait to complete */
- ret = read_poll_timeout(rd32, val, !(val & WX_MSCC_BUSY), 1000,
- 100000, false, wx, WX_MSCC);
- if (ret) {
- wx_err(wx, "Mdio read c45 command did not complete.\n");
- return ret;
- }
-
- return (u16)rd32(wx, WX_MSCC);
-}
-
-static int ngbe_phy_write_reg_mdi_c45(struct mii_bus *bus, int phy_addr,
- int devnum, int regnum, u16 value)
-{
- struct wx *wx = bus->priv;
- int ret, command;
- u16 val;
-
- wr32(wx, NGBE_MDIO_CLAUSE_SELECT, 0x0);
- /* setup and write the address cycle command */
- command = WX_MSCA_RA(regnum) |
- WX_MSCA_PA(phy_addr) |
- WX_MSCA_DA(devnum);
- wr32(wx, WX_MSCA, command);
- command = value |
- WX_MSCC_CMD(WX_MSCA_CMD_WRITE) |
- WX_MSCC_BUSY |
- WX_MDIO_CLK(6);
- wr32(wx, WX_MSCC, command);
-
- /* wait to complete */
- ret = read_poll_timeout(rd32, val, !(val & WX_MSCC_BUSY), 1000,
- 100000, false, wx, WX_MSCC);
- if (ret)
- wx_err(wx, "Mdio write c45 command did not complete.\n");
-
- return ret;
-}
-
static int ngbe_phy_read_reg_c22(struct mii_bus *bus, int phy_addr, int regnum)
{
struct wx *wx = bus->priv;
@@ -148,7 +37,7 @@ static int ngbe_phy_read_reg_c22(struct mii_bus *bus, int phy_addr, int regnum)
if (wx->mac_type == em_mac_type_mdi)
phy_data = ngbe_phy_read_reg_internal(bus, phy_addr, regnum);
else
- phy_data = ngbe_phy_read_reg_mdi_c22(bus, phy_addr, regnum);
+ phy_data = wx_phy_read_reg_mdi_c22(bus, phy_addr, regnum);
return phy_data;
}
@@ -162,7 +51,7 @@ static int ngbe_phy_write_reg_c22(struct mii_bus *bus, int phy_addr,
if (wx->mac_type == em_mac_type_mdi)
ret = ngbe_phy_write_reg_internal(bus, phy_addr, regnum, value);
else
- ret = ngbe_phy_write_reg_mdi_c22(bus, phy_addr, regnum, value);
+ ret = wx_phy_write_reg_mdi_c22(bus, phy_addr, regnum, value);
return ret;
}
@@ -262,8 +151,8 @@ int ngbe_mdio_init(struct wx *wx)
mii_bus->priv = wx;
if (wx->mac_type == em_mac_type_rgmii) {
- mii_bus->read_c45 = ngbe_phy_read_reg_mdi_c45;
- mii_bus->write_c45 = ngbe_phy_write_reg_mdi_c45;
+ mii_bus->read_c45 = wx_phy_read_reg_mdi_c45;
+ mii_bus->write_c45 = wx_phy_write_reg_mdi_c45;
}
snprintf(mii_bus->id, MII_BUS_ID_SIZE, "ngbe-%x", pci_dev_id(pdev));
diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h b/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h
index 72c8cd2d5575..ff754d69bdf6 100644
--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h
+++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_type.h
@@ -59,9 +59,6 @@
#define NGBE_EEPROM_VERSION_L 0x1D
#define NGBE_EEPROM_VERSION_H 0x1E
-/* Media-dependent registers. */
-#define NGBE_MDIO_CLAUSE_SELECT 0x11220
-
/* GPIO Registers */
#define NGBE_GPIO_DR 0x14800
#define NGBE_GPIO_DDR 0x14804
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c
index 859da112586a..3f336a088e43 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_ethtool.c
@@ -39,6 +39,11 @@ static const struct ethtool_ops txgbe_ethtool_ops = {
.get_link = ethtool_op_get_link,
.get_link_ksettings = txgbe_get_link_ksettings,
.set_link_ksettings = txgbe_set_link_ksettings,
+ .get_sset_count = wx_get_sset_count,
+ .get_strings = wx_get_strings,
+ .get_ethtool_stats = wx_get_ethtool_stats,
+ .get_eth_mac_stats = wx_get_mac_stats,
+ .get_pause_stats = wx_get_pause_stats,
};
void txgbe_set_ethtool_ops(struct net_device *netdev)
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
index 372745250270..d6b2b3c781b6 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.c
@@ -71,114 +71,6 @@ static void txgbe_init_thermal_sensor_thresh(struct wx *wx)
}
/**
- * txgbe_read_pba_string - Reads part number string from EEPROM
- * @wx: pointer to hardware structure
- * @pba_num: stores the part number string from the EEPROM
- * @pba_num_size: part number string buffer length
- *
- * Reads the part number string from the EEPROM.
- **/
-int txgbe_read_pba_string(struct wx *wx, u8 *pba_num, u32 pba_num_size)
-{
- u16 pba_ptr, offset, length, data;
- int ret_val;
-
- if (!pba_num) {
- wx_err(wx, "PBA string buffer was null\n");
- return -EINVAL;
- }
-
- ret_val = wx_read_ee_hostif(wx,
- wx->eeprom.sw_region_offset + TXGBE_PBANUM0_PTR,
- &data);
- if (ret_val != 0) {
- wx_err(wx, "NVM Read Error\n");
- return ret_val;
- }
-
- ret_val = wx_read_ee_hostif(wx,
- wx->eeprom.sw_region_offset + TXGBE_PBANUM1_PTR,
- &pba_ptr);
- if (ret_val != 0) {
- wx_err(wx, "NVM Read Error\n");
- return ret_val;
- }
-
- /* if data is not ptr guard the PBA must be in legacy format which
- * means pba_ptr is actually our second data word for the PBA number
- * and we can decode it into an ascii string
- */
- if (data != TXGBE_PBANUM_PTR_GUARD) {
- wx_err(wx, "NVM PBA number is not stored as string\n");
-
- /* we will need 11 characters to store the PBA */
- if (pba_num_size < 11) {
- wx_err(wx, "PBA string buffer too small\n");
- return -ENOMEM;
- }
-
- /* extract hex string from data and pba_ptr */
- pba_num[0] = (data >> 12) & 0xF;
- pba_num[1] = (data >> 8) & 0xF;
- pba_num[2] = (data >> 4) & 0xF;
- pba_num[3] = data & 0xF;
- pba_num[4] = (pba_ptr >> 12) & 0xF;
- pba_num[5] = (pba_ptr >> 8) & 0xF;
- pba_num[6] = '-';
- pba_num[7] = 0;
- pba_num[8] = (pba_ptr >> 4) & 0xF;
- pba_num[9] = pba_ptr & 0xF;
-
- /* put a null character on the end of our string */
- pba_num[10] = '\0';
-
- /* switch all the data but the '-' to hex char */
- for (offset = 0; offset < 10; offset++) {
- if (pba_num[offset] < 0xA)
- pba_num[offset] += '0';
- else if (pba_num[offset] < 0x10)
- pba_num[offset] += 'A' - 0xA;
- }
-
- return 0;
- }
-
- ret_val = wx_read_ee_hostif(wx, pba_ptr, &length);
- if (ret_val != 0) {
- wx_err(wx, "NVM Read Error\n");
- return ret_val;
- }
-
- if (length == 0xFFFF || length == 0) {
- wx_err(wx, "NVM PBA number section invalid length\n");
- return -EINVAL;
- }
-
- /* check if pba_num buffer is big enough */
- if (pba_num_size < (((u32)length * 2) - 1)) {
- wx_err(wx, "PBA string buffer too small\n");
- return -ENOMEM;
- }
-
- /* trim pba length from start of string */
- pba_ptr++;
- length--;
-
- for (offset = 0; offset < length; offset++) {
- ret_val = wx_read_ee_hostif(wx, pba_ptr + offset, &data);
- if (ret_val != 0) {
- wx_err(wx, "NVM Read Error\n");
- return ret_val;
- }
- pba_num[offset * 2] = (u8)(data >> 8);
- pba_num[(offset * 2) + 1] = (u8)(data & 0xFF);
- }
- pba_num[offset * 2] = '\0';
-
- return 0;
-}
-
-/**
* txgbe_calc_eeprom_checksum - Calculates and returns the checksum
* @wx: pointer to hardware structure
 * @checksum: pointer to checksum
@@ -306,6 +198,8 @@ int txgbe_reset_hw(struct wx *wx)
txgbe_reset_misc(wx);
+ wx_clear_hw_cntrs(wx);
+
/* Store the permanent mac address */
wx_get_mac_addr(wx, wx->mac.perm_addr);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
index abc729eb187a..1f3ecf60e3c4 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_hw.h
@@ -6,7 +6,6 @@
int txgbe_disable_sec_tx_path(struct wx *wx);
void txgbe_enable_sec_tx_path(struct wx *wx);
-int txgbe_read_pba_string(struct wx *wx, u8 *pba_num, u32 pba_num_size);
int txgbe_validate_eeprom_checksum(struct wx *wx, u16 *checksum_val);
int txgbe_reset_hw(struct wx *wx);
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
index 5c3aed516ac2..70f0b5c01dac 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
@@ -286,6 +286,8 @@ static void txgbe_disable_device(struct wx *wx)
/* Disable the Tx DMA engine */
wr32m(wx, WX_TDM_CTL, WX_TDM_CTL_TE, 0);
+
+ wx_update_stats(wx);
}
static void txgbe_down(struct wx *wx)
@@ -538,7 +540,6 @@ static int txgbe_probe(struct pci_dev *pdev,
u16 eeprom_verh = 0, eeprom_verl = 0, offset = 0;
u16 eeprom_cfg_blkh = 0, eeprom_cfg_blkl = 0;
u16 build = 0, major = 0, patch = 0;
- u8 part_str[TXGBE_PBANUM_LENGTH];
u32 etrack_id = 0;
err = pci_enable_device_mem(pdev);
@@ -736,13 +737,6 @@ static int txgbe_probe(struct pci_dev *pdev,
else
dev_warn(&pdev->dev, "Failed to enumerate PF devices.\n");
- /* First try to read PBA as a string */
- err = txgbe_read_pba_string(wx, part_str, TXGBE_PBANUM_LENGTH);
- if (err)
- strncpy(part_str, "Unknown", TXGBE_PBANUM_LENGTH);
-
- netif_info(wx, probe, netdev, "%pM\n", netdev->dev_addr);
-
return 0;
err_remove_phy:
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
index 4159c84035fd..b6c06adb8656 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
@@ -647,58 +647,6 @@ static int txgbe_sfp_register(struct txgbe *txgbe)
return 0;
}
-static int txgbe_phy_read(struct mii_bus *bus, int phy_addr,
- int devnum, int regnum)
-{
- struct wx *wx = bus->priv;
- u32 val, command;
- int ret;
-
- /* setup and write the address cycle command */
- command = WX_MSCA_RA(regnum) |
- WX_MSCA_PA(phy_addr) |
- WX_MSCA_DA(devnum);
- wr32(wx, WX_MSCA, command);
-
- command = WX_MSCC_CMD(WX_MSCA_CMD_READ) | WX_MSCC_BUSY;
- wr32(wx, WX_MSCC, command);
-
- /* wait to complete */
- ret = read_poll_timeout(rd32, val, !(val & WX_MSCC_BUSY), 1000,
- 100000, false, wx, WX_MSCC);
- if (ret) {
- wx_err(wx, "Mdio read c45 command did not complete.\n");
- return ret;
- }
-
- return (u16)rd32(wx, WX_MSCC);
-}
-
-static int txgbe_phy_write(struct mii_bus *bus, int phy_addr,
- int devnum, int regnum, u16 value)
-{
- struct wx *wx = bus->priv;
- int ret, command;
- u16 val;
-
- /* setup and write the address cycle command */
- command = WX_MSCA_RA(regnum) |
- WX_MSCA_PA(phy_addr) |
- WX_MSCA_DA(devnum);
- wr32(wx, WX_MSCA, command);
-
- command = value | WX_MSCC_CMD(WX_MSCA_CMD_WRITE) | WX_MSCC_BUSY;
- wr32(wx, WX_MSCC, command);
-
- /* wait to complete */
- ret = read_poll_timeout(rd32, val, !(val & WX_MSCC_BUSY), 1000,
- 100000, false, wx, WX_MSCC);
- if (ret)
- wx_err(wx, "Mdio write c45 command did not complete.\n");
-
- return ret;
-}
-
static int txgbe_ext_phy_init(struct txgbe *txgbe)
{
struct phy_device *phydev;
@@ -715,8 +663,8 @@ static int txgbe_ext_phy_init(struct txgbe *txgbe)
return -ENOMEM;
mii_bus->name = "txgbe_mii_bus";
- mii_bus->read_c45 = &txgbe_phy_read;
- mii_bus->write_c45 = &txgbe_phy_write;
+ mii_bus->read_c45 = &wx_phy_read_reg_mdi_c45;
+ mii_bus->write_c45 = &wx_phy_write_reg_mdi_c45;
mii_bus->parent = &pdev->dev;
mii_bus->phy_mask = GENMASK(31, 1);
mii_bus->priv = wx;
diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
index 51199c355f95..3ba9ce43f394 100644
--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
@@ -88,9 +88,6 @@
#define TXGBE_XPCS_IDA_ADDR 0x13000
#define TXGBE_XPCS_IDA_DATA 0x13004
-/* Part Number String Length */
-#define TXGBE_PBANUM_LENGTH 32
-
/* Checksum and EEPROM pointers */
#define TXGBE_EEPROM_LAST_WORD 0x800
#define TXGBE_EEPROM_CHECKSUM 0x2F
@@ -98,9 +95,6 @@
#define TXGBE_EEPROM_VERSION_L 0x1D
#define TXGBE_EEPROM_VERSION_H 0x1E
#define TXGBE_ISCSI_BOOT_CONFIG 0x07
-#define TXGBE_PBANUM0_PTR 0x05
-#define TXGBE_PBANUM1_PTR 0x06
-#define TXGBE_PBANUM_PTR_GUARD 0xFAFA
#define TXGBE_MAX_MSIX_VECTORS 64
#define TXGBE_MAX_FDIR_INDICES 63
diff --git a/drivers/net/ethernet/wiznet/w5100-spi.c b/drivers/net/ethernet/wiznet/w5100-spi.c
index 7c52796273a4..990a3cce8c0f 100644
--- a/drivers/net/ethernet/wiznet/w5100-spi.c
+++ b/drivers/net/ethernet/wiznet/w5100-spi.c
@@ -14,8 +14,8 @@
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/netdevice.h>
+#include <linux/of.h>
#include <linux/of_net.h>
-#include <linux/of_device.h>
#include <linux/spi/spi.h>
#include "w5100.h"
@@ -420,7 +420,6 @@ MODULE_DEVICE_TABLE(of, w5100_of_match);
static int w5100_spi_probe(struct spi_device *spi)
{
- const struct of_device_id *of_id;
const struct w5100_ops *ops;
kernel_ulong_t driver_data;
const void *mac = NULL;
@@ -432,14 +431,7 @@ static int w5100_spi_probe(struct spi_device *spi)
if (!ret)
mac = tmpmac;
- if (spi->dev.of_node) {
- of_id = of_match_device(w5100_of_match, &spi->dev);
- if (!of_id)
- return -ENODEV;
- driver_data = (kernel_ulong_t)of_id->data;
- } else {
- driver_data = spi_get_device_id(spi)->driver_data;
- }
+ driver_data = (uintptr_t)spi_get_device_match_data(spi);
switch (driver_data) {
case W5100:
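
The w5100-spi change just above replaces the open-coded of_match_device()/spi_get_device_id() split with spi_get_device_match_data(), which returns the per-variant match data regardless of whether the device was instantiated from devicetree, ACPI, or the plain SPI id table. Roughly, the probe-side pattern becomes (a sketch, reusing the driver's existing W5100 enum value):

    static int sketch_spi_probe(struct spi_device *spi)
    {
            kernel_ulong_t variant;

            /* of_device_id::data or spi_device_id::driver_data, whichever matched */
            variant = (kernel_ulong_t)spi_get_device_match_data(spi);

            switch (variant) {
            case W5100:
                    /* select chip-specific ops, as the real probe does */
                    break;
            }
            return 0;
    }
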
diff --git a/drivers/net/ethernet/wiznet/w5100.c b/drivers/net/ethernet/wiznet/w5100.c
index 634946e87e5f..b26fd15c25ae 100644
--- a/drivers/net/ethernet/wiznet/w5100.c
+++ b/drivers/net/ethernet/wiznet/w5100.c
@@ -930,8 +930,8 @@ static irqreturn_t w5100_interrupt(int irq, void *ndev_instance)
if (priv->ops->may_sleep)
queue_work(priv->xfer_wq, &priv->rx_work);
- else if (napi_schedule_prep(&priv->napi))
- __napi_schedule(&priv->napi);
+ else
+ napi_schedule(&priv->napi);
}
return IRQ_HANDLED;
@@ -1062,11 +1062,9 @@ static int w5100_mmio_probe(struct platform_device *pdev)
mac_addr, irq, data ? data->link_gpio : -EINVAL);
}
-static int w5100_mmio_remove(struct platform_device *pdev)
+static void w5100_mmio_remove(struct platform_device *pdev)
{
w5100_remove(&pdev->dev);
-
- return 0;
}
void *w5100_ops_priv(const struct net_device *ndev)
@@ -1273,6 +1271,6 @@ static struct platform_driver w5100_mmio_driver = {
.pm = &w5100_pm_ops,
},
.probe = w5100_mmio_probe,
- .remove = w5100_mmio_remove,
+ .remove_new = w5100_mmio_remove,
};
module_platform_driver(w5100_mmio_driver);
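
This is one of several conversions in this pull (w5100 mmio, w5300, ll_temac, axienet, emaclite, ixp4xx) from .remove to .remove_new, the transitional platform callback that differs only in returning void: the integer a .remove callback used to return was effectively ignored by the driver core, since the device is unbound regardless, so dropping the 'return 0;' loses nothing. Every one of those hunks follows the same shape (names here hypothetical):

    static void foo_remove(struct platform_device *pdev)
    {
            struct net_device *ndev = platform_get_drvdata(pdev);

            unregister_netdev(ndev);
            free_netdev(ndev);
            /* no return value: remove is not allowed to fail anyway */
    }

    static struct platform_driver foo_driver = {
            .probe = foo_probe,
            .remove_new = foo_remove,       /* void-returning variant of .remove */
            .driver = {
                    .name = "foo",
            },
    };
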
diff --git a/drivers/net/ethernet/wiznet/w5300.c b/drivers/net/ethernet/wiznet/w5300.c
index b0958fe8111e..3318b50a5911 100644
--- a/drivers/net/ethernet/wiznet/w5300.c
+++ b/drivers/net/ethernet/wiznet/w5300.c
@@ -627,7 +627,7 @@ err_register:
return err;
}
-static int w5300_remove(struct platform_device *pdev)
+static void w5300_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct w5300_priv *priv = netdev_priv(ndev);
@@ -639,7 +639,6 @@ static int w5300_remove(struct platform_device *pdev)
unregister_netdev(ndev);
free_netdev(ndev);
- return 0;
}
#ifdef CONFIG_PM_SLEEP
@@ -683,7 +682,7 @@ static struct platform_driver w5300_driver = {
.pm = &w5300_pm_ops,
},
.probe = w5300_probe,
- .remove = w5300_remove,
+ .remove_new = w5300_remove,
};
module_platform_driver(w5300_driver);
diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
index 1444b855e7aa..9df39cf8b097 100644
--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
+++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
@@ -1626,7 +1626,7 @@ err_sysfs_create:
return rc;
}
-static int temac_remove(struct platform_device *pdev)
+static void temac_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct temac_local *lp = netdev_priv(ndev);
@@ -1636,7 +1636,6 @@ static int temac_remove(struct platform_device *pdev)
if (lp->phy_node)
of_node_put(lp->phy_node);
temac_mdio_teardown(lp);
- return 0;
}
static const struct of_device_id temac_of_match[] = {
@@ -1650,7 +1649,7 @@ MODULE_DEVICE_TABLE(of, temac_of_match);
static struct platform_driver temac_driver = {
.probe = temac_probe,
- .remove = temac_remove,
+ .remove_new = temac_remove,
.driver = {
.name = "xilinx_temac",
.of_match_table = temac_of_match,
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index b7ec4dafae90..82d0d44b2b02 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -2183,7 +2183,7 @@ free_netdev:
return ret;
}
-static int axienet_remove(struct platform_device *pdev)
+static void axienet_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct axienet_local *lp = netdev_priv(ndev);
@@ -2202,8 +2202,6 @@ static int axienet_remove(struct platform_device *pdev)
clk_disable_unprepare(lp->axi_clk);
free_netdev(ndev);
-
- return 0;
}
static void axienet_shutdown(struct platform_device *pdev)
@@ -2256,7 +2254,7 @@ static DEFINE_SIMPLE_DEV_PM_OPS(axienet_pm_ops,
static struct platform_driver axienet_driver = {
.probe = axienet_probe,
- .remove = axienet_remove,
+ .remove_new = axienet_remove,
.shutdown = axienet_shutdown,
.driver = {
.name = "xilinx_axienet",
diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
index b358ecc67227..765aa516aada 100644
--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
@@ -1180,10 +1180,8 @@ error:
* This function is called if a device is physically removed from the system or
* if the driver module is being unloaded. It frees any resources allocated to
* the device.
- *
- * Return: 0, always.
*/
-static int xemaclite_of_remove(struct platform_device *of_dev)
+static void xemaclite_of_remove(struct platform_device *of_dev)
{
struct net_device *ndev = platform_get_drvdata(of_dev);
@@ -1202,8 +1200,6 @@ static int xemaclite_of_remove(struct platform_device *of_dev)
lp->phy_node = NULL;
free_netdev(ndev);
-
- return 0;
}
#ifdef CONFIG_NET_POLL_CONTROLLER
@@ -1262,7 +1258,7 @@ static struct platform_driver xemaclite_of_driver = {
.of_match_table = xemaclite_of_match,
},
.probe = xemaclite_of_probe,
- .remove = xemaclite_of_remove,
+ .remove_new = xemaclite_of_remove,
};
module_platform_driver(xemaclite_of_driver);
diff --git a/drivers/net/ethernet/xscale/ixp4xx_eth.c b/drivers/net/ethernet/xscale/ixp4xx_eth.c
index 3b0c5f177447..531bf919aef5 100644
--- a/drivers/net/ethernet/xscale/ixp4xx_eth.c
+++ b/drivers/net/ethernet/xscale/ixp4xx_eth.c
@@ -24,6 +24,7 @@
#include <linux/dma-mapping.h>
#include <linux/dmapool.h>
#include <linux/etherdevice.h>
+#include <linux/if_vlan.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/net_tstamp.h>
@@ -63,7 +64,15 @@
#define POOL_ALLOC_SIZE (sizeof(struct desc) * (RX_DESCS + TX_DESCS))
#define REGS_SIZE 0x1000
-#define MAX_MRU 1536 /* 0x600 */
+
+/* MRU is said to be 14320 in a code dump, the SW manual says that
+ * MRU/MTU is 16320 and includes VLAN and ethernet headers.
+ * See "IXP400 Software Programmer's Guide" section 10.3.2, page 161.
+ *
+ * FIXME: we have chosen the safe default (14320) but if you can test
+ * jumboframes, experiment with 16320 and see what happens!
+ */
+#define MAX_MRU (14320 - VLAN_ETH_HLEN)
#define RX_BUFF_SIZE ALIGN((NET_IP_ALIGN) + MAX_MRU, 4)
#define NAPI_WEIGHT 16
@@ -714,9 +723,9 @@ static int eth_poll(struct napi_struct *napi, int budget)
napi_complete(napi);
qmgr_enable_irq(rxq);
if (!qmgr_stat_below_low_watermark(rxq) &&
- napi_reschedule(napi)) { /* not empty again */
+ napi_schedule(napi)) { /* not empty again */
#if DEBUG_RX
- netdev_debug(dev, "eth_poll napi_reschedule succeeded\n");
+ netdev_debug(dev, "eth_poll napi_schedule succeeded\n");
#endif
qmgr_disable_irq(rxq);
continue;
@@ -1182,6 +1191,54 @@ static void destroy_queues(struct port *port)
}
}
+static int ixp4xx_do_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct port *port = netdev_priv(dev);
+ struct npe *npe = port->npe;
+ int framesize, chunks;
+ struct msg msg = {};
+
+ /* adjust for ethernet headers */
+ framesize = new_mtu + VLAN_ETH_HLEN;
+ /* max rx/tx 64 byte chunks */
+ chunks = DIV_ROUND_UP(framesize, 64);
+
+ msg.cmd = NPE_SETMAXFRAMELENGTHS;
+ msg.eth_id = port->id;
+
+ /* Firmware wants to know buffer size in 64 byte chunks */
+ msg.byte2 = chunks << 8;
+ msg.byte3 = chunks << 8;
+
+ msg.byte4 = msg.byte6 = framesize >> 8;
+ msg.byte5 = msg.byte7 = framesize & 0xff;
+
+ if (npe_send_recv_message(npe, &msg, "ETH_SET_MAX_FRAME_LENGTH"))
+ return -EIO;
+ netdev_dbg(dev, "set MTU on NPE %s to %d bytes\n",
+ npe_name(npe), new_mtu);
+
+ return 0;
+}
+
+static int ixp4xx_eth_change_mtu(struct net_device *dev, int new_mtu)
+{
+ int ret;
+
+ /* MTU can only be changed when the interface is up. We also
+ * set the MTU from dev->mtu when opening the device.
+ */
+ if (dev->flags & IFF_UP) {
+ ret = ixp4xx_do_change_mtu(dev, new_mtu);
+ if (ret < 0)
+ return ret;
+ }
+
+ dev->mtu = new_mtu;
+
+ return 0;
+}
+
static int eth_open(struct net_device *dev)
{
struct port *port = netdev_priv(dev);
@@ -1232,6 +1289,8 @@ static int eth_open(struct net_device *dev)
if (npe_send_recv_message(port->npe, &msg, "ETH_SET_FIREWALL_MODE"))
return -EIO;
+ ixp4xx_do_change_mtu(dev, dev->mtu);
+
if ((err = request_queues(port)) != 0)
return err;
@@ -1374,6 +1433,7 @@ static int eth_close(struct net_device *dev)
static const struct net_device_ops ixp4xx_netdev_ops = {
.ndo_open = eth_open,
.ndo_stop = eth_close,
+ .ndo_change_mtu = ixp4xx_eth_change_mtu,
.ndo_start_xmit = eth_xmit,
.ndo_set_rx_mode = eth_set_mcast_list,
.ndo_eth_ioctl = eth_ioctl,
@@ -1488,6 +1548,9 @@ static int ixp4xx_eth_probe(struct platform_device *pdev)
ndev->dev.dma_mask = dev->dma_mask;
ndev->dev.coherent_dma_mask = dev->coherent_dma_mask;
+ ndev->min_mtu = ETH_MIN_MTU;
+ ndev->max_mtu = MAX_MRU;
+
netif_napi_add_weight(ndev, &port->napi, eth_poll, NAPI_WEIGHT);
if (!(port->npe = npe_request(NPE_ID(port->id))))
@@ -1533,7 +1596,7 @@ err_free_mem:
return err;
}
-static int ixp4xx_eth_remove(struct platform_device *pdev)
+static void ixp4xx_eth_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct phy_device *phydev = ndev->phydev;
@@ -1544,7 +1607,6 @@ static int ixp4xx_eth_remove(struct platform_device *pdev)
ixp4xx_mdio_remove();
npe_port_tab[NPE_ID(port->id)] = NULL;
npe_release(port->npe);
- return 0;
}
static const struct of_device_id ixp4xx_eth_of_match[] = {
@@ -1560,7 +1622,7 @@ static struct platform_driver ixp4xx_eth_driver = {
.of_match_table = of_match_ptr(ixp4xx_eth_of_match),
},
.probe = ixp4xx_eth_probe,
- .remove = ixp4xx_eth_remove,
+ .remove_new = ixp4xx_eth_remove,
};
module_platform_driver(ixp4xx_eth_driver);
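
To make the NPE message built in ixp4xx_do_change_mtu() concrete, here is the arithmetic for the common case of new_mtu = 1500, worked out with the same VLAN_ETH_HLEN = 18 overhead the code adds:

    /* ixp4xx_do_change_mtu(dev, 1500):
     *
     *   framesize = 1500 + VLAN_ETH_HLEN    = 1518  (0x5EE)
     *   chunks    = DIV_ROUND_UP(1518, 64)  = 24    (24 * 64 = 1536 >= 1518)
     *
     *   msg.byte2 = msg.byte3 = 24 << 8     = 0x1800  (rx/tx buffer size in 64B chunks)
     *   msg.byte4 = msg.byte6 = 1518 >> 8   = 0x05    (max frame length, high byte)
     *   msg.byte5 = msg.byte7 = 1518 & 0xff = 0xEE    (max frame length, low byte)
     *
     * So the firmware reserves a 1536-byte chunk budget for a 1518-byte
     * maximum frame on both the receive and transmit sides.
     */
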
diff --git a/drivers/net/fjes/fjes_main.c b/drivers/net/fjes/fjes_main.c
index 2513be6d4e11..cd8cf08477ec 100644
--- a/drivers/net/fjes/fjes_main.c
+++ b/drivers/net/fjes/fjes_main.c
@@ -1030,7 +1030,7 @@ static int fjes_poll(struct napi_struct *napi, int budget)
}
if (((long)jiffies - (long)adapter->rx_last_jiffies) < 3) {
- napi_reschedule(napi);
+ napi_schedule(napi);
} else {
spin_lock(&hw->rx_status_lock);
for (epidx = 0; epidx < hw->max_epid; epidx++) {
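
The fjes and ixp4xx hunks (and the w5100 interrupt hunk earlier) rely on the same helper consolidation: napi_schedule() performs the napi_schedule_prep()/__napi_schedule() pair itself, and in this tree it reports through its return value whether the poll was actually armed, which is what lets it stand in for the removed napi_reschedule() and for open-coded prep/schedule sequences. Approximately, and only as a paraphrase of the existing netdevice.h helper:

    static inline bool napi_schedule_sketch(struct napi_struct *n)
    {
            if (napi_schedule_prep(n)) {    /* atomically claim SCHED unless disabled */
                    __napi_schedule(n);     /* queue the NAPI instance for polling    */
                    return true;
            }
            return false;                   /* already scheduled or disabled          */
    }
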
diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
index 78f9d588f712..acd9c615d1f4 100644
--- a/drivers/net/geneve.c
+++ b/drivers/net/geneve.c
@@ -784,117 +784,21 @@ free_dst:
return err;
}
-static struct rtable *geneve_get_v4_rt(struct sk_buff *skb,
- struct net_device *dev,
- struct geneve_sock *gs4,
- struct flowi4 *fl4,
- const struct ip_tunnel_info *info,
- __be16 dport, __be16 sport,
- __u8 *full_tos)
+static u8 geneve_get_dsfield(struct sk_buff *skb, struct net_device *dev,
+ const struct ip_tunnel_info *info,
+ bool *use_cache)
{
- bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
struct geneve_dev *geneve = netdev_priv(dev);
- struct dst_cache *dst_cache;
- struct rtable *rt = NULL;
- __u8 tos;
+ u8 dsfield;
- if (!gs4)
- return ERR_PTR(-EIO);
-
- memset(fl4, 0, sizeof(*fl4));
- fl4->flowi4_mark = skb->mark;
- fl4->flowi4_proto = IPPROTO_UDP;
- fl4->daddr = info->key.u.ipv4.dst;
- fl4->saddr = info->key.u.ipv4.src;
- fl4->fl4_dport = dport;
- fl4->fl4_sport = sport;
- fl4->flowi4_flags = info->key.flow_flags;
-
- tos = info->key.tos;
- if ((tos == 1) && !geneve->cfg.collect_md) {
- tos = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
- use_cache = false;
- }
- fl4->flowi4_tos = RT_TOS(tos);
- if (full_tos)
- *full_tos = tos;
-
- dst_cache = (struct dst_cache *)&info->dst_cache;
- if (use_cache) {
- rt = dst_cache_get_ip4(dst_cache, &fl4->saddr);
- if (rt)
- return rt;
- }
- rt = ip_route_output_key(geneve->net, fl4);
- if (IS_ERR(rt)) {
- netdev_dbg(dev, "no route to %pI4\n", &fl4->daddr);
- return ERR_PTR(-ENETUNREACH);
- }
- if (rt->dst.dev == dev) { /* is this necessary? */
- netdev_dbg(dev, "circular route to %pI4\n", &fl4->daddr);
- ip_rt_put(rt);
- return ERR_PTR(-ELOOP);
- }
- if (use_cache)
- dst_cache_set_ip4(dst_cache, &rt->dst, fl4->saddr);
- return rt;
-}
-
-#if IS_ENABLED(CONFIG_IPV6)
-static struct dst_entry *geneve_get_v6_dst(struct sk_buff *skb,
- struct net_device *dev,
- struct geneve_sock *gs6,
- struct flowi6 *fl6,
- const struct ip_tunnel_info *info,
- __be16 dport, __be16 sport)
-{
- bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
- struct geneve_dev *geneve = netdev_priv(dev);
- struct dst_entry *dst = NULL;
- struct dst_cache *dst_cache;
- __u8 prio;
-
- if (!gs6)
- return ERR_PTR(-EIO);
-
- memset(fl6, 0, sizeof(*fl6));
- fl6->flowi6_mark = skb->mark;
- fl6->flowi6_proto = IPPROTO_UDP;
- fl6->daddr = info->key.u.ipv6.dst;
- fl6->saddr = info->key.u.ipv6.src;
- fl6->fl6_dport = dport;
- fl6->fl6_sport = sport;
-
- prio = info->key.tos;
- if ((prio == 1) && !geneve->cfg.collect_md) {
- prio = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
- use_cache = false;
- }
-
- fl6->flowlabel = ip6_make_flowinfo(prio, info->key.label);
- dst_cache = (struct dst_cache *)&info->dst_cache;
- if (use_cache) {
- dst = dst_cache_get_ip6(dst_cache, &fl6->saddr);
- if (dst)
- return dst;
- }
- dst = ipv6_stub->ipv6_dst_lookup_flow(geneve->net, gs6->sock->sk, fl6,
- NULL);
- if (IS_ERR(dst)) {
- netdev_dbg(dev, "no route to %pI6\n", &fl6->daddr);
- return ERR_PTR(-ENETUNREACH);
- }
- if (dst->dev == dev) { /* is this necessary? */
- netdev_dbg(dev, "circular route to %pI6\n", &fl6->daddr);
- dst_release(dst);
- return ERR_PTR(-ELOOP);
+ dsfield = info->key.tos;
+ if (dsfield == 1 && !geneve->cfg.collect_md) {
+ dsfield = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
+ *use_cache = false;
}
- if (use_cache)
- dst_cache_set_ip6(dst_cache, dst, &fl6->saddr);
- return dst;
+ return dsfield;
}
-#endif
static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
struct geneve_dev *geneve,
@@ -904,19 +808,28 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);
const struct ip_tunnel_key *key = &info->key;
struct rtable *rt;
- struct flowi4 fl4;
- __u8 full_tos;
+ bool use_cache;
__u8 tos, ttl;
__be16 df = 0;
+ __be32 saddr;
__be16 sport;
int err;
if (!pskb_inet_may_pull(skb))
return -EINVAL;
+ if (!gs4)
+ return -EIO;
+
+ use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ tos = geneve_get_dsfield(skb, dev, info, &use_cache);
sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
- rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
- geneve->cfg.info.key.tp_dst, sport, &full_tos);
+
+ rt = udp_tunnel_dst_lookup(skb, dev, geneve->net, 0, &saddr,
+ &info->key,
+ sport, geneve->cfg.info.key.tp_dst, tos,
+ use_cache ?
+ (struct dst_cache *)&info->dst_cache : NULL);
if (IS_ERR(rt))
return PTR_ERR(rt);
@@ -939,8 +852,8 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
return -ENOMEM;
}
- unclone->key.u.ipv4.dst = fl4.saddr;
- unclone->key.u.ipv4.src = fl4.daddr;
+ unclone->key.u.ipv4.dst = saddr;
+ unclone->key.u.ipv4.src = info->key.u.ipv4.dst;
}
if (!pskb_may_pull(skb, ETH_HLEN)) {
@@ -954,13 +867,12 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
return -EMSGSIZE;
}
+ tos = ip_tunnel_ecn_encap(tos, ip_hdr(skb), skb);
if (geneve->cfg.collect_md) {
- tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
ttl = key->ttl;
df = key->tun_flags & TUNNEL_DONT_FRAGMENT ? htons(IP_DF) : 0;
} else {
- tos = ip_tunnel_ecn_encap(full_tos, ip_hdr(skb), skb);
if (geneve->cfg.ttl_inherit)
ttl = ip_tunnel_get_ttl(ip_hdr(skb), skb);
else
@@ -988,7 +900,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
if (unlikely(err))
return err;
- udp_tunnel_xmit_skb(rt, gs4->sock->sk, skb, fl4.saddr, fl4.daddr,
+ udp_tunnel_xmit_skb(rt, gs4->sock->sk, skb, saddr, info->key.u.ipv4.dst,
tos, ttl, df, sport, geneve->cfg.info.key.tp_dst,
!net_eq(geneve->net, dev_net(geneve->dev)),
!(info->key.tun_flags & TUNNEL_CSUM));
@@ -1004,7 +916,8 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);
const struct ip_tunnel_key *key = &info->key;
struct dst_entry *dst = NULL;
- struct flowi6 fl6;
+ struct in6_addr saddr;
+ bool use_cache;
__u8 prio, ttl;
__be16 sport;
int err;
@@ -1012,9 +925,18 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
if (!pskb_inet_may_pull(skb))
return -EINVAL;
+ if (!gs6)
+ return -EIO;
+
+ use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ prio = geneve_get_dsfield(skb, dev, info, &use_cache);
sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
- dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
- geneve->cfg.info.key.tp_dst, sport);
+
+ dst = udp_tunnel6_dst_lookup(skb, dev, geneve->net, gs6->sock, 0,
+ &saddr, key, sport,
+ geneve->cfg.info.key.tp_dst, prio,
+ use_cache ?
+ (struct dst_cache *)&info->dst_cache : NULL);
if (IS_ERR(dst))
return PTR_ERR(dst);
@@ -1036,8 +958,8 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
return -ENOMEM;
}
- unclone->key.u.ipv6.dst = fl6.saddr;
- unclone->key.u.ipv6.src = fl6.daddr;
+ unclone->key.u.ipv6.dst = saddr;
+ unclone->key.u.ipv6.src = info->key.u.ipv6.dst;
}
if (!pskb_may_pull(skb, ETH_HLEN)) {
@@ -1051,12 +973,10 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
return -EMSGSIZE;
}
+ prio = ip_tunnel_ecn_encap(prio, ip_hdr(skb), skb);
if (geneve->cfg.collect_md) {
- prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
ttl = key->ttl;
} else {
- prio = ip_tunnel_ecn_encap(ip6_tclass(fl6.flowlabel),
- ip_hdr(skb), skb);
if (geneve->cfg.ttl_inherit)
ttl = ip_tunnel_get_ttl(ip_hdr(skb), skb);
else
@@ -1069,7 +989,7 @@ static int geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
return err;
udp_tunnel6_xmit_skb(dst, gs6->sock->sk, skb, dev,
- &fl6.saddr, &fl6.daddr, prio, ttl,
+ &saddr, &key->u.ipv6.dst, prio, ttl,
info->key.label, sport, geneve->cfg.info.key.tp_dst,
!(info->key.tun_flags & TUNNEL_CSUM));
return 0;
@@ -1137,35 +1057,54 @@ static int geneve_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
if (ip_tunnel_info_af(info) == AF_INET) {
struct rtable *rt;
- struct flowi4 fl4;
-
struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);
+ bool use_cache;
+ __be32 saddr;
+ u8 tos;
+
+ if (!gs4)
+ return -EIO;
+
+ use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ tos = geneve_get_dsfield(skb, dev, info, &use_cache);
sport = udp_flow_src_port(geneve->net, skb,
1, USHRT_MAX, true);
- rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
- geneve->cfg.info.key.tp_dst, sport, NULL);
+ rt = udp_tunnel_dst_lookup(skb, dev, geneve->net, 0, &saddr,
+ &info->key,
+ sport, geneve->cfg.info.key.tp_dst,
+ tos,
+ use_cache ? &info->dst_cache : NULL);
if (IS_ERR(rt))
return PTR_ERR(rt);
ip_rt_put(rt);
- info->key.u.ipv4.src = fl4.saddr;
+ info->key.u.ipv4.src = saddr;
#if IS_ENABLED(CONFIG_IPV6)
} else if (ip_tunnel_info_af(info) == AF_INET6) {
struct dst_entry *dst;
- struct flowi6 fl6;
-
struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);
+ struct in6_addr saddr;
+ bool use_cache;
+ u8 prio;
+
+ if (!gs6)
+ return -EIO;
+
+ use_cache = ip_tunnel_dst_cache_usable(skb, info);
+ prio = geneve_get_dsfield(skb, dev, info, &use_cache);
sport = udp_flow_src_port(geneve->net, skb,
1, USHRT_MAX, true);
- dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
- geneve->cfg.info.key.tp_dst, sport);
+ dst = udp_tunnel6_dst_lookup(skb, dev, geneve->net, gs6->sock, 0,
+ &saddr, &info->key, sport,
+ geneve->cfg.info.key.tp_dst, prio,
+ use_cache ? &info->dst_cache : NULL);
if (IS_ERR(dst))
return PTR_ERR(dst);
dst_release(dst);
- info->key.u.ipv6.src = fl6.saddr;
+ info->key.u.ipv6.src = saddr;
#endif
} else {
return -EINVAL;
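
The geneve refactor above concentrates one subtle convention in geneve_get_dsfield(): a configured tos/prio of 1 is not a literal DSCP value but the "inherit from the inner header" sentinel, and since the inherited value can change per packet the helper also clears *use_cache so the cached route is bypassed for that flow. Both address families and the metadata-dst path now consume it the same way; distilled from the IPv4 hunk above, not a new interface:

    use_cache = ip_tunnel_dst_cache_usable(skb, info);
    tos = geneve_get_dsfield(skb, dev, info, &use_cache);   /* may clear use_cache */

    rt = udp_tunnel_dst_lookup(skb, dev, geneve->net, 0, &saddr, &info->key,
                               sport, geneve->cfg.info.key.tp_dst, tos,
                               use_cache ? (struct dst_cache *)&info->dst_cache : NULL);
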
diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
index b22596b18ee8..b1919278e931 100644
--- a/drivers/net/gtp.c
+++ b/drivers/net/gtp.c
@@ -630,7 +630,7 @@ static void __gtp_encap_destroy(struct sock *sk)
gtp->sk0 = NULL;
else
gtp->sk1u = NULL;
- udp_sk(sk)->encap_type = 0;
+ WRITE_ONCE(udp_sk(sk)->encap_type, 0);
rcu_assign_sk_user_data(sk, NULL);
release_sock(sk);
sock_put(sk);
@@ -682,7 +682,7 @@ static int gtp_encap_recv(struct sock *sk, struct sk_buff *skb)
netdev_dbg(gtp->dev, "encap_recv sk=%p\n", sk);
- switch (udp_sk(sk)->encap_type) {
+ switch (READ_ONCE(udp_sk(sk)->encap_type)) {
case UDP_ENCAP_GTP0:
netdev_dbg(gtp->dev, "received GTP0 packet\n");
ret = gtp0_udp_encap_recv(gtp, skb);
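
The gtp.c change is an annotation rather than a behavioural fix: encap_type is cleared in __gtp_encap_destroy() while gtp_encap_recv() reads it from the UDP receive path without a shared lock, so the store and the load are wrapped in WRITE_ONCE()/READ_ONCE() to mark the lockless access and keep the compiler from tearing or re-reading the value. The pattern in miniature (not gtp-specific code):

    struct sketch_sock {
            unsigned int encap_type;
    };

    static void sketch_teardown(struct sketch_sock *ss)
    {
            WRITE_ONCE(ss->encap_type, 0);          /* published to lockless readers */
    }

    static int sketch_recv(struct sketch_sock *ss)
    {
            switch (READ_ONCE(ss->encap_type)) {    /* read exactly once */
            case 0:
                    return -EINVAL;                 /* being torn down   */
            default:
                    return 0;
            }
    }
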
diff --git a/drivers/net/hamradio/Kconfig b/drivers/net/hamradio/Kconfig
index a94c7bd5db2e..25b1f929c422 100644
--- a/drivers/net/hamradio/Kconfig
+++ b/drivers/net/hamradio/Kconfig
@@ -94,8 +94,8 @@ config BAYCOM_SER_FDX
driver, "BAYCOM ser12 half-duplex driver for AX.25" is the old
driver and still provided in case this driver does not work with
your serial interface chip. To configure the driver, use the sethdlc
- utility available in the standard ax25 utilities package. For
- information on the modems, see <http://www.baycom.de/> and
+ utility available in the standard ax25 utilities package.
+ For more information on the modems, see
<file:Documentation/networking/device_drivers/hamradio/baycom.rst>.
To compile this driver as a module, choose M here: the module
@@ -112,8 +112,7 @@ config BAYCOM_SER_HDX
still provided in case your serial interface chip does not work with
the full-duplex driver. This driver is deprecated. To configure
the driver, use the sethdlc utility available in the standard ax25
- utilities package. For information on the modems, see
- <http://www.baycom.de/> and
+ utilities package. For more information on the modems, see
<file:Documentation/networking/device_drivers/hamradio/baycom.rst>.
To compile this driver as a module, choose M here: the module
@@ -127,8 +126,8 @@ config BAYCOM_PAR
This is a driver for Baycom style simple amateur radio modems that
connect to a parallel interface. The driver supports the picpar and
par96 designs. To configure the driver, use the sethdlc utility
- available in the standard ax25 utilities package. For information on
- the modems, see <http://www.baycom.de/> and the file
+ available in the standard ax25 utilities package.
+ For more information on the modems, see
<file:Documentation/networking/device_drivers/hamradio/baycom.rst>.
To compile this driver as a module, choose M here: the module
@@ -142,8 +141,8 @@ config BAYCOM_EPP
This is a driver for Baycom style simple amateur radio modems that
connect to a parallel interface. The driver supports the EPP
designs. To configure the driver, use the sethdlc utility available
- in the standard ax25 utilities package. For information on the
- modems, see <http://www.baycom.de/> and the file
+ in the standard ax25 utilities package.
+ For more information on the modems, see
<file:Documentation/networking/device_drivers/hamradio/baycom.rst>.
To compile this driver as a module, choose M here: the module
diff --git a/drivers/net/hamradio/baycom_epp.c b/drivers/net/hamradio/baycom_epp.c
index 83ff882f5d97..ccfc83857c26 100644
--- a/drivers/net/hamradio/baycom_epp.c
+++ b/drivers/net/hamradio/baycom_epp.c
@@ -1074,7 +1074,7 @@ static int baycom_siocdevprivate(struct net_device *dev, struct ifreq *ifr,
return 0;
case HDLCDRVCTL_DRIVERNAME:
- strncpy(hi.data.drivername, "baycom_epp", sizeof(hi.data.drivername));
+ strscpy_pad(hi.data.drivername, "baycom_epp", sizeof(hi.data.drivername));
break;
case HDLCDRVCTL_GETMODE:
@@ -1091,7 +1091,7 @@ static int baycom_siocdevprivate(struct net_device *dev, struct ifreq *ifr,
return baycom_setmode(bc, hi.data.modename);
case HDLCDRVCTL_MODELIST:
- strncpy(hi.data.modename, "intclk,extclk,intmodem,extmodem,divider=x",
+ strscpy_pad(hi.data.modename, "intclk,extclk,intmodem,extmodem,divider=x",
sizeof(hi.data.modename));
break;
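
The two baycom ioctl hunks swap strncpy() for strscpy_pad(): unlike strncpy(), it always NUL-terminates the destination (returning -E2BIG on truncation) and still zero-fills the rest of the buffer, which matters here because hi.data is later copied out to user space. In miniature, with a hypothetical buffer:

    char drivername[32];
    ssize_t n;

    n = strscpy_pad(drivername, "baycom_epp", sizeof(drivername));
    /* n == 10; bytes 10..31 are zeroed.  A too-long source would yield
     * n == -E2BIG and a truncated but still NUL-terminated destination.
     */
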
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 82e9796c8f5e..1dafa44155d0 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -851,7 +851,7 @@ static void netvsc_send_completion(struct net_device *ndev,
msglen);
return;
}
- fallthrough;
+ break;
case NVSP_MSG1_TYPE_SEND_RECV_BUF_COMPLETE:
if (msglen < sizeof(struct nvsp_message_header) +
@@ -860,7 +860,7 @@ static void netvsc_send_completion(struct net_device *ndev,
msglen);
return;
}
- fallthrough;
+ break;
case NVSP_MSG1_TYPE_SEND_SEND_BUF_COMPLETE:
if (msglen < sizeof(struct nvsp_message_header) +
@@ -869,7 +869,7 @@ static void netvsc_send_completion(struct net_device *ndev,
msglen);
return;
}
- fallthrough;
+ break;
case NVSP_MSG5_TYPE_SUBCHANNEL:
if (msglen < sizeof(struct nvsp_message_header) +
@@ -878,10 +878,6 @@ static void netvsc_send_completion(struct net_device *ndev,
msglen);
return;
}
- /* Copy the response back */
- memcpy(&net_device->channel_init_pkt, nvsp_packet,
- sizeof(struct nvsp_message));
- complete(&net_device->channel_init_wait);
break;
case NVSP_MSG1_TYPE_SEND_RNDIS_PKT_COMPLETE:
@@ -904,13 +900,19 @@ static void netvsc_send_completion(struct net_device *ndev,
netvsc_send_tx_complete(ndev, net_device, incoming_channel,
desc, budget);
- break;
+ return;
default:
netdev_err(ndev,
"Unknown send completion type %d received!!\n",
nvsp_packet->hdr.msg_type);
+ return;
}
+
+ /* Copy the response back */
+ memcpy(&net_device->channel_init_pkt, nvsp_packet,
+ sizeof(struct nvsp_message));
+ complete(&net_device->channel_init_wait);
}
static u32 netvsc_get_next_send_section(struct netvsc_device *net_device)
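
The netvsc_send_completion() rework above replaces the fallthrough chain with per-case breaks so that the "copy the response and wake the waiter" step runs exactly once, after the switch, for every init-time completion, while the RNDIS data-path completion and unknown types return early and never touch channel_init_pkt. Condensed shape of the function after the change (length checks and arguments abbreviated):

    switch (nvsp_packet->hdr.msg_type) {
    case NVSP_MSG1_TYPE_SEND_RECV_BUF_COMPLETE:
    case NVSP_MSG1_TYPE_SEND_SEND_BUF_COMPLETE:
    case NVSP_MSG5_TYPE_SUBCHANNEL:
            /* per-type length validation, 'return' on a short message */
            break;                          /* fall out to the shared tail */
    case NVSP_MSG1_TYPE_SEND_RNDIS_PKT_COMPLETE:
            /* data path: netvsc_send_tx_complete() handles it */
            return;
    default:
            /* unknown completion type: log and bail out */
            return;
    }

    /* Shared tail: only init-time responses reach this point. */
    memcpy(&net_device->channel_init_pkt, nvsp_packet, sizeof(struct nvsp_message));
    complete(&net_device->channel_init_wait);
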
diff --git a/drivers/net/ipa/ipa_power.c b/drivers/net/ipa/ipa_power.c
index 0eaa7a7f3343..e223886123ce 100644
--- a/drivers/net/ipa/ipa_power.c
+++ b/drivers/net/ipa/ipa_power.c
@@ -67,7 +67,7 @@ struct ipa_power {
spinlock_t spinlock; /* used with STOPPED/STARTED power flags */
DECLARE_BITMAP(flags, IPA_POWER_FLAG_COUNT);
u32 interconnect_count;
- struct icc_bulk_data interconnect[];
+ struct icc_bulk_data interconnect[] __counted_by(interconnect_count);
};
/* Initialize interconnects required for IPA operation */
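
The one-liner above is part of the tree-wide __counted_by() annotation work: tying a flexible array member to the struct field that stores its element count lets FORTIFY_SOURCE and UBSAN bounds checking validate run-time accesses against that count. A self-contained sketch with hypothetical names, showing the one ordering rule that matters (assign the count before touching the array):

    struct sketch_obj {
            u32 count;
            u32 vals[] __counted_by(count);
    };

    static struct sketch_obj *sketch_alloc(u32 n)
    {
            struct sketch_obj *obj;

            obj = kzalloc(struct_size(obj, vals, n), GFP_KERNEL);
            if (!obj)
                    return NULL;
            obj->count = n;         /* set before vals[] is ever indexed */
            return obj;
    }
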
diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
index c0c49f181367..21e9cac73121 100644
--- a/drivers/net/ipvlan/ipvlan_core.c
+++ b/drivers/net/ipvlan/ipvlan_core.c
@@ -441,12 +441,12 @@ static int ipvlan_process_v4_outbound(struct sk_buff *skb)
err = ip_local_out(net, skb->sk, skb);
if (unlikely(net_xmit_eval(err)))
- dev->stats.tx_errors++;
+ DEV_STATS_INC(dev, tx_errors);
else
ret = NET_XMIT_SUCCESS;
goto out;
err:
- dev->stats.tx_errors++;
+ DEV_STATS_INC(dev, tx_errors);
kfree_skb(skb);
out:
return ret;
@@ -482,12 +482,12 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
err = ip6_local_out(net, skb->sk, skb);
if (unlikely(net_xmit_eval(err)))
- dev->stats.tx_errors++;
+ DEV_STATS_INC(dev, tx_errors);
else
ret = NET_XMIT_SUCCESS;
goto out;
err:
- dev->stats.tx_errors++;
+ DEV_STATS_INC(dev, tx_errors);
kfree_skb(skb);
out:
return ret;
diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c
index 1b55928e89b8..57c79f5f2991 100644
--- a/drivers/net/ipvlan/ipvlan_main.c
+++ b/drivers/net/ipvlan/ipvlan_main.c
@@ -324,6 +324,7 @@ static void ipvlan_get_stats64(struct net_device *dev,
s->rx_dropped = rx_errs;
s->tx_dropped = tx_drps;
}
+ s->tx_errors = DEV_STATS_READ(dev, tx_errors);
}
static int ipvlan_vlan_rx_add_vid(struct net_device *dev, __be16 proto, u16 vid)
diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index c5cd4551c67c..9663050a852d 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -3657,9 +3657,9 @@ static void macsec_get_stats64(struct net_device *dev,
dev_fetch_sw_netstats(s, dev->tstats);
- s->rx_dropped = atomic_long_read(&dev->stats.__rx_dropped);
- s->tx_dropped = atomic_long_read(&dev->stats.__tx_dropped);
- s->rx_errors = atomic_long_read(&dev->stats.__rx_errors);
+ s->rx_dropped = DEV_STATS_READ(dev, rx_dropped);
+ s->tx_dropped = DEV_STATS_READ(dev, tx_dropped);
+ s->rx_errors = DEV_STATS_READ(dev, rx_errors);
}
static int macsec_get_iflink(const struct net_device *dev)
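
The ipvlan and macsec hunks converge on the same pair of helpers: the legacy dev->stats fields behind them are atomic_long_t (the __rx_dropped/__tx_errors names visible in the removed macsec lines), and DEV_STATS_INC()/DEV_STATS_READ() are thin wrappers around atomic_long_inc()/atomic_long_read(), so drivers without per-CPU counters can bump error counts from concurrent contexts without racing. Usage in miniature, mirroring the hunks above:

    /* error path: safe from any context, no extra locking */
    DEV_STATS_INC(dev, tx_errors);

    /* ndo_get_stats64: fold the accumulated total into the report */
    stats->tx_errors = DEV_STATS_READ(dev, tx_errors);
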
diff --git a/drivers/net/mctp/Kconfig b/drivers/net/mctp/Kconfig
index dc71657d9184..ce9d2d2ccf3b 100644
--- a/drivers/net/mctp/Kconfig
+++ b/drivers/net/mctp/Kconfig
@@ -33,6 +33,15 @@ config MCTP_TRANSPORT_I2C
from DMTF specification DSP0237. A MCTP protocol network device is
created for each I2C bus that has been assigned a mctp-i2c device.
+config MCTP_TRANSPORT_I3C
+ tristate "MCTP I3C transport"
+ depends on I3C
+ help
+ Provides a driver to access MCTP devices over I3C transport,
+ from DMTF specification DSP0233.
+ A MCTP protocol network device is created for each I3C bus
+ having a "mctp-controller" devicetree property.
+
endmenu
endif
diff --git a/drivers/net/mctp/Makefile b/drivers/net/mctp/Makefile
index 1ca3e6028f77..e1cb99ced54a 100644
--- a/drivers/net/mctp/Makefile
+++ b/drivers/net/mctp/Makefile
@@ -1,2 +1,3 @@
obj-$(CONFIG_MCTP_SERIAL) += mctp-serial.o
obj-$(CONFIG_MCTP_TRANSPORT_I2C) += mctp-i2c.o
+obj-$(CONFIG_MCTP_TRANSPORT_I3C) += mctp-i3c.o
diff --git a/drivers/net/mctp/mctp-i3c.c b/drivers/net/mctp/mctp-i3c.c
new file mode 100644
index 000000000000..8e989c157caa
--- /dev/null
+++ b/drivers/net/mctp/mctp-i3c.c
@@ -0,0 +1,755 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Implements DMTF specification
+ * "DSP0233 Management Component Transport Protocol (MCTP) I3C Transport
+ * Binding"
+ * https://www.dmtf.org/sites/default/files/standards/documents/DSP0233_1.0.0.pdf
+ *
+ * Copyright (c) 2023 Code Construct
+ */
+
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/i3c/device.h>
+#include <linux/i3c/master.h>
+#include <linux/if_arp.h>
+#include <asm/unaligned.h>
+#include <net/mctp.h>
+#include <net/mctpdevice.h>
+
+#define MCTP_I3C_MAXBUF 65536
+/* 48 bit Provisioned Id */
+#define PID_SIZE 6
+
+/* 64 byte payload, 4 byte MCTP header */
+static const int MCTP_I3C_MINMTU = 64 + 4;
+/* One byte less to allow for the PEC */
+static const int MCTP_I3C_MAXMTU = MCTP_I3C_MAXBUF - 1;
+/* 4 byte MCTP header, no data, 1 byte PEC */
+static const int MCTP_I3C_MINLEN = 4 + 1;
+
+/* Sufficient for 64kB at min mtu */
+static const int MCTP_I3C_TX_QUEUE_LEN = 1100;
+
+/* Somewhat arbitrary */
+static const int MCTP_I3C_IBI_SLOTS = 8;
+
+/* Mandatory Data Byte in an IBI, from DSP0233 */
+#define I3C_MDB_MCTP 0xAE
+/* From MIPI Device Characteristics Register (DCR) Assignments */
+#define I3C_DCR_MCTP 0xCC
+
+static const char *MCTP_I3C_OF_PROP = "mctp-controller";
+
+/* List of mctp_i3c_busdev */
+static LIST_HEAD(busdevs);
+/* Protects busdevs, as well as mctp_i3c_bus.devs lists */
+static DEFINE_MUTEX(busdevs_lock);
+
+struct mctp_i3c_bus {
+ struct net_device *ndev;
+
+ struct task_struct *tx_thread;
+ wait_queue_head_t tx_wq;
+ /* tx_lock protects tx_skb and devs */
+ spinlock_t tx_lock;
+ /* Next skb to transmit */
+ struct sk_buff *tx_skb;
+ /* Scratch buffer for xmit */
+ u8 tx_scratch[MCTP_I3C_MAXBUF];
+
+ /* Element of busdevs */
+ struct list_head list;
+
+ /* Provisioned ID of our controller */
+ u64 pid;
+
+ struct i3c_bus *bus;
+ /* Head of mctp_i3c_device.list. Protected by busdevs_lock */
+ struct list_head devs;
+};
+
+struct mctp_i3c_device {
+ struct i3c_device *i3c;
+ struct mctp_i3c_bus *mbus;
+ struct list_head list; /* Element of mctp_i3c_bus.devs */
+
+ /* Held while tx_thread is using this device */
+ struct mutex lock;
+
+ /* Whether BCR indicates MDB is present in IBI */
+ bool have_mdb;
+ /* I3C dynamic address */
+ u8 addr;
+ /* Maximum read length */
+ u16 mrl;
+ /* Maximum write length */
+ u16 mwl;
+ /* Provisioned ID */
+ u64 pid;
+};
+
+/* We synthesise a mac header using the Provisioned ID.
+ * Used to pass dest to mctp_i3c_start_xmit.
+ */
+struct mctp_i3c_internal_hdr {
+ u8 dest[PID_SIZE];
+ u8 source[PID_SIZE];
+} __packed;
+
+static int mctp_i3c_read(struct mctp_i3c_device *mi)
+{
+ struct i3c_priv_xfer xfer = { .rnw = 1, .len = mi->mrl };
+ struct net_device_stats *stats = &mi->mbus->ndev->stats;
+ struct mctp_i3c_internal_hdr *ihdr = NULL;
+ struct sk_buff *skb = NULL;
+ struct mctp_skb_cb *cb;
+ int net_status, rc;
+ u8 pec, addr;
+
+ skb = netdev_alloc_skb(mi->mbus->ndev,
+ mi->mrl + sizeof(struct mctp_i3c_internal_hdr));
+ if (!skb) {
+ stats->rx_dropped++;
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ skb->protocol = htons(ETH_P_MCTP);
+ /* Create a header for internal use */
+ skb_reset_mac_header(skb);
+ ihdr = skb_put(skb, sizeof(struct mctp_i3c_internal_hdr));
+ put_unaligned_be48(mi->pid, ihdr->source);
+ put_unaligned_be48(mi->mbus->pid, ihdr->dest);
+ skb_pull(skb, sizeof(struct mctp_i3c_internal_hdr));
+
+ xfer.data.in = skb_put(skb, mi->mrl);
+
+ rc = i3c_device_do_priv_xfers(mi->i3c, &xfer, 1);
+ if (rc < 0)
+ goto err;
+
+ if (WARN_ON_ONCE(xfer.len > mi->mrl)) {
+ /* Bad i3c bus driver */
+ rc = -EIO;
+ goto err;
+ }
+ if (xfer.len < MCTP_I3C_MINLEN) {
+ stats->rx_length_errors++;
+ rc = -EIO;
+ goto err;
+ }
+
+ /* check PEC, including address byte */
+ addr = mi->addr << 1 | 1;
+ pec = i2c_smbus_pec(0, &addr, 1);
+ pec = i2c_smbus_pec(pec, xfer.data.in, xfer.len - 1);
+ if (pec != ((u8 *)xfer.data.in)[xfer.len - 1]) {
+ stats->rx_crc_errors++;
+ rc = -EINVAL;
+ goto err;
+ }
+
+ /* Remove PEC */
+ skb_trim(skb, xfer.len - 1);
+
+ cb = __mctp_cb(skb);
+ cb->halen = PID_SIZE;
+ put_unaligned_be48(mi->pid, cb->haddr);
+
+ net_status = netif_rx(skb);
+
+ if (net_status == NET_RX_SUCCESS) {
+ stats->rx_packets++;
+ stats->rx_bytes += xfer.len - 1;
+ } else {
+ stats->rx_dropped++;
+ }
+
+ return 0;
+err:
+ kfree_skb(skb);
+ return rc;
+}
+
+static void mctp_i3c_ibi_handler(struct i3c_device *i3c,
+ const struct i3c_ibi_payload *payload)
+{
+ struct mctp_i3c_device *mi = i3cdev_get_drvdata(i3c);
+
+ if (WARN_ON_ONCE(!mi))
+ return;
+
+ if (mi->have_mdb) {
+ if (payload->len > 0) {
+ if (((u8 *)payload->data)[0] != I3C_MDB_MCTP) {
+ /* Not a mctp-i3c interrupt, ignore it */
+ return;
+ }
+ } else {
+ /* The BCR advertised a Mandatory Data Byte but the
+ * device didn't send one.
+ */
+ dev_warn_once(i3cdev_to_dev(i3c), "IBI with missing MDB");
+ }
+ }
+
+ mctp_i3c_read(mi);
+}
+
+static int mctp_i3c_setup(struct mctp_i3c_device *mi)
+{
+ const struct i3c_ibi_setup ibi = {
+ .max_payload_len = 1,
+ .num_slots = MCTP_I3C_IBI_SLOTS,
+ .handler = mctp_i3c_ibi_handler,
+ };
+ struct i3c_device_info info;
+ int rc;
+
+ i3c_device_get_info(mi->i3c, &info);
+ mi->have_mdb = info.bcr & BIT(2);
+ mi->addr = info.dyn_addr;
+ mi->mwl = info.max_write_len;
+ mi->mrl = info.max_read_len;
+ mi->pid = info.pid;
+
+ rc = i3c_device_request_ibi(mi->i3c, &ibi);
+ if (rc == -ENOTSUPP) {
+ /* This driver only supports In-Band Interrupt mode.
+ * Support for Polling Mode could be added if required.
+ * (ENOTSUPP is from the i3c layer, not EOPNOTSUPP).
+ */
+ dev_warn(i3cdev_to_dev(mi->i3c),
+ "Failed, bus driver doesn't support In-Band Interrupts");
+ goto err;
+ } else if (rc < 0) {
+ dev_err(i3cdev_to_dev(mi->i3c),
+ "Failed requesting IBI (%d)\n", rc);
+ goto err;
+ }
+
+ rc = i3c_device_enable_ibi(mi->i3c);
+ if (rc < 0) {
+ /* Assume a driver supporting request_ibi also
+ * supports enable_ibi.
+ */
+ dev_err(i3cdev_to_dev(mi->i3c), "Failed enabling IBI (%d)\n", rc);
+ goto err_free_ibi;
+ }
+
+ return 0;
+
+err_free_ibi:
+ i3c_device_free_ibi(mi->i3c);
+
+err:
+ return rc;
+}
+
+/* Adds a new MCTP i3c_device to a bus */
+static int mctp_i3c_add_device(struct mctp_i3c_bus *mbus,
+ struct i3c_device *i3c)
+__must_hold(&busdevs_lock)
+{
+ struct mctp_i3c_device *mi = NULL;
+ int rc;
+
+ mi = kzalloc(sizeof(*mi), GFP_KERNEL);
+ if (!mi) {
+ rc = -ENOMEM;
+ goto err;
+ }
+ mi->mbus = mbus;
+ mi->i3c = i3c;
+ mutex_init(&mi->lock);
+ list_add(&mi->list, &mbus->devs);
+
+ i3cdev_set_drvdata(i3c, mi);
+ rc = mctp_i3c_setup(mi);
+ if (rc < 0)
+ goto err_free;
+
+ return 0;
+
+err_free:
+ list_del(&mi->list);
+ kfree(mi);
+
+err:
+ dev_warn(i3cdev_to_dev(i3c), "Error adding mctp-i3c device, %d\n", rc);
+ return rc;
+}
+
+static int mctp_i3c_probe(struct i3c_device *i3c)
+{
+ struct mctp_i3c_bus *b = NULL, *mbus = NULL;
+
+ /* Look for a known bus */
+ mutex_lock(&busdevs_lock);
+ list_for_each_entry(b, &busdevs, list)
+ if (b->bus == i3c->bus) {
+ mbus = b;
+ break;
+ }
+ mutex_unlock(&busdevs_lock);
+
+ if (!mbus) {
+ /* probably no "mctp-controller" property on the i3c bus */
+ return -ENODEV;
+ }
+
+ return mctp_i3c_add_device(mbus, i3c);
+}
+
+static void mctp_i3c_remove_device(struct mctp_i3c_device *mi)
+__must_hold(&busdevs_lock)
+{
+ /* Ensure the tx thread isn't using the device */
+ mutex_lock(&mi->lock);
+
+ /* Counterpart of mctp_i3c_setup */
+ i3c_device_disable_ibi(mi->i3c);
+ i3c_device_free_ibi(mi->i3c);
+
+ /* Counterpart of mctp_i3c_add_device */
+ i3cdev_set_drvdata(mi->i3c, NULL);
+ list_del(&mi->list);
+
+ /* Safe to unlock after removing from the list */
+ mutex_unlock(&mi->lock);
+ kfree(mi);
+}
+
+static void mctp_i3c_remove(struct i3c_device *i3c)
+{
+ struct mctp_i3c_device *mi = i3cdev_get_drvdata(i3c);
+
+ /* We may have received a Bus Remove notify prior to device remove,
+ * so mi will already be removed.
+ */
+ if (!mi)
+ return;
+
+ mutex_lock(&busdevs_lock);
+ mctp_i3c_remove_device(mi);
+ mutex_unlock(&busdevs_lock);
+}
+
+/* Returns the device for an address, with mi->lock held */
+static struct mctp_i3c_device *
+mctp_i3c_lookup(struct mctp_i3c_bus *mbus, u64 pid)
+{
+ struct mctp_i3c_device *mi = NULL, *ret = NULL;
+
+ mutex_lock(&busdevs_lock);
+ list_for_each_entry(mi, &mbus->devs, list)
+ if (mi->pid == pid) {
+ ret = mi;
+ mutex_lock(&mi->lock);
+ break;
+ }
+ mutex_unlock(&busdevs_lock);
+ return ret;
+}
+
+static void mctp_i3c_xmit(struct mctp_i3c_bus *mbus, struct sk_buff *skb)
+{
+ struct net_device_stats *stats = &mbus->ndev->stats;
+ struct i3c_priv_xfer xfer = { .rnw = false };
+ struct mctp_i3c_internal_hdr *ihdr = NULL;
+ struct mctp_i3c_device *mi = NULL;
+ unsigned int data_len;
+ u8 *data = NULL;
+ u8 addr, pec;
+ int rc = 0;
+ u64 pid;
+
+ skb_pull(skb, sizeof(struct mctp_i3c_internal_hdr));
+ data_len = skb->len;
+
+ ihdr = (void *)skb_mac_header(skb);
+
+ pid = get_unaligned_be48(ihdr->dest);
+ mi = mctp_i3c_lookup(mbus, pid);
+ if (!mi) {
+ /* I3C endpoint went away after the packet was enqueued? */
+ stats->tx_dropped++;
+ goto out;
+ }
+
+ if (WARN_ON_ONCE(data_len + 1 > MCTP_I3C_MAXBUF))
+ goto out;
+
+ if (data_len + 1 > (unsigned int)mi->mwl) {
+ /* Route MTU was larger than supported by the endpoint */
+ stats->tx_dropped++;
+ goto out;
+ }
+
+ /* Need a linear buffer with space for the PEC */
+ xfer.len = data_len + 1;
+ if (skb_tailroom(skb) >= 1) {
+ skb_put(skb, 1);
+ data = skb->data;
+ } else {
+ /* Otherwise need to copy the buffer */
+ skb_copy_bits(skb, 0, mbus->tx_scratch, skb->len);
+ data = mbus->tx_scratch;
+ }
+
+ /* PEC calculation */
+ addr = mi->addr << 1;
+ pec = i2c_smbus_pec(0, &addr, 1);
+ pec = i2c_smbus_pec(pec, data, data_len);
+ data[data_len] = pec;
+
+ xfer.data.out = data;
+ rc = i3c_device_do_priv_xfers(mi->i3c, &xfer, 1);
+ if (rc == 0) {
+ stats->tx_bytes += data_len;
+ stats->tx_packets++;
+ } else {
+ stats->tx_errors++;
+ }
+
+out:
+ if (mi)
+ mutex_unlock(&mi->lock);
+}
+
+static int mctp_i3c_tx_thread(void *data)
+{
+ struct mctp_i3c_bus *mbus = data;
+ struct sk_buff *skb;
+
+ for (;;) {
+ if (kthread_should_stop())
+ break;
+
+ spin_lock_bh(&mbus->tx_lock);
+ skb = mbus->tx_skb;
+ mbus->tx_skb = NULL;
+ spin_unlock_bh(&mbus->tx_lock);
+
+ if (netif_queue_stopped(mbus->ndev))
+ netif_wake_queue(mbus->ndev);
+
+ if (skb) {
+ mctp_i3c_xmit(mbus, skb);
+ kfree_skb(skb);
+ } else {
+ wait_event_idle(mbus->tx_wq,
+ mbus->tx_skb || kthread_should_stop());
+ }
+ }
+
+ return 0;
+}
+
+static netdev_tx_t mctp_i3c_start_xmit(struct sk_buff *skb,
+ struct net_device *ndev)
+{
+ struct mctp_i3c_bus *mbus = netdev_priv(ndev);
+ netdev_tx_t ret;
+
+ spin_lock(&mbus->tx_lock);
+ netif_stop_queue(ndev);
+ if (mbus->tx_skb) {
+ dev_warn_ratelimited(&ndev->dev, "TX with queue stopped");
+ ret = NETDEV_TX_BUSY;
+ } else {
+ mbus->tx_skb = skb;
+ ret = NETDEV_TX_OK;
+ }
+ spin_unlock(&mbus->tx_lock);
+
+ if (ret == NETDEV_TX_OK)
+ wake_up(&mbus->tx_wq);
+
+ return ret;
+}
+
+static void mctp_i3c_bus_free(struct mctp_i3c_bus *mbus)
+__must_hold(&busdevs_lock)
+{
+ struct mctp_i3c_device *mi = NULL, *tmp = NULL;
+
+ if (mbus->tx_thread) {
+ kthread_stop(mbus->tx_thread);
+ mbus->tx_thread = NULL;
+ }
+
+ /* Remove any child devices */
+ list_for_each_entry_safe(mi, tmp, &mbus->devs, list) {
+ mctp_i3c_remove_device(mi);
+ }
+
+ kfree_skb(mbus->tx_skb);
+ list_del(&mbus->list);
+}
+
+static void mctp_i3c_ndo_uninit(struct net_device *ndev)
+{
+ struct mctp_i3c_bus *mbus = netdev_priv(ndev);
+
+ /* Perform cleanup here to ensure there are no remaining references */
+ mctp_i3c_bus_free(mbus);
+}
+
+static int mctp_i3c_header_create(struct sk_buff *skb, struct net_device *dev,
+ unsigned short type, const void *daddr,
+ const void *saddr, unsigned int len)
+{
+ struct mctp_i3c_internal_hdr *ihdr;
+
+ skb_push(skb, sizeof(struct mctp_i3c_internal_hdr));
+ skb_reset_mac_header(skb);
+ ihdr = (void *)skb_mac_header(skb);
+ memcpy(ihdr->dest, daddr, PID_SIZE);
+ memcpy(ihdr->source, saddr, PID_SIZE);
+ return 0;
+}
+
+static const struct net_device_ops mctp_i3c_ops = {
+ .ndo_start_xmit = mctp_i3c_start_xmit,
+ .ndo_uninit = mctp_i3c_ndo_uninit,
+};
+
+static const struct header_ops mctp_i3c_headops = {
+ .create = mctp_i3c_header_create,
+};
+
+static void mctp_i3c_net_setup(struct net_device *dev)
+{
+ dev->type = ARPHRD_MCTP;
+
+ dev->mtu = MCTP_I3C_MAXMTU;
+ dev->min_mtu = MCTP_I3C_MINMTU;
+ dev->max_mtu = MCTP_I3C_MAXMTU;
+ dev->tx_queue_len = MCTP_I3C_TX_QUEUE_LEN;
+
+ dev->hard_header_len = sizeof(struct mctp_i3c_internal_hdr);
+ dev->addr_len = PID_SIZE;
+
+ dev->netdev_ops = &mctp_i3c_ops;
+ dev->header_ops = &mctp_i3c_headops;
+}
+
+static bool mctp_i3c_is_mctp_controller(struct i3c_bus *bus)
+{
+ struct i3c_dev_desc *master = bus->cur_master;
+
+ if (!master)
+ return false;
+
+ return of_property_read_bool(master->common.master->dev.of_node,
+ MCTP_I3C_OF_PROP);
+}
+
+/* Returns the Provisioned Id of a local bus master */
+static int mctp_i3c_bus_local_pid(struct i3c_bus *bus, u64 *ret_pid)
+{
+ struct i3c_dev_desc *master;
+
+ master = bus->cur_master;
+ if (WARN_ON_ONCE(!master))
+ return -ENOENT;
+ *ret_pid = master->info.pid;
+
+ return 0;
+}
+
+/* Returns an ERR_PTR on failure */
+static struct mctp_i3c_bus *mctp_i3c_bus_add(struct i3c_bus *bus)
+__must_hold(&busdevs_lock)
+{
+ struct mctp_i3c_bus *mbus = NULL;
+ struct net_device *ndev = NULL;
+ char namebuf[IFNAMSIZ];
+ u8 addr[PID_SIZE];
+ int rc;
+
+ if (!mctp_i3c_is_mctp_controller(bus))
+ return ERR_PTR(-ENOENT);
+
+ snprintf(namebuf, sizeof(namebuf), "mctpi3c%d", bus->id);
+ ndev = alloc_netdev(sizeof(*mbus), namebuf, NET_NAME_ENUM,
+ mctp_i3c_net_setup);
+ if (!ndev) {
+ rc = -ENOMEM;
+ goto err;
+ }
+
+ mbus = netdev_priv(ndev);
+ mbus->ndev = ndev;
+ mbus->bus = bus;
+ INIT_LIST_HEAD(&mbus->devs);
+ list_add(&mbus->list, &busdevs);
+
+ rc = mctp_i3c_bus_local_pid(bus, &mbus->pid);
+ if (rc < 0) {
+ dev_err(&ndev->dev, "No I3C PID available\n");
+ goto err_free_uninit;
+ }
+ put_unaligned_be48(mbus->pid, addr);
+ dev_addr_set(ndev, addr);
+
+ init_waitqueue_head(&mbus->tx_wq);
+ spin_lock_init(&mbus->tx_lock);
+ mbus->tx_thread = kthread_run(mctp_i3c_tx_thread, mbus,
+ "%s/tx", ndev->name);
+ if (IS_ERR(mbus->tx_thread)) {
+ dev_warn(&ndev->dev, "Error creating thread: %pe\n",
+ mbus->tx_thread);
+ rc = PTR_ERR(mbus->tx_thread);
+ mbus->tx_thread = NULL;
+ goto err_free_uninit;
+ }
+
+ rc = mctp_register_netdev(ndev, NULL);
+ if (rc < 0) {
+ dev_warn(&ndev->dev, "netdev register failed: %d\n", rc);
+ goto err_free_netdev;
+ }
+ return mbus;
+
+err_free_uninit:
+ /* uninit will not get called if a netdev has not been registered,
+ * so we perform the same mbus cleanup manually.
+ */
+ mctp_i3c_bus_free(mbus);
+
+err_free_netdev:
+ free_netdev(ndev);
+
+err:
+ return ERR_PTR(rc);
+}
+
+static void mctp_i3c_bus_remove(struct mctp_i3c_bus *mbus)
+__must_hold(&busdevs_lock)
+{
+ /* Unregister calls through to ndo_uninit -> mctp_i3c_bus_free() */
+ mctp_unregister_netdev(mbus->ndev);
+
+ free_netdev(mbus->ndev);
+ /* mbus is deallocated */
+}
+
+/* Removes all mctp-i3c busses */
+static void mctp_i3c_bus_remove_all(void)
+{
+ struct mctp_i3c_bus *mbus = NULL, *tmp = NULL;
+
+ mutex_lock(&busdevs_lock);
+ list_for_each_entry_safe(mbus, tmp, &busdevs, list) {
+ mctp_i3c_bus_remove(mbus);
+ }
+ mutex_unlock(&busdevs_lock);
+}
+
+/* Adds a i3c_bus if it isn't already in the busdevs list.
+ * Suitable as an i3c_for_each_bus_locked callback.
+ */
+static int mctp_i3c_bus_add_new(struct i3c_bus *bus, void *data)
+{
+ struct mctp_i3c_bus *mbus = NULL, *tmp = NULL;
+ bool exists = false;
+
+ mutex_lock(&busdevs_lock);
+ list_for_each_entry_safe(mbus, tmp, &busdevs, list)
+ if (mbus->bus == bus)
+ exists = true;
+
+ /* It is OK for a bus to already exist. That can occur due to
+ * the race in mod_init between the notifier and for_each_bus.
+ */
+ if (!exists)
+ mctp_i3c_bus_add(bus);
+ mutex_unlock(&busdevs_lock);
+ return 0;
+}
+
+static void mctp_i3c_notify_bus_remove(struct i3c_bus *bus)
+{
+ struct mctp_i3c_bus *mbus = NULL, *tmp;
+
+ mutex_lock(&busdevs_lock);
+ list_for_each_entry_safe(mbus, tmp, &busdevs, list)
+ if (mbus->bus == bus)
+ mctp_i3c_bus_remove(mbus);
+ mutex_unlock(&busdevs_lock);
+}
+
+static int mctp_i3c_notifier_call(struct notifier_block *nb,
+ unsigned long action, void *data)
+{
+ switch (action) {
+ case I3C_NOTIFY_BUS_ADD:
+ mctp_i3c_bus_add_new((struct i3c_bus *)data, NULL);
+ break;
+ case I3C_NOTIFY_BUS_REMOVE:
+ mctp_i3c_notify_bus_remove((struct i3c_bus *)data);
+ break;
+ }
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block mctp_i3c_notifier = {
+ .notifier_call = mctp_i3c_notifier_call,
+};
+
+static const struct i3c_device_id mctp_i3c_ids[] = {
+ I3C_CLASS(I3C_DCR_MCTP, NULL),
+ { 0 },
+};
+
+static struct i3c_driver mctp_i3c_driver = {
+ .driver = {
+ .name = "mctp-i3c",
+ },
+ .probe = mctp_i3c_probe,
+ .remove = mctp_i3c_remove,
+ .id_table = mctp_i3c_ids,
+};
+
+static __init int mctp_i3c_mod_init(void)
+{
+ int rc;
+
+ rc = i3c_register_notifier(&mctp_i3c_notifier);
+ if (rc < 0)
+ return rc;
+
+ i3c_for_each_bus_locked(mctp_i3c_bus_add_new, NULL);
+
+ rc = i3c_driver_register(&mctp_i3c_driver);
+ if (rc < 0) {
+ i3c_unregister_notifier(&mctp_i3c_notifier);
+ return rc;
+ }
+
+ return 0;
+}
+
+static __exit void mctp_i3c_mod_exit(void)
+{
+ int rc;
+
+ i3c_driver_unregister(&mctp_i3c_driver);
+
+ rc = i3c_unregister_notifier(&mctp_i3c_notifier);
+ if (rc < 0)
+ pr_warn("MCTP I3C could not unregister notifier, %d\n", rc);
+
+ mctp_i3c_bus_remove_all();
+}
+
+module_init(mctp_i3c_mod_init);
+module_exit(mctp_i3c_mod_exit);
+
+MODULE_DEVICE_TABLE(i3c, mctp_i3c_ids);
+MODULE_DESCRIPTION("MCTP I3C device");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Matt Johnston <matt@codeconstruct.com.au>");
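
For context (not part of the patch): the 6-byte Provisioned ID that this driver exposes as the interface hardware address can be targeted from userspace through MCTP extended addressing. A rough sketch follows, assuming the AF_MCTP uapi in <linux/mctp.h>; the helper name, message type value, and fallback constant definitions are illustrative only:

#include <linux/mctp.h>
#include <string.h>
#include <sys/socket.h>

/* Fallbacks in case the libc headers predate AF_MCTP (uapi values). */
#ifndef AF_MCTP
#define AF_MCTP 45
#endif
#ifndef SOL_MCTP
#define SOL_MCTP 285
#endif

/* Illustrative helper: send one MCTP message to endpoint 'eid' via the
 * given mctpi3c interface, pinning the physical destination to a specific
 * I3C device by its 6-byte Provisioned ID.
 */
static ssize_t mctp_i3c_send(int sd, int ifindex, const unsigned char pid[6],
			     unsigned char eid, const void *buf, size_t len)
{
	struct sockaddr_mctp_ext addr = { 0 };
	int one = 1;

	/* Extended (physical) addressing must be enabled on the socket */
	if (setsockopt(sd, SOL_MCTP, MCTP_OPT_ADDR_EXT, &one, sizeof(one)))
		return -1;

	addr.smctp_base.smctp_family = AF_MCTP;
	addr.smctp_base.smctp_network = MCTP_NET_ANY;
	addr.smctp_base.smctp_addr.s_addr = eid;
	addr.smctp_base.smctp_type = 1;		/* message type, e.g. PLDM */
	addr.smctp_base.smctp_tag = MCTP_TAG_OWNER;
	addr.smctp_ifindex = ifindex;
	addr.smctp_halen = 6;			/* PID_SIZE above */
	memcpy(addr.smctp_haddr, pid, 6);

	return sendto(sd, buf, len, 0, (struct sockaddr *)&addr, sizeof(addr));
}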
diff --git a/drivers/net/mdio/mdio-aspeed.c b/drivers/net/mdio/mdio-aspeed.c
index c727103c8b05..70edeeb7771e 100644
--- a/drivers/net/mdio/mdio-aspeed.c
+++ b/drivers/net/mdio/mdio-aspeed.c
@@ -177,15 +177,13 @@ static int aspeed_mdio_probe(struct platform_device *pdev)
return 0;
}
-static int aspeed_mdio_remove(struct platform_device *pdev)
+static void aspeed_mdio_remove(struct platform_device *pdev)
{
struct mii_bus *bus = (struct mii_bus *)platform_get_drvdata(pdev);
struct aspeed_mdio *ctx = bus->priv;
reset_control_assert(ctx->reset);
mdiobus_unregister(bus);
-
- return 0;
}
static const struct of_device_id aspeed_mdio_of_match[] = {
@@ -200,7 +198,7 @@ static struct platform_driver aspeed_mdio_driver = {
.of_match_table = aspeed_mdio_of_match,
},
.probe = aspeed_mdio_probe,
- .remove = aspeed_mdio_remove,
+ .remove_new = aspeed_mdio_remove,
};
module_platform_driver(aspeed_mdio_driver);
diff --git a/drivers/net/mdio/mdio-bcm-iproc.c b/drivers/net/mdio/mdio-bcm-iproc.c
index 77fc970cdfde..5a2d26c6afdc 100644
--- a/drivers/net/mdio/mdio-bcm-iproc.c
+++ b/drivers/net/mdio/mdio-bcm-iproc.c
@@ -168,14 +168,12 @@ err_iproc_mdio:
return rc;
}
-static int iproc_mdio_remove(struct platform_device *pdev)
+static void iproc_mdio_remove(struct platform_device *pdev)
{
struct iproc_mdio_priv *priv = platform_get_drvdata(pdev);
mdiobus_unregister(priv->mii_bus);
mdiobus_free(priv->mii_bus);
-
- return 0;
}
#ifdef CONFIG_PM_SLEEP
@@ -210,7 +208,7 @@ static struct platform_driver iproc_mdio_driver = {
#endif
},
.probe = iproc_mdio_probe,
- .remove = iproc_mdio_remove,
+ .remove_new = iproc_mdio_remove,
};
module_platform_driver(iproc_mdio_driver);
diff --git a/drivers/net/mdio/mdio-bcm-unimac.c b/drivers/net/mdio/mdio-bcm-unimac.c
index 6b26a0803696..e8cd8eef319b 100644
--- a/drivers/net/mdio/mdio-bcm-unimac.c
+++ b/drivers/net/mdio/mdio-bcm-unimac.c
@@ -296,15 +296,13 @@ out_clk_disable:
return ret;
}
-static int unimac_mdio_remove(struct platform_device *pdev)
+static void unimac_mdio_remove(struct platform_device *pdev)
{
struct unimac_mdio_priv *priv = platform_get_drvdata(pdev);
mdiobus_unregister(priv->mii_bus);
mdiobus_free(priv->mii_bus);
clk_disable_unprepare(priv->clk);
-
- return 0;
}
static int __maybe_unused unimac_mdio_suspend(struct device *d)
@@ -353,7 +351,7 @@ static struct platform_driver unimac_mdio_driver = {
.pm = &unimac_mdio_pm_ops,
},
.probe = unimac_mdio_probe,
- .remove = unimac_mdio_remove,
+ .remove_new = unimac_mdio_remove,
};
module_platform_driver(unimac_mdio_driver);
diff --git a/drivers/net/mdio/mdio-gpio.c b/drivers/net/mdio/mdio-gpio.c
index 0fb3c2de0845..897b88c50bbb 100644
--- a/drivers/net/mdio/mdio-gpio.c
+++ b/drivers/net/mdio/mdio-gpio.c
@@ -194,11 +194,9 @@ static int mdio_gpio_probe(struct platform_device *pdev)
return ret;
}
-static int mdio_gpio_remove(struct platform_device *pdev)
+static void mdio_gpio_remove(struct platform_device *pdev)
{
mdio_gpio_bus_destroy(&pdev->dev);
-
- return 0;
}
static const struct of_device_id mdio_gpio_of_match[] = {
@@ -210,7 +208,7 @@ MODULE_DEVICE_TABLE(of, mdio_gpio_of_match);
static struct platform_driver mdio_gpio_driver = {
.probe = mdio_gpio_probe,
- .remove = mdio_gpio_remove,
+ .remove_new = mdio_gpio_remove,
.driver = {
.name = "mdio-gpio",
.of_match_table = mdio_gpio_of_match,
diff --git a/drivers/net/mdio/mdio-hisi-femac.c b/drivers/net/mdio/mdio-hisi-femac.c
index f231c2fbb1de..6703f626ee83 100644
--- a/drivers/net/mdio/mdio-hisi-femac.c
+++ b/drivers/net/mdio/mdio-hisi-femac.c
@@ -118,7 +118,7 @@ err_out_free_mdiobus:
return ret;
}
-static int hisi_femac_mdio_remove(struct platform_device *pdev)
+static void hisi_femac_mdio_remove(struct platform_device *pdev)
{
struct mii_bus *bus = platform_get_drvdata(pdev);
struct hisi_femac_mdio_data *data = bus->priv;
@@ -126,8 +126,6 @@ static int hisi_femac_mdio_remove(struct platform_device *pdev)
mdiobus_unregister(bus);
clk_disable_unprepare(data->clk);
mdiobus_free(bus);
-
- return 0;
}
static const struct of_device_id hisi_femac_mdio_dt_ids[] = {
@@ -138,7 +136,7 @@ MODULE_DEVICE_TABLE(of, hisi_femac_mdio_dt_ids);
static struct platform_driver hisi_femac_mdio_driver = {
.probe = hisi_femac_mdio_probe,
- .remove = hisi_femac_mdio_remove,
+ .remove_new = hisi_femac_mdio_remove,
.driver = {
.name = "hisi-femac-mdio",
.of_match_table = hisi_femac_mdio_dt_ids,
diff --git a/drivers/net/mdio/mdio-ipq4019.c b/drivers/net/mdio/mdio-ipq4019.c
index 78b93de636f5..abd8b508ec16 100644
--- a/drivers/net/mdio/mdio-ipq4019.c
+++ b/drivers/net/mdio/mdio-ipq4019.c
@@ -278,13 +278,11 @@ static int ipq4019_mdio_probe(struct platform_device *pdev)
return 0;
}
-static int ipq4019_mdio_remove(struct platform_device *pdev)
+static void ipq4019_mdio_remove(struct platform_device *pdev)
{
struct mii_bus *bus = platform_get_drvdata(pdev);
mdiobus_unregister(bus);
-
- return 0;
}
static const struct of_device_id ipq4019_mdio_dt_ids[] = {
@@ -296,7 +294,7 @@ MODULE_DEVICE_TABLE(of, ipq4019_mdio_dt_ids);
static struct platform_driver ipq4019_mdio_driver = {
.probe = ipq4019_mdio_probe,
- .remove = ipq4019_mdio_remove,
+ .remove_new = ipq4019_mdio_remove,
.driver = {
.name = "ipq4019-mdio",
.of_match_table = ipq4019_mdio_dt_ids,
diff --git a/drivers/net/mdio/mdio-ipq8064.c b/drivers/net/mdio/mdio-ipq8064.c
index fd9716960106..f71b6e1c66e4 100644
--- a/drivers/net/mdio/mdio-ipq8064.c
+++ b/drivers/net/mdio/mdio-ipq8064.c
@@ -147,14 +147,11 @@ ipq8064_mdio_probe(struct platform_device *pdev)
return 0;
}
-static int
-ipq8064_mdio_remove(struct platform_device *pdev)
+static void ipq8064_mdio_remove(struct platform_device *pdev)
{
struct mii_bus *bus = platform_get_drvdata(pdev);
mdiobus_unregister(bus);
-
- return 0;
}
static const struct of_device_id ipq8064_mdio_dt_ids[] = {
@@ -165,7 +162,7 @@ MODULE_DEVICE_TABLE(of, ipq8064_mdio_dt_ids);
static struct platform_driver ipq8064_mdio_driver = {
.probe = ipq8064_mdio_probe,
- .remove = ipq8064_mdio_remove,
+ .remove_new = ipq8064_mdio_remove,
.driver = {
.name = "ipq8064-mdio",
.of_match_table = ipq8064_mdio_dt_ids,
diff --git a/drivers/net/mdio/mdio-moxart.c b/drivers/net/mdio/mdio-moxart.c
index f0cff584e176..d35af8cd7c4d 100644
--- a/drivers/net/mdio/mdio-moxart.c
+++ b/drivers/net/mdio/mdio-moxart.c
@@ -155,14 +155,12 @@ err_out_free_mdiobus:
return ret;
}
-static int moxart_mdio_remove(struct platform_device *pdev)
+static void moxart_mdio_remove(struct platform_device *pdev)
{
struct mii_bus *bus = platform_get_drvdata(pdev);
mdiobus_unregister(bus);
mdiobus_free(bus);
-
- return 0;
}
static const struct of_device_id moxart_mdio_dt_ids[] = {
@@ -173,7 +171,7 @@ MODULE_DEVICE_TABLE(of, moxart_mdio_dt_ids);
static struct platform_driver moxart_mdio_driver = {
.probe = moxart_mdio_probe,
- .remove = moxart_mdio_remove,
+ .remove_new = moxart_mdio_remove,
.driver = {
.name = "moxart-mdio",
.of_match_table = moxart_mdio_dt_ids,
diff --git a/drivers/net/mdio/mdio-mscc-miim.c b/drivers/net/mdio/mdio-mscc-miim.c
index 1a1b95ae95fa..c29377c85307 100644
--- a/drivers/net/mdio/mdio-mscc-miim.c
+++ b/drivers/net/mdio/mdio-mscc-miim.c
@@ -335,15 +335,13 @@ out_disable_clk:
return ret;
}
-static int mscc_miim_remove(struct platform_device *pdev)
+static void mscc_miim_remove(struct platform_device *pdev)
{
struct mii_bus *bus = platform_get_drvdata(pdev);
struct mscc_miim_dev *miim = bus->priv;
clk_disable_unprepare(miim->clk);
mdiobus_unregister(bus);
-
- return 0;
}
static const struct mscc_miim_info mscc_ocelot_miim_info = {
@@ -371,7 +369,7 @@ MODULE_DEVICE_TABLE(of, mscc_miim_match);
static struct platform_driver mscc_miim_driver = {
.probe = mscc_miim_probe,
- .remove = mscc_miim_remove,
+ .remove_new = mscc_miim_remove,
.driver = {
.name = "mscc-miim",
.of_match_table = mscc_miim_match,
diff --git a/drivers/net/mdio/mdio-mux-bcm-iproc.c b/drivers/net/mdio/mdio-mux-bcm-iproc.c
index 956d54846b62..a750bd4c77a0 100644
--- a/drivers/net/mdio/mdio-mux-bcm-iproc.c
+++ b/drivers/net/mdio/mdio-mux-bcm-iproc.c
@@ -287,15 +287,13 @@ out_clk:
return rc;
}
-static int mdio_mux_iproc_remove(struct platform_device *pdev)
+static void mdio_mux_iproc_remove(struct platform_device *pdev)
{
struct iproc_mdiomux_desc *md = platform_get_drvdata(pdev);
mdio_mux_uninit(md->mux_handle);
mdiobus_unregister(md->mii_bus);
clk_disable_unprepare(md->core_clk);
-
- return 0;
}
#ifdef CONFIG_PM_SLEEP
@@ -342,7 +340,7 @@ static struct platform_driver mdiomux_iproc_driver = {
.pm = &mdio_mux_iproc_pm_ops,
},
.probe = mdio_mux_iproc_probe,
- .remove = mdio_mux_iproc_remove,
+ .remove_new = mdio_mux_iproc_remove,
};
module_platform_driver(mdiomux_iproc_driver);
diff --git a/drivers/net/mdio/mdio-mux-bcm6368.c b/drivers/net/mdio/mdio-mux-bcm6368.c
index 8b444a8eb6b5..1b77e0e3e6e1 100644
--- a/drivers/net/mdio/mdio-mux-bcm6368.c
+++ b/drivers/net/mdio/mdio-mux-bcm6368.c
@@ -153,14 +153,12 @@ out_register:
return rc;
}
-static int bcm6368_mdiomux_remove(struct platform_device *pdev)
+static void bcm6368_mdiomux_remove(struct platform_device *pdev)
{
struct bcm6368_mdiomux_desc *md = platform_get_drvdata(pdev);
mdio_mux_uninit(md->mux_handle);
mdiobus_unregister(md->mii_bus);
-
- return 0;
}
static const struct of_device_id bcm6368_mdiomux_ids[] = {
@@ -175,7 +173,7 @@ static struct platform_driver bcm6368_mdiomux_driver = {
.of_match_table = bcm6368_mdiomux_ids,
},
.probe = bcm6368_mdiomux_probe,
- .remove = bcm6368_mdiomux_remove,
+ .remove_new = bcm6368_mdiomux_remove,
};
module_platform_driver(bcm6368_mdiomux_driver);
diff --git a/drivers/net/mdio/mdio-mux-gpio.c b/drivers/net/mdio/mdio-mux-gpio.c
index 3c7f16f06b45..38fb031f8979 100644
--- a/drivers/net/mdio/mdio-mux-gpio.c
+++ b/drivers/net/mdio/mdio-mux-gpio.c
@@ -62,11 +62,10 @@ static int mdio_mux_gpio_probe(struct platform_device *pdev)
return 0;
}
-static int mdio_mux_gpio_remove(struct platform_device *pdev)
+static void mdio_mux_gpio_remove(struct platform_device *pdev)
{
struct mdio_mux_gpio_state *s = dev_get_platdata(&pdev->dev);
mdio_mux_uninit(s->mux_handle);
- return 0;
}
static const struct of_device_id mdio_mux_gpio_match[] = {
@@ -87,7 +86,7 @@ static struct platform_driver mdio_mux_gpio_driver = {
.of_match_table = mdio_mux_gpio_match,
},
.probe = mdio_mux_gpio_probe,
- .remove = mdio_mux_gpio_remove,
+ .remove_new = mdio_mux_gpio_remove,
};
module_platform_driver(mdio_mux_gpio_driver);
diff --git a/drivers/net/mdio/mdio-mux-meson-g12a.c b/drivers/net/mdio/mdio-mux-meson-g12a.c
index 910e5cf74e89..754b0f2cf15b 100644
--- a/drivers/net/mdio/mdio-mux-meson-g12a.c
+++ b/drivers/net/mdio/mdio-mux-meson-g12a.c
@@ -336,7 +336,7 @@ static int g12a_mdio_mux_probe(struct platform_device *pdev)
return ret;
}
-static int g12a_mdio_mux_remove(struct platform_device *pdev)
+static void g12a_mdio_mux_remove(struct platform_device *pdev)
{
struct g12a_mdio_mux *priv = platform_get_drvdata(pdev);
@@ -344,13 +344,11 @@ static int g12a_mdio_mux_remove(struct platform_device *pdev)
if (__clk_is_enabled(priv->pll))
clk_disable_unprepare(priv->pll);
-
- return 0;
}
static struct platform_driver g12a_mdio_mux_driver = {
.probe = g12a_mdio_mux_probe,
- .remove = g12a_mdio_mux_remove,
+ .remove_new = g12a_mdio_mux_remove,
.driver = {
.name = "g12a-mdio_mux",
.of_match_table = g12a_mdio_mux_match,
diff --git a/drivers/net/mdio/mdio-mux-meson-gxl.c b/drivers/net/mdio/mdio-mux-meson-gxl.c
index 76188575ca1f..89554021b5cc 100644
--- a/drivers/net/mdio/mdio-mux-meson-gxl.c
+++ b/drivers/net/mdio/mdio-mux-meson-gxl.c
@@ -140,18 +140,16 @@ static int gxl_mdio_mux_probe(struct platform_device *pdev)
return ret;
}
-static int gxl_mdio_mux_remove(struct platform_device *pdev)
+static void gxl_mdio_mux_remove(struct platform_device *pdev)
{
struct gxl_mdio_mux *priv = platform_get_drvdata(pdev);
mdio_mux_uninit(priv->mux_handle);
-
- return 0;
}
static struct platform_driver gxl_mdio_mux_driver = {
.probe = gxl_mdio_mux_probe,
- .remove = gxl_mdio_mux_remove,
+ .remove_new = gxl_mdio_mux_remove,
.driver = {
.name = "gxl-mdio-mux",
.of_match_table = gxl_mdio_mux_match,
diff --git a/drivers/net/mdio/mdio-mux-mmioreg.c b/drivers/net/mdio/mdio-mux-mmioreg.c
index 09af150ed774..de08419d0c98 100644
--- a/drivers/net/mdio/mdio-mux-mmioreg.c
+++ b/drivers/net/mdio/mdio-mux-mmioreg.c
@@ -169,13 +169,11 @@ static int mdio_mux_mmioreg_probe(struct platform_device *pdev)
return 0;
}
-static int mdio_mux_mmioreg_remove(struct platform_device *pdev)
+static void mdio_mux_mmioreg_remove(struct platform_device *pdev)
{
struct mdio_mux_mmioreg_state *s = dev_get_platdata(&pdev->dev);
mdio_mux_uninit(s->mux_handle);
-
- return 0;
}
static const struct of_device_id mdio_mux_mmioreg_match[] = {
@@ -192,7 +190,7 @@ static struct platform_driver mdio_mux_mmioreg_driver = {
.of_match_table = mdio_mux_mmioreg_match,
},
.probe = mdio_mux_mmioreg_probe,
- .remove = mdio_mux_mmioreg_remove,
+ .remove_new = mdio_mux_mmioreg_remove,
};
module_platform_driver(mdio_mux_mmioreg_driver);
diff --git a/drivers/net/mdio/mdio-mux-multiplexer.c b/drivers/net/mdio/mdio-mux-multiplexer.c
index bfa5af577b0a..569b13383191 100644
--- a/drivers/net/mdio/mdio-mux-multiplexer.c
+++ b/drivers/net/mdio/mdio-mux-multiplexer.c
@@ -85,7 +85,7 @@ static int mdio_mux_multiplexer_probe(struct platform_device *pdev)
return ret;
}
-static int mdio_mux_multiplexer_remove(struct platform_device *pdev)
+static void mdio_mux_multiplexer_remove(struct platform_device *pdev)
{
struct mdio_mux_multiplexer_state *s = platform_get_drvdata(pdev);
@@ -93,8 +93,6 @@ static int mdio_mux_multiplexer_remove(struct platform_device *pdev)
if (s->do_deselect)
mux_control_deselect(s->muxc);
-
- return 0;
}
static const struct of_device_id mdio_mux_multiplexer_match[] = {
@@ -109,7 +107,7 @@ static struct platform_driver mdio_mux_multiplexer_driver = {
.of_match_table = mdio_mux_multiplexer_match,
},
.probe = mdio_mux_multiplexer_probe,
- .remove = mdio_mux_multiplexer_remove,
+ .remove_new = mdio_mux_multiplexer_remove,
};
module_platform_driver(mdio_mux_multiplexer_driver);
diff --git a/drivers/net/mdio/mdio-octeon.c b/drivers/net/mdio/mdio-octeon.c
index 7c65c547d377..037a38cfed56 100644
--- a/drivers/net/mdio/mdio-octeon.c
+++ b/drivers/net/mdio/mdio-octeon.c
@@ -78,7 +78,7 @@ fail_register:
return err;
}
-static int octeon_mdiobus_remove(struct platform_device *pdev)
+static void octeon_mdiobus_remove(struct platform_device *pdev)
{
struct cavium_mdiobus *bus;
union cvmx_smix_en smi_en;
@@ -88,7 +88,6 @@ static int octeon_mdiobus_remove(struct platform_device *pdev)
mdiobus_unregister(bus->mii_bus);
smi_en.u64 = 0;
oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN);
- return 0;
}
static const struct of_device_id octeon_mdiobus_match[] = {
@@ -105,7 +104,7 @@ static struct platform_driver octeon_mdiobus_driver = {
.of_match_table = octeon_mdiobus_match,
},
.probe = octeon_mdiobus_probe,
- .remove = octeon_mdiobus_remove,
+ .remove_new = octeon_mdiobus_remove,
};
module_platform_driver(octeon_mdiobus_driver);
diff --git a/drivers/net/mdio/mdio-sun4i.c b/drivers/net/mdio/mdio-sun4i.c
index f798de3276dc..4511bcc73b36 100644
--- a/drivers/net/mdio/mdio-sun4i.c
+++ b/drivers/net/mdio/mdio-sun4i.c
@@ -142,7 +142,7 @@ err_out_free_mdiobus:
return ret;
}
-static int sun4i_mdio_remove(struct platform_device *pdev)
+static void sun4i_mdio_remove(struct platform_device *pdev)
{
struct mii_bus *bus = platform_get_drvdata(pdev);
struct sun4i_mdio_data *data = bus->priv;
@@ -151,8 +151,6 @@ static int sun4i_mdio_remove(struct platform_device *pdev)
if (data->regulator)
regulator_disable(data->regulator);
mdiobus_free(bus);
-
- return 0;
}
static const struct of_device_id sun4i_mdio_dt_ids[] = {
@@ -166,7 +164,7 @@ MODULE_DEVICE_TABLE(of, sun4i_mdio_dt_ids);
static struct platform_driver sun4i_mdio_driver = {
.probe = sun4i_mdio_probe,
- .remove = sun4i_mdio_remove,
+ .remove_new = sun4i_mdio_remove,
.driver = {
.name = "sun4i-mdio",
.of_match_table = sun4i_mdio_dt_ids,
diff --git a/drivers/net/mdio/mdio-xgene.c b/drivers/net/mdio/mdio-xgene.c
index 1190a793555a..2772a3098543 100644
--- a/drivers/net/mdio/mdio-xgene.c
+++ b/drivers/net/mdio/mdio-xgene.c
@@ -13,11 +13,13 @@
#include <linux/io.h>
#include <linux/mdio/mdio-xgene.h>
#include <linux/module.h>
+#include <linux/of.h>
#include <linux/of_mdio.h>
#include <linux/of_net.h>
-#include <linux/of_platform.h>
#include <linux/phy.h>
+#include <linux/platform_device.h>
#include <linux/prefetch.h>
+#include <linux/property.h>
#include <net/ip.h>
u32 xgene_mdio_rd_mac(struct xgene_mdio_pdata *pdata, u32 rd_addr)
@@ -326,24 +328,11 @@ static int xgene_mdio_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct mii_bus *mdio_bus;
- const struct of_device_id *of_id;
struct xgene_mdio_pdata *pdata;
void __iomem *csr_base;
int mdio_id = 0, ret = 0;
- of_id = of_match_device(xgene_mdio_of_match, &pdev->dev);
- if (of_id) {
- mdio_id = (uintptr_t)of_id->data;
- } else {
-#ifdef CONFIG_ACPI
- const struct acpi_device_id *acpi_id;
-
- acpi_id = acpi_match_device(xgene_mdio_acpi_match, &pdev->dev);
- if (acpi_id)
- mdio_id = (enum xgene_mdio_id)acpi_id->driver_data;
-#endif
- }
-
+ mdio_id = (uintptr_t)device_get_match_data(&pdev->dev);
if (!mdio_id)
return -ENODEV;
@@ -432,7 +421,7 @@ out_clk:
return ret;
}
-static int xgene_mdio_remove(struct platform_device *pdev)
+static void xgene_mdio_remove(struct platform_device *pdev)
{
struct xgene_mdio_pdata *pdata = platform_get_drvdata(pdev);
struct mii_bus *mdio_bus = pdata->mdio_bus;
@@ -443,18 +432,16 @@ static int xgene_mdio_remove(struct platform_device *pdev)
if (dev->of_node)
clk_disable_unprepare(pdata->clk);
-
- return 0;
}
static struct platform_driver xgene_mdio_driver = {
.driver = {
.name = "xgene-mdio",
- .of_match_table = of_match_ptr(xgene_mdio_of_match),
+ .of_match_table = xgene_mdio_of_match,
.acpi_match_table = ACPI_PTR(xgene_mdio_acpi_match),
},
.probe = xgene_mdio_probe,
- .remove = xgene_mdio_remove,
+ .remove_new = xgene_mdio_remove,
};
module_platform_driver(xgene_mdio_driver);
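
All of the MDIO conversions above follow the same shape: the remove callback loses its always-zero return value and is wired up through .remove_new instead of .remove. A minimal sketch with a hypothetical "foo" platform driver:

#include <linux/module.h>
#include <linux/platform_device.h>

static int foo_probe(struct platform_device *pdev)
{
	/* acquire resources, platform_set_drvdata(), ... */
	return 0;
}

static void foo_remove(struct platform_device *pdev)
{
	/* Undo probe. Returning an error was never useful here: the device
	 * is going away regardless, so the callback is now void.
	 */
}

static struct platform_driver foo_driver = {
	.probe		= foo_probe,
	.remove_new	= foo_remove,
	.driver		= {
		.name	= "foo",
	},
};
module_platform_driver(foo_driver);

MODULE_LICENSE("GPL");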
diff --git a/drivers/net/netconsole.c b/drivers/net/netconsole.c
index 3111e1648592..6e14ba5e06c8 100644
--- a/drivers/net/netconsole.c
+++ b/drivers/net/netconsole.c
@@ -53,6 +53,8 @@ static bool oops_only = false;
module_param(oops_only, bool, 0600);
MODULE_PARM_DESC(oops_only, "Only log oops messages");
+#define NETCONSOLE_PARAM_TARGET_PREFIX "cmdline"
+
#ifndef MODULE
static int __init option_setup(char *opt)
{
@@ -165,6 +167,10 @@ static void netconsole_target_put(struct netconsole_target *nt)
{
}
+static void populate_configfs_item(struct netconsole_target *nt,
+ int cmdline_count)
+{
+}
#endif /* CONFIG_NETCONSOLE_DYNAMIC */
/* Allocate and initialize with defaults.
@@ -192,58 +198,6 @@ static struct netconsole_target *alloc_and_init(void)
return nt;
}
-/* Allocate new target (from boot/module param) and setup netpoll for it */
-static struct netconsole_target *alloc_param_target(char *target_config)
-{
- struct netconsole_target *nt;
- int err;
-
- nt = alloc_and_init();
- if (!nt) {
- err = -ENOMEM;
- goto fail;
- }
-
- if (*target_config == '+') {
- nt->extended = true;
- target_config++;
- }
-
- if (*target_config == 'r') {
- if (!nt->extended) {
- pr_err("Netconsole configuration error. Release feature requires extended log message");
- err = -EINVAL;
- goto fail;
- }
- nt->release = true;
- target_config++;
- }
-
- /* Parse parameters and setup netpoll */
- err = netpoll_parse_options(&nt->np, target_config);
- if (err)
- goto fail;
-
- err = netpoll_setup(&nt->np);
- if (err)
- goto fail;
-
- nt->enabled = true;
-
- return nt;
-
-fail:
- kfree(nt);
- return ERR_PTR(err);
-}
-
-/* Cleanup netpoll for given target (from boot/module param) and free it */
-static void free_param_target(struct netconsole_target *nt)
-{
- netpoll_cleanup(&nt->np);
- kfree(nt);
-}
-
#ifdef CONFIG_NETCONSOLE_DYNAMIC
/*
@@ -675,6 +629,23 @@ static const struct config_item_type netconsole_target_type = {
.ct_owner = THIS_MODULE,
};
+static struct netconsole_target *find_cmdline_target(const char *name)
+{
+ struct netconsole_target *nt, *ret = NULL;
+ unsigned long flags;
+
+ spin_lock_irqsave(&target_list_lock, flags);
+ list_for_each_entry(nt, &target_list, list) {
+ if (!strcmp(nt->item.ci_name, name)) {
+ ret = nt;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&target_list_lock, flags);
+
+ return ret;
+}
+
/*
* Group operations and type for netconsole_subsys.
*/
@@ -685,6 +656,17 @@ static struct config_item *make_netconsole_target(struct config_group *group,
struct netconsole_target *nt;
unsigned long flags;
+ /* Checking if a target by this name was created at boot time. If so,
+ * attach a configfs entry to that target. This enables dynamic
+ * control.
+ */
+ if (!strncmp(name, NETCONSOLE_PARAM_TARGET_PREFIX,
+ strlen(NETCONSOLE_PARAM_TARGET_PREFIX))) {
+ nt = find_cmdline_target(name);
+ if (nt)
+ return &nt->item;
+ }
+
nt = alloc_and_init();
if (!nt)
return ERR_PTR(-ENOMEM);
@@ -740,6 +722,17 @@ static struct configfs_subsystem netconsole_subsys = {
},
};
+static void populate_configfs_item(struct netconsole_target *nt,
+ int cmdline_count)
+{
+ char target_name[16];
+
+ snprintf(target_name, sizeof(target_name), "%s%d",
+ NETCONSOLE_PARAM_TARGET_PREFIX, cmdline_count);
+ config_item_init_type_name(&nt->item, target_name,
+ &netconsole_target_type);
+}
+
#endif /* CONFIG_NETCONSOLE_DYNAMIC */
/* Handle network interface device notifications */
@@ -938,6 +931,60 @@ static void write_msg(struct console *con, const char *msg, unsigned int len)
spin_unlock_irqrestore(&target_list_lock, flags);
}
+/* Allocate new target (from boot/module param) and setup netpoll for it */
+static struct netconsole_target *alloc_param_target(char *target_config,
+ int cmdline_count)
+{
+ struct netconsole_target *nt;
+ int err;
+
+ nt = alloc_and_init();
+ if (!nt) {
+ err = -ENOMEM;
+ goto fail;
+ }
+
+ if (*target_config == '+') {
+ nt->extended = true;
+ target_config++;
+ }
+
+ if (*target_config == 'r') {
+ if (!nt->extended) {
+ pr_err("Netconsole configuration error. Release feature requires extended log message");
+ err = -EINVAL;
+ goto fail;
+ }
+ nt->release = true;
+ target_config++;
+ }
+
+ /* Parse parameters and setup netpoll */
+ err = netpoll_parse_options(&nt->np, target_config);
+ if (err)
+ goto fail;
+
+ err = netpoll_setup(&nt->np);
+ if (err)
+ goto fail;
+
+ populate_configfs_item(nt, cmdline_count);
+ nt->enabled = true;
+
+ return nt;
+
+fail:
+ kfree(nt);
+ return ERR_PTR(err);
+}
+
+/* Cleanup netpoll for given target (from boot/module param) and free it */
+static void free_param_target(struct netconsole_target *nt)
+{
+ netpoll_cleanup(&nt->np);
+ kfree(nt);
+}
+
static struct console netconsole_ext = {
.name = "netcon_ext",
.flags = CON_ENABLED | CON_EXTENDED,
@@ -954,6 +1001,7 @@ static int __init init_netconsole(void)
{
int err;
struct netconsole_target *nt, *tmp;
+ unsigned int count = 0;
bool extended = false;
unsigned long flags;
char *target_config;
@@ -961,7 +1009,7 @@ static int __init init_netconsole(void)
if (strnlen(input, MAX_PARAM_LENGTH)) {
while ((target_config = strsep(&input, ";"))) {
- nt = alloc_param_target(target_config);
+ nt = alloc_param_target(target_config, count);
if (IS_ERR(nt)) {
err = PTR_ERR(nt);
goto fail;
@@ -977,6 +1025,7 @@ static int __init init_netconsole(void)
spin_lock_irqsave(&target_list_lock, flags);
list_add(&nt->list, &target_list);
spin_unlock_irqrestore(&target_list_lock, flags);
+ count++;
}
}
diff --git a/drivers/net/netdevsim/bus.c b/drivers/net/netdevsim/bus.c
index 0787ad252dd9..bcbc1e19edde 100644
--- a/drivers/net/netdevsim/bus.c
+++ b/drivers/net/netdevsim/bus.c
@@ -3,11 +3,13 @@
* Copyright (C) 2019 Mellanox Technologies. All rights reserved
*/
+#include <linux/completion.h>
#include <linux/device.h>
#include <linux/idr.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/mutex.h>
+#include <linux/refcount.h>
#include <linux/slab.h>
#include <linux/sysfs.h>
@@ -17,6 +19,8 @@ static DEFINE_IDA(nsim_bus_dev_ids);
static LIST_HEAD(nsim_bus_dev_list);
static DEFINE_MUTEX(nsim_bus_dev_list_lock);
static bool nsim_bus_enable;
+static refcount_t nsim_bus_devs; /* Including the bus itself. */
+static DECLARE_COMPLETION(nsim_bus_devs_released);
static struct nsim_bus_dev *to_nsim_bus_dev(struct device *dev)
{
@@ -121,6 +125,8 @@ static void nsim_bus_dev_release(struct device *dev)
nsim_bus_dev = container_of(dev, struct nsim_bus_dev, dev);
kfree(nsim_bus_dev);
+ if (refcount_dec_and_test(&nsim_bus_devs))
+ complete(&nsim_bus_devs_released);
}
static struct device_type nsim_bus_dev_type = {
@@ -170,6 +176,7 @@ new_device_store(const struct bus_type *bus, const char *buf, size_t count)
goto err;
}
+ refcount_inc(&nsim_bus_devs);
/* Allow using nsim_bus_dev */
smp_store_release(&nsim_bus_dev->init, true);
@@ -326,6 +333,7 @@ int nsim_bus_init(void)
err = driver_register(&nsim_driver);
if (err)
goto err_bus_unregister;
+ refcount_set(&nsim_bus_devs, 1);
/* Allow using resources */
smp_store_release(&nsim_bus_enable, true);
return 0;
@@ -341,6 +349,8 @@ void nsim_bus_exit(void)
/* Disallow using resources */
smp_store_release(&nsim_bus_enable, false);
+ if (refcount_dec_and_test(&nsim_bus_devs))
+ complete(&nsim_bus_devs_released);
mutex_lock(&nsim_bus_dev_list_lock);
list_for_each_entry_safe(nsim_bus_dev, tmp, &nsim_bus_dev_list, list) {
@@ -349,6 +359,8 @@ void nsim_bus_exit(void)
}
mutex_unlock(&nsim_bus_dev_list_lock);
+ wait_for_completion(&nsim_bus_devs_released);
+
driver_unregister(&nsim_driver);
bus_unregister(&nsim_bus);
}
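
The netdevsim change above pairs a refcount, seeded at 1 for the bus itself, with a completion so that nsim_bus_exit() only unregisters the driver and bus once the last device release has run. A generic sketch of that teardown pattern (all names illustrative):

#include <linux/completion.h>
#include <linux/refcount.h>

static refcount_t foo_objs;			/* hypothetical subsystem */
static DECLARE_COMPLETION(foo_objs_released);

static void foo_init(void)
{
	refcount_set(&foo_objs, 1);		/* the subsystem's own ref */
}

static void foo_obj_add(void)
{
	refcount_inc(&foo_objs);		/* one ref per live object */
}

static void foo_obj_release(void)		/* from the object's ->release() */
{
	if (refcount_dec_and_test(&foo_objs))
		complete(&foo_objs_released);
}

static void foo_exit(void)
{
	/* Drop the subsystem's ref, trigger deletion of remaining objects,
	 * then block until the final release has signalled the completion.
	 */
	if (refcount_dec_and_test(&foo_objs))
		complete(&foo_objs_released);
	/* ... delete remaining objects here ... */
	wait_for_completion(&foo_objs_released);
}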
diff --git a/drivers/net/netdevsim/health.c b/drivers/net/netdevsim/health.c
index eb04ed715d2d..70e8bdf34be9 100644
--- a/drivers/net/netdevsim/health.c
+++ b/drivers/net/netdevsim/health.c
@@ -63,91 +63,45 @@ nsim_dev_dummy_reporter_recover(struct devlink_health_reporter *reporter,
static int nsim_dev_dummy_fmsg_put(struct devlink_fmsg *fmsg, u32 binary_len)
{
char *binary;
- int err;
int i;
- err = devlink_fmsg_bool_pair_put(fmsg, "test_bool", true);
- if (err)
- return err;
- err = devlink_fmsg_u8_pair_put(fmsg, "test_u8", 1);
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "test_u32", 3);
- if (err)
- return err;
- err = devlink_fmsg_u64_pair_put(fmsg, "test_u64", 4);
- if (err)
- return err;
- err = devlink_fmsg_string_pair_put(fmsg, "test_string", "somestring");
- if (err)
- return err;
+ devlink_fmsg_bool_pair_put(fmsg, "test_bool", true);
+ devlink_fmsg_u8_pair_put(fmsg, "test_u8", 1);
+ devlink_fmsg_u32_pair_put(fmsg, "test_u32", 3);
+ devlink_fmsg_u64_pair_put(fmsg, "test_u64", 4);
+ devlink_fmsg_string_pair_put(fmsg, "test_string", "somestring");
binary = kmalloc(binary_len, GFP_KERNEL | __GFP_NOWARN);
if (!binary)
return -ENOMEM;
get_random_bytes(binary, binary_len);
- err = devlink_fmsg_binary_pair_put(fmsg, "test_binary", binary, binary_len);
+ devlink_fmsg_binary_pair_put(fmsg, "test_binary", binary, binary_len);
kfree(binary);
- if (err)
- return err;
- err = devlink_fmsg_pair_nest_start(fmsg, "test_nest");
- if (err)
- return err;
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
- err = devlink_fmsg_bool_pair_put(fmsg, "nested_test_bool", false);
- if (err)
- return err;
- err = devlink_fmsg_u8_pair_put(fmsg, "nested_test_u8", false);
- if (err)
- return err;
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- return err;
- err = devlink_fmsg_pair_nest_end(fmsg);
- if (err)
- return err;
+ devlink_fmsg_pair_nest_start(fmsg, "test_nest");
+ devlink_fmsg_obj_nest_start(fmsg);
+ devlink_fmsg_bool_pair_put(fmsg, "nested_test_bool", false);
+ devlink_fmsg_u8_pair_put(fmsg, "nested_test_u8", false);
+ devlink_fmsg_obj_nest_end(fmsg);
+ devlink_fmsg_pair_nest_end(fmsg);
+ devlink_fmsg_arr_pair_nest_end(fmsg);
+ devlink_fmsg_arr_pair_nest_start(fmsg, "test_u32_array");
- err = devlink_fmsg_arr_pair_nest_end(fmsg);
- if (err)
- return err;
+ for (i = 0; i < 10; i++)
+ devlink_fmsg_u32_put(fmsg, i);
+ devlink_fmsg_arr_pair_nest_end(fmsg);
+ devlink_fmsg_arr_pair_nest_start(fmsg, "test_array_of_objects");
- err = devlink_fmsg_arr_pair_nest_start(fmsg, "test_u32_array");
- if (err)
- return err;
for (i = 0; i < 10; i++) {
- err = devlink_fmsg_u32_put(fmsg, i);
- if (err)
- return err;
+ devlink_fmsg_obj_nest_start(fmsg);
+ devlink_fmsg_bool_pair_put(fmsg, "in_array_nested_test_bool",
+ false);
+ devlink_fmsg_u8_pair_put(fmsg, "in_array_nested_test_u8", i);
+ devlink_fmsg_obj_nest_end(fmsg);
}
- err = devlink_fmsg_arr_pair_nest_end(fmsg);
- if (err)
- return err;
+ devlink_fmsg_arr_pair_nest_end(fmsg);
- err = devlink_fmsg_arr_pair_nest_start(fmsg, "test_array_of_objects");
- if (err)
- return err;
- for (i = 0; i < 10; i++) {
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
- err = devlink_fmsg_bool_pair_put(fmsg,
- "in_array_nested_test_bool",
- false);
- if (err)
- return err;
- err = devlink_fmsg_u8_pair_put(fmsg,
- "in_array_nested_test_u8",
- i);
- if (err)
- return err;
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- return err;
- }
- return devlink_fmsg_arr_pair_nest_end(fmsg);
+ return 0;
}
static int
@@ -157,14 +111,10 @@ nsim_dev_dummy_reporter_dump(struct devlink_health_reporter *reporter,
{
struct nsim_dev_health *health = devlink_health_reporter_priv(reporter);
struct nsim_dev_dummy_reporter_ctx *ctx = priv_ctx;
- int err;
- if (ctx) {
- err = devlink_fmsg_string_pair_put(fmsg, "break_message",
- ctx->break_msg);
- if (err)
- return err;
- }
+ if (ctx)
+ devlink_fmsg_string_pair_put(fmsg, "break_message", ctx->break_msg);
+
return nsim_dev_dummy_fmsg_put(fmsg, health->binary_len);
}
@@ -174,15 +124,11 @@ nsim_dev_dummy_reporter_diagnose(struct devlink_health_reporter *reporter,
struct netlink_ext_ack *extack)
{
struct nsim_dev_health *health = devlink_health_reporter_priv(reporter);
- int err;
- if (health->recovered_break_msg) {
- err = devlink_fmsg_string_pair_put(fmsg,
- "recovered_break_message",
- health->recovered_break_msg);
- if (err)
- return err;
- }
+ if (health->recovered_break_msg)
+ devlink_fmsg_string_pair_put(fmsg, "recovered_break_message",
+ health->recovered_break_msg);
+
return nsim_dev_dummy_fmsg_put(fmsg, health->binary_len);
}
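
The health.c hunk relies on the devlink formatted-message helpers now returning void and latching any error inside the fmsg, so callbacks no longer check every put. A short sketch of a dump callback in the new style (hypothetical reporter; field names are illustrative):

#include <net/devlink.h>

static int foo_reporter_dump(struct devlink_health_reporter *reporter,
			     struct devlink_fmsg *fmsg, void *priv_ctx,
			     struct netlink_ext_ack *extack)
{
	/* Each put returns void; a failure is remembered inside the fmsg
	 * and surfaced by the devlink core when the message is emitted.
	 */
	devlink_fmsg_string_pair_put(fmsg, "state", "ok");
	devlink_fmsg_u32_pair_put(fmsg, "resets", 0);
	devlink_fmsg_bool_pair_put(fmsg, "armed", true);
	return 0;
}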
diff --git a/drivers/net/netkit.c b/drivers/net/netkit.c
new file mode 100644
index 000000000000..5a0f86f38f09
--- /dev/null
+++ b/drivers/net/netkit.c
@@ -0,0 +1,936 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Copyright (c) 2023 Isovalent */
+
+#include <linux/netdevice.h>
+#include <linux/ethtool.h>
+#include <linux/etherdevice.h>
+#include <linux/filter.h>
+#include <linux/netfilter_netdev.h>
+#include <linux/bpf_mprog.h>
+
+#include <net/netkit.h>
+#include <net/dst.h>
+#include <net/tcx.h>
+
+#define DRV_NAME "netkit"
+
+struct netkit {
+ /* Needed in fast-path */
+ struct net_device __rcu *peer;
+ struct bpf_mprog_entry __rcu *active;
+ enum netkit_action policy;
+ struct bpf_mprog_bundle bundle;
+
+ /* Needed in slow-path */
+ enum netkit_mode mode;
+ bool primary;
+ u32 headroom;
+};
+
+struct netkit_link {
+ struct bpf_link link;
+ struct net_device *dev;
+ u32 location;
+};
+
+static __always_inline int
+netkit_run(const struct bpf_mprog_entry *entry, struct sk_buff *skb,
+ enum netkit_action ret)
+{
+ const struct bpf_mprog_fp *fp;
+ const struct bpf_prog *prog;
+
+ bpf_mprog_foreach_prog(entry, fp, prog) {
+ bpf_compute_data_pointers(skb);
+ ret = bpf_prog_run(prog, skb);
+ if (ret != NETKIT_NEXT)
+ break;
+ }
+ return ret;
+}
+
+static void netkit_prep_forward(struct sk_buff *skb, bool xnet)
+{
+ skb_scrub_packet(skb, xnet);
+ skb->priority = 0;
+ nf_skip_egress(skb, true);
+}
+
+static struct netkit *netkit_priv(const struct net_device *dev)
+{
+ return netdev_priv(dev);
+}
+
+static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct netkit *nk = netkit_priv(dev);
+ enum netkit_action ret = READ_ONCE(nk->policy);
+ netdev_tx_t ret_dev = NET_XMIT_SUCCESS;
+ const struct bpf_mprog_entry *entry;
+ struct net_device *peer;
+
+ rcu_read_lock();
+ peer = rcu_dereference(nk->peer);
+ if (unlikely(!peer || !(peer->flags & IFF_UP) ||
+ !pskb_may_pull(skb, ETH_HLEN) ||
+ skb_orphan_frags(skb, GFP_ATOMIC)))
+ goto drop;
+ netkit_prep_forward(skb, !net_eq(dev_net(dev), dev_net(peer)));
+ skb->dev = peer;
+ entry = rcu_dereference(nk->active);
+ if (entry)
+ ret = netkit_run(entry, skb, ret);
+ switch (ret) {
+ case NETKIT_NEXT:
+ case NETKIT_PASS:
+ skb->protocol = eth_type_trans(skb, skb->dev);
+ skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN);
+ __netif_rx(skb);
+ break;
+ case NETKIT_REDIRECT:
+ skb_do_redirect(skb);
+ break;
+ case NETKIT_DROP:
+ default:
+drop:
+ kfree_skb(skb);
+ dev_core_stats_tx_dropped_inc(dev);
+ ret_dev = NET_XMIT_DROP;
+ break;
+ }
+ rcu_read_unlock();
+ return ret_dev;
+}
+
+static int netkit_open(struct net_device *dev)
+{
+ struct netkit *nk = netkit_priv(dev);
+ struct net_device *peer = rtnl_dereference(nk->peer);
+
+ if (!peer)
+ return -ENOTCONN;
+ if (peer->flags & IFF_UP) {
+ netif_carrier_on(dev);
+ netif_carrier_on(peer);
+ }
+ return 0;
+}
+
+static int netkit_close(struct net_device *dev)
+{
+ struct netkit *nk = netkit_priv(dev);
+ struct net_device *peer = rtnl_dereference(nk->peer);
+
+ netif_carrier_off(dev);
+ if (peer)
+ netif_carrier_off(peer);
+ return 0;
+}
+
+static int netkit_get_iflink(const struct net_device *dev)
+{
+ struct netkit *nk = netkit_priv(dev);
+ struct net_device *peer;
+ int iflink = 0;
+
+ rcu_read_lock();
+ peer = rcu_dereference(nk->peer);
+ if (peer)
+ iflink = peer->ifindex;
+ rcu_read_unlock();
+ return iflink;
+}
+
+static void netkit_set_multicast(struct net_device *dev)
+{
+ /* Nothing to do, we receive whatever gets pushed to us! */
+}
+
+static void netkit_set_headroom(struct net_device *dev, int headroom)
+{
+ struct netkit *nk = netkit_priv(dev), *nk2;
+ struct net_device *peer;
+
+ if (headroom < 0)
+ headroom = NET_SKB_PAD;
+
+ rcu_read_lock();
+ peer = rcu_dereference(nk->peer);
+ if (unlikely(!peer))
+ goto out;
+
+ nk2 = netkit_priv(peer);
+ nk->headroom = headroom;
+ headroom = max(nk->headroom, nk2->headroom);
+
+ peer->needed_headroom = headroom;
+ dev->needed_headroom = headroom;
+out:
+ rcu_read_unlock();
+}
+
+static struct net_device *netkit_peer_dev(struct net_device *dev)
+{
+ return rcu_dereference(netkit_priv(dev)->peer);
+}
+
+static void netkit_uninit(struct net_device *dev);
+
+static const struct net_device_ops netkit_netdev_ops = {
+ .ndo_open = netkit_open,
+ .ndo_stop = netkit_close,
+ .ndo_start_xmit = netkit_xmit,
+ .ndo_set_rx_mode = netkit_set_multicast,
+ .ndo_set_rx_headroom = netkit_set_headroom,
+ .ndo_get_iflink = netkit_get_iflink,
+ .ndo_get_peer_dev = netkit_peer_dev,
+ .ndo_uninit = netkit_uninit,
+ .ndo_features_check = passthru_features_check,
+};
+
+static void netkit_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ strscpy(info->driver, DRV_NAME, sizeof(info->driver));
+}
+
+static const struct ethtool_ops netkit_ethtool_ops = {
+ .get_drvinfo = netkit_get_drvinfo,
+};
+
+static void netkit_setup(struct net_device *dev)
+{
+ static const netdev_features_t netkit_features_hw_vlan =
+ NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_STAG_TX |
+ NETIF_F_HW_VLAN_STAG_RX;
+ static const netdev_features_t netkit_features =
+ netkit_features_hw_vlan |
+ NETIF_F_SG |
+ NETIF_F_FRAGLIST |
+ NETIF_F_HW_CSUM |
+ NETIF_F_RXCSUM |
+ NETIF_F_SCTP_CRC |
+ NETIF_F_HIGHDMA |
+ NETIF_F_GSO_SOFTWARE |
+ NETIF_F_GSO_ENCAP_ALL;
+
+ ether_setup(dev);
+ dev->max_mtu = ETH_MAX_MTU;
+
+ dev->flags |= IFF_NOARP;
+ dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+ dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ dev->priv_flags |= IFF_PHONY_HEADROOM;
+ dev->priv_flags |= IFF_NO_QUEUE;
+
+ dev->ethtool_ops = &netkit_ethtool_ops;
+ dev->netdev_ops = &netkit_netdev_ops;
+
+ dev->features |= netkit_features | NETIF_F_LLTX;
+ dev->hw_features = netkit_features;
+ dev->hw_enc_features = netkit_features;
+ dev->mpls_features = NETIF_F_HW_CSUM | NETIF_F_GSO_SOFTWARE;
+ dev->vlan_features = dev->features & ~netkit_features_hw_vlan;
+
+ dev->needs_free_netdev = true;
+
+ netif_set_tso_max_size(dev, GSO_MAX_SIZE);
+}
+
+static struct net *netkit_get_link_net(const struct net_device *dev)
+{
+ struct netkit *nk = netkit_priv(dev);
+ struct net_device *peer = rtnl_dereference(nk->peer);
+
+ return peer ? dev_net(peer) : dev_net(dev);
+}
+
+static int netkit_check_policy(int policy, struct nlattr *tb,
+ struct netlink_ext_ack *extack)
+{
+ switch (policy) {
+ case NETKIT_PASS:
+ case NETKIT_DROP:
+ return 0;
+ default:
+ NL_SET_ERR_MSG_ATTR(extack, tb,
+ "Provided default xmit policy not supported");
+ return -EINVAL;
+ }
+}
+
+static int netkit_check_mode(int mode, struct nlattr *tb,
+ struct netlink_ext_ack *extack)
+{
+ switch (mode) {
+ case NETKIT_L2:
+ case NETKIT_L3:
+ return 0;
+ default:
+ NL_SET_ERR_MSG_ATTR(extack, tb,
+ "Provided device mode can only be L2 or L3");
+ return -EINVAL;
+ }
+}
+
+static int netkit_validate(struct nlattr *tb[], struct nlattr *data[],
+ struct netlink_ext_ack *extack)
+{
+ struct nlattr *attr = tb[IFLA_ADDRESS];
+
+ if (!attr)
+ return 0;
+ NL_SET_ERR_MSG_ATTR(extack, attr,
+ "Setting Ethernet address is not supported");
+ return -EOPNOTSUPP;
+}
+
+static struct rtnl_link_ops netkit_link_ops;
+
+static int netkit_new_link(struct net *src_net, struct net_device *dev,
+ struct nlattr *tb[], struct nlattr *data[],
+ struct netlink_ext_ack *extack)
+{
+ struct nlattr *peer_tb[IFLA_MAX + 1], **tbp = tb, *attr;
+ enum netkit_action default_prim = NETKIT_PASS;
+ enum netkit_action default_peer = NETKIT_PASS;
+ enum netkit_mode mode = NETKIT_L3;
+ unsigned char ifname_assign_type;
+ struct ifinfomsg *ifmp = NULL;
+ struct net_device *peer;
+ char ifname[IFNAMSIZ];
+ struct netkit *nk;
+ struct net *net;
+ int err;
+
+ if (data) {
+ if (data[IFLA_NETKIT_MODE]) {
+ attr = data[IFLA_NETKIT_MODE];
+ mode = nla_get_u32(attr);
+ err = netkit_check_mode(mode, attr, extack);
+ if (err < 0)
+ return err;
+ }
+ if (data[IFLA_NETKIT_PEER_INFO]) {
+ attr = data[IFLA_NETKIT_PEER_INFO];
+ ifmp = nla_data(attr);
+ err = rtnl_nla_parse_ifinfomsg(peer_tb, attr, extack);
+ if (err < 0)
+ return err;
+ err = netkit_validate(peer_tb, NULL, extack);
+ if (err < 0)
+ return err;
+ tbp = peer_tb;
+ }
+ if (data[IFLA_NETKIT_POLICY]) {
+ attr = data[IFLA_NETKIT_POLICY];
+ default_prim = nla_get_u32(attr);
+ err = netkit_check_policy(default_prim, attr, extack);
+ if (err < 0)
+ return err;
+ }
+ if (data[IFLA_NETKIT_PEER_POLICY]) {
+ attr = data[IFLA_NETKIT_PEER_POLICY];
+ default_peer = nla_get_u32(attr);
+ err = netkit_check_policy(default_peer, attr, extack);
+ if (err < 0)
+ return err;
+ }
+ }
+
+ if (ifmp && tbp[IFLA_IFNAME]) {
+ nla_strscpy(ifname, tbp[IFLA_IFNAME], IFNAMSIZ);
+ ifname_assign_type = NET_NAME_USER;
+ } else {
+ strscpy(ifname, "nk%d", IFNAMSIZ);
+ ifname_assign_type = NET_NAME_ENUM;
+ }
+
+ net = rtnl_link_get_net(src_net, tbp);
+ if (IS_ERR(net))
+ return PTR_ERR(net);
+
+ peer = rtnl_create_link(net, ifname, ifname_assign_type,
+ &netkit_link_ops, tbp, extack);
+ if (IS_ERR(peer)) {
+ put_net(net);
+ return PTR_ERR(peer);
+ }
+
+ netif_inherit_tso_max(peer, dev);
+
+ if (mode == NETKIT_L2)
+ eth_hw_addr_random(peer);
+ if (ifmp && dev->ifindex)
+ peer->ifindex = ifmp->ifi_index;
+
+ nk = netkit_priv(peer);
+ nk->primary = false;
+ nk->policy = default_peer;
+ nk->mode = mode;
+ bpf_mprog_bundle_init(&nk->bundle);
+
+ err = register_netdevice(peer);
+ put_net(net);
+ if (err < 0)
+ goto err_register_peer;
+ netif_carrier_off(peer);
+ if (mode == NETKIT_L2)
+ dev_change_flags(peer, peer->flags & ~IFF_NOARP, NULL);
+
+ err = rtnl_configure_link(peer, NULL, 0, NULL);
+ if (err < 0)
+ goto err_configure_peer;
+
+ if (mode == NETKIT_L2)
+ eth_hw_addr_random(dev);
+ if (tb[IFLA_IFNAME])
+ nla_strscpy(dev->name, tb[IFLA_IFNAME], IFNAMSIZ);
+ else
+ strscpy(dev->name, "nk%d", IFNAMSIZ);
+
+ nk = netkit_priv(dev);
+ nk->primary = true;
+ nk->policy = default_prim;
+ nk->mode = mode;
+ bpf_mprog_bundle_init(&nk->bundle);
+
+ err = register_netdevice(dev);
+ if (err < 0)
+ goto err_configure_peer;
+ netif_carrier_off(dev);
+ if (mode == NETKIT_L2)
+ dev_change_flags(dev, dev->flags & ~IFF_NOARP, NULL);
+
+ rcu_assign_pointer(netkit_priv(dev)->peer, peer);
+ rcu_assign_pointer(netkit_priv(peer)->peer, dev);
+ return 0;
+err_configure_peer:
+ unregister_netdevice(peer);
+ return err;
+err_register_peer:
+ free_netdev(peer);
+ return err;
+}
+
+static struct bpf_mprog_entry *netkit_entry_fetch(struct net_device *dev,
+ bool bundle_fallback)
+{
+ struct netkit *nk = netkit_priv(dev);
+ struct bpf_mprog_entry *entry;
+
+ ASSERT_RTNL();
+ entry = rcu_dereference_rtnl(nk->active);
+ if (entry)
+ return entry;
+ if (bundle_fallback)
+ return &nk->bundle.a;
+ return NULL;
+}
+
+static void netkit_entry_update(struct net_device *dev,
+ struct bpf_mprog_entry *entry)
+{
+ struct netkit *nk = netkit_priv(dev);
+
+ ASSERT_RTNL();
+ rcu_assign_pointer(nk->active, entry);
+}
+
+static void netkit_entry_sync(void)
+{
+ synchronize_rcu();
+}
+
+static struct net_device *netkit_dev_fetch(struct net *net, u32 ifindex, u32 which)
+{
+ struct net_device *dev;
+ struct netkit *nk;
+
+ ASSERT_RTNL();
+
+ switch (which) {
+ case BPF_NETKIT_PRIMARY:
+ case BPF_NETKIT_PEER:
+ break;
+ default:
+ return ERR_PTR(-EINVAL);
+ }
+
+ dev = __dev_get_by_index(net, ifindex);
+ if (!dev)
+ return ERR_PTR(-ENODEV);
+ if (dev->netdev_ops != &netkit_netdev_ops)
+ return ERR_PTR(-ENXIO);
+
+ nk = netkit_priv(dev);
+ if (!nk->primary)
+ return ERR_PTR(-EACCES);
+ if (which == BPF_NETKIT_PEER) {
+ dev = rcu_dereference_rtnl(nk->peer);
+ if (!dev)
+ return ERR_PTR(-ENODEV);
+ }
+ return dev;
+}
+
+int netkit_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+{
+ struct bpf_mprog_entry *entry, *entry_new;
+ struct bpf_prog *replace_prog = NULL;
+ struct net_device *dev;
+ int ret;
+
+ rtnl_lock();
+ dev = netkit_dev_fetch(current->nsproxy->net_ns, attr->target_ifindex,
+ attr->attach_type);
+ if (IS_ERR(dev)) {
+ ret = PTR_ERR(dev);
+ goto out;
+ }
+ entry = netkit_entry_fetch(dev, true);
+ if (attr->attach_flags & BPF_F_REPLACE) {
+ replace_prog = bpf_prog_get_type(attr->replace_bpf_fd,
+ prog->type);
+ if (IS_ERR(replace_prog)) {
+ ret = PTR_ERR(replace_prog);
+ replace_prog = NULL;
+ goto out;
+ }
+ }
+ ret = bpf_mprog_attach(entry, &entry_new, prog, NULL, replace_prog,
+ attr->attach_flags, attr->relative_fd,
+ attr->expected_revision);
+ if (!ret) {
+ if (entry != entry_new) {
+ netkit_entry_update(dev, entry_new);
+ netkit_entry_sync();
+ }
+ bpf_mprog_commit(entry);
+ }
+out:
+ if (replace_prog)
+ bpf_prog_put(replace_prog);
+ rtnl_unlock();
+ return ret;
+}
+
+int netkit_prog_detach(const union bpf_attr *attr, struct bpf_prog *prog)
+{
+ struct bpf_mprog_entry *entry, *entry_new;
+ struct net_device *dev;
+ int ret;
+
+ rtnl_lock();
+ dev = netkit_dev_fetch(current->nsproxy->net_ns, attr->target_ifindex,
+ attr->attach_type);
+ if (IS_ERR(dev)) {
+ ret = PTR_ERR(dev);
+ goto out;
+ }
+ entry = netkit_entry_fetch(dev, false);
+ if (!entry) {
+ ret = -ENOENT;
+ goto out;
+ }
+ ret = bpf_mprog_detach(entry, &entry_new, prog, NULL, attr->attach_flags,
+ attr->relative_fd, attr->expected_revision);
+ if (!ret) {
+ if (!bpf_mprog_total(entry_new))
+ entry_new = NULL;
+ netkit_entry_update(dev, entry_new);
+ netkit_entry_sync();
+ bpf_mprog_commit(entry);
+ }
+out:
+ rtnl_unlock();
+ return ret;
+}
+
+int netkit_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr)
+{
+ struct net_device *dev;
+ int ret;
+
+ rtnl_lock();
+ dev = netkit_dev_fetch(current->nsproxy->net_ns,
+ attr->query.target_ifindex,
+ attr->query.attach_type);
+ if (IS_ERR(dev)) {
+ ret = PTR_ERR(dev);
+ goto out;
+ }
+ ret = bpf_mprog_query(attr, uattr, netkit_entry_fetch(dev, false));
+out:
+ rtnl_unlock();
+ return ret;
+}
+
+static struct netkit_link *netkit_link(const struct bpf_link *link)
+{
+ return container_of(link, struct netkit_link, link);
+}
+
+static int netkit_link_prog_attach(struct bpf_link *link, u32 flags,
+ u32 id_or_fd, u64 revision)
+{
+ struct netkit_link *nkl = netkit_link(link);
+ struct bpf_mprog_entry *entry, *entry_new;
+ struct net_device *dev = nkl->dev;
+ int ret;
+
+ ASSERT_RTNL();
+ entry = netkit_entry_fetch(dev, true);
+ ret = bpf_mprog_attach(entry, &entry_new, link->prog, link, NULL, flags,
+ id_or_fd, revision);
+ if (!ret) {
+ if (entry != entry_new) {
+ netkit_entry_update(dev, entry_new);
+ netkit_entry_sync();
+ }
+ bpf_mprog_commit(entry);
+ }
+ return ret;
+}
+
+static void netkit_link_release(struct bpf_link *link)
+{
+ struct netkit_link *nkl = netkit_link(link);
+ struct bpf_mprog_entry *entry, *entry_new;
+ struct net_device *dev;
+ int ret = 0;
+
+ rtnl_lock();
+ dev = nkl->dev;
+ if (!dev)
+ goto out;
+ entry = netkit_entry_fetch(dev, false);
+ if (!entry) {
+ ret = -ENOENT;
+ goto out;
+ }
+ ret = bpf_mprog_detach(entry, &entry_new, link->prog, link, 0, 0, 0);
+ if (!ret) {
+ if (!bpf_mprog_total(entry_new))
+ entry_new = NULL;
+ netkit_entry_update(dev, entry_new);
+ netkit_entry_sync();
+ bpf_mprog_commit(entry);
+ nkl->dev = NULL;
+ }
+out:
+ WARN_ON_ONCE(ret);
+ rtnl_unlock();
+}
+
+static int netkit_link_update(struct bpf_link *link, struct bpf_prog *nprog,
+ struct bpf_prog *oprog)
+{
+ struct netkit_link *nkl = netkit_link(link);
+ struct bpf_mprog_entry *entry, *entry_new;
+ struct net_device *dev;
+ int ret = 0;
+
+ rtnl_lock();
+ dev = nkl->dev;
+ if (!dev) {
+ ret = -ENOLINK;
+ goto out;
+ }
+ if (oprog && link->prog != oprog) {
+ ret = -EPERM;
+ goto out;
+ }
+ oprog = link->prog;
+ if (oprog == nprog) {
+ bpf_prog_put(nprog);
+ goto out;
+ }
+ entry = netkit_entry_fetch(dev, false);
+ if (!entry) {
+ ret = -ENOENT;
+ goto out;
+ }
+ ret = bpf_mprog_attach(entry, &entry_new, nprog, link, oprog,
+ BPF_F_REPLACE | BPF_F_ID,
+ link->prog->aux->id, 0);
+ if (!ret) {
+ WARN_ON_ONCE(entry != entry_new);
+ oprog = xchg(&link->prog, nprog);
+ bpf_prog_put(oprog);
+ bpf_mprog_commit(entry);
+ }
+out:
+ rtnl_unlock();
+ return ret;
+}
+
+static void netkit_link_dealloc(struct bpf_link *link)
+{
+ kfree(netkit_link(link));
+}
+
+static void netkit_link_fdinfo(const struct bpf_link *link, struct seq_file *seq)
+{
+ const struct netkit_link *nkl = netkit_link(link);
+ u32 ifindex = 0;
+
+ rtnl_lock();
+ if (nkl->dev)
+ ifindex = nkl->dev->ifindex;
+ rtnl_unlock();
+
+ seq_printf(seq, "ifindex:\t%u\n", ifindex);
+ seq_printf(seq, "attach_type:\t%u (%s)\n",
+ nkl->location,
+ nkl->location == BPF_NETKIT_PRIMARY ? "primary" : "peer");
+}
+
+static int netkit_link_fill_info(const struct bpf_link *link,
+ struct bpf_link_info *info)
+{
+ const struct netkit_link *nkl = netkit_link(link);
+ u32 ifindex = 0;
+
+ rtnl_lock();
+ if (nkl->dev)
+ ifindex = nkl->dev->ifindex;
+ rtnl_unlock();
+
+ info->netkit.ifindex = ifindex;
+ info->netkit.attach_type = nkl->location;
+ return 0;
+}
+
+static int netkit_link_detach(struct bpf_link *link)
+{
+ netkit_link_release(link);
+ return 0;
+}
+
+static const struct bpf_link_ops netkit_link_lops = {
+ .release = netkit_link_release,
+ .detach = netkit_link_detach,
+ .dealloc = netkit_link_dealloc,
+ .update_prog = netkit_link_update,
+ .show_fdinfo = netkit_link_fdinfo,
+ .fill_link_info = netkit_link_fill_info,
+};
+
+static int netkit_link_init(struct netkit_link *nkl,
+ struct bpf_link_primer *link_primer,
+ const union bpf_attr *attr,
+ struct net_device *dev,
+ struct bpf_prog *prog)
+{
+ bpf_link_init(&nkl->link, BPF_LINK_TYPE_NETKIT,
+ &netkit_link_lops, prog);
+ nkl->location = attr->link_create.attach_type;
+ nkl->dev = dev;
+ return bpf_link_prime(&nkl->link, link_primer);
+}
+
+int netkit_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
+{
+ struct bpf_link_primer link_primer;
+ struct netkit_link *nkl;
+ struct net_device *dev;
+ int ret;
+
+ rtnl_lock();
+ dev = netkit_dev_fetch(current->nsproxy->net_ns,
+ attr->link_create.target_ifindex,
+ attr->link_create.attach_type);
+ if (IS_ERR(dev)) {
+ ret = PTR_ERR(dev);
+ goto out;
+ }
+ nkl = kzalloc(sizeof(*nkl), GFP_KERNEL_ACCOUNT);
+ if (!nkl) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ ret = netkit_link_init(nkl, &link_primer, attr, dev, prog);
+ if (ret) {
+ kfree(nkl);
+ goto out;
+ }
+ ret = netkit_link_prog_attach(&nkl->link,
+ attr->link_create.flags,
+ attr->link_create.netkit.relative_fd,
+ attr->link_create.netkit.expected_revision);
+ if (ret) {
+ nkl->dev = NULL;
+ bpf_link_cleanup(&link_primer);
+ goto out;
+ }
+ ret = bpf_link_settle(&link_primer);
+out:
+ rtnl_unlock();
+ return ret;
+}
+
+static void netkit_release_all(struct net_device *dev)
+{
+ struct bpf_mprog_entry *entry;
+ struct bpf_tuple tuple = {};
+ struct bpf_mprog_fp *fp;
+ struct bpf_mprog_cp *cp;
+
+ entry = netkit_entry_fetch(dev, false);
+ if (!entry)
+ return;
+ netkit_entry_update(dev, NULL);
+ netkit_entry_sync();
+ bpf_mprog_foreach_tuple(entry, fp, cp, tuple) {
+ if (tuple.link)
+ netkit_link(tuple.link)->dev = NULL;
+ else
+ bpf_prog_put(tuple.prog);
+ }
+}
+
+static void netkit_uninit(struct net_device *dev)
+{
+ netkit_release_all(dev);
+}
+
+static void netkit_del_link(struct net_device *dev, struct list_head *head)
+{
+ struct netkit *nk = netkit_priv(dev);
+ struct net_device *peer = rtnl_dereference(nk->peer);
+
+ RCU_INIT_POINTER(nk->peer, NULL);
+ unregister_netdevice_queue(dev, head);
+ if (peer) {
+ nk = netkit_priv(peer);
+ RCU_INIT_POINTER(nk->peer, NULL);
+ unregister_netdevice_queue(peer, head);
+ }
+}
+
+static int netkit_change_link(struct net_device *dev, struct nlattr *tb[],
+ struct nlattr *data[],
+ struct netlink_ext_ack *extack)
+{
+ struct netkit *nk = netkit_priv(dev);
+ struct net_device *peer = rtnl_dereference(nk->peer);
+ enum netkit_action policy;
+ struct nlattr *attr;
+ int err;
+
+ if (!nk->primary) {
+ NL_SET_ERR_MSG(extack,
+ "netkit link settings can be changed only through the primary device");
+ return -EACCES;
+ }
+
+ if (data[IFLA_NETKIT_MODE]) {
+ NL_SET_ERR_MSG_ATTR(extack, data[IFLA_NETKIT_MODE],
+ "netkit link operating mode cannot be changed after device creation");
+ return -EACCES;
+ }
+
+ if (data[IFLA_NETKIT_POLICY]) {
+ attr = data[IFLA_NETKIT_POLICY];
+ policy = nla_get_u32(attr);
+ err = netkit_check_policy(policy, attr, extack);
+ if (err)
+ return err;
+ WRITE_ONCE(nk->policy, policy);
+ }
+
+ if (data[IFLA_NETKIT_PEER_POLICY]) {
+ err = -EOPNOTSUPP;
+ attr = data[IFLA_NETKIT_PEER_POLICY];
+ policy = nla_get_u32(attr);
+ if (peer)
+ err = netkit_check_policy(policy, attr, extack);
+ if (err)
+ return err;
+ nk = netkit_priv(peer);
+ WRITE_ONCE(nk->policy, policy);
+ }
+
+ return 0;
+}
+
+static size_t netkit_get_size(const struct net_device *dev)
+{
+ return nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_POLICY */
+ nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_PEER_POLICY */
+ nla_total_size(sizeof(u8)) + /* IFLA_NETKIT_PRIMARY */
+ nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_MODE */
+ 0;
+}
+
+static int netkit_fill_info(struct sk_buff *skb, const struct net_device *dev)
+{
+ struct netkit *nk = netkit_priv(dev);
+ struct net_device *peer = rtnl_dereference(nk->peer);
+
+ if (nla_put_u8(skb, IFLA_NETKIT_PRIMARY, nk->primary))
+ return -EMSGSIZE;
+ if (nla_put_u32(skb, IFLA_NETKIT_POLICY, nk->policy))
+ return -EMSGSIZE;
+ if (nla_put_u32(skb, IFLA_NETKIT_MODE, nk->mode))
+ return -EMSGSIZE;
+
+ if (peer) {
+ nk = netkit_priv(peer);
+ if (nla_put_u32(skb, IFLA_NETKIT_PEER_POLICY, nk->policy))
+ return -EMSGSIZE;
+ }
+
+ return 0;
+}
+
+static const struct nla_policy netkit_policy[IFLA_NETKIT_MAX + 1] = {
+ [IFLA_NETKIT_PEER_INFO] = { .len = sizeof(struct ifinfomsg) },
+ [IFLA_NETKIT_POLICY] = { .type = NLA_U32 },
+ [IFLA_NETKIT_MODE] = { .type = NLA_U32 },
+ [IFLA_NETKIT_PEER_POLICY] = { .type = NLA_U32 },
+ [IFLA_NETKIT_PRIMARY] = { .type = NLA_REJECT,
+ .reject_message = "Primary attribute is read-only" },
+};
+
+static struct rtnl_link_ops netkit_link_ops = {
+ .kind = DRV_NAME,
+ .priv_size = sizeof(struct netkit),
+ .setup = netkit_setup,
+ .newlink = netkit_new_link,
+ .dellink = netkit_del_link,
+ .changelink = netkit_change_link,
+ .get_link_net = netkit_get_link_net,
+ .get_size = netkit_get_size,
+ .fill_info = netkit_fill_info,
+ .policy = netkit_policy,
+ .validate = netkit_validate,
+ .maxtype = IFLA_NETKIT_MAX,
+};
+
+static __init int netkit_init(void)
+{
+ BUILD_BUG_ON((int)NETKIT_NEXT != (int)TCX_NEXT ||
+ (int)NETKIT_PASS != (int)TCX_PASS ||
+ (int)NETKIT_DROP != (int)TCX_DROP ||
+ (int)NETKIT_REDIRECT != (int)TCX_REDIRECT);
+
+ return rtnl_link_register(&netkit_link_ops);
+}
+
+static __exit void netkit_exit(void)
+{
+ rtnl_link_unregister(&netkit_link_ops);
+}
+
+module_init(netkit_init);
+module_exit(netkit_exit);
+
+MODULE_DESCRIPTION("BPF-programmable network device");
+MODULE_AUTHOR("Daniel Borkmann <daniel@iogearbox.net>");
+MODULE_AUTHOR("Nikolay Aleksandrov <razor@blackwall.org>");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_RTNL_LINK(DRV_NAME);
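
For orientation, here is a minimal user-space sketch of the attach point that netkit_link_attach() above services, issued through the raw bpf(2) syscall. It only uses the uapi fields visible in this file; prog_fd is assumed to come from an already-loaded program, ifindex from the primary netkit device, and kernel headers recent enough to define BPF_NETKIT_PRIMARY.

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Returns a BPF link fd on success, -1 with errno set on failure. */
static int netkit_link_create(int prog_fd, int ifindex)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.link_create.prog_fd = prog_fd;
	attr.link_create.target_ifindex = ifindex;
	attr.link_create.attach_type = BPF_NETKIT_PRIMARY;
	/* link_create.netkit.relative_fd / expected_revision stay 0
	 * for a plain append without revision checking.
	 */

	return syscall(__NR_bpf, BPF_LINK_CREATE, &attr, sizeof(attr));
}
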
diff --git a/drivers/net/pcs/pcs-xpcs.c b/drivers/net/pcs/pcs-xpcs.c
index 4dbc21f604f2..31f0beba638a 100644
--- a/drivers/net/pcs/pcs-xpcs.c
+++ b/drivers/net/pcs/pcs-xpcs.c
@@ -1090,6 +1090,28 @@ static int xpcs_get_state_c37_1000basex(struct dw_xpcs *xpcs,
return 0;
}
+static int xpcs_get_state_2500basex(struct dw_xpcs *xpcs,
+ struct phylink_link_state *state)
+{
+ int ret;
+
+ ret = xpcs_read(xpcs, MDIO_MMD_VEND2, DW_VR_MII_MMD_STS);
+ if (ret < 0) {
+ state->link = 0;
+ return ret;
+ }
+
+ state->link = !!(ret & DW_VR_MII_MMD_STS_LINK_STS);
+ if (!state->link)
+ return 0;
+
+ state->speed = SPEED_2500;
+ state->pause |= MLO_PAUSE_TX | MLO_PAUSE_RX;
+ state->duplex = DUPLEX_FULL;
+
+ return 0;
+}
+
static void xpcs_get_state(struct phylink_pcs *pcs,
struct phylink_link_state *state)
{
@@ -1127,6 +1149,13 @@ static void xpcs_get_state(struct phylink_pcs *pcs,
ERR_PTR(ret));
}
break;
+ case DW_2500BASEX:
+ ret = xpcs_get_state_2500basex(xpcs, state);
+ if (ret) {
+ pr_err("xpcs_get_state_2500basex returned %pe\n",
+ ERR_PTR(ret));
+ }
+ break;
default:
return;
}
diff --git a/drivers/net/pcs/pcs-xpcs.h b/drivers/net/pcs/pcs-xpcs.h
index 39a90417e535..96c36b32ca99 100644
--- a/drivers/net/pcs/pcs-xpcs.h
+++ b/drivers/net/pcs/pcs-xpcs.h
@@ -55,6 +55,8 @@
/* Clause 37 Defines */
/* VR MII MMD registers offsets */
#define DW_VR_MII_MMD_CTRL 0x0000
+#define DW_VR_MII_MMD_STS 0x0001
+#define DW_VR_MII_MMD_STS_LINK_STS BIT(2)
#define DW_VR_MII_DIG_CTRL1 0x8000
#define DW_VR_MII_AN_CTRL 0x8001
#define DW_VR_MII_AN_INTR_STS 0x8002
diff --git a/drivers/net/phy/Kconfig b/drivers/net/phy/Kconfig
index 107880d13d21..421d2b62918f 100644
--- a/drivers/net/phy/Kconfig
+++ b/drivers/net/phy/Kconfig
@@ -69,9 +69,9 @@ config SFP
comment "MII PHY device drivers"
config AMD_PHY
- tristate "AMD PHYs"
+ tristate "AMD and Altima PHYs"
help
- Currently supports the am79c874
+ Currently supports the AMD am79c874 and Altima AC101L.
config MESON_GXL_PHY
tristate "Amlogic Meson GXL Internal PHY"
diff --git a/drivers/net/phy/amd.c b/drivers/net/phy/amd.c
index 001bb6d8bfce..930b15fa6ce9 100644
--- a/drivers/net/phy/amd.c
+++ b/drivers/net/phy/amd.c
@@ -13,6 +13,7 @@
#include <linux/mii.h>
#include <linux/phy.h>
+#define PHY_ID_AC101L 0x00225520
#define PHY_ID_AM79C874 0x0022561b
#define MII_AM79C_IR 17 /* Interrupt Status/Control Register */
@@ -87,19 +88,31 @@ static irqreturn_t am79c_handle_interrupt(struct phy_device *phydev)
return IRQ_HANDLED;
}
-static struct phy_driver am79c_driver[] = { {
- .phy_id = PHY_ID_AM79C874,
- .name = "AM79C874",
- .phy_id_mask = 0xfffffff0,
- /* PHY_BASIC_FEATURES */
- .config_init = am79c_config_init,
- .config_intr = am79c_config_intr,
- .handle_interrupt = am79c_handle_interrupt,
-} };
+static struct phy_driver am79c_drivers[] = {
+ {
+ .phy_id = PHY_ID_AM79C874,
+ .name = "AM79C874",
+ .phy_id_mask = 0xfffffff0,
+ /* PHY_BASIC_FEATURES */
+ .config_init = am79c_config_init,
+ .config_intr = am79c_config_intr,
+ .handle_interrupt = am79c_handle_interrupt,
+ },
+ {
+ .phy_id = PHY_ID_AC101L,
+ .name = "AC101L",
+ .phy_id_mask = 0xfffffff0,
+ /* PHY_BASIC_FEATURES */
+ .config_init = am79c_config_init,
+ .config_intr = am79c_config_intr,
+ .handle_interrupt = am79c_handle_interrupt,
+ },
+};
-module_phy_driver(am79c_driver);
+module_phy_driver(am79c_drivers);
static struct mdio_device_id __maybe_unused amd_tbl[] = {
+ { PHY_ID_AC101L, 0xfffffff0 },
{ PHY_ID_AM79C874, 0xfffffff0 },
{ }
};
diff --git a/drivers/net/phy/ax88796b.c b/drivers/net/phy/ax88796b.c
index 0f1e617a26c9..eb74a8cf8df1 100644
--- a/drivers/net/phy/ax88796b.c
+++ b/drivers/net/phy/ax88796b.c
@@ -90,7 +90,7 @@ static void asix_ax88772a_link_change_notify(struct phy_device *phydev)
*/
if (phydev->state == PHY_NOLINK) {
phy_init_hw(phydev);
- phy_start_aneg(phydev);
+ _phy_start_aneg(phydev);
}
}
diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
index 04b2e6eeb195..3a627105675a 100644
--- a/drivers/net/phy/broadcom.c
+++ b/drivers/net/phy/broadcom.c
@@ -704,16 +704,21 @@ static int brcm_fet_config_init(struct phy_device *phydev)
if (err < 0 && err != -EIO)
return err;
+ /* Read to clear status bits */
reg = phy_read(phydev, MII_BRCM_FET_INTREG);
if (reg < 0)
return reg;
/* Unmask events we are interested in and mask interrupts globally. */
- reg = MII_BRCM_FET_IR_DUPLEX_EN |
- MII_BRCM_FET_IR_SPEED_EN |
- MII_BRCM_FET_IR_LINK_EN |
- MII_BRCM_FET_IR_ENABLE |
- MII_BRCM_FET_IR_MASK;
+ if (phydev->phy_id == PHY_ID_BCM5221)
+ reg = MII_BRCM_FET_IR_ENABLE |
+ MII_BRCM_FET_IR_MASK;
+ else
+ reg = MII_BRCM_FET_IR_DUPLEX_EN |
+ MII_BRCM_FET_IR_SPEED_EN |
+ MII_BRCM_FET_IR_LINK_EN |
+ MII_BRCM_FET_IR_ENABLE |
+ MII_BRCM_FET_IR_MASK;
err = phy_write(phydev, MII_BRCM_FET_INTREG, reg);
if (err < 0)
@@ -726,42 +731,49 @@ static int brcm_fet_config_init(struct phy_device *phydev)
reg = brcmtest | MII_BRCM_FET_BT_SRE;
- err = phy_write(phydev, MII_BRCM_FET_BRCMTEST, reg);
- if (err < 0)
- return err;
+ phy_lock_mdio_bus(phydev);
- /* Set the LED mode */
- reg = phy_read(phydev, MII_BRCM_FET_SHDW_AUXMODE4);
- if (reg < 0) {
- err = reg;
- goto done;
+ err = __phy_write(phydev, MII_BRCM_FET_BRCMTEST, reg);
+ if (err < 0) {
+ phy_unlock_mdio_bus(phydev);
+ return err;
}
- reg &= ~MII_BRCM_FET_SHDW_AM4_LED_MASK;
- reg |= MII_BRCM_FET_SHDW_AM4_LED_MODE1;
+ if (phydev->phy_id != PHY_ID_BCM5221) {
+ /* Set the LED mode */
+ reg = __phy_read(phydev, MII_BRCM_FET_SHDW_AUXMODE4);
+ if (reg < 0) {
+ err = reg;
+ goto done;
+ }
- err = phy_write(phydev, MII_BRCM_FET_SHDW_AUXMODE4, reg);
- if (err < 0)
- goto done;
+ err = __phy_modify(phydev, MII_BRCM_FET_SHDW_AUXMODE4,
+ MII_BRCM_FET_SHDW_AM4_LED_MASK,
+ MII_BRCM_FET_SHDW_AM4_LED_MODE1);
+ if (err < 0)
+ goto done;
- /* Enable auto MDIX */
- err = phy_set_bits(phydev, MII_BRCM_FET_SHDW_MISCCTRL,
- MII_BRCM_FET_SHDW_MC_FAME);
- if (err < 0)
- goto done;
+ /* Enable auto MDIX */
+ err = __phy_set_bits(phydev, MII_BRCM_FET_SHDW_MISCCTRL,
+ MII_BRCM_FET_SHDW_MC_FAME);
+ if (err < 0)
+ goto done;
+ }
if (phydev->dev_flags & PHY_BRCM_AUTO_PWRDWN_ENABLE) {
/* Enable auto power down */
- err = phy_set_bits(phydev, MII_BRCM_FET_SHDW_AUXSTAT2,
- MII_BRCM_FET_SHDW_AS2_APDE);
+ err = __phy_set_bits(phydev, MII_BRCM_FET_SHDW_AUXSTAT2,
+ MII_BRCM_FET_SHDW_AS2_APDE);
}
done:
/* Disable shadow register access */
- err2 = phy_write(phydev, MII_BRCM_FET_BRCMTEST, brcmtest);
+ err2 = __phy_write(phydev, MII_BRCM_FET_BRCMTEST, brcmtest);
if (!err)
err = err2;
+ phy_unlock_mdio_bus(phydev);
+
return err;
}
@@ -840,23 +852,86 @@ static int brcm_fet_suspend(struct phy_device *phydev)
reg = brcmtest | MII_BRCM_FET_BT_SRE;
- err = phy_write(phydev, MII_BRCM_FET_BRCMTEST, reg);
- if (err < 0)
+ phy_lock_mdio_bus(phydev);
+
+ err = __phy_write(phydev, MII_BRCM_FET_BRCMTEST, reg);
+ if (err < 0) {
+ phy_unlock_mdio_bus(phydev);
return err;
+ }
+
+ if (phydev->phy_id == PHY_ID_BCM5221)
+ /* Force Low Power Mode with clock enabled */
+ reg = BCM5221_SHDW_AM4_EN_CLK_LPM | BCM5221_SHDW_AM4_FORCE_LPM;
+ else
+ /* Set standby mode */
+ reg = MII_BRCM_FET_SHDW_AM4_STANDBY;
- /* Set standby mode */
- err = phy_modify(phydev, MII_BRCM_FET_SHDW_AUXMODE4,
- MII_BRCM_FET_SHDW_AM4_STANDBY,
- MII_BRCM_FET_SHDW_AM4_STANDBY);
+ err = __phy_set_bits(phydev, MII_BRCM_FET_SHDW_AUXMODE4, reg);
/* Disable shadow register access */
- err2 = phy_write(phydev, MII_BRCM_FET_BRCMTEST, brcmtest);
+ err2 = __phy_write(phydev, MII_BRCM_FET_BRCMTEST, brcmtest);
if (!err)
err = err2;
+ phy_unlock_mdio_bus(phydev);
+
return err;
}
+static int bcm5221_config_aneg(struct phy_device *phydev)
+{
+ int ret, val;
+
+ ret = genphy_config_aneg(phydev);
+ if (ret)
+ return ret;
+
+ switch (phydev->mdix_ctrl) {
+ case ETH_TP_MDI:
+ val = BCM5221_AEGSR_MDIX_DIS;
+ break;
+ case ETH_TP_MDI_X:
+ val = BCM5221_AEGSR_MDIX_DIS | BCM5221_AEGSR_MDIX_MAN_SWAP;
+ break;
+ case ETH_TP_MDI_AUTO:
+ val = 0;
+ break;
+ default:
+ return 0;
+ }
+
+ return phy_modify(phydev, BCM5221_AEGSR, BCM5221_AEGSR_MDIX_MAN_SWAP |
+ BCM5221_AEGSR_MDIX_DIS,
+ val);
+}
+
+static int bcm5221_read_status(struct phy_device *phydev)
+{
+ int ret;
+
+ /* Read MDIX status */
+ ret = phy_read(phydev, BCM5221_AEGSR);
+ if (ret < 0)
+ return ret;
+
+ if (ret & BCM5221_AEGSR_MDIX_DIS) {
+ if (ret & BCM5221_AEGSR_MDIX_MAN_SWAP)
+ phydev->mdix_ctrl = ETH_TP_MDI_X;
+ else
+ phydev->mdix_ctrl = ETH_TP_MDI;
+ } else {
+ phydev->mdix_ctrl = ETH_TP_MDI_AUTO;
+ }
+
+ if (ret & BCM5221_AEGSR_MDIX_STATUS)
+ phydev->mdix = ETH_TP_MDI_X;
+ else
+ phydev->mdix = ETH_TP_MDI;
+
+ return genphy_read_status(phydev);
+}
+
static void bcm54xx_phy_get_wol(struct phy_device *phydev,
struct ethtool_wolinfo *wol)
{
@@ -1222,6 +1297,18 @@ static struct phy_driver broadcom_drivers[] = {
.suspend = brcm_fet_suspend,
.resume = brcm_fet_config_init,
}, {
+ .phy_id = PHY_ID_BCM5221,
+ .phy_id_mask = 0xfffffff0,
+ .name = "Broadcom BCM5221",
+ /* PHY_BASIC_FEATURES */
+ .config_init = brcm_fet_config_init,
+ .config_intr = brcm_fet_config_intr,
+ .handle_interrupt = brcm_fet_handle_interrupt,
+ .suspend = brcm_fet_suspend,
+ .resume = brcm_fet_config_init,
+ .config_aneg = bcm5221_config_aneg,
+ .read_status = bcm5221_read_status,
+}, {
.phy_id = PHY_ID_BCM5395,
.phy_id_mask = 0xfffffff0,
.name = "Broadcom BCM5395",
@@ -1296,6 +1383,7 @@ static struct mdio_device_id __maybe_unused broadcom_tbl[] = {
{ PHY_ID_BCM50610M, 0xfffffff0 },
{ PHY_ID_BCM57780, 0xfffffff0 },
{ PHY_ID_BCMAC131, 0xfffffff0 },
+ { PHY_ID_BCM5221, 0xfffffff0 },
{ PHY_ID_BCM5241, 0xfffffff0 },
{ PHY_ID_BCM5395, 0xfffffff0 },
{ PHY_ID_BCM53125, 0xfffffff0 },
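
The brcm_fet_config_init() and brcm_fet_suspend() rework above relies on __phy_modify() being a read-modify-write of a single register while the MDIO bus lock is already held via phy_lock_mdio_bus(). A rough sketch of what such a modify helper reduces to (illustrative only; the real helpers also skip redundant writes and come in locked and unlocked variants):

#include <linux/phy.h>

static int phy_modify_sketch(struct phy_device *phydev, u32 regnum,
			     u16 mask, u16 set)
{
	int old = phy_read(phydev, regnum);
	u16 newval;

	if (old < 0)
		return old;		/* propagate MDIO read errors */

	newval = (old & ~mask) | set;
	return phy_write(phydev, regnum, newval);
}
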
diff --git a/drivers/net/phy/dp83867.c b/drivers/net/phy/dp83867.c
index e397e7d642d9..5f08f9d38bd7 100644
--- a/drivers/net/phy/dp83867.c
+++ b/drivers/net/phy/dp83867.c
@@ -159,6 +159,23 @@
#define DP83867_LED_DRV_EN(x) BIT((x) * 4)
#define DP83867_LED_DRV_VAL(x) BIT((x) * 4 + 1)
+#define DP83867_LED_FN(idx, val) (((val) & 0xf) << ((idx) * 4))
+#define DP83867_LED_FN_MASK(idx) (0xf << ((idx) * 4))
+#define DP83867_LED_FN_RX_ERR 0xe /* Receive Error */
+#define DP83867_LED_FN_RX_TX_ERR 0xd /* Receive Error or Transmit Error */
+#define DP83867_LED_FN_LINK_RX_TX 0xb /* Link established, blink for rx or tx activity */
+#define DP83867_LED_FN_FULL_DUPLEX 0xa /* Full duplex */
+#define DP83867_LED_FN_LINK_100_1000_BT 0x9 /* 100/1000BT link established */
+#define DP83867_LED_FN_LINK_10_100_BT 0x8 /* 10/100BT link established */
+#define DP83867_LED_FN_LINK_10_BT 0x7 /* 10BT link established */
+#define DP83867_LED_FN_LINK_100_BTX 0x6 /* 100 BTX link established */
+#define DP83867_LED_FN_LINK_1000_BT 0x5 /* 1000 BT link established */
+#define DP83867_LED_FN_COLLISION 0x4 /* Collision detected */
+#define DP83867_LED_FN_RX 0x3 /* Receive activity */
+#define DP83867_LED_FN_TX 0x2 /* Transmit activity */
+#define DP83867_LED_FN_RX_TX 0x1 /* Receive or Transmit activity */
+#define DP83867_LED_FN_LINK 0x0 /* Link established */
+
enum {
DP83867_PORT_MIRROING_KEEP,
DP83867_PORT_MIRROING_EN,
@@ -1018,6 +1035,123 @@ dp83867_led_brightness_set(struct phy_device *phydev,
val);
}
+static int dp83867_led_mode(u8 index, unsigned long rules)
+{
+ if (index >= DP83867_LED_COUNT)
+ return -EINVAL;
+
+ switch (rules) {
+ case BIT(TRIGGER_NETDEV_LINK):
+ return DP83867_LED_FN_LINK;
+ case BIT(TRIGGER_NETDEV_LINK_10):
+ return DP83867_LED_FN_LINK_10_BT;
+ case BIT(TRIGGER_NETDEV_LINK_100):
+ return DP83867_LED_FN_LINK_100_BTX;
+ case BIT(TRIGGER_NETDEV_FULL_DUPLEX):
+ return DP83867_LED_FN_FULL_DUPLEX;
+ case BIT(TRIGGER_NETDEV_TX):
+ return DP83867_LED_FN_TX;
+ case BIT(TRIGGER_NETDEV_RX):
+ return DP83867_LED_FN_RX;
+ case BIT(TRIGGER_NETDEV_LINK_1000):
+ return DP83867_LED_FN_LINK_1000_BT;
+ case BIT(TRIGGER_NETDEV_TX) | BIT(TRIGGER_NETDEV_RX):
+ return DP83867_LED_FN_RX_TX;
+ case BIT(TRIGGER_NETDEV_LINK_100) | BIT(TRIGGER_NETDEV_LINK_1000):
+ return DP83867_LED_FN_LINK_100_1000_BT;
+ case BIT(TRIGGER_NETDEV_LINK_10) | BIT(TRIGGER_NETDEV_LINK_100):
+ return DP83867_LED_FN_LINK_10_100_BT;
+ case BIT(TRIGGER_NETDEV_LINK) | BIT(TRIGGER_NETDEV_TX) | BIT(TRIGGER_NETDEV_RX):
+ return DP83867_LED_FN_LINK_RX_TX;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static int dp83867_led_hw_is_supported(struct phy_device *phydev, u8 index,
+ unsigned long rules)
+{
+ int ret;
+
+ ret = dp83867_led_mode(index, rules);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+
+static int dp83867_led_hw_control_set(struct phy_device *phydev, u8 index,
+ unsigned long rules)
+{
+ int mode, ret;
+
+ mode = dp83867_led_mode(index, rules);
+ if (mode < 0)
+ return mode;
+
+ ret = phy_modify(phydev, DP83867_LEDCR1, DP83867_LED_FN_MASK(index),
+ DP83867_LED_FN(index, mode));
+ if (ret)
+ return ret;
+
+ return phy_modify(phydev, DP83867_LEDCR2, DP83867_LED_DRV_EN(index), 0);
+}
+
+static int dp83867_led_hw_control_get(struct phy_device *phydev, u8 index,
+ unsigned long *rules)
+{
+ int val;
+
+ val = phy_read(phydev, DP83867_LEDCR1);
+ if (val < 0)
+ return val;
+
+ val &= DP83867_LED_FN_MASK(index);
+ val >>= index * 4;
+
+ switch (val) {
+ case DP83867_LED_FN_LINK:
+ *rules = BIT(TRIGGER_NETDEV_LINK);
+ break;
+ case DP83867_LED_FN_LINK_10_BT:
+ *rules = BIT(TRIGGER_NETDEV_LINK_10);
+ break;
+ case DP83867_LED_FN_LINK_100_BTX:
+ *rules = BIT(TRIGGER_NETDEV_LINK_100);
+ break;
+ case DP83867_LED_FN_FULL_DUPLEX:
+ *rules = BIT(TRIGGER_NETDEV_FULL_DUPLEX);
+ break;
+ case DP83867_LED_FN_TX:
+ *rules = BIT(TRIGGER_NETDEV_TX);
+ break;
+ case DP83867_LED_FN_RX:
+ *rules = BIT(TRIGGER_NETDEV_RX);
+ break;
+ case DP83867_LED_FN_LINK_1000_BT:
+ *rules = BIT(TRIGGER_NETDEV_LINK_1000);
+ break;
+ case DP83867_LED_FN_RX_TX:
+ *rules = BIT(TRIGGER_NETDEV_TX) | BIT(TRIGGER_NETDEV_RX);
+ break;
+ case DP83867_LED_FN_LINK_100_1000_BT:
+ *rules = BIT(TRIGGER_NETDEV_LINK_100) | BIT(TRIGGER_NETDEV_LINK_1000);
+ break;
+ case DP83867_LED_FN_LINK_10_100_BT:
+ *rules = BIT(TRIGGER_NETDEV_LINK_10) | BIT(TRIGGER_NETDEV_LINK_100);
+ break;
+ case DP83867_LED_FN_LINK_RX_TX:
+ *rules = BIT(TRIGGER_NETDEV_LINK) | BIT(TRIGGER_NETDEV_TX) |
+ BIT(TRIGGER_NETDEV_RX);
+ break;
+ default:
+ *rules = 0;
+ break;
+ }
+
+ return 0;
+}
+
static struct phy_driver dp83867_driver[] = {
{
.phy_id = DP83867_PHY_ID,
@@ -1047,6 +1181,9 @@ static struct phy_driver dp83867_driver[] = {
.set_loopback = dp83867_loopback,
.led_brightness_set = dp83867_led_brightness_set,
+ .led_hw_is_supported = dp83867_led_hw_is_supported,
+ .led_hw_control_set = dp83867_led_hw_control_set,
+ .led_hw_control_get = dp83867_led_hw_control_get,
},
};
module_phy_driver(dp83867_driver);
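
As a worked example of the LED function macros added above: for LED index 2 and the 1000BT link function (0x5), the phy_modify() in dp83867_led_hw_control_set() only touches bits 11:8 of DP83867_LEDCR1. The asserts below just restate that arithmetic, with the two macros copied from the hunk:

#define DP83867_LED_FN(idx, val)	(((val) & 0xf) << ((idx) * 4))
#define DP83867_LED_FN_MASK(idx)	(0xf << ((idx) * 4))

_Static_assert(DP83867_LED_FN_MASK(2) == 0x0f00, "mask covers bits 11:8");
_Static_assert(DP83867_LED_FN(2, 0x5) == 0x0500, "function code lands in bits 11:8");
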
diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
index 927d3d54658e..08e3915001c3 100644
--- a/drivers/net/phy/micrel.c
+++ b/drivers/net/phy/micrel.c
@@ -1733,6 +1733,28 @@ static int ksz886x_config_aneg(struct phy_device *phydev)
if (ret)
return ret;
+ if (phydev->autoneg != AUTONEG_ENABLE) {
+ if (phydev->autoneg != AUTONEG_ENABLE) {
+ /* When autonegotiation is disabled, we need to manually force
+ * the link state. If we don't do this, the PHY will keep
+ * sending Fast Link Pulses (FLPs) which are part of the
+ * autonegotiation process. This is not desired when
+ * autonegotiation is off.
+ */
+ ret = phy_set_bits(phydev, MII_KSZPHY_CTRL,
+ KSZ886X_CTRL_FORCE_LINK);
+ if (ret)
+ return ret;
+ } else {
+ /* If we had previously forced the link state, we need to
+ * clear the KSZ886X_CTRL_FORCE_LINK bit now. Otherwise, the PHY
+ * will not perform autonegotiation.
+ */
+ ret = phy_clear_bits(phydev, MII_KSZPHY_CTRL,
+ KSZ886X_CTRL_FORCE_LINK);
+ if (ret)
+ return ret;
+ }
+
/* The MDI-X configuration is automatically changed by the PHY after
* switching from autoneg off to on. So, take the MDI-X configuration under
* our own control and set it after the autoneg configuration is done.
diff --git a/drivers/net/phy/nxp-tja11xx.c b/drivers/net/phy/nxp-tja11xx.c
index b13e15310feb..a71399965142 100644
--- a/drivers/net/phy/nxp-tja11xx.c
+++ b/drivers/net/phy/nxp-tja11xx.c
@@ -414,10 +414,8 @@ static void tja11xx_get_strings(struct phy_device *phydev, u8 *data)
{
int i;
- for (i = 0; i < ARRAY_SIZE(tja11xx_hw_stats); i++) {
- strncpy(data + i * ETH_GSTRING_LEN,
- tja11xx_hw_stats[i].string, ETH_GSTRING_LEN);
- }
+ for (i = 0; i < ARRAY_SIZE(tja11xx_hw_stats); i++)
+ ethtool_sprintf(&data, "%s", tja11xx_hw_stats[i].string);
}
static void tja11xx_get_stats(struct phy_device *phydev,
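
This conversion (and the matching one in smsc.c further down) leans on ethtool_sprintf() advancing the destination cursor by one ETH_GSTRING_LEN slot per call, which is why the explicit data + i * ETH_GSTRING_LEN indexing can go away. A minimal sketch with illustrative names:

#include <linux/ethtool.h>

static void foo_get_strings(u8 *data, const char * const *stat_names,
			    int count)
{
	int i;

	/* each call fills one ETH_GSTRING_LEN slot and bumps "data" */
	for (i = 0; i < count; i++)
		ethtool_sprintf(&data, "%s", stat_names[i]);
}
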
diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index df54c137c5f5..a5fa077650e8 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -981,7 +981,7 @@ static int phy_check_link_status(struct phy_device *phydev)
* If the PHYCONTROL Layer is operating, we change the state to
* reflect the beginning of Auto-negotiation or forcing.
*/
-static int _phy_start_aneg(struct phy_device *phydev)
+int _phy_start_aneg(struct phy_device *phydev)
{
int err;
@@ -1002,6 +1002,7 @@ static int _phy_start_aneg(struct phy_device *phydev)
return err;
}
+EXPORT_SYMBOL(_phy_start_aneg);
/**
* phy_start_aneg - start auto-negotiation for this PHY device
@@ -1231,9 +1232,7 @@ static void phy_error_precise(struct phy_device *phydev,
const void *func, int err)
{
WARN(1, "%pS: returned: %d\n", func, err);
- mutex_lock(&phydev->lock);
phy_process_error(phydev);
- mutex_unlock(&phydev->lock);
}
/**
@@ -1355,6 +1354,113 @@ void phy_free_interrupt(struct phy_device *phydev)
}
EXPORT_SYMBOL(phy_free_interrupt);
+enum phy_state_work {
+ PHY_STATE_WORK_NONE,
+ PHY_STATE_WORK_ANEG,
+ PHY_STATE_WORK_SUSPEND,
+};
+
+static enum phy_state_work _phy_state_machine(struct phy_device *phydev)
+{
+ enum phy_state_work state_work = PHY_STATE_WORK_NONE;
+ struct net_device *dev = phydev->attached_dev;
+ enum phy_state old_state = phydev->state;
+ const void *func = NULL;
+ bool finished = false;
+ int err = 0;
+
+ switch (phydev->state) {
+ case PHY_DOWN:
+ case PHY_READY:
+ break;
+ case PHY_UP:
+ state_work = PHY_STATE_WORK_ANEG;
+ break;
+ case PHY_NOLINK:
+ case PHY_RUNNING:
+ err = phy_check_link_status(phydev);
+ func = &phy_check_link_status;
+ break;
+ case PHY_CABLETEST:
+ err = phydev->drv->cable_test_get_status(phydev, &finished);
+ if (err) {
+ phy_abort_cable_test(phydev);
+ netif_testing_off(dev);
+ state_work = PHY_STATE_WORK_ANEG;
+ phydev->state = PHY_UP;
+ break;
+ }
+
+ if (finished) {
+ ethnl_cable_test_finished(phydev);
+ netif_testing_off(dev);
+ state_work = PHY_STATE_WORK_ANEG;
+ phydev->state = PHY_UP;
+ }
+ break;
+ case PHY_HALTED:
+ case PHY_ERROR:
+ if (phydev->link) {
+ phydev->link = 0;
+ phy_link_down(phydev);
+ }
+ state_work = PHY_STATE_WORK_SUSPEND;
+ break;
+ }
+
+ if (state_work == PHY_STATE_WORK_ANEG) {
+ err = _phy_start_aneg(phydev);
+ func = &_phy_start_aneg;
+ }
+
+ if (err == -ENODEV)
+ return state_work;
+
+ if (err < 0)
+ phy_error_precise(phydev, func, err);
+
+ phy_process_state_change(phydev, old_state);
+
+ /* Only re-schedule a PHY state machine change if we are polling the
+ * PHY, if PHY_MAC_INTERRUPT is set, then we will be moving
+ * between states from phy_mac_interrupt().
+ *
+ * In state PHY_HALTED the PHY gets suspended, so rescheduling the
+ * state machine would be pointless and possibly error prone when
+ * called from phy_disconnect() synchronously.
+ */
+ if (phy_polling_mode(phydev) && phy_is_started(phydev))
+ phy_queue_state_machine(phydev, PHY_STATE_TIME);
+
+ return state_work;
+}
+
+/* unlocked part of the PHY state machine */
+static void _phy_state_machine_post_work(struct phy_device *phydev,
+ enum phy_state_work state_work)
+{
+ if (state_work == PHY_STATE_WORK_SUSPEND)
+ phy_suspend(phydev);
+}
+
+/**
+ * phy_state_machine - Handle the state machine
+ * @work: work_struct that describes the work to be done
+ */
+void phy_state_machine(struct work_struct *work)
+{
+ struct delayed_work *dwork = to_delayed_work(work);
+ struct phy_device *phydev =
+ container_of(dwork, struct phy_device, state_queue);
+ enum phy_state_work state_work;
+
+ mutex_lock(&phydev->lock);
+ state_work = _phy_state_machine(phydev);
+ mutex_unlock(&phydev->lock);
+
+ _phy_state_machine_post_work(phydev, state_work);
+}
+
/**
* phy_stop - Bring down the PHY link, and stop checking the status
* @phydev: target phy_device struct
@@ -1362,6 +1468,7 @@ EXPORT_SYMBOL(phy_free_interrupt);
void phy_stop(struct phy_device *phydev)
{
struct net_device *dev = phydev->attached_dev;
+ enum phy_state_work state_work;
enum phy_state old_state;
if (!phy_is_started(phydev) && phydev->state != PHY_DOWN &&
@@ -1385,9 +1492,10 @@ void phy_stop(struct phy_device *phydev)
phydev->state = PHY_HALTED;
phy_process_state_change(phydev, old_state);
+ state_work = _phy_state_machine(phydev);
mutex_unlock(&phydev->lock);
- phy_state_machine(&phydev->state_queue.work);
+ _phy_state_machine_post_work(phydev, state_work);
phy_stop_machine(phydev);
/* Cannot call flush_scheduled_work() here as desired because
@@ -1432,97 +1540,6 @@ out:
EXPORT_SYMBOL(phy_start);
/**
- * phy_state_machine - Handle the state machine
- * @work: work_struct that describes the work to be done
- */
-void phy_state_machine(struct work_struct *work)
-{
- struct delayed_work *dwork = to_delayed_work(work);
- struct phy_device *phydev =
- container_of(dwork, struct phy_device, state_queue);
- struct net_device *dev = phydev->attached_dev;
- bool needs_aneg = false, do_suspend = false;
- enum phy_state old_state;
- const void *func = NULL;
- bool finished = false;
- int err = 0;
-
- mutex_lock(&phydev->lock);
-
- old_state = phydev->state;
-
- switch (phydev->state) {
- case PHY_DOWN:
- case PHY_READY:
- break;
- case PHY_UP:
- needs_aneg = true;
-
- break;
- case PHY_NOLINK:
- case PHY_RUNNING:
- err = phy_check_link_status(phydev);
- func = &phy_check_link_status;
- break;
- case PHY_CABLETEST:
- err = phydev->drv->cable_test_get_status(phydev, &finished);
- if (err) {
- phy_abort_cable_test(phydev);
- netif_testing_off(dev);
- needs_aneg = true;
- phydev->state = PHY_UP;
- break;
- }
-
- if (finished) {
- ethnl_cable_test_finished(phydev);
- netif_testing_off(dev);
- needs_aneg = true;
- phydev->state = PHY_UP;
- }
- break;
- case PHY_HALTED:
- case PHY_ERROR:
- if (phydev->link) {
- phydev->link = 0;
- phy_link_down(phydev);
- }
- do_suspend = true;
- break;
- }
-
- mutex_unlock(&phydev->lock);
-
- if (needs_aneg) {
- err = phy_start_aneg(phydev);
- func = &phy_start_aneg;
- } else if (do_suspend) {
- phy_suspend(phydev);
- }
-
- if (err == -ENODEV)
- return;
-
- if (err < 0)
- phy_error_precise(phydev, func, err);
-
- phy_process_state_change(phydev, old_state);
-
- /* Only re-schedule a PHY state machine change if we are polling the
- * PHY, if PHY_MAC_INTERRUPT is set, then we will be moving
- * between states from phy_mac_interrupt().
- *
- * In state PHY_HALTED the PHY gets suspended, so rescheduling the
- * state machine would be pointless and possibly error prone when
- * called from phy_disconnect() synchronously.
- */
- mutex_lock(&phydev->lock);
- if (phy_polling_mode(phydev) && phy_is_started(phydev))
- phy_queue_state_machine(phydev, PHY_STATE_TIME);
- mutex_unlock(&phydev->lock);
-}
-
-/**
* phy_mac_interrupt - MAC says the link has changed
* @phydev: phy_device struct with changed link
*
diff --git a/drivers/net/phy/phylink.c b/drivers/net/phy/phylink.c
index 0d7354955d62..6712883498bb 100644
--- a/drivers/net/phy/phylink.c
+++ b/drivers/net/phy/phylink.c
@@ -257,7 +257,8 @@ static int phylink_interface_max_speed(phy_interface_t interface)
* Set all possible pause, speed and duplex linkmodes in @linkmodes that are
* supported by the @caps. @linkmodes must have been initialised previously.
*/
-void phylink_caps_to_linkmodes(unsigned long *linkmodes, unsigned long caps)
+static void phylink_caps_to_linkmodes(unsigned long *linkmodes,
+ unsigned long caps)
{
if (caps & MAC_SYM_PAUSE)
__set_bit(ETHTOOL_LINK_MODE_Pause_BIT, linkmodes);
@@ -400,7 +401,6 @@ void phylink_caps_to_linkmodes(unsigned long *linkmodes, unsigned long caps)
__set_bit(ETHTOOL_LINK_MODE_400000baseCR4_Full_BIT, linkmodes);
}
}
-EXPORT_SYMBOL_GPL(phylink_caps_to_linkmodes);
static struct {
unsigned long mask;
@@ -477,9 +477,9 @@ static unsigned long phylink_cap_from_speed_duplex(int speed,
* Get the MAC capabilities that are supported by the @interface mode and
* @mac_capabilities.
*/
-unsigned long phylink_get_capabilities(phy_interface_t interface,
- unsigned long mac_capabilities,
- int rate_matching)
+static unsigned long phylink_get_capabilities(phy_interface_t interface,
+ unsigned long mac_capabilities,
+ int rate_matching)
{
int max_speed = phylink_interface_max_speed(interface);
unsigned long caps = MAC_SYM_PAUSE | MAC_ASYM_PAUSE;
@@ -606,7 +606,6 @@ unsigned long phylink_get_capabilities(phy_interface_t interface,
return (caps & mac_capabilities) | matched_caps;
}
-EXPORT_SYMBOL_GPL(phylink_get_capabilities);
/**
* phylink_validate_mask_caps() - Restrict link modes based on caps
@@ -618,9 +617,9 @@ EXPORT_SYMBOL_GPL(phylink_get_capabilities);
* @supported and @state based on that. Use this function if your capabilities
* aren't constant, such as if they vary depending on the interface.
*/
-void phylink_validate_mask_caps(unsigned long *supported,
- struct phylink_link_state *state,
- unsigned long mac_capabilities)
+static void phylink_validate_mask_caps(unsigned long *supported,
+ struct phylink_link_state *state,
+ unsigned long mac_capabilities)
{
__ETHTOOL_DECLARE_LINK_MODE_MASK(mask) = { 0, };
unsigned long caps;
@@ -634,29 +633,12 @@ void phylink_validate_mask_caps(unsigned long *supported,
linkmode_and(supported, supported, mask);
linkmode_and(state->advertising, state->advertising, mask);
}
-EXPORT_SYMBOL_GPL(phylink_validate_mask_caps);
-
-/**
- * phylink_generic_validate() - generic validate() callback implementation
- * @config: a pointer to a &struct phylink_config.
- * @supported: ethtool bitmask for supported link modes.
- * @state: a pointer to a &struct phylink_link_state.
- *
- * Generic implementation of the validate() callback that MAC drivers can
- * use when they pass the range of supported interfaces and MAC capabilities.
- */
-void phylink_generic_validate(struct phylink_config *config,
- unsigned long *supported,
- struct phylink_link_state *state)
-{
- phylink_validate_mask_caps(supported, state, config->mac_capabilities);
-}
-EXPORT_SYMBOL_GPL(phylink_generic_validate);
static int phylink_validate_mac_and_pcs(struct phylink *pl,
unsigned long *supported,
struct phylink_link_state *state)
{
+ unsigned long capabilities;
struct phylink_pcs *pcs;
int ret;
@@ -696,10 +678,13 @@ static int phylink_validate_mac_and_pcs(struct phylink *pl,
}
/* Then validate the link parameters with the MAC */
- if (pl->mac_ops->validate)
- pl->mac_ops->validate(pl->config, supported, state);
+ if (pl->mac_ops->mac_get_caps)
+ capabilities = pl->mac_ops->mac_get_caps(pl->config,
+ state->interface);
else
- phylink_generic_validate(pl->config, supported, state);
+ capabilities = pl->config->mac_capabilities;
+
+ phylink_validate_mask_caps(supported, state, capabilities);
return phylink_is_empty_linkmode(supported) ? -EINVAL : 0;
}
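
With phylink_generic_validate() and the capability helpers un-exported above, a MAC driver that previously supplied a validate() hook instead reports per-interface capabilities through the new mac_get_caps() callback. A minimal sketch, with made-up driver names and example capability bits:

#include <linux/phylink.h>

static unsigned long foo_mac_get_caps(struct phylink_config *config,
				      phy_interface_t interface)
{
	unsigned long caps = MAC_SYM_PAUSE | MAC_ASYM_PAUSE |
			     MAC_10 | MAC_100 | MAC_1000FD;

	if (interface == PHY_INTERFACE_MODE_2500BASEX)
		caps |= MAC_2500FD;

	return caps;
}

static const struct phylink_mac_ops foo_phylink_mac_ops = {
	.mac_get_caps	= foo_mac_get_caps,
	/* .mac_config, .mac_link_up, .mac_link_down as before */
};
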
diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
index 4ecfac227865..b8c0961daf53 100644
--- a/drivers/net/phy/sfp.c
+++ b/drivers/net/phy/sfp.c
@@ -257,6 +257,7 @@ struct sfp {
unsigned int state_hw_drive;
unsigned int state_hw_mask;
unsigned int state_soft_mask;
+ unsigned int state_ignore_mask;
unsigned int state;
struct delayed_work poll;
@@ -280,7 +281,6 @@ struct sfp {
unsigned int rs_state_mask;
bool have_a2;
- bool tx_fault_ignore;
const struct sfp_quirk *quirk;
@@ -345,9 +345,24 @@ static void sfp_fixup_long_startup(struct sfp *sfp)
sfp->module_t_start_up = T_START_UP_BAD_GPON;
}
+static void sfp_fixup_ignore_los(struct sfp *sfp)
+{
+ /* This forces LOS to zero, so we ignore transitions */
+ sfp->state_ignore_mask |= SFP_F_LOS;
+ /* Make sure that LOS options are clear */
+ sfp->id.ext.options &= ~cpu_to_be16(SFP_OPTIONS_LOS_INVERTED |
+ SFP_OPTIONS_LOS_NORMAL);
+}
+
static void sfp_fixup_ignore_tx_fault(struct sfp *sfp)
{
- sfp->tx_fault_ignore = true;
+ sfp->state_ignore_mask |= SFP_F_TX_FAULT;
+}
+
+static void sfp_fixup_nokia(struct sfp *sfp)
+{
+ sfp_fixup_long_startup(sfp);
+ sfp_fixup_ignore_los(sfp);
}
// For 10GBASE-T short-reach modules
@@ -446,12 +461,17 @@ static const struct sfp_quirk sfp_quirks[] = {
// Alcatel Lucent G-010S-A can operate at 2500base-X, but report 3.2GBd
// NRZ in their EEPROM
SFP_QUIRK("ALCATELLUCENT", "3FE46541AA", sfp_quirk_2500basex,
- sfp_fixup_long_startup),
+ sfp_fixup_nokia),
// Fiberstore SFP-10G-T doesn't identify as copper, and uses the
// Rollball protocol to talk to the PHY.
SFP_QUIRK_F("FS", "SFP-10G-T", sfp_fixup_fs_10gt),
+ // Fiberstore GPON-ONU-34-20BI can operate at 2500base-X, but report 1.2GBd
+ // NRZ in their EEPROM
+ SFP_QUIRK("FS", "GPON-ONU-34-20BI", sfp_quirk_2500basex,
+ sfp_fixup_ignore_tx_fault),
+
SFP_QUIRK_F("HALNy", "HL-GSFP", sfp_fixup_halny_gsfp),
// HG MXPD-483II-F 2.5G supports 2500Base-X, but incorrectly reports
@@ -463,6 +483,9 @@ static const struct sfp_quirk sfp_quirks[] = {
SFP_QUIRK("HUAWEI", "MA5671A", sfp_quirk_2500basex,
sfp_fixup_ignore_tx_fault),
+ // FS 2.5G Base-T
+ SFP_QUIRK_M("FS", "SFP-2.5G-T", sfp_quirk_oem_2_5g),
+
// Lantech 8330-262D-E can operate at 2500base-X, but incorrectly report
// 2500MBd NRZ in their EEPROM
SFP_QUIRK_M("Lantech", "8330-262D-E", sfp_quirk_2500basex),
@@ -784,7 +807,8 @@ static void sfp_soft_start_poll(struct sfp *sfp)
mutex_lock(&sfp->st_mutex);
// Poll the soft state for hardware pins we want to ignore
- sfp->state_soft_mask = ~sfp->state_hw_mask & mask;
+ sfp->state_soft_mask = ~sfp->state_hw_mask & ~sfp->state_ignore_mask &
+ mask;
if (sfp->state_soft_mask & (SFP_F_LOS | SFP_F_TX_FAULT) &&
!sfp->need_poll)
@@ -2309,7 +2333,7 @@ static int sfp_sm_mod_probe(struct sfp *sfp, bool report)
sfp->module_t_start_up = T_START_UP;
sfp->module_t_wait = T_WAIT;
- sfp->tx_fault_ignore = false;
+ sfp->state_ignore_mask = 0;
if (sfp->id.base.extended_cc == SFF8024_ECC_10GBASE_T_SFI ||
sfp->id.base.extended_cc == SFF8024_ECC_10GBASE_T_SR ||
@@ -2332,6 +2356,8 @@ static int sfp_sm_mod_probe(struct sfp *sfp, bool report)
if (sfp->quirk && sfp->quirk->fixup)
sfp->quirk->fixup(sfp);
+
+ sfp->state_hw_mask &= ~sfp->state_ignore_mask;
mutex_unlock(&sfp->st_mutex);
return 0;
@@ -2833,10 +2859,7 @@ static void sfp_check_state(struct sfp *sfp)
mutex_lock(&sfp->st_mutex);
state = sfp_get_state(sfp);
changed = state ^ sfp->state;
- if (sfp->tx_fault_ignore)
- changed &= SFP_F_PRESENT | SFP_F_LOS;
- else
- changed &= SFP_F_PRESENT | SFP_F_LOS | SFP_F_TX_FAULT;
+ changed &= SFP_F_PRESENT | SFP_F_LOS | SFP_F_TX_FAULT;
for (i = 0; i < GPIO_MAX; i++)
if (changed & BIT(i))
diff --git a/drivers/net/phy/smsc.c b/drivers/net/phy/smsc.c
index c88edb19d2e7..1c7306a1af13 100644
--- a/drivers/net/phy/smsc.c
+++ b/drivers/net/phy/smsc.c
@@ -507,10 +507,8 @@ static void smsc_get_strings(struct phy_device *phydev, u8 *data)
{
int i;
- for (i = 0; i < ARRAY_SIZE(smsc_hw_stats); i++) {
- strncpy(data + i * ETH_GSTRING_LEN,
- smsc_hw_stats[i].string, ETH_GSTRING_LEN);
- }
+ for (i = 0; i < ARRAY_SIZE(smsc_hw_stats); i++)
+ ethtool_sprintf(&data, "%s", smsc_hw_stats[i].string);
}
static u64 smsc_get_stat(struct phy_device *phydev, int i)
diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c
index ba8b6bd8233c..8e7238e97d0a 100644
--- a/drivers/net/ppp/pppoe.c
+++ b/drivers/net/ppp/pppoe.c
@@ -877,7 +877,7 @@ static int pppoe_sendmsg(struct socket *sock, struct msghdr *m,
skb->dev = dev;
- skb->priority = sk->sk_priority;
+ skb->priority = READ_ONCE(sk->sk_priority);
skb->protocol = cpu_to_be16(ETH_P_PPP_SES);
ph = skb_put(skb, total_len + sizeof(struct pppoe_hdr));
diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
index 59cde06aa7f6..5add4145d9fc 100644
--- a/drivers/net/usb/lan78xx.c
+++ b/drivers/net/usb/lan78xx.c
@@ -1758,7 +1758,7 @@ static void lan78xx_get_drvinfo(struct net_device *net,
{
struct lan78xx_net *dev = netdev_priv(net);
- strncpy(info->driver, DRIVER_NAME, sizeof(info->driver));
+ strscpy(info->driver, DRIVER_NAME, sizeof(info->driver));
usb_make_path(dev->udev, info->bus_info, sizeof(info->bus_info));
}
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
index afb20c0ed688..2c5c1e91ded6 100644
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -2543,7 +2543,7 @@ static int rx_bottom(struct r8152 *tp, int budget)
}
}
- if (list_empty(&tp->rx_done))
+ if (list_empty(&tp->rx_done) || work_done >= budget)
goto out1;
clear_bit(RX_EPROTO, &tp->flags);
@@ -2559,6 +2559,15 @@ static int rx_bottom(struct r8152 *tp, int budget)
struct urb *urb;
u8 *rx_data;
+ /* A bulk USB transfer may contain many packets, so the
+ * total number of packets may exceed the budget. Deal with
+ * all packets in the current bulk transfer, and stop handling
+ * the next bulk transfer until the next schedule, if the
+ * budget is exhausted.
+ */
+ if (work_done >= budget)
+ break;
+
list_del_init(cursor);
agg = list_entry(cursor, struct rx_agg, list);
@@ -2575,12 +2584,11 @@ static int rx_bottom(struct r8152 *tp, int budget)
while (urb->actual_length > len_used) {
struct net_device *netdev = tp->netdev;
struct net_device_stats *stats = &netdev->stats;
- unsigned int pkt_len, rx_frag_head_sz;
+ unsigned int pkt_len, rx_frag_head_sz, len;
struct sk_buff *skb;
+ bool use_frags;
- /* limit the skb numbers for rx_queue */
- if (unlikely(skb_queue_len(&tp->rx_queue) >= 1000))
- break;
+ WARN_ON_ONCE(skb_queue_len(&tp->rx_queue) >= 1000);
pkt_len = le32_to_cpu(rx_desc->opts1) & RX_LEN_MASK;
if (pkt_len < ETH_ZLEN)
@@ -2591,45 +2599,77 @@ static int rx_bottom(struct r8152 *tp, int budget)
break;
pkt_len -= ETH_FCS_LEN;
+ len = pkt_len;
rx_data += sizeof(struct rx_desc);
- if (!agg_free || tp->rx_copybreak > pkt_len)
- rx_frag_head_sz = pkt_len;
+ if (!agg_free || tp->rx_copybreak > len)
+ use_frags = false;
else
- rx_frag_head_sz = tp->rx_copybreak;
+ use_frags = true;
+
+ if (use_frags) {
+ /* If the budget is exhausted, the packet
+ * would be queued in the driver. That is,
+ * napi_gro_frags() wouldn't be called, so
+ * we couldn't use napi_get_frags().
+ */
+ if (work_done >= budget) {
+ rx_frag_head_sz = tp->rx_copybreak;
+ skb = napi_alloc_skb(napi,
+ rx_frag_head_sz);
+ } else {
+ rx_frag_head_sz = 0;
+ skb = napi_get_frags(napi);
+ }
+ } else {
+ rx_frag_head_sz = 0;
+ skb = napi_alloc_skb(napi, len);
+ }
- skb = napi_alloc_skb(napi, rx_frag_head_sz);
if (!skb) {
stats->rx_dropped++;
goto find_next_rx;
}
skb->ip_summed = r8152_rx_csum(tp, rx_desc);
- memcpy(skb->data, rx_data, rx_frag_head_sz);
- skb_put(skb, rx_frag_head_sz);
- pkt_len -= rx_frag_head_sz;
- rx_data += rx_frag_head_sz;
- if (pkt_len) {
+ rtl_rx_vlan_tag(rx_desc, skb);
+
+ if (use_frags) {
+ if (rx_frag_head_sz) {
+ memcpy(skb->data, rx_data,
+ rx_frag_head_sz);
+ skb_put(skb, rx_frag_head_sz);
+ len -= rx_frag_head_sz;
+ rx_data += rx_frag_head_sz;
+ skb->protocol = eth_type_trans(skb,
+ netdev);
+ }
+
skb_add_rx_frag(skb, 0, agg->page,
agg_offset(agg, rx_data),
- pkt_len,
- SKB_DATA_ALIGN(pkt_len));
+ len, SKB_DATA_ALIGN(len));
get_page(agg->page);
+ } else {
+ memcpy(skb->data, rx_data, len);
+ skb_put(skb, len);
+ skb->protocol = eth_type_trans(skb, netdev);
}
- skb->protocol = eth_type_trans(skb, netdev);
- rtl_rx_vlan_tag(rx_desc, skb);
if (work_done < budget) {
+ if (use_frags)
+ napi_gro_frags(napi);
+ else
+ napi_gro_receive(napi, skb);
+
work_done++;
stats->rx_packets++;
- stats->rx_bytes += skb->len;
- napi_gro_receive(napi, skb);
+ stats->rx_bytes += pkt_len;
} else {
__skb_queue_tail(&tp->rx_queue, skb);
}
find_next_rx:
- rx_data = rx_agg_align(rx_data + pkt_len + ETH_FCS_LEN);
+ rx_data = rx_agg_align(rx_data + len + ETH_FCS_LEN);
rx_desc = (struct rx_desc *)rx_data;
len_used = agg_offset(agg, rx_data);
len_used += sizeof(struct rx_desc);
@@ -2658,9 +2698,10 @@ submit:
}
}
+ /* Splice the remaining list back to rx_done for the next schedule */
if (!list_empty(&rx_queue)) {
spin_lock_irqsave(&tp->rx_lock, flags);
- list_splice_tail(&rx_queue, &tp->rx_done);
+ list_splice(&rx_queue, &tp->rx_done);
spin_unlock_irqrestore(&tp->rx_lock, flags);
}
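
The change above brings rx_bottom() back in line with the usual NAPI contract: never account more than the budget in one poll and requeue whatever is left. For reference, the generic shape of that contract, with foo_process_rx() and foo_enable_rx_irq() as purely illustrative placeholders:

#include <linux/netdevice.h>

static int foo_process_rx(struct napi_struct *napi, int budget);	/* returns <= budget */
static void foo_enable_rx_irq(struct napi_struct *napi);

static int foo_napi_poll(struct napi_struct *napi, int budget)
{
	int work_done = foo_process_rx(napi, budget);

	if (work_done < budget && napi_complete_done(napi, work_done))
		foo_enable_rx_irq(napi);

	return work_done;
}
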
diff --git a/drivers/net/usb/sr9800.c b/drivers/net/usb/sr9800.c
index f5e19f3ef6cd..143bd4ab160d 100644
--- a/drivers/net/usb/sr9800.c
+++ b/drivers/net/usb/sr9800.c
@@ -474,8 +474,8 @@ static void sr_get_drvinfo(struct net_device *net,
{
/* Inherit standard device info */
usbnet_get_drvinfo(net, info);
- strncpy(info->driver, DRIVER_NAME, sizeof(info->driver));
- strncpy(info->version, DRIVER_VERSION, sizeof(info->version));
+ strscpy(info->driver, DRIVER_NAME, sizeof(info->driver));
+ strscpy(info->version, DRIVER_VERSION, sizeof(info->version));
}
static u32 sr_get_link(struct net_device *net)
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 0deefd1573cf..9980517ed8b0 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -737,10 +737,11 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
if (skb_shared(skb) || skb_head_is_locked(skb) ||
skb_shinfo(skb)->nr_frags ||
skb_headroom(skb) < XDP_PACKET_HEADROOM) {
- u32 size, len, max_head_size, off;
+ u32 size, len, max_head_size, off, truesize, page_offset;
struct sk_buff *nskb;
struct page *page;
int i, head_off;
+ void *va;
/* We need a private copy of the skb and data buffers since
* the ebpf program can modify it. We segment the original skb
@@ -753,14 +754,17 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
if (skb->len > PAGE_SIZE * MAX_SKB_FRAGS + max_head_size)
goto drop;
+ size = min_t(u32, skb->len, max_head_size);
+ truesize = SKB_HEAD_ALIGN(size) + VETH_XDP_HEADROOM;
+
/* Allocate skb head */
- page = page_pool_dev_alloc_pages(rq->page_pool);
- if (!page)
+ va = page_pool_dev_alloc_va(rq->page_pool, &truesize);
+ if (!va)
goto drop;
- nskb = napi_build_skb(page_address(page), PAGE_SIZE);
+ nskb = napi_build_skb(va, truesize);
if (!nskb) {
- page_pool_put_full_page(rq->page_pool, page, true);
+ page_pool_free_va(rq->page_pool, va, true);
goto drop;
}
@@ -768,7 +772,6 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
skb_copy_header(nskb, skb);
skb_mark_for_recycle(nskb);
- size = min_t(u32, skb->len, max_head_size);
if (skb_copy_bits(skb, 0, nskb->data, size)) {
consume_skb(nskb);
goto drop;
@@ -783,14 +786,18 @@ static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
len = skb->len - off;
for (i = 0; i < MAX_SKB_FRAGS && off < skb->len; i++) {
- page = page_pool_dev_alloc_pages(rq->page_pool);
+ size = min_t(u32, len, PAGE_SIZE);
+ truesize = size;
+
+ page = page_pool_dev_alloc(rq->page_pool, &page_offset,
+ &truesize);
if (!page) {
consume_skb(nskb);
goto drop;
}
- size = min_t(u32, len, PAGE_SIZE);
- skb_add_rx_frag(nskb, i, page, 0, size, PAGE_SIZE);
+ skb_add_rx_frag(nskb, i, page, page_offset, size,
+ truesize);
if (skb_copy_bits(skb, off, page_address(page),
size)) {
consume_skb(nskb);
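
The veth conversion above moves from whole-page allocations to the page_pool fragment helpers, which hand back the actual truesize that was reserved. A minimal sketch of that head-allocation pattern, assuming an already-created pool, a caller-chosen headroom, and recycling enabled:

#include <linux/skbuff.h>
#include <net/page_pool/helpers.h>

static struct sk_buff *frag_head_alloc(struct page_pool *pool, u32 size,
				       u32 headroom)
{
	u32 truesize = SKB_HEAD_ALIGN(size) + headroom;
	struct sk_buff *skb;
	void *va;

	va = page_pool_dev_alloc_va(pool, &truesize);
	if (!va)
		return NULL;

	skb = napi_build_skb(va, truesize);
	if (!skb) {
		page_pool_free_va(pool, va, true);
		return NULL;
	}
	skb_mark_for_recycle(skb);
	skb_reserve(skb, headroom);
	return skb;
}
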
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d67f742fbd4c..d16f592c2061 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -81,24 +81,24 @@ struct virtnet_stat_desc {
struct virtnet_sq_stats {
struct u64_stats_sync syncp;
- u64 packets;
- u64 bytes;
- u64 xdp_tx;
- u64 xdp_tx_drops;
- u64 kicks;
- u64 tx_timeouts;
+ u64_stats_t packets;
+ u64_stats_t bytes;
+ u64_stats_t xdp_tx;
+ u64_stats_t xdp_tx_drops;
+ u64_stats_t kicks;
+ u64_stats_t tx_timeouts;
};
struct virtnet_rq_stats {
struct u64_stats_sync syncp;
- u64 packets;
- u64 bytes;
- u64 drops;
- u64 xdp_packets;
- u64 xdp_tx;
- u64 xdp_redirects;
- u64 xdp_drops;
- u64 kicks;
+ u64_stats_t packets;
+ u64_stats_t bytes;
+ u64_stats_t drops;
+ u64_stats_t xdp_packets;
+ u64_stats_t xdp_tx;
+ u64_stats_t xdp_redirects;
+ u64_stats_t xdp_drops;
+ u64_stats_t kicks;
};
#define VIRTNET_SQ_STAT(m) offsetof(struct virtnet_sq_stats, m)
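
The hunks below convert the writer side to the u64_stats_t accessors (u64_stats_add(), u64_stats_inc(), u64_stats_set()) under the existing syncp; a consumer takes a retry-loop snapshot with the matching read helpers. A sketch of that reader, assuming the virtnet_rq_stats layout above:

#include <linux/u64_stats_sync.h>

static u64 virtnet_read_rx_packets(const struct virtnet_rq_stats *stats)
{
	unsigned int start;
	u64 packets;

	do {
		start = u64_stats_fetch_begin(&stats->syncp);
		packets = u64_stats_read(&stats->packets);
	} while (u64_stats_fetch_retry(&stats->syncp, start));

	return packets;
}
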
@@ -775,8 +775,8 @@ static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)
return;
u64_stats_update_begin(&sq->stats.syncp);
- sq->stats.bytes += bytes;
- sq->stats.packets += packets;
+ u64_stats_add(&sq->stats.bytes, bytes);
+ u64_stats_add(&sq->stats.packets, packets);
u64_stats_update_end(&sq->stats.syncp);
}
@@ -975,11 +975,11 @@ static int virtnet_xdp_xmit(struct net_device *dev,
}
out:
u64_stats_update_begin(&sq->stats.syncp);
- sq->stats.bytes += bytes;
- sq->stats.packets += packets;
- sq->stats.xdp_tx += n;
- sq->stats.xdp_tx_drops += n - nxmit;
- sq->stats.kicks += kicks;
+ u64_stats_add(&sq->stats.bytes, bytes);
+ u64_stats_add(&sq->stats.packets, packets);
+ u64_stats_add(&sq->stats.xdp_tx, n);
+ u64_stats_add(&sq->stats.xdp_tx_drops, n - nxmit);
+ u64_stats_add(&sq->stats.kicks, kicks);
u64_stats_update_end(&sq->stats.syncp);
virtnet_xdp_put_sq(vi, sq);
@@ -1011,14 +1011,14 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
u32 act;
act = bpf_prog_run_xdp(xdp_prog, xdp);
- stats->xdp_packets++;
+ u64_stats_inc(&stats->xdp_packets);
switch (act) {
case XDP_PASS:
return act;
case XDP_TX:
- stats->xdp_tx++;
+ u64_stats_inc(&stats->xdp_tx);
xdpf = xdp_convert_buff_to_frame(xdp);
if (unlikely(!xdpf)) {
netdev_dbg(dev, "convert buff to frame failed for xdp\n");
@@ -1036,7 +1036,7 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
return act;
case XDP_REDIRECT:
- stats->xdp_redirects++;
+ u64_stats_inc(&stats->xdp_redirects);
err = xdp_do_redirect(dev, xdp, xdp_prog);
if (err)
return XDP_DROP;
@@ -1232,9 +1232,9 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev,
return skb;
err_xdp:
- stats->xdp_drops++;
+ u64_stats_inc(&stats->xdp_drops);
err:
- stats->drops++;
+ u64_stats_inc(&stats->drops);
put_page(page);
xdp_xmit:
return NULL;
@@ -1253,12 +1253,12 @@ static struct sk_buff *receive_small(struct net_device *dev,
struct sk_buff *skb;
len -= vi->hdr_len;
- stats->bytes += len;
+ u64_stats_add(&stats->bytes, len);
if (unlikely(len > GOOD_PACKET_LEN)) {
pr_debug("%s: rx error: len %u exceeds max size %d\n",
dev->name, len, GOOD_PACKET_LEN);
- dev->stats.rx_length_errors++;
+ DEV_STATS_INC(dev, rx_length_errors);
goto err;
}
@@ -1282,7 +1282,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
return skb;
err:
- stats->drops++;
+ u64_stats_inc(&stats->drops);
put_page(page);
return NULL;
}
@@ -1298,14 +1298,14 @@ static struct sk_buff *receive_big(struct net_device *dev,
struct sk_buff *skb =
page_to_skb(vi, rq, page, 0, len, PAGE_SIZE, 0);
- stats->bytes += len - vi->hdr_len;
+ u64_stats_add(&stats->bytes, len - vi->hdr_len);
if (unlikely(!skb))
goto err;
return skb;
err:
- stats->drops++;
+ u64_stats_inc(&stats->drops);
give_pages(rq, page);
return NULL;
}
@@ -1323,10 +1323,10 @@ static void mergeable_buf_free(struct receive_queue *rq, int num_buf,
if (unlikely(!buf)) {
pr_debug("%s: rx error: %d buffers missing\n",
dev->name, num_buf);
- dev->stats.rx_length_errors++;
+ DEV_STATS_INC(dev, rx_length_errors);
break;
}
- stats->bytes += len;
+ u64_stats_add(&stats->bytes, len);
page = virt_to_head_page(buf);
put_page(page);
}
@@ -1432,11 +1432,11 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
pr_debug("%s: rx error: %d buffers out of %d missing\n",
dev->name, *num_buf,
virtio16_to_cpu(vi->vdev, hdr->num_buffers));
- dev->stats.rx_length_errors++;
+ DEV_STATS_INC(dev, rx_length_errors);
goto err;
}
- stats->bytes += len;
+ u64_stats_add(&stats->bytes, len);
page = virt_to_head_page(buf);
offset = buf - page_address(page);
@@ -1451,7 +1451,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
put_page(page);
pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
dev->name, len, (unsigned long)(truesize - room));
- dev->stats.rx_length_errors++;
+ DEV_STATS_INC(dev, rx_length_errors);
goto err;
}
@@ -1600,8 +1600,8 @@ err_xdp:
put_page(page);
mergeable_buf_free(rq, num_buf, dev, stats);
- stats->xdp_drops++;
- stats->drops++;
+ u64_stats_inc(&stats->xdp_drops);
+ u64_stats_inc(&stats->drops);
return NULL;
}
@@ -1625,12 +1625,12 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
head_skb = NULL;
- stats->bytes += len - vi->hdr_len;
+ u64_stats_add(&stats->bytes, len - vi->hdr_len);
if (unlikely(len > truesize - room)) {
pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
dev->name, len, (unsigned long)(truesize - room));
- dev->stats.rx_length_errors++;
+ DEV_STATS_INC(dev, rx_length_errors);
goto err_skb;
}
@@ -1662,11 +1662,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
dev->name, num_buf,
virtio16_to_cpu(vi->vdev,
hdr->num_buffers));
- dev->stats.rx_length_errors++;
+ DEV_STATS_INC(dev, rx_length_errors);
goto err_buf;
}
- stats->bytes += len;
+ u64_stats_add(&stats->bytes, len);
page = virt_to_head_page(buf);
truesize = mergeable_ctx_to_truesize(ctx);
@@ -1676,7 +1676,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
if (unlikely(len > truesize - room)) {
pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
dev->name, len, (unsigned long)(truesize - room));
- dev->stats.rx_length_errors++;
+ DEV_STATS_INC(dev, rx_length_errors);
goto err_skb;
}
@@ -1718,7 +1718,7 @@ err_skb:
mergeable_buf_free(rq, num_buf, dev, stats);
err_buf:
- stats->drops++;
+ u64_stats_inc(&stats->drops);
dev_kfree_skb(head_skb);
return NULL;
}
@@ -1763,7 +1763,7 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
if (unlikely(len < vi->hdr_len + ETH_HLEN)) {
pr_debug("%s: short packet %i\n", dev->name, len);
- dev->stats.rx_length_errors++;
+ DEV_STATS_INC(dev, rx_length_errors);
virtnet_rq_free_unused_buf(rq->vq, buf);
return;
}
@@ -1803,7 +1803,7 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
return;
frame_err:
- dev->stats.rx_frame_errors++;
+ DEV_STATS_INC(dev, rx_frame_errors);
dev_kfree_skb(skb);
}
@@ -1985,7 +1985,7 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
unsigned long flags;
flags = u64_stats_update_begin_irqsave(&rq->stats.syncp);
- rq->stats.kicks++;
+ u64_stats_inc(&rq->stats.kicks);
u64_stats_update_end_irqrestore(&rq->stats.syncp, flags);
}
@@ -2065,22 +2065,23 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
struct virtnet_info *vi = rq->vq->vdev->priv;
struct virtnet_rq_stats stats = {};
unsigned int len;
+ int packets = 0;
void *buf;
int i;
if (!vi->big_packets || vi->mergeable_rx_bufs) {
void *ctx;
- while (stats.packets < budget &&
+ while (packets < budget &&
(buf = virtnet_rq_get_buf(rq, &len, &ctx))) {
receive_buf(vi, rq, buf, len, ctx, xdp_xmit, &stats);
- stats.packets++;
+ packets++;
}
} else {
- while (stats.packets < budget &&
+ while (packets < budget &&
(buf = virtnet_rq_get_buf(rq, &len, NULL)) != NULL) {
receive_buf(vi, rq, buf, len, NULL, xdp_xmit, &stats);
- stats.packets++;
+ packets++;
}
}
@@ -2093,17 +2094,19 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
}
}
+ u64_stats_set(&stats.packets, packets);
u64_stats_update_begin(&rq->stats.syncp);
for (i = 0; i < VIRTNET_RQ_STATS_LEN; i++) {
size_t offset = virtnet_rq_stats_desc[i].offset;
- u64 *item;
+ u64_stats_t *item, *src;
- item = (u64 *)((u8 *)&rq->stats + offset);
- *item += *(u64 *)((u8 *)&stats + offset);
+ item = (u64_stats_t *)((u8 *)&rq->stats + offset);
+ src = (u64_stats_t *)((u8 *)&stats + offset);
+ u64_stats_add(item, u64_stats_read(src));
}
u64_stats_update_end(&rq->stats.syncp);
- return stats.packets;
+ return packets;
}
static void virtnet_poll_cleantx(struct receive_queue *rq)
@@ -2158,7 +2161,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
sq = virtnet_xdp_get_sq(vi);
if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq)) {
u64_stats_update_begin(&sq->stats.syncp);
- sq->stats.kicks++;
+ u64_stats_inc(&sq->stats.kicks);
u64_stats_update_end(&sq->stats.syncp);
}
virtnet_xdp_put_sq(vi, sq);
@@ -2349,12 +2352,12 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
/* This should not happen! */
if (unlikely(err)) {
- dev->stats.tx_fifo_errors++;
+ DEV_STATS_INC(dev, tx_fifo_errors);
if (net_ratelimit())
dev_warn(&dev->dev,
"Unexpected TXQ (%d) queue failure: %d\n",
qnum, err);
- dev->stats.tx_dropped++;
+ DEV_STATS_INC(dev, tx_dropped);
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
@@ -2370,7 +2373,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
if (kick || netif_xmit_stopped(txq)) {
if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq)) {
u64_stats_update_begin(&sq->stats.syncp);
- sq->stats.kicks++;
+ u64_stats_inc(&sq->stats.kicks);
u64_stats_update_end(&sq->stats.syncp);
}
}
@@ -2553,16 +2556,16 @@ static void virtnet_stats(struct net_device *dev,
do {
start = u64_stats_fetch_begin(&sq->stats.syncp);
- tpackets = sq->stats.packets;
- tbytes = sq->stats.bytes;
- terrors = sq->stats.tx_timeouts;
+ tpackets = u64_stats_read(&sq->stats.packets);
+ tbytes = u64_stats_read(&sq->stats.bytes);
+ terrors = u64_stats_read(&sq->stats.tx_timeouts);
} while (u64_stats_fetch_retry(&sq->stats.syncp, start));
do {
start = u64_stats_fetch_begin(&rq->stats.syncp);
- rpackets = rq->stats.packets;
- rbytes = rq->stats.bytes;
- rdrops = rq->stats.drops;
+ rpackets = u64_stats_read(&rq->stats.packets);
+ rbytes = u64_stats_read(&rq->stats.bytes);
+ rdrops = u64_stats_read(&rq->stats.drops);
} while (u64_stats_fetch_retry(&rq->stats.syncp, start));
tot->rx_packets += rpackets;
@@ -2573,10 +2576,10 @@ static void virtnet_stats(struct net_device *dev,
tot->tx_errors += terrors;
}
- tot->tx_dropped = dev->stats.tx_dropped;
- tot->tx_fifo_errors = dev->stats.tx_fifo_errors;
- tot->rx_length_errors = dev->stats.rx_length_errors;
- tot->rx_frame_errors = dev->stats.rx_frame_errors;
+ tot->tx_dropped = DEV_STATS_READ(dev, tx_dropped);
+ tot->tx_fifo_errors = DEV_STATS_READ(dev, tx_fifo_errors);
+ tot->rx_length_errors = DEV_STATS_READ(dev, rx_length_errors);
+ tot->rx_frame_errors = DEV_STATS_READ(dev, rx_frame_errors);
}
static void virtnet_ack_link_announce(struct virtnet_info *vi)
@@ -2855,6 +2858,9 @@ static void virtnet_get_ringparam(struct net_device *dev,
ring->tx_pending = virtqueue_get_vring_size(vi->sq[0].vq);
}
+static int virtnet_send_ctrl_coal_vq_cmd(struct virtnet_info *vi,
+ u16 vqn, u32 max_usecs, u32 max_packets);
+
static int virtnet_set_ringparam(struct net_device *dev,
struct ethtool_ringparam *ring,
struct kernel_ethtool_ringparam *kernel_ring,
@@ -2890,12 +2896,36 @@ static int virtnet_set_ringparam(struct net_device *dev,
err = virtnet_tx_resize(vi, sq, ring->tx_pending);
if (err)
return err;
+
+ /* Upon disabling and re-enabling a transmit virtqueue, the device must
+ * set the coalescing parameters of the virtqueue to those configured
+ * through the VIRTIO_NET_CTRL_NOTF_COAL_TX_SET command, or, if the driver
+ * did not set any TX coalescing parameters, to 0.
+ */
+ err = virtnet_send_ctrl_coal_vq_cmd(vi, txq2vq(i),
+ vi->intr_coal_tx.max_usecs,
+ vi->intr_coal_tx.max_packets);
+ if (err)
+ return err;
+
+ vi->sq[i].intr_coal.max_usecs = vi->intr_coal_tx.max_usecs;
+ vi->sq[i].intr_coal.max_packets = vi->intr_coal_tx.max_packets;
}
if (ring->rx_pending != rx_pending) {
err = virtnet_rx_resize(vi, rq, ring->rx_pending);
if (err)
return err;
+
+ /* The reason is the same as the transmit virtqueue reset */
+ err = virtnet_send_ctrl_coal_vq_cmd(vi, rxq2vq(i),
+ vi->intr_coal_rx.max_usecs,
+ vi->intr_coal_rx.max_packets);
+ if (err)
+ return err;
+
+ vi->rq[i].intr_coal.max_usecs = vi->intr_coal_rx.max_usecs;
+ vi->rq[i].intr_coal.max_packets = vi->intr_coal_rx.max_packets;
}
}
@@ -3164,17 +3194,19 @@ static void virtnet_get_ethtool_stats(struct net_device *dev,
struct virtnet_info *vi = netdev_priv(dev);
unsigned int idx = 0, start, i, j;
const u8 *stats_base;
+ const u64_stats_t *p;
size_t offset;
for (i = 0; i < vi->curr_queue_pairs; i++) {
struct receive_queue *rq = &vi->rq[i];
- stats_base = (u8 *)&rq->stats;
+ stats_base = (const u8 *)&rq->stats;
do {
start = u64_stats_fetch_begin(&rq->stats.syncp);
for (j = 0; j < VIRTNET_RQ_STATS_LEN; j++) {
offset = virtnet_rq_stats_desc[j].offset;
- data[idx + j] = *(u64 *)(stats_base + offset);
+ p = (const u64_stats_t *)(stats_base + offset);
+ data[idx + j] = u64_stats_read(p);
}
} while (u64_stats_fetch_retry(&rq->stats.syncp, start));
idx += VIRTNET_RQ_STATS_LEN;
@@ -3183,12 +3215,13 @@ static void virtnet_get_ethtool_stats(struct net_device *dev,
for (i = 0; i < vi->curr_queue_pairs; i++) {
struct send_queue *sq = &vi->sq[i];
- stats_base = (u8 *)&sq->stats;
+ stats_base = (const u8 *)&sq->stats;
do {
start = u64_stats_fetch_begin(&sq->stats.syncp);
for (j = 0; j < VIRTNET_SQ_STATS_LEN; j++) {
offset = virtnet_sq_stats_desc[j].offset;
- data[idx + j] = *(u64 *)(stats_base + offset);
+ p = (const u64_stats_t *)(stats_base + offset);
+ data[idx + j] = u64_stats_read(p);
}
} while (u64_stats_fetch_retry(&sq->stats.syncp, start));
idx += VIRTNET_SQ_STATS_LEN;
@@ -3233,6 +3266,7 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
struct ethtool_coalesce *ec)
{
struct scatterlist sgs_tx, sgs_rx;
+ int i;
vi->ctrl->coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs);
vi->ctrl->coal_tx.tx_max_packets = cpu_to_le32(ec->tx_max_coalesced_frames);
@@ -3246,6 +3280,10 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
/* Save parameters */
vi->intr_coal_tx.max_usecs = ec->tx_coalesce_usecs;
vi->intr_coal_tx.max_packets = ec->tx_max_coalesced_frames;
+ for (i = 0; i < vi->max_queue_pairs; i++) {
+ vi->sq[i].intr_coal.max_usecs = ec->tx_coalesce_usecs;
+ vi->sq[i].intr_coal.max_packets = ec->tx_max_coalesced_frames;
+ }
vi->ctrl->coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs);
vi->ctrl->coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames);
@@ -3259,6 +3297,10 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
/* Save parameters */
vi->intr_coal_rx.max_usecs = ec->rx_coalesce_usecs;
vi->intr_coal_rx.max_packets = ec->rx_max_coalesced_frames;
+ for (i = 0; i < vi->max_queue_pairs; i++) {
+ vi->rq[i].intr_coal.max_usecs = ec->rx_coalesce_usecs;
+ vi->rq[i].intr_coal.max_packets = ec->rx_max_coalesced_frames;
+ }
return 0;
}
@@ -3287,27 +3329,23 @@ static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
{
int err;
- if (ec->rx_coalesce_usecs || ec->rx_max_coalesced_frames) {
- err = virtnet_send_ctrl_coal_vq_cmd(vi, rxq2vq(queue),
- ec->rx_coalesce_usecs,
- ec->rx_max_coalesced_frames);
- if (err)
- return err;
- /* Save parameters */
- vi->rq[queue].intr_coal.max_usecs = ec->rx_coalesce_usecs;
- vi->rq[queue].intr_coal.max_packets = ec->rx_max_coalesced_frames;
- }
+ err = virtnet_send_ctrl_coal_vq_cmd(vi, rxq2vq(queue),
+ ec->rx_coalesce_usecs,
+ ec->rx_max_coalesced_frames);
+ if (err)
+ return err;
- if (ec->tx_coalesce_usecs || ec->tx_max_coalesced_frames) {
- err = virtnet_send_ctrl_coal_vq_cmd(vi, txq2vq(queue),
- ec->tx_coalesce_usecs,
- ec->tx_max_coalesced_frames);
- if (err)
- return err;
- /* Save parameters */
- vi->sq[queue].intr_coal.max_usecs = ec->tx_coalesce_usecs;
- vi->sq[queue].intr_coal.max_packets = ec->tx_max_coalesced_frames;
- }
+ vi->rq[queue].intr_coal.max_usecs = ec->rx_coalesce_usecs;
+ vi->rq[queue].intr_coal.max_packets = ec->rx_max_coalesced_frames;
+
+ err = virtnet_send_ctrl_coal_vq_cmd(vi, txq2vq(queue),
+ ec->tx_coalesce_usecs,
+ ec->tx_max_coalesced_frames);
+ if (err)
+ return err;
+
+ vi->sq[queue].intr_coal.max_usecs = ec->tx_coalesce_usecs;
+ vi->sq[queue].intr_coal.max_packets = ec->tx_max_coalesced_frames;
return 0;
}
@@ -3315,7 +3353,7 @@ static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
static int virtnet_coal_params_supported(struct ethtool_coalesce *ec)
{
/* usecs coalescing is supported only if VIRTIO_NET_F_NOTF_COAL
- * feature is negotiated.
+ * or VIRTIO_NET_F_VQ_NOTF_COAL feature is negotiated.
*/
if (ec->rx_coalesce_usecs || ec->tx_coalesce_usecs)
return -EOPNOTSUPP;
@@ -3453,7 +3491,7 @@ static int virtnet_get_per_queue_coalesce(struct net_device *dev,
} else {
ec->rx_max_coalesced_frames = 1;
- if (vi->sq[0].napi.weight)
+ if (vi->sq[queue].napi.weight)
ec->tx_max_coalesced_frames = 1;
}
@@ -3866,7 +3904,7 @@ static void virtnet_tx_timeout(struct net_device *dev, unsigned int txqueue)
struct netdev_queue *txq = netdev_get_tx_queue(dev, txqueue);
u64_stats_update_begin(&sq->stats.syncp);
- sq->stats.tx_timeouts++;
+ u64_stats_inc(&sq->stats.tx_timeouts);
u64_stats_update_end(&sq->stats.syncp);
netdev_err(dev, "TX timeout on queue: %u, sq: %s, vq: 0x%x, name: %s, %u usecs ago\n",
@@ -4442,13 +4480,6 @@ static int virtnet_probe(struct virtio_device *vdev)
dev->xdp_features |= NETDEV_XDP_ACT_RX_SG;
}
- if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) {
- vi->intr_coal_rx.max_usecs = 0;
- vi->intr_coal_tx.max_usecs = 0;
- vi->intr_coal_tx.max_packets = 0;
- vi->intr_coal_rx.max_packets = 0;
- }
-
if (virtio_has_feature(vdev, VIRTIO_NET_F_HASH_REPORT))
vi->has_rss_hash_report = true;
@@ -4523,6 +4554,27 @@ static int virtnet_probe(struct virtio_device *vdev)
if (err)
goto free;
+ if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) {
+ vi->intr_coal_rx.max_usecs = 0;
+ vi->intr_coal_tx.max_usecs = 0;
+ vi->intr_coal_rx.max_packets = 0;
+
+ /* Keep the default values of the coalescing parameters
+ * aligned with the default napi_tx state.
+ */
+ if (vi->sq[0].napi.weight)
+ vi->intr_coal_tx.max_packets = 1;
+ else
+ vi->intr_coal_tx.max_packets = 0;
+ }
+
+ if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_VQ_NOTF_COAL)) {
+ /* The reason is the same as VIRTIO_NET_F_NOTF_COAL. */
+ for (i = 0; i < vi->max_queue_pairs; i++)
+ if (vi->sq[i].napi.weight)
+ vi->sq[i].intr_coal.max_packets = 1;
+ }
+
#ifdef CONFIG_SYSFS
if (vi->mergeable_rx_bufs)
dev->sysfs_rx_queue_group = &virtio_net_mrg_rx_group;
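
The virtio_net hunks above convert the per-queue counters from plain u64 fields to the u64_stats_t API (and the netdev error counters to DEV_STATS_INC()/DEV_STATS_READ()), so readers on 32-bit builds get tear-free values. A minimal sketch of the resulting write/read pattern, using made-up struct and field names rather than the driver's own:

#include <linux/u64_stats_sync.h>

struct demo_queue_stats {
        struct u64_stats_sync syncp;    /* u64_stats_init() at probe time */
        u64_stats_t packets;
        u64_stats_t bytes;
};

/* Writer: bracket updates so 32-bit readers see a consistent snapshot. */
static void demo_count_packet(struct demo_queue_stats *s, unsigned int len)
{
        u64_stats_update_begin(&s->syncp);
        u64_stats_inc(&s->packets);
        u64_stats_add(&s->bytes, len);
        u64_stats_update_end(&s->syncp);
}

/* Reader: retry until both values come from the same update window. */
static void demo_read_stats(struct demo_queue_stats *s, u64 *packets, u64 *bytes)
{
        unsigned int start;

        do {
                start = u64_stats_fetch_begin(&s->syncp);
                *packets = u64_stats_read(&s->packets);
                *bytes = u64_stats_read(&s->bytes);
        } while (u64_stats_fetch_retry(&s->syncp, start));
}
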
diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
index 5b5597073b00..412c3c0b6990 100644
--- a/drivers/net/vxlan/vxlan_core.c
+++ b/drivers/net/vxlan/vxlan_core.c
@@ -2215,114 +2215,6 @@ static int vxlan_build_skb(struct sk_buff *skb, struct dst_entry *dst,
return 0;
}
-static struct rtable *vxlan_get_route(struct vxlan_dev *vxlan, struct net_device *dev,
- struct vxlan_sock *sock4,
- struct sk_buff *skb, int oif, u8 tos,
- __be32 daddr, __be32 *saddr, __be16 dport, __be16 sport,
- __u8 flow_flags, struct dst_cache *dst_cache,
- const struct ip_tunnel_info *info)
-{
- bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
- struct rtable *rt = NULL;
- struct flowi4 fl4;
-
- if (!sock4)
- return ERR_PTR(-EIO);
-
- if (tos && !info)
- use_cache = false;
- if (use_cache) {
- rt = dst_cache_get_ip4(dst_cache, saddr);
- if (rt)
- return rt;
- }
-
- memset(&fl4, 0, sizeof(fl4));
- fl4.flowi4_oif = oif;
- fl4.flowi4_tos = RT_TOS(tos);
- fl4.flowi4_mark = skb->mark;
- fl4.flowi4_proto = IPPROTO_UDP;
- fl4.daddr = daddr;
- fl4.saddr = *saddr;
- fl4.fl4_dport = dport;
- fl4.fl4_sport = sport;
- fl4.flowi4_flags = flow_flags;
-
- rt = ip_route_output_key(vxlan->net, &fl4);
- if (!IS_ERR(rt)) {
- if (rt->dst.dev == dev) {
- netdev_dbg(dev, "circular route to %pI4\n", &daddr);
- ip_rt_put(rt);
- return ERR_PTR(-ELOOP);
- }
-
- *saddr = fl4.saddr;
- if (use_cache)
- dst_cache_set_ip4(dst_cache, &rt->dst, fl4.saddr);
- } else {
- netdev_dbg(dev, "no route to %pI4\n", &daddr);
- return ERR_PTR(-ENETUNREACH);
- }
- return rt;
-}
-
-#if IS_ENABLED(CONFIG_IPV6)
-static struct dst_entry *vxlan6_get_route(struct vxlan_dev *vxlan,
- struct net_device *dev,
- struct vxlan_sock *sock6,
- struct sk_buff *skb, int oif, u8 tos,
- __be32 label,
- const struct in6_addr *daddr,
- struct in6_addr *saddr,
- __be16 dport, __be16 sport,
- struct dst_cache *dst_cache,
- const struct ip_tunnel_info *info)
-{
- bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
- struct dst_entry *ndst;
- struct flowi6 fl6;
-
- if (!sock6)
- return ERR_PTR(-EIO);
-
- if (tos && !info)
- use_cache = false;
- if (use_cache) {
- ndst = dst_cache_get_ip6(dst_cache, saddr);
- if (ndst)
- return ndst;
- }
-
- memset(&fl6, 0, sizeof(fl6));
- fl6.flowi6_oif = oif;
- fl6.daddr = *daddr;
- fl6.saddr = *saddr;
- fl6.flowlabel = ip6_make_flowinfo(tos, label);
- fl6.flowi6_mark = skb->mark;
- fl6.flowi6_proto = IPPROTO_UDP;
- fl6.fl6_dport = dport;
- fl6.fl6_sport = sport;
-
- ndst = ipv6_stub->ipv6_dst_lookup_flow(vxlan->net, sock6->sock->sk,
- &fl6, NULL);
- if (IS_ERR(ndst)) {
- netdev_dbg(dev, "no route to %pI6\n", daddr);
- return ERR_PTR(-ENETUNREACH);
- }
-
- if (unlikely(ndst->dev == dev)) {
- netdev_dbg(dev, "circular route to %pI6\n", daddr);
- dst_release(ndst);
- return ERR_PTR(-ELOOP);
- }
-
- *saddr = fl6.saddr;
- if (use_cache)
- dst_cache_set_ip6(dst_cache, ndst, saddr);
- return ndst;
-}
-#endif
-
/* Bypass encapsulation if the destination is local */
static void vxlan_encap_bypass(struct sk_buff *skb, struct vxlan_dev *src_vxlan,
struct vxlan_dev *dst_vxlan, __be32 vni,
@@ -2376,7 +2268,7 @@ drop:
static int encap_bypass_if_local(struct sk_buff *skb, struct net_device *dev,
struct vxlan_dev *vxlan,
- union vxlan_addr *daddr,
+ int addr_family,
__be16 dst_port, int dst_ifindex, __be32 vni,
struct dst_entry *dst,
u32 rt_flags)
@@ -2396,7 +2288,7 @@ static int encap_bypass_if_local(struct sk_buff *skb, struct net_device *dev,
dst_release(dst);
dst_vxlan = vxlan_find_vni(vxlan->net, dst_ifindex, vni,
- daddr->sa.sa_family, dst_port,
+ addr_family, dst_port,
vxlan->cfg.flags);
if (!dst_vxlan) {
dev->stats.tx_errors++;
@@ -2418,31 +2310,33 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
{
struct dst_cache *dst_cache;
struct ip_tunnel_info *info;
+ struct ip_tunnel_key *pkey;
+ struct ip_tunnel_key key;
struct vxlan_dev *vxlan = netdev_priv(dev);
const struct iphdr *old_iph = ip_hdr(skb);
- union vxlan_addr *dst;
- union vxlan_addr remote_ip, local_ip;
struct vxlan_metadata _md;
struct vxlan_metadata *md = &_md;
unsigned int pkt_len = skb->len;
__be16 src_port = 0, dst_port;
struct dst_entry *ndst = NULL;
- __u8 tos, ttl, flow_flags = 0;
+ int addr_family;
+ __u8 tos, ttl;
int ifindex;
int err;
u32 flags = vxlan->cfg.flags;
+ bool use_cache;
bool udp_sum = false;
bool xnet = !net_eq(vxlan->net, dev_net(vxlan->dev));
__be32 vni = 0;
-#if IS_ENABLED(CONFIG_IPV6)
- __be32 label;
-#endif
info = skb_tunnel_info(skb);
+ use_cache = ip_tunnel_dst_cache_usable(skb, info);
if (rdst) {
- dst = &rdst->remote_ip;
- if (vxlan_addr_any(dst)) {
+ memset(&key, 0, sizeof(key));
+ pkey = &key;
+
+ if (vxlan_addr_any(&rdst->remote_ip)) {
if (did_rsc) {
/* short-circuited back to local bridge */
vxlan_encap_bypass(skb, vxlan, vxlan,
@@ -2452,30 +2346,40 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
goto drop;
}
+ addr_family = vxlan->cfg.saddr.sa.sa_family;
dst_port = rdst->remote_port ? rdst->remote_port : vxlan->cfg.dst_port;
vni = (rdst->remote_vni) ? : default_vni;
ifindex = rdst->remote_ifindex;
- local_ip = vxlan->cfg.saddr;
+
+ if (addr_family == AF_INET) {
+ key.u.ipv4.src = vxlan->cfg.saddr.sin.sin_addr.s_addr;
+ key.u.ipv4.dst = rdst->remote_ip.sin.sin_addr.s_addr;
+ } else {
+ key.u.ipv6.src = vxlan->cfg.saddr.sin6.sin6_addr;
+ key.u.ipv6.dst = rdst->remote_ip.sin6.sin6_addr;
+ }
+
dst_cache = &rdst->dst_cache;
md->gbp = skb->mark;
if (flags & VXLAN_F_TTL_INHERIT) {
ttl = ip_tunnel_get_ttl(old_iph, skb);
} else {
ttl = vxlan->cfg.ttl;
- if (!ttl && vxlan_addr_multicast(dst))
+ if (!ttl && vxlan_addr_multicast(&rdst->remote_ip))
ttl = 1;
}
-
tos = vxlan->cfg.tos;
if (tos == 1)
tos = ip_tunnel_get_dsfield(old_iph, skb);
+ if (tos && !info)
+ use_cache = false;
- if (dst->sa.sa_family == AF_INET)
+ if (addr_family == AF_INET)
udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM_TX);
else
udp_sum = !(flags & VXLAN_F_UDP_ZERO_CSUM6_TX);
#if IS_ENABLED(CONFIG_IPV6)
- label = vxlan->cfg.label;
+ key.label = vxlan->cfg.label;
#endif
} else {
if (!info) {
@@ -2483,17 +2387,9 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
dev->name);
goto drop;
}
- remote_ip.sa.sa_family = ip_tunnel_info_af(info);
- if (remote_ip.sa.sa_family == AF_INET) {
- remote_ip.sin.sin_addr.s_addr = info->key.u.ipv4.dst;
- local_ip.sin.sin_addr.s_addr = info->key.u.ipv4.src;
- } else {
- remote_ip.sin6.sin6_addr = info->key.u.ipv6.dst;
- local_ip.sin6.sin6_addr = info->key.u.ipv6.src;
- }
- dst = &remote_ip;
+ pkey = &info->key;
+ addr_family = ip_tunnel_info_af(info);
dst_port = info->key.tp_dst ? : vxlan->cfg.dst_port;
- flow_flags = info->key.flow_flags;
vni = tunnel_id_to_key32(info->key.tun_id);
ifindex = 0;
dst_cache = &info->dst_cache;
@@ -2504,28 +2400,24 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
}
ttl = info->key.ttl;
tos = info->key.tos;
-#if IS_ENABLED(CONFIG_IPV6)
- label = info->key.label;
-#endif
udp_sum = !!(info->key.tun_flags & TUNNEL_CSUM);
}
src_port = udp_flow_src_port(dev_net(dev), skb, vxlan->cfg.port_min,
vxlan->cfg.port_max, true);
rcu_read_lock();
- if (dst->sa.sa_family == AF_INET) {
+ if (addr_family == AF_INET) {
struct vxlan_sock *sock4 = rcu_dereference(vxlan->vn4_sock);
struct rtable *rt;
__be16 df = 0;
+ __be32 saddr;
if (!ifindex)
ifindex = sock4->sock->sk->sk_bound_dev_if;
- rt = vxlan_get_route(vxlan, dev, sock4, skb, ifindex, tos,
- dst->sin.sin_addr.s_addr,
- &local_ip.sin.sin_addr.s_addr,
- dst_port, src_port, flow_flags,
- dst_cache, info);
+ rt = udp_tunnel_dst_lookup(skb, dev, vxlan->net, ifindex,
+ &saddr, pkey, src_port, dst_port,
+ tos, use_cache ? dst_cache : NULL);
if (IS_ERR(rt)) {
err = PTR_ERR(rt);
goto tx_error;
@@ -2533,7 +2425,7 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
if (!info) {
/* Bypass encapsulation if the destination is local */
- err = encap_bypass_if_local(skb, dev, vxlan, dst,
+ err = encap_bypass_if_local(skb, dev, vxlan, AF_INET,
dst_port, ifindex, vni,
&rt->dst, rt->rt_flags);
if (err)
@@ -2561,16 +2453,13 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
} else if (err) {
if (info) {
struct ip_tunnel_info *unclone;
- struct in_addr src, dst;
unclone = skb_tunnel_info_unclone(skb);
if (unlikely(!unclone))
goto tx_error;
- src = remote_ip.sin.sin_addr;
- dst = local_ip.sin.sin_addr;
- unclone->key.u.ipv4.src = src.s_addr;
- unclone->key.u.ipv4.dst = dst.s_addr;
+ unclone->key.u.ipv4.src = pkey->u.ipv4.dst;
+ unclone->key.u.ipv4.dst = saddr;
}
vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
dst_release(ndst);
@@ -2584,21 +2473,21 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
if (err < 0)
goto tx_error;
- udp_tunnel_xmit_skb(rt, sock4->sock->sk, skb, local_ip.sin.sin_addr.s_addr,
- dst->sin.sin_addr.s_addr, tos, ttl, df,
+ udp_tunnel_xmit_skb(rt, sock4->sock->sk, skb, saddr,
+ pkey->u.ipv4.dst, tos, ttl, df,
src_port, dst_port, xnet, !udp_sum);
#if IS_ENABLED(CONFIG_IPV6)
} else {
struct vxlan_sock *sock6 = rcu_dereference(vxlan->vn6_sock);
+ struct in6_addr saddr;
if (!ifindex)
ifindex = sock6->sock->sk->sk_bound_dev_if;
- ndst = vxlan6_get_route(vxlan, dev, sock6, skb, ifindex, tos,
- label, &dst->sin6.sin6_addr,
- &local_ip.sin6.sin6_addr,
- dst_port, src_port,
- dst_cache, info);
+ ndst = udp_tunnel6_dst_lookup(skb, dev, vxlan->net, sock6->sock,
+ ifindex, &saddr, pkey,
+ src_port, dst_port, tos,
+ use_cache ? dst_cache : NULL);
if (IS_ERR(ndst)) {
err = PTR_ERR(ndst);
ndst = NULL;
@@ -2608,7 +2497,7 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
if (!info) {
u32 rt6i_flags = ((struct rt6_info *)ndst)->rt6i_flags;
- err = encap_bypass_if_local(skb, dev, vxlan, dst,
+ err = encap_bypass_if_local(skb, dev, vxlan, AF_INET6,
dst_port, ifindex, vni,
ndst, rt6i_flags);
if (err)
@@ -2623,16 +2512,13 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
} else if (err) {
if (info) {
struct ip_tunnel_info *unclone;
- struct in6_addr src, dst;
unclone = skb_tunnel_info_unclone(skb);
if (unlikely(!unclone))
goto tx_error;
- src = remote_ip.sin6.sin6_addr;
- dst = local_ip.sin6.sin6_addr;
- unclone->key.u.ipv6.src = src;
- unclone->key.u.ipv6.dst = dst;
+ unclone->key.u.ipv6.src = pkey->u.ipv6.dst;
+ unclone->key.u.ipv6.dst = saddr;
}
vxlan_encap_bypass(skb, vxlan, vxlan, vni, false);
@@ -2649,9 +2535,8 @@ void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
goto tx_error;
udp_tunnel6_xmit_skb(ndst, sock6->sock->sk, skb, dev,
- &local_ip.sin6.sin6_addr,
- &dst->sin6.sin6_addr, tos, ttl,
- label, src_port, dst_port, !udp_sum);
+ &saddr, &pkey->u.ipv6.dst, tos, ttl,
+ pkey->label, src_port, dst_port, !udp_sum);
#endif
}
vxlan_vnifilter_count(vxlan, vni, NULL, VXLAN_VNI_STATS_TX, pkt_len);
@@ -3022,9 +2907,101 @@ static int vxlan_open(struct net_device *dev)
return ret;
}
+struct vxlan_fdb_flush_desc {
+ bool ignore_default_entry;
+ unsigned long state;
+ unsigned long state_mask;
+ unsigned long flags;
+ unsigned long flags_mask;
+ __be32 src_vni;
+ u32 nhid;
+ __be32 vni;
+ __be16 port;
+ union vxlan_addr dst_ip;
+};
+
+static bool vxlan_fdb_is_default_entry(const struct vxlan_fdb *f,
+ const struct vxlan_dev *vxlan)
+{
+ return is_zero_ether_addr(f->eth_addr) && f->vni == vxlan->cfg.vni;
+}
+
+static bool vxlan_fdb_nhid_matches(const struct vxlan_fdb *f, u32 nhid)
+{
+ struct nexthop *nh = rtnl_dereference(f->nh);
+
+ return nh && nh->id == nhid;
+}
+
+static bool vxlan_fdb_flush_matches(const struct vxlan_fdb *f,
+ const struct vxlan_dev *vxlan,
+ const struct vxlan_fdb_flush_desc *desc)
+{
+ if (desc->state_mask && (f->state & desc->state_mask) != desc->state)
+ return false;
+
+ if (desc->flags_mask && (f->flags & desc->flags_mask) != desc->flags)
+ return false;
+
+ if (desc->ignore_default_entry && vxlan_fdb_is_default_entry(f, vxlan))
+ return false;
+
+ if (desc->src_vni && f->vni != desc->src_vni)
+ return false;
+
+ if (desc->nhid && !vxlan_fdb_nhid_matches(f, desc->nhid))
+ return false;
+
+ return true;
+}
+
+static bool
+vxlan_fdb_flush_should_match_remotes(const struct vxlan_fdb_flush_desc *desc)
+{
+ return desc->vni || desc->port || desc->dst_ip.sa.sa_family;
+}
+
+static bool
+vxlan_fdb_flush_remote_matches(const struct vxlan_fdb_flush_desc *desc,
+ const struct vxlan_rdst *rd)
+{
+ if (desc->vni && rd->remote_vni != desc->vni)
+ return false;
+
+ if (desc->port && rd->remote_port != desc->port)
+ return false;
+
+ if (desc->dst_ip.sa.sa_family &&
+ !vxlan_addr_equal(&rd->remote_ip, &desc->dst_ip))
+ return false;
+
+ return true;
+}
+
+static void
+vxlan_fdb_flush_match_remotes(struct vxlan_fdb *f, struct vxlan_dev *vxlan,
+ const struct vxlan_fdb_flush_desc *desc,
+ bool *p_destroy_fdb)
+{
+ bool remotes_flushed = false;
+ struct vxlan_rdst *rd, *tmp;
+
+ list_for_each_entry_safe(rd, tmp, &f->remotes, list) {
+ if (!vxlan_fdb_flush_remote_matches(desc, rd))
+ continue;
+
+ vxlan_fdb_dst_destroy(vxlan, f, rd, true);
+ remotes_flushed = true;
+ }
+
+ *p_destroy_fdb = remotes_flushed && list_empty(&f->remotes);
+}
+
/* Purge the forwarding table */
-static void vxlan_flush(struct vxlan_dev *vxlan, bool do_all)
+static void vxlan_flush(struct vxlan_dev *vxlan,
+ const struct vxlan_fdb_flush_desc *desc)
{
+ bool match_remotes = vxlan_fdb_flush_should_match_remotes(desc);
unsigned int h;
for (h = 0; h < FDB_HASH_SIZE; ++h) {
@@ -3034,28 +3011,122 @@ static void vxlan_flush(struct vxlan_dev *vxlan, bool do_all)
hlist_for_each_safe(p, n, &vxlan->fdb_head[h]) {
struct vxlan_fdb *f
= container_of(p, struct vxlan_fdb, hlist);
- if (!do_all && (f->state & (NUD_PERMANENT | NUD_NOARP)))
- continue;
- /* the all_zeros_mac entry is deleted at vxlan_uninit */
- if (is_zero_ether_addr(f->eth_addr) &&
- f->vni == vxlan->cfg.vni)
+
+ if (!vxlan_fdb_flush_matches(f, vxlan, desc))
continue;
+
+ if (match_remotes) {
+ bool destroy_fdb = false;
+
+ vxlan_fdb_flush_match_remotes(f, vxlan, desc,
+ &destroy_fdb);
+
+ if (!destroy_fdb)
+ continue;
+ }
+
vxlan_fdb_destroy(vxlan, f, true, true);
}
spin_unlock_bh(&vxlan->hash_lock[h]);
}
}
+static const struct nla_policy vxlan_del_bulk_policy[NDA_MAX + 1] = {
+ [NDA_SRC_VNI] = { .type = NLA_U32 },
+ [NDA_NH_ID] = { .type = NLA_U32 },
+ [NDA_VNI] = { .type = NLA_U32 },
+ [NDA_PORT] = { .type = NLA_U16 },
+ [NDA_DST] = NLA_POLICY_RANGE(NLA_BINARY, sizeof(struct in_addr),
+ sizeof(struct in6_addr)),
+ [NDA_NDM_STATE_MASK] = { .type = NLA_U16 },
+ [NDA_NDM_FLAGS_MASK] = { .type = NLA_U8 },
+};
+
+#define VXLAN_FDB_FLUSH_IGNORED_NDM_FLAGS (NTF_MASTER | NTF_SELF)
+#define VXLAN_FDB_FLUSH_ALLOWED_NDM_STATES (NUD_PERMANENT | NUD_NOARP)
+#define VXLAN_FDB_FLUSH_ALLOWED_NDM_FLAGS (NTF_EXT_LEARNED | NTF_OFFLOADED | \
+ NTF_ROUTER)
+
+static int vxlan_fdb_delete_bulk(struct nlmsghdr *nlh, struct net_device *dev,
+ struct netlink_ext_ack *extack)
+{
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+ struct vxlan_fdb_flush_desc desc = {};
+ struct ndmsg *ndm = nlmsg_data(nlh);
+ struct nlattr *tb[NDA_MAX + 1];
+ u8 ndm_flags;
+ int err;
+
+ ndm_flags = ndm->ndm_flags & ~VXLAN_FDB_FLUSH_IGNORED_NDM_FLAGS;
+
+ err = nlmsg_parse(nlh, sizeof(*ndm), tb, NDA_MAX, vxlan_del_bulk_policy,
+ extack);
+ if (err)
+ return err;
+
+ if (ndm_flags & ~VXLAN_FDB_FLUSH_ALLOWED_NDM_FLAGS) {
+ NL_SET_ERR_MSG(extack, "Unsupported fdb flush ndm flag bits set");
+ return -EINVAL;
+ }
+ if (ndm->ndm_state & ~VXLAN_FDB_FLUSH_ALLOWED_NDM_STATES) {
+ NL_SET_ERR_MSG(extack, "Unsupported fdb flush ndm state bits set");
+ return -EINVAL;
+ }
+
+ desc.state = ndm->ndm_state;
+ desc.flags = ndm_flags;
+
+ if (tb[NDA_NDM_STATE_MASK])
+ desc.state_mask = nla_get_u16(tb[NDA_NDM_STATE_MASK]);
+
+ if (tb[NDA_NDM_FLAGS_MASK])
+ desc.flags_mask = nla_get_u8(tb[NDA_NDM_FLAGS_MASK]);
+
+ if (tb[NDA_SRC_VNI])
+ desc.src_vni = cpu_to_be32(nla_get_u32(tb[NDA_SRC_VNI]));
+
+ if (tb[NDA_NH_ID])
+ desc.nhid = nla_get_u32(tb[NDA_NH_ID]);
+
+ if (tb[NDA_VNI])
+ desc.vni = cpu_to_be32(nla_get_u32(tb[NDA_VNI]));
+
+ if (tb[NDA_PORT])
+ desc.port = nla_get_be16(tb[NDA_PORT]);
+
+ if (tb[NDA_DST]) {
+ union vxlan_addr ip;
+
+ err = vxlan_nla_get_addr(&ip, tb[NDA_DST]);
+ if (err) {
+ NL_SET_ERR_MSG_ATTR(extack, tb[NDA_DST],
+ "Unsupported address family");
+ return err;
+ }
+ desc.dst_ip = ip;
+ }
+
+ vxlan_flush(vxlan, &desc);
+
+ return 0;
+}
+
/* Cleanup timer and forwarding table on shutdown */
static int vxlan_stop(struct net_device *dev)
{
struct vxlan_dev *vxlan = netdev_priv(dev);
+ struct vxlan_fdb_flush_desc desc = {
+ /* Default entry is deleted at vxlan_uninit. */
+ .ignore_default_entry = true,
+ .state = 0,
+ .state_mask = NUD_PERMANENT | NUD_NOARP,
+ };
vxlan_multicast_leave(vxlan);
del_timer_sync(&vxlan->age_timer);
- vxlan_flush(vxlan, false);
+ vxlan_flush(vxlan, &desc);
vxlan_sock_release(vxlan);
return 0;
@@ -3100,11 +3171,14 @@ static int vxlan_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
struct vxlan_sock *sock4 = rcu_dereference(vxlan->vn4_sock);
struct rtable *rt;
- rt = vxlan_get_route(vxlan, dev, sock4, skb, 0, info->key.tos,
- info->key.u.ipv4.dst,
- &info->key.u.ipv4.src, dport, sport,
- info->key.flow_flags, &info->dst_cache,
- info);
+ if (!sock4)
+ return -EIO;
+
+ rt = udp_tunnel_dst_lookup(skb, dev, vxlan->net, 0,
+ &info->key.u.ipv4.src,
+ &info->key,
+ sport, dport, info->key.tos,
+ &info->dst_cache);
if (IS_ERR(rt))
return PTR_ERR(rt);
ip_rt_put(rt);
@@ -3113,10 +3187,14 @@ static int vxlan_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
struct vxlan_sock *sock6 = rcu_dereference(vxlan->vn6_sock);
struct dst_entry *ndst;
- ndst = vxlan6_get_route(vxlan, dev, sock6, skb, 0, info->key.tos,
- info->key.label, &info->key.u.ipv6.dst,
- &info->key.u.ipv6.src, dport, sport,
- &info->dst_cache, info);
+ if (!sock6)
+ return -EIO;
+
+ ndst = udp_tunnel6_dst_lookup(skb, dev, vxlan->net, sock6->sock,
+ 0, &info->key.u.ipv6.src,
+ &info->key,
+ sport, dport, info->key.tos,
+ &info->dst_cache);
if (IS_ERR(ndst))
return PTR_ERR(ndst);
dst_release(ndst);
@@ -3142,11 +3220,13 @@ static const struct net_device_ops vxlan_netdev_ether_ops = {
.ndo_set_mac_address = eth_mac_addr,
.ndo_fdb_add = vxlan_fdb_add,
.ndo_fdb_del = vxlan_fdb_delete,
+ .ndo_fdb_del_bulk = vxlan_fdb_delete_bulk,
.ndo_fdb_dump = vxlan_fdb_dump,
.ndo_fdb_get = vxlan_fdb_get,
.ndo_mdb_add = vxlan_mdb_add,
.ndo_mdb_del = vxlan_mdb_del,
.ndo_mdb_dump = vxlan_mdb_dump,
+ .ndo_mdb_get = vxlan_mdb_get,
.ndo_fill_metadata_dst = vxlan_fill_metadata_dst,
};
@@ -4294,8 +4374,12 @@ static int vxlan_changelink(struct net_device *dev, struct nlattr *tb[],
static void vxlan_dellink(struct net_device *dev, struct list_head *head)
{
struct vxlan_dev *vxlan = netdev_priv(dev);
+ struct vxlan_fdb_flush_desc desc = {
+ /* Default entry is deleted at vxlan_uninit. */
+ .ignore_default_entry = true,
+ };
- vxlan_flush(vxlan, true);
+ vxlan_flush(vxlan, &desc);
list_del(&vxlan->next);
unregister_netdevice_queue(dev, head);
@@ -4305,7 +4389,6 @@ static void vxlan_dellink(struct net_device *dev, struct list_head *head)
static size_t vxlan_get_size(const struct net_device *dev)
{
-
return nla_total_size(sizeof(__u32)) + /* IFLA_VXLAN_ID */
nla_total_size(sizeof(struct in6_addr)) + /* IFLA_VXLAN_GROUP{6} */
nla_total_size(sizeof(__u32)) + /* IFLA_VXLAN_LINK */
@@ -4323,7 +4406,6 @@ static size_t vxlan_get_size(const struct net_device *dev)
nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_COLLECT_METADATA */
nla_total_size(sizeof(__u32)) + /* IFLA_VXLAN_AGEING */
nla_total_size(sizeof(__u32)) + /* IFLA_VXLAN_LIMIT */
- nla_total_size(sizeof(struct ifla_vxlan_port_range)) +
nla_total_size(sizeof(__be16)) + /* IFLA_VXLAN_PORT */
nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_UDP_CSUM */
nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_UDP_ZERO_CSUM6_TX */
@@ -4331,6 +4413,8 @@ static size_t vxlan_get_size(const struct net_device *dev)
nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_TX */
nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_REMCSUM_RX */
nla_total_size(sizeof(__u8)) + /* IFLA_VXLAN_LOCALBYPASS */
+ /* IFLA_VXLAN_PORT_RANGE */
+ nla_total_size(sizeof(struct ifla_vxlan_port_range)) +
nla_total_size(0) + /* IFLA_VXLAN_GBP */
nla_total_size(0) + /* IFLA_VXLAN_GPE */
nla_total_size(0) + /* IFLA_VXLAN_REMCSUM_NOPARTIAL */
diff --git a/drivers/net/vxlan/vxlan_mdb.c b/drivers/net/vxlan/vxlan_mdb.c
index 5e041622261a..eb4c580b5cee 100644
--- a/drivers/net/vxlan/vxlan_mdb.c
+++ b/drivers/net/vxlan/vxlan_mdb.c
@@ -311,7 +311,7 @@ vxlan_mdbe_src_list_pol[MDBE_SRC_LIST_MAX + 1] = {
[MDBE_SRC_LIST_ENTRY] = NLA_POLICY_NESTED(vxlan_mdbe_src_list_entry_pol),
};
-static struct netlink_range_validation vni_range = {
+static const struct netlink_range_validation vni_range = {
.max = VXLAN_N_VID - 1,
};
@@ -370,12 +370,10 @@ static bool vxlan_mdb_is_valid_source(const struct nlattr *attr, __be16 proto,
return true;
}
-static void vxlan_mdb_config_group_set(struct vxlan_mdb_config *cfg,
- const struct br_mdb_entry *entry,
- const struct nlattr *source_attr)
+static void vxlan_mdb_group_set(struct vxlan_mdb_entry_key *group,
+ const struct br_mdb_entry *entry,
+ const struct nlattr *source_attr)
{
- struct vxlan_mdb_entry_key *group = &cfg->group;
-
switch (entry->addr.proto) {
case htons(ETH_P_IP):
group->dst.sa.sa_family = AF_INET;
@@ -503,7 +501,7 @@ static int vxlan_mdb_config_attrs_init(struct vxlan_mdb_config *cfg,
entry->addr.proto, extack))
return -EINVAL;
- vxlan_mdb_config_group_set(cfg, entry, mdbe_attrs[MDBE_ATTR_SOURCE]);
+ vxlan_mdb_group_set(&cfg->group, entry, mdbe_attrs[MDBE_ATTR_SOURCE]);
/* rtnetlink code only validates that IPv4 group address is
* multicast.
@@ -927,23 +925,20 @@ vxlan_mdb_nlmsg_src_list_size(const struct vxlan_mdb_entry_key *group,
return nlmsg_size;
}
-static size_t vxlan_mdb_nlmsg_size(const struct vxlan_dev *vxlan,
- const struct vxlan_mdb_entry *mdb_entry,
- const struct vxlan_mdb_remote *remote)
+static size_t
+vxlan_mdb_nlmsg_remote_size(const struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry *mdb_entry,
+ const struct vxlan_mdb_remote *remote)
{
const struct vxlan_mdb_entry_key *group = &mdb_entry->key;
struct vxlan_rdst *rd = rtnl_dereference(remote->rd);
size_t nlmsg_size;
- nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
- /* MDBA_MDB */
- nla_total_size(0) +
- /* MDBA_MDB_ENTRY */
- nla_total_size(0) +
/* MDBA_MDB_ENTRY_INFO */
- nla_total_size(sizeof(struct br_mdb_entry)) +
+ nlmsg_size = nla_total_size(sizeof(struct br_mdb_entry)) +
/* MDBA_MDB_EATTR_TIMER */
nla_total_size(sizeof(u32));
+
/* MDBA_MDB_EATTR_SOURCE */
if (vxlan_mdb_is_sg(group))
nlmsg_size += nla_total_size(vxlan_addr_size(&group->dst));
@@ -971,6 +966,19 @@ static size_t vxlan_mdb_nlmsg_size(const struct vxlan_dev *vxlan,
return nlmsg_size;
}
+static size_t vxlan_mdb_nlmsg_size(const struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry *mdb_entry,
+ const struct vxlan_mdb_remote *remote)
+{
+ return NLMSG_ALIGN(sizeof(struct br_port_msg)) +
+ /* MDBA_MDB */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY */
+ nla_total_size(0) +
+ /* Remote entry */
+ vxlan_mdb_nlmsg_remote_size(vxlan, mdb_entry, remote);
+}
+
static int vxlan_mdb_nlmsg_fill(const struct vxlan_dev *vxlan,
struct sk_buff *skb,
const struct vxlan_mdb_entry *mdb_entry,
@@ -1298,6 +1306,156 @@ int vxlan_mdb_del(struct net_device *dev, struct nlattr *tb[],
return err;
}
+static const struct nla_policy vxlan_mdbe_attrs_get_pol[MDBE_ATTR_MAX + 1] = {
+ [MDBE_ATTR_SOURCE] = NLA_POLICY_RANGE(NLA_BINARY,
+ sizeof(struct in_addr),
+ sizeof(struct in6_addr)),
+ [MDBE_ATTR_SRC_VNI] = NLA_POLICY_FULL_RANGE(NLA_U32, &vni_range),
+};
+
+static int vxlan_mdb_get_parse(struct net_device *dev, struct nlattr *tb[],
+ struct vxlan_mdb_entry_key *group,
+ struct netlink_ext_ack *extack)
+{
+ struct br_mdb_entry *entry = nla_data(tb[MDBA_GET_ENTRY]);
+ struct nlattr *mdbe_attrs[MDBE_ATTR_MAX + 1];
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+ int err;
+
+ memset(group, 0, sizeof(*group));
+ group->vni = vxlan->default_dst.remote_vni;
+
+ if (!tb[MDBA_GET_ENTRY_ATTRS]) {
+ vxlan_mdb_group_set(group, entry, NULL);
+ return 0;
+ }
+
+ err = nla_parse_nested(mdbe_attrs, MDBE_ATTR_MAX,
+ tb[MDBA_GET_ENTRY_ATTRS],
+ vxlan_mdbe_attrs_get_pol, extack);
+ if (err)
+ return err;
+
+ if (mdbe_attrs[MDBE_ATTR_SOURCE] &&
+ !vxlan_mdb_is_valid_source(mdbe_attrs[MDBE_ATTR_SOURCE],
+ entry->addr.proto, extack))
+ return -EINVAL;
+
+ vxlan_mdb_group_set(group, entry, mdbe_attrs[MDBE_ATTR_SOURCE]);
+
+ if (mdbe_attrs[MDBE_ATTR_SRC_VNI])
+ group->vni =
+ cpu_to_be32(nla_get_u32(mdbe_attrs[MDBE_ATTR_SRC_VNI]));
+
+ return 0;
+}
+
+static struct sk_buff *
+vxlan_mdb_get_reply_alloc(const struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry *mdb_entry)
+{
+ struct vxlan_mdb_remote *remote;
+ size_t nlmsg_size;
+
+ nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
+ /* MDBA_MDB */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY */
+ nla_total_size(0);
+
+ list_for_each_entry(remote, &mdb_entry->remotes, list)
+ nlmsg_size += vxlan_mdb_nlmsg_remote_size(vxlan, mdb_entry,
+ remote);
+
+ return nlmsg_new(nlmsg_size, GFP_KERNEL);
+}
+
+static int
+vxlan_mdb_get_reply_fill(const struct vxlan_dev *vxlan,
+ struct sk_buff *skb,
+ const struct vxlan_mdb_entry *mdb_entry,
+ u32 portid, u32 seq)
+{
+ struct nlattr *mdb_nest, *mdb_entry_nest;
+ struct vxlan_mdb_remote *remote;
+ struct br_port_msg *bpm;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = nlmsg_put(skb, portid, seq, RTM_NEWMDB, sizeof(*bpm), 0);
+ if (!nlh)
+ return -EMSGSIZE;
+
+ bpm = nlmsg_data(nlh);
+ memset(bpm, 0, sizeof(*bpm));
+ bpm->family = AF_BRIDGE;
+ bpm->ifindex = vxlan->dev->ifindex;
+ mdb_nest = nla_nest_start_noflag(skb, MDBA_MDB);
+ if (!mdb_nest) {
+ err = -EMSGSIZE;
+ goto cancel;
+ }
+ mdb_entry_nest = nla_nest_start_noflag(skb, MDBA_MDB_ENTRY);
+ if (!mdb_entry_nest) {
+ err = -EMSGSIZE;
+ goto cancel;
+ }
+
+ list_for_each_entry(remote, &mdb_entry->remotes, list) {
+ err = vxlan_mdb_entry_info_fill(vxlan, skb, mdb_entry, remote);
+ if (err)
+ goto cancel;
+ }
+
+ nla_nest_end(skb, mdb_entry_nest);
+ nla_nest_end(skb, mdb_nest);
+ nlmsg_end(skb, nlh);
+
+ return 0;
+
+cancel:
+ nlmsg_cancel(skb, nlh);
+ return err;
+}
+
+int vxlan_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid,
+ u32 seq, struct netlink_ext_ack *extack)
+{
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+ struct vxlan_mdb_entry *mdb_entry;
+ struct vxlan_mdb_entry_key group;
+ struct sk_buff *skb;
+ int err;
+
+ ASSERT_RTNL();
+
+ err = vxlan_mdb_get_parse(dev, tb, &group, extack);
+ if (err)
+ return err;
+
+ mdb_entry = vxlan_mdb_entry_lookup(vxlan, &group);
+ if (!mdb_entry) {
+ NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
+ return -ENOENT;
+ }
+
+ skb = vxlan_mdb_get_reply_alloc(vxlan, mdb_entry);
+ if (!skb)
+ return -ENOMEM;
+
+ err = vxlan_mdb_get_reply_fill(vxlan, skb, mdb_entry, portid, seq);
+ if (err) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to fill MDB get reply");
+ goto free;
+ }
+
+ return rtnl_unicast(skb, dev_net(dev), portid);
+
+free:
+ kfree_skb(skb);
+ return err;
+}
+
struct vxlan_mdb_entry *vxlan_mdb_entry_skb_get(struct vxlan_dev *vxlan,
struct sk_buff *skb,
__be32 src_vni)
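
The new MDB get path and the FDB bulk-delete handler above both validate the request with a dedicated nla_policy before acting on it. A generic sketch of that nested-attribute parsing step; the attribute names, range and helper below are illustrative, not taken from the vxlan code:

#include <net/netlink.h>

enum {
        DEMO_ATTR_UNSPEC,
        DEMO_ATTR_VNI,          /* u32, range-checked */
        __DEMO_ATTR_MAX,
};
#define DEMO_ATTR_MAX (__DEMO_ATTR_MAX - 1)

static const struct netlink_range_validation demo_vni_range = {
        .max = (1 << 24) - 1,
};

static const struct nla_policy demo_pol[DEMO_ATTR_MAX + 1] = {
        [DEMO_ATTR_VNI] = NLA_POLICY_FULL_RANGE(NLA_U32, &demo_vni_range),
};

static int demo_parse(const struct nlattr *nest, u32 *vni,
                      struct netlink_ext_ack *extack)
{
        struct nlattr *tb[DEMO_ATTR_MAX + 1];
        int err;

        /* Validates attribute types and ranges against the policy. */
        err = nla_parse_nested(tb, DEMO_ATTR_MAX, nest, demo_pol, extack);
        if (err)
                return err;

        if (tb[DEMO_ATTR_VNI])
                *vni = nla_get_u32(tb[DEMO_ATTR_VNI]);

        return 0;
}
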
diff --git a/drivers/net/vxlan/vxlan_private.h b/drivers/net/vxlan/vxlan_private.h
index 817fa3075842..db679c380955 100644
--- a/drivers/net/vxlan/vxlan_private.h
+++ b/drivers/net/vxlan/vxlan_private.h
@@ -235,6 +235,8 @@ int vxlan_mdb_add(struct net_device *dev, struct nlattr *tb[], u16 nlmsg_flags,
struct netlink_ext_ack *extack);
int vxlan_mdb_del(struct net_device *dev, struct nlattr *tb[],
struct netlink_ext_ack *extack);
+int vxlan_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid,
+ u32 seq, struct netlink_ext_ack *extack);
struct vxlan_mdb_entry *vxlan_mdb_entry_skb_get(struct vxlan_dev *vxlan,
struct sk_buff *skb,
__be32 src_vni);
diff --git a/drivers/net/wan/ixp4xx_hss.c b/drivers/net/wan/ixp4xx_hss.c
index e46b7f5ee49e..b09f4c235142 100644
--- a/drivers/net/wan/ixp4xx_hss.c
+++ b/drivers/net/wan/ixp4xx_hss.c
@@ -687,10 +687,10 @@ static int hss_hdlc_poll(struct napi_struct *napi, int budget)
napi_complete(napi);
qmgr_enable_irq(rxq);
if (!qmgr_stat_empty(rxq) &&
- napi_reschedule(napi)) {
+ napi_schedule(napi)) {
#if DEBUG_RX
printk(KERN_DEBUG "%s: hss_hdlc_poll"
- " napi_reschedule succeeded\n",
+ " napi_schedule succeeded\n",
dev->name);
#endif
qmgr_disable_irq(rxq);
diff --git a/drivers/net/wireless/ath/ar5523/ar5523.c b/drivers/net/wireless/ath/ar5523/ar5523.c
index 19f61225a708..43e0db78d42b 100644
--- a/drivers/net/wireless/ath/ar5523/ar5523.c
+++ b/drivers/net/wireless/ath/ar5523/ar5523.c
@@ -256,7 +256,7 @@ static int ar5523_cmd(struct ar5523 *ar, u32 code, const void *idata,
/* always bulk-out a multiple of 4 bytes */
xferlen = (sizeof(struct ar5523_cmd_hdr) + ilen + 3) & ~3;
- hdr = (struct ar5523_cmd_hdr *)cmd->buf_tx;
+ hdr = cmd->buf_tx;
memset(hdr, 0, sizeof(struct ar5523_cmd_hdr));
hdr->len = cpu_to_be32(xferlen);
hdr->code = cpu_to_be32(code);
diff --git a/drivers/net/wireless/ath/ath10k/ce.h b/drivers/net/wireless/ath/ath10k/ce.h
index 666ce384a1d8..27367bd64e95 100644
--- a/drivers/net/wireless/ath/ath10k/ce.h
+++ b/drivers/net/wireless/ath/ath10k/ce.h
@@ -110,7 +110,7 @@ struct ath10k_ce_ring {
struct ce_desc_64 *shadow_base;
/* keep last */
- void *per_transfer_context[];
+ void *per_transfer_context[] __counted_by(nentries);
};
struct ath10k_ce_pipe {
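
The __counted_by() annotation added to the ath10k CE ring above ties the flexible array to its element count, so fortified memcpy/UBSAN bounds checks know the array's real size. A hedged, generic illustration with invented names:

#include <linux/slab.h>
#include <linux/overflow.h>

struct demo_ring {
        unsigned int nentries;
        void *ctx[] __counted_by(nentries);
};

static struct demo_ring *demo_ring_alloc(unsigned int nentries)
{
        struct demo_ring *ring;

        /* struct_size() guards the header + array size calculation. */
        ring = kzalloc(struct_size(ring, ctx, nentries), GFP_KERNEL);
        if (!ring)
                return NULL;

        /* Set the counter before the array is indexed so the
         * bounds-checking instrumentation sees a valid size.
         */
        ring->nentries = nentries;
        return ring;
}
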
diff --git a/drivers/net/wireless/ath/ath10k/debug.c b/drivers/net/wireless/ath/ath10k/debug.c
index f9518e1c9903..ad9cf953a2fc 100644
--- a/drivers/net/wireless/ath/ath10k/debug.c
+++ b/drivers/net/wireless/ath/ath10k/debug.c
@@ -1140,7 +1140,7 @@ void ath10k_debug_get_et_strings(struct ieee80211_hw *hw,
u32 sset, u8 *data)
{
if (sset == ETH_SS_STATS)
- memcpy(data, *ath10k_gstrings_stats,
+ memcpy(data, ath10k_gstrings_stats,
sizeof(ath10k_gstrings_stats));
}
@@ -1964,20 +1964,13 @@ static ssize_t ath10k_write_btcoex(struct file *file,
size_t count, loff_t *ppos)
{
struct ath10k *ar = file->private_data;
- char buf[32];
- size_t buf_size;
- int ret;
+ ssize_t ret;
bool val;
u32 pdev_param;
- buf_size = min(count, (sizeof(buf) - 1));
- if (copy_from_user(buf, ubuf, buf_size))
- return -EFAULT;
-
- buf[buf_size] = '\0';
-
- if (kstrtobool(buf, &val) != 0)
- return -EINVAL;
+ ret = kstrtobool_from_user(ubuf, count, &val);
+ if (ret)
+ return ret;
if (!ar->coex_support)
return -EOPNOTSUPP;
@@ -2000,7 +1993,7 @@ static ssize_t ath10k_write_btcoex(struct file *file,
ar->running_fw->fw_file.fw_features)) {
ret = ath10k_wmi_pdev_set_param(ar, pdev_param, val);
if (ret) {
- ath10k_warn(ar, "failed to enable btcoex: %d\n", ret);
+ ath10k_warn(ar, "failed to enable btcoex: %zd\n", ret);
ret = count;
goto exit;
}
@@ -2103,19 +2096,12 @@ static ssize_t ath10k_write_peer_stats(struct file *file,
size_t count, loff_t *ppos)
{
struct ath10k *ar = file->private_data;
- char buf[32];
- size_t buf_size;
- int ret;
+ ssize_t ret;
bool val;
- buf_size = min(count, (sizeof(buf) - 1));
- if (copy_from_user(buf, ubuf, buf_size))
- return -EFAULT;
-
- buf[buf_size] = '\0';
-
- if (kstrtobool(buf, &val) != 0)
- return -EINVAL;
+ ret = kstrtobool_from_user(ubuf, count, &val);
+ if (ret)
+ return ret;
mutex_lock(&ar->conf_mutex);
@@ -2239,21 +2225,16 @@ static ssize_t ath10k_sta_tid_stats_mask_write(struct file *file,
size_t count, loff_t *ppos)
{
struct ath10k *ar = file->private_data;
- char buf[32];
- ssize_t len;
+ ssize_t ret;
u32 mask;
- len = min(count, sizeof(buf) - 1);
- if (copy_from_user(buf, user_buf, len))
- return -EFAULT;
-
- buf[len] = '\0';
- if (kstrtoint(buf, 0, &mask))
- return -EINVAL;
+ ret = kstrtoint_from_user(user_buf, count, 0, &mask);
+ if (ret)
+ return ret;
ar->sta_tid_stats_mask = mask;
- return len;
+ return count;
}
static const struct file_operations fops_sta_tid_stats_mask = {
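
The debugfs write handlers above are simplified with the kstrto*_from_user() helpers, which fold the copy_from_user(), NUL-termination and parsing steps into one call. A rough sketch of the resulting handler shape; the function name and the "apply" step are placeholders:

#include <linux/fs.h>
#include <linux/kstrtox.h>

static ssize_t demo_write_bool(struct file *file, const char __user *ubuf,
                               size_t count, loff_t *ppos)
{
        bool val;
        ssize_t ret;

        /* Parses "0/1/y/n/on/off" directly from the user buffer. */
        ret = kstrtobool_from_user(ubuf, count, &val);
        if (ret)
                return ret;

        /* ... apply 'val' to the driver state here ... */

        return count;
}
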
diff --git a/drivers/net/wireless/ath/ath10k/htt.h b/drivers/net/wireless/ath/ath10k/htt.h
index 7b24297146e7..c80470e8886a 100644
--- a/drivers/net/wireless/ath/ath10k/htt.h
+++ b/drivers/net/wireless/ath/ath10k/htt.h
@@ -880,8 +880,7 @@ enum htt_data_tx_status {
HTT_DATA_TX_STATUS_OK = 0,
HTT_DATA_TX_STATUS_DISCARD = 1,
HTT_DATA_TX_STATUS_NO_ACK = 2,
- HTT_DATA_TX_STATUS_POSTPONE = 3, /* HL only */
- HTT_DATA_TX_STATUS_DOWNLOAD_FAIL = 128
+ HTT_DATA_TX_STATUS_POSTPONE = 3 /* HL only */
};
enum htt_data_tx_flags {
diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
index 438b0caaceb7..b261d6371c0f 100644
--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
+++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
@@ -2964,7 +2964,6 @@ static void ath10k_htt_rx_tx_compl_ind(struct ath10k *ar,
break;
case HTT_DATA_TX_STATUS_DISCARD:
case HTT_DATA_TX_STATUS_POSTPONE:
- case HTT_DATA_TX_STATUS_DOWNLOAD_FAIL:
tx_done.status = HTT_TX_COMPL_STATE_DISCARD;
break;
default:
diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
index bd603feb7953..be4d4536aaa8 100644
--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
+++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
@@ -796,20 +796,16 @@ static int ath10k_htt_send_frag_desc_bank_cfg_64(struct ath10k_htt *htt)
return 0;
}
-static void ath10k_htt_fill_rx_desc_offset_32(struct ath10k_hw_params *hw, void *rx_ring)
+static void ath10k_htt_fill_rx_desc_offset_32(struct ath10k_hw_params *hw,
+ struct htt_rx_ring_setup_ring32 *rx_ring)
{
- struct htt_rx_ring_setup_ring32 *ring =
- (struct htt_rx_ring_setup_ring32 *)rx_ring;
-
- ath10k_htt_rx_desc_get_offsets(hw, &ring->offsets);
+ ath10k_htt_rx_desc_get_offsets(hw, &rx_ring->offsets);
}
-static void ath10k_htt_fill_rx_desc_offset_64(struct ath10k_hw_params *hw, void *rx_ring)
+static void ath10k_htt_fill_rx_desc_offset_64(struct ath10k_hw_params *hw,
+ struct htt_rx_ring_setup_ring64 *rx_ring)
{
- struct htt_rx_ring_setup_ring64 *ring =
- (struct htt_rx_ring_setup_ring64 *)rx_ring;
-
- ath10k_htt_rx_desc_get_offsets(hw, &ring->offsets);
+ ath10k_htt_rx_desc_get_offsets(hw, &rx_ring->offsets);
}
static int ath10k_htt_send_rx_ring_cfg_32(struct ath10k_htt *htt)
diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
index 03e7bc5b6c0b..2cf693f3fea9 100644
--- a/drivers/net/wireless/ath/ath10k/mac.c
+++ b/drivers/net/wireless/ath/ath10k/mac.c
@@ -728,20 +728,13 @@ static int ath10k_peer_create(struct ath10k *ar,
const u8 *addr,
enum wmi_peer_type peer_type)
{
- struct ath10k_vif *arvif;
struct ath10k_peer *peer;
- int num_peers = 0;
int ret;
lockdep_assert_held(&ar->conf_mutex);
- num_peers = ar->num_peers;
-
- /* Each vdev consumes a peer entry as well */
- list_for_each_entry(arvif, &ar->arvifs, list)
- num_peers++;
-
- if (num_peers >= ar->max_num_peers)
+ /* Each vdev consumes a peer entry as well. */
+ if (ar->num_peers + list_count_nodes(&ar->arvifs) >= ar->max_num_peers)
return -ENOBUFS;
ret = ath10k_wmi_peer_create(ar, vdev_id, addr, peer_type);
@@ -4503,18 +4496,21 @@ void __ath10k_scan_finish(struct ath10k *ar)
break;
case ATH10K_SCAN_RUNNING:
case ATH10K_SCAN_ABORTING:
+ if (ar->scan.is_roc && ar->scan.roc_notify)
+ ieee80211_remain_on_channel_expired(ar->hw);
+ fallthrough;
+ case ATH10K_SCAN_STARTING:
if (!ar->scan.is_roc) {
struct cfg80211_scan_info info = {
- .aborted = (ar->scan.state ==
- ATH10K_SCAN_ABORTING),
+ .aborted = ((ar->scan.state ==
+ ATH10K_SCAN_ABORTING) ||
+ (ar->scan.state ==
+ ATH10K_SCAN_STARTING)),
};
ieee80211_scan_completed(ar->hw, &info);
- } else if (ar->scan.roc_notify) {
- ieee80211_remain_on_channel_expired(ar->hw);
}
- fallthrough;
- case ATH10K_SCAN_STARTING:
+
ar->scan.state = ATH10K_SCAN_IDLE;
ar->scan_channel = NULL;
ar->scan.roc_freq = 0;
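
The ath10k peer-creation change above drops an open-coded loop in favour of list_count_nodes(). A small illustration of the helper; the struct, list and limit below are hypothetical:

#include <linux/list.h>

struct demo_dev {
        struct list_head vifs;  /* list of virtual interfaces */
        int num_peers;
        int max_num_peers;
};

static bool demo_can_add_peer(struct demo_dev *dev)
{
        /* Each vif consumes a peer entry as well, so count them in. */
        return dev->num_peers + list_count_nodes(&dev->vifs) <
               dev->max_num_peers;
}
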
diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
index 23f366221939..2f8c785277af 100644
--- a/drivers/net/wireless/ath/ath10k/pci.c
+++ b/drivers/net/wireless/ath/ath10k/pci.c
@@ -3148,7 +3148,7 @@ static int ath10k_pci_napi_poll(struct napi_struct *ctx, int budget)
* immediate servicing.
*/
if (ath10k_ce_interrupt_summary(ar)) {
- napi_reschedule(ctx);
+ napi_schedule(ctx);
goto out;
}
ath10k_pci_enable_legacy_irq(ar);
diff --git a/drivers/net/wireless/ath/ath10k/snoc.c b/drivers/net/wireless/ath/ath10k/snoc.c
index 26214c00cd0d..2c39bad7ebfb 100644
--- a/drivers/net/wireless/ath/ath10k/snoc.c
+++ b/drivers/net/wireless/ath/ath10k/snoc.c
@@ -828,12 +828,20 @@ static void ath10k_snoc_hif_get_default_pipe(struct ath10k *ar,
static inline void ath10k_snoc_irq_disable(struct ath10k *ar)
{
- ath10k_ce_disable_interrupts(ar);
+ struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
+ int id;
+
+ for (id = 0; id < CE_COUNT_MAX; id++)
+ disable_irq(ar_snoc->ce_irqs[id].irq_line);
}
static inline void ath10k_snoc_irq_enable(struct ath10k *ar)
{
- ath10k_ce_enable_interrupts(ar);
+ struct ath10k_snoc *ar_snoc = ath10k_snoc_priv(ar);
+ int id;
+
+ for (id = 0; id < CE_COUNT_MAX; id++)
+ enable_irq(ar_snoc->ce_irqs[id].irq_line);
}
static void ath10k_snoc_rx_pipe_cleanup(struct ath10k_snoc_pipe *snoc_pipe)
@@ -1090,6 +1098,8 @@ static int ath10k_snoc_hif_power_up(struct ath10k *ar,
goto err_free_rri;
}
+ ath10k_ce_enable_interrupts(ar);
+
return 0;
err_free_rri:
@@ -1253,8 +1263,8 @@ static int ath10k_snoc_request_irq(struct ath10k *ar)
for (id = 0; id < CE_COUNT_MAX; id++) {
ret = request_irq(ar_snoc->ce_irqs[id].irq_line,
- ath10k_snoc_per_engine_handler, 0,
- ce_name[id], ar);
+ ath10k_snoc_per_engine_handler,
+ IRQF_NO_AUTOEN, ce_name[id], ar);
if (ret) {
ath10k_err(ar,
"failed to register IRQ handler for CE %d: %d\n",
diff --git a/drivers/net/wireless/ath/ath10k/spectral.c b/drivers/net/wireless/ath/ath10k/spectral.c
index 68254a967ccb..2240994390ed 100644
--- a/drivers/net/wireless/ath/ath10k/spectral.c
+++ b/drivers/net/wireless/ath/ath10k/spectral.c
@@ -384,16 +384,11 @@ static ssize_t write_file_spectral_count(struct file *file,
{
struct ath10k *ar = file->private_data;
unsigned long val;
- char buf[32];
- ssize_t len;
-
- len = min(count, sizeof(buf) - 1);
- if (copy_from_user(buf, user_buf, len))
- return -EFAULT;
+ ssize_t ret;
- buf[len] = '\0';
- if (kstrtoul(buf, 0, &val))
- return -EINVAL;
+ ret = kstrtoul_from_user(user_buf, count, 0, &val);
+ if (ret)
+ return ret;
if (val > 255)
return -EINVAL;
@@ -440,16 +435,11 @@ static ssize_t write_file_spectral_bins(struct file *file,
{
struct ath10k *ar = file->private_data;
unsigned long val;
- char buf[32];
- ssize_t len;
-
- len = min(count, sizeof(buf) - 1);
- if (copy_from_user(buf, user_buf, len))
- return -EFAULT;
+ ssize_t ret;
- buf[len] = '\0';
- if (kstrtoul(buf, 0, &val))
- return -EINVAL;
+ ret = kstrtoul_from_user(user_buf, count, 0, &val);
+ if (ret)
+ return ret;
if (val < 64 || val > SPECTRAL_ATH10K_MAX_NUM_BINS)
return -EINVAL;
diff --git a/drivers/net/wireless/ath/ath11k/Makefile b/drivers/net/wireless/ath/ath11k/Makefile
index cc47e0114595..2c94d50ae36f 100644
--- a/drivers/net/wireless/ath/ath11k/Makefile
+++ b/drivers/net/wireless/ath/ath11k/Makefile
@@ -17,7 +17,8 @@ ath11k-y += core.o \
peer.o \
dbring.o \
hw.o \
- pcic.o
+ pcic.o \
+ fw.o
ath11k-$(CONFIG_ATH11K_DEBUGFS) += debugfs.o debugfs_htt_stats.o debugfs_sta.o
ath11k-$(CONFIG_NL80211_TESTMODE) += testmode.o
diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
index 1215ebdf173a..235336ef2a7a 100644
--- a/drivers/net/wireless/ath/ath11k/ahb.c
+++ b/drivers/net/wireless/ath/ath11k/ahb.c
@@ -6,6 +6,7 @@
#include <linux/module.h>
#include <linux/platform_device.h>
+#include <linux/property.h>
#include <linux/of_device.h>
#include <linux/of.h>
#include <linux/dma-mapping.h>
@@ -1084,19 +1085,12 @@ static int ath11k_ahb_fw_resource_deinit(struct ath11k_base *ab)
static int ath11k_ahb_probe(struct platform_device *pdev)
{
struct ath11k_base *ab;
- const struct of_device_id *of_id;
const struct ath11k_hif_ops *hif_ops;
const struct ath11k_pci_ops *pci_ops;
enum ath11k_hw_rev hw_rev;
int ret;
- of_id = of_match_device(ath11k_ahb_of_match, &pdev->dev);
- if (!of_id) {
- dev_err(&pdev->dev, "failed to find matching device tree id\n");
- return -EINVAL;
- }
-
- hw_rev = (uintptr_t)of_id->data;
+ hw_rev = (uintptr_t)device_get_match_data(&pdev->dev);
switch (hw_rev) {
case ATH11K_HW_IPQ8074:
diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
index fc7c4564a715..0c6ecbb9a066 100644
--- a/drivers/net/wireless/ath/ath11k/core.c
+++ b/drivers/net/wireless/ath/ath11k/core.c
@@ -16,6 +16,7 @@
#include "debug.h"
#include "hif.h"
#include "wow.h"
+#include "fw.h"
unsigned int ath11k_debug_mask;
EXPORT_SYMBOL(ath11k_debug_mask);
@@ -985,9 +986,15 @@ int ath11k_core_check_dt(struct ath11k_base *ab)
return 0;
}
+enum ath11k_bdf_name_type {
+ ATH11K_BDF_NAME_FULL,
+ ATH11K_BDF_NAME_BUS_NAME,
+ ATH11K_BDF_NAME_CHIP_ID,
+};
+
static int __ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
size_t name_len, bool with_variant,
- bool bus_type_mode)
+ enum ath11k_bdf_name_type name_type)
{
/* strlen(',variant=') + strlen(ab->qmi.target.bdf_ext) */
char variant[9 + ATH11K_QMI_BDF_EXT_STR_LENGTH] = { 0 };
@@ -998,11 +1005,8 @@ static int __ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
switch (ab->id.bdf_search) {
case ATH11K_BDF_SEARCH_BUS_AND_BOARD:
- if (bus_type_mode)
- scnprintf(name, name_len,
- "bus=%s",
- ath11k_bus_str(ab->hif.bus));
- else
+ switch (name_type) {
+ case ATH11K_BDF_NAME_FULL:
scnprintf(name, name_len,
"bus=%s,vendor=%04x,device=%04x,subsystem-vendor=%04x,subsystem-device=%04x,qmi-chip-id=%d,qmi-board-id=%d%s",
ath11k_bus_str(ab->hif.bus),
@@ -1012,6 +1016,19 @@ static int __ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
ab->qmi.target.chip_id,
ab->qmi.target.board_id,
variant);
+ break;
+ case ATH11K_BDF_NAME_BUS_NAME:
+ scnprintf(name, name_len,
+ "bus=%s",
+ ath11k_bus_str(ab->hif.bus));
+ break;
+ case ATH11K_BDF_NAME_CHIP_ID:
+ scnprintf(name, name_len,
+ "bus=%s,qmi-chip-id=%d",
+ ath11k_bus_str(ab->hif.bus),
+ ab->qmi.target.chip_id);
+ break;
+ }
break;
default:
scnprintf(name, name_len,
@@ -1030,19 +1047,29 @@ static int __ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
static int ath11k_core_create_board_name(struct ath11k_base *ab, char *name,
size_t name_len)
{
- return __ath11k_core_create_board_name(ab, name, name_len, true, false);
+ return __ath11k_core_create_board_name(ab, name, name_len, true,
+ ATH11K_BDF_NAME_FULL);
}
static int ath11k_core_create_fallback_board_name(struct ath11k_base *ab, char *name,
size_t name_len)
{
- return __ath11k_core_create_board_name(ab, name, name_len, false, false);
+ return __ath11k_core_create_board_name(ab, name, name_len, false,
+ ATH11K_BDF_NAME_FULL);
}
static int ath11k_core_create_bus_type_board_name(struct ath11k_base *ab, char *name,
size_t name_len)
{
- return __ath11k_core_create_board_name(ab, name, name_len, false, true);
+ return __ath11k_core_create_board_name(ab, name, name_len, false,
+ ATH11K_BDF_NAME_BUS_NAME);
+}
+
+static int ath11k_core_create_chip_id_board_name(struct ath11k_base *ab, char *name,
+ size_t name_len)
+{
+ return __ath11k_core_create_board_name(ab, name, name_len, false,
+ ATH11K_BDF_NAME_CHIP_ID);
}
const struct firmware *ath11k_core_firmware_request(struct ath11k_base *ab,
@@ -1289,31 +1316,43 @@ int ath11k_core_fetch_board_data_api_1(struct ath11k_base *ab,
#define BOARD_NAME_SIZE 200
int ath11k_core_fetch_bdf(struct ath11k_base *ab, struct ath11k_board_data *bd)
{
- char boardname[BOARD_NAME_SIZE], fallback_boardname[BOARD_NAME_SIZE];
+ char *boardname = NULL, *fallback_boardname = NULL, *chip_id_boardname = NULL;
char *filename, filepath[100];
- int ret;
+ int bd_api;
+ int ret = 0;
filename = ATH11K_BOARD_API2_FILE;
+ boardname = kzalloc(BOARD_NAME_SIZE, GFP_KERNEL);
+ if (!boardname) {
+ ret = -ENOMEM;
+ goto exit;
+ }
- ret = ath11k_core_create_board_name(ab, boardname, sizeof(boardname));
+ ret = ath11k_core_create_board_name(ab, boardname, BOARD_NAME_SIZE);
if (ret) {
ath11k_err(ab, "failed to create board name: %d", ret);
- return ret;
+ goto exit;
}
- ab->bd_api = 2;
+ bd_api = 2;
ret = ath11k_core_fetch_board_data_api_n(ab, bd, boardname,
ATH11K_BD_IE_BOARD,
ATH11K_BD_IE_BOARD_NAME,
ATH11K_BD_IE_BOARD_DATA);
if (!ret)
- goto success;
+ goto exit;
+
+ fallback_boardname = kzalloc(BOARD_NAME_SIZE, GFP_KERNEL);
+ if (!fallback_boardname) {
+ ret = -ENOMEM;
+ goto exit;
+ }
ret = ath11k_core_create_fallback_board_name(ab, fallback_boardname,
- sizeof(fallback_boardname));
+ BOARD_NAME_SIZE);
if (ret) {
ath11k_err(ab, "failed to create fallback board name: %d", ret);
- return ret;
+ goto exit;
}
ret = ath11k_core_fetch_board_data_api_n(ab, bd, fallback_boardname,
@@ -1321,9 +1360,30 @@ int ath11k_core_fetch_bdf(struct ath11k_base *ab, struct ath11k_board_data *bd)
ATH11K_BD_IE_BOARD_NAME,
ATH11K_BD_IE_BOARD_DATA);
if (!ret)
- goto success;
+ goto exit;
+
+ chip_id_boardname = kzalloc(BOARD_NAME_SIZE, GFP_KERNEL);
+ if (!chip_id_boardname) {
+ ret = -ENOMEM;
+ goto exit;
+ }
- ab->bd_api = 1;
+ ret = ath11k_core_create_chip_id_board_name(ab, chip_id_boardname,
+ BOARD_NAME_SIZE);
+ if (ret) {
+ ath11k_err(ab, "failed to create chip id board name: %d", ret);
+ goto exit;
+ }
+
+ ret = ath11k_core_fetch_board_data_api_n(ab, bd, chip_id_boardname,
+ ATH11K_BD_IE_BOARD,
+ ATH11K_BD_IE_BOARD_NAME,
+ ATH11K_BD_IE_BOARD_DATA);
+
+ if (!ret)
+ goto exit;
+
+ bd_api = 1;
ret = ath11k_core_fetch_board_data_api_1(ab, bd, ATH11K_DEFAULT_BOARD_FILE);
if (ret) {
ath11k_core_create_firmware_path(ab, filename,
@@ -1334,14 +1394,22 @@ int ath11k_core_fetch_bdf(struct ath11k_base *ab, struct ath11k_board_data *bd)
ath11k_err(ab, "failed to fetch board data for %s from %s\n",
fallback_boardname, filepath);
+ ath11k_err(ab, "failed to fetch board data for %s from %s\n",
+ chip_id_boardname, filepath);
+
ath11k_err(ab, "failed to fetch board.bin from %s\n",
ab->hw_params.fw.dir);
- return ret;
}
-success:
- ath11k_dbg(ab, ATH11K_DBG_BOOT, "using board api %d\n", ab->bd_api);
- return 0;
+exit:
+ kfree(boardname);
+ kfree(fallback_boardname);
+ kfree(chip_id_boardname);
+
+ if (!ret)
+ ath11k_dbg(ab, ATH11K_DBG_BOOT, "using board api %d\n", bd_api);
+
+ return ret;
}
int ath11k_core_fetch_regdb(struct ath11k_base *ab, struct ath11k_board_data *bd)
@@ -2005,6 +2073,12 @@ int ath11k_core_pre_init(struct ath11k_base *ab)
return ret;
}
+ ret = ath11k_fw_pre_init(ab);
+ if (ret) {
+ ath11k_err(ab, "failed to pre init firmware: %d", ret);
+ return ret;
+ }
+
return 0;
}
EXPORT_SYMBOL(ath11k_core_pre_init);
@@ -2035,6 +2109,7 @@ void ath11k_core_deinit(struct ath11k_base *ab)
ath11k_hif_power_down(ab);
ath11k_mac_destroy(ab);
ath11k_core_soc_destroy(ab);
+ ath11k_fw_destroy(ab);
}
EXPORT_SYMBOL(ath11k_core_deinit);
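
Note: ath11k_core_fetch_bdf() above moves three 200-byte board-name buffers off the kernel stack and funnels every path through a single exit label that frees whatever was allocated (kfree(NULL) is a no-op, so buffers that were never allocated are safe to pass). A small user-space sketch of the same allocate-as-you-go / free-everything-at-exit shape, with hypothetical names and lookup strings:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NAME_SIZE 200

/* Hypothetical stand-in for one board-file lookup attempt. */
static int try_fetch(const char *name)
{
	return strcmp(name, "bus=pci,qmi-chip-id=2") == 0 ? 0 : -1;
}

static int fetch_board_data(void)
{
	char *primary = NULL, *fallback = NULL;
	int ret;

	primary = calloc(1, NAME_SIZE);
	if (!primary) {
		ret = -1;
		goto exit;
	}
	snprintf(primary, NAME_SIZE, "bus=pci,vendor=%04x,device=%04x", 0x17cb, 0x1103);
	ret = try_fetch(primary);
	if (!ret)
		goto exit;

	fallback = calloc(1, NAME_SIZE);
	if (!fallback) {
		ret = -1;
		goto exit;
	}
	snprintf(fallback, NAME_SIZE, "bus=pci,qmi-chip-id=%d", 2);
	ret = try_fetch(fallback);

exit:
	/* free(NULL) is a no-op, mirroring kfree(NULL) in the driver code. */
	free(primary);
	free(fallback);
	return ret;
}

int main(void)
{
	printf("fetch_board_data() = %d\n", fetch_board_data());
	return 0;
}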
diff --git a/drivers/net/wireless/ath/ath11k/core.h b/drivers/net/wireless/ath/ath11k/core.h
index b04447762483..f12b606e2d2e 100644
--- a/drivers/net/wireless/ath/ath11k/core.h
+++ b/drivers/net/wireless/ath/ath11k/core.h
@@ -15,6 +15,8 @@
#include <linux/ctype.h>
#include <linux/rhashtable.h>
#include <linux/average.h>
+#include <linux/firmware.h>
+
#include "qmi.h"
#include "htc.h"
#include "wmi.h"
@@ -29,6 +31,7 @@
#include "dbring.h"
#include "spectral.h"
#include "wow.h"
+#include "fw.h"
#define SM(_v, _f) (((_v) << _f##_LSB) & _f##_MASK)
@@ -901,14 +904,11 @@ struct ath11k_base {
struct list_head peers;
wait_queue_head_t peer_mapping_wq;
u8 mac_addr[ETH_ALEN];
- bool wmi_ready;
- u32 wlan_init_status;
int irq_num[ATH11K_IRQ_NUM_MAX];
struct ath11k_ext_irq_grp ext_irq_grp[ATH11K_EXT_IRQ_GRP_NUM_MAX];
struct ath11k_targ_cap target_caps;
u32 ext_service_bitmap[WMI_SERVICE_EXT_BM_SIZE];
bool pdevs_macaddr_valid;
- int bd_api;
struct ath11k_hw_params hw_params;
@@ -984,6 +984,18 @@ struct ath11k_base {
const struct ath11k_pci_ops *ops;
} pci;
+ struct {
+ u32 api_version;
+
+ const struct firmware *fw;
+ const u8 *amss_data;
+ size_t amss_len;
+ const u8 *m3_data;
+ size_t m3_len;
+
+ DECLARE_BITMAP(fw_features, ATH11K_FW_FEATURE_COUNT);
+ } fw;
+
#ifdef CONFIG_NL80211_TESTMODE
struct {
u32 data_pos;
@@ -1225,6 +1237,11 @@ static inline struct ath11k_vif *ath11k_vif_to_arvif(struct ieee80211_vif *vif)
return (struct ath11k_vif *)vif->drv_priv;
}
+static inline struct ath11k_sta *ath11k_sta_to_arsta(struct ieee80211_sta *sta)
+{
+ return (struct ath11k_sta *)sta->drv_priv;
+}
+
static inline struct ath11k *ath11k_ab_to_ar(struct ath11k_base *ab,
int mac_id)
{
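
Note: the new ath11k_sta_to_arsta() helper in core.h replaces the repeated (struct ath11k_sta *)sta->drv_priv casts used throughout the driver; mac80211 reserves drv_priv as a per-station scratch area whose size the driver declares via hw->sta_data_size. A rough user-space model of the pattern, with invented type names:

#include <stdio.h>
#include <stdlib.h>

/* Stack-owned object with an opaque per-driver area at the end,
 * loosely modelling ieee80211_sta::drv_priv.
 */
struct stack_sta {
	int id;
	char drv_priv[64] __attribute__((aligned(8)));
};

/* Driver-private state that lives inside drv_priv. */
struct drv_sta {
	unsigned long tx_bytes;
};

/* Typed accessor: one place to keep the cast, like ath11k_sta_to_arsta(). */
static inline struct drv_sta *sta_to_drv(struct stack_sta *sta)
{
	return (struct drv_sta *)sta->drv_priv;
}

int main(void)
{
	struct stack_sta *sta = calloc(1, sizeof(*sta));

	if (!sta)
		return 1;

	sta_to_drv(sta)->tx_bytes += 1500;
	printf("tx_bytes=%lu\n", sta_to_drv(sta)->tx_bytes);
	free(sta);
	return 0;
}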
diff --git a/drivers/net/wireless/ath/ath11k/debugfs.c b/drivers/net/wireless/ath/ath11k/debugfs.c
index 5bb6fd17fdf6..be76e7d1c436 100644
--- a/drivers/net/wireless/ath/ath11k/debugfs.c
+++ b/drivers/net/wireless/ath/ath11k/debugfs.c
@@ -1459,7 +1459,7 @@ static void ath11k_reset_peer_ps_duration(void *data,
struct ieee80211_sta *sta)
{
struct ath11k *ar = data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
spin_lock_bh(&ar->data_lock);
arsta->ps_total_duration = 0;
@@ -1510,7 +1510,7 @@ static void ath11k_peer_ps_state_disable(void *data,
struct ieee80211_sta *sta)
{
struct ath11k *ar = data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
spin_lock_bh(&ar->data_lock);
arsta->peer_ps_state = WMI_PEER_PS_STATE_DISABLED;
@@ -1591,10 +1591,10 @@ static const struct file_operations fops_ps_state_enable = {
int ath11k_debugfs_register(struct ath11k *ar)
{
struct ath11k_base *ab = ar->ab;
- char pdev_name[5];
+ char pdev_name[10];
char buf[100] = {0};
- snprintf(pdev_name, sizeof(pdev_name), "%s%d", "mac", ar->pdev_idx);
+ snprintf(pdev_name, sizeof(pdev_name), "%s%u", "mac", ar->pdev_idx);
ar->debug.debugfs_pdev = debugfs_create_dir(pdev_name, ab->debugfs_soc);
if (IS_ERR(ar->debug.debugfs_pdev))
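
Note: the debugfs.c hunk above grows pdev_name from 5 to 10 bytes and prints the index with %u. With a five-byte buffer only a single-digit index fits next to "mac", and newer compilers can flag the call for possible truncation. A short user-space demonstration of how snprintf truncates; the index value is illustrative:

#include <stdio.h>

int main(void)
{
	char small[5];
	char big[10];
	unsigned int pdev_idx = 12;	/* hypothetical two-digit radio index */
	int n;

	/* snprintf always NUL-terminates but truncates: "mac12" needs 6 bytes. */
	n = snprintf(small, sizeof(small), "%s%u", "mac", pdev_idx);
	printf("small: \"%s\" (wanted %d bytes incl. NUL)\n", small, n + 1);

	n = snprintf(big, sizeof(big), "%s%u", "mac", pdev_idx);
	printf("big:   \"%s\" (wanted %d bytes incl. NUL)\n", big, n + 1);

	return 0;
}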
diff --git a/drivers/net/wireless/ath/ath11k/debugfs_sta.c b/drivers/net/wireless/ath/ath11k/debugfs_sta.c
index 9cc4ef28e751..8c177fba6f14 100644
--- a/drivers/net/wireless/ath/ath11k/debugfs_sta.c
+++ b/drivers/net/wireless/ath/ath11k/debugfs_sta.c
@@ -136,7 +136,7 @@ static ssize_t ath11k_dbg_sta_dump_tx_stats(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
struct ath11k_htt_data_stats *stats;
static const char *str_name[ATH11K_STATS_TYPE_MAX] = {"succ", "fail",
@@ -243,7 +243,7 @@ static ssize_t ath11k_dbg_sta_dump_rx_stats(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
struct ath11k_rx_peer_stats *rx_stats = arsta->rx_stats;
int len = 0, i, retval = 0;
@@ -340,7 +340,7 @@ static int
ath11k_dbg_sta_open_htt_peer_stats(struct inode *inode, struct file *file)
{
struct ieee80211_sta *sta = inode->i_private;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
struct debug_htt_stats_req *stats_req;
int type = ar->debug.htt_stats.type;
@@ -376,7 +376,7 @@ static int
ath11k_dbg_sta_release_htt_peer_stats(struct inode *inode, struct file *file)
{
struct ieee80211_sta *sta = inode->i_private;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
mutex_lock(&ar->conf_mutex);
@@ -413,7 +413,7 @@ static ssize_t ath11k_dbg_sta_write_peer_pktlog(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
int ret, enable;
@@ -453,7 +453,7 @@ static ssize_t ath11k_dbg_sta_read_peer_pktlog(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
char buf[32] = {0};
int len;
@@ -480,7 +480,7 @@ static ssize_t ath11k_dbg_sta_write_delba(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
u32 tid, initiator, reason;
int ret;
@@ -531,7 +531,7 @@ static ssize_t ath11k_dbg_sta_write_addba_resp(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
u32 tid, status;
int ret;
@@ -581,7 +581,7 @@ static ssize_t ath11k_dbg_sta_write_addba(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
u32 tid, buf_size;
int ret;
@@ -632,7 +632,7 @@ static ssize_t ath11k_dbg_sta_read_aggr_mode(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
char buf[64];
int len = 0;
@@ -652,7 +652,7 @@ static ssize_t ath11k_dbg_sta_write_aggr_mode(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
u32 aggr_mode;
int ret;
@@ -697,7 +697,7 @@ ath11k_write_htt_peer_stats_reset(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
struct htt_ext_stats_cfg_params cfg_params = { 0 };
int ret;
@@ -756,7 +756,7 @@ static ssize_t ath11k_dbg_sta_read_peer_ps_state(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
char buf[20];
int len;
@@ -783,7 +783,7 @@ static ssize_t ath11k_dbg_sta_read_current_ps_duration(struct file *file,
loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
u64 time_since_station_in_power_save;
char buf[20];
@@ -817,7 +817,7 @@ static ssize_t ath11k_dbg_sta_read_total_ps_duration(struct file *file,
size_t count, loff_t *ppos)
{
struct ieee80211_sta *sta = file->private_data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
char buf[20];
u64 power_save_duration;
diff --git a/drivers/net/wireless/ath/ath11k/dp.c b/drivers/net/wireless/ath/ath11k/dp.c
index d070bcb3fe24..a7252b52555c 100644
--- a/drivers/net/wireless/ath/ath11k/dp.c
+++ b/drivers/net/wireless/ath/ath11k/dp.c
@@ -1009,7 +1009,7 @@ void ath11k_dp_vdev_tx_attach(struct ath11k *ar, struct ath11k_vif *arvif)
static int ath11k_dp_tx_pending_cleanup(int buf_id, void *skb, void *ctx)
{
- struct ath11k_base *ab = (struct ath11k_base *)ctx;
+ struct ath11k_base *ab = ctx;
struct sk_buff *msdu = skb;
dma_unmap_single(ab->dev, ATH11K_SKB_CB(msdu)->paddr, msdu->len,
diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
index 62bc98852f0f..7eac93ce7a1d 100644
--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
+++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
@@ -1099,7 +1099,7 @@ int ath11k_dp_rx_ampdu_start(struct ath11k *ar,
struct ieee80211_ampdu_params *params)
{
struct ath11k_base *ab = ar->ab;
- struct ath11k_sta *arsta = (void *)params->sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(params->sta);
int vdev_id = arsta->arvif->vdev_id;
int ret;
@@ -1117,7 +1117,7 @@ int ath11k_dp_rx_ampdu_stop(struct ath11k *ar,
{
struct ath11k_base *ab = ar->ab;
struct ath11k_peer *peer;
- struct ath11k_sta *arsta = (void *)params->sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(params->sta);
int vdev_id = arsta->arvif->vdev_id;
dma_addr_t paddr;
bool active;
@@ -1256,7 +1256,7 @@ static int ath11k_htt_tlv_ppdu_stats_parse(struct ath11k_base *ab,
int cur_user;
u16 peer_id;
- ppdu_info = (struct htt_ppdu_stats_info *)data;
+ ppdu_info = data;
switch (tag) {
case HTT_PPDU_STATS_TAG_COMMON:
@@ -1388,9 +1388,6 @@ ath11k_update_per_peer_tx_stats(struct ath11k *ar,
u8 tid = HTT_PPDU_STATS_NON_QOS_TID;
bool is_ampdu = false;
- if (!usr_stats)
- return;
-
if (!(usr_stats->tlv_flags & BIT(HTT_PPDU_STATS_TAG_USR_RATE)))
return;
@@ -1459,7 +1456,7 @@ ath11k_update_per_peer_tx_stats(struct ath11k *ar,
}
sta = peer->sta;
- arsta = (struct ath11k_sta *)sta->drv_priv;
+ arsta = ath11k_sta_to_arsta(sta);
memset(&arsta->txrate, 0, sizeof(arsta->txrate));
@@ -1621,14 +1618,20 @@ static void ath11k_htt_pktlog(struct ath11k_base *ab, struct sk_buff *skb)
u8 pdev_id;
pdev_id = FIELD_GET(HTT_T2H_PPDU_STATS_INFO_PDEV_ID, data->hdr);
+
+ rcu_read_lock();
+
ar = ath11k_mac_get_ar_by_pdev_id(ab, pdev_id);
if (!ar) {
ath11k_warn(ab, "invalid pdev id %d on htt pktlog\n", pdev_id);
- return;
+ goto out;
}
trace_ath11k_htt_pktlog(ar, data->payload, hdr->size,
ar->ab->pktlog_defs_checksum);
+
+out:
+ rcu_read_unlock();
}
static void ath11k_htt_backpressure_event_handler(struct ath11k_base *ab,
@@ -4489,8 +4492,7 @@ int ath11k_dp_rx_monitor_link_desc_return(struct ath11k *ar,
src_srng_desc = ath11k_hal_srng_src_get_next_entry(ar->ab, hal_srng);
if (src_srng_desc) {
- struct ath11k_buffer_addr *src_desc =
- (struct ath11k_buffer_addr *)src_srng_desc;
+ struct ath11k_buffer_addr *src_desc = src_srng_desc;
*src_desc = *((struct ath11k_buffer_addr *)p_last_buf_addr_info);
} else {
@@ -4509,8 +4511,7 @@ void ath11k_dp_rx_mon_next_link_desc_get(void *rx_msdu_link_desc,
u8 *rbm,
void **pp_buf_addr_info)
{
- struct hal_rx_msdu_link *msdu_link =
- (struct hal_rx_msdu_link *)rx_msdu_link_desc;
+ struct hal_rx_msdu_link *msdu_link = rx_msdu_link_desc;
struct ath11k_buffer_addr *buf_addr_info;
buf_addr_info = (struct ath11k_buffer_addr *)&msdu_link->buf_addr_info;
@@ -4551,7 +4552,7 @@ static void ath11k_hal_rx_msdu_list_get(struct ath11k *ar,
u32 first = FIELD_PREP(RX_MSDU_DESC_INFO0_FIRST_MSDU_IN_MPDU, 1);
u8 tmp = 0;
- msdu_link = (struct hal_rx_msdu_link *)msdu_link_desc;
+ msdu_link = msdu_link_desc;
msdu_details = &msdu_link->msdu_link[0];
for (i = 0; i < HAL_RX_NUM_MSDU_DESC; i++) {
@@ -4648,8 +4649,7 @@ ath11k_dp_rx_mon_mpdu_pop(struct ath11k *ar, int mac_id,
bool is_frag, is_first_msdu;
bool drop_mpdu = false;
struct ath11k_skb_rxcb *rxcb;
- struct hal_reo_entrance_ring *ent_desc =
- (struct hal_reo_entrance_ring *)ring_entry;
+ struct hal_reo_entrance_ring *ent_desc = ring_entry;
int buf_id;
u32 rx_link_buf_info[2];
u8 rbm;
@@ -5097,13 +5097,6 @@ static void ath11k_dp_rx_mon_dest_process(struct ath11k *ar, int mac_id,
mon_dst_srng = &ar->ab->hal.srng_list[ring_id];
- if (!mon_dst_srng) {
- ath11k_warn(ar->ab,
- "HAL Monitor Destination Ring Init Failed -- %p",
- mon_dst_srng);
- return;
- }
-
spin_lock_bh(&pmon->mon_lock);
ath11k_hal_srng_access_begin(ar->ab, mon_dst_srng);
@@ -5255,7 +5248,7 @@ int ath11k_dp_rx_process_mon_status(struct ath11k_base *ab, int mac_id,
goto next_skb;
}
- arsta = (struct ath11k_sta *)peer->sta->drv_priv;
+ arsta = ath11k_sta_to_arsta(peer->sta);
ath11k_dp_rx_update_peer_stats(arsta, ppdu_info);
if (ath11k_debugfs_is_pktlog_peer_valid(ar, peer->addr))
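
Note: the dp_rx.c pktlog hunk wraps the pdev-id lookup and the trace call in rcu_read_lock()/rcu_read_unlock(), so the ar pointer returned by ath11k_mac_get_ar_by_pdev_id() cannot be reclaimed while it is still being dereferenced. A kernel-style sketch of the general read-side shape; lookup_radio() and use_radio() are made-up placeholders for an RCU-protected lookup and its consumer:

#include <linux/rcupdate.h>

struct radio;
struct radio *lookup_radio(int pdev_id);	/* assumed RCU-protected lookup */
void use_radio(struct radio *r);

static void handle_event(int pdev_id)
{
	struct radio *r;

	rcu_read_lock();

	r = lookup_radio(pdev_id);
	if (!r)
		goto out;

	/* Safe to dereference r here: reclamation is deferred until every
	 * reader that entered before the update has left its critical section.
	 */
	use_radio(r);

out:
	rcu_read_unlock();
}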
diff --git a/drivers/net/wireless/ath/ath11k/dp_tx.c b/drivers/net/wireless/ath/ath11k/dp_tx.c
index 0dda76f7a4b5..a5fa08bc623b 100644
--- a/drivers/net/wireless/ath/ath11k/dp_tx.c
+++ b/drivers/net/wireless/ath/ath11k/dp_tx.c
@@ -467,7 +467,7 @@ void ath11k_dp_tx_update_txcompl(struct ath11k *ar, struct hal_tx_status *ts)
}
sta = peer->sta;
- arsta = (struct ath11k_sta *)sta->drv_priv;
+ arsta = ath11k_sta_to_arsta(sta);
memset(&arsta->txrate, 0, sizeof(arsta->txrate));
pkt_type = FIELD_GET(HAL_TX_RATE_STATS_INFO0_PKT_TYPE,
@@ -627,7 +627,7 @@ static void ath11k_dp_tx_complete_msdu(struct ath11k *ar,
ieee80211_free_txskb(ar->hw, msdu);
return;
}
- arsta = (struct ath11k_sta *)peer->sta->drv_priv;
+ arsta = ath11k_sta_to_arsta(peer->sta);
status.sta = peer->sta;
status.skb = msdu;
status.info = info;
diff --git a/drivers/net/wireless/ath/ath11k/fw.c b/drivers/net/wireless/ath/ath11k/fw.c
new file mode 100644
index 000000000000..8f84fba29886
--- /dev/null
+++ b/drivers/net/wireless/ath/ath11k/fw.c
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: BSD-3-Clause-Clear
+/*
+ * Copyright (c) 2022-2023, Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include "core.h"
+
+#include "debug.h"
+
+static int ath11k_fw_request_firmware_api_n(struct ath11k_base *ab,
+ const char *name)
+{
+ size_t magic_len, len, ie_len;
+ int ie_id, i, index, bit, ret;
+ struct ath11k_fw_ie *hdr;
+ const u8 *data;
+ __le32 *timestamp;
+
+ ab->fw.fw = ath11k_core_firmware_request(ab, name);
+ if (IS_ERR(ab->fw.fw)) {
+ ret = PTR_ERR(ab->fw.fw);
+ ath11k_dbg(ab, ATH11K_DBG_BOOT, "failed to load %s: %d\n", name, ret);
+ ab->fw.fw = NULL;
+ return ret;
+ }
+
+ data = ab->fw.fw->data;
+ len = ab->fw.fw->size;
+
+ /* magic also includes the null byte, check that as well */
+ magic_len = strlen(ATH11K_FIRMWARE_MAGIC) + 1;
+
+ if (len < magic_len) {
+ ath11k_err(ab, "firmware image too small to contain magic: %zu\n",
+ len);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ if (memcmp(data, ATH11K_FIRMWARE_MAGIC, magic_len) != 0) {
+ ath11k_err(ab, "Invalid firmware magic\n");
+ ret = -EINVAL;
+ goto err;
+ }
+
+ /* jump over the padding */
+ magic_len = ALIGN(magic_len, 4);
+
+ /* make sure there's space for padding */
+ if (magic_len > len) {
+ ath11k_err(ab, "No space for padding after magic\n");
+ ret = -EINVAL;
+ goto err;
+ }
+
+ len -= magic_len;
+ data += magic_len;
+
+ /* loop elements */
+ while (len > sizeof(struct ath11k_fw_ie)) {
+ hdr = (struct ath11k_fw_ie *)data;
+
+ ie_id = le32_to_cpu(hdr->id);
+ ie_len = le32_to_cpu(hdr->len);
+
+ len -= sizeof(*hdr);
+ data += sizeof(*hdr);
+
+ if (len < ie_len) {
+ ath11k_err(ab, "Invalid length for FW IE %d (%zu < %zu)\n",
+ ie_id, len, ie_len);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ switch (ie_id) {
+ case ATH11K_FW_IE_TIMESTAMP:
+ if (ie_len != sizeof(u32))
+ break;
+
+ timestamp = (__le32 *)data;
+
+ ath11k_dbg(ab, ATH11K_DBG_BOOT, "found fw timestamp %d\n",
+ le32_to_cpup(timestamp));
+ break;
+ case ATH11K_FW_IE_FEATURES:
+ ath11k_dbg(ab, ATH11K_DBG_BOOT,
+ "found firmware features ie (%zd B)\n",
+ ie_len);
+
+ for (i = 0; i < ATH11K_FW_FEATURE_COUNT; i++) {
+ index = i / 8;
+ bit = i % 8;
+
+ if (index == ie_len)
+ break;
+
+ if (data[index] & (1 << bit))
+ __set_bit(i, ab->fw.fw_features);
+ }
+
+ ath11k_dbg_dump(ab, ATH11K_DBG_BOOT, "features", "",
+ ab->fw.fw_features,
+ sizeof(ab->fw.fw_features));
+ break;
+ case ATH11K_FW_IE_AMSS_IMAGE:
+ ath11k_dbg(ab, ATH11K_DBG_BOOT,
+ "found fw image ie (%zd B)\n",
+ ie_len);
+
+ ab->fw.amss_data = data;
+ ab->fw.amss_len = ie_len;
+ break;
+ case ATH11K_FW_IE_M3_IMAGE:
+ ath11k_dbg(ab, ATH11K_DBG_BOOT,
+ "found m3 image ie (%zd B)\n",
+ ie_len);
+
+ ab->fw.m3_data = data;
+ ab->fw.m3_len = ie_len;
+ break;
+ default:
+ ath11k_warn(ab, "Unknown FW IE: %u\n", ie_id);
+ break;
+ }
+
+ /* jump over the padding */
+ ie_len = ALIGN(ie_len, 4);
+
+ /* make sure there's space for padding */
+ if (ie_len > len)
+ break;
+
+ len -= ie_len;
+ data += ie_len;
+ }
+
+ return 0;
+
+err:
+ release_firmware(ab->fw.fw);
+ ab->fw.fw = NULL;
+ return ret;
+}
+
+int ath11k_fw_pre_init(struct ath11k_base *ab)
+{
+ int ret;
+
+ ret = ath11k_fw_request_firmware_api_n(ab, ATH11K_FW_API2_FILE);
+ if (ret == 0) {
+ ab->fw.api_version = 2;
+ goto out;
+ }
+
+ ab->fw.api_version = 1;
+
+out:
+ ath11k_dbg(ab, ATH11K_DBG_BOOT, "using fw api %d\n",
+ ab->fw.api_version);
+
+ return 0;
+}
+
+void ath11k_fw_destroy(struct ath11k_base *ab)
+{
+ release_firmware(ab->fw.fw);
+}
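
Note: the new fw.c parses a firmware-2.bin style container: an ASCII magic string (including its NUL) padded to a 4-byte boundary, followed by a stream of little-endian (id, len) headers, each carrying len payload bytes that are again padded to 4 bytes. A self-contained user-space walker over the same layout, assuming a little-endian host (the driver uses le32_to_cpu()); the sample blob is fabricated for the demo:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FW_MAGIC "QCOM-ATH11K-FW"	/* same string as ATH11K_FIRMWARE_MAGIC */
#define ALIGN4(x) (((x) + 3u) & ~3u)

/* Walk the IE stream and print each element; returns 0 on success. */
static int walk_fw(const uint8_t *data, size_t len)
{
	size_t magic_len = strlen(FW_MAGIC) + 1;	/* magic includes the NUL */

	if (len < magic_len || memcmp(data, FW_MAGIC, magic_len))
		return -1;

	magic_len = ALIGN4(magic_len);
	if (magic_len > len)
		return -1;
	data += magic_len;
	len -= magic_len;

	while (len >= 8) {		/* 8 = sizeof(id) + sizeof(len) */
		uint32_t id, ie_len;

		memcpy(&id, data, 4);
		memcpy(&ie_len, data + 4, 4);
		data += 8;
		len -= 8;

		if (ie_len > len)
			return -1;

		printf("IE id=%u len=%u\n", id, ie_len);

		ie_len = ALIGN4(ie_len);
		if (ie_len > len)
			break;
		data += ie_len;
		len -= ie_len;
	}

	return 0;
}

int main(void)
{
	/* 14-byte magic + NUL padded to 16 bytes, then one 4-byte timestamp IE. */
	static const uint8_t blob[] = {
		'Q','C','O','M','-','A','T','H','1','1','K','-','F','W',0,0,
		0,0,0,0,  4,0,0,0,  0x04,0x03,0x02,0x01,
	};

	return walk_fw(blob, sizeof(blob)) ? 1 : 0;
}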
diff --git a/drivers/net/wireless/ath/ath11k/fw.h b/drivers/net/wireless/ath/ath11k/fw.h
new file mode 100644
index 000000000000..d9893ceb2c3d
--- /dev/null
+++ b/drivers/net/wireless/ath/ath11k/fw.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause-Clear */
+/*
+ * Copyright (c) 2022-2023, Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef ATH11K_FW_H
+#define ATH11K_FW_H
+
+#define ATH11K_FW_API2_FILE "firmware-2.bin"
+#define ATH11K_FIRMWARE_MAGIC "QCOM-ATH11K-FW"
+
+enum ath11k_fw_ie_type {
+ ATH11K_FW_IE_TIMESTAMP = 0,
+ ATH11K_FW_IE_FEATURES = 1,
+ ATH11K_FW_IE_AMSS_IMAGE = 2,
+ ATH11K_FW_IE_M3_IMAGE = 3,
+};
+
+enum ath11k_fw_features {
+ /* keep last */
+ ATH11K_FW_FEATURE_COUNT,
+};
+
+int ath11k_fw_pre_init(struct ath11k_base *ab);
+void ath11k_fw_destroy(struct ath11k_base *ab);
+
+#endif /* ATH11K_FW_H */
diff --git a/drivers/net/wireless/ath/ath11k/hal.c b/drivers/net/wireless/ath/ath11k/hal.c
index 0a99aa7ddbf4..23f3af8e372d 100644
--- a/drivers/net/wireless/ath/ath11k/hal.c
+++ b/drivers/net/wireless/ath/ath11k/hal.c
@@ -571,7 +571,7 @@ u32 ath11k_hal_ce_get_desc_size(enum hal_ce_desc type)
void ath11k_hal_ce_src_set_desc(void *buf, dma_addr_t paddr, u32 len, u32 id,
u8 byte_swap_data)
{
- struct hal_ce_srng_src_desc *desc = (struct hal_ce_srng_src_desc *)buf;
+ struct hal_ce_srng_src_desc *desc = buf;
desc->buffer_addr_low = paddr & HAL_ADDR_LSB_REG_MASK;
desc->buffer_addr_info =
@@ -586,8 +586,7 @@ void ath11k_hal_ce_src_set_desc(void *buf, dma_addr_t paddr, u32 len, u32 id,
void ath11k_hal_ce_dst_set_desc(void *buf, dma_addr_t paddr)
{
- struct hal_ce_srng_dest_desc *desc =
- (struct hal_ce_srng_dest_desc *)buf;
+ struct hal_ce_srng_dest_desc *desc = buf;
desc->buffer_addr_low = paddr & HAL_ADDR_LSB_REG_MASK;
desc->buffer_addr_info =
@@ -597,8 +596,7 @@ void ath11k_hal_ce_dst_set_desc(void *buf, dma_addr_t paddr)
u32 ath11k_hal_ce_dst_status_get_length(void *buf)
{
- struct hal_ce_srng_dst_status_desc *desc =
- (struct hal_ce_srng_dst_status_desc *)buf;
+ struct hal_ce_srng_dst_status_desc *desc = buf;
u32 len;
len = FIELD_GET(HAL_CE_DST_STATUS_DESC_FLAGS_LEN, desc->flags);
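
Note: the hunks in dp.c, hal.c and the hal_rx.c/hal_tx.c/spectral.c changes further down all drop explicit casts when assigning from a void * parameter. In C (unlike C++) a void * converts implicitly to any object pointer type, so the casts were pure noise. A tiny user-space illustration:

#include <stdio.h>

struct descriptor {
	unsigned int flags;
};

static unsigned int get_flags(void *buf)
{
	/* No cast needed: void * converts implicitly to struct descriptor *. */
	struct descriptor *desc = buf;

	return desc->flags;
}

int main(void)
{
	struct descriptor d = { .flags = 0x2a };

	printf("flags=0x%x\n", get_flags(&d));
	return 0;
}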
diff --git a/drivers/net/wireless/ath/ath11k/hal_rx.c b/drivers/net/wireless/ath/ath11k/hal_rx.c
index e5ed5efb139e..41946795d620 100644
--- a/drivers/net/wireless/ath/ath11k/hal_rx.c
+++ b/drivers/net/wireless/ath/ath11k/hal_rx.c
@@ -265,7 +265,7 @@ out:
void ath11k_hal_rx_buf_addr_info_set(void *desc, dma_addr_t paddr,
u32 cookie, u8 manager)
{
- struct ath11k_buffer_addr *binfo = (struct ath11k_buffer_addr *)desc;
+ struct ath11k_buffer_addr *binfo = desc;
u32 paddr_lo, paddr_hi;
paddr_lo = lower_32_bits(paddr);
@@ -279,7 +279,7 @@ void ath11k_hal_rx_buf_addr_info_set(void *desc, dma_addr_t paddr,
void ath11k_hal_rx_buf_addr_info_get(void *desc, dma_addr_t *paddr,
u32 *cookie, u8 *rbm)
{
- struct ath11k_buffer_addr *binfo = (struct ath11k_buffer_addr *)desc;
+ struct ath11k_buffer_addr *binfo = desc;
*paddr =
(((u64)FIELD_GET(BUFFER_ADDR_INFO1_ADDR, binfo->info1)) << 32) |
@@ -292,7 +292,7 @@ void ath11k_hal_rx_msdu_link_info_get(void *link_desc, u32 *num_msdus,
u32 *msdu_cookies,
enum hal_rx_buf_return_buf_manager *rbm)
{
- struct hal_rx_msdu_link *link = (struct hal_rx_msdu_link *)link_desc;
+ struct hal_rx_msdu_link *link = link_desc;
struct hal_rx_msdu_details *msdu;
int i;
@@ -699,7 +699,7 @@ u32 ath11k_hal_reo_qdesc_size(u32 ba_window_size, u8 tid)
void ath11k_hal_reo_qdesc_setup(void *vaddr, int tid, u32 ba_window_size,
u32 start_seq, enum hal_pn_type type)
{
- struct hal_rx_reo_queue *qdesc = (struct hal_rx_reo_queue *)vaddr;
+ struct hal_rx_reo_queue *qdesc = vaddr;
struct hal_rx_reo_queue_ext *ext_desc;
memset(qdesc, 0, sizeof(*qdesc));
@@ -809,27 +809,25 @@ static inline void
ath11k_hal_rx_handle_ofdma_info(void *rx_tlv,
struct hal_rx_user_status *rx_user_status)
{
- struct hal_rx_ppdu_end_user_stats *ppdu_end_user =
- (struct hal_rx_ppdu_end_user_stats *)rx_tlv;
+ struct hal_rx_ppdu_end_user_stats *ppdu_end_user = rx_tlv;
rx_user_status->ul_ofdma_user_v0_word0 = __le32_to_cpu(ppdu_end_user->info6);
- rx_user_status->ul_ofdma_user_v0_word1 = __le32_to_cpu(ppdu_end_user->rsvd2[10]);
+ rx_user_status->ul_ofdma_user_v0_word1 = __le32_to_cpu(ppdu_end_user->info10);
}
static inline void
ath11k_hal_rx_populate_byte_count(void *rx_tlv, void *ppduinfo,
struct hal_rx_user_status *rx_user_status)
{
- struct hal_rx_ppdu_end_user_stats *ppdu_end_user =
- (struct hal_rx_ppdu_end_user_stats *)rx_tlv;
+ struct hal_rx_ppdu_end_user_stats *ppdu_end_user = rx_tlv;
rx_user_status->mpdu_ok_byte_count =
- FIELD_GET(HAL_RX_PPDU_END_USER_STATS_RSVD2_6_MPDU_OK_BYTE_COUNT,
- __le32_to_cpu(ppdu_end_user->rsvd2[6]));
+ FIELD_GET(HAL_RX_PPDU_END_USER_STATS_INFO8_MPDU_OK_BYTE_COUNT,
+ __le32_to_cpu(ppdu_end_user->info8));
rx_user_status->mpdu_err_byte_count =
- FIELD_GET(HAL_RX_PPDU_END_USER_STATS_RSVD2_8_MPDU_ERR_BYTE_COUNT,
- __le32_to_cpu(ppdu_end_user->rsvd2[8]));
+ FIELD_GET(HAL_RX_PPDU_END_USER_STATS_INFO9_MPDU_ERR_BYTE_COUNT,
+ __le32_to_cpu(ppdu_end_user->info9));
}
static inline void
@@ -903,8 +901,8 @@ ath11k_hal_rx_parse_mon_status_tlv(struct ath11k_base *ab,
FIELD_GET(HAL_RX_PPDU_END_USER_STATS_INFO2_AST_INDEX,
__le32_to_cpu(eu_stats->info2));
ppdu_info->tid =
- ffs(FIELD_GET(HAL_RX_PPDU_END_USER_STATS_INFO6_TID_BITMAP,
- __le32_to_cpu(eu_stats->info6))) - 1;
+ ffs(FIELD_GET(HAL_RX_PPDU_END_USER_STATS_INFO7_TID_BITMAP,
+ __le32_to_cpu(eu_stats->info7))) - 1;
ppdu_info->tcp_msdu_count =
FIELD_GET(HAL_RX_PPDU_END_USER_STATS_INFO4_TCP_MSDU_CNT,
__le32_to_cpu(eu_stats->info4));
@@ -1540,8 +1538,7 @@ void ath11k_hal_rx_reo_ent_buf_paddr_get(void *rx_desc, dma_addr_t *paddr,
u32 *sw_cookie, void **pp_buf_addr,
u8 *rbm, u32 *msdu_cnt)
{
- struct hal_reo_entrance_ring *reo_ent_ring =
- (struct hal_reo_entrance_ring *)rx_desc;
+ struct hal_reo_entrance_ring *reo_ent_ring = rx_desc;
struct ath11k_buffer_addr *buf_addr_info;
struct rx_mpdu_desc *rx_mpdu_desc_info_details;
diff --git a/drivers/net/wireless/ath/ath11k/hal_rx.h b/drivers/net/wireless/ath/ath11k/hal_rx.h
index 61bd8416c4fd..472a52cf5889 100644
--- a/drivers/net/wireless/ath/ath11k/hal_rx.h
+++ b/drivers/net/wireless/ath/ath11k/hal_rx.h
@@ -149,7 +149,7 @@ struct hal_rx_mon_ppdu_info {
u8 beamformed;
u8 rssi_comb;
u8 rssi_chain_pri20[HAL_RX_MAX_NSS];
- u8 tid;
+ u16 tid;
u16 ht_flags;
u16 vht_flags;
u16 he_flags;
@@ -219,11 +219,11 @@ struct hal_rx_ppdu_start {
#define HAL_RX_PPDU_END_USER_STATS_INFO5_OTHER_MSDU_CNT GENMASK(15, 0)
#define HAL_RX_PPDU_END_USER_STATS_INFO5_TCP_ACK_MSDU_CNT GENMASK(31, 16)
-#define HAL_RX_PPDU_END_USER_STATS_INFO6_TID_BITMAP GENMASK(15, 0)
-#define HAL_RX_PPDU_END_USER_STATS_INFO6_TID_EOSP_BITMAP GENMASK(31, 16)
+#define HAL_RX_PPDU_END_USER_STATS_INFO7_TID_BITMAP GENMASK(15, 0)
+#define HAL_RX_PPDU_END_USER_STATS_INFO7_TID_EOSP_BITMAP GENMASK(31, 16)
-#define HAL_RX_PPDU_END_USER_STATS_RSVD2_6_MPDU_OK_BYTE_COUNT GENMASK(24, 0)
-#define HAL_RX_PPDU_END_USER_STATS_RSVD2_8_MPDU_ERR_BYTE_COUNT GENMASK(24, 0)
+#define HAL_RX_PPDU_END_USER_STATS_INFO8_MPDU_OK_BYTE_COUNT GENMASK(24, 0)
+#define HAL_RX_PPDU_END_USER_STATS_INFO9_MPDU_ERR_BYTE_COUNT GENMASK(24, 0)
struct hal_rx_ppdu_end_user_stats {
__le32 rsvd0[2];
@@ -236,7 +236,13 @@ struct hal_rx_ppdu_end_user_stats {
__le32 info4;
__le32 info5;
__le32 info6;
- __le32 rsvd2[11];
+ __le32 info7;
+ __le32 rsvd2[4];
+ __le32 info8;
+ __le32 rsvd3;
+ __le32 info9;
+ __le32 rsvd4[2];
+ __le32 info10;
} __packed;
struct hal_rx_ppdu_end_user_stats_ext {
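
Note: the hal_rx.h change replaces the opaque __le32 rsvd2[11] tail of hal_rx_ppdu_end_user_stats with named words (info7..info10) interleaved with the remaining reserved ones; the total stays eleven 32-bit words (1 + 4 + 1 + 1 + 1 + 2 + 1), so the descriptor layout the hardware produces is unchanged. A user-space check of that arithmetic with hypothetical mirror structs:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Old tail: one anonymous block of reserved words. */
struct tail_old {
	uint32_t rsvd2[11];
};

/* New tail: the same eleven words, some of them now named. */
struct tail_new {
	uint32_t info7;
	uint32_t rsvd2[4];
	uint32_t info8;
	uint32_t rsvd3;
	uint32_t info9;
	uint32_t rsvd4[2];
	uint32_t info10;
};

int main(void)
{
	/* Splitting reserved words into named fields must not move anything. */
	static_assert(sizeof(struct tail_old) == sizeof(struct tail_new),
		      "descriptor tail size changed");
	printf("both tails are %zu bytes\n", sizeof(struct tail_new));
	return 0;
}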
diff --git a/drivers/net/wireless/ath/ath11k/hal_tx.c b/drivers/net/wireless/ath/ath11k/hal_tx.c
index d1b0e36e04a9..b919df6ce743 100644
--- a/drivers/net/wireless/ath/ath11k/hal_tx.c
+++ b/drivers/net/wireless/ath/ath11k/hal_tx.c
@@ -37,7 +37,7 @@ static const u8 dscp_tid_map[DSCP_TID_MAP_TBL_ENTRY_SIZE] = {
void ath11k_hal_tx_cmd_desc_setup(struct ath11k_base *ab, void *cmd,
struct hal_tx_info *ti)
{
- struct hal_tcl_data_cmd *tcl_cmd = (struct hal_tcl_data_cmd *)cmd;
+ struct hal_tcl_data_cmd *tcl_cmd = cmd;
tcl_cmd->buf_addr_info.info0 =
FIELD_PREP(BUFFER_ADDR_INFO0_ADDR, ti->paddr);
diff --git a/drivers/net/wireless/ath/ath11k/hif.h b/drivers/net/wireless/ath/ath11k/hif.h
index 659b80d2abd4..d68ed4214dec 100644
--- a/drivers/net/wireless/ath/ath11k/hif.h
+++ b/drivers/net/wireless/ath/ath11k/hif.h
@@ -9,18 +9,18 @@
#include "core.h"
struct ath11k_hif_ops {
- u32 (*read32)(struct ath11k_base *sc, u32 address);
- void (*write32)(struct ath11k_base *sc, u32 address, u32 data);
+ u32 (*read32)(struct ath11k_base *ab, u32 address);
+ void (*write32)(struct ath11k_base *ab, u32 address, u32 data);
int (*read)(struct ath11k_base *ab, void *buf, u32 start, u32 end);
- void (*irq_enable)(struct ath11k_base *sc);
- void (*irq_disable)(struct ath11k_base *sc);
- int (*start)(struct ath11k_base *sc);
- void (*stop)(struct ath11k_base *sc);
- int (*power_up)(struct ath11k_base *sc);
- void (*power_down)(struct ath11k_base *sc);
+ void (*irq_enable)(struct ath11k_base *ab);
+ void (*irq_disable)(struct ath11k_base *ab);
+ int (*start)(struct ath11k_base *ab);
+ void (*stop)(struct ath11k_base *ab);
+ int (*power_up)(struct ath11k_base *ab);
+ void (*power_down)(struct ath11k_base *ab);
int (*suspend)(struct ath11k_base *ab);
int (*resume)(struct ath11k_base *ab);
- int (*map_service_to_pipe)(struct ath11k_base *sc, u16 service_id,
+ int (*map_service_to_pipe)(struct ath11k_base *ab, u16 service_id,
u8 *ul_pipe, u8 *dl_pipe);
int (*get_user_msi_vector)(struct ath11k_base *ab, char *user_name,
int *num_vectors, u32 *user_base_data,
@@ -44,34 +44,34 @@ static inline void ath11k_hif_ce_irq_disable(struct ath11k_base *ab)
ab->hif.ops->ce_irq_disable(ab);
}
-static inline int ath11k_hif_start(struct ath11k_base *sc)
+static inline int ath11k_hif_start(struct ath11k_base *ab)
{
- return sc->hif.ops->start(sc);
+ return ab->hif.ops->start(ab);
}
-static inline void ath11k_hif_stop(struct ath11k_base *sc)
+static inline void ath11k_hif_stop(struct ath11k_base *ab)
{
- sc->hif.ops->stop(sc);
+ ab->hif.ops->stop(ab);
}
-static inline void ath11k_hif_irq_enable(struct ath11k_base *sc)
+static inline void ath11k_hif_irq_enable(struct ath11k_base *ab)
{
- sc->hif.ops->irq_enable(sc);
+ ab->hif.ops->irq_enable(ab);
}
-static inline void ath11k_hif_irq_disable(struct ath11k_base *sc)
+static inline void ath11k_hif_irq_disable(struct ath11k_base *ab)
{
- sc->hif.ops->irq_disable(sc);
+ ab->hif.ops->irq_disable(ab);
}
-static inline int ath11k_hif_power_up(struct ath11k_base *sc)
+static inline int ath11k_hif_power_up(struct ath11k_base *ab)
{
- return sc->hif.ops->power_up(sc);
+ return ab->hif.ops->power_up(ab);
}
-static inline void ath11k_hif_power_down(struct ath11k_base *sc)
+static inline void ath11k_hif_power_down(struct ath11k_base *ab)
{
- sc->hif.ops->power_down(sc);
+ ab->hif.ops->power_down(ab);
}
static inline int ath11k_hif_suspend(struct ath11k_base *ab)
@@ -90,14 +90,14 @@ static inline int ath11k_hif_resume(struct ath11k_base *ab)
return 0;
}
-static inline u32 ath11k_hif_read32(struct ath11k_base *sc, u32 address)
+static inline u32 ath11k_hif_read32(struct ath11k_base *ab, u32 address)
{
- return sc->hif.ops->read32(sc, address);
+ return ab->hif.ops->read32(ab, address);
}
-static inline void ath11k_hif_write32(struct ath11k_base *sc, u32 address, u32 data)
+static inline void ath11k_hif_write32(struct ath11k_base *ab, u32 address, u32 data)
{
- sc->hif.ops->write32(sc, address, data);
+ ab->hif.ops->write32(ab, address, data);
}
static inline int ath11k_hif_read(struct ath11k_base *ab, void *buf,
@@ -109,10 +109,10 @@ static inline int ath11k_hif_read(struct ath11k_base *ab, void *buf,
return ab->hif.ops->read(ab, buf, start, end);
}
-static inline int ath11k_hif_map_service_to_pipe(struct ath11k_base *sc, u16 service_id,
+static inline int ath11k_hif_map_service_to_pipe(struct ath11k_base *ab, u16 service_id,
u8 *ul_pipe, u8 *dl_pipe)
{
- return sc->hif.ops->map_service_to_pipe(sc, service_id, ul_pipe, dl_pipe);
+ return ab->hif.ops->map_service_to_pipe(ab, service_id, ul_pipe, dl_pipe);
}
static inline int ath11k_get_user_msi_vector(struct ath11k_base *ab, char *user_name,
diff --git a/drivers/net/wireless/ath/ath11k/htc.h b/drivers/net/wireless/ath/ath11k/htc.h
index f429b37cfdf7..d31e501c807c 100644
--- a/drivers/net/wireless/ath/ath11k/htc.h
+++ b/drivers/net/wireless/ath/ath11k/htc.h
@@ -156,18 +156,6 @@ struct ath11k_htc_record {
};
} __packed __aligned(4);
-/* note: the trailer offset is dynamic depending
- * on payload length. this is only a struct layout draft
- */
-struct ath11k_htc_frame {
- struct ath11k_htc_hdr hdr;
- union {
- struct ath11k_htc_msg msg;
- u8 payload[0];
- };
- struct ath11k_htc_record trailer[0];
-} __packed __aligned(4);
-
enum ath11k_htc_svc_gid {
ATH11K_HTC_SVC_GRP_RSVD = 0,
ATH11K_HTC_SVC_GRP_WMI = 1,
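
Note: the htc.h hunk deletes struct ath11k_htc_frame, an unused "layout draft" built from GNU-style zero-length arrays (payload[0], trailer[0]); current kernel style prefers C99 flexible array members, and a struct may carry at most one of those, only as its last member. A small user-space illustration of the preferred form, with invented field names:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct frame_hdr {
	unsigned short endpoint;
	unsigned short payload_len;
};

/* One flexible array member, and only as the last field. */
struct frame {
	struct frame_hdr hdr;
	unsigned char payload[];
};

int main(void)
{
	const char msg[] = "hello";
	struct frame *f = malloc(sizeof(*f) + sizeof(msg));

	if (!f)
		return 1;

	f->hdr.endpoint = 1;
	f->hdr.payload_len = sizeof(msg);
	memcpy(f->payload, msg, sizeof(msg));
	printf("ep=%hu len=%hu payload=%s\n", f->hdr.endpoint, f->hdr.payload_len, f->payload);
	free(f);
	return 0;
}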
diff --git a/drivers/net/wireless/ath/ath11k/mac.c b/drivers/net/wireless/ath/ath11k/mac.c
index c071bf5841af..7f7b39817773 100644
--- a/drivers/net/wireless/ath/ath11k/mac.c
+++ b/drivers/net/wireless/ath/ath11k/mac.c
@@ -5,6 +5,7 @@
*/
#include <net/mac80211.h>
+#include <net/cfg80211.h>
#include <linux/etherdevice.h>
#include <linux/bitfield.h>
#include <linux/inetdevice.h>
@@ -2831,7 +2832,7 @@ static void ath11k_peer_assoc_prepare(struct ath11k *ar,
lockdep_assert_held(&ar->conf_mutex);
- arsta = (struct ath11k_sta *)sta->drv_priv;
+ arsta = ath11k_sta_to_arsta(sta);
memset(arg, 0, sizeof(*arg));
@@ -4314,7 +4315,7 @@ static int ath11k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
ath11k_warn(ab, "peer %pM disappeared!\n", peer_addr);
if (sta) {
- arsta = (struct ath11k_sta *)sta->drv_priv;
+ arsta = ath11k_sta_to_arsta(sta);
switch (key->cipher) {
case WLAN_CIPHER_SUITE_TKIP:
@@ -4905,7 +4906,7 @@ static int ath11k_mac_station_add(struct ath11k *ar,
{
struct ath11k_base *ab = ar->ab;
struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif);
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct peer_create_params peer_param;
int ret;
@@ -5029,7 +5030,7 @@ static int ath11k_mac_op_sta_state(struct ieee80211_hw *hw,
{
struct ath11k *ar = hw->priv;
struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif);
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k_peer *peer;
int ret = 0;
@@ -5195,7 +5196,7 @@ static void ath11k_mac_op_sta_set_4addr(struct ieee80211_hw *hw,
struct ieee80211_sta *sta, bool enabled)
{
struct ath11k *ar = hw->priv;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
if (enabled && !arsta->use_4addr_set) {
ieee80211_queue_work(ar->hw, &arsta->set_4addr_wk);
@@ -5209,7 +5210,7 @@ static void ath11k_mac_op_sta_rc_update(struct ieee80211_hw *hw,
u32 changed)
{
struct ath11k *ar = hw->priv;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif);
struct ath11k_peer *peer;
u32 bw, smps;
@@ -5893,8 +5894,9 @@ static void ath11k_mac_setup_he_cap(struct ath11k *ar,
ar->mac.iftype[NL80211_BAND_2GHZ],
NL80211_BAND_2GHZ);
band = &ar->mac.sbands[NL80211_BAND_2GHZ];
- band->iftype_data = ar->mac.iftype[NL80211_BAND_2GHZ];
- band->n_iftype_data = count;
+ _ieee80211_set_sband_iftype_data(band,
+ ar->mac.iftype[NL80211_BAND_2GHZ],
+ count);
}
if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP) {
@@ -5902,8 +5904,9 @@ static void ath11k_mac_setup_he_cap(struct ath11k *ar,
ar->mac.iftype[NL80211_BAND_5GHZ],
NL80211_BAND_5GHZ);
band = &ar->mac.sbands[NL80211_BAND_5GHZ];
- band->iftype_data = ar->mac.iftype[NL80211_BAND_5GHZ];
- band->n_iftype_data = count;
+ _ieee80211_set_sband_iftype_data(band,
+ ar->mac.iftype[NL80211_BAND_5GHZ],
+ count);
}
if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP &&
@@ -5912,8 +5915,9 @@ static void ath11k_mac_setup_he_cap(struct ath11k *ar,
ar->mac.iftype[NL80211_BAND_6GHZ],
NL80211_BAND_6GHZ);
band = &ar->mac.sbands[NL80211_BAND_6GHZ];
- band->iftype_data = ar->mac.iftype[NL80211_BAND_6GHZ];
- band->n_iftype_data = count;
+ _ieee80211_set_sband_iftype_data(band,
+ ar->mac.iftype[NL80211_BAND_6GHZ],
+ count);
}
}
@@ -6199,7 +6203,7 @@ static void ath11k_mac_op_tx(struct ieee80211_hw *hw,
}
if (control->sta)
- arsta = (struct ath11k_sta *)control->sta->drv_priv;
+ arsta = ath11k_sta_to_arsta(control->sta);
ret = ath11k_dp_tx(ar, arvif, arsta, skb);
if (unlikely(ret)) {
@@ -6967,8 +6971,8 @@ err:
static int ath11k_mac_vif_unref(int buf_id, void *skb, void *ctx)
{
- struct ieee80211_vif *vif = (struct ieee80211_vif *)ctx;
- struct ath11k_skb_cb *skb_cb = ATH11K_SKB_CB((struct sk_buff *)skb);
+ struct ieee80211_vif *vif = ctx;
+ struct ath11k_skb_cb *skb_cb = ATH11K_SKB_CB(skb);
if (skb_cb->vif == vif)
skb_cb->vif = NULL;
@@ -7193,6 +7197,7 @@ ath11k_mac_vdev_start_restart(struct ath11k_vif *arvif,
struct wmi_vdev_start_req_arg arg = {};
const struct cfg80211_chan_def *chandef = &ctx->def;
int ret = 0;
+ unsigned int dfs_cac_time;
lockdep_assert_held(&ar->conf_mutex);
@@ -7272,20 +7277,21 @@ ath11k_mac_vdev_start_restart(struct ath11k_vif *arvif,
ath11k_dbg(ab, ATH11K_DBG_MAC, "vdev %pM started, vdev_id %d\n",
arvif->vif->addr, arvif->vdev_id);
- /* Enable CAC Flag in the driver by checking the channel DFS cac time,
- * i.e dfs_cac_ms value which will be valid only for radar channels
- * and state as NL80211_DFS_USABLE which indicates CAC needs to be
+ /* Enable CAC Flag in the driver by checking all sub-channels' DFS
+ * state as NL80211_DFS_USABLE, which indicates CAC needs to be
* done before channel usage. This flag is used to drop rx packets
* during CAC.

*/
/* TODO Set the flag for other interface types as required */
- if (arvif->vdev_type == WMI_VDEV_TYPE_AP &&
- chandef->chan->dfs_cac_ms &&
- chandef->chan->dfs_state == NL80211_DFS_USABLE) {
+ if (arvif->vdev_type == WMI_VDEV_TYPE_AP && ctx->radar_enabled &&
+ cfg80211_chandef_dfs_usable(ar->hw->wiphy, chandef)) {
set_bit(ATH11K_CAC_RUNNING, &ar->dev_flags);
+ dfs_cac_time = cfg80211_chandef_dfs_cac_time(ar->hw->wiphy,
+ chandef);
ath11k_dbg(ab, ATH11K_DBG_MAC,
- "CAC Started in chan_freq %d for vdev %d\n",
- arg.channel.freq, arg.vdev_id);
+ "cac started dfs_cac_time %u center_freq %d center_freq1 %d for vdev %d\n",
+ dfs_cac_time, arg.channel.freq, chandef->center_freq1,
+ arg.vdev_id);
}
ret = ath11k_mac_set_txbf_conf(arvif);
@@ -7910,12 +7916,14 @@ ath11k_mac_get_tx_mcs_map(const struct ieee80211_sta_he_cap *he_cap)
static bool
ath11k_mac_bitrate_mask_get_single_nss(struct ath11k *ar,
+ struct ath11k_vif *arvif,
enum nl80211_band band,
const struct cfg80211_bitrate_mask *mask,
int *nss)
{
struct ieee80211_supported_band *sband = &ar->mac.sbands[band];
u16 vht_mcs_map = le16_to_cpu(sband->vht_cap.vht_mcs.tx_mcs_map);
+ const struct ieee80211_sta_he_cap *he_cap;
u16 he_mcs_map = 0;
u8 ht_nss_mask = 0;
u8 vht_nss_mask = 0;
@@ -7946,7 +7954,11 @@ ath11k_mac_bitrate_mask_get_single_nss(struct ath11k *ar,
return false;
}
- he_mcs_map = le16_to_cpu(ath11k_mac_get_tx_mcs_map(&sband->iftype_data->he_cap));
+ he_cap = ieee80211_get_he_iftype_cap_vif(sband, arvif->vif);
+ if (!he_cap)
+ return false;
+
+ he_mcs_map = le16_to_cpu(ath11k_mac_get_tx_mcs_map(he_cap));
for (i = 0; i < ARRAY_SIZE(mask->control[band].he_mcs); i++) {
if (mask->control[band].he_mcs[i] == 0)
@@ -8223,7 +8235,7 @@ static void ath11k_mac_set_bitrate_mask_iter(void *data,
struct ieee80211_sta *sta)
{
struct ath11k_vif *arvif = data;
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arvif->ar;
spin_lock_bh(&ar->data_lock);
@@ -8362,7 +8374,7 @@ ath11k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
ieee80211_iterate_stations_atomic(ar->hw,
ath11k_mac_disable_peer_fixed_rate,
arvif);
- } else if (ath11k_mac_bitrate_mask_get_single_nss(ar, band, mask,
+ } else if (ath11k_mac_bitrate_mask_get_single_nss(ar, arvif, band, mask,
&single_nss)) {
rate = WMI_FIXED_RATE_NONE;
nss = single_nss;
@@ -8627,7 +8639,7 @@ static void ath11k_mac_op_sta_statistics(struct ieee80211_hw *hw,
struct ieee80211_sta *sta,
struct station_info *sinfo)
{
- struct ath11k_sta *arsta = (struct ath11k_sta *)sta->drv_priv;
+ struct ath11k_sta *arsta = ath11k_sta_to_arsta(sta);
struct ath11k *ar = arsta->arvif->ar;
s8 signal;
bool db2dbm = test_bit(WMI_TLV_SERVICE_HW_DB2DBM_CONVERSION_SUPPORT,
@@ -8905,7 +8917,7 @@ static int ath11k_mac_op_remain_on_channel(struct ieee80211_hw *hw,
{
struct ath11k *ar = hw->priv;
struct ath11k_vif *arvif = ath11k_vif_to_arvif(vif);
- struct scan_req_params arg;
+ struct scan_req_params *arg;
int ret;
u32 scan_time_msec;
@@ -8937,27 +8949,31 @@ static int ath11k_mac_op_remain_on_channel(struct ieee80211_hw *hw,
scan_time_msec = ar->hw->wiphy->max_remain_on_channel_duration * 2;
- memset(&arg, 0, sizeof(arg));
- ath11k_wmi_start_scan_init(ar, &arg);
- arg.num_chan = 1;
- arg.chan_list = kcalloc(arg.num_chan, sizeof(*arg.chan_list),
- GFP_KERNEL);
- if (!arg.chan_list) {
+ arg = kzalloc(sizeof(*arg), GFP_KERNEL);
+ if (!arg) {
ret = -ENOMEM;
goto exit;
}
+ ath11k_wmi_start_scan_init(ar, arg);
+ arg->num_chan = 1;
+ arg->chan_list = kcalloc(arg->num_chan, sizeof(*arg->chan_list),
+ GFP_KERNEL);
+ if (!arg->chan_list) {
+ ret = -ENOMEM;
+ goto free_arg;
+ }
- arg.vdev_id = arvif->vdev_id;
- arg.scan_id = ATH11K_SCAN_ID;
- arg.chan_list[0] = chan->center_freq;
- arg.dwell_time_active = scan_time_msec;
- arg.dwell_time_passive = scan_time_msec;
- arg.max_scan_time = scan_time_msec;
- arg.scan_flags |= WMI_SCAN_FLAG_PASSIVE;
- arg.scan_flags |= WMI_SCAN_FILTER_PROBE_REQ;
- arg.burst_duration = duration;
-
- ret = ath11k_start_scan(ar, &arg);
+ arg->vdev_id = arvif->vdev_id;
+ arg->scan_id = ATH11K_SCAN_ID;
+ arg->chan_list[0] = chan->center_freq;
+ arg->dwell_time_active = scan_time_msec;
+ arg->dwell_time_passive = scan_time_msec;
+ arg->max_scan_time = scan_time_msec;
+ arg->scan_flags |= WMI_SCAN_FLAG_PASSIVE;
+ arg->scan_flags |= WMI_SCAN_FILTER_PROBE_REQ;
+ arg->burst_duration = duration;
+
+ ret = ath11k_start_scan(ar, arg);
if (ret) {
ath11k_warn(ar->ab, "failed to start roc scan: %d\n", ret);
@@ -8983,7 +8999,9 @@ static int ath11k_mac_op_remain_on_channel(struct ieee80211_hw *hw,
ret = 0;
free_chan_list:
- kfree(arg.chan_list);
+ kfree(arg->chan_list);
+free_arg:
+ kfree(arg);
exit:
mutex_unlock(&ar->conf_mutex);
return ret;
@@ -9042,6 +9060,14 @@ static int ath11k_mac_op_get_txpower(struct ieee80211_hw *hw,
if (ar->state != ATH11K_STATE_ON)
goto err_fallback;
+ /* Firmware doesn't provide Tx power during CAC hence no need to fetch
+ * the stats.
+ */
+ if (test_bit(ATH11K_CAC_RUNNING, &ar->dev_flags)) {
+ mutex_unlock(&ar->conf_mutex);
+ return -EAGAIN;
+ }
+
req_param.pdev_id = ar->pdev->pdev_id;
req_param.stats_id = WMI_REQUEST_PDEV_STAT;
diff --git a/drivers/net/wireless/ath/ath11k/mhi.c b/drivers/net/wireless/ath/ath11k/mhi.c
index 3ac689f1def4..afeabd6ecc67 100644
--- a/drivers/net/wireless/ath/ath11k/mhi.c
+++ b/drivers/net/wireless/ath/ath11k/mhi.c
@@ -6,6 +6,7 @@
#include <linux/msi.h>
#include <linux/pci.h>
+#include <linux/firmware.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/ioport.h>
@@ -333,6 +334,7 @@ static void ath11k_mhi_op_status_cb(struct mhi_controller *mhi_cntrl,
ath11k_warn(ab, "firmware crashed: MHI_CB_SYS_ERROR\n");
break;
case MHI_CB_EE_RDDM:
+ ath11k_warn(ab, "firmware crashed: MHI_CB_EE_RDDM\n");
if (!(test_bit(ATH11K_FLAG_UNREGISTERING, &ab->dev_flags)))
queue_work(ab->workqueue_aux, &ab->reset_work);
break;
@@ -389,16 +391,23 @@ int ath11k_mhi_register(struct ath11k_pci *ab_pci)
if (!mhi_ctrl)
return -ENOMEM;
- ath11k_core_create_firmware_path(ab, ATH11K_AMSS_FILE,
- ab_pci->amss_path,
- sizeof(ab_pci->amss_path));
-
ab_pci->mhi_ctrl = mhi_ctrl;
mhi_ctrl->cntrl_dev = ab->dev;
- mhi_ctrl->fw_image = ab_pci->amss_path;
mhi_ctrl->regs = ab->mem;
mhi_ctrl->reg_len = ab->mem_len;
+ if (ab->fw.amss_data && ab->fw.amss_len > 0) {
+ /* use MHI firmware file from firmware-N.bin */
+ mhi_ctrl->fw_data = ab->fw.amss_data;
+ mhi_ctrl->fw_sz = ab->fw.amss_len;
+ } else {
+ /* use the old separate amss.bin MHI firmware file */
+ ath11k_core_create_firmware_path(ab, ATH11K_AMSS_FILE,
+ ab_pci->amss_path,
+ sizeof(ab_pci->amss_path));
+ mhi_ctrl->fw_image = ab_pci->amss_path;
+ }
+
ret = ath11k_mhi_get_msi(ab_pci);
if (ret) {
ath11k_err(ab, "failed to get msi for mhi\n");
diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
index a5aa1857ec14..09e65c5e55c4 100644
--- a/drivers/net/wireless/ath/ath11k/pci.c
+++ b/drivers/net/wireless/ath/ath11k/pci.c
@@ -854,10 +854,16 @@ unsupported_wcn6855_soc:
if (ret)
goto err_pci_disable_msi;
+ ret = ath11k_pci_set_irq_affinity_hint(ab_pci, cpumask_of(0));
+ if (ret) {
+ ath11k_err(ab, "failed to set irq affinity %d\n", ret);
+ goto err_pci_disable_msi;
+ }
+
ret = ath11k_mhi_register(ab_pci);
if (ret) {
ath11k_err(ab, "failed to register mhi: %d\n", ret);
- goto err_pci_disable_msi;
+ goto err_irq_affinity_cleanup;
}
ret = ath11k_hal_srng_init(ab);
@@ -878,12 +884,6 @@ unsupported_wcn6855_soc:
goto err_ce_free;
}
- ret = ath11k_pci_set_irq_affinity_hint(ab_pci, cpumask_of(0));
- if (ret) {
- ath11k_err(ab, "failed to set irq affinity %d\n", ret);
- goto err_free_irq;
- }
-
/* kernel may allocate a dummy vector before request_irq and
* then allocate a real vector when request_irq is called.
* So get msi_data here again to avoid spurious interrupt
@@ -892,20 +892,17 @@ unsupported_wcn6855_soc:
ret = ath11k_pci_config_msi_data(ab_pci);
if (ret) {
ath11k_err(ab, "failed to config msi_data: %d\n", ret);
- goto err_irq_affinity_cleanup;
+ goto err_free_irq;
}
ret = ath11k_core_init(ab);
if (ret) {
ath11k_err(ab, "failed to init core: %d\n", ret);
- goto err_irq_affinity_cleanup;
+ goto err_free_irq;
}
ath11k_qmi_fwreset_from_cold_boot(ab);
return 0;
-err_irq_affinity_cleanup:
- ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
-
err_free_irq:
ath11k_pcic_free_irq(ab);
@@ -918,6 +915,9 @@ err_hal_srng_deinit:
err_mhi_unregister:
ath11k_mhi_unregister(ab_pci);
+err_irq_affinity_cleanup:
+ ath11k_pci_set_irq_affinity_hint(ab_pci, NULL);
+
err_pci_disable_msi:
ath11k_pci_free_msi(ab_pci);
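
Note: the pci.c probe change moves the IRQ affinity hint earlier in the init sequence, and with goto-style unwinding its cleanup label has to move correspondingly later in the error chain: resources are released in reverse order of acquisition, and each label undoes only the steps that had completed before the jump. A compact user-space model of that discipline, with made-up step names:

#include <stdio.h>

static int step_ok(const char *name)   { printf("init  %s\n", name); return 0; }
static int step_fail(const char *name) { printf("init  %s (fails)\n", name); return -1; }
static void undo(const char *name)     { printf("undo  %s\n", name); }

static int probe(void)
{
	int ret;

	ret = step_ok("msi");
	if (ret)
		return ret;

	ret = step_ok("irq_affinity");	/* moved earlier, like in the patch */
	if (ret)
		goto err_free_msi;

	ret = step_ok("mhi");
	if (ret)
		goto err_irq_affinity;

	ret = step_fail("core_init");	/* simulate a late failure */
	if (ret)
		goto err_mhi;

	return 0;

	/* Unwind strictly in reverse order of what succeeded above. */
err_mhi:
	undo("mhi");
err_irq_affinity:
	undo("irq_affinity");
err_free_msi:
	undo("msi");
	return ret;
}

int main(void)
{
	return probe() ? 1 : 0;
}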
diff --git a/drivers/net/wireless/ath/ath11k/pcic.c b/drivers/net/wireless/ath/ath11k/pcic.c
index c63083633b37..16d1e332193f 100644
--- a/drivers/net/wireless/ath/ath11k/pcic.c
+++ b/drivers/net/wireless/ath/ath11k/pcic.c
@@ -422,14 +422,14 @@ static void ath11k_pcic_ext_grp_disable(struct ath11k_ext_irq_grp *irq_grp)
disable_irq_nosync(irq_grp->ab->irq_num[irq_grp->irqs[i]]);
}
-static void __ath11k_pcic_ext_irq_disable(struct ath11k_base *sc)
+static void __ath11k_pcic_ext_irq_disable(struct ath11k_base *ab)
{
int i;
- clear_bit(ATH11K_FLAG_EXT_IRQ_ENABLED, &sc->dev_flags);
+ clear_bit(ATH11K_FLAG_EXT_IRQ_ENABLED, &ab->dev_flags);
for (i = 0; i < ATH11K_EXT_IRQ_GRP_NUM_MAX; i++) {
- struct ath11k_ext_irq_grp *irq_grp = &sc->ext_irq_grp[i];
+ struct ath11k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
ath11k_pcic_ext_grp_disable(irq_grp);
diff --git a/drivers/net/wireless/ath/ath11k/peer.c b/drivers/net/wireless/ath/ath11k/peer.c
index 114aa3a9a339..1c79a932d17f 100644
--- a/drivers/net/wireless/ath/ath11k/peer.c
+++ b/drivers/net/wireless/ath/ath11k/peer.c
@@ -446,7 +446,7 @@ int ath11k_peer_create(struct ath11k *ar, struct ath11k_vif *arvif,
peer->sec_type_grp = HAL_ENCRYPT_TYPE_OPEN;
if (sta) {
- arsta = (struct ath11k_sta *)sta->drv_priv;
+ arsta = ath11k_sta_to_arsta(sta);
arsta->tcl_metadata |= FIELD_PREP(HTT_TCL_META_DATA_TYPE, 0) |
FIELD_PREP(HTT_TCL_META_DATA_PEER_ID,
peer->peer_id);
diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
index 41fad03a3025..c270dc46d506 100644
--- a/drivers/net/wireless/ath/ath11k/qmi.c
+++ b/drivers/net/wireless/ath/ath11k/qmi.c
@@ -2502,38 +2502,56 @@ out:
static int ath11k_qmi_m3_load(struct ath11k_base *ab)
{
struct m3_mem_region *m3_mem = &ab->qmi.m3_mem;
- const struct firmware *fw;
+ const struct firmware *fw = NULL;
+ const void *m3_data;
char path[100];
+ size_t m3_len;
int ret;
- fw = ath11k_core_firmware_request(ab, ATH11K_M3_FILE);
- if (IS_ERR(fw)) {
- ret = PTR_ERR(fw);
- ath11k_core_create_firmware_path(ab, ATH11K_M3_FILE,
- path, sizeof(path));
- ath11k_err(ab, "failed to load %s: %d\n", path, ret);
- return ret;
- }
+ if (m3_mem->vaddr)
+ /* m3 firmware buffer is already available in the DMA buffer */
+ return 0;
- if (m3_mem->vaddr || m3_mem->size)
- goto skip_m3_alloc;
+ if (ab->fw.m3_data && ab->fw.m3_len > 0) {
+ /* firmware-N.bin had an m3 firmware file so use that */
+ m3_data = ab->fw.m3_data;
+ m3_len = ab->fw.m3_len;
+ } else {
+ /* No m3 file in firmware-N.bin so try to request old
+ * separate m3.bin.
+ */
+ fw = ath11k_core_firmware_request(ab, ATH11K_M3_FILE);
+ if (IS_ERR(fw)) {
+ ret = PTR_ERR(fw);
+ ath11k_core_create_firmware_path(ab, ATH11K_M3_FILE,
+ path, sizeof(path));
+ ath11k_err(ab, "failed to load %s: %d\n", path, ret);
+ return ret;
+ }
+
+ m3_data = fw->data;
+ m3_len = fw->size;
+ }
m3_mem->vaddr = dma_alloc_coherent(ab->dev,
- fw->size, &m3_mem->paddr,
+ m3_len, &m3_mem->paddr,
GFP_KERNEL);
if (!m3_mem->vaddr) {
ath11k_err(ab, "failed to allocate memory for M3 with size %zu\n",
fw->size);
- release_firmware(fw);
- return -ENOMEM;
+ ret = -ENOMEM;
+ goto out;
}
-skip_m3_alloc:
- memcpy(m3_mem->vaddr, fw->data, fw->size);
- m3_mem->size = fw->size;
+ memcpy(m3_mem->vaddr, m3_data, m3_len);
+ m3_mem->size = m3_len;
+
+ ret = 0;
+
+out:
release_firmware(fw);
- return 0;
+ return ret;
}
static void ath11k_qmi_m3_free(struct ath11k_base *ab)
diff --git a/drivers/net/wireless/ath/ath11k/reg.c b/drivers/net/wireless/ath/ath11k/reg.c
index 7f9fb968dac6..3c7debae800a 100644
--- a/drivers/net/wireless/ath/ath11k/reg.c
+++ b/drivers/net/wireless/ath/ath11k/reg.c
@@ -352,6 +352,16 @@ static u32 ath11k_map_fw_reg_flags(u16 reg_flags)
return flags;
}
+static u32 ath11k_map_fw_phy_flags(u32 phy_flags)
+{
+ u32 flags = 0;
+
+ if (phy_flags & ATH11K_REG_PHY_BITMAP_NO11AX)
+ flags |= NL80211_RRF_NO_HE;
+
+ return flags;
+}
+
static bool
ath11k_reg_can_intersect(struct ieee80211_reg_rule *rule1,
struct ieee80211_reg_rule *rule2)
@@ -685,6 +695,7 @@ ath11k_reg_build_regd(struct ath11k_base *ab,
}
flags |= ath11k_map_fw_reg_flags(reg_rule->flags);
+ flags |= ath11k_map_fw_phy_flags(reg_info->phybitmap);
ath11k_reg_update_rule(tmp_regd->reg_rules + i,
reg_rule->start_freq,
diff --git a/drivers/net/wireless/ath/ath11k/reg.h b/drivers/net/wireless/ath/ath11k/reg.h
index 2f284f26378d..84daa6543b6a 100644
--- a/drivers/net/wireless/ath/ath11k/reg.h
+++ b/drivers/net/wireless/ath/ath11k/reg.h
@@ -24,6 +24,9 @@ enum ath11k_dfs_region {
ATH11K_DFS_REG_UNDEF,
};
+/* Phy bitmaps */
+#define ATH11K_REG_PHY_BITMAP_NO11AX BIT(5)
+
/* ATH11K Regulatory API's */
void ath11k_reg_init(struct ath11k *ar);
void ath11k_reg_free(struct ath11k_base *ab);
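
Note: reg.c now folds the firmware's per-PHY capability bitmap into the regulatory rule flags: bit 5 (ATH11K_REG_PHY_BITMAP_NO11AX, defined in the reg.h hunk above) maps to NL80211_RRF_NO_HE, disabling 11ax where the firmware says it must be off. Translating one flag domain into another is a common pattern; a minimal user-space version with stand-in flag values (only the PHY bit position matches the driver):

#include <stdint.h>
#include <stdio.h>

#define PHY_BITMAP_NO11AX  (1u << 5)	/* firmware-side capability bit */
#define REG_RULE_NO_HE     (1u << 0)	/* stand-in for NL80211_RRF_NO_HE */

static uint32_t map_phy_flags(uint32_t phy_flags)
{
	uint32_t flags = 0;

	if (phy_flags & PHY_BITMAP_NO11AX)
		flags |= REG_RULE_NO_HE;

	return flags;
}

int main(void)
{
	printf("0x%08x -> 0x%08x\n", PHY_BITMAP_NO11AX, map_phy_flags(PHY_BITMAP_NO11AX));
	printf("0x%08x -> 0x%08x\n", 0u, map_phy_flags(0));
	return 0;
}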
diff --git a/drivers/net/wireless/ath/ath11k/spectral.c b/drivers/net/wireless/ath/ath11k/spectral.c
index 705868198df4..0b7b7122cc05 100644
--- a/drivers/net/wireless/ath/ath11k/spectral.c
+++ b/drivers/net/wireless/ath/ath11k/spectral.c
@@ -382,16 +382,11 @@ static ssize_t ath11k_write_file_spectral_count(struct file *file,
{
struct ath11k *ar = file->private_data;
unsigned long val;
- char buf[32];
- ssize_t len;
-
- len = min(count, sizeof(buf) - 1);
- if (copy_from_user(buf, user_buf, len))
- return -EFAULT;
+ ssize_t ret;
- buf[len] = '\0';
- if (kstrtoul(buf, 0, &val))
- return -EINVAL;
+ ret = kstrtoul_from_user(user_buf, count, 0, &val);
+ if (ret)
+ return ret;
if (val > ATH11K_SPECTRAL_SCAN_COUNT_MAX)
return -EINVAL;
@@ -437,16 +432,11 @@ static ssize_t ath11k_write_file_spectral_bins(struct file *file,
{
struct ath11k *ar = file->private_data;
unsigned long val;
- char buf[32];
- ssize_t len;
-
- len = min(count, sizeof(buf) - 1);
- if (copy_from_user(buf, user_buf, len))
- return -EFAULT;
+ ssize_t ret;
- buf[len] = '\0';
- if (kstrtoul(buf, 0, &val))
- return -EINVAL;
+ ret = kstrtoul_from_user(user_buf, count, 0, &val);
+ if (ret)
+ return ret;
if (val < ATH11K_SPECTRAL_MIN_BINS ||
val > ar->ab->hw_params.spectral.max_fft_bins)
@@ -598,7 +588,7 @@ int ath11k_spectral_process_fft(struct ath11k *ar,
return -EINVAL;
}
- tlv = (struct spectral_tlv *)data;
+ tlv = data;
tlv_len = FIELD_GET(SPECTRAL_TLV_HDR_LEN, __le32_to_cpu(tlv->header));
/* convert Dword into bytes */
tlv_len *= ATH11K_SPECTRAL_DWORD_SIZE;
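
Note: the spectral.c hunks collapse the copy_from_user()/kstrtoul() sequence into kstrtoul_from_user(), which copies a bounded, NUL-terminated chunk from user space and parses it in one call, returning 0 on success or a negative errno. A kernel-style sketch of a debugfs-type write handler built around it; the variable name and limit are illustrative only:

#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/uaccess.h>

static unsigned long demo_limit = 16;

static ssize_t demo_write(struct file *file, const char __user *user_buf,
			  size_t count, loff_t *ppos)
{
	unsigned long val;
	ssize_t ret;

	/* base 0: accepts decimal, 0x-prefixed hex and 0-prefixed octal */
	ret = kstrtoul_from_user(user_buf, count, 0, &val);
	if (ret)
		return ret;

	if (val > 255)
		return -EINVAL;

	demo_limit = val;
	return count;
}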
diff --git a/drivers/net/wireless/ath/ath11k/thermal.c b/drivers/net/wireless/ath/ath11k/thermal.c
index 23ed01bd44f9..c9b012f97ba5 100644
--- a/drivers/net/wireless/ath/ath11k/thermal.c
+++ b/drivers/net/wireless/ath/ath11k/thermal.c
@@ -125,7 +125,7 @@ ATTRIBUTE_GROUPS(ath11k_hwmon);
int ath11k_thermal_set_throttling(struct ath11k *ar, u32 throttle_state)
{
- struct ath11k_base *sc = ar->ab;
+ struct ath11k_base *ab = ar->ab;
struct thermal_mitigation_params param;
int ret = 0;
@@ -147,14 +147,14 @@ int ath11k_thermal_set_throttling(struct ath11k *ar, u32 throttle_state)
ret = ath11k_wmi_send_thermal_mitigation_param_cmd(ar, &param);
if (ret) {
- ath11k_warn(sc, "failed to send thermal mitigation duty cycle %u ret %d\n",
+ ath11k_warn(ab, "failed to send thermal mitigation duty cycle %u ret %d\n",
throttle_state, ret);
}
return ret;
}
-int ath11k_thermal_register(struct ath11k_base *sc)
+int ath11k_thermal_register(struct ath11k_base *ab)
{
struct thermal_cooling_device *cdev;
struct device *hwmon_dev;
@@ -162,8 +162,8 @@ int ath11k_thermal_register(struct ath11k_base *sc)
struct ath11k_pdev *pdev;
int i, ret;
- for (i = 0; i < sc->num_radios; i++) {
- pdev = &sc->pdevs[i];
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = &ab->pdevs[i];
ar = pdev->ar;
if (!ar)
continue;
@@ -172,7 +172,7 @@ int ath11k_thermal_register(struct ath11k_base *sc)
&ath11k_thermal_ops);
if (IS_ERR(cdev)) {
- ath11k_err(sc, "failed to setup thermal device result: %ld\n",
+ ath11k_err(ab, "failed to setup thermal device result: %ld\n",
PTR_ERR(cdev));
ret = -EINVAL;
goto err_thermal_destroy;
@@ -183,7 +183,7 @@ int ath11k_thermal_register(struct ath11k_base *sc)
ret = sysfs_create_link(&ar->hw->wiphy->dev.kobj, &cdev->device.kobj,
"cooling_device");
if (ret) {
- ath11k_err(sc, "failed to create cooling device symlink\n");
+ ath11k_err(ab, "failed to create cooling device symlink\n");
goto err_thermal_destroy;
}
@@ -204,18 +204,18 @@ int ath11k_thermal_register(struct ath11k_base *sc)
return 0;
err_thermal_destroy:
- ath11k_thermal_unregister(sc);
+ ath11k_thermal_unregister(ab);
return ret;
}
-void ath11k_thermal_unregister(struct ath11k_base *sc)
+void ath11k_thermal_unregister(struct ath11k_base *ab)
{
struct ath11k *ar;
struct ath11k_pdev *pdev;
int i;
- for (i = 0; i < sc->num_radios; i++) {
- pdev = &sc->pdevs[i];
+ for (i = 0; i < ab->num_radios; i++) {
+ pdev = &ab->pdevs[i];
ar = pdev->ar;
if (!ar)
continue;
diff --git a/drivers/net/wireless/ath/ath11k/thermal.h b/drivers/net/wireless/ath/ath11k/thermal.h
index 3e39675ef7f5..83cb67686733 100644
--- a/drivers/net/wireless/ath/ath11k/thermal.h
+++ b/drivers/net/wireless/ath/ath11k/thermal.h
@@ -26,17 +26,17 @@ struct ath11k_thermal {
};
#if IS_REACHABLE(CONFIG_THERMAL)
-int ath11k_thermal_register(struct ath11k_base *sc);
-void ath11k_thermal_unregister(struct ath11k_base *sc);
+int ath11k_thermal_register(struct ath11k_base *ab);
+void ath11k_thermal_unregister(struct ath11k_base *ab);
int ath11k_thermal_set_throttling(struct ath11k *ar, u32 throttle_state);
void ath11k_thermal_event_temperature(struct ath11k *ar, int temperature);
#else
-static inline int ath11k_thermal_register(struct ath11k_base *sc)
+static inline int ath11k_thermal_register(struct ath11k_base *ab)
{
return 0;
}
-static inline void ath11k_thermal_unregister(struct ath11k_base *sc)
+static inline void ath11k_thermal_unregister(struct ath11k_base *ab)
{
}
diff --git a/drivers/net/wireless/ath/ath11k/wmi.c b/drivers/net/wireless/ath/ath11k/wmi.c
index 23ad6825e5be..2845b4313d3a 100644
--- a/drivers/net/wireless/ath/ath11k/wmi.c
+++ b/drivers/net/wireless/ath/ath11k/wmi.c
@@ -292,18 +292,18 @@ err_pull:
int ath11k_wmi_cmd_send(struct ath11k_pdev_wmi *wmi, struct sk_buff *skb,
u32 cmd_id)
{
- struct ath11k_wmi_base *wmi_sc = wmi->wmi_ab;
+ struct ath11k_wmi_base *wmi_ab = wmi->wmi_ab;
int ret = -EOPNOTSUPP;
- struct ath11k_base *ab = wmi_sc->ab;
+ struct ath11k_base *ab = wmi_ab->ab;
might_sleep();
if (ab->hw_params.credit_flow) {
- wait_event_timeout(wmi_sc->tx_credits_wq, ({
+ wait_event_timeout(wmi_ab->tx_credits_wq, ({
ret = ath11k_wmi_cmd_send_nowait(wmi, skb, cmd_id);
if (ret && test_bit(ATH11K_FLAG_CRASH_FLUSH,
- &wmi_sc->ab->dev_flags))
+ &wmi_ab->ab->dev_flags))
ret = -ESHUTDOWN;
(ret != -EAGAIN);
@@ -313,7 +313,7 @@ int ath11k_wmi_cmd_send(struct ath11k_pdev_wmi *wmi, struct sk_buff *skb,
ret = ath11k_wmi_cmd_send_nowait(wmi, skb, cmd_id);
if (ret && test_bit(ATH11K_FLAG_CRASH_FLUSH,
- &wmi_sc->ab->dev_flags))
+ &wmi_ab->ab->dev_flags))
ret = -ESHUTDOWN;
(ret != -ENOBUFS);
@@ -321,10 +321,10 @@ int ath11k_wmi_cmd_send(struct ath11k_pdev_wmi *wmi, struct sk_buff *skb,
}
if (ret == -EAGAIN)
- ath11k_warn(wmi_sc->ab, "wmi command %d timeout\n", cmd_id);
+ ath11k_warn(wmi_ab->ab, "wmi command %d timeout\n", cmd_id);
if (ret == -ENOBUFS)
- ath11k_warn(wmi_sc->ab, "ce desc not available for wmi command %d\n",
+ ath11k_warn(wmi_ab->ab, "ce desc not available for wmi command %d\n",
cmd_id);
return ret;
@@ -611,10 +611,10 @@ static int ath11k_service_ready_event(struct ath11k_base *ab, struct sk_buff *sk
return 0;
}
-struct sk_buff *ath11k_wmi_alloc_skb(struct ath11k_wmi_base *wmi_sc, u32 len)
+struct sk_buff *ath11k_wmi_alloc_skb(struct ath11k_wmi_base *wmi_ab, u32 len)
{
struct sk_buff *skb;
- struct ath11k_base *ab = wmi_sc->ab;
+ struct ath11k_base *ab = wmi_ab->ab;
u32 round_len = roundup(len, 4);
skb = ath11k_htc_alloc_skb(ab, WMI_SKB_HEADROOM + round_len);
@@ -2281,7 +2281,7 @@ int ath11k_wmi_send_scan_start_cmd(struct ath11k *ar,
tlv->header = FIELD_PREP(WMI_TLV_TAG, WMI_TAG_ARRAY_UINT32) |
FIELD_PREP(WMI_TLV_LEN, len);
ptr += TLV_HDR_SIZE;
- tmp_ptr = (u32 *)ptr;
+ tmp_ptr = ptr;
for (i = 0; i < params->num_chan; ++i)
tmp_ptr[i] = params->chan_list[i];
@@ -4148,7 +4148,7 @@ static int ath11k_init_cmd_send(struct ath11k_pdev_wmi *wmi,
ptr += TLV_HDR_SIZE + len;
if (param->hw_mode_id != WMI_HOST_HW_MODE_MAX) {
- hw_mode = (struct wmi_pdev_set_hw_mode_cmd_param *)ptr;
+ hw_mode = ptr;
hw_mode->tlv_header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_PDEV_SET_HW_MODE_CMD) |
FIELD_PREP(WMI_TLV_LEN,
@@ -4168,7 +4168,7 @@ static int ath11k_init_cmd_send(struct ath11k_pdev_wmi *wmi,
len = sizeof(*band_to_mac);
for (idx = 0; idx < param->num_band_to_mac; idx++) {
- band_to_mac = (void *)ptr;
+ band_to_mac = ptr;
band_to_mac->tlv_header = FIELD_PREP(WMI_TLV_TAG,
WMI_TAG_PDEV_BAND_TO_MAC) |
@@ -4291,7 +4291,7 @@ int ath11k_wmi_set_hw_mode(struct ath11k_base *ab,
int ath11k_wmi_cmd_init(struct ath11k_base *ab)
{
- struct ath11k_wmi_base *wmi_sc = &ab->wmi_ab;
+ struct ath11k_wmi_base *wmi_ab = &ab->wmi_ab;
struct wmi_init_cmd_param init_param;
struct target_resource_config config;
@@ -4304,12 +4304,12 @@ int ath11k_wmi_cmd_init(struct ath11k_base *ab)
ab->wmi_ab.svc_map))
config.is_reg_cc_ext_event_supported = 1;
- memcpy(&wmi_sc->wlan_resource_config, &config, sizeof(config));
+ memcpy(&wmi_ab->wlan_resource_config, &config, sizeof(config));
- init_param.res_cfg = &wmi_sc->wlan_resource_config;
- init_param.num_mem_chunks = wmi_sc->num_mem_chunks;
- init_param.hw_mode_id = wmi_sc->preferred_hw_mode;
- init_param.mem_chunks = wmi_sc->mem_chunks;
+ init_param.res_cfg = &wmi_ab->wlan_resource_config;
+ init_param.num_mem_chunks = wmi_ab->num_mem_chunks;
+ init_param.hw_mode_id = wmi_ab->preferred_hw_mode;
+ init_param.mem_chunks = wmi_ab->mem_chunks;
if (ab->hw_params.single_pdev_only)
init_param.hw_mode_id = WMI_HOST_HW_MODE_MAX;
@@ -4317,7 +4317,7 @@ int ath11k_wmi_cmd_init(struct ath11k_base *ab)
init_param.num_band_to_mac = ab->num_radios;
ath11k_fill_band_to_mac_param(ab, init_param.band_to_mac);
- return ath11k_init_cmd_send(&wmi_sc->wmi[0], &init_param);
+ return ath11k_init_cmd_send(&wmi_ab->wmi[0], &init_param);
}
int ath11k_wmi_vdev_spectral_conf(struct ath11k *ar,
@@ -5440,10 +5440,11 @@ static int ath11k_pull_reg_chan_list_ext_update_ev(struct ath11k_base *ab,
}
ath11k_dbg(ab, ATH11K_DBG_WMI,
- "cc_ext %s dsf %d BW: min_2ghz %d max_2ghz %d min_5ghz %d max_5ghz %d",
+ "cc_ext %s dfs %d BW: min_2ghz %d max_2ghz %d min_5ghz %d max_5ghz %d phy_bitmap 0x%x",
reg_info->alpha2, reg_info->dfs_region,
reg_info->min_bw_2ghz, reg_info->max_bw_2ghz,
- reg_info->min_bw_5ghz, reg_info->max_bw_5ghz);
+ reg_info->min_bw_5ghz, reg_info->max_bw_5ghz,
+ reg_info->phybitmap);
ath11k_dbg(ab, ATH11K_DBG_WMI,
"num_2ghz_reg_rules %d num_5ghz_reg_rules %d",
@@ -6452,7 +6453,7 @@ static int ath11k_wmi_tlv_rssi_chain_parse(struct ath11k_base *ab,
goto exit;
}
- arsta = (struct ath11k_sta *)sta->drv_priv;
+ arsta = ath11k_sta_to_arsta(sta);
BUILD_BUG_ON(ARRAY_SIZE(arsta->chain_signal) >
ARRAY_SIZE(stats_rssi->rssi_avg_beacon));
@@ -6540,7 +6541,7 @@ static int ath11k_wmi_tlv_fw_stats_data_parse(struct ath11k_base *ab,
arvif->bssid,
NULL);
if (sta) {
- arsta = (struct ath11k_sta *)sta->drv_priv;
+ arsta = ath11k_sta_to_arsta(sta);
arsta->rssi_beacon = src->beacon_snr;
ath11k_dbg(ab, ATH11K_DBG_WMI,
"stats vdev id %d snr %d\n",
@@ -7222,14 +7223,12 @@ static int ath11k_wmi_tlv_rdy_parse(struct ath11k_base *ab, u16 tag, u16 len,
memset(&fixed_param, 0, sizeof(fixed_param));
memcpy(&fixed_param, (struct wmi_ready_event *)ptr,
min_t(u16, sizeof(fixed_param), len));
- ab->wlan_init_status = fixed_param.ready_event_min.status;
rdy_parse->num_extra_mac_addr =
fixed_param.ready_event_min.num_extra_mac_addr;
ether_addr_copy(ab->mac_addr,
fixed_param.ready_event_min.mac_addr.addr);
ab->pktlog_defs_checksum = fixed_param.pktlog_defs_checksum;
- ab->wmi_ready = true;
break;
case WMI_TAG_ARRAY_FIXED_STRUCT:
addr_list = (struct wmi_mac_addr *)ptr;
@@ -7469,7 +7468,7 @@ static void ath11k_wmi_event_peer_sta_ps_state_chg(struct ath11k_base *ab,
goto exit;
}
- arsta = (struct ath11k_sta *)sta->drv_priv;
+ arsta = ath11k_sta_to_arsta(sta);
spin_lock_bh(&ar->data_lock);
@@ -8337,6 +8336,8 @@ ath11k_wmi_pdev_dfs_radar_detected_event(struct ath11k_base *ab, struct sk_buff
ev->detector_id, ev->segment_id, ev->timestamp, ev->is_chirp,
ev->freq_offset, ev->sidx);
+ rcu_read_lock();
+
ar = ath11k_mac_get_ar_by_pdev_id(ab, ev->pdev_id);
if (!ar) {
@@ -8354,6 +8355,8 @@ ath11k_wmi_pdev_dfs_radar_detected_event(struct ath11k_base *ab, struct sk_buff
ieee80211_radar_detected(ar->hw);
exit:
+ rcu_read_unlock();
+
kfree(tb);
}
@@ -8383,15 +8386,19 @@ ath11k_wmi_pdev_temperature_event(struct ath11k_base *ab,
ath11k_dbg(ab, ATH11K_DBG_WMI, "event pdev temperature ev temp %d pdev_id %d\n",
ev->temp, ev->pdev_id);
+ rcu_read_lock();
+
ar = ath11k_mac_get_ar_by_pdev_id(ab, ev->pdev_id);
if (!ar) {
ath11k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev->pdev_id);
- kfree(tb);
- return;
+ goto exit;
}
ath11k_thermal_event_temperature(ar, ev->temp);
+exit:
+ rcu_read_unlock();
+
kfree(tb);
}
@@ -8611,12 +8618,13 @@ static void ath11k_wmi_gtk_offload_status_event(struct ath11k_base *ab,
return;
}
+ rcu_read_lock();
+
arvif = ath11k_mac_get_arvif_by_vdev_id(ab, ev->vdev_id);
if (!arvif) {
ath11k_warn(ab, "failed to get arvif for vdev_id:%d\n",
ev->vdev_id);
- kfree(tb);
- return;
+ goto exit;
}
ath11k_dbg(ab, ATH11K_DBG_WMI, "event gtk offload refresh_cnt %d\n",
@@ -8633,6 +8641,8 @@ static void ath11k_wmi_gtk_offload_status_event(struct ath11k_base *ab,
ieee80211_gtk_rekey_notify(arvif->vif, arvif->bssid,
(void *)&replay_ctr_be, GFP_ATOMIC);
+exit:
+ rcu_read_unlock();
kfree(tb);
}
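Several of the wmi.c event handlers above now take rcu_read_lock() before looking up the radio by pdev/vdev id and route every return through a single exit label so the lock is always dropped. The shape of that pattern, reduced to a sketch with placeholder types (the lookup here merely stands in for the driver's RCU-protected pdev walk):

#include <linux/kernel.h>
#include <linux/rcupdate.h>

struct example_radio {
	int pdev_id;
};

static struct example_radio example_radios[2];

/* Hypothetical lookup standing in for the driver's RCU-protected pdev walk. */
static struct example_radio *example_find_radio(int pdev_id)
{
	if (pdev_id < 0 || pdev_id >= ARRAY_SIZE(example_radios))
		return NULL;

	return &example_radios[pdev_id];
}

static void example_event_handler(int pdev_id)
{
	struct example_radio *ar;

	rcu_read_lock();

	ar = example_find_radio(pdev_id);
	if (!ar)
		goto exit;	/* every path below drops the read lock */

	/* ... consume the event while the radio cannot be freed ... */

exit:
	rcu_read_unlock();
}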
diff --git a/drivers/net/wireless/ath/ath12k/core.c b/drivers/net/wireless/ath/ath12k/core.c
index 3df8059d5512..b936760b5140 100644
--- a/drivers/net/wireless/ath/ath12k/core.c
+++ b/drivers/net/wireless/ath/ath12k/core.c
@@ -19,6 +19,27 @@ unsigned int ath12k_debug_mask;
module_param_named(debug_mask, ath12k_debug_mask, uint, 0644);
MODULE_PARM_DESC(debug_mask, "Debugging mask");
+static int ath12k_core_rfkill_config(struct ath12k_base *ab)
+{
+ struct ath12k *ar;
+ int ret = 0, i;
+
+ if (!(ab->target_caps.sys_cap_info & WMI_SYS_CAP_INFO_RFKILL))
+ return 0;
+
+ for (i = 0; i < ab->num_radios; i++) {
+ ar = ab->pdevs[i].ar;
+
+ ret = ath12k_mac_rfkill_config(ar);
+ if (ret && ret != -EOPNOTSUPP) {
+ ath12k_warn(ab, "failed to configure rfkill: %d", ret);
+ return ret;
+ }
+ }
+
+ return ret;
+}
+
int ath12k_core_suspend(struct ath12k_base *ab)
{
int ret;
@@ -339,6 +360,7 @@ int ath12k_core_fetch_board_data_api_1(struct ath12k_base *ab,
int ath12k_core_fetch_bdf(struct ath12k_base *ab, struct ath12k_board_data *bd)
{
char boardname[BOARD_NAME_SIZE];
+ int bd_api;
int ret;
ret = ath12k_core_create_board_name(ab, boardname, BOARD_NAME_SIZE);
@@ -347,12 +369,12 @@ int ath12k_core_fetch_bdf(struct ath12k_base *ab, struct ath12k_board_data *bd)
return ret;
}
- ab->bd_api = 2;
+ bd_api = 2;
ret = ath12k_core_fetch_board_data_api_n(ab, bd, boardname);
if (!ret)
goto success;
- ab->bd_api = 1;
+ bd_api = 1;
ret = ath12k_core_fetch_board_data_api_1(ab, bd, ATH12K_DEFAULT_BOARD_FILE);
if (ret) {
ath12k_err(ab, "failed to fetch board-2.bin or board.bin from %s\n",
@@ -361,7 +383,7 @@ int ath12k_core_fetch_bdf(struct ath12k_base *ab, struct ath12k_board_data *bd)
}
success:
- ath12k_dbg(ab, ATH12K_DBG_BOOT, "using board api %d\n", ab->bd_api);
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "using board api %d\n", bd_api);
return 0;
}
@@ -377,6 +399,75 @@ static void ath12k_core_stop(struct ath12k_base *ab)
/* De-Init of components as needed */
}
+static void ath12k_core_check_bdfext(const struct dmi_header *hdr, void *data)
+{
+ struct ath12k_base *ab = data;
+ const char *magic = ATH12K_SMBIOS_BDF_EXT_MAGIC;
+ struct ath12k_smbios_bdf *smbios = (struct ath12k_smbios_bdf *)hdr;
+ ssize_t copied;
+ size_t len;
+ int i;
+
+ if (ab->qmi.target.bdf_ext[0] != '\0')
+ return;
+
+ if (hdr->type != ATH12K_SMBIOS_BDF_EXT_TYPE)
+ return;
+
+ if (hdr->length != ATH12K_SMBIOS_BDF_EXT_LENGTH) {
+ ath12k_dbg(ab, ATH12K_DBG_BOOT,
+ "wrong smbios bdf ext type length (%d).\n",
+ hdr->length);
+ return;
+ }
+
+ if (!smbios->bdf_enabled) {
+ ath12k_dbg(ab, ATH12K_DBG_BOOT, "bdf variant name not found.\n");
+ return;
+ }
+
+ /* Only one string exists (per spec) */
+ if (memcmp(smbios->bdf_ext, magic, strlen(magic)) != 0) {
+ ath12k_dbg(ab, ATH12K_DBG_BOOT,
+ "bdf variant magic does not match.\n");
+ return;
+ }
+
+ len = min_t(size_t,
+ strlen(smbios->bdf_ext), sizeof(ab->qmi.target.bdf_ext));
+ for (i = 0; i < len; i++) {
+ if (!isascii(smbios->bdf_ext[i]) || !isprint(smbios->bdf_ext[i])) {
+ ath12k_dbg(ab, ATH12K_DBG_BOOT,
+ "bdf variant name contains non ascii chars.\n");
+ return;
+ }
+ }
+
+ /* Copy extension name without magic prefix */
+ copied = strscpy(ab->qmi.target.bdf_ext, smbios->bdf_ext + strlen(magic),
+ sizeof(ab->qmi.target.bdf_ext));
+ if (copied < 0) {
+ ath12k_dbg(ab, ATH12K_DBG_BOOT,
+ "bdf variant string is longer than the buffer can accommodate\n");
+ return;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_BOOT,
+ "found and validated bdf variant smbios_type 0x%x bdf %s\n",
+ ATH12K_SMBIOS_BDF_EXT_TYPE, ab->qmi.target.bdf_ext);
+}
+
+int ath12k_core_check_smbios(struct ath12k_base *ab)
+{
+ ab->qmi.target.bdf_ext[0] = '\0';
+ dmi_walk(ath12k_core_check_bdfext, ab);
+
+ if (ab->qmi.target.bdf_ext[0] == '\0')
+ return -ENODATA;
+
+ return 0;
+}
+
static int ath12k_core_soc_create(struct ath12k_base *ab)
{
int ret;
@@ -603,6 +694,13 @@ int ath12k_core_qmi_firmware_ready(struct ath12k_base *ab)
goto err_core_stop;
}
ath12k_hif_irq_enable(ab);
+
+ ret = ath12k_core_rfkill_config(ab);
+ if (ret && ret != -EOPNOTSUPP) {
+ ath12k_err(ab, "failed to config rfkill: %d\n", ret);
+ goto err_core_stop;
+ }
+
mutex_unlock(&ab->core_lock);
return 0;
@@ -655,6 +753,27 @@ err_hal_srng_deinit:
return ret;
}
+static void ath12k_rfkill_work(struct work_struct *work)
+{
+ struct ath12k_base *ab = container_of(work, struct ath12k_base, rfkill_work);
+ struct ath12k *ar;
+ bool rfkill_radio_on;
+ int i;
+
+ spin_lock_bh(&ab->base_lock);
+ rfkill_radio_on = ab->rfkill_radio_on;
+ spin_unlock_bh(&ab->base_lock);
+
+ for (i = 0; i < ab->num_radios; i++) {
+ ar = ab->pdevs[i].ar;
+ if (!ar)
+ continue;
+
+ ath12k_mac_rfkill_enable_radio(ar, rfkill_radio_on);
+ wiphy_rfkill_set_hw_state(ar->hw->wiphy, !rfkill_radio_on);
+ }
+}
+
void ath12k_core_halt(struct ath12k *ar)
{
struct ath12k_base *ab = ar->ab;
@@ -668,6 +787,7 @@ void ath12k_core_halt(struct ath12k *ar)
ath12k_mac_peer_cleanup_all(ar);
cancel_delayed_work_sync(&ar->scan.timeout);
cancel_work_sync(&ar->regd_update_work);
+ cancel_work_sync(&ab->rfkill_work);
rcu_assign_pointer(ab->pdevs_active[ar->pdev_idx], NULL);
synchronize_rcu();
@@ -685,6 +805,9 @@ static void ath12k_core_pre_reconfigure_recovery(struct ath12k_base *ab)
ab->stats.fw_crash_counter++;
spin_unlock_bh(&ab->base_lock);
+ if (ab->is_reset)
+ set_bit(ATH12K_FLAG_CRASH_FLUSH, &ab->dev_flags);
+
for (i = 0; i < ab->num_radios; i++) {
pdev = &ab->pdevs[i];
ar = pdev->ar;
@@ -823,6 +946,8 @@ static void ath12k_core_reset(struct work_struct *work)
ath12k_dbg(ab, ATH12K_DBG_BOOT, "reset starting\n");
ab->is_reset = true;
+ atomic_set(&ab->recovery_start_count, 0);
+ reinit_completion(&ab->recovery_start);
atomic_set(&ab->recovery_count, 0);
ath12k_core_pre_reconfigure_recovery(ab);
@@ -830,15 +955,13 @@ static void ath12k_core_reset(struct work_struct *work)
reinit_completion(&ab->reconfigure_complete);
ath12k_core_post_reconfigure_recovery(ab);
- reinit_completion(&ab->recovery_start);
- atomic_set(&ab->recovery_start_count, 0);
-
ath12k_dbg(ab, ATH12K_DBG_BOOT, "waiting recovery start...\n");
time_left = wait_for_completion_timeout(&ab->recovery_start,
ATH12K_RECOVER_START_TIMEOUT_HZ);
ath12k_hif_power_down(ab);
+ ath12k_qmi_free_resource(ab);
ath12k_hif_power_up(ab);
ath12k_dbg(ab, ATH12K_DBG_BOOT, "reset started\n");
@@ -922,6 +1045,8 @@ struct ath12k_base *ath12k_core_alloc(struct device *dev, size_t priv_size,
init_waitqueue_head(&ab->wmi_ab.tx_credits_wq);
INIT_WORK(&ab->restart_work, ath12k_core_restart);
INIT_WORK(&ab->reset_work, ath12k_core_reset);
+ INIT_WORK(&ab->rfkill_work, ath12k_rfkill_work);
+
timer_setup(&ab->rx_replenish_retry, ath12k_ce_rx_replenish_retry, 0);
init_completion(&ab->htc_suspend);
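Among the core.c additions above is an SMBIOS walker that extracts a board-data-file variant string from a vendor DMI entry: it matches the entry type, checks the "BDF_" magic prefix, and copies the remainder with strscpy() so truncation is reported rather than silent. A condensed sketch of that walk (type number, structure layout and buffer size mirror the hunk but are used here only for illustration):

#include <linux/dmi.h>
#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>

#define EXAMPLE_SMBIOS_BDF_TYPE		0xF8
#define EXAMPLE_SMBIOS_BDF_MAGIC	"BDF_"

struct example_smbios_bdf {
	struct dmi_header hdr;
	u32 padding;
	u8 bdf_enabled;
	u8 bdf_ext[];
} __packed;

static char example_bdf_ext[32];

static void example_check_bdfext(const struct dmi_header *hdr, void *data)
{
	const struct example_smbios_bdf *smbios = (const void *)hdr;
	const char *magic = EXAMPLE_SMBIOS_BDF_MAGIC;

	if (hdr->type != EXAMPLE_SMBIOS_BDF_TYPE || !smbios->bdf_enabled)
		return;

	if (memcmp(smbios->bdf_ext, magic, strlen(magic)) != 0)
		return;

	/* strscpy() returns -E2BIG if the variant name does not fit. */
	if (strscpy(example_bdf_ext, smbios->bdf_ext + strlen(magic),
		    sizeof(example_bdf_ext)) < 0)
		example_bdf_ext[0] = '\0';
}

static int example_check_smbios(void)
{
	example_bdf_ext[0] = '\0';
	dmi_walk(example_check_bdfext, NULL);

	return example_bdf_ext[0] == '\0' ? -ENODATA : 0;
}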
diff --git a/drivers/net/wireless/ath/ath12k/core.h b/drivers/net/wireless/ath/ath12k/core.h
index d873b573dac6..68c42ca44fcb 100644
--- a/drivers/net/wireless/ath/ath12k/core.h
+++ b/drivers/net/wireless/ath/ath12k/core.h
@@ -11,6 +11,8 @@
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/bitfield.h>
+#include <linux/dmi.h>
+#include <linux/ctype.h>
#include "qmi.h"
#include "htc.h"
#include "wmi.h"
@@ -32,6 +34,15 @@
/* Pending management packets threshold for dropping probe responses */
#define ATH12K_PRB_RSP_DROP_THRESHOLD ((ATH12K_TX_MGMT_TARGET_MAX_SUPPORT_WMI * 3) / 4)
+/* SMBIOS type containing Board Data File Name Extension */
+#define ATH12K_SMBIOS_BDF_EXT_TYPE 0xF8
+
+/* SMBIOS type structure length (excluding strings-set) */
+#define ATH12K_SMBIOS_BDF_EXT_LENGTH 0x9
+
+/* The magic used by QCA spec */
+#define ATH12K_SMBIOS_BDF_EXT_MAGIC "BDF_"
+
#define ATH12K_INVALID_HW_MAC_ID 0xFF
#define ATH12K_RX_RATE_TABLE_NUM 320
#define ATH12K_RX_RATE_TABLE_11AX_NUM 576
@@ -129,6 +140,13 @@ struct ath12k_ext_irq_grp {
struct net_device napi_ndev;
};
+struct ath12k_smbios_bdf {
+ struct dmi_header hdr;
+ u32 padding;
+ u8 bdf_enabled;
+ u8 bdf_ext[];
+} __packed;
+
#define HEHANDLE_CAP_PHYINFO_SIZE 3
#define HECAP_PHYINFO_SIZE 9
#define HECAP_MACINFO_SIZE 5
@@ -719,7 +737,6 @@ struct ath12k_base {
struct ath12k_wmi_target_cap_arg target_caps;
u32 ext_service_bitmap[WMI_SERVICE_EXT_BM_SIZE];
bool pdevs_macaddr_valid;
- int bd_api;
const struct ath12k_hw_params *hw_params;
@@ -771,6 +788,10 @@ struct ath12k_base {
u64 fw_soc_drop_count;
bool static_window_map;
+ struct work_struct rfkill_work;
+ /* true means radio is on */
+ bool rfkill_radio_on;
+
/* must be last */
u8 drv_priv[] __aligned(sizeof(void *));
};
@@ -788,7 +809,8 @@ int ath12k_core_fetch_board_data_api_1(struct ath12k_base *ab,
int ath12k_core_fetch_bdf(struct ath12k_base *ath12k,
struct ath12k_board_data *bd);
void ath12k_core_free_bdf(struct ath12k_base *ab, struct ath12k_board_data *bd);
-
+int ath12k_core_check_dt(struct ath12k_base *ath12k);
+int ath12k_core_check_smbios(struct ath12k_base *ab);
void ath12k_core_halt(struct ath12k *ar);
int ath12k_core_resume(struct ath12k_base *ab);
int ath12k_core_suspend(struct ath12k_base *ab);
@@ -830,6 +852,11 @@ static inline struct ath12k_vif *ath12k_vif_to_arvif(struct ieee80211_vif *vif)
return (struct ath12k_vif *)vif->drv_priv;
}
+static inline struct ath12k_sta *ath12k_sta_to_arsta(struct ieee80211_sta *sta)
+{
+ return (struct ath12k_sta *)sta->drv_priv;
+}
+
static inline struct ath12k *ath12k_ab_to_ar(struct ath12k_base *ab,
int mac_id)
{
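Most of the mechanical churn in the ath12k files below replaces open-coded (struct ath12k_sta *)sta->drv_priv casts with the ath12k_sta_to_arsta() inline added above. The idea is just a typed accessor over mac80211's per-station private area; a generic sketch of the same pattern for a hypothetical driver type:

#include <net/mac80211.h>

struct example_sta {
	u32 rx_duration;
	/* ... per-station driver state ... */
};

/* mac80211 sizes sta->drv_priv from hw->sta_data_size at registration;
 * keeping the cast behind one inline documents that relationship and
 * avoids scattering (struct example_sta *) casts through the driver.
 */
static inline struct example_sta *example_sta_to_priv(struct ieee80211_sta *sta)
{
	return (struct example_sta *)sta->drv_priv;
}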
diff --git a/drivers/net/wireless/ath/ath12k/debug.c b/drivers/net/wireless/ath/ath12k/debug.c
index 67893923e010..45d33279e665 100644
--- a/drivers/net/wireless/ath/ath12k/debug.c
+++ b/drivers/net/wireless/ath/ath12k/debug.c
@@ -64,7 +64,7 @@ void __ath12k_dbg(struct ath12k_base *ab, enum ath12k_debug_mask mask,
vaf.va = &args;
if (ath12k_debug_mask & mask)
- dev_dbg(ab->dev, "%pV", &vaf);
+ dev_printk(KERN_DEBUG, ab->dev, "%pV", &vaf);
/* TODO: trace log */
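The debug.c change above switches from dev_dbg() to dev_printk(KERN_DEBUG, ...): dev_dbg() is compiled away (or gated by dynamic debug) regardless of the driver's own debug_mask, while dev_printk() always emits once the mask test has passed. A hedged sketch of the resulting behaviour:

#include <linux/device.h>
#include <linux/printk.h>

static unsigned int example_debug_mask;	/* module parameter in the real driver */

static void example_dbg(struct device *dev, unsigned int mask, const char *msg)
{
	if (!(example_debug_mask & mask))
		return;

	/* dev_dbg() here would additionally require DEBUG or dynamic debug to be
	 * enabled; dev_printk(KERN_DEBUG, ...) prints whenever the driver's own
	 * mask check passes.
	 */
	dev_printk(KERN_DEBUG, dev, "%s\n", msg);
}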
diff --git a/drivers/net/wireless/ath/ath12k/dp.c b/drivers/net/wireless/ath/ath12k/dp.c
index f933896f2a68..6893466f61f0 100644
--- a/drivers/net/wireless/ath/ath12k/dp.c
+++ b/drivers/net/wireless/ath/ath12k/dp.c
@@ -38,6 +38,7 @@ void ath12k_dp_peer_cleanup(struct ath12k *ar, int vdev_id, const u8 *addr)
ath12k_dp_rx_peer_tid_cleanup(ar, peer);
crypto_free_shash(peer->tfm_mmic);
+ peer->dp_setup_done = false;
spin_unlock_bh(&ab->base_lock);
}
diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
index f1e57e98bdc6..f44bc5494ce7 100644
--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
+++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
@@ -13,8 +13,7 @@
static void ath12k_dp_mon_rx_handle_ofdma_info(void *rx_tlv,
struct hal_rx_user_status *rx_user_status)
{
- struct hal_rx_ppdu_end_user_stats *ppdu_end_user =
- (struct hal_rx_ppdu_end_user_stats *)rx_tlv;
+ struct hal_rx_ppdu_end_user_stats *ppdu_end_user = rx_tlv;
rx_user_status->ul_ofdma_user_v0_word0 =
__le32_to_cpu(ppdu_end_user->usr_resp_ref);
@@ -23,13 +22,12 @@ static void ath12k_dp_mon_rx_handle_ofdma_info(void *rx_tlv,
}
static void
-ath12k_dp_mon_rx_populate_byte_count(void *rx_tlv, void *ppduinfo,
+ath12k_dp_mon_rx_populate_byte_count(const struct hal_rx_ppdu_end_user_stats *stats,
+ void *ppduinfo,
struct hal_rx_user_status *rx_user_status)
{
- struct hal_rx_ppdu_end_user_stats *ppdu_end_user =
- (struct hal_rx_ppdu_end_user_stats *)rx_tlv;
- u32 mpdu_ok_byte_count = __le32_to_cpu(ppdu_end_user->mpdu_ok_cnt);
- u32 mpdu_err_byte_count = __le32_to_cpu(ppdu_end_user->mpdu_err_cnt);
+ u32 mpdu_ok_byte_count = __le32_to_cpu(stats->mpdu_ok_cnt);
+ u32 mpdu_err_byte_count = __le32_to_cpu(stats->mpdu_err_cnt);
rx_user_status->mpdu_ok_byte_count =
u32_get_bits(mpdu_ok_byte_count,
@@ -2376,7 +2374,7 @@ ath12k_dp_mon_rx_update_user_stats(struct ath12k *ar,
return;
}
- arsta = (struct ath12k_sta *)peer->sta->drv_priv;
+ arsta = ath12k_sta_to_arsta(peer->sta);
rx_stats = arsta->rx_stats;
if (!rx_stats)
@@ -2552,7 +2550,7 @@ int ath12k_dp_mon_rx_process_stats(struct ath12k *ar, int mac_id,
}
if (ppdu_info->reception_type == HAL_RX_RECEPTION_TYPE_SU) {
- arsta = (struct ath12k_sta *)peer->sta->drv_priv;
+ arsta = ath12k_sta_to_arsta(peer->sta);
ath12k_dp_mon_rx_update_peer_su_stats(ar, arsta,
ppdu_info);
} else if ((ppdu_info->fc_valid) &&
diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
index e6e64d437c47..3543fadac4a5 100644
--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
+++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
@@ -1054,7 +1054,7 @@ int ath12k_dp_rx_ampdu_start(struct ath12k *ar,
struct ieee80211_ampdu_params *params)
{
struct ath12k_base *ab = ar->ab;
- struct ath12k_sta *arsta = (void *)params->sta->drv_priv;
+ struct ath12k_sta *arsta = ath12k_sta_to_arsta(params->sta);
int vdev_id = arsta->arvif->vdev_id;
int ret;
@@ -1072,7 +1072,7 @@ int ath12k_dp_rx_ampdu_stop(struct ath12k *ar,
{
struct ath12k_base *ab = ar->ab;
struct ath12k_peer *peer;
- struct ath12k_sta *arsta = (void *)params->sta->drv_priv;
+ struct ath12k_sta *arsta = ath12k_sta_to_arsta(params->sta);
int vdev_id = arsta->arvif->vdev_id;
bool active;
int ret;
@@ -1410,7 +1410,7 @@ ath12k_update_per_peer_tx_stats(struct ath12k *ar,
}
sta = peer->sta;
- arsta = (struct ath12k_sta *)sta->drv_priv;
+ arsta = ath12k_sta_to_arsta(sta);
memset(&arsta->txrate, 0, sizeof(arsta->txrate));
@@ -1555,6 +1555,13 @@ static int ath12k_htt_pull_ppdu_stats(struct ath12k_base *ab,
msg = (struct ath12k_htt_ppdu_stats_msg *)skb->data;
len = le32_get_bits(msg->info, HTT_T2H_PPDU_STATS_INFO_PAYLOAD_SIZE);
+ if (len > (skb->len - struct_size(msg, data, 0))) {
+ ath12k_warn(ab,
+ "HTT PPDU STATS event has unexpected payload size %u, should be smaller than %u\n",
+ len, skb->len);
+ return -EINVAL;
+ }
+
pdev_id = le32_get_bits(msg->info, HTT_T2H_PPDU_STATS_INFO_PDEV_ID);
ppdu_id = le32_to_cpu(msg->ppdu_id);
@@ -1583,6 +1590,16 @@ static int ath12k_htt_pull_ppdu_stats(struct ath12k_base *ab,
goto exit;
}
+ if (ppdu_info->ppdu_stats.common.num_users >= HTT_PPDU_STATS_MAX_USERS) {
+ spin_unlock_bh(&ar->data_lock);
+ ath12k_warn(ab,
+ "HTT PPDU STATS event has unexpected num_users %u, should be smaller than %u\n",
+ ppdu_info->ppdu_stats.common.num_users,
+ HTT_PPDU_STATS_MAX_USERS);
+ ret = -EINVAL;
+ goto exit;
+ }
+
/* back up data rate tlv for all peers */
if (ppdu_info->frame_type == HTT_STATS_PPDU_FTYPE_DATA &&
(ppdu_info->tlv_bitmap & (1 << HTT_PPDU_STATS_TAG_USR_COMMON)) &&
@@ -1641,11 +1658,12 @@ static void ath12k_htt_mlo_offset_event_handler(struct ath12k_base *ab,
msg = (struct ath12k_htt_mlo_offset_msg *)skb->data;
pdev_id = u32_get_bits(__le32_to_cpu(msg->info),
HTT_T2H_MLO_OFFSET_INFO_PDEV_ID);
- ar = ath12k_mac_get_ar_by_pdev_id(ab, pdev_id);
+ rcu_read_lock();
+ ar = ath12k_mac_get_ar_by_pdev_id(ab, pdev_id);
if (!ar) {
ath12k_warn(ab, "invalid pdev id %d on htt mlo offset\n", pdev_id);
- return;
+ goto exit;
}
spin_lock_bh(&ar->data_lock);
@@ -1661,6 +1679,8 @@ static void ath12k_htt_mlo_offset_event_handler(struct ath12k_base *ab,
pdev->timestamp.mlo_comp_timer = __le32_to_cpu(msg->mlo_comp_timer);
spin_unlock_bh(&ar->data_lock);
+exit:
+ rcu_read_unlock();
}
void ath12k_dp_htt_htc_t2h_msg_handler(struct ath12k_base *ab,
@@ -2748,6 +2768,7 @@ int ath12k_dp_rx_peer_frag_setup(struct ath12k *ar, const u8 *peer_mac, int vdev
}
peer->tfm_mmic = tfm;
+ peer->dp_setup_done = true;
spin_unlock_bh(&ab->base_lock);
return 0;
@@ -3214,6 +3235,14 @@ static int ath12k_dp_rx_frag_h_mpdu(struct ath12k *ar,
ret = -ENOENT;
goto out_unlock;
}
+
+ if (!peer->dp_setup_done) {
+ ath12k_warn(ab, "The peer %pM [%d] has uninitialized datapath\n",
+ peer->addr, peer_id);
+ ret = -ENOENT;
+ goto out_unlock;
+ }
+
rx_tid = &peer->rx_tid[tid];
if ((!skb_queue_empty(&rx_tid->rx_frags) && seqno != rx_tid->cur_sn) ||
@@ -3229,7 +3258,7 @@ static int ath12k_dp_rx_frag_h_mpdu(struct ath12k *ar,
goto out_unlock;
}
- if (frag_no > __fls(rx_tid->rx_frag_bitmap))
+ if ((!rx_tid->rx_frag_bitmap || frag_no > __fls(rx_tid->rx_frag_bitmap)))
__skb_queue_tail(&rx_tid->rx_frags, msdu);
else
ath12k_dp_rx_h_sort_frags(ab, &rx_tid->rx_frags, msdu);
@@ -3500,23 +3529,13 @@ static int ath12k_dp_rx_h_null_q_desc(struct ath12k *ar, struct sk_buff *msdu,
struct sk_buff_head *msdu_list)
{
struct ath12k_base *ab = ar->ab;
- u16 msdu_len, peer_id;
+ u16 msdu_len;
struct hal_rx_desc *desc = (struct hal_rx_desc *)msdu->data;
u8 l3pad_bytes;
struct ath12k_skb_rxcb *rxcb = ATH12K_SKB_RXCB(msdu);
u32 hal_rx_desc_sz = ar->ab->hw_params->hal_desc_sz;
msdu_len = ath12k_dp_rx_h_msdu_len(ab, desc);
- peer_id = ath12k_dp_rx_h_peer_id(ab, desc);
-
- spin_lock(&ab->base_lock);
- if (!ath12k_peer_find_by_id(ab, peer_id)) {
- spin_unlock(&ab->base_lock);
- ath12k_dbg(ab, ATH12K_DBG_DATA, "invalid peer id received in wbm err pkt%d\n",
- peer_id);
- return -EINVAL;
- }
- spin_unlock(&ab->base_lock);
if (!rxcb->is_frag && ((msdu_len + hal_rx_desc_sz) > DP_RX_BUFFER_SIZE)) {
/* First buffer will be freed by the caller, so deduct it's length */
@@ -3730,7 +3749,7 @@ int ath12k_dp_rx_process_wbm_err(struct ath12k_base *ab,
continue;
}
- desc_info = (struct ath12k_rx_desc_info *)err_info.rx_desc;
+ desc_info = err_info.rx_desc;
/* retry manual desc retrieval if hw cc is not done */
if (!desc_info) {
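dp_rx.c above starts validating firmware-supplied sizes before trusting them: the HTT PPDU stats payload length is checked against the buffer with struct_size() accounting for the fixed header, and num_users is bounded by the array size. A self-contained sketch of the length check (the message layout is hypothetical; the caller is assumed to have already verified the buffer holds at least the fixed header):

#include <linux/errno.h>
#include <linux/overflow.h>
#include <linux/types.h>

struct example_stats_msg {
	__le32 info;		/* encodes the payload size, among other fields */
	__le32 ppdu_id;
	u8 data[];		/* 'len' bytes of payload follow */
};

/* Returns 0 when 'len' payload bytes fit in a 'buf_len'-byte buffer. */
static int example_validate_payload(const struct example_stats_msg *msg,
				    size_t buf_len, u32 len)
{
	/* struct_size(msg, data, 0) is the size of the fixed header alone;
	 * buf_len >= that header size is a precondition here.
	 */
	if (len > buf_len - struct_size(msg, data, 0))
		return -EINVAL;

	return 0;
}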
diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c
index 8874c815d7fa..492ca6ce6714 100644
--- a/drivers/net/wireless/ath/ath12k/dp_tx.c
+++ b/drivers/net/wireless/ath/ath12k/dp_tx.c
@@ -106,11 +106,10 @@ static struct ath12k_tx_desc_info *ath12k_dp_tx_assign_buffer(struct ath12k_dp *
return desc;
}
-static void ath12k_hal_tx_cmd_ext_desc_setup(struct ath12k_base *ab, void *cmd,
+static void ath12k_hal_tx_cmd_ext_desc_setup(struct ath12k_base *ab,
+ struct hal_tx_msdu_ext_desc *tcl_ext_cmd,
struct hal_tx_info *ti)
{
- struct hal_tx_msdu_ext_desc *tcl_ext_cmd = (struct hal_tx_msdu_ext_desc *)cmd;
-
tcl_ext_cmd->info0 = le32_encode_bits(ti->paddr,
HAL_TX_MSDU_EXT_INFO0_BUF_PTR_LO);
tcl_ext_cmd->info1 = le32_encode_bits(0x0,
@@ -330,8 +329,11 @@ tcl_ring_sel:
fail_unmap_dma:
dma_unmap_single(ab->dev, ti.paddr, ti.data_len, DMA_TO_DEVICE);
- dma_unmap_single(ab->dev, skb_cb->paddr_ext_desc,
- sizeof(struct hal_tx_msdu_ext_desc), DMA_TO_DEVICE);
+
+ if (skb_cb->paddr_ext_desc)
+ dma_unmap_single(ab->dev, skb_cb->paddr_ext_desc,
+ sizeof(struct hal_tx_msdu_ext_desc),
+ DMA_TO_DEVICE);
fail_remove_tx_buf:
ath12k_dp_tx_release_txbuf(dp, tx_desc, pool_id);
@@ -399,7 +401,7 @@ ath12k_dp_tx_htt_tx_complete_buf(struct ath12k_base *ab,
}
}
- ieee80211_tx_status(ar->hw, msdu);
+ ieee80211_tx_status_skb(ar->hw, msdu);
}
static void
@@ -496,7 +498,7 @@ static void ath12k_dp_tx_complete_msdu(struct ath12k *ar,
* Might end up reporting it out-of-band from HTT stats.
*/
- ieee80211_tx_status(ar->hw, msdu);
+ ieee80211_tx_status_skb(ar->hw, msdu);
exit:
rcu_read_unlock();
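The dp_tx.c error path above now unmaps the MSDU extension descriptor only when one was actually mapped (paddr_ext_desc non-zero), instead of unconditionally calling dma_unmap_single() for frames that never set one up. The general shape, with hypothetical fields:

#include <linux/dma-mapping.h>

struct example_tx_cb {
	dma_addr_t paddr;		/* always mapped before transmit */
	dma_addr_t paddr_ext_desc;	/* mapped only for extended descriptors */
};

static void example_tx_error_unwind(struct device *dev, struct example_tx_cb *cb,
				    size_t data_len, size_t ext_len)
{
	dma_unmap_single(dev, cb->paddr, data_len, DMA_TO_DEVICE);

	/* Unmap only what was mapped; in this sketch a zero handle marks a
	 * frame without an extension descriptor.
	 */
	if (cb->paddr_ext_desc)
		dma_unmap_single(dev, cb->paddr_ext_desc, ext_len, DMA_TO_DEVICE);
}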
diff --git a/drivers/net/wireless/ath/ath12k/hal.c b/drivers/net/wireless/ath/ath12k/hal.c
index e7a150e7158e..eca86fc25a60 100644
--- a/drivers/net/wireless/ath/ath12k/hal.c
+++ b/drivers/net/wireless/ath/ath12k/hal.c
@@ -385,13 +385,13 @@ static u8 ath12k_hw_qcn9274_rx_desc_get_msdu_pkt_type(struct hal_rx_desc *desc)
static u8 ath12k_hw_qcn9274_rx_desc_get_msdu_nss(struct hal_rx_desc *desc)
{
return le32_get_bits(desc->u.qcn9274.msdu_end.info12,
- RX_MSDU_END_INFO12_MIMO_SS_BITMAP);
+ RX_MSDU_END_QCN9274_INFO12_MIMO_SS_BITMAP);
}
static u8 ath12k_hw_qcn9274_rx_desc_get_mpdu_tid(struct hal_rx_desc *desc)
{
return le16_get_bits(desc->u.qcn9274.msdu_end.info5,
- RX_MSDU_END_INFO5_TID);
+ RX_MSDU_END_QCN9274_INFO5_TID);
}
static u16 ath12k_hw_qcn9274_rx_desc_get_mpdu_peer_id(struct hal_rx_desc *desc)
@@ -819,13 +819,13 @@ static u8 ath12k_hw_wcn7850_rx_desc_get_msdu_pkt_type(struct hal_rx_desc *desc)
static u8 ath12k_hw_wcn7850_rx_desc_get_msdu_nss(struct hal_rx_desc *desc)
{
return le32_get_bits(desc->u.wcn7850.msdu_end.info12,
- RX_MSDU_END_INFO12_MIMO_SS_BITMAP);
+ RX_MSDU_END_WCN7850_INFO12_MIMO_SS_BITMAP);
}
static u8 ath12k_hw_wcn7850_rx_desc_get_mpdu_tid(struct hal_rx_desc *desc)
{
- return le16_get_bits(desc->u.wcn7850.msdu_end.info5,
- RX_MSDU_END_INFO5_TID);
+ return le32_get_bits(desc->u.wcn7850.mpdu_start.info2,
+ RX_MPDU_START_INFO2_TID);
}
static u16 ath12k_hw_wcn7850_rx_desc_get_mpdu_peer_id(struct hal_rx_desc *desc)
@@ -837,7 +837,7 @@ static void ath12k_hw_wcn7850_rx_desc_copy_end_tlv(struct hal_rx_desc *fdesc,
struct hal_rx_desc *ldesc)
{
memcpy(&fdesc->u.wcn7850.msdu_end, &ldesc->u.wcn7850.msdu_end,
- sizeof(struct rx_msdu_end_qcn9274));
+ sizeof(struct rx_msdu_end_wcn7850));
}
static u32 ath12k_hw_wcn7850_rx_desc_get_mpdu_start_tag(struct hal_rx_desc *desc)
diff --git a/drivers/net/wireless/ath/ath12k/hal_rx.c b/drivers/net/wireless/ath/ath12k/hal_rx.c
index ee61a6462fdc..f6afbd8196bf 100644
--- a/drivers/net/wireless/ath/ath12k/hal_rx.c
+++ b/drivers/net/wireless/ath/ath12k/hal_rx.c
@@ -713,8 +713,6 @@ void ath12k_hal_reo_qdesc_setup(struct hal_rx_reo_queue *qdesc,
{
struct hal_rx_reo_queue_ext *ext_desc;
- memset(qdesc, 0, sizeof(*qdesc));
-
ath12k_hal_reo_set_desc_hdr(&qdesc->desc_hdr, HAL_DESC_REO_OWNED,
HAL_DESC_REO_QUEUE_DESC,
REO_QUEUE_DESC_MAGIC_DEBUG_PATTERN_0);
diff --git a/drivers/net/wireless/ath/ath12k/hif.h b/drivers/net/wireless/ath/ath12k/hif.h
index 54490cdb63a1..4095fd82b1b3 100644
--- a/drivers/net/wireless/ath/ath12k/hif.h
+++ b/drivers/net/wireless/ath/ath12k/hif.h
@@ -10,17 +10,17 @@
#include "core.h"
struct ath12k_hif_ops {
- u32 (*read32)(struct ath12k_base *sc, u32 address);
- void (*write32)(struct ath12k_base *sc, u32 address, u32 data);
- void (*irq_enable)(struct ath12k_base *sc);
- void (*irq_disable)(struct ath12k_base *sc);
- int (*start)(struct ath12k_base *sc);
- void (*stop)(struct ath12k_base *sc);
- int (*power_up)(struct ath12k_base *sc);
- void (*power_down)(struct ath12k_base *sc);
+ u32 (*read32)(struct ath12k_base *ab, u32 address);
+ void (*write32)(struct ath12k_base *ab, u32 address, u32 data);
+ void (*irq_enable)(struct ath12k_base *ab);
+ void (*irq_disable)(struct ath12k_base *ab);
+ int (*start)(struct ath12k_base *ab);
+ void (*stop)(struct ath12k_base *ab);
+ int (*power_up)(struct ath12k_base *ab);
+ void (*power_down)(struct ath12k_base *ab);
int (*suspend)(struct ath12k_base *ab);
int (*resume)(struct ath12k_base *ab);
- int (*map_service_to_pipe)(struct ath12k_base *sc, u16 service_id,
+ int (*map_service_to_pipe)(struct ath12k_base *ab, u16 service_id,
u8 *ul_pipe, u8 *dl_pipe);
int (*get_user_msi_vector)(struct ath12k_base *ab, char *user_name,
int *num_vectors, u32 *user_base_data,
diff --git a/drivers/net/wireless/ath/ath12k/hw.c b/drivers/net/wireless/ath/ath12k/hw.c
index 5991cc91cd00..2245fb510ba2 100644
--- a/drivers/net/wireless/ath/ath12k/hw.c
+++ b/drivers/net/wireless/ath/ath12k/hw.c
@@ -886,7 +886,8 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
.vdev_start_delay = false,
.interface_modes = BIT(NL80211_IFTYPE_STATION) |
- BIT(NL80211_IFTYPE_AP),
+ BIT(NL80211_IFTYPE_AP) |
+ BIT(NL80211_IFTYPE_MESH_POINT),
.supports_monitor = false,
.idle_ps = false,
@@ -907,6 +908,12 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
.hal_ops = &hal_qcn9274_ops,
.qmi_cnss_feature_bitmap = BIT(CNSS_QDSS_CFG_MISS_V01),
+
+ .rfkill_pin = 0,
+ .rfkill_cfg = 0,
+ .rfkill_on_level = 0,
+
+ .rddm_size = 0,
},
{
.name = "wcn7850 hw2.0",
@@ -964,6 +971,12 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
.qmi_cnss_feature_bitmap = BIT(CNSS_QDSS_CFG_MISS_V01) |
BIT(CNSS_PCIE_PERST_NO_PULL_V01),
+
+ .rfkill_pin = 48,
+ .rfkill_cfg = 0,
+ .rfkill_on_level = 1,
+
+ .rddm_size = 0x780000,
},
{
.name = "qcn9274 hw2.0",
@@ -998,7 +1011,8 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
.vdev_start_delay = false,
.interface_modes = BIT(NL80211_IFTYPE_STATION) |
- BIT(NL80211_IFTYPE_AP),
+ BIT(NL80211_IFTYPE_AP) |
+ BIT(NL80211_IFTYPE_MESH_POINT),
.supports_monitor = false,
.idle_ps = false,
@@ -1019,6 +1033,12 @@ static const struct ath12k_hw_params ath12k_hw_params[] = {
.hal_ops = &hal_qcn9274_ops,
.qmi_cnss_feature_bitmap = BIT(CNSS_QDSS_CFG_MISS_V01),
+
+ .rfkill_pin = 0,
+ .rfkill_cfg = 0,
+ .rfkill_on_level = 0,
+
+ .rddm_size = 0,
},
};
diff --git a/drivers/net/wireless/ath/ath12k/hw.h b/drivers/net/wireless/ath/ath12k/hw.h
index e6c4223c283c..2d6427cf41a4 100644
--- a/drivers/net/wireless/ath/ath12k/hw.h
+++ b/drivers/net/wireless/ath/ath12k/hw.h
@@ -186,6 +186,12 @@ struct ath12k_hw_params {
const struct hal_ops *hal_ops;
u64 qmi_cnss_feature_bitmap;
+
+ u32 rfkill_pin;
+ u32 rfkill_cfg;
+ u32 rfkill_on_level;
+
+ u32 rddm_size;
};
struct ath12k_hw_ops {
diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
index 88346e66bb75..fc0d14ea328e 100644
--- a/drivers/net/wireless/ath/ath12k/mac.c
+++ b/drivers/net/wireless/ath/ath12k/mac.c
@@ -523,7 +523,7 @@ static void ath12k_get_arvif_iter(void *data, u8 *mac,
struct ieee80211_vif *vif)
{
struct ath12k_vif_iter *arvif_iter = data;
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
if (arvif->vdev_id == arvif_iter->vdev_id)
arvif_iter->arvif = arvif;
@@ -1208,7 +1208,7 @@ static void ath12k_peer_assoc_h_basic(struct ath12k *ar,
struct ieee80211_sta *sta,
struct ath12k_wmi_peer_assoc_arg *arg)
{
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
u32 aid;
lockdep_assert_held(&ar->conf_mutex);
@@ -1236,7 +1236,7 @@ static void ath12k_peer_assoc_h_crypto(struct ath12k *ar,
struct ieee80211_bss_conf *info = &vif->bss_conf;
struct cfg80211_chan_def def;
struct cfg80211_bss *bss;
- struct ath12k_vif *arvif = (struct ath12k_vif *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
const u8 *rsnie = NULL;
const u8 *wpaie = NULL;
@@ -1294,7 +1294,7 @@ static void ath12k_peer_assoc_h_rates(struct ath12k *ar,
struct ieee80211_sta *sta,
struct ath12k_wmi_peer_assoc_arg *arg)
{
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
struct wmi_rate_set_arg *rateset = &arg->peer_legacy_rates;
struct cfg80211_chan_def def;
const struct ieee80211_supported_band *sband;
@@ -1357,7 +1357,7 @@ static void ath12k_peer_assoc_h_ht(struct ath12k *ar,
struct ath12k_wmi_peer_assoc_arg *arg)
{
const struct ieee80211_sta_ht_cap *ht_cap = &sta->deflink.ht_cap;
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
struct cfg80211_chan_def def;
enum nl80211_band band;
const u8 *ht_mcs_mask;
@@ -1518,7 +1518,7 @@ static void ath12k_peer_assoc_h_vht(struct ath12k *ar,
struct ath12k_wmi_peer_assoc_arg *arg)
{
const struct ieee80211_sta_vht_cap *vht_cap = &sta->deflink.vht_cap;
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
struct cfg80211_chan_def def;
enum nl80211_band band;
const u16 *vht_mcs_mask;
@@ -1793,7 +1793,7 @@ static void ath12k_peer_assoc_h_qos(struct ath12k *ar,
struct ieee80211_sta *sta,
struct ath12k_wmi_peer_assoc_arg *arg)
{
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
switch (arvif->vdev_type) {
case WMI_VDEV_TYPE_AP:
@@ -1991,7 +1991,7 @@ static void ath12k_peer_assoc_h_phymode(struct ath12k *ar,
struct ieee80211_sta *sta,
struct ath12k_wmi_peer_assoc_arg *arg)
{
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
struct cfg80211_chan_def def;
enum nl80211_band band;
const u8 *ht_mcs_mask;
@@ -2140,7 +2140,7 @@ static void ath12k_peer_assoc_h_eht(struct ath12k *ar,
const struct ieee80211_sta_he_cap *he_cap = &sta->deflink.he_cap;
const struct ieee80211_eht_mcs_nss_supp_20mhz_only *bw_20;
const struct ieee80211_eht_mcs_nss_supp_bw *bw;
- struct ath12k_vif *arvif = (struct ath12k_vif *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
u32 *rx_mcs, *tx_mcs;
if (!sta->deflink.he_cap.has_he || !eht_cap->has_eht)
@@ -2266,7 +2266,7 @@ static void ath12k_bss_assoc(struct ieee80211_hw *hw,
struct ieee80211_bss_conf *bss_conf)
{
struct ath12k *ar = hw->priv;
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
struct ath12k_wmi_peer_assoc_arg peer_arg;
struct ieee80211_sta *ap_sta;
struct ath12k_peer *peer;
@@ -2360,7 +2360,7 @@ static void ath12k_bss_disassoc(struct ieee80211_hw *hw,
struct ieee80211_vif *vif)
{
struct ath12k *ar = hw->priv;
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
int ret;
lockdep_assert_held(&ar->conf_mutex);
@@ -2407,7 +2407,7 @@ static void ath12k_recalculate_mgmt_rate(struct ath12k *ar,
struct ieee80211_vif *vif,
struct cfg80211_chan_def *def)
{
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
const struct ieee80211_supported_band *sband;
u8 basic_rate_idx;
int hw_rate_code;
@@ -2525,7 +2525,7 @@ static void ath12k_mac_op_bss_info_changed(struct ieee80211_hw *hw,
if (changed & BSS_CHANGED_BEACON) {
param_id = WMI_PDEV_PARAM_BEACON_TX_MODE;
- param_value = WMI_BEACON_STAGGERED_MODE;
+ param_value = WMI_BEACON_BURST_MODE;
ret = ath12k_wmi_pdev_set_param(ar, param_id,
param_value, ar->pdev->pdev_id);
if (ret)
@@ -2533,7 +2533,7 @@ static void ath12k_mac_op_bss_info_changed(struct ieee80211_hw *hw,
arvif->vdev_id);
else
ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
- "Set staggered beacon mode for VDEV: %d\n",
+ "Set burst beacon mode for VDEV: %d\n",
arvif->vdev_id);
ret = ath12k_mac_setup_bcn_tmpl(arvif);
@@ -2761,9 +2761,7 @@ static void ath12k_mac_op_bss_info_changed(struct ieee80211_hw *hw,
}
}
- if (changed & BSS_CHANGED_FILS_DISCOVERY ||
- changed & BSS_CHANGED_UNSOL_BCAST_PROBE_RESP)
- ath12k_mac_fils_discovery(arvif, info);
+ ath12k_mac_fils_discovery(arvif, info);
if (changed & BSS_CHANGED_EHT_PUNCTURING)
arvif->punct_bitmap = info->eht_puncturing;
@@ -2780,18 +2778,21 @@ void __ath12k_mac_scan_finish(struct ath12k *ar)
break;
case ATH12K_SCAN_RUNNING:
case ATH12K_SCAN_ABORTING:
+ if (ar->scan.is_roc && ar->scan.roc_notify)
+ ieee80211_remain_on_channel_expired(ar->hw);
+ fallthrough;
+ case ATH12K_SCAN_STARTING:
if (!ar->scan.is_roc) {
struct cfg80211_scan_info info = {
- .aborted = (ar->scan.state ==
- ATH12K_SCAN_ABORTING),
+ .aborted = ((ar->scan.state ==
+ ATH12K_SCAN_ABORTING) ||
+ (ar->scan.state ==
+ ATH12K_SCAN_STARTING)),
};
ieee80211_scan_completed(ar->hw, &info);
- } else if (ar->scan.roc_notify) {
- ieee80211_remain_on_channel_expired(ar->hw);
}
- fallthrough;
- case ATH12K_SCAN_STARTING:
+
ar->scan.state = ATH12K_SCAN_IDLE;
ar->scan_channel = NULL;
ar->scan.roc_freq = 0;
@@ -3246,7 +3247,7 @@ static int ath12k_mac_op_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
ath12k_warn(ab, "peer %pM disappeared!\n", peer_addr);
if (sta) {
- arsta = (struct ath12k_sta *)sta->drv_priv;
+ arsta = ath12k_sta_to_arsta(sta);
switch (key->cipher) {
case WLAN_CIPHER_SUITE_TKIP:
@@ -3419,7 +3420,7 @@ static int ath12k_station_disassoc(struct ath12k *ar,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta)
{
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
int ret;
lockdep_assert_held(&ar->conf_mutex);
@@ -3636,7 +3637,7 @@ static int ath12k_mac_station_add(struct ath12k *ar,
{
struct ath12k_base *ab = ar->ab;
struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
- struct ath12k_sta *arsta = (struct ath12k_sta *)sta->drv_priv;
+ struct ath12k_sta *arsta = ath12k_sta_to_arsta(sta);
struct ath12k_wmi_peer_create_arg peer_param;
int ret;
@@ -3743,7 +3744,7 @@ static int ath12k_mac_op_sta_state(struct ieee80211_hw *hw,
{
struct ath12k *ar = hw->priv;
struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
- struct ath12k_sta *arsta = (struct ath12k_sta *)sta->drv_priv;
+ struct ath12k_sta *arsta = ath12k_sta_to_arsta(sta);
struct ath12k_peer *peer;
int ret = 0;
@@ -3855,7 +3856,7 @@ static int ath12k_mac_op_sta_set_txpwr(struct ieee80211_hw *hw,
struct ieee80211_sta *sta)
{
struct ath12k *ar = hw->priv;
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
int ret;
s16 txpwr;
@@ -3891,8 +3892,8 @@ static void ath12k_mac_op_sta_rc_update(struct ieee80211_hw *hw,
u32 changed)
{
struct ath12k *ar = hw->priv;
- struct ath12k_sta *arsta = (struct ath12k_sta *)sta->drv_priv;
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_sta *arsta = ath12k_sta_to_arsta(sta);
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
struct ath12k_peer *peer;
u32 bw, smps;
@@ -4018,7 +4019,7 @@ static int ath12k_mac_op_conf_tx(struct ieee80211_hw *hw,
const struct ieee80211_tx_queue_params *params)
{
struct ath12k *ar = hw->priv;
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
struct wmi_wmm_params_arg *p = NULL;
int ret;
@@ -4553,7 +4554,50 @@ static void ath12k_mac_copy_eht_ppe_thresh(struct ath12k_wmi_ppe_threshold_arg *
}
}
-static void ath12k_mac_copy_eht_cap(struct ath12k_band_cap *band_cap,
+static void
+ath12k_mac_filter_eht_cap_mesh(struct ieee80211_eht_cap_elem_fixed
+ *eht_cap_elem)
+{
+ u8 m;
+
+ m = IEEE80211_EHT_MAC_CAP0_EPCS_PRIO_ACCESS;
+ eht_cap_elem->mac_cap_info[0] &= ~m;
+
+ m = IEEE80211_EHT_PHY_CAP0_PARTIAL_BW_UL_MU_MIMO;
+ eht_cap_elem->phy_cap_info[0] &= ~m;
+
+ m = IEEE80211_EHT_PHY_CAP3_NG_16_MU_FEEDBACK |
+ IEEE80211_EHT_PHY_CAP3_CODEBOOK_7_5_MU_FDBK |
+ IEEE80211_EHT_PHY_CAP3_TRIG_MU_BF_PART_BW_FDBK |
+ IEEE80211_EHT_PHY_CAP3_TRIG_CQI_FDBK;
+ eht_cap_elem->phy_cap_info[3] &= ~m;
+
+ m = IEEE80211_EHT_PHY_CAP4_PART_BW_DL_MU_MIMO |
+ IEEE80211_EHT_PHY_CAP4_PSR_SR_SUPP |
+ IEEE80211_EHT_PHY_CAP4_POWER_BOOST_FACT_SUPP |
+ IEEE80211_EHT_PHY_CAP4_EHT_MU_PPDU_4_EHT_LTF_08_GI;
+ eht_cap_elem->phy_cap_info[4] &= ~m;
+
+ m = IEEE80211_EHT_PHY_CAP5_NON_TRIG_CQI_FEEDBACK |
+ IEEE80211_EHT_PHY_CAP5_TX_LESS_242_TONE_RU_SUPP |
+ IEEE80211_EHT_PHY_CAP5_RX_LESS_242_TONE_RU_SUPP |
+ IEEE80211_EHT_PHY_CAP5_MAX_NUM_SUPP_EHT_LTF_MASK;
+ eht_cap_elem->phy_cap_info[5] &= ~m;
+
+ m = IEEE80211_EHT_PHY_CAP6_MAX_NUM_SUPP_EHT_LTF_MASK;
+ eht_cap_elem->phy_cap_info[6] &= ~m;
+
+ m = IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_80MHZ |
+ IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_160MHZ |
+ IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_320MHZ |
+ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_80MHZ |
+ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_160MHZ |
+ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_320MHZ;
+ eht_cap_elem->phy_cap_info[7] &= ~m;
+}
+
+static void ath12k_mac_copy_eht_cap(struct ath12k *ar,
+ struct ath12k_band_cap *band_cap,
struct ieee80211_he_cap_elem *he_cap_elem,
int iftype,
struct ieee80211_sta_eht_cap *eht_cap)
@@ -4561,6 +4605,10 @@ static void ath12k_mac_copy_eht_cap(struct ath12k_band_cap *band_cap,
struct ieee80211_eht_cap_elem_fixed *eht_cap_elem = &eht_cap->eht_cap_elem;
memset(eht_cap, 0, sizeof(struct ieee80211_sta_eht_cap));
+
+ if (!(test_bit(WMI_TLV_SERVICE_11BE, ar->ab->wmi_ab.svc_map)))
+ return;
+
eht_cap->has_eht = true;
memcpy(eht_cap_elem->mac_cap_info, band_cap->eht_cap_mac_info,
sizeof(eht_cap_elem->mac_cap_info));
@@ -4586,6 +4634,9 @@ static void ath12k_mac_copy_eht_cap(struct ath12k_band_cap *band_cap,
IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_160MHZ |
IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_320MHZ);
break;
+ case NL80211_IFTYPE_MESH_POINT:
+ ath12k_mac_filter_eht_cap_mesh(eht_cap_elem);
+ break;
default:
break;
}
@@ -4626,7 +4677,7 @@ static int ath12k_mac_copy_sband_iftype_data(struct ath12k *ar,
data[idx].he_6ghz_capa.capa =
ath12k_mac_setup_he_6ghz_cap(cap, band_cap);
}
- ath12k_mac_copy_eht_cap(band_cap, &he_cap->he_cap_elem, i,
+ ath12k_mac_copy_eht_cap(ar, band_cap, &he_cap->he_cap_elem, i,
&data[idx].eht_cap);
idx++;
}
@@ -4647,8 +4698,8 @@ static void ath12k_mac_setup_sband_iftype_data(struct ath12k *ar,
ar->mac.iftype[band],
band);
sband = &ar->mac.sbands[band];
- sband->iftype_data = ar->mac.iftype[band];
- sband->n_iftype_data = count;
+ _ieee80211_set_sband_iftype_data(sband, ar->mac.iftype[band],
+ count);
}
if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP) {
@@ -4657,8 +4708,8 @@ static void ath12k_mac_setup_sband_iftype_data(struct ath12k *ar,
ar->mac.iftype[band],
band);
sband = &ar->mac.sbands[band];
- sband->iftype_data = ar->mac.iftype[band];
- sband->n_iftype_data = count;
+ _ieee80211_set_sband_iftype_data(sband, ar->mac.iftype[band],
+ count);
}
if (cap->supported_bands & WMI_HOST_WLAN_5G_CAP &&
@@ -4668,8 +4719,8 @@ static void ath12k_mac_setup_sband_iftype_data(struct ath12k *ar,
ar->mac.iftype[band],
band);
sband = &ar->mac.sbands[band];
- sband->iftype_data = ar->mac.iftype[band];
- sband->n_iftype_data = count;
+ _ieee80211_set_sband_iftype_data(sband, ar->mac.iftype[band],
+ count);
}
}
@@ -5108,6 +5159,63 @@ err:
return ret;
}
+int ath12k_mac_rfkill_config(struct ath12k *ar)
+{
+ struct ath12k_base *ab = ar->ab;
+ u32 param;
+ int ret;
+
+ if (ab->hw_params->rfkill_pin == 0)
+ return -EOPNOTSUPP;
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC,
+ "mac rfkill_pin %d rfkill_cfg %d rfkill_on_level %d",
+ ab->hw_params->rfkill_pin, ab->hw_params->rfkill_cfg,
+ ab->hw_params->rfkill_on_level);
+
+ param = u32_encode_bits(ab->hw_params->rfkill_on_level,
+ WMI_RFKILL_CFG_RADIO_LEVEL) |
+ u32_encode_bits(ab->hw_params->rfkill_pin,
+ WMI_RFKILL_CFG_GPIO_PIN_NUM) |
+ u32_encode_bits(ab->hw_params->rfkill_cfg,
+ WMI_RFKILL_CFG_PIN_AS_GPIO);
+
+ ret = ath12k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_HW_RFKILL_CONFIG,
+ param, ar->pdev->pdev_id);
+ if (ret) {
+ ath12k_warn(ab,
+ "failed to set rfkill config 0x%x: %d\n",
+ param, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+int ath12k_mac_rfkill_enable_radio(struct ath12k *ar, bool enable)
+{
+ enum wmi_rfkill_enable_radio param;
+ int ret;
+
+ if (enable)
+ param = WMI_RFKILL_ENABLE_RADIO_ON;
+ else
+ param = WMI_RFKILL_ENABLE_RADIO_OFF;
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC, "mac %d rfkill enable %d",
+ ar->pdev_idx, param);
+
+ ret = ath12k_wmi_pdev_set_param(ar, WMI_PDEV_PARAM_RFKILL_ENABLE,
+ param, ar->pdev->pdev_id);
+ if (ret) {
+ ath12k_warn(ar->ab, "failed to set rfkill enable param %d: %d\n",
+ param, ret);
+ return ret;
+ }
+
+ return 0;
+}
+
static void ath12k_mac_op_stop(struct ieee80211_hw *hw)
{
struct ath12k *ar = hw->priv;
@@ -5128,6 +5236,7 @@ static void ath12k_mac_op_stop(struct ieee80211_hw *hw)
cancel_delayed_work_sync(&ar->scan.timeout);
cancel_work_sync(&ar->regd_update_work);
+ cancel_work_sync(&ar->ab->rfkill_work);
spin_lock_bh(&ar->data_lock);
list_for_each_entry_safe(ppdu_stats, tmp, &ar->ppdu_stats_info, list) {
@@ -5788,14 +5897,68 @@ static void ath12k_mac_op_remove_chanctx(struct ieee80211_hw *hw,
mutex_unlock(&ar->conf_mutex);
}
+static enum wmi_phy_mode
+ath12k_mac_check_down_grade_phy_mode(struct ath12k *ar,
+ enum wmi_phy_mode mode,
+ enum nl80211_band band,
+ enum nl80211_iftype type)
+{
+ struct ieee80211_sta_eht_cap *eht_cap;
+ enum wmi_phy_mode down_mode;
+
+ if (mode < MODE_11BE_EHT20)
+ return mode;
+
+ eht_cap = &ar->mac.iftype[band][type].eht_cap;
+ if (eht_cap->has_eht)
+ return mode;
+
+ switch (mode) {
+ case MODE_11BE_EHT20:
+ down_mode = MODE_11AX_HE20;
+ break;
+ case MODE_11BE_EHT40:
+ down_mode = MODE_11AX_HE40;
+ break;
+ case MODE_11BE_EHT80:
+ down_mode = MODE_11AX_HE80;
+ break;
+ case MODE_11BE_EHT80_80:
+ down_mode = MODE_11AX_HE80_80;
+ break;
+ case MODE_11BE_EHT160:
+ case MODE_11BE_EHT160_160:
+ case MODE_11BE_EHT320:
+ down_mode = MODE_11AX_HE160;
+ break;
+ case MODE_11BE_EHT20_2G:
+ down_mode = MODE_11AX_HE20_2G;
+ break;
+ case MODE_11BE_EHT40_2G:
+ down_mode = MODE_11AX_HE40_2G;
+ break;
+ default:
+ down_mode = mode;
+ break;
+ }
+
+ ath12k_dbg(ar->ab, ATH12K_DBG_MAC,
+ "mac vdev start phymode %s downgrade to %s\n",
+ ath12k_mac_phymode_str(mode),
+ ath12k_mac_phymode_str(down_mode));
+
+ return down_mode;
+}
+
static int
ath12k_mac_vdev_start_restart(struct ath12k_vif *arvif,
- const struct cfg80211_chan_def *chandef,
+ struct ieee80211_chanctx_conf *ctx,
bool restart)
{
struct ath12k *ar = arvif->ar;
struct ath12k_base *ab = ar->ab;
struct wmi_vdev_start_req_arg arg = {};
+ const struct cfg80211_chan_def *chandef = &ctx->def;
int he_support = arvif->vif->bss_conf.he_support;
int ret;
@@ -5813,6 +5976,9 @@ ath12k_mac_vdev_start_restart(struct ath12k_vif *arvif,
arg.band_center_freq2 = chandef->center_freq2;
arg.mode = ath12k_phymodes[chandef->chan->band][chandef->width];
+ arg.mode = ath12k_mac_check_down_grade_phy_mode(ar, arg.mode,
+ chandef->chan->band,
+ arvif->vif->type);
arg.min_power = 0;
arg.max_power = chandef->chan->max_power * 2;
arg.max_reg_power = chandef->chan->max_reg_power * 2;
@@ -5829,6 +5995,8 @@ ath12k_mac_vdev_start_restart(struct ath12k_vif *arvif,
/* For now allow DFS for AP mode */
arg.chan_radar = !!(chandef->chan->flags & IEEE80211_CHAN_RADAR);
+ arg.freq2_radar = ctx->radar_enabled;
+
arg.passive = arg.chan_radar;
spin_lock_bh(&ab->base_lock);
@@ -5936,15 +6104,15 @@ err:
}
static int ath12k_mac_vdev_start(struct ath12k_vif *arvif,
- const struct cfg80211_chan_def *chandef)
+ struct ieee80211_chanctx_conf *ctx)
{
- return ath12k_mac_vdev_start_restart(arvif, chandef, false);
+ return ath12k_mac_vdev_start_restart(arvif, ctx, false);
}
static int ath12k_mac_vdev_restart(struct ath12k_vif *arvif,
- const struct cfg80211_chan_def *chandef)
+ struct ieee80211_chanctx_conf *ctx)
{
- return ath12k_mac_vdev_start_restart(arvif, chandef, true);
+ return ath12k_mac_vdev_start_restart(arvif, ctx, true);
}
struct ath12k_mac_change_chanctx_arg {
@@ -6000,7 +6168,7 @@ ath12k_mac_update_vif_chan(struct ath12k *ar,
lockdep_assert_held(&ar->conf_mutex);
for (i = 0; i < n_vifs; i++) {
- arvif = (void *)vifs[i].vif->drv_priv;
+ arvif = ath12k_vif_to_arvif(vifs[i].vif);
if (vifs[i].vif->type == NL80211_IFTYPE_MONITOR)
monitor_vif = true;
@@ -6034,18 +6202,33 @@ ath12k_mac_update_vif_chan(struct ath12k *ar,
/* TODO: Update ar->rx_channel */
for (i = 0; i < n_vifs; i++) {
- arvif = (void *)vifs[i].vif->drv_priv;
+ arvif = ath12k_vif_to_arvif(vifs[i].vif);
if (WARN_ON(!arvif->is_started))
continue;
- if (WARN_ON(!arvif->is_up))
- continue;
+ /* Firmware expects vdev_restart only if the vdev is up.
+ * If the vdev is down, then it expects vdev_stop->vdev_start.
+ */
+ if (arvif->is_up) {
+ ret = ath12k_mac_vdev_restart(arvif, vifs[i].new_ctx);
+ if (ret) {
+ ath12k_warn(ab, "failed to restart vdev %d: %d\n",
+ arvif->vdev_id, ret);
+ continue;
+ }
+ } else {
+ ret = ath12k_mac_vdev_stop(arvif);
+ if (ret) {
+ ath12k_warn(ab, "failed to stop vdev %d: %d\n",
+ arvif->vdev_id, ret);
+ continue;
+ }
- ret = ath12k_mac_vdev_restart(arvif, &vifs[i].new_ctx->def);
- if (ret) {
- ath12k_warn(ab, "failed to restart vdev %d: %d\n",
- arvif->vdev_id, ret);
+ ret = ath12k_mac_vdev_start(arvif, vifs[i].new_ctx);
+ if (ret)
+ ath12k_warn(ab, "failed to start vdev %d: %d\n",
+ arvif->vdev_id, ret);
continue;
}
@@ -6118,7 +6301,8 @@ static void ath12k_mac_op_change_chanctx(struct ieee80211_hw *hw,
if (WARN_ON(changed & IEEE80211_CHANCTX_CHANGE_CHANNEL))
goto unlock;
- if (changed & IEEE80211_CHANCTX_CHANGE_WIDTH)
+ if (changed & IEEE80211_CHANCTX_CHANGE_WIDTH ||
+ changed & IEEE80211_CHANCTX_CHANGE_RADAR)
ath12k_mac_update_active_vif_chan(ar, ctx);
/* TODO: Recalc radar detection */
@@ -6132,13 +6316,13 @@ static int ath12k_start_vdev_delay(struct ieee80211_hw *hw,
{
struct ath12k *ar = hw->priv;
struct ath12k_base *ab = ar->ab;
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
int ret;
if (WARN_ON(arvif->is_started))
return -EBUSY;
- ret = ath12k_mac_vdev_start(arvif, &arvif->chanctx.def);
+ ret = ath12k_mac_vdev_start(arvif, &arvif->chanctx);
if (ret) {
ath12k_warn(ab, "failed to start vdev %i addr %pM on freq %d: %d\n",
arvif->vdev_id, vif->addr,
@@ -6168,7 +6352,7 @@ ath12k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
{
struct ath12k *ar = hw->priv;
struct ath12k_base *ab = ar->ab;
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
int ret;
struct ath12k_wmi_peer_create_arg param;
@@ -6218,7 +6402,7 @@ ath12k_mac_op_assign_vif_chanctx(struct ieee80211_hw *hw,
goto out;
}
- ret = ath12k_mac_vdev_start(arvif, &ctx->def);
+ ret = ath12k_mac_vdev_start(arvif, ctx);
if (ret) {
ath12k_warn(ab, "failed to start vdev %i addr %pM on freq %d: %d\n",
arvif->vdev_id, vif->addr,
@@ -6247,7 +6431,7 @@ ath12k_mac_op_unassign_vif_chanctx(struct ieee80211_hw *hw,
{
struct ath12k *ar = hw->priv;
struct ath12k_base *ab = ar->ab;
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
int ret;
mutex_lock(&ar->conf_mutex);
@@ -6578,7 +6762,7 @@ static void ath12k_mac_set_bitrate_mask_iter(void *data,
struct ieee80211_sta *sta)
{
struct ath12k_vif *arvif = data;
- struct ath12k_sta *arsta = (struct ath12k_sta *)sta->drv_priv;
+ struct ath12k_sta *arsta = ath12k_sta_to_arsta(sta);
struct ath12k *ar = arvif->ar;
spin_lock_bh(&ar->data_lock);
@@ -6610,7 +6794,7 @@ ath12k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
const struct cfg80211_bitrate_mask *mask)
{
- struct ath12k_vif *arvif = (void *)vif->drv_priv;
+ struct ath12k_vif *arvif = ath12k_vif_to_arvif(vif);
struct cfg80211_chan_def def;
struct ath12k *ar = arvif->ar;
enum nl80211_band band;
@@ -6867,7 +7051,7 @@ static void ath12k_mac_op_sta_statistics(struct ieee80211_hw *hw,
struct ieee80211_sta *sta,
struct station_info *sinfo)
{
- struct ath12k_sta *arsta = (struct ath12k_sta *)sta->drv_priv;
+ struct ath12k_sta *arsta = ath12k_sta_to_arsta(sta);
sinfo->rx_duration = arsta->rx_duration;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_DURATION);
@@ -7231,6 +7415,11 @@ static int __ath12k_mac_register(struct ath12k *ar)
ar->hw->wiphy->interface_modes = ab->hw_params->interface_modes;
+ if (ar->hw->wiphy->bands[NL80211_BAND_2GHZ] &&
+ ar->hw->wiphy->bands[NL80211_BAND_5GHZ] &&
+ ar->hw->wiphy->bands[NL80211_BAND_6GHZ])
+ ieee80211_hw_set(ar->hw, SINGLE_SCAN_ON_ALL_BANDS);
+
ieee80211_hw_set(ar->hw, SIGNAL_DBM);
ieee80211_hw_set(ar->hw, SUPPORTS_PS);
ieee80211_hw_set(ar->hw, SUPPORTS_DYNAMIC_PS);
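ath12k_mac_rfkill_config() above packs the rfkill level, GPIO pin and pin mode from hw_params into a single WMI pdev parameter with u32_encode_bits(). A small sketch of that bitfield packing (the field layout below is a placeholder for the real WMI_RFKILL_CFG_* masks):

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/types.h>

/* Illustrative layout only; the driver takes these masks from its WMI headers. */
#define EXAMPLE_RFKILL_CFG_RADIO_LEVEL	GENMASK(0, 0)
#define EXAMPLE_RFKILL_CFG_GPIO_PIN	GENMASK(6, 1)
#define EXAMPLE_RFKILL_CFG_PIN_AS_GPIO	GENMASK(10, 7)

static u32 example_pack_rfkill_cfg(u32 on_level, u32 pin, u32 cfg)
{
	/* u32_encode_bits() shifts each value into the position named by its mask. */
	return u32_encode_bits(on_level, EXAMPLE_RFKILL_CFG_RADIO_LEVEL) |
	       u32_encode_bits(pin, EXAMPLE_RFKILL_CFG_GPIO_PIN) |
	       u32_encode_bits(cfg, EXAMPLE_RFKILL_CFG_PIN_AS_GPIO);
}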
diff --git a/drivers/net/wireless/ath/ath12k/mac.h b/drivers/net/wireless/ath/ath12k/mac.h
index 7b16b70df4fa..59b4e8f5eee0 100644
--- a/drivers/net/wireless/ath/ath12k/mac.h
+++ b/drivers/net/wireless/ath/ath12k/mac.h
@@ -73,4 +73,6 @@ int ath12k_mac_tx_mgmt_pending_free(int buf_id, void *skb, void *ctx);
enum rate_info_bw ath12k_mac_bw_to_mac80211_bw(enum ath12k_supported_bw bw);
enum ath12k_supported_bw ath12k_mac_mac80211_bw_to_ath12k_bw(enum rate_info_bw bw);
enum hal_encrypt_type ath12k_dp_tx_get_encrypt_type(u32 cipher);
+int ath12k_mac_rfkill_enable_radio(struct ath12k *ar, bool enable);
+int ath12k_mac_rfkill_config(struct ath12k *ar);
#endif
diff --git a/drivers/net/wireless/ath/ath12k/mhi.c b/drivers/net/wireless/ath/ath12k/mhi.c
index 42f1140baa4f..39e640293cdc 100644
--- a/drivers/net/wireless/ath/ath12k/mhi.c
+++ b/drivers/net/wireless/ath/ath12k/mhi.c
@@ -366,12 +366,12 @@ int ath12k_mhi_register(struct ath12k_pci *ab_pci)
mhi_ctrl->fw_image = ab_pci->amss_path;
mhi_ctrl->regs = ab->mem;
mhi_ctrl->reg_len = ab->mem_len;
+ mhi_ctrl->rddm_size = ab->hw_params->rddm_size;
ret = ath12k_mhi_get_msi(ab_pci);
if (ret) {
ath12k_err(ab, "failed to get msi for mhi\n");
- mhi_free_controller(mhi_ctrl);
- return ret;
+ goto free_controller;
}
mhi_ctrl->iova_start = 0;
@@ -388,11 +388,15 @@ int ath12k_mhi_register(struct ath12k_pci *ab_pci)
ret = mhi_register_controller(mhi_ctrl, ab->hw_params->mhi_config);
if (ret) {
ath12k_err(ab, "failed to register to mhi bus, err = %d\n", ret);
- mhi_free_controller(mhi_ctrl);
- return ret;
+ goto free_controller;
}
return 0;
+
+free_controller:
+ mhi_free_controller(mhi_ctrl);
+ ab_pci->mhi_ctrl = NULL;
+ return ret;
}
void ath12k_mhi_unregister(struct ath12k_pci *ab_pci)
diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
index fae5dfd6e9d7..3006cd3fbe11 100644
--- a/drivers/net/wireless/ath/ath12k/pci.c
+++ b/drivers/net/wireless/ath/ath12k/pci.c
@@ -424,12 +424,12 @@ static void ath12k_pci_ext_grp_disable(struct ath12k_ext_irq_grp *irq_grp)
disable_irq_nosync(irq_grp->ab->irq_num[irq_grp->irqs[i]]);
}
-static void __ath12k_pci_ext_irq_disable(struct ath12k_base *sc)
+static void __ath12k_pci_ext_irq_disable(struct ath12k_base *ab)
{
int i;
for (i = 0; i < ATH12K_EXT_IRQ_GRP_NUM_MAX; i++) {
- struct ath12k_ext_irq_grp *irq_grp = &sc->ext_irq_grp[i];
+ struct ath12k_ext_irq_grp *irq_grp = &ab->ext_irq_grp[i];
ath12k_pci_ext_grp_disable(irq_grp);
diff --git a/drivers/net/wireless/ath/ath12k/peer.h b/drivers/net/wireless/ath/ath12k/peer.h
index b296dc0e2f67..c6edb24cbedd 100644
--- a/drivers/net/wireless/ath/ath12k/peer.h
+++ b/drivers/net/wireless/ath/ath12k/peer.h
@@ -44,6 +44,9 @@ struct ath12k_peer {
struct ppdu_user_delayba ppdu_stats_delayba;
bool delayba_flag;
bool is_authorized;
+
+ /* protected by ab->data_lock */
+ bool dp_setup_done;
};
void ath12k_peer_unmap_event(struct ath12k_base *ab, u16 peer_id);
diff --git a/drivers/net/wireless/ath/ath12k/qmi.c b/drivers/net/wireless/ath/ath12k/qmi.c
index b2db0436bdde..f6e949c618d0 100644
--- a/drivers/net/wireless/ath/ath12k/qmi.c
+++ b/drivers/net/wireless/ath/ath12k/qmi.c
@@ -2213,6 +2213,7 @@ static int ath12k_qmi_request_target_cap(struct ath12k_base *ab)
struct qmi_txn txn = {};
unsigned int board_id = ATH12K_BOARD_ID_DEFAULT;
int ret = 0;
+ int r;
int i;
memset(&req, 0, sizeof(req));
@@ -2297,6 +2298,10 @@ static int ath12k_qmi_request_target_cap(struct ath12k_base *ab)
ab->qmi.target.fw_build_timestamp,
ab->qmi.target.fw_build_id);
+ r = ath12k_core_check_smbios(ab);
+ if (r)
+ ath12k_dbg(ab, ATH12K_DBG_QMI, "SMBIOS bdf variant name not set.\n");
+
out:
return ret;
}
@@ -2535,6 +2540,7 @@ static void ath12k_qmi_m3_free(struct ath12k_base *ab)
dma_free_coherent(ab->dev, m3_mem->size,
m3_mem->vaddr, m3_mem->paddr);
m3_mem->vaddr = NULL;
+ m3_mem->size = 0;
}
static int ath12k_qmi_wlanfw_m3_info_send(struct ath12k_base *ab)
@@ -3088,3 +3094,9 @@ void ath12k_qmi_deinit_service(struct ath12k_base *ab)
ath12k_qmi_m3_free(ab);
ath12k_qmi_free_target_mem_chunk(ab);
}
+
+void ath12k_qmi_free_resource(struct ath12k_base *ab)
+{
+ ath12k_qmi_free_target_mem_chunk(ab);
+ ath12k_qmi_m3_free(ab);
+}
diff --git a/drivers/net/wireless/ath/ath12k/qmi.h b/drivers/net/wireless/ath/ath12k/qmi.h
index 15944f5f33ab..e20d6511d1ca 100644
--- a/drivers/net/wireless/ath/ath12k/qmi.h
+++ b/drivers/net/wireless/ath/ath12k/qmi.h
@@ -564,5 +564,6 @@ int ath12k_qmi_firmware_start(struct ath12k_base *ab,
void ath12k_qmi_firmware_stop(struct ath12k_base *ab);
void ath12k_qmi_deinit_service(struct ath12k_base *ab);
int ath12k_qmi_init_service(struct ath12k_base *ab);
+void ath12k_qmi_free_resource(struct ath12k_base *ab);
#endif
diff --git a/drivers/net/wireless/ath/ath12k/reg.c b/drivers/net/wireless/ath/ath12k/reg.c
index 6ede91ebc8e1..5c006256c82a 100644
--- a/drivers/net/wireless/ath/ath12k/reg.c
+++ b/drivers/net/wireless/ath/ath12k/reg.c
@@ -314,6 +314,19 @@ static u32 ath12k_map_fw_reg_flags(u16 reg_flags)
return flags;
}
+static u32 ath12k_map_fw_phy_flags(u32 phy_flags)
+{
+ u32 flags = 0;
+
+ if (phy_flags & ATH12K_REG_PHY_BITMAP_NO11AX)
+ flags |= NL80211_RRF_NO_HE;
+
+ if (phy_flags & ATH12K_REG_PHY_BITMAP_NO11BE)
+ flags |= NL80211_RRF_NO_EHT;
+
+ return flags;
+}
+
static bool
ath12k_reg_can_intersect(struct ieee80211_reg_rule *rule1,
struct ieee80211_reg_rule *rule2)
@@ -638,6 +651,7 @@ ath12k_reg_build_regd(struct ath12k_base *ab,
}
flags |= ath12k_map_fw_reg_flags(reg_rule->flags);
+ flags |= ath12k_map_fw_phy_flags(reg_info->phybitmap);
ath12k_reg_update_rule(tmp_regd->reg_rules + i,
reg_rule->start_freq,
diff --git a/drivers/net/wireless/ath/ath12k/reg.h b/drivers/net/wireless/ath/ath12k/reg.h
index 56d009a47234..35569f03042d 100644
--- a/drivers/net/wireless/ath/ath12k/reg.h
+++ b/drivers/net/wireless/ath/ath12k/reg.h
@@ -83,6 +83,12 @@ struct ath12k_reg_info {
[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
};
+/* Phy bitmaps */
+enum ath12k_reg_phy_bitmap {
+ ATH12K_REG_PHY_BITMAP_NO11AX = BIT(5),
+ ATH12K_REG_PHY_BITMAP_NO11BE = BIT(6),
+};
+
void ath12k_reg_init(struct ath12k *ar);
void ath12k_reg_free(struct ath12k_base *ab);
void ath12k_regd_update_work(struct work_struct *work);
diff --git a/drivers/net/wireless/ath/ath12k/rx_desc.h b/drivers/net/wireless/ath/ath12k/rx_desc.h
index bfa87cb8d021..c4058abc516e 100644
--- a/drivers/net/wireless/ath/ath12k/rx_desc.h
+++ b/drivers/net/wireless/ath/ath12k/rx_desc.h
@@ -627,17 +627,18 @@ enum rx_msdu_start_reception_type {
#define RX_MSDU_END_INFO5_SA_IDX_TIMEOUT BIT(0)
#define RX_MSDU_END_INFO5_DA_IDX_TIMEOUT BIT(1)
-#define RX_MSDU_END_INFO5_TO_DS BIT(2)
-#define RX_MSDU_END_INFO5_TID GENMASK(6, 3)
#define RX_MSDU_END_INFO5_SA_IS_VALID BIT(7)
#define RX_MSDU_END_INFO5_DA_IS_VALID BIT(8)
#define RX_MSDU_END_INFO5_DA_IS_MCBC BIT(9)
#define RX_MSDU_END_INFO5_L3_HDR_PADDING GENMASK(11, 10)
#define RX_MSDU_END_INFO5_FIRST_MSDU BIT(12)
#define RX_MSDU_END_INFO5_LAST_MSDU BIT(13)
-#define RX_MSDU_END_INFO5_FROM_DS BIT(14)
#define RX_MSDU_END_INFO5_IP_CHKSUM_FAIL_COPY BIT(15)
+#define RX_MSDU_END_QCN9274_INFO5_TO_DS BIT(2)
+#define RX_MSDU_END_QCN9274_INFO5_TID GENMASK(6, 3)
+#define RX_MSDU_END_QCN9274_INFO5_FROM_DS BIT(14)
+
#define RX_MSDU_END_INFO6_MSDU_DROP BIT(0)
#define RX_MSDU_END_INFO6_REO_DEST_IND GENMASK(5, 1)
#define RX_MSDU_END_INFO6_FLOW_IDX GENMASK(25, 6)
@@ -650,14 +651,15 @@ enum rx_msdu_start_reception_type {
#define RX_MSDU_END_INFO7_AGGR_COUNT GENMASK(7, 0)
#define RX_MSDU_END_INFO7_FLOW_AGGR_CONTN BIT(8)
#define RX_MSDU_END_INFO7_FISA_TIMEOUT BIT(9)
-#define RX_MSDU_END_INFO7_TCPUDP_CSUM_FAIL_CPY BIT(10)
-#define RX_MSDU_END_INFO7_MSDU_LIMIT_ERROR BIT(11)
-#define RX_MSDU_END_INFO7_FLOW_IDX_TIMEOUT BIT(12)
-#define RX_MSDU_END_INFO7_FLOW_IDX_INVALID BIT(13)
-#define RX_MSDU_END_INFO7_CCE_MATCH BIT(14)
-#define RX_MSDU_END_INFO7_AMSDU_PARSER_ERR BIT(15)
-#define RX_MSDU_END_INFO8_KEY_ID GENMASK(7, 0)
+#define RX_MSDU_END_QCN9274_INFO7_TCPUDP_CSUM_FAIL_CPY BIT(10)
+#define RX_MSDU_END_QCN9274_INFO7_MSDU_LIMIT_ERROR BIT(11)
+#define RX_MSDU_END_QCN9274_INFO7_FLOW_IDX_TIMEOUT BIT(12)
+#define RX_MSDU_END_QCN9274_INFO7_FLOW_IDX_INVALID BIT(13)
+#define RX_MSDU_END_QCN9274_INFO7_CCE_MATCH BIT(14)
+#define RX_MSDU_END_QCN9274_INFO7_AMSDU_PARSER_ERR BIT(15)
+
+#define RX_MSDU_END_QCN9274_INFO8_KEY_ID GENMASK(7, 0)
#define RX_MSDU_END_INFO9_SERVICE_CODE GENMASK(14, 6)
#define RX_MSDU_END_INFO9_PRIORITY_VALID BIT(15)
@@ -698,8 +700,9 @@ enum rx_msdu_start_reception_type {
#define RX_MSDU_END_INFO12_RATE_MCS GENMASK(17, 14)
#define RX_MSDU_END_INFO12_RECV_BW GENMASK(20, 18)
#define RX_MSDU_END_INFO12_RECEPTION_TYPE GENMASK(23, 21)
-#define RX_MSDU_END_INFO12_MIMO_SS_BITMAP GENMASK(30, 24)
-#define RX_MSDU_END_INFO12_MIMO_DONE_COPY BIT(31)
+
+#define RX_MSDU_END_QCN9274_INFO12_MIMO_SS_BITMAP GENMASK(30, 24)
+#define RX_MSDU_END_QCN9274_INFO12_MIMO_DONE_COPY BIT(31)
#define RX_MSDU_END_INFO13_FIRST_MPDU BIT(0)
#define RX_MSDU_END_INFO13_MCAST_BCAST BIT(2)
@@ -714,7 +717,6 @@ enum rx_msdu_start_reception_type {
#define RX_MSDU_END_INFO13_EOSP BIT(11)
#define RX_MSDU_END_INFO13_A_MSDU_ERROR BIT(12)
#define RX_MSDU_END_INFO13_ORDER BIT(14)
-#define RX_MSDU_END_INFO13_WIFI_PARSER_ERR BIT(15)
#define RX_MSDU_END_INFO13_OVERFLOW_ERR BIT(16)
#define RX_MSDU_END_INFO13_MSDU_LEN_ERR BIT(17)
#define RX_MSDU_END_INFO13_TCP_UDP_CKSUM_FAIL BIT(18)
@@ -732,6 +734,8 @@ enum rx_msdu_start_reception_type {
#define RX_MSDU_END_INFO13_UNDECRYPT_FRAME_ERR BIT(30)
#define RX_MSDU_END_INFO13_FCS_ERR BIT(31)
+#define RX_MSDU_END_QCN9274_INFO13_WIFI_PARSER_ERR BIT(15)
+
#define RX_MSDU_END_INFO14_DECRYPT_STATUS_CODE GENMASK(12, 10)
#define RX_MSDU_END_INFO14_RX_BITMAP_NOT_UPDED BIT(13)
#define RX_MSDU_END_INFO14_MSDU_DONE BIT(31)
@@ -782,6 +786,65 @@ struct rx_msdu_end_qcn9274 {
__le32 info14;
} __packed;
+/* These macro definitions are only used for WCN7850 */
+#define RX_MSDU_END_WCN7850_INFO2_KEY_ID GENMASK(7, 0)
+
+#define RX_MSDU_END_WCN7850_INFO5_MSDU_LIMIT_ERR BIT(2)
+#define RX_MSDU_END_WCN7850_INFO5_IDX_TIMEOUT BIT(3)
+#define RX_MSDU_END_WCN7850_INFO5_IDX_INVALID BIT(4)
+#define RX_MSDU_END_WCN7850_INFO5_WIFI_PARSE_ERR BIT(5)
+#define RX_MSDU_END_WCN7850_INFO5_AMSDU_PARSER_ERR BIT(6)
+#define RX_MSDU_END_WCN7850_INFO5_TCPUDP_CSUM_FAIL_CPY BIT(14)
+
+#define RX_MSDU_END_WCN7850_INFO12_MIMO_SS_BITMAP GENMASK(31, 24)
+
+#define RX_MSDU_END_WCN7850_INFO13_FRAGMENT_FLAG BIT(13)
+#define RX_MSDU_END_WCN7850_INFO13_CCE_MATCH BIT(15)
+
+struct rx_msdu_end_wcn7850 {
+ __le16 info0;
+ __le16 phy_ppdu_id;
+ __le16 ip_hdr_cksum;
+ __le16 info1;
+ __le16 info2;
+ __le16 cumulative_l3_checksum;
+ __le32 rule_indication0;
+ __le32 rule_indication1;
+ __le16 info3;
+ __le16 l3_type;
+ __le32 ipv6_options_crc;
+ __le32 tcp_seq_num;
+ __le32 tcp_ack_num;
+ __le16 info4;
+ __le16 window_size;
+ __le16 tcp_udp_chksum;
+ __le16 info5;
+ __le16 sa_idx;
+ __le16 da_idx_or_sw_peer_id;
+ __le32 info6;
+ __le32 fse_metadata;
+ __le16 cce_metadata;
+ __le16 sa_sw_peer_id;
+ __le16 info7;
+ __le16 rsvd0;
+ __le16 cumulative_l4_checksum;
+ __le16 cumulative_ip_length;
+ __le32 info9;
+ __le32 info10;
+ __le32 info11;
+ __le32 toeplitz_hash_2_or_4;
+ __le32 flow_id_toeplitz;
+ __le32 info12;
+ __le32 ppdu_start_timestamp_31_0;
+ __le32 ppdu_start_timestamp_63_32;
+ __le32 phy_meta_data;
+ __le16 vlan_ctag_ci;
+ __le16 vlan_stag_ci;
+ __le32 rsvd[3];
+ __le32 info13;
+ __le32 info14;
+} __packed;
+
/* rx_msdu_end
*
* rxpcu_mpdu_filter_in_category
@@ -1410,7 +1473,7 @@ struct rx_pkt_hdr_tlv {
struct hal_rx_desc_wcn7850 {
__le64 msdu_end_tag;
- struct rx_msdu_end_qcn9274 msdu_end;
+ struct rx_msdu_end_wcn7850 msdu_end;
u8 rx_padding0[RX_BE_PADDING0_BYTES];
__le64 mpdu_start_tag;
struct rx_mpdu_start_qcn9274 mpdu_start;
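The hunks above split the INFO5/INFO7/INFO8/INFO12/INFO13 descriptor fields into QCN9274- and WCN7850-specific variants and give WCN7850 its own struct rx_msdu_end_wcn7850. A minimal sketch of how such a per-chip field is typically read with the bitfield helpers -- the accessor name is hypothetical and this is not a copy of the driver's hal_rx callbacks:

#include <linux/bitfield.h>

/* Hypothetical accessor: extract a WCN7850-specific INFO5 bit from the
 * descriptor using the per-chip mask defined above (info5 is __le16).
 */
static bool example_wcn7850_msdu_limit_err(const struct rx_msdu_end_wcn7850 *desc)
{
	return le16_get_bits(desc->info5, RX_MSDU_END_WCN7850_INFO5_MSDU_LIMIT_ERR);
}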
diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
index ef0f3cf35cfd..0e5bf5ce8d4c 100644
--- a/drivers/net/wireless/ath/ath12k/wmi.c
+++ b/drivers/net/wireless/ath/ath12k/wmi.c
@@ -152,6 +152,8 @@ static const struct ath12k_wmi_tlv_policy ath12k_wmi_tlv_policies[] = {
.min_len = sizeof(struct wmi_service_available_event) },
[WMI_TAG_PEER_ASSOC_CONF_EVENT] = {
.min_len = sizeof(struct wmi_peer_assoc_conf_event) },
+ [WMI_TAG_RFKILL_EVENT] = {
+ .min_len = sizeof(struct wmi_rfkill_state_change_event) },
[WMI_TAG_PDEV_CTL_FAILSAFE_CHECK_EVENT] = {
.min_len = sizeof(struct wmi_pdev_ctl_failsafe_chk_event) },
[WMI_TAG_HOST_SWFDA_EVENT] = {
@@ -406,22 +408,22 @@ err_pull:
int ath12k_wmi_cmd_send(struct ath12k_wmi_pdev *wmi, struct sk_buff *skb,
u32 cmd_id)
{
- struct ath12k_wmi_base *wmi_sc = wmi->wmi_ab;
+ struct ath12k_wmi_base *wmi_ab = wmi->wmi_ab;
int ret = -EOPNOTSUPP;
might_sleep();
- wait_event_timeout(wmi_sc->tx_credits_wq, ({
+ wait_event_timeout(wmi_ab->tx_credits_wq, ({
ret = ath12k_wmi_cmd_send_nowait(wmi, skb, cmd_id);
- if (ret && test_bit(ATH12K_FLAG_CRASH_FLUSH, &wmi_sc->ab->dev_flags))
+ if (ret && test_bit(ATH12K_FLAG_CRASH_FLUSH, &wmi_ab->ab->dev_flags))
ret = -ESHUTDOWN;
(ret != -EAGAIN);
}), WMI_SEND_TIMEOUT_HZ);
if (ret == -EAGAIN)
- ath12k_warn(wmi_sc->ab, "wmi command %d timeout\n", cmd_id);
+ ath12k_warn(wmi_ab->ab, "wmi command %d timeout\n", cmd_id);
return ret;
}
@@ -725,10 +727,10 @@ static int ath12k_service_ready_event(struct ath12k_base *ab, struct sk_buff *sk
return 0;
}
-struct sk_buff *ath12k_wmi_alloc_skb(struct ath12k_wmi_base *wmi_sc, u32 len)
+struct sk_buff *ath12k_wmi_alloc_skb(struct ath12k_wmi_base *wmi_ab, u32 len)
{
struct sk_buff *skb;
- struct ath12k_base *ab = wmi_sc->ab;
+ struct ath12k_base *ab = wmi_ab->ab;
u32 round_len = roundup(len, 4);
skb = ath12k_htc_alloc_skb(ab, WMI_SKB_HEADROOM + round_len);
@@ -3469,7 +3471,7 @@ int ath12k_wmi_set_hw_mode(struct ath12k_base *ab,
int ath12k_wmi_cmd_init(struct ath12k_base *ab)
{
- struct ath12k_wmi_base *wmi_sc = &ab->wmi_ab;
+ struct ath12k_wmi_base *wmi_ab = &ab->wmi_ab;
struct ath12k_wmi_init_cmd_arg arg = {};
if (test_bit(WMI_TLV_SERVICE_REG_CC_EXT_EVENT_SUPPORT,
@@ -3478,9 +3480,9 @@ int ath12k_wmi_cmd_init(struct ath12k_base *ab)
ab->hw_params->wmi_init(ab, &arg.res_cfg);
- arg.num_mem_chunks = wmi_sc->num_mem_chunks;
- arg.hw_mode_id = wmi_sc->preferred_hw_mode;
- arg.mem_chunks = wmi_sc->mem_chunks;
+ arg.num_mem_chunks = wmi_ab->num_mem_chunks;
+ arg.hw_mode_id = wmi_ab->preferred_hw_mode;
+ arg.mem_chunks = wmi_ab->mem_chunks;
if (ab->hw_params->single_pdev_only)
arg.hw_mode_id = WMI_HOST_HW_MODE_MAX;
@@ -3488,7 +3490,7 @@ int ath12k_wmi_cmd_init(struct ath12k_base *ab)
arg.num_band_to_mac = ab->num_radios;
ath12k_fill_band_to_mac_param(ab, arg.band_to_mac);
- return ath12k_init_cmd_send(&wmi_sc->wmi[0], &arg);
+ return ath12k_init_cmd_send(&wmi_ab->wmi[0], &arg);
}
int ath12k_wmi_vdev_spectral_conf(struct ath12k *ar,
@@ -3876,6 +3878,12 @@ static int ath12k_wmi_ext_hal_reg_caps(struct ath12k_base *soc,
ath12k_warn(soc, "failed to extract reg cap %d\n", i);
return ret;
}
+
+ if (reg_cap.phy_id >= MAX_RADIOS) {
+ ath12k_warn(soc, "unexpected phy id %u\n", reg_cap.phy_id);
+ return -EINVAL;
+ }
+
soc->hal_reg_cap[reg_cap.phy_id] = reg_cap;
}
return 0;
@@ -4153,14 +4161,22 @@ static void ath12k_wmi_eht_caps_parse(struct ath12k_pdev *pdev, u32 band,
__le32 cap_info_internal)
{
struct ath12k_band_cap *cap_band = &pdev->cap.band[band];
+ u32 support_320mhz;
u8 i;
+ if (band == NL80211_BAND_6GHZ)
+ support_320mhz = cap_band->eht_cap_phy_info[0] &
+ IEEE80211_EHT_PHY_CAP0_320MHZ_IN_6GHZ;
+
for (i = 0; i < WMI_MAX_EHTCAP_MAC_SIZE; i++)
cap_band->eht_cap_mac_info[i] = le32_to_cpu(cap_mac_info[i]);
for (i = 0; i < WMI_MAX_EHTCAP_PHY_SIZE; i++)
cap_band->eht_cap_phy_info[i] = le32_to_cpu(cap_phy_info[i]);
+ if (band == NL80211_BAND_6GHZ)
+ cap_band->eht_cap_phy_info[0] |= support_320mhz;
+
cap_band->eht_mcs_20_only = le32_to_cpu(supp_mcs[0]);
cap_band->eht_mcs_80 = le32_to_cpu(supp_mcs[1]);
if (band != NL80211_BAND_2GHZ) {
@@ -4182,10 +4198,19 @@ ath12k_wmi_tlv_mac_phy_caps_ext_parse(struct ath12k_base *ab,
const struct ath12k_wmi_caps_ext_params *caps,
struct ath12k_pdev *pdev)
{
- u32 bands;
+ struct ath12k_band_cap *cap_band;
+ u32 bands, support_320mhz;
int i;
if (ab->hw_params->single_pdev_only) {
+ if (caps->hw_mode_id == WMI_HOST_HW_MODE_SINGLE) {
+ support_320mhz = le32_to_cpu(caps->eht_cap_phy_info_5ghz[0]) &
+ IEEE80211_EHT_PHY_CAP0_320MHZ_IN_6GHZ;
+ cap_band = &pdev->cap.band[NL80211_BAND_6GHZ];
+ cap_band->eht_cap_phy_info[0] |= support_320mhz;
+ return 0;
+ }
+
for (i = 0; i < ab->fw_pdev_count; i++) {
struct ath12k_fw_pdev *fw_pdev = &ab->fw_pdev[i];
@@ -4241,7 +4266,8 @@ static int ath12k_wmi_tlv_mac_phy_caps_ext(struct ath12k_base *ab, u16 tag,
return -EPROTO;
if (ab->hw_params->single_pdev_only) {
- if (ab->wmi_ab.preferred_hw_mode != le32_to_cpu(caps->hw_mode_id))
+ if (ab->wmi_ab.preferred_hw_mode != le32_to_cpu(caps->hw_mode_id) &&
+ caps->hw_mode_id != WMI_HOST_HW_MODE_SINGLE)
return 0;
} else {
for (i = 0; i < ab->num_radios; i++) {
@@ -4585,10 +4611,11 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
}
ath12k_dbg(ab, ATH12K_DBG_WMI,
- "%s:cc_ext %s dsf %d BW: min_2g %d max_2g %d min_5g %d max_5g %d",
+ "%s:cc_ext %s dfs %d BW: min_2g %d max_2g %d min_5g %d max_5g %d phy_bitmap 0x%x",
__func__, reg_info->alpha2, reg_info->dfs_region,
reg_info->min_bw_2g, reg_info->max_bw_2g,
- reg_info->min_bw_5g, reg_info->max_bw_5g);
+ reg_info->min_bw_5g, reg_info->max_bw_5g,
+ reg_info->phybitmap);
ath12k_dbg(ab, ATH12K_DBG_WMI,
"num_2g_reg_rules %d num_5g_reg_rules %d",
@@ -5395,7 +5422,13 @@ static void ath12k_wmi_htc_tx_complete(struct ath12k_base *ab,
static bool ath12k_reg_is_world_alpha(char *alpha)
{
- return alpha[0] == '0' && alpha[1] == '0';
+ if (alpha[0] == '0' && alpha[1] == '0')
+ return true;
+
+ if (alpha[0] == 'n' && alpha[1] == 'a')
+ return true;
+
+ return false;
}
static int ath12k_reg_chan_list_event(struct ath12k_base *ab, struct sk_buff *skb)
@@ -5867,8 +5900,9 @@ exit:
rcu_read_unlock();
}
-static struct ath12k *ath12k_get_ar_on_scan_abort(struct ath12k_base *ab,
- u32 vdev_id)
+static struct ath12k *ath12k_get_ar_on_scan_state(struct ath12k_base *ab,
+ u32 vdev_id,
+ enum ath12k_scan_state state)
{
int i;
struct ath12k_pdev *pdev;
@@ -5880,7 +5914,7 @@ static struct ath12k *ath12k_get_ar_on_scan_abort(struct ath12k_base *ab,
ar = pdev->ar;
spin_lock_bh(&ar->data_lock);
- if (ar->scan.state == ATH12K_SCAN_ABORTING &&
+ if (ar->scan.state == state &&
ar->scan.vdev_id == vdev_id) {
spin_unlock_bh(&ar->data_lock);
return ar;
@@ -5910,10 +5944,15 @@ static void ath12k_scan_event(struct ath12k_base *ab, struct sk_buff *skb)
* aborting scan's vdev id matches this event info.
*/
if (le32_to_cpu(scan_ev.event_type) == WMI_SCAN_EVENT_COMPLETED &&
- le32_to_cpu(scan_ev.reason) == WMI_SCAN_REASON_CANCELLED)
- ar = ath12k_get_ar_on_scan_abort(ab, le32_to_cpu(scan_ev.vdev_id));
- else
+ le32_to_cpu(scan_ev.reason) == WMI_SCAN_REASON_CANCELLED) {
+ ar = ath12k_get_ar_on_scan_state(ab, le32_to_cpu(scan_ev.vdev_id),
+ ATH12K_SCAN_ABORTING);
+ if (!ar)
+ ar = ath12k_get_ar_on_scan_state(ab, le32_to_cpu(scan_ev.vdev_id),
+ ATH12K_SCAN_RUNNING);
+ } else {
ar = ath12k_mac_get_ar_by_vdev_id(ab, le32_to_cpu(scan_ev.vdev_id));
+ }
if (!ar) {
ath12k_warn(ab, "Received scan event for unknown vdev");
@@ -6476,6 +6515,8 @@ ath12k_wmi_pdev_dfs_radar_detected_event(struct ath12k_base *ab, struct sk_buff
ev->detector_id, ev->segment_id, ev->timestamp, ev->is_chirp,
ev->freq_offset, ev->sidx);
+ rcu_read_lock();
+
ar = ath12k_mac_get_ar_by_pdev_id(ab, le32_to_cpu(ev->pdev_id));
if (!ar) {
@@ -6493,6 +6534,8 @@ ath12k_wmi_pdev_dfs_radar_detected_event(struct ath12k_base *ab, struct sk_buff
ieee80211_radar_detected(ar->hw);
exit:
+ rcu_read_unlock();
+
kfree(tb);
}
@@ -6511,11 +6554,16 @@ ath12k_wmi_pdev_temperature_event(struct ath12k_base *ab,
ath12k_dbg(ab, ATH12K_DBG_WMI,
"pdev temperature ev temp %d pdev_id %d\n", ev.temp, ev.pdev_id);
+ rcu_read_lock();
+
ar = ath12k_mac_get_ar_by_pdev_id(ab, le32_to_cpu(ev.pdev_id));
if (!ar) {
ath12k_warn(ab, "invalid pdev id in pdev temperature ev %d", ev.pdev_id);
- return;
+ goto exit;
}
+
+exit:
+ rcu_read_unlock();
}
static void ath12k_fils_discovery_event(struct ath12k_base *ab,
@@ -6580,6 +6628,40 @@ static void ath12k_probe_resp_tx_status_event(struct ath12k_base *ab,
kfree(tb);
}
+static void ath12k_rfkill_state_change_event(struct ath12k_base *ab,
+ struct sk_buff *skb)
+{
+ const struct wmi_rfkill_state_change_event *ev;
+ const void **tb;
+ int ret;
+
+ tb = ath12k_wmi_tlv_parse_alloc(ab, skb->data, skb->len, GFP_ATOMIC);
+ if (IS_ERR(tb)) {
+ ret = PTR_ERR(tb);
+ ath12k_warn(ab, "failed to parse tlv: %d\n", ret);
+ return;
+ }
+
+ ev = tb[WMI_TAG_RFKILL_EVENT];
+ if (!ev) {
+ kfree(tb);
+ return;
+ }
+
+ ath12k_dbg(ab, ATH12K_DBG_MAC,
+ "wmi tlv rfkill state change gpio %d type %d radio_state %d\n",
+ le32_to_cpu(ev->gpio_pin_num),
+ le32_to_cpu(ev->int_type),
+ le32_to_cpu(ev->radio_state));
+
+ spin_lock_bh(&ab->base_lock);
+ ab->rfkill_radio_on = (ev->radio_state == cpu_to_le32(WMI_RFKILL_RADIO_STATE_ON));
+ spin_unlock_bh(&ab->base_lock);
+
+ queue_work(ab->workqueue, &ab->rfkill_work);
+ kfree(tb);
+}
+
static void ath12k_wmi_op_rx(struct ath12k_base *ab, struct sk_buff *skb)
{
struct wmi_cmd_hdr *cmd_hdr;
@@ -6672,6 +6754,9 @@ static void ath12k_wmi_op_rx(struct ath12k_base *ab, struct sk_buff *skb)
case WMI_OFFLOAD_PROB_RESP_TX_STATUS_EVENTID:
ath12k_probe_resp_tx_status_event(ab, skb);
break;
+ case WMI_RFKILL_STATE_CHANGE_EVENTID:
+ ath12k_rfkill_state_change_event(ab, skb);
+ break;
/* add Unsupported events here */
case WMI_TBTTOFFSET_EXT_UPDATE_EVENTID:
case WMI_PEER_OPER_MODE_CHANGE_EVENTID:
diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
index c75a6fa1f7e0..629373d67421 100644
--- a/drivers/net/wireless/ath/ath12k/wmi.h
+++ b/drivers/net/wireless/ath/ath12k/wmi.h
@@ -2158,6 +2158,9 @@ enum wmi_tlv_service {
WMI_MAX_EXT_SERVICE = 256,
WMI_TLV_SERVICE_REG_CC_EXT_EVENT_SUPPORT = 281,
+
+ WMI_TLV_SERVICE_11BE = 289,
+
WMI_MAX_EXT2_SERVICE,
};
@@ -4793,6 +4796,31 @@ struct ath12k_wmi_base {
#define ATH12K_FW_STATS_BUF_SIZE (1024 * 1024)
+enum wmi_sys_cap_info_flags {
+ WMI_SYS_CAP_INFO_RXTX_LED = BIT(0),
+ WMI_SYS_CAP_INFO_RFKILL = BIT(1),
+};
+
+#define WMI_RFKILL_CFG_GPIO_PIN_NUM GENMASK(5, 0)
+#define WMI_RFKILL_CFG_RADIO_LEVEL BIT(6)
+#define WMI_RFKILL_CFG_PIN_AS_GPIO GENMASK(10, 7)
+
+enum wmi_rfkill_enable_radio {
+ WMI_RFKILL_ENABLE_RADIO_ON = 0,
+ WMI_RFKILL_ENABLE_RADIO_OFF = 1,
+};
+
+enum wmi_rfkill_radio_state {
+ WMI_RFKILL_RADIO_STATE_OFF = 1,
+ WMI_RFKILL_RADIO_STATE_ON = 2,
+};
+
+struct wmi_rfkill_state_change_event {
+ __le32 gpio_pin_num;
+ __le32 int_type;
+ __le32 radio_state;
+} __packed;
+
void ath12k_wmi_init_qcn9274(struct ath12k_base *ab,
struct ath12k_wmi_resource_config_arg *config);
void ath12k_wmi_init_wcn7850(struct ath12k_base *ab,
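The WMI_RFKILL_CFG_* masks added above are bare GENMASK()/BIT() field definitions; the code that packs them into a firmware configuration word is declared in mac.h (ath12k_mac_rfkill_config()) but not shown in this hunk. A minimal sketch of the usual packing with the <linux/bitfield.h> helpers -- the function name below is hypothetical, not the driver's implementation:

#include <linux/bitfield.h>

/* Hypothetical illustration: assemble an rfkill configuration word from
 * the GENMASK()-defined fields using the standard bitfield helpers.
 */
static u32 example_rfkill_cfg_word(u8 gpio_pin, bool radio_level, u8 pin_as_gpio)
{
	return u32_encode_bits(gpio_pin, WMI_RFKILL_CFG_GPIO_PIN_NUM) |
	       u32_encode_bits(radio_level, WMI_RFKILL_CFG_RADIO_LEVEL) |
	       u32_encode_bits(pin_as_gpio, WMI_RFKILL_CFG_PIN_AS_GPIO);
}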
diff --git a/drivers/net/wireless/ath/ath5k/base.c b/drivers/net/wireless/ath/ath5k/base.c
index c59c14483177..9f534ed2fbb3 100644
--- a/drivers/net/wireless/ath/ath5k/base.c
+++ b/drivers/net/wireless/ath/ath5k/base.c
@@ -230,13 +230,13 @@ ath5k_chip_name(enum ath5k_srev_type type, u_int16_t val)
}
static unsigned int ath5k_ioread32(void *hw_priv, u32 reg_offset)
{
- struct ath5k_hw *ah = (struct ath5k_hw *) hw_priv;
+ struct ath5k_hw *ah = hw_priv;
return ath5k_hw_reg_read(ah, reg_offset);
}
static void ath5k_iowrite32(void *hw_priv, u32 val, u32 reg_offset)
{
- struct ath5k_hw *ah = (struct ath5k_hw *) hw_priv;
+ struct ath5k_hw *ah = hw_priv;
ath5k_hw_reg_write(ah, val, reg_offset);
}
@@ -1770,7 +1770,7 @@ ath5k_tx_frame_completed(struct ath5k_hw *ah, struct sk_buff *skb,
ah->stats.antenna_tx[0]++; /* invalid */
trace_ath5k_tx_complete(ah, skb, txq, ts);
- ieee80211_tx_status(ah->hw, skb);
+ ieee80211_tx_status_skb(ah->hw, skb);
}
static void
diff --git a/drivers/net/wireless/ath/ath5k/led.c b/drivers/net/wireless/ath/ath5k/led.c
index 33e9928af363..439052984796 100644
--- a/drivers/net/wireless/ath/ath5k/led.c
+++ b/drivers/net/wireless/ath/ath5k/led.c
@@ -131,8 +131,7 @@ ath5k_register_led(struct ath5k_hw *ah, struct ath5k_led *led,
int err;
led->ah = ah;
- strncpy(led->name, name, sizeof(led->name));
- led->name[sizeof(led->name)-1] = 0;
+ strscpy(led->name, name, sizeof(led->name));
led->led_dev.name = led->name;
led->led_dev.default_trigger = trigger;
led->led_dev.brightness_set = ath5k_led_brightness_set;
diff --git a/drivers/net/wireless/ath/ath5k/pci.c b/drivers/net/wireless/ath/ath5k/pci.c
index 86b8cb975b1a..b51fce5ae260 100644
--- a/drivers/net/wireless/ath/ath5k/pci.c
+++ b/drivers/net/wireless/ath/ath5k/pci.c
@@ -54,7 +54,7 @@ MODULE_DEVICE_TABLE(pci, ath5k_pci_id_table);
/* return bus cachesize in 4B word units */
static void ath5k_pci_read_cachesize(struct ath_common *common, int *csz)
{
- struct ath5k_hw *ah = (struct ath5k_hw *) common->priv;
+ struct ath5k_hw *ah = common->priv;
u8 u8tmp;
pci_read_config_byte(ah->pdev, PCI_CACHE_LINE_SIZE, &u8tmp);
@@ -76,7 +76,7 @@ static void ath5k_pci_read_cachesize(struct ath_common *common, int *csz)
static bool
ath5k_pci_eeprom_read(struct ath_common *common, u32 offset, u16 *data)
{
- struct ath5k_hw *ah = (struct ath5k_hw *) common->ah;
+ struct ath5k_hw *ah = common->ah;
u32 status, timeout;
/*
diff --git a/drivers/net/wireless/ath/ath6kl/cfg80211.c b/drivers/net/wireless/ath/ath6kl/cfg80211.c
index 0c2b8b1a10d5..e37db4af33de 100644
--- a/drivers/net/wireless/ath/ath6kl/cfg80211.c
+++ b/drivers/net/wireless/ath/ath6kl/cfg80211.c
@@ -1118,9 +1118,9 @@ void ath6kl_cfg80211_ch_switch_notify(struct ath6kl_vif *vif, int freq,
ath6kl_band_2ghz.ht_cap.ht_supported) ?
NL80211_CHAN_HT20 : NL80211_CHAN_NO_HT);
- mutex_lock(&vif->wdev.mtx);
+ wiphy_lock(vif->ar->wiphy);
cfg80211_ch_switch_notify(vif->ndev, &chandef, 0, 0);
- mutex_unlock(&vif->wdev.mtx);
+ wiphy_unlock(vif->ar->wiphy);
}
static int ath6kl_cfg80211_add_key(struct wiphy *wiphy, struct net_device *ndev,
@@ -2954,7 +2954,7 @@ static int ath6kl_start_ap(struct wiphy *wiphy, struct net_device *dev,
}
static int ath6kl_change_beacon(struct wiphy *wiphy, struct net_device *dev,
- struct cfg80211_beacon_data *beacon)
+ struct cfg80211_ap_update *params)
{
struct ath6kl_vif *vif = netdev_priv(dev);
@@ -2964,7 +2964,7 @@ static int ath6kl_change_beacon(struct wiphy *wiphy, struct net_device *dev,
if (vif->next_mode != AP_NETWORK)
return -EOPNOTSUPP;
- return ath6kl_set_ies(vif, beacon);
+ return ath6kl_set_ies(vif, &params->beacon);
}
static int ath6kl_stop_ap(struct wiphy *wiphy, struct net_device *dev,
diff --git a/drivers/net/wireless/ath/ath6kl/init.c b/drivers/net/wireless/ath/ath6kl/init.c
index 201e45554070..15f455adb860 100644
--- a/drivers/net/wireless/ath/ath6kl/init.c
+++ b/drivers/net/wireless/ath/ath6kl/init.c
@@ -1677,7 +1677,7 @@ static void ath6kl_init_get_fwcaps(struct ath6kl *ar, char *buf, size_t buf_len)
/* add "..." to the end of string */
trunc_len = strlen(trunc) + 1;
- strncpy(buf + buf_len - trunc_len, trunc, trunc_len);
+ memcpy(buf + buf_len - trunc_len, trunc, trunc_len);
return;
}
diff --git a/drivers/net/wireless/ath/ath6kl/main.c b/drivers/net/wireless/ath/ath6kl/main.c
index d3aa9e7a37c2..8f9fe23e9755 100644
--- a/drivers/net/wireless/ath/ath6kl/main.c
+++ b/drivers/net/wireless/ath/ath6kl/main.c
@@ -852,14 +852,14 @@ void ath6kl_tgt_stats_event(struct ath6kl_vif *vif, u8 *ptr, u32 len)
void ath6kl_wakeup_event(void *dev)
{
- struct ath6kl *ar = (struct ath6kl *) dev;
+ struct ath6kl *ar = dev;
wake_up(&ar->event_wq);
}
void ath6kl_txpwr_rx_evt(void *devt, u8 tx_pwr)
{
- struct ath6kl *ar = (struct ath6kl *) devt;
+ struct ath6kl *ar = devt;
ar->tx_pwr = tx_pwr;
wake_up(&ar->event_wq);
diff --git a/drivers/net/wireless/ath/ath6kl/txrx.c b/drivers/net/wireless/ath/ath6kl/txrx.c
index a56fab6232a9..80e66acc5cf6 100644
--- a/drivers/net/wireless/ath/ath6kl/txrx.c
+++ b/drivers/net/wireless/ath/ath6kl/txrx.c
@@ -708,7 +708,7 @@ void ath6kl_tx_complete(struct htc_target *target,
packet->endpoint >= ENDPOINT_MAX))
continue;
- ath6kl_cookie = (struct ath6kl_cookie *)packet->pkt_cntxt;
+ ath6kl_cookie = packet->pkt_cntxt;
if (WARN_ON_ONCE(!ath6kl_cookie))
continue;
diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.c b/drivers/net/wireless/ath/ath9k/ar9003_phy.c
index a29c11f944a5..6274d1624261 100644
--- a/drivers/net/wireless/ath/ath9k/ar9003_phy.c
+++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.c
@@ -766,10 +766,10 @@ static void ar9003_hw_prog_ini(struct ath_hw *ah,
}
}
-static int ar9550_hw_get_modes_txgain_index(struct ath_hw *ah,
+static u32 ar9550_hw_get_modes_txgain_index(struct ath_hw *ah,
struct ath9k_channel *chan)
{
- int ret;
+ u32 ret;
if (IS_CHAN_2GHZ(chan)) {
if (IS_CHAN_HT40(chan))
@@ -791,7 +791,7 @@ static int ar9550_hw_get_modes_txgain_index(struct ath_hw *ah,
return ret;
}
-static int ar9561_hw_get_modes_txgain_index(struct ath_hw *ah,
+static u32 ar9561_hw_get_modes_txgain_index(struct ath_hw *ah,
struct ath9k_channel *chan)
{
if (IS_CHAN_2GHZ(chan)) {
@@ -916,7 +916,7 @@ static int ar9003_hw_process_ini(struct ath_hw *ah,
* TXGAIN initvals.
*/
if (AR_SREV_9550(ah) || AR_SREV_9531(ah) || AR_SREV_9561(ah)) {
- int modes_txgain_index = 1;
+ u32 modes_txgain_index = 1;
if (AR_SREV_9550(ah))
modes_txgain_index = ar9550_hw_get_modes_txgain_index(ah, chan);
@@ -925,9 +925,6 @@ static int ar9003_hw_process_ini(struct ath_hw *ah,
modes_txgain_index =
ar9561_hw_get_modes_txgain_index(ah, chan);
- if (modes_txgain_index < 0)
- return -EINVAL;
-
REG_WRITE_ARRAY(&ah->iniModesTxGain, modes_txgain_index,
regWrites);
} else {
diff --git a/drivers/net/wireless/ath/ath9k/debug.c b/drivers/net/wireless/ath/ath9k/debug.c
index 9bc57c5a89bf..a0376a6787b8 100644
--- a/drivers/net/wireless/ath/ath9k/debug.c
+++ b/drivers/net/wireless/ath/ath9k/debug.c
@@ -1293,7 +1293,7 @@ void ath9k_get_et_strings(struct ieee80211_hw *hw,
u32 sset, u8 *data)
{
if (sset == ETH_SS_STATS)
- memcpy(data, *ath9k_gstrings_stats,
+ memcpy(data, ath9k_gstrings_stats,
sizeof(ath9k_gstrings_stats));
}
diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
index e5414435b141..90cfe39aa433 100644
--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
+++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
@@ -1481,31 +1481,31 @@ static int ath9k_hif_usb_resume(struct usb_interface *interface)
{
struct hif_device_usb *hif_dev = usb_get_intfdata(interface);
struct htc_target *htc_handle = hif_dev->htc_handle;
- int ret;
const struct firmware *fw;
+ int ret;
ret = ath9k_hif_usb_alloc_urbs(hif_dev);
if (ret)
return ret;
- if (hif_dev->flags & HIF_USB_READY) {
- /* request cached firmware during suspend/resume cycle */
- ret = request_firmware(&fw, hif_dev->fw_name,
- &hif_dev->udev->dev);
- if (ret)
- goto fail_resume;
-
- hif_dev->fw_data = fw->data;
- hif_dev->fw_size = fw->size;
- ret = ath9k_hif_usb_download_fw(hif_dev);
- release_firmware(fw);
- if (ret)
- goto fail_resume;
- } else {
- ath9k_hif_usb_dealloc_urbs(hif_dev);
- return -EIO;
+ if (!(hif_dev->flags & HIF_USB_READY)) {
+ ret = -EIO;
+ goto fail_resume;
}
+ /* request cached firmware during suspend/resume cycle */
+ ret = request_firmware(&fw, hif_dev->fw_name,
+ &hif_dev->udev->dev);
+ if (ret)
+ goto fail_resume;
+
+ hif_dev->fw_data = fw->data;
+ hif_dev->fw_size = fw->size;
+ ret = ath9k_hif_usb_download_fw(hif_dev);
+ release_firmware(fw);
+ if (ret)
+ goto fail_resume;
+
mdelay(100);
ret = ath9k_htc_resume(htc_handle);
diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.h b/drivers/net/wireless/ath/ath9k/hif_usb.h
index 5985aa15ca93..b3e66b0485a5 100644
--- a/drivers/net/wireless/ath/ath9k/hif_usb.h
+++ b/drivers/net/wireless/ath/ath9k/hif_usb.h
@@ -126,7 +126,7 @@ struct hif_device_usb {
struct usb_anchor reg_in_submitted;
struct usb_anchor mgmt_submitted;
struct sk_buff *remain_skb;
- char fw_name[32];
+ char fw_name[64];
int fw_minor_index;
int rx_remain_len;
int rx_pkt_len;
diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
index c549ff3abcdc..278ddc713fdc 100644
--- a/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
+++ b/drivers/net/wireless/ath/ath9k/htc_drv_debug.c
@@ -423,7 +423,7 @@ void ath9k_htc_get_et_strings(struct ieee80211_hw *hw,
u32 sset, u8 *data)
{
if (sset == ETH_SS_STATS)
- memcpy(data, *ath9k_htc_gstrings_stats,
+ memcpy(data, ath9k_htc_gstrings_stats,
sizeof(ath9k_htc_gstrings_stats));
}
diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
index 672789e3c55d..800177021baf 100644
--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
@@ -523,7 +523,7 @@ send_mac80211:
}
/* Send status to mac80211 */
- ieee80211_tx_status(priv->hw, skb);
+ ieee80211_tx_status_skb(priv->hw, skb);
}
static inline void ath9k_htc_tx_drainq(struct ath9k_htc_priv *priv,
diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
index 4e939dcac1c9..f15684379b03 100644
--- a/drivers/net/wireless/ath/ath9k/xmit.c
+++ b/drivers/net/wireless/ath/ath9k/xmit.c
@@ -94,7 +94,7 @@ static void ath_tx_status(struct ieee80211_hw *hw, struct sk_buff *skb)
if (info->flags & (IEEE80211_TX_CTL_REQ_TX_STATUS |
IEEE80211_TX_STATUS_EOSP)) {
- ieee80211_tx_status(hw, skb);
+ ieee80211_tx_status_skb(hw, skb);
return;
}
diff --git a/drivers/net/wireless/ath/carl9170/usb.c b/drivers/net/wireless/ath/carl9170/usb.c
index e4eb666c6eea..c4edf8355941 100644
--- a/drivers/net/wireless/ath/carl9170/usb.c
+++ b/drivers/net/wireless/ath/carl9170/usb.c
@@ -178,7 +178,7 @@ static void carl9170_usb_tx_data_complete(struct urb *urb)
switch (urb->status) {
/* everything is fine */
case 0:
- carl9170_tx_callback(ar, (void *)urb->context);
+ carl9170_tx_callback(ar, urb->context);
break;
/* disconnect */
@@ -369,7 +369,7 @@ void carl9170_usb_handle_tx_err(struct ar9170 *ar)
struct urb *urb;
while ((urb = usb_get_from_anchor(&ar->tx_err))) {
- struct sk_buff *skb = (void *)urb->context;
+ struct sk_buff *skb = urb->context;
carl9170_tx_drop(ar, skb);
carl9170_tx_callback(ar, skb);
@@ -397,7 +397,7 @@ static void carl9170_usb_tasklet(struct tasklet_struct *t)
static void carl9170_usb_rx_complete(struct urb *urb)
{
- struct ar9170 *ar = (struct ar9170 *)urb->context;
+ struct ar9170 *ar = urb->context;
int err;
if (WARN_ON_ONCE(!ar))
@@ -559,7 +559,7 @@ static int carl9170_usb_flush(struct ar9170 *ar)
int ret, err = 0;
while ((urb = usb_get_from_anchor(&ar->tx_wait))) {
- struct sk_buff *skb = (void *)urb->context;
+ struct sk_buff *skb = urb->context;
carl9170_tx_drop(ar, skb);
carl9170_tx_callback(ar, skb);
usb_free_urb(urb);
@@ -668,7 +668,7 @@ int carl9170_exec_cmd(struct ar9170 *ar, const enum carl9170_cmd_oids cmd,
memcpy(ar->cmd.data, payload, plen);
spin_lock_bh(&ar->cmd_lock);
- ar->readbuf = (u8 *)out;
+ ar->readbuf = out;
ar->readlen = outlen;
spin_unlock_bh(&ar->cmd_lock);
diff --git a/drivers/net/wireless/ath/dfs_pattern_detector.c b/drivers/net/wireless/ath/dfs_pattern_detector.c
index 27f4d74a41c8..700da9f4531e 100644
--- a/drivers/net/wireless/ath/dfs_pattern_detector.c
+++ b/drivers/net/wireless/ath/dfs_pattern_detector.c
@@ -161,7 +161,7 @@ get_dfs_domain_radar_types(enum nl80211_dfs_regions region)
struct channel_detector {
struct list_head head;
u16 freq;
- struct pri_detector **detectors;
+ struct pri_detector *detectors[];
};
/* channel_detector_reset() - reset detector lines for a given channel */
@@ -183,14 +183,13 @@ static void channel_detector_exit(struct dfs_pattern_detector *dpd,
if (cd == NULL)
return;
list_del(&cd->head);
- if (cd->detectors) {
- for (i = 0; i < dpd->num_radar_types; i++) {
- struct pri_detector *de = cd->detectors[i];
- if (de != NULL)
- de->exit(de);
- }
+
+ for (i = 0; i < dpd->num_radar_types; i++) {
+ struct pri_detector *de = cd->detectors[i];
+ if (de != NULL)
+ de->exit(de);
}
- kfree(cd->detectors);
+
kfree(cd);
}
@@ -200,16 +199,12 @@ channel_detector_create(struct dfs_pattern_detector *dpd, u16 freq)
u32 i;
struct channel_detector *cd;
- cd = kmalloc(sizeof(*cd), GFP_ATOMIC);
+ cd = kzalloc(struct_size(cd, detectors, dpd->num_radar_types), GFP_ATOMIC);
if (cd == NULL)
goto fail;
INIT_LIST_HEAD(&cd->head);
cd->freq = freq;
- cd->detectors = kmalloc_array(dpd->num_radar_types,
- sizeof(*cd->detectors), GFP_ATOMIC);
- if (cd->detectors == NULL)
- goto fail;
for (i = 0; i < dpd->num_radar_types; i++) {
const struct radar_detector_specs *rs = &dpd->radar_spec[i];
diff --git a/drivers/net/wireless/ath/wcn36xx/dxe.c b/drivers/net/wireless/ath/wcn36xx/dxe.c
index 9013f056eecb..d405a4c34059 100644
--- a/drivers/net/wireless/ath/wcn36xx/dxe.c
+++ b/drivers/net/wireless/ath/wcn36xx/dxe.c
@@ -180,7 +180,7 @@ static int wcn36xx_dxe_init_descs(struct wcn36xx *wcn, struct wcn36xx_dxe_ch *wc
if (!wcn_ch->cpu_addr)
return -ENOMEM;
- cur_dxe = (struct wcn36xx_dxe_desc *)wcn_ch->cpu_addr;
+ cur_dxe = wcn_ch->cpu_addr;
cur_ctl = wcn_ch->head_blk_ctl;
for (i = 0; i < wcn_ch->desc_num; i++) {
@@ -453,7 +453,7 @@ static void reap_tx_dxes(struct wcn36xx *wcn, struct wcn36xx_dxe_ch *ch)
static irqreturn_t wcn36xx_irq_tx_complete(int irq, void *dev)
{
- struct wcn36xx *wcn = (struct wcn36xx *)dev;
+ struct wcn36xx *wcn = dev;
int int_src, int_reason;
wcn36xx_dxe_read_register(wcn, WCN36XX_DXE_INT_SRC_RAW_REG, &int_src);
@@ -541,7 +541,7 @@ static irqreturn_t wcn36xx_irq_tx_complete(int irq, void *dev)
static irqreturn_t wcn36xx_irq_rx_ready(int irq, void *dev)
{
- struct wcn36xx *wcn = (struct wcn36xx *)dev;
+ struct wcn36xx *wcn = dev;
wcn36xx_dxe_rx_frame(wcn);
diff --git a/drivers/net/wireless/ath/wcn36xx/smd.c b/drivers/net/wireless/ath/wcn36xx/smd.c
index 17e1919d1cd8..2cf86fc3f8fe 100644
--- a/drivers/net/wireless/ath/wcn36xx/smd.c
+++ b/drivers/net/wireless/ath/wcn36xx/smd.c
@@ -576,7 +576,7 @@ static int wcn36xx_smd_start_rsp(struct wcn36xx *wcn, void *buf, size_t len)
if (len < sizeof(*rsp))
return -EIO;
- rsp = (struct wcn36xx_hal_mac_start_rsp_msg *)buf;
+ rsp = buf;
if (WCN36XX_FW_MSG_RESULT_SUCCESS != rsp->start_rsp_params.status)
return -EIO;
@@ -1025,7 +1025,7 @@ static int wcn36xx_smd_switch_channel_rsp(void *buf, size_t len)
ret = wcn36xx_smd_rsp_status_check(buf, len);
if (ret)
return ret;
- rsp = (struct wcn36xx_hal_switch_channel_rsp_msg *)buf;
+ rsp = buf;
wcn36xx_dbg(WCN36XX_DBG_HAL, "channel switched to: %d, status: %d\n",
rsp->channel_number, rsp->status);
return ret;
@@ -1072,7 +1072,7 @@ static int wcn36xx_smd_process_ptt_msg_rsp(void *buf, size_t len,
if (ret)
return ret;
- rsp = (struct wcn36xx_hal_process_ptt_msg_rsp_msg *)buf;
+ rsp = buf;
wcn36xx_dbg(WCN36XX_DBG_HAL, "process ptt msg responded with length %d\n",
rsp->header.len);
@@ -1131,7 +1131,7 @@ static int wcn36xx_smd_update_scan_params_rsp(void *buf, size_t len)
{
struct wcn36xx_hal_update_scan_params_resp *rsp;
- rsp = (struct wcn36xx_hal_update_scan_params_resp *)buf;
+ rsp = buf;
/* Remove the PNO version bit */
rsp->status &= (~(WCN36XX_FW_MSG_PNO_VERSION_MASK));
@@ -1198,7 +1198,7 @@ static int wcn36xx_smd_add_sta_self_rsp(struct wcn36xx *wcn,
if (len < sizeof(*rsp))
return -EINVAL;
- rsp = (struct wcn36xx_hal_add_sta_self_rsp_msg *)buf;
+ rsp = buf;
if (rsp->status != WCN36XX_FW_MSG_RESULT_SUCCESS) {
wcn36xx_warn("hal add sta self failure: %d\n",
@@ -1316,7 +1316,7 @@ static int wcn36xx_smd_join_rsp(void *buf, size_t len)
if (wcn36xx_smd_rsp_status_check(buf, len))
return -EIO;
- rsp = (struct wcn36xx_hal_join_rsp_msg *)buf;
+ rsp = buf;
wcn36xx_dbg(WCN36XX_DBG_HAL,
"hal rsp join status %d tx_mgmt_power %d\n",
@@ -1481,7 +1481,7 @@ static int wcn36xx_smd_config_sta_rsp(struct wcn36xx *wcn,
if (len < sizeof(*rsp))
return -EINVAL;
- rsp = (struct wcn36xx_hal_config_sta_rsp_msg *)buf;
+ rsp = buf;
params = &rsp->params;
if (params->status != WCN36XX_FW_MSG_RESULT_SUCCESS) {
@@ -1849,7 +1849,7 @@ static int wcn36xx_smd_config_bss_rsp(struct wcn36xx *wcn,
if (len < sizeof(*rsp))
return -EINVAL;
- rsp = (struct wcn36xx_hal_config_bss_rsp_msg *)buf;
+ rsp = buf;
params = &rsp->bss_rsp_params;
if (params->status != WCN36XX_FW_MSG_RESULT_SUCCESS) {
@@ -2476,7 +2476,7 @@ static int wcn36xx_smd_add_ba_session_rsp(void *buf, int len, u8 *session)
if (len < sizeof(*rsp))
return -EINVAL;
- rsp = (struct wcn36xx_hal_add_ba_session_rsp_msg *)buf;
+ rsp = buf;
if (rsp->status != WCN36XX_FW_MSG_RESULT_SUCCESS)
return rsp->status;
@@ -2654,7 +2654,7 @@ static int wcn36xx_smd_trigger_ba_rsp(void *buf, int len, struct add_ba_info *ba
if (len < sizeof(*rsp))
return -EINVAL;
- rsp = (struct wcn36xx_hal_trigger_ba_rsp_msg *) buf;
+ rsp = buf;
if (rsp->candidate_cnt < 1)
return rsp->status ? rsp->status : -EINVAL;
diff --git a/drivers/net/wireless/ath/wcn36xx/smd.h b/drivers/net/wireless/ath/wcn36xx/smd.h
index cf15cde2a364..2c1ed9e570bf 100644
--- a/drivers/net/wireless/ath/wcn36xx/smd.h
+++ b/drivers/net/wireless/ath/wcn36xx/smd.h
@@ -47,7 +47,7 @@ struct wcn36xx_fw_msg_status_rsp {
struct wcn36xx_hal_ind_msg {
struct list_head list;
size_t msg_len;
- u8 msg[];
+ u8 msg[] __counted_by(msg_len);
};
struct wcn36xx;
diff --git a/drivers/net/wireless/ath/wcn36xx/testmode.c b/drivers/net/wireless/ath/wcn36xx/testmode.c
index 7ae14b4d2d0e..e5142c052985 100644
--- a/drivers/net/wireless/ath/wcn36xx/testmode.c
+++ b/drivers/net/wireless/ath/wcn36xx/testmode.c
@@ -53,7 +53,7 @@ static int wcn36xx_tm_cmd_ptt(struct wcn36xx *wcn, struct ieee80211_vif *vif,
buf = nla_data(tb[WCN36XX_TM_ATTR_DATA]);
buf_len = nla_len(tb[WCN36XX_TM_ATTR_DATA]);
- msg = (struct ftm_rsp_msg *)buf;
+ msg = buf;
wcn36xx_dbg(WCN36XX_DBG_TESTMODE,
"testmode cmd wmi msg_id 0x%04X msg_len %d buf %pK buf_len %d\n",
diff --git a/drivers/net/wireless/ath/wil6210/cfg80211.c b/drivers/net/wireless/ath/wil6210/cfg80211.c
index 40f9a7ef8980..dbe4b3478f03 100644
--- a/drivers/net/wireless/ath/wil6210/cfg80211.c
+++ b/drivers/net/wireless/ath/wil6210/cfg80211.c
@@ -2082,11 +2082,12 @@ void wil_cfg80211_ap_recovery(struct wil6210_priv *wil)
static int wil_cfg80211_change_beacon(struct wiphy *wiphy,
struct net_device *ndev,
- struct cfg80211_beacon_data *bcon)
+ struct cfg80211_ap_update *params)
{
struct wil6210_priv *wil = wiphy_to_wil(wiphy);
struct wireless_dev *wdev = ndev->ieee80211_ptr;
struct wil6210_vif *vif = ndev_to_vif(ndev);
+ struct cfg80211_beacon_data *bcon = &params->beacon;
int rc;
u32 privacy = 0;
diff --git a/drivers/net/wireless/ath/wil6210/wmi.c b/drivers/net/wireless/ath/wil6210/wmi.c
index 6a5976a2944c..6fdb77d4c59e 100644
--- a/drivers/net/wireless/ath/wil6210/wmi.c
+++ b/drivers/net/wireless/ath/wil6210/wmi.c
@@ -870,7 +870,6 @@ static void wmi_evt_rx_mgmt(struct wil6210_vif *vif, int id, void *d, int len)
struct cfg80211_bss *bss;
struct cfg80211_inform_bss bss_data = {
.chan = channel,
- .scan_width = NL80211_BSS_CHAN_WIDTH_20,
.signal = signal,
.boottime_ns = ktime_to_ns(ktime_get_boottime()),
};
@@ -1389,7 +1388,6 @@ wmi_evt_sched_scan_result(struct wil6210_vif *vif, int id, void *d, int len)
u32 d_len;
struct cfg80211_bss *bss;
struct cfg80211_inform_bss bss_data = {
- .scan_width = NL80211_BSS_CHAN_WIDTH_20,
.boottime_ns = ktime_to_ns(ktime_get_boottime()),
};
diff --git a/drivers/net/wireless/atmel/atmel.c b/drivers/net/wireless/atmel/atmel.c
index 7c2d1c588156..461dce21de2b 100644
--- a/drivers/net/wireless/atmel/atmel.c
+++ b/drivers/net/wireless/atmel/atmel.c
@@ -571,7 +571,6 @@ static const struct {
{ REG_DOMAIN_ISRAEL, 3, 9, "Israel"} };
static void build_wpa_mib(struct atmel_private *priv);
-static int atmel_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
static void atmel_copy_to_card(struct net_device *dev, u16 dest,
const unsigned char *src, u16 len);
static void atmel_copy_to_host(struct net_device *dev, unsigned char *dest,
@@ -1487,7 +1486,6 @@ static const struct net_device_ops atmel_netdev_ops = {
.ndo_stop = atmel_close,
.ndo_set_mac_address = atmel_set_mac_address,
.ndo_start_xmit = start_tx,
- .ndo_do_ioctl = atmel_ioctl,
.ndo_validate_addr = eth_validate_addr,
};
@@ -2616,76 +2614,6 @@ static const struct iw_handler_def atmel_handler_def = {
.get_wireless_stats = atmel_get_wireless_stats
};
-static int atmel_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
-{
- int i, rc = 0;
- struct atmel_private *priv = netdev_priv(dev);
- struct atmel_priv_ioctl com;
- struct iwreq *wrq = (struct iwreq *) rq;
- unsigned char *new_firmware;
- char domain[REGDOMAINSZ + 1];
-
- switch (cmd) {
- case ATMELIDIFC:
- wrq->u.param.value = ATMELMAGIC;
- break;
-
- case ATMELFWL:
- if (copy_from_user(&com, rq->ifr_data, sizeof(com))) {
- rc = -EFAULT;
- break;
- }
-
- if (!capable(CAP_NET_ADMIN)) {
- rc = -EPERM;
- break;
- }
-
- new_firmware = memdup_user(com.data, com.len);
- if (IS_ERR(new_firmware)) {
- rc = PTR_ERR(new_firmware);
- break;
- }
-
- kfree(priv->firmware);
-
- priv->firmware = new_firmware;
- priv->firmware_length = com.len;
- strncpy(priv->firmware_id, com.id, 31);
- priv->firmware_id[31] = '\0';
- break;
-
- case ATMELRD:
- if (copy_from_user(domain, rq->ifr_data, REGDOMAINSZ)) {
- rc = -EFAULT;
- break;
- }
-
- if (!capable(CAP_NET_ADMIN)) {
- rc = -EPERM;
- break;
- }
-
- domain[REGDOMAINSZ] = 0;
- rc = -EINVAL;
- for (i = 0; i < ARRAY_SIZE(channel_table); i++) {
- if (!strcasecmp(channel_table[i].name, domain)) {
- priv->config_reg_domain = channel_table[i].reg_domain;
- rc = 0;
- }
- }
-
- if (rc == 0 && priv->station_state != STATION_STATE_DOWN)
- rc = atmel_open(dev);
- break;
-
- default:
- rc = -EOPNOTSUPP;
- }
-
- return rc;
-}
-
struct auth_body {
__le16 alg;
__le16 trans_seq;
diff --git a/drivers/net/wireless/broadcom/b43/dma.c b/drivers/net/wireless/broadcom/b43/dma.c
index 9a7c62bd5e43..760d1a28edc6 100644
--- a/drivers/net/wireless/broadcom/b43/dma.c
+++ b/drivers/net/wireless/broadcom/b43/dma.c
@@ -1531,9 +1531,9 @@ void b43_dma_handle_txstatus(struct b43_wldev *dev,
ring->nr_failed_tx_packets++;
ring->nr_total_packet_tries += status->frame_count;
#endif /* DEBUG */
- ieee80211_tx_status(dev->wl->hw, meta->skb);
+ ieee80211_tx_status_skb(dev->wl->hw, meta->skb);
- /* skb will be freed by ieee80211_tx_status().
+ /* skb will be freed by ieee80211_tx_status_skb().
* Poison our pointer. */
meta->skb = B43_DMA_PTR_POISON;
} else {
diff --git a/drivers/net/wireless/broadcom/b43/pio.c b/drivers/net/wireless/broadcom/b43/pio.c
index 8c28a9250cd1..0cf70fdb60a6 100644
--- a/drivers/net/wireless/broadcom/b43/pio.c
+++ b/drivers/net/wireless/broadcom/b43/pio.c
@@ -582,7 +582,7 @@ void b43_pio_handle_txstatus(struct b43_wldev *dev,
q->buffer_used -= total_len;
q->free_packet_slots += 1;
- ieee80211_tx_status(dev->wl->hw, pack->skb);
+ ieee80211_tx_status_skb(dev->wl->hw, pack->skb);
pack->skb = NULL;
list_add(&pack->list, &q->packets_list);
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
index 2a90bb24ba77..667462369a32 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c
@@ -3367,7 +3367,6 @@ static s32 brcmf_inform_single_bss(struct brcmf_cfg80211_info *cfg,
freq = ieee80211_channel_to_frequency(channel, band);
bss_data.chan = ieee80211_get_channel(wiphy, freq);
- bss_data.scan_width = NL80211_BSS_CHAN_WIDTH_20;
bss_data.boottime_ns = ktime_to_ns(ktime_get_boottime());
notify_capability = le16_to_cpu(bi->capability);
@@ -5416,13 +5415,13 @@ static int brcmf_cfg80211_stop_ap(struct wiphy *wiphy, struct net_device *ndev,
static s32
brcmf_cfg80211_change_beacon(struct wiphy *wiphy, struct net_device *ndev,
- struct cfg80211_beacon_data *info)
+ struct cfg80211_ap_update *info)
{
struct brcmf_if *ifp = netdev_priv(ndev);
brcmf_dbg(TRACE, "Enter\n");
- return brcmf_config_ap_mgmt_ie(ifp->vif, info);
+ return brcmf_config_ap_mgmt_ie(ifp->vif, &info->beacon);
}
static int
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
index 09d2f2dc2b46..83f8ed7d00f9 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
@@ -19,7 +19,7 @@
#define BRCMF_FW_MAX_NVRAM_SIZE 64000
#define BRCMF_FW_NVRAM_DEVPATH_LEN 19 /* devpath0=pcie/1/4/ */
-#define BRCMF_FW_NVRAM_PCIEDEV_LEN 10 /* pcie/1/4/ + \0 */
+#define BRCMF_FW_NVRAM_PCIEDEV_LEN 20 /* pcie/1/4/ + \0 */
#define BRCMF_FW_DEFAULT_BOARDREV "boardrev=0xff"
#define BRCMF_FW_MACADDR_FMT "macaddr=%pM"
#define BRCMF_FW_MACADDR_LEN (7 + ETH_ALEN * 3)
@@ -238,9 +238,9 @@ static void brcmf_fw_strip_multi_v1(struct nvram_parser *nvp, u16 domain_nr,
u16 bus_nr)
{
/* Device path with a leading '=' key-value separator */
- char pci_path[] = "=pci/?/?";
+ char pci_path[20];
size_t pci_len;
- char pcie_path[] = "=pcie/?/?";
+ char pcie_path[20];
size_t pcie_len;
u32 i, j;
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h
index 1266cbaee072..4002d326fd21 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.h
@@ -69,7 +69,7 @@ struct brcmf_fw_request {
u16 bus_nr;
u32 n_items;
const char *board_types[BRCMF_FW_MAX_BOARD_TYPES];
- struct brcmf_fw_item items[];
+ struct brcmf_fw_item items[] __counted_by(n_items);
};
struct brcmf_fw_name {
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
index dac7eb77799b..68960ae98987 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c
@@ -33,7 +33,7 @@ struct brcmf_fweh_queue_item {
u8 ifaddr[ETH_ALEN];
struct brcmf_event_msg_be emsg;
u32 datalen;
- u8 data[];
+ u8 data[] __counted_by(datalen);
};
/*
@@ -418,17 +418,17 @@ void brcmf_fweh_process_event(struct brcmf_pub *drvr,
datalen + sizeof(*event_packet) > packet_len)
return;
- event = kzalloc(sizeof(*event) + datalen, gfp);
+ event = kzalloc(struct_size(event, data, datalen), gfp);
if (!event)
return;
+ event->datalen = datalen;
event->code = code;
event->ifidx = event_packet->msg.ifidx;
/* use memcpy to get aligned event message */
memcpy(&event->emsg, &event_packet->msg, sizeof(event->emsg));
memcpy(event->data, data, datalen);
- event->datalen = datalen;
memcpy(event->ifaddr, event_packet->eth.h_dest, ETH_ALEN);
brcmf_fweh_queue_event(fweh, event);
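The fweh.c hunk above pairs the new __counted_by(datalen) annotation with a struct_size() allocation and moves the event->datalen assignment ahead of the memcpy(), so the flexible-array bound is recorded before the array is filled. A self-contained userspace sketch of that allocate/record/fill ordering -- struct and function names (demo_event, demo_event_alloc) are illustrative stand-ins, not kernel code:

#include <stdlib.h>
#include <string.h>

/* Size the allocation from the flexible-array length, record the length,
 * then copy the payload.  In the kernel, __counted_by(datalen) lets
 * fortified routines bounds-check data[] against the recorded counter,
 * which is why the counter is set before the memcpy().
 */
struct demo_event {
	unsigned int datalen;
	unsigned char data[];		/* __counted_by(datalen) in the kernel */
};

static struct demo_event *demo_event_alloc(const void *payload, unsigned int len)
{
	struct demo_event *ev = calloc(1, sizeof(*ev) + len);	/* struct_size() in the kernel */

	if (!ev)
		return NULL;

	ev->datalen = len;		/* set the counter before filling data[] */
	memcpy(ev->data, payload, len);
	return ev;
}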
diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
index 611d1a6aabb9..9d248ba1c0b2 100644
--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
+++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil_types.h
@@ -1214,7 +1214,7 @@ struct brcmf_gscan_config {
u8 count_of_channel_buckets;
u8 retry_threshold;
__le16 lost_ap_window;
- struct brcmf_gscan_bucket_config bucket[];
+ struct brcmf_gscan_bucket_config bucket[] __counted_by(count_of_channel_buckets);
};
/**
diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2100.c b/drivers/net/wireless/intel/ipw2x00/ipw2100.c
index 0812db8936f1..b6636002c7d2 100644
--- a/drivers/net/wireless/intel/ipw2x00/ipw2100.c
+++ b/drivers/net/wireless/intel/ipw2x00/ipw2100.c
@@ -317,8 +317,6 @@ static int ipw2100_get_firmware(struct ipw2100_priv *priv,
struct ipw2100_fw *fw);
static int ipw2100_get_fwversion(struct ipw2100_priv *priv, char *buf,
size_t max);
-static int ipw2100_get_ucodeversion(struct ipw2100_priv *priv, char *buf,
- size_t max);
static void ipw2100_release_firmware(struct ipw2100_priv *priv,
struct ipw2100_fw *fw);
static int ipw2100_ucode_download(struct ipw2100_priv *priv,
@@ -5894,17 +5892,14 @@ static void ipw_ethtool_get_drvinfo(struct net_device *dev,
struct ethtool_drvinfo *info)
{
struct ipw2100_priv *priv = libipw_priv(dev);
- char fw_ver[64], ucode_ver[64];
+ char fw_ver[64];
strscpy(info->driver, DRV_NAME, sizeof(info->driver));
strscpy(info->version, DRV_VERSION, sizeof(info->version));
ipw2100_get_fwversion(priv, fw_ver, sizeof(fw_ver));
- ipw2100_get_ucodeversion(priv, ucode_ver, sizeof(ucode_ver));
-
- snprintf(info->fw_version, sizeof(info->fw_version), "%s:%d:%s",
- fw_ver, priv->eeprom_version, ucode_ver);
+ strscpy(info->fw_version, fw_ver, sizeof(info->fw_version));
strscpy(info->bus_info, pci_name(priv->pci_dev),
sizeof(info->bus_info));
}
@@ -8406,17 +8401,6 @@ static int ipw2100_get_fwversion(struct ipw2100_priv *priv, char *buf,
return tmp;
}
-static int ipw2100_get_ucodeversion(struct ipw2100_priv *priv, char *buf,
- size_t max)
-{
- u32 ver;
- u32 len = sizeof(ver);
- /* microcode version is a 32 bit integer */
- if (ipw2100_get_ordinal(priv, IPW_ORD_UCODE_VERSION, &ver, &len))
- return -EIO;
- return snprintf(buf, max, "%08X", ver);
-}
-
/*
* On exit, the firmware will have been freed from the fw list
*/
diff --git a/drivers/net/wireless/intel/ipw2x00/ipw2200.c b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
index 820100cac491..eed9ef17bc29 100644
--- a/drivers/net/wireless/intel/ipw2x00/ipw2200.c
+++ b/drivers/net/wireless/intel/ipw2x00/ipw2200.c
@@ -9656,31 +9656,30 @@ static int ipw_wx_get_wireless_mode(struct net_device *dev,
mutex_lock(&priv->mutex);
switch (priv->ieee->mode) {
case IEEE_A:
- strncpy(extra, "802.11a (1)", MAX_WX_STRING);
+ strscpy_pad(extra, "802.11a (1)", MAX_WX_STRING);
break;
case IEEE_B:
- strncpy(extra, "802.11b (2)", MAX_WX_STRING);
+ strscpy_pad(extra, "802.11b (2)", MAX_WX_STRING);
break;
case IEEE_A | IEEE_B:
- strncpy(extra, "802.11ab (3)", MAX_WX_STRING);
+ strscpy_pad(extra, "802.11ab (3)", MAX_WX_STRING);
break;
case IEEE_G:
- strncpy(extra, "802.11g (4)", MAX_WX_STRING);
+ strscpy_pad(extra, "802.11g (4)", MAX_WX_STRING);
break;
case IEEE_A | IEEE_G:
- strncpy(extra, "802.11ag (5)", MAX_WX_STRING);
+ strscpy_pad(extra, "802.11ag (5)", MAX_WX_STRING);
break;
case IEEE_B | IEEE_G:
- strncpy(extra, "802.11bg (6)", MAX_WX_STRING);
+ strscpy_pad(extra, "802.11bg (6)", MAX_WX_STRING);
break;
case IEEE_A | IEEE_B | IEEE_G:
- strncpy(extra, "802.11abg (7)", MAX_WX_STRING);
+ strscpy_pad(extra, "802.11abg (7)", MAX_WX_STRING);
break;
default:
- strncpy(extra, "unknown", MAX_WX_STRING);
+ strscpy_pad(extra, "unknown", MAX_WX_STRING);
break;
}
- extra[MAX_WX_STRING - 1] = '\0';
IPW_DEBUG_WX("PRIV GET MODE: %s\n", extra);
@@ -10378,7 +10377,6 @@ static void ipw_ethtool_get_drvinfo(struct net_device *dev,
{
struct ipw_priv *p = libipw_priv(dev);
char vers[64];
- char date[32];
u32 len;
strscpy(info->driver, DRV_NAME, sizeof(info->driver));
@@ -10386,11 +10384,8 @@ static void ipw_ethtool_get_drvinfo(struct net_device *dev,
len = sizeof(vers);
ipw_get_ordinal(p, IPW_ORD_STAT_FW_VERSION, vers, &len);
- len = sizeof(date);
- ipw_get_ordinal(p, IPW_ORD_STAT_FW_DATE, date, &len);
- snprintf(info->fw_version, sizeof(info->fw_version), "%s (%s)",
- vers, date);
+ strscpy(info->fw_version, vers, sizeof(info->fw_version));
strscpy(info->bus_info, pci_name(p->pci_dev),
sizeof(info->bus_info));
}
diff --git a/drivers/net/wireless/intel/ipw2x00/libipw.h b/drivers/net/wireless/intel/ipw2x00/libipw.h
index bec7bc273748..9065ca5b0208 100644
--- a/drivers/net/wireless/intel/ipw2x00/libipw.h
+++ b/drivers/net/wireless/intel/ipw2x00/libipw.h
@@ -488,7 +488,7 @@ struct libipw_txb {
u8 reserved;
u16 frag_size;
u16 payload_size;
- struct sk_buff *fragments[];
+ struct sk_buff *fragments[] __counted_by(nr_frags);
};
/* SWEEP TABLE ENTRIES NUMBER */
diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
index 0a4aa3c678c1..69276266ce6f 100644
--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
@@ -6122,7 +6122,7 @@ il4965_mac_channel_switch(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
if (il->ops->set_channel_switch(il, ch_switch)) {
clear_bit(S_CHANNEL_SWITCH_PENDING, &il->status);
il->switch_channel = 0;
- ieee80211_chswitch_done(il->vif, false);
+ ieee80211_chswitch_done(il->vif, false, 0);
}
out:
diff --git a/drivers/net/wireless/intel/iwlegacy/common.c b/drivers/net/wireless/intel/iwlegacy/common.c
index 96002121bb8b..054fef680aba 100644
--- a/drivers/net/wireless/intel/iwlegacy/common.c
+++ b/drivers/net/wireless/intel/iwlegacy/common.c
@@ -4090,7 +4090,7 @@ il_chswitch_done(struct il_priv *il, bool is_success)
return;
if (test_and_clear_bit(S_CHANNEL_SWITCH_PENDING, &il->status))
- ieee80211_chswitch_done(il->vif, is_success);
+ ieee80211_chswitch_done(il->vif, is_success, 0);
}
EXPORT_SYMBOL(il_chswitch_done);
diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/ax210.c b/drivers/net/wireless/intel/iwlwifi/cfg/ax210.c
index 8d5f9dce71d5..134635c70ce8 100644
--- a/drivers/net/wireless/intel/iwlwifi/cfg/ax210.c
+++ b/drivers/net/wireless/intel/iwlwifi/cfg/ax210.c
@@ -10,7 +10,7 @@
#include "fw/api/txq.h"
/* Highest firmware API version supported */
-#define IWL_AX210_UCODE_API_MAX 83
+#define IWL_AX210_UCODE_API_MAX 86
/* Lowest firmware API version supported */
#define IWL_AX210_UCODE_API_MIN 59
diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
index b9893b22e41d..82da957adcf6 100644
--- a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
+++ b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
@@ -10,7 +10,7 @@
#include "fw/api/txq.h"
/* Highest firmware API version supported */
-#define IWL_BZ_UCODE_API_MAX 83
+#define IWL_BZ_UCODE_API_MAX 86
/* Lowest firmware API version supported */
#define IWL_BZ_UCODE_API_MIN 80
@@ -134,12 +134,10 @@ static const struct iwl_base_params iwl_bz_base_params = {
.ht_params = &iwl_gl_a_ht_params
/*
- * If the device doesn't support HE, no need to have that many buffers.
- * These sizes were picked according to 8 MSDUs inside 256 A-MSDUs in an
+ * This size was picked according to 8 MSDUs inside 512 A-MSDUs in an
* A-MPDU, with additional overhead to account for processing time.
*/
-#define IWL_NUM_RBDS_NON_HE 512
-#define IWL_NUM_RBDS_BZ_HE 4096
+#define IWL_NUM_RBDS_BZ_EHT (512 * 16)
const struct iwl_cfg_trans_params iwl_bz_trans_cfg = {
.device_family = IWL_DEVICE_FAMILY_BZ,
@@ -160,16 +158,16 @@ const struct iwl_cfg iwl_cfg_bz = {
.fw_name_mac = "bz",
.uhb_supported = true,
IWL_DEVICE_BZ,
- .features = IWL_TX_CSUM_NETIF_FLAGS_BZ | NETIF_F_RXCSUM,
- .num_rbds = IWL_NUM_RBDS_BZ_HE,
+ .features = IWL_TX_CSUM_NETIF_FLAGS | NETIF_F_RXCSUM,
+ .num_rbds = IWL_NUM_RBDS_BZ_EHT,
};
const struct iwl_cfg iwl_cfg_gl = {
.fw_name_mac = "gl",
.uhb_supported = true,
IWL_DEVICE_BZ,
- .features = IWL_TX_CSUM_NETIF_FLAGS_BZ | NETIF_F_RXCSUM,
- .num_rbds = IWL_NUM_RBDS_BZ_HE,
+ .features = IWL_TX_CSUM_NETIF_FLAGS | NETIF_F_RXCSUM,
+ .num_rbds = IWL_NUM_RBDS_BZ_EHT,
};
diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/sc.c b/drivers/net/wireless/intel/iwlwifi/cfg/sc.c
index ad283fd22e2a..80eb9b499538 100644
--- a/drivers/net/wireless/intel/iwlwifi/cfg/sc.c
+++ b/drivers/net/wireless/intel/iwlwifi/cfg/sc.c
@@ -10,7 +10,7 @@
#include "fw/api/txq.h"
/* Highest firmware API version supported */
-#define IWL_SC_UCODE_API_MAX 83
+#define IWL_SC_UCODE_API_MAX 86
/* Lowest firmware API version supported */
#define IWL_SC_UCODE_API_MIN 82
@@ -127,12 +127,10 @@ static const struct iwl_base_params iwl_sc_base_params = {
.ht_params = &iwl_22000_ht_params
/*
- * If the device doesn't support HE, no need to have that many buffers.
- * These sizes were picked according to 8 MSDUs inside 256 A-MSDUs in an
+ * This size was picked according to 8 MSDUs inside 512 A-MSDUs in an
* A-MPDU, with additional overhead to account for processing time.
*/
-#define IWL_NUM_RBDS_NON_HE 512
-#define IWL_NUM_RBDS_SC_HE 4096
+#define IWL_NUM_RBDS_SC_EHT (512 * 16)
const struct iwl_cfg_trans_params iwl_sc_trans_cfg = {
.device_family = IWL_DEVICE_FAMILY_SC,
@@ -153,8 +151,8 @@ const struct iwl_cfg iwl_cfg_sc = {
.fw_name_mac = "sc",
.uhb_supported = true,
IWL_DEVICE_SC,
- .features = IWL_TX_CSUM_NETIF_FLAGS_BZ | NETIF_F_RXCSUM,
- .num_rbds = IWL_NUM_RBDS_SC_HE,
+ .features = IWL_TX_CSUM_NETIF_FLAGS | NETIF_F_RXCSUM,
+ .num_rbds = IWL_NUM_RBDS_SC_EHT,
};
MODULE_FIRMWARE(IWL_SC_A_FM_B_FW_MODULE_FIRMWARE(IWL_SC_UCODE_API_MAX));
diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/commands.h b/drivers/net/wireless/intel/iwlwifi/dvm/commands.h
index 75a4b8e26232..04864d3fda63 100644
--- a/drivers/net/wireless/intel/iwlwifi/dvm/commands.h
+++ b/drivers/net/wireless/intel/iwlwifi/dvm/commands.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2005-2014 Intel Corporation
+ * Copyright (C) 2005-2014, 2023 Intel Corporation
*/
/*
* Please use this file (commands.h) only for uCode API definitions.
@@ -270,7 +270,7 @@ enum {
#define IWL_PWR_NUM_HT_OFDM_ENTRIES 24
#define IWL_PWR_CCK_ENTRIES 2
-/**
+/*
* struct tx_power_dual_stream
*
* Table entries in REPLY_TX_PWR_TABLE_CMD, REPLY_CHANNEL_SWITCH
@@ -281,7 +281,7 @@ struct tx_power_dual_stream {
__le32 dw;
} __packed;
-/**
+/*
* Command REPLY_TX_POWER_DBM_CMD = 0x98
* struct iwlagn_tx_power_dbm_cmd
*/
@@ -295,7 +295,7 @@ struct iwlagn_tx_power_dbm_cmd {
u8 reserved;
} __packed;
-/**
+/*
* Command TX_ANT_CONFIGURATION_CMD = 0x98
* This command is used to configure valid Tx antenna.
* By default uCode concludes the valid antenna according to the radio flavor.
@@ -313,7 +313,7 @@ struct iwl_tx_ant_config_cmd {
#define UCODE_VALID_OK cpu_to_le32(0x1)
-/**
+/*
* REPLY_ALIVE = 0x1 (response only, not a command)
*
* uCode issues this "alive" notification once the runtime image is ready
@@ -534,7 +534,7 @@ enum {
/* transfer to host non bssid beacons in associated state */
#define RXON_FILTER_BCON_AWARE_MSK cpu_to_le32(1 << 6)
-/**
+/*
* REPLY_RXON = 0x10 (command, has simple generic response)
*
* RXON tunes the radio tuner to a service channel, and sets up a number
@@ -681,6 +681,7 @@ struct iwl_csa_notification {
* @aifsn: Number of slots in Arbitration Interframe Space (before
* performing random backoff timing prior to Tx). Device default 1.
* @edca_txop: Length of Tx opportunity, in uSecs. Device default is 0.
+ * @reserved1: reserved for alignment
*
* Device will automatically increase contention window by (2*CW) + 1 for each
* transmission retry. Device uses cw_max as a bit mask, ANDed with new CW
@@ -791,9 +792,11 @@ struct iwl_keyinfo {
/**
* struct sta_id_modify
- * @addr[ETH_ALEN]: station's MAC address
+ * @addr: station's MAC address
+ * @reserved1: reserved for alignment
* @sta_id: index of station in uCode's station table
* @modify_mask: STA_MODIFY_*, 1: modify, 0: don't change
+ * @reserved2: reserved for alignment
*
* Driver selects unused table index when adding new station,
* or the index to a pre-existing station entry when modifying that station.
@@ -1464,7 +1467,7 @@ struct iwl_compressed_ba_resp {
#define LINK_QUAL_ANT_MSK (LINK_QUAL_ANT_A_MSK|LINK_QUAL_ANT_B_MSK)
-/**
+/*
* struct iwl_link_qual_general_params
*
* Used in REPLY_TX_LINK_QUALITY_CMD
@@ -1507,7 +1510,7 @@ struct iwl_link_qual_general_params {
#define LINK_QUAL_AGG_FRAME_LIMIT_MAX (63)
#define LINK_QUAL_AGG_FRAME_LIMIT_MIN (0)
-/**
+/*
* struct iwl_link_qual_agg_params
*
* Used in REPLY_TX_LINK_QUALITY_CMD
@@ -2040,7 +2043,7 @@ struct iwl_spectrum_notification {
*
*****************************************************************************/
-/**
+/*
* struct iwl_powertable_cmd - Power Table Command
* @flags: See below:
*
@@ -2171,7 +2174,7 @@ struct iwl_ct_kill_throttling_config {
#define SCAN_CHANNEL_TYPE_PASSIVE cpu_to_le32(0)
#define SCAN_CHANNEL_TYPE_ACTIVE cpu_to_le32(1)
-/**
+/*
* struct iwl_scan_channel - entry in REPLY_SCAN_CMD channel table
*
* One for each channel in the scan list.
@@ -2210,7 +2213,7 @@ struct iwl_scan_channel {
/* set number of direct probes __le32 type */
#define IWL_SCAN_PROBE_MASK(n) cpu_to_le32((BIT(n) | (BIT(n) - BIT(1))))
-/**
+/*
* struct iwl_ssid_ie - directed scan network information element
*
* Up to 20 of these may appear in REPLY_SCAN_CMD,
@@ -2560,6 +2563,7 @@ struct statistics_rx_bt {
* @ant_a: current tx power on chain a in 1/2 dB step
* @ant_b: current tx power on chain b in 1/2 dB step
* @ant_c: current tx power on chain c in 1/2 dB step
+ * @reserved: reserved for alignment
*/
struct statistics_tx_power {
u8 ant_a;
@@ -3006,7 +3010,7 @@ struct iwl_enhance_sensitivity_cmd {
} __packed;
-/**
+/*
* REPLY_PHY_CALIBRATION_CMD = 0xb0 (command, has simple generic response)
*
* This command sets the relative gains of agn device's 3 radio receiver chains.
@@ -3847,6 +3851,7 @@ struct iwlagn_wowlan_status {
* @type:
* 0 - BSS
* 1 - PAN
+ * @reserved: reserved for alignment
*/
struct iwl_wipan_slot {
__le16 width;
@@ -3874,6 +3879,8 @@ struct iwl_wipan_slot {
* uCode will perform leaving channel methods in context switch
* also when working in same channel mode
* @num_slots: 1 - 10
+ * @slots: per-slot data
+ * @reserved: reserved for alignment
*/
struct iwl_wipan_params_cmd {
__le16 flags;
diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/dev.h b/drivers/net/wireless/intel/iwlwifi/dvm/dev.h
index 1a9eadace188..25283e4b849f 100644
--- a/drivers/net/wireless/intel/iwlwifi/dvm/dev.h
+++ b/drivers/net/wireless/intel/iwlwifi/dvm/dev.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/******************************************************************************
*
- * Copyright(c) 2003 - 2014, 2020 Intel Corporation. All rights reserved.
+ * Copyright(c) 2003 - 2014, 2020, 2023 Intel Corporation. All rights reserved.
*****************************************************************************/
/*
* Please use this file (dev.h) for driver implementation definitions.
@@ -126,11 +126,11 @@ enum iwl_agg_state {
/**
* struct iwl_ht_agg - aggregation state machine
-
+ *
* This structs holds the states for the BA agreement establishment and tear
* down. It also holds the state during the BA session itself. This struct is
* duplicated for each RA / TID.
-
+ *
* @rate_n_flags: Rate at which Tx was attempted. Holds the data between the
* Tx response (REPLY_TX), and the block ack notification
* (REPLY_COMPRESSED_BA).
@@ -152,9 +152,9 @@ struct iwl_ht_agg {
/**
* struct iwl_tid_data - one for each RA / TID
-
+ *
* This structs holds the states for each RA / TID.
-
+ *
* @seq_number: the next WiFi sequence number to use
* @next_reclaimed: the WiFi sequence number of the next packet to be acked.
* This is basically (last acked packet++).
@@ -195,7 +195,7 @@ struct iwl_station_priv {
u8 sta_id;
};
-/**
+/*
* struct iwl_vif_priv - driver's private per-interface information
*
* When mac80211 allocates a virtual interface, it can allocate
@@ -529,6 +529,7 @@ enum iwl_scan_type {
* relevant for 1000, 6000 and up
* @struct iwl_sensitivity_ranges: range of sensitivity values
* @use_rts_for_aggregation: use rts/cts protection for HT traffic
+ * @sens: sensitivity ranges pointer
*/
struct iwl_hw_params {
u8 tx_chains_num;
@@ -547,6 +548,7 @@ struct iwl_hw_params {
* @bt_prio_boost: default bt priority boost value
* @agg_time_limit: maximum number of uSec in aggregation
* @bt_sco_disable: uCode should not response to BT in SCO/ESCO mode
+ * @bt_session_2: indicates version 2 of the BT command is used
*/
struct iwl_dvm_bt_params {
bool advanced_bt_coexist;
diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
index b1939ff275b5..5f3d5b15f727 100644
--- a/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
+++ b/drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c
@@ -2,7 +2,7 @@
/******************************************************************************
*
* Copyright(c) 2003 - 2014 Intel Corporation. All rights reserved.
- * Copyright (C) 2018 - 2019, 2022 Intel Corporation
+ * Copyright(C) 2018 - 2019, 2022 - 2023 Intel Corporation
*
* Portions of this file are derived from the ipw3945 project, as well
* as portions of the ieee80211 subsystem header files.
@@ -1001,7 +1001,7 @@ static void iwlagn_mac_channel_switch(struct ieee80211_hw *hw,
if (priv->lib->set_channel_switch(priv, ch_switch)) {
clear_bit(STATUS_CHANNEL_SWITCH_PENDING, &priv->status);
priv->switch_channel = 0;
- ieee80211_chswitch_done(ctx->vif, false);
+ ieee80211_chswitch_done(ctx->vif, false, 0);
}
out:
@@ -1024,7 +1024,7 @@ void iwl_chswitch_done(struct iwl_priv *priv, bool is_success)
return;
if (ctx->vif)
- ieee80211_chswitch_done(ctx->vif, is_success);
+ ieee80211_chswitch_done(ctx->vif, is_success, 0);
}
static void iwlagn_configure_filter(struct ieee80211_hw *hw,
diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/main.c b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
index a873be109f43..8774dd7b921e 100644
--- a/drivers/net/wireless/intel/iwlwifi/dvm/main.c
+++ b/drivers/net/wireless/intel/iwlwifi/dvm/main.c
@@ -1464,7 +1464,7 @@ static struct iwl_op_mode *iwl_op_mode_dvm_start(struct iwl_trans *trans,
snprintf(priv->hw->wiphy->fw_version,
sizeof(priv->hw->wiphy->fw_version),
- "%s", fw->fw_version);
+ "%.31s", fw->fw_version);
priv->new_scan_threshold_behaviour =
!!(ucode_flags & IWL_UCODE_TLV_FLAGS_NEWSCAN);
diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/rs.h b/drivers/net/wireless/intel/iwlwifi/dvm/rs.h
index 0b47f1993c5d..100cb932c6b8 100644
--- a/drivers/net/wireless/intel/iwlwifi/dvm/rs.h
+++ b/drivers/net/wireless/intel/iwlwifi/dvm/rs.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/******************************************************************************
*
- * Copyright(c) 2003 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2003 - 2014, 2023 Intel Corporation. All rights reserved.
*****************************************************************************/
#ifndef __iwl_agn_rs_h__
@@ -269,7 +269,7 @@ struct iwl_rate_mcs_info {
char mcs[IWL_MAX_MCS_DISPLAY_SIZE];
};
-/**
+/*
* struct iwl_rate_scale_data -- tx success history for one rate
*/
struct iwl_rate_scale_data {
@@ -281,7 +281,7 @@ struct iwl_rate_scale_data {
unsigned long stamp;
};
-/**
+/*
* struct iwl_scale_tbl_info -- tx params and success history for all rates
*
* There are two of these in struct iwl_lq_sta,
@@ -311,7 +311,7 @@ struct iwl_traffic_load {
u8 head; /* start of the circular buffer */
};
-/**
+/*
* struct iwl_lq_sta -- driver's rate scaling private structure
*
* Pointer to this gets passed back and forth between driver and mac80211.
@@ -379,7 +379,7 @@ static inline u8 first_antenna(u8 mask)
void iwl_rs_rate_init(struct iwl_priv *priv, struct ieee80211_sta *sta,
u8 sta_id);
-/**
+/*
* iwl_rate_control_register - Register the rate control algorithm callbacks
*
* Since the rate control algorithm is hardware specific, there is no need
@@ -391,7 +391,7 @@ void iwl_rs_rate_init(struct iwl_priv *priv, struct ieee80211_sta *sta,
*/
int iwlagn_rate_control_register(void);
-/**
+/*
* iwl_rate_control_unregister - Unregister the rate control callbacks
*
* This should be called after calling ieee80211_unregister_hw, but before
diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/tt.h b/drivers/net/wireless/intel/iwlwifi/dvm/tt.h
index 7ace052fc78a..23dfcda0dd86 100644
--- a/drivers/net/wireless/intel/iwlwifi/dvm/tt.h
+++ b/drivers/net/wireless/intel/iwlwifi/dvm/tt.h
@@ -1,7 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/******************************************************************************
*
- * Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved.
+ * Copyright(c) 2007 - 2014, 2023 Intel Corporation. All rights reserved.
*
* Portions of this file are derived from the ipw3945 project, as well
* as portions of the ieee80211 subsystem header files.
@@ -72,14 +72,15 @@ struct iwl_tt_trans {
* when thermal throttling state != IWL_TI_0
* the tt_power_mode should set to different
* power mode based on the current tt state
- * @tt_previous_temperature: last measured temperature
- * @iwl_tt_restriction: ptr to restriction tbl, used by advance
+ * @tt_previous_temp: last measured temperature
+ * @restriction: ptr to restriction tbl, used by advance
* thermal throttling to determine how many tx/rx streams
* should be used in tt state; and can HT be enabled or not
- * @iwl_tt_trans: ptr to adv trans table, used by advance thermal throttling
+ * @transaction: ptr to adv trans table, used by advance thermal throttling
* state transaction
* @ct_kill_toggle: used to toggle the CSR bit when checking uCode temperature
* @ct_kill_exit_tm: timer to exit thermal kill
+ * @ct_kill_waiting_tm: timer to enter thermal kill
*/
struct iwl_tt_mgmt {
enum iwl_tt_state state;
diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/tx.c b/drivers/net/wireless/intel/iwlwifi/dvm/tx.c
index 60a7b61d59aa..111ed1873006 100644
--- a/drivers/net/wireless/intel/iwlwifi/dvm/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/dvm/tx.c
@@ -3,6 +3,7 @@
*
* Copyright(c) 2008 - 2014 Intel Corporation. All rights reserved.
* Copyright (C) 2019 Intel Corporation
+ * Copyright (C) 2023 Intel Corporation
*****************************************************************************/
#include <linux/kernel.h>
@@ -1169,7 +1170,7 @@ void iwlagn_rx_reply_tx(struct iwl_priv *priv, struct iwl_rx_cmd_buffer *rxb)
iwlagn_check_ratid_empty(priv, sta_id, tid);
}
- iwl_trans_reclaim(priv->trans, txq_id, ssn, &skbs);
+ iwl_trans_reclaim(priv->trans, txq_id, ssn, &skbs, false);
freed = 0;
@@ -1247,7 +1248,7 @@ void iwlagn_rx_reply_tx(struct iwl_priv *priv, struct iwl_rx_cmd_buffer *rxb)
while (!skb_queue_empty(&skbs)) {
skb = __skb_dequeue(&skbs);
- ieee80211_tx_status(priv->hw, skb);
+ ieee80211_tx_status_skb(priv->hw, skb);
}
}
@@ -1315,7 +1316,7 @@ void iwlagn_rx_reply_compressed_ba(struct iwl_priv *priv,
* block-ack window (we assume that they've been successfully
* transmitted ... if not, it's too late anyway). */
iwl_trans_reclaim(priv->trans, scd_flow, ba_resp_scd_ssn,
- &reclaimed_skbs);
+ &reclaimed_skbs, false);
IWL_DEBUG_TX_REPLY(priv, "REPLY_COMPRESSED_BA [%d] Received from %pM, "
"sta_id = %d\n",
@@ -1384,6 +1385,6 @@ void iwlagn_rx_reply_compressed_ba(struct iwl_priv *priv,
while (!skb_queue_empty(&reclaimed_skbs)) {
skb = __skb_dequeue(&reclaimed_skbs);
- ieee80211_tx_status(priv->hw, skb);
+ ieee80211_tx_status_skb(priv->hw, skb);
}
}
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
index b26f90e52256..b96f30d11644 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
@@ -1011,18 +1011,29 @@ __le32 iwl_acpi_get_lari_config_bitmap(struct iwl_fw_runtime *fwrt)
{
int ret;
u8 value;
+ u32 val;
__le32 config_bitmap = 0;
/*
- ** Evaluate func 'DSM_FUNC_ENABLE_INDONESIA_5G2'
+ * Evaluate func 'DSM_FUNC_ENABLE_INDONESIA_5G2'.
+ * Setting config_bitmap Indonesia bit is valid only for HR/JF.
*/
- ret = iwl_acpi_get_dsm_u8(fwrt->dev, 0,
- DSM_FUNC_ENABLE_INDONESIA_5G2,
- &iwl_guid, &value);
-
- if (!ret && value == DSM_VALUE_INDONESIA_ENABLE)
- config_bitmap |=
- cpu_to_le32(LARI_CONFIG_ENABLE_5G2_IN_INDONESIA_MSK);
+ switch (CSR_HW_RFID_TYPE(fwrt->trans->hw_rf_id)) {
+ case IWL_CFG_RF_TYPE_HR1:
+ case IWL_CFG_RF_TYPE_HR2:
+ case IWL_CFG_RF_TYPE_JF1:
+ case IWL_CFG_RF_TYPE_JF2:
+ ret = iwl_acpi_get_dsm_u8(fwrt->dev, 0,
+ DSM_FUNC_ENABLE_INDONESIA_5G2,
+ &iwl_guid, &value);
+
+ if (!ret && value == DSM_VALUE_INDONESIA_ENABLE)
+ config_bitmap |=
+ cpu_to_le32(LARI_CONFIG_ENABLE_5G2_IN_INDONESIA_MSK);
+ break;
+ default:
+ break;
+ }
/*
** Evaluate func 'DSM_FUNC_DISABLE_SRD'
@@ -1039,6 +1050,23 @@ __le32 iwl_acpi_get_lari_config_bitmap(struct iwl_fw_runtime *fwrt)
cpu_to_le32(LARI_CONFIG_CHANGE_ETSI_TO_DISABLED_MSK);
}
+ if (fw_has_capa(&fwrt->fw->ucode_capa,
+ IWL_UCODE_TLV_CAPA_CHINA_22_REG_SUPPORT)) {
+ /*
+ ** Evaluate func 'DSM_FUNC_REGULATORY_CONFIG'
+ */
+ ret = iwl_acpi_get_dsm_u32(fwrt->dev, 0,
+ DSM_FUNC_REGULATORY_CONFIG,
+ &iwl_guid, &val);
+ /*
+ * Enable China 2022 regulatory support if the BIOS object does not
+ * exist or if it is enabled in BIOS.
+ */
+ if (ret < 0 || val & DSM_MASK_CHINA_22_REG)
+ config_bitmap |=
+ cpu_to_le32(LARI_CONFIG_ENABLE_CHINA_22_REG_SUPPORT_MSK);
+ }
+
return config_bitmap;
}
IWL_EXPORT_SYMBOL(iwl_acpi_get_lari_config_bitmap);
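Illustrative aside (not from the patch): the China-2022 block above follows the driver's usual DSM pattern, i.e. query a BIOS _DSM function and OR the matching LARI bit into config_bitmap. A hedged sketch of the same shape, with hypothetical function/bit names:

	/* DSM_FUNC_EXAMPLE / DSM_MASK_EXAMPLE / LARI_CONFIG_EXAMPLE_MSK are made up */
	ret = iwl_acpi_get_dsm_u32(fwrt->dev, 0, DSM_FUNC_EXAMPLE,
				   &iwl_guid, &val);
	if (!ret && (val & DSM_MASK_EXAMPLE))
		config_bitmap |= cpu_to_le32(LARI_CONFIG_EXAMPLE_MSK);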
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.h b/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
index c36c62d6414d..e9277f6f3582 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.h
@@ -134,10 +134,12 @@ enum iwl_dsm_funcs_rev_0 {
DSM_FUNC_DISABLE_SRD = 1,
DSM_FUNC_ENABLE_INDONESIA_5G2 = 2,
DSM_FUNC_ENABLE_6E = 3,
+ DSM_FUNC_REGULATORY_CONFIG = 4,
DSM_FUNC_11AX_ENABLEMENT = 6,
DSM_FUNC_ENABLE_UNII4_CHAN = 7,
DSM_FUNC_ACTIVATE_CHANNEL = 8,
- DSM_FUNC_FORCE_DISABLE_CHANNELS = 9
+ DSM_FUNC_FORCE_DISABLE_CHANNELS = 9,
+ DSM_FUNC_ENERGY_DETECTION_THRESHOLD = 10,
};
enum iwl_dsm_values_srd {
@@ -164,6 +166,10 @@ enum iwl_dsm_values_rfi {
DSM_VALUE_RFI_MAX
};
+enum iwl_dsm_masks_reg {
+ DSM_MASK_CHINA_22_REG = BIT(2)
+};
+
#ifdef CONFIG_ACPI
struct iwl_fw_runtime;
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h b/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
index 13cb0d53a1a3..7544c4cb1a30 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/commands.h
@@ -30,6 +30,8 @@
* @REGULATORY_AND_NVM_GROUP: regulatory/NVM group, uses command IDs from
* &enum iwl_regulatory_and_nvm_subcmd_ids
* @DEBUG_GROUP: Debug group, uses command IDs from &enum iwl_debug_cmds
+ * @STATISTICS_GROUP: Statistics group, uses command IDs from
+ * &enum iwl_statistics_subcmd_ids
*/
enum iwl_mvm_command_groups {
LEGACY_GROUP = 0x0,
@@ -44,6 +46,7 @@ enum iwl_mvm_command_groups {
PROT_OFFLOAD_GROUP = 0xb,
REGULATORY_AND_NVM_GROUP = 0xc,
DEBUG_GROUP = 0xf,
+ STATISTICS_GROUP = 0x10,
};
/**
@@ -617,9 +620,36 @@ enum iwl_system_subcmd_ids {
SYSTEM_FEATURES_CONTROL_CMD = 0xd,
/**
+ * @SYSTEM_STATISTICS_CMD: &struct iwl_system_statistics_cmd
+ */
+ SYSTEM_STATISTICS_CMD = 0xf,
+
+ /**
+ * @SYSTEM_STATISTICS_END_NOTIF: &struct iwl_system_statistics_end_notif
+ */
+ SYSTEM_STATISTICS_END_NOTIF = 0xfd,
+
+ /**
* @RFI_DEACTIVATE_NOTIF: &struct iwl_rfi_deactivate_notif
*/
RFI_DEACTIVATE_NOTIF = 0xff,
};
+/**
+ * enum iwl_statistics_subcmd_ids - Statistics group command IDs
+ */
+enum iwl_statistics_subcmd_ids {
+ /**
+ * @STATISTICS_OPER_NOTIF: Notification about operational
+ * statistics &struct iwl_system_statistics_notif_oper
+ */
+ STATISTICS_OPER_NOTIF = 0x0,
+
+ /**
+ * @STATISTICS_OPER_PART1_NOTIF: Notification about operational part1
+ * statistics &struct iwl_system_statistics_part1_notif_oper
+ */
+ STATISTICS_OPER_PART1_NOTIF = 0x1,
+};
+
#endif /* __iwl_fw_api_commands_h__ */
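Editorial aside, not part of the patch: subcommands in the new STATISTICS_GROUP are addressed as wide command IDs, with the group in the high byte and the subcommand opcode in the low byte (the driver has a WIDE_ID() helper for this; the exact macro is assumed here). A minimal sketch:

/* sketch: wide command ID = (group << 8) | subcommand */
#define EXAMPLE_WIDE_ID(grp, opcode)	(((grp) << 8) | (opcode))

/* e.g. the operational statistics notification added above: (0x10 << 8) | 0x0 */
static const u16 stats_oper_id =
	EXAMPLE_WIDE_ID(STATISTICS_GROUP, STATISTICS_OPER_NOTIF);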
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h b/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h
index 72d461c47323..ea99d41040d2 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/d3.h
@@ -397,6 +397,8 @@ struct iwl_wowlan_config_cmd {
#define WOWLAN_GTK_KEYS_NUM 2
#define WOWLAN_IGTK_KEYS_NUM 2
#define WOWLAN_IGTK_MIN_INDEX 4
+#define WOWLAN_BIGTK_KEYS_NUM 2
+#define WOWLAN_BIGTK_MIN_INDEX 6
/*
* WOWLAN_TSC_RSC_PARAMS
@@ -621,9 +623,10 @@ struct iwl_wowlan_gtk_status_v3 {
* @ipn: the IGTK packet number (replay counter)
* @key_len: IGTK length, if set to 0, the key is not available
* @key_flags: information about the key:
- * bits[0]: key index assigned by the AP (0: index 4, 1: index 5)
- * bits[1:5]: IGTK index of the key in the internal DB
- * bit[6]: Set iff this is the currently used IGTK
+ * bits[0]: key index assigned by the AP (0: index 4, 1: index 5)
+ * (0: index 6, 1: index 7 with bigtk)
+ * bits[1:5]: IGTK index of the key in the internal DB
+ * bit[6]: Set iff this is the currently used IGTK
*/
struct iwl_wowlan_igtk_status {
u8 key[WOWLAN_KEY_MAX_SIZE];
@@ -808,9 +811,43 @@ struct iwl_wowlan_info_notif_v1 {
} __packed; /* WOWLAN_INFO_NTFY_API_S_VER_1 */
/**
+ * struct iwl_wowlan_info_notif_v2 - WoWLAN information notification
+ * @gtk: GTK data
+ * @igtk: IGTK data
+ * @replay_ctr: GTK rekey replay counter
+ * @pattern_number: number of the matched patterns
+ * @reserved1: reserved
+ * @qos_seq_ctr: QoS sequence counters to use next
+ * @wakeup_reasons: wakeup reasons, see &enum iwl_wowlan_wakeup_reason
+ * @num_of_gtk_rekeys: number of GTK rekeys
+ * @transmitted_ndps: number of transmitted neighbor discovery packets
+ * @received_beacons: number of received beacons
+ * @tid_tear_down: bit mask of tids whose BA sessions were closed
+ * in suspend state
+ * @station_id: station id
+ * @reserved2: reserved
+ */
+struct iwl_wowlan_info_notif_v2 {
+ struct iwl_wowlan_gtk_status_v3 gtk[WOWLAN_GTK_KEYS_NUM];
+ struct iwl_wowlan_igtk_status igtk[WOWLAN_IGTK_KEYS_NUM];
+ __le64 replay_ctr;
+ __le16 pattern_number;
+ __le16 reserved1;
+ __le16 qos_seq_ctr[8];
+ __le32 wakeup_reasons;
+ __le32 num_of_gtk_rekeys;
+ __le32 transmitted_ndps;
+ __le32 received_beacons;
+ u8 tid_tear_down;
+ u8 station_id;
+ u8 reserved2[2];
+} __packed; /* WOWLAN_INFO_NTFY_API_S_VER_2 */
+
+/**
* struct iwl_wowlan_info_notif - WoWLAN information notification
* @gtk: GTK data
* @igtk: IGTK data
+ * @bigtk: BIGTK data
* @replay_ctr: GTK rekey replay counter
* @pattern_number: number of the matched patterns
* @reserved1: reserved
@@ -827,6 +864,7 @@ struct iwl_wowlan_info_notif_v1 {
struct iwl_wowlan_info_notif {
struct iwl_wowlan_gtk_status_v3 gtk[WOWLAN_GTK_KEYS_NUM];
struct iwl_wowlan_igtk_status igtk[WOWLAN_IGTK_KEYS_NUM];
+ struct iwl_wowlan_igtk_status bigtk[WOWLAN_BIGTK_KEYS_NUM];
__le64 replay_ctr;
__le16 pattern_number;
__le16 reserved1;
@@ -838,7 +876,7 @@ struct iwl_wowlan_info_notif {
u8 tid_tear_down;
u8 station_id;
u8 reserved2[2];
-} __packed; /* WOWLAN_INFO_NTFY_API_S_VER_2 */
+} __packed; /* WOWLAN_INFO_NTFY_API_S_VER_3 */
/**
* struct iwl_wowlan_wake_pkt_notif - WoWLAN wake packet notification
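Editorial aside (not from the patch): WOWLAN_INFO_NOTIFICATION now has three layouts (v1, v2, and the BIGTK-capable v3 above), so a consumer has to pick the struct by the notification version negotiated with the firmware. A hedged sketch of that dispatch; parse_v1/parse_v2/parse_v3 and notif_ver are hypothetical:

static void example_handle_wowlan_info(const void *data, unsigned int notif_ver)
{
	switch (notif_ver) {
	case 1:
		parse_v1((const struct iwl_wowlan_info_notif_v1 *)data);
		break;
	case 2:
		parse_v2((const struct iwl_wowlan_info_notif_v2 *)data);
		break;
	default:
		/* v3 and later also carry bigtk[WOWLAN_BIGTK_KEYS_NUM] */
		parse_v3((const struct iwl_wowlan_info_notif *)data);
		break;
	}
}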
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
index ba538d70985f..394747deb269 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/dbg-tlv.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2018-2022 Intel Corporation
+ * Copyright (C) 2018-2023 Intel Corporation
*/
#ifndef __iwl_fw_dbg_tlv_h__
#define __iwl_fw_dbg_tlv_h__
@@ -13,6 +13,7 @@
#define IWL_FW_INI_DOMAIN_ALWAYS_ON 0
#define IWL_FW_INI_REGION_ID_MASK GENMASK(15, 0)
#define IWL_FW_INI_REGION_DUMP_POLICY_MASK GENMASK(31, 16)
+#define IWL_FW_INI_PRESET_DISABLE 0xff
/**
* struct iwl_fw_ini_hcmd
@@ -42,6 +43,30 @@ struct iwl_fw_ini_header {
} __packed; /* FW_TLV_DEBUG_HEADER_S_VER_1 */
/**
+ * struct iwl_fw_ini_addr_size - Base address and size that defines
+ * a chunk of memory
+ *
+ * @addr: the base address (fixed size - 4 bytes)
+ * @size: the size to read
+ */
+struct iwl_fw_ini_addr_size {
+ __le32 addr;
+ __le32 size;
+} __packed; /* FW_TLV_DEBUG_ADDR_SIZE_VER_1 */
+
+/**
+ * struct iwl_fw_ini_region_dev_addr_range - Configuration to read
+ * device address range
+ *
+ * @offset: offset to add to the base address of each chunk
+ * The addrs[] array will be treated as an array of &iwl_fw_ini_addr_size -
+ * an array of (addr, size) pairs.
+ */
+struct iwl_fw_ini_region_dev_addr_range {
+ __le32 offset;
+} __packed; /* FW_TLV_DEBUG_DEVICE_ADDR_RANGE_API_S_VER_1 */
+
+/**
* struct iwl_fw_ini_region_dev_addr - Configuration to read device addresses
*
* @size: size of each memory chunk
@@ -134,6 +159,10 @@ struct iwl_fw_ini_region_internal_buffer {
* &IWL_FW_INI_REGION_PAGING, &IWL_FW_INI_REGION_CSR,
* &IWL_FW_INI_REGION_DRAM_IMR and &IWL_FW_INI_REGION_PCI_IOSF_CONFIG
* &IWL_FW_INI_REGION_DBGI_SRAM, &FW_TLV_DEBUG_REGION_TYPE_DBGI_SRAM,
+ * &IWL_FW_INI_REGION_PERIPHERY_SNPS_DPHYIP,
+ * @dev_addr_range: device address range configuration. Used by
+ * &IWL_FW_INI_REGION_PERIPHERY_MAC_RANGE and
+ * &IWL_FW_INI_REGION_PERIPHERY_PHY_RANGE
* @fifos: fifos configuration. Used by &IWL_FW_INI_REGION_TXF and
* &IWL_FW_INI_REGION_RXF
* @err_table: error table configuration. Used by
@@ -156,6 +185,7 @@ struct iwl_fw_ini_region_tlv {
u8 name[IWL_FW_INI_MAX_NAME];
union {
struct iwl_fw_ini_region_dev_addr dev_addr;
+ struct iwl_fw_ini_region_dev_addr_range dev_addr_range;
struct iwl_fw_ini_region_fifos fifos;
struct iwl_fw_ini_region_err_table err_table;
struct iwl_fw_ini_region_internal_buffer internal_buffer;
@@ -361,6 +391,9 @@ enum iwl_fw_ini_buffer_location {
* @IWL_FW_INI_REGION_PCI_IOSF_CONFIG: PCI/IOSF config
* @IWL_FW_INI_REGION_SPECIAL_DEVICE_MEMORY: special device memory
* @IWL_FW_INI_REGION_DBGI_SRAM: periphery registers of DBGI SRAM
+ * @IWL_FW_INI_REGION_PERIPHERY_MAC_RANGE: a range of periphery registers of MAC
+ * @IWL_FW_INI_REGION_PERIPHERY_PHY_RANGE: a range of periphery registers of PHY
+ * @IWL_FW_INI_REGION_PERIPHERY_SNPS_DPHYIP: periphery registers of SNPS DPHYIP
* @IWL_FW_INI_REGION_NUM: number of region types
*/
enum iwl_fw_ini_region_type {
@@ -383,6 +416,9 @@ enum iwl_fw_ini_region_type {
IWL_FW_INI_REGION_PCI_IOSF_CONFIG,
IWL_FW_INI_REGION_SPECIAL_DEVICE_MEMORY,
IWL_FW_INI_REGION_DBGI_SRAM,
+ IWL_FW_INI_REGION_PERIPHERY_MAC_RANGE,
+ IWL_FW_INI_REGION_PERIPHERY_PHY_RANGE,
+ IWL_FW_INI_REGION_PERIPHERY_SNPS_DPHYIP,
IWL_FW_INI_REGION_NUM
}; /* FW_TLV_DEBUG_REGION_TYPE_API_E */
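To make the new *_RANGE regions concrete (a sketch, not code from the patch): for those region types the TLV's addrs[] payload is reinterpreted as (addr, size) pairs, and the offset from struct iwl_fw_ini_region_dev_addr_range is added to each base address. The pair count (n_pairs) would be derived from the TLV length:

static void example_walk_range_region(const struct iwl_fw_ini_region_tlv *reg,
				      int n_pairs)
{
	/* for *_RANGE regions, addrs[] holds (addr, size) pairs */
	const struct iwl_fw_ini_addr_size *pairs = (const void *)reg->addrs;
	u32 offset = le32_to_cpu(reg->dev_addr_range.offset);
	int i;

	for (i = 0; i < n_pairs; i++) {
		u32 base = offset + le32_to_cpu(pairs[i].addr);
		u32 size = le32_to_cpu(pairs[i].size);

		/* read 'size' bytes of periphery registers starting at 'base' */
		pr_debug("range %d: base 0x%x, %u bytes\n", i, base, size);
	}
}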
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h b/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h
index 90ce8d9b6ad3..7b18e098b125 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/debug.h
@@ -522,4 +522,26 @@ enum iwl_mvm_tas_statically_disabled_reason {
TAS_DISABLED_REASON_MAX,
}; /*_TAS_STATICALLY_DISABLED_REASON_E*/
+/**
+ * enum iwl_fw_dbg_config_cmd_type - types of FW debug config command
+ * @DEBUG_TOKEN_CONFIG_TYPE: token config type
+ */
+enum iwl_fw_dbg_config_cmd_type {
+ DEBUG_TOKEN_CONFIG_TYPE = 0x2B,
+}; /* LDBG_CFG_CMD_TYPE_API_E_VER_1 */
+
+/* this token disables debug asserts in the firmware */
+#define IWL_FW_DBG_CONFIG_TOKEN 0x00011301
+
+/**
+ * struct iwl_fw_dbg_config_cmd - configure FW debug
+ *
+ * @type: according to &enum iwl_fw_dbg_config_cmd_type
+ * @conf: FW configuration
+ */
+struct iwl_fw_dbg_config_cmd {
+ __le32 type;
+ __le32 conf;
+} __packed; /* LDBG_CFG_CMD_API_S_VER_7 */
+
#endif /* __iwl_fw_api_debug_h__ */
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/mac-cfg.h b/drivers/net/wireless/intel/iwlwifi/fw/api/mac-cfg.h
index 184db5a6f06f..f15e6d64c298 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/mac-cfg.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/mac-cfg.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018-2019, 2021-2022 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2019, 2021-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@@ -58,6 +58,14 @@ enum iwl_mac_conf_subcmd_ids {
*/
STA_DISABLE_TX_CMD = 0xD,
/**
+ * @ROC_CMD: &struct iwl_roc_req
+ */
+ ROC_CMD = 0xE,
+ /**
+ * @ROC_NOTIF: &struct iwl_roc_notif
+ */
+ ROC_NOTIF = 0xF8,
+ /**
* @SESSION_PROTECTION_NOTIF: &struct iwl_mvm_session_prot_notif
*/
SESSION_PROTECTION_NOTIF = 0xFB,
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h b/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
index 28bfabb399b2..dfe0bebabc81 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/nvm-reg.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018-2022 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@@ -21,8 +21,9 @@ enum iwl_regulatory_and_nvm_subcmd_ids {
* &struct iwl_lari_config_change_cmd_v2,
* &struct iwl_lari_config_change_cmd_v3,
* &struct iwl_lari_config_change_cmd_v4,
- * &struct iwl_lari_config_change_cmd_v5 or
- * &struct iwl_lari_config_change_cmd_v6
+ * &struct iwl_lari_config_change_cmd_v5,
+ * &struct iwl_lari_config_change_cmd_v6 or
+ * &struct iwl_lari_config_change_cmd_v7
*/
LARI_CONFIG_CHANGE = 0x1,
@@ -44,6 +45,11 @@ enum iwl_regulatory_and_nvm_subcmd_ids {
SAR_OFFSET_MAPPING_TABLE_CMD = 0x4,
/**
+ * @UATS_TABLE_CMD: &struct iwl_uats_table_cmd
+ */
+ UATS_TABLE_CMD = 0x5,
+
+ /**
* @PNVM_INIT_COMPLETE_NTFY: &struct iwl_pnvm_init_complete_ntfy
*/
PNVM_INIT_COMPLETE_NTFY = 0xFE,
@@ -480,18 +486,20 @@ union iwl_tas_config_cmd {
struct iwl_tas_config_cmd_v4 v4;
};
/**
- * enum iwl_lari_configs - bit masks for the various LARI config operations
+ * enum iwl_lari_config_masks - bit masks for the various LARI config operations
* @LARI_CONFIG_DISABLE_11AC_UKRAINE_MSK: disable 11ac in ukraine
* @LARI_CONFIG_CHANGE_ETSI_TO_PASSIVE_MSK: ETSI 5.8GHz SRD passive scan
* @LARI_CONFIG_CHANGE_ETSI_TO_DISABLED_MSK: ETSI 5.8GHz SRD disabled
* @LARI_CONFIG_ENABLE_5G2_IN_INDONESIA_MSK: enable 5.15/5.35GHz bands in
* Indonesia
+ * @LARI_CONFIG_ENABLE_CHINA_22_REG_SUPPORT_MSK: enable 2022 China regulatory support
*/
enum iwl_lari_config_masks {
LARI_CONFIG_DISABLE_11AC_UKRAINE_MSK = BIT(0),
LARI_CONFIG_CHANGE_ETSI_TO_PASSIVE_MSK = BIT(1),
LARI_CONFIG_CHANGE_ETSI_TO_DISABLED_MSK = BIT(2),
LARI_CONFIG_ENABLE_5G2_IN_INDONESIA_MSK = BIT(3),
+ LARI_CONFIG_ENABLE_CHINA_22_REG_SUPPORT_MSK = BIT(7),
};
#define IWL_11AX_UKRAINE_MASK 3
@@ -601,6 +609,45 @@ struct iwl_lari_config_change_cmd_v6 {
} __packed; /* LARI_CHANGE_CONF_CMD_S_VER_6 */
/**
+ * struct iwl_lari_config_change_cmd_v7 - change LARI configuration
+ * This structure is also used for LARI cmd version 8.
+ * @config_bitmap: Bitmap of the config commands. Each bit will trigger a
+ * different predefined FW config operation.
+ * @oem_uhb_allow_bitmap: Bitmap of UHB enabled MCC sets.
+ * @oem_11ax_allow_bitmap: Bitmap of 11ax allowed MCCs. There are two bits
+ * per country, one to indicate whether to override and the other to
+ * indicate the value to use.
+ * @oem_unii4_allow_bitmap: Bitmap of unii4 allowed MCCs. There are two bits
+ * per country, one to indicate whether to override and the other to
+ * indicate allow/disallow unii4 channels.
+ * @chan_state_active_bitmap: Bitmap to enable different bands per country
+ * or region.
+ * Each bit represents a country or region, and a band to activate
+ * according to the BIOS definitions.
+ * For LARI cmd version 7 - bits 0:3 are supported.
+ * For LARI cmd version 8 - bits 0:4 are supported.
+ * @force_disable_channels_bitmap: Bitmap of disabled bands/channels.
+ * Each bit represents a set of channels in a specific band that should be
+ * disabled.
+ * @edt_bitmap: Bitmap of energy detection threshold table.
+ * Disable/enable the EDT optimization method per band.
+ */
+struct iwl_lari_config_change_cmd_v7 {
+ __le32 config_bitmap;
+ __le32 oem_uhb_allow_bitmap;
+ __le32 oem_11ax_allow_bitmap;
+ __le32 oem_unii4_allow_bitmap;
+ __le32 chan_state_active_bitmap;
+ __le32 force_disable_channels_bitmap;
+ __le32 edt_bitmap;
+} __packed;
+/* LARI_CHANGE_CONF_CMD_S_VER_7 */
+/* LARI_CHANGE_CONF_CMD_S_VER_8 */
+
+/* Activate UNII-1 (5.2GHz) for World Wide */
+#define ACTIVATE_5G2_IN_WW_MASK BIT(4)
+
+/**
* struct iwl_pnvm_init_complete_ntfy - PNVM initialization complete
* @status: PNVM image loading status
*/
@@ -608,4 +655,17 @@ struct iwl_pnvm_init_complete_ntfy {
__le32 status;
} __packed; /* PNVM_INIT_COMPLETE_NTFY_S_VER_1 */
+#define UATS_TABLE_ROW_SIZE 26
+#define UATS_TABLE_COL_SIZE 13
+
+/**
+ * struct iwl_uats_table_cmd - struct for UATS_TABLE_CMD
+ * @offset_map: mapping a mcc to UHB AP type support (UATS) allowed
+ * @reserved: reserved
+ */
+struct iwl_uats_table_cmd {
+ u8 offset_map[UATS_TABLE_ROW_SIZE][UATS_TABLE_COL_SIZE];
+ __le16 reserved;
+} __packed; /* UATS_TABLE_CMD_S_VER_1 */
+
#endif /* __iwl_fw_api_nvm_reg_h__ */
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/offload.h b/drivers/net/wireless/intel/iwlwifi/fw/api/offload.h
index 898bf351f6e4..2d2b9c8c36ea 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/offload.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/offload.h
@@ -3,7 +3,7 @@
* Copyright (C) 2012-2014 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
- * Copyright (C) 2021-2022 Intel Corporation
+ * Copyright (C) 2021-2023 Intel Corporation
*/
#ifndef __iwl_fw_api_offload_h__
#define __iwl_fw_api_offload_h__
@@ -18,7 +18,9 @@ enum iwl_prot_offload_subcmd_ids {
WOWLAN_WAKE_PKT_NOTIFICATION = 0xFC,
/**
- * @WOWLAN_INFO_NOTIFICATION: Notification in &struct iwl_wowlan_info_notif
+ * @WOWLAN_INFO_NOTIFICATION: Notification in
+ * &struct iwl_wowlan_info_notif_v1, &struct iwl_wowlan_info_notif_v2,
+ * or &struct iwl_wowlan_info_notif
*/
WOWLAN_INFO_NOTIFICATION = 0xFD,
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h b/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h
index 8fe42cff1102..306ed88de463 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/phy-ctxt.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018, 2020-2022 Intel Corporation
+ * Copyright (C) 2012-2014, 2018, 2020-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@@ -25,8 +25,8 @@
* For legacy set bit means upper channel, otherwise lower.
* For VHT - bit-2 marks if the control is lower/upper relative to center-freq
* bits-1:0 mark the distance from the center freq. for 20Mhz, offset is 0.
- * center_freq
* For EHT - bit-3 is used for extended distance
+ * center_freq
* |
* 40Mhz |____|____|
* 80Mhz |____|____|____|____|
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/power.h b/drivers/net/wireless/intel/iwlwifi/fw/api/power.h
index 85d89f559f6c..040d83fa5424 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/power.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/power.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018-2022 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2014 Intel Mobile Communications GmbH
* Copyright (C) 2015-2017 Intel Deutschland GmbH
*/
@@ -144,6 +144,8 @@ struct iwl_powertable_cmd {
* receiver and transmitter. '0' - does not allow.
* @DEVICE_POWER_FLAGS_ALLOW_MEM_RETENTION_MSK:
* Device Retention indication, '1' indicate retention is enabled.
+ * @DEVICE_POWER_FLAGS_NO_SLEEP_TILL_D3_MSK:
+ * Prevent power save until entering d3 is completed.
* @DEVICE_POWER_FLAGS_32K_CLK_VALID_MSK:
* 32Khz external slow clock valid indication, '1' indicates clock is
* valid.
@@ -151,6 +153,7 @@ struct iwl_powertable_cmd {
enum iwl_device_power_flags {
DEVICE_POWER_FLAGS_POWER_SAVE_ENA_MSK = BIT(0),
DEVICE_POWER_FLAGS_ALLOW_MEM_RETENTION_MSK = BIT(1),
+ DEVICE_POWER_FLAGS_NO_SLEEP_TILL_D3_MSK = BIT(7),
DEVICE_POWER_FLAGS_32K_CLK_VALID_MSK = BIT(12),
};
@@ -162,7 +165,7 @@ enum iwl_device_power_flags {
* @reserved: reserved (padding)
*/
struct iwl_device_power_cmd {
- /* PM_POWER_TABLE_CMD_API_S_VER_6 */
+ /* PM_POWER_TABLE_CMD_API_S_VER_7 */
__le16 flags;
__le16 reserved;
} __packed;
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/rfi.h b/drivers/net/wireless/intel/iwlwifi/fw/api/rfi.h
index 1a84a4081e7c..34d664023473 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/rfi.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/rfi.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2020-2021 Intel Corporation
+ * Copyright (C) 2020-2021, 2023 Intel Corporation
*/
#ifndef __iwl_fw_api_rfi_h__
#define __iwl_fw_api_rfi_h__
@@ -25,8 +25,9 @@ struct iwl_rfi_lut_entry {
/**
* struct iwl_rfi_config_cmd - RFI configuration table
*
- * @entry: a table can have 24 frequency/channel mappings
+ * @table: a table can have 24 frequency/channel mappings
* @oem: specifies if this is the default table or set by OEM
+ * @reserved: (reserved/padding)
*/
struct iwl_rfi_config_cmd {
struct iwl_rfi_lut_entry table[IWL_RFI_LUT_SIZE];
@@ -35,7 +36,7 @@ struct iwl_rfi_config_cmd {
} __packed; /* RFI_CONFIG_CMD_API_S_VER_1 */
/**
- * iwl_rfi_freq_table_status - status of the frequency table query
+ * enum iwl_rfi_freq_table_status - status of the frequency table query
* @RFI_FREQ_TABLE_OK: can be used
* @RFI_FREQ_TABLE_DVFS_NOT_READY: DVFS is not ready yet, should try later
* @RFI_FREQ_TABLE_DISABLED: the feature is disabled in FW
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h b/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h
index 25e2e23dce3d..e71f29d0c694 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/rx.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018-2022 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2015-2017 Intel Deutschland GmbH
*/
@@ -371,7 +371,7 @@ enum iwl_rx_phy_eht_data1 {
IWL_RX_PHY_DATA1_EHT_RU_ALLOC_B1_B7 = 0x0000fe00,
};
-/* goes into Metadata DW 7 */
+/* goes into Metadata DW 7 (Qu) or 8 (So or higher) */
enum iwl_rx_phy_he_data2 {
/* info type: HE MU-EXT */
/* the a1/a2/... is what the PHY/firmware calls the values */
@@ -387,7 +387,7 @@ enum iwl_rx_phy_he_data2 {
IWL_RX_PHY_DATA2_HE_TB_EXT_SPTL_REUSE4 = 0x0000f000,
};
-/* goes into Metadata DW 8 */
+/* goes into Metadata DW 8 (Qu) or 7 (So or higher) */
enum iwl_rx_phy_he_data3 {
/* info type: HE MU-EXT */
IWL_RX_PHY_DATA3_HE_MU_EXT_CH1_RU1 = 0x000000ff, /* c1 */
@@ -408,10 +408,9 @@ enum iwl_rx_phy_he_he_data4 {
IWL_RX_PHY_DATA4_HE_MU_EXT_PREAMBLE_PUNC_TYPE_MASK = 0x0600,
};
-/* goes into Metadata DW 7 */
+/* goes into Metadata DW 8 (Qu has no EHT) */
enum iwl_rx_phy_eht_data2 {
/* info type: EHT-MU-EXT */
- /* OFDM_RX_VECTOR_COMMON_RU_ALLOC_0_OUT */
IWL_RX_PHY_DATA2_EHT_MU_EXT_RU_ALLOC_A1 = 0x000001ff,
IWL_RX_PHY_DATA2_EHT_MU_EXT_RU_ALLOC_A2 = 0x0003fe00,
IWL_RX_PHY_DATA2_EHT_MU_EXT_RU_ALLOC_B1 = 0x07fc0000,
@@ -420,11 +419,10 @@ enum iwl_rx_phy_eht_data2 {
IWL_RX_PHY_DATA2_EHT_TB_EXT_TRIG_SIGA1 = 0xffffffff,
};
-/* goes into Metadata DW 8 */
+/* goes into Metadata DW 7 (Qu has no EHT) */
enum iwl_rx_phy_eht_data3 {
+ /* note: low 8 bits cannot be used */
/* info type: EHT-MU-EXT */
- /* OFDM_RX_VECTOR_COMMON_RU_ALLOC_1_OUT */
- IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_B2 = 0x000001ff,
IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_C1 = 0x0003fe00,
IWL_RX_PHY_DATA3_EHT_MU_EXT_RU_ALLOC_C2 = 0x07fc0000,
};
@@ -432,10 +430,10 @@ enum iwl_rx_phy_eht_data3 {
/* goes into Metadata DW 4 */
enum iwl_rx_phy_eht_data4 {
/* info type: EHT-MU-EXT */
- /* OFDM_RX_VECTOR_COMMON_RU_ALLOC_2_OUT */
IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_D1 = 0x000001ff,
IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_D2 = 0x0003fe00,
IWL_RX_PHY_DATA4_EHT_MU_EXT_SIGB_MCS = 0x000c0000,
+ IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_B2 = 0x1ff00000,
};
/* goes into Metadata DW 16 */
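A small aside, not from the patch: with RU-allocation B2 moved out of the low bits of DW 7/8 (which can no longer be used) and into the upper bits of DW 4, a consumer extracts it with the new mask via the bitfield helpers. A hedged sketch:

#include <linux/bitfield.h>

/* data4 is the little-endian Metadata DW 4 value from the Rx descriptor */
static u16 example_eht_ru_alloc_b2(__le32 data4)
{
	return le32_get_bits(data4, IWL_RX_PHY_DATA4_EHT_MU_EXT_RU_ALLOC_B2);
}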
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/stats.h b/drivers/net/wireless/intel/iwlwifi/fw/api/stats.h
index 898e62326e6c..2271b19213fa 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/stats.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/stats.h
@@ -1,12 +1,13 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018, 2020 - 2021 Intel Corporation
+ * Copyright (C) 2012-2014, 2018, 2020 - 2021, 2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
#ifndef __iwl_fw_api_stats_h__
#define __iwl_fw_api_stats_h__
#include "mac.h"
+#include "mac-cfg.h"
struct mvm_statistics_dbg {
__le32 burst_check;
@@ -412,6 +413,49 @@ struct iwl_statistics_cmd {
#define MAX_BCAST_FILTER_NUM 8
/**
+ * enum iwl_statistics_notify_type_id - type_id used in system statistics
+ * command
+ * @IWL_STATS_NTFY_TYPE_ID_OPER: request legacy statistics
+ * @IWL_STATS_NTFY_TYPE_ID_OPER_PART1: request operational part1 statistics
+ * @IWL_STATS_NTFY_TYPE_ID_OPER_PART2: request operational part2 statistics
+ * @IWL_STATS_NTFY_TYPE_ID_OPER_PART3: request operational part3 statistics
+ * @IWL_STATS_NTFY_TYPE_ID_OPER_PART4: request operational part4 statistics
+ */
+enum iwl_statistics_notify_type_id {
+ IWL_STATS_NTFY_TYPE_ID_OPER = BIT(0),
+ IWL_STATS_NTFY_TYPE_ID_OPER_PART1 = BIT(1),
+ IWL_STATS_NTFY_TYPE_ID_OPER_PART2 = BIT(2),
+ IWL_STATS_NTFY_TYPE_ID_OPER_PART3 = BIT(3),
+ IWL_STATS_NTFY_TYPE_ID_OPER_PART4 = BIT(4),
+};
+
+/**
+ * enum iwl_statistics_cfg_flags - cfg_mask used in system statistics command
+ * @IWL_STATS_CFG_FLG_DISABLE_NTFY_MSK: 0 for enable, 1 for disable
+ * @IWL_STATS_CFG_FLG_ON_DEMAND_NTFY_MSK: 0 for periodic, 1 for on-demand
+ * @IWL_STATS_CFG_FLG_RESET_MSK: 0 to reset statistics after sending
+ * the notification, 1 to leave statistics unchanged after sending
+ * the notification
+ */
+enum iwl_statistics_cfg_flags {
+ IWL_STATS_CFG_FLG_DISABLE_NTFY_MSK = BIT(0),
+ IWL_STATS_CFG_FLG_ON_DEMAND_NTFY_MSK = BIT(1),
+ IWL_STATS_CFG_FLG_RESET_MSK = BIT(2),
+};
+
+/**
+ * struct iwl_system_statistics_cmd - system statistics command
+ * @cfg_mask: configuration mask, &enum iwl_statistics_cfg_flags
+ * @config_time_sec: time in sec for periodic notification
+ * @type_id_mask: type_id masks, &enum iwl_statistics_notify_type_id
+ */
+struct iwl_system_statistics_cmd {
+ __le32 cfg_mask;
+ __le32 config_time_sec;
+ __le32 type_id_mask;
+} __packed; /* STATISTICS_FW_CMD_API_S_VER_1 */
+
+/**
* enum iwl_fw_statistics_type
*
* @FW_STATISTICS_OPERATIONAL: operational statistics
@@ -447,7 +491,49 @@ struct iwl_statistics_ntfy_hdr {
}; /* STATISTICS_NTFY_HDR_API_S_VER_1 */
/**
- * struct iwl_statistics_ntfy_per_mac
+ * struct iwl_stats_ntfy_per_link
+ *
+ * @beacon_filter_average_energy: Average energy [-dBm] of the 2
+ * antennas.
+ * @air_time: air time
+ * @beacon_counter: all beacons (both filtered and not filtered)
+ * @beacon_average_energy: Average energy [-dBm] of all beacons
+ * (both filtered and not filtered)
+ * @beacon_rssi_a: beacon RSSI on antenna A
+ * @beacon_rssi_b: beacon RSSI on antenna B
+ * @rx_bytes: RX byte count
+ */
+struct iwl_stats_ntfy_per_link {
+ __le32 beacon_filter_average_energy;
+ __le32 air_time;
+ __le32 beacon_counter;
+ __le32 beacon_average_energy;
+ __le32 beacon_rssi_a;
+ __le32 beacon_rssi_b;
+ __le32 rx_bytes;
+} __packed; /* STATISTICS_NTFY_PER_LINK_API_S_VER_1 */
+
+/**
+ * struct iwl_stats_ntfy_part1_per_link
+ *
+ * @rx_time: rx time
+ * @tx_time: tx time
+ * @rx_action: action frames handled by FW
+ * @tx_action: action frames generated and transmitted by FW
+ * @cca_defers: cca defer count
+ * @beacon_filtered: filtered out beacons
+ */
+struct iwl_stats_ntfy_part1_per_link {
+ __le64 rx_time;
+ __le64 tx_time;
+ __le32 rx_action;
+ __le32 tx_action;
+ __le32 cca_defers;
+ __le32 beacon_filtered;
+} __packed; /* STATISTICS_FW_NTFY_OPERATIONAL_PART1_PER_LINK_API_S_VER_1 */
+
+/**
+ * struct iwl_stats_ntfy_per_mac
*
* @beacon_filter_average_energy: Average energy [-dBm] of the 2
* antennas.
@@ -459,7 +545,7 @@ struct iwl_statistics_ntfy_hdr {
* @beacon_rssi_b: beacon RSSI on antenna B
* @rx_bytes: RX byte count
*/
-struct iwl_statistics_ntfy_per_mac {
+struct iwl_stats_ntfy_per_mac {
__le32 beacon_filter_average_energy;
__le32 air_time;
__le32 beacon_counter;
@@ -470,7 +556,7 @@ struct iwl_statistics_ntfy_per_mac {
} __packed; /* STATISTICS_NTFY_PER_MAC_API_S_VER_1 */
#define IWL_STATS_MAX_BW_INDEX 5
-/** struct iwl_statistics_ntfy_per_phy
+/** struct iwl_stats_ntfy_per_phy
* @channel_load: channel load
* @channel_load_by_us: device contribution to MCLM
* @channel_load_not_by_us: other devices' contribution to MCLM
@@ -485,7 +571,7 @@ struct iwl_statistics_ntfy_per_mac {
* per channel BW. note BACK counted as 1
* @last_tx_ch_width_indx: last txed frame channel width index
*/
-struct iwl_statistics_ntfy_per_phy {
+struct iwl_stats_ntfy_per_phy {
__le32 channel_load;
__le32 channel_load_by_us;
__le32 channel_load_not_by_us;
@@ -499,23 +585,62 @@ struct iwl_statistics_ntfy_per_phy {
} __packed; /* STATISTICS_NTFY_PER_PHY_API_S_VER_1 */
/**
- * struct iwl_statistics_ntfy_per_sta
+ * struct iwl_stats_ntfy_per_sta
*
* @average_energy: in fact it is minus the energy..
*/
-struct iwl_statistics_ntfy_per_sta {
+struct iwl_stats_ntfy_per_sta {
__le32 average_energy;
} __packed; /* STATISTICS_NTFY_PER_STA_API_S_VER_1 */
-#define IWL_STATS_MAX_PHY_OPERTINAL 3
+#define IWL_STATS_MAX_PHY_OPERATIONAL 3
+#define IWL_STATS_MAX_FW_LINKS (IWL_MVM_FW_MAX_LINK_ID + 1)
+
+/**
+ * struct iwl_system_statistics_notif_oper
+ *
+ * @time_stamp: time when the notification is sent from firmware
+ * @per_link: per link statistics, &struct iwl_stats_ntfy_per_link
+ * @per_phy: per phy statistics, &struct iwl_stats_ntfy_per_phy
+ * @per_sta: per sta statistics, &struct iwl_stats_ntfy_per_sta
+ */
+struct iwl_system_statistics_notif_oper {
+ __le32 time_stamp;
+ struct iwl_stats_ntfy_per_link per_link[IWL_STATS_MAX_FW_LINKS];
+ struct iwl_stats_ntfy_per_phy per_phy[IWL_STATS_MAX_PHY_OPERATIONAL];
+ struct iwl_stats_ntfy_per_sta per_sta[IWL_MVM_STATION_COUNT_MAX];
+} __packed; /* STATISTICS_FW_NTFY_OPERATIONAL_API_S_VER_3 */
+
+/**
+ * struct iwl_system_statistics_part1_notif_oper
+ *
+ * @time_stamp: time when the notification is sent from firmware
+ * @per_link: per link statistics &struct iwl_stats_ntfy_part1_per_link
+ * @per_phy_crc_error_stats: per phy crc error statistics
+ */
+struct iwl_system_statistics_part1_notif_oper {
+ __le32 time_stamp;
+ struct iwl_stats_ntfy_part1_per_link per_link[IWL_STATS_MAX_FW_LINKS];
+ __le32 per_phy_crc_error_stats[IWL_STATS_MAX_PHY_OPERATIONAL];
+} __packed; /* STATISTICS_FW_NTFY_OPERATIONAL_PART1_API_S_VER_4 */
+
+/**
+ * struct iwl_system_statistics_end_notif
+ *
+ * @time_stamp: time when the notification is sent from firmware
+ */
+struct iwl_system_statistics_end_notif {
+ __le32 time_stamp;
+} __packed; /* STATISTICS_FW_NTFY_END_API_S_VER_1 */
+
/**
* struct iwl_statistics_operational_ntfy
*
* @hdr: general statistics header
* @flags: bitmap of possible notification structures
- * @per_mac_stats: per mac statistics, &struct iwl_statistics_ntfy_per_mac
- * @per_phy_stats: per phy statistics, &struct iwl_statistics_ntfy_per_phy
- * @per_sta_stats: per sta statistics, &struct iwl_statistics_ntfy_per_sta
+ * @per_mac: per mac statistics, &struct iwl_stats_ntfy_per_mac
+ * @per_phy: per phy statistics, &struct iwl_stats_ntfy_per_phy
+ * @per_sta: per sta statistics, &struct iwl_stats_ntfy_per_sta
* @rx_time: rx time
* @tx_time: usec the radio is transmitting.
* @on_time_rf: The total time in usec the RF is awake.
@@ -524,9 +649,9 @@ struct iwl_statistics_ntfy_per_sta {
struct iwl_statistics_operational_ntfy {
struct iwl_statistics_ntfy_hdr hdr;
__le32 flags;
- struct iwl_statistics_ntfy_per_mac per_mac_stats[MAC_INDEX_AUX];
- struct iwl_statistics_ntfy_per_phy per_phy_stats[IWL_STATS_MAX_PHY_OPERTINAL];
- struct iwl_statistics_ntfy_per_sta per_sta_stats[IWL_MVM_STATION_COUNT_MAX];
+ struct iwl_stats_ntfy_per_mac per_mac[MAC_INDEX_AUX];
+ struct iwl_stats_ntfy_per_phy per_phy[IWL_STATS_MAX_PHY_OPERATIONAL];
+ struct iwl_stats_ntfy_per_sta per_sta[IWL_MVM_STATION_COUNT_MAX];
__le64 rx_time;
__le64 tx_time;
__le64 on_time_rf;
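Purely illustrative (not in the patch): a host request for periodic operational statistics would fill struct iwl_system_statistics_cmd roughly as below; the period and the chosen type mask are made-up example values, not driver defaults. The command would then be sent on the SYSTEM group as SYSTEM_STATISTICS_CMD (0xf), per the enum added in commands.h above.

static void example_fill_stats_cmd(struct iwl_system_statistics_cmd *cmd)
{
	/* all cfg bits clear: notifications enabled, periodic, reset after send */
	cmd->cfg_mask = 0;
	cmd->config_time_sec = cpu_to_le32(5);	/* example period in seconds */
	cmd->type_id_mask = cpu_to_le32(IWL_STATS_NTFY_TYPE_ID_OPER |
					IWL_STATS_NTFY_TYPE_ID_OPER_PART1);
}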
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/time-event.h b/drivers/net/wireless/intel/iwlwifi/fw/api/time-event.h
index 7cc706731d70..2e15be71c957 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/time-event.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/time-event.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018-2020, 2022 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2020, 2022-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@@ -335,6 +335,63 @@ struct iwl_hs20_roc_res {
__le32 status;
} __packed; /* HOT_SPOT_RSP_API_S_VER_1 */
+/*
+ * Activity types for the ROC command
+ * @ROC_ACTIVITY_HOTSPOT: ROC for hs20 activity
+ * @ROC_ACTIVITY_P2P_DISC: ROC for p2p discoverability activity
+ * @ROC_ACTIVITY_P2P_TXRX: ROC for p2p action frames activity
+ */
+enum iwl_roc_activity {
+ ROC_ACTIVITY_HOTSPOT,
+ ROC_ACTIVITY_P2P_DISC,
+ ROC_ACTIVITY_P2P_TXRX,
+ ROC_NUM_ACTIVITIES
+}; /* ROC_ACTIVITY_API_E_VER_1 */
+
+/*
+ * ROC command
+ *
+ * Command requests the firmware to remain on a channel for a certain duration.
+ *
+ * ( MAC_CONF_GROUP 0x3, ROC_CMD 0xE )
+ *
+ * @action: action to perform, see &enum iwl_ctxt_action
+ * @activity: type of activity, see &enum iwl_roc_activity
+ * @sta_id: station id, resumed during "Remain On Channel" activity.
+ * @channel_info: &struct iwl_fw_channel_info
+ * @node_addr: node MAC address for Rx filtering
+ * @reserved: align to a dword
+ * @max_delay: max delay the ROC can start in TU
+ * @duration: remain on channel duration in TU
+ */
+struct iwl_roc_req {
+ __le32 action;
+ __le32 activity;
+ __le32 sta_id;
+ struct iwl_fw_channel_info channel_info;
+ u8 node_addr[ETH_ALEN];
+ __le16 reserved;
+ __le32 max_delay;
+ __le32 duration;
+} __packed; /* ROC_CMD_API_S_VER_3 */
+
+/*
+ * ROC notification
+ *
+ * Notification sent when the ROC starts and when it ends.
+ *
+ * ( MAC_CONF_GROUP 0x3, ROC_NOTIF 0xf8 )
+ *
+ * @success: true if the ROC was successfully started
+ * @started: true if the ROC started, false if it ended
+ * @activity: the activity this notification refers to, see &enum iwl_roc_activity
+ */
+struct iwl_roc_notif {
+ __le32 success;
+ __le32 started;
+ __le32 activity;
+} __packed; /* ROC_NOTIF_API_S_VER_1 */
+
/**
* enum iwl_mvm_session_prot_conf_id - session protection's configurations
* @SESSION_PROTECT_CONF_ASSOC: Start a session protection for association.
@@ -375,8 +432,8 @@ enum iwl_mvm_session_prot_conf_id {
/**
* struct iwl_mvm_session_prot_cmd - configure a session protection
- * @id_and_color: the id and color of the mac for which this session protection
- * is sent
+ * @id_and_color: the id and color of the link (or mac, for command version 1)
+ * for which this session protection is sent
* @action: can be either FW_CTXT_ACTION_ADD or FW_CTXT_ACTION_REMOVE,
* see &enum iwl_ctxt_action
* @conf_id: see &enum iwl_mvm_session_prot_conf_id
@@ -397,11 +454,15 @@ struct iwl_mvm_session_prot_cmd {
__le32 duration_tu;
__le32 repetition_count;
__le32 interval;
-} __packed; /* SESSION_PROTECTION_CMD_API_S_VER_1 */
+} __packed;
+/* SESSION_PROTECTION_CMD_API_S_VER_1 and
+ * SESSION_PROTECTION_CMD_API_S_VER_2
+ */
/**
* struct iwl_mvm_session_prot_notif - session protection started / ended
- * @mac_id: the mac id for which the session protection started / ended
+ * @mac_link_id: the mac id (or link id, for notif ver > 2) for which the
+ * session protection started / ended
* @status: 1 means success, 0 means failure
* @start: 1 means the session protection started, 0 means it ended
* @conf_id: see &enum iwl_mvm_session_prot_conf_id
@@ -410,10 +471,13 @@ struct iwl_mvm_session_prot_cmd {
* and end even the firmware could not schedule it.
*/
struct iwl_mvm_session_prot_notif {
- __le32 mac_id;
+ __le32 mac_link_id;
__le32 status;
__le32 start;
__le32 conf_id;
-} __packed; /* SESSION_PROTECTION_NOTIFICATION_API_S_VER_2 */
+} __packed;
+/* SESSION_PROTECTION_NOTIFICATION_API_S_VER_2 and
+ * SESSION_PROTECTION_NOTIFICATION_API_S_VER_3
+ */
#endif /* __iwl_fw_api_time_event_h__ */
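For illustration only (not part of the patch): a caller using the new ROC_CMD would fill struct iwl_roc_req along these lines. The action value, the activity, the durations, and the node_addr source are assumptions for the sketch, not driver code:

static void example_fill_roc_req(struct iwl_roc_req *cmd,
				 const u8 *node_addr, u32 sta_id)
{
	memset(cmd, 0, sizeof(*cmd));
	cmd->action = cpu_to_le32(FW_CTXT_ACTION_ADD);	/* see enum iwl_ctxt_action */
	cmd->activity = cpu_to_le32(ROC_ACTIVITY_P2P_DISC);
	cmd->sta_id = cpu_to_le32(sta_id);
	memcpy(cmd->node_addr, node_addr, ETH_ALEN);	/* Rx filtering address */
	cmd->max_delay = cpu_to_le32(500);	/* example: allow 500 TU start delay */
	cmd->duration = cpu_to_le32(200);	/* example: 200 TU on channel */
	/* cmd->channel_info would be filled by the driver's channel-info helper */
}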
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h b/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h
index e018946310d1..9c69d3674384 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2005-2014, 2019-2021 Intel Corporation
+ * Copyright (C) 2005-2014, 2019-2021, 2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@@ -76,7 +76,7 @@ enum iwl_tx_queue_cfg_actions {
TX_QUEUE_CFG_TFD_SHORT_FORMAT = BIT(1),
};
-#define IWL_DEFAULT_QUEUE_SIZE_EHT (1024 * 4)
+#define IWL_DEFAULT_QUEUE_SIZE_EHT (512 * 4)
#define IWL_DEFAULT_QUEUE_SIZE_HE 1024
#define IWL_DEFAULT_QUEUE_SIZE 256
#define IWL_MGMT_QUEUE_SIZE 16
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
index 3ab6a68f1e9f..3975a53a9f20 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
@@ -1021,22 +1021,18 @@ struct iwl_dump_ini_region_data {
struct iwl_fwrt_dump_data *dump_data;
};
-static int
-iwl_dump_ini_prph_mac_iter(struct iwl_fw_runtime *fwrt,
- struct iwl_dump_ini_region_data *reg_data,
- void *range_ptr, u32 range_len, int idx)
+static int iwl_dump_ini_prph_mac_iter_common(struct iwl_fw_runtime *fwrt,
+ void *range_ptr, u32 addr,
+ __le32 size)
{
- struct iwl_fw_ini_region_tlv *reg = (void *)reg_data->reg_tlv->data;
struct iwl_fw_ini_error_dump_range *range = range_ptr;
__le32 *val = range->data;
u32 prph_val;
- u32 addr = le32_to_cpu(reg->addrs[idx]) +
- le32_to_cpu(reg->dev_addr.offset);
int i;
range->internal_base_addr = cpu_to_le32(addr);
- range->range_data_size = reg->dev_addr.size;
- for (i = 0; i < le32_to_cpu(reg->dev_addr.size); i += 4) {
+ range->range_data_size = size;
+ for (i = 0; i < le32_to_cpu(size); i += 4) {
prph_val = iwl_read_prph(fwrt->trans, addr + i);
if (iwl_trans_is_hw_error_value(prph_val))
return -EBUSY;
@@ -1047,38 +1043,61 @@ iwl_dump_ini_prph_mac_iter(struct iwl_fw_runtime *fwrt,
}
static int
-iwl_dump_ini_prph_phy_iter(struct iwl_fw_runtime *fwrt,
+iwl_dump_ini_prph_mac_iter(struct iwl_fw_runtime *fwrt,
struct iwl_dump_ini_region_data *reg_data,
void *range_ptr, u32 range_len, int idx)
{
struct iwl_fw_ini_region_tlv *reg = (void *)reg_data->reg_tlv->data;
+ u32 addr = le32_to_cpu(reg->addrs[idx]) +
+ le32_to_cpu(reg->dev_addr.offset);
+
+ return iwl_dump_ini_prph_mac_iter_common(fwrt, range_ptr, addr,
+ reg->dev_addr.size);
+}
+
+static int
+iwl_dump_ini_prph_mac_block_iter(struct iwl_fw_runtime *fwrt,
+ struct iwl_dump_ini_region_data *reg_data,
+ void *range_ptr, u32 range_len, int idx)
+{
+ struct iwl_fw_ini_region_tlv *reg = (void *)reg_data->reg_tlv->data;
+ struct iwl_fw_ini_addr_size *pairs = (void *)reg->addrs;
+ u32 addr = le32_to_cpu(reg->dev_addr_range.offset) +
+ le32_to_cpu(pairs[idx].addr);
+
+ return iwl_dump_ini_prph_mac_iter_common(fwrt, range_ptr, addr,
+ pairs[idx].size);
+}
+
+static int iwl_dump_ini_prph_phy_iter_common(struct iwl_fw_runtime *fwrt,
+ void *range_ptr, u32 addr,
+ __le32 size, __le32 offset)
+{
struct iwl_fw_ini_error_dump_range *range = range_ptr;
__le32 *val = range->data;
u32 indirect_wr_addr = WMAL_INDRCT_RD_CMD1;
u32 indirect_rd_addr = WMAL_MRSPF_1;
u32 prph_val;
- u32 addr = le32_to_cpu(reg->addrs[idx]);
u32 dphy_state;
u32 dphy_addr;
int i;
range->internal_base_addr = cpu_to_le32(addr);
- range->range_data_size = reg->dev_addr.size;
+ range->range_data_size = size;
if (fwrt->trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
indirect_wr_addr = WMAL_INDRCT_CMD1;
- indirect_wr_addr += le32_to_cpu(reg->dev_addr.offset);
- indirect_rd_addr += le32_to_cpu(reg->dev_addr.offset);
+ indirect_wr_addr += le32_to_cpu(offset);
+ indirect_rd_addr += le32_to_cpu(offset);
if (!iwl_trans_grab_nic_access(fwrt->trans))
return -EBUSY;
- dphy_addr = (reg->dev_addr.offset) ? WFPM_LMAC2_PS_CTL_RW :
- WFPM_LMAC1_PS_CTL_RW;
+ dphy_addr = (offset) ? WFPM_LMAC2_PS_CTL_RW : WFPM_LMAC1_PS_CTL_RW;
dphy_state = iwl_read_umac_prph_no_grab(fwrt->trans, dphy_addr);
- for (i = 0; i < le32_to_cpu(reg->dev_addr.size); i += 4) {
+ for (i = 0; i < le32_to_cpu(size); i += 4) {
if (dphy_state == HBUS_TIMEOUT ||
(dphy_state & WFPM_PS_CTL_RW_PHYRF_PD_FSM_CURSTATE_MSK) !=
WFPM_PHYRF_STATE_ON) {
@@ -1097,6 +1116,33 @@ iwl_dump_ini_prph_phy_iter(struct iwl_fw_runtime *fwrt,
return sizeof(*range) + le32_to_cpu(range->range_data_size);
}
+static int
+iwl_dump_ini_prph_phy_iter(struct iwl_fw_runtime *fwrt,
+ struct iwl_dump_ini_region_data *reg_data,
+ void *range_ptr, u32 range_len, int idx)
+{
+ struct iwl_fw_ini_region_tlv *reg = (void *)reg_data->reg_tlv->data;
+ u32 addr = le32_to_cpu(reg->addrs[idx]);
+
+ return iwl_dump_ini_prph_phy_iter_common(fwrt, range_ptr, addr,
+ reg->dev_addr.size,
+ reg->dev_addr.offset);
+}
+
+static int
+iwl_dump_ini_prph_phy_block_iter(struct iwl_fw_runtime *fwrt,
+ struct iwl_dump_ini_region_data *reg_data,
+ void *range_ptr, u32 range_len, int idx)
+{
+ struct iwl_fw_ini_region_tlv *reg = (void *)reg_data->reg_tlv->data;
+ struct iwl_fw_ini_addr_size *pairs = (void *)reg->addrs;
+ u32 addr = le32_to_cpu(pairs[idx].addr);
+
+ return iwl_dump_ini_prph_phy_iter_common(fwrt, range_ptr, addr,
+ pairs[idx].size,
+ reg->dev_addr_range.offset);
+}
+
static int iwl_dump_ini_csr_iter(struct iwl_fw_runtime *fwrt,
struct iwl_dump_ini_region_data *reg_data,
void *range_ptr, u32 range_len, int idx)
@@ -1370,6 +1416,53 @@ out:
return sizeof(*range) + le32_to_cpu(range->range_data_size);
}
+static int
+iwl_dump_ini_prph_snps_dphyip_iter(struct iwl_fw_runtime *fwrt,
+ struct iwl_dump_ini_region_data *reg_data,
+ void *range_ptr, u32 range_len, int idx)
+{
+ struct iwl_fw_ini_region_tlv *reg = (void *)reg_data->reg_tlv->data;
+ struct iwl_fw_ini_error_dump_range *range = range_ptr;
+ __le32 *val = range->data;
+ __le32 offset = reg->dev_addr.offset;
+ u32 indirect_rd_wr_addr = DPHYIP_INDIRECT;
+ u32 addr = le32_to_cpu(reg->addrs[idx]);
+ u32 dphy_state, dphy_addr, prph_val;
+ int i;
+
+ range->internal_base_addr = cpu_to_le32(addr);
+ range->range_data_size = reg->dev_addr.size;
+
+ if (!iwl_trans_grab_nic_access(fwrt->trans))
+ return -EBUSY;
+
+ indirect_rd_wr_addr += le32_to_cpu(offset);
+
+ dphy_addr = offset ? WFPM_LMAC2_PS_CTL_RW : WFPM_LMAC1_PS_CTL_RW;
+ dphy_state = iwl_read_umac_prph_no_grab(fwrt->trans, dphy_addr);
+
+ for (i = 0; i < le32_to_cpu(reg->dev_addr.size); i += 4) {
+ if (dphy_state == HBUS_TIMEOUT ||
+ (dphy_state & WFPM_PS_CTL_RW_PHYRF_PD_FSM_CURSTATE_MSK) !=
+ WFPM_PHYRF_STATE_ON) {
+ *val++ = cpu_to_le32(WFPM_DPHY_OFF);
+ continue;
+ }
+
+ iwl_write_prph_no_grab(fwrt->trans, indirect_rd_wr_addr,
+ addr + i);
+ /* wait a bit for value to be ready in register */
+ udelay(1);
+ prph_val = iwl_read_prph_no_grab(fwrt->trans,
+ indirect_rd_wr_addr);
+ *val++ = cpu_to_le32((prph_val & DPHYIP_INDIRECT_RD_MSK) >>
+ DPHYIP_INDIRECT_RD_SHIFT);
+ }
+
+ iwl_trans_release_nic_access(fwrt->trans);
+ return sizeof(*range) + le32_to_cpu(range->range_data_size);
+}
+
struct iwl_ini_rxf_data {
u32 fifo_num;
u32 size;
@@ -1781,6 +1874,16 @@ static u32 iwl_dump_ini_mem_ranges(struct iwl_fw_runtime *fwrt,
return iwl_tlv_array_len(reg_data->reg_tlv, reg, addrs);
}
+static u32
+iwl_dump_ini_mem_block_ranges(struct iwl_fw_runtime *fwrt,
+ struct iwl_dump_ini_region_data *reg_data)
+{
+ struct iwl_fw_ini_region_tlv *reg = (void *)reg_data->reg_tlv->data;
+ size_t size = sizeof(struct iwl_fw_ini_addr_size);
+
+ return iwl_tlv_array_len_with_size(reg_data->reg_tlv, reg, size);
+}
+
static u32 iwl_dump_ini_paging_ranges(struct iwl_fw_runtime *fwrt,
struct iwl_dump_ini_region_data *reg_data)
{
@@ -1867,6 +1970,25 @@ static u32 iwl_dump_ini_mem_get_size(struct iwl_fw_runtime *fwrt,
}
static u32
+iwl_dump_ini_mem_block_get_size(struct iwl_fw_runtime *fwrt,
+ struct iwl_dump_ini_region_data *reg_data)
+{
+ struct iwl_fw_ini_region_tlv *reg = (void *)reg_data->reg_tlv->data;
+ struct iwl_fw_ini_addr_size *pairs = (void *)reg->addrs;
+ u32 ranges = iwl_dump_ini_mem_block_ranges(fwrt, reg_data);
+ u32 size = sizeof(struct iwl_fw_ini_error_dump);
+ int range;
+
+ if (!ranges)
+ return 0;
+
+ for (range = 0; range < ranges; range++)
+ size += le32_to_cpu(pairs[range].size);
+
+ return size + ranges * sizeof(struct iwl_fw_ini_error_dump_range);
+}
+
+static u32
iwl_dump_ini_paging_get_size(struct iwl_fw_runtime *fwrt,
struct iwl_dump_ini_region_data *reg_data)
{
@@ -2413,6 +2535,18 @@ static const struct iwl_dump_ini_mem_ops iwl_dump_ini_region_ops[] = {
.fill_mem_hdr = iwl_dump_ini_mem_fill_header,
.fill_range = iwl_dump_ini_prph_phy_iter,
},
+ [IWL_FW_INI_REGION_PERIPHERY_MAC_RANGE] = {
+ .get_num_of_ranges = iwl_dump_ini_mem_block_ranges,
+ .get_size = iwl_dump_ini_mem_block_get_size,
+ .fill_mem_hdr = iwl_dump_ini_mem_fill_header,
+ .fill_range = iwl_dump_ini_prph_mac_block_iter,
+ },
+ [IWL_FW_INI_REGION_PERIPHERY_PHY_RANGE] = {
+ .get_num_of_ranges = iwl_dump_ini_mem_block_ranges,
+ .get_size = iwl_dump_ini_mem_block_get_size,
+ .fill_mem_hdr = iwl_dump_ini_mem_fill_header,
+ .fill_range = iwl_dump_ini_prph_phy_block_iter,
+ },
[IWL_FW_INI_REGION_PERIPHERY_AUX] = {},
[IWL_FW_INI_REGION_PAGING] = {
.fill_mem_hdr = iwl_dump_ini_mem_fill_header,
@@ -2450,6 +2584,12 @@ static const struct iwl_dump_ini_mem_ops iwl_dump_ini_region_ops[] = {
.fill_mem_hdr = iwl_dump_ini_mon_dbgi_fill_header,
.fill_range = iwl_dump_ini_dbgi_sram_iter,
},
+ [IWL_FW_INI_REGION_PERIPHERY_SNPS_DPHYIP] = {
+ .get_num_of_ranges = iwl_dump_ini_mem_ranges,
+ .get_size = iwl_dump_ini_mem_get_size,
+ .fill_mem_hdr = iwl_dump_ini_mem_fill_header,
+ .fill_range = iwl_dump_ini_prph_snps_dphyip_iter,
+ },
};
static u32 iwl_dump_ini_trigger(struct iwl_fw_runtime *fwrt,
@@ -2492,7 +2632,9 @@ static u32 iwl_dump_ini_trigger(struct iwl_fw_runtime *fwrt,
if (reg_type >= ARRAY_SIZE(iwl_dump_ini_region_ops))
continue;
- if (reg_type == IWL_FW_INI_REGION_PERIPHERY_PHY &&
+ if ((reg_type == IWL_FW_INI_REGION_PERIPHERY_PHY ||
+ reg_type == IWL_FW_INI_REGION_PERIPHERY_PHY_RANGE ||
+ reg_type == IWL_FW_INI_REGION_PERIPHERY_SNPS_DPHYIP) &&
tp_id != IWL_FW_INI_TIME_POINT_FW_ASSERT) {
IWL_WARN(fwrt,
"WRT: trying to collect phy prph at time point: %d, skipping\n",
@@ -3228,3 +3370,28 @@ void iwl_fw_dbg_stop_restart_recording(struct iwl_fw_runtime *fwrt,
#endif
}
IWL_EXPORT_SYMBOL(iwl_fw_dbg_stop_restart_recording);
+
+void iwl_fw_disable_dbg_asserts(struct iwl_fw_runtime *fwrt)
+{
+ struct iwl_fw_dbg_config_cmd cmd = {
+ .type = cpu_to_le32(DEBUG_TOKEN_CONFIG_TYPE),
+ .conf = cpu_to_le32(IWL_FW_DBG_CONFIG_TOKEN),
+ };
+ struct iwl_host_cmd hcmd = {
+ .id = WIDE_ID(LONG_GROUP, LDBG_CONFIG_CMD),
+ .data[0] = &cmd,
+ .len[0] = sizeof(cmd),
+ };
+ u32 preset = u32_get_bits(fwrt->trans->dbg.domains_bitmap,
+ GENMASK(31, IWL_FW_DBG_DOMAIN_POS + 1));
+
+ /* supported starting from 9000 devices */
+ if (fwrt->trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_9000)
+ return;
+
+ if (fwrt->trans->dbg.yoyo_bin_loaded || (preset && preset != 1))
+ return;
+
+ iwl_trans_send_cmd(fwrt->trans, &hcmd);
+}
+IWL_EXPORT_SYMBOL(iwl_fw_disable_dbg_asserts);
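The dbg.c hunks above fold what were duplicated register-walk loops into iwl_dump_ini_prph_{mac,phy}_iter_common() helpers and add *_block_iter() wrappers for the new *_RANGE region types, which describe each range as an (address, size) pair instead of a fixed-size address list. A minimal stand-alone sketch of that wrapper-over-common-helper shape, using hypothetical names and types rather than the driver's structures:

/* sketch only: hypothetical types, not the iwlwifi structures */
struct addr_size_pair {
	unsigned int addr;
	unsigned int size;
};

static int dump_range_common(unsigned int *out, unsigned int addr,
			     unsigned int size)
{
	unsigned int i;

	/* one shared read loop, one place to handle read errors */
	for (i = 0; i < size; i += 4)
		*out++ = addr + i;	/* stand-in for a register read */
	return (int)size;
}

/* legacy regions: an address list plus a single per-region size */
static int dump_range_fixed(unsigned int *out, const unsigned int *addrs,
			    unsigned int size, int idx)
{
	return dump_range_common(out, addrs[idx], size);
}

/* "block" (RANGE) regions: each entry carries its own size */
static int dump_range_block(unsigned int *out,
			    const struct addr_size_pair *pairs, int idx)
{
	return dump_range_common(out, pairs[idx].addr, pairs[idx].size);
}

Keeping the hardware access in the common helper is what lets the later iwl_dump_ini_region_ops[] entries for IWL_FW_INI_REGION_PERIPHERY_{MAC,PHY}_RANGE reuse the existing header/fill callbacks and differ only in how ranges and sizes are derived.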
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h
index 4227fbd2b977..66b233250c7c 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.h
@@ -329,6 +329,7 @@ void iwl_fwrt_dump_error_logs(struct iwl_fw_runtime *fwrt);
void iwl_send_dbg_dump_complete_cmd(struct iwl_fw_runtime *fwrt,
u32 timepoint,
u32 timepoint_data);
+void iwl_fw_disable_dbg_asserts(struct iwl_fw_runtime *fwrt);
#define IWL_FW_CHECK_FAILED(_obj, _fmt, ...) \
IWL_ERR_LIMIT(_obj, _fmt, __VA_ARGS__)
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
index 3cdbc6ac7ae5..751a125a1566 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/debugfs.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright (C) 2012-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@@ -141,7 +141,11 @@ static int iwl_dbgfs_enabled_severities_write(struct iwl_fw_runtime *fwrt,
event_cfg.enabled_severities = cpu_to_le32(enabled_severities);
- ret = iwl_trans_send_cmd(fwrt->trans, &hcmd);
+ if (fwrt->ops && fwrt->ops->send_hcmd)
+ ret = fwrt->ops->send_hcmd(fwrt->ops_ctx, &hcmd);
+ else
+ ret = -EPERM;
+
IWL_INFO(fwrt,
"sent host event cfg with enabled_severities: %u, ret: %d\n",
enabled_severities, ret);
@@ -342,6 +346,12 @@ static int iwl_dbgfs_fw_info_seq_show(struct seq_file *seq, void *v)
" %d: %d\n",
IWL_UCODE_TLV_CAPA_PPAG_CHINA_BIOS_SUPPORT,
has_capa);
+ has_capa = fw_has_capa(&fw->ucode_capa,
+ IWL_UCODE_TLV_CAPA_CHINA_22_REG_SUPPORT) ? 1 : 0;
+ seq_printf(seq,
+ " %d: %d\n",
+ IWL_UCODE_TLV_CAPA_CHINA_22_REG_SUPPORT,
+ has_capa);
seq_puts(seq, "fw_api_ver:\n");
}
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/file.h b/drivers/net/wireless/intel/iwlwifi/fw/file.h
index b36e9613a52c..03f6e520145f 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/file.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/file.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2008-2014, 2018-2021 Intel Corporation
+ * Copyright (C) 2008-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@@ -281,12 +281,16 @@ enum iwl_ucode_tlv_api {
IWL_UCODE_TLV_API_SCAN_EXT_CHAN_VER = (__force iwl_ucode_tlv_api_t)58,
IWL_UCODE_TLV_API_BAND_IN_RX_DATA = (__force iwl_ucode_tlv_api_t)59,
-
-#ifdef __CHECKER__
- /* sparse says it cannot increment the previous enum member */
-#define NUM_IWL_UCODE_TLV_API 128
-#else
NUM_IWL_UCODE_TLV_API
+/*
+ * This construction makes both sparse (which cannot increment the previous
+ * member due to its bitwise type) and kernel-doc (which doesn't understand
+ * the ifdef/else properly) work.
+ */
+#ifdef __CHECKER__
+#define __CHECKER_NUM_IWL_UCODE_TLV_API 128
+ = (__force iwl_ucode_tlv_api_t)__CHECKER_NUM_IWL_UCODE_TLV_API,
+#define NUM_IWL_UCODE_TLV_API __CHECKER_NUM_IWL_UCODE_TLV_API
#endif
};
@@ -468,12 +472,18 @@ enum iwl_ucode_tlv_capa {
IWL_UCODE_TLV_CAPA_OFFLOAD_BTM_SUPPORT = (__force iwl_ucode_tlv_capa_t)113,
IWL_UCODE_TLV_CAPA_STA_EXP_MFP_SUPPORT = (__force iwl_ucode_tlv_capa_t)114,
IWL_UCODE_TLV_CAPA_SNIFF_VALIDATE_SUPPORT = (__force iwl_ucode_tlv_capa_t)116,
+ IWL_UCODE_TLV_CAPA_CHINA_22_REG_SUPPORT = (__force iwl_ucode_tlv_capa_t)117,
-#ifdef __CHECKER__
- /* sparse says it cannot increment the previous enum member */
-#define NUM_IWL_UCODE_TLV_CAPA 128
-#else
NUM_IWL_UCODE_TLV_CAPA
+/*
+ * This construction makes both sparse (which cannot increment the previous
+ * member due to its bitwise type) and kernel-doc (which doesn't understand
+ * the ifdef/else properly) work.
+ */
+#ifdef __CHECKER__
+#define __CHECKER_NUM_IWL_UCODE_TLV_CAPA 128
+ = (__force iwl_ucode_tlv_capa_t)__CHECKER_NUM_IWL_UCODE_TLV_CAPA,
+#define NUM_IWL_UCODE_TLV_CAPA __CHECKER_NUM_IWL_UCODE_TLV_CAPA
#endif
};
@@ -965,4 +975,6 @@ static inline size_t _iwl_tlv_array_len(const struct iwl_ucode_tlv *tlv,
_iwl_tlv_array_len((_tlv_ptr), sizeof(*(_struct_ptr)), \
sizeof(_struct_ptr->_memb[0]))
+#define iwl_tlv_array_len_with_size(_tlv_ptr, _struct_ptr, _size) \
+ _iwl_tlv_array_len((_tlv_ptr), sizeof(*(_struct_ptr)), _size)
#endif /* __iwl_fw_file_h__ */
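The new iwl_tlv_array_len_with_size() macro mirrors iwl_tlv_array_len() but takes an explicit element size, which is how iwl_dump_ini_mem_block_ranges() above counts (address, size) pairs in a region TLV. The arithmetic is simply the payload length minus the fixed header, divided by the element size; a hedged stand-alone illustration (hypothetical helper, not the driver macro):

#include <stddef.h>

/* illustrative only: count trailing array elements in a TLV payload
 * that consists of a fixed header followed by a variable-length array */
static size_t tlv_array_len(size_t payload_len, size_t fixed_len,
			    size_t elem_size)
{
	if (!elem_size || payload_len < fixed_len)
		return 0;
	return (payload_len - fixed_len) / elem_size;
}

/* e.g. a 100-byte payload with a 4-byte fixed part and 8-byte
 * address/size pairs describes (100 - 4) / 8 = 12 ranges */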
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/img.h b/drivers/net/wireless/intel/iwlwifi/fw/img.h
index 8d0d58d61892..96bda80632f3 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/img.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/img.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016 Intel Deutschland GmbH
*/
@@ -198,7 +198,7 @@ struct iwl_dump_exclude {
struct iwl_fw {
u32 ucode_ver;
- char fw_version[64];
+ char fw_version[128];
/* ucode images */
struct fw_img img[IWL_UCODE_TYPE_MAX];
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/notif-wait.h b/drivers/net/wireless/intel/iwlwifi/fw/notif-wait.h
index 49e8ba11b6a8..0e49794911c1 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/notif-wait.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/notif-wait.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2005-2014 Intel Corporation
+ * Copyright (C) 2005-2014, 2023 Intel Corporation
* Copyright (C) 2015-2017 Intel Deutschland GmbH
*/
#ifndef __iwl_notif_wait_h__
@@ -25,6 +25,7 @@ struct iwl_notif_wait_data {
* returns true, the wait is over, if it returns false then
* the waiter stays blocked. If no function is given, any
* of the listed commands will unblock the waiter.
+ * @fn_data: pointer to pass to the @fn's data argument
* @cmds: command IDs
* @n_cmds: number of command IDs
* @triggered: waiter should be woken up
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/rs.c b/drivers/net/wireless/intel/iwlwifi/fw/rs.c
index b09e68dbf5a9..8f99e501629e 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/rs.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/rs.c
@@ -208,7 +208,6 @@ int rs_pretty_print_rate(char *buf, int bufsz, const u32 rate)
return scnprintf(buf, bufsz, "Legacy | ANT: %s Rate: %s Mbps",
iwl_rs_pretty_ant(ant),
- index == IWL_RATE_INVALID ? "BAD" :
iwl_rate_mcs(index)->mbps);
}
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
index 702586945533..357727774db9 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/runtime.h
@@ -98,6 +98,8 @@ struct iwl_txf_iter_data {
* @cur_fw_img: current firmware image, must be maintained by
* the driver by calling &iwl_fw_set_current_image()
* @dump: debug dump data
+ * @uats_enabled: VLP or AFC AP is enabled
+ * @uats_table: AP type table
*/
struct iwl_fw_runtime {
struct iwl_trans *trans;
@@ -171,6 +173,8 @@ struct iwl_fw_runtime {
struct iwl_sar_offset_mapping_cmd sgom_table;
bool sgom_enabled;
u8 reduced_power_flags;
+ bool uats_enabled;
+ struct iwl_uats_table_cmd uats_table;
#endif
};
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
index 9877988db0d2..2964c5fb11e9 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
@@ -388,4 +388,54 @@ void iwl_uefi_get_sgom_table(struct iwl_trans *trans,
kfree(data);
}
IWL_EXPORT_SYMBOL(iwl_uefi_get_sgom_table);
+
+static int iwl_uefi_uats_parse(struct uefi_cnv_wlan_uats_data *uats_data,
+ struct iwl_fw_runtime *fwrt)
+{
+ if (uats_data->revision != 1)
+ return -EINVAL;
+
+ memcpy(fwrt->uats_table.offset_map, uats_data->offset_map,
+ sizeof(fwrt->uats_table.offset_map));
+ return 0;
+}
+
+int iwl_uefi_get_uats_table(struct iwl_trans *trans,
+ struct iwl_fw_runtime *fwrt)
+{
+ struct uefi_cnv_wlan_uats_data *data;
+ unsigned long package_size;
+ int ret;
+
+ data = iwl_uefi_get_variable(IWL_UEFI_UATS_NAME, &IWL_EFI_VAR_GUID,
+ &package_size);
+ if (IS_ERR(data)) {
+ IWL_DEBUG_FW(trans,
+ "UATS UEFI variable not found 0x%lx\n",
+ PTR_ERR(data));
+ return -EINVAL;
+ }
+
+ if (package_size < sizeof(*data)) {
+ IWL_DEBUG_FW(trans,
+ "Invalid UATS table UEFI variable len (%lu)\n",
+ package_size);
+ kfree(data);
+ return -EINVAL;
+ }
+
+ IWL_DEBUG_FW(trans, "Read UATS from UEFI with size %lu\n",
+ package_size);
+
+ ret = iwl_uefi_uats_parse(data, fwrt);
+ if (ret < 0) {
+ IWL_DEBUG_FW(trans, "Cannot read UATS table. rev is invalid\n");
+ kfree(data);
+ return ret;
+ }
+
+ kfree(data);
+ return 0;
+}
+IWL_EXPORT_SYMBOL(iwl_uefi_get_uats_table);
#endif /* CONFIG_ACPI */
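iwl_uefi_get_uats_table() follows the same shape as the existing SGOM reader: fetch the UEFI variable, reject it when it is shorter than the expected structure or carries an unknown revision, copy the offset map into the runtime, and free the buffer on every path. A hypothetical call site (the op-mode wiring is not part of these hunks, so the function and flow below are an assumption) might look roughly like this:

/* hypothetical caller; assumes the op mode decides when to load UATS */
static void example_load_uats(struct iwl_trans *trans,
			      struct iwl_fw_runtime *fwrt)
{
	if (iwl_uefi_get_uats_table(trans, fwrt)) {
		/* variable absent or malformed: leave uats_enabled false */
		return;
	}
	fwrt->uats_enabled = true;
}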
diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.h b/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
index 1369cc4855c3..bf61a8df1225 100644
--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
+++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.h
@@ -9,8 +9,10 @@
#define IWL_UEFI_REDUCED_POWER_NAME L"UefiCnvWlanReducedPower"
#define IWL_UEFI_SGOM_NAME L"UefiCnvWlanSarGeoOffsetMapping"
#define IWL_UEFI_STEP_NAME L"UefiCnvCommonSTEP"
+#define IWL_UEFI_UATS_NAME L"CnvUefiWlanUATS"
#define IWL_SGOM_MAP_SIZE 339
+#define IWL_UATS_MAP_SIZE 339
struct pnvm_sku_package {
u8 rev;
@@ -25,6 +27,11 @@ struct uefi_cnv_wlan_sgom_data {
u8 offset_map[IWL_SGOM_MAP_SIZE - 1];
} __packed;
+struct uefi_cnv_wlan_uats_data {
+ u8 revision;
+ u8 offset_map[IWL_UATS_MAP_SIZE - 1];
+} __packed;
+
struct uefi_cnv_common_step_data {
u8 revision;
u8 step_mode;
@@ -82,10 +89,20 @@ iwl_uefi_handle_tlv_mem_desc(struct iwl_trans *trans, const u8 *data,
#if defined(CONFIG_EFI) && defined(CONFIG_ACPI)
void iwl_uefi_get_sgom_table(struct iwl_trans *trans, struct iwl_fw_runtime *fwrt);
+int iwl_uefi_get_uats_table(struct iwl_trans *trans,
+ struct iwl_fw_runtime *fwrt);
#else
static inline
void iwl_uefi_get_sgom_table(struct iwl_trans *trans, struct iwl_fw_runtime *fwrt)
{
}
+
+static inline
+int iwl_uefi_get_uats_table(struct iwl_trans *trans,
+ struct iwl_fw_runtime *fwrt)
+{
+ return 0;
+}
+
#endif
#endif /* __iwl_fw_uefi__ */
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
index 241a9e3f2a1a..02ded22295c1 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
@@ -86,10 +86,7 @@ enum iwl_nvm_type {
#define IWL_DEFAULT_MAX_TX_POWER 22
#define IWL_TX_CSUM_NETIF_FLAGS (NETIF_F_IPV6_CSUM | NETIF_F_IP_CSUM |\
NETIF_F_TSO | NETIF_F_TSO6)
-#define IWL_TX_CSUM_NETIF_FLAGS_BZ (NETIF_F_HW_CSUM | NETIF_F_TSO | NETIF_F_TSO6)
-#define IWL_CSUM_NETIF_FLAGS_MASK (IWL_TX_CSUM_NETIF_FLAGS | \
- IWL_TX_CSUM_NETIF_FLAGS_BZ | \
- NETIF_F_RXCSUM)
+#define IWL_CSUM_NETIF_FLAGS_MASK (IWL_TX_CSUM_NETIF_FLAGS | NETIF_F_RXCSUM)
/* Antenna presence definitions */
#define ANT_NONE 0x0
@@ -250,7 +247,6 @@ enum iwl_cfg_trans_ltr_delay {
* RFID can be read before deciding the remaining parameters to use.
*
* @base_params: pointer to basic parameters
- * @csr: csr flags and addresses that are different across devices
* @device_family: the device family
* @umac_prph_offset: offset to add to UMAC periphery address
* @xtal_latency: power up latency to get the xtal stabilized
@@ -319,7 +315,6 @@ struct iwl_fw_mon_regs {
* @non_shared_ant: the antenna that is for WiFi only
* @nvm_ver: NVM version
* @nvm_calib_ver: NVM calibration version
- * @lib: pointer to the lib ops
* @ht_params: point to ht parameters
* @led_mode: 0=blinking, 1=On(RF On)/Off(RF Off)
* @rx_with_siso_diversity: 1x1 device with rx antenna diversity
@@ -344,15 +339,12 @@ struct iwl_fw_mon_regs {
* @nvm_type: see &enum iwl_nvm_type
* @d3_debug_data_base_addr: base address where D3 debug data is stored
* @d3_debug_data_length: length of the D3 debug data
- * @bisr_workaround: BISR hardware workaround (for 22260 series devices)
* @min_txq_size: minimum number of slots required in a TX queue
* @uhb_supported: ultra high band channels supported
* @min_ba_txq_size: minimum number of slots required in a TX queue which
* based on hardware support (HE - 256, EHT - 1K).
* @num_rbds: number of receive buffer descriptors to use
* (only used for multi-queue capable devices)
- * @mac_addr_csr_base: CSR base register for MAC address access, if not set
- * assume 0x380
*
* We enable the driver to be backward compatible wrt. hardware features.
* API differences in uCode shouldn't be handled here but through TLVs
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-context-info-gen3.h b/drivers/net/wireless/intel/iwlwifi/iwl-context-info-gen3.h
index 96bf353469b8..1379dc2d231b 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-context-info-gen3.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-context-info-gen3.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2018, 2020-2022 Intel Corporation
+ * Copyright (C) 2018, 2020-2023 Intel Corporation
*/
#ifndef __iwl_context_info_file_gen3_h__
#define __iwl_context_info_file_gen3_h__
@@ -44,7 +44,7 @@ enum iwl_prph_scratch_mtr_format {
* @IWL_PRPH_SCRATCH_EDBG_DEST_ST_ARBITER: use st arbiter, mainly for
* multicomm.
* @IWL_PRPH_SCRATCH_EDBG_DEST_TB22DTF: route debug data to SoC HW
- * @IWL_PRPH_SCTATCH_RB_SIZE_4K: Use 4K RB size (the default is 2K)
+ * @IWL_PRPH_SCRATCH_RB_SIZE_4K: Use 4K RB size (the default is 2K)
* @IWL_PRPH_SCRATCH_MTR_MODE: format used for completion - 0: for
* completion descriptor, 1 for responses (legacy)
* @IWL_PRPH_SCRATCH_MTR_FORMAT: a mask for the size of the tfd.
@@ -187,11 +187,15 @@ struct iwl_prph_scratch_ctrl_cfg {
* struct iwl_prph_scratch - peripheral scratch mapping
* @ctrl_cfg: control and configuration of prph scratch
* @dram: firmware images addresses in DRAM
+ * @fseq_override: FSEQ override parameters
+ * @step_analog_params: STEP analog calibration values
* @reserved: reserved
*/
struct iwl_prph_scratch {
struct iwl_prph_scratch_ctrl_cfg ctrl_cfg;
- __le32 reserved[10];
+ __le32 fseq_override;
+ __le32 step_analog_params;
+ __le32 reserved[8];
struct iwl_context_info_dram dram;
} __packed; /* PERIPH_SCRATCH_S */
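The iwl_prph_scratch change names two of what used to be ten reserved __le32 words (fseq_override and step_analog_params) and keeps eight reserved, so the scratch layout seen by the device does not change size. A small compile-time check of that invariant, sketched with plain uint32_t standing in for __le32:

#include <stdint.h>

struct scratch_old {			/* previous layout */
	uint32_t reserved[10];
};

struct scratch_new {			/* layout after this change */
	uint32_t fseq_override;
	uint32_t step_analog_params;
	uint32_t reserved[8];
};

_Static_assert(sizeof(struct scratch_old) == sizeof(struct scratch_new),
	       "naming reserved words must not change the scratch size");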
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
index 587368a0ad4a..a4df67ff21ba 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-csr.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2005-2014, 2018-2022 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2014 Intel Mobile Communications GmbH
* Copyright (C) 2016 Intel Deutschland GmbH
*/
@@ -313,6 +313,7 @@ enum {
SILICON_C_STEP,
SILICON_D_STEP,
SILICON_E_STEP,
+ SILICON_TC_STEP = 0xe,
SILICON_Z_STEP = 0xf,
};
@@ -618,6 +619,7 @@ enum msix_hw_int_causes {
MSIX_HW_INT_CAUSES_REG_WAKEUP = BIT(1),
MSIX_HW_INT_CAUSES_REG_IML = BIT(1),
MSIX_HW_INT_CAUSES_REG_RESET_DONE = BIT(2),
+ MSIX_HW_INT_CAUSES_REG_TOP_FATAL_ERR = BIT(3),
MSIX_HW_INT_CAUSES_REG_SW_ERR_BZ = BIT(5),
MSIX_HW_INT_CAUSES_REG_CT_KILL = BIT(6),
MSIX_HW_INT_CAUSES_REG_RF_KILL = BIT(7),
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
index ef5baee6c9c5..b658cf228fbe 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.c
@@ -509,6 +509,8 @@ void iwl_dbg_tlv_load_bin(struct device *dev, struct iwl_trans *trans)
if (res)
return;
+ trans->dbg.yoyo_bin_loaded = true;
+
iwl_dbg_tlv_parse_bin(trans, fw->data, fw->size);
release_firmware(fw);
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.h b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.h
index 128059ca77e6..06fb7d665390 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-dbg-tlv.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2018-2022 Intel Corporation
+ * Copyright (C) 2018-2023 Intel Corporation
*/
#ifndef __iwl_dbg_tlv_h__
#define __iwl_dbg_tlv_h__
@@ -10,7 +10,8 @@
#include <fw/file.h>
#include <fw/api/dbg-tlv.h>
-#define IWL_DBG_TLV_MAX_PRESET 15
+#define IWL_DBG_TLV_MAX_PRESET 15
+#define ENABLE_INI (IWL_DBG_TLV_MAX_PRESET + 1)
/**
* struct iwl_dbg_tlv_node - debug TLV node
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace.h b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace.h
index 1455b578358b..01fb7b900a6d 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-devtrace.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-devtrace.h
@@ -3,17 +3,19 @@
*
* Copyright(c) 2009 - 2014 Intel Corporation. All rights reserved.
* Copyright(C) 2016 Intel Deutschland GmbH
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018, 2023 Intel Corporation
*****************************************************************************/
#ifndef __IWLWIFI_DEVICE_TRACE
#include <linux/skbuff.h>
#include <linux/ieee80211.h>
#include <net/cfg80211.h>
+#include <net/mac80211.h>
#include "iwl-trans.h"
#if !defined(__IWLWIFI_DEVICE_TRACE)
static inline bool iwl_trace_data(struct sk_buff *skb)
{
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
struct ieee80211_hdr *hdr = (void *)skb->data;
__le16 fc = hdr->frame_control;
int offs = 24; /* start with normal header length */
@@ -21,6 +23,10 @@ static inline bool iwl_trace_data(struct sk_buff *skb)
if (!ieee80211_is_data(fc))
return false;
+ /* If upper layers wanted TX status it's an important frame */
+ if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS)
+ return false;
+
/* Try to determine if the frame is EAPOL. This might have false
* positives (if there's no RFC 1042 header and we compare to some
* payload instead) but since we're only doing tracing that's not
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
index 3d87d26845e7..ffe2670720c9 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@@ -162,6 +162,8 @@ static inline char iwl_drv_get_step(int step)
{
if (step == SILICON_Z_STEP)
return 'z';
+ if (step == SILICON_TC_STEP)
+ return 'a';
return 'a' + step;
}
@@ -178,6 +180,8 @@ const char *iwl_drv_get_fwname_pre(struct iwl_trans *trans, char *buf)
mac_step = iwl_drv_get_step(trans->hw_rev_step);
+ rf_step = iwl_drv_get_step(CSR_HW_RFID_STEP(trans->hw_rf_id));
+
switch (CSR_HW_RFID_TYPE(trans->hw_rf_id)) {
case IWL_CFG_RF_TYPE_HR1:
case IWL_CFG_RF_TYPE_HR2:
@@ -196,7 +200,13 @@ const char *iwl_drv_get_fwname_pre(struct iwl_trans *trans, char *buf)
rf = "fm";
break;
case IWL_CFG_RF_TYPE_WH:
- rf = "wh";
+ if (SILICON_Z_STEP ==
+ CSR_HW_RFID_STEP(trans->hw_rf_id)) {
+ rf = "whtc";
+ rf_step = 'a';
+ } else {
+ rf = "wh";
+ }
break;
default:
return "unknown-rf";
@@ -204,8 +214,6 @@ const char *iwl_drv_get_fwname_pre(struct iwl_trans *trans, char *buf)
cdb = CSR_HW_RFID_IS_CDB(trans->hw_rf_id) ? "4" : "";
- rf_step = iwl_drv_get_step(CSR_HW_RFID_STEP(trans->hw_rf_id));
-
scnprintf(buf, FW_NAME_PRE_BUFSIZE,
"iwlwifi-%s-%c0-%s%s-%c0",
trans->cfg->fw_name_mac, mac_step,
@@ -1303,10 +1311,12 @@ static int iwl_parse_tlv_firmware(struct iwl_drv *drv,
case IWL_UCODE_TLV_CURRENT_PC:
if (tlv_len < sizeof(struct iwl_pc_data))
goto invalid_tlv_len;
- drv->trans->dbg.num_pc =
- tlv_len / sizeof(struct iwl_pc_data);
drv->trans->dbg.pc_data =
kmemdup(tlv_data, tlv_len, GFP_KERNEL);
+ if (!drv->trans->dbg.pc_data)
+ return -ENOMEM;
+ drv->trans->dbg.num_pc =
+ tlv_len / sizeof(struct iwl_pc_data);
break;
default:
IWL_DEBUG_INFO(drv, "unknown TLV: %d\n", tlv_type);
@@ -1415,6 +1425,9 @@ _iwl_op_mode_start(struct iwl_drv *drv, struct iwlwifi_opmode_table *op)
struct iwl_op_mode *op_mode = NULL;
int retry, max_retry = !!iwlwifi_mod_params.fw_restart * IWL_MAX_INIT_RETRY;
+ /* also protects start/stop from racing against each other */
+ lockdep_assert_held(&iwlwifi_opmode_table_mtx);
+
for (retry = 0; retry <= max_retry; retry++) {
#ifdef CONFIG_IWLWIFI_DEBUGFS
@@ -1429,6 +1442,9 @@ _iwl_op_mode_start(struct iwl_drv *drv, struct iwlwifi_opmode_table *op)
if (op_mode)
return op_mode;
+ if (test_bit(STATUS_TRANS_DEAD, &drv->trans->status))
+ break;
+
IWL_ERR(drv, "retry init count %d\n", retry);
#ifdef CONFIG_IWLWIFI_DEBUGFS
@@ -1442,6 +1458,9 @@ _iwl_op_mode_start(struct iwl_drv *drv, struct iwlwifi_opmode_table *op)
static void _iwl_op_mode_stop(struct iwl_drv *drv)
{
+ /* also protects start/stop from racing against each other */
+ lockdep_assert_held(&iwlwifi_opmode_table_mtx);
+
/* op_mode can be NULL if its start failed */
if (drv->op_mode) {
iwl_op_mode_stop(drv->op_mode);
@@ -1725,11 +1744,6 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
}
mutex_unlock(&iwlwifi_opmode_table_mtx);
- /*
- * Complete the firmware request last so that
- * a driver unbind (stop) doesn't run while we
- * are doing the start() above.
- */
complete(&drv->request_firmware_complete);
/*
@@ -1795,6 +1809,22 @@ struct iwl_drv *iwl_drv_start(struct iwl_trans *trans)
#endif
drv->trans->dbg.domains_bitmap = IWL_TRANS_FW_DBG_DOMAIN(drv->trans);
+ if (iwlwifi_mod_params.enable_ini != ENABLE_INI) {
+ /* We have a non-default value in the module parameter,
+ * take its value
+ */
+ drv->trans->dbg.domains_bitmap &= 0xffff;
+ if (iwlwifi_mod_params.enable_ini != IWL_FW_INI_PRESET_DISABLE) {
+ if (iwlwifi_mod_params.enable_ini > ENABLE_INI) {
+ IWL_ERR(trans,
+ "invalid enable_ini module parameter value: max = %d, using 0 instead\n",
+ ENABLE_INI);
+ iwlwifi_mod_params.enable_ini = 0;
+ }
+ drv->trans->dbg.domains_bitmap =
+ BIT(IWL_FW_DBG_DOMAIN_POS + iwlwifi_mod_params.enable_ini);
+ }
+ }
ret = iwl_request_firmware(drv, true);
if (ret) {
@@ -1818,11 +1848,12 @@ void iwl_drv_stop(struct iwl_drv *drv)
{
wait_for_completion(&drv->request_firmware_complete);
+ mutex_lock(&iwlwifi_opmode_table_mtx);
+
_iwl_op_mode_stop(drv);
iwl_dealloc_ucode(drv);
- mutex_lock(&iwlwifi_opmode_table_mtx);
/*
* List is empty (this item wasn't added)
* when firmware loading failed -- in that
@@ -1843,8 +1874,6 @@ void iwl_drv_stop(struct iwl_drv *drv)
kfree(drv);
}
-#define ENABLE_INI (IWL_DBG_TLV_MAX_PRESET + 1)
-
/* shared module parameters */
struct iwl_mod_params iwlwifi_mod_params = {
.fw_restart = true,
@@ -1964,38 +1993,7 @@ module_param_named(uapsd_disable, iwlwifi_mod_params.uapsd_disable, uint, 0644);
MODULE_PARM_DESC(uapsd_disable,
"disable U-APSD functionality bitmap 1: BSS 2: P2P Client (default: 3)");
-static int enable_ini_set(const char *arg, const struct kernel_param *kp)
-{
- int ret = 0;
- bool res;
- __u32 new_enable_ini;
-
- /* in case the argument type is a number */
- ret = kstrtou32(arg, 0, &new_enable_ini);
- if (!ret) {
- if (new_enable_ini > ENABLE_INI) {
- pr_err("enable_ini cannot be %d, in range 0-16\n", new_enable_ini);
- return -EINVAL;
- }
- goto out;
- }
-
- /* in case the argument type is boolean */
- ret = kstrtobool(arg, &res);
- if (ret)
- return ret;
- new_enable_ini = (res ? ENABLE_INI : 0);
-
-out:
- iwlwifi_mod_params.enable_ini = new_enable_ini;
- return 0;
-}
-
-static const struct kernel_param_ops enable_ini_ops = {
- .set = enable_ini_set
-};
-
-module_param_cb(enable_ini, &enable_ini_ops, &iwlwifi_mod_params.enable_ini, 0644);
+module_param_named(enable_ini, iwlwifi_mod_params.enable_ini, uint, 0444);
MODULE_PARM_DESC(enable_ini,
"0:disable, 1-15:FW_DBG_PRESET Values, 16:enabled without preset value defined,"
"Debug INI TLV FW debug infrastructure (default: 16)");
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.h b/drivers/net/wireless/intel/iwlwifi/iwl-drv.h
index 6c19989e4ab7..3d1a27ba35c6 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.h
@@ -56,7 +56,7 @@ struct iwl_cfg;
/**
* iwl_drv_start - start the drv
*
- * @trans_ops: the ops of the transport
+ * @trans: the transport
*
* starts the driver: fetches the firmware. This should be called by bus
* specific system flows implementations. For example, the bus specific probe
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c
index d7a7835b935c..5aab64c63a13 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright (C) 2005-2014, 2018-2020 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2021, 2023 Intel Corporation
* Copyright (C) 2015 Intel Mobile Communications GmbH
*/
#include <linux/types.h>
@@ -721,6 +721,9 @@ void iwl_init_ht_hw_capab(struct iwl_trans *trans,
ht_info->ampdu_density = IEEE80211_HT_MPDU_DENSITY_4;
ht_info->mcs.rx_mask[0] = 0xFF;
+ ht_info->mcs.rx_mask[1] = 0x00;
+ ht_info->mcs.rx_mask[2] = 0x00;
+
if (rx_chains >= 2)
ht_info->mcs.rx_mask[1] = 0xFF;
if (rx_chains >= 3)
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h b/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h
index 0e8ca761d24b..34a178a2eb5d 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2005-2014, 2018, 2020-2022 Intel Corporation
+ * Copyright (C) 2005-2014, 2018, 2020-2023 Intel Corporation
* Copyright (C) 2015 Intel Mobile Communications GmbH
*/
#ifndef __iwl_eeprom_parse_h__
@@ -61,7 +61,7 @@ struct iwl_nvm_data {
/**
* iwl_parse_eeprom_data - parse EEPROM data and return values
*
- * @dev: device pointer we're parsing for, for debug only
+ * @trans: transport we're parsing for, for debug only
* @cfg: device configuration for parsing and overrides
* @eeprom: the EEPROM data
* @eeprom_size: length of the EEPROM data
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-fh.h b/drivers/net/wireless/intel/iwlwifi/iwl-fh.h
index 41ab5a6e2dd3..e0400ba2ab74 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-fh.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-fh.h
@@ -681,12 +681,13 @@ struct iwl_tfh_tb {
/**
* struct iwl_tfd - Transmit Frame Descriptor (TFD)
- * @ __reserved1[3] reserved
- * @ num_tbs 0-4 number of active tbs
- * 5 reserved
- * 6-7 padding (not used)
- * @ tbs[20] transmit frame buffer descriptors
- * @ __pad padding
+ * @__reserved1: reserved
+ * @num_tbs:
+ * 0-4 number of active tbs
+ * 5 reserved
+ * 6-7 padding (not used)
+ * @tbs: transmit frame buffer descriptors
+ * @__pad: padding
*/
struct iwl_tfd {
u8 __reserved1[3];
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
index 31176897b746..6015e1255d2a 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
@@ -671,7 +671,8 @@ static const struct ieee80211_sband_iftype_data iwl_he_eht_capa[] = {
IEEE80211_EHT_MAC_CAP0_EPCS_PRIO_ACCESS |
IEEE80211_EHT_MAC_CAP0_OM_CONTROL |
IEEE80211_EHT_MAC_CAP0_TRIG_TXOP_SHARING_MODE1 |
- IEEE80211_EHT_MAC_CAP0_TRIG_TXOP_SHARING_MODE2,
+ IEEE80211_EHT_MAC_CAP0_TRIG_TXOP_SHARING_MODE2 |
+ IEEE80211_EHT_MAC_CAP0_SCS_TRAFFIC_DESC,
.phy_cap_info[0] =
IEEE80211_EHT_PHY_CAP0_242_TONE_RU_GT20MHZ |
IEEE80211_EHT_PHY_CAP0_NDP_4_EHT_LFT_32_GI |
@@ -962,6 +963,9 @@ iwl_nvm_fixup_sband_iftd(struct iwl_trans *trans,
}
}
} else {
+ struct ieee80211_he_mcs_nss_supp *he_mcs_nss_supp =
+ &iftype_data->he_cap.he_mcs_nss_supp;
+
if (iftype_data->eht_cap.has_eht) {
struct ieee80211_eht_mcs_nss_supp *mcs_nss =
&iftype_data->eht_cap.eht_mcs_nss_supp;
@@ -980,6 +984,19 @@ iwl_nvm_fixup_sband_iftd(struct iwl_trans *trans,
iftype_data->he_cap.he_cap_elem.phy_cap_info[7] |=
IEEE80211_HE_PHY_CAP7_MAX_NC_1;
}
+
+ he_mcs_nss_supp->rx_mcs_80 |=
+ cpu_to_le16(IEEE80211_HE_MCS_NOT_SUPPORTED << 2);
+ he_mcs_nss_supp->tx_mcs_80 |=
+ cpu_to_le16(IEEE80211_HE_MCS_NOT_SUPPORTED << 2);
+ he_mcs_nss_supp->rx_mcs_160 |=
+ cpu_to_le16(IEEE80211_HE_MCS_NOT_SUPPORTED << 2);
+ he_mcs_nss_supp->tx_mcs_160 |=
+ cpu_to_le16(IEEE80211_HE_MCS_NOT_SUPPORTED << 2);
+ he_mcs_nss_supp->rx_mcs_80p80 |=
+ cpu_to_le16(IEEE80211_HE_MCS_NOT_SUPPORTED << 2);
+ he_mcs_nss_supp->tx_mcs_80p80 |=
+ cpu_to_le16(IEEE80211_HE_MCS_NOT_SUPPORTED << 2);
}
if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210 && !is_ap)
@@ -1052,10 +1069,6 @@ static void iwl_init_he_hw_capab(struct iwl_trans *trans,
struct ieee80211_sband_iftype_data *iftype_data;
int i;
- /* should only initialize once */
- if (WARN_ON(sband->iftype_data))
- return;
-
BUILD_BUG_ON(sizeof(data->iftd.low) != sizeof(iwl_he_eht_capa));
BUILD_BUG_ON(sizeof(data->iftd.high) != sizeof(iwl_he_eht_capa));
BUILD_BUG_ON(sizeof(data->iftd.uhb) != sizeof(iwl_he_eht_capa));
@@ -1077,8 +1090,8 @@ static void iwl_init_he_hw_capab(struct iwl_trans *trans,
memcpy(iftype_data, iwl_he_eht_capa, sizeof(iwl_he_eht_capa));
- sband->iftype_data = iftype_data;
- sband->n_iftype_data = ARRAY_SIZE(iwl_he_eht_capa);
+ _ieee80211_set_sband_iftype_data(sband, iftype_data,
+ ARRAY_SIZE(iwl_he_eht_capa));
for (i = 0; i < sband->n_iftype_data; i++)
iwl_nvm_fixup_sband_iftd(trans, data, sband, &iftype_data[i],
@@ -1087,6 +1100,37 @@ static void iwl_init_he_hw_capab(struct iwl_trans *trans,
iwl_init_he_6ghz_capa(trans, data, sband, tx_chains, rx_chains);
}
+void iwl_reinit_cab(struct iwl_trans *trans, struct iwl_nvm_data *data,
+ u8 tx_chains, u8 rx_chains, const struct iwl_fw *fw)
+{
+ struct ieee80211_supported_band *sband;
+
+ sband = &data->bands[NL80211_BAND_2GHZ];
+ iwl_init_ht_hw_capab(trans, data, &sband->ht_cap, NL80211_BAND_2GHZ,
+ tx_chains, rx_chains);
+
+ if (data->sku_cap_11ax_enable && !iwlwifi_mod_params.disable_11ax)
+ iwl_init_he_hw_capab(trans, data, sband, tx_chains, rx_chains,
+ fw);
+
+ sband = &data->bands[NL80211_BAND_5GHZ];
+ iwl_init_ht_hw_capab(trans, data, &sband->ht_cap, NL80211_BAND_5GHZ,
+ tx_chains, rx_chains);
+ if (data->sku_cap_11ac_enable && !iwlwifi_mod_params.disable_11ac)
+ iwl_init_vht_hw_capab(trans, data, &sband->vht_cap,
+ tx_chains, rx_chains);
+
+ if (data->sku_cap_11ax_enable && !iwlwifi_mod_params.disable_11ax)
+ iwl_init_he_hw_capab(trans, data, sband, tx_chains, rx_chains,
+ fw);
+
+ sband = &data->bands[NL80211_BAND_6GHZ];
+ if (data->sku_cap_11ax_enable && !iwlwifi_mod_params.disable_11ax)
+ iwl_init_he_hw_capab(trans, data, sband, tx_chains, rx_chains,
+ fw);
+}
+IWL_EXPORT_SYMBOL(iwl_reinit_cab);
+
static void iwl_init_sbands(struct iwl_trans *trans,
struct iwl_nvm_data *data,
const void *nvm_ch_flags, u8 tx_chains,
@@ -1365,7 +1409,7 @@ iwl_nvm_no_wide_in_5ghz(struct iwl_trans *trans, const struct iwl_cfg *cfg,
struct iwl_nvm_data *
iwl_parse_mei_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg,
const struct iwl_mei_nvm *mei_nvm,
- const struct iwl_fw *fw)
+ const struct iwl_fw *fw, u8 tx_ant, u8 rx_ant)
{
struct iwl_nvm_data *data;
u32 sbands_flags = 0;
@@ -1392,6 +1436,10 @@ iwl_parse_mei_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg,
tx_chains &= data->valid_tx_ant;
if (data->valid_rx_ant)
rx_chains &= data->valid_rx_ant;
+ if (tx_ant)
+ tx_chains &= tx_ant;
+ if (rx_ant)
+ rx_chains &= rx_ant;
data->sku_cap_mimo_disabled = false;
data->sku_cap_band_24ghz_enable = true;
@@ -1957,7 +2005,8 @@ out:
IWL_EXPORT_SYMBOL(iwl_read_external_nvm);
struct iwl_nvm_data *iwl_get_nvm(struct iwl_trans *trans,
- const struct iwl_fw *fw)
+ const struct iwl_fw *fw,
+ u8 set_tx_ant, u8 set_rx_ant)
{
struct iwl_nvm_get_info cmd = {};
struct iwl_nvm_data *nvm;
@@ -1971,6 +2020,9 @@ struct iwl_nvm_data *iwl_get_nvm(struct iwl_trans *trans,
bool empty_otp;
u32 mac_flags;
u32 sbands_flags = 0;
+ u8 tx_ant;
+ u8 rx_ant;
+
/*
* All the values in iwl_nvm_get_info_rsp v4 are the same as
* in v3, except for the channel profile part of the
@@ -2058,10 +2110,15 @@ struct iwl_nvm_data *iwl_get_nvm(struct iwl_trans *trans,
channel_profile = v4 ? (void *)rsp->regulatory.channel_profile :
(void *)rsp_v3->regulatory.channel_profile;
- iwl_init_sbands(trans, nvm,
- channel_profile,
- nvm->valid_tx_ant & fw->valid_tx_ant,
- nvm->valid_rx_ant & fw->valid_rx_ant,
+ tx_ant = nvm->valid_tx_ant & fw->valid_tx_ant;
+ rx_ant = nvm->valid_rx_ant & fw->valid_rx_ant;
+
+ if (set_tx_ant)
+ tx_ant &= set_tx_ant;
+ if (set_rx_ant)
+ rx_ant &= set_rx_ant;
+
+ iwl_init_sbands(trans, nvm, channel_profile, tx_ant, rx_ant,
sbands_flags, v4, fw);
iwl_free_resp(&hcmd);
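iwl_get_nvm() (and, in the header below, iwl_parse_mei_nvm_data()) now take set_tx_ant/set_rx_ant overrides: the effective chain masks are the intersection of what the NVM reports, what the firmware supports, and the caller's override when it is non-zero. The whole operation reduces to bitwise ANDs; a minimal sketch of the intent:

#include <stdint.h>

/* sketch: an override of 0 means "no additional restriction" */
static uint8_t effective_chains(uint8_t nvm_ant, uint8_t fw_ant,
				uint8_t override)
{
	uint8_t ant = nvm_ant & fw_ant;

	if (override)
		ant &= override;
	return ant;
}

The new iwl_reinit_cab() export then rebuilds the per-band HT/VHT/HE capabilities whenever those chain masks change after the initial parse.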
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h
index c79f72d54482..651ed25b683b 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2005-2015, 2018-2022 Intel Corporation
+ * Copyright (C) 2005-2015, 2018-2023 Intel Corporation
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
#ifndef __iwl_nvm_parse_h__
@@ -21,7 +21,7 @@ enum iwl_nvm_sbands_flags {
IWL_NVM_SBANDS_FLAGS_NO_WIDE_IN_5GHZ = BIT(1),
};
-/**
+/*
* iwl_parse_nvm_data - parse NVM data and return values
*
* This function parses all NVM values we need and then
@@ -73,21 +73,28 @@ int iwl_read_external_nvm(struct iwl_trans *trans,
void iwl_nvm_fixups(u32 hw_id, unsigned int section, u8 *data,
unsigned int len);
-/**
+/*
* iwl_get_nvm - retrieve NVM data from firmware
*
* Allocates a new iwl_nvm_data structure, fills it with
* NVM data, and returns it to caller.
*/
struct iwl_nvm_data *iwl_get_nvm(struct iwl_trans *trans,
- const struct iwl_fw *fw);
+ const struct iwl_fw *fw,
+ u8 set_tx_ant, u8 set_rx_ant);
-/**
+/*
* iwl_parse_mei_nvm_data - parse the mei_nvm_data and get an iwl_nvm_data
*/
struct iwl_nvm_data *
iwl_parse_mei_nvm_data(struct iwl_trans *trans, const struct iwl_cfg *cfg,
const struct iwl_mei_nvm *mei_nvm,
- const struct iwl_fw *fw);
+ const struct iwl_fw *fw, u8 set_tx_ant, u8 set_rx_ant);
+
+/*
+ * iwl_reinit_cab - to be called when the tx_chains or rx_chains are modified
+ */
+void iwl_reinit_cab(struct iwl_trans *trans, struct iwl_nvm_data *data,
+ u8 tx_chains, u8 rx_chains, const struct iwl_fw *fw);
#endif /* __iwl_nvm_parse_h__ */
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
index 6dd381ff0f9e..dd32c287b983 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-prph.h
@@ -348,8 +348,8 @@
#define RFIC_REG_RD 0xAD0470
#define WFPM_CTRL_REG 0xA03030
#define WFPM_OTP_CFG1_ADDR 0x00a03098
-#define WFPM_OTP_CFG1_IS_JACKET_BIT BIT(4)
-#define WFPM_OTP_CFG1_IS_CDB_BIT BIT(5)
+#define WFPM_OTP_CFG1_IS_JACKET_BIT BIT(5)
+#define WFPM_OTP_CFG1_IS_CDB_BIT BIT(4)
#define WFPM_OTP_BZ_BNJ_JACKET_BIT 5
#define WFPM_OTP_BZ_BNJ_CDB_BIT 4
#define WFPM_OTP_CFG1_IS_JACKET(_val) (((_val) & 0x00000020) >> WFPM_OTP_BZ_BNJ_JACKET_BIT)
@@ -365,7 +365,6 @@
#define DBGI_SRAM_FIFO_POINTERS_WR_PTR_MSK 0x00000FFF
enum {
- ENABLE_WFPM = BIT(31),
WFPM_AUX_CTL_AUX_IF_MAC_OWNER_MSK = 0x80000000,
};
@@ -383,7 +382,7 @@ enum {
#define PREG_PRPH_WPROT_22000 0xA04D00
#define SB_MODIFY_CFG_FLAG 0xA03088
-#define SB_CFG_RESIDES_IN_OTP_MASK 0x10
+#define SB_CFG_RESIDES_IN_ROM 0x80
#define SB_CPU_1_STATUS 0xA01E30
#define SB_CPU_2_STATUS 0xA01E34
#define UMAG_SB_CPU_1_STATUS 0xA038C0
@@ -424,14 +423,14 @@ enum {
* reserved: bits 12-18
* slave_exist: bit 19
* dash: bits 20-23
- * step: bits 24-26
- * flavor: bits 27-31
+ * step: bits 24-27
+ * flavor: bits 28-31
*/
#define REG_CRF_ID_TYPE(val) (((val) & 0x00000FFF) >> 0)
#define REG_CRF_ID_SLAVE(val) (((val) & 0x00080000) >> 19)
#define REG_CRF_ID_DASH(val) (((val) & 0x00F00000) >> 20)
-#define REG_CRF_ID_STEP(val) (((val) & 0x07000000) >> 24)
-#define REG_CRF_ID_FLAVOR(val) (((val) & 0xF8000000) >> 27)
+#define REG_CRF_ID_STEP(val) (((val) & 0x0F000000) >> 24)
+#define REG_CRF_ID_FLAVOR(val) (((val) & 0xF0000000) >> 28)
#define UREG_CHICK (0xA05C00)
#define UREG_CHICK_MSI_ENABLE BIT(24)
@@ -452,6 +451,7 @@ enum {
#define REG_CRF_ID_TYPE_FM 0x910
#define REG_CRF_ID_TYPE_FMI 0x930
#define REG_CRF_ID_TYPE_FMR 0x900
+#define REG_CRF_ID_TYPE_WHP 0xA10
#define HPM_DEBUG 0xA03440
#define PERSISTENCE_BIT BIT(12)
@@ -516,4 +516,8 @@ enum {
#define WFPM_LMAC2_PD_NOTIFICATION 0xA033CC
#define WFPM_LMAC2_PD_RE_READ BIT(31)
+#define DPHYIP_INDIRECT 0xA2D800
+#define DPHYIP_INDIRECT_RD_MSK 0xFF000000
+#define DPHYIP_INDIRECT_RD_SHIFT 24
+
#endif /* __iwl_prph_h__ */
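Besides the new WHP CRF type and the DPHYIP indirect-access registers, iwl-prph.h widens the CRF ID step field from three bits (24-26) to four (24-27) and moves flavor up to bits 28-31. Decoding stays a plain mask-and-shift; a short sketch against the new layout:

#include <stdint.h>

/* new CRF ID layout: step = bits 24-27, flavor = bits 28-31 */
static uint32_t crf_id_step(uint32_t val)
{
	return (val & 0x0f000000u) >> 24;
}

static uint32_t crf_id_flavor(uint32_t val)
{
	return (val & 0xf0000000u) >> 28;
}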
diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
index 3b6b0e03037f..05e72a2125b3 100644
--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
@@ -56,6 +56,10 @@
* 6) Eventually, the free function will be called.
*/
+/* default preset 0 (start from bit 16) */

+#define IWL_FW_DBG_DOMAIN_POS 16
+#define IWL_FW_DBG_DOMAIN BIT(IWL_FW_DBG_DOMAIN_POS)
+
#define IWL_TRANS_FW_DBG_DOMAIN(trans) IWL_FW_INI_DOMAIN_ALWAYS_ON
#define FH_RSCSR_FRAME_SIZE_MSK 0x00003FFF /* bits 0-13 */
@@ -105,6 +109,7 @@ static inline u32 iwl_rx_packet_payload_len(const struct iwl_rx_packet *pkt)
* @CMD_ASYNC: Return right away and don't wait for the response
* @CMD_WANT_SKB: Not valid with CMD_ASYNC. The caller needs the buffer of
* the response. The caller needs to call iwl_free_resp when done.
+ * @CMD_SEND_IN_RFKILL: Send the command even if the NIC is in RF-kill.
* @CMD_WANT_ASYNC_CALLBACK: the op_mode's async callback function must be
* called after this command completes. Valid only with CMD_ASYNC.
* @CMD_SEND_IN_D3: Allow the command to be sent in D3 mode, relevant to
@@ -274,7 +279,7 @@ static inline void iwl_free_rxb(struct iwl_rx_cmd_buffer *r)
#define IWL_MGMT_TID 15
#define IWL_FRAME_LIMIT 64
#define IWL_MAX_RX_HW_QUEUES 16
-#define IWL_9000_MAX_RX_HW_QUEUES 6
+#define IWL_9000_MAX_RX_HW_QUEUES 1
/**
* enum iwl_wowlan_status - WoWLAN image/device status
@@ -584,7 +589,7 @@ struct iwl_trans_ops {
int (*tx)(struct iwl_trans *trans, struct sk_buff *skb,
struct iwl_device_tx_cmd *dev_cmd, int queue);
void (*reclaim)(struct iwl_trans *trans, int queue, int ssn,
- struct sk_buff_head *skbs);
+ struct sk_buff_head *skbs, bool is_flush);
void (*set_q_ptrs)(struct iwl_trans *trans, int queue, int ptr);
@@ -734,6 +739,7 @@ struct iwl_dram_data {
};
/**
+ * struct iwl_dram_regions - DRAM regions container structure
* @drams: array of several DRAM areas that contains the pnvm and power
* reduction table payloads.
* @n_regions: number of DRAM regions that were allocated
@@ -833,6 +839,7 @@ struct iwl_pc_data {
* @dump_file_name_ext_valid: dump file name extension if valid or not
* @num_pc: number of program counter for cpu
* @pc_data: details of the program counter
+ * @yoyo_bin_loaded: tells if a yoyo debug file has been loaded
*/
struct iwl_trans_debug {
u8 n_dest_reg;
@@ -862,8 +869,7 @@ struct iwl_trans_debug {
u64 unsupported_region_msk;
struct iwl_ucode_tlv *active_regions[IWL_FW_INI_MAX_REGION_ID];
struct list_head debug_info_tlv_list;
- struct iwl_dbg_tlv_time_point_data
- time_point[IWL_FW_INI_TIME_POINT_NUM];
+ struct iwl_dbg_tlv_time_point_data time_point[IWL_FW_INI_TIME_POINT_NUM];
struct list_head periodic_trig_list;
u32 domains_bitmap;
@@ -875,6 +881,7 @@ struct iwl_trans_debug {
bool dump_file_name_ext_valid;
u32 num_pc;
struct iwl_pc_data *pc_data;
+ bool yoyo_bin_loaded;
};
struct iwl_dma_ptr {
@@ -916,7 +923,6 @@ struct iwl_pcie_first_tb_buf {
/**
* struct iwl_txq - Tx Queue for DMA
- * @q: generic Rx/Tx queue descriptor
* @tfds: transmit frame descriptors (DMA memory)
* @first_tb_bufs: start of command headers, including scratch buffers, for
* the writeback -- this is DMA memory and an array holding one buffer
@@ -1060,11 +1066,10 @@ struct iwl_trans_txqs {
* starting the firmware, used for tracing
* @rx_mpdu_cmd_hdr_size: used for tracing, amount of data before the
* start of the 802.11 header in the @rx_mpdu_cmd
- * @dflt_pwr_limit: default power limit fetched from the platform (ACPI)
* @system_pm_mode: the system-wide power management mode in use.
* This mode is set dynamically, depending on the WoWLAN values
* configured from the userspace at runtime.
- * @iwl_trans_txqs: transport tx queues data.
+ * @txqs: transport tx queues data.
* @mbx_addr_0_step: step address data 0
* @mbx_addr_1_step: step address data 1
* @pcie_link_speed: current PCIe link speed (%PCI_EXP_LNKSTA_CLS_*),
@@ -1269,14 +1274,15 @@ static inline int iwl_trans_tx(struct iwl_trans *trans, struct sk_buff *skb,
}
static inline void iwl_trans_reclaim(struct iwl_trans *trans, int queue,
- int ssn, struct sk_buff_head *skbs)
+ int ssn, struct sk_buff_head *skbs,
+ bool is_flush)
{
if (WARN_ON_ONCE(trans->state != IWL_TRANS_FW_ALIVE)) {
IWL_ERR(trans, "%s bad state = %d\n", __func__, trans->state);
return;
}
- trans->ops->reclaim(trans, queue, ssn, skbs);
+ trans->ops->reclaim(trans, queue, ssn, skbs, is_flush);
}
static inline void iwl_trans_set_q_ptrs(struct iwl_trans *trans, int queue,
diff --git a/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h b/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h
index 655d95d3a068..1f3c885aeb65 100644
--- a/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h
+++ b/drivers/net/wireless/intel/iwlwifi/mei/iwl-mei.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
- * Copyright (C) 2021 - 2022 Intel Corporation
+ * Copyright (C) 2021-2023 Intel Corporation
*/
#ifndef __iwl_mei_h__
@@ -493,7 +493,7 @@ static inline void iwl_mei_set_power_limit(__le16 *power_limit)
static inline int iwl_mei_register(void *priv,
const struct iwl_mei_ops *ops)
-{ return 0; }
+{ return -EOPNOTSUPP; }
static inline void iwl_mei_start_unregister(void)
{}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
index 243eccc68cb0..c832068b5718 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/constants.h
@@ -60,6 +60,7 @@
#define IWL_MVM_UAPSD_NONAGG_PERIOD 5000 /* msecs */
#define IWL_MVM_UAPSD_NOAGG_LIST_LEN IWL_MVM_UAPSD_NOAGG_BSSIDS_NUM
#define IWL_MVM_NON_TRANSMITTING_AP 0
+#define IWL_MVM_CONN_LISTEN_INTERVAL 10
#define IWL_MVM_RS_NUM_TRY_BEFORE_ANT_TOGGLE 1
#define IWL_MVM_RS_HT_VHT_RETRIES_PER_RATE 2
#define IWL_MVM_RS_HT_VHT_RETRIES_PER_RATE_TW 1
@@ -118,5 +119,6 @@
#define IWL_MVM_DISABLE_AP_FILS false
#define IWL_MVM_6GHZ_PASSIVE_SCAN_TIMEOUT 3000 /* in seconds */
#define IWL_MVM_6GHZ_PASSIVE_SCAN_ASSOC_TIMEOUT 60 /* in seconds */
+#define IWL_MVM_AUTO_EML_ENABLE true
#endif /* __MVM_CONSTANTS_H */
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
index f6488b4bbe68..92c45571bd69 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
@@ -818,7 +818,7 @@ static int iwl_mvm_d3_reprogram(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
if (ret)
IWL_ERR(mvm, "Failed to send quota: %d\n", ret);
- if (iwl_mvm_is_lar_supported(mvm) && iwl_mvm_init_fw_regd(mvm))
+ if (iwl_mvm_is_lar_supported(mvm) && iwl_mvm_init_fw_regd(mvm, false))
IWL_ERR(mvm, "Failed to initialize D3 LAR information\n");
return 0;
@@ -1438,6 +1438,7 @@ struct iwl_wowlan_status_data {
} ptk;
struct iwl_multicast_key_data igtk;
+ struct iwl_multicast_key_data bigtk[WOWLAN_BIGTK_KEYS_NUM];
u8 *wake_packet;
};
@@ -1781,8 +1782,8 @@ static void iwl_mvm_set_key_rx_seq(struct ieee80211_key_conf *key,
struct iwl_mvm_d3_gtk_iter_data {
struct iwl_mvm *mvm;
struct iwl_wowlan_status_data *status;
- u32 gtk_cipher, igtk_cipher;
- bool unhandled_cipher, igtk_support;
+ u32 gtk_cipher, igtk_cipher, bigtk_cipher;
+ bool unhandled_cipher, igtk_support, bigtk_support;
int num_keys;
};
@@ -1817,6 +1818,9 @@ static void iwl_mvm_d3_find_last_keys(struct ieee80211_hw *hw,
if (data->igtk_support &&
(key->keyidx == 4 || key->keyidx == 5)) {
data->igtk_cipher = key->cipher;
+ } else if (data->bigtk_support &&
+ (key->keyidx == 6 || key->keyidx == 7)) {
+ data->bigtk_cipher = key->cipher;
} else {
data->unhandled_cipher = true;
return;
@@ -1848,6 +1852,24 @@ iwl_mvm_d3_set_igtk_bigtk_ipn(const struct iwl_multicast_key_data *key,
}
}
+static void
+iwl_mvm_d3_update_igtk_bigtk(struct iwl_wowlan_status_data *status,
+ struct ieee80211_key_conf *key,
+ struct iwl_multicast_key_data *key_data)
+{
+ if (status->num_of_gtk_rekeys && key_data->len) {
+ /* remove rekeyed key */
+ ieee80211_remove_key(key);
+ } else {
+ struct ieee80211_key_seq seq;
+
+ iwl_mvm_d3_set_igtk_bigtk_ipn(key_data,
+ &seq,
+ key->cipher);
+ ieee80211_set_key_rx_seq(key, 0, &seq);
+ }
+}
+
static void iwl_mvm_d3_update_keys(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta,
@@ -1900,17 +1922,14 @@ static void iwl_mvm_d3_update_keys(struct ieee80211_hw *hw,
case WLAN_CIPHER_SUITE_BIP_CMAC_256:
case WLAN_CIPHER_SUITE_AES_CMAC:
if (key->keyidx == 4 || key->keyidx == 5) {
- /* remove rekeyed key */
- if (status->num_of_gtk_rekeys) {
- ieee80211_remove_key(key);
- } else {
- struct ieee80211_key_seq seq;
+ iwl_mvm_d3_update_igtk_bigtk(status, key,
+ &status->igtk);
+ }
+ if (key->keyidx == 6 || key->keyidx == 7) {
+ u8 idx = key->keyidx == status->bigtk[1].id;
- iwl_mvm_d3_set_igtk_bigtk_ipn(&status->igtk,
- &seq,
- key->cipher);
- ieee80211_set_key_rx_seq(key, 0, &seq);
- }
+ iwl_mvm_d3_update_igtk_bigtk(status, key,
+ &status->bigtk[idx]);
}
}
}
@@ -2012,6 +2031,16 @@ iwl_mvm_d3_igtk_bigtk_rekey_add(struct iwl_wowlan_status_data *status,
if (IS_ERR(key_config))
return false;
ieee80211_set_key_rx_seq(key_config, 0, &seq);
+
+ if (key_config->keyidx == 4 || key_config->keyidx == 5) {
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int link_id = vif->active_links ? __ffs(vif->active_links) : 0;
+ struct iwl_mvm_vif_link_info *mvm_link =
+ mvmvif->link[link_id];
+
+ mvm_link->igtk = key_config;
+ }
+
return true;
}
@@ -2042,6 +2071,8 @@ static bool iwl_mvm_setup_connection_keep(struct iwl_mvm *mvm,
.mvm = mvm,
.status = status,
};
+ int i;
+
u32 disconnection_reasons =
IWL_WOWLAN_WAKEUP_BY_DISCONNECTION_ON_MISSED_BEACON |
IWL_WOWLAN_WAKEUP_BY_DISCONNECTION_ON_DEAUTH;
@@ -2058,6 +2089,11 @@ static bool iwl_mvm_setup_connection_keep(struct iwl_mvm *mvm,
0))
gtkdata.igtk_support = true;
+ if (iwl_fw_lookup_notif_ver(mvm->fw, PROT_OFFLOAD_GROUP,
+ WOWLAN_INFO_NOTIFICATION,
+ 0) >= 3)
+ gtkdata.bigtk_support = true;
+
/* find last GTK that we used initially, if any */
ieee80211_iter_keys(mvm->hw, vif,
iwl_mvm_d3_find_last_keys, &gtkdata);
@@ -2088,6 +2124,13 @@ static bool iwl_mvm_setup_connection_keep(struct iwl_mvm *mvm,
&status->igtk))
return false;
+ for (i = 0; i < ARRAY_SIZE(status->bigtk); i++) {
+ if (!iwl_mvm_d3_igtk_bigtk_rekey_add(status, vif,
+ gtkdata.bigtk_cipher,
+ &status->bigtk[i]))
+ return false;
+ }
+
ieee80211_gtk_rekey_notify(vif, vif->bss_conf.bssid,
(void *)&replay_ctr, GFP_KERNEL);
}
@@ -2172,6 +2215,37 @@ static void iwl_mvm_convert_igtk(struct iwl_wowlan_status_data *status,
memcpy(status->igtk.ipn, data->ipn, sizeof(data->ipn));
}
+static void iwl_mvm_convert_bigtk(struct iwl_wowlan_status_data *status,
+ const struct iwl_wowlan_igtk_status *data)
+{
+ int data_idx, status_idx = 0;
+
+ BUILD_BUG_ON(ARRAY_SIZE(status->bigtk) < WOWLAN_BIGTK_KEYS_NUM);
+
+ for (data_idx = 0; data_idx < WOWLAN_BIGTK_KEYS_NUM; data_idx++) {
+ if (!data[data_idx].key_len)
+ continue;
+
+ status->bigtk[status_idx].len = data[data_idx].key_len;
+ status->bigtk[status_idx].flags = data[data_idx].key_flags;
+ status->bigtk[status_idx].id =
+ u32_get_bits(data[data_idx].key_flags,
+ IWL_WOWLAN_IGTK_BIGTK_IDX_MASK)
+ + WOWLAN_BIGTK_MIN_INDEX;
+
+ BUILD_BUG_ON(sizeof(status->bigtk[status_idx].key) <
+ sizeof(data[data_idx].key));
+ BUILD_BUG_ON(sizeof(status->bigtk[status_idx].ipn) <
+ sizeof(data[data_idx].ipn));
+
+ memcpy(status->bigtk[status_idx].key, data[data_idx].key,
+ sizeof(data[data_idx].key));
+ memcpy(status->bigtk[status_idx].ipn, data[data_idx].ipn,
+ sizeof(data[data_idx].ipn));
+ status_idx++;
+ }
+}
+
static void iwl_mvm_parse_wowlan_info_notif(struct iwl_mvm *mvm,
struct iwl_wowlan_info_notif *data,
struct iwl_wowlan_status_data *status,
@@ -2194,7 +2268,42 @@ static void iwl_mvm_parse_wowlan_info_notif(struct iwl_mvm *mvm,
iwl_mvm_convert_key_counters_v5(status, &data->gtk[0].sc);
iwl_mvm_convert_gtk_v3(status, data->gtk);
iwl_mvm_convert_igtk(status, &data->igtk[0]);
+ iwl_mvm_convert_bigtk(status, data->bigtk);
+ status->replay_ctr = le64_to_cpu(data->replay_ctr);
+ status->pattern_number = le16_to_cpu(data->pattern_number);
+ for (i = 0; i < IWL_MAX_TID_COUNT; i++)
+ status->qos_seq_ctr[i] =
+ le16_to_cpu(data->qos_seq_ctr[i]);
+ status->wakeup_reasons = le32_to_cpu(data->wakeup_reasons);
+ status->num_of_gtk_rekeys =
+ le32_to_cpu(data->num_of_gtk_rekeys);
+ status->received_beacons = le32_to_cpu(data->received_beacons);
+ status->tid_tear_down = data->tid_tear_down;
+}
+
+static void
+iwl_mvm_parse_wowlan_info_notif_v2(struct iwl_mvm *mvm,
+ struct iwl_wowlan_info_notif_v2 *data,
+ struct iwl_wowlan_status_data *status,
+ u32 len)
+{
+ u32 i;
+
+ if (!data) {
+ IWL_ERR(mvm, "iwl_wowlan_info_notif data is NULL\n");
+ status = NULL;
+ return;
+ }
+
+ if (len < sizeof(*data)) {
+ IWL_ERR(mvm, "Invalid WoWLAN info notification!\n");
+ status = NULL;
+ return;
+ }
+
+ iwl_mvm_convert_key_counters_v5(status, &data->gtk[0].sc);
+ iwl_mvm_convert_gtk_v3(status, data->gtk);
+ iwl_mvm_convert_igtk(status, &data->igtk[0]);
status->replay_ctr = le64_to_cpu(data->replay_ctr);
status->pattern_number = le16_to_cpu(data->pattern_number);
for (i = 0; i < IWL_MAX_TID_COUNT; i++)
@@ -2866,7 +2975,7 @@ static bool iwl_mvm_wait_d3_notif(struct iwl_notif_wait_data *notif_wait,
struct iwl_mvm *mvm =
container_of(notif_wait, struct iwl_mvm, notif_wait);
struct iwl_d3_data *d3_data = data;
- u32 len;
+ u32 len = iwl_rx_packet_payload_len(pkt);
int ret;
int wowlan_info_ver = iwl_fw_lookup_notif_ver(mvm->fw,
PROT_OFFLOAD_GROUP,
@@ -2876,7 +2985,6 @@ static bool iwl_mvm_wait_d3_notif(struct iwl_notif_wait_data *notif_wait,
switch (WIDE_ID(pkt->hdr.group_id, pkt->hdr.cmd)) {
case WIDE_ID(PROT_OFFLOAD_GROUP, WOWLAN_INFO_NOTIFICATION): {
- struct iwl_wowlan_info_notif *notif;
if (d3_data->notif_received & IWL_D3_NOTIF_WOWLAN_INFO) {
/* We might get two notifications due to dual bss */
@@ -2886,26 +2994,39 @@ static bool iwl_mvm_wait_d3_notif(struct iwl_notif_wait_data *notif_wait,
}
if (wowlan_info_ver < 2) {
- struct iwl_wowlan_info_notif_v1 *notif_v1 = (void *)pkt->data;
+ struct iwl_wowlan_info_notif_v1 *notif_v1 =
+ (void *)pkt->data;
+ struct iwl_wowlan_info_notif_v2 *notif_v2;
- notif = kmemdup(notif_v1, sizeof(*notif), GFP_ATOMIC);
- if (!notif)
+ notif_v2 = kmemdup(notif_v1, sizeof(*notif_v2), GFP_ATOMIC);
+
+ if (!notif_v2)
return false;
- notif->tid_tear_down = notif_v1->tid_tear_down;
- notif->station_id = notif_v1->station_id;
- memset_after(notif, 0, station_id);
+ notif_v2->tid_tear_down = notif_v1->tid_tear_down;
+ notif_v2->station_id = notif_v1->station_id;
+ memset_after(notif_v2, 0, station_id);
+ iwl_mvm_parse_wowlan_info_notif_v2(mvm, notif_v2,
+ d3_data->status,
+ len);
+ kfree(notif_v2);
+
+ } else if (wowlan_info_ver == 2) {
+ struct iwl_wowlan_info_notif_v2 *notif_v2 =
+ (void *)pkt->data;
+
+ iwl_mvm_parse_wowlan_info_notif_v2(mvm, notif_v2,
+ d3_data->status,
+ len);
} else {
- notif = (void *)pkt->data;
+ struct iwl_wowlan_info_notif *notif =
+ (void *)pkt->data;
+
+ iwl_mvm_parse_wowlan_info_notif(mvm, notif,
+ d3_data->status, len);
}
d3_data->notif_received |= IWL_D3_NOTIF_WOWLAN_INFO;
- len = iwl_rx_packet_payload_len(pkt);
- iwl_mvm_parse_wowlan_info_notif(mvm, notif, d3_data->status,
- len);
-
- if (wowlan_info_ver < 2)
- kfree(notif);
if (d3_data->status &&
d3_data->status->wakeup_reasons & IWL_WOWLAN_WAKEUP_REASON_HAS_WAKEUP_PKT)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
index cb4ecad6103f..e8b881596baf 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c
@@ -699,19 +699,11 @@ MVM_DEBUGFS_READ_WRITE_FILE_OPS(rx_phyinfo, 10);
MVM_DEBUGFS_READ_WRITE_FILE_OPS(quota_min, 32);
MVM_DEBUGFS_READ_FILE_OPS(os_device_timediff);
-
-void iwl_mvm_vif_dbgfs_register(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+void iwl_mvm_vif_add_debugfs(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct dentry *dbgfs_dir = vif->debugfs_dir;
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- char buf[100];
-
- /*
- * Check if debugfs directory already exist before creating it.
- * This may happen when, for example, resetting hw or suspend-resume
- */
- if (!dbgfs_dir || mvmvif->dbgfs_dir)
- return;
mvmvif->dbgfs_dir = debugfs_create_dir("iwlmvm", dbgfs_dir);
if (IS_ERR_OR_NULL(mvmvif->dbgfs_dir)) {
@@ -737,6 +729,17 @@ void iwl_mvm_vif_dbgfs_register(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
if (vif->type == NL80211_IFTYPE_STATION && !vif->p2p &&
mvmvif == mvm->bf_allowed_vif)
MVM_DEBUGFS_ADD_FILE_VIF(bf_params, mvmvif->dbgfs_dir, 0600);
+}
+
+void iwl_mvm_vif_dbgfs_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+{
+ struct dentry *dbgfs_dir = vif->debugfs_dir;
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ char buf[100];
+
+ /* this will happen in monitor mode */
+ if (!dbgfs_dir)
+ return;
/*
* Create symlink for convenience pointing to interface specific
@@ -745,21 +748,62 @@ void iwl_mvm_vif_dbgfs_register(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
* find
* netdev:wlan0 -> ../../../ieee80211/phy0/netdev:wlan0/iwlmvm/
*/
- snprintf(buf, 100, "../../../%pd3/%pd",
- dbgfs_dir,
- mvmvif->dbgfs_dir);
+ snprintf(buf, 100, "../../../%pd3/iwlmvm", dbgfs_dir);
mvmvif->dbgfs_slink = debugfs_create_symlink(dbgfs_dir->d_name.name,
mvm->debugfs_dir, buf);
}
-void iwl_mvm_vif_dbgfs_clean(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+void iwl_mvm_vif_dbgfs_rm_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
debugfs_remove(mvmvif->dbgfs_slink);
mvmvif->dbgfs_slink = NULL;
+}
+
+#define MVM_DEBUGFS_WRITE_LINK_FILE_OPS(name, bufsz) \
+ _MVM_DEBUGFS_WRITE_FILE_OPS(link_##name, bufsz, \
+ struct ieee80211_bss_conf)
+#define MVM_DEBUGFS_READ_WRITE_LINK_FILE_OPS(name, bufsz) \
+ _MVM_DEBUGFS_READ_WRITE_FILE_OPS(link_##name, bufsz, \
+ struct ieee80211_bss_conf)
+#define MVM_DEBUGFS_ADD_LINK_FILE(name, parent, mode) \
+ debugfs_create_file(#name, mode, parent, link_conf, \
+ &iwl_dbgfs_link_##name##_ops)
+
+static void iwl_mvm_debugfs_add_link_files(struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct dentry *mvm_dir)
+{
+ /* Add per-link files here */
+}
+
+void iwl_mvm_link_add_debugfs(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct dentry *dir)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm *mvm = mvmvif->mvm;
+ unsigned int link_id = link_conf->link_id;
+ struct iwl_mvm_vif_link_info *link_info = mvmvif->link[link_id];
+ struct dentry *mvm_dir;
+
+ if (WARN_ON(!link_info) || !dir)
+ return;
+
+ if (dir == vif->debugfs_dir) {
+ WARN_ON(!mvmvif->dbgfs_dir);
+ mvm_dir = mvmvif->dbgfs_dir;
+ } else {
+ mvm_dir = debugfs_create_dir("iwlmvm", dir);
+ if (IS_ERR_OR_NULL(mvm_dir)) {
+ IWL_ERR(mvm, "Failed to create debugfs directory under %pd\n",
+ dir);
+ return;
+ }
+ }
- debugfs_remove_recursive(mvmvif->dbgfs_dir);
- mvmvif->dbgfs_dir = NULL;
+ iwl_mvm_debugfs_add_link_files(vif, link_conf, mvm_dir);
}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
index cf27f106d4d5..329c545f65fd 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
@@ -50,8 +50,18 @@ static ssize_t iwl_dbgfs_stop_ctdp_write(struct iwl_mvm *mvm, char *buf,
size_t count, loff_t *ppos)
{
int ret;
+ bool force;
- if (!iwl_mvm_is_ctdp_supported(mvm))
+ if (!kstrtobool(buf, &force))
+ IWL_DEBUG_INFO(mvm,
+ "force start is %d [0=disabled, 1=enabled]\n",
+ force);
+
+ /* we allow skipping the cap support check to force stop ctdp
+ * statistics collection, with the guarantee that it is
+ * safe to use.
+ */
+ if (!force && !iwl_mvm_is_ctdp_supported(mvm))
return -EOPNOTSUPP;
if (!iwl_mvm_firmware_running(mvm) ||
@@ -65,6 +75,36 @@ static ssize_t iwl_dbgfs_stop_ctdp_write(struct iwl_mvm *mvm, char *buf,
return ret ?: count;
}
+static ssize_t iwl_dbgfs_start_ctdp_write(struct iwl_mvm *mvm,
+ char *buf, size_t count,
+ loff_t *ppos)
+{
+ int ret;
+ bool force;
+
+ if (!kstrtobool(buf, &force))
+ IWL_DEBUG_INFO(mvm,
+ "force start is %d [0=disabled, 1=enabled]\n",
+ force);
+
+ /* we allow skipping the cap support check to force enable ctdp
+ * for statistics collection, with the guarantee that it is
+ * safe to use.
+ */
+ if (!force && !iwl_mvm_is_ctdp_supported(mvm))
+ return -EOPNOTSUPP;
+
+ if (!iwl_mvm_firmware_running(mvm) ||
+ mvm->fwrt.cur_fw_img != IWL_UCODE_REGULAR)
+ return -EIO;
+
+ mutex_lock(&mvm->mutex);
+ ret = iwl_mvm_ctdp_command(mvm, CTDP_CMD_OPERATION_START, 0);
+ mutex_unlock(&mvm->mutex);
+
+ return ret ?: count;
+}
+
static ssize_t iwl_dbgfs_force_ctkill_write(struct iwl_mvm *mvm, char *buf,
size_t count, loff_t *ppos)
{
@@ -965,6 +1005,13 @@ static ssize_t iwl_dbgfs_fw_rx_stats_read(struct file *file,
char *buf;
int ret;
size_t bufsz;
+ u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw,
+ WIDE_ID(SYSTEM_GROUP,
+ SYSTEM_STATISTICS_CMD),
+ IWL_FW_CMD_VER_UNKNOWN);
+
+ if (cmd_ver != IWL_FW_CMD_VER_UNKNOWN)
+ return -EOPNOTSUPP;
if (iwl_mvm_has_new_rx_stats_api(mvm))
bufsz = ((sizeof(struct mvm_statistics_rx) /
@@ -1144,6 +1191,101 @@ static ssize_t iwl_dbgfs_fw_rx_stats_read(struct file *file,
}
#undef PRINT_STAT_LE32
+static ssize_t iwl_dbgfs_fw_system_stats_read(struct file *file,
+ char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ char *buff, *pos, *endpos;
+ int ret;
+ size_t bufsz;
+ int i;
+ struct iwl_mvm_vif *mvmvif;
+ struct ieee80211_vif *vif;
+ struct iwl_mvm *mvm = file->private_data;
+ u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw,
+ WIDE_ID(SYSTEM_GROUP,
+ SYSTEM_STATISTICS_CMD),
+ IWL_FW_CMD_VER_UNKNOWN);
+
+ /* in case of a wrong cmd version, allocate buffer only for error msg */
+ bufsz = (cmd_ver == 1) ? 4096 : 64;
+
+ buff = kzalloc(bufsz, GFP_KERNEL);
+ if (!buff)
+ return -ENOMEM;
+
+ pos = buff;
+ endpos = pos + bufsz;
+
+ if (cmd_ver != 1) {
+ pos += scnprintf(pos, endpos - pos,
+ "System stats not supported:%d\n", cmd_ver);
+ goto send_out;
+ }
+
+ mutex_lock(&mvm->mutex);
+ if (iwl_mvm_firmware_running(mvm))
+ iwl_mvm_request_statistics(mvm, false);
+
+ for (i = 0; i < NUM_MAC_INDEX_DRIVER; i++) {
+ vif = iwl_mvm_rcu_dereference_vif_id(mvm, i, false);
+ if (!vif)
+ continue;
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+ break;
+ }
+
+ if (i == NUM_MAC_INDEX_DRIVER || !vif) {
+ pos += scnprintf(pos, endpos - pos, "vif is NULL\n");
+ goto release_send_out;
+ }
+
+ mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ if (!mvmvif) {
+ pos += scnprintf(pos, endpos - pos, "mvmvif is NULL\n");
+ goto release_send_out;
+ }
+
+ for_each_mvm_vif_valid_link(mvmvif, i) {
+ struct iwl_mvm_vif_link_info *link_info = mvmvif->link[i];
+
+ pos += scnprintf(pos, endpos - pos,
+ "link_id %d", i);
+ pos += scnprintf(pos, endpos - pos,
+ " num_beacons %d",
+ link_info->beacon_stats.num_beacons);
+ pos += scnprintf(pos, endpos - pos,
+ " accu_num_beacons %d",
+ link_info->beacon_stats.accu_num_beacons);
+ pos += scnprintf(pos, endpos - pos,
+ " avg_signal %d\n",
+ link_info->beacon_stats.avg_signal);
+ }
+
+ pos += scnprintf(pos, endpos - pos,
+ "radio_stats.rx_time %lld\n",
+ mvm->radio_stats.rx_time);
+ pos += scnprintf(pos, endpos - pos,
+ "radio_stats.tx_time %lld\n",
+ mvm->radio_stats.tx_time);
+ pos += scnprintf(pos, endpos - pos,
+ "accu_radio_stats.rx_time %lld\n",
+ mvm->accu_radio_stats.rx_time);
+ pos += scnprintf(pos, endpos - pos,
+ "accu_radio_stats.tx_time %lld\n",
+ mvm->accu_radio_stats.tx_time);
+
+release_send_out:
+ mutex_unlock(&mvm->mutex);
+
+send_out:
+ ret = simple_read_from_buffer(user_buf, count, ppos, buff, pos - buff);
+ kfree(buff);
+
+ return ret;
+}
+
static ssize_t iwl_dbgfs_frame_stats_read(struct iwl_mvm *mvm,
char __user *user_buf, size_t count,
loff_t *ppos,
@@ -1998,6 +2140,7 @@ MVM_DEBUGFS_READ_WRITE_FILE_OPS(prph_reg, 64);
/* Device wide debugfs entries */
MVM_DEBUGFS_READ_FILE_OPS(ctdp_budget);
MVM_DEBUGFS_WRITE_FILE_OPS(stop_ctdp, 8);
+MVM_DEBUGFS_WRITE_FILE_OPS(start_ctdp, 8);
MVM_DEBUGFS_WRITE_FILE_OPS(force_ctkill, 8);
MVM_DEBUGFS_WRITE_FILE_OPS(tx_flush, 16);
MVM_DEBUGFS_WRITE_FILE_OPS(sta_drain, 8);
@@ -2012,6 +2155,7 @@ MVM_DEBUGFS_READ_FILE_OPS(bt_cmd);
MVM_DEBUGFS_READ_WRITE_FILE_OPS(disable_power_off, 64);
MVM_DEBUGFS_READ_FILE_OPS(fw_rx_stats);
MVM_DEBUGFS_READ_FILE_OPS(drv_rx_stats);
+MVM_DEBUGFS_READ_FILE_OPS(fw_system_stats);
MVM_DEBUGFS_READ_FILE_OPS(fw_ver);
MVM_DEBUGFS_READ_FILE_OPS(phy_integration_ver);
MVM_DEBUGFS_READ_FILE_OPS(tas_get_status);
@@ -2210,6 +2354,7 @@ void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm)
MVM_DEBUGFS_ADD_FILE(nic_temp, mvm->debugfs_dir, 0400);
MVM_DEBUGFS_ADD_FILE(ctdp_budget, mvm->debugfs_dir, 0400);
MVM_DEBUGFS_ADD_FILE(stop_ctdp, mvm->debugfs_dir, 0200);
+ MVM_DEBUGFS_ADD_FILE(start_ctdp, mvm->debugfs_dir, 0200);
MVM_DEBUGFS_ADD_FILE(force_ctkill, mvm->debugfs_dir, 0200);
MVM_DEBUGFS_ADD_FILE(stations, mvm->debugfs_dir, 0400);
MVM_DEBUGFS_ADD_FILE(bt_notif, mvm->debugfs_dir, 0400);
@@ -2218,6 +2363,7 @@ void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm)
MVM_DEBUGFS_ADD_FILE(fw_ver, mvm->debugfs_dir, 0400);
MVM_DEBUGFS_ADD_FILE(fw_rx_stats, mvm->debugfs_dir, 0400);
MVM_DEBUGFS_ADD_FILE(drv_rx_stats, mvm->debugfs_dir, 0400);
+ MVM_DEBUGFS_ADD_FILE(fw_system_stats, mvm->debugfs_dir, 0400);
MVM_DEBUGFS_ADD_FILE(fw_restart, mvm->debugfs_dir, 0200);
MVM_DEBUGFS_ADD_FILE(fw_nmi, mvm->debugfs_dir, 0200);
MVM_DEBUGFS_ADD_FILE(bt_tx_prio, mvm->debugfs_dir, 0200);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.h b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.h
index 0711ab689c48..cc2c45b45ddc 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.h
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
+ * Copyright (C) 2023 Intel Corporation
* Copyright (C) 2012-2014 Intel Corporation
* Copyright (C) 2013-2014 Intel Mobile Communications GmbH
*/
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
index b49781d1a07a..10b9219b3bfd 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ftm-responder.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
* Copyright (C) 2015-2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2022 Intel Corporation
+ * Copyright (C) 2018-2023 Intel Corporation
*/
#include <net/cfg80211.h>
#include <linux/etherdevice.h>
@@ -302,7 +302,12 @@ static void iwl_mvm_resp_del_pasn_sta(struct iwl_mvm *mvm,
struct iwl_mvm_pasn_sta *sta)
{
list_del(&sta->list);
- iwl_mvm_rm_sta_id(mvm, vif, sta->int_sta.sta_id);
+
+ if (iwl_mvm_has_mld_api(mvm->fw))
+ iwl_mvm_mld_rm_sta_id(mvm, sta->int_sta.sta_id);
+ else
+ iwl_mvm_rm_sta_id(mvm, vif, sta->int_sta.sta_id);
+
iwl_mvm_dealloc_int_sta(mvm, &sta->int_sta);
kfree(sta);
}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
index 1d5ee4330f29..403bd17b8b7a 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
@@ -15,6 +15,7 @@
#include "iwl-prph.h"
#include "fw/acpi.h"
#include "fw/pnvm.h"
+#include "fw/uefi.h"
#include "mvm.h"
#include "fw/dbg.h"
@@ -23,12 +24,15 @@
#include "iwl-nvm-parse.h"
#include "time-sync.h"
-#define MVM_UCODE_ALIVE_TIMEOUT (HZ)
+#define MVM_UCODE_ALIVE_TIMEOUT (2 * HZ)
#define MVM_UCODE_CALIB_TIMEOUT (2 * HZ)
#define IWL_TAS_US_MCC 0x5553
#define IWL_TAS_CANADA_MCC 0x4341
+#define IWL_UATS_VLP_AP_SUPPORTED BIT(29)
+#define IWL_UATS_AFC_AP_SUPPORTED BIT(30)
+
struct iwl_mvm_alive_data {
bool valid;
u32 scd_base_addr;
@@ -487,6 +491,52 @@ static void iwl_mvm_phy_filter_init(struct iwl_mvm *mvm,
}
#if defined(CONFIG_ACPI) && defined(CONFIG_EFI)
+static void iwl_mvm_uats_init(struct iwl_mvm *mvm)
+{
+ u8 cmd_ver;
+ int ret;
+ struct iwl_host_cmd cmd = {
+ .id = WIDE_ID(REGULATORY_AND_NVM_GROUP,
+ UATS_TABLE_CMD),
+ .flags = 0,
+ .data[0] = &mvm->fwrt.uats_table,
+ .len[0] = sizeof(mvm->fwrt.uats_table),
+ .dataflags[0] = IWL_HCMD_DFL_NOCOPY,
+ };
+
+ if (!(mvm->trans->trans_cfg->device_family >=
+ IWL_DEVICE_FAMILY_AX210)) {
+ IWL_DEBUG_RADIO(mvm, "UATS feature is not supported\n");
+ return;
+ }
+
+ if (!mvm->fwrt.uats_enabled) {
+ IWL_DEBUG_RADIO(mvm, "UATS feature is disabled\n");
+ return;
+ }
+
+ cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd.id,
+ IWL_FW_CMD_VER_UNKNOWN);
+ if (cmd_ver != 1) {
+ IWL_DEBUG_RADIO(mvm,
+ "UATS_TABLE_CMD ver %d not supported\n",
+ cmd_ver);
+ return;
+ }
+
+ ret = iwl_uefi_get_uats_table(mvm->trans, &mvm->fwrt);
+ if (ret < 0) {
+ IWL_ERR(mvm, "failed to read UATS table (%d)\n", ret);
+ return;
+ }
+
+ ret = iwl_mvm_send_cmd(mvm, &cmd);
+ if (ret < 0)
+ IWL_ERR(mvm, "failed to send UATS_TABLE_CMD (%d)\n", ret);
+ else
+ IWL_DEBUG_RADIO(mvm, "UATS_TABLE_CMD sent to FW\n");
+}
+
static int iwl_mvm_sgom_init(struct iwl_mvm *mvm)
{
u8 cmd_ver;
@@ -526,6 +576,10 @@ static int iwl_mvm_sgom_init(struct iwl_mvm *mvm)
{
return 0;
}
+
+static void iwl_mvm_uats_init(struct iwl_mvm *mvm)
+{
+}
#endif
static int iwl_send_phy_cfg_cmd(struct iwl_mvm *mvm)
@@ -583,6 +637,7 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm)
static const u16 init_complete[] = {
INIT_COMPLETE_NOTIF,
};
+ u32 sb_cfg;
int ret;
if (mvm->trans->cfg->tx_with_siso_diversity)
@@ -592,6 +647,14 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm)
mvm->rfkill_safe_init_done = false;
+ if (mvm->trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_AX210) {
+ sb_cfg = iwl_read_umac_prph(mvm->trans, SB_MODIFY_CFG_FLAG);
+ /* if needed, we'll reset this on our way out later */
+ mvm->pldr_sync = sb_cfg == SB_CFG_RESIDES_IN_ROM;
+ if (mvm->pldr_sync && iwl_mei_pldr_req())
+ return -EBUSY;
+ }
+
iwl_init_notification_wait(&mvm->notif_wait,
&init_wait,
init_complete,
@@ -605,6 +668,13 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm)
ret = iwl_mvm_load_ucode_wait_alive(mvm, IWL_UCODE_REGULAR);
if (ret) {
IWL_ERR(mvm, "Failed to start RT ucode: %d\n", ret);
+
+ /* if we needed reset then fail here, but notify and remove */
+ if (mvm->pldr_sync) {
+ iwl_mei_alive_notif(false);
+ iwl_trans_pcie_remove(mvm->trans, true);
+ }
+
goto error;
}
iwl_dbg_tlv_time_point(&mvm->fwrt, IWL_FW_INI_TIME_POINT_AFTER_ALIVE,
@@ -667,7 +737,8 @@ static int iwl_run_unified_mvm_ucode(struct iwl_mvm *mvm)
/* Read the NVM only at driver load time, no need to do this twice */
if (!IWL_MVM_PARSE_NVM && !mvm->nvm_data) {
- mvm->nvm_data = iwl_get_nvm(mvm->trans, mvm->fw);
+ mvm->nvm_data = iwl_get_nvm(mvm->trans, mvm->fw,
+ mvm->set_tx_ant, mvm->set_rx_ant);
if (IS_ERR(mvm->nvm_data)) {
ret = PTR_ERR(mvm->nvm_data);
mvm->nvm_data = NULL;
@@ -1084,6 +1155,12 @@ static const struct dmi_system_id dmi_tas_approved_list[] = {
DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
},
},
+ { .ident = "GOOGLE-HP",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Google"),
+ DMI_MATCH(DMI_BOARD_VENDOR, "HP"),
+ },
+ },
{ .ident = "MSI",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Micro-Star International Co., Ltd."),
@@ -1209,7 +1286,10 @@ static void iwl_mvm_lari_cfg(struct iwl_mvm *mvm)
{
int ret;
u32 value;
- struct iwl_lari_config_change_cmd_v6 cmd = {};
+ struct iwl_lari_config_change_cmd_v7 cmd = {};
+ u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw,
+ WIDE_ID(REGULATORY_AND_NVM_GROUP,
+ LARI_CONFIG_CHANGE), 1);
cmd.config_bitmap = iwl_acpi_get_lari_config_bitmap(&mvm->fwrt);
@@ -1227,8 +1307,11 @@ static void iwl_mvm_lari_cfg(struct iwl_mvm *mvm)
ret = iwl_acpi_get_dsm_u32(mvm->fwrt.dev, 0,
DSM_FUNC_ACTIVATE_CHANNEL,
&iwl_guid, &value);
- if (!ret)
+ if (!ret) {
+ if (cmd_ver < 8)
+ value &= ~ACTIVATE_5G2_IN_WW_MASK;
cmd.chan_state_active_bitmap = cpu_to_le32(value);
+ }
ret = iwl_acpi_get_dsm_u32(mvm->fwrt.dev, 0,
DSM_FUNC_ENABLE_6E,
@@ -1242,18 +1325,26 @@ static void iwl_mvm_lari_cfg(struct iwl_mvm *mvm)
if (!ret)
cmd.force_disable_channels_bitmap = cpu_to_le32(value);
+ ret = iwl_acpi_get_dsm_u32(mvm->fwrt.dev, 0,
+ DSM_FUNC_ENERGY_DETECTION_THRESHOLD,
+ &iwl_guid, &value);
+ if (!ret)
+ cmd.edt_bitmap = cpu_to_le32(value);
+
if (cmd.config_bitmap ||
cmd.oem_uhb_allow_bitmap ||
cmd.oem_11ax_allow_bitmap ||
cmd.oem_unii4_allow_bitmap ||
cmd.chan_state_active_bitmap ||
- cmd.force_disable_channels_bitmap) {
+ cmd.force_disable_channels_bitmap ||
+ cmd.edt_bitmap) {
size_t cmd_size;
- u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw,
- WIDE_ID(REGULATORY_AND_NVM_GROUP,
- LARI_CONFIG_CHANGE),
- 1);
+
switch (cmd_ver) {
+ case 8:
+ case 7:
+ cmd_size = sizeof(struct iwl_lari_config_change_cmd_v7);
+ break;
case 6:
cmd_size = sizeof(struct iwl_lari_config_change_cmd_v6);
break;
@@ -1287,6 +1378,9 @@ static void iwl_mvm_lari_cfg(struct iwl_mvm *mvm)
"sending LARI_CONFIG_CHANGE, oem_uhb_allow_bitmap=0x%x, force_disable_channels_bitmap=0x%x\n",
le32_to_cpu(cmd.oem_uhb_allow_bitmap),
le32_to_cpu(cmd.force_disable_channels_bitmap));
+ IWL_DEBUG_RADIO(mvm,
+ "sending LARI_CONFIG_CHANGE, edt_bitmap=0x%x\n",
+ le32_to_cpu(cmd.edt_bitmap));
ret = iwl_mvm_send_cmd_pdu(mvm,
WIDE_ID(REGULATORY_AND_NVM_GROUP,
LARI_CONFIG_CHANGE),
@@ -1296,6 +1390,10 @@ static void iwl_mvm_lari_cfg(struct iwl_mvm *mvm)
"Failed to send LARI_CONFIG_CHANGE (%d)\n",
ret);
}
+
+ if (le32_to_cpu(cmd.oem_uhb_allow_bitmap) & IWL_UATS_VLP_AP_SUPPORTED ||
+ le32_to_cpu(cmd.oem_uhb_allow_bitmap) & IWL_UATS_AFC_AP_SUPPORTED)
+ mvm->fwrt.uats_enabled = TRUE;
}
void iwl_mvm_get_acpi_tables(struct iwl_mvm *mvm)
@@ -1499,10 +1597,7 @@ static int iwl_mvm_load_rt_fw(struct iwl_mvm *mvm)
int iwl_mvm_up(struct iwl_mvm *mvm)
{
int ret, i;
- struct ieee80211_channel *chan;
- struct cfg80211_chan_def chandef;
struct ieee80211_supported_band *sband = NULL;
- u32 sb_cfg;
lockdep_assert_held(&mvm->mutex);
@@ -1510,11 +1605,6 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
if (ret)
return ret;
- sb_cfg = iwl_read_umac_prph(mvm->trans, SB_MODIFY_CFG_FLAG);
- mvm->pldr_sync = !(sb_cfg & SB_CFG_RESIDES_IN_OTP_MASK);
- if (mvm->pldr_sync && iwl_mei_pldr_req())
- return -EBUSY;
-
ret = iwl_mvm_load_rt_fw(mvm);
if (ret) {
IWL_ERR(mvm, "Failed to start RT ucode: %d\n", ret);
@@ -1527,6 +1617,7 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
/* FW loaded successfully */
mvm->pldr_sync = false;
+ iwl_fw_disable_dbg_asserts(&mvm->fwrt);
iwl_get_shared_mem_conf(&mvm->fwrt);
ret = iwl_mvm_sf_update(mvm, NULL, false);
@@ -1630,21 +1721,6 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
goto error;
}
- chan = &sband->channels[0];
-
- cfg80211_chandef_create(&chandef, chan, NL80211_CHAN_NO_HT);
- for (i = 0; i < NUM_PHY_CTX; i++) {
- /*
- * The channel used here isn't relevant as it's
- * going to be overwritten in the other flows.
- * For now use the first channel we have.
- */
- ret = iwl_mvm_phy_ctxt_add(mvm, &mvm->phy_ctxts[i],
- &chandef, 1, 1);
- if (ret)
- goto error;
- }
-
if (iwl_mvm_is_tt_in_fw(mvm)) {
/* in order to give the responsibility of ct-kill and
* TX backoff to FW we need to send empty temperature reporting
@@ -1727,6 +1803,7 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
iwl_mvm_tas_init(mvm);
iwl_mvm_leds_sync(mvm);
+ iwl_mvm_uats_init(mvm);
if (iwl_rfi_supported(mvm)) {
if (iwl_mvm_eval_dsm_rfi(mvm) == DSM_VALUE_RFI_ENABLE)
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/link.c b/drivers/net/wireless/intel/iwlwifi/mvm/link.c
index ace82e2c5bd9..be48b0fc9cb6 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/link.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/link.c
@@ -53,7 +53,6 @@ int iwl_mvm_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
unsigned int link_id = link_conf->link_id;
struct iwl_mvm_vif_link_info *link_info = mvmvif->link[link_id];
struct iwl_link_config_cmd cmd = {};
- struct iwl_mvm_phy_ctxt *phyctxt;
if (WARN_ON_ONCE(!link_info))
return -EINVAL;
@@ -61,7 +60,7 @@ int iwl_mvm_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
if (link_info->fw_link_id == IWL_MVM_FW_LINK_ID_INVALID) {
link_info->fw_link_id = iwl_mvm_get_free_fw_link_id(mvm,
mvmvif);
- if (link_info->fw_link_id == IWL_MVM_FW_LINK_ID_INVALID)
+ if (link_info->fw_link_id >= ARRAY_SIZE(mvm->link_id_to_link_conf))
return -EINVAL;
rcu_assign_pointer(mvm->link_id_to_link_conf[link_info->fw_link_id],
@@ -77,12 +76,8 @@ int iwl_mvm_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
cmd.link_id = cpu_to_le32(link_info->fw_link_id);
cmd.mac_id = cpu_to_le32(mvmvif->id);
cmd.spec_link_id = link_conf->link_id;
- /* P2P-Device already has a valid PHY context during add */
- phyctxt = link_info->phy_ctxt;
- if (phyctxt)
- cmd.phy_id = cpu_to_le32(phyctxt->id);
- else
- cmd.phy_id = cpu_to_le32(FW_CTXT_INVALID);
+ WARN_ON_ONCE(link_info->phy_ctxt);
+ cmd.phy_id = cpu_to_le32(FW_CTXT_INVALID);
memcpy(cmd.local_link_addr, link_conf->addr, ETH_ALEN);
@@ -194,11 +189,14 @@ int iwl_mvm_link_changed(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
flags_mask |= LINK_FLG_MU_EDCA_CW;
}
- if (link_conf->eht_puncturing && !iwlwifi_mod_params.disable_11be)
- cmd.puncture_mask = cpu_to_le16(link_conf->eht_puncturing);
- else
- /* This flag can be set only if the MAC has eht support */
- changes &= ~LINK_CONTEXT_MODIFY_EHT_PARAMS;
+ if (changes & LINK_CONTEXT_MODIFY_EHT_PARAMS) {
+ if (iwlwifi_mod_params.disable_11be ||
+ !link_conf->eht_support)
+ changes &= ~LINK_CONTEXT_MODIFY_EHT_PARAMS;
+ else
+ cmd.puncture_mask =
+ cpu_to_le16(link_conf->eht_puncturing);
+ }
cmd.bss_color = link_conf->he_bss_color.color;
@@ -244,9 +242,10 @@ int iwl_mvm_remove_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
struct iwl_link_config_cmd cmd = {};
int ret;
+ /* mac80211 thought we had the link, but it was never configured */
if (WARN_ON(!link_info ||
- link_info->fw_link_id == IWL_MVM_FW_LINK_ID_INVALID))
- return -EINVAL;
+ link_info->fw_link_id >= ARRAY_SIZE(mvm->link_id_to_link_conf)))
+ return 0;
RCU_INIT_POINTER(mvm->link_id_to_link_conf[link_info->fw_link_id],
NULL);
@@ -254,6 +253,7 @@ int iwl_mvm_remove_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
iwl_mvm_release_fw_link_id(mvm, link_info->fw_link_id);
link_info->fw_link_id = IWL_MVM_FW_LINK_ID_INVALID;
cmd.spec_link_id = link_conf->link_id;
+ cmd.phy_id = cpu_to_le32(FW_CTXT_INVALID);
ret = iwl_mvm_link_cmd_send(mvm, &cmd, FW_CTXT_ACTION_REMOVE);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
index 7369a45f7f2b..c4f96125cf33 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
@@ -286,6 +286,10 @@ int iwl_mvm_mac_ctxt_init(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
INIT_LIST_HEAD(&mvmvif->time_event_data.list);
mvmvif->time_event_data.id = TE_MAX;
+ mvmvif->deflink.bcast_sta.sta_id = IWL_MVM_INVALID_STA;
+ mvmvif->deflink.mcast_sta.sta_id = IWL_MVM_INVALID_STA;
+ mvmvif->deflink.ap_sta_id = IWL_MVM_INVALID_STA;
+
/* No need to allocate data queues to P2P Device MAC and NAN.*/
if (vif->type == NL80211_IFTYPE_P2P_DEVICE)
return 0;
@@ -300,10 +304,6 @@ int iwl_mvm_mac_ctxt_init(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
mvmvif->deflink.cab_queue = IWL_MVM_DQA_GCAST_QUEUE;
}
- mvmvif->deflink.bcast_sta.sta_id = IWL_MVM_INVALID_STA;
- mvmvif->deflink.mcast_sta.sta_id = IWL_MVM_INVALID_STA;
- mvmvif->deflink.ap_sta_id = IWL_MVM_INVALID_STA;
-
for (i = 0; i < NUM_IWL_MVM_SMPS_REQ; i++)
mvmvif->deflink.smps_requests[i] = IEEE80211_SMPS_AUTOMATIC;
@@ -1083,6 +1083,19 @@ static int iwl_mvm_mac_ctxt_send_beacon_v7(struct iwl_mvm *mvm,
sizeof(beacon_cmd));
}
+bool iwl_mvm_enable_fils(struct iwl_mvm *mvm,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ if (IWL_MVM_DISABLE_AP_FILS)
+ return false;
+
+ if (cfg80211_channel_is_psc(ctx->def.chan))
+ return true;
+
+ return (ctx->def.chan->band == NL80211_BAND_6GHZ &&
+ ctx->def.width >= NL80211_CHAN_WIDTH_80);
+}
+
static int iwl_mvm_mac_ctxt_send_beacon_v9(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
struct sk_buff *beacon,
@@ -1102,8 +1115,7 @@ static int iwl_mvm_mac_ctxt_send_beacon_v9(struct iwl_mvm *mvm,
ctx = rcu_dereference(link_conf->chanctx_conf);
channel = ieee80211_frequency_to_channel(ctx->def.chan->center_freq);
WARN_ON(channel == 0);
- if (cfg80211_channel_is_psc(ctx->def.chan) &&
- !IWL_MVM_DISABLE_AP_FILS) {
+ if (iwl_mvm_enable_fils(mvm, ctx)) {
flags |= iwl_fw_lookup_cmd_ver(mvm->fw, BEACON_TEMPLATE_CMD,
0) > 10 ?
IWL_MAC_BEACON_FILS :
@@ -1761,6 +1773,7 @@ void iwl_mvm_channel_switch_start_notif(struct iwl_mvm *mvm,
u32 id_n_color, csa_id;
/* save mac_id or link_id to use later to cancel csa if needed */
u32 id;
+ u32 mac_link_id = 0;
u8 notif_ver = iwl_fw_lookup_notif_ver(mvm->fw, MAC_CONF_GROUP,
CHANNEL_SWITCH_START_NOTIF, 0);
bool csa_active;
@@ -1790,6 +1803,7 @@ void iwl_mvm_channel_switch_start_notif(struct iwl_mvm *mvm,
goto out_unlock;
id = link_id;
+ mac_link_id = bss_conf->link_id;
vif = bss_conf->vif;
csa_active = bss_conf->csa_active;
}
@@ -1839,7 +1853,7 @@ void iwl_mvm_channel_switch_start_notif(struct iwl_mvm *mvm,
iwl_mvm_csa_client_absent(mvm, vif);
cancel_delayed_work(&mvmvif->csa_work);
- ieee80211_chswitch_done(vif, true);
+ ieee80211_chswitch_done(vif, true, mac_link_id);
break;
default:
/* should never happen */
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
index 5918c1f2b10c..a64600f0ed9f 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
@@ -186,7 +186,7 @@ struct ieee80211_regdomain *iwl_mvm_get_current_regdomain(struct iwl_mvm *mvm,
MCC_SOURCE_OLD_FW, changed);
}
-int iwl_mvm_init_fw_regd(struct iwl_mvm *mvm)
+int iwl_mvm_init_fw_regd(struct iwl_mvm *mvm, bool force_regd_sync)
{
enum iwl_mcc_source used_src;
struct ieee80211_regdomain *regd;
@@ -213,8 +213,10 @@ int iwl_mvm_init_fw_regd(struct iwl_mvm *mvm)
if (IS_ERR_OR_NULL(regd))
return -EIO;
- /* update cfg80211 if the regdomain was changed */
- if (changed)
+ /* update cfg80211 if the regdomain was changed or the caller explicitly
+ * asked to update regdomain
+ */
+ if (changed || force_regd_sync)
ret = regulatory_set_wiphy_regd_sync(mvm->hw->wiphy, regd);
else
ret = 0;
@@ -279,6 +281,30 @@ int iwl_mvm_op_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, u32 *rx_ant)
return 0;
}
+int iwl_mvm_op_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant)
+{
+ struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
+
+ /* This has been tested on those devices only */
+ if (mvm->trans->trans_cfg->device_family != IWL_DEVICE_FAMILY_9000 &&
+ mvm->trans->trans_cfg->device_family != IWL_DEVICE_FAMILY_22000)
+ return -ENOTSUPP;
+
+ if (!mvm->nvm_data)
+ return -EBUSY;
+
+ /* mac80211 ensures the device is not started,
+ * so the firmware cannot be running
+ */
+
+ mvm->set_tx_ant = tx_ant;
+ mvm->set_rx_ant = rx_ant;
+
+ iwl_reinit_cab(mvm->trans, mvm->nvm_data, tx_ant, rx_ant, mvm->fw);
+
+ return 0;
+}
+
int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm)
{
struct ieee80211_hw *hw = mvm->hw;
@@ -352,7 +378,9 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm)
ieee80211_hw_set(hw, HAS_RATE_CONTROL);
}
- if (iwl_mvm_has_new_rx_api(mvm))
+ /* We want to use mac80211's reorder buffer for 9000 */
+ if (iwl_mvm_has_new_rx_api(mvm) &&
+ mvm->trans->trans_cfg->device_family > IWL_DEVICE_FAMILY_9000)
ieee80211_hw_set(hw, SUPPORTS_REORDERING_BUFFER);
if (fw_has_capa(&mvm->fw->ucode_capa,
@@ -498,7 +526,7 @@ int iwl_mvm_mac_setup_register(struct iwl_mvm *mvm)
ARRAY_SIZE(iwl_mvm_iface_combinations);
hw->wiphy->max_remain_on_channel_duration = 10000;
- hw->max_listen_interval = IWL_CONN_MAX_LISTEN_INTERVAL;
+ hw->max_listen_interval = IWL_MVM_CONN_LISTEN_INTERVAL;
/* Extract MAC address */
memcpy(mvm->addresses[0].addr, mvm->nvm_data->hw_addr, ETH_ALEN);
@@ -1033,6 +1061,7 @@ static void iwl_mvm_cleanup_iterator(void *data, u8 *mac,
spin_unlock_bh(&mvm->time_event_lock);
memset(&mvmvif->bf_data, 0, sizeof(mvmvif->bf_data));
+ mvmvif->ap_sta = NULL;
for_each_mvm_vif_valid_link(mvmvif, link_id) {
mvmvif->link[link_id]->ap_sta_id = IWL_MVM_INVALID_STA;
@@ -1169,19 +1198,9 @@ int iwl_mvm_mac_start(struct ieee80211_hw *hw)
for (retry = 0; retry <= max_retry; retry++) {
ret = __iwl_mvm_mac_start(mvm);
- if (!ret)
+ if (!ret || mvm->pldr_sync)
break;
- /*
- * In PLDR sync PCI re-enumeration is needed. no point to retry
- * mac start before that.
- */
- if (mvm->pldr_sync) {
- iwl_mei_alive_notif(false);
- iwl_trans_pcie_remove(mvm->trans, true);
- break;
- }
-
IWL_ERR(mvm, "mac start retry %d\n", retry);
}
clear_bit(IWL_MVM_STATUS_STARTING, &mvm->status);
@@ -1370,7 +1389,8 @@ int iwl_mvm_set_tx_power(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
}
int iwl_mvm_post_channel_switch(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif)
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
@@ -1380,10 +1400,11 @@ int iwl_mvm_post_channel_switch(struct ieee80211_hw *hw,
if (vif->type == NL80211_IFTYPE_STATION) {
struct iwl_mvm_sta *mvmsta;
+ unsigned int link_id = link_conf->link_id;
+ u8 ap_sta_id = mvmvif->link[link_id]->ap_sta_id;
mvmvif->csa_bcn_pending = false;
- mvmsta = iwl_mvm_sta_from_staid_protected(mvm,
- mvmvif->deflink.ap_sta_id);
+ mvmsta = iwl_mvm_sta_from_staid_protected(mvm, ap_sta_id);
if (WARN_ON(!mvmsta)) {
ret = -EIO;
@@ -1452,7 +1473,8 @@ void iwl_mvm_abort_channel_switch(struct ieee80211_hw *hw,
mvmvif->csa_failed = true;
mutex_unlock(&mvm->mutex);
- iwl_mvm_post_channel_switch(hw, vif);
+ /* If we're here, we can't support MLD */
+ iwl_mvm_post_channel_switch(hw, vif, &vif->bss_conf);
}
void iwl_mvm_channel_switch_disconnect_wk(struct work_struct *wk)
@@ -1464,7 +1486,7 @@ void iwl_mvm_channel_switch_disconnect_wk(struct work_struct *wk)
vif = container_of((void *)mvmvif, struct ieee80211_vif, drv_priv);
/* Trigger disconnect (should clear the CSA state) */
- ieee80211_chswitch_done(vif, false);
+ ieee80211_chswitch_done(vif, false, 0);
}
static u8
@@ -1517,6 +1539,7 @@ static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
int ret;
+ int i;
mutex_lock(&mvm->mutex);
@@ -1533,8 +1556,9 @@ static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
/* make sure that beacon statistics don't go backwards with FW reset */
if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
- mvmvif->deflink.beacon_stats.accu_num_beacons +=
- mvmvif->deflink.beacon_stats.num_beacons;
+ for_each_mvm_vif_valid_link(mvmvif, i)
+ mvmvif->link[i]->beacon_stats.accu_num_beacons +=
+ mvmvif->link[i]->beacon_stats.num_beacons;
/* Allocate resources for the MAC context, and add it to the fw */
ret = iwl_mvm_mac_ctxt_init(mvm, vif);
@@ -1562,7 +1586,7 @@ static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
*/
if (vif->type == NL80211_IFTYPE_AP ||
vif->type == NL80211_IFTYPE_ADHOC) {
- iwl_mvm_vif_dbgfs_register(mvm, vif);
+ iwl_mvm_vif_dbgfs_add_link(mvm, vif);
ret = 0;
goto out;
}
@@ -1589,32 +1613,8 @@ static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
IEEE80211_VIF_SUPPORTS_CQM_RSSI;
}
- /*
- * P2P_DEVICE interface does not have a channel context assigned to it,
- * so a dedicated PHY context is allocated to it and the corresponding
- * MAC context is bound to it at this stage.
- */
- if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
-
- mvmvif->deflink.phy_ctxt = iwl_mvm_get_free_phy_ctxt(mvm);
- if (!mvmvif->deflink.phy_ctxt) {
- ret = -ENOSPC;
- goto out_free_bf;
- }
-
- iwl_mvm_phy_ctxt_ref(mvm, mvmvif->deflink.phy_ctxt);
- ret = iwl_mvm_binding_add_vif(mvm, vif);
- if (ret)
- goto out_unref_phy;
-
- ret = iwl_mvm_add_p2p_bcast_sta(mvm, vif);
- if (ret)
- goto out_unbind;
-
- /* Save a pointer to p2p device vif, so it can later be used to
- * update the p2p device MAC when a GO is started/stopped */
+ if (vif->type == NL80211_IFTYPE_P2P_DEVICE)
mvm->p2p_device_vif = vif;
- }
iwl_mvm_tcm_add_vif(mvm, vif);
INIT_DELAYED_WORK(&mvmvif->csa_work,
@@ -1626,7 +1626,7 @@ static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
iwl_mvm_chandef_get_primary_80(&vif->bss_conf.chandef);
}
- iwl_mvm_vif_dbgfs_register(mvm, vif);
+ iwl_mvm_vif_dbgfs_add_link(mvm, vif);
if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
vif->type == NL80211_IFTYPE_STATION && !vif->p2p &&
@@ -1643,16 +1643,6 @@ out:
goto out_unlock;
- out_unbind:
- iwl_mvm_binding_remove_vif(mvm, vif);
- out_unref_phy:
- iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
- out_free_bf:
- if (mvm->bf_allowed_vif == mvmvif) {
- mvm->bf_allowed_vif = NULL;
- vif->driver_flags &= ~(IEEE80211_VIF_BEACON_FILTER |
- IEEE80211_VIF_SUPPORTS_CQM_RSSI);
- }
out_remove_mac:
mvmvif->deflink.phy_ctxt = NULL;
iwl_mvm_mac_ctxt_remove(mvm, vif);
@@ -1714,7 +1704,7 @@ static bool iwl_mvm_mac_remove_interface_common(struct ieee80211_hw *hw,
if (vif->bss_conf.ftm_responder)
memset(&mvm->ftm_resp_stats, 0, sizeof(mvm->ftm_resp_stats));
- iwl_mvm_vif_dbgfs_clean(mvm, vif);
+ iwl_mvm_vif_dbgfs_rm_link(mvm, vif);
/*
* For AP/GO interface, the tear down of the resources allocated to the
@@ -1744,12 +1734,17 @@ static void iwl_mvm_mac_remove_interface(struct ieee80211_hw *hw,
if (iwl_mvm_mac_remove_interface_common(hw, vif))
goto out;
+ /* Before the interface removal, mac80211 would cancel the ROC, and the
+ * ROC worker would be scheduled if needed. The worker would be flushed
+ * in iwl_mvm_prepare_mac_removal() and thus at this point there is no
+ * binding etc. so nothing needs to be done here.
+ */
if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
+ if (mvmvif->deflink.phy_ctxt) {
+ iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
+ mvmvif->deflink.phy_ctxt = NULL;
+ }
mvm->p2p_device_vif = NULL;
- iwl_mvm_rm_p2p_bcast_sta(mvm, vif);
- iwl_mvm_binding_remove_vif(mvm, vif);
- iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
- mvmvif->deflink.phy_ctxt = NULL;
}
iwl_mvm_mac_ctxt_remove(mvm, vif);
@@ -2473,7 +2468,7 @@ static void iwl_mvm_cfg_he_sta(struct iwl_mvm *mvm,
}
void iwl_mvm_protect_assoc(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
- u32 duration_override)
+ u32 duration_override, unsigned int link_id)
{
u32 duration = IWL_MVM_TE_SESSION_PROTECTION_MAX_TIME_MS;
u32 min_duration = IWL_MVM_TE_SESSION_PROTECTION_MIN_TIME_MS;
@@ -2493,7 +2488,8 @@ void iwl_mvm_protect_assoc(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
if (fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD))
iwl_mvm_schedule_session_protection(mvm, vif, 900,
- min_duration, false);
+ min_duration, false,
+ link_id);
else
iwl_mvm_protect_session(mvm, vif, duration,
min_duration, 500, false);
@@ -2587,6 +2583,7 @@ static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
int ret;
+ int i;
/*
* Re-calculate the tsf id, as the leader-follower relations depend
@@ -2633,8 +2630,9 @@ static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
if (vif->cfg.assoc) {
/* clear statistics to get clean beacon counter */
iwl_mvm_request_statistics(mvm, true);
- memset(&mvmvif->deflink.beacon_stats, 0,
- sizeof(mvmvif->deflink.beacon_stats));
+ for_each_mvm_vif_valid_link(mvmvif, i)
+ memset(&mvmvif->link[i]->beacon_stats, 0,
+ sizeof(mvmvif->link[i]->beacon_stats));
/* add quota for this interface */
ret = iwl_mvm_update_quotas(mvm, true, NULL);
@@ -2681,7 +2679,7 @@ static void iwl_mvm_bss_info_changed_station(struct iwl_mvm *mvm,
* time could be small without us having heard
* a beacon yet.
*/
- iwl_mvm_protect_assoc(mvm, vif, 0);
+ iwl_mvm_protect_assoc(mvm, vif, 0, 0);
}
iwl_mvm_sf_update(mvm, vif, false);
@@ -3048,22 +3046,6 @@ static void iwl_mvm_bss_info_changed(struct ieee80211_hw *hw,
struct ieee80211_bss_conf *bss_conf,
u64 changes)
{
- static const struct iwl_mvm_bss_info_changed_ops callbacks = {
- .bss_info_changed_sta = iwl_mvm_bss_info_changed_station,
- .bss_info_changed_ap_ibss = iwl_mvm_bss_info_changed_ap_ibss,
- };
-
- iwl_mvm_bss_info_changed_common(hw, vif, bss_conf, &callbacks,
- changes);
-}
-
-void
-iwl_mvm_bss_info_changed_common(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_bss_conf *bss_conf,
- const struct iwl_mvm_bss_info_changed_ops *callbacks,
- u64 changes)
-{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
mutex_lock(&mvm->mutex);
@@ -3073,12 +3055,11 @@ iwl_mvm_bss_info_changed_common(struct ieee80211_hw *hw,
switch (vif->type) {
case NL80211_IFTYPE_STATION:
- callbacks->bss_info_changed_sta(mvm, vif, bss_conf, changes);
+ iwl_mvm_bss_info_changed_station(mvm, vif, bss_conf, changes);
break;
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_ADHOC:
- callbacks->bss_info_changed_ap_ibss(mvm, vif, bss_conf,
- changes);
+ iwl_mvm_bss_info_changed_ap_ibss(mvm, vif, bss_conf, changes);
break;
case NL80211_IFTYPE_MONITOR:
if (changes & BSS_CHANGED_MU_GROUPS)
@@ -3785,12 +3766,25 @@ iwl_mvm_sta_state_assoc_to_authorized(struct iwl_mvm *mvm,
callbacks->mac_ctxt_changed(mvm, vif, false);
iwl_mvm_mei_host_associated(mvm, vif, mvm_sta);
+
+ /* when client is authorized (AP station marked as such),
+ * try to enable more links
+ */
+ if (vif->type == NL80211_IFTYPE_STATION &&
+ !test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
+ iwl_mvm_mld_select_links(mvm, vif, false);
}
mvm_sta->authorized = true;
iwl_mvm_rs_rate_init_all_links(mvm, vif, sta);
+ /* MFP is set by default before the station is authorized.
+ * Clear it here in case it's not used.
+ */
+ if (!sta->mfp)
+ return callbacks->update_sta(mvm, vif, sta);
+
return 0;
}
@@ -3878,9 +3872,14 @@ int iwl_mvm_mac_sta_state_common(struct ieee80211_hw *hw,
mutex_lock(&mvm->mutex);
- /* this would be a mac80211 bug ... but don't crash */
+ /* this would be a mac80211 bug ... but don't crash, unless we had a
+ * firmware crash while we were activating a link, in which case it is
+ * legit to have phy_ctxt = NULL. Don't bother not to WARN if we are in
+ * recovery flow since we spit tons of error messages anyway.
+ */
for_each_sta_active_link(vif, sta, link_sta, link_id) {
- if (WARN_ON_ONCE(!mvmvif->link[link_id]->phy_ctxt)) {
+ if (WARN_ON_ONCE(!mvmvif->link[link_id] ||
+ !mvmvif->link[link_id]->phy_ctxt)) {
mutex_unlock(&mvm->mutex);
return test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED,
&mvm->status) ? 0 : -EINVAL;
@@ -4019,7 +4018,7 @@ void iwl_mvm_mac_mgd_prepare_tx(struct ieee80211_hw *hw,
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
mutex_lock(&mvm->mutex);
- iwl_mvm_protect_assoc(mvm, vif, info->duration);
+ iwl_mvm_protect_assoc(mvm, vif, info->duration, info->link_id);
mutex_unlock(&mvm->mutex);
}
@@ -4156,12 +4155,21 @@ static int __iwl_mvm_mac_set_key(struct ieee80211_hw *hw,
* GTK on AP interface is a TX-only key, return 0;
* on IBSS they're per-station and because we're lazy
* we don't support them for RX, so do the same.
- * CMAC/GMAC in AP/IBSS modes must be done in software.
+ * CMAC/GMAC in AP/IBSS modes must be done in software
+ * on older NICs.
*
* Except, of course, beacon protection - it must be
- * offloaded since we just set a beacon template.
+ * offloaded since we just set a beacon template, and
+ * then we must also offload the IGTK (not just BIGTK)
+ * for firmware reasons.
+ *
+ * So just check for beacon protection - if we don't
+ * have it we cannot get here with keyidx >= 6, and
+ * if we do have it we need to send the key to FW in
+ * all cases (CMAC/GMAC).
*/
- if (keyidx < 6 &&
+ if (!wiphy_ext_feature_isset(hw->wiphy,
+ NL80211_EXT_FEATURE_BEACON_PROTECTION) &&
(key->cipher == WLAN_CIPHER_SUITE_AES_CMAC ||
key->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_128 ||
key->cipher == WLAN_CIPHER_SUITE_BIP_GMAC_256)) {
@@ -4390,6 +4398,39 @@ static bool iwl_mvm_rx_aux_roc(struct iwl_notif_wait_data *notif_wait,
#define AUX_ROC_MAX_DELAY MSEC_TO_TU(600)
#define AUX_ROC_SAFETY_BUFFER MSEC_TO_TU(20)
#define AUX_ROC_MIN_SAFETY_BUFFER MSEC_TO_TU(10)
+
+static void iwl_mvm_roc_duration_and_delay(struct ieee80211_vif *vif,
+ u32 duration_ms,
+ u32 *duration_tu,
+ u32 *delay)
+{
+ u32 dtim_interval = vif->bss_conf.dtim_period *
+ vif->bss_conf.beacon_int;
+
+ *delay = AUX_ROC_MIN_DELAY;
+ *duration_tu = MSEC_TO_TU(duration_ms);
+
+ /*
+ * If we are associated we want the delay time to be at least one
+ * dtim interval so that the FW can wait until after the DTIM and
+ * then start the time event, this will potentially allow us to
+ * remain off-channel for the max duration.
+ * Since we want to use almost a whole dtim interval we would also
+ * like the delay to be for 2-3 dtim intervals, in case there are
+ * other time events with higher priority.
+ */
+ if (vif->cfg.assoc) {
+ *delay = min_t(u32, dtim_interval * 3, AUX_ROC_MAX_DELAY);
+ /* We cannot remain off-channel longer than the DTIM interval */
+ if (dtim_interval <= *duration_tu) {
+ *duration_tu = dtim_interval - AUX_ROC_SAFETY_BUFFER;
+ if (*duration_tu <= AUX_ROC_MIN_DURATION)
+ *duration_tu = dtim_interval -
+ AUX_ROC_MIN_SAFETY_BUFFER;
+ }
+ }
+}
+
static int iwl_mvm_send_aux_roc_cmd(struct iwl_mvm *mvm,
struct ieee80211_channel *channel,
struct ieee80211_vif *vif,
@@ -4400,8 +4441,6 @@ static int iwl_mvm_send_aux_roc_cmd(struct iwl_mvm *mvm,
struct iwl_mvm_time_event_data *te_data = &mvmvif->hs_time_event_data;
static const u16 time_event_response[] = { HOT_SPOT_CMD };
struct iwl_notification_wait wait_time_event;
- u32 dtim_interval = vif->bss_conf.dtim_period *
- vif->bss_conf.beacon_int;
u32 req_dur, delay;
struct iwl_hs20_roc_req aux_roc_req = {
.action = cpu_to_le32(FW_CTXT_ACTION_ADD),
@@ -4422,29 +4461,7 @@ static int iwl_mvm_send_aux_roc_cmd(struct iwl_mvm *mvm,
/* Set the time and duration */
tail->apply_time = cpu_to_le32(iwl_mvm_get_systime(mvm));
- delay = AUX_ROC_MIN_DELAY;
- req_dur = MSEC_TO_TU(duration);
-
- /*
- * If we are associated we want the delay time to be at least one
- * dtim interval so that the FW can wait until after the DTIM and
- * then start the time event, this will potentially allow us to
- * remain off-channel for the max duration.
- * Since we want to use almost a whole dtim interval we would also
- * like the delay to be for 2-3 dtim intervals, in case there are
- * other time events with higher priority.
- */
- if (vif->cfg.assoc) {
- delay = min_t(u32, dtim_interval * 3, AUX_ROC_MAX_DELAY);
- /* We cannot remain off-channel longer than the DTIM interval */
- if (dtim_interval <= req_dur) {
- req_dur = dtim_interval - AUX_ROC_SAFETY_BUFFER;
- if (req_dur <= AUX_ROC_MIN_DURATION)
- req_dur = dtim_interval -
- AUX_ROC_MIN_SAFETY_BUFFER;
- }
- }
-
+ iwl_mvm_roc_duration_and_delay(vif, duration, &req_dur, &delay);
tail->duration = cpu_to_le32(req_dur);
tail->apply_time_max_delay = cpu_to_le32(delay);
@@ -4452,8 +4469,8 @@ static int iwl_mvm_send_aux_roc_cmd(struct iwl_mvm *mvm,
"ROC: Requesting to remain on channel %u for %ums\n",
channel->hw_value, req_dur);
IWL_DEBUG_TE(mvm,
- "\t(requested = %ums, max_delay = %ums, dtim_interval = %ums)\n",
- duration, delay, dtim_interval);
+ "\t(requested = %ums, max_delay = %ums)\n",
+ duration, delay);
/* Set the node address */
memcpy(tail->node_addr, vif->addr, ETH_ALEN);
@@ -4511,6 +4528,48 @@ static int iwl_mvm_send_aux_roc_cmd(struct iwl_mvm *mvm,
return res;
}
+static int iwl_mvm_roc_add_cmd(struct iwl_mvm *mvm,
+ struct ieee80211_channel *channel,
+ struct ieee80211_vif *vif,
+ int duration, u32 activity)
+{
+ int res;
+ u32 duration_tu, delay;
+ struct iwl_roc_req roc_req = {
+ .action = cpu_to_le32(FW_CTXT_ACTION_ADD),
+ .activity = cpu_to_le32(activity),
+ .sta_id = cpu_to_le32(mvm->aux_sta.sta_id),
+ };
+
+ lockdep_assert_held(&mvm->mutex);
+
+ /* Set the channel info data */
+ iwl_mvm_set_chan_info(mvm, &roc_req.channel_info,
+ channel->hw_value,
+ iwl_mvm_phy_band_from_nl80211(channel->band),
+ IWL_PHY_CHANNEL_MODE20, 0);
+
+ iwl_mvm_roc_duration_and_delay(vif, duration, &duration_tu,
+ &delay);
+ roc_req.duration = cpu_to_le32(duration_tu);
+ roc_req.max_delay = cpu_to_le32(delay);
+
+ IWL_DEBUG_TE(mvm,
+ "\t(requested = %ums, max_delay = %ums)\n",
+ duration, delay);
+ IWL_DEBUG_TE(mvm,
+ "Requesting to remain on channel %u for %utu\n",
+ channel->hw_value, duration_tu);
+
+ /* Set the node address */
+ memcpy(roc_req.node_addr, vif->addr, ETH_ALEN);
+
+ res = iwl_mvm_send_cmd_pdu(mvm, WIDE_ID(MAC_CONF_GROUP, ROC_CMD),
+ 0, sizeof(roc_req), &roc_req);
+
+ return res;
+}
+
static int iwl_mvm_add_aux_sta_for_hs20(struct iwl_mvm *mvm, u32 lmac_id)
{
int ret = 0;
@@ -4531,30 +4590,20 @@ static int iwl_mvm_add_aux_sta_for_hs20(struct iwl_mvm *mvm, u32 lmac_id)
return ret;
}
-static int iwl_mvm_roc_switch_binding(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- struct iwl_mvm_phy_ctxt *new_phy_ctxt)
+static int iwl_mvm_roc_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
- struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- int ret = 0;
+ int ret;
lockdep_assert_held(&mvm->mutex);
- /* Unbind the P2P_DEVICE from the current PHY context,
- * and if the PHY context is not used remove it.
- */
- ret = iwl_mvm_binding_remove_vif(mvm, vif);
- if (WARN(ret, "Failed unbinding P2P_DEVICE\n"))
+ ret = iwl_mvm_binding_add_vif(mvm, vif);
+ if (WARN(ret, "Failed binding P2P_DEVICE\n"))
return ret;
- iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
-
- /* Bind the P2P_DEVICE to the current PHY Context */
- mvmvif->deflink.phy_ctxt = new_phy_ctxt;
-
- ret = iwl_mvm_binding_add_vif(mvm, vif);
- WARN(ret, "Failed binding P2P_DEVICE\n");
- return ret;
+ /* The station and queue allocation must be done only after the binding
+ * is done, as otherwise the FW might incorrectly configure its state.
+ */
+ return iwl_mvm_add_p2p_bcast_sta(mvm, vif);
}
static int iwl_mvm_roc(struct ieee80211_hw *hw,
@@ -4565,12 +4614,81 @@ static int iwl_mvm_roc(struct ieee80211_hw *hw,
{
static const struct iwl_mvm_roc_ops ops = {
.add_aux_sta_for_hs20 = iwl_mvm_add_aux_sta_for_hs20,
- .switch_phy_ctxt = iwl_mvm_roc_switch_binding,
+ .link = iwl_mvm_roc_link,
};
return iwl_mvm_roc_common(hw, vif, channel, duration, type, &ops);
}
+static int iwl_mvm_roc_station(struct iwl_mvm *mvm,
+ struct ieee80211_channel *channel,
+ struct ieee80211_vif *vif,
+ int duration)
+{
+ int ret;
+ u32 cmd_id = WIDE_ID(MAC_CONF_GROUP, ROC_CMD);
+ u8 fw_ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id,
+ IWL_FW_CMD_VER_UNKNOWN);
+
+ if (fw_ver == IWL_FW_CMD_VER_UNKNOWN) {
+ ret = iwl_mvm_send_aux_roc_cmd(mvm, channel, vif, duration);
+ } else if (fw_ver == 3) {
+ ret = iwl_mvm_roc_add_cmd(mvm, channel, vif, duration,
+ ROC_ACTIVITY_HOTSPOT);
+ } else {
+ ret = -EOPNOTSUPP;
+ IWL_ERR(mvm, "ROC command version %d mismatch!\n", fw_ver);
+ }
+
+ return ret;
+}
+
+static int iwl_mvm_p2p_find_phy_ctxt(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_channel *channel)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct cfg80211_chan_def chandef;
+ int i;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (mvmvif->deflink.phy_ctxt &&
+ channel == mvmvif->deflink.phy_ctxt->channel)
+ return 0;
+
+ /* Try using a PHY context that is already in use */
+ for (i = 0; i < NUM_PHY_CTX; i++) {
+ struct iwl_mvm_phy_ctxt *phy_ctxt = &mvm->phy_ctxts[i];
+
+ if (!phy_ctxt->ref || mvmvif->deflink.phy_ctxt == phy_ctxt)
+ continue;
+
+ if (channel == phy_ctxt->channel) {
+ if (mvmvif->deflink.phy_ctxt)
+ iwl_mvm_phy_ctxt_unref(mvm,
+ mvmvif->deflink.phy_ctxt);
+
+ mvmvif->deflink.phy_ctxt = phy_ctxt;
+ iwl_mvm_phy_ctxt_ref(mvm, mvmvif->deflink.phy_ctxt);
+ return 0;
+ }
+ }
+
+ /* We already have a phy_ctxt, but it's not on the right channel */
+ if (mvmvif->deflink.phy_ctxt)
+ iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
+
+ mvmvif->deflink.phy_ctxt = iwl_mvm_get_free_phy_ctxt(mvm);
+ if (!mvmvif->deflink.phy_ctxt)
+ return -ENOSPC;
+
+ cfg80211_chandef_create(&chandef, channel, NL80211_CHAN_NO_HT);
+
+ return iwl_mvm_phy_ctxt_add(mvm, mvmvif->deflink.phy_ctxt,
+ &chandef, 1, 1);
+}
+
/* Execute the common part for MLD and non-MLD modes */
int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
struct ieee80211_channel *channel, int duration,
@@ -4578,12 +4696,8 @@ int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
const struct iwl_mvm_roc_ops *ops)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
- struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- struct cfg80211_chan_def chandef;
- struct iwl_mvm_phy_ctxt *phy_ctxt;
- bool band_change_removal;
- int ret, i;
u32 lmac_id;
+ int ret;
IWL_DEBUG_MAC80211(mvm, "enter (%d, %d, %d)\n", channel->hw_value,
duration, type);
@@ -4603,89 +4717,27 @@ int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
/* Use aux roc framework (HS20) */
ret = ops->add_aux_sta_for_hs20(mvm, lmac_id);
if (!ret)
- ret = iwl_mvm_send_aux_roc_cmd(mvm, channel,
- vif, duration);
+ ret = iwl_mvm_roc_station(mvm, channel, vif, duration);
goto out_unlock;
case NL80211_IFTYPE_P2P_DEVICE:
/* handle below */
break;
default:
- IWL_ERR(mvm, "vif isn't P2P_DEVICE: %d\n", vif->type);
+ IWL_ERR(mvm, "ROC: Invalid vif type=%u\n", vif->type);
ret = -EINVAL;
goto out_unlock;
}
- for (i = 0; i < NUM_PHY_CTX; i++) {
- phy_ctxt = &mvm->phy_ctxts[i];
- if (phy_ctxt->ref == 0 || mvmvif->deflink.phy_ctxt == phy_ctxt)
- continue;
- if (phy_ctxt->ref && channel == phy_ctxt->channel) {
- ret = ops->switch_phy_ctxt(mvm, vif, phy_ctxt);
- if (ret)
- goto out_unlock;
-
- iwl_mvm_phy_ctxt_ref(mvm, mvmvif->deflink.phy_ctxt);
- goto schedule_time_event;
- }
- }
-
- /* Need to update the PHY context only if the ROC channel changed */
- if (channel == mvmvif->deflink.phy_ctxt->channel)
- goto schedule_time_event;
-
- cfg80211_chandef_create(&chandef, channel, NL80211_CHAN_NO_HT);
-
- /*
- * Check if the remain-on-channel is on a different band and that
- * requires context removal, see iwl_mvm_phy_ctxt_changed(). If
- * so, we'll need to release and then re-configure here, since we
- * must not remove a PHY context that's part of a binding.
- */
- band_change_removal =
- fw_has_capa(&mvm->fw->ucode_capa,
- IWL_UCODE_TLV_CAPA_BINDING_CDB_SUPPORT) &&
- mvmvif->deflink.phy_ctxt->channel->band != chandef.chan->band;
-
- if (mvmvif->deflink.phy_ctxt->ref == 1 && !band_change_removal) {
- /*
- * Change the PHY context configuration as it is currently
- * referenced only by the P2P Device MAC (and we can modify it)
- */
- ret = iwl_mvm_phy_ctxt_changed(mvm, mvmvif->deflink.phy_ctxt,
- &chandef, 1, 1);
- if (ret)
- goto out_unlock;
- } else {
- /*
- * The PHY context is shared with other MACs (or we're trying to
- * switch bands), so remove the P2P Device from the binding,
- * allocate an new PHY context and create a new binding.
- */
- phy_ctxt = iwl_mvm_get_free_phy_ctxt(mvm);
- if (!phy_ctxt) {
- ret = -ENOSPC;
- goto out_unlock;
- }
-
- ret = iwl_mvm_phy_ctxt_changed(mvm, phy_ctxt, &chandef,
- 1, 1);
- if (ret) {
- IWL_ERR(mvm, "Failed to change PHY context\n");
- goto out_unlock;
- }
-
- ret = ops->switch_phy_ctxt(mvm, vif, phy_ctxt);
- if (ret)
- goto out_unlock;
+ ret = iwl_mvm_p2p_find_phy_ctxt(mvm, vif, channel);
+ if (ret)
+ goto out_unlock;
- iwl_mvm_phy_ctxt_ref(mvm, mvmvif->deflink.phy_ctxt);
- }
+ ret = ops->link(mvm, vif);
+ if (ret)
+ goto out_unlock;
-schedule_time_event:
- /* Schedule the time events */
ret = iwl_mvm_start_p2p_roc(mvm, vif, duration, type);
-
out_unlock:
mutex_unlock(&mvm->mutex);
IWL_DEBUG_MAC80211(mvm, "leave\n");
@@ -4742,8 +4794,9 @@ static int __iwl_mvm_add_chanctx(struct iwl_mvm *mvm,
{
u16 *phy_ctxt_id = (u16 *)ctx->drv_priv;
struct iwl_mvm_phy_ctxt *phy_ctxt;
- bool responder = iwl_mvm_is_ftm_responder_chanctx(mvm, ctx);
- struct cfg80211_chan_def *def = responder ? &ctx->def : &ctx->min_def;
+ bool use_def = iwl_mvm_is_ftm_responder_chanctx(mvm, ctx) ||
+ iwl_mvm_enable_fils(mvm, ctx);
+ struct cfg80211_chan_def *def = use_def ? &ctx->def : &ctx->min_def;
int ret;
lockdep_assert_held(&mvm->mutex);
@@ -4756,15 +4809,14 @@ static int __iwl_mvm_add_chanctx(struct iwl_mvm *mvm,
goto out;
}
- ret = iwl_mvm_phy_ctxt_changed(mvm, phy_ctxt, def,
- ctx->rx_chains_static,
- ctx->rx_chains_dynamic);
+ ret = iwl_mvm_phy_ctxt_add(mvm, phy_ctxt, def,
+ ctx->rx_chains_static,
+ ctx->rx_chains_dynamic);
if (ret) {
IWL_ERR(mvm, "Failed to add PHY context\n");
goto out;
}
- iwl_mvm_phy_ctxt_ref(mvm, phy_ctxt);
*phy_ctxt_id = phy_ctxt->id;
out:
return ret;
@@ -4810,8 +4862,9 @@ void iwl_mvm_change_chanctx(struct ieee80211_hw *hw,
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
u16 *phy_ctxt_id = (u16 *)ctx->drv_priv;
struct iwl_mvm_phy_ctxt *phy_ctxt = &mvm->phy_ctxts[*phy_ctxt_id];
- bool responder = iwl_mvm_is_ftm_responder_chanctx(mvm, ctx);
- struct cfg80211_chan_def *def = responder ? &ctx->def : &ctx->min_def;
+ bool use_def = iwl_mvm_is_ftm_responder_chanctx(mvm, ctx) ||
+ iwl_mvm_enable_fils(mvm, ctx);
+ struct cfg80211_chan_def *def = use_def ? &ctx->def : &ctx->min_def;
if (WARN_ONCE((phy_ctxt->ref > 1) &&
(changed & ~(IEEE80211_CHANCTX_CHANGE_WIDTH |
@@ -4950,7 +5003,7 @@ static int __iwl_mvm_assign_vif_chanctx(struct iwl_mvm *mvm,
if (!fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_CHANNEL_SWITCH_CMD)) {
- u32 duration = 3 * vif->bss_conf.beacon_int;
+ u32 duration = 5 * vif->bss_conf.beacon_int;
/* Protect the session to make sure we hear the first
* beacon on the new channel.
@@ -5228,8 +5281,8 @@ int iwl_mvm_tx_last_beacon(struct ieee80211_hw *hw)
return mvm->ibss_manager;
}
-int iwl_mvm_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
- bool set)
+static int iwl_mvm_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
+ bool set)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
@@ -5456,7 +5509,8 @@ int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
goto out_unlock;
}
- if (chsw->delay > IWL_MAX_CSA_BLOCK_TX)
+ if (chsw->delay > IWL_MAX_CSA_BLOCK_TX &&
+ hweight16(vif->valid_links) <= 1)
schedule_delayed_work(&mvmvif->csa_work, 0);
if (chsw->block_tx) {
@@ -5535,7 +5589,7 @@ void iwl_mvm_channel_switch_rx_beacon(struct ieee80211_hw *hw,
if (mvmvif->csa_misbehave) {
/* Second time, give up on this AP*/
iwl_mvm_abort_channel_switch(hw, vif);
- ieee80211_chswitch_done(vif, false);
+ ieee80211_chswitch_done(vif, false, 0);
mvmvif->csa_misbehave = false;
return;
}
@@ -5629,7 +5683,8 @@ void iwl_mvm_mac_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
}
if (drop) {
- if (iwl_mvm_flush_sta(mvm, mvmsta, false))
+ if (iwl_mvm_flush_sta(mvm, mvmsta->deflink.sta_id,
+ mvmsta->tfd_queue_msk))
IWL_ERR(mvm, "flush request fail\n");
} else {
if (iwl_mvm_has_new_tx_api(mvm))
@@ -5651,22 +5706,21 @@ void iwl_mvm_mac_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
void iwl_mvm_mac_flush_sta(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
struct ieee80211_sta *sta)
{
+ struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
- int i;
+ struct iwl_mvm_link_sta *mvm_link_sta;
+ struct ieee80211_link_sta *link_sta;
+ int link_id;
mutex_lock(&mvm->mutex);
- for (i = 0; i < mvm->fw->ucode_capa.num_stations; i++) {
- struct iwl_mvm_sta *mvmsta;
- struct ieee80211_sta *tmp;
-
- tmp = rcu_dereference_protected(mvm->fw_id_to_mac_id[i],
- lockdep_is_held(&mvm->mutex));
- if (tmp != sta)
+ for_each_sta_active_link(vif, sta, link_sta, link_id) {
+ mvm_link_sta = rcu_dereference_protected(mvmsta->link[link_id],
+ lockdep_is_held(&mvm->mutex));
+ if (!mvm_link_sta)
continue;
- mvmsta = iwl_mvm_sta_from_mac80211(sta);
-
- if (iwl_mvm_flush_sta(mvm, mvmsta, false))
+ if (iwl_mvm_flush_sta(mvm, mvm_link_sta->sta_id,
+ mvmsta->tfd_queue_msk))
IWL_ERR(mvm, "flush request fail\n");
}
mutex_unlock(&mvm->mutex);
@@ -5676,7 +5730,11 @@ int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx,
struct survey_info *survey)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
- int ret;
+ int ret = 0;
+ u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw,
+ WIDE_ID(SYSTEM_GROUP,
+ SYSTEM_STATISTICS_CMD),
+ IWL_FW_CMD_VER_UNKNOWN);
memset(survey, 0, sizeof(*survey));
@@ -5696,13 +5754,8 @@ int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx,
goto out;
}
- survey->filled = SURVEY_INFO_TIME |
- SURVEY_INFO_TIME_RX |
- SURVEY_INFO_TIME_TX |
- SURVEY_INFO_TIME_SCAN;
- survey->time = mvm->accu_radio_stats.on_time_rf +
- mvm->radio_stats.on_time_rf;
- do_div(survey->time, USEC_PER_MSEC);
+ survey->filled = SURVEY_INFO_TIME_RX |
+ SURVEY_INFO_TIME_TX;
survey->time_rx = mvm->accu_radio_stats.rx_time +
mvm->radio_stats.rx_time;
@@ -5712,11 +5765,20 @@ int iwl_mvm_mac_get_survey(struct ieee80211_hw *hw, int idx,
mvm->radio_stats.tx_time;
do_div(survey->time_tx, USEC_PER_MSEC);
+ /* the new fw api doesn't support the following fields */
+ if (cmd_ver != IWL_FW_CMD_VER_UNKNOWN)
+ goto out;
+
+ survey->filled |= SURVEY_INFO_TIME |
+ SURVEY_INFO_TIME_SCAN;
+ survey->time = mvm->accu_radio_stats.on_time_rf +
+ mvm->radio_stats.on_time_rf;
+ do_div(survey->time, USEC_PER_MSEC);
+
survey->time_scan = mvm->accu_radio_stats.on_time_scan +
mvm->radio_stats.on_time_scan;
do_div(survey->time_scan, USEC_PER_MSEC);
- ret = 0;
out:
mutex_unlock(&mvm->mutex);
return ret;
@@ -5865,6 +5927,7 @@ void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw,
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ int i;
if (mvmsta->deflink.avg_energy) {
sinfo->signal_avg = -(s8)mvmsta->deflink.avg_energy;
@@ -5893,8 +5956,11 @@ void iwl_mvm_mac_sta_statistics(struct ieee80211_hw *hw,
if (iwl_mvm_request_statistics(mvm, false))
goto unlock;
- sinfo->rx_beacon = mvmvif->deflink.beacon_stats.num_beacons +
- mvmvif->deflink.beacon_stats.accu_num_beacons;
+ sinfo->rx_beacon = 0;
+ for_each_mvm_vif_valid_link(mvmvif, i)
+ sinfo->rx_beacon += mvmvif->link[i]->beacon_stats.num_beacons +
+ mvmvif->link[i]->beacon_stats.accu_num_beacons;
+
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_BEACON_RX);
if (mvmvif->deflink.beacon_stats.avg_signal) {
/* firmware only reports a value after RXing a few beacons */
@@ -6200,6 +6266,7 @@ const struct ieee80211_ops iwl_mvm_hw_ops = {
.wake_tx_queue = iwl_mvm_mac_wake_tx_queue,
.ampdu_action = iwl_mvm_mac_ampdu_action,
.get_antenna = iwl_mvm_op_get_antenna,
+ .set_antenna = iwl_mvm_op_set_antenna,
.start = iwl_mvm_mac_start,
.reconfig_complete = iwl_mvm_mac_reconfig_complete,
.stop = iwl_mvm_mac_stop,
@@ -6282,6 +6349,7 @@ const struct ieee80211_ops iwl_mvm_hw_ops = {
.can_aggregate_in_amsdu = iwl_mvm_mac_can_aggregate,
#ifdef CONFIG_IWLWIFI_DEBUGFS
+ .vif_add_debugfs = iwl_mvm_vif_add_debugfs,
.link_sta_add_debugfs = iwl_mvm_link_sta_add_debugfs,
#endif
.set_hw_timestamp = iwl_mvm_set_hw_timestamp,
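For reference, a minimal standalone sketch (not driver code; the types and names below are simplified stand-ins) of the reworked survey accounting above: RX/TX air time is always reported, while the total RF-on and scan times are only filled in when the legacy statistics API is in use.

#include <stdbool.h>
#include <stdint.h>

struct radio_stats {
	uint64_t on_time_rf;	/* usec */
	uint64_t on_time_scan;	/* usec */
	uint64_t rx_time;	/* usec */
	uint64_t tx_time;	/* usec */
};

struct survey_out {
	uint64_t time, time_rx, time_tx, time_scan;	/* msec */
	bool has_time, has_scan;
};

#define USEC_PER_MSEC 1000ULL

static void fill_survey(const struct radio_stats *acc,
			const struct radio_stats *cur,
			bool new_stats_api, struct survey_out *s)
{
	s->has_time = false;
	s->has_scan = false;

	/* RX/TX air time is reported by both firmware API generations */
	s->time_rx = (acc->rx_time + cur->rx_time) / USEC_PER_MSEC;
	s->time_tx = (acc->tx_time + cur->tx_time) / USEC_PER_MSEC;

	/* the new statistics API no longer reports RF-on / scan time */
	if (new_stats_api)
		return;

	s->time = (acc->on_time_rf + cur->on_time_rf) / USEC_PER_MSEC;
	s->time_scan = (acc->on_time_scan + cur->on_time_scan) / USEC_PER_MSEC;
	s->has_time = true;
	s->has_scan = true;
}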
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
index 2c9f2f71b083..ea3e9e9c6e26 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-key.c
@@ -24,10 +24,15 @@ static u32 iwl_mvm_get_sec_sta_mask(struct iwl_mvm *mvm,
return 0;
}
- /* AP group keys are per link and should be on the mcast STA */
+ /* AP group keys are per link and should be on the mcast/bcast STA */
if (vif->type == NL80211_IFTYPE_AP &&
- !(keyconf->flags & IEEE80211_KEY_FLAG_PAIRWISE))
+ !(keyconf->flags & IEEE80211_KEY_FLAG_PAIRWISE)) {
+ /* IGTK/BIGTK to bcast STA */
+ if (keyconf->keyidx >= 4)
+ return BIT(link_info->bcast_sta.sta_id);
+ /* GTK for data to mcast STA */
return BIT(link_info->mcast_sta.sta_id);
+ }
/* for client mode use the AP STA also for group keys */
if (!sta && vif->type == NL80211_IFTYPE_STATION)
@@ -91,7 +96,12 @@ u32 iwl_mvm_get_sec_flags(struct iwl_mvm *mvm,
if (!sta && vif->type == NL80211_IFTYPE_STATION)
sta = mvmvif->ap_sta;
- if (!IS_ERR_OR_NULL(sta) && sta->mfp)
+ /* Also set the MFP flag for an AP interface where the key is an IGTK
+ * key, since in that case the station pointer is always NULL
+ */
+ if ((!IS_ERR_OR_NULL(sta) && sta->mfp) ||
+ (vif->type == NL80211_IFTYPE_AP &&
+ (keyconf->keyidx == 4 || keyconf->keyidx == 5)))
flags |= IWL_SEC_KEY_FLAG_MFP;
return flags;
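A minimal standalone sketch of the group-key routing rule added above, using hypothetical names: data GTKs (key index 0-3) are installed on the multicast station, IGTK/BIGTK (index 4 and up) on the broadcast station, and on an AP the IGTK indices are additionally marked as MFP keys.

#include <stdbool.h>
#include <stdint.h>

struct group_key_target {
	uint32_t sta_mask;
	bool mfp;
};

static struct group_key_target
route_ap_group_key(int keyidx, uint32_t mcast_sta_id, uint32_t bcast_sta_id)
{
	struct group_key_target t = { 0 };

	if (keyidx >= 4) {
		/* IGTK (4/5) or BIGTK (6/7) goes to the broadcast STA */
		t.sta_mask = 1u << bcast_sta_id;
		/* only the IGTK indices get the MFP flag */
		t.mfp = (keyidx == 4 || keyidx == 5);
	} else {
		/* GTK for data frames goes to the multicast STA */
		t.sta_mask = 1u << mcast_sta_id;
	}
	return t;
}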
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
index b719843e9457..ff6cb064051b 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-mac80211.c
@@ -10,6 +10,7 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw,
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
int ret;
+ int i;
mutex_lock(&mvm->mutex);
@@ -22,8 +23,9 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw,
/* make sure that beacon statistics don't go backwards with FW reset */
if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status))
- mvmvif->deflink.beacon_stats.accu_num_beacons +=
- mvmvif->deflink.beacon_stats.num_beacons;
+ for_each_mvm_vif_valid_link(mvmvif, i)
+ mvmvif->link[i]->beacon_stats.accu_num_beacons +=
+ mvmvif->link[i]->beacon_stats.num_beacons;
/* Allocate resources for the MAC context, and add it to the fw */
ret = iwl_mvm_mac_ctxt_init(mvm, vif);
@@ -56,43 +58,15 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw,
IEEE80211_VIF_SUPPORTS_CQM_RSSI;
}
- /*
- * P2P_DEVICE interface does not have a channel context assigned to it,
- * so a dedicated PHY context is allocated to it and the corresponding
- * MAC context is bound to it at this stage.
- */
- if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
- mvmvif->deflink.phy_ctxt = iwl_mvm_get_free_phy_ctxt(mvm);
- if (!mvmvif->deflink.phy_ctxt) {
- ret = -ENOSPC;
- goto out_free_bf;
- }
-
- iwl_mvm_phy_ctxt_ref(mvm, mvmvif->deflink.phy_ctxt);
- ret = iwl_mvm_add_link(mvm, vif, &vif->bss_conf);
- if (ret)
- goto out_unref_phy;
-
- ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf,
- LINK_CONTEXT_MODIFY_ACTIVE |
- LINK_CONTEXT_MODIFY_RATES_INFO,
- true);
- if (ret)
- goto out_remove_link;
-
- ret = iwl_mvm_mld_add_bcast_sta(mvm, vif, &vif->bss_conf);
- if (ret)
- goto out_remove_link;
+ ret = iwl_mvm_add_link(mvm, vif, &vif->bss_conf);
+ if (ret)
+ goto out_free_bf;
- /* Save a pointer to p2p device vif, so it can later be used to
- * update the p2p device MAC when a GO is started/stopped
- */
+ /* Save a pointer to p2p device vif, so it can later be used to
+ * update the p2p device MAC when a GO is started/stopped
+ */
+ if (vif->type == NL80211_IFTYPE_P2P_DEVICE)
mvm->p2p_device_vif = vif;
- } else {
- ret = iwl_mvm_add_link(mvm, vif, &vif->bss_conf);
- if (ret)
- goto out_free_bf;
- }
ret = iwl_mvm_power_update_mac(mvm);
if (ret)
@@ -107,7 +81,7 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw,
ieee80211_hw_set(mvm->hw, RX_INCLUDES_FCS);
}
- iwl_mvm_vif_dbgfs_register(mvm, vif);
+ iwl_mvm_vif_dbgfs_add_link(mvm, vif);
if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
vif->type == NL80211_IFTYPE_STATION && !vif->p2p &&
@@ -119,10 +93,6 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw,
goto out_unlock;
- out_remove_link:
- iwl_mvm_disable_link(mvm, vif, &vif->bss_conf);
- out_unref_phy:
- iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
out_free_bf:
if (mvm->bf_allowed_vif == mvmvif) {
mvm->bf_allowed_vif = NULL;
@@ -130,7 +100,6 @@ static int iwl_mvm_mld_mac_add_interface(struct ieee80211_hw *hw,
IEEE80211_VIF_SUPPORTS_CQM_RSSI);
}
out_remove_mac:
- mvmvif->deflink.phy_ctxt = NULL;
mvmvif->link[0] = NULL;
iwl_mvm_mld_mac_ctxt_remove(mvm, vif);
out_unlock:
@@ -168,7 +137,7 @@ static void iwl_mvm_mld_mac_remove_interface(struct ieee80211_hw *hw,
if (vif->bss_conf.ftm_responder)
memset(&mvm->ftm_resp_stats, 0, sizeof(mvm->ftm_resp_stats));
- iwl_mvm_vif_dbgfs_clean(mvm, vif);
+ iwl_mvm_vif_dbgfs_rm_link(mvm, vif);
/* For AP/GO interface, the tear down of the resources allocated to the
* interface is be handled as part of the stop_ap flow.
@@ -185,14 +154,18 @@ static void iwl_mvm_mld_mac_remove_interface(struct ieee80211_hw *hw,
iwl_mvm_power_update_mac(mvm);
+ /* Before the interface is removed, mac80211 cancels the ROC and, if
+ * needed, schedules the ROC worker. The worker is flushed in
+ * iwl_mvm_prepare_mac_removal(), so at this point the link is no
+ * longer active and we only need to remove it.
+ */
if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
+ if (mvmvif->deflink.phy_ctxt) {
+ iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
+ mvmvif->deflink.phy_ctxt = NULL;
+ }
mvm->p2p_device_vif = NULL;
-
- /* P2P device uses only one link */
- iwl_mvm_mld_rm_bcast_sta(mvm, vif, &vif->bss_conf);
- iwl_mvm_disable_link(mvm, vif, &vif->bss_conf);
- iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
- mvmvif->deflink.phy_ctxt = NULL;
+ iwl_mvm_remove_link(mvm, vif, &vif->bss_conf);
} else {
iwl_mvm_disable_link(mvm, vif, &vif->bss_conf);
}
@@ -240,8 +213,8 @@ static int iwl_mvm_esr_mode_active(struct iwl_mvm *mvm,
mvmvif->esr_active = true;
- /* Disable SMPS overrideing by user */
- vif->driver_flags |= IEEE80211_VIF_DISABLE_SMPS_OVERRIDE;
+ /* Indicate to mac80211 that EML is enabled */
+ vif->driver_flags |= IEEE80211_VIF_EML_ACTIVE;
iwl_mvm_update_smps_on_active_links(mvm, vif, IWL_MVM_SMPS_REQ_FW,
IEEE80211_SMPS_OFF);
@@ -399,7 +372,7 @@ static int iwl_mvm_esr_mode_inactive(struct iwl_mvm *mvm,
mvmvif->esr_active = false;
- vif->driver_flags &= ~IEEE80211_VIF_DISABLE_SMPS_OVERRIDE;
+ vif->driver_flags &= ~IEEE80211_VIF_EML_ACTIVE;
iwl_mvm_update_smps_on_active_links(mvm, vif, IWL_MVM_SMPS_REQ_FW,
IEEE80211_SMPS_AUTOMATIC);
@@ -489,10 +462,17 @@ static void iwl_mvm_mld_unassign_vif_chanctx(struct ieee80211_hw *hw,
struct ieee80211_bss_conf *link_conf,
struct ieee80211_chanctx_conf *ctx)
{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
mutex_lock(&mvm->mutex);
__iwl_mvm_mld_unassign_vif_chanctx(mvm, vif, link_conf, ctx, false);
+ /* in the non-MLD case, remove/re-add the link to clean up FW state */
+ if (!ieee80211_vif_is_mld(vif) && !mvmvif->ap_sta &&
+ !WARN_ON_ONCE(vif->cfg.assoc)) {
+ iwl_mvm_remove_link(mvm, vif, link_conf);
+ iwl_mvm_add_link(mvm, vif, link_conf);
+ }
mutex_unlock(&mvm->mutex);
}
@@ -623,6 +603,126 @@ static int iwl_mvm_mld_mac_sta_state(struct ieee80211_hw *hw,
&callbacks);
}
+struct iwl_mvm_link_sel_data {
+ u8 link_id;
+ enum nl80211_band band;
+ bool active;
+};
+
+static bool iwl_mvm_mld_valid_link_pair(struct iwl_mvm_link_sel_data *a,
+ struct iwl_mvm_link_sel_data *b)
+{
+ return a->band != b->band;
+}
+
+void iwl_mvm_mld_select_links(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ bool valid_links_changed)
+{
+ struct iwl_mvm_link_sel_data data[IEEE80211_MLD_MAX_NUM_LINKS];
+ unsigned long usable_links = ieee80211_vif_usable_links(vif);
+ u32 max_active_links = iwl_mvm_max_active_links(mvm, vif);
+ u16 new_active_links;
+ u8 link_id, n_data = 0, i, j;
+
+ if (!IWL_MVM_AUTO_EML_ENABLE)
+ return;
+
+ if (!ieee80211_vif_is_mld(vif) || usable_links == 1)
+ return;
+
+ /* The logic below is a simple version that doesn't handle more than
+ * 2 links
+ */
+ WARN_ON_ONCE(max_active_links > 2);
+
+ /* if only a single active link is supported, assume that the one
+ * selected by the higher layer for connection establishment is the best.
+ */
+ if (max_active_links == 1 && !valid_links_changed)
+ return;
+
+ /* If we are already using the maximal number of active links, don't
+ * change anything. This can later be optimized to pick a 'better' link pair.
+ */
+ if (hweight16(vif->active_links) == max_active_links)
+ return;
+
+ rcu_read_lock();
+
+ for_each_set_bit(link_id, &usable_links, IEEE80211_MLD_MAX_NUM_LINKS) {
+ struct ieee80211_bss_conf *link_conf =
+ rcu_dereference(vif->link_conf[link_id]);
+
+ if (WARN_ON_ONCE(!link_conf))
+ continue;
+
+ data[n_data].link_id = link_id;
+ data[n_data].band = link_conf->chandef.chan->band;
+ data[n_data].active = vif->active_links & BIT(link_id);
+ n_data++;
+ }
+
+ rcu_read_unlock();
+
+ /* this is expected to be the current active link */
+ if (n_data == 1)
+ return;
+
+ new_active_links = 0;
+
+ /* Assume that after association only a single link is active and
+ * thus select only the 2nd link
+ */
+ if (!valid_links_changed) {
+ for (i = 0; i < n_data; i++) {
+ if (data[i].active)
+ break;
+ }
+
+ if (WARN_ON_ONCE(i == n_data))
+ return;
+
+ for (j = 0; j < n_data; j++) {
+ if (i == j)
+ continue;
+
+ if (iwl_mvm_mld_valid_link_pair(&data[i], &data[j]))
+ break;
+ }
+
+ if (j != n_data)
+ new_active_links = BIT(data[i].link_id) |
+ BIT(data[j].link_id);
+ } else {
+ /* Try to find a valid link pair for EMLSR operation. If a pair
+ * is not found, continue using the current active link.
+ */
+ for (i = 0; i < n_data; i++) {
+ for (j = 0; j < n_data; j++) {
+ if (i == j)
+ continue;
+
+ if (iwl_mvm_mld_valid_link_pair(&data[i],
+ &data[j]))
+ break;
+ }
+
+ /* found a valid pair for EMLSR, use it */
+ if (j != n_data) {
+ new_active_links = BIT(data[i].link_id) |
+ BIT(data[j].link_id);
+ break;
+ }
+ }
+ }
+
+ if (WARN_ON(!new_active_links))
+ return;
+
+ if (vif->active_links != new_active_links)
+ ieee80211_set_active_links_async(vif, new_active_links);
+}
+
static void
iwl_mvm_mld_link_info_changed_station(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
@@ -653,7 +753,7 @@ iwl_mvm_mld_link_info_changed_station(struct iwl_mvm *mvm,
}
/* Update EHT Puncturing info */
- if (changes & BSS_CHANGED_EHT_PUNCTURING && vif->cfg.assoc && has_eht)
+ if (changes & BSS_CHANGED_EHT_PUNCTURING && vif->cfg.assoc)
link_changes |= LINK_CONTEXT_MODIFY_EHT_PARAMS;
if (link_changes) {
@@ -667,6 +767,9 @@ iwl_mvm_mld_link_info_changed_station(struct iwl_mvm *mvm,
if (ret)
IWL_ERR(mvm, "failed to update MAC %pM\n", vif->addr);
+ if (changes & BSS_CHANGED_MLD_VALID_LINKS)
+ iwl_mvm_mld_select_links(mvm, vif, true);
+
memcpy(mvmvif->link[link_conf->link_id]->bssid, link_conf->bssid,
ETH_ALEN);
@@ -757,6 +860,12 @@ static void iwl_mvm_mld_vif_cfg_changed_station(struct iwl_mvm *mvm,
if (!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
protect) {
+ /* We are in assoc, so only one link is active -
+ * the association link
+ */
+ unsigned int link_id =
+ ffs(vif->active_links) - 1;
+
/* If we're not restarting and still haven't
* heard a beacon (dtim period unknown) then
* make sure we still have enough minimum time
@@ -766,7 +875,7 @@ static void iwl_mvm_mld_vif_cfg_changed_station(struct iwl_mvm *mvm,
* time could be small without us having heard
* a beacon yet.
*/
- iwl_mvm_protect_assoc(mvm, vif, 0);
+ iwl_mvm_protect_assoc(mvm, vif, 0, link_id);
}
iwl_mvm_sf_update(mvm, vif, false);
@@ -968,36 +1077,29 @@ iwl_mvm_mld_mac_conf_tx(struct ieee80211_hw *hw,
return 0;
}
-static int iwl_mvm_link_switch_phy_ctx(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- struct iwl_mvm_phy_ctxt *new_phy_ctxt)
+static int iwl_mvm_mld_roc_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
- struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
- int ret = 0;
+ int ret;
lockdep_assert_held(&mvm->mutex);
- /* Inorder to change the phy_ctx of a link, the link needs to be
- * inactive. Therefore, first deactivate the link, then change its
- * phy_ctx, and then activate it again.
- */
- ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf,
- LINK_CONTEXT_MODIFY_ACTIVE, false);
- if (WARN(ret, "Failed to deactivate link\n"))
+ /* The PHY context ID might have changed, so we need to set it */
+ ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf, 0, false);
+ if (WARN(ret, "Failed to set PHY context ID\n"))
return ret;
- iwl_mvm_phy_ctxt_unref(mvm, mvmvif->deflink.phy_ctxt);
-
- mvmvif->deflink.phy_ctxt = new_phy_ctxt;
+ ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf,
+ LINK_CONTEXT_MODIFY_ACTIVE |
+ LINK_CONTEXT_MODIFY_RATES_INFO,
+ true);
- ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf, 0, false);
- if (WARN(ret, "Failed to deactivate link\n"))
+ if (WARN(ret, "Failed linking P2P_DEVICE\n"))
return ret;
- ret = iwl_mvm_link_changed(mvm, vif, &vif->bss_conf,
- LINK_CONTEXT_MODIFY_ACTIVE, true);
- WARN(ret, "Failed binding P2P_DEVICE\n");
- return ret;
+ /* The station and queue allocation must be done only after the linking
+ * is done, as otherwise the FW might incorrectly configure its state.
+ */
+ return iwl_mvm_mld_add_bcast_sta(mvm, vif, &vif->bss_conf);
}
static int iwl_mvm_mld_roc(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
@@ -1006,7 +1108,7 @@ static int iwl_mvm_mld_roc(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
{
static const struct iwl_mvm_roc_ops ops = {
.add_aux_sta_for_hs20 = iwl_mvm_mld_add_aux_sta,
- .switch_phy_ctxt = iwl_mvm_link_switch_phy_ctx,
+ .link = iwl_mvm_mld_roc_link,
};
return iwl_mvm_roc_common(hw, vif, channel, duration, type, &ops);
@@ -1089,9 +1191,6 @@ iwl_mvm_mld_change_vif_links(struct ieee80211_hw *hw,
}
}
- if (err)
- goto out_err;
-
err = 0;
if (new_links == 0) {
mvmvif->link[0] = &mvmvif->deflink;
@@ -1129,6 +1228,7 @@ const struct ieee80211_ops iwl_mvm_mld_hw_ops = {
.wake_tx_queue = iwl_mvm_mac_wake_tx_queue,
.ampdu_action = iwl_mvm_mac_ampdu_action,
.get_antenna = iwl_mvm_op_get_antenna,
+ .set_antenna = iwl_mvm_op_set_antenna,
.start = iwl_mvm_mac_start,
.reconfig_complete = iwl_mvm_mac_reconfig_complete,
.stop = iwl_mvm_mac_stop,
@@ -1175,8 +1275,6 @@ const struct ieee80211_ops iwl_mvm_mld_hw_ops = {
.tx_last_beacon = iwl_mvm_tx_last_beacon,
- .set_tim = iwl_mvm_set_tim,
-
.channel_switch = iwl_mvm_channel_switch,
.pre_channel_switch = iwl_mvm_pre_channel_switch,
.post_channel_switch = iwl_mvm_post_channel_switch,
@@ -1211,6 +1309,8 @@ const struct ieee80211_ops iwl_mvm_mld_hw_ops = {
.abort_pmsr = iwl_mvm_abort_pmsr,
#ifdef CONFIG_IWLWIFI_DEBUGFS
+ .vif_add_debugfs = iwl_mvm_vif_add_debugfs,
+ .link_add_debugfs = iwl_mvm_link_add_debugfs,
.link_sta_add_debugfs = iwl_mvm_link_sta_add_debugfs,
#endif
.set_hw_timestamp = iwl_mvm_set_hw_timestamp,
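The link selection added above reduces to finding two usable links on different bands. A standalone sketch of that pairing rule, assuming a simplified link description (hypothetical types, not the driver's):

#include <stddef.h>

enum band { BAND_2GHZ, BAND_5GHZ, BAND_6GHZ };

struct link_info {
	int link_id;
	enum band band;
};

/* returns a bitmap of the two selected link IDs, or 0 if no valid pair */
static unsigned int select_emlsr_pair(const struct link_info *links, size_t n)
{
	for (size_t i = 0; i < n; i++)
		for (size_t j = i + 1; j < n; j++)
			/* a pair is valid only if the links are on
			 * different bands
			 */
			if (links[i].band != links[j].band)
				return (1u << links[i].link_id) |
				       (1u << links[j].link_id);
	return 0;
}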
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c
index 524852cf5cd2..ca5e4fbcf8ce 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mld-sta.c
@@ -347,7 +347,7 @@ static int iwl_mvm_mld_rm_int_sta(struct iwl_mvm *mvm,
return -EINVAL;
if (flush)
- iwl_mvm_flush_sta(mvm, int_sta, true);
+ iwl_mvm_flush_sta(mvm, int_sta->sta_id, int_sta->tfd_queue_msk);
iwl_mvm_mld_disable_txq(mvm, BIT(int_sta->sta_id), queuptr, tid);
@@ -697,6 +697,8 @@ int iwl_mvm_mld_add_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
/* at this stage sta link pointers are already allocated */
ret = iwl_mvm_mld_update_sta(mvm, vif, sta);
+ if (ret)
+ goto err;
for_each_sta_active_link(vif, sta, link_sta, link_id) {
struct ieee80211_bss_conf *link_conf =
@@ -1104,15 +1106,26 @@ int iwl_mvm_mld_update_sta_links(struct iwl_mvm *mvm,
link_sta_dereference_protected(sta, link_id);
mvm_vif_link = mvm_vif->link[link_id];
- if (WARN_ON(!mvm_vif_link || !link_conf || !link_sta ||
- mvm_sta->link[link_id])) {
+ if (WARN_ON(!mvm_vif_link || !link_conf || !link_sta)) {
ret = -EINVAL;
goto err;
}
- ret = iwl_mvm_mld_alloc_sta_link(mvm, vif, sta, link_id);
- if (WARN_ON(ret))
- goto err;
+ if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
+ if (WARN_ON(!mvm_sta->link[link_id])) {
+ ret = -EINVAL;
+ goto err;
+ }
+ } else {
+ if (WARN_ON(mvm_sta->link[link_id])) {
+ ret = -EINVAL;
+ goto err;
+ }
+ ret = iwl_mvm_mld_alloc_sta_link(mvm, vif, sta,
+ link_id);
+ if (WARN_ON(ret))
+ goto err;
+ }
link_sta->agg.max_rc_amsdu_len = 1;
ieee80211_sta_recalc_aggregates(sta);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
index b18c91c5dd5d..f2af3e571409 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mvm.h
@@ -121,15 +121,16 @@ struct iwl_mvm_time_event_data {
* if the te is in the time event list or not (when id == TE_MAX)
*/
u32 id;
+ u8 link_id;
};
/* Power management */
/**
* enum iwl_power_scheme
- * @IWL_POWER_LEVEL_CAM - Continuously Active Mode
- * @IWL_POWER_LEVEL_BPS - Balanced Power Save (default)
- * @IWL_POWER_LEVEL_LP - Low Power
+ * @IWL_POWER_SCHEME_CAM: Continuously Active Mode
+ * @IWL_POWER_SCHEME_BPS: Balanced Power Save (default)
+ * @IWL_POWER_SCHEME_LP: Low Power
*/
enum iwl_power_scheme {
IWL_POWER_SCHEME_CAM = 1,
@@ -137,7 +138,6 @@ enum iwl_power_scheme {
IWL_POWER_SCHEME_LP
};
-#define IWL_CONN_MAX_LISTEN_INTERVAL 10
#define IWL_UAPSD_MAX_SP IEEE80211_WMM_IE_STA_QOSINFO_SP_ALL
#ifdef CONFIG_IWLWIFI_DEBUGFS
@@ -218,7 +218,7 @@ enum iwl_bt_force_ant_mode {
};
/**
- * struct iwl_mvm_low_latency_force - low latency force mode set by debugfs
+ * enum iwl_mvm_low_latency_force - low latency force mode set by debugfs
* @LOW_LATENCY_FORCE_UNSET: unset force mode
* @LOW_LATENCY_FORCE_ON: for low latency on
* @LOW_LATENCY_FORCE_OFF: for low latency off
@@ -232,7 +232,7 @@ enum iwl_mvm_low_latency_force {
};
/**
-* struct iwl_mvm_low_latency_cause - low latency set causes
+* enum iwl_mvm_low_latency_cause - low latency set causes
* @LOW_LATENCY_TRAFFIC: indicates low latency traffic was detected
* @LOW_LATENCY_DEBUGFS: low latency mode set from debugfs
* @LOW_LATENCY_VCMD: low latency mode set from vendor command
@@ -302,7 +302,11 @@ struct iwl_probe_resp_data {
* @queue_params: QoS params for this MAC
* @mgmt_queue: queue number for unbufferable management frames
* @igtk: the current IGTK programmed into the firmware
+ * @active: indicates the link is active in FW (for sanity checking)
+ * @cab_queue: content-after-beacon (multicast) queue
* @listen_lmac: indicates this link is allocated to the listen LMAC
+ * @mcast_sta: multicast station
+ * @phy_ctxt: phy context allocated to this link, if any
*/
struct iwl_mvm_vif_link_info {
u8 bssid[ETH_ALEN];
@@ -342,6 +346,7 @@ struct iwl_mvm_vif_link_info {
/**
* struct iwl_mvm_vif - data per Virtual Interface, it is a MAC context
+ * @mvm: pointer back to the mvm struct
* @id: between 0 and 3
* @color: to solve races upon MAC addition and removal
* @associated: indicates that we're currently associated, used only for
@@ -364,6 +369,13 @@ struct iwl_mvm_vif_link_info {
* @csa_failed: CSA failed to schedule time event, report an error later
* @csa_bcn_pending: indicates that we are waiting for a beacon on a new channel
* @features: hw features active for this vif
+ * @ap_beacon_time: AP beacon time for synchronisation (on older FW)
+ * @bcn_prot: beacon protection data (keys; FIXME: needs to be per link)
+ * @bf_data: beacon filtering data
+ * @deflink: default link data for use in non-MLO
+ * @link: link data for each link in MLO
+ * @esr_active: indicates eSR mode is active
+ * @pm_enabled: indicates powersave is enabled
*/
struct iwl_mvm_vif {
struct iwl_mvm *mvm;
@@ -635,18 +647,9 @@ struct iwl_mvm_tcm {
* @queue: queue of this reorder buffer
* @last_amsdu: track last ASMDU SN for duplication detection
* @last_sub_index: track ASMDU sub frame index for duplication detection
- * @reorder_timer: timer for frames are in the reorder buffer. For AMSDU
- * it is the time of last received sub-frame
- * @removed: prevent timer re-arming
* @valid: reordering is valid for this queue
* @lock: protect reorder buffer internal state
* @mvm: mvm pointer, needed for frame timer context
- * @consec_oldsn_drops: consecutive drops due to old SN
- * @consec_oldsn_ampdu_gp2: A-MPDU GP2 timestamp to track
- * when to apply old SN consecutive drop workaround
- * @consec_oldsn_prev_drop: track whether or not an MPDU
- * that was single/part of the previous A-MPDU was
- * dropped due to old SN
*/
struct iwl_mvm_reorder_buffer {
u16 head_sn;
@@ -655,33 +658,21 @@ struct iwl_mvm_reorder_buffer {
int queue;
u16 last_amsdu;
u8 last_sub_index;
- struct timer_list reorder_timer;
- bool removed;
bool valid;
spinlock_t lock;
struct iwl_mvm *mvm;
- unsigned int consec_oldsn_drops;
- u32 consec_oldsn_ampdu_gp2;
- unsigned int consec_oldsn_prev_drop:1;
} ____cacheline_aligned_in_smp;
/**
- * struct _iwl_mvm_reorder_buf_entry - reorder buffer entry per-queue/per-seqno
+ * struct iwl_mvm_reorder_buf_entry - reorder buffer entry per-queue/per-seqno
* @frames: list of skbs stored
- * @reorder_time: time the packet was stored in the reorder buffer
*/
-struct _iwl_mvm_reorder_buf_entry {
- struct sk_buff_head frames;
- unsigned long reorder_time;
-};
-
-/* make this indirection to get the aligned thing */
struct iwl_mvm_reorder_buf_entry {
- struct _iwl_mvm_reorder_buf_entry e;
+ struct sk_buff_head frames;
}
#ifndef __CHECKER__
/* sparse doesn't like this construct: "bad integer constant expression" */
-__aligned(roundup_pow_of_two(sizeof(struct _iwl_mvm_reorder_buf_entry)))
+__aligned(roundup_pow_of_two(sizeof(struct sk_buff_head)))
#endif
;
@@ -689,15 +680,17 @@ __aligned(roundup_pow_of_two(sizeof(struct _iwl_mvm_reorder_buf_entry)))
* struct iwl_mvm_baid_data - BA session data
* @sta_mask: current station mask for the BAID
* @tid: tid of the session
- * @baid baid of the session
+ * @baid: baid of the session
* @timeout: the timeout set in the addba request
* @entries_per_queue: # of buffers per queue, this actually gets
* aligned up to avoid cache line sharing between queues
* @last_rx: last rx jiffies, updated only if timeout passed from last update
* @session_timer: timer to check if BA session expired, runs at 2 * timeout
+ * @rcu_ptr: BA data RCU protected access
+ * @rcu_head: RCU head for freeing this data
* @mvm: mvm pointer, needed for timer context
* @reorder_buf: reorder buffer, allocated per queue
- * @reorder_buf_data: data
+ * @entries: data
*/
struct iwl_mvm_baid_data {
struct rcu_head rcu_head;
@@ -967,6 +960,9 @@ struct iwl_mvm {
u8 scan_last_antenna_idx; /* to toggle TX between antennas */
u8 mgmt_last_antenna_idx;
+ u8 set_tx_ant;
+ u8 set_rx_ant;
+
/* last smart fifo state that was successfully sent to firmware */
enum iwl_sf_state sf_state;
@@ -1198,6 +1194,8 @@ struct iwl_mvm {
struct iwl_time_sync_data time_sync;
struct iwl_mei_scan_filter mei_scan_filter;
+
+ bool statistics_clear;
};
/* Extract MVM priv from op_mode and _hw */
@@ -1658,7 +1656,7 @@ const char *iwl_mvm_get_tx_fail_reason(u32 status);
static inline const char *iwl_mvm_get_tx_fail_reason(u32 status) { return ""; }
#endif
int iwl_mvm_flush_tx_path(struct iwl_mvm *mvm, u32 tfd_msk);
-int iwl_mvm_flush_sta(struct iwl_mvm *mvm, void *sta, bool internal);
+int iwl_mvm_flush_sta(struct iwl_mvm *mvm, u32 sta_id, u32 tfd_queue_mask);
int iwl_mvm_flush_sta_tids(struct iwl_mvm *mvm, u32 sta_id, u16 tids);
/* Utils to extract sta related data */
@@ -1689,6 +1687,16 @@ static inline void iwl_mvm_wait_for_async_handlers(struct iwl_mvm *mvm)
}
/* Statistics */
+void iwl_mvm_handle_rx_system_oper_stats(struct iwl_mvm *mvm,
+ struct iwl_rx_cmd_buffer *rxb);
+void iwl_mvm_handle_rx_system_oper_part1_stats(struct iwl_mvm *mvm,
+ struct iwl_rx_cmd_buffer *rxb);
+static inline void
+iwl_mvm_handle_rx_system_end_stats_notif(struct iwl_mvm *mvm,
+ struct iwl_rx_cmd_buffer *rxb)
+{
+}
+
void iwl_mvm_handle_rx_statistics(struct iwl_mvm *mvm,
struct iwl_rx_packet *pkt);
void iwl_mvm_rx_statistics(struct iwl_mvm *mvm,
@@ -1702,16 +1710,29 @@ int iwl_mvm_load_nvm_to_nic(struct iwl_mvm *mvm);
static inline u8 iwl_mvm_get_valid_tx_ant(struct iwl_mvm *mvm)
{
- return mvm->nvm_data && mvm->nvm_data->valid_tx_ant ?
- mvm->fw->valid_tx_ant & mvm->nvm_data->valid_tx_ant :
- mvm->fw->valid_tx_ant;
+ u8 tx_ant = mvm->fw->valid_tx_ant;
+
+ if (mvm->nvm_data && mvm->nvm_data->valid_tx_ant)
+ tx_ant &= mvm->nvm_data->valid_tx_ant;
+
+ if (mvm->set_tx_ant)
+ tx_ant &= mvm->set_tx_ant;
+
+ return tx_ant;
}
static inline u8 iwl_mvm_get_valid_rx_ant(struct iwl_mvm *mvm)
{
- return mvm->nvm_data && mvm->nvm_data->valid_rx_ant ?
- mvm->fw->valid_rx_ant & mvm->nvm_data->valid_rx_ant :
- mvm->fw->valid_rx_ant;
+ u8 rx_ant = mvm->fw->valid_rx_ant;
+
+ if (mvm->nvm_data && mvm->nvm_data->valid_rx_ant)
+ rx_ant &= mvm->nvm_data->valid_rx_ant;
+
+ if (mvm->set_rx_ant)
+ rx_ant &= mvm->set_rx_ant;
+
+ return rx_ant;
}
static inline void iwl_mvm_toggle_tx_ant(struct iwl_mvm *mvm, u8 *ant)
@@ -1892,41 +1913,10 @@ void iwl_mvm_stop_ap_ibss_common(struct iwl_mvm *mvm,
struct ieee80211_vif *vif);
/* BSS Info */
-/**
- * struct iwl_mvm_bss_info_changed_ops - callbacks for the bss_info_changed()
- *
- * Since the only difference between both MLD and
- * non-MLD versions of bss_info_changed() is these function calls,
- * each version will send its specific function calls to
- * %iwl_mvm_bss_info_changed_common().
- *
- * @bss_info_changed_sta: pointer to the function that handles changes
- * in bss_info in sta mode
- * @bss_info_changed_ap_ibss: pointer to the function that handles changes
- * in bss_info in ap and ibss modes
- */
-struct iwl_mvm_bss_info_changed_ops {
- void (*bss_info_changed_sta)(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- struct ieee80211_bss_conf *bss_conf,
- u64 changes);
- void (*bss_info_changed_ap_ibss)(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- struct ieee80211_bss_conf *bss_conf,
- u64 changes);
-};
-
-void
-iwl_mvm_bss_info_changed_common(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif,
- struct ieee80211_bss_conf *bss_conf,
- const struct iwl_mvm_bss_info_changed_ops *callbacks,
- u64 changes);
-void
-iwl_mvm_bss_info_changed_station_common(struct iwl_mvm *mvm,
- struct ieee80211_vif *vif,
- struct ieee80211_bss_conf *link_conf,
- u64 changes);
+void iwl_mvm_bss_info_changed_station_common(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ u64 changes);
void iwl_mvm_bss_info_changed_station_assoc(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
u64 changes);
@@ -1942,13 +1932,12 @@ void iwl_mvm_bss_info_changed_station_assoc(struct iwl_mvm *mvm,
*
* @add_aux_sta_for_hs20: pointer to the function that adds an aux sta
* for Hot Spot 2.0
- * @switch_phy_ctxt: pointer to the function that switches a vif from one
- * phy_ctx to another
+ * @link: For a P2P Device interface, pointer to a function that links the
+ * MAC/Link to the PHY context
*/
struct iwl_mvm_roc_ops {
int (*add_aux_sta_for_hs20)(struct iwl_mvm *mvm, u32 lmac_id);
- int (*switch_phy_ctxt)(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
- struct iwl_mvm_phy_ctxt *new_phy_ctxt);
+ int (*link)(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
};
int iwl_mvm_roc_common(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
@@ -1959,7 +1948,7 @@ int iwl_mvm_cancel_roc(struct ieee80211_hw *hw,
struct ieee80211_vif *vif);
/*Session Protection */
void iwl_mvm_protect_assoc(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
- u32 duration_override);
+ u32 duration_override, unsigned int link_id);
/* Quota management */
static inline size_t iwl_mvm_quota_cmd_size(struct iwl_mvm *mvm)
@@ -2019,18 +2008,19 @@ void iwl_mvm_rx_umac_scan_iter_complete_notif(struct iwl_mvm *mvm,
/* MVM debugfs */
#ifdef CONFIG_IWLWIFI_DEBUGFS
void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm);
-void iwl_mvm_vif_dbgfs_register(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
-void iwl_mvm_vif_dbgfs_clean(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
+void iwl_mvm_vif_add_debugfs(struct ieee80211_hw *hw, struct ieee80211_vif *vif);
+void iwl_mvm_vif_dbgfs_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
+void iwl_mvm_vif_dbgfs_rm_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
#else
static inline void iwl_mvm_dbgfs_register(struct iwl_mvm *mvm)
{
}
static inline void
-iwl_mvm_vif_dbgfs_register(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+iwl_mvm_vif_dbgfs_add_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
}
static inline void
-iwl_mvm_vif_dbgfs_clean(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
+iwl_mvm_vif_dbgfs_rm_link(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
}
#endif /* CONFIG_IWLWIFI_DEBUGFS */
@@ -2263,7 +2253,7 @@ struct ieee80211_regdomain *iwl_mvm_get_regdomain(struct wiphy *wiphy,
bool *changed);
struct ieee80211_regdomain *iwl_mvm_get_current_regdomain(struct iwl_mvm *mvm,
bool *changed);
-int iwl_mvm_init_fw_regd(struct iwl_mvm *mvm);
+int iwl_mvm_init_fw_regd(struct iwl_mvm *mvm, bool force_regd_sync);
void iwl_mvm_update_changed_regdom(struct iwl_mvm *mvm);
/* smart fifo */
@@ -2316,7 +2306,8 @@ void iwl_mvm_teardown_tdls_peers(struct iwl_mvm *mvm);
void iwl_mvm_recalc_tdls_state(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
bool sta_added);
void iwl_mvm_mac_mgd_protect_tdls_discover(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif);
+ struct ieee80211_vif *vif,
+ unsigned int link_id);
int iwl_mvm_tdls_channel_switch(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta, u8 oper_class,
@@ -2335,7 +2326,6 @@ void iwl_mvm_sync_rx_queues_internal(struct iwl_mvm *mvm,
enum iwl_mvm_rxq_notif_type type,
bool sync,
const void *data, u32 size);
-void iwl_mvm_reorder_timer_expired(struct timer_list *t);
struct ieee80211_vif *iwl_mvm_get_bss_vif(struct iwl_mvm *mvm);
struct ieee80211_vif *iwl_mvm_get_vif_by_macid(struct iwl_mvm *mvm, u32 macid);
bool iwl_mvm_is_vif_assoc(struct iwl_mvm *mvm);
@@ -2375,6 +2365,10 @@ void iwl_mvm_link_sta_add_debugfs(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_link_sta *link_sta,
struct dentry *dir);
+void iwl_mvm_link_add_debugfs(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf,
+ struct dentry *dir);
#endif
/* new MLD related APIs */
@@ -2427,7 +2421,8 @@ static inline u8 iwl_mvm_phy_band_from_nl80211(enum nl80211_band band)
/* Channel Switch */
void iwl_mvm_channel_switch_disconnect_wk(struct work_struct *wk);
int iwl_mvm_post_channel_switch(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif);
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link);
/* Channel Context */
/**
@@ -2611,6 +2606,7 @@ int iwl_mvm_mac_ampdu_action(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_ampdu_params *params);
int iwl_mvm_op_get_antenna(struct ieee80211_hw *hw, u32 *tx_ant, u32 *rx_ant);
+int iwl_mvm_op_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant);
int iwl_mvm_mac_start(struct ieee80211_hw *hw);
void iwl_mvm_mac_reconfig_complete(struct ieee80211_hw *hw,
enum ieee80211_reconfig_type reconfig_type);
@@ -2682,8 +2678,6 @@ void iwl_mvm_remove_chanctx(struct ieee80211_hw *hw,
void iwl_mvm_change_chanctx(struct ieee80211_hw *hw,
struct ieee80211_chanctx_conf *ctx, u32 changed);
int iwl_mvm_tx_last_beacon(struct ieee80211_hw *hw);
-int iwl_mvm_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
- bool set);
void iwl_mvm_channel_switch(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
struct ieee80211_channel_switch *chsw);
int iwl_mvm_pre_channel_switch(struct ieee80211_hw *hw,
@@ -2725,4 +2719,8 @@ int iwl_mvm_set_hw_timestamp(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct cfg80211_set_hw_timestamp *hwts);
int iwl_mvm_update_mu_groups(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
+bool iwl_mvm_enable_fils(struct iwl_mvm *mvm,
+ struct ieee80211_chanctx_conf *ctx);
+void iwl_mvm_mld_select_links(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
+ bool valid_links_changed);
#endif /* __IWL_MVM_H__ */
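The antenna getters above now intersect three masks: what the firmware supports, what the NVM allows, and the user override installed through the new set_antenna op. A small sketch of that intersection with hypothetical parameter names (a zero override means no restriction):

#include <stdint.h>

static uint8_t valid_ant_mask(uint8_t fw_ant, uint8_t nvm_ant, uint8_t user_ant)
{
	uint8_t ant = fw_ant;

	if (nvm_ant)		/* NVM may further restrict the FW mask */
		ant &= nvm_ant;
	if (user_ant)		/* optional user override, e.g. via debugfs */
		ant &= user_ant;

	return ant;
}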
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
index f67ab8ee18c2..c0dd441e800e 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright (C) 2012-2014, 2018-2019, 2021 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2019, 2021-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@@ -220,6 +220,8 @@ iwl_parse_nvm_sections(struct iwl_mvm *mvm)
struct iwl_nvm_section *sections = mvm->nvm_sections;
const __be16 *hw;
const __le16 *sw, *calib, *regulatory, *mac_override, *phy_sku;
+ u8 tx_ant = mvm->fw->valid_tx_ant;
+ u8 rx_ant = mvm->fw->valid_rx_ant;
int regulatory_type;
/* Checking for required sections */
@@ -270,9 +272,15 @@ iwl_parse_nvm_sections(struct iwl_mvm *mvm)
(const __le16 *)sections[NVM_SECTION_TYPE_REGULATORY_SDP].data :
(const __le16 *)sections[NVM_SECTION_TYPE_REGULATORY].data;
+ if (mvm->set_tx_ant)
+ tx_ant &= mvm->set_tx_ant;
+
+ if (mvm->set_rx_ant)
+ rx_ant &= mvm->set_rx_ant;
+
return iwl_parse_nvm_data(mvm->trans, mvm->cfg, mvm->fw, hw, sw, calib,
regulatory, mac_override, phy_sku,
- mvm->fw->valid_tx_ant, mvm->fw->valid_rx_ant);
+ tx_ant, rx_ant);
}
/* Loads the NVM data stored in mvm->nvm_sections into the NIC */
@@ -565,7 +573,7 @@ int iwl_mvm_init_mcc(struct iwl_mvm *mvm)
* try to replay the last set MCC to FW. If it doesn't exist,
* queue an update to cfg80211 to retrieve the default alpha2 from FW.
*/
- retval = iwl_mvm_init_fw_regd(mvm);
+ retval = iwl_mvm_init_fw_regd(mvm, true);
if (retval != -ENOENT)
return retval;
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
index 5336a4afde4d..fef86a8b4163 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/ops.c
@@ -322,6 +322,19 @@ static const struct iwl_rx_handlers iwl_mvm_rx_handlers[] = {
RX_HANDLER_NO_SIZE(STATISTICS_NOTIFICATION, iwl_mvm_rx_statistics,
RX_HANDLER_ASYNC_LOCKED),
+ RX_HANDLER_GRP(STATISTICS_GROUP, STATISTICS_OPER_NOTIF,
+ iwl_mvm_handle_rx_system_oper_stats,
+ RX_HANDLER_ASYNC_LOCKED,
+ struct iwl_system_statistics_notif_oper),
+ RX_HANDLER_GRP(STATISTICS_GROUP, STATISTICS_OPER_PART1_NOTIF,
+ iwl_mvm_handle_rx_system_oper_part1_stats,
+ RX_HANDLER_ASYNC_LOCKED,
+ struct iwl_system_statistics_part1_notif_oper),
+ RX_HANDLER_GRP(SYSTEM_GROUP, SYSTEM_STATISTICS_END_NOTIF,
+ iwl_mvm_handle_rx_system_end_stats_notif,
+ RX_HANDLER_ASYNC_LOCKED,
+ struct iwl_system_statistics_end_notif),
+
RX_HANDLER(BA_WINDOW_STATUS_NOTIFICATION_ID,
iwl_mvm_window_status_notif, RX_HANDLER_SYNC,
struct iwl_ba_window_status_notif),
@@ -426,6 +439,9 @@ static const struct iwl_rx_handlers iwl_mvm_rx_handlers[] = {
WNM_80211V_TIMING_MEASUREMENT_CONFIRM_NOTIFICATION,
iwl_mvm_time_sync_msmt_confirm_event, RX_HANDLER_SYNC,
struct iwl_time_msmt_cfm_notify),
+ RX_HANDLER_GRP(MAC_CONF_GROUP, ROC_NOTIF,
+ iwl_mvm_rx_roc_notif, RX_HANDLER_SYNC,
+ struct iwl_roc_notif),
};
#undef RX_HANDLER
#undef RX_HANDLER_GRP
@@ -535,6 +551,8 @@ static const struct iwl_hcmd_names iwl_mvm_system_names[] = {
HCMD_NAME(RFI_GET_FREQ_TABLE_CMD),
HCMD_NAME(SYSTEM_FEATURES_CONTROL_CMD),
HCMD_NAME(RFI_DEACTIVATE_NOTIF),
+ HCMD_NAME(SYSTEM_STATISTICS_CMD),
+ HCMD_NAME(SYSTEM_STATISTICS_END_NOTIF),
};
/* Please keep this array *SORTED* by hex value.
@@ -549,6 +567,8 @@ static const struct iwl_hcmd_names iwl_mvm_mac_conf_names[] = {
HCMD_NAME(AUX_STA_CMD),
HCMD_NAME(STA_REMOVE_CMD),
HCMD_NAME(STA_DISABLE_TX_CMD),
+ HCMD_NAME(ROC_CMD),
+ HCMD_NAME(ROC_NOTIF),
HCMD_NAME(SESSION_PROTECTION_NOTIF),
HCMD_NAME(CHANNEL_SWITCH_START_NOTIF),
};
@@ -589,6 +609,14 @@ static const struct iwl_hcmd_names iwl_mvm_data_path_names[] = {
/* Please keep this array *SORTED* by hex value.
* Access is done through binary search
*/
+static const struct iwl_hcmd_names iwl_mvm_statistics_names[] = {
+ HCMD_NAME(STATISTICS_OPER_NOTIF),
+ HCMD_NAME(STATISTICS_OPER_PART1_NOTIF),
+};
+
+/* Please keep this array *SORTED* by hex value.
+ * Access is done through binary search
+ */
static const struct iwl_hcmd_names iwl_mvm_scan_names[] = {
HCMD_NAME(OFFLOAD_MATCH_INFO_NOTIF),
};
@@ -640,6 +668,7 @@ static const struct iwl_hcmd_arr iwl_mvm_groups[] = {
[PROT_OFFLOAD_GROUP] = HCMD_ARR(iwl_mvm_prot_offload_names),
[REGULATORY_AND_NVM_GROUP] =
HCMD_ARR(iwl_mvm_regulatory_and_nvm_names),
+ [STATISTICS_GROUP] = HCMD_ARR(iwl_mvm_statistics_names),
};
/* this forward declaration can avoid to export the function */
@@ -751,7 +780,10 @@ static int iwl_mvm_start_get_nvm(struct iwl_mvm *mvm)
*/
mvm->nvm_data =
iwl_parse_mei_nvm_data(trans, trans->cfg,
- mvm->mei_nvm_data, mvm->fw);
+ mvm->mei_nvm_data,
+ mvm->fw,
+ mvm->set_tx_ant,
+ mvm->set_rx_ant);
return 0;
}
@@ -790,6 +822,9 @@ get_nvm_from_fw:
if (ret)
IWL_ERR(mvm, "Failed to run INIT ucode: %d\n", ret);
+ /* no longer need this regardless of failure or not */
+ mvm->pldr_sync = false;
+
return ret;
}
@@ -1136,7 +1171,7 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
return NULL;
if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_BZ)
- max_agg = IEEE80211_MAX_AMPDU_BUF_EHT;
+ max_agg = 512;
else
max_agg = IEEE80211_MAX_AMPDU_BUF_HE;
@@ -1298,7 +1333,7 @@ iwl_op_mode_mvm_start(struct iwl_trans *trans, const struct iwl_cfg *cfg,
snprintf(mvm->hw->wiphy->fw_version,
sizeof(mvm->hw->wiphy->fw_version),
- "%s", fw->fw_version);
+ "%.31s", fw->fw_version);
trans_cfg.fw_reset_handshake = fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_FW_RESET_HANDSHAKE);
@@ -1944,9 +1979,6 @@ static void iwl_mvm_nic_error(struct iwl_op_mode *op_mode, bool sync)
{
struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode);
- if (mvm->pldr_sync)
- return;
-
if (!test_bit(STATUS_TRANS_DEAD, &mvm->trans->status) &&
!test_and_clear_bit(IWL_MVM_STATUS_SUPPRESS_ERROR_LOG_ONCE,
&mvm->status))
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
index a5b432bc9e2f..4e1fccff3987 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c
@@ -192,6 +192,9 @@ int iwl_mvm_phy_send_rlc(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt,
iwl_mvm_phy_ctxt_set_rxchain(mvm, ctxt, &cmd.rlc.rx_chain_info,
chains_static, chains_dynamic);
+ IWL_DEBUG_FW(mvm, "Send RLC command: phy=%d, rx_chain_info=0x%x\n",
+ ctxt->id, cmd.rlc.rx_chain_info);
+
return iwl_mvm_send_cmd_pdu(mvm, iwl_cmd_id(RLC_CONFIG_CMD,
DATA_PATH_GROUP, 2),
0, sizeof(cmd), &cmd);
@@ -265,6 +268,8 @@ int iwl_mvm_phy_ctxt_add(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt,
struct cfg80211_chan_def *chandef,
u8 chains_static, u8 chains_dynamic)
{
+ int ret;
+
WARN_ON(!test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status) &&
ctxt->ref);
lockdep_assert_held(&mvm->mutex);
@@ -273,9 +278,16 @@ int iwl_mvm_phy_ctxt_add(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt,
ctxt->width = chandef->width;
ctxt->center_freq1 = chandef->center_freq1;
- return iwl_mvm_phy_ctxt_apply(mvm, ctxt, chandef,
- chains_static, chains_dynamic,
- FW_CTXT_ACTION_ADD);
+ ret = iwl_mvm_phy_ctxt_apply(mvm, ctxt, chandef,
+ chains_static, chains_dynamic,
+ FW_CTXT_ACTION_ADD);
+
+ if (ret)
+ return ret;
+
+ ctxt->ref++;
+
+ return 0;
}
/*
@@ -285,6 +297,11 @@ int iwl_mvm_phy_ctxt_add(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt,
void iwl_mvm_phy_ctxt_ref(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt)
{
lockdep_assert_held(&mvm->mutex);
+
+ /* If we were taking the first ref, we should have
+ * called iwl_mvm_phy_ctxt_add.
+ */
+ WARN_ON(!ctxt->ref);
ctxt->ref++;
}
@@ -301,7 +318,11 @@ int iwl_mvm_phy_ctxt_changed(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt,
lockdep_assert_held(&mvm->mutex);
- if (iwl_fw_lookup_cmd_ver(mvm->fw, WIDE_ID(DATA_PATH_GROUP, RLC_CONFIG_CMD), 0) >= 2 &&
+ if (WARN_ON_ONCE(!ctxt->ref))
+ return -EINVAL;
+
+ if (iwl_fw_lookup_cmd_ver(mvm->fw, WIDE_ID(DATA_PATH_GROUP,
+ RLC_CONFIG_CMD), 0) >= 2 &&
ctxt->channel == chandef->chan &&
ctxt->width == chandef->width &&
ctxt->center_freq1 == chandef->center_freq1)
@@ -335,6 +356,7 @@ int iwl_mvm_phy_ctxt_changed(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt,
void iwl_mvm_phy_ctxt_unref(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt)
{
+ struct cfg80211_chan_def chandef;
lockdep_assert_held(&mvm->mutex);
if (WARN_ON_ONCE(!ctxt))
@@ -342,41 +364,13 @@ void iwl_mvm_phy_ctxt_unref(struct iwl_mvm *mvm, struct iwl_mvm_phy_ctxt *ctxt)
ctxt->ref--;
- /*
- * Move unused phy's to a default channel. When the phy is moved the,
- * fw will cleanup immediate quiet bit if it was previously set,
- * otherwise we might not be able to reuse this phy.
- */
- if (ctxt->ref == 0) {
- struct ieee80211_channel *chan = NULL;
- struct cfg80211_chan_def chandef;
- struct ieee80211_supported_band *sband;
- enum nl80211_band band;
- int channel;
-
- for (band = NL80211_BAND_2GHZ; band < NUM_NL80211_BANDS; band++) {
- sband = mvm->hw->wiphy->bands[band];
-
- if (!sband)
- continue;
-
- for (channel = 0; channel < sband->n_channels; channel++)
- if (!(sband->channels[channel].flags &
- IEEE80211_CHAN_DISABLED)) {
- chan = &sband->channels[channel];
- break;
- }
-
- if (chan)
- break;
- }
-
- if (WARN_ON(!chan))
- return;
-
- cfg80211_chandef_create(&chandef, chan, NL80211_CHAN_NO_HT);
- iwl_mvm_phy_ctxt_changed(mvm, ctxt, &chandef, 1, 1);
- }
+ if (ctxt->ref)
+ return;
+
+ cfg80211_chandef_create(&chandef, ctxt->channel, NL80211_CHAN_NO_HT);
+
+ iwl_mvm_phy_ctxt_apply(mvm, ctxt, &chandef, 1, 1,
+ FW_CTXT_ACTION_REMOVE);
}
static void iwl_mvm_binding_iterator(void *_data, u8 *mac,
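The phy-ctxt changes above move to a plain add/ref/unref life cycle: add programs the context into the firmware and takes the first reference, ref only bumps an already-live context, and unref removes the firmware context once the count drops to zero instead of parking it on a default channel. A generic sketch of that pattern with hypothetical names:

#include <assert.h>

struct phy_ctxt {
	int ref;
	int in_fw;	/* stand-in for the firmware context state */
};

static int phy_ctxt_add(struct phy_ctxt *ctxt)
{
	assert(ctxt->ref == 0);	/* first user must go through add */
	ctxt->in_fw = 1;	/* FW_CTXT_ACTION_ADD */
	ctxt->ref = 1;
	return 0;
}

static void phy_ctxt_ref(struct phy_ctxt *ctxt)
{
	assert(ctxt->ref);	/* add must already have been called */
	ctxt->ref++;
}

static void phy_ctxt_unref(struct phy_ctxt *ctxt)
{
	if (--ctxt->ref)
		return;
	ctxt->in_fw = 0;	/* FW_CTXT_ACTION_REMOVE on last unref */
}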
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/power.c b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
index 9131b5f1bc76..1b9b06e0443f 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/power.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/power.c
@@ -489,6 +489,11 @@ int iwl_mvm_power_update_device(struct iwl_mvm *mvm)
if (mvm->ext_clock_valid)
cmd.flags |= cpu_to_le16(DEVICE_POWER_FLAGS_32K_CLK_VALID_MSK);
+ if (iwl_fw_lookup_cmd_ver(mvm->fw, POWER_TABLE_CMD, 0) >= 7 &&
+ test_bit(IWL_MVM_STATUS_IN_D3, &mvm->status))
+ cmd.flags |=
+ cpu_to_le16(DEVICE_POWER_FLAGS_NO_SLEEP_TILL_D3_MSK);
+
IWL_DEBUG_POWER(mvm,
"Sending device power command with flags = 0x%X\n",
cmd.flags);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.h b/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
index 1ca375a5cf6b..376b23b409dc 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.h
@@ -3,7 +3,7 @@
*
* Copyright(c) 2015 Intel Mobile Communications GmbH
* Copyright(c) 2017 Intel Deutschland GmbH
- * Copyright (C) 2003 - 2014, 2018 - 2022 Intel Corporation
+ * Copyright (C) 2003 - 2014, 2018 - 2023 Intel Corporation
*****************************************************************************/
#ifndef __rs_h__
@@ -203,18 +203,12 @@ struct rs_rate {
/**
* struct iwl_lq_sta_rs_fw - rate and related statistics for RS in FW
* @last_rate_n_flags: last rate reported by FW
- * @max_agg_bufsize: the maximal size of the AGG buffer for this station
- * @sta_id: the id of the station
-#ifdef CONFIG_MAC80211_DEBUGFS
- * @dbg_fixed_rate: for debug, use fixed rate if not 0
- * @dbg_agg_frame_count_lim: for debug, max number of frames in A-MPDU
-#endif
+ * @pers.sta_id: the id of the station
* @chains: bitmask of chains reported in %chain_signal
* @chain_signal: per chain signal strength
* @last_rssi: last rssi reported
* @drv: pointer back to the driver data
*/
-
struct iwl_lq_sta_rs_fw {
/* last tx rate_n_flags */
u32 last_rate_n_flags;
@@ -223,7 +217,14 @@ struct iwl_lq_sta_rs_fw {
struct lq_sta_pers_rs_fw {
u32 sta_id;
#ifdef CONFIG_MAC80211_DEBUGFS
+ /**
+ * @dbg_fixed_rate: for debug, use fixed rate if not 0
+ */
u32 dbg_fixed_rate;
+ /**
+ * @dbg_agg_frame_count_lim: for debug, max number of
+ * frames in A-MPDU
+ */
u16 dbg_agg_frame_count_lim;
#endif
u8 chains;
@@ -233,7 +234,7 @@ struct iwl_lq_sta_rs_fw {
} pers;
};
-/**
+/*
* struct iwl_rate_scale_data -- tx success history for one rate
*/
struct iwl_rate_scale_data {
@@ -275,7 +276,7 @@ struct rs_rate_stats {
u64 total;
};
-/**
+/*
* struct iwl_scale_tbl_info -- tx params and success history for all rates
*
* There are two of these in struct iwl_lq_sta,
@@ -296,7 +297,7 @@ enum {
RS_STATE_STAY_IN_COLUMN,
};
-/**
+/*
* struct iwl_lq_sta -- driver's rate scaling private structure
*
* Pointer to this gets passed back and forth between driver and mac80211.
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
index 542c192698a4..8caa971770c6 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
@@ -553,7 +553,7 @@ struct iwl_mvm_stat_data {
struct iwl_mvm_stat_data_all_macs {
struct iwl_mvm *mvm;
__le32 flags;
- struct iwl_statistics_ntfy_per_mac *per_mac_stats;
+ struct iwl_stats_ntfy_per_mac *per_mac;
};
static void iwl_mvm_update_vif_sig(struct ieee80211_vif *vif, int sig)
@@ -658,7 +658,7 @@ static void iwl_mvm_stat_iterator_all_macs(void *_data, u8 *mac,
struct ieee80211_vif *vif)
{
struct iwl_mvm_stat_data_all_macs *data = _data;
- struct iwl_statistics_ntfy_per_mac *mac_stats;
+ struct iwl_stats_ntfy_per_mac *mac_stats;
int sig;
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
u16 vif_id = mvmvif->id;
@@ -669,7 +669,7 @@ static void iwl_mvm_stat_iterator_all_macs(void *_data, u8 *mac,
if (vif->type != NL80211_IFTYPE_STATION)
return;
- mac_stats = &data->per_mac_stats[vif_id];
+ mac_stats = &data->per_mac[vif_id];
mvmvif->deflink.beacon_stats.num_beacons =
le32_to_cpu(mac_stats->beacon_counter);
@@ -759,7 +759,7 @@ iwl_mvm_stats_ver_15(struct iwl_mvm *mvm,
struct iwl_mvm_stat_data_all_macs data = {
.mvm = mvm,
.flags = stats->flags,
- .per_mac_stats = stats->per_mac_stats,
+ .per_mac = stats->per_mac,
};
ieee80211_iterate_active_interfaces(mvm->hw,
@@ -829,6 +829,142 @@ static bool iwl_mvm_verify_stats_len(struct iwl_mvm *mvm,
}
static void
+iwl_mvm_stat_iterator_all_links(struct iwl_mvm *mvm,
+ struct iwl_stats_ntfy_per_link *per_link)
+{
+ u32 air_time[MAC_INDEX_AUX] = {};
+ u32 rx_bytes[MAC_INDEX_AUX] = {};
+ int fw_link_id;
+
+ for (fw_link_id = 0; fw_link_id < ARRAY_SIZE(mvm->link_id_to_link_conf);
+ fw_link_id++) {
+ struct iwl_stats_ntfy_per_link *link_stats;
+ struct ieee80211_bss_conf *bss_conf;
+ struct iwl_mvm_vif *mvmvif;
+ int link_id;
+ int sig;
+
+ bss_conf = iwl_mvm_rcu_fw_link_id_to_link_conf(mvm, fw_link_id,
+ false);
+ if (!bss_conf)
+ continue;
+
+ if (bss_conf->vif->type != NL80211_IFTYPE_STATION)
+ continue;
+
+ link_id = bss_conf->link_id;
+ if (link_id >= ARRAY_SIZE(mvmvif->link))
+ continue;
+
+ mvmvif = iwl_mvm_vif_from_mac80211(bss_conf->vif);
+ if (!mvmvif || !mvmvif->link[link_id])
+ continue;
+
+ link_stats = &per_link[fw_link_id];
+
+ mvmvif->link[link_id]->beacon_stats.num_beacons =
+ le32_to_cpu(link_stats->beacon_counter);
+
+ /* we basically just use the u8 to store 8 bits and then treat
+ * it as an s8 whenever we take it out to a different type.
+ */
+ mvmvif->link[link_id]->beacon_stats.avg_signal =
+ -le32_to_cpu(link_stats->beacon_average_energy);
+
+ /* make sure that beacon statistics don't go backwards with TCM
+ * request to clear statistics
+ */
+ if (mvm->statistics_clear)
+ mvmvif->link[link_id]->beacon_stats.accu_num_beacons +=
+ mvmvif->link[link_id]->beacon_stats.num_beacons;
+
+ sig = -le32_to_cpu(link_stats->beacon_filter_average_energy);
+ iwl_mvm_update_vif_sig(bss_conf->vif, sig);
+
+ if (WARN_ONCE(mvmvif->id >= MAC_INDEX_AUX,
+ "invalid mvmvif id: %d", mvmvif->id))
+ continue;
+
+ air_time[mvmvif->id] +=
+ le32_to_cpu(per_link[fw_link_id].air_time);
+ rx_bytes[mvmvif->id] +=
+ le32_to_cpu(per_link[fw_link_id].rx_bytes);
+ }
+
+ /* Don't update in case the statistics are not cleared, since
+ * we would end up counting the same airtime twice, once in the TCM
+ * request and once in the statistics notification.
+ */
+ if (mvm->statistics_clear) {
+ __le32 air_time_le[MAC_INDEX_AUX];
+ __le32 rx_bytes_le[MAC_INDEX_AUX];
+ int vif_id;
+
+ for (vif_id = 0; vif_id < ARRAY_SIZE(air_time_le); vif_id++) {
+ air_time_le[vif_id] = cpu_to_le32(air_time[vif_id]);
+ rx_bytes_le[vif_id] = cpu_to_le32(rx_bytes[vif_id]);
+ }
+
+ iwl_mvm_update_tcm_from_stats(mvm, air_time_le, rx_bytes_le);
+ }
+}
+
+void iwl_mvm_handle_rx_system_oper_stats(struct iwl_mvm *mvm,
+ struct iwl_rx_cmd_buffer *rxb)
+{
+ u8 average_energy[IWL_MVM_STATION_COUNT_MAX];
+ struct iwl_rx_packet *pkt = rxb_addr(rxb);
+ struct iwl_system_statistics_notif_oper *stats;
+ int i;
+ u32 notif_ver = iwl_fw_lookup_notif_ver(mvm->fw, STATISTICS_GROUP,
+ STATISTICS_OPER_NOTIF, 0);
+
+ if (notif_ver != 3) {
+ IWL_FW_CHECK_FAILED(mvm,
+ "Oper stats notif ver %d is not supported\n",
+ notif_ver);
+ return;
+ }
+
+ stats = (void *)&pkt->data;
+ iwl_mvm_stat_iterator_all_links(mvm, stats->per_link);
+
+ for (i = 0; i < ARRAY_SIZE(average_energy); i++)
+ average_energy[i] =
+ le32_to_cpu(stats->per_sta[i].average_energy);
+
+ ieee80211_iterate_stations_atomic(mvm->hw, iwl_mvm_stats_energy_iter,
+ average_energy);
+}
+
+void iwl_mvm_handle_rx_system_oper_part1_stats(struct iwl_mvm *mvm,
+ struct iwl_rx_cmd_buffer *rxb)
+{
+ struct iwl_rx_packet *pkt = rxb_addr(rxb);
+ struct iwl_system_statistics_part1_notif_oper *part1_stats;
+ int i;
+ u32 notif_ver = iwl_fw_lookup_notif_ver(mvm->fw, STATISTICS_GROUP,
+ STATISTICS_OPER_PART1_NOTIF, 0);
+
+ if (notif_ver != 4) {
+ IWL_FW_CHECK_FAILED(mvm,
+ "Part1 stats notif ver %d is not supported\n",
+ notif_ver);
+ return;
+ }
+
+ part1_stats = (void *)&pkt->data;
+ mvm->radio_stats.rx_time = 0;
+ mvm->radio_stats.tx_time = 0;
+ for (i = 0; i < ARRAY_SIZE(part1_stats->per_link); i++) {
+ mvm->radio_stats.rx_time +=
+ le64_to_cpu(part1_stats->per_link[i].rx_time);
+ mvm->radio_stats.tx_time +=
+ le64_to_cpu(part1_stats->per_link[i].tx_time);
+ }
+}
+
+static void
iwl_mvm_handle_rx_statistics_tlv(struct iwl_mvm *mvm,
struct iwl_rx_packet *pkt)
{
@@ -887,11 +1023,11 @@ iwl_mvm_handle_rx_statistics_tlv(struct iwl_mvm *mvm,
for (i = 0; i < ARRAY_SIZE(average_energy); i++)
average_energy[i] =
- le32_to_cpu(stats->per_sta_stats[i].average_energy);
+ le32_to_cpu(stats->per_sta[i].average_energy);
for (i = 0; i < ARRAY_SIZE(air_time); i++) {
- air_time[i] = stats->per_mac_stats[i].air_time;
- rx_bytes[i] = stats->per_mac_stats[i].rx_bytes;
+ air_time[i] = stats->per_mac[i].air_time;
+ rx_bytes[i] = stats->per_mac[i].rx_bytes;
}
}
@@ -917,6 +1053,13 @@ void iwl_mvm_handle_rx_statistics(struct iwl_mvm *mvm,
__le32 *bytes, *air_time, flags;
int expected_size;
u8 *energy;
+ u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw,
+ WIDE_ID(SYSTEM_GROUP,
+ SYSTEM_STATISTICS_CMD),
+ IWL_FW_CMD_VER_UNKNOWN);
+
+ if (cmd_ver != IWL_FW_CMD_VER_UNKNOWN)
+ return;
/* From ver 14 and up we use TLV statistics format */
if (iwl_fw_lookup_notif_ver(mvm->fw, LEGACY_GROUP,
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
index 8d1e44fd9de7..886d00098528 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
@@ -376,8 +376,10 @@ static int iwl_mvm_rx_crypto(struct iwl_mvm *mvm, struct ieee80211_sta *sta,
*/
if (phy_info & IWL_RX_MPDU_PHY_AMPDU &&
(status & IWL_RX_MPDU_STATUS_SEC_MASK) ==
- IWL_RX_MPDU_STATUS_SEC_UNKNOWN && !mvm->monitor_on)
+ IWL_RX_MPDU_STATUS_SEC_UNKNOWN && !mvm->monitor_on) {
+ IWL_DEBUG_DROP(mvm, "Dropping packets, bad enc status\n");
return -1;
+ }
if (unlikely(ieee80211_is_mgmt(hdr->frame_control) &&
!ieee80211_has_protected(hdr->frame_control)))
@@ -548,44 +550,12 @@ static bool iwl_mvm_is_dup(struct ieee80211_sta *sta, int queue,
return false;
}
-/*
- * Returns true if sn2 - buffer_size < sn1 < sn2.
- * To be used only in order to compare reorder buffer head with NSSN.
- * We fully trust NSSN unless it is behind us due to reorder timeout.
- * Reorder timeout can only bring us up to buffer_size SNs ahead of NSSN.
- */
-static bool iwl_mvm_is_sn_less(u16 sn1, u16 sn2, u16 buffer_size)
-{
- return ieee80211_sn_less(sn1, sn2) &&
- !ieee80211_sn_less(sn1, sn2 - buffer_size);
-}
-
-static void iwl_mvm_sync_nssn(struct iwl_mvm *mvm, u8 baid, u16 nssn)
-{
- if (IWL_MVM_USE_NSSN_SYNC) {
- struct iwl_mvm_nssn_sync_data notif = {
- .baid = baid,
- .nssn = nssn,
- };
-
- iwl_mvm_sync_rx_queues_internal(mvm, IWL_MVM_RXQ_NSSN_SYNC, false,
- &notif, sizeof(notif));
- }
-}
-
-#define RX_REORDER_BUF_TIMEOUT_MQ (HZ / 10)
-
-enum iwl_mvm_release_flags {
- IWL_MVM_RELEASE_SEND_RSS_SYNC = BIT(0),
- IWL_MVM_RELEASE_FROM_RSS_SYNC = BIT(1),
-};
-
static void iwl_mvm_release_frames(struct iwl_mvm *mvm,
struct ieee80211_sta *sta,
struct napi_struct *napi,
struct iwl_mvm_baid_data *baid_data,
struct iwl_mvm_reorder_buffer *reorder_buf,
- u16 nssn, u32 flags)
+ u16 nssn)
{
struct iwl_mvm_reorder_buf_entry *entries =
&baid_data->entries[reorder_buf->queue *
@@ -594,31 +564,12 @@ static void iwl_mvm_release_frames(struct iwl_mvm *mvm,
lockdep_assert_held(&reorder_buf->lock);
- /*
- * We keep the NSSN not too far behind, if we are sync'ing it and it
- * is more than 2048 ahead of us, it must be behind us. Discard it.
- * This can happen if the queue that hit the 0 / 2048 seqno was lagging
- * behind and this queue already processed packets. The next if
- * would have caught cases where this queue would have processed less
- * than 64 packets, but it may have processed more than 64 packets.
- */
- if ((flags & IWL_MVM_RELEASE_FROM_RSS_SYNC) &&
- ieee80211_sn_less(nssn, ssn))
- goto set_timer;
-
- /* ignore nssn smaller than head sn - this can happen due to timeout */
- if (iwl_mvm_is_sn_less(nssn, ssn, reorder_buf->buf_size))
- goto set_timer;
-
- while (iwl_mvm_is_sn_less(ssn, nssn, reorder_buf->buf_size)) {
+ while (ieee80211_sn_less(ssn, nssn)) {
int index = ssn % reorder_buf->buf_size;
- struct sk_buff_head *skb_list = &entries[index].e.frames;
+ struct sk_buff_head *skb_list = &entries[index].frames;
struct sk_buff *skb;
ssn = ieee80211_sn_inc(ssn);
- if ((flags & IWL_MVM_RELEASE_SEND_RSS_SYNC) &&
- (ssn == 2048 || ssn == 0))
- iwl_mvm_sync_nssn(mvm, baid_data->baid, ssn);
/*
* Empty the list. Will have more than one frame for A-MSDU.
@@ -633,99 +584,6 @@ static void iwl_mvm_release_frames(struct iwl_mvm *mvm,
}
}
reorder_buf->head_sn = nssn;
-
-set_timer:
- if (reorder_buf->num_stored && !reorder_buf->removed) {
- u16 index = reorder_buf->head_sn % reorder_buf->buf_size;
-
- while (skb_queue_empty(&entries[index].e.frames))
- index = (index + 1) % reorder_buf->buf_size;
- /* modify timer to match next frame's expiration time */
- mod_timer(&reorder_buf->reorder_timer,
- entries[index].e.reorder_time + 1 +
- RX_REORDER_BUF_TIMEOUT_MQ);
- } else {
- del_timer(&reorder_buf->reorder_timer);
- }
-}
-
-void iwl_mvm_reorder_timer_expired(struct timer_list *t)
-{
- struct iwl_mvm_reorder_buffer *buf = from_timer(buf, t, reorder_timer);
- struct iwl_mvm_baid_data *baid_data =
- iwl_mvm_baid_data_from_reorder_buf(buf);
- struct iwl_mvm_reorder_buf_entry *entries =
- &baid_data->entries[buf->queue * baid_data->entries_per_queue];
- int i;
- u16 sn = 0, index = 0;
- bool expired = false;
- bool cont = false;
-
- spin_lock(&buf->lock);
-
- if (!buf->num_stored || buf->removed) {
- spin_unlock(&buf->lock);
- return;
- }
-
- for (i = 0; i < buf->buf_size ; i++) {
- index = (buf->head_sn + i) % buf->buf_size;
-
- if (skb_queue_empty(&entries[index].e.frames)) {
- /*
- * If there is a hole and the next frame didn't expire
- * we want to break and not advance SN
- */
- cont = false;
- continue;
- }
- if (!cont &&
- !time_after(jiffies, entries[index].e.reorder_time +
- RX_REORDER_BUF_TIMEOUT_MQ))
- break;
-
- expired = true;
- /* continue until next hole after this expired frames */
- cont = true;
- sn = ieee80211_sn_add(buf->head_sn, i + 1);
- }
-
- if (expired) {
- struct ieee80211_sta *sta;
- struct iwl_mvm_sta *mvmsta;
- u8 sta_id = ffs(baid_data->sta_mask) - 1;
-
- rcu_read_lock();
- sta = rcu_dereference(buf->mvm->fw_id_to_mac_id[sta_id]);
- if (WARN_ON_ONCE(IS_ERR_OR_NULL(sta))) {
- rcu_read_unlock();
- goto out;
- }
-
- mvmsta = iwl_mvm_sta_from_mac80211(sta);
-
- /* SN is set to the last expired frame + 1 */
- IWL_DEBUG_HT(buf->mvm,
- "Releasing expired frames for sta %u, sn %d\n",
- sta_id, sn);
- iwl_mvm_event_frame_timeout_callback(buf->mvm, mvmsta->vif,
- sta, baid_data->tid);
- iwl_mvm_release_frames(buf->mvm, sta, NULL, baid_data,
- buf, sn, IWL_MVM_RELEASE_SEND_RSS_SYNC);
- rcu_read_unlock();
- } else {
- /*
- * If no frame expired and there are stored frames, index is now
- * pointing to the first unexpired frame - modify timer
- * accordingly to this frame.
- */
- mod_timer(&buf->reorder_timer,
- entries[index].e.reorder_time +
- 1 + RX_REORDER_BUF_TIMEOUT_MQ);
- }
-
-out:
- spin_unlock(&buf->lock);
}
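
The release walk now relies entirely on mac80211's ieee80211_sn_less() over the 12-bit 802.11 sequence-number space, rather than the removed iwl_mvm_is_sn_less() buffer-size-aware variant. A rough self-contained sketch of that modular arithmetic (the SKETCH_*/sketch_* names are illustrative, not mac80211 or driver symbols):

#include <linux/types.h>

#define SKETCH_SN_MASK   0xfff
#define SKETCH_SN_MODULO (SKETCH_SN_MASK + 1)

/* true if sn1 comes before sn2 in 12-bit modular arithmetic */
static bool sketch_sn_less(u16 sn1, u16 sn2)
{
	return ((sn1 - sn2) & SKETCH_SN_MASK) > (SKETCH_SN_MODULO >> 1);
}

/* advance head_sn up to (but not including) nssn, one buffer slot at a time */
static u16 sketch_release_up_to(u16 head_sn, u16 nssn, u16 buf_size)
{
	while (sketch_sn_less(head_sn, nssn)) {
		unsigned int index = head_sn % buf_size;

		/* frames queued at entries[index] would be passed up here */
		(void)index;
		head_sn = (head_sn + 1) & SKETCH_SN_MASK;
	}
	return head_sn;
}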
static void iwl_mvm_del_ba(struct iwl_mvm *mvm, int queue,
@@ -758,10 +616,8 @@ static void iwl_mvm_del_ba(struct iwl_mvm *mvm, int queue,
spin_lock_bh(&reorder_buf->lock);
iwl_mvm_release_frames(mvm, sta, NULL, ba_data, reorder_buf,
ieee80211_sn_add(reorder_buf->head_sn,
- reorder_buf->buf_size),
- 0);
+ reorder_buf->buf_size));
spin_unlock_bh(&reorder_buf->lock);
- del_timer_sync(&reorder_buf->reorder_timer);
out:
rcu_read_unlock();
@@ -769,8 +625,7 @@ out:
static void iwl_mvm_release_frames_from_notif(struct iwl_mvm *mvm,
struct napi_struct *napi,
- u8 baid, u16 nssn, int queue,
- u32 flags)
+ u8 baid, u16 nssn, int queue)
{
struct ieee80211_sta *sta;
struct iwl_mvm_reorder_buffer *reorder_buf;
@@ -788,8 +643,7 @@ static void iwl_mvm_release_frames_from_notif(struct iwl_mvm *mvm,
ba_data = rcu_dereference(mvm->baid_map[baid]);
if (!ba_data) {
- WARN(!(flags & IWL_MVM_RELEASE_FROM_RSS_SYNC),
- "BAID %d not found in map\n", baid);
+ WARN(true, "BAID %d not found in map\n", baid);
goto out;
}
@@ -803,22 +657,13 @@ static void iwl_mvm_release_frames_from_notif(struct iwl_mvm *mvm,
spin_lock_bh(&reorder_buf->lock);
iwl_mvm_release_frames(mvm, sta, napi, ba_data,
- reorder_buf, nssn, flags);
+ reorder_buf, nssn);
spin_unlock_bh(&reorder_buf->lock);
out:
rcu_read_unlock();
}
-static void iwl_mvm_nssn_sync(struct iwl_mvm *mvm,
- struct napi_struct *napi, int queue,
- const struct iwl_mvm_nssn_sync_data *data)
-{
- iwl_mvm_release_frames_from_notif(mvm, napi, data->baid,
- data->nssn, queue,
- IWL_MVM_RELEASE_FROM_RSS_SYNC);
-}
-
void iwl_mvm_rx_queue_notif(struct iwl_mvm *mvm, struct napi_struct *napi,
struct iwl_rx_cmd_buffer *rxb, int queue)
{
@@ -853,14 +698,6 @@ void iwl_mvm_rx_queue_notif(struct iwl_mvm *mvm, struct napi_struct *napi,
break;
iwl_mvm_del_ba(mvm, queue, (void *)internal_notif->data);
break;
- case IWL_MVM_RXQ_NSSN_SYNC:
- if (WARN_ONCE(len != sizeof(struct iwl_mvm_nssn_sync_data),
- "invalid nssn sync notification size %d (%d)",
- len, (int)sizeof(struct iwl_mvm_nssn_sync_data)))
- break;
- iwl_mvm_nssn_sync(mvm, napi, queue,
- (void *)internal_notif->data);
- break;
default:
WARN_ONCE(1, "Invalid identifier %d", internal_notif->type);
}
@@ -874,55 +711,6 @@ void iwl_mvm_rx_queue_notif(struct iwl_mvm *mvm, struct napi_struct *napi,
}
}
-static void iwl_mvm_oldsn_workaround(struct iwl_mvm *mvm,
- struct ieee80211_sta *sta, int tid,
- struct iwl_mvm_reorder_buffer *buffer,
- u32 reorder, u32 gp2, int queue)
-{
- struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
-
- if (gp2 != buffer->consec_oldsn_ampdu_gp2) {
- /* we have a new (A-)MPDU ... */
-
- /*
- * reset counter to 0 if we didn't have any oldsn in
- * the last A-MPDU (as detected by GP2 being identical)
- */
- if (!buffer->consec_oldsn_prev_drop)
- buffer->consec_oldsn_drops = 0;
-
- /* either way, update our tracking state */
- buffer->consec_oldsn_ampdu_gp2 = gp2;
- } else if (buffer->consec_oldsn_prev_drop) {
- /*
- * tracking state didn't change, and we had an old SN
- * indication before - do nothing in this case, we
- * already noted this one down and are waiting for the
- * next A-MPDU (by GP2)
- */
- return;
- }
-
- /* return unless this MPDU has old SN */
- if (!(reorder & IWL_RX_MPDU_REORDER_BA_OLD_SN))
- return;
-
- /* update state */
- buffer->consec_oldsn_prev_drop = 1;
- buffer->consec_oldsn_drops++;
-
- /* if limit is reached, send del BA and reset state */
- if (buffer->consec_oldsn_drops == IWL_MVM_AMPDU_CONSEC_DROPS_DELBA) {
- IWL_WARN(mvm,
- "reached %d old SN frames from %pM on queue %d, stopping BA session on TID %d\n",
- IWL_MVM_AMPDU_CONSEC_DROPS_DELBA,
- sta->addr, queue, tid);
- ieee80211_stop_rx_ba_session(mvmsta->vif, BIT(tid), sta->addr);
- buffer->consec_oldsn_prev_drop = 0;
- buffer->consec_oldsn_drops = 0;
- }
-}
-
/*
 * Returns true if the MPDU was buffered/dropped, false if it should be passed
* to upper layer.
@@ -934,11 +722,9 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
struct sk_buff *skb,
struct iwl_rx_mpdu_desc *desc)
{
- struct ieee80211_rx_status *rx_status = IEEE80211_SKB_RXCB(skb);
struct ieee80211_hdr *hdr = (void *)skb_mac_header(skb);
struct iwl_mvm_baid_data *baid_data;
struct iwl_mvm_reorder_buffer *buffer;
- struct sk_buff *tail;
u32 reorder = le32_to_cpu(desc->reorder_data);
bool amsdu = desc->mac_flags2 & IWL_RX_MPDU_MFLG2_AMSDU;
bool last_subframe =
@@ -955,6 +741,9 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
baid = (reorder & IWL_RX_MPDU_REORDER_BAID_MASK) >>
IWL_RX_MPDU_REORDER_BAID_SHIFT;
+ if (mvm->trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_9000)
+ return false;
+
/*
* This also covers the case of receiving a Block Ack Request
* outside a BA session; we'll pass it to mac80211 and that
@@ -1016,59 +805,18 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
buffer->valid = true;
}
- if (ieee80211_is_back_req(hdr->frame_control)) {
- iwl_mvm_release_frames(mvm, sta, napi, baid_data,
- buffer, nssn, 0);
+ /* drop any duplicated packets */
+ if (desc->status & cpu_to_le32(IWL_RX_MPDU_STATUS_DUPLICATE))
goto drop;
- }
-
- /*
- * If there was a significant jump in the nssn - adjust.
- * If the SN is smaller than the NSSN it might need to first go into
- * the reorder buffer, in which case we just release up to it and the
- * rest of the function will take care of storing it and releasing up to
- * the nssn.
- * This should not happen. This queue has been lagging and it should
- * have been updated by a IWL_MVM_RXQ_NSSN_SYNC notification. Be nice
- * and update the other queues.
- */
- if (!iwl_mvm_is_sn_less(nssn, buffer->head_sn + buffer->buf_size,
- buffer->buf_size) ||
- !ieee80211_sn_less(sn, buffer->head_sn + buffer->buf_size)) {
- u16 min_sn = ieee80211_sn_less(sn, nssn) ? sn : nssn;
-
- iwl_mvm_release_frames(mvm, sta, napi, baid_data, buffer,
- min_sn, IWL_MVM_RELEASE_SEND_RSS_SYNC);
- }
-
- iwl_mvm_oldsn_workaround(mvm, sta, tid, buffer, reorder,
- rx_status->device_timestamp, queue);
/* drop any outdated packets */
- if (ieee80211_sn_less(sn, buffer->head_sn))
+ if (reorder & IWL_RX_MPDU_REORDER_BA_OLD_SN)
goto drop;
/* release immediately if allowed by nssn and no stored frames */
if (!buffer->num_stored && ieee80211_sn_less(sn, nssn)) {
- if (iwl_mvm_is_sn_less(buffer->head_sn, nssn,
- buffer->buf_size) &&
- (!amsdu || last_subframe)) {
- /*
- * If we crossed the 2048 or 0 SN, notify all the
- * queues. This is done in order to avoid having a
- * head_sn that lags behind for too long. When that
- * happens, we can get to a situation where the head_sn
- * is within the interval [nssn - buf_size : nssn]
- * which will make us think that the nssn is a packet
- * that we already freed because of the reordering
- * buffer and we will ignore it. So maintain the
- * head_sn somewhat updated across all the queues:
- * when it crosses 0 and 2048.
- */
- if (sn == 2048 || sn == 0)
- iwl_mvm_sync_nssn(mvm, baid, sn);
+ if (!amsdu || last_subframe)
buffer->head_sn = nssn;
- }
/* No need to update AMSDU last SN - we are moving the head */
spin_unlock_bh(&buffer->lock);
return false;
@@ -1083,37 +831,18 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
* while technically there is no hole and we can move forward.
*/
if (!buffer->num_stored && sn == buffer->head_sn) {
- if (!amsdu || last_subframe) {
- if (sn == 2048 || sn == 0)
- iwl_mvm_sync_nssn(mvm, baid, sn);
+ if (!amsdu || last_subframe)
buffer->head_sn = ieee80211_sn_inc(buffer->head_sn);
- }
+
/* No need to update AMSDU last SN - we are moving the head */
spin_unlock_bh(&buffer->lock);
return false;
}
- index = sn % buffer->buf_size;
-
- /*
- * Check if we already stored this frame
- * As AMSDU is either received or not as whole, logic is simple:
- * If we have frames in that position in the buffer and the last frame
- * originated from AMSDU had a different SN then it is a retransmission.
- * If it is the same SN then if the subframe index is incrementing it
- * is the same AMSDU - otherwise it is a retransmission.
- */
- tail = skb_peek_tail(&entries[index].e.frames);
- if (tail && !amsdu)
- goto drop;
- else if (tail && (sn != buffer->last_amsdu ||
- buffer->last_sub_index >= sub_frame_idx))
- goto drop;
-
/* put in reorder buffer */
- __skb_queue_tail(&entries[index].e.frames, skb);
+ index = sn % buffer->buf_size;
+ __skb_queue_tail(&entries[index].frames, skb);
buffer->num_stored++;
- entries[index].e.reorder_time = jiffies;
if (amsdu) {
buffer->last_amsdu = sn;
@@ -1133,8 +862,7 @@ static bool iwl_mvm_reorder(struct iwl_mvm *mvm,
*/
if (!amsdu || last_subframe)
iwl_mvm_release_frames(mvm, sta, napi, baid_data,
- buffer, nssn,
- IWL_MVM_RELEASE_SEND_RSS_SYNC);
+ buffer, nssn);
spin_unlock_bh(&buffer->lock);
return true;
@@ -1507,7 +1235,7 @@ static void iwl_mvm_decode_he_phy_data(struct iwl_mvm *mvm,
#define IWL_RX_RU_DATA_A1 2
#define IWL_RX_RU_DATA_A2 2
#define IWL_RX_RU_DATA_B1 2
-#define IWL_RX_RU_DATA_B2 3
+#define IWL_RX_RU_DATA_B2 4
#define IWL_RX_RU_DATA_C1 3
#define IWL_RX_RU_DATA_C2 3
#define IWL_RX_RU_DATA_D1 4
@@ -2562,6 +2290,8 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
iwl_mvm_rx_csum(mvm, sta, skb, pkt);
if (iwl_mvm_is_dup(sta, queue, rx_status, hdr, desc)) {
+ IWL_DEBUG_DROP(mvm, "Dropping duplicate packet 0x%x\n",
+ le16_to_cpu(hdr->seq_ctrl));
kfree_skb(skb);
goto out;
}
@@ -2613,9 +2343,15 @@ void iwl_mvm_rx_mpdu_mq(struct iwl_mvm *mvm, struct napi_struct *napi,
if (!iwl_mvm_reorder(mvm, napi, queue, sta, skb, desc) &&
likely(!iwl_mvm_time_sync_frame(mvm, skb, hdr->addr2)) &&
- likely(!iwl_mvm_mei_filter_scan(mvm, skb)))
+ likely(!iwl_mvm_mei_filter_scan(mvm, skb))) {
+ if (mvm->trans->trans_cfg->device_family == IWL_DEVICE_FAMILY_9000 &&
+ (desc->mac_flags2 & IWL_RX_MPDU_MFLG2_AMSDU) &&
+ !(desc->amsdu_info & IWL_RX_MPDU_AMSDU_LAST_SUBFRAME))
+ rx_status->flag |= RX_FLAG_AMSDU_MORE;
+
iwl_mvm_pass_packet_to_mac80211(mvm, napi, skb, queue, sta,
link_sta);
+ }
out:
rcu_read_unlock();
}
@@ -2758,7 +2494,7 @@ void iwl_mvm_rx_frame_release(struct iwl_mvm *mvm, struct napi_struct *napi,
iwl_mvm_release_frames_from_notif(mvm, napi, release->baid,
le16_to_cpu(release->nssn),
- queue, 0);
+ queue);
}
void iwl_mvm_rx_bar_frame_release(struct iwl_mvm *mvm, struct napi_struct *napi,
@@ -2799,7 +2535,10 @@ void iwl_mvm_rx_bar_frame_release(struct iwl_mvm *mvm, struct napi_struct *napi,
tid))
goto out;
- iwl_mvm_release_frames_from_notif(mvm, napi, baid, nssn, queue, 0);
+ IWL_DEBUG_DROP(mvm, "Received a BAR, expect packet loss: nssn %d\n",
+ nssn);
+
+ iwl_mvm_release_frames_from_notif(mvm, napi, baid, nssn, queue);
out:
rcu_read_unlock();
}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
index 3cbe2c0b8d6b..75c5c58e14a5 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/scan.c
@@ -3408,7 +3408,7 @@ int iwl_mvm_scan_stop(struct iwl_mvm *mvm, int type, bool notify)
if (!(mvm->scan_status & type))
return 0;
- if (iwl_mvm_is_radio_killed(mvm)) {
+ if (!test_bit(STATUS_DEVICE_ENABLED, &mvm->trans->status)) {
ret = 0;
goto out;
}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
index 3b9a343d4f67..bba96a968890 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
@@ -827,7 +827,7 @@ static int iwl_mvm_get_queue_size(struct ieee80211_sta *sta)
if (!link)
continue;
- /* support for 1k ba size */
+ /* support for 512 ba size */
if (link->eht_cap.has_eht &&
max_size < IWL_DEFAULT_QUEUE_SIZE_EHT)
max_size = IWL_DEFAULT_QUEUE_SIZE_EHT;
@@ -865,11 +865,11 @@ int iwl_mvm_tvqm_enable_txq(struct iwl_mvm *mvm,
if (sta) {
struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+ struct ieee80211_link_sta *link_sta;
unsigned int link_id;
- for (link_id = 0;
- link_id < ARRAY_SIZE(mvmsta->link);
- link_id++) {
+ rcu_read_lock();
+ for_each_sta_active_link(mvmsta->vif, sta, link_sta, link_id) {
struct iwl_mvm_link_sta *link =
rcu_dereference_protected(mvmsta->link[link_id],
lockdep_is_held(&mvm->mutex));
@@ -879,6 +879,7 @@ int iwl_mvm_tvqm_enable_txq(struct iwl_mvm *mvm,
sta_mask |= BIT(link->sta_id);
}
+ rcu_read_unlock();
} else {
sta_mask |= BIT(sta_id);
}
@@ -2059,7 +2060,8 @@ bool iwl_mvm_sta_del(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
*status = IWL_MVM_QUEUE_FREE;
}
- if (vif->type == NL80211_IFTYPE_STATION) {
+ if (vif->type == NL80211_IFTYPE_STATION &&
+ mvm_link->ap_sta_id == sta_id) {
/* if associated - we can't remove the AP STA now */
if (vif->cfg.assoc)
return true;
@@ -2097,7 +2099,8 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
return ret;
/* flush its queues here since we are freeing mvm_sta */
- ret = iwl_mvm_flush_sta(mvm, mvm_sta, false);
+ ret = iwl_mvm_flush_sta(mvm, mvm_sta->deflink.sta_id,
+ mvm_sta->tfd_queue_msk);
if (ret)
return ret;
if (iwl_mvm_has_new_tx_api(mvm)) {
@@ -2408,7 +2411,8 @@ void iwl_mvm_free_bcast_sta_queues(struct iwl_mvm *mvm,
lockdep_assert_held(&mvm->mutex);
- iwl_mvm_flush_sta(mvm, &mvmvif->deflink.bcast_sta, true);
+ iwl_mvm_flush_sta(mvm, mvmvif->deflink.bcast_sta.sta_id,
+ mvmvif->deflink.bcast_sta.tfd_queue_msk);
switch (vif->type) {
case NL80211_IFTYPE_AP:
@@ -2664,7 +2668,8 @@ int iwl_mvm_rm_mcast_sta(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
lockdep_assert_held(&mvm->mutex);
- iwl_mvm_flush_sta(mvm, &mvmvif->deflink.mcast_sta, true);
+ iwl_mvm_flush_sta(mvm, mvmvif->deflink.mcast_sta.sta_id,
+ mvmvif->deflink.mcast_sta.tfd_queue_msk);
iwl_mvm_disable_txq(mvm, NULL, mvmvif->deflink.mcast_sta.sta_id,
&mvmvif->deflink.cab_queue, 0);
@@ -2714,18 +2719,9 @@ static void iwl_mvm_free_reorder(struct iwl_mvm *mvm,
WARN_ON(1);
for (j = 0; j < reorder_buf->buf_size; j++)
- __skb_queue_purge(&entries[j].e.frames);
- /*
- * Prevent timer re-arm. This prevents a very far fetched case
- * where we timed out on the notification. There may be prior
- * RX frames pending in the RX queue before the notification
- * that might get processed between now and the actual deletion
- * and we would re-arm the timer although we are deleting the
- * reorder buffer.
- */
- reorder_buf->removed = true;
+ __skb_queue_purge(&entries[j].frames);
+
spin_unlock_bh(&reorder_buf->lock);
- del_timer_sync(&reorder_buf->reorder_timer);
}
}
@@ -2745,15 +2741,12 @@ static void iwl_mvm_init_reorder_buffer(struct iwl_mvm *mvm,
reorder_buf->num_stored = 0;
reorder_buf->head_sn = ssn;
reorder_buf->buf_size = buf_size;
- /* rx reorder timer */
- timer_setup(&reorder_buf->reorder_timer,
- iwl_mvm_reorder_timer_expired, 0);
spin_lock_init(&reorder_buf->lock);
reorder_buf->mvm = mvm;
reorder_buf->queue = i;
reorder_buf->valid = false;
for (j = 0; j < reorder_buf->buf_size; j++)
- __skb_queue_head_init(&entries[j].e.frames);
+ __skb_queue_head_init(&entries[j].frames);
}
}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
index 7364346a1209..b33a0ce096d4 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2018-2022 Intel Corporation
+ * Copyright (C) 2012-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2014 Intel Mobile Communications GmbH
* Copyright (C) 2015-2016 Intel Deutschland GmbH
*/
@@ -286,12 +286,10 @@ struct iwl_mvm_key_pn {
*
* @IWL_MVM_RXQ_EMPTY: empty sync notification
* @IWL_MVM_RXQ_NOTIF_DEL_BA: notify RSS queues of delBA
- * @IWL_MVM_RXQ_NSSN_SYNC: notify all the RSS queues with the new NSSN
*/
enum iwl_mvm_rxq_notif_type {
IWL_MVM_RXQ_EMPTY,
IWL_MVM_RXQ_NOTIF_DEL_BA,
- IWL_MVM_RXQ_NSSN_SYNC,
};
/**
@@ -315,11 +313,6 @@ struct iwl_mvm_delba_data {
u32 baid;
} __packed;
-struct iwl_mvm_nssn_sync_data {
- u32 baid;
- u32 nssn;
-} __packed;
-
/**
* struct iwl_mvm_rxq_dup_data - per station per rx queue data
* @last_seq: last sequence per tid for duplicate packet detection
@@ -356,6 +349,7 @@ struct iwl_mvm_link_sta {
/**
* struct iwl_mvm_sta - representation of a station in the driver
+ * @vif: the interface the station belongs to
* @tfd_queue_msk: the tfd queues used by the station
* @mac_id_n_color: the MAC context this station is linked to
* @tid_disable_agg: bitmap: if bit(tid) is set, the fw won't send ampdus for
@@ -380,6 +374,7 @@ struct iwl_mvm_link_sta {
* @amsdu_enabled: bitmap of TX AMSDU allowed TIDs.
* In case TLC offload is not active it is either 0xFFFF or 0.
* @max_amsdu_len: max AMSDU length
+ * @sleeping: indicates the station is sleeping (when not offloaded to FW)
* @agg_tids: bitmap of tids whose status is operational aggregated (IWL_AGG_ON)
* @sleeping: sta sleep transitions in power management
* @sleep_tx_count: the number of frames that we told the firmware to let out
@@ -389,7 +384,6 @@ struct iwl_mvm_link_sta {
* the BA window. To be used for UAPSD only.
* @ptk_pn: per-queue PTK PN data structures
* @dup_data: per queue duplicate packet detection data
- * @deferred_traffic_tid_map: indication bitmap of deferred traffic per-TID
* @tx_ant: the index of the antenna to use for data tx to this station. Only
* used during connection establishment (e.g. for the 4 way handshake
* exchange).
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c b/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c
index dae6f2a1aad9..e7d5f4ebeb25 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/tdls.c
@@ -2,7 +2,7 @@
/*
* Copyright (C) 2014 Intel Mobile Communications GmbH
* Copyright (C) 2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2020, 2022 Intel Corporation
+ * Copyright (C) 2018-2020, 2022-2023 Intel Corporation
*/
#include <linux/etherdevice.h>
#include "mvm.h"
@@ -144,7 +144,8 @@ void iwl_mvm_recalc_tdls_state(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
}
void iwl_mvm_mac_mgd_protect_tdls_discover(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif)
+ struct ieee80211_vif *vif,
+ unsigned int link_id)
{
struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
u32 duration = 2 * vif->bss_conf.dtim_period * vif->bss_conf.beacon_int;
@@ -154,7 +155,7 @@ void iwl_mvm_mac_mgd_protect_tdls_discover(struct ieee80211_hw *hw,
if (fw_has_capa(&mvm->fw->ucode_capa,
IWL_UCODE_TLV_CAPA_SESSION_PROT_CMD))
iwl_mvm_schedule_session_protection(mvm, vif, duration,
- duration, true);
+ duration, true, link_id);
else
iwl_mvm_protect_session(mvm, vif, duration,
duration, 100, true);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
index 5f0e7144a951..218fdf1ed530 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
@@ -42,6 +42,7 @@ void iwl_mvm_te_clear_data(struct iwl_mvm *mvm,
te_data->uid = 0;
te_data->id = TE_MAX;
te_data->vif = NULL;
+ te_data->link_id = -1;
}
void iwl_mvm_roc_done_wk(struct work_struct *wk)
@@ -78,9 +79,29 @@ void iwl_mvm_roc_done_wk(struct work_struct *wk)
*/
if (!WARN_ON(!mvm->p2p_device_vif)) {
- mvmvif = iwl_mvm_vif_from_mac80211(mvm->p2p_device_vif);
- iwl_mvm_flush_sta(mvm, &mvmvif->deflink.bcast_sta,
- true);
+ struct ieee80211_vif *vif = mvm->p2p_device_vif;
+
+ mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ iwl_mvm_flush_sta(mvm, mvmvif->deflink.bcast_sta.sta_id,
+ mvmvif->deflink.bcast_sta.tfd_queue_msk);
+
+ if (mvm->mld_api_is_used) {
+ iwl_mvm_mld_rm_bcast_sta(mvm, vif,
+ &vif->bss_conf);
+
+ iwl_mvm_link_changed(mvm, vif, &vif->bss_conf,
+ LINK_CONTEXT_MODIFY_ACTIVE,
+ false);
+ } else {
+ iwl_mvm_rm_p2p_bcast_sta(mvm, vif);
+ iwl_mvm_binding_remove_vif(mvm, vif);
+ }
+
+ /* Do not remove the PHY context as removing and adding
+ * a PHY context has timing overheads. Leaving it
+ * configured in FW would be useful in case the next ROC
+ * is with the same channel.
+ */
}
}
@@ -93,7 +114,8 @@ void iwl_mvm_roc_done_wk(struct work_struct *wk)
*/
if (test_and_clear_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status)) {
/* do the same in case of hot spot 2.0 */
- iwl_mvm_flush_sta(mvm, &mvm->aux_sta, true);
+ iwl_mvm_flush_sta(mvm, mvm->aux_sta.sta_id,
+ mvm->aux_sta.tfd_queue_msk);
if (mvm->mld_api_is_used) {
iwl_mvm_mld_rm_aux_sta(mvm);
@@ -223,7 +245,7 @@ iwl_mvm_te_handle_notify_csa(struct iwl_mvm *mvm,
}
iwl_mvm_csa_client_absent(mvm, te_data->vif);
cancel_delayed_work(&mvmvif->csa_work);
- ieee80211_chswitch_done(te_data->vif, true);
+ ieee80211_chswitch_done(te_data->vif, true, 0);
break;
default:
/* should never happen */
@@ -378,6 +400,22 @@ static void iwl_mvm_te_handle_notif(struct iwl_mvm *mvm,
}
}
+void iwl_mvm_rx_roc_notif(struct iwl_mvm *mvm,
+ struct iwl_rx_cmd_buffer *rxb)
+{
+ struct iwl_rx_packet *pkt = rxb_addr(rxb);
+ struct iwl_roc_notif *notif = (void *)pkt->data;
+
+ if (le32_to_cpu(notif->success) && le32_to_cpu(notif->started) &&
+ le32_to_cpu(notif->activity) == ROC_ACTIVITY_HOTSPOT) {
+ set_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status);
+ ieee80211_ready_on_channel(mvm->hw);
+ } else {
+ iwl_mvm_roc_finished(mvm);
+ ieee80211_remain_on_channel_expired(mvm->hw);
+ }
+}
+
/*
 * Handle an Aux ROC time event
*/
@@ -651,19 +689,46 @@ void iwl_mvm_protect_session(struct iwl_mvm *mvm,
}
}
+/* Determine whether mac or link id should be used, and validate the link id */
+static int iwl_mvm_get_session_prot_id(struct iwl_mvm *mvm,
+ struct ieee80211_vif *vif,
+ u32 link_id)
+{
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ int ver = iwl_fw_lookup_cmd_ver(mvm->fw,
+ WIDE_ID(MAC_CONF_GROUP,
+ SESSION_PROTECTION_CMD), 1);
+
+ if (ver < 2)
+ return mvmvif->id;
+
+ if (WARN(link_id < 0 || !mvmvif->link[link_id],
+ "Invalid link ID for session protection: %u\n", link_id))
+ return -EINVAL;
+
+ if (WARN(ieee80211_vif_is_mld(vif) &&
+ !(vif->active_links & BIT(link_id)),
+ "Session Protection on an inactive link: %u\n", link_id))
+ return -EINVAL;
+
+ return mvmvif->link[link_id]->fw_link_id;
+}
+
static void iwl_mvm_cancel_session_protection(struct iwl_mvm *mvm,
- struct iwl_mvm_vif *mvmvif,
- u32 id)
+ struct ieee80211_vif *vif,
+ u32 id, u32 link_id)
{
+ int mac_link_id = iwl_mvm_get_session_prot_id(mvm, vif, link_id);
struct iwl_mvm_session_prot_cmd cmd = {
- .id_and_color =
- cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
- mvmvif->color)),
+ .id_and_color = cpu_to_le32(mac_link_id),
.action = cpu_to_le32(FW_CTXT_ACTION_REMOVE),
.conf_id = cpu_to_le32(id),
};
int ret;
+ if (mac_link_id < 0)
+ return;
+
ret = iwl_mvm_send_cmd_pdu(mvm,
WIDE_ID(MAC_CONF_GROUP, SESSION_PROTECTION_CMD),
0, sizeof(cmd), &cmd);
@@ -677,10 +742,12 @@ static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm,
u32 *uid)
{
u32 id;
+ struct ieee80211_vif *vif = te_data->vif;
struct iwl_mvm_vif *mvmvif;
enum nl80211_iftype iftype;
+ unsigned int link_id;
- if (!te_data->vif)
+ if (!vif)
return false;
mvmvif = iwl_mvm_vif_from_mac80211(te_data->vif);
@@ -695,6 +762,7 @@ static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm,
/* Save time event uid before clearing its data */
*uid = te_data->uid;
id = te_data->id;
+ link_id = te_data->link_id;
/*
* The clear_data function handles time events that were already removed
@@ -712,7 +780,8 @@ static bool __iwl_mvm_remove_time_event(struct iwl_mvm *mvm,
id != HOT_SPOT_CMD) {
if (mvmvif && id < SESSION_PROTECT_CONF_MAX_ID) {
/* Session protection is still ongoing. Cancel it */
- iwl_mvm_cancel_session_protection(mvm, mvmvif, id);
+ iwl_mvm_cancel_session_protection(mvm, vif, id,
+ link_id);
if (iftype == NL80211_IFTYPE_P2P_DEVICE) {
iwl_mvm_p2p_roc_finished(mvm);
}
@@ -829,18 +898,41 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
{
struct iwl_rx_packet *pkt = rxb_addr(rxb);
struct iwl_mvm_session_prot_notif *notif = (void *)pkt->data;
+ unsigned int ver =
+ iwl_fw_lookup_cmd_ver(mvm->fw,
+ WIDE_ID(MAC_CONF_GROUP,
+ SESSION_PROTECTION_CMD), 2);
+ int id = le32_to_cpu(notif->mac_link_id);
struct ieee80211_vif *vif;
struct iwl_mvm_vif *mvmvif;
+ unsigned int notif_link_id;
rcu_read_lock();
- vif = iwl_mvm_rcu_dereference_vif_id(mvm, le32_to_cpu(notif->mac_id),
- true);
+
+ if (ver <= 2) {
+ vif = iwl_mvm_rcu_dereference_vif_id(mvm, id, true);
+ } else {
+ struct ieee80211_bss_conf *link_conf =
+ iwl_mvm_rcu_fw_link_id_to_link_conf(mvm, id, true);
+
+ if (!link_conf)
+ goto out_unlock;
+
+ notif_link_id = link_conf->link_id;
+ vif = link_conf->vif;
+ }
if (!vif)
goto out_unlock;
mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ if (WARN(ver > 2 && mvmvif->time_event_data.link_id >= 0 &&
+ mvmvif->time_event_data.link_id != notif_link_id,
+ "SESSION_PROTECTION_NOTIF was received for link %u, while the current time event is on link %u\n",
+ notif_link_id, mvmvif->time_event_data.link_id))
+ goto out_unlock;
+
/* The vif is not a P2P_DEVICE, maintain its time_event_data */
if (vif->type != NL80211_IFTYPE_P2P_DEVICE) {
struct iwl_mvm_time_event_data *te_data =
@@ -880,8 +972,8 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
if (!le32_to_cpu(notif->status) || !le32_to_cpu(notif->start)) {
/* End TE, notify mac80211 */
mvmvif->time_event_data.id = SESSION_PROTECT_CONF_MAX_ID;
- ieee80211_remain_on_channel_expired(mvm->hw);
iwl_mvm_p2p_roc_finished(mvm);
+ ieee80211_remain_on_channel_expired(mvm->hw);
} else if (le32_to_cpu(notif->start)) {
if (WARN_ON(mvmvif->time_event_data.id !=
le32_to_cpu(notif->conf_id)))
@@ -903,8 +995,7 @@ iwl_mvm_start_p2p_roc_session_protection(struct iwl_mvm *mvm,
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_session_prot_cmd cmd = {
.id_and_color =
- cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
- mvmvif->color)),
+ cpu_to_le32(iwl_mvm_get_session_prot_id(mvm, vif, 0)),
.action = cpu_to_le32(FW_CTXT_ACTION_ADD),
.duration_tu = cpu_to_le32(MSEC_TO_TU(duration)),
};
@@ -914,6 +1005,9 @@ iwl_mvm_start_p2p_roc_session_protection(struct iwl_mvm *mvm,
/* The time_event_data.id field is reused to save session
* protection's configuration.
*/
+
+ mvmvif->time_event_data.link_id = 0;
+
switch (type) {
case IEEE80211_ROC_TYPE_NORMAL:
mvmvif->time_event_data.id =
@@ -1030,6 +1124,37 @@ void iwl_mvm_cleanup_roc_te(struct iwl_mvm *mvm)
__iwl_mvm_remove_time_event(mvm, te_data, &uid);
}
+static void iwl_mvm_roc_rm_cmd(struct iwl_mvm *mvm, u32 activity)
+{
+ int ret;
+ struct iwl_roc_req roc_cmd = {
+ .action = cpu_to_le32(FW_CTXT_ACTION_REMOVE),
+ .activity = cpu_to_le32(activity),
+ };
+
+ lockdep_assert_held(&mvm->mutex);
+ ret = iwl_mvm_send_cmd_pdu(mvm,
+ WIDE_ID(MAC_CONF_GROUP, ROC_CMD),
+ 0, sizeof(roc_cmd), &roc_cmd);
+ WARN_ON(ret);
+}
+
+static void iwl_mvm_roc_station_remove(struct iwl_mvm *mvm,
+ struct iwl_mvm_vif *mvmvif)
+{
+ u32 cmd_id = WIDE_ID(MAC_CONF_GROUP, ROC_CMD);
+ u8 fw_ver = iwl_fw_lookup_cmd_ver(mvm->fw, cmd_id,
+ IWL_FW_CMD_VER_UNKNOWN);
+
+ if (fw_ver == IWL_FW_CMD_VER_UNKNOWN)
+ iwl_mvm_remove_aux_roc_te(mvm, mvmvif,
+ &mvmvif->hs_time_event_data);
+ else if (fw_ver == 3)
+ iwl_mvm_roc_rm_cmd(mvm, ROC_ACTIVITY_HOTSPOT);
+ else
+ IWL_ERR(mvm, "ROC command version %d mismatch!\n", fw_ver);
+}
+
void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
{
struct iwl_mvm_vif *mvmvif;
@@ -1040,12 +1165,12 @@ void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif)
mvmvif = iwl_mvm_vif_from_mac80211(vif);
if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
- iwl_mvm_cancel_session_protection(mvm, mvmvif,
- mvmvif->time_event_data.id);
+ iwl_mvm_cancel_session_protection(mvm, vif,
+ mvmvif->time_event_data.id,
+ mvmvif->time_event_data.link_id);
iwl_mvm_p2p_roc_finished(mvm);
} else {
- iwl_mvm_remove_aux_roc_te(mvm, mvmvif,
- &mvmvif->hs_time_event_data);
+ iwl_mvm_roc_station_remove(mvm, mvmvif);
iwl_mvm_roc_finished(mvm);
}
@@ -1164,25 +1289,28 @@ static bool iwl_mvm_session_prot_notif(struct iwl_notif_wait_data *notif_wait,
void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
u32 duration, u32 min_duration,
- bool wait_for_notif)
+ bool wait_for_notif,
+ unsigned int link_id)
{
struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
struct iwl_mvm_time_event_data *te_data = &mvmvif->time_event_data;
const u16 notif[] = { WIDE_ID(MAC_CONF_GROUP, SESSION_PROTECTION_NOTIF) };
struct iwl_notification_wait wait_notif;
+ int mac_link_id = iwl_mvm_get_session_prot_id(mvm, vif, link_id);
struct iwl_mvm_session_prot_cmd cmd = {
- .id_and_color =
- cpu_to_le32(FW_CMD_ID_AND_COLOR(mvmvif->id,
- mvmvif->color)),
+ .id_and_color = cpu_to_le32(mac_link_id),
.action = cpu_to_le32(FW_CTXT_ACTION_ADD),
.conf_id = cpu_to_le32(SESSION_PROTECT_CONF_ASSOC),
.duration_tu = cpu_to_le32(MSEC_TO_TU(duration)),
};
+ if (mac_link_id < 0)
+ return;
+
lockdep_assert_held(&mvm->mutex);
spin_lock_bh(&mvm->time_event_lock);
- if (te_data->running &&
+ if (te_data->running && te_data->link_id == link_id &&
time_after(te_data->end_jiffies, TU_TO_EXP_TIME(min_duration))) {
IWL_DEBUG_TE(mvm, "We have enough time in the current TE: %u\n",
jiffies_to_msecs(te_data->end_jiffies - jiffies));
@@ -1199,6 +1327,7 @@ void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
te_data->id = le32_to_cpu(cmd.conf_id);
te_data->duration = le32_to_cpu(cmd.duration_tu);
te_data->vif = vif;
+ te_data->link_id = link_id;
spin_unlock_bh(&mvm->time_event_lock);
IWL_DEBUG_TE(mvm, "Add new session protection, duration %d TU\n",
@@ -1208,11 +1337,7 @@ void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
if (iwl_mvm_send_cmd_pdu(mvm,
WIDE_ID(MAC_CONF_GROUP, SESSION_PROTECTION_CMD),
0, sizeof(cmd), &cmd)) {
- IWL_ERR(mvm,
- "Couldn't send the SESSION_PROTECTION_CMD\n");
- spin_lock_bh(&mvm->time_event_lock);
- iwl_mvm_te_clear_data(mvm, te_data);
- spin_unlock_bh(&mvm->time_event_lock);
+ goto send_cmd_err;
}
return;
@@ -1225,12 +1350,19 @@ void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
if (iwl_mvm_send_cmd_pdu(mvm,
WIDE_ID(MAC_CONF_GROUP, SESSION_PROTECTION_CMD),
0, sizeof(cmd), &cmd)) {
- IWL_ERR(mvm,
- "Couldn't send the SESSION_PROTECTION_CMD\n");
iwl_remove_notification(&mvm->notif_wait, &wait_notif);
+ goto send_cmd_err;
} else if (iwl_wait_notification(&mvm->notif_wait, &wait_notif,
TU_TO_JIFFIES(100))) {
IWL_ERR(mvm,
"Failed to protect session until session protection\n");
}
+ return;
+
+send_cmd_err:
+ IWL_ERR(mvm,
+ "Couldn't send the SESSION_PROTECTION_CMD\n");
+ spin_lock_bh(&mvm->time_event_lock);
+ iwl_mvm_te_clear_data(mvm, te_data);
+ spin_unlock_bh(&mvm->time_event_lock);
}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.h b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.h
index 989a5319fb21..49256ba4cf58 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.h
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.h
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
/*
- * Copyright (C) 2012-2014, 2019-2020 Intel Corporation
+ * Copyright (C) 2012-2014, 2019-2020, 2023 Intel Corporation
* Copyright (C) 2013-2014 Intel Mobile Communications GmbH
*/
#ifndef __time_event_h__
@@ -101,6 +101,14 @@ void iwl_mvm_rx_time_event_notif(struct iwl_mvm *mvm,
struct iwl_rx_cmd_buffer *rxb);
/**
+ * iwl_mvm_rx_roc_notif - handles %DISCOVERY_ROC_NTF.
+ * @mvm: the mvm component
+ * @rxb: RX buffer
+ */
+void iwl_mvm_rx_roc_notif(struct iwl_mvm *mvm,
+ struct iwl_rx_cmd_buffer *rxb);
+
+/**
* iwl_mvm_start_p2p_roc - start remain on channel for p2p device functionality
* @mvm: the mvm component
* @vif: the virtual interface for which the roc is requested. It is assumed
@@ -134,7 +142,7 @@ void iwl_mvm_stop_roc(struct iwl_mvm *mvm, struct ieee80211_vif *vif);
/**
* iwl_mvm_remove_time_event - general function to clean up of time event
* @mvm: the mvm component
- * @vif: the vif to which the time event belongs
+ * @mvmvif: the vif to which the time event belongs
* @te_data: the time event data that corresponds to that time event
*
* This function can be used to cancel a time event regardless its type.
@@ -195,16 +203,21 @@ iwl_mvm_te_scheduled(struct iwl_mvm_time_event_data *te_data)
* iwl_mvm_schedule_session_protection - schedule a session protection
* @mvm: the mvm component
* @vif: the virtual interface for which the protection issued
- * @duration: the duration of the protection
+ * @duration: the requested duration of the protection
+ * @min_duration: the minimum duration of the protection
* @wait_for_notif: if true, will block until the start of the protection
+ * @link_id: The link to schedule a session protection for
*/
void iwl_mvm_schedule_session_protection(struct iwl_mvm *mvm,
struct ieee80211_vif *vif,
u32 duration, u32 min_duration,
- bool wait_for_notif);
+ bool wait_for_notif,
+ unsigned int link_id);
/**
* iwl_mvm_rx_session_protect_notif - handles %SESSION_PROTECTION_NOTIF
+ * @mvm: the mvm component
+ * @rxb: the RX buffer containing the notification
*/
void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
struct iwl_rx_cmd_buffer *rxb);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tt.c b/drivers/net/wireless/intel/iwlwifi/mvm/tt.c
index 157e96fa23c1..dee9c367dcd3 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/tt.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/tt.c
@@ -642,7 +642,6 @@ static int iwl_mvm_tzone_set_trip_temp(struct thermal_zone_device *device,
int trip, int temp)
{
struct iwl_mvm *mvm = thermal_zone_device_priv(device);
- struct iwl_mvm_thermal_device *tzone;
int ret;
mutex_lock(&mvm->mutex);
@@ -658,12 +657,6 @@ static int iwl_mvm_tzone_set_trip_temp(struct thermal_zone_device *device,
goto out;
}
- tzone = &mvm->tz_device;
- if (!tzone) {
- ret = -EIO;
- goto out;
- }
-
ret = iwl_mvm_send_temp_report_ths_cmd(mvm);
out:
mutex_unlock(&mvm->mutex);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
index 898dca393643..ae5cd13cd6dd 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
@@ -262,8 +262,42 @@ static u32 iwl_mvm_get_tx_ant(struct iwl_mvm *mvm,
return BIT(mvm->mgmt_last_antenna_idx) << RATE_MCS_ANT_POS;
}
+static u32 iwl_mvm_convert_rate_idx(struct iwl_mvm *mvm,
+ struct ieee80211_tx_info *info,
+ int rate_idx)
+{
+ u32 rate_flags = 0;
+ u8 rate_plcp;
+ bool is_cck;
+
+ /* if the rate isn't a well known legacy rate, take the lowest one */
+ if (rate_idx < 0 || rate_idx >= IWL_RATE_COUNT_LEGACY)
+ rate_idx = iwl_mvm_mac_ctxt_get_lowest_rate(mvm,
+ info,
+ info->control.vif);
+
+ /* Get PLCP rate for tx_cmd->rate_n_flags */
+ rate_plcp = iwl_mvm_mac80211_idx_to_hwrate(mvm->fw, rate_idx);
+ is_cck = (rate_idx >= IWL_FIRST_CCK_RATE) &&
+ (rate_idx <= IWL_LAST_CCK_RATE);
+
+ /* Set CCK or OFDM flag */
+ if (iwl_fw_lookup_cmd_ver(mvm->fw, TX_CMD, 0) > 8) {
+ if (!is_cck)
+ rate_flags |= RATE_MCS_LEGACY_OFDM_MSK;
+ else
+ rate_flags |= RATE_MCS_CCK_MSK;
+ } else if (is_cck) {
+ rate_flags |= RATE_MCS_CCK_MSK_V1;
+ }
+
+ return (u32)rate_plcp | rate_flags;
+}
+
static u32 iwl_mvm_get_inject_tx_rate(struct iwl_mvm *mvm,
- struct ieee80211_tx_info *info)
+ struct ieee80211_tx_info *info,
+ struct ieee80211_sta *sta,
+ __le16 fc)
{
struct ieee80211_tx_rate *rate = &info->control.rates[0];
u32 result;
@@ -288,6 +322,9 @@ static u32 iwl_mvm_get_inject_tx_rate(struct iwl_mvm *mvm,
result |= u32_encode_bits(2, RATE_MCS_CHAN_WIDTH_MSK_V1);
else if (rate->flags & IEEE80211_TX_RC_160_MHZ_WIDTH)
result |= u32_encode_bits(3, RATE_MCS_CHAN_WIDTH_MSK_V1);
+
+ if (iwl_fw_lookup_notif_ver(mvm->fw, LONG_GROUP, TX_CMD, 0) > 6)
+ result = iwl_new_rate_from_v1(result);
} else if (rate->flags & IEEE80211_TX_RC_MCS) {
result = RATE_MCS_HT_MSK_V1;
result |= u32_encode_bits(rate->idx,
@@ -301,12 +338,21 @@ static u32 iwl_mvm_get_inject_tx_rate(struct iwl_mvm *mvm,
result |= RATE_MCS_LDPC_MSK_V1;
if (u32_get_bits(info->flags, IEEE80211_TX_CTL_STBC))
result |= RATE_MCS_STBC_MSK;
+
+ if (iwl_fw_lookup_notif_ver(mvm->fw, LONG_GROUP, TX_CMD, 0) > 6)
+ result = iwl_new_rate_from_v1(result);
} else {
- return 0;
+ int rate_idx = info->control.rates[0].idx;
+
+ result = iwl_mvm_convert_rate_idx(mvm, info, rate_idx);
}
- if (iwl_fw_lookup_notif_ver(mvm->fw, LONG_GROUP, TX_CMD, 0) > 6)
- return iwl_new_rate_from_v1(result);
+ if (info->control.antennas)
+ result |= u32_encode_bits(info->control.antennas,
+ RATE_MCS_ANT_AB_MSK);
+ else
+ result |= iwl_mvm_get_tx_ant(mvm, info, sta, fc);
+
return result;
}
@@ -315,17 +361,8 @@ static u32 iwl_mvm_get_tx_rate(struct iwl_mvm *mvm,
struct ieee80211_sta *sta, __le16 fc)
{
int rate_idx = -1;
- u8 rate_plcp;
- u32 rate_flags = 0;
- bool is_cck;
-
- if (unlikely(info->control.flags & IEEE80211_TX_CTRL_RATE_INJECT)) {
- u32 result = iwl_mvm_get_inject_tx_rate(mvm, info);
- if (result)
- return result;
- rate_idx = info->control.rates[0].idx;
- } else if (!ieee80211_hw_check(mvm->hw, HAS_RATE_CONTROL)) {
+ if (!ieee80211_hw_check(mvm->hw, HAS_RATE_CONTROL)) {
/* info->control is only relevant for non HW rate control */
/* HT rate doesn't make sense for a non data frame */
@@ -350,33 +387,16 @@ static u32 iwl_mvm_get_tx_rate(struct iwl_mvm *mvm,
BUILD_BUG_ON(IWL_FIRST_CCK_RATE != 0);
}
- /* if the rate isn't a well known legacy rate, take the lowest one */
- if (rate_idx < 0 || rate_idx >= IWL_RATE_COUNT_LEGACY)
- rate_idx = iwl_mvm_mac_ctxt_get_lowest_rate(mvm,
- info,
- info->control.vif);
-
- /* Get PLCP rate for tx_cmd->rate_n_flags */
- rate_plcp = iwl_mvm_mac80211_idx_to_hwrate(mvm->fw, rate_idx);
- is_cck = (rate_idx >= IWL_FIRST_CCK_RATE) && (rate_idx <= IWL_LAST_CCK_RATE);
-
- /* Set CCK or OFDM flag */
- if (iwl_fw_lookup_cmd_ver(mvm->fw, TX_CMD, 0) > 8) {
- if (!is_cck)
- rate_flags |= RATE_MCS_LEGACY_OFDM_MSK;
- else
- rate_flags |= RATE_MCS_CCK_MSK;
- } else if (is_cck) {
- rate_flags |= RATE_MCS_CCK_MSK_V1;
- }
-
- return (u32)rate_plcp | rate_flags;
+ return iwl_mvm_convert_rate_idx(mvm, info, rate_idx);
}
static u32 iwl_mvm_get_tx_rate_n_flags(struct iwl_mvm *mvm,
struct ieee80211_tx_info *info,
struct ieee80211_sta *sta, __le16 fc)
{
+ if (unlikely(info->control.flags & IEEE80211_TX_CTRL_RATE_INJECT))
+ return iwl_mvm_get_inject_tx_rate(mvm, info, sta, fc);
+
return iwl_mvm_get_tx_rate(mvm, info, sta, fc) |
iwl_mvm_get_tx_ant(mvm, info, sta, fc);
}
@@ -536,16 +556,20 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
flags |= IWL_TX_FLAGS_ENCRYPT_DIS;
/*
- * For data packets rate info comes from the fw. Only
- * set rate/antenna during connection establishment or in case
- * no station is given.
+ * For data and mgmt packets rate info comes from the fw. Only
+ * set rate/antenna for injected frames with fixed rate, or
+ * when no sta is given.
*/
- if (!sta || !ieee80211_is_data(hdr->frame_control) ||
- mvmsta->sta_state < IEEE80211_STA_AUTHORIZED) {
+ if (unlikely(!sta ||
+ info->control.flags & IEEE80211_TX_CTRL_RATE_INJECT)) {
flags |= IWL_TX_FLAGS_CMD_RATE;
rate_n_flags =
iwl_mvm_get_tx_rate_n_flags(mvm, info, sta,
hdr->frame_control);
+ } else if (!ieee80211_is_data(hdr->frame_control) ||
+ mvmsta->sta_state < IEEE80211_STA_AUTHORIZED) {
+ /* These are important frames */
+ flags |= IWL_TX_FLAGS_HIGH_PRI;
}
if (mvm->trans->trans_cfg->device_family >=
@@ -1599,7 +1623,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
seq_ctl = le16_to_cpu(tx_resp->seq_ctl);
/* we can free until ssn % q.n_bd not inclusive */
- iwl_trans_reclaim(mvm->trans, txq_id, ssn, &skbs);
+ iwl_trans_reclaim(mvm->trans, txq_id, ssn, &skbs, false);
while (!skb_queue_empty(&skbs)) {
struct sk_buff *skb = __skb_dequeue(&skbs);
@@ -1700,7 +1724,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
RS_DRV_DATA_PACK(lq_color, tx_resp->reduced_tpc);
if (likely(!iwl_mvm_time_sync_frame(mvm, skb, hdr->addr1)))
- ieee80211_tx_status(mvm->hw, skb);
+ ieee80211_tx_status_skb(mvm->hw, skb);
}
/* This is an aggregation queue or might become one, so we use
@@ -1951,7 +1975,7 @@ static void iwl_mvm_tx_reclaim(struct iwl_mvm *mvm, int sta_id, int tid,
* block-ack window (we assume that they've been successfully
* transmitted ... if not, it's too late anyway).
*/
- iwl_trans_reclaim(mvm->trans, txq, index, &reclaimed_skbs);
+ iwl_trans_reclaim(mvm->trans, txq, index, &reclaimed_skbs, is_flush);
skb_queue_walk(&reclaimed_skbs, skb) {
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
@@ -2056,7 +2080,7 @@ out:
while (!skb_queue_empty(&reclaimed_skbs)) {
skb = __skb_dequeue(&reclaimed_skbs);
- ieee80211_tx_status(mvm->hw, skb);
+ ieee80211_tx_status_skb(mvm->hw, skb);
}
}
@@ -2293,24 +2317,10 @@ free_rsp:
return ret;
}
-int iwl_mvm_flush_sta(struct iwl_mvm *mvm, void *sta, bool internal)
+int iwl_mvm_flush_sta(struct iwl_mvm *mvm, u32 sta_id, u32 tfd_queue_mask)
{
- u32 sta_id, tfd_queue_msk;
-
- if (internal) {
- struct iwl_mvm_int_sta *int_sta = sta;
-
- sta_id = int_sta->sta_id;
- tfd_queue_msk = int_sta->tfd_queue_msk;
- } else {
- struct iwl_mvm_sta *mvm_sta = sta;
-
- sta_id = mvm_sta->deflink.sta_id;
- tfd_queue_msk = mvm_sta->tfd_queue_msk;
- }
-
if (iwl_mvm_has_new_tx_api(mvm))
return iwl_mvm_flush_sta_tids(mvm, sta_id, 0xffff);
- return iwl_mvm_flush_tx_path(mvm, tfd_queue_msk);
+ return iwl_mvm_flush_tx_path(mvm, tfd_queue_mask);
}
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
index 48016b4343d2..91286018a69d 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
@@ -342,6 +342,60 @@ static bool iwl_wait_stats_complete(struct iwl_notif_wait_data *notif_wait,
return true;
}
+static int iwl_mvm_request_system_statistics(struct iwl_mvm *mvm, bool clear,
+ u8 cmd_ver)
+{
+ struct iwl_system_statistics_cmd system_cmd = {
+ .cfg_mask = clear ?
+ cpu_to_le32(IWL_STATS_CFG_FLG_ON_DEMAND_NTFY_MSK) :
+ cpu_to_le32(IWL_STATS_CFG_FLG_RESET_MSK |
+ IWL_STATS_CFG_FLG_ON_DEMAND_NTFY_MSK),
+ .type_id_mask = cpu_to_le32(IWL_STATS_NTFY_TYPE_ID_OPER |
+ IWL_STATS_NTFY_TYPE_ID_OPER_PART1),
+ };
+ struct iwl_host_cmd cmd = {
+ .id = WIDE_ID(SYSTEM_GROUP, SYSTEM_STATISTICS_CMD),
+ .len[0] = sizeof(system_cmd),
+ .data[0] = &system_cmd,
+ };
+ struct iwl_notification_wait stats_wait;
+ static const u16 stats_complete[] = {
+ WIDE_ID(SYSTEM_GROUP, SYSTEM_STATISTICS_END_NOTIF),
+ };
+ int ret;
+
+ if (cmd_ver != 1) {
+ IWL_FW_CHECK_FAILED(mvm,
+ "Invalid system statistics command version:%d\n",
+ cmd_ver);
+ return -EOPNOTSUPP;
+ }
+
+ iwl_init_notification_wait(&mvm->notif_wait, &stats_wait,
+ stats_complete, ARRAY_SIZE(stats_complete),
+ NULL, NULL);
+
+ mvm->statistics_clear = clear;
+ ret = iwl_mvm_send_cmd(mvm, &cmd);
+ if (ret) {
+ iwl_remove_notification(&mvm->notif_wait, &stats_wait);
+ return ret;
+ }
+
+ /* 500ms for OPERATIONAL, PART1 and END notification should be enough
+ * for FW to collect data from all LMACs and send
+ * STATISTICS_NOTIFICATION to host
+ */
+ ret = iwl_wait_notification(&mvm->notif_wait, &stats_wait, HZ / 2);
+ if (ret)
+ return ret;
+
+ if (clear)
+ iwl_mvm_accu_radio_stats(mvm);
+
+ return ret;
+}
+
int iwl_mvm_request_statistics(struct iwl_mvm *mvm, bool clear)
{
struct iwl_statistics_cmd scmd = {
@@ -353,8 +407,15 @@ int iwl_mvm_request_statistics(struct iwl_mvm *mvm, bool clear)
.len[0] = sizeof(scmd),
.data[0] = &scmd,
};
+ u8 cmd_ver = iwl_fw_lookup_cmd_ver(mvm->fw,
+ WIDE_ID(SYSTEM_GROUP,
+ SYSTEM_STATISTICS_CMD),
+ IWL_FW_CMD_VER_UNKNOWN);
int ret;
+ if (cmd_ver != IWL_FW_CMD_VER_UNKNOWN)
+ return iwl_mvm_request_system_statistics(mvm, clear, cmd_ver);
+
/* From version 15 - STATISTICS_NOTIFICATION, the reply for
* STATISTICS_CMD is empty, and the response is with
* STATISTICS_NOTIFICATION notification
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
index bc83d2ba55c6..26a0953603ab 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
/*
- * Copyright (C) 2005-2014, 2018-2021 Intel Corporation
+ * Copyright (C) 2005-2014, 2018-2023 Intel Corporation
* Copyright (C) 2013-2015 Intel Mobile Communications GmbH
* Copyright (C) 2016-2017 Intel Deutschland GmbH
*/
@@ -1134,7 +1134,7 @@ static int get_crf_id(struct iwl_trans *iwl_trans)
/* Enable access to peripheral registers */
val = iwl_read_umac_prph_no_grab(iwl_trans, WFPM_CTRL_REG);
- val |= ENABLE_WFPM;
+ val |= WFPM_AUX_CTL_AUX_IF_MAC_OWNER_MSK;
iwl_write_umac_prph_no_grab(iwl_trans, WFPM_CTRL_REG, val);
/* Read crf info */
@@ -1196,6 +1196,9 @@ static int map_crf_id(struct iwl_trans *iwl_trans)
case REG_CRF_ID_TYPE_FMR:
iwl_trans->hw_rf_id = (IWL_CFG_RF_TYPE_FM << 12);
break;
+ case REG_CRF_ID_TYPE_WHP:
+ iwl_trans->hw_rf_id = (IWL_CFG_RF_TYPE_WH << 12);
+ break;
default:
ret = -EIO;
IWL_ERR(iwl_trans,
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
index 0f6493dab8cb..56def20374f3 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
@@ -58,10 +58,7 @@ struct iwl_rx_mem_buffer {
bool invalid;
};
-/**
- * struct isr_statistics - interrupt statistics
- *
- */
+/* interrupt statistics */
struct isr_statistics {
u32 hw;
u32 sw;
@@ -127,6 +124,8 @@ struct iwl_rx_completion_desc_bz {
* @used_bd_dma: physical address of buffer of used receive buffer descriptors (rbd)
* @read: Shared index to newest available Rx buffer
* @write: Shared index to oldest written Rx packet
+ * @write_actual: actual write pointer written to device, since we update in
+ * blocks of 8 only
* @free_count: Number of pre-allocated buffers in rx_free
* @used_count: Number of RBDs handled to allocator to use for allocation
* @write_actual:
@@ -135,10 +134,12 @@ struct iwl_rx_completion_desc_bz {
* @need_update: flag to indicate we need to update read/write index
* @rb_stts: driver's pointer to receive buffer status
* @rb_stts_dma: bus address of receive buffer status
- * @lock:
+ * @lock: per-queue lock
* @queue: actual rx queue. Not used for multi-rx queue.
* @next_rb_is_fragment: indicates that the previous RB that we handled set
* the fragmented flag, so the next one is still another fragment
+ * @napi: NAPI struct for this queue
+ * @queue_size: size of this queue
*
* NOTE: rx_free and rx_used are used as a FIFO for iwl_rx_mem_buffers
*/
@@ -188,19 +189,20 @@ struct iwl_rb_allocator {
/**
* iwl_get_closed_rb_stts - get closed rb stts from different structs
- * @rxq - the rxq to get the rb stts from
+ * @trans: transport pointer (for configuration)
+ * @rxq: the rxq to get the rb stts from
*/
-static inline __le16 iwl_get_closed_rb_stts(struct iwl_trans *trans,
- struct iwl_rxq *rxq)
+static inline u16 iwl_get_closed_rb_stts(struct iwl_trans *trans,
+ struct iwl_rxq *rxq)
{
if (trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210) {
__le16 *rb_stts = rxq->rb_stts;
- return READ_ONCE(*rb_stts);
+ return le16_to_cpu(READ_ONCE(*rb_stts));
} else {
struct iwl_rb_status *rb_stts = rxq->rb_stts;
- return READ_ONCE(rb_stts->closed_rb_num);
+ return le16_to_cpu(READ_ONCE(rb_stts->closed_rb_num)) & 0xFFF;
}
}
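
Since the helper now hands back a host-order closed-RB index, callers such as the RX restart loop no longer repeat the le16 conversion and masking themselves. A hypothetical caller-side sketch (sketch_pending_rbs() is illustrative, not a driver function), assuming both indices are already valid 12-bit RB indices:

#include <linux/types.h>

/* number of closed-but-unprocessed RBs; wrap-around falls out of the mask */
static u32 sketch_pending_rbs(u16 closed_rb_index, u32 read_index)
{
	return (closed_rb_index - read_index) & 0xFFF;
}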
@@ -243,6 +245,7 @@ enum iwl_image_response_code {
IWL_IMAGE_RESP_FAIL = 2,
};
+#ifdef CONFIG_IWLWIFI_DEBUGFS
/**
* struct cont_rec: continuous recording data structure
* @prev_wr_ptr: the last address that was read in monitor_data
@@ -253,7 +256,6 @@ enum iwl_image_response_code {
* in &iwl_fw_mon_dbgfs_state enum
* @mutex: locked while reading from monitor_data debugfs file
*/
-#ifdef CONFIG_IWLWIFI_DEBUGFS
struct cont_rec {
u32 prev_wr_ptr;
u32 prev_wrap_cnt;
@@ -298,10 +300,6 @@ enum iwl_pcie_imr_status {
* @prph_info_dma_addr: dma addr of prph info
* @prph_scratch_dma_addr: dma addr of prph scratch
* @ctxt_info_dma_addr: dma addr of context information
- * @init_dram: DRAM data of firmware image (including paging).
- * Context information addresses will be taken from here.
- * This is driver's local copy for keeping track of size and
- * count for allocating and freeing the memory.
* @iml: image loader image virtual address
* @iml_dma_addr: image loader image DMA address
* @trans: pointer to the generic transport area
@@ -321,10 +319,9 @@ enum iwl_pcie_imr_status {
* @rx_buf_bytes: RX buffer (RB) size in bytes
* @reg_lock: protect hw register access
* @mutex: to protect stop_device / start_fw / start_hw
- * @cmd_in_flight: true when we have a host command in flight
-#ifdef CONFIG_IWLWIFI_DEBUGFS
* @fw_mon_data: fw continuous recording data
-#endif
+ * @cmd_hold_nic_awake: indicates NIC is held awake for APMG workaround
+ * during commands in flight
* @msix_entries: array of MSI-X entries
* @msix_enabled: true if managed to enable MSI-X
* @shared_vec_mask: the type of causes the shared vector handles
@@ -344,8 +341,32 @@ enum iwl_pcie_imr_status {
* @alloc_page: allocated page to still use parts of
* @alloc_page_used: how much of the allocated page was already used (bytes)
* @imr_status: imr dma state machine
- * @wait_queue_head_t: imr wait queue for dma completion
+ * @imr_waitq: imr wait queue for dma completion
* @rf_name: name/version of the CRF, if any
+ * @use_ict: whether or not ICT (interrupt table) is used
+ * @ict_index: current ICT read index
+ * @ict_tbl: ICT table pointer
+ * @ict_tbl_dma: ICT table DMA address
+ * @inta_mask: interrupt (INT-A) mask
+ * @irq_lock: lock to synchronize IRQ handling
+ * @txq_memory: TXQ allocation array
+ * @sx_waitq: waitqueue for Sx transitions
+ * @sx_complete: completion for Sx transitions
+ * @pcie_dbg_dumped_once: indicates PCIe regs were dumped already
+ * @opmode_down: indicates opmode went away
+ * @num_rx_bufs: number of RX buffers to allocate/use
+ * @no_reclaim_cmds: special commands not using reclaim flow
+ * (firmware workaround)
+ * @n_no_reclaim_cmds: number of special commands not using reclaim flow
+ * @affinity_mask: IRQ affinity mask for each RX queue
+ * @debug_rfkill: RF-kill debugging state, -1 for unset, 0/1 for radio
+ * enable/disable
+ * @fw_reset_handshake: indicates FW reset handshake is needed
+ * @fw_reset_state: state of FW reset handshake
+ * @fw_reset_waitq: waitqueue for FW reset handshake
+ * @is_down: indicates the NIC is down
+ * @isr_stats: interrupt statistics
+ * @napi_dev: (fake) netdev for NAPI registration
*/
struct iwl_trans_pcie {
struct iwl_rxq *rxq;
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
index 4614acee9f7b..146bc7bd14fb 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/rx.c
@@ -1510,7 +1510,7 @@ restart:
spin_lock(&rxq->lock);
/* uCode's read index (stored in shared DRAM) indicates the last Rx
* buffer that the driver may process (last buffer filled by ucode). */
- r = le16_to_cpu(iwl_get_closed_rb_stts(trans, rxq)) & 0x0FFF;
+ r = iwl_get_closed_rb_stts(trans, rxq);
i = rxq->read;
/* W/A 9000 device step A0 wrap-around bug */
@@ -1660,9 +1660,7 @@ irqreturn_t iwl_pcie_irq_rx_msix_handler(int irq, void *dev_id)
IWL_DEBUG_ISR(trans, "[%d] Got interrupt\n", entry->entry);
local_bh_disable();
- if (napi_schedule_prep(&rxq->napi))
- __napi_schedule(&rxq->napi);
- else
+ if (!napi_schedule(&rxq->napi))
iwl_pcie_clear_irq(trans, entry->entry);
local_bh_enable();
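
Editor's aside, not part of the patch: the simplification above relies on napi_schedule() returning a bool (true when NAPI was actually scheduled, false when it was already scheduled or disabled), which lets the napi_schedule_prep()/__napi_schedule() pair collapse into one call. A hedged sketch with invented names (demo_priv, demo_irq_handler):

#include <linux/netdevice.h>
#include <linux/interrupt.h>

struct demo_priv {
	struct napi_struct napi;
};

static irqreturn_t demo_irq_handler(int irq, void *dev_id)
{
	struct demo_priv *priv = dev_id;

	local_bh_disable();
	if (!napi_schedule(&priv->napi)) {
		/* NAPI was already running or scheduled: nothing new will
		 * be polled, so acknowledge/re-arm the interrupt here */
	}
	local_bh_enable();

	return IRQ_HANDLED;
}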
@@ -2291,6 +2289,12 @@ irqreturn_t iwl_pcie_irq_msix_handler(int irq, void *dev_id)
else
sw_err = inta_hw & MSIX_HW_INT_CAUSES_REG_SW_ERR;
+ if (inta_hw & MSIX_HW_INT_CAUSES_REG_TOP_FATAL_ERR) {
+ IWL_ERR(trans, "TOP Fatal error detected, inta_hw=0x%x.\n",
+ inta_hw);
+ /* TODO: PLDR flow required here for >= Bz */
+ }
+
/* Error detected by uCode */
if ((inta_fh & MSIX_FH_INT_CAUSES_FH_ERR) || sw_err) {
IWL_ERR(trans,
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
index fa46dad5fd68..c9e5bda8f0b7 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans-gen2.c
@@ -161,6 +161,7 @@ void _iwl_trans_pcie_gen2_stop_device(struct iwl_trans *trans)
if (test_and_clear_bit(STATUS_DEVICE_ENABLED, &trans->status)) {
IWL_DEBUG_INFO(trans,
"DEVICE_ENABLED bit was set and is now cleared\n");
+ iwl_pcie_synchronize_irqs(trans);
iwl_pcie_rx_napi_sync(trans);
iwl_txq_gen2_tx_free(trans);
iwl_pcie_rx_stop(trans);
@@ -230,11 +231,14 @@ static int iwl_pcie_gen2_nic_init(struct iwl_trans *trans)
struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans);
int queue_size = max_t(u32, IWL_CMD_QUEUE_SIZE,
trans->cfg->min_txq_size);
+ int ret;
/* TODO: most of the logic can be removed in A0 - but not in Z0 */
spin_lock_bh(&trans_pcie->irq_lock);
- iwl_pcie_gen2_apm_init(trans);
+ ret = iwl_pcie_gen2_apm_init(trans);
spin_unlock_bh(&trans_pcie->irq_lock);
+ if (ret)
+ return ret;
iwl_op_mode_nic_config(trans->op_mode);
diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
index 198933f853c5..a468e5efeecd 100644
--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
@@ -1111,6 +1111,7 @@ static const struct iwl_causes_list causes_list_common[] = {
IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_ALIVE),
IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_WAKEUP),
IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_RESET_DONE),
+ IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_TOP_FATAL_ERR),
IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_CT_KILL),
IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_RF_KILL),
IWL_CAUSE(CSR_MSIX_HW_INT_MASK_AD, MSIX_HW_INT_CAUSES_REG_PERIODIC),
@@ -1263,6 +1264,7 @@ static void _iwl_trans_pcie_stop_device(struct iwl_trans *trans)
if (test_and_clear_bit(STATUS_DEVICE_ENABLED, &trans->status)) {
IWL_DEBUG_INFO(trans,
"DEVICE_ENABLED bit was set and is now cleared\n");
+ iwl_pcie_synchronize_irqs(trans);
iwl_pcie_rx_napi_sync(trans);
iwl_pcie_tx_stop(trans);
iwl_pcie_rx_stop(trans);
@@ -2112,8 +2114,11 @@ static void iwl_trans_pcie_removal_wk(struct work_struct *wk)
pci_lock_rescan_remove();
pci_dev_put(pdev);
pci_stop_and_remove_bus_device(pdev);
- if (removal->rescan)
- pci_rescan_bus(bus->parent);
+ if (removal->rescan && bus) {
+ if (bus->parent)
+ bus = bus->parent;
+ pci_rescan_bus(bus);
+ }
pci_unlock_rescan_remove();
kfree(removal);
@@ -2173,6 +2178,9 @@ bool __iwl_trans_pcie_grab_nic_access(struct iwl_trans *trans)
CSR_GP_CNTRL_REG_FLAG_GOING_TO_SLEEP;
u32 poll = CSR_GP_CNTRL_REG_VAL_MAC_ACCESS_EN;
+ if (test_bit(STATUS_TRANS_DEAD, &trans->status))
+ return false;
+
spin_lock(&trans_pcie->reg_lock);
if (trans_pcie->cmd_hold_nic_awake)
@@ -2285,6 +2293,8 @@ out:
static int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr,
void *buf, int dwords)
{
+#define IWL_MAX_HW_ERRS 5
+ unsigned int num_consec_hw_errors = 0;
int offs = 0;
u32 *vals = buf;
@@ -2300,6 +2310,17 @@ static int iwl_trans_pcie_read_mem(struct iwl_trans *trans, u32 addr,
while (offs < dwords) {
vals[offs] = iwl_read32(trans,
HBUS_TARG_MEM_RDAT);
+
+ if (iwl_trans_is_hw_error_value(vals[offs]))
+ num_consec_hw_errors++;
+ else
+ num_consec_hw_errors = 0;
+
+ if (num_consec_hw_errors >= IWL_MAX_HW_ERRS) {
+ iwl_trans_release_nic_access(trans);
+ return -EIO;
+ }
+
offs++;
if (time_after(jiffies, end)) {
@@ -2712,11 +2733,9 @@ static ssize_t iwl_dbgfs_rx_queue_read(struct file *file,
pos += scnprintf(buf + pos, bufsz - pos, "\tfree_count: %u\n",
rxq->free_count);
if (rxq->rb_stts) {
- u32 r = __le16_to_cpu(iwl_get_closed_rb_stts(trans,
- rxq));
+ u32 r = iwl_get_closed_rb_stts(trans, rxq);
pos += scnprintf(buf + pos, bufsz - pos,
- "\tclosed_rb_num: %u\n",
- r & 0x0FFF);
+ "\tclosed_rb_num: %u\n", r);
} else {
pos += scnprintf(buf + pos, bufsz - pos,
"\tclosed_rb_num: Not Allocated\n");
@@ -3089,7 +3108,7 @@ static u32 iwl_trans_pcie_dump_rbs(struct iwl_trans *trans,
spin_lock(&rxq->lock);
- r = le16_to_cpu(iwl_get_closed_rb_stts(trans, rxq)) & 0x0FFF;
+ r = iwl_get_closed_rb_stts(trans, rxq);
for (i = rxq->read, j = 0;
i != r && j < allocated_rb_nums;
@@ -3385,9 +3404,7 @@ iwl_trans_pcie_dump_data(struct iwl_trans *trans,
/* Dump RBs is supported only for pre-9000 devices (1 queue) */
struct iwl_rxq *rxq = &trans_pcie->rxq[0];
/* RBs */
- num_rbs =
- le16_to_cpu(iwl_get_closed_rb_stts(trans, rxq))
- & 0x0FFF;
+ num_rbs = iwl_get_closed_rb_stts(trans, rxq);
num_rbs = (num_rbs - rxq->read) & RX_QUEUE_MASK;
len += num_rbs * (sizeof(*data) +
sizeof(struct iwl_fw_error_dump_rb) +
@@ -3597,10 +3614,19 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev,
int ret, addr_size;
const struct iwl_trans_ops *ops = &trans_ops_pcie_gen2;
void __iomem * const *table;
+ u32 bar0;
if (!cfg_trans->gen2)
ops = &trans_ops_pcie;
+ /* reassign our BAR 0 if invalid due to possible runtime PM races */
+ pci_read_config_dword(pdev, PCI_BASE_ADDRESS_0, &bar0);
+ if (bar0 == PCI_BASE_ADDRESS_MEM_TYPE_64) {
+ ret = pci_assign_resource(pdev, 0);
+ if (ret)
+ return ERR_PTR(ret);
+ }
+
ret = pcim_enable_device(pdev);
if (ret)
return ERR_PTR(ret);
diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.c b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
index 340240b8954f..ca74b1b63cac 100644
--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.c
+++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.c
@@ -1575,7 +1575,7 @@ void iwl_txq_progress(struct iwl_txq *txq)
/* Frees buffers until index _not_ inclusive */
void iwl_txq_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
- struct sk_buff_head *skbs)
+ struct sk_buff_head *skbs, bool is_flush)
{
struct iwl_txq *txq = trans->txqs.txq[txq_id];
int tfd_num, read_ptr, last_to_free;
@@ -1650,9 +1650,11 @@ void iwl_txq_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
if (iwl_txq_space(trans, txq) > txq->low_mark &&
test_bit(txq_id, trans->txqs.queue_stopped)) {
struct sk_buff_head overflow_skbs;
+ struct sk_buff *skb;
__skb_queue_head_init(&overflow_skbs);
- skb_queue_splice_init(&txq->overflow_q, &overflow_skbs);
+ skb_queue_splice_init(&txq->overflow_q,
+ is_flush ? skbs : &overflow_skbs);
/*
* We are going to transmit from the overflow queue.
@@ -1672,8 +1674,7 @@ void iwl_txq_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
*/
spin_unlock_bh(&txq->lock);
- while (!skb_queue_empty(&overflow_skbs)) {
- struct sk_buff *skb = __skb_dequeue(&overflow_skbs);
+ while ((skb = __skb_dequeue(&overflow_skbs))) {
struct iwl_device_tx_cmd *dev_cmd_ptr;
dev_cmd_ptr = *(void **)((u8 *)skb->cb +
diff --git a/drivers/net/wireless/intel/iwlwifi/queue/tx.h b/drivers/net/wireless/intel/iwlwifi/queue/tx.h
index b7d3808588bf..124b29aac4a1 100644
--- a/drivers/net/wireless/intel/iwlwifi/queue/tx.h
+++ b/drivers/net/wireless/intel/iwlwifi/queue/tx.h
@@ -71,7 +71,8 @@ static inline void iwl_txq_stop(struct iwl_trans *trans, struct iwl_txq *txq)
/**
* iwl_txq_inc_wrap - increment queue index, wrap back to beginning
- * @index -- current index
+ * @trans: the transport (for configuration data)
+ * @index: current index
*/
static inline int iwl_txq_inc_wrap(struct iwl_trans *trans, int index)
{
@@ -81,7 +82,8 @@ static inline int iwl_txq_inc_wrap(struct iwl_trans *trans, int index)
/**
* iwl_txq_dec_wrap - decrement queue index, wrap back to end
- * @index -- current index
+ * @trans: the transport (for configuration data)
+ * @index: current index
*/
static inline int iwl_txq_dec_wrap(struct iwl_trans *trans, int index)
{
@@ -179,7 +181,7 @@ void iwl_txq_gen1_update_byte_cnt_tbl(struct iwl_trans *trans,
struct iwl_txq *txq, u16 byte_cnt,
int num_tbs);
void iwl_txq_reclaim(struct iwl_trans *trans, int txq_id, int ssn,
- struct sk_buff_head *skbs);
+ struct sk_buff_head *skbs, bool is_flush);
void iwl_txq_set_q_ptrs(struct iwl_trans *trans, int txq_id, int ptr);
void iwl_trans_txq_freeze_timer(struct iwl_trans *trans, unsigned long txqs,
bool freeze);
diff --git a/drivers/net/wireless/intersil/hostap/hostap.h b/drivers/net/wireless/intersil/hostap/hostap.h
index c17ab6dbbb53..552ae33d7875 100644
--- a/drivers/net/wireless/intersil/hostap/hostap.h
+++ b/drivers/net/wireless/intersil/hostap/hostap.h
@@ -92,7 +92,6 @@ void hostap_info_process(local_info_t *local, struct sk_buff *skb);
extern const struct iw_handler_def hostap_iw_handler_def;
extern const struct ethtool_ops prism2_ethtool_ops;
-int hostap_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd);
int hostap_siocdevprivate(struct net_device *dev, struct ifreq *ifr,
void __user *data, int cmd);
diff --git a/drivers/net/wireless/intersil/hostap/hostap_download.c b/drivers/net/wireless/intersil/hostap/hostap_download.c
index 3672291ced5c..5e5bada28b5b 100644
--- a/drivers/net/wireless/intersil/hostap/hostap_download.c
+++ b/drivers/net/wireless/intersil/hostap/hostap_download.c
@@ -732,8 +732,7 @@ static int prism2_download(local_info_t *local,
goto out;
}
- dl = kzalloc(sizeof(*dl) + param->num_areas *
- sizeof(struct prism2_download_data_area), GFP_KERNEL);
+ dl = kzalloc(struct_size(dl, data, param->num_areas), GFP_KERNEL);
if (dl == NULL) {
ret = -ENOMEM;
goto out;
diff --git a/drivers/net/wireless/intersil/hostap/hostap_ioctl.c b/drivers/net/wireless/intersil/hostap/hostap_ioctl.c
index b4adfc190ae8..26162f92e3c3 100644
--- a/drivers/net/wireless/intersil/hostap/hostap_ioctl.c
+++ b/drivers/net/wireless/intersil/hostap/hostap_ioctl.c
@@ -2316,21 +2316,6 @@ static const struct iw_priv_args prism2_priv[] = {
};
-static int prism2_ioctl_priv_inquire(struct net_device *dev, int *i)
-{
- struct hostap_interface *iface;
- local_info_t *local;
-
- iface = netdev_priv(dev);
- local = iface->local;
-
- if (local->func->cmd(dev, HFA384X_CMDCODE_INQUIRE, *i, NULL, NULL))
- return -EOPNOTSUPP;
-
- return 0;
-}
-
-
static int prism2_ioctl_priv_prism2_param(struct net_device *dev,
struct iw_request_info *info,
union iwreq_data *uwrq, char *extra)
@@ -2910,146 +2895,6 @@ static int prism2_ioctl_priv_writemif(struct net_device *dev,
}
-static int prism2_ioctl_priv_monitor(struct net_device *dev, int *i)
-{
- struct hostap_interface *iface;
- local_info_t *local;
- int ret = 0;
- union iwreq_data wrqu;
-
- iface = netdev_priv(dev);
- local = iface->local;
-
- printk(KERN_DEBUG "%s: process %d (%s) used deprecated iwpriv monitor "
- "- update software to use iwconfig mode monitor\n",
- dev->name, task_pid_nr(current), current->comm);
-
- /* Backward compatibility code - this can be removed at some point */
-
- if (*i == 0) {
- /* Disable monitor mode - old mode was not saved, so go to
- * Master mode */
- wrqu.mode = IW_MODE_MASTER;
- ret = prism2_ioctl_siwmode(dev, NULL, &wrqu, NULL);
- } else if (*i == 1) {
- /* netlink socket mode is not supported anymore since it did
- * not separate different devices from each other and was not
- * best method for delivering large amount of packets to
- * user space */
- ret = -EOPNOTSUPP;
- } else if (*i == 2 || *i == 3) {
- switch (*i) {
- case 2:
- local->monitor_type = PRISM2_MONITOR_80211;
- break;
- case 3:
- local->monitor_type = PRISM2_MONITOR_PRISM;
- break;
- }
- wrqu.mode = IW_MODE_MONITOR;
- ret = prism2_ioctl_siwmode(dev, NULL, &wrqu, NULL);
- hostap_monitor_mode_enable(local);
- } else
- ret = -EINVAL;
-
- return ret;
-}
-
-
-static int prism2_ioctl_priv_reset(struct net_device *dev, int *i)
-{
- struct hostap_interface *iface;
- local_info_t *local;
-
- iface = netdev_priv(dev);
- local = iface->local;
-
- printk(KERN_DEBUG "%s: manual reset request(%d)\n", dev->name, *i);
- switch (*i) {
- case 0:
- /* Disable and enable card */
- local->func->hw_shutdown(dev, 1);
- local->func->hw_config(dev, 0);
- break;
-
- case 1:
- /* COR sreset */
- local->func->hw_reset(dev);
- break;
-
- case 2:
- /* Disable and enable port 0 */
- local->func->reset_port(dev);
- break;
-
- case 3:
- prism2_sta_deauth(local, WLAN_REASON_DEAUTH_LEAVING);
- if (local->func->cmd(dev, HFA384X_CMDCODE_DISABLE, 0, NULL,
- NULL))
- return -EINVAL;
- break;
-
- case 4:
- if (local->func->cmd(dev, HFA384X_CMDCODE_ENABLE, 0, NULL,
- NULL))
- return -EINVAL;
- break;
-
- default:
- printk(KERN_DEBUG "Unknown reset request %d\n", *i);
- return -EOPNOTSUPP;
- }
-
- return 0;
-}
-
-
-static int prism2_ioctl_priv_set_rid_word(struct net_device *dev, int *i)
-{
- int rid = *i;
- int value = *(i + 1);
-
- printk(KERN_DEBUG "%s: Set RID[0x%X] = %d\n", dev->name, rid, value);
-
- if (hostap_set_word(dev, rid, value))
- return -EINVAL;
-
- return 0;
-}
-
-
-#ifndef PRISM2_NO_KERNEL_IEEE80211_MGMT
-static int ap_mac_cmd_ioctl(local_info_t *local, int *cmd)
-{
- int ret = 0;
-
- switch (*cmd) {
- case AP_MAC_CMD_POLICY_OPEN:
- local->ap->mac_restrictions.policy = MAC_POLICY_OPEN;
- break;
- case AP_MAC_CMD_POLICY_ALLOW:
- local->ap->mac_restrictions.policy = MAC_POLICY_ALLOW;
- break;
- case AP_MAC_CMD_POLICY_DENY:
- local->ap->mac_restrictions.policy = MAC_POLICY_DENY;
- break;
- case AP_MAC_CMD_FLUSH:
- ap_control_flush_macs(&local->ap->mac_restrictions);
- break;
- case AP_MAC_CMD_KICKALL:
- ap_control_kickall(local->ap);
- hostap_deauth_all_stas(local->dev, local->ap, 0);
- break;
- default:
- ret = -EOPNOTSUPP;
- break;
- }
-
- return ret;
-}
-#endif /* PRISM2_NO_KERNEL_IEEE80211_MGMT */
-
-
#ifdef PRISM2_DOWNLOAD_SUPPORT
static int prism2_ioctl_priv_download(local_info_t *local, struct iw_point *p)
{
@@ -3963,79 +3808,6 @@ const struct iw_handler_def hostap_iw_handler_def =
.get_wireless_stats = hostap_get_wireless_stats,
};
-/* Private ioctls (iwpriv) that have not yet been converted
- * into new wireless extensions API */
-int hostap_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
-{
- struct iwreq *wrq = (struct iwreq *) ifr;
- struct hostap_interface *iface;
- local_info_t *local;
- int ret = 0;
-
- iface = netdev_priv(dev);
- local = iface->local;
-
- switch (cmd) {
- case PRISM2_IOCTL_INQUIRE:
- if (!capable(CAP_NET_ADMIN)) ret = -EPERM;
- else ret = prism2_ioctl_priv_inquire(dev, (int *) wrq->u.name);
- break;
-
- case PRISM2_IOCTL_MONITOR:
- if (!capable(CAP_NET_ADMIN)) ret = -EPERM;
- else ret = prism2_ioctl_priv_monitor(dev, (int *) wrq->u.name);
- break;
-
- case PRISM2_IOCTL_RESET:
- if (!capable(CAP_NET_ADMIN)) ret = -EPERM;
- else ret = prism2_ioctl_priv_reset(dev, (int *) wrq->u.name);
- break;
-
- case PRISM2_IOCTL_WDS_ADD:
- if (!capable(CAP_NET_ADMIN)) ret = -EPERM;
- else ret = prism2_wds_add(local, wrq->u.ap_addr.sa_data, 1);
- break;
-
- case PRISM2_IOCTL_WDS_DEL:
- if (!capable(CAP_NET_ADMIN)) ret = -EPERM;
- else ret = prism2_wds_del(local, wrq->u.ap_addr.sa_data, 1, 0);
- break;
-
- case PRISM2_IOCTL_SET_RID_WORD:
- if (!capable(CAP_NET_ADMIN)) ret = -EPERM;
- else ret = prism2_ioctl_priv_set_rid_word(dev,
- (int *) wrq->u.name);
- break;
-
-#ifndef PRISM2_NO_KERNEL_IEEE80211_MGMT
- case PRISM2_IOCTL_MACCMD:
- if (!capable(CAP_NET_ADMIN)) ret = -EPERM;
- else ret = ap_mac_cmd_ioctl(local, (int *) wrq->u.name);
- break;
-
- case PRISM2_IOCTL_ADDMAC:
- if (!capable(CAP_NET_ADMIN)) ret = -EPERM;
- else ret = ap_control_add_mac(&local->ap->mac_restrictions,
- wrq->u.ap_addr.sa_data);
- break;
- case PRISM2_IOCTL_DELMAC:
- if (!capable(CAP_NET_ADMIN)) ret = -EPERM;
- else ret = ap_control_del_mac(&local->ap->mac_restrictions,
- wrq->u.ap_addr.sa_data);
- break;
- case PRISM2_IOCTL_KICKMAC:
- if (!capable(CAP_NET_ADMIN)) ret = -EPERM;
- else ret = ap_control_kick_mac(local->ap, local->dev,
- wrq->u.ap_addr.sa_data);
- break;
-#endif /* PRISM2_NO_KERNEL_IEEE80211_MGMT */
- default:
- ret = -EOPNOTSUPP;
- break;
- }
-
- return ret;
-}
/* Private ioctls that are not used with iwpriv;
* in SIOCDEVPRIVATE range */
diff --git a/drivers/net/wireless/intersil/hostap/hostap_main.c b/drivers/net/wireless/intersil/hostap/hostap_main.c
index 787f685e70b4..bf86ac26c2ac 100644
--- a/drivers/net/wireless/intersil/hostap/hostap_main.c
+++ b/drivers/net/wireless/intersil/hostap/hostap_main.c
@@ -796,7 +796,6 @@ static const struct net_device_ops hostap_netdev_ops = {
.ndo_open = prism2_open,
.ndo_stop = prism2_close,
- .ndo_do_ioctl = hostap_ioctl,
.ndo_siocdevprivate = hostap_siocdevprivate,
.ndo_set_mac_address = prism2_set_mac_address,
.ndo_set_rx_mode = hostap_set_multicast_list,
@@ -809,7 +808,6 @@ static const struct net_device_ops hostap_mgmt_netdev_ops = {
.ndo_open = prism2_open,
.ndo_stop = prism2_close,
- .ndo_do_ioctl = hostap_ioctl,
.ndo_siocdevprivate = hostap_siocdevprivate,
.ndo_set_mac_address = prism2_set_mac_address,
.ndo_set_rx_mode = hostap_set_multicast_list,
@@ -822,7 +820,6 @@ static const struct net_device_ops hostap_master_ops = {
.ndo_open = prism2_open,
.ndo_stop = prism2_close,
- .ndo_do_ioctl = hostap_ioctl,
.ndo_siocdevprivate = hostap_siocdevprivate,
.ndo_set_mac_address = prism2_set_mac_address,
.ndo_set_rx_mode = hostap_set_multicast_list,
diff --git a/drivers/net/wireless/intersil/hostap/hostap_wlan.h b/drivers/net/wireless/intersil/hostap/hostap_wlan.h
index c25cd21d18bd..f71c0545c0be 100644
--- a/drivers/net/wireless/intersil/hostap/hostap_wlan.h
+++ b/drivers/net/wireless/intersil/hostap/hostap_wlan.h
@@ -617,7 +617,7 @@ struct prism2_download_data {
u32 addr; /* wlan card address */
u32 len;
u8 *data; /* allocated data */
- } data[];
+ } data[] __counted_by(num_areas);
};
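
Editor's aside, not part of the patch: the two hostap changes above belong together, struct_size() from <linux/overflow.h> replaces the open-coded "sizeof(*dl) + n * sizeof(elem)" allocation, and __counted_by() tells FORTIFY_SOURCE/UBSAN bounds checking which member holds the flexible array's element count. A sketch of the combined idiom with invented names (struct demo_dl, demo_dl_alloc):

#include <linux/types.h>
#include <linux/overflow.h>
#include <linux/slab.h>

struct demo_dl {
	unsigned int num_areas;
	struct {
		u32 addr;
		u32 len;
	} data[] __counted_by(num_areas);
};

static struct demo_dl *demo_dl_alloc(unsigned int num_areas)
{
	struct demo_dl *dl;

	/* struct_size() saturates to SIZE_MAX on overflow, so a huge
	 * num_areas makes kzalloc() fail instead of under-allocating */
	dl = kzalloc(struct_size(dl, data, num_areas), GFP_KERNEL);
	if (!dl)
		return NULL;

	/* set the counter before data[] is ever indexed */
	dl->num_areas = num_areas;
	return dl;
}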
diff --git a/drivers/net/wireless/intersil/p54/p54.h b/drivers/net/wireless/intersil/p54/p54.h
index 3356ea708d81..522656de4159 100644
--- a/drivers/net/wireless/intersil/p54/p54.h
+++ b/drivers/net/wireless/intersil/p54/p54.h
@@ -126,7 +126,7 @@ struct p54_cal_database {
size_t entry_size;
size_t offset;
size_t len;
- u8 data[];
+ u8 data[] __counted_by(len);
};
#define EEPROM_READBACK_LEN 0x3fc
diff --git a/drivers/net/wireless/legacy/ray_cs.c b/drivers/net/wireless/legacy/ray_cs.c
index 8ace797ce951..c95a79e01cd0 100644
--- a/drivers/net/wireless/legacy/ray_cs.c
+++ b/drivers/net/wireless/legacy/ray_cs.c
@@ -348,7 +348,7 @@ static int ray_config(struct pcmcia_device *link)
{
int ret = 0;
int i;
- struct net_device *dev = (struct net_device *)link->priv;
+ struct net_device *dev = link->priv;
ray_dev_t *local = netdev_priv(dev);
dev_dbg(&link->dev, "ray_config\n");
@@ -1830,7 +1830,7 @@ static void set_multicast_list(struct net_device *dev)
=============================================================================*/
static irqreturn_t ray_interrupt(int irq, void *dev_id)
{
- struct net_device *dev = (struct net_device *)dev_id;
+ struct net_device *dev = dev_id;
struct pcmcia_device *link;
ray_dev_t *local;
struct ccs __iomem *pccs;
@@ -2567,7 +2567,7 @@ static int ray_cs_proc_show(struct seq_file *m, void *v)
link = this_device;
if (!link)
return 0;
- dev = (struct net_device *)link->priv;
+ dev = link->priv;
if (!dev)
return 0;
local = netdev_priv(dev);
diff --git a/drivers/net/wireless/marvell/mwifiex/11h.c b/drivers/net/wireless/marvell/mwifiex/11h.c
index 2ea03725f188..da211372a481 100644
--- a/drivers/net/wireless/marvell/mwifiex/11h.c
+++ b/drivers/net/wireless/marvell/mwifiex/11h.c
@@ -287,7 +287,7 @@ void mwifiex_dfs_chan_sw_work_queue(struct work_struct *work)
mwifiex_dbg(priv->adapter, MSG,
"indicating channel switch completion to kernel\n");
- mutex_lock(&priv->wdev.mtx);
+ wiphy_lock(priv->wdev.wiphy);
cfg80211_ch_switch_notify(priv->netdev, &priv->dfs_chandef, 0, 0);
- mutex_unlock(&priv->wdev.mtx);
+ wiphy_unlock(priv->wdev.wiphy);
}
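
Editor's aside, not part of the patch: the mwifiex hunk above switches from the old per-wdev mutex to the wiphy-wide lock that cfg80211 notifiers now expect to be held. A minimal sketch of the locking pattern; demo_notify_chan_switch() is an invented helper and the four-argument cfg80211_ch_switch_notify() call simply mirrors the call in the hunk:

#include <net/cfg80211.h>

static void demo_notify_chan_switch(struct wireless_dev *wdev,
				    struct cfg80211_chan_def *chandef)
{
	/* hold the wiphy-wide lock around the cfg80211 notifier */
	wiphy_lock(wdev->wiphy);
	cfg80211_ch_switch_notify(wdev->netdev, chandef, 0, 0);
	wiphy_unlock(wdev->wiphy);
}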
diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
index ba4e29713a8c..7a15ea8072e6 100644
--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
@@ -1835,10 +1835,11 @@ static int mwifiex_cfg80211_set_cqm_rssi_config(struct wiphy *wiphy,
*/
static int mwifiex_cfg80211_change_beacon(struct wiphy *wiphy,
struct net_device *dev,
- struct cfg80211_beacon_data *data)
+ struct cfg80211_ap_update *params)
{
struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
struct mwifiex_adapter *adapter = priv->adapter;
+ struct cfg80211_beacon_data *data = &params->beacon;
mwifiex_cancel_scan(adapter);
diff --git a/drivers/net/wireless/marvell/mwifiex/main.h b/drivers/net/wireless/marvell/mwifiex/main.h
index 7bdec6c62248..d263eae6078c 100644
--- a/drivers/net/wireless/marvell/mwifiex/main.h
+++ b/drivers/net/wireless/marvell/mwifiex/main.h
@@ -834,12 +834,12 @@ struct mwifiex_if_ops {
void (*cleanup_mpa_buf) (struct mwifiex_adapter *);
int (*cmdrsp_complete) (struct mwifiex_adapter *, struct sk_buff *);
int (*event_complete) (struct mwifiex_adapter *, struct sk_buff *);
- int (*init_fw_port) (struct mwifiex_adapter *);
+ void (*init_fw_port)(struct mwifiex_adapter *adapter);
int (*dnld_fw) (struct mwifiex_adapter *, struct mwifiex_fw_image *);
void (*card_reset) (struct mwifiex_adapter *);
int (*reg_dump)(struct mwifiex_adapter *, char *);
void (*device_dump)(struct mwifiex_adapter *);
- int (*clean_pcie_ring) (struct mwifiex_adapter *adapter);
+ void (*clean_pcie_ring)(struct mwifiex_adapter *adapter);
void (*iface_work)(struct work_struct *work);
void (*submit_rem_rx_urbs)(struct mwifiex_adapter *adapter);
void (*deaggr_pkt)(struct mwifiex_adapter *, struct sk_buff *);
diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
index 6697132ecc97..5f997becdbaa 100644
--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
+++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
@@ -222,12 +222,19 @@ static void mwifiex_unmap_pci_memory(struct mwifiex_adapter *adapter,
/*
* This function writes data into PCIE card register.
*/
-static int mwifiex_write_reg(struct mwifiex_adapter *adapter, int reg, u32 data)
+static inline void
+mwifiex_write_reg(struct mwifiex_adapter *adapter, int reg, u32 data)
{
struct pcie_service_card *card = adapter->card;
iowrite32(data, card->pci_mmap1 + reg);
+}
+/* Non-void wrapper needed for read_poll_timeout(). */
+static inline int
+mwifiex_write_reg_rpt(struct mwifiex_adapter *adapter, int reg, u32 data)
+{
+ mwifiex_write_reg(adapter, reg, data);
return 0;
}
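
Editor's aside, not part of the patch: read_poll_timeout() from <linux/iopoll.h> expands to "(val) = op(args...)", so the polled operation must return something assignable; that is why the patch keeps a thin int-returning wrapper around the now-void register write. A sketch under that assumption, with invented names (demo_kick_doorbell, demo_wait_ready):

#include <linux/iopoll.h>
#include <linux/io.h>

static int demo_kick_doorbell(void __iomem *reg, u32 val)
{
	writel(val, reg);
	return 0;	/* stored in "ret" below, otherwise unused */
}

static int demo_wait_ready(void __iomem *doorbell, u32 cmd, bool *done)
{
	int ret;

	/* re-issue the doorbell write every 500us until *done is set,
	 * giving up after 100ms; returns 0 or -ETIMEDOUT */
	return read_poll_timeout(demo_kick_doorbell, ret, READ_ONCE(*done),
				 500, 100000, false, doorbell, cmd);
}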
@@ -658,12 +665,12 @@ static int mwifiex_pm_wakeup_card(struct mwifiex_adapter *adapter)
* appears to ignore or miss our wakeup request, so we continue trying
* until we receive an interrupt from the card.
*/
- if (read_poll_timeout(mwifiex_write_reg, retval,
+ if (read_poll_timeout(mwifiex_write_reg_rpt, retval,
READ_ONCE(adapter->int_status) != 0,
500, 500 * N_WAKEUP_TRIES_SHORT_INTERVAL,
false,
adapter, reg->fw_status, FIRMWARE_READY_PCIE)) {
- if (read_poll_timeout(mwifiex_write_reg, retval,
+ if (read_poll_timeout(mwifiex_write_reg_rpt, retval,
READ_ONCE(adapter->int_status) != 0,
10000, 10000 * N_WAKEUP_TRIES_LONG_INTERVAL,
false,
@@ -703,24 +710,12 @@ static int mwifiex_pm_wakeup_card_complete(struct mwifiex_adapter *adapter)
* The host interrupt mask is read, the disable bit is reset and
* written back to the card host interrupt mask register.
*/
-static int mwifiex_pcie_disable_host_int(struct mwifiex_adapter *adapter)
+static void mwifiex_pcie_disable_host_int(struct mwifiex_adapter *adapter)
{
- if (mwifiex_pcie_ok_to_access_hw(adapter)) {
- if (mwifiex_write_reg(adapter, PCIE_HOST_INT_MASK,
- 0x00000000)) {
- mwifiex_dbg(adapter, ERROR,
- "Disable host interrupt failed\n");
- return -1;
- }
- }
+ if (mwifiex_pcie_ok_to_access_hw(adapter))
+ mwifiex_write_reg(adapter, PCIE_HOST_INT_MASK, 0x00000000);
atomic_set(&adapter->tx_hw_pending, 0);
- return 0;
-}
-
-static void mwifiex_pcie_disable_host_int_noerr(struct mwifiex_adapter *adapter)
-{
- WARN_ON(mwifiex_pcie_disable_host_int(adapter));
}
/*
@@ -731,15 +726,9 @@ static void mwifiex_pcie_disable_host_int_noerr(struct mwifiex_adapter *adapter)
*/
static int mwifiex_pcie_enable_host_int(struct mwifiex_adapter *adapter)
{
- if (mwifiex_pcie_ok_to_access_hw(adapter)) {
+ if (mwifiex_pcie_ok_to_access_hw(adapter))
/* Simply write the mask to the register */
- if (mwifiex_write_reg(adapter, PCIE_HOST_INT_MASK,
- HOST_INTR_MASK)) {
- mwifiex_dbg(adapter, ERROR,
- "Enable host interrupt failed\n");
- return -1;
- }
- }
+ mwifiex_write_reg(adapter, PCIE_HOST_INT_MASK, HOST_INTR_MASK);
return 0;
}
@@ -1303,7 +1292,7 @@ static int mwifiex_pcie_delete_sleep_cookie_buf(struct mwifiex_adapter *adapter)
* This function defined as handler is also called while cleaning TXRX
* during disconnect/ bss stop.
*/
-static int mwifiex_clean_pcie_ring_buf(struct mwifiex_adapter *adapter)
+static void mwifiex_clean_pcie_ring_buf(struct mwifiex_adapter *adapter)
{
struct pcie_service_card *card = adapter->card;
@@ -1312,14 +1301,9 @@ static int mwifiex_clean_pcie_ring_buf(struct mwifiex_adapter *adapter)
/* write pointer already set at last send
* send dnld-rdy intr again, wait for completion.
*/
- if (mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT,
- CPU_INTR_DNLD_RDY)) {
- mwifiex_dbg(adapter, ERROR,
- "failed to assert dnld-rdy interrupt.\n");
- return -1;
- }
+ mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT,
+ CPU_INTR_DNLD_RDY);
}
- return 0;
}
/*
@@ -1429,7 +1413,6 @@ mwifiex_pcie_send_data(struct mwifiex_adapter *adapter, struct sk_buff *skb,
struct pcie_service_card *card = adapter->card;
const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
u32 wrindx, num_tx_buffs, rx_val;
- int ret;
dma_addr_t buf_pa;
struct mwifiex_pcie_buf_desc *desc = NULL;
struct mwifiex_pfu_buf_desc *desc2 = NULL;
@@ -1498,13 +1481,8 @@ mwifiex_pcie_send_data(struct mwifiex_adapter *adapter, struct sk_buff *skb,
rx_val = card->rxbd_rdptr & reg->rx_wrap_mask;
/* Write the TX ring write pointer in to reg->tx_wrptr */
- if (mwifiex_write_reg(adapter, reg->tx_wrptr,
- card->txbd_wrptr | rx_val)) {
- mwifiex_dbg(adapter, ERROR,
- "SEND DATA: failed to write reg->tx_wrptr\n");
- ret = -1;
- goto done_unmap;
- }
+ mwifiex_write_reg(adapter, reg->tx_wrptr,
+ card->txbd_wrptr | rx_val);
/* The firmware (latest version 15.68.19.p21) of the 88W8897 PCIe+USB card
* seems to crash randomly after setting the TX ring write pointer when
@@ -1521,13 +1499,8 @@ mwifiex_pcie_send_data(struct mwifiex_adapter *adapter, struct sk_buff *skb,
adapter->data_sent = false;
} else {
/* Send the TX ready interrupt */
- if (mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT,
- CPU_INTR_DNLD_RDY)) {
- mwifiex_dbg(adapter, ERROR,
- "SEND DATA: failed to assert dnld-rdy interrupt.\n");
- ret = -1;
- goto done_unmap;
- }
+ mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT,
+ CPU_INTR_DNLD_RDY);
}
mwifiex_dbg(adapter, DATA,
"info: SEND DATA: Updated <Rd: %#x, Wr:\t"
@@ -1538,24 +1511,12 @@ mwifiex_pcie_send_data(struct mwifiex_adapter *adapter, struct sk_buff *skb,
"info: TX Ring full, can't send packets to fw\n");
adapter->data_sent = true;
/* Send the TX ready interrupt */
- if (mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT,
- CPU_INTR_DNLD_RDY))
- mwifiex_dbg(adapter, ERROR,
- "SEND DATA: failed to assert door-bell intr\n");
+ mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT,
+ CPU_INTR_DNLD_RDY);
return -EBUSY;
}
return -EINPROGRESS;
-done_unmap:
- mwifiex_unmap_pci_memory(adapter, skb, DMA_TO_DEVICE);
- card->tx_buf_list[wrindx] = NULL;
- atomic_dec(&adapter->tx_hw_pending);
- if (reg->pfu_enabled)
- memset(desc2, 0, sizeof(*desc2));
- else
- memset(desc, 0, sizeof(*desc));
-
- return ret;
}
/*
@@ -1675,13 +1636,8 @@ static int mwifiex_pcie_process_recv_data(struct mwifiex_adapter *adapter)
tx_val = card->txbd_wrptr & reg->tx_wrap_mask;
/* Write the RX ring read pointer in to reg->rx_rdptr */
- if (mwifiex_write_reg(adapter, reg->rx_rdptr,
- card->rxbd_rdptr | tx_val)) {
- mwifiex_dbg(adapter, DATA,
- "RECV DATA: failed to write reg->rx_rdptr\n");
- ret = -1;
- goto done;
- }
+ mwifiex_write_reg(adapter, reg->rx_rdptr,
+ card->rxbd_rdptr | tx_val);
/* Read the RX ring Write pointer set by firmware */
if (mwifiex_read_reg(adapter, reg->rx_wrptr, &wrptr)) {
@@ -1724,43 +1680,18 @@ mwifiex_pcie_send_boot_cmd(struct mwifiex_adapter *adapter, struct sk_buff *skb)
/* Write the lower 32bits of the physical address to low command
* address scratch register
*/
- if (mwifiex_write_reg(adapter, reg->cmd_addr_lo, (u32)buf_pa)) {
- mwifiex_dbg(adapter, ERROR,
- "%s: failed to write download command to boot code.\n",
- __func__);
- mwifiex_unmap_pci_memory(adapter, skb, DMA_TO_DEVICE);
- return -1;
- }
+ mwifiex_write_reg(adapter, reg->cmd_addr_lo, (u32)buf_pa);
/* Write the upper 32bits of the physical address to high command
* address scratch register
*/
- if (mwifiex_write_reg(adapter, reg->cmd_addr_hi,
- (u32)((u64)buf_pa >> 32))) {
- mwifiex_dbg(adapter, ERROR,
- "%s: failed to write download command to boot code.\n",
- __func__);
- mwifiex_unmap_pci_memory(adapter, skb, DMA_TO_DEVICE);
- return -1;
- }
+ mwifiex_write_reg(adapter, reg->cmd_addr_hi, (u32)((u64)buf_pa >> 32));
/* Write the command length to cmd_size scratch register */
- if (mwifiex_write_reg(adapter, reg->cmd_size, skb->len)) {
- mwifiex_dbg(adapter, ERROR,
- "%s: failed to write command len to cmd_size scratch reg\n",
- __func__);
- mwifiex_unmap_pci_memory(adapter, skb, DMA_TO_DEVICE);
- return -1;
- }
+ mwifiex_write_reg(adapter, reg->cmd_size, skb->len);
/* Ring the door bell */
- if (mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT,
- CPU_INTR_DOOR_BELL)) {
- mwifiex_dbg(adapter, ERROR,
- "%s: failed to assert door-bell intr\n", __func__);
- mwifiex_unmap_pci_memory(adapter, skb, DMA_TO_DEVICE);
- return -1;
- }
+ mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT, CPU_INTR_DOOR_BELL);
return 0;
}
@@ -1768,20 +1699,14 @@ mwifiex_pcie_send_boot_cmd(struct mwifiex_adapter *adapter, struct sk_buff *skb)
/* This function init rx port in firmware which in turn enables to receive data
* from device before transmitting any packet.
*/
-static int mwifiex_pcie_init_fw_port(struct mwifiex_adapter *adapter)
+static void mwifiex_pcie_init_fw_port(struct mwifiex_adapter *adapter)
{
struct pcie_service_card *card = adapter->card;
const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
int tx_wrap = card->txbd_wrptr & reg->tx_wrap_mask;
/* Write the RX ring read pointer in to reg->rx_rdptr */
- if (mwifiex_write_reg(adapter, reg->rx_rdptr, card->rxbd_rdptr |
- tx_wrap)) {
- mwifiex_dbg(adapter, ERROR,
- "RECV DATA: failed to write reg->rx_rdptr\n");
- return -1;
- }
- return 0;
+ mwifiex_write_reg(adapter, reg->rx_rdptr, card->rxbd_rdptr | tx_wrap);
}
/* This function downloads commands to the device
@@ -1791,7 +1716,6 @@ mwifiex_pcie_send_cmd(struct mwifiex_adapter *adapter, struct sk_buff *skb)
{
struct pcie_service_card *card = adapter->card;
const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
- int ret = 0;
dma_addr_t cmd_buf_pa, cmdrsp_buf_pa;
u8 *payload = (u8 *)skb->data;
@@ -1841,63 +1765,29 @@ mwifiex_pcie_send_cmd(struct mwifiex_adapter *adapter, struct sk_buff *skb)
cmdrsp_buf_pa = MWIFIEX_SKB_DMA_ADDR(card->cmdrsp_buf);
/* Write the lower 32bits of the cmdrsp buffer physical
address */
- if (mwifiex_write_reg(adapter, reg->cmdrsp_addr_lo,
- (u32)cmdrsp_buf_pa)) {
- mwifiex_dbg(adapter, ERROR,
- "Failed to write download cmd to boot code.\n");
- ret = -1;
- goto done;
- }
+ mwifiex_write_reg(adapter, reg->cmdrsp_addr_lo,
+ (u32)cmdrsp_buf_pa);
+
/* Write the upper 32bits of the cmdrsp buffer physical
address */
- if (mwifiex_write_reg(adapter, reg->cmdrsp_addr_hi,
- (u32)((u64)cmdrsp_buf_pa >> 32))) {
- mwifiex_dbg(adapter, ERROR,
- "Failed to write download cmd to boot code.\n");
- ret = -1;
- goto done;
- }
+ mwifiex_write_reg(adapter, reg->cmdrsp_addr_hi,
+ (u32)((u64)cmdrsp_buf_pa >> 32));
}
cmd_buf_pa = MWIFIEX_SKB_DMA_ADDR(card->cmd_buf);
+
/* Write the lower 32bits of the physical address to reg->cmd_addr_lo */
- if (mwifiex_write_reg(adapter, reg->cmd_addr_lo,
- (u32)cmd_buf_pa)) {
- mwifiex_dbg(adapter, ERROR,
- "Failed to write download cmd to boot code.\n");
- ret = -1;
- goto done;
- }
+ mwifiex_write_reg(adapter, reg->cmd_addr_lo, (u32)cmd_buf_pa);
+
/* Write the upper 32bits of the physical address to reg->cmd_addr_hi */
- if (mwifiex_write_reg(adapter, reg->cmd_addr_hi,
- (u32)((u64)cmd_buf_pa >> 32))) {
- mwifiex_dbg(adapter, ERROR,
- "Failed to write download cmd to boot code.\n");
- ret = -1;
- goto done;
- }
+ mwifiex_write_reg(adapter, reg->cmd_addr_hi,
+ (u32)((u64)cmd_buf_pa >> 32));
/* Write the command length to reg->cmd_size */
- if (mwifiex_write_reg(adapter, reg->cmd_size,
- card->cmd_buf->len)) {
- mwifiex_dbg(adapter, ERROR,
- "Failed to write cmd len to reg->cmd_size\n");
- ret = -1;
- goto done;
- }
+ mwifiex_write_reg(adapter, reg->cmd_size, card->cmd_buf->len);
/* Ring the door bell */
- if (mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT,
- CPU_INTR_DOOR_BELL)) {
- mwifiex_dbg(adapter, ERROR,
- "Failed to assert door-bell intr\n");
- ret = -1;
- goto done;
- }
-
-done:
- if (ret)
- adapter->cmd_sent = false;
+ mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT, CPU_INTR_DOOR_BELL);
return 0;
}
@@ -1941,13 +1831,9 @@ static int mwifiex_pcie_process_cmd_complete(struct mwifiex_adapter *adapter)
MWIFIEX_SKB_DMA_ADDR(skb),
MWIFIEX_SLEEP_COOKIE_SIZE,
DMA_FROM_DEVICE);
- if (mwifiex_write_reg(adapter,
- PCIE_CPU_INT_EVENT,
- CPU_INTR_SLEEP_CFM_DONE)) {
- mwifiex_dbg(adapter, ERROR,
- "Write register failed\n");
- return -1;
- }
+ mwifiex_write_reg(adapter,
+ PCIE_CPU_INT_EVENT,
+ CPU_INTR_SLEEP_CFM_DONE);
mwifiex_delay_for_sleep_cookie(adapter,
MWIFIEX_MAX_DELAY_COUNT);
mwifiex_unmap_pci_memory(adapter, skb,
@@ -1980,18 +1866,11 @@ static int mwifiex_pcie_process_cmd_complete(struct mwifiex_adapter *adapter)
/* Clear the cmd-rsp buffer address in scratch registers. This
will prevent firmware from writing to the same response
buffer again. */
- if (mwifiex_write_reg(adapter, reg->cmdrsp_addr_lo, 0)) {
- mwifiex_dbg(adapter, ERROR,
- "cmd_done: failed to clear cmd_rsp_addr_lo\n");
- return -1;
- }
+ mwifiex_write_reg(adapter, reg->cmdrsp_addr_lo, 0);
+
/* Write the upper 32bits of the cmdrsp buffer physical
address */
- if (mwifiex_write_reg(adapter, reg->cmdrsp_addr_hi, 0)) {
- mwifiex_dbg(adapter, ERROR,
- "cmd_done: failed to clear cmd_rsp_addr_hi\n");
- return -1;
- }
+ mwifiex_write_reg(adapter, reg->cmdrsp_addr_hi, 0);
}
return 0;
@@ -2098,12 +1977,8 @@ static int mwifiex_pcie_process_event_ready(struct mwifiex_adapter *adapter)
we need to find a better method of managing these buffers.
*/
} else {
- if (mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT,
- CPU_INTR_EVENT_DONE)) {
- mwifiex_dbg(adapter, ERROR,
- "Write register failed\n");
- return -1;
- }
+ mwifiex_write_reg(adapter, PCIE_CPU_INT_EVENT,
+ CPU_INTR_EVENT_DONE);
}
return 0;
@@ -2117,7 +1992,6 @@ static int mwifiex_pcie_event_complete(struct mwifiex_adapter *adapter,
{
struct pcie_service_card *card = adapter->card;
const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
- int ret = 0;
u32 rdptr = card->evtbd_rdptr & MWIFIEX_EVTBD_MASK;
u32 wrptr;
struct mwifiex_evt_buf_desc *desc;
@@ -2169,18 +2043,11 @@ static int mwifiex_pcie_event_complete(struct mwifiex_adapter *adapter,
card->evtbd_rdptr, wrptr);
/* Write the event ring read pointer in to reg->evt_rdptr */
- if (mwifiex_write_reg(adapter, reg->evt_rdptr,
- card->evtbd_rdptr)) {
- mwifiex_dbg(adapter, ERROR,
- "event_complete: failed to read reg->evt_rdptr\n");
- return -1;
- }
+ mwifiex_write_reg(adapter, reg->evt_rdptr, card->evtbd_rdptr);
mwifiex_dbg(adapter, EVENT,
"info: Check Events Again\n");
- ret = mwifiex_pcie_process_event_ready(adapter);
-
- return ret;
+ return mwifiex_pcie_process_event_ready(adapter);
}
/* Combo firmware image is a combination of
@@ -2313,11 +2180,7 @@ static int mwifiex_prog_fw_w_helper(struct mwifiex_adapter *adapter,
"info: Downloading FW image (%d bytes)\n",
firmware_len);
- if (mwifiex_pcie_disable_host_int(adapter)) {
- mwifiex_dbg(adapter, ERROR,
- "%s: Disabling interrupts failed.\n", __func__);
- return -1;
- }
+ mwifiex_pcie_disable_host_int(adapter);
skb = dev_alloc_skb(MWIFIEX_UPLD_SIZE);
if (!skb) {
@@ -2471,21 +2334,12 @@ mwifiex_check_fw_status(struct mwifiex_adapter *adapter, u32 poll_num)
u32 tries;
/* Mask spurios interrupts */
- if (mwifiex_write_reg(adapter, PCIE_HOST_INT_STATUS_MASK,
- HOST_INTR_MASK)) {
- mwifiex_dbg(adapter, ERROR,
- "Write register failed\n");
- return -1;
- }
+ mwifiex_write_reg(adapter, PCIE_HOST_INT_STATUS_MASK, HOST_INTR_MASK);
mwifiex_dbg(adapter, INFO,
"Setting driver ready signature\n");
- if (mwifiex_write_reg(adapter, reg->drv_rdy,
- FIRMWARE_READY_PCIE)) {
- mwifiex_dbg(adapter, ERROR,
- "Failed to write driver ready signature\n");
- return -1;
- }
+
+ mwifiex_write_reg(adapter, reg->drv_rdy, FIRMWARE_READY_PCIE);
/* Wait for firmware initialization event */
for (tries = 0; tries < poll_num; tries++) {
@@ -2571,12 +2425,7 @@ static void mwifiex_interrupt_status(struct mwifiex_adapter *adapter,
mwifiex_pcie_disable_host_int(adapter);
/* Clear the pending interrupts */
- if (mwifiex_write_reg(adapter, PCIE_HOST_INT_STATUS,
- ~pcie_ireg)) {
- mwifiex_dbg(adapter, ERROR,
- "Write register failed\n");
- return;
- }
+ mwifiex_write_reg(adapter, PCIE_HOST_INT_STATUS, ~pcie_ireg);
}
if (!adapter->pps_uapsd_mode &&
@@ -2671,13 +2520,9 @@ static int mwifiex_process_int_status(struct mwifiex_adapter *adapter)
}
if ((pcie_ireg != 0xFFFFFFFF) && (pcie_ireg)) {
- if (mwifiex_write_reg(adapter,
- PCIE_HOST_INT_STATUS,
- ~pcie_ireg)) {
- mwifiex_dbg(adapter, ERROR,
- "Write register failed\n");
- return -1;
- }
+ mwifiex_write_reg(adapter,
+ PCIE_HOST_INT_STATUS,
+ ~pcie_ireg);
if (!adapter->pps_uapsd_mode &&
adapter->ps_state == PS_STATE_SLEEP) {
adapter->ps_state = PS_STATE_AWAKE;
@@ -2801,7 +2646,7 @@ mwifiex_pcie_reg_dump(struct mwifiex_adapter *adapter, char *drv_buf)
static enum rdwr_status
mwifiex_pcie_rdwr_firmware(struct mwifiex_adapter *adapter, u8 doneflag)
{
- int ret, tries;
+ int tries;
u8 ctrl_data;
u32 fw_status;
struct pcie_service_card *card = adapter->card;
@@ -2810,13 +2655,7 @@ mwifiex_pcie_rdwr_firmware(struct mwifiex_adapter *adapter, u8 doneflag)
if (mwifiex_read_reg(adapter, reg->fw_status, &fw_status))
return RDWR_STATUS_FAILURE;
- ret = mwifiex_write_reg(adapter, reg->fw_dump_ctrl,
- reg->fw_dump_host_ready);
- if (ret) {
- mwifiex_dbg(adapter, ERROR,
- "PCIE write err\n");
- return RDWR_STATUS_FAILURE;
- }
+ mwifiex_write_reg(adapter, reg->fw_dump_ctrl, reg->fw_dump_host_ready);
for (tries = 0; tries < MAX_POLL_TRIES; tries++) {
mwifiex_read_reg_byte(adapter, reg->fw_dump_ctrl, &ctrl_data);
@@ -2827,13 +2666,8 @@ mwifiex_pcie_rdwr_firmware(struct mwifiex_adapter *adapter, u8 doneflag)
if (ctrl_data != reg->fw_dump_host_ready) {
mwifiex_dbg(adapter, WARN,
"The ctrl reg was changed, re-try again!\n");
- ret = mwifiex_write_reg(adapter, reg->fw_dump_ctrl,
- reg->fw_dump_host_ready);
- if (ret) {
- mwifiex_dbg(adapter, ERROR,
- "PCIE write err\n");
- return RDWR_STATUS_FAILURE;
- }
+ mwifiex_write_reg(adapter, reg->fw_dump_ctrl,
+ reg->fw_dump_host_ready);
}
usleep_range(100, 200);
}
@@ -2852,7 +2686,6 @@ static void mwifiex_pcie_fw_dump(struct mwifiex_adapter *adapter)
u8 idx, i, read_reg, doneflag = 0;
enum rdwr_status stat;
u32 memory_size;
- int ret;
if (!card->pcie.can_dump_fw)
return;
@@ -2906,12 +2739,8 @@ static void mwifiex_pcie_fw_dump(struct mwifiex_adapter *adapter)
if (memory_size == 0) {
mwifiex_dbg(adapter, MSG, "Firmware dump Finished!\n");
- ret = mwifiex_write_reg(adapter, creg->fw_dump_ctrl,
- creg->fw_dump_read_done);
- if (ret) {
- mwifiex_dbg(adapter, ERROR, "PCIE write err\n");
- return;
- }
+ mwifiex_write_reg(adapter, creg->fw_dump_ctrl,
+ creg->fw_dump_read_done);
break;
}
@@ -3197,9 +3026,7 @@ static void mwifiex_cleanup_pcie(struct mwifiex_adapter *adapter)
if (fw_status == FIRMWARE_READY_PCIE) {
mwifiex_dbg(adapter, INFO,
"Clearing driver ready signature\n");
- if (mwifiex_write_reg(adapter, reg->drv_rdy, 0x00000000))
- mwifiex_dbg(adapter, ERROR,
- "Failed to write driver not-ready signature\n");
+ mwifiex_write_reg(adapter, reg->drv_rdy, 0x00000000);
}
pci_disable_device(pdev);
@@ -3404,8 +3231,7 @@ static void mwifiex_pcie_down_dev(struct mwifiex_adapter *adapter)
const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
struct pci_dev *pdev = card->dev;
- if (mwifiex_write_reg(adapter, reg->drv_rdy, 0x00000000))
- mwifiex_dbg(adapter, ERROR, "Failed to write driver not-ready signature\n");
+ mwifiex_write_reg(adapter, reg->drv_rdy, 0x00000000);
pci_clear_master(pdev);
@@ -3423,7 +3249,7 @@ static struct mwifiex_if_ops pcie_ops = {
.register_dev = mwifiex_register_dev,
.unregister_dev = mwifiex_unregister_dev,
.enable_int = mwifiex_pcie_enable_host_int,
- .disable_int = mwifiex_pcie_disable_host_int_noerr,
+ .disable_int = mwifiex_pcie_disable_host_int,
.process_int_status = mwifiex_process_int_status,
.host_to_card = mwifiex_pcie_host_to_card,
.wakeup = mwifiex_pm_wakeup_card,
@@ -3449,3 +3275,9 @@ MODULE_AUTHOR("Marvell International Ltd.");
MODULE_DESCRIPTION("Marvell WiFi-Ex PCI-Express Driver version " PCIE_VERSION);
MODULE_VERSION(PCIE_VERSION);
MODULE_LICENSE("GPL v2");
+MODULE_FIRMWARE(PCIE8766_DEFAULT_FW_NAME);
+MODULE_FIRMWARE(PCIE8897_DEFAULT_FW_NAME);
+MODULE_FIRMWARE(PCIE8897_A0_FW_NAME);
+MODULE_FIRMWARE(PCIE8897_B0_FW_NAME);
+MODULE_FIRMWARE(PCIEUART8997_FW_NAME_V4);
+MODULE_FIRMWARE(PCIEUSB8997_FW_NAME_V4);
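
Editor's aside, not part of the patch: MODULE_FIRMWARE() only records a "firmware:" tag in the module's modinfo so initramfs tooling and modinfo(8) know which blobs to ship; it does not load anything at runtime. A tiny sketch with an invented firmware path:

#include <linux/module.h>

#define DEMO_FW_NAME "demo/demo-fw.bin"	/* invented firmware path */

/* adds "firmware: demo/demo-fw.bin" to this module's modinfo section */
MODULE_FIRMWARE(DEMO_FW_NAME);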
diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.c b/drivers/net/wireless/marvell/mwifiex/sdio.c
index 774858cfe86f..6462a0ffe698 100644
--- a/drivers/net/wireless/marvell/mwifiex/sdio.c
+++ b/drivers/net/wireless/marvell/mwifiex/sdio.c
@@ -2555,20 +2555,11 @@ static int mwifiex_init_sdio(struct mwifiex_adapter *adapter)
if (!card->mp_regs)
return -ENOMEM;
- /* Allocate skb pointer buffers */
- card->mpa_rx.skb_arr = kcalloc(card->mp_agg_pkt_limit, sizeof(void *),
- GFP_KERNEL);
- if (!card->mpa_rx.skb_arr) {
- kfree(card->mp_regs);
- return -ENOMEM;
- }
-
card->mpa_rx.len_arr = kcalloc(card->mp_agg_pkt_limit,
sizeof(*card->mpa_rx.len_arr),
GFP_KERNEL);
if (!card->mpa_rx.len_arr) {
kfree(card->mp_regs);
- kfree(card->mpa_rx.skb_arr);
return -ENOMEM;
}
@@ -2623,7 +2614,6 @@ static void mwifiex_cleanup_sdio(struct mwifiex_adapter *adapter)
cancel_work_sync(&card->work);
kfree(card->mp_regs);
- kfree(card->mpa_rx.skb_arr);
kfree(card->mpa_rx.len_arr);
kfree(card->mpa_tx.buf);
kfree(card->mpa_rx.buf);
diff --git a/drivers/net/wireless/marvell/mwifiex/sdio.h b/drivers/net/wireless/marvell/mwifiex/sdio.h
index ae94c172310f..b86a9263a6a8 100644
--- a/drivers/net/wireless/marvell/mwifiex/sdio.h
+++ b/drivers/net/wireless/marvell/mwifiex/sdio.h
@@ -164,10 +164,7 @@ struct mwifiex_sdio_mpa_rx {
u32 pkt_cnt;
u32 ports;
u16 start_port;
-
- struct sk_buff **skb_arr;
u32 *len_arr;
-
u8 enabled;
u32 buf_size;
u32 pkt_aggr_limit;
@@ -372,7 +369,6 @@ static inline void mp_rx_aggr_setup(struct sdio_mmc_card *card,
else
card->mpa_rx.ports |= 1 << (card->mpa_rx.pkt_cnt + 1);
}
- card->mpa_rx.skb_arr[card->mpa_rx.pkt_cnt] = NULL;
card->mpa_rx.len_arr[card->mpa_rx.pkt_cnt] = rx_len;
card->mpa_rx.pkt_cnt++;
}
diff --git a/drivers/net/wireless/mediatek/mt76/Kconfig b/drivers/net/wireless/mediatek/mt76/Kconfig
index 7eb1b0b63d11..a86f800b8bf5 100644
--- a/drivers/net/wireless/mediatek/mt76/Kconfig
+++ b/drivers/net/wireless/mediatek/mt76/Kconfig
@@ -44,3 +44,4 @@ source "drivers/net/wireless/mediatek/mt76/mt7615/Kconfig"
source "drivers/net/wireless/mediatek/mt76/mt7915/Kconfig"
source "drivers/net/wireless/mediatek/mt76/mt7921/Kconfig"
source "drivers/net/wireless/mediatek/mt76/mt7996/Kconfig"
+source "drivers/net/wireless/mediatek/mt76/mt7925/Kconfig"
diff --git a/drivers/net/wireless/mediatek/mt76/Makefile b/drivers/net/wireless/mediatek/mt76/Makefile
index 85c4799be954..d6575fe18c6b 100644
--- a/drivers/net/wireless/mediatek/mt76/Makefile
+++ b/drivers/net/wireless/mediatek/mt76/Makefile
@@ -44,3 +44,4 @@ obj-$(CONFIG_MT7615_COMMON) += mt7615/
obj-$(CONFIG_MT7915E) += mt7915/
obj-$(CONFIG_MT7921_COMMON) += mt7921/
obj-$(CONFIG_MT7996E) += mt7996/
+obj-$(CONFIG_MT7925_COMMON) += mt7925/
diff --git a/drivers/net/wireless/mediatek/mt76/debugfs.c b/drivers/net/wireless/mediatek/mt76/debugfs.c
index 57fbcc83e074..ae83be572b94 100644
--- a/drivers/net/wireless/mediatek/mt76/debugfs.c
+++ b/drivers/net/wireless/mediatek/mt76/debugfs.c
@@ -109,8 +109,6 @@ mt76_register_debugfs_fops(struct mt76_phy *phy,
struct dentry *dir;
dir = debugfs_create_dir("mt76", phy->hw->wiphy->debugfsdir);
- if (!dir)
- return NULL;
debugfs_create_u8("led_pin", 0600, dir, &phy->leds.pin);
debugfs_create_u32("regidx", 0600, dir, &dev->debugfs_reg);
diff --git a/drivers/net/wireless/mediatek/mt76/dma.c b/drivers/net/wireless/mediatek/mt76/dma.c
index dc8f4e157eb2..511fe7e6e744 100644
--- a/drivers/net/wireless/mediatek/mt76/dma.c
+++ b/drivers/net/wireless/mediatek/mt76/dma.c
@@ -53,6 +53,11 @@ mt76_alloc_txwi(struct mt76_dev *dev)
addr = dma_map_single(dev->dma_dev, txwi, dev->drv->txwi_size,
DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(dev->dma_dev, addr))) {
+ kfree(txwi);
+ return NULL;
+ }
+
t = (struct mt76_txwi_cache *)(txwi + dev->drv->txwi_size);
t->dma_addr = addr;
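
Editor's aside, not part of the patch: the mt76 hunk above adds the standard check that every dma_map_single() needs, verify the mapping with dma_mapping_error() before using the address, and unwind on failure. A self-contained sketch of that pattern; demo_map_buf() is an invented helper:

#include <linux/dma-mapping.h>
#include <linux/slab.h>

static void *demo_map_buf(struct device *dev, size_t len, dma_addr_t *addr)
{
	void *buf = kzalloc(len, GFP_KERNEL);

	if (!buf)
		return NULL;

	*addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (unlikely(dma_mapping_error(dev, *addr))) {
		/* undo the allocation instead of handing the device a
		 * bogus DMA address */
		kfree(buf);
		return NULL;
	}

	return buf;
}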
@@ -330,9 +335,6 @@ mt76_dma_tx_cleanup_idx(struct mt76_dev *dev, struct mt76_queue *q, int idx,
if (e->txwi == DMA_DUMMY_DATA)
e->txwi = NULL;
- if (e->skb == DMA_DUMMY_DATA)
- e->skb = NULL;
-
*prev_e = *e;
memset(e, 0, sizeof(*e));
}
@@ -737,16 +739,18 @@ mt76_dma_rx_cleanup(struct mt76_dev *dev, struct mt76_queue *q)
if (!q->ndesc)
return;
- spin_lock_bh(&q->lock);
-
do {
+ spin_lock_bh(&q->lock);
buf = mt76_dma_dequeue(dev, q, true, NULL, NULL, &more, NULL);
+ spin_unlock_bh(&q->lock);
+
if (!buf)
break;
mt76_put_page_pool_buf(buf, false);
} while (1);
+ spin_lock_bh(&q->lock);
if (q->rx_head) {
dev_kfree_skb(q->rx_head);
q->rx_head = NULL;
diff --git a/drivers/net/wireless/mediatek/mt76/eeprom.c b/drivers/net/wireless/mediatek/mt76/eeprom.c
index 36564930aef1..7725dd6763ef 100644
--- a/drivers/net/wireless/mediatek/mt76/eeprom.c
+++ b/drivers/net/wireless/mediatek/mt76/eeprom.c
@@ -188,7 +188,7 @@ static bool mt76_string_prop_find(struct property *prop, const char *str)
return false;
}
-static struct device_node *
+struct device_node *
mt76_find_power_limits_node(struct mt76_dev *dev)
{
struct device_node *np = dev->dev->of_node;
@@ -227,6 +227,7 @@ mt76_find_power_limits_node(struct mt76_dev *dev)
of_node_put(np);
return fallback;
}
+EXPORT_SYMBOL_GPL(mt76_find_power_limits_node);
static const __be32 *
mt76_get_of_array(struct device_node *np, char *name, size_t *len, int min)
@@ -241,7 +242,7 @@ mt76_get_of_array(struct device_node *np, char *name, size_t *len, int min)
return prop->value;
}
-static struct device_node *
+struct device_node *
mt76_find_channel_node(struct device_node *np, struct ieee80211_channel *chan)
{
struct device_node *cur;
@@ -265,6 +266,8 @@ mt76_find_channel_node(struct device_node *np, struct ieee80211_channel *chan)
return NULL;
}
+EXPORT_SYMBOL_GPL(mt76_find_channel_node);
+
static s8
mt76_get_txs_delta(struct device_node *np, u8 nss)
diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
index d158320bc15d..51a767121b0d 100644
--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
+++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
@@ -415,6 +415,9 @@ mt76_phy_init(struct mt76_phy *phy, struct ieee80211_hw *hw)
struct mt76_dev *dev = phy->dev;
struct wiphy *wiphy = hw->wiphy;
+ INIT_LIST_HEAD(&phy->tx_list);
+ spin_lock_init(&phy->tx_lock);
+
SET_IEEE80211_DEV(hw, dev->dev);
SET_IEEE80211_PERM_ADDR(hw, phy->macaddr);
@@ -452,7 +455,8 @@ mt76_phy_init(struct mt76_phy *phy, struct ieee80211_hw *hw)
ieee80211_hw_set(hw, SUPPORTS_AMSDU_IN_AMPDU);
ieee80211_hw_set(hw, SUPPORTS_REORDERING_BUFFER);
- if (!(dev->drv->drv_flags & MT_DRV_AMSDU_OFFLOAD)) {
+ if (!(dev->drv->drv_flags & MT_DRV_AMSDU_OFFLOAD) &&
+ hw->max_tx_fragments > 1) {
ieee80211_hw_set(hw, TX_AMSDU);
ieee80211_hw_set(hw, TX_FRAG_LIST);
}
@@ -566,7 +570,7 @@ int mt76_create_page_pool(struct mt76_dev *dev, struct mt76_queue *q)
{
struct page_pool_params pp_params = {
.order = 0,
- .flags = PP_FLAG_PAGE_FRAG,
+ .flags = 0,
.nid = NUMA_NO_NODE,
.dev = dev->dma_dev,
};
@@ -688,6 +692,7 @@ int mt76_register_device(struct mt76_dev *dev, bool vht,
int ret;
dev_set_drvdata(dev->dev, dev);
+ mt76_wcid_init(&dev->global_wcid);
ret = mt76_phy_init(phy, hw);
if (ret)
return ret;
@@ -743,6 +748,7 @@ void mt76_unregister_device(struct mt76_dev *dev)
if (IS_ENABLED(CONFIG_MT76_LEDS))
mt76_led_cleanup(&dev->phy);
mt76_tx_status_check(dev, true);
+ mt76_wcid_cleanup(dev, &dev->global_wcid);
ieee80211_unregister_hw(hw);
}
EXPORT_SYMBOL_GPL(mt76_unregister_device);
@@ -1411,7 +1417,7 @@ mt76_sta_add(struct mt76_phy *phy, struct ieee80211_vif *vif,
wcid->phy_idx = phy->band_idx;
rcu_assign_pointer(dev->wcid[wcid->idx], wcid);
- mt76_packet_id_init(wcid);
+ mt76_wcid_init(wcid);
out:
mutex_unlock(&dev->mutex);
@@ -1430,7 +1436,7 @@ void __mt76_sta_remove(struct mt76_dev *dev, struct ieee80211_vif *vif,
if (dev->drv->sta_remove)
dev->drv->sta_remove(dev, vif, sta);
- mt76_packet_id_flush(dev, wcid);
+ mt76_wcid_cleanup(dev, wcid);
mt76_wcid_mask_clear(dev->wcid_mask, idx);
mt76_wcid_mask_clear(dev->wcid_phy_mask, idx);
@@ -1486,6 +1492,47 @@ void mt76_sta_pre_rcu_remove(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
}
EXPORT_SYMBOL_GPL(mt76_sta_pre_rcu_remove);
+void mt76_wcid_init(struct mt76_wcid *wcid)
+{
+ INIT_LIST_HEAD(&wcid->tx_list);
+ skb_queue_head_init(&wcid->tx_pending);
+
+ INIT_LIST_HEAD(&wcid->list);
+ idr_init(&wcid->pktid);
+}
+EXPORT_SYMBOL_GPL(mt76_wcid_init);
+
+void mt76_wcid_cleanup(struct mt76_dev *dev, struct mt76_wcid *wcid)
+{
+ struct mt76_phy *phy = dev->phys[wcid->phy_idx];
+ struct ieee80211_hw *hw;
+ struct sk_buff_head list;
+ struct sk_buff *skb;
+
+ mt76_tx_status_lock(dev, &list);
+ mt76_tx_status_skb_get(dev, wcid, -1, &list);
+ mt76_tx_status_unlock(dev, &list);
+
+ idr_destroy(&wcid->pktid);
+
+ spin_lock_bh(&phy->tx_lock);
+
+ if (!list_empty(&wcid->tx_list))
+ list_del_init(&wcid->tx_list);
+
+ spin_lock(&wcid->tx_pending.lock);
+ skb_queue_splice_tail_init(&wcid->tx_pending, &list);
+ spin_unlock(&wcid->tx_pending.lock);
+
+ spin_unlock_bh(&phy->tx_lock);
+
+ while ((skb = __skb_dequeue(&list)) != NULL) {
+ hw = mt76_tx_status_get_hw(dev, skb);
+ ieee80211_free_txskb(hw, skb);
+ }
+}
+EXPORT_SYMBOL_GPL(mt76_wcid_cleanup);
+
int mt76_get_txpower(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
int *dbm)
{
@@ -1697,11 +1744,16 @@ mt76_init_queue(struct mt76_dev *dev, int qid, int idx, int n_desc,
}
EXPORT_SYMBOL_GPL(mt76_init_queue);
-u16 mt76_calculate_default_rate(struct mt76_phy *phy, int rateidx)
+u16 mt76_calculate_default_rate(struct mt76_phy *phy,
+ struct ieee80211_vif *vif, int rateidx)
{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct cfg80211_chan_def *chandef = mvif->ctx ?
+ &mvif->ctx->def :
+ &phy->chandef;
int offset = 0;
- if (phy->chandef.chan->band != NL80211_BAND_2GHZ)
+ if (chandef->chan->band != NL80211_BAND_2GHZ)
offset = 4;
/* pick the lowest rate for hidden nodes */
diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
index e8757865a3d0..ea828ba0b83a 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76.h
@@ -334,6 +334,9 @@ struct mt76_wcid {
u32 tx_info;
bool sw_iv;
+ struct list_head tx_list;
+ struct sk_buff_head tx_pending;
+
struct list_head list;
struct idr pktid;
@@ -376,7 +379,7 @@ struct mt76_rx_tid {
u8 started:1, stopped:1, timer_pending:1;
- struct sk_buff *reorder_buf[];
+ struct sk_buff *reorder_buf[] __counted_by(size);
};
#define MT_TX_CB_DMA_DONE BIT(0)
@@ -709,6 +712,7 @@ struct mt76_vif {
u8 basic_rates_idx;
u8 mcast_rates_idx;
u8 beacon_rates_idx;
+ struct ieee80211_chanctx_conf *ctx;
};
struct mt76_phy {
@@ -719,6 +723,8 @@ struct mt76_phy {
unsigned long state;
u8 band_idx;
+ spinlock_t tx_lock;
+ struct list_head tx_list;
struct mt76_queue *q_tx[__MT_TXQ_MAX];
struct cfg80211_chan_def chandef;
@@ -967,6 +973,7 @@ struct mt76_power_limits {
s8 ofdm[8];
s8 mcs[4][10];
s8 ru[7][12];
+ s8 eht[16][16];
};
struct mt76_ethtool_worker_info {
@@ -1100,7 +1107,8 @@ int mt76_get_of_eeprom(struct mt76_dev *dev, void *data, int offset, int len);
struct mt76_queue *
mt76_init_queue(struct mt76_dev *dev, int qid, int idx, int n_desc,
int ring_base, u32 flags);
-u16 mt76_calculate_default_rate(struct mt76_phy *phy, int rateidx);
+u16 mt76_calculate_default_rate(struct mt76_phy *phy,
+ struct ieee80211_vif *vif, int rateidx);
static inline int mt76_init_tx_queue(struct mt76_phy *phy, int qid, int idx,
int n_desc, int ring_base, u32 flags)
{
@@ -1529,6 +1537,11 @@ mt76_mcu_skb_send_msg(struct mt76_dev *dev, struct sk_buff *skb, int cmd,
void mt76_set_irq_mask(struct mt76_dev *dev, u32 addr, u32 clear, u32 set);
+struct device_node *
+mt76_find_power_limits_node(struct mt76_dev *dev);
+struct device_node *
+mt76_find_channel_node(struct device_node *np, struct ieee80211_channel *chan);
+
s8 mt76_get_rate_power_limits(struct mt76_phy *phy,
struct ieee80211_channel *chan,
struct mt76_power_limits *dest,
@@ -1598,22 +1611,7 @@ mt76_token_put(struct mt76_dev *dev, int token)
return txwi;
}
-static inline void mt76_packet_id_init(struct mt76_wcid *wcid)
-{
- INIT_LIST_HEAD(&wcid->list);
- idr_init(&wcid->pktid);
-}
-
-static inline void
-mt76_packet_id_flush(struct mt76_dev *dev, struct mt76_wcid *wcid)
-{
- struct sk_buff_head list;
-
- mt76_tx_status_lock(dev, &list);
- mt76_tx_status_skb_get(dev, wcid, -1, &list);
- mt76_tx_status_unlock(dev, &list);
-
- idr_destroy(&wcid->pktid);
-}
+void mt76_wcid_init(struct mt76_wcid *wcid);
+void mt76_wcid_cleanup(struct mt76_dev *dev, struct mt76_wcid *wcid);
#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/beacon.c b/drivers/net/wireless/mediatek/mt76/mt7603/beacon.c
index 888678732f29..c223f7c19e6d 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/beacon.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/beacon.c
@@ -10,12 +10,31 @@ struct beacon_bc_data {
};
static void
+mt7603_mac_stuck_beacon_recovery(struct mt7603_dev *dev)
+{
+ if (dev->beacon_check % 5 != 4)
+ return;
+
+ mt76_clear(dev, MT_WPDMA_GLO_CFG, MT_WPDMA_GLO_CFG_TX_DMA_EN);
+ mt76_set(dev, MT_SCH_4, MT_SCH_4_RESET);
+ mt76_clear(dev, MT_SCH_4, MT_SCH_4_RESET);
+ mt76_set(dev, MT_WPDMA_GLO_CFG, MT_WPDMA_GLO_CFG_TX_DMA_EN);
+
+ mt76_set(dev, MT_WF_CFG_OFF_WOCCR, MT_WF_CFG_OFF_WOCCR_TMAC_GC_DIS);
+ mt76_set(dev, MT_ARB_SCR, MT_ARB_SCR_TX_DISABLE);
+ mt76_clear(dev, MT_ARB_SCR, MT_ARB_SCR_TX_DISABLE);
+ mt76_clear(dev, MT_WF_CFG_OFF_WOCCR, MT_WF_CFG_OFF_WOCCR_TMAC_GC_DIS);
+}
+
+static void
mt7603_update_beacon_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
{
struct mt7603_dev *dev = (struct mt7603_dev *)priv;
struct mt76_dev *mdev = &dev->mt76;
struct mt7603_vif *mvif = (struct mt7603_vif *)vif->drv_priv;
struct sk_buff *skb = NULL;
+ u32 om_idx = mvif->idx;
+ u32 val;
if (!(mdev->beacon_mask & BIT(mvif->idx)))
return;
@@ -24,20 +43,33 @@ mt7603_update_beacon_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
if (!skb)
return;
- mt76_tx_queue_skb(dev, dev->mphy.q_tx[MT_TXQ_BEACON],
- MT_TXQ_BEACON, skb, &mvif->sta.wcid, NULL);
+ if (om_idx)
+ om_idx |= 0x10;
+ val = MT_DMA_FQCR0_BUSY | MT_DMA_FQCR0_MODE |
+ FIELD_PREP(MT_DMA_FQCR0_TARGET_BSS, om_idx) |
+ FIELD_PREP(MT_DMA_FQCR0_DEST_PORT_ID, 3) |
+ FIELD_PREP(MT_DMA_FQCR0_DEST_QUEUE_ID, 8);
spin_lock_bh(&dev->ps_lock);
- mt76_wr(dev, MT_DMA_FQCR0, MT_DMA_FQCR0_BUSY |
- FIELD_PREP(MT_DMA_FQCR0_TARGET_WCID, mvif->sta.wcid.idx) |
- FIELD_PREP(MT_DMA_FQCR0_TARGET_QID,
- dev->mphy.q_tx[MT_TXQ_CAB]->hw_idx) |
- FIELD_PREP(MT_DMA_FQCR0_DEST_PORT_ID, 3) |
- FIELD_PREP(MT_DMA_FQCR0_DEST_QUEUE_ID, 8));
- if (!mt76_poll(dev, MT_DMA_FQCR0, MT_DMA_FQCR0_BUSY, 0, 5000))
+ mt76_wr(dev, MT_DMA_FQCR0, val |
+ FIELD_PREP(MT_DMA_FQCR0_TARGET_QID, MT_TX_HW_QUEUE_BCN));
+ if (!mt76_poll(dev, MT_DMA_FQCR0, MT_DMA_FQCR0_BUSY, 0, 5000)) {
dev->beacon_check = MT7603_WATCHDOG_TIMEOUT;
+ goto out;
+ }
+
+ mt76_wr(dev, MT_DMA_FQCR0, val |
+ FIELD_PREP(MT_DMA_FQCR0_TARGET_QID, MT_TX_HW_QUEUE_BMC));
+ if (!mt76_poll(dev, MT_DMA_FQCR0, MT_DMA_FQCR0_BUSY, 0, 5000)) {
+ dev->beacon_check = MT7603_WATCHDOG_TIMEOUT;
+ goto out;
+ }
+ mt76_tx_queue_skb(dev, dev->mphy.q_tx[MT_TXQ_BEACON],
+ MT_TXQ_BEACON, skb, &mvif->sta.wcid, NULL);
+
+out:
spin_unlock_bh(&dev->ps_lock);
}
@@ -81,6 +113,18 @@ void mt7603_pre_tbtt_tasklet(struct tasklet_struct *t)
data.dev = dev;
__skb_queue_head_init(&data.q);
+ /* Flush all previous CAB queue packets and beacons */
+ mt76_wr(dev, MT_WF_ARB_CAB_FLUSH, GENMASK(30, 16) | BIT(0));
+
+ mt76_queue_tx_cleanup(dev, dev->mphy.q_tx[MT_TXQ_CAB], false);
+ mt76_queue_tx_cleanup(dev, dev->mphy.q_tx[MT_TXQ_BEACON], false);
+
+ if (dev->mphy.q_tx[MT_TXQ_BEACON]->queued > 0)
+ dev->beacon_check++;
+ else
+ dev->beacon_check = 0;
+ mt7603_mac_stuck_beacon_recovery(dev);
+
q = dev->mphy.q_tx[MT_TXQ_BEACON];
spin_lock(&q->lock);
ieee80211_iterate_active_interfaces_atomic(mt76_hw(dev),
@@ -89,14 +133,9 @@ void mt7603_pre_tbtt_tasklet(struct tasklet_struct *t)
mt76_queue_kick(dev, q);
spin_unlock(&q->lock);
- /* Flush all previous CAB queue packets */
- mt76_wr(dev, MT_WF_ARB_CAB_FLUSH, GENMASK(30, 16) | BIT(0));
-
- mt76_queue_tx_cleanup(dev, dev->mphy.q_tx[MT_TXQ_CAB], false);
-
mt76_csa_check(mdev);
if (mdev->csa_complete)
- goto out;
+ return;
q = dev->mphy.q_tx[MT_TXQ_CAB];
do {
@@ -108,7 +147,7 @@ void mt7603_pre_tbtt_tasklet(struct tasklet_struct *t)
skb_queue_len(&data.q) < 8);
if (skb_queue_empty(&data.q))
- goto out;
+ return;
for (i = 0; i < ARRAY_SIZE(data.tail); i++) {
if (!data.tail[i])
@@ -136,11 +175,6 @@ void mt7603_pre_tbtt_tasklet(struct tasklet_struct *t)
MT_WF_ARB_CAB_START_BSSn(0) |
(MT_WF_ARB_CAB_START_BSS0n(1) *
((1 << (MT7603_MAX_INTERFACES - 1)) - 1)));
-
-out:
- mt76_queue_tx_cleanup(dev, dev->mphy.q_tx[MT_TXQ_BEACON], false);
- if (dev->mphy.q_tx[MT_TXQ_BEACON]->queued > hweight8(mdev->beacon_mask))
- dev->beacon_check++;
}
void mt7603_beacon_set_timer(struct mt7603_dev *dev, int idx, int intval)
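The reworked pre-TBTT path above counts consecutive intervals in which the beacon queue fails to drain and calls mt7603_mac_stuck_beacon_recovery() on every pass; the modulo check inside that helper throttles the actual reset. Cadence implied by the code, assuming beacon_check starts at 0 and the queue stays backed up:

/*
 * pre-TBTT interrupt #   : 1 2 3 4 5 6 7 8 9 ...
 * dev->beacon_check      : 1 2 3 4 5 6 7 8 9 ...
 * TX DMA + SCH_4 recovery: - - - X - - - - X ...
 *
 * i.e. the lightweight DMA/scheduler kick fires on every fifth stuck
 * interval (beacon_check % 5 == 4), presumably leaving the heavier
 * watchdog reset path for the cases it does not cure.
 */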
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/core.c b/drivers/net/wireless/mediatek/mt76/mt7603/core.c
index 60a996b63c0c..915b8349146a 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/core.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/core.c
@@ -42,11 +42,13 @@ irqreturn_t mt7603_irq_handler(int irq, void *dev_instance)
}
if (intr & MT_INT_RX_DONE(0)) {
+ dev->rx_pse_check = 0;
mt7603_irq_disable(dev, MT_INT_RX_DONE(0));
napi_schedule(&dev->mt76.napi[0]);
}
if (intr & MT_INT_RX_DONE(1)) {
+ dev->rx_pse_check = 0;
mt7603_irq_disable(dev, MT_INT_RX_DONE(1));
napi_schedule(&dev->mt76.napi[1]);
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/init.c b/drivers/net/wireless/mediatek/mt76/mt7603/init.c
index 0762de3ce5ac..6c55c72f28a2 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/init.c
@@ -184,6 +184,13 @@ mt7603_mac_init(struct mt7603_dev *dev)
mt76_set(dev, MT_TMAC_TCR, MT_TMAC_TCR_RX_RIFS_MODE);
+ if (is_mt7628(dev)) {
+ mt76_set(dev, MT_TMAC_TCR,
+ MT_TMAC_TCR_TXOP_BURST_STOP | BIT(1) | BIT(0));
+ mt76_set(dev, MT_TXREQ, BIT(27));
+ mt76_set(dev, MT_AGG_TMP, GENMASK(4, 2));
+ }
+
mt7603_set_tmac_template(dev);
/* Enable RX group to HIF */
@@ -517,6 +524,7 @@ int mt7603_register_device(struct mt7603_dev *dev)
hw->max_rates = 3;
hw->max_report_rates = 7;
hw->max_rate_tries = 11;
+ hw->max_tx_fragments = 1;
hw->radiotap_timestamp.units_pos =
IEEE80211_RADIOTAP_TIMESTAMP_UNIT_US;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
index 99ae080502d8..cf21d06257e5 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/mac.c
@@ -1441,15 +1441,6 @@ static void mt7603_mac_watchdog_reset(struct mt7603_dev *dev)
mt7603_beacon_set_timer(dev, -1, 0);
- if (dev->reset_cause[RESET_CAUSE_RESET_FAILED] ||
- dev->cur_reset_cause == RESET_CAUSE_RX_PSE_BUSY ||
- dev->cur_reset_cause == RESET_CAUSE_BEACON_STUCK ||
- dev->cur_reset_cause == RESET_CAUSE_TX_HANG)
- mt7603_pse_reset(dev);
-
- if (dev->reset_cause[RESET_CAUSE_RESET_FAILED])
- goto skip_dma_reset;
-
mt7603_mac_stop(dev);
mt76_clear(dev, MT_WPDMA_GLO_CFG,
@@ -1459,28 +1450,32 @@ static void mt7603_mac_watchdog_reset(struct mt7603_dev *dev)
mt7603_irq_disable(dev, mask);
- mt76_set(dev, MT_WPDMA_GLO_CFG, MT_WPDMA_GLO_CFG_FORCE_TX_EOF);
-
mt7603_pse_client_reset(dev);
mt76_queue_tx_cleanup(dev, dev->mt76.q_mcu[MT_MCUQ_WM], true);
for (i = 0; i < __MT_TXQ_MAX; i++)
mt76_queue_tx_cleanup(dev, dev->mphy.q_tx[i], true);
+ mt7603_dma_sched_reset(dev);
+
+ mt76_tx_status_check(&dev->mt76, true);
+
mt76_for_each_q_rx(&dev->mt76, i) {
mt76_queue_rx_reset(dev, i);
}
- mt76_tx_status_check(&dev->mt76, true);
+ if (dev->reset_cause[RESET_CAUSE_RESET_FAILED] ||
+ dev->cur_reset_cause == RESET_CAUSE_RX_PSE_BUSY)
+ mt7603_pse_reset(dev);
- mt7603_dma_sched_reset(dev);
+ if (!dev->reset_cause[RESET_CAUSE_RESET_FAILED]) {
+ mt7603_mac_dma_start(dev);
- mt7603_mac_dma_start(dev);
+ mt7603_irq_enable(dev, mask);
- mt7603_irq_enable(dev, mask);
+ clear_bit(MT76_RESET, &dev->mphy.state);
+ }
-skip_dma_reset:
- clear_bit(MT76_RESET, &dev->mphy.state);
mutex_unlock(&dev->mt76.mutex);
mt76_worker_enable(&dev->mt76.tx_worker);
@@ -1570,20 +1565,29 @@ static bool mt7603_rx_pse_busy(struct mt7603_dev *dev)
{
u32 addr, val;
- if (mt76_rr(dev, MT_MCU_DEBUG_RESET) & MT_MCU_DEBUG_RESET_QUEUES)
- return true;
-
if (mt7603_rx_fifo_busy(dev))
- return false;
+ goto out;
addr = mt7603_reg_map(dev, MT_CLIENT_BASE_PHYS_ADDR + MT_CLIENT_STATUS);
mt76_wr(dev, addr, 3);
val = mt76_rr(dev, addr) >> 16;
- if (is_mt7628(dev) && (val & 0x4001) == 0x4001)
- return true;
+ if (!(val & BIT(0)))
+ return false;
- return (val & 0x8001) == 0x8001 || (val & 0xe001) == 0xe001;
+ if (is_mt7628(dev))
+ val &= 0xa000;
+ else
+ val &= 0x8000;
+ if (!val)
+ return false;
+
+out:
+ if (mt76_rr(dev, MT_INT_SOURCE_CSR) &
+ (MT_INT_RX_DONE(0) | MT_INT_RX_DONE(1)))
+ return false;
+
+ return true;
}
static bool
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/main.c b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
index c213fd2a5216..89d738deea62 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/main.c
@@ -70,7 +70,7 @@ mt7603_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
mvif->sta.wcid.idx = idx;
mvif->sta.wcid.hw_key_idx = -1;
mvif->sta.vif = mvif;
- mt76_packet_id_init(&mvif->sta.wcid);
+ mt76_wcid_init(&mvif->sta.wcid);
eth_broadcast_addr(bc_addr);
mt7603_wtbl_init(dev, idx, mvif->idx, bc_addr);
@@ -110,7 +110,7 @@ mt7603_remove_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
dev->mt76.vif_mask &= ~BIT_ULL(mvif->idx);
mutex_unlock(&dev->mt76.mutex);
- mt76_packet_id_flush(&dev->mt76, &mvif->sta.wcid);
+ mt76_wcid_cleanup(&dev->mt76, &mvif->sta.wcid);
}
void mt7603_init_edcca(struct mt7603_dev *dev)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7603/regs.h b/drivers/net/wireless/mediatek/mt76/mt7603/regs.h
index a39c9a0fcb1c..524bceb8e958 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7603/regs.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7603/regs.h
@@ -469,6 +469,11 @@ enum {
#define MT_WF_SEC_BASE 0x21a00
#define MT_WF_SEC(ofs) (MT_WF_SEC_BASE + (ofs))
+#define MT_WF_CFG_OFF_BASE 0x21e00
+#define MT_WF_CFG_OFF(ofs) (MT_WF_CFG_OFF_BASE + (ofs))
+#define MT_WF_CFG_OFF_WOCCR MT_WF_CFG_OFF(0x004)
+#define MT_WF_CFG_OFF_WOCCR_TMAC_GC_DIS BIT(4)
+
#define MT_SEC_SCR MT_WF_SEC(0x004)
#define MT_SEC_SCR_MASK_ORDER GENMASK(1, 0)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/init.c b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
index 18a50ccff106..f7722f67db57 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/init.c
@@ -58,10 +58,7 @@ int mt7615_thermal_init(struct mt7615_dev *dev)
wiphy_name(wiphy));
hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev, name, dev,
mt7615_hwmon_groups);
- if (IS_ERR(hwmon))
- return PTR_ERR(hwmon);
-
- return 0;
+ return PTR_ERR_OR_ZERO(hwmon);
}
EXPORT_SYMBOL_GPL(mt7615_thermal_init);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/main.c b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
index 200b1752ca77..dab16b5fc386 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/main.c
@@ -226,7 +226,7 @@ static int mt7615_add_interface(struct ieee80211_hw *hw,
mvif->sta.wcid.idx = idx;
mvif->sta.wcid.phy_idx = mvif->mt76.band_idx;
mvif->sta.wcid.hw_key_idx = -1;
- mt76_packet_id_init(&mvif->sta.wcid);
+ mt76_wcid_init(&mvif->sta.wcid);
mt7615_mac_wtbl_update(dev, idx,
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
@@ -279,7 +279,7 @@ static void mt7615_remove_interface(struct ieee80211_hw *hw,
list_del_init(&msta->wcid.poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
- mt76_packet_id_flush(&dev->mt76, &mvif->sta.wcid);
+ mt76_wcid_cleanup(&dev->mt76, &mvif->sta.wcid);
}
int mt7615_set_channel(struct mt7615_phy *phy)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
index 8d745c9730c7..955974a82180 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
@@ -2147,7 +2147,7 @@ int mt7615_mcu_set_chan_info(struct mt7615_phy *phy, int cmd)
};
if (cmd == MCU_EXT_CMD(SET_RX_PATH) ||
- dev->mt76.hw->conf.flags & IEEE80211_CONF_MONITOR)
+ phy->mt76->hw->conf.flags & IEEE80211_CONF_MONITOR)
req.switch_reason = CH_SWITCH_NORMAL;
else if (phy->mt76->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)
req.switch_reason = CH_SWITCH_SCAN_BYPASS_DPD;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c b/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
index 0019890fdb78..fbb1181c58ff 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7615/pci_mac.c
@@ -106,7 +106,7 @@ int mt7615_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
else
mt76_connac_write_hw_txp(mdev, tx_info, txp, id);
- tx_info->skb = DMA_DUMMY_DATA;
+ tx_info->skb = NULL;
return 0;
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
index 22878f088804..1f29d8cd900c 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76_connac.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76_connac.h
@@ -172,6 +172,11 @@ struct mt76_connac_tx_free {
extern const struct wiphy_wowlan_support mt76_connac_wowlan_support;
+static inline bool is_mt7925(struct mt76_dev *dev)
+{
+ return mt76_chip(dev) == 0x7925;
+}
+
static inline bool is_mt7922(struct mt76_dev *dev)
{
return mt76_chip(dev) == 0x7922;
@@ -245,6 +250,7 @@ static inline bool is_mt76_fw_txp(struct mt76_dev *dev)
switch (mt76_chip(dev)) {
case 0x7961:
case 0x7922:
+ case 0x7925:
case 0x7663:
case 0x7622:
return false;
diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h b/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
index 68ca0844cbbf..2250252b2047 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76_connac3_mac.h
@@ -257,6 +257,8 @@ enum tx_mgnt_type {
#define MT_TXD7_UDP_TCP_SUM BIT(15)
#define MT_TXD7_TX_TIME GENMASK(9, 0)
+#define MT_TXD9_WLAN_IDX GENMASK(23, 8)
+
#define MT_TX_RATE_STBC BIT(14)
#define MT_TX_RATE_NSS GENMASK(13, 10)
#define MT_TX_RATE_MODE GENMASK(9, 6)
@@ -269,7 +271,7 @@ enum tx_mgnt_type {
#define MT_TXFREE0_MSDU_CNT GENMASK(25, 16)
#define MT_TXFREE0_RX_BYTE GENMASK(15, 0)
-#define MT_TXFREE1_VER GENMASK(18, 16)
+#define MT_TXFREE1_VER GENMASK(19, 16)
#define MT_TXFREE_INFO_PAIR BIT(31)
#define MT_TXFREE_INFO_HEADER BIT(30)
@@ -315,6 +317,7 @@ enum tx_mgnt_type {
#define MT_TXS4_TIMESTAMP GENMASK(31, 0)
+/* MPDU based TXS */
#define MT_TXS5_F0_FINAL_MPDU BIT(31)
#define MT_TXS5_F0_QOS BIT(30)
#define MT_TXS5_F0_TX_COUNT GENMASK(29, 25)
@@ -336,4 +339,17 @@ enum tx_mgnt_type {
#define MT_TXS7_F1_MPDU_RETRY_COUNT GENMASK(31, 24)
#define MT_TXS7_F1_MPDU_RETRY_BYTES GENMASK(23, 0)
+/* PPDU based TXS */
+#define MT_TXS5_MPDU_TX_CNT GENMASK(30, 20)
+#define MT_TXS5_MPDU_TX_BYTE_SCALE BIT(15)
+#define MT_TXS5_MPDU_TX_BYTE GENMASK(14, 0)
+
+#define MT_TXS6_MPDU_FAIL_CNT GENMASK(30, 20)
+#define MT_TXS6_MPDU_FAIL_BYTE_SCALE BIT(15)
+#define MT_TXS6_MPDU_FAIL_BYTE GENMASK(14, 0)
+
+#define MT_TXS7_MPDU_RETRY_CNT GENMASK(30, 20)
+#define MT_TXS7_MPDU_RETRY_BYTE_SCALE BIT(15)
+#define MT_TXS7_MPDU_RETRY_BYTE GENMASK(14, 0)
+
#endif /* __MT76_CONNAC3_MAC_H */
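The new PPDU-based TXS words carry aggregate MPDU counters rather than per-MPDU status. A hedged sketch of how a consumer could pull the counters out with the masks defined above; the helper below is hypothetical, and the byte counters with their *_SCALE flags are omitted because their unit is firmware-defined:

/* assumes <linux/bitfield.h> for le32_get_bits() */
static void example_parse_ppdu_txs(const __le32 *txs_data,
				   u32 *tx_cnt, u32 *fail_cnt, u32 *retry_cnt)
{
	*tx_cnt    = le32_get_bits(txs_data[5], MT_TXS5_MPDU_TX_CNT);
	*fail_cnt  = le32_get_bits(txs_data[6], MT_TXS6_MPDU_FAIL_CNT);
	*retry_cnt = le32_get_bits(txs_data[7], MT_TXS7_MPDU_RETRY_CNT);
}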
diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
index ee5177fd6dde..93402d2c2538 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mac.c
@@ -151,23 +151,6 @@ void mt76_connac_tx_complete_skb(struct mt76_dev *mdev,
return;
}
- /* error path */
- if (e->skb == DMA_DUMMY_DATA) {
- struct mt76_connac_txp_common *txp;
- struct mt76_txwi_cache *t;
- u16 token;
-
- txp = mt76_connac_txwi_to_txp(mdev, e->txwi);
- if (is_mt76_fw_txp(mdev))
- token = le16_to_cpu(txp->fw.token);
- else
- token = le16_to_cpu(txp->hw.msdu_id[0]) &
- ~MT_MSDU_ID_VALID;
-
- t = mt76_token_put(mdev, token);
- e->skb = t ? t->skb : NULL;
- }
-
if (e->skb)
mt76_tx_complete_skb(mdev, e->wcid, e->skb);
}
@@ -187,7 +170,7 @@ void mt76_connac_write_hw_txp(struct mt76_dev *dev,
txp->msdu_id[0] = cpu_to_le16(id | MT_MSDU_ID_VALID);
- if (is_mt7663(dev) || is_mt7921(dev))
+ if (is_mt7663(dev) || is_mt7921(dev) || is_mt7925(dev))
last_mask = MT_TXD_LEN_LAST;
else
last_mask = MT_TXD_LEN_AMSDU_LAST |
@@ -231,7 +214,7 @@ mt76_connac_txp_skb_unmap_hw(struct mt76_dev *dev,
u32 last_mask;
int i;
- if (is_mt7663(dev) || is_mt7921(dev))
+ if (is_mt7663(dev) || is_mt7921(dev) || is_mt7925(dev))
last_mask = MT_TXD_LEN_LAST;
else
last_mask = MT_TXD_LEN_MSDU_LAST;
@@ -310,7 +293,10 @@ u16 mt76_connac2_mac_tx_rate_val(struct mt76_phy *mphy,
struct ieee80211_vif *vif,
bool beacon, bool mcast)
{
- u8 nss = 0, mode = 0, band = mphy->chandef.chan->band;
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct cfg80211_chan_def *chandef = mvif->ctx ?
+ &mvif->ctx->def : &mphy->chandef;
+ u8 nss = 0, mode = 0, band = chandef->chan->band;
int rateidx = 0, mcast_rate;
if (!vif)
@@ -343,7 +329,7 @@ u16 mt76_connac2_mac_tx_rate_val(struct mt76_phy *mphy,
rateidx = ffs(vif->bss_conf.basic_rates) - 1;
legacy:
- rateidx = mt76_calculate_default_rate(mphy, rateidx);
+ rateidx = mt76_calculate_default_rate(mphy, vif, rateidx);
mode = rateidx >> 8;
rateidx &= GENMASK(7, 0);
out:
diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
index 0f0a519f956f..ae6bf3c968df 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
@@ -66,6 +66,7 @@ int mt76_connac_mcu_init_download(struct mt76_dev *dev, u32 addr, u32 len,
if ((!is_connac_v1(dev) && addr == MCU_PATCH_ADDRESS) ||
(is_mt7921(dev) && addr == 0x900000) ||
+ (is_mt7925(dev) && addr == 0x900000) ||
(is_mt7996(dev) && addr == 0x900000))
cmd = MCU_CMD(PATCH_START_REQ);
else
@@ -745,7 +746,7 @@ mt76_connac_mcu_sta_he_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
he->pkt_ext = 2;
}
-static void
+void
mt76_connac_mcu_sta_he_tlv_v2(struct sk_buff *skb, struct ieee80211_sta *sta)
{
struct ieee80211_sta_he_cap *he_cap = &sta->deflink.he_cap;
@@ -777,20 +778,23 @@ mt76_connac_mcu_sta_he_tlv_v2(struct sk_buff *skb, struct ieee80211_sta *sta)
he->pkt_ext = IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_16US;
}
+EXPORT_SYMBOL_GPL(mt76_connac_mcu_sta_he_tlv_v2);
-static u8
+u8
mt76_connac_get_phy_mode_v2(struct mt76_phy *mphy, struct ieee80211_vif *vif,
enum nl80211_band band, struct ieee80211_sta *sta)
{
struct ieee80211_sta_ht_cap *ht_cap;
struct ieee80211_sta_vht_cap *vht_cap;
const struct ieee80211_sta_he_cap *he_cap;
+ const struct ieee80211_sta_eht_cap *eht_cap;
u8 mode = 0;
if (sta) {
ht_cap = &sta->deflink.ht_cap;
vht_cap = &sta->deflink.vht_cap;
he_cap = &sta->deflink.he_cap;
+ eht_cap = &sta->deflink.eht_cap;
} else {
struct ieee80211_supported_band *sband;
@@ -798,6 +802,7 @@ mt76_connac_get_phy_mode_v2(struct mt76_phy *mphy, struct ieee80211_vif *vif,
ht_cap = &sband->ht_cap;
vht_cap = &sband->vht_cap;
he_cap = ieee80211_get_he_iftype_cap(sband, vif->type);
+ eht_cap = ieee80211_get_eht_iftype_cap(sband, vif->type);
}
if (band == NL80211_BAND_2GHZ) {
@@ -808,6 +813,9 @@ mt76_connac_get_phy_mode_v2(struct mt76_phy *mphy, struct ieee80211_vif *vif,
if (he_cap && he_cap->has_he)
mode |= PHY_TYPE_BIT_HE;
+
+ if (eht_cap && eht_cap->has_eht)
+ mode |= PHY_TYPE_BIT_BE;
} else if (band == NL80211_BAND_5GHZ || band == NL80211_BAND_6GHZ) {
mode |= PHY_TYPE_BIT_OFDM;
@@ -819,17 +827,23 @@ mt76_connac_get_phy_mode_v2(struct mt76_phy *mphy, struct ieee80211_vif *vif,
if (he_cap && he_cap->has_he)
mode |= PHY_TYPE_BIT_HE;
+
+ if (eht_cap && eht_cap->has_eht)
+ mode |= PHY_TYPE_BIT_BE;
}
return mode;
}
+EXPORT_SYMBOL_GPL(mt76_connac_get_phy_mode_v2);
void mt76_connac_mcu_sta_tlv(struct mt76_phy *mphy, struct sk_buff *skb,
struct ieee80211_sta *sta,
struct ieee80211_vif *vif,
u8 rcpi, u8 sta_state)
{
- struct cfg80211_chan_def *chandef = &mphy->chandef;
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct cfg80211_chan_def *chandef = mvif->ctx ?
+ &mvif->ctx->def : &mphy->chandef;
enum nl80211_band band = chandef->chan->band;
struct mt76_dev *dev = mphy->dev;
struct sta_rec_ra_info *ra_info;
@@ -1369,7 +1383,10 @@ EXPORT_SYMBOL_GPL(mt76_connac_get_phy_mode_ext);
const struct ieee80211_sta_he_cap *
mt76_connac_get_he_phy_cap(struct mt76_phy *phy, struct ieee80211_vif *vif)
{
- enum nl80211_band band = phy->chandef.chan->band;
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct cfg80211_chan_def *chandef = mvif->ctx ?
+ &mvif->ctx->def : &phy->chandef;
+ enum nl80211_band band = chandef->chan->band;
struct ieee80211_supported_band *sband;
sband = phy->hw->wiphy->bands[band];
@@ -1924,126 +1941,6 @@ void mt76_connac_mcu_coredump_event(struct mt76_dev *dev, struct sk_buff *skb,
}
EXPORT_SYMBOL_GPL(mt76_connac_mcu_coredump_event);
-static void mt76_connac_mcu_parse_tx_resource(struct mt76_dev *dev,
- struct sk_buff *skb)
-{
- struct mt76_sdio *sdio = &dev->sdio;
- struct mt76_connac_tx_resource {
- __le32 version;
- __le32 pse_data_quota;
- __le32 pse_mcu_quota;
- __le32 ple_data_quota;
- __le32 ple_mcu_quota;
- __le16 pse_page_size;
- __le16 ple_page_size;
- u8 pp_padding;
- u8 pad[3];
- } __packed * tx_res;
-
- tx_res = (struct mt76_connac_tx_resource *)skb->data;
- sdio->sched.pse_data_quota = le32_to_cpu(tx_res->pse_data_quota);
- sdio->sched.pse_mcu_quota = le32_to_cpu(tx_res->pse_mcu_quota);
- sdio->sched.ple_data_quota = le32_to_cpu(tx_res->ple_data_quota);
- sdio->sched.pse_page_size = le16_to_cpu(tx_res->pse_page_size);
- sdio->sched.deficit = tx_res->pp_padding;
-}
-
-static void mt76_connac_mcu_parse_phy_cap(struct mt76_dev *dev,
- struct sk_buff *skb)
-{
- struct mt76_connac_phy_cap {
- u8 ht;
- u8 vht;
- u8 _5g;
- u8 max_bw;
- u8 nss;
- u8 dbdc;
- u8 tx_ldpc;
- u8 rx_ldpc;
- u8 tx_stbc;
- u8 rx_stbc;
- u8 hw_path;
- u8 he;
- } __packed * cap;
-
- enum {
- WF0_24G,
- WF0_5G
- };
-
- cap = (struct mt76_connac_phy_cap *)skb->data;
-
- dev->phy.antenna_mask = BIT(cap->nss) - 1;
- dev->phy.chainmask = dev->phy.antenna_mask;
- dev->phy.cap.has_2ghz = cap->hw_path & BIT(WF0_24G);
- dev->phy.cap.has_5ghz = cap->hw_path & BIT(WF0_5G);
-}
-
-int mt76_connac_mcu_get_nic_capability(struct mt76_phy *phy)
-{
- struct mt76_connac_cap_hdr {
- __le16 n_element;
- u8 rsv[2];
- } __packed * hdr;
- struct sk_buff *skb;
- int ret, i;
-
- ret = mt76_mcu_send_and_get_msg(phy->dev, MCU_CE_CMD(GET_NIC_CAPAB),
- NULL, 0, true, &skb);
- if (ret)
- return ret;
-
- hdr = (struct mt76_connac_cap_hdr *)skb->data;
- if (skb->len < sizeof(*hdr)) {
- ret = -EINVAL;
- goto out;
- }
-
- skb_pull(skb, sizeof(*hdr));
-
- for (i = 0; i < le16_to_cpu(hdr->n_element); i++) {
- struct tlv_hdr {
- __le32 type;
- __le32 len;
- } __packed * tlv = (struct tlv_hdr *)skb->data;
- int len;
-
- if (skb->len < sizeof(*tlv))
- break;
-
- skb_pull(skb, sizeof(*tlv));
-
- len = le32_to_cpu(tlv->len);
- if (skb->len < len)
- break;
-
- switch (le32_to_cpu(tlv->type)) {
- case MT_NIC_CAP_6G:
- phy->cap.has_6ghz = skb->data[0];
- break;
- case MT_NIC_CAP_MAC_ADDR:
- memcpy(phy->macaddr, (void *)skb->data, ETH_ALEN);
- break;
- case MT_NIC_CAP_PHY:
- mt76_connac_mcu_parse_phy_cap(phy->dev, skb);
- break;
- case MT_NIC_CAP_TX_RESOURCE:
- if (mt76_is_sdio(phy->dev))
- mt76_connac_mcu_parse_tx_resource(phy->dev,
- skb);
- break;
- default:
- break;
- }
- skb_pull(skb, len);
- }
-out:
- dev_kfree_skb(skb);
-
- return ret;
-}
-EXPORT_SYMBOL_GPL(mt76_connac_mcu_get_nic_capability);
-
static void
mt76_connac_mcu_build_sku(struct mt76_dev *dev, s8 *sku,
struct mt76_power_limits *limits,
@@ -2087,9 +1984,9 @@ mt76_connac_mcu_build_sku(struct mt76_dev *dev, s8 *sku,
}
}
-static s8 mt76_connac_get_ch_power(struct mt76_phy *phy,
- struct ieee80211_channel *chan,
- s8 target_power)
+s8 mt76_connac_get_ch_power(struct mt76_phy *phy,
+ struct ieee80211_channel *chan,
+ s8 target_power)
{
struct mt76_dev *dev = phy->dev;
struct ieee80211_supported_band *sband;
@@ -2126,6 +2023,7 @@ static s8 mt76_connac_get_ch_power(struct mt76_phy *phy,
return target_power;
}
+EXPORT_SYMBOL_GPL(mt76_connac_get_ch_power);
static int
mt76_connac_mcu_rate_txpower_band(struct mt76_phy *phy,
@@ -2144,7 +2042,7 @@ mt76_connac_mcu_rate_txpower_band(struct mt76_phy *phy,
112, 114, 116, 118, 120, 122, 124,
126, 128, 132, 134, 136, 138, 140,
142, 144, 149, 151, 153, 155, 157,
- 159, 161, 165
+ 159, 161, 165, 169, 173, 177
};
static const u8 chan_list_6ghz[] = {
1, 3, 5, 7, 9, 11, 13,
@@ -2164,11 +2062,15 @@ mt76_connac_mcu_rate_txpower_band(struct mt76_phy *phy,
209, 211, 213, 215, 217, 219, 221,
225, 227, 229, 233
};
- int i, n_chan, batch_size, idx = 0, tx_power, last_ch;
+ int i, n_chan, batch_size, idx = 0, tx_power, last_ch, err = 0;
struct mt76_connac_sku_tlv sku_tlbv;
- struct mt76_power_limits limits;
+ struct mt76_power_limits *limits;
const u8 *ch_list;
+ limits = devm_kmalloc(dev->dev, sizeof(*limits), GFP_KERNEL);
+ if (!limits)
+ return -ENOMEM;
+
sku_len = is_mt7921(dev) ? sizeof(sku_tlbv) : sizeof(sku_tlbv) - 92;
tx_power = 2 * phy->hw->conf.power_level;
if (!tx_power)
@@ -2195,14 +2097,16 @@ mt76_connac_mcu_rate_txpower_band(struct mt76_phy *phy,
for (i = 0; i < batch_size; i++) {
struct mt76_connac_tx_power_limit_tlv tx_power_tlv = {};
- int j, err, msg_len, num_ch;
+ int j, msg_len, num_ch;
struct sk_buff *skb;
num_ch = i == batch_size - 1 ? n_chan % batch_len : batch_len;
msg_len = sizeof(tx_power_tlv) + num_ch * sizeof(sku_tlbv);
skb = mt76_mcu_msg_alloc(dev, NULL, msg_len);
- if (!skb)
- return -ENOMEM;
+ if (!skb) {
+ err = -ENOMEM;
+ goto out;
+ }
skb_reserve(skb, sizeof(tx_power_tlv));
@@ -2233,14 +2137,14 @@ mt76_connac_mcu_rate_txpower_band(struct mt76_phy *phy,
tx_power);
sar_power = mt76_get_sar_power(phy, &chan, reg_power);
- mt76_get_rate_power_limits(phy, &chan, &limits,
+ mt76_get_rate_power_limits(phy, &chan, limits,
sar_power);
tx_power_tlv.last_msg = ch_list[idx] == last_ch;
sku_tlbv.channel = ch_list[idx];
mt76_connac_mcu_build_sku(dev, sku_tlbv.pwr_limit,
- &limits, band);
+ limits, band);
skb_put_data(skb, &sku_tlbv, sku_len);
}
__skb_push(skb, sizeof(tx_power_tlv));
@@ -2250,10 +2154,12 @@ mt76_connac_mcu_rate_txpower_band(struct mt76_phy *phy,
MCU_CE_CMD(SET_RATE_TX_POWER),
false);
if (err < 0)
- return err;
+ goto out;
}
- return 0;
+out:
+ devm_kfree(dev->dev, limits);
+ return err;
}
int mt76_connac_mcu_set_rate_txpower(struct mt76_phy *phy)
@@ -2457,7 +2363,7 @@ mt76_connac_mcu_set_arp_filter(struct mt76_dev *dev, struct ieee80211_vif *vif,
sizeof(req), true);
}
-static int
+int
mt76_connac_mcu_set_gtk_rekey(struct mt76_dev *dev, struct ieee80211_vif *vif,
bool suspend)
{
@@ -2482,8 +2388,9 @@ mt76_connac_mcu_set_gtk_rekey(struct mt76_dev *dev, struct ieee80211_vif *vif,
return mt76_mcu_send_msg(dev, MCU_UNI_CMD(OFFLOAD), &req,
sizeof(req), true);
}
+EXPORT_SYMBOL_GPL(mt76_connac_mcu_set_gtk_rekey);
-static int
+int
mt76_connac_mcu_set_suspend_mode(struct mt76_dev *dev,
struct ieee80211_vif *vif,
bool enable, u8 mdtim,
@@ -2512,6 +2419,7 @@ mt76_connac_mcu_set_suspend_mode(struct mt76_dev *dev,
return mt76_mcu_send_msg(dev, MCU_UNI_CMD(SUSPEND), &req,
sizeof(req), true);
}
+EXPORT_SYMBOL_GPL(mt76_connac_mcu_set_suspend_mode);
static int
mt76_connac_mcu_set_wow_pattern(struct mt76_dev *dev,
@@ -2547,7 +2455,7 @@ mt76_connac_mcu_set_wow_pattern(struct mt76_dev *dev,
return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD(SUSPEND), true);
}
-static int
+int
mt76_connac_mcu_set_wow_ctrl(struct mt76_phy *phy, struct ieee80211_vif *vif,
bool suspend, struct cfg80211_wowlan *wowlan)
{
@@ -2599,6 +2507,7 @@ mt76_connac_mcu_set_wow_ctrl(struct mt76_phy *phy, struct ieee80211_vif *vif,
return mt76_mcu_send_msg(dev, MCU_UNI_CMD(SUSPEND), &req,
sizeof(req), true);
}
+EXPORT_SYMBOL_GPL(mt76_connac_mcu_set_wow_ctrl);
int mt76_connac_mcu_set_hif_suspend(struct mt76_dev *dev, bool suspend)
{
@@ -3064,7 +2973,7 @@ static u32 mt76_connac2_get_data_mode(struct mt76_dev *dev, u32 info)
{
u32 mode = DL_MODE_NEED_RSP;
- if (!is_mt7921(dev) || info == PATCH_SEC_NOT_SUPPORT)
+ if ((!is_mt7921(dev) && !is_mt7925(dev)) || info == PATCH_SEC_NOT_SUPPORT)
return mode;
switch (FIELD_GET(PATCH_SEC_ENC_TYPE_MASK, info)) {
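One motivation worth spelling out for the mt76_connac_mcu_rate_txpower_band() change above: struct mt76_power_limits grows considerably with the eht[16][16] table added in the mt76_power_limits hunk earlier in this diff, so the per-band limits buffer moves from the stack to a devm_kmalloc() allocation that is freed on both the success and error paths via the new out label. Rough size of the fields visible in that hunk alone (1-byte s8 entries, ignoring members not shown and padding):

/*
 *   ofdm[8]     =   8 bytes
 *   mcs[4][10]  =  40 bytes
 *   ru[7][12]   =  84 bytes
 *   eht[16][16] = 256 bytes
 *   ------------------------
 *   total       = 388 bytes  (plus cck/other members not in the hunk)
 */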
diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
index 4543e5bf0482..0563b1b22f48 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
@@ -191,6 +191,7 @@ struct mt76_connac2_fw_region {
struct tlv {
__le16 tag;
__le16 len;
+ u8 data[];
} __packed;
struct bss_info_omac {
@@ -795,6 +796,7 @@ enum {
STA_REC_PHY = 0x15,
STA_REC_HE_6G = 0x17,
STA_REC_HE_V2 = 0x19,
+ STA_REC_MLD = 0x20,
STA_REC_EHT = 0x22,
STA_REC_HDRT = 0x28,
STA_REC_HDR_TRANS = 0x2B,
@@ -919,6 +921,7 @@ enum {
PHY_TYPE_HT_INDEX,
PHY_TYPE_VHT_INDEX,
PHY_TYPE_HE_INDEX,
+ PHY_TYPE_BE_INDEX,
PHY_TYPE_INDEX_NUM
};
@@ -928,6 +931,7 @@ enum {
#define PHY_TYPE_BIT_HT BIT(PHY_TYPE_HT_INDEX)
#define PHY_TYPE_BIT_VHT BIT(PHY_TYPE_VHT_INDEX)
#define PHY_TYPE_BIT_HE BIT(PHY_TYPE_HE_INDEX)
+#define PHY_TYPE_BIT_BE BIT(PHY_TYPE_BE_INDEX)
#define MT_WTBL_RATE_TX_MODE GENMASK(9, 6)
#define MT_WTBL_RATE_MCS GENMASK(5, 0)
@@ -1009,8 +1013,17 @@ enum {
enum {
MCU_UNI_EVENT_RESULT = 0x01,
MCU_UNI_EVENT_FW_LOG_2_HOST = 0x04,
+ MCU_UNI_EVENT_ACCESS_REG = 0x6,
MCU_UNI_EVENT_IE_COUNTDOWN = 0x09,
+ MCU_UNI_EVENT_COREDUMP = 0x0a,
+ MCU_UNI_EVENT_BSS_BEACON_LOSS = 0x0c,
+ MCU_UNI_EVENT_SCAN_DONE = 0x0e,
MCU_UNI_EVENT_RDD_REPORT = 0x11,
+ MCU_UNI_EVENT_ROC = 0x27,
+ MCU_UNI_EVENT_TX_DONE = 0x2d,
+ MCU_UNI_EVENT_NIC_CAPAB = 0x43,
+ MCU_UNI_EVENT_PER_STA_INFO = 0x6d,
+ MCU_UNI_EVENT_ALL_STA_INFO = 0x6e,
};
#define MCU_UNI_CMD_EVENT BIT(1)
@@ -1209,12 +1222,17 @@ enum {
MCU_UNI_CMD_RX_HDR_TRANS = 0x12,
MCU_UNI_CMD_SER = 0x13,
MCU_UNI_CMD_TWT = 0x14,
+ MCU_UNI_CMD_SET_DOMAIN_INFO = 0x15,
+ MCU_UNI_CMD_SCAN_REQ = 0x16,
MCU_UNI_CMD_RDD_CTRL = 0x19,
MCU_UNI_CMD_GET_MIB_INFO = 0x22,
+ MCU_UNI_CMD_GET_STAT_INFO = 0x23,
MCU_UNI_CMD_SNIFFER = 0x24,
MCU_UNI_CMD_SR = 0x25,
MCU_UNI_CMD_ROC = 0x27,
+ MCU_UNI_CMD_SET_DBDC_PARMS = 0x28,
MCU_UNI_CMD_TXPOWER = 0x2b,
+ MCU_UNI_CMD_SET_POWER_LIMIT = 0x2c,
MCU_UNI_CMD_EFUSE_CTRL = 0x2d,
MCU_UNI_CMD_RA = 0x2f,
MCU_UNI_CMD_MURU = 0x31,
@@ -1224,6 +1242,8 @@ enum {
MCU_UNI_CMD_VOW = 0x37,
MCU_UNI_CMD_RRO = 0x57,
MCU_UNI_CMD_OFFCH_SCAN_CTRL = 0x58,
+ MCU_UNI_CMD_PER_STA_INFO = 0x6d,
+ MCU_UNI_CMD_ALL_STA_INFO = 0x6e,
MCU_UNI_CMD_ASSERT_DUMP = 0x6f,
};
@@ -1279,6 +1299,7 @@ enum {
UNI_BSS_INFO_RLM = 2,
UNI_BSS_INFO_BSS_COLOR = 4,
UNI_BSS_INFO_HE_BASIC = 5,
+ UNI_BSS_INFO_11V_MBSSID = 6,
UNI_BSS_INFO_BCN_CONTENT = 7,
UNI_BSS_INFO_BCN_CSA = 8,
UNI_BSS_INFO_BCN_BCC = 9,
@@ -1293,6 +1314,7 @@ enum {
UNI_BSS_INFO_IFS_TIME = 23,
UNI_BSS_INFO_OFFLOAD = 25,
UNI_BSS_INFO_MLD = 26,
+ UNI_BSS_INFO_PM_DISABLE = 27,
};
enum {
@@ -1302,6 +1324,17 @@ enum {
UNI_OFFLOAD_OFFLOAD_BMC_RPY_DETECT,
};
+enum UNI_ALL_STA_INFO_TAG {
+ UNI_ALL_STA_TX_RATE,
+ UNI_ALL_STA_TX_STAT,
+ UNI_ALL_STA_TXRX_ADM_STAT,
+ UNI_ALL_STA_TXRX_AIR_TIME,
+ UNI_ALL_STA_DATA_TX_RETRY_COUNT,
+ UNI_ALL_STA_GI_MODE,
+ UNI_ALL_STA_TXRX_MSDU_COUNT,
+ UNI_ALL_STA_MAX_NUM
+};
+
enum {
MT_NIC_CAP_TX_RESOURCE,
MT_NIC_CAP_TX_EFUSE_ADDR,
@@ -1322,6 +1355,7 @@ enum {
MT_NIC_CAP_ANTSWP = 0x16,
MT_NIC_CAP_WFDMA_REALLOC,
MT_NIC_CAP_6G,
+ MT_NIC_CAP_CHIP_CAP = 0x20,
};
#define UNI_WOW_DETECT_TYPE_MAGIC BIT(0)
@@ -1549,6 +1583,15 @@ struct bss_info_uni_he {
u8 rsv[2];
} __packed;
+struct bss_info_uni_mbssid {
+ __le16 tag;
+ __le16 len;
+ u8 max_indicator;
+ u8 mbss_idx;
+ u8 tx_bss_omac_idx;
+ u8 rsv;
+} __packed;
+
struct mt76_connac_gtk_rekey_tlv {
__le16 tag;
__le16 len;
@@ -1739,7 +1782,7 @@ mt76_connac_mcu_gen_dl_mode(struct mt76_dev *dev, u8 feature_set, bool is_wa)
ret |= feature_set & FW_FEATURE_SET_ENCRYPT ?
DL_MODE_ENCRYPT | DL_MODE_RESET_SEC_IV : 0;
- if (is_mt7921(dev))
+ if (is_mt7921(dev) || is_mt7925(dev))
ret |= feature_set & FW_FEATURE_ENCRY_MODE ?
DL_CONFIG_ENCRY_MODE_SEL : 0;
ret |= FIELD_PREP(DL_MODE_KEY_IDX,
@@ -1807,6 +1850,9 @@ void mt76_connac_mcu_wtbl_hdr_trans_tlv(struct sk_buff *skb,
int mt76_connac_mcu_sta_update_hdr_trans(struct mt76_dev *dev,
struct ieee80211_vif *vif,
struct mt76_wcid *wcid, int cmd);
+void mt76_connac_mcu_sta_he_tlv_v2(struct sk_buff *skb, struct ieee80211_sta *sta);
+u8 mt76_connac_get_phy_mode_v2(struct mt76_phy *mphy, struct ieee80211_vif *vif,
+ enum nl80211_band band, struct ieee80211_sta *sta);
int mt76_connac_mcu_wtbl_update_hdr_trans(struct mt76_dev *dev,
struct ieee80211_vif *vif,
struct ieee80211_sta *sta);
@@ -1851,7 +1897,6 @@ int mt76_connac_mcu_init_download(struct mt76_dev *dev, u32 addr, u32 len,
int mt76_connac_mcu_start_patch(struct mt76_dev *dev);
int mt76_connac_mcu_patch_sem_ctrl(struct mt76_dev *dev, bool get);
int mt76_connac_mcu_start_firmware(struct mt76_dev *dev, u32 addr, u32 option);
-int mt76_connac_mcu_get_nic_capability(struct mt76_phy *phy);
int mt76_connac_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
struct ieee80211_scan_request *scan_req);
@@ -1866,9 +1911,17 @@ int mt76_connac_mcu_sched_scan_enable(struct mt76_phy *phy,
int mt76_connac_mcu_update_arp_filter(struct mt76_dev *dev,
struct mt76_vif *vif,
struct ieee80211_bss_conf *info);
+int mt76_connac_mcu_set_gtk_rekey(struct mt76_dev *dev, struct ieee80211_vif *vif,
+ bool suspend);
+int mt76_connac_mcu_set_wow_ctrl(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ bool suspend, struct cfg80211_wowlan *wowlan);
int mt76_connac_mcu_update_gtk_rekey(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct cfg80211_gtk_rekey_data *key);
+int mt76_connac_mcu_set_suspend_mode(struct mt76_dev *dev,
+ struct ieee80211_vif *vif,
+ bool enable, u8 mdtim,
+ bool wow_suspend);
int mt76_connac_mcu_set_hif_suspend(struct mt76_dev *dev, bool suspend);
void mt76_connac_mcu_set_suspend_iter(void *priv, u8 *mac,
struct ieee80211_vif *vif);
@@ -1879,6 +1932,9 @@ int mt76_connac_mcu_chip_config(struct mt76_dev *dev);
int mt76_connac_mcu_set_deep_sleep(struct mt76_dev *dev, bool enable);
void mt76_connac_mcu_coredump_event(struct mt76_dev *dev, struct sk_buff *skb,
struct mt76_connac_coredump *coredump);
+s8 mt76_connac_get_ch_power(struct mt76_phy *phy,
+ struct ieee80211_channel *chan,
+ s8 target_power);
int mt76_connac_mcu_set_rate_txpower(struct mt76_phy *phy);
int mt76_connac_mcu_set_p2p_oppps(struct ieee80211_hw *hw,
struct ieee80211_vif *vif);
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_beacon.c b/drivers/net/wireless/mediatek/mt76/mt76x02_beacon.c
index ad4dc8e17b58..d570b99bccb9 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_beacon.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_beacon.c
@@ -136,7 +136,8 @@ EXPORT_SYMBOL_GPL(mt76x02_resync_beacon_timer);
void
mt76x02_update_beacon_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
{
- struct mt76x02_dev *dev = (struct mt76x02_dev *)priv;
+ struct beacon_bc_data *data = priv;
+ struct mt76x02_dev *dev = data->dev;
struct mt76x02_vif *mvif = (struct mt76x02_vif *)vif->drv_priv;
struct sk_buff *skb = NULL;
@@ -147,7 +148,7 @@ mt76x02_update_beacon_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
if (!skb)
return;
- mt76x02_mac_set_beacon(dev, skb);
+ __skb_queue_tail(&data->q, skb);
}
EXPORT_SYMBOL_GPL(mt76x02_update_beacon_iter);
@@ -182,9 +183,6 @@ mt76x02_enqueue_buffered_bc(struct mt76x02_dev *dev,
{
int i, nframes;
- data->dev = dev;
- __skb_queue_head_init(&data->q);
-
do {
nframes = skb_queue_len(&data->q);
ieee80211_iterate_active_interfaces_atomic(mt76_hw(dev),
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c b/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
index e9c5e85ec07c..9b5e3fb7b0df 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_mmio.c
@@ -16,13 +16,17 @@ static void mt76x02_pre_tbtt_tasklet(struct tasklet_struct *t)
struct mt76x02_dev *dev = from_tasklet(dev, t, mt76.pre_tbtt_tasklet);
struct mt76_dev *mdev = &dev->mt76;
struct mt76_queue *q = dev->mphy.q_tx[MT_TXQ_PSD];
- struct beacon_bc_data data = {};
+ struct beacon_bc_data data = {
+ .dev = dev,
+ };
struct sk_buff *skb;
int i;
if (mt76_hw(dev)->conf.flags & IEEE80211_CONF_OFFCHANNEL)
return;
+ __skb_queue_head_init(&data.q);
+
mt76x02_resync_beacon_timer(dev);
/* Prevent corrupt transmissions during update */
@@ -31,7 +35,10 @@ static void mt76x02_pre_tbtt_tasklet(struct tasklet_struct *t)
ieee80211_iterate_active_interfaces_atomic(mt76_hw(dev),
IEEE80211_IFACE_ITER_RESUME_ALL,
- mt76x02_update_beacon_iter, dev);
+ mt76x02_update_beacon_iter, &data);
+
+ while ((skb = __skb_dequeue(&data.q)) != NULL)
+ mt76x02_mac_set_beacon(dev, skb);
mt76_wr(dev, MT_BCN_BYPASS_MASK,
0xff00 | ~(0xff00 >> dev->beacon_data_count));
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c
index 2c6c03809b20..85a78dea4085 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_usb_core.c
@@ -182,7 +182,9 @@ static void mt76x02u_pre_tbtt_work(struct work_struct *work)
{
struct mt76x02_dev *dev =
container_of(work, struct mt76x02_dev, pre_tbtt_work);
- struct beacon_bc_data data = {};
+ struct beacon_bc_data data = {
+ .dev = dev,
+ };
struct sk_buff *skb;
int nbeacons;
@@ -192,15 +194,20 @@ static void mt76x02u_pre_tbtt_work(struct work_struct *work)
if (mt76_hw(dev)->conf.flags & IEEE80211_CONF_OFFCHANNEL)
return;
+ __skb_queue_head_init(&data.q);
+
mt76x02_resync_beacon_timer(dev);
/* Prevent corrupt transmissions during update */
mt76_set(dev, MT_BCN_BYPASS_MASK, 0xffff);
dev->beacon_data_count = 0;
- ieee80211_iterate_active_interfaces(mt76_hw(dev),
+ ieee80211_iterate_active_interfaces_atomic(mt76_hw(dev),
IEEE80211_IFACE_ITER_RESUME_ALL,
- mt76x02_update_beacon_iter, dev);
+ mt76x02_update_beacon_iter, &data);
+
+ while ((skb = __skb_dequeue(&data.q)) != NULL)
+ mt76x02_mac_set_beacon(dev, skb);
mt76_csa_check(&dev->mt76);
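Both the MMIO and USB pre-TBTT paths above now share the same shape: the interface iterator only collects beacon templates into a local queue, and the hardware beacon upload happens after the (now atomic in both cases) iteration. Condensed pattern, with names taken from the hunks above and the bypass-mask/timer handling omitted:

struct beacon_bc_data data = {
	.dev = dev,
};
struct sk_buff *skb;

__skb_queue_head_init(&data.q);

/* iterator side: mt76x02_update_beacon_iter() only queues the template */
ieee80211_iterate_active_interfaces_atomic(mt76_hw(dev),
					    IEEE80211_IFACE_ITER_RESUME_ALL,
					    mt76x02_update_beacon_iter, &data);

/* upload side: program the queued templates outside the iterator */
while ((skb = __skb_dequeue(&data.q)) != NULL)
	mt76x02_mac_set_beacon(dev, skb);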
diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
index dcbb5c605dfe..8a0e8124b894 100644
--- a/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
+++ b/drivers/net/wireless/mediatek/mt76/mt76x02_util.c
@@ -288,7 +288,7 @@ mt76x02_vif_init(struct mt76x02_dev *dev, struct ieee80211_vif *vif,
mvif->idx = idx;
mvif->group_wcid.idx = MT_VIF_WCID(idx);
mvif->group_wcid.hw_key_idx = -1;
- mt76_packet_id_init(&mvif->group_wcid);
+ mt76_wcid_init(&mvif->group_wcid);
mtxq = (struct mt76_txq *)vif->txq->drv_priv;
rcu_assign_pointer(dev->mt76.wcid[MT_VIF_WCID(idx)], &mvif->group_wcid);
@@ -346,7 +346,7 @@ void mt76x02_remove_interface(struct ieee80211_hw *hw,
dev->mt76.vif_mask &= ~BIT_ULL(mvif->idx);
rcu_assign_pointer(dev->mt76.wcid[mvif->group_wcid.idx], NULL);
- mt76_packet_id_flush(&dev->mt76, &mvif->group_wcid);
+ mt76_wcid_cleanup(&dev->mt76, &mvif->group_wcid);
}
EXPORT_SYMBOL_GPL(mt76x02_remove_interface);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
index 35fdf4f98d80..81478289f17e 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
@@ -213,10 +213,7 @@ static int mt7915_thermal_init(struct mt7915_phy *phy)
hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev, name, phy,
mt7915_hwmon_groups);
- if (IS_ERR(hwmon))
- return PTR_ERR(hwmon);
-
- return 0;
+ return PTR_ERR_OR_ZERO(hwmon);
}
static void mt7915_led_set_config(struct led_classdev *led_cdev,
@@ -347,6 +344,9 @@ mt7915_init_wiphy(struct mt7915_phy *phy)
hw->max_tx_aggregation_subframes = IEEE80211_MAX_AMPDU_BUF_HE;
hw->netdev_features = NETIF_F_RXCSUM;
+ if (mtk_wed_device_active(&mdev->mmio.wed))
+ hw->netdev_features |= NETIF_F_HW_TC;
+
hw->radiotap_timestamp.units_pos =
IEEE80211_RADIOTAP_TIMESTAMP_UNIT_US;
@@ -393,8 +393,12 @@ mt7915_init_wiphy(struct mt7915_phy *phy)
phy->mt76->sband_2g.sband.ht_cap.cap |=
IEEE80211_HT_CAP_LDPC_CODING |
IEEE80211_HT_CAP_MAX_AMSDU;
- phy->mt76->sband_2g.sband.ht_cap.ampdu_density =
- IEEE80211_HT_MPDU_DENSITY_4;
+ if (is_mt7915(&dev->mt76))
+ phy->mt76->sband_2g.sband.ht_cap.ampdu_density =
+ IEEE80211_HT_MPDU_DENSITY_4;
+ else
+ phy->mt76->sband_2g.sband.ht_cap.ampdu_density =
+ IEEE80211_HT_MPDU_DENSITY_2;
}
if (phy->mt76->cap.has_5ghz) {
@@ -404,10 +408,11 @@ mt7915_init_wiphy(struct mt7915_phy *phy)
phy->mt76->sband_5g.sband.ht_cap.cap |=
IEEE80211_HT_CAP_LDPC_CODING |
IEEE80211_HT_CAP_MAX_AMSDU;
- phy->mt76->sband_5g.sband.ht_cap.ampdu_density =
- IEEE80211_HT_MPDU_DENSITY_4;
if (is_mt7915(&dev->mt76)) {
+ phy->mt76->sband_5g.sband.ht_cap.ampdu_density =
+ IEEE80211_HT_MPDU_DENSITY_4;
+
vht_cap->cap |=
IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991 |
IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
@@ -417,6 +422,9 @@ mt7915_init_wiphy(struct mt7915_phy *phy)
IEEE80211_VHT_CAP_SHORT_GI_160 |
FIELD_PREP(IEEE80211_VHT_CAP_EXT_NSS_BW_MASK, 1);
} else {
+ phy->mt76->sband_5g.sband.ht_cap.ampdu_density =
+ IEEE80211_HT_MPDU_DENSITY_2;
+
vht_cap->cap |=
IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454 |
IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
@@ -1127,8 +1135,7 @@ void mt7915_set_stream_he_caps(struct mt7915_phy *phy)
n = mt7915_init_he_caps(phy, NL80211_BAND_2GHZ, data);
band = &phy->mt76->sband_2g.sband;
- band->iftype_data = data;
- band->n_iftype_data = n;
+ _ieee80211_set_sband_iftype_data(band, data, n);
}
if (phy->mt76->cap.has_5ghz) {
@@ -1136,8 +1143,7 @@ void mt7915_set_stream_he_caps(struct mt7915_phy *phy)
n = mt7915_init_he_caps(phy, NL80211_BAND_5GHZ, data);
band = &phy->mt76->sband_5g.sband;
- band->iftype_data = data;
- band->n_iftype_data = n;
+ _ieee80211_set_sband_iftype_data(band, data, n);
}
if (phy->mt76->cap.has_6ghz) {
@@ -1145,8 +1151,7 @@ void mt7915_set_stream_he_caps(struct mt7915_phy *phy)
n = mt7915_init_he_caps(phy, NL80211_BAND_6GHZ, data);
band = &phy->mt76->sband_6g.sband;
- band->iftype_data = data;
- band->n_iftype_data = n;
+ _ieee80211_set_sband_iftype_data(band, data, n);
}
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
index b8b0c0fda752..2222fb9aa103 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
@@ -809,7 +809,7 @@ int mt7915_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
txp->rept_wds_wcid = cpu_to_le16(wcid->idx);
else
txp->rept_wds_wcid = cpu_to_le16(0x3ff);
- tx_info->skb = DMA_DUMMY_DATA;
+ tx_info->skb = NULL;
/* pass partial skb header to fw */
tx_info->buf[1].len = MT_CT_PARSE_LEN;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
index 8ebbf186fab2..a3fd54cc1911 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
@@ -253,7 +253,7 @@ static int mt7915_add_interface(struct ieee80211_hw *hw,
mvif->sta.wcid.phy_idx = ext_phy;
mvif->sta.wcid.hw_key_idx = -1;
mvif->sta.wcid.tx_info |= MT_WCID_TX_INFO_SET;
- mt76_packet_id_init(&mvif->sta.wcid);
+ mt76_wcid_init(&mvif->sta.wcid);
mt7915_mac_wtbl_update(dev, idx,
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
@@ -314,7 +314,7 @@ static void mt7915_remove_interface(struct ieee80211_hw *hw,
list_del_init(&msta->wcid.poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
- mt76_packet_id_flush(&dev->mt76, &msta->wcid);
+ mt76_wcid_cleanup(&dev->mt76, &msta->wcid);
}
int mt7915_set_channel(struct mt7915_phy *phy)
@@ -483,16 +483,22 @@ static int mt7915_config(struct ieee80211_hw *hw, u32 changed)
if (changed & IEEE80211_CONF_CHANGE_MONITOR) {
bool enabled = !!(hw->conf.flags & IEEE80211_CONF_MONITOR);
bool band = phy->mt76->band_idx;
+ u32 rxfilter = phy->rxfilter;
- if (!enabled)
- phy->rxfilter |= MT_WF_RFCR_DROP_OTHER_UC;
- else
- phy->rxfilter &= ~MT_WF_RFCR_DROP_OTHER_UC;
+ if (!enabled) {
+ rxfilter |= MT_WF_RFCR_DROP_OTHER_UC;
+ dev->monitor_mask &= ~BIT(band);
+ } else {
+ rxfilter &= ~MT_WF_RFCR_DROP_OTHER_UC;
+ dev->monitor_mask |= BIT(band);
+ }
mt76_rmw_field(dev, MT_DMA_DCR0(band), MT_DMA_DCR0_RXD_G5_EN,
enabled);
+ mt76_rmw_field(dev, MT_DMA_DCR0(band), MT_MDP_DCR0_RX_HDR_TRANS_EN,
+ !dev->monitor_mask);
mt76_testmode_reset(phy->mt76, true);
- mt76_wr(dev, MT_WF_RFCR(band), phy->rxfilter);
+ mt76_wr(dev, MT_WF_RFCR(band), rxfilter);
}
mutex_unlock(&dev->mt76.mutex);
@@ -527,6 +533,7 @@ static void mt7915_configure_filter(struct ieee80211_hw *hw,
MT_WF_RFCR1_DROP_BA |
MT_WF_RFCR1_DROP_CFEND |
MT_WF_RFCR1_DROP_CFACK;
+ u32 rxfilter;
u32 flags = 0;
#define MT76_FILTER(_flag, _hw) do { \
@@ -561,7 +568,12 @@ static void mt7915_configure_filter(struct ieee80211_hw *hw,
MT_WF_RFCR_DROP_NDPA);
*total_flags = flags;
- mt76_wr(dev, MT_WF_RFCR(band), phy->rxfilter);
+ rxfilter = phy->rxfilter;
+ if (hw->conf.flags & IEEE80211_CONF_MONITOR)
+ rxfilter &= ~MT_WF_RFCR_DROP_OTHER_UC;
+ else
+ rxfilter |= MT_WF_RFCR_DROP_OTHER_UC;
+ mt76_wr(dev, MT_WF_RFCR(band), rxfilter);
if (*total_flags & FIF_CONTROL)
mt76_clear(dev, MT_WF_RFCR1(band), ctl_flags);
@@ -646,11 +658,13 @@ static void mt7915_bss_info_changed(struct ieee80211_hw *hw,
mt7915_update_bss_color(hw, vif, &info->he_bss_color);
if (changed & (BSS_CHANGED_BEACON |
- BSS_CHANGED_BEACON_ENABLED |
- BSS_CHANGED_UNSOL_BCAST_PROBE_RESP |
- BSS_CHANGED_FILS_DISCOVERY))
+ BSS_CHANGED_BEACON_ENABLED))
mt7915_mcu_add_beacon(hw, vif, info->enable_beacon, changed);
+ if (changed & (BSS_CHANGED_UNSOL_BCAST_PROBE_RESP |
+ BSS_CHANGED_FILS_DISCOVERY))
+ mt7915_mcu_add_inband_discov(dev, vif, changed);
+
if (set_bss_info == 0)
mt7915_mcu_add_bss_info(phy, vif, false);
if (set_sta == 0)
@@ -1386,7 +1400,7 @@ void mt7915_get_et_strings(struct ieee80211_hw *hw,
if (sset != ETH_SS_STATS)
return;
- memcpy(data, *mt7915_gstrings_stats, sizeof(mt7915_gstrings_stats));
+ memcpy(data, mt7915_gstrings_stats, sizeof(mt7915_gstrings_stats));
data += sizeof(mt7915_gstrings_stats);
page_pool_ethtool_stats_get_strings(data);
}
@@ -1639,6 +1653,20 @@ mt7915_net_fill_forward_path(struct ieee80211_hw *hw,
return 0;
}
+
+static int
+mt7915_net_setup_tc(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct net_device *netdev, enum tc_setup_type type,
+ void *type_data)
+{
+ struct mt7915_dev *dev = mt7915_hw_dev(hw);
+ struct mtk_wed_device *wed = &dev->mt76.mmio.wed;
+
+ if (!mtk_wed_device_active(wed))
+ return -EOPNOTSUPP;
+
+ return mtk_wed_device_setup_tc(wed, netdev, type, type_data);
+}
#endif
const struct ieee80211_ops mt7915_ops = {
@@ -1693,5 +1721,6 @@ const struct ieee80211_ops mt7915_ops = {
.set_radar_background = mt7915_set_radar_background,
#ifdef CONFIG_NET_MEDIATEK_SOC_WED
.net_fill_forward_path = mt7915_net_fill_forward_path,
+ .net_setup_tc = mt7915_net_setup_tc,
#endif
};
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
index 50ae7bf3af91..b22f06d4411a 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.c
@@ -225,8 +225,10 @@ int mt7915_mcu_wa_cmd(struct mt7915_dev *dev, int cmd, u32 a1, u32 a2, u32 a3)
static void
mt7915_mcu_csa_finish(void *priv, u8 *mac, struct ieee80211_vif *vif)
{
- if (vif->bss_conf.csa_active)
- ieee80211_csa_finish(vif);
+ if (!vif->bss_conf.csa_active || vif->type == NL80211_IFTYPE_STATION)
+ return;
+
+ ieee80211_csa_finish(vif);
}
static void
@@ -326,7 +328,7 @@ mt7915_mcu_rx_log_message(struct mt7915_dev *dev, struct sk_buff *skb)
static void
mt7915_mcu_cca_finish(void *priv, u8 *mac, struct ieee80211_vif *vif)
{
- if (!vif->bss_conf.color_change_active)
+ if (!vif->bss_conf.color_change_active || vif->type == NL80211_IFTYPE_STATION)
return;
ieee80211_color_change_finish(vif);
@@ -906,6 +908,8 @@ mt7915_mcu_sta_muru_tlv(struct mt7915_dev *dev, struct sk_buff *skb,
HE_MAC(CAP2_MU_CASCADING, elem->mac_cap_info[2]);
muru->ofdma_ul.uo_ra =
HE_MAC(CAP3_OFDMA_RA, elem->mac_cap_info[3]);
+ muru->ofdma_ul.rx_ctrl_frame_to_mbss =
+ HE_MAC(CAP3_RX_CTRL_FRAME_TO_MULTIBSS, elem->mac_cap_info[3]);
}
static void
@@ -1015,13 +1019,13 @@ mt7915_is_ebf_supported(struct mt7915_phy *phy, struct ieee80211_vif *vif,
struct ieee80211_sta *sta, bool bfee)
{
struct mt7915_vif *mvif = (struct mt7915_vif *)vif->drv_priv;
- int tx_ant = hweight8(phy->mt76->chainmask) - 1;
+ int sts = hweight16(phy->mt76->chainmask);
if (vif->type != NL80211_IFTYPE_STATION &&
vif->type != NL80211_IFTYPE_AP)
return false;
- if (!bfee && tx_ant < 2)
+ if (!bfee && sts < 2)
return false;
if (sta->deflink.he_cap.has_he) {
@@ -1882,10 +1886,9 @@ mt7915_mcu_beacon_cont(struct mt7915_dev *dev, struct ieee80211_vif *vif,
memcpy(buf + MT_TXD_SIZE, skb->data, skb->len);
}
-static void
-mt7915_mcu_beacon_inband_discov(struct mt7915_dev *dev, struct ieee80211_vif *vif,
- struct sk_buff *rskb, struct bss_info_bcn *bcn,
- u32 changed)
+int
+mt7915_mcu_add_inband_discov(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+ u32 changed)
{
#define OFFLOAD_TX_MODE_SU BIT(0)
#define OFFLOAD_TX_MODE_MU BIT(1)
@@ -1895,14 +1898,27 @@ mt7915_mcu_beacon_inband_discov(struct mt7915_dev *dev, struct ieee80211_vif *vi
struct cfg80211_chan_def *chandef = &mvif->phy->mt76->chandef;
enum nl80211_band band = chandef->chan->band;
struct mt76_wcid *wcid = &dev->mt76.global_wcid;
+ struct bss_info_bcn *bcn;
struct bss_info_inband_discovery *discov;
struct ieee80211_tx_info *info;
- struct sk_buff *skb = NULL;
- struct tlv *tlv;
+ struct sk_buff *rskb, *skb = NULL;
+ struct tlv *tlv, *sub_tlv;
bool ext_phy = phy != &dev->phy;
u8 *buf, interval;
int len;
+ if (vif->bss_conf.nontransmitted)
+ return 0;
+
+ rskb = __mt76_connac_mcu_alloc_sta_req(&dev->mt76, &mvif->mt76, NULL,
+ MT7915_MAX_BSS_OFFLOAD_SIZE);
+ if (IS_ERR(rskb))
+ return PTR_ERR(rskb);
+
+ tlv = mt76_connac_mcu_add_tlv(rskb, BSS_INFO_OFFLOAD, sizeof(*bcn));
+ bcn = (struct bss_info_bcn *)tlv;
+ bcn->enable = true;
+
if (changed & BSS_CHANGED_FILS_DISCOVERY &&
vif->bss_conf.fils_discovery.max_interval) {
interval = vif->bss_conf.fils_discovery.max_interval;
@@ -1913,27 +1929,29 @@ mt7915_mcu_beacon_inband_discov(struct mt7915_dev *dev, struct ieee80211_vif *vi
skb = ieee80211_get_unsol_bcast_probe_resp_tmpl(hw, vif);
}
- if (!skb)
- return;
+ if (!skb) {
+ dev_kfree_skb(rskb);
+ return -EINVAL;
+ }
info = IEEE80211_SKB_CB(skb);
info->control.vif = vif;
info->band = band;
-
info->hw_queue |= FIELD_PREP(MT_TX_HW_QUEUE_PHY, ext_phy);
len = sizeof(*discov) + MT_TXD_SIZE + skb->len;
len = (len & 0x3) ? ((len | 0x3) + 1) : len;
- if (len > (MT7915_MAX_BSS_OFFLOAD_SIZE - rskb->len)) {
+ if (skb->len > MT7915_MAX_BEACON_SIZE) {
dev_err(dev->mt76.dev, "inband discovery size limit exceed\n");
+ dev_kfree_skb(rskb);
dev_kfree_skb(skb);
- return;
+ return -EINVAL;
}
- tlv = mt7915_mcu_add_nested_subtlv(rskb, BSS_INFO_BCN_DISCOV,
- len, &bcn->sub_ntlv, &bcn->len);
- discov = (struct bss_info_inband_discovery *)tlv;
+ sub_tlv = mt7915_mcu_add_nested_subtlv(rskb, BSS_INFO_BCN_DISCOV,
+ len, &bcn->sub_ntlv, &bcn->len);
+ discov = (struct bss_info_inband_discovery *)sub_tlv;
discov->tx_mode = OFFLOAD_TX_MODE_SU;
/* 0: UNSOL PROBE RESP, 1: FILS DISCOV */
discov->tx_type = !!(changed & BSS_CHANGED_FILS_DISCOVERY);
@@ -1941,13 +1959,16 @@ mt7915_mcu_beacon_inband_discov(struct mt7915_dev *dev, struct ieee80211_vif *vi
discov->prob_rsp_len = cpu_to_le16(MT_TXD_SIZE + skb->len);
discov->enable = true;
- buf = (u8 *)tlv + sizeof(*discov);
+ buf = (u8 *)sub_tlv + sizeof(*discov);
mt7915_mac_write_txwi(&dev->mt76, (__le32 *)buf, skb, wcid, 0, NULL,
0, changed);
memcpy(buf + MT_TXD_SIZE, skb->data, skb->len);
dev_kfree_skb(skb);
+
+ return mt76_mcu_skb_send_msg(&phy->dev->mt76, rskb,
+ MCU_EXT_CMD(BSS_INFO_UPDATE), true);
}
int mt7915_mcu_add_beacon(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
@@ -1980,11 +2001,14 @@ int mt7915_mcu_add_beacon(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
goto out;
skb = ieee80211_beacon_get_template(hw, vif, &offs, 0);
- if (!skb)
+ if (!skb) {
+ dev_kfree_skb(rskb);
return -EINVAL;
+ }
- if (skb->len > MT7915_MAX_BEACON_SIZE - MT_TXD_SIZE) {
+ if (skb->len > MT7915_MAX_BEACON_SIZE) {
dev_err(dev->mt76.dev, "Bcn size limit exceed\n");
+ dev_kfree_skb(rskb);
dev_kfree_skb(skb);
return -EINVAL;
}
@@ -1997,11 +2021,6 @@ int mt7915_mcu_add_beacon(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
mt7915_mcu_beacon_cont(dev, vif, rskb, skb, bcn, &offs);
dev_kfree_skb(skb);
- if (changed & BSS_CHANGED_UNSOL_BCAST_PROBE_RESP ||
- changed & BSS_CHANGED_FILS_DISCOVERY)
- mt7915_mcu_beacon_inband_discov(dev, vif, rskb,
- bcn, changed);
-
out:
return mt76_mcu_skb_send_msg(&phy->dev->mt76, rskb,
MCU_EXT_CMD(BSS_INFO_UPDATE), true);
@@ -2725,10 +2744,10 @@ int mt7915_mcu_set_chan_info(struct mt7915_phy *phy, int cmd)
if (mt76_connac_spe_idx(phy->mt76->antenna_mask))
req.tx_path_num = fls(phy->mt76->antenna_mask);
- if (cmd == MCU_EXT_CMD(SET_RX_PATH) ||
- dev->mt76.hw->conf.flags & IEEE80211_CONF_MONITOR)
+ if (phy->mt76->hw->conf.flags & IEEE80211_CONF_MONITOR)
req.switch_reason = CH_SWITCH_NORMAL;
- else if (phy->mt76->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)
+ else if (phy->mt76->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL ||
+ phy->mt76->hw->conf.flags & IEEE80211_CONF_IDLE)
req.switch_reason = CH_SWITCH_SCAN_BYPASS_DPD;
else if (!cfg80211_reg_can_beacon(phy->mt76->hw->wiphy, chandef,
NL80211_IFTYPE_AP))
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
index b9ea297f382c..1592b5d6751a 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mcu.h
@@ -495,10 +495,14 @@ enum {
SER_RECOVER
};
-#define MT7915_MAX_BEACON_SIZE 512
-#define MT7915_MAX_INBAND_FRAME_SIZE 256
-#define MT7915_MAX_BSS_OFFLOAD_SIZE (MT7915_MAX_BEACON_SIZE + \
- MT7915_MAX_INBAND_FRAME_SIZE + \
+#define MT7915_MAX_BEACON_SIZE 1308
+#define MT7915_BEACON_UPDATE_SIZE (sizeof(struct sta_req_hdr) + \
+ sizeof(struct bss_info_bcn) + \
+ sizeof(struct bss_info_bcn_cntdwn) + \
+ sizeof(struct bss_info_bcn_mbss) + \
+ MT_TXD_SIZE + \
+ sizeof(struct bss_info_bcn_cont))
+#define MT7915_MAX_BSS_OFFLOAD_SIZE (MT7915_MAX_BEACON_SIZE + \
MT7915_BEACON_UPDATE_SIZE)
#define MT7915_BSS_UPDATE_MAX_SIZE (sizeof(struct sta_req_hdr) + \
@@ -511,12 +515,6 @@ enum {
sizeof(struct bss_info_bmc_rate) +\
sizeof(struct bss_info_ext_bss))
-#define MT7915_BEACON_UPDATE_SIZE (sizeof(struct sta_req_hdr) + \
- sizeof(struct bss_info_bcn_cntdwn) + \
- sizeof(struct bss_info_bcn_mbss) + \
- sizeof(struct bss_info_bcn_cont) + \
- sizeof(struct bss_info_inband_discovery))
-
static inline s8
mt7915_get_power_bound(struct mt7915_phy *phy, s8 txpower)
{
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
index fc7ace638ce8..e7d8e03f826f 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
@@ -591,7 +591,7 @@ static void mt7915_mmio_wed_release_rx_buf(struct mtk_wed_device *wed)
static u32 mt7915_mmio_wed_init_rx_buf(struct mtk_wed_device *wed, int size)
{
- struct mtk_rxbm_desc *desc = wed->rx_buf_ring.desc;
+ struct mtk_wed_bm_desc *desc = wed->rx_buf_ring.desc;
struct mt76_txwi_cache *t = NULL;
struct mt7915_dev *dev;
struct mt76_queue *q;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
index 0456e56f6348..d317c523b23f 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
@@ -295,6 +295,8 @@ struct mt7915_dev {
bool muru_debug;
bool ibf;
+ u8 monitor_mask;
+
struct dentry *debugfs_dir;
struct rchan *relay_fwlog;
@@ -447,6 +449,8 @@ int mt7915_mcu_add_rx_ba(struct mt7915_dev *dev,
bool add);
int mt7915_mcu_update_bss_color(struct mt7915_dev *dev, struct ieee80211_vif *vif,
struct cfg80211_he_bss_color *he_bss_color);
+int mt7915_mcu_add_inband_discov(struct mt7915_dev *dev, struct ieee80211_vif *vif,
+ u32 changed);
int mt7915_mcu_add_beacon(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
int enable, u32 changed);
int mt7915_mcu_add_obss_spr(struct mt7915_phy *phy, struct ieee80211_vif *vif,
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
index 588cd87e24e9..89ac8e6707b8 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/regs.h
@@ -172,6 +172,7 @@ enum offs_rev {
#define MT_MDP_DCR0 MT_MDP(0x000)
#define MT_MDP_DCR0_DAMSDU_EN BIT(15)
+#define MT_MDP_DCR0_RX_HDR_TRANS_EN BIT(19)
#define MT_MDP_DCR1 MT_MDP(0x004)
#define MT_MDP_DCR1_MAX_RX_LEN GENMASK(15, 3)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/soc.c b/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
index 37348b208736..06e3d9db996c 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7915/soc.c
@@ -1219,10 +1219,7 @@ static int mt798x_wmac_init(struct mt7915_dev *dev)
return PTR_ERR(dev->sku);
dev->rstc = devm_reset_control_get(pdev, "consys");
- if (IS_ERR(dev->rstc))
- return PTR_ERR(dev->rstc);
-
- return 0;
+ return PTR_ERR_OR_ZERO(dev->rstc);
}
static int mt798x_wmac_probe(struct platform_device *pdev)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/init.c b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
index ff63f37f67d9..7d6a9d746011 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/init.c
@@ -55,10 +55,59 @@ static int mt7921_thermal_init(struct mt792x_phy *phy)
hwmon = devm_hwmon_device_register_with_groups(&wiphy->dev, name, phy,
mt7921_hwmon_groups);
- if (IS_ERR(hwmon))
- return PTR_ERR(hwmon);
+ return PTR_ERR_OR_ZERO(hwmon);
+}
- return 0;
+static void
+mt7921_regd_channel_update(struct wiphy *wiphy, struct mt792x_dev *dev)
+{
+#define IS_UNII_INVALID(idx, sfreq, efreq) \
+ (!(dev->phy.clc_chan_conf & BIT(idx)) && (cfreq) >= (sfreq) && (cfreq) <= (efreq))
+ struct ieee80211_supported_band *sband;
+ struct mt76_dev *mdev = &dev->mt76;
+ struct device_node *np, *band_np;
+ struct ieee80211_channel *ch;
+ int i, cfreq;
+
+ np = mt76_find_power_limits_node(mdev);
+
+ sband = wiphy->bands[NL80211_BAND_5GHZ];
+ band_np = np ? of_get_child_by_name(np, "txpower-5g") : NULL;
+ for (i = 0; i < sband->n_channels; i++) {
+ ch = &sband->channels[i];
+ cfreq = ch->center_freq;
+
+ if (np && (!band_np || !mt76_find_channel_node(band_np, ch))) {
+ ch->flags |= IEEE80211_CHAN_DISABLED;
+ continue;
+ }
+
+ /* UNII-4 */
+ if (IS_UNII_INVALID(0, 5850, 5925))
+ ch->flags |= IEEE80211_CHAN_DISABLED;
+ }
+
+ sband = wiphy->bands[NL80211_BAND_6GHZ];
+ if (!sband)
+ return;
+
+ band_np = np ? of_get_child_by_name(np, "txpower-6g") : NULL;
+ for (i = 0; i < sband->n_channels; i++) {
+ ch = &sband->channels[i];
+ cfreq = ch->center_freq;
+
+ if (np && (!band_np || !mt76_find_channel_node(band_np, ch))) {
+ ch->flags |= IEEE80211_CHAN_DISABLED;
+ continue;
+ }
+
+ /* UNII-5/6/7/8 */
+ if (IS_UNII_INVALID(1, 5925, 6425) ||
+ IS_UNII_INVALID(2, 6425, 6525) ||
+ IS_UNII_INVALID(3, 6525, 6875) ||
+ IS_UNII_INVALID(4, 6875, 7125))
+ ch->flags |= IEEE80211_CHAN_DISABLED;
+ }
}
static void
@@ -77,6 +126,8 @@ mt7921_regd_notifier(struct wiphy *wiphy,
mt76_connac_mcu_set_channel_domain(hw->priv);
mt7921_set_tx_sar_pwr(hw, NULL);
mt792x_mutex_release(dev);
+
+ mt7921_regd_channel_update(wiphy, dev);
}
int mt7921_mac_init(struct mt792x_dev *dev)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
index 21f937454229..867e14f6b93a 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
@@ -794,7 +794,7 @@ int mt7921_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
mt7921_usb_sdio_write_txwi(dev, wcid, qid, sta, key, pktid, skb);
type = mt76_is_sdio(mdev) ? MT7921_SDIO_DATA : 0;
- mt7921_skb_add_usb_sdio_hdr(dev, skb, type);
+ mt792x_skb_add_usb_sdio_hdr(dev, skb, type);
pad = round_up(skb->len, 4) - skb->len;
if (mt76_is_usb(mdev))
pad += 4;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
index 0844d28b3223..510a575a973b 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
@@ -196,8 +196,7 @@ void mt7921_set_stream_he_caps(struct mt792x_phy *phy)
n = mt7921_init_he_caps(phy, NL80211_BAND_2GHZ, data);
band = &phy->mt76->sband_2g.sband;
- band->iftype_data = data;
- band->n_iftype_data = n;
+ _ieee80211_set_sband_iftype_data(band, data, n);
}
if (phy->mt76->cap.has_5ghz) {
@@ -205,16 +204,14 @@ void mt7921_set_stream_he_caps(struct mt792x_phy *phy)
n = mt7921_init_he_caps(phy, NL80211_BAND_5GHZ, data);
band = &phy->mt76->sband_5g.sband;
- band->iftype_data = data;
- band->n_iftype_data = n;
+ _ieee80211_set_sband_iftype_data(band, data, n);
if (phy->mt76->cap.has_6ghz) {
data = phy->iftype[NL80211_BAND_6GHZ];
n = mt7921_init_he_caps(phy, NL80211_BAND_6GHZ, data);
band = &phy->mt76->sband_6g.sband;
- band->iftype_data = data;
- band->n_iftype_data = n;
+ _ieee80211_set_sband_iftype_data(band, data, n);
}
}
}
@@ -262,25 +259,6 @@ static int mt7921_start(struct ieee80211_hw *hw)
return err;
}
-void mt7921_stop(struct ieee80211_hw *hw)
-{
- struct mt792x_dev *dev = mt792x_hw_dev(hw);
- struct mt792x_phy *phy = mt792x_hw_phy(hw);
-
- cancel_delayed_work_sync(&phy->mt76->mac_work);
-
- cancel_delayed_work_sync(&dev->pm.ps_work);
- cancel_work_sync(&dev->pm.wake_work);
- cancel_work_sync(&dev->reset_work);
- mt76_connac_free_pending_tx_skbs(&dev->pm, NULL);
-
- mt792x_mutex_acquire(dev);
- clear_bit(MT76_STATE_RUNNING, &phy->mt76->state);
- mt76_connac_mcu_set_mac_enable(&dev->mt76, 0, false, false);
- mt792x_mutex_release(dev);
-}
-EXPORT_SYMBOL_GPL(mt7921_stop);
-
static int
mt7921_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
{
@@ -318,7 +296,7 @@ mt7921_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
mvif->sta.wcid.phy_idx = mvif->mt76.band_idx;
mvif->sta.wcid.hw_key_idx = -1;
mvif->sta.wcid.tx_info |= MT_WCID_TX_INFO_SET;
- mt76_packet_id_init(&mvif->sta.wcid);
+ mt76_wcid_init(&mvif->sta.wcid);
mt7921_mac_wtbl_update(dev, idx,
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
@@ -704,6 +682,38 @@ static void mt7921_bss_info_changed(struct ieee80211_hw *hw,
mt792x_mutex_release(dev);
}
+static void
+mt7921_regd_set_6ghz_power_type(struct ieee80211_vif *vif)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_phy *phy = mvif->phy;
+ struct mt792x_dev *dev = phy->dev;
+
+ if (hweight64(dev->mt76.vif_mask) > 1) {
+ phy->power_type = MT_AP_DEFAULT;
+ goto out;
+ }
+
+ switch (vif->bss_conf.power_type) {
+ case IEEE80211_REG_SP_AP:
+ phy->power_type = MT_AP_SP;
+ break;
+ case IEEE80211_REG_VLP_AP:
+ phy->power_type = MT_AP_VLP;
+ break;
+ case IEEE80211_REG_LPI_AP:
+ phy->power_type = MT_AP_LPI;
+ break;
+ case IEEE80211_REG_UNSET_AP:
+ default:
+ phy->power_type = MT_AP_DEFAULT;
+ break;
+ }
+
+out:
+ mt7921_mcu_set_clc(dev, dev->mt76.alpha2, dev->country_ie_env);
+}
+
int mt7921_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
struct ieee80211_sta *sta)
{
@@ -739,6 +749,8 @@ int mt7921_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
if (ret)
return ret;
+ mt7921_regd_set_6ghz_power_type(vif);
+
mt76_connac_power_save_sched(&dev->mphy, &dev->pm);
return 0;
@@ -756,7 +768,7 @@ void mt7921_mac_sta_assoc(struct mt76_dev *mdev, struct ieee80211_vif *vif,
if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
mt76_connac_mcu_uni_add_bss(&dev->mphy, vif, &mvif->sta.wcid,
- true, mvif->ctx);
+ true, mvif->mt76.ctx);
ewma_avg_signal_init(&msta->avg_ack_signal);
@@ -791,7 +803,7 @@ void mt7921_mac_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif,
if (!sta->tdls)
mt76_connac_mcu_uni_add_bss(&dev->mphy, vif,
&mvif->sta.wcid, false,
- mvif->ctx);
+ mvif->mt76.ctx);
}
spin_lock_bh(&dev->mt76.sta_poll_lock);
@@ -1208,7 +1220,7 @@ mt7921_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
mt792x_mutex_acquire(dev);
err = mt76_connac_mcu_uni_add_bss(phy->mt76, vif, &mvif->sta.wcid,
- true, mvif->ctx);
+ true, mvif->mt76.ctx);
if (err)
goto out;
@@ -1240,7 +1252,7 @@ mt7921_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
goto out;
mt76_connac_mcu_uni_add_bss(phy->mt76, vif, &mvif->sta.wcid, false,
- mvif->ctx);
+ mvif->mt76.ctx);
out:
mt792x_mutex_release(dev);
@@ -1265,7 +1277,7 @@ static void mt7921_ctx_iter(void *priv, u8 *mac,
struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
struct ieee80211_chanctx_conf *ctx = priv;
- if (ctx != mvif->ctx)
+ if (ctx != mvif->mt76.ctx)
return;
if (vif->type == NL80211_IFTYPE_MONITOR)
@@ -1298,7 +1310,7 @@ static void mt7921_mgd_prepare_tx(struct ieee80211_hw *hw,
jiffies_to_msecs(HZ);
mt792x_mutex_acquire(dev);
- mt7921_set_roc(mvif->phy, mvif, mvif->ctx->def.chan, duration,
+ mt7921_set_roc(mvif->phy, mvif, mvif->mt76.ctx->def.chan, duration,
MT7921_ROC_REQ_JOIN);
mt792x_mutex_release(dev);
}
@@ -1315,7 +1327,7 @@ static void mt7921_mgd_complete_tx(struct ieee80211_hw *hw,
const struct ieee80211_ops mt7921_ops = {
.tx = mt792x_tx,
.start = mt7921_start,
- .stop = mt7921_stop,
+ .stop = mt792x_stop,
.add_interface = mt7921_add_interface,
.remove_interface = mt792x_remove_interface,
.config = mt7921_config,
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
index 90c93970acab..63f3d4a5c9aa 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.c
@@ -448,6 +448,129 @@ out:
return ret;
}
+static void mt7921_mcu_parse_tx_resource(struct mt76_dev *dev,
+ struct sk_buff *skb)
+{
+ struct mt76_sdio *sdio = &dev->sdio;
+ struct mt7921_tx_resource {
+ __le32 version;
+ __le32 pse_data_quota;
+ __le32 pse_mcu_quota;
+ __le32 ple_data_quota;
+ __le32 ple_mcu_quota;
+ __le16 pse_page_size;
+ __le16 ple_page_size;
+ u8 pp_padding;
+ u8 pad[3];
+ } __packed * tx_res;
+
+ tx_res = (struct mt7921_tx_resource *)skb->data;
+ sdio->sched.pse_data_quota = le32_to_cpu(tx_res->pse_data_quota);
+ sdio->sched.pse_mcu_quota = le32_to_cpu(tx_res->pse_mcu_quota);
+ sdio->sched.ple_data_quota = le32_to_cpu(tx_res->ple_data_quota);
+ sdio->sched.pse_page_size = le16_to_cpu(tx_res->pse_page_size);
+ sdio->sched.deficit = tx_res->pp_padding;
+}
+
+static void mt7921_mcu_parse_phy_cap(struct mt76_dev *dev,
+ struct sk_buff *skb)
+{
+ struct mt7921_phy_cap {
+ u8 ht;
+ u8 vht;
+ u8 _5g;
+ u8 max_bw;
+ u8 nss;
+ u8 dbdc;
+ u8 tx_ldpc;
+ u8 rx_ldpc;
+ u8 tx_stbc;
+ u8 rx_stbc;
+ u8 hw_path;
+ u8 he;
+ } __packed * cap;
+
+ enum {
+ WF0_24G,
+ WF0_5G
+ };
+
+ cap = (struct mt7921_phy_cap *)skb->data;
+
+ dev->phy.antenna_mask = BIT(cap->nss) - 1;
+ dev->phy.chainmask = dev->phy.antenna_mask;
+ dev->phy.cap.has_2ghz = cap->hw_path & BIT(WF0_24G);
+ dev->phy.cap.has_5ghz = cap->hw_path & BIT(WF0_5G);
+}
+
+static int mt7921_mcu_get_nic_capability(struct mt792x_phy *mphy)
+{
+ struct mt76_connac_cap_hdr {
+ __le16 n_element;
+ u8 rsv[2];
+ } __packed * hdr;
+ struct sk_buff *skb;
+ struct mt76_phy *phy = mphy->mt76;
+ int ret, i;
+
+ ret = mt76_mcu_send_and_get_msg(phy->dev, MCU_CE_CMD(GET_NIC_CAPAB),
+ NULL, 0, true, &skb);
+ if (ret)
+ return ret;
+
+ hdr = (struct mt76_connac_cap_hdr *)skb->data;
+ if (skb->len < sizeof(*hdr)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ skb_pull(skb, sizeof(*hdr));
+
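+ /* the reply carries n_element (type, len, payload) TLVs */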
+ for (i = 0; i < le16_to_cpu(hdr->n_element); i++) {
+ struct tlv_hdr {
+ __le32 type;
+ __le32 len;
+ } __packed * tlv = (struct tlv_hdr *)skb->data;
+ int len;
+
+ if (skb->len < sizeof(*tlv))
+ break;
+
+ skb_pull(skb, sizeof(*tlv));
+
+ len = le32_to_cpu(tlv->len);
+ if (skb->len < len)
+ break;
+
+ switch (le32_to_cpu(tlv->type)) {
+ case MT_NIC_CAP_6G:
+ phy->cap.has_6ghz = skb->data[0];
+ break;
+ case MT_NIC_CAP_MAC_ADDR:
+ memcpy(phy->macaddr, (void *)skb->data, ETH_ALEN);
+ break;
+ case MT_NIC_CAP_PHY:
+ mt7921_mcu_parse_phy_cap(phy->dev, skb);
+ break;
+ case MT_NIC_CAP_TX_RESOURCE:
+ if (mt76_is_sdio(phy->dev))
+ mt7921_mcu_parse_tx_resource(phy->dev,
+ skb);
+ break;
+ case MT_NIC_CAP_CHIP_CAP:
+ memcpy(&mphy->chip_cap, (void *)skb->data, sizeof(u64));
+ break;
+ default:
+ break;
+ }
+ skb_pull(skb, len);
+ }
+out:
+ dev_kfree_skb(skb);
+
+ return ret;
+}
+
int mt7921_mcu_fw_log_2_host(struct mt792x_dev *dev, u8 ctrl)
{
struct {
@@ -469,7 +592,7 @@ int mt7921_run_firmware(struct mt792x_dev *dev)
if (err)
return err;
- err = mt76_connac_mcu_get_nic_capability(&dev->mphy);
+ err = mt7921_mcu_get_nic_capability(&dev->phy);
if (err)
return err;
@@ -1123,7 +1246,9 @@ int __mt7921_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
struct mt7921_clc *clc,
u8 idx)
{
- struct sk_buff *skb;
+#define CLC_CAP_EVT_EN BIT(0)
+#define CLC_CAP_DTS_EN BIT(1)
+ struct sk_buff *skb, *ret_skb = NULL;
struct {
u8 ver;
u8 pad0;
@@ -1131,13 +1256,15 @@ int __mt7921_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
u8 idx;
u8 env;
u8 acpi_conf;
- u8 pad1;
+ u8 cap;
u8 alpha2[2];
u8 type[2];
- u8 rsvd[64];
+ u8 env_6g;
+ u8 rsvd[63];
} __packed req = {
.idx = idx,
.env = env_cap,
+ .env_6g = dev->phy.power_type,
.acpi_conf = mt792x_acpi_get_flags(&dev->phy),
};
int ret, valid_cnt = 0;
@@ -1146,6 +1273,11 @@ int __mt7921_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
if (!clc)
return 0;
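+ /* CLC_CAP_EVT_EN: firmware replies with the resulting channel config;
+ * CLC_CAP_DTS_EN: device tree power limits are present
+ */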
+ if (dev->phy.chip_cap & MT792x_CHIP_CAP_CLC_EVT_EN)
+ req.cap |= CLC_CAP_EVT_EN;
+ if (mt76_find_power_limits_node(&dev->mt76))
+ req.cap |= CLC_CAP_DTS_EN;
+
pos = clc->data;
for (i = 0; i < clc->nr_country; i++) {
struct mt7921_clc_rule *rule = (struct mt7921_clc_rule *)pos;
@@ -1167,10 +1299,21 @@ int __mt7921_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
return -ENOMEM;
skb_put_data(skb, rule->data, len);
- ret = mt76_mcu_skb_send_msg(&dev->mt76, skb,
- MCU_CE_CMD(SET_CLC), false);
+ ret = mt76_mcu_skb_send_and_get_msg(&dev->mt76, skb,
+ MCU_CE_CMD(SET_CLC),
+ !!(req.cap & CLC_CAP_EVT_EN),
+ &ret_skb);
if (ret < 0)
return ret;
+
+ if (ret_skb) {
+ struct mt7921_clc_info_tlv *info;
+
+ info = (struct mt7921_clc_info_tlv *)(ret_skb->data + 4);
+ dev->phy.clc_chan_conf = info->chan_conf;
+ dev_kfree_skb(ret_skb);
+ }
+
valid_cnt++;
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
index 9b0aa3b70f0e..f9a259ee6b82 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/mcu.h
@@ -99,4 +99,17 @@ struct mt7921_rftest_evt {
__le32 param0;
__le32 param1;
} __packed;
+
+struct mt7921_clc_info_tlv {
+ __le16 tag;
+ __le16 len;
+
+ u8 chan_conf; /* BIT(0) : Enable UNII-4
+ * BIT(1) : Enable UNII-5
+ * BIT(2) : Enable UNII-6
+ * BIT(3) : Enable UNII-7
+ * BIT(4) : Enable UNII-8
+ */
+ u8 rsv[63];
+} __packed;
#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
index 87dd06855f68..f28621121927 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/mt7921.h
@@ -23,10 +23,8 @@
#define MT7921_SKU_MAX_DELTA_IDX MT7921_SKU_RATE_NUM
#define MT7921_SKU_TABLE_SIZE (MT7921_SKU_RATE_NUM + 1)
-#define MT7921_SDIO_HDR_TX_BYTES GENMASK(15, 0)
-#define MT7921_SDIO_HDR_PKT_TYPE GENMASK(17, 16)
-
#define MCU_UNI_EVENT_ROC 0x27
+#define MCU_UNI_EVENT_CLC 0x80
enum {
UNI_ROC_ACQUIRE,
@@ -235,20 +233,6 @@ mt7921_l1_rmw(struct mt792x_dev *dev, u32 addr, u32 mask, u32 val)
#define mt7921_l1_set(dev, addr, val) mt7921_l1_rmw(dev, addr, 0, val)
#define mt7921_l1_clear(dev, addr, val) mt7921_l1_rmw(dev, addr, val, 0)
-static inline void
-mt7921_skb_add_usb_sdio_hdr(struct mt792x_dev *dev, struct sk_buff *skb,
- int type)
-{
- u32 hdr, len;
-
- len = mt76_is_usb(&dev->mt76) ? skb->len : skb->len + sizeof(hdr);
- hdr = FIELD_PREP(MT7921_SDIO_HDR_TX_BYTES, len) |
- FIELD_PREP(MT7921_SDIO_HDR_PKT_TYPE, type);
-
- put_unaligned_le32(hdr, skb_push(skb, sizeof(hdr)));
-}
-
-void mt7921_stop(struct ieee80211_hw *hw);
int mt7921_mac_init(struct mt792x_dev *dev);
bool mt7921_mac_wtbl_update(struct mt792x_dev *dev, int idx, u32 mask);
int mt7921_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
index 3dda84a93717..f04e7095e181 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci.c
@@ -17,6 +17,8 @@ static const struct pci_device_id mt7921_pci_device_table[] = {
.driver_data = (kernel_ulong_t)MT7921_FIRMWARE_WM },
{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7922),
.driver_data = (kernel_ulong_t)MT7922_FIRMWARE_WM },
+ { PCI_DEVICE(PCI_VENDOR_ID_ITTIM, 0x7922),
+ .driver_data = (kernel_ulong_t)MT7922_FIRMWARE_WM },
{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x0608),
.driver_data = (kernel_ulong_t)MT7921_FIRMWARE_WM },
{ PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x0616),
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
index e7a995e7e70a..c866144ff061 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/pci_mac.c
@@ -48,7 +48,7 @@ int mt7921e_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
memset(txp, 0, sizeof(struct mt76_connac_hw_txp));
mt76_connac_write_hw_txp(mdev, tx_info, txp, id);
- tx_info->skb = DMA_DUMMY_DATA;
+ tx_info->skb = NULL;
return 0;
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
index 310eeca024ad..5e4501d7f1c0 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/sdio_mcu.c
@@ -38,7 +38,7 @@ mt7921s_mcu_send_message(struct mt76_dev *mdev, struct sk_buff *skb,
if (cmd == MCU_CMD(FW_SCATTER))
type = MT7921_SDIO_FWDL;
- mt7921_skb_add_usb_sdio_hdr(dev, skb, type);
+ mt792x_skb_add_usb_sdio_hdr(dev, skb, type);
pad = round_up(skb->len, 4) - skb->len;
__skb_put_zero(skb, pad);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
index 59cd3d98bf90..e5258c74fc07 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
@@ -43,7 +43,7 @@ mt7921u_mcu_send_message(struct mt76_dev *mdev, struct sk_buff *skb,
else
ep = MT_EP_OUT_AC_BE;
- mt7921_skb_add_usb_sdio_hdr(dev, skb, 0);
+ mt792x_skb_add_usb_sdio_hdr(dev, skb, 0);
pad = round_up(skb->len, 4) + 4 - skb->len;
__skb_put_zero(skb, pad);
@@ -135,14 +135,6 @@ out:
return err;
}
-static void mt7921u_stop(struct ieee80211_hw *hw)
-{
- struct mt792x_dev *dev = mt792x_hw_dev(hw);
-
- mt76u_stop_tx(&dev->mt76);
- mt7921_stop(hw);
-}
-
static int mt7921u_probe(struct usb_interface *usb_intf,
const struct usb_device_id *id)
{
@@ -189,7 +181,7 @@ static int mt7921u_probe(struct usb_interface *usb_intf,
if (!ops)
return -ENOMEM;
- ops->stop = mt7921u_stop;
+ ops->stop = mt792xu_stop;
mdev = mt76_alloc_device(&usb_intf->dev, sizeof(*dev), ops, &drv_ops);
if (!mdev)
return -ENOMEM;
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/Kconfig b/drivers/net/wireless/mediatek/mt76/mt7925/Kconfig
new file mode 100644
index 000000000000..5854e95e68a5
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/Kconfig
@@ -0,0 +1,30 @@
+# SPDX-License-Identifier: ISC
+config MT7925_COMMON
+ tristate
+ select MT792x_LIB
+ select WANT_DEV_COREDUMP
+
+config MT7925E
+ tristate "MediaTek MT7925E (PCIe) support"
+ select MT7925_COMMON
+ depends on MAC80211
+ depends on PCI
+ help
+ This adds support for MT7925-based wireless PCIe devices,
+ which support IEEE 802.11be operation in the 2.4 GHz, 5 GHz, and
+ 6 GHz bands with 2x2:2SS, 4096-QAM, and 160 MHz channels.
+
+ To compile this driver as a module, choose M here.
+
+config MT7925U
+ tristate "MediaTek MT7925U (USB) support"
+ select MT792x_USB
+ select MT7925_COMMON
+ depends on MAC80211
+ depends on USB
+ help
+ This adds support for MT7925-based wireless USB devices,
+ which support IEEE 802.11be operation in the 2.4 GHz, 5 GHz, and
+ 6 GHz bands with 2x2:2SS, 4096-QAM, and 160 MHz channels.
+
+ To compile this driver as a module, choose M here.
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/Makefile b/drivers/net/wireless/mediatek/mt76/mt7925/Makefile
new file mode 100644
index 000000000000..d321e4ed732f
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/Makefile
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: ISC
+
+obj-$(CONFIG_MT7925_COMMON) += mt7925-common.o
+obj-$(CONFIG_MT7925E) += mt7925e.o
+obj-$(CONFIG_MT7925U) += mt7925u.o
+
+mt7925-common-y := mac.o mcu.o main.o init.o debugfs.o
+mt7925e-y := pci.o pci_mac.o pci_mcu.o
+mt7925u-y := usb.o
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/debugfs.c b/drivers/net/wireless/mediatek/mt76/mt7925/debugfs.c
new file mode 100644
index 000000000000..1e2fc6577e78
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/debugfs.c
@@ -0,0 +1,319 @@
+// SPDX-License-Identifier: ISC
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#include "mt7925.h"
+#include "mcu.h"
+
+static int
+mt7925_reg_set(void *data, u64 val)
+{
+ struct mt792x_dev *dev = data;
+ u32 regval = val;
+
+ mt792x_mutex_acquire(dev);
+ mt7925_mcu_regval(dev, dev->mt76.debugfs_reg, &regval, true);
+ mt792x_mutex_release(dev);
+
+ return 0;
+}
+
+static int
+mt7925_reg_get(void *data, u64 *val)
+{
+ struct mt792x_dev *dev = data;
+ u32 regval;
+ int ret;
+
+ mt792x_mutex_acquire(dev);
+ ret = mt7925_mcu_regval(dev, dev->mt76.debugfs_reg, &regval, false);
+ mt792x_mutex_release(dev);
+ if (!ret)
+ *val = regval;
+
+ return 0;
+}
+
+DEFINE_DEBUGFS_ATTRIBUTE(fops_regval, mt7925_reg_get, mt7925_reg_set,
+ "0x%08llx\n");
+static int
+mt7925_fw_debug_set(void *data, u64 val)
+{
+ struct mt792x_dev *dev = data;
+
+ mt792x_mutex_acquire(dev);
+
+ dev->fw_debug = (u8)val;
+ mt7925_mcu_fw_log_2_host(dev, dev->fw_debug);
+
+ mt792x_mutex_release(dev);
+
+ return 0;
+}
+
+static int
+mt7925_fw_debug_get(void *data, u64 *val)
+{
+ struct mt792x_dev *dev = data;
+
+ *val = dev->fw_debug;
+
+ return 0;
+}
+
+DEFINE_DEBUGFS_ATTRIBUTE(fops_fw_debug, mt7925_fw_debug_get,
+ mt7925_fw_debug_set, "%lld\n");
+
+DEFINE_SHOW_ATTRIBUTE(mt792x_tx_stats);
+
+static void
+mt7925_seq_puts_array(struct seq_file *file, const char *str,
+ s8 val[][2], int len, u8 band_idx)
+{
+ int i;
+
+ seq_printf(file, "%-22s:", str);
+ for (i = 0; i < len; i++)
+ if (val[i][band_idx] == 127)
+ seq_printf(file, " %6s", "N.A");
+ else
+ seq_printf(file, " %6d", val[i][band_idx]);
+ seq_puts(file, "\n");
+}
+
+#define mt7925_print_txpwr_entry(prefix, rate, idx) \
+({ \
+ mt7925_seq_puts_array(s, #prefix " (tmac)", \
+ txpwr->rate, \
+ ARRAY_SIZE(txpwr->rate), \
+ idx); \
+})
+
+static inline void
+mt7925_eht_txpwr(struct seq_file *s, struct mt7925_txpwr *txpwr, u8 band_idx)
+{
+ seq_printf(s, "%-22s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s\n",
+ " ", "mcs0", "mcs1", "mcs2", "mcs3", "mcs4", "mcs5",
+ "mcs6", "mcs7", "mcs8", "mcs9", "mcs10", "mcs11",
+ "mcs12", "mcs13", "mcs14", "mcs15");
+ mt7925_print_txpwr_entry(EHT26, eht26, band_idx);
+ mt7925_print_txpwr_entry(EHT52, eht52, band_idx);
+ mt7925_print_txpwr_entry(EHT106, eht106, band_idx);
+ mt7925_print_txpwr_entry(EHT242, eht242, band_idx);
+ mt7925_print_txpwr_entry(EHT484, eht484, band_idx);
+
+ mt7925_print_txpwr_entry(EHT996, eht996, band_idx);
+ mt7925_print_txpwr_entry(EHT996x2, eht996x2, band_idx);
+ mt7925_print_txpwr_entry(EHT996x4, eht996x4, band_idx);
+ mt7925_print_txpwr_entry(EHT26_52, eht26_52, band_idx);
+ mt7925_print_txpwr_entry(EHT26_106, eht26_106, band_idx);
+ mt7925_print_txpwr_entry(EHT484_242, eht484_242, band_idx);
+ mt7925_print_txpwr_entry(EHT996_484, eht996_484, band_idx);
+ mt7925_print_txpwr_entry(EHT996_484_242, eht996_484_242, band_idx);
+ mt7925_print_txpwr_entry(EHT996x2_484, eht996x2_484, band_idx);
+ mt7925_print_txpwr_entry(EHT996x3, eht996x3, band_idx);
+ mt7925_print_txpwr_entry(EHT996x3_484, eht996x3_484, band_idx);
+}
+
+static int
+mt7925_txpwr(struct seq_file *s, void *data)
+{
+ struct mt792x_dev *dev = dev_get_drvdata(s->private);
+ struct mt7925_txpwr *txpwr = NULL;
+ u8 band_idx = dev->mphy.band_idx;
+ int ret = 0;
+
+ txpwr = devm_kmalloc(dev->mt76.dev, sizeof(*txpwr), GFP_KERNEL);
+
+ if (!txpwr)
+ return -ENOMEM;
+
+ mt792x_mutex_acquire(dev);
+ ret = mt7925_get_txpwr_info(dev, band_idx, txpwr);
+ mt792x_mutex_release(dev);
+
+ if (ret)
+ goto out;
+
+ seq_printf(s, "%-22s %6s %6s %6s %6s\n",
+ " ", "1m", "2m", "5m", "11m");
+ mt7925_print_txpwr_entry(CCK, cck, band_idx);
+
+ seq_printf(s, "%-22s %6s %6s %6s %6s %6s %6s %6s %6s\n",
+ " ", "6m", "9m", "12m", "18m", "24m", "36m",
+ "48m", "54m");
+ mt7925_print_txpwr_entry(OFDM, ofdm, band_idx);
+
+ seq_printf(s, "%-22s %6s %6s %6s %6s %6s %6s %6s %6s\n",
+ " ", "mcs0", "mcs1", "mcs2", "mcs3", "mcs4", "mcs5",
+ "mcs6", "mcs7");
+ mt7925_print_txpwr_entry(HT20, ht20, band_idx);
+
+ seq_printf(s, "%-22s %6s %6s %6s %6s %6s %6s %6s %6s %6s\n",
+ " ", "mcs0", "mcs1", "mcs2", "mcs3", "mcs4", "mcs5",
+ "mcs6", "mcs7", "mcs32");
+ mt7925_print_txpwr_entry(HT40, ht40, band_idx);
+
+ seq_printf(s, "%-22s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s\n",
+ " ", "mcs0", "mcs1", "mcs2", "mcs3", "mcs4", "mcs5",
+ "mcs6", "mcs7", "mcs8", "mcs9", "mcs10", "mcs11");
+ mt7925_print_txpwr_entry(VHT20, vht20, band_idx);
+ mt7925_print_txpwr_entry(VHT40, vht40, band_idx);
+
+ mt7925_print_txpwr_entry(VHT80, vht80, band_idx);
+ mt7925_print_txpwr_entry(VHT160, vht160, band_idx);
+
+ mt7925_print_txpwr_entry(HE26, he26, band_idx);
+ mt7925_print_txpwr_entry(HE52, he52, band_idx);
+ mt7925_print_txpwr_entry(HE106, he106, band_idx);
+ mt7925_print_txpwr_entry(HE242, he242, band_idx);
+ mt7925_print_txpwr_entry(HE484, he484, band_idx);
+
+ mt7925_print_txpwr_entry(HE996, he996, band_idx);
+ mt7925_print_txpwr_entry(HE996x2, he996x2, band_idx);
+
+ mt7925_eht_txpwr(s, txpwr, band_idx);
+
+out:
+ devm_kfree(dev->mt76.dev, txpwr);
+ return ret;
+}
+
+static int
+mt7925_pm_set(void *data, u64 val)
+{
+ struct mt792x_dev *dev = data;
+ struct mt76_connac_pm *pm = &dev->pm;
+
+ if (mt76_is_usb(&dev->mt76))
+ return -EOPNOTSUPP;
+
+ mutex_lock(&dev->mt76.mutex);
+
+ if (val == pm->enable_user)
+ goto out;
+
+ if (!pm->enable_user) {
+ pm->stats.last_wake_event = jiffies;
+ pm->stats.last_doze_event = jiffies;
+ }
+ /* make sure the chip is awake here and ps_work is scheduled
+ * just at the end of this routine.
+ */
+ pm->enable = false;
+ mt76_connac_pm_wake(&dev->mphy, pm);
+
+ pm->enable_user = val;
+ mt7925_set_runtime_pm(dev);
+ mt76_connac_power_save_sched(&dev->mphy, pm);
+out:
+ mutex_unlock(&dev->mt76.mutex);
+
+ return 0;
+}
+
+static int
+mt7925_pm_get(void *data, u64 *val)
+{
+ struct mt792x_dev *dev = data;
+
+ *val = dev->pm.enable_user;
+
+ return 0;
+}
+
+DEFINE_DEBUGFS_ATTRIBUTE(fops_pm, mt7925_pm_get, mt7925_pm_set, "%lld\n");
+
+static int
+mt7925_deep_sleep_set(void *data, u64 val)
+{
+ struct mt792x_dev *dev = data;
+ struct mt76_connac_pm *pm = &dev->pm;
+ bool monitor = !!(dev->mphy.hw->conf.flags & IEEE80211_CONF_MONITOR);
+ bool enable = !!val;
+
+ if (mt76_is_usb(&dev->mt76))
+ return -EOPNOTSUPP;
+
+ mt792x_mutex_acquire(dev);
+ if (pm->ds_enable_user == enable)
+ goto out;
+
+ pm->ds_enable_user = enable;
+ pm->ds_enable = enable && !monitor;
+ mt7925_mcu_set_deep_sleep(dev, pm->ds_enable);
+out:
+ mt792x_mutex_release(dev);
+
+ return 0;
+}
+
+static int
+mt7925_deep_sleep_get(void *data, u64 *val)
+{
+ struct mt792x_dev *dev = data;
+
+ *val = dev->pm.ds_enable_user;
+
+ return 0;
+}
+
+DEFINE_DEBUGFS_ATTRIBUTE(fops_ds, mt7925_deep_sleep_get,
+ mt7925_deep_sleep_set, "%lld\n");
+
+DEFINE_DEBUGFS_ATTRIBUTE(fops_pm_idle_timeout, mt792x_pm_idle_timeout_get,
+ mt792x_pm_idle_timeout_set, "%lld\n");
+
+static int mt7925_chip_reset(void *data, u64 val)
+{
+ struct mt792x_dev *dev = data;
+ int ret = 0;
+
+ switch (val) {
+ case 1:
+ /* Reset wifisys directly. */
+ mt792x_reset(&dev->mt76);
+ break;
+ default:
+ /* Collect the core dump before resetting wifisys. */
+ mt792x_mutex_acquire(dev);
+ ret = mt7925_mcu_chip_config(dev, "assert");
+ mt792x_mutex_release(dev);
+ break;
+ }
+
+ return ret;
+}
+
+DEFINE_DEBUGFS_ATTRIBUTE(fops_reset, NULL, mt7925_chip_reset, "%lld\n");
+
+int mt7925_init_debugfs(struct mt792x_dev *dev)
+{
+ struct dentry *dir;
+
+ dir = mt76_register_debugfs_fops(&dev->mphy, &fops_regval);
+ if (!dir)
+ return -ENOMEM;
+
+ if (mt76_is_mmio(&dev->mt76))
+ debugfs_create_devm_seqfile(dev->mt76.dev, "xmit-queues",
+ dir, mt792x_queues_read);
+ else
+ debugfs_create_devm_seqfile(dev->mt76.dev, "xmit-queues",
+ dir, mt76_queues_read);
+
+ debugfs_create_devm_seqfile(dev->mt76.dev, "acq", dir,
+ mt792x_queues_acq);
+ debugfs_create_devm_seqfile(dev->mt76.dev, "txpower_sku", dir,
+ mt7925_txpwr);
+ debugfs_create_file("tx_stats", 0400, dir, dev, &mt792x_tx_stats_fops);
+ debugfs_create_file("fw_debug", 0600, dir, dev, &fops_fw_debug);
+ debugfs_create_file("runtime-pm", 0600, dir, dev, &fops_pm);
+ debugfs_create_file("idle-timeout", 0600, dir, dev,
+ &fops_pm_idle_timeout);
+ debugfs_create_file("chip_reset", 0600, dir, dev, &fops_reset);
+ debugfs_create_devm_seqfile(dev->mt76.dev, "runtime_pm_stats", dir,
+ mt792x_pm_stats);
+ debugfs_create_file("deep-sleep", 0600, dir, dev, &fops_ds);
+
+ return 0;
+}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/init.c b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
new file mode 100644
index 000000000000..8f9b7a2f376c
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/init.c
@@ -0,0 +1,235 @@
+// SPDX-License-Identifier: ISC
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#include <linux/etherdevice.h>
+#include <linux/firmware.h>
+#include "mt7925.h"
+#include "mac.h"
+#include "mcu.h"
+
+static void
+mt7925_regd_notifier(struct wiphy *wiphy,
+ struct regulatory_request *req)
+{
+ struct ieee80211_hw *hw = wiphy_to_ieee80211_hw(wiphy);
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct mt76_dev *mdev = &dev->mt76;
+
+ /* allow world regdom at the first boot only */
+ if (!memcmp(req->alpha2, "00", 2) &&
+ mdev->alpha2[0] && mdev->alpha2[1])
+ return;
+
+ /* do not need to update the same country twice */
+ if (!memcmp(req->alpha2, mdev->alpha2, 2) &&
+ dev->country_ie_env == req->country_ie_env)
+ return;
+
+ memcpy(mdev->alpha2, req->alpha2, 2);
+ mdev->region = req->dfs_region;
+ dev->country_ie_env = req->country_ie_env;
+
+ mt792x_mutex_acquire(dev);
+ mt7925_mcu_set_clc(dev, req->alpha2, req->country_ie_env);
+ mt7925_mcu_set_channel_domain(hw->priv);
+ mt7925_set_tx_sar_pwr(hw, NULL);
+ mt792x_mutex_release(dev);
+}
+
+static void mt7925_mac_init_basic_rates(struct mt792x_dev *dev)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(mt76_rates); i++) {
+ u16 rate = mt76_rates[i].hw_value;
+ u16 idx = MT792x_BASIC_RATES_TBL + i;
+
+ rate = FIELD_PREP(MT_TX_RATE_MODE, rate >> 8) |
+ FIELD_PREP(MT_TX_RATE_IDX, rate & GENMASK(7, 0));
+ mt7925_mac_set_fixed_rate_table(dev, idx, rate);
+ }
+}
+
+int mt7925_mac_init(struct mt792x_dev *dev)
+{
+ int i;
+
+ mt76_rmw_field(dev, MT_MDP_DCR1, MT_MDP_DCR1_MAX_RX_LEN, 1536);
+ /* enable hardware de-agg */
+ mt76_set(dev, MT_MDP_DCR0, MT_MDP_DCR0_DAMSDU_EN);
+
+ for (i = 0; i < MT792x_WTBL_SIZE; i++)
+ mt7925_mac_wtbl_update(dev, i,
+ MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
+ for (i = 0; i < 2; i++)
+ mt792x_mac_init_band(dev, i);
+
+ mt7925_mac_init_basic_rates(dev);
+
+ memzero_explicit(&dev->mt76.alpha2, sizeof(dev->mt76.alpha2));
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(mt7925_mac_init);
+
+static int __mt7925_init_hardware(struct mt792x_dev *dev)
+{
+ int ret;
+
+ ret = mt792x_mcu_init(dev);
+ if (ret)
+ goto out;
+
+ mt76_eeprom_override(&dev->mphy);
+
+ ret = mt7925_mcu_set_eeprom(dev);
+ if (ret)
+ goto out;
+
+ ret = mt7925_mac_init(dev);
+ if (ret)
+ goto out;
+
+out:
+ return ret;
+}
+
+static int mt7925_init_hardware(struct mt792x_dev *dev)
+{
+ int ret, i;
+
+ set_bit(MT76_STATE_INITIALIZED, &dev->mphy.state);
+
+ for (i = 0; i < MT792x_MCU_INIT_RETRY_COUNT; i++) {
+ ret = __mt7925_init_hardware(dev);
+ if (!ret)
+ break;
+
+ mt792x_init_reset(dev);
+ }
+
+ if (i == MT792x_MCU_INIT_RETRY_COUNT) {
+ dev_err(dev->mt76.dev, "hardware init failed\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static void mt7925_init_work(struct work_struct *work)
+{
+ struct mt792x_dev *dev = container_of(work, struct mt792x_dev,
+ init_work);
+ int ret;
+
+ ret = mt7925_init_hardware(dev);
+ if (ret)
+ return;
+
+ mt76_set_stream_caps(&dev->mphy, true);
+ mt7925_set_stream_he_eht_caps(&dev->phy);
+
+ ret = mt76_register_device(&dev->mt76, true, mt76_rates,
+ ARRAY_SIZE(mt76_rates));
+ if (ret) {
+ dev_err(dev->mt76.dev, "register device failed\n");
+ return;
+ }
+
+ ret = mt7925_init_debugfs(dev);
+ if (ret) {
+ dev_err(dev->mt76.dev, "register debugfs failed\n");
+ return;
+ }
+
+ /* we support chip reset now */
+ dev->hw_init_done = true;
+
+ mt7925_mcu_set_deep_sleep(dev, dev->pm.ds_enable);
+}
+
+int mt7925_register_device(struct mt792x_dev *dev)
+{
+ struct ieee80211_hw *hw = mt76_hw(dev);
+ int ret;
+
+ dev->phy.dev = dev;
+ dev->phy.mt76 = &dev->mt76.phy;
+ dev->mt76.phy.priv = &dev->phy;
+ dev->mt76.tx_worker.fn = mt792x_tx_worker;
+
+ INIT_DELAYED_WORK(&dev->pm.ps_work, mt792x_pm_power_save_work);
+ INIT_WORK(&dev->pm.wake_work, mt792x_pm_wake_work);
+ spin_lock_init(&dev->pm.wake.lock);
+ mutex_init(&dev->pm.mutex);
+ init_waitqueue_head(&dev->pm.wait);
+ spin_lock_init(&dev->pm.txq_lock);
+ INIT_DELAYED_WORK(&dev->mphy.mac_work, mt792x_mac_work);
+ INIT_DELAYED_WORK(&dev->phy.scan_work, mt7925_scan_work);
+ INIT_DELAYED_WORK(&dev->coredump.work, mt7925_coredump_work);
+#if IS_ENABLED(CONFIG_IPV6)
+ INIT_WORK(&dev->ipv6_ns_work, mt7925_set_ipv6_ns_work);
+ skb_queue_head_init(&dev->ipv6_ns_list);
+#endif
+ skb_queue_head_init(&dev->phy.scan_event_list);
+ skb_queue_head_init(&dev->coredump.msg_list);
+
+ INIT_WORK(&dev->reset_work, mt7925_mac_reset_work);
+ INIT_WORK(&dev->init_work, mt7925_init_work);
+
+ INIT_WORK(&dev->phy.roc_work, mt7925_roc_work);
+ timer_setup(&dev->phy.roc_timer, mt792x_roc_timer, 0);
+ init_waitqueue_head(&dev->phy.roc_wait);
+
+ dev->pm.idle_timeout = MT792x_PM_TIMEOUT;
+ dev->pm.stats.last_wake_event = jiffies;
+ dev->pm.stats.last_doze_event = jiffies;
+ if (!mt76_is_usb(&dev->mt76)) {
+ dev->pm.enable_user = true;
+ dev->pm.enable = true;
+ dev->pm.ds_enable_user = true;
+ dev->pm.ds_enable = true;
+ }
+
+ if (!mt76_is_mmio(&dev->mt76))
+ hw->extra_tx_headroom += MT_SDIO_TXD_SIZE + MT_SDIO_HDR_SIZE;
+
+ mt792x_init_acpi_sar(dev);
+
+ ret = mt792x_init_wcid(dev);
+ if (ret)
+ return ret;
+
+ ret = mt792x_init_wiphy(hw);
+ if (ret)
+ return ret;
+
+ hw->wiphy->reg_notifier = mt7925_regd_notifier;
+ dev->mphy.sband_2g.sband.ht_cap.cap |=
+ IEEE80211_HT_CAP_LDPC_CODING |
+ IEEE80211_HT_CAP_MAX_AMSDU;
+ dev->mphy.sband_2g.sband.ht_cap.ampdu_density =
+ IEEE80211_HT_MPDU_DENSITY_2;
+ dev->mphy.sband_5g.sband.ht_cap.cap |=
+ IEEE80211_HT_CAP_LDPC_CODING |
+ IEEE80211_HT_CAP_MAX_AMSDU;
+ dev->mphy.sband_2g.sband.ht_cap.ampdu_density =
+ IEEE80211_HT_MPDU_DENSITY_1;
+ dev->mphy.sband_5g.sband.vht_cap.cap |=
+ IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454 |
+ IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK |
+ IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
+ IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE |
+ (3 << IEEE80211_VHT_CAP_BEAMFORMEE_STS_SHIFT);
+ dev->mphy.sband_5g.sband.vht_cap.cap |=
+ IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ |
+ IEEE80211_VHT_CAP_SHORT_GI_160;
+
+ dev->mphy.hw->wiphy->available_antennas_rx = dev->mphy.chainmask;
+ dev->mphy.hw->wiphy->available_antennas_tx = dev->mphy.chainmask;
+
+ queue_work(system_wq, &dev->init_work);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(mt7925_register_device);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mac.c b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
new file mode 100644
index 000000000000..1b9fbd9a140d
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
@@ -0,0 +1,1452 @@
+// SPDX-License-Identifier: ISC
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#include <linux/devcoredump.h>
+#include <linux/etherdevice.h>
+#include <linux/timekeeping.h>
+#include "mt7925.h"
+#include "../dma.h"
+#include "mac.h"
+#include "mcu.h"
+
+bool mt7925_mac_wtbl_update(struct mt792x_dev *dev, int idx, u32 mask)
+{
+ mt76_rmw(dev, MT_WTBL_UPDATE, MT_WTBL_UPDATE_WLAN_IDX,
+ FIELD_PREP(MT_WTBL_UPDATE_WLAN_IDX, idx) | mask);
+
+ return mt76_poll(dev, MT_WTBL_UPDATE, MT_WTBL_UPDATE_BUSY,
+ 0, 5000);
+}
+
+static void mt7925_mac_sta_poll(struct mt792x_dev *dev)
+{
+ static const u8 ac_to_tid[] = {
+ [IEEE80211_AC_BE] = 0,
+ [IEEE80211_AC_BK] = 1,
+ [IEEE80211_AC_VI] = 4,
+ [IEEE80211_AC_VO] = 6
+ };
+ struct ieee80211_sta *sta;
+ struct mt792x_sta *msta;
+ u32 tx_time[IEEE80211_NUM_ACS], rx_time[IEEE80211_NUM_ACS];
+ LIST_HEAD(sta_poll_list);
+ struct rate_info *rate;
+ s8 rssi[4];
+ int i;
+
+ spin_lock_bh(&dev->mt76.sta_poll_lock);
+ list_splice_init(&dev->mt76.sta_poll_list, &sta_poll_list);
+ spin_unlock_bh(&dev->mt76.sta_poll_lock);
+
+ while (true) {
+ bool clear = false;
+ u32 addr, val;
+ u16 idx;
+ u8 bw;
+
+ if (list_empty(&sta_poll_list))
+ break;
+ msta = list_first_entry(&sta_poll_list,
+ struct mt792x_sta, wcid.poll_list);
+ spin_lock_bh(&dev->mt76.sta_poll_lock);
+ list_del_init(&msta->wcid.poll_list);
+ spin_unlock_bh(&dev->mt76.sta_poll_lock);
+
+ idx = msta->wcid.idx;
+ addr = mt7925_mac_wtbl_lmac_addr(dev, idx, MT_WTBL_AC0_CTT_OFFSET);
+
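+ /* each AC has an 8-byte WTBL slot: tx airtime, then rx airtime */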
+ for (i = 0; i < IEEE80211_NUM_ACS; i++) {
+ u32 tx_last = msta->airtime_ac[i];
+ u32 rx_last = msta->airtime_ac[i + 4];
+
+ msta->airtime_ac[i] = mt76_rr(dev, addr);
+ msta->airtime_ac[i + 4] = mt76_rr(dev, addr + 4);
+
+ tx_time[i] = msta->airtime_ac[i] - tx_last;
+ rx_time[i] = msta->airtime_ac[i + 4] - rx_last;
+
+ if ((tx_last | rx_last) & BIT(30))
+ clear = true;
+
+ addr += 8;
+ }
+
+ if (clear) {
+ mt7925_mac_wtbl_update(dev, idx,
+ MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
+ memset(msta->airtime_ac, 0, sizeof(msta->airtime_ac));
+ }
+
+ if (!msta->wcid.sta)
+ continue;
+
+ sta = container_of((void *)msta, struct ieee80211_sta,
+ drv_priv);
+ for (i = 0; i < IEEE80211_NUM_ACS; i++) {
+ u8 q = mt76_connac_lmac_mapping(i);
+ u32 tx_cur = tx_time[q];
+ u32 rx_cur = rx_time[q];
+ u8 tid = ac_to_tid[i];
+
+ if (!tx_cur && !rx_cur)
+ continue;
+
+ ieee80211_sta_register_airtime(sta, tid, tx_cur,
+ rx_cur);
+ }
+
+ /* We don't support reading GI info from txs packets.
+ * For accurate tx status reporting and AQL improvement,
+ * we need to make sure the reported flags match, so we poll the
+ * GI from the per-sta counters directly.
+ */
+ rate = &msta->wcid.rate;
+
+ switch (rate->bw) {
+ case RATE_INFO_BW_160:
+ bw = IEEE80211_STA_RX_BW_160;
+ break;
+ case RATE_INFO_BW_80:
+ bw = IEEE80211_STA_RX_BW_80;
+ break;
+ case RATE_INFO_BW_40:
+ bw = IEEE80211_STA_RX_BW_40;
+ break;
+ default:
+ bw = IEEE80211_STA_RX_BW_20;
+ break;
+ }
+
+ addr = mt7925_mac_wtbl_lmac_addr(dev, idx, 6);
+ val = mt76_rr(dev, addr);
+ if (rate->flags & RATE_INFO_FLAGS_EHT_MCS) {
+ addr = mt7925_mac_wtbl_lmac_addr(dev, idx, 5);
+ val = mt76_rr(dev, addr);
+ rate->eht_gi = FIELD_GET(GENMASK(25, 24), val);
+ } else if (rate->flags & RATE_INFO_FLAGS_HE_MCS) {
+ u8 offs = MT_WTBL_TXRX_RATE_G2_HE + 2 * bw;
+
+ rate->he_gi = (val & (0x3 << offs)) >> offs;
+ } else if (rate->flags &
+ (RATE_INFO_FLAGS_VHT_MCS | RATE_INFO_FLAGS_MCS)) {
+ if (val & BIT(MT_WTBL_TXRX_RATE_G2 + bw))
+ rate->flags |= RATE_INFO_FLAGS_SHORT_GI;
+ else
+ rate->flags &= ~RATE_INFO_FLAGS_SHORT_GI;
+ }
+
+ /* get signal strength of resp frames (CTS/BA/ACK) */
+ addr = mt7925_mac_wtbl_lmac_addr(dev, idx, 34);
+ val = mt76_rr(dev, addr);
+
+ rssi[0] = to_rssi(GENMASK(7, 0), val);
+ rssi[1] = to_rssi(GENMASK(15, 8), val);
+ rssi[2] = to_rssi(GENMASK(23, 16), val);
+ rssi[3] = to_rssi(GENMASK(31, 24), val);
+
+ msta->ack_signal =
+ mt76_rx_signal(msta->vif->phy->mt76->antenna_mask, rssi);
+
+ ewma_avg_signal_add(&msta->avg_ack_signal, -msta->ack_signal);
+ }
+}
+
+void mt7925_mac_set_fixed_rate_table(struct mt792x_dev *dev,
+ u8 tbl_idx, u16 rate_idx)
+{
+ u32 ctrl = MT_WTBL_ITCR_WR | MT_WTBL_ITCR_EXEC | tbl_idx;
+
+ mt76_wr(dev, MT_WTBL_ITDR0, rate_idx);
+ /* use wtbl spe idx */
+ mt76_wr(dev, MT_WTBL_ITDR1, MT_WTBL_SPE_IDX_SEL);
+ mt76_wr(dev, MT_WTBL_ITCR, ctrl);
+}
+
+/* The HW does not translate the mac header to 802.3 for mesh point */
+static int mt7925_reverse_frag0_hdr_trans(struct sk_buff *skb, u16 hdr_gap)
+{
+ struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
+ struct ethhdr *eth_hdr = (struct ethhdr *)(skb->data + hdr_gap);
+ struct mt792x_sta *msta = (struct mt792x_sta *)status->wcid;
+ __le32 *rxd = (__le32 *)skb->data;
+ struct ieee80211_sta *sta;
+ struct ieee80211_vif *vif;
+ struct ieee80211_hdr hdr;
+ u16 frame_control;
+
+ if (le32_get_bits(rxd[3], MT_RXD3_NORMAL_ADDR_TYPE) !=
+ MT_RXD3_NORMAL_U2M)
+ return -EINVAL;
+
+ if (!(le32_to_cpu(rxd[1]) & MT_RXD1_NORMAL_GROUP_4))
+ return -EINVAL;
+
+ if (!msta || !msta->vif)
+ return -EINVAL;
+
+ sta = container_of((void *)msta, struct ieee80211_sta, drv_priv);
+ vif = container_of((void *)msta->vif, struct ieee80211_vif, drv_priv);
+
+ /* store the info from RXD and ethhdr to avoid being overridden */
+ frame_control = le32_get_bits(rxd[8], MT_RXD8_FRAME_CONTROL);
+ hdr.frame_control = cpu_to_le16(frame_control);
+ hdr.seq_ctrl = cpu_to_le16(le32_get_bits(rxd[10], MT_RXD10_SEQ_CTRL));
+ hdr.duration_id = 0;
+
+ ether_addr_copy(hdr.addr1, vif->addr);
+ ether_addr_copy(hdr.addr2, sta->addr);
+ switch (frame_control & (IEEE80211_FCTL_TODS |
+ IEEE80211_FCTL_FROMDS)) {
+ case 0:
+ ether_addr_copy(hdr.addr3, vif->bss_conf.bssid);
+ break;
+ case IEEE80211_FCTL_FROMDS:
+ ether_addr_copy(hdr.addr3, eth_hdr->h_source);
+ break;
+ case IEEE80211_FCTL_TODS:
+ ether_addr_copy(hdr.addr3, eth_hdr->h_dest);
+ break;
+ case IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS:
+ ether_addr_copy(hdr.addr3, eth_hdr->h_dest);
+ ether_addr_copy(hdr.addr4, eth_hdr->h_source);
+ break;
+ default:
+ break;
+ }
+
+ skb_pull(skb, hdr_gap + sizeof(struct ethhdr) - 2);
+ if (eth_hdr->h_proto == cpu_to_be16(ETH_P_AARP) ||
+ eth_hdr->h_proto == cpu_to_be16(ETH_P_IPX))
+ ether_addr_copy(skb_push(skb, ETH_ALEN), bridge_tunnel_header);
+ else if (be16_to_cpu(eth_hdr->h_proto) >= ETH_P_802_3_MIN)
+ ether_addr_copy(skb_push(skb, ETH_ALEN), rfc1042_header);
+ else
+ skb_pull(skb, 2);
+
+ if (ieee80211_has_order(hdr.frame_control))
+ memcpy(skb_push(skb, IEEE80211_HT_CTL_LEN), &rxd[11],
+ IEEE80211_HT_CTL_LEN);
+ if (ieee80211_is_data_qos(hdr.frame_control)) {
+ __le16 qos_ctrl;
+
+ qos_ctrl = cpu_to_le16(le32_get_bits(rxd[10], MT_RXD10_QOS_CTL));
+ memcpy(skb_push(skb, IEEE80211_QOS_CTL_LEN), &qos_ctrl,
+ IEEE80211_QOS_CTL_LEN);
+ }
+
+ if (ieee80211_has_a4(hdr.frame_control))
+ memcpy(skb_push(skb, sizeof(hdr)), &hdr, sizeof(hdr));
+ else
+ memcpy(skb_push(skb, sizeof(hdr) - 6), &hdr, sizeof(hdr) - 6);
+
+ return 0;
+}
+
+static int
+mt7925_mac_fill_rx_rate(struct mt792x_dev *dev,
+ struct mt76_rx_status *status,
+ struct ieee80211_supported_band *sband,
+ __le32 *rxv, u8 *mode)
+{
+ u32 v0, v2;
+ u8 stbc, gi, bw, dcm, nss;
+ int i, idx;
+ bool cck = false;
+
+ v0 = le32_to_cpu(rxv[0]);
+ v2 = le32_to_cpu(rxv[2]);
+
+ idx = FIELD_GET(MT_PRXV_TX_RATE, v0);
+ i = idx;
+ nss = FIELD_GET(MT_PRXV_NSTS, v0) + 1;
+
+ stbc = FIELD_GET(MT_PRXV_HT_STBC, v2);
+ gi = FIELD_GET(MT_PRXV_HT_SHORT_GI, v2);
+ *mode = FIELD_GET(MT_PRXV_TX_MODE, v2);
+ dcm = FIELD_GET(MT_PRXV_DCM, v2);
+ bw = FIELD_GET(MT_PRXV_FRAME_MODE, v2);
+
+ switch (*mode) {
+ case MT_PHY_TYPE_CCK:
+ cck = true;
+ fallthrough;
+ case MT_PHY_TYPE_OFDM:
+ i = mt76_get_rate(&dev->mt76, sband, i, cck);
+ break;
+ case MT_PHY_TYPE_HT_GF:
+ case MT_PHY_TYPE_HT:
+ status->encoding = RX_ENC_HT;
+ if (gi)
+ status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
+ if (i > 31)
+ return -EINVAL;
+ break;
+ case MT_PHY_TYPE_VHT:
+ status->nss = nss;
+ status->encoding = RX_ENC_VHT;
+ if (gi)
+ status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
+ if (i > 11)
+ return -EINVAL;
+ break;
+ case MT_PHY_TYPE_HE_MU:
+ case MT_PHY_TYPE_HE_SU:
+ case MT_PHY_TYPE_HE_EXT_SU:
+ case MT_PHY_TYPE_HE_TB:
+ status->nss = nss;
+ status->encoding = RX_ENC_HE;
+ i &= GENMASK(3, 0);
+
+ if (gi <= NL80211_RATE_INFO_HE_GI_3_2)
+ status->he_gi = gi;
+
+ status->he_dcm = dcm;
+ break;
+ case MT_PHY_TYPE_EHT_SU:
+ case MT_PHY_TYPE_EHT_TRIG:
+ case MT_PHY_TYPE_EHT_MU:
+ status->nss = nss;
+ status->encoding = RX_ENC_EHT;
+ i &= GENMASK(3, 0);
+
+ if (gi <= NL80211_RATE_INFO_EHT_GI_3_2)
+ status->eht.gi = gi;
+ break;
+ default:
+ return -EINVAL;
+ }
+ status->rate_idx = i;
+
+ switch (bw) {
+ case IEEE80211_STA_RX_BW_20:
+ break;
+ case IEEE80211_STA_RX_BW_40:
+ if (*mode & MT_PHY_TYPE_HE_EXT_SU &&
+ (idx & MT_PRXV_TX_ER_SU_106T)) {
+ status->bw = RATE_INFO_BW_HE_RU;
+ status->he_ru =
+ NL80211_RATE_INFO_HE_RU_ALLOC_106;
+ } else {
+ status->bw = RATE_INFO_BW_40;
+ }
+ break;
+ case IEEE80211_STA_RX_BW_80:
+ status->bw = RATE_INFO_BW_80;
+ break;
+ case IEEE80211_STA_RX_BW_160:
+ status->bw = RATE_INFO_BW_160;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ status->enc_flags |= RX_ENC_FLAG_STBC_MASK * stbc;
+ if (*mode < MT_PHY_TYPE_HE_SU && gi)
+ status->enc_flags |= RX_ENC_FLAG_SHORT_GI;
+
+ return 0;
+}
+
+static int
+mt7925_mac_fill_rx(struct mt792x_dev *dev, struct sk_buff *skb)
+{
+ u32 csum_mask = MT_RXD0_NORMAL_IP_SUM | MT_RXD0_NORMAL_UDP_TCP_SUM;
+ struct mt76_rx_status *status = (struct mt76_rx_status *)skb->cb;
+ bool hdr_trans, unicast, insert_ccmp_hdr = false;
+ u8 chfreq, qos_ctl = 0, remove_pad, amsdu_info;
+ u16 hdr_gap;
+ __le32 *rxv = NULL, *rxd = (__le32 *)skb->data;
+ struct mt76_phy *mphy = &dev->mt76.phy;
+ struct mt792x_phy *phy = &dev->phy;
+ struct ieee80211_supported_band *sband;
+ u32 csum_status = *(u32 *)skb->cb;
+ u32 rxd0 = le32_to_cpu(rxd[0]);
+ u32 rxd1 = le32_to_cpu(rxd[1]);
+ u32 rxd2 = le32_to_cpu(rxd[2]);
+ u32 rxd3 = le32_to_cpu(rxd[3]);
+ u32 rxd4 = le32_to_cpu(rxd[4]);
+ struct mt792x_sta *msta = NULL;
+ u8 mode = 0; /* , band_idx; */
+ u16 seq_ctrl = 0;
+ __le16 fc = 0;
+ int idx;
+
+ memset(status, 0, sizeof(*status));
+
+ if (!test_bit(MT76_STATE_RUNNING, &mphy->state))
+ return -EINVAL;
+
+ if (rxd2 & MT_RXD2_NORMAL_AMSDU_ERR)
+ return -EINVAL;
+
+ hdr_trans = rxd2 & MT_RXD2_NORMAL_HDR_TRANS;
+ if (hdr_trans && (rxd1 & MT_RXD1_NORMAL_CM))
+ return -EINVAL;
+
+ /* ICV error or CCMP/BIP/WPI MIC error */
+ if (rxd1 & MT_RXD1_NORMAL_ICV_ERR)
+ status->flag |= RX_FLAG_ONLY_MONITOR;
+
+ chfreq = FIELD_GET(MT_RXD3_NORMAL_CH_FREQ, rxd3);
+ unicast = FIELD_GET(MT_RXD3_NORMAL_ADDR_TYPE, rxd3) == MT_RXD3_NORMAL_U2M;
+ idx = FIELD_GET(MT_RXD1_NORMAL_WLAN_IDX, rxd1);
+ status->wcid = mt792x_rx_get_wcid(dev, idx, unicast);
+
+ if (status->wcid) {
+ msta = container_of(status->wcid, struct mt792x_sta, wcid);
+ spin_lock_bh(&dev->mt76.sta_poll_lock);
+ if (list_empty(&msta->wcid.poll_list))
+ list_add_tail(&msta->wcid.poll_list,
+ &dev->mt76.sta_poll_list);
+ spin_unlock_bh(&dev->mt76.sta_poll_lock);
+ }
+
+ mt792x_get_status_freq_info(status, chfreq);
+
+ switch (status->band) {
+ case NL80211_BAND_5GHZ:
+ sband = &mphy->sband_5g.sband;
+ break;
+ case NL80211_BAND_6GHZ:
+ sband = &mphy->sband_6g.sband;
+ break;
+ default:
+ sband = &mphy->sband_2g.sband;
+ break;
+ }
+
+ if (!sband->channels)
+ return -EINVAL;
+
+ if (mt76_is_mmio(&dev->mt76) && (rxd0 & csum_mask) == csum_mask &&
+ !(csum_status & (BIT(0) | BIT(2) | BIT(3))))
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+ if (rxd3 & MT_RXD3_NORMAL_FCS_ERR)
+ status->flag |= RX_FLAG_FAILED_FCS_CRC;
+
+ if (rxd1 & MT_RXD1_NORMAL_TKIP_MIC_ERR)
+ status->flag |= RX_FLAG_MMIC_ERROR;
+
+ if (FIELD_GET(MT_RXD2_NORMAL_SEC_MODE, rxd2) != 0 &&
+ !(rxd1 & (MT_RXD1_NORMAL_CLM | MT_RXD1_NORMAL_CM))) {
+ status->flag |= RX_FLAG_DECRYPTED;
+ status->flag |= RX_FLAG_IV_STRIPPED;
+ status->flag |= RX_FLAG_MMIC_STRIPPED | RX_FLAG_MIC_STRIPPED;
+ }
+
+ remove_pad = FIELD_GET(MT_RXD2_NORMAL_HDR_OFFSET, rxd2);
+
+ if (rxd2 & MT_RXD2_NORMAL_MAX_LEN_ERROR)
+ return -EINVAL;
+
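+ /* skip the 8-dword fixed part of the rx descriptor; optional groups follow */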
+ rxd += 8;
+ if (rxd1 & MT_RXD1_NORMAL_GROUP_4) {
+ u32 v0 = le32_to_cpu(rxd[0]);
+ u32 v2 = le32_to_cpu(rxd[2]);
+
+ /* TODO: need to map rxd address */
+ fc = cpu_to_le16(FIELD_GET(MT_RXD8_FRAME_CONTROL, v0));
+ seq_ctrl = FIELD_GET(MT_RXD10_SEQ_CTRL, v2);
+ qos_ctl = FIELD_GET(MT_RXD10_QOS_CTL, v2);
+
+ rxd += 4;
+ if ((u8 *)rxd - skb->data >= skb->len)
+ return -EINVAL;
+ }
+
+ if (rxd1 & MT_RXD1_NORMAL_GROUP_1) {
+ u8 *data = (u8 *)rxd;
+
+ if (status->flag & RX_FLAG_DECRYPTED) {
+ switch (FIELD_GET(MT_RXD2_NORMAL_SEC_MODE, rxd2)) {
+ case MT_CIPHER_AES_CCMP:
+ case MT_CIPHER_CCMP_CCX:
+ case MT_CIPHER_CCMP_256:
+ insert_ccmp_hdr =
+ FIELD_GET(MT_RXD2_NORMAL_FRAG, rxd2);
+ fallthrough;
+ case MT_CIPHER_TKIP:
+ case MT_CIPHER_TKIP_NO_MIC:
+ case MT_CIPHER_GCMP:
+ case MT_CIPHER_GCMP_256:
+ status->iv[0] = data[5];
+ status->iv[1] = data[4];
+ status->iv[2] = data[3];
+ status->iv[3] = data[2];
+ status->iv[4] = data[1];
+ status->iv[5] = data[0];
+ break;
+ default:
+ break;
+ }
+ }
+ rxd += 4;
+ if ((u8 *)rxd - skb->data >= skb->len)
+ return -EINVAL;
+ }
+
+ if (rxd1 & MT_RXD1_NORMAL_GROUP_2) {
+ status->timestamp = le32_to_cpu(rxd[0]);
+ status->flag |= RX_FLAG_MACTIME_START;
+
+ if (!(rxd2 & MT_RXD2_NORMAL_NON_AMPDU)) {
+ status->flag |= RX_FLAG_AMPDU_DETAILS;
+
+ /* all subframes of an A-MPDU have the same timestamp */
+ if (phy->rx_ampdu_ts != status->timestamp) {
+ if (!++phy->ampdu_ref)
+ phy->ampdu_ref++;
+ }
+ phy->rx_ampdu_ts = status->timestamp;
+
+ status->ampdu_ref = phy->ampdu_ref;
+ }
+
+ rxd += 4;
+ if ((u8 *)rxd - skb->data >= skb->len)
+ return -EINVAL;
+ }
+
+ /* RXD Group 3 - P-RXV */
+ if (rxd1 & MT_RXD1_NORMAL_GROUP_3) {
+ u32 v3;
+ int ret;
+
+ rxv = rxd;
+ rxd += 4;
+ if ((u8 *)rxd - skb->data >= skb->len)
+ return -EINVAL;
+
+ v3 = le32_to_cpu(rxv[3]);
+
+ status->chains = mphy->antenna_mask;
+ status->chain_signal[0] = to_rssi(MT_PRXV_RCPI0, v3);
+ status->chain_signal[1] = to_rssi(MT_PRXV_RCPI1, v3);
+ status->chain_signal[2] = to_rssi(MT_PRXV_RCPI2, v3);
+ status->chain_signal[3] = to_rssi(MT_PRXV_RCPI3, v3);
+
+ /* RXD Group 5 - C-RXV */
+ if (rxd1 & MT_RXD1_NORMAL_GROUP_5) {
+ rxd += 24;
+ if ((u8 *)rxd - skb->data >= skb->len)
+ return -EINVAL;
+ }
+
+ ret = mt7925_mac_fill_rx_rate(dev, status, sband, rxv, &mode);
+ if (ret < 0)
+ return ret;
+ }
+
+ amsdu_info = FIELD_GET(MT_RXD4_NORMAL_PAYLOAD_FORMAT, rxd4);
+ status->amsdu = !!amsdu_info;
+ if (status->amsdu) {
+ status->first_amsdu = amsdu_info == MT_RXD4_FIRST_AMSDU_FRAME;
+ status->last_amsdu = amsdu_info == MT_RXD4_LAST_AMSDU_FRAME;
+ }
+
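+ /* bytes consumed by the descriptor plus HDR_OFFSET padding (2-byte units) */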
+ hdr_gap = (u8 *)rxd - skb->data + 2 * remove_pad;
+ if (hdr_trans && ieee80211_has_morefrags(fc)) {
+ if (mt7925_reverse_frag0_hdr_trans(skb, hdr_gap))
+ return -EINVAL;
+ hdr_trans = false;
+ } else {
+ int pad_start = 0;
+
+ skb_pull(skb, hdr_gap);
+ if (!hdr_trans && status->amsdu) {
+ pad_start = ieee80211_get_hdrlen_from_skb(skb);
+ } else if (hdr_trans && (rxd2 & MT_RXD2_NORMAL_HDR_TRANS_ERROR)) {
+ /* When header translation failure is indicated,
+ * the hardware will insert an extra 2-byte field
+ * containing the data length after the protocol
+ * type field.
+ */
+ pad_start = 12;
+ if (get_unaligned_be16(skb->data + pad_start) == ETH_P_8021Q)
+ pad_start += 4;
+ else
+ pad_start = 0;
+ }
+
+ if (pad_start) {
+ memmove(skb->data + 2, skb->data, pad_start);
+ skb_pull(skb, 2);
+ }
+ }
+
+ if (!hdr_trans) {
+ struct ieee80211_hdr *hdr;
+
+ if (insert_ccmp_hdr) {
+ u8 key_id = FIELD_GET(MT_RXD1_NORMAL_KEY_ID, rxd1);
+
+ mt76_insert_ccmp_hdr(skb, key_id);
+ }
+
+ hdr = mt76_skb_get_hdr(skb);
+ fc = hdr->frame_control;
+ if (ieee80211_is_data_qos(fc)) {
+ seq_ctrl = le16_to_cpu(hdr->seq_ctrl);
+ qos_ctl = *ieee80211_get_qos_ctl(hdr);
+ }
+ } else {
+ status->flag |= RX_FLAG_8023;
+ }
+
+ mt792x_mac_assoc_rssi(dev, skb);
+
+ if (rxv && mode >= MT_PHY_TYPE_HE_SU && !(status->flag & RX_FLAG_8023))
+ mt76_connac3_mac_decode_he_radiotap(skb, rxv, mode);
+
+ if (!status->wcid || !ieee80211_is_data_qos(fc))
+ return 0;
+
+ status->aggr = unicast && !ieee80211_is_qos_nullfunc(fc);
+ status->seqno = IEEE80211_SEQ_TO_SN(seq_ctrl);
+ status->qos_ctl = qos_ctl;
+
+ return 0;
+}
+
+static void
+mt7925_mac_write_txwi_8023(__le32 *txwi, struct sk_buff *skb,
+ struct mt76_wcid *wcid)
+{
+ u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+ u8 fc_type, fc_stype;
+ u16 ethertype;
+ bool wmm = false;
+ u32 val;
+
+ if (wcid->sta) {
+ struct ieee80211_sta *sta;
+
+ sta = container_of((void *)wcid, struct ieee80211_sta, drv_priv);
+ wmm = sta->wme;
+ }
+
+ val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_3) |
+ FIELD_PREP(MT_TXD1_TID, tid);
+
+ ethertype = get_unaligned_be16(&skb->data[12]);
+ if (ethertype >= ETH_P_802_3_MIN)
+ val |= MT_TXD1_ETH_802_3;
+
+ txwi[1] |= cpu_to_le32(val);
+
+ fc_type = IEEE80211_FTYPE_DATA >> 2;
+ fc_stype = wmm ? IEEE80211_STYPE_QOS_DATA >> 4 : 0;
+
+ val = FIELD_PREP(MT_TXD2_FRAME_TYPE, fc_type) |
+ FIELD_PREP(MT_TXD2_SUB_TYPE, fc_stype);
+
+ txwi[2] |= cpu_to_le32(val);
+}
+
+static void
+mt7925_mac_write_txwi_80211(struct mt76_dev *dev, __le32 *txwi,
+ struct sk_buff *skb,
+ struct ieee80211_key_conf *key)
+{
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)skb->data;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ bool multicast = is_multicast_ether_addr(hdr->addr1);
+ u8 tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
+ __le16 fc = hdr->frame_control;
+ u8 fc_type, fc_stype;
+ u32 val;
+
+ if (ieee80211_is_action(fc) &&
+ mgmt->u.action.category == WLAN_CATEGORY_BACK &&
+ mgmt->u.action.u.addba_req.action_code == WLAN_ACTION_ADDBA_REQ)
+ tid = MT_TX_ADDBA;
+ else if (ieee80211_is_mgmt(hdr->frame_control))
+ tid = MT_TX_NORMAL;
+
+ val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_802_11) |
+ FIELD_PREP(MT_TXD1_HDR_INFO,
+ ieee80211_get_hdrlen_from_skb(skb) / 2) |
+ FIELD_PREP(MT_TXD1_TID, tid);
+
+ if (!ieee80211_is_data(fc) || multicast ||
+ info->flags & IEEE80211_TX_CTL_USE_MINRATE)
+ val |= MT_TXD1_FIXED_RATE;
+
+ if (key && multicast && ieee80211_is_robust_mgmt_frame(skb) &&
+ key->cipher == WLAN_CIPHER_SUITE_AES_CMAC) {
+ val |= MT_TXD1_BIP;
+ txwi[3] &= ~cpu_to_le32(MT_TXD3_PROTECT_FRAME);
+ }
+
+ txwi[1] |= cpu_to_le32(val);
+
+ fc_type = (le16_to_cpu(fc) & IEEE80211_FCTL_FTYPE) >> 2;
+ fc_stype = (le16_to_cpu(fc) & IEEE80211_FCTL_STYPE) >> 4;
+
+ val = FIELD_PREP(MT_TXD2_FRAME_TYPE, fc_type) |
+ FIELD_PREP(MT_TXD2_SUB_TYPE, fc_stype);
+
+ txwi[2] |= cpu_to_le32(val);
+
+ txwi[3] |= cpu_to_le32(FIELD_PREP(MT_TXD3_BCM, multicast));
+ if (ieee80211_is_beacon(fc))
+ txwi[3] |= cpu_to_le32(MT_TXD3_REM_TX_COUNT);
+
+ if (info->flags & IEEE80211_TX_CTL_INJECTED) {
+ u16 seqno = le16_to_cpu(hdr->seq_ctrl);
+
+ if (ieee80211_is_back_req(hdr->frame_control)) {
+ struct ieee80211_bar *bar;
+
+ bar = (struct ieee80211_bar *)skb->data;
+ seqno = le16_to_cpu(bar->start_seq_num);
+ }
+
+ val = MT_TXD3_SN_VALID |
+ FIELD_PREP(MT_TXD3_SEQ, IEEE80211_SEQ_TO_SN(seqno));
+ txwi[3] |= cpu_to_le32(val);
+ txwi[3] &= ~cpu_to_le32(MT_TXD3_HW_AMSDU);
+ }
+}
+
+void
+mt7925_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
+ struct sk_buff *skb, struct mt76_wcid *wcid,
+ struct ieee80211_key_conf *key, int pid,
+ enum mt76_txq_id qid, u32 changed)
+{
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_vif *vif = info->control.vif;
+ u8 p_fmt, q_idx, omac_idx = 0, wmm_idx = 0, band_idx = 0;
+ u32 val, sz_txd = mt76_is_mmio(dev) ? MT_TXD_SIZE : MT_SDIO_TXD_SIZE;
+ bool is_8023 = info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP;
+ struct mt76_vif *mvif;
+ bool beacon = !!(changed & (BSS_CHANGED_BEACON |
+ BSS_CHANGED_BEACON_ENABLED));
+ bool inband_disc = !!(changed & (BSS_CHANGED_UNSOL_BCAST_PROBE_RESP |
+ BSS_CHANGED_FILS_DISCOVERY));
+
+ mvif = vif ? (struct mt76_vif *)vif->drv_priv : NULL;
+ if (mvif) {
+ omac_idx = mvif->omac_idx;
+ wmm_idx = mvif->wmm_idx;
+ band_idx = mvif->band_idx;
+ }
+
+ if (inband_disc) {
+ p_fmt = MT_TX_TYPE_FW;
+ q_idx = MT_LMAC_ALTX0;
+ } else if (beacon) {
+ p_fmt = MT_TX_TYPE_FW;
+ q_idx = MT_LMAC_BCN0;
+ } else if (qid >= MT_TXQ_PSD) {
+ p_fmt = mt76_is_mmio(dev) ? MT_TX_TYPE_CT : MT_TX_TYPE_SF;
+ q_idx = MT_LMAC_ALTX0;
+ } else {
+ p_fmt = mt76_is_mmio(dev) ? MT_TX_TYPE_CT : MT_TX_TYPE_SF;
+ q_idx = wmm_idx * MT76_CONNAC_MAX_WMM_SETS +
+ mt76_connac_lmac_mapping(skb_get_queue_mapping(skb));
+
+ /* counting non-offloading skbs */
+ wcid->stats.tx_bytes += skb->len;
+ wcid->stats.tx_packets++;
+ }
+
+ val = FIELD_PREP(MT_TXD0_TX_BYTES, skb->len + sz_txd) |
+ FIELD_PREP(MT_TXD0_PKT_FMT, p_fmt) |
+ FIELD_PREP(MT_TXD0_Q_IDX, q_idx);
+ txwi[0] = cpu_to_le32(val);
+
+ val = FIELD_PREP(MT_TXD1_WLAN_IDX, wcid->idx) |
+ FIELD_PREP(MT_TXD1_OWN_MAC, omac_idx);
+
+ if (band_idx)
+ val |= FIELD_PREP(MT_TXD1_TGID, band_idx);
+
+ txwi[1] = cpu_to_le32(val);
+ txwi[2] = 0;
+
+ val = FIELD_PREP(MT_TXD3_REM_TX_COUNT, 15);
+
+ if (key)
+ val |= MT_TXD3_PROTECT_FRAME;
+ if (info->flags & IEEE80211_TX_CTL_NO_ACK)
+ val |= MT_TXD3_NO_ACK;
+ if (wcid->amsdu)
+ val |= MT_TXD3_HW_AMSDU;
+
+ txwi[3] = cpu_to_le32(val);
+ txwi[4] = 0;
+
+ val = FIELD_PREP(MT_TXD5_PID, pid);
+ if (pid >= MT_PACKET_ID_FIRST) {
+ val |= MT_TXD5_TX_STATUS_HOST;
+ txwi[3] |= cpu_to_le32(MT_TXD3_BA_DISABLE);
+ txwi[3] &= ~cpu_to_le32(MT_TXD3_HW_AMSDU);
+ }
+
+ txwi[5] = cpu_to_le32(val);
+
+ val = MT_TXD6_DIS_MAT | MT_TXD6_DAS |
+ FIELD_PREP(MT_TXD6_MSDU_CNT, 1);
+ txwi[6] = cpu_to_le32(val);
+ txwi[7] = 0;
+
+ if (is_8023)
+ mt7925_mac_write_txwi_8023(txwi, skb, wcid);
+ else
+ mt7925_mac_write_txwi_80211(dev, txwi, skb, key);
+
+ if (txwi[1] & cpu_to_le32(MT_TXD1_FIXED_RATE)) {
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ bool mcast = ieee80211_is_data(hdr->frame_control) &&
+ is_multicast_ether_addr(hdr->addr1);
+ u8 idx = MT792x_BASIC_RATES_TBL;
+
+ if (mvif) {
+ if (mcast && mvif->mcast_rates_idx)
+ idx = mvif->mcast_rates_idx;
+ else if (beacon && mvif->beacon_rates_idx)
+ idx = mvif->beacon_rates_idx;
+ else
+ idx = mvif->basic_rates_idx;
+ }
+
+ txwi[6] |= cpu_to_le32(FIELD_PREP(MT_TXD6_TX_RATE, idx));
+ txwi[3] |= cpu_to_le32(MT_TXD3_BA_DISABLE);
+ }
+}
+EXPORT_SYMBOL_GPL(mt7925_mac_write_txwi);
+
+static void mt7925_tx_check_aggr(struct ieee80211_sta *sta, __le32 *txwi)
+{
+ struct mt792x_sta *msta;
+ u16 fc, tid;
+ u32 val;
+
+ if (!sta || !(sta->deflink.ht_cap.ht_supported || sta->deflink.he_cap.has_he))
+ return;
+
+ tid = le32_get_bits(txwi[1], MT_TXD1_TID);
+ if (tid >= 6) /* skip VO queue */
+ return;
+
+ val = le32_to_cpu(txwi[2]);
+ fc = FIELD_GET(MT_TXD2_FRAME_TYPE, val) << 2 |
+ FIELD_GET(MT_TXD2_SUB_TYPE, val) << 4;
+ if (unlikely(fc != (IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_DATA)))
+ return;
+
+ msta = (struct mt792x_sta *)sta->drv_priv;
+ if (!test_and_set_bit(tid, &msta->wcid.ampdu_state))
+ ieee80211_start_tx_ba_session(sta, tid, 0);
+}
+
+static bool
+mt7925_mac_add_txs_skb(struct mt792x_dev *dev, struct mt76_wcid *wcid,
+ int pid, __le32 *txs_data)
+{
+ struct mt76_sta_stats *stats = &wcid->stats;
+ struct ieee80211_supported_band *sband;
+ struct mt76_dev *mdev = &dev->mt76;
+ struct mt76_phy *mphy;
+ struct ieee80211_tx_info *info;
+ struct sk_buff_head list;
+ struct rate_info rate = {};
+ struct sk_buff *skb;
+ bool cck = false;
+ u32 txrate, txs, mode, stbc;
+
+ mt76_tx_status_lock(mdev, &list);
+ skb = mt76_tx_status_skb_get(mdev, wcid, pid, &list);
+ if (!skb)
+ goto out_no_skb;
+
+ txs = le32_to_cpu(txs_data[0]);
+
+ info = IEEE80211_SKB_CB(skb);
+ if (!(txs & MT_TXS0_ACK_ERROR_MASK))
+ info->flags |= IEEE80211_TX_STAT_ACK;
+
+ info->status.ampdu_len = 1;
+ info->status.ampdu_ack_len = !!(info->flags &
+ IEEE80211_TX_STAT_ACK);
+
+ info->status.rates[0].idx = -1;
+
+ txrate = FIELD_GET(MT_TXS0_TX_RATE, txs);
+
+ rate.mcs = FIELD_GET(MT_TX_RATE_IDX, txrate);
+ rate.nss = FIELD_GET(MT_TX_RATE_NSS, txrate) + 1;
+ stbc = le32_get_bits(txs_data[3], MT_TXS3_RATE_STBC);
+
+ if (stbc && rate.nss > 1)
+ rate.nss >>= 1;
+
+ if (rate.nss - 1 < ARRAY_SIZE(stats->tx_nss))
+ stats->tx_nss[rate.nss - 1]++;
+ if (rate.mcs < ARRAY_SIZE(stats->tx_mcs))
+ stats->tx_mcs[rate.mcs]++;
+
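+ /* translate the hardware rate mode and index into a cfg80211 rate_info */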
+ mode = FIELD_GET(MT_TX_RATE_MODE, txrate);
+ switch (mode) {
+ case MT_PHY_TYPE_CCK:
+ cck = true;
+ fallthrough;
+ case MT_PHY_TYPE_OFDM:
+ mphy = mt76_dev_phy(mdev, wcid->phy_idx);
+
+ if (mphy->chandef.chan->band == NL80211_BAND_5GHZ)
+ sband = &mphy->sband_5g.sband;
+ else if (mphy->chandef.chan->band == NL80211_BAND_6GHZ)
+ sband = &mphy->sband_6g.sband;
+ else
+ sband = &mphy->sband_2g.sband;
+
+ rate.mcs = mt76_get_rate(mphy->dev, sband, rate.mcs, cck);
+ rate.legacy = sband->bitrates[rate.mcs].bitrate;
+ break;
+ case MT_PHY_TYPE_HT:
+ case MT_PHY_TYPE_HT_GF:
+ if (rate.mcs > 31)
+ goto out;
+
+ rate.flags = RATE_INFO_FLAGS_MCS;
+ if (wcid->rate.flags & RATE_INFO_FLAGS_SHORT_GI)
+ rate.flags |= RATE_INFO_FLAGS_SHORT_GI;
+ break;
+ case MT_PHY_TYPE_VHT:
+ if (rate.mcs > 9)
+ goto out;
+
+ rate.flags = RATE_INFO_FLAGS_VHT_MCS;
+ break;
+ case MT_PHY_TYPE_HE_SU:
+ case MT_PHY_TYPE_HE_EXT_SU:
+ case MT_PHY_TYPE_HE_TB:
+ case MT_PHY_TYPE_HE_MU:
+ if (rate.mcs > 11)
+ goto out;
+
+ rate.he_gi = wcid->rate.he_gi;
+ rate.he_dcm = FIELD_GET(MT_TX_RATE_DCM, txrate);
+ rate.flags = RATE_INFO_FLAGS_HE_MCS;
+ break;
+ case MT_PHY_TYPE_EHT_SU:
+ case MT_PHY_TYPE_EHT_TRIG:
+ case MT_PHY_TYPE_EHT_MU:
+ if (rate.mcs > 13)
+ goto out;
+
+ rate.eht_gi = wcid->rate.eht_gi;
+ rate.flags = RATE_INFO_FLAGS_EHT_MCS;
+ break;
+ default:
+ goto out;
+ }
+
+ stats->tx_mode[mode]++;
+
+ switch (FIELD_GET(MT_TXS0_BW, txs)) {
+ case IEEE80211_STA_RX_BW_160:
+ rate.bw = RATE_INFO_BW_160;
+ stats->tx_bw[3]++;
+ break;
+ case IEEE80211_STA_RX_BW_80:
+ rate.bw = RATE_INFO_BW_80;
+ stats->tx_bw[2]++;
+ break;
+ case IEEE80211_STA_RX_BW_40:
+ rate.bw = RATE_INFO_BW_40;
+ stats->tx_bw[1]++;
+ break;
+ default:
+ rate.bw = RATE_INFO_BW_20;
+ stats->tx_bw[0]++;
+ break;
+ }
+ wcid->rate = rate;
+
+out:
+ mt76_tx_status_skb_done(mdev, skb, &list);
+
+out_no_skb:
+ mt76_tx_status_unlock(mdev, &list);
+
+ return !!skb;
+}
+
+void mt7925_mac_add_txs(struct mt792x_dev *dev, void *data)
+{
+ struct mt792x_sta *msta = NULL;
+ struct mt76_wcid *wcid;
+ __le32 *txs_data = data;
+ u16 wcidx;
+ u8 pid;
+
+ if (le32_get_bits(txs_data[0], MT_TXS0_TXS_FORMAT) > 1)
+ return;
+
+ wcidx = le32_get_bits(txs_data[2], MT_TXS2_WCID);
+ pid = le32_get_bits(txs_data[3], MT_TXS3_PID);
+
+ if (pid < MT_PACKET_ID_FIRST)
+ return;
+
+ if (wcidx >= MT792x_WTBL_SIZE)
+ return;
+
+ rcu_read_lock();
+
+ wcid = rcu_dereference(dev->mt76.wcid[wcidx]);
+ if (!wcid)
+ goto out;
+
+ msta = container_of(wcid, struct mt792x_sta, wcid);
+
+ mt7925_mac_add_txs_skb(dev, wcid, pid, txs_data);
+ if (!wcid->sta)
+ goto out;
+
+ spin_lock_bh(&dev->mt76.sta_poll_lock);
+ if (list_empty(&msta->wcid.poll_list))
+ list_add_tail(&msta->wcid.poll_list, &dev->mt76.sta_poll_list);
+ spin_unlock_bh(&dev->mt76.sta_poll_lock);
+
+out:
+ rcu_read_unlock();
+}
+
+void mt7925_txwi_free(struct mt792x_dev *dev, struct mt76_txwi_cache *t,
+ struct ieee80211_sta *sta, bool clear_status,
+ struct list_head *free_list)
+{
+ struct mt76_dev *mdev = &dev->mt76;
+ __le32 *txwi;
+ u16 wcid_idx;
+
+ mt76_connac_txp_skb_unmap(mdev, t);
+ if (!t->skb)
+ goto out;
+
+ txwi = (__le32 *)mt76_get_txwi_ptr(mdev, t);
+ if (sta) {
+ struct mt76_wcid *wcid = (struct mt76_wcid *)sta->drv_priv;
+
+ if (likely(t->skb->protocol != cpu_to_be16(ETH_P_PAE)))
+ mt7925_tx_check_aggr(sta, txwi);
+
+ wcid_idx = wcid->idx;
+ } else {
+ wcid_idx = le32_get_bits(txwi[1], MT_TXD1_WLAN_IDX);
+ }
+
+ __mt76_tx_complete_skb(mdev, wcid_idx, t->skb, free_list);
+out:
+ t->skb = NULL;
+ mt76_put_txwi(mdev, t);
+}
+EXPORT_SYMBOL_GPL(mt7925_txwi_free);
+
+static void
+mt7925_mac_tx_free(struct mt792x_dev *dev, void *data, int len)
+{
+ __le32 *tx_free = (__le32 *)data, *cur_info;
+ struct mt76_dev *mdev = &dev->mt76;
+ struct mt76_txwi_cache *txwi;
+ struct ieee80211_sta *sta = NULL;
+ struct mt76_wcid *wcid = NULL;
+ LIST_HEAD(free_list);
+ struct sk_buff *skb, *tmp;
+ void *end = data + len;
+ bool wake = false;
+ u16 total, count = 0;
+
+ /* clean DMA queues and unmap buffers first */
+ mt76_queue_tx_cleanup(dev, dev->mphy.q_tx[MT_TXQ_PSD], false);
+ mt76_queue_tx_cleanup(dev, dev->mphy.q_tx[MT_TXQ_BE], false);
+
+ if (WARN_ON_ONCE(le32_get_bits(tx_free[1], MT_TXFREE1_VER) < 4))
+ return;
+
+ total = le32_get_bits(tx_free[0], MT_TXFREE0_MSDU_CNT);
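+ /* each following word is either a new wcid pair, a per-wcid stats word,
+ * or up to two released MSDU ids
+ */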
+ for (cur_info = &tx_free[2]; count < total; cur_info++) {
+ u32 msdu, info;
+ u8 i;
+
+ if (WARN_ON_ONCE((void *)cur_info >= end))
+ return;
+ /* 1'b1: new wcid pair.
+ * 1'b0: msdu_id with the same 'wcid pair' as above.
+ */
+ info = le32_to_cpu(*cur_info);
+ if (info & MT_TXFREE_INFO_PAIR) {
+ struct mt792x_sta *msta;
+ u16 idx;
+
+ idx = FIELD_GET(MT_TXFREE_INFO_WLAN_ID, info);
+ wcid = rcu_dereference(dev->mt76.wcid[idx]);
+ sta = wcid_to_sta(wcid);
+ if (!sta)
+ continue;
+
+ msta = container_of(wcid, struct mt792x_sta, wcid);
+ spin_lock_bh(&mdev->sta_poll_lock);
+ if (list_empty(&msta->wcid.poll_list))
+ list_add_tail(&msta->wcid.poll_list,
+ &mdev->sta_poll_list);
+ spin_unlock_bh(&mdev->sta_poll_lock);
+ continue;
+ }
+
+ if (info & MT_TXFREE_INFO_HEADER) {
+ if (wcid) {
+ wcid->stats.tx_retries +=
+ FIELD_GET(MT_TXFREE_INFO_COUNT, info) - 1;
+ wcid->stats.tx_failed +=
+ !!FIELD_GET(MT_TXFREE_INFO_STAT, info);
+ }
+ continue;
+ }
+
+ for (i = 0; i < 2; i++) {
+ msdu = (info >> (15 * i)) & MT_TXFREE_INFO_MSDU_ID;
+ if (msdu == MT_TXFREE_INFO_MSDU_ID)
+ continue;
+
+ count++;
+ txwi = mt76_token_release(mdev, msdu, &wake);
+ if (!txwi)
+ continue;
+
+ mt7925_txwi_free(dev, txwi, sta, 0, &free_list);
+ }
+ }
+
+ mt7925_mac_sta_poll(dev);
+
+ if (wake)
+ mt76_set_tx_blocked(&dev->mt76, false);
+
+ mt76_worker_schedule(&dev->mt76.tx_worker);
+
+ list_for_each_entry_safe(skb, tmp, &free_list, list) {
+ skb_list_del_init(skb);
+ napi_consume_skb(skb, 1);
+ }
+}
+
+bool mt7925_rx_check(struct mt76_dev *mdev, void *data, int len)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ __le32 *rxd = (__le32 *)data;
+ __le32 *end = (__le32 *)&rxd[len / 4];
+ enum rx_pkt_type type;
+
+ type = le32_get_bits(rxd[0], MT_RXD0_PKT_TYPE);
+ if (type != PKT_TYPE_NORMAL) {
+ u32 sw_type = le32_get_bits(rxd[0], MT_RXD0_SW_PKT_TYPE_MASK);
+
+ if (unlikely((sw_type & MT_RXD0_SW_PKT_TYPE_MAP) ==
+ MT_RXD0_SW_PKT_TYPE_FRAME))
+ return true;
+ }
+
+ switch (type) {
+ case PKT_TYPE_TXRX_NOTIFY:
+ /* PKT_TYPE_TXRX_NOTIFY can be received only by mmio devices */
+ mt7925_mac_tx_free(dev, data, len); /* mmio */
+ return false;
+ case PKT_TYPE_TXS:
+ for (rxd += 4; rxd + 12 <= end; rxd += 12)
+ mt7925_mac_add_txs(dev, rxd);
+ return false;
+ default:
+ return true;
+ }
+}
+EXPORT_SYMBOL_GPL(mt7925_rx_check);
+
+void mt7925_queue_rx_skb(struct mt76_dev *mdev, enum mt76_rxq_id q,
+ struct sk_buff *skb, u32 *info)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ __le32 *rxd = (__le32 *)skb->data;
+ __le32 *end = (__le32 *)&skb->data[skb->len];
+ enum rx_pkt_type type;
+ u16 flag;
+
+ type = le32_get_bits(rxd[0], MT_RXD0_PKT_TYPE);
+ flag = le32_get_bits(rxd[0], MT_RXD0_PKT_FLAG);
+ if (type != PKT_TYPE_NORMAL) {
+ u32 sw_type = le32_get_bits(rxd[0], MT_RXD0_SW_PKT_TYPE_MASK);
+
+ if (unlikely((sw_type & MT_RXD0_SW_PKT_TYPE_MAP) ==
+ MT_RXD0_SW_PKT_TYPE_FRAME))
+ type = PKT_TYPE_NORMAL;
+ }
+
+ if (type == PKT_TYPE_RX_EVENT && flag == 0x1)
+ type = PKT_TYPE_NORMAL_MCU;
+
+ switch (type) {
+ case PKT_TYPE_TXRX_NOTIFY:
+ /* PKT_TYPE_TXRX_NOTIFY can be received only by mmio devices */
+ mt7925_mac_tx_free(dev, skb->data, skb->len);
+ napi_consume_skb(skb, 1);
+ break;
+ case PKT_TYPE_RX_EVENT:
+ mt7925_mcu_rx_event(dev, skb);
+ break;
+ case PKT_TYPE_TXS:
+ for (rxd += 2; rxd + 8 <= end; rxd += 8)
+ mt7925_mac_add_txs(dev, rxd);
+ dev_kfree_skb(skb);
+ break;
+ case PKT_TYPE_NORMAL_MCU:
+ case PKT_TYPE_NORMAL:
+ if (!mt7925_mac_fill_rx(dev, skb)) {
+ mt76_rx(&dev->mt76, q, skb);
+ return;
+ }
+ fallthrough;
+ default:
+ dev_kfree_skb(skb);
+ break;
+ }
+}
+EXPORT_SYMBOL_GPL(mt7925_queue_rx_skb);
+
+static void
+mt7925_vif_connect_iter(void *priv, u8 *mac,
+ struct ieee80211_vif *vif)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_dev *dev = mvif->phy->dev;
+ struct ieee80211_hw *hw = mt76_hw(dev);
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+ ieee80211_disconnect(vif, true);
+
+ mt76_connac_mcu_uni_add_dev(&dev->mphy, vif, &mvif->sta.wcid, true);
+ mt7925_mcu_set_tx(dev, vif);
+
+ if (vif->type == NL80211_IFTYPE_AP) {
+ mt76_connac_mcu_uni_add_bss(dev->phy.mt76, vif, &mvif->sta.wcid,
+ true, NULL);
+ mt7925_mcu_sta_update(dev, NULL, vif, true,
+ MT76_STA_INFO_STATE_NONE);
+ mt7925_mcu_uni_add_beacon_offload(dev, hw, vif, true);
+ }
+}
+
+/* system error recovery */
+void mt7925_mac_reset_work(struct work_struct *work)
+{
+ struct mt792x_dev *dev = container_of(work, struct mt792x_dev,
+ reset_work);
+ struct ieee80211_hw *hw = mt76_hw(dev);
+ struct mt76_connac_pm *pm = &dev->pm;
+ int i, ret;
+
+ dev_dbg(dev->mt76.dev, "chip reset\n");
+ dev->hw_full_reset = true;
+ ieee80211_stop_queues(hw);
+
+ cancel_delayed_work_sync(&dev->mphy.mac_work);
+ cancel_delayed_work_sync(&pm->ps_work);
+ cancel_work_sync(&pm->wake_work);
+
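+ /* retry the full chip reset a few times before reporting failure */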
+ for (i = 0; i < 10; i++) {
+ mutex_lock(&dev->mt76.mutex);
+ ret = mt792x_dev_reset(dev);
+ mutex_unlock(&dev->mt76.mutex);
+
+ if (!ret)
+ break;
+ }
+
+ if (i == 10)
+ dev_err(dev->mt76.dev, "chip reset failed\n");
+
+ if (test_and_clear_bit(MT76_HW_SCANNING, &dev->mphy.state)) {
+ struct cfg80211_scan_info info = {
+ .aborted = true,
+ };
+
+ ieee80211_scan_completed(dev->mphy.hw, &info);
+ }
+
+ dev->hw_full_reset = false;
+ pm->suspended = false;
+ ieee80211_wake_queues(hw);
+ ieee80211_iterate_active_interfaces(hw,
+ IEEE80211_IFACE_ITER_RESUME_ALL,
+ mt7925_vif_connect_iter, NULL);
+ mt76_connac_power_save_sched(&dev->mt76.phy, pm);
+}
+
+void mt7925_coredump_work(struct work_struct *work)
+{
+ struct mt792x_dev *dev;
+ char *dump, *data;
+
+ dev = (struct mt792x_dev *)container_of(work, struct mt792x_dev,
+ coredump.work.work);
+
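+ /* keep deferring the dump while coredump fragments are still arriving */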
+ if (time_is_after_jiffies(dev->coredump.last_activity +
+ 4 * MT76_CONNAC_COREDUMP_TIMEOUT)) {
+ queue_delayed_work(dev->mt76.wq, &dev->coredump.work,
+ MT76_CONNAC_COREDUMP_TIMEOUT);
+ return;
+ }
+
+ dump = vzalloc(MT76_CONNAC_COREDUMP_SZ);
+ data = dump;
+
+ while (true) {
+ struct sk_buff *skb;
+
+ spin_lock_bh(&dev->mt76.lock);
+ skb = __skb_dequeue(&dev->coredump.msg_list);
+ spin_unlock_bh(&dev->mt76.lock);
+
+ if (!skb)
+ break;
+
+ skb_pull(skb, sizeof(struct mt7925_mcu_rxd) + 8);
+ if (!dump || data + skb->len - dump > MT76_CONNAC_COREDUMP_SZ) {
+ dev_kfree_skb(skb);
+ continue;
+ }
+
+ memcpy(data, skb->data, skb->len);
+ data += skb->len;
+
+ dev_kfree_skb(skb);
+ }
+
+ if (dump)
+ dev_coredumpv(dev->mt76.dev, dump, MT76_CONNAC_COREDUMP_SZ,
+ GFP_KERNEL);
+
+ mt792x_reset(&dev->mt76);
+}
+
+/* usb_sdio */
+static void
+mt7925_usb_sdio_write_txwi(struct mt792x_dev *dev, struct mt76_wcid *wcid,
+ enum mt76_txq_id qid, struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key, int pid,
+ struct sk_buff *skb)
+{
+ __le32 *txwi = (__le32 *)(skb->data - MT_SDIO_TXD_SIZE);
+
+ memset(txwi, 0, MT_SDIO_TXD_SIZE);
+ mt7925_mac_write_txwi(&dev->mt76, txwi, skb, wcid, key, pid, qid, 0);
+ skb_push(skb, MT_SDIO_TXD_SIZE);
+}
+
+int mt7925_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ enum mt76_txq_id qid, struct mt76_wcid *wcid,
+ struct ieee80211_sta *sta,
+ struct mt76_tx_info *tx_info)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx_info->skb);
+ struct ieee80211_key_conf *key = info->control.hw_key;
+ struct sk_buff *skb = tx_info->skb;
+ int err, pad, pktid;
+
+ if (unlikely(tx_info->skb->len <= ETH_HLEN))
+ return -EINVAL;
+
+ if (!wcid)
+ wcid = &dev->mt76.global_wcid;
+
+ if (sta) {
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+
+ if (time_after(jiffies, msta->last_txs + HZ / 4)) {
+ info->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
+ msta->last_txs = jiffies;
+ }
+ }
+
+ pktid = mt76_tx_status_skb_add(&dev->mt76, wcid, skb);
+ mt7925_usb_sdio_write_txwi(dev, wcid, qid, sta, key, pktid, skb);
+
+ mt792x_skb_add_usb_sdio_hdr(dev, skb, 0);
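+ /* pad the frame to a 4-byte boundary; USB needs 4 extra bytes of tail padding */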
+ pad = round_up(skb->len, 4) - skb->len;
+ if (mt76_is_usb(mdev))
+ pad += 4;
+
+ err = mt76_skb_adjust_pad(skb, pad);
+ if (err)
+ /* Release pktid in case of error. */
+ idr_remove(&wcid->pktid, pktid);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(mt7925_usb_sdio_tx_prepare_skb);
+
+void mt7925_usb_sdio_tx_complete_skb(struct mt76_dev *mdev,
+ struct mt76_queue_entry *e)
+{
+ __le32 *txwi = (__le32 *)(e->skb->data + MT_SDIO_HDR_SIZE);
+ unsigned int headroom = MT_SDIO_TXD_SIZE + MT_SDIO_HDR_SIZE;
+ struct ieee80211_sta *sta;
+ struct mt76_wcid *wcid;
+ u16 idx;
+
+ idx = le32_get_bits(txwi[1], MT_TXD1_WLAN_IDX);
+ wcid = rcu_dereference(mdev->wcid[idx]);
+ sta = wcid_to_sta(wcid);
+
+ if (sta && likely(e->skb->protocol != cpu_to_be16(ETH_P_PAE)))
+ mt7925_tx_check_aggr(sta, txwi);
+
+ skb_pull(e->skb, headroom);
+ mt76_tx_complete_skb(mdev, e->wcid, e->skb);
+}
+EXPORT_SYMBOL_GPL(mt7925_usb_sdio_tx_complete_skb);
+
+bool mt7925_usb_sdio_tx_status_data(struct mt76_dev *mdev, u8 *update)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+
+ mt792x_mutex_acquire(dev);
+ mt7925_mac_sta_poll(dev);
+ mt792x_mutex_release(dev);
+
+ return false;
+}
+EXPORT_SYMBOL_GPL(mt7925_usb_sdio_tx_status_data);
+
+#if IS_ENABLED(CONFIG_IPV6)
+void mt7925_set_ipv6_ns_work(struct work_struct *work)
+{
+ struct mt792x_dev *dev = container_of(work, struct mt792x_dev,
+ ipv6_ns_work);
+ struct sk_buff *skb;
+ int ret = 0;
+
+ do {
+ skb = skb_dequeue(&dev->ipv6_ns_list);
+
+ if (!skb)
+ break;
+
+ mt792x_mutex_acquire(dev);
+ ret = mt76_mcu_skb_send_msg(&dev->mt76, skb,
+ MCU_UNI_CMD(OFFLOAD), true);
+ mt792x_mutex_release(dev);
+
+ } while (!ret);
+
+ if (ret)
+ skb_queue_purge(&dev->ipv6_ns_list);
+}
+#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mac.h b/drivers/net/wireless/mediatek/mt76/mt7925/mac.h
new file mode 100644
index 000000000000..b10a993326b9
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/mac.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: ISC */
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#ifndef __MT7925_MAC_H
+#define __MT7925_MAC_H
+
+#include "../mt76_connac3_mac.h"
+
+#define MT_WTBL_TXRX_CAP_RATE_OFFSET 7
+#define MT_WTBL_TXRX_RATE_G2_HE 24
+#define MT_WTBL_TXRX_RATE_G2 12
+
+#define MT_WTBL_AC0_CTT_OFFSET 20
+
+static inline u32 mt7925_mac_wtbl_lmac_addr(struct mt792x_dev *dev, u16 wcid, u8 dw)
+{
+ mt76_wr(dev, MT_WTBLON_TOP_WDUCR,
+ FIELD_PREP(MT_WTBLON_TOP_WDUCR_GROUP, (wcid >> 7)));
+
+ return MT_WTBL_LMAC_OFFS(wcid, dw);
+}
+
+#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/main.c b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
new file mode 100644
index 000000000000..15c2fb0bcb1b
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
@@ -0,0 +1,1454 @@
+// SPDX-License-Identifier: ISC
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#include <linux/etherdevice.h>
+#include <linux/platform_device.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <linux/ctype.h>
+#include <net/ipv6.h>
+#include "mt7925.h"
+#include "mcu.h"
+#include "mac.h"
+
+static void
+mt7925_init_he_caps(struct mt792x_phy *phy, enum nl80211_band band,
+ struct ieee80211_sband_iftype_data *data,
+ enum nl80211_iftype iftype)
+{
+ struct ieee80211_sta_he_cap *he_cap = &data->he_cap;
+ struct ieee80211_he_cap_elem *he_cap_elem = &he_cap->he_cap_elem;
+ struct ieee80211_he_mcs_nss_supp *he_mcs = &he_cap->he_mcs_nss_supp;
+ int i, nss = hweight8(phy->mt76->antenna_mask);
+ u16 mcs_map = 0;
+
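+ /* advertise HE MCS 0-11 on each available spatial stream */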
+ for (i = 0; i < 8; i++) {
+ if (i < nss)
+ mcs_map |= (IEEE80211_HE_MCS_SUPPORT_0_11 << (i * 2));
+ else
+ mcs_map |= (IEEE80211_HE_MCS_NOT_SUPPORTED << (i * 2));
+ }
+
+ he_cap->has_he = true;
+
+ he_cap_elem->mac_cap_info[0] = IEEE80211_HE_MAC_CAP0_HTC_HE;
+ he_cap_elem->mac_cap_info[3] = IEEE80211_HE_MAC_CAP3_OMI_CONTROL |
+ IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_EXT_3;
+ he_cap_elem->mac_cap_info[4] = IEEE80211_HE_MAC_CAP4_AMSDU_IN_AMPDU;
+
+ if (band == NL80211_BAND_2GHZ)
+ he_cap_elem->phy_cap_info[0] =
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_IN_2G;
+ else
+ he_cap_elem->phy_cap_info[0] =
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_40MHZ_80MHZ_IN_5G |
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_160MHZ_IN_5G;
+
+ he_cap_elem->phy_cap_info[1] =
+ IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD;
+ he_cap_elem->phy_cap_info[2] =
+ IEEE80211_HE_PHY_CAP2_NDP_4x_LTF_AND_3_2US |
+ IEEE80211_HE_PHY_CAP2_STBC_TX_UNDER_80MHZ |
+ IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ |
+ IEEE80211_HE_PHY_CAP2_UL_MU_FULL_MU_MIMO |
+ IEEE80211_HE_PHY_CAP2_UL_MU_PARTIAL_MU_MIMO;
+
+ switch (iftype) {
+ case NL80211_IFTYPE_AP:
+ he_cap_elem->mac_cap_info[2] |=
+ IEEE80211_HE_MAC_CAP2_BSR;
+ he_cap_elem->mac_cap_info[4] |=
+ IEEE80211_HE_MAC_CAP4_BQR;
+ he_cap_elem->mac_cap_info[5] |=
+ IEEE80211_HE_MAC_CAP5_OM_CTRL_UL_MU_DATA_DIS_RX;
+ he_cap_elem->phy_cap_info[3] |=
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_QPSK |
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_QPSK;
+ he_cap_elem->phy_cap_info[6] |=
+ IEEE80211_HE_PHY_CAP6_PARTIAL_BW_EXT_RANGE |
+ IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT;
+ he_cap_elem->phy_cap_info[9] |=
+ IEEE80211_HE_PHY_CAP9_TX_1024_QAM_LESS_THAN_242_TONE_RU |
+ IEEE80211_HE_PHY_CAP9_RX_1024_QAM_LESS_THAN_242_TONE_RU;
+ break;
+ case NL80211_IFTYPE_STATION:
+ he_cap_elem->mac_cap_info[1] |=
+ IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US;
+
+ if (band == NL80211_BAND_2GHZ)
+ he_cap_elem->phy_cap_info[0] |=
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_RU_MAPPING_IN_2G;
+ else
+ he_cap_elem->phy_cap_info[0] |=
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_RU_MAPPING_IN_5G;
+
+ he_cap_elem->phy_cap_info[1] |=
+ IEEE80211_HE_PHY_CAP1_DEVICE_CLASS_A |
+ IEEE80211_HE_PHY_CAP1_HE_LTF_AND_GI_FOR_HE_PPDUS_0_8US;
+ he_cap_elem->phy_cap_info[3] |=
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_TX_QPSK |
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_QPSK;
+ he_cap_elem->phy_cap_info[4] |=
+ IEEE80211_HE_PHY_CAP4_SU_BEAMFORMEE |
+ IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_UNDER_80MHZ_4 |
+ IEEE80211_HE_PHY_CAP4_BEAMFORMEE_MAX_STS_ABOVE_80MHZ_4;
+ he_cap_elem->phy_cap_info[5] |=
+ IEEE80211_HE_PHY_CAP5_NG16_SU_FEEDBACK |
+ IEEE80211_HE_PHY_CAP5_NG16_MU_FEEDBACK;
+ he_cap_elem->phy_cap_info[6] |=
+ IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_42_SU |
+ IEEE80211_HE_PHY_CAP6_CODEBOOK_SIZE_75_MU |
+ IEEE80211_HE_PHY_CAP6_TRIG_CQI_FB |
+ IEEE80211_HE_PHY_CAP6_PARTIAL_BW_EXT_RANGE |
+ IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT;
+ he_cap_elem->phy_cap_info[7] |=
+ IEEE80211_HE_PHY_CAP7_POWER_BOOST_FACTOR_SUPP |
+ IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI;
+ he_cap_elem->phy_cap_info[8] |=
+ IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G |
+ IEEE80211_HE_PHY_CAP8_20MHZ_IN_160MHZ_HE_PPDU |
+ IEEE80211_HE_PHY_CAP8_80MHZ_IN_160MHZ_HE_PPDU |
+ IEEE80211_HE_PHY_CAP8_DCM_MAX_RU_484;
+ he_cap_elem->phy_cap_info[9] |=
+ IEEE80211_HE_PHY_CAP9_LONGER_THAN_16_SIGB_OFDM_SYM |
+ IEEE80211_HE_PHY_CAP9_NON_TRIGGERED_CQI_FEEDBACK |
+ IEEE80211_HE_PHY_CAP9_TX_1024_QAM_LESS_THAN_242_TONE_RU |
+ IEEE80211_HE_PHY_CAP9_RX_1024_QAM_LESS_THAN_242_TONE_RU |
+ IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_COMP_SIGB |
+ IEEE80211_HE_PHY_CAP9_RX_FULL_BW_SU_USING_MU_WITH_NON_COMP_SIGB;
+ break;
+ default:
+ break;
+ }
+
+ he_mcs->rx_mcs_80 = cpu_to_le16(mcs_map);
+ he_mcs->tx_mcs_80 = cpu_to_le16(mcs_map);
+ he_mcs->rx_mcs_160 = cpu_to_le16(mcs_map);
+ he_mcs->tx_mcs_160 = cpu_to_le16(mcs_map);
+
+ memset(he_cap->ppe_thres, 0, sizeof(he_cap->ppe_thres));
+
+ if (he_cap_elem->phy_cap_info[6] &
+ IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT) {
+ mt76_connac_gen_ppe_thresh(he_cap->ppe_thres, nss);
+ } else {
+ he_cap_elem->phy_cap_info[9] |=
+ u8_encode_bits(IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_16US,
+ IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_MASK);
+ }
+
+ if (band == NL80211_BAND_6GHZ) {
+ u16 cap = IEEE80211_HE_6GHZ_CAP_TX_ANTPAT_CONS |
+ IEEE80211_HE_6GHZ_CAP_RX_ANTPAT_CONS;
+
+ cap |= u16_encode_bits(IEEE80211_HT_MPDU_DENSITY_0_5,
+ IEEE80211_HE_6GHZ_CAP_MIN_MPDU_START) |
+ u16_encode_bits(IEEE80211_VHT_MAX_AMPDU_1024K,
+ IEEE80211_HE_6GHZ_CAP_MAX_AMPDU_LEN_EXP) |
+ u16_encode_bits(IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454,
+ IEEE80211_HE_6GHZ_CAP_MAX_MPDU_LEN);
+
+ data->he_6ghz_capa.capa = cpu_to_le16(cap);
+ }
+}
+
+static void
+mt7925_init_eht_caps(struct mt792x_phy *phy, enum nl80211_band band,
+ struct ieee80211_sband_iftype_data *data,
+ enum nl80211_iftype iftype)
+{
+ struct ieee80211_sta_eht_cap *eht_cap = &data->eht_cap;
+ struct ieee80211_eht_cap_elem_fixed *eht_cap_elem = &eht_cap->eht_cap_elem;
+ struct ieee80211_eht_mcs_nss_supp *eht_nss = &eht_cap->eht_mcs_nss_supp;
+ enum nl80211_chan_width width = phy->mt76->chandef.width;
+ int nss = hweight8(phy->mt76->antenna_mask);
+ int sts = hweight16(phy->mt76->chainmask);
+ u8 val;
+
+ if (!phy->dev->has_eht)
+ return;
+
+ eht_cap->has_eht = true;
+
+ eht_cap_elem->mac_cap_info[0] =
+ IEEE80211_EHT_MAC_CAP0_EPCS_PRIO_ACCESS |
+ IEEE80211_EHT_MAC_CAP0_OM_CONTROL;
+
+ eht_cap_elem->phy_cap_info[0] =
+ IEEE80211_EHT_PHY_CAP0_NDP_4_EHT_LFT_32_GI |
+ IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMER |
+ IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMEE;
+
+ eht_cap_elem->phy_cap_info[0] |=
+ u8_encode_bits(u8_get_bits(sts - 1, BIT(0)),
+ IEEE80211_EHT_PHY_CAP0_BEAMFORMEE_SS_80MHZ_MASK);
+
+ eht_cap_elem->phy_cap_info[1] =
+ u8_encode_bits(u8_get_bits(sts - 1, GENMASK(2, 1)),
+ IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_80MHZ_MASK) |
+ u8_encode_bits(sts - 1,
+ IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_160MHZ_MASK);
+
+ eht_cap_elem->phy_cap_info[2] =
+ u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_80MHZ_MASK) |
+ u8_encode_bits(sts - 1, IEEE80211_EHT_PHY_CAP2_SOUNDING_DIM_160MHZ_MASK);
+
+ eht_cap_elem->phy_cap_info[3] =
+ IEEE80211_EHT_PHY_CAP3_NG_16_SU_FEEDBACK |
+ IEEE80211_EHT_PHY_CAP3_NG_16_MU_FEEDBACK |
+ IEEE80211_EHT_PHY_CAP3_CODEBOOK_4_2_SU_FDBK |
+ IEEE80211_EHT_PHY_CAP3_CODEBOOK_7_5_MU_FDBK |
+ IEEE80211_EHT_PHY_CAP3_TRIG_SU_BF_FDBK |
+ IEEE80211_EHT_PHY_CAP3_TRIG_MU_BF_PART_BW_FDBK |
+ IEEE80211_EHT_PHY_CAP3_TRIG_CQI_FDBK;
+
+ eht_cap_elem->phy_cap_info[4] =
+ u8_encode_bits(min_t(int, sts - 1, 2),
+ IEEE80211_EHT_PHY_CAP4_MAX_NC_MASK);
+
+ eht_cap_elem->phy_cap_info[5] =
+ IEEE80211_EHT_PHY_CAP5_NON_TRIG_CQI_FEEDBACK |
+ u8_encode_bits(IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_16US,
+ IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_MASK) |
+ u8_encode_bits(u8_get_bits(0x11, GENMASK(1, 0)),
+ IEEE80211_EHT_PHY_CAP5_MAX_NUM_SUPP_EHT_LTF_MASK);
+
+ val = width == NL80211_CHAN_WIDTH_160 ? 0x7 :
+ width == NL80211_CHAN_WIDTH_80 ? 0x3 : 0x1;
+ eht_cap_elem->phy_cap_info[6] =
+ u8_encode_bits(u8_get_bits(0x11, GENMASK(4, 2)),
+ IEEE80211_EHT_PHY_CAP6_MAX_NUM_SUPP_EHT_LTF_MASK) |
+ u8_encode_bits(val, IEEE80211_EHT_PHY_CAP6_MCS15_SUPP_MASK);
+
+ eht_cap_elem->phy_cap_info[7] =
+ IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_80MHZ |
+ IEEE80211_EHT_PHY_CAP7_NON_OFDMA_UL_MU_MIMO_160MHZ |
+ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_80MHZ |
+ IEEE80211_EHT_PHY_CAP7_MU_BEAMFORMER_160MHZ;
+
+ val = u8_encode_bits(nss, IEEE80211_EHT_MCS_NSS_RX) |
+ u8_encode_bits(nss, IEEE80211_EHT_MCS_NSS_TX);
+
+ eht_nss->bw._80.rx_tx_mcs9_max_nss = val;
+ eht_nss->bw._80.rx_tx_mcs11_max_nss = val;
+ eht_nss->bw._80.rx_tx_mcs13_max_nss = val;
+ eht_nss->bw._160.rx_tx_mcs9_max_nss = val;
+ eht_nss->bw._160.rx_tx_mcs11_max_nss = val;
+ eht_nss->bw._160.rx_tx_mcs13_max_nss = val;
+}
+
+static void
+__mt7925_set_stream_he_eht_caps(struct mt792x_phy *phy,
+ struct ieee80211_supported_band *sband,
+ enum nl80211_band band)
+{
+ struct ieee80211_sband_iftype_data *data = phy->iftype[band];
+ int i, n = 0;
+
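+ /* only station and AP interface types carry per-iftype HE/EHT capabilities */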
+ for (i = 0; i < NUM_NL80211_IFTYPES; i++) {
+ switch (i) {
+ case NL80211_IFTYPE_STATION:
+ case NL80211_IFTYPE_AP:
+ break;
+ default:
+ continue;
+ }
+
+ data[n].types_mask = BIT(i);
+ mt7925_init_he_caps(phy, band, &data[n], i);
+ mt7925_init_eht_caps(phy, band, &data[n], i);
+
+ n++;
+ }
+
+ _ieee80211_set_sband_iftype_data(sband, data, n);
+}
+
+void mt7925_set_stream_he_eht_caps(struct mt792x_phy *phy)
+{
+ if (phy->mt76->cap.has_2ghz)
+ __mt7925_set_stream_he_eht_caps(phy, &phy->mt76->sband_2g.sband,
+ NL80211_BAND_2GHZ);
+
+ if (phy->mt76->cap.has_5ghz)
+ __mt7925_set_stream_he_eht_caps(phy, &phy->mt76->sband_5g.sband,
+ NL80211_BAND_5GHZ);
+
+ if (phy->mt76->cap.has_6ghz)
+ __mt7925_set_stream_he_eht_caps(phy, &phy->mt76->sband_6g.sband,
+ NL80211_BAND_6GHZ);
+}
+
+int __mt7925_start(struct mt792x_phy *phy)
+{
+ struct mt76_phy *mphy = phy->mt76;
+ int err;
+
+ err = mt7925_mcu_set_channel_domain(mphy);
+ if (err)
+ return err;
+
+ err = mt7925_mcu_set_rts_thresh(phy, 0x92b);
+ if (err)
+ return err;
+
+ err = mt7925_set_tx_sar_pwr(mphy->hw, NULL);
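+ /* frames tracked with a host packet id request a TXS report and are sent
+ * without BA or hardware A-MSDU
+ */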
+ if (err)
+ return err;
+
+ mt792x_mac_reset_counters(phy);
+ set_bit(MT76_STATE_RUNNING, &mphy->state);
+
+ ieee80211_queue_delayed_work(mphy->hw, &mphy->mac_work,
+ MT792x_WATCHDOG_TIME);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(__mt7925_start);
+
+static int mt7925_start(struct ieee80211_hw *hw)
+{
+ struct mt792x_phy *phy = mt792x_hw_phy(hw);
+ int err;
+
+ mt792x_mutex_acquire(phy->dev);
+ err = __mt7925_start(phy);
+ mt792x_mutex_release(phy->dev);
+
+ return err;
+}
+
+static int
+mt7925_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct mt792x_phy *phy = mt792x_hw_phy(hw);
+ struct mt76_txq *mtxq;
+ int idx, ret = 0;
+
+ mt792x_mutex_acquire(dev);
+
+ mvif->mt76.idx = __ffs64(~dev->mt76.vif_mask);
+ if (mvif->mt76.idx >= MT792x_MAX_INTERFACES) {
+ ret = -ENOSPC;
+ goto out;
+ }
+
+ mvif->mt76.omac_idx = mvif->mt76.idx;
+ mvif->phy = phy;
+ mvif->mt76.band_idx = 0;
+ mvif->mt76.wmm_idx = mvif->mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
+
+ if (phy->mt76->chandef.chan->band != NL80211_BAND_2GHZ)
+ mvif->mt76.basic_rates_idx = MT792x_BASIC_RATES_TBL + 4;
+ else
+ mvif->mt76.basic_rates_idx = MT792x_BASIC_RATES_TBL;
+
+ ret = mt76_connac_mcu_uni_add_dev(&dev->mphy, vif, &mvif->sta.wcid,
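+ /* rebuild the frame control field from the txwi to pick out QoS data frames */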
+ true);
+ if (ret)
+ goto out;
+
+ dev->mt76.vif_mask |= BIT_ULL(mvif->mt76.idx);
+ phy->omac_mask |= BIT_ULL(mvif->mt76.omac_idx);
+
+ idx = MT792x_WTBL_RESERVED - mvif->mt76.idx;
+
+ INIT_LIST_HEAD(&mvif->sta.wcid.poll_list);
+ mvif->sta.wcid.idx = idx;
+ mvif->sta.wcid.phy_idx = mvif->mt76.band_idx;
+ mvif->sta.wcid.hw_key_idx = -1;
+ mvif->sta.wcid.tx_info |= MT_WCID_TX_INFO_SET;
+ mt76_wcid_init(&mvif->sta.wcid);
+
+ mt7925_mac_wtbl_update(dev, idx,
+ MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
+
+ ewma_rssi_init(&mvif->rssi);
+
+ rcu_assign_pointer(dev->mt76.wcid[idx], &mvif->sta.wcid);
+ if (vif->txq) {
+ mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+ mtxq->wcid = idx;
+ }
+
+ vif->driver_flags |= IEEE80211_VIF_BEACON_FILTER;
+out:
+ mt792x_mutex_release(dev);
+
+ return ret;
+}
+
+static void mt7925_roc_iter(void *priv, u8 *mac,
+ struct ieee80211_vif *vif)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_phy *phy = priv;
+
+ mt7925_mcu_abort_roc(phy, mvif, phy->roc_token_id);
+}
+
+void mt7925_roc_work(struct work_struct *work)
+{
+ struct mt792x_phy *phy;
+
+ phy = (struct mt792x_phy *)container_of(work, struct mt792x_phy,
+ roc_work);
+
+ if (!test_and_clear_bit(MT76_STATE_ROC, &phy->mt76->state))
+ return;
+
+ mt792x_mutex_acquire(phy->dev);
+ ieee80211_iterate_active_interfaces(phy->mt76->hw,
+ IEEE80211_IFACE_ITER_RESUME_ALL,
+ mt7925_roc_iter, phy);
+ mt792x_mutex_release(phy->dev);
+ ieee80211_remain_on_channel_expired(phy->mt76->hw);
+}
+
+static int mt7925_abort_roc(struct mt792x_phy *phy, struct mt792x_vif *vif)
+{
+ int err = 0;
+
+ del_timer_sync(&phy->roc_timer);
+ cancel_work_sync(&phy->roc_work);
+
+ mt792x_mutex_acquire(phy->dev);
+ if (test_and_clear_bit(MT76_STATE_ROC, &phy->mt76->state))
+ err = mt7925_mcu_abort_roc(phy, vif, phy->roc_token_id);
+ mt792x_mutex_release(phy->dev);
+
+ return err;
+}
+
+static int mt7925_set_roc(struct mt792x_phy *phy,
+ struct mt792x_vif *vif,
+ struct ieee80211_channel *chan,
+ int duration,
+ enum mt7925_roc_req type)
+{
+ int err;
+
+ if (test_and_set_bit(MT76_STATE_ROC, &phy->mt76->state))
+ return -EBUSY;
+
+ phy->roc_grant = false;
+
+ err = mt7925_mcu_set_roc(phy, vif, chan, duration, type,
+ ++phy->roc_token_id);
+ if (err < 0) {
+ clear_bit(MT76_STATE_ROC, &phy->mt76->state);
+ goto out;
+ }
+
+ if (!wait_event_timeout(phy->roc_wait, phy->roc_grant, 4 * HZ)) {
+ mt7925_mcu_abort_roc(phy, vif, phy->roc_token_id);
+ clear_bit(MT76_STATE_ROC, &phy->mt76->state);
+ err = -ETIMEDOUT;
+ }
+
+out:
+ return err;
+}
+
+static int mt7925_remain_on_channel(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_channel *chan,
+ int duration,
+ enum ieee80211_roc_type type)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_phy *phy = mt792x_hw_phy(hw);
+ int err;
+
+ mt792x_mutex_acquire(phy->dev);
+ err = mt7925_set_roc(phy, mvif, chan, duration, MT7925_ROC_REQ_ROC);
+ mt792x_mutex_release(phy->dev);
+
+ return err;
+}
+
+static int mt7925_cancel_remain_on_channel(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_phy *phy = mt792x_hw_phy(hw);
+
+ return mt7925_abort_roc(phy, mvif);
+}
+
+static int mt7925_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ struct ieee80211_vif *vif, struct ieee80211_sta *sta,
+ struct ieee80211_key_conf *key)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_sta *msta = sta ? (struct mt792x_sta *)sta->drv_priv :
+ &mvif->sta;
+ struct mt76_wcid *wcid = &msta->wcid;
+ u8 *wcid_keyidx = &wcid->hw_key_idx;
+ int idx = key->keyidx, err = 0;
+
+ /* The hardware does not support per-STA RX GTK; fall back
+ * to software mode for these.
+ */
+ if ((vif->type == NL80211_IFTYPE_ADHOC ||
+ vif->type == NL80211_IFTYPE_MESH_POINT) &&
+ (key->cipher == WLAN_CIPHER_SUITE_TKIP ||
+ key->cipher == WLAN_CIPHER_SUITE_CCMP) &&
+ !(key->flags & IEEE80211_KEY_FLAG_PAIRWISE))
+ return -EOPNOTSUPP;
+
+ /* fall back to sw encryption for unsupported ciphers */
+ switch (key->cipher) {
+ case WLAN_CIPHER_SUITE_AES_CMAC:
+ key->flags |= IEEE80211_KEY_FLAG_GENERATE_MMIE;
+ wcid_keyidx = &wcid->hw_key_idx2;
+ break;
+ case WLAN_CIPHER_SUITE_WEP40:
+ case WLAN_CIPHER_SUITE_WEP104:
+ if (!mvif->wep_sta)
+ return -EOPNOTSUPP;
+ break;
+ case WLAN_CIPHER_SUITE_TKIP:
+ case WLAN_CIPHER_SUITE_CCMP:
+ case WLAN_CIPHER_SUITE_CCMP_256:
+ case WLAN_CIPHER_SUITE_GCMP:
+ case WLAN_CIPHER_SUITE_GCMP_256:
+ case WLAN_CIPHER_SUITE_SMS4:
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ mt792x_mutex_acquire(dev);
+
+ if (cmd == SET_KEY && !mvif->mt76.cipher) {
+ struct mt792x_phy *phy = mt792x_hw_phy(hw);
+
+ mvif->mt76.cipher = mt76_connac_mcu_get_cipher(key->cipher);
+ mt7925_mcu_add_bss_info(phy, mvif->mt76.ctx, vif, sta, true);
+ }
+
+ if (cmd == SET_KEY)
+ *wcid_keyidx = idx;
+ else if (idx == *wcid_keyidx)
+ *wcid_keyidx = -1;
+ else
+ goto out;
+
+ mt76_wcid_key_setup(&dev->mt76, wcid,
+ cmd == SET_KEY ? key : NULL);
+
+ err = mt7925_mcu_add_key(&dev->mt76, vif, &msta->bip,
+ key, MCU_UNI_CMD(STA_REC_UPDATE),
+ &msta->wcid, cmd);
+
+ if (err)
+ goto out;
+
+ if (key->cipher == WLAN_CIPHER_SUITE_WEP104 ||
+ key->cipher == WLAN_CIPHER_SUITE_WEP40)
+ err = mt7925_mcu_add_key(&dev->mt76, vif, &mvif->wep_sta->bip,
+ key, MCU_WMWA_UNI_CMD(STA_REC_UPDATE),
+ &mvif->wep_sta->wcid, cmd);
+
+out:
+ mt792x_mutex_release(dev);
+
+ return err;
+}
+
+static void
+mt7925_pm_interface_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
+{
+ struct mt792x_dev *dev = priv;
+ struct ieee80211_hw *hw = mt76_hw(dev);
+ bool pm_enable = dev->pm.enable;
+ int err;
+
+ err = mt7925_mcu_set_beacon_filter(dev, vif, pm_enable);
+ if (err < 0)
+ return;
+
+ if (pm_enable) {
+ vif->driver_flags |= IEEE80211_VIF_BEACON_FILTER;
+ ieee80211_hw_set(hw, CONNECTION_MONITOR);
+ } else {
+ vif->driver_flags &= ~IEEE80211_VIF_BEACON_FILTER;
+ __clear_bit(IEEE80211_HW_CONNECTION_MONITOR, hw->flags);
+ }
+}
+
+static void
+mt7925_sniffer_interface_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
+{
+ struct mt792x_dev *dev = priv;
+ struct ieee80211_hw *hw = mt76_hw(dev);
+ struct mt76_connac_pm *pm = &dev->pm;
+ bool monitor = !!(hw->conf.flags & IEEE80211_CONF_MONITOR);
+
+ mt7925_mcu_set_sniffer(dev, vif, monitor);
+ pm->enable = pm->enable_user && !monitor;
+ pm->ds_enable = pm->ds_enable_user && !monitor;
+
+ mt7925_mcu_set_deep_sleep(dev, pm->ds_enable);
+
+ if (monitor)
+ mt7925_mcu_set_beacon_filter(dev, vif, false);
+}
+
+void mt7925_set_runtime_pm(struct mt792x_dev *dev)
+{
+ struct ieee80211_hw *hw = mt76_hw(dev);
+ struct mt76_connac_pm *pm = &dev->pm;
+ bool monitor = !!(hw->conf.flags & IEEE80211_CONF_MONITOR);
+
+ pm->enable = pm->enable_user && !monitor;
+ ieee80211_iterate_active_interfaces(hw,
+ IEEE80211_IFACE_ITER_RESUME_ALL,
+ mt7925_pm_interface_iter, dev);
+ pm->ds_enable = pm->ds_enable_user && !monitor;
+ mt7925_mcu_set_deep_sleep(dev, pm->ds_enable);
+}
+
+static int mt7925_config(struct ieee80211_hw *hw, u32 changed)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ int ret = 0;
+
+ mt792x_mutex_acquire(dev);
+
+ if (changed & IEEE80211_CONF_CHANGE_POWER) {
+ ret = mt7925_set_tx_sar_pwr(hw, NULL);
+ if (ret)
+ goto out;
+ }
+
+ if (changed & IEEE80211_CONF_CHANGE_MONITOR) {
+ ieee80211_iterate_active_interfaces(hw,
+ IEEE80211_IFACE_ITER_RESUME_ALL,
+ mt7925_sniffer_interface_iter, dev);
+ }
+
+out:
+ mt792x_mutex_release(dev);
+
+ return ret;
+}
+
+static void mt7925_configure_filter(struct ieee80211_hw *hw,
+ unsigned int changed_flags,
+ unsigned int *total_flags,
+ u64 multicast)
+{
+#define MT7925_FILTER_FCSFAIL BIT(2)
+#define MT7925_FILTER_CONTROL BIT(5)
+#define MT7925_FILTER_OTHER_BSS BIT(6)
+#define MT7925_FILTER_ENABLE BIT(31)
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ u32 flags = MT7925_FILTER_ENABLE;
+
+#define MT7925_FILTER(_fif, _type) do { \
+ if (*total_flags & (_fif)) \
+ flags |= MT7925_FILTER_##_type; \
+ } while (0)
+
+ MT7925_FILTER(FIF_FCSFAIL, FCSFAIL);
+ MT7925_FILTER(FIF_CONTROL, CONTROL);
+ MT7925_FILTER(FIF_OTHER_BSS, OTHER_BSS);
+
+ mt792x_mutex_acquire(dev);
+ mt7925_mcu_set_rxfilter(dev, flags, 0, 0);
+ mt792x_mutex_release(dev);
+
+ *total_flags &= (FIF_OTHER_BSS | FIF_FCSFAIL | FIF_CONTROL);
+}
+
+static u8
+mt7925_get_rates_table(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ bool beacon, bool mcast)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct mt76_phy *mphy = hw->priv;
+ u16 rate;
+ u8 i, idx, ht;
+
+ rate = mt76_connac2_mac_tx_rate_val(mphy, vif, beacon, mcast);
+ ht = FIELD_GET(MT_TX_RATE_MODE, rate) > MT_PHY_TYPE_OFDM;
+
+ if (beacon && ht) {
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+
+ /* the beacon fixed-rate entry must use an odd table index */
+ idx = MT7925_BEACON_RATES_TBL + 2 * (mvif->idx % 20);
+ mt7925_mac_set_fixed_rate_table(dev, idx, rate);
+ return idx;
+ }
+
+ idx = FIELD_GET(MT_TX_RATE_IDX, rate);
+ for (i = 0; i < ARRAY_SIZE(mt76_rates); i++)
+ if ((mt76_rates[i].hw_value & GENMASK(7, 0)) == idx)
+ return MT792x_BASIC_RATES_TBL + i;
+
+ return mvif->basic_rates_idx;
+}
+
+static void mt7925_bss_info_changed(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *info,
+ u64 changed)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct mt792x_phy *phy = mt792x_hw_phy(hw);
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+
+ mt792x_mutex_acquire(dev);
+
+ if (changed & BSS_CHANGED_ERP_SLOT) {
+ int slottime = info->use_short_slot ? 9 : 20;
+
+ if (slottime != phy->slottime) {
+ phy->slottime = slottime;
+ mt792x_mac_set_timeing(phy);
+ }
+ }
+
+ if (changed & BSS_CHANGED_MCAST_RATE)
+ mvif->mcast_rates_idx =
+ mt7925_get_rates_table(hw, vif, false, true);
+
+ if (changed & BSS_CHANGED_BASIC_RATES)
+ mvif->basic_rates_idx =
+ mt7925_get_rates_table(hw, vif, false, false);
+
+ if (changed & (BSS_CHANGED_BEACON |
+ BSS_CHANGED_BEACON_ENABLED)) {
+ mvif->beacon_rates_idx =
+ mt7925_get_rates_table(hw, vif, true, false);
+
+ mt7925_mcu_uni_add_beacon_offload(dev, hw, vif,
+ info->enable_beacon);
+ }
+
+ /* ensure txcmd_mode is enabled after the bss_info update */
+ if (changed & (BSS_CHANGED_QOS | BSS_CHANGED_BEACON_ENABLED))
+ mt7925_mcu_set_tx(dev, vif);
+
+ if (changed & BSS_CHANGED_PS)
+ mt7925_mcu_uni_bss_ps(dev, vif);
+
+ if (changed & BSS_CHANGED_ASSOC) {
+ mt7925_mcu_sta_update(dev, NULL, vif, true,
+ MT76_STA_INFO_STATE_ASSOC);
+ mt7925_mcu_set_beacon_filter(dev, vif, vif->cfg.assoc);
+ }
+
+ if (changed & BSS_CHANGED_ARP_FILTER) {
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+
+ mt7925_mcu_update_arp_filter(&dev->mt76, &mvif->mt76, info);
+ }
+
+ mt792x_mutex_release(dev);
+}
+
+int mt7925_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ int ret, idx;
+
+ idx = mt76_wcid_alloc(dev->mt76.wcid_mask, MT792x_WTBL_STA - 1);
+ if (idx < 0)
+ return -ENOSPC;
+
+ INIT_LIST_HEAD(&msta->wcid.poll_list);
+ msta->vif = mvif;
+ msta->wcid.sta = 1;
+ msta->wcid.idx = idx;
+ msta->wcid.phy_idx = mvif->mt76.band_idx;
+ msta->wcid.tx_info |= MT_WCID_TX_INFO_SET;
+ msta->last_txs = jiffies;
+
+ ret = mt76_connac_pm_wake(&dev->mphy, &dev->pm);
+ if (ret)
+ return ret;
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+ mvif->wep_sta = msta;
+
+ mt7925_mac_wtbl_update(dev, idx,
+ MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
+
+ /* should update bss info before STA add */
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
+ mt7925_mcu_add_bss_info(&dev->phy, mvif->mt76.ctx, vif, sta,
+ false);
+
+ ret = mt7925_mcu_sta_update(dev, sta, vif, true,
+ MT76_STA_INFO_STATE_NONE);
+ if (ret)
+ return ret;
+
+ mt76_connac_power_save_sched(&dev->mphy, &dev->pm);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(mt7925_mac_sta_add);
+
+void mt7925_mac_sta_assoc(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+
+ mt792x_mutex_acquire(dev);
+
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
+ mt7925_mcu_add_bss_info(&dev->phy, mvif->mt76.ctx, vif, sta,
+ true);
+
+ ewma_avg_signal_init(&msta->avg_ack_signal);
+
+ mt7925_mac_wtbl_update(dev, msta->wcid.idx,
+ MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
+ memset(msta->airtime_ac, 0, sizeof(msta->airtime_ac));
+
+ mt7925_mcu_sta_update(dev, sta, vif, true, MT76_STA_INFO_STATE_ASSOC);
+
+ mt792x_mutex_release(dev);
+}
+EXPORT_SYMBOL_GPL(mt7925_mac_sta_assoc);
+
+void mt7925_mac_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+
+ mt76_connac_free_pending_tx_skbs(&dev->pm, &msta->wcid);
+ mt76_connac_pm_wake(&dev->mphy, &dev->pm);
+
+ mt7925_mcu_sta_update(dev, sta, vif, false, MT76_STA_INFO_STATE_NONE);
+ mt7925_mac_wtbl_update(dev, msta->wcid.idx,
+ MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
+
+ if (vif->type == NL80211_IFTYPE_STATION) {
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+
+ mvif->wep_sta = NULL;
+ ewma_rssi_init(&mvif->rssi);
+ if (!sta->tdls)
+ mt7925_mcu_add_bss_info(&dev->phy, mvif->mt76.ctx, vif, sta,
+ false);
+ }
+
+ spin_lock_bh(&mdev->sta_poll_lock);
+ if (!list_empty(&msta->wcid.poll_list))
+ list_del_init(&msta->wcid.poll_list);
+ spin_unlock_bh(&mdev->sta_poll_lock);
+
+ mt76_connac_power_save_sched(&dev->mphy, &dev->pm);
+}
+EXPORT_SYMBOL_GPL(mt7925_mac_sta_remove);
+
+static int mt7925_set_rts_threshold(struct ieee80211_hw *hw, u32 val)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+
+ mt792x_mutex_acquire(dev);
+ mt7925_mcu_set_rts_thresh(&dev->phy, val);
+ mt792x_mutex_release(dev);
+
+ return 0;
+}
+
+static int
+mt7925_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params)
+{
+ enum ieee80211_ampdu_mlme_action action = params->action;
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct ieee80211_sta *sta = params->sta;
+ struct ieee80211_txq *txq = sta->txq[params->tid];
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+ u16 tid = params->tid;
+ u16 ssn = params->ssn;
+ struct mt76_txq *mtxq;
+ int ret = 0;
+
+ if (!txq)
+ return -EINVAL;
+
+ mtxq = (struct mt76_txq *)txq->drv_priv;
+
+ mt792x_mutex_acquire(dev);
+ switch (action) {
+ case IEEE80211_AMPDU_RX_START:
+ mt76_rx_aggr_start(&dev->mt76, &msta->wcid, tid, ssn,
+ params->buf_size);
+ mt7925_mcu_uni_rx_ba(dev, params, true);
+ break;
+ case IEEE80211_AMPDU_RX_STOP:
+ mt76_rx_aggr_stop(&dev->mt76, &msta->wcid, tid);
+ mt7925_mcu_uni_rx_ba(dev, params, false);
+ break;
+ case IEEE80211_AMPDU_TX_OPERATIONAL:
+ mtxq->aggr = true;
+ mtxq->send_bar = false;
+ mt7925_mcu_uni_tx_ba(dev, params, true);
+ break;
+ case IEEE80211_AMPDU_TX_STOP_FLUSH:
+ case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
+ mtxq->aggr = false;
+ clear_bit(tid, &msta->wcid.ampdu_state);
+ mt7925_mcu_uni_tx_ba(dev, params, false);
+ break;
+ case IEEE80211_AMPDU_TX_START:
+ set_bit(tid, &msta->wcid.ampdu_state);
+ ret = IEEE80211_AMPDU_TX_START_IMMEDIATE;
+ break;
+ case IEEE80211_AMPDU_TX_STOP_CONT:
+ mtxq->aggr = false;
+ clear_bit(tid, &msta->wcid.ampdu_state);
+ mt7925_mcu_uni_tx_ba(dev, params, false);
+ ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
+ break;
+ }
+ mt792x_mutex_release(dev);
+
+ return ret;
+}
+
+static bool is_valid_alpha2(const char *alpha2)
+{
+ if (!alpha2)
+ return false;
+
+ if (alpha2[0] == '0' && alpha2[1] == '0')
+ return true;
+
+ if (isalpha(alpha2[0]) && isalpha(alpha2[1]))
+ return true;
+
+ return false;
+}
+
+void mt7925_scan_work(struct work_struct *work)
+{
+ struct mt792x_phy *phy;
+
+ phy = (struct mt792x_phy *)container_of(work, struct mt792x_phy,
+ scan_work.work);
+
+ while (true) {
+ struct mt76_dev *mdev = &phy->dev->mt76;
+ struct sk_buff *skb;
+ struct tlv *tlv;
+ int tlv_len;
+
+ spin_lock_bh(&phy->dev->mt76.lock);
+ skb = __skb_dequeue(&phy->scan_event_list);
+ spin_unlock_bh(&phy->dev->mt76.lock);
+
+ if (!skb)
+ break;
+
+ skb_pull(skb, sizeof(struct mt7925_mcu_rxd) + 4);
+ tlv = (struct tlv *)skb->data;
+ tlv_len = skb->len;
+
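+ /* walk the TLVs carried in the scan done event */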
+ while (tlv_len > 0 && le16_to_cpu(tlv->len) <= tlv_len) {
+ struct mt7925_mcu_scan_chinfo_event *evt;
+
+ switch (le16_to_cpu(tlv->tag)) {
+ case UNI_EVENT_SCAN_DONE_BASIC:
+ if (test_and_clear_bit(MT76_HW_SCANNING, &phy->mt76->state)) {
+ struct cfg80211_scan_info info = {
+ .aborted = false,
+ };
+ ieee80211_scan_completed(phy->mt76->hw, &info);
+ }
+ break;
+ case UNI_EVENT_SCAN_DONE_CHNLINFO:
+ evt = (struct mt7925_mcu_scan_chinfo_event *)tlv->data;
+
+ if (!is_valid_alpha2(evt->alpha2))
+ break;
+
+ if (mdev->alpha2[0] != '0' && mdev->alpha2[1] != '0')
+ break;
+
+ mt7925_mcu_set_clc(phy->dev, evt->alpha2, ENVIRON_INDOOR);
+
+ break;
+ case UNI_EVENT_SCAN_DONE_NLO:
+ ieee80211_sched_scan_results(phy->mt76->hw);
+ break;
+ default:
+ break;
+ }
+
+ tlv_len -= le16_to_cpu(tlv->len);
+ tlv = (struct tlv *)((char *)(tlv) + le16_to_cpu(tlv->len));
+ }
+
+ dev_kfree_skb(skb);
+ }
+}
+
+static int
+mt7925_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *req)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct mt76_phy *mphy = hw->priv;
+ int err;
+
+ mt792x_mutex_acquire(dev);
+ err = mt7925_mcu_hw_scan(mphy, vif, req);
+ mt792x_mutex_release(dev);
+
+ return err;
+}
+
+static void
+mt7925_cancel_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct mt76_phy *mphy = hw->priv;
+
+ mt792x_mutex_acquire(dev);
+ mt7925_mcu_cancel_hw_scan(mphy, vif);
+ mt792x_mutex_release(dev);
+}
+
+static int
+mt7925_start_sched_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct cfg80211_sched_scan_request *req,
+ struct ieee80211_scan_ies *ies)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct mt76_phy *mphy = hw->priv;
+ int err;
+
+ mt792x_mutex_acquire(dev);
+
+ err = mt7925_mcu_sched_scan_req(mphy, vif, req);
+ if (err < 0)
+ goto out;
+
+ err = mt7925_mcu_sched_scan_enable(mphy, vif, true);
+out:
+ mt792x_mutex_release(dev);
+
+ return err;
+}
+
+static int
+mt7925_stop_sched_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct mt76_phy *mphy = hw->priv;
+ int err;
+
+ mt792x_mutex_acquire(dev);
+ err = mt7925_mcu_sched_scan_enable(mphy, vif, false);
+ mt792x_mutex_release(dev);
+
+ return err;
+}
+
+static int
+mt7925_set_antenna(struct ieee80211_hw *hw, u32 tx_ant, u32 rx_ant)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct mt792x_phy *phy = mt792x_hw_phy(hw);
+ int max_nss = hweight8(hw->wiphy->available_antennas_tx);
+
+ if (!tx_ant || tx_ant != rx_ant || ffs(tx_ant) > max_nss)
+ return -EINVAL;
+
+ if ((BIT(hweight8(tx_ant)) - 1) != tx_ant)
+ tx_ant = BIT(ffs(tx_ant) - 1) - 1;
+
+ mt792x_mutex_acquire(dev);
+
+ phy->mt76->antenna_mask = tx_ant;
+ phy->mt76->chainmask = tx_ant;
+
+ mt76_set_stream_caps(phy->mt76, true);
+ mt7925_set_stream_he_eht_caps(phy);
+
+ /* TODO: update bmc_wtbl spe_idx when antenna changes */
+ mt792x_mutex_release(dev);
+
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int mt7925_suspend(struct ieee80211_hw *hw,
+ struct cfg80211_wowlan *wowlan)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct mt792x_phy *phy = mt792x_hw_phy(hw);
+
+ cancel_delayed_work_sync(&phy->scan_work);
+ cancel_delayed_work_sync(&phy->mt76->mac_work);
+
+ cancel_delayed_work_sync(&dev->pm.ps_work);
+ mt76_connac_free_pending_tx_skbs(&dev->pm, NULL);
+
+ mt792x_mutex_acquire(dev);
+
+ clear_bit(MT76_STATE_RUNNING, &phy->mt76->state);
+ ieee80211_iterate_active_interfaces(hw,
+ IEEE80211_IFACE_ITER_RESUME_ALL,
+ mt7925_mcu_set_suspend_iter,
+ &dev->mphy);
+
+ mt792x_mutex_release(dev);
+
+ return 0;
+}
+
+static int mt7925_resume(struct ieee80211_hw *hw)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct mt792x_phy *phy = mt792x_hw_phy(hw);
+
+ mt792x_mutex_acquire(dev);
+
+ set_bit(MT76_STATE_RUNNING, &phy->mt76->state);
+ ieee80211_iterate_active_interfaces(hw,
+ IEEE80211_IFACE_ITER_RESUME_ALL,
+ mt7925_mcu_set_suspend_iter,
+ &dev->mphy);
+
+ ieee80211_queue_delayed_work(hw, &phy->mt76->mac_work,
+ MT792x_WATCHDOG_TIME);
+
+ mt792x_mutex_release(dev);
+
+ return 0;
+}
+
+static void mt7925_set_rekey_data(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct cfg80211_gtk_rekey_data *data)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+
+ mt792x_mutex_acquire(dev);
+ mt76_connac_mcu_update_gtk_rekey(hw, vif, data);
+ mt792x_mutex_release(dev);
+}
+#endif /* CONFIG_PM */
+
+static void mt7925_sta_set_decap_offload(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ bool enabled)
+{
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+
+ mt792x_mutex_acquire(dev);
+
+ if (enabled)
+ set_bit(MT_WCID_FLAG_HDR_TRANS, &msta->wcid.flags);
+ else
+ clear_bit(MT_WCID_FLAG_HDR_TRANS, &msta->wcid.flags);
+
+ mt7925_mcu_wtbl_update_hdr_trans(dev, vif, sta);
+
+ mt792x_mutex_release(dev);
+}
+
+#if IS_ENABLED(CONFIG_IPV6)
+static void mt7925_ipv6_addr_change(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct inet6_dev *idev)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_dev *dev = mvif->phy->dev;
+ struct inet6_ifaddr *ifa;
+ struct sk_buff *skb;
+ u8 idx = 0;
+
+ struct {
+ struct {
+ u8 bss_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct mt7925_arpns_tlv arpns;
+ struct in6_addr ns_addrs[IEEE80211_BSS_ARP_ADDR_LIST_LEN];
+ } req_hdr = {
+ .hdr = {
+ .bss_idx = mvif->mt76.idx,
+ },
+ .arpns = {
+ .tag = cpu_to_le16(UNI_OFFLOAD_OFFLOAD_ND),
+ .len = cpu_to_le16(sizeof(req_hdr) - 4),
+ .enable = true,
+ },
+ };
+
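+ /* collect up to IEEE80211_BSS_ARP_ADDR_LIST_LEN non-tentative addresses
+ * for the NS offload
+ */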
+ read_lock_bh(&idev->lock);
+ list_for_each_entry(ifa, &idev->addr_list, if_list) {
+ if (ifa->flags & IFA_F_TENTATIVE)
+ continue;
+ req_hdr.ns_addrs[idx] = ifa->addr;
+ if (++idx >= IEEE80211_BSS_ARP_ADDR_LIST_LEN)
+ break;
+ }
+ read_unlock_bh(&idev->lock);
+
+ if (!idx)
+ return;
+
+ req_hdr.arpns.ips_num = idx;
+
+ skb = __mt76_mcu_msg_alloc(&dev->mt76, NULL, sizeof(req_hdr),
+ 0, GFP_ATOMIC);
+ if (!skb)
+ return;
+
+ skb_put_data(skb, &req_hdr, sizeof(req_hdr));
+
+ skb_queue_tail(&dev->ipv6_ns_list, skb);
+
+ ieee80211_queue_work(dev->mt76.hw, &dev->ipv6_ns_work);
+}
+#endif
+
+int mt7925_set_tx_sar_pwr(struct ieee80211_hw *hw,
+ const struct cfg80211_sar_specs *sar)
+{
+ struct mt76_phy *mphy = hw->priv;
+
+ if (sar) {
+ int err = mt76_init_sar_power(hw, sar);
+
+ if (err)
+ return err;
+ }
+ mt792x_init_acpi_sar_power(mt792x_hw_phy(hw), !sar);
+
+ return mt7925_mcu_set_rate_txpower(mphy);
+}
+
+static int mt7925_set_sar_specs(struct ieee80211_hw *hw,
+ const struct cfg80211_sar_specs *sar)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ int err;
+
+ mt792x_mutex_acquire(dev);
+ err = mt7925_mcu_set_clc(dev, dev->mt76.alpha2,
+ dev->country_ie_env);
+ if (err < 0)
+ goto out;
+
+ err = mt7925_set_tx_sar_pwr(hw, sar);
+out:
+ mt792x_mutex_release(dev);
+
+ return err;
+}
+
+static void
+mt7925_channel_switch_beacon(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct cfg80211_chan_def *chandef)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+
+ mt792x_mutex_acquire(dev);
+ mt7925_mcu_uni_add_beacon_offload(dev, hw, vif, true);
+ mt792x_mutex_release(dev);
+}
+
+static int
+mt7925_start_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ int err;
+
+ mt792x_mutex_acquire(dev);
+
+ err = mt7925_mcu_add_bss_info(&dev->phy, mvif->mt76.ctx, vif, NULL,
+ true);
+ if (err)
+ goto out;
+
+ err = mt7925_mcu_set_bss_pm(dev, vif, true);
+ if (err)
+ goto out;
+
+ err = mt7925_mcu_sta_update(dev, NULL, vif, true,
+ MT76_STA_INFO_STATE_NONE);
+out:
+ mt792x_mutex_release(dev);
+
+ return err;
+}
+
+static void
+mt7925_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ int err;
+
+ mt792x_mutex_acquire(dev);
+
+ err = mt7925_mcu_set_bss_pm(dev, vif, false);
+ if (err)
+ goto out;
+
+ mt7925_mcu_add_bss_info(&dev->phy, mvif->mt76.ctx, vif, NULL,
+ false);
+
+out:
+ mt792x_mutex_release(dev);
+}
+
+static int
+mt7925_add_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ return 0;
+}
+
+static void
+mt7925_remove_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx)
+{
+}
+
+static void mt7925_ctx_iter(void *priv, u8 *mac,
+ struct ieee80211_vif *vif)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct ieee80211_chanctx_conf *ctx = priv;
+
+ if (ctx != mvif->mt76.ctx)
+ return;
+
+ if (vif->type == NL80211_IFTYPE_MONITOR) {
+ mt7925_mcu_set_sniffer(mvif->phy->dev, vif, true);
+ mt7925_mcu_config_sniffer(mvif, ctx);
+ } else {
+ mt7925_mcu_set_chctx(mvif->phy->mt76, &mvif->mt76, ctx);
+ }
+}
+
+static void
+mt7925_change_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx,
+ u32 changed)
+{
+ struct mt792x_phy *phy = mt792x_hw_phy(hw);
+
+ mt792x_mutex_acquire(phy->dev);
+ ieee80211_iterate_active_interfaces(phy->mt76->hw,
+ IEEE80211_IFACE_ITER_ACTIVE,
+ mt7925_ctx_iter, ctx);
+ mt792x_mutex_release(phy->dev);
+}
+
+static void mt7925_mgd_prepare_tx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_prep_tx_info *info)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ u16 duration = info->duration ? info->duration :
+ jiffies_to_msecs(HZ);
+
+ mt792x_mutex_acquire(dev);
+ mt7925_set_roc(mvif->phy, mvif, mvif->mt76.ctx->def.chan, duration,
+ MT7925_ROC_REQ_JOIN);
+ mt792x_mutex_release(dev);
+}
+
+static void mt7925_mgd_complete_tx(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ struct ieee80211_prep_tx_info *info)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+
+ mt7925_abort_roc(mvif->phy, mvif);
+}
+
+const struct ieee80211_ops mt7925_ops = {
+ .tx = mt792x_tx,
+ .start = mt7925_start,
+ .stop = mt792x_stop,
+ .add_interface = mt7925_add_interface,
+ .remove_interface = mt792x_remove_interface,
+ .config = mt7925_config,
+ .conf_tx = mt792x_conf_tx,
+ .configure_filter = mt7925_configure_filter,
+ .bss_info_changed = mt7925_bss_info_changed,
+ .start_ap = mt7925_start_ap,
+ .stop_ap = mt7925_stop_ap,
+ .sta_state = mt76_sta_state,
+ .sta_pre_rcu_remove = mt76_sta_pre_rcu_remove,
+ .set_key = mt7925_set_key,
+ .sta_set_decap_offload = mt7925_sta_set_decap_offload,
+#if IS_ENABLED(CONFIG_IPV6)
+ .ipv6_addr_change = mt7925_ipv6_addr_change,
+#endif /* CONFIG_IPV6 */
+ .ampdu_action = mt7925_ampdu_action,
+ .set_rts_threshold = mt7925_set_rts_threshold,
+ .wake_tx_queue = mt76_wake_tx_queue,
+ .release_buffered_frames = mt76_release_buffered_frames,
+ .channel_switch_beacon = mt7925_channel_switch_beacon,
+ .get_txpower = mt76_get_txpower,
+ .get_stats = mt792x_get_stats,
+ .get_et_sset_count = mt792x_get_et_sset_count,
+ .get_et_strings = mt792x_get_et_strings,
+ .get_et_stats = mt792x_get_et_stats,
+ .get_tsf = mt792x_get_tsf,
+ .set_tsf = mt792x_set_tsf,
+ .get_survey = mt76_get_survey,
+ .get_antenna = mt76_get_antenna,
+ .set_antenna = mt7925_set_antenna,
+ .set_coverage_class = mt792x_set_coverage_class,
+ .hw_scan = mt7925_hw_scan,
+ .cancel_hw_scan = mt7925_cancel_hw_scan,
+ .sta_statistics = mt792x_sta_statistics,
+ .sched_scan_start = mt7925_start_sched_scan,
+ .sched_scan_stop = mt7925_stop_sched_scan,
+#ifdef CONFIG_PM
+ .suspend = mt7925_suspend,
+ .resume = mt7925_resume,
+ .set_wakeup = mt792x_set_wakeup,
+ .set_rekey_data = mt7925_set_rekey_data,
+#endif /* CONFIG_PM */
+ .flush = mt792x_flush,
+ .set_sar_specs = mt7925_set_sar_specs,
+ .remain_on_channel = mt7925_remain_on_channel,
+ .cancel_remain_on_channel = mt7925_cancel_remain_on_channel,
+ .add_chanctx = mt7925_add_chanctx,
+ .remove_chanctx = mt7925_remove_chanctx,
+ .change_chanctx = mt7925_change_chanctx,
+ .assign_vif_chanctx = mt792x_assign_vif_chanctx,
+ .unassign_vif_chanctx = mt792x_unassign_vif_chanctx,
+ .mgd_prepare_tx = mt7925_mgd_prepare_tx,
+ .mgd_complete_tx = mt7925_mgd_complete_tx,
+};
+EXPORT_SYMBOL_GPL(mt7925_ops);
+
+MODULE_AUTHOR("Deren Wu <deren.wu@mediatek.com>");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
new file mode 100644
index 000000000000..9c0e397537ac
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
@@ -0,0 +1,3174 @@
+// SPDX-License-Identifier: ISC
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#include <linux/fs.h>
+#include <linux/firmware.h>
+#include "mt7925.h"
+#include "mcu.h"
+#include "mac.h"
+
+#define MT_STA_BFER BIT(0)
+#define MT_STA_BFEE BIT(1)
+
+static bool mt7925_disable_clc;
+module_param_named(disable_clc, mt7925_disable_clc, bool, 0644);
+MODULE_PARM_DESC(disable_clc, "disable CLC support");
+
+int mt7925_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ struct sk_buff *skb, int seq)
+{
+ int mcu_cmd = FIELD_GET(__MCU_CMD_FIELD_ID, cmd);
+ struct mt7925_mcu_rxd *rxd;
+ int ret = 0;
+
+ if (!skb) {
+ dev_err(mdev->dev, "Message %08x (seq %d) timeout\n", cmd, seq);
+ mt792x_reset(mdev);
+
+ return -ETIMEDOUT;
+ }
+
+ rxd = (struct mt7925_mcu_rxd *)skb->data;
+ if (seq != rxd->seq)
+ return -EAGAIN;
+
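+ /* PATCH_SEM_CONTROL / PATCH_FINISH_REQ replies carry a single status
+ * byte right after a shortened rxd header
+ */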
+ if (cmd == MCU_CMD(PATCH_SEM_CONTROL) ||
+ cmd == MCU_CMD(PATCH_FINISH_REQ)) {
+ skb_pull(skb, sizeof(*rxd) - 4);
+ ret = *skb->data;
+ } else if (cmd == MCU_UNI_CMD(DEV_INFO_UPDATE) ||
+ cmd == MCU_UNI_CMD(BSS_INFO_UPDATE) ||
+ cmd == MCU_UNI_CMD(STA_REC_UPDATE) ||
+ cmd == MCU_UNI_CMD(HIF_CTRL) ||
+ cmd == MCU_UNI_CMD(OFFLOAD) ||
+ cmd == MCU_UNI_CMD(SUSPEND)) {
+ struct mt7925_mcu_uni_event *event;
+
+ skb_pull(skb, sizeof(*rxd));
+ event = (struct mt7925_mcu_uni_event *)skb->data;
+ ret = le32_to_cpu(event->status);
+ /* skip invalid event */
+ if (mcu_cmd != event->cid)
+ ret = -EAGAIN;
+ } else {
+ skb_pull(skb, sizeof(*rxd));
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mt7925_mcu_parse_response);
+
+int mt7925_mcu_regval(struct mt792x_dev *dev, u32 regidx, u32 *val, bool set)
+{
+#define MT_RF_REG_HDR GENMASK(31, 24)
+#define MT_RF_REG_ANT GENMASK(23, 16)
+#define RF_REG_PREFIX 0x99
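+ /* regidx values carrying the 0x99 prefix in bits 31:24 address RF
+ * registers; the antenna index is encoded in bits 23:16
+ */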
+ struct {
+ u8 __rsv[4];
+ union {
+ struct uni_cmd_access_reg_basic {
+ __le16 tag;
+ __le16 len;
+ __le32 idx;
+ __le32 data;
+ } __packed reg;
+ struct uni_cmd_access_rf_reg_basic {
+ __le16 tag;
+ __le16 len;
+ __le16 ant;
+ u8 __rsv[2];
+ __le32 idx;
+ __le32 data;
+ } __packed rf_reg;
+ };
+ } __packed * res, req;
+ struct sk_buff *skb;
+ int ret;
+
+ if (u32_get_bits(regidx, MT_RF_REG_HDR) == RF_REG_PREFIX) {
+ req.rf_reg.tag = cpu_to_le16(UNI_CMD_ACCESS_RF_REG_BASIC);
+ req.rf_reg.len = cpu_to_le16(sizeof(req.rf_reg));
+ req.rf_reg.ant = cpu_to_le16(u32_get_bits(regidx, MT_RF_REG_ANT));
+ req.rf_reg.idx = cpu_to_le32(regidx);
+ req.rf_reg.data = set ? cpu_to_le32(*val) : 0;
+ } else {
+ req.reg.tag = cpu_to_le16(UNI_CMD_ACCESS_REG_BASIC);
+ req.reg.len = cpu_to_le16(sizeof(req.reg));
+ req.reg.idx = cpu_to_le32(regidx);
+ req.reg.data = set ? cpu_to_le32(*val) : 0;
+ }
+
+ if (set)
+ return mt76_mcu_send_msg(&dev->mt76, MCU_WM_UNI_CMD(REG_ACCESS),
+ &req, sizeof(req), true);
+
+ ret = mt76_mcu_send_and_get_msg(&dev->mt76,
+ MCU_WM_UNI_CMD_QUERY(REG_ACCESS),
+ &req, sizeof(req), true, &skb);
+ if (ret)
+ return ret;
+
+ res = (void *)skb->data;
+ if (u32_get_bits(regidx, MT_RF_REG_HDR) == RF_REG_PREFIX)
+ *val = le32_to_cpu(res->rf_reg.data);
+ else
+ *val = le32_to_cpu(res->reg.data);
+
+ dev_kfree_skb(skb);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(mt7925_mcu_regval);
+
+int mt7925_mcu_update_arp_filter(struct mt76_dev *dev,
+ struct mt76_vif *vif,
+ struct ieee80211_bss_conf *info)
+{
+ struct ieee80211_vif *mvif = container_of(info, struct ieee80211_vif,
+ bss_conf);
+ struct sk_buff *skb;
+ int i, len = min_t(int, mvif->cfg.arp_addr_cnt,
+ IEEE80211_BSS_ARP_ADDR_LIST_LEN);
+ struct {
+ struct {
+ u8 bss_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct mt7925_arpns_tlv arp;
+ } req = {
+ .hdr = {
+ .bss_idx = vif->idx,
+ },
+ .arp = {
+ .tag = cpu_to_le16(UNI_OFFLOAD_OFFLOAD_ARP),
+ .len = cpu_to_le16(sizeof(req) - 4 + len * 2 * sizeof(__be32)),
+ .ips_num = len,
+ .enable = true,
+ },
+ };
+
+ skb = mt76_mcu_msg_alloc(dev, NULL, sizeof(req) + len * 2 * sizeof(__be32));
+ if (!skb)
+ return -ENOMEM;
+
+ skb_put_data(skb, &req, sizeof(req));
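+ /* each ARP address entry occupies two __be32 slots: the address followed by zero padding */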
+ for (i = 0; i < len; i++) {
+ skb_put_data(skb, &mvif->cfg.arp_addr_list[i], sizeof(__be32));
+ skb_put_zero(skb, sizeof(__be32));
+ }
+
+ return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD(OFFLOAD), true);
+}
+
+#ifdef CONFIG_PM
+static int
+mt7925_connac_mcu_set_wow_ctrl(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ bool suspend, struct cfg80211_wowlan *wowlan)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct mt76_dev *dev = phy->dev;
+ struct {
+ struct {
+ u8 bss_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct mt76_connac_wow_ctrl_tlv wow_ctrl_tlv;
+ struct mt76_connac_wow_gpio_param_tlv gpio_tlv;
+ } req = {
+ .hdr = {
+ .bss_idx = mvif->idx,
+ },
+ .wow_ctrl_tlv = {
+ .tag = cpu_to_le16(UNI_SUSPEND_WOW_CTRL),
+ .len = cpu_to_le16(sizeof(struct mt76_connac_wow_ctrl_tlv)),
+ .cmd = suspend ? 1 : 2,
+ },
+ .gpio_tlv = {
+ .tag = cpu_to_le16(UNI_SUSPEND_WOW_GPIO_PARAM),
+ .len = cpu_to_le16(sizeof(struct mt76_connac_wow_gpio_param_tlv)),
+ .gpio_pin = 0xff, /* let the firmware pick the GPIO pin */
+ },
+ };
+
+ if (wowlan->magic_pkt)
+ req.wow_ctrl_tlv.trigger |= UNI_WOW_DETECT_TYPE_MAGIC;
+ if (wowlan->disconnect)
+ req.wow_ctrl_tlv.trigger |= (UNI_WOW_DETECT_TYPE_DISCONNECT |
+ UNI_WOW_DETECT_TYPE_BCN_LOST);
+ if (wowlan->nd_config) {
+ mt7925_mcu_sched_scan_req(phy, vif, wowlan->nd_config);
+ req.wow_ctrl_tlv.trigger |= UNI_WOW_DETECT_TYPE_SCH_SCAN_HIT;
+ mt7925_mcu_sched_scan_enable(phy, vif, suspend);
+ }
+ if (wowlan->n_patterns)
+ req.wow_ctrl_tlv.trigger |= UNI_WOW_DETECT_TYPE_BITMAP;
+
+ if (mt76_is_mmio(dev))
+ req.wow_ctrl_tlv.wakeup_hif = WOW_PCIE;
+ else if (mt76_is_usb(dev))
+ req.wow_ctrl_tlv.wakeup_hif = WOW_USB;
+ else if (mt76_is_sdio(dev))
+ req.wow_ctrl_tlv.wakeup_hif = WOW_GPIO;
+
+ return mt76_mcu_send_msg(dev, MCU_UNI_CMD(SUSPEND), &req,
+ sizeof(req), true);
+}
+
+static int
+mt7925_mcu_set_wow_pattern(struct mt76_dev *dev,
+ struct ieee80211_vif *vif,
+ u8 index, bool enable,
+ struct cfg80211_pkt_pattern *pattern)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct mt7925_wow_pattern_tlv *tlv;
+ struct sk_buff *skb;
+ struct {
+ u8 bss_idx;
+ u8 pad[3];
+ } __packed hdr = {
+ .bss_idx = mvif->idx,
+ };
+
+ skb = mt76_mcu_msg_alloc(dev, NULL, sizeof(hdr) + sizeof(*tlv));
+ if (!skb)
+ return -ENOMEM;
+
+ skb_put_data(skb, &hdr, sizeof(hdr));
+ tlv = (struct mt7925_wow_pattern_tlv *)skb_put(skb, sizeof(*tlv));
+ tlv->tag = cpu_to_le16(UNI_SUSPEND_WOW_PATTERN);
+ tlv->len = cpu_to_le16(sizeof(*tlv));
+ tlv->bss_idx = 0xF;
+ tlv->data_len = pattern->pattern_len;
+ tlv->enable = enable;
+ tlv->index = index;
+ tlv->offset = 0;
+
+ memcpy(tlv->pattern, pattern->pattern, pattern->pattern_len);
+ memcpy(tlv->mask, pattern->mask, DIV_ROUND_UP(pattern->pattern_len, 8));
+
+ return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD(SUSPEND), true);
+}
+
+void mt7925_mcu_set_suspend_iter(void *priv, u8 *mac,
+ struct ieee80211_vif *vif)
+{
+ struct mt76_phy *phy = priv;
+ bool suspend = !test_bit(MT76_STATE_RUNNING, &phy->state);
+ struct ieee80211_hw *hw = phy->hw;
+ struct cfg80211_wowlan *wowlan = hw->wiphy->wowlan_config;
+ int i;
+
+ mt76_connac_mcu_set_gtk_rekey(phy->dev, vif, suspend);
+
+ mt76_connac_mcu_set_suspend_mode(phy->dev, vif, suspend, 1, true);
+
+ for (i = 0; i < wowlan->n_patterns; i++)
+ mt7925_mcu_set_wow_pattern(phy->dev, vif, i, suspend,
+ &wowlan->patterns[i]);
+ mt7925_connac_mcu_set_wow_ctrl(phy, vif, suspend, wowlan);
+}
+
+#endif /* CONFIG_PM */
+
+static void
+mt7925_mcu_connection_loss_iter(void *priv, u8 *mac,
+ struct ieee80211_vif *vif)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct mt7925_uni_beacon_loss_event *event = priv;
+
+ if (mvif->idx != event->hdr.bss_idx)
+ return;
+
+ if (!(vif->driver_flags & IEEE80211_VIF_BEACON_FILTER) ||
+ vif->type != NL80211_IFTYPE_STATION)
+ return;
+
+ ieee80211_connection_loss(vif);
+}
+
+static void
+mt7925_mcu_connection_loss_event(struct mt792x_dev *dev, struct sk_buff *skb)
+{
+ struct mt7925_uni_beacon_loss_event *event;
+ struct mt76_phy *mphy = &dev->mt76.phy;
+
+ skb_pull(skb, sizeof(struct mt7925_mcu_rxd));
+ event = (struct mt7925_uni_beacon_loss_event *)skb->data;
+
+ ieee80211_iterate_active_interfaces_atomic(mphy->hw,
+ IEEE80211_IFACE_ITER_RESUME_ALL,
+ mt7925_mcu_connection_loss_iter, event);
+}
+
+static void
+mt7925_mcu_roc_iter(void *priv, u8 *mac, struct ieee80211_vif *vif)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct mt7925_roc_grant_tlv *grant = priv;
+
+ if (mvif->idx != grant->bss_idx)
+ return;
+
+ mvif->band_idx = grant->dbdcband;
+}
+
+static void
+mt7925_mcu_uni_roc_event(struct mt792x_dev *dev, struct sk_buff *skb)
+{
+ struct ieee80211_hw *hw = dev->mt76.hw;
+ struct mt7925_roc_grant_tlv *grant;
+ struct mt7925_mcu_rxd *rxd;
+ int duration;
+
+ rxd = (struct mt7925_mcu_rxd *)skb->data;
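+ /* the grant TLV sits 4 bytes into the event TLV payload */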
+ grant = (struct mt7925_roc_grant_tlv *)(rxd->tlv + 4);
+
+ /* should never happen */
+ WARN_ON_ONCE((le16_to_cpu(grant->tag) != UNI_EVENT_ROC_GRANT));
+
+ if (grant->reqtype == MT7925_ROC_REQ_ROC)
+ ieee80211_ready_on_channel(hw);
+ else if (grant->reqtype == MT7925_ROC_REQ_JOIN)
+ ieee80211_iterate_active_interfaces_atomic(hw,
+ IEEE80211_IFACE_ITER_RESUME_ALL,
+ mt7925_mcu_roc_iter, grant);
+ dev->phy.roc_grant = true;
+ wake_up(&dev->phy.roc_wait);
+ duration = le32_to_cpu(grant->max_interval);
+ mod_timer(&dev->phy.roc_timer,
+ jiffies + msecs_to_jiffies(duration));
+}
+
+static void
+mt7925_mcu_scan_event(struct mt792x_dev *dev, struct sk_buff *skb)
+{
+ struct mt76_phy *mphy = &dev->mt76.phy;
+ struct mt792x_phy *phy = (struct mt792x_phy *)mphy->priv;
+
+ spin_lock_bh(&dev->mt76.lock);
+ __skb_queue_tail(&phy->scan_event_list, skb);
+ spin_unlock_bh(&dev->mt76.lock);
+
+ ieee80211_queue_delayed_work(mphy->hw, &phy->scan_work,
+ MT792x_HW_SCAN_TIMEOUT);
+}
+
+static void
+mt7925_mcu_tx_done_event(struct mt792x_dev *dev, struct sk_buff *skb)
+{
+#define UNI_EVENT_TX_DONE_MSG 0
+#define UNI_EVENT_TX_DONE_RAW 1
+ struct mt7925_mcu_txs_event {
+ u8 ver;
+ u8 rsv[3];
+ u8 data[];
+ } __packed * txs;
+ struct tlv *tlv;
+ u32 tlv_len;
+
+ skb_pull(skb, sizeof(struct mt7925_mcu_rxd) + 4);
+ tlv = (struct tlv *)skb->data;
+ tlv_len = skb->len;
+
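+ /* walk the TLV chain; only raw TXS entries are consumed here */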
+ while (tlv_len > 0 && le16_to_cpu(tlv->len) <= tlv_len) {
+ switch (le16_to_cpu(tlv->tag)) {
+ case UNI_EVENT_TX_DONE_RAW:
+ txs = (struct mt7925_mcu_txs_event *)tlv->data;
+ mt7925_mac_add_txs(dev, txs->data);
+ break;
+ default:
+ break;
+ }
+ tlv_len -= le16_to_cpu(tlv->len);
+ tlv = (struct tlv *)((char *)(tlv) + le16_to_cpu(tlv->len));
+ }
+}
+
+static void
+mt7925_mcu_uni_debug_msg_event(struct mt792x_dev *dev, struct sk_buff *skb)
+{
+ struct mt7925_uni_debug_msg {
+ __le16 tag;
+ __le16 len;
+ u8 fmt;
+ u8 rsv[3];
+ u8 id;
+ u8 type:3;
+ u8 nr_args:5;
+ union {
+ struct idxlog {
+ __le16 rsv;
+ __le32 ts;
+ __le32 idx;
+ u8 data[];
+ } __packed idx;
+ struct txtlog {
+ u8 len;
+ u8 rsv;
+ __le32 ts;
+ u8 data[];
+ } __packed txt;
+ };
+ } __packed * hdr;
+
+ skb_pull(skb, sizeof(struct mt7925_mcu_rxd) + 4);
+ hdr = (struct mt7925_uni_debug_msg *)skb->data;
+
+ if (hdr->id == 0x28) {
+ skb_pull(skb, offsetof(struct mt7925_uni_debug_msg, id));
+ wiphy_info(mt76_hw(dev)->wiphy, "%.*s", skb->len, skb->data);
+ return;
+ }
+
+ if (hdr->id != 0xa8)
+ return;
+
+ if (hdr->type == 0) { /* idx log */
+ int i, ret, len = PAGE_SIZE - 1, nr_val;
+ struct page *page = dev_alloc_pages(get_order(len));
+ __le32 *val;
+ char *buf, *cur;
+
+ if (!page)
+ return;
+
+ buf = page_address(page);
+ cur = buf;
+
+ nr_val = (le16_to_cpu(hdr->len) - sizeof(*hdr)) / 4;
+ val = (__le32 *)hdr->idx.data;
+ for (i = 0; i < nr_val && len > 0; i++) {
+ ret = snprintf(cur, len, "0x%x,", le32_to_cpu(val[i]));
+ if (ret <= 0)
+ break;
+
+ cur += ret;
+ len -= ret;
+ }
+ if (cur > buf)
+ wiphy_info(mt76_hw(dev)->wiphy, "idx: 0x%X,%d,%s",
+ le32_to_cpu(hdr->idx.idx), nr_val, buf);
+ put_page(page);
+ } else if (hdr->type == 2) { /* str log */
+ wiphy_info(mt76_hw(dev)->wiphy, "%.*s", hdr->txt.len, hdr->txt.data);
+ }
+}
+
+static void
+mt7925_mcu_uni_rx_unsolicited_event(struct mt792x_dev *dev,
+ struct sk_buff *skb)
+{
+ struct mt7925_mcu_rxd *rxd;
+
+ rxd = (struct mt7925_mcu_rxd *)skb->data;
+
+ switch (rxd->eid) {
+ case MCU_UNI_EVENT_FW_LOG_2_HOST:
+ mt7925_mcu_uni_debug_msg_event(dev, skb);
+ break;
+ case MCU_UNI_EVENT_ROC:
+ mt7925_mcu_uni_roc_event(dev, skb);
+ break;
+ case MCU_UNI_EVENT_SCAN_DONE:
+ mt7925_mcu_scan_event(dev, skb);
+ return;
+ case MCU_UNI_EVENT_TX_DONE:
+ mt7925_mcu_tx_done_event(dev, skb);
+ break;
+ case MCU_UNI_EVENT_BSS_BEACON_LOSS:
+ mt7925_mcu_connection_loss_event(dev, skb);
+ break;
+ case MCU_UNI_EVENT_COREDUMP:
+ dev->fw_assert = true;
+ mt76_connac_mcu_coredump_event(&dev->mt76, skb, &dev->coredump);
+ return;
+ default:
+ break;
+ }
+ dev_kfree_skb(skb);
+}
+
+void mt7925_mcu_rx_event(struct mt792x_dev *dev, struct sk_buff *skb)
+{
+ struct mt7925_mcu_rxd *rxd = (struct mt7925_mcu_rxd *)skb->data;
+
+ if (skb_linearize(skb))
+ return;
+
+ if (rxd->option & MCU_UNI_CMD_UNSOLICITED_EVENT) {
+ mt7925_mcu_uni_rx_unsolicited_event(dev, skb);
+ return;
+ }
+
+ mt76_mcu_rx_event(&dev->mt76, skb);
+}
+
+static int
+mt7925_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif *mvif,
+ struct ieee80211_ampdu_params *params,
+ bool enable, bool tx)
+{
+ struct mt76_wcid *wcid = (struct mt76_wcid *)params->sta->drv_priv;
+ struct sta_rec_ba_uni *ba;
+ struct sk_buff *skb;
+ struct tlv *tlv;
+ int len;
+
+ len = sizeof(struct sta_req_hdr) + sizeof(*ba);
+ skb = __mt76_connac_mcu_alloc_sta_req(dev, mvif, wcid,
+ len);
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_BA, sizeof(*ba));
+
+ ba = (struct sta_rec_ba_uni *)tlv;
+ ba->ba_type = tx ? MT_BA_TYPE_ORIGINATOR : MT_BA_TYPE_RECIPIENT;
+ ba->winsize = cpu_to_le16(params->buf_size);
+ ba->ssn = cpu_to_le16(params->ssn);
+ ba->ba_en = enable << params->tid;
+ ba->amsdu = params->amsdu;
+ ba->tid = params->tid;
+
+ return mt76_mcu_skb_send_msg(dev, skb,
+ MCU_UNI_CMD(STA_REC_UPDATE), true);
+}
+
+/* starec & wtbl */
+int mt7925_mcu_uni_tx_ba(struct mt792x_dev *dev,
+ struct ieee80211_ampdu_params *params,
+ bool enable)
+{
+ struct mt792x_sta *msta = (struct mt792x_sta *)params->sta->drv_priv;
+ struct mt792x_vif *mvif = msta->vif;
+
+ if (enable && !params->amsdu)
+ msta->wcid.amsdu = false;
+
+ return mt7925_mcu_sta_ba(&dev->mt76, &mvif->mt76, params,
+ enable, true);
+}
+
+int mt7925_mcu_uni_rx_ba(struct mt792x_dev *dev,
+ struct ieee80211_ampdu_params *params,
+ bool enable)
+{
+ struct mt792x_sta *msta = (struct mt792x_sta *)params->sta->drv_priv;
+ struct mt792x_vif *mvif = msta->vif;
+
+ return mt7925_mcu_sta_ba(&dev->mt76, &mvif->mt76, params,
+ enable, false);
+}
+
+static int mt7925_load_clc(struct mt792x_dev *dev, const char *fw_name)
+{
+ const struct mt76_connac2_fw_trailer *hdr;
+ const struct mt76_connac2_fw_region *region;
+ const struct mt7925_clc *clc;
+ struct mt76_dev *mdev = &dev->mt76;
+ struct mt792x_phy *phy = &dev->phy;
+ const struct firmware *fw;
+ int ret, i, len, offset = 0;
+ u8 *clc_base = NULL;
+
+ if (mt7925_disable_clc ||
+ mt76_is_usb(&dev->mt76))
+ return 0;
+
+ ret = request_firmware(&fw, fw_name, mdev->dev);
+ if (ret)
+ return ret;
+
+ if (!fw || !fw->data || fw->size < sizeof(*hdr)) {
+ dev_err(mdev->dev, "Invalid firmware\n");
+ ret = -EINVAL;
+ goto out;
+ }
+
+ hdr = (const void *)(fw->data + fw->size - sizeof(*hdr));
+ for (i = 0; i < hdr->n_region; i++) {
+ region = (const void *)((const u8 *)hdr -
+ (hdr->n_region - i) * sizeof(*region));
+ len = le32_to_cpu(region->len);
+
+ /* check if we have valid buffer size */
+ if (offset + len > fw->size) {
+ dev_err(mdev->dev, "Invalid firmware region\n");
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if ((region->feature_set & FW_FEATURE_NON_DL) &&
+ region->type == FW_TYPE_CLC) {
+ clc_base = (u8 *)(fw->data + offset);
+ break;
+ }
+ offset += len;
+ }
+
+ if (!clc_base)
+ goto out;
+
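+ /* the CLC region is a sequence of length-prefixed blobs; cache each one by its index */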
+ for (offset = 0; offset < len; offset += le32_to_cpu(clc->len)) {
+ clc = (const struct mt7925_clc *)(clc_base + offset);
+
+ /* do not init buf again if chip reset triggered */
+ if (phy->clc[clc->idx])
+ continue;
+
+ phy->clc[clc->idx] = devm_kmemdup(mdev->dev, clc,
+ le32_to_cpu(clc->len),
+ GFP_KERNEL);
+
+ if (!phy->clc[clc->idx]) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ }
+
+ ret = mt7925_mcu_set_clc(dev, "00", ENVIRON_INDOOR);
+out:
+ release_firmware(fw);
+
+ return ret;
+}
+
+int mt7925_mcu_fw_log_2_host(struct mt792x_dev *dev, u8 ctrl)
+{
+ struct {
+ u8 _rsv[4];
+
+ __le16 tag;
+ __le16 len;
+ u8 ctrl;
+ u8 interval;
+ u8 _rsv2[2];
+ } __packed req = {
+ .tag = cpu_to_le16(UNI_WSYS_CONFIG_FW_LOG_CTRL),
+ .len = cpu_to_le16(sizeof(req) - 4),
+ .ctrl = ctrl,
+ };
+
+ return mt76_mcu_send_and_get_msg(&dev->mt76, MCU_UNI_CMD(WSYS_CONFIG),
+ &req, sizeof(req), false, NULL);
+}
+
+static void
+mt7925_mcu_parse_phy_cap(struct mt792x_dev *dev, char *data)
+{
+ struct mt76_phy *mphy = &dev->mt76.phy;
+ struct mt76_dev *mdev = mphy->dev;
+ struct mt7925_mcu_phy_cap {
+ u8 ht;
+ u8 vht;
+ u8 _5g;
+ u8 max_bw;
+ u8 nss;
+ u8 dbdc;
+ u8 tx_ldpc;
+ u8 rx_ldpc;
+ u8 tx_stbc;
+ u8 rx_stbc;
+ u8 hw_path;
+ u8 he;
+ u8 eht;
+ } __packed * cap;
+ enum {
+ WF0_24G,
+ WF0_5G
+ };
+
+ cap = (struct mt7925_mcu_phy_cap *)data;
+
+ mdev->phy.antenna_mask = BIT(cap->nss) - 1;
+ mdev->phy.chainmask = mdev->phy.antenna_mask;
+ mdev->phy.cap.has_2ghz = cap->hw_path & BIT(WF0_24G);
+ mdev->phy.cap.has_5ghz = cap->hw_path & BIT(WF0_5G);
+ dev->has_eht = cap->eht;
+}
+
+static int
+mt7925_mcu_get_nic_capability(struct mt792x_dev *dev)
+{
+ struct mt76_phy *mphy = &dev->mt76.phy;
+ struct {
+ u8 _rsv[4];
+
+ __le16 tag;
+ __le16 len;
+ } __packed req = {
+ .tag = cpu_to_le16(UNI_CHIP_CONFIG_NIC_CAPA),
+ .len = cpu_to_le16(sizeof(req) - 4),
+ };
+ struct mt76_connac_cap_hdr {
+ __le16 n_element;
+ u8 rsv[2];
+ } __packed * hdr;
+ struct sk_buff *skb;
+ int ret, i;
+
+ ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_UNI_CMD(CHIP_CONFIG),
+ &req, sizeof(req), true, &skb);
+ if (ret)
+ return ret;
+
+ hdr = (struct mt76_connac_cap_hdr *)skb->data;
+ if (skb->len < sizeof(*hdr)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ skb_pull(skb, sizeof(*hdr));
+
+ for (i = 0; i < le16_to_cpu(hdr->n_element); i++) {
+ struct tlv *tlv = (struct tlv *)skb->data;
+ int len;
+
+ if (skb->len < sizeof(*tlv))
+ break;
+
+ len = le16_to_cpu(tlv->len);
+ if (skb->len < len)
+ break;
+
+ switch (le16_to_cpu(tlv->tag)) {
+ case MT_NIC_CAP_6G:
+ mphy->cap.has_6ghz = !!tlv->data[0];
+ break;
+ case MT_NIC_CAP_MAC_ADDR:
+ memcpy(mphy->macaddr, (void *)tlv->data, ETH_ALEN);
+ break;
+ case MT_NIC_CAP_PHY:
+ mt7925_mcu_parse_phy_cap(dev, tlv->data);
+ break;
+ default:
+ break;
+ }
+ skb_pull(skb, len);
+ }
+out:
+ dev_kfree_skb(skb);
+ return ret;
+}
+
+int mt7925_mcu_chip_config(struct mt792x_dev *dev, const char *cmd)
+{
+ u16 len = strlen(cmd) + 1;
+ struct {
+ u8 _rsv[4];
+ __le16 tag;
+ __le16 len;
+ struct mt76_connac_config config;
+ } __packed req = {
+ .tag = cpu_to_le16(UNI_CHIP_CONFIG_CHIP_CFG),
+ .len = cpu_to_le16(sizeof(req) - 4),
+ .config = {
+ .resp_type = 0,
+ .type = 0,
+ .data_size = cpu_to_le16(len),
+ },
+ };
+
+ memcpy(req.config.data, cmd, len);
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(CHIP_CONFIG),
+ &req, sizeof(req), false);
+}
+
+int mt7925_mcu_set_deep_sleep(struct mt792x_dev *dev, bool enable)
+{
+ char cmd[16];
+
+ snprintf(cmd, sizeof(cmd), "KeepFullPwr %d", !enable);
+
+ return mt7925_mcu_chip_config(dev, cmd);
+}
+EXPORT_SYMBOL_GPL(mt7925_mcu_set_deep_sleep);
+
+int mt7925_run_firmware(struct mt792x_dev *dev)
+{
+ int err;
+
+ err = mt792x_load_firmware(dev);
+ if (err)
+ return err;
+
+ err = mt7925_mcu_get_nic_capability(dev);
+ if (err)
+ return err;
+
+ set_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state);
+ err = mt7925_load_clc(dev, mt792x_ram_name(dev));
+ if (err)
+ return err;
+
+ return mt7925_mcu_fw_log_2_host(dev, 1);
+}
+EXPORT_SYMBOL_GPL(mt7925_run_firmware);
+
+static void
+mt7925_mcu_sta_hdr_trans_tlv(struct sk_buff *skb,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct sta_rec_hdr_trans *hdr_trans;
+ struct mt76_wcid *wcid;
+ struct tlv *tlv;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_HDR_TRANS, sizeof(*hdr_trans));
+ hdr_trans = (struct sta_rec_hdr_trans *)tlv;
+ hdr_trans->dis_rx_hdr_tran = true;
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+ hdr_trans->to_ds = true;
+ else
+ hdr_trans->from_ds = true;
+
+ if (!sta)
+ return;
+
+ wcid = (struct mt76_wcid *)sta->drv_priv;
+
+ hdr_trans->dis_rx_hdr_tran = !test_bit(MT_WCID_FLAG_HDR_TRANS, &wcid->flags);
+ if (test_bit(MT_WCID_FLAG_4ADDR, &wcid->flags)) {
+ hdr_trans->to_ds = true;
+ hdr_trans->from_ds = true;
+ }
+}
+
+int mt7925_mcu_wtbl_update_hdr_trans(struct mt792x_dev *dev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_sta *msta;
+ struct sk_buff *skb;
+
+ msta = sta ? (struct mt792x_sta *)sta->drv_priv : &mvif->sta;
+
+ skb = __mt76_connac_mcu_alloc_sta_req(&dev->mt76, &mvif->mt76,
+ &msta->wcid,
+ MT7925_STA_UPDATE_MAX_SIZE);
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+ /* starec hdr trans */
+ mt7925_mcu_sta_hdr_trans_tlv(skb, vif, sta);
+ return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+ MCU_WMWA_UNI_CMD(STA_REC_UPDATE), true);
+}
+
+int mt7925_mcu_set_tx(struct mt792x_dev *dev, struct ieee80211_vif *vif)
+{
+#define MCU_EDCA_AC_PARAM 0
+#define WMM_AIFS_SET BIT(0)
+#define WMM_CW_MIN_SET BIT(1)
+#define WMM_CW_MAX_SET BIT(2)
+#define WMM_TXOP_SET BIT(3)
+#define WMM_PARAM_SET (WMM_AIFS_SET | WMM_CW_MIN_SET | \
+ WMM_CW_MAX_SET | WMM_TXOP_SET)
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct {
+ u8 bss_idx;
+ u8 __rsv[3];
+ } __packed hdr = {
+ .bss_idx = mvif->mt76.idx,
+ };
+ struct sk_buff *skb;
+ int len = sizeof(hdr) + IEEE80211_NUM_ACS * sizeof(struct edca);
+ int ac;
+
+ skb = mt76_mcu_msg_alloc(&dev->mt76, NULL, len);
+ if (!skb)
+ return -ENOMEM;
+
+ skb_put_data(skb, &hdr, sizeof(hdr));
+
+ for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
+ struct ieee80211_tx_queue_params *q = &mvif->queue_params[ac];
+ struct edca *e;
+ struct tlv *tlv;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, MCU_EDCA_AC_PARAM, sizeof(*e));
+
+ e = (struct edca *)tlv;
+ e->set = WMM_PARAM_SET;
+ e->queue = ac + mvif->mt76.wmm_idx * MT76_CONNAC_MAX_WMM_SETS;
+ e->aifs = q->aifs;
+ e->txop = cpu_to_le16(q->txop);
+
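+ /* contention windows are encoded as exponents (cw = 2^n - 1);
+ * fall back to 31/1023 when unset
+ */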
+ if (q->cw_min)
+ e->cw_min = fls(q->cw_min);
+ else
+ e->cw_min = 5;
+
+ if (q->cw_max)
+ e->cw_max = fls(q->cw_max);
+ else
+ e->cw_max = 10;
+ }
+
+ return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+ MCU_UNI_CMD(EDCA_UPDATE), true);
+}
+
+static int
+mt7925_mcu_sta_key_tlv(struct mt76_wcid *wcid,
+ struct mt76_connac_sta_key_conf *sta_key_conf,
+ struct sk_buff *skb,
+ struct ieee80211_key_conf *key,
+ enum set_key_cmd cmd)
+{
+ struct sta_rec_sec_uni *sec;
+ struct tlv *tlv;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_KEY_V2, sizeof(*sec));
+ sec = (struct sta_rec_sec_uni *)tlv;
+ sec->add = cmd;
+
+ if (cmd == SET_KEY) {
+ struct sec_key_uni *sec_key;
+ u8 cipher;
+
+ cipher = mt76_connac_mcu_get_cipher(key->cipher);
+ if (cipher == MCU_CIPHER_NONE)
+ return -EOPNOTSUPP;
+
+ sec_key = &sec->key[0];
+ sec_key->cipher_len = sizeof(*sec_key);
+
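+ /* a BIP key is programmed together with the previously cached pairwise CCMP key */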
+ if (cipher == MCU_CIPHER_BIP_CMAC_128) {
+ sec_key->wlan_idx = cpu_to_le16(wcid->idx);
+ sec_key->cipher_id = MCU_CIPHER_AES_CCMP;
+ sec_key->key_id = sta_key_conf->keyidx;
+ sec_key->key_len = 16;
+ memcpy(sec_key->key, sta_key_conf->key, 16);
+
+ sec_key = &sec->key[1];
+ sec_key->wlan_idx = cpu_to_le16(wcid->idx);
+ sec_key->cipher_id = MCU_CIPHER_BIP_CMAC_128;
+ sec_key->cipher_len = sizeof(*sec_key);
+ sec_key->key_len = 16;
+ memcpy(sec_key->key, key->key, 16);
+ sec->n_cipher = 2;
+ } else {
+ sec_key->wlan_idx = cpu_to_le16(wcid->idx);
+ sec_key->cipher_id = cipher;
+ sec_key->key_id = key->keyidx;
+ sec_key->key_len = key->keylen;
+ memcpy(sec_key->key, key->key, key->keylen);
+
+ if (cipher == MCU_CIPHER_TKIP) {
+ /* Rx/Tx MIC keys are swapped */
+ memcpy(sec_key->key + 16, key->key + 24, 8);
+ memcpy(sec_key->key + 24, key->key + 16, 8);
+ }
+
+ /* store key_conf for BIP batch update */
+ if (cipher == MCU_CIPHER_AES_CCMP) {
+ memcpy(sta_key_conf->key, key->key, key->keylen);
+ sta_key_conf->keyidx = key->keyidx;
+ }
+
+ sec->n_cipher = 1;
+ }
+ } else {
+ sec->n_cipher = 0;
+ }
+
+ return 0;
+}
+
+int mt7925_mcu_add_key(struct mt76_dev *dev, struct ieee80211_vif *vif,
+ struct mt76_connac_sta_key_conf *sta_key_conf,
+ struct ieee80211_key_conf *key, int mcu_cmd,
+ struct mt76_wcid *wcid, enum set_key_cmd cmd)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct sk_buff *skb;
+ int ret;
+
+ skb = __mt76_connac_mcu_alloc_sta_req(dev, mvif, wcid,
+ MT7925_STA_UPDATE_MAX_SIZE);
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+ ret = mt7925_mcu_sta_key_tlv(wcid, sta_key_conf, skb, key, cmd);
+ if (ret) {
+ dev_kfree_skb(skb);
+ return ret;
+ }
+
+ return mt76_mcu_skb_send_msg(dev, skb, mcu_cmd, true);
+}
+
+int mt7925_mcu_set_roc(struct mt792x_phy *phy, struct mt792x_vif *vif,
+ struct ieee80211_channel *chan, int duration,
+ enum mt7925_roc_req type, u8 token_id)
+{
+ int center_ch = ieee80211_frequency_to_channel(chan->center_freq);
+ struct mt792x_dev *dev = phy->dev;
+ struct {
+ struct {
+ u8 rsv[4];
+ } __packed hdr;
+ struct roc_acquire_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 bss_idx;
+ u8 tokenid;
+ u8 control_channel;
+ u8 sco;
+ u8 band;
+ u8 bw;
+ u8 center_chan;
+ u8 center_chan2;
+ u8 bw_from_ap;
+ u8 center_chan_from_ap;
+ u8 center_chan2_from_ap;
+ u8 reqtype;
+ __le32 maxinterval;
+ u8 dbdcband;
+ u8 rsv[3];
+ } __packed roc;
+ } __packed req = {
+ .roc = {
+ .tag = cpu_to_le16(UNI_ROC_ACQUIRE),
+ .len = cpu_to_le16(sizeof(struct roc_acquire_tlv)),
+ .tokenid = token_id,
+ .reqtype = type,
+ .maxinterval = cpu_to_le32(duration),
+ .bss_idx = vif->mt76.idx,
+ .control_channel = chan->hw_value,
+ .bw = CMD_CBW_20MHZ,
+ .bw_from_ap = CMD_CBW_20MHZ,
+ .center_chan = center_ch,
+ .center_chan_from_ap = center_ch,
+ .dbdcband = 0xff, /* auto */
+ },
+ };
+
+ if (chan->hw_value < center_ch)
+ req.roc.sco = 1; /* SCA */
+ else if (chan->hw_value > center_ch)
+ req.roc.sco = 3; /* SCB */
+
+ switch (chan->band) {
+ case NL80211_BAND_6GHZ:
+ req.roc.band = 3;
+ break;
+ case NL80211_BAND_5GHZ:
+ req.roc.band = 2;
+ break;
+ default:
+ req.roc.band = 1;
+ break;
+ }
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(ROC),
+ &req, sizeof(req), false);
+}
+
+int mt7925_mcu_abort_roc(struct mt792x_phy *phy, struct mt792x_vif *vif,
+ u8 token_id)
+{
+ struct mt792x_dev *dev = phy->dev;
+ struct {
+ struct {
+ u8 rsv[4];
+ } __packed hdr;
+ struct roc_abort_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 bss_idx;
+ u8 tokenid;
+ u8 dbdcband;
+ u8 rsv[5];
+ } __packed abort;
+ } __packed req = {
+ .abort = {
+ .tag = cpu_to_le16(UNI_ROC_ABORT),
+ .len = cpu_to_le16(sizeof(struct roc_abort_tlv)),
+ .tokenid = token_id,
+ .bss_idx = vif->mt76.idx,
+ .dbdcband = 0xff, /* auto */
+ },
+ };
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(ROC),
+ &req, sizeof(req), false);
+}
+
+int mt7925_mcu_set_chan_info(struct mt792x_phy *phy, u16 tag)
+{
+ static const u8 ch_band[] = {
+ [NL80211_BAND_2GHZ] = 0,
+ [NL80211_BAND_5GHZ] = 1,
+ [NL80211_BAND_6GHZ] = 2,
+ };
+ struct mt792x_dev *dev = phy->dev;
+ struct cfg80211_chan_def *chandef = &phy->mt76->chandef;
+ int freq1 = chandef->center_freq1;
+ u8 band_idx = chandef->chan->band != NL80211_BAND_2GHZ;
+ struct {
+ /* fixed field */
+ u8 __rsv[4];
+
+ __le16 tag;
+ __le16 len;
+ u8 control_ch;
+ u8 center_ch;
+ u8 bw;
+ u8 tx_path_num;
+ u8 rx_path; /* mask or num */
+ u8 switch_reason;
+ u8 band_idx;
+ u8 center_ch2; /* for 80+80 only */
+ __le16 cac_case;
+ u8 channel_band;
+ u8 rsv0;
+ __le32 outband_freq;
+ u8 txpower_drop;
+ u8 ap_bw;
+ u8 ap_center_ch;
+ u8 rsv1[53];
+ } __packed req = {
+ .tag = cpu_to_le16(tag),
+ .len = cpu_to_le16(sizeof(req) - 4),
+ .control_ch = chandef->chan->hw_value,
+ .center_ch = ieee80211_frequency_to_channel(freq1),
+ .bw = mt76_connac_chan_bw(chandef),
+ .tx_path_num = hweight8(phy->mt76->antenna_mask),
+ .rx_path = phy->mt76->antenna_mask,
+ .band_idx = band_idx,
+ .channel_band = ch_band[chandef->chan->band],
+ };
+
+ if (tag == UNI_CHANNEL_RX_PATH ||
+ dev->mt76.hw->conf.flags & IEEE80211_CONF_MONITOR)
+ req.switch_reason = CH_SWITCH_NORMAL;
+ else if (phy->mt76->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)
+ req.switch_reason = CH_SWITCH_SCAN_BYPASS_DPD;
+ else if (!cfg80211_reg_can_beacon(phy->mt76->hw->wiphy, chandef,
+ NL80211_IFTYPE_AP))
+ req.switch_reason = CH_SWITCH_DFS;
+ else
+ req.switch_reason = CH_SWITCH_NORMAL;
+
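+ /* rx_path carries an antenna mask by default, but a stream count for UNI_CHANNEL_SWITCH */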
+ if (tag == UNI_CHANNEL_SWITCH)
+ req.rx_path = hweight8(req.rx_path);
+
+ if (chandef->width == NL80211_CHAN_WIDTH_80P80) {
+ int freq2 = chandef->center_freq2;
+
+ req.center_ch2 = ieee80211_frequency_to_channel(freq2);
+ }
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(CHANNEL_SWITCH),
+ &req, sizeof(req), true);
+}
+
+int mt7925_mcu_set_eeprom(struct mt792x_dev *dev)
+{
+ struct {
+ u8 _rsv[4];
+
+ __le16 tag;
+ __le16 len;
+ u8 buffer_mode;
+ u8 format;
+ __le16 buf_len;
+ } __packed req = {
+ .tag = cpu_to_le16(UNI_EFUSE_BUFFER_MODE),
+ .len = cpu_to_le16(sizeof(req) - 4),
+ .buffer_mode = EE_MODE_EFUSE,
+ .format = EE_FORMAT_WHOLE
+ };
+
+ return mt76_mcu_send_and_get_msg(&dev->mt76, MCU_UNI_CMD(EFUSE_CTRL),
+ &req, sizeof(req), false, NULL);
+}
+EXPORT_SYMBOL_GPL(mt7925_mcu_set_eeprom);
+
+int mt7925_mcu_uni_bss_ps(struct mt792x_dev *dev, struct ieee80211_vif *vif)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct {
+ struct {
+ u8 bss_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct ps_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 ps_state; /* 0: device awake
+ * 1: static power save
+ * 2: dynamic power saving
+ * 3: enter TWT power saving
+ * 4: leave TWT power saving
+ */
+ u8 pad[3];
+ } __packed ps;
+ } __packed ps_req = {
+ .hdr = {
+ .bss_idx = mvif->mt76.idx,
+ },
+ .ps = {
+ .tag = cpu_to_le16(UNI_BSS_INFO_PS),
+ .len = cpu_to_le16(sizeof(struct ps_tlv)),
+ .ps_state = vif->cfg.ps ? 2 : 0,
+ },
+ };
+
+ if (vif->type != NL80211_IFTYPE_STATION)
+ return -EOPNOTSUPP;
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+ &ps_req, sizeof(ps_req), true);
+}
+
+static int
+mt7925_mcu_uni_bss_bcnft(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ bool enable)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct {
+ struct {
+ u8 bss_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct bcnft_tlv {
+ __le16 tag;
+ __le16 len;
+ __le16 bcn_interval;
+ u8 dtim_period;
+ u8 bmc_delivered_ac;
+ u8 bmc_triggered_ac;
+ u8 pad[3];
+ } __packed bcnft;
+ } __packed bcnft_req = {
+ .hdr = {
+ .bss_idx = mvif->mt76.idx,
+ },
+ .bcnft = {
+ .tag = cpu_to_le16(UNI_BSS_INFO_BCNFT),
+ .len = cpu_to_le16(sizeof(struct bcnft_tlv)),
+ .bcn_interval = cpu_to_le16(vif->bss_conf.beacon_int),
+ .dtim_period = vif->bss_conf.dtim_period,
+ },
+ };
+
+ if (vif->type != NL80211_IFTYPE_STATION)
+ return 0;
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+ &bcnft_req, sizeof(bcnft_req), true);
+}
+
+int
+mt7925_mcu_set_bss_pm(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ bool enable)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct {
+ struct {
+ u8 bss_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct bcnft_tlv {
+ __le16 tag;
+ __le16 len;
+ __le16 bcn_interval;
+ u8 dtim_period;
+ u8 bmc_delivered_ac;
+ u8 bmc_triggered_ac;
+ u8 pad[3];
+ } __packed enable;
+ } req = {
+ .hdr = {
+ .bss_idx = mvif->mt76.idx,
+ },
+ .enable = {
+ .tag = cpu_to_le16(UNI_BSS_INFO_BCNFT),
+ .len = cpu_to_le16(sizeof(struct bcnft_tlv)),
+ .dtim_period = vif->bss_conf.dtim_period,
+ .bcn_interval = cpu_to_le16(vif->bss_conf.beacon_int),
+ },
+ };
+ struct {
+ struct {
+ u8 bss_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct pm_disable {
+ __le16 tag;
+ __le16 len;
+ } __packed disable;
+ } req1 = {
+ .hdr = {
+ .bss_idx = mvif->mt76.idx,
+ },
+ .disable = {
+ .tag = cpu_to_le16(UNI_BSS_INFO_PM_DISABLE),
+ .len = cpu_to_le16(sizeof(struct pm_disable))
+ },
+ };
+ int err;
+
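+ /* always clear the PM state first; beacon filtering parameters are re-programmed only when enabling */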
+ err = mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+ &req1, sizeof(req1), false);
+ if (err < 0 || !enable)
+ return err;
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+ &req, sizeof(req), false);
+}
+
+static void
+mt7925_mcu_sta_he_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
+{
+ if (!sta->deflink.he_cap.has_he)
+ return;
+
+ mt76_connac_mcu_sta_he_tlv_v2(skb, sta);
+}
+
+static void
+mt7925_mcu_sta_he_6g_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
+{
+ struct sta_rec_he_6g_capa *he_6g;
+ struct tlv *tlv;
+
+ if (!sta->deflink.he_6ghz_capa.capa)
+ return;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_HE_6G, sizeof(*he_6g));
+
+ he_6g = (struct sta_rec_he_6g_capa *)tlv;
+ he_6g->capa = sta->deflink.he_6ghz_capa.capa;
+}
+
+static void
+mt7925_mcu_sta_eht_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
+{
+ struct ieee80211_eht_mcs_nss_supp *mcs_map;
+ struct ieee80211_eht_cap_elem_fixed *elem;
+ struct sta_rec_eht *eht;
+ struct tlv *tlv;
+
+ if (!sta->deflink.eht_cap.has_eht)
+ return;
+
+ mcs_map = &sta->deflink.eht_cap.eht_mcs_nss_supp;
+ elem = &sta->deflink.eht_cap.eht_cap_elem;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_EHT, sizeof(*eht));
+
+ eht = (struct sta_rec_eht *)tlv;
+ eht->tid_bitmap = 0xff;
+ eht->mac_cap = cpu_to_le16(*(u16 *)elem->mac_cap_info);
+ eht->phy_cap = cpu_to_le64(*(u64 *)elem->phy_cap_info);
+ eht->phy_cap_ext = cpu_to_le64(elem->phy_cap_info[8]);
+
+ if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_20)
+ memcpy(eht->mcs_map_bw20, &mcs_map->only_20mhz, sizeof(eht->mcs_map_bw20));
+ memcpy(eht->mcs_map_bw80, &mcs_map->bw._80, sizeof(eht->mcs_map_bw80));
+ memcpy(eht->mcs_map_bw160, &mcs_map->bw._160, sizeof(eht->mcs_map_bw160));
+}
+
+static void
+mt7925_mcu_sta_ht_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
+{
+ struct sta_rec_ht *ht;
+ struct tlv *tlv;
+
+ if (!sta->deflink.ht_cap.ht_supported)
+ return;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_HT, sizeof(*ht));
+
+ ht = (struct sta_rec_ht *)tlv;
+ ht->ht_cap = cpu_to_le16(sta->deflink.ht_cap.cap);
+}
+
+static void
+mt7925_mcu_sta_vht_tlv(struct sk_buff *skb, struct ieee80211_sta *sta)
+{
+ struct sta_rec_vht *vht;
+ struct tlv *tlv;
+
+ /* on the 6 GHz band this TLV is required for the hardware to work properly */
+ if (!sta->deflink.he_6ghz_capa.capa && !sta->deflink.vht_cap.vht_supported)
+ return;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_VHT, sizeof(*vht));
+
+ vht = (struct sta_rec_vht *)tlv;
+ vht->vht_cap = cpu_to_le32(sta->deflink.vht_cap.cap);
+ vht->vht_rx_mcs_map = sta->deflink.vht_cap.vht_mcs.rx_mcs_map;
+ vht->vht_tx_mcs_map = sta->deflink.vht_cap.vht_mcs.tx_mcs_map;
+}
+
+static void
+mt7925_mcu_sta_amsdu_tlv(struct sk_buff *skb,
+ struct ieee80211_vif *vif, struct ieee80211_sta *sta)
+{
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+ struct sta_rec_amsdu *amsdu;
+ struct tlv *tlv;
+
+ if (vif->type != NL80211_IFTYPE_STATION &&
+ vif->type != NL80211_IFTYPE_AP)
+ return;
+
+ if (!sta->deflink.agg.max_amsdu_len)
+ return;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_HW_AMSDU, sizeof(*amsdu));
+ amsdu = (struct sta_rec_amsdu *)tlv;
+ amsdu->max_amsdu_num = 8;
+ amsdu->amsdu_en = true;
+ msta->wcid.amsdu = true;
+
+ switch (sta->deflink.agg.max_amsdu_len) {
+ case IEEE80211_MAX_MPDU_LEN_VHT_11454:
+ amsdu->max_mpdu_size =
+ IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454;
+ return;
+ case IEEE80211_MAX_MPDU_LEN_HT_7935:
+ case IEEE80211_MAX_MPDU_LEN_VHT_7991:
+ amsdu->max_mpdu_size = IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991;
+ return;
+ default:
+ amsdu->max_mpdu_size = IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895;
+ return;
+ }
+}
+
+static void
+mt7925_mcu_sta_phy_tlv(struct sk_buff *skb,
+ struct ieee80211_vif *vif, struct ieee80211_sta *sta)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct cfg80211_chan_def *chandef = &mvif->mt76.ctx->def;
+ struct sta_rec_phy *phy;
+ struct tlv *tlv;
+ u8 af = 0, mm = 0;
+
+ if (!sta->deflink.ht_cap.ht_supported && !sta->deflink.he_6ghz_capa.capa)
+ return;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_PHY, sizeof(*phy));
+ phy = (struct sta_rec_phy *)tlv;
+ phy->phy_type = mt76_connac_get_phy_mode_v2(mvif->phy->mt76, vif, chandef->chan->band, sta);
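+ /* HT provides the baseline A-MPDU factor/density, VHT may raise the
+ * factor and the 6 GHz capability overrides both
+ */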
+ if (sta->deflink.ht_cap.ht_supported) {
+ af = sta->deflink.ht_cap.ampdu_factor;
+ mm = sta->deflink.ht_cap.ampdu_density;
+ }
+
+ if (sta->deflink.vht_cap.vht_supported) {
+ u8 vht_af = FIELD_GET(IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK,
+ sta->deflink.vht_cap.cap);
+
+ af = max_t(u8, af, vht_af);
+ }
+
+ if (sta->deflink.he_6ghz_capa.capa) {
+ af = le16_get_bits(sta->deflink.he_6ghz_capa.capa,
+ IEEE80211_HE_6GHZ_CAP_MAX_AMPDU_LEN_EXP);
+ mm = le16_get_bits(sta->deflink.he_6ghz_capa.capa,
+ IEEE80211_HE_6GHZ_CAP_MIN_MPDU_START);
+ }
+
+ phy->ampdu = FIELD_PREP(IEEE80211_HT_AMPDU_PARM_FACTOR, af) |
+ FIELD_PREP(IEEE80211_HT_AMPDU_PARM_DENSITY, mm);
+ phy->max_ampdu_len = af;
+}
+
+static void
+mt7925_mcu_sta_state_v2_tlv(struct mt76_phy *mphy, struct sk_buff *skb,
+ struct ieee80211_sta *sta,
+ struct ieee80211_vif *vif,
+ u8 rcpi, u8 sta_state)
+{
+ struct sta_rec_state_v2 {
+ __le16 tag;
+ __le16 len;
+ u8 state;
+ u8 rsv[3];
+ __le32 flags;
+ u8 vht_opmode;
+ u8 action;
+ u8 rsv2[2];
+ } __packed * state;
+ struct tlv *tlv;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_STATE, sizeof(*state));
+ state = (struct sta_rec_state_v2 *)tlv;
+ state->state = sta_state;
+
+ if (sta->deflink.vht_cap.vht_supported) {
+ state->vht_opmode = sta->deflink.bandwidth;
+ state->vht_opmode |= sta->deflink.rx_nss <<
+ IEEE80211_OPMODE_NOTIF_RX_NSS_SHIFT;
+ }
+}
+
+static void
+mt7925_mcu_sta_rate_ctrl_tlv(struct sk_buff *skb,
+ struct ieee80211_vif *vif, struct ieee80211_sta *sta)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct cfg80211_chan_def *chandef = &mvif->mt76.ctx->def;
+ enum nl80211_band band = chandef->chan->band;
+ struct sta_rec_ra_info *ra_info;
+ struct tlv *tlv;
+ u16 supp_rates;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_RA, sizeof(*ra_info));
+ ra_info = (struct sta_rec_ra_info *)tlv;
+
+ supp_rates = sta->deflink.supp_rates[band];
+ if (band == NL80211_BAND_2GHZ)
+ supp_rates = FIELD_PREP(RA_LEGACY_OFDM, supp_rates >> 4) |
+ FIELD_PREP(RA_LEGACY_CCK, supp_rates & 0xf);
+ else
+ supp_rates = FIELD_PREP(RA_LEGACY_OFDM, supp_rates);
+
+ ra_info->legacy = cpu_to_le16(supp_rates);
+
+ if (sta->deflink.ht_cap.ht_supported)
+ memcpy(ra_info->rx_mcs_bitmask,
+ sta->deflink.ht_cap.mcs.rx_mask,
+ HT_MCS_MASK_NUM);
+}
+
+static void
+mt7925_mcu_sta_mld_tlv(struct sk_buff *skb,
+ struct ieee80211_vif *vif, struct ieee80211_sta *sta)
+{
+ struct mt76_wcid *wcid = (struct mt76_wcid *)sta->drv_priv;
+ struct sta_rec_mld *mld;
+ struct tlv *tlv;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_MLD, sizeof(*mld));
+ mld = (struct sta_rec_mld *)tlv;
+ memcpy(mld->mac_addr, vif->addr, ETH_ALEN);
+ mld->primary_id = cpu_to_le16(wcid->idx);
+ mld->wlan_id = cpu_to_le16(wcid->idx);
+
+ /* TODO: 0 means deflink only, add secondary link(1) later */
+ mld->link_num = !!(hweight8(vif->active_links) > 1);
+ WARN_ON_ONCE(mld->link_num);
+}
+
+static int
+mt7925_mcu_sta_cmd(struct mt76_phy *phy,
+ struct mt76_sta_cmd_info *info)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)info->vif->drv_priv;
+ struct mt76_dev *dev = phy->dev;
+ struct wtbl_req_hdr *wtbl_hdr;
+ struct tlv *sta_wtbl;
+ struct sk_buff *skb;
+
+ skb = __mt76_connac_mcu_alloc_sta_req(dev, mvif, info->wcid,
+ MT7925_STA_UPDATE_MAX_SIZE);
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+ if (info->sta || !info->offload_fw)
+ mt76_connac_mcu_sta_basic_tlv(dev, skb, info->vif, info->sta,
+ info->enable, info->newly);
+ if (info->sta && info->enable) {
+ mt7925_mcu_sta_phy_tlv(skb, info->vif, info->sta);
+ mt7925_mcu_sta_ht_tlv(skb, info->sta);
+ mt7925_mcu_sta_vht_tlv(skb, info->sta);
+ mt76_connac_mcu_sta_uapsd(skb, info->vif, info->sta);
+ mt7925_mcu_sta_amsdu_tlv(skb, info->vif, info->sta);
+ mt7925_mcu_sta_he_tlv(skb, info->sta);
+ mt7925_mcu_sta_he_6g_tlv(skb, info->sta);
+ mt7925_mcu_sta_eht_tlv(skb, info->sta);
+ mt7925_mcu_sta_rate_ctrl_tlv(skb, info->vif, info->sta);
+ mt7925_mcu_sta_state_v2_tlv(phy, skb, info->sta,
+ info->vif, info->rcpi,
+ info->state);
+ mt7925_mcu_sta_hdr_trans_tlv(skb, info->vif, info->sta);
+ mt7925_mcu_sta_mld_tlv(skb, info->vif, info->sta);
+ }
+
+ sta_wtbl = mt76_connac_mcu_add_tlv(skb, STA_REC_WTBL,
+ sizeof(struct tlv));
+
+ wtbl_hdr = mt76_connac_mcu_alloc_wtbl_req(dev, info->wcid,
+ WTBL_RESET_AND_SET,
+ sta_wtbl, &skb);
+ if (IS_ERR(wtbl_hdr))
+ return PTR_ERR(wtbl_hdr);
+
+ if (info->enable) {
+ mt76_connac_mcu_wtbl_generic_tlv(dev, skb, info->vif,
+ info->sta, sta_wtbl,
+ wtbl_hdr);
+ mt76_connac_mcu_wtbl_hdr_trans_tlv(skb, info->vif, info->wcid,
+ sta_wtbl, wtbl_hdr);
+ if (info->sta)
+ mt76_connac_mcu_wtbl_ht_tlv(dev, skb, info->sta,
+ sta_wtbl, wtbl_hdr,
+ true, true);
+ }
+
+ return mt76_mcu_skb_send_msg(dev, skb, info->cmd, true);
+}
+
+int mt7925_mcu_sta_update(struct mt792x_dev *dev, struct ieee80211_sta *sta,
+ struct ieee80211_vif *vif, bool enable,
+ enum mt76_sta_info_state state)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ int rssi = -ewma_rssi_read(&mvif->rssi);
+ struct mt76_sta_cmd_info info = {
+ .sta = sta,
+ .vif = vif,
+ .enable = enable,
+ .cmd = MCU_UNI_CMD(STA_REC_UPDATE),
+ .state = state,
+ .offload_fw = true,
+ .rcpi = to_rcpi(rssi),
+ };
+ struct mt792x_sta *msta;
+
+ msta = sta ? (struct mt792x_sta *)sta->drv_priv : NULL;
+ info.wcid = msta ? &msta->wcid : &mvif->sta.wcid;
+ info.newly = msta ? state != MT76_STA_INFO_STATE_ASSOC : true;
+
+ return mt7925_mcu_sta_cmd(&dev->mphy, &info);
+}
+
+int mt7925_mcu_set_beacon_filter(struct mt792x_dev *dev,
+ struct ieee80211_vif *vif,
+ bool enable)
+{
+#define MT7925_FIF_BIT_CLR BIT(1)
+#define MT7925_FIF_BIT_SET BIT(0)
+ int err = 0;
+
+ if (enable) {
+ err = mt7925_mcu_uni_bss_bcnft(dev, vif, true);
+ if (err)
+ return err;
+
+ return mt7925_mcu_set_rxfilter(dev, 0,
+ MT7925_FIF_BIT_SET,
+ MT_WF_RFCR_DROP_OTHER_BEACON);
+ }
+
+ err = mt7925_mcu_set_bss_pm(dev, vif, false);
+ if (err)
+ return err;
+
+ return mt7925_mcu_set_rxfilter(dev, 0,
+ MT7925_FIF_BIT_CLR,
+ MT_WF_RFCR_DROP_OTHER_BEACON);
+}
+
+int mt7925_get_txpwr_info(struct mt792x_dev *dev, u8 band_idx, struct mt7925_txpwr *txpwr)
+{
+#define TX_POWER_SHOW_INFO 0x7
+#define TXPOWER_ALL_RATE_POWER_INFO 0x2
+ struct mt7925_txpwr_event *event;
+ struct mt7925_txpwr_req req = {
+ .tag = cpu_to_le16(TX_POWER_SHOW_INFO),
+ .len = cpu_to_le16(sizeof(req) - 4),
+ .catg = TXPOWER_ALL_RATE_POWER_INFO,
+ .band_idx = band_idx,
+ };
+ struct sk_buff *skb;
+ int ret;
+
+ ret = mt76_mcu_send_and_get_msg(&dev->mt76, MCU_UNI_CMD(TXPOWER),
+ &req, sizeof(req), true, &skb);
+ if (ret)
+ return ret;
+
+ event = (struct mt7925_txpwr_event *)skb->data;
+ memcpy(txpwr, &event->txpwr, sizeof(event->txpwr));
+
+ dev_kfree_skb(skb);
+
+ return 0;
+}
+
+int mt7925_mcu_set_sniffer(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ bool enable)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+
+ struct {
+ struct {
+ u8 band_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct sniffer_enable_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 enable;
+ u8 pad[3];
+ } __packed enable;
+ } __packed req = {
+ .hdr = {
+ .band_idx = mvif->mt76.band_idx,
+ },
+ .enable = {
+ .tag = cpu_to_le16(UNI_SNIFFER_ENABLE),
+ .len = cpu_to_le16(sizeof(struct sniffer_enable_tlv)),
+ .enable = enable,
+ },
+ };
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(SNIFFER), &req, sizeof(req),
+ true);
+}
+
+int mt7925_mcu_config_sniffer(struct mt792x_vif *vif,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ struct mt76_phy *mphy = vif->phy->mt76;
+ struct cfg80211_chan_def *chandef = ctx ? &ctx->def : &mphy->chandef;
+ int freq1 = chandef->center_freq1, freq2 = chandef->center_freq2;
+
+ const u8 ch_band[] = {
+ [NL80211_BAND_2GHZ] = 1,
+ [NL80211_BAND_5GHZ] = 2,
+ [NL80211_BAND_6GHZ] = 3,
+ };
+ const u8 ch_width[] = {
+ [NL80211_CHAN_WIDTH_20_NOHT] = 0,
+ [NL80211_CHAN_WIDTH_20] = 0,
+ [NL80211_CHAN_WIDTH_40] = 0,
+ [NL80211_CHAN_WIDTH_80] = 1,
+ [NL80211_CHAN_WIDTH_160] = 2,
+ [NL80211_CHAN_WIDTH_80P80] = 3,
+ [NL80211_CHAN_WIDTH_5] = 4,
+ [NL80211_CHAN_WIDTH_10] = 5,
+ [NL80211_CHAN_WIDTH_320] = 6,
+ };
+
+ struct {
+ struct {
+ u8 band_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct config_tlv {
+ __le16 tag;
+ __le16 len;
+ __le16 aid;
+ u8 ch_band;
+ u8 bw;
+ u8 control_ch;
+ u8 sco;
+ u8 center_ch;
+ u8 center_ch2;
+ u8 drop_err;
+ u8 pad[3];
+ } __packed tlv;
+ } __packed req = {
+ .hdr = {
+ .band_idx = vif->mt76.band_idx,
+ },
+ .tlv = {
+ .tag = cpu_to_le16(UNI_SNIFFER_CONFIG),
+ .len = cpu_to_le16(sizeof(req.tlv)),
+ .control_ch = chandef->chan->hw_value,
+ .center_ch = ieee80211_frequency_to_channel(freq1),
+ .drop_err = 1,
+ },
+ };
+
+ if (chandef->chan->band < ARRAY_SIZE(ch_band))
+ req.tlv.ch_band = ch_band[chandef->chan->band];
+ if (chandef->width < ARRAY_SIZE(ch_width))
+ req.tlv.bw = ch_width[chandef->width];
+
+ if (freq2)
+ req.tlv.center_ch2 = ieee80211_frequency_to_channel(freq2);
+
+ if (req.tlv.control_ch < req.tlv.center_ch)
+ req.tlv.sco = 1; /* SCA */
+ else if (req.tlv.control_ch > req.tlv.center_ch)
+ req.tlv.sco = 3; /* SCB */
+
+ return mt76_mcu_send_msg(mphy->dev, MCU_UNI_CMD(SNIFFER),
+ &req, sizeof(req), true);
+}
+
+int
+mt7925_mcu_uni_add_beacon_offload(struct mt792x_dev *dev,
+ struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ bool enable)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct ieee80211_mutable_offsets offs;
+ struct {
+ struct req_hdr {
+ u8 bss_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct bcn_content_tlv {
+ __le16 tag;
+ __le16 len;
+ __le16 tim_ie_pos;
+ __le16 csa_ie_pos;
+ __le16 bcc_ie_pos;
+ /* 0: disable beacon offload
+ * 1: enable beacon offload
+ * 2: update probe response offload
+ */
+ u8 enable;
+ /* 0: legacy format (TXD + payload)
+ * 1: only cap field IE
+ */
+ u8 type;
+ __le16 pkt_len;
+ u8 pkt[512];
+ } __packed beacon_tlv;
+ } req = {
+ .hdr = {
+ .bss_idx = mvif->mt76.idx,
+ },
+ .beacon_tlv = {
+ .tag = cpu_to_le16(UNI_BSS_INFO_BCN_CONTENT),
+ .len = cpu_to_le16(sizeof(struct bcn_content_tlv)),
+ .enable = enable,
+ .type = 1,
+ },
+ };
+ struct sk_buff *skb;
+ u8 cap_offs;
+
+ /* only the enable/update path is supported here; the disable flow
+ * is handled automatically in the bss stop handler
+ */
+ if (!enable)
+ return -EOPNOTSUPP;
+
+ skb = ieee80211_beacon_get_template(mt76_hw(dev), vif, &offs, 0);
+ if (!skb)
+ return -EINVAL;
+
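+ /* the template is passed to the firmware starting at the capability field (type 1 format) */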
+ cap_offs = offsetof(struct ieee80211_mgmt, u.beacon.capab_info);
+ if (!skb_pull(skb, cap_offs)) {
+ dev_err(dev->mt76.dev, "beacon format err\n");
+ dev_kfree_skb(skb);
+ return -EINVAL;
+ }
+
+ if (skb->len > 512) {
+ dev_err(dev->mt76.dev, "beacon size limit exceed\n");
+ dev_kfree_skb(skb);
+ return -EINVAL;
+ }
+
+ memcpy(req.beacon_tlv.pkt, skb->data, skb->len);
+ req.beacon_tlv.pkt_len = cpu_to_le16(skb->len);
+ offs.tim_offset -= cap_offs;
+ req.beacon_tlv.tim_ie_pos = cpu_to_le16(offs.tim_offset);
+
+ if (offs.cntdwn_counter_offs[0]) {
+ u16 csa_offs;
+
+ csa_offs = offs.cntdwn_counter_offs[0] - cap_offs - 4;
+ req.beacon_tlv.csa_ie_pos = cpu_to_le16(csa_offs);
+ }
+ dev_kfree_skb(skb);
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+ &req, sizeof(req), true);
+}
+
+int mt7925_mcu_set_chctx(struct mt76_phy *phy, struct mt76_vif *mvif,
+ struct ieee80211_chanctx_conf *ctx)
+{
+ struct cfg80211_chan_def *chandef = ctx ? &ctx->def : &phy->chandef;
+ int freq1 = chandef->center_freq1, freq2 = chandef->center_freq2;
+ enum nl80211_band band = chandef->chan->band;
+ struct mt76_dev *mdev = phy->dev;
+ struct {
+ struct {
+ u8 bss_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct rlm_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 control_channel;
+ u8 center_chan;
+ u8 center_chan2;
+ u8 bw;
+ u8 tx_streams;
+ u8 rx_streams;
+ u8 ht_op_info;
+ u8 sco;
+ u8 band;
+ u8 pad[3];
+ } __packed rlm;
+ } __packed rlm_req = {
+ .hdr = {
+ .bss_idx = mvif->idx,
+ },
+ .rlm = {
+ .tag = cpu_to_le16(UNI_BSS_INFO_RLM),
+ .len = cpu_to_le16(sizeof(struct rlm_tlv)),
+ .control_channel = chandef->chan->hw_value,
+ .center_chan = ieee80211_frequency_to_channel(freq1),
+ .center_chan2 = ieee80211_frequency_to_channel(freq2),
+ .tx_streams = hweight8(phy->antenna_mask),
+ .ht_op_info = 4, /* set HT 40M allowed */
+ .rx_streams = hweight8(phy->antenna_mask),
+ .band = band,
+ },
+ };
+
+ switch (chandef->width) {
+ case NL80211_CHAN_WIDTH_40:
+ rlm_req.rlm.bw = CMD_CBW_40MHZ;
+ break;
+ case NL80211_CHAN_WIDTH_80:
+ rlm_req.rlm.bw = CMD_CBW_80MHZ;
+ break;
+ case NL80211_CHAN_WIDTH_80P80:
+ rlm_req.rlm.bw = CMD_CBW_8080MHZ;
+ break;
+ case NL80211_CHAN_WIDTH_160:
+ rlm_req.rlm.bw = CMD_CBW_160MHZ;
+ break;
+ case NL80211_CHAN_WIDTH_5:
+ rlm_req.rlm.bw = CMD_CBW_5MHZ;
+ break;
+ case NL80211_CHAN_WIDTH_10:
+ rlm_req.rlm.bw = CMD_CBW_10MHZ;
+ break;
+ case NL80211_CHAN_WIDTH_20_NOHT:
+ case NL80211_CHAN_WIDTH_20:
+ default:
+ rlm_req.rlm.bw = CMD_CBW_20MHZ;
+ rlm_req.rlm.ht_op_info = 0;
+ break;
+ }
+
+ if (rlm_req.rlm.control_channel < rlm_req.rlm.center_chan)
+ rlm_req.rlm.sco = 1; /* SCA */
+ else if (rlm_req.rlm.control_channel > rlm_req.rlm.center_chan)
+ rlm_req.rlm.sco = 3; /* SCB */
+
+ return mt76_mcu_send_msg(mdev, MCU_UNI_CMD(BSS_INFO_UPDATE), &rlm_req,
+ sizeof(rlm_req), true);
+}
+
+static struct sk_buff *
+__mt7925_mcu_alloc_bss_req(struct mt76_dev *dev, struct mt76_vif *mvif, int len)
+{
+ struct bss_req_hdr hdr = {
+ .bss_idx = mvif->idx,
+ };
+ struct sk_buff *skb;
+
+ skb = mt76_mcu_msg_alloc(dev, NULL, len);
+ if (!skb)
+ return ERR_PTR(-ENOMEM);
+
+ skb_put_data(skb, &hdr, sizeof(hdr));
+
+ return skb;
+}
+
+static u8
+mt7925_get_phy_mode_ext(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ enum nl80211_band band, struct ieee80211_sta *sta)
+{
+ struct ieee80211_he_6ghz_capa *he_6ghz_capa;
+ const struct ieee80211_sta_eht_cap *eht_cap;
+ __le16 capa = 0;
+ u8 mode = 0;
+
+ if (sta) {
+ he_6ghz_capa = &sta->deflink.he_6ghz_capa;
+ eht_cap = &sta->deflink.eht_cap;
+ } else {
+ struct ieee80211_supported_band *sband;
+
+ sband = phy->hw->wiphy->bands[band];
+ capa = ieee80211_get_he_6ghz_capa(sband, vif->type);
+ he_6ghz_capa = (struct ieee80211_he_6ghz_capa *)&capa;
+
+ eht_cap = ieee80211_get_eht_iftype_cap(sband, vif->type);
+ }
+
+ switch (band) {
+ case NL80211_BAND_2GHZ:
+ if (eht_cap && eht_cap->has_eht)
+ mode |= PHY_MODE_BE_24G;
+ break;
+ case NL80211_BAND_5GHZ:
+ if (eht_cap && eht_cap->has_eht)
+ mode |= PHY_MODE_BE_5G;
+ break;
+ case NL80211_BAND_6GHZ:
+ if (he_6ghz_capa && he_6ghz_capa->capa)
+ mode |= PHY_MODE_AX_6G;
+
+ if (eht_cap && eht_cap->has_eht)
+ mode |= PHY_MODE_BE_6G;
+ break;
+ default:
+ break;
+ }
+
+ return mode;
+}
+
+static void
+mt7925_mcu_bss_basic_tlv(struct sk_buff *skb,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ struct ieee80211_chanctx_conf *ctx,
+ struct mt76_phy *phy, u16 wlan_idx,
+ bool enable)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_sta *msta = sta ? (struct mt792x_sta *)sta->drv_priv :
+ &mvif->sta;
+ struct cfg80211_chan_def *chandef = ctx ? &ctx->def : &phy->chandef;
+ enum nl80211_band band = chandef->chan->band;
+ struct mt76_connac_bss_basic_tlv *basic_req;
+ u8 idx, basic_phy;
+ struct tlv *tlv;
+ int conn_type;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_BSS_INFO_BASIC, sizeof(*basic_req));
+ basic_req = (struct mt76_connac_bss_basic_tlv *)tlv;
+
+ idx = mvif->mt76.omac_idx > EXT_BSSID_START ? HW_BSSID_0 :
+ mvif->mt76.omac_idx;
+ basic_req->hw_bss_idx = idx;
+
+ basic_req->phymode_ext = mt7925_get_phy_mode_ext(phy, vif, band, sta);
+
+ basic_phy = mt76_connac_get_phy_mode_v2(phy, vif, band, sta);
+ basic_req->nonht_basic_phy = cpu_to_le16(basic_phy);
+
+ memcpy(basic_req->bssid, vif->bss_conf.bssid, ETH_ALEN);
+ basic_req->phymode = mt76_connac_get_phy_mode(phy, vif, band, sta);
+ basic_req->bcn_interval = cpu_to_le16(vif->bss_conf.beacon_int);
+ basic_req->dtim_period = vif->bss_conf.dtim_period;
+ basic_req->bmc_tx_wlan_idx = cpu_to_le16(wlan_idx);
+ basic_req->sta_idx = cpu_to_le16(msta->wcid.idx);
+ basic_req->omac_idx = mvif->mt76.omac_idx;
+ basic_req->band_idx = mvif->mt76.band_idx;
+ basic_req->wmm_idx = mvif->mt76.wmm_idx;
+ basic_req->conn_state = !enable;
+
+ switch (vif->type) {
+ case NL80211_IFTYPE_MESH_POINT:
+ case NL80211_IFTYPE_AP:
+ if (vif->p2p)
+ conn_type = CONNECTION_P2P_GO;
+ else
+ conn_type = CONNECTION_INFRA_AP;
+ basic_req->conn_type = cpu_to_le32(conn_type);
+ basic_req->active = enable;
+ break;
+ case NL80211_IFTYPE_STATION:
+ if (vif->p2p)
+ conn_type = CONNECTION_P2P_GC;
+ else
+ conn_type = CONNECTION_INFRA_STA;
+ basic_req->conn_type = cpu_to_le32(conn_type);
+ basic_req->active = true;
+ break;
+ case NL80211_IFTYPE_ADHOC:
+ basic_req->conn_type = cpu_to_le32(CONNECTION_IBSS_ADHOC);
+ basic_req->active = true;
+ break;
+ default:
+ WARN_ON(1);
+ break;
+ }
+}
+
+static void
+mt7925_mcu_bss_sec_tlv(struct sk_buff *skb, struct ieee80211_vif *vif)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct bss_sec_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 mode;
+ u8 status;
+ u8 cipher;
+ u8 __rsv;
+ } __packed * sec;
+ struct tlv *tlv;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_BSS_INFO_SEC, sizeof(*sec));
+ sec = (struct bss_sec_tlv *)tlv;
+
+ switch (mvif->cipher) {
+ case MCU_CIPHER_GCMP_256:
+ case MCU_CIPHER_GCMP:
+ sec->mode = MODE_WPA3_SAE;
+ sec->status = 8;
+ break;
+ case MCU_CIPHER_AES_CCMP:
+ sec->mode = MODE_WPA2_PSK;
+ sec->status = 6;
+ break;
+ case MCU_CIPHER_TKIP:
+ sec->mode = MODE_WPA2_PSK;
+ sec->status = 4;
+ break;
+ case MCU_CIPHER_WEP104:
+ case MCU_CIPHER_WEP40:
+ sec->mode = MODE_SHARED;
+ sec->status = 0;
+ break;
+ default:
+ sec->mode = MODE_OPEN;
+ sec->status = 1;
+ break;
+ }
+
+ sec->cipher = mvif->cipher;
+}
+
+static void
+mt7925_mcu_bss_bmc_tlv(struct sk_buff *skb, struct mt792x_phy *phy,
+ struct ieee80211_chanctx_conf *ctx,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct cfg80211_chan_def *chandef = ctx ? &ctx->def : &phy->mt76->chandef;
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ enum nl80211_band band = chandef->chan->band;
+ struct bss_rate_tlv *bmc;
+ struct tlv *tlv;
+ u8 idx = mvif->mcast_rates_idx ?
+ mvif->mcast_rates_idx : mvif->basic_rates_idx;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_BSS_INFO_RATE, sizeof(*bmc));
+
+ bmc = (struct bss_rate_tlv *)tlv;
+
+ bmc->short_preamble = (band == NL80211_BAND_2GHZ);
+ bmc->bc_fixed_rate = idx;
+ bmc->mc_fixed_rate = idx;
+}
+
+static void
+mt7925_mcu_bss_mld_tlv(struct sk_buff *skb,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ bool is_mld = ieee80211_vif_is_mld(vif);
+ struct bss_mld_tlv *mld;
+ struct tlv *tlv;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_BSS_INFO_MLD, sizeof(*mld));
+ mld = (struct bss_mld_tlv *)tlv;
+
+ mld->link_id = sta ? (is_mld ? vif->bss_conf.link_id : 0) : 0xff;
+ mld->group_mld_id = is_mld ? mvif->mt76.idx : 0xff;
+ mld->own_mld_id = mvif->mt76.idx + 32;
+ mld->remap_idx = 0xff;
+
+ if (sta)
+ memcpy(mld->mac_addr, sta->addr, ETH_ALEN);
+}
+
+static void
+mt7925_mcu_bss_qos_tlv(struct sk_buff *skb, struct ieee80211_vif *vif)
+{
+ struct mt76_connac_bss_qos_tlv *qos;
+ struct tlv *tlv;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_BSS_INFO_QBSS, sizeof(*qos));
+ qos = (struct mt76_connac_bss_qos_tlv *)tlv;
+ qos->qos = vif->bss_conf.qos;
+}
+
+static void
+mt7925_mcu_bss_he_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ struct mt792x_phy *phy)
+{
+#define DEFAULT_HE_PE_DURATION 4
+#define DEFAULT_HE_DURATION_RTS_THRES 1023
+ const struct ieee80211_sta_he_cap *cap;
+ struct bss_info_uni_he *he;
+ struct tlv *tlv;
+
+ cap = mt76_connac_get_he_phy_cap(phy->mt76, vif);
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_BSS_INFO_HE_BASIC, sizeof(*he));
+
+ he = (struct bss_info_uni_he *)tlv;
+ he->he_pe_duration = vif->bss_conf.htc_trig_based_pkt_ext;
+ if (!he->he_pe_duration)
+ he->he_pe_duration = DEFAULT_HE_PE_DURATION;
+
+ he->he_rts_thres = cpu_to_le16(vif->bss_conf.frame_time_rts_th);
+ if (!he->he_rts_thres)
+ he->he_rts_thres = cpu_to_le16(DEFAULT_HE_DURATION_RTS_THRES);
+
+ he->max_nss_mcs[CMD_HE_MCS_BW80] = cap->he_mcs_nss_supp.tx_mcs_80;
+ he->max_nss_mcs[CMD_HE_MCS_BW160] = cap->he_mcs_nss_supp.tx_mcs_160;
+ he->max_nss_mcs[CMD_HE_MCS_BW8080] = cap->he_mcs_nss_supp.tx_mcs_80p80;
+}
+
+static void
+mt7925_mcu_bss_color_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ bool enable)
+{
+ struct bss_info_uni_bss_color *color;
+ struct tlv *tlv;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_BSS_INFO_BSS_COLOR, sizeof(*color));
+ color = (struct bss_info_uni_bss_color *)tlv;
+
+ color->enable = enable ?
+ vif->bss_conf.he_bss_color.enabled : 0;
+ color->bss_color = enable ?
+ vif->bss_conf.he_bss_color.color : 0;
+}
+
+int mt7925_mcu_add_bss_info(struct mt792x_phy *phy,
+ struct ieee80211_chanctx_conf *ctx,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ int enable)
+{
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_dev *dev = phy->dev;
+ struct sk_buff *skb;
+ int err;
+
+ skb = __mt7925_mcu_alloc_bss_req(&dev->mt76, &mvif->mt76,
+ MT7925_BSS_UPDATE_MAX_SIZE);
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+ /* bss_basic must be first */
+ mt7925_mcu_bss_basic_tlv(skb, vif, sta, ctx, phy->mt76,
+ mvif->sta.wcid.idx, enable);
+ mt7925_mcu_bss_sec_tlv(skb, vif);
+
+ mt7925_mcu_bss_bmc_tlv(skb, phy, ctx, vif, sta);
+ mt7925_mcu_bss_qos_tlv(skb, vif);
+ mt7925_mcu_bss_mld_tlv(skb, vif, sta);
+
+ if (vif->bss_conf.he_support) {
+ mt7925_mcu_bss_he_tlv(skb, vif, phy);
+ mt7925_mcu_bss_color_tlv(skb, vif, enable);
+ }
+
+ err = mt76_mcu_skb_send_msg(&dev->mt76, skb,
+ MCU_UNI_CMD(BSS_INFO_UPDATE), true);
+ if (err < 0)
+ return err;
+
+ return mt7925_mcu_set_chctx(phy->mt76, &mvif->mt76, ctx);
+}
+
+int mt7925_mcu_set_dbdc(struct mt76_phy *phy)
+{
+ struct mt76_dev *mdev = phy->dev;
+
+ struct mbmc_conf_tlv *conf;
+ struct mbmc_set_req *hdr;
+ struct sk_buff *skb;
+ struct tlv *tlv;
+ int max_len, err;
+
+ max_len = sizeof(*hdr) + sizeof(*conf);
+ skb = mt76_mcu_msg_alloc(mdev, NULL, max_len);
+ if (!skb)
+ return -ENOMEM;
+
+ hdr = (struct mbmc_set_req *)skb_put(skb, sizeof(*hdr));
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_MBMC_SETTING, sizeof(*conf));
+ conf = (struct mbmc_conf_tlv *)tlv;
+
+ conf->mbmc_en = 1;
+ conf->band = 0; /* unused */
+
+ err = mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SET_DBDC_PARMS),
+ false);
+
+ return err;
+}
+
+#define MT76_CONNAC_SCAN_CHANNEL_TIME 60
+
+int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *scan_req)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct cfg80211_scan_request *sreq = &scan_req->req;
+ int n_ssids = 0, err, i, duration;
+ struct ieee80211_channel **scan_list = sreq->channels;
+ struct mt76_dev *mdev = phy->dev;
+ struct mt76_connac_mcu_scan_channel *chan;
+ struct sk_buff *skb;
+
+ struct scan_hdr_tlv *hdr;
+ struct scan_req_tlv *req;
+ struct scan_ssid_tlv *ssid;
+ struct scan_bssid_tlv *bssid;
+ struct scan_chan_info_tlv *chan_info;
+ struct scan_ie_tlv *ie;
+ struct scan_misc_tlv *misc;
+ struct tlv *tlv;
+ int max_len;
+
+ max_len = sizeof(*hdr) + sizeof(*req) + sizeof(*ssid) +
+ sizeof(*bssid) + sizeof(*chan_info) +
+ sizeof(*misc) + sizeof(*ie);
+
+ skb = mt76_mcu_msg_alloc(mdev, NULL, max_len);
+ if (!skb)
+ return -ENOMEM;
+
+ set_bit(MT76_HW_SCANNING, &phy->state);
+ mvif->scan_seq_num = (mvif->scan_seq_num + 1) & 0x7f;
+
+ hdr = (struct scan_hdr_tlv *)skb_put(skb, sizeof(*hdr));
+ hdr->seq_num = mvif->scan_seq_num | mvif->band_idx << 7;
+ hdr->bss_idx = mvif->idx;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_REQ, sizeof(*req));
+ req = (struct scan_req_tlv *)tlv;
+ req->scan_type = sreq->n_ssids ? 1 : 0;
+ req->probe_req_num = sreq->n_ssids ? 2 : 0;
+
+ duration = MT76_CONNAC_SCAN_CHANNEL_TIME;
+ /* increase channel time for passive scan */
+ if (!sreq->n_ssids)
+ duration *= 2;
+ req->timeout_value = cpu_to_le16(sreq->n_channels * duration);
+ req->channel_min_dwell_time = cpu_to_le16(duration);
+ req->channel_dwell_time = cpu_to_le16(duration);
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_SSID, sizeof(*ssid));
+ ssid = (struct scan_ssid_tlv *)tlv;
+ for (i = 0; i < sreq->n_ssids; i++) {
+ if (!sreq->ssids[i].ssid_len)
+ continue;
+
+ ssid->ssids[i].ssid_len = cpu_to_le32(sreq->ssids[i].ssid_len);
+ memcpy(ssid->ssids[i].ssid, sreq->ssids[i].ssid,
+ sreq->ssids[i].ssid_len);
+ n_ssids++;
+ }
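+	/* ssid_type: BIT(2) = specified SSIDs + wildcard, BIT(0) = wildcard only */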
+ ssid->ssid_type = n_ssids ? BIT(2) : BIT(0);
+ ssid->ssids_num = n_ssids;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_BSSID, sizeof(*bssid));
+ bssid = (struct scan_bssid_tlv *)tlv;
+
+ memcpy(bssid->bssid, sreq->bssid, ETH_ALEN);
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_CHANNEL, sizeof(*chan_info));
+ chan_info = (struct scan_chan_info_tlv *)tlv;
+ chan_info->channels_num = min_t(u8, sreq->n_channels,
+ ARRAY_SIZE(chan_info->channels));
+ for (i = 0; i < chan_info->channels_num; i++) {
+ chan = &chan_info->channels[i];
+
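+		/* firmware band encoding: 1 = 2.4 GHz, 2 = 5 GHz, 3 = 6 GHz */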
+ switch (scan_list[i]->band) {
+ case NL80211_BAND_2GHZ:
+ chan->band = 1;
+ break;
+ case NL80211_BAND_6GHZ:
+ chan->band = 3;
+ break;
+ default:
+ chan->band = 2;
+ break;
+ }
+ chan->channel_num = scan_list[i]->hw_value;
+ }
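+	/* channel_type: 4 = specified channels, 0 = full channel scan */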
+ chan_info->channel_type = sreq->n_channels ? 4 : 0;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_IE, sizeof(*ie));
+ ie = (struct scan_ie_tlv *)tlv;
+ if (sreq->ie_len > 0) {
+ memcpy(ie->ies, sreq->ie, sreq->ie_len);
+ ie->ies_len = cpu_to_le16(sreq->ie_len);
+ }
+
+ req->scan_func |= SCAN_FUNC_SPLIT_SCAN;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_MISC, sizeof(*misc));
+ misc = (struct scan_misc_tlv *)tlv;
+ if (sreq->flags & NL80211_SCAN_FLAG_RANDOM_ADDR) {
+ get_random_mask_addr(misc->random_mac, sreq->mac_addr,
+ sreq->mac_addr_mask);
+ req->scan_func |= SCAN_FUNC_RANDOM_MAC;
+ }
+
+ err = mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SCAN_REQ),
+ false);
+ if (err < 0)
+ clear_bit(MT76_HW_SCANNING, &phy->state);
+
+ return err;
+}
+EXPORT_SYMBOL_GPL(mt7925_mcu_hw_scan);
+
+int mt7925_mcu_sched_scan_req(struct mt76_phy *phy,
+ struct ieee80211_vif *vif,
+ struct cfg80211_sched_scan_request *sreq)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct ieee80211_channel **scan_list = sreq->channels;
+ struct mt76_connac_mcu_scan_channel *chan;
+ struct mt76_dev *mdev = phy->dev;
+ struct cfg80211_match_set *cfg_match;
+ struct cfg80211_ssid *cfg_ssid;
+
+ struct scan_hdr_tlv *hdr;
+ struct scan_sched_req *req;
+ struct scan_ssid_tlv *ssid;
+ struct scan_chan_info_tlv *chan_info;
+ struct scan_ie_tlv *ie;
+ struct scan_sched_ssid_match_sets *match;
+ struct sk_buff *skb;
+ struct tlv *tlv;
+ int i, max_len;
+
+ max_len = sizeof(*hdr) + sizeof(*req) + sizeof(*ssid) +
+ sizeof(*chan_info) + sizeof(*ie) +
+ sizeof(*match);
+
+ skb = mt76_mcu_msg_alloc(mdev, NULL, max_len);
+ if (!skb)
+ return -ENOMEM;
+
+ mvif->scan_seq_num = (mvif->scan_seq_num + 1) & 0x7f;
+
+ hdr = (struct scan_hdr_tlv *)skb_put(skb, sizeof(*hdr));
+ hdr->seq_num = mvif->scan_seq_num | mvif->band_idx << 7;
+ hdr->bss_idx = mvif->idx;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_SCHED_REQ, sizeof(*req));
+ req = (struct scan_sched_req *)tlv;
+ req->version = 1;
+
+ if (sreq->flags & NL80211_SCAN_FLAG_RANDOM_ADDR)
+ req->scan_func |= SCAN_FUNC_RANDOM_MAC;
+
+ req->intervals_num = sreq->n_scan_plans;
+ for (i = 0; i < req->intervals_num; i++)
+ req->intervals[i] = cpu_to_le16(sreq->scan_plans[i].interval);
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_SSID, sizeof(*ssid));
+ ssid = (struct scan_ssid_tlv *)tlv;
+
+ ssid->ssids_num = sreq->n_ssids;
+ ssid->ssid_type = BIT(2);
+ for (i = 0; i < ssid->ssids_num; i++) {
+ cfg_ssid = &sreq->ssids[i];
+ memcpy(ssid->ssids[i].ssid, cfg_ssid->ssid, cfg_ssid->ssid_len);
+ ssid->ssids[i].ssid_len = cpu_to_le32(cfg_ssid->ssid_len);
+ }
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_SSID_MATCH_SETS, sizeof(*match));
+ match = (struct scan_sched_ssid_match_sets *)tlv;
+ match->match_num = sreq->n_match_sets;
+ for (i = 0; i < match->match_num; i++) {
+ cfg_match = &sreq->match_sets[i];
+ memcpy(match->match[i].ssid, cfg_match->ssid.ssid,
+ cfg_match->ssid.ssid_len);
+ match->match[i].rssi_th = cpu_to_le32(cfg_match->rssi_thold);
+ match->match[i].ssid_len = cfg_match->ssid.ssid_len;
+ }
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_CHANNEL, sizeof(*chan_info));
+ chan_info = (struct scan_chan_info_tlv *)tlv;
+ chan_info->channels_num = min_t(u8, sreq->n_channels,
+ ARRAY_SIZE(chan_info->channels));
+ for (i = 0; i < chan_info->channels_num; i++) {
+ chan = &chan_info->channels[i];
+
+ switch (scan_list[i]->band) {
+ case NL80211_BAND_2GHZ:
+ chan->band = 1;
+ break;
+ case NL80211_BAND_6GHZ:
+ chan->band = 3;
+ break;
+ default:
+ chan->band = 2;
+ break;
+ }
+ chan->channel_num = scan_list[i]->hw_value;
+ }
+ chan_info->channel_type = sreq->n_channels ? 4 : 0;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_IE, sizeof(*ie));
+ ie = (struct scan_ie_tlv *)tlv;
+ if (sreq->ie_len > 0) {
+ memcpy(ie->ies, sreq->ie, sreq->ie_len);
+ ie->ies_len = cpu_to_le16(sreq->ie_len);
+ }
+
+ return mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SCAN_REQ),
+ false);
+}
+EXPORT_SYMBOL_GPL(mt7925_mcu_sched_scan_req);
+
+int
+mt7925_mcu_sched_scan_enable(struct mt76_phy *phy,
+ struct ieee80211_vif *vif,
+ bool enable)
+{
+ struct mt76_dev *mdev = phy->dev;
+ struct scan_sched_enable *req;
+ struct scan_hdr_tlv *hdr;
+ struct sk_buff *skb;
+ struct tlv *tlv;
+ int max_len;
+
+ max_len = sizeof(*hdr) + sizeof(*req);
+
+ skb = mt76_mcu_msg_alloc(mdev, NULL, max_len);
+ if (!skb)
+ return -ENOMEM;
+
+ hdr = (struct scan_hdr_tlv *)skb_put(skb, sizeof(*hdr));
+ hdr->seq_num = 0;
+ hdr->bss_idx = 0;
+
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_SCAN_SCHED_ENABLE, sizeof(*req));
+ req = (struct scan_sched_enable *)tlv;
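+	/* note: the firmware flag appears inverted relative to enable */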
+ req->active = !enable;
+
+ if (enable)
+ set_bit(MT76_HW_SCHED_SCANNING, &phy->state);
+ else
+ clear_bit(MT76_HW_SCHED_SCANNING, &phy->state);
+
+ return mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SCAN_REQ),
+ false);
+}
+
+int mt7925_mcu_cancel_hw_scan(struct mt76_phy *phy,
+ struct ieee80211_vif *vif)
+{
+ struct mt76_vif *mvif = (struct mt76_vif *)vif->drv_priv;
+ struct {
+ struct scan_hdr {
+ u8 seq_num;
+ u8 bss_idx;
+ u8 pad[2];
+ } __packed hdr;
+ struct scan_cancel_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 is_ext_channel;
+ u8 rsv[3];
+ } __packed cancel;
+ } req = {
+ .hdr = {
+ .seq_num = mvif->scan_seq_num,
+ .bss_idx = mvif->idx,
+ },
+ .cancel = {
+ .tag = cpu_to_le16(UNI_SCAN_CANCEL),
+ .len = cpu_to_le16(sizeof(struct scan_cancel_tlv)),
+ },
+ };
+
+ if (test_and_clear_bit(MT76_HW_SCANNING, &phy->state)) {
+ struct cfg80211_scan_info info = {
+ .aborted = true,
+ };
+
+ ieee80211_scan_completed(phy->hw, &info);
+ }
+
+ return mt76_mcu_send_msg(phy->dev, MCU_UNI_CMD(SCAN_REQ),
+ &req, sizeof(req), false);
+}
+EXPORT_SYMBOL_GPL(mt7925_mcu_cancel_hw_scan);
+
+int mt7925_mcu_set_channel_domain(struct mt76_phy *phy)
+{
+ int len, i, n_max_channels, n_2ch = 0, n_5ch = 0, n_6ch = 0;
+ struct {
+ struct {
+ u8 alpha2[4]; /* regulatory_request.alpha2 */
+ u8 bw_2g; /* BW_20_40M 0
+ * BW_20M 1
+ * BW_20_40_80M 2
+ * BW_20_40_80_160M 3
+ * BW_20_40_80_8080M 4
+ */
+ u8 bw_5g;
+ u8 bw_6g;
+ u8 pad;
+ } __packed hdr;
+ struct n_chan {
+ __le16 tag;
+ __le16 len;
+ u8 n_2ch;
+ u8 n_5ch;
+ u8 n_6ch;
+ u8 pad;
+ } __packed n_ch;
+ } req = {
+ .hdr = {
+ .bw_2g = 0,
+ .bw_5g = 3, /* BW_20_40_80_160M */
+ .bw_6g = 3,
+ },
+ .n_ch = {
+ .tag = cpu_to_le16(2),
+ },
+ };
+ struct mt76_connac_mcu_chan {
+ __le16 hw_value;
+ __le16 pad;
+ __le32 flags;
+ } __packed channel;
+ struct mt76_dev *dev = phy->dev;
+ struct ieee80211_channel *chan;
+ struct sk_buff *skb;
+
+ n_max_channels = phy->sband_2g.sband.n_channels +
+ phy->sband_5g.sband.n_channels +
+ phy->sband_6g.sband.n_channels;
+ len = sizeof(req) + n_max_channels * sizeof(channel);
+
+ skb = mt76_mcu_msg_alloc(dev, NULL, len);
+ if (!skb)
+ return -ENOMEM;
+
+ skb_reserve(skb, sizeof(req));
+
+ for (i = 0; i < phy->sband_2g.sband.n_channels; i++) {
+ chan = &phy->sband_2g.sband.channels[i];
+ if (chan->flags & IEEE80211_CHAN_DISABLED)
+ continue;
+
+ channel.hw_value = cpu_to_le16(chan->hw_value);
+ channel.flags = cpu_to_le32(chan->flags);
+ channel.pad = 0;
+
+ skb_put_data(skb, &channel, sizeof(channel));
+ n_2ch++;
+ }
+ for (i = 0; i < phy->sband_5g.sband.n_channels; i++) {
+ chan = &phy->sband_5g.sband.channels[i];
+ if (chan->flags & IEEE80211_CHAN_DISABLED)
+ continue;
+
+ channel.hw_value = cpu_to_le16(chan->hw_value);
+ channel.flags = cpu_to_le32(chan->flags);
+ channel.pad = 0;
+
+ skb_put_data(skb, &channel, sizeof(channel));
+ n_5ch++;
+ }
+ for (i = 0; i < phy->sband_6g.sband.n_channels; i++) {
+ chan = &phy->sband_6g.sband.channels[i];
+ if (chan->flags & IEEE80211_CHAN_DISABLED)
+ continue;
+
+ channel.hw_value = cpu_to_le16(chan->hw_value);
+ channel.flags = cpu_to_le32(chan->flags);
+ channel.pad = 0;
+
+ skb_put_data(skb, &channel, sizeof(channel));
+ n_6ch++;
+ }
+
+ BUILD_BUG_ON(sizeof(dev->alpha2) > sizeof(req.hdr.alpha2));
+ memcpy(req.hdr.alpha2, dev->alpha2, sizeof(dev->alpha2));
+ req.n_ch.n_2ch = n_2ch;
+ req.n_ch.n_5ch = n_5ch;
+ req.n_ch.n_6ch = n_6ch;
+ len = sizeof(struct n_chan) + (n_2ch + n_5ch + n_6ch) * sizeof(channel);
+ req.n_ch.len = cpu_to_le16(len);
+ memcpy(__skb_push(skb, sizeof(req)), &req, sizeof(req));
+
+ return mt76_mcu_skb_send_msg(dev, skb, MCU_UNI_CMD(SET_DOMAIN_INFO),
+ false);
+}
+EXPORT_SYMBOL_GPL(mt7925_mcu_set_channel_domain);
+
+static int
+__mt7925_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
+ enum environment_cap env_cap,
+ struct mt7925_clc *clc, u8 idx)
+{
+ struct mt7925_clc_segment *seg;
+ struct sk_buff *skb;
+ struct {
+ u8 rsv[4];
+ __le16 tag;
+ __le16 len;
+
+ u8 ver;
+ u8 pad0;
+ __le16 size;
+ u8 idx;
+ u8 env;
+ u8 acpi_conf;
+ u8 pad1;
+ u8 alpha2[2];
+ u8 type[2];
+ u8 rsvd[64];
+ } __packed req = {
+ .tag = cpu_to_le16(0x3),
+ .len = cpu_to_le16(sizeof(req) - 4),
+
+ .idx = idx,
+ .env = env_cap,
+ .acpi_conf = mt792x_acpi_get_flags(&dev->phy),
+ };
+ int ret, valid_cnt = 0;
+ u8 i, *pos;
+
+ if (!clc)
+ return 0;
+
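+	/* the per-country rules follow the segment table in the CLC blob */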
+ pos = clc->data + sizeof(*seg) * clc->nr_seg;
+ for (i = 0; i < clc->nr_country; i++) {
+ struct mt7925_clc_rule *rule = (struct mt7925_clc_rule *)pos;
+
+ pos += sizeof(*rule);
+ if (rule->alpha2[0] != alpha2[0] ||
+ rule->alpha2[1] != alpha2[1])
+ continue;
+
+ seg = (struct mt7925_clc_segment *)clc->data
+ + rule->seg_idx - 1;
+
+ memcpy(req.alpha2, rule->alpha2, 2);
+ memcpy(req.type, rule->type, 2);
+
+ req.size = cpu_to_le16(seg->len);
+ skb = __mt76_mcu_msg_alloc(&dev->mt76, &req,
+ le16_to_cpu(req.size) + sizeof(req),
+ sizeof(req), GFP_KERNEL);
+ if (!skb)
+ return -ENOMEM;
+ skb_put_data(skb, clc->data + seg->offset, seg->len);
+
+ ret = mt76_mcu_skb_send_msg(&dev->mt76, skb,
+ MCU_UNI_CMD(SET_POWER_LIMIT),
+ true);
+ if (ret < 0)
+ return ret;
+ valid_cnt++;
+ }
+
+ if (!valid_cnt)
+ return -ENOENT;
+
+ return 0;
+}
+
+int mt7925_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
+ enum environment_cap env_cap)
+{
+ struct mt792x_phy *phy = (struct mt792x_phy *)&dev->phy;
+ int i, ret;
+
+	/* submit all CLC configurations */
+ for (i = 0; i < ARRAY_SIZE(phy->clc); i++) {
+ ret = __mt7925_mcu_set_clc(dev, alpha2, env_cap,
+ phy->clc[i], i);
+
+		/* if no matching country is found, fall back to "00" as the default */
+ if (ret == -ENOENT)
+ ret = __mt7925_mcu_set_clc(dev, "00",
+ ENVIRON_INDOOR,
+ phy->clc[i], i);
+ if (ret < 0)
+ return ret;
+ }
+ return 0;
+}
+
+int mt7925_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ int cmd, int *wait_seq)
+{
+ int txd_len, mcu_cmd = FIELD_GET(__MCU_CMD_FIELD_ID, cmd);
+ struct mt76_connac2_mcu_uni_txd *uni_txd;
+ struct mt76_connac2_mcu_txd *mcu_txd;
+ __le32 *txd;
+ u32 val;
+ u8 seq;
+
+ /* TODO: make dynamic based on msg type */
+ mdev->mcu.timeout = 20 * HZ;
+
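+	/* allocate a non-zero 4-bit sequence number */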
+ seq = ++mdev->mcu.msg_seq & 0xf;
+ if (!seq)
+ seq = ++mdev->mcu.msg_seq & 0xf;
+
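+	/* FW_SCATTER payloads are sent without an MCU TXD header */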
+ if (cmd == MCU_CMD(FW_SCATTER))
+ goto exit;
+
+ txd_len = cmd & __MCU_CMD_FIELD_UNI ? sizeof(*uni_txd) : sizeof(*mcu_txd);
+ txd = (__le32 *)skb_push(skb, txd_len);
+
+ val = FIELD_PREP(MT_TXD0_TX_BYTES, skb->len) |
+ FIELD_PREP(MT_TXD0_PKT_FMT, MT_TX_TYPE_CMD) |
+ FIELD_PREP(MT_TXD0_Q_IDX, MT_TX_MCU_PORT_RX_Q0);
+ txd[0] = cpu_to_le32(val);
+
+ val = FIELD_PREP(MT_TXD1_HDR_FORMAT, MT_HDR_FORMAT_CMD);
+ txd[1] = cpu_to_le32(val);
+
+ if (cmd & __MCU_CMD_FIELD_UNI) {
+ uni_txd = (struct mt76_connac2_mcu_uni_txd *)txd;
+ uni_txd->len = cpu_to_le16(skb->len - sizeof(uni_txd->txd));
+ uni_txd->option = MCU_CMD_UNI_EXT_ACK;
+ uni_txd->cid = cpu_to_le16(mcu_cmd);
+ uni_txd->s2d_index = MCU_S2D_H2N;
+ uni_txd->pkt_type = MCU_PKT_ID;
+ uni_txd->seq = seq;
+
+ goto exit;
+ }
+
+ mcu_txd = (struct mt76_connac2_mcu_txd *)txd;
+ mcu_txd->len = cpu_to_le16(skb->len - sizeof(mcu_txd->txd));
+ mcu_txd->pq_id = cpu_to_le16(MCU_PQ_ID(MT_TX_PORT_IDX_MCU,
+ MT_TX_MCU_PORT_RX_Q0));
+ mcu_txd->pkt_type = MCU_PKT_ID;
+ mcu_txd->seq = seq;
+ mcu_txd->cid = mcu_cmd;
+ mcu_txd->ext_cid = FIELD_GET(__MCU_CMD_FIELD_EXT_ID, cmd);
+
+ if (mcu_txd->ext_cid || (cmd & __MCU_CMD_FIELD_CE)) {
+ if (cmd & __MCU_CMD_FIELD_QUERY)
+ mcu_txd->set_query = MCU_Q_QUERY;
+ else
+ mcu_txd->set_query = MCU_Q_SET;
+ mcu_txd->ext_cid_ack = !!mcu_txd->ext_cid;
+ } else {
+ mcu_txd->set_query = MCU_Q_NA;
+ }
+
+ if (cmd & __MCU_CMD_FIELD_WA)
+ mcu_txd->s2d_index = MCU_S2D_H2C;
+ else
+ mcu_txd->s2d_index = MCU_S2D_H2N;
+
+exit:
+ if (wait_seq)
+ *wait_seq = seq;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(mt7925_mcu_fill_message);
+
+int mt7925_mcu_set_rts_thresh(struct mt792x_phy *phy, u32 val)
+{
+ struct {
+ u8 band_idx;
+ u8 _rsv[3];
+
+ __le16 tag;
+ __le16 len;
+ __le32 len_thresh;
+ __le32 pkt_thresh;
+ } __packed req = {
+ .band_idx = phy->mt76->band_idx,
+ .tag = cpu_to_le16(UNI_BAND_CONFIG_RTS_THRESHOLD),
+ .len = cpu_to_le16(sizeof(req) - 4),
+ .len_thresh = cpu_to_le32(val),
+ .pkt_thresh = cpu_to_le32(0x2),
+ };
+
+ return mt76_mcu_send_msg(&phy->dev->mt76, MCU_UNI_CMD(BAND_CONFIG),
+ &req, sizeof(req), true);
+}
+
+int mt7925_mcu_set_radio_en(struct mt792x_phy *phy, bool enable)
+{
+ struct {
+ u8 band_idx;
+ u8 _rsv[3];
+
+ __le16 tag;
+ __le16 len;
+ u8 enable;
+ u8 _rsv2[3];
+ } __packed req = {
+ .band_idx = phy->mt76->band_idx,
+ .tag = cpu_to_le16(UNI_BAND_CONFIG_RADIO_ENABLE),
+ .len = cpu_to_le16(sizeof(req) - 4),
+ .enable = enable,
+ };
+
+ return mt76_mcu_send_msg(&phy->dev->mt76, MCU_UNI_CMD(BAND_CONFIG),
+ &req, sizeof(req), true);
+}
+
+static void
+mt7925_mcu_build_sku(struct mt76_dev *dev, s8 *sku,
+ struct mt76_power_limits *limits,
+ enum nl80211_band band)
+{
+ int i, offset = sizeof(limits->cck);
+
+ memset(sku, 127, MT_CONNAC3_SKU_POWER_LIMIT);
+
+ if (band == NL80211_BAND_2GHZ) {
+ /* cck */
+ memcpy(sku, limits->cck, sizeof(limits->cck));
+ }
+
+ /* ofdm */
+ memcpy(&sku[offset], limits->ofdm, sizeof(limits->ofdm));
+ offset += (sizeof(limits->ofdm) * 5);
+
+ /* ht */
+ for (i = 0; i < 2; i++) {
+ memcpy(&sku[offset], limits->mcs[i], 8);
+ offset += 8;
+ }
+ sku[offset++] = limits->mcs[0][0];
+
+ /* vht */
+ for (i = 0; i < ARRAY_SIZE(limits->mcs); i++) {
+ memcpy(&sku[offset], limits->mcs[i],
+ ARRAY_SIZE(limits->mcs[i]));
+ offset += 12;
+ }
+
+ /* he */
+ for (i = 0; i < ARRAY_SIZE(limits->ru); i++) {
+ memcpy(&sku[offset], limits->ru[i], ARRAY_SIZE(limits->ru[i]));
+ offset += ARRAY_SIZE(limits->ru[i]);
+ }
+
+ /* eht */
+ for (i = 0; i < ARRAY_SIZE(limits->eht); i++) {
+ memcpy(&sku[offset], limits->eht[i], ARRAY_SIZE(limits->eht[i]));
+ offset += ARRAY_SIZE(limits->eht[i]);
+ }
+}
+
+static int
+mt7925_mcu_rate_txpower_band(struct mt76_phy *phy,
+ enum nl80211_band band)
+{
+ int tx_power, n_chan, last_ch, err = 0, idx = 0;
+ int i, sku_len, batch_size, batch_len = 3;
+ struct mt76_dev *dev = phy->dev;
+ static const u8 chan_list_2ghz[] = {
+ 1, 2, 3, 4, 5, 6, 7,
+ 8, 9, 10, 11, 12, 13, 14
+ };
+ static const u8 chan_list_5ghz[] = {
+ 36, 38, 40, 42, 44, 46, 48,
+ 50, 52, 54, 56, 58, 60, 62,
+ 64, 100, 102, 104, 106, 108, 110,
+ 112, 114, 116, 118, 120, 122, 124,
+ 126, 128, 132, 134, 136, 138, 140,
+ 142, 144, 149, 151, 153, 155, 157,
+ 159, 161, 165, 167
+ };
+ static const u8 chan_list_6ghz[] = {
+ 1, 3, 5, 7, 9, 11, 13,
+ 15, 17, 19, 21, 23, 25, 27,
+ 29, 33, 35, 37, 39, 41, 43,
+ 45, 47, 49, 51, 53, 55, 57,
+ 59, 61, 65, 67, 69, 71, 73,
+ 75, 77, 79, 81, 83, 85, 87,
+ 89, 91, 93, 97, 99, 101, 103,
+ 105, 107, 109, 111, 113, 115, 117,
+ 119, 121, 123, 125, 129, 131, 133,
+ 135, 137, 139, 141, 143, 145, 147,
+ 149, 151, 153, 155, 157, 161, 163,
+ 165, 167, 169, 171, 173, 175, 177,
+ 179, 181, 183, 185, 187, 189, 193,
+ 195, 197, 199, 201, 203, 205, 207,
+ 209, 211, 213, 215, 217, 219, 221,
+ 225, 227, 229, 233
+ };
+ struct mt76_power_limits *limits;
+ struct mt7925_sku_tlv *sku_tlbv;
+ const u8 *ch_list;
+
+ sku_len = sizeof(*sku_tlbv);
+ tx_power = 2 * phy->hw->conf.power_level;
+ if (!tx_power)
+ tx_power = 127;
+
+ if (band == NL80211_BAND_2GHZ) {
+ n_chan = ARRAY_SIZE(chan_list_2ghz);
+ ch_list = chan_list_2ghz;
+ last_ch = chan_list_2ghz[ARRAY_SIZE(chan_list_2ghz) - 1];
+ } else if (band == NL80211_BAND_6GHZ) {
+ n_chan = ARRAY_SIZE(chan_list_6ghz);
+ ch_list = chan_list_6ghz;
+ last_ch = chan_list_6ghz[ARRAY_SIZE(chan_list_6ghz) - 1];
+ } else {
+ n_chan = ARRAY_SIZE(chan_list_5ghz);
+ ch_list = chan_list_5ghz;
+ last_ch = chan_list_5ghz[ARRAY_SIZE(chan_list_5ghz) - 1];
+ }
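+	/* program the channel list in batches of at most batch_len channels */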
+ batch_size = DIV_ROUND_UP(n_chan, batch_len);
+
+ limits = devm_kmalloc(dev->dev, sizeof(*limits), GFP_KERNEL);
+ if (!limits)
+ return -ENOMEM;
+
+ sku_tlbv = devm_kmalloc(dev->dev, sku_len, GFP_KERNEL);
+ if (!sku_tlbv) {
+ devm_kfree(dev->dev, limits);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < batch_size; i++) {
+ struct mt7925_tx_power_limit_tlv *tx_power_tlv;
+ int j, msg_len, num_ch;
+ struct sk_buff *skb;
+
+ num_ch = i == batch_size - 1 ? n_chan % batch_len : batch_len;
+ msg_len = sizeof(*tx_power_tlv) + num_ch * sku_len;
+ skb = mt76_mcu_msg_alloc(dev, NULL, msg_len);
+ if (!skb) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ tx_power_tlv = (struct mt7925_tx_power_limit_tlv *)
+ skb_put(skb, sizeof(*tx_power_tlv));
+
+ BUILD_BUG_ON(sizeof(dev->alpha2) > sizeof(tx_power_tlv->alpha2));
+ memcpy(tx_power_tlv->alpha2, dev->alpha2, sizeof(dev->alpha2));
+ tx_power_tlv->n_chan = num_ch;
+ tx_power_tlv->tag = cpu_to_le16(0x1);
+ tx_power_tlv->len = cpu_to_le16(sizeof(*tx_power_tlv));
+
+ switch (band) {
+ case NL80211_BAND_2GHZ:
+ tx_power_tlv->band = 1;
+ break;
+ case NL80211_BAND_6GHZ:
+ tx_power_tlv->band = 3;
+ break;
+ default:
+ tx_power_tlv->band = 2;
+ break;
+ }
+
+ for (j = 0; j < num_ch; j++, idx++) {
+ struct ieee80211_channel chan = {
+ .hw_value = ch_list[idx],
+ .band = band,
+ };
+ s8 reg_power, sar_power;
+
+ reg_power = mt76_connac_get_ch_power(phy, &chan,
+ tx_power);
+ sar_power = mt76_get_sar_power(phy, &chan, reg_power);
+
+ mt76_get_rate_power_limits(phy, &chan, limits,
+ sar_power);
+
+ tx_power_tlv->last_msg = ch_list[idx] == last_ch;
+ sku_tlbv->channel = ch_list[idx];
+
+ mt7925_mcu_build_sku(dev, sku_tlbv->pwr_limit,
+ limits, band);
+ skb_put_data(skb, sku_tlbv, sku_len);
+ }
+ err = mt76_mcu_skb_send_msg(dev, skb,
+ MCU_UNI_CMD(SET_POWER_LIMIT),
+ true);
+ if (err < 0)
+ goto out;
+ }
+
+out:
+ devm_kfree(dev->dev, sku_tlbv);
+ devm_kfree(dev->dev, limits);
+ return err;
+}
+
+int mt7925_mcu_set_rate_txpower(struct mt76_phy *phy)
+{
+ int err;
+
+ if (phy->cap.has_2ghz) {
+ err = mt7925_mcu_rate_txpower_band(phy,
+ NL80211_BAND_2GHZ);
+ if (err < 0)
+ return err;
+ }
+
+ if (phy->cap.has_5ghz) {
+ err = mt7925_mcu_rate_txpower_band(phy,
+ NL80211_BAND_5GHZ);
+ if (err < 0)
+ return err;
+ }
+
+ if (phy->cap.has_6ghz) {
+ err = mt7925_mcu_rate_txpower_band(phy,
+ NL80211_BAND_6GHZ);
+ if (err < 0)
+ return err;
+ }
+
+ return 0;
+}
+
+int mt7925_mcu_set_rxfilter(struct mt792x_dev *dev, u32 fif,
+ u8 bit_op, u32 bit_map)
+{
+ struct mt792x_phy *phy = &dev->phy;
+ struct {
+ u8 band_idx;
+ u8 rsv1[3];
+
+ __le16 tag;
+ __le16 len;
+ u8 mode;
+ u8 rsv2[3];
+ __le32 fif;
+ __le32 bit_map; /* bit_* for bitmap update */
+ u8 bit_op;
+ u8 pad[51];
+ } __packed req = {
+ .band_idx = phy->mt76->band_idx,
+ .tag = cpu_to_le16(UNI_BAND_CONFIG_SET_MAC80211_RX_FILTER),
+ .len = cpu_to_le16(sizeof(req) - 4),
+
+ .mode = fif ? 0 : 1,
+ .fif = cpu_to_le32(fif),
+ .bit_map = cpu_to_le32(bit_map),
+ .bit_op = bit_op,
+ };
+
+ return mt76_mcu_send_msg(&phy->dev->mt76, MCU_UNI_CMD(BAND_CONFIG),
+ &req, sizeof(req), true);
+}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
new file mode 100644
index 000000000000..3c41e21303b1
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
@@ -0,0 +1,537 @@
+/* SPDX-License-Identifier: ISC */
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#ifndef __MT7925_MCU_H
+#define __MT7925_MCU_H
+
+#include "../mt76_connac_mcu.h"
+
+/* ext event table */
+enum {
+ MCU_EXT_EVENT_RATE_REPORT = 0x87,
+};
+
+struct mt7925_mcu_eeprom_info {
+ __le32 addr;
+ __le32 valid;
+ u8 data[MT7925_EEPROM_BLOCK_SIZE];
+} __packed;
+
+#define MT_RA_RATE_NSS GENMASK(8, 6)
+#define MT_RA_RATE_MCS GENMASK(3, 0)
+#define MT_RA_RATE_TX_MODE GENMASK(12, 9)
+#define MT_RA_RATE_DCM_EN BIT(4)
+#define MT_RA_RATE_BW GENMASK(14, 13)
+
+struct mt7925_mcu_rxd {
+ __le32 rxd[8];
+
+ __le16 len;
+ __le16 pkt_type_id;
+
+ u8 eid;
+ u8 seq;
+ u8 option;
+ u8 __rsv;
+
+ u8 ext_eid;
+ u8 __rsv1[2];
+ u8 s2d_index;
+
+ u8 tlv[];
+};
+
+struct mt7925_mcu_uni_event {
+ u8 cid;
+ u8 pad[3];
+ __le32 status; /* 0: success, others: fail */
+} __packed;
+
+enum {
+ MT_EBF = BIT(0), /* explicit beamforming */
+ MT_IBF = BIT(1) /* implicit beamforming */
+};
+
+struct mt7925_mcu_reg_event {
+ __le32 reg;
+ __le32 val;
+} __packed;
+
+struct mt7925_mcu_ant_id_config {
+ u8 ant_id[4];
+} __packed;
+
+struct mt7925_txpwr_req {
+ u8 _rsv[4];
+ __le16 tag;
+ __le16 len;
+
+ u8 format_id;
+ u8 catg;
+ u8 band_idx;
+ u8 _rsv1;
+} __packed;
+
+struct mt7925_txpwr_event {
+ u8 rsv[4];
+ __le16 tag;
+ __le16 len;
+
+ u8 catg;
+ u8 band_idx;
+ u8 ch_band;
+ u8 format; /* 0:Legacy, 1:HE */
+
+ /* Rate power info */
+ struct mt7925_txpwr txpwr;
+
+ s8 pwr_max;
+ s8 pwr_min;
+ u8 rsv1;
+} __packed;
+
+enum {
+ TM_SWITCH_MODE,
+ TM_SET_AT_CMD,
+ TM_QUERY_AT_CMD,
+};
+
+enum {
+ MT7925_TM_NORMAL,
+ MT7925_TM_TESTMODE,
+ MT7925_TM_ICAP,
+ MT7925_TM_ICAP_OVERLAP,
+ MT7925_TM_WIFISPECTRUM,
+};
+
+struct mt7925_rftest_cmd {
+ u8 action;
+ u8 rsv[3];
+ __le32 param0;
+ __le32 param1;
+} __packed;
+
+struct mt7925_rftest_evt {
+ __le32 param0;
+ __le32 param1;
+} __packed;
+
+enum {
+ UNI_CHANNEL_SWITCH,
+ UNI_CHANNEL_RX_PATH,
+};
+
+enum {
+ UNI_CHIP_CONFIG_CHIP_CFG = 0x2,
+ UNI_CHIP_CONFIG_NIC_CAPA = 0x3,
+};
+
+enum {
+ UNI_BAND_CONFIG_RADIO_ENABLE,
+ UNI_BAND_CONFIG_RTS_THRESHOLD = 0x08,
+ UNI_BAND_CONFIG_SET_MAC80211_RX_FILTER = 0x0C,
+};
+
+enum {
+ UNI_WSYS_CONFIG_FW_LOG_CTRL,
+ UNI_WSYS_CONFIG_FW_DBG_CTRL,
+};
+
+enum {
+ UNI_EFUSE_ACCESS = 1,
+ UNI_EFUSE_BUFFER_MODE,
+ UNI_EFUSE_FREE_BLOCK,
+ UNI_EFUSE_BUFFER_RD,
+};
+
+enum {
+ UNI_CMD_ACCESS_REG_BASIC = 0x0,
+ UNI_CMD_ACCESS_RF_REG_BASIC,
+};
+
+enum {
+ UNI_MBMC_SETTING,
+};
+
+enum {
+ UNI_EVENT_SCAN_DONE_BASIC = 0,
+ UNI_EVENT_SCAN_DONE_CHNLINFO = 2,
+ UNI_EVENT_SCAN_DONE_NLO = 3,
+};
+
+struct mt7925_mcu_scan_chinfo_event {
+ u8 nr_chan;
+ u8 alpha2[3];
+} __packed;
+
+enum {
+ UNI_SCAN_REQ = 1,
+ UNI_SCAN_CANCEL = 2,
+ UNI_SCAN_SCHED_REQ = 3,
+ UNI_SCAN_SCHED_ENABLE = 4,
+ UNI_SCAN_SSID = 10,
+ UNI_SCAN_BSSID,
+ UNI_SCAN_CHANNEL,
+ UNI_SCAN_IE,
+ UNI_SCAN_MISC,
+ UNI_SCAN_SSID_MATCH_SETS,
+};
+
+enum {
+ UNI_SNIFFER_ENABLE,
+ UNI_SNIFFER_CONFIG,
+};
+
+struct scan_hdr_tlv {
+ /* fixed field */
+ u8 seq_num;
+ u8 bss_idx;
+ u8 pad[2];
+ /* tlv */
+ u8 data[];
+} __packed;
+
+struct scan_req_tlv {
+ __le16 tag;
+ __le16 len;
+
+ u8 scan_type; /* 0: PASSIVE SCAN
+ * 1: ACTIVE SCAN
+ */
+ u8 probe_req_num; /* Number of probe request for each SSID */
+ u8 scan_func; /* BIT(0) Enable random MAC scan
+ * BIT(1) Disable DBDC scan type 1~3.
+			   * BIT(2) Use DBDC scan type 3 (dedicate one RF to scanning).
+ */
+ u8 src_mask;
+ __le16 channel_min_dwell_time;
+	__le16 channel_dwell_time; /* channel dwell interval */
+ __le16 timeout_value;
+ __le16 probe_delay_time;
+ u8 func_mask_ext;
+};
+
+struct scan_ssid_tlv {
+ __le16 tag;
+ __le16 len;
+
+ u8 ssid_type; /* BIT(0) wildcard SSID
+ * BIT(1) P2P wildcard SSID
+ * BIT(2) specified SSID + wildcard SSID
+ * BIT(2) + ssid_type_ext BIT(0) specified SSID only
+ */
+ u8 ssids_num;
+ u8 pad[2];
+ struct mt76_connac_mcu_scan_ssid ssids[4];
+};
+
+struct scan_bssid_tlv {
+ __le16 tag;
+ __le16 len;
+
+ u8 bssid[ETH_ALEN];
+ u8 match_ch;
+ u8 match_ssid_ind;
+ u8 rcpi;
+ u8 pad[3];
+};
+
+struct scan_chan_info_tlv {
+ __le16 tag;
+ __le16 len;
+
+ u8 channel_type; /* 0: Full channels
+ * 1: Only 2.4GHz channels
+ * 2: Only 5GHz channels
+ * 3: P2P social channel only (channel #1, #6 and #11)
+ * 4: Specified channels
+ * Others: Reserved
+ */
+ u8 channels_num; /* valid when channel_type is 4 */
+ u8 pad[2];
+ struct mt76_connac_mcu_scan_channel channels[64];
+};
+
+struct scan_ie_tlv {
+ __le16 tag;
+ __le16 len;
+
+ __le16 ies_len;
+ u8 band;
+ u8 pad;
+ u8 ies[MT76_CONNAC_SCAN_IE_LEN];
+};
+
+struct scan_misc_tlv {
+ __le16 tag;
+ __le16 len;
+
+ u8 random_mac[ETH_ALEN];
+ u8 rsv[2];
+};
+
+struct scan_sched_req {
+ __le16 tag;
+ __le16 len;
+
+ u8 version;
+ u8 stop_on_match;
+ u8 intervals_num;
+ u8 scan_func;
+ __le16 intervals[MT76_CONNAC_MAX_NUM_SCHED_SCAN_INTERVAL];
+};
+
+struct scan_sched_ssid_match_sets {
+ __le16 tag;
+ __le16 len;
+
+ u8 match_num;
+ u8 rsv[3];
+
+ struct mt76_connac_mcu_scan_match match[MT76_CONNAC_MAX_SCAN_MATCH];
+};
+
+struct scan_sched_enable {
+ __le16 tag;
+ __le16 len;
+
+ u8 active;
+ u8 rsv[3];
+};
+
+struct mbmc_set_req {
+ u8 pad[4];
+ u8 data[];
+} __packed;
+
+struct mbmc_conf_tlv {
+ __le16 tag;
+ __le16 len;
+
+ u8 mbmc_en;
+ u8 band;
+ u8 pad[2];
+} __packed;
+
+struct edca {
+ __le16 tag;
+ __le16 len;
+
+ u8 queue;
+ u8 set;
+ u8 cw_min;
+ u8 cw_max;
+ __le16 txop;
+ u8 aifs;
+ u8 __rsv;
+};
+
+struct bss_req_hdr {
+ u8 bss_idx;
+ u8 __rsv[3];
+} __packed;
+
+struct bss_rate_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 __rsv1[4];
+ __le16 bc_trans;
+ __le16 mc_trans;
+ u8 short_preamble;
+ u8 bc_fixed_rate;
+ u8 mc_fixed_rate;
+ u8 __rsv2;
+} __packed;
+
+struct bss_mld_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 group_mld_id;
+ u8 own_mld_id;
+ u8 mac_addr[ETH_ALEN];
+ u8 remap_idx;
+ u8 link_id;
+ u8 __rsv[2];
+} __packed;
+
+struct sta_rec_ba_uni {
+ __le16 tag;
+ __le16 len;
+ u8 tid;
+ u8 ba_type;
+ u8 amsdu;
+ u8 ba_en;
+ __le16 ssn;
+ __le16 winsize;
+ u8 ba_rdd_rro;
+ u8 __rsv[3];
+} __packed;
+
+struct sta_rec_eht {
+ __le16 tag;
+ __le16 len;
+ u8 tid_bitmap;
+ u8 _rsv;
+ __le16 mac_cap;
+ __le64 phy_cap;
+ __le64 phy_cap_ext;
+ u8 mcs_map_bw20[4];
+ u8 mcs_map_bw80[3];
+ u8 mcs_map_bw160[3];
+ u8 mcs_map_bw320[3];
+ u8 _rsv2[3];
+} __packed;
+
+struct sec_key_uni {
+ __le16 wlan_idx;
+ u8 mgmt_prot;
+ u8 cipher_id;
+ u8 cipher_len;
+ u8 key_id;
+ u8 key_len;
+ u8 need_resp;
+ u8 key[32];
+} __packed;
+
+struct sta_rec_sec_uni {
+ __le16 tag;
+ __le16 len;
+ u8 add;
+ u8 n_cipher;
+ u8 rsv[2];
+
+ struct sec_key_uni key[2];
+} __packed;
+
+struct sta_rec_hdr_trans {
+ __le16 tag;
+ __le16 len;
+ u8 from_ds;
+ u8 to_ds;
+ u8 dis_rx_hdr_tran;
+ u8 rsv;
+} __packed;
+
+struct sta_rec_mld {
+ __le16 tag;
+ __le16 len;
+ u8 mac_addr[ETH_ALEN];
+ __le16 primary_id;
+ __le16 secondary_id;
+ __le16 wlan_id;
+ u8 link_num;
+ u8 rsv[3];
+ struct {
+ __le16 wlan_id;
+ u8 bss_idx;
+ u8 rsv;
+ } __packed link[2];
+} __packed;
+
+#define MT7925_STA_UPDATE_MAX_SIZE (sizeof(struct sta_req_hdr) + \
+ sizeof(struct sta_rec_basic) + \
+ sizeof(struct sta_rec_bf) + \
+ sizeof(struct sta_rec_ht) + \
+ sizeof(struct sta_rec_he_v2) + \
+ sizeof(struct sta_rec_ba_uni) + \
+ sizeof(struct sta_rec_vht) + \
+ sizeof(struct sta_rec_uapsd) + \
+ sizeof(struct sta_rec_amsdu) + \
+ sizeof(struct sta_rec_bfee) + \
+ sizeof(struct sta_rec_phy) + \
+ sizeof(struct sta_rec_ra) + \
+ sizeof(struct sta_rec_sec) + \
+ sizeof(struct sta_rec_ra_fixed) + \
+ sizeof(struct sta_rec_he_6g_capa) + \
+ sizeof(struct sta_rec_eht) + \
+ sizeof(struct sta_rec_hdr_trans) + \
+ sizeof(struct sta_rec_mld) + \
+ sizeof(struct tlv))
+
+#define MT7925_BSS_UPDATE_MAX_SIZE (sizeof(struct bss_req_hdr) + \
+ sizeof(struct mt76_connac_bss_basic_tlv) + \
+ sizeof(struct mt76_connac_bss_qos_tlv) + \
+ sizeof(struct bss_rate_tlv) + \
+ sizeof(struct bss_mld_tlv) + \
+ sizeof(struct bss_info_uni_he) + \
+ sizeof(struct bss_info_uni_bss_color) + \
+ sizeof(struct tlv))
+
+#define MT_CONNAC3_SKU_POWER_LIMIT 449
+struct mt7925_sku_tlv {
+ u8 channel;
+ s8 pwr_limit[MT_CONNAC3_SKU_POWER_LIMIT];
+} __packed;
+
+struct mt7925_tx_power_limit_tlv {
+ u8 rsv[4];
+
+ __le16 tag;
+ __le16 len;
+
+	/* DW0 - common info */
+ u8 ver;
+ u8 pad0;
+ __le16 rsv1;
+ /* DW1 - cmd hint */
+ u8 n_chan; /* # channel */
+	u8 band; /* 1: 2.4GHz, 2: 5GHz, 3: 6GHz */
+ u8 last_msg;
+ u8 limit_type;
+ /* DW3 */
+ u8 alpha2[4]; /* regulatory_request.alpha2 */
+ u8 pad2[32];
+
+ u8 data[];
+} __packed;
+
+struct mt7925_arpns_tlv {
+ __le16 tag;
+ __le16 len;
+
+ u8 enable;
+ u8 ips_num;
+ u8 rsv[2];
+} __packed;
+
+struct mt7925_wow_pattern_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 bss_idx;
+ u8 index; /* pattern index */
+ u8 enable; /* 0: disable
+ * 1: enable
+ */
+ u8 data_len; /* pattern length */
+ u8 offset;
+ u8 mask[MT76_CONNAC_WOW_MASK_MAX_LEN];
+ u8 pattern[MT76_CONNAC_WOW_PATTEN_MAX_LEN];
+ u8 rsv[4];
+} __packed;
+
+int mt7925_mcu_set_dbdc(struct mt76_phy *phy);
+int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *scan_req);
+int mt7925_mcu_cancel_hw_scan(struct mt76_phy *phy,
+ struct ieee80211_vif *vif);
+int mt7925_mcu_sched_scan_req(struct mt76_phy *phy,
+ struct ieee80211_vif *vif,
+ struct cfg80211_sched_scan_request *sreq);
+int mt7925_mcu_sched_scan_enable(struct mt76_phy *phy,
+ struct ieee80211_vif *vif,
+ bool enable);
+int mt7925_mcu_add_bss_info(struct mt792x_phy *phy,
+ struct ieee80211_chanctx_conf *ctx,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ int enable);
+int mt7925_mcu_set_deep_sleep(struct mt792x_dev *dev, bool enable);
+int mt7925_mcu_set_channel_domain(struct mt76_phy *phy);
+int mt7925_mcu_set_radio_en(struct mt792x_phy *phy, bool enable);
+int mt7925_mcu_set_chctx(struct mt76_phy *phy, struct mt76_vif *mvif,
+ struct ieee80211_chanctx_conf *ctx);
+int mt7925_mcu_set_rate_txpower(struct mt76_phy *phy);
+int mt7925_mcu_update_arp_filter(struct mt76_dev *dev,
+ struct mt76_vif *vif,
+ struct ieee80211_bss_conf *info);
+#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
new file mode 100644
index 000000000000..33785f526acf
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
@@ -0,0 +1,309 @@
+/* SPDX-License-Identifier: ISC */
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#ifndef __MT7925_H
+#define __MT7925_H
+
+#include "../mt792x.h"
+#include "regs.h"
+
+#define MT7925_BEACON_RATES_TBL 25
+
+#define MT7925_TX_RING_SIZE 2048
+#define MT7925_TX_MCU_RING_SIZE 256
+#define MT7925_TX_FWDL_RING_SIZE 128
+
+#define MT7925_RX_RING_SIZE 1536
+#define MT7925_RX_MCU_RING_SIZE 512
+
+#define MT7925_EEPROM_SIZE 3584
+#define MT7925_TOKEN_SIZE 8192
+
+#define MT7925_EEPROM_BLOCK_SIZE 16
+
+#define MT7925_SKU_RATE_NUM 161
+#define MT7925_SKU_MAX_DELTA_IDX MT7925_SKU_RATE_NUM
+#define MT7925_SKU_TABLE_SIZE (MT7925_SKU_RATE_NUM + 1)
+
+#define MCU_UNI_EVENT_ROC 0x27
+
+enum {
+ UNI_ROC_ACQUIRE,
+ UNI_ROC_ABORT,
+ UNI_ROC_NUM
+};
+
+enum mt7925_roc_req {
+ MT7925_ROC_REQ_JOIN,
+ MT7925_ROC_REQ_ROC,
+ MT7925_ROC_REQ_NUM
+};
+
+enum {
+ UNI_EVENT_ROC_GRANT = 0,
+ UNI_EVENT_ROC_TAG_NUM
+};
+
+struct mt7925_roc_grant_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 bss_idx;
+ u8 tokenid;
+ u8 status;
+ u8 primarychannel;
+ u8 rfsco;
+ u8 rfband;
+ u8 channelwidth;
+ u8 centerfreqseg1;
+ u8 centerfreqseg2;
+ u8 reqtype;
+ u8 dbdcband;
+ u8 rsv[1];
+ __le32 max_interval;
+} __packed;
+
+struct mt7925_beacon_loss_tlv {
+ __le16 tag;
+ __le16 len;
+ u8 reason;
+ u8 nr_btolink;
+ u8 pad[2];
+} __packed;
+
+struct mt7925_uni_beacon_loss_event {
+ struct {
+ u8 bss_idx;
+ u8 pad[3];
+ } __packed hdr;
+ struct mt7925_beacon_loss_tlv beacon_loss;
+} __packed;
+
+#define to_rssi(field, rxv) ((FIELD_GET(field, rxv) - 220) / 2)
+#define to_rcpi(rssi) (2 * (rssi) + 220)
+
+enum mt7925_txq_id {
+ MT7925_TXQ_BAND0,
+ MT7925_TXQ_BAND1,
+ MT7925_TXQ_MCU_WM = 15,
+ MT7925_TXQ_FWDL,
+};
+
+enum mt7925_rxq_id {
+ MT7925_RXQ_BAND0 = 2,
+ MT7925_RXQ_BAND1,
+ MT7925_RXQ_MCU_WM = 0,
+ MT7925_RXQ_MCU_WM2, /* for tx done */
+};
+
+enum {
+ MODE_OPEN = 0,
+ MODE_SHARED = 1,
+ MODE_WPA = 3,
+ MODE_WPA_PSK = 4,
+ MODE_WPA_NONE = 5,
+ MODE_WPA2 = 6,
+ MODE_WPA2_PSK = 7,
+ MODE_WPA3_SAE = 11,
+};
+
+enum {
+ MT7925_CLC_POWER,
+ MT7925_CLC_CHAN,
+ MT7925_CLC_MAX_NUM,
+};
+
+struct mt7925_clc_rule {
+ u8 alpha2[2];
+ u8 type[2];
+ u8 seg_idx;
+ u8 rsv[3];
+} __packed;
+
+struct mt7925_clc_segment {
+ u8 idx;
+ u8 rsv1[3];
+ u32 offset;
+ u32 len;
+ u8 rsv2[4];
+} __packed;
+
+struct mt7925_clc {
+ __le32 len;
+ u8 idx;
+ u8 ver;
+ u8 nr_country;
+ u8 type;
+ u8 nr_seg;
+ u8 rsv[7];
+ u8 data[];
+} __packed;
+
+enum mt7925_eeprom_field {
+ MT_EE_CHIP_ID = 0x000,
+ MT_EE_VERSION = 0x002,
+ MT_EE_MAC_ADDR = 0x004,
+ __MT_EE_MAX = 0x9ff
+};
+
+enum {
+ TXPWR_USER,
+ TXPWR_EEPROM,
+ TXPWR_MAC,
+ TXPWR_MAX_NUM,
+};
+
+struct mt7925_txpwr {
+ s8 cck[4][2];
+ s8 ofdm[8][2];
+ s8 ht20[8][2];
+ s8 ht40[9][2];
+ s8 vht20[12][2];
+ s8 vht40[12][2];
+ s8 vht80[12][2];
+ s8 vht160[12][2];
+ s8 he26[12][2];
+ s8 he52[12][2];
+ s8 he106[12][2];
+ s8 he242[12][2];
+ s8 he484[12][2];
+ s8 he996[12][2];
+ s8 he996x2[12][2];
+ s8 eht26[16][2];
+ s8 eht52[16][2];
+ s8 eht106[16][2];
+ s8 eht242[16][2];
+ s8 eht484[16][2];
+ s8 eht996[16][2];
+ s8 eht996x2[16][2];
+ s8 eht996x4[16][2];
+ s8 eht26_52[16][2];
+ s8 eht26_106[16][2];
+ s8 eht484_242[16][2];
+ s8 eht996_484[16][2];
+ s8 eht996_484_242[16][2];
+ s8 eht996x2_484[16][2];
+ s8 eht996x3[16][2];
+ s8 eht996x3_484[16][2];
+};
+
+extern const struct ieee80211_ops mt7925_ops;
+
+int __mt7925_start(struct mt792x_phy *phy);
+int mt7925_register_device(struct mt792x_dev *dev);
+void mt7925_unregister_device(struct mt792x_dev *dev);
+int mt7925_run_firmware(struct mt792x_dev *dev);
+int mt7925_mcu_set_bss_pm(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ bool enable);
+int mt7925_mcu_sta_update(struct mt792x_dev *dev, struct ieee80211_sta *sta,
+ struct ieee80211_vif *vif, bool enable,
+ enum mt76_sta_info_state state);
+int mt7925_mcu_set_chan_info(struct mt792x_phy *phy, u16 tag);
+int mt7925_mcu_set_tx(struct mt792x_dev *dev, struct ieee80211_vif *vif);
+int mt7925_mcu_set_eeprom(struct mt792x_dev *dev);
+int mt7925_mcu_get_rx_rate(struct mt792x_phy *phy, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, struct rate_info *rate);
+int mt7925_mcu_fw_log_2_host(struct mt792x_dev *dev, u8 ctrl);
+void mt7925_mcu_rx_event(struct mt792x_dev *dev, struct sk_buff *skb);
+int mt7925_mcu_chip_config(struct mt792x_dev *dev, const char *cmd);
+int mt7925_mcu_set_rxfilter(struct mt792x_dev *dev, u32 fif,
+ u8 bit_op, u32 bit_map);
+
+int mt7925_mac_init(struct mt792x_dev *dev);
+int mt7925_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+bool mt7925_mac_wtbl_update(struct mt792x_dev *dev, int idx, u32 mask);
+void mt7925_mac_sta_assoc(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+void mt7925_mac_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+void mt7925_mac_reset_work(struct work_struct *work);
+int mt7925e_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ enum mt76_txq_id qid, struct mt76_wcid *wcid,
+ struct ieee80211_sta *sta,
+ struct mt76_tx_info *tx_info);
+
+void mt7925_tx_token_put(struct mt792x_dev *dev);
+bool mt7925_rx_check(struct mt76_dev *mdev, void *data, int len);
+void mt7925_queue_rx_skb(struct mt76_dev *mdev, enum mt76_rxq_id q,
+ struct sk_buff *skb, u32 *info);
+void mt7925_stats_work(struct work_struct *work);
+void mt7925_set_stream_he_eht_caps(struct mt792x_phy *phy);
+int mt7925_init_debugfs(struct mt792x_dev *dev);
+
+int mt7925_mcu_set_beacon_filter(struct mt792x_dev *dev,
+ struct ieee80211_vif *vif,
+ bool enable);
+int mt7925_mcu_uni_tx_ba(struct mt792x_dev *dev,
+ struct ieee80211_ampdu_params *params,
+ bool enable);
+int mt7925_mcu_uni_rx_ba(struct mt792x_dev *dev,
+ struct ieee80211_ampdu_params *params,
+ bool enable);
+void mt7925_scan_work(struct work_struct *work);
+void mt7925_roc_work(struct work_struct *work);
+int mt7925_mcu_uni_bss_ps(struct mt792x_dev *dev, struct ieee80211_vif *vif);
+void mt7925_coredump_work(struct work_struct *work);
+int mt7925_get_txpwr_info(struct mt792x_dev *dev, u8 band_idx,
+ struct mt7925_txpwr *txpwr);
+void mt7925_mac_set_fixed_rate_table(struct mt792x_dev *dev,
+ u8 tbl_idx, u16 rate_idx);
+void mt7925_mac_write_txwi(struct mt76_dev *dev, __le32 *txwi,
+ struct sk_buff *skb, struct mt76_wcid *wcid,
+ struct ieee80211_key_conf *key, int pid,
+ enum mt76_txq_id qid, u32 changed);
+void mt7925_txwi_free(struct mt792x_dev *dev, struct mt76_txwi_cache *t,
+ struct ieee80211_sta *sta, bool clear_status,
+ struct list_head *free_list);
+int mt7925_mcu_parse_response(struct mt76_dev *mdev, int cmd,
+ struct sk_buff *skb, int seq);
+
+int mt7925e_mac_reset(struct mt792x_dev *dev);
+int mt7925e_mcu_init(struct mt792x_dev *dev);
+void mt7925_mac_add_txs(struct mt792x_dev *dev, void *data);
+void mt7925_set_runtime_pm(struct mt792x_dev *dev);
+void mt7925_mcu_set_suspend_iter(void *priv, u8 *mac,
+ struct ieee80211_vif *vif);
+void mt7925_connac_mcu_set_suspend_iter(void *priv, u8 *mac,
+ struct ieee80211_vif *vif);
+void mt7925_set_ipv6_ns_work(struct work_struct *work);
+
+int mt7925_mcu_set_sniffer(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ bool enable);
+int mt7925_mcu_config_sniffer(struct mt792x_vif *vif,
+ struct ieee80211_chanctx_conf *ctx);
+
+int mt7925_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ enum mt76_txq_id qid, struct mt76_wcid *wcid,
+ struct ieee80211_sta *sta,
+ struct mt76_tx_info *tx_info);
+void mt7925_usb_sdio_tx_complete_skb(struct mt76_dev *mdev,
+ struct mt76_queue_entry *e);
+bool mt7925_usb_sdio_tx_status_data(struct mt76_dev *mdev, u8 *update);
+
+int mt7925_mcu_uni_add_beacon_offload(struct mt792x_dev *dev,
+ struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif,
+ bool enable);
+int mt7925_set_tx_sar_pwr(struct ieee80211_hw *hw,
+ const struct cfg80211_sar_specs *sar);
+
+int mt7925_mcu_regval(struct mt792x_dev *dev, u32 regidx, u32 *val, bool set);
+int mt7925_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
+ enum environment_cap env_cap);
+int mt7925_mcu_set_roc(struct mt792x_phy *phy, struct mt792x_vif *vif,
+ struct ieee80211_channel *chan, int duration,
+ enum mt7925_roc_req type, u8 token_id);
+int mt7925_mcu_abort_roc(struct mt792x_phy *phy, struct mt792x_vif *vif,
+ u8 token_id);
+int mt7925_mcu_fill_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ int cmd, int *wait_seq);
+int mt7925_mcu_add_key(struct mt76_dev *dev, struct ieee80211_vif *vif,
+ struct mt76_connac_sta_key_conf *sta_key_conf,
+ struct ieee80211_key_conf *key, int mcu_cmd,
+ struct mt76_wcid *wcid, enum set_key_cmd cmd);
+int mt7925_mcu_set_rts_thresh(struct mt792x_phy *phy, u32 val);
+int mt7925_mcu_wtbl_update_hdr_trans(struct mt792x_dev *dev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+
+#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/pci.c b/drivers/net/wireless/mediatek/mt76/mt7925/pci.c
new file mode 100644
index 000000000000..08ef75e24e1c
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/pci.c
@@ -0,0 +1,586 @@
+// SPDX-License-Identifier: ISC
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+
+#include "mt7925.h"
+#include "mac.h"
+#include "mcu.h"
+#include "../dma.h"
+
+static const struct pci_device_id mt7925_pci_device_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x7925),
+ .driver_data = (kernel_ulong_t)MT7925_FIRMWARE_WM },
+ { PCI_DEVICE(PCI_VENDOR_ID_MEDIATEK, 0x0717),
+ .driver_data = (kernel_ulong_t)MT7925_FIRMWARE_WM },
+ { },
+};
+
+static bool mt7925_disable_aspm;
+module_param_named(disable_aspm, mt7925_disable_aspm, bool, 0644);
+MODULE_PARM_DESC(disable_aspm, "disable PCI ASPM support");
+
+static int mt7925e_init_reset(struct mt792x_dev *dev)
+{
+ return mt792x_wpdma_reset(dev, true);
+}
+
+static void mt7925e_unregister_device(struct mt792x_dev *dev)
+{
+ int i;
+ struct mt76_connac_pm *pm = &dev->pm;
+
+ cancel_work_sync(&dev->init_work);
+ mt76_unregister_device(&dev->mt76);
+ mt76_for_each_q_rx(&dev->mt76, i)
+ napi_disable(&dev->mt76.napi[i]);
+ cancel_delayed_work_sync(&pm->ps_work);
+ cancel_work_sync(&pm->wake_work);
+ cancel_work_sync(&dev->reset_work);
+
+ mt7925_tx_token_put(dev);
+ __mt792x_mcu_drv_pmctrl(dev);
+ mt792x_dma_cleanup(dev);
+ mt792x_wfsys_reset(dev);
+ skb_queue_purge(&dev->mt76.mcu.res_q);
+
+ tasklet_disable(&dev->mt76.irq_tasklet);
+}
+
+static void mt7925_reg_remap_restore(struct mt792x_dev *dev)
+{
+	/* restore the original remap configuration */
+ if (unlikely(dev->backup_l1)) {
+ dev->bus_ops->wr(&dev->mt76, MT_HIF_REMAP_L1, dev->backup_l1);
+ dev->backup_l1 = 0;
+ }
+
+ if (dev->backup_l2) {
+ dev->bus_ops->wr(&dev->mt76, MT_HIF_REMAP_L2, dev->backup_l2);
+ dev->backup_l2 = 0;
+ }
+}
+
+static u32 mt7925_reg_map_l1(struct mt792x_dev *dev, u32 addr)
+{
+ u32 offset = FIELD_GET(MT_HIF_REMAP_L1_OFFSET, addr);
+ u32 base = FIELD_GET(MT_HIF_REMAP_L1_BASE, addr);
+
+ dev->backup_l1 = dev->bus_ops->rr(&dev->mt76, MT_HIF_REMAP_L1);
+
+ dev->bus_ops->rmw(&dev->mt76, MT_HIF_REMAP_L1,
+ MT_HIF_REMAP_L1_MASK,
+ FIELD_PREP(MT_HIF_REMAP_L1_MASK, base));
+
+ /* use read to push write */
+ dev->bus_ops->rr(&dev->mt76, MT_HIF_REMAP_L1);
+
+ return MT_HIF_REMAP_BASE_L1 + offset;
+}
+
+static u32 mt7925_reg_map_l2(struct mt792x_dev *dev, u32 addr)
+{
+ u32 base = FIELD_GET(MT_HIF_REMAP_L1_BASE, MT_HIF_REMAP_BASE_L2);
+
+ dev->backup_l2 = dev->bus_ops->rr(&dev->mt76, MT_HIF_REMAP_L1);
+
+ dev->bus_ops->rmw(&dev->mt76, MT_HIF_REMAP_L1,
+ MT_HIF_REMAP_L1_MASK,
+ FIELD_PREP(MT_HIF_REMAP_L1_MASK, base));
+
+ dev->bus_ops->wr(&dev->mt76, MT_HIF_REMAP_L2, addr);
+ /* use read to push write */
+ dev->bus_ops->rr(&dev->mt76, MT_HIF_REMAP_L1);
+
+ return MT_HIF_REMAP_BASE_L1;
+}
+
+static u32 __mt7925_reg_addr(struct mt792x_dev *dev, u32 addr)
+{
+ static const struct mt76_connac_reg_map fixed_map[] = {
+ { 0x830c0000, 0x000000, 0x0001000 }, /* WF_MCU_BUS_CR_REMAP */
+ { 0x54000000, 0x002000, 0x0001000 }, /* WFDMA PCIE0 MCU DMA0 */
+ { 0x55000000, 0x003000, 0x0001000 }, /* WFDMA PCIE0 MCU DMA1 */
+ { 0x56000000, 0x004000, 0x0001000 }, /* WFDMA reserved */
+ { 0x57000000, 0x005000, 0x0001000 }, /* WFDMA MCU wrap CR */
+ { 0x58000000, 0x006000, 0x0001000 }, /* WFDMA PCIE1 MCU DMA0 (MEM_DMA) */
+ { 0x59000000, 0x007000, 0x0001000 }, /* WFDMA PCIE1 MCU DMA1 */
+ { 0x820c0000, 0x008000, 0x0004000 }, /* WF_UMAC_TOP (PLE) */
+ { 0x820c8000, 0x00c000, 0x0002000 }, /* WF_UMAC_TOP (PSE) */
+ { 0x820cc000, 0x00e000, 0x0002000 }, /* WF_UMAC_TOP (PP) */
+ { 0x74030000, 0x010000, 0x0001000 }, /* PCIe MAC */
+ { 0x820e0000, 0x020000, 0x0000400 }, /* WF_LMAC_TOP BN0 (WF_CFG) */
+ { 0x820e1000, 0x020400, 0x0000200 }, /* WF_LMAC_TOP BN0 (WF_TRB) */
+ { 0x820e2000, 0x020800, 0x0000400 }, /* WF_LMAC_TOP BN0 (WF_AGG) */
+ { 0x820e3000, 0x020c00, 0x0000400 }, /* WF_LMAC_TOP BN0 (WF_ARB) */
+ { 0x820e4000, 0x021000, 0x0000400 }, /* WF_LMAC_TOP BN0 (WF_TMAC) */
+ { 0x820e5000, 0x021400, 0x0000800 }, /* WF_LMAC_TOP BN0 (WF_RMAC) */
+ { 0x820ce000, 0x021c00, 0x0000200 }, /* WF_LMAC_TOP (WF_SEC) */
+ { 0x820e7000, 0x021e00, 0x0000200 }, /* WF_LMAC_TOP BN0 (WF_DMA) */
+ { 0x820cf000, 0x022000, 0x0001000 }, /* WF_LMAC_TOP (WF_PF) */
+ { 0x820e9000, 0x023400, 0x0000200 }, /* WF_LMAC_TOP BN0 (WF_WTBLOFF) */
+ { 0x820ea000, 0x024000, 0x0000200 }, /* WF_LMAC_TOP BN0 (WF_ETBF) */
+ { 0x820eb000, 0x024200, 0x0000400 }, /* WF_LMAC_TOP BN0 (WF_LPON) */
+ { 0x820ec000, 0x024600, 0x0000200 }, /* WF_LMAC_TOP BN0 (WF_INT) */
+ { 0x820ed000, 0x024800, 0x0000800 }, /* WF_LMAC_TOP BN0 (WF_MIB) */
+ { 0x820ca000, 0x026000, 0x0002000 }, /* WF_LMAC_TOP BN0 (WF_MUCOP) */
+ { 0x820d0000, 0x030000, 0x0010000 }, /* WF_LMAC_TOP (WF_WTBLON) */
+ { 0x40000000, 0x070000, 0x0010000 }, /* WF_UMAC_SYSRAM */
+ { 0x00400000, 0x080000, 0x0010000 }, /* WF_MCU_SYSRAM */
+ { 0x00410000, 0x090000, 0x0010000 }, /* WF_MCU_SYSRAM (configure register) */
+ { 0x820f0000, 0x0a0000, 0x0000400 }, /* WF_LMAC_TOP BN1 (WF_CFG) */
+ { 0x820f1000, 0x0a0600, 0x0000200 }, /* WF_LMAC_TOP BN1 (WF_TRB) */
+ { 0x820f2000, 0x0a0800, 0x0000400 }, /* WF_LMAC_TOP BN1 (WF_AGG) */
+ { 0x820f3000, 0x0a0c00, 0x0000400 }, /* WF_LMAC_TOP BN1 (WF_ARB) */
+ { 0x820f4000, 0x0a1000, 0x0000400 }, /* WF_LMAC_TOP BN1 (WF_TMAC) */
+ { 0x820f5000, 0x0a1400, 0x0000800 }, /* WF_LMAC_TOP BN1 (WF_RMAC) */
+ { 0x820f7000, 0x0a1e00, 0x0000200 }, /* WF_LMAC_TOP BN1 (WF_DMA) */
+ { 0x820f9000, 0x0a3400, 0x0000200 }, /* WF_LMAC_TOP BN1 (WF_WTBLOFF) */
+ { 0x820fa000, 0x0a4000, 0x0000200 }, /* WF_LMAC_TOP BN1 (WF_ETBF) */
+ { 0x820fb000, 0x0a4200, 0x0000400 }, /* WF_LMAC_TOP BN1 (WF_LPON) */
+ { 0x820fc000, 0x0a4600, 0x0000200 }, /* WF_LMAC_TOP BN1 (WF_INT) */
+ { 0x820fd000, 0x0a4800, 0x0000800 }, /* WF_LMAC_TOP BN1 (WF_MIB) */
+ { 0x820c4000, 0x0a8000, 0x0004000 }, /* WF_LMAC_TOP BN1 (WF_MUCOP) */
+ { 0x820b0000, 0x0ae000, 0x0001000 }, /* [APB2] WFSYS_ON */
+ { 0x80020000, 0x0b0000, 0x0010000 }, /* WF_TOP_MISC_OFF */
+ { 0x81020000, 0x0c0000, 0x0010000 }, /* WF_TOP_MISC_ON */
+ { 0x7c020000, 0x0d0000, 0x0010000 }, /* CONN_INFRA, wfdma */
+ { 0x7c060000, 0x0e0000, 0x0010000 }, /* CONN_INFRA, conn_host_csr_top */
+ { 0x7c000000, 0x0f0000, 0x0010000 }, /* CONN_INFRA */
+ { 0x70020000, 0x1f0000, 0x0010000 }, /* Reserved for CBTOP, can't switch */
+ { 0x7c500000, 0x060000, 0x2000000 }, /* remap */
+ { 0x0, 0x0, 0x0 } /* End */
+ };
+ int i;
+
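+	/* addresses below 0x200000 are accessed directly, without remapping */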
+ if (addr < 0x200000)
+ return addr;
+
+ mt7925_reg_remap_restore(dev);
+
+ for (i = 0; i < ARRAY_SIZE(fixed_map); i++) {
+ u32 ofs;
+
+ if (addr < fixed_map[i].phys)
+ continue;
+
+ ofs = addr - fixed_map[i].phys;
+ if (ofs > fixed_map[i].size)
+ continue;
+
+ return fixed_map[i].maps + ofs;
+ }
+
+ if ((addr >= 0x18000000 && addr < 0x18c00000) ||
+ (addr >= 0x70000000 && addr < 0x78000000) ||
+ (addr >= 0x7c000000 && addr < 0x7c400000))
+ return mt7925_reg_map_l1(dev, addr);
+
+ return mt7925_reg_map_l2(dev, addr);
+}
+
+static u32 mt7925_rr(struct mt76_dev *mdev, u32 offset)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ u32 addr = __mt7925_reg_addr(dev, offset);
+
+ return dev->bus_ops->rr(mdev, addr);
+}
+
+static void mt7925_wr(struct mt76_dev *mdev, u32 offset, u32 val)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ u32 addr = __mt7925_reg_addr(dev, offset);
+
+ dev->bus_ops->wr(mdev, addr, val);
+}
+
+static u32 mt7925_rmw(struct mt76_dev *mdev, u32 offset, u32 mask, u32 val)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ u32 addr = __mt7925_reg_addr(dev, offset);
+
+ return dev->bus_ops->rmw(mdev, addr, mask, val);
+}
+
+static int mt7925_dma_init(struct mt792x_dev *dev)
+{
+ int ret;
+
+ mt76_dma_attach(&dev->mt76);
+
+ ret = mt792x_dma_disable(dev, true);
+ if (ret)
+ return ret;
+
+ /* init tx queue */
+ ret = mt76_connac_init_tx_queues(dev->phy.mt76, MT7925_TXQ_BAND0,
+ MT7925_TX_RING_SIZE,
+ MT_TX_RING_BASE, 0);
+ if (ret)
+ return ret;
+
+ mt76_wr(dev, MT_WFDMA0_TX_RING0_EXT_CTRL, 0x4);
+
+ /* command to WM */
+ ret = mt76_init_mcu_queue(&dev->mt76, MT_MCUQ_WM, MT7925_TXQ_MCU_WM,
+ MT7925_TX_MCU_RING_SIZE, MT_TX_RING_BASE);
+ if (ret)
+ return ret;
+
+ /* firmware download */
+ ret = mt76_init_mcu_queue(&dev->mt76, MT_MCUQ_FWDL, MT7925_TXQ_FWDL,
+ MT7925_TX_FWDL_RING_SIZE, MT_TX_RING_BASE);
+ if (ret)
+ return ret;
+
+ /* rx event */
+ ret = mt76_queue_alloc(dev, &dev->mt76.q_rx[MT_RXQ_MCU],
+ MT7925_RXQ_MCU_WM, MT7925_RX_MCU_RING_SIZE,
+ MT_RX_BUF_SIZE, MT_RX_EVENT_RING_BASE);
+ if (ret)
+ return ret;
+
+ /* rx data */
+ ret = mt76_queue_alloc(dev, &dev->mt76.q_rx[MT_RXQ_MAIN],
+ MT7925_RXQ_BAND0, MT7925_RX_RING_SIZE,
+ MT_RX_BUF_SIZE, MT_RX_DATA_RING_BASE);
+ if (ret)
+ return ret;
+
+ ret = mt76_init_queues(dev, mt792x_poll_rx);
+ if (ret < 0)
+ return ret;
+
+ netif_napi_add_tx(&dev->mt76.tx_napi_dev, &dev->mt76.tx_napi,
+ mt792x_poll_tx);
+ napi_enable(&dev->mt76.tx_napi);
+
+ return mt792x_dma_enable(dev);
+}
+
+static int mt7925_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+{
+ static const struct mt76_driver_ops drv_ops = {
+ /* txwi_size = txd size + txp size */
+ .txwi_size = MT_TXD_SIZE + sizeof(struct mt76_connac_hw_txp),
+ .drv_flags = MT_DRV_TXWI_NO_FREE | MT_DRV_HW_MGMT_TXQ |
+ MT_DRV_AMSDU_OFFLOAD,
+ .survey_flags = SURVEY_INFO_TIME_TX |
+ SURVEY_INFO_TIME_RX |
+ SURVEY_INFO_TIME_BSS_RX,
+ .token_size = MT7925_TOKEN_SIZE,
+ .tx_prepare_skb = mt7925e_tx_prepare_skb,
+ .tx_complete_skb = mt76_connac_tx_complete_skb,
+ .rx_check = mt7925_rx_check,
+ .rx_skb = mt7925_queue_rx_skb,
+ .rx_poll_complete = mt792x_rx_poll_complete,
+ .sta_add = mt7925_mac_sta_add,
+ .sta_assoc = mt7925_mac_sta_assoc,
+ .sta_remove = mt7925_mac_sta_remove,
+ .update_survey = mt792x_update_channel,
+ };
+ static const struct mt792x_hif_ops mt7925_pcie_ops = {
+ .init_reset = mt7925e_init_reset,
+ .reset = mt7925e_mac_reset,
+ .mcu_init = mt7925e_mcu_init,
+ .drv_own = mt792xe_mcu_drv_pmctrl,
+ .fw_own = mt792xe_mcu_fw_pmctrl,
+ };
+ static const struct mt792x_irq_map irq_map = {
+ .host_irq_enable = MT_WFDMA0_HOST_INT_ENA,
+ .tx = {
+ .all_complete_mask = MT_INT_TX_DONE_ALL,
+ .mcu_complete_mask = MT_INT_TX_DONE_MCU,
+ },
+ .rx = {
+ .data_complete_mask = HOST_RX_DONE_INT_ENA2,
+ .wm_complete_mask = HOST_RX_DONE_INT_ENA0,
+ },
+ };
+ struct ieee80211_ops *ops;
+ struct mt76_bus_ops *bus_ops;
+ struct mt792x_dev *dev;
+ struct mt76_dev *mdev;
+ u8 features;
+ int ret;
+ u16 cmd;
+
+ ret = pcim_enable_device(pdev);
+ if (ret)
+ return ret;
+
+ ret = pcim_iomap_regions(pdev, BIT(0), pci_name(pdev));
+ if (ret)
+ return ret;
+
+ pci_read_config_word(pdev, PCI_COMMAND, &cmd);
+ if (!(cmd & PCI_COMMAND_MEMORY)) {
+ cmd |= PCI_COMMAND_MEMORY;
+ pci_write_config_word(pdev, PCI_COMMAND, cmd);
+ }
+ pci_set_master(pdev);
+
+ ret = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_ALL_TYPES);
+ if (ret < 0)
+ return ret;
+
+ ret = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+ if (ret)
+ goto err_free_pci_vec;
+
+ if (mt7925_disable_aspm)
+ mt76_pci_disable_aspm(pdev);
+
+ ops = mt792x_get_mac80211_ops(&pdev->dev, &mt7925_ops,
+ (void *)id->driver_data, &features);
+ if (!ops) {
+ ret = -ENOMEM;
+ goto err_free_pci_vec;
+ }
+
+ mdev = mt76_alloc_device(&pdev->dev, sizeof(*dev), ops, &drv_ops);
+ if (!mdev) {
+ ret = -ENOMEM;
+ goto err_free_pci_vec;
+ }
+
+ pci_set_drvdata(pdev, mdev);
+
+ dev = container_of(mdev, struct mt792x_dev, mt76);
+ dev->fw_features = features;
+ dev->hif_ops = &mt7925_pcie_ops;
+ dev->irq_map = &irq_map;
+ mt76_mmio_init(&dev->mt76, pcim_iomap_table(pdev)[0]);
+ tasklet_init(&mdev->irq_tasklet, mt792x_irq_tasklet, (unsigned long)dev);
+
+ dev->phy.dev = dev;
+ dev->phy.mt76 = &dev->mt76.phy;
+ dev->mt76.phy.priv = &dev->phy;
+ dev->bus_ops = dev->mt76.bus;
+ bus_ops = devm_kmemdup(dev->mt76.dev, dev->bus_ops, sizeof(*bus_ops),
+ GFP_KERNEL);
+ if (!bus_ops) {
+ ret = -ENOMEM;
+ goto err_free_dev;
+ }
+
+ bus_ops->rr = mt7925_rr;
+ bus_ops->wr = mt7925_wr;
+ bus_ops->rmw = mt7925_rmw;
+ dev->mt76.bus = bus_ops;
+
+ ret = __mt792x_mcu_fw_pmctrl(dev);
+ if (ret)
+ goto err_free_dev;
+
+ ret = __mt792xe_mcu_drv_pmctrl(dev);
+ if (ret)
+ goto err_free_dev;
+
+ mdev->rev = (mt76_rr(dev, MT_HW_CHIPID) << 16) |
+ (mt76_rr(dev, MT_HW_REV) & 0xff);
+
+ dev_info(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
+
+ ret = mt792x_wfsys_reset(dev);
+ if (ret)
+ goto err_free_dev;
+
+ mt76_wr(dev, irq_map.host_irq_enable, 0);
+
+ mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
+
+ ret = devm_request_irq(mdev->dev, pdev->irq, mt792x_irq_handler,
+ IRQF_SHARED, KBUILD_MODNAME, dev);
+ if (ret)
+ goto err_free_dev;
+
+ ret = mt7925_dma_init(dev);
+ if (ret)
+ goto err_free_irq;
+
+ ret = mt7925_register_device(dev);
+ if (ret)
+ goto err_free_irq;
+
+ return 0;
+
+err_free_irq:
+ devm_free_irq(&pdev->dev, pdev->irq, dev);
+err_free_dev:
+ mt76_free_device(&dev->mt76);
+err_free_pci_vec:
+ pci_free_irq_vectors(pdev);
+
+ return ret;
+}
+
+static void mt7925_pci_remove(struct pci_dev *pdev)
+{
+ struct mt76_dev *mdev = pci_get_drvdata(pdev);
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+
+ mt7925e_unregister_device(dev);
+ devm_free_irq(&pdev->dev, pdev->irq, dev);
+ mt76_free_device(&dev->mt76);
+ pci_free_irq_vectors(pdev);
+}
+
+static int mt7925_pci_suspend(struct device *device)
+{
+ struct pci_dev *pdev = to_pci_dev(device);
+ struct mt76_dev *mdev = pci_get_drvdata(pdev);
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ struct mt76_connac_pm *pm = &dev->pm;
+ int i, err;
+
+ pm->suspended = true;
+ flush_work(&dev->reset_work);
+ cancel_delayed_work_sync(&pm->ps_work);
+ cancel_work_sync(&pm->wake_work);
+
+ err = mt792x_mcu_drv_pmctrl(dev);
+ if (err < 0)
+ goto restore_suspend;
+
+ /* always enable deep sleep during suspend to reduce
+ * power consumption
+ */
+ mt7925_mcu_set_deep_sleep(dev, true);
+
+ err = mt76_connac_mcu_set_hif_suspend(mdev, true);
+ if (err)
+ goto restore_suspend;
+
+ napi_disable(&mdev->tx_napi);
+ mt76_worker_disable(&mdev->tx_worker);
+
+ mt76_for_each_q_rx(mdev, i) {
+ napi_disable(&mdev->napi[i]);
+ }
+
+ /* wait until dma is idle */
+ mt76_poll(dev, MT_WFDMA0_GLO_CFG,
+ MT_WFDMA0_GLO_CFG_TX_DMA_BUSY |
+ MT_WFDMA0_GLO_CFG_RX_DMA_BUSY, 0, 1000);
+
+ /* disable dma */
+ mt76_clear(dev, MT_WFDMA0_GLO_CFG,
+ MT_WFDMA0_GLO_CFG_TX_DMA_EN | MT_WFDMA0_GLO_CFG_RX_DMA_EN);
+
+ /* disable interrupt */
+ mt76_wr(dev, dev->irq_map->host_irq_enable, 0);
+ mt76_wr(dev, MT_WFDMA0_HOST_INT_DIS,
+ dev->irq_map->tx.all_complete_mask |
+ MT_INT_RX_DONE_ALL | MT_INT_MCU_CMD);
+
+ mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0x0);
+
+ synchronize_irq(pdev->irq);
+ tasklet_kill(&mdev->irq_tasklet);
+
+ err = mt792x_mcu_fw_pmctrl(dev);
+ if (err)
+ goto restore_napi;
+
+ return 0;
+
+restore_napi:
+ mt76_for_each_q_rx(mdev, i) {
+ napi_enable(&mdev->napi[i]);
+ }
+ napi_enable(&mdev->tx_napi);
+
+ if (!pm->ds_enable)
+ mt7925_mcu_set_deep_sleep(dev, false);
+
+ mt76_connac_mcu_set_hif_suspend(mdev, false);
+
+restore_suspend:
+ pm->suspended = false;
+
+ if (err < 0)
+ mt792x_reset(&dev->mt76);
+
+ return err;
+}
+
+static int mt7925_pci_resume(struct device *device)
+{
+ struct pci_dev *pdev = to_pci_dev(device);
+ struct mt76_dev *mdev = pci_get_drvdata(pdev);
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ struct mt76_connac_pm *pm = &dev->pm;
+ int i, err;
+
+ err = mt792x_mcu_drv_pmctrl(dev);
+ if (err < 0)
+ goto failed;
+
+ mt792x_wpdma_reinit_cond(dev);
+
+ /* enable interrupt */
+ mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
+ mt76_connac_irq_enable(&dev->mt76,
+ dev->irq_map->tx.all_complete_mask |
+ MT_INT_RX_DONE_ALL | MT_INT_MCU_CMD);
+ mt76_set(dev, MT_MCU2HOST_SW_INT_ENA, MT_MCU_CMD_WAKE_RX_PCIE);
+
+ /* enable dma */
+ mt76_set(dev, MT_WFDMA0_GLO_CFG,
+ MT_WFDMA0_GLO_CFG_TX_DMA_EN | MT_WFDMA0_GLO_CFG_RX_DMA_EN);
+
+ mt76_worker_enable(&mdev->tx_worker);
+
+ local_bh_disable();
+ mt76_for_each_q_rx(mdev, i) {
+ napi_enable(&mdev->napi[i]);
+ napi_schedule(&mdev->napi[i]);
+ }
+ napi_enable(&mdev->tx_napi);
+ napi_schedule(&mdev->tx_napi);
+ local_bh_enable();
+
+ err = mt76_connac_mcu_set_hif_suspend(mdev, false);
+
+ /* restore previous ds setting */
+ if (!pm->ds_enable)
+ mt7925_mcu_set_deep_sleep(dev, false);
+
+failed:
+ pm->suspended = false;
+
+ if (err < 0)
+ mt792x_reset(&dev->mt76);
+
+ return err;
+}
+
+static void mt7925_pci_shutdown(struct pci_dev *pdev)
+{
+ mt7925_pci_remove(pdev);
+}
+
+static DEFINE_SIMPLE_DEV_PM_OPS(mt7925_pm_ops, mt7925_pci_suspend, mt7925_pci_resume);
+
+static struct pci_driver mt7925_pci_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = mt7925_pci_device_table,
+ .probe = mt7925_pci_probe,
+ .remove = mt7925_pci_remove,
+ .shutdown = mt7925_pci_shutdown,
+ .driver.pm = pm_sleep_ptr(&mt7925_pm_ops),
+};
+
+module_pci_driver(mt7925_pci_driver);
+
+MODULE_DEVICE_TABLE(pci, mt7925_pci_device_table);
+MODULE_FIRMWARE(MT7925_FIRMWARE_WM);
+MODULE_FIRMWARE(MT7925_ROM_PATCH);
+MODULE_AUTHOR("Deren Wu <deren.wu@mediatek.com>");
+MODULE_AUTHOR("Lorenzo Bianconi <lorenzo@kernel.org>");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/pci_mac.c b/drivers/net/wireless/mediatek/mt76/mt7925/pci_mac.c
new file mode 100644
index 000000000000..9fca887977d2
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/pci_mac.c
@@ -0,0 +1,148 @@
+// SPDX-License-Identifier: ISC
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#include "mt7925.h"
+#include "../dma.h"
+#include "mac.h"
+
+int mt7925e_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
+ enum mt76_txq_id qid, struct mt76_wcid *wcid,
+ struct ieee80211_sta *sta,
+ struct mt76_tx_info *tx_info)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx_info->skb);
+ struct ieee80211_key_conf *key = info->control.hw_key;
+ struct mt76_connac_hw_txp *txp;
+ struct mt76_txwi_cache *t;
+ int id, pid;
+ u8 *txwi = (u8 *)txwi_ptr;
+
+ if (unlikely(tx_info->skb->len <= ETH_HLEN))
+ return -EINVAL;
+
+ if (!wcid)
+ wcid = &dev->mt76.global_wcid;
+
+ t = (struct mt76_txwi_cache *)(txwi + mdev->drv->txwi_size);
+ t->skb = tx_info->skb;
+
+ id = mt76_token_consume(mdev, &t);
+ if (id < 0)
+ return id;
+
+ if (sta) {
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+
+ if (time_after(jiffies, msta->last_txs + HZ / 4)) {
+ info->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
+ msta->last_txs = jiffies;
+ }
+ }
+
+ pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
+ mt7925_mac_write_txwi(mdev, txwi_ptr, tx_info->skb, wcid, key,
+ pid, qid, 0);
+
+ txp = (struct mt76_connac_hw_txp *)(txwi + MT_TXD_SIZE);
+ memset(txp, 0, sizeof(struct mt76_connac_hw_txp));
+ mt76_connac_write_hw_txp(mdev, tx_info, txp, id);
+
+ tx_info->skb = NULL;
+
+ return 0;
+}
+
+void mt7925_tx_token_put(struct mt792x_dev *dev)
+{
+ struct mt76_txwi_cache *txwi;
+ int id;
+
+ spin_lock_bh(&dev->mt76.token_lock);
+ idr_for_each_entry(&dev->mt76.token, txwi, id) {
+ mt7925_txwi_free(dev, txwi, NULL, false, NULL);
+ dev->mt76.token_count--;
+ }
+ spin_unlock_bh(&dev->mt76.token_lock);
+ idr_destroy(&dev->mt76.token);
+}
+
+int mt7925e_mac_reset(struct mt792x_dev *dev)
+{
+ const struct mt792x_irq_map *irq_map = dev->irq_map;
+ int i, err;
+
+ mt792xe_mcu_drv_pmctrl(dev);
+
+ mt76_connac_free_pending_tx_skbs(&dev->pm, NULL);
+
+ mt76_wr(dev, dev->irq_map->host_irq_enable, 0);
+ mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0x0);
+
+ set_bit(MT76_RESET, &dev->mphy.state);
+ set_bit(MT76_MCU_RESET, &dev->mphy.state);
+ wake_up(&dev->mt76.mcu.wait);
+ skb_queue_purge(&dev->mt76.mcu.res_q);
+
+ mt76_txq_schedule_all(&dev->mphy);
+
+ mt76_worker_disable(&dev->mt76.tx_worker);
+ if (irq_map->rx.data_complete_mask)
+ napi_disable(&dev->mt76.napi[MT_RXQ_MAIN]);
+ if (irq_map->rx.wm_complete_mask)
+ napi_disable(&dev->mt76.napi[MT_RXQ_MCU]);
+ if (irq_map->rx.wm2_complete_mask)
+ napi_disable(&dev->mt76.napi[MT_RXQ_MCU_WA]);
+ if (irq_map->tx.all_complete_mask)
+ napi_disable(&dev->mt76.tx_napi);
+
+ mt7925_tx_token_put(dev);
+ idr_init(&dev->mt76.token);
+
+ mt792x_wpdma_reset(dev, true);
+
+ local_bh_disable();
+ mt76_for_each_q_rx(&dev->mt76, i) {
+ napi_enable(&dev->mt76.napi[i]);
+ napi_schedule(&dev->mt76.napi[i]);
+ }
+ napi_enable(&dev->mt76.tx_napi);
+ napi_schedule(&dev->mt76.tx_napi);
+ local_bh_enable();
+
+ dev->fw_assert = false;
+ clear_bit(MT76_MCU_RESET, &dev->mphy.state);
+
+ mt76_wr(dev, dev->irq_map->host_irq_enable,
+ dev->irq_map->tx.all_complete_mask |
+ MT_INT_RX_DONE_ALL | MT_INT_MCU_CMD);
+ mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
+
+ err = mt792xe_mcu_fw_pmctrl(dev);
+ if (err)
+ return err;
+
+ err = __mt792xe_mcu_drv_pmctrl(dev);
+ if (err)
+ goto out;
+
+ err = mt7925_run_firmware(dev);
+ if (err)
+ goto out;
+
+ err = mt7925_mcu_set_eeprom(dev);
+ if (err)
+ goto out;
+
+ err = mt7925_mac_init(dev);
+ if (err)
+ goto out;
+
+ err = __mt7925_start(&dev->phy);
+out:
+ clear_bit(MT76_RESET, &dev->mphy.state);
+
+ mt76_worker_enable(&dev->mt76.tx_worker);
+
+ return err;
+}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/pci_mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/pci_mcu.c
new file mode 100644
index 000000000000..f95bc5dcd830
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/pci_mcu.c
@@ -0,0 +1,53 @@
+// SPDX-License-Identifier: ISC
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#include "mt7925.h"
+#include "mcu.h"
+
+static int
+mt7925_mcu_send_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ int cmd, int *seq)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ enum mt76_mcuq_id txq = MT_MCUQ_WM;
+ int ret;
+
+ ret = mt7925_mcu_fill_message(mdev, skb, cmd, seq);
+ if (ret)
+ return ret;
+
+ mdev->mcu.timeout = 3 * HZ;
+
+ if (cmd == MCU_CMD(FW_SCATTER))
+ txq = MT_MCUQ_FWDL;
+
+ return mt76_tx_queue_skb_raw(dev, mdev->q_mcu[txq], skb, 0);
+}
+
+int mt7925e_mcu_init(struct mt792x_dev *dev)
+{
+ static const struct mt76_mcu_ops mt7925_mcu_ops = {
+ .headroom = sizeof(struct mt76_connac2_mcu_txd),
+ .mcu_skb_send_msg = mt7925_mcu_send_message,
+ .mcu_parse_response = mt7925_mcu_parse_response,
+ };
+ int err;
+
+ dev->mt76.mcu_ops = &mt7925_mcu_ops;
+
+ err = mt792xe_mcu_fw_pmctrl(dev);
+ if (err)
+ return err;
+
+ err = __mt792xe_mcu_drv_pmctrl(dev);
+ if (err)
+ return err;
+
+ mt76_rmw_field(dev, MT_PCIE_MAC_PM, MT_PCIE_MAC_PM_L0S_DIS, 1);
+
+ err = mt7925_run_firmware(dev);
+
+ mt76_queue_tx_cleanup(dev, dev->mt76.q_mcu[MT_MCUQ_FWDL], false);
+
+ return err;
+}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/regs.h b/drivers/net/wireless/mediatek/mt76/mt7925/regs.h
new file mode 100644
index 000000000000..985794a40c1a
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/regs.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: ISC */
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#ifndef __MT7925_REGS_H
+#define __MT7925_REGS_H
+
+#include "../mt792x_regs.h"
+
+#define MT_MDP_BASE 0x820cc800
+#define MT_MDP(ofs) (MT_MDP_BASE + (ofs))
+
+#define MT_MDP_DCR0 MT_MDP(0x000)
+#define MT_MDP_DCR0_DAMSDU_EN BIT(15)
+#define MT_MDP_DCR0_RX_HDR_TRANS_EN BIT(19)
+
+#define MT_MDP_DCR1 MT_MDP(0x004)
+#define MT_MDP_DCR1_MAX_RX_LEN GENMASK(15, 3)
+
+#define MT_MDP_BNRCFR0(_band) MT_MDP(0x090 + ((_band) << 8))
+#define MT_MDP_RCFR0_MCU_RX_MGMT GENMASK(5, 4)
+#define MT_MDP_RCFR0_MCU_RX_CTL_NON_BAR GENMASK(7, 6)
+#define MT_MDP_RCFR0_MCU_RX_CTL_BAR GENMASK(9, 8)
+
+#define MT_MDP_BNRCFR1(_band) MT_MDP(0x094 + ((_band) << 8))
+#define MT_MDP_RCFR1_MCU_RX_BYPASS GENMASK(23, 22)
+#define MT_MDP_RCFR1_RX_DROPPED_UCAST GENMASK(28, 27)
+#define MT_MDP_RCFR1_RX_DROPPED_MCAST GENMASK(30, 29)
+#define MT_MDP_TO_HIF 0
+#define MT_MDP_TO_WM 1
+
+#define MT_WFDMA0_HOST_INT_ENA MT_WFDMA0(0x228)
+#define MT_WFDMA0_HOST_INT_DIS MT_WFDMA0(0x22c)
+#define HOST_RX_DONE_INT_ENA4 BIT(12)
+#define HOST_RX_DONE_INT_ENA5 BIT(13)
+#define HOST_RX_DONE_INT_ENA6 BIT(14)
+#define HOST_RX_DONE_INT_ENA7 BIT(15)
+#define HOST_RX_DONE_INT_ENA8 BIT(16)
+#define HOST_RX_DONE_INT_ENA9 BIT(17)
+#define HOST_RX_DONE_INT_ENA10 BIT(18)
+#define HOST_RX_DONE_INT_ENA11 BIT(19)
+#define HOST_TX_DONE_INT_ENA15 BIT(25)
+#define HOST_TX_DONE_INT_ENA16 BIT(26)
+#define HOST_TX_DONE_INT_ENA17 BIT(27)
+
+/* WFDMA interrupt */
+#define MT_INT_RX_DONE_DATA HOST_RX_DONE_INT_ENA2
+#define MT_INT_RX_DONE_WM HOST_RX_DONE_INT_ENA0
+#define MT_INT_RX_DONE_WM2 HOST_RX_DONE_INT_ENA1
+#define MT_INT_RX_DONE_ALL (MT_INT_RX_DONE_DATA | \
+ MT_INT_RX_DONE_WM | \
+ MT_INT_RX_DONE_WM2)
+
+#define MT_INT_TX_DONE_MCU_WM (HOST_TX_DONE_INT_ENA15 | \
+ HOST_TX_DONE_INT_ENA17)
+
+#define MT_INT_TX_DONE_FWDL HOST_TX_DONE_INT_ENA16
+#define MT_INT_TX_DONE_BAND0 HOST_TX_DONE_INT_ENA0
+
+#define MT_INT_TX_DONE_MCU (MT_INT_TX_DONE_MCU_WM | \
+ MT_INT_TX_DONE_FWDL)
+#define MT_INT_TX_DONE_ALL (MT_INT_TX_DONE_MCU_WM | \
+ MT_INT_TX_DONE_BAND0 | \
+ GENMASK(18, 4))
+
+#define MT_RX_DATA_RING_BASE MT_WFDMA0(0x500)
+
+#define MT_INFRA_CFG_BASE 0xd1000
+#define MT_INFRA(ofs) (MT_INFRA_CFG_BASE + (ofs))
+
+#define MT_HIF_REMAP_L1 0x155024
+#define MT_HIF_REMAP_L1_MASK GENMASK(31, 16)
+#define MT_HIF_REMAP_L1_OFFSET GENMASK(15, 0)
+#define MT_HIF_REMAP_L1_BASE GENMASK(31, 16)
+#define MT_HIF_REMAP_BASE_L1 0x130000
+
+#define MT_HIF_REMAP_L2 0x0120
+#if IS_ENABLED(CONFIG_MT76_DEV)
+#define MT_HIF_REMAP_BASE_L2 (0x7c500000 - (0x7c000000 - 0x18000000))
+#else
+#define MT_HIF_REMAP_BASE_L2 0x18500000
+#endif
+
+#define MT_WFSYS_SW_RST_B 0x7c000140
+
+#define MT_WTBLON_TOP_WDUCR MT_WTBLON_TOP(0x370)
+#define MT_WTBLON_TOP_WDUCR_GROUP GENMASK(4, 0)
+
+#define MT_WTBL_UPDATE MT_WTBLON_TOP(0x380)
+#define MT_WTBL_UPDATE_WLAN_IDX GENMASK(11, 0)
+#define MT_WTBL_UPDATE_ADM_COUNT_CLEAR BIT(14)
+
+#endif
diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/usb.c b/drivers/net/wireless/mediatek/mt76/mt7925/usb.c
new file mode 100644
index 000000000000..9b885c5b3ed5
--- /dev/null
+++ b/drivers/net/wireless/mediatek/mt76/mt7925/usb.c
@@ -0,0 +1,332 @@
+// SPDX-License-Identifier: ISC
+/* Copyright (C) 2023 MediaTek Inc. */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/usb.h>
+
+#include "mt7925.h"
+#include "mcu.h"
+#include "mac.h"
+
+static const struct usb_device_id mt7925u_device_table[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(0x0e8d, 0x7925, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)MT7925_FIRMWARE_WM },
+ { },
+};
+
+static int
+mt7925u_mcu_send_message(struct mt76_dev *mdev, struct sk_buff *skb,
+ int cmd, int *seq)
+{
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ u32 pad, ep;
+ int ret;
+
+ ret = mt7925_mcu_fill_message(mdev, skb, cmd, seq);
+ if (ret)
+ return ret;
+
+ mdev->mcu.timeout = 3 * HZ;
+
+ if (cmd != MCU_CMD(FW_SCATTER))
+ ep = MT_EP_OUT_INBAND_CMD;
+ else
+ ep = MT_EP_OUT_AC_BE;
+
+ mt792x_skb_add_usb_sdio_hdr(dev, skb, 0);
+ pad = round_up(skb->len, 4) + 4 - skb->len;
+ __skb_put_zero(skb, pad);
+
+ ret = mt76u_bulk_msg(&dev->mt76, skb->data, skb->len, NULL,
+ 1000, ep);
+ dev_kfree_skb(skb);
+
+ return ret;
+}
+
+static int mt7925u_mcu_init(struct mt792x_dev *dev)
+{
+ static const struct mt76_mcu_ops mcu_ops = {
+ .headroom = MT_SDIO_HDR_SIZE +
+ sizeof(struct mt76_connac2_mcu_txd),
+ .tailroom = MT_USB_TAIL_SIZE,
+ .mcu_skb_send_msg = mt7925u_mcu_send_message,
+ .mcu_parse_response = mt7925_mcu_parse_response,
+ };
+ int ret;
+
+ dev->mt76.mcu_ops = &mcu_ops;
+
+ mt76_set(dev, MT_UDMA_TX_QSEL, MT_FW_DL_EN);
+ ret = mt7925_run_firmware(dev);
+ if (ret)
+ return ret;
+
+ set_bit(MT76_STATE_MCU_RUNNING, &dev->mphy.state);
+ mt76_clear(dev, MT_UDMA_TX_QSEL, MT_FW_DL_EN);
+
+ return 0;
+}
+
+static int mt7925u_mac_reset(struct mt792x_dev *dev)
+{
+ int err;
+
+ mt76_txq_schedule_all(&dev->mphy);
+ mt76_worker_disable(&dev->mt76.tx_worker);
+
+ set_bit(MT76_RESET, &dev->mphy.state);
+ set_bit(MT76_MCU_RESET, &dev->mphy.state);
+
+ wake_up(&dev->mt76.mcu.wait);
+ skb_queue_purge(&dev->mt76.mcu.res_q);
+
+ mt76u_stop_rx(&dev->mt76);
+ mt76u_stop_tx(&dev->mt76);
+
+ mt792xu_wfsys_reset(dev);
+
+ clear_bit(MT76_MCU_RESET, &dev->mphy.state);
+ err = mt76u_resume_rx(&dev->mt76);
+ if (err)
+ goto out;
+
+ err = mt792xu_mcu_power_on(dev);
+ if (err)
+ goto out;
+
+ err = mt792xu_dma_init(dev, false);
+ if (err)
+ goto out;
+
+ mt76_wr(dev, MT_SWDEF_MODE, MT_SWDEF_NORMAL_MODE);
+ mt76_set(dev, MT_UDMA_TX_QSEL, MT_FW_DL_EN);
+
+ err = mt7925_run_firmware(dev);
+ if (err)
+ goto out;
+
+ mt76_clear(dev, MT_UDMA_TX_QSEL, MT_FW_DL_EN);
+
+ err = mt7925_mcu_set_eeprom(dev);
+ if (err)
+ goto out;
+
+ err = mt7925_mac_init(dev);
+ if (err)
+ goto out;
+
+ err = __mt7925_start(&dev->phy);
+out:
+ clear_bit(MT76_RESET, &dev->mphy.state);
+
+ mt76_worker_enable(&dev->mt76.tx_worker);
+
+ return err;
+}
+
+static int mt7925u_probe(struct usb_interface *usb_intf,
+ const struct usb_device_id *id)
+{
+ static const struct mt76_driver_ops drv_ops = {
+ .txwi_size = MT_SDIO_TXD_SIZE,
+ .drv_flags = MT_DRV_RX_DMA_HDR | MT_DRV_HW_MGMT_TXQ |
+ MT_DRV_AMSDU_OFFLOAD,
+ .survey_flags = SURVEY_INFO_TIME_TX |
+ SURVEY_INFO_TIME_RX |
+ SURVEY_INFO_TIME_BSS_RX,
+ .tx_prepare_skb = mt7925_usb_sdio_tx_prepare_skb,
+ .tx_complete_skb = mt7925_usb_sdio_tx_complete_skb,
+ .tx_status_data = mt7925_usb_sdio_tx_status_data,
+ .rx_skb = mt7925_queue_rx_skb,
+ .rx_check = mt7925_rx_check,
+ .sta_add = mt7925_mac_sta_add,
+ .sta_assoc = mt7925_mac_sta_assoc,
+ .sta_remove = mt7925_mac_sta_remove,
+ .update_survey = mt792x_update_channel,
+ };
+ static const struct mt792x_hif_ops hif_ops = {
+ .mcu_init = mt7925u_mcu_init,
+ .init_reset = mt792xu_init_reset,
+ .reset = mt7925u_mac_reset,
+ };
+ static struct mt76_bus_ops bus_ops = {
+ .rr = mt792xu_rr,
+ .wr = mt792xu_wr,
+ .rmw = mt792xu_rmw,
+ .read_copy = mt76u_read_copy,
+ .write_copy = mt792xu_copy,
+ .type = MT76_BUS_USB,
+ };
+ struct usb_device *udev = interface_to_usbdev(usb_intf);
+ struct ieee80211_ops *ops;
+ struct ieee80211_hw *hw;
+ struct mt792x_dev *dev;
+ struct mt76_dev *mdev;
+ u8 features;
+ int ret;
+
+ ops = mt792x_get_mac80211_ops(&usb_intf->dev, &mt7925_ops,
+ (void *)id->driver_info, &features);
+ if (!ops)
+ return -ENOMEM;
+
+ ops->stop = mt792xu_stop;
+
+ mdev = mt76_alloc_device(&usb_intf->dev, sizeof(*dev), ops, &drv_ops);
+ if (!mdev)
+ return -ENOMEM;
+
+ dev = container_of(mdev, struct mt792x_dev, mt76);
+ dev->fw_features = features;
+ dev->hif_ops = &hif_ops;
+
+ udev = usb_get_dev(udev);
+ usb_reset_device(udev);
+
+ usb_set_intfdata(usb_intf, dev);
+
+ ret = __mt76u_init(mdev, usb_intf, &bus_ops);
+ if (ret < 0)
+ goto error;
+
+ mdev->rev = (mt76_rr(dev, MT_HW_CHIPID) << 16) |
+ (mt76_rr(dev, MT_HW_REV) & 0xff);
+ dev_dbg(mdev->dev, "ASIC revision: %04x\n", mdev->rev);
+
+ if (mt76_get_field(dev, MT_CONN_ON_MISC, MT_TOP_MISC2_FW_N9_RDY)) {
+ ret = mt792xu_wfsys_reset(dev);
+ if (ret)
+ goto error;
+ }
+
+ ret = mt792xu_mcu_power_on(dev);
+ if (ret)
+ goto error;
+
+ ret = mt76u_alloc_mcu_queue(&dev->mt76);
+ if (ret)
+ goto error;
+
+ ret = mt76u_alloc_queues(&dev->mt76);
+ if (ret)
+ goto error;
+
+ ret = mt792xu_dma_init(dev, false);
+ if (ret)
+ goto error;
+
+ hw = mt76_hw(dev);
+ /* check hw sg support in order to enable AMSDU */
+ hw->max_tx_fragments = mdev->usb.sg_en ? MT_HW_TXP_MAX_BUF_NUM : 1;
+
+ ret = mt7925_register_device(dev);
+ if (ret)
+ goto error;
+
+ return 0;
+
+error:
+ mt76u_queues_deinit(&dev->mt76);
+
+ usb_set_intfdata(usb_intf, NULL);
+ usb_put_dev(interface_to_usbdev(usb_intf));
+
+ mt76_free_device(&dev->mt76);
+
+ return ret;
+}
+
+#ifdef CONFIG_PM
+static int mt7925u_suspend(struct usb_interface *intf, pm_message_t state)
+{
+ struct mt792x_dev *dev = usb_get_intfdata(intf);
+ struct mt76_connac_pm *pm = &dev->pm;
+ int err;
+
+ pm->suspended = true;
+ flush_work(&dev->reset_work);
+
+ err = mt76_connac_mcu_set_hif_suspend(&dev->mt76, true);
+ if (err)
+ goto failed;
+
+ mt76u_stop_rx(&dev->mt76);
+ mt76u_stop_tx(&dev->mt76);
+
+ return 0;
+
+failed:
+ pm->suspended = false;
+
+ if (err < 0)
+ mt792x_reset(&dev->mt76);
+
+ return err;
+}
+
+static int mt7925u_resume(struct usb_interface *intf)
+{
+ struct mt792x_dev *dev = usb_get_intfdata(intf);
+ struct mt76_connac_pm *pm = &dev->pm;
+ bool reinit = true;
+ int err, i;
+
+ for (i = 0; i < 10; i++) {
+ u32 val = mt76_rr(dev, MT_WF_SW_DEF_CR_USB_MCU_EVENT);
+
+ if (!(val & MT_WF_SW_SER_TRIGGER_SUSPEND)) {
+ reinit = false;
+ break;
+ }
+ if (val & MT_WF_SW_SER_DONE_SUSPEND) {
+ mt76_wr(dev, MT_WF_SW_DEF_CR_USB_MCU_EVENT, 0);
+ break;
+ }
+
+ msleep(20);
+ }
+
+ if (reinit || mt792x_dma_need_reinit(dev)) {
+ err = mt792xu_dma_init(dev, true);
+ if (err)
+ goto failed;
+ }
+
+ err = mt76u_resume_rx(&dev->mt76);
+ if (err < 0)
+ goto failed;
+
+ err = mt76_connac_mcu_set_hif_suspend(&dev->mt76, false);
+failed:
+ pm->suspended = false;
+
+ if (err < 0)
+ mt792x_reset(&dev->mt76);
+
+ return err;
+}
+#endif /* CONFIG_PM */
+
+MODULE_DEVICE_TABLE(usb, mt7925u_device_table);
+MODULE_FIRMWARE(MT7925_FIRMWARE_WM);
+MODULE_FIRMWARE(MT7925_ROM_PATCH);
+
+static struct usb_driver mt7925u_driver = {
+ .name = KBUILD_MODNAME,
+ .id_table = mt7925u_device_table,
+ .probe = mt7925u_probe,
+ .disconnect = mt792xu_disconnect,
+#ifdef CONFIG_PM
+ .suspend = mt7925u_suspend,
+ .resume = mt7925u_resume,
+ .reset_resume = mt7925u_resume,
+#endif /* CONFIG_PM */
+ .soft_unbind = 1,
+ .disable_hub_initiated_lpm = 1,
+};
+module_usb_driver(mt7925u_driver);
+
+MODULE_AUTHOR("Lorenzo Bianconi <lorenzo@kernel.org>");
+MODULE_LICENSE("Dual BSD/GPL");
diff --git a/drivers/net/wireless/mediatek/mt76/mt792x.h b/drivers/net/wireless/mediatek/mt76/mt792x.h
index 5d5ab8630041..36fae736dd19 100644
--- a/drivers/net/wireless/mediatek/mt76/mt792x.h
+++ b/drivers/net/wireless/mediatek/mt76/mt792x.h
@@ -25,6 +25,8 @@
#define MT792x_FW_TAG_FEATURE 4
#define MT792x_FW_CAP_CNM BIT(7)
+#define MT792x_CHIP_CAP_CLC_EVT_EN BIT(0)
+
/* NOTE: used to map mt76_rates. idx may change if firmware expands table */
#define MT792x_BASIC_RATES_TBL 11
@@ -36,9 +38,14 @@
#define MT7921_FIRMWARE_WM "mediatek/WIFI_RAM_CODE_MT7961_1.bin"
#define MT7922_FIRMWARE_WM "mediatek/WIFI_RAM_CODE_MT7922_1.bin"
+#define MT7925_FIRMWARE_WM "mediatek/mt7925/WIFI_RAM_CODE_MT7925_1_1.bin"
#define MT7921_ROM_PATCH "mediatek/WIFI_MT7961_patch_mcu_1_2_hdr.bin"
#define MT7922_ROM_PATCH "mediatek/WIFI_MT7922_patch_mcu_1_1_hdr.bin"
+#define MT7925_ROM_PATCH "mediatek/mt7925/WIFI_MT7925_PATCH_MCU_1_1_hdr.bin"
+
+#define MT792x_SDIO_HDR_TX_BYTES GENMASK(15, 0)
+#define MT792x_SDIO_HDR_PKT_TYPE GENMASK(17, 16)
struct mt792x_vif;
struct mt792x_sta;
@@ -61,6 +68,14 @@ enum {
MT792x_CLC_MAX_NUM,
};
+enum mt792x_reg_power_type {
+ MT_AP_UNSET = 0,
+ MT_AP_DEFAULT,
+ MT_AP_LPI,
+ MT_AP_SP,
+ MT_AP_VLP,
+};
+
DECLARE_EWMA(avg_signal, 10, 8)
struct mt792x_sta {
@@ -91,7 +106,6 @@ struct mt792x_vif {
struct ewma_rssi rssi;
struct ieee80211_tx_queue_params queue_params[IEEE80211_NUM_ACS];
- struct ieee80211_chanctx_conf *ctx;
};
struct mt792x_phy {
@@ -113,6 +127,8 @@ struct mt792x_phy {
struct mt76_mib_stats mib;
u8 sta_work_count;
+ u8 clc_chan_conf;
+ enum mt792x_reg_power_type power_type;
struct sk_buff_head scan_event_list;
struct delayed_work scan_work;
@@ -120,6 +136,7 @@ struct mt792x_phy {
void *acpisar;
#endif
void *clc[MT792x_CLC_MAX_NUM];
+ u64 chip_cap;
struct work_struct roc_work;
struct timer_list roc_timer;
@@ -229,6 +246,7 @@ static inline bool mt792x_dma_need_reinit(struct mt792x_dev *dev)
#define mt792x_mutex_release(dev) \
mt76_connac_mutex_release(&(dev)->mt76, &(dev)->pm)
+void mt792x_stop(struct ieee80211_hw *hw);
void mt792x_pm_wake_work(struct work_struct *work);
void mt792x_pm_power_save_work(struct work_struct *work);
void mt792x_reset(struct mt76_dev *mdev);
@@ -308,6 +326,8 @@ static inline char *mt792x_ram_name(struct mt792x_dev *dev)
switch (mt76_chip(&dev->mt76)) {
case 0x7922:
return MT7922_FIRMWARE_WM;
+ case 0x7925:
+ return MT7925_FIRMWARE_WM;
default:
return MT7921_FIRMWARE_WM;
}
@@ -318,6 +338,8 @@ static inline char *mt792x_patch_name(struct mt792x_dev *dev)
switch (mt76_chip(&dev->mt76)) {
case 0x7922:
return MT7922_ROM_PATCH;
+ case 0x7925:
+ return MT7925_ROM_PATCH;
default:
return MT7921_ROM_PATCH;
}
@@ -337,6 +359,20 @@ void mt792xu_wr(struct mt76_dev *dev, u32 addr, u32 val);
u32 mt792xu_rmw(struct mt76_dev *dev, u32 addr, u32 mask, u32 val);
void mt792xu_copy(struct mt76_dev *dev, u32 offset, const void *data, int len);
void mt792xu_disconnect(struct usb_interface *usb_intf);
+void mt792xu_stop(struct ieee80211_hw *hw);
+
+static inline void
+mt792x_skb_add_usb_sdio_hdr(struct mt792x_dev *dev, struct sk_buff *skb,
+ int type)
+{
+ u32 hdr, len;
+
+ len = mt76_is_usb(&dev->mt76) ? skb->len : skb->len + sizeof(hdr);
+ hdr = FIELD_PREP(MT792x_SDIO_HDR_TX_BYTES, len) |
+ FIELD_PREP(MT792x_SDIO_HDR_PKT_TYPE, type);
+
+ put_unaligned_le32(hdr, skb_push(skb, sizeof(hdr)));
+}
int __mt792xe_mcu_drv_pmctrl(struct mt792x_dev *dev);
int mt792xe_mcu_drv_pmctrl(struct mt792x_dev *dev);
diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_core.c b/drivers/net/wireless/mediatek/mt76/mt792x_core.c
index 46be7f996c7e..502be22dbe36 100644
--- a/drivers/net/wireless/mediatek/mt76/mt792x_core.c
+++ b/drivers/net/wireless/mediatek/mt76/mt792x_core.c
@@ -91,6 +91,28 @@ void mt792x_tx(struct ieee80211_hw *hw, struct ieee80211_tx_control *control,
}
EXPORT_SYMBOL_GPL(mt792x_tx);
+void mt792x_stop(struct ieee80211_hw *hw)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+ struct mt792x_phy *phy = mt792x_hw_phy(hw);
+
+ cancel_delayed_work_sync(&phy->mt76->mac_work);
+
+ cancel_delayed_work_sync(&dev->pm.ps_work);
+ cancel_work_sync(&dev->pm.wake_work);
+ cancel_work_sync(&dev->reset_work);
+ mt76_connac_free_pending_tx_skbs(&dev->pm, NULL);
+
+ if (is_mt7921(&dev->mt76)) {
+ mt792x_mutex_acquire(dev);
+ mt76_connac_mcu_set_mac_enable(&dev->mt76, 0, false, false);
+ mt792x_mutex_release(dev);
+ }
+
+ clear_bit(MT76_STATE_RUNNING, &phy->mt76->state);
+}
+EXPORT_SYMBOL_GPL(mt792x_stop);
+
void mt792x_remove_interface(struct ieee80211_hw *hw,
struct ieee80211_vif *vif)
{
@@ -115,7 +137,7 @@ void mt792x_remove_interface(struct ieee80211_hw *hw,
list_del_init(&msta->wcid.poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
- mt76_packet_id_flush(&dev->mt76, &msta->wcid);
+ mt76_wcid_cleanup(&dev->mt76, &msta->wcid);
}
EXPORT_SYMBOL_GPL(mt792x_remove_interface);
@@ -243,7 +265,7 @@ int mt792x_assign_vif_chanctx(struct ieee80211_hw *hw,
struct mt792x_dev *dev = mt792x_hw_dev(hw);
mutex_lock(&dev->mt76.mutex);
- mvif->ctx = ctx;
+ mvif->mt76.ctx = ctx;
mutex_unlock(&dev->mt76.mutex);
return 0;
@@ -259,7 +281,7 @@ void mt792x_unassign_vif_chanctx(struct ieee80211_hw *hw,
struct mt792x_dev *dev = mt792x_hw_dev(hw);
mutex_lock(&dev->mt76.mutex);
- mvif->ctx = NULL;
+ mvif->mt76.ctx = NULL;
mutex_unlock(&dev->mt76.mutex);
}
EXPORT_SYMBOL_GPL(mt792x_unassign_vif_chanctx);
@@ -358,7 +380,7 @@ void mt792x_get_et_strings(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
if (sset != ETH_SS_STATS)
return;
- memcpy(data, *mt792x_gstrings_stats, sizeof(mt792x_gstrings_stats));
+ memcpy(data, mt792x_gstrings_stats, sizeof(mt792x_gstrings_stats));
data += sizeof(mt792x_gstrings_stats);
page_pool_ethtool_stats_get_strings(data);
diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_dma.c b/drivers/net/wireless/mediatek/mt76/mt792x_dma.c
index a3dbd3865b2f..488326ce5ed4 100644
--- a/drivers/net/wireless/mediatek/mt76/mt792x_dma.c
+++ b/drivers/net/wireless/mediatek/mt76/mt792x_dma.c
@@ -88,25 +88,44 @@ EXPORT_SYMBOL_GPL(mt792x_rx_poll_complete);
#define PREFETCH(base, depth) ((base) << 16 | (depth))
static void mt792x_dma_prefetch(struct mt792x_dev *dev)
{
- mt76_wr(dev, MT_WFDMA0_RX_RING0_EXT_CTRL, PREFETCH(0x0, 0x4));
- mt76_wr(dev, MT_WFDMA0_RX_RING2_EXT_CTRL, PREFETCH(0x40, 0x4));
- mt76_wr(dev, MT_WFDMA0_RX_RING3_EXT_CTRL, PREFETCH(0x80, 0x4));
- mt76_wr(dev, MT_WFDMA0_RX_RING4_EXT_CTRL, PREFETCH(0xc0, 0x4));
- mt76_wr(dev, MT_WFDMA0_RX_RING5_EXT_CTRL, PREFETCH(0x100, 0x4));
-
- mt76_wr(dev, MT_WFDMA0_TX_RING0_EXT_CTRL, PREFETCH(0x140, 0x4));
- mt76_wr(dev, MT_WFDMA0_TX_RING1_EXT_CTRL, PREFETCH(0x180, 0x4));
- mt76_wr(dev, MT_WFDMA0_TX_RING2_EXT_CTRL, PREFETCH(0x1c0, 0x4));
- mt76_wr(dev, MT_WFDMA0_TX_RING3_EXT_CTRL, PREFETCH(0x200, 0x4));
- mt76_wr(dev, MT_WFDMA0_TX_RING4_EXT_CTRL, PREFETCH(0x240, 0x4));
- mt76_wr(dev, MT_WFDMA0_TX_RING5_EXT_CTRL, PREFETCH(0x280, 0x4));
- mt76_wr(dev, MT_WFDMA0_TX_RING6_EXT_CTRL, PREFETCH(0x2c0, 0x4));
- mt76_wr(dev, MT_WFDMA0_TX_RING16_EXT_CTRL, PREFETCH(0x340, 0x4));
- mt76_wr(dev, MT_WFDMA0_TX_RING17_EXT_CTRL, PREFETCH(0x380, 0x4));
+ if (is_mt7925(&dev->mt76)) {
+ /* rx ring */
+ mt76_wr(dev, MT_WFDMA0_RX_RING0_EXT_CTRL, PREFETCH(0x0000, 0x4));
+ mt76_wr(dev, MT_WFDMA0_RX_RING1_EXT_CTRL, PREFETCH(0x0040, 0x4));
+ mt76_wr(dev, MT_WFDMA0_RX_RING2_EXT_CTRL, PREFETCH(0x0080, 0x4));
+ mt76_wr(dev, MT_WFDMA0_RX_RING3_EXT_CTRL, PREFETCH(0x00c0, 0x4));
+ /* tx ring */
+ mt76_wr(dev, MT_WFDMA0_TX_RING0_EXT_CTRL, PREFETCH(0x0100, 0x10));
+ mt76_wr(dev, MT_WFDMA0_TX_RING1_EXT_CTRL, PREFETCH(0x0200, 0x10));
+ mt76_wr(dev, MT_WFDMA0_TX_RING2_EXT_CTRL, PREFETCH(0x0300, 0x10));
+ mt76_wr(dev, MT_WFDMA0_TX_RING3_EXT_CTRL, PREFETCH(0x0400, 0x10));
+ mt76_wr(dev, MT_WFDMA0_TX_RING15_EXT_CTRL, PREFETCH(0x0500, 0x4));
+ mt76_wr(dev, MT_WFDMA0_TX_RING16_EXT_CTRL, PREFETCH(0x0540, 0x4));
+ } else {
+ /* rx ring */
+ mt76_wr(dev, MT_WFDMA0_RX_RING0_EXT_CTRL, PREFETCH(0x0, 0x4));
+ mt76_wr(dev, MT_WFDMA0_RX_RING2_EXT_CTRL, PREFETCH(0x40, 0x4));
+ mt76_wr(dev, MT_WFDMA0_RX_RING3_EXT_CTRL, PREFETCH(0x80, 0x4));
+ mt76_wr(dev, MT_WFDMA0_RX_RING4_EXT_CTRL, PREFETCH(0xc0, 0x4));
+ mt76_wr(dev, MT_WFDMA0_RX_RING5_EXT_CTRL, PREFETCH(0x100, 0x4));
+ /* tx ring */
+ mt76_wr(dev, MT_WFDMA0_TX_RING0_EXT_CTRL, PREFETCH(0x140, 0x4));
+ mt76_wr(dev, MT_WFDMA0_TX_RING1_EXT_CTRL, PREFETCH(0x180, 0x4));
+ mt76_wr(dev, MT_WFDMA0_TX_RING2_EXT_CTRL, PREFETCH(0x1c0, 0x4));
+ mt76_wr(dev, MT_WFDMA0_TX_RING3_EXT_CTRL, PREFETCH(0x200, 0x4));
+ mt76_wr(dev, MT_WFDMA0_TX_RING4_EXT_CTRL, PREFETCH(0x240, 0x4));
+ mt76_wr(dev, MT_WFDMA0_TX_RING5_EXT_CTRL, PREFETCH(0x280, 0x4));
+ mt76_wr(dev, MT_WFDMA0_TX_RING6_EXT_CTRL, PREFETCH(0x2c0, 0x4));
+ mt76_wr(dev, MT_WFDMA0_TX_RING16_EXT_CTRL, PREFETCH(0x340, 0x4));
+ mt76_wr(dev, MT_WFDMA0_TX_RING17_EXT_CTRL, PREFETCH(0x380, 0x4));
+ }
}
int mt792x_dma_enable(struct mt792x_dev *dev)
{
+ if (is_mt7925(&dev->mt76))
+ mt76_rmw(dev, MT_UWFDMA0_GLO_CFG_EXT1, BIT(28), BIT(28));
+
 /* configure prefetch settings */
mt792x_dma_prefetch(dev);
diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_usb.c b/drivers/net/wireless/mediatek/mt76/mt792x_usb.c
index 20e7f9c7c88c..2dd283caed36 100644
--- a/drivers/net/wireless/mediatek/mt76/mt792x_usb.c
+++ b/drivers/net/wireless/mediatek/mt76/mt792x_usb.c
@@ -287,6 +287,15 @@ int mt792xu_init_reset(struct mt792x_dev *dev)
}
EXPORT_SYMBOL_GPL(mt792xu_init_reset);
+void mt792xu_stop(struct ieee80211_hw *hw)
+{
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+
+ mt76u_stop_tx(&dev->mt76);
+ mt792x_stop(hw);
+}
+EXPORT_SYMBOL_GPL(mt792xu_stop);
+
void mt792xu_disconnect(struct usb_interface *usb_intf)
{
struct mt792x_dev *dev = usb_get_intfdata(usb_intf);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/init.c b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
index 26e03b28935f..55cb1770fa34 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
@@ -54,23 +54,31 @@ static void mt7996_led_set_config(struct led_classdev *led_cdev,
dev = container_of(mphy->dev, struct mt7996_dev, mt76);
/* select TX blink mode, 2: only data frames */
- mt76_rmw_field(dev, MT_TMAC_TCR0(0), MT_TMAC_TCR0_TX_BLINK, 2);
+ mt76_rmw_field(dev, MT_TMAC_TCR0(mphy->band_idx), MT_TMAC_TCR0_TX_BLINK, 2);
/* enable LED */
- mt76_wr(dev, MT_LED_EN(0), 1);
+ mt76_wr(dev, MT_LED_EN(mphy->band_idx), 1);
/* set LED Tx blink on/off time */
val = FIELD_PREP(MT_LED_TX_BLINK_ON_MASK, delay_on) |
FIELD_PREP(MT_LED_TX_BLINK_OFF_MASK, delay_off);
- mt76_wr(dev, MT_LED_TX_BLINK(0), val);
+ mt76_wr(dev, MT_LED_TX_BLINK(mphy->band_idx), val);
+
+ /* turn LED off */
+ if (delay_off == 0xff && delay_on == 0x0) {
+ val = MT_LED_CTRL_POLARITY | MT_LED_CTRL_KICK;
+ } else {
+ /* control LED */
+ val = MT_LED_CTRL_BLINK_MODE | MT_LED_CTRL_KICK;
+ if (mphy->band_idx == MT_BAND1)
+ val |= MT_LED_CTRL_BLINK_BAND_SEL;
+ }
- /* control LED */
- val = MT_LED_CTRL_BLINK_MODE | MT_LED_CTRL_KICK;
if (mphy->leds.al)
val |= MT_LED_CTRL_POLARITY;
- mt76_wr(dev, MT_LED_CTRL(0), val);
- mt76_clear(dev, MT_LED_CTRL(0), MT_LED_CTRL_KICK);
+ mt76_wr(dev, MT_LED_CTRL(mphy->band_idx), val);
+ mt76_clear(dev, MT_LED_CTRL(mphy->band_idx), MT_LED_CTRL_KICK);
}
static int mt7996_led_set_blink(struct led_classdev *led_cdev,
@@ -173,6 +181,7 @@ mt7996_init_wiphy(struct ieee80211_hw *hw)
wiphy->n_iface_combinations = ARRAY_SIZE(if_comb);
wiphy->reg_notifier = mt7996_regd_notifier;
wiphy->flags |= WIPHY_FLAG_HAS_CHANNEL_SWITCH;
+ wiphy->mbssid_max_interfaces = 16;
wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_BSS_COLOR);
wiphy_ext_feature_set(wiphy, NL80211_EXT_FEATURE_VHT_IBSS);
@@ -196,6 +205,7 @@ mt7996_init_wiphy(struct ieee80211_hw *hw)
ieee80211_hw_set(hw, SUPPORTS_TX_ENCAP_OFFLOAD);
ieee80211_hw_set(hw, SUPPORTS_RX_DECAP_OFFLOAD);
ieee80211_hw_set(hw, WANT_MONITOR_VIF);
+ ieee80211_hw_set(hw, SUPPORTS_MULTI_BSSID);
hw->max_tx_fragments = 4;
@@ -223,6 +233,12 @@ mt7996_init_wiphy(struct ieee80211_hw *hw)
ieee80211_hw_set(hw, SUPPORTS_VHT_EXT_NSS_BW);
}
+ /* init led callbacks */
+ if (IS_ENABLED(CONFIG_MT76_LEDS)) {
+ phy->mt76->leds.cdev.brightness_set = mt7996_led_set_brightness;
+ phy->mt76->leds.cdev.blink_set = mt7996_led_set_blink;
+ }
+
mt76_set_stream_caps(phy->mt76, true);
mt7996_set_stream_vht_txbf_caps(phy);
mt7996_set_stream_he_eht_caps(phy);
@@ -258,6 +274,11 @@ mt7996_mac_init_band(struct mt7996_dev *dev, u8 band)
set = FIELD_PREP(MT_WTBLOFF_RSCR_RCPI_MODE, 0) |
FIELD_PREP(MT_WTBLOFF_RSCR_RCPI_PARAM, 0x3);
mt76_rmw(dev, MT_WTBLOFF_RSCR(band), mask, set);
+
+ /* MT_TXD5_TX_STATUS_HOST (MPDU format) has higher priority than
+ * MT_AGG_ACR_PPDU_TXS2H (PPDU format) even though ACR bit is set.
+ */
+ mt76_set(dev, MT_AGG_ACR4(band), MT_AGG_ACR_PPDU_TXS2H);
}
static void mt7996_mac_init_basic_rates(struct mt7996_dev *dev)
@@ -733,16 +754,17 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMER |
IEEE80211_EHT_PHY_CAP0_SU_BEAMFORMEE;
+ val = max_t(u8, sts - 1, 3);
eht_cap_elem->phy_cap_info[0] |=
- u8_encode_bits(u8_get_bits(sts - 1, BIT(0)),
+ u8_encode_bits(u8_get_bits(val, BIT(0)),
IEEE80211_EHT_PHY_CAP0_BEAMFORMEE_SS_80MHZ_MASK);
eht_cap_elem->phy_cap_info[1] =
- u8_encode_bits(u8_get_bits(sts - 1, GENMASK(2, 1)),
+ u8_encode_bits(u8_get_bits(val, GENMASK(2, 1)),
IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_80MHZ_MASK) |
- u8_encode_bits(sts - 1,
+ u8_encode_bits(val,
IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_160MHZ_MASK) |
- u8_encode_bits(sts - 1,
+ u8_encode_bits(val,
IEEE80211_EHT_PHY_CAP1_BEAMFORMEE_SS_320MHZ_MASK);
eht_cap_elem->phy_cap_info[2] =
@@ -827,8 +849,7 @@ __mt7996_set_stream_he_eht_caps(struct mt7996_phy *phy,
n++;
}
- sband->iftype_data = data;
- sband->n_iftype_data = n;
+ _ieee80211_set_sband_iftype_data(sband, data, n);
}
void mt7996_set_stream_he_eht_caps(struct mt7996_phy *phy)
@@ -870,12 +891,6 @@ int mt7996_register_device(struct mt7996_dev *dev)
mt7996_init_wiphy(hw);
- /* init led callbacks */
- if (IS_ENABLED(CONFIG_MT76_LEDS)) {
- dev->mphy.leds.cdev.brightness_set = mt7996_led_set_brightness;
- dev->mphy.leds.cdev.blink_set = mt7996_led_set_blink;
- }
-
ret = mt76_register_device(&dev->mt76, true, mt76_rates,
ARRAY_SIZE(mt76_rates));
if (ret)
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
index ac8759febe48..04540833485f 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
@@ -433,7 +433,9 @@ mt7996_mac_fill_rx_rate(struct mt7996_dev *dev,
case IEEE80211_STA_RX_BW_160:
status->bw = RATE_INFO_BW_160;
break;
+ /* rxv reports bw 320-1 and 320-2 separately */
case IEEE80211_STA_RX_BW_320:
+ case IEEE80211_STA_RX_BW_320 + 1:
status->bw = RATE_INFO_BW_320;
break;
default:
@@ -948,15 +950,6 @@ int mt7996_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
if (!wcid)
wcid = &dev->mt76.global_wcid;
- if (sta) {
- struct mt7996_sta *msta = (struct mt7996_sta *)sta->drv_priv;
-
- if (time_after(jiffies, msta->jiffies + HZ / 4)) {
- info->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
- msta->jiffies = jiffies;
- }
- }
-
t = (struct mt76_txwi_cache *)(txwi + mdev->drv->txwi_size);
t->skb = tx_info->skb;
@@ -991,11 +984,9 @@ int mt7996_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
}
txp->fw.token = cpu_to_le16(id);
- if (test_bit(MT_WCID_FLAG_4ADDR, &wcid->flags))
- txp->fw.rept_wds_wcid = cpu_to_le16(wcid->idx);
- else
- txp->fw.rept_wds_wcid = cpu_to_le16(0xfff);
- tx_info->skb = DMA_DUMMY_DATA;
+ txp->fw.rept_wds_wcid = cpu_to_le16(sta ? wcid->idx : 0xfff);
+
+ tx_info->skb = NULL;
/* pass partial skb header to fw */
tx_info->buf[1].len = MT_CT_PARSE_LEN;
@@ -1006,22 +997,35 @@ int mt7996_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
}
static void
-mt7996_tx_check_aggr(struct ieee80211_sta *sta, __le32 *txwi)
+mt7996_tx_check_aggr(struct ieee80211_sta *sta, struct sk_buff *skb)
{
struct mt7996_sta *msta;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ bool is_8023 = info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP;
u16 fc, tid;
- u32 val;
if (!sta || !(sta->deflink.ht_cap.ht_supported || sta->deflink.he_cap.has_he))
return;
- tid = le32_get_bits(txwi[1], MT_TXD1_TID);
+ tid = skb->priority & IEEE80211_QOS_CTL_TID_MASK;
if (tid >= 6) /* skip VO queue */
return;
- val = le32_to_cpu(txwi[2]);
- fc = FIELD_GET(MT_TXD2_FRAME_TYPE, val) << 2 |
- FIELD_GET(MT_TXD2_SUB_TYPE, val) << 4;
+ if (is_8023) {
+ fc = IEEE80211_FTYPE_DATA |
+ (sta->wme ? IEEE80211_STYPE_QOS_DATA : IEEE80211_STYPE_DATA);
+ } else {
+ /* No need to get the precise TID for Action/Management frames,
+ * since they will not meet the Frame Control condition
+ * below anyway.
+ */
+
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+
+ fc = le16_to_cpu(hdr->frame_control) &
+ (IEEE80211_FCTL_FTYPE | IEEE80211_FCTL_STYPE);
+ }
+
if (unlikely(fc != (IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_DATA)))
return;
@@ -1049,9 +1053,9 @@ mt7996_txwi_free(struct mt7996_dev *dev, struct mt76_txwi_cache *t,
wcid_idx = wcid->idx;
if (likely(t->skb->protocol != cpu_to_be16(ETH_P_PAE)))
- mt7996_tx_check_aggr(sta, txwi);
+ mt7996_tx_check_aggr(sta, t->skb);
} else {
- wcid_idx = le32_get_bits(txwi[1], MT_TXD1_WLAN_IDX);
+ wcid_idx = le32_get_bits(txwi[9], MT_TXD9_WLAN_IDX);
}
__mt76_tx_complete_skb(mdev, wcid_idx, t->skb, free_list);
@@ -1070,6 +1074,7 @@ mt7996_mac_tx_free(struct mt7996_dev *dev, void *data, int len)
struct mt76_phy *phy3 = mdev->phys[MT_BAND2];
struct mt76_txwi_cache *txwi;
struct ieee80211_sta *sta = NULL;
+ struct mt76_wcid *wcid;
LIST_HEAD(free_list);
struct sk_buff *skb, *tmp;
void *end = data + len;
@@ -1088,7 +1093,7 @@ mt7996_mac_tx_free(struct mt7996_dev *dev, void *data, int len)
mt76_queue_tx_cleanup(dev, phy3->q_tx[MT_TXQ_BE], false);
}
- if (WARN_ON_ONCE(le32_get_bits(tx_free[1], MT_TXFREE1_VER) < 4))
+ if (WARN_ON_ONCE(le32_get_bits(tx_free[1], MT_TXFREE1_VER) < 5))
return;
total = le32_get_bits(tx_free[0], MT_TXFREE0_MSDU_CNT);
@@ -1104,7 +1109,6 @@ mt7996_mac_tx_free(struct mt7996_dev *dev, void *data, int len)
info = le32_to_cpu(*cur_info);
if (info & MT_TXFREE_INFO_PAIR) {
struct mt7996_sta *msta;
- struct mt76_wcid *wcid;
u16 idx;
idx = FIELD_GET(MT_TXFREE_INFO_WLAN_ID, info);
@@ -1120,10 +1124,21 @@ mt7996_mac_tx_free(struct mt7996_dev *dev, void *data, int len)
&mdev->sta_poll_list);
spin_unlock_bh(&mdev->sta_poll_lock);
continue;
- }
+ } else if (info & MT_TXFREE_INFO_HEADER) {
+ u32 tx_retries = 0, tx_failed = 0;
+
+ if (!wcid)
+ continue;
- if (info & MT_TXFREE_INFO_HEADER)
+ tx_retries =
+ FIELD_GET(MT_TXFREE_INFO_COUNT, info) - 1;
+ tx_failed = tx_retries +
+ !!FIELD_GET(MT_TXFREE_INFO_STAT, info);
+
+ wcid->stats.tx_retries += tx_retries;
+ wcid->stats.tx_failed += tx_failed;
continue;
+ }
for (i = 0; i < 2; i++) {
msdu = (info >> (15 * i)) & MT_TXFREE_INFO_MSDU_ID;
@@ -1167,22 +1182,31 @@ mt7996_mac_add_txs_skb(struct mt7996_dev *dev, struct mt76_wcid *wcid,
bool cck = false;
u32 txrate, txs, mode, stbc;
+ txs = le32_to_cpu(txs_data[0]);
+
mt76_tx_status_lock(mdev, &list);
skb = mt76_tx_status_skb_get(mdev, wcid, pid, &list);
- if (!skb)
- goto out_no_skb;
- txs = le32_to_cpu(txs_data[0]);
+ if (skb) {
+ info = IEEE80211_SKB_CB(skb);
+ if (!(txs & MT_TXS0_ACK_ERROR_MASK))
+ info->flags |= IEEE80211_TX_STAT_ACK;
+
+ info->status.ampdu_len = 1;
+ info->status.ampdu_ack_len =
+ !!(info->flags & IEEE80211_TX_STAT_ACK);
- info = IEEE80211_SKB_CB(skb);
- if (!(txs & MT_TXS0_ACK_ERROR_MASK))
- info->flags |= IEEE80211_TX_STAT_ACK;
+ info->status.rates[0].idx = -1;
+ }
- info->status.ampdu_len = 1;
- info->status.ampdu_ack_len = !!(info->flags &
- IEEE80211_TX_STAT_ACK);
+ if (mtk_wed_device_active(&dev->mt76.mmio.wed) && wcid->sta) {
+ struct ieee80211_sta *sta;
+ u8 tid;
- info->status.rates[0].idx = -1;
+ sta = container_of((void *)wcid, struct ieee80211_sta, drv_priv);
+ tid = FIELD_GET(MT_TXS0_TID, txs);
+ ieee80211_refresh_tx_agg_session_timer(sta, tid);
+ }
txrate = FIELD_GET(MT_TXS0_TX_RATE, txs);
@@ -1282,9 +1306,8 @@ mt7996_mac_add_txs_skb(struct mt7996_dev *dev, struct mt76_wcid *wcid,
wcid->rate = rate;
out:
- mt76_tx_status_skb_done(mdev, skb, &list);
-
-out_no_skb:
+ if (skb)
+ mt76_tx_status_skb_done(mdev, skb, &list);
mt76_tx_status_unlock(mdev, &list);
return !!skb;
@@ -1298,13 +1321,10 @@ static void mt7996_mac_add_txs(struct mt7996_dev *dev, void *data)
u16 wcidx;
u8 pid;
- if (le32_get_bits(txs_data[0], MT_TXS0_TXS_FORMAT) > 1)
- return;
-
wcidx = le32_get_bits(txs_data[2], MT_TXS2_WCID);
pid = le32_get_bits(txs_data[3], MT_TXS3_PID);
- if (pid < MT_PACKET_ID_FIRST)
+ if (pid < MT_PACKET_ID_NO_SKB)
return;
if (wcidx >= mt7996_wtbl_size(dev))
@@ -2191,6 +2211,11 @@ void mt7996_mac_work(struct work_struct *work)
mphy->mac_work_count = 0;
mt7996_mac_update_stats(phy);
+
+ if (mtk_wed_device_active(&phy->dev->mt76.mmio.wed)) {
+ mt7996_mcu_get_all_sta_info(phy, UNI_ALL_STA_TXRX_ADM_STAT);
+ mt7996_mcu_get_all_sta_info(phy, UNI_ALL_STA_TXRX_MSDU_COUNT);
+ }
}
mutex_unlock(&mphy->dev->mutex);
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
index c3a479dc3f53..09c7a28a3d51 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
@@ -190,7 +190,7 @@ static int mt7996_add_interface(struct ieee80211_hw *hw,
mvif->mt76.omac_idx = idx;
mvif->phy = phy;
mvif->mt76.band_idx = band_idx;
- mvif->mt76.wmm_idx = band_idx;
+ mvif->mt76.wmm_idx = vif->type != NL80211_IFTYPE_AP;
ret = mt7996_mcu_add_dev_info(phy, vif, true);
if (ret)
@@ -207,7 +207,7 @@ static int mt7996_add_interface(struct ieee80211_hw *hw,
mvif->sta.wcid.phy_idx = band_idx;
mvif->sta.wcid.hw_key_idx = -1;
mvif->sta.wcid.tx_info |= MT_WCID_TX_INFO_SET;
- mt76_packet_id_init(&mvif->sta.wcid);
+ mt76_wcid_init(&mvif->sta.wcid);
mt7996_mac_wtbl_update(dev, idx,
MT_WTBL_UPDATE_ADM_COUNT_CLEAR);
@@ -248,8 +248,8 @@ static void mt7996_remove_interface(struct ieee80211_hw *hw,
struct mt7996_phy *phy = mt7996_hw_phy(hw);
int idx = msta->wcid.idx;
- mt7996_mcu_add_bss_info(phy, vif, false);
mt7996_mcu_add_sta(dev, vif, NULL, false);
+ mt7996_mcu_add_bss_info(phy, vif, false);
if (vif == phy->monitor_vif)
phy->monitor_vif = NULL;
@@ -268,7 +268,7 @@ static void mt7996_remove_interface(struct ieee80211_hw *hw,
list_del_init(&msta->wcid.poll_list);
spin_unlock_bh(&dev->mt76.sta_poll_lock);
- mt76_packet_id_flush(&dev->mt76, &msta->wcid);
+ mt76_wcid_cleanup(&dev->mt76, &msta->wcid);
}
int mt7996_set_channel(struct mt7996_phy *phy)
@@ -414,10 +414,16 @@ mt7996_conf_tx(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
const struct ieee80211_tx_queue_params *params)
{
struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
+ const u8 mq_to_aci[] = {
+ [IEEE80211_AC_VO] = 3,
+ [IEEE80211_AC_VI] = 2,
+ [IEEE80211_AC_BE] = 0,
+ [IEEE80211_AC_BK] = 1,
+ };
+ /* firmware uses access class index */
+ mvif->queue_params[mq_to_aci[queue]] = *params;
/* no need to update right away, we'll get BSS_CHANGED_QOS */
- queue = mt76_connac_lmac_mapping(queue);
- mvif->queue_params[queue] = *params;
return 0;
}
@@ -564,17 +570,13 @@ static void mt7996_bss_info_changed(struct ieee80211_hw *hw,
/* station mode uses BSSID to map the wlan entry to a peer,
* and then peer references bss_info_rfch to set bandwidth cap.
*/
- if (changed & BSS_CHANGED_BSSID &&
- vif->type == NL80211_IFTYPE_STATION) {
- bool join = !is_zero_ether_addr(info->bssid);
-
- mt7996_mcu_add_bss_info(phy, vif, join);
- mt7996_mcu_add_sta(dev, vif, NULL, join);
+ if ((changed & BSS_CHANGED_BSSID && !is_zero_ether_addr(info->bssid)) ||
+ (changed & BSS_CHANGED_ASSOC && vif->cfg.assoc) ||
+ (changed & BSS_CHANGED_BEACON_ENABLED && info->enable_beacon)) {
+ mt7996_mcu_add_bss_info(phy, vif, true);
+ mt7996_mcu_add_sta(dev, vif, NULL, true);
}
- if (changed & BSS_CHANGED_ASSOC)
- mt7996_mcu_add_bss_info(phy, vif, vif->cfg.assoc);
-
if (changed & BSS_CHANGED_ERP_CTS_PROT)
mt7996_mac_enable_rtscts(dev, vif, info->use_cts_prot);
@@ -595,11 +597,6 @@ static void mt7996_bss_info_changed(struct ieee80211_hw *hw,
mvif->basic_rates_idx =
mt7996_get_rates_table(hw, vif, false, false);
- if (changed & BSS_CHANGED_BEACON_ENABLED && info->enable_beacon) {
- mt7996_mcu_add_bss_info(phy, vif, true);
- mt7996_mcu_add_sta(dev, vif, NULL, true);
- }
-
/* ensure that enable txcmd_mode after bss_info */
if (changed & (BSS_CHANGED_QOS | BSS_CHANGED_BEACON_ENABLED))
mt7996_mcu_set_tx(dev, vif);
@@ -618,8 +615,8 @@ static void mt7996_bss_info_changed(struct ieee80211_hw *hw,
mt7996_mcu_add_beacon(hw, vif, info->enable_beacon);
}
- if (changed & BSS_CHANGED_UNSOL_BCAST_PROBE_RESP ||
- changed & BSS_CHANGED_FILS_DISCOVERY)
+ if (changed & (BSS_CHANGED_UNSOL_BCAST_PROBE_RESP |
+ BSS_CHANGED_FILS_DISCOVERY))
mt7996_mcu_beacon_inband_discov(dev, vif, changed);
if (changed & BSS_CHANGED_MU_GROUPS)
@@ -660,7 +657,6 @@ int mt7996_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
msta->wcid.idx = idx;
msta->wcid.phy_idx = band_idx;
msta->wcid.tx_info |= MT_WCID_TX_INFO_SET;
- msta->jiffies = jiffies;
ewma_avg_signal_init(&msta->avg_ack_signal);
@@ -972,6 +968,7 @@ static void mt7996_sta_statistics(struct ieee80211_hw *hw,
struct ieee80211_sta *sta,
struct station_info *sinfo)
{
+ struct mt7996_phy *phy = mt7996_hw_phy(hw);
struct mt7996_sta *msta = (struct mt7996_sta *)sta->drv_priv;
struct rate_info *txrate = &msta->wcid.rate;
@@ -992,11 +989,31 @@ static void mt7996_sta_statistics(struct ieee80211_hw *hw,
sinfo->txrate.flags = txrate->flags;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
+ sinfo->tx_failed = msta->wcid.stats.tx_failed;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_FAILED);
+
+ sinfo->tx_retries = msta->wcid.stats.tx_retries;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_RETRIES);
+
sinfo->ack_signal = (s8)msta->ack_signal;
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_ACK_SIGNAL);
sinfo->avg_ack_signal = -(s8)ewma_avg_signal_read(&msta->avg_ack_signal);
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_ACK_SIGNAL_AVG);
+
+ if (mtk_wed_device_active(&phy->dev->mt76.mmio.wed)) {
+ sinfo->tx_bytes = msta->wcid.stats.tx_bytes;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BYTES64);
+
+ sinfo->rx_bytes = msta->wcid.stats.rx_bytes;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_BYTES64);
+
+ sinfo->tx_packets = msta->wcid.stats.tx_packets;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_PACKETS);
+
+ sinfo->rx_packets = msta->wcid.stats.rx_packets;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_RX_PACKETS);
+ }
}
static void mt7996_sta_rc_work(void *data, struct ieee80211_sta *sta)
@@ -1192,7 +1209,7 @@ void mt7996_get_et_strings(struct ieee80211_hw *hw,
u32 sset, u8 *data)
{
if (sset == ETH_SS_STATS)
- memcpy(data, *mt7996_gstrings_stats,
+ memcpy(data, mt7996_gstrings_stats,
sizeof(mt7996_gstrings_stats));
}
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
index 4a30db49ef33..bf917beb9439 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
@@ -324,8 +324,10 @@ int mt7996_mcu_wa_cmd(struct mt7996_dev *dev, int cmd, u32 a1, u32 a2, u32 a3)
static void
mt7996_mcu_csa_finish(void *priv, u8 *mac, struct ieee80211_vif *vif)
{
- if (vif->bss_conf.csa_active)
- ieee80211_csa_finish(vif);
+ if (!vif->bss_conf.csa_active || vif->type == NL80211_IFTYPE_STATION)
+ return;
+
+ ieee80211_csa_finish(vif);
}
static void
@@ -399,7 +401,7 @@ out:
static void
mt7996_mcu_cca_finish(void *priv, u8 *mac, struct ieee80211_vif *vif)
{
- if (!vif->bss_conf.color_change_active)
+ if (!vif->bss_conf.color_change_active || vif->type == NL80211_IFTYPE_STATION)
return;
ieee80211_color_change_finish(vif);
@@ -448,6 +450,54 @@ mt7996_mcu_ie_countdown(struct mt7996_dev *dev, struct sk_buff *skb)
}
static void
+mt7996_mcu_rx_all_sta_info_event(struct mt7996_dev *dev, struct sk_buff *skb)
+{
+ struct mt7996_mcu_all_sta_info_event *res;
+ u16 i;
+
+ skb_pull(skb, sizeof(struct mt7996_mcu_rxd));
+
+ res = (struct mt7996_mcu_all_sta_info_event *)skb->data;
+
+ for (i = 0; i < le16_to_cpu(res->sta_num); i++) {
+ u8 ac;
+ u16 wlan_idx;
+ struct mt76_wcid *wcid;
+
+ switch (le16_to_cpu(res->tag)) {
+ case UNI_ALL_STA_TXRX_ADM_STAT:
+ wlan_idx = le16_to_cpu(res->adm_stat[i].wlan_idx);
+ wcid = rcu_dereference(dev->mt76.wcid[wlan_idx]);
+
+ if (!wcid)
+ break;
+
+ for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
+ wcid->stats.tx_bytes +=
+ le32_to_cpu(res->adm_stat[i].tx_bytes[ac]);
+ wcid->stats.rx_bytes +=
+ le32_to_cpu(res->adm_stat[i].rx_bytes[ac]);
+ }
+ break;
+ case UNI_ALL_STA_TXRX_MSDU_COUNT:
+ wlan_idx = le16_to_cpu(res->msdu_cnt[i].wlan_idx);
+ wcid = rcu_dereference(dev->mt76.wcid[wlan_idx]);
+
+ if (!wcid)
+ break;
+
+ wcid->stats.tx_packets +=
+ le32_to_cpu(res->msdu_cnt[i].tx_msdu_cnt);
+ wcid->stats.rx_packets +=
+ le32_to_cpu(res->msdu_cnt[i].rx_msdu_cnt);
+ break;
+ default:
+ break;
+ }
+ }
+}
+
+static void
mt7996_mcu_rx_ext_event(struct mt7996_dev *dev, struct sk_buff *skb)
{
struct mt7996_mcu_rxd *rxd = (struct mt7996_mcu_rxd *)skb->data;
@@ -491,6 +541,9 @@ mt7996_mcu_uni_rx_unsolicited_event(struct mt7996_dev *dev, struct sk_buff *skb)
case MCU_UNI_EVENT_RDD_REPORT:
mt7996_mcu_rx_radar_detected(dev, skb);
break;
+ case MCU_UNI_EVENT_ALL_STA_INFO:
+ mt7996_mcu_rx_all_sta_info_event(dev, skb);
+ break;
default:
break;
}
@@ -601,6 +654,24 @@ mt7996_mcu_bss_he_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
}
static void
+mt7996_mcu_bss_mbssid_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
+ struct mt7996_phy *phy, int enable)
+{
+ struct bss_info_uni_mbssid *mbssid;
+ struct tlv *tlv;
+
+ tlv = mt7996_mcu_add_uni_tlv(skb, UNI_BSS_INFO_11V_MBSSID, sizeof(*mbssid));
+
+ mbssid = (struct bss_info_uni_mbssid *)tlv;
+
+ if (enable && vif->bss_conf.bssid_indicator) {
+ mbssid->max_indicator = vif->bss_conf.bssid_indicator;
+ mbssid->mbss_idx = vif->bss_conf.bssid_index;
+ mbssid->tx_bss_omac_idx = 0;
+ }
+}
+
+static void
mt7996_mcu_bss_bmc_tlv(struct sk_buff *skb, struct ieee80211_vif *vif,
struct mt7996_phy *phy)
{
@@ -866,6 +937,9 @@ int mt7996_mcu_add_bss_info(struct mt7996_phy *phy,
/* this tag is necessary no matter if the vif is MLD */
mt7996_mcu_bss_mld_tlv(skb, vif);
}
+
+ mt7996_mcu_bss_mbssid_tlv(skb, vif, phy, enable);
+
out:
return mt76_mcu_skb_send_msg(&dev->mt76, skb,
MCU_WMWA_UNI_CMD(BSS_INFO_UPDATE), true);
@@ -1152,6 +1226,8 @@ mt7996_mcu_sta_muru_tlv(struct mt7996_dev *dev, struct sk_buff *skb,
HE_MAC(CAP2_MU_CASCADING, elem->mac_cap_info[2]);
muru->ofdma_ul.uo_ra =
HE_MAC(CAP3_OFDMA_RA, elem->mac_cap_info[3]);
+ muru->ofdma_ul.rx_ctrl_frame_to_mbss =
+ HE_MAC(CAP3_RX_CTRL_FRAME_TO_MULTIBSS, elem->mac_cap_info[3]);
}
static inline bool
@@ -1624,6 +1700,132 @@ int mt7996_mcu_set_fixed_rate_ctrl(struct mt7996_dev *dev,
MCU_WM_UNI_CMD(RA), true);
}
+static int
+mt7996_mcu_set_fixed_field(struct mt7996_dev *dev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, void *data, u32 field)
+{
+ struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
+ struct mt7996_sta *msta = (struct mt7996_sta *)sta->drv_priv;
+ struct sta_phy *phy = data;
+ struct sta_rec_ra_fixed *ra;
+ struct sk_buff *skb;
+ struct tlv *tlv;
+
+ skb = __mt76_connac_mcu_alloc_sta_req(&dev->mt76, &mvif->mt76,
+ &msta->wcid,
+ MT7996_STA_UPDATE_MAX_SIZE);
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+ tlv = mt76_connac_mcu_add_tlv(skb, STA_REC_RA_UPDATE, sizeof(*ra));
+ ra = (struct sta_rec_ra_fixed *)tlv;
+
+ switch (field) {
+ case RATE_PARAM_AUTO:
+ break;
+ case RATE_PARAM_FIXED:
+ case RATE_PARAM_FIXED_MCS:
+ case RATE_PARAM_FIXED_GI:
+ case RATE_PARAM_FIXED_HE_LTF:
+ if (phy)
+ ra->phy = *phy;
+ break;
+ default:
+ break;
+ }
+ ra->field = cpu_to_le32(field);
+
+ return mt76_mcu_skb_send_msg(&dev->mt76, skb,
+ MCU_WMWA_UNI_CMD(STA_REC_UPDATE), true);
+}
+
+static int
+mt7996_mcu_add_rate_ctrl_fixed(struct mt7996_dev *dev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
+ struct cfg80211_chan_def *chandef = &mvif->phy->mt76->chandef;
+ struct cfg80211_bitrate_mask *mask = &mvif->bitrate_mask;
+ enum nl80211_band band = chandef->chan->band;
+ struct sta_phy phy = {};
+ int ret, nrates = 0;
+
+#define __sta_phy_bitrate_mask_check(_mcs, _gi, _ht, _he) \
+ do { \
+ u8 i, gi = mask->control[band]._gi; \
+ gi = (_he) ? gi : gi == NL80211_TXRATE_FORCE_SGI; \
+ phy.sgi = gi; \
+ phy.he_ltf = mask->control[band].he_ltf; \
+ for (i = 0; i < ARRAY_SIZE(mask->control[band]._mcs); i++) { \
+ if (!mask->control[band]._mcs[i]) \
+ continue; \
+ nrates += hweight16(mask->control[band]._mcs[i]); \
+ phy.mcs = ffs(mask->control[band]._mcs[i]) - 1; \
+ if (_ht) \
+ phy.mcs += 8 * i; \
+ } \
+ } while (0)
+
+ if (sta->deflink.he_cap.has_he) {
+ __sta_phy_bitrate_mask_check(he_mcs, he_gi, 0, 1);
+ } else if (sta->deflink.vht_cap.vht_supported) {
+ __sta_phy_bitrate_mask_check(vht_mcs, gi, 0, 0);
+ } else if (sta->deflink.ht_cap.ht_supported) {
+ __sta_phy_bitrate_mask_check(ht_mcs, gi, 1, 0);
+ } else {
+ nrates = hweight32(mask->control[band].legacy);
+ phy.mcs = ffs(mask->control[band].legacy) - 1;
+ }
+#undef __sta_phy_bitrate_mask_check
+
+ /* fall back to auto rate control */
+ if (mask->control[band].gi == NL80211_TXRATE_DEFAULT_GI &&
+ mask->control[band].he_gi == GENMASK(7, 0) &&
+ mask->control[band].he_ltf == GENMASK(7, 0) &&
+ nrates != 1)
+ return 0;
+
+ /* fixed single rate */
+ if (nrates == 1) {
+ ret = mt7996_mcu_set_fixed_field(dev, vif, sta, &phy,
+ RATE_PARAM_FIXED_MCS);
+ if (ret)
+ return ret;
+ }
+
+ /* fixed GI */
+ if (mask->control[band].gi != NL80211_TXRATE_DEFAULT_GI ||
+ mask->control[band].he_gi != GENMASK(7, 0)) {
+ struct mt7996_sta *msta = (struct mt7996_sta *)sta->drv_priv;
+ u32 addr;
+
+ /* The firmware updates only the TXCMD but doesn't take the WTBL
+ * into account, so the driver has to update it here to reflect
+ * the actual tx rate the hardware sends out.
+ */
+ addr = mt7996_mac_wtbl_lmac_addr(dev, msta->wcid.idx, 7);
+ if (sta->deflink.he_cap.has_he)
+ mt76_rmw_field(dev, addr, GENMASK(31, 24), phy.sgi);
+ else
+ mt76_rmw_field(dev, addr, GENMASK(15, 12), phy.sgi);
+
+ ret = mt7996_mcu_set_fixed_field(dev, vif, sta, &phy,
+ RATE_PARAM_FIXED_GI);
+ if (ret)
+ return ret;
+ }
+
+ /* fixed HE_LTF */
+ if (mask->control[band].he_ltf != GENMASK(7, 0)) {
+ ret = mt7996_mcu_set_fixed_field(dev, vif, sta, &phy,
+ RATE_PARAM_FIXED_HE_LTF);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
static void
mt7996_mcu_sta_rate_ctrl_tlv(struct sk_buff *skb, struct mt7996_dev *dev,
struct ieee80211_vif *vif, struct ieee80211_sta *sta)
@@ -1733,6 +1935,7 @@ int mt7996_mcu_add_rate_ctrl(struct mt7996_dev *dev, struct ieee80211_vif *vif,
struct mt7996_vif *mvif = (struct mt7996_vif *)vif->drv_priv;
struct mt7996_sta *msta = (struct mt7996_sta *)sta->drv_priv;
struct sk_buff *skb;
+ int ret;
skb = __mt76_connac_mcu_alloc_sta_req(&dev->mt76, &mvif->mt76,
&msta->wcid,
@@ -1752,8 +1955,12 @@ int mt7996_mcu_add_rate_ctrl(struct mt7996_dev *dev, struct ieee80211_vif *vif,
*/
mt7996_mcu_sta_rate_ctrl_tlv(skb, dev, vif, sta);
- return mt76_mcu_skb_send_msg(&dev->mt76, skb,
- MCU_WMWA_UNI_CMD(STA_REC_UPDATE), true);
+ ret = mt76_mcu_skb_send_msg(&dev->mt76, skb,
+ MCU_WMWA_UNI_CMD(STA_REC_UPDATE), true);
+ if (ret)
+ return ret;
+
+ return mt7996_mcu_add_rate_ctrl_fixed(dev, vif, sta);
}
static int
@@ -1996,6 +2203,59 @@ mt7996_mcu_beacon_cntdwn(struct ieee80211_vif *vif, struct sk_buff *rskb,
}
static void
+mt7996_mcu_beacon_mbss(struct sk_buff *rskb, struct sk_buff *skb,
+ struct ieee80211_vif *vif, struct bss_bcn_content_tlv *bcn,
+ struct ieee80211_mutable_offsets *offs)
+{
+ struct bss_bcn_mbss_tlv *mbss;
+ const struct element *elem;
+ struct tlv *tlv;
+
+ if (!vif->bss_conf.bssid_indicator)
+ return;
+
+ tlv = mt7996_mcu_add_uni_tlv(rskb, UNI_BSS_INFO_BCN_MBSSID, sizeof(*mbss));
+
+ mbss = (struct bss_bcn_mbss_tlv *)tlv;
+ mbss->offset[0] = cpu_to_le16(offs->tim_offset);
+ mbss->bitmap = cpu_to_le32(1);
+
+ for_each_element_id(elem, WLAN_EID_MULTIPLE_BSSID,
+ &skb->data[offs->mbssid_off],
+ skb->len - offs->mbssid_off) {
+ const struct element *sub_elem;
+
+ if (elem->datalen < 2)
+ continue;
+
+ for_each_element(sub_elem, elem->data + 1, elem->datalen - 1) {
+ const struct ieee80211_bssid_index *idx;
+ const u8 *idx_ie;
+
+ /* not a valid BSS profile */
+ if (sub_elem->id || sub_elem->datalen < 4)
+ continue;
+
+ /* Find WLAN_EID_MULTI_BSSID_IDX
+ * in the merged nontransmitted profile
+ */
+ idx_ie = cfg80211_find_ie(WLAN_EID_MULTI_BSSID_IDX,
+ sub_elem->data, sub_elem->datalen);
+ if (!idx_ie || idx_ie[1] < sizeof(*idx))
+ continue;
+
+ idx = (void *)(idx_ie + 2);
+ if (!idx->bssid_index || idx->bssid_index > 31)
+ continue;
+
+ mbss->offset[idx->bssid_index] = cpu_to_le16(idx_ie -
+ skb->data);
+ mbss->bitmap |= cpu_to_le32(BIT(idx->bssid_index));
+ }
+ }
+}
+
+static void
mt7996_mcu_beacon_cont(struct mt7996_dev *dev, struct ieee80211_vif *vif,
struct sk_buff *rskb, struct sk_buff *skb,
struct bss_bcn_content_tlv *bcn,
@@ -2016,7 +2276,7 @@ mt7996_mcu_beacon_cont(struct mt7996_dev *dev, struct ieee80211_vif *vif,
bcn->bcc_ie_pos = cpu_to_le16(offset - 3);
}
- buf = (u8 *)bcn + sizeof(*bcn) - MAX_BEACON_SIZE;
+ buf = (u8 *)bcn + sizeof(*bcn);
mt7996_mac_write_txwi(dev, (__le32 *)buf, skb, wcid, NULL, 0, 0,
BSS_CHANGED_BEACON);
@@ -2034,26 +2294,25 @@ int mt7996_mcu_add_beacon(struct ieee80211_hw *hw,
struct sk_buff *skb, *rskb;
struct tlv *tlv;
struct bss_bcn_content_tlv *bcn;
+ int len;
+
+ if (vif->bss_conf.nontransmitted)
+ return 0;
rskb = __mt7996_mcu_alloc_bss_req(&dev->mt76, &mvif->mt76,
- MT7996_BEACON_UPDATE_SIZE);
+ MT7996_MAX_BSS_OFFLOAD_SIZE);
if (IS_ERR(rskb))
return PTR_ERR(rskb);
- tlv = mt7996_mcu_add_uni_tlv(rskb,
- UNI_BSS_INFO_BCN_CONTENT, sizeof(*bcn));
- bcn = (struct bss_bcn_content_tlv *)tlv;
- bcn->enable = en;
-
- if (!en)
- goto out;
-
skb = ieee80211_beacon_get_template(hw, vif, &offs, 0);
- if (!skb)
+ if (!skb) {
+ dev_kfree_skb(rskb);
return -EINVAL;
+ }
- if (skb->len > MAX_BEACON_SIZE - MT_TXD_SIZE) {
+ if (skb->len > MT7996_MAX_BEACON_SIZE) {
dev_err(dev->mt76.dev, "Bcn size limit exceed\n");
+ dev_kfree_skb(rskb);
dev_kfree_skb(skb);
return -EINVAL;
}
@@ -2061,11 +2320,18 @@ int mt7996_mcu_add_beacon(struct ieee80211_hw *hw,
info = IEEE80211_SKB_CB(skb);
info->hw_queue |= FIELD_PREP(MT_TX_HW_QUEUE_PHY, phy->mt76->band_idx);
+ len = sizeof(*bcn) + MT_TXD_SIZE + skb->len;
+ tlv = mt7996_mcu_add_uni_tlv(rskb, UNI_BSS_INFO_BCN_CONTENT, len);
+ bcn = (struct bss_bcn_content_tlv *)tlv;
+ bcn->enable = en;
+ if (!en)
+ goto out;
+
mt7996_mcu_beacon_cont(dev, vif, rskb, skb, bcn, &offs);
- /* TODO: subtag - 11v MBSSID */
+ mt7996_mcu_beacon_mbss(rskb, skb, vif, bcn, &offs);
mt7996_mcu_beacon_cntdwn(vif, rskb, skb, &offs);
- dev_kfree_skb(skb);
out:
+ dev_kfree_skb(skb);
return mt76_mcu_skb_send_msg(&phy->dev->mt76, rskb,
MCU_WMWA_UNI_CMD(BSS_INFO_UPDATE), true);
}
@@ -2086,9 +2352,13 @@ int mt7996_mcu_beacon_inband_discov(struct mt7996_dev *dev,
struct sk_buff *rskb, *skb = NULL;
struct tlv *tlv;
u8 *buf, interval;
+ int len;
+
+ if (vif->bss_conf.nontransmitted)
+ return 0;
rskb = __mt7996_mcu_alloc_bss_req(&dev->mt76, &mvif->mt76,
- MT7996_INBAND_FRAME_SIZE);
+ MT7996_MAX_BSS_OFFLOAD_SIZE);
if (IS_ERR(rskb))
return PTR_ERR(rskb);
@@ -2102,11 +2372,14 @@ int mt7996_mcu_beacon_inband_discov(struct mt7996_dev *dev,
skb = ieee80211_get_unsol_bcast_probe_resp_tmpl(hw, vif);
}
- if (!skb)
+ if (!skb) {
+ dev_kfree_skb(rskb);
return -EINVAL;
+ }
- if (skb->len > MAX_INBAND_FRAME_SIZE - MT_TXD_SIZE) {
+ if (skb->len > MT7996_MAX_BEACON_SIZE) {
dev_err(dev->mt76.dev, "inband discovery size limit exceed\n");
+ dev_kfree_skb(rskb);
dev_kfree_skb(skb);
return -EINVAL;
}
@@ -2116,7 +2389,9 @@ int mt7996_mcu_beacon_inband_discov(struct mt7996_dev *dev,
info->band = band;
info->hw_queue |= FIELD_PREP(MT_TX_HW_QUEUE_PHY, phy->mt76->band_idx);
- tlv = mt7996_mcu_add_uni_tlv(rskb, UNI_BSS_INFO_OFFLOAD, sizeof(*discov));
+ len = sizeof(*discov) + MT_TXD_SIZE + skb->len;
+
+ tlv = mt7996_mcu_add_uni_tlv(rskb, UNI_BSS_INFO_OFFLOAD, len);
discov = (struct bss_inband_discovery_tlv *)tlv;
discov->tx_mode = OFFLOAD_TX_MODE_SU;
@@ -2127,7 +2402,7 @@ int mt7996_mcu_beacon_inband_discov(struct mt7996_dev *dev,
discov->enable = true;
discov->wcid = cpu_to_le16(MT7996_WTBL_RESERVED);
- buf = (u8 *)tlv + sizeof(*discov) - MAX_INBAND_FRAME_SIZE;
+ buf = (u8 *)tlv + sizeof(*discov);
mt7996_mac_write_txwi(dev, (__le32 *)buf, skb, wcid, NULL, 0, 0, changed);
@@ -2679,7 +2954,7 @@ int mt7996_mcu_set_tx(struct mt7996_dev *dev, struct ieee80211_vif *vif)
e = (struct edca *)tlv;
e->set = WMM_PARAM_SET;
- e->queue = ac + mvif->mt76.wmm_idx * MT7996_MAX_WMM_SETS;
+ e->queue = ac;
e->aifs = q->aifs;
e->txop = cpu_to_le16(q->txop);
@@ -2960,10 +3235,10 @@ int mt7996_mcu_set_chan_info(struct mt7996_phy *phy, u16 tag)
.channel_band = ch_band[chandef->chan->band],
};
- if (tag == UNI_CHANNEL_RX_PATH ||
- dev->mt76.hw->conf.flags & IEEE80211_CONF_MONITOR)
+ if (phy->mt76->hw->conf.flags & IEEE80211_CONF_MONITOR)
req.switch_reason = CH_SWITCH_NORMAL;
- else if (phy->mt76->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL)
+ else if (phy->mt76->hw->conf.flags & IEEE80211_CONF_OFFCHANNEL ||
+ phy->mt76->hw->conf.flags & IEEE80211_CONF_IDLE)
req.switch_reason = CH_SWITCH_SCAN_BYPASS_DPD;
else if (!cfg80211_reg_can_beacon(phy->mt76->hw->wiphy, chandef,
NL80211_IFTYPE_AP))
@@ -3307,8 +3582,8 @@ int mt7996_mcu_set_txbf(struct mt7996_dev *dev, u8 action)
tlv = mt7996_mcu_add_uni_tlv(skb, action, sizeof(*req_mod_en));
req_mod_en = (struct bf_mod_en_ctrl *)tlv;
- req_mod_en->bf_num = 2;
- req_mod_en->bf_bitmap = GENMASK(0, 0);
+ req_mod_en->bf_num = 3;
+ req_mod_en->bf_bitmap = GENMASK(2, 0);
break;
}
default:
@@ -3548,7 +3823,9 @@ int mt7996_mcu_twt_agrt_update(struct mt7996_dev *dev,
int cmd)
{
struct {
- u8 _rsv[4];
+ /* fixed field */
+ u8 bss;
+ u8 _rsv[3];
__le16 tag;
__le16 len;
@@ -3566,7 +3843,7 @@ int mt7996_mcu_twt_agrt_update(struct mt7996_dev *dev,
u8 exponent;
u8 is_ap;
u8 agrt_params;
- u8 __rsv2[135];
+ u8 __rsv2[23];
} __packed req = {
.tag = cpu_to_le16(UNI_CMD_TWT_ARGT_UPDATE),
.len = cpu_to_le16(sizeof(req) - 4),
@@ -3576,6 +3853,7 @@ int mt7996_mcu_twt_agrt_update(struct mt7996_dev *dev,
.flowid = flow->id,
.peer_id = cpu_to_le16(flow->wcid),
.duration = flow->duration,
+ .bss = mvif->mt76.idx,
.bss_idx = mvif->mt76.idx,
.start_tsf = cpu_to_le64(flow->tsf),
.mantissa = flow->mantissa,
@@ -3786,3 +4064,20 @@ int mt7996_mcu_set_rro(struct mt7996_dev *dev, u16 tag, u8 val)
return mt76_mcu_send_msg(&dev->mt76, MCU_WM_UNI_CMD(RRO), &req,
sizeof(req), true);
}
+
+int mt7996_mcu_get_all_sta_info(struct mt7996_phy *phy, u16 tag)
+{
+ struct mt7996_dev *dev = phy->dev;
+ struct {
+ u8 _rsv[4];
+
+ __le16 tag;
+ __le16 len;
+ } __packed req = {
+ .tag = cpu_to_le16(tag),
+ .len = cpu_to_le16(sizeof(req) - 4),
+ };
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_WM_UNI_CMD(ALL_STA_INFO),
+ &req, sizeof(req), false);
+}
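The new mt7996_mcu_get_all_sta_info() request above is just a fixed 4-byte reserved prefix followed by a little-endian TLV header whose len excludes that prefix. A minimal userspace sketch of the same layout, assuming GCC's packed attribute and glibc's htole16() from <endian.h>; the tag value is a placeholder, not the real UNI command enum.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <endian.h>

/* Same shape as the request above: 4 reserved bytes, then a little-endian
 * TLV header whose len covers everything after the reserved prefix. */
struct all_sta_info_req {
        uint8_t  rsv[4];
        uint16_t tag;           /* little-endian on the wire */
        uint16_t len;
} __attribute__((packed));

int main(void)
{
        struct all_sta_info_req req;
        const uint8_t *p = (const uint8_t *)&req;
        size_t i;

        memset(&req, 0, sizeof(req));
        req.tag = htole16(0x0002);      /* placeholder tag, not the real enum */
        req.len = htole16(sizeof(req) - sizeof(req.rsv));

        for (i = 0; i < sizeof(req); i++)
                printf("%02x ", p[i]);
        printf("\n");
        return 0;
}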
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
index 078f82858621..a88f6af323da 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.h
@@ -153,6 +153,32 @@ struct mt7996_mcu_mib {
__le64 data;
} __packed;
+struct mt7996_mcu_all_sta_info_event {
+ u8 rsv[4];
+ __le16 tag;
+ __le16 len;
+ u8 more;
+ u8 rsv2;
+ __le16 sta_num;
+ u8 rsv3[2];
+
+ union {
+ struct {
+ __le16 wlan_idx;
+ u8 rsv[2];
+ __le32 tx_bytes[IEEE80211_NUM_ACS];
+ __le32 rx_bytes[IEEE80211_NUM_ACS];
+ } adm_stat[0];
+
+ struct {
+ __le16 wlan_idx;
+ u8 rsv[2];
+ __le32 tx_msdu_cnt;
+ __le32 rx_msdu_cnt;
+ } msdu_cnt[0];
+ };
+} __packed;
+
enum mt7996_chan_mib_offs {
UNI_MIB_OBSS_AIRTIME = 26,
UNI_MIB_NON_WIFI_TIME = 27,
@@ -270,8 +296,6 @@ struct bss_inband_discovery_tlv {
u8 enable;
__le16 wcid;
__le16 prob_rsp_len;
-#define MAX_INBAND_FRAME_SIZE 512
- u8 pkt[MAX_INBAND_FRAME_SIZE];
} __packed;
struct bss_bcn_content_tlv {
@@ -283,8 +307,6 @@ struct bss_bcn_content_tlv {
u8 enable;
u8 type;
__le16 pkt_len;
-#define MAX_BEACON_SIZE 512
- u8 pkt[MAX_BEACON_SIZE];
} __packed;
struct bss_bcn_cntdwn_tlv {
@@ -591,13 +613,14 @@ enum {
sizeof(struct sta_rec_hdr_trans) + \
sizeof(struct tlv))
+#define MT7996_MAX_BEACON_SIZE 1342
#define MT7996_BEACON_UPDATE_SIZE (sizeof(struct bss_req_hdr) + \
sizeof(struct bss_bcn_content_tlv) + \
+ MT_TXD_SIZE + \
sizeof(struct bss_bcn_cntdwn_tlv) + \
sizeof(struct bss_bcn_mbss_tlv))
-
-#define MT7996_INBAND_FRAME_SIZE (sizeof(struct bss_req_hdr) + \
- sizeof(struct bss_inband_discovery_tlv))
+#define MT7996_MAX_BSS_OFFLOAD_SIZE (MT7996_MAX_BEACON_SIZE + \
+ MT7996_BEACON_UPDATE_SIZE)
enum {
UNI_BAND_CONFIG_RADIO_ENABLE,
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
index 7354e5cf8e67..e53cf6a3704c 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/mt7996.h
@@ -110,7 +110,6 @@ struct mt7996_sta {
struct ewma_avg_signal avg_ack_signal;
unsigned long changed;
- unsigned long jiffies;
struct mt76_connac_sta_key_conf bip;
@@ -402,6 +401,7 @@ int mt7996_mcu_fw_dbg_ctrl(struct mt7996_dev *dev, u32 module, u8 level);
int mt7996_mcu_trigger_assert(struct mt7996_dev *dev);
void mt7996_mcu_rx_event(struct mt7996_dev *dev, struct sk_buff *skb);
void mt7996_mcu_exit(struct mt7996_dev *dev);
+int mt7996_mcu_get_all_sta_info(struct mt7996_phy *phy, u16 tag);
static inline u8 mt7996_max_interface_num(struct mt7996_dev *dev)
{
diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/regs.h b/drivers/net/wireless/mediatek/mt76/mt7996/regs.h
index 97beab924517..0086a7866657 100644
--- a/drivers/net/wireless/mediatek/mt76/mt7996/regs.h
+++ b/drivers/net/wireless/mediatek/mt76/mt7996/regs.h
@@ -243,6 +243,13 @@ enum base_rev {
FIELD_PREP(MT_WTBL_LMAC_ID, _id) | \
FIELD_PREP(MT_WTBL_LMAC_DW, _dw))
+/* AGG: band 0(0x820e2000), band 1(0x820f2000), band 2(0x830e2000) */
+#define MT_WF_AGG_BASE(_band) __BASE(WF_AGG_BASE, (_band))
+#define MT_WF_AGG(_band, ofs) (MT_WF_AGG_BASE(_band) + (ofs))
+
+#define MT_AGG_ACR4(_band) MT_WF_AGG(_band, 0x3c)
+#define MT_AGG_ACR_PPDU_TXS2H BIT(1)
+
/* ARB: band 0(0x820e3000), band 1(0x820f3000), band 2(0x830e3000) */
#define MT_WF_ARB_BASE(_band) __BASE(WF_ARB_BASE, (_band))
#define MT_WF_ARB(_band, ofs) (MT_WF_ARB_BASE(_band) + (ofs))
@@ -509,6 +516,7 @@ enum base_rev {
#define MT_LED_CTRL(_n) MT_LED_PHYS(0x00 + ((_n) * 4))
#define MT_LED_CTRL_KICK BIT(7)
+#define MT_LED_CTRL_BLINK_BAND_SEL BIT(4)
#define MT_LED_CTRL_BLINK_MODE BIT(2)
#define MT_LED_CTRL_POLARITY BIT(1)
diff --git a/drivers/net/wireless/mediatek/mt76/tx.c b/drivers/net/wireless/mediatek/mt76/tx.c
index 6cc26cc6c517..1809b03292c3 100644
--- a/drivers/net/wireless/mediatek/mt76/tx.c
+++ b/drivers/net/wireless/mediatek/mt76/tx.c
@@ -329,40 +329,32 @@ void
mt76_tx(struct mt76_phy *phy, struct ieee80211_sta *sta,
struct mt76_wcid *wcid, struct sk_buff *skb)
{
- struct mt76_dev *dev = phy->dev;
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
- struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
- struct mt76_queue *q;
- int qid = skb_get_queue_mapping(skb);
if (mt76_testmode_enabled(phy)) {
ieee80211_free_txskb(phy->hw, skb);
return;
}
- if (WARN_ON(qid >= MT_TXQ_PSD)) {
- qid = MT_TXQ_BE;
- skb_set_queue_mapping(skb, qid);
- }
-
- if ((dev->drv->drv_flags & MT_DRV_HW_MGMT_TXQ) &&
- !(info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP) &&
- !ieee80211_is_data(hdr->frame_control) &&
- !ieee80211_is_bufferable_mmpdu(skb)) {
- qid = MT_TXQ_PSD;
- }
+ if (WARN_ON(skb_get_queue_mapping(skb) >= MT_TXQ_PSD))
+ skb_set_queue_mapping(skb, MT_TXQ_BE);
if (wcid && !(wcid->tx_info & MT_WCID_TX_INFO_SET))
ieee80211_get_tx_rates(info->control.vif, sta, skb,
info->control.rates, 1);
info->hw_queue |= FIELD_PREP(MT_TX_HW_QUEUE_PHY, phy->band_idx);
- q = phy->q_tx[qid];
- spin_lock_bh(&q->lock);
- __mt76_tx_queue_skb(phy, qid, skb, wcid, sta, NULL);
- dev->queue_ops->kick(dev, q);
- spin_unlock_bh(&q->lock);
+ spin_lock_bh(&wcid->tx_pending.lock);
+ __skb_queue_tail(&wcid->tx_pending, skb);
+ spin_unlock_bh(&wcid->tx_pending.lock);
+
+ spin_lock_bh(&phy->tx_lock);
+ if (list_empty(&wcid->tx_list))
+ list_add_tail(&wcid->tx_list, &phy->tx_list);
+ spin_unlock_bh(&phy->tx_lock);
+
+ mt76_worker_schedule(&phy->dev->tx_worker);
}
EXPORT_SYMBOL_GPL(mt76_tx);
@@ -593,10 +585,86 @@ void mt76_txq_schedule(struct mt76_phy *phy, enum mt76_txq_id qid)
}
EXPORT_SYMBOL_GPL(mt76_txq_schedule);
+static int
+mt76_txq_schedule_pending_wcid(struct mt76_phy *phy, struct mt76_wcid *wcid)
+{
+ struct mt76_dev *dev = phy->dev;
+ struct ieee80211_sta *sta;
+ struct mt76_queue *q;
+ struct sk_buff *skb;
+ int ret = 0;
+
+ spin_lock(&wcid->tx_pending.lock);
+ while ((skb = skb_peek(&wcid->tx_pending)) != NULL) {
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ int qid = skb_get_queue_mapping(skb);
+
+ if ((dev->drv->drv_flags & MT_DRV_HW_MGMT_TXQ) &&
+ !(info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP) &&
+ !ieee80211_is_data(hdr->frame_control) &&
+ !ieee80211_is_bufferable_mmpdu(skb))
+ qid = MT_TXQ_PSD;
+
+ q = phy->q_tx[qid];
+ if (mt76_txq_stopped(q)) {
+ ret = -1;
+ break;
+ }
+
+ __skb_unlink(skb, &wcid->tx_pending);
+ spin_unlock(&wcid->tx_pending.lock);
+
+ sta = wcid_to_sta(wcid);
+ spin_lock(&q->lock);
+ __mt76_tx_queue_skb(phy, qid, skb, wcid, sta, NULL);
+ dev->queue_ops->kick(dev, q);
+ spin_unlock(&q->lock);
+
+ spin_lock(&wcid->tx_pending.lock);
+ }
+ spin_unlock(&wcid->tx_pending.lock);
+
+ return ret;
+}
+
+static void mt76_txq_schedule_pending(struct mt76_phy *phy)
+{
+ if (list_empty(&phy->tx_list))
+ return;
+
+ local_bh_disable();
+ rcu_read_lock();
+
+ spin_lock(&phy->tx_lock);
+ while (!list_empty(&phy->tx_list)) {
+ struct mt76_wcid *wcid = NULL;
+ int ret;
+
+ wcid = list_first_entry(&phy->tx_list, struct mt76_wcid, tx_list);
+ list_del_init(&wcid->tx_list);
+
+ spin_unlock(&phy->tx_lock);
+ ret = mt76_txq_schedule_pending_wcid(phy, wcid);
+ spin_lock(&phy->tx_lock);
+
+ if (ret) {
+ if (list_empty(&wcid->tx_list))
+ list_add_tail(&wcid->tx_list, &phy->tx_list);
+ break;
+ }
+ }
+ spin_unlock(&phy->tx_lock);
+
+ rcu_read_unlock();
+ local_bh_enable();
+}
+
void mt76_txq_schedule_all(struct mt76_phy *phy)
{
int i;
+ mt76_txq_schedule_pending(phy);
for (i = 0; i <= MT_TXQ_BK; i++)
mt76_txq_schedule(phy, i);
}
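The mt76_tx() rework above no longer writes frames straight into a hardware queue: each frame goes onto the wcid's tx_pending list, the wcid is linked onto the phy's tx_list, and the tx worker later drains stations in round-robin order, re-queueing a station and stopping as soon as a hardware queue is full (the -1 return above). A compact single-threaded sketch of that scheduling shape; the station and credit bookkeeping below is invented for illustration, while the real code uses spinlocks, per-qid hardware queues and RCU.

#include <stdio.h>
#include <stdbool.h>

#define MAX_PKTS 8

struct sta {
        const char *name;
        int pkts[MAX_PKTS];     /* pending frame ids, FIFO order */
        int head, count;
        struct sta *next;       /* link on the phy's pending-station list */
        bool queued;
};

struct phy {
        struct sta *list_head, *list_tail;  /* stations with pending frames */
        int hw_credits;                     /* pretend hardware queue space */
};

static void phy_queue_sta(struct phy *phy, struct sta *sta)
{
        if (sta->queued)
                return;
        sta->queued = true;
        sta->next = NULL;
        if (phy->list_tail)
                phy->list_tail->next = sta;
        else
                phy->list_head = sta;
        phy->list_tail = sta;
}

static struct sta *phy_dequeue_sta(struct phy *phy)
{
        struct sta *sta = phy->list_head;

        if (!sta)
                return NULL;
        phy->list_head = sta->next;
        if (!phy->list_head)
                phy->list_tail = NULL;
        sta->next = NULL;
        sta->queued = false;
        return sta;
}

/* Like mt76_tx(): record the frame and make sure the station is on the
 * phy list; actual transmission happens later in the scheduler. */
static void tx_enqueue(struct phy *phy, struct sta *sta, int frame)
{
        sta->pkts[(sta->head + sta->count++) % MAX_PKTS] = frame;
        phy_queue_sta(phy, sta);
}

/* Drain one station until it is empty or the hardware runs out of room;
 * returns false in the latter case, like the -1 return above. */
static bool drain_sta(struct phy *phy, struct sta *sta)
{
        while (sta->count) {
                if (!phy->hw_credits)
                        return false;
                printf("%s: send frame %d\n", sta->name, sta->pkts[sta->head]);
                sta->head = (sta->head + 1) % MAX_PKTS;
                sta->count--;
                phy->hw_credits--;
        }
        return true;
}

static void schedule_pending(struct phy *phy)
{
        struct sta *sta;

        while ((sta = phy_dequeue_sta(phy)) != NULL) {
                if (!drain_sta(phy, sta)) {
                        /* hardware full: requeue and retry on the next run */
                        phy_queue_sta(phy, sta);
                        break;
                }
        }
}

int main(void)
{
        struct phy phy = { .hw_credits = 3 };
        struct sta a = { .name = "sta-a" }, b = { .name = "sta-b" };

        tx_enqueue(&phy, &a, 1);
        tx_enqueue(&phy, &a, 2);
        tx_enqueue(&phy, &b, 3);
        tx_enqueue(&phy, &b, 4);
        schedule_pending(&phy);         /* sends 1, 2, 3; sta-b stays queued */

        phy.hw_credits = 2;
        schedule_pending(&phy);         /* sends 4 */
        return 0;
}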
diff --git a/drivers/net/wireless/mediatek/mt7601u/tx.c b/drivers/net/wireless/mediatek/mt7601u/tx.c
index 51d977ffc52f..5aeeac0dd9fe 100644
--- a/drivers/net/wireless/mediatek/mt7601u/tx.c
+++ b/drivers/net/wireless/mediatek/mt7601u/tx.c
@@ -110,7 +110,7 @@ void mt7601u_tx_status(struct mt7601u_dev *dev, struct sk_buff *skb)
info->flags |= IEEE80211_TX_STAT_ACK;
spin_lock_bh(&dev->mac_lock);
- ieee80211_tx_status(dev->hw, skb);
+ ieee80211_tx_status_skb(dev->hw, skb);
spin_unlock_bh(&dev->mac_lock);
}
diff --git a/drivers/net/wireless/microchip/wilc1000/cfg80211.c b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
index b545d93c6e37..da52f91693b5 100644
--- a/drivers/net/wireless/microchip/wilc1000/cfg80211.c
+++ b/drivers/net/wireless/microchip/wilc1000/cfg80211.c
@@ -1441,11 +1441,11 @@ static int start_ap(struct wiphy *wiphy, struct net_device *dev,
}
static int change_beacon(struct wiphy *wiphy, struct net_device *dev,
- struct cfg80211_beacon_data *beacon)
+ struct cfg80211_ap_update *params)
{
struct wilc_vif *vif = netdev_priv(dev);
- return wilc_add_beacon(vif, 0, 0, beacon);
+ return wilc_add_beacon(vif, 0, 0, &params->beacon);
}
static int stop_ap(struct wiphy *wiphy, struct net_device *dev,
diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
index e9f59de31b0b..91d71e0f7ef2 100644
--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
+++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
@@ -148,8 +148,8 @@ static int wilc_txq_task(void *vp)
complete(&wl->txq_thread_started);
while (1) {
- wait_for_completion(&wl->txq_event);
-
+ if (wait_for_completion_interruptible(&wl->txq_event))
+ continue;
if (wl->close) {
complete(&wl->txq_thread_started);
@@ -166,12 +166,24 @@ static int wilc_txq_task(void *vp)
srcu_idx = srcu_read_lock(&wl->srcu);
list_for_each_entry_rcu(ifc, &wl->vif_list,
list) {
- if (ifc->mac_opened && ifc->ndev)
+ if (ifc->mac_opened &&
+ netif_queue_stopped(ifc->ndev))
netif_wake_queue(ifc->ndev);
}
srcu_read_unlock(&wl->srcu, srcu_idx);
}
- } while (ret == WILC_VMM_ENTRY_FULL_RETRY && !wl->close);
+ if (ret != WILC_VMM_ENTRY_FULL_RETRY)
+ break;
+ /* Back off the TX task from sending packets for a while.
+ * msleep_interruptible() lets the RX task run and free
+ * buffers, and leaves the TX task in TASK_INTERRUPTIBLE
+ * state, so the thread is put back on the run queue as
+ * soon as it is signalled, even before the timeout
+ * elapses. This gives reserved SKBs a better chance to
+ * be freed.
+ */
+ msleep_interruptible(TX_BACKOFF_WEIGHT_MS);
+ } while (!wl->close);
}
return 0;
}
diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.h b/drivers/net/wireless/microchip/wilc1000/netdev.h
index bb1a315a7b7e..aafe3dc44ac6 100644
--- a/drivers/net/wireless/microchip/wilc1000/netdev.h
+++ b/drivers/net/wireless/microchip/wilc1000/netdev.h
@@ -27,6 +27,8 @@
#define TCP_ACK_FILTER_LINK_SPEED_THRESH 54
#define DEFAULT_LINK_SPEED 72
+#define TX_BACKOFF_WEIGHT_MS 1
+
struct wilc_wfi_stats {
unsigned long rx_packets;
unsigned long tx_packets;
diff --git a/drivers/net/wireless/microchip/wilc1000/wlan.c b/drivers/net/wireless/microchip/wilc1000/wlan.c
index 58bbf50081e4..9eb115c79c90 100644
--- a/drivers/net/wireless/microchip/wilc1000/wlan.c
+++ b/drivers/net/wireless/microchip/wilc1000/wlan.c
@@ -1492,7 +1492,7 @@ int wilc_wlan_init(struct net_device *dev)
}
if (!wilc->vmm_table)
- wilc->vmm_table = kzalloc(WILC_VMM_TBL_SIZE, GFP_KERNEL);
+ wilc->vmm_table = kcalloc(WILC_VMM_TBL_SIZE, sizeof(u32), GFP_KERNEL);
if (!wilc->vmm_table) {
ret = -ENOBUFS;
diff --git a/drivers/net/wireless/purelifi/plfxlc/mac.c b/drivers/net/wireless/purelifi/plfxlc/mac.c
index 94ee831b5de3..506d2f31efb5 100644
--- a/drivers/net/wireless/purelifi/plfxlc/mac.c
+++ b/drivers/net/wireless/purelifi/plfxlc/mac.c
@@ -666,7 +666,7 @@ static void plfxlc_get_et_strings(struct ieee80211_hw *hw,
u32 sset, u8 *data)
{
if (sset == ETH_SS_STATS)
- memcpy(data, *et_strings, sizeof(et_strings));
+ memcpy(data, et_strings, sizeof(et_strings));
}
static void plfxlc_get_et_stats(struct ieee80211_hw *hw,
diff --git a/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c b/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
index 73e6f9408b51..663d77770fce 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/cfg80211.c
@@ -331,11 +331,11 @@ out:
}
static int qtnf_change_beacon(struct wiphy *wiphy, struct net_device *dev,
- struct cfg80211_beacon_data *info)
+ struct cfg80211_ap_update *info)
{
struct qtnf_vif *vif = qtnf_netdev_get_priv(dev);
- return qtnf_mgmt_set_appie(vif, info);
+ return qtnf_mgmt_set_appie(vif, &info->beacon);
}
static int qtnf_start_ap(struct wiphy *wiphy, struct net_device *dev,
diff --git a/drivers/net/wireless/quantenna/qtnfmac/commands.c b/drivers/net/wireless/quantenna/qtnfmac/commands.c
index 68ae9c7ea95a..9540ad6196d7 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/commands.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/commands.c
@@ -1335,7 +1335,7 @@ static int qtnf_cmd_band_fill_iftype(const u8 *data,
return -EINVAL;
}
- kfree(band->iftype_data);
+ kfree((__force void *)band->iftype_data);
band->iftype_data = NULL;
band->n_iftype_data = tlv->n_iftype_data;
if (band->n_iftype_data == 0)
@@ -1347,7 +1347,8 @@ static int qtnf_cmd_band_fill_iftype(const u8 *data,
band->n_iftype_data = 0;
return -ENOMEM;
}
- band->iftype_data = iftype_data;
+
+ _ieee80211_set_sband_iftype_data(band, iftype_data, tlv->n_iftype_data);
for (i = 0; i < band->n_iftype_data; i++)
qtnf_cmd_conv_iftype(iftype_data++, &tlv->iftype_data[i]);
diff --git a/drivers/net/wireless/quantenna/qtnfmac/core.c b/drivers/net/wireless/quantenna/qtnfmac/core.c
index 2a63ffdc4b2c..677bac835330 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/core.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/core.c
@@ -535,7 +535,7 @@ static void qtnf_core_mac_detach(struct qtnf_bus *bus, unsigned int macid)
if (!wiphy->bands[band])
continue;
- kfree(wiphy->bands[band]->iftype_data);
+ kfree((__force void *)wiphy->bands[band]->iftype_data);
wiphy->bands[band]->n_iftype_data = 0;
kfree(wiphy->bands[band]->channels);
diff --git a/drivers/net/wireless/quantenna/qtnfmac/event.c b/drivers/net/wireless/quantenna/qtnfmac/event.c
index 31bc58e96ac0..3b283e93a13e 100644
--- a/drivers/net/wireless/quantenna/qtnfmac/event.c
+++ b/drivers/net/wireless/quantenna/qtnfmac/event.c
@@ -477,9 +477,9 @@ qtnf_event_handle_freq_change(struct qtnf_wmac *mac,
if (!vif->netdev)
continue;
- mutex_lock(&vif->wdev.mtx);
+ wiphy_lock(priv_to_wiphy(vif->mac));
cfg80211_ch_switch_notify(vif->netdev, &chandef, 0, 0);
- mutex_unlock(&vif->wdev.mtx);
+ wiphy_unlock(priv_to_wiphy(vif->mac));
}
return 0;
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800.h b/drivers/net/wireless/ralink/rt2x00/rt2800.h
index de2ee5ffc34e..48521e45577d 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800.h
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800.h
@@ -871,6 +871,18 @@
#define LED_CFG_LED_POLAR FIELD32(0x40000000)
/*
+ * AMPDU_MAX_LEN_20M1S: Per MCS max A-MPDU length, 20 MHz, MCS 0-7
+ * AMPDU_MAX_LEN_20M2S: Per MCS max A-MPDU length, 20 MHz, MCS 8-15
+ * AMPDU_MAX_LEN_40M1S: Per MCS max A-MPDU length, 40 MHz, MCS 0-7
+ * AMPDU_MAX_LEN_40M2S: Per MCS max A-MPDU length, 40 MHz, MCS 8-15
+ * Maximum A-MPDU length = 2^(AMPDU_MAX - 5) kilobytes
+ */
+#define AMPDU_MAX_LEN_20M1S 0x1030
+#define AMPDU_MAX_LEN_20M2S 0x1034
+#define AMPDU_MAX_LEN_40M1S 0x1038
+#define AMPDU_MAX_LEN_40M2S 0x103C
+
+/*
* AMPDU_BA_WINSIZE: Force BlockAck window size
* FORCE_WINSIZE_ENABLE:
* 0: Disable forcing of BlockAck window size
@@ -1545,6 +1557,12 @@
*/
#define EXP_ACK_TIME 0x1380
+/*
+ * HT_FBK_TO_LEGACY: Enable/Disable HT/RTS fallback to OFDM/CCK rate
+ * Not available for legacy SoCs
+ */
+#define HT_FBK_TO_LEGACY 0x1384
+
/* TX_PWR_CFG_5 */
#define TX_PWR_CFG_5 0x1384
#define TX_PWR_CFG_5_MCS16_CH0 FIELD32(0x0000000f)
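The AMPDU_MAX_LEN_* registers added above pack one 4-bit limit per MCS, and per the comment the limit is 2^(value - 5) kilobytes, i.e. 2^(value + 5) bytes. A quick self-contained decode of 0x77765543, the value this patch later writes to AMPDU_MAX_LEN_20M2S; the nibble-to-MCS mapping is not spelled out in the comment, so the output only labels the field index.

#include <stdio.h>
#include <stdint.h>

/* Per the register description: max A-MPDU length = 2^(v - 5) KB, which
 * is 2^(v + 5) bytes, where v is the 4-bit per-MCS field. */
static unsigned int ampdu_max_bytes(unsigned int v)
{
        return 1u << (v + 5);
}

int main(void)
{
        uint32_t reg = 0x77765543;      /* written to AMPDU_MAX_LEN_20M2S below */
        int i;

        for (i = 0; i < 8; i++) {
                unsigned int v = (reg >> (4 * i)) & 0xf;

                printf("field %d: value %u -> %u bytes\n",
                       i, v, ampdu_max_bytes(v));
        }
        return 0;
}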
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
index e65cc00fa17c..ee880f749b3c 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800lib.c
@@ -856,6 +856,7 @@ static int rt2800_agc_to_rssi(struct rt2x00_dev *rt2x00dev, u32 rxwi_w2)
s8 rssi0 = rt2x00_get_field32(rxwi_w2, RXWI_W2_RSSI0);
s8 rssi1 = rt2x00_get_field32(rxwi_w2, RXWI_W2_RSSI1);
s8 rssi2 = rt2x00_get_field32(rxwi_w2, RXWI_W2_RSSI2);
+ s8 base_val = rt2x00_rt(rt2x00dev, RT6352) ? -2 : -12;
u16 eeprom;
u8 offset0;
u8 offset1;
@@ -880,9 +881,9 @@ static int rt2800_agc_to_rssi(struct rt2x00_dev *rt2x00dev, u32 rxwi_w2)
* If the value in the descriptor is 0, it is considered invalid
* and the default (extremely low) rssi value is assumed
*/
- rssi0 = (rssi0) ? (-12 - offset0 - rt2x00dev->lna_gain - rssi0) : -128;
- rssi1 = (rssi1) ? (-12 - offset1 - rt2x00dev->lna_gain - rssi1) : -128;
- rssi2 = (rssi2) ? (-12 - offset2 - rt2x00dev->lna_gain - rssi2) : -128;
+ rssi0 = (rssi0) ? (base_val - offset0 - rt2x00dev->lna_gain - rssi0) : -128;
+ rssi1 = (rssi1) ? (base_val - offset1 - rt2x00dev->lna_gain - rssi1) : -128;
+ rssi2 = (rssi2) ? (base_val - offset2 - rt2x00dev->lna_gain - rssi2) : -128;
/*
* mac80211 only accepts a single RSSI value. Calculating the
@@ -1236,13 +1237,14 @@ void rt2800_txdone_nostatus(struct rt2x00_dev *rt2x00dev)
}
EXPORT_SYMBOL_GPL(rt2800_txdone_nostatus);
-static int rt2800_check_hung(struct data_queue *queue)
+static bool rt2800_check_hung(struct data_queue *queue)
{
unsigned int cur_idx = rt2800_drv_get_dma_done(queue);
- if (queue->wd_idx != cur_idx)
+ if (queue->wd_idx != cur_idx) {
+ queue->wd_idx = cur_idx;
queue->wd_count = 0;
- else
+ } else
queue->wd_count++;
return queue->wd_count > 16;
@@ -1279,7 +1281,7 @@ void rt2800_watchdog(struct rt2x00_dev *rt2x00dev)
case QID_MGMT:
if (rt2x00queue_empty(queue))
continue;
- hung_tx = rt2800_check_hung(queue);
+ hung_tx = hung_tx || rt2800_check_hung(queue);
break;
case QID_RX:
/* For station mode we should reactive at least
@@ -1288,7 +1290,7 @@ void rt2800_watchdog(struct rt2x00_dev *rt2x00dev)
*/
if (rt2x00dev->intf_sta_count == 0)
continue;
- hung_rx = rt2800_check_hung(queue);
+ hung_rx = hung_rx || rt2800_check_hung(queue);
break;
default:
break;
@@ -1301,8 +1303,12 @@ void rt2800_watchdog(struct rt2x00_dev *rt2x00dev)
if (hung_rx)
rt2x00_warn(rt2x00dev, "Watchdog RX hung detected\n");
- if (hung_tx || hung_rx)
+ if (hung_tx || hung_rx) {
+ queue_for_each(rt2x00dev, queue)
+ queue->wd_count = 0;
+
ieee80211_restart_hw(rt2x00dev->hw);
+ }
}
EXPORT_SYMBOL_GPL(rt2800_watchdog);
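The watchdog changes above make rt2800_check_hung() remember the last observed DMA-done index, so the stall counter only grows while the index is genuinely stuck, accumulate the per-queue results with ||, and clear every wd_count before restarting the hardware. A tiny self-contained model of that counter logic; the 16-poll threshold mirrors the code above, while the polling loop and indices are invented.

#include <stdio.h>
#include <stdbool.h>

struct queue {
        unsigned int wd_idx;    /* DMA-done index seen on the previous poll */
        unsigned int wd_count;  /* consecutive polls without progress */
};

static bool check_hung(struct queue *q, unsigned int cur_idx)
{
        if (q->wd_idx != cur_idx) {
                q->wd_idx = cur_idx;    /* progress: remember it, reset */
                q->wd_count = 0;
        } else {
                q->wd_count++;
        }

        return q->wd_count > 16;
}

int main(void)
{
        struct queue q = { 0, 0 };
        int i;

        for (i = 0; i < 40; i++) {
                /* the index advances twice, then the queue stalls */
                unsigned int cur = i < 2 ? i + 1 : 2;

                if (check_hung(&q, cur)) {
                        printf("hung detected after poll %d\n", i);
                        break;
                }
        }
        return 0;
}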
@@ -3855,14 +3861,6 @@ static void rt2800_config_channel_rf7620(struct rt2x00_dev *rt2x00dev,
rfcsr |= tx_agc_fc;
rt2800_rfcsr_write_bank(rt2x00dev, 7, 59, rfcsr);
}
-
- if (conf_is_ht40(conf)) {
- rt2800_bbp_glrt_write(rt2x00dev, 141, 0x10);
- rt2800_bbp_glrt_write(rt2x00dev, 157, 0x2f);
- } else {
- rt2800_bbp_glrt_write(rt2x00dev, 141, 0x1a);
- rt2800_bbp_glrt_write(rt2x00dev, 157, 0x40);
- }
}
static void rt2800_config_alc_rt6352(struct rt2x00_dev *rt2x00dev,
@@ -4431,66 +4429,45 @@ static void rt2800_config_channel(struct rt2x00_dev *rt2x00dev,
usleep_range(1000, 1500);
}
- if (rt2x00_rt(rt2x00dev, RT5592) || rt2x00_rt(rt2x00dev, RT6352)) {
- reg = 0x10;
- if (!conf_is_ht40(conf)) {
- if (rt2x00_rt(rt2x00dev, RT6352) &&
- rt2x00_has_cap_external_lna_bg(rt2x00dev)) {
- reg |= 0x5;
- } else {
- reg |= 0xa;
- }
- }
- rt2800_bbp_write(rt2x00dev, 195, 141);
- rt2800_bbp_write(rt2x00dev, 196, reg);
+ if (rt2x00_rt(rt2x00dev, RT5592)) {
+ bbp = conf_is_ht40(conf) ? 0x10 : 0x1a;
+ rt2800_bbp_glrt_write(rt2x00dev, 141, bbp);
- /* AGC init.
- * Despite the vendor driver using different values here for
- * RT6352 chip, we use 0x1c for now. This may have to be changed
- * once TSSI got implemented.
- */
- reg = (rf->channel <= 14 ? 0x1c : 0x24) + 2*rt2x00dev->lna_gain;
- rt2800_bbp_write_with_rx_chain(rt2x00dev, 66, reg);
+ bbp = (rf->channel <= 14 ? 0x1c : 0x24) + 2 * rt2x00dev->lna_gain;
+ rt2800_bbp_write_with_rx_chain(rt2x00dev, 66, bbp);
- if (rt2x00_rt(rt2x00dev, RT5592))
- rt2800_iq_calibrate(rt2x00dev, rf->channel);
+ rt2800_iq_calibrate(rt2x00dev, rf->channel);
}
if (rt2x00_rt(rt2x00dev, RT6352)) {
- if (test_bit(CAPABILITY_EXTERNAL_PA_TX0,
- &rt2x00dev->cap_flags)) {
- reg = rt2800_register_read(rt2x00dev, RF_CONTROL3);
- reg |= 0x00000101;
- rt2800_register_write(rt2x00dev, RF_CONTROL3, reg);
-
- reg = rt2800_register_read(rt2x00dev, RF_BYPASS3);
- reg |= 0x00000101;
- rt2800_register_write(rt2x00dev, RF_BYPASS3, reg);
-
- rt2800_rfcsr_write_chanreg(rt2x00dev, 43, 0x73);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 44, 0x73);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 45, 0x73);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 46, 0x27);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 47, 0xC8);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 48, 0xA4);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 49, 0x05);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 54, 0x27);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 55, 0xC8);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 56, 0xA4);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 57, 0x05);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 58, 0x27);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 59, 0xC8);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 60, 0xA4);
- rt2800_rfcsr_write_chanreg(rt2x00dev, 61, 0x05);
- rt2800_rfcsr_write_dccal(rt2x00dev, 05, 0x00);
-
- rt2800_register_write(rt2x00dev, TX0_RF_GAIN_CORRECT,
- 0x36303636);
- rt2800_register_write(rt2x00dev, TX0_RF_GAIN_ATTEN,
- 0x6C6C6B6C);
- rt2800_register_write(rt2x00dev, TX1_RF_GAIN_ATTEN,
- 0x6C6C6B6C);
+ /* BBP for GLRT BW */
+ bbp = conf_is_ht40(conf) ?
+ 0x10 : rt2x00_has_cap_external_lna_bg(rt2x00dev) ?
+ 0x15 : 0x1a;
+ rt2800_bbp_glrt_write(rt2x00dev, 141, bbp);
+
+ bbp = conf_is_ht40(conf) ? 0x2f : 0x40;
+ rt2800_bbp_glrt_write(rt2x00dev, 157, bbp);
+
+ if (rt2x00dev->default_ant.rx_chain_num == 1) {
+ rt2800_bbp_write(rt2x00dev, 91, 0x07);
+ rt2800_bbp_write(rt2x00dev, 95, 0x1a);
+ rt2800_bbp_glrt_write(rt2x00dev, 128, 0xa0);
+ rt2800_bbp_glrt_write(rt2x00dev, 170, 0x12);
+ rt2800_bbp_glrt_write(rt2x00dev, 171, 0x10);
+ } else {
+ rt2800_bbp_write(rt2x00dev, 91, 0x06);
+ rt2800_bbp_write(rt2x00dev, 95, 0x9a);
+ rt2800_bbp_glrt_write(rt2x00dev, 128, 0xe0);
+ rt2800_bbp_glrt_write(rt2x00dev, 170, 0x30);
+ rt2800_bbp_glrt_write(rt2x00dev, 171, 0x30);
}
+
+ /* AGC init */
+ bbp = rf->channel <= 14 ? 0x04 + 2 * rt2x00dev->lna_gain : 0;
+ rt2800_bbp_write_with_rx_chain(rt2x00dev, 66, bbp);
+
+ usleep_range(1000, 1500);
}
bbp = rt2800_bbp_read(rt2x00dev, 4);
@@ -5600,43 +5577,6 @@ void rt2800_vco_calibration(struct rt2x00_dev *rt2x00dev)
}
}
rt2800_register_write(rt2x00dev, TX_PIN_CFG, tx_pin);
-
- if (rt2x00_rt(rt2x00dev, RT6352)) {
- if (rt2x00dev->default_ant.rx_chain_num == 1) {
- rt2800_bbp_write(rt2x00dev, 91, 0x07);
- rt2800_bbp_write(rt2x00dev, 95, 0x1A);
- rt2800_bbp_write(rt2x00dev, 195, 128);
- rt2800_bbp_write(rt2x00dev, 196, 0xA0);
- rt2800_bbp_write(rt2x00dev, 195, 170);
- rt2800_bbp_write(rt2x00dev, 196, 0x12);
- rt2800_bbp_write(rt2x00dev, 195, 171);
- rt2800_bbp_write(rt2x00dev, 196, 0x10);
- } else {
- rt2800_bbp_write(rt2x00dev, 91, 0x06);
- rt2800_bbp_write(rt2x00dev, 95, 0x9A);
- rt2800_bbp_write(rt2x00dev, 195, 128);
- rt2800_bbp_write(rt2x00dev, 196, 0xE0);
- rt2800_bbp_write(rt2x00dev, 195, 170);
- rt2800_bbp_write(rt2x00dev, 196, 0x30);
- rt2800_bbp_write(rt2x00dev, 195, 171);
- rt2800_bbp_write(rt2x00dev, 196, 0x30);
- }
-
- if (rt2x00_has_cap_external_lna_bg(rt2x00dev)) {
- rt2800_bbp_write(rt2x00dev, 75, 0x68);
- rt2800_bbp_write(rt2x00dev, 76, 0x4C);
- rt2800_bbp_write(rt2x00dev, 79, 0x1C);
- rt2800_bbp_write(rt2x00dev, 80, 0x0C);
- rt2800_bbp_write(rt2x00dev, 82, 0xB6);
- }
-
- /* On 11A, We should delay and wait RF/BBP to be stable
- * and the appropriate time should be 1000 micro seconds
- * 2005/06/05 - On 11G, we also need this delay time.
- * Otherwise it's difficult to pass the WHQL.
- */
- usleep_range(1000, 1500);
- }
}
EXPORT_SYMBOL_GPL(rt2800_vco_calibration);
@@ -5845,6 +5785,7 @@ static int rt2800_init_registers(struct rt2x00_dev *rt2x00dev)
struct rt2800_drv_data *drv_data = rt2x00dev->drv_data;
u32 reg;
u16 eeprom;
+ u8 bbp;
unsigned int i;
int ret;
@@ -5854,6 +5795,19 @@ static int rt2800_init_registers(struct rt2x00_dev *rt2x00dev)
if (ret)
return ret;
+ if (rt2x00_rt(rt2x00dev, RT6352)) {
+ rt2800_register_write(rt2x00dev, MAC_SYS_CTRL, 0x01);
+
+ bbp = rt2800_bbp_read(rt2x00dev, 21);
+ bbp |= 0x01;
+ rt2800_bbp_write(rt2x00dev, 21, bbp);
+ bbp = rt2800_bbp_read(rt2x00dev, 21);
+ bbp &= (~0x01);
+ rt2800_bbp_write(rt2x00dev, 21, bbp);
+
+ rt2800_register_write(rt2x00dev, MAC_SYS_CTRL, 0x00);
+ }
+
rt2800_register_write(rt2x00dev, LEGACY_BASIC_RATE, 0x0000013f);
rt2800_register_write(rt2x00dev, HT_BASIC_RATE, 0x00008003);
@@ -6007,6 +5961,14 @@ static int rt2800_init_registers(struct rt2x00_dev *rt2x00dev)
reg = rt2800_register_read(rt2x00dev, TX_ALC_CFG_1);
rt2x00_set_field32(&reg, TX_ALC_CFG_1_ROS_BUSY_EN, 0);
rt2800_register_write(rt2x00dev, TX_ALC_CFG_1, reg);
+
+ rt2800_register_write(rt2x00dev, AMPDU_MAX_LEN_20M1S, 0x77754433);
+ rt2800_register_write(rt2x00dev, AMPDU_MAX_LEN_20M2S, 0x77765543);
+ rt2800_register_write(rt2x00dev, AMPDU_MAX_LEN_40M1S, 0x77765544);
+ rt2800_register_write(rt2x00dev, AMPDU_MAX_LEN_40M2S, 0x77765544);
+
+ rt2800_register_write(rt2x00dev, HT_FBK_TO_LEGACY, 0x1010);
+
} else {
rt2800_register_write(rt2x00dev, TX_SW_CFG0, 0x00000000);
rt2800_register_write(rt2x00dev, TX_SW_CFG1, 0x00080606);
@@ -7225,6 +7187,8 @@ static void rt2800_init_bbp_6352(struct rt2x00_dev *rt2x00dev)
rt2800_bbp_dcoc_write(rt2x00dev, 159, 0x64);
rt2800_bbp4_mac_if_ctrl(rt2x00dev);
+
+ rt2800_bbp_write(rt2x00dev, 84, 0x19);
}
static void rt2800_init_bbp(struct rt2x00_dev *rt2x00dev)
@@ -9700,9 +9664,6 @@ static void rt2800_loft_iq_calibration(struct rt2x00_dev *rt2x00dev)
rt2x00_dbg(rt2x00dev, "Used VGA %d %x\n", vga_gain[ch_idx],
rfvga_gain_table[vga_gain[ch_idx]]);
-
- if (vga_gain[ch_idx] < 0)
- vga_gain[ch_idx] = 0;
}
rfvalue = rfvga_gain_table[vga_gain[ch_idx]];
@@ -10339,6 +10300,128 @@ do_cal:
rt2800_register_write(rt2x00dev, RF_BYPASS0, MAC_RF_BYPASS0);
}
+static void rt2800_restore_rf_bbp_rt6352(struct rt2x00_dev *rt2x00dev)
+{
+ if (rt2x00_has_cap_external_pa(rt2x00dev)) {
+ rt2800_register_write(rt2x00dev, RF_CONTROL3, 0x0);
+ rt2800_register_write(rt2x00dev, RF_BYPASS3, 0x0);
+ }
+
+ if (rt2x00_has_cap_external_lna_bg(rt2x00dev)) {
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 14, 0x16);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 17, 0x23);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 18, 0x02);
+ }
+
+ if (rt2x00_has_cap_external_pa(rt2x00dev)) {
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 43, 0xd3);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 44, 0xb3);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 45, 0xd5);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 46, 0x27);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 47, 0x6c);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 48, 0xfc);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 49, 0x1f);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 54, 0x27);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 55, 0x66);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 56, 0xff);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 57, 0x1c);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 58, 0x20);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 59, 0x6b);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 60, 0xf7);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 61, 0x09);
+ }
+
+ if (rt2x00_has_cap_external_lna_bg(rt2x00dev)) {
+ rt2800_bbp_write(rt2x00dev, 75, 0x60);
+ rt2800_bbp_write(rt2x00dev, 76, 0x44);
+ rt2800_bbp_write(rt2x00dev, 79, 0x1c);
+ rt2800_bbp_write(rt2x00dev, 80, 0x0c);
+ rt2800_bbp_write(rt2x00dev, 82, 0xB6);
+ }
+
+ if (rt2x00_has_cap_external_pa(rt2x00dev)) {
+ rt2800_register_write(rt2x00dev, TX0_RF_GAIN_CORRECT, 0x3630363a);
+ rt2800_register_write(rt2x00dev, TX0_RF_GAIN_ATTEN, 0x6c6c666c);
+ rt2800_register_write(rt2x00dev, TX1_RF_GAIN_ATTEN, 0x6c6c666c);
+ }
+}
+
+static void rt2800_calibration_rt6352(struct rt2x00_dev *rt2x00dev)
+{
+ u32 reg;
+
+ if (rt2x00_has_cap_external_pa(rt2x00dev) ||
+ rt2x00_has_cap_external_lna_bg(rt2x00dev))
+ rt2800_restore_rf_bbp_rt6352(rt2x00dev);
+
+ rt2800_r_calibration(rt2x00dev);
+ rt2800_rf_self_txdc_cal(rt2x00dev);
+ rt2800_rxdcoc_calibration(rt2x00dev);
+ rt2800_bw_filter_calibration(rt2x00dev, true);
+ rt2800_bw_filter_calibration(rt2x00dev, false);
+ rt2800_loft_iq_calibration(rt2x00dev);
+
+ /* missing DPD calibration for internal PA devices */
+
+ rt2800_rxdcoc_calibration(rt2x00dev);
+ rt2800_rxiq_calibration(rt2x00dev);
+
+ if (!rt2x00_has_cap_external_pa(rt2x00dev) &&
+ !rt2x00_has_cap_external_lna_bg(rt2x00dev))
+ return;
+
+ if (rt2x00_has_cap_external_pa(rt2x00dev)) {
+ reg = rt2800_register_read(rt2x00dev, RF_CONTROL3);
+ reg |= 0x00000101;
+ rt2800_register_write(rt2x00dev, RF_CONTROL3, reg);
+
+ reg = rt2800_register_read(rt2x00dev, RF_BYPASS3);
+ reg |= 0x00000101;
+ rt2800_register_write(rt2x00dev, RF_BYPASS3, reg);
+ }
+
+ if (rt2x00_has_cap_external_lna_bg(rt2x00dev)) {
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 14, 0x66);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 17, 0x20);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 18, 0x42);
+ }
+
+ if (rt2x00_has_cap_external_pa(rt2x00dev)) {
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 43, 0x73);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 44, 0x73);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 45, 0x73);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 46, 0x27);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 47, 0xc8);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 48, 0xa4);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 49, 0x05);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 54, 0x27);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 55, 0xc8);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 56, 0xa4);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 57, 0x05);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 58, 0x27);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 59, 0xc8);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 60, 0xa4);
+ rt2800_rfcsr_write_chanreg(rt2x00dev, 61, 0x05);
+ }
+
+ if (rt2x00_has_cap_external_pa(rt2x00dev))
+ rt2800_rfcsr_write_dccal(rt2x00dev, 05, 0x00);
+
+ if (rt2x00_has_cap_external_lna_bg(rt2x00dev)) {
+ rt2800_bbp_write(rt2x00dev, 75, 0x68);
+ rt2800_bbp_write(rt2x00dev, 76, 0x4c);
+ rt2800_bbp_write(rt2x00dev, 79, 0x1c);
+ rt2800_bbp_write(rt2x00dev, 80, 0x0c);
+ rt2800_bbp_write(rt2x00dev, 82, 0xb6);
+ }
+
+ if (rt2x00_has_cap_external_pa(rt2x00dev)) {
+ rt2800_register_write(rt2x00dev, TX0_RF_GAIN_CORRECT, 0x36303636);
+ rt2800_register_write(rt2x00dev, TX0_RF_GAIN_ATTEN, 0x6c6c6b6c);
+ rt2800_register_write(rt2x00dev, TX1_RF_GAIN_ATTEN, 0x6c6c6b6c);
+ }
+}
+
static void rt2800_init_rfcsr_6352(struct rt2x00_dev *rt2x00dev)
{
/* Initialize RF central register to default value */
@@ -10603,13 +10686,8 @@ static void rt2800_init_rfcsr_6352(struct rt2x00_dev *rt2x00dev)
rt2800_rfcsr_write_dccal(rt2x00dev, 5, 0x00);
rt2800_rfcsr_write_dccal(rt2x00dev, 17, 0x7C);
- rt2800_r_calibration(rt2x00dev);
- rt2800_rf_self_txdc_cal(rt2x00dev);
- rt2800_rxdcoc_calibration(rt2x00dev);
- rt2800_bw_filter_calibration(rt2x00dev, true);
- rt2800_bw_filter_calibration(rt2x00dev, false);
- rt2800_loft_iq_calibration(rt2x00dev);
- rt2800_rxiq_calibration(rt2x00dev);
+ /* Do calibration and init PA/LNA */
+ rt2800_calibration_rt6352(rt2x00dev);
}
static void rt2800_init_rfcsr(struct rt2x00_dev *rt2x00dev)
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2800mmio.c b/drivers/net/wireless/ralink/rt2x00/rt2800mmio.c
index 862098f753d2..5323acff962a 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2800mmio.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2800mmio.c
@@ -760,6 +760,9 @@ int rt2800mmio_init_registers(struct rt2x00_dev *rt2x00dev)
rt2x00mmio_register_write(rt2x00dev, PWR_PIN_CFG, 0x00000003);
+ if (rt2x00_rt(rt2x00dev, RT6352))
+ return 0;
+
reg = 0;
rt2x00_set_field32(&reg, MAC_SYS_CTRL_RESET_CSR, 1);
rt2x00_set_field32(&reg, MAC_SYS_CTRL_RESET_BBP, 1);
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00.h b/drivers/net/wireless/ralink/rt2x00/rt2x00.h
index 07a6a5a9ce13..aaaf99331967 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2x00.h
+++ b/drivers/net/wireless/ralink/rt2x00/rt2x00.h
@@ -1263,6 +1263,12 @@ rt2x00_has_cap_external_lna_bg(struct rt2x00_dev *rt2x00dev)
}
static inline bool
+rt2x00_has_cap_external_pa(struct rt2x00_dev *rt2x00dev)
+{
+ return rt2x00_has_cap_flag(rt2x00dev, CAPABILITY_EXTERNAL_PA_TX0);
+}
+
+static inline bool
rt2x00_has_cap_double_antenna(struct rt2x00_dev *rt2x00dev)
{
return rt2x00_has_cap_flag(rt2x00dev, CAPABILITY_DOUBLE_ANTENNA);
diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c b/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
index 9a9cfd0ce402..c88ce446e117 100644
--- a/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
+++ b/drivers/net/wireless/ralink/rt2x00/rt2x00dev.c
@@ -533,7 +533,7 @@ void rt2x00lib_txdone(struct queue_entry *entry,
*/
if (!(skbdesc_flags & SKBDESC_NOT_MAC80211)) {
if (rt2x00_has_cap_flag(rt2x00dev, REQUIRE_TASKLET_CONTEXT))
- ieee80211_tx_status(rt2x00dev->hw, entry->skb);
+ ieee80211_tx_status_skb(rt2x00dev->hw, entry->skb);
else
ieee80211_tx_status_ni(rt2x00dev->hw, entry->skb);
} else {
diff --git a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
index 5d102a1246a3..43ee7592bc6e 100644
--- a/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
+++ b/drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c
@@ -7500,6 +7500,7 @@ static int rtl8xxxu_probe(struct usb_interface *interface,
case 0x8179:
case 0xb711:
case 0xf192:
+ case 0x2005:
untested = 0;
break;
}
@@ -7800,6 +7801,7 @@ static const struct usb_device_id dev_table[] = {
/* Asus USB-N13 rev C1 */
{USB_DEVICE_AND_INTERFACE_INFO(0x0b05, 0x18f1, 0xff, 0xff, 0xff),
.driver_info = (unsigned long)&rtl8192fu_fops},
+/* EDIMAX EW-7722UTn V3 */
{USB_DEVICE_AND_INTERFACE_INFO(0x7392, 0xb722, 0xff, 0xff, 0xff),
.driver_info = (unsigned long)&rtl8192fu_fops},
{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x318b, 0xff, 0xff, 0xff),
diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c
index 807a53a97325..7ce37fb4fdbf 100644
--- a/drivers/net/wireless/realtek/rtlwifi/base.c
+++ b/drivers/net/wireless/realtek/rtlwifi/base.c
@@ -1317,12 +1317,6 @@ bool rtl_tx_mgmt_proc(struct ieee80211_hw *hw, struct sk_buff *skb)
struct rtl_priv *rtlpriv = rtl_priv(hw);
__le16 fc = rtl_get_fc(skb);
- if (rtlpriv->dm.supp_phymode_switch &&
- mac->link_state < MAC80211_LINKED &&
- (ieee80211_is_auth(fc) || ieee80211_is_probe_req(fc))) {
- if (rtlpriv->cfg->ops->chk_switch_dmdp)
- rtlpriv->cfg->ops->chk_switch_dmdp(hw);
- }
if (ieee80211_is_auth(fc)) {
rtl_dbg(rtlpriv, COMP_SEND, DBG_DMESG, "MAC80211_LINKING\n");
diff --git a/drivers/net/wireless/realtek/rtlwifi/core.c b/drivers/net/wireless/realtek/rtlwifi/core.c
index 3835b639d453..69e97647e3d6 100644
--- a/drivers/net/wireless/realtek/rtlwifi/core.c
+++ b/drivers/net/wireless/realtek/rtlwifi/core.c
@@ -662,13 +662,6 @@ static int rtl_op_config(struct ieee80211_hw *hw, u32 changed)
if (mac->act_scanning)
mac->n_channels++;
- if (rtlpriv->dm.supp_phymode_switch &&
- mac->link_state < MAC80211_LINKED &&
- !mac->act_scanning) {
- if (rtlpriv->cfg->ops->chk_switch_dmdp)
- rtlpriv->cfg->ops->chk_switch_dmdp(hw);
- }
-
/*
*because we should back channel to
*current_network.chan in scanning,
@@ -1197,10 +1190,6 @@ static void rtl_op_bss_info_changed(struct ieee80211_hw *hw,
mac->vendor = PEER_UNKNOWN;
mac->mode = 0;
- if (rtlpriv->dm.supp_phymode_switch) {
- if (rtlpriv->cfg->ops->chk_switch_dmdp)
- rtlpriv->cfg->ops->chk_switch_dmdp(hw);
- }
rtl_dbg(rtlpriv, COMP_MAC80211, DBG_DMESG,
"BSS_CHANGED_UN_ASSOC\n");
}
@@ -1464,11 +1453,6 @@ static void rtl_op_sw_scan_start(struct ieee80211_hw *hw,
rtlpriv->btcoexist.btc_ops->btc_scan_notify_wifi_only(rtlpriv,
1);
- if (rtlpriv->dm.supp_phymode_switch) {
- if (rtlpriv->cfg->ops->chk_switch_dmdp)
- rtlpriv->cfg->ops->chk_switch_dmdp(hw);
- }
-
if (mac->link_state == MAC80211_LINKED) {
rtl_lps_leave(hw, true);
mac->link_state = MAC80211_LINKED_SCANNING;
@@ -1897,7 +1881,7 @@ bool rtl_cmd_send_packet(struct ieee80211_hw *hw, struct sk_buff *skb)
/*this is wrong, fill_tx_cmddesc needs update*/
pdesc = &ring->desc[0];
- rtlpriv->cfg->ops->fill_tx_cmddesc(hw, (u8 *)pdesc, 1, 1, skb);
+ rtlpriv->cfg->ops->fill_tx_cmddesc(hw, (u8 *)pdesc, skb);
__skb_queue_tail(&ring->queue, skb);
diff --git a/drivers/net/wireless/realtek/rtlwifi/ps.c b/drivers/net/wireless/realtek/rtlwifi/ps.c
index 629c03271bde..6241e4fed4f6 100644
--- a/drivers/net/wireless/realtek/rtlwifi/ps.c
+++ b/drivers/net/wireless/realtek/rtlwifi/ps.c
@@ -681,25 +681,10 @@ void rtl_swlps_wq_callback(struct work_struct *work)
ps_work.work);
struct ieee80211_hw *hw = rtlworks->hw;
struct rtl_priv *rtlpriv = rtl_priv(hw);
- bool ps = false;
-
- ps = (hw->conf.flags & IEEE80211_CONF_PS);
/* we can sleep after ps null send ok */
- if (rtlpriv->psc.state_inap) {
+ if (rtlpriv->psc.state_inap)
rtl_swlps_rf_sleep(hw);
-
- if (rtlpriv->psc.state && !ps) {
- rtlpriv->psc.sleep_ms = jiffies_to_msecs(jiffies -
- rtlpriv->psc.last_action);
- }
-
- if (ps)
- rtlpriv->psc.last_slept = jiffies;
-
- rtlpriv->psc.last_action = jiffies;
- rtlpriv->psc.state = ps;
- }
}
static void rtl_p2p_noa_ie(struct ieee80211_hw *hw, void *data,
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/dm.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/dm.c
index 6f61d6a10627..5a34894a533b 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/dm.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/dm.c
@@ -799,7 +799,7 @@ static void rtl88e_dm_check_edca_turbo(struct ieee80211_hw *hw)
}
if (rtlpriv->btcoexist.bt_edca_dl != 0) {
- edca_be_ul = rtlpriv->btcoexist.bt_edca_dl;
+ edca_be_dl = rtlpriv->btcoexist.bt_edca_dl;
bt_change_edca = true;
}
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
index 58b1a46066b5..27f6c35ba0f9 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
@@ -433,14 +433,9 @@ void rtl88ee_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
break;
case HW_VAR_AMPDU_MIN_SPACE:{
u8 min_spacing_to_set;
- u8 sec_min_space;
min_spacing_to_set = *val;
if (min_spacing_to_set <= 7) {
- sec_min_space = 0;
-
- if (min_spacing_to_set < sec_min_space)
- min_spacing_to_set = sec_min_space;
mac->min_space_cfg = ((mac->min_space_cfg &
0xf8) |
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.c
index 65ebe52883d3..d094163a9a71 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.c
@@ -665,9 +665,8 @@ void rtl88ee_tx_fill_desc(struct ieee80211_hw *hw,
rtl_dbg(rtlpriv, COMP_SEND, DBG_TRACE, "\n");
}
-void rtl88ee_tx_fill_cmddesc(struct ieee80211_hw *hw,
- u8 *pdesc8, bool firstseg,
- bool lastseg, struct sk_buff *skb)
+void rtl88ee_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc8,
+ struct sk_buff *skb)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
@@ -687,8 +686,7 @@ void rtl88ee_tx_fill_cmddesc(struct ieee80211_hw *hw,
}
clear_pci_tx_desc_content(pdesc, TX_DESC_SIZE);
- if (firstseg)
- set_tx_desc_offset(pdesc, USB_HWDESC_HEADER_LEN);
+ set_tx_desc_offset(pdesc, USB_HWDESC_HEADER_LEN);
set_tx_desc_tx_rate(pdesc, DESC92C_RATE1M);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.h
index e17f70b4d199..aae654b0e3ba 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/trx.h
@@ -797,6 +797,5 @@ bool rtl88ee_is_tx_desc_closed(struct ieee80211_hw *hw,
u8 hw_queue, u16 index);
void rtl88ee_tx_polling(struct ieee80211_hw *hw, u8 hw_queue);
void rtl88ee_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
- bool firstseg, bool lastseg,
struct sk_buff *skb);
#endif
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/dm_common.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/dm_common.c
index 0b6a15c2e5cc..d92aad60edfe 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192c/dm_common.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192c/dm_common.c
@@ -640,7 +640,7 @@ static void rtl92c_dm_check_edca_turbo(struct ieee80211_hw *hw)
}
if (rtlpriv->btcoexist.bt_edca_dl != 0) {
- edca_be_ul = rtlpriv->btcoexist.bt_edca_dl;
+ edca_be_dl = rtlpriv->btcoexist.bt_edca_dl;
bt_change_edca = true;
}
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/hw.c
index 049c4fe9eeed..0bc915723b93 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/hw.c
@@ -208,14 +208,9 @@ void rtl92ce_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
}
case HW_VAR_AMPDU_MIN_SPACE:{
u8 min_spacing_to_set;
- u8 sec_min_space;
min_spacing_to_set = *val;
if (min_spacing_to_set <= 7) {
- sec_min_space = 0;
-
- if (min_spacing_to_set < sec_min_space)
- min_spacing_to_set = sec_min_space;
mac->min_space_cfg = ((mac->min_space_cfg &
0xf8) |
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.c
index 5376bb34251f..50e139186a93 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.c
@@ -518,9 +518,8 @@ void rtl92ce_tx_fill_desc(struct ieee80211_hw *hw,
rtl_dbg(rtlpriv, COMP_SEND, DBG_TRACE, "\n");
}
-void rtl92ce_tx_fill_cmddesc(struct ieee80211_hw *hw,
- u8 *pdesc8, bool firstseg,
- bool lastseg, struct sk_buff *skb)
+void rtl92ce_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc8,
+ struct sk_buff *skb)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
@@ -540,9 +539,7 @@ void rtl92ce_tx_fill_cmddesc(struct ieee80211_hw *hw,
}
clear_pci_tx_desc_content(pdesc, TX_DESC_SIZE);
- if (firstseg)
- set_tx_desc_offset(pdesc, USB_HWDESC_HEADER_LEN);
-
+ set_tx_desc_offset(pdesc, USB_HWDESC_HEADER_LEN);
set_tx_desc_tx_rate(pdesc, DESC_RATE1M);
set_tx_desc_seq(pdesc, 0);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.h
index b45b05a6a523..f3ffe3d9883c 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ce/trx.h
@@ -527,6 +527,5 @@ bool rtl92ce_is_tx_desc_closed(struct ieee80211_hw *hw,
u8 hw_queue, u16 index);
void rtl92ce_tx_polling(struct ieee80211_hw *hw, u8 hw_queue);
void rtl92ce_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
- bool b_firstseg, bool b_lastseg,
struct sk_buff *skb);
#endif
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c
index a040c07791d1..5ec0eb8773a5 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/hw.c
@@ -1539,7 +1539,7 @@ static bool usb_cmd_send_packet(struct ieee80211_hw *hw, struct sk_buff *skb)
* if its "here".
*
* This is maybe necessary:
- * rtlpriv->cfg->ops->fill_tx_cmddesc(hw, buffer, 1, 1, skb);
+ * rtlpriv->cfg->ops->fill_tx_cmddesc(hw, buffer, skb);
*/
dev_kfree_skb(skb);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/sw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/sw.c
index e6403d4c937c..20b4aac69642 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/sw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/sw.c
@@ -102,7 +102,6 @@ static struct rtl_hal_ops rtl8192cu_hal_ops = {
.set_hw_reg = rtl92cu_set_hw_reg,
.update_rate_tbl = rtl92cu_update_hal_rate_tbl,
.fill_tx_desc = rtl92cu_tx_fill_desc,
- .fill_fake_txdesc = rtl92cu_fill_fake_txdesc,
.fill_tx_cmddesc = rtl92cu_tx_fill_cmddesc,
.query_rx_desc = rtl92cu_rx_query_desc,
.set_channel_access = rtl92cu_update_channel_access_setting,
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/trx.c
index b70767e72f3d..2f44c8aa6066 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/trx.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/trx.c
@@ -600,35 +600,8 @@ void rtl92cu_tx_fill_desc(struct ieee80211_hw *hw,
rtl_dbg(rtlpriv, COMP_SEND, DBG_TRACE, "==>\n");
}
-void rtl92cu_fill_fake_txdesc(struct ieee80211_hw *hw, u8 *pdesc8,
- u32 buffer_len, bool is_pspoll)
-{
- __le32 *pdesc = (__le32 *)pdesc8;
-
- /* Clear all status */
- memset(pdesc, 0, RTL_TX_HEADER_SIZE);
- set_tx_desc_first_seg(pdesc, 1); /* bFirstSeg; */
- set_tx_desc_last_seg(pdesc, 1); /* bLastSeg; */
- set_tx_desc_offset(pdesc, RTL_TX_HEADER_SIZE); /* Offset = 32 */
- set_tx_desc_pkt_size(pdesc, buffer_len); /* Buffer size + command hdr */
- set_tx_desc_queue_sel(pdesc, QSLT_MGNT); /* Fixed queue of Mgnt queue */
- /* Set NAVUSEHDR to prevent Ps-poll AId filed to be changed to error
- * vlaue by Hw. */
- if (is_pspoll) {
- set_tx_desc_nav_use_hdr(pdesc, 1);
- } else {
- set_tx_desc_hwseq_en(pdesc, 1); /* Hw set sequence number */
- set_tx_desc_pkt_id(pdesc, BIT(3)); /* set bit3 to 1. */
- }
- set_tx_desc_use_rate(pdesc, 1); /* use data rate which is set by Sw */
- set_tx_desc_own(pdesc, 1);
- set_tx_desc_tx_rate(pdesc, DESC_RATE1M);
- _rtl_tx_desc_checksum(pdesc);
-}
-
-void rtl92cu_tx_fill_cmddesc(struct ieee80211_hw *hw,
- u8 *pdesc8, bool firstseg,
- bool lastseg, struct sk_buff *skb)
+void rtl92cu_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc8,
+ struct sk_buff *skb)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
u8 fw_queue = QSLT_BEACON;
@@ -637,8 +610,7 @@ void rtl92cu_tx_fill_cmddesc(struct ieee80211_hw *hw,
__le32 *pdesc = (__le32 *)pdesc8;
memset((void *)pdesc, 0, RTL_TX_HEADER_SIZE);
- if (firstseg)
- set_tx_desc_offset(pdesc, RTL_TX_HEADER_SIZE);
+ set_tx_desc_offset(pdesc, RTL_TX_HEADER_SIZE);
set_tx_desc_tx_rate(pdesc, DESC_RATE1M);
set_tx_desc_seq(pdesc, 0);
set_tx_desc_linip(pdesc, 0);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/trx.h
index 171fe39dfb0c..5f81cab205cc 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/trx.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192cu/trx.h
@@ -394,10 +394,7 @@ void rtl92cu_tx_fill_desc(struct ieee80211_hw *hw,
struct sk_buff *skb,
u8 queue_index,
struct rtl_tcb_desc *tcb_desc);
-void rtl92cu_fill_fake_txdesc(struct ieee80211_hw *hw, u8 *pdesc,
- u32 buffer_len, bool ispspoll);
-void rtl92cu_tx_fill_cmddesc(struct ieee80211_hw *hw,
- u8 *pdesc, bool b_firstseg,
- bool b_lastseg, struct sk_buff *skb);
+void rtl92cu_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
+ struct sk_buff *skb);
#endif
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/dm.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/dm.c
index 6cc9c7649eda..cf4aca83bd05 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/dm.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/dm.c
@@ -592,32 +592,18 @@ static void rtl92d_dm_check_edca_turbo(struct ieee80211_hw *hw)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_mac *mac = rtl_mac(rtl_priv(hw));
+ const u32 edca_be_ul = 0x5ea42b;
+ const u32 edca_be_dl = 0x5ea42b;
static u64 last_txok_cnt;
static u64 last_rxok_cnt;
u64 cur_txok_cnt;
u64 cur_rxok_cnt;
- u32 edca_be_ul = 0x5ea42b;
- u32 edca_be_dl = 0x5ea42b;
if (mac->link_state != MAC80211_LINKED) {
rtlpriv->dm.current_turbo_edca = false;
goto exit;
}
- /* Enable BEQ TxOP limit configuration in wireless G-mode. */
- /* To check whether we shall force turn on TXOP configuration. */
- if ((!rtlpriv->dm.disable_framebursting) &&
- (rtlpriv->sec.pairwise_enc_algorithm == WEP40_ENCRYPTION ||
- rtlpriv->sec.pairwise_enc_algorithm == WEP104_ENCRYPTION ||
- rtlpriv->sec.pairwise_enc_algorithm == TKIP_ENCRYPTION)) {
- /* Force TxOP limit to 0x005e for UL. */
- if (!(edca_be_ul & 0xffff0000))
- edca_be_ul |= 0x005e0000;
- /* Force TxOP limit to 0x005e for DL. */
- if (!(edca_be_dl & 0xffff0000))
- edca_be_dl |= 0x005e0000;
- }
-
if ((!rtlpriv->dm.is_any_nonbepkts) &&
(!rtlpriv->dm.disable_framebursting)) {
cur_txok_cnt = rtlpriv->stats.txbytesunicast - last_txok_cnt;
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/fw.c
index 9ddb8478784b..e1fb29962801 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/fw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/fw.c
@@ -469,7 +469,7 @@ static bool _rtl92d_cmd_send_packet(struct ieee80211_hw *hw,
pdesc = &ring->desc[idx];
/* discard output from call below */
rtlpriv->cfg->ops->get_desc(hw, (u8 *)pdesc, true, HW_DESC_OWN);
- rtlpriv->cfg->ops->fill_tx_cmddesc(hw, (u8 *) pdesc, 1, 1, skb);
+ rtlpriv->cfg->ops->fill_tx_cmddesc(hw, (u8 *)pdesc, skb);
__skb_queue_tail(&ring->queue, skb);
spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
rtlpriv->cfg->ops->tx_polling(hw, BEACON_QUEUE);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c
index 31a18bbface9..743ac6871bf4 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/hw.c
@@ -225,13 +225,9 @@ void rtl92de_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
}
case HW_VAR_AMPDU_MIN_SPACE: {
u8 min_spacing_to_set;
- u8 sec_min_space;
min_spacing_to_set = *val;
if (min_spacing_to_set <= 7) {
- sec_min_space = 0;
- if (min_spacing_to_set < sec_min_space)
- min_spacing_to_set = sec_min_space;
mac->min_space_cfg = ((mac->min_space_cfg & 0xf8) |
min_spacing_to_set);
*val = min_spacing_to_set;
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c
index c09c0c312665..02ac69c08ed3 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.c
@@ -655,9 +655,8 @@ void rtl92de_tx_fill_desc(struct ieee80211_hw *hw,
rtl_dbg(rtlpriv, COMP_SEND, DBG_TRACE, "\n");
}
-void rtl92de_tx_fill_cmddesc(struct ieee80211_hw *hw,
- u8 *pdesc8, bool firstseg,
- bool lastseg, struct sk_buff *skb)
+void rtl92de_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc8,
+ struct sk_buff *skb)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
@@ -678,8 +677,7 @@ void rtl92de_tx_fill_cmddesc(struct ieee80211_hw *hw,
return;
}
clear_pci_tx_desc_content(pdesc, TX_DESC_SIZE);
- if (firstseg)
- set_tx_desc_offset(pdesc, USB_HWDESC_HEADER_LEN);
+ set_tx_desc_offset(pdesc, USB_HWDESC_HEADER_LEN);
/* 5G have no CCK rate
* Caution: The macros below are multi-line expansions.
* The braces are needed no matter what checkpatch says
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h
index d01578875cd5..2992668c156c 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192de/trx.h
@@ -564,7 +564,6 @@ bool rtl92de_is_tx_desc_closed(struct ieee80211_hw *hw,
u8 hw_queue, u16 index);
void rtl92de_tx_polling(struct ieee80211_hw *hw, u8 hw_queue);
void rtl92de_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
- bool b_firstseg, bool b_lastseg,
struct sk_buff *skb);
#endif
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/dm.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/dm.c
index 997ff115b9ab..5a828a934fe9 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/dm.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/dm.c
@@ -936,8 +936,7 @@ void rtl92ee_dm_init(struct ieee80211_hw *hw)
static void rtl92ee_dm_common_info_self_update(struct ieee80211_hw *hw)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
- struct rtl_sta_info *drv_priv;
- u8 cnt = 0;
+ u8 cnt;
rtlpriv->dm.one_entry_only = false;
@@ -951,9 +950,7 @@ static void rtl92ee_dm_common_info_self_update(struct ieee80211_hw *hw)
rtlpriv->mac80211.opmode == NL80211_IFTYPE_ADHOC ||
rtlpriv->mac80211.opmode == NL80211_IFTYPE_MESH_POINT) {
spin_lock_bh(&rtlpriv->locks.entry_list_lock);
- list_for_each_entry(drv_priv, &rtlpriv->entry_list, list) {
- cnt++;
- }
+ cnt = list_count_nodes(&rtlpriv->entry_list);
spin_unlock_bh(&rtlpriv->locks.entry_list_lock);
if (cnt == 1)
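
The open-coded station-count loops being removed here are equivalent to a single list_count_nodes() call from <linux/list.h>; a minimal sketch of the two forms, assuming the caller already holds the lock that protects the list (the helper itself takes none):

	/* Old form: walk the list and count entries by hand. */
	struct rtl_sta_info *drv_priv;
	u8 cnt = 0;

	list_for_each_entry(drv_priv, &rtlpriv->entry_list, list)
		cnt++;

	/* New form: list_count_nodes() walks the list once and returns
	 * its length as a size_t.
	 */
	cnt = list_count_nodes(&rtlpriv->entry_list);
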
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/sw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/sw.c
index 616a47d8d97a..011ce82efeff 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/sw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/sw.c
@@ -199,7 +199,6 @@ static struct rtl_hal_ops rtl8192ee_hal_ops = {
.get_hw_reg = rtl92ee_get_hw_reg,
.set_hw_reg = rtl92ee_set_hw_reg,
.update_rate_tbl = rtl92ee_update_hal_rate_tbl,
- .pre_fill_tx_bd_desc = rtl92ee_pre_fill_tx_bd_desc,
.rx_desc_buff_remained_cnt = rtl92ee_rx_desc_buff_remained_cnt,
.rx_check_dma_ok = rtl92ee_rx_check_dma_ok,
.fill_tx_desc = rtl92ee_tx_fill_desc,
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.c
index a182cdeb58e2..16589e18494b 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.c
@@ -550,9 +550,11 @@ u16 rtl92ee_get_available_desc(struct ieee80211_hw *hw, u8 q_idx)
return point_diff;
}
-void rtl92ee_pre_fill_tx_bd_desc(struct ieee80211_hw *hw,
- u8 *tx_bd_desc8, u8 *desc8, u8 queue_index,
- struct sk_buff *skb, dma_addr_t addr)
+static void rtl92ee_pre_fill_tx_bd_desc(struct ieee80211_hw *hw,
+ u8 *tx_bd_desc8, u8 *desc8,
+ u8 queue_index,
+ struct sk_buff *skb,
+ dma_addr_t addr)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
@@ -827,9 +829,8 @@ void rtl92ee_tx_fill_desc(struct ieee80211_hw *hw,
rtl_dbg(rtlpriv, COMP_SEND, DBG_TRACE, "\n");
}
-void rtl92ee_tx_fill_cmddesc(struct ieee80211_hw *hw,
- u8 *pdesc8, bool firstseg,
- bool lastseg, struct sk_buff *skb)
+void rtl92ee_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc8,
+ struct sk_buff *skb)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
@@ -846,8 +847,7 @@ void rtl92ee_tx_fill_cmddesc(struct ieee80211_hw *hw,
}
clear_pci_tx_desc_content(pdesc, txdesc_len);
- if (firstseg)
- set_tx_desc_offset(pdesc, txdesc_len);
+ set_tx_desc_offset(pdesc, txdesc_len);
set_tx_desc_tx_rate(pdesc, DESC_RATE1M);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.h
index 967cef3a9cbf..4c6cf4f16f95 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.h
@@ -720,9 +720,6 @@ void rtl92ee_rx_check_dma_ok(struct ieee80211_hw *hw, u8 *header_desc,
u16 rtl92ee_rx_desc_buff_remained_cnt(struct ieee80211_hw *hw,
u8 queue_index);
u16 rtl92ee_get_available_desc(struct ieee80211_hw *hw, u8 queue_index);
-void rtl92ee_pre_fill_tx_bd_desc(struct ieee80211_hw *hw,
- u8 *tx_bd_desc, u8 *desc, u8 queue_index,
- struct sk_buff *skb, dma_addr_t addr);
void rtl92ee_tx_fill_desc(struct ieee80211_hw *hw,
struct ieee80211_hdr *hdr, u8 *pdesc_tx,
@@ -743,6 +740,5 @@ u64 rtl92ee_get_desc(struct ieee80211_hw *hw,
bool rtl92ee_is_tx_desc_closed(struct ieee80211_hw *hw, u8 hw_queue, u16 index);
void rtl92ee_tx_polling(struct ieee80211_hw *hw, u8 hw_queue);
void rtl92ee_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
- bool firstseg, bool lastseg,
struct sk_buff *skb);
#endif
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.c
index f570495af044..579b1789d6d1 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/fw.c
@@ -122,7 +122,7 @@ static bool _rtl92s_cmd_send_packet(struct ieee80211_hw *hw,
idx = (ring->idx + skb_queue_len(&ring->queue)) % ring->entries;
pdesc = &ring->desc[idx];
- rtlpriv->cfg->ops->fill_tx_cmddesc(hw, (u8 *)pdesc, 1, 1, skb);
+ rtlpriv->cfg->ops->fill_tx_cmddesc(hw, (u8 *)pdesc, skb);
__skb_queue_tail(&ring->queue, skb);
spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/trx.c
index a5853a170b58..f104cdb649f8 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/trx.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/trx.c
@@ -492,7 +492,7 @@ void rtl92se_tx_fill_desc(struct ieee80211_hw *hw,
}
void rtl92se_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc8,
- bool firstseg, bool lastseg, struct sk_buff *skb)
+ struct sk_buff *skb)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/trx.h
index 90aa12fc6a7f..40291a6a15d0 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/trx.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/trx.h
@@ -10,8 +10,8 @@ void rtl92se_tx_fill_desc(struct ieee80211_hw *hw,
struct ieee80211_sta *sta,
struct sk_buff *skb, u8 hw_queue,
struct rtl_tcb_desc *ptcb_desc);
-void rtl92se_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc, bool firstseg,
- bool lastseg, struct sk_buff *skb);
+void rtl92se_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
+ struct sk_buff *skb);
bool rtl92se_rx_query_desc(struct ieee80211_hw *hw, struct rtl_stats *stats,
struct ieee80211_rx_status *rx_status, u8 *pdesc,
struct sk_buff *skb);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/dm.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/dm.c
index 8ada31380efa..0ff8e355c23a 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/dm.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/dm.c
@@ -466,7 +466,7 @@ static void rtl8723e_dm_check_edca_turbo(struct ieee80211_hw *hw)
}
if (rtlpriv->btcoexist.bt_edca_dl != 0) {
- edca_be_ul = rtlpriv->btcoexist.bt_edca_dl;
+ edca_be_dl = rtlpriv->btcoexist.bt_edca_dl;
bt_change_edca = true;
}
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_btc.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_btc.c
index 53af0d209b11..b34dffc6a30c 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_btc.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_btc.c
@@ -1122,7 +1122,7 @@ static void rtl8723e_dm_bt_2_ant_hid_sco_esco(struct ieee80211_hw *hw)
/* Always ignore WlanAct if bHid|bSCOBusy|bSCOeSCO */
rtl_dbg(rtlpriv, COMP_BT_COEXIST, DBG_DMESG,
- "[BTCoex], BT btInqPageStartTime = 0x%x, btTxRxCntLvl = %d\n",
+ "[BTCoex], BT btInqPageStartTime = 0x%lx, btTxRxCntLvl = %d\n",
hal_coex_8723.bt_inq_page_start_time, bt_tx_rx_cnt_lvl);
if ((hal_coex_8723.bt_inq_page_start_time) ||
(BT_TXRX_CNT_LEVEL_3 == bt_tx_rx_cnt_lvl)) {
@@ -1335,7 +1335,7 @@ static void rtl8723e_dm_bt_2_ant_ftp_a2dp(struct ieee80211_hw *hw)
btdm8723.dec_bt_pwr = true;
rtl_dbg(rtlpriv, COMP_BT_COEXIST, DBG_DMESG,
- "[BTCoex], BT btInqPageStartTime = 0x%x, btTxRxCntLvl = %d\n",
+ "[BTCoex], BT btInqPageStartTime = 0x%lx, btTxRxCntLvl = %d\n",
hal_coex_8723.bt_inq_page_start_time, bt_tx_rx_cnt_lvl);
if ((hal_coex_8723.bt_inq_page_start_time) ||
@@ -1358,9 +1358,8 @@ static void rtl8723e_dm_bt_2_ant_ftp_a2dp(struct ieee80211_hw *hw)
static void rtl8723e_dm_bt_inq_page_monitor(struct ieee80211_hw *hw)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
- u32 cur_time;
+ unsigned long cur_time = jiffies;
- cur_time = jiffies;
if (hal_coex_8723.c2h_bt_inquiry_page) {
/* bt inquiry or page is started. */
if (hal_coex_8723.bt_inq_page_start_time == 0) {
@@ -1368,18 +1367,17 @@ static void rtl8723e_dm_bt_inq_page_monitor(struct ieee80211_hw *hw)
BT_COEX_STATE_BT_INQ_PAGE;
hal_coex_8723.bt_inq_page_start_time = cur_time;
rtl_dbg(rtlpriv, COMP_BT_COEXIST, DBG_DMESG,
- "[BTCoex], BT Inquiry/page is started at time : 0x%x\n",
+ "[BTCoex], BT Inquiry/page is started at time : 0x%lx\n",
hal_coex_8723.bt_inq_page_start_time);
}
}
rtl_dbg(rtlpriv, COMP_BT_COEXIST, DBG_DMESG,
- "[BTCoex], BT Inquiry/page started time : 0x%x, cur_time : 0x%x\n",
+ "[BTCoex], BT Inquiry/page started time : 0x%lx, cur_time : 0x%lx\n",
hal_coex_8723.bt_inq_page_start_time, cur_time);
if (hal_coex_8723.bt_inq_page_start_time) {
- if ((((long)cur_time -
- (long)hal_coex_8723.bt_inq_page_start_time) / HZ)
- >= 10) {
+ if (jiffies_to_msecs(cur_time -
+ hal_coex_8723.bt_inq_page_start_time) >= 10000) {
rtl_dbg(rtlpriv, COMP_BT_COEXIST, DBG_DMESG,
"[BTCoex], BT Inquiry/page >= 10sec!!!\n");
hal_coex_8723.bt_inq_page_start_time = 0;
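
The timeout rework above follows the standard jiffies idiom: keep the start time as an unsigned long snapshot of jiffies and let the jiffies helpers handle conversion and wrap-around, instead of casting to long and dividing by HZ by hand. A minimal sketch of a 10-second check, with illustrative variable names:

	unsigned long start = jiffies;

	/* ... some time later ... */
	if (jiffies_to_msecs(jiffies - start) >= 10000) {
		/* at least 10 seconds have elapsed */
	}

	/* Equivalent check using the wrap-safe comparison macros: */
	if (time_after(jiffies, start + 10 * HZ)) {
		/* at least 10 seconds have elapsed */
	}
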
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
index d26d4c4314a3..6991713a66d0 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
@@ -212,14 +212,9 @@ void rtl8723e_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
}
case HW_VAR_AMPDU_MIN_SPACE:{
u8 min_spacing_to_set;
- u8 sec_min_space;
min_spacing_to_set = *((u8 *)val);
if (min_spacing_to_set <= 7) {
- sec_min_space = 0;
-
- if (min_spacing_to_set < sec_min_space)
- min_spacing_to_set = sec_min_space;
mac->min_space_cfg = ((mac->min_space_cfg &
0xf8) |
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.c
index 7f294e698994..d9823ddab7be 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.c
@@ -519,9 +519,8 @@ void rtl8723e_tx_fill_desc(struct ieee80211_hw *hw,
rtl_dbg(rtlpriv, COMP_SEND, DBG_TRACE, "\n");
}
-void rtl8723e_tx_fill_cmddesc(struct ieee80211_hw *hw,
- u8 *pdesc8, bool firstseg,
- bool lastseg, struct sk_buff *skb)
+void rtl8723e_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc8,
+ struct sk_buff *skb)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
@@ -541,8 +540,7 @@ void rtl8723e_tx_fill_cmddesc(struct ieee80211_hw *hw,
}
clear_pci_tx_desc_content(pdesc, TX_DESC_SIZE);
- if (firstseg)
- set_tx_desc_offset(pdesc, USB_HWDESC_HEADER_LEN);
+ set_tx_desc_offset(pdesc, USB_HWDESC_HEADER_LEN);
set_tx_desc_tx_rate(pdesc, DESC92C_RATE1M);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.h
index 2d25f62a4d52..f352fddfff32 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/trx.h
@@ -530,6 +530,5 @@ bool rtl8723e_is_tx_desc_closed(struct ieee80211_hw *hw,
u8 hw_queue, u16 index);
void rtl8723e_tx_polling(struct ieee80211_hw *hw, u8 hw_queue);
void rtl8723e_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
- bool firstseg, bool lastseg,
struct sk_buff *skb);
#endif
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/dm.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/dm.c
index c3c990cc032f..c53f95144812 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/dm.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/dm.c
@@ -1210,8 +1210,7 @@ static void rtl8723be_dm_dynamic_atc_switch(struct ieee80211_hw *hw)
static void rtl8723be_dm_common_info_self_update(struct ieee80211_hw *hw)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
- u8 cnt = 0;
- struct rtl_sta_info *drv_priv;
+ u8 cnt;
rtlpriv->dm.one_entry_only = false;
@@ -1225,9 +1224,7 @@ static void rtl8723be_dm_common_info_self_update(struct ieee80211_hw *hw)
rtlpriv->mac80211.opmode == NL80211_IFTYPE_ADHOC ||
rtlpriv->mac80211.opmode == NL80211_IFTYPE_MESH_POINT) {
spin_lock_bh(&rtlpriv->locks.entry_list_lock);
- list_for_each_entry(drv_priv, &rtlpriv->entry_list, list) {
- cnt++;
- }
+ cnt = list_count_nodes(&rtlpriv->entry_list);
spin_unlock_bh(&rtlpriv->locks.entry_list_lock);
if (cnt == 1)
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
index 15575644551f..0e77de1baaf8 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
@@ -468,15 +468,9 @@ void rtl8723be_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
break;
case HW_VAR_AMPDU_MIN_SPACE:{
u8 min_spacing_to_set;
- u8 sec_min_space;
min_spacing_to_set = *((u8 *)val);
if (min_spacing_to_set <= 7) {
- sec_min_space = 0;
-
- if (min_spacing_to_set < sec_min_space)
- min_spacing_to_set = sec_min_space;
-
mac->min_space_cfg = ((mac->min_space_cfg & 0xf8) |
min_spacing_to_set);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.c
index 24ef7cc52e99..8b6352f7f93b 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.c
@@ -585,7 +585,6 @@ void rtl8723be_tx_fill_desc(struct ieee80211_hw *hw,
}
void rtl8723be_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc8,
- bool firstseg, bool lastseg,
struct sk_buff *skb)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.h
index 174aca20c7e1..da027f915cf4 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/trx.h
@@ -642,6 +642,5 @@ bool rtl8723be_is_tx_desc_closed(struct ieee80211_hw *hw,
u8 hw_queue, u16 index);
void rtl8723be_tx_polling(struct ieee80211_hw *hw, u8 hw_queue);
void rtl8723be_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
- bool firstseg, bool lastseg,
struct sk_buff *skb);
#endif
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c
index f3fe16798c59..76b5395539d0 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c
@@ -827,8 +827,7 @@ static void rtl8821ae_dm_dig(struct ieee80211_hw *hw)
static void rtl8821ae_dm_common_info_self_update(struct ieee80211_hw *hw)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
- u8 cnt = 0;
- struct rtl_sta_info *drv_priv;
+ u8 cnt;
rtlpriv->dm.tx_rate = 0xff;
@@ -844,8 +843,7 @@ static void rtl8821ae_dm_common_info_self_update(struct ieee80211_hw *hw)
rtlpriv->mac80211.opmode == NL80211_IFTYPE_ADHOC ||
rtlpriv->mac80211.opmode == NL80211_IFTYPE_MESH_POINT) {
spin_lock_bh(&rtlpriv->locks.entry_list_lock);
- list_for_each_entry(drv_priv, &rtlpriv->entry_list, list)
- cnt++;
+ cnt = list_count_nodes(&rtlpriv->entry_list);
spin_unlock_bh(&rtlpriv->locks.entry_list_lock);
if (cnt == 1)
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
index 3f8f6da33b12..1633328bc3d1 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
@@ -546,14 +546,9 @@ void rtl8821ae_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val)
break;
case HW_VAR_AMPDU_MIN_SPACE:{
u8 min_spacing_to_set;
- u8 sec_min_space;
min_spacing_to_set = *((u8 *)val);
if (min_spacing_to_set <= 7) {
- sec_min_space = 0;
-
- if (min_spacing_to_set < sec_min_space)
- min_spacing_to_set = sec_min_space;
mac->min_space_cfg = ((mac->min_space_cfg &
0xf8) |
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c
index d7cb3319d885..bd71592fe26a 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.c
@@ -828,9 +828,8 @@ void rtl8821ae_tx_fill_desc(struct ieee80211_hw *hw,
rtl_dbg(rtlpriv, COMP_SEND, DBG_TRACE, "\n");
}
-void rtl8821ae_tx_fill_cmddesc(struct ieee80211_hw *hw,
- u8 *pdesc8, bool firstseg,
- bool lastseg, struct sk_buff *skb)
+void rtl8821ae_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc8,
+ struct sk_buff *skb)
{
struct rtl_priv *rtlpriv = rtl_priv(hw);
struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.h b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.h
index a9ed6fd41089..1155365348f3 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.h
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/trx.h
@@ -648,6 +648,5 @@ bool rtl8821ae_is_tx_desc_closed(struct ieee80211_hw *hw,
u8 hw_queue, u16 index);
void rtl8821ae_tx_polling(struct ieee80211_hw *hw, u8 hw_queue);
void rtl8821ae_tx_fill_cmddesc(struct ieee80211_hw *hw, u8 *pdesc,
- bool firstseg, bool lastseg,
struct sk_buff *skb);
#endif
diff --git a/drivers/net/wireless/realtek/rtlwifi/wifi.h b/drivers/net/wireless/realtek/rtlwifi/wifi.h
index 2e7e04f91279..31a481f43a07 100644
--- a/drivers/net/wireless/realtek/rtlwifi/wifi.h
+++ b/drivers/net/wireless/realtek/rtlwifi/wifi.h
@@ -1597,7 +1597,7 @@ struct bt_coexist_8723 {
u8 c2h_bt_info;
bool c2h_bt_info_req_sent;
bool c2h_bt_inquiry_page;
- u32 bt_inq_page_start_time;
+ unsigned long bt_inq_page_start_time;
u8 bt_retry_cnt;
u8 c2h_bt_info_original;
u8 bt_inquiry_page_cnt;
@@ -2032,19 +2032,15 @@ struct rtl_ps_ctl {
/* for SW LPS*/
bool sw_ps_enabled;
- bool state;
bool state_inap;
bool multi_buffered;
u16 nullfunc_seq;
unsigned int dtim_counter;
- unsigned int sleep_ms;
unsigned long last_sleep_jiffies;
unsigned long last_awake_jiffies;
unsigned long last_delaylps_stamp_jiffies;
unsigned long last_dtim;
unsigned long last_beacon;
- unsigned long last_action;
- unsigned long last_slept;
/*For P2P PS */
struct rtl_p2p_ps_info p2p_ps_info;
@@ -2231,9 +2227,6 @@ struct rtl_hal_ops {
void (*update_rate_tbl)(struct ieee80211_hw *hw,
struct ieee80211_sta *sta, u8 rssi_leve,
bool update_bw);
- void (*pre_fill_tx_bd_desc)(struct ieee80211_hw *hw, u8 *tx_bd_desc,
- u8 *desc, u8 queue_index,
- struct sk_buff *skb, dma_addr_t addr);
void (*update_rate_mask)(struct ieee80211_hw *hw, u8 rssi_level);
u16 (*rx_desc_buff_remained_cnt)(struct ieee80211_hw *hw,
u8 queue_index);
@@ -2246,10 +2239,7 @@ struct rtl_hal_ops {
struct ieee80211_sta *sta,
struct sk_buff *skb, u8 hw_queue,
struct rtl_tcb_desc *ptcb_desc);
- void (*fill_fake_txdesc)(struct ieee80211_hw *hw, u8 *pdesc,
- u32 buffer_len, bool bsspspoll);
void (*fill_tx_cmddesc)(struct ieee80211_hw *hw, u8 *pdesc,
- bool firstseg, bool lastseg,
struct sk_buff *skb);
void (*fill_tx_special_desc)(struct ieee80211_hw *hw,
u8 *pdesc, u8 *pbd_desc,
@@ -2285,7 +2275,6 @@ struct rtl_hal_ops {
void (*set_rfreg)(struct ieee80211_hw *hw, enum radio_path rfpath,
u32 regaddr, u32 bitmask, u32 data);
void (*linked_set_reg)(struct ieee80211_hw *hw);
- void (*chk_switch_dmdp)(struct ieee80211_hw *hw);
void (*dualmac_switch_to_dmdp)(struct ieee80211_hw *hw);
bool (*phy_rf6052_config)(struct ieee80211_hw *hw);
void (*phy_rf6052_set_cck_txpower)(struct ieee80211_hw *hw,
@@ -2708,7 +2697,7 @@ struct rtl_c2hcmd {
struct rtl_bssid_entry {
struct list_head list;
u8 bssid[ETH_ALEN];
- u32 age;
+ unsigned long age;
};
struct rtl_scan_list {
diff --git a/drivers/net/wireless/realtek/rtw88/debug.c b/drivers/net/wireless/realtek/rtw88/debug.c
index f8ba133baff0..35bc37a3c469 100644
--- a/drivers/net/wireless/realtek/rtw88/debug.c
+++ b/drivers/net/wireless/realtek/rtw88/debug.c
@@ -1233,9 +1233,9 @@ static struct rtw_debugfs_priv rtw_debug_priv_dm_cap = {
#define rtw_debugfs_add_core(name, mode, fopname, parent) \
do { \
rtw_debug_priv_ ##name.rtwdev = rtwdev; \
- if (!debugfs_create_file(#name, mode, \
+ if (IS_ERR(debugfs_create_file(#name, mode, \
parent, &rtw_debug_priv_ ##name,\
- &file_ops_ ##fopname)) \
+ &file_ops_ ##fopname))) \
pr_debug("Unable to initialize debugfs:%s\n", \
#name); \
} while (0)
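
The macro fix above reflects that on current kernels debugfs_create_file() does not return NULL on failure; it returns either a valid dentry or an ERR_PTR-encoded error, so IS_ERR() is the correct test. A minimal sketch of the pattern, where the names are illustrative and example_fops stands in for a real struct file_operations:

	struct dentry *de;

	de = debugfs_create_file("example", 0444, parent_dir, priv_data,
				 &example_fops);
	if (IS_ERR(de))
		pr_debug("unable to create debugfs file: %ld\n", PTR_ERR(de));
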
diff --git a/drivers/net/wireless/realtek/rtw88/debug.h b/drivers/net/wireless/realtek/rtw88/debug.h
index a9149c6c2b48..a03ced11bbe0 100644
--- a/drivers/net/wireless/realtek/rtw88/debug.h
+++ b/drivers/net/wireless/realtek/rtw88/debug.h
@@ -48,11 +48,23 @@ void __rtw_dbg(struct rtw_dev *rtwdev, enum rtw_debug_mask mask,
#define rtw_dbg(rtwdev, a...) __rtw_dbg(rtwdev, ##a)
+static inline bool rtw_dbg_is_enabled(struct rtw_dev *rtwdev,
+ enum rtw_debug_mask mask)
+{
+ return !!(rtw_debug_mask & mask);
+}
+
#else
static inline void rtw_dbg(struct rtw_dev *rtwdev, enum rtw_debug_mask mask,
const char *fmt, ...) {}
+static inline bool rtw_dbg_is_enabled(struct rtw_dev *rtwdev,
+ enum rtw_debug_mask mask)
+{
+ return false;
+}
+
#endif /* CONFIG_RTW88_DEBUG */
#define rtw_info(rtwdev, a...) dev_info(rtwdev->dev, ##a)
diff --git a/drivers/net/wireless/realtek/rtw88/fw.c b/drivers/net/wireless/realtek/rtw88/fw.c
index a1b674e3caaa..acd78311c8c4 100644
--- a/drivers/net/wireless/realtek/rtw88/fw.c
+++ b/drivers/net/wireless/realtek/rtw88/fw.c
@@ -17,6 +17,79 @@
#include "phy.h"
#include "mac.h"
+static const struct rtw_hw_reg_desc fw_h2c_regs[] = {
+ {REG_FWIMR, MASKDWORD, "FWIMR"},
+ {REG_FWIMR, BIT_FS_H2CCMD_INT_EN, "FWIMR enable"},
+ {REG_FWISR, MASKDWORD, "FWISR"},
+ {REG_FWISR, BIT_FS_H2CCMD_INT, "FWISR enable"},
+ {REG_HMETFR, BIT_INT_BOX_ALL, "BoxBitMap"},
+ {REG_HMEBOX0, MASKDWORD, "MSG 0"},
+ {REG_HMEBOX0_EX, MASKDWORD, "MSG_EX 0"},
+ {REG_HMEBOX1, MASKDWORD, "MSG 1"},
+ {REG_HMEBOX1_EX, MASKDWORD, "MSG_EX 1"},
+ {REG_HMEBOX2, MASKDWORD, "MSG 2"},
+ {REG_HMEBOX2_EX, MASKDWORD, "MSG_EX 2"},
+ {REG_HMEBOX3, MASKDWORD, "MSG 3"},
+ {REG_HMEBOX3_EX, MASKDWORD, "MSG_EX 3"},
+ {REG_FT1IMR, MASKDWORD, "FT1IMR"},
+ {REG_FT1IMR, BIT_FS_H2C_CMD_OK_INT_EN, "FT1IMR enable"},
+ {REG_FT1ISR, MASKDWORD, "FT1ISR"},
+ {REG_FT1ISR, BIT_FS_H2C_CMD_OK_INT, "FT1ISR enable "},
+};
+
+static const struct rtw_hw_reg_desc fw_c2h_regs[] = {
+ {REG_FWIMR, MASKDWORD, "FWIMR"},
+ {REG_FWIMR, BIT_FS_H2CCMD_INT_EN, "CPWM"},
+ {REG_FWIMR, BIT_FS_HRCV_INT_EN, "HRECV"},
+ {REG_FWISR, MASKDWORD, "FWISR"},
+ {REG_FWISR, BIT_FS_H2CCMD_INT, "CPWM"},
+ {REG_FWISR, BIT_FS_HRCV_INT, "HRECV"},
+ {REG_CPWM, MASKDWORD, "REG_CPWM"},
+};
+
+static const struct rtw_hw_reg_desc fw_core_regs[] = {
+ {REG_ARFR2_V1, MASKDWORD, "EPC"},
+ {REG_ARFRH2_V1, MASKDWORD, "BADADDR"},
+ {REG_ARFR3_V1, MASKDWORD, "CAUSE"},
+ {REG_ARFR3_V1, BIT_EXC_CODE, "ExcCode"},
+ {REG_ARFRH3_V1, MASKDWORD, "Status"},
+ {REG_ARFR4, MASKDWORD, "SP"},
+ {REG_ARFRH4, MASKDWORD, "RA"},
+ {REG_FW_DBG6, MASKDWORD, "DBG 6"},
+ {REG_FW_DBG7, MASKDWORD, "DBG 7"},
+};
+
+static void _rtw_fw_dump_dbg_info(struct rtw_dev *rtwdev,
+ const struct rtw_hw_reg_desc regs[], u32 size)
+{
+ const struct rtw_hw_reg_desc *reg;
+ u32 val;
+ int i;
+
+ for (i = 0; i < size; i++) {
+ reg = &regs[i];
+ val = rtw_read32_mask(rtwdev, reg->addr, reg->mask);
+
+ rtw_dbg(rtwdev, RTW_DBG_FW, "[%s]addr:0x%x mask:0x%x value:0x%x\n",
+ reg->desc, reg->addr, reg->mask, val);
+ }
+}
+
+void rtw_fw_dump_dbg_info(struct rtw_dev *rtwdev)
+{
+ int i;
+
+ if (!rtw_dbg_is_enabled(rtwdev, RTW_DBG_FW))
+ return;
+
+ _rtw_fw_dump_dbg_info(rtwdev, fw_h2c_regs, ARRAY_SIZE(fw_h2c_regs));
+ _rtw_fw_dump_dbg_info(rtwdev, fw_c2h_regs, ARRAY_SIZE(fw_c2h_regs));
+ for (i = 0 ; i < RTW_DEBUG_DUMP_TIMES; i++) {
+ rtw_dbg(rtwdev, RTW_DBG_FW, "Firmware Coredump %dth\n", i + 1);
+ _rtw_fw_dump_dbg_info(rtwdev, fw_core_regs, ARRAY_SIZE(fw_core_regs));
+ }
+}
+
static void rtw_fw_c2h_cmd_handle_ext(struct rtw_dev *rtwdev,
struct sk_buff *skb)
{
@@ -349,6 +422,7 @@ static void rtw_fw_send_h2c_command_register(struct rtw_dev *rtwdev,
if (ret) {
rtw_err(rtwdev, "failed to send h2c command\n");
+ rtw_fw_dump_dbg_info(rtwdev);
return;
}
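
The dump added above is table driven: each rtw_hw_reg_desc entry pairs a register address and mask with a human-readable label, and the dump loop simply reads each one with rtw_read32_mask() and prints it, after rtw_dbg_is_enabled() has confirmed RTW_DBG_FW logging is on so the extra register reads are skipped otherwise. Extending the dump is then just another table entry; a minimal sketch, where REG_EXAMPLE and "EXAMPLE" are purely illustrative:

	static const struct rtw_hw_reg_desc extra_regs[] = {
		{REG_EXAMPLE, MASKDWORD, "EXAMPLE"},
	};

	/* Reuses the iteration helper introduced in this patch (local to fw.c). */
	_rtw_fw_dump_dbg_info(rtwdev, extra_regs, ARRAY_SIZE(extra_regs));
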
diff --git a/drivers/net/wireless/realtek/rtw88/fw.h b/drivers/net/wireless/realtek/rtw88/fw.h
index 43ccdf9965ac..84e47c71ea12 100644
--- a/drivers/net/wireless/realtek/rtw88/fw.h
+++ b/drivers/net/wireless/realtek/rtw88/fw.h
@@ -44,6 +44,8 @@
#define RTW_OLD_PROBE_PG_CNT 2
#define RTW_PROBE_PG_CNT 4
+#define RTW_DEBUG_DUMP_TIMES 10
+
enum rtw_c2h_cmd_id {
C2H_CCX_TX_RPT = 0x03,
C2H_BT_INFO = 0x09,
@@ -808,6 +810,7 @@ static inline bool rtw_fw_feature_ext_check(struct rtw_fw_state *fw,
return !!(fw->feature_ext & feature);
}
+void rtw_fw_dump_dbg_info(struct rtw_dev *rtwdev);
void rtw_fw_c2h_cmd_rx_irqsafe(struct rtw_dev *rtwdev, u32 pkt_offset,
struct sk_buff *skb);
void rtw_fw_c2h_cmd_handle(struct rtw_dev *rtwdev, struct sk_buff *skb);
diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
index c42ef8294d59..b6bfd4c02e2d 100644
--- a/drivers/net/wireless/realtek/rtw88/main.h
+++ b/drivers/net/wireless/realtek/rtw88/main.h
@@ -342,8 +342,10 @@ enum rtw_regulatory_domains {
RTW_REGD_UKRAINE = 7,
RTW_REGD_MEXICO = 8,
RTW_REGD_CN = 9,
- RTW_REGD_WW,
+ RTW_REGD_QATAR = 10,
+ RTW_REGD_UK = 11,
+ RTW_REGD_WW,
RTW_REGD_MAX
};
@@ -522,6 +524,12 @@ struct rtw_hw_reg {
u32 mask;
};
+struct rtw_hw_reg_desc {
+ u32 addr;
+ u32 mask;
+ const char *desc;
+};
+
struct rtw_ltecoex_addr {
u32 ctrl;
u32 wdata;
diff --git a/drivers/net/wireless/realtek/rtw88/ps.c b/drivers/net/wireless/realtek/rtw88/ps.c
index 07e8cbd436cd..add5a20b8432 100644
--- a/drivers/net/wireless/realtek/rtw88/ps.c
+++ b/drivers/net/wireless/realtek/rtw88/ps.c
@@ -104,6 +104,7 @@ void rtw_power_mode_change(struct rtw_dev *rtwdev, bool enter)
*/
WARN(1, "firmware failed to ack driver for %s Deep Power mode\n",
enter ? "entering" : "leaving");
+ rtw_fw_dump_dbg_info(rtwdev);
}
}
EXPORT_SYMBOL(rtw_power_mode_change);
@@ -164,6 +165,7 @@ static void rtw_fw_leave_lps_check(struct rtw_dev *rtwdev)
if (ret) {
rtw_write32_clr(rtwdev, REG_TCR, BIT_PWRMGT_HWDATA_EN);
rtw_warn(rtwdev, "firmware failed to leave lps state\n");
+ rtw_fw_dump_dbg_info(rtwdev);
}
}
diff --git a/drivers/net/wireless/realtek/rtw88/reg.h b/drivers/net/wireless/realtek/rtw88/reg.h
index 7c6c11d50ff3..1634f03784f1 100644
--- a/drivers/net/wireless/realtek/rtw88/reg.h
+++ b/drivers/net/wireless/realtek/rtw88/reg.h
@@ -224,12 +224,25 @@
#define REG_RXFF_BNDY 0x011C
#define REG_FE1IMR 0x0120
#define BIT_FS_RXDONE BIT(16)
+#define REG_CPWM 0x012C
+#define REG_FWIMR 0x0130
+#define BIT_FS_H2CCMD_INT_EN BIT(4)
+#define BIT_FS_HRCV_INT_EN BIT(5)
+#define REG_FWISR 0x0134
+#define BIT_FS_H2CCMD_INT BIT(4)
+#define BIT_FS_HRCV_INT BIT(5)
#define REG_PKTBUF_DBG_CTRL 0x0140
#define REG_C2HEVT 0x01A0
#define REG_MCUTST_1 0x01C0
#define REG_MCUTST_II 0x01C4
#define REG_WOWLAN_WAKE_REASON 0x01C7
#define REG_HMETFR 0x01CC
+#define BIT_INT_BOX0 BIT(0)
+#define BIT_INT_BOX1 BIT(1)
+#define BIT_INT_BOX2 BIT(2)
+#define BIT_INT_BOX3 BIT(3)
+#define BIT_INT_BOX_ALL (BIT_INT_BOX0 | BIT_INT_BOX1 | BIT_INT_BOX2 | \
+ BIT_INT_BOX3)
#define REG_HMEBOX0 0x01D0
#define REG_HMEBOX1 0x01D4
#define REG_HMEBOX2 0x01D8
@@ -338,6 +351,11 @@
#define BIT_EN_GNT_BT_AWAKE BIT(3)
#define BIT_EN_EOF_V1 BIT(2)
#define REG_DATA_SC 0x0483
+#define REG_ARFR2_V1 0x048C
+#define REG_ARFRH2_V1 0x0490
+#define REG_ARFR3_V1 0x0494
+#define BIT_EXC_CODE GENMASK(6, 2)
+#define REG_ARFRH3_V1 0x0498
#define REG_ARFR4 0x049C
#define BIT_WL_RFK BIT(0)
#define REG_ARFRH4 0x04A0
@@ -548,11 +566,16 @@
#define REG_H2C_PKT_READADDR 0x10D0
#define REG_H2C_PKT_WRITEADDR 0x10D4
+#define REG_FW_DBG6 0x10F8
#define REG_FW_DBG7 0x10FC
#define FW_KEY_MASK 0xffffff00
#define REG_CR_EXT 0x1100
+#define REG_FT1IMR 0x1138
+#define BIT_FS_H2C_CMD_OK_INT_EN BIT(25)
+#define REG_FT1ISR 0x113c
+#define BIT_FS_H2C_CMD_OK_INT BIT(25)
#define REG_DDMA_CH0SA 0x1200
#define REG_DDMA_CH0DA 0x1204
#define REG_DDMA_CH0CTRL 0x1208
diff --git a/drivers/net/wireless/realtek/rtw88/regd.c b/drivers/net/wireless/realtek/rtw88/regd.c
index 2f547cbcf6da..7f3b2ea3f2a5 100644
--- a/drivers/net/wireless/realtek/rtw88/regd.c
+++ b/drivers/net/wireless/realtek/rtw88/regd.c
@@ -70,16 +70,16 @@ static const struct rtw_regulatory rtw_reg_map[] = {
COUNTRY_REGD_ENT("BY", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("BZ", RTW_REGD_FCC, RTW_REGD_FCC),
COUNTRY_REGD_ENT("CA", RTW_REGD_IC, RTW_REGD_IC),
- COUNTRY_REGD_ENT("CC", RTW_REGD_ETSI, RTW_REGD_ETSI),
+ COUNTRY_REGD_ENT("CC", RTW_REGD_ACMA, RTW_REGD_ACMA),
COUNTRY_REGD_ENT("CD", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("CF", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("CG", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("CH", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("CI", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("CK", RTW_REGD_ETSI, RTW_REGD_ETSI),
- COUNTRY_REGD_ENT("CL", RTW_REGD_FCC, RTW_REGD_FCC),
+ COUNTRY_REGD_ENT("CL", RTW_REGD_CHILE, RTW_REGD_CHILE),
COUNTRY_REGD_ENT("CM", RTW_REGD_ETSI, RTW_REGD_ETSI),
- COUNTRY_REGD_ENT("CN", RTW_REGD_ETSI, RTW_REGD_ETSI),
+ COUNTRY_REGD_ENT("CN", RTW_REGD_CN, RTW_REGD_CN),
COUNTRY_REGD_ENT("CO", RTW_REGD_FCC, RTW_REGD_FCC),
COUNTRY_REGD_ENT("CR", RTW_REGD_FCC, RTW_REGD_FCC),
COUNTRY_REGD_ENT("CV", RTW_REGD_ETSI, RTW_REGD_ETSI),
@@ -106,7 +106,7 @@ static const struct rtw_regulatory rtw_reg_map[] = {
COUNTRY_REGD_ENT("FO", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("FR", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("GA", RTW_REGD_ETSI, RTW_REGD_ETSI),
- COUNTRY_REGD_ENT("GB", RTW_REGD_ETSI, RTW_REGD_ETSI),
+ COUNTRY_REGD_ENT("GB", RTW_REGD_UK, RTW_REGD_UK),
COUNTRY_REGD_ENT("GD", RTW_REGD_FCC, RTW_REGD_FCC),
COUNTRY_REGD_ENT("GE", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("GF", RTW_REGD_ETSI, RTW_REGD_ETSI),
@@ -214,7 +214,7 @@ static const struct rtw_regulatory rtw_reg_map[] = {
COUNTRY_REGD_ENT("PT", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("PW", RTW_REGD_FCC, RTW_REGD_FCC),
COUNTRY_REGD_ENT("PY", RTW_REGD_FCC, RTW_REGD_FCC),
- COUNTRY_REGD_ENT("QA", RTW_REGD_ETSI, RTW_REGD_ETSI),
+ COUNTRY_REGD_ENT("QA", RTW_REGD_QATAR, RTW_REGD_QATAR),
COUNTRY_REGD_ENT("RE", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("RO", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("RS", RTW_REGD_ETSI, RTW_REGD_ETSI),
@@ -234,7 +234,7 @@ static const struct rtw_regulatory rtw_reg_map[] = {
COUNTRY_REGD_ENT("SN", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("SO", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("SR", RTW_REGD_FCC, RTW_REGD_FCC),
- COUNTRY_REGD_ENT("ST", RTW_REGD_FCC, RTW_REGD_FCC),
+ COUNTRY_REGD_ENT("ST", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("SV", RTW_REGD_FCC, RTW_REGD_FCC),
COUNTRY_REGD_ENT("SX", RTW_REGD_FCC, RTW_REGD_FCC),
COUNTRY_REGD_ENT("SZ", RTW_REGD_ETSI, RTW_REGD_ETSI),
@@ -253,7 +253,7 @@ static const struct rtw_regulatory rtw_reg_map[] = {
COUNTRY_REGD_ENT("TV", RTW_REGD_ETSI, RTW_REGD_WW),
COUNTRY_REGD_ENT("TW", RTW_REGD_FCC, RTW_REGD_FCC),
COUNTRY_REGD_ENT("TZ", RTW_REGD_ETSI, RTW_REGD_ETSI),
- COUNTRY_REGD_ENT("UA", RTW_REGD_ETSI, RTW_REGD_ETSI),
+ COUNTRY_REGD_ENT("UA", RTW_REGD_UKRAINE, RTW_REGD_UKRAINE),
COUNTRY_REGD_ENT("UG", RTW_REGD_ETSI, RTW_REGD_ETSI),
COUNTRY_REGD_ENT("US", RTW_REGD_FCC, RTW_REGD_FCC),
COUNTRY_REGD_ENT("UY", RTW_REGD_FCC, RTW_REGD_FCC),
@@ -502,6 +502,14 @@ u8 rtw_regd_get(struct rtw_dev *rtwdev)
}
EXPORT_SYMBOL(rtw_regd_get);
+bool rtw_regd_srrc(struct rtw_dev *rtwdev)
+{
+ struct rtw_regd *regd = &rtwdev->regd;
+
+ return rtw_reg_match(regd->regulatory, "CN");
+}
+EXPORT_SYMBOL(rtw_regd_srrc);
+
struct rtw_regd_alternative_t {
bool set;
u8 alt;
@@ -519,6 +527,8 @@ rtw_regd_alt[RTW_REGD_MAX] = {
DECL_REGD_ALT(RTW_REGD_UKRAINE, RTW_REGD_ETSI),
DECL_REGD_ALT(RTW_REGD_MEXICO, RTW_REGD_FCC),
DECL_REGD_ALT(RTW_REGD_CN, RTW_REGD_ETSI),
+ DECL_REGD_ALT(RTW_REGD_QATAR, RTW_REGD_ETSI),
+ DECL_REGD_ALT(RTW_REGD_UK, RTW_REGD_ETSI),
};
bool rtw_regd_has_alt(u8 regd, u8 *regd_alt)
diff --git a/drivers/net/wireless/realtek/rtw88/regd.h b/drivers/net/wireless/realtek/rtw88/regd.h
index 34cb13d0cd9e..3c5a6fd8e6dd 100644
--- a/drivers/net/wireless/realtek/rtw88/regd.h
+++ b/drivers/net/wireless/realtek/rtw88/regd.h
@@ -68,4 +68,6 @@ int rtw_regd_init(struct rtw_dev *rtwdev);
int rtw_regd_hint(struct rtw_dev *rtwdev);
u8 rtw_regd_get(struct rtw_dev *rtwdev);
bool rtw_regd_has_alt(u8 regd, u8 *regd_alt);
+bool rtw_regd_srrc(struct rtw_dev *rtwdev);
+
#endif
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.c b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
index adf224618a2a..429bb420b056 100644
--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.c
+++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.c
@@ -381,6 +381,65 @@ static void rtw8821c_set_channel_rxdfir(struct rtw_dev *rtwdev, u8 bw)
}
}
+static void rtw8821c_cck_tx_filter_srrc(struct rtw_dev *rtwdev, u8 channel, u8 bw)
+{
+ struct rtw_hal *hal = &rtwdev->hal;
+
+ if (channel == 14) {
+ rtw_write32_mask(rtwdev, REG_CCA_FLTR, MASKHWORD, 0xe82c);
+ rtw_write32_mask(rtwdev, REG_TXSF2, MASKDWORD, 0x0000b81c);
+ rtw_write32_mask(rtwdev, REG_TXSF6, MASKLWORD, 0x0000);
+ rtw_write32_mask(rtwdev, REG_TXFILTER, MASKDWORD, 0x00003667);
+
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWE2, RFREG_MASK, 0x00002);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0001e);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00000);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0001c);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00000);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0000e);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00000);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0000c);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00000);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWE2, RFREG_MASK, 0x00000);
+ } else if (channel == 13 ||
+ (channel == 11 && bw == RTW_CHANNEL_WIDTH_40)) {
+ rtw_write32_mask(rtwdev, REG_CCA_FLTR, MASKHWORD, 0xf8fe);
+ rtw_write32_mask(rtwdev, REG_TXSF2, MASKDWORD, 0x64b80c1c);
+ rtw_write32_mask(rtwdev, REG_TXSF6, MASKLWORD, 0x8810);
+ rtw_write32_mask(rtwdev, REG_TXFILTER, MASKDWORD, 0x01235667);
+
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWE2, RFREG_MASK, 0x00002);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0001e);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00027);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0001c);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00027);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0000e);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00029);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0000c);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00026);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWE2, RFREG_MASK, 0x00000);
+ } else {
+ rtw_write32_mask(rtwdev, REG_CCA_FLTR, MASKHWORD, 0xe82c);
+ rtw_write32_mask(rtwdev, REG_TXSF2, MASKDWORD,
+ hal->ch_param[0]);
+ rtw_write32_mask(rtwdev, REG_TXSF6, MASKLWORD,
+ hal->ch_param[1] & MASKLWORD);
+ rtw_write32_mask(rtwdev, REG_TXFILTER, MASKDWORD,
+ hal->ch_param[2]);
+
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWE2, RFREG_MASK, 0x00002);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0001e);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00000);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0001c);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00000);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0000e);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00000);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWA, RFREG_MASK, 0x0000c);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWD0, RFREG_MASK, 0x00000);
+ rtw_write_rf(rtwdev, RF_PATH_A, RF_LUTWE2, RFREG_MASK, 0x00000);
+ }
+}
+
static void rtw8821c_set_channel_bb(struct rtw_dev *rtwdev, u8 channel, u8 bw,
u8 primary_ch_idx)
{
@@ -395,6 +454,13 @@ static void rtw8821c_set_channel_bb(struct rtw_dev *rtwdev, u8 channel, u8 bw,
rtw_write32_mask(rtwdev, REG_TXSCALE_A, 0xf00, 0x0);
rtw_write32_mask(rtwdev, REG_CLKTRK, 0x1ffe0000, 0x96a);
+
+ if (rtw_regd_srrc(rtwdev)) {
+ rtw8821c_cck_tx_filter_srrc(rtwdev, channel, bw);
+ goto set_bw;
+ }
+
+ /* CCK TX filter parameters for default case */
if (channel == 14) {
rtw_write32_mask(rtwdev, REG_TXSF2, MASKDWORD, 0x0000b81c);
rtw_write32_mask(rtwdev, REG_TXSF6, MASKLWORD, 0x0000);
@@ -430,6 +496,7 @@ static void rtw8821c_set_channel_bb(struct rtw_dev *rtwdev, u8 channel, u8 bw,
rtw_write32_mask(rtwdev, REG_CLKTRK, 0x1ffe0000, 0x412);
}
+set_bw:
switch (bw) {
case RTW_CHANNEL_WIDTH_20:
default:
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.h b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
index fcff31688c45..91ed921407bb 100644
--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.h
+++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
@@ -238,6 +238,7 @@ extern const struct rtw_chip_info rtw8821c_hw_spec;
#define REG_RXSB 0xa00
#define REG_ADCINI 0xa04
#define REG_PWRTH 0xa08
+#define REG_CCA_FLTR 0xa20
#define REG_TXSF2 0xa24
#define REG_TXSF6 0xa28
#define REG_FA_CCK 0xa5c
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c_table.c b/drivers/net/wireless/realtek/rtw88/rtw8821c_table.c
index 6c82c4383497..0393b9a0c1a3 100644
--- a/drivers/net/wireless/realtek/rtw88/rtw8821c_table.c
+++ b/drivers/net/wireless/realtek/rtw88/rtw8821c_table.c
@@ -6013,996 +6013,1492 @@ RTW_DECL_TABLE_RF_RADIO(rtw8821c_rf_a, A);
static const struct rtw_txpwr_lmt_cfg_pair rtw8821c_txpwr_lmt_type0[] = {
{ 0, 0, 0, 0, 1, 30, },
{ 2, 0, 0, 0, 1, 30, },
- { 0, 0, 0, 0, 2, 32, },
- { 2, 0, 0, 0, 2, 30, },
- { 0, 0, 0, 0, 3, 32, },
- { 2, 0, 0, 0, 3, 30, },
- { 0, 0, 0, 0, 4, 32, },
- { 2, 0, 0, 0, 4, 30, },
- { 0, 0, 0, 0, 5, 32, },
- { 2, 0, 0, 0, 5, 30, },
- { 0, 0, 0, 0, 6, 32, },
- { 2, 0, 0, 0, 6, 30, },
- { 0, 0, 0, 0, 7, 32, },
- { 2, 0, 0, 0, 7, 30, },
- { 0, 0, 0, 0, 8, 32, },
- { 2, 0, 0, 0, 8, 30, },
- { 0, 0, 0, 0, 9, 32, },
- { 2, 0, 0, 0, 9, 30, },
- { 0, 0, 0, 0, 10, 32, },
- { 2, 0, 0, 0, 10, 30, },
- { 0, 0, 0, 0, 11, 32, },
- { 2, 0, 0, 0, 11, 30, },
- { 0, 0, 0, 0, 12, 24, },
- { 2, 0, 0, 0, 12, 30, },
- { 0, 0, 0, 0, 13, 16, },
- { 2, 0, 0, 0, 13, 30, },
- { 0, 0, 0, 0, 14, 63, },
- { 2, 0, 0, 0, 14, 63, },
- { 0, 0, 0, 1, 1, 30, },
- { 2, 0, 0, 1, 1, 30, },
- { 0, 0, 0, 1, 2, 32, },
- { 2, 0, 0, 1, 2, 30, },
- { 0, 0, 0, 1, 3, 34, },
- { 2, 0, 0, 1, 3, 30, },
- { 0, 0, 0, 1, 4, 34, },
- { 2, 0, 0, 1, 4, 30, },
- { 0, 0, 0, 1, 5, 34, },
- { 2, 0, 0, 1, 5, 30, },
- { 0, 0, 0, 1, 6, 34, },
- { 2, 0, 0, 1, 6, 30, },
- { 0, 0, 0, 1, 7, 34, },
- { 2, 0, 0, 1, 7, 30, },
- { 0, 0, 0, 1, 8, 34, },
- { 2, 0, 0, 1, 8, 30, },
- { 0, 0, 0, 1, 9, 34, },
- { 2, 0, 0, 1, 9, 30, },
- { 0, 0, 0, 1, 10, 32, },
- { 2, 0, 0, 1, 10, 30, },
- { 0, 0, 0, 1, 11, 30, },
- { 2, 0, 0, 1, 11, 30, },
- { 0, 0, 0, 1, 12, 28, },
- { 2, 0, 0, 1, 12, 30, },
- { 0, 0, 0, 1, 13, 16, },
- { 2, 0, 0, 1, 13, 30, },
- { 0, 0, 0, 1, 14, 63, },
- { 2, 0, 0, 1, 14, 63, },
- { 0, 0, 0, 2, 1, 26, },
- { 2, 0, 0, 2, 1, 30, },
- { 0, 0, 0, 2, 2, 30, },
- { 2, 0, 0, 2, 2, 30, },
- { 0, 0, 0, 2, 3, 32, },
- { 2, 0, 0, 2, 3, 30, },
- { 0, 0, 0, 2, 4, 34, },
- { 2, 0, 0, 2, 4, 30, },
- { 0, 0, 0, 2, 5, 34, },
- { 2, 0, 0, 2, 5, 30, },
- { 0, 0, 0, 2, 6, 34, },
- { 2, 0, 0, 2, 6, 30, },
- { 0, 0, 0, 2, 7, 34, },
- { 2, 0, 0, 2, 7, 30, },
- { 0, 0, 0, 2, 8, 34, },
- { 2, 0, 0, 2, 8, 30, },
- { 0, 0, 0, 2, 9, 32, },
- { 2, 0, 0, 2, 9, 30, },
- { 0, 0, 0, 2, 10, 30, },
- { 2, 0, 0, 2, 10, 30, },
- { 0, 0, 0, 2, 11, 28, },
- { 2, 0, 0, 2, 11, 30, },
- { 0, 0, 0, 2, 12, 26, },
- { 2, 0, 0, 2, 12, 30, },
- { 0, 0, 0, 2, 13, 12, },
- { 2, 0, 0, 2, 13, 30, },
- { 0, 0, 0, 2, 14, 63, },
- { 2, 0, 0, 2, 14, 63, },
- { 0, 0, 1, 2, 1, 63, },
- { 2, 0, 1, 2, 1, 63, },
- { 0, 0, 1, 2, 2, 63, },
- { 2, 0, 1, 2, 2, 63, },
- { 0, 0, 1, 2, 3, 26, },
- { 2, 0, 1, 2, 3, 30, },
- { 0, 0, 1, 2, 4, 26, },
- { 2, 0, 1, 2, 4, 30, },
- { 0, 0, 1, 2, 5, 30, },
- { 2, 0, 1, 2, 5, 30, },
- { 0, 0, 1, 2, 6, 30, },
- { 2, 0, 1, 2, 6, 30, },
- { 0, 0, 1, 2, 7, 30, },
- { 2, 0, 1, 2, 7, 30, },
- { 0, 0, 1, 2, 8, 26, },
- { 2, 0, 1, 2, 8, 30, },
- { 0, 0, 1, 2, 9, 26, },
- { 2, 0, 1, 2, 9, 30, },
- { 0, 0, 1, 2, 10, 28, },
- { 2, 0, 1, 2, 10, 30, },
- { 0, 0, 1, 2, 11, 20, },
- { 2, 0, 1, 2, 11, 30, },
- { 0, 0, 1, 2, 12, 63, },
- { 2, 0, 1, 2, 12, 63, },
- { 0, 0, 1, 2, 13, 63, },
- { 2, 0, 1, 2, 13, 63, },
- { 0, 0, 1, 2, 14, 63, },
- { 2, 0, 1, 2, 14, 63, },
- { 0, 1, 0, 1, 36, 31, },
- { 2, 1, 0, 1, 36, 32, },
- { 0, 1, 0, 1, 40, 33, },
- { 2, 1, 0, 1, 40, 32, },
- { 0, 1, 0, 1, 44, 33, },
- { 2, 1, 0, 1, 44, 32, },
- { 0, 1, 0, 1, 48, 31, },
- { 2, 1, 0, 1, 48, 32, },
- { 0, 1, 0, 1, 52, 33, },
- { 2, 1, 0, 1, 52, 32, },
- { 0, 1, 0, 1, 56, 33, },
- { 2, 1, 0, 1, 56, 32, },
- { 0, 1, 0, 1, 60, 33, },
- { 2, 1, 0, 1, 60, 32, },
- { 0, 1, 0, 1, 64, 30, },
- { 2, 1, 0, 1, 64, 32, },
- { 0, 1, 0, 1, 100, 30, },
- { 2, 1, 0, 1, 100, 32, },
- { 0, 1, 0, 1, 104, 33, },
- { 2, 1, 0, 1, 104, 32, },
- { 0, 1, 0, 1, 108, 33, },
- { 2, 1, 0, 1, 108, 32, },
- { 0, 1, 0, 1, 112, 33, },
- { 2, 1, 0, 1, 112, 32, },
- { 0, 1, 0, 1, 116, 33, },
- { 2, 1, 0, 1, 116, 32, },
- { 0, 1, 0, 1, 120, 33, },
- { 2, 1, 0, 1, 120, 32, },
- { 0, 1, 0, 1, 124, 33, },
- { 2, 1, 0, 1, 124, 32, },
- { 0, 1, 0, 1, 128, 33, },
- { 2, 1, 0, 1, 128, 32, },
- { 0, 1, 0, 1, 132, 33, },
- { 2, 1, 0, 1, 132, 32, },
- { 0, 1, 0, 1, 136, 33, },
- { 2, 1, 0, 1, 136, 32, },
- { 0, 1, 0, 1, 140, 31, },
- { 2, 1, 0, 1, 140, 32, },
- { 0, 1, 0, 1, 144, 30, },
- { 2, 1, 0, 1, 144, 63, },
- { 0, 1, 0, 1, 149, 33, },
- { 2, 1, 0, 1, 149, 63, },
- { 0, 1, 0, 1, 153, 33, },
- { 2, 1, 0, 1, 153, 63, },
- { 0, 1, 0, 1, 157, 33, },
- { 2, 1, 0, 1, 157, 63, },
- { 0, 1, 0, 1, 161, 33, },
- { 2, 1, 0, 1, 161, 63, },
- { 0, 1, 0, 1, 165, 33, },
- { 2, 1, 0, 1, 165, 63, },
- { 0, 1, 0, 2, 36, 30, },
- { 2, 1, 0, 2, 36, 32, },
- { 0, 1, 0, 2, 40, 33, },
- { 2, 1, 0, 2, 40, 32, },
- { 0, 1, 0, 2, 44, 33, },
- { 2, 1, 0, 2, 44, 32, },
- { 0, 1, 0, 2, 48, 33, },
- { 2, 1, 0, 2, 48, 32, },
- { 0, 1, 0, 2, 52, 33, },
- { 2, 1, 0, 2, 52, 32, },
- { 0, 1, 0, 2, 56, 33, },
- { 2, 1, 0, 2, 56, 32, },
- { 0, 1, 0, 2, 60, 33, },
- { 2, 1, 0, 2, 60, 32, },
- { 0, 1, 0, 2, 64, 30, },
- { 2, 1, 0, 2, 64, 32, },
- { 0, 1, 0, 2, 100, 30, },
- { 2, 1, 0, 2, 100, 32, },
- { 0, 1, 0, 2, 104, 33, },
- { 2, 1, 0, 2, 104, 32, },
- { 0, 1, 0, 2, 108, 33, },
- { 2, 1, 0, 2, 108, 32, },
- { 0, 1, 0, 2, 112, 33, },
- { 2, 1, 0, 2, 112, 32, },
- { 0, 1, 0, 2, 116, 33, },
- { 2, 1, 0, 2, 116, 32, },
- { 0, 1, 0, 2, 120, 33, },
- { 2, 1, 0, 2, 120, 32, },
- { 0, 1, 0, 2, 124, 33, },
- { 2, 1, 0, 2, 124, 32, },
- { 0, 1, 0, 2, 128, 33, },
- { 2, 1, 0, 2, 128, 32, },
- { 0, 1, 0, 2, 132, 33, },
- { 2, 1, 0, 2, 132, 32, },
- { 0, 1, 0, 2, 136, 33, },
- { 2, 1, 0, 2, 136, 32, },
- { 0, 1, 0, 2, 140, 29, },
- { 2, 1, 0, 2, 140, 32, },
- { 0, 1, 0, 2, 144, 27, },
- { 2, 1, 0, 2, 144, 63, },
- { 0, 1, 0, 2, 149, 33, },
- { 2, 1, 0, 2, 149, 63, },
- { 0, 1, 0, 2, 153, 33, },
- { 2, 1, 0, 2, 153, 63, },
- { 0, 1, 0, 2, 157, 33, },
- { 2, 1, 0, 2, 157, 63, },
- { 0, 1, 0, 2, 161, 33, },
- { 2, 1, 0, 2, 161, 63, },
- { 0, 1, 0, 2, 165, 33, },
- { 2, 1, 0, 2, 165, 63, },
- { 0, 1, 1, 2, 38, 22, },
- { 2, 1, 1, 2, 38, 32, },
- { 0, 1, 1, 2, 46, 32, },
- { 2, 1, 1, 2, 46, 32, },
- { 0, 1, 1, 2, 54, 32, },
- { 2, 1, 1, 2, 54, 32, },
- { 0, 1, 1, 2, 62, 23, },
- { 2, 1, 1, 2, 62, 32, },
- { 0, 1, 1, 2, 102, 21, },
- { 2, 1, 1, 2, 102, 32, },
- { 0, 1, 1, 2, 110, 32, },
- { 2, 1, 1, 2, 110, 32, },
- { 0, 1, 1, 2, 118, 32, },
- { 2, 1, 1, 2, 118, 32, },
- { 0, 1, 1, 2, 126, 32, },
- { 2, 1, 1, 2, 126, 32, },
- { 0, 1, 1, 2, 134, 32, },
- { 2, 1, 1, 2, 134, 32, },
- { 0, 1, 1, 2, 142, 29, },
- { 2, 1, 1, 2, 142, 63, },
- { 0, 1, 1, 2, 151, 32, },
- { 2, 1, 1, 2, 151, 63, },
- { 0, 1, 1, 2, 159, 32, },
- { 2, 1, 1, 2, 159, 63, },
- { 0, 1, 2, 4, 42, 19, },
- { 2, 1, 2, 4, 42, 32, },
- { 0, 1, 2, 4, 58, 22, },
- { 2, 1, 2, 4, 58, 32, },
- { 0, 1, 2, 4, 106, 18, },
- { 2, 1, 2, 4, 106, 32, },
- { 0, 1, 2, 4, 122, 32, },
- { 2, 1, 2, 4, 122, 32, },
- { 0, 1, 2, 4, 138, 28, },
- { 2, 1, 2, 4, 138, 63, },
- { 0, 1, 2, 4, 155, 32, },
- { 2, 1, 2, 4, 155, 63, },
{ 1, 0, 0, 0, 1, 34, },
{ 3, 0, 0, 0, 1, 30, },
{ 4, 0, 0, 0, 1, 34, },
{ 5, 0, 0, 0, 1, 30, },
{ 6, 0, 0, 0, 1, 30, },
{ 7, 0, 0, 0, 1, 30, },
+ { 8, 0, 0, 0, 1, 30, },
+ { 9, 0, 0, 0, 1, 28, },
+ { 10, 0, 0, 0, 1, 30, },
+ { 11, 0, 0, 0, 1, 30, },
+ { 0, 0, 0, 0, 2, 32, },
+ { 2, 0, 0, 0, 2, 30, },
{ 1, 0, 0, 0, 2, 34, },
{ 3, 0, 0, 0, 2, 32, },
{ 4, 0, 0, 0, 2, 34, },
{ 5, 0, 0, 0, 2, 30, },
{ 6, 0, 0, 0, 2, 32, },
{ 7, 0, 0, 0, 2, 30, },
+ { 8, 0, 0, 0, 2, 32, },
+ { 9, 0, 0, 0, 2, 28, },
+ { 10, 0, 0, 0, 2, 30, },
+ { 11, 0, 0, 0, 2, 30, },
+ { 0, 0, 0, 0, 3, 32, },
+ { 2, 0, 0, 0, 3, 30, },
{ 1, 0, 0, 0, 3, 34, },
{ 3, 0, 0, 0, 3, 32, },
{ 4, 0, 0, 0, 3, 34, },
{ 5, 0, 0, 0, 3, 30, },
{ 6, 0, 0, 0, 3, 32, },
{ 7, 0, 0, 0, 3, 30, },
+ { 8, 0, 0, 0, 3, 32, },
+ { 9, 0, 0, 0, 3, 28, },
+ { 10, 0, 0, 0, 3, 30, },
+ { 11, 0, 0, 0, 3, 30, },
+ { 0, 0, 0, 0, 4, 32, },
+ { 2, 0, 0, 0, 4, 30, },
{ 1, 0, 0, 0, 4, 34, },
{ 3, 0, 0, 0, 4, 32, },
{ 4, 0, 0, 0, 4, 34, },
{ 5, 0, 0, 0, 4, 30, },
{ 6, 0, 0, 0, 4, 32, },
{ 7, 0, 0, 0, 4, 30, },
+ { 8, 0, 0, 0, 4, 32, },
+ { 9, 0, 0, 0, 4, 28, },
+ { 10, 0, 0, 0, 4, 30, },
+ { 11, 0, 0, 0, 4, 30, },
+ { 0, 0, 0, 0, 5, 32, },
+ { 2, 0, 0, 0, 5, 30, },
{ 1, 0, 0, 0, 5, 34, },
{ 3, 0, 0, 0, 5, 32, },
{ 4, 0, 0, 0, 5, 34, },
{ 5, 0, 0, 0, 5, 30, },
{ 6, 0, 0, 0, 5, 32, },
{ 7, 0, 0, 0, 5, 30, },
+ { 8, 0, 0, 0, 5, 32, },
+ { 9, 0, 0, 0, 5, 28, },
+ { 10, 0, 0, 0, 5, 30, },
+ { 11, 0, 0, 0, 5, 30, },
+ { 0, 0, 0, 0, 6, 32, },
+ { 2, 0, 0, 0, 6, 30, },
{ 1, 0, 0, 0, 6, 34, },
{ 3, 0, 0, 0, 6, 32, },
{ 4, 0, 0, 0, 6, 34, },
{ 5, 0, 0, 0, 6, 30, },
{ 6, 0, 0, 0, 6, 32, },
{ 7, 0, 0, 0, 6, 30, },
+ { 8, 0, 0, 0, 6, 32, },
+ { 9, 0, 0, 0, 6, 28, },
+ { 10, 0, 0, 0, 6, 30, },
+ { 11, 0, 0, 0, 6, 30, },
+ { 0, 0, 0, 0, 7, 32, },
+ { 2, 0, 0, 0, 7, 30, },
{ 1, 0, 0, 0, 7, 34, },
{ 3, 0, 0, 0, 7, 32, },
{ 4, 0, 0, 0, 7, 34, },
{ 5, 0, 0, 0, 7, 30, },
{ 6, 0, 0, 0, 7, 32, },
{ 7, 0, 0, 0, 7, 30, },
+ { 8, 0, 0, 0, 7, 32, },
+ { 9, 0, 0, 0, 7, 28, },
+ { 10, 0, 0, 0, 7, 30, },
+ { 11, 0, 0, 0, 7, 30, },
+ { 0, 0, 0, 0, 8, 32, },
+ { 2, 0, 0, 0, 8, 30, },
{ 1, 0, 0, 0, 8, 34, },
{ 3, 0, 0, 0, 8, 32, },
{ 4, 0, 0, 0, 8, 34, },
{ 5, 0, 0, 0, 8, 30, },
{ 6, 0, 0, 0, 8, 32, },
{ 7, 0, 0, 0, 8, 30, },
+ { 8, 0, 0, 0, 8, 32, },
+ { 9, 0, 0, 0, 8, 28, },
+ { 10, 0, 0, 0, 8, 30, },
+ { 11, 0, 0, 0, 8, 30, },
+ { 0, 0, 0, 0, 9, 32, },
+ { 2, 0, 0, 0, 9, 30, },
{ 1, 0, 0, 0, 9, 34, },
{ 3, 0, 0, 0, 9, 32, },
{ 4, 0, 0, 0, 9, 34, },
{ 5, 0, 0, 0, 9, 30, },
{ 6, 0, 0, 0, 9, 32, },
{ 7, 0, 0, 0, 9, 30, },
+ { 8, 0, 0, 0, 9, 32, },
+ { 9, 0, 0, 0, 9, 28, },
+ { 10, 0, 0, 0, 9, 30, },
+ { 11, 0, 0, 0, 9, 30, },
+ { 0, 0, 0, 0, 10, 32, },
+ { 2, 0, 0, 0, 10, 30, },
{ 1, 0, 0, 0, 10, 34, },
{ 3, 0, 0, 0, 10, 32, },
{ 4, 0, 0, 0, 10, 34, },
{ 5, 0, 0, 0, 10, 30, },
{ 6, 0, 0, 0, 10, 32, },
{ 7, 0, 0, 0, 10, 30, },
+ { 8, 0, 0, 0, 10, 32, },
+ { 9, 0, 0, 0, 10, 28, },
+ { 10, 0, 0, 0, 10, 30, },
+ { 11, 0, 0, 0, 10, 30, },
+ { 0, 0, 0, 0, 11, 32, },
+ { 2, 0, 0, 0, 11, 30, },
{ 1, 0, 0, 0, 11, 34, },
{ 3, 0, 0, 0, 11, 32, },
{ 4, 0, 0, 0, 11, 34, },
{ 5, 0, 0, 0, 11, 30, },
{ 6, 0, 0, 0, 11, 32, },
{ 7, 0, 0, 0, 11, 30, },
+ { 8, 0, 0, 0, 11, 32, },
+ { 9, 0, 0, 0, 11, 28, },
+ { 10, 0, 0, 0, 11, 30, },
+ { 11, 0, 0, 0, 11, 30, },
+ { 0, 0, 0, 0, 12, 24, },
+ { 2, 0, 0, 0, 12, 30, },
{ 1, 0, 0, 0, 12, 34, },
{ 3, 0, 0, 0, 12, 24, },
{ 4, 0, 0, 0, 12, 34, },
{ 5, 0, 0, 0, 12, 30, },
{ 6, 0, 0, 0, 12, 24, },
{ 7, 0, 0, 0, 12, 30, },
+ { 8, 0, 0, 0, 12, 24, },
+ { 9, 0, 0, 0, 12, 24, },
+ { 10, 0, 0, 0, 12, 30, },
+ { 11, 0, 0, 0, 12, 30, },
+ { 0, 0, 0, 0, 13, 16, },
+ { 2, 0, 0, 0, 13, 30, },
{ 1, 0, 0, 0, 13, 34, },
{ 3, 0, 0, 0, 13, 16, },
{ 4, 0, 0, 0, 13, 34, },
{ 5, 0, 0, 0, 13, 30, },
{ 6, 0, 0, 0, 13, 16, },
{ 7, 0, 0, 0, 13, 30, },
+ { 8, 0, 0, 0, 13, 16, },
+ { 9, 0, 0, 0, 13, 18, },
+ { 10, 0, 0, 0, 13, 30, },
+ { 11, 0, 0, 0, 13, 30, },
+ { 0, 0, 0, 0, 14, 63, },
+ { 2, 0, 0, 0, 14, 63, },
{ 1, 0, 0, 0, 14, 34, },
{ 3, 0, 0, 0, 14, 63, },
{ 4, 0, 0, 0, 14, 63, },
{ 5, 0, 0, 0, 14, 63, },
{ 6, 0, 0, 0, 14, 63, },
{ 7, 0, 0, 0, 14, 63, },
+ { 8, 0, 0, 0, 14, 63, },
+ { 9, 0, 0, 0, 14, 63, },
+ { 10, 0, 0, 0, 14, 63, },
+ { 11, 0, 0, 0, 14, 63, },
+ { 0, 0, 0, 1, 1, 30, },
+ { 2, 0, 0, 1, 1, 30, },
{ 1, 0, 0, 1, 1, 34, },
{ 3, 0, 0, 1, 1, 30, },
{ 4, 0, 0, 1, 1, 32, },
{ 5, 0, 0, 1, 1, 30, },
{ 6, 0, 0, 1, 1, 30, },
{ 7, 0, 0, 1, 1, 30, },
+ { 8, 0, 0, 1, 1, 30, },
+ { 9, 0, 0, 1, 1, 30, },
+ { 10, 0, 0, 1, 1, 30, },
+ { 11, 0, 0, 1, 1, 30, },
+ { 0, 0, 0, 1, 2, 32, },
+ { 2, 0, 0, 1, 2, 30, },
{ 1, 0, 0, 1, 2, 34, },
{ 3, 0, 0, 1, 2, 32, },
{ 4, 0, 0, 1, 2, 34, },
{ 5, 0, 0, 1, 2, 30, },
{ 6, 0, 0, 1, 2, 32, },
{ 7, 0, 0, 1, 2, 30, },
+ { 8, 0, 0, 1, 2, 32, },
+ { 9, 0, 0, 1, 2, 30, },
+ { 10, 0, 0, 1, 2, 30, },
+ { 11, 0, 0, 1, 2, 30, },
+ { 0, 0, 0, 1, 3, 34, },
+ { 2, 0, 0, 1, 3, 30, },
{ 1, 0, 0, 1, 3, 34, },
{ 3, 0, 0, 1, 3, 34, },
{ 4, 0, 0, 1, 3, 34, },
{ 5, 0, 0, 1, 3, 30, },
{ 6, 0, 0, 1, 3, 34, },
{ 7, 0, 0, 1, 3, 30, },
+ { 8, 0, 0, 1, 3, 34, },
+ { 9, 0, 0, 1, 3, 30, },
+ { 10, 0, 0, 1, 3, 30, },
+ { 11, 0, 0, 1, 3, 30, },
+ { 0, 0, 0, 1, 4, 34, },
+ { 2, 0, 0, 1, 4, 30, },
{ 1, 0, 0, 1, 4, 34, },
{ 3, 0, 0, 1, 4, 34, },
{ 4, 0, 0, 1, 4, 34, },
{ 5, 0, 0, 1, 4, 30, },
{ 6, 0, 0, 1, 4, 34, },
{ 7, 0, 0, 1, 4, 30, },
+ { 8, 0, 0, 1, 4, 34, },
+ { 9, 0, 0, 1, 4, 30, },
+ { 10, 0, 0, 1, 4, 30, },
+ { 11, 0, 0, 1, 4, 30, },
+ { 0, 0, 0, 1, 5, 34, },
+ { 2, 0, 0, 1, 5, 30, },
{ 1, 0, 0, 1, 5, 34, },
{ 3, 0, 0, 1, 5, 34, },
{ 4, 0, 0, 1, 5, 34, },
{ 5, 0, 0, 1, 5, 30, },
{ 6, 0, 0, 1, 5, 34, },
{ 7, 0, 0, 1, 5, 30, },
+ { 8, 0, 0, 1, 5, 34, },
+ { 9, 0, 0, 1, 5, 30, },
+ { 10, 0, 0, 1, 5, 30, },
+ { 11, 0, 0, 1, 5, 30, },
+ { 0, 0, 0, 1, 6, 34, },
+ { 2, 0, 0, 1, 6, 30, },
{ 1, 0, 0, 1, 6, 34, },
{ 3, 0, 0, 1, 6, 34, },
{ 4, 0, 0, 1, 6, 34, },
{ 5, 0, 0, 1, 6, 30, },
{ 6, 0, 0, 1, 6, 34, },
{ 7, 0, 0, 1, 6, 30, },
+ { 8, 0, 0, 1, 6, 34, },
+ { 9, 0, 0, 1, 6, 30, },
+ { 10, 0, 0, 1, 6, 30, },
+ { 11, 0, 0, 1, 6, 30, },
+ { 0, 0, 0, 1, 7, 34, },
+ { 2, 0, 0, 1, 7, 30, },
{ 1, 0, 0, 1, 7, 34, },
{ 3, 0, 0, 1, 7, 34, },
{ 4, 0, 0, 1, 7, 34, },
{ 5, 0, 0, 1, 7, 30, },
{ 6, 0, 0, 1, 7, 34, },
{ 7, 0, 0, 1, 7, 30, },
+ { 8, 0, 0, 1, 7, 34, },
+ { 9, 0, 0, 1, 7, 30, },
+ { 10, 0, 0, 1, 7, 30, },
+ { 11, 0, 0, 1, 7, 30, },
+ { 0, 0, 0, 1, 8, 34, },
+ { 2, 0, 0, 1, 8, 30, },
{ 1, 0, 0, 1, 8, 34, },
{ 3, 0, 0, 1, 8, 34, },
{ 4, 0, 0, 1, 8, 34, },
{ 5, 0, 0, 1, 8, 30, },
{ 6, 0, 0, 1, 8, 34, },
{ 7, 0, 0, 1, 8, 30, },
+ { 8, 0, 0, 1, 8, 34, },
+ { 9, 0, 0, 1, 8, 30, },
+ { 10, 0, 0, 1, 8, 30, },
+ { 11, 0, 0, 1, 8, 30, },
+ { 0, 0, 0, 1, 9, 34, },
+ { 2, 0, 0, 1, 9, 30, },
{ 1, 0, 0, 1, 9, 34, },
{ 3, 0, 0, 1, 9, 34, },
{ 4, 0, 0, 1, 9, 34, },
{ 5, 0, 0, 1, 9, 30, },
{ 6, 0, 0, 1, 9, 34, },
{ 7, 0, 0, 1, 9, 30, },
+ { 8, 0, 0, 1, 9, 34, },
+ { 9, 0, 0, 1, 9, 30, },
+ { 10, 0, 0, 1, 9, 30, },
+ { 11, 0, 0, 1, 9, 30, },
+ { 0, 0, 0, 1, 10, 32, },
+ { 2, 0, 0, 1, 10, 30, },
{ 1, 0, 0, 1, 10, 34, },
{ 3, 0, 0, 1, 10, 32, },
{ 4, 0, 0, 1, 10, 34, },
{ 5, 0, 0, 1, 10, 30, },
{ 6, 0, 0, 1, 10, 32, },
{ 7, 0, 0, 1, 10, 30, },
+ { 8, 0, 0, 1, 10, 32, },
+ { 9, 0, 0, 1, 10, 26, },
+ { 10, 0, 0, 1, 10, 30, },
+ { 11, 0, 0, 1, 10, 30, },
+ { 0, 0, 0, 1, 11, 30, },
+ { 2, 0, 0, 1, 11, 30, },
{ 1, 0, 0, 1, 11, 34, },
{ 3, 0, 0, 1, 11, 30, },
{ 4, 0, 0, 1, 11, 34, },
{ 5, 0, 0, 1, 11, 30, },
{ 6, 0, 0, 1, 11, 30, },
{ 7, 0, 0, 1, 11, 30, },
+ { 8, 0, 0, 1, 11, 30, },
+ { 9, 0, 0, 1, 11, 22, },
+ { 10, 0, 0, 1, 11, 30, },
+ { 11, 0, 0, 1, 11, 30, },
+ { 0, 0, 0, 1, 12, 28, },
+ { 2, 0, 0, 1, 12, 30, },
{ 1, 0, 0, 1, 12, 34, },
{ 3, 0, 0, 1, 12, 28, },
{ 4, 0, 0, 1, 12, 34, },
{ 5, 0, 0, 1, 12, 30, },
{ 6, 0, 0, 1, 12, 28, },
{ 7, 0, 0, 1, 12, 30, },
+ { 8, 0, 0, 1, 12, 28, },
+ { 9, 0, 0, 1, 12, 18, },
+ { 10, 0, 0, 1, 12, 30, },
+ { 11, 0, 0, 1, 12, 30, },
+ { 0, 0, 0, 1, 13, 16, },
+ { 2, 0, 0, 1, 13, 30, },
{ 1, 0, 0, 1, 13, 34, },
{ 3, 0, 0, 1, 13, 16, },
{ 4, 0, 0, 1, 13, 32, },
{ 5, 0, 0, 1, 13, 30, },
{ 6, 0, 0, 1, 13, 16, },
{ 7, 0, 0, 1, 13, 30, },
+ { 8, 0, 0, 1, 13, 16, },
+ { 9, 0, 0, 1, 13, 2, },
+ { 10, 0, 0, 1, 13, 30, },
+ { 11, 0, 0, 1, 13, 30, },
+ { 0, 0, 0, 1, 14, 63, },
+ { 2, 0, 0, 1, 14, 63, },
{ 1, 0, 0, 1, 14, 63, },
{ 3, 0, 0, 1, 14, 63, },
{ 4, 0, 0, 1, 14, 63, },
{ 5, 0, 0, 1, 14, 63, },
{ 6, 0, 0, 1, 14, 63, },
{ 7, 0, 0, 1, 14, 63, },
+ { 8, 0, 0, 1, 14, 63, },
+ { 9, 0, 0, 1, 14, 63, },
+ { 10, 0, 0, 1, 14, 63, },
+ { 11, 0, 0, 1, 14, 63, },
+ { 0, 0, 0, 2, 1, 26, },
+ { 2, 0, 0, 2, 1, 30, },
{ 1, 0, 0, 2, 1, 34, },
{ 3, 0, 0, 2, 1, 26, },
{ 4, 0, 0, 2, 1, 32, },
{ 5, 0, 0, 2, 1, 30, },
{ 6, 0, 0, 2, 1, 26, },
{ 7, 0, 0, 2, 1, 30, },
+ { 8, 0, 0, 2, 1, 26, },
+ { 9, 0, 0, 2, 1, 30, },
+ { 10, 0, 0, 2, 1, 30, },
+ { 11, 0, 0, 2, 1, 30, },
+ { 0, 0, 0, 2, 2, 30, },
+ { 2, 0, 0, 2, 2, 30, },
{ 1, 0, 0, 2, 2, 34, },
{ 3, 0, 0, 2, 2, 30, },
{ 4, 0, 0, 2, 2, 34, },
{ 5, 0, 0, 2, 2, 30, },
{ 6, 0, 0, 2, 2, 30, },
{ 7, 0, 0, 2, 2, 30, },
+ { 8, 0, 0, 2, 2, 30, },
+ { 9, 0, 0, 2, 2, 30, },
+ { 10, 0, 0, 2, 2, 30, },
+ { 11, 0, 0, 2, 2, 30, },
+ { 0, 0, 0, 2, 3, 32, },
+ { 2, 0, 0, 2, 3, 30, },
{ 1, 0, 0, 2, 3, 34, },
{ 3, 0, 0, 2, 3, 32, },
{ 4, 0, 0, 2, 3, 34, },
{ 5, 0, 0, 2, 3, 30, },
{ 6, 0, 0, 2, 3, 32, },
{ 7, 0, 0, 2, 3, 30, },
+ { 8, 0, 0, 2, 3, 32, },
+ { 9, 0, 0, 2, 3, 30, },
+ { 10, 0, 0, 2, 3, 30, },
+ { 11, 0, 0, 2, 3, 30, },
+ { 0, 0, 0, 2, 4, 34, },
+ { 2, 0, 0, 2, 4, 30, },
{ 1, 0, 0, 2, 4, 34, },
{ 3, 0, 0, 2, 4, 34, },
{ 4, 0, 0, 2, 4, 34, },
{ 5, 0, 0, 2, 4, 30, },
{ 6, 0, 0, 2, 4, 34, },
{ 7, 0, 0, 2, 4, 30, },
+ { 8, 0, 0, 2, 4, 34, },
+ { 9, 0, 0, 2, 4, 30, },
+ { 10, 0, 0, 2, 4, 30, },
+ { 11, 0, 0, 2, 4, 30, },
+ { 0, 0, 0, 2, 5, 34, },
+ { 2, 0, 0, 2, 5, 30, },
{ 1, 0, 0, 2, 5, 34, },
{ 3, 0, 0, 2, 5, 34, },
{ 4, 0, 0, 2, 5, 34, },
{ 5, 0, 0, 2, 5, 30, },
{ 6, 0, 0, 2, 5, 34, },
{ 7, 0, 0, 2, 5, 30, },
+ { 8, 0, 0, 2, 5, 34, },
+ { 9, 0, 0, 2, 5, 30, },
+ { 10, 0, 0, 2, 5, 30, },
+ { 11, 0, 0, 2, 5, 30, },
+ { 0, 0, 0, 2, 6, 34, },
+ { 2, 0, 0, 2, 6, 30, },
{ 1, 0, 0, 2, 6, 34, },
{ 3, 0, 0, 2, 6, 34, },
{ 4, 0, 0, 2, 6, 34, },
{ 5, 0, 0, 2, 6, 30, },
{ 6, 0, 0, 2, 6, 34, },
{ 7, 0, 0, 2, 6, 30, },
+ { 8, 0, 0, 2, 6, 34, },
+ { 9, 0, 0, 2, 6, 30, },
+ { 10, 0, 0, 2, 6, 30, },
+ { 11, 0, 0, 2, 6, 30, },
+ { 0, 0, 0, 2, 7, 34, },
+ { 2, 0, 0, 2, 7, 30, },
{ 1, 0, 0, 2, 7, 34, },
{ 3, 0, 0, 2, 7, 34, },
{ 4, 0, 0, 2, 7, 34, },
{ 5, 0, 0, 2, 7, 30, },
{ 6, 0, 0, 2, 7, 34, },
{ 7, 0, 0, 2, 7, 30, },
+ { 8, 0, 0, 2, 7, 34, },
+ { 9, 0, 0, 2, 7, 30, },
+ { 10, 0, 0, 2, 7, 30, },
+ { 11, 0, 0, 2, 7, 30, },
+ { 0, 0, 0, 2, 8, 34, },
+ { 2, 0, 0, 2, 8, 30, },
{ 1, 0, 0, 2, 8, 34, },
{ 3, 0, 0, 2, 8, 34, },
{ 4, 0, 0, 2, 8, 34, },
{ 5, 0, 0, 2, 8, 30, },
{ 6, 0, 0, 2, 8, 34, },
{ 7, 0, 0, 2, 8, 30, },
+ { 8, 0, 0, 2, 8, 34, },
+ { 9, 0, 0, 2, 8, 30, },
+ { 10, 0, 0, 2, 8, 30, },
+ { 11, 0, 0, 2, 8, 30, },
+ { 0, 0, 0, 2, 9, 32, },
+ { 2, 0, 0, 2, 9, 30, },
{ 1, 0, 0, 2, 9, 34, },
{ 3, 0, 0, 2, 9, 32, },
{ 4, 0, 0, 2, 9, 34, },
{ 5, 0, 0, 2, 9, 30, },
{ 6, 0, 0, 2, 9, 32, },
{ 7, 0, 0, 2, 9, 30, },
+ { 8, 0, 0, 2, 9, 32, },
+ { 9, 0, 0, 2, 9, 30, },
+ { 10, 0, 0, 2, 9, 30, },
+ { 11, 0, 0, 2, 9, 30, },
+ { 0, 0, 0, 2, 10, 30, },
+ { 2, 0, 0, 2, 10, 30, },
{ 1, 0, 0, 2, 10, 34, },
{ 3, 0, 0, 2, 10, 30, },
{ 4, 0, 0, 2, 10, 34, },
{ 5, 0, 0, 2, 10, 30, },
{ 6, 0, 0, 2, 10, 30, },
{ 7, 0, 0, 2, 10, 30, },
+ { 8, 0, 0, 2, 10, 30, },
+ { 9, 0, 0, 2, 10, 24, },
+ { 10, 0, 0, 2, 10, 30, },
+ { 11, 0, 0, 2, 10, 30, },
+ { 0, 0, 0, 2, 11, 28, },
+ { 2, 0, 0, 2, 11, 30, },
{ 1, 0, 0, 2, 11, 34, },
{ 3, 0, 0, 2, 11, 28, },
{ 4, 0, 0, 2, 11, 34, },
{ 5, 0, 0, 2, 11, 30, },
{ 6, 0, 0, 2, 11, 28, },
{ 7, 0, 0, 2, 11, 30, },
+ { 8, 0, 0, 2, 11, 28, },
+ { 9, 0, 0, 2, 11, 20, },
+ { 10, 0, 0, 2, 11, 30, },
+ { 11, 0, 0, 2, 11, 30, },
+ { 0, 0, 0, 2, 12, 26, },
+ { 2, 0, 0, 2, 12, 30, },
{ 1, 0, 0, 2, 12, 34, },
{ 3, 0, 0, 2, 12, 26, },
{ 4, 0, 0, 2, 12, 34, },
{ 5, 0, 0, 2, 12, 30, },
{ 6, 0, 0, 2, 12, 26, },
{ 7, 0, 0, 2, 12, 30, },
+ { 8, 0, 0, 2, 12, 26, },
+ { 9, 0, 0, 2, 12, 16, },
+ { 10, 0, 0, 2, 12, 30, },
+ { 11, 0, 0, 2, 12, 30, },
+ { 0, 0, 0, 2, 13, 12, },
+ { 2, 0, 0, 2, 13, 30, },
{ 1, 0, 0, 2, 13, 34, },
{ 3, 0, 0, 2, 13, 12, },
{ 4, 0, 0, 2, 13, 32, },
{ 5, 0, 0, 2, 13, 30, },
{ 6, 0, 0, 2, 13, 12, },
{ 7, 0, 0, 2, 13, 30, },
+ { 8, 0, 0, 2, 13, 12, },
+ { 9, 0, 0, 2, 13, 0, },
+ { 10, 0, 0, 2, 13, 30, },
+ { 11, 0, 0, 2, 13, 30, },
+ { 0, 0, 0, 2, 14, 63, },
+ { 2, 0, 0, 2, 14, 63, },
{ 1, 0, 0, 2, 14, 63, },
{ 3, 0, 0, 2, 14, 63, },
{ 4, 0, 0, 2, 14, 63, },
{ 5, 0, 0, 2, 14, 63, },
{ 6, 0, 0, 2, 14, 63, },
{ 7, 0, 0, 2, 14, 63, },
+ { 8, 0, 0, 2, 14, 63, },
+ { 9, 0, 0, 2, 14, 63, },
+ { 10, 0, 0, 2, 14, 63, },
+ { 11, 0, 0, 2, 14, 63, },
+ { 0, 0, 1, 2, 1, 63, },
+ { 2, 0, 1, 2, 1, 63, },
{ 1, 0, 1, 2, 1, 63, },
{ 3, 0, 1, 2, 1, 63, },
{ 4, 0, 1, 2, 1, 63, },
{ 5, 0, 1, 2, 1, 63, },
{ 6, 0, 1, 2, 1, 63, },
{ 7, 0, 1, 2, 1, 63, },
+ { 8, 0, 1, 2, 1, 63, },
+ { 9, 0, 1, 2, 1, 63, },
+ { 10, 0, 1, 2, 1, 63, },
+ { 11, 0, 1, 2, 1, 63, },
+ { 0, 0, 1, 2, 2, 63, },
+ { 2, 0, 1, 2, 2, 63, },
{ 1, 0, 1, 2, 2, 63, },
{ 3, 0, 1, 2, 2, 63, },
{ 4, 0, 1, 2, 2, 63, },
{ 5, 0, 1, 2, 2, 63, },
{ 6, 0, 1, 2, 2, 63, },
{ 7, 0, 1, 2, 2, 63, },
+ { 8, 0, 1, 2, 2, 63, },
+ { 9, 0, 1, 2, 2, 63, },
+ { 10, 0, 1, 2, 2, 63, },
+ { 11, 0, 1, 2, 2, 63, },
+ { 0, 0, 1, 2, 3, 26, },
+ { 2, 0, 1, 2, 3, 30, },
{ 1, 0, 1, 2, 3, 30, },
{ 3, 0, 1, 2, 3, 26, },
{ 4, 0, 1, 2, 3, 30, },
{ 5, 0, 1, 2, 3, 30, },
{ 6, 0, 1, 2, 3, 26, },
{ 7, 0, 1, 2, 3, 30, },
+ { 8, 0, 1, 2, 3, 26, },
+ { 9, 0, 1, 2, 3, 29, },
+ { 10, 0, 1, 2, 3, 30, },
+ { 11, 0, 1, 2, 3, 30, },
+ { 0, 0, 1, 2, 4, 26, },
+ { 2, 0, 1, 2, 4, 30, },
{ 1, 0, 1, 2, 4, 30, },
{ 3, 0, 1, 2, 4, 26, },
{ 4, 0, 1, 2, 4, 30, },
{ 5, 0, 1, 2, 4, 30, },
{ 6, 0, 1, 2, 4, 26, },
{ 7, 0, 1, 2, 4, 30, },
+ { 8, 0, 1, 2, 4, 26, },
+ { 9, 0, 1, 2, 4, 29, },
+ { 10, 0, 1, 2, 4, 30, },
+ { 11, 0, 1, 2, 4, 30, },
+ { 0, 0, 1, 2, 5, 30, },
+ { 2, 0, 1, 2, 5, 30, },
{ 1, 0, 1, 2, 5, 30, },
{ 3, 0, 1, 2, 5, 30, },
{ 4, 0, 1, 2, 5, 30, },
{ 5, 0, 1, 2, 5, 30, },
{ 6, 0, 1, 2, 5, 30, },
{ 7, 0, 1, 2, 5, 30, },
+ { 8, 0, 1, 2, 5, 30, },
+ { 9, 0, 1, 2, 5, 29, },
+ { 10, 0, 1, 2, 5, 30, },
+ { 11, 0, 1, 2, 5, 30, },
+ { 0, 0, 1, 2, 6, 30, },
+ { 2, 0, 1, 2, 6, 30, },
{ 1, 0, 1, 2, 6, 30, },
{ 3, 0, 1, 2, 6, 30, },
{ 4, 0, 1, 2, 6, 30, },
{ 5, 0, 1, 2, 6, 30, },
{ 6, 0, 1, 2, 6, 30, },
{ 7, 0, 1, 2, 6, 30, },
+ { 8, 0, 1, 2, 6, 30, },
+ { 9, 0, 1, 2, 6, 29, },
+ { 10, 0, 1, 2, 6, 30, },
+ { 11, 0, 1, 2, 6, 30, },
+ { 0, 0, 1, 2, 7, 30, },
+ { 2, 0, 1, 2, 7, 30, },
{ 1, 0, 1, 2, 7, 30, },
{ 3, 0, 1, 2, 7, 30, },
{ 4, 0, 1, 2, 7, 30, },
{ 5, 0, 1, 2, 7, 30, },
{ 6, 0, 1, 2, 7, 30, },
{ 7, 0, 1, 2, 7, 30, },
+ { 8, 0, 1, 2, 7, 30, },
+ { 9, 0, 1, 2, 7, 29, },
+ { 10, 0, 1, 2, 7, 30, },
+ { 11, 0, 1, 2, 7, 30, },
+ { 0, 0, 1, 2, 8, 26, },
+ { 2, 0, 1, 2, 8, 30, },
{ 1, 0, 1, 2, 8, 30, },
{ 3, 0, 1, 2, 8, 26, },
{ 4, 0, 1, 2, 8, 30, },
{ 5, 0, 1, 2, 8, 30, },
{ 6, 0, 1, 2, 8, 26, },
{ 7, 0, 1, 2, 8, 30, },
+ { 8, 0, 1, 2, 8, 26, },
+ { 9, 0, 1, 2, 8, 25, },
+ { 10, 0, 1, 2, 8, 30, },
+ { 11, 0, 1, 2, 8, 30, },
+ { 0, 0, 1, 2, 9, 26, },
+ { 2, 0, 1, 2, 9, 30, },
{ 1, 0, 1, 2, 9, 30, },
{ 3, 0, 1, 2, 9, 26, },
{ 4, 0, 1, 2, 9, 30, },
{ 5, 0, 1, 2, 9, 30, },
{ 6, 0, 1, 2, 9, 26, },
{ 7, 0, 1, 2, 9, 30, },
+ { 8, 0, 1, 2, 9, 26, },
+ { 9, 0, 1, 2, 9, 21, },
+ { 10, 0, 1, 2, 9, 30, },
+ { 11, 0, 1, 2, 9, 30, },
+ { 0, 0, 1, 2, 10, 28, },
+ { 2, 0, 1, 2, 10, 30, },
{ 1, 0, 1, 2, 10, 30, },
{ 3, 0, 1, 2, 10, 28, },
{ 4, 0, 1, 2, 10, 30, },
{ 5, 0, 1, 2, 10, 30, },
{ 6, 0, 1, 2, 10, 28, },
{ 7, 0, 1, 2, 10, 30, },
+ { 8, 0, 1, 2, 10, 28, },
+ { 9, 0, 1, 2, 10, 17, },
+ { 10, 0, 1, 2, 10, 30, },
+ { 11, 0, 1, 2, 10, 30, },
+ { 0, 0, 1, 2, 11, 20, },
+ { 2, 0, 1, 2, 11, 30, },
{ 1, 0, 1, 2, 11, 30, },
{ 3, 0, 1, 2, 11, 20, },
{ 4, 0, 1, 2, 11, 30, },
{ 5, 0, 1, 2, 11, 30, },
{ 6, 0, 1, 2, 11, 20, },
{ 7, 0, 1, 2, 11, 30, },
+ { 8, 0, 1, 2, 11, 20, },
+ { 9, 0, 1, 2, 11, 5, },
+ { 10, 0, 1, 2, 11, 30, },
+ { 11, 0, 1, 2, 11, 30, },
+ { 0, 0, 1, 2, 12, 63, },
+ { 2, 0, 1, 2, 12, 63, },
{ 1, 0, 1, 2, 12, 63, },
{ 3, 0, 1, 2, 12, 63, },
{ 4, 0, 1, 2, 12, 63, },
{ 5, 0, 1, 2, 12, 63, },
{ 6, 0, 1, 2, 12, 63, },
{ 7, 0, 1, 2, 12, 63, },
+ { 8, 0, 1, 2, 12, 63, },
+ { 9, 0, 1, 2, 12, 63, },
+ { 10, 0, 1, 2, 12, 63, },
+ { 11, 0, 1, 2, 12, 63, },
+ { 0, 0, 1, 2, 13, 63, },
+ { 2, 0, 1, 2, 13, 63, },
{ 1, 0, 1, 2, 13, 63, },
{ 3, 0, 1, 2, 13, 63, },
{ 4, 0, 1, 2, 13, 63, },
{ 5, 0, 1, 2, 13, 63, },
{ 6, 0, 1, 2, 13, 63, },
{ 7, 0, 1, 2, 13, 63, },
+ { 8, 0, 1, 2, 13, 63, },
+ { 9, 0, 1, 2, 13, 63, },
+ { 10, 0, 1, 2, 13, 63, },
+ { 11, 0, 1, 2, 13, 63, },
+ { 0, 0, 1, 2, 14, 63, },
+ { 2, 0, 1, 2, 14, 63, },
{ 1, 0, 1, 2, 14, 63, },
{ 3, 0, 1, 2, 14, 63, },
{ 4, 0, 1, 2, 14, 63, },
{ 5, 0, 1, 2, 14, 63, },
{ 6, 0, 1, 2, 14, 63, },
{ 7, 0, 1, 2, 14, 63, },
+ { 8, 0, 1, 2, 14, 63, },
+ { 9, 0, 1, 2, 14, 63, },
+ { 10, 0, 1, 2, 14, 63, },
+ { 11, 0, 1, 2, 14, 63, },
+ { 0, 1, 0, 1, 36, 31, },
+ { 2, 1, 0, 1, 36, 32, },
{ 1, 1, 0, 1, 36, 33, },
{ 3, 1, 0, 1, 36, 31, },
{ 4, 1, 0, 1, 36, 29, },
{ 5, 1, 0, 1, 36, 32, },
- { 6, 1, 0, 1, 36, 29, },
+ { 6, 1, 0, 1, 36, 31, },
{ 7, 1, 0, 1, 36, 27, },
+ { 8, 1, 0, 1, 36, 31, },
+ { 9, 1, 0, 1, 36, 29, },
+ { 10, 1, 0, 1, 36, 63, },
+ { 11, 1, 0, 1, 36, 32, },
+ { 0, 1, 0, 1, 40, 33, },
+ { 2, 1, 0, 1, 40, 32, },
{ 1, 1, 0, 1, 40, 33, },
{ 3, 1, 0, 1, 40, 31, },
{ 4, 1, 0, 1, 40, 28, },
{ 5, 1, 0, 1, 40, 32, },
- { 6, 1, 0, 1, 40, 29, },
+ { 6, 1, 0, 1, 40, 33, },
{ 7, 1, 0, 1, 40, 27, },
+ { 8, 1, 0, 1, 40, 31, },
+ { 9, 1, 0, 1, 40, 29, },
+ { 10, 1, 0, 1, 40, 63, },
+ { 11, 1, 0, 1, 40, 32, },
+ { 0, 1, 0, 1, 44, 33, },
+ { 2, 1, 0, 1, 44, 32, },
{ 1, 1, 0, 1, 44, 33, },
{ 3, 1, 0, 1, 44, 31, },
{ 4, 1, 0, 1, 44, 28, },
{ 5, 1, 0, 1, 44, 32, },
- { 6, 1, 0, 1, 44, 30, },
+ { 6, 1, 0, 1, 44, 33, },
{ 7, 1, 0, 1, 44, 27, },
+ { 8, 1, 0, 1, 44, 31, },
+ { 9, 1, 0, 1, 44, 29, },
+ { 10, 1, 0, 1, 44, 63, },
+ { 11, 1, 0, 1, 44, 32, },
+ { 0, 1, 0, 1, 48, 31, },
+ { 2, 1, 0, 1, 48, 32, },
{ 1, 1, 0, 1, 48, 33, },
{ 3, 1, 0, 1, 48, 31, },
{ 4, 1, 0, 1, 48, 27, },
{ 5, 1, 0, 1, 48, 32, },
- { 6, 1, 0, 1, 48, 30, },
+ { 6, 1, 0, 1, 48, 31, },
{ 7, 1, 0, 1, 48, 27, },
+ { 8, 1, 0, 1, 48, 31, },
+ { 9, 1, 0, 1, 48, 29, },
+ { 10, 1, 0, 1, 48, 63, },
+ { 11, 1, 0, 1, 48, 32, },
+ { 0, 1, 0, 1, 52, 33, },
+ { 2, 1, 0, 1, 52, 32, },
{ 1, 1, 0, 1, 52, 33, },
{ 3, 1, 0, 1, 52, 32, },
{ 4, 1, 0, 1, 52, 16, },
{ 5, 1, 0, 1, 52, 32, },
- { 6, 1, 0, 1, 52, 30, },
+ { 6, 1, 0, 1, 52, 33, },
{ 7, 1, 0, 1, 52, 27, },
+ { 8, 1, 0, 1, 52, 33, },
+ { 9, 1, 0, 1, 52, 29, },
+ { 10, 1, 0, 1, 52, 63, },
+ { 11, 1, 0, 1, 52, 32, },
+ { 0, 1, 0, 1, 56, 33, },
+ { 2, 1, 0, 1, 56, 32, },
{ 1, 1, 0, 1, 56, 33, },
{ 3, 1, 0, 1, 56, 32, },
{ 4, 1, 0, 1, 56, 33, },
{ 5, 1, 0, 1, 56, 32, },
- { 6, 1, 0, 1, 56, 30, },
+ { 6, 1, 0, 1, 56, 33, },
{ 7, 1, 0, 1, 56, 27, },
+ { 8, 1, 0, 1, 56, 33, },
+ { 9, 1, 0, 1, 56, 29, },
+ { 10, 1, 0, 1, 56, 63, },
+ { 11, 1, 0, 1, 56, 32, },
+ { 0, 1, 0, 1, 60, 33, },
+ { 2, 1, 0, 1, 60, 32, },
{ 1, 1, 0, 1, 60, 33, },
{ 3, 1, 0, 1, 60, 32, },
{ 4, 1, 0, 1, 60, 33, },
{ 5, 1, 0, 1, 60, 32, },
- { 6, 1, 0, 1, 60, 30, },
+ { 6, 1, 0, 1, 60, 33, },
{ 7, 1, 0, 1, 60, 27, },
+ { 8, 1, 0, 1, 60, 33, },
+ { 9, 1, 0, 1, 60, 29, },
+ { 10, 1, 0, 1, 60, 63, },
+ { 11, 1, 0, 1, 60, 32, },
+ { 0, 1, 0, 1, 64, 30, },
+ { 2, 1, 0, 1, 64, 32, },
{ 1, 1, 0, 1, 64, 33, },
{ 3, 1, 0, 1, 64, 30, },
{ 4, 1, 0, 1, 64, 33, },
{ 5, 1, 0, 1, 64, 32, },
- { 6, 1, 0, 1, 64, 29, },
+ { 6, 1, 0, 1, 64, 30, },
{ 7, 1, 0, 1, 64, 27, },
+ { 8, 1, 0, 1, 64, 30, },
+ { 9, 1, 0, 1, 64, 29, },
+ { 10, 1, 0, 1, 64, 63, },
+ { 11, 1, 0, 1, 64, 32, },
+ { 0, 1, 0, 1, 100, 30, },
+ { 2, 1, 0, 1, 100, 32, },
{ 1, 1, 0, 1, 100, 33, },
{ 3, 1, 0, 1, 100, 30, },
{ 4, 1, 0, 1, 100, 33, },
{ 5, 1, 0, 1, 100, 32, },
- { 6, 1, 0, 1, 100, 30, },
+ { 6, 1, 0, 1, 100, 33, },
{ 7, 1, 0, 1, 100, 27, },
+ { 8, 1, 0, 1, 100, 30, },
+ { 9, 1, 0, 1, 100, 63, },
+ { 10, 1, 0, 1, 100, 63, },
+ { 11, 1, 0, 1, 100, 32, },
+ { 0, 1, 0, 1, 104, 33, },
+ { 2, 1, 0, 1, 104, 32, },
{ 1, 1, 0, 1, 104, 33, },
{ 3, 1, 0, 1, 104, 33, },
{ 4, 1, 0, 1, 104, 33, },
{ 5, 1, 0, 1, 104, 32, },
- { 6, 1, 0, 1, 104, 30, },
+ { 6, 1, 0, 1, 104, 33, },
{ 7, 1, 0, 1, 104, 27, },
+ { 8, 1, 0, 1, 104, 33, },
+ { 9, 1, 0, 1, 104, 63, },
+ { 10, 1, 0, 1, 104, 63, },
+ { 11, 1, 0, 1, 104, 32, },
+ { 0, 1, 0, 1, 108, 33, },
+ { 2, 1, 0, 1, 108, 32, },
{ 1, 1, 0, 1, 108, 33, },
{ 3, 1, 0, 1, 108, 33, },
{ 4, 1, 0, 1, 108, 33, },
{ 5, 1, 0, 1, 108, 32, },
- { 6, 1, 0, 1, 108, 30, },
+ { 6, 1, 0, 1, 108, 33, },
{ 7, 1, 0, 1, 108, 27, },
+ { 8, 1, 0, 1, 108, 33, },
+ { 9, 1, 0, 1, 108, 63, },
+ { 10, 1, 0, 1, 108, 63, },
+ { 11, 1, 0, 1, 108, 32, },
+ { 0, 1, 0, 1, 112, 33, },
+ { 2, 1, 0, 1, 112, 32, },
{ 1, 1, 0, 1, 112, 33, },
{ 3, 1, 0, 1, 112, 33, },
{ 4, 1, 0, 1, 112, 33, },
{ 5, 1, 0, 1, 112, 32, },
- { 6, 1, 0, 1, 112, 30, },
+ { 6, 1, 0, 1, 112, 33, },
{ 7, 1, 0, 1, 112, 27, },
+ { 8, 1, 0, 1, 112, 33, },
+ { 9, 1, 0, 1, 112, 63, },
+ { 10, 1, 0, 1, 112, 63, },
+ { 11, 1, 0, 1, 112, 32, },
+ { 0, 1, 0, 1, 116, 33, },
+ { 2, 1, 0, 1, 116, 32, },
{ 1, 1, 0, 1, 116, 33, },
{ 3, 1, 0, 1, 116, 33, },
{ 4, 1, 0, 1, 116, 33, },
{ 5, 1, 0, 1, 116, 32, },
- { 6, 1, 0, 1, 116, 30, },
+ { 6, 1, 0, 1, 116, 33, },
{ 7, 1, 0, 1, 116, 27, },
+ { 8, 1, 0, 1, 116, 33, },
+ { 9, 1, 0, 1, 116, 63, },
+ { 10, 1, 0, 1, 116, 63, },
+ { 11, 1, 0, 1, 116, 32, },
+ { 0, 1, 0, 1, 120, 33, },
+ { 2, 1, 0, 1, 120, 32, },
{ 1, 1, 0, 1, 120, 33, },
{ 3, 1, 0, 1, 120, 63, },
{ 4, 1, 0, 1, 120, 33, },
{ 5, 1, 0, 1, 120, 63, },
- { 6, 1, 0, 1, 120, 30, },
+ { 6, 1, 0, 1, 120, 33, },
{ 7, 1, 0, 1, 120, 27, },
+ { 8, 1, 0, 1, 120, 33, },
+ { 9, 1, 0, 1, 120, 63, },
+ { 10, 1, 0, 1, 120, 63, },
+ { 11, 1, 0, 1, 120, 32, },
+ { 0, 1, 0, 1, 124, 33, },
+ { 2, 1, 0, 1, 124, 32, },
{ 1, 1, 0, 1, 124, 33, },
{ 3, 1, 0, 1, 124, 63, },
{ 4, 1, 0, 1, 124, 33, },
{ 5, 1, 0, 1, 124, 63, },
- { 6, 1, 0, 1, 124, 30, },
+ { 6, 1, 0, 1, 124, 33, },
{ 7, 1, 0, 1, 124, 27, },
+ { 8, 1, 0, 1, 124, 33, },
+ { 9, 1, 0, 1, 124, 63, },
+ { 10, 1, 0, 1, 124, 63, },
+ { 11, 1, 0, 1, 124, 32, },
+ { 0, 1, 0, 1, 128, 33, },
+ { 2, 1, 0, 1, 128, 32, },
{ 1, 1, 0, 1, 128, 33, },
{ 3, 1, 0, 1, 128, 63, },
- { 4, 1, 0, 1, 128, 63, },
+ { 4, 1, 0, 1, 128, 33, },
{ 5, 1, 0, 1, 128, 63, },
- { 6, 1, 0, 1, 128, 30, },
+ { 6, 1, 0, 1, 128, 33, },
{ 7, 1, 0, 1, 128, 27, },
+ { 8, 1, 0, 1, 128, 33, },
+ { 9, 1, 0, 1, 128, 63, },
+ { 10, 1, 0, 1, 128, 63, },
+ { 11, 1, 0, 1, 128, 32, },
+ { 0, 1, 0, 1, 132, 33, },
+ { 2, 1, 0, 1, 132, 32, },
{ 1, 1, 0, 1, 132, 33, },
{ 3, 1, 0, 1, 132, 33, },
- { 4, 1, 0, 1, 132, 63, },
+ { 4, 1, 0, 1, 132, 33, },
{ 5, 1, 0, 1, 132, 32, },
- { 6, 1, 0, 1, 132, 30, },
+ { 6, 1, 0, 1, 132, 33, },
{ 7, 1, 0, 1, 132, 27, },
+ { 8, 1, 0, 1, 132, 33, },
+ { 9, 1, 0, 1, 132, 63, },
+ { 10, 1, 0, 1, 132, 63, },
+ { 11, 1, 0, 1, 132, 32, },
+ { 0, 1, 0, 1, 136, 33, },
+ { 2, 1, 0, 1, 136, 32, },
{ 1, 1, 0, 1, 136, 33, },
{ 3, 1, 0, 1, 136, 33, },
- { 4, 1, 0, 1, 136, 63, },
+ { 4, 1, 0, 1, 136, 33, },
{ 5, 1, 0, 1, 136, 32, },
- { 6, 1, 0, 1, 136, 30, },
- { 7, 1, 0, 1, 136, 63, },
+ { 6, 1, 0, 1, 136, 33, },
+ { 7, 1, 0, 1, 136, 27, },
+ { 8, 1, 0, 1, 136, 33, },
+ { 9, 1, 0, 1, 136, 63, },
+ { 10, 1, 0, 1, 136, 63, },
+ { 11, 1, 0, 1, 136, 32, },
+ { 0, 1, 0, 1, 140, 31, },
+ { 2, 1, 0, 1, 140, 32, },
{ 1, 1, 0, 1, 140, 33, },
{ 3, 1, 0, 1, 140, 31, },
- { 4, 1, 0, 1, 140, 63, },
+ { 4, 1, 0, 1, 140, 33, },
{ 5, 1, 0, 1, 140, 32, },
- { 6, 1, 0, 1, 140, 30, },
- { 7, 1, 0, 1, 140, 63, },
+ { 6, 1, 0, 1, 140, 33, },
+ { 7, 1, 0, 1, 140, 27, },
+ { 8, 1, 0, 1, 140, 31, },
+ { 9, 1, 0, 1, 140, 63, },
+ { 10, 1, 0, 1, 140, 63, },
+ { 11, 1, 0, 1, 140, 32, },
+ { 0, 1, 0, 1, 144, 30, },
+ { 2, 1, 0, 1, 144, 63, },
{ 1, 1, 0, 1, 144, 63, },
{ 3, 1, 0, 1, 144, 30, },
- { 4, 1, 0, 1, 144, 63, },
+ { 4, 1, 0, 1, 144, 33, },
{ 5, 1, 0, 1, 144, 63, },
- { 6, 1, 0, 1, 144, 30, },
+ { 6, 1, 0, 1, 144, 33, },
{ 7, 1, 0, 1, 144, 63, },
+ { 8, 1, 0, 1, 144, 30, },
+ { 9, 1, 0, 1, 144, 63, },
+ { 10, 1, 0, 1, 144, 63, },
+ { 11, 1, 0, 1, 144, 32, },
+ { 0, 1, 0, 1, 149, 33, },
+ { 2, 1, 0, 1, 149, 14, },
{ 1, 1, 0, 1, 149, 63, },
{ 3, 1, 0, 1, 149, 30, },
{ 4, 1, 0, 1, 149, 33, },
{ 5, 1, 0, 1, 149, 33, },
- { 6, 1, 0, 1, 149, 30, },
+ { 6, 1, 0, 1, 149, 33, },
{ 7, 1, 0, 1, 149, 27, },
+ { 8, 1, 0, 1, 149, 33, },
+ { 9, 1, 0, 1, 149, 30, },
+ { 10, 1, 0, 1, 149, 14, },
+ { 11, 1, 0, 1, 149, 31, },
+ { 0, 1, 0, 1, 153, 33, },
+ { 2, 1, 0, 1, 153, 14, },
{ 1, 1, 0, 1, 153, 63, },
{ 3, 1, 0, 1, 153, 33, },
{ 4, 1, 0, 1, 153, 33, },
{ 5, 1, 0, 1, 153, 33, },
- { 6, 1, 0, 1, 153, 30, },
+ { 6, 1, 0, 1, 153, 33, },
{ 7, 1, 0, 1, 153, 27, },
+ { 8, 1, 0, 1, 153, 33, },
+ { 9, 1, 0, 1, 153, 30, },
+ { 10, 1, 0, 1, 153, 14, },
+ { 11, 1, 0, 1, 153, 31, },
+ { 0, 1, 0, 1, 157, 33, },
+ { 2, 1, 0, 1, 157, 14, },
{ 1, 1, 0, 1, 157, 63, },
{ 3, 1, 0, 1, 157, 33, },
{ 4, 1, 0, 1, 157, 33, },
{ 5, 1, 0, 1, 157, 33, },
- { 6, 1, 0, 1, 157, 30, },
+ { 6, 1, 0, 1, 157, 33, },
{ 7, 1, 0, 1, 157, 27, },
+ { 8, 1, 0, 1, 157, 33, },
+ { 9, 1, 0, 1, 157, 30, },
+ { 10, 1, 0, 1, 157, 14, },
+ { 11, 1, 0, 1, 157, 31, },
+ { 0, 1, 0, 1, 161, 33, },
+ { 2, 1, 0, 1, 161, 14, },
{ 1, 1, 0, 1, 161, 63, },
{ 3, 1, 0, 1, 161, 33, },
{ 4, 1, 0, 1, 161, 31, },
{ 5, 1, 0, 1, 161, 33, },
- { 6, 1, 0, 1, 161, 30, },
+ { 6, 1, 0, 1, 161, 33, },
{ 7, 1, 0, 1, 161, 27, },
+ { 8, 1, 0, 1, 161, 33, },
+ { 9, 1, 0, 1, 161, 30, },
+ { 10, 1, 0, 1, 161, 14, },
+ { 11, 1, 0, 1, 161, 31, },
+ { 0, 1, 0, 1, 165, 33, },
+ { 2, 1, 0, 1, 165, 14, },
{ 1, 1, 0, 1, 165, 63, },
{ 3, 1, 0, 1, 165, 33, },
- { 4, 1, 0, 1, 165, 63, },
+ { 4, 1, 0, 1, 165, 33, },
{ 5, 1, 0, 1, 165, 33, },
- { 6, 1, 0, 1, 165, 30, },
+ { 6, 1, 0, 1, 165, 33, },
{ 7, 1, 0, 1, 165, 27, },
+ { 8, 1, 0, 1, 165, 30, },
+ { 9, 1, 0, 1, 165, 30, },
+ { 10, 1, 0, 1, 165, 14, },
+ { 11, 1, 0, 1, 165, 31, },
+ { 0, 1, 0, 2, 36, 30, },
+ { 2, 1, 0, 2, 36, 32, },
{ 1, 1, 0, 2, 36, 33, },
{ 3, 1, 0, 2, 36, 30, },
{ 4, 1, 0, 2, 36, 27, },
{ 5, 1, 0, 2, 36, 32, },
{ 6, 1, 0, 2, 36, 30, },
{ 7, 1, 0, 2, 36, 27, },
+ { 8, 1, 0, 2, 36, 30, },
+ { 9, 1, 0, 2, 36, 29, },
+ { 10, 1, 0, 2, 36, 63, },
+ { 11, 1, 0, 2, 36, 32, },
+ { 0, 1, 0, 2, 40, 33, },
+ { 2, 1, 0, 2, 40, 32, },
{ 1, 1, 0, 2, 40, 33, },
{ 3, 1, 0, 2, 40, 31, },
{ 4, 1, 0, 2, 40, 29, },
{ 5, 1, 0, 2, 40, 32, },
- { 6, 1, 0, 2, 40, 30, },
+ { 6, 1, 0, 2, 40, 33, },
{ 7, 1, 0, 2, 40, 27, },
+ { 8, 1, 0, 2, 40, 31, },
+ { 9, 1, 0, 2, 40, 29, },
+ { 10, 1, 0, 2, 40, 63, },
+ { 11, 1, 0, 2, 40, 32, },
+ { 0, 1, 0, 2, 44, 33, },
+ { 2, 1, 0, 2, 44, 32, },
{ 1, 1, 0, 2, 44, 33, },
{ 3, 1, 0, 2, 44, 31, },
{ 4, 1, 0, 2, 44, 29, },
{ 5, 1, 0, 2, 44, 32, },
- { 6, 1, 0, 2, 44, 30, },
+ { 6, 1, 0, 2, 44, 33, },
{ 7, 1, 0, 2, 44, 27, },
+ { 8, 1, 0, 2, 44, 31, },
+ { 9, 1, 0, 2, 44, 29, },
+ { 10, 1, 0, 2, 44, 63, },
+ { 11, 1, 0, 2, 44, 32, },
+ { 0, 1, 0, 2, 48, 33, },
+ { 2, 1, 0, 2, 48, 32, },
{ 1, 1, 0, 2, 48, 33, },
{ 3, 1, 0, 2, 48, 31, },
{ 4, 1, 0, 2, 48, 26, },
{ 5, 1, 0, 2, 48, 32, },
- { 6, 1, 0, 2, 48, 30, },
+ { 6, 1, 0, 2, 48, 33, },
{ 7, 1, 0, 2, 48, 27, },
+ { 8, 1, 0, 2, 48, 31, },
+ { 9, 1, 0, 2, 48, 29, },
+ { 10, 1, 0, 2, 48, 63, },
+ { 11, 1, 0, 2, 48, 32, },
+ { 0, 1, 0, 2, 52, 33, },
+ { 2, 1, 0, 2, 52, 32, },
{ 1, 1, 0, 2, 52, 33, },
{ 3, 1, 0, 2, 52, 32, },
{ 4, 1, 0, 2, 52, 7, },
{ 5, 1, 0, 2, 52, 32, },
- { 6, 1, 0, 2, 52, 30, },
+ { 6, 1, 0, 2, 52, 33, },
{ 7, 1, 0, 2, 52, 27, },
+ { 8, 1, 0, 2, 52, 33, },
+ { 9, 1, 0, 2, 52, 29, },
+ { 10, 1, 0, 2, 52, 63, },
+ { 11, 1, 0, 2, 52, 32, },
+ { 0, 1, 0, 2, 56, 33, },
+ { 2, 1, 0, 2, 56, 32, },
{ 1, 1, 0, 2, 56, 33, },
{ 3, 1, 0, 2, 56, 32, },
{ 4, 1, 0, 2, 56, 33, },
{ 5, 1, 0, 2, 56, 32, },
- { 6, 1, 0, 2, 56, 30, },
+ { 6, 1, 0, 2, 56, 33, },
{ 7, 1, 0, 2, 56, 27, },
+ { 8, 1, 0, 2, 56, 33, },
+ { 9, 1, 0, 2, 56, 29, },
+ { 10, 1, 0, 2, 56, 63, },
+ { 11, 1, 0, 2, 56, 32, },
+ { 0, 1, 0, 2, 60, 33, },
+ { 2, 1, 0, 2, 60, 32, },
{ 1, 1, 0, 2, 60, 33, },
{ 3, 1, 0, 2, 60, 32, },
{ 4, 1, 0, 2, 60, 33, },
{ 5, 1, 0, 2, 60, 32, },
- { 6, 1, 0, 2, 60, 30, },
+ { 6, 1, 0, 2, 60, 33, },
{ 7, 1, 0, 2, 60, 27, },
+ { 8, 1, 0, 2, 60, 33, },
+ { 9, 1, 0, 2, 60, 29, },
+ { 10, 1, 0, 2, 60, 63, },
+ { 11, 1, 0, 2, 60, 32, },
+ { 0, 1, 0, 2, 64, 30, },
+ { 2, 1, 0, 2, 64, 32, },
{ 1, 1, 0, 2, 64, 33, },
{ 3, 1, 0, 2, 64, 30, },
{ 4, 1, 0, 2, 64, 33, },
{ 5, 1, 0, 2, 64, 32, },
{ 6, 1, 0, 2, 64, 30, },
{ 7, 1, 0, 2, 64, 27, },
+ { 8, 1, 0, 2, 64, 30, },
+ { 9, 1, 0, 2, 64, 29, },
+ { 10, 1, 0, 2, 64, 63, },
+ { 11, 1, 0, 2, 64, 32, },
+ { 0, 1, 0, 2, 100, 30, },
+ { 2, 1, 0, 2, 100, 32, },
{ 1, 1, 0, 2, 100, 33, },
{ 3, 1, 0, 2, 100, 30, },
{ 4, 1, 0, 2, 100, 33, },
{ 5, 1, 0, 2, 100, 32, },
- { 6, 1, 0, 2, 100, 30, },
+ { 6, 1, 0, 2, 100, 33, },
{ 7, 1, 0, 2, 100, 27, },
+ { 8, 1, 0, 2, 100, 30, },
+ { 9, 1, 0, 2, 100, 63, },
+ { 10, 1, 0, 2, 100, 63, },
+ { 11, 1, 0, 2, 100, 32, },
+ { 0, 1, 0, 2, 104, 33, },
+ { 2, 1, 0, 2, 104, 32, },
{ 1, 1, 0, 2, 104, 33, },
{ 3, 1, 0, 2, 104, 33, },
{ 4, 1, 0, 2, 104, 33, },
{ 5, 1, 0, 2, 104, 32, },
- { 6, 1, 0, 2, 104, 30, },
+ { 6, 1, 0, 2, 104, 33, },
{ 7, 1, 0, 2, 104, 27, },
+ { 8, 1, 0, 2, 104, 33, },
+ { 9, 1, 0, 2, 104, 63, },
+ { 10, 1, 0, 2, 104, 63, },
+ { 11, 1, 0, 2, 104, 32, },
+ { 0, 1, 0, 2, 108, 33, },
+ { 2, 1, 0, 2, 108, 32, },
{ 1, 1, 0, 2, 108, 33, },
{ 3, 1, 0, 2, 108, 33, },
{ 4, 1, 0, 2, 108, 33, },
{ 5, 1, 0, 2, 108, 32, },
- { 6, 1, 0, 2, 108, 30, },
+ { 6, 1, 0, 2, 108, 33, },
{ 7, 1, 0, 2, 108, 27, },
+ { 8, 1, 0, 2, 108, 33, },
+ { 9, 1, 0, 2, 108, 63, },
+ { 10, 1, 0, 2, 108, 63, },
+ { 11, 1, 0, 2, 108, 32, },
+ { 0, 1, 0, 2, 112, 33, },
+ { 2, 1, 0, 2, 112, 32, },
{ 1, 1, 0, 2, 112, 33, },
{ 3, 1, 0, 2, 112, 33, },
{ 4, 1, 0, 2, 112, 33, },
{ 5, 1, 0, 2, 112, 32, },
- { 6, 1, 0, 2, 112, 30, },
+ { 6, 1, 0, 2, 112, 33, },
{ 7, 1, 0, 2, 112, 27, },
+ { 8, 1, 0, 2, 112, 33, },
+ { 9, 1, 0, 2, 112, 63, },
+ { 10, 1, 0, 2, 112, 63, },
+ { 11, 1, 0, 2, 112, 32, },
+ { 0, 1, 0, 2, 116, 33, },
+ { 2, 1, 0, 2, 116, 32, },
{ 1, 1, 0, 2, 116, 33, },
{ 3, 1, 0, 2, 116, 33, },
{ 4, 1, 0, 2, 116, 33, },
{ 5, 1, 0, 2, 116, 32, },
- { 6, 1, 0, 2, 116, 30, },
+ { 6, 1, 0, 2, 116, 33, },
{ 7, 1, 0, 2, 116, 27, },
+ { 8, 1, 0, 2, 116, 33, },
+ { 9, 1, 0, 2, 116, 63, },
+ { 10, 1, 0, 2, 116, 63, },
+ { 11, 1, 0, 2, 116, 32, },
+ { 0, 1, 0, 2, 120, 33, },
+ { 2, 1, 0, 2, 120, 32, },
{ 1, 1, 0, 2, 120, 33, },
{ 3, 1, 0, 2, 120, 63, },
{ 4, 1, 0, 2, 120, 33, },
{ 5, 1, 0, 2, 120, 63, },
- { 6, 1, 0, 2, 120, 30, },
+ { 6, 1, 0, 2, 120, 33, },
{ 7, 1, 0, 2, 120, 27, },
+ { 8, 1, 0, 2, 120, 33, },
+ { 9, 1, 0, 2, 120, 63, },
+ { 10, 1, 0, 2, 120, 63, },
+ { 11, 1, 0, 2, 120, 32, },
+ { 0, 1, 0, 2, 124, 33, },
+ { 2, 1, 0, 2, 124, 32, },
{ 1, 1, 0, 2, 124, 33, },
{ 3, 1, 0, 2, 124, 63, },
{ 4, 1, 0, 2, 124, 33, },
{ 5, 1, 0, 2, 124, 63, },
- { 6, 1, 0, 2, 124, 30, },
+ { 6, 1, 0, 2, 124, 33, },
{ 7, 1, 0, 2, 124, 27, },
+ { 8, 1, 0, 2, 124, 33, },
+ { 9, 1, 0, 2, 124, 63, },
+ { 10, 1, 0, 2, 124, 63, },
+ { 11, 1, 0, 2, 124, 32, },
+ { 0, 1, 0, 2, 128, 33, },
+ { 2, 1, 0, 2, 128, 32, },
{ 1, 1, 0, 2, 128, 33, },
{ 3, 1, 0, 2, 128, 63, },
- { 4, 1, 0, 2, 128, 63, },
+ { 4, 1, 0, 2, 128, 33, },
{ 5, 1, 0, 2, 128, 63, },
- { 6, 1, 0, 2, 128, 30, },
+ { 6, 1, 0, 2, 128, 33, },
{ 7, 1, 0, 2, 128, 27, },
+ { 8, 1, 0, 2, 128, 33, },
+ { 9, 1, 0, 2, 128, 63, },
+ { 10, 1, 0, 2, 128, 63, },
+ { 11, 1, 0, 2, 128, 32, },
+ { 0, 1, 0, 2, 132, 33, },
+ { 2, 1, 0, 2, 132, 32, },
{ 1, 1, 0, 2, 132, 33, },
{ 3, 1, 0, 2, 132, 33, },
- { 4, 1, 0, 2, 132, 63, },
+ { 4, 1, 0, 2, 132, 33, },
{ 5, 1, 0, 2, 132, 32, },
- { 6, 1, 0, 2, 132, 30, },
+ { 6, 1, 0, 2, 132, 33, },
{ 7, 1, 0, 2, 132, 27, },
+ { 8, 1, 0, 2, 132, 33, },
+ { 9, 1, 0, 2, 132, 63, },
+ { 10, 1, 0, 2, 132, 63, },
+ { 11, 1, 0, 2, 132, 32, },
+ { 0, 1, 0, 2, 136, 33, },
+ { 2, 1, 0, 2, 136, 32, },
{ 1, 1, 0, 2, 136, 33, },
{ 3, 1, 0, 2, 136, 33, },
- { 4, 1, 0, 2, 136, 63, },
+ { 4, 1, 0, 2, 136, 33, },
{ 5, 1, 0, 2, 136, 32, },
- { 6, 1, 0, 2, 136, 30, },
- { 7, 1, 0, 2, 136, 63, },
+ { 6, 1, 0, 2, 136, 33, },
+ { 7, 1, 0, 2, 136, 27, },
+ { 8, 1, 0, 2, 136, 33, },
+ { 9, 1, 0, 2, 136, 63, },
+ { 10, 1, 0, 2, 136, 63, },
+ { 11, 1, 0, 2, 136, 32, },
+ { 0, 1, 0, 2, 140, 29, },
+ { 2, 1, 0, 2, 140, 32, },
{ 1, 1, 0, 2, 140, 33, },
{ 3, 1, 0, 2, 140, 29, },
- { 4, 1, 0, 2, 140, 63, },
+ { 4, 1, 0, 2, 140, 33, },
{ 5, 1, 0, 2, 140, 32, },
- { 6, 1, 0, 2, 140, 30, },
- { 7, 1, 0, 2, 140, 63, },
+ { 6, 1, 0, 2, 140, 33, },
+ { 7, 1, 0, 2, 140, 27, },
+ { 8, 1, 0, 2, 140, 29, },
+ { 9, 1, 0, 2, 140, 63, },
+ { 10, 1, 0, 2, 140, 63, },
+ { 11, 1, 0, 2, 140, 32, },
+ { 0, 1, 0, 2, 144, 27, },
+ { 2, 1, 0, 2, 144, 63, },
{ 1, 1, 0, 2, 144, 63, },
{ 3, 1, 0, 2, 144, 27, },
- { 4, 1, 0, 2, 144, 63, },
+ { 4, 1, 0, 2, 144, 33, },
{ 5, 1, 0, 2, 144, 63, },
- { 6, 1, 0, 2, 144, 30, },
+ { 6, 1, 0, 2, 144, 33, },
{ 7, 1, 0, 2, 144, 63, },
+ { 8, 1, 0, 2, 144, 27, },
+ { 9, 1, 0, 2, 144, 63, },
+ { 10, 1, 0, 2, 144, 63, },
+ { 11, 1, 0, 2, 144, 31, },
+ { 0, 1, 0, 2, 149, 33, },
+ { 2, 1, 0, 2, 149, 14, },
{ 1, 1, 0, 2, 149, 63, },
{ 3, 1, 0, 2, 149, 33, },
{ 4, 1, 0, 2, 149, 33, },
{ 5, 1, 0, 2, 149, 33, },
- { 6, 1, 0, 2, 149, 30, },
+ { 6, 1, 0, 2, 149, 33, },
{ 7, 1, 0, 2, 149, 27, },
+ { 8, 1, 0, 2, 149, 33, },
+ { 9, 1, 0, 2, 149, 31, },
+ { 10, 1, 0, 2, 149, 14, },
+ { 11, 1, 0, 2, 149, 31, },
+ { 0, 1, 0, 2, 153, 33, },
+ { 2, 1, 0, 2, 153, 14, },
{ 1, 1, 0, 2, 153, 63, },
{ 3, 1, 0, 2, 153, 33, },
{ 4, 1, 0, 2, 153, 33, },
{ 5, 1, 0, 2, 153, 33, },
- { 6, 1, 0, 2, 153, 30, },
+ { 6, 1, 0, 2, 153, 33, },
{ 7, 1, 0, 2, 153, 27, },
+ { 8, 1, 0, 2, 153, 33, },
+ { 9, 1, 0, 2, 153, 31, },
+ { 10, 1, 0, 2, 153, 14, },
+ { 11, 1, 0, 2, 153, 31, },
+ { 0, 1, 0, 2, 157, 33, },
+ { 2, 1, 0, 2, 157, 14, },
{ 1, 1, 0, 2, 157, 63, },
{ 3, 1, 0, 2, 157, 33, },
{ 4, 1, 0, 2, 157, 33, },
{ 5, 1, 0, 2, 157, 33, },
- { 6, 1, 0, 2, 157, 30, },
+ { 6, 1, 0, 2, 157, 33, },
{ 7, 1, 0, 2, 157, 27, },
+ { 8, 1, 0, 2, 157, 33, },
+ { 9, 1, 0, 2, 157, 31, },
+ { 10, 1, 0, 2, 157, 14, },
+ { 11, 1, 0, 2, 157, 31, },
+ { 0, 1, 0, 2, 161, 33, },
+ { 2, 1, 0, 2, 161, 14, },
{ 1, 1, 0, 2, 161, 63, },
{ 3, 1, 0, 2, 161, 33, },
{ 4, 1, 0, 2, 161, 31, },
{ 5, 1, 0, 2, 161, 33, },
- { 6, 1, 0, 2, 161, 30, },
+ { 6, 1, 0, 2, 161, 33, },
{ 7, 1, 0, 2, 161, 27, },
+ { 8, 1, 0, 2, 161, 33, },
+ { 9, 1, 0, 2, 161, 31, },
+ { 10, 1, 0, 2, 161, 14, },
+ { 11, 1, 0, 2, 161, 31, },
+ { 0, 1, 0, 2, 165, 33, },
+ { 2, 1, 0, 2, 165, 14, },
{ 1, 1, 0, 2, 165, 63, },
{ 3, 1, 0, 2, 165, 33, },
- { 4, 1, 0, 2, 165, 63, },
+ { 4, 1, 0, 2, 165, 33, },
{ 5, 1, 0, 2, 165, 33, },
- { 6, 1, 0, 2, 165, 30, },
+ { 6, 1, 0, 2, 165, 33, },
{ 7, 1, 0, 2, 165, 27, },
+ { 8, 1, 0, 2, 165, 30, },
+ { 9, 1, 0, 2, 165, 31, },
+ { 10, 1, 0, 2, 165, 14, },
+ { 11, 1, 0, 2, 165, 31, },
+ { 0, 1, 1, 2, 38, 22, },
+ { 2, 1, 1, 2, 38, 32, },
{ 1, 1, 1, 2, 38, 32, },
{ 3, 1, 1, 2, 38, 22, },
{ 4, 1, 1, 2, 38, 26, },
{ 5, 1, 1, 2, 38, 32, },
{ 6, 1, 1, 2, 38, 22, },
{ 7, 1, 1, 2, 38, 27, },
+ { 8, 1, 1, 2, 38, 22, },
+ { 9, 1, 1, 2, 38, 29, },
+ { 10, 1, 1, 2, 38, 63, },
+ { 11, 1, 1, 2, 38, 32, },
+ { 0, 1, 1, 2, 46, 32, },
+ { 2, 1, 1, 2, 46, 32, },
{ 1, 1, 1, 2, 46, 32, },
{ 3, 1, 1, 2, 46, 32, },
{ 4, 1, 1, 2, 46, 28, },
{ 5, 1, 1, 2, 46, 32, },
- { 6, 1, 1, 2, 46, 30, },
+ { 6, 1, 1, 2, 46, 32, },
{ 7, 1, 1, 2, 46, 27, },
+ { 8, 1, 1, 2, 46, 31, },
+ { 9, 1, 1, 2, 46, 29, },
+ { 10, 1, 1, 2, 46, 63, },
+ { 11, 1, 1, 2, 46, 32, },
+ { 0, 1, 1, 2, 54, 32, },
+ { 2, 1, 1, 2, 54, 32, },
{ 1, 1, 1, 2, 54, 32, },
{ 3, 1, 1, 2, 54, 32, },
{ 4, 1, 1, 2, 54, 22, },
{ 5, 1, 1, 2, 54, 32, },
- { 6, 1, 1, 2, 54, 30, },
+ { 6, 1, 1, 2, 54, 32, },
{ 7, 1, 1, 2, 54, 27, },
+ { 8, 1, 1, 2, 54, 32, },
+ { 9, 1, 1, 2, 54, 28, },
+ { 10, 1, 1, 2, 54, 63, },
+ { 11, 1, 1, 2, 54, 32, },
+ { 0, 1, 1, 2, 62, 23, },
+ { 2, 1, 1, 2, 62, 32, },
{ 1, 1, 1, 2, 62, 32, },
{ 3, 1, 1, 2, 62, 23, },
{ 4, 1, 1, 2, 62, 31, },
{ 5, 1, 1, 2, 62, 32, },
{ 6, 1, 1, 2, 62, 23, },
{ 7, 1, 1, 2, 62, 27, },
+ { 8, 1, 1, 2, 62, 23, },
+ { 9, 1, 1, 2, 62, 28, },
+ { 10, 1, 1, 2, 62, 63, },
+ { 11, 1, 1, 2, 62, 32, },
+ { 0, 1, 1, 2, 102, 21, },
+ { 2, 1, 1, 2, 102, 32, },
{ 1, 1, 1, 2, 102, 32, },
{ 3, 1, 1, 2, 102, 21, },
{ 4, 1, 1, 2, 102, 31, },
{ 5, 1, 1, 2, 102, 32, },
- { 6, 1, 1, 2, 102, 30, },
+ { 6, 1, 1, 2, 102, 32, },
{ 7, 1, 1, 2, 102, 27, },
+ { 8, 1, 1, 2, 102, 21, },
+ { 9, 1, 1, 2, 102, 63, },
+ { 10, 1, 1, 2, 102, 63, },
+ { 11, 1, 1, 2, 102, 32, },
+ { 0, 1, 1, 2, 110, 32, },
+ { 2, 1, 1, 2, 110, 32, },
{ 1, 1, 1, 2, 110, 32, },
{ 3, 1, 1, 2, 110, 32, },
{ 4, 1, 1, 2, 110, 32, },
{ 5, 1, 1, 2, 110, 32, },
- { 6, 1, 1, 2, 110, 30, },
+ { 6, 1, 1, 2, 110, 32, },
{ 7, 1, 1, 2, 110, 27, },
+ { 8, 1, 1, 2, 110, 32, },
+ { 9, 1, 1, 2, 110, 63, },
+ { 10, 1, 1, 2, 110, 63, },
+ { 11, 1, 1, 2, 110, 32, },
+ { 0, 1, 1, 2, 118, 32, },
+ { 2, 1, 1, 2, 118, 32, },
{ 1, 1, 1, 2, 118, 32, },
{ 3, 1, 1, 2, 118, 63, },
{ 4, 1, 1, 2, 118, 32, },
{ 5, 1, 1, 2, 118, 63, },
- { 6, 1, 1, 2, 118, 30, },
+ { 6, 1, 1, 2, 118, 32, },
{ 7, 1, 1, 2, 118, 27, },
+ { 8, 1, 1, 2, 118, 32, },
+ { 9, 1, 1, 2, 118, 63, },
+ { 10, 1, 1, 2, 118, 63, },
+ { 11, 1, 1, 2, 118, 32, },
+ { 0, 1, 1, 2, 126, 32, },
+ { 2, 1, 1, 2, 126, 32, },
{ 1, 1, 1, 2, 126, 32, },
{ 3, 1, 1, 2, 126, 63, },
- { 4, 1, 1, 2, 126, 63, },
+ { 4, 1, 1, 2, 126, 32, },
{ 5, 1, 1, 2, 126, 63, },
- { 6, 1, 1, 2, 126, 30, },
+ { 6, 1, 1, 2, 126, 32, },
{ 7, 1, 1, 2, 126, 27, },
+ { 8, 1, 1, 2, 126, 32, },
+ { 9, 1, 1, 2, 126, 63, },
+ { 10, 1, 1, 2, 126, 63, },
+ { 11, 1, 1, 2, 126, 32, },
+ { 0, 1, 1, 2, 134, 32, },
+ { 2, 1, 1, 2, 134, 32, },
{ 1, 1, 1, 2, 134, 32, },
{ 3, 1, 1, 2, 134, 32, },
- { 4, 1, 1, 2, 134, 63, },
+ { 4, 1, 1, 2, 134, 32, },
{ 5, 1, 1, 2, 134, 32, },
- { 6, 1, 1, 2, 134, 30, },
- { 7, 1, 1, 2, 134, 63, },
+ { 6, 1, 1, 2, 134, 32, },
+ { 7, 1, 1, 2, 134, 27, },
+ { 8, 1, 1, 2, 134, 32, },
+ { 9, 1, 1, 2, 134, 63, },
+ { 10, 1, 1, 2, 134, 63, },
+ { 11, 1, 1, 2, 134, 32, },
+ { 0, 1, 1, 2, 142, 29, },
+ { 2, 1, 1, 2, 142, 63, },
{ 1, 1, 1, 2, 142, 63, },
{ 3, 1, 1, 2, 142, 29, },
- { 4, 1, 1, 2, 142, 63, },
+ { 4, 1, 1, 2, 142, 32, },
{ 5, 1, 1, 2, 142, 63, },
- { 6, 1, 1, 2, 142, 30, },
+ { 6, 1, 1, 2, 142, 32, },
{ 7, 1, 1, 2, 142, 63, },
+ { 8, 1, 1, 2, 142, 29, },
+ { 9, 1, 1, 2, 142, 63, },
+ { 10, 1, 1, 2, 142, 63, },
+ { 11, 1, 1, 2, 142, 31, },
+ { 0, 1, 1, 2, 151, 32, },
+ { 2, 1, 1, 2, 151, 14, },
{ 1, 1, 1, 2, 151, 63, },
{ 3, 1, 1, 2, 151, 32, },
{ 4, 1, 1, 2, 151, 27, },
{ 5, 1, 1, 2, 151, 32, },
- { 6, 1, 1, 2, 151, 30, },
+ { 6, 1, 1, 2, 151, 32, },
{ 7, 1, 1, 2, 151, 27, },
+ { 8, 1, 1, 2, 151, 32, },
+ { 9, 1, 1, 2, 151, 27, },
+ { 10, 1, 1, 2, 151, 14, },
+ { 11, 1, 1, 2, 151, 30, },
+ { 0, 1, 1, 2, 159, 32, },
+ { 2, 1, 1, 2, 159, 14, },
{ 1, 1, 1, 2, 159, 63, },
{ 3, 1, 1, 2, 159, 32, },
{ 4, 1, 1, 2, 159, 26, },
{ 5, 1, 1, 2, 159, 32, },
- { 6, 1, 1, 2, 159, 30, },
+ { 6, 1, 1, 2, 159, 32, },
{ 7, 1, 1, 2, 159, 27, },
+ { 8, 1, 1, 2, 159, 32, },
+ { 9, 1, 1, 2, 159, 31, },
+ { 10, 1, 1, 2, 159, 14, },
+ { 11, 1, 1, 2, 159, 30, },
+ { 0, 1, 2, 4, 42, 19, },
+ { 2, 1, 2, 4, 42, 32, },
{ 1, 1, 2, 4, 42, 28, },
{ 3, 1, 2, 4, 42, 19, },
{ 4, 1, 2, 4, 42, 25, },
{ 5, 1, 2, 4, 42, 32, },
{ 6, 1, 2, 4, 42, 19, },
{ 7, 1, 2, 4, 42, 27, },
+ { 8, 1, 2, 4, 42, 19, },
+ { 9, 1, 2, 4, 42, 25, },
+ { 10, 1, 2, 4, 42, 63, },
+ { 11, 1, 2, 4, 42, 32, },
+ { 0, 1, 2, 4, 58, 22, },
+ { 2, 1, 2, 4, 58, 32, },
{ 1, 1, 2, 4, 58, 28, },
{ 3, 1, 2, 4, 58, 22, },
{ 4, 1, 2, 4, 58, 28, },
{ 5, 1, 2, 4, 58, 32, },
{ 6, 1, 2, 4, 58, 22, },
{ 7, 1, 2, 4, 58, 27, },
+ { 8, 1, 2, 4, 58, 22, },
+ { 9, 1, 2, 4, 58, 23, },
+ { 10, 1, 2, 4, 58, 63, },
+ { 11, 1, 2, 4, 58, 32, },
+ { 0, 1, 2, 4, 106, 18, },
+ { 2, 1, 2, 4, 106, 32, },
{ 1, 1, 2, 4, 106, 32, },
{ 3, 1, 2, 4, 106, 18, },
{ 4, 1, 2, 4, 106, 30, },
{ 5, 1, 2, 4, 106, 32, },
- { 6, 1, 2, 4, 106, 30, },
+ { 6, 1, 2, 4, 106, 32, },
{ 7, 1, 2, 4, 106, 27, },
+ { 8, 1, 2, 4, 106, 18, },
+ { 9, 1, 2, 4, 106, 63, },
+ { 10, 1, 2, 4, 106, 63, },
+ { 11, 1, 2, 4, 106, 32, },
+ { 0, 1, 2, 4, 122, 32, },
+ { 2, 1, 2, 4, 122, 32, },
{ 1, 1, 2, 4, 122, 32, },
{ 3, 1, 2, 4, 122, 63, },
{ 4, 1, 2, 4, 122, 26, },
{ 5, 1, 2, 4, 122, 63, },
- { 6, 1, 2, 4, 122, 30, },
+ { 6, 1, 2, 4, 122, 32, },
{ 7, 1, 2, 4, 122, 27, },
+ { 8, 1, 2, 4, 122, 32, },
+ { 9, 1, 2, 4, 122, 63, },
+ { 10, 1, 2, 4, 122, 63, },
+ { 11, 1, 2, 4, 122, 32, },
+ { 0, 1, 2, 4, 138, 28, },
+ { 2, 1, 2, 4, 138, 63, },
{ 1, 1, 2, 4, 138, 63, },
{ 3, 1, 2, 4, 138, 28, },
- { 4, 1, 2, 4, 138, 63, },
+ { 4, 1, 2, 4, 138, 32, },
{ 5, 1, 2, 4, 138, 63, },
- { 6, 1, 2, 4, 138, 30, },
+ { 6, 1, 2, 4, 138, 32, },
{ 7, 1, 2, 4, 138, 63, },
+ { 8, 1, 2, 4, 138, 28, },
+ { 9, 1, 2, 4, 138, 63, },
+ { 10, 1, 2, 4, 138, 63, },
+ { 11, 1, 2, 4, 138, 30, },
+ { 0, 1, 2, 4, 155, 32, },
+ { 2, 1, 2, 4, 155, 14, },
{ 1, 1, 2, 4, 155, 63, },
{ 3, 1, 2, 4, 155, 32, },
{ 4, 1, 2, 4, 155, 27, },
{ 5, 1, 2, 4, 155, 32, },
- { 6, 1, 2, 4, 155, 30, },
+ { 6, 1, 2, 4, 155, 32, },
{ 7, 1, 2, 4, 155, 27, },
+ { 8, 1, 2, 4, 155, 32, },
+ { 9, 1, 2, 4, 155, 20, },
+ { 10, 1, 2, 4, 155, 14, },
+ { 11, 1, 2, 4, 155, 30, },
};
RTW_DECL_TABLE_TXPWR_LMT(rtw8821c_txpwr_lmt_type0);
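
For orientation before the next file's hunks: each row in the table that just closed is one (regulatory-domain, band, bandwidth, rate-section, channel, limit) tuple of struct rtw_txpwr_lmt_cfg_pair. In this diff, rows with a band value of 0 carry 2.4 GHz channels 1-14, rows with 1 carry 5 GHz channels 36-165, and the added lines extend coverage to additional regulatory-domain indices (for instance 10 and 11). The standalone sketch below decodes two rows copied verbatim from the table above; the member names mirror what the rtw88 driver uses, but they are an assumption here rather than something this diff shows, as is the reading of 63 as an "unrestricted" sentinel.

/*
 * Standalone sketch, not kernel code: decodes two rows copied verbatim
 * from the rtw8821c table above.  Member names and the meaning of the
 * sentinel value are assumptions, not taken from this diff.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint8_t u8;   /* stand-ins for the kernel's u8/s8 types */
typedef int8_t  s8;

struct rtw_txpwr_lmt_cfg_pair {
	u8 regd;        /* regulatory-domain index; the added rows cover extra indices */
	u8 band;        /* 0: 2.4 GHz rows (ch 1-14), 1: 5 GHz rows (ch 36-165) */
	u8 bw;          /* bandwidth index */
	u8 rs;          /* rate section */
	u8 ch;          /* channel number */
	s8 txpwr_lmt;   /* limit value; 63 appears to mark "no specific limit" here */
};

/* Two rows copied from the table above. */
static const struct rtw_txpwr_lmt_cfg_pair sample[] = {
	{ 10, 1, 0, 2, 165, 14, },
	{ 11, 1, 0, 2, 165, 31, },
};

int main(void)
{
	for (size_t i = 0; i < sizeof(sample) / sizeof(sample[0]); i++)
		printf("regd=%d band=%d bw=%d rs=%d ch=%d limit=%d\n",
		       sample[i].regd, sample[i].band, sample[i].bw,
		       sample[i].rs, sample[i].ch, sample[i].txpwr_lmt);
	return 0;
}
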
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c b/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c
index f9e3d0779c59..5699846a399b 100644
--- a/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c
+++ b/drivers/net/wireless/realtek/rtw88/rtw8822c_table.c
@@ -39832,6 +39832,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 1, 60, },
{ 8, 0, 0, 0, 1, 72, },
{ 9, 0, 0, 0, 1, 60, },
+ { 10, 0, 0, 0, 1, 60, },
+ { 11, 0, 0, 0, 1, 60, },
{ 0, 0, 0, 0, 2, 72, },
{ 2, 0, 0, 0, 2, 60, },
{ 1, 0, 0, 0, 2, 68, },
@@ -39842,6 +39844,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 2, 60, },
{ 8, 0, 0, 0, 2, 72, },
{ 9, 0, 0, 0, 2, 60, },
+ { 10, 0, 0, 0, 2, 60, },
+ { 11, 0, 0, 0, 2, 60, },
{ 0, 0, 0, 0, 3, 76, },
{ 2, 0, 0, 0, 3, 60, },
{ 1, 0, 0, 0, 3, 68, },
@@ -39852,6 +39856,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 3, 60, },
{ 8, 0, 0, 0, 3, 76, },
{ 9, 0, 0, 0, 3, 60, },
+ { 10, 0, 0, 0, 3, 60, },
+ { 11, 0, 0, 0, 3, 60, },
{ 0, 0, 0, 0, 4, 76, },
{ 2, 0, 0, 0, 4, 60, },
{ 1, 0, 0, 0, 4, 68, },
@@ -39862,6 +39868,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 4, 60, },
{ 8, 0, 0, 0, 4, 76, },
{ 9, 0, 0, 0, 4, 60, },
+ { 10, 0, 0, 0, 4, 60, },
+ { 11, 0, 0, 0, 4, 60, },
{ 0, 0, 0, 0, 5, 76, },
{ 2, 0, 0, 0, 5, 60, },
{ 1, 0, 0, 0, 5, 68, },
@@ -39872,6 +39880,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 5, 60, },
{ 8, 0, 0, 0, 5, 76, },
{ 9, 0, 0, 0, 5, 60, },
+ { 10, 0, 0, 0, 5, 60, },
+ { 11, 0, 0, 0, 5, 60, },
{ 0, 0, 0, 0, 6, 76, },
{ 2, 0, 0, 0, 6, 60, },
{ 1, 0, 0, 0, 6, 68, },
@@ -39882,6 +39892,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 6, 60, },
{ 8, 0, 0, 0, 6, 76, },
{ 9, 0, 0, 0, 6, 60, },
+ { 10, 0, 0, 0, 6, 60, },
+ { 11, 0, 0, 0, 6, 60, },
{ 0, 0, 0, 0, 7, 76, },
{ 2, 0, 0, 0, 7, 60, },
{ 1, 0, 0, 0, 7, 68, },
@@ -39892,6 +39904,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 7, 60, },
{ 8, 0, 0, 0, 7, 76, },
{ 9, 0, 0, 0, 7, 60, },
+ { 10, 0, 0, 0, 7, 60, },
+ { 11, 0, 0, 0, 7, 60, },
{ 0, 0, 0, 0, 8, 76, },
{ 2, 0, 0, 0, 8, 60, },
{ 1, 0, 0, 0, 8, 68, },
@@ -39902,6 +39916,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 8, 60, },
{ 8, 0, 0, 0, 8, 76, },
{ 9, 0, 0, 0, 8, 60, },
+ { 10, 0, 0, 0, 8, 60, },
+ { 11, 0, 0, 0, 8, 60, },
{ 0, 0, 0, 0, 9, 76, },
{ 2, 0, 0, 0, 9, 60, },
{ 1, 0, 0, 0, 9, 68, },
@@ -39912,6 +39928,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 9, 60, },
{ 8, 0, 0, 0, 9, 76, },
{ 9, 0, 0, 0, 9, 60, },
+ { 10, 0, 0, 0, 9, 60, },
+ { 11, 0, 0, 0, 9, 60, },
{ 0, 0, 0, 0, 10, 72, },
{ 2, 0, 0, 0, 10, 60, },
{ 1, 0, 0, 0, 10, 68, },
@@ -39922,6 +39940,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 10, 60, },
{ 8, 0, 0, 0, 10, 72, },
{ 9, 0, 0, 0, 10, 60, },
+ { 10, 0, 0, 0, 10, 60, },
+ { 11, 0, 0, 0, 10, 60, },
{ 0, 0, 0, 0, 11, 72, },
{ 2, 0, 0, 0, 11, 60, },
{ 1, 0, 0, 0, 11, 68, },
@@ -39932,7 +39952,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 11, 60, },
{ 8, 0, 0, 0, 11, 72, },
{ 9, 0, 0, 0, 11, 60, },
- { 0, 0, 0, 0, 12, 44, },
+ { 10, 0, 0, 0, 11, 60, },
+ { 11, 0, 0, 0, 11, 60, },
+ { 0, 0, 0, 0, 12, 52, },
{ 2, 0, 0, 0, 12, 60, },
{ 1, 0, 0, 0, 12, 68, },
{ 3, 0, 0, 0, 12, 52, },
@@ -39942,7 +39964,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 12, 60, },
{ 8, 0, 0, 0, 12, 52, },
{ 9, 0, 0, 0, 12, 60, },
- { 0, 0, 0, 0, 13, 40, },
+ { 10, 0, 0, 0, 12, 60, },
+ { 11, 0, 0, 0, 12, 60, },
+ { 0, 0, 0, 0, 13, 48, },
{ 2, 0, 0, 0, 13, 60, },
{ 1, 0, 0, 0, 13, 68, },
{ 3, 0, 0, 0, 13, 48, },
@@ -39952,6 +39976,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 13, 60, },
{ 8, 0, 0, 0, 13, 48, },
{ 9, 0, 0, 0, 13, 60, },
+ { 10, 0, 0, 0, 13, 60, },
+ { 11, 0, 0, 0, 13, 60, },
{ 0, 0, 0, 0, 14, 127, },
{ 2, 0, 0, 0, 14, 127, },
{ 1, 0, 0, 0, 14, 68, },
@@ -39962,6 +39988,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 0, 14, 127, },
{ 8, 0, 0, 0, 14, 127, },
{ 9, 0, 0, 0, 14, 127, },
+ { 10, 0, 0, 0, 14, 127, },
+ { 11, 0, 0, 0, 14, 127, },
{ 0, 0, 0, 1, 1, 52, },
{ 2, 0, 0, 1, 1, 60, },
{ 1, 0, 0, 1, 1, 76, },
@@ -39972,6 +40000,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 1, 1, 60, },
{ 8, 0, 0, 1, 1, 52, },
{ 9, 0, 0, 1, 1, 60, },
+ { 10, 0, 0, 1, 1, 60, },
+ { 11, 0, 0, 1, 1, 60, },
{ 0, 0, 0, 1, 2, 60, },
{ 2, 0, 0, 1, 2, 60, },
{ 1, 0, 0, 1, 2, 76, },
@@ -39982,6 +40012,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 1, 2, 60, },
{ 8, 0, 0, 1, 2, 60, },
{ 9, 0, 0, 1, 2, 60, },
+ { 10, 0, 0, 1, 2, 60, },
+ { 11, 0, 0, 1, 2, 60, },
{ 0, 0, 0, 1, 3, 64, },
{ 2, 0, 0, 1, 3, 60, },
{ 1, 0, 0, 1, 3, 76, },
@@ -39992,6 +40024,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 1, 3, 60, },
{ 8, 0, 0, 1, 3, 64, },
{ 9, 0, 0, 1, 3, 60, },
+ { 10, 0, 0, 1, 3, 60, },
+ { 11, 0, 0, 1, 3, 60, },
{ 0, 0, 0, 1, 4, 68, },
{ 2, 0, 0, 1, 4, 60, },
{ 1, 0, 0, 1, 4, 76, },
@@ -40002,6 +40036,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 1, 4, 60, },
{ 8, 0, 0, 1, 4, 68, },
{ 9, 0, 0, 1, 4, 60, },
+ { 10, 0, 0, 1, 4, 60, },
+ { 11, 0, 0, 1, 4, 60, },
{ 0, 0, 0, 1, 5, 76, },
{ 2, 0, 0, 1, 5, 60, },
{ 1, 0, 0, 1, 5, 76, },
@@ -40012,6 +40048,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 1, 5, 60, },
{ 8, 0, 0, 1, 5, 76, },
{ 9, 0, 0, 1, 5, 60, },
+ { 10, 0, 0, 1, 5, 60, },
+ { 11, 0, 0, 1, 5, 60, },
{ 0, 0, 0, 1, 6, 76, },
{ 2, 0, 0, 1, 6, 60, },
{ 1, 0, 0, 1, 6, 76, },
@@ -40022,6 +40060,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 1, 6, 60, },
{ 8, 0, 0, 1, 6, 76, },
{ 9, 0, 0, 1, 6, 60, },
+ { 10, 0, 0, 1, 6, 60, },
+ { 11, 0, 0, 1, 6, 60, },
{ 0, 0, 0, 1, 7, 76, },
{ 2, 0, 0, 1, 7, 60, },
{ 1, 0, 0, 1, 7, 76, },
@@ -40032,6 +40072,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 1, 7, 60, },
{ 8, 0, 0, 1, 7, 76, },
{ 9, 0, 0, 1, 7, 60, },
+ { 10, 0, 0, 1, 7, 60, },
+ { 11, 0, 0, 1, 7, 60, },
{ 0, 0, 0, 1, 8, 68, },
{ 2, 0, 0, 1, 8, 60, },
{ 1, 0, 0, 1, 8, 76, },
@@ -40042,6 +40084,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 1, 8, 60, },
{ 8, 0, 0, 1, 8, 68, },
{ 9, 0, 0, 1, 8, 60, },
+ { 10, 0, 0, 1, 8, 60, },
+ { 11, 0, 0, 1, 8, 60, },
{ 0, 0, 0, 1, 9, 64, },
{ 2, 0, 0, 1, 9, 60, },
{ 1, 0, 0, 1, 9, 76, },
@@ -40052,6 +40096,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 1, 9, 60, },
{ 8, 0, 0, 1, 9, 64, },
{ 9, 0, 0, 1, 9, 60, },
+ { 10, 0, 0, 1, 9, 60, },
+ { 11, 0, 0, 1, 9, 60, },
{ 0, 0, 0, 1, 10, 60, },
{ 2, 0, 0, 1, 10, 60, },
{ 1, 0, 0, 1, 10, 76, },
@@ -40062,6 +40108,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 1, 10, 60, },
{ 8, 0, 0, 1, 10, 60, },
{ 9, 0, 0, 1, 10, 60, },
+ { 10, 0, 0, 1, 10, 60, },
+ { 11, 0, 0, 1, 10, 60, },
{ 0, 0, 0, 1, 11, 52, },
{ 2, 0, 0, 1, 11, 60, },
{ 1, 0, 0, 1, 11, 76, },
@@ -40071,8 +40119,10 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 0, 0, 1, 11, 52, },
{ 7, 0, 0, 1, 11, 60, },
{ 8, 0, 0, 1, 11, 52, },
- { 9, 0, 0, 1, 11, 60, },
- { 0, 0, 0, 1, 12, 32, },
+ { 9, 0, 0, 1, 11, 44, },
+ { 10, 0, 0, 1, 11, 60, },
+ { 11, 0, 0, 1, 11, 60, },
+ { 0, 0, 0, 1, 12, 40, },
{ 2, 0, 0, 1, 12, 60, },
{ 1, 0, 0, 1, 12, 76, },
{ 3, 0, 0, 1, 12, 40, },
@@ -40081,8 +40131,10 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 0, 0, 1, 12, 40, },
{ 7, 0, 0, 1, 12, 60, },
{ 8, 0, 0, 1, 12, 40, },
- { 9, 0, 0, 1, 12, 60, },
- { 0, 0, 0, 1, 13, 20, },
+ { 9, 0, 0, 1, 12, 44, },
+ { 10, 0, 0, 1, 12, 60, },
+ { 11, 0, 0, 1, 12, 60, },
+ { 0, 0, 0, 1, 13, 28, },
{ 2, 0, 0, 1, 13, 60, },
{ 1, 0, 0, 1, 13, 76, },
{ 3, 0, 0, 1, 13, 28, },
@@ -40091,7 +40143,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 0, 0, 1, 13, 28, },
{ 7, 0, 0, 1, 13, 60, },
{ 8, 0, 0, 1, 13, 28, },
- { 9, 0, 0, 1, 13, 60, },
+ { 9, 0, 0, 1, 13, 36, },
+ { 10, 0, 0, 1, 13, 60, },
+ { 11, 0, 0, 1, 13, 60, },
{ 0, 0, 0, 1, 14, 127, },
{ 2, 0, 0, 1, 14, 127, },
{ 1, 0, 0, 1, 14, 127, },
@@ -40102,6 +40156,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 1, 14, 127, },
{ 8, 0, 0, 1, 14, 127, },
{ 9, 0, 0, 1, 14, 127, },
+ { 10, 0, 0, 1, 14, 127, },
+ { 11, 0, 0, 1, 14, 127, },
{ 0, 0, 0, 2, 1, 52, },
{ 2, 0, 0, 2, 1, 60, },
{ 1, 0, 0, 2, 1, 76, },
@@ -40112,6 +40168,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 2, 1, 60, },
{ 8, 0, 0, 2, 1, 52, },
{ 9, 0, 0, 2, 1, 60, },
+ { 10, 0, 0, 2, 1, 60, },
+ { 11, 0, 0, 2, 1, 60, },
{ 0, 0, 0, 2, 2, 60, },
{ 2, 0, 0, 2, 2, 60, },
{ 1, 0, 0, 2, 2, 76, },
@@ -40122,6 +40180,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 2, 2, 60, },
{ 8, 0, 0, 2, 2, 60, },
{ 9, 0, 0, 2, 2, 60, },
+ { 10, 0, 0, 2, 2, 60, },
+ { 11, 0, 0, 2, 2, 60, },
{ 0, 0, 0, 2, 3, 64, },
{ 2, 0, 0, 2, 3, 60, },
{ 1, 0, 0, 2, 3, 76, },
@@ -40132,6 +40192,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 2, 3, 60, },
{ 8, 0, 0, 2, 3, 64, },
{ 9, 0, 0, 2, 3, 60, },
+ { 10, 0, 0, 2, 3, 60, },
+ { 11, 0, 0, 2, 3, 60, },
{ 0, 0, 0, 2, 4, 68, },
{ 2, 0, 0, 2, 4, 60, },
{ 1, 0, 0, 2, 4, 76, },
@@ -40142,6 +40204,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 2, 4, 60, },
{ 8, 0, 0, 2, 4, 68, },
{ 9, 0, 0, 2, 4, 60, },
+ { 10, 0, 0, 2, 4, 60, },
+ { 11, 0, 0, 2, 4, 60, },
{ 0, 0, 0, 2, 5, 76, },
{ 2, 0, 0, 2, 5, 60, },
{ 1, 0, 0, 2, 5, 76, },
@@ -40152,6 +40216,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 2, 5, 60, },
{ 8, 0, 0, 2, 5, 76, },
{ 9, 0, 0, 2, 5, 60, },
+ { 10, 0, 0, 2, 5, 60, },
+ { 11, 0, 0, 2, 5, 60, },
{ 0, 0, 0, 2, 6, 76, },
{ 2, 0, 0, 2, 6, 60, },
{ 1, 0, 0, 2, 6, 76, },
@@ -40162,6 +40228,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 2, 6, 60, },
{ 8, 0, 0, 2, 6, 76, },
{ 9, 0, 0, 2, 6, 60, },
+ { 10, 0, 0, 2, 6, 60, },
+ { 11, 0, 0, 2, 6, 60, },
{ 0, 0, 0, 2, 7, 76, },
{ 2, 0, 0, 2, 7, 60, },
{ 1, 0, 0, 2, 7, 76, },
@@ -40172,6 +40240,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 2, 7, 60, },
{ 8, 0, 0, 2, 7, 76, },
{ 9, 0, 0, 2, 7, 60, },
+ { 10, 0, 0, 2, 7, 60, },
+ { 11, 0, 0, 2, 7, 60, },
{ 0, 0, 0, 2, 8, 68, },
{ 2, 0, 0, 2, 8, 60, },
{ 1, 0, 0, 2, 8, 76, },
@@ -40182,6 +40252,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 2, 8, 60, },
{ 8, 0, 0, 2, 8, 68, },
{ 9, 0, 0, 2, 8, 60, },
+ { 10, 0, 0, 2, 8, 60, },
+ { 11, 0, 0, 2, 8, 60, },
{ 0, 0, 0, 2, 9, 64, },
{ 2, 0, 0, 2, 9, 60, },
{ 1, 0, 0, 2, 9, 76, },
@@ -40192,6 +40264,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 2, 9, 60, },
{ 8, 0, 0, 2, 9, 64, },
{ 9, 0, 0, 2, 9, 60, },
+ { 10, 0, 0, 2, 9, 60, },
+ { 11, 0, 0, 2, 9, 60, },
{ 0, 0, 0, 2, 10, 60, },
{ 2, 0, 0, 2, 10, 60, },
{ 1, 0, 0, 2, 10, 76, },
@@ -40202,6 +40276,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 2, 10, 60, },
{ 8, 0, 0, 2, 10, 60, },
{ 9, 0, 0, 2, 10, 60, },
+ { 10, 0, 0, 2, 10, 60, },
+ { 11, 0, 0, 2, 10, 60, },
{ 0, 0, 0, 2, 11, 52, },
{ 2, 0, 0, 2, 11, 60, },
{ 1, 0, 0, 2, 11, 76, },
@@ -40211,8 +40287,10 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 0, 0, 2, 11, 52, },
{ 7, 0, 0, 2, 11, 60, },
{ 8, 0, 0, 2, 11, 52, },
- { 9, 0, 0, 2, 11, 60, },
- { 0, 0, 0, 2, 12, 32, },
+ { 9, 0, 0, 2, 11, 46, },
+ { 10, 0, 0, 2, 11, 60, },
+ { 11, 0, 0, 2, 11, 60, },
+ { 0, 0, 0, 2, 12, 40, },
{ 2, 0, 0, 2, 12, 60, },
{ 1, 0, 0, 2, 12, 76, },
{ 3, 0, 0, 2, 12, 40, },
@@ -40221,8 +40299,10 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 0, 0, 2, 12, 40, },
{ 7, 0, 0, 2, 12, 60, },
{ 8, 0, 0, 2, 12, 40, },
- { 9, 0, 0, 2, 12, 60, },
- { 0, 0, 0, 2, 13, 20, },
+ { 9, 0, 0, 2, 12, 42, },
+ { 10, 0, 0, 2, 12, 60, },
+ { 11, 0, 0, 2, 12, 60, },
+ { 0, 0, 0, 2, 13, 28, },
{ 2, 0, 0, 2, 13, 60, },
{ 1, 0, 0, 2, 13, 76, },
{ 3, 0, 0, 2, 13, 28, },
@@ -40231,7 +40311,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 0, 0, 2, 13, 28, },
{ 7, 0, 0, 2, 13, 60, },
{ 8, 0, 0, 2, 13, 28, },
- { 9, 0, 0, 2, 13, 60, },
+ { 9, 0, 0, 2, 13, 34, },
+ { 10, 0, 0, 2, 13, 60, },
+ { 11, 0, 0, 2, 13, 60, },
{ 0, 0, 0, 2, 14, 127, },
{ 2, 0, 0, 2, 14, 127, },
{ 1, 0, 0, 2, 14, 127, },
@@ -40242,6 +40324,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 2, 14, 127, },
{ 8, 0, 0, 2, 14, 127, },
{ 9, 0, 0, 2, 14, 127, },
+ { 10, 0, 0, 2, 14, 127, },
+ { 11, 0, 0, 2, 14, 127, },
{ 0, 0, 0, 3, 1, 52, },
{ 2, 0, 0, 3, 1, 36, },
{ 1, 0, 0, 3, 1, 66, },
@@ -40252,6 +40336,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 1, 36, },
{ 8, 0, 0, 3, 1, 52, },
{ 9, 0, 0, 3, 1, 36, },
+ { 10, 0, 0, 3, 1, 36, },
+ { 11, 0, 0, 3, 1, 36, },
{ 0, 0, 0, 3, 2, 60, },
{ 2, 0, 0, 3, 2, 36, },
{ 1, 0, 0, 3, 2, 66, },
@@ -40262,6 +40348,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 2, 36, },
{ 8, 0, 0, 3, 2, 60, },
{ 9, 0, 0, 3, 2, 36, },
+ { 10, 0, 0, 3, 2, 36, },
+ { 11, 0, 0, 3, 2, 36, },
{ 0, 0, 0, 3, 3, 64, },
{ 2, 0, 0, 3, 3, 36, },
{ 1, 0, 0, 3, 3, 66, },
@@ -40272,6 +40360,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 3, 36, },
{ 8, 0, 0, 3, 3, 64, },
{ 9, 0, 0, 3, 3, 36, },
+ { 10, 0, 0, 3, 3, 36, },
+ { 11, 0, 0, 3, 3, 36, },
{ 0, 0, 0, 3, 4, 68, },
{ 2, 0, 0, 3, 4, 36, },
{ 1, 0, 0, 3, 4, 66, },
@@ -40282,6 +40372,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 4, 36, },
{ 8, 0, 0, 3, 4, 68, },
{ 9, 0, 0, 3, 4, 36, },
+ { 10, 0, 0, 3, 4, 36, },
+ { 11, 0, 0, 3, 4, 36, },
{ 0, 0, 0, 3, 5, 76, },
{ 2, 0, 0, 3, 5, 36, },
{ 1, 0, 0, 3, 5, 66, },
@@ -40292,6 +40384,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 5, 36, },
{ 8, 0, 0, 3, 5, 76, },
{ 9, 0, 0, 3, 5, 36, },
+ { 10, 0, 0, 3, 5, 36, },
+ { 11, 0, 0, 3, 5, 36, },
{ 0, 0, 0, 3, 6, 76, },
{ 2, 0, 0, 3, 6, 36, },
{ 1, 0, 0, 3, 6, 66, },
@@ -40302,6 +40396,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 6, 36, },
{ 8, 0, 0, 3, 6, 76, },
{ 9, 0, 0, 3, 6, 36, },
+ { 10, 0, 0, 3, 6, 36, },
+ { 11, 0, 0, 3, 6, 36, },
{ 0, 0, 0, 3, 7, 76, },
{ 2, 0, 0, 3, 7, 36, },
{ 1, 0, 0, 3, 7, 66, },
@@ -40312,6 +40408,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 7, 36, },
{ 8, 0, 0, 3, 7, 76, },
{ 9, 0, 0, 3, 7, 36, },
+ { 10, 0, 0, 3, 7, 36, },
+ { 11, 0, 0, 3, 7, 36, },
{ 0, 0, 0, 3, 8, 68, },
{ 2, 0, 0, 3, 8, 36, },
{ 1, 0, 0, 3, 8, 66, },
@@ -40322,6 +40420,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 8, 36, },
{ 8, 0, 0, 3, 8, 68, },
{ 9, 0, 0, 3, 8, 36, },
+ { 10, 0, 0, 3, 8, 36, },
+ { 11, 0, 0, 3, 8, 36, },
{ 0, 0, 0, 3, 9, 64, },
{ 2, 0, 0, 3, 9, 36, },
{ 1, 0, 0, 3, 9, 66, },
@@ -40332,6 +40432,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 9, 36, },
{ 8, 0, 0, 3, 9, 64, },
{ 9, 0, 0, 3, 9, 36, },
+ { 10, 0, 0, 3, 9, 36, },
+ { 11, 0, 0, 3, 9, 36, },
{ 0, 0, 0, 3, 10, 60, },
{ 2, 0, 0, 3, 10, 36, },
{ 1, 0, 0, 3, 10, 66, },
@@ -40342,6 +40444,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 10, 36, },
{ 8, 0, 0, 3, 10, 60, },
{ 9, 0, 0, 3, 10, 36, },
+ { 10, 0, 0, 3, 10, 36, },
+ { 11, 0, 0, 3, 10, 36, },
{ 0, 0, 0, 3, 11, 52, },
{ 2, 0, 0, 3, 11, 36, },
{ 1, 0, 0, 3, 11, 66, },
@@ -40352,7 +40456,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 11, 36, },
{ 8, 0, 0, 3, 11, 52, },
{ 9, 0, 0, 3, 11, 36, },
- { 0, 0, 0, 3, 12, 32, },
+ { 10, 0, 0, 3, 11, 36, },
+ { 11, 0, 0, 3, 11, 36, },
+ { 0, 0, 0, 3, 12, 40, },
{ 2, 0, 0, 3, 12, 36, },
{ 1, 0, 0, 3, 12, 66, },
{ 3, 0, 0, 3, 12, 40, },
@@ -40362,7 +40468,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 12, 36, },
{ 8, 0, 0, 3, 12, 40, },
{ 9, 0, 0, 3, 12, 36, },
- { 0, 0, 0, 3, 13, 20, },
+ { 10, 0, 0, 3, 12, 36, },
+ { 11, 0, 0, 3, 12, 36, },
+ { 0, 0, 0, 3, 13, 28, },
{ 2, 0, 0, 3, 13, 36, },
{ 1, 0, 0, 3, 13, 66, },
{ 3, 0, 0, 3, 13, 28, },
@@ -40371,7 +40479,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 0, 0, 3, 13, 28, },
{ 7, 0, 0, 3, 13, 36, },
{ 8, 0, 0, 3, 13, 28, },
- { 9, 0, 0, 3, 13, 36, },
+ { 9, 0, 0, 3, 13, 34, },
+ { 10, 0, 0, 3, 13, 36, },
+ { 11, 0, 0, 3, 13, 36, },
{ 0, 0, 0, 3, 14, 127, },
{ 2, 0, 0, 3, 14, 127, },
{ 1, 0, 0, 3, 14, 127, },
@@ -40382,6 +40492,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 0, 3, 14, 127, },
{ 8, 0, 0, 3, 14, 127, },
{ 9, 0, 0, 3, 14, 127, },
+ { 10, 0, 0, 3, 14, 127, },
+ { 11, 0, 0, 3, 14, 127, },
{ 0, 0, 1, 2, 1, 127, },
{ 2, 0, 1, 2, 1, 127, },
{ 1, 0, 1, 2, 1, 127, },
@@ -40392,6 +40504,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 2, 1, 127, },
{ 8, 0, 1, 2, 1, 127, },
{ 9, 0, 1, 2, 1, 127, },
+ { 10, 0, 1, 2, 1, 127, },
+ { 11, 0, 1, 2, 1, 127, },
{ 0, 0, 1, 2, 2, 127, },
{ 2, 0, 1, 2, 2, 127, },
{ 1, 0, 1, 2, 2, 127, },
@@ -40402,6 +40516,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 2, 2, 127, },
{ 8, 0, 1, 2, 2, 127, },
{ 9, 0, 1, 2, 2, 127, },
+ { 10, 0, 1, 2, 2, 127, },
+ { 11, 0, 1, 2, 2, 127, },
{ 0, 0, 1, 2, 3, 52, },
{ 2, 0, 1, 2, 3, 60, },
{ 1, 0, 1, 2, 3, 72, },
@@ -40412,6 +40528,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 2, 3, 60, },
{ 8, 0, 1, 2, 3, 52, },
{ 9, 0, 1, 2, 3, 60, },
+ { 10, 0, 1, 2, 3, 60, },
+ { 11, 0, 1, 2, 3, 60, },
{ 0, 0, 1, 2, 4, 52, },
{ 2, 0, 1, 2, 4, 60, },
{ 1, 0, 1, 2, 4, 72, },
@@ -40422,6 +40540,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 2, 4, 60, },
{ 8, 0, 1, 2, 4, 52, },
{ 9, 0, 1, 2, 4, 60, },
+ { 10, 0, 1, 2, 4, 60, },
+ { 11, 0, 1, 2, 4, 60, },
{ 0, 0, 1, 2, 5, 60, },
{ 2, 0, 1, 2, 5, 60, },
{ 1, 0, 1, 2, 5, 72, },
@@ -40432,6 +40552,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 2, 5, 60, },
{ 8, 0, 1, 2, 5, 60, },
{ 9, 0, 1, 2, 5, 60, },
+ { 10, 0, 1, 2, 5, 60, },
+ { 11, 0, 1, 2, 5, 60, },
{ 0, 0, 1, 2, 6, 64, },
{ 2, 0, 1, 2, 6, 60, },
{ 1, 0, 1, 2, 6, 72, },
@@ -40442,6 +40564,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 2, 6, 60, },
{ 8, 0, 1, 2, 6, 64, },
{ 9, 0, 1, 2, 6, 60, },
+ { 10, 0, 1, 2, 6, 60, },
+ { 11, 0, 1, 2, 6, 60, },
{ 0, 0, 1, 2, 7, 60, },
{ 2, 0, 1, 2, 7, 60, },
{ 1, 0, 1, 2, 7, 72, },
@@ -40452,6 +40576,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 2, 7, 60, },
{ 8, 0, 1, 2, 7, 60, },
{ 9, 0, 1, 2, 7, 60, },
+ { 10, 0, 1, 2, 7, 60, },
+ { 11, 0, 1, 2, 7, 60, },
{ 0, 0, 1, 2, 8, 52, },
{ 2, 0, 1, 2, 8, 60, },
{ 1, 0, 1, 2, 8, 72, },
@@ -40462,6 +40588,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 2, 8, 60, },
{ 8, 0, 1, 2, 8, 52, },
{ 9, 0, 1, 2, 8, 60, },
+ { 10, 0, 1, 2, 8, 60, },
+ { 11, 0, 1, 2, 8, 60, },
{ 0, 0, 1, 2, 9, 52, },
{ 2, 0, 1, 2, 9, 60, },
{ 1, 0, 1, 2, 9, 72, },
@@ -40471,7 +40599,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 0, 1, 2, 9, 52, },
{ 7, 0, 1, 2, 9, 60, },
{ 8, 0, 1, 2, 9, 52, },
- { 9, 0, 1, 2, 9, 60, },
+ { 9, 0, 1, 2, 9, 44, },
+ { 10, 0, 1, 2, 9, 60, },
+ { 11, 0, 1, 2, 9, 60, },
{ 0, 0, 1, 2, 10, 40, },
{ 2, 0, 1, 2, 10, 60, },
{ 1, 0, 1, 2, 10, 72, },
@@ -40481,7 +40611,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 0, 1, 2, 10, 40, },
{ 7, 0, 1, 2, 10, 60, },
{ 8, 0, 1, 2, 10, 40, },
- { 9, 0, 1, 2, 10, 60, },
+ { 9, 0, 1, 2, 10, 44, },
+ { 10, 0, 1, 2, 10, 60, },
+ { 11, 0, 1, 2, 10, 60, },
{ 0, 0, 1, 2, 11, 28, },
{ 2, 0, 1, 2, 11, 60, },
{ 1, 0, 1, 2, 11, 72, },
@@ -40491,7 +40623,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 0, 1, 2, 11, 28, },
{ 7, 0, 1, 2, 11, 60, },
{ 8, 0, 1, 2, 11, 28, },
- { 9, 0, 1, 2, 11, 60, },
+ { 9, 0, 1, 2, 11, 16, },
+ { 10, 0, 1, 2, 11, 60, },
+ { 11, 0, 1, 2, 11, 60, },
{ 0, 0, 1, 2, 12, 127, },
{ 2, 0, 1, 2, 12, 127, },
{ 1, 0, 1, 2, 12, 127, },
@@ -40502,6 +40636,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 2, 12, 127, },
{ 8, 0, 1, 2, 12, 127, },
{ 9, 0, 1, 2, 12, 127, },
+ { 10, 0, 1, 2, 12, 127, },
+ { 11, 0, 1, 2, 12, 127, },
{ 0, 0, 1, 2, 13, 127, },
{ 2, 0, 1, 2, 13, 127, },
{ 1, 0, 1, 2, 13, 127, },
@@ -40512,6 +40648,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 2, 13, 127, },
{ 8, 0, 1, 2, 13, 127, },
{ 9, 0, 1, 2, 13, 127, },
+ { 10, 0, 1, 2, 13, 127, },
+ { 11, 0, 1, 2, 13, 127, },
{ 0, 0, 1, 2, 14, 127, },
{ 2, 0, 1, 2, 14, 127, },
{ 1, 0, 1, 2, 14, 127, },
@@ -40522,6 +40660,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 2, 14, 127, },
{ 8, 0, 1, 2, 14, 127, },
{ 9, 0, 1, 2, 14, 127, },
+ { 10, 0, 1, 2, 14, 127, },
+ { 11, 0, 1, 2, 14, 127, },
{ 0, 0, 1, 3, 1, 127, },
{ 2, 0, 1, 3, 1, 127, },
{ 1, 0, 1, 3, 1, 127, },
@@ -40532,6 +40672,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 1, 127, },
{ 8, 0, 1, 3, 1, 127, },
{ 9, 0, 1, 3, 1, 127, },
+ { 10, 0, 1, 3, 1, 127, },
+ { 11, 0, 1, 3, 1, 127, },
{ 0, 0, 1, 3, 2, 127, },
{ 2, 0, 1, 3, 2, 127, },
{ 1, 0, 1, 3, 2, 127, },
@@ -40542,6 +40684,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 2, 127, },
{ 8, 0, 1, 3, 2, 127, },
{ 9, 0, 1, 3, 2, 127, },
+ { 10, 0, 1, 3, 2, 127, },
+ { 11, 0, 1, 3, 2, 127, },
{ 0, 0, 1, 3, 3, 48, },
{ 2, 0, 1, 3, 3, 36, },
{ 1, 0, 1, 3, 3, 66, },
@@ -40552,6 +40696,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 3, 36, },
{ 8, 0, 1, 3, 3, 48, },
{ 9, 0, 1, 3, 3, 36, },
+ { 10, 0, 1, 3, 3, 36, },
+ { 11, 0, 1, 3, 3, 36, },
{ 0, 0, 1, 3, 4, 48, },
{ 2, 0, 1, 3, 4, 36, },
{ 1, 0, 1, 3, 4, 66, },
@@ -40562,6 +40708,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 4, 36, },
{ 8, 0, 1, 3, 4, 48, },
{ 9, 0, 1, 3, 4, 36, },
+ { 10, 0, 1, 3, 4, 36, },
+ { 11, 0, 1, 3, 4, 36, },
{ 0, 0, 1, 3, 5, 60, },
{ 2, 0, 1, 3, 5, 36, },
{ 1, 0, 1, 3, 5, 66, },
@@ -40572,6 +40720,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 5, 36, },
{ 8, 0, 1, 3, 5, 60, },
{ 9, 0, 1, 3, 5, 36, },
+ { 10, 0, 1, 3, 5, 36, },
+ { 11, 0, 1, 3, 5, 36, },
{ 0, 0, 1, 3, 6, 64, },
{ 2, 0, 1, 3, 6, 36, },
{ 1, 0, 1, 3, 6, 66, },
@@ -40582,6 +40732,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 6, 36, },
{ 8, 0, 1, 3, 6, 64, },
{ 9, 0, 1, 3, 6, 36, },
+ { 10, 0, 1, 3, 6, 36, },
+ { 11, 0, 1, 3, 6, 36, },
{ 0, 0, 1, 3, 7, 60, },
{ 2, 0, 1, 3, 7, 36, },
{ 1, 0, 1, 3, 7, 66, },
@@ -40592,6 +40744,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 7, 36, },
{ 8, 0, 1, 3, 7, 60, },
{ 9, 0, 1, 3, 7, 36, },
+ { 10, 0, 1, 3, 7, 36, },
+ { 11, 0, 1, 3, 7, 36, },
{ 0, 0, 1, 3, 8, 52, },
{ 2, 0, 1, 3, 8, 36, },
{ 1, 0, 1, 3, 8, 66, },
@@ -40602,6 +40756,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 8, 36, },
{ 8, 0, 1, 3, 8, 52, },
{ 9, 0, 1, 3, 8, 36, },
+ { 10, 0, 1, 3, 8, 36, },
+ { 11, 0, 1, 3, 8, 36, },
{ 0, 0, 1, 3, 9, 52, },
{ 2, 0, 1, 3, 9, 36, },
{ 1, 0, 1, 3, 9, 66, },
@@ -40612,6 +40768,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 9, 36, },
{ 8, 0, 1, 3, 9, 52, },
{ 9, 0, 1, 3, 9, 36, },
+ { 10, 0, 1, 3, 9, 36, },
+ { 11, 0, 1, 3, 9, 36, },
{ 0, 0, 1, 3, 10, 40, },
{ 2, 0, 1, 3, 10, 36, },
{ 1, 0, 1, 3, 10, 66, },
@@ -40622,6 +40780,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 10, 36, },
{ 8, 0, 1, 3, 10, 40, },
{ 9, 0, 1, 3, 10, 36, },
+ { 10, 0, 1, 3, 10, 36, },
+ { 11, 0, 1, 3, 10, 36, },
{ 0, 0, 1, 3, 11, 26, },
{ 2, 0, 1, 3, 11, 36, },
{ 1, 0, 1, 3, 11, 66, },
@@ -40631,7 +40791,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 0, 1, 3, 11, 26, },
{ 7, 0, 1, 3, 11, 36, },
{ 8, 0, 1, 3, 11, 26, },
- { 9, 0, 1, 3, 11, 36, },
+ { 9, 0, 1, 3, 11, 16, },
+ { 10, 0, 1, 3, 11, 36, },
+ { 11, 0, 1, 3, 11, 36, },
{ 0, 0, 1, 3, 12, 127, },
{ 2, 0, 1, 3, 12, 127, },
{ 1, 0, 1, 3, 12, 127, },
@@ -40642,6 +40804,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 12, 127, },
{ 8, 0, 1, 3, 12, 127, },
{ 9, 0, 1, 3, 12, 127, },
+ { 10, 0, 1, 3, 12, 127, },
+ { 11, 0, 1, 3, 12, 127, },
{ 0, 0, 1, 3, 13, 127, },
{ 2, 0, 1, 3, 13, 127, },
{ 1, 0, 1, 3, 13, 127, },
@@ -40652,6 +40816,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 13, 127, },
{ 8, 0, 1, 3, 13, 127, },
{ 9, 0, 1, 3, 13, 127, },
+ { 10, 0, 1, 3, 13, 127, },
+ { 11, 0, 1, 3, 13, 127, },
{ 0, 0, 1, 3, 14, 127, },
{ 2, 0, 1, 3, 14, 127, },
{ 1, 0, 1, 3, 14, 127, },
@@ -40662,6 +40828,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 0, 1, 3, 14, 127, },
{ 8, 0, 1, 3, 14, 127, },
{ 9, 0, 1, 3, 14, 127, },
+ { 10, 0, 1, 3, 14, 127, },
+ { 11, 0, 1, 3, 14, 127, },
{ 0, 1, 0, 1, 36, 74, },
{ 2, 1, 0, 1, 36, 62, },
{ 1, 1, 0, 1, 36, 60, },
@@ -40672,6 +40840,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 36, 54, },
{ 8, 1, 0, 1, 36, 62, },
{ 9, 1, 0, 1, 36, 62, },
+ { 10, 1, 0, 1, 36, 62, },
+ { 11, 1, 0, 1, 36, 62, },
{ 0, 1, 0, 1, 40, 76, },
{ 2, 1, 0, 1, 40, 62, },
{ 1, 1, 0, 1, 40, 62, },
@@ -40682,6 +40852,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 40, 54, },
{ 8, 1, 0, 1, 40, 62, },
{ 9, 1, 0, 1, 40, 62, },
+ { 10, 1, 0, 1, 40, 62, },
+ { 11, 1, 0, 1, 40, 62, },
{ 0, 1, 0, 1, 44, 76, },
{ 2, 1, 0, 1, 44, 62, },
{ 1, 1, 0, 1, 44, 62, },
@@ -40692,6 +40864,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 44, 54, },
{ 8, 1, 0, 1, 44, 62, },
{ 9, 1, 0, 1, 44, 62, },
+ { 10, 1, 0, 1, 44, 62, },
+ { 11, 1, 0, 1, 44, 62, },
{ 0, 1, 0, 1, 48, 76, },
{ 2, 1, 0, 1, 48, 62, },
{ 1, 1, 0, 1, 48, 62, },
@@ -40702,6 +40876,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 48, 54, },
{ 8, 1, 0, 1, 48, 62, },
{ 9, 1, 0, 1, 48, 62, },
+ { 10, 1, 0, 1, 48, 62, },
+ { 11, 1, 0, 1, 48, 62, },
{ 0, 1, 0, 1, 52, 76, },
{ 2, 1, 0, 1, 52, 62, },
{ 1, 1, 0, 1, 52, 62, },
@@ -40712,6 +40888,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 52, 54, },
{ 8, 1, 0, 1, 52, 76, },
{ 9, 1, 0, 1, 52, 62, },
+ { 10, 1, 0, 1, 52, 62, },
+ { 11, 1, 0, 1, 52, 62, },
{ 0, 1, 0, 1, 56, 76, },
{ 2, 1, 0, 1, 56, 62, },
{ 1, 1, 0, 1, 56, 62, },
@@ -40722,6 +40900,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 56, 54, },
{ 8, 1, 0, 1, 56, 76, },
{ 9, 1, 0, 1, 56, 62, },
+ { 10, 1, 0, 1, 56, 62, },
+ { 11, 1, 0, 1, 56, 62, },
{ 0, 1, 0, 1, 60, 76, },
{ 2, 1, 0, 1, 60, 62, },
{ 1, 1, 0, 1, 60, 62, },
@@ -40732,6 +40912,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 60, 54, },
{ 8, 1, 0, 1, 60, 76, },
{ 9, 1, 0, 1, 60, 62, },
+ { 10, 1, 0, 1, 60, 62, },
+ { 11, 1, 0, 1, 60, 62, },
{ 0, 1, 0, 1, 64, 74, },
{ 2, 1, 0, 1, 64, 62, },
{ 1, 1, 0, 1, 64, 60, },
@@ -40742,6 +40924,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 64, 54, },
{ 8, 1, 0, 1, 64, 74, },
{ 9, 1, 0, 1, 64, 62, },
+ { 10, 1, 0, 1, 64, 62, },
+ { 11, 1, 0, 1, 64, 62, },
{ 0, 1, 0, 1, 100, 72, },
{ 2, 1, 0, 1, 100, 62, },
{ 1, 1, 0, 1, 100, 76, },
@@ -40752,6 +40936,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 100, 54, },
{ 8, 1, 0, 1, 100, 72, },
{ 9, 1, 0, 1, 100, 127, },
+ { 10, 1, 0, 1, 100, 54, },
+ { 11, 1, 0, 1, 100, 62, },
{ 0, 1, 0, 1, 104, 76, },
{ 2, 1, 0, 1, 104, 62, },
{ 1, 1, 0, 1, 104, 76, },
@@ -40762,6 +40948,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 104, 54, },
{ 8, 1, 0, 1, 104, 76, },
{ 9, 1, 0, 1, 104, 127, },
+ { 10, 1, 0, 1, 104, 54, },
+ { 11, 1, 0, 1, 104, 62, },
{ 0, 1, 0, 1, 108, 76, },
{ 2, 1, 0, 1, 108, 62, },
{ 1, 1, 0, 1, 108, 76, },
@@ -40772,6 +40960,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 108, 54, },
{ 8, 1, 0, 1, 108, 76, },
{ 9, 1, 0, 1, 108, 127, },
+ { 10, 1, 0, 1, 108, 54, },
+ { 11, 1, 0, 1, 108, 62, },
{ 0, 1, 0, 1, 112, 76, },
{ 2, 1, 0, 1, 112, 62, },
{ 1, 1, 0, 1, 112, 76, },
@@ -40782,6 +40972,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 112, 54, },
{ 8, 1, 0, 1, 112, 76, },
{ 9, 1, 0, 1, 112, 127, },
+ { 10, 1, 0, 1, 112, 54, },
+ { 11, 1, 0, 1, 112, 62, },
{ 0, 1, 0, 1, 116, 76, },
{ 2, 1, 0, 1, 116, 62, },
{ 1, 1, 0, 1, 116, 76, },
@@ -40792,6 +40984,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 116, 54, },
{ 8, 1, 0, 1, 116, 76, },
{ 9, 1, 0, 1, 116, 127, },
+ { 10, 1, 0, 1, 116, 54, },
+ { 11, 1, 0, 1, 116, 62, },
{ 0, 1, 0, 1, 120, 76, },
{ 2, 1, 0, 1, 120, 62, },
{ 1, 1, 0, 1, 120, 76, },
@@ -40802,6 +40996,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 120, 54, },
{ 8, 1, 0, 1, 120, 76, },
{ 9, 1, 0, 1, 120, 127, },
+ { 10, 1, 0, 1, 120, 54, },
+ { 11, 1, 0, 1, 120, 62, },
{ 0, 1, 0, 1, 124, 76, },
{ 2, 1, 0, 1, 124, 62, },
{ 1, 1, 0, 1, 124, 76, },
@@ -40812,6 +41008,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 124, 54, },
{ 8, 1, 0, 1, 124, 76, },
{ 9, 1, 0, 1, 124, 127, },
+ { 10, 1, 0, 1, 124, 54, },
+ { 11, 1, 0, 1, 124, 62, },
{ 0, 1, 0, 1, 128, 76, },
{ 2, 1, 0, 1, 128, 62, },
{ 1, 1, 0, 1, 128, 76, },
@@ -40822,6 +41020,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 128, 54, },
{ 8, 1, 0, 1, 128, 76, },
{ 9, 1, 0, 1, 128, 127, },
+ { 10, 1, 0, 1, 128, 54, },
+ { 11, 1, 0, 1, 128, 62, },
{ 0, 1, 0, 1, 132, 76, },
{ 2, 1, 0, 1, 132, 62, },
{ 1, 1, 0, 1, 132, 76, },
@@ -40832,6 +41032,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 132, 54, },
{ 8, 1, 0, 1, 132, 76, },
{ 9, 1, 0, 1, 132, 127, },
+ { 10, 1, 0, 1, 132, 54, },
+ { 11, 1, 0, 1, 132, 62, },
{ 0, 1, 0, 1, 136, 76, },
{ 2, 1, 0, 1, 136, 62, },
{ 1, 1, 0, 1, 136, 76, },
@@ -40842,6 +41044,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 136, 54, },
{ 8, 1, 0, 1, 136, 76, },
{ 9, 1, 0, 1, 136, 127, },
+ { 10, 1, 0, 1, 136, 54, },
+ { 11, 1, 0, 1, 136, 62, },
{ 0, 1, 0, 1, 140, 72, },
{ 2, 1, 0, 1, 140, 62, },
{ 1, 1, 0, 1, 140, 76, },
@@ -40852,6 +41056,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 140, 54, },
{ 8, 1, 0, 1, 140, 72, },
{ 9, 1, 0, 1, 140, 127, },
+ { 10, 1, 0, 1, 140, 54, },
+ { 11, 1, 0, 1, 140, 62, },
{ 0, 1, 0, 1, 144, 76, },
{ 2, 1, 0, 1, 144, 127, },
{ 1, 1, 0, 1, 144, 127, },
@@ -40862,8 +41068,10 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 1, 144, 127, },
{ 8, 1, 0, 1, 144, 76, },
{ 9, 1, 0, 1, 144, 127, },
+ { 10, 1, 0, 1, 144, 127, },
+ { 11, 1, 0, 1, 144, 76, },
{ 0, 1, 0, 1, 149, 76, },
- { 2, 1, 0, 1, 149, 54, },
+ { 2, 1, 0, 1, 149, 28, },
{ 1, 1, 0, 1, 149, 127, },
{ 3, 1, 0, 1, 149, 76, },
{ 4, 1, 0, 1, 149, 74, },
@@ -40871,9 +41079,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 1, 149, 76, },
{ 7, 1, 0, 1, 149, 54, },
{ 8, 1, 0, 1, 149, 76, },
- { 9, 1, 0, 1, 149, 54, },
+ { 9, 1, 0, 1, 149, 28, },
+ { 10, 1, 0, 1, 149, 28, },
+ { 11, 1, 0, 1, 149, 58, },
{ 0, 1, 0, 1, 153, 76, },
- { 2, 1, 0, 1, 153, 54, },
+ { 2, 1, 0, 1, 153, 28, },
{ 1, 1, 0, 1, 153, 127, },
{ 3, 1, 0, 1, 153, 76, },
{ 4, 1, 0, 1, 153, 74, },
@@ -40881,9 +41091,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 1, 153, 76, },
{ 7, 1, 0, 1, 153, 54, },
{ 8, 1, 0, 1, 153, 76, },
- { 9, 1, 0, 1, 153, 54, },
+ { 9, 1, 0, 1, 153, 28, },
+ { 10, 1, 0, 1, 153, 28, },
+ { 11, 1, 0, 1, 153, 58, },
{ 0, 1, 0, 1, 157, 76, },
- { 2, 1, 0, 1, 157, 54, },
+ { 2, 1, 0, 1, 157, 28, },
{ 1, 1, 0, 1, 157, 127, },
{ 3, 1, 0, 1, 157, 76, },
{ 4, 1, 0, 1, 157, 74, },
@@ -40891,9 +41103,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 1, 157, 76, },
{ 7, 1, 0, 1, 157, 54, },
{ 8, 1, 0, 1, 157, 76, },
- { 9, 1, 0, 1, 157, 54, },
+ { 9, 1, 0, 1, 157, 28, },
+ { 10, 1, 0, 1, 157, 28, },
+ { 11, 1, 0, 1, 157, 58, },
{ 0, 1, 0, 1, 161, 76, },
- { 2, 1, 0, 1, 161, 54, },
+ { 2, 1, 0, 1, 161, 28, },
{ 1, 1, 0, 1, 161, 127, },
{ 3, 1, 0, 1, 161, 76, },
{ 4, 1, 0, 1, 161, 74, },
@@ -40901,9 +41115,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 1, 161, 76, },
{ 7, 1, 0, 1, 161, 54, },
{ 8, 1, 0, 1, 161, 76, },
- { 9, 1, 0, 1, 161, 54, },
+ { 9, 1, 0, 1, 161, 28, },
+ { 10, 1, 0, 1, 161, 28, },
+ { 11, 1, 0, 1, 161, 58, },
{ 0, 1, 0, 1, 165, 76, },
- { 2, 1, 0, 1, 165, 54, },
+ { 2, 1, 0, 1, 165, 28, },
{ 1, 1, 0, 1, 165, 127, },
{ 3, 1, 0, 1, 165, 76, },
{ 4, 1, 0, 1, 165, 74, },
@@ -40911,7 +41127,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 1, 165, 76, },
{ 7, 1, 0, 1, 165, 54, },
{ 8, 1, 0, 1, 165, 76, },
- { 9, 1, 0, 1, 165, 54, },
+ { 9, 1, 0, 1, 165, 28, },
+ { 10, 1, 0, 1, 165, 28, },
+ { 11, 1, 0, 1, 165, 58, },
{ 0, 1, 0, 2, 36, 72, },
{ 2, 1, 0, 2, 36, 62, },
{ 1, 1, 0, 2, 36, 62, },
@@ -40922,6 +41140,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 36, 54, },
{ 8, 1, 0, 2, 36, 62, },
{ 9, 1, 0, 2, 36, 62, },
+ { 10, 1, 0, 2, 36, 62, },
+ { 11, 1, 0, 2, 36, 62, },
{ 0, 1, 0, 2, 40, 76, },
{ 2, 1, 0, 2, 40, 62, },
{ 1, 1, 0, 2, 40, 62, },
@@ -40932,6 +41152,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 40, 54, },
{ 8, 1, 0, 2, 40, 62, },
{ 9, 1, 0, 2, 40, 62, },
+ { 10, 1, 0, 2, 40, 62, },
+ { 11, 1, 0, 2, 40, 62, },
{ 0, 1, 0, 2, 44, 76, },
{ 2, 1, 0, 2, 44, 62, },
{ 1, 1, 0, 2, 44, 62, },
@@ -40942,6 +41164,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 44, 54, },
{ 8, 1, 0, 2, 44, 62, },
{ 9, 1, 0, 2, 44, 62, },
+ { 10, 1, 0, 2, 44, 62, },
+ { 11, 1, 0, 2, 44, 62, },
{ 0, 1, 0, 2, 48, 76, },
{ 2, 1, 0, 2, 48, 62, },
{ 1, 1, 0, 2, 48, 62, },
@@ -40952,6 +41176,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 48, 54, },
{ 8, 1, 0, 2, 48, 62, },
{ 9, 1, 0, 2, 48, 62, },
+ { 10, 1, 0, 2, 48, 62, },
+ { 11, 1, 0, 2, 48, 62, },
{ 0, 1, 0, 2, 52, 76, },
{ 2, 1, 0, 2, 52, 62, },
{ 1, 1, 0, 2, 52, 62, },
@@ -40962,6 +41188,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 52, 54, },
{ 8, 1, 0, 2, 52, 76, },
{ 9, 1, 0, 2, 52, 62, },
+ { 10, 1, 0, 2, 52, 62, },
+ { 11, 1, 0, 2, 52, 62, },
{ 0, 1, 0, 2, 56, 76, },
{ 2, 1, 0, 2, 56, 62, },
{ 1, 1, 0, 2, 56, 62, },
@@ -40972,6 +41200,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 56, 54, },
{ 8, 1, 0, 2, 56, 76, },
{ 9, 1, 0, 2, 56, 62, },
+ { 10, 1, 0, 2, 56, 62, },
+ { 11, 1, 0, 2, 56, 62, },
{ 0, 1, 0, 2, 60, 76, },
{ 2, 1, 0, 2, 60, 62, },
{ 1, 1, 0, 2, 60, 62, },
@@ -40982,6 +41212,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 60, 54, },
{ 8, 1, 0, 2, 60, 76, },
{ 9, 1, 0, 2, 60, 62, },
+ { 10, 1, 0, 2, 60, 62, },
+ { 11, 1, 0, 2, 60, 62, },
{ 0, 1, 0, 2, 64, 74, },
{ 2, 1, 0, 2, 64, 62, },
{ 1, 1, 0, 2, 64, 60, },
@@ -40992,6 +41224,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 64, 54, },
{ 8, 1, 0, 2, 64, 74, },
{ 9, 1, 0, 2, 64, 62, },
+ { 10, 1, 0, 2, 64, 62, },
+ { 11, 1, 0, 2, 64, 62, },
{ 0, 1, 0, 2, 100, 70, },
{ 2, 1, 0, 2, 100, 62, },
{ 1, 1, 0, 2, 100, 76, },
@@ -41002,6 +41236,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 100, 54, },
{ 8, 1, 0, 2, 100, 70, },
{ 9, 1, 0, 2, 100, 127, },
+ { 10, 1, 0, 2, 100, 54, },
+ { 11, 1, 0, 2, 100, 62, },
{ 0, 1, 0, 2, 104, 76, },
{ 2, 1, 0, 2, 104, 62, },
{ 1, 1, 0, 2, 104, 76, },
@@ -41012,6 +41248,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 104, 54, },
{ 8, 1, 0, 2, 104, 76, },
{ 9, 1, 0, 2, 104, 127, },
+ { 10, 1, 0, 2, 104, 54, },
+ { 11, 1, 0, 2, 104, 62, },
{ 0, 1, 0, 2, 108, 76, },
{ 2, 1, 0, 2, 108, 62, },
{ 1, 1, 0, 2, 108, 76, },
@@ -41022,6 +41260,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 108, 54, },
{ 8, 1, 0, 2, 108, 76, },
{ 9, 1, 0, 2, 108, 127, },
+ { 10, 1, 0, 2, 108, 54, },
+ { 11, 1, 0, 2, 108, 62, },
{ 0, 1, 0, 2, 112, 76, },
{ 2, 1, 0, 2, 112, 62, },
{ 1, 1, 0, 2, 112, 76, },
@@ -41032,6 +41272,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 112, 54, },
{ 8, 1, 0, 2, 112, 76, },
{ 9, 1, 0, 2, 112, 127, },
+ { 10, 1, 0, 2, 112, 54, },
+ { 11, 1, 0, 2, 112, 62, },
{ 0, 1, 0, 2, 116, 76, },
{ 2, 1, 0, 2, 116, 62, },
{ 1, 1, 0, 2, 116, 76, },
@@ -41042,6 +41284,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 116, 54, },
{ 8, 1, 0, 2, 116, 76, },
{ 9, 1, 0, 2, 116, 127, },
+ { 10, 1, 0, 2, 116, 54, },
+ { 11, 1, 0, 2, 116, 62, },
{ 0, 1, 0, 2, 120, 76, },
{ 2, 1, 0, 2, 120, 62, },
{ 1, 1, 0, 2, 120, 76, },
@@ -41052,6 +41296,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 120, 54, },
{ 8, 1, 0, 2, 120, 76, },
{ 9, 1, 0, 2, 120, 127, },
+ { 10, 1, 0, 2, 120, 54, },
+ { 11, 1, 0, 2, 120, 62, },
{ 0, 1, 0, 2, 124, 76, },
{ 2, 1, 0, 2, 124, 62, },
{ 1, 1, 0, 2, 124, 76, },
@@ -41062,6 +41308,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 124, 54, },
{ 8, 1, 0, 2, 124, 76, },
{ 9, 1, 0, 2, 124, 127, },
+ { 10, 1, 0, 2, 124, 54, },
+ { 11, 1, 0, 2, 124, 62, },
{ 0, 1, 0, 2, 128, 76, },
{ 2, 1, 0, 2, 128, 62, },
{ 1, 1, 0, 2, 128, 76, },
@@ -41072,6 +41320,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 128, 54, },
{ 8, 1, 0, 2, 128, 76, },
{ 9, 1, 0, 2, 128, 127, },
+ { 10, 1, 0, 2, 128, 54, },
+ { 11, 1, 0, 2, 128, 62, },
{ 0, 1, 0, 2, 132, 76, },
{ 2, 1, 0, 2, 132, 62, },
{ 1, 1, 0, 2, 132, 76, },
@@ -41082,6 +41332,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 132, 54, },
{ 8, 1, 0, 2, 132, 76, },
{ 9, 1, 0, 2, 132, 127, },
+ { 10, 1, 0, 2, 132, 54, },
+ { 11, 1, 0, 2, 132, 62, },
{ 0, 1, 0, 2, 136, 76, },
{ 2, 1, 0, 2, 136, 62, },
{ 1, 1, 0, 2, 136, 76, },
@@ -41092,6 +41344,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 136, 54, },
{ 8, 1, 0, 2, 136, 76, },
{ 9, 1, 0, 2, 136, 127, },
+ { 10, 1, 0, 2, 136, 54, },
+ { 11, 1, 0, 2, 136, 62, },
{ 0, 1, 0, 2, 140, 70, },
{ 2, 1, 0, 2, 140, 62, },
{ 1, 1, 0, 2, 140, 76, },
@@ -41102,6 +41356,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 140, 54, },
{ 8, 1, 0, 2, 140, 70, },
{ 9, 1, 0, 2, 140, 127, },
+ { 10, 1, 0, 2, 140, 54, },
+ { 11, 1, 0, 2, 140, 62, },
{ 0, 1, 0, 2, 144, 76, },
{ 2, 1, 0, 2, 144, 127, },
{ 1, 1, 0, 2, 144, 127, },
@@ -41112,8 +41368,10 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 2, 144, 127, },
{ 8, 1, 0, 2, 144, 76, },
{ 9, 1, 0, 2, 144, 127, },
+ { 10, 1, 0, 2, 144, 127, },
+ { 11, 1, 0, 2, 144, 76, },
{ 0, 1, 0, 2, 149, 76, },
- { 2, 1, 0, 2, 149, 54, },
+ { 2, 1, 0, 2, 149, 28, },
{ 1, 1, 0, 2, 149, 127, },
{ 3, 1, 0, 2, 149, 76, },
{ 4, 1, 0, 2, 149, 74, },
@@ -41121,9 +41379,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 2, 149, 76, },
{ 7, 1, 0, 2, 149, 54, },
{ 8, 1, 0, 2, 149, 76, },
- { 9, 1, 0, 2, 149, 54, },
+ { 9, 1, 0, 2, 149, 28, },
+ { 10, 1, 0, 2, 149, 28, },
+ { 11, 1, 0, 2, 149, 60, },
{ 0, 1, 0, 2, 153, 76, },
- { 2, 1, 0, 2, 153, 54, },
+ { 2, 1, 0, 2, 153, 28, },
{ 1, 1, 0, 2, 153, 127, },
{ 3, 1, 0, 2, 153, 76, },
{ 4, 1, 0, 2, 153, 74, },
@@ -41131,9 +41391,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 2, 153, 76, },
{ 7, 1, 0, 2, 153, 54, },
{ 8, 1, 0, 2, 153, 76, },
- { 9, 1, 0, 2, 153, 54, },
+ { 9, 1, 0, 2, 153, 28, },
+ { 10, 1, 0, 2, 153, 28, },
+ { 11, 1, 0, 2, 153, 60, },
{ 0, 1, 0, 2, 157, 76, },
- { 2, 1, 0, 2, 157, 54, },
+ { 2, 1, 0, 2, 157, 28, },
{ 1, 1, 0, 2, 157, 127, },
{ 3, 1, 0, 2, 157, 76, },
{ 4, 1, 0, 2, 157, 74, },
@@ -41141,9 +41403,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 2, 157, 76, },
{ 7, 1, 0, 2, 157, 54, },
{ 8, 1, 0, 2, 157, 76, },
- { 9, 1, 0, 2, 157, 54, },
+ { 9, 1, 0, 2, 157, 28, },
+ { 10, 1, 0, 2, 157, 28, },
+ { 11, 1, 0, 2, 157, 60, },
{ 0, 1, 0, 2, 161, 76, },
- { 2, 1, 0, 2, 161, 54, },
+ { 2, 1, 0, 2, 161, 28, },
{ 1, 1, 0, 2, 161, 127, },
{ 3, 1, 0, 2, 161, 76, },
{ 4, 1, 0, 2, 161, 74, },
@@ -41151,9 +41415,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 2, 161, 76, },
{ 7, 1, 0, 2, 161, 54, },
{ 8, 1, 0, 2, 161, 76, },
- { 9, 1, 0, 2, 161, 54, },
+ { 9, 1, 0, 2, 161, 28, },
+ { 10, 1, 0, 2, 161, 28, },
+ { 11, 1, 0, 2, 161, 60, },
{ 0, 1, 0, 2, 165, 76, },
- { 2, 1, 0, 2, 165, 54, },
+ { 2, 1, 0, 2, 165, 28, },
{ 1, 1, 0, 2, 165, 127, },
{ 3, 1, 0, 2, 165, 76, },
{ 4, 1, 0, 2, 165, 74, },
@@ -41161,7 +41427,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 2, 165, 76, },
{ 7, 1, 0, 2, 165, 54, },
{ 8, 1, 0, 2, 165, 76, },
- { 9, 1, 0, 2, 165, 54, },
+ { 9, 1, 0, 2, 165, 28, },
+ { 10, 1, 0, 2, 165, 28, },
+ { 11, 1, 0, 2, 165, 60, },
{ 0, 1, 0, 3, 36, 68, },
{ 2, 1, 0, 3, 36, 38, },
{ 1, 1, 0, 3, 36, 50, },
@@ -41172,6 +41440,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 36, 30, },
{ 8, 1, 0, 3, 36, 50, },
{ 9, 1, 0, 3, 36, 38, },
+ { 10, 1, 0, 3, 36, 38, },
+ { 11, 1, 0, 3, 36, 38, },
{ 0, 1, 0, 3, 40, 68, },
{ 2, 1, 0, 3, 40, 38, },
{ 1, 1, 0, 3, 40, 50, },
@@ -41182,6 +41452,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 40, 30, },
{ 8, 1, 0, 3, 40, 50, },
{ 9, 1, 0, 3, 40, 38, },
+ { 10, 1, 0, 3, 40, 38, },
+ { 11, 1, 0, 3, 40, 38, },
{ 0, 1, 0, 3, 44, 68, },
{ 2, 1, 0, 3, 44, 38, },
{ 1, 1, 0, 3, 44, 50, },
@@ -41192,6 +41464,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 44, 30, },
{ 8, 1, 0, 3, 44, 50, },
{ 9, 1, 0, 3, 44, 38, },
+ { 10, 1, 0, 3, 44, 38, },
+ { 11, 1, 0, 3, 44, 38, },
{ 0, 1, 0, 3, 48, 68, },
{ 2, 1, 0, 3, 48, 38, },
{ 1, 1, 0, 3, 48, 50, },
@@ -41202,6 +41476,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 48, 30, },
{ 8, 1, 0, 3, 48, 50, },
{ 9, 1, 0, 3, 48, 38, },
+ { 10, 1, 0, 3, 48, 38, },
+ { 11, 1, 0, 3, 48, 38, },
{ 0, 1, 0, 3, 52, 68, },
{ 2, 1, 0, 3, 52, 38, },
{ 1, 1, 0, 3, 52, 50, },
@@ -41212,6 +41488,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 52, 30, },
{ 8, 1, 0, 3, 52, 68, },
{ 9, 1, 0, 3, 52, 38, },
+ { 10, 1, 0, 3, 52, 38, },
+ { 11, 1, 0, 3, 52, 38, },
{ 0, 1, 0, 3, 56, 68, },
{ 2, 1, 0, 3, 56, 38, },
{ 1, 1, 0, 3, 56, 50, },
@@ -41222,6 +41500,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 56, 30, },
{ 8, 1, 0, 3, 56, 68, },
{ 9, 1, 0, 3, 56, 38, },
+ { 10, 1, 0, 3, 56, 38, },
+ { 11, 1, 0, 3, 56, 38, },
{ 0, 1, 0, 3, 60, 66, },
{ 2, 1, 0, 3, 60, 38, },
{ 1, 1, 0, 3, 60, 50, },
@@ -41232,6 +41512,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 60, 30, },
{ 8, 1, 0, 3, 60, 66, },
{ 9, 1, 0, 3, 60, 38, },
+ { 10, 1, 0, 3, 60, 38, },
+ { 11, 1, 0, 3, 60, 38, },
{ 0, 1, 0, 3, 64, 68, },
{ 2, 1, 0, 3, 64, 38, },
{ 1, 1, 0, 3, 64, 50, },
@@ -41242,6 +41524,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 64, 30, },
{ 8, 1, 0, 3, 64, 68, },
{ 9, 1, 0, 3, 64, 38, },
+ { 10, 1, 0, 3, 64, 38, },
+ { 11, 1, 0, 3, 64, 38, },
{ 0, 1, 0, 3, 100, 60, },
{ 2, 1, 0, 3, 100, 38, },
{ 1, 1, 0, 3, 100, 70, },
@@ -41252,6 +41536,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 100, 30, },
{ 8, 1, 0, 3, 100, 60, },
{ 9, 1, 0, 3, 100, 127, },
+ { 10, 1, 0, 3, 100, 30, },
+ { 11, 1, 0, 3, 100, 38, },
{ 0, 1, 0, 3, 104, 68, },
{ 2, 1, 0, 3, 104, 38, },
{ 1, 1, 0, 3, 104, 70, },
@@ -41262,6 +41548,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 104, 30, },
{ 8, 1, 0, 3, 104, 68, },
{ 9, 1, 0, 3, 104, 127, },
+ { 10, 1, 0, 3, 104, 30, },
+ { 11, 1, 0, 3, 104, 38, },
{ 0, 1, 0, 3, 108, 68, },
{ 2, 1, 0, 3, 108, 38, },
{ 1, 1, 0, 3, 108, 70, },
@@ -41272,6 +41560,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 108, 30, },
{ 8, 1, 0, 3, 108, 68, },
{ 9, 1, 0, 3, 108, 127, },
+ { 10, 1, 0, 3, 108, 30, },
+ { 11, 1, 0, 3, 108, 38, },
{ 0, 1, 0, 3, 112, 68, },
{ 2, 1, 0, 3, 112, 38, },
{ 1, 1, 0, 3, 112, 70, },
@@ -41282,6 +41572,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 112, 30, },
{ 8, 1, 0, 3, 112, 68, },
{ 9, 1, 0, 3, 112, 127, },
+ { 10, 1, 0, 3, 112, 30, },
+ { 11, 1, 0, 3, 112, 38, },
{ 0, 1, 0, 3, 116, 68, },
{ 2, 1, 0, 3, 116, 38, },
{ 1, 1, 0, 3, 116, 70, },
@@ -41292,6 +41584,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 116, 30, },
{ 8, 1, 0, 3, 116, 68, },
{ 9, 1, 0, 3, 116, 127, },
+ { 10, 1, 0, 3, 116, 30, },
+ { 11, 1, 0, 3, 116, 38, },
{ 0, 1, 0, 3, 120, 68, },
{ 2, 1, 0, 3, 120, 38, },
{ 1, 1, 0, 3, 120, 70, },
@@ -41302,6 +41596,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 120, 30, },
{ 8, 1, 0, 3, 120, 68, },
{ 9, 1, 0, 3, 120, 127, },
+ { 10, 1, 0, 3, 120, 30, },
+ { 11, 1, 0, 3, 120, 38, },
{ 0, 1, 0, 3, 124, 68, },
{ 2, 1, 0, 3, 124, 38, },
{ 1, 1, 0, 3, 124, 70, },
@@ -41312,6 +41608,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 124, 30, },
{ 8, 1, 0, 3, 124, 68, },
{ 9, 1, 0, 3, 124, 127, },
+ { 10, 1, 0, 3, 124, 30, },
+ { 11, 1, 0, 3, 124, 38, },
{ 0, 1, 0, 3, 128, 68, },
{ 2, 1, 0, 3, 128, 38, },
{ 1, 1, 0, 3, 128, 70, },
@@ -41322,6 +41620,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 128, 30, },
{ 8, 1, 0, 3, 128, 68, },
{ 9, 1, 0, 3, 128, 127, },
+ { 10, 1, 0, 3, 128, 30, },
+ { 11, 1, 0, 3, 128, 38, },
{ 0, 1, 0, 3, 132, 68, },
{ 2, 1, 0, 3, 132, 38, },
{ 1, 1, 0, 3, 132, 70, },
@@ -41332,6 +41632,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 132, 30, },
{ 8, 1, 0, 3, 132, 68, },
{ 9, 1, 0, 3, 132, 127, },
+ { 10, 1, 0, 3, 132, 30, },
+ { 11, 1, 0, 3, 132, 38, },
{ 0, 1, 0, 3, 136, 68, },
{ 2, 1, 0, 3, 136, 38, },
{ 1, 1, 0, 3, 136, 70, },
@@ -41342,6 +41644,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 136, 30, },
{ 8, 1, 0, 3, 136, 68, },
{ 9, 1, 0, 3, 136, 127, },
+ { 10, 1, 0, 3, 136, 30, },
+ { 11, 1, 0, 3, 136, 38, },
{ 0, 1, 0, 3, 140, 60, },
{ 2, 1, 0, 3, 140, 38, },
{ 1, 1, 0, 3, 140, 70, },
@@ -41352,6 +41656,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 140, 30, },
{ 8, 1, 0, 3, 140, 60, },
{ 9, 1, 0, 3, 140, 127, },
+ { 10, 1, 0, 3, 140, 30, },
+ { 11, 1, 0, 3, 140, 38, },
{ 0, 1, 0, 3, 144, 68, },
{ 2, 1, 0, 3, 144, 127, },
{ 1, 1, 0, 3, 144, 127, },
@@ -41362,8 +41668,10 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 0, 3, 144, 127, },
{ 8, 1, 0, 3, 144, 68, },
{ 9, 1, 0, 3, 144, 127, },
+ { 10, 1, 0, 3, 144, 127, },
+ { 11, 1, 0, 3, 144, 60, },
{ 0, 1, 0, 3, 149, 76, },
- { 2, 1, 0, 3, 149, 30, },
+ { 2, 1, 0, 3, 149, 4, },
{ 1, 1, 0, 3, 149, 127, },
{ 3, 1, 0, 3, 149, 76, },
{ 4, 1, 0, 3, 149, 60, },
@@ -41371,9 +41679,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 3, 149, 76, },
{ 7, 1, 0, 3, 149, 30, },
{ 8, 1, 0, 3, 149, 72, },
- { 9, 1, 0, 3, 149, 30, },
+ { 9, 1, 0, 3, 149, 4, },
+ { 10, 1, 0, 3, 149, 4, },
+ { 11, 1, 0, 3, 149, 36, },
{ 0, 1, 0, 3, 153, 76, },
- { 2, 1, 0, 3, 153, 30, },
+ { 2, 1, 0, 3, 153, 4, },
{ 1, 1, 0, 3, 153, 127, },
{ 3, 1, 0, 3, 153, 76, },
{ 4, 1, 0, 3, 153, 60, },
@@ -41381,9 +41691,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 3, 153, 76, },
{ 7, 1, 0, 3, 153, 30, },
{ 8, 1, 0, 3, 153, 76, },
- { 9, 1, 0, 3, 153, 30, },
+ { 9, 1, 0, 3, 153, 4, },
+ { 10, 1, 0, 3, 153, 4, },
+ { 11, 1, 0, 3, 153, 36, },
{ 0, 1, 0, 3, 157, 76, },
- { 2, 1, 0, 3, 157, 30, },
+ { 2, 1, 0, 3, 157, 4, },
{ 1, 1, 0, 3, 157, 127, },
{ 3, 1, 0, 3, 157, 76, },
{ 4, 1, 0, 3, 157, 60, },
@@ -41391,9 +41703,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 3, 157, 76, },
{ 7, 1, 0, 3, 157, 30, },
{ 8, 1, 0, 3, 157, 76, },
- { 9, 1, 0, 3, 157, 30, },
+ { 9, 1, 0, 3, 157, 4, },
+ { 10, 1, 0, 3, 157, 4, },
+ { 11, 1, 0, 3, 157, 36, },
{ 0, 1, 0, 3, 161, 76, },
- { 2, 1, 0, 3, 161, 30, },
+ { 2, 1, 0, 3, 161, 4, },
{ 1, 1, 0, 3, 161, 127, },
{ 3, 1, 0, 3, 161, 76, },
{ 4, 1, 0, 3, 161, 60, },
@@ -41401,9 +41715,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 3, 161, 76, },
{ 7, 1, 0, 3, 161, 30, },
{ 8, 1, 0, 3, 161, 76, },
- { 9, 1, 0, 3, 161, 30, },
+ { 9, 1, 0, 3, 161, 4, },
+ { 10, 1, 0, 3, 161, 4, },
+ { 11, 1, 0, 3, 161, 36, },
{ 0, 1, 0, 3, 165, 76, },
- { 2, 1, 0, 3, 165, 30, },
+ { 2, 1, 0, 3, 165, 4, },
{ 1, 1, 0, 3, 165, 127, },
{ 3, 1, 0, 3, 165, 76, },
{ 4, 1, 0, 3, 165, 60, },
@@ -41411,7 +41727,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 0, 3, 165, 76, },
{ 7, 1, 0, 3, 165, 30, },
{ 8, 1, 0, 3, 165, 76, },
- { 9, 1, 0, 3, 165, 30, },
+ { 9, 1, 0, 3, 165, 4, },
+ { 10, 1, 0, 3, 165, 4, },
+ { 11, 1, 0, 3, 165, 36, },
{ 0, 1, 1, 2, 38, 66, },
{ 2, 1, 1, 2, 38, 64, },
{ 1, 1, 1, 2, 38, 62, },
@@ -41422,6 +41740,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 2, 38, 54, },
{ 8, 1, 1, 2, 38, 62, },
{ 9, 1, 1, 2, 38, 64, },
+ { 10, 1, 1, 2, 38, 64, },
+ { 11, 1, 1, 2, 38, 64, },
{ 0, 1, 1, 2, 46, 72, },
{ 2, 1, 1, 2, 46, 64, },
{ 1, 1, 1, 2, 46, 62, },
@@ -41432,6 +41752,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 2, 46, 54, },
{ 8, 1, 1, 2, 46, 62, },
{ 9, 1, 1, 2, 46, 64, },
+ { 10, 1, 1, 2, 46, 64, },
+ { 11, 1, 1, 2, 46, 64, },
{ 0, 1, 1, 2, 54, 72, },
{ 2, 1, 1, 2, 54, 64, },
{ 1, 1, 1, 2, 54, 62, },
@@ -41442,6 +41764,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 2, 54, 54, },
{ 8, 1, 1, 2, 54, 72, },
{ 9, 1, 1, 2, 54, 64, },
+ { 10, 1, 1, 2, 54, 64, },
+ { 11, 1, 1, 2, 54, 64, },
{ 0, 1, 1, 2, 62, 64, },
{ 2, 1, 1, 2, 62, 64, },
{ 1, 1, 1, 2, 62, 62, },
@@ -41452,6 +41776,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 2, 62, 54, },
{ 8, 1, 1, 2, 62, 64, },
{ 9, 1, 1, 2, 62, 64, },
+ { 10, 1, 1, 2, 62, 64, },
+ { 11, 1, 1, 2, 62, 64, },
{ 0, 1, 1, 2, 102, 58, },
{ 2, 1, 1, 2, 102, 64, },
{ 1, 1, 1, 2, 102, 72, },
@@ -41462,6 +41788,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 2, 102, 54, },
{ 8, 1, 1, 2, 102, 58, },
{ 9, 1, 1, 2, 102, 127, },
+ { 10, 1, 1, 2, 102, 54, },
+ { 11, 1, 1, 2, 102, 64, },
{ 0, 1, 1, 2, 110, 72, },
{ 2, 1, 1, 2, 110, 64, },
{ 1, 1, 1, 2, 110, 72, },
@@ -41472,6 +41800,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 2, 110, 54, },
{ 8, 1, 1, 2, 110, 72, },
{ 9, 1, 1, 2, 110, 127, },
+ { 10, 1, 1, 2, 110, 54, },
+ { 11, 1, 1, 2, 110, 64, },
{ 0, 1, 1, 2, 118, 72, },
{ 2, 1, 1, 2, 118, 64, },
{ 1, 1, 1, 2, 118, 72, },
@@ -41482,6 +41812,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 2, 118, 54, },
{ 8, 1, 1, 2, 118, 72, },
{ 9, 1, 1, 2, 118, 127, },
+ { 10, 1, 1, 2, 118, 54, },
+ { 11, 1, 1, 2, 118, 64, },
{ 0, 1, 1, 2, 126, 72, },
{ 2, 1, 1, 2, 126, 64, },
{ 1, 1, 1, 2, 126, 72, },
@@ -41492,6 +41824,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 2, 126, 54, },
{ 8, 1, 1, 2, 126, 72, },
{ 9, 1, 1, 2, 126, 127, },
+ { 10, 1, 1, 2, 126, 54, },
+ { 11, 1, 1, 2, 126, 64, },
{ 0, 1, 1, 2, 134, 72, },
{ 2, 1, 1, 2, 134, 64, },
{ 1, 1, 1, 2, 134, 72, },
@@ -41502,6 +41836,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 2, 134, 54, },
{ 8, 1, 1, 2, 134, 72, },
{ 9, 1, 1, 2, 134, 127, },
+ { 10, 1, 1, 2, 134, 54, },
+ { 11, 1, 1, 2, 134, 64, },
{ 0, 1, 1, 2, 142, 72, },
{ 2, 1, 1, 2, 142, 127, },
{ 1, 1, 1, 2, 142, 127, },
@@ -41512,8 +41848,10 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 2, 142, 127, },
{ 8, 1, 1, 2, 142, 72, },
{ 9, 1, 1, 2, 142, 127, },
+ { 10, 1, 1, 2, 142, 127, },
+ { 11, 1, 1, 2, 142, 72, },
{ 0, 1, 1, 2, 151, 72, },
- { 2, 1, 1, 2, 151, 54, },
+ { 2, 1, 1, 2, 151, 28, },
{ 1, 1, 1, 2, 151, 127, },
{ 3, 1, 1, 2, 151, 72, },
{ 4, 1, 1, 2, 151, 72, },
@@ -41521,9 +41859,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 1, 2, 151, 72, },
{ 7, 1, 1, 2, 151, 54, },
{ 8, 1, 1, 2, 151, 72, },
- { 9, 1, 1, 2, 151, 54, },
+ { 9, 1, 1, 2, 151, 28, },
+ { 10, 1, 1, 2, 151, 28, },
+ { 11, 1, 1, 2, 151, 64, },
{ 0, 1, 1, 2, 159, 72, },
- { 2, 1, 1, 2, 159, 54, },
+ { 2, 1, 1, 2, 159, 28, },
{ 1, 1, 1, 2, 159, 127, },
{ 3, 1, 1, 2, 159, 72, },
{ 4, 1, 1, 2, 159, 72, },
@@ -41531,7 +41871,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 1, 2, 159, 72, },
{ 7, 1, 1, 2, 159, 54, },
{ 8, 1, 1, 2, 159, 72, },
- { 9, 1, 1, 2, 159, 54, },
+ { 9, 1, 1, 2, 159, 28, },
+ { 10, 1, 1, 2, 159, 28, },
+ { 11, 1, 1, 2, 159, 64, },
{ 0, 1, 1, 3, 38, 60, },
{ 2, 1, 1, 3, 38, 40, },
{ 1, 1, 1, 3, 38, 50, },
@@ -41542,6 +41884,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 3, 38, 30, },
{ 8, 1, 1, 3, 38, 50, },
{ 9, 1, 1, 3, 38, 40, },
+ { 10, 1, 1, 3, 38, 40, },
+ { 11, 1, 1, 3, 38, 40, },
{ 0, 1, 1, 3, 46, 68, },
{ 2, 1, 1, 3, 46, 40, },
{ 1, 1, 1, 3, 46, 50, },
@@ -41552,6 +41896,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 3, 46, 30, },
{ 8, 1, 1, 3, 46, 50, },
{ 9, 1, 1, 3, 46, 40, },
+ { 10, 1, 1, 3, 46, 40, },
+ { 11, 1, 1, 3, 46, 40, },
{ 0, 1, 1, 3, 54, 68, },
{ 2, 1, 1, 3, 54, 40, },
{ 1, 1, 1, 3, 54, 50, },
@@ -41562,6 +41908,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 3, 54, 30, },
{ 8, 1, 1, 3, 54, 68, },
{ 9, 1, 1, 3, 54, 40, },
+ { 10, 1, 1, 3, 54, 40, },
+ { 11, 1, 1, 3, 54, 40, },
{ 0, 1, 1, 3, 62, 58, },
{ 2, 1, 1, 3, 62, 40, },
{ 1, 1, 1, 3, 62, 48, },
@@ -41572,6 +41920,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 3, 62, 30, },
{ 8, 1, 1, 3, 62, 58, },
{ 9, 1, 1, 3, 62, 40, },
+ { 10, 1, 1, 3, 62, 40, },
+ { 11, 1, 1, 3, 62, 40, },
{ 0, 1, 1, 3, 102, 54, },
{ 2, 1, 1, 3, 102, 40, },
{ 1, 1, 1, 3, 102, 70, },
@@ -41582,6 +41932,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 3, 102, 30, },
{ 8, 1, 1, 3, 102, 54, },
{ 9, 1, 1, 3, 102, 127, },
+ { 10, 1, 1, 3, 102, 30, },
+ { 11, 1, 1, 3, 102, 40, },
{ 0, 1, 1, 3, 110, 68, },
{ 2, 1, 1, 3, 110, 40, },
{ 1, 1, 1, 3, 110, 70, },
@@ -41592,6 +41944,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 3, 110, 30, },
{ 8, 1, 1, 3, 110, 68, },
{ 9, 1, 1, 3, 110, 127, },
+ { 10, 1, 1, 3, 110, 30, },
+ { 11, 1, 1, 3, 110, 40, },
{ 0, 1, 1, 3, 118, 68, },
{ 2, 1, 1, 3, 118, 40, },
{ 1, 1, 1, 3, 118, 70, },
@@ -41602,6 +41956,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 3, 118, 30, },
{ 8, 1, 1, 3, 118, 68, },
{ 9, 1, 1, 3, 118, 127, },
+ { 10, 1, 1, 3, 118, 30, },
+ { 11, 1, 1, 3, 118, 40, },
{ 0, 1, 1, 3, 126, 68, },
{ 2, 1, 1, 3, 126, 40, },
{ 1, 1, 1, 3, 126, 70, },
@@ -41612,6 +41968,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 3, 126, 30, },
{ 8, 1, 1, 3, 126, 68, },
{ 9, 1, 1, 3, 126, 127, },
+ { 10, 1, 1, 3, 126, 30, },
+ { 11, 1, 1, 3, 126, 40, },
{ 0, 1, 1, 3, 134, 68, },
{ 2, 1, 1, 3, 134, 40, },
{ 1, 1, 1, 3, 134, 70, },
@@ -41622,6 +41980,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 3, 134, 30, },
{ 8, 1, 1, 3, 134, 68, },
{ 9, 1, 1, 3, 134, 127, },
+ { 10, 1, 1, 3, 134, 30, },
+ { 11, 1, 1, 3, 134, 40, },
{ 0, 1, 1, 3, 142, 68, },
{ 2, 1, 1, 3, 142, 127, },
{ 1, 1, 1, 3, 142, 127, },
@@ -41632,8 +41992,10 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 1, 3, 142, 127, },
{ 8, 1, 1, 3, 142, 68, },
{ 9, 1, 1, 3, 142, 127, },
+ { 10, 1, 1, 3, 142, 127, },
+ { 11, 1, 1, 3, 142, 62, },
{ 0, 1, 1, 3, 151, 72, },
- { 2, 1, 1, 3, 151, 30, },
+ { 2, 1, 1, 3, 151, 4, },
{ 1, 1, 1, 3, 151, 127, },
{ 3, 1, 1, 3, 151, 72, },
{ 4, 1, 1, 3, 151, 66, },
@@ -41641,9 +42003,11 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 1, 3, 151, 72, },
{ 7, 1, 1, 3, 151, 30, },
{ 8, 1, 1, 3, 151, 68, },
- { 9, 1, 1, 3, 151, 30, },
+ { 9, 1, 1, 3, 151, 4, },
+ { 10, 1, 1, 3, 151, 4, },
+ { 11, 1, 1, 3, 151, 40, },
{ 0, 1, 1, 3, 159, 72, },
- { 2, 1, 1, 3, 159, 30, },
+ { 2, 1, 1, 3, 159, 4, },
{ 1, 1, 1, 3, 159, 127, },
{ 3, 1, 1, 3, 159, 72, },
{ 4, 1, 1, 3, 159, 66, },
@@ -41651,7 +42015,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 1, 3, 159, 72, },
{ 7, 1, 1, 3, 159, 30, },
{ 8, 1, 1, 3, 159, 72, },
- { 9, 1, 1, 3, 159, 30, },
+ { 9, 1, 1, 3, 159, 4, },
+ { 10, 1, 1, 3, 159, 4, },
+ { 11, 1, 1, 3, 159, 40, },
{ 0, 1, 2, 4, 42, 64, },
{ 2, 1, 2, 4, 42, 64, },
{ 1, 1, 2, 4, 42, 64, },
@@ -41662,6 +42028,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 2, 4, 42, 54, },
{ 8, 1, 2, 4, 42, 62, },
{ 9, 1, 2, 4, 42, 64, },
+ { 10, 1, 2, 4, 42, 64, },
+ { 11, 1, 2, 4, 42, 64, },
{ 0, 1, 2, 4, 58, 62, },
{ 2, 1, 2, 4, 58, 64, },
{ 1, 1, 2, 4, 58, 64, },
@@ -41672,6 +42040,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 2, 4, 58, 54, },
{ 8, 1, 2, 4, 58, 62, },
{ 9, 1, 2, 4, 58, 64, },
+ { 10, 1, 2, 4, 58, 64, },
+ { 11, 1, 2, 4, 58, 64, },
{ 0, 1, 2, 4, 106, 58, },
{ 2, 1, 2, 4, 106, 64, },
{ 1, 1, 2, 4, 106, 72, },
@@ -41682,6 +42052,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 2, 4, 106, 54, },
{ 8, 1, 2, 4, 106, 58, },
{ 9, 1, 2, 4, 106, 127, },
+ { 10, 1, 2, 4, 106, 54, },
+ { 11, 1, 2, 4, 106, 64, },
{ 0, 1, 2, 4, 122, 72, },
{ 2, 1, 2, 4, 122, 64, },
{ 1, 1, 2, 4, 122, 72, },
@@ -41692,6 +42064,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 2, 4, 122, 54, },
{ 8, 1, 2, 4, 122, 72, },
{ 9, 1, 2, 4, 122, 127, },
+ { 10, 1, 2, 4, 122, 54, },
+ { 11, 1, 2, 4, 122, 64, },
{ 0, 1, 2, 4, 138, 72, },
{ 2, 1, 2, 4, 138, 127, },
{ 1, 1, 2, 4, 138, 127, },
@@ -41702,8 +42076,10 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 2, 4, 138, 127, },
{ 8, 1, 2, 4, 138, 72, },
{ 9, 1, 2, 4, 138, 127, },
+ { 10, 1, 2, 4, 138, 127, },
+ { 11, 1, 2, 4, 138, 72, },
{ 0, 1, 2, 4, 155, 72, },
- { 2, 1, 2, 4, 155, 54, },
+ { 2, 1, 2, 4, 155, 28, },
{ 1, 1, 2, 4, 155, 127, },
{ 3, 1, 2, 4, 155, 72, },
{ 4, 1, 2, 4, 155, 68, },
@@ -41711,7 +42087,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 2, 4, 155, 72, },
{ 7, 1, 2, 4, 155, 54, },
{ 8, 1, 2, 4, 155, 68, },
- { 9, 1, 2, 4, 155, 54, },
+ { 9, 1, 2, 4, 155, 28, },
+ { 10, 1, 2, 4, 155, 28, },
+ { 11, 1, 2, 4, 155, 64, },
{ 0, 1, 2, 5, 42, 54, },
{ 2, 1, 2, 5, 42, 40, },
{ 1, 1, 2, 5, 42, 50, },
@@ -41722,6 +42100,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 2, 5, 42, 30, },
{ 8, 1, 2, 5, 42, 50, },
{ 9, 1, 2, 5, 42, 40, },
+ { 10, 1, 2, 5, 42, 40, },
+ { 11, 1, 2, 5, 42, 40, },
{ 0, 1, 2, 5, 58, 52, },
{ 2, 1, 2, 5, 58, 40, },
{ 1, 1, 2, 5, 58, 50, },
@@ -41732,6 +42112,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 2, 5, 58, 30, },
{ 8, 1, 2, 5, 58, 52, },
{ 9, 1, 2, 5, 58, 40, },
+ { 10, 1, 2, 5, 58, 40, },
+ { 11, 1, 2, 5, 58, 40, },
{ 0, 1, 2, 5, 106, 50, },
{ 2, 1, 2, 5, 106, 40, },
{ 1, 1, 2, 5, 106, 72, },
@@ -41742,6 +42124,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 2, 5, 106, 30, },
{ 8, 1, 2, 5, 106, 50, },
{ 9, 1, 2, 5, 106, 127, },
+ { 10, 1, 2, 5, 106, 30, },
+ { 11, 1, 2, 5, 106, 40, },
{ 0, 1, 2, 5, 122, 66, },
{ 2, 1, 2, 5, 122, 40, },
{ 1, 1, 2, 5, 122, 72, },
@@ -41752,6 +42136,8 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 2, 5, 122, 30, },
{ 8, 1, 2, 5, 122, 66, },
{ 9, 1, 2, 5, 122, 127, },
+ { 10, 1, 2, 5, 122, 30, },
+ { 11, 1, 2, 5, 122, 40, },
{ 0, 1, 2, 5, 138, 66, },
{ 2, 1, 2, 5, 138, 127, },
{ 1, 1, 2, 5, 138, 127, },
@@ -41762,8 +42148,10 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 7, 1, 2, 5, 138, 127, },
{ 8, 1, 2, 5, 138, 66, },
{ 9, 1, 2, 5, 138, 127, },
+ { 10, 1, 2, 5, 138, 127, },
+ { 11, 1, 2, 5, 138, 60, },
{ 0, 1, 2, 5, 155, 62, },
- { 2, 1, 2, 5, 155, 30, },
+ { 2, 1, 2, 5, 155, 4, },
{ 1, 1, 2, 5, 155, 127, },
{ 3, 1, 2, 5, 155, 62, },
{ 4, 1, 2, 5, 155, 58, },
@@ -41771,7 +42159,9 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type0[] = {
{ 6, 1, 2, 5, 155, 62, },
{ 7, 1, 2, 5, 155, 30, },
{ 8, 1, 2, 5, 155, 62, },
- { 9, 1, 2, 5, 155, 30, },
+ { 9, 1, 2, 5, 155, 4, },
+ { 10, 1, 2, 5, 155, 4, },
+ { 11, 1, 2, 5, 155, 40, },
};
RTW_DECL_TABLE_TXPWR_LMT(rtw8822c_txpwr_lmt_type0);
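For orientation, each brace-enclosed tuple in these rtw8822c power-limit tables fills one rtw_txpwr_lmt_cfg_pair entry, ordered as regulatory domain, band, bandwidth, rate section, channel, and limit; the diff above adds rows for two new regulatory-domain indices (10 and 11) and drops/retunes others. The field names and unit interpretation below are assumptions based on the rtw88 driver's table layout, not something shown in this hunk, so treat the sketch as illustrative only.

#include <linux/types.h>	/* u8, s8 */

/* Assumed layout of one table row (sketch, not the authoritative definition). */
struct rtw_txpwr_lmt_cfg_pair {
	u8 regd;	/* regulatory-domain index; 10 and 11 are the newly added domains */
	u8 band;	/* 0 = 2.4 GHz, 1 = 5 GHz (consistent with the channel numbers above) */
	u8 bw;		/* bandwidth index: 0 = 20 MHz, 1 = 40 MHz, 2 = 80 MHz (assumed) */
	u8 rs;		/* rate-section index, e.g. CCK/OFDM/HT/VHT (assumed) */
	u8 ch;		/* channel number */
	s8 txpwr_lmt;	/* limit, assumed in 0.25 dBm steps; 127 appears to mean "no limit"/unsupported */
};

/*
 * Example: the added row { 11, 1, 0, 1, 165, 58, } would read as regulatory
 * domain 11, 5 GHz band, 20 MHz bandwidth, rate section 1, channel 165,
 * limit 58 (14.5 dBm if the 0.25 dBm unit assumption holds).
 */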
@@ -41783,9 +42173,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 1, 72, },
{ 4, 0, 0, 0, 1, 76, },
{ 5, 0, 0, 0, 1, 56, },
- { 6, 0, 0, 0, 1, 72, },
- { 7, 0, 0, 0, 1, 60, },
- { 8, 0, 0, 0, 1, 72, },
{ 9, 0, 0, 0, 1, 60, },
{ 0, 0, 0, 0, 2, 72, },
{ 2, 0, 0, 0, 2, 56, },
@@ -41793,9 +42180,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 2, 72, },
{ 4, 0, 0, 0, 2, 76, },
{ 5, 0, 0, 0, 2, 56, },
- { 6, 0, 0, 0, 2, 72, },
- { 7, 0, 0, 0, 2, 60, },
- { 8, 0, 0, 0, 2, 72, },
{ 9, 0, 0, 0, 2, 60, },
{ 0, 0, 0, 0, 3, 76, },
{ 2, 0, 0, 0, 3, 56, },
@@ -41803,9 +42187,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 3, 76, },
{ 4, 0, 0, 0, 3, 76, },
{ 5, 0, 0, 0, 3, 56, },
- { 6, 0, 0, 0, 3, 76, },
- { 7, 0, 0, 0, 3, 60, },
- { 8, 0, 0, 0, 3, 76, },
{ 9, 0, 0, 0, 3, 60, },
{ 0, 0, 0, 0, 4, 76, },
{ 2, 0, 0, 0, 4, 56, },
@@ -41813,9 +42194,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 4, 76, },
{ 4, 0, 0, 0, 4, 76, },
{ 5, 0, 0, 0, 4, 56, },
- { 6, 0, 0, 0, 4, 76, },
- { 7, 0, 0, 0, 4, 60, },
- { 8, 0, 0, 0, 4, 76, },
{ 9, 0, 0, 0, 4, 60, },
{ 0, 0, 0, 0, 5, 76, },
{ 2, 0, 0, 0, 5, 56, },
@@ -41823,9 +42201,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 5, 76, },
{ 4, 0, 0, 0, 5, 76, },
{ 5, 0, 0, 0, 5, 56, },
- { 6, 0, 0, 0, 5, 76, },
- { 7, 0, 0, 0, 5, 60, },
- { 8, 0, 0, 0, 5, 76, },
{ 9, 0, 0, 0, 5, 60, },
{ 0, 0, 0, 0, 6, 76, },
{ 2, 0, 0, 0, 6, 56, },
@@ -41833,9 +42208,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 6, 76, },
{ 4, 0, 0, 0, 6, 76, },
{ 5, 0, 0, 0, 6, 56, },
- { 6, 0, 0, 0, 6, 76, },
- { 7, 0, 0, 0, 6, 60, },
- { 8, 0, 0, 0, 6, 76, },
{ 9, 0, 0, 0, 6, 60, },
{ 0, 0, 0, 0, 7, 76, },
{ 2, 0, 0, 0, 7, 56, },
@@ -41843,9 +42215,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 7, 76, },
{ 4, 0, 0, 0, 7, 76, },
{ 5, 0, 0, 0, 7, 56, },
- { 6, 0, 0, 0, 7, 76, },
- { 7, 0, 0, 0, 7, 60, },
- { 8, 0, 0, 0, 7, 76, },
{ 9, 0, 0, 0, 7, 60, },
{ 0, 0, 0, 0, 8, 76, },
{ 2, 0, 0, 0, 8, 56, },
@@ -41853,9 +42222,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 8, 76, },
{ 4, 0, 0, 0, 8, 76, },
{ 5, 0, 0, 0, 8, 56, },
- { 6, 0, 0, 0, 8, 76, },
- { 7, 0, 0, 0, 8, 60, },
- { 8, 0, 0, 0, 8, 76, },
{ 9, 0, 0, 0, 8, 60, },
{ 0, 0, 0, 0, 9, 76, },
{ 2, 0, 0, 0, 9, 56, },
@@ -41863,9 +42229,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 9, 76, },
{ 4, 0, 0, 0, 9, 76, },
{ 5, 0, 0, 0, 9, 56, },
- { 6, 0, 0, 0, 9, 76, },
- { 7, 0, 0, 0, 9, 60, },
- { 8, 0, 0, 0, 9, 76, },
{ 9, 0, 0, 0, 9, 60, },
{ 0, 0, 0, 0, 10, 72, },
{ 2, 0, 0, 0, 10, 56, },
@@ -41873,9 +42236,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 10, 72, },
{ 4, 0, 0, 0, 10, 76, },
{ 5, 0, 0, 0, 10, 56, },
- { 6, 0, 0, 0, 10, 72, },
- { 7, 0, 0, 0, 10, 60, },
- { 8, 0, 0, 0, 10, 72, },
{ 9, 0, 0, 0, 10, 60, },
{ 0, 0, 0, 0, 11, 72, },
{ 2, 0, 0, 0, 11, 56, },
@@ -41883,29 +42243,20 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 11, 72, },
{ 4, 0, 0, 0, 11, 76, },
{ 5, 0, 0, 0, 11, 56, },
- { 6, 0, 0, 0, 11, 72, },
- { 7, 0, 0, 0, 11, 60, },
- { 8, 0, 0, 0, 11, 72, },
{ 9, 0, 0, 0, 11, 60, },
- { 0, 0, 0, 0, 12, 44, },
+ { 0, 0, 0, 0, 12, 52, },
{ 2, 0, 0, 0, 12, 56, },
{ 1, 0, 0, 0, 12, 72, },
{ 3, 0, 0, 0, 12, 52, },
{ 4, 0, 0, 0, 12, 76, },
{ 5, 0, 0, 0, 12, 56, },
- { 6, 0, 0, 0, 12, 52, },
- { 7, 0, 0, 0, 12, 60, },
- { 8, 0, 0, 0, 12, 52, },
{ 9, 0, 0, 0, 12, 60, },
- { 0, 0, 0, 0, 13, 40, },
+ { 0, 0, 0, 0, 13, 48, },
{ 2, 0, 0, 0, 13, 56, },
{ 1, 0, 0, 0, 13, 72, },
{ 3, 0, 0, 0, 13, 48, },
{ 4, 0, 0, 0, 13, 76, },
{ 5, 0, 0, 0, 13, 56, },
- { 6, 0, 0, 0, 13, 48, },
- { 7, 0, 0, 0, 13, 60, },
- { 8, 0, 0, 0, 13, 48, },
{ 9, 0, 0, 0, 13, 60, },
{ 0, 0, 0, 0, 14, 127, },
{ 2, 0, 0, 0, 14, 127, },
@@ -41913,9 +42264,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 0, 14, 127, },
{ 4, 0, 0, 0, 14, 127, },
{ 5, 0, 0, 0, 14, 127, },
- { 6, 0, 0, 0, 14, 127, },
- { 7, 0, 0, 0, 14, 127, },
- { 8, 0, 0, 0, 14, 127, },
{ 9, 0, 0, 0, 14, 127, },
{ 0, 0, 0, 1, 1, 52, },
{ 2, 0, 0, 1, 1, 60, },
@@ -41923,9 +42271,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 1, 1, 52, },
{ 4, 0, 0, 1, 1, 76, },
{ 5, 0, 0, 1, 1, 60, },
- { 6, 0, 0, 1, 1, 52, },
- { 7, 0, 0, 1, 1, 60, },
- { 8, 0, 0, 1, 1, 52, },
{ 9, 0, 0, 1, 1, 60, },
{ 0, 0, 0, 1, 2, 60, },
{ 2, 0, 0, 1, 2, 60, },
@@ -41933,9 +42278,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 1, 2, 60, },
{ 4, 0, 0, 1, 2, 76, },
{ 5, 0, 0, 1, 2, 60, },
- { 6, 0, 0, 1, 2, 60, },
- { 7, 0, 0, 1, 2, 60, },
- { 8, 0, 0, 1, 2, 60, },
{ 9, 0, 0, 1, 2, 60, },
{ 0, 0, 0, 1, 3, 64, },
{ 2, 0, 0, 1, 3, 60, },
@@ -41943,9 +42285,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 1, 3, 64, },
{ 4, 0, 0, 1, 3, 76, },
{ 5, 0, 0, 1, 3, 60, },
- { 6, 0, 0, 1, 3, 64, },
- { 7, 0, 0, 1, 3, 60, },
- { 8, 0, 0, 1, 3, 64, },
{ 9, 0, 0, 1, 3, 60, },
{ 0, 0, 0, 1, 4, 68, },
{ 2, 0, 0, 1, 4, 60, },
@@ -41953,9 +42292,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 1, 4, 68, },
{ 4, 0, 0, 1, 4, 76, },
{ 5, 0, 0, 1, 4, 60, },
- { 6, 0, 0, 1, 4, 68, },
- { 7, 0, 0, 1, 4, 60, },
- { 8, 0, 0, 1, 4, 68, },
{ 9, 0, 0, 1, 4, 60, },
{ 0, 0, 0, 1, 5, 76, },
{ 2, 0, 0, 1, 5, 60, },
@@ -41963,9 +42299,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 1, 5, 76, },
{ 4, 0, 0, 1, 5, 76, },
{ 5, 0, 0, 1, 5, 60, },
- { 6, 0, 0, 1, 5, 76, },
- { 7, 0, 0, 1, 5, 60, },
- { 8, 0, 0, 1, 5, 76, },
{ 9, 0, 0, 1, 5, 60, },
{ 0, 0, 0, 1, 6, 76, },
{ 2, 0, 0, 1, 6, 60, },
@@ -41973,9 +42306,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 1, 6, 76, },
{ 4, 0, 0, 1, 6, 76, },
{ 5, 0, 0, 1, 6, 60, },
- { 6, 0, 0, 1, 6, 76, },
- { 7, 0, 0, 1, 6, 60, },
- { 8, 0, 0, 1, 6, 76, },
{ 9, 0, 0, 1, 6, 60, },
{ 0, 0, 0, 1, 7, 76, },
{ 2, 0, 0, 1, 7, 60, },
@@ -41983,9 +42313,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 1, 7, 76, },
{ 4, 0, 0, 1, 7, 76, },
{ 5, 0, 0, 1, 7, 60, },
- { 6, 0, 0, 1, 7, 76, },
- { 7, 0, 0, 1, 7, 60, },
- { 8, 0, 0, 1, 7, 76, },
{ 9, 0, 0, 1, 7, 60, },
{ 0, 0, 0, 1, 8, 68, },
{ 2, 0, 0, 1, 8, 60, },
@@ -41993,9 +42320,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 1, 8, 68, },
{ 4, 0, 0, 1, 8, 76, },
{ 5, 0, 0, 1, 8, 60, },
- { 6, 0, 0, 1, 8, 68, },
- { 7, 0, 0, 1, 8, 60, },
- { 8, 0, 0, 1, 8, 68, },
{ 9, 0, 0, 1, 8, 60, },
{ 0, 0, 0, 1, 9, 64, },
{ 2, 0, 0, 1, 9, 60, },
@@ -42003,9 +42327,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 1, 9, 64, },
{ 4, 0, 0, 1, 9, 76, },
{ 5, 0, 0, 1, 9, 60, },
- { 6, 0, 0, 1, 9, 64, },
- { 7, 0, 0, 1, 9, 60, },
- { 8, 0, 0, 1, 9, 64, },
{ 9, 0, 0, 1, 9, 60, },
{ 0, 0, 0, 1, 10, 60, },
{ 2, 0, 0, 1, 10, 60, },
@@ -42013,9 +42334,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 1, 10, 60, },
{ 4, 0, 0, 1, 10, 76, },
{ 5, 0, 0, 1, 10, 60, },
- { 6, 0, 0, 1, 10, 60, },
- { 7, 0, 0, 1, 10, 60, },
- { 8, 0, 0, 1, 10, 60, },
{ 9, 0, 0, 1, 10, 60, },
{ 0, 0, 0, 1, 11, 52, },
{ 2, 0, 0, 1, 11, 60, },
@@ -42023,39 +42341,27 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 1, 11, 52, },
{ 4, 0, 0, 1, 11, 76, },
{ 5, 0, 0, 1, 11, 60, },
- { 6, 0, 0, 1, 11, 52, },
- { 7, 0, 0, 1, 11, 60, },
- { 8, 0, 0, 1, 11, 52, },
- { 9, 0, 0, 1, 11, 60, },
- { 0, 0, 0, 1, 12, 32, },
+ { 9, 0, 0, 1, 11, 52, },
+ { 0, 0, 0, 1, 12, 40, },
{ 2, 0, 0, 1, 12, 60, },
{ 1, 0, 0, 1, 12, 76, },
{ 3, 0, 0, 1, 12, 40, },
{ 4, 0, 0, 1, 12, 76, },
{ 5, 0, 0, 1, 12, 60, },
- { 6, 0, 0, 1, 12, 40, },
- { 7, 0, 0, 1, 12, 60, },
- { 8, 0, 0, 1, 12, 40, },
- { 9, 0, 0, 1, 12, 60, },
- { 0, 0, 0, 1, 13, 20, },
+ { 9, 0, 0, 1, 12, 48, },
+ { 0, 0, 0, 1, 13, 28, },
{ 2, 0, 0, 1, 13, 60, },
{ 1, 0, 0, 1, 13, 76, },
{ 3, 0, 0, 1, 13, 28, },
{ 4, 0, 0, 1, 13, 74, },
{ 5, 0, 0, 1, 13, 60, },
- { 6, 0, 0, 1, 13, 28, },
- { 7, 0, 0, 1, 13, 60, },
- { 8, 0, 0, 1, 13, 28, },
- { 9, 0, 0, 1, 13, 60, },
+ { 9, 0, 0, 1, 13, 40, },
{ 0, 0, 0, 1, 14, 127, },
{ 2, 0, 0, 1, 14, 127, },
{ 1, 0, 0, 1, 14, 127, },
{ 3, 0, 0, 1, 14, 127, },
{ 4, 0, 0, 1, 14, 127, },
{ 5, 0, 0, 1, 14, 127, },
- { 6, 0, 0, 1, 14, 127, },
- { 7, 0, 0, 1, 14, 127, },
- { 8, 0, 0, 1, 14, 127, },
{ 9, 0, 0, 1, 14, 127, },
{ 0, 0, 0, 2, 1, 52, },
{ 2, 0, 0, 2, 1, 60, },
@@ -42063,9 +42369,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 2, 1, 52, },
{ 4, 0, 0, 2, 1, 76, },
{ 5, 0, 0, 2, 1, 60, },
- { 6, 0, 0, 2, 1, 52, },
- { 7, 0, 0, 2, 1, 60, },
- { 8, 0, 0, 2, 1, 52, },
{ 9, 0, 0, 2, 1, 60, },
{ 0, 0, 0, 2, 2, 60, },
{ 2, 0, 0, 2, 2, 60, },
@@ -42073,9 +42376,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 2, 2, 60, },
{ 4, 0, 0, 2, 2, 76, },
{ 5, 0, 0, 2, 2, 60, },
- { 6, 0, 0, 2, 2, 60, },
- { 7, 0, 0, 2, 2, 60, },
- { 8, 0, 0, 2, 2, 60, },
{ 9, 0, 0, 2, 2, 60, },
{ 0, 0, 0, 2, 3, 64, },
{ 2, 0, 0, 2, 3, 60, },
@@ -42083,9 +42383,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 2, 3, 64, },
{ 4, 0, 0, 2, 3, 76, },
{ 5, 0, 0, 2, 3, 60, },
- { 6, 0, 0, 2, 3, 64, },
- { 7, 0, 0, 2, 3, 60, },
- { 8, 0, 0, 2, 3, 64, },
{ 9, 0, 0, 2, 3, 60, },
{ 0, 0, 0, 2, 4, 68, },
{ 2, 0, 0, 2, 4, 60, },
@@ -42093,9 +42390,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 2, 4, 68, },
{ 4, 0, 0, 2, 4, 76, },
{ 5, 0, 0, 2, 4, 60, },
- { 6, 0, 0, 2, 4, 68, },
- { 7, 0, 0, 2, 4, 60, },
- { 8, 0, 0, 2, 4, 68, },
{ 9, 0, 0, 2, 4, 60, },
{ 0, 0, 0, 2, 5, 76, },
{ 2, 0, 0, 2, 5, 60, },
@@ -42103,9 +42397,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 2, 5, 76, },
{ 4, 0, 0, 2, 5, 76, },
{ 5, 0, 0, 2, 5, 60, },
- { 6, 0, 0, 2, 5, 76, },
- { 7, 0, 0, 2, 5, 60, },
- { 8, 0, 0, 2, 5, 76, },
{ 9, 0, 0, 2, 5, 60, },
{ 0, 0, 0, 2, 6, 76, },
{ 2, 0, 0, 2, 6, 60, },
@@ -42113,9 +42404,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 2, 6, 76, },
{ 4, 0, 0, 2, 6, 76, },
{ 5, 0, 0, 2, 6, 60, },
- { 6, 0, 0, 2, 6, 76, },
- { 7, 0, 0, 2, 6, 60, },
- { 8, 0, 0, 2, 6, 76, },
{ 9, 0, 0, 2, 6, 60, },
{ 0, 0, 0, 2, 7, 76, },
{ 2, 0, 0, 2, 7, 60, },
@@ -42123,9 +42411,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 2, 7, 76, },
{ 4, 0, 0, 2, 7, 76, },
{ 5, 0, 0, 2, 7, 60, },
- { 6, 0, 0, 2, 7, 76, },
- { 7, 0, 0, 2, 7, 60, },
- { 8, 0, 0, 2, 7, 76, },
{ 9, 0, 0, 2, 7, 60, },
{ 0, 0, 0, 2, 8, 68, },
{ 2, 0, 0, 2, 8, 60, },
@@ -42133,9 +42418,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 2, 8, 68, },
{ 4, 0, 0, 2, 8, 76, },
{ 5, 0, 0, 2, 8, 60, },
- { 6, 0, 0, 2, 8, 68, },
- { 7, 0, 0, 2, 8, 60, },
- { 8, 0, 0, 2, 8, 68, },
{ 9, 0, 0, 2, 8, 60, },
{ 0, 0, 0, 2, 9, 64, },
{ 2, 0, 0, 2, 9, 60, },
@@ -42143,9 +42425,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 2, 9, 64, },
{ 4, 0, 0, 2, 9, 76, },
{ 5, 0, 0, 2, 9, 60, },
- { 6, 0, 0, 2, 9, 64, },
- { 7, 0, 0, 2, 9, 60, },
- { 8, 0, 0, 2, 9, 64, },
{ 9, 0, 0, 2, 9, 60, },
{ 0, 0, 0, 2, 10, 60, },
{ 2, 0, 0, 2, 10, 60, },
@@ -42153,9 +42432,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 2, 10, 60, },
{ 4, 0, 0, 2, 10, 76, },
{ 5, 0, 0, 2, 10, 60, },
- { 6, 0, 0, 2, 10, 60, },
- { 7, 0, 0, 2, 10, 60, },
- { 8, 0, 0, 2, 10, 60, },
{ 9, 0, 0, 2, 10, 60, },
{ 0, 0, 0, 2, 11, 52, },
{ 2, 0, 0, 2, 11, 60, },
@@ -42163,39 +42439,27 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 2, 11, 52, },
{ 4, 0, 0, 2, 11, 76, },
{ 5, 0, 0, 2, 11, 60, },
- { 6, 0, 0, 2, 11, 52, },
- { 7, 0, 0, 2, 11, 60, },
- { 8, 0, 0, 2, 11, 52, },
- { 9, 0, 0, 2, 11, 60, },
- { 0, 0, 0, 2, 12, 32, },
+ { 9, 0, 0, 2, 11, 52, },
+ { 0, 0, 0, 2, 12, 40, },
{ 2, 0, 0, 2, 12, 60, },
{ 1, 0, 0, 2, 12, 76, },
{ 3, 0, 0, 2, 12, 40, },
{ 4, 0, 0, 2, 12, 76, },
{ 5, 0, 0, 2, 12, 60, },
- { 6, 0, 0, 2, 12, 40, },
- { 7, 0, 0, 2, 12, 60, },
- { 8, 0, 0, 2, 12, 40, },
- { 9, 0, 0, 2, 12, 60, },
- { 0, 0, 0, 2, 13, 20, },
+ { 9, 0, 0, 2, 12, 48, },
+ { 0, 0, 0, 2, 13, 28, },
{ 2, 0, 0, 2, 13, 60, },
{ 1, 0, 0, 2, 13, 76, },
{ 3, 0, 0, 2, 13, 28, },
{ 4, 0, 0, 2, 13, 74, },
{ 5, 0, 0, 2, 13, 60, },
- { 6, 0, 0, 2, 13, 28, },
- { 7, 0, 0, 2, 13, 60, },
- { 8, 0, 0, 2, 13, 28, },
- { 9, 0, 0, 2, 13, 60, },
+ { 9, 0, 0, 2, 13, 40, },
{ 0, 0, 0, 2, 14, 127, },
{ 2, 0, 0, 2, 14, 127, },
{ 1, 0, 0, 2, 14, 127, },
{ 3, 0, 0, 2, 14, 127, },
{ 4, 0, 0, 2, 14, 127, },
{ 5, 0, 0, 2, 14, 127, },
- { 6, 0, 0, 2, 14, 127, },
- { 7, 0, 0, 2, 14, 127, },
- { 8, 0, 0, 2, 14, 127, },
{ 9, 0, 0, 2, 14, 127, },
{ 0, 0, 0, 3, 1, 52, },
{ 2, 0, 0, 3, 1, 36, },
@@ -42203,9 +42467,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 3, 1, 52, },
{ 4, 0, 0, 3, 1, 72, },
{ 5, 0, 0, 3, 1, 36, },
- { 6, 0, 0, 3, 1, 52, },
- { 7, 0, 0, 3, 1, 36, },
- { 8, 0, 0, 3, 1, 52, },
{ 9, 0, 0, 3, 1, 36, },
{ 0, 0, 0, 3, 2, 60, },
{ 2, 0, 0, 3, 2, 36, },
@@ -42213,9 +42474,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 3, 2, 60, },
{ 4, 0, 0, 3, 2, 72, },
{ 5, 0, 0, 3, 2, 36, },
- { 6, 0, 0, 3, 2, 60, },
- { 7, 0, 0, 3, 2, 36, },
- { 8, 0, 0, 3, 2, 60, },
{ 9, 0, 0, 3, 2, 36, },
{ 0, 0, 0, 3, 3, 64, },
{ 2, 0, 0, 3, 3, 36, },
@@ -42223,9 +42481,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 3, 3, 64, },
{ 4, 0, 0, 3, 3, 72, },
{ 5, 0, 0, 3, 3, 36, },
- { 6, 0, 0, 3, 3, 64, },
- { 7, 0, 0, 3, 3, 36, },
- { 8, 0, 0, 3, 3, 64, },
{ 9, 0, 0, 3, 3, 36, },
{ 0, 0, 0, 3, 4, 68, },
{ 2, 0, 0, 3, 4, 36, },
@@ -42233,9 +42488,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 3, 4, 68, },
{ 4, 0, 0, 3, 4, 72, },
{ 5, 0, 0, 3, 4, 36, },
- { 6, 0, 0, 3, 4, 68, },
- { 7, 0, 0, 3, 4, 36, },
- { 8, 0, 0, 3, 4, 68, },
{ 9, 0, 0, 3, 4, 36, },
{ 0, 0, 0, 3, 5, 76, },
{ 2, 0, 0, 3, 5, 36, },
@@ -42243,9 +42495,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 3, 5, 76, },
{ 4, 0, 0, 3, 5, 72, },
{ 5, 0, 0, 3, 5, 36, },
- { 6, 0, 0, 3, 5, 76, },
- { 7, 0, 0, 3, 5, 36, },
- { 8, 0, 0, 3, 5, 76, },
{ 9, 0, 0, 3, 5, 36, },
{ 0, 0, 0, 3, 6, 76, },
{ 2, 0, 0, 3, 6, 36, },
@@ -42253,9 +42502,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 3, 6, 76, },
{ 4, 0, 0, 3, 6, 72, },
{ 5, 0, 0, 3, 6, 36, },
- { 6, 0, 0, 3, 6, 76, },
- { 7, 0, 0, 3, 6, 36, },
- { 8, 0, 0, 3, 6, 76, },
{ 9, 0, 0, 3, 6, 36, },
{ 0, 0, 0, 3, 7, 76, },
{ 2, 0, 0, 3, 7, 36, },
@@ -42263,9 +42509,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 3, 7, 76, },
{ 4, 0, 0, 3, 7, 72, },
{ 5, 0, 0, 3, 7, 36, },
- { 6, 0, 0, 3, 7, 76, },
- { 7, 0, 0, 3, 7, 36, },
- { 8, 0, 0, 3, 7, 76, },
{ 9, 0, 0, 3, 7, 36, },
{ 0, 0, 0, 3, 8, 68, },
{ 2, 0, 0, 3, 8, 36, },
@@ -42273,9 +42516,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 3, 8, 68, },
{ 4, 0, 0, 3, 8, 72, },
{ 5, 0, 0, 3, 8, 36, },
- { 6, 0, 0, 3, 8, 68, },
- { 7, 0, 0, 3, 8, 36, },
- { 8, 0, 0, 3, 8, 68, },
{ 9, 0, 0, 3, 8, 36, },
{ 0, 0, 0, 3, 9, 64, },
{ 2, 0, 0, 3, 9, 36, },
@@ -42283,9 +42523,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 3, 9, 64, },
{ 4, 0, 0, 3, 9, 72, },
{ 5, 0, 0, 3, 9, 36, },
- { 6, 0, 0, 3, 9, 64, },
- { 7, 0, 0, 3, 9, 36, },
- { 8, 0, 0, 3, 9, 64, },
{ 9, 0, 0, 3, 9, 36, },
{ 0, 0, 0, 3, 10, 60, },
{ 2, 0, 0, 3, 10, 36, },
@@ -42293,9 +42530,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 3, 10, 60, },
{ 4, 0, 0, 3, 10, 72, },
{ 5, 0, 0, 3, 10, 36, },
- { 6, 0, 0, 3, 10, 60, },
- { 7, 0, 0, 3, 10, 36, },
- { 8, 0, 0, 3, 10, 60, },
{ 9, 0, 0, 3, 10, 36, },
{ 0, 0, 0, 3, 11, 52, },
{ 2, 0, 0, 3, 11, 36, },
@@ -42303,39 +42537,27 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 0, 3, 11, 52, },
{ 4, 0, 0, 3, 11, 72, },
{ 5, 0, 0, 3, 11, 36, },
- { 6, 0, 0, 3, 11, 52, },
- { 7, 0, 0, 3, 11, 36, },
- { 8, 0, 0, 3, 11, 52, },
- { 9, 0, 0, 3, 11, 36, },
- { 0, 0, 0, 3, 12, 32, },
+ { 9, 0, 0, 3, 11, 40, },
+ { 0, 0, 0, 3, 12, 40, },
{ 2, 0, 0, 3, 12, 36, },
{ 1, 0, 0, 3, 12, 66, },
{ 3, 0, 0, 3, 12, 40, },
{ 4, 0, 0, 3, 12, 72, },
{ 5, 0, 0, 3, 12, 36, },
- { 6, 0, 0, 3, 12, 40, },
- { 7, 0, 0, 3, 12, 36, },
- { 8, 0, 0, 3, 12, 40, },
{ 9, 0, 0, 3, 12, 36, },
- { 0, 0, 0, 3, 13, 20, },
+ { 0, 0, 0, 3, 13, 28, },
{ 2, 0, 0, 3, 13, 36, },
{ 1, 0, 0, 3, 13, 66, },
{ 3, 0, 0, 3, 13, 28, },
{ 4, 0, 0, 3, 13, 68, },
{ 5, 0, 0, 3, 13, 36, },
- { 6, 0, 0, 3, 13, 28, },
- { 7, 0, 0, 3, 13, 36, },
- { 8, 0, 0, 3, 13, 28, },
- { 9, 0, 0, 3, 13, 36, },
+ { 9, 0, 0, 3, 13, 28, },
{ 0, 0, 0, 3, 14, 127, },
{ 2, 0, 0, 3, 14, 127, },
{ 1, 0, 0, 3, 14, 127, },
{ 3, 0, 0, 3, 14, 127, },
{ 4, 0, 0, 3, 14, 127, },
{ 5, 0, 0, 3, 14, 127, },
- { 6, 0, 0, 3, 14, 127, },
- { 7, 0, 0, 3, 14, 127, },
- { 8, 0, 0, 3, 14, 127, },
{ 9, 0, 0, 3, 14, 127, },
{ 0, 0, 1, 2, 1, 127, },
{ 2, 0, 1, 2, 1, 127, },
@@ -42343,29 +42565,20 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 2, 1, 127, },
{ 4, 0, 1, 2, 1, 127, },
{ 5, 0, 1, 2, 1, 127, },
- { 6, 0, 1, 2, 1, 127, },
- { 7, 0, 1, 2, 1, 127, },
- { 8, 0, 1, 2, 1, 127, },
- { 9, 0, 1, 2, 1, 127, },
+ { 9, 0, 1, 2, 1, 60, },
{ 0, 0, 1, 2, 2, 127, },
{ 2, 0, 1, 2, 2, 127, },
{ 1, 0, 1, 2, 2, 127, },
{ 3, 0, 1, 2, 2, 127, },
{ 4, 0, 1, 2, 2, 127, },
{ 5, 0, 1, 2, 2, 127, },
- { 6, 0, 1, 2, 2, 127, },
- { 7, 0, 1, 2, 2, 127, },
- { 8, 0, 1, 2, 2, 127, },
- { 9, 0, 1, 2, 2, 127, },
+ { 9, 0, 1, 2, 2, 60, },
{ 0, 0, 1, 2, 3, 52, },
{ 2, 0, 1, 2, 3, 60, },
{ 1, 0, 1, 2, 3, 72, },
{ 3, 0, 1, 2, 3, 52, },
{ 4, 0, 1, 2, 3, 72, },
{ 5, 0, 1, 2, 3, 60, },
- { 6, 0, 1, 2, 3, 52, },
- { 7, 0, 1, 2, 3, 60, },
- { 8, 0, 1, 2, 3, 52, },
{ 9, 0, 1, 2, 3, 60, },
{ 0, 0, 1, 2, 4, 52, },
{ 2, 0, 1, 2, 4, 60, },
@@ -42373,9 +42586,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 2, 4, 52, },
{ 4, 0, 1, 2, 4, 72, },
{ 5, 0, 1, 2, 4, 60, },
- { 6, 0, 1, 2, 4, 52, },
- { 7, 0, 1, 2, 4, 60, },
- { 8, 0, 1, 2, 4, 52, },
{ 9, 0, 1, 2, 4, 60, },
{ 0, 0, 1, 2, 5, 60, },
{ 2, 0, 1, 2, 5, 60, },
@@ -42383,9 +42593,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 2, 5, 60, },
{ 4, 0, 1, 2, 5, 72, },
{ 5, 0, 1, 2, 5, 60, },
- { 6, 0, 1, 2, 5, 60, },
- { 7, 0, 1, 2, 5, 60, },
- { 8, 0, 1, 2, 5, 60, },
{ 9, 0, 1, 2, 5, 60, },
{ 0, 0, 1, 2, 6, 64, },
{ 2, 0, 1, 2, 6, 60, },
@@ -42393,9 +42600,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 2, 6, 64, },
{ 4, 0, 1, 2, 6, 72, },
{ 5, 0, 1, 2, 6, 60, },
- { 6, 0, 1, 2, 6, 64, },
- { 7, 0, 1, 2, 6, 60, },
- { 8, 0, 1, 2, 6, 64, },
{ 9, 0, 1, 2, 6, 60, },
{ 0, 0, 1, 2, 7, 60, },
{ 2, 0, 1, 2, 7, 60, },
@@ -42403,9 +42607,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 2, 7, 60, },
{ 4, 0, 1, 2, 7, 72, },
{ 5, 0, 1, 2, 7, 60, },
- { 6, 0, 1, 2, 7, 60, },
- { 7, 0, 1, 2, 7, 60, },
- { 8, 0, 1, 2, 7, 60, },
{ 9, 0, 1, 2, 7, 60, },
{ 0, 0, 1, 2, 8, 52, },
{ 2, 0, 1, 2, 8, 60, },
@@ -42413,9 +42614,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 2, 8, 52, },
{ 4, 0, 1, 2, 8, 72, },
{ 5, 0, 1, 2, 8, 60, },
- { 6, 0, 1, 2, 8, 52, },
- { 7, 0, 1, 2, 8, 60, },
- { 8, 0, 1, 2, 8, 52, },
{ 9, 0, 1, 2, 8, 60, },
{ 0, 0, 1, 2, 9, 52, },
{ 2, 0, 1, 2, 9, 60, },
@@ -42423,9 +42621,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 2, 9, 52, },
{ 4, 0, 1, 2, 9, 72, },
{ 5, 0, 1, 2, 9, 60, },
- { 6, 0, 1, 2, 9, 52, },
- { 7, 0, 1, 2, 9, 60, },
- { 8, 0, 1, 2, 9, 52, },
{ 9, 0, 1, 2, 9, 60, },
{ 0, 0, 1, 2, 10, 40, },
{ 2, 0, 1, 2, 10, 60, },
@@ -42433,9 +42628,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 2, 10, 40, },
{ 4, 0, 1, 2, 10, 72, },
{ 5, 0, 1, 2, 10, 60, },
- { 6, 0, 1, 2, 10, 40, },
- { 7, 0, 1, 2, 10, 60, },
- { 8, 0, 1, 2, 10, 40, },
{ 9, 0, 1, 2, 10, 60, },
{ 0, 0, 1, 2, 11, 28, },
{ 2, 0, 1, 2, 11, 60, },
@@ -42443,39 +42635,27 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 2, 11, 28, },
{ 4, 0, 1, 2, 11, 70, },
{ 5, 0, 1, 2, 11, 60, },
- { 6, 0, 1, 2, 11, 28, },
- { 7, 0, 1, 2, 11, 60, },
- { 8, 0, 1, 2, 11, 28, },
- { 9, 0, 1, 2, 11, 60, },
+ { 9, 0, 1, 2, 11, 44, },
{ 0, 0, 1, 2, 12, 127, },
{ 2, 0, 1, 2, 12, 127, },
{ 1, 0, 1, 2, 12, 127, },
{ 3, 0, 1, 2, 12, 127, },
{ 4, 0, 1, 2, 12, 127, },
{ 5, 0, 1, 2, 12, 127, },
- { 6, 0, 1, 2, 12, 127, },
- { 7, 0, 1, 2, 12, 127, },
- { 8, 0, 1, 2, 12, 127, },
- { 9, 0, 1, 2, 12, 127, },
+ { 9, 0, 1, 2, 12, 44, },
{ 0, 0, 1, 2, 13, 127, },
{ 2, 0, 1, 2, 13, 127, },
{ 1, 0, 1, 2, 13, 127, },
{ 3, 0, 1, 2, 13, 127, },
{ 4, 0, 1, 2, 13, 127, },
{ 5, 0, 1, 2, 13, 127, },
- { 6, 0, 1, 2, 13, 127, },
- { 7, 0, 1, 2, 13, 127, },
- { 8, 0, 1, 2, 13, 127, },
- { 9, 0, 1, 2, 13, 127, },
+ { 9, 0, 1, 2, 13, 20, },
{ 0, 0, 1, 2, 14, 127, },
{ 2, 0, 1, 2, 14, 127, },
{ 1, 0, 1, 2, 14, 127, },
{ 3, 0, 1, 2, 14, 127, },
{ 4, 0, 1, 2, 14, 127, },
{ 5, 0, 1, 2, 14, 127, },
- { 6, 0, 1, 2, 14, 127, },
- { 7, 0, 1, 2, 14, 127, },
- { 8, 0, 1, 2, 14, 127, },
{ 9, 0, 1, 2, 14, 127, },
{ 0, 0, 1, 3, 1, 127, },
{ 2, 0, 1, 3, 1, 127, },
@@ -42483,29 +42663,20 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 3, 1, 127, },
{ 4, 0, 1, 3, 1, 127, },
{ 5, 0, 1, 3, 1, 127, },
- { 6, 0, 1, 3, 1, 127, },
- { 7, 0, 1, 3, 1, 127, },
- { 8, 0, 1, 3, 1, 127, },
- { 9, 0, 1, 3, 1, 127, },
+ { 9, 0, 1, 3, 1, 36, },
{ 0, 0, 1, 3, 2, 127, },
{ 2, 0, 1, 3, 2, 127, },
{ 1, 0, 1, 3, 2, 127, },
{ 3, 0, 1, 3, 2, 127, },
{ 4, 0, 1, 3, 2, 127, },
{ 5, 0, 1, 3, 2, 127, },
- { 6, 0, 1, 3, 2, 127, },
- { 7, 0, 1, 3, 2, 127, },
- { 8, 0, 1, 3, 2, 127, },
- { 9, 0, 1, 3, 2, 127, },
+ { 9, 0, 1, 3, 2, 36, },
{ 0, 0, 1, 3, 3, 48, },
{ 2, 0, 1, 3, 3, 36, },
{ 1, 0, 1, 3, 3, 66, },
{ 3, 0, 1, 3, 3, 48, },
{ 4, 0, 1, 3, 3, 68, },
{ 5, 0, 1, 3, 3, 36, },
- { 6, 0, 1, 3, 3, 48, },
- { 7, 0, 1, 3, 3, 36, },
- { 8, 0, 1, 3, 3, 48, },
{ 9, 0, 1, 3, 3, 36, },
{ 0, 0, 1, 3, 4, 48, },
{ 2, 0, 1, 3, 4, 36, },
@@ -42513,9 +42684,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 3, 4, 48, },
{ 4, 0, 1, 3, 4, 70, },
{ 5, 0, 1, 3, 4, 36, },
- { 6, 0, 1, 3, 4, 48, },
- { 7, 0, 1, 3, 4, 36, },
- { 8, 0, 1, 3, 4, 48, },
{ 9, 0, 1, 3, 4, 36, },
{ 0, 0, 1, 3, 5, 60, },
{ 2, 0, 1, 3, 5, 36, },
@@ -42523,9 +42691,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 3, 5, 60, },
{ 4, 0, 1, 3, 5, 70, },
{ 5, 0, 1, 3, 5, 36, },
- { 6, 0, 1, 3, 5, 60, },
- { 7, 0, 1, 3, 5, 36, },
- { 8, 0, 1, 3, 5, 60, },
{ 9, 0, 1, 3, 5, 36, },
{ 0, 0, 1, 3, 6, 64, },
{ 2, 0, 1, 3, 6, 36, },
@@ -42533,9 +42698,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 3, 6, 64, },
{ 4, 0, 1, 3, 6, 70, },
{ 5, 0, 1, 3, 6, 36, },
- { 6, 0, 1, 3, 6, 64, },
- { 7, 0, 1, 3, 6, 36, },
- { 8, 0, 1, 3, 6, 64, },
{ 9, 0, 1, 3, 6, 36, },
{ 0, 0, 1, 3, 7, 60, },
{ 2, 0, 1, 3, 7, 36, },
@@ -42543,9 +42705,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 3, 7, 60, },
{ 4, 0, 1, 3, 7, 70, },
{ 5, 0, 1, 3, 7, 36, },
- { 6, 0, 1, 3, 7, 60, },
- { 7, 0, 1, 3, 7, 36, },
- { 8, 0, 1, 3, 7, 60, },
{ 9, 0, 1, 3, 7, 36, },
{ 0, 0, 1, 3, 8, 52, },
{ 2, 0, 1, 3, 8, 36, },
@@ -42553,9 +42712,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 3, 8, 52, },
{ 4, 0, 1, 3, 8, 70, },
{ 5, 0, 1, 3, 8, 36, },
- { 6, 0, 1, 3, 8, 52, },
- { 7, 0, 1, 3, 8, 36, },
- { 8, 0, 1, 3, 8, 52, },
{ 9, 0, 1, 3, 8, 36, },
{ 0, 0, 1, 3, 9, 52, },
{ 2, 0, 1, 3, 9, 36, },
@@ -42563,9 +42719,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 3, 9, 52, },
{ 4, 0, 1, 3, 9, 70, },
{ 5, 0, 1, 3, 9, 36, },
- { 6, 0, 1, 3, 9, 52, },
- { 7, 0, 1, 3, 9, 36, },
- { 8, 0, 1, 3, 9, 52, },
{ 9, 0, 1, 3, 9, 36, },
{ 0, 0, 1, 3, 10, 40, },
{ 2, 0, 1, 3, 10, 36, },
@@ -42573,9 +42726,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 3, 10, 40, },
{ 4, 0, 1, 3, 10, 70, },
{ 5, 0, 1, 3, 10, 36, },
- { 6, 0, 1, 3, 10, 40, },
- { 7, 0, 1, 3, 10, 36, },
- { 8, 0, 1, 3, 10, 40, },
{ 9, 0, 1, 3, 10, 36, },
{ 0, 0, 1, 3, 11, 26, },
{ 2, 0, 1, 3, 11, 36, },
@@ -42583,39 +42733,27 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 0, 1, 3, 11, 26, },
{ 4, 0, 1, 3, 11, 66, },
{ 5, 0, 1, 3, 11, 36, },
- { 6, 0, 1, 3, 11, 26, },
- { 7, 0, 1, 3, 11, 36, },
- { 8, 0, 1, 3, 11, 26, },
- { 9, 0, 1, 3, 11, 36, },
+ { 9, 0, 1, 3, 11, 32, },
{ 0, 0, 1, 3, 12, 127, },
{ 2, 0, 1, 3, 12, 127, },
{ 1, 0, 1, 3, 12, 127, },
{ 3, 0, 1, 3, 12, 127, },
{ 4, 0, 1, 3, 12, 127, },
{ 5, 0, 1, 3, 12, 127, },
- { 6, 0, 1, 3, 12, 127, },
- { 7, 0, 1, 3, 12, 127, },
- { 8, 0, 1, 3, 12, 127, },
- { 9, 0, 1, 3, 12, 127, },
+ { 9, 0, 1, 3, 12, 32, },
{ 0, 0, 1, 3, 13, 127, },
{ 2, 0, 1, 3, 13, 127, },
{ 1, 0, 1, 3, 13, 127, },
{ 3, 0, 1, 3, 13, 127, },
{ 4, 0, 1, 3, 13, 127, },
{ 5, 0, 1, 3, 13, 127, },
- { 6, 0, 1, 3, 13, 127, },
- { 7, 0, 1, 3, 13, 127, },
- { 8, 0, 1, 3, 13, 127, },
- { 9, 0, 1, 3, 13, 127, },
+ { 9, 0, 1, 3, 13, 8, },
{ 0, 0, 1, 3, 14, 127, },
{ 2, 0, 1, 3, 14, 127, },
{ 1, 0, 1, 3, 14, 127, },
{ 3, 0, 1, 3, 14, 127, },
{ 4, 0, 1, 3, 14, 127, },
{ 5, 0, 1, 3, 14, 127, },
- { 6, 0, 1, 3, 14, 127, },
- { 7, 0, 1, 3, 14, 127, },
- { 8, 0, 1, 3, 14, 127, },
{ 9, 0, 1, 3, 14, 127, },
{ 0, 1, 0, 1, 36, 74, },
{ 2, 1, 0, 1, 36, 58, },
@@ -42623,89 +42761,62 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 36, 62, },
{ 4, 1, 0, 1, 36, 74, },
{ 5, 1, 0, 1, 36, 58, },
- { 6, 1, 0, 1, 36, 64, },
- { 7, 1, 0, 1, 36, 54, },
- { 8, 1, 0, 1, 36, 62, },
- { 9, 1, 0, 1, 36, 62, },
+ { 9, 1, 0, 1, 36, 64, },
{ 0, 1, 0, 1, 40, 76, },
{ 2, 1, 0, 1, 40, 58, },
{ 1, 1, 0, 1, 40, 62, },
{ 3, 1, 0, 1, 40, 62, },
{ 4, 1, 0, 1, 40, 76, },
{ 5, 1, 0, 1, 40, 58, },
- { 6, 1, 0, 1, 40, 64, },
- { 7, 1, 0, 1, 40, 54, },
- { 8, 1, 0, 1, 40, 62, },
- { 9, 1, 0, 1, 40, 62, },
+ { 9, 1, 0, 1, 40, 64, },
{ 0, 1, 0, 1, 44, 76, },
{ 2, 1, 0, 1, 44, 58, },
{ 1, 1, 0, 1, 44, 62, },
{ 3, 1, 0, 1, 44, 62, },
{ 4, 1, 0, 1, 44, 76, },
{ 5, 1, 0, 1, 44, 58, },
- { 6, 1, 0, 1, 44, 64, },
- { 7, 1, 0, 1, 44, 54, },
- { 8, 1, 0, 1, 44, 62, },
- { 9, 1, 0, 1, 44, 62, },
+ { 9, 1, 0, 1, 44, 64, },
{ 0, 1, 0, 1, 48, 76, },
{ 2, 1, 0, 1, 48, 58, },
{ 1, 1, 0, 1, 48, 62, },
{ 3, 1, 0, 1, 48, 62, },
{ 4, 1, 0, 1, 48, 58, },
{ 5, 1, 0, 1, 48, 58, },
- { 6, 1, 0, 1, 48, 64, },
- { 7, 1, 0, 1, 48, 54, },
- { 8, 1, 0, 1, 48, 62, },
- { 9, 1, 0, 1, 48, 62, },
+ { 9, 1, 0, 1, 48, 64, },
{ 0, 1, 0, 1, 52, 76, },
{ 2, 1, 0, 1, 52, 58, },
{ 1, 1, 0, 1, 52, 62, },
{ 3, 1, 0, 1, 52, 64, },
{ 4, 1, 0, 1, 52, 76, },
{ 5, 1, 0, 1, 52, 58, },
- { 6, 1, 0, 1, 52, 76, },
- { 7, 1, 0, 1, 52, 54, },
- { 8, 1, 0, 1, 52, 76, },
- { 9, 1, 0, 1, 52, 62, },
+ { 9, 1, 0, 1, 52, 64, },
{ 0, 1, 0, 1, 56, 76, },
{ 2, 1, 0, 1, 56, 58, },
{ 1, 1, 0, 1, 56, 62, },
{ 3, 1, 0, 1, 56, 64, },
{ 4, 1, 0, 1, 56, 76, },
{ 5, 1, 0, 1, 56, 58, },
- { 6, 1, 0, 1, 56, 76, },
- { 7, 1, 0, 1, 56, 54, },
- { 8, 1, 0, 1, 56, 76, },
- { 9, 1, 0, 1, 56, 62, },
+ { 9, 1, 0, 1, 56, 64, },
{ 0, 1, 0, 1, 60, 76, },
{ 2, 1, 0, 1, 60, 58, },
{ 1, 1, 0, 1, 60, 62, },
{ 3, 1, 0, 1, 60, 64, },
{ 4, 1, 0, 1, 60, 76, },
{ 5, 1, 0, 1, 60, 58, },
- { 6, 1, 0, 1, 60, 76, },
- { 7, 1, 0, 1, 60, 54, },
- { 8, 1, 0, 1, 60, 76, },
- { 9, 1, 0, 1, 60, 62, },
+ { 9, 1, 0, 1, 60, 64, },
{ 0, 1, 0, 1, 64, 76, },
{ 2, 1, 0, 1, 64, 58, },
{ 1, 1, 0, 1, 64, 62, },
{ 3, 1, 0, 1, 64, 64, },
{ 4, 1, 0, 1, 64, 76, },
{ 5, 1, 0, 1, 64, 58, },
- { 6, 1, 0, 1, 64, 74, },
- { 7, 1, 0, 1, 64, 54, },
- { 8, 1, 0, 1, 64, 74, },
- { 9, 1, 0, 1, 64, 62, },
+ { 9, 1, 0, 1, 64, 64, },
{ 0, 1, 0, 1, 100, 68, },
{ 2, 1, 0, 1, 100, 58, },
{ 1, 1, 0, 1, 100, 76, },
{ 3, 1, 0, 1, 100, 68, },
{ 4, 1, 0, 1, 100, 76, },
{ 5, 1, 0, 1, 100, 58, },
- { 6, 1, 0, 1, 100, 72, },
- { 7, 1, 0, 1, 100, 54, },
- { 8, 1, 0, 1, 100, 72, },
{ 9, 1, 0, 1, 100, 127, },
{ 0, 1, 0, 1, 104, 76, },
{ 2, 1, 0, 1, 104, 58, },
@@ -42713,9 +42824,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 104, 76, },
{ 4, 1, 0, 1, 104, 76, },
{ 5, 1, 0, 1, 104, 58, },
- { 6, 1, 0, 1, 104, 76, },
- { 7, 1, 0, 1, 104, 54, },
- { 8, 1, 0, 1, 104, 76, },
{ 9, 1, 0, 1, 104, 127, },
{ 0, 1, 0, 1, 108, 76, },
{ 2, 1, 0, 1, 108, 58, },
@@ -42723,9 +42831,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 108, 76, },
{ 4, 1, 0, 1, 108, 76, },
{ 5, 1, 0, 1, 108, 58, },
- { 6, 1, 0, 1, 108, 76, },
- { 7, 1, 0, 1, 108, 54, },
- { 8, 1, 0, 1, 108, 76, },
{ 9, 1, 0, 1, 108, 127, },
{ 0, 1, 0, 1, 112, 76, },
{ 2, 1, 0, 1, 112, 58, },
@@ -42733,9 +42838,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 112, 76, },
{ 4, 1, 0, 1, 112, 76, },
{ 5, 1, 0, 1, 112, 58, },
- { 6, 1, 0, 1, 112, 76, },
- { 7, 1, 0, 1, 112, 54, },
- { 8, 1, 0, 1, 112, 76, },
{ 9, 1, 0, 1, 112, 127, },
{ 0, 1, 0, 1, 116, 76, },
{ 2, 1, 0, 1, 116, 58, },
@@ -42743,9 +42845,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 116, 76, },
{ 4, 1, 0, 1, 116, 76, },
{ 5, 1, 0, 1, 116, 58, },
- { 6, 1, 0, 1, 116, 76, },
- { 7, 1, 0, 1, 116, 54, },
- { 8, 1, 0, 1, 116, 76, },
{ 9, 1, 0, 1, 116, 127, },
{ 0, 1, 0, 1, 120, 76, },
{ 2, 1, 0, 1, 120, 58, },
@@ -42753,9 +42852,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 120, 127, },
{ 4, 1, 0, 1, 120, 76, },
{ 5, 1, 0, 1, 120, 127, },
- { 6, 1, 0, 1, 120, 76, },
- { 7, 1, 0, 1, 120, 54, },
- { 8, 1, 0, 1, 120, 76, },
{ 9, 1, 0, 1, 120, 127, },
{ 0, 1, 0, 1, 124, 76, },
{ 2, 1, 0, 1, 124, 58, },
@@ -42763,9 +42859,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 124, 127, },
{ 4, 1, 0, 1, 124, 76, },
{ 5, 1, 0, 1, 124, 127, },
- { 6, 1, 0, 1, 124, 76, },
- { 7, 1, 0, 1, 124, 54, },
- { 8, 1, 0, 1, 124, 76, },
{ 9, 1, 0, 1, 124, 127, },
{ 0, 1, 0, 1, 128, 76, },
{ 2, 1, 0, 1, 128, 58, },
@@ -42773,9 +42866,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 128, 127, },
{ 4, 1, 0, 1, 128, 76, },
{ 5, 1, 0, 1, 128, 127, },
- { 6, 1, 0, 1, 128, 76, },
- { 7, 1, 0, 1, 128, 54, },
- { 8, 1, 0, 1, 128, 76, },
{ 9, 1, 0, 1, 128, 127, },
{ 0, 1, 0, 1, 132, 76, },
{ 2, 1, 0, 1, 132, 58, },
@@ -42783,9 +42873,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 132, 76, },
{ 4, 1, 0, 1, 132, 76, },
{ 5, 1, 0, 1, 132, 58, },
- { 6, 1, 0, 1, 132, 76, },
- { 7, 1, 0, 1, 132, 54, },
- { 8, 1, 0, 1, 132, 76, },
{ 9, 1, 0, 1, 132, 127, },
{ 0, 1, 0, 1, 136, 76, },
{ 2, 1, 0, 1, 136, 58, },
@@ -42793,9 +42880,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 136, 76, },
{ 4, 1, 0, 1, 136, 76, },
{ 5, 1, 0, 1, 136, 58, },
- { 6, 1, 0, 1, 136, 76, },
- { 7, 1, 0, 1, 136, 54, },
- { 8, 1, 0, 1, 136, 76, },
{ 9, 1, 0, 1, 136, 127, },
{ 0, 1, 0, 1, 140, 74, },
{ 2, 1, 0, 1, 140, 58, },
@@ -42803,9 +42887,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 140, 74, },
{ 4, 1, 0, 1, 140, 76, },
{ 5, 1, 0, 1, 140, 58, },
- { 6, 1, 0, 1, 140, 72, },
- { 7, 1, 0, 1, 140, 54, },
- { 8, 1, 0, 1, 140, 72, },
{ 9, 1, 0, 1, 140, 127, },
{ 0, 1, 0, 1, 144, 76, },
{ 2, 1, 0, 1, 144, 127, },
@@ -42813,9 +42894,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 144, 76, },
{ 4, 1, 0, 1, 144, 76, },
{ 5, 1, 0, 1, 144, 127, },
- { 6, 1, 0, 1, 144, 76, },
- { 7, 1, 0, 1, 144, 127, },
- { 8, 1, 0, 1, 144, 76, },
{ 9, 1, 0, 1, 144, 127, },
{ 0, 1, 0, 1, 149, 76, },
{ 2, 1, 0, 1, 149, 28, },
@@ -42823,139 +42901,97 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 1, 149, 76, },
{ 4, 1, 0, 1, 149, 74, },
{ 5, 1, 0, 1, 149, 76, },
- { 6, 1, 0, 1, 149, 76, },
- { 7, 1, 0, 1, 149, 54, },
- { 8, 1, 0, 1, 149, 76, },
- { 9, 1, 0, 1, 149, 28, },
+ { 9, 1, 0, 1, 149, 76, },
{ 0, 1, 0, 1, 153, 76, },
{ 2, 1, 0, 1, 153, 28, },
{ 1, 1, 0, 1, 153, 127, },
{ 3, 1, 0, 1, 153, 76, },
{ 4, 1, 0, 1, 153, 74, },
{ 5, 1, 0, 1, 153, 76, },
- { 6, 1, 0, 1, 153, 76, },
- { 7, 1, 0, 1, 153, 54, },
- { 8, 1, 0, 1, 153, 76, },
- { 9, 1, 0, 1, 153, 28, },
+ { 9, 1, 0, 1, 153, 76, },
{ 0, 1, 0, 1, 157, 76, },
{ 2, 1, 0, 1, 157, 28, },
{ 1, 1, 0, 1, 157, 127, },
{ 3, 1, 0, 1, 157, 76, },
{ 4, 1, 0, 1, 157, 74, },
{ 5, 1, 0, 1, 157, 76, },
- { 6, 1, 0, 1, 157, 76, },
- { 7, 1, 0, 1, 157, 54, },
- { 8, 1, 0, 1, 157, 76, },
- { 9, 1, 0, 1, 157, 28, },
+ { 9, 1, 0, 1, 157, 76, },
{ 0, 1, 0, 1, 161, 76, },
{ 2, 1, 0, 1, 161, 28, },
{ 1, 1, 0, 1, 161, 127, },
{ 3, 1, 0, 1, 161, 76, },
{ 4, 1, 0, 1, 161, 74, },
{ 5, 1, 0, 1, 161, 76, },
- { 6, 1, 0, 1, 161, 76, },
- { 7, 1, 0, 1, 161, 54, },
- { 8, 1, 0, 1, 161, 76, },
- { 9, 1, 0, 1, 161, 28, },
+ { 9, 1, 0, 1, 161, 76, },
{ 0, 1, 0, 1, 165, 76, },
{ 2, 1, 0, 1, 165, 28, },
{ 1, 1, 0, 1, 165, 127, },
{ 3, 1, 0, 1, 165, 76, },
{ 4, 1, 0, 1, 165, 74, },
{ 5, 1, 0, 1, 165, 76, },
- { 6, 1, 0, 1, 165, 76, },
- { 7, 1, 0, 1, 165, 54, },
- { 8, 1, 0, 1, 165, 76, },
- { 9, 1, 0, 1, 165, 28, },
+ { 9, 1, 0, 1, 165, 76, },
{ 0, 1, 0, 2, 36, 70, },
{ 2, 1, 0, 2, 36, 58, },
{ 1, 1, 0, 2, 36, 64, },
{ 3, 1, 0, 2, 36, 62, },
{ 4, 1, 0, 2, 36, 76, },
{ 5, 1, 0, 2, 36, 58, },
- { 6, 1, 0, 2, 36, 64, },
- { 7, 1, 0, 2, 36, 54, },
- { 8, 1, 0, 2, 36, 62, },
- { 9, 1, 0, 2, 36, 62, },
+ { 9, 1, 0, 2, 36, 60, },
{ 0, 1, 0, 2, 40, 76, },
{ 2, 1, 0, 2, 40, 58, },
{ 1, 1, 0, 2, 40, 62, },
{ 3, 1, 0, 2, 40, 62, },
{ 4, 1, 0, 2, 40, 76, },
{ 5, 1, 0, 2, 40, 58, },
- { 6, 1, 0, 2, 40, 64, },
- { 7, 1, 0, 2, 40, 54, },
- { 8, 1, 0, 2, 40, 62, },
- { 9, 1, 0, 2, 40, 62, },
+ { 9, 1, 0, 2, 40, 60, },
{ 0, 1, 0, 2, 44, 76, },
{ 2, 1, 0, 2, 44, 58, },
{ 1, 1, 0, 2, 44, 62, },
{ 3, 1, 0, 2, 44, 62, },
{ 4, 1, 0, 2, 44, 76, },
{ 5, 1, 0, 2, 44, 58, },
- { 6, 1, 0, 2, 44, 64, },
- { 7, 1, 0, 2, 44, 54, },
- { 8, 1, 0, 2, 44, 62, },
- { 9, 1, 0, 2, 44, 62, },
+ { 9, 1, 0, 2, 44, 60, },
{ 0, 1, 0, 2, 48, 76, },
{ 2, 1, 0, 2, 48, 58, },
{ 1, 1, 0, 2, 48, 62, },
{ 3, 1, 0, 2, 48, 62, },
{ 4, 1, 0, 2, 48, 58, },
{ 5, 1, 0, 2, 48, 58, },
- { 6, 1, 0, 2, 48, 64, },
- { 7, 1, 0, 2, 48, 54, },
- { 8, 1, 0, 2, 48, 62, },
- { 9, 1, 0, 2, 48, 62, },
+ { 9, 1, 0, 2, 48, 60, },
{ 0, 1, 0, 2, 52, 76, },
{ 2, 1, 0, 2, 52, 58, },
{ 1, 1, 0, 2, 52, 62, },
{ 3, 1, 0, 2, 52, 64, },
{ 4, 1, 0, 2, 52, 76, },
{ 5, 1, 0, 2, 52, 58, },
- { 6, 1, 0, 2, 52, 76, },
- { 7, 1, 0, 2, 52, 54, },
- { 8, 1, 0, 2, 52, 76, },
- { 9, 1, 0, 2, 52, 62, },
+ { 9, 1, 0, 2, 52, 60, },
{ 0, 1, 0, 2, 56, 76, },
{ 2, 1, 0, 2, 56, 58, },
{ 1, 1, 0, 2, 56, 62, },
{ 3, 1, 0, 2, 56, 64, },
{ 4, 1, 0, 2, 56, 76, },
{ 5, 1, 0, 2, 56, 58, },
- { 6, 1, 0, 2, 56, 76, },
- { 7, 1, 0, 2, 56, 54, },
- { 8, 1, 0, 2, 56, 76, },
- { 9, 1, 0, 2, 56, 62, },
+ { 9, 1, 0, 2, 56, 60, },
{ 0, 1, 0, 2, 60, 76, },
{ 2, 1, 0, 2, 60, 58, },
{ 1, 1, 0, 2, 60, 62, },
{ 3, 1, 0, 2, 60, 64, },
{ 4, 1, 0, 2, 60, 76, },
{ 5, 1, 0, 2, 60, 58, },
- { 6, 1, 0, 2, 60, 76, },
- { 7, 1, 0, 2, 60, 54, },
- { 8, 1, 0, 2, 60, 76, },
- { 9, 1, 0, 2, 60, 62, },
+ { 9, 1, 0, 2, 60, 60, },
{ 0, 1, 0, 2, 64, 70, },
{ 2, 1, 0, 2, 64, 58, },
{ 1, 1, 0, 2, 64, 62, },
{ 3, 1, 0, 2, 64, 64, },
{ 4, 1, 0, 2, 64, 74, },
{ 5, 1, 0, 2, 64, 58, },
- { 6, 1, 0, 2, 64, 74, },
- { 7, 1, 0, 2, 64, 54, },
- { 8, 1, 0, 2, 64, 74, },
- { 9, 1, 0, 2, 64, 62, },
+ { 9, 1, 0, 2, 64, 60, },
{ 0, 1, 0, 2, 100, 66, },
{ 2, 1, 0, 2, 100, 58, },
{ 1, 1, 0, 2, 100, 76, },
{ 3, 1, 0, 2, 100, 66, },
{ 4, 1, 0, 2, 100, 76, },
{ 5, 1, 0, 2, 100, 58, },
- { 6, 1, 0, 2, 100, 70, },
- { 7, 1, 0, 2, 100, 54, },
- { 8, 1, 0, 2, 100, 70, },
{ 9, 1, 0, 2, 100, 127, },
{ 0, 1, 0, 2, 104, 76, },
{ 2, 1, 0, 2, 104, 58, },
@@ -42963,9 +42999,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 104, 76, },
{ 4, 1, 0, 2, 104, 76, },
{ 5, 1, 0, 2, 104, 58, },
- { 6, 1, 0, 2, 104, 76, },
- { 7, 1, 0, 2, 104, 54, },
- { 8, 1, 0, 2, 104, 76, },
{ 9, 1, 0, 2, 104, 127, },
{ 0, 1, 0, 2, 108, 76, },
{ 2, 1, 0, 2, 108, 58, },
@@ -42973,9 +43006,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 108, 76, },
{ 4, 1, 0, 2, 108, 76, },
{ 5, 1, 0, 2, 108, 58, },
- { 6, 1, 0, 2, 108, 76, },
- { 7, 1, 0, 2, 108, 54, },
- { 8, 1, 0, 2, 108, 76, },
{ 9, 1, 0, 2, 108, 127, },
{ 0, 1, 0, 2, 112, 76, },
{ 2, 1, 0, 2, 112, 58, },
@@ -42983,9 +43013,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 112, 76, },
{ 4, 1, 0, 2, 112, 76, },
{ 5, 1, 0, 2, 112, 58, },
- { 6, 1, 0, 2, 112, 76, },
- { 7, 1, 0, 2, 112, 54, },
- { 8, 1, 0, 2, 112, 76, },
{ 9, 1, 0, 2, 112, 127, },
{ 0, 1, 0, 2, 116, 76, },
{ 2, 1, 0, 2, 116, 58, },
@@ -42993,9 +43020,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 116, 76, },
{ 4, 1, 0, 2, 116, 76, },
{ 5, 1, 0, 2, 116, 58, },
- { 6, 1, 0, 2, 116, 76, },
- { 7, 1, 0, 2, 116, 54, },
- { 8, 1, 0, 2, 116, 76, },
{ 9, 1, 0, 2, 116, 127, },
{ 0, 1, 0, 2, 120, 76, },
{ 2, 1, 0, 2, 120, 58, },
@@ -43003,9 +43027,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 120, 127, },
{ 4, 1, 0, 2, 120, 76, },
{ 5, 1, 0, 2, 120, 127, },
- { 6, 1, 0, 2, 120, 76, },
- { 7, 1, 0, 2, 120, 54, },
- { 8, 1, 0, 2, 120, 76, },
{ 9, 1, 0, 2, 120, 127, },
{ 0, 1, 0, 2, 124, 76, },
{ 2, 1, 0, 2, 124, 58, },
@@ -43013,9 +43034,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 124, 127, },
{ 4, 1, 0, 2, 124, 76, },
{ 5, 1, 0, 2, 124, 127, },
- { 6, 1, 0, 2, 124, 76, },
- { 7, 1, 0, 2, 124, 54, },
- { 8, 1, 0, 2, 124, 76, },
{ 9, 1, 0, 2, 124, 127, },
{ 0, 1, 0, 2, 128, 76, },
{ 2, 1, 0, 2, 128, 58, },
@@ -43023,9 +43041,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 128, 127, },
{ 4, 1, 0, 2, 128, 76, },
{ 5, 1, 0, 2, 128, 127, },
- { 6, 1, 0, 2, 128, 76, },
- { 7, 1, 0, 2, 128, 54, },
- { 8, 1, 0, 2, 128, 76, },
{ 9, 1, 0, 2, 128, 127, },
{ 0, 1, 0, 2, 132, 76, },
{ 2, 1, 0, 2, 132, 58, },
@@ -43033,9 +43048,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 132, 76, },
{ 4, 1, 0, 2, 132, 76, },
{ 5, 1, 0, 2, 132, 58, },
- { 6, 1, 0, 2, 132, 76, },
- { 7, 1, 0, 2, 132, 54, },
- { 8, 1, 0, 2, 132, 76, },
{ 9, 1, 0, 2, 132, 127, },
{ 0, 1, 0, 2, 136, 76, },
{ 2, 1, 0, 2, 136, 58, },
@@ -43043,9 +43055,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 136, 76, },
{ 4, 1, 0, 2, 136, 76, },
{ 5, 1, 0, 2, 136, 58, },
- { 6, 1, 0, 2, 136, 76, },
- { 7, 1, 0, 2, 136, 54, },
- { 8, 1, 0, 2, 136, 76, },
{ 9, 1, 0, 2, 136, 127, },
{ 0, 1, 0, 2, 140, 66, },
{ 2, 1, 0, 2, 140, 58, },
@@ -43053,9 +43062,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 140, 66, },
{ 4, 1, 0, 2, 140, 76, },
{ 5, 1, 0, 2, 140, 58, },
- { 6, 1, 0, 2, 140, 70, },
- { 7, 1, 0, 2, 140, 54, },
- { 8, 1, 0, 2, 140, 70, },
{ 9, 1, 0, 2, 140, 127, },
{ 0, 1, 0, 2, 144, 76, },
{ 2, 1, 0, 2, 144, 127, },
@@ -43063,9 +43069,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 144, 76, },
{ 4, 1, 0, 2, 144, 76, },
{ 5, 1, 0, 2, 144, 127, },
- { 6, 1, 0, 2, 144, 76, },
- { 7, 1, 0, 2, 144, 127, },
- { 8, 1, 0, 2, 144, 76, },
{ 9, 1, 0, 2, 144, 127, },
{ 0, 1, 0, 2, 149, 76, },
{ 2, 1, 0, 2, 149, 28, },
@@ -43073,139 +43076,97 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 2, 149, 76, },
{ 4, 1, 0, 2, 149, 74, },
{ 5, 1, 0, 2, 149, 76, },
- { 6, 1, 0, 2, 149, 76, },
- { 7, 1, 0, 2, 149, 54, },
- { 8, 1, 0, 2, 149, 76, },
- { 9, 1, 0, 2, 149, 28, },
+ { 9, 1, 0, 2, 149, 76, },
{ 0, 1, 0, 2, 153, 76, },
{ 2, 1, 0, 2, 153, 28, },
{ 1, 1, 0, 2, 153, 127, },
{ 3, 1, 0, 2, 153, 76, },
{ 4, 1, 0, 2, 153, 74, },
{ 5, 1, 0, 2, 153, 76, },
- { 6, 1, 0, 2, 153, 76, },
- { 7, 1, 0, 2, 153, 54, },
- { 8, 1, 0, 2, 153, 76, },
- { 9, 1, 0, 2, 153, 28, },
+ { 9, 1, 0, 2, 153, 76, },
{ 0, 1, 0, 2, 157, 76, },
{ 2, 1, 0, 2, 157, 28, },
{ 1, 1, 0, 2, 157, 127, },
{ 3, 1, 0, 2, 157, 76, },
{ 4, 1, 0, 2, 157, 74, },
{ 5, 1, 0, 2, 157, 76, },
- { 6, 1, 0, 2, 157, 76, },
- { 7, 1, 0, 2, 157, 54, },
- { 8, 1, 0, 2, 157, 76, },
- { 9, 1, 0, 2, 157, 28, },
+ { 9, 1, 0, 2, 157, 76, },
{ 0, 1, 0, 2, 161, 76, },
{ 2, 1, 0, 2, 161, 28, },
{ 1, 1, 0, 2, 161, 127, },
{ 3, 1, 0, 2, 161, 76, },
{ 4, 1, 0, 2, 161, 74, },
{ 5, 1, 0, 2, 161, 76, },
- { 6, 1, 0, 2, 161, 76, },
- { 7, 1, 0, 2, 161, 54, },
- { 8, 1, 0, 2, 161, 76, },
- { 9, 1, 0, 2, 161, 28, },
+ { 9, 1, 0, 2, 161, 76, },
{ 0, 1, 0, 2, 165, 76, },
{ 2, 1, 0, 2, 165, 28, },
{ 1, 1, 0, 2, 165, 127, },
{ 3, 1, 0, 2, 165, 76, },
{ 4, 1, 0, 2, 165, 74, },
{ 5, 1, 0, 2, 165, 76, },
- { 6, 1, 0, 2, 165, 76, },
- { 7, 1, 0, 2, 165, 54, },
- { 8, 1, 0, 2, 165, 76, },
- { 9, 1, 0, 2, 165, 28, },
+ { 9, 1, 0, 2, 165, 76, },
{ 0, 1, 0, 3, 36, 64, },
{ 2, 1, 0, 3, 36, 36, },
{ 1, 1, 0, 3, 36, 50, },
{ 3, 1, 0, 3, 36, 38, },
{ 4, 1, 0, 3, 36, 66, },
{ 5, 1, 0, 3, 36, 36, },
- { 6, 1, 0, 3, 36, 52, },
- { 7, 1, 0, 3, 36, 30, },
- { 8, 1, 0, 3, 36, 50, },
- { 9, 1, 0, 3, 36, 38, },
+ { 9, 1, 0, 3, 36, 36, },
{ 0, 1, 0, 3, 40, 68, },
{ 2, 1, 0, 3, 40, 36, },
{ 1, 1, 0, 3, 40, 50, },
{ 3, 1, 0, 3, 40, 38, },
{ 4, 1, 0, 3, 40, 66, },
{ 5, 1, 0, 3, 40, 36, },
- { 6, 1, 0, 3, 40, 52, },
- { 7, 1, 0, 3, 40, 30, },
- { 8, 1, 0, 3, 40, 50, },
- { 9, 1, 0, 3, 40, 38, },
+ { 9, 1, 0, 3, 40, 36, },
{ 0, 1, 0, 3, 44, 68, },
{ 2, 1, 0, 3, 44, 36, },
{ 1, 1, 0, 3, 44, 50, },
{ 3, 1, 0, 3, 44, 38, },
{ 4, 1, 0, 3, 44, 66, },
{ 5, 1, 0, 3, 44, 36, },
- { 6, 1, 0, 3, 44, 52, },
- { 7, 1, 0, 3, 44, 30, },
- { 8, 1, 0, 3, 44, 50, },
- { 9, 1, 0, 3, 44, 38, },
+ { 9, 1, 0, 3, 44, 36, },
{ 0, 1, 0, 3, 48, 68, },
{ 2, 1, 0, 3, 48, 36, },
{ 1, 1, 0, 3, 48, 50, },
{ 3, 1, 0, 3, 48, 38, },
{ 4, 1, 0, 3, 48, 42, },
{ 5, 1, 0, 3, 48, 36, },
- { 6, 1, 0, 3, 48, 52, },
- { 7, 1, 0, 3, 48, 30, },
- { 8, 1, 0, 3, 48, 50, },
- { 9, 1, 0, 3, 48, 38, },
+ { 9, 1, 0, 3, 48, 36, },
{ 0, 1, 0, 3, 52, 68, },
{ 2, 1, 0, 3, 52, 36, },
{ 1, 1, 0, 3, 52, 50, },
{ 3, 1, 0, 3, 52, 40, },
{ 4, 1, 0, 3, 52, 66, },
{ 5, 1, 0, 3, 52, 36, },
- { 6, 1, 0, 3, 52, 68, },
- { 7, 1, 0, 3, 52, 30, },
- { 8, 1, 0, 3, 52, 68, },
- { 9, 1, 0, 3, 52, 38, },
+ { 9, 1, 0, 3, 52, 36, },
{ 0, 1, 0, 3, 56, 68, },
{ 2, 1, 0, 3, 56, 36, },
{ 1, 1, 0, 3, 56, 50, },
{ 3, 1, 0, 3, 56, 40, },
{ 4, 1, 0, 3, 56, 66, },
{ 5, 1, 0, 3, 56, 36, },
- { 6, 1, 0, 3, 56, 68, },
- { 7, 1, 0, 3, 56, 30, },
- { 8, 1, 0, 3, 56, 68, },
- { 9, 1, 0, 3, 56, 38, },
+ { 9, 1, 0, 3, 56, 36, },
{ 0, 1, 0, 3, 60, 68, },
{ 2, 1, 0, 3, 60, 36, },
{ 1, 1, 0, 3, 60, 50, },
{ 3, 1, 0, 3, 60, 40, },
{ 4, 1, 0, 3, 60, 66, },
{ 5, 1, 0, 3, 60, 36, },
- { 6, 1, 0, 3, 60, 66, },
- { 7, 1, 0, 3, 60, 30, },
- { 8, 1, 0, 3, 60, 66, },
- { 9, 1, 0, 3, 60, 38, },
+ { 9, 1, 0, 3, 60, 36, },
{ 0, 1, 0, 3, 64, 66, },
{ 2, 1, 0, 3, 64, 36, },
{ 1, 1, 0, 3, 64, 50, },
{ 3, 1, 0, 3, 64, 40, },
{ 4, 1, 0, 3, 64, 66, },
{ 5, 1, 0, 3, 64, 36, },
- { 6, 1, 0, 3, 64, 68, },
- { 7, 1, 0, 3, 64, 30, },
- { 8, 1, 0, 3, 64, 68, },
- { 9, 1, 0, 3, 64, 38, },
+ { 9, 1, 0, 3, 64, 36, },
{ 0, 1, 0, 3, 100, 64, },
{ 2, 1, 0, 3, 100, 36, },
{ 1, 1, 0, 3, 100, 70, },
{ 3, 1, 0, 3, 100, 64, },
{ 4, 1, 0, 3, 100, 66, },
{ 5, 1, 0, 3, 100, 36, },
- { 6, 1, 0, 3, 100, 60, },
- { 7, 1, 0, 3, 100, 30, },
- { 8, 1, 0, 3, 100, 60, },
{ 9, 1, 0, 3, 100, 127, },
{ 0, 1, 0, 3, 104, 68, },
{ 2, 1, 0, 3, 104, 36, },
@@ -43213,9 +43174,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 104, 68, },
{ 4, 1, 0, 3, 104, 66, },
{ 5, 1, 0, 3, 104, 36, },
- { 6, 1, 0, 3, 104, 68, },
- { 7, 1, 0, 3, 104, 30, },
- { 8, 1, 0, 3, 104, 68, },
{ 9, 1, 0, 3, 104, 127, },
{ 0, 1, 0, 3, 108, 68, },
{ 2, 1, 0, 3, 108, 36, },
@@ -43223,9 +43181,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 108, 68, },
{ 4, 1, 0, 3, 108, 66, },
{ 5, 1, 0, 3, 108, 36, },
- { 6, 1, 0, 3, 108, 68, },
- { 7, 1, 0, 3, 108, 30, },
- { 8, 1, 0, 3, 108, 68, },
{ 9, 1, 0, 3, 108, 127, },
{ 0, 1, 0, 3, 112, 68, },
{ 2, 1, 0, 3, 112, 36, },
@@ -43233,9 +43188,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 112, 68, },
{ 4, 1, 0, 3, 112, 66, },
{ 5, 1, 0, 3, 112, 36, },
- { 6, 1, 0, 3, 112, 68, },
- { 7, 1, 0, 3, 112, 30, },
- { 8, 1, 0, 3, 112, 68, },
{ 9, 1, 0, 3, 112, 127, },
{ 0, 1, 0, 3, 116, 68, },
{ 2, 1, 0, 3, 116, 36, },
@@ -43243,9 +43195,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 116, 68, },
{ 4, 1, 0, 3, 116, 66, },
{ 5, 1, 0, 3, 116, 36, },
- { 6, 1, 0, 3, 116, 68, },
- { 7, 1, 0, 3, 116, 30, },
- { 8, 1, 0, 3, 116, 68, },
{ 9, 1, 0, 3, 116, 127, },
{ 0, 1, 0, 3, 120, 68, },
{ 2, 1, 0, 3, 120, 36, },
@@ -43253,9 +43202,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 120, 127, },
{ 4, 1, 0, 3, 120, 66, },
{ 5, 1, 0, 3, 120, 127, },
- { 6, 1, 0, 3, 120, 68, },
- { 7, 1, 0, 3, 120, 30, },
- { 8, 1, 0, 3, 120, 68, },
{ 9, 1, 0, 3, 120, 127, },
{ 0, 1, 0, 3, 124, 68, },
{ 2, 1, 0, 3, 124, 36, },
@@ -43263,9 +43209,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 124, 127, },
{ 4, 1, 0, 3, 124, 66, },
{ 5, 1, 0, 3, 124, 127, },
- { 6, 1, 0, 3, 124, 68, },
- { 7, 1, 0, 3, 124, 30, },
- { 8, 1, 0, 3, 124, 68, },
{ 9, 1, 0, 3, 124, 127, },
{ 0, 1, 0, 3, 128, 68, },
{ 2, 1, 0, 3, 128, 36, },
@@ -43273,9 +43216,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 128, 127, },
{ 4, 1, 0, 3, 128, 66, },
{ 5, 1, 0, 3, 128, 127, },
- { 6, 1, 0, 3, 128, 68, },
- { 7, 1, 0, 3, 128, 30, },
- { 8, 1, 0, 3, 128, 68, },
{ 9, 1, 0, 3, 128, 127, },
{ 0, 1, 0, 3, 132, 68, },
{ 2, 1, 0, 3, 132, 36, },
@@ -43283,9 +43223,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 132, 68, },
{ 4, 1, 0, 3, 132, 66, },
{ 5, 1, 0, 3, 132, 36, },
- { 6, 1, 0, 3, 132, 68, },
- { 7, 1, 0, 3, 132, 30, },
- { 8, 1, 0, 3, 132, 68, },
{ 9, 1, 0, 3, 132, 127, },
{ 0, 1, 0, 3, 136, 68, },
{ 2, 1, 0, 3, 136, 36, },
@@ -43293,9 +43230,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 136, 68, },
{ 4, 1, 0, 3, 136, 66, },
{ 5, 1, 0, 3, 136, 36, },
- { 6, 1, 0, 3, 136, 68, },
- { 7, 1, 0, 3, 136, 30, },
- { 8, 1, 0, 3, 136, 68, },
{ 9, 1, 0, 3, 136, 127, },
{ 0, 1, 0, 3, 140, 58, },
{ 2, 1, 0, 3, 140, 36, },
@@ -43303,9 +43237,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 140, 58, },
{ 4, 1, 0, 3, 140, 66, },
{ 5, 1, 0, 3, 140, 36, },
- { 6, 1, 0, 3, 140, 60, },
- { 7, 1, 0, 3, 140, 30, },
- { 8, 1, 0, 3, 140, 60, },
{ 9, 1, 0, 3, 140, 127, },
{ 0, 1, 0, 3, 144, 68, },
{ 2, 1, 0, 3, 144, 127, },
@@ -43313,9 +43244,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 144, 68, },
{ 4, 1, 0, 3, 144, 66, },
{ 5, 1, 0, 3, 144, 127, },
- { 6, 1, 0, 3, 144, 68, },
- { 7, 1, 0, 3, 144, 127, },
- { 8, 1, 0, 3, 144, 68, },
{ 9, 1, 0, 3, 144, 127, },
{ 0, 1, 0, 3, 149, 76, },
{ 2, 1, 0, 3, 149, 4, },
@@ -43323,59 +43251,41 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 0, 3, 149, 76, },
{ 4, 1, 0, 3, 149, 62, },
{ 5, 1, 0, 3, 149, 76, },
- { 6, 1, 0, 3, 149, 76, },
- { 7, 1, 0, 3, 149, 30, },
- { 8, 1, 0, 3, 149, 72, },
- { 9, 1, 0, 3, 149, 4, },
+ { 9, 1, 0, 3, 149, 68, },
{ 0, 1, 0, 3, 153, 76, },
{ 2, 1, 0, 3, 153, 4, },
{ 1, 1, 0, 3, 153, 127, },
{ 3, 1, 0, 3, 153, 76, },
{ 4, 1, 0, 3, 153, 62, },
{ 5, 1, 0, 3, 153, 76, },
- { 6, 1, 0, 3, 153, 76, },
- { 7, 1, 0, 3, 153, 30, },
- { 8, 1, 0, 3, 153, 76, },
- { 9, 1, 0, 3, 153, 4, },
+ { 9, 1, 0, 3, 153, 68, },
{ 0, 1, 0, 3, 157, 76, },
{ 2, 1, 0, 3, 157, 4, },
{ 1, 1, 0, 3, 157, 127, },
{ 3, 1, 0, 3, 157, 76, },
{ 4, 1, 0, 3, 157, 62, },
{ 5, 1, 0, 3, 157, 76, },
- { 6, 1, 0, 3, 157, 76, },
- { 7, 1, 0, 3, 157, 30, },
- { 8, 1, 0, 3, 157, 76, },
- { 9, 1, 0, 3, 157, 4, },
+ { 9, 1, 0, 3, 157, 68, },
{ 0, 1, 0, 3, 161, 76, },
{ 2, 1, 0, 3, 161, 4, },
{ 1, 1, 0, 3, 161, 127, },
{ 3, 1, 0, 3, 161, 76, },
{ 4, 1, 0, 3, 161, 62, },
{ 5, 1, 0, 3, 161, 76, },
- { 6, 1, 0, 3, 161, 76, },
- { 7, 1, 0, 3, 161, 30, },
- { 8, 1, 0, 3, 161, 76, },
- { 9, 1, 0, 3, 161, 4, },
+ { 9, 1, 0, 3, 161, 72, },
{ 0, 1, 0, 3, 165, 76, },
{ 2, 1, 0, 3, 165, 4, },
{ 1, 1, 0, 3, 165, 127, },
{ 3, 1, 0, 3, 165, 76, },
{ 4, 1, 0, 3, 165, 62, },
{ 5, 1, 0, 3, 165, 76, },
- { 6, 1, 0, 3, 165, 76, },
- { 7, 1, 0, 3, 165, 30, },
- { 8, 1, 0, 3, 165, 76, },
- { 9, 1, 0, 3, 165, 4, },
+ { 9, 1, 0, 3, 165, 72, },
{ 0, 1, 1, 2, 38, 66, },
{ 2, 1, 1, 2, 38, 64, },
{ 1, 1, 1, 2, 38, 64, },
{ 3, 1, 1, 2, 38, 64, },
{ 4, 1, 1, 2, 38, 64, },
{ 5, 1, 1, 2, 38, 64, },
- { 6, 1, 1, 2, 38, 64, },
- { 7, 1, 1, 2, 38, 54, },
- { 8, 1, 1, 2, 38, 62, },
{ 9, 1, 1, 2, 38, 64, },
{ 0, 1, 1, 2, 46, 72, },
{ 2, 1, 1, 2, 46, 64, },
@@ -43383,9 +43293,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 2, 46, 64, },
{ 4, 1, 1, 2, 46, 70, },
{ 5, 1, 1, 2, 46, 64, },
- { 6, 1, 1, 2, 46, 64, },
- { 7, 1, 1, 2, 46, 54, },
- { 8, 1, 1, 2, 46, 62, },
{ 9, 1, 1, 2, 46, 64, },
{ 0, 1, 1, 2, 54, 72, },
{ 2, 1, 1, 2, 54, 64, },
@@ -43393,9 +43300,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 2, 54, 64, },
{ 4, 1, 1, 2, 54, 72, },
{ 5, 1, 1, 2, 54, 64, },
- { 6, 1, 1, 2, 54, 72, },
- { 7, 1, 1, 2, 54, 54, },
- { 8, 1, 1, 2, 54, 72, },
{ 9, 1, 1, 2, 54, 64, },
{ 0, 1, 1, 2, 62, 60, },
{ 2, 1, 1, 2, 62, 64, },
@@ -43403,9 +43307,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 2, 62, 60, },
{ 4, 1, 1, 2, 62, 60, },
{ 5, 1, 1, 2, 62, 64, },
- { 6, 1, 1, 2, 62, 64, },
- { 7, 1, 1, 2, 62, 54, },
- { 8, 1, 1, 2, 62, 64, },
{ 9, 1, 1, 2, 62, 64, },
{ 0, 1, 1, 2, 102, 60, },
{ 2, 1, 1, 2, 102, 64, },
@@ -43413,9 +43314,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 2, 102, 60, },
{ 4, 1, 1, 2, 102, 64, },
{ 5, 1, 1, 2, 102, 64, },
- { 6, 1, 1, 2, 102, 58, },
- { 7, 1, 1, 2, 102, 54, },
- { 8, 1, 1, 2, 102, 58, },
{ 9, 1, 1, 2, 102, 127, },
{ 0, 1, 1, 2, 110, 72, },
{ 2, 1, 1, 2, 110, 64, },
@@ -43423,9 +43321,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 2, 110, 72, },
{ 4, 1, 1, 2, 110, 72, },
{ 5, 1, 1, 2, 110, 64, },
- { 6, 1, 1, 2, 110, 72, },
- { 7, 1, 1, 2, 110, 54, },
- { 8, 1, 1, 2, 110, 72, },
{ 9, 1, 1, 2, 110, 127, },
{ 0, 1, 1, 2, 118, 72, },
{ 2, 1, 1, 2, 118, 64, },
@@ -43433,9 +43328,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 2, 118, 127, },
{ 4, 1, 1, 2, 118, 72, },
{ 5, 1, 1, 2, 118, 127, },
- { 6, 1, 1, 2, 118, 72, },
- { 7, 1, 1, 2, 118, 54, },
- { 8, 1, 1, 2, 118, 72, },
{ 9, 1, 1, 2, 118, 127, },
{ 0, 1, 1, 2, 126, 72, },
{ 2, 1, 1, 2, 126, 64, },
@@ -43443,9 +43335,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 2, 126, 127, },
{ 4, 1, 1, 2, 126, 72, },
{ 5, 1, 1, 2, 126, 127, },
- { 6, 1, 1, 2, 126, 72, },
- { 7, 1, 1, 2, 126, 54, },
- { 8, 1, 1, 2, 126, 72, },
{ 9, 1, 1, 2, 126, 127, },
{ 0, 1, 1, 2, 134, 72, },
{ 2, 1, 1, 2, 134, 64, },
@@ -43453,9 +43342,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 2, 134, 72, },
{ 4, 1, 1, 2, 134, 72, },
{ 5, 1, 1, 2, 134, 64, },
- { 6, 1, 1, 2, 134, 72, },
- { 7, 1, 1, 2, 134, 54, },
- { 8, 1, 1, 2, 134, 72, },
{ 9, 1, 1, 2, 134, 127, },
{ 0, 1, 1, 2, 142, 72, },
{ 2, 1, 1, 2, 142, 127, },
@@ -43463,9 +43349,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 2, 142, 72, },
{ 4, 1, 1, 2, 142, 72, },
{ 5, 1, 1, 2, 142, 127, },
- { 6, 1, 1, 2, 142, 72, },
- { 7, 1, 1, 2, 142, 127, },
- { 8, 1, 1, 2, 142, 72, },
{ 9, 1, 1, 2, 142, 127, },
{ 0, 1, 1, 2, 151, 72, },
{ 2, 1, 1, 2, 151, 28, },
@@ -43473,29 +43356,20 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 2, 151, 72, },
{ 4, 1, 1, 2, 151, 72, },
{ 5, 1, 1, 2, 151, 72, },
- { 6, 1, 1, 2, 151, 72, },
- { 7, 1, 1, 2, 151, 54, },
- { 8, 1, 1, 2, 151, 72, },
- { 9, 1, 1, 2, 151, 28, },
+ { 9, 1, 1, 2, 151, 72, },
{ 0, 1, 1, 2, 159, 72, },
{ 2, 1, 1, 2, 159, 28, },
{ 1, 1, 1, 2, 159, 127, },
{ 3, 1, 1, 2, 159, 72, },
{ 4, 1, 1, 2, 159, 72, },
{ 5, 1, 1, 2, 159, 72, },
- { 6, 1, 1, 2, 159, 72, },
- { 7, 1, 1, 2, 159, 54, },
- { 8, 1, 1, 2, 159, 72, },
- { 9, 1, 1, 2, 159, 28, },
+ { 9, 1, 1, 2, 159, 72, },
{ 0, 1, 1, 3, 38, 60, },
{ 2, 1, 1, 3, 38, 40, },
{ 1, 1, 1, 3, 38, 50, },
{ 3, 1, 1, 3, 38, 40, },
{ 4, 1, 1, 3, 38, 54, },
{ 5, 1, 1, 3, 38, 40, },
- { 6, 1, 1, 3, 38, 52, },
- { 7, 1, 1, 3, 38, 30, },
- { 8, 1, 1, 3, 38, 50, },
{ 9, 1, 1, 3, 38, 40, },
{ 0, 1, 1, 3, 46, 68, },
{ 2, 1, 1, 3, 46, 40, },
@@ -43503,9 +43377,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 3, 46, 40, },
{ 4, 1, 1, 3, 46, 54, },
{ 5, 1, 1, 3, 46, 40, },
- { 6, 1, 1, 3, 46, 52, },
- { 7, 1, 1, 3, 46, 30, },
- { 8, 1, 1, 3, 46, 50, },
{ 9, 1, 1, 3, 46, 40, },
{ 0, 1, 1, 3, 54, 68, },
{ 2, 1, 1, 3, 54, 40, },
@@ -43513,9 +43384,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 3, 54, 40, },
{ 4, 1, 1, 3, 54, 66, },
{ 5, 1, 1, 3, 54, 40, },
- { 6, 1, 1, 3, 54, 68, },
- { 7, 1, 1, 3, 54, 30, },
- { 8, 1, 1, 3, 54, 68, },
{ 9, 1, 1, 3, 54, 40, },
{ 0, 1, 1, 3, 62, 58, },
{ 2, 1, 1, 3, 62, 40, },
@@ -43523,9 +43391,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 3, 62, 40, },
{ 4, 1, 1, 3, 62, 50, },
{ 5, 1, 1, 3, 62, 40, },
- { 6, 1, 1, 3, 62, 58, },
- { 7, 1, 1, 3, 62, 30, },
- { 8, 1, 1, 3, 62, 58, },
{ 9, 1, 1, 3, 62, 40, },
{ 0, 1, 1, 3, 102, 56, },
{ 2, 1, 1, 3, 102, 40, },
@@ -43533,9 +43398,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 3, 102, 56, },
{ 4, 1, 1, 3, 102, 54, },
{ 5, 1, 1, 3, 102, 40, },
- { 6, 1, 1, 3, 102, 54, },
- { 7, 1, 1, 3, 102, 30, },
- { 8, 1, 1, 3, 102, 54, },
{ 9, 1, 1, 3, 102, 127, },
{ 0, 1, 1, 3, 110, 68, },
{ 2, 1, 1, 3, 110, 40, },
@@ -43543,9 +43405,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 3, 110, 68, },
{ 4, 1, 1, 3, 110, 66, },
{ 5, 1, 1, 3, 110, 40, },
- { 6, 1, 1, 3, 110, 68, },
- { 7, 1, 1, 3, 110, 30, },
- { 8, 1, 1, 3, 110, 68, },
{ 9, 1, 1, 3, 110, 127, },
{ 0, 1, 1, 3, 118, 68, },
{ 2, 1, 1, 3, 118, 40, },
@@ -43553,9 +43412,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 3, 118, 127, },
{ 4, 1, 1, 3, 118, 66, },
{ 5, 1, 1, 3, 118, 127, },
- { 6, 1, 1, 3, 118, 68, },
- { 7, 1, 1, 3, 118, 30, },
- { 8, 1, 1, 3, 118, 68, },
{ 9, 1, 1, 3, 118, 127, },
{ 0, 1, 1, 3, 126, 68, },
{ 2, 1, 1, 3, 126, 40, },
@@ -43563,9 +43419,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 3, 126, 127, },
{ 4, 1, 1, 3, 126, 66, },
{ 5, 1, 1, 3, 126, 127, },
- { 6, 1, 1, 3, 126, 68, },
- { 7, 1, 1, 3, 126, 30, },
- { 8, 1, 1, 3, 126, 68, },
{ 9, 1, 1, 3, 126, 127, },
{ 0, 1, 1, 3, 134, 68, },
{ 2, 1, 1, 3, 134, 40, },
@@ -43573,9 +43426,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 3, 134, 68, },
{ 4, 1, 1, 3, 134, 66, },
{ 5, 1, 1, 3, 134, 40, },
- { 6, 1, 1, 3, 134, 68, },
- { 7, 1, 1, 3, 134, 30, },
- { 8, 1, 1, 3, 134, 68, },
{ 9, 1, 1, 3, 134, 127, },
{ 0, 1, 1, 3, 142, 68, },
{ 2, 1, 1, 3, 142, 127, },
@@ -43583,9 +43433,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 3, 142, 68, },
{ 4, 1, 1, 3, 142, 66, },
{ 5, 1, 1, 3, 142, 127, },
- { 6, 1, 1, 3, 142, 68, },
- { 7, 1, 1, 3, 142, 127, },
- { 8, 1, 1, 3, 142, 68, },
{ 9, 1, 1, 3, 142, 127, },
{ 0, 1, 1, 3, 151, 72, },
{ 2, 1, 1, 3, 151, 4, },
@@ -43593,29 +43440,20 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 1, 3, 151, 72, },
{ 4, 1, 1, 3, 151, 66, },
{ 5, 1, 1, 3, 151, 72, },
- { 6, 1, 1, 3, 151, 72, },
- { 7, 1, 1, 3, 151, 30, },
- { 8, 1, 1, 3, 151, 68, },
- { 9, 1, 1, 3, 151, 4, },
+ { 9, 1, 1, 3, 151, 64, },
{ 0, 1, 1, 3, 159, 72, },
{ 2, 1, 1, 3, 159, 4, },
{ 1, 1, 1, 3, 159, 127, },
{ 3, 1, 1, 3, 159, 72, },
{ 4, 1, 1, 3, 159, 66, },
{ 5, 1, 1, 3, 159, 72, },
- { 6, 1, 1, 3, 159, 72, },
- { 7, 1, 1, 3, 159, 30, },
- { 8, 1, 1, 3, 159, 72, },
- { 9, 1, 1, 3, 159, 4, },
+ { 9, 1, 1, 3, 159, 72, },
{ 0, 1, 2, 4, 42, 68, },
{ 2, 1, 2, 4, 42, 64, },
{ 1, 1, 2, 4, 42, 64, },
{ 3, 1, 2, 4, 42, 64, },
{ 4, 1, 2, 4, 42, 60, },
{ 5, 1, 2, 4, 42, 64, },
- { 6, 1, 2, 4, 42, 64, },
- { 7, 1, 2, 4, 42, 54, },
- { 8, 1, 2, 4, 42, 62, },
{ 9, 1, 2, 4, 42, 64, },
{ 0, 1, 2, 4, 58, 60, },
{ 2, 1, 2, 4, 58, 64, },
@@ -43623,9 +43461,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 2, 4, 58, 60, },
{ 4, 1, 2, 4, 58, 56, },
{ 5, 1, 2, 4, 58, 64, },
- { 6, 1, 2, 4, 58, 62, },
- { 7, 1, 2, 4, 58, 54, },
- { 8, 1, 2, 4, 58, 62, },
{ 9, 1, 2, 4, 58, 64, },
{ 0, 1, 2, 4, 106, 60, },
{ 2, 1, 2, 4, 106, 64, },
@@ -43633,9 +43468,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 2, 4, 106, 60, },
{ 4, 1, 2, 4, 106, 58, },
{ 5, 1, 2, 4, 106, 64, },
- { 6, 1, 2, 4, 106, 58, },
- { 7, 1, 2, 4, 106, 54, },
- { 8, 1, 2, 4, 106, 58, },
{ 9, 1, 2, 4, 106, 127, },
{ 0, 1, 2, 4, 122, 72, },
{ 2, 1, 2, 4, 122, 64, },
@@ -43643,9 +43475,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 2, 4, 122, 127, },
{ 4, 1, 2, 4, 122, 68, },
{ 5, 1, 2, 4, 122, 127, },
- { 6, 1, 2, 4, 122, 72, },
- { 7, 1, 2, 4, 122, 54, },
- { 8, 1, 2, 4, 122, 72, },
{ 9, 1, 2, 4, 122, 127, },
{ 0, 1, 2, 4, 138, 72, },
{ 2, 1, 2, 4, 138, 127, },
@@ -43653,9 +43482,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 2, 4, 138, 72, },
{ 4, 1, 2, 4, 138, 70, },
{ 5, 1, 2, 4, 138, 127, },
- { 6, 1, 2, 4, 138, 72, },
- { 7, 1, 2, 4, 138, 127, },
- { 8, 1, 2, 4, 138, 72, },
{ 9, 1, 2, 4, 138, 127, },
{ 0, 1, 2, 4, 155, 72, },
{ 2, 1, 2, 4, 155, 28, },
@@ -43663,19 +43489,13 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 2, 4, 155, 72, },
{ 4, 1, 2, 4, 155, 62, },
{ 5, 1, 2, 4, 155, 72, },
- { 6, 1, 2, 4, 155, 72, },
- { 7, 1, 2, 4, 155, 54, },
- { 8, 1, 2, 4, 155, 68, },
- { 9, 1, 2, 4, 155, 28, },
+ { 9, 1, 2, 4, 155, 72, },
{ 0, 1, 2, 5, 42, 56, },
{ 2, 1, 2, 5, 42, 40, },
{ 1, 1, 2, 5, 42, 50, },
{ 3, 1, 2, 5, 42, 40, },
{ 4, 1, 2, 5, 42, 50, },
{ 5, 1, 2, 5, 42, 40, },
- { 6, 1, 2, 5, 42, 52, },
- { 7, 1, 2, 5, 42, 30, },
- { 8, 1, 2, 5, 42, 50, },
{ 9, 1, 2, 5, 42, 40, },
{ 0, 1, 2, 5, 58, 54, },
{ 2, 1, 2, 5, 58, 40, },
@@ -43683,9 +43503,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 2, 5, 58, 40, },
{ 4, 1, 2, 5, 58, 46, },
{ 5, 1, 2, 5, 58, 40, },
- { 6, 1, 2, 5, 58, 52, },
- { 7, 1, 2, 5, 58, 30, },
- { 8, 1, 2, 5, 58, 52, },
{ 9, 1, 2, 5, 58, 40, },
{ 0, 1, 2, 5, 106, 48, },
{ 2, 1, 2, 5, 106, 40, },
@@ -43693,9 +43510,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 2, 5, 106, 48, },
{ 4, 1, 2, 5, 106, 50, },
{ 5, 1, 2, 5, 106, 40, },
- { 6, 1, 2, 5, 106, 50, },
- { 7, 1, 2, 5, 106, 30, },
- { 8, 1, 2, 5, 106, 50, },
{ 9, 1, 2, 5, 106, 127, },
{ 0, 1, 2, 5, 122, 70, },
{ 2, 1, 2, 5, 122, 40, },
@@ -43703,9 +43517,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 2, 5, 122, 127, },
{ 4, 1, 2, 5, 122, 62, },
{ 5, 1, 2, 5, 122, 127, },
- { 6, 1, 2, 5, 122, 66, },
- { 7, 1, 2, 5, 122, 30, },
- { 8, 1, 2, 5, 122, 66, },
{ 9, 1, 2, 5, 122, 127, },
{ 0, 1, 2, 5, 138, 70, },
{ 2, 1, 2, 5, 138, 127, },
@@ -43713,9 +43524,6 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 2, 5, 138, 70, },
{ 4, 1, 2, 5, 138, 62, },
{ 5, 1, 2, 5, 138, 127, },
- { 6, 1, 2, 5, 138, 66, },
- { 7, 1, 2, 5, 138, 127, },
- { 8, 1, 2, 5, 138, 66, },
{ 9, 1, 2, 5, 138, 127, },
{ 0, 1, 2, 5, 155, 72, },
{ 2, 1, 2, 5, 155, 4, },
@@ -43723,10 +43531,7 @@ static const struct rtw_txpwr_lmt_cfg_pair rtw8822c_txpwr_lmt_type5[] = {
{ 3, 1, 2, 5, 155, 72, },
{ 4, 1, 2, 5, 155, 52, },
{ 5, 1, 2, 5, 155, 72, },
- { 6, 1, 2, 5, 155, 62, },
- { 7, 1, 2, 5, 155, 30, },
- { 8, 1, 2, 5, 155, 62, },
- { 9, 1, 2, 5, 155, 4, },
+ { 9, 1, 2, 5, 155, 66, },
};
RTW_DECL_TABLE_TXPWR_LMT(rtw8822c_txpwr_lmt_type5);
diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822cu.c b/drivers/net/wireless/realtek/rtw88/rtw8822cu.c
index af28ca09d41f..157d5102a4b1 100644
--- a/drivers/net/wireless/realtek/rtw88/rtw8822cu.c
+++ b/drivers/net/wireless/realtek/rtw88/rtw8822cu.c
@@ -25,7 +25,7 @@ static const struct usb_device_id rtw_8822cu_id_table[] = {
};
MODULE_DEVICE_TABLE(usb, rtw_8822cu_id_table);
-static int rtw8822bu_probe(struct usb_interface *intf,
+static int rtw8822cu_probe(struct usb_interface *intf,
const struct usb_device_id *id)
{
return rtw_usb_probe(intf, id);
@@ -34,7 +34,7 @@ static int rtw8822bu_probe(struct usb_interface *intf,
static struct usb_driver rtw_8822cu_driver = {
.name = "rtw_8822cu",
.id_table = rtw_8822cu_id_table,
- .probe = rtw8822bu_probe,
+ .probe = rtw8822cu_probe,
.disconnect = rtw_usb_disconnect,
};
module_usb_driver(rtw_8822cu_driver);
diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
index d879d7e3dc81..e6ab1ac6d709 100644
--- a/drivers/net/wireless/realtek/rtw88/usb.c
+++ b/drivers/net/wireless/realtek/rtw88/usb.c
@@ -611,8 +611,7 @@ static void rtw_usb_cancel_rx_bufs(struct rtw_usb *rtwusb)
for (i = 0; i < RTW_USB_RXCB_NUM; i++) {
rxcb = &rtwusb->rx_cb[i];
- if (rxcb->rx_urb)
- usb_kill_urb(rxcb->rx_urb);
+ usb_kill_urb(rxcb->rx_urb);
}
}
@@ -623,10 +622,8 @@ static void rtw_usb_free_rx_bufs(struct rtw_usb *rtwusb)
for (i = 0; i < RTW_USB_RXCB_NUM; i++) {
rxcb = &rtwusb->rx_cb[i];
- if (rxcb->rx_urb) {
- usb_kill_urb(rxcb->rx_urb);
- usb_free_urb(rxcb->rx_urb);
- }
+ usb_kill_urb(rxcb->rx_urb);
+ usb_free_urb(rxcb->rx_urb);
}
}
diff --git a/drivers/net/wireless/realtek/rtw89/chan.c b/drivers/net/wireless/realtek/rtw89/chan.c
index e1bc3606f9ae..cbf6821af6b8 100644
--- a/drivers/net/wireless/realtek/rtw89/chan.c
+++ b/drivers/net/wireless/realtek/rtw89/chan.c
@@ -3,8 +3,10 @@
*/
#include "chan.h"
+#include "coex.h"
#include "debug.h"
#include "fw.h"
+#include "mac.h"
#include "ps.h"
#include "util.h"
@@ -85,6 +87,19 @@ static enum rtw89_sc_offset rtw89_get_primary_chan_idx(enum rtw89_bandwidth bw,
return primary_chan_idx;
}
+static u8 rtw89_get_primary_sb_idx(u8 central_ch, u8 pri_ch,
+ enum rtw89_bandwidth bw)
+{
+ static const u8 prisb_cal_ofst[RTW89_CHANNEL_WIDTH_ORDINARY_NUM] = {
+ 0, 2, 6, 14, 30
+ };
+
+ if (bw >= RTW89_CHANNEL_WIDTH_ORDINARY_NUM)
+ return 0;
+
+ return (prisb_cal_ofst[bw] + pri_ch - central_ch) / 4;
+}
+
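As a rough, illustrative check of the sub-band arithmetic added above (a sketch; it assumes RTW89_CHANNEL_WIDTH_80 indexes the third entry of prisb_cal_ofst, i.e. offset 6, and the helper is static to chan.c):

/* An 80 MHz block centered on channel 42 carries the 20 MHz primaries
 * 36/40/44/48, which map to primary sub-band indices 0..3:
 *
 *   (6 + 36 - 42) / 4 == 0
 *   (6 + 40 - 42) / 4 == 1
 *   (6 + 44 - 42) / 4 == 2
 *   (6 + 48 - 42) / 4 == 3
 *
 * e.g. rtw89_get_primary_sb_idx(42, 44, RTW89_CHANNEL_WIDTH_80) == 2,
 * and any bandwidth outside RTW89_CHANNEL_WIDTH_ORDINARY_NUM returns 0.
 */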
void rtw89_chan_create(struct rtw89_chan *chan, u8 center_chan, u8 primary_chan,
enum rtw89_band band, enum rtw89_bandwidth bandwidth)
{
@@ -104,6 +119,8 @@ void rtw89_chan_create(struct rtw89_chan *chan, u8 center_chan, u8 primary_chan,
chan->subband_type = rtw89_get_subband_type(band, center_chan);
chan->pri_ch_idx = rtw89_get_primary_chan_idx(bandwidth, center_freq,
primary_freq);
+ chan->pri_sb_idx = rtw89_get_primary_sb_idx(center_chan, primary_chan,
+ bandwidth);
}
bool rtw89_assign_entity_chan(struct rtw89_dev *rtwdev,
@@ -188,7 +205,9 @@ void rtw89_entity_init(struct rtw89_dev *rtwdev)
{
struct rtw89_hal *hal = &rtwdev->hal;
+ hal->entity_pause = false;
bitmap_zero(hal->entity_map, NUM_OF_RTW89_SUB_ENTITY);
+ bitmap_zero(hal->changes, NUM_OF_RTW89_CHANCTX_CHANGES);
atomic_set(&hal->roc_entity_idx, RTW89_SUB_ENTITY_IDLE);
rtw89_config_default_chandef(rtwdev);
}
@@ -203,6 +222,8 @@ enum rtw89_entity_mode rtw89_entity_recalc(struct rtw89_dev *rtwdev)
u8 last;
u8 idx;
+ lockdep_assert_held(&rtwdev->mutex);
+
weight = bitmap_weight(hal->entity_map, NUM_OF_RTW89_SUB_ENTITY);
switch (weight) {
default:
@@ -237,6 +258,9 @@ enum rtw89_entity_mode rtw89_entity_recalc(struct rtw89_dev *rtwdev)
rtw89_assign_entity_chan(rtwdev, idx, &chan);
}
+ if (hal->entity_pause)
+ return rtw89_get_entity_mode(rtwdev);
+
rtw89_set_entity_mode(rtwdev, mode);
return mode;
}
@@ -263,33 +287,1471 @@ static void rtw89_chanctx_notify(struct rtw89_dev *rtwdev,
}
}
+/* This function centrally manages how MCC roles are sorted and iterated,
+ * and it guarantees that ordered_idx is less than NUM_OF_RTW89_MCC_ROLES.
+ * So, if data needs to pass an array indexed by ordered_idx, the array can
+ * be declared with NUM_OF_RTW89_MCC_ROLES entries. The entire iteration
+ * stops immediately as soon as the iterator returns a non-zero value.
+ */
+static
+int rtw89_iterate_mcc_roles(struct rtw89_dev *rtwdev,
+ int (*iterator)(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role,
+ unsigned int ordered_idx,
+ void *data),
+ void *data)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role * const roles[] = {
+ &mcc->role_ref,
+ &mcc->role_aux,
+ };
+ unsigned int idx;
+ int ret;
+
+ BUILD_BUG_ON(ARRAY_SIZE(roles) != NUM_OF_RTW89_MCC_ROLES);
+
+ for (idx = 0; idx < NUM_OF_RTW89_MCC_ROLES; idx++) {
+ ret = iterator(rtwdev, roles[idx], idx, data);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
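A minimal usage sketch of the iterator contract above; the callback and its data struct are hypothetical and only illustrate that returning 0 continues the walk while any non-zero value stops it and is propagated to the caller:

struct example_mcc_count {
	unsigned int num_2ghz;
};

/* Hypothetical iterator: count MCC roles operating on 2 GHz. */
static int example_mcc_count_iter(struct rtw89_dev *rtwdev,
				  struct rtw89_mcc_role *mcc_role,
				  unsigned int ordered_idx, void *data)
{
	struct example_mcc_count *cnt = data;

	if (mcc_role->is_2ghz)
		cnt->num_2ghz++;

	return 0;	/* keep iterating; non-zero would abort the walk */
}

/* Usage:
 *	struct example_mcc_count cnt = {};
 *
 *	rtw89_iterate_mcc_roles(rtwdev, example_mcc_count_iter, &cnt);
 */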
+/* For now, IEEE80211_HW_TIMING_BEACON_ONLY keeps things simple and ensures
+ * the correctness of the MCC calculation logic below. We have noticed that
+ * once the driver declares WIPHY_FLAG_SUPPORTS_MLO, the use of
+ * IEEE80211_HW_TIMING_BEACON_ONLY will be restricted. An alternative will be
+ * provided in the driver once it is ready for MLO.
+ */
+static u32 rtw89_mcc_get_tbtt_ofst(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *role, u64 tsf)
+{
+ struct rtw89_vif *rtwvif = role->rtwvif;
+ struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+ u32 bcn_intvl_us = ieee80211_tu_to_usec(role->beacon_interval);
+ u64 sync_tsf = vif->bss_conf.sync_tsf;
+ u32 remainder;
+
+ if (tsf < sync_tsf) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC get tbtt ofst: tsf might not update yet\n");
+ sync_tsf = 0;
+ }
+
+ div_u64_rem(tsf - sync_tsf, bcn_intvl_us, &remainder);
+
+ return remainder;
+}
+
+static u16 rtw89_mcc_get_bcn_ofst(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ struct rtw89_mac_mcc_tsf_rpt rpt = {};
+ struct rtw89_fw_mcc_tsf_req req = {};
+ u32 bcn_intvl_ref_us = ieee80211_tu_to_usec(ref->beacon_interval);
+ u32 tbtt_ofst_ref, tbtt_ofst_aux;
+ u64 tsf_ref, tsf_aux;
+ int ret;
+
+ req.group = mcc->group;
+ req.macid_x = ref->rtwvif->mac_id;
+ req.macid_y = aux->rtwvif->mac_id;
+ ret = rtw89_fw_h2c_mcc_req_tsf(rtwdev, &req, &rpt);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC h2c failed to request tsf: %d\n", ret);
+ return RTW89_MCC_DFLT_BCN_OFST_TIME;
+ }
+
+ tsf_ref = (u64)rpt.tsf_x_high << 32 | rpt.tsf_x_low;
+ tsf_aux = (u64)rpt.tsf_y_high << 32 | rpt.tsf_y_low;
+ tbtt_ofst_ref = rtw89_mcc_get_tbtt_ofst(rtwdev, ref, tsf_ref);
+ tbtt_ofst_aux = rtw89_mcc_get_tbtt_ofst(rtwdev, aux, tsf_aux);
+
+ while (tbtt_ofst_ref < tbtt_ofst_aux)
+ tbtt_ofst_ref += bcn_intvl_ref_us;
+
+ return (tbtt_ofst_ref - tbtt_ofst_aux) / 1024;
+}
+
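A worked example of the offset math above, with purely illustrative numbers:

/* Both roles beacon every 100 TU (ieee80211_tu_to_usec(100) == 102400 us).
 * Suppose the reported TSFs put the ref role 30 TU past its TBTT and the
 * aux role 70 TU past its TBTT:
 *
 *   tbtt_ofst_ref = 30720 us, tbtt_ofst_aux = 71680 us
 *   ref < aux, so ref += 102400 -> 133120 us
 *   bcn_ofst = (133120 - 71680) / 1024 = 60 TU
 *
 * i.e. the aux TBTT falls 60 TU after the ref TBTT within a ref beacon
 * interval, which is what the pattern calculation below consumes.
 */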
+static
+void rtw89_mcc_role_fw_macid_bitmap_set_bit(struct rtw89_mcc_role *mcc_role,
+ unsigned int bit)
+{
+ unsigned int idx = bit / 8;
+ unsigned int pos = bit % 8;
+
+ if (idx >= ARRAY_SIZE(mcc_role->macid_bitmap))
+ return;
+
+ mcc_role->macid_bitmap[idx] |= BIT(pos);
+}
+
+static void rtw89_mcc_role_macid_sta_iter(void *data, struct ieee80211_sta *sta)
+{
+ struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_mcc_role *mcc_role = data;
+ struct rtw89_vif *target = mcc_role->rtwvif;
+
+ if (rtwvif != target)
+ return;
+
+ rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwsta->mac_id);
+}
+
+static void rtw89_mcc_fill_role_macid_bitmap(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role)
+{
+ struct rtw89_vif *rtwvif = mcc_role->rtwvif;
+
+ rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwvif->mac_id);
+ ieee80211_iterate_stations_atomic(rtwdev->hw,
+ rtw89_mcc_role_macid_sta_iter,
+ mcc_role);
+}
+
+static void rtw89_mcc_fill_role_policy(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role)
+{
+ struct rtw89_mcc_policy *policy = &mcc_role->policy;
+
+ policy->c2h_rpt = RTW89_FW_MCC_C2H_RPT_ALL;
+ policy->tx_null_early = RTW89_MCC_DFLT_TX_NULL_EARLY;
+ policy->in_curr_ch = false;
+ policy->dis_sw_retry = true;
+ policy->sw_retry_count = false;
+
+ if (mcc_role->is_go)
+ policy->dis_tx_null = true;
+ else
+ policy->dis_tx_null = false;
+}
+
+static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role)
+{
+ struct ieee80211_vif *vif = rtwvif_to_vif(mcc_role->rtwvif);
+ struct ieee80211_p2p_noa_desc *noa_desc;
+ u32 bcn_intvl_us = ieee80211_tu_to_usec(mcc_role->beacon_interval);
+ u32 max_toa_us, max_tob_us, max_dur_us;
+ u32 start_time, interval, duration;
+ u64 tsf, tsf_lmt;
+ int ret;
+ int i;
+
+ if (!mcc_role->is_go && !mcc_role->is_gc)
+ return;
+
+ /* find the first periodic NoA */
+ for (i = 0; i < RTW89_P2P_MAX_NOA_NUM; i++) {
+ noa_desc = &vif->bss_conf.p2p_noa_attr.desc[i];
+ if (noa_desc->count == 255)
+ goto fill;
+ }
+
+ return;
+
+fill:
+ start_time = le32_to_cpu(noa_desc->start_time);
+ interval = le32_to_cpu(noa_desc->interval);
+ duration = le32_to_cpu(noa_desc->duration);
+
+ if (interval != bcn_intvl_us) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC role limit: mismatch interval: %d vs. %d\n",
+ interval, bcn_intvl_us);
+ return;
+ }
+
+ ret = rtw89_mac_port_get_tsf(rtwdev, mcc_role->rtwvif, &tsf);
+ if (ret) {
+ rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret);
+ return;
+ }
+
+ tsf_lmt = (tsf & GENMASK_ULL(63, 32)) | start_time;
+ max_toa_us = rtw89_mcc_get_tbtt_ofst(rtwdev, mcc_role, tsf_lmt);
+ max_dur_us = interval - duration;
+ max_tob_us = max_dur_us - max_toa_us;
+
+ if (!max_toa_us || !max_tob_us) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC role limit: hit boundary\n");
+ return;
+ }
+
+ if (max_dur_us < max_toa_us) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC role limit: insufficient duration\n");
+ return;
+ }
+
+ mcc_role->limit.max_toa = max_toa_us / 1024;
+ mcc_role->limit.max_tob = max_tob_us / 1024;
+ mcc_role->limit.max_dur = max_dur_us / 1024;
+ mcc_role->limit.enable = true;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC role limit: max_toa %d, max_tob %d, max_dur %d\n",
+ mcc_role->limit.max_toa, mcc_role->limit.max_tob,
+ mcc_role->limit.max_dur);
+}
+
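An illustrative walk through the NoA-derived limit above; the numbers are made up and only trace the arithmetic in rtw89_mcc_fill_role_limit():

/* Beacon interval 100 TU (102400 us), periodic NoA (count == 255) with
 * interval == 102400 us, duration == 20480 us, and a start_time that lands
 * 30 TU after the role's TBTT:
 *
 *   max_toa_us = 30720                      (TBTT offset at tsf_lmt)
 *   max_dur_us = 102400 - 20480 = 81920     (time left outside the NoA)
 *   max_tob_us = 81920 - 30720  = 51200
 *
 * giving limit.max_toa = 30, limit.max_tob = 50, limit.max_dur = 80 (TU).
 */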
+static int rtw89_mcc_fill_role(struct rtw89_dev *rtwdev,
+ struct rtw89_vif *rtwvif,
+ struct rtw89_mcc_role *role)
+{
+ struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+ const struct rtw89_chan *chan;
+
+ memset(role, 0, sizeof(*role));
+ role->rtwvif = rtwvif;
+ role->beacon_interval = vif->bss_conf.beacon_int;
+
+ if (!role->beacon_interval) {
+ rtw89_warn(rtwdev,
+ "cannot handle MCC role without beacon interval\n");
+ return -EINVAL;
+ }
+
+ role->duration = role->beacon_interval / 2;
+
+ chan = rtw89_chan_get(rtwdev, rtwvif->sub_entity_idx);
+ role->is_2ghz = chan->band_type == RTW89_BAND_2G;
+ role->is_go = rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_GO;
+ role->is_gc = rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT;
+
+ rtw89_mcc_fill_role_macid_bitmap(rtwdev, role);
+ rtw89_mcc_fill_role_policy(rtwdev, role);
+ rtw89_mcc_fill_role_limit(rtwdev, role);
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC role: bcn_intvl %d, is_2ghz %d, is_go %d, is_gc %d\n",
+ role->beacon_interval, role->is_2ghz, role->is_go, role->is_gc);
+ return 0;
+}
+
+static void rtw89_mcc_fill_bt_role(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_bt_role *bt_role = &mcc->bt_role;
+
+ memset(bt_role, 0, sizeof(*bt_role));
+ bt_role->duration = rtw89_coex_query_bt_req_len(rtwdev, RTW89_PHY_0);
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN, "MCC bt role: dur %d\n",
+ bt_role->duration);
+}
+
+struct rtw89_mcc_fill_role_selector {
+ struct rtw89_vif *bind_vif[NUM_OF_RTW89_SUB_ENTITY];
+};
+
+static_assert((u8)NUM_OF_RTW89_SUB_ENTITY >= NUM_OF_RTW89_MCC_ROLES);
+
+static int rtw89_mcc_fill_role_iterator(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role,
+ unsigned int ordered_idx,
+ void *data)
+{
+ struct rtw89_mcc_fill_role_selector *sel = data;
+ struct rtw89_vif *role_vif = sel->bind_vif[ordered_idx];
+ int ret;
+
+ if (!role_vif) {
+ rtw89_warn(rtwdev, "cannot handle MCC without role[%d]\n",
+ ordered_idx);
+ return -EINVAL;
+ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC fill role[%d] with vif <macid %d>\n",
+ ordered_idx, role_vif->mac_id);
+
+ ret = rtw89_mcc_fill_role(rtwdev, role_vif, mcc_role);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+static int rtw89_mcc_fill_all_roles(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_fill_role_selector sel = {};
+ struct rtw89_vif *rtwvif;
+ int ret;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+ if (sel.bind_vif[rtwvif->sub_entity_idx]) {
+ rtw89_warn(rtwdev,
+ "MCC skip extra vif <macid %d> on chanctx[%d]\n",
+ rtwvif->mac_id, rtwvif->sub_entity_idx);
+ continue;
+ }
+
+ sel.bind_vif[rtwvif->sub_entity_idx] = rtwvif;
+ }
+
+ ret = rtw89_iterate_mcc_roles(rtwdev, rtw89_mcc_fill_role_iterator, &sel);
+ if (ret)
+ return ret;
+
+ rtw89_mcc_fill_bt_role(rtwdev);
+ return 0;
+}
+
+static void rtw89_mcc_assign_pattern(struct rtw89_dev *rtwdev,
+ const struct rtw89_mcc_pattern *new)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ struct rtw89_mcc_config *config = &mcc->config;
+ struct rtw89_mcc_pattern *pattern = &config->pattern;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC assign pattern: ref {%d | %d}, aux {%d | %d}\n",
+ new->tob_ref, new->toa_ref, new->tob_aux, new->toa_aux);
+
+ *pattern = *new;
+ memset(&pattern->courtesy, 0, sizeof(pattern->courtesy));
+
+ if (pattern->tob_aux <= 0 || pattern->toa_aux <= 0) {
+ pattern->courtesy.macid_tgt = aux->rtwvif->mac_id;
+ pattern->courtesy.macid_src = ref->rtwvif->mac_id;
+ pattern->courtesy.slot_num = RTW89_MCC_DFLT_COURTESY_SLOT;
+ pattern->courtesy.enable = true;
+ } else if (pattern->tob_ref <= 0 || pattern->toa_ref <= 0) {
+ pattern->courtesy.macid_tgt = ref->rtwvif->mac_id;
+ pattern->courtesy.macid_src = aux->rtwvif->mac_id;
+ pattern->courtesy.slot_num = RTW89_MCC_DFLT_COURTESY_SLOT;
+ pattern->courtesy.enable = true;
+ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC pattern flags: plan %d, courtesy_en %d\n",
+ pattern->plan, pattern->courtesy.enable);
+
+ if (!pattern->courtesy.enable)
+ return;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC pattern courtesy: tgt %d, src %d, slot %d\n",
+ pattern->courtesy.macid_tgt, pattern->courtesy.macid_src,
+ pattern->courtesy.slot_num);
+}
+
+/* The following diagram roughly shows the relationship between the
+ * parameters used for pattern calculation.
+ *
+ * |<    duration ref     >| (if mid bt) |<    duration aux     >|
+ * |< tob ref >|< toa ref >|     ...     |< tob aux >|< toa aux >|
+ *             V                                     V
+ *          tbtt ref                              tbtt aux
+ *             |<           beacon offset          >|
+ *
+ * In loose pattern calculation, we only ensure that at least tob_ref and
+ * toa_ref come out positive. If tob_aux or toa_aux unfortunately turns
+ * out negative, the FW is notified to handle it with the courtesy
+ * mechanism.
+ */
+static void __rtw89_mcc_calc_pattern_loose(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_pattern *ptrn,
+ bool hdl_bt)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ struct rtw89_mcc_config *config = &mcc->config;
+ u16 bcn_ofst = config->beacon_offset;
+ u16 bt_dur_in_mid = 0;
+ u16 max_bcn_ofst;
+ s16 upper, lower;
+ u16 res;
+
+ *ptrn = (typeof(*ptrn)){
+ .plan = hdl_bt ? RTW89_MCC_PLAN_TAIL_BT : RTW89_MCC_PLAN_NO_BT,
+ };
+
+ if (!hdl_bt)
+ goto calc;
+
+ max_bcn_ofst = ref->duration + aux->duration;
+ if (ref->limit.enable)
+ max_bcn_ofst = min_t(u16, max_bcn_ofst,
+ ref->limit.max_toa + aux->duration);
+ else if (aux->limit.enable)
+ max_bcn_ofst = min_t(u16, max_bcn_ofst,
+ ref->duration + aux->limit.max_tob);
+
+ if (bcn_ofst > max_bcn_ofst && bcn_ofst >= mcc->bt_role.duration) {
+ bt_dur_in_mid = mcc->bt_role.duration;
+ ptrn->plan = RTW89_MCC_PLAN_MID_BT;
+ }
+
+calc:
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn_ls: plan %d, bcn_ofst %d\n",
+ ptrn->plan, bcn_ofst);
+
+ res = bcn_ofst - bt_dur_in_mid;
+ upper = min_t(s16, ref->duration, res);
+ lower = 0;
+
+ if (ref->limit.enable) {
+ upper = min_t(s16, upper, ref->limit.max_toa);
+ lower = max_t(s16, lower, ref->duration - ref->limit.max_tob);
+ } else if (aux->limit.enable) {
+ upper = min_t(s16, upper,
+ res - (aux->duration - aux->limit.max_toa));
+ lower = max_t(s16, lower, res - aux->limit.max_tob);
+ }
+
+ if (lower < upper)
+ ptrn->toa_ref = (upper + lower) / 2;
+ else
+ ptrn->toa_ref = lower;
+
+ ptrn->tob_ref = ref->duration - ptrn->toa_ref;
+ ptrn->tob_aux = res - ptrn->toa_ref;
+ ptrn->toa_aux = aux->duration - ptrn->tob_aux;
+}
+
+/* In the strict pattern calculation, we also account for the timing
+ * that the HW might need, i.e. min_tob and min_toa.
+ */
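+/* Worked example with the same illustrative numbers as above (no BT, no
+ * limits): ref duration 60, aux duration 40, beacon offset 40, min_tob 5,
+ * min_toa 10. Then res = 25, toa_ref may range over [10, 35] and tob_aux
+ * over [5, 30]; taking the midpoint of the toa_ref window yields
+ * toa_ref = 22, tob_ref = 38, tob_aux = 18, toa_aux = 22, which respects
+ * both minimums on both roles.
+ */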
+static int __rtw89_mcc_calc_pattern_strict(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_pattern *ptrn)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ struct rtw89_mcc_config *config = &mcc->config;
+ u16 min_tob = RTW89_MCC_EARLY_RX_BCN_TIME;
+ u16 min_toa = RTW89_MCC_MIN_RX_BCN_TIME;
+ u16 bcn_ofst = config->beacon_offset;
+ s16 upper_toa_ref, lower_toa_ref;
+ s16 upper_tob_aux, lower_tob_aux;
+ u16 bt_dur_in_mid;
+ s16 res;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn_st: plan %d, bcn_ofst %d\n",
+ ptrn->plan, bcn_ofst);
+
+ if (ptrn->plan == RTW89_MCC_PLAN_MID_BT)
+ bt_dur_in_mid = mcc->bt_role.duration;
+ else
+ bt_dur_in_mid = 0;
+
+ if (ref->duration < min_tob + min_toa) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn_st: not meet ref dur cond\n");
+ return -EINVAL;
+ }
+
+ if (aux->duration < min_tob + min_toa) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn_st: not meet aux dur cond\n");
+ return -EINVAL;
+ }
+
+ res = bcn_ofst - min_toa - min_tob - bt_dur_in_mid;
+ if (res < 0) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn_st: not meet bcn_ofst cond\n");
+ return -EINVAL;
+ }
+
+ upper_toa_ref = min_t(s16, min_toa + res, ref->duration - min_tob);
+ lower_toa_ref = min_toa;
+ upper_tob_aux = min_t(s16, min_tob + res, aux->duration - min_toa);
+ lower_tob_aux = min_tob;
+
+ if (ref->limit.enable) {
+ if (min_tob > ref->limit.max_tob || min_toa > ref->limit.max_toa) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn_st: conflict ref limit\n");
+ return -EINVAL;
+ }
+
+ upper_toa_ref = min_t(s16, upper_toa_ref, ref->limit.max_toa);
+ lower_toa_ref = max_t(s16, lower_toa_ref,
+ ref->duration - ref->limit.max_tob);
+ } else if (aux->limit.enable) {
+ if (min_tob > aux->limit.max_tob || min_toa > aux->limit.max_toa) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn_st: conflict aux limit\n");
+ return -EINVAL;
+ }
+
+ upper_tob_aux = min_t(s16, upper_tob_aux, aux->limit.max_tob);
+ lower_tob_aux = max_t(s16, lower_tob_aux,
+ aux->duration - aux->limit.max_toa);
+ }
+
+ upper_toa_ref = min_t(s16, upper_toa_ref,
+ bcn_ofst - bt_dur_in_mid - lower_tob_aux);
+ lower_toa_ref = max_t(s16, lower_toa_ref,
+ bcn_ofst - bt_dur_in_mid - upper_tob_aux);
+ if (lower_toa_ref > upper_toa_ref) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn_st: conflict boundary\n");
+ return -EINVAL;
+ }
+
+ ptrn->toa_ref = (upper_toa_ref + lower_toa_ref) / 2;
+ ptrn->tob_ref = ref->duration - ptrn->toa_ref;
+ ptrn->tob_aux = bcn_ofst - ptrn->toa_ref - bt_dur_in_mid;
+ ptrn->toa_aux = aux->duration - ptrn->tob_aux;
+ return 0;
+}
+
+static int rtw89_mcc_calc_pattern(struct rtw89_dev *rtwdev, bool hdl_bt)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ bool sel_plan[NUM_OF_RTW89_MCC_PLAN] = {};
+ struct rtw89_mcc_pattern ptrn;
+ int ret;
+ int i;
+
+ if (ref->limit.enable && aux->limit.enable) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn: not support dual limited roles\n");
+ return -EINVAL;
+ }
+
+ if (ref->limit.enable &&
+ ref->duration > ref->limit.max_tob + ref->limit.max_toa) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn: not fit ref limit\n");
+ return -EINVAL;
+ }
+
+ if (aux->limit.enable &&
+ aux->duration > aux->limit.max_tob + aux->limit.max_toa) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn: not fit aux limit\n");
+ return -EINVAL;
+ }
+
+ if (hdl_bt) {
+ sel_plan[RTW89_MCC_PLAN_TAIL_BT] = true;
+ sel_plan[RTW89_MCC_PLAN_MID_BT] = true;
+ } else {
+ sel_plan[RTW89_MCC_PLAN_NO_BT] = true;
+ }
+
+ for (i = 0; i < NUM_OF_RTW89_MCC_PLAN; i++) {
+ if (!sel_plan[i])
+ continue;
+
+ ptrn = (typeof(ptrn)){
+ .plan = i,
+ };
+
+ ret = __rtw89_mcc_calc_pattern_strict(rtwdev, &ptrn);
+ if (ret)
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC calc ptrn_st with plan %d: fail\n", i);
+ else
+ goto done;
+ }
+
+ __rtw89_mcc_calc_pattern_loose(rtwdev, &ptrn, hdl_bt);
+
+done:
+ rtw89_mcc_assign_pattern(rtwdev, &ptrn);
+ return 0;
+}
+
+static void rtw89_mcc_set_default_pattern(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ struct rtw89_mcc_pattern tmp = {};
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC use default pattern unexpectedly\n");
+
+ tmp.plan = RTW89_MCC_PLAN_NO_BT;
+ tmp.tob_ref = ref->duration / 2;
+ tmp.toa_ref = ref->duration - tmp.tob_ref;
+ tmp.tob_aux = aux->duration / 2;
+ tmp.toa_aux = aux->duration - tmp.tob_aux;
+
+ rtw89_mcc_assign_pattern(rtwdev, &tmp);
+}
+
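+/* Example with illustrative numbers: given an MCC interval of 100 TU and a
+ * requested GO duration of 10 TU, the GO is clamped up to
+ * RTW89_MCC_MIN_GO_DURATION (20 TU) and the STA gets the remaining 80 TU.
+ */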
+static void rtw89_mcc_set_duration_go_sta(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *role_go,
+ struct rtw89_mcc_role *role_sta)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_config *config = &mcc->config;
+ u16 mcc_intvl = config->mcc_interval;
+ u16 dur_go, dur_sta;
+
+ dur_go = clamp_t(u16, role_go->duration, RTW89_MCC_MIN_GO_DURATION,
+ mcc_intvl - RTW89_MCC_MIN_STA_DURATION);
+ if (role_go->limit.enable)
+ dur_go = min(dur_go, role_go->limit.max_dur);
+ dur_sta = mcc_intvl - dur_go;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC set dur: (go, sta) {%d, %d} -> {%d, %d}\n",
+ role_go->duration, role_sta->duration, dur_go, dur_sta);
+
+ role_go->duration = dur_go;
+ role_sta->duration = dur_sta;
+}
+
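+/* Example with illustrative numbers: given an MCC interval of 100 TU, a ref
+ * duration below RTW89_MCC_MIN_STA_DURATION (15 TU) is raised to 15 TU and
+ * the aux gets 85 TU; a role limit, if any, then caps the corresponding
+ * duration and the peer takes the rest of the interval.
+ */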
+static void rtw89_mcc_set_duration_gc_sta(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ struct rtw89_mcc_config *config = &mcc->config;
+ u16 mcc_intvl = config->mcc_interval;
+ u16 dur_ref, dur_aux;
+
+ if (ref->duration < RTW89_MCC_MIN_STA_DURATION) {
+ dur_ref = RTW89_MCC_MIN_STA_DURATION;
+ dur_aux = mcc_intvl - dur_ref;
+ } else if (aux->duration < RTW89_MCC_MIN_STA_DURATION) {
+ dur_aux = RTW89_MCC_MIN_STA_DURATION;
+ dur_ref = mcc_intvl - dur_aux;
+ } else {
+ dur_ref = ref->duration;
+ dur_aux = mcc_intvl - dur_ref;
+ }
+
+ if (ref->limit.enable) {
+ dur_ref = min(dur_ref, ref->limit.max_dur);
+ dur_aux = mcc_intvl - dur_ref;
+ } else if (aux->limit.enable) {
+ dur_aux = min(dur_aux, aux->limit.max_dur);
+ dur_ref = mcc_intvl - dur_aux;
+ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC set dur: (ref, aux) {%d ~ %d} -> {%d ~ %d}\n",
+ ref->duration, aux->duration, dur_ref, dur_aux);
+
+ ref->duration = dur_ref;
+ aux->duration = dur_aux;
+}
+
+struct rtw89_mcc_mod_dur_data {
+ u16 available;
+ struct {
+ u16 dur;
+ u16 room;
+ } parm[NUM_OF_RTW89_MCC_ROLES];
+};
+
+static int rtw89_mcc_mod_dur_get_iterator(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role,
+ unsigned int ordered_idx,
+ void *data)
+{
+ struct rtw89_mcc_mod_dur_data *p = data;
+ u16 min;
+
+ p->parm[ordered_idx].dur = mcc_role->duration;
+
+ if (mcc_role->is_go)
+ min = RTW89_MCC_MIN_GO_DURATION;
+ else
+ min = RTW89_MCC_MIN_STA_DURATION;
+
+ p->parm[ordered_idx].room = max_t(s32, p->parm[ordered_idx].dur - min, 0);
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC mod dur: chk role[%u]: dur %u, min %u, room %u\n",
+ ordered_idx, p->parm[ordered_idx].dur, min,
+ p->parm[ordered_idx].room);
+
+ p->available += p->parm[ordered_idx].room;
+ return 0;
+}
+
+static int rtw89_mcc_mod_dur_put_iterator(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role,
+ unsigned int ordered_idx,
+ void *data)
+{
+ struct rtw89_mcc_mod_dur_data *p = data;
+
+ mcc_role->duration = p->parm[ordered_idx].dur;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC mod dur: set role[%u]: dur %u\n",
+ ordered_idx, p->parm[ordered_idx].dur);
+ return 0;
+}
+
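+/* Worked example with illustrative numbers: two non-GO roles of 50 TU each
+ * within a 100 TU interval leave 35 TU of room apiece (min 15 TU), i.e.
+ * 70 TU available, so a raw BT duration of 30 TU is clamped to 70 / 3 = 23;
+ * half of that (11 TU) is shaved off one wifi role and the other takes the
+ * remaining wifi time, giving 39 + 38 + 23 = 100 TU.
+ */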
+static void rtw89_mcc_mod_duration_dual_2ghz_with_bt(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_config *config = &mcc->config;
+ struct rtw89_mcc_mod_dur_data data = {};
+ u16 mcc_intvl = config->mcc_interval;
+ u16 bt_dur = mcc->bt_role.duration;
+ u16 wifi_dur;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC mod dur (dual 2ghz): mcc_intvl %u, raw bt_dur %u\n",
+ mcc_intvl, bt_dur);
+
+ rtw89_iterate_mcc_roles(rtwdev, rtw89_mcc_mod_dur_get_iterator, &data);
+
+ bt_dur = clamp_t(u16, bt_dur, 1, data.available / 3);
+ wifi_dur = mcc_intvl - bt_dur;
+
+ if (data.parm[0].room <= data.parm[1].room) {
+ data.parm[0].dur -= min_t(u16, bt_dur / 2, data.parm[0].room);
+ data.parm[1].dur = wifi_dur - data.parm[0].dur;
+ } else {
+ data.parm[1].dur -= min_t(u16, bt_dur / 2, data.parm[1].room);
+ data.parm[0].dur = wifi_dur - data.parm[1].dur;
+ }
+
+ rtw89_iterate_mcc_roles(rtwdev, rtw89_mcc_mod_dur_put_iterator, &data);
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN, "MCC mod dur: set bt: dur %u\n", bt_dur);
+ mcc->bt_role.duration = bt_dur;
+}
+
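+/* Example with illustrative numbers: with an MCC interval of 100 TU, a BT
+ * duration of 30 TU and a non-2GHz role of only 20 TU, the non-2GHz slot is
+ * stretched to 30 TU (BT can presumably run during it without hurting the
+ * 2GHz role) and the 2GHz role keeps the remaining 70 TU.
+ */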
+static
+void rtw89_mcc_mod_duration_diff_band_with_bt(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *role_2ghz,
+ struct rtw89_mcc_role *role_non_2ghz)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_config *config = &mcc->config;
+ u16 dur_2ghz, dur_non_2ghz;
+ u16 bt_dur, mcc_intvl;
+
+ dur_2ghz = role_2ghz->duration;
+ dur_non_2ghz = role_non_2ghz->duration;
+ mcc_intvl = config->mcc_interval;
+ bt_dur = mcc->bt_role.duration;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC mod dur (diff band): mcc_intvl %u, bt_dur %u\n",
+ mcc_intvl, bt_dur);
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC mod dur: check dur_2ghz %u, dur_non_2ghz %u\n",
+ dur_2ghz, dur_non_2ghz);
+
+ if (dur_non_2ghz >= bt_dur) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC mod dur: dur_non_2ghz is enough for bt\n");
+ return;
+ }
+
+ dur_non_2ghz = bt_dur;
+ dur_2ghz = mcc_intvl - dur_non_2ghz;
+
+ if (role_non_2ghz->limit.enable) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC mod dur: dur_non_2ghz is limited with max %u\n",
+ role_non_2ghz->limit.max_dur);
+
+ dur_non_2ghz = min(dur_non_2ghz, role_non_2ghz->limit.max_dur);
+ dur_2ghz = mcc_intvl - dur_non_2ghz;
+ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC mod dur: set dur_2ghz %u, dur_non_2ghz %u\n",
+ dur_2ghz, dur_non_2ghz);
+
+ role_2ghz->duration = dur_2ghz;
+ role_non_2ghz->duration = dur_non_2ghz;
+}
+
+static bool rtw89_mcc_duration_decision_on_bt(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ struct rtw89_mcc_bt_role *bt_role = &mcc->bt_role;
+
+ if (!bt_role->duration)
+ return false;
+
+ if (ref->is_2ghz && aux->is_2ghz) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC dual roles are on 2GHz; consider BT duration\n");
+
+ rtw89_mcc_mod_duration_dual_2ghz_with_bt(rtwdev);
+ return true;
+ }
+
+ if (!ref->is_2ghz && !aux->is_2ghz) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC dual roles are not on 2GHz; ignore BT duration\n");
+ return false;
+ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC one role is on 2GHz; modify another for BT duration\n");
+
+ if (ref->is_2ghz)
+ rtw89_mcc_mod_duration_diff_band_with_bt(rtwdev, ref, aux);
+ else
+ rtw89_mcc_mod_duration_diff_band_with_bt(rtwdev, aux, ref);
+
+ return false;
+}
+
+static void rtw89_mcc_sync_tbtt(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *tgt,
+ struct rtw89_mcc_role *src,
+ bool ref_is_src)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_config *config = &mcc->config;
+ u16 beacon_offset_us = ieee80211_tu_to_usec(config->beacon_offset);
+ u32 bcn_intvl_src_us = ieee80211_tu_to_usec(src->beacon_interval);
+ u32 cur_tbtt_ofst_src;
+ u32 tsf_ofst_tgt;
+ u32 remainder;
+ u64 tbtt_tgt;
+ u64 tsf_src;
+ int ret;
+
+ ret = rtw89_mac_port_get_tsf(rtwdev, src->rtwvif, &tsf_src);
+ if (ret) {
+ rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret);
+ return;
+ }
+
+ cur_tbtt_ofst_src = rtw89_mcc_get_tbtt_ofst(rtwdev, src, tsf_src);
+
+ if (ref_is_src)
+ tbtt_tgt = tsf_src - cur_tbtt_ofst_src + beacon_offset_us;
+ else
+ tbtt_tgt = tsf_src - cur_tbtt_ofst_src +
+ (bcn_intvl_src_us - beacon_offset_us);
+
+ div_u64_rem(tbtt_tgt, bcn_intvl_src_us, &remainder);
+ tsf_ofst_tgt = bcn_intvl_src_us - remainder;
+
+ config->sync.macid_tgt = tgt->rtwvif->mac_id;
+ config->sync.macid_src = src->rtwvif->mac_id;
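+ /* tsf_ofst_tgt is in microseconds; sync.offset is in TU (1 TU = 1024 us) */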
+ config->sync.offset = tsf_ofst_tgt / 1024;
+ config->sync.enable = true;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC sync tbtt: tgt %d, src %d, offset %d\n",
+ config->sync.macid_tgt, config->sync.macid_src,
+ config->sync.offset);
+
+ rtw89_mac_port_tsf_sync(rtwdev, tgt->rtwvif, src->rtwvif,
+ config->sync.offset);
+}
+
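+/* Pick the MCC start time: take the next TBTT of the ref role, back it off
+ * by tob_ref so the TBTT falls inside the ref slot, and keep stepping one
+ * beacon interval forward until the start lies at least the trigger lead
+ * time (RTW89_MCC_SHORT_TRIGGER_TIME for a GO, RTW89_MCC_LONG_TRIGGER_TIME
+ * otherwise) after the current TSF.
+ */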
+static int rtw89_mcc_fill_start_tsf(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_config *config = &mcc->config;
+ u32 bcn_intvl_ref_us = ieee80211_tu_to_usec(ref->beacon_interval);
+ u32 tob_ref_us = ieee80211_tu_to_usec(config->pattern.tob_ref);
+ struct rtw89_vif *rtwvif = ref->rtwvif;
+ u64 tsf, start_tsf;
+ u32 cur_tbtt_ofst;
+ u64 min_time;
+ int ret;
+
+ ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif, &tsf);
+ if (ret) {
+ rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret);
+ return ret;
+ }
+
+ min_time = tsf;
+ if (ref->is_go)
+ min_time += ieee80211_tu_to_usec(RTW89_MCC_SHORT_TRIGGER_TIME);
+ else
+ min_time += ieee80211_tu_to_usec(RTW89_MCC_LONG_TRIGGER_TIME);
+
+ cur_tbtt_ofst = rtw89_mcc_get_tbtt_ofst(rtwdev, ref, tsf);
+ start_tsf = tsf - cur_tbtt_ofst + bcn_intvl_ref_us - tob_ref_us;
+ while (start_tsf < min_time)
+ start_tsf += bcn_intvl_ref_us;
+
+ config->start_tsf = start_tsf;
+ return 0;
+}
+
+static int rtw89_mcc_fill_config(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ struct rtw89_mcc_config *config = &mcc->config;
+ bool hdl_bt;
+ int ret;
+
+ memset(config, 0, sizeof(*config));
+
+ switch (mcc->mode) {
+ case RTW89_MCC_MODE_GO_STA:
+ config->beacon_offset = RTW89_MCC_DFLT_BCN_OFST_TIME;
+ if (ref->is_go) {
+ rtw89_mcc_sync_tbtt(rtwdev, ref, aux, false);
+ config->mcc_interval = ref->beacon_interval;
+ rtw89_mcc_set_duration_go_sta(rtwdev, ref, aux);
+ } else {
+ rtw89_mcc_sync_tbtt(rtwdev, aux, ref, true);
+ config->mcc_interval = aux->beacon_interval;
+ rtw89_mcc_set_duration_go_sta(rtwdev, aux, ref);
+ }
+ break;
+ case RTW89_MCC_MODE_GC_STA:
+ config->beacon_offset = rtw89_mcc_get_bcn_ofst(rtwdev);
+ config->mcc_interval = ref->beacon_interval;
+ rtw89_mcc_set_duration_gc_sta(rtwdev);
+ break;
+ default:
+ rtw89_warn(rtwdev, "MCC unknown mode: %d\n", mcc->mode);
+ return -EFAULT;
+ }
+
+ hdl_bt = rtw89_mcc_duration_decision_on_bt(rtwdev);
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN, "MCC handle bt: %d\n", hdl_bt);
+
+ ret = rtw89_mcc_calc_pattern(rtwdev, hdl_bt);
+ if (!ret)
+ goto bottom;
+
+ rtw89_mcc_set_default_pattern(rtwdev);
+
+bottom:
+ return rtw89_mcc_fill_start_tsf(rtwdev);
+}
+
+static int __mcc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *role)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_config *config = &mcc->config;
+ struct rtw89_mcc_pattern *pattern = &config->pattern;
+ struct rtw89_mcc_courtesy *courtesy = &pattern->courtesy;
+ struct rtw89_mcc_policy *policy = &role->policy;
+ struct rtw89_fw_mcc_add_req req = {};
+ const struct rtw89_chan *chan;
+ int ret;
+
+ chan = rtw89_chan_get(rtwdev, role->rtwvif->sub_entity_idx);
+ req.central_ch_seg0 = chan->channel;
+ req.primary_ch = chan->primary_channel;
+ req.bandwidth = chan->band_width;
+ req.ch_band_type = chan->band_type;
+
+ req.macid = role->rtwvif->mac_id;
+ req.group = mcc->group;
+ req.c2h_rpt = policy->c2h_rpt;
+ req.tx_null_early = policy->tx_null_early;
+ req.dis_tx_null = policy->dis_tx_null;
+ req.in_curr_ch = policy->in_curr_ch;
+ req.sw_retry_count = policy->sw_retry_count;
+ req.dis_sw_retry = policy->dis_sw_retry;
+ req.duration = role->duration;
+ req.btc_in_2g = false;
+
+ if (courtesy->enable && courtesy->macid_src == req.macid) {
+ req.courtesy_target = courtesy->macid_tgt;
+ req.courtesy_num = courtesy->slot_num;
+ req.courtesy_en = true;
+ }
+
+ ret = rtw89_fw_h2c_add_mcc(rtwdev, &req);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC h2c failed to add wifi role: %d\n", ret);
+ return ret;
+ }
+
+ ret = rtw89_fw_h2c_mcc_macid_bitmap(rtwdev, mcc->group,
+ role->rtwvif->mac_id,
+ role->macid_bitmap);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC h2c failed to set macid bitmap: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int __mcc_fw_add_bt_role(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_bt_role *bt_role = &mcc->bt_role;
+ struct rtw89_fw_mcc_add_req req = {};
+ int ret;
+
+ req.group = mcc->group;
+ req.duration = bt_role->duration;
+ req.btc_in_2g = true;
+
+ ret = rtw89_fw_h2c_add_mcc(rtwdev, &req);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC h2c failed to add bt role: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
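+/* The order in which the wifi and BT roles are added below appears to
+ * mirror the slot layout of each plan: for TAIL_BT the BT slot is added
+ * after both wifi roles, while for MID_BT it is added in between them.
+ */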
+static int __mcc_fw_start(struct rtw89_dev *rtwdev, bool replace)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ struct rtw89_mcc_config *config = &mcc->config;
+ struct rtw89_mcc_pattern *pattern = &config->pattern;
+ struct rtw89_mcc_sync *sync = &config->sync;
+ struct rtw89_fw_mcc_start_req req = {};
+ int ret;
+
+ if (replace) {
+ req.old_group = mcc->group;
+ req.old_group_action = RTW89_FW_MCC_OLD_GROUP_ACT_REPLACE;
+ mcc->group = RTW89_MCC_NEXT_GROUP(mcc->group);
+ }
+
+ req.group = mcc->group;
+
+ switch (pattern->plan) {
+ case RTW89_MCC_PLAN_TAIL_BT:
+ ret = __mcc_fw_add_role(rtwdev, ref);
+ if (ret)
+ return ret;
+ ret = __mcc_fw_add_role(rtwdev, aux);
+ if (ret)
+ return ret;
+ ret = __mcc_fw_add_bt_role(rtwdev);
+ if (ret)
+ return ret;
+
+ req.btc_in_group = true;
+ break;
+ case RTW89_MCC_PLAN_MID_BT:
+ ret = __mcc_fw_add_role(rtwdev, ref);
+ if (ret)
+ return ret;
+ ret = __mcc_fw_add_bt_role(rtwdev);
+ if (ret)
+ return ret;
+ ret = __mcc_fw_add_role(rtwdev, aux);
+ if (ret)
+ return ret;
+
+ req.btc_in_group = true;
+ break;
+ case RTW89_MCC_PLAN_NO_BT:
+ ret = __mcc_fw_add_role(rtwdev, ref);
+ if (ret)
+ return ret;
+ ret = __mcc_fw_add_role(rtwdev, aux);
+ if (ret)
+ return ret;
+
+ req.btc_in_group = false;
+ break;
+ default:
+ rtw89_warn(rtwdev, "MCC unknown plan: %d\n", pattern->plan);
+ return -EFAULT;
+ }
+
+ if (sync->enable) {
+ ret = rtw89_fw_h2c_mcc_sync(rtwdev, req.group, sync->macid_src,
+ sync->macid_tgt, sync->offset);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC h2c failed to trigger sync: %d\n", ret);
+ return ret;
+ }
+ }
+
+ req.macid = ref->rtwvif->mac_id;
+ req.tsf_high = config->start_tsf >> 32;
+ req.tsf_low = config->start_tsf;
+
+ ret = rtw89_fw_h2c_start_mcc(rtwdev, &req);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC h2c failed to trigger start: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int __mcc_fw_set_duration_no_bt(struct rtw89_dev *rtwdev, bool sync_changed)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_config *config = &mcc->config;
+ struct rtw89_mcc_sync *sync = &config->sync;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ struct rtw89_fw_mcc_duration req = {
+ .group = mcc->group,
+ .btc_in_group = false,
+ .start_macid = ref->rtwvif->mac_id,
+ .macid_x = ref->rtwvif->mac_id,
+ .macid_y = aux->rtwvif->mac_id,
+ .duration_x = ref->duration,
+ .duration_y = aux->duration,
+ .start_tsf_high = config->start_tsf >> 32,
+ .start_tsf_low = config->start_tsf,
+ };
+ int ret;
+
+ ret = rtw89_fw_h2c_mcc_set_duration(rtwdev, &req);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC h2c failed to set duration: %d\n", ret);
+ return ret;
+ }
+
+ if (!sync->enable || !sync_changed)
+ return 0;
+
+ ret = rtw89_fw_h2c_mcc_sync(rtwdev, mcc->group, sync->macid_src,
+ sync->macid_tgt, sync->offset);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC h2c failed to trigger sync: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void rtw89_mcc_handle_beacon_noa(struct rtw89_dev *rtwdev, bool enable)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ struct rtw89_mcc_config *config = &mcc->config;
+ struct rtw89_mcc_pattern *pattern = &config->pattern;
+ struct rtw89_mcc_sync *sync = &config->sync;
+ struct ieee80211_p2p_noa_desc noa_desc = {};
+ u64 start_time = config->start_tsf;
+ u32 interval = config->mcc_interval;
+ struct rtw89_vif *rtwvif_go;
+ u32 duration;
+
+ if (mcc->mode != RTW89_MCC_MODE_GO_STA)
+ return;
+
+ if (ref->is_go) {
+ rtwvif_go = ref->rtwvif;
+ start_time += ieee80211_tu_to_usec(ref->duration);
+ duration = config->mcc_interval - ref->duration;
+ } else if (aux->is_go) {
+ rtwvif_go = aux->rtwvif;
+ start_time += ieee80211_tu_to_usec(pattern->tob_ref) +
+ ieee80211_tu_to_usec(config->beacon_offset) +
+ ieee80211_tu_to_usec(pattern->toa_aux);
+ duration = config->mcc_interval - aux->duration;
+
+ /* convert the time domain from the STA (ref) to the GO (aux) */
+ start_time += ieee80211_tu_to_usec(sync->offset);
+ } else {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC find no GO: skip updating beacon NoA\n");
+ return;
+ }
+
+ rtw89_p2p_noa_renew(rtwvif_go);
+
+ if (enable) {
+ noa_desc.start_time = cpu_to_le32(start_time);
+ noa_desc.interval = cpu_to_le32(ieee80211_tu_to_usec(interval));
+ noa_desc.duration = cpu_to_le32(ieee80211_tu_to_usec(duration));
+ noa_desc.count = 255;
+ rtw89_p2p_noa_append(rtwvif_go, &noa_desc);
+ }
+
+ /* without chanctx, we cannot get beacon from mac80211 stack */
+ if (!rtwvif_go->chanctx_assigned)
+ return;
+
+ rtw89_fw_h2c_update_beacon(rtwdev, rtwvif_go);
+}
+
+static void rtw89_mcc_start_beacon_noa(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+
+ if (mcc->mode != RTW89_MCC_MODE_GO_STA)
+ return;
+
+ if (ref->is_go)
+ rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif, true);
+ else if (aux->is_go)
+ rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif, true);
+
+ rtw89_mcc_handle_beacon_noa(rtwdev, true);
+}
+
+static void rtw89_mcc_stop_beacon_noa(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+
+ if (mcc->mode != RTW89_MCC_MODE_GO_STA)
+ return;
+
+ if (ref->is_go)
+ rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif, false);
+ else if (aux->is_go)
+ rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif, false);
+
+ rtw89_mcc_handle_beacon_noa(rtwdev, false);
+}
+
static int rtw89_mcc_start(struct rtw89_dev *rtwdev)
{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ struct rtw89_mcc_role *aux = &mcc->role_aux;
+ int ret;
+
if (rtwdev->scanning)
rtw89_hw_scan_abort(rtwdev, rtwdev->scan_info.scanning_vif);
rtw89_leave_lps(rtwdev);
rtw89_debug(rtwdev, RTW89_DBG_CHAN, "MCC start\n");
+
+ ret = rtw89_mcc_fill_all_roles(rtwdev);
+ if (ret)
+ return ret;
+
+ if (ref->is_go || aux->is_go)
+ mcc->mode = RTW89_MCC_MODE_GO_STA;
+ else
+ mcc->mode = RTW89_MCC_MODE_GC_STA;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN, "MCC sel mode: %d\n", mcc->mode);
+
+ mcc->group = RTW89_MCC_DFLT_GROUP;
+
+ ret = rtw89_mcc_fill_config(rtwdev);
+ if (ret)
+ return ret;
+
+ ret = __mcc_fw_start(rtwdev, false);
+ if (ret)
+ return ret;
+
rtw89_chanctx_notify(rtwdev, RTW89_CHANCTX_STATE_MCC_START);
+
+ rtw89_mcc_start_beacon_noa(rtwdev);
return 0;
}
static void rtw89_mcc_stop(struct rtw89_dev *rtwdev)
{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role *ref = &mcc->role_ref;
+ int ret;
+
rtw89_debug(rtwdev, RTW89_DBG_CHAN, "MCC stop\n");
+
+ ret = rtw89_fw_h2c_stop_mcc(rtwdev, mcc->group,
+ ref->rtwvif->mac_id, true);
+ if (ret)
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC h2c failed to trigger stop: %d\n", ret);
+
+ ret = rtw89_fw_h2c_del_mcc_group(rtwdev, mcc->group, true);
+ if (ret)
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC h2c failed to delete group: %d\n", ret);
+
rtw89_chanctx_notify(rtwdev, RTW89_CHANCTX_STATE_MCC_STOP);
+
+ rtw89_mcc_stop_beacon_noa(rtwdev);
+}
+
+static int rtw89_mcc_update(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_config *config = &mcc->config;
+ struct rtw89_mcc_config old_cfg = *config;
+ bool sync_changed;
+ int ret;
+
+ if (rtwdev->scanning)
+ rtw89_hw_scan_abort(rtwdev, rtwdev->scan_info.scanning_vif);
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN, "MCC update\n");
+
+ ret = rtw89_mcc_fill_config(rtwdev);
+ if (ret)
+ return ret;
+
+ if (old_cfg.pattern.plan != RTW89_MCC_PLAN_NO_BT ||
+ config->pattern.plan != RTW89_MCC_PLAN_NO_BT) {
+ ret = __mcc_fw_start(rtwdev, true);
+ if (ret)
+ return ret;
+ } else {
+ if (memcmp(&old_cfg.sync, &config->sync, sizeof(old_cfg.sync)) == 0)
+ sync_changed = false;
+ else
+ sync_changed = true;
+
+ ret = __mcc_fw_set_duration_no_bt(rtwdev, sync_changed);
+ if (ret)
+ return ret;
+ }
+
+ rtw89_mcc_handle_beacon_noa(rtwdev, true);
+ return 0;
+}
+
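+/* Example with illustrative numbers: if the configured beacon offset is
+ * 40 TU and the pattern ended up with tob_aux = -5, a measured offset of
+ * 47 TU (drift of 7 TU > tolerance of 5 TU) queues a
+ * RTW89_CHANCTX_BCN_OFFSET_CHANGE so the pattern gets recalculated.
+ */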
+static void rtw89_mcc_track(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_config *config = &mcc->config;
+ struct rtw89_mcc_pattern *pattern = &config->pattern;
+ s16 tolerance;
+ u16 bcn_ofst;
+ u16 diff;
+
+ if (mcc->mode != RTW89_MCC_MODE_GC_STA)
+ return;
+
+ bcn_ofst = rtw89_mcc_get_bcn_ofst(rtwdev);
+ if (bcn_ofst > config->beacon_offset) {
+ diff = bcn_ofst - config->beacon_offset;
+ if (pattern->tob_aux < 0)
+ tolerance = -pattern->tob_aux;
+ else
+ tolerance = pattern->toa_aux;
+ } else {
+ diff = config->beacon_offset - bcn_ofst;
+ if (pattern->toa_aux < 0)
+ tolerance = -pattern->toa_aux;
+ else
+ tolerance = pattern->tob_aux;
+ }
+
+ if (diff <= tolerance)
+ return;
+
+ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_BCN_OFFSET_CHANGE);
+}
+
+static int rtw89_mcc_upd_map_iterator(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role,
+ unsigned int ordered_idx,
+ void *data)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+ struct rtw89_mcc_role upd = {
+ .rtwvif = mcc_role->rtwvif,
+ };
+ int ret;
+
+ if (!mcc_role->is_go)
+ return 0;
+
+ rtw89_mcc_fill_role_macid_bitmap(rtwdev, &upd);
+ if (memcmp(mcc_role->macid_bitmap, upd.macid_bitmap,
+ sizeof(mcc_role->macid_bitmap)) == 0)
+ return 0;
+
+ ret = rtw89_fw_h2c_mcc_macid_bitmap(rtwdev, mcc->group,
+ upd.rtwvif->mac_id,
+ upd.macid_bitmap);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC h2c failed to update macid bitmap: %d\n", ret);
+ return ret;
+ }
+
+ memcpy(mcc_role->macid_bitmap, upd.macid_bitmap,
+ sizeof(mcc_role->macid_bitmap));
+ return 0;
+}
+
+static void rtw89_mcc_update_macid_bitmap(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+
+ if (mcc->mode != RTW89_MCC_MODE_GO_STA)
+ return;
+
+ rtw89_iterate_mcc_roles(rtwdev, rtw89_mcc_upd_map_iterator, NULL);
+}
+
+static int rtw89_mcc_upd_lmt_iterator(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role,
+ unsigned int ordered_idx,
+ void *data)
+{
+ memset(&mcc_role->limit, 0, sizeof(mcc_role->limit));
+ rtw89_mcc_fill_role_limit(rtwdev, mcc_role);
+ return 0;
+}
+
+static void rtw89_mcc_update_limit(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_mcc_info *mcc = &rtwdev->mcc;
+
+ if (mcc->mode != RTW89_MCC_MODE_GC_STA)
+ return;
+
+ rtw89_iterate_mcc_roles(rtwdev, rtw89_mcc_upd_lmt_iterator, NULL);
}
void rtw89_chanctx_work(struct work_struct *work)
{
struct rtw89_dev *rtwdev = container_of(work, struct rtw89_dev,
chanctx_work.work);
+ struct rtw89_hal *hal = &rtwdev->hal;
+ bool update_mcc_pattern = false;
enum rtw89_entity_mode mode;
+ u32 changed = 0;
int ret;
+ int i;
mutex_lock(&rtwdev->mutex);
+ if (hal->entity_pause) {
+ mutex_unlock(&rtwdev->mutex);
+ return;
+ }
+
+ for (i = 0; i < NUM_OF_RTW89_CHANCTX_CHANGES; i++) {
+ if (test_and_clear_bit(i, hal->changes))
+ changed |= BIT(i);
+ }
+
mode = rtw89_get_entity_mode(rtwdev);
switch (mode) {
case RTW89_ENTITY_MODE_MCC_PREPARE:
@@ -300,6 +1762,25 @@ void rtw89_chanctx_work(struct work_struct *work)
if (ret)
rtw89_warn(rtwdev, "failed to start MCC: %d\n", ret);
break;
+ case RTW89_ENTITY_MODE_MCC:
+ if (changed & BIT(RTW89_CHANCTX_BCN_OFFSET_CHANGE) ||
+ changed & BIT(RTW89_CHANCTX_P2P_PS_CHANGE) ||
+ changed & BIT(RTW89_CHANCTX_BT_SLOT_CHANGE) ||
+ changed & BIT(RTW89_CHANCTX_TSF32_TOGGLE_CHANGE))
+ update_mcc_pattern = true;
+ if (changed & BIT(RTW89_CHANCTX_REMOTE_STA_CHANGE))
+ rtw89_mcc_update_macid_bitmap(rtwdev);
+ if (changed & BIT(RTW89_CHANCTX_P2P_PS_CHANGE))
+ rtw89_mcc_update_limit(rtwdev);
+ if (changed & BIT(RTW89_CHANCTX_BT_SLOT_CHANGE))
+ rtw89_mcc_fill_bt_role(rtwdev);
+ if (update_mcc_pattern) {
+ ret = rtw89_mcc_update(rtwdev);
+ if (ret)
+ rtw89_warn(rtwdev, "failed to update MCC: %d\n",
+ ret);
+ }
+ break;
default:
break;
}
@@ -307,8 +1788,10 @@ void rtw89_chanctx_work(struct work_struct *work)
mutex_unlock(&rtwdev->mutex);
}
-void rtw89_queue_chanctx_work(struct rtw89_dev *rtwdev)
+void rtw89_queue_chanctx_change(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_changes change)
{
+ struct rtw89_hal *hal = &rtwdev->hal;
enum rtw89_entity_mode mode;
u32 delay;
@@ -319,6 +1802,15 @@ void rtw89_queue_chanctx_work(struct rtw89_dev *rtwdev)
case RTW89_ENTITY_MODE_MCC_PREPARE:
delay = ieee80211_tu_to_usec(RTW89_CHANCTX_TIME_MCC_PREPARE);
break;
+ case RTW89_ENTITY_MODE_MCC:
+ delay = ieee80211_tu_to_usec(RTW89_CHANCTX_TIME_MCC);
+ break;
+ }
+
+ if (change != RTW89_CHANCTX_CHANGE_DFLT) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN, "set chanctx change %d\n",
+ change);
+ set_bit(change, hal->changes);
}
rtw89_debug(rtwdev, RTW89_DBG_CHAN,
@@ -328,6 +1820,86 @@ void rtw89_queue_chanctx_work(struct rtw89_dev *rtwdev)
usecs_to_jiffies(delay));
}
+void rtw89_queue_chanctx_work(struct rtw89_dev *rtwdev)
+{
+ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_CHANGE_DFLT);
+}
+
+void rtw89_chanctx_track(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_hal *hal = &rtwdev->hal;
+ enum rtw89_entity_mode mode;
+
+ lockdep_assert_held(&rtwdev->mutex);
+
+ if (hal->entity_pause)
+ return;
+
+ mode = rtw89_get_entity_mode(rtwdev);
+ switch (mode) {
+ case RTW89_ENTITY_MODE_MCC:
+ rtw89_mcc_track(rtwdev);
+ break;
+ default:
+ break;
+ }
+}
+
+void rtw89_chanctx_pause(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_pause_reasons rsn)
+{
+ struct rtw89_hal *hal = &rtwdev->hal;
+ enum rtw89_entity_mode mode;
+
+ lockdep_assert_held(&rtwdev->mutex);
+
+ if (hal->entity_pause)
+ return;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN, "chanctx pause (rsn: %d)\n", rsn);
+
+ mode = rtw89_get_entity_mode(rtwdev);
+ switch (mode) {
+ case RTW89_ENTITY_MODE_MCC:
+ rtw89_mcc_stop(rtwdev);
+ break;
+ default:
+ break;
+ }
+
+ hal->entity_pause = true;
+}
+
+void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_hal *hal = &rtwdev->hal;
+ enum rtw89_entity_mode mode;
+ int ret;
+
+ lockdep_assert_held(&rtwdev->mutex);
+
+ if (!hal->entity_pause)
+ return;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN, "chanctx proceed\n");
+
+ hal->entity_pause = false;
+ rtw89_set_channel(rtwdev);
+
+ mode = rtw89_get_entity_mode(rtwdev);
+ switch (mode) {
+ case RTW89_ENTITY_MODE_MCC:
+ ret = rtw89_mcc_start(rtwdev);
+ if (ret)
+ rtw89_warn(rtwdev, "failed to start MCC: %d\n", ret);
+ break;
+ default:
+ break;
+ }
+
+ rtw89_queue_chanctx_work(rtwdev);
+}
+
int rtw89_chanctx_ops_add(struct rtw89_dev *rtwdev,
struct ieee80211_chanctx_conf *ctx)
{
@@ -415,6 +1987,7 @@ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev,
struct rtw89_chanctx_cfg *cfg = (struct rtw89_chanctx_cfg *)ctx->drv_priv;
rtwvif->sub_entity_idx = cfg->idx;
+ rtwvif->chanctx_assigned = true;
return 0;
}
@@ -423,4 +1996,5 @@ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
struct ieee80211_chanctx_conf *ctx)
{
rtwvif->sub_entity_idx = RTW89_SUB_ENTITY_0;
+ rtwvif->chanctx_assigned = false;
}
diff --git a/drivers/net/wireless/realtek/rtw89/chan.h b/drivers/net/wireless/realtek/rtw89/chan.h
index 448e6c5df9f1..9b98d8f4ee9d 100644
--- a/drivers/net/wireless/realtek/rtw89/chan.h
+++ b/drivers/net/wireless/realtek/rtw89/chan.h
@@ -9,6 +9,34 @@
/* The dwell time in TU before doing rtw89_chanctx_work(). */
#define RTW89_CHANCTX_TIME_MCC_PREPARE 100
+#define RTW89_CHANCTX_TIME_MCC 100
+
+/* various MCC setting time in TU */
+#define RTW89_MCC_LONG_TRIGGER_TIME 300
+#define RTW89_MCC_SHORT_TRIGGER_TIME 100
+#define RTW89_MCC_EARLY_TX_BCN_TIME 10
+#define RTW89_MCC_EARLY_RX_BCN_TIME 5
+#define RTW89_MCC_MIN_RX_BCN_TIME 10
+#define RTW89_MCC_DFLT_BCN_OFST_TIME 40
+
+#define RTW89_MCC_MIN_GO_DURATION \
+ (RTW89_MCC_EARLY_TX_BCN_TIME + RTW89_MCC_MIN_RX_BCN_TIME)
+
+#define RTW89_MCC_MIN_STA_DURATION \
+ (RTW89_MCC_EARLY_RX_BCN_TIME + RTW89_MCC_MIN_RX_BCN_TIME)
+
+#define RTW89_MCC_DFLT_GROUP 0
+#define RTW89_MCC_NEXT_GROUP(cur) (((cur) + 1) % 4)
+
+#define RTW89_MCC_DFLT_TX_NULL_EARLY 3
+#define RTW89_MCC_DFLT_COURTESY_SLOT 3
+
+#define NUM_OF_RTW89_MCC_ROLES 2
+
+enum rtw89_chanctx_pause_reasons {
+ RTW89_CHANCTX_PAUSE_REASON_HW_SCAN,
+ RTW89_CHANCTX_PAUSE_REASON_ROC,
+};
static inline bool rtw89_get_entity_state(struct rtw89_dev *rtwdev)
{
@@ -55,6 +83,12 @@ void rtw89_entity_init(struct rtw89_dev *rtwdev);
enum rtw89_entity_mode rtw89_entity_recalc(struct rtw89_dev *rtwdev);
void rtw89_chanctx_work(struct work_struct *work);
void rtw89_queue_chanctx_work(struct rtw89_dev *rtwdev);
+void rtw89_queue_chanctx_change(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_changes change);
+void rtw89_chanctx_track(struct rtw89_dev *rtwdev);
+void rtw89_chanctx_pause(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_pause_reasons rsn);
+void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev);
int rtw89_chanctx_ops_add(struct rtw89_dev *rtwdev,
struct ieee80211_chanctx_conf *ctx);
void rtw89_chanctx_ops_remove(struct rtw89_dev *rtwdev,
diff --git a/drivers/net/wireless/realtek/rtw89/coex.c b/drivers/net/wireless/realtek/rtw89/coex.c
index 4ba8b3df70ae..bdcc172639e4 100644
--- a/drivers/net/wireless/realtek/rtw89/coex.c
+++ b/drivers/net/wireless/realtek/rtw89/coex.c
@@ -237,13 +237,13 @@ struct rtw89_btc_btf_set_report {
struct rtw89_btc_btf_set_slot_table {
u8 fver;
u8 tbl_num;
- u8 buf[];
+ struct rtw89_btc_fbtc_slot tbls[] __counted_by(tbl_num);
} __packed;
struct rtw89_btc_btf_set_mon_reg {
u8 fver;
u8 reg_num;
- u8 buf[];
+ struct rtw89_btc_fbtc_mreg regs[] __counted_by(reg_num);
} __packed;
enum btc_btf_set_cx_policy {
@@ -1821,19 +1821,17 @@ static void rtw89_btc_fw_en_rpt(struct rtw89_dev *rtwdev,
static void rtw89_btc_fw_set_slots(struct rtw89_dev *rtwdev, u8 num,
struct rtw89_btc_fbtc_slot *s)
{
- struct rtw89_btc_btf_set_slot_table *tbl = NULL;
- u8 *ptr = NULL;
- u16 n = 0;
+ struct rtw89_btc_btf_set_slot_table *tbl;
+ u16 n;
- n = sizeof(*s) * num + sizeof(*tbl);
+ n = struct_size(tbl, tbls, num);
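+ /* struct_size() here is sizeof(*tbl) + num * sizeof(tbl->tbls[0]), with overflow saturation */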
tbl = kmalloc(n, GFP_KERNEL);
if (!tbl)
return;
tbl->fver = BTF_SET_SLOT_TABLE_VER;
tbl->tbl_num = num;
- ptr = &tbl->buf[0];
- memcpy(ptr, s, num * sizeof(*s));
+ memcpy(tbl->tbls, s, flex_array_size(tbl, tbls, num));
_send_fw_cmd(rtwdev, BTFC_SET, SET_SLOT_TABLE, tbl, n);
@@ -1845,7 +1843,7 @@ static void btc_fw_set_monreg(struct rtw89_dev *rtwdev)
const struct rtw89_chip_info *chip = rtwdev->chip;
const struct rtw89_btc_ver *ver = rtwdev->btc.ver;
struct rtw89_btc_btf_set_mon_reg *monreg = NULL;
- u8 n, *ptr = NULL, ulen, cxmreg_max;
+ u8 n, ulen, cxmreg_max;
u16 sz = 0;
n = chip->mon_reg_num;
@@ -1866,16 +1864,15 @@ static void btc_fw_set_monreg(struct rtw89_dev *rtwdev)
return;
}
- ulen = sizeof(struct rtw89_btc_fbtc_mreg);
- sz = (ulen * n) + sizeof(*monreg);
+ ulen = sizeof(monreg->regs[0]);
+ sz = struct_size(monreg, regs, n);
monreg = kmalloc(sz, GFP_KERNEL);
if (!monreg)
return;
monreg->fver = ver->fcxmreg;
monreg->reg_num = n;
- ptr = &monreg->buf[0];
- memcpy(ptr, chip->mon_reg, n * ulen);
+ memcpy(monreg->regs, chip->mon_reg, flex_array_size(monreg, regs, n));
rtw89_debug(rtwdev, RTW89_DBG_BTC,
"[BTC], %s(): sz=%d ulen=%d n=%d\n",
__func__, sz, ulen, n);
@@ -3840,7 +3837,7 @@ static void _set_btg_ctrl(struct rtw89_dev *rtwdev)
if (mode == BTC_WLINK_25G_MCC)
return;
- rtw89_ctrl_btg(rtwdev, is_btg);
+ rtw89_ctrl_btg_bt_rx(rtwdev, is_btg, RTW89_PHY_0);
}
struct rtw89_txtime_data {
diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
index 133bf289bacb..3d75165e48be 100644
--- a/drivers/net/wireless/realtek/rtw89/core.c
+++ b/drivers/net/wireless/realtek/rtw89/core.c
@@ -172,13 +172,31 @@ static const struct ieee80211_iface_limit rtw89_iface_limits[] = {
},
};
+static const struct ieee80211_iface_limit rtw89_iface_limits_mcc[] = {
+ {
+ .max = 1,
+ .types = BIT(NL80211_IFTYPE_STATION),
+ },
+ {
+ .max = 1,
+ .types = BIT(NL80211_IFTYPE_P2P_CLIENT) |
+ BIT(NL80211_IFTYPE_P2P_GO),
+ },
+};
+
static const struct ieee80211_iface_combination rtw89_iface_combs[] = {
{
.limits = rtw89_iface_limits,
.n_limits = ARRAY_SIZE(rtw89_iface_limits),
.max_interfaces = 2,
.num_different_channels = 1,
- }
+ },
+ {
+ .limits = rtw89_iface_limits_mcc,
+ .n_limits = ARRAY_SIZE(rtw89_iface_limits_mcc),
+ .max_interfaces = 2,
+ .num_different_channels = 2,
+ },
};
bool rtw89_ra_report_to_bitrate(struct rtw89_dev *rtwdev, u8 rpt_rate, u16 *bitrate)
@@ -1215,6 +1233,136 @@ void rtw89_core_fill_txdesc_v1(struct rtw89_dev *rtwdev,
}
EXPORT_SYMBOL(rtw89_core_fill_txdesc_v1);
+static __le32 rtw89_build_txwd_body0_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_TXD_BODY0_WP_OFFSET_V1, desc_info->wp_offset) |
+ FIELD_PREP(BE_TXD_BODY0_WDINFO_EN, desc_info->en_wd_info) |
+ FIELD_PREP(BE_TXD_BODY0_CH_DMA, desc_info->ch_dma) |
+ FIELD_PREP(BE_TXD_BODY0_HDR_LLC_LEN, desc_info->hdr_llc_len) |
+ FIELD_PREP(BE_TXD_BODY0_WD_PAGE, desc_info->wd_page);
+
+ return cpu_to_le32(dword);
+}
+
+static __le32 rtw89_build_txwd_body1_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_TXD_BODY1_ADDR_INFO_NUM, desc_info->addr_info_nr) |
+ FIELD_PREP(BE_TXD_BODY1_SEC_KEYID, desc_info->sec_keyid) |
+ FIELD_PREP(BE_TXD_BODY1_SEC_TYPE, desc_info->sec_type);
+
+ return cpu_to_le32(dword);
+}
+
+static __le32 rtw89_build_txwd_body2_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_TXD_BODY2_TID_IND, desc_info->tid_indicate) |
+ FIELD_PREP(BE_TXD_BODY2_QSEL, desc_info->qsel) |
+ FIELD_PREP(BE_TXD_BODY2_TXPKTSIZE, desc_info->pkt_size) |
+ FIELD_PREP(BE_TXD_BODY2_AGG_EN, desc_info->agg_en) |
+ FIELD_PREP(BE_TXD_BODY2_BK, desc_info->bk) |
+ FIELD_PREP(BE_TXD_BODY2_MACID, desc_info->mac_id);
+
+ return cpu_to_le32(dword);
+}
+
+static __le32 rtw89_build_txwd_body3_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_TXD_BODY3_WIFI_SEQ, desc_info->seq);
+
+ return cpu_to_le32(dword);
+}
+
+static __le32 rtw89_build_txwd_body4_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_TXD_BODY4_SEC_IV_L0, desc_info->sec_seq[0]) |
+ FIELD_PREP(BE_TXD_BODY4_SEC_IV_L1, desc_info->sec_seq[1]);
+
+ return cpu_to_le32(dword);
+}
+
+static __le32 rtw89_build_txwd_body5_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_TXD_BODY5_SEC_IV_H2, desc_info->sec_seq[2]) |
+ FIELD_PREP(BE_TXD_BODY5_SEC_IV_H3, desc_info->sec_seq[3]) |
+ FIELD_PREP(BE_TXD_BODY5_SEC_IV_H4, desc_info->sec_seq[4]) |
+ FIELD_PREP(BE_TXD_BODY5_SEC_IV_H5, desc_info->sec_seq[5]);
+
+ return cpu_to_le32(dword);
+}
+
+static __le32 rtw89_build_txwd_body7_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_TXD_BODY7_USERATE_SEL, desc_info->use_rate) |
+ FIELD_PREP(BE_TXD_BODY7_DATA_ER, desc_info->er_cap) |
+ FIELD_PREP(BE_TXD_BODY7_DATA_BW_ER, 0) |
+ FIELD_PREP(BE_TXD_BODY7_DATARATE, desc_info->data_rate);
+
+ return cpu_to_le32(dword);
+}
+
+static __le32 rtw89_build_txwd_info0_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_TXD_INFO0_DISDATAFB, desc_info->dis_data_fb) |
+ FIELD_PREP(BE_TXD_INFO0_MULTIPORT_ID, desc_info->port);
+
+ return cpu_to_le32(dword);
+}
+
+static __le32 rtw89_build_txwd_info1_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_TXD_INFO1_MAX_AGG_NUM, desc_info->ampdu_num) |
+ FIELD_PREP(BE_TXD_INFO1_A_CTRL_BSR, desc_info->a_ctrl_bsr) |
+ FIELD_PREP(BE_TXD_INFO1_DATA_RTY_LOWEST_RATE,
+ desc_info->data_retry_lowest_rate);
+
+ return cpu_to_le32(dword);
+}
+
+static __le32 rtw89_build_txwd_info2_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_TXD_INFO2_AMPDU_DENSITY, desc_info->ampdu_density) |
+ FIELD_PREP(BE_TXD_INFO2_FORCE_KEY_EN, desc_info->sec_en) |
+ FIELD_PREP(BE_TXD_INFO2_SEC_CAM_IDX, desc_info->sec_cam_idx);
+
+ return cpu_to_le32(dword);
+}
+
+static __le32 rtw89_build_txwd_info4_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_TXD_INFO4_RTS_EN, 1) |
+ FIELD_PREP(BE_TXD_INFO4_HW_RTS_EN, 1);
+
+ return cpu_to_le32(dword);
+}
+
+void rtw89_core_fill_txdesc_v2(struct rtw89_dev *rtwdev,
+ struct rtw89_tx_desc_info *desc_info,
+ void *txdesc)
+{
+ struct rtw89_txwd_body_v2 *txwd_body = txdesc;
+ struct rtw89_txwd_info_v2 *txwd_info;
+
+ txwd_body->dword0 = rtw89_build_txwd_body0_v2(desc_info);
+ txwd_body->dword1 = rtw89_build_txwd_body1_v2(desc_info);
+ txwd_body->dword2 = rtw89_build_txwd_body2_v2(desc_info);
+ txwd_body->dword3 = rtw89_build_txwd_body3_v2(desc_info);
+ if (desc_info->sec_en) {
+ txwd_body->dword4 = rtw89_build_txwd_body4_v2(desc_info);
+ txwd_body->dword5 = rtw89_build_txwd_body5_v2(desc_info);
+ }
+ txwd_body->dword7 = rtw89_build_txwd_body7_v2(desc_info);
+
+ if (!desc_info->en_wd_info)
+ return;
+
+ txwd_info = (struct rtw89_txwd_info_v2 *)(txwd_body + 1);
+ txwd_info->dword0 = rtw89_build_txwd_info0_v2(desc_info);
+ txwd_info->dword1 = rtw89_build_txwd_info1_v2(desc_info);
+ txwd_info->dword2 = rtw89_build_txwd_info2_v2(desc_info);
+ txwd_info->dword4 = rtw89_build_txwd_info4_v2(desc_info);
+}
+EXPORT_SYMBOL(rtw89_core_fill_txdesc_v2);
+
static __le32 rtw89_build_txwd_fwcmd0_v1(struct rtw89_tx_desc_info *desc_info)
{
u32 dword = FIELD_PREP(AX_RXD_RPKT_LEN_MASK, desc_info->pkt_size) |
@@ -1235,6 +1383,26 @@ void rtw89_core_fill_txdesc_fwcmd_v1(struct rtw89_dev *rtwdev,
}
EXPORT_SYMBOL(rtw89_core_fill_txdesc_fwcmd_v1);
+static __le32 rtw89_build_txwd_fwcmd0_v2(struct rtw89_tx_desc_info *desc_info)
+{
+ u32 dword = FIELD_PREP(BE_RXD_RPKT_LEN_MASK, desc_info->pkt_size) |
+ FIELD_PREP(BE_RXD_RPKT_TYPE_MASK, desc_info->fw_dl ?
+ RTW89_CORE_RX_TYPE_FWDL :
+ RTW89_CORE_RX_TYPE_H2C);
+
+ return cpu_to_le32(dword);
+}
+
+void rtw89_core_fill_txdesc_fwcmd_v2(struct rtw89_dev *rtwdev,
+ struct rtw89_tx_desc_info *desc_info,
+ void *txdesc)
+{
+ struct rtw89_rxdesc_short_v2 *txwd_v2 = (struct rtw89_rxdesc_short_v2 *)txdesc;
+
+ txwd_v2->dword0 = rtw89_build_txwd_fwcmd0_v2(desc_info);
+}
+EXPORT_SYMBOL(rtw89_core_fill_txdesc_fwcmd_v2);
+
static int rtw89_core_rx_process_mac_ppdu(struct rtw89_dev *rtwdev,
struct sk_buff *skb,
struct rtw89_rx_phy_ppdu *phy_ppdu)
@@ -1453,32 +1621,49 @@ static void rtw89_core_rx_process_phy_sts(struct rtw89_dev *rtwdev,
phy_ppdu);
}
-static u8 rtw89_rxdesc_to_nl_he_gi(struct rtw89_dev *rtwdev,
- const struct rtw89_rx_desc_info *desc_info,
- bool rx_status)
+static u8 rtw89_rxdesc_to_nl_he_eht_gi(struct rtw89_dev *rtwdev,
+ u8 desc_info_gi,
+ bool rx_status, bool eht)
{
- switch (desc_info->gi_ltf) {
+ switch (desc_info_gi) {
case RTW89_GILTF_SGI_4XHE08:
case RTW89_GILTF_2XHE08:
case RTW89_GILTF_1XHE08:
- return NL80211_RATE_INFO_HE_GI_0_8;
+ return eht ? NL80211_RATE_INFO_EHT_GI_0_8 :
+ NL80211_RATE_INFO_HE_GI_0_8;
case RTW89_GILTF_2XHE16:
case RTW89_GILTF_1XHE16:
- return NL80211_RATE_INFO_HE_GI_1_6;
+ return eht ? NL80211_RATE_INFO_EHT_GI_1_6 :
+ NL80211_RATE_INFO_HE_GI_1_6;
case RTW89_GILTF_LGI_4XHE32:
- return NL80211_RATE_INFO_HE_GI_3_2;
+ return eht ? NL80211_RATE_INFO_EHT_GI_3_2 :
+ NL80211_RATE_INFO_HE_GI_3_2;
default:
- rtw89_warn(rtwdev, "invalid gi_ltf=%d", desc_info->gi_ltf);
- return rx_status ? NL80211_RATE_INFO_HE_GI_3_2 : U8_MAX;
+ rtw89_warn(rtwdev, "invalid gi_ltf=%d", desc_info_gi);
+ if (rx_status)
+ return eht ? NL80211_RATE_INFO_EHT_GI_3_2 :
+ NL80211_RATE_INFO_HE_GI_3_2;
+ return U8_MAX;
}
}
+static
+bool rtw89_check_rx_statu_gi_match(struct ieee80211_rx_status *status, u8 gi_ltf,
+ bool eht)
+{
+ if (eht)
+ return status->eht.gi == gi_ltf;
+
+ return status->he_gi == gi_ltf;
+}
+
static bool rtw89_core_rx_ppdu_match(struct rtw89_dev *rtwdev,
struct rtw89_rx_desc_info *desc_info,
struct ieee80211_rx_status *status)
{
u8 band = desc_info->bb_sel ? RTW89_PHY_1 : RTW89_PHY_0;
u8 data_rate_mode, bw, rate_idx = MASKBYTE0, gi_ltf;
+ bool eht = false;
u16 data_rate;
bool ret;
@@ -1489,19 +1674,20 @@ static bool rtw89_core_rx_ppdu_match(struct rtw89_dev *rtwdev,
/* rate_idx is still hardware value here */
} else if (data_rate_mode == DATA_RATE_MODE_HT) {
rate_idx = rtw89_get_data_ht_mcs(rtwdev, data_rate);
- } else if (data_rate_mode == DATA_RATE_MODE_VHT) {
- rate_idx = rtw89_get_data_mcs(rtwdev, data_rate);
- } else if (data_rate_mode == DATA_RATE_MODE_HE) {
+ } else if (data_rate_mode == DATA_RATE_MODE_VHT ||
+ data_rate_mode == DATA_RATE_MODE_HE ||
+ data_rate_mode == DATA_RATE_MODE_EHT) {
rate_idx = rtw89_get_data_mcs(rtwdev, data_rate);
} else {
rtw89_warn(rtwdev, "invalid RX rate mode %d\n", data_rate_mode);
}
+ eht = data_rate_mode == DATA_RATE_MODE_EHT;
bw = rtw89_hw_to_rate_info_bw(desc_info->bw);
- gi_ltf = rtw89_rxdesc_to_nl_he_gi(rtwdev, desc_info, false);
+ gi_ltf = rtw89_rxdesc_to_nl_he_eht_gi(rtwdev, desc_info->gi_ltf, false, eht);
ret = rtwdev->ppdu_sts.curr_rx_ppdu_cnt[band] == desc_info->ppdu_cnt &&
status->rate_idx == rate_idx &&
- status->he_gi == gi_ltf &&
+ rtw89_check_rx_statu_gi_match(status, gi_ltf, eht) &&
status->bw == bw;
return ret;
@@ -1521,8 +1707,8 @@ static void rtw89_stats_trigger_frame(struct rtw89_dev *rtwdev,
{
struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
struct ieee80211_trigger *tf = (struct ieee80211_trigger *)skb->data;
- u8 *pos, *end, type;
- u16 aid;
+ u8 *pos, *end, type, tf_bw;
+ u16 aid, tf_rua;
if (!ether_addr_equal(vif->bss_conf.bssid, tf->ta) ||
rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION ||
@@ -1530,7 +1716,7 @@ static void rtw89_stats_trigger_frame(struct rtw89_dev *rtwdev,
return;
type = le64_get_bits(tf->common_info, IEEE80211_TRIGGER_TYPE_MASK);
- if (type != IEEE80211_TRIGGER_TYPE_BASIC)
+ if (type != IEEE80211_TRIGGER_TYPE_BASIC && type != IEEE80211_TRIGGER_TYPE_MU_BAR)
return;
end = (u8 *)tf + skb->len;
@@ -1538,17 +1724,24 @@ static void rtw89_stats_trigger_frame(struct rtw89_dev *rtwdev,
while (end - pos >= RTW89_TF_BASIC_USER_INFO_SZ) {
aid = RTW89_GET_TF_USER_INFO_AID12(pos);
+ tf_rua = RTW89_GET_TF_USER_INFO_RUA(pos);
+ tf_bw = le64_get_bits(tf->common_info, IEEE80211_TRIGGER_ULBW_MASK);
rtw89_debug(rtwdev, RTW89_DBG_TXRX,
- "[TF] aid: %d, ul_mcs: %d, rua: %d\n",
+ "[TF] aid: %d, ul_mcs: %d, rua: %d, bw: %d\n",
aid, RTW89_GET_TF_USER_INFO_UL_MCS(pos),
- RTW89_GET_TF_USER_INFO_RUA(pos));
+ tf_rua, tf_bw);
if (aid == RTW89_TF_PAD)
break;
if (aid == vif->cfg.aid) {
+ enum nl80211_he_ru_alloc rua = rtw89_he_rua_to_ru_alloc(tf_rua >> 1);
+
rtwvif->stats.rx_tf_acc++;
rtwdev->stats.rx_tf_acc++;
+ if (tf_bw == IEEE80211_TRIGGER_ULBW_160_80P80MHZ &&
+ rua <= NL80211_RATE_INFO_HE_RU_ALLOC_106)
+ rtwvif->pwr_diff_en = true;
break;
}
@@ -1714,6 +1907,72 @@ static void rtw89_core_hw_to_sband_rate(struct ieee80211_rx_status *rx_status)
rx_status->rate_idx -= 4;
}
+static const u8 rx_status_bw_to_radiotap_eht_usig[] = {
+ [RATE_INFO_BW_20] = IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_20MHZ,
+ [RATE_INFO_BW_5] = U8_MAX,
+ [RATE_INFO_BW_10] = U8_MAX,
+ [RATE_INFO_BW_40] = IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_40MHZ,
+ [RATE_INFO_BW_80] = IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_80MHZ,
+ [RATE_INFO_BW_160] = IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_160MHZ,
+ [RATE_INFO_BW_HE_RU] = U8_MAX,
+ [RATE_INFO_BW_320] = IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_320MHZ_1,
+ [RATE_INFO_BW_EHT_RU] = U8_MAX,
+};
+
+static void rtw89_core_update_radiotap_eht(struct rtw89_dev *rtwdev,
+ struct sk_buff *skb,
+ struct ieee80211_rx_status *rx_status)
+{
+ struct ieee80211_radiotap_eht_usig *usig;
+ struct ieee80211_radiotap_eht *eht;
+ struct ieee80211_radiotap_tlv *tlv;
+ int eht_len = struct_size(eht, user_info, 1);
+ int usig_len = sizeof(*usig);
+ int len;
+ u8 bw;
+
+ len = sizeof(*tlv) + ALIGN(eht_len, 4) +
+ sizeof(*tlv) + ALIGN(usig_len, 4);
+
+ rx_status->flag |= RX_FLAG_RADIOTAP_TLV_AT_END;
+ skb_reset_mac_header(skb);
+
+ /* EHT */
+ tlv = skb_push(skb, len);
+ memset(tlv, 0, len);
+ tlv->type = cpu_to_le16(IEEE80211_RADIOTAP_EHT);
+ tlv->len = cpu_to_le16(eht_len);
+
+ eht = (struct ieee80211_radiotap_eht *)tlv->data;
+ eht->known = cpu_to_le32(IEEE80211_RADIOTAP_EHT_KNOWN_GI);
+ eht->data[0] =
+ le32_encode_bits(rx_status->eht.gi, IEEE80211_RADIOTAP_EHT_DATA0_GI);
+
+ eht->user_info[0] =
+ cpu_to_le32(IEEE80211_RADIOTAP_EHT_USER_INFO_MCS_KNOWN |
+ IEEE80211_RADIOTAP_EHT_USER_INFO_NSS_KNOWN_O);
+ eht->user_info[0] |=
+ le32_encode_bits(rx_status->rate_idx, IEEE80211_RADIOTAP_EHT_USER_INFO_MCS) |
+ le32_encode_bits(rx_status->nss, IEEE80211_RADIOTAP_EHT_USER_INFO_NSS_O);
+
+ /* U-SIG */
+ tlv = (void *)tlv + sizeof(*tlv) + ALIGN(eht_len, 4);
+ tlv->type = cpu_to_le16(IEEE80211_RADIOTAP_EHT_USIG);
+ tlv->len = cpu_to_le16(usig_len);
+
+ if (rx_status->bw >= ARRAY_SIZE(rx_status_bw_to_radiotap_eht_usig))
+ return;
+
+ bw = rx_status_bw_to_radiotap_eht_usig[rx_status->bw];
+ if (bw == U8_MAX)
+ return;
+
+ usig = (struct ieee80211_radiotap_eht_usig *)tlv->data;
+ usig->common =
+ le32_encode_bits(1, IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_KNOWN) |
+ le32_encode_bits(bw, IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW);
+}
+
static void rtw89_core_update_radiotap(struct rtw89_dev *rtwdev,
struct sk_buff *skb,
struct ieee80211_rx_status *rx_status)
@@ -1732,6 +1991,8 @@ static void rtw89_core_update_radiotap(struct rtw89_dev *rtwdev,
rx_status->flag |= RX_FLAG_RADIOTAP_HE;
he = skb_push(skb, sizeof(*he));
*he = known_he;
+ } else if (rx_status->encoding == RX_ENC_EHT) {
+ rtw89_core_update_radiotap_eht(rtwdev, skb, rx_status);
}
}
@@ -1744,7 +2005,7 @@ static void rtw89_core_rx_to_mac80211(struct rtw89_dev *rtwdev,
struct napi_struct *napi = &rtwdev->napi;
/* In low power mode, napi isn't scheduled. Receive it to netif. */
- if (unlikely(!test_bit(NAPI_STATE_SCHED, &napi->state)))
+ if (unlikely(!napi_is_scheduled(napi)))
napi = NULL;
rtw89_core_hw_to_sband_rate(rx_status);
@@ -1875,6 +2136,71 @@ void rtw89_core_query_rxdesc(struct rtw89_dev *rtwdev,
}
EXPORT_SYMBOL(rtw89_core_query_rxdesc);
+void rtw89_core_query_rxdesc_v2(struct rtw89_dev *rtwdev,
+ struct rtw89_rx_desc_info *desc_info,
+ u8 *data, u32 data_offset)
+{
+ struct rtw89_rxdesc_short_v2 *rxd_s;
+ struct rtw89_rxdesc_long_v2 *rxd_l;
+ u16 shift_len, drv_info_len, phy_rtp_len, hdr_cnv_len;
+
+ rxd_s = (struct rtw89_rxdesc_short_v2 *)(data + data_offset);
+
+ desc_info->pkt_size = le32_get_bits(rxd_s->dword0, BE_RXD_RPKT_LEN_MASK);
+ desc_info->drv_info_size = le32_get_bits(rxd_s->dword0, BE_RXD_DRV_INFO_SZ_MASK);
+ desc_info->phy_rpt_size = le32_get_bits(rxd_s->dword0, BE_RXD_PHY_RPT_SZ_MASK);
+ desc_info->hdr_cnv_size = le32_get_bits(rxd_s->dword0, BE_RXD_HDR_CNV_SZ_MASK);
+ desc_info->shift = le32_get_bits(rxd_s->dword0, BE_RXD_SHIFT_MASK);
+ desc_info->long_rxdesc = le32_get_bits(rxd_s->dword0, BE_RXD_LONG_RXD);
+ desc_info->pkt_type = le32_get_bits(rxd_s->dword0, BE_RXD_RPKT_TYPE_MASK);
+ if (desc_info->pkt_type == RTW89_CORE_RX_TYPE_PPDU_STAT)
+ desc_info->mac_info_valid = true;
+
+ desc_info->frame_type = le32_get_bits(rxd_s->dword2, BE_RXD_TYPE_MASK);
+ desc_info->mac_id = le32_get_bits(rxd_s->dword2, BE_RXD_MAC_ID_MASK);
+ desc_info->addr_cam_valid = le32_get_bits(rxd_s->dword2, BE_RXD_ADDR_CAM_VLD);
+
+ desc_info->icv_err = le32_get_bits(rxd_s->dword3, BE_RXD_ICV_ERR);
+ desc_info->crc32_err = le32_get_bits(rxd_s->dword3, BE_RXD_CRC32_ERR);
+ desc_info->hw_dec = le32_get_bits(rxd_s->dword3, BE_RXD_HW_DEC);
+ desc_info->sw_dec = le32_get_bits(rxd_s->dword3, BE_RXD_SW_DEC);
+ desc_info->addr1_match = le32_get_bits(rxd_s->dword3, BE_RXD_A1_MATCH);
+
+ desc_info->bw = le32_get_bits(rxd_s->dword4, BE_RXD_BW_MASK);
+ desc_info->data_rate = le32_get_bits(rxd_s->dword4, BE_RXD_RX_DATARATE_MASK);
+ desc_info->gi_ltf = le32_get_bits(rxd_s->dword4, BE_RXD_RX_GI_LTF_MASK);
+ desc_info->ppdu_cnt = le32_get_bits(rxd_s->dword4, BE_RXD_PPDU_CNT_MASK);
+ desc_info->ppdu_type = le32_get_bits(rxd_s->dword4, BE_RXD_PPDU_TYPE_MASK);
+
+ desc_info->free_run_cnt = le32_to_cpu(rxd_s->dword5);
+
+ shift_len = desc_info->shift << 1; /* 2-byte unit */
+ drv_info_len = desc_info->drv_info_size << 3; /* 8-byte unit */
+ phy_rtp_len = desc_info->phy_rpt_size << 3; /* 8-byte unit */
+ hdr_cnv_len = desc_info->hdr_cnv_size << 4; /* 16-byte unit */
+ desc_info->offset = data_offset + shift_len + drv_info_len +
+ phy_rtp_len + hdr_cnv_len;
+
+ if (desc_info->long_rxdesc)
+ desc_info->rxd_len = sizeof(struct rtw89_rxdesc_long_v2);
+ else
+ desc_info->rxd_len = sizeof(struct rtw89_rxdesc_short_v2);
+ desc_info->ready = true;
+
+ if (!desc_info->long_rxdesc)
+ return;
+
+ rxd_l = (struct rtw89_rxdesc_long_v2 *)(data + data_offset);
+
+ desc_info->sr_en = le32_get_bits(rxd_l->dword6, BE_RXD_SR_EN);
+ desc_info->user_id = le32_get_bits(rxd_l->dword6, BE_RXD_USER_ID_MASK);
+ desc_info->addr_cam_id = le32_get_bits(rxd_l->dword6, BE_RXD_ADDR_CAM_MASK);
+ desc_info->sec_cam_id = le32_get_bits(rxd_l->dword6, BE_RXD_SEC_CAM_IDX_MASK);
+
+ desc_info->rx_pl_id = le32_get_bits(rxd_l->dword7, BE_RXD_RX_PL_ID_MASK);
+}
+EXPORT_SYMBOL(rtw89_core_query_rxdesc_v2);
+
struct rtw89_core_iter_rx_status {
struct rtw89_dev *rtwdev;
struct ieee80211_rx_status *rx_status;
@@ -1928,6 +2254,8 @@ static void rtw89_core_update_rx_status(struct rtw89_dev *rtwdev,
rtw89_chandef_get(rtwdev, RTW89_SUB_ENTITY_0);
u16 data_rate;
u8 data_rate_mode;
+ bool eht = false;
+ u8 gi;
/* currently using single PHY */
rx_status->freq = chandef->chan->center_freq;
@@ -1975,12 +2303,21 @@ static void rtw89_core_update_rx_status(struct rtw89_dev *rtwdev,
rx_status->encoding = RX_ENC_HE;
rx_status->rate_idx = rtw89_get_data_mcs(rtwdev, data_rate);
rx_status->nss = rtw89_get_data_nss(rtwdev, data_rate) + 1;
+ } else if (data_rate_mode == DATA_RATE_MODE_EHT) {
+ rx_status->encoding = RX_ENC_EHT;
+ rx_status->rate_idx = rtw89_get_data_mcs(rtwdev, data_rate);
+ rx_status->nss = rtw89_get_data_nss(rtwdev, data_rate) + 1;
+ eht = true;
} else {
rtw89_warn(rtwdev, "invalid RX rate mode %d\n", data_rate_mode);
}
/* he_gi is used to match ppdu, so we always fill it. */
- rx_status->he_gi = rtw89_rxdesc_to_nl_he_gi(rtwdev, desc_info, true);
+ gi = rtw89_rxdesc_to_nl_he_eht_gi(rtwdev, desc_info->gi_ltf, true, eht);
+ if (eht)
+ rx_status->eht.gi = gi;
+ else
+ rx_status->he_gi = gi;
rx_status->flag |= RX_FLAG_MACTIME_START;
rx_status->mactime = desc_info->free_run_cnt;
@@ -2491,6 +2828,7 @@ void rtw89_roc_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
rtw89_leave_ips_by_hwflags(rtwdev);
rtw89_leave_lps(rtwdev);
+ rtw89_chanctx_pause(rtwdev, RTW89_CHANCTX_PAUSE_REASON_ROC);
ret = rtw89_core_send_nullfunc(rtwdev, rtwvif, true, true);
if (ret)
@@ -2533,7 +2871,7 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
roc->state = RTW89_ROC_IDLE;
rtw89_config_roc_chandef(rtwdev, rtwvif->sub_entity_idx, NULL);
- rtw89_set_channel(rtwdev);
+ rtw89_chanctx_proceed(rtwdev);
ret = rtw89_core_send_nullfunc(rtwdev, rtwvif, true, false);
if (ret)
rtw89_debug(rtwdev, RTW89_DBG_TXRX,
@@ -2662,6 +3000,27 @@ static void rtw89_enter_lps_track(struct rtw89_dev *rtwdev)
rtw89_vif_enter_lps(rtwdev, rtwvif);
}
+static void rtw89_core_rfk_track(struct rtw89_dev *rtwdev)
+{
+ enum rtw89_entity_mode mode;
+
+ mode = rtw89_get_entity_mode(rtwdev);
+ if (mode == RTW89_ENTITY_MODE_MCC)
+ return;
+
+ rtw89_chip_rfk_track(rtwdev);
+}
+
+void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
+{
+ enum rtw89_entity_mode mode = rtw89_get_entity_mode(rtwdev);
+
+ if (mode == RTW89_ENTITY_MODE_MCC)
+ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_P2P_PS_CHANGE);
+ else
+ rtw89_process_p2p_ps(rtwdev, vif);
+}
+
void rtw89_traffic_stats_init(struct rtw89_dev *rtwdev,
struct rtw89_traffic_stats *stats)
{
@@ -2704,13 +3063,14 @@ static void rtw89_track_work(struct work_struct *work)
rtw89_phy_stat_track(rtwdev);
rtw89_phy_env_monitor_track(rtwdev);
rtw89_phy_dig(rtwdev);
- rtw89_chip_rfk_track(rtwdev);
+ rtw89_core_rfk_track(rtwdev);
rtw89_phy_ra_update(rtwdev);
rtw89_phy_cfo_track(rtwdev);
rtw89_phy_tx_path_div_track(rtwdev);
rtw89_phy_antdiv_track(rtwdev);
rtw89_phy_ul_tb_ctrl_track(rtwdev);
rtw89_tas_track(rtwdev);
+ rtw89_chanctx_track(rtwdev);
if (rtwdev->lps_enabled && !rtwdev->btc.lps)
rtw89_enter_lps_track(rtwdev);
@@ -2923,6 +3283,8 @@ int rtw89_core_sta_add(struct rtw89_dev *rtwdev,
rtw89_warn(rtwdev, "failed to send h2c role info\n");
return ret;
}
+
+ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE);
}
return 0;
@@ -3088,6 +3450,8 @@ int rtw89_core_sta_remove(struct rtw89_dev *rtwdev,
rtw89_warn(rtwdev, "failed to send h2c role info\n");
return ret;
}
+
+ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE);
}
return 0;
@@ -3359,8 +3723,7 @@ static void rtw89_init_he_cap(struct rtw89_dev *rtwdev,
idx++;
}
- sband->iftype_data = iftype_data;
- sband->n_iftype_data = idx;
+ _ieee80211_set_sband_iftype_data(sband, iftype_data, idx);
}
static int rtw89_core_set_supported_band(struct rtw89_dev *rtwdev)
@@ -3405,11 +3768,11 @@ err:
hw->wiphy->bands[NL80211_BAND_5GHZ] = NULL;
hw->wiphy->bands[NL80211_BAND_6GHZ] = NULL;
if (sband_2ghz)
- kfree(sband_2ghz->iftype_data);
+ kfree((__force void *)sband_2ghz->iftype_data);
if (sband_5ghz)
- kfree(sband_5ghz->iftype_data);
+ kfree((__force void *)sband_5ghz->iftype_data);
if (sband_6ghz)
- kfree(sband_6ghz->iftype_data);
+ kfree((__force void *)sband_6ghz->iftype_data);
kfree(sband_2ghz);
kfree(sband_5ghz);
kfree(sband_6ghz);
@@ -3421,11 +3784,11 @@ static void rtw89_core_clr_supported_band(struct rtw89_dev *rtwdev)
struct ieee80211_hw *hw = rtwdev->hw;
if (hw->wiphy->bands[NL80211_BAND_2GHZ])
- kfree(hw->wiphy->bands[NL80211_BAND_2GHZ]->iftype_data);
+ kfree((__force void *)hw->wiphy->bands[NL80211_BAND_2GHZ]->iftype_data);
if (hw->wiphy->bands[NL80211_BAND_5GHZ])
- kfree(hw->wiphy->bands[NL80211_BAND_5GHZ]->iftype_data);
+ kfree((__force void *)hw->wiphy->bands[NL80211_BAND_5GHZ]->iftype_data);
if (hw->wiphy->bands[NL80211_BAND_6GHZ])
- kfree(hw->wiphy->bands[NL80211_BAND_6GHZ]->iftype_data);
+ kfree((__force void *)hw->wiphy->bands[NL80211_BAND_6GHZ]->iftype_data);
kfree(hw->wiphy->bands[NL80211_BAND_2GHZ]);
kfree(hw->wiphy->bands[NL80211_BAND_5GHZ]);
kfree(hw->wiphy->bands[NL80211_BAND_6GHZ]);
@@ -3503,6 +3866,7 @@ void rtw89_core_ntfy_btc_event(struct rtw89_dev *rtwdev, enum rtw89_btc_hmsg eve
bt_req_len = rtw89_coex_query_bt_req_len(rtwdev, RTW89_PHY_0);
rtw89_debug(rtwdev, RTW89_DBG_BTC,
"coex updates BT req len to %d TU\n", bt_req_len);
+ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_BT_SLOT_CHANGE);
break;
default:
if (event < NUM_OF_RTW89_BTC_HMSG)
@@ -3767,28 +4131,34 @@ static void rtw89_core_setup_rfe_parms(struct rtw89_dev *rtwdev)
const struct rtw89_chip_info *chip = rtwdev->chip;
const struct rtw89_rfe_parms_conf *conf = chip->rfe_parms_conf;
struct rtw89_efuse *efuse = &rtwdev->efuse;
+ const struct rtw89_rfe_parms *sel;
u8 rfe_type = efuse->rfe_type;
- if (!conf)
+ if (!conf) {
+ sel = chip->dflt_parms;
goto out;
+ }
while (conf->rfe_parms) {
if (rfe_type == conf->rfe_type) {
- rtwdev->rfe_parms = conf->rfe_parms;
- return;
+ sel = conf->rfe_parms;
+ goto out;
}
conf++;
}
+ sel = chip->dflt_parms;
+
out:
- rtwdev->rfe_parms = chip->dflt_parms;
+ rtwdev->rfe_parms = rtw89_load_rfe_data_from_fw(rtwdev, sel);
+ rtw89_load_txpwr_table(rtwdev, rtwdev->rfe_parms->byr_tbl);
}
static int rtw89_chip_efuse_info_setup(struct rtw89_dev *rtwdev)
{
int ret;
- ret = rtw89_mac_partial_init(rtwdev);
+ ret = rtw89_mac_partial_init(rtwdev, false);
if (ret)
return ret;
@@ -3805,7 +4175,6 @@ static int rtw89_chip_efuse_info_setup(struct rtw89_dev *rtwdev)
return ret;
rtw89_core_setup_phycap(rtwdev);
- rtw89_core_setup_rfe_parms(rtwdev);
rtw89_mac_pwr_off(rtwdev);
@@ -3837,20 +4206,21 @@ int rtw89_chip_info_setup(struct rtw89_dev *rtwdev)
return ret;
}
+ ret = rtw89_chip_efuse_info_setup(rtwdev);
+ if (ret)
+ return ret;
+
ret = rtw89_fw_recognize_elements(rtwdev);
if (ret) {
rtw89_err(rtwdev, "failed to recognize firmware elements\n");
return ret;
}
- ret = rtw89_chip_efuse_info_setup(rtwdev);
- if (ret)
- return ret;
-
ret = rtw89_chip_board_info_setup(rtwdev);
if (ret)
return ret;
+ rtw89_core_setup_rfe_parms(rtwdev);
rtwdev->ps_mode = rtw89_update_ps_mode(rtwdev);
return 0;
@@ -3892,6 +4262,10 @@ static int rtw89_core_register_hw(struct rtw89_dev *rtwdev)
ieee80211_hw_set(hw, SINGLE_SCAN_ON_ALL_BANDS);
ieee80211_hw_set(hw, SUPPORTS_MULTI_BSSID);
ieee80211_hw_set(hw, WANT_MONITOR_VIF);
+
+ /* ref: description of rtw89_mcc_get_tbtt_ofst() in chan.c */
+ ieee80211_hw_set(hw, TIMING_BEACON_ONLY);
+
if (RTW89_CHK_FW_FEATURE(BEACON_FILTER, &rtwdev->fw))
ieee80211_hw_set(hw, CONNECTION_MONITOR);
@@ -4033,7 +4407,11 @@ struct rtw89_dev *rtw89_alloc_ieee80211_hw(struct device *device,
goto err;
hw->wiphy->iface_combinations = rtw89_iface_combs;
- hw->wiphy->n_iface_combinations = ARRAY_SIZE(rtw89_iface_combs);
+
+ if (no_chanctx || chip->support_chanctx_num == 1)
+ hw->wiphy->n_iface_combinations = 1;
+ else
+ hw->wiphy->n_iface_combinations = ARRAY_SIZE(rtw89_iface_combs);
rtwdev = hw->priv;
rtwdev->hw = hw;
@@ -4058,6 +4436,7 @@ EXPORT_SYMBOL(rtw89_alloc_ieee80211_hw);
void rtw89_free_ieee80211_hw(struct rtw89_dev *rtwdev)
{
kfree(rtwdev->ops);
+ kfree(rtwdev->rfe_data);
release_firmware(rtwdev->fw.req.firmware);
ieee80211_free_hw(rtwdev->hw);
}
diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
index 04ce221730f9..91e4d4e79eea 100644
--- a/drivers/net/wireless/realtek/rtw89/core.h
+++ b/drivers/net/wireless/realtek/rtw89/core.h
@@ -37,7 +37,14 @@ extern const struct ieee80211_ops rtw89_ops;
#define RSSI_FACTOR 1
#define RTW89_RSSI_RAW_TO_DBM(rssi) ((s8)((rssi) >> RSSI_FACTOR) - MAX_RSSI)
#define RTW89_TX_DIV_RSSI_RAW_TH (2 << RSSI_FACTOR)
-#define RTW89_RADIOTAP_ROOM ALIGN(sizeof(struct ieee80211_radiotap_he), 64)
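+/* Radiotap room must fit whichever header is larger: the HE header or
+ * the EHT TLVs (EHT info with one user_info plus U-SIG), rounded up to
+ * a 64-byte boundary.
+ */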
+#define RTW89_RADIOTAP_ROOM_HE sizeof(struct ieee80211_radiotap_he)
+#define RTW89_RADIOTAP_ROOM_EHT \
+ (sizeof(struct ieee80211_radiotap_tlv) + \
+ ALIGN(struct_size((struct ieee80211_radiotap_eht *)0, user_info, 1), 4) + \
+ sizeof(struct ieee80211_radiotap_tlv) + \
+ ALIGN(sizeof(struct ieee80211_radiotap_eht_usig), 4))
+#define RTW89_RADIOTAP_ROOM \
+ ALIGN(max(RTW89_RADIOTAP_ROOM_HE, RTW89_RADIOTAP_ROOM_EHT), 64)
#define RTW89_HTC_MASK_VARIANT GENMASK(1, 0)
#define RTW89_HTC_VARIANT_HE 3
@@ -640,12 +647,29 @@ enum rtw89_rate_section {
RTW89_RS_TX_SHAPE_NUM = RTW89_RS_OFDM + 1,
};
+enum rtw89_rate_offset_indexes {
+ RTW89_RATE_OFFSET_HE,
+ RTW89_RATE_OFFSET_VHT,
+ RTW89_RATE_OFFSET_HT,
+ RTW89_RATE_OFFSET_OFDM,
+ RTW89_RATE_OFFSET_CCK,
+ RTW89_RATE_OFFSET_DLRU_EHT,
+ RTW89_RATE_OFFSET_DLRU_HE,
+ RTW89_RATE_OFFSET_EHT,
+ __RTW89_RATE_OFFSET_NUM,
+
+ RTW89_RATE_OFFSET_NUM_AX = RTW89_RATE_OFFSET_CCK + 1,
+ RTW89_RATE_OFFSET_NUM_BE = RTW89_RATE_OFFSET_EHT + 1,
+};
+
enum rtw89_rate_num {
RTW89_RATE_CCK_NUM = 4,
RTW89_RATE_OFDM_NUM = 8,
- RTW89_RATE_MCS_NUM = 12,
RTW89_RATE_HEDCM_NUM = 4, /* for HEDCM MCS0/1/3/4 */
- RTW89_RATE_OFFSET_NUM = 5, /* for HE(HEDCM)/VHT/HT/OFDM/CCK offset */
+
+ RTW89_RATE_MCS_NUM_AX = 12,
+ RTW89_RATE_MCS_NUM_BE = 16,
+ __RTW89_RATE_MCS_NUM = 16,
};
enum rtw89_nss {
@@ -670,6 +694,12 @@ enum rtw89_beamforming_type {
RTW89_BF_NUM,
};
+enum rtw89_ofdma_type {
+ RTW89_NON_OFDMA = 0,
+ RTW89_OFDMA = 1,
+ RTW89_OFDMA_NUM,
+};
+
enum rtw89_regulation_type {
RTW89_WW = 0,
RTW89_ETSI = 1,
@@ -686,6 +716,7 @@ enum rtw89_regulation_type {
RTW89_CN = 12,
RTW89_QATAR = 13,
RTW89_UK = 14,
+ RTW89_THAILAND = 15,
RTW89_REGD_NUM,
};
@@ -715,44 +746,16 @@ enum rtw89_fw_pkt_ofld_type {
struct rtw89_txpwr_byrate {
s8 cck[RTW89_RATE_CCK_NUM];
s8 ofdm[RTW89_RATE_OFDM_NUM];
- s8 mcs[RTW89_NSS_NUM][RTW89_RATE_MCS_NUM];
- s8 hedcm[RTW89_NSS_HEDCM_NUM][RTW89_RATE_HEDCM_NUM];
- s8 offset[RTW89_RATE_OFFSET_NUM];
-};
-
-enum rtw89_bandwidth_section_num {
- RTW89_BW20_SEC_NUM = 8,
- RTW89_BW40_SEC_NUM = 4,
- RTW89_BW80_SEC_NUM = 2,
-};
-
-#define RTW89_TXPWR_LMT_PAGE_SIZE 40
-
-struct rtw89_txpwr_limit {
- s8 cck_20m[RTW89_BF_NUM];
- s8 cck_40m[RTW89_BF_NUM];
- s8 ofdm[RTW89_BF_NUM];
- s8 mcs_20m[RTW89_BW20_SEC_NUM][RTW89_BF_NUM];
- s8 mcs_40m[RTW89_BW40_SEC_NUM][RTW89_BF_NUM];
- s8 mcs_80m[RTW89_BW80_SEC_NUM][RTW89_BF_NUM];
- s8 mcs_160m[RTW89_BF_NUM];
- s8 mcs_40m_0p5[RTW89_BF_NUM];
- s8 mcs_40m_2p5[RTW89_BF_NUM];
-};
-
-#define RTW89_RU_SEC_NUM 8
-
-#define RTW89_TXPWR_LMT_RU_PAGE_SIZE 24
-
-struct rtw89_txpwr_limit_ru {
- s8 ru26[RTW89_RU_SEC_NUM];
- s8 ru52[RTW89_RU_SEC_NUM];
- s8 ru106[RTW89_RU_SEC_NUM];
+ s8 mcs[RTW89_OFDMA_NUM][RTW89_NSS_NUM][__RTW89_RATE_MCS_NUM];
+ s8 hedcm[RTW89_OFDMA_NUM][RTW89_NSS_HEDCM_NUM][RTW89_RATE_HEDCM_NUM];
+ s8 offset[__RTW89_RATE_OFFSET_NUM];
+ s8 trap;
};
struct rtw89_rate_desc {
enum rtw89_nss nss;
enum rtw89_rate_section rs;
+ enum rtw89_ofdma_type ofdma;
u8 idx;
};
@@ -841,9 +844,14 @@ enum rtw89_bandwidth {
RTW89_CHANNEL_WIDTH_40 = 1,
RTW89_CHANNEL_WIDTH_80 = 2,
RTW89_CHANNEL_WIDTH_160 = 3,
- RTW89_CHANNEL_WIDTH_80_80 = 4,
- RTW89_CHANNEL_WIDTH_5 = 5,
- RTW89_CHANNEL_WIDTH_10 = 6,
+ RTW89_CHANNEL_WIDTH_320 = 4,
+
+ /* keep index order above */
+ RTW89_CHANNEL_WIDTH_ORDINARY_NUM = 5,
+
+ RTW89_CHANNEL_WIDTH_80_80 = 5,
+ RTW89_CHANNEL_WIDTH_5 = 6,
+ RTW89_CHANNEL_WIDTH_10 = 7,
};
enum rtw89_ps_mode {
@@ -855,13 +863,16 @@ enum rtw89_ps_mode {
#define RTW89_2G_BW_NUM (RTW89_CHANNEL_WIDTH_40 + 1)
#define RTW89_5G_BW_NUM (RTW89_CHANNEL_WIDTH_160 + 1)
-#define RTW89_6G_BW_NUM (RTW89_CHANNEL_WIDTH_160 + 1)
+#define RTW89_6G_BW_NUM (RTW89_CHANNEL_WIDTH_320 + 1)
+#define RTW89_BYR_BW_NUM (RTW89_CHANNEL_WIDTH_320 + 1)
#define RTW89_PPE_BW_NUM (RTW89_CHANNEL_WIDTH_160 + 1)
enum rtw89_ru_bandwidth {
RTW89_RU26 = 0,
RTW89_RU52 = 1,
RTW89_RU106 = 2,
+ RTW89_RU52_26 = 3,
+ RTW89_RU106_26 = 4,
RTW89_RU_NUM,
};
@@ -898,6 +909,7 @@ struct rtw89_chan {
u32 freq;
enum rtw89_subband subband_type;
enum rtw89_sc_offset pri_ch_idx;
+ u8 pri_sb_idx;
};
struct rtw89_chan_rcd {
@@ -926,6 +938,12 @@ struct rtw89_port_reg {
u32 bcn_cnt_tmr;
u32 tsftr_l;
u32 tsftr_h;
+ u32 md_tsft;
+ u32 bss_color;
+ u32 mbssid;
+ u32 mbssid_drop;
+ u32 tsf_sync;
+ u32 hiq_win[RTW89_PORT_NUM];
};
struct rtw89_txwd_body {
@@ -948,6 +966,17 @@ struct rtw89_txwd_body_v1 {
__le32 dword7;
} __packed;
+struct rtw89_txwd_body_v2 {
+ __le32 dword0;
+ __le32 dword1;
+ __le32 dword2;
+ __le32 dword3;
+ __le32 dword4;
+ __le32 dword5;
+ __le32 dword6;
+ __le32 dword7;
+} __packed;
+
struct rtw89_txwd_info {
__le32 dword0;
__le32 dword1;
@@ -957,10 +986,23 @@ struct rtw89_txwd_info {
__le32 dword5;
} __packed;
+struct rtw89_txwd_info_v2 {
+ __le32 dword0;
+ __le32 dword1;
+ __le32 dword2;
+ __le32 dword3;
+ __le32 dword4;
+ __le32 dword5;
+ __le32 dword6;
+ __le32 dword7;
+} __packed;
+
struct rtw89_rx_desc_info {
u16 pkt_size;
u8 pkt_type;
u8 drv_info_size;
+ u8 phy_rpt_size;
+ u8 hdr_cnv_size;
u8 shift;
u8 wl_hd_iv_len;
bool long_rxdesc;
@@ -999,6 +1041,15 @@ struct rtw89_rxdesc_short {
__le32 dword3;
} __packed;
+struct rtw89_rxdesc_short_v2 {
+ __le32 dword0;
+ __le32 dword1;
+ __le32 dword2;
+ __le32 dword3;
+ __le32 dword4;
+ __le32 dword5;
+} __packed;
+
struct rtw89_rxdesc_long {
__le32 dword0;
__le32 dword1;
@@ -1010,6 +1061,19 @@ struct rtw89_rxdesc_long {
__le32 dword7;
} __packed;
+struct rtw89_rxdesc_long_v2 {
+ __le32 dword0;
+ __le32 dword1;
+ __le32 dword2;
+ __le32 dword3;
+ __le32 dword4;
+ __le32 dword5;
+ __le32 dword6;
+ __le32 dword7;
+ __le32 dword8;
+ __le32 dword9;
+} __packed;
+
struct rtw89_tx_desc_info {
u16 pkt_size;
u8 wp_offset;
@@ -2677,6 +2741,7 @@ enum rtw89_ra_mode {
RTW89_RA_MODE_HT = BIT(2),
RTW89_RA_MODE_VHT = BIT(3),
RTW89_RA_MODE_HE = BIT(4),
+ RTW89_RA_MODE_EHT = BIT(5),
};
enum rtw89_ra_report_mode {
@@ -2684,6 +2749,7 @@ enum rtw89_ra_report_mode {
RTW89_RA_RPT_MODE_HT,
RTW89_RA_RPT_MODE_VHT,
RTW89_RA_RPT_MODE_HE,
+ RTW89_RA_RPT_MODE_EHT,
};
enum rtw89_dig_noisy_level {
@@ -2930,6 +2996,7 @@ struct rtw89_vif {
struct list_head list;
struct rtw89_dev *rtwdev;
struct rtw89_roc roc;
+ bool chanctx_assigned; /* only valid when running with chanctx_ops */
enum rtw89_sub_entity_idx sub_entity_idx;
enum rtw89_reg_6ghz_power reg_6ghz_power;
@@ -2957,6 +3024,8 @@ struct rtw89_vif {
bool is_hesta;
bool last_a_ctrl;
bool dyn_tb_bedge_en;
+ bool pre_pwr_diff_en;
+ bool pwr_diff_en;
u8 def_tri_idx;
u32 tdls_peer;
struct work_struct update_beacon_work;
@@ -3032,6 +3101,7 @@ struct rtw89_hci_info {
struct rtw89_chip_ops {
int (*enable_bb_rf)(struct rtw89_dev *rtwdev);
int (*disable_bb_rf)(struct rtw89_dev *rtwdev);
+ void (*bb_preinit)(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx);
void (*bb_reset)(struct rtw89_dev *rtwdev,
enum rtw89_phy_idx phy_idx);
void (*bb_sethw)(struct rtw89_dev *rtwdev);
@@ -3066,11 +3136,13 @@ struct rtw89_chip_ops {
enum rtw89_phy_idx phy_idx);
int (*init_txpwr_unit)(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx);
u8 (*get_thermal)(struct rtw89_dev *rtwdev, enum rtw89_rf_path rf_path);
- void (*ctrl_btg)(struct rtw89_dev *rtwdev, bool btg);
+ void (*ctrl_btg_bt_rx)(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx);
void (*query_ppdu)(struct rtw89_dev *rtwdev,
struct rtw89_rx_phy_ppdu *phy_ppdu,
struct ieee80211_rx_status *status);
- void (*bb_ctrl_btc_preagc)(struct rtw89_dev *rtwdev, bool bt_en);
+ void (*ctrl_nbtg_bt_tx)(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx);
void (*cfg_txrx_path)(struct rtw89_dev *rtwdev);
void (*set_txpwr_ul_tb_offset)(struct rtw89_dev *rtwdev,
s8 pw_ofst, enum rtw89_mac_idx mac_idx);
@@ -3294,10 +3366,17 @@ struct rtw89_txpwr_rule_6ghz {
[RTW89_6G_CH_NUM];
};
+struct rtw89_tx_shape {
+ const u8 (*lmt)[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM][RTW89_REGD_NUM];
+ const u8 (*lmt_ru)[RTW89_BAND_NUM][RTW89_REGD_NUM];
+};
+
struct rtw89_rfe_parms {
+ const struct rtw89_txpwr_table *byr_tbl;
struct rtw89_txpwr_rule_2ghz rule_2ghz;
struct rtw89_txpwr_rule_5ghz rule_5ghz;
struct rtw89_txpwr_rule_6ghz rule_6ghz;
+ struct rtw89_tx_shape tx_shape;
};
struct rtw89_rfe_parms_conf {
@@ -3305,6 +3384,95 @@ struct rtw89_rfe_parms_conf {
u8 rfe_type;
};
+#define RTW89_TXPWR_CONF_DFLT_RFE_TYPE 0x0
+
+struct rtw89_txpwr_conf {
+ u8 rfe_type;
+ u8 ent_sz;
+ u32 num_ents;
+ const void *data;
+};
+
+#define rtw89_txpwr_conf_valid(conf) (!!(conf)->data)
+
+#define rtw89_for_each_in_txpwr_conf(entry, cursor, conf) \
+ for (typecheck(const void *, cursor), (cursor) = (conf)->data, \
+ memcpy(&(entry), cursor, \
+ min_t(u8, sizeof(entry), (conf)->ent_sz)); \
+ (cursor) < (conf)->data + (conf)->num_ents * (conf)->ent_sz; \
+ (cursor) += (conf)->ent_sz, \
+ memcpy(&(entry), cursor, \
+ min_t(u8, sizeof(entry), (conf)->ent_sz)))
+
+struct rtw89_txpwr_byrate_data {
+ struct rtw89_txpwr_conf conf;
+ struct rtw89_txpwr_table tbl;
+};
+
+struct rtw89_txpwr_lmt_2ghz_data {
+ struct rtw89_txpwr_conf conf;
+ s8 v[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
+ [RTW89_RS_LMT_NUM][RTW89_BF_NUM]
+ [RTW89_REGD_NUM][RTW89_2G_CH_NUM];
+};
+
+struct rtw89_txpwr_lmt_5ghz_data {
+ struct rtw89_txpwr_conf conf;
+ s8 v[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
+ [RTW89_RS_LMT_NUM][RTW89_BF_NUM]
+ [RTW89_REGD_NUM][RTW89_5G_CH_NUM];
+};
+
+struct rtw89_txpwr_lmt_6ghz_data {
+ struct rtw89_txpwr_conf conf;
+ s8 v[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
+ [RTW89_RS_LMT_NUM][RTW89_BF_NUM]
+ [RTW89_REGD_NUM][NUM_OF_RTW89_REG_6GHZ_POWER]
+ [RTW89_6G_CH_NUM];
+};
+
+struct rtw89_txpwr_lmt_ru_2ghz_data {
+ struct rtw89_txpwr_conf conf;
+ s8 v[RTW89_RU_NUM][RTW89_NTX_NUM]
+ [RTW89_REGD_NUM][RTW89_2G_CH_NUM];
+};
+
+struct rtw89_txpwr_lmt_ru_5ghz_data {
+ struct rtw89_txpwr_conf conf;
+ s8 v[RTW89_RU_NUM][RTW89_NTX_NUM]
+ [RTW89_REGD_NUM][RTW89_5G_CH_NUM];
+};
+
+struct rtw89_txpwr_lmt_ru_6ghz_data {
+ struct rtw89_txpwr_conf conf;
+ s8 v[RTW89_RU_NUM][RTW89_NTX_NUM]
+ [RTW89_REGD_NUM][NUM_OF_RTW89_REG_6GHZ_POWER]
+ [RTW89_6G_CH_NUM];
+};
+
+struct rtw89_tx_shape_lmt_data {
+ struct rtw89_txpwr_conf conf;
+ u8 v[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM][RTW89_REGD_NUM];
+};
+
+struct rtw89_tx_shape_lmt_ru_data {
+ struct rtw89_txpwr_conf conf;
+ u8 v[RTW89_BAND_NUM][RTW89_REGD_NUM];
+};
+
+struct rtw89_rfe_data {
+ struct rtw89_txpwr_byrate_data byrate;
+ struct rtw89_txpwr_lmt_2ghz_data lmt_2ghz;
+ struct rtw89_txpwr_lmt_5ghz_data lmt_5ghz;
+ struct rtw89_txpwr_lmt_6ghz_data lmt_6ghz;
+ struct rtw89_txpwr_lmt_ru_2ghz_data lmt_ru_2ghz;
+ struct rtw89_txpwr_lmt_ru_5ghz_data lmt_ru_5ghz;
+ struct rtw89_txpwr_lmt_ru_6ghz_data lmt_ru_6ghz;
+ struct rtw89_tx_shape_lmt_data tx_shape_lmt;
+ struct rtw89_tx_shape_lmt_ru_data tx_shape_lmt_ru;
+ struct rtw89_rfe_parms rfe_parms;
+};
+
struct rtw89_page_regs {
u32 hci_fc_ctrl;
u32 ch_page_ctrl;
@@ -3428,6 +3596,7 @@ enum rtw89_chanctx_state {
enum rtw89_chanctx_callbacks {
RTW89_CHANCTX_CALLBACK_PLACEHOLDER,
+ RTW89_CHANCTX_CALLBACK_RFK,
NUM_OF_RTW89_CHANCTX_CALLBACKS,
};
@@ -3446,6 +3615,7 @@ struct rtw89_chip_info {
const char *fw_basename;
u8 fw_format_max;
bool try_ce_fw;
+ u8 bbmcu_nr;
u32 needed_fw_elms;
u32 fifo_size;
bool small_fifo_size;
@@ -3462,7 +3632,8 @@ struct rtw89_chip_info {
u8 support_bands;
bool support_bw160;
bool support_unii4;
- bool support_ul_tb_ctrl;
+ bool ul_tb_waveform_ctrl;
+ bool ul_tb_pwr_diff;
bool hw_sec_hdr;
u8 rf_path_num;
u8 tx_nss;
@@ -3490,7 +3661,6 @@ struct rtw89_chip_info {
const struct rtw89_phy_table *rf_table[RF_PATH_MAX];
const struct rtw89_phy_table *nctl_table;
const struct rtw89_rfk_tbl *nctl_post_table;
- const struct rtw89_txpwr_table *byr_table;
const struct rtw89_phy_dig_gain_table *dig_table;
const struct rtw89_dig_regs *dig_regs;
const struct rtw89_phy_tssi_dbw_table *tssi_dbw_table;
@@ -3527,6 +3697,7 @@ struct rtw89_chip_info {
u32 hci_func_en_addr;
u32 h2c_desc_size;
u32 txwd_body_size;
+ u32 txwd_info_size;
u32 h2c_ctrl_reg;
const u32 *h2c_regs;
struct rtw89_reg_def h2c_counter_reg;
@@ -3540,6 +3711,7 @@ struct rtw89_chip_info {
u8 dcfo_comp_sft;
const struct rtw89_imr_info *imr_info;
const struct rtw89_rrsr_cfgs *rrsr_cfgs;
+ struct rtw89_reg_def bss_clr_vld;
u32 bss_clr_map_reg;
u32 dma_ch_mask;
u32 edcca_lvl_reg;
@@ -3610,6 +3782,14 @@ struct rtw89_mac_info {
struct rtw89_wait_info fw_ofld_wait;
};
+enum rtw89_fwdl_check_type {
+ RTW89_FWDL_CHECK_FREERTOS_DONE,
+ RTW89_FWDL_CHECK_WCPU_FWDL_DONE,
+ RTW89_FWDL_CHECK_DCPU_FWDL_DONE,
+ RTW89_FWDL_CHECK_BB0_FWDL_DONE,
+ RTW89_FWDL_CHECK_BB1_FWDL_DONE,
+};
+
enum rtw89_fw_type {
RTW89_FW_NORMAL = 1,
RTW89_FW_WOWLAN = 3,
@@ -3776,6 +3956,17 @@ struct rtw89_chanctx_cfg {
enum rtw89_sub_entity_idx idx;
};
+enum rtw89_chanctx_changes {
+ RTW89_CHANCTX_REMOTE_STA_CHANGE,
+ RTW89_CHANCTX_BCN_OFFSET_CHANGE,
+ RTW89_CHANCTX_P2P_PS_CHANGE,
+ RTW89_CHANCTX_BT_SLOT_CHANGE,
+ RTW89_CHANCTX_TSF32_TOGGLE_CHANGE,
+
+ NUM_OF_RTW89_CHANCTX_CHANGES,
+ RTW89_CHANCTX_CHANGE_DFLT = NUM_OF_RTW89_CHANCTX_CHANGES,
+};
+
enum rtw89_entity_mode {
RTW89_ENTITY_MODE_SCC,
RTW89_ENTITY_MODE_MCC_PREPARE,
@@ -3807,11 +3998,13 @@ struct rtw89_hal {
bool support_igi;
atomic_t roc_entity_idx;
+ DECLARE_BITMAP(changes, NUM_OF_RTW89_CHANCTX_CHANGES);
DECLARE_BITMAP(entity_map, NUM_OF_RTW89_SUB_ENTITY);
struct rtw89_sub_entity sub[NUM_OF_RTW89_SUB_ENTITY];
struct cfg80211_chan_def roc_chandef;
bool entity_active;
+ bool entity_pause;
enum rtw89_entity_mode entity_mode;
u32 edcca_bak;
@@ -4357,8 +4550,95 @@ struct rtw89_wow_param {
u8 pattern_cnt;
};
+struct rtw89_mcc_limit {
+ bool enable;
+ u16 max_tob; /* TU; max time offset behind */
+ u16 max_toa; /* TU; max time offset ahead */
+ u16 max_dur; /* TU */
+};
+
+struct rtw89_mcc_policy {
+ u8 c2h_rpt;
+ u8 tx_null_early;
+ u8 dis_tx_null;
+ u8 in_curr_ch;
+ u8 dis_sw_retry;
+ u8 sw_retry_count;
+};
+
+struct rtw89_mcc_role {
+ struct rtw89_vif *rtwvif;
+ struct rtw89_mcc_policy policy;
+ struct rtw89_mcc_limit limit;
+
+ /* byte-array in LE order for FW */
+ u8 macid_bitmap[BITS_TO_BYTES(RTW89_MAX_MAC_ID_NUM)];
+
+ u16 duration; /* TU */
+ u16 beacon_interval; /* TU */
+ bool is_2ghz;
+ bool is_go;
+ bool is_gc;
+};
+
+struct rtw89_mcc_bt_role {
+ u16 duration; /* TU */
+};
+
+struct rtw89_mcc_courtesy {
+ bool enable;
+ u8 slot_num;
+ u8 macid_src;
+ u8 macid_tgt;
+};
+
+enum rtw89_mcc_plan {
+ RTW89_MCC_PLAN_TAIL_BT,
+ RTW89_MCC_PLAN_MID_BT,
+ RTW89_MCC_PLAN_NO_BT,
+
+ NUM_OF_RTW89_MCC_PLAN,
+};
+
+struct rtw89_mcc_pattern {
+ s16 tob_ref; /* TU; time offset behind the reference role */
+ s16 toa_ref; /* TU; time offset ahead of the reference role */
+ s16 tob_aux; /* TU; time offset behind the auxiliary role */
+ s16 toa_aux; /* TU; time offset ahead of the auxiliary role */
+
+ enum rtw89_mcc_plan plan;
+ struct rtw89_mcc_courtesy courtesy;
+};
+
+struct rtw89_mcc_sync {
+ bool enable;
+ u16 offset; /* TU */
+ u8 macid_src;
+ u8 macid_tgt;
+};
+
+struct rtw89_mcc_config {
+ struct rtw89_mcc_pattern pattern;
+ struct rtw89_mcc_sync sync;
+ u64 start_tsf;
+ u16 mcc_interval; /* TU */
+ u16 beacon_offset; /* TU */
+};
+
+enum rtw89_mcc_mode {
+ RTW89_MCC_MODE_GO_STA,
+ RTW89_MCC_MODE_GC_STA,
+};
+
struct rtw89_mcc_info {
struct rtw89_wait_info wait;
+
+ u8 group;
+ enum rtw89_mcc_mode mode;
+ struct rtw89_mcc_role role_ref; /* reference role */
+ struct rtw89_mcc_role role_aux; /* auxiliary role */
+ struct rtw89_mcc_bt_role bt_role;
+ struct rtw89_mcc_config config;
};
struct rtw89_dev {
@@ -4378,6 +4658,7 @@ struct rtw89_dev {
struct rtw89_hci_info hci;
struct rtw89_efuse efuse;
struct rtw89_traffic_stats stats;
+ struct rtw89_rfe_data *rfe_data;
/* ensures exclusive access from mac80211 callbacks */
struct mutex mutex;
@@ -4425,7 +4706,7 @@ struct rtw89_dev {
bool is_bt_iqk_timeout;
struct rtw89_fem_info fem;
- struct rtw89_txpwr_byrate byr[RTW89_BAND_NUM];
+ struct rtw89_txpwr_byrate byr[RTW89_BAND_NUM][RTW89_BYR_BW_NUM];
struct rtw89_tssi_info tssi;
struct rtw89_power_trim_info pwr_trim;
@@ -4911,6 +5192,30 @@ enum rtw89_bandwidth nl_to_rtw89_bandwidth(enum nl80211_chan_width width)
}
static inline
+enum nl80211_he_ru_alloc rtw89_he_rua_to_ru_alloc(u16 rua)
+{
+ switch (rua) {
+ default:
+ WARN(1, "Invalid RU allocation: %d\n", rua);
+ fallthrough;
+ case 0 ... 36:
+ return NL80211_RATE_INFO_HE_RU_ALLOC_26;
+ case 37 ... 52:
+ return NL80211_RATE_INFO_HE_RU_ALLOC_52;
+ case 53 ... 60:
+ return NL80211_RATE_INFO_HE_RU_ALLOC_106;
+ case 61 ... 64:
+ return NL80211_RATE_INFO_HE_RU_ALLOC_242;
+ case 65 ... 66:
+ return NL80211_RATE_INFO_HE_RU_ALLOC_484;
+ case 67:
+ return NL80211_RATE_INFO_HE_RU_ALLOC_996;
+ case 68:
+ return NL80211_RATE_INFO_HE_RU_ALLOC_2x996;
+ }
+}
+
+static inline
struct rtw89_addr_cam_entry *rtw89_get_addr_cam_of(struct rtw89_vif *rtwvif,
struct rtw89_sta *rtwsta)
{
@@ -5017,6 +5322,15 @@ static inline void rtw89_chip_rfe_gpio(struct rtw89_dev *rtwdev)
chip->ops->rfe_gpio(rtwdev);
}
+static inline
+void rtw89_chip_bb_preinit(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx)
+{
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+ if (chip->ops->bb_preinit)
+ chip->ops->bb_preinit(rtwdev, phy_idx);
+}
+
static inline void rtw89_chip_bb_sethw(struct rtw89_dev *rtwdev)
{
const struct rtw89_chip_info *chip = rtwdev->chip;
@@ -5112,13 +5426,13 @@ static inline void rtw89_chip_query_ppdu(struct rtw89_dev *rtwdev,
chip->ops->query_ppdu(rtwdev, phy_ppdu, status);
}
-static inline void rtw89_chip_bb_ctrl_btc_preagc(struct rtw89_dev *rtwdev,
- bool bt_en)
+static inline void rtw89_ctrl_nbtg_bt_tx(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx)
{
const struct rtw89_chip_info *chip = rtwdev->chip;
- if (chip->ops->bb_ctrl_btc_preagc)
- chip->ops->bb_ctrl_btc_preagc(rtwdev, bt_en);
+ if (chip->ops->ctrl_nbtg_bt_tx)
+ chip->ops->ctrl_nbtg_bt_tx(rtwdev, en, phy_idx);
}
static inline void rtw89_chip_cfg_txrx_path(struct rtw89_dev *rtwdev)
@@ -5156,12 +5470,13 @@ static inline u8 rtw89_regd_get(struct rtw89_dev *rtwdev, u8 band)
return regd->txpwr_regd[band];
}
-static inline void rtw89_ctrl_btg(struct rtw89_dev *rtwdev, bool btg)
+static inline void rtw89_ctrl_btg_bt_rx(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx)
{
const struct rtw89_chip_info *chip = rtwdev->chip;
- if (chip->ops->ctrl_btg)
- chip->ops->ctrl_btg(rtwdev, btg);
+ if (chip->ops->ctrl_btg_bt_rx)
+ chip->ops->ctrl_btg_bt_rx(rtwdev, en, phy_idx);
}
static inline
@@ -5333,15 +5648,24 @@ void rtw89_core_fill_txdesc(struct rtw89_dev *rtwdev,
void rtw89_core_fill_txdesc_v1(struct rtw89_dev *rtwdev,
struct rtw89_tx_desc_info *desc_info,
void *txdesc);
+void rtw89_core_fill_txdesc_v2(struct rtw89_dev *rtwdev,
+ struct rtw89_tx_desc_info *desc_info,
+ void *txdesc);
void rtw89_core_fill_txdesc_fwcmd_v1(struct rtw89_dev *rtwdev,
struct rtw89_tx_desc_info *desc_info,
void *txdesc);
+void rtw89_core_fill_txdesc_fwcmd_v2(struct rtw89_dev *rtwdev,
+ struct rtw89_tx_desc_info *desc_info,
+ void *txdesc);
void rtw89_core_rx(struct rtw89_dev *rtwdev,
struct rtw89_rx_desc_info *desc_info,
struct sk_buff *skb);
void rtw89_core_query_rxdesc(struct rtw89_dev *rtwdev,
struct rtw89_rx_desc_info *desc_info,
u8 *data, u32 data_offset);
+void rtw89_core_query_rxdesc_v2(struct rtw89_dev *rtwdev,
+ struct rtw89_rx_desc_info *desc_info,
+ u8 *data, u32 data_offset);
void rtw89_core_napi_start(struct rtw89_dev *rtwdev);
void rtw89_core_napi_stop(struct rtw89_dev *rtwdev);
void rtw89_core_napi_init(struct rtw89_dev *rtwdev);
@@ -5410,6 +5734,7 @@ void rtw89_core_scan_complete(struct rtw89_dev *rtwdev,
struct ieee80211_vif *vif, bool hw_scan);
void rtw89_reg_6ghz_power_recalc(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif, bool active);
+void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif);
void rtw89_core_ntfy_btc_event(struct rtw89_dev *rtwdev, enum rtw89_btc_hmsg event);
#endif
diff --git a/drivers/net/wireless/realtek/rtw89/debug.c b/drivers/net/wireless/realtek/rtw89/debug.c
index d162e64f6064..a3f795d240ea 100644
--- a/drivers/net/wireless/realtek/rtw89/debug.c
+++ b/drivers/net/wireless/realtek/rtw89/debug.c
@@ -367,7 +367,11 @@ static int rtw89_debug_priv_rf_reg_dump_get(struct seq_file *m, void *v)
}
struct txpwr_ent {
- const char *txt;
+ bool nested;
+ union {
+ const char *txt;
+ const struct txpwr_ent *ptr;
+ };
u8 len;
};
@@ -379,6 +383,12 @@ struct txpwr_map {
u32 addr_to_1ss;
};
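+/* A nested entry expands into another txpwr_ent array, letting the BE
+ * tables reuse common sub-blocks (e.g. the per-MCS rows) under each
+ * bandwidth or TX-count heading.
+ */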
+#define __GEN_TXPWR_ENT_NESTED(_e) \
+ { .nested = true, .ptr = __txpwr_ent_##_e, \
+ .len = ARRAY_SIZE(__txpwr_ent_##_e) }
+
+#define __GEN_TXPWR_ENT0(_t) { .len = 0, .txt = _t }
+
#define __GEN_TXPWR_ENT2(_t, _e0, _e1) \
{ .len = 2, .txt = _t "\t- " _e0 " " _e1 }
@@ -390,7 +400,7 @@ struct txpwr_map {
_e0 " " _e1 " " _e2 " " _e3 " " \
_e4 " " _e5 " " _e6 " " _e7 }
-static const struct txpwr_ent __txpwr_ent_byr[] = {
+static const struct txpwr_ent __txpwr_ent_byr_ax[] = {
__GEN_TXPWR_ENT4("CCK ", "1M ", "2M ", "5.5M ", "11M "),
__GEN_TXPWR_ENT4("LEGACY ", "6M ", "9M ", "12M ", "18M "),
__GEN_TXPWR_ENT4("LEGACY ", "24M ", "36M ", "48M ", "54M "),
@@ -406,18 +416,18 @@ static const struct txpwr_ent __txpwr_ent_byr[] = {
__GEN_TXPWR_ENT4("HEDCM_2NSS", "MCS0 ", "MCS1 ", "MCS3 ", "MCS4 "),
};
-static_assert((ARRAY_SIZE(__txpwr_ent_byr) * 4) ==
+static_assert((ARRAY_SIZE(__txpwr_ent_byr_ax) * 4) ==
(R_AX_PWR_BY_RATE_MAX - R_AX_PWR_BY_RATE + 4));
-static const struct txpwr_map __txpwr_map_byr = {
- .ent = __txpwr_ent_byr,
- .size = ARRAY_SIZE(__txpwr_ent_byr),
+static const struct txpwr_map __txpwr_map_byr_ax = {
+ .ent = __txpwr_ent_byr_ax,
+ .size = ARRAY_SIZE(__txpwr_ent_byr_ax),
.addr_from = R_AX_PWR_BY_RATE,
.addr_to = R_AX_PWR_BY_RATE_MAX,
.addr_to_1ss = R_AX_PWR_BY_RATE_1SS_MAX,
};
-static const struct txpwr_ent __txpwr_ent_lmt[] = {
+static const struct txpwr_ent __txpwr_ent_lmt_ax[] = {
/* 1TX */
__GEN_TXPWR_ENT2("CCK_1TX_20M ", "NON_BF", "BF"),
__GEN_TXPWR_ENT2("CCK_1TX_40M ", "NON_BF", "BF"),
@@ -462,18 +472,18 @@ static const struct txpwr_ent __txpwr_ent_lmt[] = {
__GEN_TXPWR_ENT2("MCS_2TX_40M_2p5", "NON_BF", "BF"),
};
-static_assert((ARRAY_SIZE(__txpwr_ent_lmt) * 2) ==
+static_assert((ARRAY_SIZE(__txpwr_ent_lmt_ax) * 2) ==
(R_AX_PWR_LMT_MAX - R_AX_PWR_LMT + 4));
-static const struct txpwr_map __txpwr_map_lmt = {
- .ent = __txpwr_ent_lmt,
- .size = ARRAY_SIZE(__txpwr_ent_lmt),
+static const struct txpwr_map __txpwr_map_lmt_ax = {
+ .ent = __txpwr_ent_lmt_ax,
+ .size = ARRAY_SIZE(__txpwr_ent_lmt_ax),
.addr_from = R_AX_PWR_LMT,
.addr_to = R_AX_PWR_LMT_MAX,
.addr_to_1ss = R_AX_PWR_LMT_1SS_MAX,
};
-static const struct txpwr_ent __txpwr_ent_lmt_ru[] = {
+static const struct txpwr_ent __txpwr_ent_lmt_ru_ax[] = {
/* 1TX */
__GEN_TXPWR_ENT8("1TX", "RU26__0", "RU26__1", "RU26__2", "RU26__3",
"RU26__4", "RU26__5", "RU26__6", "RU26__7"),
@@ -490,25 +500,207 @@ static const struct txpwr_ent __txpwr_ent_lmt_ru[] = {
"RU106_4", "RU106_5", "RU106_6", "RU106_7"),
};
-static_assert((ARRAY_SIZE(__txpwr_ent_lmt_ru) * 8) ==
+static_assert((ARRAY_SIZE(__txpwr_ent_lmt_ru_ax) * 8) ==
(R_AX_PWR_RU_LMT_MAX - R_AX_PWR_RU_LMT + 4));
-static const struct txpwr_map __txpwr_map_lmt_ru = {
- .ent = __txpwr_ent_lmt_ru,
- .size = ARRAY_SIZE(__txpwr_ent_lmt_ru),
+static const struct txpwr_map __txpwr_map_lmt_ru_ax = {
+ .ent = __txpwr_ent_lmt_ru_ax,
+ .size = ARRAY_SIZE(__txpwr_ent_lmt_ru_ax),
.addr_from = R_AX_PWR_RU_LMT,
.addr_to = R_AX_PWR_RU_LMT_MAX,
.addr_to_1ss = R_AX_PWR_RU_LMT_1SS_MAX,
};
-static u8 __print_txpwr_ent(struct seq_file *m, const struct txpwr_ent *ent,
- const s8 *buf, const u8 cur)
+static const struct txpwr_ent __txpwr_ent_byr_mcs_be[] = {
+ __GEN_TXPWR_ENT4("MCS_1SS ", "MCS0 ", "MCS1 ", "MCS2 ", "MCS3 "),
+ __GEN_TXPWR_ENT4("MCS_1SS ", "MCS4 ", "MCS5 ", "MCS6 ", "MCS7 "),
+ __GEN_TXPWR_ENT4("MCS_1SS ", "MCS8 ", "MCS9 ", "MCS10", "MCS11"),
+ __GEN_TXPWR_ENT2("MCS_1SS ", "MCS12 ", "MCS13 \t"),
+ __GEN_TXPWR_ENT4("HEDCM_1SS ", "MCS0 ", "MCS1 ", "MCS3 ", "MCS4 "),
+ __GEN_TXPWR_ENT4("DLRU_MCS_1SS ", "MCS0 ", "MCS1 ", "MCS2 ", "MCS3 "),
+ __GEN_TXPWR_ENT4("DLRU_MCS_1SS ", "MCS4 ", "MCS5 ", "MCS6 ", "MCS7 "),
+ __GEN_TXPWR_ENT4("DLRU_MCS_1SS ", "MCS8 ", "MCS9 ", "MCS10", "MCS11"),
+ __GEN_TXPWR_ENT2("DLRU_MCS_1SS ", "MCS12 ", "MCS13 \t"),
+ __GEN_TXPWR_ENT4("DLRU_HEDCM_1SS", "MCS0 ", "MCS1 ", "MCS3 ", "MCS4 "),
+ __GEN_TXPWR_ENT4("MCS_2SS ", "MCS0 ", "MCS1 ", "MCS2 ", "MCS3 "),
+ __GEN_TXPWR_ENT4("MCS_2SS ", "MCS4 ", "MCS5 ", "MCS6 ", "MCS7 "),
+ __GEN_TXPWR_ENT4("MCS_2SS ", "MCS8 ", "MCS9 ", "MCS10", "MCS11"),
+ __GEN_TXPWR_ENT2("MCS_2SS ", "MCS12 ", "MCS13 \t"),
+ __GEN_TXPWR_ENT4("HEDCM_2SS ", "MCS0 ", "MCS1 ", "MCS3 ", "MCS4 "),
+ __GEN_TXPWR_ENT4("DLRU_MCS_2SS ", "MCS0 ", "MCS1 ", "MCS2 ", "MCS3 "),
+ __GEN_TXPWR_ENT4("DLRU_MCS_2SS ", "MCS4 ", "MCS5 ", "MCS6 ", "MCS7 "),
+ __GEN_TXPWR_ENT4("DLRU_MCS_2SS ", "MCS8 ", "MCS9 ", "MCS10", "MCS11"),
+ __GEN_TXPWR_ENT2("DLRU_MCS_2SS ", "MCS12 ", "MCS13 \t"),
+ __GEN_TXPWR_ENT4("DLRU_HEDCM_2SS", "MCS0 ", "MCS1 ", "MCS3 ", "MCS4 "),
+};
+
+static const struct txpwr_ent __txpwr_ent_byr_be[] = {
+ __GEN_TXPWR_ENT0("BW20"),
+ __GEN_TXPWR_ENT4("CCK ", "1M ", "2M ", "5.5M ", "11M "),
+ __GEN_TXPWR_ENT4("LEGACY ", "6M ", "9M ", "12M ", "18M "),
+ __GEN_TXPWR_ENT4("LEGACY ", "24M ", "36M ", "48M ", "54M "),
+ __GEN_TXPWR_ENT2("EHT ", "MCS14 ", "MCS15 \t"),
+ __GEN_TXPWR_ENT2("DLRU_EHT ", "MCS14 ", "MCS15 \t"),
+ __GEN_TXPWR_ENT_NESTED(byr_mcs_be),
+
+ __GEN_TXPWR_ENT0("BW40"),
+ __GEN_TXPWR_ENT4("CCK ", "1M ", "2M ", "5.5M ", "11M "),
+ __GEN_TXPWR_ENT4("LEGACY ", "6M ", "9M ", "12M ", "18M "),
+ __GEN_TXPWR_ENT4("LEGACY ", "24M ", "36M ", "48M ", "54M "),
+ __GEN_TXPWR_ENT2("EHT ", "MCS14 ", "MCS15 \t"),
+ __GEN_TXPWR_ENT2("DLRU_EHT ", "MCS14 ", "MCS15 \t"),
+ __GEN_TXPWR_ENT_NESTED(byr_mcs_be),
+
+ /* no CCK section from BW80 onward */
+ __GEN_TXPWR_ENT0("BW80"),
+ __GEN_TXPWR_ENT4("LEGACY ", "6M ", "9M ", "12M ", "18M "),
+ __GEN_TXPWR_ENT4("LEGACY ", "24M ", "36M ", "48M ", "54M "),
+ __GEN_TXPWR_ENT2("EHT ", "MCS14 ", "MCS15 \t"),
+ __GEN_TXPWR_ENT2("DLRU_EHT ", "MCS14 ", "MCS15 \t"),
+ __GEN_TXPWR_ENT_NESTED(byr_mcs_be),
+
+ __GEN_TXPWR_ENT0("BW160"),
+ __GEN_TXPWR_ENT4("LEGACY ", "6M ", "9M ", "12M ", "18M "),
+ __GEN_TXPWR_ENT4("LEGACY ", "24M ", "36M ", "48M ", "54M "),
+ __GEN_TXPWR_ENT2("EHT ", "MCS14 ", "MCS15 \t"),
+ __GEN_TXPWR_ENT2("DLRU_EHT ", "MCS14 ", "MCS15 \t"),
+ __GEN_TXPWR_ENT_NESTED(byr_mcs_be),
+
+ __GEN_TXPWR_ENT0("BW320"),
+ __GEN_TXPWR_ENT4("LEGACY ", "6M ", "9M ", "12M ", "18M "),
+ __GEN_TXPWR_ENT4("LEGACY ", "24M ", "36M ", "48M ", "54M "),
+ __GEN_TXPWR_ENT2("EHT ", "MCS14 ", "MCS15 \t"),
+ __GEN_TXPWR_ENT2("DLRU_EHT ", "MCS14 ", "MCS15 \t"),
+ __GEN_TXPWR_ENT_NESTED(byr_mcs_be),
+};
+
+static const struct txpwr_map __txpwr_map_byr_be = {
+ .ent = __txpwr_ent_byr_be,
+ .size = ARRAY_SIZE(__txpwr_ent_byr_be),
+ .addr_from = R_BE_PWR_BY_RATE,
+ .addr_to = R_BE_PWR_BY_RATE_MAX,
+ .addr_to_1ss = 0, /* not supported */
+};
+
+static const struct txpwr_ent __txpwr_ent_lmt_mcs_be[] = {
+ __GEN_TXPWR_ENT2("MCS_20M_0 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_1 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_2 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_3 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_4 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_5 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_6 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_7 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_8 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_9 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_10 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_11 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_12 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_13 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_14 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_20M_15 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_0 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_1 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_2 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_3 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_4 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_5 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_6 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_7 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_80M_0 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_80M_1 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_80M_2 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_80M_3 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_160M_0 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_160M_1 ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_320M ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_0p5", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_2p5", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_4p5", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("MCS_40M_6p5", "NON_BF", "BF"),
+};
+
+static const struct txpwr_ent __txpwr_ent_lmt_be[] = {
+ __GEN_TXPWR_ENT0("1TX"),
+ __GEN_TXPWR_ENT2("CCK_20M ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("CCK_40M ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("OFDM ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT_NESTED(lmt_mcs_be),
+
+ __GEN_TXPWR_ENT0("2TX"),
+ __GEN_TXPWR_ENT2("CCK_20M ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("CCK_40M ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT2("OFDM ", "NON_BF", "BF"),
+ __GEN_TXPWR_ENT_NESTED(lmt_mcs_be),
+};
+
+static const struct txpwr_map __txpwr_map_lmt_be = {
+ .ent = __txpwr_ent_lmt_be,
+ .size = ARRAY_SIZE(__txpwr_ent_lmt_be),
+ .addr_from = R_BE_PWR_LMT,
+ .addr_to = R_BE_PWR_LMT_MAX,
+ .addr_to_1ss = 0, /* not supported */
+};
+
+static const struct txpwr_ent __txpwr_ent_lmt_ru_indexes_be[] = {
+ __GEN_TXPWR_ENT8("RU26 ", "IDX_0 ", "IDX_1 ", "IDX_2 ", "IDX_3 ",
+ "IDX_4 ", "IDX_5 ", "IDX_6 ", "IDX_7 "),
+ __GEN_TXPWR_ENT8("RU26 ", "IDX_8 ", "IDX_9 ", "IDX_10", "IDX_11",
+ "IDX_12", "IDX_13", "IDX_14", "IDX_15"),
+ __GEN_TXPWR_ENT8("RU52 ", "IDX_0 ", "IDX_1 ", "IDX_2 ", "IDX_3 ",
+ "IDX_4 ", "IDX_5 ", "IDX_6 ", "IDX_7 "),
+ __GEN_TXPWR_ENT8("RU52 ", "IDX_8 ", "IDX_9 ", "IDX_10", "IDX_11",
+ "IDX_12", "IDX_13", "IDX_14", "IDX_15"),
+ __GEN_TXPWR_ENT8("RU106 ", "IDX_0 ", "IDX_1 ", "IDX_2 ", "IDX_3 ",
+ "IDX_4 ", "IDX_5 ", "IDX_6 ", "IDX_7 "),
+ __GEN_TXPWR_ENT8("RU106 ", "IDX_8 ", "IDX_9 ", "IDX_10", "IDX_11",
+ "IDX_12", "IDX_13", "IDX_14", "IDX_15"),
+ __GEN_TXPWR_ENT8("RU52_26 ", "IDX_0 ", "IDX_1 ", "IDX_2 ", "IDX_3 ",
+ "IDX_4 ", "IDX_5 ", "IDX_6 ", "IDX_7 "),
+ __GEN_TXPWR_ENT8("RU52_26 ", "IDX_8 ", "IDX_9 ", "IDX_10", "IDX_11",
+ "IDX_12", "IDX_13", "IDX_14", "IDX_15"),
+ __GEN_TXPWR_ENT8("RU106_26", "IDX_0 ", "IDX_1 ", "IDX_2 ", "IDX_3 ",
+ "IDX_4 ", "IDX_5 ", "IDX_6 ", "IDX_7 "),
+ __GEN_TXPWR_ENT8("RU106_26", "IDX_8 ", "IDX_9 ", "IDX_10", "IDX_11",
+ "IDX_12", "IDX_13", "IDX_14", "IDX_15"),
+};
+
+static const struct txpwr_ent __txpwr_ent_lmt_ru_be[] = {
+ __GEN_TXPWR_ENT0("1TX"),
+ __GEN_TXPWR_ENT_NESTED(lmt_ru_indexes_be),
+
+ __GEN_TXPWR_ENT0("2TX"),
+ __GEN_TXPWR_ENT_NESTED(lmt_ru_indexes_be),
+};
+
+static const struct txpwr_map __txpwr_map_lmt_ru_be = {
+ .ent = __txpwr_ent_lmt_ru_be,
+ .size = ARRAY_SIZE(__txpwr_ent_lmt_ru_be),
+ .addr_from = R_BE_PWR_RU_LMT,
+ .addr_to = R_BE_PWR_RU_LMT_MAX,
+ .addr_to_1ss = 0, /* not supported */
+};
+
+static unsigned int
+__print_txpwr_ent(struct seq_file *m, const struct txpwr_ent *ent,
+ const s8 *buf, const unsigned int cur)
{
+ unsigned int cnt, i;
char *fmt;
+ if (ent->nested) {
+ for (cnt = 0, i = 0; i < ent->len; i++)
+ cnt += __print_txpwr_ent(m, ent->ptr + i, buf,
+ cur + cnt);
+ return cnt;
+ }
+
switch (ent->len) {
+ case 0:
+ seq_printf(m, "\t<< %s >>\n", ent->txt);
+ return 0;
case 2:
- fmt = "%s\t| %3d, %3d,\tdBm\n";
+ fmt = "%s\t| %3d, %3d,\t\tdBm\n";
seq_printf(m, fmt, ent->txt, buf[cur], buf[cur + 1]);
return 2;
case 4:
@@ -532,10 +724,10 @@ static int __print_txpwr_map(struct seq_file *m, struct rtw89_dev *rtwdev,
{
u8 fct = rtwdev->chip->txpwr_factor_mac;
u8 path_num = rtwdev->chip->rf_path_num;
+ unsigned int cur, i;
u32 max_valid_addr;
u32 val, addr;
s8 *buf, tmp;
- u8 cur, i;
int ret;
buf = vzalloc(map->addr_to - map->addr_from + 4);
@@ -547,6 +739,9 @@ static int __print_txpwr_map(struct seq_file *m, struct rtw89_dev *rtwdev,
else
max_valid_addr = map->addr_to;
+ if (max_valid_addr == 0)
+ return -EOPNOTSUPP;
+
for (addr = map->addr_from; addr <= max_valid_addr; addr += 4) {
ret = rtw89_mac_txpwr_read32(rtwdev, RTW89_PHY_0, addr, &val);
if (ret)
@@ -600,10 +795,35 @@ static void __print_regd(struct seq_file *m, struct rtw89_dev *rtwdev,
#undef case_REGD
+struct dbgfs_txpwr_table {
+ const struct txpwr_map *byr;
+ const struct txpwr_map *lmt;
+ const struct txpwr_map *lmt_ru;
+};
+
+static const struct dbgfs_txpwr_table dbgfs_txpwr_table_ax = {
+ .byr = &__txpwr_map_byr_ax,
+ .lmt = &__txpwr_map_lmt_ax,
+ .lmt_ru = &__txpwr_map_lmt_ru_ax,
+};
+
+static const struct dbgfs_txpwr_table dbgfs_txpwr_table_be = {
+ .byr = &__txpwr_map_byr_be,
+ .lmt = &__txpwr_map_lmt_be,
+ .lmt_ru = &__txpwr_map_lmt_ru_be,
+};
+
+static const struct dbgfs_txpwr_table *dbgfs_txpwr_tables[RTW89_CHIP_GEN_NUM] = {
+ [RTW89_CHIP_AX] = &dbgfs_txpwr_table_ax,
+ [RTW89_CHIP_BE] = &dbgfs_txpwr_table_be,
+};
+
static int rtw89_debug_priv_txpwr_table_get(struct seq_file *m, void *v)
{
struct rtw89_debugfs_priv *debugfs_priv = m->private;
struct rtw89_dev *rtwdev = debugfs_priv->rtwdev;
+ enum rtw89_chip_gen chip_gen = rtwdev->chip->chip_gen;
+ const struct dbgfs_txpwr_table *tbl;
const struct rtw89_chan *chan;
int ret = 0;
@@ -620,18 +840,24 @@ static int rtw89_debug_priv_txpwr_table_get(struct seq_file *m, void *v)
seq_puts(m, "[TAS]\n");
rtw89_print_tas(m, rtwdev);
+ tbl = dbgfs_txpwr_tables[chip_gen];
+ if (!tbl) {
+ ret = -EOPNOTSUPP;
+ goto err;
+ }
+
seq_puts(m, "\n[TX power byrate]\n");
- ret = __print_txpwr_map(m, rtwdev, &__txpwr_map_byr);
+ ret = __print_txpwr_map(m, rtwdev, tbl->byr);
if (ret)
goto err;
seq_puts(m, "\n[TX power limit]\n");
- ret = __print_txpwr_map(m, rtwdev, &__txpwr_map_lmt);
+ ret = __print_txpwr_map(m, rtwdev, tbl->lmt);
if (ret)
goto err;
seq_puts(m, "\n[TX power limit_ru]\n");
- ret = __print_txpwr_map(m, rtwdev, &__txpwr_map_lmt_ru);
+ ret = __print_txpwr_map(m, rtwdev, tbl->lmt_ru);
if (ret)
goto err;
@@ -3241,6 +3467,11 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
[NL80211_RATE_INFO_HE_GI_1_6] = "1.6",
[NL80211_RATE_INFO_HE_GI_3_2] = "3.2",
};
+ static const char * const eht_gi_str[] = {
+ [NL80211_RATE_INFO_EHT_GI_0_8] = "0.8",
+ [NL80211_RATE_INFO_EHT_GI_1_6] = "1.6",
+ [NL80211_RATE_INFO_EHT_GI_3_2] = "3.2",
+ };
struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
struct rate_info *rate = &rtwsta->ra_report.txrate;
struct ieee80211_rx_status *status = &rtwsta->rx_status;
@@ -3266,6 +3497,10 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
seq_printf(m, "HE %dSS MCS-%d GI:%s", rate->nss, rate->mcs,
rate->he_gi <= NL80211_RATE_INFO_HE_GI_3_2 ?
he_gi_str[rate->he_gi] : "N/A");
+ else if (rate->flags & RATE_INFO_FLAGS_EHT_MCS)
+ seq_printf(m, "EHT %dSS MCS-%d GI:%s", rate->nss, rate->mcs,
+ rate->eht_gi < ARRAY_SIZE(eht_gi_str) ?
+ eht_gi_str[rate->eht_gi] : "N/A");
else
seq_printf(m, "Legacy %d", rate->legacy);
seq_printf(m, "%s", rtwsta->ra_report.might_fallback_legacy ? " FB_G" : "");
@@ -3294,6 +3529,11 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
status->he_gi <= NL80211_RATE_INFO_HE_GI_3_2 ?
he_gi_str[rate->he_gi] : "N/A");
break;
+ case RX_ENC_EHT:
+ seq_printf(m, "EHT %dSS MCS-%d GI:%s", status->nss, status->rate_idx,
+ status->eht.gi < ARRAY_SIZE(eht_gi_str) ?
+ eht_gi_str[status->eht.gi] : "N/A");
+ break;
}
seq_printf(m, " BW:%u", rtw89_rate_info_bw_to_mhz(status->bw));
seq_printf(m, "\t(hw_rate=0x%x)\n", rtwsta->rx_hw_rate);
diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
index df1dc2f43c86..a732c22a2d54 100644
--- a/drivers/net/wireless/realtek/rtw89/fw.c
+++ b/drivers/net/wireless/realtek/rtw89/fw.c
@@ -13,6 +13,20 @@
#include "reg.h"
#include "util.h"
+union rtw89_fw_element_arg {
+ size_t offset;
+ enum rtw89_rf_path rf_path;
+ enum rtw89_fw_type fw_type;
+};
+
+struct rtw89_fw_element_handler {
+ int (*fn)(struct rtw89_dev *rtwdev,
+ const struct rtw89_fw_element_hdr *elm,
+ const union rtw89_fw_element_arg arg);
+ const union rtw89_fw_element_arg arg;
+ const char *name;
+};
+
static void rtw89_fw_c2h_cmd_handle(struct rtw89_dev *rtwdev,
struct sk_buff *skb);
static int rtw89_h2c_tx_and_wait(struct rtw89_dev *rtwdev, struct sk_buff *skb,
@@ -47,22 +61,15 @@ struct sk_buff *rtw89_fw_h2c_alloc_skb_no_hdr(struct rtw89_dev *rtwdev, u32 len)
return rtw89_fw_h2c_alloc_skb(rtwdev, len, false);
}
-static u8 _fw_get_rdy(struct rtw89_dev *rtwdev)
-{
- u8 val = rtw89_read8(rtwdev, R_AX_WCPU_FW_CTRL);
-
- return FIELD_GET(B_AX_WCPU_FWDL_STS_MASK, val);
-}
-
-#define FWDL_WAIT_CNT 400000
-int rtw89_fw_check_rdy(struct rtw89_dev *rtwdev)
+int rtw89_fw_check_rdy(struct rtw89_dev *rtwdev, enum rtw89_fwdl_check_type type)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
u8 val;
int ret;
- ret = read_poll_timeout_atomic(_fw_get_rdy, val,
+ ret = read_poll_timeout_atomic(mac->fwdl_get_status, val,
val == RTW89_FWDL_WCPU_FW_INIT_RDY,
- 1, FWDL_WAIT_CNT, false, rtwdev);
+ 1, FWDL_WAIT_CNT, false, rtwdev, type);
if (ret) {
switch (val) {
case RTW89_FWDL_CHECKSUM_FAIL:
@@ -78,6 +85,7 @@ int rtw89_fw_check_rdy(struct rtw89_dev *rtwdev)
return -EINVAL;
default:
+ rtw89_err(rtwdev, "fw unexpected status %d\n", val);
return -EBUSY;
}
}
@@ -390,9 +398,9 @@ int __rtw89_fw_recognize(struct rtw89_dev *rtwdev, enum rtw89_fw_type type,
static
int __rtw89_fw_recognize_from_elm(struct rtw89_dev *rtwdev,
const struct rtw89_fw_element_hdr *elm,
- const void *data)
+ const union rtw89_fw_element_arg arg)
{
- enum rtw89_fw_type type = (enum rtw89_fw_type)data;
+ enum rtw89_fw_type type = arg.fw_type;
struct rtw89_fw_suit *fw_suit;
fw_suit = rtw89_fw_suit_get(rtwdev, type);
@@ -548,7 +556,7 @@ normal_done:
static
int rtw89_build_phy_tbl_from_elm(struct rtw89_dev *rtwdev,
const struct rtw89_fw_element_hdr *elm,
- const void *data)
+ const union rtw89_fw_element_arg arg)
{
struct rtw89_fw_elm_info *elm_info = &rtwdev->fw.elm_info;
struct rtw89_phy_table *tbl;
@@ -572,7 +580,7 @@ int rtw89_build_phy_tbl_from_elm(struct rtw89_dev *rtwdev,
case RTW89_FW_ELEMENT_ID_RADIO_B:
case RTW89_FW_ELEMENT_ID_RADIO_C:
case RTW89_FW_ELEMENT_ID_RADIO_D:
- rf_path = (enum rtw89_rf_path)data;
+ rf_path = arg.rf_path;
idx = elm->u.reg2.idx;
elm_info->rf_radio[idx] = tbl;
@@ -607,29 +615,101 @@ out:
return -ENOMEM;
}
-struct rtw89_fw_element_handler {
- int (*fn)(struct rtw89_dev *rtwdev,
- const struct rtw89_fw_element_hdr *elm, const void *data);
- const void *data;
- const char *name;
-};
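+/* Bind a TXPWR firmware element to its rtw89_txpwr_conf slot inside
+ * rtw89_rfe_data (located via arg.offset). An element whose RFE type
+ * matches the efuse wins; an element of the default RFE type (0x0) is
+ * only kept as a fallback.
+ */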
+static
+int rtw89_fw_recognize_txpwr_from_elm(struct rtw89_dev *rtwdev,
+ const struct rtw89_fw_element_hdr *elm,
+ const union rtw89_fw_element_arg arg)
+{
+ const struct __rtw89_fw_txpwr_element *txpwr_elm = &elm->u.txpwr;
+ const unsigned long offset = arg.offset;
+ struct rtw89_efuse *efuse = &rtwdev->efuse;
+ struct rtw89_txpwr_conf *conf;
+
+ if (!rtwdev->rfe_data) {
+ rtwdev->rfe_data = kzalloc(sizeof(*rtwdev->rfe_data), GFP_KERNEL);
+ if (!rtwdev->rfe_data)
+ return -ENOMEM;
+ }
+
+ conf = (void *)rtwdev->rfe_data + offset;
+
+ /* if multiple entries match, the last one takes effect */
+ if (txpwr_elm->rfe_type == efuse->rfe_type)
+ goto setup;
+
+ /* if none matched, accept the default */
+ if (txpwr_elm->rfe_type == RTW89_TXPWR_CONF_DFLT_RFE_TYPE &&
+ (!rtw89_txpwr_conf_valid(conf) ||
+ conf->rfe_type == RTW89_TXPWR_CONF_DFLT_RFE_TYPE))
+ goto setup;
+
+ rtw89_debug(rtwdev, RTW89_DBG_FW, "skip txpwr element ID %u RFE %u\n",
+ elm->id, txpwr_elm->rfe_type);
+ return 0;
+
+setup:
+ rtw89_debug(rtwdev, RTW89_DBG_FW, "take txpwr element ID %u RFE %u\n",
+ elm->id, txpwr_elm->rfe_type);
+
+ conf->rfe_type = txpwr_elm->rfe_type;
+ conf->ent_sz = txpwr_elm->ent_sz;
+ conf->num_ents = le32_to_cpu(txpwr_elm->num_ents);
+ conf->data = txpwr_elm->content;
+ return 0;
+}
static const struct rtw89_fw_element_handler __fw_element_handlers[] = {
[RTW89_FW_ELEMENT_ID_BBMCU0] = {__rtw89_fw_recognize_from_elm,
- (const void *)RTW89_FW_BBMCU0, NULL},
+ { .fw_type = RTW89_FW_BBMCU0 }, NULL},
[RTW89_FW_ELEMENT_ID_BBMCU1] = {__rtw89_fw_recognize_from_elm,
- (const void *)RTW89_FW_BBMCU1, NULL},
- [RTW89_FW_ELEMENT_ID_BB_REG] = {rtw89_build_phy_tbl_from_elm, NULL, "BB"},
- [RTW89_FW_ELEMENT_ID_BB_GAIN] = {rtw89_build_phy_tbl_from_elm, NULL, NULL},
+ { .fw_type = RTW89_FW_BBMCU1 }, NULL},
+ [RTW89_FW_ELEMENT_ID_BB_REG] = {rtw89_build_phy_tbl_from_elm, {}, "BB"},
+ [RTW89_FW_ELEMENT_ID_BB_GAIN] = {rtw89_build_phy_tbl_from_elm, {}, NULL},
[RTW89_FW_ELEMENT_ID_RADIO_A] = {rtw89_build_phy_tbl_from_elm,
- (const void *)RF_PATH_A, "radio A"},
+ { .rf_path = RF_PATH_A }, "radio A"},
[RTW89_FW_ELEMENT_ID_RADIO_B] = {rtw89_build_phy_tbl_from_elm,
- (const void *)RF_PATH_B, NULL},
+ { .rf_path = RF_PATH_B }, NULL},
[RTW89_FW_ELEMENT_ID_RADIO_C] = {rtw89_build_phy_tbl_from_elm,
- (const void *)RF_PATH_C, NULL},
+ { .rf_path = RF_PATH_C }, NULL},
[RTW89_FW_ELEMENT_ID_RADIO_D] = {rtw89_build_phy_tbl_from_elm,
- (const void *)RF_PATH_D, NULL},
- [RTW89_FW_ELEMENT_ID_RF_NCTL] = {rtw89_build_phy_tbl_from_elm, NULL, "NCTL"},
+ { .rf_path = RF_PATH_D }, NULL},
+ [RTW89_FW_ELEMENT_ID_RF_NCTL] = {rtw89_build_phy_tbl_from_elm, {}, "NCTL"},
+ [RTW89_FW_ELEMENT_ID_TXPWR_BYRATE] = {
+ rtw89_fw_recognize_txpwr_from_elm,
+ { .offset = offsetof(struct rtw89_rfe_data, byrate.conf) }, "TXPWR",
+ },
+ [RTW89_FW_ELEMENT_ID_TXPWR_LMT_2GHZ] = {
+ rtw89_fw_recognize_txpwr_from_elm,
+ { .offset = offsetof(struct rtw89_rfe_data, lmt_2ghz.conf) }, NULL,
+ },
+ [RTW89_FW_ELEMENT_ID_TXPWR_LMT_5GHZ] = {
+ rtw89_fw_recognize_txpwr_from_elm,
+ { .offset = offsetof(struct rtw89_rfe_data, lmt_5ghz.conf) }, NULL,
+ },
+ [RTW89_FW_ELEMENT_ID_TXPWR_LMT_6GHZ] = {
+ rtw89_fw_recognize_txpwr_from_elm,
+ { .offset = offsetof(struct rtw89_rfe_data, lmt_6ghz.conf) }, NULL,
+ },
+ [RTW89_FW_ELEMENT_ID_TXPWR_LMT_RU_2GHZ] = {
+ rtw89_fw_recognize_txpwr_from_elm,
+ { .offset = offsetof(struct rtw89_rfe_data, lmt_ru_2ghz.conf) }, NULL,
+ },
+ [RTW89_FW_ELEMENT_ID_TXPWR_LMT_RU_5GHZ] = {
+ rtw89_fw_recognize_txpwr_from_elm,
+ { .offset = offsetof(struct rtw89_rfe_data, lmt_ru_5ghz.conf) }, NULL,
+ },
+ [RTW89_FW_ELEMENT_ID_TXPWR_LMT_RU_6GHZ] = {
+ rtw89_fw_recognize_txpwr_from_elm,
+ { .offset = offsetof(struct rtw89_rfe_data, lmt_ru_6ghz.conf) }, NULL,
+ },
+ [RTW89_FW_ELEMENT_ID_TX_SHAPE_LMT] = {
+ rtw89_fw_recognize_txpwr_from_elm,
+ { .offset = offsetof(struct rtw89_rfe_data, tx_shape_lmt.conf) }, NULL,
+ },
+ [RTW89_FW_ELEMENT_ID_TX_SHAPE_LMT_RU] = {
+ rtw89_fw_recognize_txpwr_from_elm,
+ { .offset = offsetof(struct rtw89_rfe_data, tx_shape_lmt_ru.conf) }, NULL,
+ },
};
int rtw89_fw_recognize_elements(struct rtw89_dev *rtwdev)
@@ -669,7 +749,7 @@ int rtw89_fw_recognize_elements(struct rtw89_dev *rtwdev)
if (!handler->fn)
goto next;
- ret = handler->fn(rtwdev, hdr, handler->data);
+ ret = handler->fn(rtwdev, hdr, handler->arg);
if (ret)
return ret;
@@ -768,7 +848,7 @@ fail:
static int rtw89_fw_download_hdr(struct rtw89_dev *rtwdev, const u8 *fw, u32 len)
{
- u8 val;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
int ret;
ret = __rtw89_fw_download_hdr(rtwdev, fw, len);
@@ -777,9 +857,7 @@ static int rtw89_fw_download_hdr(struct rtw89_dev *rtwdev, const u8 *fw, u32 len
return ret;
}
- ret = read_poll_timeout_atomic(rtw89_read8, val, val & B_AX_FWDL_PATH_RDY,
- 1, FWDL_WAIT_CNT, false,
- rtwdev, R_AX_WCPU_FW_CTRL);
+ ret = mac->fwdl_check_path_ready(rtwdev, false);
if (ret) {
rtw89_err(rtwdev, "[ERR]FWDL path ready\n");
return ret;
@@ -831,10 +909,27 @@ fail:
return ret;
}
-static int rtw89_fw_download_main(struct rtw89_dev *rtwdev, const u8 *fw,
+static enum rtw89_fwdl_check_type
+rtw89_fw_get_fwdl_chk_type_from_suit(struct rtw89_dev *rtwdev,
+ const struct rtw89_fw_suit *fw_suit)
+{
+ switch (fw_suit->type) {
+ case RTW89_FW_BBMCU0:
+ return RTW89_FWDL_CHECK_BB0_FWDL_DONE;
+ case RTW89_FW_BBMCU1:
+ return RTW89_FWDL_CHECK_BB1_FWDL_DONE;
+ default:
+ return RTW89_FWDL_CHECK_WCPU_FWDL_DONE;
+ }
+}
+
+static int rtw89_fw_download_main(struct rtw89_dev *rtwdev,
+ const struct rtw89_fw_suit *fw_suit,
struct rtw89_fw_bin_info *info)
{
struct rtw89_fw_hdr_section_info *section_info = info->section_info;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ enum rtw89_fwdl_check_type chk_type;
u8 section_num = info->section_num;
int ret;
@@ -845,11 +940,14 @@ static int rtw89_fw_download_main(struct rtw89_dev *rtwdev, const u8 *fw,
section_info++;
}
- mdelay(5);
+ if (chip->chip_gen == RTW89_CHIP_AX)
+ return 0;
- ret = rtw89_fw_check_rdy(rtwdev);
+ chk_type = rtw89_fw_get_fwdl_chk_type_from_suit(rtwdev, fw_suit);
+ ret = rtw89_fw_check_rdy(rtwdev, chk_type);
if (ret) {
- rtw89_warn(rtwdev, "download firmware fail\n");
+ rtw89_warn(rtwdev, "failed to download firmware type %u\n",
+ fw_suit->type);
return ret;
}
@@ -887,44 +985,66 @@ static void rtw89_fw_dl_fail_dump(struct rtw89_dev *rtwdev)
rtw89_fw_prog_cnt_dump(rtwdev);
}
-int rtw89_fw_download(struct rtw89_dev *rtwdev, enum rtw89_fw_type type)
+static int rtw89_fw_download_suit(struct rtw89_dev *rtwdev,
+ struct rtw89_fw_suit *fw_suit)
{
- struct rtw89_fw_info *fw_info = &rtwdev->fw;
- struct rtw89_fw_suit *fw_suit = rtw89_fw_suit_get(rtwdev, type);
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
struct rtw89_fw_bin_info info;
- u8 val;
int ret;
- rtw89_mac_disable_cpu(rtwdev);
- ret = rtw89_mac_enable_cpu(rtwdev, 0, true);
- if (ret)
- return ret;
-
ret = rtw89_fw_hdr_parser(rtwdev, fw_suit, &info);
if (ret) {
rtw89_err(rtwdev, "parse fw header fail\n");
- goto fwdl_err;
+ return ret;
}
- ret = read_poll_timeout_atomic(rtw89_read8, val, val & B_AX_H2C_PATH_RDY,
- 1, FWDL_WAIT_CNT, false,
- rtwdev, R_AX_WCPU_FW_CTRL);
+ if (rtwdev->chip->chip_id == RTL8922A &&
+ (fw_suit->type == RTW89_FW_NORMAL || fw_suit->type == RTW89_FW_WOWLAN))
+ rtw89_write32(rtwdev, R_BE_SECURE_BOOT_MALLOC_INFO, 0x20248000);
+
+ ret = mac->fwdl_check_path_ready(rtwdev, true);
if (ret) {
rtw89_err(rtwdev, "[ERR]H2C path ready\n");
- goto fwdl_err;
+ return ret;
}
ret = rtw89_fw_download_hdr(rtwdev, fw_suit->data, info.hdr_len -
info.dynamic_hdr_len);
- if (ret) {
- ret = -EBUSY;
- goto fwdl_err;
- }
+ if (ret)
+ return ret;
- ret = rtw89_fw_download_main(rtwdev, fw_suit->data, &info);
- if (ret) {
- ret = -EBUSY;
+ ret = rtw89_fw_download_main(rtwdev, fw_suit, &info);
+ if (ret)
+ return ret;
+
+ return 0;
+}
+
+int rtw89_fw_download(struct rtw89_dev *rtwdev, enum rtw89_fw_type type,
+ bool include_bb)
+{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct rtw89_fw_info *fw_info = &rtwdev->fw;
+ struct rtw89_fw_suit *fw_suit = rtw89_fw_suit_get(rtwdev, type);
+ u8 bbmcu_nr = rtwdev->chip->bbmcu_nr;
+ int ret;
+ int i;
+
+ mac->disable_cpu(rtwdev);
+ ret = mac->fwdl_enable_wcpu(rtwdev, 0, true, include_bb);
+ if (ret)
+ return ret;
+
+ ret = rtw89_fw_download_suit(rtwdev, fw_suit);
+ if (ret)
goto fwdl_err;
+
+ for (i = 0; i < bbmcu_nr && include_bb; i++) {
+ fw_suit = rtw89_fw_suit_get(rtwdev, RTW89_FW_BBMCU0 + i);
+
+ ret = rtw89_fw_download_suit(rtwdev, fw_suit);
+ if (ret)
+ goto fwdl_err;
}
fw_info->h2c_seq = 0;
@@ -934,6 +1054,14 @@ int rtw89_fw_download(struct rtw89_dev *rtwdev, enum rtw89_fw_type type)
rtwdev->mac.rpwm_seq_num = RPWM_SEQ_NUM_MAX;
rtwdev->mac.cpwm_seq_num = CPWM_SEQ_NUM_MAX;
+ mdelay(5);
+
+ ret = rtw89_fw_check_rdy(rtwdev, RTW89_FWDL_CHECK_FREERTOS_DONE);
+ if (ret) {
+ rtw89_warn(rtwdev, "download firmware fail\n");
+ return ret;
+ }
+
return ret;
fwdl_err:
@@ -3178,11 +3306,11 @@ fail:
int rtw89_fw_h2c_rf_ntfy_mcc(struct rtw89_dev *rtwdev)
{
- const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
struct rtw89_rfk_mcc_info *rfk_mcc = &rtwdev->rfk_mcc;
struct rtw89_fw_h2c_rf_get_mccch *mccch;
struct sk_buff *skb;
int ret;
+ u8 idx;
skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, sizeof(*mccch));
if (!skb) {
@@ -3192,12 +3320,13 @@ int rtw89_fw_h2c_rf_ntfy_mcc(struct rtw89_dev *rtwdev)
skb_put(skb, sizeof(*mccch));
mccch = (struct rtw89_fw_h2c_rf_get_mccch *)skb->data;
+ idx = rfk_mcc->table_idx;
mccch->ch_0 = cpu_to_le32(rfk_mcc->ch[0]);
mccch->ch_1 = cpu_to_le32(rfk_mcc->ch[1]);
mccch->band_0 = cpu_to_le32(rfk_mcc->band[0]);
mccch->band_1 = cpu_to_le32(rfk_mcc->band[1]);
- mccch->current_channel = cpu_to_le32(chan->channel);
- mccch->current_band_type = cpu_to_le32(chan->band_type);
+ mccch->current_channel = cpu_to_le32(rfk_mcc->ch[idx]);
+ mccch->current_band_type = cpu_to_le32(rfk_mcc->band[idx]);
rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
H2C_CAT_OUTSRC, H2C_CL_OUTSRC_RF_FW_NOTIFY,
@@ -3889,6 +4018,8 @@ void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
rtw89_mac_reg_by_idx(rtwdev, mac->rx_fltr, RTW89_MAC_0),
B_AX_RX_FLTR_CFG_MASK,
rx_fltr);
+
+ rtw89_chanctx_pause(rtwdev, RTW89_CHANCTX_PAUSE_REASON_HW_SCAN);
}
void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
@@ -3920,7 +4051,7 @@ void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
scan_info->last_chan_idx = 0;
scan_info->scanning_vif = NULL;
- rtw89_set_channel(rtwdev);
+ rtw89_chanctx_proceed(rtwdev);
}
void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
@@ -4520,7 +4651,7 @@ int rtw89_fw_h2c_mcc_req_tsf(struct rtw89_dev *rtwdev,
}
#define H2C_MCC_MACID_BITMAP_DSC_LEN 4
-int rtw89_fw_h2c_mcc_macid_bitamp(struct rtw89_dev *rtwdev, u8 group, u8 macid,
+int rtw89_fw_h2c_mcc_macid_bitmap(struct rtw89_dev *rtwdev, u8 group, u8 macid,
u8 *bitmap)
{
struct rtw89_wait_info *wait = &rtwdev->mcc.wait;
@@ -4623,3 +4754,454 @@ int rtw89_fw_h2c_mcc_set_duration(struct rtw89_dev *rtwdev,
cond = RTW89_MCC_WAIT_COND(p->group, H2C_FUNC_MCC_SET_DURATION);
return rtw89_h2c_tx_and_wait(rtwdev, skb, wait, cond);
}
+
+static bool __fw_txpwr_entry_zero_ext(const void *ext_ptr, u8 ext_len)
+{
+ static const u8 zeros[U8_MAX] = {};
+
+ return memcmp(ext_ptr, zeros, ext_len) == 0;
+}
+
+#define __fw_txpwr_entry_acceptable(e, cursor, ent_sz) \
+({ \
+ u8 __var_sz = sizeof(*(e)); \
+ bool __accept; \
+ if (__var_sz >= (ent_sz)) \
+ __accept = true; \
+ else \
+ __accept = __fw_txpwr_entry_zero_ext((cursor) + __var_sz,\
+ (ent_sz) - __var_sz);\
+ __accept; \
+})
+
+static bool
+fw_txpwr_byrate_entry_valid(const struct rtw89_fw_txpwr_byrate_entry *e,
+ const void *cursor,
+ const struct rtw89_txpwr_conf *conf)
+{
+ if (!__fw_txpwr_entry_acceptable(e, cursor, conf->ent_sz))
+ return false;
+
+ if (e->band >= RTW89_BAND_NUM || e->bw >= RTW89_BYR_BW_NUM)
+ return false;
+
+ switch (e->rs) {
+ case RTW89_RS_CCK:
+ if (e->shf + e->len > RTW89_RATE_CCK_NUM)
+ return false;
+ break;
+ case RTW89_RS_OFDM:
+ if (e->shf + e->len > RTW89_RATE_OFDM_NUM)
+ return false;
+ break;
+ case RTW89_RS_MCS:
+ if (e->shf + e->len > __RTW89_RATE_MCS_NUM ||
+ e->nss >= RTW89_NSS_NUM ||
+ e->ofdma >= RTW89_OFDMA_NUM)
+ return false;
+ break;
+ case RTW89_RS_HEDCM:
+ if (e->shf + e->len > RTW89_RATE_HEDCM_NUM ||
+ e->nss >= RTW89_NSS_HEDCM_NUM ||
+ e->ofdma >= RTW89_OFDMA_NUM)
+ return false;
+ break;
+ case RTW89_RS_OFFSET:
+ if (e->shf + e->len > __RTW89_RATE_OFFSET_NUM)
+ return false;
+ break;
+ default:
+ return false;
+ }
+
+ return true;
+}
+
+static
+void rtw89_fw_load_txpwr_byrate(struct rtw89_dev *rtwdev,
+ const struct rtw89_txpwr_table *tbl)
+{
+ const struct rtw89_txpwr_conf *conf = tbl->data;
+ struct rtw89_fw_txpwr_byrate_entry entry = {};
+ struct rtw89_txpwr_byrate *byr_head;
+ struct rtw89_rate_desc desc = {};
+ const void *cursor;
+ u32 data;
+ s8 *byr;
+ int i;
+
+ rtw89_for_each_in_txpwr_conf(entry, cursor, conf) {
+ if (!fw_txpwr_byrate_entry_valid(&entry, cursor, conf))
+ continue;
+
+ byr_head = &rtwdev->byr[entry.band][entry.bw];
+ data = le32_to_cpu(entry.data);
+ desc.ofdma = entry.ofdma;
+ desc.nss = entry.nss;
+ desc.rs = entry.rs;
+
+ for (i = 0; i < entry.len; i++, data >>= 8) {
+ desc.idx = entry.shf + i;
+ byr = rtw89_phy_raw_byr_seek(rtwdev, byr_head, &desc);
+ *byr = data & 0xff;
+ }
+ }
+}
+
+static bool
+fw_txpwr_lmt_2ghz_entry_valid(const struct rtw89_fw_txpwr_lmt_2ghz_entry *e,
+ const void *cursor,
+ const struct rtw89_txpwr_conf *conf)
+{
+ if (!__fw_txpwr_entry_acceptable(e, cursor, conf->ent_sz))
+ return false;
+
+ if (e->bw >= RTW89_2G_BW_NUM)
+ return false;
+ if (e->nt >= RTW89_NTX_NUM)
+ return false;
+ if (e->rs >= RTW89_RS_LMT_NUM)
+ return false;
+ if (e->bf >= RTW89_BF_NUM)
+ return false;
+ if (e->regd >= RTW89_REGD_NUM)
+ return false;
+ if (e->ch_idx >= RTW89_2G_CH_NUM)
+ return false;
+
+ return true;
+}
+
+static
+void rtw89_fw_load_txpwr_lmt_2ghz(struct rtw89_txpwr_lmt_2ghz_data *data)
+{
+ const struct rtw89_txpwr_conf *conf = &data->conf;
+ struct rtw89_fw_txpwr_lmt_2ghz_entry entry = {};
+ const void *cursor;
+
+ rtw89_for_each_in_txpwr_conf(entry, cursor, conf) {
+ if (!fw_txpwr_lmt_2ghz_entry_valid(&entry, cursor, conf))
+ continue;
+
+ data->v[entry.bw][entry.nt][entry.rs][entry.bf][entry.regd]
+ [entry.ch_idx] = entry.v;
+ }
+}
+
+static bool
+fw_txpwr_lmt_5ghz_entry_valid(const struct rtw89_fw_txpwr_lmt_5ghz_entry *e,
+ const void *cursor,
+ const struct rtw89_txpwr_conf *conf)
+{
+ if (!__fw_txpwr_entry_acceptable(e, cursor, conf->ent_sz))
+ return false;
+
+ if (e->bw >= RTW89_5G_BW_NUM)
+ return false;
+ if (e->nt >= RTW89_NTX_NUM)
+ return false;
+ if (e->rs >= RTW89_RS_LMT_NUM)
+ return false;
+ if (e->bf >= RTW89_BF_NUM)
+ return false;
+ if (e->regd >= RTW89_REGD_NUM)
+ return false;
+ if (e->ch_idx >= RTW89_5G_CH_NUM)
+ return false;
+
+ return true;
+}
+
+static
+void rtw89_fw_load_txpwr_lmt_5ghz(struct rtw89_txpwr_lmt_5ghz_data *data)
+{
+ const struct rtw89_txpwr_conf *conf = &data->conf;
+ struct rtw89_fw_txpwr_lmt_5ghz_entry entry = {};
+ const void *cursor;
+
+ rtw89_for_each_in_txpwr_conf(entry, cursor, conf) {
+ if (!fw_txpwr_lmt_5ghz_entry_valid(&entry, cursor, conf))
+ continue;
+
+ data->v[entry.bw][entry.nt][entry.rs][entry.bf][entry.regd]
+ [entry.ch_idx] = entry.v;
+ }
+}
+
+static bool
+fw_txpwr_lmt_6ghz_entry_valid(const struct rtw89_fw_txpwr_lmt_6ghz_entry *e,
+ const void *cursor,
+ const struct rtw89_txpwr_conf *conf)
+{
+ if (!__fw_txpwr_entry_acceptable(e, cursor, conf->ent_sz))
+ return false;
+
+ if (e->bw >= RTW89_6G_BW_NUM)
+ return false;
+ if (e->nt >= RTW89_NTX_NUM)
+ return false;
+ if (e->rs >= RTW89_RS_LMT_NUM)
+ return false;
+ if (e->bf >= RTW89_BF_NUM)
+ return false;
+ if (e->regd >= RTW89_REGD_NUM)
+ return false;
+ if (e->reg_6ghz_power >= NUM_OF_RTW89_REG_6GHZ_POWER)
+ return false;
+ if (e->ch_idx >= RTW89_6G_CH_NUM)
+ return false;
+
+ return true;
+}
+
+static
+void rtw89_fw_load_txpwr_lmt_6ghz(struct rtw89_txpwr_lmt_6ghz_data *data)
+{
+ const struct rtw89_txpwr_conf *conf = &data->conf;
+ struct rtw89_fw_txpwr_lmt_6ghz_entry entry = {};
+ const void *cursor;
+
+ rtw89_for_each_in_txpwr_conf(entry, cursor, conf) {
+ if (!fw_txpwr_lmt_6ghz_entry_valid(&entry, cursor, conf))
+ continue;
+
+ data->v[entry.bw][entry.nt][entry.rs][entry.bf][entry.regd]
+ [entry.reg_6ghz_power][entry.ch_idx] = entry.v;
+ }
+}
+
+static bool
+fw_txpwr_lmt_ru_2ghz_entry_valid(const struct rtw89_fw_txpwr_lmt_ru_2ghz_entry *e,
+ const void *cursor,
+ const struct rtw89_txpwr_conf *conf)
+{
+ if (!__fw_txpwr_entry_acceptable(e, cursor, conf->ent_sz))
+ return false;
+
+ if (e->ru >= RTW89_RU_NUM)
+ return false;
+ if (e->nt >= RTW89_NTX_NUM)
+ return false;
+ if (e->regd >= RTW89_REGD_NUM)
+ return false;
+ if (e->ch_idx >= RTW89_2G_CH_NUM)
+ return false;
+
+ return true;
+}
+
+static
+void rtw89_fw_load_txpwr_lmt_ru_2ghz(struct rtw89_txpwr_lmt_ru_2ghz_data *data)
+{
+ const struct rtw89_txpwr_conf *conf = &data->conf;
+ struct rtw89_fw_txpwr_lmt_ru_2ghz_entry entry = {};
+ const void *cursor;
+
+ rtw89_for_each_in_txpwr_conf(entry, cursor, conf) {
+ if (!fw_txpwr_lmt_ru_2ghz_entry_valid(&entry, cursor, conf))
+ continue;
+
+ data->v[entry.ru][entry.nt][entry.regd][entry.ch_idx] = entry.v;
+ }
+}
+
+static bool
+fw_txpwr_lmt_ru_5ghz_entry_valid(const struct rtw89_fw_txpwr_lmt_ru_5ghz_entry *e,
+ const void *cursor,
+ const struct rtw89_txpwr_conf *conf)
+{
+ if (!__fw_txpwr_entry_acceptable(e, cursor, conf->ent_sz))
+ return false;
+
+ if (e->ru >= RTW89_RU_NUM)
+ return false;
+ if (e->nt >= RTW89_NTX_NUM)
+ return false;
+ if (e->regd >= RTW89_REGD_NUM)
+ return false;
+ if (e->ch_idx >= RTW89_5G_CH_NUM)
+ return false;
+
+ return true;
+}
+
+static
+void rtw89_fw_load_txpwr_lmt_ru_5ghz(struct rtw89_txpwr_lmt_ru_5ghz_data *data)
+{
+ const struct rtw89_txpwr_conf *conf = &data->conf;
+ struct rtw89_fw_txpwr_lmt_ru_5ghz_entry entry = {};
+ const void *cursor;
+
+ rtw89_for_each_in_txpwr_conf(entry, cursor, conf) {
+ if (!fw_txpwr_lmt_ru_5ghz_entry_valid(&entry, cursor, conf))
+ continue;
+
+ data->v[entry.ru][entry.nt][entry.regd][entry.ch_idx] = entry.v;
+ }
+}
+
+static bool
+fw_txpwr_lmt_ru_6ghz_entry_valid(const struct rtw89_fw_txpwr_lmt_ru_6ghz_entry *e,
+ const void *cursor,
+ const struct rtw89_txpwr_conf *conf)
+{
+ if (!__fw_txpwr_entry_acceptable(e, cursor, conf->ent_sz))
+ return false;
+
+ if (e->ru >= RTW89_RU_NUM)
+ return false;
+ if (e->nt >= RTW89_NTX_NUM)
+ return false;
+ if (e->regd >= RTW89_REGD_NUM)
+ return false;
+ if (e->reg_6ghz_power >= NUM_OF_RTW89_REG_6GHZ_POWER)
+ return false;
+ if (e->ch_idx >= RTW89_6G_CH_NUM)
+ return false;
+
+ return true;
+}
+
+static
+void rtw89_fw_load_txpwr_lmt_ru_6ghz(struct rtw89_txpwr_lmt_ru_6ghz_data *data)
+{
+ const struct rtw89_txpwr_conf *conf = &data->conf;
+ struct rtw89_fw_txpwr_lmt_ru_6ghz_entry entry = {};
+ const void *cursor;
+
+ rtw89_for_each_in_txpwr_conf(entry, cursor, conf) {
+ if (!fw_txpwr_lmt_ru_6ghz_entry_valid(&entry, cursor, conf))
+ continue;
+
+ data->v[entry.ru][entry.nt][entry.regd][entry.reg_6ghz_power]
+ [entry.ch_idx] = entry.v;
+ }
+}
+
+static bool
+fw_tx_shape_lmt_entry_valid(const struct rtw89_fw_tx_shape_lmt_entry *e,
+ const void *cursor,
+ const struct rtw89_txpwr_conf *conf)
+{
+ if (!__fw_txpwr_entry_acceptable(e, cursor, conf->ent_sz))
+ return false;
+
+ if (e->band >= RTW89_BAND_NUM)
+ return false;
+ if (e->tx_shape_rs >= RTW89_RS_TX_SHAPE_NUM)
+ return false;
+ if (e->regd >= RTW89_REGD_NUM)
+ return false;
+
+ return true;
+}
+
+static
+void rtw89_fw_load_tx_shape_lmt(struct rtw89_tx_shape_lmt_data *data)
+{
+ const struct rtw89_txpwr_conf *conf = &data->conf;
+ struct rtw89_fw_tx_shape_lmt_entry entry = {};
+ const void *cursor;
+
+ rtw89_for_each_in_txpwr_conf(entry, cursor, conf) {
+ if (!fw_tx_shape_lmt_entry_valid(&entry, cursor, conf))
+ continue;
+
+ data->v[entry.band][entry.tx_shape_rs][entry.regd] = entry.v;
+ }
+}
+
+static bool
+fw_tx_shape_lmt_ru_entry_valid(const struct rtw89_fw_tx_shape_lmt_ru_entry *e,
+ const void *cursor,
+ const struct rtw89_txpwr_conf *conf)
+{
+ if (!__fw_txpwr_entry_acceptable(e, cursor, conf->ent_sz))
+ return false;
+
+ if (e->band >= RTW89_BAND_NUM)
+ return false;
+ if (e->regd >= RTW89_REGD_NUM)
+ return false;
+
+ return true;
+}
+
+static
+void rtw89_fw_load_tx_shape_lmt_ru(struct rtw89_tx_shape_lmt_ru_data *data)
+{
+ const struct rtw89_txpwr_conf *conf = &data->conf;
+ struct rtw89_fw_tx_shape_lmt_ru_entry entry = {};
+ const void *cursor;
+
+ rtw89_for_each_in_txpwr_conf(entry, cursor, conf) {
+ if (!fw_tx_shape_lmt_ru_entry_valid(&entry, cursor, conf))
+ continue;
+
+ data->v[entry.band][entry.regd] = entry.v;
+ }
+}
+
+const struct rtw89_rfe_parms *
+rtw89_load_rfe_data_from_fw(struct rtw89_dev *rtwdev,
+ const struct rtw89_rfe_parms *init)
+{
+ struct rtw89_rfe_data *rfe_data = rtwdev->rfe_data;
+ struct rtw89_rfe_parms *parms;
+
+ if (!rfe_data)
+ return init;
+
+ parms = &rfe_data->rfe_parms;
+ if (init)
+ *parms = *init;
+
+ if (rtw89_txpwr_conf_valid(&rfe_data->byrate.conf)) {
+ rfe_data->byrate.tbl.data = &rfe_data->byrate.conf;
+ rfe_data->byrate.tbl.size = 0; /* don't care here */
+ rfe_data->byrate.tbl.load = rtw89_fw_load_txpwr_byrate;
+ parms->byr_tbl = &rfe_data->byrate.tbl;
+ }
+
+ if (rtw89_txpwr_conf_valid(&rfe_data->lmt_2ghz.conf)) {
+ rtw89_fw_load_txpwr_lmt_2ghz(&rfe_data->lmt_2ghz);
+ parms->rule_2ghz.lmt = &rfe_data->lmt_2ghz.v;
+ }
+
+ if (rtw89_txpwr_conf_valid(&rfe_data->lmt_5ghz.conf)) {
+ rtw89_fw_load_txpwr_lmt_5ghz(&rfe_data->lmt_5ghz);
+ parms->rule_5ghz.lmt = &rfe_data->lmt_5ghz.v;
+ }
+
+ if (rtw89_txpwr_conf_valid(&rfe_data->lmt_6ghz.conf)) {
+ rtw89_fw_load_txpwr_lmt_6ghz(&rfe_data->lmt_6ghz);
+ parms->rule_6ghz.lmt = &rfe_data->lmt_6ghz.v;
+ }
+
+ if (rtw89_txpwr_conf_valid(&rfe_data->lmt_ru_2ghz.conf)) {
+ rtw89_fw_load_txpwr_lmt_ru_2ghz(&rfe_data->lmt_ru_2ghz);
+ parms->rule_2ghz.lmt_ru = &rfe_data->lmt_ru_2ghz.v;
+ }
+
+ if (rtw89_txpwr_conf_valid(&rfe_data->lmt_ru_5ghz.conf)) {
+ rtw89_fw_load_txpwr_lmt_ru_5ghz(&rfe_data->lmt_ru_5ghz);
+ parms->rule_5ghz.lmt_ru = &rfe_data->lmt_ru_5ghz.v;
+ }
+
+ if (rtw89_txpwr_conf_valid(&rfe_data->lmt_ru_6ghz.conf)) {
+ rtw89_fw_load_txpwr_lmt_ru_6ghz(&rfe_data->lmt_ru_6ghz);
+ parms->rule_6ghz.lmt_ru = &rfe_data->lmt_ru_6ghz.v;
+ }
+
+ if (rtw89_txpwr_conf_valid(&rfe_data->tx_shape_lmt.conf)) {
+ rtw89_fw_load_tx_shape_lmt(&rfe_data->tx_shape_lmt);
+ parms->tx_shape.lmt = &rfe_data->tx_shape_lmt.v;
+ }
+
+ if (rtw89_txpwr_conf_valid(&rfe_data->tx_shape_lmt_ru.conf)) {
+ rtw89_fw_load_tx_shape_lmt_ru(&rfe_data->tx_shape_lmt_ru);
+ parms->tx_shape.lmt_ru = &rfe_data->tx_shape_lmt_ru.v;
+ }
+
+ return parms;
+}
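The tx-power loaders added above all follow one pattern: walk a firmware-provided table of fixed-stride entries, accept an entry only if the driver's known struct covers it or the unknown tail bytes are all zero, then range-check every index before writing into the in-driver table. A minimal standalone sketch of that pattern, using hypothetical demo_* names rather than the driver's own macros and types, could look like this:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct demo_entry {          /* the layout this driver version knows about */
	uint8_t band;
	uint8_t ch_idx;
	int8_t v;
} __attribute__((packed));

/* Accept if our struct covers the whole entry, or the extra bytes are zero.
 * Assumes ent_sz fits in a byte, as it does in the firmware element header.
 */
static bool demo_entry_acceptable(const uint8_t *cursor, size_t ent_sz)
{
	static const uint8_t zeros[255] = {};
	size_t known = sizeof(struct demo_entry);

	if (known >= ent_sz)
		return true;
	return memcmp(cursor + known, zeros, ent_sz - known) == 0;
}

/* Assumes 'content' holds num_ents entries of ent_sz bytes each. */
static void demo_load(const uint8_t *content, size_t ent_sz, size_t num_ents,
		      int8_t table[2][64])
{
	const uint8_t *cursor = content;

	for (size_t i = 0; i < num_ents; i++, cursor += ent_sz) {
		struct demo_entry e = {0};
		size_t copy = ent_sz < sizeof(e) ? ent_sz : sizeof(e);

		if (!demo_entry_acceptable(cursor, ent_sz))
			continue;

		memcpy(&e, cursor, copy);        /* copy only the known prefix */
		if (e.band >= 2 || e.ch_idx >= 64)
			continue;                /* range-check before indexing */

		table[e.band][e.ch_idx] = e.v;
	}
}

Skipping rather than rejecting on a bad entry mirrors the loaders above: one out-of-range value from a newer firmware file does not invalidate the rest of the table.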
diff --git a/drivers/net/wireless/realtek/rtw89/fw.h b/drivers/net/wireless/realtek/rtw89/fw.h
index 775f4e8fbda4..d4db9ab0b5e8 100644
--- a/drivers/net/wireless/realtek/rtw89/fw.h
+++ b/drivers/net/wireless/realtek/rtw89/fw.h
@@ -2931,6 +2931,11 @@ static inline void RTW89_SET_FWCMD_ADD_MCC_COURTESY_TARGET(void *cmd, u32 val)
le32p_replace_bits((__le32 *)cmd + 3, val, GENMASK(23, 16));
}
+enum rtw89_fw_mcc_old_group_actions {
+ RTW89_FW_MCC_OLD_GROUP_ACT_NONE = 0,
+ RTW89_FW_MCC_OLD_GROUP_ACT_REPLACE = 1,
+};
+
struct rtw89_fw_mcc_start_req {
u32 group: 2;
u32 btc_in_group: 1;
@@ -3412,10 +3417,46 @@ enum rtw89_fw_element_id {
RTW89_FW_ELEMENT_ID_RADIO_C = 6,
RTW89_FW_ELEMENT_ID_RADIO_D = 7,
RTW89_FW_ELEMENT_ID_RF_NCTL = 8,
+ RTW89_FW_ELEMENT_ID_TXPWR_BYRATE = 9,
+ RTW89_FW_ELEMENT_ID_TXPWR_LMT_2GHZ = 10,
+ RTW89_FW_ELEMENT_ID_TXPWR_LMT_5GHZ = 11,
+ RTW89_FW_ELEMENT_ID_TXPWR_LMT_6GHZ = 12,
+ RTW89_FW_ELEMENT_ID_TXPWR_LMT_RU_2GHZ = 13,
+ RTW89_FW_ELEMENT_ID_TXPWR_LMT_RU_5GHZ = 14,
+ RTW89_FW_ELEMENT_ID_TXPWR_LMT_RU_6GHZ = 15,
+ RTW89_FW_ELEMENT_ID_TX_SHAPE_LMT = 16,
+ RTW89_FW_ELEMENT_ID_TX_SHAPE_LMT_RU = 17,
RTW89_FW_ELEMENT_ID_NUM,
};
+#define BITS_OF_RTW89_TXPWR_FW_ELEMENTS \
+ (BIT(RTW89_FW_ELEMENT_ID_TXPWR_BYRATE) | \
+ BIT(RTW89_FW_ELEMENT_ID_TXPWR_LMT_2GHZ) | \
+ BIT(RTW89_FW_ELEMENT_ID_TXPWR_LMT_5GHZ) | \
+ BIT(RTW89_FW_ELEMENT_ID_TXPWR_LMT_6GHZ) | \
+ BIT(RTW89_FW_ELEMENT_ID_TXPWR_LMT_RU_2GHZ) | \
+ BIT(RTW89_FW_ELEMENT_ID_TXPWR_LMT_RU_5GHZ) | \
+ BIT(RTW89_FW_ELEMENT_ID_TXPWR_LMT_RU_6GHZ) | \
+ BIT(RTW89_FW_ELEMENT_ID_TX_SHAPE_LMT) | \
+ BIT(RTW89_FW_ELEMENT_ID_TX_SHAPE_LMT_RU))
+
+#define RTW89_BE_GEN_DEF_NEEDED_FW_ELEMENTS (BIT(RTW89_FW_ELEMENT_ID_BBMCU0) | \
+ BIT(RTW89_FW_ELEMENT_ID_BB_REG) | \
+ BIT(RTW89_FW_ELEMENT_ID_RADIO_A) | \
+ BIT(RTW89_FW_ELEMENT_ID_RADIO_B) | \
+ BIT(RTW89_FW_ELEMENT_ID_RF_NCTL) | \
+ BITS_OF_RTW89_TXPWR_FW_ELEMENTS)
+
+struct __rtw89_fw_txpwr_element {
+ u8 rsvd0;
+ u8 rsvd1;
+ u8 rfe_type;
+ u8 ent_sz;
+ __le32 num_ents;
+ u8 content[];
+} __packed;
+
struct rtw89_fw_element_hdr {
__le32 id; /* enum rtw89_fw_element_id */
__le32 size; /* exclude header size */
@@ -3436,6 +3477,7 @@ struct rtw89_fw_element_hdr {
__le32 data;
} __packed regs[];
} __packed reg2;
+ struct __rtw89_fw_txpwr_element txpwr;
} __packed u;
} __packed;
@@ -3618,7 +3660,9 @@ struct rtw89_fw_h2c_rf_get_mccch {
#define RTW89_FW_BACKTRACE_MAX_SIZE 512 /* 8 * 64 (entries) */
#define RTW89_FW_BACKTRACE_KEY 0xBACEBACE
-int rtw89_fw_check_rdy(struct rtw89_dev *rtwdev);
+#define FWDL_WAIT_CNT 400000
+
+int rtw89_fw_check_rdy(struct rtw89_dev *rtwdev, enum rtw89_fwdl_check_type type);
int rtw89_fw_recognize(struct rtw89_dev *rtwdev);
int rtw89_fw_recognize_elements(struct rtw89_dev *rtwdev);
const struct firmware *
@@ -3626,7 +3670,8 @@ rtw89_early_fw_feature_recognize(struct device *device,
const struct rtw89_chip_info *chip,
struct rtw89_fw_info *early_fw,
int *used_fw_format);
-int rtw89_fw_download(struct rtw89_dev *rtwdev, enum rtw89_fw_type type);
+int rtw89_fw_download(struct rtw89_dev *rtwdev, enum rtw89_fw_type type,
+ bool include_bb);
void rtw89_load_firmware_work(struct work_struct *work);
void rtw89_unload_firmware(struct rtw89_dev *rtwdev);
int rtw89_wait_firmware_completion(struct rtw89_dev *rtwdev);
@@ -3755,7 +3800,7 @@ int rtw89_fw_h2c_reset_mcc_group(struct rtw89_dev *rtwdev, u8 group);
int rtw89_fw_h2c_mcc_req_tsf(struct rtw89_dev *rtwdev,
const struct rtw89_fw_mcc_tsf_req *req,
struct rtw89_mac_mcc_tsf_rpt *rpt);
-int rtw89_fw_h2c_mcc_macid_bitamp(struct rtw89_dev *rtwdev, u8 group, u8 macid,
+int rtw89_fw_h2c_mcc_macid_bitmap(struct rtw89_dev *rtwdev, u8 group, u8 macid,
u8 *bitmap);
int rtw89_fw_h2c_mcc_sync(struct rtw89_dev *rtwdev, u8 group, u8 source,
u8 target, u8 offset);
@@ -3770,4 +3815,97 @@ static inline void rtw89_fw_h2c_init_ba_cam(struct rtw89_dev *rtwdev)
rtw89_fw_h2c_init_dynamic_ba_cam_v0_ext(rtwdev);
}
+/* must consider compatibility; don't insert new in the mid */
+struct rtw89_fw_txpwr_byrate_entry {
+ u8 band;
+ u8 nss;
+ u8 rs;
+ u8 shf;
+ u8 len;
+ __le32 data;
+ u8 bw;
+ u8 ofdma;
+} __packed;
+
+/* must consider compatibility; don't insert new in the mid */
+struct rtw89_fw_txpwr_lmt_2ghz_entry {
+ u8 bw;
+ u8 nt;
+ u8 rs;
+ u8 bf;
+ u8 regd;
+ u8 ch_idx;
+ s8 v;
+} __packed;
+
+/* must consider compatibility; don't insert new in the mid */
+struct rtw89_fw_txpwr_lmt_5ghz_entry {
+ u8 bw;
+ u8 nt;
+ u8 rs;
+ u8 bf;
+ u8 regd;
+ u8 ch_idx;
+ s8 v;
+} __packed;
+
+/* must consider compatibility; don't insert new in the mid */
+struct rtw89_fw_txpwr_lmt_6ghz_entry {
+ u8 bw;
+ u8 nt;
+ u8 rs;
+ u8 bf;
+ u8 regd;
+ u8 reg_6ghz_power;
+ u8 ch_idx;
+ s8 v;
+} __packed;
+
+/* must consider compatibility; don't insert new in the mid */
+struct rtw89_fw_txpwr_lmt_ru_2ghz_entry {
+ u8 ru;
+ u8 nt;
+ u8 regd;
+ u8 ch_idx;
+ s8 v;
+} __packed;
+
+/* must consider compatibility; don't insert new in the mid */
+struct rtw89_fw_txpwr_lmt_ru_5ghz_entry {
+ u8 ru;
+ u8 nt;
+ u8 regd;
+ u8 ch_idx;
+ s8 v;
+} __packed;
+
+/* must consider compatibility; don't insert new in the mid */
+struct rtw89_fw_txpwr_lmt_ru_6ghz_entry {
+ u8 ru;
+ u8 nt;
+ u8 regd;
+ u8 reg_6ghz_power;
+ u8 ch_idx;
+ s8 v;
+} __packed;
+
+/* must consider compatibility; don't insert new in the mid */
+struct rtw89_fw_tx_shape_lmt_entry {
+ u8 band;
+ u8 tx_shape_rs;
+ u8 regd;
+ u8 v;
+} __packed;
+
+/* must consider compatibility; don't insert new in the mid */
+struct rtw89_fw_tx_shape_lmt_ru_entry {
+ u8 band;
+ u8 regd;
+ u8 v;
+} __packed;
+
+const struct rtw89_rfe_parms *
+rtw89_load_rfe_data_from_fw(struct rtw89_dev *rtwdev,
+ const struct rtw89_rfe_parms *init);
+
#endif
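For context on the new element-ID bitmasks declared above: the loader records which element IDs it actually found in the firmware file, then compares that set against the chip's required set in a single mask test. A compressed sketch of that check, with hypothetical demo_* names and illustrative ID values only, might be:

#include <stdbool.h>
#include <stdint.h>

#define DEMO_BIT(id)  (1u << (id))

/* Illustrative IDs only; the real values come from enum rtw89_fw_element_id. */
enum demo_element_id {
	DEMO_ELM_BBMCU0 = 1,
	DEMO_ELM_BB_REG = 4,
	DEMO_ELM_TXPWR_BYRATE = 9,
};

#define DEMO_NEEDED (DEMO_BIT(DEMO_ELM_BBMCU0) | \
		     DEMO_BIT(DEMO_ELM_BB_REG)  | \
		     DEMO_BIT(DEMO_ELM_TXPWR_BYRATE))

static bool demo_all_elements_present(uint32_t recognized)
{
	/* every required bit must be set in the recognized mask */
	return (recognized & DEMO_NEEDED) == DEMO_NEEDED;
}

Usage is the obvious one: set bits in 'recognized' while walking the element headers, then gate chip bring-up on demo_all_elements_present().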
diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
index fab9f5004a75..0c5768f41d55 100644
--- a/drivers/net/wireless/realtek/rtw89/mac.c
+++ b/drivers/net/wireless/realtek/rtw89/mac.c
@@ -3452,7 +3452,7 @@ static void rtw89_disable_fw_watchdog(struct rtw89_dev *rtwdev)
rtw89_mac_mem_write(rtwdev, R_AX_WDT_STATUS, val32, RTW89_MAC_MEM_CPU_LOCAL);
}
-void rtw89_mac_disable_cpu(struct rtw89_dev *rtwdev)
+static void rtw89_mac_disable_cpu_ax(struct rtw89_dev *rtwdev)
{
clear_bit(RTW89_FLAG_FW_RDY, rtwdev->flags);
@@ -3467,7 +3467,8 @@ void rtw89_mac_disable_cpu(struct rtw89_dev *rtwdev)
rtw89_write32_set(rtwdev, R_AX_PLATFORM_ENABLE, B_AX_PLATFORM_EN);
}
-int rtw89_mac_enable_cpu(struct rtw89_dev *rtwdev, u8 boot_reason, bool dlfw)
+static int rtw89_mac_enable_cpu_ax(struct rtw89_dev *rtwdev, u8 boot_reason,
+ bool dlfw, bool include_bb)
{
u32 val;
int ret;
@@ -3505,7 +3506,7 @@ int rtw89_mac_enable_cpu(struct rtw89_dev *rtwdev, u8 boot_reason, bool dlfw)
if (!dlfw) {
mdelay(5);
- ret = rtw89_fw_check_rdy(rtwdev);
+ ret = rtw89_fw_check_rdy(rtwdev, RTW89_FWDL_CHECK_FREERTOS_DONE);
if (ret)
return ret;
}
@@ -3592,7 +3593,7 @@ int rtw89_mac_disable_bb_rf(struct rtw89_dev *rtwdev)
}
EXPORT_SYMBOL(rtw89_mac_disable_bb_rf);
-int rtw89_mac_partial_init(struct rtw89_dev *rtwdev)
+int rtw89_mac_partial_init(struct rtw89_dev *rtwdev, bool include_bb)
{
int ret;
@@ -3606,6 +3607,12 @@ int rtw89_mac_partial_init(struct rtw89_dev *rtwdev)
rtw89_mac_ctrl_hci_dma_trx(rtwdev, true);
+ if (include_bb) {
+ rtw89_chip_bb_preinit(rtwdev, RTW89_PHY_0);
+ if (rtwdev->dbcc_en)
+ rtw89_chip_bb_preinit(rtwdev, RTW89_PHY_1);
+ }
+
ret = rtw89_mac_dmac_pre_init(rtwdev);
if (ret)
return ret;
@@ -3616,7 +3623,7 @@ int rtw89_mac_partial_init(struct rtw89_dev *rtwdev)
return ret;
}
- ret = rtw89_fw_download(rtwdev, RTW89_FW_NORMAL);
+ ret = rtw89_fw_download(rtwdev, RTW89_FW_NORMAL, include_bb);
if (ret)
return ret;
@@ -3625,9 +3632,11 @@ int rtw89_mac_partial_init(struct rtw89_dev *rtwdev)
int rtw89_mac_init(struct rtw89_dev *rtwdev)
{
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ bool include_bb = !!chip->bbmcu_nr;
int ret;
- ret = rtw89_mac_partial_init(rtwdev);
+ ret = rtw89_mac_partial_init(rtwdev, include_bb);
if (ret)
goto fail;
@@ -3712,7 +3721,7 @@ int rtw89_mac_set_macid_pause(struct rtw89_dev *rtwdev, u8 macid, bool pause)
return 0;
}
-static const struct rtw89_port_reg rtw_port_base = {
+static const struct rtw89_port_reg rtw89_port_base_ax = {
.port_cfg = R_AX_PORT_CFG_P0,
.tbtt_prohib = R_AX_TBTT_PROHIB_P0,
.bcn_area = R_AX_BCN_AREA_P0,
@@ -3727,7 +3736,15 @@ static const struct rtw89_port_reg rtw_port_base = {
.tbtt_shift = R_AX_TBTT_SHIFT_P0,
.bcn_cnt_tmr = R_AX_BCN_CNT_TMR_P0,
.tsftr_l = R_AX_TSFTR_LOW_P0,
- .tsftr_h = R_AX_TSFTR_HIGH_P0
+ .tsftr_h = R_AX_TSFTR_HIGH_P0,
+ .md_tsft = R_AX_MD_TSFT_STMP_CTL,
+ .bss_color = R_AX_PTCL_BSS_COLOR_0,
+ .mbssid = R_AX_MBSSID_CTRL,
+ .mbssid_drop = R_AX_MBSSID_DROP_0,
+ .tsf_sync = R_AX_PORT0_TSF_SYNC,
+ .hiq_win = {R_AX_P0MB_HGQ_WINDOW_CFG_0, R_AX_PORT_HGQ_WINDOW_CFG,
+ R_AX_PORT_HGQ_WINDOW_CFG + 1, R_AX_PORT_HGQ_WINDOW_CFG + 2,
+ R_AX_PORT_HGQ_WINDOW_CFG + 3},
};
#define BCN_INTERVAL 100
@@ -3742,8 +3759,9 @@ static const struct rtw89_port_reg rtw_port_base = {
static void rtw89_mac_port_cfg_func_sw(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
- const struct rtw89_port_reg *p = &rtw_port_base;
if (!rtw89_read32_port_mask(rtwdev, rtwvif, p->port_cfg, B_AX_PORT_FUNC_EN))
return;
@@ -3764,7 +3782,8 @@ static void rtw89_mac_port_cfg_func_sw(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_tx_rpt(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif, bool en)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
if (en)
rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_TXBCN_RPT_EN);
@@ -3775,7 +3794,8 @@ static void rtw89_mac_port_cfg_tx_rpt(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_rx_rpt(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif, bool en)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
if (en)
rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_RXBCN_RPT_EN);
@@ -3786,7 +3806,8 @@ static void rtw89_mac_port_cfg_rx_rpt(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_net_type(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
rtw89_write32_port_mask(rtwdev, rtwvif, p->port_cfg, B_AX_NET_TYPE_MASK,
rtwvif->net_type);
@@ -3795,7 +3816,8 @@ static void rtw89_mac_port_cfg_net_type(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_bcn_prct(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
bool en = rtwvif->net_type != RTW89_NET_TYPE_NO_LINK;
u32 bits = B_AX_TBTT_PROHIB_EN | B_AX_BRK_SETUP;
@@ -3808,7 +3830,8 @@ static void rtw89_mac_port_cfg_bcn_prct(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_rx_sw(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
bool en = rtwvif->net_type == RTW89_NET_TYPE_INFRA ||
rtwvif->net_type == RTW89_NET_TYPE_AD_HOC;
u32 bit = B_AX_RX_BSSID_FIT_EN;
@@ -3822,7 +3845,8 @@ static void rtw89_mac_port_cfg_rx_sw(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_rx_sync(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
bool en = rtwvif->net_type == RTW89_NET_TYPE_INFRA ||
rtwvif->net_type == RTW89_NET_TYPE_AD_HOC;
@@ -3835,7 +3859,8 @@ static void rtw89_mac_port_cfg_rx_sync(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_tx_sw(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
bool en = rtwvif->net_type == RTW89_NET_TYPE_AP_MODE ||
rtwvif->net_type == RTW89_NET_TYPE_AD_HOC;
@@ -3848,8 +3873,9 @@ static void rtw89_mac_port_cfg_tx_sw(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_bcn_intv(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
- const struct rtw89_port_reg *p = &rtw_port_base;
u16 bcn_int = vif->bss_conf.beacon_int ? vif->bss_conf.beacon_int : BCN_INTERVAL;
rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_space, B_AX_BCN_SPACE_MASK,
@@ -3859,27 +3885,25 @@ static void rtw89_mac_port_cfg_bcn_intv(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_hiq_win(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- static const u32 hiq_win_addr[RTW89_PORT_NUM] = {
- R_AX_P0MB_HGQ_WINDOW_CFG_0, R_AX_PORT_HGQ_WINDOW_CFG,
- R_AX_PORT_HGQ_WINDOW_CFG + 1, R_AX_PORT_HGQ_WINDOW_CFG + 2,
- R_AX_PORT_HGQ_WINDOW_CFG + 3,
- };
u8 win = rtwvif->net_type == RTW89_NET_TYPE_AP_MODE ? 16 : 0;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
u8 port = rtwvif->port;
u32 reg;
- reg = rtw89_mac_reg_by_idx(rtwdev, hiq_win_addr[port], rtwvif->mac_idx);
+ reg = rtw89_mac_reg_by_idx(rtwdev, p->hiq_win[port], rtwvif->mac_idx);
rtw89_write8(rtwdev, reg, win);
}
static void rtw89_mac_port_cfg_hiq_dtim(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
- const struct rtw89_port_reg *p = &rtw_port_base;
u32 addr;
- addr = rtw89_mac_reg_by_idx(rtwdev, R_AX_MD_TSFT_STMP_CTL, rtwvif->mac_idx);
+ addr = rtw89_mac_reg_by_idx(rtwdev, p->md_tsft, rtwvif->mac_idx);
rtw89_write8_set(rtwdev, addr, B_AX_UPD_HGQMD | B_AX_UPD_TIMIE);
rtw89_write16_port_mask(rtwdev, rtwvif, p->dtim_ctrl, B_AX_DTIM_NUM_MASK,
@@ -3889,7 +3913,8 @@ static void rtw89_mac_port_cfg_hiq_dtim(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_bcn_setup_time(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib,
B_AX_TBTT_SETUP_MASK, BCN_SETUP_DEF);
@@ -3898,7 +3923,8 @@ static void rtw89_mac_port_cfg_bcn_setup_time(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_bcn_hold_time(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib,
B_AX_TBTT_HOLD_MASK, BCN_HOLD_DEF);
@@ -3907,7 +3933,8 @@ static void rtw89_mac_port_cfg_bcn_hold_time(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_bcn_mask_area(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_area,
B_AX_BCN_MSK_AREA_MASK, BCN_MASK_DEF);
@@ -3916,7 +3943,8 @@ static void rtw89_mac_port_cfg_bcn_mask_area(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_tbtt_early(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
rtw89_write16_port_mask(rtwdev, rtwvif, p->tbtt_early,
B_AX_TBTTERLY_MASK, TBTT_ERLY_DEF);
@@ -3925,6 +3953,8 @@ static void rtw89_mac_port_cfg_tbtt_early(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_bss_color(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
static const u32 masks[RTW89_PORT_NUM] = {
B_AX_BSS_COLOB_AX_PORT_0_MASK, B_AX_BSS_COLOB_AX_PORT_1_MASK,
@@ -3937,7 +3967,7 @@ static void rtw89_mac_port_cfg_bss_color(struct rtw89_dev *rtwdev,
u8 bss_color;
bss_color = vif->bss_conf.he_bss_color.color;
- reg_base = port >= 4 ? R_AX_PTCL_BSS_COLOR_1 : R_AX_PTCL_BSS_COLOR_0;
+ reg_base = port >= 4 ? p->bss_color + 4 : p->bss_color;
reg = rtw89_mac_reg_by_idx(rtwdev, reg_base, rtwvif->mac_idx);
rtw89_write32_mask(rtwdev, reg, masks[port], bss_color);
}
@@ -3945,6 +3975,8 @@ static void rtw89_mac_port_cfg_bss_color(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_mbssid(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
u8 port = rtwvif->port;
u32 reg;
@@ -3952,7 +3984,7 @@ static void rtw89_mac_port_cfg_mbssid(struct rtw89_dev *rtwdev,
return;
if (port == 0) {
- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_MBSSID_CTRL, rtwvif->mac_idx);
+ reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid, rtwvif->mac_idx);
rtw89_write32_clr(rtwdev, reg, B_AX_P0MB_ALL_MASK);
}
}
@@ -3960,11 +3992,13 @@ static void rtw89_mac_port_cfg_mbssid(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_hiq_drop(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
u8 port = rtwvif->port;
u32 reg;
u32 val;
- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_MBSSID_DROP_0, rtwvif->mac_idx);
+ reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid_drop, rtwvif->mac_idx);
val = rtw89_read32(rtwdev, reg);
val &= ~FIELD_PREP(B_AX_PORT_DROP_4_0_MASK, BIT(port));
if (port == 0)
@@ -3975,7 +4009,8 @@ static void rtw89_mac_port_cfg_hiq_drop(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_func_en(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif, bool enable)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
if (enable)
rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg,
@@ -3988,7 +4023,8 @@ static void rtw89_mac_port_cfg_func_en(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_bcn_early(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK,
BCN_ERLY_DEF);
@@ -3997,7 +4033,8 @@ static void rtw89_mac_port_cfg_bcn_early(struct rtw89_dev *rtwdev,
static void rtw89_mac_port_cfg_tbtt_shift(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
u16 val;
if (rtwdev->chip->chip_id != RTL8852C)
@@ -4019,10 +4056,12 @@ void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif_src,
u16 offset_tu)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
u32 val, reg;
val = RTW89_PORT_OFFSET_TU_TO_32US(offset_tu);
- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PORT0_TSF_SYNC + rtwvif->port * 4,
+ reg = rtw89_mac_reg_by_idx(rtwdev, p->tsf_sync + rtwvif->port * 4,
rtwvif->mac_idx);
rtw89_write32_mask(rtwdev, reg, B_AX_SYNC_PORT_SRC, rtwvif_src->port);
@@ -4160,7 +4199,8 @@ int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
u64 *tsf)
{
- const struct rtw89_port_reg *p = &rtw_port_base;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
u32 tsf_low, tsf_high;
int ret;
@@ -4479,6 +4519,7 @@ static void
rtw89_mac_c2h_tsf32_toggle_rpt(struct rtw89_dev *rtwdev, struct sk_buff *c2h,
u32 len)
{
+ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_TSF32_TOGGLE_CHANGE);
}
static void
@@ -4733,21 +4774,22 @@ void rtw89_mac_c2h_handle(struct rtw89_dev *rtwdev, struct sk_buff *skb,
handler(rtwdev, skb, len);
}
-bool rtw89_mac_get_txpwr_cr(struct rtw89_dev *rtwdev,
- enum rtw89_phy_idx phy_idx,
- u32 reg_base, u32 *cr)
+static
+bool rtw89_mac_get_txpwr_cr_ax(struct rtw89_dev *rtwdev,
+ enum rtw89_phy_idx phy_idx,
+ u32 reg_base, u32 *cr)
{
const struct rtw89_dle_mem *dle_mem = rtwdev->chip->dle_mem;
enum rtw89_qta_mode mode = dle_mem->mode;
u32 addr = rtw89_mac_reg_by_idx(rtwdev, reg_base, phy_idx);
- if (addr < R_AX_PWR_RATE_CTRL || addr > CMAC1_END_ADDR) {
+ if (addr < R_AX_PWR_RATE_CTRL || addr > CMAC1_END_ADDR_AX) {
rtw89_err(rtwdev, "[TXPWR] addr=0x%x exceed txpwr cr\n",
addr);
goto error;
}
- if (addr >= CMAC1_START_ADDR && addr <= CMAC1_END_ADDR)
+ if (addr >= CMAC1_START_ADDR_AX && addr <= CMAC1_END_ADDR_AX)
if (mode == RTW89_QTA_SCC) {
rtw89_err(rtwdev,
"[TXPWR] addr=0x%x but hw not enable\n",
@@ -4764,7 +4806,6 @@ error:
return false;
}
-EXPORT_SYMBOL(rtw89_mac_get_txpwr_cr);
int rtw89_mac_cfg_ppdu_status(struct rtw89_dev *rtwdev, u8 mac_idx, bool enable)
{
@@ -4799,6 +4840,7 @@ void rtw89_mac_update_rts_threshold(struct rtw89_dev *rtwdev, u8 mac_idx)
#define MAC_AX_LEN_TH_MAX 255
#define MAC_AX_TIME_TH_DEF 88
#define MAC_AX_LEN_TH_DEF 4080
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
struct ieee80211_hw *hw = rtwdev->hw;
u32 rts_threshold = hw->wiphy->rts_threshold;
u32 time_th, len_th;
@@ -4815,7 +4857,7 @@ void rtw89_mac_update_rts_threshold(struct rtw89_dev *rtwdev, u8 mac_idx)
time_th = min_t(u32, time_th >> MAC_AX_TIME_TH_SH, MAC_AX_TIME_TH_MAX);
len_th = min_t(u32, len_th >> MAC_AX_LEN_TH_SH, MAC_AX_LEN_TH_MAX);
- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_AGG_LEN_HT_0, mac_idx);
+ reg = rtw89_mac_reg_by_idx(rtwdev, mac->agg_len_ht, mac_idx);
rtw89_write16_mask(rtwdev, reg, B_AX_RTS_TXTIME_TH_MASK, time_th);
rtw89_write16_mask(rtwdev, reg, B_AX_RTS_LEN_TH_MASK, len_th);
}
@@ -5153,6 +5195,9 @@ static void rtw89_mac_bfee_standby_timer(struct rtw89_dev *rtwdev, u8 mac_idx,
{
u32 reg;
+ if (rtwdev->chip->chip_gen != RTW89_CHIP_AX)
+ return;
+
rtw89_debug(rtwdev, RTW89_DBG_BF, "set bfee standby_timer to %d\n", keep);
reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_BFMEE_RESP_OPTION, mac_idx);
if (keep) {
@@ -5166,14 +5211,14 @@ static void rtw89_mac_bfee_standby_timer(struct rtw89_dev *rtwdev, u8 mac_idx,
}
}
-static void rtw89_mac_bfee_ctrl(struct rtw89_dev *rtwdev, u8 mac_idx, bool en)
+void rtw89_mac_bfee_ctrl(struct rtw89_dev *rtwdev, u8 mac_idx, bool en)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
u32 reg;
- u32 mask = B_AX_BFMEE_HT_NDPA_EN | B_AX_BFMEE_VHT_NDPA_EN |
- B_AX_BFMEE_HE_NDPA_EN;
+ u32 mask = mac->bfee_ctrl.mask;
rtw89_debug(rtwdev, RTW89_DBG_BF, "set bfee ndpa_en to %d\n", en);
- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_BFMEE_RESP_OPTION, mac_idx);
+ reg = rtw89_mac_reg_by_idx(rtwdev, mac->bfee_ctrl.addr, mac_idx);
if (en) {
set_bit(RTW89_FLAG_BFEE_EN, rtwdev->flags);
rtw89_write32_set(rtwdev, reg, mask);
@@ -5183,7 +5228,7 @@ static void rtw89_mac_bfee_ctrl(struct rtw89_dev *rtwdev, u8 mac_idx, bool en)
}
}
-static int rtw89_mac_init_bfee(struct rtw89_dev *rtwdev, u8 mac_idx)
+static int rtw89_mac_init_bfee_ax(struct rtw89_dev *rtwdev, u8 mac_idx)
{
u32 reg;
u32 val32;
@@ -5225,9 +5270,9 @@ static int rtw89_mac_init_bfee(struct rtw89_dev *rtwdev, u8 mac_idx)
return 0;
}
-static int rtw89_mac_set_csi_para_reg(struct rtw89_dev *rtwdev,
- struct ieee80211_vif *vif,
- struct ieee80211_sta *sta)
+static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
{
struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
u8 mac_idx = rtwvif->mac_idx;
@@ -5283,9 +5328,9 @@ static int rtw89_mac_set_csi_para_reg(struct rtw89_dev *rtwdev,
return 0;
}
-static int rtw89_mac_csi_rrsc(struct rtw89_dev *rtwdev,
- struct ieee80211_vif *vif,
- struct ieee80211_sta *sta)
+static int rtw89_mac_csi_rrsc_ax(struct rtw89_dev *rtwdev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
{
struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
u32 rrsc = BIT(RTW89_MAC_BF_RRSC_6M) | BIT(RTW89_MAC_BF_RRSC_24M);
@@ -5322,17 +5367,18 @@ static int rtw89_mac_csi_rrsc(struct rtw89_dev *rtwdev,
return 0;
}
-void rtw89_mac_bf_assoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
- struct ieee80211_sta *sta)
+static void rtw89_mac_bf_assoc_ax(struct rtw89_dev *rtwdev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
{
struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
if (rtw89_sta_has_beamformer_cap(sta)) {
rtw89_debug(rtwdev, RTW89_DBG_BF,
"initialize bfee for new association\n");
- rtw89_mac_init_bfee(rtwdev, rtwvif->mac_idx);
- rtw89_mac_set_csi_para_reg(rtwdev, vif, sta);
- rtw89_mac_csi_rrsc(rtwdev, vif, sta);
+ rtw89_mac_init_bfee_ax(rtwdev, rtwvif->mac_idx);
+ rtw89_mac_set_csi_para_reg_ax(rtwdev, vif, sta);
+ rtw89_mac_csi_rrsc_ax(rtwdev, vif, sta);
}
}
@@ -5551,8 +5597,9 @@ int rtw89_mac_get_tx_retry_limit(struct rtw89_dev *rtwdev,
int rtw89_mac_set_hw_muedca_ctrl(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif, bool en)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
u8 mac_idx = rtwvif->mac_idx;
- u16 set = B_AX_MUEDCA_EN_0 | B_AX_SET_MUEDCATIMER_TF_0;
+ u16 set = mac->muedca_ctrl.mask;
u32 reg;
u32 ret;
@@ -5560,7 +5607,7 @@ int rtw89_mac_set_hw_muedca_ctrl(struct rtw89_dev *rtwdev,
if (ret)
return ret;
- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_MUEDCA_EN, mac_idx);
+ reg = rtw89_mac_reg_by_idx(rtwdev, mac->muedca_ctrl.addr, mac_idx);
if (en)
rtw89_write16_set(rtwdev, reg, set);
else
@@ -5684,11 +5731,51 @@ int rtw89_mac_ptk_drop_by_band_and_wait(struct rtw89_dev *rtwdev,
return ret;
}
+static u8 rtw89_fw_get_rdy_ax(struct rtw89_dev *rtwdev, enum rtw89_fwdl_check_type type)
+{
+ u8 val = rtw89_read8(rtwdev, R_AX_WCPU_FW_CTRL);
+
+ return FIELD_GET(B_AX_WCPU_FWDL_STS_MASK, val);
+}
+
+static
+int rtw89_fwdl_check_path_ready_ax(struct rtw89_dev *rtwdev,
+ bool h2c_or_fwdl)
+{
+ u8 check = h2c_or_fwdl ? B_AX_H2C_PATH_RDY : B_AX_FWDL_PATH_RDY;
+ u8 val;
+
+ return read_poll_timeout_atomic(rtw89_read8, val, val & check,
+ 1, FWDL_WAIT_CNT, false,
+ rtwdev, R_AX_WCPU_FW_CTRL);
+}
+
const struct rtw89_mac_gen_def rtw89_mac_gen_ax = {
.band1_offset = RTW89_MAC_AX_BAND_REG_OFFSET,
.filter_model_addr = R_AX_FILTER_MODEL_ADDR,
.indir_access_addr = R_AX_INDIR_ACCESS_ENTRY,
.mem_base_addrs = rtw89_mac_mem_base_addrs_ax,
.rx_fltr = R_AX_RX_FLTR_OPT,
+ .port_base = &rtw89_port_base_ax,
+ .agg_len_ht = R_AX_AGG_LEN_HT_0,
+
+ .muedca_ctrl = {
+ .addr = R_AX_MUEDCA_EN,
+ .mask = B_AX_MUEDCA_EN_0 | B_AX_SET_MUEDCATIMER_TF_0,
+ },
+ .bfee_ctrl = {
+ .addr = R_AX_BFMEE_RESP_OPTION,
+ .mask = B_AX_BFMEE_HT_NDPA_EN | B_AX_BFMEE_VHT_NDPA_EN |
+ B_AX_BFMEE_HE_NDPA_EN,
+ },
+
+ .bf_assoc = rtw89_mac_bf_assoc_ax,
+
+ .disable_cpu = rtw89_mac_disable_cpu_ax,
+ .fwdl_enable_wcpu = rtw89_mac_enable_cpu_ax,
+ .fwdl_get_status = rtw89_fw_get_rdy_ax,
+ .fwdl_check_path_ready = rtw89_fwdl_check_path_ready_ax,
+
+ .get_txpwr_cr = rtw89_mac_get_txpwr_cr_ax,
};
EXPORT_SYMBOL(rtw89_mac_gen_ax);
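The rtw89_mac_gen_ax table above is one instance of the per-generation ops pattern this series extends: callers fetch a function-pointer table from the chip description instead of calling AX-specific helpers directly, so the same firmware-download path can drive AX and BE parts. A reduced sketch of that dispatch, with hypothetical demo_* types standing in for the driver's structs, looks like:

#include <stdio.h>

struct demo_dev;

struct demo_mac_gen {
	void (*disable_cpu)(struct demo_dev *dev);
	int  (*fwdl_enable_wcpu)(struct demo_dev *dev, int boot_reason,
				 int dlfw, int include_bb);
};

struct demo_dev {
	const struct demo_mac_gen *mac;   /* chosen once per chip generation */
};

static void demo_disable_cpu_ax(struct demo_dev *dev)
{
	printf("AX: hold WCPU\n");
}

static int demo_enable_wcpu_ax(struct demo_dev *dev, int boot_reason,
			       int dlfw, int include_bb)
{
	printf("AX: boot WCPU (bb=%d)\n", include_bb);
	return 0;
}

static const struct demo_mac_gen demo_mac_gen_ax = {
	.disable_cpu = demo_disable_cpu_ax,
	.fwdl_enable_wcpu = demo_enable_wcpu_ax,
};

/* Generation-agnostic caller, mirroring the shape of the new download path. */
static int demo_fw_download(struct demo_dev *dev, int include_bb)
{
	dev->mac->disable_cpu(dev);
	return dev->mac->fwdl_enable_wcpu(dev, 0, 1, include_bb);
}

A second table (the BE one, defined in mac_be.c below) plugs into the same caller without touching it.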
diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h
index 7cf34137c0bc..c11c904f87fe 100644
--- a/drivers/net/wireless/realtek/rtw89/mac.h
+++ b/drivers/net/wireless/realtek/rtw89/mac.h
@@ -858,6 +858,24 @@ struct rtw89_mac_gen_def {
u32 indir_access_addr;
const u32 *mem_base_addrs;
u32 rx_fltr;
+ const struct rtw89_port_reg *port_base;
+ u32 agg_len_ht;
+
+ struct rtw89_reg_def muedca_ctrl;
+ struct rtw89_reg_def bfee_ctrl;
+
+ void (*bf_assoc)(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta);
+
+ void (*disable_cpu)(struct rtw89_dev *rtwdev);
+ int (*fwdl_enable_wcpu)(struct rtw89_dev *rtwdev, u8 boot_reason,
+ bool dlfw, bool include_bb);
+ u8 (*fwdl_get_status)(struct rtw89_dev *rtwdev, enum rtw89_fwdl_check_type type);
+ int (*fwdl_check_path_ready)(struct rtw89_dev *rtwdev, bool h2c_or_fwdl);
+
+ bool (*get_txpwr_cr)(struct rtw89_dev *rtwdev,
+ enum rtw89_phy_idx phy_idx,
+ u32 reg_base, u32 *cr);
};
extern const struct rtw89_mac_gen_def rtw89_mac_gen_ax;
@@ -957,7 +975,7 @@ rtw89_write32_port_set(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
}
void rtw89_mac_pwr_off(struct rtw89_dev *rtwdev);
-int rtw89_mac_partial_init(struct rtw89_dev *rtwdev);
+int rtw89_mac_partial_init(struct rtw89_dev *rtwdev, bool include_bb);
int rtw89_mac_init(struct rtw89_dev *rtwdev);
int rtw89_mac_check_mac_en(struct rtw89_dev *rtwdev, u8 band,
enum rtw89_mac_hwmod_sel sel);
@@ -975,8 +993,6 @@ void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev,
struct ieee80211_vif *vif);
void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *vif);
-void rtw89_mac_disable_cpu(struct rtw89_dev *rtwdev);
-int rtw89_mac_enable_cpu(struct rtw89_dev *rtwdev, u8 boot_reason, bool dlfw);
int rtw89_mac_enable_bb_rf(struct rtw89_dev *rtwdev);
int rtw89_mac_disable_bb_rf(struct rtw89_dev *rtwdev);
@@ -1023,13 +1039,19 @@ u32 rtw89_mac_get_sb(struct rtw89_dev *rtwdev);
bool rtw89_mac_get_ctrl_path(struct rtw89_dev *rtwdev);
int rtw89_mac_cfg_ctrl_path(struct rtw89_dev *rtwdev, bool wl);
int rtw89_mac_cfg_ctrl_path_v1(struct rtw89_dev *rtwdev, bool wl);
-bool rtw89_mac_get_txpwr_cr(struct rtw89_dev *rtwdev,
- enum rtw89_phy_idx phy_idx,
- u32 reg_base, u32 *cr);
void rtw89_mac_power_mode_change(struct rtw89_dev *rtwdev, bool enter);
void rtw89_mac_notify_wake(struct rtw89_dev *rtwdev);
+
+static inline
void rtw89_mac_bf_assoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
- struct ieee80211_sta *sta);
+ struct ieee80211_sta *sta)
+{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+
+ if (mac->bf_assoc)
+ mac->bf_assoc(rtwdev, vif, sta);
+}
+
void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
struct ieee80211_sta *sta);
void rtw89_mac_bf_set_gid_table(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
@@ -1037,6 +1059,7 @@ void rtw89_mac_bf_set_gid_table(struct rtw89_dev *rtwdev, struct ieee80211_vif *
void rtw89_mac_bf_monitor_calc(struct rtw89_dev *rtwdev,
struct ieee80211_sta *sta, bool disconnect);
void _rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev);
+void rtw89_mac_bfee_ctrl(struct rtw89_dev *rtwdev, u8 mac_idx, bool en);
int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
int rtw89_mac_set_hw_muedca_ctrl(struct rtw89_dev *rtwdev,
@@ -1045,6 +1068,9 @@ int rtw89_mac_set_macid_pause(struct rtw89_dev *rtwdev, u8 macid, bool pause);
static inline void rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev)
{
+ if (rtwdev->chip->chip_gen != RTW89_CHIP_AX)
+ return;
+
if (!test_bit(RTW89_FLAG_BFEE_MON, rtwdev->flags))
return;
@@ -1055,9 +1081,10 @@ static inline int rtw89_mac_txpwr_read32(struct rtw89_dev *rtwdev,
enum rtw89_phy_idx phy_idx,
u32 reg_base, u32 *val)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
u32 cr;
- if (!rtw89_mac_get_txpwr_cr(rtwdev, phy_idx, reg_base, &cr))
+ if (!mac->get_txpwr_cr(rtwdev, phy_idx, reg_base, &cr))
return -EINVAL;
*val = rtw89_read32(rtwdev, cr);
@@ -1068,9 +1095,10 @@ static inline int rtw89_mac_txpwr_write32(struct rtw89_dev *rtwdev,
enum rtw89_phy_idx phy_idx,
u32 reg_base, u32 val)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
u32 cr;
- if (!rtw89_mac_get_txpwr_cr(rtwdev, phy_idx, reg_base, &cr))
+ if (!mac->get_txpwr_cr(rtwdev, phy_idx, reg_base, &cr))
return -EINVAL;
rtw89_write32(rtwdev, cr, val);
@@ -1081,9 +1109,10 @@ static inline int rtw89_mac_txpwr_write32_mask(struct rtw89_dev *rtwdev,
enum rtw89_phy_idx phy_idx,
u32 reg_base, u32 mask, u32 val)
{
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
u32 cr;
- if (!rtw89_mac_get_txpwr_cr(rtwdev, phy_idx, reg_base, &cr))
+ if (!mac->get_txpwr_cr(rtwdev, phy_idx, reg_base, &cr))
return -EINVAL;
rtw89_write32_mask(rtwdev, cr, mask, val);
diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
index 5e48618706d9..31d1f7891675 100644
--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
+++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
@@ -145,6 +145,7 @@ static int rtw89_ops_add_interface(struct ieee80211_hw *hw,
rtwvif->mac_idx = RTW89_MAC_0;
rtwvif->phy_idx = RTW89_PHY_0;
rtwvif->sub_entity_idx = RTW89_SUB_ENTITY_0;
+ rtwvif->chanctx_assigned = false;
rtwvif->hit_rule = 0;
rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
ether_addr_copy(rtwvif->mac_addr, vif->addr);
@@ -327,11 +328,14 @@ static void ____rtw89_conf_tx_edca(struct rtw89_dev *rtwdev,
rtw89_fw_h2c_set_edca(rtwdev, rtwvif, ac_to_fw_idx[ac], val);
}
-static const u32 ac_to_mu_edca_param[IEEE80211_NUM_ACS] = {
- [IEEE80211_AC_VO] = R_AX_MUEDCA_VO_PARAM_0,
- [IEEE80211_AC_VI] = R_AX_MUEDCA_VI_PARAM_0,
- [IEEE80211_AC_BE] = R_AX_MUEDCA_BE_PARAM_0,
- [IEEE80211_AC_BK] = R_AX_MUEDCA_BK_PARAM_0,
+#define R_MUEDCA_ACS_PARAM(acs) {R_AX_MUEDCA_ ## acs ## _PARAM_0, \
+ R_BE_MUEDCA_ ## acs ## _PARAM_0}
+
+static const u32 ac_to_mu_edca_param[IEEE80211_NUM_ACS][RTW89_CHIP_GEN_NUM] = {
+ [IEEE80211_AC_VO] = R_MUEDCA_ACS_PARAM(VO),
+ [IEEE80211_AC_VI] = R_MUEDCA_ACS_PARAM(VI),
+ [IEEE80211_AC_BE] = R_MUEDCA_ACS_PARAM(BE),
+ [IEEE80211_AC_BK] = R_MUEDCA_ACS_PARAM(BK),
};
static void ____rtw89_conf_tx_mu_edca(struct rtw89_dev *rtwdev,
@@ -339,6 +343,7 @@ static void ____rtw89_conf_tx_mu_edca(struct rtw89_dev *rtwdev,
{
struct ieee80211_tx_queue_params *params = &rtwvif->tx_params[ac];
struct ieee80211_he_mu_edca_param_ac_rec *mu_edca;
+ int gen = rtwdev->chip->chip_gen;
u8 aifs, aifsn;
u16 timer_32us;
u32 reg;
@@ -355,7 +360,7 @@ static void ____rtw89_conf_tx_mu_edca(struct rtw89_dev *rtwdev,
val = FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_TIMER_MASK, timer_32us) |
FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_CW_MASK, mu_edca->ecw_min_max) |
FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_AIFS_MASK, aifs);
- reg = rtw89_mac_reg_by_idx(rtwdev, ac_to_mu_edca_param[ac], rtwvif->mac_idx);
+ reg = rtw89_mac_reg_by_idx(rtwdev, ac_to_mu_edca_param[ac][gen], rtwvif->mac_idx);
rtw89_write32(rtwdev, reg, val);
rtw89_mac_set_hw_muedca_ctrl(rtwdev, rtwvif, true);
@@ -445,7 +450,7 @@ static void rtw89_ops_bss_info_changed(struct ieee80211_hw *hw,
rtw89_mac_bf_set_gid_table(rtwdev, vif, conf);
if (changed & BSS_CHANGED_P2P_PS)
- rtw89_process_p2p_ps(rtwdev, vif);
+ rtw89_core_update_p2p_ps(rtwdev, vif);
if (changed & BSS_CHANGED_CQM)
rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true);
diff --git a/drivers/net/wireless/realtek/rtw89/mac_be.c b/drivers/net/wireless/realtek/rtw89/mac_be.c
index 9a63fb35e867..3278f241db6e 100644
--- a/drivers/net/wireless/realtek/rtw89/mac_be.c
+++ b/drivers/net/wireless/realtek/rtw89/mac_be.c
@@ -2,6 +2,8 @@
/* Copyright(c) 2019-2020 Realtek Corporation
*/
+#include "debug.h"
+#include "fw.h"
#include "mac.h"
#include "reg.h"
@@ -28,11 +30,406 @@ static const u32 rtw89_mac_mem_base_addrs_be[RTW89_MAC_MEM_NUM] = {
[RTW89_MAC_MEM_WD_PAGE] = WD_PAGE_BASE_ADDR_BE,
};
+static const struct rtw89_port_reg rtw89_port_base_be = {
+ .port_cfg = R_BE_PORT_CFG_P0,
+ .tbtt_prohib = R_BE_TBTT_PROHIB_P0,
+ .bcn_area = R_BE_BCN_AREA_P0,
+ .bcn_early = R_BE_BCNERLYINT_CFG_P0,
+ .tbtt_early = R_BE_TBTTERLYINT_CFG_P0,
+ .tbtt_agg = R_BE_TBTT_AGG_P0,
+ .bcn_space = R_BE_BCN_SPACE_CFG_P0,
+ .bcn_forcetx = R_BE_BCN_FORCETX_P0,
+ .bcn_err_cnt = R_BE_BCN_ERR_CNT_P0,
+ .bcn_err_flag = R_BE_BCN_ERR_FLAG_P0,
+ .dtim_ctrl = R_BE_DTIM_CTRL_P0,
+ .tbtt_shift = R_BE_TBTT_SHIFT_P0,
+ .bcn_cnt_tmr = R_BE_BCN_CNT_TMR_P0,
+ .tsftr_l = R_BE_TSFTR_LOW_P0,
+ .tsftr_h = R_BE_TSFTR_HIGH_P0,
+ .md_tsft = R_BE_WMTX_MOREDATA_TSFT_STMP_CTL,
+ .bss_color = R_BE_PTCL_BSS_COLOR_0,
+ .mbssid = R_BE_MBSSID_CTRL,
+ .mbssid_drop = R_BE_MBSSID_DROP_0,
+ .tsf_sync = R_BE_PORT_0_TSF_SYNC,
+ .hiq_win = {R_BE_P0MB_HGQ_WINDOW_CFG_0, R_BE_PORT_HGQ_WINDOW_CFG,
+ R_BE_PORT_HGQ_WINDOW_CFG + 1, R_BE_PORT_HGQ_WINDOW_CFG + 2,
+ R_BE_PORT_HGQ_WINDOW_CFG + 3},
+};
+
+static void rtw89_mac_disable_cpu_be(struct rtw89_dev *rtwdev)
+{
+ u32 val32;
+
+ clear_bit(RTW89_FLAG_FW_RDY, rtwdev->flags);
+
+ rtw89_write32_clr(rtwdev, R_BE_PLATFORM_ENABLE, B_BE_WCPU_EN);
+ rtw89_write32_set(rtwdev, R_BE_PLATFORM_ENABLE, B_BE_HOLD_AFTER_RESET);
+ rtw89_write32_set(rtwdev, R_BE_PLATFORM_ENABLE, B_BE_WCPU_EN);
+
+ val32 = rtw89_read32(rtwdev, R_BE_WCPU_FW_CTRL);
+ val32 &= B_BE_RUN_ENV_MASK;
+ rtw89_write32(rtwdev, R_BE_WCPU_FW_CTRL, val32);
+
+ rtw89_write32_set(rtwdev, R_BE_DCPU_PLATFORM_ENABLE, B_BE_DCPU_PLATFORM_EN);
+
+ rtw89_write32(rtwdev, R_BE_UDM0, 0);
+ rtw89_write32(rtwdev, R_BE_HALT_C2H, 0);
+ rtw89_write32(rtwdev, R_BE_UDM2, 0);
+}
+
+static void set_cpu_en(struct rtw89_dev *rtwdev, bool include_bb)
+{
+ u32 set = B_BE_WLANCPU_FWDL_EN;
+
+ if (include_bb)
+ set |= B_BE_BBMCU0_FWDL_EN;
+
+ rtw89_write32_set(rtwdev, R_BE_WCPU_FW_CTRL, set);
+}
+
+static int wcpu_on(struct rtw89_dev *rtwdev, u8 boot_reason, bool dlfw)
+{
+ u32 val32;
+ int ret;
+
+ rtw89_write32_set(rtwdev, R_BE_UDM0, B_BE_UDM0_DBG_MODE_CTRL);
+
+ val32 = rtw89_read32(rtwdev, R_BE_HALT_C2H);
+ if (val32) {
+ rtw89_warn(rtwdev, "[SER] AON L2 Debug register not empty before Boot.\n");
+ rtw89_warn(rtwdev, "[SER] %s: R_BE_HALT_C2H = 0x%x\n", __func__, val32);
+ }
+ val32 = rtw89_read32(rtwdev, R_BE_UDM1);
+ if (val32) {
+ rtw89_warn(rtwdev, "[SER] AON L2 Debug register not empty before Boot.\n");
+ rtw89_warn(rtwdev, "[SER] %s: R_BE_UDM1 = 0x%x\n", __func__, val32);
+ }
+ val32 = rtw89_read32(rtwdev, R_BE_UDM2);
+ if (val32) {
+ rtw89_warn(rtwdev, "[SER] AON L2 Debug register not empty before Boot.\n");
+ rtw89_warn(rtwdev, "[SER] %s: R_BE_UDM2 = 0x%x\n", __func__, val32);
+ }
+
+ rtw89_write32(rtwdev, R_BE_UDM1, 0);
+ rtw89_write32(rtwdev, R_BE_UDM2, 0);
+ rtw89_write32(rtwdev, R_BE_HALT_H2C, 0);
+ rtw89_write32(rtwdev, R_BE_HALT_C2H, 0);
+ rtw89_write32(rtwdev, R_BE_HALT_H2C_CTRL, 0);
+ rtw89_write32(rtwdev, R_BE_HALT_C2H_CTRL, 0);
+
+ rtw89_write32_set(rtwdev, R_BE_SYS_CLK_CTRL, B_BE_CPU_CLK_EN);
+ rtw89_write32_clr(rtwdev, R_BE_SYS_CFG5,
+ B_BE_WDT_WAKE_PCIE_EN | B_BE_WDT_WAKE_USB_EN);
+ rtw89_write32_clr(rtwdev, R_BE_WCPU_FW_CTRL,
+ B_BE_WDT_PLT_RST_EN | B_BE_WCPU_ROM_CUT_GET);
+
+ rtw89_write16_mask(rtwdev, R_BE_BOOT_REASON, B_BE_BOOT_REASON_MASK, boot_reason);
+ rtw89_write32_clr(rtwdev, R_BE_PLATFORM_ENABLE, B_BE_WCPU_EN);
+ rtw89_write32_clr(rtwdev, R_BE_PLATFORM_ENABLE, B_BE_HOLD_AFTER_RESET);
+ rtw89_write32_set(rtwdev, R_BE_PLATFORM_ENABLE, B_BE_WCPU_EN);
+
+ if (!dlfw) {
+ ret = rtw89_fw_check_rdy(rtwdev, RTW89_FWDL_CHECK_FREERTOS_DONE);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static int rtw89_mac_fwdl_enable_wcpu_be(struct rtw89_dev *rtwdev,
+ u8 boot_reason, bool dlfw,
+ bool include_bb)
+{
+ set_cpu_en(rtwdev, include_bb);
+
+ return wcpu_on(rtwdev, boot_reason, dlfw);
+}
+
+static const u8 fwdl_status_map[] = {
+ [0] = RTW89_FWDL_INITIAL_STATE,
+ [1] = RTW89_FWDL_FWDL_ONGOING,
+ [4] = RTW89_FWDL_CHECKSUM_FAIL,
+ [5] = RTW89_FWDL_SECURITY_FAIL,
+ [6] = RTW89_FWDL_SECURITY_FAIL,
+ [7] = RTW89_FWDL_CV_NOT_MATCH,
+ [8] = RTW89_FWDL_RSVD0,
+ [2] = RTW89_FWDL_WCPU_FWDL_RDY,
+ [3] = RTW89_FWDL_WCPU_FW_INIT_RDY,
+ [9] = RTW89_FWDL_RSVD0,
+};
+
+static u8 fwdl_get_status_be(struct rtw89_dev *rtwdev, enum rtw89_fwdl_check_type type)
+{
+ bool check_pass = false;
+ u32 val32;
+ u8 st;
+
+ val32 = rtw89_read32(rtwdev, R_BE_WCPU_FW_CTRL);
+
+ switch (type) {
+ case RTW89_FWDL_CHECK_WCPU_FWDL_DONE:
+ check_pass = !(val32 & B_BE_WLANCPU_FWDL_EN);
+ break;
+ case RTW89_FWDL_CHECK_DCPU_FWDL_DONE:
+ check_pass = !(val32 & B_BE_DATACPU_FWDL_EN);
+ break;
+ case RTW89_FWDL_CHECK_BB0_FWDL_DONE:
+ check_pass = !(val32 & B_BE_BBMCU0_FWDL_EN);
+ break;
+ case RTW89_FWDL_CHECK_BB1_FWDL_DONE:
+ check_pass = !(val32 & B_BE_BBMCU1_FWDL_EN);
+ break;
+ default:
+ break;
+ }
+
+ if (check_pass)
+ return RTW89_FWDL_WCPU_FW_INIT_RDY;
+
+ st = u32_get_bits(val32, B_BE_WCPU_FWDL_STATUS_MASK);
+ if (st < ARRAY_SIZE(fwdl_status_map))
+ return fwdl_status_map[st];
+
+ return st;
+}
+
+static int rtw89_fwdl_check_path_ready_be(struct rtw89_dev *rtwdev,
+ bool h2c_or_fwdl)
+{
+ u32 check = h2c_or_fwdl ? B_BE_H2C_PATH_RDY : B_BE_DLFW_PATH_RDY;
+ u32 val;
+
+ return read_poll_timeout_atomic(rtw89_read32, val, val & check,
+ 1, 1000000, false,
+ rtwdev, R_BE_WCPU_FW_CTRL);
+}
+
+static bool rtw89_mac_get_txpwr_cr_be(struct rtw89_dev *rtwdev,
+ enum rtw89_phy_idx phy_idx,
+ u32 reg_base, u32 *cr)
+{
+ const struct rtw89_dle_mem *dle_mem = rtwdev->chip->dle_mem;
+ enum rtw89_qta_mode mode = dle_mem->mode;
+ int ret;
+
+ ret = rtw89_mac_check_mac_en(rtwdev, (enum rtw89_mac_idx)phy_idx,
+ RTW89_CMAC_SEL);
+ if (ret) {
+ if (test_bit(RTW89_FLAG_SER_HANDLING, rtwdev->flags))
+ return false;
+
+ rtw89_err(rtwdev, "[TXPWR] check mac enable failed\n");
+ return false;
+ }
+
+ if (reg_base < R_BE_PWR_MODULE || reg_base > R_BE_CMAC_FUNC_EN_C1) {
+ rtw89_err(rtwdev, "[TXPWR] reg_base=0x%x exceed txpwr cr\n",
+ reg_base);
+ return false;
+ }
+
+ *cr = rtw89_mac_reg_by_idx(rtwdev, reg_base, phy_idx);
+
+ if (*cr >= CMAC1_START_ADDR_BE && *cr <= CMAC1_END_ADDR_BE) {
+ if (mode == RTW89_QTA_SCC) {
+ rtw89_err(rtwdev,
+ "[TXPWR] addr=0x%x but hw not enable\n",
+ *cr);
+ return false;
+ }
+ }
+
+ return true;
+}
+
+static int rtw89_mac_init_bfee_be(struct rtw89_dev *rtwdev, u8 mac_idx)
+{
+ u32 reg;
+ u32 val;
+ int ret;
+
+ ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
+ if (ret)
+ return ret;
+
+ rtw89_mac_bfee_ctrl(rtwdev, mac_idx, true);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_BE_BFMEE_BFPARAM_SEL |
+ B_BE_BFMEE_USE_NSTS |
+ B_BE_BFMEE_CSI_GID_SEL |
+ B_BE_BFMEE_CSI_FORCE_RETE_EN);
+ rtw89_write32_mask(rtwdev, reg, B_BE_BFMEE_CSI_RSC_MASK, CSI_RX_BW_CFG);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_CSIRPT_OPTION, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_BE_CSIPRT_VHTSU_AID_EN |
+ B_BE_CSIPRT_HESU_AID_EN |
+ B_BE_CSIPRT_EHTSU_AID_EN);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_RRSC, mac_idx);
+ rtw89_write32(rtwdev, reg, CSI_RRSC_BMAP_BE);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_1, mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_BE_BFMEE_BE_CSI_RRSC_BITMAP_MASK,
+ CSI_RRSC_BITMAP_CFG);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_RATE, mac_idx);
+ val = u32_encode_bits(CSI_INIT_RATE_HT, B_BE_BFMEE_HT_CSI_RATE_MASK) |
+ u32_encode_bits(CSI_INIT_RATE_VHT, B_BE_BFMEE_VHT_CSI_RATE_MASK) |
+ u32_encode_bits(CSI_INIT_RATE_HE, B_BE_BFMEE_HE_CSI_RATE_MASK) |
+ u32_encode_bits(CSI_INIT_RATE_EHT, B_BE_BFMEE_EHT_CSI_RATE_MASK);
+
+ rtw89_write32(rtwdev, reg, val);
+
+ return 0;
+}
+
+static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ u8 nc = 1, nr = 3, ng = 0, cb = 1, cs = 1, ldpc_en = 1, stbc_en = 1;
+ u8 mac_idx = rtwvif->mac_idx;
+ u8 port_sel = rtwvif->port;
+ u8 sound_dim = 3, t;
+ u8 *phy_cap;
+ u32 reg;
+ u16 val;
+ int ret;
+
+ ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
+ if (ret)
+ return ret;
+
+ phy_cap = sta->deflink.he_cap.he_cap_elem.phy_cap_info;
+
+ if ((phy_cap[3] & IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER) ||
+ (phy_cap[4] & IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER)) {
+ ldpc_en &= !!(phy_cap[1] & IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD);
+ stbc_en &= !!(phy_cap[2] & IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ);
+ t = u8_get_bits(phy_cap[5],
+ IEEE80211_HE_PHY_CAP5_BEAMFORMEE_NUM_SND_DIM_UNDER_80MHZ_MASK);
+ sound_dim = min(sound_dim, t);
+ }
+
+ if ((sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
+ (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) {
+ ldpc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC);
+ stbc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK);
+ t = u32_get_bits(sta->deflink.vht_cap.cap,
+ IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK);
+ sound_dim = min(sound_dim, t);
+ }
+
+ nc = min(nc, sound_dim);
+ nr = min(nr, sound_dim);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_BE_BFMEE_BFPARAM_SEL);
+
+ val = u16_encode_bits(nc, B_BE_BFMEE_CSIINFO0_NC_MASK) |
+ u16_encode_bits(nr, B_BE_BFMEE_CSIINFO0_NR_MASK) |
+ u16_encode_bits(ng, B_BE_BFMEE_CSIINFO0_NG_MASK) |
+ u16_encode_bits(cb, B_BE_BFMEE_CSIINFO0_CB_MASK) |
+ u16_encode_bits(cs, B_BE_BFMEE_CSIINFO0_CS_MASK) |
+ u16_encode_bits(ldpc_en, B_BE_BFMEE_CSIINFO0_LDPC_EN) |
+ u16_encode_bits(stbc_en, B_BE_BFMEE_CSIINFO0_STBC_EN);
+
+ if (port_sel == 0)
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_0,
+ mac_idx);
+ else
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_1,
+ mac_idx);
+
+ rtw89_write16(rtwdev, reg, val);
+
+ return 0;
+}
+
+static int rtw89_mac_csi_rrsc_be(struct rtw89_dev *rtwdev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ u32 rrsc = BIT(RTW89_MAC_BF_RRSC_6M) | BIT(RTW89_MAC_BF_RRSC_24M);
+ u8 mac_idx = rtwvif->mac_idx;
+ int ret;
+ u32 reg;
+
+ ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
+ if (ret)
+ return ret;
+
+ if (sta->deflink.he_cap.has_he) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_HE_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_HE_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_HE_MSC5));
+ }
+ if (sta->deflink.vht_cap.vht_supported) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_VHT_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_VHT_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_VHT_MSC5));
+ }
+ if (sta->deflink.ht_cap.ht_supported) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_HT_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_HT_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_HT_MSC5));
+ }
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_BE_BFMEE_BFPARAM_SEL);
+ rtw89_write32_clr(rtwdev, reg, B_BE_BFMEE_CSI_FORCE_RETE_EN);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_RRSC, mac_idx);
+ rtw89_write32(rtwdev, reg, rrsc);
+
+ return 0;
+}
+
+static void rtw89_mac_bf_assoc_be(struct rtw89_dev *rtwdev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta)
+{
+ struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+
+ if (rtw89_sta_has_beamformer_cap(sta)) {
+ rtw89_debug(rtwdev, RTW89_DBG_BF,
+ "initialize bfee for new association\n");
+ rtw89_mac_init_bfee_be(rtwdev, rtwvif->mac_idx);
+ rtw89_mac_set_csi_para_reg_be(rtwdev, vif, sta);
+ rtw89_mac_csi_rrsc_be(rtwdev, vif, sta);
+ }
+}
+
const struct rtw89_mac_gen_def rtw89_mac_gen_be = {
.band1_offset = RTW89_MAC_BE_BAND_REG_OFFSET,
.filter_model_addr = R_BE_FILTER_MODEL_ADDR,
.indir_access_addr = R_BE_INDIR_ACCESS_ENTRY,
.mem_base_addrs = rtw89_mac_mem_base_addrs_be,
.rx_fltr = R_BE_RX_FLTR_OPT,
+ .port_base = &rtw89_port_base_be,
+ .agg_len_ht = R_BE_AGG_LEN_HT_0,
+
+ .muedca_ctrl = {
+ .addr = R_BE_MUEDCA_EN,
+ .mask = B_BE_MUEDCA_EN_0 | B_BE_SET_MUEDCATIMER_TF_0,
+ },
+ .bfee_ctrl = {
+ .addr = R_BE_BFMEE_RESP_OPTION,
+ .mask = B_BE_BFMEE_HT_NDPA_EN | B_BE_BFMEE_VHT_NDPA_EN |
+ B_BE_BFMEE_HE_NDPA_EN | B_BE_BFMEE_EHT_NDPA_EN,
+ },
+
+ .bf_assoc = rtw89_mac_bf_assoc_be,
+
+ .disable_cpu = rtw89_mac_disable_cpu_be,
+ .fwdl_enable_wcpu = rtw89_mac_fwdl_enable_wcpu_be,
+ .fwdl_get_status = fwdl_get_status_be,
+ .fwdl_check_path_ready = rtw89_fwdl_check_path_ready_be,
+
+ .get_txpwr_cr = rtw89_mac_get_txpwr_cr_be,
};
EXPORT_SYMBOL(rtw89_mac_gen_be);
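
The rtw89_mac_gen_be table that closes this file follows the series' overall approach to WiFi 7 support: everything that differs between the AX and BE MAC generations is expressed as register/mask pairs and callbacks in a per-generation ops structure, and common code dispatches through it rather than branching on chip IDs. A minimal standalone sketch of that pattern (struct names, addresses and masks below are illustrative placeholders, not the driver's):

/* Illustrative sketch of per-generation dispatch through an ops table. */
#include <stdint.h>
#include <stdio.h>

struct reg_def {
	uint32_t addr;
	uint32_t mask;
};

struct mac_gen_def {
	struct reg_def bfee_ctrl;              /* generation-specific register/mask */
	void (*bf_assoc)(const char *sta);     /* generation-specific behaviour */
};

static void bf_assoc_ax(const char *sta) { printf("AX bfee setup for %s\n", sta); }
static void bf_assoc_be(const char *sta) { printf("BE bfee setup for %s\n", sta); }

static const struct mac_gen_def gen_ax = {
	.bfee_ctrl = { .addr = 0x1000, .mask = 0x0000000f },   /* placeholder values */
	.bf_assoc  = bf_assoc_ax,
};

static const struct mac_gen_def gen_be = {
	.bfee_ctrl = { .addr = 0x2000, .mask = 0x000000f0 },   /* placeholder values */
	.bf_assoc  = bf_assoc_be,
};

int main(void)
{
	const struct mac_gen_def *mac = &gen_be;   /* selected once per chip */

	printf("bfee ctrl: addr=0x%x mask=0x%x\n",
	       (unsigned)mac->bfee_ctrl.addr, (unsigned)mac->bfee_ctrl.mask);
	mac->bf_assoc("sta0");
	return 0;
}
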
diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
index 3a4bfc44142b..14ddb0d39e63 100644
--- a/drivers/net/wireless/realtek/rtw89/pci.c
+++ b/drivers/net/wireless/realtek/rtw89/pci.c
@@ -1196,7 +1196,6 @@ static int rtw89_pci_txwd_submit(struct rtw89_dev *rtwdev,
struct rtw89_pci *rtwpci = (struct rtw89_pci *)rtwdev->priv;
const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info;
- struct rtw89_txwd_info *txwd_info;
struct rtw89_pci_tx_wp_info *txwp_info;
void *txaddr_info_addr;
struct pci_dev *pdev = rtwpci->pdev;
@@ -1222,7 +1221,7 @@ static int rtw89_pci_txwd_submit(struct rtw89_dev *rtwdev,
txwp_len = sizeof(*txwp_info);
txwd_len = chip->txwd_body_size;
- txwd_len += en_wd_info ? sizeof(*txwd_info) : 0;
+ txwd_len += en_wd_info ? chip->txwd_info_size : 0;
txwp_info = txwd->vaddr + txwd_len;
txwp_info->seq0 = cpu_to_le16(txwd->seq | RTW89_PCI_TXWP_VALID);
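
The pci.c change above stops assuming every chip's TX WD info block is sizeof(struct rtw89_txwd_info) and takes the length from the chip descriptor instead, so WiFi 7 parts can declare a different layout. A rough standalone illustration of that sizing decision (the byte counts are invented for the example, not taken from any chip):

/* Rough sketch: per-chip TX descriptor sizing; byte counts are invented. */
#include <stdbool.h>
#include <stdio.h>

struct chip_info {
	unsigned int txwd_body_size;
	unsigned int txwd_info_size;
};

static unsigned int txwd_len(const struct chip_info *chip, bool en_wd_info)
{
	/* body is always present; the optional info block is chip dependent */
	return chip->txwd_body_size + (en_wd_info ? chip->txwd_info_size : 0);
}

int main(void)
{
	const struct chip_info ax_like = { .txwd_body_size = 24, .txwd_info_size = 24 };
	const struct chip_info be_like = { .txwd_body_size = 32, .txwd_info_size = 16 };

	printf("ax-like: %u bytes, be-like: %u bytes\n",
	       txwd_len(&ax_like, true), txwd_len(&be_like, true));
	return 0;
}
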
diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
index 7139146cb3fa..17ccc9efed28 100644
--- a/drivers/net/wireless/realtek/rtw89/phy.c
+++ b/drivers/net/wireless/realtek/rtw89/phy.c
@@ -88,6 +88,55 @@ static u64 get_he_ra_mask(struct ieee80211_sta *sta)
return get_mcs_ra_mask(mcs_map, 11, 2);
}
+static u64 get_eht_mcs_ra_mask(u8 *max_nss, u8 start_mcs, u8 n_nss)
+{
+ u64 nss_mcs_shift;
+ u64 nss_mcs_val;
+ u64 mask = 0;
+ int i, j;
+ u8 nss;
+
+ for (i = 0; i < n_nss; i++) {
+ nss = u8_get_bits(max_nss[i], IEEE80211_EHT_MCS_NSS_RX);
+ if (!nss)
+ continue;
+
+ nss_mcs_val = GENMASK_ULL(start_mcs + i * 2, 0);
+
+ for (j = 0, nss_mcs_shift = 12; j < nss; j++, nss_mcs_shift += 16)
+ mask |= nss_mcs_val << nss_mcs_shift;
+ }
+
+ return mask;
+}
+
+static u64 get_eht_ra_mask(struct ieee80211_sta *sta)
+{
+ struct ieee80211_sta_eht_cap *eht_cap = &sta->deflink.eht_cap;
+ struct ieee80211_eht_mcs_nss_supp_20mhz_only *mcs_nss_20mhz;
+ struct ieee80211_eht_mcs_nss_supp_bw *mcs_nss;
+
+ switch (sta->deflink.bandwidth) {
+ case IEEE80211_STA_RX_BW_320:
+ mcs_nss = &eht_cap->eht_mcs_nss_supp.bw._320;
+ /* MCS 9, 11, 13 */
+ return get_eht_mcs_ra_mask(mcs_nss->rx_tx_max_nss, 9, 3);
+ case IEEE80211_STA_RX_BW_160:
+ mcs_nss = &eht_cap->eht_mcs_nss_supp.bw._160;
+ /* MCS 9, 11, 13 */
+ return get_eht_mcs_ra_mask(mcs_nss->rx_tx_max_nss, 9, 3);
+ case IEEE80211_STA_RX_BW_80:
+ default:
+ mcs_nss = &eht_cap->eht_mcs_nss_supp.bw._80;
+ /* MCS 9, 11, 13 */
+ return get_eht_mcs_ra_mask(mcs_nss->rx_tx_max_nss, 9, 3);
+ case IEEE80211_STA_RX_BW_20:
+ mcs_nss_20mhz = &eht_cap->eht_mcs_nss_supp.only_20mhz;
+ /* MCS 7, 9, 11, 13 */
+ return get_eht_mcs_ra_mask(mcs_nss_20mhz->rx_tx_max_nss, 7, 4);
+ }
+}
+
#define RA_FLOOR_TABLE_SIZE 7
#define RA_FLOOR_UP_GAP 3
static u64 rtw89_phy_ra_mask_rssi(struct rtw89_dev *rtwdev, u8 rssi,
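
get_eht_mcs_ra_mask() in the hunk above packs the station's EHT MCS capabilities into the same 64-bit rate-mask layout the driver already uses for HE: the 1SS block starts at bit 12 and each further spatial stream takes the next 16 bits, while each successive cap byte widens the allowed MCS range (9, 11, 13, or 7/9/11/13 for the 20 MHz-only case). A self-contained sketch of that packing, with a local stand-in for GENMASK_ULL, the RX-NSS field modelled as the low nibble, and example cap bytes not taken from any real station:

/* Standalone sketch of the EHT rate-mask packing; helpers and values are stand-ins. */
#include <stdint.h>
#include <stdio.h>

static uint64_t genmask_ull(unsigned int h, unsigned int l)
{
	return (~0ULL >> (63 - h)) & (~0ULL << l);
}

static uint64_t eht_mcs_ra_mask(const uint8_t *max_nss, uint8_t start_mcs, uint8_t n_nss)
{
	uint64_t mask = 0;

	for (unsigned int i = 0; i < n_nss; i++) {
		uint8_t nss = max_nss[i] & 0x0f;   /* RX NSS, modelled as the low nibble */
		if (!nss)
			continue;

		/* MCS 0..(start_mcs + 2*i) allowed on the first 'nss' streams */
		uint64_t per_nss = genmask_ull(start_mcs + i * 2, 0);

		for (unsigned int j = 0, shift = 12; j < nss; j++, shift += 16)
			mask |= per_nss << shift;
	}

	return mask;
}

int main(void)
{
	/* example: two RX streams supported at every cap level, i.e. up to MCS 13 */
	const uint8_t caps[3] = { 0x22, 0x22, 0x22 };

	printf("ra mask: 0x%016llx\n",
	       (unsigned long long)eht_mcs_ra_mask(caps, 9, 3));
	return 0;
}
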
@@ -194,6 +243,9 @@ rtw89_ra_mask_vht_rates[4] = {RA_MASK_VHT_1SS_RATES, RA_MASK_VHT_2SS_RATES,
static const u64
rtw89_ra_mask_he_rates[4] = {RA_MASK_HE_1SS_RATES, RA_MASK_HE_2SS_RATES,
RA_MASK_HE_3SS_RATES, RA_MASK_HE_4SS_RATES};
+static const u64
+rtw89_ra_mask_eht_rates[4] = {RA_MASK_EHT_1SS_RATES, RA_MASK_EHT_2SS_RATES,
+ RA_MASK_EHT_3SS_RATES, RA_MASK_EHT_4SS_RATES};
static void rtw89_phy_ra_gi_ltf(struct rtw89_dev *rtwdev,
struct rtw89_sta *rtwsta,
@@ -255,7 +307,11 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
memset(ra, 0, sizeof(*ra));
/* Set the ra mask from sta's capability */
- if (sta->deflink.he_cap.has_he) {
+ if (sta->deflink.eht_cap.has_eht) {
+ mode |= RTW89_RA_MODE_EHT;
+ ra_mask |= get_eht_ra_mask(sta);
+ high_rate_masks = rtw89_ra_mask_eht_rates;
+ } else if (sta->deflink.he_cap.has_he) {
mode |= RTW89_RA_MODE_HE;
csi_mode = RTW89_RA_RPT_MODE_HE;
ra_mask |= get_he_ra_mask(sta);
@@ -1519,15 +1575,15 @@ void rtw89_phy_write_reg3_tbl(struct rtw89_dev *rtwdev,
}
EXPORT_SYMBOL(rtw89_phy_write_reg3_tbl);
-static const u8 rtw89_rs_idx_num[] = {
+static const u8 rtw89_rs_idx_num_ax[] = {
[RTW89_RS_CCK] = RTW89_RATE_CCK_NUM,
[RTW89_RS_OFDM] = RTW89_RATE_OFDM_NUM,
- [RTW89_RS_MCS] = RTW89_RATE_MCS_NUM,
+ [RTW89_RS_MCS] = RTW89_RATE_MCS_NUM_AX,
[RTW89_RS_HEDCM] = RTW89_RATE_HEDCM_NUM,
- [RTW89_RS_OFFSET] = RTW89_RATE_OFFSET_NUM,
+ [RTW89_RS_OFFSET] = RTW89_RATE_OFFSET_NUM_AX,
};
-static const u8 rtw89_rs_nss_num[] = {
+static const u8 rtw89_rs_nss_num_ax[] = {
[RTW89_RS_CCK] = 1,
[RTW89_RS_OFDM] = 1,
[RTW89_RS_MCS] = RTW89_NSS_NUM,
@@ -1535,68 +1591,73 @@ static const u8 rtw89_rs_nss_num[] = {
[RTW89_RS_OFFSET] = 1,
};
-static const u8 _byr_of_rs[] = {
- [RTW89_RS_CCK] = offsetof(struct rtw89_txpwr_byrate, cck),
- [RTW89_RS_OFDM] = offsetof(struct rtw89_txpwr_byrate, ofdm),
- [RTW89_RS_MCS] = offsetof(struct rtw89_txpwr_byrate, mcs),
- [RTW89_RS_HEDCM] = offsetof(struct rtw89_txpwr_byrate, hedcm),
- [RTW89_RS_OFFSET] = offsetof(struct rtw89_txpwr_byrate, offset),
-};
-
-#define _byr_seek(rs, raw) ((s8 *)(raw) + _byr_of_rs[rs])
-#define _byr_idx(rs, nss, idx) ((nss) * rtw89_rs_idx_num[rs] + (idx))
-#define _byr_chk(rs, nss, idx) \
- ((nss) < rtw89_rs_nss_num[rs] && (idx) < rtw89_rs_idx_num[rs])
+s8 *rtw89_phy_raw_byr_seek(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_byrate *head,
+ const struct rtw89_rate_desc *desc)
+{
+ switch (desc->rs) {
+ case RTW89_RS_CCK:
+ return &head->cck[desc->idx];
+ case RTW89_RS_OFDM:
+ return &head->ofdm[desc->idx];
+ case RTW89_RS_MCS:
+ return &head->mcs[desc->ofdma][desc->nss][desc->idx];
+ case RTW89_RS_HEDCM:
+ return &head->hedcm[desc->ofdma][desc->nss][desc->idx];
+ case RTW89_RS_OFFSET:
+ return &head->offset[desc->idx];
+ default:
+ rtw89_warn(rtwdev, "unrecognized byr rs: %d\n", desc->rs);
+ return &head->trap;
+ }
+}
void rtw89_phy_load_txpwr_byrate(struct rtw89_dev *rtwdev,
const struct rtw89_txpwr_table *tbl)
{
const struct rtw89_txpwr_byrate_cfg *cfg = tbl->data;
const struct rtw89_txpwr_byrate_cfg *end = cfg + tbl->size;
+ struct rtw89_txpwr_byrate *byr_head;
+ struct rtw89_rate_desc desc = {};
s8 *byr;
u32 data;
- u8 i, idx;
+ u8 i;
for (; cfg < end; cfg++) {
- byr = _byr_seek(cfg->rs, &rtwdev->byr[cfg->band]);
+ byr_head = &rtwdev->byr[cfg->band][0];
+ desc.rs = cfg->rs;
+ desc.nss = cfg->nss;
data = cfg->data;
for (i = 0; i < cfg->len; i++, data >>= 8) {
- idx = _byr_idx(cfg->rs, cfg->nss, (cfg->shf + i));
- byr[idx] = (s8)(data & 0xff);
+ desc.idx = cfg->shf + i;
+ byr = rtw89_phy_raw_byr_seek(rtwdev, byr_head, &desc);
+ *byr = data & 0xff;
}
}
}
EXPORT_SYMBOL(rtw89_phy_load_txpwr_byrate);
-#define _phy_txpwr_rf_to_mac(rtwdev, txpwr_rf) \
-({ \
- const struct rtw89_chip_info *__c = (rtwdev)->chip; \
- (txpwr_rf) >> (__c->txpwr_factor_rf - __c->txpwr_factor_mac); \
-})
+static s8 rtw89_phy_txpwr_rf_to_mac(struct rtw89_dev *rtwdev, s8 txpwr_rf)
+{
+ const struct rtw89_chip_info *chip = rtwdev->chip;
-static
-s8 rtw89_phy_read_txpwr_byrate(struct rtw89_dev *rtwdev, u8 band,
+ return txpwr_rf >> (chip->txpwr_factor_rf - chip->txpwr_factor_mac);
+}
+
+s8 rtw89_phy_read_txpwr_byrate(struct rtw89_dev *rtwdev, u8 band, u8 bw,
const struct rtw89_rate_desc *rate_desc)
{
+ struct rtw89_txpwr_byrate *byr_head;
s8 *byr;
- u8 idx;
if (rate_desc->rs == RTW89_RS_CCK)
band = RTW89_BAND_2G;
- if (!_byr_chk(rate_desc->rs, rate_desc->nss, rate_desc->idx)) {
- rtw89_debug(rtwdev, RTW89_DBG_TXPWR,
- "[TXPWR] unknown byrate desc rs=%d nss=%d idx=%d\n",
- rate_desc->rs, rate_desc->nss, rate_desc->idx);
-
- return 0;
- }
-
- byr = _byr_seek(rate_desc->rs, &rtwdev->byr[band]);
- idx = _byr_idx(rate_desc->rs, rate_desc->nss, rate_desc->idx);
+ byr_head = &rtwdev->byr[band][bw];
+ byr = rtw89_phy_raw_byr_seek(rtwdev, byr_head, rate_desc);
- return _phy_txpwr_rf_to_mac(rtwdev, byr[idx]);
+ return rtw89_phy_txpwr_rf_to_mac(rtwdev, *byr);
}
static u8 rtw89_channel_6g_to_idx(struct rtw89_dev *rtwdev, u8 channel_6g)
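
rtw89_phy_raw_byr_seek() above replaces the old offsetof()-based _byr_seek()/_byr_idx() macros with a switch that returns a pointer directly into the by-rate table, handing back a dedicated trap cell when the rate section is unrecognized so callers always get a valid pointer. A small standalone sketch of that lookup-with-fallback pattern (the table shape and names are simplified stand-ins for the driver's):

/* Sketch of a table lookup that falls back to a 'trap' cell on bad input. */
#include <stdio.h>

enum rate_section { RS_CCK, RS_OFDM, RS_MCS };

struct byrate_table {
	signed char cck[4];
	signed char ofdm[8];
	signed char mcs[2][12];   /* [nss][mcs index] */
	signed char trap;         /* shared fallback cell */
};

static signed char *byr_seek(struct byrate_table *t, int rs, int nss, int idx)
{
	switch (rs) {
	case RS_CCK:
		return &t->cck[idx];
	case RS_OFDM:
		return &t->ofdm[idx];
	case RS_MCS:
		return &t->mcs[nss][idx];
	default:
		fprintf(stderr, "unrecognized rate section %d\n", rs);
		return &t->trap;   /* caller still gets a writable cell */
	}
}

int main(void)
{
	struct byrate_table tbl = { { 0 } };

	*byr_seek(&tbl, RS_MCS, 1, 7) = 14;   /* normal write */
	*byr_seek(&tbl, 99, 0, 0) = 5;        /* lands in the trap cell */

	printf("mcs[1][7]=%d trap=%d\n", tbl.mcs[1][7], tbl.trap);
	return 0;
}
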
@@ -1688,7 +1749,7 @@ s8 rtw89_phy_read_txpwr_limit(struct rtw89_dev *rtwdev, u8 band,
return 0;
}
- lmt = _phy_txpwr_rf_to_mac(rtwdev, lmt);
+ lmt = rtw89_phy_txpwr_rf_to_mac(rtwdev, lmt);
sar = rtw89_query_sar(rtwdev, freq);
return min(lmt, sar);
@@ -1706,9 +1767,9 @@ EXPORT_SYMBOL(rtw89_phy_read_txpwr_limit);
(ch)); \
} while (0)
-static void rtw89_phy_fill_txpwr_limit_20m(struct rtw89_dev *rtwdev,
- struct rtw89_txpwr_limit *lmt,
- u8 band, u8 ntx, u8 ch)
+static void rtw89_phy_fill_txpwr_limit_20m_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ax *lmt,
+ u8 band, u8 ntx, u8 ch)
{
__fill_txpwr_limit_nonbf_bf(lmt->cck_20m, band, RTW89_CHANNEL_WIDTH_20,
ntx, RTW89_RS_CCK, ch);
@@ -1721,9 +1782,9 @@ static void rtw89_phy_fill_txpwr_limit_20m(struct rtw89_dev *rtwdev,
ntx, RTW89_RS_MCS, ch);
}
-static void rtw89_phy_fill_txpwr_limit_40m(struct rtw89_dev *rtwdev,
- struct rtw89_txpwr_limit *lmt,
- u8 band, u8 ntx, u8 ch, u8 pri_ch)
+static void rtw89_phy_fill_txpwr_limit_40m_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ax *lmt,
+ u8 band, u8 ntx, u8 ch, u8 pri_ch)
{
__fill_txpwr_limit_nonbf_bf(lmt->cck_20m, band, RTW89_CHANNEL_WIDTH_20,
ntx, RTW89_RS_CCK, ch - 2);
@@ -1742,9 +1803,9 @@ static void rtw89_phy_fill_txpwr_limit_40m(struct rtw89_dev *rtwdev,
ntx, RTW89_RS_MCS, ch);
}
-static void rtw89_phy_fill_txpwr_limit_80m(struct rtw89_dev *rtwdev,
- struct rtw89_txpwr_limit *lmt,
- u8 band, u8 ntx, u8 ch, u8 pri_ch)
+static void rtw89_phy_fill_txpwr_limit_80m_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ax *lmt,
+ u8 band, u8 ntx, u8 ch, u8 pri_ch)
{
s8 val_0p5_n[RTW89_BF_NUM];
s8 val_0p5_p[RTW89_BF_NUM];
@@ -1783,9 +1844,9 @@ static void rtw89_phy_fill_txpwr_limit_80m(struct rtw89_dev *rtwdev,
lmt->mcs_40m_0p5[i] = min_t(s8, val_0p5_n[i], val_0p5_p[i]);
}
-static void rtw89_phy_fill_txpwr_limit_160m(struct rtw89_dev *rtwdev,
- struct rtw89_txpwr_limit *lmt,
- u8 band, u8 ntx, u8 ch, u8 pri_ch)
+static void rtw89_phy_fill_txpwr_limit_160m_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ax *lmt,
+ u8 band, u8 ntx, u8 ch, u8 pri_ch)
{
s8 val_0p5_n[RTW89_BF_NUM];
s8 val_0p5_p[RTW89_BF_NUM];
@@ -1870,10 +1931,10 @@ static void rtw89_phy_fill_txpwr_limit_160m(struct rtw89_dev *rtwdev,
}
static
-void rtw89_phy_fill_txpwr_limit(struct rtw89_dev *rtwdev,
- const struct rtw89_chan *chan,
- struct rtw89_txpwr_limit *lmt,
- u8 ntx)
+void rtw89_phy_fill_txpwr_limit_ax(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ struct rtw89_txpwr_limit_ax *lmt,
+ u8 ntx)
{
u8 band = chan->band_type;
u8 pri_ch = chan->primary_channel;
@@ -1884,25 +1945,25 @@ void rtw89_phy_fill_txpwr_limit(struct rtw89_dev *rtwdev,
switch (bw) {
case RTW89_CHANNEL_WIDTH_20:
- rtw89_phy_fill_txpwr_limit_20m(rtwdev, lmt, band, ntx, ch);
+ rtw89_phy_fill_txpwr_limit_20m_ax(rtwdev, lmt, band, ntx, ch);
break;
case RTW89_CHANNEL_WIDTH_40:
- rtw89_phy_fill_txpwr_limit_40m(rtwdev, lmt, band, ntx, ch,
- pri_ch);
+ rtw89_phy_fill_txpwr_limit_40m_ax(rtwdev, lmt, band, ntx, ch,
+ pri_ch);
break;
case RTW89_CHANNEL_WIDTH_80:
- rtw89_phy_fill_txpwr_limit_80m(rtwdev, lmt, band, ntx, ch,
- pri_ch);
+ rtw89_phy_fill_txpwr_limit_80m_ax(rtwdev, lmt, band, ntx, ch,
+ pri_ch);
break;
case RTW89_CHANNEL_WIDTH_160:
- rtw89_phy_fill_txpwr_limit_160m(rtwdev, lmt, band, ntx, ch,
- pri_ch);
+ rtw89_phy_fill_txpwr_limit_160m_ax(rtwdev, lmt, band, ntx, ch,
+ pri_ch);
break;
}
}
-static s8 rtw89_phy_read_txpwr_limit_ru(struct rtw89_dev *rtwdev, u8 band,
- u8 ru, u8 ntx, u8 ch)
+s8 rtw89_phy_read_txpwr_limit_ru(struct rtw89_dev *rtwdev, u8 band,
+ u8 ru, u8 ntx, u8 ch)
{
const struct rtw89_rfe_parms *rfe_parms = rtwdev->rfe_parms;
const struct rtw89_txpwr_rule_2ghz *rule_2ghz = &rfe_parms->rule_2ghz;
@@ -1945,16 +2006,16 @@ static s8 rtw89_phy_read_txpwr_limit_ru(struct rtw89_dev *rtwdev, u8 band,
return 0;
}
- lmt_ru = _phy_txpwr_rf_to_mac(rtwdev, lmt_ru);
+ lmt_ru = rtw89_phy_txpwr_rf_to_mac(rtwdev, lmt_ru);
sar = rtw89_query_sar(rtwdev, freq);
return min(lmt_ru, sar);
}
static void
-rtw89_phy_fill_txpwr_limit_ru_20m(struct rtw89_dev *rtwdev,
- struct rtw89_txpwr_limit_ru *lmt_ru,
- u8 band, u8 ntx, u8 ch)
+rtw89_phy_fill_txpwr_limit_ru_20m_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ru_ax *lmt_ru,
+ u8 band, u8 ntx, u8 ch)
{
lmt_ru->ru26[0] = rtw89_phy_read_txpwr_limit_ru(rtwdev, band,
RTW89_RU26,
@@ -1968,9 +2029,9 @@ rtw89_phy_fill_txpwr_limit_ru_20m(struct rtw89_dev *rtwdev,
}
static void
-rtw89_phy_fill_txpwr_limit_ru_40m(struct rtw89_dev *rtwdev,
- struct rtw89_txpwr_limit_ru *lmt_ru,
- u8 band, u8 ntx, u8 ch)
+rtw89_phy_fill_txpwr_limit_ru_40m_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ru_ax *lmt_ru,
+ u8 band, u8 ntx, u8 ch)
{
lmt_ru->ru26[0] = rtw89_phy_read_txpwr_limit_ru(rtwdev, band,
RTW89_RU26,
@@ -1993,9 +2054,9 @@ rtw89_phy_fill_txpwr_limit_ru_40m(struct rtw89_dev *rtwdev,
}
static void
-rtw89_phy_fill_txpwr_limit_ru_80m(struct rtw89_dev *rtwdev,
- struct rtw89_txpwr_limit_ru *lmt_ru,
- u8 band, u8 ntx, u8 ch)
+rtw89_phy_fill_txpwr_limit_ru_80m_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ru_ax *lmt_ru,
+ u8 band, u8 ntx, u8 ch)
{
lmt_ru->ru26[0] = rtw89_phy_read_txpwr_limit_ru(rtwdev, band,
RTW89_RU26,
@@ -2036,15 +2097,15 @@ rtw89_phy_fill_txpwr_limit_ru_80m(struct rtw89_dev *rtwdev,
}
static void
-rtw89_phy_fill_txpwr_limit_ru_160m(struct rtw89_dev *rtwdev,
- struct rtw89_txpwr_limit_ru *lmt_ru,
- u8 band, u8 ntx, u8 ch)
+rtw89_phy_fill_txpwr_limit_ru_160m_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ru_ax *lmt_ru,
+ u8 band, u8 ntx, u8 ch)
{
static const int ofst[] = { -14, -10, -6, -2, 2, 6, 10, 14 };
int i;
- static_assert(ARRAY_SIZE(ofst) == RTW89_RU_SEC_NUM);
- for (i = 0; i < RTW89_RU_SEC_NUM; i++) {
+ static_assert(ARRAY_SIZE(ofst) == RTW89_RU_SEC_NUM_AX);
+ for (i = 0; i < RTW89_RU_SEC_NUM_AX; i++) {
lmt_ru->ru26[i] = rtw89_phy_read_txpwr_limit_ru(rtwdev, band,
RTW89_RU26,
ntx,
@@ -2061,10 +2122,10 @@ rtw89_phy_fill_txpwr_limit_ru_160m(struct rtw89_dev *rtwdev,
}
static
-void rtw89_phy_fill_txpwr_limit_ru(struct rtw89_dev *rtwdev,
- const struct rtw89_chan *chan,
- struct rtw89_txpwr_limit_ru *lmt_ru,
- u8 ntx)
+void rtw89_phy_fill_txpwr_limit_ru_ax(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ struct rtw89_txpwr_limit_ru_ax *lmt_ru,
+ u8 ntx)
{
u8 band = chan->band_type;
u8 ch = chan->channel;
@@ -2074,27 +2135,27 @@ void rtw89_phy_fill_txpwr_limit_ru(struct rtw89_dev *rtwdev,
switch (bw) {
case RTW89_CHANNEL_WIDTH_20:
- rtw89_phy_fill_txpwr_limit_ru_20m(rtwdev, lmt_ru, band, ntx,
- ch);
+ rtw89_phy_fill_txpwr_limit_ru_20m_ax(rtwdev, lmt_ru, band, ntx,
+ ch);
break;
case RTW89_CHANNEL_WIDTH_40:
- rtw89_phy_fill_txpwr_limit_ru_40m(rtwdev, lmt_ru, band, ntx,
- ch);
+ rtw89_phy_fill_txpwr_limit_ru_40m_ax(rtwdev, lmt_ru, band, ntx,
+ ch);
break;
case RTW89_CHANNEL_WIDTH_80:
- rtw89_phy_fill_txpwr_limit_ru_80m(rtwdev, lmt_ru, band, ntx,
- ch);
+ rtw89_phy_fill_txpwr_limit_ru_80m_ax(rtwdev, lmt_ru, band, ntx,
+ ch);
break;
case RTW89_CHANNEL_WIDTH_160:
- rtw89_phy_fill_txpwr_limit_ru_160m(rtwdev, lmt_ru, band, ntx,
- ch);
+ rtw89_phy_fill_txpwr_limit_ru_160m_ax(rtwdev, lmt_ru, band, ntx,
+ ch);
break;
}
}
-void rtw89_phy_set_txpwr_byrate(struct rtw89_dev *rtwdev,
- const struct rtw89_chan *chan,
- enum rtw89_phy_idx phy_idx)
+static void rtw89_phy_set_txpwr_byrate_ax(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx)
{
u8 max_nss_num = rtwdev->chip->rf_path_num;
static const u8 rs[] = {
@@ -2103,7 +2164,7 @@ void rtw89_phy_set_txpwr_byrate(struct rtw89_dev *rtwdev,
RTW89_RS_MCS,
RTW89_RS_HEDCM,
};
- struct rtw89_rate_desc cur;
+ struct rtw89_rate_desc cur = {};
u8 band = chan->band_type;
u8 ch = chan->channel;
u32 addr, val;
@@ -2113,23 +2174,23 @@ void rtw89_phy_set_txpwr_byrate(struct rtw89_dev *rtwdev,
rtw89_debug(rtwdev, RTW89_DBG_TXPWR,
"[TXPWR] set txpwr byrate with ch=%d\n", ch);
- BUILD_BUG_ON(rtw89_rs_idx_num[RTW89_RS_CCK] % 4);
- BUILD_BUG_ON(rtw89_rs_idx_num[RTW89_RS_OFDM] % 4);
- BUILD_BUG_ON(rtw89_rs_idx_num[RTW89_RS_MCS] % 4);
- BUILD_BUG_ON(rtw89_rs_idx_num[RTW89_RS_HEDCM] % 4);
+ BUILD_BUG_ON(rtw89_rs_idx_num_ax[RTW89_RS_CCK] % 4);
+ BUILD_BUG_ON(rtw89_rs_idx_num_ax[RTW89_RS_OFDM] % 4);
+ BUILD_BUG_ON(rtw89_rs_idx_num_ax[RTW89_RS_MCS] % 4);
+ BUILD_BUG_ON(rtw89_rs_idx_num_ax[RTW89_RS_HEDCM] % 4);
addr = R_AX_PWR_BY_RATE;
for (cur.nss = 0; cur.nss < max_nss_num; cur.nss++) {
for (i = 0; i < ARRAY_SIZE(rs); i++) {
- if (cur.nss >= rtw89_rs_nss_num[rs[i]])
+ if (cur.nss >= rtw89_rs_nss_num_ax[rs[i]])
continue;
cur.rs = rs[i];
- for (cur.idx = 0; cur.idx < rtw89_rs_idx_num[rs[i]];
+ for (cur.idx = 0; cur.idx < rtw89_rs_idx_num_ax[rs[i]];
cur.idx++) {
v[cur.idx % 4] =
rtw89_phy_read_txpwr_byrate(rtwdev,
- band,
+ band, 0,
&cur);
if ((cur.idx + 1) % 4)
@@ -2147,26 +2208,26 @@ void rtw89_phy_set_txpwr_byrate(struct rtw89_dev *rtwdev,
}
}
}
-EXPORT_SYMBOL(rtw89_phy_set_txpwr_byrate);
-void rtw89_phy_set_txpwr_offset(struct rtw89_dev *rtwdev,
- const struct rtw89_chan *chan,
- enum rtw89_phy_idx phy_idx)
+static
+void rtw89_phy_set_txpwr_offset_ax(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx)
{
struct rtw89_rate_desc desc = {
.nss = RTW89_NSS_1,
.rs = RTW89_RS_OFFSET,
};
u8 band = chan->band_type;
- s8 v[RTW89_RATE_OFFSET_NUM] = {};
+ s8 v[RTW89_RATE_OFFSET_NUM_AX] = {};
u32 val;
rtw89_debug(rtwdev, RTW89_DBG_TXPWR, "[TXPWR] set txpwr offset\n");
- for (desc.idx = 0; desc.idx < RTW89_RATE_OFFSET_NUM; desc.idx++)
- v[desc.idx] = rtw89_phy_read_txpwr_byrate(rtwdev, band, &desc);
+ for (desc.idx = 0; desc.idx < RTW89_RATE_OFFSET_NUM_AX; desc.idx++)
+ v[desc.idx] = rtw89_phy_read_txpwr_byrate(rtwdev, band, 0, &desc);
- BUILD_BUG_ON(RTW89_RATE_OFFSET_NUM != 5);
+ BUILD_BUG_ON(RTW89_RATE_OFFSET_NUM_AX != 5);
val = FIELD_PREP(GENMASK(3, 0), v[0]) |
FIELD_PREP(GENMASK(7, 4), v[1]) |
FIELD_PREP(GENMASK(11, 8), v[2]) |
@@ -2176,14 +2237,13 @@ void rtw89_phy_set_txpwr_offset(struct rtw89_dev *rtwdev,
rtw89_mac_txpwr_write32_mask(rtwdev, phy_idx, R_AX_PWR_RATE_OFST_CTRL,
GENMASK(19, 0), val);
}
-EXPORT_SYMBOL(rtw89_phy_set_txpwr_offset);
-void rtw89_phy_set_txpwr_limit(struct rtw89_dev *rtwdev,
- const struct rtw89_chan *chan,
- enum rtw89_phy_idx phy_idx)
+static void rtw89_phy_set_txpwr_limit_ax(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx)
{
u8 max_ntx_num = rtwdev->chip->rf_path_num;
- struct rtw89_txpwr_limit lmt;
+ struct rtw89_txpwr_limit_ax lmt;
u8 ch = chan->channel;
u8 bw = chan->band_width;
const s8 *ptr;
@@ -2193,15 +2253,15 @@ void rtw89_phy_set_txpwr_limit(struct rtw89_dev *rtwdev,
rtw89_debug(rtwdev, RTW89_DBG_TXPWR,
"[TXPWR] set txpwr limit with ch=%d bw=%d\n", ch, bw);
- BUILD_BUG_ON(sizeof(struct rtw89_txpwr_limit) !=
- RTW89_TXPWR_LMT_PAGE_SIZE);
+ BUILD_BUG_ON(sizeof(struct rtw89_txpwr_limit_ax) !=
+ RTW89_TXPWR_LMT_PAGE_SIZE_AX);
addr = R_AX_PWR_LMT;
for (i = 0; i < max_ntx_num; i++) {
- rtw89_phy_fill_txpwr_limit(rtwdev, chan, &lmt, i);
+ rtw89_phy_fill_txpwr_limit_ax(rtwdev, chan, &lmt, i);
ptr = (s8 *)&lmt;
- for (j = 0; j < RTW89_TXPWR_LMT_PAGE_SIZE;
+ for (j = 0; j < RTW89_TXPWR_LMT_PAGE_SIZE_AX;
j += 4, addr += 4, ptr += 4) {
val = FIELD_PREP(GENMASK(7, 0), ptr[0]) |
FIELD_PREP(GENMASK(15, 8), ptr[1]) |
@@ -2212,14 +2272,13 @@ void rtw89_phy_set_txpwr_limit(struct rtw89_dev *rtwdev,
}
}
}
-EXPORT_SYMBOL(rtw89_phy_set_txpwr_limit);
-void rtw89_phy_set_txpwr_limit_ru(struct rtw89_dev *rtwdev,
- const struct rtw89_chan *chan,
- enum rtw89_phy_idx phy_idx)
+static void rtw89_phy_set_txpwr_limit_ru_ax(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx)
{
u8 max_ntx_num = rtwdev->chip->rf_path_num;
- struct rtw89_txpwr_limit_ru lmt_ru;
+ struct rtw89_txpwr_limit_ru_ax lmt_ru;
u8 ch = chan->channel;
u8 bw = chan->band_width;
const s8 *ptr;
@@ -2229,15 +2288,15 @@ void rtw89_phy_set_txpwr_limit_ru(struct rtw89_dev *rtwdev,
rtw89_debug(rtwdev, RTW89_DBG_TXPWR,
"[TXPWR] set txpwr limit ru with ch=%d bw=%d\n", ch, bw);
- BUILD_BUG_ON(sizeof(struct rtw89_txpwr_limit_ru) !=
- RTW89_TXPWR_LMT_RU_PAGE_SIZE);
+ BUILD_BUG_ON(sizeof(struct rtw89_txpwr_limit_ru_ax) !=
+ RTW89_TXPWR_LMT_RU_PAGE_SIZE_AX);
addr = R_AX_PWR_RU_LMT;
for (i = 0; i < max_ntx_num; i++) {
- rtw89_phy_fill_txpwr_limit_ru(rtwdev, chan, &lmt_ru, i);
+ rtw89_phy_fill_txpwr_limit_ru_ax(rtwdev, chan, &lmt_ru, i);
ptr = (s8 *)&lmt_ru;
- for (j = 0; j < RTW89_TXPWR_LMT_RU_PAGE_SIZE;
+ for (j = 0; j < RTW89_TXPWR_LMT_RU_PAGE_SIZE_AX;
j += 4, addr += 4, ptr += 4) {
val = FIELD_PREP(GENMASK(7, 0), ptr[0]) |
FIELD_PREP(GENMASK(15, 8), ptr[1]) |
@@ -2248,7 +2307,6 @@ void rtw89_phy_set_txpwr_limit_ru(struct rtw89_dev *rtwdev,
}
}
}
-EXPORT_SYMBOL(rtw89_phy_set_txpwr_limit_ru);
struct rtw89_phy_iter_ra_data {
struct rtw89_dev *rtwdev;
@@ -2341,6 +2399,18 @@ static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta)
ra_report->txrate.he_gi = NL80211_RATE_INFO_HE_GI_3_2;
mcs = ra_report->txrate.mcs;
break;
+ case RTW89_RA_RPT_MODE_EHT:
+ ra_report->txrate.flags |= RATE_INFO_FLAGS_EHT_MCS;
+ ra_report->txrate.mcs = u8_get_bits(rate, RTW89_RA_RATE_MASK_MCS_V1);
+ ra_report->txrate.nss = u8_get_bits(rate, RTW89_RA_RATE_MASK_NSS_V1) + 1;
+ if (giltf == RTW89_GILTF_2XHE08 || giltf == RTW89_GILTF_1XHE08)
+ ra_report->txrate.eht_gi = NL80211_RATE_INFO_EHT_GI_0_8;
+ else if (giltf == RTW89_GILTF_2XHE16 || giltf == RTW89_GILTF_1XHE16)
+ ra_report->txrate.eht_gi = NL80211_RATE_INFO_EHT_GI_1_6;
+ else
+ ra_report->txrate.eht_gi = NL80211_RATE_INFO_EHT_GI_3_2;
+ mcs = ra_report->txrate.mcs;
+ break;
}
ra_report->txrate.bw = rtw89_hw_to_rate_info_bw(bw);
@@ -2487,6 +2557,9 @@ static void rtw89_dcfo_comp(struct rtw89_dev *rtwdev, s32 curr_cfo)
s32 dcfo_comp_val;
int sign;
+ if (rtwdev->chip->chip_id == RTL8922A)
+ return;
+
if (!is_linked) {
rtw89_debug(rtwdev, RTW89_DBG_CFO, "DCFO: is_linked=%d\n",
is_linked);
@@ -2507,16 +2580,23 @@ static void rtw89_dcfo_comp(struct rtw89_dev *rtwdev, s32 curr_cfo)
static void rtw89_dcfo_comp_init(struct rtw89_dev *rtwdev)
{
+ const struct rtw89_phy_gen_def *phy = rtwdev->chip->phy_def;
const struct rtw89_chip_info *chip = rtwdev->chip;
+ const struct rtw89_cfo_regs *cfo = phy->cfo;
- rtw89_phy_set_phy_regs(rtwdev, R_DCFO_OPT, B_DCFO_OPT_EN, 1);
- rtw89_phy_set_phy_regs(rtwdev, R_DCFO_WEIGHT, B_DCFO_WEIGHT_MSK, 8);
+ rtw89_phy_set_phy_regs(rtwdev, cfo->comp_seg0, cfo->valid_0_mask, 1);
+ rtw89_phy_set_phy_regs(rtwdev, cfo->comp, cfo->weighting_mask, 8);
- if (chip->cfo_hw_comp)
- rtw89_write32_mask(rtwdev, R_AX_PWR_UL_CTRL2,
- B_AX_PWR_UL_CFO_MASK, 0x6);
- else
- rtw89_write32_clr(rtwdev, R_AX_PWR_UL_CTRL2, B_AX_PWR_UL_CFO_MASK);
+ if (chip->chip_gen == RTW89_CHIP_AX) {
+ if (chip->cfo_hw_comp) {
+ rtw89_write32_mask(rtwdev, R_AX_PWR_UL_CTRL2,
+ B_AX_PWR_UL_CFO_MASK, 0x6);
+ } else {
+ rtw89_phy_set_phy_regs(rtwdev, R_DCFO, B_DCFO, 1);
+ rtw89_write32_clr(rtwdev, R_AX_PWR_UL_CTRL2,
+ B_AX_PWR_UL_CFO_MASK);
+ }
+ }
}
static void rtw89_phy_cfo_init(struct rtw89_dev *rtwdev)
@@ -2539,7 +2619,6 @@ static void rtw89_phy_cfo_init(struct rtw89_dev *rtwdev)
rtw89_debug(rtwdev, RTW89_DBG_CFO, "Default xcap=%0x\n",
cfo->crystal_cap_default);
rtw89_phy_cfo_set_crystal_cap(rtwdev, cfo->crystal_cap_default, true);
- rtw89_phy_set_phy_regs(rtwdev, R_DCFO, B_DCFO, 1);
rtw89_dcfo_comp_init(rtwdev);
cfo->cfo_timer_ms = 2000;
cfo->cfo_trig_by_timer_en = false;
@@ -2556,11 +2635,15 @@ static void rtw89_phy_cfo_crystal_cap_adjust(struct rtw89_dev *rtwdev,
s32 cfo_abs = abs(curr_cfo);
int sign;
+ if (curr_cfo == 0) {
+ rtw89_debug(rtwdev, RTW89_DBG_CFO, "curr_cfo=0\n");
+ return;
+ }
if (!cfo->is_adjust) {
if (cfo_abs > CFO_TRK_ENABLE_TH)
cfo->is_adjust = true;
} else {
- if (cfo_abs < CFO_TRK_STOP_TH)
+ if (cfo_abs <= CFO_TRK_STOP_TH)
cfo->is_adjust = false;
}
if (!cfo->is_adjust) {
@@ -2752,10 +2835,6 @@ static void rtw89_phy_cfo_dm(struct rtw89_dev *rtwdev)
new_cfo = rtw89_phy_average_cfo_calc(rtwdev);
else
new_cfo = rtw89_phy_multi_sta_cfo_calc(rtwdev);
- if (new_cfo == 0) {
- rtw89_debug(rtwdev, RTW89_DBG_CFO, "curr_cfo=0\n");
- return;
- }
if (cfo->divergence_lock_en) {
cfo->lock_cnt++;
if (cfo->lock_cnt > CFO_PERIOD_CNT) {
@@ -2898,7 +2977,7 @@ void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
rtwvif->sub_entity_idx);
struct rtw89_phy_ul_tb_info *ul_tb_info = &rtwdev->ul_tb_info;
- if (!chip->support_ul_tb_ctrl)
+ if (!chip->ul_tb_waveform_ctrl)
return;
rtwvif->def_tri_idx =
@@ -2928,6 +3007,61 @@ struct rtw89_phy_ul_tb_check_data {
u8 def_tri_idx;
};
+struct rtw89_phy_power_diff {
+ u32 q_00;
+ u32 q_11;
+ u32 q_matrix_en;
+ u32 ultb_1t_norm_160;
+ u32 ultb_2t_norm_160;
+ u32 com1_norm_1sts;
+ u32 com2_resp_1sts_path;
+};
+
+static void rtw89_phy_ofdma_power_diff(struct rtw89_dev *rtwdev,
+ struct rtw89_vif *rtwvif)
+{
+ static const struct rtw89_phy_power_diff table[2] = {
+ {0x0, 0x0, 0x0, 0x0, 0xf4, 0x3, 0x3},
+ {0xb50, 0xb50, 0x1, 0xc, 0x0, 0x1, 0x1},
+ };
+ const struct rtw89_phy_power_diff *param;
+ u32 reg;
+
+ if (!rtwdev->chip->ul_tb_pwr_diff)
+ return;
+
+ if (rtwvif->pwr_diff_en == rtwvif->pre_pwr_diff_en) {
+ rtwvif->pwr_diff_en = false;
+ return;
+ }
+
+ rtwvif->pre_pwr_diff_en = rtwvif->pwr_diff_en;
+ param = &table[rtwvif->pwr_diff_en];
+
+ rtw89_phy_write32_mask(rtwdev, R_Q_MATRIX_00, B_Q_MATRIX_00_REAL,
+ param->q_00);
+ rtw89_phy_write32_mask(rtwdev, R_Q_MATRIX_11, B_Q_MATRIX_11_REAL,
+ param->q_11);
+ rtw89_phy_write32_mask(rtwdev, R_CUSTOMIZE_Q_MATRIX,
+ B_CUSTOMIZE_Q_MATRIX_EN, param->q_matrix_en);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_1T, rtwvif->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PWR_UL_TB_1T_NORM_BW160,
+ param->ultb_1t_norm_160);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_2T, rtwvif->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PWR_UL_TB_2T_NORM_BW160,
+ param->ultb_2t_norm_160);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM1, rtwvif->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PATH_COM1_NORM_1STS,
+ param->com1_norm_1sts);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM2, rtwvif->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PATH_COM2_RESP_1STS_PATH,
+ param->com2_resp_1sts_path);
+}
+
static
void rtw89_phy_ul_tb_ctrl_check(struct rtw89_dev *rtwdev,
struct rtw89_vif *rtwvif,
@@ -2942,41 +3076,34 @@ void rtw89_phy_ul_tb_ctrl_check(struct rtw89_dev *rtwdev,
if (!vif->cfg.assoc)
return;
- if (stats->rx_tf_periodic > UL_TB_TF_CNT_L2H_TH)
- ul_tb_data->high_tf_client = true;
- else if (stats->rx_tf_periodic < UL_TB_TF_CNT_H2L_TH)
- ul_tb_data->low_tf_client = true;
+ if (rtwdev->chip->ul_tb_waveform_ctrl) {
+ if (stats->rx_tf_periodic > UL_TB_TF_CNT_L2H_TH)
+ ul_tb_data->high_tf_client = true;
+ else if (stats->rx_tf_periodic < UL_TB_TF_CNT_H2L_TH)
+ ul_tb_data->low_tf_client = true;
+
+ ul_tb_data->valid = true;
+ ul_tb_data->def_tri_idx = rtwvif->def_tri_idx;
+ ul_tb_data->dyn_tb_bedge_en = rtwvif->dyn_tb_bedge_en;
+ }
- ul_tb_data->valid = true;
- ul_tb_data->def_tri_idx = rtwvif->def_tri_idx;
- ul_tb_data->dyn_tb_bedge_en = rtwvif->dyn_tb_bedge_en;
+ rtw89_phy_ofdma_power_diff(rtwdev, rtwvif);
}
-void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev)
+static void rtw89_phy_ul_tb_waveform_ctrl(struct rtw89_dev *rtwdev,
+ struct rtw89_phy_ul_tb_check_data *ul_tb_data)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_phy_ul_tb_info *ul_tb_info = &rtwdev->ul_tb_info;
- struct rtw89_phy_ul_tb_check_data ul_tb_data = {};
- struct rtw89_vif *rtwvif;
- if (!chip->support_ul_tb_ctrl)
- return;
-
- if (rtwdev->total_sta_assoc != 1)
+ if (!rtwdev->chip->ul_tb_waveform_ctrl)
return;
- rtw89_for_each_rtwvif(rtwdev, rtwvif)
- rtw89_phy_ul_tb_ctrl_check(rtwdev, rtwvif, &ul_tb_data);
-
- if (!ul_tb_data.valid)
- return;
-
- if (ul_tb_data.dyn_tb_bedge_en) {
- if (ul_tb_data.high_tf_client) {
+ if (ul_tb_data->dyn_tb_bedge_en) {
+ if (ul_tb_data->high_tf_client) {
rtw89_phy_write32_mask(rtwdev, R_BANDEDGE, B_BANDEDGE_EN, 0);
rtw89_debug(rtwdev, RTW89_DBG_UL_TB,
"[ULTB] Turn off if_bandedge\n");
- } else if (ul_tb_data.low_tf_client) {
+ } else if (ul_tb_data->low_tf_client) {
rtw89_phy_write32_mask(rtwdev, R_BANDEDGE, B_BANDEDGE_EN,
ul_tb_info->def_if_bandedge);
rtw89_debug(rtwdev, RTW89_DBG_UL_TB,
@@ -2986,28 +3113,49 @@ void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev)
}
if (ul_tb_info->dyn_tb_tri_en) {
- if (ul_tb_data.high_tf_client) {
+ if (ul_tb_data->high_tf_client) {
rtw89_phy_write32_mask(rtwdev, R_DCFO_OPT,
B_TXSHAPE_TRIANGULAR_CFG, 0);
rtw89_debug(rtwdev, RTW89_DBG_UL_TB,
"[ULTB] Turn off Tx triangle\n");
- } else if (ul_tb_data.low_tf_client) {
+ } else if (ul_tb_data->low_tf_client) {
rtw89_phy_write32_mask(rtwdev, R_DCFO_OPT,
B_TXSHAPE_TRIANGULAR_CFG,
- ul_tb_data.def_tri_idx);
+ ul_tb_data->def_tri_idx);
rtw89_debug(rtwdev, RTW89_DBG_UL_TB,
"[ULTB] Set to default tx_shap_idx = %d\n",
- ul_tb_data.def_tri_idx);
+ ul_tb_data->def_tri_idx);
}
}
}
+void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev)
+{
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_phy_ul_tb_check_data ul_tb_data = {};
+ struct rtw89_vif *rtwvif;
+
+ if (!chip->ul_tb_waveform_ctrl && !chip->ul_tb_pwr_diff)
+ return;
+
+ if (rtwdev->total_sta_assoc != 1)
+ return;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+ rtw89_phy_ul_tb_ctrl_check(rtwdev, rtwvif, &ul_tb_data);
+
+ if (!ul_tb_data.valid)
+ return;
+
+ rtw89_phy_ul_tb_waveform_ctrl(rtwdev, &ul_tb_data);
+}
+
static void rtw89_phy_ul_tb_info_init(struct rtw89_dev *rtwdev)
{
const struct rtw89_chip_info *chip = rtwdev->chip;
struct rtw89_phy_ul_tb_info *ul_tb_info = &rtwdev->ul_tb_info;
- if (!chip->support_ul_tb_ctrl)
+ if (!chip->ul_tb_waveform_ctrl)
return;
ul_tb_info->dyn_tb_tri_en = true;
@@ -4474,8 +4622,6 @@ static void rtw89_phy_env_monitor_init(struct rtw89_dev *rtwdev)
void rtw89_phy_dm_init(struct rtw89_dev *rtwdev)
{
- const struct rtw89_chip_info *chip = rtwdev->chip;
-
rtw89_phy_stat_init(rtwdev);
rtw89_chip_bb_sethw(rtwdev);
@@ -4491,7 +4637,6 @@ void rtw89_phy_dm_init(struct rtw89_dev *rtwdev)
rtw89_phy_init_rf_nctl(rtwdev);
rtw89_chip_rfk_init(rtwdev);
- rtw89_load_txpwr_table(rtwdev, chip->byr_table);
rtw89_chip_set_txpwr_ctrl(rtwdev);
rtw89_chip_power_trim(rtwdev);
rtw89_chip_cfg_txrx_path(rtwdev);
@@ -4500,6 +4645,7 @@ void rtw89_phy_dm_init(struct rtw89_dev *rtwdev)
void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
{
const struct rtw89_chip_info *chip = rtwdev->chip;
+ const struct rtw89_reg_def *bss_clr_vld = &chip->bss_clr_vld;
enum rtw89_phy_idx phy_idx = RTW89_PHY_0;
u8 bss_color;
@@ -4508,7 +4654,7 @@ void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif
bss_color = vif->bss_conf.he_bss_color.color;
- rtw89_phy_write32_idx(rtwdev, chip->bss_clr_map_reg, B_BSS_CLR_MAP_VLD0, 0x1,
+ rtw89_phy_write32_idx(rtwdev, bss_clr_vld->addr, bss_clr_vld->mask, 0x1,
phy_idx);
rtw89_phy_write32_idx(rtwdev, chip->bss_clr_map_reg, B_BSS_CLR_MAP_TGT,
bss_color, phy_idx);
@@ -4829,9 +4975,22 @@ static const struct rtw89_physts_regs rtw89_physts_regs_ax = {
.dis_trigger_brk_mask = B_STS_DIS_TRIG_BY_BRK,
};
+static const struct rtw89_cfo_regs rtw89_cfo_regs_ax = {
+ .comp = R_DCFO_WEIGHT,
+ .weighting_mask = B_DCFO_WEIGHT_MSK,
+ .comp_seg0 = R_DCFO_OPT,
+ .valid_0_mask = B_DCFO_OPT_EN,
+};
+
const struct rtw89_phy_gen_def rtw89_phy_gen_ax = {
.cr_base = 0x10000,
.ccx = &rtw89_ccx_regs_ax,
.physts = &rtw89_physts_regs_ax,
+ .cfo = &rtw89_cfo_regs_ax,
+
+ .set_txpwr_byrate = rtw89_phy_set_txpwr_byrate_ax,
+ .set_txpwr_offset = rtw89_phy_set_txpwr_offset_ax,
+ .set_txpwr_limit = rtw89_phy_set_txpwr_limit_ax,
+ .set_txpwr_limit_ru = rtw89_phy_set_txpwr_limit_ru_ax,
};
EXPORT_SYMBOL(rtw89_phy_gen_ax);
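
The AX setters kept here in phy.c and the BE ones added in phy_be.c below write their tables the same way: a struct of s8 power entries is treated as one register "page" and walked four bytes at a time, each group packed into a 32-bit word with byte 0 in bits 7:0 (the BUILD_BUG_ON calls pin the struct sizes to the page-size constants for exactly this reason). A small standalone sketch of that packing, with dummy data and a placeholder base address:

/* Sketch of packing s8 TX-power entries into 32-bit register words; data is dummy. */
#include <stdint.h>
#include <stdio.h>

static uint32_t pack4(const int8_t *p)
{
	return  (uint32_t)(uint8_t)p[0]        |
	       ((uint32_t)(uint8_t)p[1] << 8)  |
	       ((uint32_t)(uint8_t)p[2] << 16) |
	       ((uint32_t)(uint8_t)p[3] << 24);
}

int main(void)
{
	/* stand-in for one txpwr limit page; its size must stay a multiple of 4 */
	const int8_t page[8] = { 12, 12, 10, 10, -2, -2, 8, 8 };
	unsigned int addr = 0xd200;   /* placeholder base address */

	for (unsigned int j = 0; j < sizeof(page); j += 4, addr += 4)
		printf("write 0x%08x to 0x%x\n", (unsigned)pack4(&page[j]), addr);

	return 0;
}
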
diff --git a/drivers/net/wireless/realtek/rtw89/phy.h b/drivers/net/wireless/realtek/rtw89/phy.h
index d6dc0cbbae43..5c85122e7bb5 100644
--- a/drivers/net/wireless/realtek/rtw89/phy.h
+++ b/drivers/net/wireless/realtek/rtw89/phy.h
@@ -46,6 +46,11 @@
#define RA_MASK_HE_3SS_RATES GENMASK_ULL(47, 36)
#define RA_MASK_HE_4SS_RATES GENMASK_ULL(59, 48)
#define RA_MASK_HE_RATES GENMASK_ULL(59, 12)
+#define RA_MASK_EHT_1SS_RATES GENMASK_ULL(27, 12)
+#define RA_MASK_EHT_2SS_RATES GENMASK_ULL(43, 28)
+#define RA_MASK_EHT_3SS_RATES GENMASK_ULL(59, 44)
+#define RA_MASK_EHT_4SS_RATES GENMASK_ULL(62, 60)
+#define RA_MASK_EHT_RATES GENMASK_ULL(62, 12)
#define CFO_TRK_ENABLE_TH (2 << 2)
#define CFO_TRK_STOP_TH_4 (30 << 2)
@@ -400,10 +405,97 @@ struct rtw89_physts_regs {
u32 dis_trigger_brk_mask;
};
+struct rtw89_cfo_regs {
+ u32 comp;
+ u32 weighting_mask;
+ u32 comp_seg0;
+ u32 valid_0_mask;
+};
+
+enum rtw89_bandwidth_section_num_ax {
+ RTW89_BW20_SEC_NUM_AX = 8,
+ RTW89_BW40_SEC_NUM_AX = 4,
+ RTW89_BW80_SEC_NUM_AX = 2,
+};
+
+enum rtw89_bandwidth_section_num_be {
+ RTW89_BW20_SEC_NUM_BE = 16,
+ RTW89_BW40_SEC_NUM_BE = 8,
+ RTW89_BW80_SEC_NUM_BE = 4,
+ RTW89_BW160_SEC_NUM_BE = 2,
+};
+
+#define RTW89_TXPWR_LMT_PAGE_SIZE_AX 40
+
+struct rtw89_txpwr_limit_ax {
+ s8 cck_20m[RTW89_BF_NUM];
+ s8 cck_40m[RTW89_BF_NUM];
+ s8 ofdm[RTW89_BF_NUM];
+ s8 mcs_20m[RTW89_BW20_SEC_NUM_AX][RTW89_BF_NUM];
+ s8 mcs_40m[RTW89_BW40_SEC_NUM_AX][RTW89_BF_NUM];
+ s8 mcs_80m[RTW89_BW80_SEC_NUM_AX][RTW89_BF_NUM];
+ s8 mcs_160m[RTW89_BF_NUM];
+ s8 mcs_40m_0p5[RTW89_BF_NUM];
+ s8 mcs_40m_2p5[RTW89_BF_NUM];
+};
+
+#define RTW89_TXPWR_LMT_PAGE_SIZE_BE 76
+
+struct rtw89_txpwr_limit_be {
+ s8 cck_20m[RTW89_BF_NUM];
+ s8 cck_40m[RTW89_BF_NUM];
+ s8 ofdm[RTW89_BF_NUM];
+ s8 mcs_20m[RTW89_BW20_SEC_NUM_BE][RTW89_BF_NUM];
+ s8 mcs_40m[RTW89_BW40_SEC_NUM_BE][RTW89_BF_NUM];
+ s8 mcs_80m[RTW89_BW80_SEC_NUM_BE][RTW89_BF_NUM];
+ s8 mcs_160m[RTW89_BW160_SEC_NUM_BE][RTW89_BF_NUM];
+ s8 mcs_320m[RTW89_BF_NUM];
+ s8 mcs_40m_0p5[RTW89_BF_NUM];
+ s8 mcs_40m_2p5[RTW89_BF_NUM];
+ s8 mcs_40m_4p5[RTW89_BF_NUM];
+ s8 mcs_40m_6p5[RTW89_BF_NUM];
+};
+
+#define RTW89_RU_SEC_NUM_AX 8
+
+#define RTW89_TXPWR_LMT_RU_PAGE_SIZE_AX 24
+
+struct rtw89_txpwr_limit_ru_ax {
+ s8 ru26[RTW89_RU_SEC_NUM_AX];
+ s8 ru52[RTW89_RU_SEC_NUM_AX];
+ s8 ru106[RTW89_RU_SEC_NUM_AX];
+};
+
+#define RTW89_RU_SEC_NUM_BE 16
+
+#define RTW89_TXPWR_LMT_RU_PAGE_SIZE_BE 80
+
+struct rtw89_txpwr_limit_ru_be {
+ s8 ru26[RTW89_RU_SEC_NUM_BE];
+ s8 ru52[RTW89_RU_SEC_NUM_BE];
+ s8 ru106[RTW89_RU_SEC_NUM_BE];
+ s8 ru52_26[RTW89_RU_SEC_NUM_BE];
+ s8 ru106_26[RTW89_RU_SEC_NUM_BE];
+};
+
struct rtw89_phy_gen_def {
u32 cr_base;
const struct rtw89_ccx_regs *ccx;
const struct rtw89_physts_regs *physts;
+ const struct rtw89_cfo_regs *cfo;
+
+ void (*set_txpwr_byrate)(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx);
+ void (*set_txpwr_offset)(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx);
+ void (*set_txpwr_limit)(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx);
+ void (*set_txpwr_limit_ru)(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx);
};
extern const struct rtw89_phy_gen_def rtw89_phy_gen_ax;
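
The BE limit structs declared above are laid out so that sizeof() equals the matching RTW89_TXPWR_LMT*_PAGE_SIZE_BE constant, which the setters check with BUILD_BUG_ON before streaming the struct to hardware four bytes at a time. A compile-time sketch of that invariant for rtw89_txpwr_limit_be, assuming RTW89_BF_NUM is 2 (the non-beamformed/beamformed pair implied by the 76-byte page size); the names are local copies for illustration only:

/* Sketch: the BE limit struct must be exactly one 76-byte register page.
 * BF_NUM == 2 is an assumption derived from the page-size arithmetic. */
#include <stdio.h>

#define BF_NUM			2
#define BW20_SEC_NUM_BE		16
#define BW40_SEC_NUM_BE		8
#define BW80_SEC_NUM_BE		4
#define BW160_SEC_NUM_BE	2
#define TXPWR_LMT_PAGE_SIZE_BE	76

struct txpwr_limit_be {
	signed char cck_20m[BF_NUM];
	signed char cck_40m[BF_NUM];
	signed char ofdm[BF_NUM];
	signed char mcs_20m[BW20_SEC_NUM_BE][BF_NUM];
	signed char mcs_40m[BW40_SEC_NUM_BE][BF_NUM];
	signed char mcs_80m[BW80_SEC_NUM_BE][BF_NUM];
	signed char mcs_160m[BW160_SEC_NUM_BE][BF_NUM];
	signed char mcs_320m[BF_NUM];
	signed char mcs_40m_0p5[BF_NUM];
	signed char mcs_40m_2p5[BF_NUM];
	signed char mcs_40m_4p5[BF_NUM];
	signed char mcs_40m_6p5[BF_NUM];
};

/* mirrors the driver's BUILD_BUG_ON(): the writer walks this struct 4 bytes at a time */
_Static_assert(sizeof(struct txpwr_limit_be) == TXPWR_LMT_PAGE_SIZE_BE,
	       "limit struct must match the register page size");

int main(void)
{
	printf("page size: %zu bytes\n", sizeof(struct txpwr_limit_be));
	return 0;
}
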
@@ -613,22 +705,58 @@ void rtw89_phy_write32_idx(struct rtw89_dev *rtwdev, u32 addr, u32 mask,
u32 data, enum rtw89_phy_idx phy_idx);
u32 rtw89_phy_read32_idx(struct rtw89_dev *rtwdev, u32 addr, u32 mask,
enum rtw89_phy_idx phy_idx);
+s8 *rtw89_phy_raw_byr_seek(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_byrate *head,
+ const struct rtw89_rate_desc *desc);
+s8 rtw89_phy_read_txpwr_byrate(struct rtw89_dev *rtwdev, u8 band, u8 bw,
+ const struct rtw89_rate_desc *rate_desc);
void rtw89_phy_load_txpwr_byrate(struct rtw89_dev *rtwdev,
const struct rtw89_txpwr_table *tbl);
s8 rtw89_phy_read_txpwr_limit(struct rtw89_dev *rtwdev, u8 band,
u8 bw, u8 ntx, u8 rs, u8 bf, u8 ch);
+s8 rtw89_phy_read_txpwr_limit_ru(struct rtw89_dev *rtwdev, u8 band,
+ u8 ru, u8 ntx, u8 ch);
+
+static inline
void rtw89_phy_set_txpwr_byrate(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan,
- enum rtw89_phy_idx phy_idx);
+ enum rtw89_phy_idx phy_idx)
+{
+ const struct rtw89_phy_gen_def *phy = rtwdev->chip->phy_def;
+
+ phy->set_txpwr_byrate(rtwdev, chan, phy_idx);
+}
+
+static inline
void rtw89_phy_set_txpwr_offset(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan,
- enum rtw89_phy_idx phy_idx);
+ enum rtw89_phy_idx phy_idx)
+{
+ const struct rtw89_phy_gen_def *phy = rtwdev->chip->phy_def;
+
+ phy->set_txpwr_offset(rtwdev, chan, phy_idx);
+}
+
+static inline
void rtw89_phy_set_txpwr_limit(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan,
- enum rtw89_phy_idx phy_idx);
+ enum rtw89_phy_idx phy_idx)
+{
+ const struct rtw89_phy_gen_def *phy = rtwdev->chip->phy_def;
+
+ phy->set_txpwr_limit(rtwdev, chan, phy_idx);
+}
+
+static inline
void rtw89_phy_set_txpwr_limit_ru(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan,
- enum rtw89_phy_idx phy_idx);
+ enum rtw89_phy_idx phy_idx)
+{
+ const struct rtw89_phy_gen_def *phy = rtwdev->chip->phy_def;
+
+ phy->set_txpwr_limit_ru(rtwdev, chan, phy_idx);
+}
+
void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta);
void rtw89_phy_ra_update(struct rtw89_dev *rtwdev);
void rtw89_phy_ra_updata_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta,
diff --git a/drivers/net/wireless/realtek/rtw89/phy_be.c b/drivers/net/wireless/realtek/rtw89/phy_be.c
index 778e4b0c8e87..63eeeea72b68 100644
--- a/drivers/net/wireless/realtek/rtw89/phy_be.c
+++ b/drivers/net/wireless/realtek/rtw89/phy_be.c
@@ -2,6 +2,8 @@
/* Copyright(c) 2023 Realtek Corporation
*/
+#include "debug.h"
+#include "mac.h"
#include "phy.h"
#include "reg.h"
@@ -69,9 +71,583 @@ static const struct rtw89_physts_regs rtw89_physts_regs_be = {
.dis_trigger_brk_mask = B_STS_DIS_TRIG_BY_BRK,
};
+static const struct rtw89_cfo_regs rtw89_cfo_regs_be = {
+ .comp = R_DCFO_WEIGHT_V1,
+ .weighting_mask = B_DCFO_WEIGHT_MSK_V1,
+ .comp_seg0 = R_DCFO_OPT_V1,
+ .valid_0_mask = B_DCFO_OPT_EN_V1,
+};
+
+struct rtw89_byr_spec_ent_be {
+ struct rtw89_rate_desc init;
+ u8 num_of_idx;
+ bool no_over_bw40;
+ bool no_multi_nss;
+};
+
+static const struct rtw89_byr_spec_ent_be rtw89_byr_spec_be[] = {
+ {
+ .init = { .rs = RTW89_RS_CCK },
+ .num_of_idx = RTW89_RATE_CCK_NUM,
+ .no_over_bw40 = true,
+ .no_multi_nss = true,
+ },
+ {
+ .init = { .rs = RTW89_RS_OFDM },
+ .num_of_idx = RTW89_RATE_OFDM_NUM,
+ .no_multi_nss = true,
+ },
+ {
+ .init = { .rs = RTW89_RS_MCS, .idx = 14, .ofdma = RTW89_NON_OFDMA },
+ .num_of_idx = 2,
+ .no_multi_nss = true,
+ },
+ {
+ .init = { .rs = RTW89_RS_MCS, .idx = 14, .ofdma = RTW89_OFDMA },
+ .num_of_idx = 2,
+ .no_multi_nss = true,
+ },
+ {
+ .init = { .rs = RTW89_RS_MCS, .ofdma = RTW89_NON_OFDMA },
+ .num_of_idx = 14,
+ },
+ {
+ .init = { .rs = RTW89_RS_HEDCM, .ofdma = RTW89_NON_OFDMA },
+ .num_of_idx = RTW89_RATE_HEDCM_NUM,
+ },
+ {
+ .init = { .rs = RTW89_RS_MCS, .ofdma = RTW89_OFDMA },
+ .num_of_idx = 14,
+ },
+ {
+ .init = { .rs = RTW89_RS_HEDCM, .ofdma = RTW89_OFDMA },
+ .num_of_idx = RTW89_RATE_HEDCM_NUM,
+ },
+};
+
+static
+void __phy_set_txpwr_byrate_be(struct rtw89_dev *rtwdev, u8 band, u8 bw,
+ u8 nss, u32 *addr, enum rtw89_phy_idx phy_idx)
+{
+ const struct rtw89_byr_spec_ent_be *ent;
+ struct rtw89_rate_desc desc;
+ int pos = 0;
+ int i, j;
+ u32 val;
+ s8 v[4];
+
+ for (i = 0; i < ARRAY_SIZE(rtw89_byr_spec_be); i++) {
+ ent = &rtw89_byr_spec_be[i];
+
+ if (bw > RTW89_CHANNEL_WIDTH_40 && ent->no_over_bw40)
+ continue;
+ if (nss > RTW89_NSS_1 && ent->no_multi_nss)
+ continue;
+
+ desc = ent->init;
+ desc.nss = nss;
+ for (j = 0; j < ent->num_of_idx; j++, desc.idx++) {
+ v[pos] = rtw89_phy_read_txpwr_byrate(rtwdev, band, bw,
+ &desc);
+ pos = (pos + 1) % 4;
+ if (pos)
+ continue;
+
+ val = u32_encode_bits(v[0], GENMASK(7, 0)) |
+ u32_encode_bits(v[1], GENMASK(15, 8)) |
+ u32_encode_bits(v[2], GENMASK(23, 16)) |
+ u32_encode_bits(v[3], GENMASK(31, 24));
+
+ rtw89_mac_txpwr_write32(rtwdev, phy_idx, *addr, val);
+ *addr += 4;
+ }
+ }
+}
+
+static void rtw89_phy_set_txpwr_byrate_be(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx)
+{
+ u32 addr = R_BE_PWR_BY_RATE;
+ u8 band = chan->band_type;
+ u8 bw, nss;
+
+ rtw89_debug(rtwdev, RTW89_DBG_TXPWR,
+ "[TXPWR] set txpwr byrate on band %d\n", band);
+
+ for (bw = 0; bw <= RTW89_CHANNEL_WIDTH_320; bw++)
+ for (nss = 0; nss <= RTW89_NSS_2; nss++)
+ __phy_set_txpwr_byrate_be(rtwdev, band, bw, nss,
+ &addr, phy_idx);
+}
+
+static void rtw89_phy_set_txpwr_offset_be(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx)
+{
+ struct rtw89_rate_desc desc = {
+ .nss = RTW89_NSS_1,
+ .rs = RTW89_RS_OFFSET,
+ };
+ u8 band = chan->band_type;
+ s8 v[RTW89_RATE_OFFSET_NUM_BE] = {};
+ u32 val;
+
+ rtw89_debug(rtwdev, RTW89_DBG_TXPWR,
+ "[TXPWR] set txpwr offset on band %d\n", band);
+
+ for (desc.idx = 0; desc.idx < RTW89_RATE_OFFSET_NUM_BE; desc.idx++)
+ v[desc.idx] = rtw89_phy_read_txpwr_byrate(rtwdev, band, 0, &desc);
+
+ val = u32_encode_bits(v[RTW89_RATE_OFFSET_CCK], GENMASK(3, 0)) |
+ u32_encode_bits(v[RTW89_RATE_OFFSET_OFDM], GENMASK(7, 4)) |
+ u32_encode_bits(v[RTW89_RATE_OFFSET_HT], GENMASK(11, 8)) |
+ u32_encode_bits(v[RTW89_RATE_OFFSET_VHT], GENMASK(15, 12)) |
+ u32_encode_bits(v[RTW89_RATE_OFFSET_HE], GENMASK(19, 16)) |
+ u32_encode_bits(v[RTW89_RATE_OFFSET_EHT], GENMASK(23, 20)) |
+ u32_encode_bits(v[RTW89_RATE_OFFSET_DLRU_HE], GENMASK(27, 24)) |
+ u32_encode_bits(v[RTW89_RATE_OFFSET_DLRU_EHT], GENMASK(31, 28));
+
+ rtw89_mac_txpwr_write32(rtwdev, phy_idx, R_BE_PWR_RATE_OFST_CTRL, val);
+}
+
+static void
+fill_limit_nonbf_bf(struct rtw89_dev *rtwdev, s8 (*ptr)[RTW89_BF_NUM],
+ u8 band, u8 bw, u8 ntx, u8 rs, u8 ch)
+{
+ int bf;
+
+ for (bf = 0; bf < RTW89_BF_NUM; bf++)
+ (*ptr)[bf] = rtw89_phy_read_txpwr_limit(rtwdev, band, bw, ntx,
+ rs, bf, ch);
+}
+
+static void
+fill_limit_nonbf_bf_min(struct rtw89_dev *rtwdev, s8 (*ptr)[RTW89_BF_NUM],
+ u8 band, u8 bw, u8 ntx, u8 rs, u8 ch1, u8 ch2)
+{
+ s8 v1[RTW89_BF_NUM];
+ s8 v2[RTW89_BF_NUM];
+ int bf;
+
+ fill_limit_nonbf_bf(rtwdev, &v1, band, bw, ntx, rs, ch1);
+ fill_limit_nonbf_bf(rtwdev, &v2, band, bw, ntx, rs, ch2);
+
+ for (bf = 0; bf < RTW89_BF_NUM; bf++)
+ (*ptr)[bf] = min(v1[bf], v2[bf]);
+}
+
+static void phy_fill_limit_20m_be(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_be *lmt,
+ u8 band, u8 ntx, u8 ch)
+{
+ fill_limit_nonbf_bf(rtwdev, &lmt->cck_20m, band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_CCK, ch);
+ fill_limit_nonbf_bf(rtwdev, &lmt->cck_40m, band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_CCK, ch);
+ fill_limit_nonbf_bf(rtwdev, &lmt->ofdm, band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_OFDM, ch);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[0], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch);
+}
+
+static void phy_fill_limit_40m_be(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_be *lmt,
+ u8 band, u8 ntx, u8 ch, u8 pri_ch)
+{
+ fill_limit_nonbf_bf(rtwdev, &lmt->cck_20m, band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_CCK, ch - 2);
+ fill_limit_nonbf_bf(rtwdev, &lmt->cck_40m, band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_CCK, ch);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->ofdm, band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_OFDM, pri_ch);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[0], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 2);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[1], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 2);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[0], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch);
+}
+
+static void phy_fill_limit_80m_be(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_be *lmt,
+ u8 band, u8 ntx, u8 ch, u8 pri_ch)
+{
+ fill_limit_nonbf_bf(rtwdev, &lmt->ofdm, band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_OFDM, pri_ch);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[0], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 6);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[1], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 2);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[2], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 2);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[3], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 6);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[0], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch - 4);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[1], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch + 4);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_80m[0], band,
+ RTW89_CHANNEL_WIDTH_80, ntx, RTW89_RS_MCS, ch);
+
+ fill_limit_nonbf_bf_min(rtwdev, &lmt->mcs_40m_0p5, band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS,
+ ch - 4, ch + 4);
+}
+
+static void phy_fill_limit_160m_be(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_be *lmt,
+ u8 band, u8 ntx, u8 ch, u8 pri_ch)
+{
+ fill_limit_nonbf_bf(rtwdev, &lmt->ofdm, band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_OFDM, pri_ch);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[0], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 14);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[1], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 10);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[2], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 6);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[3], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 2);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[4], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 2);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[5], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 6);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[6], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 10);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[7], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 14);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[0], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch - 12);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[1], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch - 4);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[2], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch + 4);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[3], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch + 12);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_80m[0], band,
+ RTW89_CHANNEL_WIDTH_80, ntx, RTW89_RS_MCS, ch - 8);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_80m[1], band,
+ RTW89_CHANNEL_WIDTH_80, ntx, RTW89_RS_MCS, ch + 8);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_160m[0], band,
+ RTW89_CHANNEL_WIDTH_160, ntx, RTW89_RS_MCS, ch);
+
+ fill_limit_nonbf_bf_min(rtwdev, &lmt->mcs_40m_0p5, band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS,
+ ch - 12, ch - 4);
+ fill_limit_nonbf_bf_min(rtwdev, &lmt->mcs_40m_2p5, band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS,
+ ch + 4, ch + 12);
+}
+
+static void phy_fill_limit_320m_be(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_be *lmt,
+ u8 band, u8 ntx, u8 ch, u8 pri_ch)
+{
+ fill_limit_nonbf_bf(rtwdev, &lmt->ofdm, band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_OFDM, pri_ch);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[0], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 30);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[1], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 26);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[2], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 22);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[3], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 18);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[4], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 14);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[5], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 10);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[6], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 6);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[7], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch - 2);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[8], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 2);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[9], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 6);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[10], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 10);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[11], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 14);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[12], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 18);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[13], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 22);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[14], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 26);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_20m[15], band,
+ RTW89_CHANNEL_WIDTH_20, ntx, RTW89_RS_MCS, ch + 30);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[0], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch - 28);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[1], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch - 20);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[2], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch - 12);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[3], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch - 4);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[4], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch + 4);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[5], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch + 12);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[6], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch + 20);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_40m[7], band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS, ch + 28);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_80m[0], band,
+ RTW89_CHANNEL_WIDTH_80, ntx, RTW89_RS_MCS, ch - 24);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_80m[1], band,
+ RTW89_CHANNEL_WIDTH_80, ntx, RTW89_RS_MCS, ch - 8);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_80m[2], band,
+ RTW89_CHANNEL_WIDTH_80, ntx, RTW89_RS_MCS, ch + 8);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_80m[3], band,
+ RTW89_CHANNEL_WIDTH_80, ntx, RTW89_RS_MCS, ch + 24);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_160m[0], band,
+ RTW89_CHANNEL_WIDTH_160, ntx, RTW89_RS_MCS, ch - 16);
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_160m[1], band,
+ RTW89_CHANNEL_WIDTH_160, ntx, RTW89_RS_MCS, ch + 16);
+
+ fill_limit_nonbf_bf(rtwdev, &lmt->mcs_320m, band,
+ RTW89_CHANNEL_WIDTH_320, ntx, RTW89_RS_MCS, ch);
+
+ fill_limit_nonbf_bf_min(rtwdev, &lmt->mcs_40m_0p5, band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS,
+ ch - 28, ch - 20);
+ fill_limit_nonbf_bf_min(rtwdev, &lmt->mcs_40m_2p5, band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS,
+ ch - 12, ch - 4);
+ fill_limit_nonbf_bf_min(rtwdev, &lmt->mcs_40m_4p5, band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS,
+ ch + 4, ch + 12);
+ fill_limit_nonbf_bf_min(rtwdev, &lmt->mcs_40m_6p5, band,
+ RTW89_CHANNEL_WIDTH_40, ntx, RTW89_RS_MCS,
+ ch + 20, ch + 28);
+}
+
+static void rtw89_phy_fill_limit_be(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ struct rtw89_txpwr_limit_be *lmt,
+ u8 ntx)
+{
+ u8 band = chan->band_type;
+ u8 pri_ch = chan->primary_channel;
+ u8 ch = chan->channel;
+ u8 bw = chan->band_width;
+
+ memset(lmt, 0, sizeof(*lmt));
+
+ switch (bw) {
+ case RTW89_CHANNEL_WIDTH_20:
+ phy_fill_limit_20m_be(rtwdev, lmt, band, ntx, ch);
+ break;
+ case RTW89_CHANNEL_WIDTH_40:
+ phy_fill_limit_40m_be(rtwdev, lmt, band, ntx, ch, pri_ch);
+ break;
+ case RTW89_CHANNEL_WIDTH_80:
+ phy_fill_limit_80m_be(rtwdev, lmt, band, ntx, ch, pri_ch);
+ break;
+ case RTW89_CHANNEL_WIDTH_160:
+ phy_fill_limit_160m_be(rtwdev, lmt, band, ntx, ch, pri_ch);
+ break;
+ case RTW89_CHANNEL_WIDTH_320:
+ phy_fill_limit_320m_be(rtwdev, lmt, band, ntx, ch, pri_ch);
+ break;
+ }
+}
+
+static void rtw89_phy_set_txpwr_limit_be(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx)
+{
+ struct rtw89_txpwr_limit_be lmt;
+ const s8 *ptr;
+ u32 addr, val;
+ u8 i, j;
+
+ BUILD_BUG_ON(sizeof(struct rtw89_txpwr_limit_be) !=
+ RTW89_TXPWR_LMT_PAGE_SIZE_BE);
+
+ rtw89_debug(rtwdev, RTW89_DBG_TXPWR,
+ "[TXPWR] set txpwr limit on band %d bw %d\n",
+ chan->band_type, chan->band_width);
+
+ addr = R_BE_PWR_LMT;
+ for (i = 0; i <= RTW89_NSS_2; i++) {
+ rtw89_phy_fill_limit_be(rtwdev, chan, &lmt, i);
+
+ ptr = (s8 *)&lmt;
+ for (j = 0; j < RTW89_TXPWR_LMT_PAGE_SIZE_BE;
+ j += 4, addr += 4, ptr += 4) {
+ val = u32_encode_bits(ptr[0], GENMASK(7, 0)) |
+ u32_encode_bits(ptr[1], GENMASK(15, 8)) |
+ u32_encode_bits(ptr[2], GENMASK(23, 16)) |
+ u32_encode_bits(ptr[3], GENMASK(31, 24));
+
+ rtw89_mac_txpwr_write32(rtwdev, phy_idx, addr, val);
+ }
+ }
+}
+
+static void fill_limit_ru_each(struct rtw89_dev *rtwdev, u8 index,
+ struct rtw89_txpwr_limit_ru_be *lmt_ru,
+ u8 band, u8 ntx, u8 ch)
+{
+ lmt_ru->ru26[index] =
+ rtw89_phy_read_txpwr_limit_ru(rtwdev, band, RTW89_RU26, ntx, ch);
+ lmt_ru->ru52[index] =
+ rtw89_phy_read_txpwr_limit_ru(rtwdev, band, RTW89_RU52, ntx, ch);
+ lmt_ru->ru106[index] =
+ rtw89_phy_read_txpwr_limit_ru(rtwdev, band, RTW89_RU106, ntx, ch);
+ lmt_ru->ru52_26[index] =
+ rtw89_phy_read_txpwr_limit_ru(rtwdev, band, RTW89_RU52_26, ntx, ch);
+ lmt_ru->ru106_26[index] =
+ rtw89_phy_read_txpwr_limit_ru(rtwdev, band, RTW89_RU106_26, ntx, ch);
+}
+
+static void phy_fill_limit_ru_20m_be(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ru_be *lmt_ru,
+ u8 band, u8 ntx, u8 ch)
+{
+ fill_limit_ru_each(rtwdev, 0, lmt_ru, band, ntx, ch);
+}
+
+static void phy_fill_limit_ru_40m_be(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ru_be *lmt_ru,
+ u8 band, u8 ntx, u8 ch)
+{
+ fill_limit_ru_each(rtwdev, 0, lmt_ru, band, ntx, ch - 2);
+ fill_limit_ru_each(rtwdev, 1, lmt_ru, band, ntx, ch + 2);
+}
+
+static void phy_fill_limit_ru_80m_be(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ru_be *lmt_ru,
+ u8 band, u8 ntx, u8 ch)
+{
+ fill_limit_ru_each(rtwdev, 0, lmt_ru, band, ntx, ch - 6);
+ fill_limit_ru_each(rtwdev, 1, lmt_ru, band, ntx, ch - 2);
+ fill_limit_ru_each(rtwdev, 2, lmt_ru, band, ntx, ch + 2);
+ fill_limit_ru_each(rtwdev, 3, lmt_ru, band, ntx, ch + 6);
+}
+
+static void phy_fill_limit_ru_160m_be(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ru_be *lmt_ru,
+ u8 band, u8 ntx, u8 ch)
+{
+ fill_limit_ru_each(rtwdev, 0, lmt_ru, band, ntx, ch - 14);
+ fill_limit_ru_each(rtwdev, 1, lmt_ru, band, ntx, ch - 10);
+ fill_limit_ru_each(rtwdev, 2, lmt_ru, band, ntx, ch - 6);
+ fill_limit_ru_each(rtwdev, 3, lmt_ru, band, ntx, ch - 2);
+ fill_limit_ru_each(rtwdev, 4, lmt_ru, band, ntx, ch + 2);
+ fill_limit_ru_each(rtwdev, 5, lmt_ru, band, ntx, ch + 6);
+ fill_limit_ru_each(rtwdev, 6, lmt_ru, band, ntx, ch + 10);
+ fill_limit_ru_each(rtwdev, 7, lmt_ru, band, ntx, ch + 14);
+}
+
+static void phy_fill_limit_ru_320m_be(struct rtw89_dev *rtwdev,
+ struct rtw89_txpwr_limit_ru_be *lmt_ru,
+ u8 band, u8 ntx, u8 ch)
+{
+ fill_limit_ru_each(rtwdev, 0, lmt_ru, band, ntx, ch - 30);
+ fill_limit_ru_each(rtwdev, 1, lmt_ru, band, ntx, ch - 26);
+ fill_limit_ru_each(rtwdev, 2, lmt_ru, band, ntx, ch - 22);
+ fill_limit_ru_each(rtwdev, 3, lmt_ru, band, ntx, ch - 18);
+ fill_limit_ru_each(rtwdev, 4, lmt_ru, band, ntx, ch - 14);
+ fill_limit_ru_each(rtwdev, 5, lmt_ru, band, ntx, ch - 10);
+ fill_limit_ru_each(rtwdev, 6, lmt_ru, band, ntx, ch - 6);
+ fill_limit_ru_each(rtwdev, 7, lmt_ru, band, ntx, ch - 2);
+ fill_limit_ru_each(rtwdev, 8, lmt_ru, band, ntx, ch + 2);
+ fill_limit_ru_each(rtwdev, 9, lmt_ru, band, ntx, ch + 6);
+ fill_limit_ru_each(rtwdev, 10, lmt_ru, band, ntx, ch + 10);
+ fill_limit_ru_each(rtwdev, 11, lmt_ru, band, ntx, ch + 14);
+ fill_limit_ru_each(rtwdev, 12, lmt_ru, band, ntx, ch + 18);
+ fill_limit_ru_each(rtwdev, 13, lmt_ru, band, ntx, ch + 22);
+ fill_limit_ru_each(rtwdev, 14, lmt_ru, band, ntx, ch + 26);
+ fill_limit_ru_each(rtwdev, 15, lmt_ru, band, ntx, ch + 30);
+}
+
+static void rtw89_phy_fill_limit_ru_be(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ struct rtw89_txpwr_limit_ru_be *lmt_ru,
+ u8 ntx)
+{
+ u8 band = chan->band_type;
+ u8 ch = chan->channel;
+ u8 bw = chan->band_width;
+
+ memset(lmt_ru, 0, sizeof(*lmt_ru));
+
+ switch (bw) {
+ case RTW89_CHANNEL_WIDTH_20:
+ phy_fill_limit_ru_20m_be(rtwdev, lmt_ru, band, ntx, ch);
+ break;
+ case RTW89_CHANNEL_WIDTH_40:
+ phy_fill_limit_ru_40m_be(rtwdev, lmt_ru, band, ntx, ch);
+ break;
+ case RTW89_CHANNEL_WIDTH_80:
+ phy_fill_limit_ru_80m_be(rtwdev, lmt_ru, band, ntx, ch);
+ break;
+ case RTW89_CHANNEL_WIDTH_160:
+ phy_fill_limit_ru_160m_be(rtwdev, lmt_ru, band, ntx, ch);
+ break;
+ case RTW89_CHANNEL_WIDTH_320:
+ phy_fill_limit_ru_320m_be(rtwdev, lmt_ru, band, ntx, ch);
+ break;
+ }
+}
+
+static void rtw89_phy_set_txpwr_limit_ru_be(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan,
+ enum rtw89_phy_idx phy_idx)
+{
+ struct rtw89_txpwr_limit_ru_be lmt_ru;
+ const s8 *ptr;
+ u32 addr, val;
+ u8 i, j;
+
+ BUILD_BUG_ON(sizeof(struct rtw89_txpwr_limit_ru_be) !=
+ RTW89_TXPWR_LMT_RU_PAGE_SIZE_BE);
+
+ rtw89_debug(rtwdev, RTW89_DBG_TXPWR,
+ "[TXPWR] set txpwr limit ru on band %d bw %d\n",
+ chan->band_type, chan->band_width);
+
+ addr = R_BE_PWR_RU_LMT;
+ for (i = 0; i <= RTW89_NSS_2; i++) {
+ rtw89_phy_fill_limit_ru_be(rtwdev, chan, &lmt_ru, i);
+
+ ptr = (s8 *)&lmt_ru;
+ for (j = 0; j < RTW89_TXPWR_LMT_RU_PAGE_SIZE_BE;
+ j += 4, addr += 4, ptr += 4) {
+ val = u32_encode_bits(ptr[0], GENMASK(7, 0)) |
+ u32_encode_bits(ptr[1], GENMASK(15, 8)) |
+ u32_encode_bits(ptr[2], GENMASK(23, 16)) |
+ u32_encode_bits(ptr[3], GENMASK(31, 24));
+
+ rtw89_mac_txpwr_write32(rtwdev, phy_idx, addr, val);
+ }
+ }
+}
+
const struct rtw89_phy_gen_def rtw89_phy_gen_be = {
.cr_base = 0x20000,
.ccx = &rtw89_ccx_regs_be,
.physts = &rtw89_physts_regs_be,
+ .cfo = &rtw89_cfo_regs_be,
+
+ .set_txpwr_byrate = rtw89_phy_set_txpwr_byrate_be,
+ .set_txpwr_offset = rtw89_phy_set_txpwr_offset_be,
+ .set_txpwr_limit = rtw89_phy_set_txpwr_limit_be,
+ .set_txpwr_limit_ru = rtw89_phy_set_txpwr_limit_ru_be,
};
EXPORT_SYMBOL(rtw89_phy_gen_be);
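
A side note on the two register-write loops added above (rtw89_phy_set_txpwr_limit_be() and rtw89_phy_set_txpwr_limit_ru_be()): each limit page is streamed to hardware four s8 entries at a time, with entry 0 placed in bits 7:0 of the 32-bit word and the target address advancing by 4 per word, starting from R_BE_PWR_LMT (0x11FAC in reg.h). The following is a minimal user-space sketch of that packing only; the page contents and the printf stand-in for rtw89_mac_txpwr_write32() are invented for illustration and are not part of the patch.

#include <stdint.h>
#include <stdio.h>

/* Pack four signed limit entries into one register word: entry 0 in
 * bits 7:0, entry 1 in bits 15:8, and so on -- the same layout the
 * u32_encode_bits()/GENMASK() calls in the loops above produce. */
static uint32_t pack_txpwr_word(const int8_t *p)
{
	return (uint32_t)(uint8_t)p[0] |
	       (uint32_t)(uint8_t)p[1] << 8 |
	       (uint32_t)(uint8_t)p[2] << 16 |
	       (uint32_t)(uint8_t)p[3] << 24;
}

int main(void)
{
	/* hypothetical page contents; the real page is a struct
	 * rtw89_txpwr_limit_be filled by rtw89_phy_fill_limit_be() */
	int8_t page[8] = { 52, 52, -8, 60, 58, 58, 58, 58 };
	uint32_t addr = 0x11FAC;	/* R_BE_PWR_LMT base from reg.h */
	unsigned int i;

	for (i = 0; i < sizeof(page); i += 4, addr += 4)
		printf("0x%05x <- 0x%08x\n", (unsigned int)addr,
		       (unsigned int)pack_txpwr_word(&page[i]));

	return 0;
}

The BUILD_BUG_ON() in each setter keeps this scheme honest: the page struct must be exactly RTW89_TXPWR_LMT(_RU)_PAGE_SIZE_BE bytes so the byte-wise walk covers the whole register window and nothing more.
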
diff --git a/drivers/net/wireless/realtek/rtw89/reg.h b/drivers/net/wireless/realtek/rtw89/reg.h
index c0aac4d3678a..ccd5481e8a3d 100644
--- a/drivers/net/wireless/realtek/rtw89/reg.h
+++ b/drivers/net/wireless/realtek/rtw89/reg.h
@@ -3360,9 +3360,11 @@
#define R_AX_PWR_UL_TB_1T 0xD28C
#define B_AX_PWR_UL_TB_1T_MASK GENMASK(4, 0)
#define B_AX_PWR_UL_TB_1T_V1_MASK GENMASK(7, 0)
+#define B_AX_PWR_UL_TB_1T_NORM_BW160 GENMASK(31, 24)
#define R_AX_PWR_UL_TB_2T 0xD290
#define B_AX_PWR_UL_TB_2T_MASK GENMASK(4, 0)
#define B_AX_PWR_UL_TB_2T_V1_MASK GENMASK(7, 0)
+#define B_AX_PWR_UL_TB_2T_NORM_BW160 GENMASK(31, 24)
#define R_AX_PWR_BY_RATE_TABLE0 0xD2C0
#define R_AX_PWR_BY_RATE_TABLE6 0xD2D8
#define R_AX_PWR_BY_RATE_TABLE10 0xD2E8
@@ -3390,11 +3392,13 @@
#define AX_PATH_COM0_PATHB 0x11111900
#define AX_PATH_COM0_PATHAB 0x19999980
#define R_AX_PATH_COM1 0xD804
+#define B_AX_PATH_COM1_NORM_1STS GENMASK(31, 28)
#define AX_PATH_COM1_DFVAL 0x00000000
#define AX_PATH_COM1_PATHA 0x13111111
#define AX_PATH_COM1_PATHB 0x23222222
#define AX_PATH_COM1_PATHAB 0x33333333
#define R_AX_PATH_COM2 0xD808
+#define B_AX_PATH_COM2_RESP_1STS_PATH GENMASK(7, 4)
#define AX_PATH_COM2_DFVAL 0x00000000
#define AX_PATH_COM2_PATHA 0x01209313
#define AX_PATH_COM2_PATHB 0x01209323
@@ -3581,8 +3585,8 @@
#define R_AX_MACID_ANT_TABLE 0xDC00
#define R_AX_MACID_ANT_TABLE_LAST 0xDDFC
-#define CMAC1_START_ADDR 0xE000
-#define CMAC1_END_ADDR 0xFFFF
+#define CMAC1_START_ADDR_AX 0xE000
+#define CMAC1_END_ADDR_AX 0xFFFF
#define R_AX_CMAC_REG_END 0xFFFF
#define R_AX_LTE_SW_CFG_1 0x0038
@@ -3625,8 +3629,369 @@
#define B_AX_GNT_BT_TX_SW_VAL BIT(1)
#define B_AX_GNT_BT_TX_SW_CTRL BIT(0)
+#define R_BE_SYS_CLK_CTRL 0x0008
+#define B_BE_CPU_CLK_EN BIT(14)
+#define B_BE_SYMR_BE_CLK_EN BIT(13)
+#define B_BE_MAC_CLK_EN BIT(11)
+#define B_BE_EXT_32K_EN BIT(8)
+#define B_BE_WL_CLK_TEST BIT(7)
+#define B_BE_LOADER_CLK_EN BIT(5)
+#define B_BE_ANA_CLK_DIVISION_2 BIT(1)
+#define B_BE_CNTD16V_EN BIT(0)
+
+#define R_BE_PLATFORM_ENABLE 0x0088
+#define B_BE_HOLD_AFTER_RESET BIT(11)
+#define B_BE_SYM_WLPLT_MEM_MUX_EN BIT(10)
+#define B_BE_WCPU_WARM_EN BIT(9)
+#define B_BE_SPIC_EN BIT(8)
+#define B_BE_UART_EN BIT(7)
+#define B_BE_IDDMA_EN BIT(6)
+#define B_BE_IPSEC_EN BIT(5)
+#define B_BE_HIOE_EN BIT(4)
+#define B_BE_APB_WRAP_EN BIT(2)
+#define B_BE_WCPU_EN BIT(1)
+#define B_BE_PLATFORM_EN BIT(0)
+
+#define R_BE_HALT_H2C_CTRL 0x0160
+#define B_BE_HALT_H2C_TRIGGER BIT(0)
+
+#define R_BE_HALT_C2H_CTRL 0x0164
+#define B_BE_HALT_C2H_TRIGGER BIT(0)
+
+#define R_BE_HALT_H2C 0x0168
+#define B_BE_HALT_H2C_MASK GENMASK(31, 0)
+
+#define R_BE_HALT_C2H 0x016C
+#define B_BE_HALT_C2H_ERROR_SENARIO_MASK GENMASK(31, 28)
+#define B_BE_ERROR_CODE_MASK GENMASK(15, 0)
+
+#define R_BE_SYS_CFG5 0x0170
+#define B_BE_WDT_DATACPU_WAKE_PCIE_EN BIT(12)
+#define B_BE_WDT_DATACPU_WAKE_USB_EN BIT(11)
+#define B_BE_WDT_WAKE_PCIE_EN BIT(10)
+#define B_BE_WDT_WAKE_USB_EN BIT(9)
+#define B_BE_SYM_DIS_HC_ACCESS_MAC BIT(8)
+#define B_BE_LPS_STATUS BIT(3)
+#define B_BE_HCI_TXDMA_BUSY BIT(2)
+
+#define R_BE_SECURE_BOOT_MALLOC_INFO 0x0184
+
+#define R_BE_WCPU_FW_CTRL 0x01E0
+#define B_BE_RUN_ENV_MASK GENMASK(31, 30)
+#define B_BE_WCPU_FWDL_STATUS_MASK GENMASK(29, 26)
+#define B_BE_WDT_PLT_RST_EN BIT(17)
+#define B_BE_FW_SEC_AUTH_DONE BIT(14)
+#define B_BE_FW_CPU_UTIL_STS_EN BIT(13)
+#define B_BE_BBMCU1_FWDL_EN BIT(12)
+#define B_BE_BBMCU0_FWDL_EN BIT(11)
+#define B_BE_DATACPU_FWDL_EN BIT(10)
+#define B_BE_WLANCPU_FWDL_EN BIT(9)
+#define B_BE_WCPU_ROM_CUT_GET BIT(8)
+#define B_BE_WCPU_ROM_CUT_VAL_MASK GENMASK(7, 4)
+#define B_BE_FW_BOOT_MODE_MASK GENMASK(3, 2)
+#define B_BE_H2C_PATH_RDY BIT(1)
+#define B_BE_DLFW_PATH_RDY BIT(0)
+
+#define R_BE_BOOT_REASON 0x01E6
+#define B_BE_BOOT_REASON_MASK GENMASK(2, 0)
+
+#define R_BE_LDM 0x01E8
+#define B_BE_EN_32K BIT(31)
+#define B_BE_LDM_MASK GENMASK(30, 0)
+
+#define R_BE_UDM0 0x01F0
+#define B_BE_UDM0_SEND2RA_CNT_MASK GENMASK(31, 28)
+#define B_BE_UDM0_TX_RPT_CNT_MASK GENMASK(27, 24)
+#define B_BE_UDM0_FS_CODE_MASK GENMASK(23, 8)
+#define B_BE_NULL_POINTER_INDC BIT(7)
+#define B_BE_ROM_ASSERT_INDC BIT(6)
+#define B_BE_RAM_ASSERT_INDC BIT(5)
+#define B_BE_FW_IMAGE_TYPE BIT(4)
+#define B_BE_UDM0_TRAP_LOOP_CTRL BIT(2)
+#define B_BE_UDM0_SEND_HALTC2H_CTRL BIT(1)
+#define B_BE_UDM0_DBG_MODE_CTRL BIT(0)
+
+#define R_BE_UDM1 0x01F4
+#define B_BE_UDM1_ERROR_ADDR_MASK GENMASK(31, 16)
+#define B_BE_UDM1_HALMAC_C2H_ENQ_CNT_MASK GENMASK(15, 12)
+#define B_BE_UDM1_HALMAC_H2C_DEQ_CNT_MASK GENMASK(11, 8)
+#define B_BE_UDM1_WCPU_C2H_ENQ_CNT_MASK GENMASK(7, 4)
+#define B_BE_UDM1_WCPU_H2C_DEQ_CNT_MASK GENMASK(3, 0)
+
+#define R_BE_UDM2 0x01F8
+#define B_BE_UDM2_EPC_RA_MASK GENMASK(31, 0)
+
+#define R_BE_DCPU_PLATFORM_ENABLE 0x0888
+#define B_BE_DCPU_SYM_DPLT_MEM_MUX_EN BIT(10)
+#define B_BE_DCPU_WARM_EN BIT(9)
+#define B_BE_DCPU_UART_EN BIT(7)
+#define B_BE_DCPU_IDDMA_EN BIT(6)
+#define B_BE_DCPU_APB_WRAP_EN BIT(2)
+#define B_BE_DCPU_EN BIT(1)
+#define B_BE_DCPU_PLATFORM_EN BIT(0)
+
#define R_BE_FILTER_MODEL_ADDR 0x0C04
+#define R_BE_PLE_DBG_FUN_INTF_CTL 0x9110
+#define B_BE_PLE_DFI_ACTIVE BIT(31)
+#define B_BE_PLE_DFI_TRGSEL_MASK GENMASK(19, 16)
+#define B_BE_PLE_DFI_ADDR_MASK GENMASK(15, 0)
+
+#define R_BE_PLE_DBG_FUN_INTF_DATA 0x9114
+#define B_BE_PLE_DFI_DATA_MASK GENMASK(31, 0)
+
+#define R_BE_CMAC_FUNC_EN 0x10000
+#define R_BE_CMAC_FUNC_EN_C1 0x14000
+#define B_BE_CMAC_CRPRT BIT(31)
+#define B_BE_CMAC_EN BIT(30)
+#define B_BE_CMAC_TXEN BIT(29)
+#define B_BE_CMAC_RXEN BIT(28)
+#define B_BE_FORCE_RESP_PKTCTL_GCKEN BIT(26)
+#define B_BE_FORCE_SIGB_REG_GCKEN BIT(25)
+#define B_BE_FORCE_POWER_REG_GCKEN BIT(23)
+#define B_BE_FORCE_RMAC_REG_GCKEN BIT(22)
+#define B_BE_FORCE_TRXPTCL_REG_GCKEN BIT(21)
+#define B_BE_FORCE_TMAC_REG_GCKEN BIT(20)
+#define B_BE_FORCE_CMAC_DMA_REG_GCKEN BIT(19)
+#define B_BE_FORCE_PTCL_REG_GCKEN BIT(18)
+#define B_BE_FORCE_SCHEDULER_RREG_GCKEN BIT(17)
+#define B_BE_FORCE_CMAC_COMMON_REG_GCKEN BIT(16)
+#define B_BE_FORCE_CMACREG_GCKEN BIT(15)
+#define B_BE_TXTIME_EN BIT(8)
+#define B_BE_RESP_PKTCTL_EN BIT(7)
+#define B_BE_SIGB_EN BIT(6)
+#define B_BE_PHYINTF_EN BIT(5)
+#define B_BE_CMAC_DMA_EN BIT(4)
+#define B_BE_PTCLTOP_EN BIT(3)
+#define B_BE_SCHEDULER_EN BIT(2)
+#define B_BE_TMAC_EN BIT(1)
+#define B_BE_RMAC_EN BIT(0)
+#define B_BE_CMAC_FUNC_EN_SET (B_BE_CMAC_EN | B_BE_CMAC_TXEN | B_BE_CMAC_RXEN | \
+ B_BE_PHYINTF_EN | B_BE_CMAC_DMA_EN | B_BE_PTCLTOP_EN | \
+ B_BE_SCHEDULER_EN | B_BE_TMAC_EN | B_BE_RMAC_EN | \
+ B_BE_CMAC_CRPRT | B_BE_TXTIME_EN | B_BE_RESP_PKTCTL_EN | \
+ B_BE_SIGB_EN)
+
+#define R_BE_PORT_0_TSF_SYNC 0x102A0
+#define R_BE_PORT_0_TSF_SYNC_C1 0x142A0
+#define B_BE_P0_SYNC_NOW_P BIT(30)
+#define B_BE_P0_SYNC_ONCE_P BIT(29)
+#define B_BE_P0_AUTO_SYNC BIT(28)
+#define B_BE_P0_SYNC_PORT_SRC_SEL_MASK GENMASK(26, 24)
+#define B_BE_P0_TSFTR_SYNC_OFFSET_MASK GENMASK(18, 0)
+
+#define R_BE_MUEDCA_BE_PARAM_0 0x10350
+#define R_BE_MUEDCA_BK_PARAM_0 0x10354
+#define R_BE_MUEDCA_VI_PARAM_0 0x10358
+#define R_BE_MUEDCA_VO_PARAM_0 0x1035C
+
+#define R_BE_MUEDCA_EN 0x10370
+#define R_BE_MUEDCA_EN_C1 0x14370
+#define B_BE_MUEDCA_WMM_SEL BIT(8)
+#define B_BE_SET_MUEDCATIMER_TF_1 BIT(5)
+#define B_BE_SET_MUEDCATIMER_TF_0 BIT(4)
+#define B_BE_MUEDCA_EN_0 BIT(0)
+
+#define R_BE_PORT_CFG_P0 0x10400
+#define R_BE_PORT_CFG_P0_C1 0x14400
+#define B_BE_BCN_ERLY_SORT_EN_P0 BIT(18)
+#define B_BE_PROHIB_END_CAL_EN_P0 BIT(17)
+#define B_BE_BRK_SETUP_P0 BIT(16)
+#define B_BE_TBTT_UPD_SHIFT_SEL_P0 BIT(15)
+#define B_BE_BCN_DROP_ALLOW_P0 BIT(14)
+#define B_BE_TBTT_PROHIB_EN_P0 BIT(13)
+#define B_BE_BCNTX_EN_P0 BIT(12)
+#define B_BE_NET_TYPE_P0_MASK GENMASK(11, 10)
+#define B_BE_BCN_FORCETX_EN_P0 BIT(9)
+#define B_BE_TXBCN_BTCCA_EN_P0 BIT(8)
+#define B_BE_BCNERR_CNT_EN_P0 BIT(7)
+#define B_BE_BCN_AGRES_P0 BIT(6)
+#define B_BE_TSFTR_RST_P0 BIT(5)
+#define B_BE_RX_BSSID_FIT_EN_P0 BIT(4)
+#define B_BE_TSF_UDT_EN_P0 BIT(3)
+#define B_BE_PORT_FUNC_EN_P0 BIT(2)
+#define B_BE_TXBCN_RPT_EN_P0 BIT(1)
+#define B_BE_RXBCN_RPT_EN_P0 BIT(0)
+
+#define R_BE_TBTT_PROHIB_P0 0x10404
+#define R_BE_TBTT_PROHIB_P0_C1 0x14404
+#define B_BE_TBTT_HOLD_P0_MASK GENMASK(27, 16)
+#define B_BE_TBTT_SETUP_P0_MASK GENMASK(7, 0)
+
+#define R_BE_BCN_AREA_P0 0x10408
+#define R_BE_BCN_AREA_P0_C1 0x14408
+#define B_BE_BCN_MSK_AREA_P0_MSK 0xfff
+#define B_BE_BCN_CTN_AREA_P0_MASK GENMASK(11, 0)
+
+#define R_BE_BCNERLYINT_CFG_P0 0x1040C
+#define R_BE_BCNERLYINT_CFG_P0_C1 0x1440C
+#define B_BE_BCNERLY_P0_MASK GENMASK(11, 0)
+
+#define R_BE_TBTTERLYINT_CFG_P0 0x1040E
+#define R_BE_TBTTERLYINT_CFG_P0_C1 0x1440E
+#define B_BE_TBTTERLY_P0_MASK GENMASK(11, 0)
+
+#define R_BE_TBTT_AGG_P0 0x10412
+#define R_BE_TBTT_AGG_P0_C1 0x14412
+#define B_BE_TBTT_AGG_NUM_P0_MASK GENMASK(15, 8)
+
+#define R_BE_BCN_SPACE_CFG_P0 0x10414
+#define R_BE_BCN_SPACE_CFG_P0_C1 0x14414
+#define B_BE_SUB_BCN_SPACE_P0_MASK GENMASK(23, 16)
+#define B_BE_BCN_SPACE_P0_MASK GENMASK(15, 0)
+
+#define R_BE_BCN_FORCETX_P0 0x10418
+#define R_BE_BCN_FORCETX_P0_C1 0x14418
+#define B_BE_FORCE_BCN_NUM_P0_MASK GENMASK(15, 8)
+#define B_BE_BCN_MAX_ERR_P0_MASK GENMASK(7, 0)
+
+#define R_BE_BCN_ERR_CNT_P0 0x10420
+#define R_BE_BCN_ERR_CNT_P0_C1 0x14420
+#define B_BE_BCN_ERR_CNT_SUM_P0_MASK GENMASK(31, 24)
+#define B_BE_BCN_ERR_CNT_NAV_P0_MASK GENMASK(23, 16)
+#define B_BE_BCN_ERR_CNT_EDCCA_P0_MASK GENMASK(15, 8)
+#define B_BE_BCN_ERR_CNT_CCA_P0_MASK GENMASK(7, 0)
+
+#define R_BE_BCN_ERR_FLAG_P0 0x10424
+#define R_BE_BCN_ERR_FLAG_P0_C1 0x14424
+#define B_BE_BCN_ERR_FLAG_SRCHEND_P0 BIT(3)
+#define B_BE_BCN_ERR_FLAG_INVALID_P0 BIT(2)
+#define B_BE_BCN_ERR_FLAG_CMP_P0 BIT(1)
+#define B_BE_BCN_ERR_FLAG_LOCK_P0 BIT(0)
+
+#define R_BE_DTIM_CTRL_P0 0x10426
+#define R_BE_DTIM_CTRL_P0_C1 0x14426
+#define B_BE_DTIM_NUM_P0_MASK GENMASK(15, 8)
+#define B_BE_DTIM_CURRCNT_P0_MASK GENMASK(7, 0)
+
+#define R_BE_TBTT_SHIFT_P0 0x10428
+#define R_BE_TBTT_SHIFT_P0_C1 0x14428
+#define B_BE_TBTT_SHIFT_OFST_P0_SH 0
+#define B_BE_TBTT_SHIFT_OFST_P0_MSK 0xfff
+
+#define R_BE_BCN_CNT_TMR_P0 0x10434
+#define R_BE_BCN_CNT_TMR_P0_C1 0x14434
+#define B_BE_BCN_CNT_TMR_P0_MASK GENMASK(31, 0)
+
+#define R_BE_TSFTR_LOW_P0 0x10438
+#define R_BE_TSFTR_LOW_P0_C1 0x14438
+#define B_BE_TSFTR_LOW_P0_MASK GENMASK(31, 0)
+
+#define R_BE_TSFTR_HIGH_P0 0x1043C
+#define R_BE_TSFTR_HIGH_P0_C1 0x1443C
+#define B_BE_TSFTR_HIGH_P0_MASK GENMASK(31, 0)
+
+#define R_BE_MBSSID_CTRL 0x10568
+#define R_BE_MBSSID_CTRL_C1 0x14568
+#define B_BE_MBSSID_MODE_SEL BIT(20)
+#define B_BE_P0MB_NUM_MASK GENMASK(19, 16)
+#define B_BE_P0MB15_EN BIT(15)
+#define B_BE_P0MB14_EN BIT(14)
+#define B_BE_P0MB13_EN BIT(13)
+#define B_BE_P0MB12_EN BIT(12)
+#define B_BE_P0MB11_EN BIT(11)
+#define B_BE_P0MB10_EN BIT(10)
+#define B_BE_P0MB9_EN BIT(9)
+#define B_BE_P0MB8_EN BIT(8)
+#define B_BE_P0MB7_EN BIT(7)
+#define B_BE_P0MB6_EN BIT(6)
+#define B_BE_P0MB5_EN BIT(5)
+#define B_BE_P0MB4_EN BIT(4)
+#define B_BE_P0MB3_EN BIT(3)
+#define B_BE_P0MB2_EN BIT(2)
+#define B_BE_P0MB1_EN BIT(1)
+
+#define R_BE_P0MB_HGQ_WINDOW_CFG_0 0x10590
+#define R_BE_P0MB_HGQ_WINDOW_CFG_0_C1 0x14590
+#define R_BE_PORT_HGQ_WINDOW_CFG 0x105A0
+#define R_BE_PORT_HGQ_WINDOW_CFG_C1 0x145A0
+
+#define R_BE_AGG_LEN_HT_0 0x10814
+#define R_BE_AGG_LEN_HT_0_C1 0x14814
+#define B_BE_AMPDU_MAX_LEN_HT_MASK GENMASK(31, 16)
+#define B_BE_RTS_TXTIME_TH_MASK GENMASK(15, 8)
+#define B_BE_RTS_LEN_TH_MASK GENMASK(7, 0)
+
+#define R_BE_MBSSID_DROP_0 0x1083C
+#define R_BE_MBSSID_DROP_0_C1 0x1483C
+#define B_BE_GI_LTF_FB_SEL BIT(30)
+#define B_BE_RATE_SEL_MASK GENMASK(29, 24)
+#define B_BE_PORT_DROP_4_0_MASK GENMASK(20, 16)
+#define B_BE_MBSSID_DROP_15_0_MASK GENMASK(15, 0)
+
+#define R_BE_PTCL_BSS_COLOR_0 0x108A0
+#define R_BE_PTCL_BSS_COLOR_0_C1 0x148A0
+#define B_BE_BSS_COLOB_BE_PORT_3_MASK GENMASK(29, 24)
+#define B_BE_BSS_COLOB_BE_PORT_2_MASK GENMASK(21, 16)
+#define B_BE_BSS_COLOB_BE_PORT_1_MASK GENMASK(13, 8)
+#define B_BE_BSS_COLOB_BE_PORT_0_MASK GENMASK(5, 0)
+
+#define R_BE_PTCL_BSS_COLOR_1 0x108A4
+#define R_BE_PTCL_BSS_COLOR_1_C1 0x148A4
+#define B_BE_BSS_COLOB_BE_PORT_4_MASK GENMASK(5, 0)
+
+#define R_BE_WMTX_MOREDATA_TSFT_STMP_CTL 0x10E08
+#define R_BE_WMTX_MOREDATA_TSFT_STMP_CTL_C1 0x14E08
+#define B_BE_TSFT_OFS_MASK GENMASK(31, 16)
+#define B_BE_STMP_THSD_MASK GENMASK(15, 8)
+#define B_BE_UPD_HGQMD BIT(1)
+#define B_BE_UPD_TIMIE BIT(0)
+
+#define R_BE_BFMEE_RESP_OPTION 0x11180
+#define R_BE_BFMEE_RESP_OPTION_C1 0x15180
+#define B_BE_BFMEE_CSI_SEC_TYPE_SH 20
+#define B_BE_BFMEE_CSI_SEC_TYPE_MSK 0xf
+#define B_BE_BFMEE_BFRPT_SEG_SIZE_SH 16
+#define B_BE_BFMEE_BFRPT_SEG_SIZE_MSK 0x3
+#define B_BE_BFMEE_MIMO_EN_SEL BIT(8)
+#define B_BE_BFMEE_MU_BFEE_DIS BIT(7)
+#define B_BE_BFMEE_CHECK_RPTPOLL_MACID_DIS BIT(6)
+#define B_BE_BFMEE_NOCHK_BFPOLL_BMP BIT(5)
+#define B_BE_BFMEE_VHTBFRPT_CHK BIT(4)
+#define B_BE_BFMEE_EHT_NDPA_EN BIT(3)
+#define B_BE_BFMEE_HE_NDPA_EN BIT(2)
+#define B_BE_BFMEE_VHT_NDPA_EN BIT(1)
+#define B_BE_BFMEE_HT_NDPA_EN BIT(0)
+
+#define R_BE_TRXPTCL_RESP_CSI_CTRL_0 0x11188
+#define R_BE_TRXPTCL_RESP_CSI_CTRL_0_C1 0x15188
+#define B_BE_BFMEE_CSISEQ_SEL BIT(29)
+#define B_BE_BFMEE_BFPARAM_SEL BIT(28)
+#define B_BE_BFMEE_OFDM_LEN_TH_MASK GENMASK(27, 24)
+#define B_BE_BFMEE_BF_PORT_SEL BIT(23)
+#define B_BE_BFMEE_USE_NSTS BIT(22)
+#define B_BE_BFMEE_CSI_RATE_FB_EN BIT(21)
+#define B_BE_BFMEE_CSI_GID_SEL BIT(20)
+#define B_BE_BFMEE_CSI_RSC_MASK GENMASK(19, 18)
+#define B_BE_BFMEE_CSI_FORCE_RETE_EN BIT(17)
+#define B_BE_BFMEE_CSI_USE_NDPARATE BIT(16)
+#define B_BE_BFMEE_CSI_WITHHTC_EN BIT(15)
+#define B_BE_BFMEE_CSIINFO0_BF_EN BIT(14)
+#define B_BE_BFMEE_CSIINFO0_STBC_EN BIT(13)
+#define B_BE_BFMEE_CSIINFO0_LDPC_EN BIT(12)
+#define B_BE_BFMEE_CSIINFO0_CS_MASK GENMASK(11, 10)
+#define B_BE_BFMEE_CSIINFO0_CB_MASK GENMASK(9, 8)
+#define B_BE_BFMEE_CSIINFO0_NG_MASK GENMASK(7, 6)
+#define B_BE_BFMEE_CSIINFO0_NR_MASK GENMASK(5, 3)
+#define B_BE_BFMEE_CSIINFO0_NC_MASK GENMASK(2, 0)
+#define CSI_RX_BW_CFG 0x1
+#define R_BE_TRXPTCL_RESP_CSI_CTRL_1 0x11194
+#define R_BE_TRXPTCL_RESP_CSI_CTRL_1_C1 0x15194
+#define B_BE_BFMEE_BE_CSI_RRSC_BITMAP_MASK GENMASK(31, 24)
+#define CSI_RRSC_BITMAP_CFG 0x2A
+
+#define R_BE_TRXPTCL_RESP_CSI_RRSC 0x1118C
+#define R_BE_TRXPTCL_RESP_CSI_RRSC_C1 0x1518C
+#define CSI_RRSC_BMAP_BE 0x2A2AFF
+
+#define R_BE_TRXPTCL_RESP_CSI_RATE 0x11190
+#define R_BE_TRXPTCL_RESP_CSI_RATE_C1 0x15190
+#define B_BE_BFMEE_EHT_CSI_RATE_MASK GENMASK(31, 24)
+#define B_BE_BFMEE_HE_CSI_RATE_MASK GENMASK(23, 16)
+#define B_BE_BFMEE_VHT_CSI_RATE_MASK GENMASK(15, 8)
+#define B_BE_BFMEE_HT_CSI_RATE_MASK GENMASK(7, 0)
+#define CSI_INIT_RATE_EHT 0x3
+
#define R_BE_RX_FLTR_OPT 0x11420
#define R_BE_RX_FLTR_OPT_C1 0x15420
#define B_BE_UID_FILTER_MASK GENMASK(31, 24)
@@ -3646,6 +4011,26 @@
#define B_BE_A_A1_MATCH BIT(1)
#define B_BE_SNIFFER_MODE BIT(0)
+#define R_BE_CSIRPT_OPTION 0x11464
+#define R_BE_CSIRPT_OPTION_C1 0x15464
+#define B_BE_CSIPRT_EHTSU_AID_EN BIT(26)
+#define B_BE_CSIPRT_HESU_AID_EN BIT(25)
+#define B_BE_CSIPRT_VHTSU_AID_EN BIT(24)
+
+#define R_BE_PWR_MODULE 0x11900
+#define R_BE_PWR_MODULE_C1 0x15900
+
+#define R_BE_PWR_RATE_OFST_CTRL 0x11A30
+#define R_BE_PWR_BY_RATE 0x11E00
+#define R_BE_PWR_BY_RATE_MAX 0x11FA8
+#define R_BE_PWR_LMT 0x11FAC
+#define R_BE_PWR_LMT_MAX 0x12040
+#define R_BE_PWR_RU_LMT 0x12048
+#define R_BE_PWR_RU_LMT_MAX 0x120E4
+
+#define CMAC1_START_ADDR_BE 0x14000
+#define CMAC1_END_ADDR_BE 0x17FFF
+
#define RR_MOD 0x00
#define RR_MOD_V1 0x10000
#define RR_MOD_IQK GENMASK(19, 4)
@@ -4413,12 +4798,20 @@
#define B_ANT_RX_1RCCA_SEG1 GENMASK(21, 18)
#define B_ANT_RX_1RCCA_SEG0 GENMASK(17, 14)
#define B_FC0_BW_INV GENMASK(6, 0)
+#define R_Q_MATRIX_00 0x497C
+#define B_Q_MATRIX_00_IMAGINARY GENMASK(15, 0)
+#define B_Q_MATRIX_00_REAL GENMASK(31, 16)
#define R_CHBW_MOD 0x4978
#define R_CHBW_MOD_V1 0x49C4
#define B_BT_SHARE BIT(14)
#define B_CHBW_MOD_SBW GENMASK(13, 12)
#define B_CHBW_MOD_PRICH GENMASK(11, 8)
#define B_ANT_RX_SEG0 GENMASK(3, 0)
+#define R_Q_MATRIX_11 0x4988
+#define B_Q_MATRIX_11_IMAGINARY GENMASK(15, 0)
+#define B_Q_MATRIX_11_REAL GENMASK(31, 16)
+#define R_CUSTOMIZE_Q_MATRIX 0x498C
+#define B_CUSTOMIZE_Q_MATRIX_EN BIT(0)
#define R_P0_RPL1 0x49B0
#define B_P0_RPL1_41_MASK GENMASK(31, 24)
#define B_P0_RPL1_40_MASK GENMASK(23, 16)
@@ -4539,6 +4932,8 @@
#define B_P0_TSSI_ALIM2 GENMASK(29, 0)
#define R_P0_TSSI_ALIM4 0x5640
#define R_TSSI_PA_K8 0x5644
+#define R_P0_TSSI_ADC_CLK 0x566c
+#define B_P0_TSSI_ADC_CLK GENMASK(17, 16)
#define R_UPD_CLK 0x5670
#define B_DAC_VAL BIT(31)
#define B_ACK_VAL GENMASK(30, 29)
@@ -4619,6 +5014,8 @@
#define R_TXGAIN_SCALE 0x58F0
#define B_TXGAIN_SCALE_EN BIT(19)
#define B_TXGAIN_SCALE_OFT GENMASK(31, 24)
+#define R_P0_DAC_COMP_POST_DPD_EN 0x58F8
+#define B_P0_DAC_COMP_POST_DPD_EN BIT(31)
#define R_P0_TSSI_BASE 0x5C00
#define R_S0_DACKI 0x5E00
#define B_S0_DACKI_AR GENMASK(31, 28)
@@ -4638,6 +5035,10 @@
#define B_S0_DACKQ7_K GENMASK(15, 8)
#define R_S0_DACKQ8 0x5E98
#define B_S0_DACKQ8_K GENMASK(15, 8)
+#define R_DCFO_WEIGHT_V1 0x6244
+#define B_DCFO_WEIGHT_MSK_V1 GENMASK(31, 28)
+#define R_DCFO_OPT_V1 0x6260
+#define B_DCFO_OPT_EN_V1 BIT(17)
#define R_RPL_BIAS_COMP1 0x6DF0
#define B_RPL_BIAS_COMP1_MASK GENMASK(7, 0)
#define R_P1_TSSI_ALIM1 0x7630
@@ -4649,6 +5050,8 @@
#define B_P1_TSSI_ALIM31 GENMASK(9, 0)
#define R_P1_TSSI_ALIM2 0x763c
#define B_P1_TSSI_ALIM2 GENMASK(29, 0)
+#define R_P1_TSSI_ADC_CLK 0x766c
+#define B_P1_TSSI_ADC_CLK GENMASK(17, 16)
#define R_P1_TSSIC 0x7814
#define B_P1_TSSIC_BYPASS BIT(11)
#define R_P1_TMETER 0x7810
@@ -4675,6 +5078,8 @@
#define B_P1_TSSI_MV_MIX GENMASK(19, 11)
#define B_P1_TSSI_MV_AVG GENMASK(13, 11)
#define B_P1_TSSI_MV_CLR BIT(14)
+#define R_P1_DAC_COMP_POST_DPD_EN 0x78F8
+#define B_P1_DAC_COMP_POST_DPD_EN BIT(31)
#define R_TSSI_THOF 0x7C00
#define R_S1_DACKI 0x7E00
#define B_S1_DACKI_AR GENMASK(31, 28)
diff --git a/drivers/net/wireless/realtek/rtw89/regd.c b/drivers/net/wireless/realtek/rtw89/regd.c
index 9e2328db1865..ca99422e600f 100644
--- a/drivers/net/wireless/realtek/rtw89/regd.c
+++ b/drivers/net/wireless/realtek/rtw89/regd.c
@@ -115,7 +115,7 @@ static const struct rtw89_regd rtw89_regd_map[] = {
COUNTRY_REGD("SG", RTW89_ETSI, RTW89_ETSI, RTW89_NA),
COUNTRY_REGD("LK", RTW89_ETSI, RTW89_ETSI, RTW89_NA),
COUNTRY_REGD("TW", RTW89_FCC, RTW89_FCC, RTW89_NA),
- COUNTRY_REGD("TH", RTW89_WW, RTW89_WW, RTW89_WW),
+ COUNTRY_REGD("TH", RTW89_ETSI, RTW89_ETSI, RTW89_THAILAND),
COUNTRY_REGD("VN", RTW89_ETSI, RTW89_ETSI, RTW89_NA),
COUNTRY_REGD("AU", RTW89_ACMA, RTW89_ACMA, RTW89_ACMA),
COUNTRY_REGD("NZ", RTW89_ACMA, RTW89_ACMA, RTW89_ACMA),
@@ -377,7 +377,7 @@ bottom:
return;
wiphy->bands[NL80211_BAND_6GHZ] = NULL;
- kfree(sband->iftype_data);
+ kfree((__force void *)sband->iftype_data);
kfree(sband);
}
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8851b.c b/drivers/net/wireless/realtek/rtw89/rtw8851b.c
index 103893f28b51..50522ff85003 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8851b.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8851b.c
@@ -1704,10 +1704,11 @@ static void rtw8851b_set_tx_shape(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan,
enum rtw89_phy_idx phy_idx)
{
+ const struct rtw89_rfe_parms *rfe_parms = rtwdev->rfe_parms;
u8 band = chan->band_type;
u8 regd = rtw89_regd_get(rtwdev, band);
- u8 tx_shape_cck = rtw89_8851b_tx_shape[band][RTW89_RS_CCK][regd];
- u8 tx_shape_ofdm = rtw89_8851b_tx_shape[band][RTW89_RS_OFDM][regd];
+ u8 tx_shape_cck = (*rfe_parms->tx_shape.lmt)[band][RTW89_RS_CCK][regd];
+ u8 tx_shape_ofdm = (*rfe_parms->tx_shape.lmt)[band][RTW89_RS_OFDM][regd];
if (band == RTW89_BAND_2G)
rtw8851b_bb_set_tx_shape_dfir(rtwdev, chan, tx_shape_cck, phy_idx);
@@ -1778,14 +1779,15 @@ rtw8851b_init_txpwr_unit(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx)
return 0;
}
-static void rtw8851b_bb_ctrl_btc_preagc(struct rtw89_dev *rtwdev, bool bt_en)
+static void rtw8851b_ctrl_nbtg_bt_tx(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx)
{
const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
- rtw89_phy_write_reg3_tbl(rtwdev, bt_en ? &rtw8851b_btc_preagc_en_defs_tbl :
+ rtw89_phy_write_reg3_tbl(rtwdev, en ? &rtw8851b_btc_preagc_en_defs_tbl :
&rtw8851b_btc_preagc_dis_defs_tbl);
- if (!bt_en) {
+ if (!en) {
if (chan->band_type == RTW89_BAND_2G) {
rtw89_phy_write32_mask(rtwdev, R_PATH0_G_LNA6_OP1DB_V1,
B_PATH0_G_LNA6_OP1DB_V1, 0x20);
@@ -1800,11 +1802,12 @@ static void rtw8851b_bb_ctrl_btc_preagc(struct rtw89_dev *rtwdev, bool bt_en)
}
}
-static void rtw8851b_ctrl_btg(struct rtw89_dev *rtwdev, bool btg)
+static void rtw8851b_ctrl_btg_bt_rx(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx)
{
const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
- if (btg) {
+ if (en) {
rtw89_phy_write32_mask(rtwdev, R_PATH0_BT_SHARE_V1,
B_PATH0_BT_SHARE_V1, 0x1);
rtw89_phy_write32_mask(rtwdev, R_PATH0_BTG_PATH_V1,
@@ -2280,6 +2283,7 @@ static int rtw8851b_mac_disable_bb_rf(struct rtw89_dev *rtwdev)
static const struct rtw89_chip_ops rtw8851b_chip_ops = {
.enable_bb_rf = rtw8851b_mac_enable_bb_rf,
.disable_bb_rf = rtw8851b_mac_disable_bb_rf,
+ .bb_preinit = NULL,
.bb_reset = rtw8851b_bb_reset,
.bb_sethw = rtw8851b_bb_sethw,
.read_rf = rtw89_phy_read_rf_v1,
@@ -2300,9 +2304,9 @@ static const struct rtw89_chip_ops rtw8851b_chip_ops = {
.set_txpwr_ctrl = rtw8851b_set_txpwr_ctrl,
.init_txpwr_unit = rtw8851b_init_txpwr_unit,
.get_thermal = rtw8851b_get_thermal,
- .ctrl_btg = rtw8851b_ctrl_btg,
+ .ctrl_btg_bt_rx = rtw8851b_ctrl_btg_bt_rx,
.query_ppdu = rtw8851b_query_ppdu,
- .bb_ctrl_btc_preagc = rtw8851b_bb_ctrl_btc_preagc,
+ .ctrl_nbtg_bt_tx = rtw8851b_ctrl_nbtg_bt_tx,
.cfg_txrx_path = rtw8851b_bb_cfg_txrx_path,
.set_txpwr_ul_tb_offset = rtw8851b_set_txpwr_ul_tb_offset,
.pwr_on_func = rtw8851b_pwr_on_func,
@@ -2345,6 +2349,7 @@ const struct rtw89_chip_info rtw8851b_chip_info = {
.fw_basename = RTW8851B_FW_BASENAME,
.fw_format_max = RTW8851B_FW_FORMAT_MAX,
.try_ce_fw = true,
+ .bbmcu_nr = 0,
.needed_fw_elms = 0,
.fifo_size = 196608,
.small_fifo_size = true,
@@ -2364,7 +2369,6 @@ const struct rtw89_chip_info rtw8851b_chip_info = {
.rf_table = {&rtw89_8851b_phy_radioa_table,},
.nctl_table = &rtw89_8851b_phy_nctl_table,
.nctl_post_table = &rtw8851b_nctl_post_defs_tbl,
- .byr_table = &rtw89_8851b_byr_table,
.dflt_parms = &rtw89_8851b_dflt_parms,
.rfe_parms_conf = rtw89_8851b_rfe_parms_conf,
.txpwr_factor_rf = 2,
@@ -2377,7 +2381,8 @@ const struct rtw89_chip_info rtw8851b_chip_info = {
BIT(NL80211_BAND_5GHZ),
.support_bw160 = false,
.support_unii4 = true,
- .support_ul_tb_ctrl = true,
+ .ul_tb_waveform_ctrl = true,
+ .ul_tb_pwr_diff = false,
.hw_sec_hdr = false,
.rf_path_num = 1,
.tx_nss = 1,
@@ -2419,6 +2424,7 @@ const struct rtw89_chip_info rtw8851b_chip_info = {
.hci_func_en_addr = R_AX_HCI_FUNC_EN,
.h2c_desc_size = sizeof(struct rtw89_txwd_body),
.txwd_body_size = sizeof(struct rtw89_txwd_body),
+ .txwd_info_size = sizeof(struct rtw89_txwd_info),
.h2c_ctrl_reg = R_AX_H2CREG_CTRL,
.h2c_counter_reg = {R_AX_UDM1 + 1, B_AX_UDM1_HALMAC_H2C_DEQ_CNT_MASK >> 8},
.h2c_regs = rtw8851b_h2c_regs,
@@ -2432,6 +2438,7 @@ const struct rtw89_chip_info rtw8851b_chip_info = {
.dcfo_comp_sft = 12,
.imr_info = &rtw8851b_imr_info,
.rrsr_cfgs = &rtw8851b_rrsr_cfgs,
+ .bss_clr_vld = {R_BSS_CLR_MAP_V1, B_BSS_CLR_MAP_VLD0},
.bss_clr_map_reg = R_BSS_CLR_MAP_V1,
.dma_ch_mask = BIT(RTW89_DMA_ACH4) | BIT(RTW89_DMA_ACH5) |
BIT(RTW89_DMA_ACH6) | BIT(RTW89_DMA_ACH7) |
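
One syntactic note on the rtw8851b_set_tx_shape() hunk in this file: rfe_parms->tx_shape.lmt is a pointer to the whole 3-D tx-shape table, so the lookup is written (*rfe_parms->tx_shape.lmt)[band][RTW89_RS_*][regd]. Below is a small stand-alone sketch of that pointer-to-array indexing with toy dimensions and invented names; the real table is rtw89_8851b_tx_shape_lmt in rtw8851b_table.c further down.

#include <stdio.h>

#define BAND_NUM	2
#define RS_TX_SHAPE_NUM	2
#define REGD_NUM	8

/* toy stand-in for the driver's tx-shape limit table */
static const unsigned char tx_shape_lmt[BAND_NUM][RS_TX_SHAPE_NUM][REGD_NUM] = {
	[1][1][4] = 3,	/* e.g. 5 GHz / OFDM / one regulatory domain */
};

struct toy_rfe_parms {
	/* pointer to the whole 3-D array, mirroring rfe_parms->tx_shape.lmt */
	const unsigned char (*lmt)[BAND_NUM][RS_TX_SHAPE_NUM][REGD_NUM];
};

int main(void)
{
	struct toy_rfe_parms parms = { .lmt = &tx_shape_lmt };
	unsigned int band = 1, rs = 1, regd = 4;

	/* dereference the array pointer first, then index into it */
	printf("tx shape = %u\n", (*parms.lmt)[band][rs][regd]);
	return 0;
}

Keeping the table behind the rfe_parms pointer is what lets the per-RFE variants added in rtw8851b_table.c (lmt and lmt_ru) be swapped without touching the chip code.
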
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8851b_table.c b/drivers/net/wireless/realtek/rtw89/rtw8851b_table.c
index c447f91a4bd0..8cb5bde8f625 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8851b_table.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8851b_table.c
@@ -3247,12 +3247,50 @@ static const struct rtw89_reg2_def rtw89_8851b_phy_nctl_regs[] = {
static const struct rtw89_txpwr_byrate_cfg rtw89_8851b_txpwr_byrate[] = {
{ 0, 0, 0, 0, 4, 0x50505050, },
+ { 0, 0, 1, 0, 4, 0x58585858, },
+ { 0, 0, 1, 4, 4, 0x484c5054, },
+ { 0, 0, 2, 0, 4, 0x54585858, },
+ { 0, 0, 2, 4, 4, 0x44484c50, },
+ { 0, 0, 2, 8, 4, 0x34383c40, },
+ { 0, 0, 3, 0, 4, 0x58585858, },
+ { 0, 1, 2, 0, 4, 0x50545858, },
+ { 0, 1, 2, 4, 4, 0x4044484c, },
+ { 0, 1, 2, 8, 4, 0x3034383c, },
+ { 0, 1, 3, 0, 4, 0x50505050, },
+ { 0, 0, 4, 1, 4, 0x00000000, },
+ { 0, 0, 4, 0, 1, 0x00000000, },
+ { 1, 0, 1, 0, 4, 0x58585858, },
+ { 1, 0, 1, 4, 4, 0x484c5054, },
+ { 1, 0, 2, 0, 4, 0x54585858, },
+ { 1, 0, 2, 4, 4, 0x44484c50, },
+ { 1, 0, 2, 8, 4, 0x34383c40, },
+ { 1, 0, 3, 0, 4, 0x54585858, },
+ { 1, 1, 2, 0, 4, 0x54585858, },
+ { 1, 1, 2, 4, 4, 0x44484c50, },
+ { 1, 1, 2, 8, 4, 0x34383c40, },
+ { 1, 1, 3, 0, 4, 0x48484848, },
+ { 1, 0, 4, 0, 4, 0x00000000, },
+ { 2, 0, 1, 0, 4, 0x40404040, },
+ { 2, 0, 1, 4, 4, 0x383c4040, },
+ { 2, 0, 2, 0, 4, 0x40404040, },
+ { 2, 0, 2, 4, 4, 0x34383c40, },
+ { 2, 0, 2, 8, 4, 0x24282c30, },
+ { 2, 0, 3, 0, 4, 0x40404040, },
+ { 2, 1, 2, 0, 4, 0x40404040, },
+ { 2, 1, 2, 4, 4, 0x34383c40, },
+ { 2, 1, 2, 8, 4, 0x24282c30, },
+ { 2, 1, 3, 0, 4, 0x40404040, },
+ { 2, 0, 4, 0, 4, 0x00000000, },
+};
+
+static const struct rtw89_txpwr_byrate_cfg rtw89_8851b_txpwr_byrate_type2[] = {
+ { 0, 0, 0, 0, 4, 0x50505050, },
{ 0, 0, 1, 0, 4, 0x54585858, },
{ 0, 0, 1, 4, 4, 0x44484c50, },
{ 0, 0, 2, 0, 4, 0x50545858, },
{ 0, 0, 2, 4, 4, 0x4044484c, },
{ 0, 0, 2, 8, 4, 0x3034383c, },
- { 0, 0, 3, 0, 4, 0x50505050, },
+ { 0, 0, 3, 0, 4, 0x58585858, },
{ 0, 1, 2, 0, 4, 0x50545858, },
{ 0, 1, 2, 4, 4, 0x4044484c, },
{ 0, 1, 2, 8, 4, 0x3034383c, },
@@ -3264,7 +3302,7 @@ static const struct rtw89_txpwr_byrate_cfg rtw89_8851b_txpwr_byrate[] = {
{ 1, 0, 2, 0, 4, 0x54585858, },
{ 1, 0, 2, 4, 4, 0x44484c50, },
{ 1, 0, 2, 8, 4, 0x34383c40, },
- { 1, 0, 3, 0, 4, 0x40404040, },
+ { 1, 0, 3, 0, 4, 0x54585858, },
{ 1, 1, 2, 0, 4, 0x54585858, },
{ 1, 1, 2, 4, 4, 0x44484c50, },
{ 1, 1, 2, 8, 4, 0x34383c40, },
@@ -3321,8 +3359,9 @@ static const s8 _txpwr_track_delta_swingidx_2g_cck_a_p[] = {
0, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4
};
-const u8 rtw89_8851b_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
- [RTW89_REGD_NUM] = {
+static
+const u8 rtw89_8851b_tx_shape_lmt[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
+ [RTW89_REGD_NUM] = {
[0][0][RTW89_ACMA] = 0,
[0][0][RTW89_CN] = 0,
[0][0][RTW89_ETSI] = 0,
@@ -3342,14 +3381,34 @@ const u8 rtw89_8851b_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
[1][1][RTW89_ACMA] = 0,
[1][1][RTW89_CN] = 0,
[1][1][RTW89_ETSI] = 0,
- [1][1][RTW89_FCC] = 1,
- [1][1][RTW89_IC] = 1,
+ [1][1][RTW89_FCC] = 3,
+ [1][1][RTW89_IC] = 3,
[1][1][RTW89_KCC] = 0,
[1][1][RTW89_MKK] = 0,
[1][1][RTW89_UK] = 0,
};
static
+const u8 rtw89_8851b_tx_shape_lmt_ru[RTW89_BAND_NUM][RTW89_REGD_NUM] = {
+ [0][RTW89_ACMA] = 0,
+ [0][RTW89_CN] = 0,
+ [0][RTW89_ETSI] = 0,
+ [0][RTW89_FCC] = 3,
+ [0][RTW89_IC] = 3,
+ [0][RTW89_KCC] = 0,
+ [0][RTW89_MKK] = 0,
+ [0][RTW89_UK] = 0,
+ [1][RTW89_ACMA] = 0,
+ [1][RTW89_CN] = 0,
+ [1][RTW89_ETSI] = 0,
+ [1][RTW89_FCC] = 3,
+ [1][RTW89_IC] = 3,
+ [1][RTW89_KCC] = 0,
+ [1][RTW89_MKK] = 0,
+ [1][RTW89_UK] = 0,
+};
+
+static
const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[RTW89_RS_LMT_NUM][RTW89_BF_NUM]
[RTW89_REGD_NUM][RTW89_2G_CH_NUM] = {
@@ -3365,7 +3424,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_WW][9] = 58,
[0][0][0][0][RTW89_WW][10] = 58,
[0][0][0][0][RTW89_WW][11] = 58,
- [0][0][0][0][RTW89_WW][12] = 52,
+ [0][0][0][0][RTW89_WW][12] = 50,
[0][0][0][0][RTW89_WW][13] = 76,
[0][1][0][0][RTW89_WW][0] = 0,
[0][1][0][0][RTW89_WW][1] = 0,
@@ -3391,7 +3450,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_WW][7] = 58,
[1][0][0][0][RTW89_WW][8] = 58,
[1][0][0][0][RTW89_WW][9] = 58,
- [1][0][0][0][RTW89_WW][10] = 58,
+ [1][0][0][0][RTW89_WW][10] = 50,
[1][0][0][0][RTW89_WW][11] = 0,
[1][0][0][0][RTW89_WW][12] = 0,
[1][0][0][0][RTW89_WW][13] = 0,
@@ -3421,7 +3480,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_WW][9] = 60,
[0][0][1][0][RTW89_WW][10] = 60,
[0][0][1][0][RTW89_WW][11] = 60,
- [0][0][1][0][RTW89_WW][12] = 58,
+ [0][0][1][0][RTW89_WW][12] = 40,
[0][0][1][0][RTW89_WW][13] = 0,
[0][1][1][0][RTW89_WW][0] = 0,
[0][1][1][0][RTW89_WW][1] = 0,
@@ -3449,7 +3508,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_WW][9] = 60,
[0][0][2][0][RTW89_WW][10] = 60,
[0][0][2][0][RTW89_WW][11] = 60,
- [0][0][2][0][RTW89_WW][12] = 60,
+ [0][0][2][0][RTW89_WW][12] = 38,
[0][0][2][0][RTW89_WW][13] = 0,
[0][1][2][0][RTW89_WW][0] = 0,
[0][1][2][0][RTW89_WW][1] = 0,
@@ -3489,7 +3548,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_WW][7] = 58,
[1][0][2][0][RTW89_WW][8] = 58,
[1][0][2][0][RTW89_WW][9] = 58,
- [1][0][2][0][RTW89_WW][10] = 58,
+ [1][0][2][0][RTW89_WW][10] = 46,
[1][0][2][0][RTW89_WW][11] = 0,
[1][0][2][0][RTW89_WW][12] = 0,
[1][0][2][0][RTW89_WW][13] = 0,
@@ -3527,7 +3586,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][0] = 84,
[0][0][0][0][RTW89_KCC][0] = 68,
[0][0][0][0][RTW89_ACMA][0] = 58,
- [0][0][0][0][RTW89_CN][0] = 60,
+ [0][0][0][0][RTW89_CN][0] = 58,
[0][0][0][0][RTW89_UK][0] = 58,
[0][0][0][0][RTW89_FCC][1] = 84,
[0][0][0][0][RTW89_ETSI][1] = 58,
@@ -3535,7 +3594,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][1] = 84,
[0][0][0][0][RTW89_KCC][1] = 68,
[0][0][0][0][RTW89_ACMA][1] = 58,
- [0][0][0][0][RTW89_CN][1] = 60,
+ [0][0][0][0][RTW89_CN][1] = 58,
[0][0][0][0][RTW89_UK][1] = 58,
[0][0][0][0][RTW89_FCC][2] = 84,
[0][0][0][0][RTW89_ETSI][2] = 58,
@@ -3543,7 +3602,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][2] = 84,
[0][0][0][0][RTW89_KCC][2] = 68,
[0][0][0][0][RTW89_ACMA][2] = 58,
- [0][0][0][0][RTW89_CN][2] = 60,
+ [0][0][0][0][RTW89_CN][2] = 58,
[0][0][0][0][RTW89_UK][2] = 58,
[0][0][0][0][RTW89_FCC][3] = 84,
[0][0][0][0][RTW89_ETSI][3] = 58,
@@ -3551,7 +3610,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][3] = 84,
[0][0][0][0][RTW89_KCC][3] = 68,
[0][0][0][0][RTW89_ACMA][3] = 58,
- [0][0][0][0][RTW89_CN][3] = 60,
+ [0][0][0][0][RTW89_CN][3] = 58,
[0][0][0][0][RTW89_UK][3] = 58,
[0][0][0][0][RTW89_FCC][4] = 84,
[0][0][0][0][RTW89_ETSI][4] = 58,
@@ -3559,7 +3618,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][4] = 84,
[0][0][0][0][RTW89_KCC][4] = 68,
[0][0][0][0][RTW89_ACMA][4] = 58,
- [0][0][0][0][RTW89_CN][4] = 60,
+ [0][0][0][0][RTW89_CN][4] = 58,
[0][0][0][0][RTW89_UK][4] = 58,
[0][0][0][0][RTW89_FCC][5] = 84,
[0][0][0][0][RTW89_ETSI][5] = 58,
@@ -3567,7 +3626,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][5] = 84,
[0][0][0][0][RTW89_KCC][5] = 68,
[0][0][0][0][RTW89_ACMA][5] = 58,
- [0][0][0][0][RTW89_CN][5] = 60,
+ [0][0][0][0][RTW89_CN][5] = 58,
[0][0][0][0][RTW89_UK][5] = 58,
[0][0][0][0][RTW89_FCC][6] = 84,
[0][0][0][0][RTW89_ETSI][6] = 58,
@@ -3575,7 +3634,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][6] = 84,
[0][0][0][0][RTW89_KCC][6] = 68,
[0][0][0][0][RTW89_ACMA][6] = 58,
- [0][0][0][0][RTW89_CN][6] = 60,
+ [0][0][0][0][RTW89_CN][6] = 58,
[0][0][0][0][RTW89_UK][6] = 58,
[0][0][0][0][RTW89_FCC][7] = 84,
[0][0][0][0][RTW89_ETSI][7] = 58,
@@ -3583,7 +3642,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][7] = 84,
[0][0][0][0][RTW89_KCC][7] = 68,
[0][0][0][0][RTW89_ACMA][7] = 58,
- [0][0][0][0][RTW89_CN][7] = 60,
+ [0][0][0][0][RTW89_CN][7] = 58,
[0][0][0][0][RTW89_UK][7] = 58,
[0][0][0][0][RTW89_FCC][8] = 84,
[0][0][0][0][RTW89_ETSI][8] = 58,
@@ -3591,7 +3650,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][8] = 84,
[0][0][0][0][RTW89_KCC][8] = 68,
[0][0][0][0][RTW89_ACMA][8] = 58,
- [0][0][0][0][RTW89_CN][8] = 60,
+ [0][0][0][0][RTW89_CN][8] = 58,
[0][0][0][0][RTW89_UK][8] = 58,
[0][0][0][0][RTW89_FCC][9] = 84,
[0][0][0][0][RTW89_ETSI][9] = 58,
@@ -3599,7 +3658,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][9] = 84,
[0][0][0][0][RTW89_KCC][9] = 68,
[0][0][0][0][RTW89_ACMA][9] = 58,
- [0][0][0][0][RTW89_CN][9] = 60,
+ [0][0][0][0][RTW89_CN][9] = 58,
[0][0][0][0][RTW89_UK][9] = 58,
[0][0][0][0][RTW89_FCC][10] = 82,
[0][0][0][0][RTW89_ETSI][10] = 58,
@@ -3607,7 +3666,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][10] = 82,
[0][0][0][0][RTW89_KCC][10] = 68,
[0][0][0][0][RTW89_ACMA][10] = 58,
- [0][0][0][0][RTW89_CN][10] = 60,
+ [0][0][0][0][RTW89_CN][10] = 58,
[0][0][0][0][RTW89_UK][10] = 58,
[0][0][0][0][RTW89_FCC][11] = 62,
[0][0][0][0][RTW89_ETSI][11] = 58,
@@ -3615,7 +3674,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][11] = 62,
[0][0][0][0][RTW89_KCC][11] = 68,
[0][0][0][0][RTW89_ACMA][11] = 58,
- [0][0][0][0][RTW89_CN][11] = 60,
+ [0][0][0][0][RTW89_CN][11] = 58,
[0][0][0][0][RTW89_UK][11] = 58,
[0][0][0][0][RTW89_FCC][12] = 52,
[0][0][0][0][RTW89_ETSI][12] = 58,
@@ -3623,7 +3682,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][12] = 52,
[0][0][0][0][RTW89_KCC][12] = 68,
[0][0][0][0][RTW89_ACMA][12] = 58,
- [0][0][0][0][RTW89_CN][12] = 60,
+ [0][0][0][0][RTW89_CN][12] = 50,
[0][0][0][0][RTW89_UK][12] = 58,
[0][0][0][0][RTW89_FCC][13] = 127,
[0][0][0][0][RTW89_ETSI][13] = 127,
@@ -3767,7 +3826,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][2] = 127,
[1][0][0][0][RTW89_KCC][2] = 68,
[1][0][0][0][RTW89_ACMA][2] = 58,
- [1][0][0][0][RTW89_CN][2] = 60,
+ [1][0][0][0][RTW89_CN][2] = 58,
[1][0][0][0][RTW89_UK][2] = 58,
[1][0][0][0][RTW89_FCC][3] = 127,
[1][0][0][0][RTW89_ETSI][3] = 58,
@@ -3775,7 +3834,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][3] = 127,
[1][0][0][0][RTW89_KCC][3] = 68,
[1][0][0][0][RTW89_ACMA][3] = 58,
- [1][0][0][0][RTW89_CN][3] = 60,
+ [1][0][0][0][RTW89_CN][3] = 58,
[1][0][0][0][RTW89_UK][3] = 58,
[1][0][0][0][RTW89_FCC][4] = 127,
[1][0][0][0][RTW89_ETSI][4] = 58,
@@ -3783,7 +3842,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][4] = 127,
[1][0][0][0][RTW89_KCC][4] = 68,
[1][0][0][0][RTW89_ACMA][4] = 58,
- [1][0][0][0][RTW89_CN][4] = 60,
+ [1][0][0][0][RTW89_CN][4] = 58,
[1][0][0][0][RTW89_UK][4] = 58,
[1][0][0][0][RTW89_FCC][5] = 127,
[1][0][0][0][RTW89_ETSI][5] = 58,
@@ -3791,7 +3850,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][5] = 127,
[1][0][0][0][RTW89_KCC][5] = 68,
[1][0][0][0][RTW89_ACMA][5] = 58,
- [1][0][0][0][RTW89_CN][5] = 60,
+ [1][0][0][0][RTW89_CN][5] = 58,
[1][0][0][0][RTW89_UK][5] = 58,
[1][0][0][0][RTW89_FCC][6] = 127,
[1][0][0][0][RTW89_ETSI][6] = 58,
@@ -3799,7 +3858,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][6] = 127,
[1][0][0][0][RTW89_KCC][6] = 68,
[1][0][0][0][RTW89_ACMA][6] = 58,
- [1][0][0][0][RTW89_CN][6] = 60,
+ [1][0][0][0][RTW89_CN][6] = 58,
[1][0][0][0][RTW89_UK][6] = 58,
[1][0][0][0][RTW89_FCC][7] = 127,
[1][0][0][0][RTW89_ETSI][7] = 58,
@@ -3807,7 +3866,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][7] = 127,
[1][0][0][0][RTW89_KCC][7] = 68,
[1][0][0][0][RTW89_ACMA][7] = 58,
- [1][0][0][0][RTW89_CN][7] = 60,
+ [1][0][0][0][RTW89_CN][7] = 58,
[1][0][0][0][RTW89_UK][7] = 58,
[1][0][0][0][RTW89_FCC][8] = 127,
[1][0][0][0][RTW89_ETSI][8] = 58,
@@ -3815,7 +3874,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][8] = 127,
[1][0][0][0][RTW89_KCC][8] = 68,
[1][0][0][0][RTW89_ACMA][8] = 58,
- [1][0][0][0][RTW89_CN][8] = 60,
+ [1][0][0][0][RTW89_CN][8] = 58,
[1][0][0][0][RTW89_UK][8] = 58,
[1][0][0][0][RTW89_FCC][9] = 127,
[1][0][0][0][RTW89_ETSI][9] = 58,
@@ -3823,7 +3882,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][9] = 127,
[1][0][0][0][RTW89_KCC][9] = 68,
[1][0][0][0][RTW89_ACMA][9] = 58,
- [1][0][0][0][RTW89_CN][9] = 60,
+ [1][0][0][0][RTW89_CN][9] = 58,
[1][0][0][0][RTW89_UK][9] = 58,
[1][0][0][0][RTW89_FCC][10] = 127,
[1][0][0][0][RTW89_ETSI][10] = 58,
@@ -3831,7 +3890,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][10] = 127,
[1][0][0][0][RTW89_KCC][10] = 68,
[1][0][0][0][RTW89_ACMA][10] = 58,
- [1][0][0][0][RTW89_CN][10] = 60,
+ [1][0][0][0][RTW89_CN][10] = 50,
[1][0][0][0][RTW89_UK][10] = 58,
[1][0][0][0][RTW89_FCC][11] = 127,
[1][0][0][0][RTW89_ETSI][11] = 127,
@@ -4071,7 +4130,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][12] = 64,
[0][0][1][0][RTW89_KCC][12] = 74,
[0][0][1][0][RTW89_ACMA][12] = 58,
- [0][0][1][0][RTW89_CN][12] = 60,
+ [0][0][1][0][RTW89_CN][12] = 40,
[0][0][1][0][RTW89_UK][12] = 58,
[0][0][1][0][RTW89_FCC][13] = 127,
[0][0][1][0][RTW89_ETSI][13] = 127,
@@ -4295,7 +4354,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][12] = 70,
[0][0][2][0][RTW89_KCC][12] = 78,
[0][0][2][0][RTW89_ACMA][12] = 60,
- [0][0][2][0][RTW89_CN][12] = 60,
+ [0][0][2][0][RTW89_CN][12] = 38,
[0][0][2][0][RTW89_UK][12] = 60,
[0][0][2][0][RTW89_FCC][13] = 127,
[0][0][2][0][RTW89_ETSI][13] = 127,
@@ -4551,7 +4610,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][2] = 72,
[1][0][2][0][RTW89_KCC][2] = 80,
[1][0][2][0][RTW89_ACMA][2] = 58,
- [1][0][2][0][RTW89_CN][2] = 60,
+ [1][0][2][0][RTW89_CN][2] = 58,
[1][0][2][0][RTW89_UK][2] = 58,
[1][0][2][0][RTW89_FCC][3] = 72,
[1][0][2][0][RTW89_ETSI][3] = 58,
@@ -4559,7 +4618,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][3] = 72,
[1][0][2][0][RTW89_KCC][3] = 80,
[1][0][2][0][RTW89_ACMA][3] = 58,
- [1][0][2][0][RTW89_CN][3] = 60,
+ [1][0][2][0][RTW89_CN][3] = 58,
[1][0][2][0][RTW89_UK][3] = 58,
[1][0][2][0][RTW89_FCC][4] = 76,
[1][0][2][0][RTW89_ETSI][4] = 58,
@@ -4567,7 +4626,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][4] = 76,
[1][0][2][0][RTW89_KCC][4] = 80,
[1][0][2][0][RTW89_ACMA][4] = 58,
- [1][0][2][0][RTW89_CN][4] = 60,
+ [1][0][2][0][RTW89_CN][4] = 58,
[1][0][2][0][RTW89_UK][4] = 58,
[1][0][2][0][RTW89_FCC][5] = 78,
[1][0][2][0][RTW89_ETSI][5] = 58,
@@ -4575,7 +4634,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][5] = 78,
[1][0][2][0][RTW89_KCC][5] = 80,
[1][0][2][0][RTW89_ACMA][5] = 58,
- [1][0][2][0][RTW89_CN][5] = 60,
+ [1][0][2][0][RTW89_CN][5] = 58,
[1][0][2][0][RTW89_UK][5] = 58,
[1][0][2][0][RTW89_FCC][6] = 78,
[1][0][2][0][RTW89_ETSI][6] = 58,
@@ -4583,7 +4642,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][6] = 78,
[1][0][2][0][RTW89_KCC][6] = 80,
[1][0][2][0][RTW89_ACMA][6] = 58,
- [1][0][2][0][RTW89_CN][6] = 60,
+ [1][0][2][0][RTW89_CN][6] = 58,
[1][0][2][0][RTW89_UK][6] = 58,
[1][0][2][0][RTW89_FCC][7] = 78,
[1][0][2][0][RTW89_ETSI][7] = 58,
@@ -4591,7 +4650,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][7] = 78,
[1][0][2][0][RTW89_KCC][7] = 80,
[1][0][2][0][RTW89_ACMA][7] = 58,
- [1][0][2][0][RTW89_CN][7] = 60,
+ [1][0][2][0][RTW89_CN][7] = 58,
[1][0][2][0][RTW89_UK][7] = 58,
[1][0][2][0][RTW89_FCC][8] = 78,
[1][0][2][0][RTW89_ETSI][8] = 58,
@@ -4599,7 +4658,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][8] = 78,
[1][0][2][0][RTW89_KCC][8] = 78,
[1][0][2][0][RTW89_ACMA][8] = 58,
- [1][0][2][0][RTW89_CN][8] = 60,
+ [1][0][2][0][RTW89_CN][8] = 58,
[1][0][2][0][RTW89_UK][8] = 58,
[1][0][2][0][RTW89_FCC][9] = 76,
[1][0][2][0][RTW89_ETSI][9] = 58,
@@ -4607,7 +4666,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][9] = 76,
[1][0][2][0][RTW89_KCC][9] = 78,
[1][0][2][0][RTW89_ACMA][9] = 58,
- [1][0][2][0][RTW89_CN][9] = 60,
+ [1][0][2][0][RTW89_CN][9] = 58,
[1][0][2][0][RTW89_UK][9] = 58,
[1][0][2][0][RTW89_FCC][10] = 70,
[1][0][2][0][RTW89_ETSI][10] = 58,
@@ -4615,7 +4674,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][10] = 70,
[1][0][2][0][RTW89_KCC][10] = 78,
[1][0][2][0][RTW89_ACMA][10] = 58,
- [1][0][2][0][RTW89_CN][10] = 60,
+ [1][0][2][0][RTW89_CN][10] = 46,
[1][0][2][0][RTW89_UK][10] = 58,
[1][0][2][0][RTW89_FCC][11] = 127,
[1][0][2][0][RTW89_ETSI][11] = 127,
@@ -4896,9 +4955,9 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_WW][42] = 30,
[0][0][1][0][RTW89_WW][44] = 30,
[0][0][1][0][RTW89_WW][46] = 30,
- [0][0][1][0][RTW89_WW][48] = 68,
- [0][0][1][0][RTW89_WW][50] = 68,
- [0][0][1][0][RTW89_WW][52] = 68,
+ [0][0][1][0][RTW89_WW][48] = 72,
+ [0][0][1][0][RTW89_WW][50] = 72,
+ [0][0][1][0][RTW89_WW][52] = 72,
[0][1][1][0][RTW89_WW][0] = 0,
[0][1][1][0][RTW89_WW][2] = 0,
[0][1][1][0][RTW89_WW][4] = 0,
@@ -4927,14 +4986,14 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_WW][48] = 0,
[0][1][1][0][RTW89_WW][50] = 0,
[0][1][1][0][RTW89_WW][52] = 0,
- [0][0][2][0][RTW89_WW][0] = 62,
- [0][0][2][0][RTW89_WW][2] = 62,
- [0][0][2][0][RTW89_WW][4] = 62,
+ [0][0][2][0][RTW89_WW][0] = 60,
+ [0][0][2][0][RTW89_WW][2] = 60,
+ [0][0][2][0][RTW89_WW][4] = 60,
[0][0][2][0][RTW89_WW][6] = 54,
- [0][0][2][0][RTW89_WW][8] = 62,
- [0][0][2][0][RTW89_WW][10] = 62,
- [0][0][2][0][RTW89_WW][12] = 62,
- [0][0][2][0][RTW89_WW][14] = 62,
+ [0][0][2][0][RTW89_WW][8] = 60,
+ [0][0][2][0][RTW89_WW][10] = 60,
+ [0][0][2][0][RTW89_WW][12] = 60,
+ [0][0][2][0][RTW89_WW][14] = 60,
[0][0][2][0][RTW89_WW][15] = 60,
[0][0][2][0][RTW89_WW][17] = 62,
[0][0][2][0][RTW89_WW][19] = 62,
@@ -4952,9 +5011,9 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_WW][42] = 30,
[0][0][2][0][RTW89_WW][44] = 30,
[0][0][2][0][RTW89_WW][46] = 30,
- [0][0][2][0][RTW89_WW][48] = 70,
- [0][0][2][0][RTW89_WW][50] = 72,
- [0][0][2][0][RTW89_WW][52] = 72,
+ [0][0][2][0][RTW89_WW][48] = 74,
+ [0][0][2][0][RTW89_WW][50] = 76,
+ [0][0][2][0][RTW89_WW][52] = 76,
[0][1][2][0][RTW89_WW][0] = 0,
[0][1][2][0][RTW89_WW][2] = 0,
[0][1][2][0][RTW89_WW][4] = 0,
@@ -5011,11 +5070,11 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_WW][48] = 0,
[0][1][2][1][RTW89_WW][50] = 0,
[0][1][2][1][RTW89_WW][52] = 0,
- [1][0][2][0][RTW89_WW][1] = 60,
+ [1][0][2][0][RTW89_WW][1] = 62,
[1][0][2][0][RTW89_WW][5] = 62,
- [1][0][2][0][RTW89_WW][9] = 64,
- [1][0][2][0][RTW89_WW][13] = 60,
- [1][0][2][0][RTW89_WW][16] = 62,
+ [1][0][2][0][RTW89_WW][9] = 62,
+ [1][0][2][0][RTW89_WW][13] = 62,
+ [1][0][2][0][RTW89_WW][16] = 66,
[1][0][2][0][RTW89_WW][20] = 66,
[1][0][2][0][RTW89_WW][24] = 66,
[1][0][2][0][RTW89_WW][28] = 66,
@@ -5023,8 +5082,8 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_WW][36] = 76,
[1][0][2][0][RTW89_WW][39] = 30,
[1][0][2][0][RTW89_WW][43] = 30,
- [1][0][2][0][RTW89_WW][47] = 80,
- [1][0][2][0][RTW89_WW][51] = 80,
+ [1][0][2][0][RTW89_WW][47] = 84,
+ [1][0][2][0][RTW89_WW][51] = 84,
[1][1][2][0][RTW89_WW][1] = 0,
[1][1][2][0][RTW89_WW][5] = 0,
[1][1][2][0][RTW89_WW][9] = 0,
@@ -5054,12 +5113,12 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_WW][47] = 0,
[1][1][2][1][RTW89_WW][51] = 0,
[2][0][2][0][RTW89_WW][3] = 60,
- [2][0][2][0][RTW89_WW][11] = 58,
- [2][0][2][0][RTW89_WW][18] = 62,
+ [2][0][2][0][RTW89_WW][11] = 56,
+ [2][0][2][0][RTW89_WW][18] = 64,
[2][0][2][0][RTW89_WW][26] = 64,
[2][0][2][0][RTW89_WW][34] = 72,
[2][0][2][0][RTW89_WW][41] = 30,
- [2][0][2][0][RTW89_WW][49] = 70,
+ [2][0][2][0][RTW89_WW][49] = 74,
[2][1][2][0][RTW89_WW][3] = 0,
[2][1][2][0][RTW89_WW][11] = 0,
[2][1][2][0][RTW89_WW][18] = 0,
@@ -5074,8 +5133,8 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_WW][34] = 0,
[2][1][2][1][RTW89_WW][41] = 0,
[2][1][2][1][RTW89_WW][49] = 0,
- [3][0][2][0][RTW89_WW][7] = 58,
- [3][0][2][0][RTW89_WW][22] = 58,
+ [3][0][2][0][RTW89_WW][7] = 0,
+ [3][0][2][0][RTW89_WW][22] = 0,
[3][0][2][0][RTW89_WW][45] = 0,
[3][1][2][0][RTW89_WW][7] = 0,
[3][1][2][0][RTW89_WW][22] = 0,
@@ -5083,7 +5142,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_WW][7] = 0,
[3][1][2][1][RTW89_WW][22] = 0,
[3][1][2][1][RTW89_WW][45] = 0,
- [0][0][1][0][RTW89_FCC][0] = 76,
+ [0][0][1][0][RTW89_FCC][0] = 80,
[0][0][1][0][RTW89_ETSI][0] = 58,
[0][0][1][0][RTW89_MKK][0] = 60,
[0][0][1][0][RTW89_IC][0] = 62,
@@ -5139,7 +5198,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_ACMA][12] = 58,
[0][0][1][0][RTW89_CN][12] = 60,
[0][0][1][0][RTW89_UK][12] = 58,
- [0][0][1][0][RTW89_FCC][14] = 74,
+ [0][0][1][0][RTW89_FCC][14] = 78,
[0][0][1][0][RTW89_ETSI][14] = 58,
[0][0][1][0][RTW89_MKK][14] = 60,
[0][0][1][0][RTW89_IC][14] = 64,
@@ -5147,10 +5206,10 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_ACMA][14] = 58,
[0][0][1][0][RTW89_CN][14] = 60,
[0][0][1][0][RTW89_UK][14] = 58,
- [0][0][1][0][RTW89_FCC][15] = 74,
+ [0][0][1][0][RTW89_FCC][15] = 78,
[0][0][1][0][RTW89_ETSI][15] = 58,
[0][0][1][0][RTW89_MKK][15] = 78,
- [0][0][1][0][RTW89_IC][15] = 74,
+ [0][0][1][0][RTW89_IC][15] = 78,
[0][0][1][0][RTW89_KCC][15] = 78,
[0][0][1][0][RTW89_ACMA][15] = 58,
[0][0][1][0][RTW89_CN][15] = 127,
@@ -5227,10 +5286,10 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_ACMA][33] = 60,
[0][0][1][0][RTW89_CN][33] = 127,
[0][0][1][0][RTW89_UK][33] = 60,
- [0][0][1][0][RTW89_FCC][35] = 68,
+ [0][0][1][0][RTW89_FCC][35] = 72,
[0][0][1][0][RTW89_ETSI][35] = 60,
[0][0][1][0][RTW89_MKK][35] = 78,
- [0][0][1][0][RTW89_IC][35] = 68,
+ [0][0][1][0][RTW89_IC][35] = 72,
[0][0][1][0][RTW89_KCC][35] = 74,
[0][0][1][0][RTW89_ACMA][35] = 60,
[0][0][1][0][RTW89_CN][35] = 127,
@@ -5249,7 +5308,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][38] = 82,
[0][0][1][0][RTW89_KCC][38] = 70,
[0][0][1][0][RTW89_ACMA][38] = 78,
- [0][0][1][0][RTW89_CN][38] = 78,
+ [0][0][1][0][RTW89_CN][38] = 74,
[0][0][1][0][RTW89_UK][38] = 58,
[0][0][1][0][RTW89_FCC][40] = 82,
[0][0][1][0][RTW89_ETSI][40] = 30,
@@ -5257,7 +5316,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][40] = 82,
[0][0][1][0][RTW89_KCC][40] = 76,
[0][0][1][0][RTW89_ACMA][40] = 78,
- [0][0][1][0][RTW89_CN][40] = 78,
+ [0][0][1][0][RTW89_CN][40] = 74,
[0][0][1][0][RTW89_UK][40] = 58,
[0][0][1][0][RTW89_FCC][42] = 82,
[0][0][1][0][RTW89_ETSI][42] = 30,
@@ -5265,7 +5324,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][42] = 82,
[0][0][1][0][RTW89_KCC][42] = 76,
[0][0][1][0][RTW89_ACMA][42] = 78,
- [0][0][1][0][RTW89_CN][42] = 78,
+ [0][0][1][0][RTW89_CN][42] = 74,
[0][0][1][0][RTW89_UK][42] = 58,
[0][0][1][0][RTW89_FCC][44] = 82,
[0][0][1][0][RTW89_ETSI][44] = 30,
@@ -5273,7 +5332,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][44] = 82,
[0][0][1][0][RTW89_KCC][44] = 76,
[0][0][1][0][RTW89_ACMA][44] = 78,
- [0][0][1][0][RTW89_CN][44] = 78,
+ [0][0][1][0][RTW89_CN][44] = 58,
[0][0][1][0][RTW89_UK][44] = 58,
[0][0][1][0][RTW89_FCC][46] = 82,
[0][0][1][0][RTW89_ETSI][46] = 30,
@@ -5281,9 +5340,9 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][46] = 82,
[0][0][1][0][RTW89_KCC][46] = 76,
[0][0][1][0][RTW89_ACMA][46] = 78,
- [0][0][1][0][RTW89_CN][46] = 78,
+ [0][0][1][0][RTW89_CN][46] = 58,
[0][0][1][0][RTW89_UK][46] = 58,
- [0][0][1][0][RTW89_FCC][48] = 68,
+ [0][0][1][0][RTW89_FCC][48] = 72,
[0][0][1][0][RTW89_ETSI][48] = 127,
[0][0][1][0][RTW89_MKK][48] = 127,
[0][0][1][0][RTW89_IC][48] = 127,
@@ -5291,7 +5350,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_ACMA][48] = 127,
[0][0][1][0][RTW89_CN][48] = 127,
[0][0][1][0][RTW89_UK][48] = 127,
- [0][0][1][0][RTW89_FCC][50] = 68,
+ [0][0][1][0][RTW89_FCC][50] = 72,
[0][0][1][0][RTW89_ETSI][50] = 127,
[0][0][1][0][RTW89_MKK][50] = 127,
[0][0][1][0][RTW89_IC][50] = 127,
@@ -5299,7 +5358,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_ACMA][50] = 127,
[0][0][1][0][RTW89_CN][50] = 127,
[0][0][1][0][RTW89_UK][50] = 127,
- [0][0][1][0][RTW89_FCC][52] = 68,
+ [0][0][1][0][RTW89_FCC][52] = 72,
[0][0][1][0][RTW89_ETSI][52] = 127,
[0][0][1][0][RTW89_MKK][52] = 127,
[0][0][1][0][RTW89_IC][52] = 127,
@@ -5531,13 +5590,13 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_ACMA][52] = 127,
[0][1][1][0][RTW89_CN][52] = 127,
[0][1][1][0][RTW89_UK][52] = 127,
- [0][0][2][0][RTW89_FCC][0] = 74,
+ [0][0][2][0][RTW89_FCC][0] = 78,
[0][0][2][0][RTW89_ETSI][0] = 62,
[0][0][2][0][RTW89_MKK][0] = 62,
[0][0][2][0][RTW89_IC][0] = 64,
[0][0][2][0][RTW89_KCC][0] = 76,
[0][0][2][0][RTW89_ACMA][0] = 62,
- [0][0][2][0][RTW89_CN][0] = 62,
+ [0][0][2][0][RTW89_CN][0] = 60,
[0][0][2][0][RTW89_UK][0] = 62,
[0][0][2][0][RTW89_FCC][2] = 82,
[0][0][2][0][RTW89_ETSI][2] = 62,
@@ -5545,7 +5604,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][2] = 64,
[0][0][2][0][RTW89_KCC][2] = 76,
[0][0][2][0][RTW89_ACMA][2] = 62,
- [0][0][2][0][RTW89_CN][2] = 62,
+ [0][0][2][0][RTW89_CN][2] = 60,
[0][0][2][0][RTW89_UK][2] = 62,
[0][0][2][0][RTW89_FCC][4] = 82,
[0][0][2][0][RTW89_ETSI][4] = 62,
@@ -5553,7 +5612,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][4] = 64,
[0][0][2][0][RTW89_KCC][4] = 76,
[0][0][2][0][RTW89_ACMA][4] = 62,
- [0][0][2][0][RTW89_CN][4] = 62,
+ [0][0][2][0][RTW89_CN][4] = 60,
[0][0][2][0][RTW89_UK][4] = 62,
[0][0][2][0][RTW89_FCC][6] = 82,
[0][0][2][0][RTW89_ETSI][6] = 62,
@@ -5561,7 +5620,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][6] = 64,
[0][0][2][0][RTW89_KCC][6] = 54,
[0][0][2][0][RTW89_ACMA][6] = 62,
- [0][0][2][0][RTW89_CN][6] = 62,
+ [0][0][2][0][RTW89_CN][6] = 60,
[0][0][2][0][RTW89_UK][6] = 62,
[0][0][2][0][RTW89_FCC][8] = 82,
[0][0][2][0][RTW89_ETSI][8] = 62,
@@ -5569,7 +5628,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][8] = 64,
[0][0][2][0][RTW89_KCC][8] = 76,
[0][0][2][0][RTW89_ACMA][8] = 62,
- [0][0][2][0][RTW89_CN][8] = 62,
+ [0][0][2][0][RTW89_CN][8] = 60,
[0][0][2][0][RTW89_UK][8] = 62,
[0][0][2][0][RTW89_FCC][10] = 82,
[0][0][2][0][RTW89_ETSI][10] = 62,
@@ -5577,7 +5636,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][10] = 64,
[0][0][2][0][RTW89_KCC][10] = 76,
[0][0][2][0][RTW89_ACMA][10] = 62,
- [0][0][2][0][RTW89_CN][10] = 62,
+ [0][0][2][0][RTW89_CN][10] = 60,
[0][0][2][0][RTW89_UK][10] = 62,
[0][0][2][0][RTW89_FCC][12] = 82,
[0][0][2][0][RTW89_ETSI][12] = 62,
@@ -5585,20 +5644,20 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][12] = 64,
[0][0][2][0][RTW89_KCC][12] = 78,
[0][0][2][0][RTW89_ACMA][12] = 62,
- [0][0][2][0][RTW89_CN][12] = 62,
+ [0][0][2][0][RTW89_CN][12] = 60,
[0][0][2][0][RTW89_UK][12] = 62,
- [0][0][2][0][RTW89_FCC][14] = 72,
+ [0][0][2][0][RTW89_FCC][14] = 76,
[0][0][2][0][RTW89_ETSI][14] = 62,
[0][0][2][0][RTW89_MKK][14] = 62,
[0][0][2][0][RTW89_IC][14] = 64,
[0][0][2][0][RTW89_KCC][14] = 78,
[0][0][2][0][RTW89_ACMA][14] = 62,
- [0][0][2][0][RTW89_CN][14] = 62,
+ [0][0][2][0][RTW89_CN][14] = 60,
[0][0][2][0][RTW89_UK][14] = 62,
- [0][0][2][0][RTW89_FCC][15] = 72,
+ [0][0][2][0][RTW89_FCC][15] = 76,
[0][0][2][0][RTW89_ETSI][15] = 60,
[0][0][2][0][RTW89_MKK][15] = 78,
- [0][0][2][0][RTW89_IC][15] = 72,
+ [0][0][2][0][RTW89_IC][15] = 76,
[0][0][2][0][RTW89_KCC][15] = 78,
[0][0][2][0][RTW89_ACMA][15] = 60,
[0][0][2][0][RTW89_CN][15] = 127,
@@ -5675,10 +5734,10 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_ACMA][33] = 62,
[0][0][2][0][RTW89_CN][33] = 127,
[0][0][2][0][RTW89_UK][33] = 62,
- [0][0][2][0][RTW89_FCC][35] = 68,
+ [0][0][2][0][RTW89_FCC][35] = 72,
[0][0][2][0][RTW89_ETSI][35] = 62,
[0][0][2][0][RTW89_MKK][35] = 78,
- [0][0][2][0][RTW89_IC][35] = 68,
+ [0][0][2][0][RTW89_IC][35] = 72,
[0][0][2][0][RTW89_KCC][35] = 74,
[0][0][2][0][RTW89_ACMA][35] = 62,
[0][0][2][0][RTW89_CN][35] = 127,
@@ -5697,7 +5756,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][38] = 82,
[0][0][2][0][RTW89_KCC][38] = 66,
[0][0][2][0][RTW89_ACMA][38] = 78,
- [0][0][2][0][RTW89_CN][38] = 78,
+ [0][0][2][0][RTW89_CN][38] = 70,
[0][0][2][0][RTW89_UK][38] = 60,
[0][0][2][0][RTW89_FCC][40] = 82,
[0][0][2][0][RTW89_ETSI][40] = 30,
@@ -5705,7 +5764,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][40] = 82,
[0][0][2][0][RTW89_KCC][40] = 74,
[0][0][2][0][RTW89_ACMA][40] = 78,
- [0][0][2][0][RTW89_CN][40] = 78,
+ [0][0][2][0][RTW89_CN][40] = 70,
[0][0][2][0][RTW89_UK][40] = 60,
[0][0][2][0][RTW89_FCC][42] = 82,
[0][0][2][0][RTW89_ETSI][42] = 30,
@@ -5713,7 +5772,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][42] = 82,
[0][0][2][0][RTW89_KCC][42] = 74,
[0][0][2][0][RTW89_ACMA][42] = 78,
- [0][0][2][0][RTW89_CN][42] = 78,
+ [0][0][2][0][RTW89_CN][42] = 70,
[0][0][2][0][RTW89_UK][42] = 60,
[0][0][2][0][RTW89_FCC][44] = 82,
[0][0][2][0][RTW89_ETSI][44] = 30,
@@ -5721,7 +5780,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][44] = 82,
[0][0][2][0][RTW89_KCC][44] = 74,
[0][0][2][0][RTW89_ACMA][44] = 78,
- [0][0][2][0][RTW89_CN][44] = 78,
+ [0][0][2][0][RTW89_CN][44] = 58,
[0][0][2][0][RTW89_UK][44] = 60,
[0][0][2][0][RTW89_FCC][46] = 82,
[0][0][2][0][RTW89_ETSI][46] = 30,
@@ -5729,9 +5788,9 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][46] = 82,
[0][0][2][0][RTW89_KCC][46] = 74,
[0][0][2][0][RTW89_ACMA][46] = 78,
- [0][0][2][0][RTW89_CN][46] = 78,
+ [0][0][2][0][RTW89_CN][46] = 58,
[0][0][2][0][RTW89_UK][46] = 60,
- [0][0][2][0][RTW89_FCC][48] = 70,
+ [0][0][2][0][RTW89_FCC][48] = 74,
[0][0][2][0][RTW89_ETSI][48] = 127,
[0][0][2][0][RTW89_MKK][48] = 127,
[0][0][2][0][RTW89_IC][48] = 127,
@@ -5739,7 +5798,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_ACMA][48] = 127,
[0][0][2][0][RTW89_CN][48] = 127,
[0][0][2][0][RTW89_UK][48] = 127,
- [0][0][2][0][RTW89_FCC][50] = 72,
+ [0][0][2][0][RTW89_FCC][50] = 76,
[0][0][2][0][RTW89_ETSI][50] = 127,
[0][0][2][0][RTW89_MKK][50] = 127,
[0][0][2][0][RTW89_IC][50] = 127,
@@ -5747,7 +5806,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_ACMA][50] = 127,
[0][0][2][0][RTW89_CN][50] = 127,
[0][0][2][0][RTW89_UK][50] = 127,
- [0][0][2][0][RTW89_FCC][52] = 72,
+ [0][0][2][0][RTW89_FCC][52] = 76,
[0][0][2][0][RTW89_ETSI][52] = 127,
[0][0][2][0][RTW89_MKK][52] = 127,
[0][0][2][0][RTW89_IC][52] = 127,
@@ -6203,13 +6262,13 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_ACMA][52] = 127,
[0][1][2][1][RTW89_CN][52] = 127,
[0][1][2][1][RTW89_UK][52] = 127,
- [1][0][2][0][RTW89_FCC][1] = 64,
+ [1][0][2][0][RTW89_FCC][1] = 68,
[1][0][2][0][RTW89_ETSI][1] = 64,
[1][0][2][0][RTW89_MKK][1] = 64,
- [1][0][2][0][RTW89_IC][1] = 60,
+ [1][0][2][0][RTW89_IC][1] = 64,
[1][0][2][0][RTW89_KCC][1] = 74,
[1][0][2][0][RTW89_ACMA][1] = 64,
- [1][0][2][0][RTW89_CN][1] = 64,
+ [1][0][2][0][RTW89_CN][1] = 62,
[1][0][2][0][RTW89_UK][1] = 64,
[1][0][2][0][RTW89_FCC][5] = 82,
[1][0][2][0][RTW89_ETSI][5] = 64,
@@ -6217,7 +6276,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][5] = 64,
[1][0][2][0][RTW89_KCC][5] = 66,
[1][0][2][0][RTW89_ACMA][5] = 64,
- [1][0][2][0][RTW89_CN][5] = 64,
+ [1][0][2][0][RTW89_CN][5] = 62,
[1][0][2][0][RTW89_UK][5] = 64,
[1][0][2][0][RTW89_FCC][9] = 82,
[1][0][2][0][RTW89_ETSI][9] = 64,
@@ -6225,20 +6284,20 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][9] = 64,
[1][0][2][0][RTW89_KCC][9] = 78,
[1][0][2][0][RTW89_ACMA][9] = 64,
- [1][0][2][0][RTW89_CN][9] = 64,
+ [1][0][2][0][RTW89_CN][9] = 62,
[1][0][2][0][RTW89_UK][9] = 64,
- [1][0][2][0][RTW89_FCC][13] = 62,
+ [1][0][2][0][RTW89_FCC][13] = 66,
[1][0][2][0][RTW89_ETSI][13] = 64,
[1][0][2][0][RTW89_MKK][13] = 64,
- [1][0][2][0][RTW89_IC][13] = 60,
+ [1][0][2][0][RTW89_IC][13] = 64,
[1][0][2][0][RTW89_KCC][13] = 72,
[1][0][2][0][RTW89_ACMA][13] = 64,
- [1][0][2][0][RTW89_CN][13] = 64,
+ [1][0][2][0][RTW89_CN][13] = 62,
[1][0][2][0][RTW89_UK][13] = 64,
- [1][0][2][0][RTW89_FCC][16] = 62,
+ [1][0][2][0][RTW89_FCC][16] = 66,
[1][0][2][0][RTW89_ETSI][16] = 66,
[1][0][2][0][RTW89_MKK][16] = 80,
- [1][0][2][0][RTW89_IC][16] = 62,
+ [1][0][2][0][RTW89_IC][16] = 66,
[1][0][2][0][RTW89_KCC][16] = 74,
[1][0][2][0][RTW89_ACMA][16] = 66,
[1][0][2][0][RTW89_CN][16] = 127,
@@ -6246,7 +6305,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_FCC][20] = 80,
[1][0][2][0][RTW89_ETSI][20] = 66,
[1][0][2][0][RTW89_MKK][20] = 80,
- [1][0][2][0][RTW89_IC][20] = 76,
+ [1][0][2][0][RTW89_IC][20] = 80,
[1][0][2][0][RTW89_KCC][20] = 74,
[1][0][2][0][RTW89_ACMA][20] = 66,
[1][0][2][0][RTW89_CN][20] = 127,
@@ -6267,10 +6326,10 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_ACMA][28] = 127,
[1][0][2][0][RTW89_CN][28] = 127,
[1][0][2][0][RTW89_UK][28] = 66,
- [1][0][2][0][RTW89_FCC][32] = 72,
+ [1][0][2][0][RTW89_FCC][32] = 76,
[1][0][2][0][RTW89_ETSI][32] = 66,
[1][0][2][0][RTW89_MKK][32] = 80,
- [1][0][2][0][RTW89_IC][32] = 72,
+ [1][0][2][0][RTW89_IC][32] = 76,
[1][0][2][0][RTW89_KCC][32] = 78,
[1][0][2][0][RTW89_ACMA][32] = 66,
[1][0][2][0][RTW89_CN][32] = 127,
@@ -6286,10 +6345,10 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_FCC][39] = 84,
[1][0][2][0][RTW89_ETSI][39] = 30,
[1][0][2][0][RTW89_MKK][39] = 127,
- [1][0][2][0][RTW89_IC][39] = 80,
+ [1][0][2][0][RTW89_IC][39] = 84,
[1][0][2][0][RTW89_KCC][39] = 68,
[1][0][2][0][RTW89_ACMA][39] = 80,
- [1][0][2][0][RTW89_CN][39] = 70,
+ [1][0][2][0][RTW89_CN][39] = 60,
[1][0][2][0][RTW89_UK][39] = 64,
[1][0][2][0][RTW89_FCC][43] = 84,
[1][0][2][0][RTW89_ETSI][43] = 30,
@@ -6297,9 +6356,9 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][43] = 84,
[1][0][2][0][RTW89_KCC][43] = 78,
[1][0][2][0][RTW89_ACMA][43] = 80,
- [1][0][2][0][RTW89_CN][43] = 80,
+ [1][0][2][0][RTW89_CN][43] = 62,
[1][0][2][0][RTW89_UK][43] = 64,
- [1][0][2][0][RTW89_FCC][47] = 80,
+ [1][0][2][0][RTW89_FCC][47] = 84,
[1][0][2][0][RTW89_ETSI][47] = 127,
[1][0][2][0][RTW89_MKK][47] = 127,
[1][0][2][0][RTW89_IC][47] = 127,
@@ -6307,7 +6366,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_ACMA][47] = 127,
[1][0][2][0][RTW89_CN][47] = 127,
[1][0][2][0][RTW89_UK][47] = 127,
- [1][0][2][0][RTW89_FCC][51] = 80,
+ [1][0][2][0][RTW89_FCC][51] = 84,
[1][0][2][0][RTW89_ETSI][51] = 127,
[1][0][2][0][RTW89_MKK][51] = 127,
[1][0][2][0][RTW89_IC][51] = 127,
@@ -6539,26 +6598,26 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_ACMA][51] = 127,
[1][1][2][1][RTW89_CN][51] = 127,
[1][1][2][1][RTW89_UK][51] = 127,
- [2][0][2][0][RTW89_FCC][3] = 72,
+ [2][0][2][0][RTW89_FCC][3] = 76,
[2][0][2][0][RTW89_ETSI][3] = 64,
[2][0][2][0][RTW89_MKK][3] = 62,
- [2][0][2][0][RTW89_IC][3] = 60,
+ [2][0][2][0][RTW89_IC][3] = 64,
[2][0][2][0][RTW89_KCC][3] = 72,
[2][0][2][0][RTW89_ACMA][3] = 64,
- [2][0][2][0][RTW89_CN][3] = 64,
+ [2][0][2][0][RTW89_CN][3] = 60,
[2][0][2][0][RTW89_UK][3] = 64,
- [2][0][2][0][RTW89_FCC][11] = 60,
+ [2][0][2][0][RTW89_FCC][11] = 64,
[2][0][2][0][RTW89_ETSI][11] = 64,
[2][0][2][0][RTW89_MKK][11] = 64,
- [2][0][2][0][RTW89_IC][11] = 58,
+ [2][0][2][0][RTW89_IC][11] = 62,
[2][0][2][0][RTW89_KCC][11] = 72,
[2][0][2][0][RTW89_ACMA][11] = 64,
- [2][0][2][0][RTW89_CN][11] = 64,
+ [2][0][2][0][RTW89_CN][11] = 56,
[2][0][2][0][RTW89_UK][11] = 64,
- [2][0][2][0][RTW89_FCC][18] = 62,
+ [2][0][2][0][RTW89_FCC][18] = 66,
[2][0][2][0][RTW89_ETSI][18] = 64,
[2][0][2][0][RTW89_MKK][18] = 72,
- [2][0][2][0][RTW89_IC][18] = 62,
+ [2][0][2][0][RTW89_IC][18] = 66,
[2][0][2][0][RTW89_KCC][18] = 72,
[2][0][2][0][RTW89_ACMA][18] = 64,
[2][0][2][0][RTW89_CN][18] = 127,
@@ -6574,7 +6633,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_FCC][34] = 76,
[2][0][2][0][RTW89_ETSI][34] = 127,
[2][0][2][0][RTW89_MKK][34] = 72,
- [2][0][2][0][RTW89_IC][34] = 72,
+ [2][0][2][0][RTW89_IC][34] = 76,
[2][0][2][0][RTW89_KCC][34] = 72,
[2][0][2][0][RTW89_ACMA][34] = 72,
[2][0][2][0][RTW89_CN][34] = 127,
@@ -6582,12 +6641,12 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_FCC][41] = 76,
[2][0][2][0][RTW89_ETSI][41] = 30,
[2][0][2][0][RTW89_MKK][41] = 127,
- [2][0][2][0][RTW89_IC][41] = 72,
+ [2][0][2][0][RTW89_IC][41] = 76,
[2][0][2][0][RTW89_KCC][41] = 64,
[2][0][2][0][RTW89_ACMA][41] = 72,
- [2][0][2][0][RTW89_CN][41] = 72,
+ [2][0][2][0][RTW89_CN][41] = 40,
[2][0][2][0][RTW89_UK][41] = 64,
- [2][0][2][0][RTW89_FCC][49] = 70,
+ [2][0][2][0][RTW89_FCC][49] = 74,
[2][0][2][0][RTW89_ETSI][49] = 127,
[2][0][2][0][RTW89_MKK][49] = 127,
[2][0][2][0][RTW89_IC][49] = 127,
@@ -6713,7 +6772,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_IC][7] = 127,
[3][0][2][0][RTW89_KCC][7] = 127,
[3][0][2][0][RTW89_ACMA][7] = 127,
- [3][0][2][0][RTW89_CN][7] = 58,
+ [3][0][2][0][RTW89_CN][7] = 127,
[3][0][2][0][RTW89_UK][7] = 127,
[3][0][2][0][RTW89_FCC][22] = 127,
[3][0][2][0][RTW89_ETSI][22] = 127,
@@ -6721,7 +6780,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_IC][22] = 127,
[3][0][2][0][RTW89_KCC][22] = 127,
[3][0][2][0][RTW89_ACMA][22] = 127,
- [3][0][2][0][RTW89_CN][22] = 58,
+ [3][0][2][0][RTW89_CN][22] = 127,
[3][0][2][0][RTW89_UK][22] = 127,
[3][0][2][0][RTW89_FCC][45] = 127,
[3][0][2][0][RTW89_ETSI][45] = 127,
@@ -6798,19 +6857,19 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_WW][11] = 30,
[0][0][RTW89_WW][12] = 30,
[0][0][RTW89_WW][13] = 0,
- [0][1][RTW89_WW][0] = 20,
- [0][1][RTW89_WW][1] = 22,
- [0][1][RTW89_WW][2] = 22,
- [0][1][RTW89_WW][3] = 22,
- [0][1][RTW89_WW][4] = 22,
- [0][1][RTW89_WW][5] = 22,
- [0][1][RTW89_WW][6] = 22,
- [0][1][RTW89_WW][7] = 22,
- [0][1][RTW89_WW][8] = 22,
- [0][1][RTW89_WW][9] = 22,
- [0][1][RTW89_WW][10] = 22,
- [0][1][RTW89_WW][11] = 22,
- [0][1][RTW89_WW][12] = 20,
+ [0][1][RTW89_WW][0] = 0,
+ [0][1][RTW89_WW][1] = 0,
+ [0][1][RTW89_WW][2] = 0,
+ [0][1][RTW89_WW][3] = 0,
+ [0][1][RTW89_WW][4] = 0,
+ [0][1][RTW89_WW][5] = 0,
+ [0][1][RTW89_WW][6] = 0,
+ [0][1][RTW89_WW][7] = 0,
+ [0][1][RTW89_WW][8] = 0,
+ [0][1][RTW89_WW][9] = 0,
+ [0][1][RTW89_WW][10] = 0,
+ [0][1][RTW89_WW][11] = 0,
+ [0][1][RTW89_WW][12] = 0,
[0][1][RTW89_WW][13] = 0,
[1][0][RTW89_WW][0] = 42,
[1][0][RTW89_WW][1] = 42,
@@ -6826,19 +6885,19 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_WW][11] = 42,
[1][0][RTW89_WW][12] = 34,
[1][0][RTW89_WW][13] = 0,
- [1][1][RTW89_WW][0] = 32,
- [1][1][RTW89_WW][1] = 32,
- [1][1][RTW89_WW][2] = 32,
- [1][1][RTW89_WW][3] = 32,
- [1][1][RTW89_WW][4] = 32,
- [1][1][RTW89_WW][5] = 32,
- [1][1][RTW89_WW][6] = 32,
- [1][1][RTW89_WW][7] = 32,
- [1][1][RTW89_WW][8] = 32,
- [1][1][RTW89_WW][9] = 32,
- [1][1][RTW89_WW][10] = 32,
- [1][1][RTW89_WW][11] = 32,
- [1][1][RTW89_WW][12] = 32,
+ [1][1][RTW89_WW][0] = 0,
+ [1][1][RTW89_WW][1] = 0,
+ [1][1][RTW89_WW][2] = 0,
+ [1][1][RTW89_WW][3] = 0,
+ [1][1][RTW89_WW][4] = 0,
+ [1][1][RTW89_WW][5] = 0,
+ [1][1][RTW89_WW][6] = 0,
+ [1][1][RTW89_WW][7] = 0,
+ [1][1][RTW89_WW][8] = 0,
+ [1][1][RTW89_WW][9] = 0,
+ [1][1][RTW89_WW][10] = 0,
+ [1][1][RTW89_WW][11] = 0,
+ [1][1][RTW89_WW][12] = 0,
[1][1][RTW89_WW][13] = 0,
[2][0][RTW89_WW][0] = 54,
[2][0][RTW89_WW][1] = 54,
@@ -6854,19 +6913,19 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_WW][11] = 54,
[2][0][RTW89_WW][12] = 34,
[2][0][RTW89_WW][13] = 0,
- [2][1][RTW89_WW][0] = 44,
- [2][1][RTW89_WW][1] = 44,
- [2][1][RTW89_WW][2] = 44,
- [2][1][RTW89_WW][3] = 44,
- [2][1][RTW89_WW][4] = 44,
- [2][1][RTW89_WW][5] = 44,
- [2][1][RTW89_WW][6] = 44,
- [2][1][RTW89_WW][7] = 44,
- [2][1][RTW89_WW][8] = 44,
- [2][1][RTW89_WW][9] = 44,
- [2][1][RTW89_WW][10] = 44,
- [2][1][RTW89_WW][11] = 44,
- [2][1][RTW89_WW][12] = 42,
+ [2][1][RTW89_WW][0] = 0,
+ [2][1][RTW89_WW][1] = 0,
+ [2][1][RTW89_WW][2] = 0,
+ [2][1][RTW89_WW][3] = 0,
+ [2][1][RTW89_WW][4] = 0,
+ [2][1][RTW89_WW][5] = 0,
+ [2][1][RTW89_WW][6] = 0,
+ [2][1][RTW89_WW][7] = 0,
+ [2][1][RTW89_WW][8] = 0,
+ [2][1][RTW89_WW][9] = 0,
+ [2][1][RTW89_WW][10] = 0,
+ [2][1][RTW89_WW][11] = 0,
+ [2][1][RTW89_WW][12] = 0,
[2][1][RTW89_WW][13] = 0,
[0][0][RTW89_FCC][0] = 62,
[0][0][RTW89_ETSI][0] = 30,
@@ -6986,7 +7045,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][0] = 127,
[0][1][RTW89_KCC][0] = 127,
[0][1][RTW89_ACMA][0] = 127,
- [0][1][RTW89_CN][0] = 20,
+ [0][1][RTW89_CN][0] = 127,
[0][1][RTW89_UK][0] = 127,
[0][1][RTW89_FCC][1] = 127,
[0][1][RTW89_ETSI][1] = 127,
@@ -6994,7 +7053,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][1] = 127,
[0][1][RTW89_KCC][1] = 127,
[0][1][RTW89_ACMA][1] = 127,
- [0][1][RTW89_CN][1] = 22,
+ [0][1][RTW89_CN][1] = 127,
[0][1][RTW89_UK][1] = 127,
[0][1][RTW89_FCC][2] = 127,
[0][1][RTW89_ETSI][2] = 127,
@@ -7002,7 +7061,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][2] = 127,
[0][1][RTW89_KCC][2] = 127,
[0][1][RTW89_ACMA][2] = 127,
- [0][1][RTW89_CN][2] = 22,
+ [0][1][RTW89_CN][2] = 127,
[0][1][RTW89_UK][2] = 127,
[0][1][RTW89_FCC][3] = 127,
[0][1][RTW89_ETSI][3] = 127,
@@ -7010,7 +7069,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][3] = 127,
[0][1][RTW89_KCC][3] = 127,
[0][1][RTW89_ACMA][3] = 127,
- [0][1][RTW89_CN][3] = 22,
+ [0][1][RTW89_CN][3] = 127,
[0][1][RTW89_UK][3] = 127,
[0][1][RTW89_FCC][4] = 127,
[0][1][RTW89_ETSI][4] = 127,
@@ -7018,7 +7077,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][4] = 127,
[0][1][RTW89_KCC][4] = 127,
[0][1][RTW89_ACMA][4] = 127,
- [0][1][RTW89_CN][4] = 22,
+ [0][1][RTW89_CN][4] = 127,
[0][1][RTW89_UK][4] = 127,
[0][1][RTW89_FCC][5] = 127,
[0][1][RTW89_ETSI][5] = 127,
@@ -7026,7 +7085,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][5] = 127,
[0][1][RTW89_KCC][5] = 127,
[0][1][RTW89_ACMA][5] = 127,
- [0][1][RTW89_CN][5] = 22,
+ [0][1][RTW89_CN][5] = 127,
[0][1][RTW89_UK][5] = 127,
[0][1][RTW89_FCC][6] = 127,
[0][1][RTW89_ETSI][6] = 127,
@@ -7034,7 +7093,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][6] = 127,
[0][1][RTW89_KCC][6] = 127,
[0][1][RTW89_ACMA][6] = 127,
- [0][1][RTW89_CN][6] = 22,
+ [0][1][RTW89_CN][6] = 127,
[0][1][RTW89_UK][6] = 127,
[0][1][RTW89_FCC][7] = 127,
[0][1][RTW89_ETSI][7] = 127,
@@ -7042,7 +7101,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][7] = 127,
[0][1][RTW89_KCC][7] = 127,
[0][1][RTW89_ACMA][7] = 127,
- [0][1][RTW89_CN][7] = 22,
+ [0][1][RTW89_CN][7] = 127,
[0][1][RTW89_UK][7] = 127,
[0][1][RTW89_FCC][8] = 127,
[0][1][RTW89_ETSI][8] = 127,
@@ -7050,7 +7109,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][8] = 127,
[0][1][RTW89_KCC][8] = 127,
[0][1][RTW89_ACMA][8] = 127,
- [0][1][RTW89_CN][8] = 22,
+ [0][1][RTW89_CN][8] = 127,
[0][1][RTW89_UK][8] = 127,
[0][1][RTW89_FCC][9] = 127,
[0][1][RTW89_ETSI][9] = 127,
@@ -7058,7 +7117,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][9] = 127,
[0][1][RTW89_KCC][9] = 127,
[0][1][RTW89_ACMA][9] = 127,
- [0][1][RTW89_CN][9] = 22,
+ [0][1][RTW89_CN][9] = 127,
[0][1][RTW89_UK][9] = 127,
[0][1][RTW89_FCC][10] = 127,
[0][1][RTW89_ETSI][10] = 127,
@@ -7066,7 +7125,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][10] = 127,
[0][1][RTW89_KCC][10] = 127,
[0][1][RTW89_ACMA][10] = 127,
- [0][1][RTW89_CN][10] = 22,
+ [0][1][RTW89_CN][10] = 127,
[0][1][RTW89_UK][10] = 127,
[0][1][RTW89_FCC][11] = 127,
[0][1][RTW89_ETSI][11] = 127,
@@ -7074,7 +7133,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][11] = 127,
[0][1][RTW89_KCC][11] = 127,
[0][1][RTW89_ACMA][11] = 127,
- [0][1][RTW89_CN][11] = 22,
+ [0][1][RTW89_CN][11] = 127,
[0][1][RTW89_UK][11] = 127,
[0][1][RTW89_FCC][12] = 127,
[0][1][RTW89_ETSI][12] = 127,
@@ -7082,7 +7141,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][12] = 127,
[0][1][RTW89_KCC][12] = 127,
[0][1][RTW89_ACMA][12] = 127,
- [0][1][RTW89_CN][12] = 20,
+ [0][1][RTW89_CN][12] = 127,
[0][1][RTW89_UK][12] = 127,
[0][1][RTW89_FCC][13] = 127,
[0][1][RTW89_ETSI][13] = 127,
@@ -7210,7 +7269,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][0] = 127,
[1][1][RTW89_KCC][0] = 127,
[1][1][RTW89_ACMA][0] = 127,
- [1][1][RTW89_CN][0] = 32,
+ [1][1][RTW89_CN][0] = 127,
[1][1][RTW89_UK][0] = 127,
[1][1][RTW89_FCC][1] = 127,
[1][1][RTW89_ETSI][1] = 127,
@@ -7218,7 +7277,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][1] = 127,
[1][1][RTW89_KCC][1] = 127,
[1][1][RTW89_ACMA][1] = 127,
- [1][1][RTW89_CN][1] = 32,
+ [1][1][RTW89_CN][1] = 127,
[1][1][RTW89_UK][1] = 127,
[1][1][RTW89_FCC][2] = 127,
[1][1][RTW89_ETSI][2] = 127,
@@ -7226,7 +7285,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][2] = 127,
[1][1][RTW89_KCC][2] = 127,
[1][1][RTW89_ACMA][2] = 127,
- [1][1][RTW89_CN][2] = 32,
+ [1][1][RTW89_CN][2] = 127,
[1][1][RTW89_UK][2] = 127,
[1][1][RTW89_FCC][3] = 127,
[1][1][RTW89_ETSI][3] = 127,
@@ -7234,7 +7293,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][3] = 127,
[1][1][RTW89_KCC][3] = 127,
[1][1][RTW89_ACMA][3] = 127,
- [1][1][RTW89_CN][3] = 32,
+ [1][1][RTW89_CN][3] = 127,
[1][1][RTW89_UK][3] = 127,
[1][1][RTW89_FCC][4] = 127,
[1][1][RTW89_ETSI][4] = 127,
@@ -7242,7 +7301,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][4] = 127,
[1][1][RTW89_KCC][4] = 127,
[1][1][RTW89_ACMA][4] = 127,
- [1][1][RTW89_CN][4] = 32,
+ [1][1][RTW89_CN][4] = 127,
[1][1][RTW89_UK][4] = 127,
[1][1][RTW89_FCC][5] = 127,
[1][1][RTW89_ETSI][5] = 127,
@@ -7250,7 +7309,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][5] = 127,
[1][1][RTW89_KCC][5] = 127,
[1][1][RTW89_ACMA][5] = 127,
- [1][1][RTW89_CN][5] = 32,
+ [1][1][RTW89_CN][5] = 127,
[1][1][RTW89_UK][5] = 127,
[1][1][RTW89_FCC][6] = 127,
[1][1][RTW89_ETSI][6] = 127,
@@ -7258,7 +7317,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][6] = 127,
[1][1][RTW89_KCC][6] = 127,
[1][1][RTW89_ACMA][6] = 127,
- [1][1][RTW89_CN][6] = 32,
+ [1][1][RTW89_CN][6] = 127,
[1][1][RTW89_UK][6] = 127,
[1][1][RTW89_FCC][7] = 127,
[1][1][RTW89_ETSI][7] = 127,
@@ -7266,7 +7325,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][7] = 127,
[1][1][RTW89_KCC][7] = 127,
[1][1][RTW89_ACMA][7] = 127,
- [1][1][RTW89_CN][7] = 32,
+ [1][1][RTW89_CN][7] = 127,
[1][1][RTW89_UK][7] = 127,
[1][1][RTW89_FCC][8] = 127,
[1][1][RTW89_ETSI][8] = 127,
@@ -7274,7 +7333,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][8] = 127,
[1][1][RTW89_KCC][8] = 127,
[1][1][RTW89_ACMA][8] = 127,
- [1][1][RTW89_CN][8] = 32,
+ [1][1][RTW89_CN][8] = 127,
[1][1][RTW89_UK][8] = 127,
[1][1][RTW89_FCC][9] = 127,
[1][1][RTW89_ETSI][9] = 127,
@@ -7282,7 +7341,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][9] = 127,
[1][1][RTW89_KCC][9] = 127,
[1][1][RTW89_ACMA][9] = 127,
- [1][1][RTW89_CN][9] = 32,
+ [1][1][RTW89_CN][9] = 127,
[1][1][RTW89_UK][9] = 127,
[1][1][RTW89_FCC][10] = 127,
[1][1][RTW89_ETSI][10] = 127,
@@ -7290,7 +7349,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][10] = 127,
[1][1][RTW89_KCC][10] = 127,
[1][1][RTW89_ACMA][10] = 127,
- [1][1][RTW89_CN][10] = 32,
+ [1][1][RTW89_CN][10] = 127,
[1][1][RTW89_UK][10] = 127,
[1][1][RTW89_FCC][11] = 127,
[1][1][RTW89_ETSI][11] = 127,
@@ -7298,7 +7357,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][11] = 127,
[1][1][RTW89_KCC][11] = 127,
[1][1][RTW89_ACMA][11] = 127,
- [1][1][RTW89_CN][11] = 32,
+ [1][1][RTW89_CN][11] = 127,
[1][1][RTW89_UK][11] = 127,
[1][1][RTW89_FCC][12] = 127,
[1][1][RTW89_ETSI][12] = 127,
@@ -7306,7 +7365,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][12] = 127,
[1][1][RTW89_KCC][12] = 127,
[1][1][RTW89_ACMA][12] = 127,
- [1][1][RTW89_CN][12] = 32,
+ [1][1][RTW89_CN][12] = 127,
[1][1][RTW89_UK][12] = 127,
[1][1][RTW89_FCC][13] = 127,
[1][1][RTW89_ETSI][13] = 127,
@@ -7434,7 +7493,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][0] = 127,
[2][1][RTW89_KCC][0] = 127,
[2][1][RTW89_ACMA][0] = 127,
- [2][1][RTW89_CN][0] = 44,
+ [2][1][RTW89_CN][0] = 127,
[2][1][RTW89_UK][0] = 127,
[2][1][RTW89_FCC][1] = 127,
[2][1][RTW89_ETSI][1] = 127,
@@ -7442,7 +7501,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][1] = 127,
[2][1][RTW89_KCC][1] = 127,
[2][1][RTW89_ACMA][1] = 127,
- [2][1][RTW89_CN][1] = 44,
+ [2][1][RTW89_CN][1] = 127,
[2][1][RTW89_UK][1] = 127,
[2][1][RTW89_FCC][2] = 127,
[2][1][RTW89_ETSI][2] = 127,
@@ -7450,7 +7509,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][2] = 127,
[2][1][RTW89_KCC][2] = 127,
[2][1][RTW89_ACMA][2] = 127,
- [2][1][RTW89_CN][2] = 44,
+ [2][1][RTW89_CN][2] = 127,
[2][1][RTW89_UK][2] = 127,
[2][1][RTW89_FCC][3] = 127,
[2][1][RTW89_ETSI][3] = 127,
@@ -7458,7 +7517,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][3] = 127,
[2][1][RTW89_KCC][3] = 127,
[2][1][RTW89_ACMA][3] = 127,
- [2][1][RTW89_CN][3] = 44,
+ [2][1][RTW89_CN][3] = 127,
[2][1][RTW89_UK][3] = 127,
[2][1][RTW89_FCC][4] = 127,
[2][1][RTW89_ETSI][4] = 127,
@@ -7466,7 +7525,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][4] = 127,
[2][1][RTW89_KCC][4] = 127,
[2][1][RTW89_ACMA][4] = 127,
- [2][1][RTW89_CN][4] = 44,
+ [2][1][RTW89_CN][4] = 127,
[2][1][RTW89_UK][4] = 127,
[2][1][RTW89_FCC][5] = 127,
[2][1][RTW89_ETSI][5] = 127,
@@ -7474,7 +7533,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][5] = 127,
[2][1][RTW89_KCC][5] = 127,
[2][1][RTW89_ACMA][5] = 127,
- [2][1][RTW89_CN][5] = 44,
+ [2][1][RTW89_CN][5] = 127,
[2][1][RTW89_UK][5] = 127,
[2][1][RTW89_FCC][6] = 127,
[2][1][RTW89_ETSI][6] = 127,
@@ -7482,7 +7541,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][6] = 127,
[2][1][RTW89_KCC][6] = 127,
[2][1][RTW89_ACMA][6] = 127,
- [2][1][RTW89_CN][6] = 44,
+ [2][1][RTW89_CN][6] = 127,
[2][1][RTW89_UK][6] = 127,
[2][1][RTW89_FCC][7] = 127,
[2][1][RTW89_ETSI][7] = 127,
@@ -7490,7 +7549,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][7] = 127,
[2][1][RTW89_KCC][7] = 127,
[2][1][RTW89_ACMA][7] = 127,
- [2][1][RTW89_CN][7] = 44,
+ [2][1][RTW89_CN][7] = 127,
[2][1][RTW89_UK][7] = 127,
[2][1][RTW89_FCC][8] = 127,
[2][1][RTW89_ETSI][8] = 127,
@@ -7498,7 +7557,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][8] = 127,
[2][1][RTW89_KCC][8] = 127,
[2][1][RTW89_ACMA][8] = 127,
- [2][1][RTW89_CN][8] = 44,
+ [2][1][RTW89_CN][8] = 127,
[2][1][RTW89_UK][8] = 127,
[2][1][RTW89_FCC][9] = 127,
[2][1][RTW89_ETSI][9] = 127,
@@ -7506,7 +7565,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][9] = 127,
[2][1][RTW89_KCC][9] = 127,
[2][1][RTW89_ACMA][9] = 127,
- [2][1][RTW89_CN][9] = 44,
+ [2][1][RTW89_CN][9] = 127,
[2][1][RTW89_UK][9] = 127,
[2][1][RTW89_FCC][10] = 127,
[2][1][RTW89_ETSI][10] = 127,
@@ -7514,7 +7573,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][10] = 127,
[2][1][RTW89_KCC][10] = 127,
[2][1][RTW89_ACMA][10] = 127,
- [2][1][RTW89_CN][10] = 44,
+ [2][1][RTW89_CN][10] = 127,
[2][1][RTW89_UK][10] = 127,
[2][1][RTW89_FCC][11] = 127,
[2][1][RTW89_ETSI][11] = 127,
@@ -7522,7 +7581,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][11] = 127,
[2][1][RTW89_KCC][11] = 127,
[2][1][RTW89_ACMA][11] = 127,
- [2][1][RTW89_CN][11] = 44,
+ [2][1][RTW89_CN][11] = 127,
[2][1][RTW89_UK][11] = 127,
[2][1][RTW89_FCC][12] = 127,
[2][1][RTW89_ETSI][12] = 127,
@@ -7530,7 +7589,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][12] = 127,
[2][1][RTW89_KCC][12] = 127,
[2][1][RTW89_ACMA][12] = 127,
- [2][1][RTW89_CN][12] = 42,
+ [2][1][RTW89_CN][12] = 127,
[2][1][RTW89_UK][12] = 127,
[2][1][RTW89_FCC][13] = 127,
[2][1][RTW89_ETSI][13] = 127,
@@ -7573,14 +7632,14 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_WW][48] = 42,
[0][0][RTW89_WW][50] = 42,
[0][0][RTW89_WW][52] = 40,
- [0][1][RTW89_WW][0] = 4,
- [0][1][RTW89_WW][2] = 4,
- [0][1][RTW89_WW][4] = 4,
- [0][1][RTW89_WW][6] = 4,
- [0][1][RTW89_WW][8] = 4,
- [0][1][RTW89_WW][10] = 4,
- [0][1][RTW89_WW][12] = 4,
- [0][1][RTW89_WW][14] = 4,
+ [0][1][RTW89_WW][0] = 0,
+ [0][1][RTW89_WW][2] = 0,
+ [0][1][RTW89_WW][4] = 0,
+ [0][1][RTW89_WW][6] = 0,
+ [0][1][RTW89_WW][8] = 0,
+ [0][1][RTW89_WW][10] = 0,
+ [0][1][RTW89_WW][12] = 0,
+ [0][1][RTW89_WW][14] = 0,
[0][1][RTW89_WW][15] = 0,
[0][1][RTW89_WW][17] = 0,
[0][1][RTW89_WW][19] = 0,
@@ -7593,11 +7652,11 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_WW][33] = 0,
[0][1][RTW89_WW][35] = 0,
[0][1][RTW89_WW][37] = 0,
- [0][1][RTW89_WW][38] = 42,
- [0][1][RTW89_WW][40] = 42,
- [0][1][RTW89_WW][42] = 42,
- [0][1][RTW89_WW][44] = 42,
- [0][1][RTW89_WW][46] = 42,
+ [0][1][RTW89_WW][38] = 0,
+ [0][1][RTW89_WW][40] = 0,
+ [0][1][RTW89_WW][42] = 0,
+ [0][1][RTW89_WW][44] = 0,
+ [0][1][RTW89_WW][46] = 0,
[0][1][RTW89_WW][48] = 0,
[0][1][RTW89_WW][50] = 0,
[0][1][RTW89_WW][52] = 0,
@@ -7629,14 +7688,14 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_WW][48] = 52,
[1][0][RTW89_WW][50] = 52,
[1][0][RTW89_WW][52] = 52,
- [1][1][RTW89_WW][0] = 14,
- [1][1][RTW89_WW][2] = 14,
- [1][1][RTW89_WW][4] = 14,
- [1][1][RTW89_WW][6] = 14,
- [1][1][RTW89_WW][8] = 14,
- [1][1][RTW89_WW][10] = 14,
- [1][1][RTW89_WW][12] = 14,
- [1][1][RTW89_WW][14] = 14,
+ [1][1][RTW89_WW][0] = 0,
+ [1][1][RTW89_WW][2] = 0,
+ [1][1][RTW89_WW][4] = 0,
+ [1][1][RTW89_WW][6] = 0,
+ [1][1][RTW89_WW][8] = 0,
+ [1][1][RTW89_WW][10] = 0,
+ [1][1][RTW89_WW][12] = 0,
+ [1][1][RTW89_WW][14] = 0,
[1][1][RTW89_WW][15] = 0,
[1][1][RTW89_WW][17] = 0,
[1][1][RTW89_WW][19] = 0,
@@ -7649,11 +7708,11 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_WW][33] = 0,
[1][1][RTW89_WW][35] = 0,
[1][1][RTW89_WW][37] = 0,
- [1][1][RTW89_WW][38] = 54,
- [1][1][RTW89_WW][40] = 54,
- [1][1][RTW89_WW][42] = 54,
- [1][1][RTW89_WW][44] = 54,
- [1][1][RTW89_WW][46] = 54,
+ [1][1][RTW89_WW][38] = 0,
+ [1][1][RTW89_WW][40] = 0,
+ [1][1][RTW89_WW][42] = 0,
+ [1][1][RTW89_WW][44] = 0,
+ [1][1][RTW89_WW][46] = 0,
[1][1][RTW89_WW][48] = 0,
[1][1][RTW89_WW][50] = 0,
[1][1][RTW89_WW][52] = 0,
@@ -7685,14 +7744,14 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_WW][48] = 64,
[2][0][RTW89_WW][50] = 64,
[2][0][RTW89_WW][52] = 60,
- [2][1][RTW89_WW][0] = 28,
- [2][1][RTW89_WW][2] = 28,
- [2][1][RTW89_WW][4] = 28,
- [2][1][RTW89_WW][6] = 28,
- [2][1][RTW89_WW][8] = 28,
- [2][1][RTW89_WW][10] = 28,
- [2][1][RTW89_WW][12] = 28,
- [2][1][RTW89_WW][14] = 28,
+ [2][1][RTW89_WW][0] = 0,
+ [2][1][RTW89_WW][2] = 0,
+ [2][1][RTW89_WW][4] = 0,
+ [2][1][RTW89_WW][6] = 0,
+ [2][1][RTW89_WW][8] = 0,
+ [2][1][RTW89_WW][10] = 0,
+ [2][1][RTW89_WW][12] = 0,
+ [2][1][RTW89_WW][14] = 0,
[2][1][RTW89_WW][15] = 0,
[2][1][RTW89_WW][17] = 0,
[2][1][RTW89_WW][19] = 0,
@@ -7705,11 +7764,11 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_WW][33] = 0,
[2][1][RTW89_WW][35] = 0,
[2][1][RTW89_WW][37] = 0,
- [2][1][RTW89_WW][38] = 56,
- [2][1][RTW89_WW][40] = 56,
- [2][1][RTW89_WW][42] = 56,
- [2][1][RTW89_WW][44] = 56,
- [2][1][RTW89_WW][46] = 56,
+ [2][1][RTW89_WW][38] = 0,
+ [2][1][RTW89_WW][40] = 0,
+ [2][1][RTW89_WW][42] = 0,
+ [2][1][RTW89_WW][44] = 0,
+ [2][1][RTW89_WW][46] = 0,
[2][1][RTW89_WW][48] = 0,
[2][1][RTW89_WW][50] = 0,
[2][1][RTW89_WW][52] = 0,
@@ -7943,7 +8002,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][0] = 127,
[0][1][RTW89_KCC][0] = 127,
[0][1][RTW89_ACMA][0] = 127,
- [0][1][RTW89_CN][0] = 4,
+ [0][1][RTW89_CN][0] = 127,
[0][1][RTW89_UK][0] = 127,
[0][1][RTW89_FCC][2] = 127,
[0][1][RTW89_ETSI][2] = 127,
@@ -7951,7 +8010,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][2] = 127,
[0][1][RTW89_KCC][2] = 127,
[0][1][RTW89_ACMA][2] = 127,
- [0][1][RTW89_CN][2] = 4,
+ [0][1][RTW89_CN][2] = 127,
[0][1][RTW89_UK][2] = 127,
[0][1][RTW89_FCC][4] = 127,
[0][1][RTW89_ETSI][4] = 127,
@@ -7959,7 +8018,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][4] = 127,
[0][1][RTW89_KCC][4] = 127,
[0][1][RTW89_ACMA][4] = 127,
- [0][1][RTW89_CN][4] = 4,
+ [0][1][RTW89_CN][4] = 127,
[0][1][RTW89_UK][4] = 127,
[0][1][RTW89_FCC][6] = 127,
[0][1][RTW89_ETSI][6] = 127,
@@ -7967,7 +8026,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][6] = 127,
[0][1][RTW89_KCC][6] = 127,
[0][1][RTW89_ACMA][6] = 127,
- [0][1][RTW89_CN][6] = 4,
+ [0][1][RTW89_CN][6] = 127,
[0][1][RTW89_UK][6] = 127,
[0][1][RTW89_FCC][8] = 127,
[0][1][RTW89_ETSI][8] = 127,
@@ -7975,7 +8034,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][8] = 127,
[0][1][RTW89_KCC][8] = 127,
[0][1][RTW89_ACMA][8] = 127,
- [0][1][RTW89_CN][8] = 4,
+ [0][1][RTW89_CN][8] = 127,
[0][1][RTW89_UK][8] = 127,
[0][1][RTW89_FCC][10] = 127,
[0][1][RTW89_ETSI][10] = 127,
@@ -7983,7 +8042,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][10] = 127,
[0][1][RTW89_KCC][10] = 127,
[0][1][RTW89_ACMA][10] = 127,
- [0][1][RTW89_CN][10] = 4,
+ [0][1][RTW89_CN][10] = 127,
[0][1][RTW89_UK][10] = 127,
[0][1][RTW89_FCC][12] = 127,
[0][1][RTW89_ETSI][12] = 127,
@@ -7991,7 +8050,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][12] = 127,
[0][1][RTW89_KCC][12] = 127,
[0][1][RTW89_ACMA][12] = 127,
- [0][1][RTW89_CN][12] = 4,
+ [0][1][RTW89_CN][12] = 127,
[0][1][RTW89_UK][12] = 127,
[0][1][RTW89_FCC][14] = 127,
[0][1][RTW89_ETSI][14] = 127,
@@ -7999,7 +8058,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][14] = 127,
[0][1][RTW89_KCC][14] = 127,
[0][1][RTW89_ACMA][14] = 127,
- [0][1][RTW89_CN][14] = 4,
+ [0][1][RTW89_CN][14] = 127,
[0][1][RTW89_UK][14] = 127,
[0][1][RTW89_FCC][15] = 127,
[0][1][RTW89_ETSI][15] = 127,
@@ -8103,7 +8162,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][38] = 127,
[0][1][RTW89_KCC][38] = 127,
[0][1][RTW89_ACMA][38] = 127,
- [0][1][RTW89_CN][38] = 42,
+ [0][1][RTW89_CN][38] = 127,
[0][1][RTW89_UK][38] = 127,
[0][1][RTW89_FCC][40] = 127,
[0][1][RTW89_ETSI][40] = 127,
@@ -8111,7 +8170,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][40] = 127,
[0][1][RTW89_KCC][40] = 127,
[0][1][RTW89_ACMA][40] = 127,
- [0][1][RTW89_CN][40] = 42,
+ [0][1][RTW89_CN][40] = 127,
[0][1][RTW89_UK][40] = 127,
[0][1][RTW89_FCC][42] = 127,
[0][1][RTW89_ETSI][42] = 127,
@@ -8119,7 +8178,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][42] = 127,
[0][1][RTW89_KCC][42] = 127,
[0][1][RTW89_ACMA][42] = 127,
- [0][1][RTW89_CN][42] = 42,
+ [0][1][RTW89_CN][42] = 127,
[0][1][RTW89_UK][42] = 127,
[0][1][RTW89_FCC][44] = 127,
[0][1][RTW89_ETSI][44] = 127,
@@ -8127,7 +8186,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][44] = 127,
[0][1][RTW89_KCC][44] = 127,
[0][1][RTW89_ACMA][44] = 127,
- [0][1][RTW89_CN][44] = 42,
+ [0][1][RTW89_CN][44] = 127,
[0][1][RTW89_UK][44] = 127,
[0][1][RTW89_FCC][46] = 127,
[0][1][RTW89_ETSI][46] = 127,
@@ -8135,7 +8194,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][46] = 127,
[0][1][RTW89_KCC][46] = 127,
[0][1][RTW89_ACMA][46] = 127,
- [0][1][RTW89_CN][46] = 42,
+ [0][1][RTW89_CN][46] = 127,
[0][1][RTW89_UK][46] = 127,
[0][1][RTW89_FCC][48] = 127,
[0][1][RTW89_ETSI][48] = 127,
@@ -8391,7 +8450,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][0] = 127,
[1][1][RTW89_KCC][0] = 127,
[1][1][RTW89_ACMA][0] = 127,
- [1][1][RTW89_CN][0] = 14,
+ [1][1][RTW89_CN][0] = 127,
[1][1][RTW89_UK][0] = 127,
[1][1][RTW89_FCC][2] = 127,
[1][1][RTW89_ETSI][2] = 127,
@@ -8399,7 +8458,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][2] = 127,
[1][1][RTW89_KCC][2] = 127,
[1][1][RTW89_ACMA][2] = 127,
- [1][1][RTW89_CN][2] = 14,
+ [1][1][RTW89_CN][2] = 127,
[1][1][RTW89_UK][2] = 127,
[1][1][RTW89_FCC][4] = 127,
[1][1][RTW89_ETSI][4] = 127,
@@ -8407,7 +8466,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][4] = 127,
[1][1][RTW89_KCC][4] = 127,
[1][1][RTW89_ACMA][4] = 127,
- [1][1][RTW89_CN][4] = 14,
+ [1][1][RTW89_CN][4] = 127,
[1][1][RTW89_UK][4] = 127,
[1][1][RTW89_FCC][6] = 127,
[1][1][RTW89_ETSI][6] = 127,
@@ -8415,7 +8474,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][6] = 127,
[1][1][RTW89_KCC][6] = 127,
[1][1][RTW89_ACMA][6] = 127,
- [1][1][RTW89_CN][6] = 14,
+ [1][1][RTW89_CN][6] = 127,
[1][1][RTW89_UK][6] = 127,
[1][1][RTW89_FCC][8] = 127,
[1][1][RTW89_ETSI][8] = 127,
@@ -8423,7 +8482,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][8] = 127,
[1][1][RTW89_KCC][8] = 127,
[1][1][RTW89_ACMA][8] = 127,
- [1][1][RTW89_CN][8] = 14,
+ [1][1][RTW89_CN][8] = 127,
[1][1][RTW89_UK][8] = 127,
[1][1][RTW89_FCC][10] = 127,
[1][1][RTW89_ETSI][10] = 127,
@@ -8431,7 +8490,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][10] = 127,
[1][1][RTW89_KCC][10] = 127,
[1][1][RTW89_ACMA][10] = 127,
- [1][1][RTW89_CN][10] = 14,
+ [1][1][RTW89_CN][10] = 127,
[1][1][RTW89_UK][10] = 127,
[1][1][RTW89_FCC][12] = 127,
[1][1][RTW89_ETSI][12] = 127,
@@ -8439,7 +8498,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][12] = 127,
[1][1][RTW89_KCC][12] = 127,
[1][1][RTW89_ACMA][12] = 127,
- [1][1][RTW89_CN][12] = 14,
+ [1][1][RTW89_CN][12] = 127,
[1][1][RTW89_UK][12] = 127,
[1][1][RTW89_FCC][14] = 127,
[1][1][RTW89_ETSI][14] = 127,
@@ -8447,7 +8506,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][14] = 127,
[1][1][RTW89_KCC][14] = 127,
[1][1][RTW89_ACMA][14] = 127,
- [1][1][RTW89_CN][14] = 14,
+ [1][1][RTW89_CN][14] = 127,
[1][1][RTW89_UK][14] = 127,
[1][1][RTW89_FCC][15] = 127,
[1][1][RTW89_ETSI][15] = 127,
@@ -8551,7 +8610,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][38] = 127,
[1][1][RTW89_KCC][38] = 127,
[1][1][RTW89_ACMA][38] = 127,
- [1][1][RTW89_CN][38] = 54,
+ [1][1][RTW89_CN][38] = 127,
[1][1][RTW89_UK][38] = 127,
[1][1][RTW89_FCC][40] = 127,
[1][1][RTW89_ETSI][40] = 127,
@@ -8559,7 +8618,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][40] = 127,
[1][1][RTW89_KCC][40] = 127,
[1][1][RTW89_ACMA][40] = 127,
- [1][1][RTW89_CN][40] = 54,
+ [1][1][RTW89_CN][40] = 127,
[1][1][RTW89_UK][40] = 127,
[1][1][RTW89_FCC][42] = 127,
[1][1][RTW89_ETSI][42] = 127,
@@ -8567,7 +8626,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][42] = 127,
[1][1][RTW89_KCC][42] = 127,
[1][1][RTW89_ACMA][42] = 127,
- [1][1][RTW89_CN][42] = 54,
+ [1][1][RTW89_CN][42] = 127,
[1][1][RTW89_UK][42] = 127,
[1][1][RTW89_FCC][44] = 127,
[1][1][RTW89_ETSI][44] = 127,
@@ -8575,7 +8634,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][44] = 127,
[1][1][RTW89_KCC][44] = 127,
[1][1][RTW89_ACMA][44] = 127,
- [1][1][RTW89_CN][44] = 54,
+ [1][1][RTW89_CN][44] = 127,
[1][1][RTW89_UK][44] = 127,
[1][1][RTW89_FCC][46] = 127,
[1][1][RTW89_ETSI][46] = 127,
@@ -8583,7 +8642,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][46] = 127,
[1][1][RTW89_KCC][46] = 127,
[1][1][RTW89_ACMA][46] = 127,
- [1][1][RTW89_CN][46] = 54,
+ [1][1][RTW89_CN][46] = 127,
[1][1][RTW89_UK][46] = 127,
[1][1][RTW89_FCC][48] = 127,
[1][1][RTW89_ETSI][48] = 127,
@@ -8839,7 +8898,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][0] = 127,
[2][1][RTW89_KCC][0] = 127,
[2][1][RTW89_ACMA][0] = 127,
- [2][1][RTW89_CN][0] = 28,
+ [2][1][RTW89_CN][0] = 127,
[2][1][RTW89_UK][0] = 127,
[2][1][RTW89_FCC][2] = 127,
[2][1][RTW89_ETSI][2] = 127,
@@ -8847,7 +8906,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][2] = 127,
[2][1][RTW89_KCC][2] = 127,
[2][1][RTW89_ACMA][2] = 127,
- [2][1][RTW89_CN][2] = 28,
+ [2][1][RTW89_CN][2] = 127,
[2][1][RTW89_UK][2] = 127,
[2][1][RTW89_FCC][4] = 127,
[2][1][RTW89_ETSI][4] = 127,
@@ -8855,7 +8914,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][4] = 127,
[2][1][RTW89_KCC][4] = 127,
[2][1][RTW89_ACMA][4] = 127,
- [2][1][RTW89_CN][4] = 28,
+ [2][1][RTW89_CN][4] = 127,
[2][1][RTW89_UK][4] = 127,
[2][1][RTW89_FCC][6] = 127,
[2][1][RTW89_ETSI][6] = 127,
@@ -8863,7 +8922,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][6] = 127,
[2][1][RTW89_KCC][6] = 127,
[2][1][RTW89_ACMA][6] = 127,
- [2][1][RTW89_CN][6] = 28,
+ [2][1][RTW89_CN][6] = 127,
[2][1][RTW89_UK][6] = 127,
[2][1][RTW89_FCC][8] = 127,
[2][1][RTW89_ETSI][8] = 127,
@@ -8871,7 +8930,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][8] = 127,
[2][1][RTW89_KCC][8] = 127,
[2][1][RTW89_ACMA][8] = 127,
- [2][1][RTW89_CN][8] = 28,
+ [2][1][RTW89_CN][8] = 127,
[2][1][RTW89_UK][8] = 127,
[2][1][RTW89_FCC][10] = 127,
[2][1][RTW89_ETSI][10] = 127,
@@ -8879,7 +8938,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][10] = 127,
[2][1][RTW89_KCC][10] = 127,
[2][1][RTW89_ACMA][10] = 127,
- [2][1][RTW89_CN][10] = 28,
+ [2][1][RTW89_CN][10] = 127,
[2][1][RTW89_UK][10] = 127,
[2][1][RTW89_FCC][12] = 127,
[2][1][RTW89_ETSI][12] = 127,
@@ -8887,7 +8946,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][12] = 127,
[2][1][RTW89_KCC][12] = 127,
[2][1][RTW89_ACMA][12] = 127,
- [2][1][RTW89_CN][12] = 28,
+ [2][1][RTW89_CN][12] = 127,
[2][1][RTW89_UK][12] = 127,
[2][1][RTW89_FCC][14] = 127,
[2][1][RTW89_ETSI][14] = 127,
@@ -8895,7 +8954,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][14] = 127,
[2][1][RTW89_KCC][14] = 127,
[2][1][RTW89_ACMA][14] = 127,
- [2][1][RTW89_CN][14] = 28,
+ [2][1][RTW89_CN][14] = 127,
[2][1][RTW89_UK][14] = 127,
[2][1][RTW89_FCC][15] = 127,
[2][1][RTW89_ETSI][15] = 127,
@@ -8999,7 +9058,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][38] = 127,
[2][1][RTW89_KCC][38] = 127,
[2][1][RTW89_ACMA][38] = 127,
- [2][1][RTW89_CN][38] = 56,
+ [2][1][RTW89_CN][38] = 127,
[2][1][RTW89_UK][38] = 127,
[2][1][RTW89_FCC][40] = 127,
[2][1][RTW89_ETSI][40] = 127,
@@ -9007,7 +9066,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][40] = 127,
[2][1][RTW89_KCC][40] = 127,
[2][1][RTW89_ACMA][40] = 127,
- [2][1][RTW89_CN][40] = 56,
+ [2][1][RTW89_CN][40] = 127,
[2][1][RTW89_UK][40] = 127,
[2][1][RTW89_FCC][42] = 127,
[2][1][RTW89_ETSI][42] = 127,
@@ -9015,7 +9074,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][42] = 127,
[2][1][RTW89_KCC][42] = 127,
[2][1][RTW89_ACMA][42] = 127,
- [2][1][RTW89_CN][42] = 56,
+ [2][1][RTW89_CN][42] = 127,
[2][1][RTW89_UK][42] = 127,
[2][1][RTW89_FCC][44] = 127,
[2][1][RTW89_ETSI][44] = 127,
@@ -9023,7 +9082,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][44] = 127,
[2][1][RTW89_KCC][44] = 127,
[2][1][RTW89_ACMA][44] = 127,
- [2][1][RTW89_CN][44] = 56,
+ [2][1][RTW89_CN][44] = 127,
[2][1][RTW89_UK][44] = 127,
[2][1][RTW89_FCC][46] = 127,
[2][1][RTW89_ETSI][46] = 127,
@@ -9031,7 +9090,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][46] = 127,
[2][1][RTW89_KCC][46] = 127,
[2][1][RTW89_ACMA][46] = 127,
- [2][1][RTW89_CN][46] = 56,
+ [2][1][RTW89_CN][46] = 127,
[2][1][RTW89_UK][46] = 127,
[2][1][RTW89_FCC][48] = 127,
[2][1][RTW89_ETSI][48] = 127,
@@ -9063,19 +9122,19 @@ static
const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[RTW89_RS_LMT_NUM][RTW89_BF_NUM]
[RTW89_REGD_NUM][RTW89_2G_CH_NUM] = {
- [0][0][0][0][RTW89_WW][0] = 58,
- [0][0][0][0][RTW89_WW][1] = 58,
- [0][0][0][0][RTW89_WW][2] = 58,
- [0][0][0][0][RTW89_WW][3] = 58,
- [0][0][0][0][RTW89_WW][4] = 58,
- [0][0][0][0][RTW89_WW][5] = 58,
- [0][0][0][0][RTW89_WW][6] = 58,
- [0][0][0][0][RTW89_WW][7] = 58,
- [0][0][0][0][RTW89_WW][8] = 58,
- [0][0][0][0][RTW89_WW][9] = 58,
- [0][0][0][0][RTW89_WW][10] = 58,
- [0][0][0][0][RTW89_WW][11] = 58,
- [0][0][0][0][RTW89_WW][12] = 52,
+ [0][0][0][0][RTW89_WW][0] = 56,
+ [0][0][0][0][RTW89_WW][1] = 56,
+ [0][0][0][0][RTW89_WW][2] = 56,
+ [0][0][0][0][RTW89_WW][3] = 56,
+ [0][0][0][0][RTW89_WW][4] = 56,
+ [0][0][0][0][RTW89_WW][5] = 56,
+ [0][0][0][0][RTW89_WW][6] = 56,
+ [0][0][0][0][RTW89_WW][7] = 56,
+ [0][0][0][0][RTW89_WW][8] = 56,
+ [0][0][0][0][RTW89_WW][9] = 56,
+ [0][0][0][0][RTW89_WW][10] = 56,
+ [0][0][0][0][RTW89_WW][11] = 56,
+ [0][0][0][0][RTW89_WW][12] = 42,
[0][0][0][0][RTW89_WW][13] = 76,
[0][1][0][0][RTW89_WW][0] = 0,
[0][1][0][0][RTW89_WW][1] = 0,
@@ -9093,15 +9152,15 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_WW][13] = 0,
[1][0][0][0][RTW89_WW][0] = 0,
[1][0][0][0][RTW89_WW][1] = 0,
- [1][0][0][0][RTW89_WW][2] = 58,
- [1][0][0][0][RTW89_WW][3] = 58,
- [1][0][0][0][RTW89_WW][4] = 58,
- [1][0][0][0][RTW89_WW][5] = 58,
- [1][0][0][0][RTW89_WW][6] = 58,
- [1][0][0][0][RTW89_WW][7] = 58,
- [1][0][0][0][RTW89_WW][8] = 58,
- [1][0][0][0][RTW89_WW][9] = 58,
- [1][0][0][0][RTW89_WW][10] = 58,
+ [1][0][0][0][RTW89_WW][2] = 56,
+ [1][0][0][0][RTW89_WW][3] = 56,
+ [1][0][0][0][RTW89_WW][4] = 56,
+ [1][0][0][0][RTW89_WW][5] = 56,
+ [1][0][0][0][RTW89_WW][6] = 56,
+ [1][0][0][0][RTW89_WW][7] = 56,
+ [1][0][0][0][RTW89_WW][8] = 56,
+ [1][0][0][0][RTW89_WW][9] = 56,
+ [1][0][0][0][RTW89_WW][10] = 42,
[1][0][0][0][RTW89_WW][11] = 0,
[1][0][0][0][RTW89_WW][12] = 0,
[1][0][0][0][RTW89_WW][13] = 0,
@@ -9131,7 +9190,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_WW][9] = 60,
[0][0][1][0][RTW89_WW][10] = 60,
[0][0][1][0][RTW89_WW][11] = 60,
- [0][0][1][0][RTW89_WW][12] = 58,
+ [0][0][1][0][RTW89_WW][12] = 40,
[0][0][1][0][RTW89_WW][13] = 0,
[0][1][1][0][RTW89_WW][0] = 0,
[0][1][1][0][RTW89_WW][1] = 0,
@@ -9147,19 +9206,19 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_WW][11] = 0,
[0][1][1][0][RTW89_WW][12] = 0,
[0][1][1][0][RTW89_WW][13] = 0,
- [0][0][2][0][RTW89_WW][0] = 60,
- [0][0][2][0][RTW89_WW][1] = 60,
- [0][0][2][0][RTW89_WW][2] = 60,
- [0][0][2][0][RTW89_WW][3] = 60,
- [0][0][2][0][RTW89_WW][4] = 60,
- [0][0][2][0][RTW89_WW][5] = 60,
- [0][0][2][0][RTW89_WW][6] = 60,
- [0][0][2][0][RTW89_WW][7] = 60,
- [0][0][2][0][RTW89_WW][8] = 60,
- [0][0][2][0][RTW89_WW][9] = 60,
- [0][0][2][0][RTW89_WW][10] = 60,
- [0][0][2][0][RTW89_WW][11] = 60,
- [0][0][2][0][RTW89_WW][12] = 60,
+ [0][0][2][0][RTW89_WW][0] = 58,
+ [0][0][2][0][RTW89_WW][1] = 58,
+ [0][0][2][0][RTW89_WW][2] = 58,
+ [0][0][2][0][RTW89_WW][3] = 58,
+ [0][0][2][0][RTW89_WW][4] = 58,
+ [0][0][2][0][RTW89_WW][5] = 58,
+ [0][0][2][0][RTW89_WW][6] = 58,
+ [0][0][2][0][RTW89_WW][7] = 58,
+ [0][0][2][0][RTW89_WW][8] = 58,
+ [0][0][2][0][RTW89_WW][9] = 58,
+ [0][0][2][0][RTW89_WW][10] = 58,
+ [0][0][2][0][RTW89_WW][11] = 58,
+ [0][0][2][0][RTW89_WW][12] = 38,
[0][0][2][0][RTW89_WW][13] = 0,
[0][1][2][0][RTW89_WW][0] = 0,
[0][1][2][0][RTW89_WW][1] = 0,
@@ -9191,15 +9250,15 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_WW][13] = 0,
[1][0][2][0][RTW89_WW][0] = 0,
[1][0][2][0][RTW89_WW][1] = 0,
- [1][0][2][0][RTW89_WW][2] = 58,
- [1][0][2][0][RTW89_WW][3] = 58,
- [1][0][2][0][RTW89_WW][4] = 58,
- [1][0][2][0][RTW89_WW][5] = 58,
- [1][0][2][0][RTW89_WW][6] = 58,
- [1][0][2][0][RTW89_WW][7] = 58,
- [1][0][2][0][RTW89_WW][8] = 58,
- [1][0][2][0][RTW89_WW][9] = 58,
- [1][0][2][0][RTW89_WW][10] = 58,
+ [1][0][2][0][RTW89_WW][2] = 56,
+ [1][0][2][0][RTW89_WW][3] = 56,
+ [1][0][2][0][RTW89_WW][4] = 56,
+ [1][0][2][0][RTW89_WW][5] = 56,
+ [1][0][2][0][RTW89_WW][6] = 56,
+ [1][0][2][0][RTW89_WW][7] = 56,
+ [1][0][2][0][RTW89_WW][8] = 56,
+ [1][0][2][0][RTW89_WW][9] = 56,
+ [1][0][2][0][RTW89_WW][10] = 48,
[1][0][2][0][RTW89_WW][11] = 0,
[1][0][2][0][RTW89_WW][12] = 0,
[1][0][2][0][RTW89_WW][13] = 0,
@@ -9237,7 +9296,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][0] = 82,
[0][0][0][0][RTW89_KCC][0] = 68,
[0][0][0][0][RTW89_ACMA][0] = 58,
- [0][0][0][0][RTW89_CN][0] = 60,
+ [0][0][0][0][RTW89_CN][0] = 56,
[0][0][0][0][RTW89_UK][0] = 58,
[0][0][0][0][RTW89_FCC][1] = 82,
[0][0][0][0][RTW89_ETSI][1] = 58,
@@ -9245,7 +9304,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][1] = 82,
[0][0][0][0][RTW89_KCC][1] = 68,
[0][0][0][0][RTW89_ACMA][1] = 58,
- [0][0][0][0][RTW89_CN][1] = 60,
+ [0][0][0][0][RTW89_CN][1] = 56,
[0][0][0][0][RTW89_UK][1] = 58,
[0][0][0][0][RTW89_FCC][2] = 82,
[0][0][0][0][RTW89_ETSI][2] = 58,
@@ -9253,7 +9312,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][2] = 82,
[0][0][0][0][RTW89_KCC][2] = 68,
[0][0][0][0][RTW89_ACMA][2] = 58,
- [0][0][0][0][RTW89_CN][2] = 60,
+ [0][0][0][0][RTW89_CN][2] = 56,
[0][0][0][0][RTW89_UK][2] = 58,
[0][0][0][0][RTW89_FCC][3] = 82,
[0][0][0][0][RTW89_ETSI][3] = 58,
@@ -9261,7 +9320,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][3] = 82,
[0][0][0][0][RTW89_KCC][3] = 68,
[0][0][0][0][RTW89_ACMA][3] = 58,
- [0][0][0][0][RTW89_CN][3] = 60,
+ [0][0][0][0][RTW89_CN][3] = 56,
[0][0][0][0][RTW89_UK][3] = 58,
[0][0][0][0][RTW89_FCC][4] = 82,
[0][0][0][0][RTW89_ETSI][4] = 58,
@@ -9269,7 +9328,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][4] = 82,
[0][0][0][0][RTW89_KCC][4] = 68,
[0][0][0][0][RTW89_ACMA][4] = 58,
- [0][0][0][0][RTW89_CN][4] = 60,
+ [0][0][0][0][RTW89_CN][4] = 56,
[0][0][0][0][RTW89_UK][4] = 58,
[0][0][0][0][RTW89_FCC][5] = 82,
[0][0][0][0][RTW89_ETSI][5] = 58,
@@ -9277,7 +9336,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][5] = 82,
[0][0][0][0][RTW89_KCC][5] = 68,
[0][0][0][0][RTW89_ACMA][5] = 58,
- [0][0][0][0][RTW89_CN][5] = 60,
+ [0][0][0][0][RTW89_CN][5] = 56,
[0][0][0][0][RTW89_UK][5] = 58,
[0][0][0][0][RTW89_FCC][6] = 82,
[0][0][0][0][RTW89_ETSI][6] = 58,
@@ -9285,7 +9344,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][6] = 82,
[0][0][0][0][RTW89_KCC][6] = 68,
[0][0][0][0][RTW89_ACMA][6] = 58,
- [0][0][0][0][RTW89_CN][6] = 60,
+ [0][0][0][0][RTW89_CN][6] = 56,
[0][0][0][0][RTW89_UK][6] = 58,
[0][0][0][0][RTW89_FCC][7] = 82,
[0][0][0][0][RTW89_ETSI][7] = 58,
@@ -9293,7 +9352,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][7] = 82,
[0][0][0][0][RTW89_KCC][7] = 68,
[0][0][0][0][RTW89_ACMA][7] = 58,
- [0][0][0][0][RTW89_CN][7] = 60,
+ [0][0][0][0][RTW89_CN][7] = 56,
[0][0][0][0][RTW89_UK][7] = 58,
[0][0][0][0][RTW89_FCC][8] = 82,
[0][0][0][0][RTW89_ETSI][8] = 58,
@@ -9301,7 +9360,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][8] = 82,
[0][0][0][0][RTW89_KCC][8] = 68,
[0][0][0][0][RTW89_ACMA][8] = 58,
- [0][0][0][0][RTW89_CN][8] = 60,
+ [0][0][0][0][RTW89_CN][8] = 56,
[0][0][0][0][RTW89_UK][8] = 58,
[0][0][0][0][RTW89_FCC][9] = 82,
[0][0][0][0][RTW89_ETSI][9] = 58,
@@ -9309,7 +9368,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][9] = 82,
[0][0][0][0][RTW89_KCC][9] = 68,
[0][0][0][0][RTW89_ACMA][9] = 58,
- [0][0][0][0][RTW89_CN][9] = 60,
+ [0][0][0][0][RTW89_CN][9] = 56,
[0][0][0][0][RTW89_UK][9] = 58,
[0][0][0][0][RTW89_FCC][10] = 80,
[0][0][0][0][RTW89_ETSI][10] = 58,
@@ -9317,7 +9376,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][10] = 80,
[0][0][0][0][RTW89_KCC][10] = 68,
[0][0][0][0][RTW89_ACMA][10] = 58,
- [0][0][0][0][RTW89_CN][10] = 60,
+ [0][0][0][0][RTW89_CN][10] = 56,
[0][0][0][0][RTW89_UK][10] = 58,
[0][0][0][0][RTW89_FCC][11] = 60,
[0][0][0][0][RTW89_ETSI][11] = 58,
@@ -9325,7 +9384,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][11] = 60,
[0][0][0][0][RTW89_KCC][11] = 68,
[0][0][0][0][RTW89_ACMA][11] = 58,
- [0][0][0][0][RTW89_CN][11] = 60,
+ [0][0][0][0][RTW89_CN][11] = 56,
[0][0][0][0][RTW89_UK][11] = 58,
[0][0][0][0][RTW89_FCC][12] = 52,
[0][0][0][0][RTW89_ETSI][12] = 58,
@@ -9333,7 +9392,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][12] = 52,
[0][0][0][0][RTW89_KCC][12] = 68,
[0][0][0][0][RTW89_ACMA][12] = 58,
- [0][0][0][0][RTW89_CN][12] = 60,
+ [0][0][0][0][RTW89_CN][12] = 42,
[0][0][0][0][RTW89_UK][12] = 58,
[0][0][0][0][RTW89_FCC][13] = 127,
[0][0][0][0][RTW89_ETSI][13] = 127,
@@ -9477,7 +9536,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][2] = 127,
[1][0][0][0][RTW89_KCC][2] = 68,
[1][0][0][0][RTW89_ACMA][2] = 58,
- [1][0][0][0][RTW89_CN][2] = 60,
+ [1][0][0][0][RTW89_CN][2] = 56,
[1][0][0][0][RTW89_UK][2] = 58,
[1][0][0][0][RTW89_FCC][3] = 127,
[1][0][0][0][RTW89_ETSI][3] = 58,
@@ -9485,7 +9544,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][3] = 127,
[1][0][0][0][RTW89_KCC][3] = 68,
[1][0][0][0][RTW89_ACMA][3] = 58,
- [1][0][0][0][RTW89_CN][3] = 60,
+ [1][0][0][0][RTW89_CN][3] = 56,
[1][0][0][0][RTW89_UK][3] = 58,
[1][0][0][0][RTW89_FCC][4] = 127,
[1][0][0][0][RTW89_ETSI][4] = 58,
@@ -9493,7 +9552,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][4] = 127,
[1][0][0][0][RTW89_KCC][4] = 68,
[1][0][0][0][RTW89_ACMA][4] = 58,
- [1][0][0][0][RTW89_CN][4] = 60,
+ [1][0][0][0][RTW89_CN][4] = 56,
[1][0][0][0][RTW89_UK][4] = 58,
[1][0][0][0][RTW89_FCC][5] = 127,
[1][0][0][0][RTW89_ETSI][5] = 58,
@@ -9501,7 +9560,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][5] = 127,
[1][0][0][0][RTW89_KCC][5] = 68,
[1][0][0][0][RTW89_ACMA][5] = 58,
- [1][0][0][0][RTW89_CN][5] = 60,
+ [1][0][0][0][RTW89_CN][5] = 56,
[1][0][0][0][RTW89_UK][5] = 58,
[1][0][0][0][RTW89_FCC][6] = 127,
[1][0][0][0][RTW89_ETSI][6] = 58,
@@ -9509,7 +9568,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][6] = 127,
[1][0][0][0][RTW89_KCC][6] = 68,
[1][0][0][0][RTW89_ACMA][6] = 58,
- [1][0][0][0][RTW89_CN][6] = 60,
+ [1][0][0][0][RTW89_CN][6] = 56,
[1][0][0][0][RTW89_UK][6] = 58,
[1][0][0][0][RTW89_FCC][7] = 127,
[1][0][0][0][RTW89_ETSI][7] = 58,
@@ -9517,7 +9576,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][7] = 127,
[1][0][0][0][RTW89_KCC][7] = 68,
[1][0][0][0][RTW89_ACMA][7] = 58,
- [1][0][0][0][RTW89_CN][7] = 60,
+ [1][0][0][0][RTW89_CN][7] = 56,
[1][0][0][0][RTW89_UK][7] = 58,
[1][0][0][0][RTW89_FCC][8] = 127,
[1][0][0][0][RTW89_ETSI][8] = 58,
@@ -9525,7 +9584,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][8] = 127,
[1][0][0][0][RTW89_KCC][8] = 68,
[1][0][0][0][RTW89_ACMA][8] = 58,
- [1][0][0][0][RTW89_CN][8] = 60,
+ [1][0][0][0][RTW89_CN][8] = 56,
[1][0][0][0][RTW89_UK][8] = 58,
[1][0][0][0][RTW89_FCC][9] = 127,
[1][0][0][0][RTW89_ETSI][9] = 58,
@@ -9533,7 +9592,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][9] = 127,
[1][0][0][0][RTW89_KCC][9] = 68,
[1][0][0][0][RTW89_ACMA][9] = 58,
- [1][0][0][0][RTW89_CN][9] = 60,
+ [1][0][0][0][RTW89_CN][9] = 56,
[1][0][0][0][RTW89_UK][9] = 58,
[1][0][0][0][RTW89_FCC][10] = 127,
[1][0][0][0][RTW89_ETSI][10] = 58,
@@ -9541,7 +9600,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_IC][10] = 127,
[1][0][0][0][RTW89_KCC][10] = 68,
[1][0][0][0][RTW89_ACMA][10] = 58,
- [1][0][0][0][RTW89_CN][10] = 60,
+ [1][0][0][0][RTW89_CN][10] = 42,
[1][0][0][0][RTW89_UK][10] = 58,
[1][0][0][0][RTW89_FCC][11] = 127,
[1][0][0][0][RTW89_ETSI][11] = 127,
@@ -9781,7 +9840,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][12] = 64,
[0][0][1][0][RTW89_KCC][12] = 74,
[0][0][1][0][RTW89_ACMA][12] = 58,
- [0][0][1][0][RTW89_CN][12] = 60,
+ [0][0][1][0][RTW89_CN][12] = 40,
[0][0][1][0][RTW89_UK][12] = 58,
[0][0][1][0][RTW89_FCC][13] = 127,
[0][0][1][0][RTW89_ETSI][13] = 127,
@@ -9909,7 +9968,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][0] = 78,
[0][0][2][0][RTW89_KCC][0] = 76,
[0][0][2][0][RTW89_ACMA][0] = 60,
- [0][0][2][0][RTW89_CN][0] = 60,
+ [0][0][2][0][RTW89_CN][0] = 58,
[0][0][2][0][RTW89_UK][0] = 60,
[0][0][2][0][RTW89_FCC][1] = 78,
[0][0][2][0][RTW89_ETSI][1] = 60,
@@ -9917,7 +9976,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][1] = 78,
[0][0][2][0][RTW89_KCC][1] = 76,
[0][0][2][0][RTW89_ACMA][1] = 60,
- [0][0][2][0][RTW89_CN][1] = 60,
+ [0][0][2][0][RTW89_CN][1] = 58,
[0][0][2][0][RTW89_UK][1] = 60,
[0][0][2][0][RTW89_FCC][2] = 80,
[0][0][2][0][RTW89_ETSI][2] = 60,
@@ -9925,7 +9984,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][2] = 80,
[0][0][2][0][RTW89_KCC][2] = 76,
[0][0][2][0][RTW89_ACMA][2] = 60,
- [0][0][2][0][RTW89_CN][2] = 60,
+ [0][0][2][0][RTW89_CN][2] = 58,
[0][0][2][0][RTW89_UK][2] = 60,
[0][0][2][0][RTW89_FCC][3] = 80,
[0][0][2][0][RTW89_ETSI][3] = 60,
@@ -9933,7 +9992,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][3] = 80,
[0][0][2][0][RTW89_KCC][3] = 76,
[0][0][2][0][RTW89_ACMA][3] = 60,
- [0][0][2][0][RTW89_CN][3] = 60,
+ [0][0][2][0][RTW89_CN][3] = 58,
[0][0][2][0][RTW89_UK][3] = 60,
[0][0][2][0][RTW89_FCC][4] = 80,
[0][0][2][0][RTW89_ETSI][4] = 60,
@@ -9941,7 +10000,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][4] = 80,
[0][0][2][0][RTW89_KCC][4] = 76,
[0][0][2][0][RTW89_ACMA][4] = 60,
- [0][0][2][0][RTW89_CN][4] = 60,
+ [0][0][2][0][RTW89_CN][4] = 58,
[0][0][2][0][RTW89_UK][4] = 60,
[0][0][2][0][RTW89_FCC][5] = 80,
[0][0][2][0][RTW89_ETSI][5] = 60,
@@ -9949,7 +10008,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][5] = 80,
[0][0][2][0][RTW89_KCC][5] = 76,
[0][0][2][0][RTW89_ACMA][5] = 60,
- [0][0][2][0][RTW89_CN][5] = 60,
+ [0][0][2][0][RTW89_CN][5] = 58,
[0][0][2][0][RTW89_UK][5] = 60,
[0][0][2][0][RTW89_FCC][6] = 80,
[0][0][2][0][RTW89_ETSI][6] = 60,
@@ -9957,7 +10016,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][6] = 80,
[0][0][2][0][RTW89_KCC][6] = 76,
[0][0][2][0][RTW89_ACMA][6] = 60,
- [0][0][2][0][RTW89_CN][6] = 60,
+ [0][0][2][0][RTW89_CN][6] = 58,
[0][0][2][0][RTW89_UK][6] = 60,
[0][0][2][0][RTW89_FCC][7] = 80,
[0][0][2][0][RTW89_ETSI][7] = 60,
@@ -9965,7 +10024,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][7] = 80,
[0][0][2][0][RTW89_KCC][7] = 76,
[0][0][2][0][RTW89_ACMA][7] = 60,
- [0][0][2][0][RTW89_CN][7] = 60,
+ [0][0][2][0][RTW89_CN][7] = 58,
[0][0][2][0][RTW89_UK][7] = 60,
[0][0][2][0][RTW89_FCC][8] = 78,
[0][0][2][0][RTW89_ETSI][8] = 60,
@@ -9973,7 +10032,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][8] = 78,
[0][0][2][0][RTW89_KCC][8] = 76,
[0][0][2][0][RTW89_ACMA][8] = 60,
- [0][0][2][0][RTW89_CN][8] = 60,
+ [0][0][2][0][RTW89_CN][8] = 58,
[0][0][2][0][RTW89_UK][8] = 60,
[0][0][2][0][RTW89_FCC][9] = 74,
[0][0][2][0][RTW89_ETSI][9] = 60,
@@ -9981,7 +10040,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][9] = 74,
[0][0][2][0][RTW89_KCC][9] = 76,
[0][0][2][0][RTW89_ACMA][9] = 60,
- [0][0][2][0][RTW89_CN][9] = 60,
+ [0][0][2][0][RTW89_CN][9] = 58,
[0][0][2][0][RTW89_UK][9] = 60,
[0][0][2][0][RTW89_FCC][10] = 74,
[0][0][2][0][RTW89_ETSI][10] = 60,
@@ -9989,7 +10048,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][10] = 74,
[0][0][2][0][RTW89_KCC][10] = 76,
[0][0][2][0][RTW89_ACMA][10] = 60,
- [0][0][2][0][RTW89_CN][10] = 60,
+ [0][0][2][0][RTW89_CN][10] = 58,
[0][0][2][0][RTW89_UK][10] = 60,
[0][0][2][0][RTW89_FCC][11] = 68,
[0][0][2][0][RTW89_ETSI][11] = 60,
@@ -9997,7 +10056,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][11] = 68,
[0][0][2][0][RTW89_KCC][11] = 76,
[0][0][2][0][RTW89_ACMA][11] = 60,
- [0][0][2][0][RTW89_CN][11] = 60,
+ [0][0][2][0][RTW89_CN][11] = 58,
[0][0][2][0][RTW89_UK][11] = 60,
[0][0][2][0][RTW89_FCC][12] = 68,
[0][0][2][0][RTW89_ETSI][12] = 60,
@@ -10005,7 +10064,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][12] = 68,
[0][0][2][0][RTW89_KCC][12] = 76,
[0][0][2][0][RTW89_ACMA][12] = 60,
- [0][0][2][0][RTW89_CN][12] = 60,
+ [0][0][2][0][RTW89_CN][12] = 38,
[0][0][2][0][RTW89_UK][12] = 60,
[0][0][2][0][RTW89_FCC][13] = 127,
[0][0][2][0][RTW89_ETSI][13] = 127,
@@ -10261,7 +10320,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][2] = 70,
[1][0][2][0][RTW89_KCC][2] = 76,
[1][0][2][0][RTW89_ACMA][2] = 58,
- [1][0][2][0][RTW89_CN][2] = 60,
+ [1][0][2][0][RTW89_CN][2] = 56,
[1][0][2][0][RTW89_UK][2] = 58,
[1][0][2][0][RTW89_FCC][3] = 70,
[1][0][2][0][RTW89_ETSI][3] = 58,
@@ -10269,7 +10328,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][3] = 70,
[1][0][2][0][RTW89_KCC][3] = 76,
[1][0][2][0][RTW89_ACMA][3] = 58,
- [1][0][2][0][RTW89_CN][3] = 60,
+ [1][0][2][0][RTW89_CN][3] = 56,
[1][0][2][0][RTW89_UK][3] = 58,
[1][0][2][0][RTW89_FCC][4] = 74,
[1][0][2][0][RTW89_ETSI][4] = 58,
@@ -10277,7 +10336,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][4] = 74,
[1][0][2][0][RTW89_KCC][4] = 76,
[1][0][2][0][RTW89_ACMA][4] = 58,
- [1][0][2][0][RTW89_CN][4] = 60,
+ [1][0][2][0][RTW89_CN][4] = 56,
[1][0][2][0][RTW89_UK][4] = 58,
[1][0][2][0][RTW89_FCC][5] = 76,
[1][0][2][0][RTW89_ETSI][5] = 58,
@@ -10285,7 +10344,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][5] = 76,
[1][0][2][0][RTW89_KCC][5] = 76,
[1][0][2][0][RTW89_ACMA][5] = 58,
- [1][0][2][0][RTW89_CN][5] = 60,
+ [1][0][2][0][RTW89_CN][5] = 56,
[1][0][2][0][RTW89_UK][5] = 58,
[1][0][2][0][RTW89_FCC][6] = 76,
[1][0][2][0][RTW89_ETSI][6] = 58,
@@ -10293,7 +10352,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][6] = 76,
[1][0][2][0][RTW89_KCC][6] = 76,
[1][0][2][0][RTW89_ACMA][6] = 58,
- [1][0][2][0][RTW89_CN][6] = 60,
+ [1][0][2][0][RTW89_CN][6] = 56,
[1][0][2][0][RTW89_UK][6] = 58,
[1][0][2][0][RTW89_FCC][7] = 76,
[1][0][2][0][RTW89_ETSI][7] = 58,
@@ -10301,7 +10360,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][7] = 76,
[1][0][2][0][RTW89_KCC][7] = 76,
[1][0][2][0][RTW89_ACMA][7] = 58,
- [1][0][2][0][RTW89_CN][7] = 60,
+ [1][0][2][0][RTW89_CN][7] = 56,
[1][0][2][0][RTW89_UK][7] = 58,
[1][0][2][0][RTW89_FCC][8] = 78,
[1][0][2][0][RTW89_ETSI][8] = 58,
@@ -10309,7 +10368,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][8] = 78,
[1][0][2][0][RTW89_KCC][8] = 76,
[1][0][2][0][RTW89_ACMA][8] = 58,
- [1][0][2][0][RTW89_CN][8] = 60,
+ [1][0][2][0][RTW89_CN][8] = 56,
[1][0][2][0][RTW89_UK][8] = 58,
[1][0][2][0][RTW89_FCC][9] = 74,
[1][0][2][0][RTW89_ETSI][9] = 58,
@@ -10317,7 +10376,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][9] = 74,
[1][0][2][0][RTW89_KCC][9] = 76,
[1][0][2][0][RTW89_ACMA][9] = 58,
- [1][0][2][0][RTW89_CN][9] = 60,
+ [1][0][2][0][RTW89_CN][9] = 56,
[1][0][2][0][RTW89_UK][9] = 58,
[1][0][2][0][RTW89_FCC][10] = 68,
[1][0][2][0][RTW89_ETSI][10] = 58,
@@ -10325,7 +10384,7 @@ const s8 rtw89_8851b_txpwr_lmt_2g_type2[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][10] = 68,
[1][0][2][0][RTW89_KCC][10] = 76,
[1][0][2][0][RTW89_ACMA][10] = 58,
- [1][0][2][0][RTW89_CN][10] = 60,
+ [1][0][2][0][RTW89_CN][10] = 48,
[1][0][2][0][RTW89_UK][10] = 58,
[1][0][2][0][RTW89_FCC][11] = 127,
[1][0][2][0][RTW89_ETSI][11] = 127,
@@ -10606,9 +10665,9 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_WW][42] = 30,
[0][0][1][0][RTW89_WW][44] = 30,
[0][0][1][0][RTW89_WW][46] = 30,
- [0][0][1][0][RTW89_WW][48] = 68,
- [0][0][1][0][RTW89_WW][50] = 68,
- [0][0][1][0][RTW89_WW][52] = 68,
+ [0][0][1][0][RTW89_WW][48] = 72,
+ [0][0][1][0][RTW89_WW][50] = 72,
+ [0][0][1][0][RTW89_WW][52] = 72,
[0][1][1][0][RTW89_WW][0] = 0,
[0][1][1][0][RTW89_WW][2] = 0,
[0][1][1][0][RTW89_WW][4] = 0,
@@ -10637,14 +10696,14 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_WW][48] = 0,
[0][1][1][0][RTW89_WW][50] = 0,
[0][1][1][0][RTW89_WW][52] = 0,
- [0][0][2][0][RTW89_WW][0] = 62,
- [0][0][2][0][RTW89_WW][2] = 62,
- [0][0][2][0][RTW89_WW][4] = 62,
+ [0][0][2][0][RTW89_WW][0] = 60,
+ [0][0][2][0][RTW89_WW][2] = 60,
+ [0][0][2][0][RTW89_WW][4] = 60,
[0][0][2][0][RTW89_WW][6] = 54,
- [0][0][2][0][RTW89_WW][8] = 62,
- [0][0][2][0][RTW89_WW][10] = 62,
- [0][0][2][0][RTW89_WW][12] = 62,
- [0][0][2][0][RTW89_WW][14] = 62,
+ [0][0][2][0][RTW89_WW][8] = 60,
+ [0][0][2][0][RTW89_WW][10] = 60,
+ [0][0][2][0][RTW89_WW][12] = 60,
+ [0][0][2][0][RTW89_WW][14] = 60,
[0][0][2][0][RTW89_WW][15] = 60,
[0][0][2][0][RTW89_WW][17] = 62,
[0][0][2][0][RTW89_WW][19] = 62,
@@ -10662,9 +10721,9 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_WW][42] = 30,
[0][0][2][0][RTW89_WW][44] = 30,
[0][0][2][0][RTW89_WW][46] = 30,
- [0][0][2][0][RTW89_WW][48] = 70,
- [0][0][2][0][RTW89_WW][50] = 70,
- [0][0][2][0][RTW89_WW][52] = 70,
+ [0][0][2][0][RTW89_WW][48] = 74,
+ [0][0][2][0][RTW89_WW][50] = 74,
+ [0][0][2][0][RTW89_WW][52] = 74,
[0][1][2][0][RTW89_WW][0] = 0,
[0][1][2][0][RTW89_WW][2] = 0,
[0][1][2][0][RTW89_WW][4] = 0,
@@ -10721,11 +10780,11 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_WW][48] = 0,
[0][1][2][1][RTW89_WW][50] = 0,
[0][1][2][1][RTW89_WW][52] = 0,
- [1][0][2][0][RTW89_WW][1] = 60,
+ [1][0][2][0][RTW89_WW][1] = 64,
[1][0][2][0][RTW89_WW][5] = 62,
- [1][0][2][0][RTW89_WW][9] = 64,
- [1][0][2][0][RTW89_WW][13] = 60,
- [1][0][2][0][RTW89_WW][16] = 62,
+ [1][0][2][0][RTW89_WW][9] = 58,
+ [1][0][2][0][RTW89_WW][13] = 58,
+ [1][0][2][0][RTW89_WW][16] = 66,
[1][0][2][0][RTW89_WW][20] = 66,
[1][0][2][0][RTW89_WW][24] = 66,
[1][0][2][0][RTW89_WW][28] = 66,
@@ -10733,8 +10792,8 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_WW][36] = 76,
[1][0][2][0][RTW89_WW][39] = 30,
[1][0][2][0][RTW89_WW][43] = 30,
- [1][0][2][0][RTW89_WW][47] = 76,
- [1][0][2][0][RTW89_WW][51] = 76,
+ [1][0][2][0][RTW89_WW][47] = 80,
+ [1][0][2][0][RTW89_WW][51] = 80,
[1][1][2][0][RTW89_WW][1] = 0,
[1][1][2][0][RTW89_WW][5] = 0,
[1][1][2][0][RTW89_WW][9] = 0,
@@ -10764,12 +10823,12 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_WW][47] = 0,
[1][1][2][1][RTW89_WW][51] = 0,
[2][0][2][0][RTW89_WW][3] = 60,
- [2][0][2][0][RTW89_WW][11] = 58,
- [2][0][2][0][RTW89_WW][18] = 62,
+ [2][0][2][0][RTW89_WW][11] = 54,
+ [2][0][2][0][RTW89_WW][18] = 64,
[2][0][2][0][RTW89_WW][26] = 64,
[2][0][2][0][RTW89_WW][34] = 68,
[2][0][2][0][RTW89_WW][41] = 30,
- [2][0][2][0][RTW89_WW][49] = 68,
+ [2][0][2][0][RTW89_WW][49] = 72,
[2][1][2][0][RTW89_WW][3] = 0,
[2][1][2][0][RTW89_WW][11] = 0,
[2][1][2][0][RTW89_WW][18] = 0,
@@ -10784,8 +10843,8 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_WW][34] = 0,
[2][1][2][1][RTW89_WW][41] = 0,
[2][1][2][1][RTW89_WW][49] = 0,
- [3][0][2][0][RTW89_WW][7] = 58,
- [3][0][2][0][RTW89_WW][22] = 58,
+ [3][0][2][0][RTW89_WW][7] = 0,
+ [3][0][2][0][RTW89_WW][22] = 0,
[3][0][2][0][RTW89_WW][45] = 0,
[3][1][2][0][RTW89_WW][7] = 0,
[3][1][2][0][RTW89_WW][22] = 0,
@@ -10793,7 +10852,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_WW][7] = 0,
[3][1][2][1][RTW89_WW][22] = 0,
[3][1][2][1][RTW89_WW][45] = 0,
- [0][0][1][0][RTW89_FCC][0] = 74,
+ [0][0][1][0][RTW89_FCC][0] = 78,
[0][0][1][0][RTW89_ETSI][0] = 58,
[0][0][1][0][RTW89_MKK][0] = 60,
[0][0][1][0][RTW89_IC][0] = 62,
@@ -10849,7 +10908,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_ACMA][12] = 58,
[0][0][1][0][RTW89_CN][12] = 60,
[0][0][1][0][RTW89_UK][12] = 58,
- [0][0][1][0][RTW89_FCC][14] = 72,
+ [0][0][1][0][RTW89_FCC][14] = 76,
[0][0][1][0][RTW89_ETSI][14] = 58,
[0][0][1][0][RTW89_MKK][14] = 60,
[0][0][1][0][RTW89_IC][14] = 62,
@@ -10857,10 +10916,10 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_ACMA][14] = 58,
[0][0][1][0][RTW89_CN][14] = 60,
[0][0][1][0][RTW89_UK][14] = 58,
- [0][0][1][0][RTW89_FCC][15] = 72,
+ [0][0][1][0][RTW89_FCC][15] = 76,
[0][0][1][0][RTW89_ETSI][15] = 58,
[0][0][1][0][RTW89_MKK][15] = 74,
- [0][0][1][0][RTW89_IC][15] = 72,
+ [0][0][1][0][RTW89_IC][15] = 76,
[0][0][1][0][RTW89_KCC][15] = 74,
[0][0][1][0][RTW89_ACMA][15] = 58,
[0][0][1][0][RTW89_CN][15] = 127,
@@ -10937,10 +10996,10 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_ACMA][33] = 60,
[0][0][1][0][RTW89_CN][33] = 127,
[0][0][1][0][RTW89_UK][33] = 60,
- [0][0][1][0][RTW89_FCC][35] = 66,
+ [0][0][1][0][RTW89_FCC][35] = 70,
[0][0][1][0][RTW89_ETSI][35] = 60,
[0][0][1][0][RTW89_MKK][35] = 74,
- [0][0][1][0][RTW89_IC][35] = 66,
+ [0][0][1][0][RTW89_IC][35] = 70,
[0][0][1][0][RTW89_KCC][35] = 74,
[0][0][1][0][RTW89_ACMA][35] = 60,
[0][0][1][0][RTW89_CN][35] = 127,
@@ -10959,7 +11018,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][38] = 78,
[0][0][1][0][RTW89_KCC][38] = 70,
[0][0][1][0][RTW89_ACMA][38] = 74,
- [0][0][1][0][RTW89_CN][38] = 74,
+ [0][0][1][0][RTW89_CN][38] = 64,
[0][0][1][0][RTW89_UK][38] = 58,
[0][0][1][0][RTW89_FCC][40] = 78,
[0][0][1][0][RTW89_ETSI][40] = 30,
@@ -10967,7 +11026,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][40] = 78,
[0][0][1][0][RTW89_KCC][40] = 74,
[0][0][1][0][RTW89_ACMA][40] = 74,
- [0][0][1][0][RTW89_CN][40] = 74,
+ [0][0][1][0][RTW89_CN][40] = 64,
[0][0][1][0][RTW89_UK][40] = 58,
[0][0][1][0][RTW89_FCC][42] = 78,
[0][0][1][0][RTW89_ETSI][42] = 30,
@@ -10975,7 +11034,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][42] = 78,
[0][0][1][0][RTW89_KCC][42] = 74,
[0][0][1][0][RTW89_ACMA][42] = 74,
- [0][0][1][0][RTW89_CN][42] = 74,
+ [0][0][1][0][RTW89_CN][42] = 64,
[0][0][1][0][RTW89_UK][42] = 58,
[0][0][1][0][RTW89_FCC][44] = 78,
[0][0][1][0][RTW89_ETSI][44] = 30,
@@ -10983,7 +11042,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][44] = 78,
[0][0][1][0][RTW89_KCC][44] = 74,
[0][0][1][0][RTW89_ACMA][44] = 74,
- [0][0][1][0][RTW89_CN][44] = 74,
+ [0][0][1][0][RTW89_CN][44] = 62,
[0][0][1][0][RTW89_UK][44] = 58,
[0][0][1][0][RTW89_FCC][46] = 78,
[0][0][1][0][RTW89_ETSI][46] = 30,
@@ -10991,9 +11050,9 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_IC][46] = 78,
[0][0][1][0][RTW89_KCC][46] = 74,
[0][0][1][0][RTW89_ACMA][46] = 74,
- [0][0][1][0][RTW89_CN][46] = 74,
+ [0][0][1][0][RTW89_CN][46] = 62,
[0][0][1][0][RTW89_UK][46] = 58,
- [0][0][1][0][RTW89_FCC][48] = 68,
+ [0][0][1][0][RTW89_FCC][48] = 72,
[0][0][1][0][RTW89_ETSI][48] = 127,
[0][0][1][0][RTW89_MKK][48] = 127,
[0][0][1][0][RTW89_IC][48] = 127,
@@ -11001,7 +11060,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_ACMA][48] = 127,
[0][0][1][0][RTW89_CN][48] = 127,
[0][0][1][0][RTW89_UK][48] = 127,
- [0][0][1][0][RTW89_FCC][50] = 68,
+ [0][0][1][0][RTW89_FCC][50] = 72,
[0][0][1][0][RTW89_ETSI][50] = 127,
[0][0][1][0][RTW89_MKK][50] = 127,
[0][0][1][0][RTW89_IC][50] = 127,
@@ -11009,7 +11068,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_ACMA][50] = 127,
[0][0][1][0][RTW89_CN][50] = 127,
[0][0][1][0][RTW89_UK][50] = 127,
- [0][0][1][0][RTW89_FCC][52] = 68,
+ [0][0][1][0][RTW89_FCC][52] = 72,
[0][0][1][0][RTW89_ETSI][52] = 127,
[0][0][1][0][RTW89_MKK][52] = 127,
[0][0][1][0][RTW89_IC][52] = 127,
@@ -11241,13 +11300,13 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_ACMA][52] = 127,
[0][1][1][0][RTW89_CN][52] = 127,
[0][1][1][0][RTW89_UK][52] = 127,
- [0][0][2][0][RTW89_FCC][0] = 72,
+ [0][0][2][0][RTW89_FCC][0] = 76,
[0][0][2][0][RTW89_ETSI][0] = 62,
[0][0][2][0][RTW89_MKK][0] = 62,
[0][0][2][0][RTW89_IC][0] = 64,
[0][0][2][0][RTW89_KCC][0] = 74,
[0][0][2][0][RTW89_ACMA][0] = 62,
- [0][0][2][0][RTW89_CN][0] = 62,
+ [0][0][2][0][RTW89_CN][0] = 60,
[0][0][2][0][RTW89_UK][0] = 62,
[0][0][2][0][RTW89_FCC][2] = 78,
[0][0][2][0][RTW89_ETSI][2] = 62,
@@ -11255,7 +11314,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][2] = 64,
[0][0][2][0][RTW89_KCC][2] = 74,
[0][0][2][0][RTW89_ACMA][2] = 62,
- [0][0][2][0][RTW89_CN][2] = 62,
+ [0][0][2][0][RTW89_CN][2] = 60,
[0][0][2][0][RTW89_UK][2] = 62,
[0][0][2][0][RTW89_FCC][4] = 78,
[0][0][2][0][RTW89_ETSI][4] = 62,
@@ -11263,7 +11322,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][4] = 64,
[0][0][2][0][RTW89_KCC][4] = 74,
[0][0][2][0][RTW89_ACMA][4] = 62,
- [0][0][2][0][RTW89_CN][4] = 62,
+ [0][0][2][0][RTW89_CN][4] = 60,
[0][0][2][0][RTW89_UK][4] = 62,
[0][0][2][0][RTW89_FCC][6] = 78,
[0][0][2][0][RTW89_ETSI][6] = 62,
@@ -11271,7 +11330,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][6] = 64,
[0][0][2][0][RTW89_KCC][6] = 54,
[0][0][2][0][RTW89_ACMA][6] = 62,
- [0][0][2][0][RTW89_CN][6] = 62,
+ [0][0][2][0][RTW89_CN][6] = 60,
[0][0][2][0][RTW89_UK][6] = 62,
[0][0][2][0][RTW89_FCC][8] = 78,
[0][0][2][0][RTW89_ETSI][8] = 62,
@@ -11279,7 +11338,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][8] = 64,
[0][0][2][0][RTW89_KCC][8] = 74,
[0][0][2][0][RTW89_ACMA][8] = 62,
- [0][0][2][0][RTW89_CN][8] = 62,
+ [0][0][2][0][RTW89_CN][8] = 60,
[0][0][2][0][RTW89_UK][8] = 62,
[0][0][2][0][RTW89_FCC][10] = 78,
[0][0][2][0][RTW89_ETSI][10] = 62,
@@ -11287,7 +11346,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][10] = 64,
[0][0][2][0][RTW89_KCC][10] = 74,
[0][0][2][0][RTW89_ACMA][10] = 62,
- [0][0][2][0][RTW89_CN][10] = 62,
+ [0][0][2][0][RTW89_CN][10] = 60,
[0][0][2][0][RTW89_UK][10] = 62,
[0][0][2][0][RTW89_FCC][12] = 78,
[0][0][2][0][RTW89_ETSI][12] = 62,
@@ -11295,20 +11354,20 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][12] = 64,
[0][0][2][0][RTW89_KCC][12] = 74,
[0][0][2][0][RTW89_ACMA][12] = 62,
- [0][0][2][0][RTW89_CN][12] = 62,
+ [0][0][2][0][RTW89_CN][12] = 60,
[0][0][2][0][RTW89_UK][12] = 62,
- [0][0][2][0][RTW89_FCC][14] = 70,
+ [0][0][2][0][RTW89_FCC][14] = 74,
[0][0][2][0][RTW89_ETSI][14] = 62,
[0][0][2][0][RTW89_MKK][14] = 62,
[0][0][2][0][RTW89_IC][14] = 64,
[0][0][2][0][RTW89_KCC][14] = 74,
[0][0][2][0][RTW89_ACMA][14] = 62,
- [0][0][2][0][RTW89_CN][14] = 62,
+ [0][0][2][0][RTW89_CN][14] = 60,
[0][0][2][0][RTW89_UK][14] = 62,
- [0][0][2][0][RTW89_FCC][15] = 70,
+ [0][0][2][0][RTW89_FCC][15] = 74,
[0][0][2][0][RTW89_ETSI][15] = 60,
[0][0][2][0][RTW89_MKK][15] = 74,
- [0][0][2][0][RTW89_IC][15] = 70,
+ [0][0][2][0][RTW89_IC][15] = 74,
[0][0][2][0][RTW89_KCC][15] = 74,
[0][0][2][0][RTW89_ACMA][15] = 60,
[0][0][2][0][RTW89_CN][15] = 127,
@@ -11385,10 +11444,10 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_ACMA][33] = 62,
[0][0][2][0][RTW89_CN][33] = 127,
[0][0][2][0][RTW89_UK][33] = 62,
- [0][0][2][0][RTW89_FCC][35] = 68,
+ [0][0][2][0][RTW89_FCC][35] = 72,
[0][0][2][0][RTW89_ETSI][35] = 62,
[0][0][2][0][RTW89_MKK][35] = 74,
- [0][0][2][0][RTW89_IC][35] = 68,
+ [0][0][2][0][RTW89_IC][35] = 72,
[0][0][2][0][RTW89_KCC][35] = 74,
[0][0][2][0][RTW89_ACMA][35] = 62,
[0][0][2][0][RTW89_CN][35] = 127,
@@ -11407,7 +11466,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][38] = 78,
[0][0][2][0][RTW89_KCC][38] = 66,
[0][0][2][0][RTW89_ACMA][38] = 74,
- [0][0][2][0][RTW89_CN][38] = 74,
+ [0][0][2][0][RTW89_CN][38] = 66,
[0][0][2][0][RTW89_UK][38] = 60,
[0][0][2][0][RTW89_FCC][40] = 78,
[0][0][2][0][RTW89_ETSI][40] = 30,
@@ -11415,7 +11474,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][40] = 78,
[0][0][2][0][RTW89_KCC][40] = 74,
[0][0][2][0][RTW89_ACMA][40] = 74,
- [0][0][2][0][RTW89_CN][40] = 74,
+ [0][0][2][0][RTW89_CN][40] = 66,
[0][0][2][0][RTW89_UK][40] = 60,
[0][0][2][0][RTW89_FCC][42] = 78,
[0][0][2][0][RTW89_ETSI][42] = 30,
@@ -11423,7 +11482,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][42] = 78,
[0][0][2][0][RTW89_KCC][42] = 74,
[0][0][2][0][RTW89_ACMA][42] = 74,
- [0][0][2][0][RTW89_CN][42] = 74,
+ [0][0][2][0][RTW89_CN][42] = 66,
[0][0][2][0][RTW89_UK][42] = 60,
[0][0][2][0][RTW89_FCC][44] = 78,
[0][0][2][0][RTW89_ETSI][44] = 30,
@@ -11431,7 +11490,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][44] = 78,
[0][0][2][0][RTW89_KCC][44] = 74,
[0][0][2][0][RTW89_ACMA][44] = 74,
- [0][0][2][0][RTW89_CN][44] = 74,
+ [0][0][2][0][RTW89_CN][44] = 64,
[0][0][2][0][RTW89_UK][44] = 60,
[0][0][2][0][RTW89_FCC][46] = 78,
[0][0][2][0][RTW89_ETSI][46] = 30,
@@ -11439,9 +11498,9 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_IC][46] = 78,
[0][0][2][0][RTW89_KCC][46] = 74,
[0][0][2][0][RTW89_ACMA][46] = 74,
- [0][0][2][0][RTW89_CN][46] = 74,
+ [0][0][2][0][RTW89_CN][46] = 64,
[0][0][2][0][RTW89_UK][46] = 60,
- [0][0][2][0][RTW89_FCC][48] = 70,
+ [0][0][2][0][RTW89_FCC][48] = 74,
[0][0][2][0][RTW89_ETSI][48] = 127,
[0][0][2][0][RTW89_MKK][48] = 127,
[0][0][2][0][RTW89_IC][48] = 127,
@@ -11449,7 +11508,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_ACMA][48] = 127,
[0][0][2][0][RTW89_CN][48] = 127,
[0][0][2][0][RTW89_UK][48] = 127,
- [0][0][2][0][RTW89_FCC][50] = 70,
+ [0][0][2][0][RTW89_FCC][50] = 74,
[0][0][2][0][RTW89_ETSI][50] = 127,
[0][0][2][0][RTW89_MKK][50] = 127,
[0][0][2][0][RTW89_IC][50] = 127,
@@ -11457,7 +11516,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_ACMA][50] = 127,
[0][0][2][0][RTW89_CN][50] = 127,
[0][0][2][0][RTW89_UK][50] = 127,
- [0][0][2][0][RTW89_FCC][52] = 70,
+ [0][0][2][0][RTW89_FCC][52] = 74,
[0][0][2][0][RTW89_ETSI][52] = 127,
[0][0][2][0][RTW89_MKK][52] = 127,
[0][0][2][0][RTW89_IC][52] = 127,
@@ -11913,13 +11972,13 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_ACMA][52] = 127,
[0][1][2][1][RTW89_CN][52] = 127,
[0][1][2][1][RTW89_UK][52] = 127,
- [1][0][2][0][RTW89_FCC][1] = 62,
+ [1][0][2][0][RTW89_FCC][1] = 66,
[1][0][2][0][RTW89_ETSI][1] = 64,
[1][0][2][0][RTW89_MKK][1] = 64,
- [1][0][2][0][RTW89_IC][1] = 60,
+ [1][0][2][0][RTW89_IC][1] = 64,
[1][0][2][0][RTW89_KCC][1] = 74,
[1][0][2][0][RTW89_ACMA][1] = 64,
- [1][0][2][0][RTW89_CN][1] = 64,
+ [1][0][2][0][RTW89_CN][1] = 66,
[1][0][2][0][RTW89_UK][1] = 64,
[1][0][2][0][RTW89_FCC][5] = 80,
[1][0][2][0][RTW89_ETSI][5] = 64,
@@ -11927,7 +11986,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][5] = 64,
[1][0][2][0][RTW89_KCC][5] = 66,
[1][0][2][0][RTW89_ACMA][5] = 64,
- [1][0][2][0][RTW89_CN][5] = 64,
+ [1][0][2][0][RTW89_CN][5] = 66,
[1][0][2][0][RTW89_UK][5] = 64,
[1][0][2][0][RTW89_FCC][9] = 80,
[1][0][2][0][RTW89_ETSI][9] = 64,
@@ -11935,20 +11994,20 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][9] = 64,
[1][0][2][0][RTW89_KCC][9] = 76,
[1][0][2][0][RTW89_ACMA][9] = 64,
- [1][0][2][0][RTW89_CN][9] = 64,
+ [1][0][2][0][RTW89_CN][9] = 58,
[1][0][2][0][RTW89_UK][9] = 64,
- [1][0][2][0][RTW89_FCC][13] = 60,
+ [1][0][2][0][RTW89_FCC][13] = 64,
[1][0][2][0][RTW89_ETSI][13] = 64,
[1][0][2][0][RTW89_MKK][13] = 64,
- [1][0][2][0][RTW89_IC][13] = 60,
+ [1][0][2][0][RTW89_IC][13] = 64,
[1][0][2][0][RTW89_KCC][13] = 72,
[1][0][2][0][RTW89_ACMA][13] = 64,
- [1][0][2][0][RTW89_CN][13] = 64,
+ [1][0][2][0][RTW89_CN][13] = 58,
[1][0][2][0][RTW89_UK][13] = 64,
- [1][0][2][0][RTW89_FCC][16] = 62,
+ [1][0][2][0][RTW89_FCC][16] = 66,
[1][0][2][0][RTW89_ETSI][16] = 66,
[1][0][2][0][RTW89_MKK][16] = 76,
- [1][0][2][0][RTW89_IC][16] = 62,
+ [1][0][2][0][RTW89_IC][16] = 66,
[1][0][2][0][RTW89_KCC][16] = 74,
[1][0][2][0][RTW89_ACMA][16] = 66,
[1][0][2][0][RTW89_CN][16] = 127,
@@ -11956,7 +12015,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_FCC][20] = 80,
[1][0][2][0][RTW89_ETSI][20] = 66,
[1][0][2][0][RTW89_MKK][20] = 76,
- [1][0][2][0][RTW89_IC][20] = 76,
+ [1][0][2][0][RTW89_IC][20] = 80,
[1][0][2][0][RTW89_KCC][20] = 74,
[1][0][2][0][RTW89_ACMA][20] = 66,
[1][0][2][0][RTW89_CN][20] = 127,
@@ -11977,10 +12036,10 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_ACMA][28] = 127,
[1][0][2][0][RTW89_CN][28] = 127,
[1][0][2][0][RTW89_UK][28] = 66,
- [1][0][2][0][RTW89_FCC][32] = 70,
+ [1][0][2][0][RTW89_FCC][32] = 74,
[1][0][2][0][RTW89_ETSI][32] = 66,
[1][0][2][0][RTW89_MKK][32] = 76,
- [1][0][2][0][RTW89_IC][32] = 70,
+ [1][0][2][0][RTW89_IC][32] = 74,
[1][0][2][0][RTW89_KCC][32] = 76,
[1][0][2][0][RTW89_ACMA][32] = 66,
[1][0][2][0][RTW89_CN][32] = 127,
@@ -11996,10 +12055,10 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_FCC][39] = 80,
[1][0][2][0][RTW89_ETSI][39] = 30,
[1][0][2][0][RTW89_MKK][39] = 127,
- [1][0][2][0][RTW89_IC][39] = 76,
+ [1][0][2][0][RTW89_IC][39] = 80,
[1][0][2][0][RTW89_KCC][39] = 68,
[1][0][2][0][RTW89_ACMA][39] = 76,
- [1][0][2][0][RTW89_CN][39] = 70,
+ [1][0][2][0][RTW89_CN][39] = 56,
[1][0][2][0][RTW89_UK][39] = 64,
[1][0][2][0][RTW89_FCC][43] = 80,
[1][0][2][0][RTW89_ETSI][43] = 30,
@@ -12007,9 +12066,9 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_IC][43] = 80,
[1][0][2][0][RTW89_KCC][43] = 76,
[1][0][2][0][RTW89_ACMA][43] = 76,
- [1][0][2][0][RTW89_CN][43] = 76,
+ [1][0][2][0][RTW89_CN][43] = 64,
[1][0][2][0][RTW89_UK][43] = 64,
- [1][0][2][0][RTW89_FCC][47] = 76,
+ [1][0][2][0][RTW89_FCC][47] = 80,
[1][0][2][0][RTW89_ETSI][47] = 127,
[1][0][2][0][RTW89_MKK][47] = 127,
[1][0][2][0][RTW89_IC][47] = 127,
@@ -12017,7 +12076,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_ACMA][47] = 127,
[1][0][2][0][RTW89_CN][47] = 127,
[1][0][2][0][RTW89_UK][47] = 127,
- [1][0][2][0][RTW89_FCC][51] = 76,
+ [1][0][2][0][RTW89_FCC][51] = 80,
[1][0][2][0][RTW89_ETSI][51] = 127,
[1][0][2][0][RTW89_MKK][51] = 127,
[1][0][2][0][RTW89_IC][51] = 127,
@@ -12249,26 +12308,26 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_ACMA][51] = 127,
[1][1][2][1][RTW89_CN][51] = 127,
[1][1][2][1][RTW89_UK][51] = 127,
- [2][0][2][0][RTW89_FCC][3] = 68,
+ [2][0][2][0][RTW89_FCC][3] = 72,
[2][0][2][0][RTW89_ETSI][3] = 64,
[2][0][2][0][RTW89_MKK][3] = 62,
- [2][0][2][0][RTW89_IC][3] = 60,
+ [2][0][2][0][RTW89_IC][3] = 64,
[2][0][2][0][RTW89_KCC][3] = 68,
[2][0][2][0][RTW89_ACMA][3] = 64,
- [2][0][2][0][RTW89_CN][3] = 64,
+ [2][0][2][0][RTW89_CN][3] = 60,
[2][0][2][0][RTW89_UK][3] = 64,
- [2][0][2][0][RTW89_FCC][11] = 58,
+ [2][0][2][0][RTW89_FCC][11] = 62,
[2][0][2][0][RTW89_ETSI][11] = 64,
[2][0][2][0][RTW89_MKK][11] = 64,
- [2][0][2][0][RTW89_IC][11] = 58,
+ [2][0][2][0][RTW89_IC][11] = 62,
[2][0][2][0][RTW89_KCC][11] = 68,
[2][0][2][0][RTW89_ACMA][11] = 64,
- [2][0][2][0][RTW89_CN][11] = 64,
+ [2][0][2][0][RTW89_CN][11] = 54,
[2][0][2][0][RTW89_UK][11] = 64,
- [2][0][2][0][RTW89_FCC][18] = 62,
+ [2][0][2][0][RTW89_FCC][18] = 66,
[2][0][2][0][RTW89_ETSI][18] = 64,
[2][0][2][0][RTW89_MKK][18] = 68,
- [2][0][2][0][RTW89_IC][18] = 62,
+ [2][0][2][0][RTW89_IC][18] = 66,
[2][0][2][0][RTW89_KCC][18] = 68,
[2][0][2][0][RTW89_ACMA][18] = 64,
[2][0][2][0][RTW89_CN][18] = 127,
@@ -12284,7 +12343,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_FCC][34] = 72,
[2][0][2][0][RTW89_ETSI][34] = 127,
[2][0][2][0][RTW89_MKK][34] = 68,
- [2][0][2][0][RTW89_IC][34] = 68,
+ [2][0][2][0][RTW89_IC][34] = 72,
[2][0][2][0][RTW89_KCC][34] = 68,
[2][0][2][0][RTW89_ACMA][34] = 68,
[2][0][2][0][RTW89_CN][34] = 127,
@@ -12292,12 +12351,12 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_FCC][41] = 72,
[2][0][2][0][RTW89_ETSI][41] = 30,
[2][0][2][0][RTW89_MKK][41] = 127,
- [2][0][2][0][RTW89_IC][41] = 68,
+ [2][0][2][0][RTW89_IC][41] = 72,
[2][0][2][0][RTW89_KCC][41] = 64,
[2][0][2][0][RTW89_ACMA][41] = 68,
- [2][0][2][0][RTW89_CN][41] = 68,
+ [2][0][2][0][RTW89_CN][41] = 38,
[2][0][2][0][RTW89_UK][41] = 64,
- [2][0][2][0][RTW89_FCC][49] = 68,
+ [2][0][2][0][RTW89_FCC][49] = 72,
[2][0][2][0][RTW89_ETSI][49] = 127,
[2][0][2][0][RTW89_MKK][49] = 127,
[2][0][2][0][RTW89_IC][49] = 127,
@@ -12423,7 +12482,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_IC][7] = 127,
[3][0][2][0][RTW89_KCC][7] = 127,
[3][0][2][0][RTW89_ACMA][7] = 127,
- [3][0][2][0][RTW89_CN][7] = 58,
+ [3][0][2][0][RTW89_CN][7] = 127,
[3][0][2][0][RTW89_UK][7] = 127,
[3][0][2][0][RTW89_FCC][22] = 127,
[3][0][2][0][RTW89_ETSI][22] = 127,
@@ -12431,7 +12490,7 @@ const s8 rtw89_8851b_txpwr_lmt_5g_type2[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_IC][22] = 127,
[3][0][2][0][RTW89_KCC][22] = 127,
[3][0][2][0][RTW89_ACMA][22] = 127,
- [3][0][2][0][RTW89_CN][22] = 58,
+ [3][0][2][0][RTW89_CN][22] = 127,
[3][0][2][0][RTW89_UK][22] = 127,
[3][0][2][0][RTW89_FCC][45] = 127,
[3][0][2][0][RTW89_ETSI][45] = 127,
@@ -12508,19 +12567,19 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_WW][11] = 30,
[0][0][RTW89_WW][12] = 30,
[0][0][RTW89_WW][13] = 0,
- [0][1][RTW89_WW][0] = 20,
- [0][1][RTW89_WW][1] = 22,
- [0][1][RTW89_WW][2] = 22,
- [0][1][RTW89_WW][3] = 22,
- [0][1][RTW89_WW][4] = 22,
- [0][1][RTW89_WW][5] = 22,
- [0][1][RTW89_WW][6] = 22,
- [0][1][RTW89_WW][7] = 22,
- [0][1][RTW89_WW][8] = 22,
- [0][1][RTW89_WW][9] = 22,
- [0][1][RTW89_WW][10] = 22,
- [0][1][RTW89_WW][11] = 22,
- [0][1][RTW89_WW][12] = 20,
+ [0][1][RTW89_WW][0] = 0,
+ [0][1][RTW89_WW][1] = 0,
+ [0][1][RTW89_WW][2] = 0,
+ [0][1][RTW89_WW][3] = 0,
+ [0][1][RTW89_WW][4] = 0,
+ [0][1][RTW89_WW][5] = 0,
+ [0][1][RTW89_WW][6] = 0,
+ [0][1][RTW89_WW][7] = 0,
+ [0][1][RTW89_WW][8] = 0,
+ [0][1][RTW89_WW][9] = 0,
+ [0][1][RTW89_WW][10] = 0,
+ [0][1][RTW89_WW][11] = 0,
+ [0][1][RTW89_WW][12] = 0,
[0][1][RTW89_WW][13] = 0,
[1][0][RTW89_WW][0] = 42,
[1][0][RTW89_WW][1] = 42,
@@ -12536,19 +12595,19 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_WW][11] = 42,
[1][0][RTW89_WW][12] = 34,
[1][0][RTW89_WW][13] = 0,
- [1][1][RTW89_WW][0] = 32,
- [1][1][RTW89_WW][1] = 32,
- [1][1][RTW89_WW][2] = 32,
- [1][1][RTW89_WW][3] = 32,
- [1][1][RTW89_WW][4] = 32,
- [1][1][RTW89_WW][5] = 32,
- [1][1][RTW89_WW][6] = 32,
- [1][1][RTW89_WW][7] = 32,
- [1][1][RTW89_WW][8] = 32,
- [1][1][RTW89_WW][9] = 32,
- [1][1][RTW89_WW][10] = 32,
- [1][1][RTW89_WW][11] = 32,
- [1][1][RTW89_WW][12] = 32,
+ [1][1][RTW89_WW][0] = 0,
+ [1][1][RTW89_WW][1] = 0,
+ [1][1][RTW89_WW][2] = 0,
+ [1][1][RTW89_WW][3] = 0,
+ [1][1][RTW89_WW][4] = 0,
+ [1][1][RTW89_WW][5] = 0,
+ [1][1][RTW89_WW][6] = 0,
+ [1][1][RTW89_WW][7] = 0,
+ [1][1][RTW89_WW][8] = 0,
+ [1][1][RTW89_WW][9] = 0,
+ [1][1][RTW89_WW][10] = 0,
+ [1][1][RTW89_WW][11] = 0,
+ [1][1][RTW89_WW][12] = 0,
[1][1][RTW89_WW][13] = 0,
[2][0][RTW89_WW][0] = 54,
[2][0][RTW89_WW][1] = 54,
@@ -12564,19 +12623,19 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_WW][11] = 54,
[2][0][RTW89_WW][12] = 34,
[2][0][RTW89_WW][13] = 0,
- [2][1][RTW89_WW][0] = 44,
- [2][1][RTW89_WW][1] = 44,
- [2][1][RTW89_WW][2] = 44,
- [2][1][RTW89_WW][3] = 44,
- [2][1][RTW89_WW][4] = 44,
- [2][1][RTW89_WW][5] = 44,
- [2][1][RTW89_WW][6] = 44,
- [2][1][RTW89_WW][7] = 44,
- [2][1][RTW89_WW][8] = 44,
- [2][1][RTW89_WW][9] = 44,
- [2][1][RTW89_WW][10] = 44,
- [2][1][RTW89_WW][11] = 44,
- [2][1][RTW89_WW][12] = 42,
+ [2][1][RTW89_WW][0] = 0,
+ [2][1][RTW89_WW][1] = 0,
+ [2][1][RTW89_WW][2] = 0,
+ [2][1][RTW89_WW][3] = 0,
+ [2][1][RTW89_WW][4] = 0,
+ [2][1][RTW89_WW][5] = 0,
+ [2][1][RTW89_WW][6] = 0,
+ [2][1][RTW89_WW][7] = 0,
+ [2][1][RTW89_WW][8] = 0,
+ [2][1][RTW89_WW][9] = 0,
+ [2][1][RTW89_WW][10] = 0,
+ [2][1][RTW89_WW][11] = 0,
+ [2][1][RTW89_WW][12] = 0,
[2][1][RTW89_WW][13] = 0,
[0][0][RTW89_FCC][0] = 60,
[0][0][RTW89_ETSI][0] = 30,
@@ -12696,7 +12755,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][0] = 127,
[0][1][RTW89_KCC][0] = 127,
[0][1][RTW89_ACMA][0] = 127,
- [0][1][RTW89_CN][0] = 20,
+ [0][1][RTW89_CN][0] = 127,
[0][1][RTW89_UK][0] = 127,
[0][1][RTW89_FCC][1] = 127,
[0][1][RTW89_ETSI][1] = 127,
@@ -12704,7 +12763,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][1] = 127,
[0][1][RTW89_KCC][1] = 127,
[0][1][RTW89_ACMA][1] = 127,
- [0][1][RTW89_CN][1] = 22,
+ [0][1][RTW89_CN][1] = 127,
[0][1][RTW89_UK][1] = 127,
[0][1][RTW89_FCC][2] = 127,
[0][1][RTW89_ETSI][2] = 127,
@@ -12712,7 +12771,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][2] = 127,
[0][1][RTW89_KCC][2] = 127,
[0][1][RTW89_ACMA][2] = 127,
- [0][1][RTW89_CN][2] = 22,
+ [0][1][RTW89_CN][2] = 127,
[0][1][RTW89_UK][2] = 127,
[0][1][RTW89_FCC][3] = 127,
[0][1][RTW89_ETSI][3] = 127,
@@ -12720,7 +12779,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][3] = 127,
[0][1][RTW89_KCC][3] = 127,
[0][1][RTW89_ACMA][3] = 127,
- [0][1][RTW89_CN][3] = 22,
+ [0][1][RTW89_CN][3] = 127,
[0][1][RTW89_UK][3] = 127,
[0][1][RTW89_FCC][4] = 127,
[0][1][RTW89_ETSI][4] = 127,
@@ -12728,7 +12787,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][4] = 127,
[0][1][RTW89_KCC][4] = 127,
[0][1][RTW89_ACMA][4] = 127,
- [0][1][RTW89_CN][4] = 22,
+ [0][1][RTW89_CN][4] = 127,
[0][1][RTW89_UK][4] = 127,
[0][1][RTW89_FCC][5] = 127,
[0][1][RTW89_ETSI][5] = 127,
@@ -12736,7 +12795,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][5] = 127,
[0][1][RTW89_KCC][5] = 127,
[0][1][RTW89_ACMA][5] = 127,
- [0][1][RTW89_CN][5] = 22,
+ [0][1][RTW89_CN][5] = 127,
[0][1][RTW89_UK][5] = 127,
[0][1][RTW89_FCC][6] = 127,
[0][1][RTW89_ETSI][6] = 127,
@@ -12744,7 +12803,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][6] = 127,
[0][1][RTW89_KCC][6] = 127,
[0][1][RTW89_ACMA][6] = 127,
- [0][1][RTW89_CN][6] = 22,
+ [0][1][RTW89_CN][6] = 127,
[0][1][RTW89_UK][6] = 127,
[0][1][RTW89_FCC][7] = 127,
[0][1][RTW89_ETSI][7] = 127,
@@ -12752,7 +12811,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][7] = 127,
[0][1][RTW89_KCC][7] = 127,
[0][1][RTW89_ACMA][7] = 127,
- [0][1][RTW89_CN][7] = 22,
+ [0][1][RTW89_CN][7] = 127,
[0][1][RTW89_UK][7] = 127,
[0][1][RTW89_FCC][8] = 127,
[0][1][RTW89_ETSI][8] = 127,
@@ -12760,7 +12819,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][8] = 127,
[0][1][RTW89_KCC][8] = 127,
[0][1][RTW89_ACMA][8] = 127,
- [0][1][RTW89_CN][8] = 22,
+ [0][1][RTW89_CN][8] = 127,
[0][1][RTW89_UK][8] = 127,
[0][1][RTW89_FCC][9] = 127,
[0][1][RTW89_ETSI][9] = 127,
@@ -12768,7 +12827,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][9] = 127,
[0][1][RTW89_KCC][9] = 127,
[0][1][RTW89_ACMA][9] = 127,
- [0][1][RTW89_CN][9] = 22,
+ [0][1][RTW89_CN][9] = 127,
[0][1][RTW89_UK][9] = 127,
[0][1][RTW89_FCC][10] = 127,
[0][1][RTW89_ETSI][10] = 127,
@@ -12776,7 +12835,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][10] = 127,
[0][1][RTW89_KCC][10] = 127,
[0][1][RTW89_ACMA][10] = 127,
- [0][1][RTW89_CN][10] = 22,
+ [0][1][RTW89_CN][10] = 127,
[0][1][RTW89_UK][10] = 127,
[0][1][RTW89_FCC][11] = 127,
[0][1][RTW89_ETSI][11] = 127,
@@ -12784,7 +12843,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][11] = 127,
[0][1][RTW89_KCC][11] = 127,
[0][1][RTW89_ACMA][11] = 127,
- [0][1][RTW89_CN][11] = 22,
+ [0][1][RTW89_CN][11] = 127,
[0][1][RTW89_UK][11] = 127,
[0][1][RTW89_FCC][12] = 127,
[0][1][RTW89_ETSI][12] = 127,
@@ -12792,7 +12851,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][12] = 127,
[0][1][RTW89_KCC][12] = 127,
[0][1][RTW89_ACMA][12] = 127,
- [0][1][RTW89_CN][12] = 20,
+ [0][1][RTW89_CN][12] = 127,
[0][1][RTW89_UK][12] = 127,
[0][1][RTW89_FCC][13] = 127,
[0][1][RTW89_ETSI][13] = 127,
@@ -12920,7 +12979,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][0] = 127,
[1][1][RTW89_KCC][0] = 127,
[1][1][RTW89_ACMA][0] = 127,
- [1][1][RTW89_CN][0] = 32,
+ [1][1][RTW89_CN][0] = 127,
[1][1][RTW89_UK][0] = 127,
[1][1][RTW89_FCC][1] = 127,
[1][1][RTW89_ETSI][1] = 127,
@@ -12928,7 +12987,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][1] = 127,
[1][1][RTW89_KCC][1] = 127,
[1][1][RTW89_ACMA][1] = 127,
- [1][1][RTW89_CN][1] = 32,
+ [1][1][RTW89_CN][1] = 127,
[1][1][RTW89_UK][1] = 127,
[1][1][RTW89_FCC][2] = 127,
[1][1][RTW89_ETSI][2] = 127,
@@ -12936,7 +12995,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][2] = 127,
[1][1][RTW89_KCC][2] = 127,
[1][1][RTW89_ACMA][2] = 127,
- [1][1][RTW89_CN][2] = 32,
+ [1][1][RTW89_CN][2] = 127,
[1][1][RTW89_UK][2] = 127,
[1][1][RTW89_FCC][3] = 127,
[1][1][RTW89_ETSI][3] = 127,
@@ -12944,7 +13003,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][3] = 127,
[1][1][RTW89_KCC][3] = 127,
[1][1][RTW89_ACMA][3] = 127,
- [1][1][RTW89_CN][3] = 32,
+ [1][1][RTW89_CN][3] = 127,
[1][1][RTW89_UK][3] = 127,
[1][1][RTW89_FCC][4] = 127,
[1][1][RTW89_ETSI][4] = 127,
@@ -12952,7 +13011,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][4] = 127,
[1][1][RTW89_KCC][4] = 127,
[1][1][RTW89_ACMA][4] = 127,
- [1][1][RTW89_CN][4] = 32,
+ [1][1][RTW89_CN][4] = 127,
[1][1][RTW89_UK][4] = 127,
[1][1][RTW89_FCC][5] = 127,
[1][1][RTW89_ETSI][5] = 127,
@@ -12960,7 +13019,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][5] = 127,
[1][1][RTW89_KCC][5] = 127,
[1][1][RTW89_ACMA][5] = 127,
- [1][1][RTW89_CN][5] = 32,
+ [1][1][RTW89_CN][5] = 127,
[1][1][RTW89_UK][5] = 127,
[1][1][RTW89_FCC][6] = 127,
[1][1][RTW89_ETSI][6] = 127,
@@ -12968,7 +13027,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][6] = 127,
[1][1][RTW89_KCC][6] = 127,
[1][1][RTW89_ACMA][6] = 127,
- [1][1][RTW89_CN][6] = 32,
+ [1][1][RTW89_CN][6] = 127,
[1][1][RTW89_UK][6] = 127,
[1][1][RTW89_FCC][7] = 127,
[1][1][RTW89_ETSI][7] = 127,
@@ -12976,7 +13035,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][7] = 127,
[1][1][RTW89_KCC][7] = 127,
[1][1][RTW89_ACMA][7] = 127,
- [1][1][RTW89_CN][7] = 32,
+ [1][1][RTW89_CN][7] = 127,
[1][1][RTW89_UK][7] = 127,
[1][1][RTW89_FCC][8] = 127,
[1][1][RTW89_ETSI][8] = 127,
@@ -12984,7 +13043,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][8] = 127,
[1][1][RTW89_KCC][8] = 127,
[1][1][RTW89_ACMA][8] = 127,
- [1][1][RTW89_CN][8] = 32,
+ [1][1][RTW89_CN][8] = 127,
[1][1][RTW89_UK][8] = 127,
[1][1][RTW89_FCC][9] = 127,
[1][1][RTW89_ETSI][9] = 127,
@@ -12992,7 +13051,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][9] = 127,
[1][1][RTW89_KCC][9] = 127,
[1][1][RTW89_ACMA][9] = 127,
- [1][1][RTW89_CN][9] = 32,
+ [1][1][RTW89_CN][9] = 127,
[1][1][RTW89_UK][9] = 127,
[1][1][RTW89_FCC][10] = 127,
[1][1][RTW89_ETSI][10] = 127,
@@ -13000,7 +13059,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][10] = 127,
[1][1][RTW89_KCC][10] = 127,
[1][1][RTW89_ACMA][10] = 127,
- [1][1][RTW89_CN][10] = 32,
+ [1][1][RTW89_CN][10] = 127,
[1][1][RTW89_UK][10] = 127,
[1][1][RTW89_FCC][11] = 127,
[1][1][RTW89_ETSI][11] = 127,
@@ -13008,7 +13067,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][11] = 127,
[1][1][RTW89_KCC][11] = 127,
[1][1][RTW89_ACMA][11] = 127,
- [1][1][RTW89_CN][11] = 32,
+ [1][1][RTW89_CN][11] = 127,
[1][1][RTW89_UK][11] = 127,
[1][1][RTW89_FCC][12] = 127,
[1][1][RTW89_ETSI][12] = 127,
@@ -13016,7 +13075,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][12] = 127,
[1][1][RTW89_KCC][12] = 127,
[1][1][RTW89_ACMA][12] = 127,
- [1][1][RTW89_CN][12] = 32,
+ [1][1][RTW89_CN][12] = 127,
[1][1][RTW89_UK][12] = 127,
[1][1][RTW89_FCC][13] = 127,
[1][1][RTW89_ETSI][13] = 127,
@@ -13144,7 +13203,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][0] = 127,
[2][1][RTW89_KCC][0] = 127,
[2][1][RTW89_ACMA][0] = 127,
- [2][1][RTW89_CN][0] = 44,
+ [2][1][RTW89_CN][0] = 127,
[2][1][RTW89_UK][0] = 127,
[2][1][RTW89_FCC][1] = 127,
[2][1][RTW89_ETSI][1] = 127,
@@ -13152,7 +13211,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][1] = 127,
[2][1][RTW89_KCC][1] = 127,
[2][1][RTW89_ACMA][1] = 127,
- [2][1][RTW89_CN][1] = 44,
+ [2][1][RTW89_CN][1] = 127,
[2][1][RTW89_UK][1] = 127,
[2][1][RTW89_FCC][2] = 127,
[2][1][RTW89_ETSI][2] = 127,
@@ -13160,7 +13219,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][2] = 127,
[2][1][RTW89_KCC][2] = 127,
[2][1][RTW89_ACMA][2] = 127,
- [2][1][RTW89_CN][2] = 44,
+ [2][1][RTW89_CN][2] = 127,
[2][1][RTW89_UK][2] = 127,
[2][1][RTW89_FCC][3] = 127,
[2][1][RTW89_ETSI][3] = 127,
@@ -13168,7 +13227,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][3] = 127,
[2][1][RTW89_KCC][3] = 127,
[2][1][RTW89_ACMA][3] = 127,
- [2][1][RTW89_CN][3] = 44,
+ [2][1][RTW89_CN][3] = 127,
[2][1][RTW89_UK][3] = 127,
[2][1][RTW89_FCC][4] = 127,
[2][1][RTW89_ETSI][4] = 127,
@@ -13176,7 +13235,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][4] = 127,
[2][1][RTW89_KCC][4] = 127,
[2][1][RTW89_ACMA][4] = 127,
- [2][1][RTW89_CN][4] = 44,
+ [2][1][RTW89_CN][4] = 127,
[2][1][RTW89_UK][4] = 127,
[2][1][RTW89_FCC][5] = 127,
[2][1][RTW89_ETSI][5] = 127,
@@ -13184,7 +13243,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][5] = 127,
[2][1][RTW89_KCC][5] = 127,
[2][1][RTW89_ACMA][5] = 127,
- [2][1][RTW89_CN][5] = 44,
+ [2][1][RTW89_CN][5] = 127,
[2][1][RTW89_UK][5] = 127,
[2][1][RTW89_FCC][6] = 127,
[2][1][RTW89_ETSI][6] = 127,
@@ -13192,7 +13251,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][6] = 127,
[2][1][RTW89_KCC][6] = 127,
[2][1][RTW89_ACMA][6] = 127,
- [2][1][RTW89_CN][6] = 44,
+ [2][1][RTW89_CN][6] = 127,
[2][1][RTW89_UK][6] = 127,
[2][1][RTW89_FCC][7] = 127,
[2][1][RTW89_ETSI][7] = 127,
@@ -13200,7 +13259,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][7] = 127,
[2][1][RTW89_KCC][7] = 127,
[2][1][RTW89_ACMA][7] = 127,
- [2][1][RTW89_CN][7] = 44,
+ [2][1][RTW89_CN][7] = 127,
[2][1][RTW89_UK][7] = 127,
[2][1][RTW89_FCC][8] = 127,
[2][1][RTW89_ETSI][8] = 127,
@@ -13208,7 +13267,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][8] = 127,
[2][1][RTW89_KCC][8] = 127,
[2][1][RTW89_ACMA][8] = 127,
- [2][1][RTW89_CN][8] = 44,
+ [2][1][RTW89_CN][8] = 127,
[2][1][RTW89_UK][8] = 127,
[2][1][RTW89_FCC][9] = 127,
[2][1][RTW89_ETSI][9] = 127,
@@ -13216,7 +13275,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][9] = 127,
[2][1][RTW89_KCC][9] = 127,
[2][1][RTW89_ACMA][9] = 127,
- [2][1][RTW89_CN][9] = 44,
+ [2][1][RTW89_CN][9] = 127,
[2][1][RTW89_UK][9] = 127,
[2][1][RTW89_FCC][10] = 127,
[2][1][RTW89_ETSI][10] = 127,
@@ -13224,7 +13283,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][10] = 127,
[2][1][RTW89_KCC][10] = 127,
[2][1][RTW89_ACMA][10] = 127,
- [2][1][RTW89_CN][10] = 44,
+ [2][1][RTW89_CN][10] = 127,
[2][1][RTW89_UK][10] = 127,
[2][1][RTW89_FCC][11] = 127,
[2][1][RTW89_ETSI][11] = 127,
@@ -13232,7 +13291,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][11] = 127,
[2][1][RTW89_KCC][11] = 127,
[2][1][RTW89_ACMA][11] = 127,
- [2][1][RTW89_CN][11] = 44,
+ [2][1][RTW89_CN][11] = 127,
[2][1][RTW89_UK][11] = 127,
[2][1][RTW89_FCC][12] = 127,
[2][1][RTW89_ETSI][12] = 127,
@@ -13240,7 +13299,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_2g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][12] = 127,
[2][1][RTW89_KCC][12] = 127,
[2][1][RTW89_ACMA][12] = 127,
- [2][1][RTW89_CN][12] = 42,
+ [2][1][RTW89_CN][12] = 127,
[2][1][RTW89_UK][12] = 127,
[2][1][RTW89_FCC][13] = 127,
[2][1][RTW89_ETSI][13] = 127,
@@ -13283,14 +13342,14 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_WW][48] = 40,
[0][0][RTW89_WW][50] = 42,
[0][0][RTW89_WW][52] = 38,
- [0][1][RTW89_WW][0] = 4,
- [0][1][RTW89_WW][2] = 4,
- [0][1][RTW89_WW][4] = 4,
- [0][1][RTW89_WW][6] = 4,
- [0][1][RTW89_WW][8] = 4,
- [0][1][RTW89_WW][10] = 4,
- [0][1][RTW89_WW][12] = 4,
- [0][1][RTW89_WW][14] = 4,
+ [0][1][RTW89_WW][0] = 0,
+ [0][1][RTW89_WW][2] = 0,
+ [0][1][RTW89_WW][4] = 0,
+ [0][1][RTW89_WW][6] = 0,
+ [0][1][RTW89_WW][8] = 0,
+ [0][1][RTW89_WW][10] = 0,
+ [0][1][RTW89_WW][12] = 0,
+ [0][1][RTW89_WW][14] = 0,
[0][1][RTW89_WW][15] = 0,
[0][1][RTW89_WW][17] = 0,
[0][1][RTW89_WW][19] = 0,
@@ -13303,11 +13362,11 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_WW][33] = 0,
[0][1][RTW89_WW][35] = 0,
[0][1][RTW89_WW][37] = 0,
- [0][1][RTW89_WW][38] = 42,
- [0][1][RTW89_WW][40] = 42,
- [0][1][RTW89_WW][42] = 42,
- [0][1][RTW89_WW][44] = 42,
- [0][1][RTW89_WW][46] = 42,
+ [0][1][RTW89_WW][38] = 0,
+ [0][1][RTW89_WW][40] = 0,
+ [0][1][RTW89_WW][42] = 0,
+ [0][1][RTW89_WW][44] = 0,
+ [0][1][RTW89_WW][46] = 0,
[0][1][RTW89_WW][48] = 0,
[0][1][RTW89_WW][50] = 0,
[0][1][RTW89_WW][52] = 0,
@@ -13339,14 +13398,14 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_WW][48] = 52,
[1][0][RTW89_WW][50] = 52,
[1][0][RTW89_WW][52] = 50,
- [1][1][RTW89_WW][0] = 14,
- [1][1][RTW89_WW][2] = 14,
- [1][1][RTW89_WW][4] = 14,
- [1][1][RTW89_WW][6] = 14,
- [1][1][RTW89_WW][8] = 14,
- [1][1][RTW89_WW][10] = 14,
- [1][1][RTW89_WW][12] = 14,
- [1][1][RTW89_WW][14] = 14,
+ [1][1][RTW89_WW][0] = 0,
+ [1][1][RTW89_WW][2] = 0,
+ [1][1][RTW89_WW][4] = 0,
+ [1][1][RTW89_WW][6] = 0,
+ [1][1][RTW89_WW][8] = 0,
+ [1][1][RTW89_WW][10] = 0,
+ [1][1][RTW89_WW][12] = 0,
+ [1][1][RTW89_WW][14] = 0,
[1][1][RTW89_WW][15] = 0,
[1][1][RTW89_WW][17] = 0,
[1][1][RTW89_WW][19] = 0,
@@ -13359,11 +13418,11 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_WW][33] = 0,
[1][1][RTW89_WW][35] = 0,
[1][1][RTW89_WW][37] = 0,
- [1][1][RTW89_WW][38] = 54,
- [1][1][RTW89_WW][40] = 54,
- [1][1][RTW89_WW][42] = 54,
- [1][1][RTW89_WW][44] = 54,
- [1][1][RTW89_WW][46] = 54,
+ [1][1][RTW89_WW][38] = 0,
+ [1][1][RTW89_WW][40] = 0,
+ [1][1][RTW89_WW][42] = 0,
+ [1][1][RTW89_WW][44] = 0,
+ [1][1][RTW89_WW][46] = 0,
[1][1][RTW89_WW][48] = 0,
[1][1][RTW89_WW][50] = 0,
[1][1][RTW89_WW][52] = 0,
@@ -13395,14 +13454,14 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_WW][48] = 62,
[2][0][RTW89_WW][50] = 62,
[2][0][RTW89_WW][52] = 60,
- [2][1][RTW89_WW][0] = 28,
- [2][1][RTW89_WW][2] = 28,
- [2][1][RTW89_WW][4] = 28,
- [2][1][RTW89_WW][6] = 28,
- [2][1][RTW89_WW][8] = 28,
- [2][1][RTW89_WW][10] = 28,
- [2][1][RTW89_WW][12] = 28,
- [2][1][RTW89_WW][14] = 28,
+ [2][1][RTW89_WW][0] = 0,
+ [2][1][RTW89_WW][2] = 0,
+ [2][1][RTW89_WW][4] = 0,
+ [2][1][RTW89_WW][6] = 0,
+ [2][1][RTW89_WW][8] = 0,
+ [2][1][RTW89_WW][10] = 0,
+ [2][1][RTW89_WW][12] = 0,
+ [2][1][RTW89_WW][14] = 0,
[2][1][RTW89_WW][15] = 0,
[2][1][RTW89_WW][17] = 0,
[2][1][RTW89_WW][19] = 0,
@@ -13415,11 +13474,11 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_WW][33] = 0,
[2][1][RTW89_WW][35] = 0,
[2][1][RTW89_WW][37] = 0,
- [2][1][RTW89_WW][38] = 56,
- [2][1][RTW89_WW][40] = 56,
- [2][1][RTW89_WW][42] = 56,
- [2][1][RTW89_WW][44] = 56,
- [2][1][RTW89_WW][46] = 56,
+ [2][1][RTW89_WW][38] = 0,
+ [2][1][RTW89_WW][40] = 0,
+ [2][1][RTW89_WW][42] = 0,
+ [2][1][RTW89_WW][44] = 0,
+ [2][1][RTW89_WW][46] = 0,
[2][1][RTW89_WW][48] = 0,
[2][1][RTW89_WW][50] = 0,
[2][1][RTW89_WW][52] = 0,
@@ -13653,7 +13712,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][0] = 127,
[0][1][RTW89_KCC][0] = 127,
[0][1][RTW89_ACMA][0] = 127,
- [0][1][RTW89_CN][0] = 4,
+ [0][1][RTW89_CN][0] = 127,
[0][1][RTW89_UK][0] = 127,
[0][1][RTW89_FCC][2] = 127,
[0][1][RTW89_ETSI][2] = 127,
@@ -13661,7 +13720,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][2] = 127,
[0][1][RTW89_KCC][2] = 127,
[0][1][RTW89_ACMA][2] = 127,
- [0][1][RTW89_CN][2] = 4,
+ [0][1][RTW89_CN][2] = 127,
[0][1][RTW89_UK][2] = 127,
[0][1][RTW89_FCC][4] = 127,
[0][1][RTW89_ETSI][4] = 127,
@@ -13669,7 +13728,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][4] = 127,
[0][1][RTW89_KCC][4] = 127,
[0][1][RTW89_ACMA][4] = 127,
- [0][1][RTW89_CN][4] = 4,
+ [0][1][RTW89_CN][4] = 127,
[0][1][RTW89_UK][4] = 127,
[0][1][RTW89_FCC][6] = 127,
[0][1][RTW89_ETSI][6] = 127,
@@ -13677,7 +13736,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][6] = 127,
[0][1][RTW89_KCC][6] = 127,
[0][1][RTW89_ACMA][6] = 127,
- [0][1][RTW89_CN][6] = 4,
+ [0][1][RTW89_CN][6] = 127,
[0][1][RTW89_UK][6] = 127,
[0][1][RTW89_FCC][8] = 127,
[0][1][RTW89_ETSI][8] = 127,
@@ -13685,7 +13744,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][8] = 127,
[0][1][RTW89_KCC][8] = 127,
[0][1][RTW89_ACMA][8] = 127,
- [0][1][RTW89_CN][8] = 4,
+ [0][1][RTW89_CN][8] = 127,
[0][1][RTW89_UK][8] = 127,
[0][1][RTW89_FCC][10] = 127,
[0][1][RTW89_ETSI][10] = 127,
@@ -13693,7 +13752,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][10] = 127,
[0][1][RTW89_KCC][10] = 127,
[0][1][RTW89_ACMA][10] = 127,
- [0][1][RTW89_CN][10] = 4,
+ [0][1][RTW89_CN][10] = 127,
[0][1][RTW89_UK][10] = 127,
[0][1][RTW89_FCC][12] = 127,
[0][1][RTW89_ETSI][12] = 127,
@@ -13701,7 +13760,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][12] = 127,
[0][1][RTW89_KCC][12] = 127,
[0][1][RTW89_ACMA][12] = 127,
- [0][1][RTW89_CN][12] = 4,
+ [0][1][RTW89_CN][12] = 127,
[0][1][RTW89_UK][12] = 127,
[0][1][RTW89_FCC][14] = 127,
[0][1][RTW89_ETSI][14] = 127,
@@ -13709,7 +13768,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][14] = 127,
[0][1][RTW89_KCC][14] = 127,
[0][1][RTW89_ACMA][14] = 127,
- [0][1][RTW89_CN][14] = 4,
+ [0][1][RTW89_CN][14] = 127,
[0][1][RTW89_UK][14] = 127,
[0][1][RTW89_FCC][15] = 127,
[0][1][RTW89_ETSI][15] = 127,
@@ -13813,7 +13872,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][38] = 127,
[0][1][RTW89_KCC][38] = 127,
[0][1][RTW89_ACMA][38] = 127,
- [0][1][RTW89_CN][38] = 42,
+ [0][1][RTW89_CN][38] = 127,
[0][1][RTW89_UK][38] = 127,
[0][1][RTW89_FCC][40] = 127,
[0][1][RTW89_ETSI][40] = 127,
@@ -13821,7 +13880,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][40] = 127,
[0][1][RTW89_KCC][40] = 127,
[0][1][RTW89_ACMA][40] = 127,
- [0][1][RTW89_CN][40] = 42,
+ [0][1][RTW89_CN][40] = 127,
[0][1][RTW89_UK][40] = 127,
[0][1][RTW89_FCC][42] = 127,
[0][1][RTW89_ETSI][42] = 127,
@@ -13829,7 +13888,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][42] = 127,
[0][1][RTW89_KCC][42] = 127,
[0][1][RTW89_ACMA][42] = 127,
- [0][1][RTW89_CN][42] = 42,
+ [0][1][RTW89_CN][42] = 127,
[0][1][RTW89_UK][42] = 127,
[0][1][RTW89_FCC][44] = 127,
[0][1][RTW89_ETSI][44] = 127,
@@ -13837,7 +13896,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][44] = 127,
[0][1][RTW89_KCC][44] = 127,
[0][1][RTW89_ACMA][44] = 127,
- [0][1][RTW89_CN][44] = 42,
+ [0][1][RTW89_CN][44] = 127,
[0][1][RTW89_UK][44] = 127,
[0][1][RTW89_FCC][46] = 127,
[0][1][RTW89_ETSI][46] = 127,
@@ -13845,7 +13904,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_IC][46] = 127,
[0][1][RTW89_KCC][46] = 127,
[0][1][RTW89_ACMA][46] = 127,
- [0][1][RTW89_CN][46] = 42,
+ [0][1][RTW89_CN][46] = 127,
[0][1][RTW89_UK][46] = 127,
[0][1][RTW89_FCC][48] = 127,
[0][1][RTW89_ETSI][48] = 127,
@@ -14101,7 +14160,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][0] = 127,
[1][1][RTW89_KCC][0] = 127,
[1][1][RTW89_ACMA][0] = 127,
- [1][1][RTW89_CN][0] = 14,
+ [1][1][RTW89_CN][0] = 127,
[1][1][RTW89_UK][0] = 127,
[1][1][RTW89_FCC][2] = 127,
[1][1][RTW89_ETSI][2] = 127,
@@ -14109,7 +14168,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][2] = 127,
[1][1][RTW89_KCC][2] = 127,
[1][1][RTW89_ACMA][2] = 127,
- [1][1][RTW89_CN][2] = 14,
+ [1][1][RTW89_CN][2] = 127,
[1][1][RTW89_UK][2] = 127,
[1][1][RTW89_FCC][4] = 127,
[1][1][RTW89_ETSI][4] = 127,
@@ -14117,7 +14176,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][4] = 127,
[1][1][RTW89_KCC][4] = 127,
[1][1][RTW89_ACMA][4] = 127,
- [1][1][RTW89_CN][4] = 14,
+ [1][1][RTW89_CN][4] = 127,
[1][1][RTW89_UK][4] = 127,
[1][1][RTW89_FCC][6] = 127,
[1][1][RTW89_ETSI][6] = 127,
@@ -14125,7 +14184,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][6] = 127,
[1][1][RTW89_KCC][6] = 127,
[1][1][RTW89_ACMA][6] = 127,
- [1][1][RTW89_CN][6] = 14,
+ [1][1][RTW89_CN][6] = 127,
[1][1][RTW89_UK][6] = 127,
[1][1][RTW89_FCC][8] = 127,
[1][1][RTW89_ETSI][8] = 127,
@@ -14133,7 +14192,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][8] = 127,
[1][1][RTW89_KCC][8] = 127,
[1][1][RTW89_ACMA][8] = 127,
- [1][1][RTW89_CN][8] = 14,
+ [1][1][RTW89_CN][8] = 127,
[1][1][RTW89_UK][8] = 127,
[1][1][RTW89_FCC][10] = 127,
[1][1][RTW89_ETSI][10] = 127,
@@ -14141,7 +14200,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][10] = 127,
[1][1][RTW89_KCC][10] = 127,
[1][1][RTW89_ACMA][10] = 127,
- [1][1][RTW89_CN][10] = 14,
+ [1][1][RTW89_CN][10] = 127,
[1][1][RTW89_UK][10] = 127,
[1][1][RTW89_FCC][12] = 127,
[1][1][RTW89_ETSI][12] = 127,
@@ -14149,7 +14208,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][12] = 127,
[1][1][RTW89_KCC][12] = 127,
[1][1][RTW89_ACMA][12] = 127,
- [1][1][RTW89_CN][12] = 14,
+ [1][1][RTW89_CN][12] = 127,
[1][1][RTW89_UK][12] = 127,
[1][1][RTW89_FCC][14] = 127,
[1][1][RTW89_ETSI][14] = 127,
@@ -14157,7 +14216,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][14] = 127,
[1][1][RTW89_KCC][14] = 127,
[1][1][RTW89_ACMA][14] = 127,
- [1][1][RTW89_CN][14] = 14,
+ [1][1][RTW89_CN][14] = 127,
[1][1][RTW89_UK][14] = 127,
[1][1][RTW89_FCC][15] = 127,
[1][1][RTW89_ETSI][15] = 127,
@@ -14261,7 +14320,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][38] = 127,
[1][1][RTW89_KCC][38] = 127,
[1][1][RTW89_ACMA][38] = 127,
- [1][1][RTW89_CN][38] = 54,
+ [1][1][RTW89_CN][38] = 127,
[1][1][RTW89_UK][38] = 127,
[1][1][RTW89_FCC][40] = 127,
[1][1][RTW89_ETSI][40] = 127,
@@ -14269,7 +14328,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][40] = 127,
[1][1][RTW89_KCC][40] = 127,
[1][1][RTW89_ACMA][40] = 127,
- [1][1][RTW89_CN][40] = 54,
+ [1][1][RTW89_CN][40] = 127,
[1][1][RTW89_UK][40] = 127,
[1][1][RTW89_FCC][42] = 127,
[1][1][RTW89_ETSI][42] = 127,
@@ -14277,7 +14336,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][42] = 127,
[1][1][RTW89_KCC][42] = 127,
[1][1][RTW89_ACMA][42] = 127,
- [1][1][RTW89_CN][42] = 54,
+ [1][1][RTW89_CN][42] = 127,
[1][1][RTW89_UK][42] = 127,
[1][1][RTW89_FCC][44] = 127,
[1][1][RTW89_ETSI][44] = 127,
@@ -14285,7 +14344,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][44] = 127,
[1][1][RTW89_KCC][44] = 127,
[1][1][RTW89_ACMA][44] = 127,
- [1][1][RTW89_CN][44] = 54,
+ [1][1][RTW89_CN][44] = 127,
[1][1][RTW89_UK][44] = 127,
[1][1][RTW89_FCC][46] = 127,
[1][1][RTW89_ETSI][46] = 127,
@@ -14293,7 +14352,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_IC][46] = 127,
[1][1][RTW89_KCC][46] = 127,
[1][1][RTW89_ACMA][46] = 127,
- [1][1][RTW89_CN][46] = 54,
+ [1][1][RTW89_CN][46] = 127,
[1][1][RTW89_UK][46] = 127,
[1][1][RTW89_FCC][48] = 127,
[1][1][RTW89_ETSI][48] = 127,
@@ -14549,7 +14608,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][0] = 127,
[2][1][RTW89_KCC][0] = 127,
[2][1][RTW89_ACMA][0] = 127,
- [2][1][RTW89_CN][0] = 28,
+ [2][1][RTW89_CN][0] = 127,
[2][1][RTW89_UK][0] = 127,
[2][1][RTW89_FCC][2] = 127,
[2][1][RTW89_ETSI][2] = 127,
@@ -14557,7 +14616,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][2] = 127,
[2][1][RTW89_KCC][2] = 127,
[2][1][RTW89_ACMA][2] = 127,
- [2][1][RTW89_CN][2] = 28,
+ [2][1][RTW89_CN][2] = 127,
[2][1][RTW89_UK][2] = 127,
[2][1][RTW89_FCC][4] = 127,
[2][1][RTW89_ETSI][4] = 127,
@@ -14565,7 +14624,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][4] = 127,
[2][1][RTW89_KCC][4] = 127,
[2][1][RTW89_ACMA][4] = 127,
- [2][1][RTW89_CN][4] = 28,
+ [2][1][RTW89_CN][4] = 127,
[2][1][RTW89_UK][4] = 127,
[2][1][RTW89_FCC][6] = 127,
[2][1][RTW89_ETSI][6] = 127,
@@ -14573,7 +14632,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][6] = 127,
[2][1][RTW89_KCC][6] = 127,
[2][1][RTW89_ACMA][6] = 127,
- [2][1][RTW89_CN][6] = 28,
+ [2][1][RTW89_CN][6] = 127,
[2][1][RTW89_UK][6] = 127,
[2][1][RTW89_FCC][8] = 127,
[2][1][RTW89_ETSI][8] = 127,
@@ -14581,7 +14640,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][8] = 127,
[2][1][RTW89_KCC][8] = 127,
[2][1][RTW89_ACMA][8] = 127,
- [2][1][RTW89_CN][8] = 28,
+ [2][1][RTW89_CN][8] = 127,
[2][1][RTW89_UK][8] = 127,
[2][1][RTW89_FCC][10] = 127,
[2][1][RTW89_ETSI][10] = 127,
@@ -14589,7 +14648,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][10] = 127,
[2][1][RTW89_KCC][10] = 127,
[2][1][RTW89_ACMA][10] = 127,
- [2][1][RTW89_CN][10] = 28,
+ [2][1][RTW89_CN][10] = 127,
[2][1][RTW89_UK][10] = 127,
[2][1][RTW89_FCC][12] = 127,
[2][1][RTW89_ETSI][12] = 127,
@@ -14597,7 +14656,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][12] = 127,
[2][1][RTW89_KCC][12] = 127,
[2][1][RTW89_ACMA][12] = 127,
- [2][1][RTW89_CN][12] = 28,
+ [2][1][RTW89_CN][12] = 127,
[2][1][RTW89_UK][12] = 127,
[2][1][RTW89_FCC][14] = 127,
[2][1][RTW89_ETSI][14] = 127,
@@ -14605,7 +14664,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][14] = 127,
[2][1][RTW89_KCC][14] = 127,
[2][1][RTW89_ACMA][14] = 127,
- [2][1][RTW89_CN][14] = 28,
+ [2][1][RTW89_CN][14] = 127,
[2][1][RTW89_UK][14] = 127,
[2][1][RTW89_FCC][15] = 127,
[2][1][RTW89_ETSI][15] = 127,
@@ -14709,7 +14768,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][38] = 127,
[2][1][RTW89_KCC][38] = 127,
[2][1][RTW89_ACMA][38] = 127,
- [2][1][RTW89_CN][38] = 56,
+ [2][1][RTW89_CN][38] = 127,
[2][1][RTW89_UK][38] = 127,
[2][1][RTW89_FCC][40] = 127,
[2][1][RTW89_ETSI][40] = 127,
@@ -14717,7 +14776,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][40] = 127,
[2][1][RTW89_KCC][40] = 127,
[2][1][RTW89_ACMA][40] = 127,
- [2][1][RTW89_CN][40] = 56,
+ [2][1][RTW89_CN][40] = 127,
[2][1][RTW89_UK][40] = 127,
[2][1][RTW89_FCC][42] = 127,
[2][1][RTW89_ETSI][42] = 127,
@@ -14725,7 +14784,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][42] = 127,
[2][1][RTW89_KCC][42] = 127,
[2][1][RTW89_ACMA][42] = 127,
- [2][1][RTW89_CN][42] = 56,
+ [2][1][RTW89_CN][42] = 127,
[2][1][RTW89_UK][42] = 127,
[2][1][RTW89_FCC][44] = 127,
[2][1][RTW89_ETSI][44] = 127,
@@ -14733,7 +14792,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][44] = 127,
[2][1][RTW89_KCC][44] = 127,
[2][1][RTW89_ACMA][44] = 127,
- [2][1][RTW89_CN][44] = 56,
+ [2][1][RTW89_CN][44] = 127,
[2][1][RTW89_UK][44] = 127,
[2][1][RTW89_FCC][46] = 127,
[2][1][RTW89_ETSI][46] = 127,
@@ -14741,7 +14800,7 @@ const s8 rtw89_8851b_txpwr_lmt_ru_5g_type2[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_IC][46] = 127,
[2][1][RTW89_KCC][46] = 127,
[2][1][RTW89_ACMA][46] = 127,
- [2][1][RTW89_CN][46] = 56,
+ [2][1][RTW89_CN][46] = 127,
[2][1][RTW89_UK][46] = 127,
[2][1][RTW89_FCC][48] = 127,
[2][1][RTW89_ETSI][48] = 127,
@@ -14794,12 +14853,20 @@ const struct rtw89_phy_table rtw89_8851b_phy_nctl_table = {
.rf_path = 0, /* don't care */
};
+static
const struct rtw89_txpwr_table rtw89_8851b_byr_table = {
.data = rtw89_8851b_txpwr_byrate,
.size = ARRAY_SIZE(rtw89_8851b_txpwr_byrate),
.load = rtw89_phy_load_txpwr_byrate,
};
+static
+const struct rtw89_txpwr_table rtw89_8851b_byr_table_type2 = {
+ .data = rtw89_8851b_txpwr_byrate_type2,
+ .size = ARRAY_SIZE(rtw89_8851b_txpwr_byrate_type2),
+ .load = rtw89_phy_load_txpwr_byrate,
+};
+
const struct rtw89_txpwr_track_cfg rtw89_8851b_trk_cfg = {
.delta_swingidx_5ga_n = _txpwr_track_delta_swingidx_5ga_n,
.delta_swingidx_5ga_p = _txpwr_track_delta_swingidx_5ga_p,
@@ -14810,6 +14877,7 @@ const struct rtw89_txpwr_track_cfg rtw89_8851b_trk_cfg = {
};
const struct rtw89_rfe_parms rtw89_8851b_dflt_parms = {
+ .byr_tbl = &rtw89_8851b_byr_table,
.rule_2ghz = {
.lmt = &rtw89_8851b_txpwr_lmt_2g,
.lmt_ru = &rtw89_8851b_txpwr_lmt_ru_2g,
@@ -14818,9 +14886,14 @@ const struct rtw89_rfe_parms rtw89_8851b_dflt_parms = {
.lmt = &rtw89_8851b_txpwr_lmt_5g,
.lmt_ru = &rtw89_8851b_txpwr_lmt_ru_5g,
},
+ .tx_shape = {
+ .lmt = &rtw89_8851b_tx_shape_lmt,
+ .lmt_ru = &rtw89_8851b_tx_shape_lmt_ru,
+ },
};
static const struct rtw89_rfe_parms rtw89_8851b_rfe_parms_type2 = {
+ .byr_tbl = &rtw89_8851b_byr_table_type2,
.rule_2ghz = {
.lmt = &rtw89_8851b_txpwr_lmt_2g_type2,
.lmt_ru = &rtw89_8851b_txpwr_lmt_ru_2g_type2,
@@ -14829,6 +14902,10 @@ static const struct rtw89_rfe_parms rtw89_8851b_rfe_parms_type2 = {
.lmt = &rtw89_8851b_txpwr_lmt_5g_type2,
.lmt_ru = &rtw89_8851b_txpwr_lmt_ru_5g_type2,
},
+ .tx_shape = {
+ .lmt = &rtw89_8851b_tx_shape_lmt,
+ .lmt_ru = &rtw89_8851b_tx_shape_lmt_ru,
+ },
};
const struct rtw89_rfe_parms_conf rtw89_8851b_rfe_parms_conf[] = {
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8851b_table.h b/drivers/net/wireless/realtek/rtw89/rtw8851b_table.h
index a8737de02f66..d8cf545d40a0 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8851b_table.h
+++ b/drivers/net/wireless/realtek/rtw89/rtw8851b_table.h
@@ -11,10 +11,7 @@ extern const struct rtw89_phy_table rtw89_8851b_phy_bb_table;
extern const struct rtw89_phy_table rtw89_8851b_phy_bb_gain_table;
extern const struct rtw89_phy_table rtw89_8851b_phy_radioa_table;
extern const struct rtw89_phy_table rtw89_8851b_phy_nctl_table;
-extern const struct rtw89_txpwr_table rtw89_8851b_byr_table;
extern const struct rtw89_txpwr_track_cfg rtw89_8851b_trk_cfg;
-extern const u8 rtw89_8851b_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
- [RTW89_REGD_NUM];
extern const struct rtw89_rfe_parms rtw89_8851b_dflt_parms;
extern const struct rtw89_rfe_parms_conf rtw89_8851b_rfe_parms_conf[];
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852a.c b/drivers/net/wireless/realtek/rtw89/rtw8852a.c
index d068eae6a2f0..0c36e6180e25 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852a.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852a.c
@@ -1624,9 +1624,10 @@ void rtw8852a_bb_tx_mode_switch(struct rtw89_dev *rtwdev,
rtw89_phy_write32_idx(rtwdev, R_MAC_SEL, B_MAC_SEL_PWR_EN, 0, idx);
}
-static void rtw8852a_bb_ctrl_btc_preagc(struct rtw89_dev *rtwdev, bool bt_en)
+static void rtw8852a_ctrl_nbtg_bt_tx(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx)
{
- rtw89_phy_write_reg3_tbl(rtwdev, bt_en ? &rtw8852a_btc_preagc_en_defs_tbl :
+ rtw89_phy_write_reg3_tbl(rtwdev, en ? &rtw8852a_btc_preagc_en_defs_tbl :
&rtw8852a_btc_preagc_dis_defs_tbl);
}
@@ -1683,9 +1684,10 @@ void rtw8852a_set_trx_mask(struct rtw89_dev *rtwdev, u8 path, u8 group, u32 val)
rtw89_write_rf(rtwdev, path, RR_LUTWE, 0xfffff, 0x0);
}
-static void rtw8852a_ctrl_btg(struct rtw89_dev *rtwdev, bool btg)
+static void rtw8852a_ctrl_btg_bt_rx(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx)
{
- if (btg) {
+ if (en) {
rtw89_phy_write32_mask(rtwdev, R_PATH0_BTG, B_PATH0_BTG_SHEN, 0x1);
rtw89_phy_write32_mask(rtwdev, R_PATH1_BTG, B_PATH1_BTG_SHEN, 0x3);
rtw89_phy_write32_mask(rtwdev, R_PMAC_GNT, B_PMAC_GNT_P1, 0x0);
@@ -1966,15 +1968,15 @@ static void rtw8852a_btc_set_wl_rx_gain(struct rtw89_dev *rtwdev, u32 level)
switch (level) {
case 0: /* original */
default:
- rtw8852a_bb_ctrl_btc_preagc(rtwdev, false);
+ rtw8852a_ctrl_nbtg_bt_tx(rtwdev, false, RTW89_PHY_0);
btc->dm.wl_lna2 = 0;
break;
case 1: /* for FDD free-run */
- rtw8852a_bb_ctrl_btc_preagc(rtwdev, true);
+ rtw8852a_ctrl_nbtg_bt_tx(rtwdev, true, RTW89_PHY_0);
btc->dm.wl_lna2 = 0;
break;
case 2: /* for BTG Co-Rx*/
- rtw8852a_bb_ctrl_btc_preagc(rtwdev, false);
+ rtw8852a_ctrl_nbtg_bt_tx(rtwdev, false, RTW89_PHY_0);
btc->dm.wl_lna2 = 1;
break;
}
@@ -2025,6 +2027,7 @@ static const struct wiphy_wowlan_support rtw_wowlan_stub_8852a = {
static const struct rtw89_chip_ops rtw8852a_chip_ops = {
.enable_bb_rf = rtw89_mac_enable_bb_rf,
.disable_bb_rf = rtw89_mac_disable_bb_rf,
+ .bb_preinit = NULL,
.bb_reset = rtw8852a_bb_reset,
.bb_sethw = rtw8852a_bb_sethw,
.read_rf = rtw89_phy_read_rf,
@@ -2045,9 +2048,9 @@ static const struct rtw89_chip_ops rtw8852a_chip_ops = {
.set_txpwr_ctrl = rtw8852a_set_txpwr_ctrl,
.init_txpwr_unit = rtw8852a_init_txpwr_unit,
.get_thermal = rtw8852a_get_thermal,
- .ctrl_btg = rtw8852a_ctrl_btg,
+ .ctrl_btg_bt_rx = rtw8852a_ctrl_btg_bt_rx,
.query_ppdu = rtw8852a_query_ppdu,
- .bb_ctrl_btc_preagc = rtw8852a_bb_ctrl_btc_preagc,
+ .ctrl_nbtg_bt_tx = rtw8852a_ctrl_nbtg_bt_tx,
.cfg_txrx_path = NULL,
.set_txpwr_ul_tb_offset = rtw8852a_set_txpwr_ul_tb_offset,
.pwr_on_func = NULL,
@@ -2081,6 +2084,7 @@ const struct rtw89_chip_info rtw8852a_chip_info = {
.fw_basename = RTW8852A_FW_BASENAME,
.fw_format_max = RTW8852A_FW_FORMAT_MAX,
.try_ce_fw = false,
+ .bbmcu_nr = 0,
.needed_fw_elms = 0,
.fifo_size = 458752,
.small_fifo_size = false,
@@ -2101,7 +2105,6 @@ const struct rtw89_chip_info rtw8852a_chip_info = {
&rtw89_8852a_phy_radiob_table,},
.nctl_table = &rtw89_8852a_phy_nctl_table,
.nctl_post_table = NULL,
- .byr_table = &rtw89_8852a_byr_table,
.dflt_parms = &rtw89_8852a_dflt_parms,
.rfe_parms_conf = NULL,
.txpwr_factor_rf = 2,
@@ -2114,7 +2117,8 @@ const struct rtw89_chip_info rtw8852a_chip_info = {
BIT(NL80211_BAND_5GHZ),
.support_bw160 = false,
.support_unii4 = false,
- .support_ul_tb_ctrl = false,
+ .ul_tb_waveform_ctrl = false,
+ .ul_tb_pwr_diff = false,
.hw_sec_hdr = false,
.rf_path_num = 2,
.tx_nss = 2,
@@ -2157,6 +2161,7 @@ const struct rtw89_chip_info rtw8852a_chip_info = {
.hci_func_en_addr = R_AX_HCI_FUNC_EN,
.h2c_desc_size = sizeof(struct rtw89_txwd_body),
.txwd_body_size = sizeof(struct rtw89_txwd_body),
+ .txwd_info_size = sizeof(struct rtw89_txwd_info),
.h2c_ctrl_reg = R_AX_H2CREG_CTRL,
.h2c_counter_reg = {R_AX_UDM1 + 1, B_AX_UDM1_HALMAC_H2C_DEQ_CNT_MASK >> 8},
.h2c_regs = rtw8852a_h2c_regs,
@@ -2170,6 +2175,7 @@ const struct rtw89_chip_info rtw8852a_chip_info = {
.dcfo_comp_sft = 10,
.imr_info = &rtw8852a_imr_info,
.rrsr_cfgs = &rtw8852a_rrsr_cfgs,
+ .bss_clr_vld = {R_BSS_CLR_MAP, B_BSS_CLR_MAP_VLD0},
.bss_clr_map_reg = R_BSS_CLR_MAP,
.dma_ch_mask = 0,
.edcca_lvl_reg = R_SEG0R_EDCCA_LVL,
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852a_table.c b/drivers/net/wireless/realtek/rtw89/rtw8852a_table.c
index be54194558ff..495890c180ef 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852a_table.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852a_table.c
@@ -51020,6 +51020,7 @@ const struct rtw89_phy_table rtw89_8852a_phy_nctl_table = {
.rf_path = 0, /* don't care */
};
+static
const struct rtw89_txpwr_table rtw89_8852a_byr_table = {
.data = rtw89_8852a_txpwr_byrate,
.size = ARRAY_SIZE(rtw89_8852a_txpwr_byrate),
@@ -51049,6 +51050,7 @@ const struct rtw89_phy_dig_gain_table rtw89_8852a_phy_dig_table = {
};
const struct rtw89_rfe_parms rtw89_8852a_dflt_parms = {
+ .byr_tbl = &rtw89_8852a_byr_table,
.rule_2ghz = {
.lmt = &rtw89_8852a_txpwr_lmt_2g,
.lmt_ru = &rtw89_8852a_txpwr_lmt_ru_2g,
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852a_table.h b/drivers/net/wireless/realtek/rtw89/rtw8852a_table.h
index 41c379b1044d..7463ae6ee3f9 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852a_table.h
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852a_table.h
@@ -11,7 +11,6 @@ extern const struct rtw89_phy_table rtw89_8852a_phy_bb_table;
extern const struct rtw89_phy_table rtw89_8852a_phy_radioa_table;
extern const struct rtw89_phy_table rtw89_8852a_phy_radiob_table;
extern const struct rtw89_phy_table rtw89_8852a_phy_nctl_table;
-extern const struct rtw89_txpwr_table rtw89_8852a_byr_table;
extern const struct rtw89_phy_dig_gain_table rtw89_8852a_phy_dig_table;
extern const struct rtw89_txpwr_track_cfg rtw89_8852a_trk_cfg;
extern const struct rtw89_rfe_parms rtw89_8852a_dflt_parms;
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b.c b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
index 0063301952b3..9d4e6f08218d 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
@@ -1689,10 +1689,11 @@ static void rtw8852b_set_tx_shape(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan,
enum rtw89_phy_idx phy_idx)
{
+ const struct rtw89_rfe_parms *rfe_parms = rtwdev->rfe_parms;
u8 band = chan->band_type;
u8 regd = rtw89_regd_get(rtwdev, band);
- u8 tx_shape_cck = rtw89_8852b_tx_shape[band][RTW89_RS_CCK][regd];
- u8 tx_shape_ofdm = rtw89_8852b_tx_shape[band][RTW89_RS_OFDM][regd];
+ u8 tx_shape_cck = (*rfe_parms->tx_shape.lmt)[band][RTW89_RS_CCK][regd];
+ u8 tx_shape_ofdm = (*rfe_parms->tx_shape.lmt)[band][RTW89_RS_OFDM][regd];
if (band == RTW89_BAND_2G)
rtw8852b_bb_set_tx_shape_dfir(rtwdev, chan, tx_shape_cck, phy_idx);
@@ -1928,15 +1929,17 @@ void rtw8852b_bb_restore_tssi(struct rtw89_dev *rtwdev, enum rtw89_phy_idx idx,
rtw89_phy_write32_idx(rtwdev, R_TXPWR, B_TXPWR_MSK, bak->tx_pwr, idx);
}
-static void rtw8852b_bb_ctrl_btc_preagc(struct rtw89_dev *rtwdev, bool bt_en)
+static void rtw8852b_ctrl_nbtg_bt_tx(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx)
{
- rtw89_phy_write_reg3_tbl(rtwdev, bt_en ? &rtw8852b_btc_preagc_en_defs_tbl :
+ rtw89_phy_write_reg3_tbl(rtwdev, en ? &rtw8852b_btc_preagc_en_defs_tbl :
&rtw8852b_btc_preagc_dis_defs_tbl);
}
-static void rtw8852b_ctrl_btg(struct rtw89_dev *rtwdev, bool btg)
+static void rtw8852b_ctrl_btg_bt_rx(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx)
{
- if (btg) {
+ if (en) {
rtw89_phy_write32_mask(rtwdev, R_PATH0_BT_SHARE_V1,
B_PATH0_BT_SHARE_V1, 0x1);
rtw89_phy_write32_mask(rtwdev, R_PATH0_BTG_PATH_V1,
@@ -2017,9 +2020,9 @@ void rtw8852b_bb_ctrl_rx_path(struct rtw89_dev *rtwdev,
if (chan->band_type == RTW89_BAND_2G &&
(rx_path == RF_B || rx_path == RF_AB))
- rtw8852b_ctrl_btg(rtwdev, true);
+ rtw8852b_ctrl_btg_bt_rx(rtwdev, true, RTW89_PHY_0);
else
- rtw8852b_ctrl_btg(rtwdev, false);
+ rtw8852b_ctrl_btg_bt_rx(rtwdev, false, RTW89_PHY_0);
rst_mask0 = B_P0_TXPW_RSTB_MANON | B_P0_TXPW_RSTB_TSSI;
rst_mask1 = B_P1_TXPW_RSTB_MANON | B_P1_TXPW_RSTB_TSSI;
@@ -2345,15 +2348,15 @@ static void rtw8852b_btc_set_wl_rx_gain(struct rtw89_dev *rtwdev, u32 level)
switch (level) {
case 0: /* original */
default:
- rtw8852b_bb_ctrl_btc_preagc(rtwdev, false);
+ rtw8852b_ctrl_nbtg_bt_tx(rtwdev, false, RTW89_PHY_0);
btc->dm.wl_lna2 = 0;
break;
case 1: /* for FDD free-run */
- rtw8852b_bb_ctrl_btc_preagc(rtwdev, true);
+ rtw8852b_ctrl_nbtg_bt_tx(rtwdev, true, RTW89_PHY_0);
btc->dm.wl_lna2 = 0;
break;
case 2: /* for BTG Co-Rx*/
- rtw8852b_bb_ctrl_btc_preagc(rtwdev, false);
+ rtw8852b_ctrl_nbtg_bt_tx(rtwdev, false, RTW89_PHY_0);
btc->dm.wl_lna2 = 1;
break;
}
@@ -2449,6 +2452,7 @@ static int rtw8852b_mac_disable_bb_rf(struct rtw89_dev *rtwdev)
static const struct rtw89_chip_ops rtw8852b_chip_ops = {
.enable_bb_rf = rtw8852b_mac_enable_bb_rf,
.disable_bb_rf = rtw8852b_mac_disable_bb_rf,
+ .bb_preinit = NULL,
.bb_reset = rtw8852b_bb_reset,
.bb_sethw = rtw8852b_bb_sethw,
.read_rf = rtw89_phy_read_rf_v1,
@@ -2469,9 +2473,9 @@ static const struct rtw89_chip_ops rtw8852b_chip_ops = {
.set_txpwr_ctrl = rtw8852b_set_txpwr_ctrl,
.init_txpwr_unit = rtw8852b_init_txpwr_unit,
.get_thermal = rtw8852b_get_thermal,
- .ctrl_btg = rtw8852b_ctrl_btg,
+ .ctrl_btg_bt_rx = rtw8852b_ctrl_btg_bt_rx,
.query_ppdu = rtw8852b_query_ppdu,
- .bb_ctrl_btc_preagc = rtw8852b_bb_ctrl_btc_preagc,
+ .ctrl_nbtg_bt_tx = rtw8852b_ctrl_nbtg_bt_tx,
.cfg_txrx_path = rtw8852b_bb_cfg_txrx_path,
.set_txpwr_ul_tb_offset = rtw8852b_set_txpwr_ul_tb_offset,
.pwr_on_func = rtw8852b_pwr_on_func,
@@ -2514,6 +2518,7 @@ const struct rtw89_chip_info rtw8852b_chip_info = {
.fw_basename = RTW8852B_FW_BASENAME,
.fw_format_max = RTW8852B_FW_FORMAT_MAX,
.try_ce_fw = true,
+ .bbmcu_nr = 0,
.needed_fw_elms = 0,
.fifo_size = 196608,
.small_fifo_size = true,
@@ -2534,7 +2539,6 @@ const struct rtw89_chip_info rtw8852b_chip_info = {
&rtw89_8852b_phy_radiob_table,},
.nctl_table = &rtw89_8852b_phy_nctl_table,
.nctl_post_table = NULL,
- .byr_table = &rtw89_8852b_byr_table,
.dflt_parms = &rtw89_8852b_dflt_parms,
.rfe_parms_conf = NULL,
.txpwr_factor_rf = 2,
@@ -2547,7 +2551,8 @@ const struct rtw89_chip_info rtw8852b_chip_info = {
BIT(NL80211_BAND_5GHZ),
.support_bw160 = false,
.support_unii4 = true,
- .support_ul_tb_ctrl = true,
+ .ul_tb_waveform_ctrl = true,
+ .ul_tb_pwr_diff = false,
.hw_sec_hdr = false,
.rf_path_num = 2,
.tx_nss = 2,
@@ -2590,6 +2595,7 @@ const struct rtw89_chip_info rtw8852b_chip_info = {
.hci_func_en_addr = R_AX_HCI_FUNC_EN,
.h2c_desc_size = sizeof(struct rtw89_txwd_body),
.txwd_body_size = sizeof(struct rtw89_txwd_body),
+ .txwd_info_size = sizeof(struct rtw89_txwd_info),
.h2c_ctrl_reg = R_AX_H2CREG_CTRL,
.h2c_counter_reg = {R_AX_UDM1 + 1, B_AX_UDM1_HALMAC_H2C_DEQ_CNT_MASK >> 8},
.h2c_regs = rtw8852b_h2c_regs,
@@ -2603,6 +2609,7 @@ const struct rtw89_chip_info rtw8852b_chip_info = {
.dcfo_comp_sft = 10,
.imr_info = &rtw8852b_imr_info,
.rrsr_cfgs = &rtw8852b_rrsr_cfgs,
+ .bss_clr_vld = {R_BSS_CLR_MAP_V1, B_BSS_CLR_MAP_VLD0},
.bss_clr_map_reg = R_BSS_CLR_MAP_V1,
.dma_ch_mask = BIT(RTW89_DMA_ACH4) | BIT(RTW89_DMA_ACH5) |
BIT(RTW89_DMA_ACH6) | BIT(RTW89_DMA_ACH7) |
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b_table.c b/drivers/net/wireless/realtek/rtw89/rtw8852b_table.c
index 17124d851a22..d2ce16e98bac 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852b_table.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852b_table.c
@@ -14666,8 +14666,9 @@ static const s8 _txpwr_track_delta_swingidx_2g_cck_a_p[] = {
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1};
-const u8 rtw89_8852b_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
- [RTW89_REGD_NUM] = {
+static
+const u8 rtw89_8852b_tx_shape_lmt[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
+ [RTW89_REGD_NUM] = {
[0][0][RTW89_ACMA] = 0,
[0][0][RTW89_CHILE] = 0,
[0][0][RTW89_CN] = 0,
@@ -14707,35 +14708,63 @@ const u8 rtw89_8852b_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
};
static
+const u8 rtw89_8852b_tx_shape_lmt_ru[RTW89_BAND_NUM][RTW89_REGD_NUM] = {
+ [0][RTW89_ACMA] = 0,
+ [0][RTW89_CHILE] = 0,
+ [0][RTW89_CN] = 0,
+ [0][RTW89_ETSI] = 0,
+ [0][RTW89_FCC] = 3,
+ [0][RTW89_IC] = 3,
+ [0][RTW89_KCC] = 0,
+ [0][RTW89_MEXICO] = 3,
+ [0][RTW89_MKK] = 0,
+ [0][RTW89_QATAR] = 0,
+ [0][RTW89_UK] = 0,
+ [0][RTW89_UKRAINE] = 0,
+ [1][RTW89_ACMA] = 0,
+ [1][RTW89_CHILE] = 0,
+ [1][RTW89_CN] = 0,
+ [1][RTW89_ETSI] = 0,
+ [1][RTW89_FCC] = 3,
+ [1][RTW89_IC] = 3,
+ [1][RTW89_KCC] = 0,
+ [1][RTW89_MEXICO] = 3,
+ [1][RTW89_MKK] = 0,
+ [1][RTW89_QATAR] = 0,
+ [1][RTW89_UK] = 0,
+ [1][RTW89_UKRAINE] = 0,
+};
+
+static
const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[RTW89_RS_LMT_NUM][RTW89_BF_NUM]
[RTW89_REGD_NUM][RTW89_2G_CH_NUM] = {
- [0][0][0][0][RTW89_WW][0] = 58,
- [0][0][0][0][RTW89_WW][1] = 58,
- [0][0][0][0][RTW89_WW][2] = 58,
- [0][0][0][0][RTW89_WW][3] = 58,
- [0][0][0][0][RTW89_WW][4] = 58,
- [0][0][0][0][RTW89_WW][5] = 58,
- [0][0][0][0][RTW89_WW][6] = 58,
- [0][0][0][0][RTW89_WW][7] = 58,
- [0][0][0][0][RTW89_WW][8] = 58,
- [0][0][0][0][RTW89_WW][9] = 58,
- [0][0][0][0][RTW89_WW][10] = 58,
- [0][0][0][0][RTW89_WW][11] = 58,
+ [0][0][0][0][RTW89_WW][0] = 56,
+ [0][0][0][0][RTW89_WW][1] = 56,
+ [0][0][0][0][RTW89_WW][2] = 56,
+ [0][0][0][0][RTW89_WW][3] = 56,
+ [0][0][0][0][RTW89_WW][4] = 56,
+ [0][0][0][0][RTW89_WW][5] = 56,
+ [0][0][0][0][RTW89_WW][6] = 56,
+ [0][0][0][0][RTW89_WW][7] = 56,
+ [0][0][0][0][RTW89_WW][8] = 56,
+ [0][0][0][0][RTW89_WW][9] = 56,
+ [0][0][0][0][RTW89_WW][10] = 56,
+ [0][0][0][0][RTW89_WW][11] = 56,
[0][0][0][0][RTW89_WW][12] = 56,
[0][0][0][0][RTW89_WW][13] = 76,
- [0][1][0][0][RTW89_WW][0] = 46,
- [0][1][0][0][RTW89_WW][1] = 46,
- [0][1][0][0][RTW89_WW][2] = 46,
- [0][1][0][0][RTW89_WW][3] = 46,
- [0][1][0][0][RTW89_WW][4] = 46,
- [0][1][0][0][RTW89_WW][5] = 46,
- [0][1][0][0][RTW89_WW][6] = 46,
- [0][1][0][0][RTW89_WW][7] = 46,
- [0][1][0][0][RTW89_WW][8] = 46,
- [0][1][0][0][RTW89_WW][9] = 46,
- [0][1][0][0][RTW89_WW][10] = 46,
- [0][1][0][0][RTW89_WW][11] = 46,
+ [0][1][0][0][RTW89_WW][0] = 44,
+ [0][1][0][0][RTW89_WW][1] = 44,
+ [0][1][0][0][RTW89_WW][2] = 44,
+ [0][1][0][0][RTW89_WW][3] = 44,
+ [0][1][0][0][RTW89_WW][4] = 44,
+ [0][1][0][0][RTW89_WW][5] = 44,
+ [0][1][0][0][RTW89_WW][6] = 44,
+ [0][1][0][0][RTW89_WW][7] = 44,
+ [0][1][0][0][RTW89_WW][8] = 44,
+ [0][1][0][0][RTW89_WW][9] = 44,
+ [0][1][0][0][RTW89_WW][10] = 44,
+ [0][1][0][0][RTW89_WW][11] = 44,
[0][1][0][0][RTW89_WW][12] = 42,
[0][1][0][0][RTW89_WW][13] = 64,
[1][0][0][0][RTW89_WW][0] = 0,
@@ -14743,7 +14772,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_WW][2] = 50,
[1][0][0][0][RTW89_WW][3] = 50,
[1][0][0][0][RTW89_WW][4] = 50,
- [1][0][0][0][RTW89_WW][5] = 58,
+ [1][0][0][0][RTW89_WW][5] = 56,
[1][0][0][0][RTW89_WW][6] = 50,
[1][0][0][0][RTW89_WW][7] = 50,
[1][0][0][0][RTW89_WW][8] = 50,
@@ -14754,10 +14783,10 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_WW][13] = 0,
[1][1][0][0][RTW89_WW][0] = 0,
[1][1][0][0][RTW89_WW][1] = 0,
- [1][1][0][0][RTW89_WW][2] = 46,
- [1][1][0][0][RTW89_WW][3] = 46,
- [1][1][0][0][RTW89_WW][4] = 46,
- [1][1][0][0][RTW89_WW][5] = 46,
+ [1][1][0][0][RTW89_WW][2] = 44,
+ [1][1][0][0][RTW89_WW][3] = 44,
+ [1][1][0][0][RTW89_WW][4] = 44,
+ [1][1][0][0][RTW89_WW][5] = 44,
[1][1][0][0][RTW89_WW][6] = 34,
[1][1][0][0][RTW89_WW][7] = 34,
[1][1][0][0][RTW89_WW][8] = 34,
@@ -14846,7 +14875,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_WW][7] = 58,
[1][0][2][0][RTW89_WW][8] = 58,
[1][0][2][0][RTW89_WW][9] = 58,
- [1][0][2][0][RTW89_WW][10] = 58,
+ [1][0][2][0][RTW89_WW][10] = 40,
[1][0][2][0][RTW89_WW][11] = 0,
[1][0][2][0][RTW89_WW][12] = 0,
[1][0][2][0][RTW89_WW][13] = 0,
@@ -14887,7 +14916,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][0] = 64,
[0][0][0][0][RTW89_UKRAINE][0] = 58,
[0][0][0][0][RTW89_MEXICO][0] = 78,
- [0][0][0][0][RTW89_CN][0] = 58,
+ [0][0][0][0][RTW89_CN][0] = 56,
[0][0][0][0][RTW89_QATAR][0] = 58,
[0][0][0][0][RTW89_UK][0] = 58,
[0][0][0][0][RTW89_FCC][1] = 78,
@@ -14899,7 +14928,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][1] = 64,
[0][0][0][0][RTW89_UKRAINE][1] = 58,
[0][0][0][0][RTW89_MEXICO][1] = 78,
- [0][0][0][0][RTW89_CN][1] = 58,
+ [0][0][0][0][RTW89_CN][1] = 56,
[0][0][0][0][RTW89_QATAR][1] = 58,
[0][0][0][0][RTW89_UK][1] = 58,
[0][0][0][0][RTW89_FCC][2] = 78,
@@ -14911,7 +14940,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][2] = 64,
[0][0][0][0][RTW89_UKRAINE][2] = 58,
[0][0][0][0][RTW89_MEXICO][2] = 78,
- [0][0][0][0][RTW89_CN][2] = 58,
+ [0][0][0][0][RTW89_CN][2] = 56,
[0][0][0][0][RTW89_QATAR][2] = 58,
[0][0][0][0][RTW89_UK][2] = 58,
[0][0][0][0][RTW89_FCC][3] = 78,
@@ -14923,7 +14952,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][3] = 64,
[0][0][0][0][RTW89_UKRAINE][3] = 58,
[0][0][0][0][RTW89_MEXICO][3] = 78,
- [0][0][0][0][RTW89_CN][3] = 58,
+ [0][0][0][0][RTW89_CN][3] = 56,
[0][0][0][0][RTW89_QATAR][3] = 58,
[0][0][0][0][RTW89_UK][3] = 58,
[0][0][0][0][RTW89_FCC][4] = 78,
@@ -14935,7 +14964,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][4] = 64,
[0][0][0][0][RTW89_UKRAINE][4] = 58,
[0][0][0][0][RTW89_MEXICO][4] = 78,
- [0][0][0][0][RTW89_CN][4] = 58,
+ [0][0][0][0][RTW89_CN][4] = 56,
[0][0][0][0][RTW89_QATAR][4] = 58,
[0][0][0][0][RTW89_UK][4] = 58,
[0][0][0][0][RTW89_FCC][5] = 78,
@@ -14947,7 +14976,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][5] = 64,
[0][0][0][0][RTW89_UKRAINE][5] = 58,
[0][0][0][0][RTW89_MEXICO][5] = 78,
- [0][0][0][0][RTW89_CN][5] = 58,
+ [0][0][0][0][RTW89_CN][5] = 56,
[0][0][0][0][RTW89_QATAR][5] = 58,
[0][0][0][0][RTW89_UK][5] = 58,
[0][0][0][0][RTW89_FCC][6] = 78,
@@ -14959,7 +14988,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][6] = 64,
[0][0][0][0][RTW89_UKRAINE][6] = 58,
[0][0][0][0][RTW89_MEXICO][6] = 78,
- [0][0][0][0][RTW89_CN][6] = 58,
+ [0][0][0][0][RTW89_CN][6] = 56,
[0][0][0][0][RTW89_QATAR][6] = 58,
[0][0][0][0][RTW89_UK][6] = 58,
[0][0][0][0][RTW89_FCC][7] = 78,
@@ -14971,7 +15000,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][7] = 64,
[0][0][0][0][RTW89_UKRAINE][7] = 58,
[0][0][0][0][RTW89_MEXICO][7] = 78,
- [0][0][0][0][RTW89_CN][7] = 58,
+ [0][0][0][0][RTW89_CN][7] = 56,
[0][0][0][0][RTW89_QATAR][7] = 58,
[0][0][0][0][RTW89_UK][7] = 58,
[0][0][0][0][RTW89_FCC][8] = 78,
@@ -14983,7 +15012,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][8] = 64,
[0][0][0][0][RTW89_UKRAINE][8] = 58,
[0][0][0][0][RTW89_MEXICO][8] = 78,
- [0][0][0][0][RTW89_CN][8] = 58,
+ [0][0][0][0][RTW89_CN][8] = 56,
[0][0][0][0][RTW89_QATAR][8] = 58,
[0][0][0][0][RTW89_UK][8] = 58,
[0][0][0][0][RTW89_FCC][9] = 78,
@@ -14995,7 +15024,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][9] = 64,
[0][0][0][0][RTW89_UKRAINE][9] = 58,
[0][0][0][0][RTW89_MEXICO][9] = 78,
- [0][0][0][0][RTW89_CN][9] = 58,
+ [0][0][0][0][RTW89_CN][9] = 56,
[0][0][0][0][RTW89_QATAR][9] = 58,
[0][0][0][0][RTW89_UK][9] = 58,
[0][0][0][0][RTW89_FCC][10] = 78,
@@ -15007,7 +15036,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][10] = 66,
[0][0][0][0][RTW89_UKRAINE][10] = 58,
[0][0][0][0][RTW89_MEXICO][10] = 78,
- [0][0][0][0][RTW89_CN][10] = 58,
+ [0][0][0][0][RTW89_CN][10] = 56,
[0][0][0][0][RTW89_QATAR][10] = 58,
[0][0][0][0][RTW89_UK][10] = 58,
[0][0][0][0][RTW89_FCC][11] = 70,
@@ -15019,7 +15048,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][11] = 64,
[0][0][0][0][RTW89_UKRAINE][11] = 58,
[0][0][0][0][RTW89_MEXICO][11] = 70,
- [0][0][0][0][RTW89_CN][11] = 58,
+ [0][0][0][0][RTW89_CN][11] = 56,
[0][0][0][0][RTW89_QATAR][11] = 58,
[0][0][0][0][RTW89_UK][11] = 58,
[0][0][0][0][RTW89_FCC][12] = 56,
@@ -15031,7 +15060,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_CHILE][12] = 56,
[0][0][0][0][RTW89_UKRAINE][12] = 58,
[0][0][0][0][RTW89_MEXICO][12] = 56,
- [0][0][0][0][RTW89_CN][12] = 58,
+ [0][0][0][0][RTW89_CN][12] = 56,
[0][0][0][0][RTW89_QATAR][12] = 58,
[0][0][0][0][RTW89_UK][12] = 58,
[0][0][0][0][RTW89_FCC][13] = 127,
@@ -15055,7 +15084,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][0] = 50,
[0][1][0][0][RTW89_UKRAINE][0] = 46,
[0][1][0][0][RTW89_MEXICO][0] = 74,
- [0][1][0][0][RTW89_CN][0] = 46,
+ [0][1][0][0][RTW89_CN][0] = 44,
[0][1][0][0][RTW89_QATAR][0] = 46,
[0][1][0][0][RTW89_UK][0] = 46,
[0][1][0][0][RTW89_FCC][1] = 74,
@@ -15067,7 +15096,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][1] = 50,
[0][1][0][0][RTW89_UKRAINE][1] = 46,
[0][1][0][0][RTW89_MEXICO][1] = 74,
- [0][1][0][0][RTW89_CN][1] = 46,
+ [0][1][0][0][RTW89_CN][1] = 44,
[0][1][0][0][RTW89_QATAR][1] = 46,
[0][1][0][0][RTW89_UK][1] = 46,
[0][1][0][0][RTW89_FCC][2] = 74,
@@ -15079,7 +15108,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][2] = 50,
[0][1][0][0][RTW89_UKRAINE][2] = 46,
[0][1][0][0][RTW89_MEXICO][2] = 74,
- [0][1][0][0][RTW89_CN][2] = 46,
+ [0][1][0][0][RTW89_CN][2] = 44,
[0][1][0][0][RTW89_QATAR][2] = 46,
[0][1][0][0][RTW89_UK][2] = 46,
[0][1][0][0][RTW89_FCC][3] = 74,
@@ -15091,7 +15120,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][3] = 50,
[0][1][0][0][RTW89_UKRAINE][3] = 46,
[0][1][0][0][RTW89_MEXICO][3] = 74,
- [0][1][0][0][RTW89_CN][3] = 46,
+ [0][1][0][0][RTW89_CN][3] = 44,
[0][1][0][0][RTW89_QATAR][3] = 46,
[0][1][0][0][RTW89_UK][3] = 46,
[0][1][0][0][RTW89_FCC][4] = 74,
@@ -15103,7 +15132,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][4] = 50,
[0][1][0][0][RTW89_UKRAINE][4] = 46,
[0][1][0][0][RTW89_MEXICO][4] = 74,
- [0][1][0][0][RTW89_CN][4] = 46,
+ [0][1][0][0][RTW89_CN][4] = 44,
[0][1][0][0][RTW89_QATAR][4] = 46,
[0][1][0][0][RTW89_UK][4] = 46,
[0][1][0][0][RTW89_FCC][5] = 74,
@@ -15115,7 +15144,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][5] = 50,
[0][1][0][0][RTW89_UKRAINE][5] = 46,
[0][1][0][0][RTW89_MEXICO][5] = 74,
- [0][1][0][0][RTW89_CN][5] = 46,
+ [0][1][0][0][RTW89_CN][5] = 44,
[0][1][0][0][RTW89_QATAR][5] = 46,
[0][1][0][0][RTW89_UK][5] = 46,
[0][1][0][0][RTW89_FCC][6] = 74,
@@ -15127,7 +15156,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][6] = 52,
[0][1][0][0][RTW89_UKRAINE][6] = 46,
[0][1][0][0][RTW89_MEXICO][6] = 74,
- [0][1][0][0][RTW89_CN][6] = 46,
+ [0][1][0][0][RTW89_CN][6] = 44,
[0][1][0][0][RTW89_QATAR][6] = 46,
[0][1][0][0][RTW89_UK][6] = 46,
[0][1][0][0][RTW89_FCC][7] = 74,
@@ -15139,7 +15168,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][7] = 50,
[0][1][0][0][RTW89_UKRAINE][7] = 46,
[0][1][0][0][RTW89_MEXICO][7] = 74,
- [0][1][0][0][RTW89_CN][7] = 46,
+ [0][1][0][0][RTW89_CN][7] = 44,
[0][1][0][0][RTW89_QATAR][7] = 46,
[0][1][0][0][RTW89_UK][7] = 46,
[0][1][0][0][RTW89_FCC][8] = 74,
@@ -15151,7 +15180,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][8] = 50,
[0][1][0][0][RTW89_UKRAINE][8] = 46,
[0][1][0][0][RTW89_MEXICO][8] = 74,
- [0][1][0][0][RTW89_CN][8] = 46,
+ [0][1][0][0][RTW89_CN][8] = 44,
[0][1][0][0][RTW89_QATAR][8] = 46,
[0][1][0][0][RTW89_UK][8] = 46,
[0][1][0][0][RTW89_FCC][9] = 74,
@@ -15163,7 +15192,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][9] = 50,
[0][1][0][0][RTW89_UKRAINE][9] = 46,
[0][1][0][0][RTW89_MEXICO][9] = 74,
- [0][1][0][0][RTW89_CN][9] = 46,
+ [0][1][0][0][RTW89_CN][9] = 44,
[0][1][0][0][RTW89_QATAR][9] = 46,
[0][1][0][0][RTW89_UK][9] = 46,
[0][1][0][0][RTW89_FCC][10] = 74,
@@ -15175,7 +15204,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][10] = 52,
[0][1][0][0][RTW89_UKRAINE][10] = 46,
[0][1][0][0][RTW89_MEXICO][10] = 74,
- [0][1][0][0][RTW89_CN][10] = 46,
+ [0][1][0][0][RTW89_CN][10] = 44,
[0][1][0][0][RTW89_QATAR][10] = 46,
[0][1][0][0][RTW89_UK][10] = 46,
[0][1][0][0][RTW89_FCC][11] = 54,
@@ -15187,7 +15216,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][11] = 50,
[0][1][0][0][RTW89_UKRAINE][11] = 46,
[0][1][0][0][RTW89_MEXICO][11] = 54,
- [0][1][0][0][RTW89_CN][11] = 46,
+ [0][1][0][0][RTW89_CN][11] = 44,
[0][1][0][0][RTW89_QATAR][11] = 46,
[0][1][0][0][RTW89_UK][11] = 46,
[0][1][0][0][RTW89_FCC][12] = 42,
@@ -15199,7 +15228,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_CHILE][12] = 42,
[0][1][0][0][RTW89_UKRAINE][12] = 46,
[0][1][0][0][RTW89_MEXICO][12] = 42,
- [0][1][0][0][RTW89_CN][12] = 46,
+ [0][1][0][0][RTW89_CN][12] = 44,
[0][1][0][0][RTW89_QATAR][12] = 46,
[0][1][0][0][RTW89_UK][12] = 46,
[0][1][0][0][RTW89_FCC][13] = 127,
@@ -15247,7 +15276,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_CHILE][2] = 62,
[1][0][0][0][RTW89_UKRAINE][2] = 58,
[1][0][0][0][RTW89_MEXICO][2] = 50,
- [1][0][0][0][RTW89_CN][2] = 58,
+ [1][0][0][0][RTW89_CN][2] = 56,
[1][0][0][0][RTW89_QATAR][2] = 58,
[1][0][0][0][RTW89_UK][2] = 58,
[1][0][0][0][RTW89_FCC][3] = 50,
@@ -15259,7 +15288,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_CHILE][3] = 62,
[1][0][0][0][RTW89_UKRAINE][3] = 58,
[1][0][0][0][RTW89_MEXICO][3] = 50,
- [1][0][0][0][RTW89_CN][3] = 58,
+ [1][0][0][0][RTW89_CN][3] = 56,
[1][0][0][0][RTW89_QATAR][3] = 58,
[1][0][0][0][RTW89_UK][3] = 58,
[1][0][0][0][RTW89_FCC][4] = 50,
@@ -15271,7 +15300,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_CHILE][4] = 62,
[1][0][0][0][RTW89_UKRAINE][4] = 58,
[1][0][0][0][RTW89_MEXICO][4] = 50,
- [1][0][0][0][RTW89_CN][4] = 58,
+ [1][0][0][0][RTW89_CN][4] = 56,
[1][0][0][0][RTW89_QATAR][4] = 58,
[1][0][0][0][RTW89_UK][4] = 58,
[1][0][0][0][RTW89_FCC][5] = 66,
@@ -15283,7 +15312,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_CHILE][5] = 62,
[1][0][0][0][RTW89_UKRAINE][5] = 58,
[1][0][0][0][RTW89_MEXICO][5] = 66,
- [1][0][0][0][RTW89_CN][5] = 58,
+ [1][0][0][0][RTW89_CN][5] = 56,
[1][0][0][0][RTW89_QATAR][5] = 58,
[1][0][0][0][RTW89_UK][5] = 58,
[1][0][0][0][RTW89_FCC][6] = 50,
@@ -15295,7 +15324,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_CHILE][6] = 62,
[1][0][0][0][RTW89_UKRAINE][6] = 58,
[1][0][0][0][RTW89_MEXICO][6] = 50,
- [1][0][0][0][RTW89_CN][6] = 58,
+ [1][0][0][0][RTW89_CN][6] = 56,
[1][0][0][0][RTW89_QATAR][6] = 58,
[1][0][0][0][RTW89_UK][6] = 58,
[1][0][0][0][RTW89_FCC][7] = 50,
@@ -15307,7 +15336,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_CHILE][7] = 62,
[1][0][0][0][RTW89_UKRAINE][7] = 58,
[1][0][0][0][RTW89_MEXICO][7] = 50,
- [1][0][0][0][RTW89_CN][7] = 58,
+ [1][0][0][0][RTW89_CN][7] = 56,
[1][0][0][0][RTW89_QATAR][7] = 58,
[1][0][0][0][RTW89_UK][7] = 58,
[1][0][0][0][RTW89_FCC][8] = 50,
@@ -15319,7 +15348,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_CHILE][8] = 62,
[1][0][0][0][RTW89_UKRAINE][8] = 58,
[1][0][0][0][RTW89_MEXICO][8] = 50,
- [1][0][0][0][RTW89_CN][8] = 58,
+ [1][0][0][0][RTW89_CN][8] = 56,
[1][0][0][0][RTW89_QATAR][8] = 58,
[1][0][0][0][RTW89_UK][8] = 58,
[1][0][0][0][RTW89_FCC][9] = 42,
@@ -15331,7 +15360,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_CHILE][9] = 42,
[1][0][0][0][RTW89_UKRAINE][9] = 58,
[1][0][0][0][RTW89_MEXICO][9] = 42,
- [1][0][0][0][RTW89_CN][9] = 58,
+ [1][0][0][0][RTW89_CN][9] = 56,
[1][0][0][0][RTW89_QATAR][9] = 58,
[1][0][0][0][RTW89_UK][9] = 58,
[1][0][0][0][RTW89_FCC][10] = 30,
@@ -15343,7 +15372,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_CHILE][10] = 30,
[1][0][0][0][RTW89_UKRAINE][10] = 58,
[1][0][0][0][RTW89_MEXICO][10] = 30,
- [1][0][0][0][RTW89_CN][10] = 58,
+ [1][0][0][0][RTW89_CN][10] = 56,
[1][0][0][0][RTW89_QATAR][10] = 58,
[1][0][0][0][RTW89_UK][10] = 58,
[1][0][0][0][RTW89_FCC][11] = 127,
@@ -15415,7 +15444,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_CHILE][2] = 50,
[1][1][0][0][RTW89_UKRAINE][2] = 46,
[1][1][0][0][RTW89_MEXICO][2] = 46,
- [1][1][0][0][RTW89_CN][2] = 46,
+ [1][1][0][0][RTW89_CN][2] = 44,
[1][1][0][0][RTW89_QATAR][2] = 46,
[1][1][0][0][RTW89_UK][2] = 46,
[1][1][0][0][RTW89_FCC][3] = 46,
@@ -15427,7 +15456,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_CHILE][3] = 50,
[1][1][0][0][RTW89_UKRAINE][3] = 46,
[1][1][0][0][RTW89_MEXICO][3] = 46,
- [1][1][0][0][RTW89_CN][3] = 46,
+ [1][1][0][0][RTW89_CN][3] = 44,
[1][1][0][0][RTW89_QATAR][3] = 46,
[1][1][0][0][RTW89_UK][3] = 46,
[1][1][0][0][RTW89_FCC][4] = 46,
@@ -15439,7 +15468,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_CHILE][4] = 50,
[1][1][0][0][RTW89_UKRAINE][4] = 46,
[1][1][0][0][RTW89_MEXICO][4] = 46,
- [1][1][0][0][RTW89_CN][4] = 46,
+ [1][1][0][0][RTW89_CN][4] = 44,
[1][1][0][0][RTW89_QATAR][4] = 46,
[1][1][0][0][RTW89_UK][4] = 46,
[1][1][0][0][RTW89_FCC][5] = 62,
@@ -15451,7 +15480,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_CHILE][5] = 50,
[1][1][0][0][RTW89_UKRAINE][5] = 46,
[1][1][0][0][RTW89_MEXICO][5] = 62,
- [1][1][0][0][RTW89_CN][5] = 46,
+ [1][1][0][0][RTW89_CN][5] = 44,
[1][1][0][0][RTW89_QATAR][5] = 46,
[1][1][0][0][RTW89_UK][5] = 46,
[1][1][0][0][RTW89_FCC][6] = 34,
@@ -15463,7 +15492,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_CHILE][6] = 50,
[1][1][0][0][RTW89_UKRAINE][6] = 46,
[1][1][0][0][RTW89_MEXICO][6] = 34,
- [1][1][0][0][RTW89_CN][6] = 46,
+ [1][1][0][0][RTW89_CN][6] = 44,
[1][1][0][0][RTW89_QATAR][6] = 46,
[1][1][0][0][RTW89_UK][6] = 46,
[1][1][0][0][RTW89_FCC][7] = 34,
@@ -15475,7 +15504,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_CHILE][7] = 50,
[1][1][0][0][RTW89_UKRAINE][7] = 46,
[1][1][0][0][RTW89_MEXICO][7] = 34,
- [1][1][0][0][RTW89_CN][7] = 46,
+ [1][1][0][0][RTW89_CN][7] = 44,
[1][1][0][0][RTW89_QATAR][7] = 46,
[1][1][0][0][RTW89_UK][7] = 46,
[1][1][0][0][RTW89_FCC][8] = 34,
@@ -15487,7 +15516,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_CHILE][8] = 50,
[1][1][0][0][RTW89_UKRAINE][8] = 46,
[1][1][0][0][RTW89_MEXICO][8] = 34,
- [1][1][0][0][RTW89_CN][8] = 46,
+ [1][1][0][0][RTW89_CN][8] = 44,
[1][1][0][0][RTW89_QATAR][8] = 46,
[1][1][0][0][RTW89_UK][8] = 46,
[1][1][0][0][RTW89_FCC][9] = 30,
@@ -15499,7 +15528,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_CHILE][9] = 30,
[1][1][0][0][RTW89_UKRAINE][9] = 46,
[1][1][0][0][RTW89_MEXICO][9] = 30,
- [1][1][0][0][RTW89_CN][9] = 46,
+ [1][1][0][0][RTW89_CN][9] = 44,
[1][1][0][0][RTW89_QATAR][9] = 46,
[1][1][0][0][RTW89_UK][9] = 46,
[1][1][0][0][RTW89_FCC][10] = 30,
@@ -15511,7 +15540,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_CHILE][10] = 30,
[1][1][0][0][RTW89_UKRAINE][10] = 46,
[1][1][0][0][RTW89_MEXICO][10] = 30,
- [1][1][0][0][RTW89_CN][10] = 46,
+ [1][1][0][0][RTW89_CN][10] = 44,
[1][1][0][0][RTW89_QATAR][10] = 46,
[1][1][0][0][RTW89_UK][10] = 46,
[1][1][0][0][RTW89_FCC][11] = 127,
@@ -16519,7 +16548,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_CHILE][10] = 66,
[1][0][2][0][RTW89_UKRAINE][10] = 58,
[1][0][2][0][RTW89_MEXICO][10] = 66,
- [1][0][2][0][RTW89_CN][10] = 58,
+ [1][0][2][0][RTW89_CN][10] = 40,
[1][0][2][0][RTW89_QATAR][10] = 58,
[1][0][2][0][RTW89_UK][10] = 58,
[1][0][2][0][RTW89_FCC][11] = 127,
@@ -16687,7 +16716,7 @@ const s8 rtw89_8852b_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_CHILE][10] = 38,
[1][1][2][0][RTW89_UKRAINE][10] = 46,
[1][1][2][0][RTW89_MEXICO][10] = 38,
- [1][1][2][0][RTW89_CN][10] = 46,
+ [1][1][2][0][RTW89_CN][10] = 40,
[1][1][2][0][RTW89_QATAR][10] = 46,
[1][1][2][0][RTW89_UK][10] = 46,
[1][1][2][0][RTW89_FCC][11] = 127,
@@ -16907,7 +16936,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_WW][8] = 52,
[0][0][1][0][RTW89_WW][10] = 52,
[0][0][1][0][RTW89_WW][12] = 52,
- [0][0][1][0][RTW89_WW][14] = 52,
+ [0][0][1][0][RTW89_WW][14] = 1,
[0][0][1][0][RTW89_WW][15] = 52,
[0][0][1][0][RTW89_WW][17] = 52,
[0][0][1][0][RTW89_WW][19] = 52,
@@ -16928,7 +16957,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_WW][48] = 78,
[0][0][1][0][RTW89_WW][50] = 78,
[0][0][1][0][RTW89_WW][52] = 78,
- [0][1][1][0][RTW89_WW][0] = 30,
+ [0][1][1][0][RTW89_WW][0] = 1,
[0][1][1][0][RTW89_WW][2] = 32,
[0][1][1][0][RTW89_WW][4] = 30,
[0][1][1][0][RTW89_WW][6] = 30,
@@ -17198,7 +17227,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MEXICO][14] = 78,
[0][0][1][0][RTW89_CN][14] = 58,
[0][0][1][0][RTW89_QATAR][14] = 58,
- [0][0][1][0][RTW89_UK][14] = 58,
+ [0][0][1][0][RTW89_UK][14] = 1,
[0][0][1][0][RTW89_FCC][15] = 76,
[0][0][1][0][RTW89_ETSI][15] = 58,
[0][0][1][0][RTW89_MKK][15] = 76,
@@ -17352,7 +17381,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_CHILE][38] = 68,
[0][0][1][0][RTW89_UKRAINE][38] = 28,
[0][0][1][0][RTW89_MEXICO][38] = 78,
- [0][0][1][0][RTW89_CN][38] = 76,
+ [0][0][1][0][RTW89_CN][38] = 62,
[0][0][1][0][RTW89_QATAR][38] = 28,
[0][0][1][0][RTW89_UK][38] = 58,
[0][0][1][0][RTW89_FCC][40] = 78,
@@ -17364,7 +17393,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_CHILE][40] = 68,
[0][0][1][0][RTW89_UKRAINE][40] = 28,
[0][0][1][0][RTW89_MEXICO][40] = 78,
- [0][0][1][0][RTW89_CN][40] = 76,
+ [0][0][1][0][RTW89_CN][40] = 62,
[0][0][1][0][RTW89_QATAR][40] = 28,
[0][0][1][0][RTW89_UK][40] = 58,
[0][0][1][0][RTW89_FCC][42] = 78,
@@ -17376,7 +17405,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_CHILE][42] = 66,
[0][0][1][0][RTW89_UKRAINE][42] = 28,
[0][0][1][0][RTW89_MEXICO][42] = 78,
- [0][0][1][0][RTW89_CN][42] = 76,
+ [0][0][1][0][RTW89_CN][42] = 62,
[0][0][1][0][RTW89_QATAR][42] = 28,
[0][0][1][0][RTW89_UK][42] = 58,
[0][0][1][0][RTW89_FCC][44] = 78,
@@ -17388,7 +17417,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_CHILE][44] = 68,
[0][0][1][0][RTW89_UKRAINE][44] = 28,
[0][0][1][0][RTW89_MEXICO][44] = 78,
- [0][0][1][0][RTW89_CN][44] = 76,
+ [0][0][1][0][RTW89_CN][44] = 62,
[0][0][1][0][RTW89_QATAR][44] = 28,
[0][0][1][0][RTW89_UK][44] = 58,
[0][0][1][0][RTW89_FCC][46] = 78,
@@ -17400,7 +17429,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_CHILE][46] = 68,
[0][0][1][0][RTW89_UKRAINE][46] = 28,
[0][0][1][0][RTW89_MEXICO][46] = 78,
- [0][0][1][0][RTW89_CN][46] = 76,
+ [0][0][1][0][RTW89_CN][46] = 62,
[0][0][1][0][RTW89_QATAR][46] = 28,
[0][0][1][0][RTW89_UK][46] = 58,
[0][0][1][0][RTW89_FCC][48] = 78,
@@ -17450,7 +17479,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MEXICO][0] = 50,
[0][1][1][0][RTW89_CN][0] = 46,
[0][1][1][0][RTW89_QATAR][0] = 46,
- [0][1][1][0][RTW89_UK][0] = 46,
+ [0][1][1][0][RTW89_UK][0] = 1,
[0][1][1][0][RTW89_FCC][2] = 68,
[0][1][1][0][RTW89_ETSI][2] = 46,
[0][1][1][0][RTW89_MKK][2] = 48,
@@ -17688,7 +17717,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_CHILE][38] = 48,
[0][1][1][0][RTW89_UKRAINE][38] = 16,
[0][1][1][0][RTW89_MEXICO][38] = 78,
- [0][1][1][0][RTW89_CN][38] = 76,
+ [0][1][1][0][RTW89_CN][38] = 62,
[0][1][1][0][RTW89_QATAR][38] = 16,
[0][1][1][0][RTW89_UK][38] = 46,
[0][1][1][0][RTW89_FCC][40] = 78,
@@ -17700,7 +17729,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_CHILE][40] = 48,
[0][1][1][0][RTW89_UKRAINE][40] = 16,
[0][1][1][0][RTW89_MEXICO][40] = 78,
- [0][1][1][0][RTW89_CN][40] = 76,
+ [0][1][1][0][RTW89_CN][40] = 62,
[0][1][1][0][RTW89_QATAR][40] = 16,
[0][1][1][0][RTW89_UK][40] = 46,
[0][1][1][0][RTW89_FCC][42] = 78,
@@ -17712,7 +17741,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_CHILE][42] = 48,
[0][1][1][0][RTW89_UKRAINE][42] = 16,
[0][1][1][0][RTW89_MEXICO][42] = 78,
- [0][1][1][0][RTW89_CN][42] = 76,
+ [0][1][1][0][RTW89_CN][42] = 62,
[0][1][1][0][RTW89_QATAR][42] = 16,
[0][1][1][0][RTW89_UK][42] = 46,
[0][1][1][0][RTW89_FCC][44] = 78,
@@ -17724,7 +17753,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_CHILE][44] = 48,
[0][1][1][0][RTW89_UKRAINE][44] = 16,
[0][1][1][0][RTW89_MEXICO][44] = 78,
- [0][1][1][0][RTW89_CN][44] = 76,
+ [0][1][1][0][RTW89_CN][44] = 62,
[0][1][1][0][RTW89_QATAR][44] = 16,
[0][1][1][0][RTW89_UK][44] = 46,
[0][1][1][0][RTW89_FCC][46] = 78,
@@ -17736,7 +17765,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_CHILE][46] = 48,
[0][1][1][0][RTW89_UKRAINE][46] = 16,
[0][1][1][0][RTW89_MEXICO][46] = 78,
- [0][1][1][0][RTW89_CN][46] = 76,
+ [0][1][1][0][RTW89_CN][46] = 62,
[0][1][1][0][RTW89_QATAR][46] = 16,
[0][1][1][0][RTW89_UK][46] = 46,
[0][1][1][0][RTW89_FCC][48] = 56,
@@ -17784,7 +17813,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_CHILE][0] = 42,
[0][0][2][0][RTW89_UKRAINE][0] = 52,
[0][0][2][0][RTW89_MEXICO][0] = 62,
- [0][0][2][0][RTW89_CN][0] = 60,
+ [0][0][2][0][RTW89_CN][0] = 58,
[0][0][2][0][RTW89_QATAR][0] = 60,
[0][0][2][0][RTW89_UK][0] = 60,
[0][0][2][0][RTW89_FCC][2] = 78,
@@ -17796,7 +17825,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_CHILE][2] = 42,
[0][0][2][0][RTW89_UKRAINE][2] = 52,
[0][0][2][0][RTW89_MEXICO][2] = 62,
- [0][0][2][0][RTW89_CN][2] = 60,
+ [0][0][2][0][RTW89_CN][2] = 58,
[0][0][2][0][RTW89_QATAR][2] = 60,
[0][0][2][0][RTW89_UK][2] = 60,
[0][0][2][0][RTW89_FCC][4] = 78,
@@ -17808,7 +17837,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_CHILE][4] = 42,
[0][0][2][0][RTW89_UKRAINE][4] = 52,
[0][0][2][0][RTW89_MEXICO][4] = 62,
- [0][0][2][0][RTW89_CN][4] = 60,
+ [0][0][2][0][RTW89_CN][4] = 58,
[0][0][2][0][RTW89_QATAR][4] = 60,
[0][0][2][0][RTW89_UK][4] = 60,
[0][0][2][0][RTW89_FCC][6] = 78,
@@ -17820,7 +17849,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_CHILE][6] = 42,
[0][0][2][0][RTW89_UKRAINE][6] = 52,
[0][0][2][0][RTW89_MEXICO][6] = 62,
- [0][0][2][0][RTW89_CN][6] = 60,
+ [0][0][2][0][RTW89_CN][6] = 58,
[0][0][2][0][RTW89_QATAR][6] = 60,
[0][0][2][0][RTW89_UK][6] = 60,
[0][0][2][0][RTW89_FCC][8] = 78,
@@ -18024,7 +18053,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_CHILE][38] = 64,
[0][0][2][0][RTW89_UKRAINE][38] = 28,
[0][0][2][0][RTW89_MEXICO][38] = 78,
- [0][0][2][0][RTW89_CN][38] = 76,
+ [0][0][2][0][RTW89_CN][38] = 62,
[0][0][2][0][RTW89_QATAR][38] = 28,
[0][0][2][0][RTW89_UK][38] = 60,
[0][0][2][0][RTW89_FCC][40] = 78,
@@ -18036,7 +18065,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_CHILE][40] = 64,
[0][0][2][0][RTW89_UKRAINE][40] = 28,
[0][0][2][0][RTW89_MEXICO][40] = 78,
- [0][0][2][0][RTW89_CN][40] = 76,
+ [0][0][2][0][RTW89_CN][40] = 62,
[0][0][2][0][RTW89_QATAR][40] = 28,
[0][0][2][0][RTW89_UK][40] = 60,
[0][0][2][0][RTW89_FCC][42] = 78,
@@ -18048,7 +18077,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_CHILE][42] = 64,
[0][0][2][0][RTW89_UKRAINE][42] = 28,
[0][0][2][0][RTW89_MEXICO][42] = 78,
- [0][0][2][0][RTW89_CN][42] = 76,
+ [0][0][2][0][RTW89_CN][42] = 62,
[0][0][2][0][RTW89_QATAR][42] = 28,
[0][0][2][0][RTW89_UK][42] = 60,
[0][0][2][0][RTW89_FCC][44] = 78,
@@ -18060,7 +18089,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_CHILE][44] = 66,
[0][0][2][0][RTW89_UKRAINE][44] = 28,
[0][0][2][0][RTW89_MEXICO][44] = 78,
- [0][0][2][0][RTW89_CN][44] = 76,
+ [0][0][2][0][RTW89_CN][44] = 62,
[0][0][2][0][RTW89_QATAR][44] = 28,
[0][0][2][0][RTW89_UK][44] = 60,
[0][0][2][0][RTW89_FCC][46] = 78,
@@ -18072,7 +18101,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_CHILE][46] = 66,
[0][0][2][0][RTW89_UKRAINE][46] = 28,
[0][0][2][0][RTW89_MEXICO][46] = 78,
- [0][0][2][0][RTW89_CN][46] = 76,
+ [0][0][2][0][RTW89_CN][46] = 62,
[0][0][2][0][RTW89_QATAR][46] = 28,
[0][0][2][0][RTW89_UK][46] = 60,
[0][0][2][0][RTW89_FCC][48] = 78,
@@ -18120,7 +18149,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_CHILE][0] = 30,
[0][1][2][0][RTW89_UKRAINE][0] = 40,
[0][1][2][0][RTW89_MEXICO][0] = 50,
- [0][1][2][0][RTW89_CN][0] = 48,
+ [0][1][2][0][RTW89_CN][0] = 46,
[0][1][2][0][RTW89_QATAR][0] = 48,
[0][1][2][0][RTW89_UK][0] = 48,
[0][1][2][0][RTW89_FCC][2] = 70,
@@ -18132,7 +18161,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_CHILE][2] = 30,
[0][1][2][0][RTW89_UKRAINE][2] = 40,
[0][1][2][0][RTW89_MEXICO][2] = 50,
- [0][1][2][0][RTW89_CN][2] = 48,
+ [0][1][2][0][RTW89_CN][2] = 46,
[0][1][2][0][RTW89_QATAR][2] = 48,
[0][1][2][0][RTW89_UK][2] = 48,
[0][1][2][0][RTW89_FCC][4] = 70,
@@ -18144,7 +18173,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_CHILE][4] = 30,
[0][1][2][0][RTW89_UKRAINE][4] = 40,
[0][1][2][0][RTW89_MEXICO][4] = 50,
- [0][1][2][0][RTW89_CN][4] = 48,
+ [0][1][2][0][RTW89_CN][4] = 46,
[0][1][2][0][RTW89_QATAR][4] = 48,
[0][1][2][0][RTW89_UK][4] = 48,
[0][1][2][0][RTW89_FCC][6] = 70,
@@ -18156,7 +18185,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_CHILE][6] = 30,
[0][1][2][0][RTW89_UKRAINE][6] = 40,
[0][1][2][0][RTW89_MEXICO][6] = 50,
- [0][1][2][0][RTW89_CN][6] = 48,
+ [0][1][2][0][RTW89_CN][6] = 46,
[0][1][2][0][RTW89_QATAR][6] = 48,
[0][1][2][0][RTW89_UK][6] = 48,
[0][1][2][0][RTW89_FCC][8] = 70,
@@ -18360,7 +18389,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_CHILE][38] = 50,
[0][1][2][0][RTW89_UKRAINE][38] = 16,
[0][1][2][0][RTW89_MEXICO][38] = 78,
- [0][1][2][0][RTW89_CN][38] = 76,
+ [0][1][2][0][RTW89_CN][38] = 62,
[0][1][2][0][RTW89_QATAR][38] = 16,
[0][1][2][0][RTW89_UK][38] = 48,
[0][1][2][0][RTW89_FCC][40] = 78,
@@ -18372,7 +18401,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_CHILE][40] = 50,
[0][1][2][0][RTW89_UKRAINE][40] = 16,
[0][1][2][0][RTW89_MEXICO][40] = 78,
- [0][1][2][0][RTW89_CN][40] = 76,
+ [0][1][2][0][RTW89_CN][40] = 62,
[0][1][2][0][RTW89_QATAR][40] = 16,
[0][1][2][0][RTW89_UK][40] = 48,
[0][1][2][0][RTW89_FCC][42] = 78,
@@ -18384,7 +18413,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_CHILE][42] = 52,
[0][1][2][0][RTW89_UKRAINE][42] = 16,
[0][1][2][0][RTW89_MEXICO][42] = 78,
- [0][1][2][0][RTW89_CN][42] = 76,
+ [0][1][2][0][RTW89_CN][42] = 62,
[0][1][2][0][RTW89_QATAR][42] = 16,
[0][1][2][0][RTW89_UK][42] = 48,
[0][1][2][0][RTW89_FCC][44] = 78,
@@ -18396,7 +18425,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_CHILE][44] = 52,
[0][1][2][0][RTW89_UKRAINE][44] = 16,
[0][1][2][0][RTW89_MEXICO][44] = 78,
- [0][1][2][0][RTW89_CN][44] = 76,
+ [0][1][2][0][RTW89_CN][44] = 62,
[0][1][2][0][RTW89_QATAR][44] = 16,
[0][1][2][0][RTW89_UK][44] = 48,
[0][1][2][0][RTW89_FCC][46] = 78,
@@ -18408,7 +18437,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_CHILE][46] = 52,
[0][1][2][0][RTW89_UKRAINE][46] = 16,
[0][1][2][0][RTW89_MEXICO][46] = 78,
- [0][1][2][0][RTW89_CN][46] = 76,
+ [0][1][2][0][RTW89_CN][46] = 62,
[0][1][2][0][RTW89_QATAR][46] = 16,
[0][1][2][0][RTW89_UK][46] = 48,
[0][1][2][0][RTW89_FCC][48] = 58,
@@ -18456,7 +18485,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_CHILE][0] = 14,
[0][1][2][1][RTW89_UKRAINE][0] = 28,
[0][1][2][1][RTW89_MEXICO][0] = 50,
- [0][1][2][1][RTW89_CN][0] = 36,
+ [0][1][2][1][RTW89_CN][0] = 34,
[0][1][2][1][RTW89_QATAR][0] = 36,
[0][1][2][1][RTW89_UK][0] = 36,
[0][1][2][1][RTW89_FCC][2] = 68,
@@ -18468,7 +18497,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_CHILE][2] = 14,
[0][1][2][1][RTW89_UKRAINE][2] = 28,
[0][1][2][1][RTW89_MEXICO][2] = 50,
- [0][1][2][1][RTW89_CN][2] = 36,
+ [0][1][2][1][RTW89_CN][2] = 34,
[0][1][2][1][RTW89_QATAR][2] = 36,
[0][1][2][1][RTW89_UK][2] = 36,
[0][1][2][1][RTW89_FCC][4] = 68,
@@ -18480,7 +18509,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_CHILE][4] = 14,
[0][1][2][1][RTW89_UKRAINE][4] = 28,
[0][1][2][1][RTW89_MEXICO][4] = 50,
- [0][1][2][1][RTW89_CN][4] = 36,
+ [0][1][2][1][RTW89_CN][4] = 34,
[0][1][2][1][RTW89_QATAR][4] = 36,
[0][1][2][1][RTW89_UK][4] = 36,
[0][1][2][1][RTW89_FCC][6] = 68,
@@ -18492,7 +18521,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_CHILE][6] = 14,
[0][1][2][1][RTW89_UKRAINE][6] = 28,
[0][1][2][1][RTW89_MEXICO][6] = 50,
- [0][1][2][1][RTW89_CN][6] = 36,
+ [0][1][2][1][RTW89_CN][6] = 34,
[0][1][2][1][RTW89_QATAR][6] = 36,
[0][1][2][1][RTW89_UK][6] = 36,
[0][1][2][1][RTW89_FCC][8] = 68,
@@ -18696,7 +18725,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_CHILE][38] = 36,
[0][1][2][1][RTW89_UKRAINE][38] = 4,
[0][1][2][1][RTW89_MEXICO][38] = 78,
- [0][1][2][1][RTW89_CN][38] = 72,
+ [0][1][2][1][RTW89_CN][38] = 62,
[0][1][2][1][RTW89_QATAR][38] = 4,
[0][1][2][1][RTW89_UK][38] = 36,
[0][1][2][1][RTW89_FCC][40] = 78,
@@ -18708,7 +18737,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_CHILE][40] = 36,
[0][1][2][1][RTW89_UKRAINE][40] = 4,
[0][1][2][1][RTW89_MEXICO][40] = 78,
- [0][1][2][1][RTW89_CN][40] = 72,
+ [0][1][2][1][RTW89_CN][40] = 62,
[0][1][2][1][RTW89_QATAR][40] = 4,
[0][1][2][1][RTW89_UK][40] = 36,
[0][1][2][1][RTW89_FCC][42] = 78,
@@ -18720,7 +18749,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_CHILE][42] = 36,
[0][1][2][1][RTW89_UKRAINE][42] = 4,
[0][1][2][1][RTW89_MEXICO][42] = 78,
- [0][1][2][1][RTW89_CN][42] = 72,
+ [0][1][2][1][RTW89_CN][42] = 62,
[0][1][2][1][RTW89_QATAR][42] = 4,
[0][1][2][1][RTW89_UK][42] = 36,
[0][1][2][1][RTW89_FCC][44] = 78,
@@ -18732,7 +18761,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_CHILE][44] = 36,
[0][1][2][1][RTW89_UKRAINE][44] = 4,
[0][1][2][1][RTW89_MEXICO][44] = 78,
- [0][1][2][1][RTW89_CN][44] = 76,
+ [0][1][2][1][RTW89_CN][44] = 62,
[0][1][2][1][RTW89_QATAR][44] = 4,
[0][1][2][1][RTW89_UK][44] = 36,
[0][1][2][1][RTW89_FCC][46] = 78,
@@ -18744,7 +18773,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_CHILE][46] = 36,
[0][1][2][1][RTW89_UKRAINE][46] = 4,
[0][1][2][1][RTW89_MEXICO][46] = 78,
- [0][1][2][1][RTW89_CN][46] = 76,
+ [0][1][2][1][RTW89_CN][46] = 62,
[0][1][2][1][RTW89_QATAR][46] = 4,
[0][1][2][1][RTW89_UK][46] = 36,
[0][1][2][1][RTW89_FCC][48] = 58,
@@ -18912,7 +18941,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_CHILE][39] = 64,
[1][0][2][0][RTW89_UKRAINE][39] = 28,
[1][0][2][0][RTW89_MEXICO][39] = 78,
- [1][0][2][0][RTW89_CN][39] = 70,
+ [1][0][2][0][RTW89_CN][39] = 56,
[1][0][2][0][RTW89_QATAR][39] = 28,
[1][0][2][0][RTW89_UK][39] = 64,
[1][0][2][0][RTW89_FCC][43] = 78,
@@ -18924,7 +18953,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_CHILE][43] = 64,
[1][0][2][0][RTW89_UKRAINE][43] = 28,
[1][0][2][0][RTW89_MEXICO][43] = 78,
- [1][0][2][0][RTW89_CN][43] = 74,
+ [1][0][2][0][RTW89_CN][43] = 62,
[1][0][2][0][RTW89_QATAR][43] = 28,
[1][0][2][0][RTW89_UK][43] = 62,
[1][0][2][0][RTW89_FCC][47] = 78,
@@ -19080,7 +19109,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_CHILE][39] = 52,
[1][1][2][0][RTW89_UKRAINE][39] = 16,
[1][1][2][0][RTW89_MEXICO][39] = 78,
- [1][1][2][0][RTW89_CN][39] = 70,
+ [1][1][2][0][RTW89_CN][39] = 56,
[1][1][2][0][RTW89_QATAR][39] = 16,
[1][1][2][0][RTW89_UK][39] = 52,
[1][1][2][0][RTW89_FCC][43] = 78,
@@ -19092,7 +19121,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_CHILE][43] = 52,
[1][1][2][0][RTW89_UKRAINE][43] = 16,
[1][1][2][0][RTW89_MEXICO][43] = 78,
- [1][1][2][0][RTW89_CN][43] = 74,
+ [1][1][2][0][RTW89_CN][43] = 62,
[1][1][2][0][RTW89_QATAR][43] = 16,
[1][1][2][0][RTW89_UK][43] = 52,
[1][1][2][0][RTW89_FCC][47] = 68,
@@ -19248,7 +19277,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_CHILE][39] = 36,
[1][1][2][1][RTW89_UKRAINE][39] = 4,
[1][1][2][1][RTW89_MEXICO][39] = 78,
- [1][1][2][1][RTW89_CN][39] = 70,
+ [1][1][2][1][RTW89_CN][39] = 58,
[1][1][2][1][RTW89_QATAR][39] = 4,
[1][1][2][1][RTW89_UK][39] = 40,
[1][1][2][1][RTW89_FCC][43] = 78,
@@ -19260,7 +19289,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_CHILE][43] = 36,
[1][1][2][1][RTW89_UKRAINE][43] = 4,
[1][1][2][1][RTW89_MEXICO][43] = 78,
- [1][1][2][1][RTW89_CN][43] = 74,
+ [1][1][2][1][RTW89_CN][43] = 62,
[1][1][2][1][RTW89_QATAR][43] = 4,
[1][1][2][1][RTW89_UK][43] = 40,
[1][1][2][1][RTW89_FCC][47] = 68,
@@ -19356,7 +19385,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_CHILE][41] = 64,
[2][0][2][0][RTW89_UKRAINE][41] = 28,
[2][0][2][0][RTW89_MEXICO][41] = 74,
- [2][0][2][0][RTW89_CN][41] = 70,
+ [2][0][2][0][RTW89_CN][41] = 48,
[2][0][2][0][RTW89_QATAR][41] = 28,
[2][0][2][0][RTW89_UK][41] = 64,
[2][0][2][0][RTW89_FCC][49] = 64,
@@ -19440,7 +19469,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_CHILE][41] = 50,
[2][1][2][0][RTW89_UKRAINE][41] = 16,
[2][1][2][0][RTW89_MEXICO][41] = 74,
- [2][1][2][0][RTW89_CN][41] = 70,
+ [2][1][2][0][RTW89_CN][41] = 48,
[2][1][2][0][RTW89_QATAR][41] = 16,
[2][1][2][0][RTW89_UK][41] = 52,
[2][1][2][0][RTW89_FCC][49] = 58,
@@ -19524,7 +19553,7 @@ const s8 rtw89_8852b_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_CHILE][41] = 36,
[2][1][2][1][RTW89_UKRAINE][41] = 4,
[2][1][2][1][RTW89_MEXICO][41] = 74,
- [2][1][2][1][RTW89_CN][41] = 70,
+ [2][1][2][1][RTW89_CN][41] = 46,
[2][1][2][1][RTW89_QATAR][41] = 4,
[2][1][2][1][RTW89_UK][41] = 38,
[2][1][2][1][RTW89_FCC][49] = 58,
@@ -20669,10 +20698,10 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_WW][48] = 32,
[0][0][RTW89_WW][50] = 32,
[0][0][RTW89_WW][52] = 32,
- [0][1][RTW89_WW][0] = 0,
+ [0][1][RTW89_WW][0] = 1,
[0][1][RTW89_WW][2] = 4,
- [0][1][RTW89_WW][4] = 0,
- [0][1][RTW89_WW][6] = 0,
+ [0][1][RTW89_WW][4] = 1,
+ [0][1][RTW89_WW][6] = 1,
[0][1][RTW89_WW][8] = 12,
[0][1][RTW89_WW][10] = 12,
[0][1][RTW89_WW][12] = 12,
@@ -21148,7 +21177,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_FCC][0] = 34,
[0][1][RTW89_ETSI][0] = 12,
[0][1][RTW89_MKK][0] = 12,
- [0][1][RTW89_IC][0] = 0,
+ [0][1][RTW89_IC][0] = 1,
[0][1][RTW89_KCC][0] = 28,
[0][1][RTW89_ACMA][0] = 12,
[0][1][RTW89_CHILE][0] = 14,
@@ -21172,7 +21201,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_FCC][4] = 34,
[0][1][RTW89_ETSI][4] = 12,
[0][1][RTW89_MKK][4] = 14,
- [0][1][RTW89_IC][4] = 0,
+ [0][1][RTW89_IC][4] = 1,
[0][1][RTW89_KCC][4] = 28,
[0][1][RTW89_ACMA][4] = 12,
[0][1][RTW89_CHILE][4] = 12,
@@ -21184,7 +21213,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_FCC][6] = 34,
[0][1][RTW89_ETSI][6] = 12,
[0][1][RTW89_MKK][6] = 14,
- [0][1][RTW89_IC][6] = 0,
+ [0][1][RTW89_IC][6] = 1,
[0][1][RTW89_KCC][6] = 2,
[0][1][RTW89_ACMA][6] = 12,
[0][1][RTW89_CHILE][6] = 12,
@@ -21730,7 +21759,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_CHILE][38] = 64,
[1][0][RTW89_UKRAINE][38] = 28,
[1][0][RTW89_MEXICO][38] = 84,
- [1][0][RTW89_CN][38] = 74,
+ [1][0][RTW89_CN][38] = 62,
[1][0][RTW89_QATAR][38] = 28,
[1][0][RTW89_UK][38] = 38,
[1][0][RTW89_FCC][40] = 84,
@@ -21742,7 +21771,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_CHILE][40] = 64,
[1][0][RTW89_UKRAINE][40] = 28,
[1][0][RTW89_MEXICO][40] = 84,
- [1][0][RTW89_CN][40] = 74,
+ [1][0][RTW89_CN][40] = 62,
[1][0][RTW89_QATAR][40] = 28,
[1][0][RTW89_UK][40] = 38,
[1][0][RTW89_FCC][42] = 84,
@@ -21754,7 +21783,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_CHILE][42] = 64,
[1][0][RTW89_UKRAINE][42] = 28,
[1][0][RTW89_MEXICO][42] = 84,
- [1][0][RTW89_CN][42] = 74,
+ [1][0][RTW89_CN][42] = 62,
[1][0][RTW89_QATAR][42] = 28,
[1][0][RTW89_UK][42] = 38,
[1][0][RTW89_FCC][44] = 84,
@@ -21766,7 +21795,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_CHILE][44] = 64,
[1][0][RTW89_UKRAINE][44] = 28,
[1][0][RTW89_MEXICO][44] = 84,
- [1][0][RTW89_CN][44] = 74,
+ [1][0][RTW89_CN][44] = 62,
[1][0][RTW89_QATAR][44] = 28,
[1][0][RTW89_UK][44] = 38,
[1][0][RTW89_FCC][46] = 84,
@@ -21778,7 +21807,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_CHILE][46] = 64,
[1][0][RTW89_UKRAINE][46] = 28,
[1][0][RTW89_MEXICO][46] = 84,
- [1][0][RTW89_CN][46] = 74,
+ [1][0][RTW89_CN][46] = 62,
[1][0][RTW89_QATAR][46] = 28,
[1][0][RTW89_UK][46] = 38,
[1][0][RTW89_FCC][48] = 44,
@@ -22402,7 +22431,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_CHILE][38] = 64,
[2][0][RTW89_UKRAINE][38] = 28,
[2][0][RTW89_MEXICO][38] = 84,
- [2][0][RTW89_CN][38] = 76,
+ [2][0][RTW89_CN][38] = 62,
[2][0][RTW89_QATAR][38] = 28,
[2][0][RTW89_UK][38] = 50,
[2][0][RTW89_FCC][40] = 84,
@@ -22414,7 +22443,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_CHILE][40] = 64,
[2][0][RTW89_UKRAINE][40] = 28,
[2][0][RTW89_MEXICO][40] = 84,
- [2][0][RTW89_CN][40] = 76,
+ [2][0][RTW89_CN][40] = 62,
[2][0][RTW89_QATAR][40] = 28,
[2][0][RTW89_UK][40] = 50,
[2][0][RTW89_FCC][42] = 84,
@@ -22426,7 +22455,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_CHILE][42] = 66,
[2][0][RTW89_UKRAINE][42] = 28,
[2][0][RTW89_MEXICO][42] = 84,
- [2][0][RTW89_CN][42] = 76,
+ [2][0][RTW89_CN][42] = 62,
[2][0][RTW89_QATAR][42] = 28,
[2][0][RTW89_UK][42] = 50,
[2][0][RTW89_FCC][44] = 84,
@@ -22438,7 +22467,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_CHILE][44] = 64,
[2][0][RTW89_UKRAINE][44] = 28,
[2][0][RTW89_MEXICO][44] = 84,
- [2][0][RTW89_CN][44] = 76,
+ [2][0][RTW89_CN][44] = 62,
[2][0][RTW89_QATAR][44] = 28,
[2][0][RTW89_UK][44] = 50,
[2][0][RTW89_FCC][46] = 84,
@@ -22450,7 +22479,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_CHILE][46] = 64,
[2][0][RTW89_UKRAINE][46] = 28,
[2][0][RTW89_MEXICO][46] = 84,
- [2][0][RTW89_CN][46] = 76,
+ [2][0][RTW89_CN][46] = 62,
[2][0][RTW89_QATAR][46] = 28,
[2][0][RTW89_UK][46] = 50,
[2][0][RTW89_FCC][48] = 56,
@@ -22738,7 +22767,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_CHILE][38] = 58,
[2][1][RTW89_UKRAINE][38] = 16,
[2][1][RTW89_MEXICO][38] = 84,
- [2][1][RTW89_CN][38] = 64,
+ [2][1][RTW89_CN][38] = 62,
[2][1][RTW89_QATAR][38] = 16,
[2][1][RTW89_UK][38] = 38,
[2][1][RTW89_FCC][40] = 84,
@@ -22750,7 +22779,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_CHILE][40] = 58,
[2][1][RTW89_UKRAINE][40] = 16,
[2][1][RTW89_MEXICO][40] = 84,
- [2][1][RTW89_CN][40] = 64,
+ [2][1][RTW89_CN][40] = 62,
[2][1][RTW89_QATAR][40] = 16,
[2][1][RTW89_UK][40] = 38,
[2][1][RTW89_FCC][42] = 84,
@@ -22762,7 +22791,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_CHILE][42] = 58,
[2][1][RTW89_UKRAINE][42] = 16,
[2][1][RTW89_MEXICO][42] = 84,
- [2][1][RTW89_CN][42] = 64,
+ [2][1][RTW89_CN][42] = 62,
[2][1][RTW89_QATAR][42] = 16,
[2][1][RTW89_UK][42] = 38,
[2][1][RTW89_FCC][44] = 84,
@@ -22774,7 +22803,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_CHILE][44] = 58,
[2][1][RTW89_UKRAINE][44] = 16,
[2][1][RTW89_MEXICO][44] = 84,
- [2][1][RTW89_CN][44] = 64,
+ [2][1][RTW89_CN][44] = 62,
[2][1][RTW89_QATAR][44] = 16,
[2][1][RTW89_UK][44] = 38,
[2][1][RTW89_FCC][46] = 84,
@@ -22786,7 +22815,7 @@ const s8 rtw89_8852b_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_CHILE][46] = 58,
[2][1][RTW89_UKRAINE][46] = 16,
[2][1][RTW89_MEXICO][46] = 84,
- [2][1][RTW89_CN][46] = 64,
+ [2][1][RTW89_CN][46] = 62,
[2][1][RTW89_QATAR][46] = 16,
[2][1][RTW89_UK][46] = 38,
[2][1][RTW89_FCC][48] = 44,
@@ -22859,6 +22888,7 @@ const struct rtw89_phy_table rtw89_8852b_phy_nctl_table = {
.rf_path = 0, /* don't care */
};
+static
const struct rtw89_txpwr_table rtw89_8852b_byr_table = {
.data = rtw89_8852b_txpwr_byrate,
.size = ARRAY_SIZE(rtw89_8852b_txpwr_byrate),
@@ -22881,6 +22911,7 @@ const struct rtw89_txpwr_track_cfg rtw89_8852b_trk_cfg = {
};
const struct rtw89_rfe_parms rtw89_8852b_dflt_parms = {
+ .byr_tbl = &rtw89_8852b_byr_table,
.rule_2ghz = {
.lmt = &rtw89_8852b_txpwr_lmt_2g,
.lmt_ru = &rtw89_8852b_txpwr_lmt_ru_2g,
@@ -22889,4 +22920,8 @@ const struct rtw89_rfe_parms rtw89_8852b_dflt_parms = {
.lmt = &rtw89_8852b_txpwr_lmt_5g,
.lmt_ru = &rtw89_8852b_txpwr_lmt_ru_5g,
},
+ .tx_shape = {
+ .lmt = &rtw89_8852b_tx_shape_lmt,
+ .lmt_ru = &rtw89_8852b_tx_shape_lmt_ru,
+ },
};
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b_table.h b/drivers/net/wireless/realtek/rtw89/rtw8852b_table.h
index 7ef217629f46..da6c90e2ba93 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852b_table.h
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852b_table.h
@@ -12,10 +12,7 @@ extern const struct rtw89_phy_table rtw89_8852b_phy_bb_gain_table;
extern const struct rtw89_phy_table rtw89_8852b_phy_radioa_table;
extern const struct rtw89_phy_table rtw89_8852b_phy_radiob_table;
extern const struct rtw89_phy_table rtw89_8852b_phy_nctl_table;
-extern const struct rtw89_txpwr_table rtw89_8852b_byr_table;
extern const struct rtw89_txpwr_track_cfg rtw89_8852b_trk_cfg;
-extern const u8 rtw89_8852b_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
- [RTW89_REGD_NUM];
extern const struct rtw89_rfe_parms rtw89_8852b_dflt_parms;
#endif
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c.c b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
index 1e16cc0a05dc..3b7d8ab39bab 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852c.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
@@ -2,6 +2,7 @@
/* Copyright(c) 2019-2022 Realtek Corporation
*/
+#include "chan.h"
#include "coex.h"
#include "debug.h"
#include "fw.h"
@@ -166,7 +167,9 @@ static const struct rtw89_dig_regs rtw8852c_dig_regs = {
B_PATH1_S20_FOLLOW_BY_PAGCUGC_EN_MSK},
};
-static void rtw8852c_ctrl_btg(struct rtw89_dev *rtwdev, bool btg);
+static void rtw8852c_ctrl_btg_bt_rx(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx);
+
static void rtw8852c_ctrl_tx_path_tmac(struct rtw89_dev *rtwdev, u8 tx_path,
enum rtw89_mac_idx mac_idx);
@@ -1650,7 +1653,8 @@ static void rtw8852c_set_channel_bb(struct rtw89_dev *rtwdev,
}
rtw8852c_spur_elimination(rtwdev, chan, pri_ch_idx, phy_idx);
- rtw8852c_ctrl_btg(rtwdev, chan->band_type == RTW89_BAND_2G);
+ rtw8852c_ctrl_btg_bt_rx(rtwdev, chan->band_type == RTW89_BAND_2G,
+ RTW89_PHY_0);
rtw8852c_5m_mask(rtwdev, chan, phy_idx);
if (chan->band_width == RTW89_CHANNEL_WIDTH_160 &&
@@ -1776,6 +1780,7 @@ static void rtw8852c_rfk_init(struct rtw89_dev *rtwdev)
rtwdev->is_tssi_mode[RF_PATH_B] = false;
memset(rfk_mcc, 0, sizeof(*rfk_mcc));
rtw8852c_lck_init(rtwdev);
+ rtw8852c_dpk_init(rtwdev);
rtw8852c_rck(rtwdev);
rtw8852c_dack(rtwdev);
@@ -1964,10 +1969,11 @@ static void rtw8852c_set_tx_shape(struct rtw89_dev *rtwdev,
const struct rtw89_chan *chan,
enum rtw89_phy_idx phy_idx)
{
+ const struct rtw89_rfe_parms *rfe_parms = rtwdev->rfe_parms;
u8 band = chan->band_type;
u8 regd = rtw89_regd_get(rtwdev, band);
- u8 tx_shape_cck = rtw89_8852c_tx_shape[band][RTW89_RS_CCK][regd];
- u8 tx_shape_ofdm = rtw89_8852c_tx_shape[band][RTW89_RS_OFDM][regd];
+ u8 tx_shape_cck = (*rfe_parms->tx_shape.lmt)[band][RTW89_RS_CCK][regd];
+ u8 tx_shape_ofdm = (*rfe_parms->tx_shape.lmt)[band][RTW89_RS_OFDM][regd];
if (band == RTW89_BAND_2G)
rtw8852c_bb_set_tx_shape_dfir(rtwdev, chan, tx_shape_cck, phy_idx);
@@ -1975,6 +1981,11 @@ static void rtw8852c_set_tx_shape(struct rtw89_dev *rtwdev,
rtw89_phy_tssi_ctrl_set_bandedge_cfg(rtwdev,
(enum rtw89_mac_idx)phy_idx,
tx_shape_ofdm);
+
+ rtw89_phy_write32_set(rtwdev, R_P0_DAC_COMP_POST_DPD_EN,
+ B_P0_DAC_COMP_POST_DPD_EN);
+ rtw89_phy_write32_set(rtwdev, R_P1_DAC_COMP_POST_DPD_EN,
+ B_P1_DAC_COMP_POST_DPD_EN);
}
static void rtw8852c_set_txpwr(struct rtw89_dev *rtwdev,
@@ -2142,7 +2153,8 @@ static void rtw8852c_bb_cfg_rx_path(struct rtw89_dev *rtwdev, u8 rx_path)
1);
rtw89_phy_write32_mask(rtwdev, R_RXHE, B_RXHETB_MAX_NSS,
1);
- rtw8852c_ctrl_btg(rtwdev, band == RTW89_BAND_2G);
+ rtw8852c_ctrl_btg_bt_rx(rtwdev, band == RTW89_BAND_2G,
+ RTW89_PHY_0);
rtw89_phy_write32_mask(rtwdev, R_P0_TXPW_RSTB,
rst_mask0, 1);
rtw89_phy_write32_mask(rtwdev, R_P0_TXPW_RSTB,
@@ -2218,9 +2230,10 @@ static void rtw8852c_ctrl_tx_path_tmac(struct rtw89_dev *rtwdev, u8 tx_path,
}
}
-static void rtw8852c_bb_ctrl_btc_preagc(struct rtw89_dev *rtwdev, bool bt_en)
+static void rtw8852c_ctrl_nbtg_bt_tx(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx)
{
- if (bt_en) {
+ if (en) {
rtw89_phy_write32_mask(rtwdev, R_PATH0_FRC_FIR_TYPE_V1,
B_PATH0_FRC_FIR_TYPE_MSK_V1, 0x3);
rtw89_phy_write32_mask(rtwdev, R_PATH1_FRC_FIR_TYPE_V1,
@@ -2338,9 +2351,10 @@ static void rtw8852c_btc_set_rfe(struct rtw89_dev *rtwdev)
}
}
-static void rtw8852c_ctrl_btg(struct rtw89_dev *rtwdev, bool btg)
+static void rtw8852c_ctrl_btg_bt_rx(struct rtw89_dev *rtwdev, bool en,
+ enum rtw89_phy_idx phy_idx)
{
- if (btg) {
+ if (en) {
rtw89_phy_write32_mask(rtwdev, R_PATH0_BT_SHARE_V1,
B_PATH0_BT_SHARE_V1, 0x1);
rtw89_phy_write32_mask(rtwdev, R_PATH0_BTG_PATH_V1,
@@ -2650,15 +2664,15 @@ static void rtw8852c_btc_set_wl_rx_gain(struct rtw89_dev *rtwdev, u32 level)
switch (level) {
case 0: /* original */
default:
- rtw8852c_bb_ctrl_btc_preagc(rtwdev, false);
+ rtw8852c_ctrl_nbtg_bt_tx(rtwdev, false, RTW89_PHY_0);
btc->dm.wl_lna2 = 0;
break;
case 1: /* for FDD free-run */
- rtw8852c_bb_ctrl_btc_preagc(rtwdev, true);
+ rtw8852c_ctrl_nbtg_bt_tx(rtwdev, true, RTW89_PHY_0);
btc->dm.wl_lna2 = 0;
break;
case 2: /* for BTG Co-Rx*/
- rtw8852c_bb_ctrl_btc_preagc(rtwdev, false);
+ rtw8852c_ctrl_nbtg_bt_tx(rtwdev, false, RTW89_PHY_0);
btc->dm.wl_lna2 = 1;
break;
}
@@ -2743,6 +2757,10 @@ static int rtw8852c_mac_disable_bb_rf(struct rtw89_dev *rtwdev)
return 0;
}
+static const struct rtw89_chanctx_listener rtw8852c_chanctx_listener = {
+ .callbacks[RTW89_CHANCTX_CALLBACK_RFK] = rtw8852c_rfk_chanctx_cb,
+};
+
#ifdef CONFIG_PM
static const struct wiphy_wowlan_support rtw_wowlan_stub_8852c = {
.flags = WIPHY_WOWLAN_MAGIC_PKT | WIPHY_WOWLAN_DISCONNECT,
@@ -2755,6 +2773,7 @@ static const struct wiphy_wowlan_support rtw_wowlan_stub_8852c = {
static const struct rtw89_chip_ops rtw8852c_chip_ops = {
.enable_bb_rf = rtw8852c_mac_enable_bb_rf,
.disable_bb_rf = rtw8852c_mac_disable_bb_rf,
+ .bb_preinit = NULL,
.bb_reset = rtw8852c_bb_reset,
.bb_sethw = rtw8852c_bb_sethw,
.read_rf = rtw89_phy_read_rf_v1,
@@ -2775,9 +2794,9 @@ static const struct rtw89_chip_ops rtw8852c_chip_ops = {
.set_txpwr_ctrl = rtw8852c_set_txpwr_ctrl,
.init_txpwr_unit = rtw8852c_init_txpwr_unit,
.get_thermal = rtw8852c_get_thermal,
- .ctrl_btg = rtw8852c_ctrl_btg,
+ .ctrl_btg_bt_rx = rtw8852c_ctrl_btg_bt_rx,
.query_ppdu = rtw8852c_query_ppdu,
- .bb_ctrl_btc_preagc = rtw8852c_bb_ctrl_btc_preagc,
+ .ctrl_nbtg_bt_tx = rtw8852c_ctrl_nbtg_bt_tx,
.cfg_txrx_path = rtw8852c_bb_cfg_txrx_path,
.set_txpwr_ul_tb_offset = rtw8852c_set_txpwr_ul_tb_offset,
.pwr_on_func = rtw8852c_pwr_on_func,
@@ -2811,6 +2830,7 @@ const struct rtw89_chip_info rtw8852c_chip_info = {
.fw_basename = RTW8852C_FW_BASENAME,
.fw_format_max = RTW8852C_FW_FORMAT_MAX,
.try_ce_fw = false,
+ .bbmcu_nr = 0,
.needed_fw_elms = 0,
.fifo_size = 458752,
.small_fifo_size = false,
@@ -2831,21 +2851,22 @@ const struct rtw89_chip_info rtw8852c_chip_info = {
&rtw89_8852c_phy_radioa_table,},
.nctl_table = &rtw89_8852c_phy_nctl_table,
.nctl_post_table = NULL,
- .byr_table = &rtw89_8852c_byr_table,
.dflt_parms = &rtw89_8852c_dflt_parms,
.rfe_parms_conf = NULL,
+ .chanctx_listener = &rtw8852c_chanctx_listener,
.txpwr_factor_rf = 2,
.txpwr_factor_mac = 1,
.dig_table = NULL,
.dig_regs = &rtw8852c_dig_regs,
.tssi_dbw_table = &rtw89_8852c_tssi_dbw_table,
- .support_chanctx_num = 1,
+ .support_chanctx_num = 2,
.support_bands = BIT(NL80211_BAND_2GHZ) |
BIT(NL80211_BAND_5GHZ) |
BIT(NL80211_BAND_6GHZ),
.support_bw160 = true,
.support_unii4 = true,
- .support_ul_tb_ctrl = false,
+ .ul_tb_waveform_ctrl = false,
+ .ul_tb_pwr_diff = true,
.hw_sec_hdr = true,
.rf_path_num = 2,
.tx_nss = 2,
@@ -2889,6 +2910,7 @@ const struct rtw89_chip_info rtw8852c_chip_info = {
.hci_func_en_addr = R_AX_HCI_FUNC_EN_V1,
.h2c_desc_size = sizeof(struct rtw89_rxdesc_short),
.txwd_body_size = sizeof(struct rtw89_txwd_body_v1),
+ .txwd_info_size = sizeof(struct rtw89_txwd_info),
.h2c_ctrl_reg = R_AX_H2CREG_CTRL_V1,
.h2c_counter_reg = {R_AX_UDM1 + 1, B_AX_UDM1_HALMAC_H2C_DEQ_CNT_MASK >> 8},
.h2c_regs = rtw8852c_h2c_regs,
@@ -2902,6 +2924,7 @@ const struct rtw89_chip_info rtw8852c_chip_info = {
.dcfo_comp_sft = 12,
.imr_info = &rtw8852c_imr_info,
.rrsr_cfgs = &rtw8852c_rrsr_cfgs,
+ .bss_clr_vld = {R_BSS_CLR_MAP, B_BSS_CLR_MAP_VLD0},
.bss_clr_map_reg = R_BSS_CLR_MAP,
.dma_ch_mask = 0,
.edcca_lvl_reg = R_SEG0R_EDCCA_LVL,
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c b/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c
index de7714f871d5..654e3e5507cb 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.c
@@ -2,6 +2,7 @@
/* Copyright(c) 2019-2022 Realtek Corporation
*/
+#include "chan.h"
#include "coex.h"
#include "debug.h"
#include "phy.h"
@@ -2893,18 +2894,37 @@ static void _tssi_set_sys(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
enum rtw89_rf_path path)
{
const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
+ enum rtw89_bandwidth bw = chan->band_width;
enum rtw89_band band = chan->band_type;
+ u32 clk = 0x0;
rtw89_rfk_parser(rtwdev, &rtw8852c_tssi_sys_defs_tbl);
- if (path == RF_PATH_A)
+ switch (bw) {
+ case RTW89_CHANNEL_WIDTH_80:
+ clk = 0x1;
+ break;
+ case RTW89_CHANNEL_WIDTH_80_80:
+ case RTW89_CHANNEL_WIDTH_160:
+ clk = 0x2;
+ break;
+ default:
+ break;
+ }
+
+ if (path == RF_PATH_A) {
+ rtw89_phy_write32_mask(rtwdev, R_P0_TSSI_ADC_CLK,
+ B_P0_TSSI_ADC_CLK, clk);
rtw89_rfk_parser_by_cond(rtwdev, band == RTW89_BAND_2G,
&rtw8852c_tssi_sys_defs_2g_a_tbl,
&rtw8852c_tssi_sys_defs_5g_a_tbl);
- else
+ } else {
+ rtw89_phy_write32_mask(rtwdev, R_P1_TSSI_ADC_CLK,
+ B_P1_TSSI_ADC_CLK, clk);
rtw89_rfk_parser_by_cond(rtwdev, band == RTW89_BAND_2G,
&rtw8852c_tssi_sys_defs_2g_b_tbl,
&rtw8852c_tssi_sys_defs_5g_b_tbl);
+ }
}
static void _tssi_ini_txpwr_ctrl_bb(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy,
@@ -4049,21 +4069,53 @@ void rtw8852c_set_channel_rf(struct rtw89_dev *rtwdev,
void rtw8852c_mcc_get_ch_info(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx)
{
- const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, RTW89_SUB_ENTITY_0);
struct rtw89_rfk_mcc_info *rfk_mcc = &rtwdev->rfk_mcc;
- u8 idx = rfk_mcc->table_idx;
- int i;
+ DECLARE_BITMAP(map, RTW89_IQK_CHS_NR) = {};
+ const struct rtw89_chan *chan;
+ enum rtw89_entity_mode mode;
+ u8 chan_idx;
+ u8 idx;
+ u8 i;
- for (i = 0; i < RTW89_IQK_CHS_NR; i++) {
- if (rfk_mcc->ch[idx] == 0)
- break;
- if (++idx >= RTW89_IQK_CHS_NR)
- idx = 0;
+ mode = rtw89_get_entity_mode(rtwdev);
+ switch (mode) {
+ case RTW89_ENTITY_MODE_MCC_PREPARE:
+ chan_idx = RTW89_SUB_ENTITY_1;
+ break;
+ default:
+ chan_idx = RTW89_SUB_ENTITY_0;
+ break;
+ }
+
+ for (i = 0; i <= chan_idx; i++) {
+ chan = rtw89_chan_get(rtwdev, i);
+
+ for (idx = 0; idx < RTW89_IQK_CHS_NR; idx++) {
+ if (rfk_mcc->ch[idx] == chan->channel &&
+ rfk_mcc->band[idx] == chan->band_type) {
+ if (i != chan_idx) {
+ set_bit(idx, map);
+ break;
+ }
+
+ goto bottom;
+ }
+ }
+ }
+
+ idx = find_first_zero_bit(map, RTW89_IQK_CHS_NR);
+ if (idx == RTW89_IQK_CHS_NR) {
+ rtw89_debug(rtwdev, RTW89_DBG_RFK,
+ "%s: no empty rfk table; force replace the first\n",
+ __func__);
+ idx = 0;
}
- rfk_mcc->table_idx = idx;
rfk_mcc->ch[idx] = chan->channel;
rfk_mcc->band[idx] = chan->band_type;
+
+bottom:
+ rfk_mcc->table_idx = idx;
}
void rtw8852c_rck(struct rtw89_dev *rtwdev)
@@ -4213,6 +4265,14 @@ trigger_rx_dck:
rtw89_btc_ntfy_wl_rfk(rtwdev, phy_map, BTC_WRFKT_RXDCK, BTC_WRFK_STOP);
}
+void rtw8852c_dpk_init(struct rtw89_dev *rtwdev)
+{
+ struct rtw89_dpk_info *dpk = &rtwdev->dpk;
+
+ dpk->is_dpk_enable = true;
+ dpk->is_dpk_reload_en = false;
+}
+
void rtw8852c_dpk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx)
{
u32 tx_en;
@@ -4222,8 +4282,6 @@ void rtw8852c_dpk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx)
rtw89_chip_stop_sch_tx(rtwdev, phy_idx, &tx_en, RTW89_SCH_TX_SEL_ALL);
_wait_rx_mode(rtwdev, _kpath(rtwdev, phy_idx));
- rtwdev->dpk.is_dpk_enable = true;
- rtwdev->dpk.is_dpk_reload_en = false;
_dpk(rtwdev, phy_idx, false);
rtw89_chip_resume_sch_tx(rtwdev, phy_idx, tx_en);
@@ -4361,3 +4419,26 @@ void rtw8852c_wifi_scan_notify(struct rtw89_dev *rtwdev,
else
rtw8852c_tssi_default_txagc(rtwdev, phy_idx, false);
}
+
+void rtw8852c_rfk_chanctx_cb(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_state state)
+{
+ struct rtw89_dpk_info *dpk = &rtwdev->dpk;
+ u8 path;
+
+ switch (state) {
+ case RTW89_CHANCTX_STATE_MCC_START:
+ dpk->is_dpk_enable = false;
+ for (path = 0; path < RTW8852C_DPK_RF_PATH; path++)
+ _dpk_onoff(rtwdev, path, false);
+ break;
+ case RTW89_CHANCTX_STATE_MCC_STOP:
+ dpk->is_dpk_enable = true;
+ for (path = 0; path < RTW8852C_DPK_RF_PATH; path++)
+ _dpk_onoff(rtwdev, path, false);
+ rtw8852c_dpk(rtwdev, RTW89_PHY_0);
+ break;
+ default:
+ break;
+ }
+}
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.h b/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.h
index 928a587cdd05..6605137e61aa 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.h
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk.h
@@ -13,6 +13,7 @@ void rtw8852c_dack(struct rtw89_dev *rtwdev);
void rtw8852c_iqk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx);
void rtw8852c_rx_dck(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy_idx, bool is_afe);
void rtw8852c_rx_dck_track(struct rtw89_dev *rtwdev);
+void rtw8852c_dpk_init(struct rtw89_dev *rtwdev);
void rtw8852c_dpk(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy);
void rtw8852c_dpk_track(struct rtw89_dev *rtwdev);
void rtw8852c_tssi(struct rtw89_dev *rtwdev, enum rtw89_phy_idx phy);
@@ -25,5 +26,7 @@ void rtw8852c_set_channel_rf(struct rtw89_dev *rtwdev,
enum rtw89_phy_idx phy_idx);
void rtw8852c_lck_init(struct rtw89_dev *rtwdev);
void rtw8852c_lck_track(struct rtw89_dev *rtwdev);
+void rtw8852c_rfk_chanctx_cb(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_state state);
#endif
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk_table.c b/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk_table.c
index d727d528b365..e5b0c2a686f0 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk_table.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852c_rfk_table.c
@@ -165,11 +165,11 @@ static const struct rtw89_reg5_def rtw8852c_tssi_sys_defs[] = {
RTW89_DECL_RFK_WM(0x12bc, 0x000ffff0, 0xb5b5),
RTW89_DECL_RFK_WM(0x32bc, 0x000ffff0, 0xb5b5),
RTW89_DECL_RFK_WM(0x0300, 0xff000000, 0x16),
- RTW89_DECL_RFK_WM(0x0304, 0x0000ffff, 0x1f19),
- RTW89_DECL_RFK_WM(0x0308, 0xff000000, 0x1c),
+ RTW89_DECL_RFK_WM(0x0304, 0x0000ffff, 0x1313),
+ RTW89_DECL_RFK_WM(0x0308, 0xff000000, 0x13),
RTW89_DECL_RFK_WM(0x0314, 0xffff0000, 0x2041),
- RTW89_DECL_RFK_WM(0x0318, 0xffffffff, 0x20012041),
- RTW89_DECL_RFK_WM(0x0324, 0xffff0000, 0x2001),
+ RTW89_DECL_RFK_WM(0x0318, 0xffffffff, 0x00410041),
+ RTW89_DECL_RFK_WM(0x0324, 0xffff0000, 0x0041),
RTW89_DECL_RFK_WM(0x0020, 0x00006000, 0x3),
RTW89_DECL_RFK_WM(0x0024, 0x00006000, 0x3),
RTW89_DECL_RFK_WM(0x0704, 0xffff0000, 0x601e),
@@ -222,7 +222,7 @@ static const struct rtw89_reg5_def rtw8852c_tssi_txpwr_ctrl_bb_defs_a[] = {
RTW89_DECL_RFK_WM(0x5810, 0xffffffff, 0x59010000),
RTW89_DECL_RFK_WM(0x5814, 0x01ffffff, 0x026d000),
RTW89_DECL_RFK_WM(0x5814, 0xf8000000, 0x00),
- RTW89_DECL_RFK_WM(0x5818, 0xffffffff, 0x002c1800),
+ RTW89_DECL_RFK_WM(0x5818, 0xffffffff, 0x002c18e8),
RTW89_DECL_RFK_WM(0x581c, 0x3fffffff, 0x3dc80280),
RTW89_DECL_RFK_WM(0x5820, 0xffffffff, 0x00000080),
RTW89_DECL_RFK_WM(0x58e8, 0x0000003f, 0x03),
@@ -251,7 +251,7 @@ static const struct rtw89_reg5_def rtw8852c_tssi_txpwr_ctrl_bb_defs_a[] = {
RTW89_DECL_RFK_WM(0x58d4, 0x07fc0000, 0x100),
RTW89_DECL_RFK_WM(0x58d8, 0xffffffff, 0x8008016c),
RTW89_DECL_RFK_WM(0x58dc, 0x0001ffff, 0x0807f),
- RTW89_DECL_RFK_WM(0x58dc, 0xfff00000, 0x800),
+ RTW89_DECL_RFK_WM(0x58dc, 0xfff00000, 0xc00),
RTW89_DECL_RFK_WM(0x58f0, 0x0003ffff, 0x001ff),
RTW89_DECL_RFK_WM(0x58f4, 0x000fffff, 0x000),
RTW89_DECL_RFK_WM(0x58f8, 0x000fffff, 0x000),
@@ -260,14 +260,14 @@ static const struct rtw89_reg5_def rtw8852c_tssi_txpwr_ctrl_bb_defs_a[] = {
RTW89_DECLARE_RFK_TBL(rtw8852c_tssi_txpwr_ctrl_bb_defs_a);
static const struct rtw89_reg5_def rtw8852c_tssi_txpwr_ctrl_bb_defs_b[] = {
- RTW89_DECL_RFK_WM(0x566c, 0x00001000, 0x0),
+ RTW89_DECL_RFK_WM(0x766c, 0x00001000, 0x0),
RTW89_DECL_RFK_WM(0x7800, 0xffffffff, 0x003f807f),
RTW89_DECL_RFK_WM(0x780c, 0x0000007f, 0x40),
RTW89_DECL_RFK_WM(0x780c, 0x0fffff00, 0x00040),
RTW89_DECL_RFK_WM(0x7810, 0xffffffff, 0x59010000),
RTW89_DECL_RFK_WM(0x7814, 0x01ffffff, 0x026d000),
RTW89_DECL_RFK_WM(0x7814, 0xf8000000, 0x00),
- RTW89_DECL_RFK_WM(0x7818, 0xffffffff, 0x002c1800),
+ RTW89_DECL_RFK_WM(0x7818, 0xffffffff, 0x002c18e8),
RTW89_DECL_RFK_WM(0x781c, 0x3fffffff, 0x3dc80280),
RTW89_DECL_RFK_WM(0x7820, 0xffffffff, 0x00000080),
RTW89_DECL_RFK_WM(0x78e8, 0x0000003f, 0x03),
@@ -296,7 +296,7 @@ static const struct rtw89_reg5_def rtw8852c_tssi_txpwr_ctrl_bb_defs_b[] = {
RTW89_DECL_RFK_WM(0x78d4, 0x07fc0000, 0x100),
RTW89_DECL_RFK_WM(0x78d8, 0xffffffff, 0x8008016c),
RTW89_DECL_RFK_WM(0x78dc, 0x0001ffff, 0x0807f),
- RTW89_DECL_RFK_WM(0x78dc, 0xfff00000, 0x800),
+ RTW89_DECL_RFK_WM(0x78dc, 0xfff00000, 0xc00),
RTW89_DECL_RFK_WM(0x78f0, 0x0003ffff, 0x001ff),
RTW89_DECL_RFK_WM(0x78f4, 0x000fffff, 0x000),
RTW89_DECL_RFK_WM(0x78f8, 0x000fffff, 0x000),
@@ -511,9 +511,9 @@ static const struct rtw89_reg5_def rtw8852c_tssi_set_aligk_default_defs_5g_a[] =
RTW89_DECL_RFK_WM(0x563c, 0x3fffffff, 0x00000000),
RTW89_DECL_RFK_WM(0x5640, 0x000003ff, 0x000),
RTW89_DECL_RFK_WM(0x5640, 0x000ffc00, 0x000),
- RTW89_DECL_RFK_WM(0x5640, 0x3ff00000, 0x000),
- RTW89_DECL_RFK_WM(0x5644, 0x000003ff, 0x000),
- RTW89_DECL_RFK_WM(0x5644, 0x000ffc00, 0x000),
+ RTW89_DECL_RFK_WM(0x5640, 0x3ff00000, 0x3e9),
+ RTW89_DECL_RFK_WM(0x5644, 0x000003ff, 0x039),
+ RTW89_DECL_RFK_WM(0x5644, 0x000ffc00, 0x07d),
};
RTW89_DECLARE_RFK_TBL(rtw8852c_tssi_set_aligk_default_defs_5g_a);
@@ -531,9 +531,9 @@ static const struct rtw89_reg5_def rtw8852c_tssi_set_aligk_default_defs_5g_b[] =
RTW89_DECL_RFK_WM(0x763c, 0x3fffffff, 0x00000000),
RTW89_DECL_RFK_WM(0x7640, 0x000003ff, 0x000),
RTW89_DECL_RFK_WM(0x7640, 0x000ffc00, 0x000),
- RTW89_DECL_RFK_WM(0x7640, 0x3ff00000, 0x000),
- RTW89_DECL_RFK_WM(0x7644, 0x000003ff, 0x000),
- RTW89_DECL_RFK_WM(0x7644, 0x000ffc00, 0x000),
+ RTW89_DECL_RFK_WM(0x7640, 0x3ff00000, 0x3e9),
+ RTW89_DECL_RFK_WM(0x7644, 0x000003ff, 0x039),
+ RTW89_DECL_RFK_WM(0x7644, 0x000ffc00, 0x07d),
};
RTW89_DECLARE_RFK_TBL(rtw8852c_tssi_set_aligk_default_defs_5g_b);
@@ -551,9 +551,9 @@ static const struct rtw89_reg5_def rtw8852c_tssi_set_aligk_default_defs_6g_a[] =
RTW89_DECL_RFK_WM(0x563c, 0x3fffffff, 0x00000000),
RTW89_DECL_RFK_WM(0x5640, 0x000003ff, 0x000),
RTW89_DECL_RFK_WM(0x5640, 0x000ffc00, 0x000),
- RTW89_DECL_RFK_WM(0x5640, 0x3ff00000, 0x000),
- RTW89_DECL_RFK_WM(0x5644, 0x000003ff, 0x000),
- RTW89_DECL_RFK_WM(0x5644, 0x000ffc00, 0x000),
+ RTW89_DECL_RFK_WM(0x5640, 0x3ff00000, 0x3e9),
+ RTW89_DECL_RFK_WM(0x5644, 0x000003ff, 0x039),
+ RTW89_DECL_RFK_WM(0x5644, 0x000ffc00, 0x080),
};
RTW89_DECLARE_RFK_TBL(rtw8852c_tssi_set_aligk_default_defs_6g_a);
@@ -571,9 +571,9 @@ static const struct rtw89_reg5_def rtw8852c_tssi_set_aligk_default_defs_6g_b[] =
RTW89_DECL_RFK_WM(0x763c, 0x3fffffff, 0x00000000),
RTW89_DECL_RFK_WM(0x7640, 0x000003ff, 0x000),
RTW89_DECL_RFK_WM(0x7640, 0x000ffc00, 0x000),
- RTW89_DECL_RFK_WM(0x7640, 0x3ff00000, 0x000),
- RTW89_DECL_RFK_WM(0x7644, 0x000003ff, 0x000),
- RTW89_DECL_RFK_WM(0x7644, 0x000ffc00, 0x000),
+ RTW89_DECL_RFK_WM(0x7640, 0x3ff00000, 0x3e9),
+ RTW89_DECL_RFK_WM(0x7644, 0x000003ff, 0x039),
+ RTW89_DECL_RFK_WM(0x7644, 0x000ffc00, 0x080),
};
RTW89_DECLARE_RFK_TBL(rtw8852c_tssi_set_aligk_default_defs_6g_b);
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c_table.c b/drivers/net/wireless/realtek/rtw89/rtw8852c_table.c
index 4b272fdf1fd7..ab1a0aadc869 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852c_table.c
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852c_table.c
@@ -31525,8 +31525,9 @@ static const s8 _txpwr_track_delta_swingidx_2g_cck_a_p[] = {
3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 5, 5, 5
};
-const u8 rtw89_8852c_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
- [RTW89_REGD_NUM] = {
+static
+const u8 rtw89_8852c_tx_shape_lmt[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
+ [RTW89_REGD_NUM] = {
[0][0][RTW89_ACMA] = 0,
[0][0][RTW89_CHILE] = 0,
[0][0][RTW89_CN] = 0,
@@ -31537,6 +31538,7 @@ const u8 rtw89_8852c_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
[0][0][RTW89_MEXICO] = 1,
[0][0][RTW89_MKK] = 0,
[0][0][RTW89_QATAR] = 0,
+ [0][0][RTW89_THAILAND] = 0,
[0][0][RTW89_UK] = 0,
[0][0][RTW89_UKRAINE] = 0,
[0][1][RTW89_ACMA] = 0,
@@ -31549,6 +31551,7 @@ const u8 rtw89_8852c_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
[0][1][RTW89_MEXICO] = 3,
[0][1][RTW89_MKK] = 0,
[0][1][RTW89_QATAR] = 0,
+ [0][1][RTW89_THAILAND] = 0,
[0][1][RTW89_UK] = 0,
[0][1][RTW89_UKRAINE] = 0,
[1][1][RTW89_ACMA] = 0,
@@ -31561,6 +31564,7 @@ const u8 rtw89_8852c_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
[1][1][RTW89_MEXICO] = 3,
[1][1][RTW89_MKK] = 0,
[1][1][RTW89_QATAR] = 0,
+ [1][1][RTW89_THAILAND] = 0,
[1][1][RTW89_UK] = 0,
[1][1][RTW89_UKRAINE] = 0,
[2][1][RTW89_ACMA] = 0,
@@ -31571,25 +31575,66 @@ const u8 rtw89_8852c_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
[2][1][RTW89_KCC] = 0,
[2][1][RTW89_MKK] = 0,
[2][1][RTW89_QATAR] = 0,
+ [2][1][RTW89_THAILAND] = 0,
[2][1][RTW89_UK] = 0,
};
static
+const u8 rtw89_8852c_tx_shape_lmt_ru[RTW89_BAND_NUM][RTW89_REGD_NUM] = {
+ [0][RTW89_ACMA] = 0,
+ [0][RTW89_CHILE] = 0,
+ [0][RTW89_CN] = 0,
+ [0][RTW89_ETSI] = 0,
+ [0][RTW89_FCC] = 3,
+ [0][RTW89_IC] = 3,
+ [0][RTW89_KCC] = 0,
+ [0][RTW89_MEXICO] = 3,
+ [0][RTW89_MKK] = 0,
+ [0][RTW89_QATAR] = 0,
+ [0][RTW89_THAILAND] = 0,
+ [0][RTW89_UK] = 0,
+ [0][RTW89_UKRAINE] = 0,
+ [1][RTW89_ACMA] = 0,
+ [1][RTW89_CHILE] = 0,
+ [1][RTW89_CN] = 0,
+ [1][RTW89_ETSI] = 0,
+ [1][RTW89_FCC] = 3,
+ [1][RTW89_IC] = 3,
+ [1][RTW89_KCC] = 0,
+ [1][RTW89_MEXICO] = 3,
+ [1][RTW89_MKK] = 0,
+ [1][RTW89_QATAR] = 0,
+ [1][RTW89_THAILAND] = 0,
+ [1][RTW89_UK] = 0,
+ [1][RTW89_UKRAINE] = 0,
+ [2][RTW89_ACMA] = 0,
+ [2][RTW89_CHILE] = 0,
+ [2][RTW89_ETSI] = 0,
+ [2][RTW89_FCC] = 0,
+ [2][RTW89_IC] = 0,
+ [2][RTW89_KCC] = 0,
+ [2][RTW89_MKK] = 0,
+ [2][RTW89_QATAR] = 0,
+ [2][RTW89_THAILAND] = 0,
+ [2][RTW89_UK] = 0,
+};
+
+static
const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[RTW89_RS_LMT_NUM][RTW89_BF_NUM]
[RTW89_REGD_NUM][RTW89_2G_CH_NUM] = {
- [0][0][0][0][RTW89_WW][0] = 58,
- [0][0][0][0][RTW89_WW][1] = 58,
- [0][0][0][0][RTW89_WW][2] = 58,
- [0][0][0][0][RTW89_WW][3] = 58,
- [0][0][0][0][RTW89_WW][4] = 58,
- [0][0][0][0][RTW89_WW][5] = 58,
- [0][0][0][0][RTW89_WW][6] = 58,
- [0][0][0][0][RTW89_WW][7] = 58,
- [0][0][0][0][RTW89_WW][8] = 58,
- [0][0][0][0][RTW89_WW][9] = 58,
- [0][0][0][0][RTW89_WW][10] = 58,
- [0][0][0][0][RTW89_WW][11] = 58,
+ [0][0][0][0][RTW89_WW][0] = 56,
+ [0][0][0][0][RTW89_WW][1] = 56,
+ [0][0][0][0][RTW89_WW][2] = 56,
+ [0][0][0][0][RTW89_WW][3] = 56,
+ [0][0][0][0][RTW89_WW][4] = 56,
+ [0][0][0][0][RTW89_WW][5] = 56,
+ [0][0][0][0][RTW89_WW][6] = 56,
+ [0][0][0][0][RTW89_WW][7] = 56,
+ [0][0][0][0][RTW89_WW][8] = 56,
+ [0][0][0][0][RTW89_WW][9] = 56,
+ [0][0][0][0][RTW89_WW][10] = 56,
+ [0][0][0][0][RTW89_WW][11] = 56,
[0][0][0][0][RTW89_WW][12] = 46,
[0][0][0][0][RTW89_WW][13] = 72,
[0][1][0][0][RTW89_WW][0] = 42,
@@ -31609,9 +31654,9 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_WW][0] = 0,
[1][0][0][0][RTW89_WW][1] = 0,
[1][0][0][0][RTW89_WW][2] = 44,
- [1][0][0][0][RTW89_WW][3] = 58,
- [1][0][0][0][RTW89_WW][4] = 58,
- [1][0][0][0][RTW89_WW][5] = 58,
+ [1][0][0][0][RTW89_WW][3] = 56,
+ [1][0][0][0][RTW89_WW][4] = 56,
+ [1][0][0][0][RTW89_WW][5] = 56,
[1][0][0][0][RTW89_WW][6] = 46,
[1][0][0][0][RTW89_WW][7] = 46,
[1][0][0][0][RTW89_WW][8] = 28,
@@ -31622,10 +31667,10 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_WW][13] = 0,
[1][1][0][0][RTW89_WW][0] = 0,
[1][1][0][0][RTW89_WW][1] = 0,
- [1][1][0][0][RTW89_WW][2] = 46,
- [1][1][0][0][RTW89_WW][3] = 46,
- [1][1][0][0][RTW89_WW][4] = 46,
- [1][1][0][0][RTW89_WW][5] = 46,
+ [1][1][0][0][RTW89_WW][2] = 44,
+ [1][1][0][0][RTW89_WW][3] = 44,
+ [1][1][0][0][RTW89_WW][4] = 44,
+ [1][1][0][0][RTW89_WW][5] = 44,
[1][1][0][0][RTW89_WW][6] = 40,
[1][1][0][0][RTW89_WW][7] = 40,
[1][1][0][0][RTW89_WW][8] = 14,
@@ -31646,7 +31691,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_WW][9] = 58,
[0][0][1][0][RTW89_WW][10] = 58,
[0][0][1][0][RTW89_WW][11] = 58,
- [0][0][1][0][RTW89_WW][12] = 58,
+ [0][0][1][0][RTW89_WW][12] = 40,
[0][0][1][0][RTW89_WW][13] = 0,
[0][1][1][0][RTW89_WW][0] = 46,
[0][1][1][0][RTW89_WW][1] = 46,
@@ -31690,7 +31735,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_WW][11] = 46,
[0][1][2][0][RTW89_WW][12] = 16,
[0][1][2][0][RTW89_WW][13] = 0,
- [0][1][2][1][RTW89_WW][0] = 36,
+ [0][1][2][1][RTW89_WW][0] = 34,
[0][1][2][1][RTW89_WW][1] = 34,
[0][1][2][1][RTW89_WW][2] = 34,
[0][1][2][1][RTW89_WW][3] = 34,
@@ -31742,7 +31787,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_WW][7] = 34,
[1][1][2][1][RTW89_WW][8] = 34,
[1][1][2][1][RTW89_WW][9] = 34,
- [1][1][2][1][RTW89_WW][10] = 36,
+ [1][1][2][1][RTW89_WW][10] = 34,
[1][1][2][1][RTW89_WW][11] = 0,
[1][1][2][1][RTW89_WW][12] = 0,
[1][1][2][1][RTW89_WW][13] = 0,
@@ -31752,156 +31797,169 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_IC][0] = 76,
[0][0][0][0][RTW89_KCC][0] = 68,
[0][0][0][0][RTW89_ACMA][0] = 60,
- [0][0][0][0][RTW89_CN][0] = 58,
+ [0][0][0][0][RTW89_CN][0] = 56,
[0][0][0][0][RTW89_UK][0] = 60,
[0][0][0][0][RTW89_MEXICO][0] = 76,
[0][0][0][0][RTW89_UKRAINE][0] = 60,
[0][0][0][0][RTW89_CHILE][0] = 76,
[0][0][0][0][RTW89_QATAR][0] = 60,
+ [0][0][0][0][RTW89_THAILAND][0] = 60,
[0][0][0][0][RTW89_FCC][1] = 76,
[0][0][0][0][RTW89_ETSI][1] = 60,
[0][0][0][0][RTW89_MKK][1] = 68,
[0][0][0][0][RTW89_IC][1] = 76,
[0][0][0][0][RTW89_KCC][1] = 68,
[0][0][0][0][RTW89_ACMA][1] = 60,
- [0][0][0][0][RTW89_CN][1] = 58,
+ [0][0][0][0][RTW89_CN][1] = 56,
[0][0][0][0][RTW89_UK][1] = 60,
[0][0][0][0][RTW89_MEXICO][1] = 76,
[0][0][0][0][RTW89_UKRAINE][1] = 60,
[0][0][0][0][RTW89_CHILE][1] = 68,
[0][0][0][0][RTW89_QATAR][1] = 60,
+ [0][0][0][0][RTW89_THAILAND][1] = 60,
[0][0][0][0][RTW89_FCC][2] = 76,
[0][0][0][0][RTW89_ETSI][2] = 60,
[0][0][0][0][RTW89_MKK][2] = 68,
[0][0][0][0][RTW89_IC][2] = 76,
[0][0][0][0][RTW89_KCC][2] = 68,
[0][0][0][0][RTW89_ACMA][2] = 60,
- [0][0][0][0][RTW89_CN][2] = 58,
+ [0][0][0][0][RTW89_CN][2] = 56,
[0][0][0][0][RTW89_UK][2] = 60,
[0][0][0][0][RTW89_MEXICO][2] = 76,
[0][0][0][0][RTW89_UKRAINE][2] = 60,
[0][0][0][0][RTW89_CHILE][2] = 68,
[0][0][0][0][RTW89_QATAR][2] = 60,
+ [0][0][0][0][RTW89_THAILAND][2] = 60,
[0][0][0][0][RTW89_FCC][3] = 76,
[0][0][0][0][RTW89_ETSI][3] = 60,
[0][0][0][0][RTW89_MKK][3] = 68,
[0][0][0][0][RTW89_IC][3] = 76,
[0][0][0][0][RTW89_KCC][3] = 68,
[0][0][0][0][RTW89_ACMA][3] = 60,
- [0][0][0][0][RTW89_CN][3] = 58,
+ [0][0][0][0][RTW89_CN][3] = 56,
[0][0][0][0][RTW89_UK][3] = 60,
[0][0][0][0][RTW89_MEXICO][3] = 76,
[0][0][0][0][RTW89_UKRAINE][3] = 60,
[0][0][0][0][RTW89_CHILE][3] = 68,
[0][0][0][0][RTW89_QATAR][3] = 60,
+ [0][0][0][0][RTW89_THAILAND][3] = 60,
[0][0][0][0][RTW89_FCC][4] = 76,
[0][0][0][0][RTW89_ETSI][4] = 60,
[0][0][0][0][RTW89_MKK][4] = 68,
[0][0][0][0][RTW89_IC][4] = 76,
[0][0][0][0][RTW89_KCC][4] = 68,
[0][0][0][0][RTW89_ACMA][4] = 60,
- [0][0][0][0][RTW89_CN][4] = 58,
+ [0][0][0][0][RTW89_CN][4] = 56,
[0][0][0][0][RTW89_UK][4] = 60,
[0][0][0][0][RTW89_MEXICO][4] = 76,
[0][0][0][0][RTW89_UKRAINE][4] = 60,
[0][0][0][0][RTW89_CHILE][4] = 68,
[0][0][0][0][RTW89_QATAR][4] = 60,
+ [0][0][0][0][RTW89_THAILAND][4] = 60,
[0][0][0][0][RTW89_FCC][5] = 76,
[0][0][0][0][RTW89_ETSI][5] = 60,
[0][0][0][0][RTW89_MKK][5] = 68,
[0][0][0][0][RTW89_IC][5] = 76,
[0][0][0][0][RTW89_KCC][5] = 68,
[0][0][0][0][RTW89_ACMA][5] = 60,
- [0][0][0][0][RTW89_CN][5] = 58,
+ [0][0][0][0][RTW89_CN][5] = 56,
[0][0][0][0][RTW89_UK][5] = 60,
[0][0][0][0][RTW89_MEXICO][5] = 76,
[0][0][0][0][RTW89_UKRAINE][5] = 60,
[0][0][0][0][RTW89_CHILE][5] = 76,
[0][0][0][0][RTW89_QATAR][5] = 60,
+ [0][0][0][0][RTW89_THAILAND][5] = 60,
[0][0][0][0][RTW89_FCC][6] = 76,
[0][0][0][0][RTW89_ETSI][6] = 60,
[0][0][0][0][RTW89_MKK][6] = 68,
[0][0][0][0][RTW89_IC][6] = 76,
[0][0][0][0][RTW89_KCC][6] = 68,
[0][0][0][0][RTW89_ACMA][6] = 60,
- [0][0][0][0][RTW89_CN][6] = 58,
+ [0][0][0][0][RTW89_CN][6] = 56,
[0][0][0][0][RTW89_UK][6] = 60,
[0][0][0][0][RTW89_MEXICO][6] = 76,
[0][0][0][0][RTW89_UKRAINE][6] = 60,
[0][0][0][0][RTW89_CHILE][6] = 76,
[0][0][0][0][RTW89_QATAR][6] = 60,
+ [0][0][0][0][RTW89_THAILAND][6] = 60,
[0][0][0][0][RTW89_FCC][7] = 76,
[0][0][0][0][RTW89_ETSI][7] = 60,
[0][0][0][0][RTW89_MKK][7] = 68,
[0][0][0][0][RTW89_IC][7] = 76,
[0][0][0][0][RTW89_KCC][7] = 68,
[0][0][0][0][RTW89_ACMA][7] = 60,
- [0][0][0][0][RTW89_CN][7] = 58,
+ [0][0][0][0][RTW89_CN][7] = 56,
[0][0][0][0][RTW89_UK][7] = 60,
[0][0][0][0][RTW89_MEXICO][7] = 76,
[0][0][0][0][RTW89_UKRAINE][7] = 60,
[0][0][0][0][RTW89_CHILE][7] = 76,
[0][0][0][0][RTW89_QATAR][7] = 60,
+ [0][0][0][0][RTW89_THAILAND][7] = 60,
[0][0][0][0][RTW89_FCC][8] = 76,
[0][0][0][0][RTW89_ETSI][8] = 60,
[0][0][0][0][RTW89_MKK][8] = 68,
[0][0][0][0][RTW89_IC][8] = 76,
[0][0][0][0][RTW89_KCC][8] = 68,
[0][0][0][0][RTW89_ACMA][8] = 60,
- [0][0][0][0][RTW89_CN][8] = 58,
+ [0][0][0][0][RTW89_CN][8] = 56,
[0][0][0][0][RTW89_UK][8] = 60,
[0][0][0][0][RTW89_MEXICO][8] = 76,
[0][0][0][0][RTW89_UKRAINE][8] = 60,
[0][0][0][0][RTW89_CHILE][8] = 76,
[0][0][0][0][RTW89_QATAR][8] = 60,
+ [0][0][0][0][RTW89_THAILAND][8] = 60,
[0][0][0][0][RTW89_FCC][9] = 76,
[0][0][0][0][RTW89_ETSI][9] = 60,
[0][0][0][0][RTW89_MKK][9] = 68,
[0][0][0][0][RTW89_IC][9] = 76,
[0][0][0][0][RTW89_KCC][9] = 70,
[0][0][0][0][RTW89_ACMA][9] = 60,
- [0][0][0][0][RTW89_CN][9] = 58,
+ [0][0][0][0][RTW89_CN][9] = 56,
[0][0][0][0][RTW89_UK][9] = 60,
[0][0][0][0][RTW89_MEXICO][9] = 76,
[0][0][0][0][RTW89_UKRAINE][9] = 60,
[0][0][0][0][RTW89_CHILE][9] = 76,
[0][0][0][0][RTW89_QATAR][9] = 60,
+ [0][0][0][0][RTW89_THAILAND][9] = 60,
[0][0][0][0][RTW89_FCC][10] = 76,
[0][0][0][0][RTW89_ETSI][10] = 60,
[0][0][0][0][RTW89_MKK][10] = 68,
[0][0][0][0][RTW89_IC][10] = 76,
[0][0][0][0][RTW89_KCC][10] = 70,
[0][0][0][0][RTW89_ACMA][10] = 60,
- [0][0][0][0][RTW89_CN][10] = 58,
+ [0][0][0][0][RTW89_CN][10] = 56,
[0][0][0][0][RTW89_UK][10] = 60,
[0][0][0][0][RTW89_MEXICO][10] = 76,
[0][0][0][0][RTW89_UKRAINE][10] = 60,
[0][0][0][0][RTW89_CHILE][10] = 76,
[0][0][0][0][RTW89_QATAR][10] = 60,
+ [0][0][0][0][RTW89_THAILAND][10] = 60,
[0][0][0][0][RTW89_FCC][11] = 58,
[0][0][0][0][RTW89_ETSI][11] = 60,
[0][0][0][0][RTW89_MKK][11] = 68,
[0][0][0][0][RTW89_IC][11] = 58,
[0][0][0][0][RTW89_KCC][11] = 70,
[0][0][0][0][RTW89_ACMA][11] = 60,
- [0][0][0][0][RTW89_CN][11] = 58,
+ [0][0][0][0][RTW89_CN][11] = 56,
[0][0][0][0][RTW89_UK][11] = 60,
[0][0][0][0][RTW89_MEXICO][11] = 58,
[0][0][0][0][RTW89_UKRAINE][11] = 60,
[0][0][0][0][RTW89_CHILE][11] = 58,
[0][0][0][0][RTW89_QATAR][11] = 60,
+ [0][0][0][0][RTW89_THAILAND][11] = 60,
[0][0][0][0][RTW89_FCC][12] = 46,
[0][0][0][0][RTW89_ETSI][12] = 60,
[0][0][0][0][RTW89_MKK][12] = 68,
[0][0][0][0][RTW89_IC][12] = 46,
[0][0][0][0][RTW89_KCC][12] = 70,
[0][0][0][0][RTW89_ACMA][12] = 60,
- [0][0][0][0][RTW89_CN][12] = 58,
+ [0][0][0][0][RTW89_CN][12] = 56,
[0][0][0][0][RTW89_UK][12] = 60,
[0][0][0][0][RTW89_MEXICO][12] = 46,
[0][0][0][0][RTW89_UKRAINE][12] = 60,
[0][0][0][0][RTW89_CHILE][12] = 46,
[0][0][0][0][RTW89_QATAR][12] = 60,
+ [0][0][0][0][RTW89_THAILAND][12] = 60,
[0][0][0][0][RTW89_FCC][13] = 127,
[0][0][0][0][RTW89_ETSI][13] = 127,
[0][0][0][0][RTW89_MKK][13] = 72,
@@ -31914,6 +31972,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][0][0][RTW89_UKRAINE][13] = 127,
[0][0][0][0][RTW89_CHILE][13] = 127,
[0][0][0][0][RTW89_QATAR][13] = 127,
+ [0][0][0][0][RTW89_THAILAND][13] = 127,
[0][1][0][0][RTW89_FCC][0] = 76,
[0][1][0][0][RTW89_ETSI][0] = 48,
[0][1][0][0][RTW89_MKK][0] = 58,
@@ -31926,6 +31985,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][0] = 48,
[0][1][0][0][RTW89_CHILE][0] = 76,
[0][1][0][0][RTW89_QATAR][0] = 48,
+ [0][1][0][0][RTW89_THAILAND][0] = 48,
[0][1][0][0][RTW89_FCC][1] = 76,
[0][1][0][0][RTW89_ETSI][1] = 48,
[0][1][0][0][RTW89_MKK][1] = 58,
@@ -31938,6 +31998,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][1] = 48,
[0][1][0][0][RTW89_CHILE][1] = 54,
[0][1][0][0][RTW89_QATAR][1] = 48,
+ [0][1][0][0][RTW89_THAILAND][1] = 48,
[0][1][0][0][RTW89_FCC][2] = 76,
[0][1][0][0][RTW89_ETSI][2] = 48,
[0][1][0][0][RTW89_MKK][2] = 58,
@@ -31950,6 +32011,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][2] = 48,
[0][1][0][0][RTW89_CHILE][2] = 54,
[0][1][0][0][RTW89_QATAR][2] = 48,
+ [0][1][0][0][RTW89_THAILAND][2] = 48,
[0][1][0][0][RTW89_FCC][3] = 76,
[0][1][0][0][RTW89_ETSI][3] = 48,
[0][1][0][0][RTW89_MKK][3] = 58,
@@ -31962,6 +32024,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][3] = 48,
[0][1][0][0][RTW89_CHILE][3] = 54,
[0][1][0][0][RTW89_QATAR][3] = 48,
+ [0][1][0][0][RTW89_THAILAND][3] = 48,
[0][1][0][0][RTW89_FCC][4] = 76,
[0][1][0][0][RTW89_ETSI][4] = 48,
[0][1][0][0][RTW89_MKK][4] = 58,
@@ -31974,6 +32037,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][4] = 48,
[0][1][0][0][RTW89_CHILE][4] = 54,
[0][1][0][0][RTW89_QATAR][4] = 48,
+ [0][1][0][0][RTW89_THAILAND][4] = 48,
[0][1][0][0][RTW89_FCC][5] = 76,
[0][1][0][0][RTW89_ETSI][5] = 48,
[0][1][0][0][RTW89_MKK][5] = 58,
@@ -31986,6 +32050,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][5] = 48,
[0][1][0][0][RTW89_CHILE][5] = 76,
[0][1][0][0][RTW89_QATAR][5] = 48,
+ [0][1][0][0][RTW89_THAILAND][5] = 48,
[0][1][0][0][RTW89_FCC][6] = 76,
[0][1][0][0][RTW89_ETSI][6] = 48,
[0][1][0][0][RTW89_MKK][6] = 58,
@@ -31998,6 +32063,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][6] = 48,
[0][1][0][0][RTW89_CHILE][6] = 76,
[0][1][0][0][RTW89_QATAR][6] = 48,
+ [0][1][0][0][RTW89_THAILAND][6] = 48,
[0][1][0][0][RTW89_FCC][7] = 76,
[0][1][0][0][RTW89_ETSI][7] = 48,
[0][1][0][0][RTW89_MKK][7] = 58,
@@ -32010,6 +32076,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][7] = 48,
[0][1][0][0][RTW89_CHILE][7] = 76,
[0][1][0][0][RTW89_QATAR][7] = 48,
+ [0][1][0][0][RTW89_THAILAND][7] = 48,
[0][1][0][0][RTW89_FCC][8] = 76,
[0][1][0][0][RTW89_ETSI][8] = 48,
[0][1][0][0][RTW89_MKK][8] = 58,
@@ -32022,6 +32089,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][8] = 48,
[0][1][0][0][RTW89_CHILE][8] = 76,
[0][1][0][0][RTW89_QATAR][8] = 48,
+ [0][1][0][0][RTW89_THAILAND][8] = 48,
[0][1][0][0][RTW89_FCC][9] = 70,
[0][1][0][0][RTW89_ETSI][9] = 48,
[0][1][0][0][RTW89_MKK][9] = 58,
@@ -32034,6 +32102,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][9] = 48,
[0][1][0][0][RTW89_CHILE][9] = 70,
[0][1][0][0][RTW89_QATAR][9] = 48,
+ [0][1][0][0][RTW89_THAILAND][9] = 48,
[0][1][0][0][RTW89_FCC][10] = 72,
[0][1][0][0][RTW89_ETSI][10] = 48,
[0][1][0][0][RTW89_MKK][10] = 58,
@@ -32046,6 +32115,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][10] = 48,
[0][1][0][0][RTW89_CHILE][10] = 72,
[0][1][0][0][RTW89_QATAR][10] = 48,
+ [0][1][0][0][RTW89_THAILAND][10] = 48,
[0][1][0][0][RTW89_FCC][11] = 44,
[0][1][0][0][RTW89_ETSI][11] = 48,
[0][1][0][0][RTW89_MKK][11] = 58,
@@ -32058,6 +32128,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][11] = 48,
[0][1][0][0][RTW89_CHILE][11] = 44,
[0][1][0][0][RTW89_QATAR][11] = 48,
+ [0][1][0][0][RTW89_THAILAND][11] = 48,
[0][1][0][0][RTW89_FCC][12] = 18,
[0][1][0][0][RTW89_ETSI][12] = 48,
[0][1][0][0][RTW89_MKK][12] = 58,
@@ -32070,6 +32141,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][12] = 48,
[0][1][0][0][RTW89_CHILE][12] = 18,
[0][1][0][0][RTW89_QATAR][12] = 48,
+ [0][1][0][0][RTW89_THAILAND][12] = 48,
[0][1][0][0][RTW89_FCC][13] = 127,
[0][1][0][0][RTW89_ETSI][13] = 127,
[0][1][0][0][RTW89_MKK][13] = 60,
@@ -32082,6 +32154,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][0][0][RTW89_UKRAINE][13] = 127,
[0][1][0][0][RTW89_CHILE][13] = 127,
[0][1][0][0][RTW89_QATAR][13] = 127,
+ [0][1][0][0][RTW89_THAILAND][13] = 127,
[1][0][0][0][RTW89_FCC][0] = 127,
[1][0][0][0][RTW89_ETSI][0] = 127,
[1][0][0][0][RTW89_MKK][0] = 127,
@@ -32094,6 +32167,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_UKRAINE][0] = 127,
[1][0][0][0][RTW89_CHILE][0] = 127,
[1][0][0][0][RTW89_QATAR][0] = 127,
+ [1][0][0][0][RTW89_THAILAND][0] = 127,
[1][0][0][0][RTW89_FCC][1] = 127,
[1][0][0][0][RTW89_ETSI][1] = 127,
[1][0][0][0][RTW89_MKK][1] = 127,
@@ -32106,114 +32180,124 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_UKRAINE][1] = 127,
[1][0][0][0][RTW89_CHILE][1] = 127,
[1][0][0][0][RTW89_QATAR][1] = 127,
+ [1][0][0][0][RTW89_THAILAND][1] = 127,
[1][0][0][0][RTW89_FCC][2] = 44,
[1][0][0][0][RTW89_ETSI][2] = 60,
[1][0][0][0][RTW89_MKK][2] = 66,
[1][0][0][0][RTW89_IC][2] = 44,
[1][0][0][0][RTW89_KCC][2] = 68,
[1][0][0][0][RTW89_ACMA][2] = 60,
- [1][0][0][0][RTW89_CN][2] = 58,
+ [1][0][0][0][RTW89_CN][2] = 56,
[1][0][0][0][RTW89_UK][2] = 60,
[1][0][0][0][RTW89_MEXICO][2] = 44,
[1][0][0][0][RTW89_UKRAINE][2] = 60,
[1][0][0][0][RTW89_CHILE][2] = 44,
[1][0][0][0][RTW89_QATAR][2] = 60,
+ [1][0][0][0][RTW89_THAILAND][2] = 60,
[1][0][0][0][RTW89_FCC][3] = 60,
[1][0][0][0][RTW89_ETSI][3] = 60,
[1][0][0][0][RTW89_MKK][3] = 66,
[1][0][0][0][RTW89_IC][3] = 60,
[1][0][0][0][RTW89_KCC][3] = 68,
[1][0][0][0][RTW89_ACMA][3] = 60,
- [1][0][0][0][RTW89_CN][3] = 58,
+ [1][0][0][0][RTW89_CN][3] = 56,
[1][0][0][0][RTW89_UK][3] = 60,
[1][0][0][0][RTW89_MEXICO][3] = 60,
[1][0][0][0][RTW89_UKRAINE][3] = 60,
[1][0][0][0][RTW89_CHILE][3] = 60,
[1][0][0][0][RTW89_QATAR][3] = 60,
+ [1][0][0][0][RTW89_THAILAND][3] = 60,
[1][0][0][0][RTW89_FCC][4] = 60,
[1][0][0][0][RTW89_ETSI][4] = 60,
[1][0][0][0][RTW89_MKK][4] = 66,
[1][0][0][0][RTW89_IC][4] = 60,
[1][0][0][0][RTW89_KCC][4] = 68,
[1][0][0][0][RTW89_ACMA][4] = 60,
- [1][0][0][0][RTW89_CN][4] = 58,
+ [1][0][0][0][RTW89_CN][4] = 56,
[1][0][0][0][RTW89_UK][4] = 60,
[1][0][0][0][RTW89_MEXICO][4] = 60,
[1][0][0][0][RTW89_UKRAINE][4] = 60,
[1][0][0][0][RTW89_CHILE][4] = 60,
[1][0][0][0][RTW89_QATAR][4] = 60,
+ [1][0][0][0][RTW89_THAILAND][4] = 60,
[1][0][0][0][RTW89_FCC][5] = 62,
[1][0][0][0][RTW89_ETSI][5] = 60,
[1][0][0][0][RTW89_MKK][5] = 66,
[1][0][0][0][RTW89_IC][5] = 62,
[1][0][0][0][RTW89_KCC][5] = 68,
[1][0][0][0][RTW89_ACMA][5] = 60,
- [1][0][0][0][RTW89_CN][5] = 58,
+ [1][0][0][0][RTW89_CN][5] = 56,
[1][0][0][0][RTW89_UK][5] = 60,
[1][0][0][0][RTW89_MEXICO][5] = 62,
[1][0][0][0][RTW89_UKRAINE][5] = 60,
[1][0][0][0][RTW89_CHILE][5] = 62,
[1][0][0][0][RTW89_QATAR][5] = 60,
+ [1][0][0][0][RTW89_THAILAND][5] = 60,
[1][0][0][0][RTW89_FCC][6] = 46,
[1][0][0][0][RTW89_ETSI][6] = 60,
[1][0][0][0][RTW89_MKK][6] = 66,
[1][0][0][0][RTW89_IC][6] = 46,
[1][0][0][0][RTW89_KCC][6] = 68,
[1][0][0][0][RTW89_ACMA][6] = 60,
- [1][0][0][0][RTW89_CN][6] = 58,
+ [1][0][0][0][RTW89_CN][6] = 56,
[1][0][0][0][RTW89_UK][6] = 60,
[1][0][0][0][RTW89_MEXICO][6] = 46,
[1][0][0][0][RTW89_UKRAINE][6] = 60,
[1][0][0][0][RTW89_CHILE][6] = 46,
[1][0][0][0][RTW89_QATAR][6] = 60,
+ [1][0][0][0][RTW89_THAILAND][6] = 60,
[1][0][0][0][RTW89_FCC][7] = 46,
[1][0][0][0][RTW89_ETSI][7] = 60,
[1][0][0][0][RTW89_MKK][7] = 66,
[1][0][0][0][RTW89_IC][7] = 46,
[1][0][0][0][RTW89_KCC][7] = 68,
[1][0][0][0][RTW89_ACMA][7] = 60,
- [1][0][0][0][RTW89_CN][7] = 58,
+ [1][0][0][0][RTW89_CN][7] = 56,
[1][0][0][0][RTW89_UK][7] = 60,
[1][0][0][0][RTW89_MEXICO][7] = 46,
[1][0][0][0][RTW89_UKRAINE][7] = 60,
[1][0][0][0][RTW89_CHILE][7] = 46,
[1][0][0][0][RTW89_QATAR][7] = 60,
+ [1][0][0][0][RTW89_THAILAND][7] = 60,
[1][0][0][0][RTW89_FCC][8] = 28,
[1][0][0][0][RTW89_ETSI][8] = 60,
[1][0][0][0][RTW89_MKK][8] = 66,
[1][0][0][0][RTW89_IC][8] = 28,
[1][0][0][0][RTW89_KCC][8] = 70,
[1][0][0][0][RTW89_ACMA][8] = 60,
- [1][0][0][0][RTW89_CN][8] = 58,
+ [1][0][0][0][RTW89_CN][8] = 56,
[1][0][0][0][RTW89_UK][8] = 60,
[1][0][0][0][RTW89_MEXICO][8] = 28,
[1][0][0][0][RTW89_UKRAINE][8] = 60,
[1][0][0][0][RTW89_CHILE][8] = 28,
[1][0][0][0][RTW89_QATAR][8] = 60,
+ [1][0][0][0][RTW89_THAILAND][8] = 60,
[1][0][0][0][RTW89_FCC][9] = 26,
[1][0][0][0][RTW89_ETSI][9] = 60,
[1][0][0][0][RTW89_MKK][9] = 66,
[1][0][0][0][RTW89_IC][9] = 26,
[1][0][0][0][RTW89_KCC][9] = 70,
[1][0][0][0][RTW89_ACMA][9] = 60,
- [1][0][0][0][RTW89_CN][9] = 58,
+ [1][0][0][0][RTW89_CN][9] = 56,
[1][0][0][0][RTW89_UK][9] = 60,
[1][0][0][0][RTW89_MEXICO][9] = 26,
[1][0][0][0][RTW89_UKRAINE][9] = 60,
[1][0][0][0][RTW89_CHILE][9] = 26,
[1][0][0][0][RTW89_QATAR][9] = 60,
+ [1][0][0][0][RTW89_THAILAND][9] = 60,
[1][0][0][0][RTW89_FCC][10] = 26,
[1][0][0][0][RTW89_ETSI][10] = 60,
[1][0][0][0][RTW89_MKK][10] = 66,
[1][0][0][0][RTW89_IC][10] = 26,
[1][0][0][0][RTW89_KCC][10] = 70,
[1][0][0][0][RTW89_ACMA][10] = 60,
- [1][0][0][0][RTW89_CN][10] = 58,
+ [1][0][0][0][RTW89_CN][10] = 56,
[1][0][0][0][RTW89_UK][10] = 60,
[1][0][0][0][RTW89_MEXICO][10] = 26,
[1][0][0][0][RTW89_UKRAINE][10] = 60,
[1][0][0][0][RTW89_CHILE][10] = 26,
[1][0][0][0][RTW89_QATAR][10] = 60,
+ [1][0][0][0][RTW89_THAILAND][10] = 60,
[1][0][0][0][RTW89_FCC][11] = 127,
[1][0][0][0][RTW89_ETSI][11] = 127,
[1][0][0][0][RTW89_MKK][11] = 127,
@@ -32226,6 +32310,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_UKRAINE][11] = 127,
[1][0][0][0][RTW89_CHILE][11] = 127,
[1][0][0][0][RTW89_QATAR][11] = 127,
+ [1][0][0][0][RTW89_THAILAND][11] = 127,
[1][0][0][0][RTW89_FCC][12] = 127,
[1][0][0][0][RTW89_ETSI][12] = 127,
[1][0][0][0][RTW89_MKK][12] = 127,
@@ -32238,6 +32323,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_UKRAINE][12] = 127,
[1][0][0][0][RTW89_CHILE][12] = 127,
[1][0][0][0][RTW89_QATAR][12] = 127,
+ [1][0][0][0][RTW89_THAILAND][12] = 127,
[1][0][0][0][RTW89_FCC][13] = 127,
[1][0][0][0][RTW89_ETSI][13] = 127,
[1][0][0][0][RTW89_MKK][13] = 127,
@@ -32250,6 +32336,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][0][0][RTW89_UKRAINE][13] = 127,
[1][0][0][0][RTW89_CHILE][13] = 127,
[1][0][0][0][RTW89_QATAR][13] = 127,
+ [1][0][0][0][RTW89_THAILAND][13] = 127,
[1][1][0][0][RTW89_FCC][0] = 127,
[1][1][0][0][RTW89_ETSI][0] = 127,
[1][1][0][0][RTW89_MKK][0] = 127,
@@ -32262,6 +32349,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_UKRAINE][0] = 127,
[1][1][0][0][RTW89_CHILE][0] = 127,
[1][1][0][0][RTW89_QATAR][0] = 127,
+ [1][1][0][0][RTW89_THAILAND][0] = 127,
[1][1][0][0][RTW89_FCC][1] = 127,
[1][1][0][0][RTW89_ETSI][1] = 127,
[1][1][0][0][RTW89_MKK][1] = 127,
@@ -32274,114 +32362,124 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_UKRAINE][1] = 127,
[1][1][0][0][RTW89_CHILE][1] = 127,
[1][1][0][0][RTW89_QATAR][1] = 127,
+ [1][1][0][0][RTW89_THAILAND][1] = 127,
[1][1][0][0][RTW89_FCC][2] = 46,
[1][1][0][0][RTW89_ETSI][2] = 48,
[1][1][0][0][RTW89_MKK][2] = 58,
[1][1][0][0][RTW89_IC][2] = 46,
[1][1][0][0][RTW89_KCC][2] = 56,
[1][1][0][0][RTW89_ACMA][2] = 48,
- [1][1][0][0][RTW89_CN][2] = 46,
+ [1][1][0][0][RTW89_CN][2] = 44,
[1][1][0][0][RTW89_UK][2] = 48,
[1][1][0][0][RTW89_MEXICO][2] = 46,
[1][1][0][0][RTW89_UKRAINE][2] = 48,
[1][1][0][0][RTW89_CHILE][2] = 46,
[1][1][0][0][RTW89_QATAR][2] = 48,
+ [1][1][0][0][RTW89_THAILAND][2] = 48,
[1][1][0][0][RTW89_FCC][3] = 46,
[1][1][0][0][RTW89_ETSI][3] = 48,
[1][1][0][0][RTW89_MKK][3] = 58,
[1][1][0][0][RTW89_IC][3] = 46,
[1][1][0][0][RTW89_KCC][3] = 56,
[1][1][0][0][RTW89_ACMA][3] = 48,
- [1][1][0][0][RTW89_CN][3] = 46,
+ [1][1][0][0][RTW89_CN][3] = 44,
[1][1][0][0][RTW89_UK][3] = 48,
[1][1][0][0][RTW89_MEXICO][3] = 46,
[1][1][0][0][RTW89_UKRAINE][3] = 48,
[1][1][0][0][RTW89_CHILE][3] = 46,
[1][1][0][0][RTW89_QATAR][3] = 48,
+ [1][1][0][0][RTW89_THAILAND][3] = 48,
[1][1][0][0][RTW89_FCC][4] = 46,
[1][1][0][0][RTW89_ETSI][4] = 48,
[1][1][0][0][RTW89_MKK][4] = 58,
[1][1][0][0][RTW89_IC][4] = 46,
[1][1][0][0][RTW89_KCC][4] = 56,
[1][1][0][0][RTW89_ACMA][4] = 48,
- [1][1][0][0][RTW89_CN][4] = 46,
+ [1][1][0][0][RTW89_CN][4] = 44,
[1][1][0][0][RTW89_UK][4] = 48,
[1][1][0][0][RTW89_MEXICO][4] = 46,
[1][1][0][0][RTW89_UKRAINE][4] = 48,
[1][1][0][0][RTW89_CHILE][4] = 46,
[1][1][0][0][RTW89_QATAR][4] = 48,
+ [1][1][0][0][RTW89_THAILAND][4] = 48,
[1][1][0][0][RTW89_FCC][5] = 48,
[1][1][0][0][RTW89_ETSI][5] = 48,
[1][1][0][0][RTW89_MKK][5] = 58,
[1][1][0][0][RTW89_IC][5] = 48,
[1][1][0][0][RTW89_KCC][5] = 56,
[1][1][0][0][RTW89_ACMA][5] = 48,
- [1][1][0][0][RTW89_CN][5] = 46,
+ [1][1][0][0][RTW89_CN][5] = 44,
[1][1][0][0][RTW89_UK][5] = 48,
[1][1][0][0][RTW89_MEXICO][5] = 48,
[1][1][0][0][RTW89_UKRAINE][5] = 48,
[1][1][0][0][RTW89_CHILE][5] = 48,
[1][1][0][0][RTW89_QATAR][5] = 48,
+ [1][1][0][0][RTW89_THAILAND][5] = 48,
[1][1][0][0][RTW89_FCC][6] = 40,
[1][1][0][0][RTW89_ETSI][6] = 48,
[1][1][0][0][RTW89_MKK][6] = 58,
[1][1][0][0][RTW89_IC][6] = 40,
[1][1][0][0][RTW89_KCC][6] = 56,
[1][1][0][0][RTW89_ACMA][6] = 48,
- [1][1][0][0][RTW89_CN][6] = 46,
+ [1][1][0][0][RTW89_CN][6] = 44,
[1][1][0][0][RTW89_UK][6] = 48,
[1][1][0][0][RTW89_MEXICO][6] = 40,
[1][1][0][0][RTW89_UKRAINE][6] = 48,
[1][1][0][0][RTW89_CHILE][6] = 40,
[1][1][0][0][RTW89_QATAR][6] = 48,
+ [1][1][0][0][RTW89_THAILAND][6] = 48,
[1][1][0][0][RTW89_FCC][7] = 40,
[1][1][0][0][RTW89_ETSI][7] = 48,
[1][1][0][0][RTW89_MKK][7] = 58,
[1][1][0][0][RTW89_IC][7] = 40,
[1][1][0][0][RTW89_KCC][7] = 56,
[1][1][0][0][RTW89_ACMA][7] = 48,
- [1][1][0][0][RTW89_CN][7] = 46,
+ [1][1][0][0][RTW89_CN][7] = 44,
[1][1][0][0][RTW89_UK][7] = 48,
[1][1][0][0][RTW89_MEXICO][7] = 40,
[1][1][0][0][RTW89_UKRAINE][7] = 48,
[1][1][0][0][RTW89_CHILE][7] = 40,
[1][1][0][0][RTW89_QATAR][7] = 48,
+ [1][1][0][0][RTW89_THAILAND][7] = 48,
[1][1][0][0][RTW89_FCC][8] = 14,
[1][1][0][0][RTW89_ETSI][8] = 48,
[1][1][0][0][RTW89_MKK][8] = 58,
[1][1][0][0][RTW89_IC][8] = 14,
[1][1][0][0][RTW89_KCC][8] = 58,
[1][1][0][0][RTW89_ACMA][8] = 48,
- [1][1][0][0][RTW89_CN][8] = 46,
+ [1][1][0][0][RTW89_CN][8] = 44,
[1][1][0][0][RTW89_UK][8] = 48,
[1][1][0][0][RTW89_MEXICO][8] = 14,
[1][1][0][0][RTW89_UKRAINE][8] = 48,
[1][1][0][0][RTW89_CHILE][8] = 14,
[1][1][0][0][RTW89_QATAR][8] = 48,
+ [1][1][0][0][RTW89_THAILAND][8] = 48,
[1][1][0][0][RTW89_FCC][9] = 14,
[1][1][0][0][RTW89_ETSI][9] = 48,
[1][1][0][0][RTW89_MKK][9] = 58,
[1][1][0][0][RTW89_IC][9] = 14,
[1][1][0][0][RTW89_KCC][9] = 58,
[1][1][0][0][RTW89_ACMA][9] = 48,
- [1][1][0][0][RTW89_CN][9] = 46,
+ [1][1][0][0][RTW89_CN][9] = 44,
[1][1][0][0][RTW89_UK][9] = 48,
[1][1][0][0][RTW89_MEXICO][9] = 14,
[1][1][0][0][RTW89_UKRAINE][9] = 48,
[1][1][0][0][RTW89_CHILE][9] = 14,
[1][1][0][0][RTW89_QATAR][9] = 48,
+ [1][1][0][0][RTW89_THAILAND][9] = 48,
[1][1][0][0][RTW89_FCC][10] = 12,
[1][1][0][0][RTW89_ETSI][10] = 48,
[1][1][0][0][RTW89_MKK][10] = 56,
[1][1][0][0][RTW89_IC][10] = 12,
[1][1][0][0][RTW89_KCC][10] = 58,
[1][1][0][0][RTW89_ACMA][10] = 48,
- [1][1][0][0][RTW89_CN][10] = 46,
+ [1][1][0][0][RTW89_CN][10] = 44,
[1][1][0][0][RTW89_UK][10] = 48,
[1][1][0][0][RTW89_MEXICO][10] = 12,
[1][1][0][0][RTW89_UKRAINE][10] = 48,
[1][1][0][0][RTW89_CHILE][10] = 12,
[1][1][0][0][RTW89_QATAR][10] = 48,
+ [1][1][0][0][RTW89_THAILAND][10] = 48,
[1][1][0][0][RTW89_FCC][11] = 127,
[1][1][0][0][RTW89_ETSI][11] = 127,
[1][1][0][0][RTW89_MKK][11] = 127,
@@ -32394,6 +32492,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_UKRAINE][11] = 127,
[1][1][0][0][RTW89_CHILE][11] = 127,
[1][1][0][0][RTW89_QATAR][11] = 127,
+ [1][1][0][0][RTW89_THAILAND][11] = 127,
[1][1][0][0][RTW89_FCC][12] = 127,
[1][1][0][0][RTW89_ETSI][12] = 127,
[1][1][0][0][RTW89_MKK][12] = 127,
@@ -32406,6 +32505,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_UKRAINE][12] = 127,
[1][1][0][0][RTW89_CHILE][12] = 127,
[1][1][0][0][RTW89_QATAR][12] = 127,
+ [1][1][0][0][RTW89_THAILAND][12] = 127,
[1][1][0][0][RTW89_FCC][13] = 127,
[1][1][0][0][RTW89_ETSI][13] = 127,
[1][1][0][0][RTW89_MKK][13] = 127,
@@ -32418,6 +32518,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][0][0][RTW89_UKRAINE][13] = 127,
[1][1][0][0][RTW89_CHILE][13] = 127,
[1][1][0][0][RTW89_QATAR][13] = 127,
+ [1][1][0][0][RTW89_THAILAND][13] = 127,
[0][0][1][0][RTW89_FCC][0] = 66,
[0][0][1][0][RTW89_ETSI][0] = 60,
[0][0][1][0][RTW89_MKK][0] = 76,
@@ -32430,6 +32531,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][0] = 60,
[0][0][1][0][RTW89_CHILE][0] = 66,
[0][0][1][0][RTW89_QATAR][0] = 60,
+ [0][0][1][0][RTW89_THAILAND][0] = 60,
[0][0][1][0][RTW89_FCC][1] = 68,
[0][0][1][0][RTW89_ETSI][1] = 60,
[0][0][1][0][RTW89_MKK][1] = 78,
@@ -32442,6 +32544,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][1] = 60,
[0][0][1][0][RTW89_CHILE][1] = 68,
[0][0][1][0][RTW89_QATAR][1] = 60,
+ [0][0][1][0][RTW89_THAILAND][1] = 60,
[0][0][1][0][RTW89_FCC][2] = 72,
[0][0][1][0][RTW89_ETSI][2] = 60,
[0][0][1][0][RTW89_MKK][2] = 78,
@@ -32454,6 +32557,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][2] = 60,
[0][0][1][0][RTW89_CHILE][2] = 62,
[0][0][1][0][RTW89_QATAR][2] = 60,
+ [0][0][1][0][RTW89_THAILAND][2] = 60,
[0][0][1][0][RTW89_FCC][3] = 76,
[0][0][1][0][RTW89_ETSI][3] = 60,
[0][0][1][0][RTW89_MKK][3] = 78,
@@ -32466,6 +32570,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][3] = 60,
[0][0][1][0][RTW89_CHILE][3] = 62,
[0][0][1][0][RTW89_QATAR][3] = 60,
+ [0][0][1][0][RTW89_THAILAND][3] = 60,
[0][0][1][0][RTW89_FCC][4] = 80,
[0][0][1][0][RTW89_ETSI][4] = 60,
[0][0][1][0][RTW89_MKK][4] = 78,
@@ -32478,6 +32583,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][4] = 60,
[0][0][1][0][RTW89_CHILE][4] = 62,
[0][0][1][0][RTW89_QATAR][4] = 60,
+ [0][0][1][0][RTW89_THAILAND][4] = 60,
[0][0][1][0][RTW89_FCC][5] = 80,
[0][0][1][0][RTW89_ETSI][5] = 60,
[0][0][1][0][RTW89_MKK][5] = 78,
@@ -32490,6 +32596,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][5] = 60,
[0][0][1][0][RTW89_CHILE][5] = 80,
[0][0][1][0][RTW89_QATAR][5] = 60,
+ [0][0][1][0][RTW89_THAILAND][5] = 60,
[0][0][1][0][RTW89_FCC][6] = 80,
[0][0][1][0][RTW89_ETSI][6] = 60,
[0][0][1][0][RTW89_MKK][6] = 76,
@@ -32502,6 +32609,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][6] = 60,
[0][0][1][0][RTW89_CHILE][6] = 70,
[0][0][1][0][RTW89_QATAR][6] = 60,
+ [0][0][1][0][RTW89_THAILAND][6] = 60,
[0][0][1][0][RTW89_FCC][7] = 80,
[0][0][1][0][RTW89_ETSI][7] = 60,
[0][0][1][0][RTW89_MKK][7] = 78,
@@ -32514,6 +32622,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][7] = 60,
[0][0][1][0][RTW89_CHILE][7] = 70,
[0][0][1][0][RTW89_QATAR][7] = 60,
+ [0][0][1][0][RTW89_THAILAND][7] = 60,
[0][0][1][0][RTW89_FCC][8] = 80,
[0][0][1][0][RTW89_ETSI][8] = 60,
[0][0][1][0][RTW89_MKK][8] = 78,
@@ -32526,6 +32635,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][8] = 60,
[0][0][1][0][RTW89_CHILE][8] = 70,
[0][0][1][0][RTW89_QATAR][8] = 60,
+ [0][0][1][0][RTW89_THAILAND][8] = 60,
[0][0][1][0][RTW89_FCC][9] = 76,
[0][0][1][0][RTW89_ETSI][9] = 60,
[0][0][1][0][RTW89_MKK][9] = 78,
@@ -32538,6 +32648,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][9] = 60,
[0][0][1][0][RTW89_CHILE][9] = 76,
[0][0][1][0][RTW89_QATAR][9] = 60,
+ [0][0][1][0][RTW89_THAILAND][9] = 60,
[0][0][1][0][RTW89_FCC][10] = 66,
[0][0][1][0][RTW89_ETSI][10] = 60,
[0][0][1][0][RTW89_MKK][10] = 78,
@@ -32550,6 +32661,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][10] = 60,
[0][0][1][0][RTW89_CHILE][10] = 66,
[0][0][1][0][RTW89_QATAR][10] = 60,
+ [0][0][1][0][RTW89_THAILAND][10] = 60,
[0][0][1][0][RTW89_FCC][11] = 62,
[0][0][1][0][RTW89_ETSI][11] = 60,
[0][0][1][0][RTW89_MKK][11] = 78,
@@ -32562,18 +32674,20 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][11] = 60,
[0][0][1][0][RTW89_CHILE][11] = 62,
[0][0][1][0][RTW89_QATAR][11] = 60,
+ [0][0][1][0][RTW89_THAILAND][11] = 60,
[0][0][1][0][RTW89_FCC][12] = 60,
[0][0][1][0][RTW89_ETSI][12] = 60,
[0][0][1][0][RTW89_MKK][12] = 78,
[0][0][1][0][RTW89_IC][12] = 60,
[0][0][1][0][RTW89_KCC][12] = 70,
[0][0][1][0][RTW89_ACMA][12] = 60,
- [0][0][1][0][RTW89_CN][12] = 58,
+ [0][0][1][0][RTW89_CN][12] = 40,
[0][0][1][0][RTW89_UK][12] = 60,
[0][0][1][0][RTW89_MEXICO][12] = 60,
[0][0][1][0][RTW89_UKRAINE][12] = 60,
[0][0][1][0][RTW89_CHILE][12] = 60,
[0][0][1][0][RTW89_QATAR][12] = 60,
+ [0][0][1][0][RTW89_THAILAND][12] = 60,
[0][0][1][0][RTW89_FCC][13] = 127,
[0][0][1][0][RTW89_ETSI][13] = 127,
[0][0][1][0][RTW89_MKK][13] = 127,
@@ -32586,6 +32700,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][13] = 127,
[0][0][1][0][RTW89_CHILE][13] = 127,
[0][0][1][0][RTW89_QATAR][13] = 127,
+ [0][0][1][0][RTW89_THAILAND][13] = 127,
[0][1][1][0][RTW89_FCC][0] = 66,
[0][1][1][0][RTW89_ETSI][0] = 48,
[0][1][1][0][RTW89_MKK][0] = 66,
@@ -32598,6 +32713,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][0] = 48,
[0][1][1][0][RTW89_CHILE][0] = 66,
[0][1][1][0][RTW89_QATAR][0] = 48,
+ [0][1][1][0][RTW89_THAILAND][0] = 48,
[0][1][1][0][RTW89_FCC][1] = 68,
[0][1][1][0][RTW89_ETSI][1] = 48,
[0][1][1][0][RTW89_MKK][1] = 66,
@@ -32610,6 +32726,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][1] = 48,
[0][1][1][0][RTW89_CHILE][1] = 68,
[0][1][1][0][RTW89_QATAR][1] = 48,
+ [0][1][1][0][RTW89_THAILAND][1] = 48,
[0][1][1][0][RTW89_FCC][2] = 72,
[0][1][1][0][RTW89_ETSI][2] = 48,
[0][1][1][0][RTW89_MKK][2] = 66,
@@ -32622,6 +32739,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][2] = 48,
[0][1][1][0][RTW89_CHILE][2] = 54,
[0][1][1][0][RTW89_QATAR][2] = 48,
+ [0][1][1][0][RTW89_THAILAND][2] = 48,
[0][1][1][0][RTW89_FCC][3] = 76,
[0][1][1][0][RTW89_ETSI][3] = 48,
[0][1][1][0][RTW89_MKK][3] = 66,
@@ -32634,6 +32752,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][3] = 48,
[0][1][1][0][RTW89_CHILE][3] = 54,
[0][1][1][0][RTW89_QATAR][3] = 48,
+ [0][1][1][0][RTW89_THAILAND][3] = 48,
[0][1][1][0][RTW89_FCC][4] = 80,
[0][1][1][0][RTW89_ETSI][4] = 48,
[0][1][1][0][RTW89_MKK][4] = 66,
@@ -32646,6 +32765,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][4] = 48,
[0][1][1][0][RTW89_CHILE][4] = 54,
[0][1][1][0][RTW89_QATAR][4] = 48,
+ [0][1][1][0][RTW89_THAILAND][4] = 48,
[0][1][1][0][RTW89_FCC][5] = 80,
[0][1][1][0][RTW89_ETSI][5] = 48,
[0][1][1][0][RTW89_MKK][5] = 66,
@@ -32658,6 +32778,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][5] = 48,
[0][1][1][0][RTW89_CHILE][5] = 80,
[0][1][1][0][RTW89_QATAR][5] = 48,
+ [0][1][1][0][RTW89_THAILAND][5] = 48,
[0][1][1][0][RTW89_FCC][6] = 80,
[0][1][1][0][RTW89_ETSI][6] = 48,
[0][1][1][0][RTW89_MKK][6] = 66,
@@ -32670,6 +32791,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][6] = 48,
[0][1][1][0][RTW89_CHILE][6] = 56,
[0][1][1][0][RTW89_QATAR][6] = 48,
+ [0][1][1][0][RTW89_THAILAND][6] = 48,
[0][1][1][0][RTW89_FCC][7] = 78,
[0][1][1][0][RTW89_ETSI][7] = 48,
[0][1][1][0][RTW89_MKK][7] = 66,
@@ -32682,6 +32804,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][7] = 48,
[0][1][1][0][RTW89_CHILE][7] = 56,
[0][1][1][0][RTW89_QATAR][7] = 48,
+ [0][1][1][0][RTW89_THAILAND][7] = 48,
[0][1][1][0][RTW89_FCC][8] = 74,
[0][1][1][0][RTW89_ETSI][8] = 48,
[0][1][1][0][RTW89_MKK][8] = 66,
@@ -32694,6 +32817,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][8] = 48,
[0][1][1][0][RTW89_CHILE][8] = 56,
[0][1][1][0][RTW89_QATAR][8] = 48,
+ [0][1][1][0][RTW89_THAILAND][8] = 48,
[0][1][1][0][RTW89_FCC][9] = 70,
[0][1][1][0][RTW89_ETSI][9] = 48,
[0][1][1][0][RTW89_MKK][9] = 66,
@@ -32706,6 +32830,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][9] = 48,
[0][1][1][0][RTW89_CHILE][9] = 70,
[0][1][1][0][RTW89_QATAR][9] = 48,
+ [0][1][1][0][RTW89_THAILAND][9] = 48,
[0][1][1][0][RTW89_FCC][10] = 62,
[0][1][1][0][RTW89_ETSI][10] = 48,
[0][1][1][0][RTW89_MKK][10] = 66,
@@ -32718,6 +32843,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][10] = 48,
[0][1][1][0][RTW89_CHILE][10] = 62,
[0][1][1][0][RTW89_QATAR][10] = 48,
+ [0][1][1][0][RTW89_THAILAND][10] = 48,
[0][1][1][0][RTW89_FCC][11] = 60,
[0][1][1][0][RTW89_ETSI][11] = 48,
[0][1][1][0][RTW89_MKK][11] = 66,
@@ -32730,18 +32856,20 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][11] = 48,
[0][1][1][0][RTW89_CHILE][11] = 60,
[0][1][1][0][RTW89_QATAR][11] = 48,
+ [0][1][1][0][RTW89_THAILAND][11] = 48,
[0][1][1][0][RTW89_FCC][12] = 36,
[0][1][1][0][RTW89_ETSI][12] = 48,
[0][1][1][0][RTW89_MKK][12] = 66,
[0][1][1][0][RTW89_IC][12] = 36,
[0][1][1][0][RTW89_KCC][12] = 64,
[0][1][1][0][RTW89_ACMA][12] = 48,
- [0][1][1][0][RTW89_CN][12] = 46,
+ [0][1][1][0][RTW89_CN][12] = 40,
[0][1][1][0][RTW89_UK][12] = 48,
[0][1][1][0][RTW89_MEXICO][12] = 36,
[0][1][1][0][RTW89_UKRAINE][12] = 48,
[0][1][1][0][RTW89_CHILE][12] = 36,
[0][1][1][0][RTW89_QATAR][12] = 48,
+ [0][1][1][0][RTW89_THAILAND][12] = 48,
[0][1][1][0][RTW89_FCC][13] = 127,
[0][1][1][0][RTW89_ETSI][13] = 127,
[0][1][1][0][RTW89_MKK][13] = 127,
@@ -32754,6 +32882,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][13] = 127,
[0][1][1][0][RTW89_CHILE][13] = 127,
[0][1][1][0][RTW89_QATAR][13] = 127,
+ [0][1][1][0][RTW89_THAILAND][13] = 127,
[0][0][2][0][RTW89_FCC][0] = 66,
[0][0][2][0][RTW89_ETSI][0] = 60,
[0][0][2][0][RTW89_MKK][0] = 78,
@@ -32766,6 +32895,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][0] = 60,
[0][0][2][0][RTW89_CHILE][0] = 66,
[0][0][2][0][RTW89_QATAR][0] = 60,
+ [0][0][2][0][RTW89_THAILAND][0] = 60,
[0][0][2][0][RTW89_FCC][1] = 70,
[0][0][2][0][RTW89_ETSI][1] = 60,
[0][0][2][0][RTW89_MKK][1] = 78,
@@ -32778,6 +32908,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][1] = 60,
[0][0][2][0][RTW89_CHILE][1] = 70,
[0][0][2][0][RTW89_QATAR][1] = 60,
+ [0][0][2][0][RTW89_THAILAND][1] = 60,
[0][0][2][0][RTW89_FCC][2] = 74,
[0][0][2][0][RTW89_ETSI][2] = 60,
[0][0][2][0][RTW89_MKK][2] = 78,
@@ -32790,6 +32921,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][2] = 60,
[0][0][2][0][RTW89_CHILE][2] = 64,
[0][0][2][0][RTW89_QATAR][2] = 60,
+ [0][0][2][0][RTW89_THAILAND][2] = 60,
[0][0][2][0][RTW89_FCC][3] = 78,
[0][0][2][0][RTW89_ETSI][3] = 60,
[0][0][2][0][RTW89_MKK][3] = 78,
@@ -32802,6 +32934,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][3] = 60,
[0][0][2][0][RTW89_CHILE][3] = 64,
[0][0][2][0][RTW89_QATAR][3] = 60,
+ [0][0][2][0][RTW89_THAILAND][3] = 60,
[0][0][2][0][RTW89_FCC][4] = 80,
[0][0][2][0][RTW89_ETSI][4] = 60,
[0][0][2][0][RTW89_MKK][4] = 78,
@@ -32814,6 +32947,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][4] = 60,
[0][0][2][0][RTW89_CHILE][4] = 64,
[0][0][2][0][RTW89_QATAR][4] = 60,
+ [0][0][2][0][RTW89_THAILAND][4] = 60,
[0][0][2][0][RTW89_FCC][5] = 80,
[0][0][2][0][RTW89_ETSI][5] = 60,
[0][0][2][0][RTW89_MKK][5] = 78,
@@ -32826,6 +32960,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][5] = 60,
[0][0][2][0][RTW89_CHILE][5] = 80,
[0][0][2][0][RTW89_QATAR][5] = 60,
+ [0][0][2][0][RTW89_THAILAND][5] = 60,
[0][0][2][0][RTW89_FCC][6] = 80,
[0][0][2][0][RTW89_ETSI][6] = 60,
[0][0][2][0][RTW89_MKK][6] = 78,
@@ -32838,6 +32973,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][6] = 60,
[0][0][2][0][RTW89_CHILE][6] = 68,
[0][0][2][0][RTW89_QATAR][6] = 60,
+ [0][0][2][0][RTW89_THAILAND][6] = 60,
[0][0][2][0][RTW89_FCC][7] = 80,
[0][0][2][0][RTW89_ETSI][7] = 60,
[0][0][2][0][RTW89_MKK][7] = 78,
@@ -32850,6 +32986,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][7] = 60,
[0][0][2][0][RTW89_CHILE][7] = 68,
[0][0][2][0][RTW89_QATAR][7] = 60,
+ [0][0][2][0][RTW89_THAILAND][7] = 60,
[0][0][2][0][RTW89_FCC][8] = 78,
[0][0][2][0][RTW89_ETSI][8] = 60,
[0][0][2][0][RTW89_MKK][8] = 78,
@@ -32862,6 +32999,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][8] = 60,
[0][0][2][0][RTW89_CHILE][8] = 68,
[0][0][2][0][RTW89_QATAR][8] = 60,
+ [0][0][2][0][RTW89_THAILAND][8] = 60,
[0][0][2][0][RTW89_FCC][9] = 74,
[0][0][2][0][RTW89_ETSI][9] = 60,
[0][0][2][0][RTW89_MKK][9] = 78,
@@ -32874,6 +33012,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][9] = 60,
[0][0][2][0][RTW89_CHILE][9] = 74,
[0][0][2][0][RTW89_QATAR][9] = 60,
+ [0][0][2][0][RTW89_THAILAND][9] = 60,
[0][0][2][0][RTW89_FCC][10] = 62,
[0][0][2][0][RTW89_ETSI][10] = 60,
[0][0][2][0][RTW89_MKK][10] = 78,
@@ -32886,6 +33025,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][10] = 60,
[0][0][2][0][RTW89_CHILE][10] = 62,
[0][0][2][0][RTW89_QATAR][10] = 60,
+ [0][0][2][0][RTW89_THAILAND][10] = 60,
[0][0][2][0][RTW89_FCC][11] = 60,
[0][0][2][0][RTW89_ETSI][11] = 60,
[0][0][2][0][RTW89_MKK][11] = 78,
@@ -32898,18 +33038,20 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][11] = 60,
[0][0][2][0][RTW89_CHILE][11] = 60,
[0][0][2][0][RTW89_QATAR][11] = 60,
+ [0][0][2][0][RTW89_THAILAND][11] = 60,
[0][0][2][0][RTW89_FCC][12] = 38,
[0][0][2][0][RTW89_ETSI][12] = 60,
[0][0][2][0][RTW89_MKK][12] = 78,
[0][0][2][0][RTW89_IC][12] = 38,
[0][0][2][0][RTW89_KCC][12] = 66,
[0][0][2][0][RTW89_ACMA][12] = 60,
- [0][0][2][0][RTW89_CN][12] = 58,
+ [0][0][2][0][RTW89_CN][12] = 38,
[0][0][2][0][RTW89_UK][12] = 60,
[0][0][2][0][RTW89_MEXICO][12] = 38,
[0][0][2][0][RTW89_UKRAINE][12] = 60,
[0][0][2][0][RTW89_CHILE][12] = 38,
[0][0][2][0][RTW89_QATAR][12] = 60,
+ [0][0][2][0][RTW89_THAILAND][12] = 60,
[0][0][2][0][RTW89_FCC][13] = 127,
[0][0][2][0][RTW89_ETSI][13] = 127,
[0][0][2][0][RTW89_MKK][13] = 127,
@@ -32922,6 +33064,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][13] = 127,
[0][0][2][0][RTW89_CHILE][13] = 127,
[0][0][2][0][RTW89_QATAR][13] = 127,
+ [0][0][2][0][RTW89_THAILAND][13] = 127,
[0][1][2][0][RTW89_FCC][0] = 64,
[0][1][2][0][RTW89_ETSI][0] = 48,
[0][1][2][0][RTW89_MKK][0] = 68,
@@ -32934,6 +33077,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][0] = 48,
[0][1][2][0][RTW89_CHILE][0] = 64,
[0][1][2][0][RTW89_QATAR][0] = 48,
+ [0][1][2][0][RTW89_THAILAND][0] = 48,
[0][1][2][0][RTW89_FCC][1] = 70,
[0][1][2][0][RTW89_ETSI][1] = 48,
[0][1][2][0][RTW89_MKK][1] = 68,
@@ -32946,6 +33090,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][1] = 48,
[0][1][2][0][RTW89_CHILE][1] = 70,
[0][1][2][0][RTW89_QATAR][1] = 48,
+ [0][1][2][0][RTW89_THAILAND][1] = 48,
[0][1][2][0][RTW89_FCC][2] = 74,
[0][1][2][0][RTW89_ETSI][2] = 48,
[0][1][2][0][RTW89_MKK][2] = 68,
@@ -32958,6 +33103,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][2] = 48,
[0][1][2][0][RTW89_CHILE][2] = 56,
[0][1][2][0][RTW89_QATAR][2] = 48,
+ [0][1][2][0][RTW89_THAILAND][2] = 48,
[0][1][2][0][RTW89_FCC][3] = 78,
[0][1][2][0][RTW89_ETSI][3] = 48,
[0][1][2][0][RTW89_MKK][3] = 68,
@@ -32970,6 +33116,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][3] = 48,
[0][1][2][0][RTW89_CHILE][3] = 56,
[0][1][2][0][RTW89_QATAR][3] = 48,
+ [0][1][2][0][RTW89_THAILAND][3] = 48,
[0][1][2][0][RTW89_FCC][4] = 80,
[0][1][2][0][RTW89_ETSI][4] = 48,
[0][1][2][0][RTW89_MKK][4] = 68,
@@ -32982,6 +33129,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][4] = 48,
[0][1][2][0][RTW89_CHILE][4] = 56,
[0][1][2][0][RTW89_QATAR][4] = 48,
+ [0][1][2][0][RTW89_THAILAND][4] = 48,
[0][1][2][0][RTW89_FCC][5] = 80,
[0][1][2][0][RTW89_ETSI][5] = 48,
[0][1][2][0][RTW89_MKK][5] = 68,
@@ -32994,6 +33142,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][5] = 48,
[0][1][2][0][RTW89_CHILE][5] = 78,
[0][1][2][0][RTW89_QATAR][5] = 48,
+ [0][1][2][0][RTW89_THAILAND][5] = 48,
[0][1][2][0][RTW89_FCC][6] = 80,
[0][1][2][0][RTW89_ETSI][6] = 48,
[0][1][2][0][RTW89_MKK][6] = 68,
@@ -33006,6 +33155,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][6] = 48,
[0][1][2][0][RTW89_CHILE][6] = 54,
[0][1][2][0][RTW89_QATAR][6] = 48,
+ [0][1][2][0][RTW89_THAILAND][6] = 48,
[0][1][2][0][RTW89_FCC][7] = 74,
[0][1][2][0][RTW89_ETSI][7] = 48,
[0][1][2][0][RTW89_MKK][7] = 68,
@@ -33018,6 +33168,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][7] = 48,
[0][1][2][0][RTW89_CHILE][7] = 54,
[0][1][2][0][RTW89_QATAR][7] = 48,
+ [0][1][2][0][RTW89_THAILAND][7] = 48,
[0][1][2][0][RTW89_FCC][8] = 70,
[0][1][2][0][RTW89_ETSI][8] = 48,
[0][1][2][0][RTW89_MKK][8] = 68,
@@ -33030,6 +33181,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][8] = 48,
[0][1][2][0][RTW89_CHILE][8] = 54,
[0][1][2][0][RTW89_QATAR][8] = 48,
+ [0][1][2][0][RTW89_THAILAND][8] = 48,
[0][1][2][0][RTW89_FCC][9] = 66,
[0][1][2][0][RTW89_ETSI][9] = 48,
[0][1][2][0][RTW89_MKK][9] = 68,
@@ -33042,6 +33194,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][9] = 48,
[0][1][2][0][RTW89_CHILE][9] = 66,
[0][1][2][0][RTW89_QATAR][9] = 48,
+ [0][1][2][0][RTW89_THAILAND][9] = 48,
[0][1][2][0][RTW89_FCC][10] = 58,
[0][1][2][0][RTW89_ETSI][10] = 48,
[0][1][2][0][RTW89_MKK][10] = 68,
@@ -33054,6 +33207,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][10] = 48,
[0][1][2][0][RTW89_CHILE][10] = 58,
[0][1][2][0][RTW89_QATAR][10] = 48,
+ [0][1][2][0][RTW89_THAILAND][10] = 48,
[0][1][2][0][RTW89_FCC][11] = 58,
[0][1][2][0][RTW89_ETSI][11] = 48,
[0][1][2][0][RTW89_MKK][11] = 68,
@@ -33066,18 +33220,20 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][11] = 48,
[0][1][2][0][RTW89_CHILE][11] = 58,
[0][1][2][0][RTW89_QATAR][11] = 48,
+ [0][1][2][0][RTW89_THAILAND][11] = 48,
[0][1][2][0][RTW89_FCC][12] = 16,
[0][1][2][0][RTW89_ETSI][12] = 48,
[0][1][2][0][RTW89_MKK][12] = 68,
[0][1][2][0][RTW89_IC][12] = 16,
[0][1][2][0][RTW89_KCC][12] = 64,
[0][1][2][0][RTW89_ACMA][12] = 48,
- [0][1][2][0][RTW89_CN][12] = 46,
+ [0][1][2][0][RTW89_CN][12] = 38,
[0][1][2][0][RTW89_UK][12] = 48,
[0][1][2][0][RTW89_MEXICO][12] = 16,
[0][1][2][0][RTW89_UKRAINE][12] = 48,
[0][1][2][0][RTW89_CHILE][12] = 16,
[0][1][2][0][RTW89_QATAR][12] = 48,
+ [0][1][2][0][RTW89_THAILAND][12] = 48,
[0][1][2][0][RTW89_FCC][13] = 127,
[0][1][2][0][RTW89_ETSI][13] = 127,
[0][1][2][0][RTW89_MKK][13] = 127,
@@ -33090,18 +33246,20 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][13] = 127,
[0][1][2][0][RTW89_CHILE][13] = 127,
[0][1][2][0][RTW89_QATAR][13] = 127,
+ [0][1][2][0][RTW89_THAILAND][13] = 127,
[0][1][2][1][RTW89_FCC][0] = 64,
[0][1][2][1][RTW89_ETSI][0] = 36,
[0][1][2][1][RTW89_MKK][0] = 68,
[0][1][2][1][RTW89_IC][0] = 64,
[0][1][2][1][RTW89_KCC][0] = 66,
[0][1][2][1][RTW89_ACMA][0] = 36,
- [0][1][2][1][RTW89_CN][0] = 36,
+ [0][1][2][1][RTW89_CN][0] = 34,
[0][1][2][1][RTW89_UK][0] = 36,
[0][1][2][1][RTW89_MEXICO][0] = 64,
[0][1][2][1][RTW89_UKRAINE][0] = 36,
[0][1][2][1][RTW89_CHILE][0] = 64,
[0][1][2][1][RTW89_QATAR][0] = 36,
+ [0][1][2][1][RTW89_THAILAND][0] = 36,
[0][1][2][1][RTW89_FCC][1] = 70,
[0][1][2][1][RTW89_ETSI][1] = 36,
[0][1][2][1][RTW89_MKK][1] = 68,
@@ -33114,6 +33272,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][1] = 36,
[0][1][2][1][RTW89_CHILE][1] = 70,
[0][1][2][1][RTW89_QATAR][1] = 36,
+ [0][1][2][1][RTW89_THAILAND][1] = 36,
[0][1][2][1][RTW89_FCC][2] = 74,
[0][1][2][1][RTW89_ETSI][2] = 36,
[0][1][2][1][RTW89_MKK][2] = 68,
@@ -33126,6 +33285,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][2] = 36,
[0][1][2][1][RTW89_CHILE][2] = 44,
[0][1][2][1][RTW89_QATAR][2] = 36,
+ [0][1][2][1][RTW89_THAILAND][2] = 36,
[0][1][2][1][RTW89_FCC][3] = 78,
[0][1][2][1][RTW89_ETSI][3] = 36,
[0][1][2][1][RTW89_MKK][3] = 68,
@@ -33138,6 +33298,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][3] = 36,
[0][1][2][1][RTW89_CHILE][3] = 44,
[0][1][2][1][RTW89_QATAR][3] = 36,
+ [0][1][2][1][RTW89_THAILAND][3] = 36,
[0][1][2][1][RTW89_FCC][4] = 80,
[0][1][2][1][RTW89_ETSI][4] = 36,
[0][1][2][1][RTW89_MKK][4] = 68,
@@ -33150,6 +33311,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][4] = 36,
[0][1][2][1][RTW89_CHILE][4] = 44,
[0][1][2][1][RTW89_QATAR][4] = 36,
+ [0][1][2][1][RTW89_THAILAND][4] = 36,
[0][1][2][1][RTW89_FCC][5] = 80,
[0][1][2][1][RTW89_ETSI][5] = 36,
[0][1][2][1][RTW89_MKK][5] = 68,
@@ -33162,6 +33324,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][5] = 36,
[0][1][2][1][RTW89_CHILE][5] = 74,
[0][1][2][1][RTW89_QATAR][5] = 36,
+ [0][1][2][1][RTW89_THAILAND][5] = 36,
[0][1][2][1][RTW89_FCC][6] = 80,
[0][1][2][1][RTW89_ETSI][6] = 36,
[0][1][2][1][RTW89_MKK][6] = 68,
@@ -33174,6 +33337,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][6] = 36,
[0][1][2][1][RTW89_CHILE][6] = 42,
[0][1][2][1][RTW89_QATAR][6] = 36,
+ [0][1][2][1][RTW89_THAILAND][6] = 36,
[0][1][2][1][RTW89_FCC][7] = 74,
[0][1][2][1][RTW89_ETSI][7] = 36,
[0][1][2][1][RTW89_MKK][7] = 68,
@@ -33186,6 +33350,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][7] = 36,
[0][1][2][1][RTW89_CHILE][7] = 42,
[0][1][2][1][RTW89_QATAR][7] = 36,
+ [0][1][2][1][RTW89_THAILAND][7] = 36,
[0][1][2][1][RTW89_FCC][8] = 70,
[0][1][2][1][RTW89_ETSI][8] = 36,
[0][1][2][1][RTW89_MKK][8] = 68,
@@ -33198,6 +33363,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][8] = 36,
[0][1][2][1][RTW89_CHILE][8] = 42,
[0][1][2][1][RTW89_QATAR][8] = 36,
+ [0][1][2][1][RTW89_THAILAND][8] = 36,
[0][1][2][1][RTW89_FCC][9] = 66,
[0][1][2][1][RTW89_ETSI][9] = 36,
[0][1][2][1][RTW89_MKK][9] = 68,
@@ -33210,6 +33376,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][9] = 36,
[0][1][2][1][RTW89_CHILE][9] = 66,
[0][1][2][1][RTW89_QATAR][9] = 36,
+ [0][1][2][1][RTW89_THAILAND][9] = 36,
[0][1][2][1][RTW89_FCC][10] = 58,
[0][1][2][1][RTW89_ETSI][10] = 36,
[0][1][2][1][RTW89_MKK][10] = 68,
@@ -33222,6 +33389,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][10] = 36,
[0][1][2][1][RTW89_CHILE][10] = 58,
[0][1][2][1][RTW89_QATAR][10] = 36,
+ [0][1][2][1][RTW89_THAILAND][10] = 36,
[0][1][2][1][RTW89_FCC][11] = 58,
[0][1][2][1][RTW89_ETSI][11] = 36,
[0][1][2][1][RTW89_MKK][11] = 68,
@@ -33234,18 +33402,20 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][11] = 36,
[0][1][2][1][RTW89_CHILE][11] = 58,
[0][1][2][1][RTW89_QATAR][11] = 36,
+ [0][1][2][1][RTW89_THAILAND][11] = 36,
[0][1][2][1][RTW89_FCC][12] = 16,
[0][1][2][1][RTW89_ETSI][12] = 36,
[0][1][2][1][RTW89_MKK][12] = 68,
[0][1][2][1][RTW89_IC][12] = 16,
[0][1][2][1][RTW89_KCC][12] = 64,
[0][1][2][1][RTW89_ACMA][12] = 36,
- [0][1][2][1][RTW89_CN][12] = 34,
+ [0][1][2][1][RTW89_CN][12] = 26,
[0][1][2][1][RTW89_UK][12] = 36,
[0][1][2][1][RTW89_MEXICO][12] = 16,
[0][1][2][1][RTW89_UKRAINE][12] = 36,
[0][1][2][1][RTW89_CHILE][12] = 16,
[0][1][2][1][RTW89_QATAR][12] = 36,
+ [0][1][2][1][RTW89_THAILAND][12] = 36,
[0][1][2][1][RTW89_FCC][13] = 127,
[0][1][2][1][RTW89_ETSI][13] = 127,
[0][1][2][1][RTW89_MKK][13] = 127,
@@ -33258,6 +33428,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][13] = 127,
[0][1][2][1][RTW89_CHILE][13] = 127,
[0][1][2][1][RTW89_QATAR][13] = 127,
+ [0][1][2][1][RTW89_THAILAND][13] = 127,
[1][0][2][0][RTW89_FCC][0] = 127,
[1][0][2][0][RTW89_ETSI][0] = 127,
[1][0][2][0][RTW89_MKK][0] = 127,
@@ -33270,6 +33441,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][0] = 127,
[1][0][2][0][RTW89_CHILE][0] = 127,
[1][0][2][0][RTW89_QATAR][0] = 127,
+ [1][0][2][0][RTW89_THAILAND][0] = 127,
[1][0][2][0][RTW89_FCC][1] = 127,
[1][0][2][0][RTW89_ETSI][1] = 127,
[1][0][2][0][RTW89_MKK][1] = 127,
@@ -33282,6 +33454,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][1] = 127,
[1][0][2][0][RTW89_CHILE][1] = 127,
[1][0][2][0][RTW89_QATAR][1] = 127,
+ [1][0][2][0][RTW89_THAILAND][1] = 127,
[1][0][2][0][RTW89_FCC][2] = 64,
[1][0][2][0][RTW89_ETSI][2] = 60,
[1][0][2][0][RTW89_MKK][2] = 74,
@@ -33294,6 +33467,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][2] = 60,
[1][0][2][0][RTW89_CHILE][2] = 64,
[1][0][2][0][RTW89_QATAR][2] = 60,
+ [1][0][2][0][RTW89_THAILAND][2] = 60,
[1][0][2][0][RTW89_FCC][3] = 64,
[1][0][2][0][RTW89_ETSI][3] = 60,
[1][0][2][0][RTW89_MKK][3] = 74,
@@ -33306,6 +33480,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][3] = 60,
[1][0][2][0][RTW89_CHILE][3] = 64,
[1][0][2][0][RTW89_QATAR][3] = 60,
+ [1][0][2][0][RTW89_THAILAND][3] = 60,
[1][0][2][0][RTW89_FCC][4] = 68,
[1][0][2][0][RTW89_ETSI][4] = 60,
[1][0][2][0][RTW89_MKK][4] = 74,
@@ -33318,6 +33493,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][4] = 60,
[1][0][2][0][RTW89_CHILE][4] = 68,
[1][0][2][0][RTW89_QATAR][4] = 60,
+ [1][0][2][0][RTW89_THAILAND][4] = 60,
[1][0][2][0][RTW89_FCC][5] = 68,
[1][0][2][0][RTW89_ETSI][5] = 60,
[1][0][2][0][RTW89_MKK][5] = 74,
@@ -33330,6 +33506,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][5] = 60,
[1][0][2][0][RTW89_CHILE][5] = 68,
[1][0][2][0][RTW89_QATAR][5] = 60,
+ [1][0][2][0][RTW89_THAILAND][5] = 60,
[1][0][2][0][RTW89_FCC][6] = 66,
[1][0][2][0][RTW89_ETSI][6] = 60,
[1][0][2][0][RTW89_MKK][6] = 74,
@@ -33342,6 +33519,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][6] = 60,
[1][0][2][0][RTW89_CHILE][6] = 66,
[1][0][2][0][RTW89_QATAR][6] = 60,
+ [1][0][2][0][RTW89_THAILAND][6] = 60,
[1][0][2][0][RTW89_FCC][7] = 62,
[1][0][2][0][RTW89_ETSI][7] = 60,
[1][0][2][0][RTW89_MKK][7] = 74,
@@ -33354,6 +33532,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][7] = 60,
[1][0][2][0][RTW89_CHILE][7] = 62,
[1][0][2][0][RTW89_QATAR][7] = 60,
+ [1][0][2][0][RTW89_THAILAND][7] = 60,
[1][0][2][0][RTW89_FCC][8] = 62,
[1][0][2][0][RTW89_ETSI][8] = 60,
[1][0][2][0][RTW89_MKK][8] = 74,
@@ -33366,6 +33545,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][8] = 60,
[1][0][2][0][RTW89_CHILE][8] = 62,
[1][0][2][0][RTW89_QATAR][8] = 60,
+ [1][0][2][0][RTW89_THAILAND][8] = 60,
[1][0][2][0][RTW89_FCC][9] = 60,
[1][0][2][0][RTW89_ETSI][9] = 60,
[1][0][2][0][RTW89_MKK][9] = 74,
@@ -33378,6 +33558,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][9] = 60,
[1][0][2][0][RTW89_CHILE][9] = 60,
[1][0][2][0][RTW89_QATAR][9] = 60,
+ [1][0][2][0][RTW89_THAILAND][9] = 60,
[1][0][2][0][RTW89_FCC][10] = 56,
[1][0][2][0][RTW89_ETSI][10] = 60,
[1][0][2][0][RTW89_MKK][10] = 74,
@@ -33390,6 +33571,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][10] = 60,
[1][0][2][0][RTW89_CHILE][10] = 56,
[1][0][2][0][RTW89_QATAR][10] = 60,
+ [1][0][2][0][RTW89_THAILAND][10] = 60,
[1][0][2][0][RTW89_FCC][11] = 127,
[1][0][2][0][RTW89_ETSI][11] = 127,
[1][0][2][0][RTW89_MKK][11] = 127,
@@ -33402,6 +33584,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][11] = 127,
[1][0][2][0][RTW89_CHILE][11] = 127,
[1][0][2][0][RTW89_QATAR][11] = 127,
+ [1][0][2][0][RTW89_THAILAND][11] = 127,
[1][0][2][0][RTW89_FCC][12] = 127,
[1][0][2][0][RTW89_ETSI][12] = 127,
[1][0][2][0][RTW89_MKK][12] = 127,
@@ -33414,6 +33597,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][12] = 127,
[1][0][2][0][RTW89_CHILE][12] = 127,
[1][0][2][0][RTW89_QATAR][12] = 127,
+ [1][0][2][0][RTW89_THAILAND][12] = 127,
[1][0][2][0][RTW89_FCC][13] = 127,
[1][0][2][0][RTW89_ETSI][13] = 127,
[1][0][2][0][RTW89_MKK][13] = 127,
@@ -33426,6 +33610,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][13] = 127,
[1][0][2][0][RTW89_CHILE][13] = 127,
[1][0][2][0][RTW89_QATAR][13] = 127,
+ [1][0][2][0][RTW89_THAILAND][13] = 127,
[1][1][2][0][RTW89_FCC][0] = 127,
[1][1][2][0][RTW89_ETSI][0] = 127,
[1][1][2][0][RTW89_MKK][0] = 127,
@@ -33438,6 +33623,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][0] = 127,
[1][1][2][0][RTW89_CHILE][0] = 127,
[1][1][2][0][RTW89_QATAR][0] = 127,
+ [1][1][2][0][RTW89_THAILAND][0] = 127,
[1][1][2][0][RTW89_FCC][1] = 127,
[1][1][2][0][RTW89_ETSI][1] = 127,
[1][1][2][0][RTW89_MKK][1] = 127,
@@ -33450,6 +33636,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][1] = 127,
[1][1][2][0][RTW89_CHILE][1] = 127,
[1][1][2][0][RTW89_QATAR][1] = 127,
+ [1][1][2][0][RTW89_THAILAND][1] = 127,
[1][1][2][0][RTW89_FCC][2] = 60,
[1][1][2][0][RTW89_ETSI][2] = 48,
[1][1][2][0][RTW89_MKK][2] = 68,
@@ -33462,6 +33649,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][2] = 48,
[1][1][2][0][RTW89_CHILE][2] = 60,
[1][1][2][0][RTW89_QATAR][2] = 48,
+ [1][1][2][0][RTW89_THAILAND][2] = 48,
[1][1][2][0][RTW89_FCC][3] = 60,
[1][1][2][0][RTW89_ETSI][3] = 48,
[1][1][2][0][RTW89_MKK][3] = 68,
@@ -33474,6 +33662,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][3] = 48,
[1][1][2][0][RTW89_CHILE][3] = 56,
[1][1][2][0][RTW89_QATAR][3] = 48,
+ [1][1][2][0][RTW89_THAILAND][3] = 48,
[1][1][2][0][RTW89_FCC][4] = 60,
[1][1][2][0][RTW89_ETSI][4] = 48,
[1][1][2][0][RTW89_MKK][4] = 68,
@@ -33486,6 +33675,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][4] = 48,
[1][1][2][0][RTW89_CHILE][4] = 56,
[1][1][2][0][RTW89_QATAR][4] = 48,
+ [1][1][2][0][RTW89_THAILAND][4] = 48,
[1][1][2][0][RTW89_FCC][5] = 60,
[1][1][2][0][RTW89_ETSI][5] = 48,
[1][1][2][0][RTW89_MKK][5] = 68,
@@ -33498,6 +33688,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][5] = 48,
[1][1][2][0][RTW89_CHILE][5] = 60,
[1][1][2][0][RTW89_QATAR][5] = 48,
+ [1][1][2][0][RTW89_THAILAND][5] = 48,
[1][1][2][0][RTW89_FCC][6] = 58,
[1][1][2][0][RTW89_ETSI][6] = 48,
[1][1][2][0][RTW89_MKK][6] = 68,
@@ -33510,6 +33701,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][6] = 48,
[1][1][2][0][RTW89_CHILE][6] = 52,
[1][1][2][0][RTW89_QATAR][6] = 48,
+ [1][1][2][0][RTW89_THAILAND][6] = 48,
[1][1][2][0][RTW89_FCC][7] = 54,
[1][1][2][0][RTW89_ETSI][7] = 48,
[1][1][2][0][RTW89_MKK][7] = 68,
@@ -33522,6 +33714,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][7] = 48,
[1][1][2][0][RTW89_CHILE][7] = 52,
[1][1][2][0][RTW89_QATAR][7] = 48,
+ [1][1][2][0][RTW89_THAILAND][7] = 48,
[1][1][2][0][RTW89_FCC][8] = 54,
[1][1][2][0][RTW89_ETSI][8] = 48,
[1][1][2][0][RTW89_MKK][8] = 68,
@@ -33534,6 +33727,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][8] = 48,
[1][1][2][0][RTW89_CHILE][8] = 54,
[1][1][2][0][RTW89_QATAR][8] = 48,
+ [1][1][2][0][RTW89_THAILAND][8] = 48,
[1][1][2][0][RTW89_FCC][9] = 54,
[1][1][2][0][RTW89_ETSI][9] = 48,
[1][1][2][0][RTW89_MKK][9] = 68,
@@ -33546,6 +33740,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][9] = 48,
[1][1][2][0][RTW89_CHILE][9] = 54,
[1][1][2][0][RTW89_QATAR][9] = 48,
+ [1][1][2][0][RTW89_THAILAND][9] = 48,
[1][1][2][0][RTW89_FCC][10] = 46,
[1][1][2][0][RTW89_ETSI][10] = 48,
[1][1][2][0][RTW89_MKK][10] = 68,
@@ -33558,6 +33753,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][10] = 48,
[1][1][2][0][RTW89_CHILE][10] = 46,
[1][1][2][0][RTW89_QATAR][10] = 48,
+ [1][1][2][0][RTW89_THAILAND][10] = 48,
[1][1][2][0][RTW89_FCC][11] = 127,
[1][1][2][0][RTW89_ETSI][11] = 127,
[1][1][2][0][RTW89_MKK][11] = 127,
@@ -33570,6 +33766,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][11] = 127,
[1][1][2][0][RTW89_CHILE][11] = 127,
[1][1][2][0][RTW89_QATAR][11] = 127,
+ [1][1][2][0][RTW89_THAILAND][11] = 127,
[1][1][2][0][RTW89_FCC][12] = 127,
[1][1][2][0][RTW89_ETSI][12] = 127,
[1][1][2][0][RTW89_MKK][12] = 127,
@@ -33582,6 +33779,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][12] = 127,
[1][1][2][0][RTW89_CHILE][12] = 127,
[1][1][2][0][RTW89_QATAR][12] = 127,
+ [1][1][2][0][RTW89_THAILAND][12] = 127,
[1][1][2][0][RTW89_FCC][13] = 127,
[1][1][2][0][RTW89_ETSI][13] = 127,
[1][1][2][0][RTW89_MKK][13] = 127,
@@ -33594,6 +33792,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][13] = 127,
[1][1][2][0][RTW89_CHILE][13] = 127,
[1][1][2][0][RTW89_QATAR][13] = 127,
+ [1][1][2][0][RTW89_THAILAND][13] = 127,
[1][1][2][1][RTW89_FCC][0] = 127,
[1][1][2][1][RTW89_ETSI][0] = 127,
[1][1][2][1][RTW89_MKK][0] = 127,
@@ -33606,6 +33805,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][0] = 127,
[1][1][2][1][RTW89_CHILE][0] = 127,
[1][1][2][1][RTW89_QATAR][0] = 127,
+ [1][1][2][1][RTW89_THAILAND][0] = 127,
[1][1][2][1][RTW89_FCC][1] = 127,
[1][1][2][1][RTW89_ETSI][1] = 127,
[1][1][2][1][RTW89_MKK][1] = 127,
@@ -33618,6 +33818,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][1] = 127,
[1][1][2][1][RTW89_CHILE][1] = 127,
[1][1][2][1][RTW89_QATAR][1] = 127,
+ [1][1][2][1][RTW89_THAILAND][1] = 127,
[1][1][2][1][RTW89_FCC][2] = 60,
[1][1][2][1][RTW89_ETSI][2] = 36,
[1][1][2][1][RTW89_MKK][2] = 68,
@@ -33630,6 +33831,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][2] = 36,
[1][1][2][1][RTW89_CHILE][2] = 60,
[1][1][2][1][RTW89_QATAR][2] = 36,
+ [1][1][2][1][RTW89_THAILAND][2] = 36,
[1][1][2][1][RTW89_FCC][3] = 60,
[1][1][2][1][RTW89_ETSI][3] = 36,
[1][1][2][1][RTW89_MKK][3] = 68,
@@ -33642,6 +33844,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][3] = 36,
[1][1][2][1][RTW89_CHILE][3] = 44,
[1][1][2][1][RTW89_QATAR][3] = 36,
+ [1][1][2][1][RTW89_THAILAND][3] = 36,
[1][1][2][1][RTW89_FCC][4] = 60,
[1][1][2][1][RTW89_ETSI][4] = 36,
[1][1][2][1][RTW89_MKK][4] = 68,
@@ -33654,6 +33857,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][4] = 36,
[1][1][2][1][RTW89_CHILE][4] = 44,
[1][1][2][1][RTW89_QATAR][4] = 36,
+ [1][1][2][1][RTW89_THAILAND][4] = 36,
[1][1][2][1][RTW89_FCC][5] = 60,
[1][1][2][1][RTW89_ETSI][5] = 36,
[1][1][2][1][RTW89_MKK][5] = 68,
@@ -33666,6 +33870,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][5] = 36,
[1][1][2][1][RTW89_CHILE][5] = 60,
[1][1][2][1][RTW89_QATAR][5] = 36,
+ [1][1][2][1][RTW89_THAILAND][5] = 36,
[1][1][2][1][RTW89_FCC][6] = 58,
[1][1][2][1][RTW89_ETSI][6] = 36,
[1][1][2][1][RTW89_MKK][6] = 68,
@@ -33678,6 +33883,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][6] = 36,
[1][1][2][1][RTW89_CHILE][6] = 40,
[1][1][2][1][RTW89_QATAR][6] = 36,
+ [1][1][2][1][RTW89_THAILAND][6] = 36,
[1][1][2][1][RTW89_FCC][7] = 54,
[1][1][2][1][RTW89_ETSI][7] = 36,
[1][1][2][1][RTW89_MKK][7] = 68,
@@ -33690,6 +33896,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][7] = 36,
[1][1][2][1][RTW89_CHILE][7] = 40,
[1][1][2][1][RTW89_QATAR][7] = 36,
+ [1][1][2][1][RTW89_THAILAND][7] = 36,
[1][1][2][1][RTW89_FCC][8] = 54,
[1][1][2][1][RTW89_ETSI][8] = 36,
[1][1][2][1][RTW89_MKK][8] = 68,
@@ -33702,6 +33909,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][8] = 36,
[1][1][2][1][RTW89_CHILE][8] = 54,
[1][1][2][1][RTW89_QATAR][8] = 36,
+ [1][1][2][1][RTW89_THAILAND][8] = 36,
[1][1][2][1][RTW89_FCC][9] = 54,
[1][1][2][1][RTW89_ETSI][9] = 36,
[1][1][2][1][RTW89_MKK][9] = 68,
@@ -33714,18 +33922,20 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][9] = 36,
[1][1][2][1][RTW89_CHILE][9] = 54,
[1][1][2][1][RTW89_QATAR][9] = 36,
+ [1][1][2][1][RTW89_THAILAND][9] = 36,
[1][1][2][1][RTW89_FCC][10] = 46,
[1][1][2][1][RTW89_ETSI][10] = 36,
[1][1][2][1][RTW89_MKK][10] = 68,
[1][1][2][1][RTW89_IC][10] = 46,
[1][1][2][1][RTW89_KCC][10] = 64,
[1][1][2][1][RTW89_ACMA][10] = 36,
- [1][1][2][1][RTW89_CN][10] = 36,
+ [1][1][2][1][RTW89_CN][10] = 34,
[1][1][2][1][RTW89_UK][10] = 36,
[1][1][2][1][RTW89_MEXICO][10] = 46,
[1][1][2][1][RTW89_UKRAINE][10] = 36,
[1][1][2][1][RTW89_CHILE][10] = 46,
[1][1][2][1][RTW89_QATAR][10] = 36,
+ [1][1][2][1][RTW89_THAILAND][10] = 36,
[1][1][2][1][RTW89_FCC][11] = 127,
[1][1][2][1][RTW89_ETSI][11] = 127,
[1][1][2][1][RTW89_MKK][11] = 127,
@@ -33738,6 +33948,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][11] = 127,
[1][1][2][1][RTW89_CHILE][11] = 127,
[1][1][2][1][RTW89_QATAR][11] = 127,
+ [1][1][2][1][RTW89_THAILAND][11] = 127,
[1][1][2][1][RTW89_FCC][12] = 127,
[1][1][2][1][RTW89_ETSI][12] = 127,
[1][1][2][1][RTW89_MKK][12] = 127,
@@ -33750,6 +33961,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][12] = 127,
[1][1][2][1][RTW89_CHILE][12] = 127,
[1][1][2][1][RTW89_QATAR][12] = 127,
+ [1][1][2][1][RTW89_THAILAND][12] = 127,
[1][1][2][1][RTW89_FCC][13] = 127,
[1][1][2][1][RTW89_ETSI][13] = 127,
[1][1][2][1][RTW89_MKK][13] = 127,
@@ -33762,6 +33974,7 @@ const s8 rtw89_8852c_txpwr_lmt_2g[RTW89_2G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][13] = 127,
[1][1][2][1][RTW89_CHILE][13] = 127,
[1][1][2][1][RTW89_QATAR][13] = 127,
+ [1][1][2][1][RTW89_THAILAND][13] = 127,
};
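
(Illustrative aside, not part of the patch: the entries above are indexed in the order shown in each line of the diff — [bandwidth][ntx][rate section][beamforming][regulatory domain][channel] — with the first two dimensions confirmed by the hunk headers (RTW89_2G_BW_NUM, RTW89_NTX_NUM); the remaining dimension names, the helper below, and the reading of 127 as "no limit defined for this domain/channel" are assumptions made for the sketch only. The standalone C program below shows how such a table can be declared and looked up; it is a minimal sketch, not the driver's actual lookup path.)

/*
 * Minimal, self-contained sketch of indexing a multi-dimensional
 * TX power limit table like rtw89_8852c_txpwr_lmt_2g.
 * Index order follows the diff entries:
 *   [bw][ntx][rate section][beamforming][regulatory domain][channel]
 * Dimension sizes, enum and function names are invented for the example;
 * 127 is treated here as "entry not defined", matching how the table
 * above appears to use it for disallowed channels.
 */
#include <stdio.h>

typedef signed char s8;

enum example_regd { EX_FCC, EX_ETSI, EX_THAILAND, EX_REGD_NUM };

#define EX_BW_NUM  1
#define EX_NTX_NUM 1
#define EX_RS_NUM  1
#define EX_BF_NUM  1
#define EX_CH_NUM  2

static const s8 example_lmt[EX_BW_NUM][EX_NTX_NUM][EX_RS_NUM][EX_BF_NUM]
			   [EX_REGD_NUM][EX_CH_NUM] = {
	[0][0][0][0][EX_FCC]      = { 66, 127 },
	[0][0][0][0][EX_ETSI]     = { 60, 127 },
	[0][0][0][0][EX_THAILAND] = { 60, 127 },
};

/* Return the stored limit, or a caller-supplied fallback when the
 * table marks the entry as undefined (127). */
static s8 lookup_limit(int bw, int ntx, int rs, int bf, int regd, int ch,
		       s8 fallback)
{
	s8 v = example_lmt[bw][ntx][rs][bf][regd][ch];

	return (v == 127) ? fallback : v;
}

int main(void)
{
	printf("THAILAND ch0 limit: %d\n",
	       lookup_limit(0, 0, 0, 0, EX_THAILAND, 0, -128));
	printf("THAILAND ch1 limit: %d\n",
	       lookup_limit(0, 0, 0, 0, EX_THAILAND, 1, -128));
	return 0;
}

(End of aside; the diff resumes below with the 5 GHz table.)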
static
@@ -33992,6 +34205,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][0] = 54,
[0][0][1][0][RTW89_CHILE][0] = 70,
[0][0][1][0][RTW89_QATAR][0] = 66,
+ [0][0][1][0][RTW89_THAILAND][0] = 66,
[0][0][1][0][RTW89_FCC][2] = 72,
[0][0][1][0][RTW89_ETSI][2] = 66,
[0][0][1][0][RTW89_MKK][2] = 66,
@@ -34004,6 +34218,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][2] = 54,
[0][0][1][0][RTW89_CHILE][2] = 70,
[0][0][1][0][RTW89_QATAR][2] = 66,
+ [0][0][1][0][RTW89_THAILAND][2] = 66,
[0][0][1][0][RTW89_FCC][4] = 72,
[0][0][1][0][RTW89_ETSI][4] = 66,
[0][0][1][0][RTW89_MKK][4] = 66,
@@ -34016,6 +34231,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][4] = 54,
[0][0][1][0][RTW89_CHILE][4] = 70,
[0][0][1][0][RTW89_QATAR][4] = 66,
+ [0][0][1][0][RTW89_THAILAND][4] = 66,
[0][0][1][0][RTW89_FCC][6] = 72,
[0][0][1][0][RTW89_ETSI][6] = 66,
[0][0][1][0][RTW89_MKK][6] = 66,
@@ -34028,6 +34244,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][6] = 54,
[0][0][1][0][RTW89_CHILE][6] = 70,
[0][0][1][0][RTW89_QATAR][6] = 66,
+ [0][0][1][0][RTW89_THAILAND][6] = 66,
[0][0][1][0][RTW89_FCC][8] = 72,
[0][0][1][0][RTW89_ETSI][8] = 66,
[0][0][1][0][RTW89_MKK][8] = 66,
@@ -34040,6 +34257,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][8] = 54,
[0][0][1][0][RTW89_CHILE][8] = 70,
[0][0][1][0][RTW89_QATAR][8] = 66,
+ [0][0][1][0][RTW89_THAILAND][8] = 66,
[0][0][1][0][RTW89_FCC][10] = 72,
[0][0][1][0][RTW89_ETSI][10] = 66,
[0][0][1][0][RTW89_MKK][10] = 66,
@@ -34052,6 +34270,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][10] = 54,
[0][0][1][0][RTW89_CHILE][10] = 70,
[0][0][1][0][RTW89_QATAR][10] = 66,
+ [0][0][1][0][RTW89_THAILAND][10] = 66,
[0][0][1][0][RTW89_FCC][12] = 72,
[0][0][1][0][RTW89_ETSI][12] = 66,
[0][0][1][0][RTW89_MKK][12] = 66,
@@ -34064,6 +34283,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][12] = 54,
[0][0][1][0][RTW89_CHILE][12] = 70,
[0][0][1][0][RTW89_QATAR][12] = 66,
+ [0][0][1][0][RTW89_THAILAND][12] = 66,
[0][0][1][0][RTW89_FCC][14] = 70,
[0][0][1][0][RTW89_ETSI][14] = 66,
[0][0][1][0][RTW89_MKK][14] = 66,
@@ -34076,6 +34296,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][14] = 54,
[0][0][1][0][RTW89_CHILE][14] = 68,
[0][0][1][0][RTW89_QATAR][14] = 66,
+ [0][0][1][0][RTW89_THAILAND][14] = 66,
[0][0][1][0][RTW89_FCC][15] = 72,
[0][0][1][0][RTW89_ETSI][15] = 66,
[0][0][1][0][RTW89_MKK][15] = 70,
@@ -34088,6 +34309,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][15] = 54,
[0][0][1][0][RTW89_CHILE][15] = 70,
[0][0][1][0][RTW89_QATAR][15] = 66,
+ [0][0][1][0][RTW89_THAILAND][15] = 66,
[0][0][1][0][RTW89_FCC][17] = 72,
[0][0][1][0][RTW89_ETSI][17] = 66,
[0][0][1][0][RTW89_MKK][17] = 70,
@@ -34100,6 +34322,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][17] = 54,
[0][0][1][0][RTW89_CHILE][17] = 70,
[0][0][1][0][RTW89_QATAR][17] = 66,
+ [0][0][1][0][RTW89_THAILAND][17] = 66,
[0][0][1][0][RTW89_FCC][19] = 72,
[0][0][1][0][RTW89_ETSI][19] = 66,
[0][0][1][0][RTW89_MKK][19] = 70,
@@ -34112,6 +34335,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][19] = 54,
[0][0][1][0][RTW89_CHILE][19] = 70,
[0][0][1][0][RTW89_QATAR][19] = 66,
+ [0][0][1][0][RTW89_THAILAND][19] = 66,
[0][0][1][0][RTW89_FCC][21] = 72,
[0][0][1][0][RTW89_ETSI][21] = 66,
[0][0][1][0][RTW89_MKK][21] = 70,
@@ -34124,6 +34348,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][21] = 54,
[0][0][1][0][RTW89_CHILE][21] = 70,
[0][0][1][0][RTW89_QATAR][21] = 66,
+ [0][0][1][0][RTW89_THAILAND][21] = 66,
[0][0][1][0][RTW89_FCC][23] = 72,
[0][0][1][0][RTW89_ETSI][23] = 66,
[0][0][1][0][RTW89_MKK][23] = 70,
@@ -34136,6 +34361,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][23] = 54,
[0][0][1][0][RTW89_CHILE][23] = 70,
[0][0][1][0][RTW89_QATAR][23] = 66,
+ [0][0][1][0][RTW89_THAILAND][23] = 66,
[0][0][1][0][RTW89_FCC][25] = 72,
[0][0][1][0][RTW89_ETSI][25] = 66,
[0][0][1][0][RTW89_MKK][25] = 70,
@@ -34148,6 +34374,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][25] = 54,
[0][0][1][0][RTW89_CHILE][25] = 70,
[0][0][1][0][RTW89_QATAR][25] = 66,
+ [0][0][1][0][RTW89_THAILAND][25] = 66,
[0][0][1][0][RTW89_FCC][27] = 72,
[0][0][1][0][RTW89_ETSI][27] = 66,
[0][0][1][0][RTW89_MKK][27] = 70,
@@ -34160,6 +34387,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][27] = 54,
[0][0][1][0][RTW89_CHILE][27] = 58,
[0][0][1][0][RTW89_QATAR][27] = 66,
+ [0][0][1][0][RTW89_THAILAND][27] = 66,
[0][0][1][0][RTW89_FCC][29] = 72,
[0][0][1][0][RTW89_ETSI][29] = 66,
[0][0][1][0][RTW89_MKK][29] = 70,
@@ -34172,6 +34400,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][29] = 54,
[0][0][1][0][RTW89_CHILE][29] = 58,
[0][0][1][0][RTW89_QATAR][29] = 66,
+ [0][0][1][0][RTW89_THAILAND][29] = 66,
[0][0][1][0][RTW89_FCC][31] = 72,
[0][0][1][0][RTW89_ETSI][31] = 66,
[0][0][1][0][RTW89_MKK][31] = 70,
@@ -34184,6 +34413,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][31] = 54,
[0][0][1][0][RTW89_CHILE][31] = 58,
[0][0][1][0][RTW89_QATAR][31] = 66,
+ [0][0][1][0][RTW89_THAILAND][31] = 66,
[0][0][1][0][RTW89_FCC][33] = 72,
[0][0][1][0][RTW89_ETSI][33] = 66,
[0][0][1][0][RTW89_MKK][33] = 70,
@@ -34196,6 +34426,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][33] = 54,
[0][0][1][0][RTW89_CHILE][33] = 58,
[0][0][1][0][RTW89_QATAR][33] = 66,
+ [0][0][1][0][RTW89_THAILAND][33] = 66,
[0][0][1][0][RTW89_FCC][35] = 60,
[0][0][1][0][RTW89_ETSI][35] = 66,
[0][0][1][0][RTW89_MKK][35] = 70,
@@ -34208,6 +34439,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][35] = 54,
[0][0][1][0][RTW89_CHILE][35] = 58,
[0][0][1][0][RTW89_QATAR][35] = 66,
+ [0][0][1][0][RTW89_THAILAND][35] = 66,
[0][0][1][0][RTW89_FCC][37] = 72,
[0][0][1][0][RTW89_ETSI][37] = 127,
[0][0][1][0][RTW89_MKK][37] = 70,
@@ -34220,66 +34452,72 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][37] = 127,
[0][0][1][0][RTW89_CHILE][37] = 70,
[0][0][1][0][RTW89_QATAR][37] = 127,
+ [0][0][1][0][RTW89_THAILAND][37] = 127,
[0][0][1][0][RTW89_FCC][38] = 72,
[0][0][1][0][RTW89_ETSI][38] = 30,
[0][0][1][0][RTW89_MKK][38] = 127,
[0][0][1][0][RTW89_IC][38] = 72,
[0][0][1][0][RTW89_KCC][38] = 62,
[0][0][1][0][RTW89_ACMA][38] = 70,
- [0][0][1][0][RTW89_CN][38] = 68,
+ [0][0][1][0][RTW89_CN][38] = 54,
[0][0][1][0][RTW89_UK][38] = 64,
[0][0][1][0][RTW89_MEXICO][38] = 72,
[0][0][1][0][RTW89_UKRAINE][38] = 30,
[0][0][1][0][RTW89_CHILE][38] = 70,
[0][0][1][0][RTW89_QATAR][38] = 30,
+ [0][0][1][0][RTW89_THAILAND][38] = 30,
[0][0][1][0][RTW89_FCC][40] = 72,
[0][0][1][0][RTW89_ETSI][40] = 30,
[0][0][1][0][RTW89_MKK][40] = 127,
[0][0][1][0][RTW89_IC][40] = 72,
[0][0][1][0][RTW89_KCC][40] = 62,
[0][0][1][0][RTW89_ACMA][40] = 70,
- [0][0][1][0][RTW89_CN][40] = 68,
+ [0][0][1][0][RTW89_CN][40] = 54,
[0][0][1][0][RTW89_UK][40] = 64,
[0][0][1][0][RTW89_MEXICO][40] = 72,
[0][0][1][0][RTW89_UKRAINE][40] = 30,
[0][0][1][0][RTW89_CHILE][40] = 70,
[0][0][1][0][RTW89_QATAR][40] = 30,
+ [0][0][1][0][RTW89_THAILAND][40] = 30,
[0][0][1][0][RTW89_FCC][42] = 72,
[0][0][1][0][RTW89_ETSI][42] = 30,
[0][0][1][0][RTW89_MKK][42] = 127,
[0][0][1][0][RTW89_IC][42] = 72,
[0][0][1][0][RTW89_KCC][42] = 62,
[0][0][1][0][RTW89_ACMA][42] = 70,
- [0][0][1][0][RTW89_CN][42] = 68,
+ [0][0][1][0][RTW89_CN][42] = 54,
[0][0][1][0][RTW89_UK][42] = 64,
[0][0][1][0][RTW89_MEXICO][42] = 72,
[0][0][1][0][RTW89_UKRAINE][42] = 30,
[0][0][1][0][RTW89_CHILE][42] = 70,
[0][0][1][0][RTW89_QATAR][42] = 30,
+ [0][0][1][0][RTW89_THAILAND][42] = 30,
[0][0][1][0][RTW89_FCC][44] = 72,
[0][0][1][0][RTW89_ETSI][44] = 30,
[0][0][1][0][RTW89_MKK][44] = 127,
[0][0][1][0][RTW89_IC][44] = 72,
[0][0][1][0][RTW89_KCC][44] = 62,
[0][0][1][0][RTW89_ACMA][44] = 70,
- [0][0][1][0][RTW89_CN][44] = 68,
+ [0][0][1][0][RTW89_CN][44] = 54,
[0][0][1][0][RTW89_UK][44] = 64,
[0][0][1][0][RTW89_MEXICO][44] = 72,
[0][0][1][0][RTW89_UKRAINE][44] = 30,
[0][0][1][0][RTW89_CHILE][44] = 70,
[0][0][1][0][RTW89_QATAR][44] = 30,
+ [0][0][1][0][RTW89_THAILAND][44] = 30,
[0][0][1][0][RTW89_FCC][46] = 72,
[0][0][1][0][RTW89_ETSI][46] = 30,
[0][0][1][0][RTW89_MKK][46] = 127,
[0][0][1][0][RTW89_IC][46] = 72,
[0][0][1][0][RTW89_KCC][46] = 62,
[0][0][1][0][RTW89_ACMA][46] = 70,
- [0][0][1][0][RTW89_CN][46] = 68,
+ [0][0][1][0][RTW89_CN][46] = 54,
[0][0][1][0][RTW89_UK][46] = 64,
[0][0][1][0][RTW89_MEXICO][46] = 72,
[0][0][1][0][RTW89_UKRAINE][46] = 30,
[0][0][1][0][RTW89_CHILE][46] = 70,
[0][0][1][0][RTW89_QATAR][46] = 30,
+ [0][0][1][0][RTW89_THAILAND][46] = 30,
[0][0][1][0][RTW89_FCC][48] = 72,
[0][0][1][0][RTW89_ETSI][48] = 127,
[0][0][1][0][RTW89_MKK][48] = 127,
@@ -34292,6 +34530,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][48] = 127,
[0][0][1][0][RTW89_CHILE][48] = 127,
[0][0][1][0][RTW89_QATAR][48] = 127,
+ [0][0][1][0][RTW89_THAILAND][48] = 127,
[0][0][1][0][RTW89_FCC][50] = 72,
[0][0][1][0][RTW89_ETSI][50] = 127,
[0][0][1][0][RTW89_MKK][50] = 127,
@@ -34304,6 +34543,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][50] = 127,
[0][0][1][0][RTW89_CHILE][50] = 127,
[0][0][1][0][RTW89_QATAR][50] = 127,
+ [0][0][1][0][RTW89_THAILAND][50] = 127,
[0][0][1][0][RTW89_FCC][52] = 72,
[0][0][1][0][RTW89_ETSI][52] = 127,
[0][0][1][0][RTW89_MKK][52] = 127,
@@ -34316,6 +34556,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_UKRAINE][52] = 127,
[0][0][1][0][RTW89_CHILE][52] = 127,
[0][0][1][0][RTW89_QATAR][52] = 127,
+ [0][0][1][0][RTW89_THAILAND][52] = 127,
[0][1][1][0][RTW89_FCC][0] = 60,
[0][1][1][0][RTW89_ETSI][0] = 54,
[0][1][1][0][RTW89_MKK][0] = 54,
@@ -34328,6 +34569,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][0] = 42,
[0][1][1][0][RTW89_CHILE][0] = 60,
[0][1][1][0][RTW89_QATAR][0] = 54,
+ [0][1][1][0][RTW89_THAILAND][0] = 54,
[0][1][1][0][RTW89_FCC][2] = 60,
[0][1][1][0][RTW89_ETSI][2] = 54,
[0][1][1][0][RTW89_MKK][2] = 54,
@@ -34340,6 +34582,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][2] = 42,
[0][1][1][0][RTW89_CHILE][2] = 60,
[0][1][1][0][RTW89_QATAR][2] = 54,
+ [0][1][1][0][RTW89_THAILAND][2] = 54,
[0][1][1][0][RTW89_FCC][4] = 60,
[0][1][1][0][RTW89_ETSI][4] = 54,
[0][1][1][0][RTW89_MKK][4] = 54,
@@ -34352,6 +34595,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][4] = 42,
[0][1][1][0][RTW89_CHILE][4] = 60,
[0][1][1][0][RTW89_QATAR][4] = 54,
+ [0][1][1][0][RTW89_THAILAND][4] = 54,
[0][1][1][0][RTW89_FCC][6] = 60,
[0][1][1][0][RTW89_ETSI][6] = 54,
[0][1][1][0][RTW89_MKK][6] = 54,
@@ -34364,6 +34608,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][6] = 42,
[0][1][1][0][RTW89_CHILE][6] = 60,
[0][1][1][0][RTW89_QATAR][6] = 54,
+ [0][1][1][0][RTW89_THAILAND][6] = 54,
[0][1][1][0][RTW89_FCC][8] = 62,
[0][1][1][0][RTW89_ETSI][8] = 54,
[0][1][1][0][RTW89_MKK][8] = 52,
@@ -34376,6 +34621,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][8] = 42,
[0][1][1][0][RTW89_CHILE][8] = 62,
[0][1][1][0][RTW89_QATAR][8] = 54,
+ [0][1][1][0][RTW89_THAILAND][8] = 54,
[0][1][1][0][RTW89_FCC][10] = 62,
[0][1][1][0][RTW89_ETSI][10] = 54,
[0][1][1][0][RTW89_MKK][10] = 54,
@@ -34388,6 +34634,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][10] = 42,
[0][1][1][0][RTW89_CHILE][10] = 62,
[0][1][1][0][RTW89_QATAR][10] = 54,
+ [0][1][1][0][RTW89_THAILAND][10] = 54,
[0][1][1][0][RTW89_FCC][12] = 62,
[0][1][1][0][RTW89_ETSI][12] = 54,
[0][1][1][0][RTW89_MKK][12] = 54,
@@ -34400,6 +34647,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][12] = 42,
[0][1][1][0][RTW89_CHILE][12] = 62,
[0][1][1][0][RTW89_QATAR][12] = 54,
+ [0][1][1][0][RTW89_THAILAND][12] = 54,
[0][1][1][0][RTW89_FCC][14] = 60,
[0][1][1][0][RTW89_ETSI][14] = 54,
[0][1][1][0][RTW89_MKK][14] = 54,
@@ -34412,6 +34660,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][14] = 42,
[0][1][1][0][RTW89_CHILE][14] = 60,
[0][1][1][0][RTW89_QATAR][14] = 54,
+ [0][1][1][0][RTW89_THAILAND][14] = 54,
[0][1][1][0][RTW89_FCC][15] = 60,
[0][1][1][0][RTW89_ETSI][15] = 54,
[0][1][1][0][RTW89_MKK][15] = 70,
@@ -34424,6 +34673,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][15] = 42,
[0][1][1][0][RTW89_CHILE][15] = 60,
[0][1][1][0][RTW89_QATAR][15] = 54,
+ [0][1][1][0][RTW89_THAILAND][15] = 54,
[0][1][1][0][RTW89_FCC][17] = 60,
[0][1][1][0][RTW89_ETSI][17] = 54,
[0][1][1][0][RTW89_MKK][17] = 70,
@@ -34436,6 +34686,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][17] = 42,
[0][1][1][0][RTW89_CHILE][17] = 60,
[0][1][1][0][RTW89_QATAR][17] = 54,
+ [0][1][1][0][RTW89_THAILAND][17] = 54,
[0][1][1][0][RTW89_FCC][19] = 60,
[0][1][1][0][RTW89_ETSI][19] = 54,
[0][1][1][0][RTW89_MKK][19] = 70,
@@ -34448,6 +34699,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][19] = 42,
[0][1][1][0][RTW89_CHILE][19] = 60,
[0][1][1][0][RTW89_QATAR][19] = 54,
+ [0][1][1][0][RTW89_THAILAND][19] = 54,
[0][1][1][0][RTW89_FCC][21] = 60,
[0][1][1][0][RTW89_ETSI][21] = 54,
[0][1][1][0][RTW89_MKK][21] = 70,
@@ -34460,6 +34712,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][21] = 42,
[0][1][1][0][RTW89_CHILE][21] = 60,
[0][1][1][0][RTW89_QATAR][21] = 54,
+ [0][1][1][0][RTW89_THAILAND][21] = 54,
[0][1][1][0][RTW89_FCC][23] = 60,
[0][1][1][0][RTW89_ETSI][23] = 54,
[0][1][1][0][RTW89_MKK][23] = 70,
@@ -34472,6 +34725,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][23] = 42,
[0][1][1][0][RTW89_CHILE][23] = 60,
[0][1][1][0][RTW89_QATAR][23] = 54,
+ [0][1][1][0][RTW89_THAILAND][23] = 54,
[0][1][1][0][RTW89_FCC][25] = 60,
[0][1][1][0][RTW89_ETSI][25] = 54,
[0][1][1][0][RTW89_MKK][25] = 70,
@@ -34484,6 +34738,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][25] = 42,
[0][1][1][0][RTW89_CHILE][25] = 60,
[0][1][1][0][RTW89_QATAR][25] = 54,
+ [0][1][1][0][RTW89_THAILAND][25] = 54,
[0][1][1][0][RTW89_FCC][27] = 60,
[0][1][1][0][RTW89_ETSI][27] = 54,
[0][1][1][0][RTW89_MKK][27] = 70,
@@ -34496,6 +34751,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][27] = 42,
[0][1][1][0][RTW89_CHILE][27] = 52,
[0][1][1][0][RTW89_QATAR][27] = 54,
+ [0][1][1][0][RTW89_THAILAND][27] = 54,
[0][1][1][0][RTW89_FCC][29] = 60,
[0][1][1][0][RTW89_ETSI][29] = 54,
[0][1][1][0][RTW89_MKK][29] = 70,
@@ -34508,6 +34764,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][29] = 42,
[0][1][1][0][RTW89_CHILE][29] = 52,
[0][1][1][0][RTW89_QATAR][29] = 54,
+ [0][1][1][0][RTW89_THAILAND][29] = 54,
[0][1][1][0][RTW89_FCC][31] = 60,
[0][1][1][0][RTW89_ETSI][31] = 54,
[0][1][1][0][RTW89_MKK][31] = 70,
@@ -34520,6 +34777,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][31] = 42,
[0][1][1][0][RTW89_CHILE][31] = 52,
[0][1][1][0][RTW89_QATAR][31] = 54,
+ [0][1][1][0][RTW89_THAILAND][31] = 54,
[0][1][1][0][RTW89_FCC][33] = 60,
[0][1][1][0][RTW89_ETSI][33] = 54,
[0][1][1][0][RTW89_MKK][33] = 70,
@@ -34532,6 +34790,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][33] = 42,
[0][1][1][0][RTW89_CHILE][33] = 52,
[0][1][1][0][RTW89_QATAR][33] = 54,
+ [0][1][1][0][RTW89_THAILAND][33] = 54,
[0][1][1][0][RTW89_FCC][35] = 52,
[0][1][1][0][RTW89_ETSI][35] = 54,
[0][1][1][0][RTW89_MKK][35] = 70,
@@ -34544,6 +34803,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][35] = 42,
[0][1][1][0][RTW89_CHILE][35] = 52,
[0][1][1][0][RTW89_QATAR][35] = 54,
+ [0][1][1][0][RTW89_THAILAND][35] = 54,
[0][1][1][0][RTW89_FCC][37] = 62,
[0][1][1][0][RTW89_ETSI][37] = 127,
[0][1][1][0][RTW89_MKK][37] = 70,
@@ -34556,66 +34816,72 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][37] = 127,
[0][1][1][0][RTW89_CHILE][37] = 62,
[0][1][1][0][RTW89_QATAR][37] = 127,
+ [0][1][1][0][RTW89_THAILAND][37] = 127,
[0][1][1][0][RTW89_FCC][38] = 72,
[0][1][1][0][RTW89_ETSI][38] = 18,
[0][1][1][0][RTW89_MKK][38] = 127,
[0][1][1][0][RTW89_IC][38] = 72,
[0][1][1][0][RTW89_KCC][38] = 60,
[0][1][1][0][RTW89_ACMA][38] = 70,
- [0][1][1][0][RTW89_CN][38] = 64,
+ [0][1][1][0][RTW89_CN][38] = 54,
[0][1][1][0][RTW89_UK][38] = 52,
[0][1][1][0][RTW89_MEXICO][38] = 72,
[0][1][1][0][RTW89_UKRAINE][38] = 18,
[0][1][1][0][RTW89_CHILE][38] = 70,
[0][1][1][0][RTW89_QATAR][38] = 18,
+ [0][1][1][0][RTW89_THAILAND][38] = 18,
[0][1][1][0][RTW89_FCC][40] = 72,
[0][1][1][0][RTW89_ETSI][40] = 18,
[0][1][1][0][RTW89_MKK][40] = 127,
[0][1][1][0][RTW89_IC][40] = 72,
[0][1][1][0][RTW89_KCC][40] = 60,
[0][1][1][0][RTW89_ACMA][40] = 70,
- [0][1][1][0][RTW89_CN][40] = 64,
+ [0][1][1][0][RTW89_CN][40] = 54,
[0][1][1][0][RTW89_UK][40] = 52,
[0][1][1][0][RTW89_MEXICO][40] = 72,
[0][1][1][0][RTW89_UKRAINE][40] = 18,
[0][1][1][0][RTW89_CHILE][40] = 70,
[0][1][1][0][RTW89_QATAR][40] = 18,
+ [0][1][1][0][RTW89_THAILAND][40] = 18,
[0][1][1][0][RTW89_FCC][42] = 72,
[0][1][1][0][RTW89_ETSI][42] = 18,
[0][1][1][0][RTW89_MKK][42] = 127,
[0][1][1][0][RTW89_IC][42] = 72,
[0][1][1][0][RTW89_KCC][42] = 60,
[0][1][1][0][RTW89_ACMA][42] = 70,
- [0][1][1][0][RTW89_CN][42] = 64,
+ [0][1][1][0][RTW89_CN][42] = 54,
[0][1][1][0][RTW89_UK][42] = 52,
[0][1][1][0][RTW89_MEXICO][42] = 72,
[0][1][1][0][RTW89_UKRAINE][42] = 18,
[0][1][1][0][RTW89_CHILE][42] = 70,
[0][1][1][0][RTW89_QATAR][42] = 18,
+ [0][1][1][0][RTW89_THAILAND][42] = 18,
[0][1][1][0][RTW89_FCC][44] = 72,
[0][1][1][0][RTW89_ETSI][44] = 18,
[0][1][1][0][RTW89_MKK][44] = 127,
[0][1][1][0][RTW89_IC][44] = 72,
[0][1][1][0][RTW89_KCC][44] = 60,
[0][1][1][0][RTW89_ACMA][44] = 70,
- [0][1][1][0][RTW89_CN][44] = 60,
+ [0][1][1][0][RTW89_CN][44] = 54,
[0][1][1][0][RTW89_UK][44] = 52,
[0][1][1][0][RTW89_MEXICO][44] = 72,
[0][1][1][0][RTW89_UKRAINE][44] = 18,
[0][1][1][0][RTW89_CHILE][44] = 70,
[0][1][1][0][RTW89_QATAR][44] = 18,
+ [0][1][1][0][RTW89_THAILAND][44] = 18,
[0][1][1][0][RTW89_FCC][46] = 72,
[0][1][1][0][RTW89_ETSI][46] = 18,
[0][1][1][0][RTW89_MKK][46] = 127,
[0][1][1][0][RTW89_IC][46] = 72,
[0][1][1][0][RTW89_KCC][46] = 60,
[0][1][1][0][RTW89_ACMA][46] = 70,
- [0][1][1][0][RTW89_CN][46] = 60,
+ [0][1][1][0][RTW89_CN][46] = 54,
[0][1][1][0][RTW89_UK][46] = 52,
[0][1][1][0][RTW89_MEXICO][46] = 72,
[0][1][1][0][RTW89_UKRAINE][46] = 18,
[0][1][1][0][RTW89_CHILE][46] = 70,
[0][1][1][0][RTW89_QATAR][46] = 18,
+ [0][1][1][0][RTW89_THAILAND][46] = 18,
[0][1][1][0][RTW89_FCC][48] = 48,
[0][1][1][0][RTW89_ETSI][48] = 127,
[0][1][1][0][RTW89_MKK][48] = 127,
@@ -34628,6 +34894,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][48] = 127,
[0][1][1][0][RTW89_CHILE][48] = 127,
[0][1][1][0][RTW89_QATAR][48] = 127,
+ [0][1][1][0][RTW89_THAILAND][48] = 127,
[0][1][1][0][RTW89_FCC][50] = 48,
[0][1][1][0][RTW89_ETSI][50] = 127,
[0][1][1][0][RTW89_MKK][50] = 127,
@@ -34640,6 +34907,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][50] = 127,
[0][1][1][0][RTW89_CHILE][50] = 127,
[0][1][1][0][RTW89_QATAR][50] = 127,
+ [0][1][1][0][RTW89_THAILAND][50] = 127,
[0][1][1][0][RTW89_FCC][52] = 48,
[0][1][1][0][RTW89_ETSI][52] = 127,
[0][1][1][0][RTW89_MKK][52] = 127,
@@ -34652,6 +34920,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_UKRAINE][52] = 127,
[0][1][1][0][RTW89_CHILE][52] = 127,
[0][1][1][0][RTW89_QATAR][52] = 127,
+ [0][1][1][0][RTW89_THAILAND][52] = 127,
[0][0][2][0][RTW89_FCC][0] = 70,
[0][0][2][0][RTW89_ETSI][0] = 66,
[0][0][2][0][RTW89_MKK][0] = 68,
@@ -34664,6 +34933,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][0] = 54,
[0][0][2][0][RTW89_CHILE][0] = 68,
[0][0][2][0][RTW89_QATAR][0] = 66,
+ [0][0][2][0][RTW89_THAILAND][0] = 66,
[0][0][2][0][RTW89_FCC][2] = 72,
[0][0][2][0][RTW89_ETSI][2] = 66,
[0][0][2][0][RTW89_MKK][2] = 68,
@@ -34676,6 +34946,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][2] = 54,
[0][0][2][0][RTW89_CHILE][2] = 70,
[0][0][2][0][RTW89_QATAR][2] = 66,
+ [0][0][2][0][RTW89_THAILAND][2] = 66,
[0][0][2][0][RTW89_FCC][4] = 72,
[0][0][2][0][RTW89_ETSI][4] = 66,
[0][0][2][0][RTW89_MKK][4] = 68,
@@ -34688,6 +34959,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][4] = 54,
[0][0][2][0][RTW89_CHILE][4] = 70,
[0][0][2][0][RTW89_QATAR][4] = 66,
+ [0][0][2][0][RTW89_THAILAND][4] = 66,
[0][0][2][0][RTW89_FCC][6] = 72,
[0][0][2][0][RTW89_ETSI][6] = 66,
[0][0][2][0][RTW89_MKK][6] = 60,
@@ -34700,6 +34972,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][6] = 54,
[0][0][2][0][RTW89_CHILE][6] = 70,
[0][0][2][0][RTW89_QATAR][6] = 66,
+ [0][0][2][0][RTW89_THAILAND][6] = 66,
[0][0][2][0][RTW89_FCC][8] = 72,
[0][0][2][0][RTW89_ETSI][8] = 66,
[0][0][2][0][RTW89_MKK][8] = 58,
@@ -34712,6 +34985,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][8] = 54,
[0][0][2][0][RTW89_CHILE][8] = 70,
[0][0][2][0][RTW89_QATAR][8] = 66,
+ [0][0][2][0][RTW89_THAILAND][8] = 66,
[0][0][2][0][RTW89_FCC][10] = 72,
[0][0][2][0][RTW89_ETSI][10] = 66,
[0][0][2][0][RTW89_MKK][10] = 70,
@@ -34724,6 +34998,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][10] = 54,
[0][0][2][0][RTW89_CHILE][10] = 70,
[0][0][2][0][RTW89_QATAR][10] = 66,
+ [0][0][2][0][RTW89_THAILAND][10] = 66,
[0][0][2][0][RTW89_FCC][12] = 72,
[0][0][2][0][RTW89_ETSI][12] = 66,
[0][0][2][0][RTW89_MKK][12] = 70,
@@ -34736,6 +35011,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][12] = 54,
[0][0][2][0][RTW89_CHILE][12] = 70,
[0][0][2][0][RTW89_QATAR][12] = 66,
+ [0][0][2][0][RTW89_THAILAND][12] = 66,
[0][0][2][0][RTW89_FCC][14] = 68,
[0][0][2][0][RTW89_ETSI][14] = 66,
[0][0][2][0][RTW89_MKK][14] = 70,
@@ -34748,6 +35024,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][14] = 54,
[0][0][2][0][RTW89_CHILE][14] = 66,
[0][0][2][0][RTW89_QATAR][14] = 66,
+ [0][0][2][0][RTW89_THAILAND][14] = 66,
[0][0][2][0][RTW89_FCC][15] = 70,
[0][0][2][0][RTW89_ETSI][15] = 66,
[0][0][2][0][RTW89_MKK][15] = 70,
@@ -34760,6 +35037,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][15] = 54,
[0][0][2][0][RTW89_CHILE][15] = 68,
[0][0][2][0][RTW89_QATAR][15] = 66,
+ [0][0][2][0][RTW89_THAILAND][15] = 66,
[0][0][2][0][RTW89_FCC][17] = 72,
[0][0][2][0][RTW89_ETSI][17] = 66,
[0][0][2][0][RTW89_MKK][17] = 70,
@@ -34772,6 +35050,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][17] = 54,
[0][0][2][0][RTW89_CHILE][17] = 68,
[0][0][2][0][RTW89_QATAR][17] = 66,
+ [0][0][2][0][RTW89_THAILAND][17] = 66,
[0][0][2][0][RTW89_FCC][19] = 72,
[0][0][2][0][RTW89_ETSI][19] = 66,
[0][0][2][0][RTW89_MKK][19] = 70,
@@ -34784,6 +35063,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][19] = 54,
[0][0][2][0][RTW89_CHILE][19] = 68,
[0][0][2][0][RTW89_QATAR][19] = 66,
+ [0][0][2][0][RTW89_THAILAND][19] = 66,
[0][0][2][0][RTW89_FCC][21] = 72,
[0][0][2][0][RTW89_ETSI][21] = 66,
[0][0][2][0][RTW89_MKK][21] = 70,
@@ -34796,6 +35076,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][21] = 54,
[0][0][2][0][RTW89_CHILE][21] = 70,
[0][0][2][0][RTW89_QATAR][21] = 66,
+ [0][0][2][0][RTW89_THAILAND][21] = 66,
[0][0][2][0][RTW89_FCC][23] = 72,
[0][0][2][0][RTW89_ETSI][23] = 66,
[0][0][2][0][RTW89_MKK][23] = 70,
@@ -34808,6 +35089,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][23] = 54,
[0][0][2][0][RTW89_CHILE][23] = 70,
[0][0][2][0][RTW89_QATAR][23] = 66,
+ [0][0][2][0][RTW89_THAILAND][23] = 66,
[0][0][2][0][RTW89_FCC][25] = 72,
[0][0][2][0][RTW89_ETSI][25] = 66,
[0][0][2][0][RTW89_MKK][25] = 70,
@@ -34820,6 +35102,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][25] = 54,
[0][0][2][0][RTW89_CHILE][25] = 70,
[0][0][2][0][RTW89_QATAR][25] = 66,
+ [0][0][2][0][RTW89_THAILAND][25] = 66,
[0][0][2][0][RTW89_FCC][27] = 72,
[0][0][2][0][RTW89_ETSI][27] = 66,
[0][0][2][0][RTW89_MKK][27] = 70,
@@ -34832,6 +35115,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][27] = 54,
[0][0][2][0][RTW89_CHILE][27] = 56,
[0][0][2][0][RTW89_QATAR][27] = 66,
+ [0][0][2][0][RTW89_THAILAND][27] = 66,
[0][0][2][0][RTW89_FCC][29] = 72,
[0][0][2][0][RTW89_ETSI][29] = 66,
[0][0][2][0][RTW89_MKK][29] = 70,
@@ -34844,6 +35128,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][29] = 54,
[0][0][2][0][RTW89_CHILE][29] = 56,
[0][0][2][0][RTW89_QATAR][29] = 66,
+ [0][0][2][0][RTW89_THAILAND][29] = 66,
[0][0][2][0][RTW89_FCC][31] = 72,
[0][0][2][0][RTW89_ETSI][31] = 66,
[0][0][2][0][RTW89_MKK][31] = 70,
@@ -34856,6 +35141,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][31] = 54,
[0][0][2][0][RTW89_CHILE][31] = 56,
[0][0][2][0][RTW89_QATAR][31] = 66,
+ [0][0][2][0][RTW89_THAILAND][31] = 66,
[0][0][2][0][RTW89_FCC][33] = 72,
[0][0][2][0][RTW89_ETSI][33] = 66,
[0][0][2][0][RTW89_MKK][33] = 70,
@@ -34868,6 +35154,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][33] = 54,
[0][0][2][0][RTW89_CHILE][33] = 56,
[0][0][2][0][RTW89_QATAR][33] = 66,
+ [0][0][2][0][RTW89_THAILAND][33] = 66,
[0][0][2][0][RTW89_FCC][35] = 56,
[0][0][2][0][RTW89_ETSI][35] = 66,
[0][0][2][0][RTW89_MKK][35] = 70,
@@ -34880,6 +35167,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][35] = 54,
[0][0][2][0][RTW89_CHILE][35] = 56,
[0][0][2][0][RTW89_QATAR][35] = 66,
+ [0][0][2][0][RTW89_THAILAND][35] = 66,
[0][0][2][0][RTW89_FCC][37] = 72,
[0][0][2][0][RTW89_ETSI][37] = 127,
[0][0][2][0][RTW89_MKK][37] = 70,
@@ -34892,66 +35180,72 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][37] = 127,
[0][0][2][0][RTW89_CHILE][37] = 70,
[0][0][2][0][RTW89_QATAR][37] = 127,
+ [0][0][2][0][RTW89_THAILAND][37] = 127,
[0][0][2][0][RTW89_FCC][38] = 72,
[0][0][2][0][RTW89_ETSI][38] = 30,
[0][0][2][0][RTW89_MKK][38] = 127,
[0][0][2][0][RTW89_IC][38] = 72,
[0][0][2][0][RTW89_KCC][38] = 58,
[0][0][2][0][RTW89_ACMA][38] = 70,
- [0][0][2][0][RTW89_CN][38] = 68,
+ [0][0][2][0][RTW89_CN][38] = 56,
[0][0][2][0][RTW89_UK][38] = 64,
[0][0][2][0][RTW89_MEXICO][38] = 72,
[0][0][2][0][RTW89_UKRAINE][38] = 30,
[0][0][2][0][RTW89_CHILE][38] = 70,
[0][0][2][0][RTW89_QATAR][38] = 30,
+ [0][0][2][0][RTW89_THAILAND][38] = 30,
[0][0][2][0][RTW89_FCC][40] = 72,
[0][0][2][0][RTW89_ETSI][40] = 30,
[0][0][2][0][RTW89_MKK][40] = 127,
[0][0][2][0][RTW89_IC][40] = 72,
[0][0][2][0][RTW89_KCC][40] = 58,
[0][0][2][0][RTW89_ACMA][40] = 70,
- [0][0][2][0][RTW89_CN][40] = 68,
+ [0][0][2][0][RTW89_CN][40] = 56,
[0][0][2][0][RTW89_UK][40] = 64,
[0][0][2][0][RTW89_MEXICO][40] = 72,
[0][0][2][0][RTW89_UKRAINE][40] = 30,
[0][0][2][0][RTW89_CHILE][40] = 70,
[0][0][2][0][RTW89_QATAR][40] = 30,
+ [0][0][2][0][RTW89_THAILAND][40] = 30,
[0][0][2][0][RTW89_FCC][42] = 72,
[0][0][2][0][RTW89_ETSI][42] = 30,
[0][0][2][0][RTW89_MKK][42] = 127,
[0][0][2][0][RTW89_IC][42] = 72,
[0][0][2][0][RTW89_KCC][42] = 58,
[0][0][2][0][RTW89_ACMA][42] = 70,
- [0][0][2][0][RTW89_CN][42] = 68,
+ [0][0][2][0][RTW89_CN][42] = 56,
[0][0][2][0][RTW89_UK][42] = 64,
[0][0][2][0][RTW89_MEXICO][42] = 72,
[0][0][2][0][RTW89_UKRAINE][42] = 30,
[0][0][2][0][RTW89_CHILE][42] = 70,
[0][0][2][0][RTW89_QATAR][42] = 30,
+ [0][0][2][0][RTW89_THAILAND][42] = 30,
[0][0][2][0][RTW89_FCC][44] = 72,
[0][0][2][0][RTW89_ETSI][44] = 30,
[0][0][2][0][RTW89_MKK][44] = 127,
[0][0][2][0][RTW89_IC][44] = 72,
[0][0][2][0][RTW89_KCC][44] = 58,
[0][0][2][0][RTW89_ACMA][44] = 70,
- [0][0][2][0][RTW89_CN][44] = 68,
+ [0][0][2][0][RTW89_CN][44] = 56,
[0][0][2][0][RTW89_UK][44] = 64,
[0][0][2][0][RTW89_MEXICO][44] = 72,
[0][0][2][0][RTW89_UKRAINE][44] = 30,
[0][0][2][0][RTW89_CHILE][44] = 70,
[0][0][2][0][RTW89_QATAR][44] = 30,
+ [0][0][2][0][RTW89_THAILAND][44] = 30,
[0][0][2][0][RTW89_FCC][46] = 72,
[0][0][2][0][RTW89_ETSI][46] = 30,
[0][0][2][0][RTW89_MKK][46] = 127,
[0][0][2][0][RTW89_IC][46] = 72,
[0][0][2][0][RTW89_KCC][46] = 58,
[0][0][2][0][RTW89_ACMA][46] = 70,
- [0][0][2][0][RTW89_CN][46] = 68,
+ [0][0][2][0][RTW89_CN][46] = 56,
[0][0][2][0][RTW89_UK][46] = 64,
[0][0][2][0][RTW89_MEXICO][46] = 72,
[0][0][2][0][RTW89_UKRAINE][46] = 30,
[0][0][2][0][RTW89_CHILE][46] = 70,
[0][0][2][0][RTW89_QATAR][46] = 30,
+ [0][0][2][0][RTW89_THAILAND][46] = 30,
[0][0][2][0][RTW89_FCC][48] = 72,
[0][0][2][0][RTW89_ETSI][48] = 127,
[0][0][2][0][RTW89_MKK][48] = 127,
@@ -34964,6 +35258,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][48] = 127,
[0][0][2][0][RTW89_CHILE][48] = 127,
[0][0][2][0][RTW89_QATAR][48] = 127,
+ [0][0][2][0][RTW89_THAILAND][48] = 127,
[0][0][2][0][RTW89_FCC][50] = 72,
[0][0][2][0][RTW89_ETSI][50] = 127,
[0][0][2][0][RTW89_MKK][50] = 127,
@@ -34976,6 +35271,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][50] = 127,
[0][0][2][0][RTW89_CHILE][50] = 127,
[0][0][2][0][RTW89_QATAR][50] = 127,
+ [0][0][2][0][RTW89_THAILAND][50] = 127,
[0][0][2][0][RTW89_FCC][52] = 72,
[0][0][2][0][RTW89_ETSI][52] = 127,
[0][0][2][0][RTW89_MKK][52] = 127,
@@ -34988,6 +35284,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_UKRAINE][52] = 127,
[0][0][2][0][RTW89_CHILE][52] = 127,
[0][0][2][0][RTW89_QATAR][52] = 127,
+ [0][0][2][0][RTW89_THAILAND][52] = 127,
[0][1][2][0][RTW89_FCC][0] = 60,
[0][1][2][0][RTW89_ETSI][0] = 54,
[0][1][2][0][RTW89_MKK][0] = 54,
@@ -35000,6 +35297,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][0] = 42,
[0][1][2][0][RTW89_CHILE][0] = 60,
[0][1][2][0][RTW89_QATAR][0] = 54,
+ [0][1][2][0][RTW89_THAILAND][0] = 54,
[0][1][2][0][RTW89_FCC][2] = 62,
[0][1][2][0][RTW89_ETSI][2] = 54,
[0][1][2][0][RTW89_MKK][2] = 54,
@@ -35012,6 +35310,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][2] = 42,
[0][1][2][0][RTW89_CHILE][2] = 62,
[0][1][2][0][RTW89_QATAR][2] = 54,
+ [0][1][2][0][RTW89_THAILAND][2] = 54,
[0][1][2][0][RTW89_FCC][4] = 62,
[0][1][2][0][RTW89_ETSI][4] = 54,
[0][1][2][0][RTW89_MKK][4] = 54,
@@ -35024,6 +35323,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][4] = 42,
[0][1][2][0][RTW89_CHILE][4] = 62,
[0][1][2][0][RTW89_QATAR][4] = 54,
+ [0][1][2][0][RTW89_THAILAND][4] = 54,
[0][1][2][0][RTW89_FCC][6] = 62,
[0][1][2][0][RTW89_ETSI][6] = 54,
[0][1][2][0][RTW89_MKK][6] = 50,
@@ -35036,6 +35336,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][6] = 42,
[0][1][2][0][RTW89_CHILE][6] = 62,
[0][1][2][0][RTW89_QATAR][6] = 54,
+ [0][1][2][0][RTW89_THAILAND][6] = 54,
[0][1][2][0][RTW89_FCC][8] = 62,
[0][1][2][0][RTW89_ETSI][8] = 54,
[0][1][2][0][RTW89_MKK][8] = 42,
@@ -35048,6 +35349,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][8] = 42,
[0][1][2][0][RTW89_CHILE][8] = 62,
[0][1][2][0][RTW89_QATAR][8] = 54,
+ [0][1][2][0][RTW89_THAILAND][8] = 54,
[0][1][2][0][RTW89_FCC][10] = 62,
[0][1][2][0][RTW89_ETSI][10] = 54,
[0][1][2][0][RTW89_MKK][10] = 54,
@@ -35060,6 +35362,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][10] = 42,
[0][1][2][0][RTW89_CHILE][10] = 62,
[0][1][2][0][RTW89_QATAR][10] = 54,
+ [0][1][2][0][RTW89_THAILAND][10] = 54,
[0][1][2][0][RTW89_FCC][12] = 62,
[0][1][2][0][RTW89_ETSI][12] = 54,
[0][1][2][0][RTW89_MKK][12] = 54,
@@ -35072,6 +35375,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][12] = 42,
[0][1][2][0][RTW89_CHILE][12] = 62,
[0][1][2][0][RTW89_QATAR][12] = 54,
+ [0][1][2][0][RTW89_THAILAND][12] = 54,
[0][1][2][0][RTW89_FCC][14] = 62,
[0][1][2][0][RTW89_ETSI][14] = 54,
[0][1][2][0][RTW89_MKK][14] = 54,
@@ -35084,6 +35388,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][14] = 42,
[0][1][2][0][RTW89_CHILE][14] = 62,
[0][1][2][0][RTW89_QATAR][14] = 54,
+ [0][1][2][0][RTW89_THAILAND][14] = 54,
[0][1][2][0][RTW89_FCC][15] = 60,
[0][1][2][0][RTW89_ETSI][15] = 54,
[0][1][2][0][RTW89_MKK][15] = 68,
@@ -35096,6 +35401,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][15] = 42,
[0][1][2][0][RTW89_CHILE][15] = 60,
[0][1][2][0][RTW89_QATAR][15] = 54,
+ [0][1][2][0][RTW89_THAILAND][15] = 54,
[0][1][2][0][RTW89_FCC][17] = 62,
[0][1][2][0][RTW89_ETSI][17] = 54,
[0][1][2][0][RTW89_MKK][17] = 68,
@@ -35108,6 +35414,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][17] = 42,
[0][1][2][0][RTW89_CHILE][17] = 60,
[0][1][2][0][RTW89_QATAR][17] = 54,
+ [0][1][2][0][RTW89_THAILAND][17] = 54,
[0][1][2][0][RTW89_FCC][19] = 62,
[0][1][2][0][RTW89_ETSI][19] = 54,
[0][1][2][0][RTW89_MKK][19] = 68,
@@ -35120,6 +35427,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][19] = 42,
[0][1][2][0][RTW89_CHILE][19] = 62,
[0][1][2][0][RTW89_QATAR][19] = 54,
+ [0][1][2][0][RTW89_THAILAND][19] = 54,
[0][1][2][0][RTW89_FCC][21] = 62,
[0][1][2][0][RTW89_ETSI][21] = 54,
[0][1][2][0][RTW89_MKK][21] = 68,
@@ -35132,6 +35440,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][21] = 42,
[0][1][2][0][RTW89_CHILE][21] = 62,
[0][1][2][0][RTW89_QATAR][21] = 54,
+ [0][1][2][0][RTW89_THAILAND][21] = 54,
[0][1][2][0][RTW89_FCC][23] = 62,
[0][1][2][0][RTW89_ETSI][23] = 54,
[0][1][2][0][RTW89_MKK][23] = 68,
@@ -35144,6 +35453,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][23] = 42,
[0][1][2][0][RTW89_CHILE][23] = 62,
[0][1][2][0][RTW89_QATAR][23] = 54,
+ [0][1][2][0][RTW89_THAILAND][23] = 54,
[0][1][2][0][RTW89_FCC][25] = 62,
[0][1][2][0][RTW89_ETSI][25] = 54,
[0][1][2][0][RTW89_MKK][25] = 68,
@@ -35156,6 +35466,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][25] = 42,
[0][1][2][0][RTW89_CHILE][25] = 62,
[0][1][2][0][RTW89_QATAR][25] = 54,
+ [0][1][2][0][RTW89_THAILAND][25] = 54,
[0][1][2][0][RTW89_FCC][27] = 62,
[0][1][2][0][RTW89_ETSI][27] = 54,
[0][1][2][0][RTW89_MKK][27] = 68,
@@ -35168,6 +35479,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][27] = 42,
[0][1][2][0][RTW89_CHILE][27] = 46,
[0][1][2][0][RTW89_QATAR][27] = 54,
+ [0][1][2][0][RTW89_THAILAND][27] = 54,
[0][1][2][0][RTW89_FCC][29] = 62,
[0][1][2][0][RTW89_ETSI][29] = 54,
[0][1][2][0][RTW89_MKK][29] = 68,
@@ -35180,6 +35492,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][29] = 42,
[0][1][2][0][RTW89_CHILE][29] = 46,
[0][1][2][0][RTW89_QATAR][29] = 54,
+ [0][1][2][0][RTW89_THAILAND][29] = 54,
[0][1][2][0][RTW89_FCC][31] = 62,
[0][1][2][0][RTW89_ETSI][31] = 54,
[0][1][2][0][RTW89_MKK][31] = 68,
@@ -35192,6 +35505,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][31] = 42,
[0][1][2][0][RTW89_CHILE][31] = 46,
[0][1][2][0][RTW89_QATAR][31] = 54,
+ [0][1][2][0][RTW89_THAILAND][31] = 54,
[0][1][2][0][RTW89_FCC][33] = 62,
[0][1][2][0][RTW89_ETSI][33] = 54,
[0][1][2][0][RTW89_MKK][33] = 68,
@@ -35204,6 +35518,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][33] = 42,
[0][1][2][0][RTW89_CHILE][33] = 46,
[0][1][2][0][RTW89_QATAR][33] = 54,
+ [0][1][2][0][RTW89_THAILAND][33] = 54,
[0][1][2][0][RTW89_FCC][35] = 46,
[0][1][2][0][RTW89_ETSI][35] = 54,
[0][1][2][0][RTW89_MKK][35] = 68,
@@ -35216,6 +35531,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][35] = 42,
[0][1][2][0][RTW89_CHILE][35] = 46,
[0][1][2][0][RTW89_QATAR][35] = 54,
+ [0][1][2][0][RTW89_THAILAND][35] = 54,
[0][1][2][0][RTW89_FCC][37] = 64,
[0][1][2][0][RTW89_ETSI][37] = 127,
[0][1][2][0][RTW89_MKK][37] = 68,
@@ -35228,66 +35544,72 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][37] = 127,
[0][1][2][0][RTW89_CHILE][37] = 64,
[0][1][2][0][RTW89_QATAR][37] = 127,
+ [0][1][2][0][RTW89_THAILAND][37] = 127,
[0][1][2][0][RTW89_FCC][38] = 72,
[0][1][2][0][RTW89_ETSI][38] = 18,
[0][1][2][0][RTW89_MKK][38] = 127,
[0][1][2][0][RTW89_IC][38] = 72,
[0][1][2][0][RTW89_KCC][38] = 56,
[0][1][2][0][RTW89_ACMA][38] = 70,
- [0][1][2][0][RTW89_CN][38] = 68,
+ [0][1][2][0][RTW89_CN][38] = 56,
[0][1][2][0][RTW89_UK][38] = 52,
[0][1][2][0][RTW89_MEXICO][38] = 72,
[0][1][2][0][RTW89_UKRAINE][38] = 18,
[0][1][2][0][RTW89_CHILE][38] = 70,
[0][1][2][0][RTW89_QATAR][38] = 18,
+ [0][1][2][0][RTW89_THAILAND][38] = 18,
[0][1][2][0][RTW89_FCC][40] = 72,
[0][1][2][0][RTW89_ETSI][40] = 18,
[0][1][2][0][RTW89_MKK][40] = 127,
[0][1][2][0][RTW89_IC][40] = 72,
[0][1][2][0][RTW89_KCC][40] = 56,
[0][1][2][0][RTW89_ACMA][40] = 70,
- [0][1][2][0][RTW89_CN][40] = 68,
+ [0][1][2][0][RTW89_CN][40] = 56,
[0][1][2][0][RTW89_UK][40] = 52,
[0][1][2][0][RTW89_MEXICO][40] = 72,
[0][1][2][0][RTW89_UKRAINE][40] = 18,
[0][1][2][0][RTW89_CHILE][40] = 70,
[0][1][2][0][RTW89_QATAR][40] = 18,
+ [0][1][2][0][RTW89_THAILAND][40] = 18,
[0][1][2][0][RTW89_FCC][42] = 72,
[0][1][2][0][RTW89_ETSI][42] = 18,
[0][1][2][0][RTW89_MKK][42] = 127,
[0][1][2][0][RTW89_IC][42] = 72,
[0][1][2][0][RTW89_KCC][42] = 56,
[0][1][2][0][RTW89_ACMA][42] = 70,
- [0][1][2][0][RTW89_CN][42] = 68,
+ [0][1][2][0][RTW89_CN][42] = 56,
[0][1][2][0][RTW89_UK][42] = 52,
[0][1][2][0][RTW89_MEXICO][42] = 72,
[0][1][2][0][RTW89_UKRAINE][42] = 18,
[0][1][2][0][RTW89_CHILE][42] = 70,
[0][1][2][0][RTW89_QATAR][42] = 18,
+ [0][1][2][0][RTW89_THAILAND][42] = 18,
[0][1][2][0][RTW89_FCC][44] = 72,
[0][1][2][0][RTW89_ETSI][44] = 18,
[0][1][2][0][RTW89_MKK][44] = 127,
[0][1][2][0][RTW89_IC][44] = 72,
[0][1][2][0][RTW89_KCC][44] = 56,
[0][1][2][0][RTW89_ACMA][44] = 70,
- [0][1][2][0][RTW89_CN][44] = 68,
+ [0][1][2][0][RTW89_CN][44] = 56,
[0][1][2][0][RTW89_UK][44] = 52,
[0][1][2][0][RTW89_MEXICO][44] = 72,
[0][1][2][0][RTW89_UKRAINE][44] = 18,
[0][1][2][0][RTW89_CHILE][44] = 70,
[0][1][2][0][RTW89_QATAR][44] = 18,
+ [0][1][2][0][RTW89_THAILAND][44] = 18,
[0][1][2][0][RTW89_FCC][46] = 72,
[0][1][2][0][RTW89_ETSI][46] = 18,
[0][1][2][0][RTW89_MKK][46] = 127,
[0][1][2][0][RTW89_IC][46] = 72,
[0][1][2][0][RTW89_KCC][46] = 56,
[0][1][2][0][RTW89_ACMA][46] = 70,
- [0][1][2][0][RTW89_CN][46] = 68,
+ [0][1][2][0][RTW89_CN][46] = 56,
[0][1][2][0][RTW89_UK][46] = 52,
[0][1][2][0][RTW89_MEXICO][46] = 72,
[0][1][2][0][RTW89_UKRAINE][46] = 18,
[0][1][2][0][RTW89_CHILE][46] = 70,
[0][1][2][0][RTW89_QATAR][46] = 18,
+ [0][1][2][0][RTW89_THAILAND][46] = 18,
[0][1][2][0][RTW89_FCC][48] = 48,
[0][1][2][0][RTW89_ETSI][48] = 127,
[0][1][2][0][RTW89_MKK][48] = 127,
@@ -35300,6 +35622,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][48] = 127,
[0][1][2][0][RTW89_CHILE][48] = 127,
[0][1][2][0][RTW89_QATAR][48] = 127,
+ [0][1][2][0][RTW89_THAILAND][48] = 127,
[0][1][2][0][RTW89_FCC][50] = 50,
[0][1][2][0][RTW89_ETSI][50] = 127,
[0][1][2][0][RTW89_MKK][50] = 127,
@@ -35312,6 +35635,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][50] = 127,
[0][1][2][0][RTW89_CHILE][50] = 127,
[0][1][2][0][RTW89_QATAR][50] = 127,
+ [0][1][2][0][RTW89_THAILAND][50] = 127,
[0][1][2][0][RTW89_FCC][52] = 48,
[0][1][2][0][RTW89_ETSI][52] = 127,
[0][1][2][0][RTW89_MKK][52] = 127,
@@ -35324,6 +35648,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_UKRAINE][52] = 127,
[0][1][2][0][RTW89_CHILE][52] = 127,
[0][1][2][0][RTW89_QATAR][52] = 127,
+ [0][1][2][0][RTW89_THAILAND][52] = 127,
[0][1][2][1][RTW89_FCC][0] = 60,
[0][1][2][1][RTW89_ETSI][0] = 40,
[0][1][2][1][RTW89_MKK][0] = 54,
@@ -35336,6 +35661,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][0] = 30,
[0][1][2][1][RTW89_CHILE][0] = 60,
[0][1][2][1][RTW89_QATAR][0] = 40,
+ [0][1][2][1][RTW89_THAILAND][0] = 40,
[0][1][2][1][RTW89_FCC][2] = 62,
[0][1][2][1][RTW89_ETSI][2] = 40,
[0][1][2][1][RTW89_MKK][2] = 54,
@@ -35348,6 +35674,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][2] = 30,
[0][1][2][1][RTW89_CHILE][2] = 60,
[0][1][2][1][RTW89_QATAR][2] = 40,
+ [0][1][2][1][RTW89_THAILAND][2] = 40,
[0][1][2][1][RTW89_FCC][4] = 62,
[0][1][2][1][RTW89_ETSI][4] = 40,
[0][1][2][1][RTW89_MKK][4] = 54,
@@ -35360,6 +35687,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][4] = 30,
[0][1][2][1][RTW89_CHILE][4] = 60,
[0][1][2][1][RTW89_QATAR][4] = 40,
+ [0][1][2][1][RTW89_THAILAND][4] = 40,
[0][1][2][1][RTW89_FCC][6] = 62,
[0][1][2][1][RTW89_ETSI][6] = 40,
[0][1][2][1][RTW89_MKK][6] = 50,
@@ -35372,6 +35700,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][6] = 30,
[0][1][2][1][RTW89_CHILE][6] = 60,
[0][1][2][1][RTW89_QATAR][6] = 40,
+ [0][1][2][1][RTW89_THAILAND][6] = 40,
[0][1][2][1][RTW89_FCC][8] = 62,
[0][1][2][1][RTW89_ETSI][8] = 40,
[0][1][2][1][RTW89_MKK][8] = 42,
@@ -35384,6 +35713,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][8] = 30,
[0][1][2][1][RTW89_CHILE][8] = 60,
[0][1][2][1][RTW89_QATAR][8] = 40,
+ [0][1][2][1][RTW89_THAILAND][8] = 40,
[0][1][2][1][RTW89_FCC][10] = 62,
[0][1][2][1][RTW89_ETSI][10] = 40,
[0][1][2][1][RTW89_MKK][10] = 54,
@@ -35396,6 +35726,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][10] = 30,
[0][1][2][1][RTW89_CHILE][10] = 60,
[0][1][2][1][RTW89_QATAR][10] = 40,
+ [0][1][2][1][RTW89_THAILAND][10] = 40,
[0][1][2][1][RTW89_FCC][12] = 62,
[0][1][2][1][RTW89_ETSI][12] = 40,
[0][1][2][1][RTW89_MKK][12] = 54,
@@ -35408,6 +35739,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][12] = 30,
[0][1][2][1][RTW89_CHILE][12] = 60,
[0][1][2][1][RTW89_QATAR][12] = 40,
+ [0][1][2][1][RTW89_THAILAND][12] = 40,
[0][1][2][1][RTW89_FCC][14] = 62,
[0][1][2][1][RTW89_ETSI][14] = 40,
[0][1][2][1][RTW89_MKK][14] = 54,
@@ -35420,6 +35752,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][14] = 30,
[0][1][2][1][RTW89_CHILE][14] = 60,
[0][1][2][1][RTW89_QATAR][14] = 40,
+ [0][1][2][1][RTW89_THAILAND][14] = 40,
[0][1][2][1][RTW89_FCC][15] = 60,
[0][1][2][1][RTW89_ETSI][15] = 40,
[0][1][2][1][RTW89_MKK][15] = 68,
@@ -35432,6 +35765,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][15] = 30,
[0][1][2][1][RTW89_CHILE][15] = 60,
[0][1][2][1][RTW89_QATAR][15] = 40,
+ [0][1][2][1][RTW89_THAILAND][15] = 40,
[0][1][2][1][RTW89_FCC][17] = 62,
[0][1][2][1][RTW89_ETSI][17] = 40,
[0][1][2][1][RTW89_MKK][17] = 68,
@@ -35444,6 +35778,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][17] = 30,
[0][1][2][1][RTW89_CHILE][17] = 60,
[0][1][2][1][RTW89_QATAR][17] = 40,
+ [0][1][2][1][RTW89_THAILAND][17] = 40,
[0][1][2][1][RTW89_FCC][19] = 62,
[0][1][2][1][RTW89_ETSI][19] = 40,
[0][1][2][1][RTW89_MKK][19] = 68,
@@ -35456,6 +35791,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][19] = 30,
[0][1][2][1][RTW89_CHILE][19] = 60,
[0][1][2][1][RTW89_QATAR][19] = 40,
+ [0][1][2][1][RTW89_THAILAND][19] = 40,
[0][1][2][1][RTW89_FCC][21] = 62,
[0][1][2][1][RTW89_ETSI][21] = 40,
[0][1][2][1][RTW89_MKK][21] = 68,
@@ -35468,6 +35804,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][21] = 30,
[0][1][2][1][RTW89_CHILE][21] = 60,
[0][1][2][1][RTW89_QATAR][21] = 40,
+ [0][1][2][1][RTW89_THAILAND][21] = 40,
[0][1][2][1][RTW89_FCC][23] = 62,
[0][1][2][1][RTW89_ETSI][23] = 40,
[0][1][2][1][RTW89_MKK][23] = 68,
@@ -35480,6 +35817,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][23] = 30,
[0][1][2][1][RTW89_CHILE][23] = 60,
[0][1][2][1][RTW89_QATAR][23] = 40,
+ [0][1][2][1][RTW89_THAILAND][23] = 40,
[0][1][2][1][RTW89_FCC][25] = 46,
[0][1][2][1][RTW89_ETSI][25] = 40,
[0][1][2][1][RTW89_MKK][25] = 68,
@@ -35492,6 +35830,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][25] = 30,
[0][1][2][1][RTW89_CHILE][25] = 60,
[0][1][2][1][RTW89_QATAR][25] = 40,
+ [0][1][2][1][RTW89_THAILAND][25] = 40,
[0][1][2][1][RTW89_FCC][27] = 46,
[0][1][2][1][RTW89_ETSI][27] = 40,
[0][1][2][1][RTW89_MKK][27] = 68,
@@ -35504,6 +35843,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][27] = 30,
[0][1][2][1][RTW89_CHILE][27] = 46,
[0][1][2][1][RTW89_QATAR][27] = 40,
+ [0][1][2][1][RTW89_THAILAND][27] = 40,
[0][1][2][1][RTW89_FCC][29] = 46,
[0][1][2][1][RTW89_ETSI][29] = 40,
[0][1][2][1][RTW89_MKK][29] = 68,
@@ -35516,6 +35856,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][29] = 30,
[0][1][2][1][RTW89_CHILE][29] = 46,
[0][1][2][1][RTW89_QATAR][29] = 40,
+ [0][1][2][1][RTW89_THAILAND][29] = 40,
[0][1][2][1][RTW89_FCC][31] = 46,
[0][1][2][1][RTW89_ETSI][31] = 40,
[0][1][2][1][RTW89_MKK][31] = 68,
@@ -35528,6 +35869,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][31] = 30,
[0][1][2][1][RTW89_CHILE][31] = 46,
[0][1][2][1][RTW89_QATAR][31] = 40,
+ [0][1][2][1][RTW89_THAILAND][31] = 40,
[0][1][2][1][RTW89_FCC][33] = 46,
[0][1][2][1][RTW89_ETSI][33] = 40,
[0][1][2][1][RTW89_MKK][33] = 68,
@@ -35540,6 +35882,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][33] = 30,
[0][1][2][1][RTW89_CHILE][33] = 46,
[0][1][2][1][RTW89_QATAR][33] = 40,
+ [0][1][2][1][RTW89_THAILAND][33] = 40,
[0][1][2][1][RTW89_FCC][35] = 46,
[0][1][2][1][RTW89_ETSI][35] = 40,
[0][1][2][1][RTW89_MKK][35] = 68,
@@ -35552,6 +35895,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][35] = 30,
[0][1][2][1][RTW89_CHILE][35] = 46,
[0][1][2][1][RTW89_QATAR][35] = 40,
+ [0][1][2][1][RTW89_THAILAND][35] = 40,
[0][1][2][1][RTW89_FCC][37] = 64,
[0][1][2][1][RTW89_ETSI][37] = 127,
[0][1][2][1][RTW89_MKK][37] = 68,
@@ -35564,66 +35908,72 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][37] = 127,
[0][1][2][1][RTW89_CHILE][37] = 64,
[0][1][2][1][RTW89_QATAR][37] = 127,
+ [0][1][2][1][RTW89_THAILAND][37] = 127,
[0][1][2][1][RTW89_FCC][38] = 72,
[0][1][2][1][RTW89_ETSI][38] = 6,
[0][1][2][1][RTW89_MKK][38] = 127,
[0][1][2][1][RTW89_IC][38] = 72,
[0][1][2][1][RTW89_KCC][38] = 56,
[0][1][2][1][RTW89_ACMA][38] = 70,
- [0][1][2][1][RTW89_CN][38] = 60,
+ [0][1][2][1][RTW89_CN][38] = 50,
[0][1][2][1][RTW89_UK][38] = 40,
[0][1][2][1][RTW89_MEXICO][38] = 72,
[0][1][2][1][RTW89_UKRAINE][38] = 6,
[0][1][2][1][RTW89_CHILE][38] = 60,
[0][1][2][1][RTW89_QATAR][38] = 6,
+ [0][1][2][1][RTW89_THAILAND][38] = 6,
[0][1][2][1][RTW89_FCC][40] = 72,
[0][1][2][1][RTW89_ETSI][40] = 6,
[0][1][2][1][RTW89_MKK][40] = 127,
[0][1][2][1][RTW89_IC][40] = 72,
[0][1][2][1][RTW89_KCC][40] = 56,
[0][1][2][1][RTW89_ACMA][40] = 70,
- [0][1][2][1][RTW89_CN][40] = 60,
+ [0][1][2][1][RTW89_CN][40] = 50,
[0][1][2][1][RTW89_UK][40] = 40,
[0][1][2][1][RTW89_MEXICO][40] = 72,
[0][1][2][1][RTW89_UKRAINE][40] = 6,
[0][1][2][1][RTW89_CHILE][40] = 60,
[0][1][2][1][RTW89_QATAR][40] = 6,
+ [0][1][2][1][RTW89_THAILAND][40] = 6,
[0][1][2][1][RTW89_FCC][42] = 72,
[0][1][2][1][RTW89_ETSI][42] = 6,
[0][1][2][1][RTW89_MKK][42] = 127,
[0][1][2][1][RTW89_IC][42] = 72,
[0][1][2][1][RTW89_KCC][42] = 56,
[0][1][2][1][RTW89_ACMA][42] = 70,
- [0][1][2][1][RTW89_CN][42] = 60,
+ [0][1][2][1][RTW89_CN][42] = 50,
[0][1][2][1][RTW89_UK][42] = 40,
[0][1][2][1][RTW89_MEXICO][42] = 72,
[0][1][2][1][RTW89_UKRAINE][42] = 6,
[0][1][2][1][RTW89_CHILE][42] = 60,
[0][1][2][1][RTW89_QATAR][42] = 6,
+ [0][1][2][1][RTW89_THAILAND][42] = 6,
[0][1][2][1][RTW89_FCC][44] = 72,
[0][1][2][1][RTW89_ETSI][44] = 6,
[0][1][2][1][RTW89_MKK][44] = 127,
[0][1][2][1][RTW89_IC][44] = 72,
[0][1][2][1][RTW89_KCC][44] = 56,
[0][1][2][1][RTW89_ACMA][44] = 70,
- [0][1][2][1][RTW89_CN][44] = 54,
+ [0][1][2][1][RTW89_CN][44] = 50,
[0][1][2][1][RTW89_UK][44] = 40,
[0][1][2][1][RTW89_MEXICO][44] = 72,
[0][1][2][1][RTW89_UKRAINE][44] = 6,
[0][1][2][1][RTW89_CHILE][44] = 60,
[0][1][2][1][RTW89_QATAR][44] = 6,
+ [0][1][2][1][RTW89_THAILAND][44] = 6,
[0][1][2][1][RTW89_FCC][46] = 72,
[0][1][2][1][RTW89_ETSI][46] = 6,
[0][1][2][1][RTW89_MKK][46] = 127,
[0][1][2][1][RTW89_IC][46] = 72,
[0][1][2][1][RTW89_KCC][46] = 56,
[0][1][2][1][RTW89_ACMA][46] = 70,
- [0][1][2][1][RTW89_CN][46] = 54,
+ [0][1][2][1][RTW89_CN][46] = 50,
[0][1][2][1][RTW89_UK][46] = 40,
[0][1][2][1][RTW89_MEXICO][46] = 72,
[0][1][2][1][RTW89_UKRAINE][46] = 6,
[0][1][2][1][RTW89_CHILE][46] = 60,
[0][1][2][1][RTW89_QATAR][46] = 6,
+ [0][1][2][1][RTW89_THAILAND][46] = 6,
[0][1][2][1][RTW89_FCC][48] = 48,
[0][1][2][1][RTW89_ETSI][48] = 127,
[0][1][2][1][RTW89_MKK][48] = 127,
@@ -35636,6 +35986,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][48] = 127,
[0][1][2][1][RTW89_CHILE][48] = 127,
[0][1][2][1][RTW89_QATAR][48] = 127,
+ [0][1][2][1][RTW89_THAILAND][48] = 127,
[0][1][2][1][RTW89_FCC][50] = 50,
[0][1][2][1][RTW89_ETSI][50] = 127,
[0][1][2][1][RTW89_MKK][50] = 127,
@@ -35648,6 +35999,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][50] = 127,
[0][1][2][1][RTW89_CHILE][50] = 127,
[0][1][2][1][RTW89_QATAR][50] = 127,
+ [0][1][2][1][RTW89_THAILAND][50] = 127,
[0][1][2][1][RTW89_FCC][52] = 48,
[0][1][2][1][RTW89_ETSI][52] = 127,
[0][1][2][1][RTW89_MKK][52] = 127,
@@ -35660,6 +36012,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_UKRAINE][52] = 127,
[0][1][2][1][RTW89_CHILE][52] = 127,
[0][1][2][1][RTW89_QATAR][52] = 127,
+ [0][1][2][1][RTW89_THAILAND][52] = 127,
[1][0][2][0][RTW89_FCC][1] = 64,
[1][0][2][0][RTW89_ETSI][1] = 66,
[1][0][2][0][RTW89_MKK][1] = 66,
@@ -35672,6 +36025,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][1] = 54,
[1][0][2][0][RTW89_CHILE][1] = 62,
[1][0][2][0][RTW89_QATAR][1] = 66,
+ [1][0][2][0][RTW89_THAILAND][1] = 66,
[1][0][2][0][RTW89_FCC][5] = 68,
[1][0][2][0][RTW89_ETSI][5] = 66,
[1][0][2][0][RTW89_MKK][5] = 66,
@@ -35684,6 +36038,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][5] = 54,
[1][0][2][0][RTW89_CHILE][5] = 66,
[1][0][2][0][RTW89_QATAR][5] = 66,
+ [1][0][2][0][RTW89_THAILAND][5] = 66,
[1][0][2][0][RTW89_FCC][9] = 68,
[1][0][2][0][RTW89_ETSI][9] = 66,
[1][0][2][0][RTW89_MKK][9] = 66,
@@ -35696,6 +36051,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][9] = 54,
[1][0][2][0][RTW89_CHILE][9] = 66,
[1][0][2][0][RTW89_QATAR][9] = 66,
+ [1][0][2][0][RTW89_THAILAND][9] = 66,
[1][0][2][0][RTW89_FCC][13] = 60,
[1][0][2][0][RTW89_ETSI][13] = 66,
[1][0][2][0][RTW89_MKK][13] = 66,
@@ -35708,6 +36064,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][13] = 54,
[1][0][2][0][RTW89_CHILE][13] = 60,
[1][0][2][0][RTW89_QATAR][13] = 66,
+ [1][0][2][0][RTW89_THAILAND][13] = 66,
[1][0][2][0][RTW89_FCC][16] = 64,
[1][0][2][0][RTW89_ETSI][16] = 66,
[1][0][2][0][RTW89_MKK][16] = 66,
@@ -35720,6 +36077,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][16] = 54,
[1][0][2][0][RTW89_CHILE][16] = 64,
[1][0][2][0][RTW89_QATAR][16] = 66,
+ [1][0][2][0][RTW89_THAILAND][16] = 66,
[1][0][2][0][RTW89_FCC][20] = 68,
[1][0][2][0][RTW89_ETSI][20] = 66,
[1][0][2][0][RTW89_MKK][20] = 66,
@@ -35732,6 +36090,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][20] = 54,
[1][0][2][0][RTW89_CHILE][20] = 66,
[1][0][2][0][RTW89_QATAR][20] = 66,
+ [1][0][2][0][RTW89_THAILAND][20] = 66,
[1][0][2][0][RTW89_FCC][24] = 68,
[1][0][2][0][RTW89_ETSI][24] = 66,
[1][0][2][0][RTW89_MKK][24] = 66,
@@ -35744,6 +36103,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][24] = 54,
[1][0][2][0][RTW89_CHILE][24] = 66,
[1][0][2][0][RTW89_QATAR][24] = 66,
+ [1][0][2][0][RTW89_THAILAND][24] = 66,
[1][0][2][0][RTW89_FCC][28] = 68,
[1][0][2][0][RTW89_ETSI][28] = 66,
[1][0][2][0][RTW89_MKK][28] = 66,
@@ -35756,6 +36116,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][28] = 54,
[1][0][2][0][RTW89_CHILE][28] = 62,
[1][0][2][0][RTW89_QATAR][28] = 66,
+ [1][0][2][0][RTW89_THAILAND][28] = 66,
[1][0][2][0][RTW89_FCC][32] = 62,
[1][0][2][0][RTW89_ETSI][32] = 66,
[1][0][2][0][RTW89_MKK][32] = 66,
@@ -35768,6 +36129,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][32] = 54,
[1][0][2][0][RTW89_CHILE][32] = 62,
[1][0][2][0][RTW89_QATAR][32] = 66,
+ [1][0][2][0][RTW89_THAILAND][32] = 66,
[1][0][2][0][RTW89_FCC][36] = 68,
[1][0][2][0][RTW89_ETSI][36] = 127,
[1][0][2][0][RTW89_MKK][36] = 66,
@@ -35780,30 +36142,33 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][36] = 127,
[1][0][2][0][RTW89_CHILE][36] = 66,
[1][0][2][0][RTW89_QATAR][36] = 127,
+ [1][0][2][0][RTW89_THAILAND][36] = 127,
[1][0][2][0][RTW89_FCC][39] = 68,
[1][0][2][0][RTW89_ETSI][39] = 30,
[1][0][2][0][RTW89_MKK][39] = 127,
[1][0][2][0][RTW89_IC][39] = 68,
[1][0][2][0][RTW89_KCC][39] = 66,
[1][0][2][0][RTW89_ACMA][39] = 66,
- [1][0][2][0][RTW89_CN][39] = 62,
+ [1][0][2][0][RTW89_CN][39] = 52,
[1][0][2][0][RTW89_UK][39] = 64,
[1][0][2][0][RTW89_MEXICO][39] = 68,
[1][0][2][0][RTW89_UKRAINE][39] = 30,
[1][0][2][0][RTW89_CHILE][39] = 66,
[1][0][2][0][RTW89_QATAR][39] = 30,
+ [1][0][2][0][RTW89_THAILAND][39] = 30,
[1][0][2][0][RTW89_FCC][43] = 68,
[1][0][2][0][RTW89_ETSI][43] = 30,
[1][0][2][0][RTW89_MKK][43] = 127,
[1][0][2][0][RTW89_IC][43] = 68,
[1][0][2][0][RTW89_KCC][43] = 66,
[1][0][2][0][RTW89_ACMA][43] = 66,
- [1][0][2][0][RTW89_CN][43] = 66,
+ [1][0][2][0][RTW89_CN][43] = 52,
[1][0][2][0][RTW89_UK][43] = 64,
[1][0][2][0][RTW89_MEXICO][43] = 68,
[1][0][2][0][RTW89_UKRAINE][43] = 30,
[1][0][2][0][RTW89_CHILE][43] = 66,
[1][0][2][0][RTW89_QATAR][43] = 30,
+ [1][0][2][0][RTW89_THAILAND][43] = 30,
[1][0][2][0][RTW89_FCC][47] = 68,
[1][0][2][0][RTW89_ETSI][47] = 127,
[1][0][2][0][RTW89_MKK][47] = 127,
@@ -35816,6 +36181,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][47] = 127,
[1][0][2][0][RTW89_CHILE][47] = 127,
[1][0][2][0][RTW89_QATAR][47] = 127,
+ [1][0][2][0][RTW89_THAILAND][47] = 127,
[1][0][2][0][RTW89_FCC][51] = 68,
[1][0][2][0][RTW89_ETSI][51] = 127,
[1][0][2][0][RTW89_MKK][51] = 127,
@@ -35828,6 +36194,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_UKRAINE][51] = 127,
[1][0][2][0][RTW89_CHILE][51] = 127,
[1][0][2][0][RTW89_QATAR][51] = 127,
+ [1][0][2][0][RTW89_THAILAND][51] = 127,
[1][1][2][0][RTW89_FCC][1] = 54,
[1][1][2][0][RTW89_ETSI][1] = 54,
[1][1][2][0][RTW89_MKK][1] = 48,
@@ -35840,6 +36207,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][1] = 42,
[1][1][2][0][RTW89_CHILE][1] = 54,
[1][1][2][0][RTW89_QATAR][1] = 54,
+ [1][1][2][0][RTW89_THAILAND][1] = 54,
[1][1][2][0][RTW89_FCC][5] = 68,
[1][1][2][0][RTW89_ETSI][5] = 54,
[1][1][2][0][RTW89_MKK][5] = 52,
@@ -35852,6 +36220,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][5] = 42,
[1][1][2][0][RTW89_CHILE][5] = 66,
[1][1][2][0][RTW89_QATAR][5] = 54,
+ [1][1][2][0][RTW89_THAILAND][5] = 54,
[1][1][2][0][RTW89_FCC][9] = 68,
[1][1][2][0][RTW89_ETSI][9] = 54,
[1][1][2][0][RTW89_MKK][9] = 52,
@@ -35864,6 +36233,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][9] = 42,
[1][1][2][0][RTW89_CHILE][9] = 66,
[1][1][2][0][RTW89_QATAR][9] = 54,
+ [1][1][2][0][RTW89_THAILAND][9] = 54,
[1][1][2][0][RTW89_FCC][13] = 54,
[1][1][2][0][RTW89_ETSI][13] = 54,
[1][1][2][0][RTW89_MKK][13] = 52,
@@ -35876,6 +36246,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][13] = 42,
[1][1][2][0][RTW89_CHILE][13] = 54,
[1][1][2][0][RTW89_QATAR][13] = 54,
+ [1][1][2][0][RTW89_THAILAND][13] = 54,
[1][1][2][0][RTW89_FCC][16] = 56,
[1][1][2][0][RTW89_ETSI][16] = 54,
[1][1][2][0][RTW89_MKK][16] = 66,
@@ -35888,6 +36259,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][16] = 42,
[1][1][2][0][RTW89_CHILE][16] = 54,
[1][1][2][0][RTW89_QATAR][16] = 54,
+ [1][1][2][0][RTW89_THAILAND][16] = 54,
[1][1][2][0][RTW89_FCC][20] = 68,
[1][1][2][0][RTW89_ETSI][20] = 54,
[1][1][2][0][RTW89_MKK][20] = 66,
@@ -35900,6 +36272,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][20] = 42,
[1][1][2][0][RTW89_CHILE][20] = 66,
[1][1][2][0][RTW89_QATAR][20] = 54,
+ [1][1][2][0][RTW89_THAILAND][20] = 54,
[1][1][2][0][RTW89_FCC][24] = 68,
[1][1][2][0][RTW89_ETSI][24] = 54,
[1][1][2][0][RTW89_MKK][24] = 66,
@@ -35912,6 +36285,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][24] = 42,
[1][1][2][0][RTW89_CHILE][24] = 66,
[1][1][2][0][RTW89_QATAR][24] = 54,
+ [1][1][2][0][RTW89_THAILAND][24] = 54,
[1][1][2][0][RTW89_FCC][28] = 68,
[1][1][2][0][RTW89_ETSI][28] = 54,
[1][1][2][0][RTW89_MKK][28] = 66,
@@ -35924,6 +36298,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][28] = 42,
[1][1][2][0][RTW89_CHILE][28] = 54,
[1][1][2][0][RTW89_QATAR][28] = 54,
+ [1][1][2][0][RTW89_THAILAND][28] = 54,
[1][1][2][0][RTW89_FCC][32] = 56,
[1][1][2][0][RTW89_ETSI][32] = 54,
[1][1][2][0][RTW89_MKK][32] = 66,
@@ -35936,6 +36311,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][32] = 42,
[1][1][2][0][RTW89_CHILE][32] = 54,
[1][1][2][0][RTW89_QATAR][32] = 54,
+ [1][1][2][0][RTW89_THAILAND][32] = 54,
[1][1][2][0][RTW89_FCC][36] = 68,
[1][1][2][0][RTW89_ETSI][36] = 127,
[1][1][2][0][RTW89_MKK][36] = 66,
@@ -35948,30 +36324,33 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][36] = 127,
[1][1][2][0][RTW89_CHILE][36] = 66,
[1][1][2][0][RTW89_QATAR][36] = 127,
+ [1][1][2][0][RTW89_THAILAND][36] = 127,
[1][1][2][0][RTW89_FCC][39] = 68,
[1][1][2][0][RTW89_ETSI][39] = 18,
[1][1][2][0][RTW89_MKK][39] = 127,
[1][1][2][0][RTW89_IC][39] = 68,
[1][1][2][0][RTW89_KCC][39] = 56,
[1][1][2][0][RTW89_ACMA][39] = 66,
- [1][1][2][0][RTW89_CN][39] = 62,
+ [1][1][2][0][RTW89_CN][39] = 52,
[1][1][2][0][RTW89_UK][39] = 52,
[1][1][2][0][RTW89_MEXICO][39] = 68,
[1][1][2][0][RTW89_UKRAINE][39] = 18,
[1][1][2][0][RTW89_CHILE][39] = 66,
[1][1][2][0][RTW89_QATAR][39] = 18,
+ [1][1][2][0][RTW89_THAILAND][39] = 18,
[1][1][2][0][RTW89_FCC][43] = 68,
[1][1][2][0][RTW89_ETSI][43] = 18,
[1][1][2][0][RTW89_MKK][43] = 127,
[1][1][2][0][RTW89_IC][43] = 68,
[1][1][2][0][RTW89_KCC][43] = 56,
[1][1][2][0][RTW89_ACMA][43] = 66,
- [1][1][2][0][RTW89_CN][43] = 66,
+ [1][1][2][0][RTW89_CN][43] = 52,
[1][1][2][0][RTW89_UK][43] = 52,
[1][1][2][0][RTW89_MEXICO][43] = 68,
[1][1][2][0][RTW89_UKRAINE][43] = 18,
[1][1][2][0][RTW89_CHILE][43] = 66,
[1][1][2][0][RTW89_QATAR][43] = 18,
+ [1][1][2][0][RTW89_THAILAND][43] = 18,
[1][1][2][0][RTW89_FCC][47] = 62,
[1][1][2][0][RTW89_ETSI][47] = 127,
[1][1][2][0][RTW89_MKK][47] = 127,
@@ -35984,6 +36363,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][47] = 127,
[1][1][2][0][RTW89_CHILE][47] = 127,
[1][1][2][0][RTW89_QATAR][47] = 127,
+ [1][1][2][0][RTW89_THAILAND][47] = 127,
[1][1][2][0][RTW89_FCC][51] = 60,
[1][1][2][0][RTW89_ETSI][51] = 127,
[1][1][2][0][RTW89_MKK][51] = 127,
@@ -35996,6 +36376,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_UKRAINE][51] = 127,
[1][1][2][0][RTW89_CHILE][51] = 127,
[1][1][2][0][RTW89_QATAR][51] = 127,
+ [1][1][2][0][RTW89_THAILAND][51] = 127,
[1][1][2][1][RTW89_FCC][1] = 54,
[1][1][2][1][RTW89_ETSI][1] = 40,
[1][1][2][1][RTW89_MKK][1] = 48,
@@ -36008,6 +36389,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][1] = 30,
[1][1][2][1][RTW89_CHILE][1] = 54,
[1][1][2][1][RTW89_QATAR][1] = 40,
+ [1][1][2][1][RTW89_THAILAND][1] = 40,
[1][1][2][1][RTW89_FCC][5] = 68,
[1][1][2][1][RTW89_ETSI][5] = 40,
[1][1][2][1][RTW89_MKK][5] = 52,
@@ -36020,6 +36402,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][5] = 30,
[1][1][2][1][RTW89_CHILE][5] = 60,
[1][1][2][1][RTW89_QATAR][5] = 40,
+ [1][1][2][1][RTW89_THAILAND][5] = 40,
[1][1][2][1][RTW89_FCC][9] = 68,
[1][1][2][1][RTW89_ETSI][9] = 40,
[1][1][2][1][RTW89_MKK][9] = 52,
@@ -36032,6 +36415,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][9] = 30,
[1][1][2][1][RTW89_CHILE][9] = 60,
[1][1][2][1][RTW89_QATAR][9] = 40,
+ [1][1][2][1][RTW89_THAILAND][9] = 40,
[1][1][2][1][RTW89_FCC][13] = 54,
[1][1][2][1][RTW89_ETSI][13] = 40,
[1][1][2][1][RTW89_MKK][13] = 52,
@@ -36044,6 +36428,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][13] = 30,
[1][1][2][1][RTW89_CHILE][13] = 54,
[1][1][2][1][RTW89_QATAR][13] = 40,
+ [1][1][2][1][RTW89_THAILAND][13] = 40,
[1][1][2][1][RTW89_FCC][16] = 56,
[1][1][2][1][RTW89_ETSI][16] = 40,
[1][1][2][1][RTW89_MKK][16] = 66,
@@ -36056,6 +36441,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][16] = 30,
[1][1][2][1][RTW89_CHILE][16] = 54,
[1][1][2][1][RTW89_QATAR][16] = 40,
+ [1][1][2][1][RTW89_THAILAND][16] = 40,
[1][1][2][1][RTW89_FCC][20] = 68,
[1][1][2][1][RTW89_ETSI][20] = 40,
[1][1][2][1][RTW89_MKK][20] = 66,
@@ -36068,6 +36454,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][20] = 30,
[1][1][2][1][RTW89_CHILE][20] = 60,
[1][1][2][1][RTW89_QATAR][20] = 40,
+ [1][1][2][1][RTW89_THAILAND][20] = 40,
[1][1][2][1][RTW89_FCC][24] = 68,
[1][1][2][1][RTW89_ETSI][24] = 40,
[1][1][2][1][RTW89_MKK][24] = 66,
@@ -36080,6 +36467,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][24] = 30,
[1][1][2][1][RTW89_CHILE][24] = 60,
[1][1][2][1][RTW89_QATAR][24] = 40,
+ [1][1][2][1][RTW89_THAILAND][24] = 40,
[1][1][2][1][RTW89_FCC][28] = 68,
[1][1][2][1][RTW89_ETSI][28] = 40,
[1][1][2][1][RTW89_MKK][28] = 66,
@@ -36092,6 +36480,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][28] = 30,
[1][1][2][1][RTW89_CHILE][28] = 54,
[1][1][2][1][RTW89_QATAR][28] = 40,
+ [1][1][2][1][RTW89_THAILAND][28] = 40,
[1][1][2][1][RTW89_FCC][32] = 56,
[1][1][2][1][RTW89_ETSI][32] = 40,
[1][1][2][1][RTW89_MKK][32] = 66,
@@ -36104,6 +36493,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][32] = 30,
[1][1][2][1][RTW89_CHILE][32] = 54,
[1][1][2][1][RTW89_QATAR][32] = 40,
+ [1][1][2][1][RTW89_THAILAND][32] = 40,
[1][1][2][1][RTW89_FCC][36] = 68,
[1][1][2][1][RTW89_ETSI][36] = 127,
[1][1][2][1][RTW89_MKK][36] = 66,
@@ -36116,18 +36506,20 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][36] = 127,
[1][1][2][1][RTW89_CHILE][36] = 66,
[1][1][2][1][RTW89_QATAR][36] = 127,
+ [1][1][2][1][RTW89_THAILAND][36] = 127,
[1][1][2][1][RTW89_FCC][39] = 68,
[1][1][2][1][RTW89_ETSI][39] = 6,
[1][1][2][1][RTW89_MKK][39] = 127,
[1][1][2][1][RTW89_IC][39] = 68,
[1][1][2][1][RTW89_KCC][39] = 56,
[1][1][2][1][RTW89_ACMA][39] = 66,
- [1][1][2][1][RTW89_CN][39] = 60,
+ [1][1][2][1][RTW89_CN][39] = 52,
[1][1][2][1][RTW89_UK][39] = 40,
[1][1][2][1][RTW89_MEXICO][39] = 68,
[1][1][2][1][RTW89_UKRAINE][39] = 6,
[1][1][2][1][RTW89_CHILE][39] = 60,
[1][1][2][1][RTW89_QATAR][39] = 6,
+ [1][1][2][1][RTW89_THAILAND][39] = 6,
[1][1][2][1][RTW89_FCC][43] = 68,
[1][1][2][1][RTW89_ETSI][43] = 6,
[1][1][2][1][RTW89_MKK][43] = 127,
@@ -36140,6 +36532,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][43] = 6,
[1][1][2][1][RTW89_CHILE][43] = 60,
[1][1][2][1][RTW89_QATAR][43] = 6,
+ [1][1][2][1][RTW89_THAILAND][43] = 6,
[1][1][2][1][RTW89_FCC][47] = 62,
[1][1][2][1][RTW89_ETSI][47] = 127,
[1][1][2][1][RTW89_MKK][47] = 127,
@@ -36152,6 +36545,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][47] = 127,
[1][1][2][1][RTW89_CHILE][47] = 127,
[1][1][2][1][RTW89_QATAR][47] = 127,
+ [1][1][2][1][RTW89_THAILAND][47] = 127,
[1][1][2][1][RTW89_FCC][51] = 60,
[1][1][2][1][RTW89_ETSI][51] = 127,
[1][1][2][1][RTW89_MKK][51] = 127,
@@ -36164,6 +36558,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_UKRAINE][51] = 127,
[1][1][2][1][RTW89_CHILE][51] = 127,
[1][1][2][1][RTW89_QATAR][51] = 127,
+ [1][1][2][1][RTW89_THAILAND][51] = 127,
[2][0][2][0][RTW89_FCC][3] = 58,
[2][0][2][0][RTW89_ETSI][3] = 60,
[2][0][2][0][RTW89_MKK][3] = 60,
@@ -36176,6 +36571,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_UKRAINE][3] = 54,
[2][0][2][0][RTW89_CHILE][3] = 58,
[2][0][2][0][RTW89_QATAR][3] = 60,
+ [2][0][2][0][RTW89_THAILAND][3] = 60,
[2][0][2][0][RTW89_FCC][11] = 50,
[2][0][2][0][RTW89_ETSI][11] = 60,
[2][0][2][0][RTW89_MKK][11] = 60,
@@ -36188,6 +36584,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_UKRAINE][11] = 54,
[2][0][2][0][RTW89_CHILE][11] = 50,
[2][0][2][0][RTW89_QATAR][11] = 60,
+ [2][0][2][0][RTW89_THAILAND][11] = 60,
[2][0][2][0][RTW89_FCC][18] = 60,
[2][0][2][0][RTW89_ETSI][18] = 60,
[2][0][2][0][RTW89_MKK][18] = 60,
@@ -36200,6 +36597,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_UKRAINE][18] = 54,
[2][0][2][0][RTW89_CHILE][18] = 60,
[2][0][2][0][RTW89_QATAR][18] = 60,
+ [2][0][2][0][RTW89_THAILAND][18] = 60,
[2][0][2][0][RTW89_FCC][26] = 62,
[2][0][2][0][RTW89_ETSI][26] = 60,
[2][0][2][0][RTW89_MKK][26] = 60,
@@ -36212,6 +36610,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_UKRAINE][26] = 54,
[2][0][2][0][RTW89_CHILE][26] = 60,
[2][0][2][0][RTW89_QATAR][26] = 60,
+ [2][0][2][0][RTW89_THAILAND][26] = 60,
[2][0][2][0][RTW89_FCC][34] = 62,
[2][0][2][0][RTW89_ETSI][34] = 127,
[2][0][2][0][RTW89_MKK][34] = 60,
@@ -36224,18 +36623,20 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_UKRAINE][34] = 127,
[2][0][2][0][RTW89_CHILE][34] = 60,
[2][0][2][0][RTW89_QATAR][34] = 127,
+ [2][0][2][0][RTW89_THAILAND][34] = 127,
[2][0][2][0][RTW89_FCC][41] = 62,
[2][0][2][0][RTW89_ETSI][41] = 30,
[2][0][2][0][RTW89_MKK][41] = 127,
[2][0][2][0][RTW89_IC][41] = 62,
[2][0][2][0][RTW89_KCC][41] = 58,
[2][0][2][0][RTW89_ACMA][41] = 60,
- [2][0][2][0][RTW89_CN][41] = 62,
+ [2][0][2][0][RTW89_CN][41] = 42,
[2][0][2][0][RTW89_UK][41] = 60,
[2][0][2][0][RTW89_MEXICO][41] = 62,
[2][0][2][0][RTW89_UKRAINE][41] = 30,
[2][0][2][0][RTW89_CHILE][41] = 60,
[2][0][2][0][RTW89_QATAR][41] = 30,
+ [2][0][2][0][RTW89_THAILAND][41] = 30,
[2][0][2][0][RTW89_FCC][49] = 62,
[2][0][2][0][RTW89_ETSI][49] = 127,
[2][0][2][0][RTW89_MKK][49] = 127,
@@ -36248,6 +36649,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_UKRAINE][49] = 127,
[2][0][2][0][RTW89_CHILE][49] = 127,
[2][0][2][0][RTW89_QATAR][49] = 127,
+ [2][0][2][0][RTW89_THAILAND][49] = 127,
[2][1][2][0][RTW89_FCC][3] = 48,
[2][1][2][0][RTW89_ETSI][3] = 54,
[2][1][2][0][RTW89_MKK][3] = 56,
@@ -36260,18 +36662,20 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_UKRAINE][3] = 42,
[2][1][2][0][RTW89_CHILE][3] = 46,
[2][1][2][0][RTW89_QATAR][3] = 54,
+ [2][1][2][0][RTW89_THAILAND][3] = 54,
[2][1][2][0][RTW89_FCC][11] = 38,
[2][1][2][0][RTW89_ETSI][11] = 54,
[2][1][2][0][RTW89_MKK][11] = 54,
[2][1][2][0][RTW89_IC][11] = 38,
[2][1][2][0][RTW89_KCC][11] = 52,
[2][1][2][0][RTW89_ACMA][11] = 54,
- [2][1][2][0][RTW89_CN][11] = 52,
+ [2][1][2][0][RTW89_CN][11] = 50,
[2][1][2][0][RTW89_UK][11] = 54,
[2][1][2][0][RTW89_MEXICO][11] = 38,
[2][1][2][0][RTW89_UKRAINE][11] = 42,
[2][1][2][0][RTW89_CHILE][11] = 38,
[2][1][2][0][RTW89_QATAR][11] = 54,
+ [2][1][2][0][RTW89_THAILAND][11] = 54,
[2][1][2][0][RTW89_FCC][18] = 50,
[2][1][2][0][RTW89_ETSI][18] = 54,
[2][1][2][0][RTW89_MKK][18] = 60,
@@ -36284,6 +36688,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_UKRAINE][18] = 42,
[2][1][2][0][RTW89_CHILE][18] = 50,
[2][1][2][0][RTW89_QATAR][18] = 54,
+ [2][1][2][0][RTW89_THAILAND][18] = 54,
[2][1][2][0][RTW89_FCC][26] = 52,
[2][1][2][0][RTW89_ETSI][26] = 54,
[2][1][2][0][RTW89_MKK][26] = 56,
@@ -36296,6 +36701,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_UKRAINE][26] = 42,
[2][1][2][0][RTW89_CHILE][26] = 52,
[2][1][2][0][RTW89_QATAR][26] = 54,
+ [2][1][2][0][RTW89_THAILAND][26] = 54,
[2][1][2][0][RTW89_FCC][34] = 62,
[2][1][2][0][RTW89_ETSI][34] = 127,
[2][1][2][0][RTW89_MKK][34] = 60,
@@ -36308,18 +36714,20 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_UKRAINE][34] = 127,
[2][1][2][0][RTW89_CHILE][34] = 60,
[2][1][2][0][RTW89_QATAR][34] = 127,
+ [2][1][2][0][RTW89_THAILAND][34] = 127,
[2][1][2][0][RTW89_FCC][41] = 60,
[2][1][2][0][RTW89_ETSI][41] = 18,
[2][1][2][0][RTW89_MKK][41] = 127,
[2][1][2][0][RTW89_IC][41] = 60,
[2][1][2][0][RTW89_KCC][41] = 50,
[2][1][2][0][RTW89_ACMA][41] = 58,
- [2][1][2][0][RTW89_CN][41] = 62,
+ [2][1][2][0][RTW89_CN][41] = 42,
[2][1][2][0][RTW89_UK][41] = 52,
[2][1][2][0][RTW89_MEXICO][41] = 60,
[2][1][2][0][RTW89_UKRAINE][41] = 18,
[2][1][2][0][RTW89_CHILE][41] = 58,
[2][1][2][0][RTW89_QATAR][41] = 18,
+ [2][1][2][0][RTW89_THAILAND][41] = 18,
[2][1][2][0][RTW89_FCC][49] = 62,
[2][1][2][0][RTW89_ETSI][49] = 127,
[2][1][2][0][RTW89_MKK][49] = 127,
@@ -36332,6 +36740,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_UKRAINE][49] = 127,
[2][1][2][0][RTW89_CHILE][49] = 127,
[2][1][2][0][RTW89_QATAR][49] = 127,
+ [2][1][2][0][RTW89_THAILAND][49] = 127,
[2][1][2][1][RTW89_FCC][3] = 48,
[2][1][2][1][RTW89_ETSI][3] = 40,
[2][1][2][1][RTW89_MKK][3] = 56,
@@ -36344,6 +36753,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_UKRAINE][3] = 30,
[2][1][2][1][RTW89_CHILE][3] = 46,
[2][1][2][1][RTW89_QATAR][3] = 40,
+ [2][1][2][1][RTW89_THAILAND][3] = 40,
[2][1][2][1][RTW89_FCC][11] = 38,
[2][1][2][1][RTW89_ETSI][11] = 40,
[2][1][2][1][RTW89_MKK][11] = 54,
@@ -36356,6 +36766,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_UKRAINE][11] = 30,
[2][1][2][1][RTW89_CHILE][11] = 38,
[2][1][2][1][RTW89_QATAR][11] = 40,
+ [2][1][2][1][RTW89_THAILAND][11] = 40,
[2][1][2][1][RTW89_FCC][18] = 50,
[2][1][2][1][RTW89_ETSI][18] = 40,
[2][1][2][1][RTW89_MKK][18] = 60,
@@ -36368,6 +36779,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_UKRAINE][18] = 30,
[2][1][2][1][RTW89_CHILE][18] = 50,
[2][1][2][1][RTW89_QATAR][18] = 40,
+ [2][1][2][1][RTW89_THAILAND][18] = 40,
[2][1][2][1][RTW89_FCC][26] = 52,
[2][1][2][1][RTW89_ETSI][26] = 42,
[2][1][2][1][RTW89_MKK][26] = 56,
@@ -36380,6 +36792,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_UKRAINE][26] = 30,
[2][1][2][1][RTW89_CHILE][26] = 52,
[2][1][2][1][RTW89_QATAR][26] = 42,
+ [2][1][2][1][RTW89_THAILAND][26] = 42,
[2][1][2][1][RTW89_FCC][34] = 62,
[2][1][2][1][RTW89_ETSI][34] = 127,
[2][1][2][1][RTW89_MKK][34] = 60,
@@ -36392,18 +36805,20 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_UKRAINE][34] = 127,
[2][1][2][1][RTW89_CHILE][34] = 60,
[2][1][2][1][RTW89_QATAR][34] = 127,
+ [2][1][2][1][RTW89_THAILAND][34] = 127,
[2][1][2][1][RTW89_FCC][41] = 60,
[2][1][2][1][RTW89_ETSI][41] = 6,
[2][1][2][1][RTW89_MKK][41] = 127,
[2][1][2][1][RTW89_IC][41] = 60,
[2][1][2][1][RTW89_KCC][41] = 50,
[2][1][2][1][RTW89_ACMA][41] = 58,
- [2][1][2][1][RTW89_CN][41] = 40,
+ [2][1][2][1][RTW89_CN][41] = 36,
[2][1][2][1][RTW89_UK][41] = 40,
[2][1][2][1][RTW89_MEXICO][41] = 60,
[2][1][2][1][RTW89_UKRAINE][41] = 6,
[2][1][2][1][RTW89_CHILE][41] = 58,
[2][1][2][1][RTW89_QATAR][41] = 6,
+ [2][1][2][1][RTW89_THAILAND][41] = 6,
[2][1][2][1][RTW89_FCC][49] = 62,
[2][1][2][1][RTW89_ETSI][49] = 127,
[2][1][2][1][RTW89_MKK][49] = 127,
@@ -36416,6 +36831,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_UKRAINE][49] = 127,
[2][1][2][1][RTW89_CHILE][49] = 127,
[2][1][2][1][RTW89_QATAR][49] = 127,
+ [2][1][2][1][RTW89_THAILAND][49] = 127,
[3][0][2][0][RTW89_FCC][7] = 40,
[3][0][2][0][RTW89_ETSI][7] = 50,
[3][0][2][0][RTW89_MKK][7] = 50,
@@ -36428,18 +36844,20 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_UKRAINE][7] = 50,
[3][0][2][0][RTW89_CHILE][7] = 40,
[3][0][2][0][RTW89_QATAR][7] = 50,
+ [3][0][2][0][RTW89_THAILAND][7] = 50,
[3][0][2][0][RTW89_FCC][22] = 42,
[3][0][2][0][RTW89_ETSI][22] = 50,
[3][0][2][0][RTW89_MKK][22] = 50,
[3][0][2][0][RTW89_IC][22] = 127,
[3][0][2][0][RTW89_KCC][22] = 50,
[3][0][2][0][RTW89_ACMA][22] = 127,
- [3][0][2][0][RTW89_CN][22] = 66,
+ [3][0][2][0][RTW89_CN][22] = 127,
[3][0][2][0][RTW89_UK][22] = 127,
[3][0][2][0][RTW89_MEXICO][22] = 127,
[3][0][2][0][RTW89_UKRAINE][22] = 50,
[3][0][2][0][RTW89_CHILE][22] = 42,
[3][0][2][0][RTW89_QATAR][22] = 50,
+ [3][0][2][0][RTW89_THAILAND][22] = 50,
[3][0][2][0][RTW89_FCC][45] = 52,
[3][0][2][0][RTW89_ETSI][45] = 127,
[3][0][2][0][RTW89_MKK][45] = 127,
@@ -36452,6 +36870,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_UKRAINE][45] = 127,
[3][0][2][0][RTW89_CHILE][45] = 127,
[3][0][2][0][RTW89_QATAR][45] = 127,
+ [3][0][2][0][RTW89_THAILAND][45] = 127,
[3][1][2][0][RTW89_FCC][7] = 32,
[3][1][2][0][RTW89_ETSI][7] = 50,
[3][1][2][0][RTW89_MKK][7] = 36,
@@ -36464,18 +36883,20 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_UKRAINE][7] = 50,
[3][1][2][0][RTW89_CHILE][7] = 32,
[3][1][2][0][RTW89_QATAR][7] = 50,
+ [3][1][2][0][RTW89_THAILAND][7] = 50,
[3][1][2][0][RTW89_FCC][22] = 36,
[3][1][2][0][RTW89_ETSI][22] = 50,
[3][1][2][0][RTW89_MKK][22] = 48,
[3][1][2][0][RTW89_IC][22] = 127,
[3][1][2][0][RTW89_KCC][22] = 50,
[3][1][2][0][RTW89_ACMA][22] = 127,
- [3][1][2][0][RTW89_CN][22] = 54,
+ [3][1][2][0][RTW89_CN][22] = 127,
[3][1][2][0][RTW89_UK][22] = 127,
[3][1][2][0][RTW89_MEXICO][22] = 127,
[3][1][2][0][RTW89_UKRAINE][22] = 50,
[3][1][2][0][RTW89_CHILE][22] = 36,
[3][1][2][0][RTW89_QATAR][22] = 50,
+ [3][1][2][0][RTW89_THAILAND][22] = 50,
[3][1][2][0][RTW89_FCC][45] = 46,
[3][1][2][0][RTW89_ETSI][45] = 127,
[3][1][2][0][RTW89_MKK][45] = 127,
@@ -36488,6 +36909,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_UKRAINE][45] = 127,
[3][1][2][0][RTW89_CHILE][45] = 127,
[3][1][2][0][RTW89_QATAR][45] = 127,
+ [3][1][2][0][RTW89_THAILAND][45] = 127,
[3][1][2][1][RTW89_FCC][7] = 32,
[3][1][2][1][RTW89_ETSI][7] = 42,
[3][1][2][1][RTW89_MKK][7] = 36,
@@ -36500,18 +36922,20 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_UKRAINE][7] = 42,
[3][1][2][1][RTW89_CHILE][7] = 32,
[3][1][2][1][RTW89_QATAR][7] = 42,
+ [3][1][2][1][RTW89_THAILAND][7] = 42,
[3][1][2][1][RTW89_FCC][22] = 36,
[3][1][2][1][RTW89_ETSI][22] = 42,
[3][1][2][1][RTW89_MKK][22] = 48,
[3][1][2][1][RTW89_IC][22] = 127,
[3][1][2][1][RTW89_KCC][22] = 50,
[3][1][2][1][RTW89_ACMA][22] = 127,
- [3][1][2][1][RTW89_CN][22] = 42,
+ [3][1][2][1][RTW89_CN][22] = 127,
[3][1][2][1][RTW89_UK][22] = 127,
[3][1][2][1][RTW89_MEXICO][22] = 127,
[3][1][2][1][RTW89_UKRAINE][22] = 42,
[3][1][2][1][RTW89_CHILE][22] = 36,
[3][1][2][1][RTW89_QATAR][22] = 42,
+ [3][1][2][1][RTW89_THAILAND][22] = 42,
[3][1][2][1][RTW89_FCC][45] = 46,
[3][1][2][1][RTW89_ETSI][45] = 127,
[3][1][2][1][RTW89_MKK][45] = 127,
@@ -36524,6 +36948,7 @@ const s8 rtw89_8852c_txpwr_lmt_5g[RTW89_5G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_UKRAINE][45] = 127,
[3][1][2][1][RTW89_CHILE][45] = 127,
[3][1][2][1][RTW89_QATAR][45] = 127,
+ [3][1][2][1][RTW89_THAILAND][45] = 127,
};
static
@@ -36605,19 +37030,19 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_WW][2][44] = 70,
[0][0][1][0][RTW89_WW][0][45] = 22,
[0][0][1][0][RTW89_WW][1][45] = 22,
- [0][0][1][0][RTW89_WW][2][45] = 0,
+ [0][0][1][0][RTW89_WW][2][45] = 70,
[0][0][1][0][RTW89_WW][0][47] = 22,
[0][0][1][0][RTW89_WW][1][47] = 22,
- [0][0][1][0][RTW89_WW][2][47] = 0,
+ [0][0][1][0][RTW89_WW][2][47] = 70,
[0][0][1][0][RTW89_WW][0][49] = 24,
[0][0][1][0][RTW89_WW][1][49] = 24,
- [0][0][1][0][RTW89_WW][2][49] = 0,
+ [0][0][1][0][RTW89_WW][2][49] = 70,
[0][0][1][0][RTW89_WW][0][51] = 22,
[0][0][1][0][RTW89_WW][1][51] = 22,
- [0][0][1][0][RTW89_WW][2][51] = 0,
+ [0][0][1][0][RTW89_WW][2][51] = 70,
[0][0][1][0][RTW89_WW][0][53] = 22,
[0][0][1][0][RTW89_WW][1][53] = 22,
- [0][0][1][0][RTW89_WW][2][53] = 0,
+ [0][0][1][0][RTW89_WW][2][53] = 70,
[0][0][1][0][RTW89_WW][0][55] = 22,
[0][0][1][0][RTW89_WW][1][55] = 22,
[0][0][1][0][RTW89_WW][2][55] = 68,
@@ -36797,19 +37222,19 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_WW][2][44] = 68,
[0][1][1][0][RTW89_WW][0][45] = -2,
[0][1][1][0][RTW89_WW][1][45] = -2,
- [0][1][1][0][RTW89_WW][2][45] = 0,
+ [0][1][1][0][RTW89_WW][2][45] = 70,
[0][1][1][0][RTW89_WW][0][47] = -2,
[0][1][1][0][RTW89_WW][1][47] = -2,
- [0][1][1][0][RTW89_WW][2][47] = 0,
+ [0][1][1][0][RTW89_WW][2][47] = 68,
[0][1][1][0][RTW89_WW][0][49] = -2,
[0][1][1][0][RTW89_WW][1][49] = -2,
- [0][1][1][0][RTW89_WW][2][49] = 0,
+ [0][1][1][0][RTW89_WW][2][49] = 68,
[0][1][1][0][RTW89_WW][0][51] = -2,
[0][1][1][0][RTW89_WW][1][51] = -2,
- [0][1][1][0][RTW89_WW][2][51] = 0,
+ [0][1][1][0][RTW89_WW][2][51] = 68,
[0][1][1][0][RTW89_WW][0][53] = -2,
[0][1][1][0][RTW89_WW][1][53] = -2,
- [0][1][1][0][RTW89_WW][2][53] = 0,
+ [0][1][1][0][RTW89_WW][2][53] = 68,
[0][1][1][0][RTW89_WW][0][55] = -2,
[0][1][1][0][RTW89_WW][1][55] = -2,
[0][1][1][0][RTW89_WW][2][55] = 68,
@@ -36989,19 +37414,19 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_WW][2][44] = 70,
[0][0][2][0][RTW89_WW][0][45] = 22,
[0][0][2][0][RTW89_WW][1][45] = 22,
- [0][0][2][0][RTW89_WW][2][45] = 0,
+ [0][0][2][0][RTW89_WW][2][45] = 70,
[0][0][2][0][RTW89_WW][0][47] = 22,
[0][0][2][0][RTW89_WW][1][47] = 22,
- [0][0][2][0][RTW89_WW][2][47] = 0,
+ [0][0][2][0][RTW89_WW][2][47] = 70,
[0][0][2][0][RTW89_WW][0][49] = 24,
[0][0][2][0][RTW89_WW][1][49] = 24,
- [0][0][2][0][RTW89_WW][2][49] = 0,
+ [0][0][2][0][RTW89_WW][2][49] = 70,
[0][0][2][0][RTW89_WW][0][51] = 22,
[0][0][2][0][RTW89_WW][1][51] = 22,
- [0][0][2][0][RTW89_WW][2][51] = 0,
+ [0][0][2][0][RTW89_WW][2][51] = 70,
[0][0][2][0][RTW89_WW][0][53] = 22,
[0][0][2][0][RTW89_WW][1][53] = 22,
- [0][0][2][0][RTW89_WW][2][53] = 0,
+ [0][0][2][0][RTW89_WW][2][53] = 70,
[0][0][2][0][RTW89_WW][0][55] = 22,
[0][0][2][0][RTW89_WW][1][55] = 22,
[0][0][2][0][RTW89_WW][2][55] = 68,
@@ -37181,19 +37606,19 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_WW][2][44] = 68,
[0][1][2][0][RTW89_WW][0][45] = -2,
[0][1][2][0][RTW89_WW][1][45] = -2,
- [0][1][2][0][RTW89_WW][2][45] = 0,
+ [0][1][2][0][RTW89_WW][2][45] = 70,
[0][1][2][0][RTW89_WW][0][47] = -2,
[0][1][2][0][RTW89_WW][1][47] = -2,
- [0][1][2][0][RTW89_WW][2][47] = 0,
+ [0][1][2][0][RTW89_WW][2][47] = 68,
[0][1][2][0][RTW89_WW][0][49] = -2,
[0][1][2][0][RTW89_WW][1][49] = -2,
- [0][1][2][0][RTW89_WW][2][49] = 0,
+ [0][1][2][0][RTW89_WW][2][49] = 68,
[0][1][2][0][RTW89_WW][0][51] = -2,
[0][1][2][0][RTW89_WW][1][51] = -2,
- [0][1][2][0][RTW89_WW][2][51] = 0,
+ [0][1][2][0][RTW89_WW][2][51] = 68,
[0][1][2][0][RTW89_WW][0][53] = -2,
[0][1][2][0][RTW89_WW][1][53] = -2,
- [0][1][2][0][RTW89_WW][2][53] = 0,
+ [0][1][2][0][RTW89_WW][2][53] = 68,
[0][1][2][0][RTW89_WW][0][55] = -2,
[0][1][2][0][RTW89_WW][1][55] = -2,
[0][1][2][0][RTW89_WW][2][55] = 68,
@@ -37373,19 +37798,19 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_WW][2][44] = 68,
[0][1][2][1][RTW89_WW][0][45] = -2,
[0][1][2][1][RTW89_WW][1][45] = -2,
- [0][1][2][1][RTW89_WW][2][45] = 0,
+ [0][1][2][1][RTW89_WW][2][45] = 70,
[0][1][2][1][RTW89_WW][0][47] = -2,
[0][1][2][1][RTW89_WW][1][47] = -2,
- [0][1][2][1][RTW89_WW][2][47] = 0,
+ [0][1][2][1][RTW89_WW][2][47] = 68,
[0][1][2][1][RTW89_WW][0][49] = -2,
[0][1][2][1][RTW89_WW][1][49] = -2,
- [0][1][2][1][RTW89_WW][2][49] = 0,
+ [0][1][2][1][RTW89_WW][2][49] = 68,
[0][1][2][1][RTW89_WW][0][51] = -2,
[0][1][2][1][RTW89_WW][1][51] = -2,
- [0][1][2][1][RTW89_WW][2][51] = 0,
+ [0][1][2][1][RTW89_WW][2][51] = 68,
[0][1][2][1][RTW89_WW][0][53] = -2,
[0][1][2][1][RTW89_WW][1][53] = -2,
- [0][1][2][1][RTW89_WW][2][53] = 0,
+ [0][1][2][1][RTW89_WW][2][53] = 68,
[0][1][2][1][RTW89_WW][0][55] = -2,
[0][1][2][1][RTW89_WW][1][55] = -2,
[0][1][2][1][RTW89_WW][2][55] = 68,
@@ -37529,10 +37954,10 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_WW][2][43] = 70,
[1][0][2][0][RTW89_WW][0][46] = 34,
[1][0][2][0][RTW89_WW][1][46] = 34,
- [1][0][2][0][RTW89_WW][2][46] = 0,
+ [1][0][2][0][RTW89_WW][2][46] = 68,
[1][0][2][0][RTW89_WW][0][50] = 34,
[1][0][2][0][RTW89_WW][1][50] = 34,
- [1][0][2][0][RTW89_WW][2][50] = 0,
+ [1][0][2][0][RTW89_WW][2][50] = 68,
[1][0][2][0][RTW89_WW][0][54] = 36,
[1][0][2][0][RTW89_WW][1][54] = 36,
[1][0][2][0][RTW89_WW][2][54] = 0,
@@ -37625,10 +38050,10 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_WW][2][43] = 70,
[1][1][2][0][RTW89_WW][0][46] = 12,
[1][1][2][0][RTW89_WW][1][46] = 12,
- [1][1][2][0][RTW89_WW][2][46] = 0,
+ [1][1][2][0][RTW89_WW][2][46] = 68,
[1][1][2][0][RTW89_WW][0][50] = 12,
[1][1][2][0][RTW89_WW][1][50] = 12,
- [1][1][2][0][RTW89_WW][2][50] = 0,
+ [1][1][2][0][RTW89_WW][2][50] = 68,
[1][1][2][0][RTW89_WW][0][54] = 10,
[1][1][2][0][RTW89_WW][1][54] = 10,
[1][1][2][0][RTW89_WW][2][54] = 0,
@@ -37721,10 +38146,10 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_WW][2][43] = 70,
[1][1][2][1][RTW89_WW][0][46] = 12,
[1][1][2][1][RTW89_WW][1][46] = 12,
- [1][1][2][1][RTW89_WW][2][46] = 0,
+ [1][1][2][1][RTW89_WW][2][46] = 68,
[1][1][2][1][RTW89_WW][0][50] = 12,
[1][1][2][1][RTW89_WW][1][50] = 12,
- [1][1][2][1][RTW89_WW][2][50] = 0,
+ [1][1][2][1][RTW89_WW][2][50] = 68,
[1][1][2][1][RTW89_WW][0][54] = 10,
[1][1][2][1][RTW89_WW][1][54] = 10,
[1][1][2][1][RTW89_WW][2][54] = 0,
@@ -37799,10 +38224,10 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_WW][2][41] = 60,
[2][0][2][0][RTW89_WW][0][48] = 46,
[2][0][2][0][RTW89_WW][1][48] = 46,
- [2][0][2][0][RTW89_WW][2][48] = 0,
+ [2][0][2][0][RTW89_WW][2][48] = 60,
[2][0][2][0][RTW89_WW][0][56] = 46,
[2][0][2][0][RTW89_WW][1][56] = 46,
- [2][0][2][0][RTW89_WW][2][56] = 0,
+ [2][0][2][0][RTW89_WW][2][56] = 58,
[2][0][2][0][RTW89_WW][0][63] = 46,
[2][0][2][0][RTW89_WW][1][63] = 46,
[2][0][2][0][RTW89_WW][2][63] = 58,
@@ -37847,10 +38272,10 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_WW][2][41] = 60,
[2][1][2][0][RTW89_WW][0][48] = 22,
[2][1][2][0][RTW89_WW][1][48] = 22,
- [2][1][2][0][RTW89_WW][2][48] = 0,
+ [2][1][2][0][RTW89_WW][2][48] = 60,
[2][1][2][0][RTW89_WW][0][56] = 20,
[2][1][2][0][RTW89_WW][1][56] = 20,
- [2][1][2][0][RTW89_WW][2][56] = 0,
+ [2][1][2][0][RTW89_WW][2][56] = 56,
[2][1][2][0][RTW89_WW][0][63] = 22,
[2][1][2][0][RTW89_WW][1][63] = 22,
[2][1][2][0][RTW89_WW][2][63] = 58,
@@ -37895,10 +38320,10 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_WW][2][41] = 60,
[2][1][2][1][RTW89_WW][0][48] = 22,
[2][1][2][1][RTW89_WW][1][48] = 22,
- [2][1][2][1][RTW89_WW][2][48] = 0,
+ [2][1][2][1][RTW89_WW][2][48] = 60,
[2][1][2][1][RTW89_WW][0][56] = 20,
[2][1][2][1][RTW89_WW][1][56] = 20,
- [2][1][2][1][RTW89_WW][2][56] = 0,
+ [2][1][2][1][RTW89_WW][2][56] = 56,
[2][1][2][1][RTW89_WW][0][63] = 22,
[2][1][2][1][RTW89_WW][1][63] = 22,
[2][1][2][1][RTW89_WW][2][63] = 58,
@@ -37934,7 +38359,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_WW][2][37] = 52,
[3][0][2][0][RTW89_WW][0][52] = 54,
[3][0][2][0][RTW89_WW][1][52] = 54,
- [3][0][2][0][RTW89_WW][2][52] = 0,
+ [3][0][2][0][RTW89_WW][2][52] = 56,
[3][0][2][0][RTW89_WW][0][67] = 54,
[3][0][2][0][RTW89_WW][1][67] = 54,
[3][0][2][0][RTW89_WW][2][67] = 54,
@@ -37958,7 +38383,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_WW][2][37] = 52,
[3][1][2][0][RTW89_WW][0][52] = 30,
[3][1][2][0][RTW89_WW][1][52] = 30,
- [3][1][2][0][RTW89_WW][2][52] = 0,
+ [3][1][2][0][RTW89_WW][2][52] = 56,
[3][1][2][0][RTW89_WW][0][67] = 32,
[3][1][2][0][RTW89_WW][1][67] = 32,
[3][1][2][0][RTW89_WW][2][67] = 54,
@@ -37982,7 +38407,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_WW][2][37] = 52,
[3][1][2][1][RTW89_WW][0][52] = 30,
[3][1][2][1][RTW89_WW][1][52] = 30,
- [3][1][2][1][RTW89_WW][2][52] = 0,
+ [3][1][2][1][RTW89_WW][2][52] = 56,
[3][1][2][1][RTW89_WW][0][67] = 32,
[3][1][2][1][RTW89_WW][1][67] = 32,
[3][1][2][1][RTW89_WW][2][67] = 54,
@@ -38002,6 +38427,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][0] = 66,
[0][0][1][0][RTW89_MKK][0][0] = 26,
[0][0][1][0][RTW89_IC][1][0] = 24,
+ [0][0][1][0][RTW89_IC][2][0] = 56,
[0][0][1][0][RTW89_KCC][1][0] = 24,
[0][0][1][0][RTW89_KCC][0][0] = 24,
[0][0][1][0][RTW89_ACMA][1][0] = 66,
@@ -38011,6 +38437,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][0] = 28,
[0][0][1][0][RTW89_UK][1][0] = 66,
[0][0][1][0][RTW89_UK][0][0] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][0] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][0] = 24,
[0][0][1][0][RTW89_FCC][1][2] = 22,
[0][0][1][0][RTW89_FCC][2][2] = 56,
[0][0][1][0][RTW89_ETSI][1][2] = 66,
@@ -38018,6 +38446,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][2] = 66,
[0][0][1][0][RTW89_MKK][0][2] = 26,
[0][0][1][0][RTW89_IC][1][2] = 22,
+ [0][0][1][0][RTW89_IC][2][2] = 56,
[0][0][1][0][RTW89_KCC][1][2] = 24,
[0][0][1][0][RTW89_KCC][0][2] = 24,
[0][0][1][0][RTW89_ACMA][1][2] = 66,
@@ -38027,6 +38456,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][2] = 28,
[0][0][1][0][RTW89_UK][1][2] = 66,
[0][0][1][0][RTW89_UK][0][2] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][2] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][2] = 22,
[0][0][1][0][RTW89_FCC][1][4] = 22,
[0][0][1][0][RTW89_FCC][2][4] = 56,
[0][0][1][0][RTW89_ETSI][1][4] = 66,
@@ -38034,6 +38465,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][4] = 66,
[0][0][1][0][RTW89_MKK][0][4] = 26,
[0][0][1][0][RTW89_IC][1][4] = 22,
+ [0][0][1][0][RTW89_IC][2][4] = 56,
[0][0][1][0][RTW89_KCC][1][4] = 24,
[0][0][1][0][RTW89_KCC][0][4] = 24,
[0][0][1][0][RTW89_ACMA][1][4] = 66,
@@ -38043,6 +38475,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][4] = 28,
[0][0][1][0][RTW89_UK][1][4] = 66,
[0][0][1][0][RTW89_UK][0][4] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][4] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][4] = 22,
[0][0][1][0][RTW89_FCC][1][6] = 22,
[0][0][1][0][RTW89_FCC][2][6] = 56,
[0][0][1][0][RTW89_ETSI][1][6] = 66,
@@ -38050,6 +38484,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][6] = 66,
[0][0][1][0][RTW89_MKK][0][6] = 26,
[0][0][1][0][RTW89_IC][1][6] = 22,
+ [0][0][1][0][RTW89_IC][2][6] = 56,
[0][0][1][0][RTW89_KCC][1][6] = 24,
[0][0][1][0][RTW89_KCC][0][6] = 24,
[0][0][1][0][RTW89_ACMA][1][6] = 66,
@@ -38059,6 +38494,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][6] = 28,
[0][0][1][0][RTW89_UK][1][6] = 66,
[0][0][1][0][RTW89_UK][0][6] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][6] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][6] = 22,
[0][0][1][0][RTW89_FCC][1][8] = 22,
[0][0][1][0][RTW89_FCC][2][8] = 56,
[0][0][1][0][RTW89_ETSI][1][8] = 66,
@@ -38066,6 +38503,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][8] = 66,
[0][0][1][0][RTW89_MKK][0][8] = 26,
[0][0][1][0][RTW89_IC][1][8] = 22,
+ [0][0][1][0][RTW89_IC][2][8] = 56,
[0][0][1][0][RTW89_KCC][1][8] = 24,
[0][0][1][0][RTW89_KCC][0][8] = 24,
[0][0][1][0][RTW89_ACMA][1][8] = 66,
@@ -38075,6 +38513,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][8] = 28,
[0][0][1][0][RTW89_UK][1][8] = 66,
[0][0][1][0][RTW89_UK][0][8] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][8] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][8] = 22,
[0][0][1][0][RTW89_FCC][1][10] = 22,
[0][0][1][0][RTW89_FCC][2][10] = 56,
[0][0][1][0][RTW89_ETSI][1][10] = 66,
@@ -38082,6 +38522,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][10] = 66,
[0][0][1][0][RTW89_MKK][0][10] = 26,
[0][0][1][0][RTW89_IC][1][10] = 22,
+ [0][0][1][0][RTW89_IC][2][10] = 56,
[0][0][1][0][RTW89_KCC][1][10] = 24,
[0][0][1][0][RTW89_KCC][0][10] = 24,
[0][0][1][0][RTW89_ACMA][1][10] = 66,
@@ -38091,6 +38532,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][10] = 28,
[0][0][1][0][RTW89_UK][1][10] = 66,
[0][0][1][0][RTW89_UK][0][10] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][10] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][10] = 22,
[0][0][1][0][RTW89_FCC][1][12] = 22,
[0][0][1][0][RTW89_FCC][2][12] = 56,
[0][0][1][0][RTW89_ETSI][1][12] = 66,
@@ -38098,6 +38541,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][12] = 66,
[0][0][1][0][RTW89_MKK][0][12] = 26,
[0][0][1][0][RTW89_IC][1][12] = 22,
+ [0][0][1][0][RTW89_IC][2][12] = 56,
[0][0][1][0][RTW89_KCC][1][12] = 24,
[0][0][1][0][RTW89_KCC][0][12] = 24,
[0][0][1][0][RTW89_ACMA][1][12] = 66,
@@ -38107,6 +38551,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][12] = 28,
[0][0][1][0][RTW89_UK][1][12] = 66,
[0][0][1][0][RTW89_UK][0][12] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][12] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][12] = 22,
[0][0][1][0][RTW89_FCC][1][14] = 22,
[0][0][1][0][RTW89_FCC][2][14] = 56,
[0][0][1][0][RTW89_ETSI][1][14] = 66,
@@ -38114,6 +38560,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][14] = 66,
[0][0][1][0][RTW89_MKK][0][14] = 26,
[0][0][1][0][RTW89_IC][1][14] = 22,
+ [0][0][1][0][RTW89_IC][2][14] = 56,
[0][0][1][0][RTW89_KCC][1][14] = 24,
[0][0][1][0][RTW89_KCC][0][14] = 24,
[0][0][1][0][RTW89_ACMA][1][14] = 66,
@@ -38123,6 +38570,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][14] = 28,
[0][0][1][0][RTW89_UK][1][14] = 66,
[0][0][1][0][RTW89_UK][0][14] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][14] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][14] = 22,
[0][0][1][0][RTW89_FCC][1][15] = 22,
[0][0][1][0][RTW89_FCC][2][15] = 56,
[0][0][1][0][RTW89_ETSI][1][15] = 66,
@@ -38130,6 +38579,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][15] = 66,
[0][0][1][0][RTW89_MKK][0][15] = 26,
[0][0][1][0][RTW89_IC][1][15] = 22,
+ [0][0][1][0][RTW89_IC][2][15] = 56,
[0][0][1][0][RTW89_KCC][1][15] = 24,
[0][0][1][0][RTW89_KCC][0][15] = 24,
[0][0][1][0][RTW89_ACMA][1][15] = 66,
@@ -38139,6 +38589,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][15] = 28,
[0][0][1][0][RTW89_UK][1][15] = 66,
[0][0][1][0][RTW89_UK][0][15] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][15] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][15] = 22,
[0][0][1][0][RTW89_FCC][1][17] = 22,
[0][0][1][0][RTW89_FCC][2][17] = 56,
[0][0][1][0][RTW89_ETSI][1][17] = 66,
@@ -38146,6 +38598,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][17] = 66,
[0][0][1][0][RTW89_MKK][0][17] = 26,
[0][0][1][0][RTW89_IC][1][17] = 22,
+ [0][0][1][0][RTW89_IC][2][17] = 56,
[0][0][1][0][RTW89_KCC][1][17] = 24,
[0][0][1][0][RTW89_KCC][0][17] = 24,
[0][0][1][0][RTW89_ACMA][1][17] = 66,
@@ -38155,6 +38608,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][17] = 28,
[0][0][1][0][RTW89_UK][1][17] = 66,
[0][0][1][0][RTW89_UK][0][17] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][17] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][17] = 22,
[0][0][1][0][RTW89_FCC][1][19] = 22,
[0][0][1][0][RTW89_FCC][2][19] = 56,
[0][0][1][0][RTW89_ETSI][1][19] = 66,
@@ -38162,6 +38617,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][19] = 66,
[0][0][1][0][RTW89_MKK][0][19] = 26,
[0][0][1][0][RTW89_IC][1][19] = 22,
+ [0][0][1][0][RTW89_IC][2][19] = 56,
[0][0][1][0][RTW89_KCC][1][19] = 24,
[0][0][1][0][RTW89_KCC][0][19] = 24,
[0][0][1][0][RTW89_ACMA][1][19] = 66,
@@ -38171,6 +38627,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][19] = 28,
[0][0][1][0][RTW89_UK][1][19] = 66,
[0][0][1][0][RTW89_UK][0][19] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][19] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][19] = 22,
[0][0][1][0][RTW89_FCC][1][21] = 22,
[0][0][1][0][RTW89_FCC][2][21] = 56,
[0][0][1][0][RTW89_ETSI][1][21] = 66,
@@ -38178,6 +38636,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][21] = 66,
[0][0][1][0][RTW89_MKK][0][21] = 26,
[0][0][1][0][RTW89_IC][1][21] = 22,
+ [0][0][1][0][RTW89_IC][2][21] = 56,
[0][0][1][0][RTW89_KCC][1][21] = 24,
[0][0][1][0][RTW89_KCC][0][21] = 24,
[0][0][1][0][RTW89_ACMA][1][21] = 66,
@@ -38187,6 +38646,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][21] = 28,
[0][0][1][0][RTW89_UK][1][21] = 66,
[0][0][1][0][RTW89_UK][0][21] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][21] = 56,
+ [0][0][1][0][RTW89_THAILAND][0][21] = 22,
[0][0][1][0][RTW89_FCC][1][23] = 22,
[0][0][1][0][RTW89_FCC][2][23] = 70,
[0][0][1][0][RTW89_ETSI][1][23] = 66,
@@ -38194,6 +38655,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][23] = 66,
[0][0][1][0][RTW89_MKK][0][23] = 26,
[0][0][1][0][RTW89_IC][1][23] = 22,
+ [0][0][1][0][RTW89_IC][2][23] = 70,
[0][0][1][0][RTW89_KCC][1][23] = 24,
[0][0][1][0][RTW89_KCC][0][23] = 26,
[0][0][1][0][RTW89_ACMA][1][23] = 66,
@@ -38203,6 +38665,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][23] = 28,
[0][0][1][0][RTW89_UK][1][23] = 66,
[0][0][1][0][RTW89_UK][0][23] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][23] = 66,
+ [0][0][1][0][RTW89_THAILAND][0][23] = 22,
[0][0][1][0][RTW89_FCC][1][25] = 22,
[0][0][1][0][RTW89_FCC][2][25] = 70,
[0][0][1][0][RTW89_ETSI][1][25] = 66,
@@ -38210,6 +38674,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][25] = 66,
[0][0][1][0][RTW89_MKK][0][25] = 26,
[0][0][1][0][RTW89_IC][1][25] = 22,
+ [0][0][1][0][RTW89_IC][2][25] = 70,
[0][0][1][0][RTW89_KCC][1][25] = 24,
[0][0][1][0][RTW89_KCC][0][25] = 26,
[0][0][1][0][RTW89_ACMA][1][25] = 66,
@@ -38219,6 +38684,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][25] = 28,
[0][0][1][0][RTW89_UK][1][25] = 66,
[0][0][1][0][RTW89_UK][0][25] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][25] = 66,
+ [0][0][1][0][RTW89_THAILAND][0][25] = 22,
[0][0][1][0][RTW89_FCC][1][27] = 22,
[0][0][1][0][RTW89_FCC][2][27] = 70,
[0][0][1][0][RTW89_ETSI][1][27] = 66,
@@ -38226,6 +38693,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][27] = 66,
[0][0][1][0][RTW89_MKK][0][27] = 26,
[0][0][1][0][RTW89_IC][1][27] = 22,
+ [0][0][1][0][RTW89_IC][2][27] = 70,
[0][0][1][0][RTW89_KCC][1][27] = 24,
[0][0][1][0][RTW89_KCC][0][27] = 26,
[0][0][1][0][RTW89_ACMA][1][27] = 66,
@@ -38235,6 +38703,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][27] = 28,
[0][0][1][0][RTW89_UK][1][27] = 66,
[0][0][1][0][RTW89_UK][0][27] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][27] = 66,
+ [0][0][1][0][RTW89_THAILAND][0][27] = 22,
[0][0][1][0][RTW89_FCC][1][29] = 22,
[0][0][1][0][RTW89_FCC][2][29] = 70,
[0][0][1][0][RTW89_ETSI][1][29] = 66,
@@ -38242,6 +38712,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][29] = 66,
[0][0][1][0][RTW89_MKK][0][29] = 26,
[0][0][1][0][RTW89_IC][1][29] = 22,
+ [0][0][1][0][RTW89_IC][2][29] = 70,
[0][0][1][0][RTW89_KCC][1][29] = 24,
[0][0][1][0][RTW89_KCC][0][29] = 26,
[0][0][1][0][RTW89_ACMA][1][29] = 66,
@@ -38251,6 +38722,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][29] = 28,
[0][0][1][0][RTW89_UK][1][29] = 66,
[0][0][1][0][RTW89_UK][0][29] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][29] = 66,
+ [0][0][1][0][RTW89_THAILAND][0][29] = 22,
[0][0][1][0][RTW89_FCC][1][30] = 22,
[0][0][1][0][RTW89_FCC][2][30] = 70,
[0][0][1][0][RTW89_ETSI][1][30] = 66,
@@ -38258,6 +38731,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][30] = 66,
[0][0][1][0][RTW89_MKK][0][30] = 26,
[0][0][1][0][RTW89_IC][1][30] = 22,
+ [0][0][1][0][RTW89_IC][2][30] = 70,
[0][0][1][0][RTW89_KCC][1][30] = 24,
[0][0][1][0][RTW89_KCC][0][30] = 26,
[0][0][1][0][RTW89_ACMA][1][30] = 66,
@@ -38267,6 +38741,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][30] = 28,
[0][0][1][0][RTW89_UK][1][30] = 66,
[0][0][1][0][RTW89_UK][0][30] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][30] = 66,
+ [0][0][1][0][RTW89_THAILAND][0][30] = 22,
[0][0][1][0][RTW89_FCC][1][32] = 22,
[0][0][1][0][RTW89_FCC][2][32] = 70,
[0][0][1][0][RTW89_ETSI][1][32] = 66,
@@ -38274,6 +38750,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][32] = 66,
[0][0][1][0][RTW89_MKK][0][32] = 26,
[0][0][1][0][RTW89_IC][1][32] = 22,
+ [0][0][1][0][RTW89_IC][2][32] = 70,
[0][0][1][0][RTW89_KCC][1][32] = 24,
[0][0][1][0][RTW89_KCC][0][32] = 26,
[0][0][1][0][RTW89_ACMA][1][32] = 66,
@@ -38283,6 +38760,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][32] = 28,
[0][0][1][0][RTW89_UK][1][32] = 66,
[0][0][1][0][RTW89_UK][0][32] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][32] = 66,
+ [0][0][1][0][RTW89_THAILAND][0][32] = 22,
[0][0][1][0][RTW89_FCC][1][34] = 22,
[0][0][1][0][RTW89_FCC][2][34] = 70,
[0][0][1][0][RTW89_ETSI][1][34] = 66,
@@ -38290,6 +38769,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][34] = 66,
[0][0][1][0][RTW89_MKK][0][34] = 26,
[0][0][1][0][RTW89_IC][1][34] = 22,
+ [0][0][1][0][RTW89_IC][2][34] = 70,
[0][0][1][0][RTW89_KCC][1][34] = 24,
[0][0][1][0][RTW89_KCC][0][34] = 26,
[0][0][1][0][RTW89_ACMA][1][34] = 66,
@@ -38299,6 +38779,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][34] = 28,
[0][0][1][0][RTW89_UK][1][34] = 66,
[0][0][1][0][RTW89_UK][0][34] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][34] = 66,
+ [0][0][1][0][RTW89_THAILAND][0][34] = 22,
[0][0][1][0][RTW89_FCC][1][36] = 22,
[0][0][1][0][RTW89_FCC][2][36] = 70,
[0][0][1][0][RTW89_ETSI][1][36] = 66,
@@ -38306,6 +38788,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][36] = 66,
[0][0][1][0][RTW89_MKK][0][36] = 26,
[0][0][1][0][RTW89_IC][1][36] = 22,
+ [0][0][1][0][RTW89_IC][2][36] = 70,
[0][0][1][0][RTW89_KCC][1][36] = 24,
[0][0][1][0][RTW89_KCC][0][36] = 26,
[0][0][1][0][RTW89_ACMA][1][36] = 66,
@@ -38315,6 +38798,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][36] = 28,
[0][0][1][0][RTW89_UK][1][36] = 66,
[0][0][1][0][RTW89_UK][0][36] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][36] = 66,
+ [0][0][1][0][RTW89_THAILAND][0][36] = 22,
[0][0][1][0][RTW89_FCC][1][38] = 22,
[0][0][1][0][RTW89_FCC][2][38] = 70,
[0][0][1][0][RTW89_ETSI][1][38] = 66,
@@ -38322,6 +38807,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][38] = 66,
[0][0][1][0][RTW89_MKK][0][38] = 26,
[0][0][1][0][RTW89_IC][1][38] = 22,
+ [0][0][1][0][RTW89_IC][2][38] = 70,
[0][0][1][0][RTW89_KCC][1][38] = 24,
[0][0][1][0][RTW89_KCC][0][38] = 26,
[0][0][1][0][RTW89_ACMA][1][38] = 66,
@@ -38331,6 +38817,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][38] = 28,
[0][0][1][0][RTW89_UK][1][38] = 66,
[0][0][1][0][RTW89_UK][0][38] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][38] = 66,
+ [0][0][1][0][RTW89_THAILAND][0][38] = 22,
[0][0][1][0][RTW89_FCC][1][40] = 22,
[0][0][1][0][RTW89_FCC][2][40] = 70,
[0][0][1][0][RTW89_ETSI][1][40] = 66,
@@ -38338,6 +38826,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][40] = 66,
[0][0][1][0][RTW89_MKK][0][40] = 26,
[0][0][1][0][RTW89_IC][1][40] = 22,
+ [0][0][1][0][RTW89_IC][2][40] = 70,
[0][0][1][0][RTW89_KCC][1][40] = 24,
[0][0][1][0][RTW89_KCC][0][40] = 26,
[0][0][1][0][RTW89_ACMA][1][40] = 66,
@@ -38347,6 +38836,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][40] = 28,
[0][0][1][0][RTW89_UK][1][40] = 66,
[0][0][1][0][RTW89_UK][0][40] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][40] = 66,
+ [0][0][1][0][RTW89_THAILAND][0][40] = 22,
[0][0][1][0][RTW89_FCC][1][42] = 22,
[0][0][1][0][RTW89_FCC][2][42] = 70,
[0][0][1][0][RTW89_ETSI][1][42] = 66,
@@ -38354,6 +38845,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][42] = 66,
[0][0][1][0][RTW89_MKK][0][42] = 26,
[0][0][1][0][RTW89_IC][1][42] = 22,
+ [0][0][1][0][RTW89_IC][2][42] = 70,
[0][0][1][0][RTW89_KCC][1][42] = 24,
[0][0][1][0][RTW89_KCC][0][42] = 26,
[0][0][1][0][RTW89_ACMA][1][42] = 66,
@@ -38363,6 +38855,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][42] = 28,
[0][0][1][0][RTW89_UK][1][42] = 66,
[0][0][1][0][RTW89_UK][0][42] = 28,
+ [0][0][1][0][RTW89_THAILAND][1][42] = 66,
+ [0][0][1][0][RTW89_THAILAND][0][42] = 22,
[0][0][1][0][RTW89_FCC][1][44] = 22,
[0][0][1][0][RTW89_FCC][2][44] = 70,
[0][0][1][0][RTW89_ETSI][1][44] = 66,
@@ -38370,6 +38864,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][44] = 44,
[0][0][1][0][RTW89_MKK][0][44] = 28,
[0][0][1][0][RTW89_IC][1][44] = 22,
+ [0][0][1][0][RTW89_IC][2][44] = 70,
[0][0][1][0][RTW89_KCC][1][44] = 24,
[0][0][1][0][RTW89_KCC][0][44] = 26,
[0][0][1][0][RTW89_ACMA][1][44] = 66,
@@ -38379,6 +38874,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][44] = 30,
[0][0][1][0][RTW89_UK][1][44] = 66,
[0][0][1][0][RTW89_UK][0][44] = 30,
+ [0][0][1][0][RTW89_THAILAND][1][44] = 68,
+ [0][0][1][0][RTW89_THAILAND][0][44] = 22,
[0][0][1][0][RTW89_FCC][1][45] = 22,
[0][0][1][0][RTW89_FCC][2][45] = 127,
[0][0][1][0][RTW89_ETSI][1][45] = 127,
@@ -38386,6 +38883,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][45] = 127,
[0][0][1][0][RTW89_MKK][0][45] = 127,
[0][0][1][0][RTW89_IC][1][45] = 22,
+ [0][0][1][0][RTW89_IC][2][45] = 70,
[0][0][1][0][RTW89_KCC][1][45] = 24,
[0][0][1][0][RTW89_KCC][0][45] = 127,
[0][0][1][0][RTW89_ACMA][1][45] = 127,
@@ -38395,6 +38893,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][45] = 127,
[0][0][1][0][RTW89_UK][1][45] = 127,
[0][0][1][0][RTW89_UK][0][45] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][45] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][45] = 127,
[0][0][1][0][RTW89_FCC][1][47] = 22,
[0][0][1][0][RTW89_FCC][2][47] = 127,
[0][0][1][0][RTW89_ETSI][1][47] = 127,
@@ -38402,6 +38902,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][47] = 127,
[0][0][1][0][RTW89_MKK][0][47] = 127,
[0][0][1][0][RTW89_IC][1][47] = 22,
+ [0][0][1][0][RTW89_IC][2][47] = 70,
[0][0][1][0][RTW89_KCC][1][47] = 24,
[0][0][1][0][RTW89_KCC][0][47] = 127,
[0][0][1][0][RTW89_ACMA][1][47] = 127,
@@ -38411,6 +38912,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][47] = 127,
[0][0][1][0][RTW89_UK][1][47] = 127,
[0][0][1][0][RTW89_UK][0][47] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][47] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][47] = 127,
[0][0][1][0][RTW89_FCC][1][49] = 24,
[0][0][1][0][RTW89_FCC][2][49] = 127,
[0][0][1][0][RTW89_ETSI][1][49] = 127,
@@ -38418,6 +38921,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][49] = 127,
[0][0][1][0][RTW89_MKK][0][49] = 127,
[0][0][1][0][RTW89_IC][1][49] = 24,
+ [0][0][1][0][RTW89_IC][2][49] = 70,
[0][0][1][0][RTW89_KCC][1][49] = 24,
[0][0][1][0][RTW89_KCC][0][49] = 127,
[0][0][1][0][RTW89_ACMA][1][49] = 127,
@@ -38427,6 +38931,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][49] = 127,
[0][0][1][0][RTW89_UK][1][49] = 127,
[0][0][1][0][RTW89_UK][0][49] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][49] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][49] = 127,
[0][0][1][0][RTW89_FCC][1][51] = 22,
[0][0][1][0][RTW89_FCC][2][51] = 127,
[0][0][1][0][RTW89_ETSI][1][51] = 127,
@@ -38434,6 +38940,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][51] = 127,
[0][0][1][0][RTW89_MKK][0][51] = 127,
[0][0][1][0][RTW89_IC][1][51] = 22,
+ [0][0][1][0][RTW89_IC][2][51] = 70,
[0][0][1][0][RTW89_KCC][1][51] = 24,
[0][0][1][0][RTW89_KCC][0][51] = 127,
[0][0][1][0][RTW89_ACMA][1][51] = 127,
@@ -38443,6 +38950,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][51] = 127,
[0][0][1][0][RTW89_UK][1][51] = 127,
[0][0][1][0][RTW89_UK][0][51] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][51] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][51] = 127,
[0][0][1][0][RTW89_FCC][1][53] = 22,
[0][0][1][0][RTW89_FCC][2][53] = 127,
[0][0][1][0][RTW89_ETSI][1][53] = 127,
@@ -38450,6 +38959,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][53] = 127,
[0][0][1][0][RTW89_MKK][0][53] = 127,
[0][0][1][0][RTW89_IC][1][53] = 22,
+ [0][0][1][0][RTW89_IC][2][53] = 70,
[0][0][1][0][RTW89_KCC][1][53] = 24,
[0][0][1][0][RTW89_KCC][0][53] = 127,
[0][0][1][0][RTW89_ACMA][1][53] = 127,
@@ -38459,6 +38969,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][53] = 127,
[0][0][1][0][RTW89_UK][1][53] = 127,
[0][0][1][0][RTW89_UK][0][53] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][53] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][53] = 127,
[0][0][1][0][RTW89_FCC][1][55] = 22,
[0][0][1][0][RTW89_FCC][2][55] = 68,
[0][0][1][0][RTW89_ETSI][1][55] = 127,
@@ -38466,6 +38978,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][55] = 127,
[0][0][1][0][RTW89_MKK][0][55] = 127,
[0][0][1][0][RTW89_IC][1][55] = 22,
+ [0][0][1][0][RTW89_IC][2][55] = 68,
[0][0][1][0][RTW89_KCC][1][55] = 26,
[0][0][1][0][RTW89_KCC][0][55] = 127,
[0][0][1][0][RTW89_ACMA][1][55] = 127,
@@ -38475,6 +38988,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][55] = 127,
[0][0][1][0][RTW89_UK][1][55] = 127,
[0][0][1][0][RTW89_UK][0][55] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][55] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][55] = 127,
[0][0][1][0][RTW89_FCC][1][57] = 22,
[0][0][1][0][RTW89_FCC][2][57] = 68,
[0][0][1][0][RTW89_ETSI][1][57] = 127,
@@ -38482,6 +38997,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][57] = 127,
[0][0][1][0][RTW89_MKK][0][57] = 127,
[0][0][1][0][RTW89_IC][1][57] = 22,
+ [0][0][1][0][RTW89_IC][2][57] = 68,
[0][0][1][0][RTW89_KCC][1][57] = 26,
[0][0][1][0][RTW89_KCC][0][57] = 127,
[0][0][1][0][RTW89_ACMA][1][57] = 127,
@@ -38491,6 +39007,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][57] = 127,
[0][0][1][0][RTW89_UK][1][57] = 127,
[0][0][1][0][RTW89_UK][0][57] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][57] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][57] = 127,
[0][0][1][0][RTW89_FCC][1][59] = 22,
[0][0][1][0][RTW89_FCC][2][59] = 68,
[0][0][1][0][RTW89_ETSI][1][59] = 127,
@@ -38498,6 +39016,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][59] = 127,
[0][0][1][0][RTW89_MKK][0][59] = 127,
[0][0][1][0][RTW89_IC][1][59] = 22,
+ [0][0][1][0][RTW89_IC][2][59] = 68,
[0][0][1][0][RTW89_KCC][1][59] = 26,
[0][0][1][0][RTW89_KCC][0][59] = 127,
[0][0][1][0][RTW89_ACMA][1][59] = 127,
@@ -38507,6 +39026,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][59] = 127,
[0][0][1][0][RTW89_UK][1][59] = 127,
[0][0][1][0][RTW89_UK][0][59] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][59] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][59] = 127,
[0][0][1][0][RTW89_FCC][1][60] = 22,
[0][0][1][0][RTW89_FCC][2][60] = 68,
[0][0][1][0][RTW89_ETSI][1][60] = 127,
@@ -38514,6 +39035,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][60] = 127,
[0][0][1][0][RTW89_MKK][0][60] = 127,
[0][0][1][0][RTW89_IC][1][60] = 22,
+ [0][0][1][0][RTW89_IC][2][60] = 68,
[0][0][1][0][RTW89_KCC][1][60] = 26,
[0][0][1][0][RTW89_KCC][0][60] = 127,
[0][0][1][0][RTW89_ACMA][1][60] = 127,
@@ -38523,6 +39045,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][60] = 127,
[0][0][1][0][RTW89_UK][1][60] = 127,
[0][0][1][0][RTW89_UK][0][60] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][60] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][60] = 127,
[0][0][1][0][RTW89_FCC][1][62] = 22,
[0][0][1][0][RTW89_FCC][2][62] = 68,
[0][0][1][0][RTW89_ETSI][1][62] = 127,
@@ -38530,6 +39054,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][62] = 127,
[0][0][1][0][RTW89_MKK][0][62] = 127,
[0][0][1][0][RTW89_IC][1][62] = 22,
+ [0][0][1][0][RTW89_IC][2][62] = 68,
[0][0][1][0][RTW89_KCC][1][62] = 26,
[0][0][1][0][RTW89_KCC][0][62] = 127,
[0][0][1][0][RTW89_ACMA][1][62] = 127,
@@ -38539,6 +39064,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][62] = 127,
[0][0][1][0][RTW89_UK][1][62] = 127,
[0][0][1][0][RTW89_UK][0][62] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][62] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][62] = 127,
[0][0][1][0][RTW89_FCC][1][64] = 22,
[0][0][1][0][RTW89_FCC][2][64] = 68,
[0][0][1][0][RTW89_ETSI][1][64] = 127,
@@ -38546,6 +39073,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][64] = 127,
[0][0][1][0][RTW89_MKK][0][64] = 127,
[0][0][1][0][RTW89_IC][1][64] = 22,
+ [0][0][1][0][RTW89_IC][2][64] = 68,
[0][0][1][0][RTW89_KCC][1][64] = 26,
[0][0][1][0][RTW89_KCC][0][64] = 127,
[0][0][1][0][RTW89_ACMA][1][64] = 127,
@@ -38555,6 +39083,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][64] = 127,
[0][0][1][0][RTW89_UK][1][64] = 127,
[0][0][1][0][RTW89_UK][0][64] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][64] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][64] = 127,
[0][0][1][0][RTW89_FCC][1][66] = 22,
[0][0][1][0][RTW89_FCC][2][66] = 68,
[0][0][1][0][RTW89_ETSI][1][66] = 127,
@@ -38562,6 +39092,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][66] = 127,
[0][0][1][0][RTW89_MKK][0][66] = 127,
[0][0][1][0][RTW89_IC][1][66] = 22,
+ [0][0][1][0][RTW89_IC][2][66] = 68,
[0][0][1][0][RTW89_KCC][1][66] = 26,
[0][0][1][0][RTW89_KCC][0][66] = 127,
[0][0][1][0][RTW89_ACMA][1][66] = 127,
@@ -38571,6 +39102,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][66] = 127,
[0][0][1][0][RTW89_UK][1][66] = 127,
[0][0][1][0][RTW89_UK][0][66] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][66] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][66] = 127,
[0][0][1][0][RTW89_FCC][1][68] = 22,
[0][0][1][0][RTW89_FCC][2][68] = 68,
[0][0][1][0][RTW89_ETSI][1][68] = 127,
@@ -38578,6 +39111,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][68] = 127,
[0][0][1][0][RTW89_MKK][0][68] = 127,
[0][0][1][0][RTW89_IC][1][68] = 22,
+ [0][0][1][0][RTW89_IC][2][68] = 68,
[0][0][1][0][RTW89_KCC][1][68] = 26,
[0][0][1][0][RTW89_KCC][0][68] = 127,
[0][0][1][0][RTW89_ACMA][1][68] = 127,
@@ -38587,6 +39121,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][68] = 127,
[0][0][1][0][RTW89_UK][1][68] = 127,
[0][0][1][0][RTW89_UK][0][68] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][68] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][68] = 127,
[0][0][1][0][RTW89_FCC][1][70] = 24,
[0][0][1][0][RTW89_FCC][2][70] = 68,
[0][0][1][0][RTW89_ETSI][1][70] = 127,
@@ -38594,6 +39130,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][70] = 127,
[0][0][1][0][RTW89_MKK][0][70] = 127,
[0][0][1][0][RTW89_IC][1][70] = 24,
+ [0][0][1][0][RTW89_IC][2][70] = 68,
[0][0][1][0][RTW89_KCC][1][70] = 26,
[0][0][1][0][RTW89_KCC][0][70] = 127,
[0][0][1][0][RTW89_ACMA][1][70] = 127,
@@ -38603,6 +39140,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][70] = 127,
[0][0][1][0][RTW89_UK][1][70] = 127,
[0][0][1][0][RTW89_UK][0][70] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][70] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][70] = 127,
[0][0][1][0][RTW89_FCC][1][72] = 22,
[0][0][1][0][RTW89_FCC][2][72] = 68,
[0][0][1][0][RTW89_ETSI][1][72] = 127,
@@ -38610,6 +39149,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][72] = 127,
[0][0][1][0][RTW89_MKK][0][72] = 127,
[0][0][1][0][RTW89_IC][1][72] = 22,
+ [0][0][1][0][RTW89_IC][2][72] = 68,
[0][0][1][0][RTW89_KCC][1][72] = 26,
[0][0][1][0][RTW89_KCC][0][72] = 127,
[0][0][1][0][RTW89_ACMA][1][72] = 127,
@@ -38619,6 +39159,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][72] = 127,
[0][0][1][0][RTW89_UK][1][72] = 127,
[0][0][1][0][RTW89_UK][0][72] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][72] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][72] = 127,
[0][0][1][0][RTW89_FCC][1][74] = 22,
[0][0][1][0][RTW89_FCC][2][74] = 68,
[0][0][1][0][RTW89_ETSI][1][74] = 127,
@@ -38626,6 +39168,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][74] = 127,
[0][0][1][0][RTW89_MKK][0][74] = 127,
[0][0][1][0][RTW89_IC][1][74] = 22,
+ [0][0][1][0][RTW89_IC][2][74] = 68,
[0][0][1][0][RTW89_KCC][1][74] = 26,
[0][0][1][0][RTW89_KCC][0][74] = 127,
[0][0][1][0][RTW89_ACMA][1][74] = 127,
@@ -38635,6 +39178,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][74] = 127,
[0][0][1][0][RTW89_UK][1][74] = 127,
[0][0][1][0][RTW89_UK][0][74] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][74] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][74] = 127,
[0][0][1][0][RTW89_FCC][1][75] = 22,
[0][0][1][0][RTW89_FCC][2][75] = 68,
[0][0][1][0][RTW89_ETSI][1][75] = 127,
@@ -38642,6 +39187,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][75] = 127,
[0][0][1][0][RTW89_MKK][0][75] = 127,
[0][0][1][0][RTW89_IC][1][75] = 22,
+ [0][0][1][0][RTW89_IC][2][75] = 68,
[0][0][1][0][RTW89_KCC][1][75] = 26,
[0][0][1][0][RTW89_KCC][0][75] = 127,
[0][0][1][0][RTW89_ACMA][1][75] = 127,
@@ -38651,6 +39197,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][75] = 127,
[0][0][1][0][RTW89_UK][1][75] = 127,
[0][0][1][0][RTW89_UK][0][75] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][75] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][75] = 127,
[0][0][1][0][RTW89_FCC][1][77] = 22,
[0][0][1][0][RTW89_FCC][2][77] = 68,
[0][0][1][0][RTW89_ETSI][1][77] = 127,
@@ -38658,6 +39206,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][77] = 127,
[0][0][1][0][RTW89_MKK][0][77] = 127,
[0][0][1][0][RTW89_IC][1][77] = 22,
+ [0][0][1][0][RTW89_IC][2][77] = 68,
[0][0][1][0][RTW89_KCC][1][77] = 26,
[0][0][1][0][RTW89_KCC][0][77] = 127,
[0][0][1][0][RTW89_ACMA][1][77] = 127,
@@ -38667,6 +39216,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][77] = 127,
[0][0][1][0][RTW89_UK][1][77] = 127,
[0][0][1][0][RTW89_UK][0][77] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][77] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][77] = 127,
[0][0][1][0][RTW89_FCC][1][79] = 22,
[0][0][1][0][RTW89_FCC][2][79] = 68,
[0][0][1][0][RTW89_ETSI][1][79] = 127,
@@ -38674,6 +39225,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][79] = 127,
[0][0][1][0][RTW89_MKK][0][79] = 127,
[0][0][1][0][RTW89_IC][1][79] = 22,
+ [0][0][1][0][RTW89_IC][2][79] = 68,
[0][0][1][0][RTW89_KCC][1][79] = 26,
[0][0][1][0][RTW89_KCC][0][79] = 127,
[0][0][1][0][RTW89_ACMA][1][79] = 127,
@@ -38683,6 +39235,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][79] = 127,
[0][0][1][0][RTW89_UK][1][79] = 127,
[0][0][1][0][RTW89_UK][0][79] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][79] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][79] = 127,
[0][0][1][0][RTW89_FCC][1][81] = 22,
[0][0][1][0][RTW89_FCC][2][81] = 68,
[0][0][1][0][RTW89_ETSI][1][81] = 127,
@@ -38690,6 +39244,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][81] = 127,
[0][0][1][0][RTW89_MKK][0][81] = 127,
[0][0][1][0][RTW89_IC][1][81] = 22,
+ [0][0][1][0][RTW89_IC][2][81] = 68,
[0][0][1][0][RTW89_KCC][1][81] = 26,
[0][0][1][0][RTW89_KCC][0][81] = 127,
[0][0][1][0][RTW89_ACMA][1][81] = 127,
@@ -38699,6 +39254,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][81] = 127,
[0][0][1][0][RTW89_UK][1][81] = 127,
[0][0][1][0][RTW89_UK][0][81] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][81] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][81] = 127,
[0][0][1][0][RTW89_FCC][1][83] = 22,
[0][0][1][0][RTW89_FCC][2][83] = 68,
[0][0][1][0][RTW89_ETSI][1][83] = 127,
@@ -38706,6 +39263,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][83] = 127,
[0][0][1][0][RTW89_MKK][0][83] = 127,
[0][0][1][0][RTW89_IC][1][83] = 22,
+ [0][0][1][0][RTW89_IC][2][83] = 68,
[0][0][1][0][RTW89_KCC][1][83] = 32,
[0][0][1][0][RTW89_KCC][0][83] = 127,
[0][0][1][0][RTW89_ACMA][1][83] = 127,
@@ -38715,6 +39273,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][83] = 127,
[0][0][1][0][RTW89_UK][1][83] = 127,
[0][0][1][0][RTW89_UK][0][83] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][83] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][83] = 127,
[0][0][1][0][RTW89_FCC][1][85] = 22,
[0][0][1][0][RTW89_FCC][2][85] = 68,
[0][0][1][0][RTW89_ETSI][1][85] = 127,
@@ -38722,6 +39282,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][85] = 127,
[0][0][1][0][RTW89_MKK][0][85] = 127,
[0][0][1][0][RTW89_IC][1][85] = 22,
+ [0][0][1][0][RTW89_IC][2][85] = 68,
[0][0][1][0][RTW89_KCC][1][85] = 32,
[0][0][1][0][RTW89_KCC][0][85] = 127,
[0][0][1][0][RTW89_ACMA][1][85] = 127,
@@ -38731,6 +39292,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][85] = 127,
[0][0][1][0][RTW89_UK][1][85] = 127,
[0][0][1][0][RTW89_UK][0][85] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][85] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][85] = 127,
[0][0][1][0][RTW89_FCC][1][87] = 22,
[0][0][1][0][RTW89_FCC][2][87] = 127,
[0][0][1][0][RTW89_ETSI][1][87] = 127,
@@ -38738,6 +39301,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][87] = 127,
[0][0][1][0][RTW89_MKK][0][87] = 127,
[0][0][1][0][RTW89_IC][1][87] = 22,
+ [0][0][1][0][RTW89_IC][2][87] = 127,
[0][0][1][0][RTW89_KCC][1][87] = 32,
[0][0][1][0][RTW89_KCC][0][87] = 127,
[0][0][1][0][RTW89_ACMA][1][87] = 127,
@@ -38747,6 +39311,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][87] = 127,
[0][0][1][0][RTW89_UK][1][87] = 127,
[0][0][1][0][RTW89_UK][0][87] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][87] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][87] = 127,
[0][0][1][0][RTW89_FCC][1][89] = 22,
[0][0][1][0][RTW89_FCC][2][89] = 127,
[0][0][1][0][RTW89_ETSI][1][89] = 127,
@@ -38754,6 +39320,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][89] = 127,
[0][0][1][0][RTW89_MKK][0][89] = 127,
[0][0][1][0][RTW89_IC][1][89] = 22,
+ [0][0][1][0][RTW89_IC][2][89] = 127,
[0][0][1][0][RTW89_KCC][1][89] = 32,
[0][0][1][0][RTW89_KCC][0][89] = 127,
[0][0][1][0][RTW89_ACMA][1][89] = 127,
@@ -38763,6 +39330,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][89] = 127,
[0][0][1][0][RTW89_UK][1][89] = 127,
[0][0][1][0][RTW89_UK][0][89] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][89] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][89] = 127,
[0][0][1][0][RTW89_FCC][1][90] = 22,
[0][0][1][0][RTW89_FCC][2][90] = 127,
[0][0][1][0][RTW89_ETSI][1][90] = 127,
@@ -38770,6 +39339,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][90] = 127,
[0][0][1][0][RTW89_MKK][0][90] = 127,
[0][0][1][0][RTW89_IC][1][90] = 22,
+ [0][0][1][0][RTW89_IC][2][90] = 127,
[0][0][1][0][RTW89_KCC][1][90] = 32,
[0][0][1][0][RTW89_KCC][0][90] = 127,
[0][0][1][0][RTW89_ACMA][1][90] = 127,
@@ -38779,6 +39349,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][90] = 127,
[0][0][1][0][RTW89_UK][1][90] = 127,
[0][0][1][0][RTW89_UK][0][90] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][90] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][90] = 127,
[0][0][1][0][RTW89_FCC][1][92] = 22,
[0][0][1][0][RTW89_FCC][2][92] = 127,
[0][0][1][0][RTW89_ETSI][1][92] = 127,
@@ -38786,6 +39358,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][92] = 127,
[0][0][1][0][RTW89_MKK][0][92] = 127,
[0][0][1][0][RTW89_IC][1][92] = 22,
+ [0][0][1][0][RTW89_IC][2][92] = 127,
[0][0][1][0][RTW89_KCC][1][92] = 32,
[0][0][1][0][RTW89_KCC][0][92] = 127,
[0][0][1][0][RTW89_ACMA][1][92] = 127,
@@ -38795,6 +39368,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][92] = 127,
[0][0][1][0][RTW89_UK][1][92] = 127,
[0][0][1][0][RTW89_UK][0][92] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][92] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][92] = 127,
[0][0][1][0][RTW89_FCC][1][94] = 22,
[0][0][1][0][RTW89_FCC][2][94] = 127,
[0][0][1][0][RTW89_ETSI][1][94] = 127,
@@ -38802,6 +39377,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][94] = 127,
[0][0][1][0][RTW89_MKK][0][94] = 127,
[0][0][1][0][RTW89_IC][1][94] = 22,
+ [0][0][1][0][RTW89_IC][2][94] = 127,
[0][0][1][0][RTW89_KCC][1][94] = 32,
[0][0][1][0][RTW89_KCC][0][94] = 127,
[0][0][1][0][RTW89_ACMA][1][94] = 127,
@@ -38811,6 +39387,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][94] = 127,
[0][0][1][0][RTW89_UK][1][94] = 127,
[0][0][1][0][RTW89_UK][0][94] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][94] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][94] = 127,
[0][0][1][0][RTW89_FCC][1][96] = 22,
[0][0][1][0][RTW89_FCC][2][96] = 127,
[0][0][1][0][RTW89_ETSI][1][96] = 127,
@@ -38818,6 +39396,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][96] = 127,
[0][0][1][0][RTW89_MKK][0][96] = 127,
[0][0][1][0][RTW89_IC][1][96] = 22,
+ [0][0][1][0][RTW89_IC][2][96] = 127,
[0][0][1][0][RTW89_KCC][1][96] = 32,
[0][0][1][0][RTW89_KCC][0][96] = 127,
[0][0][1][0][RTW89_ACMA][1][96] = 127,
@@ -38827,6 +39406,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][96] = 127,
[0][0][1][0][RTW89_UK][1][96] = 127,
[0][0][1][0][RTW89_UK][0][96] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][96] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][96] = 127,
[0][0][1][0][RTW89_FCC][1][98] = 22,
[0][0][1][0][RTW89_FCC][2][98] = 127,
[0][0][1][0][RTW89_ETSI][1][98] = 127,
@@ -38834,6 +39415,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][98] = 127,
[0][0][1][0][RTW89_MKK][0][98] = 127,
[0][0][1][0][RTW89_IC][1][98] = 22,
+ [0][0][1][0][RTW89_IC][2][98] = 127,
[0][0][1][0][RTW89_KCC][1][98] = 32,
[0][0][1][0][RTW89_KCC][0][98] = 127,
[0][0][1][0][RTW89_ACMA][1][98] = 127,
@@ -38843,6 +39425,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][98] = 127,
[0][0][1][0][RTW89_UK][1][98] = 127,
[0][0][1][0][RTW89_UK][0][98] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][98] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][98] = 127,
[0][0][1][0][RTW89_FCC][1][100] = 22,
[0][0][1][0][RTW89_FCC][2][100] = 127,
[0][0][1][0][RTW89_ETSI][1][100] = 127,
@@ -38850,6 +39434,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][100] = 127,
[0][0][1][0][RTW89_MKK][0][100] = 127,
[0][0][1][0][RTW89_IC][1][100] = 22,
+ [0][0][1][0][RTW89_IC][2][100] = 127,
[0][0][1][0][RTW89_KCC][1][100] = 32,
[0][0][1][0][RTW89_KCC][0][100] = 127,
[0][0][1][0][RTW89_ACMA][1][100] = 127,
@@ -38859,6 +39444,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][100] = 127,
[0][0][1][0][RTW89_UK][1][100] = 127,
[0][0][1][0][RTW89_UK][0][100] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][100] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][100] = 127,
[0][0][1][0][RTW89_FCC][1][102] = 22,
[0][0][1][0][RTW89_FCC][2][102] = 127,
[0][0][1][0][RTW89_ETSI][1][102] = 127,
@@ -38866,6 +39453,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][102] = 127,
[0][0][1][0][RTW89_MKK][0][102] = 127,
[0][0][1][0][RTW89_IC][1][102] = 22,
+ [0][0][1][0][RTW89_IC][2][102] = 127,
[0][0][1][0][RTW89_KCC][1][102] = 32,
[0][0][1][0][RTW89_KCC][0][102] = 127,
[0][0][1][0][RTW89_ACMA][1][102] = 127,
@@ -38875,6 +39463,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][102] = 127,
[0][0][1][0][RTW89_UK][1][102] = 127,
[0][0][1][0][RTW89_UK][0][102] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][102] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][102] = 127,
[0][0][1][0][RTW89_FCC][1][104] = 22,
[0][0][1][0][RTW89_FCC][2][104] = 127,
[0][0][1][0][RTW89_ETSI][1][104] = 127,
@@ -38882,6 +39472,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][104] = 127,
[0][0][1][0][RTW89_MKK][0][104] = 127,
[0][0][1][0][RTW89_IC][1][104] = 22,
+ [0][0][1][0][RTW89_IC][2][104] = 127,
[0][0][1][0][RTW89_KCC][1][104] = 32,
[0][0][1][0][RTW89_KCC][0][104] = 127,
[0][0][1][0][RTW89_ACMA][1][104] = 127,
@@ -38891,6 +39482,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][104] = 127,
[0][0][1][0][RTW89_UK][1][104] = 127,
[0][0][1][0][RTW89_UK][0][104] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][104] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][104] = 127,
[0][0][1][0][RTW89_FCC][1][105] = 22,
[0][0][1][0][RTW89_FCC][2][105] = 127,
[0][0][1][0][RTW89_ETSI][1][105] = 127,
@@ -38898,6 +39491,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][105] = 127,
[0][0][1][0][RTW89_MKK][0][105] = 127,
[0][0][1][0][RTW89_IC][1][105] = 22,
+ [0][0][1][0][RTW89_IC][2][105] = 127,
[0][0][1][0][RTW89_KCC][1][105] = 32,
[0][0][1][0][RTW89_KCC][0][105] = 127,
[0][0][1][0][RTW89_ACMA][1][105] = 127,
@@ -38907,6 +39501,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][105] = 127,
[0][0][1][0][RTW89_UK][1][105] = 127,
[0][0][1][0][RTW89_UK][0][105] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][105] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][105] = 127,
[0][0][1][0][RTW89_FCC][1][107] = 24,
[0][0][1][0][RTW89_FCC][2][107] = 127,
[0][0][1][0][RTW89_ETSI][1][107] = 127,
@@ -38914,6 +39510,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][107] = 127,
[0][0][1][0][RTW89_MKK][0][107] = 127,
[0][0][1][0][RTW89_IC][1][107] = 24,
+ [0][0][1][0][RTW89_IC][2][107] = 127,
[0][0][1][0][RTW89_KCC][1][107] = 32,
[0][0][1][0][RTW89_KCC][0][107] = 127,
[0][0][1][0][RTW89_ACMA][1][107] = 127,
@@ -38923,6 +39520,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][107] = 127,
[0][0][1][0][RTW89_UK][1][107] = 127,
[0][0][1][0][RTW89_UK][0][107] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][107] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][107] = 127,
[0][0][1][0][RTW89_FCC][1][109] = 24,
[0][0][1][0][RTW89_FCC][2][109] = 127,
[0][0][1][0][RTW89_ETSI][1][109] = 127,
@@ -38930,6 +39529,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][109] = 127,
[0][0][1][0][RTW89_MKK][0][109] = 127,
[0][0][1][0][RTW89_IC][1][109] = 24,
+ [0][0][1][0][RTW89_IC][2][109] = 127,
[0][0][1][0][RTW89_KCC][1][109] = 32,
[0][0][1][0][RTW89_KCC][0][109] = 127,
[0][0][1][0][RTW89_ACMA][1][109] = 127,
@@ -38939,6 +39539,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][109] = 127,
[0][0][1][0][RTW89_UK][1][109] = 127,
[0][0][1][0][RTW89_UK][0][109] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][109] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][109] = 127,
[0][0][1][0][RTW89_FCC][1][111] = 127,
[0][0][1][0][RTW89_FCC][2][111] = 127,
[0][0][1][0][RTW89_ETSI][1][111] = 127,
@@ -38946,6 +39548,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][111] = 127,
[0][0][1][0][RTW89_MKK][0][111] = 127,
[0][0][1][0][RTW89_IC][1][111] = 127,
+ [0][0][1][0][RTW89_IC][2][111] = 127,
[0][0][1][0][RTW89_KCC][1][111] = 127,
[0][0][1][0][RTW89_KCC][0][111] = 127,
[0][0][1][0][RTW89_ACMA][1][111] = 127,
@@ -38955,6 +39558,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][111] = 127,
[0][0][1][0][RTW89_UK][1][111] = 127,
[0][0][1][0][RTW89_UK][0][111] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][111] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][111] = 127,
[0][0][1][0][RTW89_FCC][1][113] = 127,
[0][0][1][0][RTW89_FCC][2][113] = 127,
[0][0][1][0][RTW89_ETSI][1][113] = 127,
@@ -38962,6 +39567,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][113] = 127,
[0][0][1][0][RTW89_MKK][0][113] = 127,
[0][0][1][0][RTW89_IC][1][113] = 127,
+ [0][0][1][0][RTW89_IC][2][113] = 127,
[0][0][1][0][RTW89_KCC][1][113] = 127,
[0][0][1][0][RTW89_KCC][0][113] = 127,
[0][0][1][0][RTW89_ACMA][1][113] = 127,
@@ -38971,6 +39577,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][113] = 127,
[0][0][1][0][RTW89_UK][1][113] = 127,
[0][0][1][0][RTW89_UK][0][113] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][113] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][113] = 127,
[0][0][1][0][RTW89_FCC][1][115] = 127,
[0][0][1][0][RTW89_FCC][2][115] = 127,
[0][0][1][0][RTW89_ETSI][1][115] = 127,
@@ -38978,6 +39586,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][115] = 127,
[0][0][1][0][RTW89_MKK][0][115] = 127,
[0][0][1][0][RTW89_IC][1][115] = 127,
+ [0][0][1][0][RTW89_IC][2][115] = 127,
[0][0][1][0][RTW89_KCC][1][115] = 127,
[0][0][1][0][RTW89_KCC][0][115] = 127,
[0][0][1][0][RTW89_ACMA][1][115] = 127,
@@ -38987,6 +39596,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][115] = 127,
[0][0][1][0][RTW89_UK][1][115] = 127,
[0][0][1][0][RTW89_UK][0][115] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][115] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][115] = 127,
[0][0][1][0][RTW89_FCC][1][117] = 127,
[0][0][1][0][RTW89_FCC][2][117] = 127,
[0][0][1][0][RTW89_ETSI][1][117] = 127,
@@ -38994,6 +39605,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][117] = 127,
[0][0][1][0][RTW89_MKK][0][117] = 127,
[0][0][1][0][RTW89_IC][1][117] = 127,
+ [0][0][1][0][RTW89_IC][2][117] = 127,
[0][0][1][0][RTW89_KCC][1][117] = 127,
[0][0][1][0][RTW89_KCC][0][117] = 127,
[0][0][1][0][RTW89_ACMA][1][117] = 127,
@@ -39003,6 +39615,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][117] = 127,
[0][0][1][0][RTW89_UK][1][117] = 127,
[0][0][1][0][RTW89_UK][0][117] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][117] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][117] = 127,
[0][0][1][0][RTW89_FCC][1][119] = 127,
[0][0][1][0][RTW89_FCC][2][119] = 127,
[0][0][1][0][RTW89_ETSI][1][119] = 127,
@@ -39010,6 +39624,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_MKK][1][119] = 127,
[0][0][1][0][RTW89_MKK][0][119] = 127,
[0][0][1][0][RTW89_IC][1][119] = 127,
+ [0][0][1][0][RTW89_IC][2][119] = 127,
[0][0][1][0][RTW89_KCC][1][119] = 127,
[0][0][1][0][RTW89_KCC][0][119] = 127,
[0][0][1][0][RTW89_ACMA][1][119] = 127,
@@ -39019,6 +39634,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][1][0][RTW89_QATAR][0][119] = 127,
[0][0][1][0][RTW89_UK][1][119] = 127,
[0][0][1][0][RTW89_UK][0][119] = 127,
+ [0][0][1][0][RTW89_THAILAND][1][119] = 127,
+ [0][0][1][0][RTW89_THAILAND][0][119] = 127,
[0][1][1][0][RTW89_FCC][1][0] = -2,
[0][1][1][0][RTW89_FCC][2][0] = 54,
[0][1][1][0][RTW89_ETSI][1][0] = 54,
@@ -39026,6 +39643,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][0] = 56,
[0][1][1][0][RTW89_MKK][0][0] = 16,
[0][1][1][0][RTW89_IC][1][0] = -2,
+ [0][1][1][0][RTW89_IC][2][0] = 54,
[0][1][1][0][RTW89_KCC][1][0] = 12,
[0][1][1][0][RTW89_KCC][0][0] = 10,
[0][1][1][0][RTW89_ACMA][1][0] = 54,
@@ -39035,6 +39653,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][0] = 18,
[0][1][1][0][RTW89_UK][1][0] = 54,
[0][1][1][0][RTW89_UK][0][0] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][0] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][0] = -2,
[0][1][1][0][RTW89_FCC][1][2] = -4,
[0][1][1][0][RTW89_FCC][2][2] = 54,
[0][1][1][0][RTW89_ETSI][1][2] = 54,
@@ -39042,6 +39662,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][2] = 54,
[0][1][1][0][RTW89_MKK][0][2] = 16,
[0][1][1][0][RTW89_IC][1][2] = -4,
+ [0][1][1][0][RTW89_IC][2][2] = 54,
[0][1][1][0][RTW89_KCC][1][2] = 12,
[0][1][1][0][RTW89_KCC][0][2] = 12,
[0][1][1][0][RTW89_ACMA][1][2] = 54,
@@ -39051,6 +39672,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][2] = 18,
[0][1][1][0][RTW89_UK][1][2] = 54,
[0][1][1][0][RTW89_UK][0][2] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][2] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][2] = -4,
[0][1][1][0][RTW89_FCC][1][4] = -4,
[0][1][1][0][RTW89_FCC][2][4] = 54,
[0][1][1][0][RTW89_ETSI][1][4] = 54,
@@ -39058,6 +39681,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][4] = 54,
[0][1][1][0][RTW89_MKK][0][4] = 16,
[0][1][1][0][RTW89_IC][1][4] = -4,
+ [0][1][1][0][RTW89_IC][2][4] = 54,
[0][1][1][0][RTW89_KCC][1][4] = 12,
[0][1][1][0][RTW89_KCC][0][4] = 12,
[0][1][1][0][RTW89_ACMA][1][4] = 54,
@@ -39067,6 +39691,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][4] = 18,
[0][1][1][0][RTW89_UK][1][4] = 54,
[0][1][1][0][RTW89_UK][0][4] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][4] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][4] = -4,
[0][1][1][0][RTW89_FCC][1][6] = -4,
[0][1][1][0][RTW89_FCC][2][6] = 54,
[0][1][1][0][RTW89_ETSI][1][6] = 54,
@@ -39074,6 +39700,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][6] = 54,
[0][1][1][0][RTW89_MKK][0][6] = 16,
[0][1][1][0][RTW89_IC][1][6] = -4,
+ [0][1][1][0][RTW89_IC][2][6] = 54,
[0][1][1][0][RTW89_KCC][1][6] = 12,
[0][1][1][0][RTW89_KCC][0][6] = 12,
[0][1][1][0][RTW89_ACMA][1][6] = 54,
@@ -39083,6 +39710,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][6] = 18,
[0][1][1][0][RTW89_UK][1][6] = 54,
[0][1][1][0][RTW89_UK][0][6] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][6] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][6] = -4,
[0][1][1][0][RTW89_FCC][1][8] = -4,
[0][1][1][0][RTW89_FCC][2][8] = 54,
[0][1][1][0][RTW89_ETSI][1][8] = 54,
@@ -39090,6 +39719,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][8] = 54,
[0][1][1][0][RTW89_MKK][0][8] = 16,
[0][1][1][0][RTW89_IC][1][8] = -4,
+ [0][1][1][0][RTW89_IC][2][8] = 54,
[0][1][1][0][RTW89_KCC][1][8] = 12,
[0][1][1][0][RTW89_KCC][0][8] = 12,
[0][1][1][0][RTW89_ACMA][1][8] = 54,
@@ -39099,6 +39729,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][8] = 18,
[0][1][1][0][RTW89_UK][1][8] = 54,
[0][1][1][0][RTW89_UK][0][8] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][8] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][8] = -4,
[0][1][1][0][RTW89_FCC][1][10] = -4,
[0][1][1][0][RTW89_FCC][2][10] = 54,
[0][1][1][0][RTW89_ETSI][1][10] = 54,
@@ -39106,6 +39738,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][10] = 54,
[0][1][1][0][RTW89_MKK][0][10] = 16,
[0][1][1][0][RTW89_IC][1][10] = -4,
+ [0][1][1][0][RTW89_IC][2][10] = 54,
[0][1][1][0][RTW89_KCC][1][10] = 12,
[0][1][1][0][RTW89_KCC][0][10] = 12,
[0][1][1][0][RTW89_ACMA][1][10] = 54,
@@ -39115,6 +39748,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][10] = 18,
[0][1][1][0][RTW89_UK][1][10] = 54,
[0][1][1][0][RTW89_UK][0][10] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][10] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][10] = -4,
[0][1][1][0][RTW89_FCC][1][12] = -4,
[0][1][1][0][RTW89_FCC][2][12] = 54,
[0][1][1][0][RTW89_ETSI][1][12] = 54,
@@ -39122,6 +39757,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][12] = 54,
[0][1][1][0][RTW89_MKK][0][12] = 16,
[0][1][1][0][RTW89_IC][1][12] = -4,
+ [0][1][1][0][RTW89_IC][2][12] = 54,
[0][1][1][0][RTW89_KCC][1][12] = 12,
[0][1][1][0][RTW89_KCC][0][12] = 12,
[0][1][1][0][RTW89_ACMA][1][12] = 54,
@@ -39131,6 +39767,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][12] = 18,
[0][1][1][0][RTW89_UK][1][12] = 54,
[0][1][1][0][RTW89_UK][0][12] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][12] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][12] = -4,
[0][1][1][0][RTW89_FCC][1][14] = -4,
[0][1][1][0][RTW89_FCC][2][14] = 54,
[0][1][1][0][RTW89_ETSI][1][14] = 54,
@@ -39138,6 +39776,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][14] = 54,
[0][1][1][0][RTW89_MKK][0][14] = 16,
[0][1][1][0][RTW89_IC][1][14] = -4,
+ [0][1][1][0][RTW89_IC][2][14] = 54,
[0][1][1][0][RTW89_KCC][1][14] = 12,
[0][1][1][0][RTW89_KCC][0][14] = 12,
[0][1][1][0][RTW89_ACMA][1][14] = 54,
@@ -39147,6 +39786,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][14] = 18,
[0][1][1][0][RTW89_UK][1][14] = 54,
[0][1][1][0][RTW89_UK][0][14] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][14] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][14] = -4,
[0][1][1][0][RTW89_FCC][1][15] = -4,
[0][1][1][0][RTW89_FCC][2][15] = 54,
[0][1][1][0][RTW89_ETSI][1][15] = 54,
@@ -39154,6 +39795,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][15] = 54,
[0][1][1][0][RTW89_MKK][0][15] = 16,
[0][1][1][0][RTW89_IC][1][15] = -4,
+ [0][1][1][0][RTW89_IC][2][15] = 54,
[0][1][1][0][RTW89_KCC][1][15] = 12,
[0][1][1][0][RTW89_KCC][0][15] = 12,
[0][1][1][0][RTW89_ACMA][1][15] = 54,
@@ -39163,6 +39805,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][15] = 18,
[0][1][1][0][RTW89_UK][1][15] = 54,
[0][1][1][0][RTW89_UK][0][15] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][15] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][15] = -4,
[0][1][1][0][RTW89_FCC][1][17] = -4,
[0][1][1][0][RTW89_FCC][2][17] = 54,
[0][1][1][0][RTW89_ETSI][1][17] = 54,
@@ -39170,6 +39814,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][17] = 54,
[0][1][1][0][RTW89_MKK][0][17] = 16,
[0][1][1][0][RTW89_IC][1][17] = -4,
+ [0][1][1][0][RTW89_IC][2][17] = 54,
[0][1][1][0][RTW89_KCC][1][17] = 12,
[0][1][1][0][RTW89_KCC][0][17] = 12,
[0][1][1][0][RTW89_ACMA][1][17] = 54,
@@ -39179,6 +39824,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][17] = 18,
[0][1][1][0][RTW89_UK][1][17] = 54,
[0][1][1][0][RTW89_UK][0][17] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][17] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][17] = -4,
[0][1][1][0][RTW89_FCC][1][19] = -4,
[0][1][1][0][RTW89_FCC][2][19] = 54,
[0][1][1][0][RTW89_ETSI][1][19] = 54,
@@ -39186,6 +39833,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][19] = 54,
[0][1][1][0][RTW89_MKK][0][19] = 16,
[0][1][1][0][RTW89_IC][1][19] = -4,
+ [0][1][1][0][RTW89_IC][2][19] = 54,
[0][1][1][0][RTW89_KCC][1][19] = 12,
[0][1][1][0][RTW89_KCC][0][19] = 12,
[0][1][1][0][RTW89_ACMA][1][19] = 54,
@@ -39195,6 +39843,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][19] = 18,
[0][1][1][0][RTW89_UK][1][19] = 54,
[0][1][1][0][RTW89_UK][0][19] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][19] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][19] = -4,
[0][1][1][0][RTW89_FCC][1][21] = -4,
[0][1][1][0][RTW89_FCC][2][21] = 54,
[0][1][1][0][RTW89_ETSI][1][21] = 54,
@@ -39202,6 +39852,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][21] = 54,
[0][1][1][0][RTW89_MKK][0][21] = 16,
[0][1][1][0][RTW89_IC][1][21] = -4,
+ [0][1][1][0][RTW89_IC][2][21] = 54,
[0][1][1][0][RTW89_KCC][1][21] = 12,
[0][1][1][0][RTW89_KCC][0][21] = 12,
[0][1][1][0][RTW89_ACMA][1][21] = 54,
@@ -39211,6 +39862,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][21] = 18,
[0][1][1][0][RTW89_UK][1][21] = 54,
[0][1][1][0][RTW89_UK][0][21] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][21] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][21] = -4,
[0][1][1][0][RTW89_FCC][1][23] = -4,
[0][1][1][0][RTW89_FCC][2][23] = 68,
[0][1][1][0][RTW89_ETSI][1][23] = 54,
@@ -39218,6 +39871,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][23] = 54,
[0][1][1][0][RTW89_MKK][0][23] = 16,
[0][1][1][0][RTW89_IC][1][23] = -4,
+ [0][1][1][0][RTW89_IC][2][23] = 68,
[0][1][1][0][RTW89_KCC][1][23] = 12,
[0][1][1][0][RTW89_KCC][0][23] = 10,
[0][1][1][0][RTW89_ACMA][1][23] = 54,
@@ -39227,6 +39881,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][23] = 18,
[0][1][1][0][RTW89_UK][1][23] = 54,
[0][1][1][0][RTW89_UK][0][23] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][23] = 44,
+ [0][1][1][0][RTW89_THAILAND][0][23] = -4,
[0][1][1][0][RTW89_FCC][1][25] = -4,
[0][1][1][0][RTW89_FCC][2][25] = 68,
[0][1][1][0][RTW89_ETSI][1][25] = 54,
@@ -39234,6 +39890,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][25] = 54,
[0][1][1][0][RTW89_MKK][0][25] = 16,
[0][1][1][0][RTW89_IC][1][25] = -4,
+ [0][1][1][0][RTW89_IC][2][25] = 68,
[0][1][1][0][RTW89_KCC][1][25] = 12,
[0][1][1][0][RTW89_KCC][0][25] = 14,
[0][1][1][0][RTW89_ACMA][1][25] = 54,
@@ -39243,6 +39900,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][25] = 18,
[0][1][1][0][RTW89_UK][1][25] = 54,
[0][1][1][0][RTW89_UK][0][25] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][25] = 42,
+ [0][1][1][0][RTW89_THAILAND][0][25] = -4,
[0][1][1][0][RTW89_FCC][1][27] = -4,
[0][1][1][0][RTW89_FCC][2][27] = 68,
[0][1][1][0][RTW89_ETSI][1][27] = 54,
@@ -39250,6 +39909,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][27] = 54,
[0][1][1][0][RTW89_MKK][0][27] = 16,
[0][1][1][0][RTW89_IC][1][27] = -4,
+ [0][1][1][0][RTW89_IC][2][27] = 68,
[0][1][1][0][RTW89_KCC][1][27] = 12,
[0][1][1][0][RTW89_KCC][0][27] = 14,
[0][1][1][0][RTW89_ACMA][1][27] = 54,
@@ -39259,6 +39919,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][27] = 18,
[0][1][1][0][RTW89_UK][1][27] = 54,
[0][1][1][0][RTW89_UK][0][27] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][27] = 42,
+ [0][1][1][0][RTW89_THAILAND][0][27] = -4,
[0][1][1][0][RTW89_FCC][1][29] = -4,
[0][1][1][0][RTW89_FCC][2][29] = 68,
[0][1][1][0][RTW89_ETSI][1][29] = 54,
@@ -39266,6 +39928,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][29] = 54,
[0][1][1][0][RTW89_MKK][0][29] = 16,
[0][1][1][0][RTW89_IC][1][29] = -4,
+ [0][1][1][0][RTW89_IC][2][29] = 68,
[0][1][1][0][RTW89_KCC][1][29] = 12,
[0][1][1][0][RTW89_KCC][0][29] = 14,
[0][1][1][0][RTW89_ACMA][1][29] = 54,
@@ -39275,6 +39938,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][29] = 18,
[0][1][1][0][RTW89_UK][1][29] = 54,
[0][1][1][0][RTW89_UK][0][29] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][29] = 42,
+ [0][1][1][0][RTW89_THAILAND][0][29] = -4,
[0][1][1][0][RTW89_FCC][1][30] = -4,
[0][1][1][0][RTW89_FCC][2][30] = 68,
[0][1][1][0][RTW89_ETSI][1][30] = 54,
@@ -39282,6 +39947,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][30] = 54,
[0][1][1][0][RTW89_MKK][0][30] = 16,
[0][1][1][0][RTW89_IC][1][30] = -4,
+ [0][1][1][0][RTW89_IC][2][30] = 68,
[0][1][1][0][RTW89_KCC][1][30] = 12,
[0][1][1][0][RTW89_KCC][0][30] = 14,
[0][1][1][0][RTW89_ACMA][1][30] = 54,
@@ -39291,6 +39957,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][30] = 18,
[0][1][1][0][RTW89_UK][1][30] = 54,
[0][1][1][0][RTW89_UK][0][30] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][30] = 42,
+ [0][1][1][0][RTW89_THAILAND][0][30] = -4,
[0][1][1][0][RTW89_FCC][1][32] = -4,
[0][1][1][0][RTW89_FCC][2][32] = 68,
[0][1][1][0][RTW89_ETSI][1][32] = 54,
@@ -39298,6 +39966,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][32] = 54,
[0][1][1][0][RTW89_MKK][0][32] = 16,
[0][1][1][0][RTW89_IC][1][32] = -4,
+ [0][1][1][0][RTW89_IC][2][32] = 68,
[0][1][1][0][RTW89_KCC][1][32] = 12,
[0][1][1][0][RTW89_KCC][0][32] = 14,
[0][1][1][0][RTW89_ACMA][1][32] = 54,
@@ -39307,6 +39976,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][32] = 18,
[0][1][1][0][RTW89_UK][1][32] = 54,
[0][1][1][0][RTW89_UK][0][32] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][32] = 42,
+ [0][1][1][0][RTW89_THAILAND][0][32] = -4,
[0][1][1][0][RTW89_FCC][1][34] = -4,
[0][1][1][0][RTW89_FCC][2][34] = 68,
[0][1][1][0][RTW89_ETSI][1][34] = 54,
@@ -39314,6 +39985,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][34] = 54,
[0][1][1][0][RTW89_MKK][0][34] = 16,
[0][1][1][0][RTW89_IC][1][34] = -4,
+ [0][1][1][0][RTW89_IC][2][34] = 68,
[0][1][1][0][RTW89_KCC][1][34] = 12,
[0][1][1][0][RTW89_KCC][0][34] = 14,
[0][1][1][0][RTW89_ACMA][1][34] = 54,
@@ -39323,6 +39995,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][34] = 18,
[0][1][1][0][RTW89_UK][1][34] = 54,
[0][1][1][0][RTW89_UK][0][34] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][34] = 42,
+ [0][1][1][0][RTW89_THAILAND][0][34] = -4,
[0][1][1][0][RTW89_FCC][1][36] = -4,
[0][1][1][0][RTW89_FCC][2][36] = 68,
[0][1][1][0][RTW89_ETSI][1][36] = 54,
@@ -39330,6 +40004,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][36] = 54,
[0][1][1][0][RTW89_MKK][0][36] = 16,
[0][1][1][0][RTW89_IC][1][36] = -4,
+ [0][1][1][0][RTW89_IC][2][36] = 68,
[0][1][1][0][RTW89_KCC][1][36] = 12,
[0][1][1][0][RTW89_KCC][0][36] = 14,
[0][1][1][0][RTW89_ACMA][1][36] = 54,
@@ -39339,6 +40014,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][36] = 18,
[0][1][1][0][RTW89_UK][1][36] = 54,
[0][1][1][0][RTW89_UK][0][36] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][36] = 42,
+ [0][1][1][0][RTW89_THAILAND][0][36] = -4,
[0][1][1][0][RTW89_FCC][1][38] = -4,
[0][1][1][0][RTW89_FCC][2][38] = 68,
[0][1][1][0][RTW89_ETSI][1][38] = 54,
@@ -39346,6 +40023,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][38] = 54,
[0][1][1][0][RTW89_MKK][0][38] = 16,
[0][1][1][0][RTW89_IC][1][38] = -4,
+ [0][1][1][0][RTW89_IC][2][38] = 68,
[0][1][1][0][RTW89_KCC][1][38] = 12,
[0][1][1][0][RTW89_KCC][0][38] = 14,
[0][1][1][0][RTW89_ACMA][1][38] = 54,
@@ -39355,6 +40033,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][38] = 18,
[0][1][1][0][RTW89_UK][1][38] = 54,
[0][1][1][0][RTW89_UK][0][38] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][38] = 42,
+ [0][1][1][0][RTW89_THAILAND][0][38] = -4,
[0][1][1][0][RTW89_FCC][1][40] = -4,
[0][1][1][0][RTW89_FCC][2][40] = 68,
[0][1][1][0][RTW89_ETSI][1][40] = 54,
@@ -39362,6 +40042,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][40] = 54,
[0][1][1][0][RTW89_MKK][0][40] = 16,
[0][1][1][0][RTW89_IC][1][40] = -4,
+ [0][1][1][0][RTW89_IC][2][40] = 68,
[0][1][1][0][RTW89_KCC][1][40] = 12,
[0][1][1][0][RTW89_KCC][0][40] = 14,
[0][1][1][0][RTW89_ACMA][1][40] = 54,
@@ -39371,6 +40052,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][40] = 18,
[0][1][1][0][RTW89_UK][1][40] = 54,
[0][1][1][0][RTW89_UK][0][40] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][40] = 42,
+ [0][1][1][0][RTW89_THAILAND][0][40] = -4,
[0][1][1][0][RTW89_FCC][1][42] = -4,
[0][1][1][0][RTW89_FCC][2][42] = 68,
[0][1][1][0][RTW89_ETSI][1][42] = 54,
@@ -39378,6 +40061,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][42] = 54,
[0][1][1][0][RTW89_MKK][0][42] = 16,
[0][1][1][0][RTW89_IC][1][42] = -4,
+ [0][1][1][0][RTW89_IC][2][42] = 68,
[0][1][1][0][RTW89_KCC][1][42] = 12,
[0][1][1][0][RTW89_KCC][0][42] = 14,
[0][1][1][0][RTW89_ACMA][1][42] = 54,
@@ -39387,6 +40071,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][42] = 18,
[0][1][1][0][RTW89_UK][1][42] = 54,
[0][1][1][0][RTW89_UK][0][42] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][42] = 42,
+ [0][1][1][0][RTW89_THAILAND][0][42] = -4,
[0][1][1][0][RTW89_FCC][1][44] = -2,
[0][1][1][0][RTW89_FCC][2][44] = 68,
[0][1][1][0][RTW89_ETSI][1][44] = 54,
@@ -39394,6 +40080,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][44] = 34,
[0][1][1][0][RTW89_MKK][0][44] = 16,
[0][1][1][0][RTW89_IC][1][44] = -2,
+ [0][1][1][0][RTW89_IC][2][44] = 68,
[0][1][1][0][RTW89_KCC][1][44] = 12,
[0][1][1][0][RTW89_KCC][0][44] = 12,
[0][1][1][0][RTW89_ACMA][1][44] = 54,
@@ -39403,6 +40090,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][44] = 18,
[0][1][1][0][RTW89_UK][1][44] = 54,
[0][1][1][0][RTW89_UK][0][44] = 18,
+ [0][1][1][0][RTW89_THAILAND][1][44] = 42,
+ [0][1][1][0][RTW89_THAILAND][0][44] = -2,
[0][1][1][0][RTW89_FCC][1][45] = -2,
[0][1][1][0][RTW89_FCC][2][45] = 127,
[0][1][1][0][RTW89_ETSI][1][45] = 127,
@@ -39410,6 +40099,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][45] = 127,
[0][1][1][0][RTW89_MKK][0][45] = 127,
[0][1][1][0][RTW89_IC][1][45] = -2,
+ [0][1][1][0][RTW89_IC][2][45] = 70,
[0][1][1][0][RTW89_KCC][1][45] = 12,
[0][1][1][0][RTW89_KCC][0][45] = 127,
[0][1][1][0][RTW89_ACMA][1][45] = 127,
@@ -39419,6 +40109,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][45] = 127,
[0][1][1][0][RTW89_UK][1][45] = 127,
[0][1][1][0][RTW89_UK][0][45] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][45] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][45] = 127,
[0][1][1][0][RTW89_FCC][1][47] = -2,
[0][1][1][0][RTW89_FCC][2][47] = 127,
[0][1][1][0][RTW89_ETSI][1][47] = 127,
@@ -39426,6 +40118,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][47] = 127,
[0][1][1][0][RTW89_MKK][0][47] = 127,
[0][1][1][0][RTW89_IC][1][47] = -2,
+ [0][1][1][0][RTW89_IC][2][47] = 68,
[0][1][1][0][RTW89_KCC][1][47] = 12,
[0][1][1][0][RTW89_KCC][0][47] = 127,
[0][1][1][0][RTW89_ACMA][1][47] = 127,
@@ -39435,6 +40128,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][47] = 127,
[0][1][1][0][RTW89_UK][1][47] = 127,
[0][1][1][0][RTW89_UK][0][47] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][47] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][47] = 127,
[0][1][1][0][RTW89_FCC][1][49] = -2,
[0][1][1][0][RTW89_FCC][2][49] = 127,
[0][1][1][0][RTW89_ETSI][1][49] = 127,
@@ -39442,6 +40137,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][49] = 127,
[0][1][1][0][RTW89_MKK][0][49] = 127,
[0][1][1][0][RTW89_IC][1][49] = -2,
+ [0][1][1][0][RTW89_IC][2][49] = 68,
[0][1][1][0][RTW89_KCC][1][49] = 12,
[0][1][1][0][RTW89_KCC][0][49] = 127,
[0][1][1][0][RTW89_ACMA][1][49] = 127,
@@ -39451,6 +40147,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][49] = 127,
[0][1][1][0][RTW89_UK][1][49] = 127,
[0][1][1][0][RTW89_UK][0][49] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][49] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][49] = 127,
[0][1][1][0][RTW89_FCC][1][51] = -2,
[0][1][1][0][RTW89_FCC][2][51] = 127,
[0][1][1][0][RTW89_ETSI][1][51] = 127,
@@ -39458,6 +40156,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][51] = 127,
[0][1][1][0][RTW89_MKK][0][51] = 127,
[0][1][1][0][RTW89_IC][1][51] = -2,
+ [0][1][1][0][RTW89_IC][2][51] = 68,
[0][1][1][0][RTW89_KCC][1][51] = 12,
[0][1][1][0][RTW89_KCC][0][51] = 127,
[0][1][1][0][RTW89_ACMA][1][51] = 127,
@@ -39467,6 +40166,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][51] = 127,
[0][1][1][0][RTW89_UK][1][51] = 127,
[0][1][1][0][RTW89_UK][0][51] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][51] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][51] = 127,
[0][1][1][0][RTW89_FCC][1][53] = -2,
[0][1][1][0][RTW89_FCC][2][53] = 127,
[0][1][1][0][RTW89_ETSI][1][53] = 127,
@@ -39474,6 +40175,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][53] = 127,
[0][1][1][0][RTW89_MKK][0][53] = 127,
[0][1][1][0][RTW89_IC][1][53] = -2,
+ [0][1][1][0][RTW89_IC][2][53] = 68,
[0][1][1][0][RTW89_KCC][1][53] = 12,
[0][1][1][0][RTW89_KCC][0][53] = 127,
[0][1][1][0][RTW89_ACMA][1][53] = 127,
@@ -39483,6 +40185,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][53] = 127,
[0][1][1][0][RTW89_UK][1][53] = 127,
[0][1][1][0][RTW89_UK][0][53] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][53] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][53] = 127,
[0][1][1][0][RTW89_FCC][1][55] = -2,
[0][1][1][0][RTW89_FCC][2][55] = 68,
[0][1][1][0][RTW89_ETSI][1][55] = 127,
@@ -39490,6 +40194,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][55] = 127,
[0][1][1][0][RTW89_MKK][0][55] = 127,
[0][1][1][0][RTW89_IC][1][55] = -2,
+ [0][1][1][0][RTW89_IC][2][55] = 68,
[0][1][1][0][RTW89_KCC][1][55] = 12,
[0][1][1][0][RTW89_KCC][0][55] = 127,
[0][1][1][0][RTW89_ACMA][1][55] = 127,
@@ -39499,6 +40204,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][55] = 127,
[0][1][1][0][RTW89_UK][1][55] = 127,
[0][1][1][0][RTW89_UK][0][55] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][55] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][55] = 127,
[0][1][1][0][RTW89_FCC][1][57] = -2,
[0][1][1][0][RTW89_FCC][2][57] = 68,
[0][1][1][0][RTW89_ETSI][1][57] = 127,
@@ -39506,6 +40213,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][57] = 127,
[0][1][1][0][RTW89_MKK][0][57] = 127,
[0][1][1][0][RTW89_IC][1][57] = -2,
+ [0][1][1][0][RTW89_IC][2][57] = 68,
[0][1][1][0][RTW89_KCC][1][57] = 12,
[0][1][1][0][RTW89_KCC][0][57] = 127,
[0][1][1][0][RTW89_ACMA][1][57] = 127,
@@ -39515,6 +40223,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][57] = 127,
[0][1][1][0][RTW89_UK][1][57] = 127,
[0][1][1][0][RTW89_UK][0][57] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][57] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][57] = 127,
[0][1][1][0][RTW89_FCC][1][59] = -2,
[0][1][1][0][RTW89_FCC][2][59] = 68,
[0][1][1][0][RTW89_ETSI][1][59] = 127,
@@ -39522,6 +40232,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][59] = 127,
[0][1][1][0][RTW89_MKK][0][59] = 127,
[0][1][1][0][RTW89_IC][1][59] = -2,
+ [0][1][1][0][RTW89_IC][2][59] = 68,
[0][1][1][0][RTW89_KCC][1][59] = 12,
[0][1][1][0][RTW89_KCC][0][59] = 127,
[0][1][1][0][RTW89_ACMA][1][59] = 127,
@@ -39531,6 +40242,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][59] = 127,
[0][1][1][0][RTW89_UK][1][59] = 127,
[0][1][1][0][RTW89_UK][0][59] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][59] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][59] = 127,
[0][1][1][0][RTW89_FCC][1][60] = -2,
[0][1][1][0][RTW89_FCC][2][60] = 68,
[0][1][1][0][RTW89_ETSI][1][60] = 127,
@@ -39538,6 +40251,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][60] = 127,
[0][1][1][0][RTW89_MKK][0][60] = 127,
[0][1][1][0][RTW89_IC][1][60] = -2,
+ [0][1][1][0][RTW89_IC][2][60] = 68,
[0][1][1][0][RTW89_KCC][1][60] = 12,
[0][1][1][0][RTW89_KCC][0][60] = 127,
[0][1][1][0][RTW89_ACMA][1][60] = 127,
@@ -39547,6 +40261,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][60] = 127,
[0][1][1][0][RTW89_UK][1][60] = 127,
[0][1][1][0][RTW89_UK][0][60] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][60] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][60] = 127,
[0][1][1][0][RTW89_FCC][1][62] = -2,
[0][1][1][0][RTW89_FCC][2][62] = 68,
[0][1][1][0][RTW89_ETSI][1][62] = 127,
@@ -39554,6 +40270,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][62] = 127,
[0][1][1][0][RTW89_MKK][0][62] = 127,
[0][1][1][0][RTW89_IC][1][62] = -2,
+ [0][1][1][0][RTW89_IC][2][62] = 68,
[0][1][1][0][RTW89_KCC][1][62] = 12,
[0][1][1][0][RTW89_KCC][0][62] = 127,
[0][1][1][0][RTW89_ACMA][1][62] = 127,
@@ -39563,6 +40280,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][62] = 127,
[0][1][1][0][RTW89_UK][1][62] = 127,
[0][1][1][0][RTW89_UK][0][62] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][62] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][62] = 127,
[0][1][1][0][RTW89_FCC][1][64] = -2,
[0][1][1][0][RTW89_FCC][2][64] = 68,
[0][1][1][0][RTW89_ETSI][1][64] = 127,
@@ -39570,6 +40289,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][64] = 127,
[0][1][1][0][RTW89_MKK][0][64] = 127,
[0][1][1][0][RTW89_IC][1][64] = -2,
+ [0][1][1][0][RTW89_IC][2][64] = 68,
[0][1][1][0][RTW89_KCC][1][64] = 12,
[0][1][1][0][RTW89_KCC][0][64] = 127,
[0][1][1][0][RTW89_ACMA][1][64] = 127,
@@ -39579,6 +40299,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][64] = 127,
[0][1][1][0][RTW89_UK][1][64] = 127,
[0][1][1][0][RTW89_UK][0][64] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][64] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][64] = 127,
[0][1][1][0][RTW89_FCC][1][66] = -2,
[0][1][1][0][RTW89_FCC][2][66] = 68,
[0][1][1][0][RTW89_ETSI][1][66] = 127,
@@ -39586,6 +40308,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][66] = 127,
[0][1][1][0][RTW89_MKK][0][66] = 127,
[0][1][1][0][RTW89_IC][1][66] = -2,
+ [0][1][1][0][RTW89_IC][2][66] = 68,
[0][1][1][0][RTW89_KCC][1][66] = 12,
[0][1][1][0][RTW89_KCC][0][66] = 127,
[0][1][1][0][RTW89_ACMA][1][66] = 127,
@@ -39595,6 +40318,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][66] = 127,
[0][1][1][0][RTW89_UK][1][66] = 127,
[0][1][1][0][RTW89_UK][0][66] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][66] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][66] = 127,
[0][1][1][0][RTW89_FCC][1][68] = -2,
[0][1][1][0][RTW89_FCC][2][68] = 68,
[0][1][1][0][RTW89_ETSI][1][68] = 127,
@@ -39602,6 +40327,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][68] = 127,
[0][1][1][0][RTW89_MKK][0][68] = 127,
[0][1][1][0][RTW89_IC][1][68] = -2,
+ [0][1][1][0][RTW89_IC][2][68] = 68,
[0][1][1][0][RTW89_KCC][1][68] = 12,
[0][1][1][0][RTW89_KCC][0][68] = 127,
[0][1][1][0][RTW89_ACMA][1][68] = 127,
@@ -39611,6 +40337,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][68] = 127,
[0][1][1][0][RTW89_UK][1][68] = 127,
[0][1][1][0][RTW89_UK][0][68] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][68] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][68] = 127,
[0][1][1][0][RTW89_FCC][1][70] = -2,
[0][1][1][0][RTW89_FCC][2][70] = 68,
[0][1][1][0][RTW89_ETSI][1][70] = 127,
@@ -39618,6 +40346,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][70] = 127,
[0][1][1][0][RTW89_MKK][0][70] = 127,
[0][1][1][0][RTW89_IC][1][70] = -2,
+ [0][1][1][0][RTW89_IC][2][70] = 68,
[0][1][1][0][RTW89_KCC][1][70] = 12,
[0][1][1][0][RTW89_KCC][0][70] = 127,
[0][1][1][0][RTW89_ACMA][1][70] = 127,
@@ -39627,6 +40356,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][70] = 127,
[0][1][1][0][RTW89_UK][1][70] = 127,
[0][1][1][0][RTW89_UK][0][70] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][70] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][70] = 127,
[0][1][1][0][RTW89_FCC][1][72] = -2,
[0][1][1][0][RTW89_FCC][2][72] = 68,
[0][1][1][0][RTW89_ETSI][1][72] = 127,
@@ -39634,6 +40365,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][72] = 127,
[0][1][1][0][RTW89_MKK][0][72] = 127,
[0][1][1][0][RTW89_IC][1][72] = -2,
+ [0][1][1][0][RTW89_IC][2][72] = 68,
[0][1][1][0][RTW89_KCC][1][72] = 12,
[0][1][1][0][RTW89_KCC][0][72] = 127,
[0][1][1][0][RTW89_ACMA][1][72] = 127,
@@ -39643,6 +40375,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][72] = 127,
[0][1][1][0][RTW89_UK][1][72] = 127,
[0][1][1][0][RTW89_UK][0][72] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][72] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][72] = 127,
[0][1][1][0][RTW89_FCC][1][74] = -2,
[0][1][1][0][RTW89_FCC][2][74] = 68,
[0][1][1][0][RTW89_ETSI][1][74] = 127,
@@ -39650,6 +40384,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][74] = 127,
[0][1][1][0][RTW89_MKK][0][74] = 127,
[0][1][1][0][RTW89_IC][1][74] = -2,
+ [0][1][1][0][RTW89_IC][2][74] = 68,
[0][1][1][0][RTW89_KCC][1][74] = 12,
[0][1][1][0][RTW89_KCC][0][74] = 127,
[0][1][1][0][RTW89_ACMA][1][74] = 127,
@@ -39659,6 +40394,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][74] = 127,
[0][1][1][0][RTW89_UK][1][74] = 127,
[0][1][1][0][RTW89_UK][0][74] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][74] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][74] = 127,
[0][1][1][0][RTW89_FCC][1][75] = -2,
[0][1][1][0][RTW89_FCC][2][75] = 68,
[0][1][1][0][RTW89_ETSI][1][75] = 127,
@@ -39666,6 +40403,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][75] = 127,
[0][1][1][0][RTW89_MKK][0][75] = 127,
[0][1][1][0][RTW89_IC][1][75] = -2,
+ [0][1][1][0][RTW89_IC][2][75] = 68,
[0][1][1][0][RTW89_KCC][1][75] = 12,
[0][1][1][0][RTW89_KCC][0][75] = 127,
[0][1][1][0][RTW89_ACMA][1][75] = 127,
@@ -39675,6 +40413,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][75] = 127,
[0][1][1][0][RTW89_UK][1][75] = 127,
[0][1][1][0][RTW89_UK][0][75] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][75] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][75] = 127,
[0][1][1][0][RTW89_FCC][1][77] = -2,
[0][1][1][0][RTW89_FCC][2][77] = 68,
[0][1][1][0][RTW89_ETSI][1][77] = 127,
@@ -39682,6 +40422,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][77] = 127,
[0][1][1][0][RTW89_MKK][0][77] = 127,
[0][1][1][0][RTW89_IC][1][77] = -2,
+ [0][1][1][0][RTW89_IC][2][77] = 68,
[0][1][1][0][RTW89_KCC][1][77] = 12,
[0][1][1][0][RTW89_KCC][0][77] = 127,
[0][1][1][0][RTW89_ACMA][1][77] = 127,
@@ -39691,6 +40432,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][77] = 127,
[0][1][1][0][RTW89_UK][1][77] = 127,
[0][1][1][0][RTW89_UK][0][77] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][77] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][77] = 127,
[0][1][1][0][RTW89_FCC][1][79] = -2,
[0][1][1][0][RTW89_FCC][2][79] = 68,
[0][1][1][0][RTW89_ETSI][1][79] = 127,
@@ -39698,6 +40441,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][79] = 127,
[0][1][1][0][RTW89_MKK][0][79] = 127,
[0][1][1][0][RTW89_IC][1][79] = -2,
+ [0][1][1][0][RTW89_IC][2][79] = 68,
[0][1][1][0][RTW89_KCC][1][79] = 12,
[0][1][1][0][RTW89_KCC][0][79] = 127,
[0][1][1][0][RTW89_ACMA][1][79] = 127,
@@ -39707,6 +40451,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][79] = 127,
[0][1][1][0][RTW89_UK][1][79] = 127,
[0][1][1][0][RTW89_UK][0][79] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][79] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][79] = 127,
[0][1][1][0][RTW89_FCC][1][81] = -2,
[0][1][1][0][RTW89_FCC][2][81] = 68,
[0][1][1][0][RTW89_ETSI][1][81] = 127,
@@ -39714,6 +40460,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][81] = 127,
[0][1][1][0][RTW89_MKK][0][81] = 127,
[0][1][1][0][RTW89_IC][1][81] = -2,
+ [0][1][1][0][RTW89_IC][2][81] = 68,
[0][1][1][0][RTW89_KCC][1][81] = 12,
[0][1][1][0][RTW89_KCC][0][81] = 127,
[0][1][1][0][RTW89_ACMA][1][81] = 127,
@@ -39723,6 +40470,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][81] = 127,
[0][1][1][0][RTW89_UK][1][81] = 127,
[0][1][1][0][RTW89_UK][0][81] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][81] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][81] = 127,
[0][1][1][0][RTW89_FCC][1][83] = -2,
[0][1][1][0][RTW89_FCC][2][83] = 68,
[0][1][1][0][RTW89_ETSI][1][83] = 127,
@@ -39730,6 +40479,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][83] = 127,
[0][1][1][0][RTW89_MKK][0][83] = 127,
[0][1][1][0][RTW89_IC][1][83] = -2,
+ [0][1][1][0][RTW89_IC][2][83] = 68,
[0][1][1][0][RTW89_KCC][1][83] = 20,
[0][1][1][0][RTW89_KCC][0][83] = 127,
[0][1][1][0][RTW89_ACMA][1][83] = 127,
@@ -39739,6 +40489,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][83] = 127,
[0][1][1][0][RTW89_UK][1][83] = 127,
[0][1][1][0][RTW89_UK][0][83] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][83] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][83] = 127,
[0][1][1][0][RTW89_FCC][1][85] = -2,
[0][1][1][0][RTW89_FCC][2][85] = 68,
[0][1][1][0][RTW89_ETSI][1][85] = 127,
@@ -39746,6 +40498,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][85] = 127,
[0][1][1][0][RTW89_MKK][0][85] = 127,
[0][1][1][0][RTW89_IC][1][85] = -2,
+ [0][1][1][0][RTW89_IC][2][85] = 68,
[0][1][1][0][RTW89_KCC][1][85] = 20,
[0][1][1][0][RTW89_KCC][0][85] = 127,
[0][1][1][0][RTW89_ACMA][1][85] = 127,
@@ -39755,6 +40508,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][85] = 127,
[0][1][1][0][RTW89_UK][1][85] = 127,
[0][1][1][0][RTW89_UK][0][85] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][85] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][85] = 127,
[0][1][1][0][RTW89_FCC][1][87] = -2,
[0][1][1][0][RTW89_FCC][2][87] = 127,
[0][1][1][0][RTW89_ETSI][1][87] = 127,
@@ -39762,6 +40517,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][87] = 127,
[0][1][1][0][RTW89_MKK][0][87] = 127,
[0][1][1][0][RTW89_IC][1][87] = -2,
+ [0][1][1][0][RTW89_IC][2][87] = 127,
[0][1][1][0][RTW89_KCC][1][87] = 20,
[0][1][1][0][RTW89_KCC][0][87] = 127,
[0][1][1][0][RTW89_ACMA][1][87] = 127,
@@ -39771,6 +40527,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][87] = 127,
[0][1][1][0][RTW89_UK][1][87] = 127,
[0][1][1][0][RTW89_UK][0][87] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][87] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][87] = 127,
[0][1][1][0][RTW89_FCC][1][89] = -2,
[0][1][1][0][RTW89_FCC][2][89] = 127,
[0][1][1][0][RTW89_ETSI][1][89] = 127,
@@ -39778,6 +40536,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][89] = 127,
[0][1][1][0][RTW89_MKK][0][89] = 127,
[0][1][1][0][RTW89_IC][1][89] = -2,
+ [0][1][1][0][RTW89_IC][2][89] = 127,
[0][1][1][0][RTW89_KCC][1][89] = 20,
[0][1][1][0][RTW89_KCC][0][89] = 127,
[0][1][1][0][RTW89_ACMA][1][89] = 127,
@@ -39787,6 +40546,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][89] = 127,
[0][1][1][0][RTW89_UK][1][89] = 127,
[0][1][1][0][RTW89_UK][0][89] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][89] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][89] = 127,
[0][1][1][0][RTW89_FCC][1][90] = -2,
[0][1][1][0][RTW89_FCC][2][90] = 127,
[0][1][1][0][RTW89_ETSI][1][90] = 127,
@@ -39794,6 +40555,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][90] = 127,
[0][1][1][0][RTW89_MKK][0][90] = 127,
[0][1][1][0][RTW89_IC][1][90] = -2,
+ [0][1][1][0][RTW89_IC][2][90] = 127,
[0][1][1][0][RTW89_KCC][1][90] = 20,
[0][1][1][0][RTW89_KCC][0][90] = 127,
[0][1][1][0][RTW89_ACMA][1][90] = 127,
@@ -39803,6 +40565,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][90] = 127,
[0][1][1][0][RTW89_UK][1][90] = 127,
[0][1][1][0][RTW89_UK][0][90] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][90] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][90] = 127,
[0][1][1][0][RTW89_FCC][1][92] = -2,
[0][1][1][0][RTW89_FCC][2][92] = 127,
[0][1][1][0][RTW89_ETSI][1][92] = 127,
@@ -39810,6 +40574,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][92] = 127,
[0][1][1][0][RTW89_MKK][0][92] = 127,
[0][1][1][0][RTW89_IC][1][92] = -2,
+ [0][1][1][0][RTW89_IC][2][92] = 127,
[0][1][1][0][RTW89_KCC][1][92] = 20,
[0][1][1][0][RTW89_KCC][0][92] = 127,
[0][1][1][0][RTW89_ACMA][1][92] = 127,
@@ -39819,6 +40584,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][92] = 127,
[0][1][1][0][RTW89_UK][1][92] = 127,
[0][1][1][0][RTW89_UK][0][92] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][92] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][92] = 127,
[0][1][1][0][RTW89_FCC][1][94] = -2,
[0][1][1][0][RTW89_FCC][2][94] = 127,
[0][1][1][0][RTW89_ETSI][1][94] = 127,
@@ -39826,6 +40593,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][94] = 127,
[0][1][1][0][RTW89_MKK][0][94] = 127,
[0][1][1][0][RTW89_IC][1][94] = -2,
+ [0][1][1][0][RTW89_IC][2][94] = 127,
[0][1][1][0][RTW89_KCC][1][94] = 20,
[0][1][1][0][RTW89_KCC][0][94] = 127,
[0][1][1][0][RTW89_ACMA][1][94] = 127,
@@ -39835,6 +40603,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][94] = 127,
[0][1][1][0][RTW89_UK][1][94] = 127,
[0][1][1][0][RTW89_UK][0][94] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][94] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][94] = 127,
[0][1][1][0][RTW89_FCC][1][96] = -2,
[0][1][1][0][RTW89_FCC][2][96] = 127,
[0][1][1][0][RTW89_ETSI][1][96] = 127,
@@ -39842,6 +40612,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][96] = 127,
[0][1][1][0][RTW89_MKK][0][96] = 127,
[0][1][1][0][RTW89_IC][1][96] = -2,
+ [0][1][1][0][RTW89_IC][2][96] = 127,
[0][1][1][0][RTW89_KCC][1][96] = 20,
[0][1][1][0][RTW89_KCC][0][96] = 127,
[0][1][1][0][RTW89_ACMA][1][96] = 127,
@@ -39851,6 +40622,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][96] = 127,
[0][1][1][0][RTW89_UK][1][96] = 127,
[0][1][1][0][RTW89_UK][0][96] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][96] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][96] = 127,
[0][1][1][0][RTW89_FCC][1][98] = -2,
[0][1][1][0][RTW89_FCC][2][98] = 127,
[0][1][1][0][RTW89_ETSI][1][98] = 127,
@@ -39858,6 +40631,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][98] = 127,
[0][1][1][0][RTW89_MKK][0][98] = 127,
[0][1][1][0][RTW89_IC][1][98] = -2,
+ [0][1][1][0][RTW89_IC][2][98] = 127,
[0][1][1][0][RTW89_KCC][1][98] = 20,
[0][1][1][0][RTW89_KCC][0][98] = 127,
[0][1][1][0][RTW89_ACMA][1][98] = 127,
@@ -39867,6 +40641,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][98] = 127,
[0][1][1][0][RTW89_UK][1][98] = 127,
[0][1][1][0][RTW89_UK][0][98] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][98] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][98] = 127,
[0][1][1][0][RTW89_FCC][1][100] = -2,
[0][1][1][0][RTW89_FCC][2][100] = 127,
[0][1][1][0][RTW89_ETSI][1][100] = 127,
@@ -39874,6 +40650,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][100] = 127,
[0][1][1][0][RTW89_MKK][0][100] = 127,
[0][1][1][0][RTW89_IC][1][100] = -2,
+ [0][1][1][0][RTW89_IC][2][100] = 127,
[0][1][1][0][RTW89_KCC][1][100] = 20,
[0][1][1][0][RTW89_KCC][0][100] = 127,
[0][1][1][0][RTW89_ACMA][1][100] = 127,
@@ -39883,6 +40660,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][100] = 127,
[0][1][1][0][RTW89_UK][1][100] = 127,
[0][1][1][0][RTW89_UK][0][100] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][100] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][100] = 127,
[0][1][1][0][RTW89_FCC][1][102] = -2,
[0][1][1][0][RTW89_FCC][2][102] = 127,
[0][1][1][0][RTW89_ETSI][1][102] = 127,
@@ -39890,6 +40669,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][102] = 127,
[0][1][1][0][RTW89_MKK][0][102] = 127,
[0][1][1][0][RTW89_IC][1][102] = -2,
+ [0][1][1][0][RTW89_IC][2][102] = 127,
[0][1][1][0][RTW89_KCC][1][102] = 20,
[0][1][1][0][RTW89_KCC][0][102] = 127,
[0][1][1][0][RTW89_ACMA][1][102] = 127,
@@ -39899,6 +40679,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][102] = 127,
[0][1][1][0][RTW89_UK][1][102] = 127,
[0][1][1][0][RTW89_UK][0][102] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][102] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][102] = 127,
[0][1][1][0][RTW89_FCC][1][104] = -2,
[0][1][1][0][RTW89_FCC][2][104] = 127,
[0][1][1][0][RTW89_ETSI][1][104] = 127,
@@ -39906,6 +40688,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][104] = 127,
[0][1][1][0][RTW89_MKK][0][104] = 127,
[0][1][1][0][RTW89_IC][1][104] = -2,
+ [0][1][1][0][RTW89_IC][2][104] = 127,
[0][1][1][0][RTW89_KCC][1][104] = 20,
[0][1][1][0][RTW89_KCC][0][104] = 127,
[0][1][1][0][RTW89_ACMA][1][104] = 127,
@@ -39915,6 +40698,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][104] = 127,
[0][1][1][0][RTW89_UK][1][104] = 127,
[0][1][1][0][RTW89_UK][0][104] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][104] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][104] = 127,
[0][1][1][0][RTW89_FCC][1][105] = -2,
[0][1][1][0][RTW89_FCC][2][105] = 127,
[0][1][1][0][RTW89_ETSI][1][105] = 127,
@@ -39922,6 +40707,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][105] = 127,
[0][1][1][0][RTW89_MKK][0][105] = 127,
[0][1][1][0][RTW89_IC][1][105] = -2,
+ [0][1][1][0][RTW89_IC][2][105] = 127,
[0][1][1][0][RTW89_KCC][1][105] = 20,
[0][1][1][0][RTW89_KCC][0][105] = 127,
[0][1][1][0][RTW89_ACMA][1][105] = 127,
@@ -39931,6 +40717,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][105] = 127,
[0][1][1][0][RTW89_UK][1][105] = 127,
[0][1][1][0][RTW89_UK][0][105] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][105] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][105] = 127,
[0][1][1][0][RTW89_FCC][1][107] = 1,
[0][1][1][0][RTW89_FCC][2][107] = 127,
[0][1][1][0][RTW89_ETSI][1][107] = 127,
@@ -39938,6 +40726,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][107] = 127,
[0][1][1][0][RTW89_MKK][0][107] = 127,
[0][1][1][0][RTW89_IC][1][107] = 1,
+ [0][1][1][0][RTW89_IC][2][107] = 127,
[0][1][1][0][RTW89_KCC][1][107] = 20,
[0][1][1][0][RTW89_KCC][0][107] = 127,
[0][1][1][0][RTW89_ACMA][1][107] = 127,
@@ -39947,6 +40736,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][107] = 127,
[0][1][1][0][RTW89_UK][1][107] = 127,
[0][1][1][0][RTW89_UK][0][107] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][107] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][107] = 127,
[0][1][1][0][RTW89_FCC][1][109] = 1,
[0][1][1][0][RTW89_FCC][2][109] = 127,
[0][1][1][0][RTW89_ETSI][1][109] = 127,
@@ -39954,6 +40745,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][109] = 127,
[0][1][1][0][RTW89_MKK][0][109] = 127,
[0][1][1][0][RTW89_IC][1][109] = 1,
+ [0][1][1][0][RTW89_IC][2][109] = 127,
[0][1][1][0][RTW89_KCC][1][109] = 20,
[0][1][1][0][RTW89_KCC][0][109] = 127,
[0][1][1][0][RTW89_ACMA][1][109] = 127,
@@ -39963,6 +40755,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][109] = 127,
[0][1][1][0][RTW89_UK][1][109] = 127,
[0][1][1][0][RTW89_UK][0][109] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][109] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][109] = 127,
[0][1][1][0][RTW89_FCC][1][111] = 127,
[0][1][1][0][RTW89_FCC][2][111] = 127,
[0][1][1][0][RTW89_ETSI][1][111] = 127,
@@ -39970,6 +40764,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][111] = 127,
[0][1][1][0][RTW89_MKK][0][111] = 127,
[0][1][1][0][RTW89_IC][1][111] = 127,
+ [0][1][1][0][RTW89_IC][2][111] = 127,
[0][1][1][0][RTW89_KCC][1][111] = 127,
[0][1][1][0][RTW89_KCC][0][111] = 127,
[0][1][1][0][RTW89_ACMA][1][111] = 127,
@@ -39979,6 +40774,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][111] = 127,
[0][1][1][0][RTW89_UK][1][111] = 127,
[0][1][1][0][RTW89_UK][0][111] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][111] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][111] = 127,
[0][1][1][0][RTW89_FCC][1][113] = 127,
[0][1][1][0][RTW89_FCC][2][113] = 127,
[0][1][1][0][RTW89_ETSI][1][113] = 127,
@@ -39986,6 +40783,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][113] = 127,
[0][1][1][0][RTW89_MKK][0][113] = 127,
[0][1][1][0][RTW89_IC][1][113] = 127,
+ [0][1][1][0][RTW89_IC][2][113] = 127,
[0][1][1][0][RTW89_KCC][1][113] = 127,
[0][1][1][0][RTW89_KCC][0][113] = 127,
[0][1][1][0][RTW89_ACMA][1][113] = 127,
@@ -39995,6 +40793,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][113] = 127,
[0][1][1][0][RTW89_UK][1][113] = 127,
[0][1][1][0][RTW89_UK][0][113] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][113] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][113] = 127,
[0][1][1][0][RTW89_FCC][1][115] = 127,
[0][1][1][0][RTW89_FCC][2][115] = 127,
[0][1][1][0][RTW89_ETSI][1][115] = 127,
@@ -40002,6 +40802,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][115] = 127,
[0][1][1][0][RTW89_MKK][0][115] = 127,
[0][1][1][0][RTW89_IC][1][115] = 127,
+ [0][1][1][0][RTW89_IC][2][115] = 127,
[0][1][1][0][RTW89_KCC][1][115] = 127,
[0][1][1][0][RTW89_KCC][0][115] = 127,
[0][1][1][0][RTW89_ACMA][1][115] = 127,
@@ -40011,6 +40812,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][115] = 127,
[0][1][1][0][RTW89_UK][1][115] = 127,
[0][1][1][0][RTW89_UK][0][115] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][115] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][115] = 127,
[0][1][1][0][RTW89_FCC][1][117] = 127,
[0][1][1][0][RTW89_FCC][2][117] = 127,
[0][1][1][0][RTW89_ETSI][1][117] = 127,
@@ -40018,6 +40821,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][117] = 127,
[0][1][1][0][RTW89_MKK][0][117] = 127,
[0][1][1][0][RTW89_IC][1][117] = 127,
+ [0][1][1][0][RTW89_IC][2][117] = 127,
[0][1][1][0][RTW89_KCC][1][117] = 127,
[0][1][1][0][RTW89_KCC][0][117] = 127,
[0][1][1][0][RTW89_ACMA][1][117] = 127,
@@ -40027,6 +40831,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][117] = 127,
[0][1][1][0][RTW89_UK][1][117] = 127,
[0][1][1][0][RTW89_UK][0][117] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][117] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][117] = 127,
[0][1][1][0][RTW89_FCC][1][119] = 127,
[0][1][1][0][RTW89_FCC][2][119] = 127,
[0][1][1][0][RTW89_ETSI][1][119] = 127,
@@ -40034,6 +40840,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_MKK][1][119] = 127,
[0][1][1][0][RTW89_MKK][0][119] = 127,
[0][1][1][0][RTW89_IC][1][119] = 127,
+ [0][1][1][0][RTW89_IC][2][119] = 127,
[0][1][1][0][RTW89_KCC][1][119] = 127,
[0][1][1][0][RTW89_KCC][0][119] = 127,
[0][1][1][0][RTW89_ACMA][1][119] = 127,
@@ -40043,6 +40850,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][1][0][RTW89_QATAR][0][119] = 127,
[0][1][1][0][RTW89_UK][1][119] = 127,
[0][1][1][0][RTW89_UK][0][119] = 127,
+ [0][1][1][0][RTW89_THAILAND][1][119] = 127,
+ [0][1][1][0][RTW89_THAILAND][0][119] = 127,
[0][0][2][0][RTW89_FCC][1][0] = 24,
[0][0][2][0][RTW89_FCC][2][0] = 56,
[0][0][2][0][RTW89_ETSI][1][0] = 66,
@@ -40050,6 +40859,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][0] = 66,
[0][0][2][0][RTW89_MKK][0][0] = 26,
[0][0][2][0][RTW89_IC][1][0] = 24,
+ [0][0][2][0][RTW89_IC][2][0] = 56,
[0][0][2][0][RTW89_KCC][1][0] = 24,
[0][0][2][0][RTW89_KCC][0][0] = 24,
[0][0][2][0][RTW89_ACMA][1][0] = 66,
@@ -40059,6 +40869,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][0] = 28,
[0][0][2][0][RTW89_UK][1][0] = 66,
[0][0][2][0][RTW89_UK][0][0] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][0] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][0] = 24,
[0][0][2][0][RTW89_FCC][1][2] = 22,
[0][0][2][0][RTW89_FCC][2][2] = 56,
[0][0][2][0][RTW89_ETSI][1][2] = 66,
@@ -40066,6 +40878,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][2] = 66,
[0][0][2][0][RTW89_MKK][0][2] = 26,
[0][0][2][0][RTW89_IC][1][2] = 22,
+ [0][0][2][0][RTW89_IC][2][2] = 56,
[0][0][2][0][RTW89_KCC][1][2] = 24,
[0][0][2][0][RTW89_KCC][0][2] = 24,
[0][0][2][0][RTW89_ACMA][1][2] = 66,
@@ -40075,6 +40888,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][2] = 28,
[0][0][2][0][RTW89_UK][1][2] = 66,
[0][0][2][0][RTW89_UK][0][2] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][2] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][2] = 22,
[0][0][2][0][RTW89_FCC][1][4] = 22,
[0][0][2][0][RTW89_FCC][2][4] = 56,
[0][0][2][0][RTW89_ETSI][1][4] = 66,
@@ -40082,6 +40897,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][4] = 66,
[0][0][2][0][RTW89_MKK][0][4] = 26,
[0][0][2][0][RTW89_IC][1][4] = 22,
+ [0][0][2][0][RTW89_IC][2][4] = 56,
[0][0][2][0][RTW89_KCC][1][4] = 24,
[0][0][2][0][RTW89_KCC][0][4] = 24,
[0][0][2][0][RTW89_ACMA][1][4] = 66,
@@ -40091,6 +40907,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][4] = 28,
[0][0][2][0][RTW89_UK][1][4] = 66,
[0][0][2][0][RTW89_UK][0][4] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][4] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][4] = 22,
[0][0][2][0][RTW89_FCC][1][6] = 22,
[0][0][2][0][RTW89_FCC][2][6] = 56,
[0][0][2][0][RTW89_ETSI][1][6] = 66,
@@ -40098,6 +40916,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][6] = 66,
[0][0][2][0][RTW89_MKK][0][6] = 26,
[0][0][2][0][RTW89_IC][1][6] = 22,
+ [0][0][2][0][RTW89_IC][2][6] = 56,
[0][0][2][0][RTW89_KCC][1][6] = 24,
[0][0][2][0][RTW89_KCC][0][6] = 24,
[0][0][2][0][RTW89_ACMA][1][6] = 66,
@@ -40107,6 +40926,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][6] = 28,
[0][0][2][0][RTW89_UK][1][6] = 66,
[0][0][2][0][RTW89_UK][0][6] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][6] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][6] = 22,
[0][0][2][0][RTW89_FCC][1][8] = 22,
[0][0][2][0][RTW89_FCC][2][8] = 56,
[0][0][2][0][RTW89_ETSI][1][8] = 66,
@@ -40114,6 +40935,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][8] = 66,
[0][0][2][0][RTW89_MKK][0][8] = 26,
[0][0][2][0][RTW89_IC][1][8] = 22,
+ [0][0][2][0][RTW89_IC][2][8] = 56,
[0][0][2][0][RTW89_KCC][1][8] = 24,
[0][0][2][0][RTW89_KCC][0][8] = 24,
[0][0][2][0][RTW89_ACMA][1][8] = 66,
@@ -40123,6 +40945,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][8] = 28,
[0][0][2][0][RTW89_UK][1][8] = 66,
[0][0][2][0][RTW89_UK][0][8] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][8] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][8] = 22,
[0][0][2][0][RTW89_FCC][1][10] = 22,
[0][0][2][0][RTW89_FCC][2][10] = 56,
[0][0][2][0][RTW89_ETSI][1][10] = 66,
@@ -40130,6 +40954,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][10] = 66,
[0][0][2][0][RTW89_MKK][0][10] = 26,
[0][0][2][0][RTW89_IC][1][10] = 22,
+ [0][0][2][0][RTW89_IC][2][10] = 56,
[0][0][2][0][RTW89_KCC][1][10] = 24,
[0][0][2][0][RTW89_KCC][0][10] = 24,
[0][0][2][0][RTW89_ACMA][1][10] = 66,
@@ -40139,6 +40964,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][10] = 28,
[0][0][2][0][RTW89_UK][1][10] = 66,
[0][0][2][0][RTW89_UK][0][10] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][10] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][10] = 22,
[0][0][2][0][RTW89_FCC][1][12] = 22,
[0][0][2][0][RTW89_FCC][2][12] = 56,
[0][0][2][0][RTW89_ETSI][1][12] = 66,
@@ -40146,6 +40973,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][12] = 66,
[0][0][2][0][RTW89_MKK][0][12] = 26,
[0][0][2][0][RTW89_IC][1][12] = 22,
+ [0][0][2][0][RTW89_IC][2][12] = 56,
[0][0][2][0][RTW89_KCC][1][12] = 24,
[0][0][2][0][RTW89_KCC][0][12] = 24,
[0][0][2][0][RTW89_ACMA][1][12] = 66,
@@ -40155,6 +40983,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][12] = 28,
[0][0][2][0][RTW89_UK][1][12] = 66,
[0][0][2][0][RTW89_UK][0][12] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][12] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][12] = 22,
[0][0][2][0][RTW89_FCC][1][14] = 22,
[0][0][2][0][RTW89_FCC][2][14] = 56,
[0][0][2][0][RTW89_ETSI][1][14] = 66,
@@ -40162,6 +40992,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][14] = 66,
[0][0][2][0][RTW89_MKK][0][14] = 26,
[0][0][2][0][RTW89_IC][1][14] = 22,
+ [0][0][2][0][RTW89_IC][2][14] = 56,
[0][0][2][0][RTW89_KCC][1][14] = 24,
[0][0][2][0][RTW89_KCC][0][14] = 24,
[0][0][2][0][RTW89_ACMA][1][14] = 66,
@@ -40171,6 +41002,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][14] = 28,
[0][0][2][0][RTW89_UK][1][14] = 66,
[0][0][2][0][RTW89_UK][0][14] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][14] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][14] = 22,
[0][0][2][0][RTW89_FCC][1][15] = 22,
[0][0][2][0][RTW89_FCC][2][15] = 56,
[0][0][2][0][RTW89_ETSI][1][15] = 66,
@@ -40178,6 +41011,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][15] = 66,
[0][0][2][0][RTW89_MKK][0][15] = 26,
[0][0][2][0][RTW89_IC][1][15] = 22,
+ [0][0][2][0][RTW89_IC][2][15] = 56,
[0][0][2][0][RTW89_KCC][1][15] = 24,
[0][0][2][0][RTW89_KCC][0][15] = 24,
[0][0][2][0][RTW89_ACMA][1][15] = 66,
@@ -40187,6 +41021,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][15] = 28,
[0][0][2][0][RTW89_UK][1][15] = 66,
[0][0][2][0][RTW89_UK][0][15] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][15] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][15] = 22,
[0][0][2][0][RTW89_FCC][1][17] = 22,
[0][0][2][0][RTW89_FCC][2][17] = 56,
[0][0][2][0][RTW89_ETSI][1][17] = 66,
@@ -40194,6 +41030,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][17] = 66,
[0][0][2][0][RTW89_MKK][0][17] = 26,
[0][0][2][0][RTW89_IC][1][17] = 22,
+ [0][0][2][0][RTW89_IC][2][17] = 56,
[0][0][2][0][RTW89_KCC][1][17] = 24,
[0][0][2][0][RTW89_KCC][0][17] = 24,
[0][0][2][0][RTW89_ACMA][1][17] = 66,
@@ -40203,6 +41040,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][17] = 28,
[0][0][2][0][RTW89_UK][1][17] = 66,
[0][0][2][0][RTW89_UK][0][17] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][17] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][17] = 22,
[0][0][2][0][RTW89_FCC][1][19] = 22,
[0][0][2][0][RTW89_FCC][2][19] = 56,
[0][0][2][0][RTW89_ETSI][1][19] = 66,
@@ -40210,6 +41049,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][19] = 66,
[0][0][2][0][RTW89_MKK][0][19] = 26,
[0][0][2][0][RTW89_IC][1][19] = 22,
+ [0][0][2][0][RTW89_IC][2][19] = 56,
[0][0][2][0][RTW89_KCC][1][19] = 24,
[0][0][2][0][RTW89_KCC][0][19] = 24,
[0][0][2][0][RTW89_ACMA][1][19] = 66,
@@ -40219,6 +41059,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][19] = 28,
[0][0][2][0][RTW89_UK][1][19] = 66,
[0][0][2][0][RTW89_UK][0][19] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][19] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][19] = 22,
[0][0][2][0][RTW89_FCC][1][21] = 22,
[0][0][2][0][RTW89_FCC][2][21] = 56,
[0][0][2][0][RTW89_ETSI][1][21] = 66,
@@ -40226,6 +41068,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][21] = 66,
[0][0][2][0][RTW89_MKK][0][21] = 26,
[0][0][2][0][RTW89_IC][1][21] = 22,
+ [0][0][2][0][RTW89_IC][2][21] = 56,
[0][0][2][0][RTW89_KCC][1][21] = 24,
[0][0][2][0][RTW89_KCC][0][21] = 24,
[0][0][2][0][RTW89_ACMA][1][21] = 66,
@@ -40235,6 +41078,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][21] = 28,
[0][0][2][0][RTW89_UK][1][21] = 66,
[0][0][2][0][RTW89_UK][0][21] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][21] = 56,
+ [0][0][2][0][RTW89_THAILAND][0][21] = 22,
[0][0][2][0][RTW89_FCC][1][23] = 22,
[0][0][2][0][RTW89_FCC][2][23] = 70,
[0][0][2][0][RTW89_ETSI][1][23] = 66,
@@ -40242,6 +41087,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][23] = 66,
[0][0][2][0][RTW89_MKK][0][23] = 26,
[0][0][2][0][RTW89_IC][1][23] = 22,
+ [0][0][2][0][RTW89_IC][2][23] = 70,
[0][0][2][0][RTW89_KCC][1][23] = 24,
[0][0][2][0][RTW89_KCC][0][23] = 26,
[0][0][2][0][RTW89_ACMA][1][23] = 66,
@@ -40251,6 +41097,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][23] = 28,
[0][0][2][0][RTW89_UK][1][23] = 66,
[0][0][2][0][RTW89_UK][0][23] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][23] = 66,
+ [0][0][2][0][RTW89_THAILAND][0][23] = 22,
[0][0][2][0][RTW89_FCC][1][25] = 22,
[0][0][2][0][RTW89_FCC][2][25] = 70,
[0][0][2][0][RTW89_ETSI][1][25] = 66,
@@ -40258,6 +41106,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][25] = 66,
[0][0][2][0][RTW89_MKK][0][25] = 26,
[0][0][2][0][RTW89_IC][1][25] = 22,
+ [0][0][2][0][RTW89_IC][2][25] = 70,
[0][0][2][0][RTW89_KCC][1][25] = 24,
[0][0][2][0][RTW89_KCC][0][25] = 26,
[0][0][2][0][RTW89_ACMA][1][25] = 66,
@@ -40267,6 +41116,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][25] = 28,
[0][0][2][0][RTW89_UK][1][25] = 66,
[0][0][2][0][RTW89_UK][0][25] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][25] = 66,
+ [0][0][2][0][RTW89_THAILAND][0][25] = 22,
[0][0][2][0][RTW89_FCC][1][27] = 22,
[0][0][2][0][RTW89_FCC][2][27] = 70,
[0][0][2][0][RTW89_ETSI][1][27] = 66,
@@ -40274,6 +41125,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][27] = 66,
[0][0][2][0][RTW89_MKK][0][27] = 26,
[0][0][2][0][RTW89_IC][1][27] = 22,
+ [0][0][2][0][RTW89_IC][2][27] = 70,
[0][0][2][0][RTW89_KCC][1][27] = 24,
[0][0][2][0][RTW89_KCC][0][27] = 26,
[0][0][2][0][RTW89_ACMA][1][27] = 66,
@@ -40283,6 +41135,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][27] = 28,
[0][0][2][0][RTW89_UK][1][27] = 66,
[0][0][2][0][RTW89_UK][0][27] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][27] = 66,
+ [0][0][2][0][RTW89_THAILAND][0][27] = 22,
[0][0][2][0][RTW89_FCC][1][29] = 22,
[0][0][2][0][RTW89_FCC][2][29] = 70,
[0][0][2][0][RTW89_ETSI][1][29] = 66,
@@ -40290,6 +41144,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][29] = 66,
[0][0][2][0][RTW89_MKK][0][29] = 26,
[0][0][2][0][RTW89_IC][1][29] = 22,
+ [0][0][2][0][RTW89_IC][2][29] = 70,
[0][0][2][0][RTW89_KCC][1][29] = 24,
[0][0][2][0][RTW89_KCC][0][29] = 26,
[0][0][2][0][RTW89_ACMA][1][29] = 66,
@@ -40299,6 +41154,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][29] = 28,
[0][0][2][0][RTW89_UK][1][29] = 66,
[0][0][2][0][RTW89_UK][0][29] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][29] = 66,
+ [0][0][2][0][RTW89_THAILAND][0][29] = 22,
[0][0][2][0][RTW89_FCC][1][30] = 22,
[0][0][2][0][RTW89_FCC][2][30] = 70,
[0][0][2][0][RTW89_ETSI][1][30] = 66,
@@ -40306,6 +41163,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][30] = 66,
[0][0][2][0][RTW89_MKK][0][30] = 26,
[0][0][2][0][RTW89_IC][1][30] = 22,
+ [0][0][2][0][RTW89_IC][2][30] = 70,
[0][0][2][0][RTW89_KCC][1][30] = 24,
[0][0][2][0][RTW89_KCC][0][30] = 26,
[0][0][2][0][RTW89_ACMA][1][30] = 66,
@@ -40315,6 +41173,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][30] = 28,
[0][0][2][0][RTW89_UK][1][30] = 66,
[0][0][2][0][RTW89_UK][0][30] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][30] = 66,
+ [0][0][2][0][RTW89_THAILAND][0][30] = 22,
[0][0][2][0][RTW89_FCC][1][32] = 22,
[0][0][2][0][RTW89_FCC][2][32] = 70,
[0][0][2][0][RTW89_ETSI][1][32] = 66,
@@ -40322,6 +41182,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][32] = 66,
[0][0][2][0][RTW89_MKK][0][32] = 26,
[0][0][2][0][RTW89_IC][1][32] = 22,
+ [0][0][2][0][RTW89_IC][2][32] = 70,
[0][0][2][0][RTW89_KCC][1][32] = 24,
[0][0][2][0][RTW89_KCC][0][32] = 26,
[0][0][2][0][RTW89_ACMA][1][32] = 66,
@@ -40331,6 +41192,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][32] = 28,
[0][0][2][0][RTW89_UK][1][32] = 66,
[0][0][2][0][RTW89_UK][0][32] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][32] = 66,
+ [0][0][2][0][RTW89_THAILAND][0][32] = 22,
[0][0][2][0][RTW89_FCC][1][34] = 22,
[0][0][2][0][RTW89_FCC][2][34] = 70,
[0][0][2][0][RTW89_ETSI][1][34] = 66,
@@ -40338,6 +41201,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][34] = 66,
[0][0][2][0][RTW89_MKK][0][34] = 26,
[0][0][2][0][RTW89_IC][1][34] = 22,
+ [0][0][2][0][RTW89_IC][2][34] = 70,
[0][0][2][0][RTW89_KCC][1][34] = 24,
[0][0][2][0][RTW89_KCC][0][34] = 26,
[0][0][2][0][RTW89_ACMA][1][34] = 66,
@@ -40347,6 +41211,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][34] = 28,
[0][0][2][0][RTW89_UK][1][34] = 66,
[0][0][2][0][RTW89_UK][0][34] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][34] = 66,
+ [0][0][2][0][RTW89_THAILAND][0][34] = 22,
[0][0][2][0][RTW89_FCC][1][36] = 22,
[0][0][2][0][RTW89_FCC][2][36] = 70,
[0][0][2][0][RTW89_ETSI][1][36] = 66,
@@ -40354,6 +41220,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][36] = 66,
[0][0][2][0][RTW89_MKK][0][36] = 26,
[0][0][2][0][RTW89_IC][1][36] = 22,
+ [0][0][2][0][RTW89_IC][2][36] = 70,
[0][0][2][0][RTW89_KCC][1][36] = 24,
[0][0][2][0][RTW89_KCC][0][36] = 26,
[0][0][2][0][RTW89_ACMA][1][36] = 66,
@@ -40363,6 +41230,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][36] = 28,
[0][0][2][0][RTW89_UK][1][36] = 66,
[0][0][2][0][RTW89_UK][0][36] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][36] = 66,
+ [0][0][2][0][RTW89_THAILAND][0][36] = 22,
[0][0][2][0][RTW89_FCC][1][38] = 22,
[0][0][2][0][RTW89_FCC][2][38] = 70,
[0][0][2][0][RTW89_ETSI][1][38] = 66,
@@ -40370,6 +41239,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][38] = 66,
[0][0][2][0][RTW89_MKK][0][38] = 26,
[0][0][2][0][RTW89_IC][1][38] = 22,
+ [0][0][2][0][RTW89_IC][2][38] = 70,
[0][0][2][0][RTW89_KCC][1][38] = 24,
[0][0][2][0][RTW89_KCC][0][38] = 26,
[0][0][2][0][RTW89_ACMA][1][38] = 66,
@@ -40379,6 +41249,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][38] = 28,
[0][0][2][0][RTW89_UK][1][38] = 66,
[0][0][2][0][RTW89_UK][0][38] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][38] = 66,
+ [0][0][2][0][RTW89_THAILAND][0][38] = 22,
[0][0][2][0][RTW89_FCC][1][40] = 22,
[0][0][2][0][RTW89_FCC][2][40] = 70,
[0][0][2][0][RTW89_ETSI][1][40] = 66,
@@ -40386,6 +41258,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][40] = 66,
[0][0][2][0][RTW89_MKK][0][40] = 26,
[0][0][2][0][RTW89_IC][1][40] = 22,
+ [0][0][2][0][RTW89_IC][2][40] = 70,
[0][0][2][0][RTW89_KCC][1][40] = 24,
[0][0][2][0][RTW89_KCC][0][40] = 26,
[0][0][2][0][RTW89_ACMA][1][40] = 66,
@@ -40395,6 +41268,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][40] = 28,
[0][0][2][0][RTW89_UK][1][40] = 66,
[0][0][2][0][RTW89_UK][0][40] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][40] = 66,
+ [0][0][2][0][RTW89_THAILAND][0][40] = 22,
[0][0][2][0][RTW89_FCC][1][42] = 22,
[0][0][2][0][RTW89_FCC][2][42] = 70,
[0][0][2][0][RTW89_ETSI][1][42] = 66,
@@ -40402,6 +41277,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][42] = 66,
[0][0][2][0][RTW89_MKK][0][42] = 26,
[0][0][2][0][RTW89_IC][1][42] = 22,
+ [0][0][2][0][RTW89_IC][2][42] = 70,
[0][0][2][0][RTW89_KCC][1][42] = 24,
[0][0][2][0][RTW89_KCC][0][42] = 26,
[0][0][2][0][RTW89_ACMA][1][42] = 66,
@@ -40411,6 +41287,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][42] = 28,
[0][0][2][0][RTW89_UK][1][42] = 66,
[0][0][2][0][RTW89_UK][0][42] = 28,
+ [0][0][2][0][RTW89_THAILAND][1][42] = 66,
+ [0][0][2][0][RTW89_THAILAND][0][42] = 22,
[0][0][2][0][RTW89_FCC][1][44] = 22,
[0][0][2][0][RTW89_FCC][2][44] = 70,
[0][0][2][0][RTW89_ETSI][1][44] = 66,
@@ -40418,6 +41296,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][44] = 44,
[0][0][2][0][RTW89_MKK][0][44] = 28,
[0][0][2][0][RTW89_IC][1][44] = 22,
+ [0][0][2][0][RTW89_IC][2][44] = 70,
[0][0][2][0][RTW89_KCC][1][44] = 24,
[0][0][2][0][RTW89_KCC][0][44] = 26,
[0][0][2][0][RTW89_ACMA][1][44] = 66,
@@ -40427,6 +41306,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][44] = 30,
[0][0][2][0][RTW89_UK][1][44] = 66,
[0][0][2][0][RTW89_UK][0][44] = 30,
+ [0][0][2][0][RTW89_THAILAND][1][44] = 68,
+ [0][0][2][0][RTW89_THAILAND][0][44] = 22,
[0][0][2][0][RTW89_FCC][1][45] = 22,
[0][0][2][0][RTW89_FCC][2][45] = 127,
[0][0][2][0][RTW89_ETSI][1][45] = 127,
@@ -40434,6 +41315,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][45] = 127,
[0][0][2][0][RTW89_MKK][0][45] = 127,
[0][0][2][0][RTW89_IC][1][45] = 22,
+ [0][0][2][0][RTW89_IC][2][45] = 70,
[0][0][2][0][RTW89_KCC][1][45] = 24,
[0][0][2][0][RTW89_KCC][0][45] = 127,
[0][0][2][0][RTW89_ACMA][1][45] = 127,
@@ -40443,6 +41325,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][45] = 127,
[0][0][2][0][RTW89_UK][1][45] = 127,
[0][0][2][0][RTW89_UK][0][45] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][45] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][45] = 127,
[0][0][2][0][RTW89_FCC][1][47] = 22,
[0][0][2][0][RTW89_FCC][2][47] = 127,
[0][0][2][0][RTW89_ETSI][1][47] = 127,
@@ -40450,6 +41334,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][47] = 127,
[0][0][2][0][RTW89_MKK][0][47] = 127,
[0][0][2][0][RTW89_IC][1][47] = 22,
+ [0][0][2][0][RTW89_IC][2][47] = 70,
[0][0][2][0][RTW89_KCC][1][47] = 24,
[0][0][2][0][RTW89_KCC][0][47] = 127,
[0][0][2][0][RTW89_ACMA][1][47] = 127,
@@ -40459,6 +41344,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][47] = 127,
[0][0][2][0][RTW89_UK][1][47] = 127,
[0][0][2][0][RTW89_UK][0][47] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][47] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][47] = 127,
[0][0][2][0][RTW89_FCC][1][49] = 24,
[0][0][2][0][RTW89_FCC][2][49] = 127,
[0][0][2][0][RTW89_ETSI][1][49] = 127,
@@ -40466,6 +41353,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][49] = 127,
[0][0][2][0][RTW89_MKK][0][49] = 127,
[0][0][2][0][RTW89_IC][1][49] = 24,
+ [0][0][2][0][RTW89_IC][2][49] = 70,
[0][0][2][0][RTW89_KCC][1][49] = 24,
[0][0][2][0][RTW89_KCC][0][49] = 127,
[0][0][2][0][RTW89_ACMA][1][49] = 127,
@@ -40475,6 +41363,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][49] = 127,
[0][0][2][0][RTW89_UK][1][49] = 127,
[0][0][2][0][RTW89_UK][0][49] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][49] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][49] = 127,
[0][0][2][0][RTW89_FCC][1][51] = 22,
[0][0][2][0][RTW89_FCC][2][51] = 127,
[0][0][2][0][RTW89_ETSI][1][51] = 127,
@@ -40482,6 +41372,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][51] = 127,
[0][0][2][0][RTW89_MKK][0][51] = 127,
[0][0][2][0][RTW89_IC][1][51] = 22,
+ [0][0][2][0][RTW89_IC][2][51] = 70,
[0][0][2][0][RTW89_KCC][1][51] = 24,
[0][0][2][0][RTW89_KCC][0][51] = 127,
[0][0][2][0][RTW89_ACMA][1][51] = 127,
@@ -40491,6 +41382,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][51] = 127,
[0][0][2][0][RTW89_UK][1][51] = 127,
[0][0][2][0][RTW89_UK][0][51] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][51] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][51] = 127,
[0][0][2][0][RTW89_FCC][1][53] = 22,
[0][0][2][0][RTW89_FCC][2][53] = 127,
[0][0][2][0][RTW89_ETSI][1][53] = 127,
@@ -40498,6 +41391,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][53] = 127,
[0][0][2][0][RTW89_MKK][0][53] = 127,
[0][0][2][0][RTW89_IC][1][53] = 22,
+ [0][0][2][0][RTW89_IC][2][53] = 70,
[0][0][2][0][RTW89_KCC][1][53] = 24,
[0][0][2][0][RTW89_KCC][0][53] = 127,
[0][0][2][0][RTW89_ACMA][1][53] = 127,
@@ -40507,6 +41401,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][53] = 127,
[0][0][2][0][RTW89_UK][1][53] = 127,
[0][0][2][0][RTW89_UK][0][53] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][53] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][53] = 127,
[0][0][2][0][RTW89_FCC][1][55] = 22,
[0][0][2][0][RTW89_FCC][2][55] = 68,
[0][0][2][0][RTW89_ETSI][1][55] = 127,
@@ -40514,6 +41410,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][55] = 127,
[0][0][2][0][RTW89_MKK][0][55] = 127,
[0][0][2][0][RTW89_IC][1][55] = 22,
+ [0][0][2][0][RTW89_IC][2][55] = 68,
[0][0][2][0][RTW89_KCC][1][55] = 26,
[0][0][2][0][RTW89_KCC][0][55] = 127,
[0][0][2][0][RTW89_ACMA][1][55] = 127,
@@ -40523,6 +41420,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][55] = 127,
[0][0][2][0][RTW89_UK][1][55] = 127,
[0][0][2][0][RTW89_UK][0][55] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][55] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][55] = 127,
[0][0][2][0][RTW89_FCC][1][57] = 22,
[0][0][2][0][RTW89_FCC][2][57] = 68,
[0][0][2][0][RTW89_ETSI][1][57] = 127,
@@ -40530,6 +41429,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][57] = 127,
[0][0][2][0][RTW89_MKK][0][57] = 127,
[0][0][2][0][RTW89_IC][1][57] = 22,
+ [0][0][2][0][RTW89_IC][2][57] = 68,
[0][0][2][0][RTW89_KCC][1][57] = 26,
[0][0][2][0][RTW89_KCC][0][57] = 127,
[0][0][2][0][RTW89_ACMA][1][57] = 127,
@@ -40539,6 +41439,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][57] = 127,
[0][0][2][0][RTW89_UK][1][57] = 127,
[0][0][2][0][RTW89_UK][0][57] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][57] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][57] = 127,
[0][0][2][0][RTW89_FCC][1][59] = 22,
[0][0][2][0][RTW89_FCC][2][59] = 68,
[0][0][2][0][RTW89_ETSI][1][59] = 127,
@@ -40546,6 +41448,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][59] = 127,
[0][0][2][0][RTW89_MKK][0][59] = 127,
[0][0][2][0][RTW89_IC][1][59] = 22,
+ [0][0][2][0][RTW89_IC][2][59] = 68,
[0][0][2][0][RTW89_KCC][1][59] = 26,
[0][0][2][0][RTW89_KCC][0][59] = 127,
[0][0][2][0][RTW89_ACMA][1][59] = 127,
@@ -40555,6 +41458,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][59] = 127,
[0][0][2][0][RTW89_UK][1][59] = 127,
[0][0][2][0][RTW89_UK][0][59] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][59] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][59] = 127,
[0][0][2][0][RTW89_FCC][1][60] = 22,
[0][0][2][0][RTW89_FCC][2][60] = 68,
[0][0][2][0][RTW89_ETSI][1][60] = 127,
@@ -40562,6 +41467,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][60] = 127,
[0][0][2][0][RTW89_MKK][0][60] = 127,
[0][0][2][0][RTW89_IC][1][60] = 22,
+ [0][0][2][0][RTW89_IC][2][60] = 68,
[0][0][2][0][RTW89_KCC][1][60] = 26,
[0][0][2][0][RTW89_KCC][0][60] = 127,
[0][0][2][0][RTW89_ACMA][1][60] = 127,
@@ -40571,6 +41477,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][60] = 127,
[0][0][2][0][RTW89_UK][1][60] = 127,
[0][0][2][0][RTW89_UK][0][60] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][60] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][60] = 127,
[0][0][2][0][RTW89_FCC][1][62] = 22,
[0][0][2][0][RTW89_FCC][2][62] = 68,
[0][0][2][0][RTW89_ETSI][1][62] = 127,
@@ -40578,6 +41486,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][62] = 127,
[0][0][2][0][RTW89_MKK][0][62] = 127,
[0][0][2][0][RTW89_IC][1][62] = 22,
+ [0][0][2][0][RTW89_IC][2][62] = 68,
[0][0][2][0][RTW89_KCC][1][62] = 26,
[0][0][2][0][RTW89_KCC][0][62] = 127,
[0][0][2][0][RTW89_ACMA][1][62] = 127,
@@ -40587,6 +41496,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][62] = 127,
[0][0][2][0][RTW89_UK][1][62] = 127,
[0][0][2][0][RTW89_UK][0][62] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][62] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][62] = 127,
[0][0][2][0][RTW89_FCC][1][64] = 22,
[0][0][2][0][RTW89_FCC][2][64] = 68,
[0][0][2][0][RTW89_ETSI][1][64] = 127,
@@ -40594,6 +41505,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][64] = 127,
[0][0][2][0][RTW89_MKK][0][64] = 127,
[0][0][2][0][RTW89_IC][1][64] = 22,
+ [0][0][2][0][RTW89_IC][2][64] = 68,
[0][0][2][0][RTW89_KCC][1][64] = 26,
[0][0][2][0][RTW89_KCC][0][64] = 127,
[0][0][2][0][RTW89_ACMA][1][64] = 127,
@@ -40603,6 +41515,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][64] = 127,
[0][0][2][0][RTW89_UK][1][64] = 127,
[0][0][2][0][RTW89_UK][0][64] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][64] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][64] = 127,
[0][0][2][0][RTW89_FCC][1][66] = 22,
[0][0][2][0][RTW89_FCC][2][66] = 68,
[0][0][2][0][RTW89_ETSI][1][66] = 127,
@@ -40610,6 +41524,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][66] = 127,
[0][0][2][0][RTW89_MKK][0][66] = 127,
[0][0][2][0][RTW89_IC][1][66] = 22,
+ [0][0][2][0][RTW89_IC][2][66] = 68,
[0][0][2][0][RTW89_KCC][1][66] = 26,
[0][0][2][0][RTW89_KCC][0][66] = 127,
[0][0][2][0][RTW89_ACMA][1][66] = 127,
@@ -40619,6 +41534,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][66] = 127,
[0][0][2][0][RTW89_UK][1][66] = 127,
[0][0][2][0][RTW89_UK][0][66] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][66] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][66] = 127,
[0][0][2][0][RTW89_FCC][1][68] = 22,
[0][0][2][0][RTW89_FCC][2][68] = 68,
[0][0][2][0][RTW89_ETSI][1][68] = 127,
@@ -40626,6 +41543,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][68] = 127,
[0][0][2][0][RTW89_MKK][0][68] = 127,
[0][0][2][0][RTW89_IC][1][68] = 22,
+ [0][0][2][0][RTW89_IC][2][68] = 68,
[0][0][2][0][RTW89_KCC][1][68] = 26,
[0][0][2][0][RTW89_KCC][0][68] = 127,
[0][0][2][0][RTW89_ACMA][1][68] = 127,
@@ -40635,6 +41553,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][68] = 127,
[0][0][2][0][RTW89_UK][1][68] = 127,
[0][0][2][0][RTW89_UK][0][68] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][68] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][68] = 127,
[0][0][2][0][RTW89_FCC][1][70] = 24,
[0][0][2][0][RTW89_FCC][2][70] = 68,
[0][0][2][0][RTW89_ETSI][1][70] = 127,
@@ -40642,6 +41562,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][70] = 127,
[0][0][2][0][RTW89_MKK][0][70] = 127,
[0][0][2][0][RTW89_IC][1][70] = 24,
+ [0][0][2][0][RTW89_IC][2][70] = 68,
[0][0][2][0][RTW89_KCC][1][70] = 26,
[0][0][2][0][RTW89_KCC][0][70] = 127,
[0][0][2][0][RTW89_ACMA][1][70] = 127,
@@ -40651,6 +41572,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][70] = 127,
[0][0][2][0][RTW89_UK][1][70] = 127,
[0][0][2][0][RTW89_UK][0][70] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][70] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][70] = 127,
[0][0][2][0][RTW89_FCC][1][72] = 22,
[0][0][2][0][RTW89_FCC][2][72] = 68,
[0][0][2][0][RTW89_ETSI][1][72] = 127,
@@ -40658,6 +41581,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][72] = 127,
[0][0][2][0][RTW89_MKK][0][72] = 127,
[0][0][2][0][RTW89_IC][1][72] = 22,
+ [0][0][2][0][RTW89_IC][2][72] = 68,
[0][0][2][0][RTW89_KCC][1][72] = 26,
[0][0][2][0][RTW89_KCC][0][72] = 127,
[0][0][2][0][RTW89_ACMA][1][72] = 127,
@@ -40667,6 +41591,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][72] = 127,
[0][0][2][0][RTW89_UK][1][72] = 127,
[0][0][2][0][RTW89_UK][0][72] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][72] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][72] = 127,
[0][0][2][0][RTW89_FCC][1][74] = 22,
[0][0][2][0][RTW89_FCC][2][74] = 68,
[0][0][2][0][RTW89_ETSI][1][74] = 127,
@@ -40674,6 +41600,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][74] = 127,
[0][0][2][0][RTW89_MKK][0][74] = 127,
[0][0][2][0][RTW89_IC][1][74] = 22,
+ [0][0][2][0][RTW89_IC][2][74] = 68,
[0][0][2][0][RTW89_KCC][1][74] = 26,
[0][0][2][0][RTW89_KCC][0][74] = 127,
[0][0][2][0][RTW89_ACMA][1][74] = 127,
@@ -40683,6 +41610,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][74] = 127,
[0][0][2][0][RTW89_UK][1][74] = 127,
[0][0][2][0][RTW89_UK][0][74] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][74] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][74] = 127,
[0][0][2][0][RTW89_FCC][1][75] = 22,
[0][0][2][0][RTW89_FCC][2][75] = 68,
[0][0][2][0][RTW89_ETSI][1][75] = 127,
@@ -40690,6 +41619,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][75] = 127,
[0][0][2][0][RTW89_MKK][0][75] = 127,
[0][0][2][0][RTW89_IC][1][75] = 22,
+ [0][0][2][0][RTW89_IC][2][75] = 68,
[0][0][2][0][RTW89_KCC][1][75] = 26,
[0][0][2][0][RTW89_KCC][0][75] = 127,
[0][0][2][0][RTW89_ACMA][1][75] = 127,
@@ -40699,6 +41629,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][75] = 127,
[0][0][2][0][RTW89_UK][1][75] = 127,
[0][0][2][0][RTW89_UK][0][75] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][75] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][75] = 127,
[0][0][2][0][RTW89_FCC][1][77] = 22,
[0][0][2][0][RTW89_FCC][2][77] = 68,
[0][0][2][0][RTW89_ETSI][1][77] = 127,
@@ -40706,6 +41638,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][77] = 127,
[0][0][2][0][RTW89_MKK][0][77] = 127,
[0][0][2][0][RTW89_IC][1][77] = 22,
+ [0][0][2][0][RTW89_IC][2][77] = 68,
[0][0][2][0][RTW89_KCC][1][77] = 26,
[0][0][2][0][RTW89_KCC][0][77] = 127,
[0][0][2][0][RTW89_ACMA][1][77] = 127,
@@ -40715,6 +41648,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][77] = 127,
[0][0][2][0][RTW89_UK][1][77] = 127,
[0][0][2][0][RTW89_UK][0][77] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][77] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][77] = 127,
[0][0][2][0][RTW89_FCC][1][79] = 22,
[0][0][2][0][RTW89_FCC][2][79] = 68,
[0][0][2][0][RTW89_ETSI][1][79] = 127,
@@ -40722,6 +41657,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][79] = 127,
[0][0][2][0][RTW89_MKK][0][79] = 127,
[0][0][2][0][RTW89_IC][1][79] = 22,
+ [0][0][2][0][RTW89_IC][2][79] = 68,
[0][0][2][0][RTW89_KCC][1][79] = 26,
[0][0][2][0][RTW89_KCC][0][79] = 127,
[0][0][2][0][RTW89_ACMA][1][79] = 127,
@@ -40731,6 +41667,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][79] = 127,
[0][0][2][0][RTW89_UK][1][79] = 127,
[0][0][2][0][RTW89_UK][0][79] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][79] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][79] = 127,
[0][0][2][0][RTW89_FCC][1][81] = 22,
[0][0][2][0][RTW89_FCC][2][81] = 68,
[0][0][2][0][RTW89_ETSI][1][81] = 127,
@@ -40738,6 +41676,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][81] = 127,
[0][0][2][0][RTW89_MKK][0][81] = 127,
[0][0][2][0][RTW89_IC][1][81] = 22,
+ [0][0][2][0][RTW89_IC][2][81] = 68,
[0][0][2][0][RTW89_KCC][1][81] = 26,
[0][0][2][0][RTW89_KCC][0][81] = 127,
[0][0][2][0][RTW89_ACMA][1][81] = 127,
@@ -40747,6 +41686,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][81] = 127,
[0][0][2][0][RTW89_UK][1][81] = 127,
[0][0][2][0][RTW89_UK][0][81] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][81] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][81] = 127,
[0][0][2][0][RTW89_FCC][1][83] = 22,
[0][0][2][0][RTW89_FCC][2][83] = 68,
[0][0][2][0][RTW89_ETSI][1][83] = 127,
@@ -40754,6 +41695,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][83] = 127,
[0][0][2][0][RTW89_MKK][0][83] = 127,
[0][0][2][0][RTW89_IC][1][83] = 22,
+ [0][0][2][0][RTW89_IC][2][83] = 68,
[0][0][2][0][RTW89_KCC][1][83] = 32,
[0][0][2][0][RTW89_KCC][0][83] = 127,
[0][0][2][0][RTW89_ACMA][1][83] = 127,
@@ -40763,6 +41705,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][83] = 127,
[0][0][2][0][RTW89_UK][1][83] = 127,
[0][0][2][0][RTW89_UK][0][83] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][83] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][83] = 127,
[0][0][2][0][RTW89_FCC][1][85] = 22,
[0][0][2][0][RTW89_FCC][2][85] = 68,
[0][0][2][0][RTW89_ETSI][1][85] = 127,
@@ -40770,6 +41714,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][85] = 127,
[0][0][2][0][RTW89_MKK][0][85] = 127,
[0][0][2][0][RTW89_IC][1][85] = 22,
+ [0][0][2][0][RTW89_IC][2][85] = 68,
[0][0][2][0][RTW89_KCC][1][85] = 32,
[0][0][2][0][RTW89_KCC][0][85] = 127,
[0][0][2][0][RTW89_ACMA][1][85] = 127,
@@ -40779,6 +41724,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][85] = 127,
[0][0][2][0][RTW89_UK][1][85] = 127,
[0][0][2][0][RTW89_UK][0][85] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][85] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][85] = 127,
[0][0][2][0][RTW89_FCC][1][87] = 22,
[0][0][2][0][RTW89_FCC][2][87] = 127,
[0][0][2][0][RTW89_ETSI][1][87] = 127,
@@ -40786,6 +41733,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][87] = 127,
[0][0][2][0][RTW89_MKK][0][87] = 127,
[0][0][2][0][RTW89_IC][1][87] = 22,
+ [0][0][2][0][RTW89_IC][2][87] = 127,
[0][0][2][0][RTW89_KCC][1][87] = 32,
[0][0][2][0][RTW89_KCC][0][87] = 127,
[0][0][2][0][RTW89_ACMA][1][87] = 127,
@@ -40795,6 +41743,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][87] = 127,
[0][0][2][0][RTW89_UK][1][87] = 127,
[0][0][2][0][RTW89_UK][0][87] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][87] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][87] = 127,
[0][0][2][0][RTW89_FCC][1][89] = 22,
[0][0][2][0][RTW89_FCC][2][89] = 127,
[0][0][2][0][RTW89_ETSI][1][89] = 127,
@@ -40802,6 +41752,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][89] = 127,
[0][0][2][0][RTW89_MKK][0][89] = 127,
[0][0][2][0][RTW89_IC][1][89] = 22,
+ [0][0][2][0][RTW89_IC][2][89] = 127,
[0][0][2][0][RTW89_KCC][1][89] = 32,
[0][0][2][0][RTW89_KCC][0][89] = 127,
[0][0][2][0][RTW89_ACMA][1][89] = 127,
@@ -40811,6 +41762,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][89] = 127,
[0][0][2][0][RTW89_UK][1][89] = 127,
[0][0][2][0][RTW89_UK][0][89] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][89] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][89] = 127,
[0][0][2][0][RTW89_FCC][1][90] = 22,
[0][0][2][0][RTW89_FCC][2][90] = 127,
[0][0][2][0][RTW89_ETSI][1][90] = 127,
@@ -40818,6 +41771,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][90] = 127,
[0][0][2][0][RTW89_MKK][0][90] = 127,
[0][0][2][0][RTW89_IC][1][90] = 22,
+ [0][0][2][0][RTW89_IC][2][90] = 127,
[0][0][2][0][RTW89_KCC][1][90] = 32,
[0][0][2][0][RTW89_KCC][0][90] = 127,
[0][0][2][0][RTW89_ACMA][1][90] = 127,
@@ -40827,6 +41781,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][90] = 127,
[0][0][2][0][RTW89_UK][1][90] = 127,
[0][0][2][0][RTW89_UK][0][90] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][90] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][90] = 127,
[0][0][2][0][RTW89_FCC][1][92] = 22,
[0][0][2][0][RTW89_FCC][2][92] = 127,
[0][0][2][0][RTW89_ETSI][1][92] = 127,
@@ -40834,6 +41790,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][92] = 127,
[0][0][2][0][RTW89_MKK][0][92] = 127,
[0][0][2][0][RTW89_IC][1][92] = 22,
+ [0][0][2][0][RTW89_IC][2][92] = 127,
[0][0][2][0][RTW89_KCC][1][92] = 32,
[0][0][2][0][RTW89_KCC][0][92] = 127,
[0][0][2][0][RTW89_ACMA][1][92] = 127,
@@ -40843,6 +41800,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][92] = 127,
[0][0][2][0][RTW89_UK][1][92] = 127,
[0][0][2][0][RTW89_UK][0][92] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][92] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][92] = 127,
[0][0][2][0][RTW89_FCC][1][94] = 22,
[0][0][2][0][RTW89_FCC][2][94] = 127,
[0][0][2][0][RTW89_ETSI][1][94] = 127,
@@ -40850,6 +41809,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][94] = 127,
[0][0][2][0][RTW89_MKK][0][94] = 127,
[0][0][2][0][RTW89_IC][1][94] = 22,
+ [0][0][2][0][RTW89_IC][2][94] = 127,
[0][0][2][0][RTW89_KCC][1][94] = 32,
[0][0][2][0][RTW89_KCC][0][94] = 127,
[0][0][2][0][RTW89_ACMA][1][94] = 127,
@@ -40859,6 +41819,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][94] = 127,
[0][0][2][0][RTW89_UK][1][94] = 127,
[0][0][2][0][RTW89_UK][0][94] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][94] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][94] = 127,
[0][0][2][0][RTW89_FCC][1][96] = 22,
[0][0][2][0][RTW89_FCC][2][96] = 127,
[0][0][2][0][RTW89_ETSI][1][96] = 127,
@@ -40866,6 +41828,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][96] = 127,
[0][0][2][0][RTW89_MKK][0][96] = 127,
[0][0][2][0][RTW89_IC][1][96] = 22,
+ [0][0][2][0][RTW89_IC][2][96] = 127,
[0][0][2][0][RTW89_KCC][1][96] = 32,
[0][0][2][0][RTW89_KCC][0][96] = 127,
[0][0][2][0][RTW89_ACMA][1][96] = 127,
@@ -40875,6 +41838,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][96] = 127,
[0][0][2][0][RTW89_UK][1][96] = 127,
[0][0][2][0][RTW89_UK][0][96] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][96] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][96] = 127,
[0][0][2][0][RTW89_FCC][1][98] = 22,
[0][0][2][0][RTW89_FCC][2][98] = 127,
[0][0][2][0][RTW89_ETSI][1][98] = 127,
@@ -40882,6 +41847,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][98] = 127,
[0][0][2][0][RTW89_MKK][0][98] = 127,
[0][0][2][0][RTW89_IC][1][98] = 22,
+ [0][0][2][0][RTW89_IC][2][98] = 127,
[0][0][2][0][RTW89_KCC][1][98] = 32,
[0][0][2][0][RTW89_KCC][0][98] = 127,
[0][0][2][0][RTW89_ACMA][1][98] = 127,
@@ -40891,6 +41857,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][98] = 127,
[0][0][2][0][RTW89_UK][1][98] = 127,
[0][0][2][0][RTW89_UK][0][98] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][98] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][98] = 127,
[0][0][2][0][RTW89_FCC][1][100] = 22,
[0][0][2][0][RTW89_FCC][2][100] = 127,
[0][0][2][0][RTW89_ETSI][1][100] = 127,
@@ -40898,6 +41866,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][100] = 127,
[0][0][2][0][RTW89_MKK][0][100] = 127,
[0][0][2][0][RTW89_IC][1][100] = 22,
+ [0][0][2][0][RTW89_IC][2][100] = 127,
[0][0][2][0][RTW89_KCC][1][100] = 32,
[0][0][2][0][RTW89_KCC][0][100] = 127,
[0][0][2][0][RTW89_ACMA][1][100] = 127,
@@ -40907,6 +41876,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][100] = 127,
[0][0][2][0][RTW89_UK][1][100] = 127,
[0][0][2][0][RTW89_UK][0][100] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][100] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][100] = 127,
[0][0][2][0][RTW89_FCC][1][102] = 22,
[0][0][2][0][RTW89_FCC][2][102] = 127,
[0][0][2][0][RTW89_ETSI][1][102] = 127,
@@ -40914,6 +41885,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][102] = 127,
[0][0][2][0][RTW89_MKK][0][102] = 127,
[0][0][2][0][RTW89_IC][1][102] = 22,
+ [0][0][2][0][RTW89_IC][2][102] = 127,
[0][0][2][0][RTW89_KCC][1][102] = 32,
[0][0][2][0][RTW89_KCC][0][102] = 127,
[0][0][2][0][RTW89_ACMA][1][102] = 127,
@@ -40923,6 +41895,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][102] = 127,
[0][0][2][0][RTW89_UK][1][102] = 127,
[0][0][2][0][RTW89_UK][0][102] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][102] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][102] = 127,
[0][0][2][0][RTW89_FCC][1][104] = 22,
[0][0][2][0][RTW89_FCC][2][104] = 127,
[0][0][2][0][RTW89_ETSI][1][104] = 127,
@@ -40930,6 +41904,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][104] = 127,
[0][0][2][0][RTW89_MKK][0][104] = 127,
[0][0][2][0][RTW89_IC][1][104] = 22,
+ [0][0][2][0][RTW89_IC][2][104] = 127,
[0][0][2][0][RTW89_KCC][1][104] = 32,
[0][0][2][0][RTW89_KCC][0][104] = 127,
[0][0][2][0][RTW89_ACMA][1][104] = 127,
@@ -40939,6 +41914,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][104] = 127,
[0][0][2][0][RTW89_UK][1][104] = 127,
[0][0][2][0][RTW89_UK][0][104] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][104] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][104] = 127,
[0][0][2][0][RTW89_FCC][1][105] = 22,
[0][0][2][0][RTW89_FCC][2][105] = 127,
[0][0][2][0][RTW89_ETSI][1][105] = 127,
@@ -40946,6 +41923,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][105] = 127,
[0][0][2][0][RTW89_MKK][0][105] = 127,
[0][0][2][0][RTW89_IC][1][105] = 22,
+ [0][0][2][0][RTW89_IC][2][105] = 127,
[0][0][2][0][RTW89_KCC][1][105] = 32,
[0][0][2][0][RTW89_KCC][0][105] = 127,
[0][0][2][0][RTW89_ACMA][1][105] = 127,
@@ -40955,6 +41933,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][105] = 127,
[0][0][2][0][RTW89_UK][1][105] = 127,
[0][0][2][0][RTW89_UK][0][105] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][105] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][105] = 127,
[0][0][2][0][RTW89_FCC][1][107] = 24,
[0][0][2][0][RTW89_FCC][2][107] = 127,
[0][0][2][0][RTW89_ETSI][1][107] = 127,
@@ -40962,6 +41942,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][107] = 127,
[0][0][2][0][RTW89_MKK][0][107] = 127,
[0][0][2][0][RTW89_IC][1][107] = 24,
+ [0][0][2][0][RTW89_IC][2][107] = 127,
[0][0][2][0][RTW89_KCC][1][107] = 32,
[0][0][2][0][RTW89_KCC][0][107] = 127,
[0][0][2][0][RTW89_ACMA][1][107] = 127,
@@ -40971,6 +41952,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][107] = 127,
[0][0][2][0][RTW89_UK][1][107] = 127,
[0][0][2][0][RTW89_UK][0][107] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][107] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][107] = 127,
[0][0][2][0][RTW89_FCC][1][109] = 24,
[0][0][2][0][RTW89_FCC][2][109] = 127,
[0][0][2][0][RTW89_ETSI][1][109] = 127,
@@ -40978,6 +41961,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][109] = 127,
[0][0][2][0][RTW89_MKK][0][109] = 127,
[0][0][2][0][RTW89_IC][1][109] = 24,
+ [0][0][2][0][RTW89_IC][2][109] = 127,
[0][0][2][0][RTW89_KCC][1][109] = 32,
[0][0][2][0][RTW89_KCC][0][109] = 127,
[0][0][2][0][RTW89_ACMA][1][109] = 127,
@@ -40987,6 +41971,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][109] = 127,
[0][0][2][0][RTW89_UK][1][109] = 127,
[0][0][2][0][RTW89_UK][0][109] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][109] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][109] = 127,
[0][0][2][0][RTW89_FCC][1][111] = 127,
[0][0][2][0][RTW89_FCC][2][111] = 127,
[0][0][2][0][RTW89_ETSI][1][111] = 127,
@@ -40994,6 +41980,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][111] = 127,
[0][0][2][0][RTW89_MKK][0][111] = 127,
[0][0][2][0][RTW89_IC][1][111] = 127,
+ [0][0][2][0][RTW89_IC][2][111] = 127,
[0][0][2][0][RTW89_KCC][1][111] = 127,
[0][0][2][0][RTW89_KCC][0][111] = 127,
[0][0][2][0][RTW89_ACMA][1][111] = 127,
@@ -41003,6 +41990,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][111] = 127,
[0][0][2][0][RTW89_UK][1][111] = 127,
[0][0][2][0][RTW89_UK][0][111] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][111] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][111] = 127,
[0][0][2][0][RTW89_FCC][1][113] = 127,
[0][0][2][0][RTW89_FCC][2][113] = 127,
[0][0][2][0][RTW89_ETSI][1][113] = 127,
@@ -41010,6 +41999,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][113] = 127,
[0][0][2][0][RTW89_MKK][0][113] = 127,
[0][0][2][0][RTW89_IC][1][113] = 127,
+ [0][0][2][0][RTW89_IC][2][113] = 127,
[0][0][2][0][RTW89_KCC][1][113] = 127,
[0][0][2][0][RTW89_KCC][0][113] = 127,
[0][0][2][0][RTW89_ACMA][1][113] = 127,
@@ -41019,6 +42009,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][113] = 127,
[0][0][2][0][RTW89_UK][1][113] = 127,
[0][0][2][0][RTW89_UK][0][113] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][113] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][113] = 127,
[0][0][2][0][RTW89_FCC][1][115] = 127,
[0][0][2][0][RTW89_FCC][2][115] = 127,
[0][0][2][0][RTW89_ETSI][1][115] = 127,
@@ -41026,6 +42018,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][115] = 127,
[0][0][2][0][RTW89_MKK][0][115] = 127,
[0][0][2][0][RTW89_IC][1][115] = 127,
+ [0][0][2][0][RTW89_IC][2][115] = 127,
[0][0][2][0][RTW89_KCC][1][115] = 127,
[0][0][2][0][RTW89_KCC][0][115] = 127,
[0][0][2][0][RTW89_ACMA][1][115] = 127,
@@ -41035,6 +42028,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][115] = 127,
[0][0][2][0][RTW89_UK][1][115] = 127,
[0][0][2][0][RTW89_UK][0][115] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][115] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][115] = 127,
[0][0][2][0][RTW89_FCC][1][117] = 127,
[0][0][2][0][RTW89_FCC][2][117] = 127,
[0][0][2][0][RTW89_ETSI][1][117] = 127,
@@ -41042,6 +42037,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][117] = 127,
[0][0][2][0][RTW89_MKK][0][117] = 127,
[0][0][2][0][RTW89_IC][1][117] = 127,
+ [0][0][2][0][RTW89_IC][2][117] = 127,
[0][0][2][0][RTW89_KCC][1][117] = 127,
[0][0][2][0][RTW89_KCC][0][117] = 127,
[0][0][2][0][RTW89_ACMA][1][117] = 127,
@@ -41051,6 +42047,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][117] = 127,
[0][0][2][0][RTW89_UK][1][117] = 127,
[0][0][2][0][RTW89_UK][0][117] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][117] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][117] = 127,
[0][0][2][0][RTW89_FCC][1][119] = 127,
[0][0][2][0][RTW89_FCC][2][119] = 127,
[0][0][2][0][RTW89_ETSI][1][119] = 127,
@@ -41058,6 +42056,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_MKK][1][119] = 127,
[0][0][2][0][RTW89_MKK][0][119] = 127,
[0][0][2][0][RTW89_IC][1][119] = 127,
+ [0][0][2][0][RTW89_IC][2][119] = 127,
[0][0][2][0][RTW89_KCC][1][119] = 127,
[0][0][2][0][RTW89_KCC][0][119] = 127,
[0][0][2][0][RTW89_ACMA][1][119] = 127,
@@ -41067,6 +42066,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][0][2][0][RTW89_QATAR][0][119] = 127,
[0][0][2][0][RTW89_UK][1][119] = 127,
[0][0][2][0][RTW89_UK][0][119] = 127,
+ [0][0][2][0][RTW89_THAILAND][1][119] = 127,
+ [0][0][2][0][RTW89_THAILAND][0][119] = 127,
[0][1][2][0][RTW89_FCC][1][0] = -2,
[0][1][2][0][RTW89_FCC][2][0] = 54,
[0][1][2][0][RTW89_ETSI][1][0] = 54,
@@ -41074,6 +42075,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][0] = 56,
[0][1][2][0][RTW89_MKK][0][0] = 16,
[0][1][2][0][RTW89_IC][1][0] = -2,
+ [0][1][2][0][RTW89_IC][2][0] = 54,
[0][1][2][0][RTW89_KCC][1][0] = 12,
[0][1][2][0][RTW89_KCC][0][0] = 10,
[0][1][2][0][RTW89_ACMA][1][0] = 54,
@@ -41083,6 +42085,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][0] = 18,
[0][1][2][0][RTW89_UK][1][0] = 54,
[0][1][2][0][RTW89_UK][0][0] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][0] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][0] = -2,
[0][1][2][0][RTW89_FCC][1][2] = -4,
[0][1][2][0][RTW89_FCC][2][2] = 54,
[0][1][2][0][RTW89_ETSI][1][2] = 54,
@@ -41090,6 +42094,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][2] = 54,
[0][1][2][0][RTW89_MKK][0][2] = 16,
[0][1][2][0][RTW89_IC][1][2] = -4,
+ [0][1][2][0][RTW89_IC][2][2] = 54,
[0][1][2][0][RTW89_KCC][1][2] = 12,
[0][1][2][0][RTW89_KCC][0][2] = 12,
[0][1][2][0][RTW89_ACMA][1][2] = 54,
@@ -41099,6 +42104,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][2] = 18,
[0][1][2][0][RTW89_UK][1][2] = 54,
[0][1][2][0][RTW89_UK][0][2] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][2] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][2] = -4,
[0][1][2][0][RTW89_FCC][1][4] = -4,
[0][1][2][0][RTW89_FCC][2][4] = 54,
[0][1][2][0][RTW89_ETSI][1][4] = 54,
@@ -41106,6 +42113,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][4] = 54,
[0][1][2][0][RTW89_MKK][0][4] = 16,
[0][1][2][0][RTW89_IC][1][4] = -4,
+ [0][1][2][0][RTW89_IC][2][4] = 54,
[0][1][2][0][RTW89_KCC][1][4] = 12,
[0][1][2][0][RTW89_KCC][0][4] = 12,
[0][1][2][0][RTW89_ACMA][1][4] = 54,
@@ -41115,6 +42123,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][4] = 18,
[0][1][2][0][RTW89_UK][1][4] = 54,
[0][1][2][0][RTW89_UK][0][4] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][4] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][4] = -4,
[0][1][2][0][RTW89_FCC][1][6] = -4,
[0][1][2][0][RTW89_FCC][2][6] = 54,
[0][1][2][0][RTW89_ETSI][1][6] = 54,
@@ -41122,6 +42132,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][6] = 54,
[0][1][2][0][RTW89_MKK][0][6] = 16,
[0][1][2][0][RTW89_IC][1][6] = -4,
+ [0][1][2][0][RTW89_IC][2][6] = 54,
[0][1][2][0][RTW89_KCC][1][6] = 12,
[0][1][2][0][RTW89_KCC][0][6] = 12,
[0][1][2][0][RTW89_ACMA][1][6] = 54,
@@ -41131,6 +42142,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][6] = 18,
[0][1][2][0][RTW89_UK][1][6] = 54,
[0][1][2][0][RTW89_UK][0][6] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][6] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][6] = -4,
[0][1][2][0][RTW89_FCC][1][8] = -4,
[0][1][2][0][RTW89_FCC][2][8] = 54,
[0][1][2][0][RTW89_ETSI][1][8] = 54,
@@ -41138,6 +42151,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][8] = 54,
[0][1][2][0][RTW89_MKK][0][8] = 16,
[0][1][2][0][RTW89_IC][1][8] = -4,
+ [0][1][2][0][RTW89_IC][2][8] = 54,
[0][1][2][0][RTW89_KCC][1][8] = 12,
[0][1][2][0][RTW89_KCC][0][8] = 12,
[0][1][2][0][RTW89_ACMA][1][8] = 54,
@@ -41147,6 +42161,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][8] = 18,
[0][1][2][0][RTW89_UK][1][8] = 54,
[0][1][2][0][RTW89_UK][0][8] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][8] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][8] = -4,
[0][1][2][0][RTW89_FCC][1][10] = -4,
[0][1][2][0][RTW89_FCC][2][10] = 54,
[0][1][2][0][RTW89_ETSI][1][10] = 54,
@@ -41154,6 +42170,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][10] = 54,
[0][1][2][0][RTW89_MKK][0][10] = 16,
[0][1][2][0][RTW89_IC][1][10] = -4,
+ [0][1][2][0][RTW89_IC][2][10] = 54,
[0][1][2][0][RTW89_KCC][1][10] = 12,
[0][1][2][0][RTW89_KCC][0][10] = 12,
[0][1][2][0][RTW89_ACMA][1][10] = 54,
@@ -41163,6 +42180,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][10] = 18,
[0][1][2][0][RTW89_UK][1][10] = 54,
[0][1][2][0][RTW89_UK][0][10] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][10] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][10] = -4,
[0][1][2][0][RTW89_FCC][1][12] = -4,
[0][1][2][0][RTW89_FCC][2][12] = 54,
[0][1][2][0][RTW89_ETSI][1][12] = 54,
@@ -41170,6 +42189,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][12] = 54,
[0][1][2][0][RTW89_MKK][0][12] = 16,
[0][1][2][0][RTW89_IC][1][12] = -4,
+ [0][1][2][0][RTW89_IC][2][12] = 54,
[0][1][2][0][RTW89_KCC][1][12] = 12,
[0][1][2][0][RTW89_KCC][0][12] = 12,
[0][1][2][0][RTW89_ACMA][1][12] = 54,
@@ -41179,6 +42199,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][12] = 18,
[0][1][2][0][RTW89_UK][1][12] = 54,
[0][1][2][0][RTW89_UK][0][12] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][12] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][12] = -4,
[0][1][2][0][RTW89_FCC][1][14] = -4,
[0][1][2][0][RTW89_FCC][2][14] = 54,
[0][1][2][0][RTW89_ETSI][1][14] = 54,
@@ -41186,6 +42208,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][14] = 54,
[0][1][2][0][RTW89_MKK][0][14] = 16,
[0][1][2][0][RTW89_IC][1][14] = -4,
+ [0][1][2][0][RTW89_IC][2][14] = 54,
[0][1][2][0][RTW89_KCC][1][14] = 12,
[0][1][2][0][RTW89_KCC][0][14] = 12,
[0][1][2][0][RTW89_ACMA][1][14] = 54,
@@ -41195,6 +42218,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][14] = 18,
[0][1][2][0][RTW89_UK][1][14] = 54,
[0][1][2][0][RTW89_UK][0][14] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][14] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][14] = -4,
[0][1][2][0][RTW89_FCC][1][15] = -4,
[0][1][2][0][RTW89_FCC][2][15] = 54,
[0][1][2][0][RTW89_ETSI][1][15] = 54,
@@ -41202,6 +42227,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][15] = 54,
[0][1][2][0][RTW89_MKK][0][15] = 16,
[0][1][2][0][RTW89_IC][1][15] = -4,
+ [0][1][2][0][RTW89_IC][2][15] = 54,
[0][1][2][0][RTW89_KCC][1][15] = 12,
[0][1][2][0][RTW89_KCC][0][15] = 12,
[0][1][2][0][RTW89_ACMA][1][15] = 54,
@@ -41211,6 +42237,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][15] = 18,
[0][1][2][0][RTW89_UK][1][15] = 54,
[0][1][2][0][RTW89_UK][0][15] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][15] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][15] = -4,
[0][1][2][0][RTW89_FCC][1][17] = -4,
[0][1][2][0][RTW89_FCC][2][17] = 54,
[0][1][2][0][RTW89_ETSI][1][17] = 54,
@@ -41218,6 +42246,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][17] = 54,
[0][1][2][0][RTW89_MKK][0][17] = 16,
[0][1][2][0][RTW89_IC][1][17] = -4,
+ [0][1][2][0][RTW89_IC][2][17] = 54,
[0][1][2][0][RTW89_KCC][1][17] = 12,
[0][1][2][0][RTW89_KCC][0][17] = 12,
[0][1][2][0][RTW89_ACMA][1][17] = 54,
@@ -41227,6 +42256,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][17] = 18,
[0][1][2][0][RTW89_UK][1][17] = 54,
[0][1][2][0][RTW89_UK][0][17] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][17] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][17] = -4,
[0][1][2][0][RTW89_FCC][1][19] = -4,
[0][1][2][0][RTW89_FCC][2][19] = 54,
[0][1][2][0][RTW89_ETSI][1][19] = 54,
@@ -41234,6 +42265,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][19] = 54,
[0][1][2][0][RTW89_MKK][0][19] = 16,
[0][1][2][0][RTW89_IC][1][19] = -4,
+ [0][1][2][0][RTW89_IC][2][19] = 54,
[0][1][2][0][RTW89_KCC][1][19] = 12,
[0][1][2][0][RTW89_KCC][0][19] = 12,
[0][1][2][0][RTW89_ACMA][1][19] = 54,
@@ -41243,6 +42275,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][19] = 18,
[0][1][2][0][RTW89_UK][1][19] = 54,
[0][1][2][0][RTW89_UK][0][19] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][19] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][19] = -4,
[0][1][2][0][RTW89_FCC][1][21] = -4,
[0][1][2][0][RTW89_FCC][2][21] = 54,
[0][1][2][0][RTW89_ETSI][1][21] = 54,
@@ -41250,6 +42284,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][21] = 54,
[0][1][2][0][RTW89_MKK][0][21] = 16,
[0][1][2][0][RTW89_IC][1][21] = -4,
+ [0][1][2][0][RTW89_IC][2][21] = 54,
[0][1][2][0][RTW89_KCC][1][21] = 12,
[0][1][2][0][RTW89_KCC][0][21] = 12,
[0][1][2][0][RTW89_ACMA][1][21] = 54,
@@ -41259,6 +42294,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][21] = 18,
[0][1][2][0][RTW89_UK][1][21] = 54,
[0][1][2][0][RTW89_UK][0][21] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][21] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][21] = -4,
[0][1][2][0][RTW89_FCC][1][23] = -4,
[0][1][2][0][RTW89_FCC][2][23] = 68,
[0][1][2][0][RTW89_ETSI][1][23] = 54,
@@ -41266,6 +42303,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][23] = 54,
[0][1][2][0][RTW89_MKK][0][23] = 16,
[0][1][2][0][RTW89_IC][1][23] = -4,
+ [0][1][2][0][RTW89_IC][2][23] = 68,
[0][1][2][0][RTW89_KCC][1][23] = 12,
[0][1][2][0][RTW89_KCC][0][23] = 10,
[0][1][2][0][RTW89_ACMA][1][23] = 54,
@@ -41275,6 +42313,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][23] = 18,
[0][1][2][0][RTW89_UK][1][23] = 54,
[0][1][2][0][RTW89_UK][0][23] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][23] = 44,
+ [0][1][2][0][RTW89_THAILAND][0][23] = -4,
[0][1][2][0][RTW89_FCC][1][25] = -4,
[0][1][2][0][RTW89_FCC][2][25] = 68,
[0][1][2][0][RTW89_ETSI][1][25] = 54,
@@ -41282,6 +42322,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][25] = 54,
[0][1][2][0][RTW89_MKK][0][25] = 16,
[0][1][2][0][RTW89_IC][1][25] = -4,
+ [0][1][2][0][RTW89_IC][2][25] = 68,
[0][1][2][0][RTW89_KCC][1][25] = 12,
[0][1][2][0][RTW89_KCC][0][25] = 14,
[0][1][2][0][RTW89_ACMA][1][25] = 54,
@@ -41291,6 +42332,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][25] = 18,
[0][1][2][0][RTW89_UK][1][25] = 54,
[0][1][2][0][RTW89_UK][0][25] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][25] = 42,
+ [0][1][2][0][RTW89_THAILAND][0][25] = -4,
[0][1][2][0][RTW89_FCC][1][27] = -4,
[0][1][2][0][RTW89_FCC][2][27] = 68,
[0][1][2][0][RTW89_ETSI][1][27] = 54,
@@ -41298,6 +42341,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][27] = 54,
[0][1][2][0][RTW89_MKK][0][27] = 16,
[0][1][2][0][RTW89_IC][1][27] = -4,
+ [0][1][2][0][RTW89_IC][2][27] = 68,
[0][1][2][0][RTW89_KCC][1][27] = 12,
[0][1][2][0][RTW89_KCC][0][27] = 14,
[0][1][2][0][RTW89_ACMA][1][27] = 54,
@@ -41307,6 +42351,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][27] = 18,
[0][1][2][0][RTW89_UK][1][27] = 54,
[0][1][2][0][RTW89_UK][0][27] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][27] = 42,
+ [0][1][2][0][RTW89_THAILAND][0][27] = -4,
[0][1][2][0][RTW89_FCC][1][29] = -4,
[0][1][2][0][RTW89_FCC][2][29] = 68,
[0][1][2][0][RTW89_ETSI][1][29] = 54,
@@ -41314,6 +42360,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][29] = 54,
[0][1][2][0][RTW89_MKK][0][29] = 16,
[0][1][2][0][RTW89_IC][1][29] = -4,
+ [0][1][2][0][RTW89_IC][2][29] = 68,
[0][1][2][0][RTW89_KCC][1][29] = 12,
[0][1][2][0][RTW89_KCC][0][29] = 14,
[0][1][2][0][RTW89_ACMA][1][29] = 54,
@@ -41323,6 +42370,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][29] = 18,
[0][1][2][0][RTW89_UK][1][29] = 54,
[0][1][2][0][RTW89_UK][0][29] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][29] = 42,
+ [0][1][2][0][RTW89_THAILAND][0][29] = -4,
[0][1][2][0][RTW89_FCC][1][30] = -4,
[0][1][2][0][RTW89_FCC][2][30] = 68,
[0][1][2][0][RTW89_ETSI][1][30] = 54,
@@ -41330,6 +42379,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][30] = 54,
[0][1][2][0][RTW89_MKK][0][30] = 16,
[0][1][2][0][RTW89_IC][1][30] = -4,
+ [0][1][2][0][RTW89_IC][2][30] = 68,
[0][1][2][0][RTW89_KCC][1][30] = 12,
[0][1][2][0][RTW89_KCC][0][30] = 14,
[0][1][2][0][RTW89_ACMA][1][30] = 54,
@@ -41339,6 +42389,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][30] = 18,
[0][1][2][0][RTW89_UK][1][30] = 54,
[0][1][2][0][RTW89_UK][0][30] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][30] = 42,
+ [0][1][2][0][RTW89_THAILAND][0][30] = -4,
[0][1][2][0][RTW89_FCC][1][32] = -4,
[0][1][2][0][RTW89_FCC][2][32] = 68,
[0][1][2][0][RTW89_ETSI][1][32] = 54,
@@ -41346,6 +42398,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][32] = 54,
[0][1][2][0][RTW89_MKK][0][32] = 16,
[0][1][2][0][RTW89_IC][1][32] = -4,
+ [0][1][2][0][RTW89_IC][2][32] = 68,
[0][1][2][0][RTW89_KCC][1][32] = 12,
[0][1][2][0][RTW89_KCC][0][32] = 14,
[0][1][2][0][RTW89_ACMA][1][32] = 54,
@@ -41355,6 +42408,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][32] = 18,
[0][1][2][0][RTW89_UK][1][32] = 54,
[0][1][2][0][RTW89_UK][0][32] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][32] = 42,
+ [0][1][2][0][RTW89_THAILAND][0][32] = -4,
[0][1][2][0][RTW89_FCC][1][34] = -4,
[0][1][2][0][RTW89_FCC][2][34] = 68,
[0][1][2][0][RTW89_ETSI][1][34] = 54,
@@ -41362,6 +42417,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][34] = 54,
[0][1][2][0][RTW89_MKK][0][34] = 16,
[0][1][2][0][RTW89_IC][1][34] = -4,
+ [0][1][2][0][RTW89_IC][2][34] = 68,
[0][1][2][0][RTW89_KCC][1][34] = 12,
[0][1][2][0][RTW89_KCC][0][34] = 14,
[0][1][2][0][RTW89_ACMA][1][34] = 54,
@@ -41371,6 +42427,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][34] = 18,
[0][1][2][0][RTW89_UK][1][34] = 54,
[0][1][2][0][RTW89_UK][0][34] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][34] = 42,
+ [0][1][2][0][RTW89_THAILAND][0][34] = -4,
[0][1][2][0][RTW89_FCC][1][36] = -4,
[0][1][2][0][RTW89_FCC][2][36] = 68,
[0][1][2][0][RTW89_ETSI][1][36] = 54,
@@ -41378,6 +42436,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][36] = 54,
[0][1][2][0][RTW89_MKK][0][36] = 16,
[0][1][2][0][RTW89_IC][1][36] = -4,
+ [0][1][2][0][RTW89_IC][2][36] = 68,
[0][1][2][0][RTW89_KCC][1][36] = 12,
[0][1][2][0][RTW89_KCC][0][36] = 14,
[0][1][2][0][RTW89_ACMA][1][36] = 54,
@@ -41387,6 +42446,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][36] = 18,
[0][1][2][0][RTW89_UK][1][36] = 54,
[0][1][2][0][RTW89_UK][0][36] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][36] = 42,
+ [0][1][2][0][RTW89_THAILAND][0][36] = -4,
[0][1][2][0][RTW89_FCC][1][38] = -4,
[0][1][2][0][RTW89_FCC][2][38] = 68,
[0][1][2][0][RTW89_ETSI][1][38] = 54,
@@ -41394,6 +42455,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][38] = 54,
[0][1][2][0][RTW89_MKK][0][38] = 16,
[0][1][2][0][RTW89_IC][1][38] = -4,
+ [0][1][2][0][RTW89_IC][2][38] = 68,
[0][1][2][0][RTW89_KCC][1][38] = 12,
[0][1][2][0][RTW89_KCC][0][38] = 14,
[0][1][2][0][RTW89_ACMA][1][38] = 54,
@@ -41403,6 +42465,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][38] = 18,
[0][1][2][0][RTW89_UK][1][38] = 54,
[0][1][2][0][RTW89_UK][0][38] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][38] = 42,
+ [0][1][2][0][RTW89_THAILAND][0][38] = -4,
[0][1][2][0][RTW89_FCC][1][40] = -4,
[0][1][2][0][RTW89_FCC][2][40] = 68,
[0][1][2][0][RTW89_ETSI][1][40] = 54,
@@ -41410,6 +42474,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][40] = 54,
[0][1][2][0][RTW89_MKK][0][40] = 16,
[0][1][2][0][RTW89_IC][1][40] = -4,
+ [0][1][2][0][RTW89_IC][2][40] = 68,
[0][1][2][0][RTW89_KCC][1][40] = 12,
[0][1][2][0][RTW89_KCC][0][40] = 14,
[0][1][2][0][RTW89_ACMA][1][40] = 54,
@@ -41419,6 +42484,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][40] = 18,
[0][1][2][0][RTW89_UK][1][40] = 54,
[0][1][2][0][RTW89_UK][0][40] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][40] = 42,
+ [0][1][2][0][RTW89_THAILAND][0][40] = -4,
[0][1][2][0][RTW89_FCC][1][42] = -4,
[0][1][2][0][RTW89_FCC][2][42] = 68,
[0][1][2][0][RTW89_ETSI][1][42] = 54,
@@ -41426,6 +42493,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][42] = 54,
[0][1][2][0][RTW89_MKK][0][42] = 16,
[0][1][2][0][RTW89_IC][1][42] = -4,
+ [0][1][2][0][RTW89_IC][2][42] = 68,
[0][1][2][0][RTW89_KCC][1][42] = 12,
[0][1][2][0][RTW89_KCC][0][42] = 14,
[0][1][2][0][RTW89_ACMA][1][42] = 54,
@@ -41435,6 +42503,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][42] = 18,
[0][1][2][0][RTW89_UK][1][42] = 54,
[0][1][2][0][RTW89_UK][0][42] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][42] = 42,
+ [0][1][2][0][RTW89_THAILAND][0][42] = -4,
[0][1][2][0][RTW89_FCC][1][44] = -2,
[0][1][2][0][RTW89_FCC][2][44] = 68,
[0][1][2][0][RTW89_ETSI][1][44] = 54,
@@ -41442,6 +42512,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][44] = 34,
[0][1][2][0][RTW89_MKK][0][44] = 16,
[0][1][2][0][RTW89_IC][1][44] = -2,
+ [0][1][2][0][RTW89_IC][2][44] = 68,
[0][1][2][0][RTW89_KCC][1][44] = 12,
[0][1][2][0][RTW89_KCC][0][44] = 12,
[0][1][2][0][RTW89_ACMA][1][44] = 54,
@@ -41451,6 +42522,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][44] = 18,
[0][1][2][0][RTW89_UK][1][44] = 54,
[0][1][2][0][RTW89_UK][0][44] = 18,
+ [0][1][2][0][RTW89_THAILAND][1][44] = 42,
+ [0][1][2][0][RTW89_THAILAND][0][44] = -2,
[0][1][2][0][RTW89_FCC][1][45] = -2,
[0][1][2][0][RTW89_FCC][2][45] = 127,
[0][1][2][0][RTW89_ETSI][1][45] = 127,
@@ -41458,6 +42531,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][45] = 127,
[0][1][2][0][RTW89_MKK][0][45] = 127,
[0][1][2][0][RTW89_IC][1][45] = -2,
+ [0][1][2][0][RTW89_IC][2][45] = 70,
[0][1][2][0][RTW89_KCC][1][45] = 12,
[0][1][2][0][RTW89_KCC][0][45] = 127,
[0][1][2][0][RTW89_ACMA][1][45] = 127,
@@ -41467,6 +42541,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][45] = 127,
[0][1][2][0][RTW89_UK][1][45] = 127,
[0][1][2][0][RTW89_UK][0][45] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][45] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][45] = 127,
[0][1][2][0][RTW89_FCC][1][47] = -2,
[0][1][2][0][RTW89_FCC][2][47] = 127,
[0][1][2][0][RTW89_ETSI][1][47] = 127,
@@ -41474,6 +42550,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][47] = 127,
[0][1][2][0][RTW89_MKK][0][47] = 127,
[0][1][2][0][RTW89_IC][1][47] = -2,
+ [0][1][2][0][RTW89_IC][2][47] = 68,
[0][1][2][0][RTW89_KCC][1][47] = 12,
[0][1][2][0][RTW89_KCC][0][47] = 127,
[0][1][2][0][RTW89_ACMA][1][47] = 127,
@@ -41483,6 +42560,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][47] = 127,
[0][1][2][0][RTW89_UK][1][47] = 127,
[0][1][2][0][RTW89_UK][0][47] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][47] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][47] = 127,
[0][1][2][0][RTW89_FCC][1][49] = -2,
[0][1][2][0][RTW89_FCC][2][49] = 127,
[0][1][2][0][RTW89_ETSI][1][49] = 127,
@@ -41490,6 +42569,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][49] = 127,
[0][1][2][0][RTW89_MKK][0][49] = 127,
[0][1][2][0][RTW89_IC][1][49] = -2,
+ [0][1][2][0][RTW89_IC][2][49] = 68,
[0][1][2][0][RTW89_KCC][1][49] = 12,
[0][1][2][0][RTW89_KCC][0][49] = 127,
[0][1][2][0][RTW89_ACMA][1][49] = 127,
@@ -41499,6 +42579,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][49] = 127,
[0][1][2][0][RTW89_UK][1][49] = 127,
[0][1][2][0][RTW89_UK][0][49] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][49] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][49] = 127,
[0][1][2][0][RTW89_FCC][1][51] = -2,
[0][1][2][0][RTW89_FCC][2][51] = 127,
[0][1][2][0][RTW89_ETSI][1][51] = 127,
@@ -41506,6 +42588,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][51] = 127,
[0][1][2][0][RTW89_MKK][0][51] = 127,
[0][1][2][0][RTW89_IC][1][51] = -2,
+ [0][1][2][0][RTW89_IC][2][51] = 68,
[0][1][2][0][RTW89_KCC][1][51] = 12,
[0][1][2][0][RTW89_KCC][0][51] = 127,
[0][1][2][0][RTW89_ACMA][1][51] = 127,
@@ -41515,6 +42598,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][51] = 127,
[0][1][2][0][RTW89_UK][1][51] = 127,
[0][1][2][0][RTW89_UK][0][51] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][51] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][51] = 127,
[0][1][2][0][RTW89_FCC][1][53] = -2,
[0][1][2][0][RTW89_FCC][2][53] = 127,
[0][1][2][0][RTW89_ETSI][1][53] = 127,
@@ -41522,6 +42607,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][53] = 127,
[0][1][2][0][RTW89_MKK][0][53] = 127,
[0][1][2][0][RTW89_IC][1][53] = -2,
+ [0][1][2][0][RTW89_IC][2][53] = 68,
[0][1][2][0][RTW89_KCC][1][53] = 12,
[0][1][2][0][RTW89_KCC][0][53] = 127,
[0][1][2][0][RTW89_ACMA][1][53] = 127,
@@ -41531,6 +42617,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][53] = 127,
[0][1][2][0][RTW89_UK][1][53] = 127,
[0][1][2][0][RTW89_UK][0][53] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][53] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][53] = 127,
[0][1][2][0][RTW89_FCC][1][55] = -2,
[0][1][2][0][RTW89_FCC][2][55] = 68,
[0][1][2][0][RTW89_ETSI][1][55] = 127,
@@ -41538,6 +42626,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][55] = 127,
[0][1][2][0][RTW89_MKK][0][55] = 127,
[0][1][2][0][RTW89_IC][1][55] = -2,
+ [0][1][2][0][RTW89_IC][2][55] = 68,
[0][1][2][0][RTW89_KCC][1][55] = 12,
[0][1][2][0][RTW89_KCC][0][55] = 127,
[0][1][2][0][RTW89_ACMA][1][55] = 127,
@@ -41547,6 +42636,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][55] = 127,
[0][1][2][0][RTW89_UK][1][55] = 127,
[0][1][2][0][RTW89_UK][0][55] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][55] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][55] = 127,
[0][1][2][0][RTW89_FCC][1][57] = -2,
[0][1][2][0][RTW89_FCC][2][57] = 68,
[0][1][2][0][RTW89_ETSI][1][57] = 127,
@@ -41554,6 +42645,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][57] = 127,
[0][1][2][0][RTW89_MKK][0][57] = 127,
[0][1][2][0][RTW89_IC][1][57] = -2,
+ [0][1][2][0][RTW89_IC][2][57] = 68,
[0][1][2][0][RTW89_KCC][1][57] = 12,
[0][1][2][0][RTW89_KCC][0][57] = 127,
[0][1][2][0][RTW89_ACMA][1][57] = 127,
@@ -41563,6 +42655,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][57] = 127,
[0][1][2][0][RTW89_UK][1][57] = 127,
[0][1][2][0][RTW89_UK][0][57] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][57] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][57] = 127,
[0][1][2][0][RTW89_FCC][1][59] = -2,
[0][1][2][0][RTW89_FCC][2][59] = 68,
[0][1][2][0][RTW89_ETSI][1][59] = 127,
@@ -41570,6 +42664,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][59] = 127,
[0][1][2][0][RTW89_MKK][0][59] = 127,
[0][1][2][0][RTW89_IC][1][59] = -2,
+ [0][1][2][0][RTW89_IC][2][59] = 68,
[0][1][2][0][RTW89_KCC][1][59] = 12,
[0][1][2][0][RTW89_KCC][0][59] = 127,
[0][1][2][0][RTW89_ACMA][1][59] = 127,
@@ -41579,6 +42674,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][59] = 127,
[0][1][2][0][RTW89_UK][1][59] = 127,
[0][1][2][0][RTW89_UK][0][59] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][59] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][59] = 127,
[0][1][2][0][RTW89_FCC][1][60] = -2,
[0][1][2][0][RTW89_FCC][2][60] = 68,
[0][1][2][0][RTW89_ETSI][1][60] = 127,
@@ -41586,6 +42683,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][60] = 127,
[0][1][2][0][RTW89_MKK][0][60] = 127,
[0][1][2][0][RTW89_IC][1][60] = -2,
+ [0][1][2][0][RTW89_IC][2][60] = 68,
[0][1][2][0][RTW89_KCC][1][60] = 12,
[0][1][2][0][RTW89_KCC][0][60] = 127,
[0][1][2][0][RTW89_ACMA][1][60] = 127,
@@ -41595,6 +42693,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][60] = 127,
[0][1][2][0][RTW89_UK][1][60] = 127,
[0][1][2][0][RTW89_UK][0][60] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][60] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][60] = 127,
[0][1][2][0][RTW89_FCC][1][62] = -2,
[0][1][2][0][RTW89_FCC][2][62] = 68,
[0][1][2][0][RTW89_ETSI][1][62] = 127,
@@ -41602,6 +42702,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][62] = 127,
[0][1][2][0][RTW89_MKK][0][62] = 127,
[0][1][2][0][RTW89_IC][1][62] = -2,
+ [0][1][2][0][RTW89_IC][2][62] = 68,
[0][1][2][0][RTW89_KCC][1][62] = 12,
[0][1][2][0][RTW89_KCC][0][62] = 127,
[0][1][2][0][RTW89_ACMA][1][62] = 127,
@@ -41611,6 +42712,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][62] = 127,
[0][1][2][0][RTW89_UK][1][62] = 127,
[0][1][2][0][RTW89_UK][0][62] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][62] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][62] = 127,
[0][1][2][0][RTW89_FCC][1][64] = -2,
[0][1][2][0][RTW89_FCC][2][64] = 68,
[0][1][2][0][RTW89_ETSI][1][64] = 127,
@@ -41618,6 +42721,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][64] = 127,
[0][1][2][0][RTW89_MKK][0][64] = 127,
[0][1][2][0][RTW89_IC][1][64] = -2,
+ [0][1][2][0][RTW89_IC][2][64] = 68,
[0][1][2][0][RTW89_KCC][1][64] = 12,
[0][1][2][0][RTW89_KCC][0][64] = 127,
[0][1][2][0][RTW89_ACMA][1][64] = 127,
@@ -41627,6 +42731,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][64] = 127,
[0][1][2][0][RTW89_UK][1][64] = 127,
[0][1][2][0][RTW89_UK][0][64] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][64] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][64] = 127,
[0][1][2][0][RTW89_FCC][1][66] = -2,
[0][1][2][0][RTW89_FCC][2][66] = 68,
[0][1][2][0][RTW89_ETSI][1][66] = 127,
@@ -41634,6 +42740,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][66] = 127,
[0][1][2][0][RTW89_MKK][0][66] = 127,
[0][1][2][0][RTW89_IC][1][66] = -2,
+ [0][1][2][0][RTW89_IC][2][66] = 68,
[0][1][2][0][RTW89_KCC][1][66] = 12,
[0][1][2][0][RTW89_KCC][0][66] = 127,
[0][1][2][0][RTW89_ACMA][1][66] = 127,
@@ -41643,6 +42750,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][66] = 127,
[0][1][2][0][RTW89_UK][1][66] = 127,
[0][1][2][0][RTW89_UK][0][66] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][66] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][66] = 127,
[0][1][2][0][RTW89_FCC][1][68] = -2,
[0][1][2][0][RTW89_FCC][2][68] = 68,
[0][1][2][0][RTW89_ETSI][1][68] = 127,
@@ -41650,6 +42759,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][68] = 127,
[0][1][2][0][RTW89_MKK][0][68] = 127,
[0][1][2][0][RTW89_IC][1][68] = -2,
+ [0][1][2][0][RTW89_IC][2][68] = 68,
[0][1][2][0][RTW89_KCC][1][68] = 12,
[0][1][2][0][RTW89_KCC][0][68] = 127,
[0][1][2][0][RTW89_ACMA][1][68] = 127,
@@ -41659,6 +42769,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][68] = 127,
[0][1][2][0][RTW89_UK][1][68] = 127,
[0][1][2][0][RTW89_UK][0][68] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][68] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][68] = 127,
[0][1][2][0][RTW89_FCC][1][70] = -2,
[0][1][2][0][RTW89_FCC][2][70] = 68,
[0][1][2][0][RTW89_ETSI][1][70] = 127,
@@ -41666,6 +42778,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][70] = 127,
[0][1][2][0][RTW89_MKK][0][70] = 127,
[0][1][2][0][RTW89_IC][1][70] = -2,
+ [0][1][2][0][RTW89_IC][2][70] = 68,
[0][1][2][0][RTW89_KCC][1][70] = 12,
[0][1][2][0][RTW89_KCC][0][70] = 127,
[0][1][2][0][RTW89_ACMA][1][70] = 127,
@@ -41675,6 +42788,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][70] = 127,
[0][1][2][0][RTW89_UK][1][70] = 127,
[0][1][2][0][RTW89_UK][0][70] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][70] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][70] = 127,
[0][1][2][0][RTW89_FCC][1][72] = -2,
[0][1][2][0][RTW89_FCC][2][72] = 68,
[0][1][2][0][RTW89_ETSI][1][72] = 127,
@@ -41682,6 +42797,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][72] = 127,
[0][1][2][0][RTW89_MKK][0][72] = 127,
[0][1][2][0][RTW89_IC][1][72] = -2,
+ [0][1][2][0][RTW89_IC][2][72] = 68,
[0][1][2][0][RTW89_KCC][1][72] = 12,
[0][1][2][0][RTW89_KCC][0][72] = 127,
[0][1][2][0][RTW89_ACMA][1][72] = 127,
@@ -41691,6 +42807,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][72] = 127,
[0][1][2][0][RTW89_UK][1][72] = 127,
[0][1][2][0][RTW89_UK][0][72] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][72] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][72] = 127,
[0][1][2][0][RTW89_FCC][1][74] = -2,
[0][1][2][0][RTW89_FCC][2][74] = 68,
[0][1][2][0][RTW89_ETSI][1][74] = 127,
@@ -41698,6 +42816,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][74] = 127,
[0][1][2][0][RTW89_MKK][0][74] = 127,
[0][1][2][0][RTW89_IC][1][74] = -2,
+ [0][1][2][0][RTW89_IC][2][74] = 68,
[0][1][2][0][RTW89_KCC][1][74] = 12,
[0][1][2][0][RTW89_KCC][0][74] = 127,
[0][1][2][0][RTW89_ACMA][1][74] = 127,
@@ -41707,6 +42826,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][74] = 127,
[0][1][2][0][RTW89_UK][1][74] = 127,
[0][1][2][0][RTW89_UK][0][74] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][74] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][74] = 127,
[0][1][2][0][RTW89_FCC][1][75] = -2,
[0][1][2][0][RTW89_FCC][2][75] = 68,
[0][1][2][0][RTW89_ETSI][1][75] = 127,
@@ -41714,6 +42835,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][75] = 127,
[0][1][2][0][RTW89_MKK][0][75] = 127,
[0][1][2][0][RTW89_IC][1][75] = -2,
+ [0][1][2][0][RTW89_IC][2][75] = 68,
[0][1][2][0][RTW89_KCC][1][75] = 12,
[0][1][2][0][RTW89_KCC][0][75] = 127,
[0][1][2][0][RTW89_ACMA][1][75] = 127,
@@ -41723,6 +42845,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][75] = 127,
[0][1][2][0][RTW89_UK][1][75] = 127,
[0][1][2][0][RTW89_UK][0][75] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][75] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][75] = 127,
[0][1][2][0][RTW89_FCC][1][77] = -2,
[0][1][2][0][RTW89_FCC][2][77] = 68,
[0][1][2][0][RTW89_ETSI][1][77] = 127,
@@ -41730,6 +42854,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][77] = 127,
[0][1][2][0][RTW89_MKK][0][77] = 127,
[0][1][2][0][RTW89_IC][1][77] = -2,
+ [0][1][2][0][RTW89_IC][2][77] = 68,
[0][1][2][0][RTW89_KCC][1][77] = 12,
[0][1][2][0][RTW89_KCC][0][77] = 127,
[0][1][2][0][RTW89_ACMA][1][77] = 127,
@@ -41739,6 +42864,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][77] = 127,
[0][1][2][0][RTW89_UK][1][77] = 127,
[0][1][2][0][RTW89_UK][0][77] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][77] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][77] = 127,
[0][1][2][0][RTW89_FCC][1][79] = -2,
[0][1][2][0][RTW89_FCC][2][79] = 68,
[0][1][2][0][RTW89_ETSI][1][79] = 127,
@@ -41746,6 +42873,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][79] = 127,
[0][1][2][0][RTW89_MKK][0][79] = 127,
[0][1][2][0][RTW89_IC][1][79] = -2,
+ [0][1][2][0][RTW89_IC][2][79] = 68,
[0][1][2][0][RTW89_KCC][1][79] = 12,
[0][1][2][0][RTW89_KCC][0][79] = 127,
[0][1][2][0][RTW89_ACMA][1][79] = 127,
@@ -41755,6 +42883,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][79] = 127,
[0][1][2][0][RTW89_UK][1][79] = 127,
[0][1][2][0][RTW89_UK][0][79] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][79] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][79] = 127,
[0][1][2][0][RTW89_FCC][1][81] = -2,
[0][1][2][0][RTW89_FCC][2][81] = 68,
[0][1][2][0][RTW89_ETSI][1][81] = 127,
@@ -41762,6 +42892,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][81] = 127,
[0][1][2][0][RTW89_MKK][0][81] = 127,
[0][1][2][0][RTW89_IC][1][81] = -2,
+ [0][1][2][0][RTW89_IC][2][81] = 68,
[0][1][2][0][RTW89_KCC][1][81] = 12,
[0][1][2][0][RTW89_KCC][0][81] = 127,
[0][1][2][0][RTW89_ACMA][1][81] = 127,
@@ -41771,6 +42902,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][81] = 127,
[0][1][2][0][RTW89_UK][1][81] = 127,
[0][1][2][0][RTW89_UK][0][81] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][81] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][81] = 127,
[0][1][2][0][RTW89_FCC][1][83] = -2,
[0][1][2][0][RTW89_FCC][2][83] = 68,
[0][1][2][0][RTW89_ETSI][1][83] = 127,
@@ -41778,6 +42911,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][83] = 127,
[0][1][2][0][RTW89_MKK][0][83] = 127,
[0][1][2][0][RTW89_IC][1][83] = -2,
+ [0][1][2][0][RTW89_IC][2][83] = 68,
[0][1][2][0][RTW89_KCC][1][83] = 20,
[0][1][2][0][RTW89_KCC][0][83] = 127,
[0][1][2][0][RTW89_ACMA][1][83] = 127,
@@ -41787,6 +42921,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][83] = 127,
[0][1][2][0][RTW89_UK][1][83] = 127,
[0][1][2][0][RTW89_UK][0][83] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][83] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][83] = 127,
[0][1][2][0][RTW89_FCC][1][85] = -2,
[0][1][2][0][RTW89_FCC][2][85] = 68,
[0][1][2][0][RTW89_ETSI][1][85] = 127,
@@ -41794,6 +42930,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][85] = 127,
[0][1][2][0][RTW89_MKK][0][85] = 127,
[0][1][2][0][RTW89_IC][1][85] = -2,
+ [0][1][2][0][RTW89_IC][2][85] = 68,
[0][1][2][0][RTW89_KCC][1][85] = 20,
[0][1][2][0][RTW89_KCC][0][85] = 127,
[0][1][2][0][RTW89_ACMA][1][85] = 127,
@@ -41803,6 +42940,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][85] = 127,
[0][1][2][0][RTW89_UK][1][85] = 127,
[0][1][2][0][RTW89_UK][0][85] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][85] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][85] = 127,
[0][1][2][0][RTW89_FCC][1][87] = -2,
[0][1][2][0][RTW89_FCC][2][87] = 127,
[0][1][2][0][RTW89_ETSI][1][87] = 127,
@@ -41810,6 +42949,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][87] = 127,
[0][1][2][0][RTW89_MKK][0][87] = 127,
[0][1][2][0][RTW89_IC][1][87] = -2,
+ [0][1][2][0][RTW89_IC][2][87] = 127,
[0][1][2][0][RTW89_KCC][1][87] = 20,
[0][1][2][0][RTW89_KCC][0][87] = 127,
[0][1][2][0][RTW89_ACMA][1][87] = 127,
@@ -41819,6 +42959,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][87] = 127,
[0][1][2][0][RTW89_UK][1][87] = 127,
[0][1][2][0][RTW89_UK][0][87] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][87] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][87] = 127,
[0][1][2][0][RTW89_FCC][1][89] = -2,
[0][1][2][0][RTW89_FCC][2][89] = 127,
[0][1][2][0][RTW89_ETSI][1][89] = 127,
@@ -41826,6 +42968,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][89] = 127,
[0][1][2][0][RTW89_MKK][0][89] = 127,
[0][1][2][0][RTW89_IC][1][89] = -2,
+ [0][1][2][0][RTW89_IC][2][89] = 127,
[0][1][2][0][RTW89_KCC][1][89] = 20,
[0][1][2][0][RTW89_KCC][0][89] = 127,
[0][1][2][0][RTW89_ACMA][1][89] = 127,
@@ -41835,6 +42978,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][89] = 127,
[0][1][2][0][RTW89_UK][1][89] = 127,
[0][1][2][0][RTW89_UK][0][89] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][89] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][89] = 127,
[0][1][2][0][RTW89_FCC][1][90] = -2,
[0][1][2][0][RTW89_FCC][2][90] = 127,
[0][1][2][0][RTW89_ETSI][1][90] = 127,
@@ -41842,6 +42987,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][90] = 127,
[0][1][2][0][RTW89_MKK][0][90] = 127,
[0][1][2][0][RTW89_IC][1][90] = -2,
+ [0][1][2][0][RTW89_IC][2][90] = 127,
[0][1][2][0][RTW89_KCC][1][90] = 20,
[0][1][2][0][RTW89_KCC][0][90] = 127,
[0][1][2][0][RTW89_ACMA][1][90] = 127,
@@ -41851,6 +42997,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][90] = 127,
[0][1][2][0][RTW89_UK][1][90] = 127,
[0][1][2][0][RTW89_UK][0][90] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][90] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][90] = 127,
[0][1][2][0][RTW89_FCC][1][92] = -2,
[0][1][2][0][RTW89_FCC][2][92] = 127,
[0][1][2][0][RTW89_ETSI][1][92] = 127,
@@ -41858,6 +43006,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][92] = 127,
[0][1][2][0][RTW89_MKK][0][92] = 127,
[0][1][2][0][RTW89_IC][1][92] = -2,
+ [0][1][2][0][RTW89_IC][2][92] = 127,
[0][1][2][0][RTW89_KCC][1][92] = 20,
[0][1][2][0][RTW89_KCC][0][92] = 127,
[0][1][2][0][RTW89_ACMA][1][92] = 127,
@@ -41867,6 +43016,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][92] = 127,
[0][1][2][0][RTW89_UK][1][92] = 127,
[0][1][2][0][RTW89_UK][0][92] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][92] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][92] = 127,
[0][1][2][0][RTW89_FCC][1][94] = -2,
[0][1][2][0][RTW89_FCC][2][94] = 127,
[0][1][2][0][RTW89_ETSI][1][94] = 127,
@@ -41874,6 +43025,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][94] = 127,
[0][1][2][0][RTW89_MKK][0][94] = 127,
[0][1][2][0][RTW89_IC][1][94] = -2,
+ [0][1][2][0][RTW89_IC][2][94] = 127,
[0][1][2][0][RTW89_KCC][1][94] = 20,
[0][1][2][0][RTW89_KCC][0][94] = 127,
[0][1][2][0][RTW89_ACMA][1][94] = 127,
@@ -41883,6 +43035,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][94] = 127,
[0][1][2][0][RTW89_UK][1][94] = 127,
[0][1][2][0][RTW89_UK][0][94] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][94] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][94] = 127,
[0][1][2][0][RTW89_FCC][1][96] = -2,
[0][1][2][0][RTW89_FCC][2][96] = 127,
[0][1][2][0][RTW89_ETSI][1][96] = 127,
@@ -41890,6 +43044,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][96] = 127,
[0][1][2][0][RTW89_MKK][0][96] = 127,
[0][1][2][0][RTW89_IC][1][96] = -2,
+ [0][1][2][0][RTW89_IC][2][96] = 127,
[0][1][2][0][RTW89_KCC][1][96] = 20,
[0][1][2][0][RTW89_KCC][0][96] = 127,
[0][1][2][0][RTW89_ACMA][1][96] = 127,
@@ -41899,6 +43054,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][96] = 127,
[0][1][2][0][RTW89_UK][1][96] = 127,
[0][1][2][0][RTW89_UK][0][96] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][96] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][96] = 127,
[0][1][2][0][RTW89_FCC][1][98] = -2,
[0][1][2][0][RTW89_FCC][2][98] = 127,
[0][1][2][0][RTW89_ETSI][1][98] = 127,
@@ -41906,6 +43063,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][98] = 127,
[0][1][2][0][RTW89_MKK][0][98] = 127,
[0][1][2][0][RTW89_IC][1][98] = -2,
+ [0][1][2][0][RTW89_IC][2][98] = 127,
[0][1][2][0][RTW89_KCC][1][98] = 20,
[0][1][2][0][RTW89_KCC][0][98] = 127,
[0][1][2][0][RTW89_ACMA][1][98] = 127,
@@ -41915,6 +43073,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][98] = 127,
[0][1][2][0][RTW89_UK][1][98] = 127,
[0][1][2][0][RTW89_UK][0][98] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][98] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][98] = 127,
[0][1][2][0][RTW89_FCC][1][100] = -2,
[0][1][2][0][RTW89_FCC][2][100] = 127,
[0][1][2][0][RTW89_ETSI][1][100] = 127,
@@ -41922,6 +43082,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][100] = 127,
[0][1][2][0][RTW89_MKK][0][100] = 127,
[0][1][2][0][RTW89_IC][1][100] = -2,
+ [0][1][2][0][RTW89_IC][2][100] = 127,
[0][1][2][0][RTW89_KCC][1][100] = 20,
[0][1][2][0][RTW89_KCC][0][100] = 127,
[0][1][2][0][RTW89_ACMA][1][100] = 127,
@@ -41931,6 +43092,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][100] = 127,
[0][1][2][0][RTW89_UK][1][100] = 127,
[0][1][2][0][RTW89_UK][0][100] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][100] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][100] = 127,
[0][1][2][0][RTW89_FCC][1][102] = -2,
[0][1][2][0][RTW89_FCC][2][102] = 127,
[0][1][2][0][RTW89_ETSI][1][102] = 127,
@@ -41938,6 +43101,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][102] = 127,
[0][1][2][0][RTW89_MKK][0][102] = 127,
[0][1][2][0][RTW89_IC][1][102] = -2,
+ [0][1][2][0][RTW89_IC][2][102] = 127,
[0][1][2][0][RTW89_KCC][1][102] = 20,
[0][1][2][0][RTW89_KCC][0][102] = 127,
[0][1][2][0][RTW89_ACMA][1][102] = 127,
@@ -41947,6 +43111,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][102] = 127,
[0][1][2][0][RTW89_UK][1][102] = 127,
[0][1][2][0][RTW89_UK][0][102] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][102] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][102] = 127,
[0][1][2][0][RTW89_FCC][1][104] = -2,
[0][1][2][0][RTW89_FCC][2][104] = 127,
[0][1][2][0][RTW89_ETSI][1][104] = 127,
@@ -41954,6 +43120,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][104] = 127,
[0][1][2][0][RTW89_MKK][0][104] = 127,
[0][1][2][0][RTW89_IC][1][104] = -2,
+ [0][1][2][0][RTW89_IC][2][104] = 127,
[0][1][2][0][RTW89_KCC][1][104] = 20,
[0][1][2][0][RTW89_KCC][0][104] = 127,
[0][1][2][0][RTW89_ACMA][1][104] = 127,
@@ -41963,6 +43130,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][104] = 127,
[0][1][2][0][RTW89_UK][1][104] = 127,
[0][1][2][0][RTW89_UK][0][104] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][104] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][104] = 127,
[0][1][2][0][RTW89_FCC][1][105] = -2,
[0][1][2][0][RTW89_FCC][2][105] = 127,
[0][1][2][0][RTW89_ETSI][1][105] = 127,
@@ -41970,6 +43139,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][105] = 127,
[0][1][2][0][RTW89_MKK][0][105] = 127,
[0][1][2][0][RTW89_IC][1][105] = -2,
+ [0][1][2][0][RTW89_IC][2][105] = 127,
[0][1][2][0][RTW89_KCC][1][105] = 20,
[0][1][2][0][RTW89_KCC][0][105] = 127,
[0][1][2][0][RTW89_ACMA][1][105] = 127,
@@ -41979,6 +43149,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][105] = 127,
[0][1][2][0][RTW89_UK][1][105] = 127,
[0][1][2][0][RTW89_UK][0][105] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][105] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][105] = 127,
[0][1][2][0][RTW89_FCC][1][107] = 1,
[0][1][2][0][RTW89_FCC][2][107] = 127,
[0][1][2][0][RTW89_ETSI][1][107] = 127,
@@ -41986,6 +43158,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][107] = 127,
[0][1][2][0][RTW89_MKK][0][107] = 127,
[0][1][2][0][RTW89_IC][1][107] = 1,
+ [0][1][2][0][RTW89_IC][2][107] = 127,
[0][1][2][0][RTW89_KCC][1][107] = 20,
[0][1][2][0][RTW89_KCC][0][107] = 127,
[0][1][2][0][RTW89_ACMA][1][107] = 127,
@@ -41995,6 +43168,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][107] = 127,
[0][1][2][0][RTW89_UK][1][107] = 127,
[0][1][2][0][RTW89_UK][0][107] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][107] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][107] = 127,
[0][1][2][0][RTW89_FCC][1][109] = 1,
[0][1][2][0][RTW89_FCC][2][109] = 127,
[0][1][2][0][RTW89_ETSI][1][109] = 127,
@@ -42002,6 +43177,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][109] = 127,
[0][1][2][0][RTW89_MKK][0][109] = 127,
[0][1][2][0][RTW89_IC][1][109] = 1,
+ [0][1][2][0][RTW89_IC][2][109] = 127,
[0][1][2][0][RTW89_KCC][1][109] = 20,
[0][1][2][0][RTW89_KCC][0][109] = 127,
[0][1][2][0][RTW89_ACMA][1][109] = 127,
@@ -42011,6 +43187,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][109] = 127,
[0][1][2][0][RTW89_UK][1][109] = 127,
[0][1][2][0][RTW89_UK][0][109] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][109] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][109] = 127,
[0][1][2][0][RTW89_FCC][1][111] = 127,
[0][1][2][0][RTW89_FCC][2][111] = 127,
[0][1][2][0][RTW89_ETSI][1][111] = 127,
@@ -42018,6 +43196,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][111] = 127,
[0][1][2][0][RTW89_MKK][0][111] = 127,
[0][1][2][0][RTW89_IC][1][111] = 127,
+ [0][1][2][0][RTW89_IC][2][111] = 127,
[0][1][2][0][RTW89_KCC][1][111] = 127,
[0][1][2][0][RTW89_KCC][0][111] = 127,
[0][1][2][0][RTW89_ACMA][1][111] = 127,
@@ -42027,6 +43206,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][111] = 127,
[0][1][2][0][RTW89_UK][1][111] = 127,
[0][1][2][0][RTW89_UK][0][111] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][111] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][111] = 127,
[0][1][2][0][RTW89_FCC][1][113] = 127,
[0][1][2][0][RTW89_FCC][2][113] = 127,
[0][1][2][0][RTW89_ETSI][1][113] = 127,
@@ -42034,6 +43215,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][113] = 127,
[0][1][2][0][RTW89_MKK][0][113] = 127,
[0][1][2][0][RTW89_IC][1][113] = 127,
+ [0][1][2][0][RTW89_IC][2][113] = 127,
[0][1][2][0][RTW89_KCC][1][113] = 127,
[0][1][2][0][RTW89_KCC][0][113] = 127,
[0][1][2][0][RTW89_ACMA][1][113] = 127,
@@ -42043,6 +43225,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][113] = 127,
[0][1][2][0][RTW89_UK][1][113] = 127,
[0][1][2][0][RTW89_UK][0][113] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][113] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][113] = 127,
[0][1][2][0][RTW89_FCC][1][115] = 127,
[0][1][2][0][RTW89_FCC][2][115] = 127,
[0][1][2][0][RTW89_ETSI][1][115] = 127,
@@ -42050,6 +43234,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][115] = 127,
[0][1][2][0][RTW89_MKK][0][115] = 127,
[0][1][2][0][RTW89_IC][1][115] = 127,
+ [0][1][2][0][RTW89_IC][2][115] = 127,
[0][1][2][0][RTW89_KCC][1][115] = 127,
[0][1][2][0][RTW89_KCC][0][115] = 127,
[0][1][2][0][RTW89_ACMA][1][115] = 127,
@@ -42059,6 +43244,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][115] = 127,
[0][1][2][0][RTW89_UK][1][115] = 127,
[0][1][2][0][RTW89_UK][0][115] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][115] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][115] = 127,
[0][1][2][0][RTW89_FCC][1][117] = 127,
[0][1][2][0][RTW89_FCC][2][117] = 127,
[0][1][2][0][RTW89_ETSI][1][117] = 127,
@@ -42066,6 +43253,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][117] = 127,
[0][1][2][0][RTW89_MKK][0][117] = 127,
[0][1][2][0][RTW89_IC][1][117] = 127,
+ [0][1][2][0][RTW89_IC][2][117] = 127,
[0][1][2][0][RTW89_KCC][1][117] = 127,
[0][1][2][0][RTW89_KCC][0][117] = 127,
[0][1][2][0][RTW89_ACMA][1][117] = 127,
@@ -42075,6 +43263,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][117] = 127,
[0][1][2][0][RTW89_UK][1][117] = 127,
[0][1][2][0][RTW89_UK][0][117] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][117] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][117] = 127,
[0][1][2][0][RTW89_FCC][1][119] = 127,
[0][1][2][0][RTW89_FCC][2][119] = 127,
[0][1][2][0][RTW89_ETSI][1][119] = 127,
@@ -42082,6 +43272,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_MKK][1][119] = 127,
[0][1][2][0][RTW89_MKK][0][119] = 127,
[0][1][2][0][RTW89_IC][1][119] = 127,
+ [0][1][2][0][RTW89_IC][2][119] = 127,
[0][1][2][0][RTW89_KCC][1][119] = 127,
[0][1][2][0][RTW89_KCC][0][119] = 127,
[0][1][2][0][RTW89_ACMA][1][119] = 127,
@@ -42091,6 +43282,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][0][RTW89_QATAR][0][119] = 127,
[0][1][2][0][RTW89_UK][1][119] = 127,
[0][1][2][0][RTW89_UK][0][119] = 127,
+ [0][1][2][0][RTW89_THAILAND][1][119] = 127,
+ [0][1][2][0][RTW89_THAILAND][0][119] = 127,
[0][1][2][1][RTW89_FCC][1][0] = -2,
[0][1][2][1][RTW89_FCC][2][0] = 54,
[0][1][2][1][RTW89_ETSI][1][0] = 42,
@@ -42098,6 +43291,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][0] = 56,
[0][1][2][1][RTW89_MKK][0][0] = 16,
[0][1][2][1][RTW89_IC][1][0] = -2,
+ [0][1][2][1][RTW89_IC][2][0] = 54,
[0][1][2][1][RTW89_KCC][1][0] = 12,
[0][1][2][1][RTW89_KCC][0][0] = 10,
[0][1][2][1][RTW89_ACMA][1][0] = 42,
@@ -42107,6 +43301,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][0] = 6,
[0][1][2][1][RTW89_UK][1][0] = 42,
[0][1][2][1][RTW89_UK][0][0] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][0] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][0] = -2,
[0][1][2][1][RTW89_FCC][1][2] = -4,
[0][1][2][1][RTW89_FCC][2][2] = 54,
[0][1][2][1][RTW89_ETSI][1][2] = 42,
@@ -42114,6 +43310,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][2] = 54,
[0][1][2][1][RTW89_MKK][0][2] = 16,
[0][1][2][1][RTW89_IC][1][2] = -4,
+ [0][1][2][1][RTW89_IC][2][2] = 54,
[0][1][2][1][RTW89_KCC][1][2] = 12,
[0][1][2][1][RTW89_KCC][0][2] = 12,
[0][1][2][1][RTW89_ACMA][1][2] = 42,
@@ -42123,6 +43320,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][2] = 6,
[0][1][2][1][RTW89_UK][1][2] = 42,
[0][1][2][1][RTW89_UK][0][2] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][2] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][2] = -4,
[0][1][2][1][RTW89_FCC][1][4] = -4,
[0][1][2][1][RTW89_FCC][2][4] = 54,
[0][1][2][1][RTW89_ETSI][1][4] = 42,
@@ -42130,6 +43329,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][4] = 54,
[0][1][2][1][RTW89_MKK][0][4] = 16,
[0][1][2][1][RTW89_IC][1][4] = -4,
+ [0][1][2][1][RTW89_IC][2][4] = 54,
[0][1][2][1][RTW89_KCC][1][4] = 12,
[0][1][2][1][RTW89_KCC][0][4] = 12,
[0][1][2][1][RTW89_ACMA][1][4] = 42,
@@ -42139,6 +43339,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][4] = 6,
[0][1][2][1][RTW89_UK][1][4] = 42,
[0][1][2][1][RTW89_UK][0][4] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][4] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][4] = -4,
[0][1][2][1][RTW89_FCC][1][6] = -4,
[0][1][2][1][RTW89_FCC][2][6] = 54,
[0][1][2][1][RTW89_ETSI][1][6] = 42,
@@ -42146,6 +43348,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][6] = 54,
[0][1][2][1][RTW89_MKK][0][6] = 16,
[0][1][2][1][RTW89_IC][1][6] = -4,
+ [0][1][2][1][RTW89_IC][2][6] = 54,
[0][1][2][1][RTW89_KCC][1][6] = 12,
[0][1][2][1][RTW89_KCC][0][6] = 12,
[0][1][2][1][RTW89_ACMA][1][6] = 42,
@@ -42155,6 +43358,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][6] = 6,
[0][1][2][1][RTW89_UK][1][6] = 42,
[0][1][2][1][RTW89_UK][0][6] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][6] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][6] = -4,
[0][1][2][1][RTW89_FCC][1][8] = -4,
[0][1][2][1][RTW89_FCC][2][8] = 54,
[0][1][2][1][RTW89_ETSI][1][8] = 42,
@@ -42162,6 +43367,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][8] = 54,
[0][1][2][1][RTW89_MKK][0][8] = 16,
[0][1][2][1][RTW89_IC][1][8] = -4,
+ [0][1][2][1][RTW89_IC][2][8] = 54,
[0][1][2][1][RTW89_KCC][1][8] = 12,
[0][1][2][1][RTW89_KCC][0][8] = 12,
[0][1][2][1][RTW89_ACMA][1][8] = 42,
@@ -42171,6 +43377,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][8] = 6,
[0][1][2][1][RTW89_UK][1][8] = 42,
[0][1][2][1][RTW89_UK][0][8] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][8] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][8] = -4,
[0][1][2][1][RTW89_FCC][1][10] = -4,
[0][1][2][1][RTW89_FCC][2][10] = 54,
[0][1][2][1][RTW89_ETSI][1][10] = 42,
@@ -42178,6 +43386,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][10] = 54,
[0][1][2][1][RTW89_MKK][0][10] = 16,
[0][1][2][1][RTW89_IC][1][10] = -4,
+ [0][1][2][1][RTW89_IC][2][10] = 54,
[0][1][2][1][RTW89_KCC][1][10] = 12,
[0][1][2][1][RTW89_KCC][0][10] = 12,
[0][1][2][1][RTW89_ACMA][1][10] = 42,
@@ -42187,6 +43396,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][10] = 6,
[0][1][2][1][RTW89_UK][1][10] = 42,
[0][1][2][1][RTW89_UK][0][10] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][10] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][10] = -4,
[0][1][2][1][RTW89_FCC][1][12] = -4,
[0][1][2][1][RTW89_FCC][2][12] = 54,
[0][1][2][1][RTW89_ETSI][1][12] = 42,
@@ -42194,6 +43405,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][12] = 54,
[0][1][2][1][RTW89_MKK][0][12] = 16,
[0][1][2][1][RTW89_IC][1][12] = -4,
+ [0][1][2][1][RTW89_IC][2][12] = 54,
[0][1][2][1][RTW89_KCC][1][12] = 12,
[0][1][2][1][RTW89_KCC][0][12] = 12,
[0][1][2][1][RTW89_ACMA][1][12] = 42,
@@ -42203,6 +43415,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][12] = 6,
[0][1][2][1][RTW89_UK][1][12] = 42,
[0][1][2][1][RTW89_UK][0][12] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][12] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][12] = -4,
[0][1][2][1][RTW89_FCC][1][14] = -4,
[0][1][2][1][RTW89_FCC][2][14] = 54,
[0][1][2][1][RTW89_ETSI][1][14] = 42,
@@ -42210,6 +43424,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][14] = 54,
[0][1][2][1][RTW89_MKK][0][14] = 16,
[0][1][2][1][RTW89_IC][1][14] = -4,
+ [0][1][2][1][RTW89_IC][2][14] = 54,
[0][1][2][1][RTW89_KCC][1][14] = 12,
[0][1][2][1][RTW89_KCC][0][14] = 12,
[0][1][2][1][RTW89_ACMA][1][14] = 42,
@@ -42219,6 +43434,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][14] = 6,
[0][1][2][1][RTW89_UK][1][14] = 42,
[0][1][2][1][RTW89_UK][0][14] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][14] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][14] = -4,
[0][1][2][1][RTW89_FCC][1][15] = -4,
[0][1][2][1][RTW89_FCC][2][15] = 54,
[0][1][2][1][RTW89_ETSI][1][15] = 42,
@@ -42226,6 +43443,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][15] = 54,
[0][1][2][1][RTW89_MKK][0][15] = 16,
[0][1][2][1][RTW89_IC][1][15] = -4,
+ [0][1][2][1][RTW89_IC][2][15] = 54,
[0][1][2][1][RTW89_KCC][1][15] = 12,
[0][1][2][1][RTW89_KCC][0][15] = 12,
[0][1][2][1][RTW89_ACMA][1][15] = 42,
@@ -42235,6 +43453,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][15] = 6,
[0][1][2][1][RTW89_UK][1][15] = 42,
[0][1][2][1][RTW89_UK][0][15] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][15] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][15] = -4,
[0][1][2][1][RTW89_FCC][1][17] = -4,
[0][1][2][1][RTW89_FCC][2][17] = 54,
[0][1][2][1][RTW89_ETSI][1][17] = 42,
@@ -42242,6 +43462,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][17] = 54,
[0][1][2][1][RTW89_MKK][0][17] = 16,
[0][1][2][1][RTW89_IC][1][17] = -4,
+ [0][1][2][1][RTW89_IC][2][17] = 54,
[0][1][2][1][RTW89_KCC][1][17] = 12,
[0][1][2][1][RTW89_KCC][0][17] = 12,
[0][1][2][1][RTW89_ACMA][1][17] = 42,
@@ -42251,6 +43472,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][17] = 6,
[0][1][2][1][RTW89_UK][1][17] = 42,
[0][1][2][1][RTW89_UK][0][17] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][17] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][17] = -4,
[0][1][2][1][RTW89_FCC][1][19] = -4,
[0][1][2][1][RTW89_FCC][2][19] = 54,
[0][1][2][1][RTW89_ETSI][1][19] = 42,
@@ -42258,6 +43481,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][19] = 54,
[0][1][2][1][RTW89_MKK][0][19] = 16,
[0][1][2][1][RTW89_IC][1][19] = -4,
+ [0][1][2][1][RTW89_IC][2][19] = 54,
[0][1][2][1][RTW89_KCC][1][19] = 12,
[0][1][2][1][RTW89_KCC][0][19] = 12,
[0][1][2][1][RTW89_ACMA][1][19] = 42,
@@ -42267,6 +43491,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][19] = 6,
[0][1][2][1][RTW89_UK][1][19] = 42,
[0][1][2][1][RTW89_UK][0][19] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][19] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][19] = -4,
[0][1][2][1][RTW89_FCC][1][21] = -4,
[0][1][2][1][RTW89_FCC][2][21] = 54,
[0][1][2][1][RTW89_ETSI][1][21] = 42,
@@ -42274,6 +43500,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][21] = 54,
[0][1][2][1][RTW89_MKK][0][21] = 16,
[0][1][2][1][RTW89_IC][1][21] = -4,
+ [0][1][2][1][RTW89_IC][2][21] = 54,
[0][1][2][1][RTW89_KCC][1][21] = 12,
[0][1][2][1][RTW89_KCC][0][21] = 12,
[0][1][2][1][RTW89_ACMA][1][21] = 42,
@@ -42283,6 +43510,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][21] = 6,
[0][1][2][1][RTW89_UK][1][21] = 42,
[0][1][2][1][RTW89_UK][0][21] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][21] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][21] = -4,
[0][1][2][1][RTW89_FCC][1][23] = -4,
[0][1][2][1][RTW89_FCC][2][23] = 68,
[0][1][2][1][RTW89_ETSI][1][23] = 42,
@@ -42290,6 +43519,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][23] = 54,
[0][1][2][1][RTW89_MKK][0][23] = 16,
[0][1][2][1][RTW89_IC][1][23] = -4,
+ [0][1][2][1][RTW89_IC][2][23] = 68,
[0][1][2][1][RTW89_KCC][1][23] = 12,
[0][1][2][1][RTW89_KCC][0][23] = 10,
[0][1][2][1][RTW89_ACMA][1][23] = 42,
@@ -42299,6 +43529,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][23] = 6,
[0][1][2][1][RTW89_UK][1][23] = 42,
[0][1][2][1][RTW89_UK][0][23] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][23] = 44,
+ [0][1][2][1][RTW89_THAILAND][0][23] = -4,
[0][1][2][1][RTW89_FCC][1][25] = -4,
[0][1][2][1][RTW89_FCC][2][25] = 68,
[0][1][2][1][RTW89_ETSI][1][25] = 42,
@@ -42306,6 +43538,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][25] = 54,
[0][1][2][1][RTW89_MKK][0][25] = 16,
[0][1][2][1][RTW89_IC][1][25] = -4,
+ [0][1][2][1][RTW89_IC][2][25] = 68,
[0][1][2][1][RTW89_KCC][1][25] = 12,
[0][1][2][1][RTW89_KCC][0][25] = 14,
[0][1][2][1][RTW89_ACMA][1][25] = 42,
@@ -42315,6 +43548,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][25] = 6,
[0][1][2][1][RTW89_UK][1][25] = 42,
[0][1][2][1][RTW89_UK][0][25] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][25] = 42,
+ [0][1][2][1][RTW89_THAILAND][0][25] = -4,
[0][1][2][1][RTW89_FCC][1][27] = -4,
[0][1][2][1][RTW89_FCC][2][27] = 68,
[0][1][2][1][RTW89_ETSI][1][27] = 42,
@@ -42322,6 +43557,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][27] = 54,
[0][1][2][1][RTW89_MKK][0][27] = 16,
[0][1][2][1][RTW89_IC][1][27] = -4,
+ [0][1][2][1][RTW89_IC][2][27] = 68,
[0][1][2][1][RTW89_KCC][1][27] = 12,
[0][1][2][1][RTW89_KCC][0][27] = 14,
[0][1][2][1][RTW89_ACMA][1][27] = 42,
@@ -42331,6 +43567,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][27] = 6,
[0][1][2][1][RTW89_UK][1][27] = 42,
[0][1][2][1][RTW89_UK][0][27] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][27] = 42,
+ [0][1][2][1][RTW89_THAILAND][0][27] = -4,
[0][1][2][1][RTW89_FCC][1][29] = -4,
[0][1][2][1][RTW89_FCC][2][29] = 68,
[0][1][2][1][RTW89_ETSI][1][29] = 42,
@@ -42338,6 +43576,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][29] = 54,
[0][1][2][1][RTW89_MKK][0][29] = 16,
[0][1][2][1][RTW89_IC][1][29] = -4,
+ [0][1][2][1][RTW89_IC][2][29] = 68,
[0][1][2][1][RTW89_KCC][1][29] = 12,
[0][1][2][1][RTW89_KCC][0][29] = 14,
[0][1][2][1][RTW89_ACMA][1][29] = 42,
@@ -42347,6 +43586,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][29] = 6,
[0][1][2][1][RTW89_UK][1][29] = 42,
[0][1][2][1][RTW89_UK][0][29] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][29] = 42,
+ [0][1][2][1][RTW89_THAILAND][0][29] = -4,
[0][1][2][1][RTW89_FCC][1][30] = -4,
[0][1][2][1][RTW89_FCC][2][30] = 68,
[0][1][2][1][RTW89_ETSI][1][30] = 42,
@@ -42354,6 +43595,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][30] = 54,
[0][1][2][1][RTW89_MKK][0][30] = 16,
[0][1][2][1][RTW89_IC][1][30] = -4,
+ [0][1][2][1][RTW89_IC][2][30] = 68,
[0][1][2][1][RTW89_KCC][1][30] = 12,
[0][1][2][1][RTW89_KCC][0][30] = 14,
[0][1][2][1][RTW89_ACMA][1][30] = 42,
@@ -42363,6 +43605,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][30] = 6,
[0][1][2][1][RTW89_UK][1][30] = 42,
[0][1][2][1][RTW89_UK][0][30] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][30] = 42,
+ [0][1][2][1][RTW89_THAILAND][0][30] = -4,
[0][1][2][1][RTW89_FCC][1][32] = -4,
[0][1][2][1][RTW89_FCC][2][32] = 68,
[0][1][2][1][RTW89_ETSI][1][32] = 42,
@@ -42370,6 +43614,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][32] = 54,
[0][1][2][1][RTW89_MKK][0][32] = 16,
[0][1][2][1][RTW89_IC][1][32] = -4,
+ [0][1][2][1][RTW89_IC][2][32] = 68,
[0][1][2][1][RTW89_KCC][1][32] = 12,
[0][1][2][1][RTW89_KCC][0][32] = 14,
[0][1][2][1][RTW89_ACMA][1][32] = 42,
@@ -42379,6 +43624,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][32] = 6,
[0][1][2][1][RTW89_UK][1][32] = 42,
[0][1][2][1][RTW89_UK][0][32] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][32] = 42,
+ [0][1][2][1][RTW89_THAILAND][0][32] = -4,
[0][1][2][1][RTW89_FCC][1][34] = -4,
[0][1][2][1][RTW89_FCC][2][34] = 68,
[0][1][2][1][RTW89_ETSI][1][34] = 42,
@@ -42386,6 +43633,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][34] = 54,
[0][1][2][1][RTW89_MKK][0][34] = 16,
[0][1][2][1][RTW89_IC][1][34] = -4,
+ [0][1][2][1][RTW89_IC][2][34] = 68,
[0][1][2][1][RTW89_KCC][1][34] = 12,
[0][1][2][1][RTW89_KCC][0][34] = 14,
[0][1][2][1][RTW89_ACMA][1][34] = 42,
@@ -42395,6 +43643,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][34] = 6,
[0][1][2][1][RTW89_UK][1][34] = 42,
[0][1][2][1][RTW89_UK][0][34] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][34] = 42,
+ [0][1][2][1][RTW89_THAILAND][0][34] = -4,
[0][1][2][1][RTW89_FCC][1][36] = -4,
[0][1][2][1][RTW89_FCC][2][36] = 68,
[0][1][2][1][RTW89_ETSI][1][36] = 42,
@@ -42402,6 +43652,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][36] = 54,
[0][1][2][1][RTW89_MKK][0][36] = 16,
[0][1][2][1][RTW89_IC][1][36] = -4,
+ [0][1][2][1][RTW89_IC][2][36] = 68,
[0][1][2][1][RTW89_KCC][1][36] = 12,
[0][1][2][1][RTW89_KCC][0][36] = 14,
[0][1][2][1][RTW89_ACMA][1][36] = 42,
@@ -42411,6 +43662,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][36] = 6,
[0][1][2][1][RTW89_UK][1][36] = 42,
[0][1][2][1][RTW89_UK][0][36] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][36] = 42,
+ [0][1][2][1][RTW89_THAILAND][0][36] = -4,
[0][1][2][1][RTW89_FCC][1][38] = -4,
[0][1][2][1][RTW89_FCC][2][38] = 68,
[0][1][2][1][RTW89_ETSI][1][38] = 42,
@@ -42418,6 +43671,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][38] = 54,
[0][1][2][1][RTW89_MKK][0][38] = 16,
[0][1][2][1][RTW89_IC][1][38] = -4,
+ [0][1][2][1][RTW89_IC][2][38] = 68,
[0][1][2][1][RTW89_KCC][1][38] = 12,
[0][1][2][1][RTW89_KCC][0][38] = 14,
[0][1][2][1][RTW89_ACMA][1][38] = 42,
@@ -42427,6 +43681,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][38] = 6,
[0][1][2][1][RTW89_UK][1][38] = 42,
[0][1][2][1][RTW89_UK][0][38] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][38] = 42,
+ [0][1][2][1][RTW89_THAILAND][0][38] = -4,
[0][1][2][1][RTW89_FCC][1][40] = -4,
[0][1][2][1][RTW89_FCC][2][40] = 68,
[0][1][2][1][RTW89_ETSI][1][40] = 42,
@@ -42434,6 +43690,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][40] = 54,
[0][1][2][1][RTW89_MKK][0][40] = 16,
[0][1][2][1][RTW89_IC][1][40] = -4,
+ [0][1][2][1][RTW89_IC][2][40] = 68,
[0][1][2][1][RTW89_KCC][1][40] = 12,
[0][1][2][1][RTW89_KCC][0][40] = 14,
[0][1][2][1][RTW89_ACMA][1][40] = 42,
@@ -42443,6 +43700,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][40] = 6,
[0][1][2][1][RTW89_UK][1][40] = 42,
[0][1][2][1][RTW89_UK][0][40] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][40] = 42,
+ [0][1][2][1][RTW89_THAILAND][0][40] = -4,
[0][1][2][1][RTW89_FCC][1][42] = -4,
[0][1][2][1][RTW89_FCC][2][42] = 68,
[0][1][2][1][RTW89_ETSI][1][42] = 42,
@@ -42450,6 +43709,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][42] = 54,
[0][1][2][1][RTW89_MKK][0][42] = 16,
[0][1][2][1][RTW89_IC][1][42] = -4,
+ [0][1][2][1][RTW89_IC][2][42] = 68,
[0][1][2][1][RTW89_KCC][1][42] = 12,
[0][1][2][1][RTW89_KCC][0][42] = 14,
[0][1][2][1][RTW89_ACMA][1][42] = 42,
@@ -42459,6 +43719,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][42] = 6,
[0][1][2][1][RTW89_UK][1][42] = 42,
[0][1][2][1][RTW89_UK][0][42] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][42] = 42,
+ [0][1][2][1][RTW89_THAILAND][0][42] = -4,
[0][1][2][1][RTW89_FCC][1][44] = -2,
[0][1][2][1][RTW89_FCC][2][44] = 68,
[0][1][2][1][RTW89_ETSI][1][44] = 42,
@@ -42466,6 +43728,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][44] = 34,
[0][1][2][1][RTW89_MKK][0][44] = 16,
[0][1][2][1][RTW89_IC][1][44] = -2,
+ [0][1][2][1][RTW89_IC][2][44] = 68,
[0][1][2][1][RTW89_KCC][1][44] = 12,
[0][1][2][1][RTW89_KCC][0][44] = 12,
[0][1][2][1][RTW89_ACMA][1][44] = 42,
@@ -42475,6 +43738,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][44] = 6,
[0][1][2][1][RTW89_UK][1][44] = 42,
[0][1][2][1][RTW89_UK][0][44] = 6,
+ [0][1][2][1][RTW89_THAILAND][1][44] = 42,
+ [0][1][2][1][RTW89_THAILAND][0][44] = -2,
[0][1][2][1][RTW89_FCC][1][45] = -2,
[0][1][2][1][RTW89_FCC][2][45] = 127,
[0][1][2][1][RTW89_ETSI][1][45] = 127,
@@ -42482,6 +43747,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][45] = 127,
[0][1][2][1][RTW89_MKK][0][45] = 127,
[0][1][2][1][RTW89_IC][1][45] = -2,
+ [0][1][2][1][RTW89_IC][2][45] = 70,
[0][1][2][1][RTW89_KCC][1][45] = 12,
[0][1][2][1][RTW89_KCC][0][45] = 127,
[0][1][2][1][RTW89_ACMA][1][45] = 127,
@@ -42491,6 +43757,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][45] = 127,
[0][1][2][1][RTW89_UK][1][45] = 127,
[0][1][2][1][RTW89_UK][0][45] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][45] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][45] = 127,
[0][1][2][1][RTW89_FCC][1][47] = -2,
[0][1][2][1][RTW89_FCC][2][47] = 127,
[0][1][2][1][RTW89_ETSI][1][47] = 127,
@@ -42498,6 +43766,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][47] = 127,
[0][1][2][1][RTW89_MKK][0][47] = 127,
[0][1][2][1][RTW89_IC][1][47] = -2,
+ [0][1][2][1][RTW89_IC][2][47] = 68,
[0][1][2][1][RTW89_KCC][1][47] = 12,
[0][1][2][1][RTW89_KCC][0][47] = 127,
[0][1][2][1][RTW89_ACMA][1][47] = 127,
@@ -42507,6 +43776,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][47] = 127,
[0][1][2][1][RTW89_UK][1][47] = 127,
[0][1][2][1][RTW89_UK][0][47] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][47] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][47] = 127,
[0][1][2][1][RTW89_FCC][1][49] = -2,
[0][1][2][1][RTW89_FCC][2][49] = 127,
[0][1][2][1][RTW89_ETSI][1][49] = 127,
@@ -42514,6 +43785,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][49] = 127,
[0][1][2][1][RTW89_MKK][0][49] = 127,
[0][1][2][1][RTW89_IC][1][49] = -2,
+ [0][1][2][1][RTW89_IC][2][49] = 68,
[0][1][2][1][RTW89_KCC][1][49] = 12,
[0][1][2][1][RTW89_KCC][0][49] = 127,
[0][1][2][1][RTW89_ACMA][1][49] = 127,
@@ -42523,6 +43795,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][49] = 127,
[0][1][2][1][RTW89_UK][1][49] = 127,
[0][1][2][1][RTW89_UK][0][49] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][49] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][49] = 127,
[0][1][2][1][RTW89_FCC][1][51] = -2,
[0][1][2][1][RTW89_FCC][2][51] = 127,
[0][1][2][1][RTW89_ETSI][1][51] = 127,
@@ -42530,6 +43804,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][51] = 127,
[0][1][2][1][RTW89_MKK][0][51] = 127,
[0][1][2][1][RTW89_IC][1][51] = -2,
+ [0][1][2][1][RTW89_IC][2][51] = 68,
[0][1][2][1][RTW89_KCC][1][51] = 12,
[0][1][2][1][RTW89_KCC][0][51] = 127,
[0][1][2][1][RTW89_ACMA][1][51] = 127,
@@ -42539,6 +43814,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][51] = 127,
[0][1][2][1][RTW89_UK][1][51] = 127,
[0][1][2][1][RTW89_UK][0][51] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][51] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][51] = 127,
[0][1][2][1][RTW89_FCC][1][53] = -2,
[0][1][2][1][RTW89_FCC][2][53] = 127,
[0][1][2][1][RTW89_ETSI][1][53] = 127,
@@ -42546,6 +43823,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][53] = 127,
[0][1][2][1][RTW89_MKK][0][53] = 127,
[0][1][2][1][RTW89_IC][1][53] = -2,
+ [0][1][2][1][RTW89_IC][2][53] = 68,
[0][1][2][1][RTW89_KCC][1][53] = 12,
[0][1][2][1][RTW89_KCC][0][53] = 127,
[0][1][2][1][RTW89_ACMA][1][53] = 127,
@@ -42555,6 +43833,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][53] = 127,
[0][1][2][1][RTW89_UK][1][53] = 127,
[0][1][2][1][RTW89_UK][0][53] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][53] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][53] = 127,
[0][1][2][1][RTW89_FCC][1][55] = -2,
[0][1][2][1][RTW89_FCC][2][55] = 68,
[0][1][2][1][RTW89_ETSI][1][55] = 127,
@@ -42562,6 +43842,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][55] = 127,
[0][1][2][1][RTW89_MKK][0][55] = 127,
[0][1][2][1][RTW89_IC][1][55] = -2,
+ [0][1][2][1][RTW89_IC][2][55] = 68,
[0][1][2][1][RTW89_KCC][1][55] = 12,
[0][1][2][1][RTW89_KCC][0][55] = 127,
[0][1][2][1][RTW89_ACMA][1][55] = 127,
@@ -42571,6 +43852,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][55] = 127,
[0][1][2][1][RTW89_UK][1][55] = 127,
[0][1][2][1][RTW89_UK][0][55] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][55] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][55] = 127,
[0][1][2][1][RTW89_FCC][1][57] = -2,
[0][1][2][1][RTW89_FCC][2][57] = 68,
[0][1][2][1][RTW89_ETSI][1][57] = 127,
@@ -42578,6 +43861,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][57] = 127,
[0][1][2][1][RTW89_MKK][0][57] = 127,
[0][1][2][1][RTW89_IC][1][57] = -2,
+ [0][1][2][1][RTW89_IC][2][57] = 68,
[0][1][2][1][RTW89_KCC][1][57] = 12,
[0][1][2][1][RTW89_KCC][0][57] = 127,
[0][1][2][1][RTW89_ACMA][1][57] = 127,
@@ -42587,6 +43871,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][57] = 127,
[0][1][2][1][RTW89_UK][1][57] = 127,
[0][1][2][1][RTW89_UK][0][57] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][57] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][57] = 127,
[0][1][2][1][RTW89_FCC][1][59] = -2,
[0][1][2][1][RTW89_FCC][2][59] = 68,
[0][1][2][1][RTW89_ETSI][1][59] = 127,
@@ -42594,6 +43880,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][59] = 127,
[0][1][2][1][RTW89_MKK][0][59] = 127,
[0][1][2][1][RTW89_IC][1][59] = -2,
+ [0][1][2][1][RTW89_IC][2][59] = 68,
[0][1][2][1][RTW89_KCC][1][59] = 12,
[0][1][2][1][RTW89_KCC][0][59] = 127,
[0][1][2][1][RTW89_ACMA][1][59] = 127,
@@ -42603,6 +43890,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][59] = 127,
[0][1][2][1][RTW89_UK][1][59] = 127,
[0][1][2][1][RTW89_UK][0][59] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][59] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][59] = 127,
[0][1][2][1][RTW89_FCC][1][60] = -2,
[0][1][2][1][RTW89_FCC][2][60] = 68,
[0][1][2][1][RTW89_ETSI][1][60] = 127,
@@ -42610,6 +43899,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][60] = 127,
[0][1][2][1][RTW89_MKK][0][60] = 127,
[0][1][2][1][RTW89_IC][1][60] = -2,
+ [0][1][2][1][RTW89_IC][2][60] = 68,
[0][1][2][1][RTW89_KCC][1][60] = 12,
[0][1][2][1][RTW89_KCC][0][60] = 127,
[0][1][2][1][RTW89_ACMA][1][60] = 127,
@@ -42619,6 +43909,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][60] = 127,
[0][1][2][1][RTW89_UK][1][60] = 127,
[0][1][2][1][RTW89_UK][0][60] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][60] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][60] = 127,
[0][1][2][1][RTW89_FCC][1][62] = -2,
[0][1][2][1][RTW89_FCC][2][62] = 68,
[0][1][2][1][RTW89_ETSI][1][62] = 127,
@@ -42626,6 +43918,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][62] = 127,
[0][1][2][1][RTW89_MKK][0][62] = 127,
[0][1][2][1][RTW89_IC][1][62] = -2,
+ [0][1][2][1][RTW89_IC][2][62] = 68,
[0][1][2][1][RTW89_KCC][1][62] = 12,
[0][1][2][1][RTW89_KCC][0][62] = 127,
[0][1][2][1][RTW89_ACMA][1][62] = 127,
@@ -42635,6 +43928,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][62] = 127,
[0][1][2][1][RTW89_UK][1][62] = 127,
[0][1][2][1][RTW89_UK][0][62] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][62] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][62] = 127,
[0][1][2][1][RTW89_FCC][1][64] = -2,
[0][1][2][1][RTW89_FCC][2][64] = 68,
[0][1][2][1][RTW89_ETSI][1][64] = 127,
@@ -42642,6 +43937,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][64] = 127,
[0][1][2][1][RTW89_MKK][0][64] = 127,
[0][1][2][1][RTW89_IC][1][64] = -2,
+ [0][1][2][1][RTW89_IC][2][64] = 68,
[0][1][2][1][RTW89_KCC][1][64] = 12,
[0][1][2][1][RTW89_KCC][0][64] = 127,
[0][1][2][1][RTW89_ACMA][1][64] = 127,
@@ -42651,6 +43947,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][64] = 127,
[0][1][2][1][RTW89_UK][1][64] = 127,
[0][1][2][1][RTW89_UK][0][64] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][64] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][64] = 127,
[0][1][2][1][RTW89_FCC][1][66] = -2,
[0][1][2][1][RTW89_FCC][2][66] = 68,
[0][1][2][1][RTW89_ETSI][1][66] = 127,
@@ -42658,6 +43956,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][66] = 127,
[0][1][2][1][RTW89_MKK][0][66] = 127,
[0][1][2][1][RTW89_IC][1][66] = -2,
+ [0][1][2][1][RTW89_IC][2][66] = 68,
[0][1][2][1][RTW89_KCC][1][66] = 12,
[0][1][2][1][RTW89_KCC][0][66] = 127,
[0][1][2][1][RTW89_ACMA][1][66] = 127,
@@ -42667,6 +43966,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][66] = 127,
[0][1][2][1][RTW89_UK][1][66] = 127,
[0][1][2][1][RTW89_UK][0][66] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][66] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][66] = 127,
[0][1][2][1][RTW89_FCC][1][68] = -2,
[0][1][2][1][RTW89_FCC][2][68] = 68,
[0][1][2][1][RTW89_ETSI][1][68] = 127,
@@ -42674,6 +43975,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][68] = 127,
[0][1][2][1][RTW89_MKK][0][68] = 127,
[0][1][2][1][RTW89_IC][1][68] = -2,
+ [0][1][2][1][RTW89_IC][2][68] = 68,
[0][1][2][1][RTW89_KCC][1][68] = 12,
[0][1][2][1][RTW89_KCC][0][68] = 127,
[0][1][2][1][RTW89_ACMA][1][68] = 127,
@@ -42683,6 +43985,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][68] = 127,
[0][1][2][1][RTW89_UK][1][68] = 127,
[0][1][2][1][RTW89_UK][0][68] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][68] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][68] = 127,
[0][1][2][1][RTW89_FCC][1][70] = -2,
[0][1][2][1][RTW89_FCC][2][70] = 68,
[0][1][2][1][RTW89_ETSI][1][70] = 127,
@@ -42690,6 +43994,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][70] = 127,
[0][1][2][1][RTW89_MKK][0][70] = 127,
[0][1][2][1][RTW89_IC][1][70] = -2,
+ [0][1][2][1][RTW89_IC][2][70] = 68,
[0][1][2][1][RTW89_KCC][1][70] = 12,
[0][1][2][1][RTW89_KCC][0][70] = 127,
[0][1][2][1][RTW89_ACMA][1][70] = 127,
@@ -42699,6 +44004,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][70] = 127,
[0][1][2][1][RTW89_UK][1][70] = 127,
[0][1][2][1][RTW89_UK][0][70] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][70] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][70] = 127,
[0][1][2][1][RTW89_FCC][1][72] = -2,
[0][1][2][1][RTW89_FCC][2][72] = 68,
[0][1][2][1][RTW89_ETSI][1][72] = 127,
@@ -42706,6 +44013,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][72] = 127,
[0][1][2][1][RTW89_MKK][0][72] = 127,
[0][1][2][1][RTW89_IC][1][72] = -2,
+ [0][1][2][1][RTW89_IC][2][72] = 68,
[0][1][2][1][RTW89_KCC][1][72] = 12,
[0][1][2][1][RTW89_KCC][0][72] = 127,
[0][1][2][1][RTW89_ACMA][1][72] = 127,
@@ -42715,6 +44023,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][72] = 127,
[0][1][2][1][RTW89_UK][1][72] = 127,
[0][1][2][1][RTW89_UK][0][72] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][72] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][72] = 127,
[0][1][2][1][RTW89_FCC][1][74] = -2,
[0][1][2][1][RTW89_FCC][2][74] = 68,
[0][1][2][1][RTW89_ETSI][1][74] = 127,
@@ -42722,6 +44032,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][74] = 127,
[0][1][2][1][RTW89_MKK][0][74] = 127,
[0][1][2][1][RTW89_IC][1][74] = -2,
+ [0][1][2][1][RTW89_IC][2][74] = 68,
[0][1][2][1][RTW89_KCC][1][74] = 12,
[0][1][2][1][RTW89_KCC][0][74] = 127,
[0][1][2][1][RTW89_ACMA][1][74] = 127,
@@ -42731,6 +44042,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][74] = 127,
[0][1][2][1][RTW89_UK][1][74] = 127,
[0][1][2][1][RTW89_UK][0][74] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][74] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][74] = 127,
[0][1][2][1][RTW89_FCC][1][75] = -2,
[0][1][2][1][RTW89_FCC][2][75] = 68,
[0][1][2][1][RTW89_ETSI][1][75] = 127,
@@ -42738,6 +44051,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][75] = 127,
[0][1][2][1][RTW89_MKK][0][75] = 127,
[0][1][2][1][RTW89_IC][1][75] = -2,
+ [0][1][2][1][RTW89_IC][2][75] = 68,
[0][1][2][1][RTW89_KCC][1][75] = 12,
[0][1][2][1][RTW89_KCC][0][75] = 127,
[0][1][2][1][RTW89_ACMA][1][75] = 127,
@@ -42747,6 +44061,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][75] = 127,
[0][1][2][1][RTW89_UK][1][75] = 127,
[0][1][2][1][RTW89_UK][0][75] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][75] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][75] = 127,
[0][1][2][1][RTW89_FCC][1][77] = -2,
[0][1][2][1][RTW89_FCC][2][77] = 68,
[0][1][2][1][RTW89_ETSI][1][77] = 127,
@@ -42754,6 +44070,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][77] = 127,
[0][1][2][1][RTW89_MKK][0][77] = 127,
[0][1][2][1][RTW89_IC][1][77] = -2,
+ [0][1][2][1][RTW89_IC][2][77] = 68,
[0][1][2][1][RTW89_KCC][1][77] = 12,
[0][1][2][1][RTW89_KCC][0][77] = 127,
[0][1][2][1][RTW89_ACMA][1][77] = 127,
@@ -42763,6 +44080,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][77] = 127,
[0][1][2][1][RTW89_UK][1][77] = 127,
[0][1][2][1][RTW89_UK][0][77] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][77] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][77] = 127,
[0][1][2][1][RTW89_FCC][1][79] = -2,
[0][1][2][1][RTW89_FCC][2][79] = 68,
[0][1][2][1][RTW89_ETSI][1][79] = 127,
@@ -42770,6 +44089,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][79] = 127,
[0][1][2][1][RTW89_MKK][0][79] = 127,
[0][1][2][1][RTW89_IC][1][79] = -2,
+ [0][1][2][1][RTW89_IC][2][79] = 68,
[0][1][2][1][RTW89_KCC][1][79] = 12,
[0][1][2][1][RTW89_KCC][0][79] = 127,
[0][1][2][1][RTW89_ACMA][1][79] = 127,
@@ -42779,6 +44099,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][79] = 127,
[0][1][2][1][RTW89_UK][1][79] = 127,
[0][1][2][1][RTW89_UK][0][79] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][79] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][79] = 127,
[0][1][2][1][RTW89_FCC][1][81] = -2,
[0][1][2][1][RTW89_FCC][2][81] = 68,
[0][1][2][1][RTW89_ETSI][1][81] = 127,
@@ -42786,6 +44108,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][81] = 127,
[0][1][2][1][RTW89_MKK][0][81] = 127,
[0][1][2][1][RTW89_IC][1][81] = -2,
+ [0][1][2][1][RTW89_IC][2][81] = 68,
[0][1][2][1][RTW89_KCC][1][81] = 12,
[0][1][2][1][RTW89_KCC][0][81] = 127,
[0][1][2][1][RTW89_ACMA][1][81] = 127,
@@ -42795,6 +44118,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][81] = 127,
[0][1][2][1][RTW89_UK][1][81] = 127,
[0][1][2][1][RTW89_UK][0][81] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][81] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][81] = 127,
[0][1][2][1][RTW89_FCC][1][83] = -2,
[0][1][2][1][RTW89_FCC][2][83] = 68,
[0][1][2][1][RTW89_ETSI][1][83] = 127,
@@ -42802,6 +44127,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][83] = 127,
[0][1][2][1][RTW89_MKK][0][83] = 127,
[0][1][2][1][RTW89_IC][1][83] = -2,
+ [0][1][2][1][RTW89_IC][2][83] = 68,
[0][1][2][1][RTW89_KCC][1][83] = 20,
[0][1][2][1][RTW89_KCC][0][83] = 127,
[0][1][2][1][RTW89_ACMA][1][83] = 127,
@@ -42811,6 +44137,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][83] = 127,
[0][1][2][1][RTW89_UK][1][83] = 127,
[0][1][2][1][RTW89_UK][0][83] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][83] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][83] = 127,
[0][1][2][1][RTW89_FCC][1][85] = -2,
[0][1][2][1][RTW89_FCC][2][85] = 68,
[0][1][2][1][RTW89_ETSI][1][85] = 127,
@@ -42818,6 +44146,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][85] = 127,
[0][1][2][1][RTW89_MKK][0][85] = 127,
[0][1][2][1][RTW89_IC][1][85] = -2,
+ [0][1][2][1][RTW89_IC][2][85] = 68,
[0][1][2][1][RTW89_KCC][1][85] = 20,
[0][1][2][1][RTW89_KCC][0][85] = 127,
[0][1][2][1][RTW89_ACMA][1][85] = 127,
@@ -42827,6 +44156,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][85] = 127,
[0][1][2][1][RTW89_UK][1][85] = 127,
[0][1][2][1][RTW89_UK][0][85] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][85] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][85] = 127,
[0][1][2][1][RTW89_FCC][1][87] = -2,
[0][1][2][1][RTW89_FCC][2][87] = 127,
[0][1][2][1][RTW89_ETSI][1][87] = 127,
@@ -42834,6 +44165,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][87] = 127,
[0][1][2][1][RTW89_MKK][0][87] = 127,
[0][1][2][1][RTW89_IC][1][87] = -2,
+ [0][1][2][1][RTW89_IC][2][87] = 127,
[0][1][2][1][RTW89_KCC][1][87] = 20,
[0][1][2][1][RTW89_KCC][0][87] = 127,
[0][1][2][1][RTW89_ACMA][1][87] = 127,
@@ -42843,6 +44175,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][87] = 127,
[0][1][2][1][RTW89_UK][1][87] = 127,
[0][1][2][1][RTW89_UK][0][87] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][87] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][87] = 127,
[0][1][2][1][RTW89_FCC][1][89] = -2,
[0][1][2][1][RTW89_FCC][2][89] = 127,
[0][1][2][1][RTW89_ETSI][1][89] = 127,
@@ -42850,6 +44184,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][89] = 127,
[0][1][2][1][RTW89_MKK][0][89] = 127,
[0][1][2][1][RTW89_IC][1][89] = -2,
+ [0][1][2][1][RTW89_IC][2][89] = 127,
[0][1][2][1][RTW89_KCC][1][89] = 20,
[0][1][2][1][RTW89_KCC][0][89] = 127,
[0][1][2][1][RTW89_ACMA][1][89] = 127,
@@ -42859,6 +44194,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][89] = 127,
[0][1][2][1][RTW89_UK][1][89] = 127,
[0][1][2][1][RTW89_UK][0][89] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][89] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][89] = 127,
[0][1][2][1][RTW89_FCC][1][90] = -2,
[0][1][2][1][RTW89_FCC][2][90] = 127,
[0][1][2][1][RTW89_ETSI][1][90] = 127,
@@ -42866,6 +44203,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][90] = 127,
[0][1][2][1][RTW89_MKK][0][90] = 127,
[0][1][2][1][RTW89_IC][1][90] = -2,
+ [0][1][2][1][RTW89_IC][2][90] = 127,
[0][1][2][1][RTW89_KCC][1][90] = 20,
[0][1][2][1][RTW89_KCC][0][90] = 127,
[0][1][2][1][RTW89_ACMA][1][90] = 127,
@@ -42875,6 +44213,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][90] = 127,
[0][1][2][1][RTW89_UK][1][90] = 127,
[0][1][2][1][RTW89_UK][0][90] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][90] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][90] = 127,
[0][1][2][1][RTW89_FCC][1][92] = -2,
[0][1][2][1][RTW89_FCC][2][92] = 127,
[0][1][2][1][RTW89_ETSI][1][92] = 127,
@@ -42882,6 +44222,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][92] = 127,
[0][1][2][1][RTW89_MKK][0][92] = 127,
[0][1][2][1][RTW89_IC][1][92] = -2,
+ [0][1][2][1][RTW89_IC][2][92] = 127,
[0][1][2][1][RTW89_KCC][1][92] = 20,
[0][1][2][1][RTW89_KCC][0][92] = 127,
[0][1][2][1][RTW89_ACMA][1][92] = 127,
@@ -42891,6 +44232,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][92] = 127,
[0][1][2][1][RTW89_UK][1][92] = 127,
[0][1][2][1][RTW89_UK][0][92] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][92] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][92] = 127,
[0][1][2][1][RTW89_FCC][1][94] = -2,
[0][1][2][1][RTW89_FCC][2][94] = 127,
[0][1][2][1][RTW89_ETSI][1][94] = 127,
@@ -42898,6 +44241,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][94] = 127,
[0][1][2][1][RTW89_MKK][0][94] = 127,
[0][1][2][1][RTW89_IC][1][94] = -2,
+ [0][1][2][1][RTW89_IC][2][94] = 127,
[0][1][2][1][RTW89_KCC][1][94] = 20,
[0][1][2][1][RTW89_KCC][0][94] = 127,
[0][1][2][1][RTW89_ACMA][1][94] = 127,
@@ -42907,6 +44251,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][94] = 127,
[0][1][2][1][RTW89_UK][1][94] = 127,
[0][1][2][1][RTW89_UK][0][94] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][94] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][94] = 127,
[0][1][2][1][RTW89_FCC][1][96] = -2,
[0][1][2][1][RTW89_FCC][2][96] = 127,
[0][1][2][1][RTW89_ETSI][1][96] = 127,
@@ -42914,6 +44260,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][96] = 127,
[0][1][2][1][RTW89_MKK][0][96] = 127,
[0][1][2][1][RTW89_IC][1][96] = -2,
+ [0][1][2][1][RTW89_IC][2][96] = 127,
[0][1][2][1][RTW89_KCC][1][96] = 20,
[0][1][2][1][RTW89_KCC][0][96] = 127,
[0][1][2][1][RTW89_ACMA][1][96] = 127,
@@ -42923,6 +44270,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][96] = 127,
[0][1][2][1][RTW89_UK][1][96] = 127,
[0][1][2][1][RTW89_UK][0][96] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][96] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][96] = 127,
[0][1][2][1][RTW89_FCC][1][98] = -2,
[0][1][2][1][RTW89_FCC][2][98] = 127,
[0][1][2][1][RTW89_ETSI][1][98] = 127,
@@ -42930,6 +44279,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][98] = 127,
[0][1][2][1][RTW89_MKK][0][98] = 127,
[0][1][2][1][RTW89_IC][1][98] = -2,
+ [0][1][2][1][RTW89_IC][2][98] = 127,
[0][1][2][1][RTW89_KCC][1][98] = 20,
[0][1][2][1][RTW89_KCC][0][98] = 127,
[0][1][2][1][RTW89_ACMA][1][98] = 127,
@@ -42939,6 +44289,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][98] = 127,
[0][1][2][1][RTW89_UK][1][98] = 127,
[0][1][2][1][RTW89_UK][0][98] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][98] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][98] = 127,
[0][1][2][1][RTW89_FCC][1][100] = -2,
[0][1][2][1][RTW89_FCC][2][100] = 127,
[0][1][2][1][RTW89_ETSI][1][100] = 127,
@@ -42946,6 +44298,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][100] = 127,
[0][1][2][1][RTW89_MKK][0][100] = 127,
[0][1][2][1][RTW89_IC][1][100] = -2,
+ [0][1][2][1][RTW89_IC][2][100] = 127,
[0][1][2][1][RTW89_KCC][1][100] = 20,
[0][1][2][1][RTW89_KCC][0][100] = 127,
[0][1][2][1][RTW89_ACMA][1][100] = 127,
@@ -42955,6 +44308,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][100] = 127,
[0][1][2][1][RTW89_UK][1][100] = 127,
[0][1][2][1][RTW89_UK][0][100] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][100] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][100] = 127,
[0][1][2][1][RTW89_FCC][1][102] = -2,
[0][1][2][1][RTW89_FCC][2][102] = 127,
[0][1][2][1][RTW89_ETSI][1][102] = 127,
@@ -42962,6 +44317,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][102] = 127,
[0][1][2][1][RTW89_MKK][0][102] = 127,
[0][1][2][1][RTW89_IC][1][102] = -2,
+ [0][1][2][1][RTW89_IC][2][102] = 127,
[0][1][2][1][RTW89_KCC][1][102] = 20,
[0][1][2][1][RTW89_KCC][0][102] = 127,
[0][1][2][1][RTW89_ACMA][1][102] = 127,
@@ -42971,6 +44327,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][102] = 127,
[0][1][2][1][RTW89_UK][1][102] = 127,
[0][1][2][1][RTW89_UK][0][102] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][102] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][102] = 127,
[0][1][2][1][RTW89_FCC][1][104] = -2,
[0][1][2][1][RTW89_FCC][2][104] = 127,
[0][1][2][1][RTW89_ETSI][1][104] = 127,
@@ -42978,6 +44336,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][104] = 127,
[0][1][2][1][RTW89_MKK][0][104] = 127,
[0][1][2][1][RTW89_IC][1][104] = -2,
+ [0][1][2][1][RTW89_IC][2][104] = 127,
[0][1][2][1][RTW89_KCC][1][104] = 20,
[0][1][2][1][RTW89_KCC][0][104] = 127,
[0][1][2][1][RTW89_ACMA][1][104] = 127,
@@ -42987,6 +44346,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][104] = 127,
[0][1][2][1][RTW89_UK][1][104] = 127,
[0][1][2][1][RTW89_UK][0][104] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][104] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][104] = 127,
[0][1][2][1][RTW89_FCC][1][105] = -2,
[0][1][2][1][RTW89_FCC][2][105] = 127,
[0][1][2][1][RTW89_ETSI][1][105] = 127,
@@ -42994,6 +44355,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][105] = 127,
[0][1][2][1][RTW89_MKK][0][105] = 127,
[0][1][2][1][RTW89_IC][1][105] = -2,
+ [0][1][2][1][RTW89_IC][2][105] = 127,
[0][1][2][1][RTW89_KCC][1][105] = 20,
[0][1][2][1][RTW89_KCC][0][105] = 127,
[0][1][2][1][RTW89_ACMA][1][105] = 127,
@@ -43003,6 +44365,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][105] = 127,
[0][1][2][1][RTW89_UK][1][105] = 127,
[0][1][2][1][RTW89_UK][0][105] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][105] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][105] = 127,
[0][1][2][1][RTW89_FCC][1][107] = 1,
[0][1][2][1][RTW89_FCC][2][107] = 127,
[0][1][2][1][RTW89_ETSI][1][107] = 127,
@@ -43010,6 +44374,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][107] = 127,
[0][1][2][1][RTW89_MKK][0][107] = 127,
[0][1][2][1][RTW89_IC][1][107] = 1,
+ [0][1][2][1][RTW89_IC][2][107] = 127,
[0][1][2][1][RTW89_KCC][1][107] = 20,
[0][1][2][1][RTW89_KCC][0][107] = 127,
[0][1][2][1][RTW89_ACMA][1][107] = 127,
@@ -43019,6 +44384,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][107] = 127,
[0][1][2][1][RTW89_UK][1][107] = 127,
[0][1][2][1][RTW89_UK][0][107] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][107] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][107] = 127,
[0][1][2][1][RTW89_FCC][1][109] = 1,
[0][1][2][1][RTW89_FCC][2][109] = 127,
[0][1][2][1][RTW89_ETSI][1][109] = 127,
@@ -43026,6 +44393,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][109] = 127,
[0][1][2][1][RTW89_MKK][0][109] = 127,
[0][1][2][1][RTW89_IC][1][109] = 1,
+ [0][1][2][1][RTW89_IC][2][109] = 127,
[0][1][2][1][RTW89_KCC][1][109] = 20,
[0][1][2][1][RTW89_KCC][0][109] = 127,
[0][1][2][1][RTW89_ACMA][1][109] = 127,
@@ -43035,6 +44403,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][109] = 127,
[0][1][2][1][RTW89_UK][1][109] = 127,
[0][1][2][1][RTW89_UK][0][109] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][109] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][109] = 127,
[0][1][2][1][RTW89_FCC][1][111] = 127,
[0][1][2][1][RTW89_FCC][2][111] = 127,
[0][1][2][1][RTW89_ETSI][1][111] = 127,
@@ -43042,6 +44412,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][111] = 127,
[0][1][2][1][RTW89_MKK][0][111] = 127,
[0][1][2][1][RTW89_IC][1][111] = 127,
+ [0][1][2][1][RTW89_IC][2][111] = 127,
[0][1][2][1][RTW89_KCC][1][111] = 127,
[0][1][2][1][RTW89_KCC][0][111] = 127,
[0][1][2][1][RTW89_ACMA][1][111] = 127,
@@ -43051,6 +44422,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][111] = 127,
[0][1][2][1][RTW89_UK][1][111] = 127,
[0][1][2][1][RTW89_UK][0][111] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][111] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][111] = 127,
[0][1][2][1][RTW89_FCC][1][113] = 127,
[0][1][2][1][RTW89_FCC][2][113] = 127,
[0][1][2][1][RTW89_ETSI][1][113] = 127,
@@ -43058,6 +44431,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][113] = 127,
[0][1][2][1][RTW89_MKK][0][113] = 127,
[0][1][2][1][RTW89_IC][1][113] = 127,
+ [0][1][2][1][RTW89_IC][2][113] = 127,
[0][1][2][1][RTW89_KCC][1][113] = 127,
[0][1][2][1][RTW89_KCC][0][113] = 127,
[0][1][2][1][RTW89_ACMA][1][113] = 127,
@@ -43067,6 +44441,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][113] = 127,
[0][1][2][1][RTW89_UK][1][113] = 127,
[0][1][2][1][RTW89_UK][0][113] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][113] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][113] = 127,
[0][1][2][1][RTW89_FCC][1][115] = 127,
[0][1][2][1][RTW89_FCC][2][115] = 127,
[0][1][2][1][RTW89_ETSI][1][115] = 127,
@@ -43074,6 +44450,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][115] = 127,
[0][1][2][1][RTW89_MKK][0][115] = 127,
[0][1][2][1][RTW89_IC][1][115] = 127,
+ [0][1][2][1][RTW89_IC][2][115] = 127,
[0][1][2][1][RTW89_KCC][1][115] = 127,
[0][1][2][1][RTW89_KCC][0][115] = 127,
[0][1][2][1][RTW89_ACMA][1][115] = 127,
@@ -43083,6 +44460,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][115] = 127,
[0][1][2][1][RTW89_UK][1][115] = 127,
[0][1][2][1][RTW89_UK][0][115] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][115] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][115] = 127,
[0][1][2][1][RTW89_FCC][1][117] = 127,
[0][1][2][1][RTW89_FCC][2][117] = 127,
[0][1][2][1][RTW89_ETSI][1][117] = 127,
@@ -43090,6 +44469,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][117] = 127,
[0][1][2][1][RTW89_MKK][0][117] = 127,
[0][1][2][1][RTW89_IC][1][117] = 127,
+ [0][1][2][1][RTW89_IC][2][117] = 127,
[0][1][2][1][RTW89_KCC][1][117] = 127,
[0][1][2][1][RTW89_KCC][0][117] = 127,
[0][1][2][1][RTW89_ACMA][1][117] = 127,
@@ -43099,6 +44479,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][117] = 127,
[0][1][2][1][RTW89_UK][1][117] = 127,
[0][1][2][1][RTW89_UK][0][117] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][117] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][117] = 127,
[0][1][2][1][RTW89_FCC][1][119] = 127,
[0][1][2][1][RTW89_FCC][2][119] = 127,
[0][1][2][1][RTW89_ETSI][1][119] = 127,
@@ -43106,6 +44488,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_MKK][1][119] = 127,
[0][1][2][1][RTW89_MKK][0][119] = 127,
[0][1][2][1][RTW89_IC][1][119] = 127,
+ [0][1][2][1][RTW89_IC][2][119] = 127,
[0][1][2][1][RTW89_KCC][1][119] = 127,
[0][1][2][1][RTW89_KCC][0][119] = 127,
[0][1][2][1][RTW89_ACMA][1][119] = 127,
@@ -43115,6 +44498,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[0][1][2][1][RTW89_QATAR][0][119] = 127,
[0][1][2][1][RTW89_UK][1][119] = 127,
[0][1][2][1][RTW89_UK][0][119] = 127,
+ [0][1][2][1][RTW89_THAILAND][1][119] = 127,
+ [0][1][2][1][RTW89_THAILAND][0][119] = 127,
[1][0][2][0][RTW89_FCC][1][1] = 34,
[1][0][2][0][RTW89_FCC][2][1] = 70,
[1][0][2][0][RTW89_ETSI][1][1] = 66,
@@ -43122,6 +44507,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][1] = 62,
[1][0][2][0][RTW89_MKK][0][1] = 26,
[1][0][2][0][RTW89_IC][1][1] = 34,
+ [1][0][2][0][RTW89_IC][2][1] = 70,
[1][0][2][0][RTW89_KCC][1][1] = 40,
[1][0][2][0][RTW89_KCC][0][1] = 24,
[1][0][2][0][RTW89_ACMA][1][1] = 66,
@@ -43131,6 +44517,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][1] = 30,
[1][0][2][0][RTW89_UK][1][1] = 66,
[1][0][2][0][RTW89_UK][0][1] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][1] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][1] = 30,
[1][0][2][0][RTW89_FCC][1][5] = 34,
[1][0][2][0][RTW89_FCC][2][5] = 70,
[1][0][2][0][RTW89_ETSI][1][5] = 66,
@@ -43138,6 +44526,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][5] = 62,
[1][0][2][0][RTW89_MKK][0][5] = 26,
[1][0][2][0][RTW89_IC][1][5] = 34,
+ [1][0][2][0][RTW89_IC][2][5] = 70,
[1][0][2][0][RTW89_KCC][1][5] = 40,
[1][0][2][0][RTW89_KCC][0][5] = 24,
[1][0][2][0][RTW89_ACMA][1][5] = 66,
@@ -43147,6 +44536,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][5] = 30,
[1][0][2][0][RTW89_UK][1][5] = 66,
[1][0][2][0][RTW89_UK][0][5] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][5] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][5] = 30,
[1][0][2][0][RTW89_FCC][1][9] = 34,
[1][0][2][0][RTW89_FCC][2][9] = 70,
[1][0][2][0][RTW89_ETSI][1][9] = 66,
@@ -43154,6 +44545,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][9] = 62,
[1][0][2][0][RTW89_MKK][0][9] = 26,
[1][0][2][0][RTW89_IC][1][9] = 34,
+ [1][0][2][0][RTW89_IC][2][9] = 70,
[1][0][2][0][RTW89_KCC][1][9] = 40,
[1][0][2][0][RTW89_KCC][0][9] = 24,
[1][0][2][0][RTW89_ACMA][1][9] = 66,
@@ -43163,6 +44555,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][9] = 30,
[1][0][2][0][RTW89_UK][1][9] = 66,
[1][0][2][0][RTW89_UK][0][9] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][9] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][9] = 30,
[1][0][2][0][RTW89_FCC][1][13] = 34,
[1][0][2][0][RTW89_FCC][2][13] = 70,
[1][0][2][0][RTW89_ETSI][1][13] = 66,
@@ -43170,6 +44564,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][13] = 62,
[1][0][2][0][RTW89_MKK][0][13] = 26,
[1][0][2][0][RTW89_IC][1][13] = 34,
+ [1][0][2][0][RTW89_IC][2][13] = 70,
[1][0][2][0][RTW89_KCC][1][13] = 40,
[1][0][2][0][RTW89_KCC][0][13] = 24,
[1][0][2][0][RTW89_ACMA][1][13] = 66,
@@ -43179,6 +44574,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][13] = 30,
[1][0][2][0][RTW89_UK][1][13] = 66,
[1][0][2][0][RTW89_UK][0][13] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][13] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][13] = 30,
[1][0][2][0][RTW89_FCC][1][16] = 34,
[1][0][2][0][RTW89_FCC][2][16] = 70,
[1][0][2][0][RTW89_ETSI][1][16] = 66,
@@ -43186,6 +44583,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][16] = 62,
[1][0][2][0][RTW89_MKK][0][16] = 26,
[1][0][2][0][RTW89_IC][1][16] = 34,
+ [1][0][2][0][RTW89_IC][2][16] = 70,
[1][0][2][0][RTW89_KCC][1][16] = 40,
[1][0][2][0][RTW89_KCC][0][16] = 24,
[1][0][2][0][RTW89_ACMA][1][16] = 66,
@@ -43195,6 +44593,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][16] = 30,
[1][0][2][0][RTW89_UK][1][16] = 66,
[1][0][2][0][RTW89_UK][0][16] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][16] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][16] = 30,
[1][0][2][0][RTW89_FCC][1][20] = 34,
[1][0][2][0][RTW89_FCC][2][20] = 70,
[1][0][2][0][RTW89_ETSI][1][20] = 66,
@@ -43202,6 +44602,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][20] = 62,
[1][0][2][0][RTW89_MKK][0][20] = 26,
[1][0][2][0][RTW89_IC][1][20] = 34,
+ [1][0][2][0][RTW89_IC][2][20] = 70,
[1][0][2][0][RTW89_KCC][1][20] = 40,
[1][0][2][0][RTW89_KCC][0][20] = 24,
[1][0][2][0][RTW89_ACMA][1][20] = 66,
@@ -43211,6 +44612,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][20] = 30,
[1][0][2][0][RTW89_UK][1][20] = 66,
[1][0][2][0][RTW89_UK][0][20] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][20] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][20] = 30,
[1][0][2][0][RTW89_FCC][1][24] = 36,
[1][0][2][0][RTW89_FCC][2][24] = 70,
[1][0][2][0][RTW89_ETSI][1][24] = 66,
@@ -43218,6 +44621,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][24] = 64,
[1][0][2][0][RTW89_MKK][0][24] = 28,
[1][0][2][0][RTW89_IC][1][24] = 36,
+ [1][0][2][0][RTW89_IC][2][24] = 70,
[1][0][2][0][RTW89_KCC][1][24] = 40,
[1][0][2][0][RTW89_KCC][0][24] = 26,
[1][0][2][0][RTW89_ACMA][1][24] = 66,
@@ -43227,6 +44631,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][24] = 30,
[1][0][2][0][RTW89_UK][1][24] = 66,
[1][0][2][0][RTW89_UK][0][24] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][24] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][24] = 30,
[1][0][2][0][RTW89_FCC][1][28] = 34,
[1][0][2][0][RTW89_FCC][2][28] = 70,
[1][0][2][0][RTW89_ETSI][1][28] = 66,
@@ -43234,6 +44640,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][28] = 64,
[1][0][2][0][RTW89_MKK][0][28] = 26,
[1][0][2][0][RTW89_IC][1][28] = 34,
+ [1][0][2][0][RTW89_IC][2][28] = 70,
[1][0][2][0][RTW89_KCC][1][28] = 40,
[1][0][2][0][RTW89_KCC][0][28] = 26,
[1][0][2][0][RTW89_ACMA][1][28] = 66,
@@ -43243,6 +44650,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][28] = 30,
[1][0][2][0][RTW89_UK][1][28] = 66,
[1][0][2][0][RTW89_UK][0][28] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][28] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][28] = 30,
[1][0][2][0][RTW89_FCC][1][31] = 34,
[1][0][2][0][RTW89_FCC][2][31] = 70,
[1][0][2][0][RTW89_ETSI][1][31] = 66,
@@ -43250,6 +44659,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][31] = 64,
[1][0][2][0][RTW89_MKK][0][31] = 26,
[1][0][2][0][RTW89_IC][1][31] = 34,
+ [1][0][2][0][RTW89_IC][2][31] = 70,
[1][0][2][0][RTW89_KCC][1][31] = 40,
[1][0][2][0][RTW89_KCC][0][31] = 26,
[1][0][2][0][RTW89_ACMA][1][31] = 66,
@@ -43259,6 +44669,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][31] = 30,
[1][0][2][0][RTW89_UK][1][31] = 66,
[1][0][2][0][RTW89_UK][0][31] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][31] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][31] = 30,
[1][0][2][0][RTW89_FCC][1][35] = 34,
[1][0][2][0][RTW89_FCC][2][35] = 70,
[1][0][2][0][RTW89_ETSI][1][35] = 66,
@@ -43266,6 +44678,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][35] = 64,
[1][0][2][0][RTW89_MKK][0][35] = 26,
[1][0][2][0][RTW89_IC][1][35] = 34,
+ [1][0][2][0][RTW89_IC][2][35] = 70,
[1][0][2][0][RTW89_KCC][1][35] = 40,
[1][0][2][0][RTW89_KCC][0][35] = 26,
[1][0][2][0][RTW89_ACMA][1][35] = 66,
@@ -43275,6 +44688,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][35] = 30,
[1][0][2][0][RTW89_UK][1][35] = 66,
[1][0][2][0][RTW89_UK][0][35] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][35] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][35] = 30,
[1][0][2][0][RTW89_FCC][1][39] = 34,
[1][0][2][0][RTW89_FCC][2][39] = 70,
[1][0][2][0][RTW89_ETSI][1][39] = 66,
@@ -43282,6 +44697,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][39] = 64,
[1][0][2][0][RTW89_MKK][0][39] = 26,
[1][0][2][0][RTW89_IC][1][39] = 34,
+ [1][0][2][0][RTW89_IC][2][39] = 70,
[1][0][2][0][RTW89_KCC][1][39] = 40,
[1][0][2][0][RTW89_KCC][0][39] = 26,
[1][0][2][0][RTW89_ACMA][1][39] = 66,
@@ -43291,6 +44707,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][39] = 30,
[1][0][2][0][RTW89_UK][1][39] = 66,
[1][0][2][0][RTW89_UK][0][39] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][39] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][39] = 30,
[1][0][2][0][RTW89_FCC][1][43] = 34,
[1][0][2][0][RTW89_FCC][2][43] = 70,
[1][0][2][0][RTW89_ETSI][1][43] = 66,
@@ -43298,6 +44716,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][43] = 64,
[1][0][2][0][RTW89_MKK][0][43] = 26,
[1][0][2][0][RTW89_IC][1][43] = 34,
+ [1][0][2][0][RTW89_IC][2][43] = 70,
[1][0][2][0][RTW89_KCC][1][43] = 40,
[1][0][2][0][RTW89_KCC][0][43] = 26,
[1][0][2][0][RTW89_ACMA][1][43] = 66,
@@ -43307,6 +44726,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][43] = 30,
[1][0][2][0][RTW89_UK][1][43] = 66,
[1][0][2][0][RTW89_UK][0][43] = 30,
+ [1][0][2][0][RTW89_THAILAND][1][43] = 68,
+ [1][0][2][0][RTW89_THAILAND][0][43] = 30,
[1][0][2][0][RTW89_FCC][1][46] = 34,
[1][0][2][0][RTW89_FCC][2][46] = 127,
[1][0][2][0][RTW89_ETSI][1][46] = 127,
@@ -43314,6 +44735,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][46] = 127,
[1][0][2][0][RTW89_MKK][0][46] = 127,
[1][0][2][0][RTW89_IC][1][46] = 34,
+ [1][0][2][0][RTW89_IC][2][46] = 68,
[1][0][2][0][RTW89_KCC][1][46] = 40,
[1][0][2][0][RTW89_KCC][0][46] = 127,
[1][0][2][0][RTW89_ACMA][1][46] = 127,
@@ -43323,6 +44745,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][46] = 127,
[1][0][2][0][RTW89_UK][1][46] = 127,
[1][0][2][0][RTW89_UK][0][46] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][46] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][46] = 127,
[1][0][2][0][RTW89_FCC][1][50] = 34,
[1][0][2][0][RTW89_FCC][2][50] = 127,
[1][0][2][0][RTW89_ETSI][1][50] = 127,
@@ -43330,6 +44754,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][50] = 127,
[1][0][2][0][RTW89_MKK][0][50] = 127,
[1][0][2][0][RTW89_IC][1][50] = 34,
+ [1][0][2][0][RTW89_IC][2][50] = 68,
[1][0][2][0][RTW89_KCC][1][50] = 40,
[1][0][2][0][RTW89_KCC][0][50] = 127,
[1][0][2][0][RTW89_ACMA][1][50] = 127,
@@ -43339,6 +44764,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][50] = 127,
[1][0][2][0][RTW89_UK][1][50] = 127,
[1][0][2][0][RTW89_UK][0][50] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][50] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][50] = 127,
[1][0][2][0][RTW89_FCC][1][54] = 36,
[1][0][2][0][RTW89_FCC][2][54] = 127,
[1][0][2][0][RTW89_ETSI][1][54] = 127,
@@ -43346,6 +44773,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][54] = 127,
[1][0][2][0][RTW89_MKK][0][54] = 127,
[1][0][2][0][RTW89_IC][1][54] = 36,
+ [1][0][2][0][RTW89_IC][2][54] = 127,
[1][0][2][0][RTW89_KCC][1][54] = 40,
[1][0][2][0][RTW89_KCC][0][54] = 127,
[1][0][2][0][RTW89_ACMA][1][54] = 127,
@@ -43355,6 +44783,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][54] = 127,
[1][0][2][0][RTW89_UK][1][54] = 127,
[1][0][2][0][RTW89_UK][0][54] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][54] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][54] = 127,
[1][0][2][0][RTW89_FCC][1][58] = 36,
[1][0][2][0][RTW89_FCC][2][58] = 66,
[1][0][2][0][RTW89_ETSI][1][58] = 127,
@@ -43362,6 +44792,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][58] = 127,
[1][0][2][0][RTW89_MKK][0][58] = 127,
[1][0][2][0][RTW89_IC][1][58] = 36,
+ [1][0][2][0][RTW89_IC][2][58] = 66,
[1][0][2][0][RTW89_KCC][1][58] = 40,
[1][0][2][0][RTW89_KCC][0][58] = 127,
[1][0][2][0][RTW89_ACMA][1][58] = 127,
@@ -43371,6 +44802,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][58] = 127,
[1][0][2][0][RTW89_UK][1][58] = 127,
[1][0][2][0][RTW89_UK][0][58] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][58] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][58] = 127,
[1][0][2][0][RTW89_FCC][1][61] = 34,
[1][0][2][0][RTW89_FCC][2][61] = 66,
[1][0][2][0][RTW89_ETSI][1][61] = 127,
@@ -43378,6 +44811,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][61] = 127,
[1][0][2][0][RTW89_MKK][0][61] = 127,
[1][0][2][0][RTW89_IC][1][61] = 34,
+ [1][0][2][0][RTW89_IC][2][61] = 66,
[1][0][2][0][RTW89_KCC][1][61] = 40,
[1][0][2][0][RTW89_KCC][0][61] = 127,
[1][0][2][0][RTW89_ACMA][1][61] = 127,
@@ -43387,6 +44821,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][61] = 127,
[1][0][2][0][RTW89_UK][1][61] = 127,
[1][0][2][0][RTW89_UK][0][61] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][61] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][61] = 127,
[1][0][2][0][RTW89_FCC][1][65] = 34,
[1][0][2][0][RTW89_FCC][2][65] = 66,
[1][0][2][0][RTW89_ETSI][1][65] = 127,
@@ -43394,6 +44830,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][65] = 127,
[1][0][2][0][RTW89_MKK][0][65] = 127,
[1][0][2][0][RTW89_IC][1][65] = 34,
+ [1][0][2][0][RTW89_IC][2][65] = 66,
[1][0][2][0][RTW89_KCC][1][65] = 40,
[1][0][2][0][RTW89_KCC][0][65] = 127,
[1][0][2][0][RTW89_ACMA][1][65] = 127,
@@ -43403,6 +44840,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][65] = 127,
[1][0][2][0][RTW89_UK][1][65] = 127,
[1][0][2][0][RTW89_UK][0][65] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][65] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][65] = 127,
[1][0][2][0][RTW89_FCC][1][69] = 34,
[1][0][2][0][RTW89_FCC][2][69] = 66,
[1][0][2][0][RTW89_ETSI][1][69] = 127,
@@ -43410,6 +44849,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][69] = 127,
[1][0][2][0][RTW89_MKK][0][69] = 127,
[1][0][2][0][RTW89_IC][1][69] = 34,
+ [1][0][2][0][RTW89_IC][2][69] = 66,
[1][0][2][0][RTW89_KCC][1][69] = 40,
[1][0][2][0][RTW89_KCC][0][69] = 127,
[1][0][2][0][RTW89_ACMA][1][69] = 127,
@@ -43419,6 +44859,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][69] = 127,
[1][0][2][0][RTW89_UK][1][69] = 127,
[1][0][2][0][RTW89_UK][0][69] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][69] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][69] = 127,
[1][0][2][0][RTW89_FCC][1][73] = 34,
[1][0][2][0][RTW89_FCC][2][73] = 66,
[1][0][2][0][RTW89_ETSI][1][73] = 127,
@@ -43426,6 +44868,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][73] = 127,
[1][0][2][0][RTW89_MKK][0][73] = 127,
[1][0][2][0][RTW89_IC][1][73] = 34,
+ [1][0][2][0][RTW89_IC][2][73] = 66,
[1][0][2][0][RTW89_KCC][1][73] = 40,
[1][0][2][0][RTW89_KCC][0][73] = 127,
[1][0][2][0][RTW89_ACMA][1][73] = 127,
@@ -43435,6 +44878,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][73] = 127,
[1][0][2][0][RTW89_UK][1][73] = 127,
[1][0][2][0][RTW89_UK][0][73] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][73] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][73] = 127,
[1][0][2][0][RTW89_FCC][1][76] = 34,
[1][0][2][0][RTW89_FCC][2][76] = 66,
[1][0][2][0][RTW89_ETSI][1][76] = 127,
@@ -43442,6 +44887,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][76] = 127,
[1][0][2][0][RTW89_MKK][0][76] = 127,
[1][0][2][0][RTW89_IC][1][76] = 34,
+ [1][0][2][0][RTW89_IC][2][76] = 66,
[1][0][2][0][RTW89_KCC][1][76] = 40,
[1][0][2][0][RTW89_KCC][0][76] = 127,
[1][0][2][0][RTW89_ACMA][1][76] = 127,
@@ -43451,6 +44897,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][76] = 127,
[1][0][2][0][RTW89_UK][1][76] = 127,
[1][0][2][0][RTW89_UK][0][76] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][76] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][76] = 127,
[1][0][2][0][RTW89_FCC][1][80] = 34,
[1][0][2][0][RTW89_FCC][2][80] = 66,
[1][0][2][0][RTW89_ETSI][1][80] = 127,
@@ -43458,6 +44906,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][80] = 127,
[1][0][2][0][RTW89_MKK][0][80] = 127,
[1][0][2][0][RTW89_IC][1][80] = 34,
+ [1][0][2][0][RTW89_IC][2][80] = 66,
[1][0][2][0][RTW89_KCC][1][80] = 42,
[1][0][2][0][RTW89_KCC][0][80] = 127,
[1][0][2][0][RTW89_ACMA][1][80] = 127,
@@ -43467,6 +44916,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][80] = 127,
[1][0][2][0][RTW89_UK][1][80] = 127,
[1][0][2][0][RTW89_UK][0][80] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][80] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][80] = 127,
[1][0][2][0][RTW89_FCC][1][84] = 34,
[1][0][2][0][RTW89_FCC][2][84] = 66,
[1][0][2][0][RTW89_ETSI][1][84] = 127,
@@ -43474,6 +44925,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][84] = 127,
[1][0][2][0][RTW89_MKK][0][84] = 127,
[1][0][2][0][RTW89_IC][1][84] = 34,
+ [1][0][2][0][RTW89_IC][2][84] = 66,
[1][0][2][0][RTW89_KCC][1][84] = 42,
[1][0][2][0][RTW89_KCC][0][84] = 127,
[1][0][2][0][RTW89_ACMA][1][84] = 127,
@@ -43483,6 +44935,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][84] = 127,
[1][0][2][0][RTW89_UK][1][84] = 127,
[1][0][2][0][RTW89_UK][0][84] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][84] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][84] = 127,
[1][0][2][0][RTW89_FCC][1][88] = 34,
[1][0][2][0][RTW89_FCC][2][88] = 127,
[1][0][2][0][RTW89_ETSI][1][88] = 127,
@@ -43490,6 +44944,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][88] = 127,
[1][0][2][0][RTW89_MKK][0][88] = 127,
[1][0][2][0][RTW89_IC][1][88] = 34,
+ [1][0][2][0][RTW89_IC][2][88] = 127,
[1][0][2][0][RTW89_KCC][1][88] = 42,
[1][0][2][0][RTW89_KCC][0][88] = 127,
[1][0][2][0][RTW89_ACMA][1][88] = 127,
@@ -43499,6 +44954,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][88] = 127,
[1][0][2][0][RTW89_UK][1][88] = 127,
[1][0][2][0][RTW89_UK][0][88] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][88] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][88] = 127,
[1][0][2][0][RTW89_FCC][1][91] = 36,
[1][0][2][0][RTW89_FCC][2][91] = 127,
[1][0][2][0][RTW89_ETSI][1][91] = 127,
@@ -43506,6 +44963,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][91] = 127,
[1][0][2][0][RTW89_MKK][0][91] = 127,
[1][0][2][0][RTW89_IC][1][91] = 36,
+ [1][0][2][0][RTW89_IC][2][91] = 127,
[1][0][2][0][RTW89_KCC][1][91] = 42,
[1][0][2][0][RTW89_KCC][0][91] = 127,
[1][0][2][0][RTW89_ACMA][1][91] = 127,
@@ -43515,6 +44973,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][91] = 127,
[1][0][2][0][RTW89_UK][1][91] = 127,
[1][0][2][0][RTW89_UK][0][91] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][91] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][91] = 127,
[1][0][2][0][RTW89_FCC][1][95] = 34,
[1][0][2][0][RTW89_FCC][2][95] = 127,
[1][0][2][0][RTW89_ETSI][1][95] = 127,
@@ -43522,6 +44982,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][95] = 127,
[1][0][2][0][RTW89_MKK][0][95] = 127,
[1][0][2][0][RTW89_IC][1][95] = 34,
+ [1][0][2][0][RTW89_IC][2][95] = 127,
[1][0][2][0][RTW89_KCC][1][95] = 42,
[1][0][2][0][RTW89_KCC][0][95] = 127,
[1][0][2][0][RTW89_ACMA][1][95] = 127,
@@ -43531,6 +44992,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][95] = 127,
[1][0][2][0][RTW89_UK][1][95] = 127,
[1][0][2][0][RTW89_UK][0][95] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][95] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][95] = 127,
[1][0][2][0][RTW89_FCC][1][99] = 34,
[1][0][2][0][RTW89_FCC][2][99] = 127,
[1][0][2][0][RTW89_ETSI][1][99] = 127,
@@ -43538,6 +45001,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][99] = 127,
[1][0][2][0][RTW89_MKK][0][99] = 127,
[1][0][2][0][RTW89_IC][1][99] = 34,
+ [1][0][2][0][RTW89_IC][2][99] = 127,
[1][0][2][0][RTW89_KCC][1][99] = 42,
[1][0][2][0][RTW89_KCC][0][99] = 127,
[1][0][2][0][RTW89_ACMA][1][99] = 127,
@@ -43547,6 +45011,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][99] = 127,
[1][0][2][0][RTW89_UK][1][99] = 127,
[1][0][2][0][RTW89_UK][0][99] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][99] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][99] = 127,
[1][0][2][0][RTW89_FCC][1][103] = 34,
[1][0][2][0][RTW89_FCC][2][103] = 127,
[1][0][2][0][RTW89_ETSI][1][103] = 127,
@@ -43554,6 +45020,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][103] = 127,
[1][0][2][0][RTW89_MKK][0][103] = 127,
[1][0][2][0][RTW89_IC][1][103] = 34,
+ [1][0][2][0][RTW89_IC][2][103] = 127,
[1][0][2][0][RTW89_KCC][1][103] = 42,
[1][0][2][0][RTW89_KCC][0][103] = 127,
[1][0][2][0][RTW89_ACMA][1][103] = 127,
@@ -43563,6 +45030,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][103] = 127,
[1][0][2][0][RTW89_UK][1][103] = 127,
[1][0][2][0][RTW89_UK][0][103] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][103] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][103] = 127,
[1][0][2][0][RTW89_FCC][1][106] = 36,
[1][0][2][0][RTW89_FCC][2][106] = 127,
[1][0][2][0][RTW89_ETSI][1][106] = 127,
@@ -43570,6 +45039,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][106] = 127,
[1][0][2][0][RTW89_MKK][0][106] = 127,
[1][0][2][0][RTW89_IC][1][106] = 36,
+ [1][0][2][0][RTW89_IC][2][106] = 127,
[1][0][2][0][RTW89_KCC][1][106] = 42,
[1][0][2][0][RTW89_KCC][0][106] = 127,
[1][0][2][0][RTW89_ACMA][1][106] = 127,
@@ -43579,6 +45049,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][106] = 127,
[1][0][2][0][RTW89_UK][1][106] = 127,
[1][0][2][0][RTW89_UK][0][106] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][106] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][106] = 127,
[1][0][2][0][RTW89_FCC][1][110] = 127,
[1][0][2][0][RTW89_FCC][2][110] = 127,
[1][0][2][0][RTW89_ETSI][1][110] = 127,
@@ -43586,6 +45058,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][110] = 127,
[1][0][2][0][RTW89_MKK][0][110] = 127,
[1][0][2][0][RTW89_IC][1][110] = 127,
+ [1][0][2][0][RTW89_IC][2][110] = 127,
[1][0][2][0][RTW89_KCC][1][110] = 127,
[1][0][2][0][RTW89_KCC][0][110] = 127,
[1][0][2][0][RTW89_ACMA][1][110] = 127,
@@ -43595,6 +45068,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][110] = 127,
[1][0][2][0][RTW89_UK][1][110] = 127,
[1][0][2][0][RTW89_UK][0][110] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][110] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][110] = 127,
[1][0][2][0][RTW89_FCC][1][114] = 127,
[1][0][2][0][RTW89_FCC][2][114] = 127,
[1][0][2][0][RTW89_ETSI][1][114] = 127,
@@ -43602,6 +45077,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][114] = 127,
[1][0][2][0][RTW89_MKK][0][114] = 127,
[1][0][2][0][RTW89_IC][1][114] = 127,
+ [1][0][2][0][RTW89_IC][2][114] = 127,
[1][0][2][0][RTW89_KCC][1][114] = 127,
[1][0][2][0][RTW89_KCC][0][114] = 127,
[1][0][2][0][RTW89_ACMA][1][114] = 127,
@@ -43611,6 +45087,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][114] = 127,
[1][0][2][0][RTW89_UK][1][114] = 127,
[1][0][2][0][RTW89_UK][0][114] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][114] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][114] = 127,
[1][0][2][0][RTW89_FCC][1][118] = 127,
[1][0][2][0][RTW89_FCC][2][118] = 127,
[1][0][2][0][RTW89_ETSI][1][118] = 127,
@@ -43618,6 +45096,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_MKK][1][118] = 127,
[1][0][2][0][RTW89_MKK][0][118] = 127,
[1][0][2][0][RTW89_IC][1][118] = 127,
+ [1][0][2][0][RTW89_IC][2][118] = 127,
[1][0][2][0][RTW89_KCC][1][118] = 127,
[1][0][2][0][RTW89_KCC][0][118] = 127,
[1][0][2][0][RTW89_ACMA][1][118] = 127,
@@ -43627,6 +45106,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][0][2][0][RTW89_QATAR][0][118] = 127,
[1][0][2][0][RTW89_UK][1][118] = 127,
[1][0][2][0][RTW89_UK][0][118] = 127,
+ [1][0][2][0][RTW89_THAILAND][1][118] = 127,
+ [1][0][2][0][RTW89_THAILAND][0][118] = 127,
[1][1][2][0][RTW89_FCC][1][1] = 10,
[1][1][2][0][RTW89_FCC][2][1] = 58,
[1][1][2][0][RTW89_ETSI][1][1] = 54,
@@ -43634,6 +45115,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][1] = 52,
[1][1][2][0][RTW89_MKK][0][1] = 12,
[1][1][2][0][RTW89_IC][1][1] = 10,
+ [1][1][2][0][RTW89_IC][2][1] = 58,
[1][1][2][0][RTW89_KCC][1][1] = 28,
[1][1][2][0][RTW89_KCC][0][1] = 12,
[1][1][2][0][RTW89_ACMA][1][1] = 54,
@@ -43643,6 +45125,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][1] = 18,
[1][1][2][0][RTW89_UK][1][1] = 54,
[1][1][2][0][RTW89_UK][0][1] = 18,
+ [1][1][2][0][RTW89_THAILAND][1][1] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][1] = 10,
[1][1][2][0][RTW89_FCC][1][5] = 10,
[1][1][2][0][RTW89_FCC][2][5] = 58,
[1][1][2][0][RTW89_ETSI][1][5] = 54,
@@ -43650,6 +45134,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][5] = 52,
[1][1][2][0][RTW89_MKK][0][5] = 12,
[1][1][2][0][RTW89_IC][1][5] = 10,
+ [1][1][2][0][RTW89_IC][2][5] = 58,
[1][1][2][0][RTW89_KCC][1][5] = 28,
[1][1][2][0][RTW89_KCC][0][5] = 12,
[1][1][2][0][RTW89_ACMA][1][5] = 54,
@@ -43659,6 +45144,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][5] = 16,
[1][1][2][0][RTW89_UK][1][5] = 54,
[1][1][2][0][RTW89_UK][0][5] = 16,
+ [1][1][2][0][RTW89_THAILAND][1][5] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][5] = 10,
[1][1][2][0][RTW89_FCC][1][9] = 10,
[1][1][2][0][RTW89_FCC][2][9] = 58,
[1][1][2][0][RTW89_ETSI][1][9] = 54,
@@ -43666,6 +45153,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][9] = 52,
[1][1][2][0][RTW89_MKK][0][9] = 12,
[1][1][2][0][RTW89_IC][1][9] = 10,
+ [1][1][2][0][RTW89_IC][2][9] = 58,
[1][1][2][0][RTW89_KCC][1][9] = 28,
[1][1][2][0][RTW89_KCC][0][9] = 12,
[1][1][2][0][RTW89_ACMA][1][9] = 54,
@@ -43675,6 +45163,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][9] = 16,
[1][1][2][0][RTW89_UK][1][9] = 54,
[1][1][2][0][RTW89_UK][0][9] = 16,
+ [1][1][2][0][RTW89_THAILAND][1][9] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][9] = 10,
[1][1][2][0][RTW89_FCC][1][13] = 10,
[1][1][2][0][RTW89_FCC][2][13] = 58,
[1][1][2][0][RTW89_ETSI][1][13] = 54,
@@ -43682,6 +45172,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][13] = 52,
[1][1][2][0][RTW89_MKK][0][13] = 12,
[1][1][2][0][RTW89_IC][1][13] = 10,
+ [1][1][2][0][RTW89_IC][2][13] = 58,
[1][1][2][0][RTW89_KCC][1][13] = 28,
[1][1][2][0][RTW89_KCC][0][13] = 12,
[1][1][2][0][RTW89_ACMA][1][13] = 54,
@@ -43691,6 +45182,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][13] = 16,
[1][1][2][0][RTW89_UK][1][13] = 54,
[1][1][2][0][RTW89_UK][0][13] = 16,
+ [1][1][2][0][RTW89_THAILAND][1][13] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][13] = 10,
[1][1][2][0][RTW89_FCC][1][16] = 10,
[1][1][2][0][RTW89_FCC][2][16] = 58,
[1][1][2][0][RTW89_ETSI][1][16] = 54,
@@ -43698,6 +45191,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][16] = 52,
[1][1][2][0][RTW89_MKK][0][16] = 12,
[1][1][2][0][RTW89_IC][1][16] = 10,
+ [1][1][2][0][RTW89_IC][2][16] = 58,
[1][1][2][0][RTW89_KCC][1][16] = 28,
[1][1][2][0][RTW89_KCC][0][16] = 12,
[1][1][2][0][RTW89_ACMA][1][16] = 54,
@@ -43707,6 +45201,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][16] = 16,
[1][1][2][0][RTW89_UK][1][16] = 54,
[1][1][2][0][RTW89_UK][0][16] = 16,
+ [1][1][2][0][RTW89_THAILAND][1][16] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][16] = 10,
[1][1][2][0][RTW89_FCC][1][20] = 10,
[1][1][2][0][RTW89_FCC][2][20] = 58,
[1][1][2][0][RTW89_ETSI][1][20] = 54,
@@ -43714,6 +45210,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][20] = 52,
[1][1][2][0][RTW89_MKK][0][20] = 12,
[1][1][2][0][RTW89_IC][1][20] = 10,
+ [1][1][2][0][RTW89_IC][2][20] = 58,
[1][1][2][0][RTW89_KCC][1][20] = 28,
[1][1][2][0][RTW89_KCC][0][20] = 12,
[1][1][2][0][RTW89_ACMA][1][20] = 54,
@@ -43723,6 +45220,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][20] = 16,
[1][1][2][0][RTW89_UK][1][20] = 54,
[1][1][2][0][RTW89_UK][0][20] = 16,
+ [1][1][2][0][RTW89_THAILAND][1][20] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][20] = 10,
[1][1][2][0][RTW89_FCC][1][24] = 10,
[1][1][2][0][RTW89_FCC][2][24] = 70,
[1][1][2][0][RTW89_ETSI][1][24] = 54,
@@ -43730,6 +45229,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][24] = 54,
[1][1][2][0][RTW89_MKK][0][24] = 14,
[1][1][2][0][RTW89_IC][1][24] = 10,
+ [1][1][2][0][RTW89_IC][2][24] = 70,
[1][1][2][0][RTW89_KCC][1][24] = 28,
[1][1][2][0][RTW89_KCC][0][24] = 12,
[1][1][2][0][RTW89_ACMA][1][24] = 54,
@@ -43739,6 +45239,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][24] = 16,
[1][1][2][0][RTW89_UK][1][24] = 54,
[1][1][2][0][RTW89_UK][0][24] = 16,
+ [1][1][2][0][RTW89_THAILAND][1][24] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][24] = 10,
[1][1][2][0][RTW89_FCC][1][28] = 10,
[1][1][2][0][RTW89_FCC][2][28] = 70,
[1][1][2][0][RTW89_ETSI][1][28] = 54,
@@ -43746,6 +45248,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][28] = 52,
[1][1][2][0][RTW89_MKK][0][28] = 14,
[1][1][2][0][RTW89_IC][1][28] = 10,
+ [1][1][2][0][RTW89_IC][2][28] = 70,
[1][1][2][0][RTW89_KCC][1][28] = 28,
[1][1][2][0][RTW89_KCC][0][28] = 14,
[1][1][2][0][RTW89_ACMA][1][28] = 54,
@@ -43755,6 +45258,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][28] = 16,
[1][1][2][0][RTW89_UK][1][28] = 54,
[1][1][2][0][RTW89_UK][0][28] = 16,
+ [1][1][2][0][RTW89_THAILAND][1][28] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][28] = 10,
[1][1][2][0][RTW89_FCC][1][31] = 10,
[1][1][2][0][RTW89_FCC][2][31] = 70,
[1][1][2][0][RTW89_ETSI][1][31] = 54,
@@ -43762,6 +45267,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][31] = 52,
[1][1][2][0][RTW89_MKK][0][31] = 14,
[1][1][2][0][RTW89_IC][1][31] = 10,
+ [1][1][2][0][RTW89_IC][2][31] = 70,
[1][1][2][0][RTW89_KCC][1][31] = 28,
[1][1][2][0][RTW89_KCC][0][31] = 14,
[1][1][2][0][RTW89_ACMA][1][31] = 54,
@@ -43771,6 +45277,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][31] = 16,
[1][1][2][0][RTW89_UK][1][31] = 54,
[1][1][2][0][RTW89_UK][0][31] = 16,
+ [1][1][2][0][RTW89_THAILAND][1][31] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][31] = 10,
[1][1][2][0][RTW89_FCC][1][35] = 10,
[1][1][2][0][RTW89_FCC][2][35] = 70,
[1][1][2][0][RTW89_ETSI][1][35] = 54,
@@ -43778,6 +45286,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][35] = 52,
[1][1][2][0][RTW89_MKK][0][35] = 14,
[1][1][2][0][RTW89_IC][1][35] = 10,
+ [1][1][2][0][RTW89_IC][2][35] = 70,
[1][1][2][0][RTW89_KCC][1][35] = 28,
[1][1][2][0][RTW89_KCC][0][35] = 14,
[1][1][2][0][RTW89_ACMA][1][35] = 54,
@@ -43787,6 +45296,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][35] = 16,
[1][1][2][0][RTW89_UK][1][35] = 54,
[1][1][2][0][RTW89_UK][0][35] = 16,
+ [1][1][2][0][RTW89_THAILAND][1][35] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][35] = 10,
[1][1][2][0][RTW89_FCC][1][39] = 10,
[1][1][2][0][RTW89_FCC][2][39] = 70,
[1][1][2][0][RTW89_ETSI][1][39] = 54,
@@ -43794,6 +45305,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][39] = 52,
[1][1][2][0][RTW89_MKK][0][39] = 14,
[1][1][2][0][RTW89_IC][1][39] = 10,
+ [1][1][2][0][RTW89_IC][2][39] = 70,
[1][1][2][0][RTW89_KCC][1][39] = 28,
[1][1][2][0][RTW89_KCC][0][39] = 14,
[1][1][2][0][RTW89_ACMA][1][39] = 54,
@@ -43803,6 +45315,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][39] = 16,
[1][1][2][0][RTW89_UK][1][39] = 54,
[1][1][2][0][RTW89_UK][0][39] = 16,
+ [1][1][2][0][RTW89_THAILAND][1][39] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][39] = 10,
[1][1][2][0][RTW89_FCC][1][43] = 10,
[1][1][2][0][RTW89_FCC][2][43] = 70,
[1][1][2][0][RTW89_ETSI][1][43] = 54,
@@ -43810,6 +45324,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][43] = 52,
[1][1][2][0][RTW89_MKK][0][43] = 14,
[1][1][2][0][RTW89_IC][1][43] = 10,
+ [1][1][2][0][RTW89_IC][2][43] = 70,
[1][1][2][0][RTW89_KCC][1][43] = 28,
[1][1][2][0][RTW89_KCC][0][43] = 14,
[1][1][2][0][RTW89_ACMA][1][43] = 54,
@@ -43819,6 +45334,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][43] = 16,
[1][1][2][0][RTW89_UK][1][43] = 54,
[1][1][2][0][RTW89_UK][0][43] = 16,
+ [1][1][2][0][RTW89_THAILAND][1][43] = 46,
+ [1][1][2][0][RTW89_THAILAND][0][43] = 10,
[1][1][2][0][RTW89_FCC][1][46] = 12,
[1][1][2][0][RTW89_FCC][2][46] = 127,
[1][1][2][0][RTW89_ETSI][1][46] = 127,
@@ -43826,6 +45343,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][46] = 127,
[1][1][2][0][RTW89_MKK][0][46] = 127,
[1][1][2][0][RTW89_IC][1][46] = 12,
+ [1][1][2][0][RTW89_IC][2][46] = 68,
[1][1][2][0][RTW89_KCC][1][46] = 28,
[1][1][2][0][RTW89_KCC][0][46] = 127,
[1][1][2][0][RTW89_ACMA][1][46] = 127,
@@ -43835,6 +45353,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][46] = 127,
[1][1][2][0][RTW89_UK][1][46] = 127,
[1][1][2][0][RTW89_UK][0][46] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][46] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][46] = 127,
[1][1][2][0][RTW89_FCC][1][50] = 12,
[1][1][2][0][RTW89_FCC][2][50] = 127,
[1][1][2][0][RTW89_ETSI][1][50] = 127,
@@ -43842,6 +45362,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][50] = 127,
[1][1][2][0][RTW89_MKK][0][50] = 127,
[1][1][2][0][RTW89_IC][1][50] = 12,
+ [1][1][2][0][RTW89_IC][2][50] = 68,
[1][1][2][0][RTW89_KCC][1][50] = 28,
[1][1][2][0][RTW89_KCC][0][50] = 127,
[1][1][2][0][RTW89_ACMA][1][50] = 127,
@@ -43851,6 +45372,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][50] = 127,
[1][1][2][0][RTW89_UK][1][50] = 127,
[1][1][2][0][RTW89_UK][0][50] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][50] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][50] = 127,
[1][1][2][0][RTW89_FCC][1][54] = 10,
[1][1][2][0][RTW89_FCC][2][54] = 127,
[1][1][2][0][RTW89_ETSI][1][54] = 127,
@@ -43858,6 +45381,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][54] = 127,
[1][1][2][0][RTW89_MKK][0][54] = 127,
[1][1][2][0][RTW89_IC][1][54] = 10,
+ [1][1][2][0][RTW89_IC][2][54] = 127,
[1][1][2][0][RTW89_KCC][1][54] = 28,
[1][1][2][0][RTW89_KCC][0][54] = 127,
[1][1][2][0][RTW89_ACMA][1][54] = 127,
@@ -43867,6 +45391,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][54] = 127,
[1][1][2][0][RTW89_UK][1][54] = 127,
[1][1][2][0][RTW89_UK][0][54] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][54] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][54] = 127,
[1][1][2][0][RTW89_FCC][1][58] = 10,
[1][1][2][0][RTW89_FCC][2][58] = 66,
[1][1][2][0][RTW89_ETSI][1][58] = 127,
@@ -43874,6 +45400,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][58] = 127,
[1][1][2][0][RTW89_MKK][0][58] = 127,
[1][1][2][0][RTW89_IC][1][58] = 10,
+ [1][1][2][0][RTW89_IC][2][58] = 66,
[1][1][2][0][RTW89_KCC][1][58] = 28,
[1][1][2][0][RTW89_KCC][0][58] = 127,
[1][1][2][0][RTW89_ACMA][1][58] = 127,
@@ -43883,6 +45410,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][58] = 127,
[1][1][2][0][RTW89_UK][1][58] = 127,
[1][1][2][0][RTW89_UK][0][58] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][58] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][58] = 127,
[1][1][2][0][RTW89_FCC][1][61] = 10,
[1][1][2][0][RTW89_FCC][2][61] = 66,
[1][1][2][0][RTW89_ETSI][1][61] = 127,
@@ -43890,6 +45419,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][61] = 127,
[1][1][2][0][RTW89_MKK][0][61] = 127,
[1][1][2][0][RTW89_IC][1][61] = 10,
+ [1][1][2][0][RTW89_IC][2][61] = 66,
[1][1][2][0][RTW89_KCC][1][61] = 28,
[1][1][2][0][RTW89_KCC][0][61] = 127,
[1][1][2][0][RTW89_ACMA][1][61] = 127,
@@ -43899,6 +45429,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][61] = 127,
[1][1][2][0][RTW89_UK][1][61] = 127,
[1][1][2][0][RTW89_UK][0][61] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][61] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][61] = 127,
[1][1][2][0][RTW89_FCC][1][65] = 10,
[1][1][2][0][RTW89_FCC][2][65] = 66,
[1][1][2][0][RTW89_ETSI][1][65] = 127,
@@ -43906,6 +45438,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][65] = 127,
[1][1][2][0][RTW89_MKK][0][65] = 127,
[1][1][2][0][RTW89_IC][1][65] = 10,
+ [1][1][2][0][RTW89_IC][2][65] = 66,
[1][1][2][0][RTW89_KCC][1][65] = 28,
[1][1][2][0][RTW89_KCC][0][65] = 127,
[1][1][2][0][RTW89_ACMA][1][65] = 127,
@@ -43915,6 +45448,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][65] = 127,
[1][1][2][0][RTW89_UK][1][65] = 127,
[1][1][2][0][RTW89_UK][0][65] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][65] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][65] = 127,
[1][1][2][0][RTW89_FCC][1][69] = 10,
[1][1][2][0][RTW89_FCC][2][69] = 66,
[1][1][2][0][RTW89_ETSI][1][69] = 127,
@@ -43922,6 +45457,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][69] = 127,
[1][1][2][0][RTW89_MKK][0][69] = 127,
[1][1][2][0][RTW89_IC][1][69] = 10,
+ [1][1][2][0][RTW89_IC][2][69] = 66,
[1][1][2][0][RTW89_KCC][1][69] = 28,
[1][1][2][0][RTW89_KCC][0][69] = 127,
[1][1][2][0][RTW89_ACMA][1][69] = 127,
@@ -43931,6 +45467,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][69] = 127,
[1][1][2][0][RTW89_UK][1][69] = 127,
[1][1][2][0][RTW89_UK][0][69] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][69] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][69] = 127,
[1][1][2][0][RTW89_FCC][1][73] = 10,
[1][1][2][0][RTW89_FCC][2][73] = 66,
[1][1][2][0][RTW89_ETSI][1][73] = 127,
@@ -43938,6 +45476,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][73] = 127,
[1][1][2][0][RTW89_MKK][0][73] = 127,
[1][1][2][0][RTW89_IC][1][73] = 10,
+ [1][1][2][0][RTW89_IC][2][73] = 66,
[1][1][2][0][RTW89_KCC][1][73] = 28,
[1][1][2][0][RTW89_KCC][0][73] = 127,
[1][1][2][0][RTW89_ACMA][1][73] = 127,
@@ -43947,6 +45486,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][73] = 127,
[1][1][2][0][RTW89_UK][1][73] = 127,
[1][1][2][0][RTW89_UK][0][73] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][73] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][73] = 127,
[1][1][2][0][RTW89_FCC][1][76] = 10,
[1][1][2][0][RTW89_FCC][2][76] = 66,
[1][1][2][0][RTW89_ETSI][1][76] = 127,
@@ -43954,6 +45495,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][76] = 127,
[1][1][2][0][RTW89_MKK][0][76] = 127,
[1][1][2][0][RTW89_IC][1][76] = 10,
+ [1][1][2][0][RTW89_IC][2][76] = 66,
[1][1][2][0][RTW89_KCC][1][76] = 28,
[1][1][2][0][RTW89_KCC][0][76] = 127,
[1][1][2][0][RTW89_ACMA][1][76] = 127,
@@ -43963,6 +45505,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][76] = 127,
[1][1][2][0][RTW89_UK][1][76] = 127,
[1][1][2][0][RTW89_UK][0][76] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][76] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][76] = 127,
[1][1][2][0][RTW89_FCC][1][80] = 10,
[1][1][2][0][RTW89_FCC][2][80] = 66,
[1][1][2][0][RTW89_ETSI][1][80] = 127,
@@ -43970,6 +45514,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][80] = 127,
[1][1][2][0][RTW89_MKK][0][80] = 127,
[1][1][2][0][RTW89_IC][1][80] = 10,
+ [1][1][2][0][RTW89_IC][2][80] = 66,
[1][1][2][0][RTW89_KCC][1][80] = 32,
[1][1][2][0][RTW89_KCC][0][80] = 127,
[1][1][2][0][RTW89_ACMA][1][80] = 127,
@@ -43979,6 +45524,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][80] = 127,
[1][1][2][0][RTW89_UK][1][80] = 127,
[1][1][2][0][RTW89_UK][0][80] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][80] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][80] = 127,
[1][1][2][0][RTW89_FCC][1][84] = 10,
[1][1][2][0][RTW89_FCC][2][84] = 66,
[1][1][2][0][RTW89_ETSI][1][84] = 127,
@@ -43986,6 +45533,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][84] = 127,
[1][1][2][0][RTW89_MKK][0][84] = 127,
[1][1][2][0][RTW89_IC][1][84] = 10,
+ [1][1][2][0][RTW89_IC][2][84] = 66,
[1][1][2][0][RTW89_KCC][1][84] = 32,
[1][1][2][0][RTW89_KCC][0][84] = 127,
[1][1][2][0][RTW89_ACMA][1][84] = 127,
@@ -43995,6 +45543,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][84] = 127,
[1][1][2][0][RTW89_UK][1][84] = 127,
[1][1][2][0][RTW89_UK][0][84] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][84] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][84] = 127,
[1][1][2][0][RTW89_FCC][1][88] = 10,
[1][1][2][0][RTW89_FCC][2][88] = 127,
[1][1][2][0][RTW89_ETSI][1][88] = 127,
@@ -44002,6 +45552,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][88] = 127,
[1][1][2][0][RTW89_MKK][0][88] = 127,
[1][1][2][0][RTW89_IC][1][88] = 10,
+ [1][1][2][0][RTW89_IC][2][88] = 127,
[1][1][2][0][RTW89_KCC][1][88] = 32,
[1][1][2][0][RTW89_KCC][0][88] = 127,
[1][1][2][0][RTW89_ACMA][1][88] = 127,
@@ -44011,6 +45562,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][88] = 127,
[1][1][2][0][RTW89_UK][1][88] = 127,
[1][1][2][0][RTW89_UK][0][88] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][88] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][88] = 127,
[1][1][2][0][RTW89_FCC][1][91] = 12,
[1][1][2][0][RTW89_FCC][2][91] = 127,
[1][1][2][0][RTW89_ETSI][1][91] = 127,
@@ -44018,6 +45571,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][91] = 127,
[1][1][2][0][RTW89_MKK][0][91] = 127,
[1][1][2][0][RTW89_IC][1][91] = 12,
+ [1][1][2][0][RTW89_IC][2][91] = 127,
[1][1][2][0][RTW89_KCC][1][91] = 32,
[1][1][2][0][RTW89_KCC][0][91] = 127,
[1][1][2][0][RTW89_ACMA][1][91] = 127,
@@ -44027,6 +45581,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][91] = 127,
[1][1][2][0][RTW89_UK][1][91] = 127,
[1][1][2][0][RTW89_UK][0][91] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][91] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][91] = 127,
[1][1][2][0][RTW89_FCC][1][95] = 10,
[1][1][2][0][RTW89_FCC][2][95] = 127,
[1][1][2][0][RTW89_ETSI][1][95] = 127,
@@ -44034,6 +45590,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][95] = 127,
[1][1][2][0][RTW89_MKK][0][95] = 127,
[1][1][2][0][RTW89_IC][1][95] = 10,
+ [1][1][2][0][RTW89_IC][2][95] = 127,
[1][1][2][0][RTW89_KCC][1][95] = 32,
[1][1][2][0][RTW89_KCC][0][95] = 127,
[1][1][2][0][RTW89_ACMA][1][95] = 127,
@@ -44043,6 +45600,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][95] = 127,
[1][1][2][0][RTW89_UK][1][95] = 127,
[1][1][2][0][RTW89_UK][0][95] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][95] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][95] = 127,
[1][1][2][0][RTW89_FCC][1][99] = 10,
[1][1][2][0][RTW89_FCC][2][99] = 127,
[1][1][2][0][RTW89_ETSI][1][99] = 127,
@@ -44050,6 +45609,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][99] = 127,
[1][1][2][0][RTW89_MKK][0][99] = 127,
[1][1][2][0][RTW89_IC][1][99] = 10,
+ [1][1][2][0][RTW89_IC][2][99] = 127,
[1][1][2][0][RTW89_KCC][1][99] = 32,
[1][1][2][0][RTW89_KCC][0][99] = 127,
[1][1][2][0][RTW89_ACMA][1][99] = 127,
@@ -44059,6 +45619,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][99] = 127,
[1][1][2][0][RTW89_UK][1][99] = 127,
[1][1][2][0][RTW89_UK][0][99] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][99] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][99] = 127,
[1][1][2][0][RTW89_FCC][1][103] = 10,
[1][1][2][0][RTW89_FCC][2][103] = 127,
[1][1][2][0][RTW89_ETSI][1][103] = 127,
@@ -44066,6 +45628,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][103] = 127,
[1][1][2][0][RTW89_MKK][0][103] = 127,
[1][1][2][0][RTW89_IC][1][103] = 10,
+ [1][1][2][0][RTW89_IC][2][103] = 127,
[1][1][2][0][RTW89_KCC][1][103] = 32,
[1][1][2][0][RTW89_KCC][0][103] = 127,
[1][1][2][0][RTW89_ACMA][1][103] = 127,
@@ -44075,6 +45638,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][103] = 127,
[1][1][2][0][RTW89_UK][1][103] = 127,
[1][1][2][0][RTW89_UK][0][103] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][103] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][103] = 127,
[1][1][2][0][RTW89_FCC][1][106] = 12,
[1][1][2][0][RTW89_FCC][2][106] = 127,
[1][1][2][0][RTW89_ETSI][1][106] = 127,
@@ -44082,6 +45647,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][106] = 127,
[1][1][2][0][RTW89_MKK][0][106] = 127,
[1][1][2][0][RTW89_IC][1][106] = 12,
+ [1][1][2][0][RTW89_IC][2][106] = 127,
[1][1][2][0][RTW89_KCC][1][106] = 32,
[1][1][2][0][RTW89_KCC][0][106] = 127,
[1][1][2][0][RTW89_ACMA][1][106] = 127,
@@ -44091,6 +45657,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][106] = 127,
[1][1][2][0][RTW89_UK][1][106] = 127,
[1][1][2][0][RTW89_UK][0][106] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][106] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][106] = 127,
[1][1][2][0][RTW89_FCC][1][110] = 127,
[1][1][2][0][RTW89_FCC][2][110] = 127,
[1][1][2][0][RTW89_ETSI][1][110] = 127,
@@ -44098,6 +45666,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][110] = 127,
[1][1][2][0][RTW89_MKK][0][110] = 127,
[1][1][2][0][RTW89_IC][1][110] = 127,
+ [1][1][2][0][RTW89_IC][2][110] = 127,
[1][1][2][0][RTW89_KCC][1][110] = 127,
[1][1][2][0][RTW89_KCC][0][110] = 127,
[1][1][2][0][RTW89_ACMA][1][110] = 127,
@@ -44107,6 +45676,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][110] = 127,
[1][1][2][0][RTW89_UK][1][110] = 127,
[1][1][2][0][RTW89_UK][0][110] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][110] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][110] = 127,
[1][1][2][0][RTW89_FCC][1][114] = 127,
[1][1][2][0][RTW89_FCC][2][114] = 127,
[1][1][2][0][RTW89_ETSI][1][114] = 127,
@@ -44114,6 +45685,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][114] = 127,
[1][1][2][0][RTW89_MKK][0][114] = 127,
[1][1][2][0][RTW89_IC][1][114] = 127,
+ [1][1][2][0][RTW89_IC][2][114] = 127,
[1][1][2][0][RTW89_KCC][1][114] = 127,
[1][1][2][0][RTW89_KCC][0][114] = 127,
[1][1][2][0][RTW89_ACMA][1][114] = 127,
@@ -44123,6 +45695,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][114] = 127,
[1][1][2][0][RTW89_UK][1][114] = 127,
[1][1][2][0][RTW89_UK][0][114] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][114] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][114] = 127,
[1][1][2][0][RTW89_FCC][1][118] = 127,
[1][1][2][0][RTW89_FCC][2][118] = 127,
[1][1][2][0][RTW89_ETSI][1][118] = 127,
@@ -44130,6 +45704,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_MKK][1][118] = 127,
[1][1][2][0][RTW89_MKK][0][118] = 127,
[1][1][2][0][RTW89_IC][1][118] = 127,
+ [1][1][2][0][RTW89_IC][2][118] = 127,
[1][1][2][0][RTW89_KCC][1][118] = 127,
[1][1][2][0][RTW89_KCC][0][118] = 127,
[1][1][2][0][RTW89_ACMA][1][118] = 127,
@@ -44139,6 +45714,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][0][RTW89_QATAR][0][118] = 127,
[1][1][2][0][RTW89_UK][1][118] = 127,
[1][1][2][0][RTW89_UK][0][118] = 127,
+ [1][1][2][0][RTW89_THAILAND][1][118] = 127,
+ [1][1][2][0][RTW89_THAILAND][0][118] = 127,
[1][1][2][1][RTW89_FCC][1][1] = 10,
[1][1][2][1][RTW89_FCC][2][1] = 58,
[1][1][2][1][RTW89_ETSI][1][1] = 42,
@@ -44146,6 +45723,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][1] = 52,
[1][1][2][1][RTW89_MKK][0][1] = 12,
[1][1][2][1][RTW89_IC][1][1] = 10,
+ [1][1][2][1][RTW89_IC][2][1] = 58,
[1][1][2][1][RTW89_KCC][1][1] = 28,
[1][1][2][1][RTW89_KCC][0][1] = 12,
[1][1][2][1][RTW89_ACMA][1][1] = 42,
@@ -44155,6 +45733,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][1] = 6,
[1][1][2][1][RTW89_UK][1][1] = 42,
[1][1][2][1][RTW89_UK][0][1] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][1] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][1] = 6,
[1][1][2][1][RTW89_FCC][1][5] = 10,
[1][1][2][1][RTW89_FCC][2][5] = 58,
[1][1][2][1][RTW89_ETSI][1][5] = 42,
@@ -44162,6 +45742,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][5] = 52,
[1][1][2][1][RTW89_MKK][0][5] = 12,
[1][1][2][1][RTW89_IC][1][5] = 10,
+ [1][1][2][1][RTW89_IC][2][5] = 58,
[1][1][2][1][RTW89_KCC][1][5] = 28,
[1][1][2][1][RTW89_KCC][0][5] = 12,
[1][1][2][1][RTW89_ACMA][1][5] = 42,
@@ -44171,6 +45752,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][5] = 6,
[1][1][2][1][RTW89_UK][1][5] = 42,
[1][1][2][1][RTW89_UK][0][5] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][5] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][5] = 6,
[1][1][2][1][RTW89_FCC][1][9] = 10,
[1][1][2][1][RTW89_FCC][2][9] = 58,
[1][1][2][1][RTW89_ETSI][1][9] = 42,
@@ -44178,6 +45761,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][9] = 52,
[1][1][2][1][RTW89_MKK][0][9] = 12,
[1][1][2][1][RTW89_IC][1][9] = 10,
+ [1][1][2][1][RTW89_IC][2][9] = 58,
[1][1][2][1][RTW89_KCC][1][9] = 28,
[1][1][2][1][RTW89_KCC][0][9] = 12,
[1][1][2][1][RTW89_ACMA][1][9] = 42,
@@ -44187,6 +45771,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][9] = 6,
[1][1][2][1][RTW89_UK][1][9] = 42,
[1][1][2][1][RTW89_UK][0][9] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][9] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][9] = 6,
[1][1][2][1][RTW89_FCC][1][13] = 10,
[1][1][2][1][RTW89_FCC][2][13] = 58,
[1][1][2][1][RTW89_ETSI][1][13] = 42,
@@ -44194,6 +45780,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][13] = 52,
[1][1][2][1][RTW89_MKK][0][13] = 12,
[1][1][2][1][RTW89_IC][1][13] = 10,
+ [1][1][2][1][RTW89_IC][2][13] = 58,
[1][1][2][1][RTW89_KCC][1][13] = 28,
[1][1][2][1][RTW89_KCC][0][13] = 12,
[1][1][2][1][RTW89_ACMA][1][13] = 42,
@@ -44203,6 +45790,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][13] = 6,
[1][1][2][1][RTW89_UK][1][13] = 42,
[1][1][2][1][RTW89_UK][0][13] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][13] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][13] = 6,
[1][1][2][1][RTW89_FCC][1][16] = 10,
[1][1][2][1][RTW89_FCC][2][16] = 58,
[1][1][2][1][RTW89_ETSI][1][16] = 42,
@@ -44210,6 +45799,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][16] = 52,
[1][1][2][1][RTW89_MKK][0][16] = 12,
[1][1][2][1][RTW89_IC][1][16] = 10,
+ [1][1][2][1][RTW89_IC][2][16] = 58,
[1][1][2][1][RTW89_KCC][1][16] = 28,
[1][1][2][1][RTW89_KCC][0][16] = 12,
[1][1][2][1][RTW89_ACMA][1][16] = 42,
@@ -44219,6 +45809,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][16] = 6,
[1][1][2][1][RTW89_UK][1][16] = 42,
[1][1][2][1][RTW89_UK][0][16] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][16] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][16] = 6,
[1][1][2][1][RTW89_FCC][1][20] = 10,
[1][1][2][1][RTW89_FCC][2][20] = 58,
[1][1][2][1][RTW89_ETSI][1][20] = 42,
@@ -44226,6 +45818,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][20] = 52,
[1][1][2][1][RTW89_MKK][0][20] = 12,
[1][1][2][1][RTW89_IC][1][20] = 10,
+ [1][1][2][1][RTW89_IC][2][20] = 58,
[1][1][2][1][RTW89_KCC][1][20] = 28,
[1][1][2][1][RTW89_KCC][0][20] = 12,
[1][1][2][1][RTW89_ACMA][1][20] = 42,
@@ -44235,6 +45828,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][20] = 6,
[1][1][2][1][RTW89_UK][1][20] = 42,
[1][1][2][1][RTW89_UK][0][20] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][20] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][20] = 6,
[1][1][2][1][RTW89_FCC][1][24] = 10,
[1][1][2][1][RTW89_FCC][2][24] = 70,
[1][1][2][1][RTW89_ETSI][1][24] = 42,
@@ -44242,6 +45837,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][24] = 54,
[1][1][2][1][RTW89_MKK][0][24] = 14,
[1][1][2][1][RTW89_IC][1][24] = 10,
+ [1][1][2][1][RTW89_IC][2][24] = 70,
[1][1][2][1][RTW89_KCC][1][24] = 28,
[1][1][2][1][RTW89_KCC][0][24] = 12,
[1][1][2][1][RTW89_ACMA][1][24] = 42,
@@ -44251,6 +45847,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][24] = 6,
[1][1][2][1][RTW89_UK][1][24] = 42,
[1][1][2][1][RTW89_UK][0][24] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][24] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][24] = 6,
[1][1][2][1][RTW89_FCC][1][28] = 10,
[1][1][2][1][RTW89_FCC][2][28] = 70,
[1][1][2][1][RTW89_ETSI][1][28] = 42,
@@ -44258,6 +45856,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][28] = 52,
[1][1][2][1][RTW89_MKK][0][28] = 14,
[1][1][2][1][RTW89_IC][1][28] = 10,
+ [1][1][2][1][RTW89_IC][2][28] = 70,
[1][1][2][1][RTW89_KCC][1][28] = 28,
[1][1][2][1][RTW89_KCC][0][28] = 14,
[1][1][2][1][RTW89_ACMA][1][28] = 42,
@@ -44267,6 +45866,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][28] = 6,
[1][1][2][1][RTW89_UK][1][28] = 42,
[1][1][2][1][RTW89_UK][0][28] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][28] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][28] = 6,
[1][1][2][1][RTW89_FCC][1][31] = 10,
[1][1][2][1][RTW89_FCC][2][31] = 70,
[1][1][2][1][RTW89_ETSI][1][31] = 42,
@@ -44274,6 +45875,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][31] = 52,
[1][1][2][1][RTW89_MKK][0][31] = 14,
[1][1][2][1][RTW89_IC][1][31] = 10,
+ [1][1][2][1][RTW89_IC][2][31] = 70,
[1][1][2][1][RTW89_KCC][1][31] = 28,
[1][1][2][1][RTW89_KCC][0][31] = 14,
[1][1][2][1][RTW89_ACMA][1][31] = 42,
@@ -44283,6 +45885,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][31] = 6,
[1][1][2][1][RTW89_UK][1][31] = 42,
[1][1][2][1][RTW89_UK][0][31] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][31] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][31] = 6,
[1][1][2][1][RTW89_FCC][1][35] = 10,
[1][1][2][1][RTW89_FCC][2][35] = 70,
[1][1][2][1][RTW89_ETSI][1][35] = 42,
@@ -44290,6 +45894,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][35] = 52,
[1][1][2][1][RTW89_MKK][0][35] = 14,
[1][1][2][1][RTW89_IC][1][35] = 10,
+ [1][1][2][1][RTW89_IC][2][35] = 70,
[1][1][2][1][RTW89_KCC][1][35] = 28,
[1][1][2][1][RTW89_KCC][0][35] = 14,
[1][1][2][1][RTW89_ACMA][1][35] = 42,
@@ -44299,6 +45904,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][35] = 6,
[1][1][2][1][RTW89_UK][1][35] = 42,
[1][1][2][1][RTW89_UK][0][35] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][35] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][35] = 6,
[1][1][2][1][RTW89_FCC][1][39] = 10,
[1][1][2][1][RTW89_FCC][2][39] = 70,
[1][1][2][1][RTW89_ETSI][1][39] = 42,
@@ -44306,6 +45913,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][39] = 52,
[1][1][2][1][RTW89_MKK][0][39] = 14,
[1][1][2][1][RTW89_IC][1][39] = 10,
+ [1][1][2][1][RTW89_IC][2][39] = 70,
[1][1][2][1][RTW89_KCC][1][39] = 28,
[1][1][2][1][RTW89_KCC][0][39] = 14,
[1][1][2][1][RTW89_ACMA][1][39] = 42,
@@ -44315,6 +45923,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][39] = 6,
[1][1][2][1][RTW89_UK][1][39] = 42,
[1][1][2][1][RTW89_UK][0][39] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][39] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][39] = 6,
[1][1][2][1][RTW89_FCC][1][43] = 10,
[1][1][2][1][RTW89_FCC][2][43] = 70,
[1][1][2][1][RTW89_ETSI][1][43] = 42,
@@ -44322,6 +45932,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][43] = 52,
[1][1][2][1][RTW89_MKK][0][43] = 14,
[1][1][2][1][RTW89_IC][1][43] = 10,
+ [1][1][2][1][RTW89_IC][2][43] = 70,
[1][1][2][1][RTW89_KCC][1][43] = 28,
[1][1][2][1][RTW89_KCC][0][43] = 14,
[1][1][2][1][RTW89_ACMA][1][43] = 42,
@@ -44331,6 +45942,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][43] = 6,
[1][1][2][1][RTW89_UK][1][43] = 42,
[1][1][2][1][RTW89_UK][0][43] = 6,
+ [1][1][2][1][RTW89_THAILAND][1][43] = 46,
+ [1][1][2][1][RTW89_THAILAND][0][43] = 6,
[1][1][2][1][RTW89_FCC][1][46] = 12,
[1][1][2][1][RTW89_FCC][2][46] = 127,
[1][1][2][1][RTW89_ETSI][1][46] = 127,
@@ -44338,6 +45951,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][46] = 127,
[1][1][2][1][RTW89_MKK][0][46] = 127,
[1][1][2][1][RTW89_IC][1][46] = 12,
+ [1][1][2][1][RTW89_IC][2][46] = 68,
[1][1][2][1][RTW89_KCC][1][46] = 28,
[1][1][2][1][RTW89_KCC][0][46] = 127,
[1][1][2][1][RTW89_ACMA][1][46] = 127,
@@ -44347,6 +45961,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][46] = 127,
[1][1][2][1][RTW89_UK][1][46] = 127,
[1][1][2][1][RTW89_UK][0][46] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][46] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][46] = 127,
[1][1][2][1][RTW89_FCC][1][50] = 12,
[1][1][2][1][RTW89_FCC][2][50] = 127,
[1][1][2][1][RTW89_ETSI][1][50] = 127,
@@ -44354,6 +45970,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][50] = 127,
[1][1][2][1][RTW89_MKK][0][50] = 127,
[1][1][2][1][RTW89_IC][1][50] = 12,
+ [1][1][2][1][RTW89_IC][2][50] = 68,
[1][1][2][1][RTW89_KCC][1][50] = 28,
[1][1][2][1][RTW89_KCC][0][50] = 127,
[1][1][2][1][RTW89_ACMA][1][50] = 127,
@@ -44363,6 +45980,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][50] = 127,
[1][1][2][1][RTW89_UK][1][50] = 127,
[1][1][2][1][RTW89_UK][0][50] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][50] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][50] = 127,
[1][1][2][1][RTW89_FCC][1][54] = 10,
[1][1][2][1][RTW89_FCC][2][54] = 127,
[1][1][2][1][RTW89_ETSI][1][54] = 127,
@@ -44370,6 +45989,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][54] = 127,
[1][1][2][1][RTW89_MKK][0][54] = 127,
[1][1][2][1][RTW89_IC][1][54] = 10,
+ [1][1][2][1][RTW89_IC][2][54] = 127,
[1][1][2][1][RTW89_KCC][1][54] = 28,
[1][1][2][1][RTW89_KCC][0][54] = 127,
[1][1][2][1][RTW89_ACMA][1][54] = 127,
@@ -44379,6 +45999,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][54] = 127,
[1][1][2][1][RTW89_UK][1][54] = 127,
[1][1][2][1][RTW89_UK][0][54] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][54] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][54] = 127,
[1][1][2][1][RTW89_FCC][1][58] = 10,
[1][1][2][1][RTW89_FCC][2][58] = 66,
[1][1][2][1][RTW89_ETSI][1][58] = 127,
@@ -44386,6 +46008,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][58] = 127,
[1][1][2][1][RTW89_MKK][0][58] = 127,
[1][1][2][1][RTW89_IC][1][58] = 10,
+ [1][1][2][1][RTW89_IC][2][58] = 66,
[1][1][2][1][RTW89_KCC][1][58] = 28,
[1][1][2][1][RTW89_KCC][0][58] = 127,
[1][1][2][1][RTW89_ACMA][1][58] = 127,
@@ -44395,6 +46018,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][58] = 127,
[1][1][2][1][RTW89_UK][1][58] = 127,
[1][1][2][1][RTW89_UK][0][58] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][58] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][58] = 127,
[1][1][2][1][RTW89_FCC][1][61] = 10,
[1][1][2][1][RTW89_FCC][2][61] = 66,
[1][1][2][1][RTW89_ETSI][1][61] = 127,
@@ -44402,6 +46027,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][61] = 127,
[1][1][2][1][RTW89_MKK][0][61] = 127,
[1][1][2][1][RTW89_IC][1][61] = 10,
+ [1][1][2][1][RTW89_IC][2][61] = 66,
[1][1][2][1][RTW89_KCC][1][61] = 28,
[1][1][2][1][RTW89_KCC][0][61] = 127,
[1][1][2][1][RTW89_ACMA][1][61] = 127,
@@ -44411,6 +46037,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][61] = 127,
[1][1][2][1][RTW89_UK][1][61] = 127,
[1][1][2][1][RTW89_UK][0][61] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][61] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][61] = 127,
[1][1][2][1][RTW89_FCC][1][65] = 10,
[1][1][2][1][RTW89_FCC][2][65] = 66,
[1][1][2][1][RTW89_ETSI][1][65] = 127,
@@ -44418,6 +46046,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][65] = 127,
[1][1][2][1][RTW89_MKK][0][65] = 127,
[1][1][2][1][RTW89_IC][1][65] = 10,
+ [1][1][2][1][RTW89_IC][2][65] = 66,
[1][1][2][1][RTW89_KCC][1][65] = 28,
[1][1][2][1][RTW89_KCC][0][65] = 127,
[1][1][2][1][RTW89_ACMA][1][65] = 127,
@@ -44427,6 +46056,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][65] = 127,
[1][1][2][1][RTW89_UK][1][65] = 127,
[1][1][2][1][RTW89_UK][0][65] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][65] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][65] = 127,
[1][1][2][1][RTW89_FCC][1][69] = 10,
[1][1][2][1][RTW89_FCC][2][69] = 66,
[1][1][2][1][RTW89_ETSI][1][69] = 127,
@@ -44434,6 +46065,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][69] = 127,
[1][1][2][1][RTW89_MKK][0][69] = 127,
[1][1][2][1][RTW89_IC][1][69] = 10,
+ [1][1][2][1][RTW89_IC][2][69] = 66,
[1][1][2][1][RTW89_KCC][1][69] = 28,
[1][1][2][1][RTW89_KCC][0][69] = 127,
[1][1][2][1][RTW89_ACMA][1][69] = 127,
@@ -44443,6 +46075,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][69] = 127,
[1][1][2][1][RTW89_UK][1][69] = 127,
[1][1][2][1][RTW89_UK][0][69] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][69] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][69] = 127,
[1][1][2][1][RTW89_FCC][1][73] = 10,
[1][1][2][1][RTW89_FCC][2][73] = 66,
[1][1][2][1][RTW89_ETSI][1][73] = 127,
@@ -44450,6 +46084,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][73] = 127,
[1][1][2][1][RTW89_MKK][0][73] = 127,
[1][1][2][1][RTW89_IC][1][73] = 10,
+ [1][1][2][1][RTW89_IC][2][73] = 66,
[1][1][2][1][RTW89_KCC][1][73] = 28,
[1][1][2][1][RTW89_KCC][0][73] = 127,
[1][1][2][1][RTW89_ACMA][1][73] = 127,
@@ -44459,6 +46094,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][73] = 127,
[1][1][2][1][RTW89_UK][1][73] = 127,
[1][1][2][1][RTW89_UK][0][73] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][73] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][73] = 127,
[1][1][2][1][RTW89_FCC][1][76] = 10,
[1][1][2][1][RTW89_FCC][2][76] = 66,
[1][1][2][1][RTW89_ETSI][1][76] = 127,
@@ -44466,6 +46103,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][76] = 127,
[1][1][2][1][RTW89_MKK][0][76] = 127,
[1][1][2][1][RTW89_IC][1][76] = 10,
+ [1][1][2][1][RTW89_IC][2][76] = 66,
[1][1][2][1][RTW89_KCC][1][76] = 28,
[1][1][2][1][RTW89_KCC][0][76] = 127,
[1][1][2][1][RTW89_ACMA][1][76] = 127,
@@ -44475,6 +46113,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][76] = 127,
[1][1][2][1][RTW89_UK][1][76] = 127,
[1][1][2][1][RTW89_UK][0][76] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][76] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][76] = 127,
[1][1][2][1][RTW89_FCC][1][80] = 10,
[1][1][2][1][RTW89_FCC][2][80] = 66,
[1][1][2][1][RTW89_ETSI][1][80] = 127,
@@ -44482,6 +46122,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][80] = 127,
[1][1][2][1][RTW89_MKK][0][80] = 127,
[1][1][2][1][RTW89_IC][1][80] = 10,
+ [1][1][2][1][RTW89_IC][2][80] = 66,
[1][1][2][1][RTW89_KCC][1][80] = 32,
[1][1][2][1][RTW89_KCC][0][80] = 127,
[1][1][2][1][RTW89_ACMA][1][80] = 127,
@@ -44491,6 +46132,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][80] = 127,
[1][1][2][1][RTW89_UK][1][80] = 127,
[1][1][2][1][RTW89_UK][0][80] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][80] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][80] = 127,
[1][1][2][1][RTW89_FCC][1][84] = 10,
[1][1][2][1][RTW89_FCC][2][84] = 66,
[1][1][2][1][RTW89_ETSI][1][84] = 127,
@@ -44498,6 +46141,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][84] = 127,
[1][1][2][1][RTW89_MKK][0][84] = 127,
[1][1][2][1][RTW89_IC][1][84] = 10,
+ [1][1][2][1][RTW89_IC][2][84] = 66,
[1][1][2][1][RTW89_KCC][1][84] = 32,
[1][1][2][1][RTW89_KCC][0][84] = 127,
[1][1][2][1][RTW89_ACMA][1][84] = 127,
@@ -44507,6 +46151,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][84] = 127,
[1][1][2][1][RTW89_UK][1][84] = 127,
[1][1][2][1][RTW89_UK][0][84] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][84] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][84] = 127,
[1][1][2][1][RTW89_FCC][1][88] = 10,
[1][1][2][1][RTW89_FCC][2][88] = 127,
[1][1][2][1][RTW89_ETSI][1][88] = 127,
@@ -44514,6 +46160,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][88] = 127,
[1][1][2][1][RTW89_MKK][0][88] = 127,
[1][1][2][1][RTW89_IC][1][88] = 10,
+ [1][1][2][1][RTW89_IC][2][88] = 127,
[1][1][2][1][RTW89_KCC][1][88] = 32,
[1][1][2][1][RTW89_KCC][0][88] = 127,
[1][1][2][1][RTW89_ACMA][1][88] = 127,
@@ -44523,6 +46170,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][88] = 127,
[1][1][2][1][RTW89_UK][1][88] = 127,
[1][1][2][1][RTW89_UK][0][88] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][88] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][88] = 127,
[1][1][2][1][RTW89_FCC][1][91] = 12,
[1][1][2][1][RTW89_FCC][2][91] = 127,
[1][1][2][1][RTW89_ETSI][1][91] = 127,
@@ -44530,6 +46179,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][91] = 127,
[1][1][2][1][RTW89_MKK][0][91] = 127,
[1][1][2][1][RTW89_IC][1][91] = 12,
+ [1][1][2][1][RTW89_IC][2][91] = 127,
[1][1][2][1][RTW89_KCC][1][91] = 32,
[1][1][2][1][RTW89_KCC][0][91] = 127,
[1][1][2][1][RTW89_ACMA][1][91] = 127,
@@ -44539,6 +46189,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][91] = 127,
[1][1][2][1][RTW89_UK][1][91] = 127,
[1][1][2][1][RTW89_UK][0][91] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][91] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][91] = 127,
[1][1][2][1][RTW89_FCC][1][95] = 10,
[1][1][2][1][RTW89_FCC][2][95] = 127,
[1][1][2][1][RTW89_ETSI][1][95] = 127,
@@ -44546,6 +46198,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][95] = 127,
[1][1][2][1][RTW89_MKK][0][95] = 127,
[1][1][2][1][RTW89_IC][1][95] = 10,
+ [1][1][2][1][RTW89_IC][2][95] = 127,
[1][1][2][1][RTW89_KCC][1][95] = 32,
[1][1][2][1][RTW89_KCC][0][95] = 127,
[1][1][2][1][RTW89_ACMA][1][95] = 127,
@@ -44555,6 +46208,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][95] = 127,
[1][1][2][1][RTW89_UK][1][95] = 127,
[1][1][2][1][RTW89_UK][0][95] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][95] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][95] = 127,
[1][1][2][1][RTW89_FCC][1][99] = 10,
[1][1][2][1][RTW89_FCC][2][99] = 127,
[1][1][2][1][RTW89_ETSI][1][99] = 127,
@@ -44562,6 +46217,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][99] = 127,
[1][1][2][1][RTW89_MKK][0][99] = 127,
[1][1][2][1][RTW89_IC][1][99] = 10,
+ [1][1][2][1][RTW89_IC][2][99] = 127,
[1][1][2][1][RTW89_KCC][1][99] = 32,
[1][1][2][1][RTW89_KCC][0][99] = 127,
[1][1][2][1][RTW89_ACMA][1][99] = 127,
@@ -44571,6 +46227,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][99] = 127,
[1][1][2][1][RTW89_UK][1][99] = 127,
[1][1][2][1][RTW89_UK][0][99] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][99] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][99] = 127,
[1][1][2][1][RTW89_FCC][1][103] = 10,
[1][1][2][1][RTW89_FCC][2][103] = 127,
[1][1][2][1][RTW89_ETSI][1][103] = 127,
@@ -44578,6 +46236,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][103] = 127,
[1][1][2][1][RTW89_MKK][0][103] = 127,
[1][1][2][1][RTW89_IC][1][103] = 10,
+ [1][1][2][1][RTW89_IC][2][103] = 127,
[1][1][2][1][RTW89_KCC][1][103] = 32,
[1][1][2][1][RTW89_KCC][0][103] = 127,
[1][1][2][1][RTW89_ACMA][1][103] = 127,
@@ -44587,6 +46246,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][103] = 127,
[1][1][2][1][RTW89_UK][1][103] = 127,
[1][1][2][1][RTW89_UK][0][103] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][103] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][103] = 127,
[1][1][2][1][RTW89_FCC][1][106] = 12,
[1][1][2][1][RTW89_FCC][2][106] = 127,
[1][1][2][1][RTW89_ETSI][1][106] = 127,
@@ -44594,6 +46255,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][106] = 127,
[1][1][2][1][RTW89_MKK][0][106] = 127,
[1][1][2][1][RTW89_IC][1][106] = 12,
+ [1][1][2][1][RTW89_IC][2][106] = 127,
[1][1][2][1][RTW89_KCC][1][106] = 32,
[1][1][2][1][RTW89_KCC][0][106] = 127,
[1][1][2][1][RTW89_ACMA][1][106] = 127,
@@ -44603,6 +46265,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][106] = 127,
[1][1][2][1][RTW89_UK][1][106] = 127,
[1][1][2][1][RTW89_UK][0][106] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][106] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][106] = 127,
[1][1][2][1][RTW89_FCC][1][110] = 127,
[1][1][2][1][RTW89_FCC][2][110] = 127,
[1][1][2][1][RTW89_ETSI][1][110] = 127,
@@ -44610,6 +46274,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][110] = 127,
[1][1][2][1][RTW89_MKK][0][110] = 127,
[1][1][2][1][RTW89_IC][1][110] = 127,
+ [1][1][2][1][RTW89_IC][2][110] = 127,
[1][1][2][1][RTW89_KCC][1][110] = 127,
[1][1][2][1][RTW89_KCC][0][110] = 127,
[1][1][2][1][RTW89_ACMA][1][110] = 127,
@@ -44619,6 +46284,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][110] = 127,
[1][1][2][1][RTW89_UK][1][110] = 127,
[1][1][2][1][RTW89_UK][0][110] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][110] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][110] = 127,
[1][1][2][1][RTW89_FCC][1][114] = 127,
[1][1][2][1][RTW89_FCC][2][114] = 127,
[1][1][2][1][RTW89_ETSI][1][114] = 127,
@@ -44626,6 +46293,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][114] = 127,
[1][1][2][1][RTW89_MKK][0][114] = 127,
[1][1][2][1][RTW89_IC][1][114] = 127,
+ [1][1][2][1][RTW89_IC][2][114] = 127,
[1][1][2][1][RTW89_KCC][1][114] = 127,
[1][1][2][1][RTW89_KCC][0][114] = 127,
[1][1][2][1][RTW89_ACMA][1][114] = 127,
@@ -44635,6 +46303,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][114] = 127,
[1][1][2][1][RTW89_UK][1][114] = 127,
[1][1][2][1][RTW89_UK][0][114] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][114] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][114] = 127,
[1][1][2][1][RTW89_FCC][1][118] = 127,
[1][1][2][1][RTW89_FCC][2][118] = 127,
[1][1][2][1][RTW89_ETSI][1][118] = 127,
@@ -44642,6 +46312,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_MKK][1][118] = 127,
[1][1][2][1][RTW89_MKK][0][118] = 127,
[1][1][2][1][RTW89_IC][1][118] = 127,
+ [1][1][2][1][RTW89_IC][2][118] = 127,
[1][1][2][1][RTW89_KCC][1][118] = 127,
[1][1][2][1][RTW89_KCC][0][118] = 127,
[1][1][2][1][RTW89_ACMA][1][118] = 127,
@@ -44651,6 +46322,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[1][1][2][1][RTW89_QATAR][0][118] = 127,
[1][1][2][1][RTW89_UK][1][118] = 127,
[1][1][2][1][RTW89_UK][0][118] = 127,
+ [1][1][2][1][RTW89_THAILAND][1][118] = 127,
+ [1][1][2][1][RTW89_THAILAND][0][118] = 127,
[2][0][2][0][RTW89_FCC][1][3] = 46,
[2][0][2][0][RTW89_FCC][2][3] = 60,
[2][0][2][0][RTW89_ETSI][1][3] = 58,
@@ -44658,6 +46331,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][3] = 58,
[2][0][2][0][RTW89_MKK][0][3] = 26,
[2][0][2][0][RTW89_IC][1][3] = 46,
+ [2][0][2][0][RTW89_IC][2][3] = 60,
[2][0][2][0][RTW89_KCC][1][3] = 50,
[2][0][2][0][RTW89_KCC][0][3] = 24,
[2][0][2][0][RTW89_ACMA][1][3] = 58,
@@ -44667,6 +46341,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][3] = 30,
[2][0][2][0][RTW89_UK][1][3] = 58,
[2][0][2][0][RTW89_UK][0][3] = 30,
+ [2][0][2][0][RTW89_THAILAND][1][3] = 58,
+ [2][0][2][0][RTW89_THAILAND][0][3] = 30,
[2][0][2][0][RTW89_FCC][1][11] = 46,
[2][0][2][0][RTW89_FCC][2][11] = 60,
[2][0][2][0][RTW89_ETSI][1][11] = 58,
@@ -44674,6 +46350,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][11] = 58,
[2][0][2][0][RTW89_MKK][0][11] = 24,
[2][0][2][0][RTW89_IC][1][11] = 46,
+ [2][0][2][0][RTW89_IC][2][11] = 60,
[2][0][2][0][RTW89_KCC][1][11] = 50,
[2][0][2][0][RTW89_KCC][0][11] = 24,
[2][0][2][0][RTW89_ACMA][1][11] = 58,
@@ -44683,6 +46360,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][11] = 30,
[2][0][2][0][RTW89_UK][1][11] = 58,
[2][0][2][0][RTW89_UK][0][11] = 30,
+ [2][0][2][0][RTW89_THAILAND][1][11] = 58,
+ [2][0][2][0][RTW89_THAILAND][0][11] = 30,
[2][0][2][0][RTW89_FCC][1][18] = 46,
[2][0][2][0][RTW89_FCC][2][18] = 60,
[2][0][2][0][RTW89_ETSI][1][18] = 58,
@@ -44690,6 +46369,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][18] = 58,
[2][0][2][0][RTW89_MKK][0][18] = 24,
[2][0][2][0][RTW89_IC][1][18] = 46,
+ [2][0][2][0][RTW89_IC][2][18] = 60,
[2][0][2][0][RTW89_KCC][1][18] = 50,
[2][0][2][0][RTW89_KCC][0][18] = 24,
[2][0][2][0][RTW89_ACMA][1][18] = 58,
@@ -44699,6 +46379,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][18] = 30,
[2][0][2][0][RTW89_UK][1][18] = 58,
[2][0][2][0][RTW89_UK][0][18] = 30,
+ [2][0][2][0][RTW89_THAILAND][1][18] = 58,
+ [2][0][2][0][RTW89_THAILAND][0][18] = 30,
[2][0][2][0][RTW89_FCC][1][26] = 46,
[2][0][2][0][RTW89_FCC][2][26] = 60,
[2][0][2][0][RTW89_ETSI][1][26] = 58,
@@ -44706,6 +46388,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][26] = 58,
[2][0][2][0][RTW89_MKK][0][26] = 24,
[2][0][2][0][RTW89_IC][1][26] = 46,
+ [2][0][2][0][RTW89_IC][2][26] = 60,
[2][0][2][0][RTW89_KCC][1][26] = 50,
[2][0][2][0][RTW89_KCC][0][26] = 26,
[2][0][2][0][RTW89_ACMA][1][26] = 58,
@@ -44715,6 +46398,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][26] = 30,
[2][0][2][0][RTW89_UK][1][26] = 58,
[2][0][2][0][RTW89_UK][0][26] = 30,
+ [2][0][2][0][RTW89_THAILAND][1][26] = 58,
+ [2][0][2][0][RTW89_THAILAND][0][26] = 30,
[2][0][2][0][RTW89_FCC][1][33] = 46,
[2][0][2][0][RTW89_FCC][2][33] = 60,
[2][0][2][0][RTW89_ETSI][1][33] = 58,
@@ -44722,6 +46407,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][33] = 58,
[2][0][2][0][RTW89_MKK][0][33] = 24,
[2][0][2][0][RTW89_IC][1][33] = 46,
+ [2][0][2][0][RTW89_IC][2][33] = 60,
[2][0][2][0][RTW89_KCC][1][33] = 50,
[2][0][2][0][RTW89_KCC][0][33] = 24,
[2][0][2][0][RTW89_ACMA][1][33] = 58,
@@ -44731,6 +46417,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][33] = 30,
[2][0][2][0][RTW89_UK][1][33] = 58,
[2][0][2][0][RTW89_UK][0][33] = 30,
+ [2][0][2][0][RTW89_THAILAND][1][33] = 58,
+ [2][0][2][0][RTW89_THAILAND][0][33] = 30,
[2][0][2][0][RTW89_FCC][1][41] = 46,
[2][0][2][0][RTW89_FCC][2][41] = 60,
[2][0][2][0][RTW89_ETSI][1][41] = 58,
@@ -44738,6 +46426,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][41] = 58,
[2][0][2][0][RTW89_MKK][0][41] = 24,
[2][0][2][0][RTW89_IC][1][41] = 46,
+ [2][0][2][0][RTW89_IC][2][41] = 60,
[2][0][2][0][RTW89_KCC][1][41] = 50,
[2][0][2][0][RTW89_KCC][0][41] = 24,
[2][0][2][0][RTW89_ACMA][1][41] = 58,
@@ -44747,6 +46436,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][41] = 30,
[2][0][2][0][RTW89_UK][1][41] = 58,
[2][0][2][0][RTW89_UK][0][41] = 30,
+ [2][0][2][0][RTW89_THAILAND][1][41] = 58,
+ [2][0][2][0][RTW89_THAILAND][0][41] = 30,
[2][0][2][0][RTW89_FCC][1][48] = 46,
[2][0][2][0][RTW89_FCC][2][48] = 127,
[2][0][2][0][RTW89_ETSI][1][48] = 127,
@@ -44754,6 +46445,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][48] = 127,
[2][0][2][0][RTW89_MKK][0][48] = 127,
[2][0][2][0][RTW89_IC][1][48] = 46,
+ [2][0][2][0][RTW89_IC][2][48] = 60,
[2][0][2][0][RTW89_KCC][1][48] = 48,
[2][0][2][0][RTW89_KCC][0][48] = 127,
[2][0][2][0][RTW89_ACMA][1][48] = 127,
@@ -44763,6 +46455,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][48] = 127,
[2][0][2][0][RTW89_UK][1][48] = 127,
[2][0][2][0][RTW89_UK][0][48] = 127,
+ [2][0][2][0][RTW89_THAILAND][1][48] = 127,
+ [2][0][2][0][RTW89_THAILAND][0][48] = 127,
[2][0][2][0][RTW89_FCC][1][56] = 46,
[2][0][2][0][RTW89_FCC][2][56] = 127,
[2][0][2][0][RTW89_ETSI][1][56] = 127,
@@ -44770,6 +46464,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][56] = 127,
[2][0][2][0][RTW89_MKK][0][56] = 127,
[2][0][2][0][RTW89_IC][1][56] = 46,
+ [2][0][2][0][RTW89_IC][2][56] = 58,
[2][0][2][0][RTW89_KCC][1][56] = 48,
[2][0][2][0][RTW89_KCC][0][56] = 127,
[2][0][2][0][RTW89_ACMA][1][56] = 127,
@@ -44779,6 +46474,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][56] = 127,
[2][0][2][0][RTW89_UK][1][56] = 127,
[2][0][2][0][RTW89_UK][0][56] = 127,
+ [2][0][2][0][RTW89_THAILAND][1][56] = 127,
+ [2][0][2][0][RTW89_THAILAND][0][56] = 127,
[2][0][2][0][RTW89_FCC][1][63] = 46,
[2][0][2][0][RTW89_FCC][2][63] = 58,
[2][0][2][0][RTW89_ETSI][1][63] = 127,
@@ -44786,6 +46483,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][63] = 127,
[2][0][2][0][RTW89_MKK][0][63] = 127,
[2][0][2][0][RTW89_IC][1][63] = 46,
+ [2][0][2][0][RTW89_IC][2][63] = 58,
[2][0][2][0][RTW89_KCC][1][63] = 48,
[2][0][2][0][RTW89_KCC][0][63] = 127,
[2][0][2][0][RTW89_ACMA][1][63] = 127,
@@ -44795,6 +46493,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][63] = 127,
[2][0][2][0][RTW89_UK][1][63] = 127,
[2][0][2][0][RTW89_UK][0][63] = 127,
+ [2][0][2][0][RTW89_THAILAND][1][63] = 127,
+ [2][0][2][0][RTW89_THAILAND][0][63] = 127,
[2][0][2][0][RTW89_FCC][1][71] = 46,
[2][0][2][0][RTW89_FCC][2][71] = 58,
[2][0][2][0][RTW89_ETSI][1][71] = 127,
@@ -44802,6 +46502,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][71] = 127,
[2][0][2][0][RTW89_MKK][0][71] = 127,
[2][0][2][0][RTW89_IC][1][71] = 46,
+ [2][0][2][0][RTW89_IC][2][71] = 58,
[2][0][2][0][RTW89_KCC][1][71] = 48,
[2][0][2][0][RTW89_KCC][0][71] = 127,
[2][0][2][0][RTW89_ACMA][1][71] = 127,
@@ -44811,6 +46512,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][71] = 127,
[2][0][2][0][RTW89_UK][1][71] = 127,
[2][0][2][0][RTW89_UK][0][71] = 127,
+ [2][0][2][0][RTW89_THAILAND][1][71] = 127,
+ [2][0][2][0][RTW89_THAILAND][0][71] = 127,
[2][0][2][0][RTW89_FCC][1][78] = 46,
[2][0][2][0][RTW89_FCC][2][78] = 58,
[2][0][2][0][RTW89_ETSI][1][78] = 127,
@@ -44818,6 +46521,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][78] = 127,
[2][0][2][0][RTW89_MKK][0][78] = 127,
[2][0][2][0][RTW89_IC][1][78] = 46,
+ [2][0][2][0][RTW89_IC][2][78] = 58,
[2][0][2][0][RTW89_KCC][1][78] = 52,
[2][0][2][0][RTW89_KCC][0][78] = 127,
[2][0][2][0][RTW89_ACMA][1][78] = 127,
@@ -44827,6 +46531,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][78] = 127,
[2][0][2][0][RTW89_UK][1][78] = 127,
[2][0][2][0][RTW89_UK][0][78] = 127,
+ [2][0][2][0][RTW89_THAILAND][1][78] = 127,
+ [2][0][2][0][RTW89_THAILAND][0][78] = 127,
[2][0][2][0][RTW89_FCC][1][86] = 46,
[2][0][2][0][RTW89_FCC][2][86] = 127,
[2][0][2][0][RTW89_ETSI][1][86] = 127,
@@ -44834,6 +46540,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][86] = 127,
[2][0][2][0][RTW89_MKK][0][86] = 127,
[2][0][2][0][RTW89_IC][1][86] = 46,
+ [2][0][2][0][RTW89_IC][2][86] = 127,
[2][0][2][0][RTW89_KCC][1][86] = 52,
[2][0][2][0][RTW89_KCC][0][86] = 127,
[2][0][2][0][RTW89_ACMA][1][86] = 127,
@@ -44843,6 +46550,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][86] = 127,
[2][0][2][0][RTW89_UK][1][86] = 127,
[2][0][2][0][RTW89_UK][0][86] = 127,
+ [2][0][2][0][RTW89_THAILAND][1][86] = 127,
+ [2][0][2][0][RTW89_THAILAND][0][86] = 127,
[2][0][2][0][RTW89_FCC][1][93] = 46,
[2][0][2][0][RTW89_FCC][2][93] = 127,
[2][0][2][0][RTW89_ETSI][1][93] = 127,
@@ -44850,6 +46559,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][93] = 127,
[2][0][2][0][RTW89_MKK][0][93] = 127,
[2][0][2][0][RTW89_IC][1][93] = 46,
+ [2][0][2][0][RTW89_IC][2][93] = 127,
[2][0][2][0][RTW89_KCC][1][93] = 50,
[2][0][2][0][RTW89_KCC][0][93] = 127,
[2][0][2][0][RTW89_ACMA][1][93] = 127,
@@ -44859,6 +46569,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][93] = 127,
[2][0][2][0][RTW89_UK][1][93] = 127,
[2][0][2][0][RTW89_UK][0][93] = 127,
+ [2][0][2][0][RTW89_THAILAND][1][93] = 127,
+ [2][0][2][0][RTW89_THAILAND][0][93] = 127,
[2][0][2][0][RTW89_FCC][1][101] = 44,
[2][0][2][0][RTW89_FCC][2][101] = 127,
[2][0][2][0][RTW89_ETSI][1][101] = 127,
@@ -44866,6 +46578,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][101] = 127,
[2][0][2][0][RTW89_MKK][0][101] = 127,
[2][0][2][0][RTW89_IC][1][101] = 44,
+ [2][0][2][0][RTW89_IC][2][101] = 127,
[2][0][2][0][RTW89_KCC][1][101] = 50,
[2][0][2][0][RTW89_KCC][0][101] = 127,
[2][0][2][0][RTW89_ACMA][1][101] = 127,
@@ -44875,6 +46588,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][101] = 127,
[2][0][2][0][RTW89_UK][1][101] = 127,
[2][0][2][0][RTW89_UK][0][101] = 127,
+ [2][0][2][0][RTW89_THAILAND][1][101] = 127,
+ [2][0][2][0][RTW89_THAILAND][0][101] = 127,
[2][0][2][0][RTW89_FCC][1][108] = 127,
[2][0][2][0][RTW89_FCC][2][108] = 127,
[2][0][2][0][RTW89_ETSI][1][108] = 127,
@@ -44882,6 +46597,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][108] = 127,
[2][0][2][0][RTW89_MKK][0][108] = 127,
[2][0][2][0][RTW89_IC][1][108] = 127,
+ [2][0][2][0][RTW89_IC][2][108] = 127,
[2][0][2][0][RTW89_KCC][1][108] = 127,
[2][0][2][0][RTW89_KCC][0][108] = 127,
[2][0][2][0][RTW89_ACMA][1][108] = 127,
@@ -44891,6 +46607,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][108] = 127,
[2][0][2][0][RTW89_UK][1][108] = 127,
[2][0][2][0][RTW89_UK][0][108] = 127,
+ [2][0][2][0][RTW89_THAILAND][1][108] = 127,
+ [2][0][2][0][RTW89_THAILAND][0][108] = 127,
[2][0][2][0][RTW89_FCC][1][116] = 127,
[2][0][2][0][RTW89_FCC][2][116] = 127,
[2][0][2][0][RTW89_ETSI][1][116] = 127,
@@ -44898,6 +46616,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_MKK][1][116] = 127,
[2][0][2][0][RTW89_MKK][0][116] = 127,
[2][0][2][0][RTW89_IC][1][116] = 127,
+ [2][0][2][0][RTW89_IC][2][116] = 127,
[2][0][2][0][RTW89_KCC][1][116] = 127,
[2][0][2][0][RTW89_KCC][0][116] = 127,
[2][0][2][0][RTW89_ACMA][1][116] = 127,
@@ -44907,6 +46626,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][0][2][0][RTW89_QATAR][0][116] = 127,
[2][0][2][0][RTW89_UK][1][116] = 127,
[2][0][2][0][RTW89_UK][0][116] = 127,
+ [2][0][2][0][RTW89_THAILAND][1][116] = 127,
+ [2][0][2][0][RTW89_THAILAND][0][116] = 127,
[2][1][2][0][RTW89_FCC][1][3] = 22,
[2][1][2][0][RTW89_FCC][2][3] = 50,
[2][1][2][0][RTW89_ETSI][1][3] = 54,
@@ -44914,6 +46635,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][3] = 52,
[2][1][2][0][RTW89_MKK][0][3] = 14,
[2][1][2][0][RTW89_IC][1][3] = 22,
+ [2][1][2][0][RTW89_IC][2][3] = 50,
[2][1][2][0][RTW89_KCC][1][3] = 38,
[2][1][2][0][RTW89_KCC][0][3] = 12,
[2][1][2][0][RTW89_ACMA][1][3] = 54,
@@ -44923,6 +46645,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][3] = 16,
[2][1][2][0][RTW89_UK][1][3] = 54,
[2][1][2][0][RTW89_UK][0][3] = 16,
+ [2][1][2][0][RTW89_THAILAND][1][3] = 46,
+ [2][1][2][0][RTW89_THAILAND][0][3] = 18,
[2][1][2][0][RTW89_FCC][1][11] = 20,
[2][1][2][0][RTW89_FCC][2][11] = 50,
[2][1][2][0][RTW89_ETSI][1][11] = 54,
@@ -44930,6 +46654,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][11] = 52,
[2][1][2][0][RTW89_MKK][0][11] = 12,
[2][1][2][0][RTW89_IC][1][11] = 20,
+ [2][1][2][0][RTW89_IC][2][11] = 50,
[2][1][2][0][RTW89_KCC][1][11] = 38,
[2][1][2][0][RTW89_KCC][0][11] = 12,
[2][1][2][0][RTW89_ACMA][1][11] = 54,
@@ -44939,6 +46664,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][11] = 16,
[2][1][2][0][RTW89_UK][1][11] = 54,
[2][1][2][0][RTW89_UK][0][11] = 16,
+ [2][1][2][0][RTW89_THAILAND][1][11] = 46,
+ [2][1][2][0][RTW89_THAILAND][0][11] = 18,
[2][1][2][0][RTW89_FCC][1][18] = 20,
[2][1][2][0][RTW89_FCC][2][18] = 50,
[2][1][2][0][RTW89_ETSI][1][18] = 54,
@@ -44946,6 +46673,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][18] = 52,
[2][1][2][0][RTW89_MKK][0][18] = 12,
[2][1][2][0][RTW89_IC][1][18] = 20,
+ [2][1][2][0][RTW89_IC][2][18] = 50,
[2][1][2][0][RTW89_KCC][1][18] = 38,
[2][1][2][0][RTW89_KCC][0][18] = 12,
[2][1][2][0][RTW89_ACMA][1][18] = 54,
@@ -44955,6 +46683,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][18] = 16,
[2][1][2][0][RTW89_UK][1][18] = 54,
[2][1][2][0][RTW89_UK][0][18] = 16,
+ [2][1][2][0][RTW89_THAILAND][1][18] = 46,
+ [2][1][2][0][RTW89_THAILAND][0][18] = 18,
[2][1][2][0][RTW89_FCC][1][26] = 20,
[2][1][2][0][RTW89_FCC][2][26] = 60,
[2][1][2][0][RTW89_ETSI][1][26] = 54,
@@ -44962,6 +46692,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][26] = 52,
[2][1][2][0][RTW89_MKK][0][26] = 12,
[2][1][2][0][RTW89_IC][1][26] = 20,
+ [2][1][2][0][RTW89_IC][2][26] = 60,
[2][1][2][0][RTW89_KCC][1][26] = 38,
[2][1][2][0][RTW89_KCC][0][26] = 12,
[2][1][2][0][RTW89_ACMA][1][26] = 54,
@@ -44971,6 +46702,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][26] = 16,
[2][1][2][0][RTW89_UK][1][26] = 54,
[2][1][2][0][RTW89_UK][0][26] = 16,
+ [2][1][2][0][RTW89_THAILAND][1][26] = 46,
+ [2][1][2][0][RTW89_THAILAND][0][26] = 18,
[2][1][2][0][RTW89_FCC][1][33] = 20,
[2][1][2][0][RTW89_FCC][2][33] = 60,
[2][1][2][0][RTW89_ETSI][1][33] = 54,
@@ -44978,6 +46711,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][33] = 48,
[2][1][2][0][RTW89_MKK][0][33] = 12,
[2][1][2][0][RTW89_IC][1][33] = 20,
+ [2][1][2][0][RTW89_IC][2][33] = 60,
[2][1][2][0][RTW89_KCC][1][33] = 38,
[2][1][2][0][RTW89_KCC][0][33] = 12,
[2][1][2][0][RTW89_ACMA][1][33] = 54,
@@ -44987,6 +46721,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][33] = 16,
[2][1][2][0][RTW89_UK][1][33] = 54,
[2][1][2][0][RTW89_UK][0][33] = 16,
+ [2][1][2][0][RTW89_THAILAND][1][33] = 46,
+ [2][1][2][0][RTW89_THAILAND][0][33] = 18,
[2][1][2][0][RTW89_FCC][1][41] = 22,
[2][1][2][0][RTW89_FCC][2][41] = 60,
[2][1][2][0][RTW89_ETSI][1][41] = 54,
@@ -44994,6 +46730,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][41] = 48,
[2][1][2][0][RTW89_MKK][0][41] = 12,
[2][1][2][0][RTW89_IC][1][41] = 22,
+ [2][1][2][0][RTW89_IC][2][41] = 60,
[2][1][2][0][RTW89_KCC][1][41] = 38,
[2][1][2][0][RTW89_KCC][0][41] = 12,
[2][1][2][0][RTW89_ACMA][1][41] = 54,
@@ -45003,6 +46740,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][41] = 18,
[2][1][2][0][RTW89_UK][1][41] = 54,
[2][1][2][0][RTW89_UK][0][41] = 18,
+ [2][1][2][0][RTW89_THAILAND][1][41] = 46,
+ [2][1][2][0][RTW89_THAILAND][0][41] = 18,
[2][1][2][0][RTW89_FCC][1][48] = 22,
[2][1][2][0][RTW89_FCC][2][48] = 127,
[2][1][2][0][RTW89_ETSI][1][48] = 127,
@@ -45010,6 +46749,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][48] = 127,
[2][1][2][0][RTW89_MKK][0][48] = 127,
[2][1][2][0][RTW89_IC][1][48] = 22,
+ [2][1][2][0][RTW89_IC][2][48] = 60,
[2][1][2][0][RTW89_KCC][1][48] = 38,
[2][1][2][0][RTW89_KCC][0][48] = 127,
[2][1][2][0][RTW89_ACMA][1][48] = 127,
@@ -45019,6 +46759,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][48] = 127,
[2][1][2][0][RTW89_UK][1][48] = 127,
[2][1][2][0][RTW89_UK][0][48] = 127,
+ [2][1][2][0][RTW89_THAILAND][1][48] = 127,
+ [2][1][2][0][RTW89_THAILAND][0][48] = 127,
[2][1][2][0][RTW89_FCC][1][56] = 20,
[2][1][2][0][RTW89_FCC][2][56] = 127,
[2][1][2][0][RTW89_ETSI][1][56] = 127,
@@ -45026,6 +46768,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][56] = 127,
[2][1][2][0][RTW89_MKK][0][56] = 127,
[2][1][2][0][RTW89_IC][1][56] = 20,
+ [2][1][2][0][RTW89_IC][2][56] = 56,
[2][1][2][0][RTW89_KCC][1][56] = 38,
[2][1][2][0][RTW89_KCC][0][56] = 127,
[2][1][2][0][RTW89_ACMA][1][56] = 127,
@@ -45035,6 +46778,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][56] = 127,
[2][1][2][0][RTW89_UK][1][56] = 127,
[2][1][2][0][RTW89_UK][0][56] = 127,
+ [2][1][2][0][RTW89_THAILAND][1][56] = 127,
+ [2][1][2][0][RTW89_THAILAND][0][56] = 127,
[2][1][2][0][RTW89_FCC][1][63] = 22,
[2][1][2][0][RTW89_FCC][2][63] = 58,
[2][1][2][0][RTW89_ETSI][1][63] = 127,
@@ -45042,6 +46787,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][63] = 127,
[2][1][2][0][RTW89_MKK][0][63] = 127,
[2][1][2][0][RTW89_IC][1][63] = 22,
+ [2][1][2][0][RTW89_IC][2][63] = 58,
[2][1][2][0][RTW89_KCC][1][63] = 38,
[2][1][2][0][RTW89_KCC][0][63] = 127,
[2][1][2][0][RTW89_ACMA][1][63] = 127,
@@ -45051,6 +46797,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][63] = 127,
[2][1][2][0][RTW89_UK][1][63] = 127,
[2][1][2][0][RTW89_UK][0][63] = 127,
+ [2][1][2][0][RTW89_THAILAND][1][63] = 127,
+ [2][1][2][0][RTW89_THAILAND][0][63] = 127,
[2][1][2][0][RTW89_FCC][1][71] = 20,
[2][1][2][0][RTW89_FCC][2][71] = 58,
[2][1][2][0][RTW89_ETSI][1][71] = 127,
@@ -45058,6 +46806,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][71] = 127,
[2][1][2][0][RTW89_MKK][0][71] = 127,
[2][1][2][0][RTW89_IC][1][71] = 20,
+ [2][1][2][0][RTW89_IC][2][71] = 58,
[2][1][2][0][RTW89_KCC][1][71] = 38,
[2][1][2][0][RTW89_KCC][0][71] = 127,
[2][1][2][0][RTW89_ACMA][1][71] = 127,
@@ -45067,6 +46816,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][71] = 127,
[2][1][2][0][RTW89_UK][1][71] = 127,
[2][1][2][0][RTW89_UK][0][71] = 127,
+ [2][1][2][0][RTW89_THAILAND][1][71] = 127,
+ [2][1][2][0][RTW89_THAILAND][0][71] = 127,
[2][1][2][0][RTW89_FCC][1][78] = 20,
[2][1][2][0][RTW89_FCC][2][78] = 58,
[2][1][2][0][RTW89_ETSI][1][78] = 127,
@@ -45074,6 +46825,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][78] = 127,
[2][1][2][0][RTW89_MKK][0][78] = 127,
[2][1][2][0][RTW89_IC][1][78] = 20,
+ [2][1][2][0][RTW89_IC][2][78] = 58,
[2][1][2][0][RTW89_KCC][1][78] = 38,
[2][1][2][0][RTW89_KCC][0][78] = 127,
[2][1][2][0][RTW89_ACMA][1][78] = 127,
@@ -45083,6 +46835,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][78] = 127,
[2][1][2][0][RTW89_UK][1][78] = 127,
[2][1][2][0][RTW89_UK][0][78] = 127,
+ [2][1][2][0][RTW89_THAILAND][1][78] = 127,
+ [2][1][2][0][RTW89_THAILAND][0][78] = 127,
[2][1][2][0][RTW89_FCC][1][86] = 20,
[2][1][2][0][RTW89_FCC][2][86] = 127,
[2][1][2][0][RTW89_ETSI][1][86] = 127,
@@ -45090,6 +46844,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][86] = 127,
[2][1][2][0][RTW89_MKK][0][86] = 127,
[2][1][2][0][RTW89_IC][1][86] = 20,
+ [2][1][2][0][RTW89_IC][2][86] = 127,
[2][1][2][0][RTW89_KCC][1][86] = 38,
[2][1][2][0][RTW89_KCC][0][86] = 127,
[2][1][2][0][RTW89_ACMA][1][86] = 127,
@@ -45099,6 +46854,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][86] = 127,
[2][1][2][0][RTW89_UK][1][86] = 127,
[2][1][2][0][RTW89_UK][0][86] = 127,
+ [2][1][2][0][RTW89_THAILAND][1][86] = 127,
+ [2][1][2][0][RTW89_THAILAND][0][86] = 127,
[2][1][2][0][RTW89_FCC][1][93] = 22,
[2][1][2][0][RTW89_FCC][2][93] = 127,
[2][1][2][0][RTW89_ETSI][1][93] = 127,
@@ -45106,6 +46863,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][93] = 127,
[2][1][2][0][RTW89_MKK][0][93] = 127,
[2][1][2][0][RTW89_IC][1][93] = 22,
+ [2][1][2][0][RTW89_IC][2][93] = 127,
[2][1][2][0][RTW89_KCC][1][93] = 38,
[2][1][2][0][RTW89_KCC][0][93] = 127,
[2][1][2][0][RTW89_ACMA][1][93] = 127,
@@ -45115,6 +46873,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][93] = 127,
[2][1][2][0][RTW89_UK][1][93] = 127,
[2][1][2][0][RTW89_UK][0][93] = 127,
+ [2][1][2][0][RTW89_THAILAND][1][93] = 127,
+ [2][1][2][0][RTW89_THAILAND][0][93] = 127,
[2][1][2][0][RTW89_FCC][1][101] = 22,
[2][1][2][0][RTW89_FCC][2][101] = 127,
[2][1][2][0][RTW89_ETSI][1][101] = 127,
@@ -45122,6 +46882,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][101] = 127,
[2][1][2][0][RTW89_MKK][0][101] = 127,
[2][1][2][0][RTW89_IC][1][101] = 22,
+ [2][1][2][0][RTW89_IC][2][101] = 127,
[2][1][2][0][RTW89_KCC][1][101] = 38,
[2][1][2][0][RTW89_KCC][0][101] = 127,
[2][1][2][0][RTW89_ACMA][1][101] = 127,
@@ -45131,6 +46892,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][101] = 127,
[2][1][2][0][RTW89_UK][1][101] = 127,
[2][1][2][0][RTW89_UK][0][101] = 127,
+ [2][1][2][0][RTW89_THAILAND][1][101] = 127,
+ [2][1][2][0][RTW89_THAILAND][0][101] = 127,
[2][1][2][0][RTW89_FCC][1][108] = 127,
[2][1][2][0][RTW89_FCC][2][108] = 127,
[2][1][2][0][RTW89_ETSI][1][108] = 127,
@@ -45138,6 +46901,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][108] = 127,
[2][1][2][0][RTW89_MKK][0][108] = 127,
[2][1][2][0][RTW89_IC][1][108] = 127,
+ [2][1][2][0][RTW89_IC][2][108] = 127,
[2][1][2][0][RTW89_KCC][1][108] = 127,
[2][1][2][0][RTW89_KCC][0][108] = 127,
[2][1][2][0][RTW89_ACMA][1][108] = 127,
@@ -45147,6 +46911,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][108] = 127,
[2][1][2][0][RTW89_UK][1][108] = 127,
[2][1][2][0][RTW89_UK][0][108] = 127,
+ [2][1][2][0][RTW89_THAILAND][1][108] = 127,
+ [2][1][2][0][RTW89_THAILAND][0][108] = 127,
[2][1][2][0][RTW89_FCC][1][116] = 127,
[2][1][2][0][RTW89_FCC][2][116] = 127,
[2][1][2][0][RTW89_ETSI][1][116] = 127,
@@ -45154,6 +46920,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_MKK][1][116] = 127,
[2][1][2][0][RTW89_MKK][0][116] = 127,
[2][1][2][0][RTW89_IC][1][116] = 127,
+ [2][1][2][0][RTW89_IC][2][116] = 127,
[2][1][2][0][RTW89_KCC][1][116] = 127,
[2][1][2][0][RTW89_KCC][0][116] = 127,
[2][1][2][0][RTW89_ACMA][1][116] = 127,
@@ -45163,6 +46930,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][0][RTW89_QATAR][0][116] = 127,
[2][1][2][0][RTW89_UK][1][116] = 127,
[2][1][2][0][RTW89_UK][0][116] = 127,
+ [2][1][2][0][RTW89_THAILAND][1][116] = 127,
+ [2][1][2][0][RTW89_THAILAND][0][116] = 127,
[2][1][2][1][RTW89_FCC][1][3] = 22,
[2][1][2][1][RTW89_FCC][2][3] = 50,
[2][1][2][1][RTW89_ETSI][1][3] = 42,
@@ -45170,6 +46939,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][3] = 52,
[2][1][2][1][RTW89_MKK][0][3] = 14,
[2][1][2][1][RTW89_IC][1][3] = 22,
+ [2][1][2][1][RTW89_IC][2][3] = 50,
[2][1][2][1][RTW89_KCC][1][3] = 38,
[2][1][2][1][RTW89_KCC][0][3] = 12,
[2][1][2][1][RTW89_ACMA][1][3] = 42,
@@ -45179,6 +46949,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][3] = 6,
[2][1][2][1][RTW89_UK][1][3] = 42,
[2][1][2][1][RTW89_UK][0][3] = 6,
+ [2][1][2][1][RTW89_THAILAND][1][3] = 46,
+ [2][1][2][1][RTW89_THAILAND][0][3] = 6,
[2][1][2][1][RTW89_FCC][1][11] = 20,
[2][1][2][1][RTW89_FCC][2][11] = 50,
[2][1][2][1][RTW89_ETSI][1][11] = 42,
@@ -45186,6 +46958,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][11] = 52,
[2][1][2][1][RTW89_MKK][0][11] = 12,
[2][1][2][1][RTW89_IC][1][11] = 20,
+ [2][1][2][1][RTW89_IC][2][11] = 50,
[2][1][2][1][RTW89_KCC][1][11] = 38,
[2][1][2][1][RTW89_KCC][0][11] = 12,
[2][1][2][1][RTW89_ACMA][1][11] = 42,
@@ -45195,6 +46968,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][11] = 6,
[2][1][2][1][RTW89_UK][1][11] = 42,
[2][1][2][1][RTW89_UK][0][11] = 6,
+ [2][1][2][1][RTW89_THAILAND][1][11] = 46,
+ [2][1][2][1][RTW89_THAILAND][0][11] = 6,
[2][1][2][1][RTW89_FCC][1][18] = 20,
[2][1][2][1][RTW89_FCC][2][18] = 50,
[2][1][2][1][RTW89_ETSI][1][18] = 42,
@@ -45202,6 +46977,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][18] = 52,
[2][1][2][1][RTW89_MKK][0][18] = 12,
[2][1][2][1][RTW89_IC][1][18] = 20,
+ [2][1][2][1][RTW89_IC][2][18] = 50,
[2][1][2][1][RTW89_KCC][1][18] = 38,
[2][1][2][1][RTW89_KCC][0][18] = 12,
[2][1][2][1][RTW89_ACMA][1][18] = 42,
@@ -45211,6 +46987,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][18] = 6,
[2][1][2][1][RTW89_UK][1][18] = 42,
[2][1][2][1][RTW89_UK][0][18] = 6,
+ [2][1][2][1][RTW89_THAILAND][1][18] = 46,
+ [2][1][2][1][RTW89_THAILAND][0][18] = 6,
[2][1][2][1][RTW89_FCC][1][26] = 20,
[2][1][2][1][RTW89_FCC][2][26] = 60,
[2][1][2][1][RTW89_ETSI][1][26] = 42,
@@ -45218,6 +46996,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][26] = 52,
[2][1][2][1][RTW89_MKK][0][26] = 12,
[2][1][2][1][RTW89_IC][1][26] = 20,
+ [2][1][2][1][RTW89_IC][2][26] = 60,
[2][1][2][1][RTW89_KCC][1][26] = 38,
[2][1][2][1][RTW89_KCC][0][26] = 12,
[2][1][2][1][RTW89_ACMA][1][26] = 42,
@@ -45227,6 +47006,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][26] = 6,
[2][1][2][1][RTW89_UK][1][26] = 42,
[2][1][2][1][RTW89_UK][0][26] = 6,
+ [2][1][2][1][RTW89_THAILAND][1][26] = 46,
+ [2][1][2][1][RTW89_THAILAND][0][26] = 6,
[2][1][2][1][RTW89_FCC][1][33] = 20,
[2][1][2][1][RTW89_FCC][2][33] = 60,
[2][1][2][1][RTW89_ETSI][1][33] = 42,
@@ -45234,6 +47015,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][33] = 48,
[2][1][2][1][RTW89_MKK][0][33] = 12,
[2][1][2][1][RTW89_IC][1][33] = 20,
+ [2][1][2][1][RTW89_IC][2][33] = 60,
[2][1][2][1][RTW89_KCC][1][33] = 38,
[2][1][2][1][RTW89_KCC][0][33] = 12,
[2][1][2][1][RTW89_ACMA][1][33] = 42,
@@ -45243,6 +47025,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][33] = 6,
[2][1][2][1][RTW89_UK][1][33] = 42,
[2][1][2][1][RTW89_UK][0][33] = 6,
+ [2][1][2][1][RTW89_THAILAND][1][33] = 46,
+ [2][1][2][1][RTW89_THAILAND][0][33] = 6,
[2][1][2][1][RTW89_FCC][1][41] = 22,
[2][1][2][1][RTW89_FCC][2][41] = 60,
[2][1][2][1][RTW89_ETSI][1][41] = 42,
@@ -45250,6 +47034,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][41] = 48,
[2][1][2][1][RTW89_MKK][0][41] = 12,
[2][1][2][1][RTW89_IC][1][41] = 22,
+ [2][1][2][1][RTW89_IC][2][41] = 60,
[2][1][2][1][RTW89_KCC][1][41] = 38,
[2][1][2][1][RTW89_KCC][0][41] = 12,
[2][1][2][1][RTW89_ACMA][1][41] = 42,
@@ -45259,6 +47044,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][41] = 6,
[2][1][2][1][RTW89_UK][1][41] = 42,
[2][1][2][1][RTW89_UK][0][41] = 6,
+ [2][1][2][1][RTW89_THAILAND][1][41] = 46,
+ [2][1][2][1][RTW89_THAILAND][0][41] = 6,
[2][1][2][1][RTW89_FCC][1][48] = 22,
[2][1][2][1][RTW89_FCC][2][48] = 127,
[2][1][2][1][RTW89_ETSI][1][48] = 127,
@@ -45266,6 +47053,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][48] = 127,
[2][1][2][1][RTW89_MKK][0][48] = 127,
[2][1][2][1][RTW89_IC][1][48] = 22,
+ [2][1][2][1][RTW89_IC][2][48] = 60,
[2][1][2][1][RTW89_KCC][1][48] = 38,
[2][1][2][1][RTW89_KCC][0][48] = 127,
[2][1][2][1][RTW89_ACMA][1][48] = 127,
@@ -45275,6 +47063,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][48] = 127,
[2][1][2][1][RTW89_UK][1][48] = 127,
[2][1][2][1][RTW89_UK][0][48] = 127,
+ [2][1][2][1][RTW89_THAILAND][1][48] = 127,
+ [2][1][2][1][RTW89_THAILAND][0][48] = 127,
[2][1][2][1][RTW89_FCC][1][56] = 20,
[2][1][2][1][RTW89_FCC][2][56] = 127,
[2][1][2][1][RTW89_ETSI][1][56] = 127,
@@ -45282,6 +47072,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][56] = 127,
[2][1][2][1][RTW89_MKK][0][56] = 127,
[2][1][2][1][RTW89_IC][1][56] = 20,
+ [2][1][2][1][RTW89_IC][2][56] = 56,
[2][1][2][1][RTW89_KCC][1][56] = 38,
[2][1][2][1][RTW89_KCC][0][56] = 127,
[2][1][2][1][RTW89_ACMA][1][56] = 127,
@@ -45291,6 +47082,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][56] = 127,
[2][1][2][1][RTW89_UK][1][56] = 127,
[2][1][2][1][RTW89_UK][0][56] = 127,
+ [2][1][2][1][RTW89_THAILAND][1][56] = 127,
+ [2][1][2][1][RTW89_THAILAND][0][56] = 127,
[2][1][2][1][RTW89_FCC][1][63] = 22,
[2][1][2][1][RTW89_FCC][2][63] = 58,
[2][1][2][1][RTW89_ETSI][1][63] = 127,
@@ -45298,6 +47091,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][63] = 127,
[2][1][2][1][RTW89_MKK][0][63] = 127,
[2][1][2][1][RTW89_IC][1][63] = 22,
+ [2][1][2][1][RTW89_IC][2][63] = 58,
[2][1][2][1][RTW89_KCC][1][63] = 38,
[2][1][2][1][RTW89_KCC][0][63] = 127,
[2][1][2][1][RTW89_ACMA][1][63] = 127,
@@ -45307,6 +47101,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][63] = 127,
[2][1][2][1][RTW89_UK][1][63] = 127,
[2][1][2][1][RTW89_UK][0][63] = 127,
+ [2][1][2][1][RTW89_THAILAND][1][63] = 127,
+ [2][1][2][1][RTW89_THAILAND][0][63] = 127,
[2][1][2][1][RTW89_FCC][1][71] = 20,
[2][1][2][1][RTW89_FCC][2][71] = 58,
[2][1][2][1][RTW89_ETSI][1][71] = 127,
@@ -45314,6 +47110,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][71] = 127,
[2][1][2][1][RTW89_MKK][0][71] = 127,
[2][1][2][1][RTW89_IC][1][71] = 20,
+ [2][1][2][1][RTW89_IC][2][71] = 58,
[2][1][2][1][RTW89_KCC][1][71] = 38,
[2][1][2][1][RTW89_KCC][0][71] = 127,
[2][1][2][1][RTW89_ACMA][1][71] = 127,
@@ -45323,6 +47120,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][71] = 127,
[2][1][2][1][RTW89_UK][1][71] = 127,
[2][1][2][1][RTW89_UK][0][71] = 127,
+ [2][1][2][1][RTW89_THAILAND][1][71] = 127,
+ [2][1][2][1][RTW89_THAILAND][0][71] = 127,
[2][1][2][1][RTW89_FCC][1][78] = 20,
[2][1][2][1][RTW89_FCC][2][78] = 58,
[2][1][2][1][RTW89_ETSI][1][78] = 127,
@@ -45330,6 +47129,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][78] = 127,
[2][1][2][1][RTW89_MKK][0][78] = 127,
[2][1][2][1][RTW89_IC][1][78] = 20,
+ [2][1][2][1][RTW89_IC][2][78] = 58,
[2][1][2][1][RTW89_KCC][1][78] = 38,
[2][1][2][1][RTW89_KCC][0][78] = 127,
[2][1][2][1][RTW89_ACMA][1][78] = 127,
@@ -45339,6 +47139,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][78] = 127,
[2][1][2][1][RTW89_UK][1][78] = 127,
[2][1][2][1][RTW89_UK][0][78] = 127,
+ [2][1][2][1][RTW89_THAILAND][1][78] = 127,
+ [2][1][2][1][RTW89_THAILAND][0][78] = 127,
[2][1][2][1][RTW89_FCC][1][86] = 20,
[2][1][2][1][RTW89_FCC][2][86] = 127,
[2][1][2][1][RTW89_ETSI][1][86] = 127,
@@ -45346,6 +47148,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][86] = 127,
[2][1][2][1][RTW89_MKK][0][86] = 127,
[2][1][2][1][RTW89_IC][1][86] = 20,
+ [2][1][2][1][RTW89_IC][2][86] = 127,
[2][1][2][1][RTW89_KCC][1][86] = 38,
[2][1][2][1][RTW89_KCC][0][86] = 127,
[2][1][2][1][RTW89_ACMA][1][86] = 127,
@@ -45355,6 +47158,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][86] = 127,
[2][1][2][1][RTW89_UK][1][86] = 127,
[2][1][2][1][RTW89_UK][0][86] = 127,
+ [2][1][2][1][RTW89_THAILAND][1][86] = 127,
+ [2][1][2][1][RTW89_THAILAND][0][86] = 127,
[2][1][2][1][RTW89_FCC][1][93] = 22,
[2][1][2][1][RTW89_FCC][2][93] = 127,
[2][1][2][1][RTW89_ETSI][1][93] = 127,
@@ -45362,6 +47167,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][93] = 127,
[2][1][2][1][RTW89_MKK][0][93] = 127,
[2][1][2][1][RTW89_IC][1][93] = 22,
+ [2][1][2][1][RTW89_IC][2][93] = 127,
[2][1][2][1][RTW89_KCC][1][93] = 38,
[2][1][2][1][RTW89_KCC][0][93] = 127,
[2][1][2][1][RTW89_ACMA][1][93] = 127,
@@ -45371,6 +47177,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][93] = 127,
[2][1][2][1][RTW89_UK][1][93] = 127,
[2][1][2][1][RTW89_UK][0][93] = 127,
+ [2][1][2][1][RTW89_THAILAND][1][93] = 127,
+ [2][1][2][1][RTW89_THAILAND][0][93] = 127,
[2][1][2][1][RTW89_FCC][1][101] = 22,
[2][1][2][1][RTW89_FCC][2][101] = 127,
[2][1][2][1][RTW89_ETSI][1][101] = 127,
@@ -45378,6 +47186,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][101] = 127,
[2][1][2][1][RTW89_MKK][0][101] = 127,
[2][1][2][1][RTW89_IC][1][101] = 22,
+ [2][1][2][1][RTW89_IC][2][101] = 127,
[2][1][2][1][RTW89_KCC][1][101] = 38,
[2][1][2][1][RTW89_KCC][0][101] = 127,
[2][1][2][1][RTW89_ACMA][1][101] = 127,
@@ -45387,6 +47196,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][101] = 127,
[2][1][2][1][RTW89_UK][1][101] = 127,
[2][1][2][1][RTW89_UK][0][101] = 127,
+ [2][1][2][1][RTW89_THAILAND][1][101] = 127,
+ [2][1][2][1][RTW89_THAILAND][0][101] = 127,
[2][1][2][1][RTW89_FCC][1][108] = 127,
[2][1][2][1][RTW89_FCC][2][108] = 127,
[2][1][2][1][RTW89_ETSI][1][108] = 127,
@@ -45394,6 +47205,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][108] = 127,
[2][1][2][1][RTW89_MKK][0][108] = 127,
[2][1][2][1][RTW89_IC][1][108] = 127,
+ [2][1][2][1][RTW89_IC][2][108] = 127,
[2][1][2][1][RTW89_KCC][1][108] = 127,
[2][1][2][1][RTW89_KCC][0][108] = 127,
[2][1][2][1][RTW89_ACMA][1][108] = 127,
@@ -45403,6 +47215,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][108] = 127,
[2][1][2][1][RTW89_UK][1][108] = 127,
[2][1][2][1][RTW89_UK][0][108] = 127,
+ [2][1][2][1][RTW89_THAILAND][1][108] = 127,
+ [2][1][2][1][RTW89_THAILAND][0][108] = 127,
[2][1][2][1][RTW89_FCC][1][116] = 127,
[2][1][2][1][RTW89_FCC][2][116] = 127,
[2][1][2][1][RTW89_ETSI][1][116] = 127,
@@ -45410,6 +47224,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_MKK][1][116] = 127,
[2][1][2][1][RTW89_MKK][0][116] = 127,
[2][1][2][1][RTW89_IC][1][116] = 127,
+ [2][1][2][1][RTW89_IC][2][116] = 127,
[2][1][2][1][RTW89_KCC][1][116] = 127,
[2][1][2][1][RTW89_KCC][0][116] = 127,
[2][1][2][1][RTW89_ACMA][1][116] = 127,
@@ -45419,6 +47234,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[2][1][2][1][RTW89_QATAR][0][116] = 127,
[2][1][2][1][RTW89_UK][1][116] = 127,
[2][1][2][1][RTW89_UK][0][116] = 127,
+ [2][1][2][1][RTW89_THAILAND][1][116] = 127,
+ [2][1][2][1][RTW89_THAILAND][0][116] = 127,
[3][0][2][0][RTW89_FCC][1][7] = 52,
[3][0][2][0][RTW89_FCC][2][7] = 52,
[3][0][2][0][RTW89_ETSI][1][7] = 50,
@@ -45426,6 +47243,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_MKK][1][7] = 50,
[3][0][2][0][RTW89_MKK][0][7] = 22,
[3][0][2][0][RTW89_IC][1][7] = 52,
+ [3][0][2][0][RTW89_IC][2][7] = 52,
[3][0][2][0][RTW89_KCC][1][7] = 42,
[3][0][2][0][RTW89_KCC][0][7] = 24,
[3][0][2][0][RTW89_ACMA][1][7] = 50,
@@ -45435,6 +47253,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_QATAR][0][7] = 30,
[3][0][2][0][RTW89_UK][1][7] = 50,
[3][0][2][0][RTW89_UK][0][7] = 30,
+ [3][0][2][0][RTW89_THAILAND][1][7] = 50,
+ [3][0][2][0][RTW89_THAILAND][0][7] = 30,
[3][0][2][0][RTW89_FCC][1][22] = 52,
[3][0][2][0][RTW89_FCC][2][22] = 52,
[3][0][2][0][RTW89_ETSI][1][22] = 50,
@@ -45442,6 +47262,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_MKK][1][22] = 50,
[3][0][2][0][RTW89_MKK][0][22] = 20,
[3][0][2][0][RTW89_IC][1][22] = 52,
+ [3][0][2][0][RTW89_IC][2][22] = 52,
[3][0][2][0][RTW89_KCC][1][22] = 42,
[3][0][2][0][RTW89_KCC][0][22] = 24,
[3][0][2][0][RTW89_ACMA][1][22] = 50,
@@ -45451,6 +47272,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_QATAR][0][22] = 30,
[3][0][2][0][RTW89_UK][1][22] = 50,
[3][0][2][0][RTW89_UK][0][22] = 30,
+ [3][0][2][0][RTW89_THAILAND][1][22] = 50,
+ [3][0][2][0][RTW89_THAILAND][0][22] = 30,
[3][0][2][0][RTW89_FCC][1][37] = 52,
[3][0][2][0][RTW89_FCC][2][37] = 52,
[3][0][2][0][RTW89_ETSI][1][37] = 50,
@@ -45458,6 +47281,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_MKK][1][37] = 50,
[3][0][2][0][RTW89_MKK][0][37] = 20,
[3][0][2][0][RTW89_IC][1][37] = 52,
+ [3][0][2][0][RTW89_IC][2][37] = 52,
[3][0][2][0][RTW89_KCC][1][37] = 42,
[3][0][2][0][RTW89_KCC][0][37] = 24,
[3][0][2][0][RTW89_ACMA][1][37] = 50,
@@ -45467,6 +47291,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_QATAR][0][37] = 30,
[3][0][2][0][RTW89_UK][1][37] = 50,
[3][0][2][0][RTW89_UK][0][37] = 30,
+ [3][0][2][0][RTW89_THAILAND][1][37] = 50,
+ [3][0][2][0][RTW89_THAILAND][0][37] = 30,
[3][0][2][0][RTW89_FCC][1][52] = 54,
[3][0][2][0][RTW89_FCC][2][52] = 127,
[3][0][2][0][RTW89_ETSI][1][52] = 127,
@@ -45474,6 +47300,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_MKK][1][52] = 127,
[3][0][2][0][RTW89_MKK][0][52] = 127,
[3][0][2][0][RTW89_IC][1][52] = 54,
+ [3][0][2][0][RTW89_IC][2][52] = 56,
[3][0][2][0][RTW89_KCC][1][52] = 56,
[3][0][2][0][RTW89_KCC][0][52] = 127,
[3][0][2][0][RTW89_ACMA][1][52] = 127,
@@ -45483,6 +47310,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_QATAR][0][52] = 127,
[3][0][2][0][RTW89_UK][1][52] = 127,
[3][0][2][0][RTW89_UK][0][52] = 127,
+ [3][0][2][0][RTW89_THAILAND][1][52] = 127,
+ [3][0][2][0][RTW89_THAILAND][0][52] = 127,
[3][0][2][0][RTW89_FCC][1][67] = 54,
[3][0][2][0][RTW89_FCC][2][67] = 54,
[3][0][2][0][RTW89_ETSI][1][67] = 127,
@@ -45490,6 +47319,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_MKK][1][67] = 127,
[3][0][2][0][RTW89_MKK][0][67] = 127,
[3][0][2][0][RTW89_IC][1][67] = 54,
+ [3][0][2][0][RTW89_IC][2][67] = 54,
[3][0][2][0][RTW89_KCC][1][67] = 54,
[3][0][2][0][RTW89_KCC][0][67] = 127,
[3][0][2][0][RTW89_ACMA][1][67] = 127,
@@ -45499,6 +47329,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_QATAR][0][67] = 127,
[3][0][2][0][RTW89_UK][1][67] = 127,
[3][0][2][0][RTW89_UK][0][67] = 127,
+ [3][0][2][0][RTW89_THAILAND][1][67] = 127,
+ [3][0][2][0][RTW89_THAILAND][0][67] = 127,
[3][0][2][0][RTW89_FCC][1][82] = 46,
[3][0][2][0][RTW89_FCC][2][82] = 127,
[3][0][2][0][RTW89_ETSI][1][82] = 127,
@@ -45506,6 +47338,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_MKK][1][82] = 127,
[3][0][2][0][RTW89_MKK][0][82] = 127,
[3][0][2][0][RTW89_IC][1][82] = 46,
+ [3][0][2][0][RTW89_IC][2][82] = 127,
[3][0][2][0][RTW89_KCC][1][82] = 26,
[3][0][2][0][RTW89_KCC][0][82] = 127,
[3][0][2][0][RTW89_ACMA][1][82] = 127,
@@ -45515,6 +47348,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_QATAR][0][82] = 127,
[3][0][2][0][RTW89_UK][1][82] = 127,
[3][0][2][0][RTW89_UK][0][82] = 127,
+ [3][0][2][0][RTW89_THAILAND][1][82] = 127,
+ [3][0][2][0][RTW89_THAILAND][0][82] = 127,
[3][0][2][0][RTW89_FCC][1][97] = 40,
[3][0][2][0][RTW89_FCC][2][97] = 127,
[3][0][2][0][RTW89_ETSI][1][97] = 127,
@@ -45522,6 +47357,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_MKK][1][97] = 127,
[3][0][2][0][RTW89_MKK][0][97] = 127,
[3][0][2][0][RTW89_IC][1][97] = 40,
+ [3][0][2][0][RTW89_IC][2][97] = 127,
[3][0][2][0][RTW89_KCC][1][97] = 26,
[3][0][2][0][RTW89_KCC][0][97] = 127,
[3][0][2][0][RTW89_ACMA][1][97] = 127,
@@ -45531,6 +47367,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_QATAR][0][97] = 127,
[3][0][2][0][RTW89_UK][1][97] = 127,
[3][0][2][0][RTW89_UK][0][97] = 127,
+ [3][0][2][0][RTW89_THAILAND][1][97] = 127,
+ [3][0][2][0][RTW89_THAILAND][0][97] = 127,
[3][0][2][0][RTW89_FCC][1][112] = 127,
[3][0][2][0][RTW89_FCC][2][112] = 127,
[3][0][2][0][RTW89_ETSI][1][112] = 127,
@@ -45538,6 +47376,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_MKK][1][112] = 127,
[3][0][2][0][RTW89_MKK][0][112] = 127,
[3][0][2][0][RTW89_IC][1][112] = 127,
+ [3][0][2][0][RTW89_IC][2][112] = 127,
[3][0][2][0][RTW89_KCC][1][112] = 127,
[3][0][2][0][RTW89_KCC][0][112] = 127,
[3][0][2][0][RTW89_ACMA][1][112] = 127,
@@ -45547,6 +47386,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][0][2][0][RTW89_QATAR][0][112] = 127,
[3][0][2][0][RTW89_UK][1][112] = 127,
[3][0][2][0][RTW89_UK][0][112] = 127,
+ [3][0][2][0][RTW89_THAILAND][1][112] = 127,
+ [3][0][2][0][RTW89_THAILAND][0][112] = 127,
[3][1][2][0][RTW89_FCC][1][7] = 32,
[3][1][2][0][RTW89_FCC][2][7] = 46,
[3][1][2][0][RTW89_ETSI][1][7] = 50,
@@ -45554,6 +47395,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_MKK][1][7] = 38,
[3][1][2][0][RTW89_MKK][0][7] = 10,
[3][1][2][0][RTW89_IC][1][7] = 32,
+ [3][1][2][0][RTW89_IC][2][7] = 46,
[3][1][2][0][RTW89_KCC][1][7] = 40,
[3][1][2][0][RTW89_KCC][0][7] = 12,
[3][1][2][0][RTW89_ACMA][1][7] = 50,
@@ -45563,6 +47405,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_QATAR][0][7] = 18,
[3][1][2][0][RTW89_UK][1][7] = 50,
[3][1][2][0][RTW89_UK][0][7] = 18,
+ [3][1][2][0][RTW89_THAILAND][1][7] = 46,
+ [3][1][2][0][RTW89_THAILAND][0][7] = 18,
[3][1][2][0][RTW89_FCC][1][22] = 30,
[3][1][2][0][RTW89_FCC][2][22] = 52,
[3][1][2][0][RTW89_ETSI][1][22] = 46,
@@ -45570,6 +47414,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_MKK][1][22] = 48,
[3][1][2][0][RTW89_MKK][0][22] = 8,
[3][1][2][0][RTW89_IC][1][22] = 30,
+ [3][1][2][0][RTW89_IC][2][22] = 52,
[3][1][2][0][RTW89_KCC][1][22] = 40,
[3][1][2][0][RTW89_KCC][0][22] = 12,
[3][1][2][0][RTW89_ACMA][1][22] = 46,
@@ -45579,6 +47424,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_QATAR][0][22] = 16,
[3][1][2][0][RTW89_UK][1][22] = 46,
[3][1][2][0][RTW89_UK][0][22] = 16,
+ [3][1][2][0][RTW89_THAILAND][1][22] = 46,
+ [3][1][2][0][RTW89_THAILAND][0][22] = 18,
[3][1][2][0][RTW89_FCC][1][37] = 30,
[3][1][2][0][RTW89_FCC][2][37] = 52,
[3][1][2][0][RTW89_ETSI][1][37] = 46,
@@ -45586,6 +47433,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_MKK][1][37] = 48,
[3][1][2][0][RTW89_MKK][0][37] = 8,
[3][1][2][0][RTW89_IC][1][37] = 30,
+ [3][1][2][0][RTW89_IC][2][37] = 52,
[3][1][2][0][RTW89_KCC][1][37] = 40,
[3][1][2][0][RTW89_KCC][0][37] = 12,
[3][1][2][0][RTW89_ACMA][1][37] = 46,
@@ -45595,6 +47443,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_QATAR][0][37] = 16,
[3][1][2][0][RTW89_UK][1][37] = 46,
[3][1][2][0][RTW89_UK][0][37] = 16,
+ [3][1][2][0][RTW89_THAILAND][1][37] = 46,
+ [3][1][2][0][RTW89_THAILAND][0][37] = 18,
[3][1][2][0][RTW89_FCC][1][52] = 30,
[3][1][2][0][RTW89_FCC][2][52] = 127,
[3][1][2][0][RTW89_ETSI][1][52] = 127,
@@ -45602,6 +47452,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_MKK][1][52] = 127,
[3][1][2][0][RTW89_MKK][0][52] = 127,
[3][1][2][0][RTW89_IC][1][52] = 30,
+ [3][1][2][0][RTW89_IC][2][52] = 56,
[3][1][2][0][RTW89_KCC][1][52] = 48,
[3][1][2][0][RTW89_KCC][0][52] = 127,
[3][1][2][0][RTW89_ACMA][1][52] = 127,
@@ -45611,6 +47462,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_QATAR][0][52] = 127,
[3][1][2][0][RTW89_UK][1][52] = 127,
[3][1][2][0][RTW89_UK][0][52] = 127,
+ [3][1][2][0][RTW89_THAILAND][1][52] = 127,
+ [3][1][2][0][RTW89_THAILAND][0][52] = 127,
[3][1][2][0][RTW89_FCC][1][67] = 32,
[3][1][2][0][RTW89_FCC][2][67] = 54,
[3][1][2][0][RTW89_ETSI][1][67] = 127,
@@ -45618,6 +47471,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_MKK][1][67] = 127,
[3][1][2][0][RTW89_MKK][0][67] = 127,
[3][1][2][0][RTW89_IC][1][67] = 32,
+ [3][1][2][0][RTW89_IC][2][67] = 54,
[3][1][2][0][RTW89_KCC][1][67] = 48,
[3][1][2][0][RTW89_KCC][0][67] = 127,
[3][1][2][0][RTW89_ACMA][1][67] = 127,
@@ -45627,6 +47481,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_QATAR][0][67] = 127,
[3][1][2][0][RTW89_UK][1][67] = 127,
[3][1][2][0][RTW89_UK][0][67] = 127,
+ [3][1][2][0][RTW89_THAILAND][1][67] = 127,
+ [3][1][2][0][RTW89_THAILAND][0][67] = 127,
[3][1][2][0][RTW89_FCC][1][82] = 32,
[3][1][2][0][RTW89_FCC][2][82] = 127,
[3][1][2][0][RTW89_ETSI][1][82] = 127,
@@ -45634,6 +47490,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_MKK][1][82] = 127,
[3][1][2][0][RTW89_MKK][0][82] = 127,
[3][1][2][0][RTW89_IC][1][82] = 32,
+ [3][1][2][0][RTW89_IC][2][82] = 127,
[3][1][2][0][RTW89_KCC][1][82] = 24,
[3][1][2][0][RTW89_KCC][0][82] = 127,
[3][1][2][0][RTW89_ACMA][1][82] = 127,
@@ -45643,6 +47500,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_QATAR][0][82] = 127,
[3][1][2][0][RTW89_UK][1][82] = 127,
[3][1][2][0][RTW89_UK][0][82] = 127,
+ [3][1][2][0][RTW89_THAILAND][1][82] = 127,
+ [3][1][2][0][RTW89_THAILAND][0][82] = 127,
[3][1][2][0][RTW89_FCC][1][97] = 32,
[3][1][2][0][RTW89_FCC][2][97] = 127,
[3][1][2][0][RTW89_ETSI][1][97] = 127,
@@ -45650,6 +47509,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_MKK][1][97] = 127,
[3][1][2][0][RTW89_MKK][0][97] = 127,
[3][1][2][0][RTW89_IC][1][97] = 32,
+ [3][1][2][0][RTW89_IC][2][97] = 127,
[3][1][2][0][RTW89_KCC][1][97] = 24,
[3][1][2][0][RTW89_KCC][0][97] = 127,
[3][1][2][0][RTW89_ACMA][1][97] = 127,
@@ -45659,6 +47519,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_QATAR][0][97] = 127,
[3][1][2][0][RTW89_UK][1][97] = 127,
[3][1][2][0][RTW89_UK][0][97] = 127,
+ [3][1][2][0][RTW89_THAILAND][1][97] = 127,
+ [3][1][2][0][RTW89_THAILAND][0][97] = 127,
[3][1][2][0][RTW89_FCC][1][112] = 127,
[3][1][2][0][RTW89_FCC][2][112] = 127,
[3][1][2][0][RTW89_ETSI][1][112] = 127,
@@ -45666,6 +47528,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_MKK][1][112] = 127,
[3][1][2][0][RTW89_MKK][0][112] = 127,
[3][1][2][0][RTW89_IC][1][112] = 127,
+ [3][1][2][0][RTW89_IC][2][112] = 127,
[3][1][2][0][RTW89_KCC][1][112] = 127,
[3][1][2][0][RTW89_KCC][0][112] = 127,
[3][1][2][0][RTW89_ACMA][1][112] = 127,
@@ -45675,6 +47538,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][0][RTW89_QATAR][0][112] = 127,
[3][1][2][0][RTW89_UK][1][112] = 127,
[3][1][2][0][RTW89_UK][0][112] = 127,
+ [3][1][2][0][RTW89_THAILAND][1][112] = 127,
+ [3][1][2][0][RTW89_THAILAND][0][112] = 127,
[3][1][2][1][RTW89_FCC][1][7] = 32,
[3][1][2][1][RTW89_FCC][2][7] = 46,
[3][1][2][1][RTW89_ETSI][1][7] = 42,
@@ -45682,6 +47547,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_MKK][1][7] = 38,
[3][1][2][1][RTW89_MKK][0][7] = 10,
[3][1][2][1][RTW89_IC][1][7] = 32,
+ [3][1][2][1][RTW89_IC][2][7] = 46,
[3][1][2][1][RTW89_KCC][1][7] = 40,
[3][1][2][1][RTW89_KCC][0][7] = 12,
[3][1][2][1][RTW89_ACMA][1][7] = 42,
@@ -45691,6 +47557,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_QATAR][0][7] = 6,
[3][1][2][1][RTW89_UK][1][7] = 42,
[3][1][2][1][RTW89_UK][0][7] = 6,
+ [3][1][2][1][RTW89_THAILAND][1][7] = 46,
+ [3][1][2][1][RTW89_THAILAND][0][7] = 6,
[3][1][2][1][RTW89_FCC][1][22] = 30,
[3][1][2][1][RTW89_FCC][2][22] = 52,
[3][1][2][1][RTW89_ETSI][1][22] = 42,
@@ -45698,6 +47566,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_MKK][1][22] = 48,
[3][1][2][1][RTW89_MKK][0][22] = 8,
[3][1][2][1][RTW89_IC][1][22] = 30,
+ [3][1][2][1][RTW89_IC][2][22] = 52,
[3][1][2][1][RTW89_KCC][1][22] = 40,
[3][1][2][1][RTW89_KCC][0][22] = 12,
[3][1][2][1][RTW89_ACMA][1][22] = 42,
@@ -45707,6 +47576,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_QATAR][0][22] = 6,
[3][1][2][1][RTW89_UK][1][22] = 42,
[3][1][2][1][RTW89_UK][0][22] = 6,
+ [3][1][2][1][RTW89_THAILAND][1][22] = 46,
+ [3][1][2][1][RTW89_THAILAND][0][22] = 6,
[3][1][2][1][RTW89_FCC][1][37] = 30,
[3][1][2][1][RTW89_FCC][2][37] = 52,
[3][1][2][1][RTW89_ETSI][1][37] = 42,
@@ -45714,6 +47585,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_MKK][1][37] = 48,
[3][1][2][1][RTW89_MKK][0][37] = 8,
[3][1][2][1][RTW89_IC][1][37] = 30,
+ [3][1][2][1][RTW89_IC][2][37] = 52,
[3][1][2][1][RTW89_KCC][1][37] = 40,
[3][1][2][1][RTW89_KCC][0][37] = 12,
[3][1][2][1][RTW89_ACMA][1][37] = 42,
@@ -45723,6 +47595,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_QATAR][0][37] = 6,
[3][1][2][1][RTW89_UK][1][37] = 42,
[3][1][2][1][RTW89_UK][0][37] = 6,
+ [3][1][2][1][RTW89_THAILAND][1][37] = 46,
+ [3][1][2][1][RTW89_THAILAND][0][37] = 6,
[3][1][2][1][RTW89_FCC][1][52] = 30,
[3][1][2][1][RTW89_FCC][2][52] = 127,
[3][1][2][1][RTW89_ETSI][1][52] = 127,
@@ -45730,6 +47604,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_MKK][1][52] = 127,
[3][1][2][1][RTW89_MKK][0][52] = 127,
[3][1][2][1][RTW89_IC][1][52] = 30,
+ [3][1][2][1][RTW89_IC][2][52] = 56,
[3][1][2][1][RTW89_KCC][1][52] = 48,
[3][1][2][1][RTW89_KCC][0][52] = 127,
[3][1][2][1][RTW89_ACMA][1][52] = 127,
@@ -45739,6 +47614,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_QATAR][0][52] = 127,
[3][1][2][1][RTW89_UK][1][52] = 127,
[3][1][2][1][RTW89_UK][0][52] = 127,
+ [3][1][2][1][RTW89_THAILAND][1][52] = 127,
+ [3][1][2][1][RTW89_THAILAND][0][52] = 127,
[3][1][2][1][RTW89_FCC][1][67] = 32,
[3][1][2][1][RTW89_FCC][2][67] = 54,
[3][1][2][1][RTW89_ETSI][1][67] = 127,
@@ -45746,6 +47623,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_MKK][1][67] = 127,
[3][1][2][1][RTW89_MKK][0][67] = 127,
[3][1][2][1][RTW89_IC][1][67] = 32,
+ [3][1][2][1][RTW89_IC][2][67] = 54,
[3][1][2][1][RTW89_KCC][1][67] = 48,
[3][1][2][1][RTW89_KCC][0][67] = 127,
[3][1][2][1][RTW89_ACMA][1][67] = 127,
@@ -45755,6 +47633,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_QATAR][0][67] = 127,
[3][1][2][1][RTW89_UK][1][67] = 127,
[3][1][2][1][RTW89_UK][0][67] = 127,
+ [3][1][2][1][RTW89_THAILAND][1][67] = 127,
+ [3][1][2][1][RTW89_THAILAND][0][67] = 127,
[3][1][2][1][RTW89_FCC][1][82] = 32,
[3][1][2][1][RTW89_FCC][2][82] = 127,
[3][1][2][1][RTW89_ETSI][1][82] = 127,
@@ -45762,6 +47642,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_MKK][1][82] = 127,
[3][1][2][1][RTW89_MKK][0][82] = 127,
[3][1][2][1][RTW89_IC][1][82] = 32,
+ [3][1][2][1][RTW89_IC][2][82] = 127,
[3][1][2][1][RTW89_KCC][1][82] = 24,
[3][1][2][1][RTW89_KCC][0][82] = 127,
[3][1][2][1][RTW89_ACMA][1][82] = 127,
@@ -45771,6 +47652,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_QATAR][0][82] = 127,
[3][1][2][1][RTW89_UK][1][82] = 127,
[3][1][2][1][RTW89_UK][0][82] = 127,
+ [3][1][2][1][RTW89_THAILAND][1][82] = 127,
+ [3][1][2][1][RTW89_THAILAND][0][82] = 127,
[3][1][2][1][RTW89_FCC][1][97] = 32,
[3][1][2][1][RTW89_FCC][2][97] = 127,
[3][1][2][1][RTW89_ETSI][1][97] = 127,
@@ -45778,6 +47661,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_MKK][1][97] = 127,
[3][1][2][1][RTW89_MKK][0][97] = 127,
[3][1][2][1][RTW89_IC][1][97] = 32,
+ [3][1][2][1][RTW89_IC][2][97] = 127,
[3][1][2][1][RTW89_KCC][1][97] = 24,
[3][1][2][1][RTW89_KCC][0][97] = 127,
[3][1][2][1][RTW89_ACMA][1][97] = 127,
@@ -45787,6 +47671,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_QATAR][0][97] = 127,
[3][1][2][1][RTW89_UK][1][97] = 127,
[3][1][2][1][RTW89_UK][0][97] = 127,
+ [3][1][2][1][RTW89_THAILAND][1][97] = 127,
+ [3][1][2][1][RTW89_THAILAND][0][97] = 127,
[3][1][2][1][RTW89_FCC][1][112] = 127,
[3][1][2][1][RTW89_FCC][2][112] = 127,
[3][1][2][1][RTW89_ETSI][1][112] = 127,
@@ -45794,6 +47680,7 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_MKK][1][112] = 127,
[3][1][2][1][RTW89_MKK][0][112] = 127,
[3][1][2][1][RTW89_IC][1][112] = 127,
+ [3][1][2][1][RTW89_IC][2][112] = 127,
[3][1][2][1][RTW89_KCC][1][112] = 127,
[3][1][2][1][RTW89_KCC][0][112] = 127,
[3][1][2][1][RTW89_ACMA][1][112] = 127,
@@ -45803,6 +47690,8 @@ const s8 rtw89_8852c_txpwr_lmt_6g[RTW89_6G_BW_NUM][RTW89_NTX_NUM]
[3][1][2][1][RTW89_QATAR][0][112] = 127,
[3][1][2][1][RTW89_UK][1][112] = 127,
[3][1][2][1][RTW89_UK][0][112] = 127,
+ [3][1][2][1][RTW89_THAILAND][1][112] = 127,
+ [3][1][2][1][RTW89_THAILAND][0][112] = 127,
};
static
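
For readers skimming the table additions above: each entry keys a signed power limit by bandwidth, TX-chain count, rate section, beamforming flag, regulatory domain (the new RTW89_THAILAND rows), a regulatory power category index, and a channel index, with 127 apparently serving as a "no limit defined" sentinel. Below is a minimal, self-contained C sketch of how such a table could be consulted; the lookup_limit() helper, the tiny mock table, its dimensions, and the treatment of 127 are illustrative assumptions for this sketch, not the rtw89 driver's actual API or layout.

	/*
	 * Illustrative sketch only: consulting a per-regulatory-domain TX power
	 * limit table of the general shape added above.  NO_LIMIT (127) is
	 * assumed to mean "no limit defined for this entry"; the helper and the
	 * mock table are hypothetical, not taken from the rtw89 driver.
	 */
	#include <stdio.h>

	#define MOCK_NTX_NUM  2   /* number of TX-chain settings */
	#define MOCK_REGD_NUM 2   /* e.g. 0 = FCC-like, 1 = THAILAND-like */
	#define MOCK_CAT_NUM  2   /* regulatory power category index */
	#define MOCK_CH_NUM   2   /* channel index */

	#define NO_LIMIT 127      /* sentinel: entry carries no defined limit */

	/* limits kept as signed bytes, mirroring the s8 tables above */
	static const signed char mock_lmt[MOCK_NTX_NUM][MOCK_REGD_NUM]
					 [MOCK_CAT_NUM][MOCK_CH_NUM] = {
		[0][0][0][0] = 52, [0][0][0][1] = 50,
		[0][0][1][0] = NO_LIMIT, [0][0][1][1] = 46,
		[0][1][0][0] = 46, [0][1][0][1] = 46,
		[0][1][1][0] = 18, [0][1][1][1] = NO_LIMIT,
		[1][0][0][0] = 30, [1][0][0][1] = 30,
		[1][0][1][0] = NO_LIMIT, [1][0][1][1] = 22,
		[1][1][0][0] = 46, [1][1][0][1] = 46,
		[1][1][1][0] = 6,  [1][1][1][1] = NO_LIMIT,
	};

	/* Return the tabulated limit, or a caller-provided fallback when the
	 * entry is the NO_LIMIT sentinel. */
	static int lookup_limit(int ntx, int regd, int cat, int ch, int dflt)
	{
		signed char v = mock_lmt[ntx][regd][cat][ch];

		return v == NO_LIMIT ? dflt : v;
	}

	int main(void)
	{
		/* e.g. first TX setting, THAILAND-like domain, category 1, channel 0 */
		printf("limit = %d\n", lookup_limit(0, 1, 1, 0, 0));
		return 0;
	}

The design point the sketch illustrates is that per-domain rows such as the added RTW89_THAILAND entries only extend one axis of a dense lookup table; callers index straight into it and fall back when they hit the sentinel, so adding a domain never changes the lookup path itself.
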
@@ -45904,6 +47793,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][0] = 34,
[0][0][RTW89_CHILE][0] = 60,
[0][0][RTW89_QATAR][0] = 34,
+ [0][0][RTW89_THAILAND][0] = 34,
[0][0][RTW89_FCC][1] = 60,
[0][0][RTW89_ETSI][1] = 38,
[0][0][RTW89_MKK][1] = 40,
@@ -45916,6 +47806,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][1] = 38,
[0][0][RTW89_CHILE][1] = 50,
[0][0][RTW89_QATAR][1] = 38,
+ [0][0][RTW89_THAILAND][1] = 38,
[0][0][RTW89_FCC][2] = 64,
[0][0][RTW89_ETSI][2] = 38,
[0][0][RTW89_MKK][2] = 40,
@@ -45928,6 +47819,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][2] = 38,
[0][0][RTW89_CHILE][2] = 50,
[0][0][RTW89_QATAR][2] = 38,
+ [0][0][RTW89_THAILAND][2] = 38,
[0][0][RTW89_FCC][3] = 68,
[0][0][RTW89_ETSI][3] = 38,
[0][0][RTW89_MKK][3] = 40,
@@ -45940,6 +47832,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][3] = 38,
[0][0][RTW89_CHILE][3] = 50,
[0][0][RTW89_QATAR][3] = 38,
+ [0][0][RTW89_THAILAND][3] = 38,
[0][0][RTW89_FCC][4] = 68,
[0][0][RTW89_ETSI][4] = 38,
[0][0][RTW89_MKK][4] = 40,
@@ -45952,6 +47845,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][4] = 38,
[0][0][RTW89_CHILE][4] = 50,
[0][0][RTW89_QATAR][4] = 38,
+ [0][0][RTW89_THAILAND][4] = 38,
[0][0][RTW89_FCC][5] = 78,
[0][0][RTW89_ETSI][5] = 38,
[0][0][RTW89_MKK][5] = 40,
@@ -45964,6 +47858,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][5] = 38,
[0][0][RTW89_CHILE][5] = 78,
[0][0][RTW89_QATAR][5] = 38,
+ [0][0][RTW89_THAILAND][5] = 38,
[0][0][RTW89_FCC][6] = 54,
[0][0][RTW89_ETSI][6] = 38,
[0][0][RTW89_MKK][6] = 40,
@@ -45976,6 +47871,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][6] = 38,
[0][0][RTW89_CHILE][6] = 36,
[0][0][RTW89_QATAR][6] = 38,
+ [0][0][RTW89_THAILAND][6] = 38,
[0][0][RTW89_FCC][7] = 54,
[0][0][RTW89_ETSI][7] = 38,
[0][0][RTW89_MKK][7] = 40,
@@ -45988,6 +47884,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][7] = 38,
[0][0][RTW89_CHILE][7] = 36,
[0][0][RTW89_QATAR][7] = 38,
+ [0][0][RTW89_THAILAND][7] = 38,
[0][0][RTW89_FCC][8] = 50,
[0][0][RTW89_ETSI][8] = 38,
[0][0][RTW89_MKK][8] = 40,
@@ -46000,6 +47897,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][8] = 38,
[0][0][RTW89_CHILE][8] = 36,
[0][0][RTW89_QATAR][8] = 38,
+ [0][0][RTW89_THAILAND][8] = 38,
[0][0][RTW89_FCC][9] = 46,
[0][0][RTW89_ETSI][9] = 38,
[0][0][RTW89_MKK][9] = 40,
@@ -46012,6 +47910,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][9] = 38,
[0][0][RTW89_CHILE][9] = 36,
[0][0][RTW89_QATAR][9] = 38,
+ [0][0][RTW89_THAILAND][9] = 38,
[0][0][RTW89_FCC][10] = 46,
[0][0][RTW89_ETSI][10] = 38,
[0][0][RTW89_MKK][10] = 40,
@@ -46024,6 +47923,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][10] = 38,
[0][0][RTW89_CHILE][10] = 46,
[0][0][RTW89_QATAR][10] = 38,
+ [0][0][RTW89_THAILAND][10] = 38,
[0][0][RTW89_FCC][11] = 26,
[0][0][RTW89_ETSI][11] = 38,
[0][0][RTW89_MKK][11] = 40,
@@ -46036,6 +47936,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][11] = 38,
[0][0][RTW89_CHILE][11] = 26,
[0][0][RTW89_QATAR][11] = 38,
+ [0][0][RTW89_THAILAND][11] = 38,
[0][0][RTW89_FCC][12] = -20,
[0][0][RTW89_ETSI][12] = 34,
[0][0][RTW89_MKK][12] = 36,
@@ -46048,6 +47949,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][12] = 34,
[0][0][RTW89_CHILE][12] = -20,
[0][0][RTW89_QATAR][12] = 34,
+ [0][0][RTW89_THAILAND][12] = 34,
[0][0][RTW89_FCC][13] = 127,
[0][0][RTW89_ETSI][13] = 127,
[0][0][RTW89_MKK][13] = 127,
@@ -46060,6 +47962,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][13] = 127,
[0][0][RTW89_CHILE][13] = 127,
[0][0][RTW89_QATAR][13] = 127,
+ [0][0][RTW89_THAILAND][13] = 127,
[0][1][RTW89_FCC][0] = 56,
[0][1][RTW89_ETSI][0] = 22,
[0][1][RTW89_MKK][0] = 24,
@@ -46072,6 +47975,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][0] = 22,
[0][1][RTW89_CHILE][0] = 56,
[0][1][RTW89_QATAR][0] = 22,
+ [0][1][RTW89_THAILAND][0] = 22,
[0][1][RTW89_FCC][1] = 56,
[0][1][RTW89_ETSI][1] = 24,
[0][1][RTW89_MKK][1] = 30,
@@ -46084,6 +47988,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][1] = 24,
[0][1][RTW89_CHILE][1] = 40,
[0][1][RTW89_QATAR][1] = 24,
+ [0][1][RTW89_THAILAND][1] = 24,
[0][1][RTW89_FCC][2] = 60,
[0][1][RTW89_ETSI][2] = 24,
[0][1][RTW89_MKK][2] = 30,
@@ -46096,6 +48001,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][2] = 24,
[0][1][RTW89_CHILE][2] = 40,
[0][1][RTW89_QATAR][2] = 24,
+ [0][1][RTW89_THAILAND][2] = 24,
[0][1][RTW89_FCC][3] = 64,
[0][1][RTW89_ETSI][3] = 24,
[0][1][RTW89_MKK][3] = 30,
@@ -46108,6 +48014,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][3] = 24,
[0][1][RTW89_CHILE][3] = 40,
[0][1][RTW89_QATAR][3] = 24,
+ [0][1][RTW89_THAILAND][3] = 24,
[0][1][RTW89_FCC][4] = 68,
[0][1][RTW89_ETSI][4] = 24,
[0][1][RTW89_MKK][4] = 30,
@@ -46120,6 +48027,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][4] = 24,
[0][1][RTW89_CHILE][4] = 40,
[0][1][RTW89_QATAR][4] = 24,
+ [0][1][RTW89_THAILAND][4] = 24,
[0][1][RTW89_FCC][5] = 76,
[0][1][RTW89_ETSI][5] = 24,
[0][1][RTW89_MKK][5] = 30,
@@ -46132,6 +48040,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][5] = 24,
[0][1][RTW89_CHILE][5] = 76,
[0][1][RTW89_QATAR][5] = 24,
+ [0][1][RTW89_THAILAND][5] = 24,
[0][1][RTW89_FCC][6] = 54,
[0][1][RTW89_ETSI][6] = 24,
[0][1][RTW89_MKK][6] = 30,
@@ -46144,6 +48053,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][6] = 24,
[0][1][RTW89_CHILE][6] = 26,
[0][1][RTW89_QATAR][6] = 24,
+ [0][1][RTW89_THAILAND][6] = 24,
[0][1][RTW89_FCC][7] = 50,
[0][1][RTW89_ETSI][7] = 24,
[0][1][RTW89_MKK][7] = 30,
@@ -46156,6 +48066,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][7] = 24,
[0][1][RTW89_CHILE][7] = 26,
[0][1][RTW89_QATAR][7] = 24,
+ [0][1][RTW89_THAILAND][7] = 24,
[0][1][RTW89_FCC][8] = 46,
[0][1][RTW89_ETSI][8] = 24,
[0][1][RTW89_MKK][8] = 30,
@@ -46168,6 +48079,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][8] = 24,
[0][1][RTW89_CHILE][8] = 26,
[0][1][RTW89_QATAR][8] = 24,
+ [0][1][RTW89_THAILAND][8] = 24,
[0][1][RTW89_FCC][9] = 42,
[0][1][RTW89_ETSI][9] = 24,
[0][1][RTW89_MKK][9] = 30,
@@ -46180,6 +48092,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][9] = 24,
[0][1][RTW89_CHILE][9] = 26,
[0][1][RTW89_QATAR][9] = 24,
+ [0][1][RTW89_THAILAND][9] = 24,
[0][1][RTW89_FCC][10] = 42,
[0][1][RTW89_ETSI][10] = 24,
[0][1][RTW89_MKK][10] = 30,
@@ -46192,6 +48105,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][10] = 24,
[0][1][RTW89_CHILE][10] = 42,
[0][1][RTW89_QATAR][10] = 24,
+ [0][1][RTW89_THAILAND][10] = 24,
[0][1][RTW89_FCC][11] = 22,
[0][1][RTW89_ETSI][11] = 24,
[0][1][RTW89_MKK][11] = 30,
@@ -46204,6 +48118,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][11] = 24,
[0][1][RTW89_CHILE][11] = 22,
[0][1][RTW89_QATAR][11] = 24,
+ [0][1][RTW89_THAILAND][11] = 24,
[0][1][RTW89_FCC][12] = -30,
[0][1][RTW89_ETSI][12] = 20,
[0][1][RTW89_MKK][12] = 24,
@@ -46216,6 +48131,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][12] = 20,
[0][1][RTW89_CHILE][12] = -30,
[0][1][RTW89_QATAR][12] = 20,
+ [0][1][RTW89_THAILAND][12] = 20,
[0][1][RTW89_FCC][13] = 127,
[0][1][RTW89_ETSI][13] = 127,
[0][1][RTW89_MKK][13] = 127,
@@ -46228,6 +48144,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][13] = 127,
[0][1][RTW89_CHILE][13] = 127,
[0][1][RTW89_QATAR][13] = 127,
+ [0][1][RTW89_THAILAND][13] = 127,
[1][0][RTW89_FCC][0] = 66,
[1][0][RTW89_ETSI][0] = 46,
[1][0][RTW89_MKK][0] = 48,
@@ -46240,6 +48157,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][0] = 46,
[1][0][RTW89_CHILE][0] = 66,
[1][0][RTW89_QATAR][0] = 46,
+ [1][0][RTW89_THAILAND][0] = 46,
[1][0][RTW89_FCC][1] = 66,
[1][0][RTW89_ETSI][1] = 46,
[1][0][RTW89_MKK][1] = 48,
@@ -46252,6 +48170,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][1] = 46,
[1][0][RTW89_CHILE][1] = 54,
[1][0][RTW89_QATAR][1] = 46,
+ [1][0][RTW89_THAILAND][1] = 46,
[1][0][RTW89_FCC][2] = 70,
[1][0][RTW89_ETSI][2] = 46,
[1][0][RTW89_MKK][2] = 48,
@@ -46264,6 +48183,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][2] = 46,
[1][0][RTW89_CHILE][2] = 54,
[1][0][RTW89_QATAR][2] = 46,
+ [1][0][RTW89_THAILAND][2] = 46,
[1][0][RTW89_FCC][3] = 72,
[1][0][RTW89_ETSI][3] = 46,
[1][0][RTW89_MKK][3] = 48,
@@ -46276,6 +48196,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][3] = 46,
[1][0][RTW89_CHILE][3] = 54,
[1][0][RTW89_QATAR][3] = 46,
+ [1][0][RTW89_THAILAND][3] = 46,
[1][0][RTW89_FCC][4] = 72,
[1][0][RTW89_ETSI][4] = 46,
[1][0][RTW89_MKK][4] = 48,
@@ -46288,6 +48209,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][4] = 46,
[1][0][RTW89_CHILE][4] = 54,
[1][0][RTW89_QATAR][4] = 46,
+ [1][0][RTW89_THAILAND][4] = 46,
[1][0][RTW89_FCC][5] = 82,
[1][0][RTW89_ETSI][5] = 46,
[1][0][RTW89_MKK][5] = 48,
@@ -46300,6 +48222,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][5] = 46,
[1][0][RTW89_CHILE][5] = 82,
[1][0][RTW89_QATAR][5] = 46,
+ [1][0][RTW89_THAILAND][5] = 46,
[1][0][RTW89_FCC][6] = 58,
[1][0][RTW89_ETSI][6] = 44,
[1][0][RTW89_MKK][6] = 48,
@@ -46312,6 +48235,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][6] = 44,
[1][0][RTW89_CHILE][6] = 40,
[1][0][RTW89_QATAR][6] = 44,
+ [1][0][RTW89_THAILAND][6] = 44,
[1][0][RTW89_FCC][7] = 58,
[1][0][RTW89_ETSI][7] = 46,
[1][0][RTW89_MKK][7] = 48,
@@ -46324,6 +48248,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][7] = 46,
[1][0][RTW89_CHILE][7] = 40,
[1][0][RTW89_QATAR][7] = 46,
+ [1][0][RTW89_THAILAND][7] = 46,
[1][0][RTW89_FCC][8] = 58,
[1][0][RTW89_ETSI][8] = 46,
[1][0][RTW89_MKK][8] = 48,
@@ -46336,6 +48261,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][8] = 46,
[1][0][RTW89_CHILE][8] = 40,
[1][0][RTW89_QATAR][8] = 46,
+ [1][0][RTW89_THAILAND][8] = 46,
[1][0][RTW89_FCC][9] = 54,
[1][0][RTW89_ETSI][9] = 46,
[1][0][RTW89_MKK][9] = 48,
@@ -46348,6 +48274,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][9] = 46,
[1][0][RTW89_CHILE][9] = 40,
[1][0][RTW89_QATAR][9] = 46,
+ [1][0][RTW89_THAILAND][9] = 46,
[1][0][RTW89_FCC][10] = 54,
[1][0][RTW89_ETSI][10] = 46,
[1][0][RTW89_MKK][10] = 48,
@@ -46360,6 +48287,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][10] = 46,
[1][0][RTW89_CHILE][10] = 54,
[1][0][RTW89_QATAR][10] = 46,
+ [1][0][RTW89_THAILAND][10] = 46,
[1][0][RTW89_FCC][11] = 36,
[1][0][RTW89_ETSI][11] = 46,
[1][0][RTW89_MKK][11] = 48,
@@ -46372,6 +48300,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][11] = 46,
[1][0][RTW89_CHILE][11] = 36,
[1][0][RTW89_QATAR][11] = 46,
+ [1][0][RTW89_THAILAND][11] = 46,
[1][0][RTW89_FCC][12] = 4,
[1][0][RTW89_ETSI][12] = 46,
[1][0][RTW89_MKK][12] = 46,
@@ -46384,6 +48313,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][12] = 46,
[1][0][RTW89_CHILE][12] = 4,
[1][0][RTW89_QATAR][12] = 46,
+ [1][0][RTW89_THAILAND][12] = 46,
[1][0][RTW89_FCC][13] = 127,
[1][0][RTW89_ETSI][13] = 127,
[1][0][RTW89_MKK][13] = 127,
@@ -46396,6 +48326,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][13] = 127,
[1][0][RTW89_CHILE][13] = 127,
[1][0][RTW89_QATAR][13] = 127,
+ [1][0][RTW89_THAILAND][13] = 127,
[1][1][RTW89_FCC][0] = 58,
[1][1][RTW89_ETSI][0] = 32,
[1][1][RTW89_MKK][0] = 34,
@@ -46408,6 +48339,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][0] = 32,
[1][1][RTW89_CHILE][0] = 58,
[1][1][RTW89_QATAR][0] = 32,
+ [1][1][RTW89_THAILAND][0] = 32,
[1][1][RTW89_FCC][1] = 58,
[1][1][RTW89_ETSI][1] = 34,
[1][1][RTW89_MKK][1] = 34,
@@ -46420,6 +48352,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][1] = 34,
[1][1][RTW89_CHILE][1] = 40,
[1][1][RTW89_QATAR][1] = 34,
+ [1][1][RTW89_THAILAND][1] = 34,
[1][1][RTW89_FCC][2] = 62,
[1][1][RTW89_ETSI][2] = 34,
[1][1][RTW89_MKK][2] = 34,
@@ -46432,6 +48365,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][2] = 34,
[1][1][RTW89_CHILE][2] = 40,
[1][1][RTW89_QATAR][2] = 34,
+ [1][1][RTW89_THAILAND][2] = 34,
[1][1][RTW89_FCC][3] = 66,
[1][1][RTW89_ETSI][3] = 34,
[1][1][RTW89_MKK][3] = 34,
@@ -46444,6 +48378,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][3] = 34,
[1][1][RTW89_CHILE][3] = 40,
[1][1][RTW89_QATAR][3] = 34,
+ [1][1][RTW89_THAILAND][3] = 34,
[1][1][RTW89_FCC][4] = 70,
[1][1][RTW89_ETSI][4] = 34,
[1][1][RTW89_MKK][4] = 34,
@@ -46456,6 +48391,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][4] = 34,
[1][1][RTW89_CHILE][4] = 40,
[1][1][RTW89_QATAR][4] = 34,
+ [1][1][RTW89_THAILAND][4] = 34,
[1][1][RTW89_FCC][5] = 82,
[1][1][RTW89_ETSI][5] = 34,
[1][1][RTW89_MKK][5] = 34,
@@ -46468,6 +48404,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][5] = 34,
[1][1][RTW89_CHILE][5] = 78,
[1][1][RTW89_QATAR][5] = 34,
+ [1][1][RTW89_THAILAND][5] = 34,
[1][1][RTW89_FCC][6] = 60,
[1][1][RTW89_ETSI][6] = 34,
[1][1][RTW89_MKK][6] = 34,
@@ -46480,6 +48417,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][6] = 34,
[1][1][RTW89_CHILE][6] = 30,
[1][1][RTW89_QATAR][6] = 34,
+ [1][1][RTW89_THAILAND][6] = 34,
[1][1][RTW89_FCC][7] = 56,
[1][1][RTW89_ETSI][7] = 34,
[1][1][RTW89_MKK][7] = 34,
@@ -46492,6 +48430,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][7] = 34,
[1][1][RTW89_CHILE][7] = 30,
[1][1][RTW89_QATAR][7] = 34,
+ [1][1][RTW89_THAILAND][7] = 34,
[1][1][RTW89_FCC][8] = 52,
[1][1][RTW89_ETSI][8] = 34,
[1][1][RTW89_MKK][8] = 34,
@@ -46504,6 +48443,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][8] = 34,
[1][1][RTW89_CHILE][8] = 30,
[1][1][RTW89_QATAR][8] = 34,
+ [1][1][RTW89_THAILAND][8] = 34,
[1][1][RTW89_FCC][9] = 48,
[1][1][RTW89_ETSI][9] = 34,
[1][1][RTW89_MKK][9] = 34,
@@ -46516,6 +48456,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][9] = 34,
[1][1][RTW89_CHILE][9] = 30,
[1][1][RTW89_QATAR][9] = 34,
+ [1][1][RTW89_THAILAND][9] = 34,
[1][1][RTW89_FCC][10] = 48,
[1][1][RTW89_ETSI][10] = 34,
[1][1][RTW89_MKK][10] = 34,
@@ -46528,6 +48469,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][10] = 34,
[1][1][RTW89_CHILE][10] = 48,
[1][1][RTW89_QATAR][10] = 34,
+ [1][1][RTW89_THAILAND][10] = 34,
[1][1][RTW89_FCC][11] = 30,
[1][1][RTW89_ETSI][11] = 34,
[1][1][RTW89_MKK][11] = 34,
@@ -46540,6 +48482,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][11] = 34,
[1][1][RTW89_CHILE][11] = 30,
[1][1][RTW89_QATAR][11] = 34,
+ [1][1][RTW89_THAILAND][11] = 34,
[1][1][RTW89_FCC][12] = -6,
[1][1][RTW89_ETSI][12] = 34,
[1][1][RTW89_MKK][12] = 34,
@@ -46552,6 +48495,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][12] = 34,
[1][1][RTW89_CHILE][12] = -6,
[1][1][RTW89_QATAR][12] = 34,
+ [1][1][RTW89_THAILAND][12] = 34,
[1][1][RTW89_FCC][13] = 127,
[1][1][RTW89_ETSI][13] = 127,
[1][1][RTW89_MKK][13] = 127,
@@ -46564,6 +48508,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][13] = 127,
[1][1][RTW89_CHILE][13] = 127,
[1][1][RTW89_QATAR][13] = 127,
+ [1][1][RTW89_THAILAND][13] = 127,
[2][0][RTW89_FCC][0] = 70,
[2][0][RTW89_ETSI][0] = 58,
[2][0][RTW89_MKK][0] = 58,
@@ -46576,6 +48521,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][0] = 58,
[2][0][RTW89_CHILE][0] = 70,
[2][0][RTW89_QATAR][0] = 58,
+ [2][0][RTW89_THAILAND][0] = 58,
[2][0][RTW89_FCC][1] = 70,
[2][0][RTW89_ETSI][1] = 58,
[2][0][RTW89_MKK][1] = 58,
@@ -46588,6 +48534,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][1] = 58,
[2][0][RTW89_CHILE][1] = 54,
[2][0][RTW89_QATAR][1] = 58,
+ [2][0][RTW89_THAILAND][1] = 58,
[2][0][RTW89_FCC][2] = 72,
[2][0][RTW89_ETSI][2] = 58,
[2][0][RTW89_MKK][2] = 58,
@@ -46600,6 +48547,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][2] = 58,
[2][0][RTW89_CHILE][2] = 54,
[2][0][RTW89_QATAR][2] = 58,
+ [2][0][RTW89_THAILAND][2] = 58,
[2][0][RTW89_FCC][3] = 72,
[2][0][RTW89_ETSI][3] = 58,
[2][0][RTW89_MKK][3] = 58,
@@ -46612,6 +48560,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][3] = 58,
[2][0][RTW89_CHILE][3] = 54,
[2][0][RTW89_QATAR][3] = 58,
+ [2][0][RTW89_THAILAND][3] = 58,
[2][0][RTW89_FCC][4] = 72,
[2][0][RTW89_ETSI][4] = 58,
[2][0][RTW89_MKK][4] = 58,
@@ -46624,6 +48573,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][4] = 58,
[2][0][RTW89_CHILE][4] = 54,
[2][0][RTW89_QATAR][4] = 58,
+ [2][0][RTW89_THAILAND][4] = 58,
[2][0][RTW89_FCC][5] = 82,
[2][0][RTW89_ETSI][5] = 58,
[2][0][RTW89_MKK][5] = 58,
@@ -46636,6 +48586,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][5] = 58,
[2][0][RTW89_CHILE][5] = 82,
[2][0][RTW89_QATAR][5] = 58,
+ [2][0][RTW89_THAILAND][5] = 58,
[2][0][RTW89_FCC][6] = 66,
[2][0][RTW89_ETSI][6] = 56,
[2][0][RTW89_MKK][6] = 58,
@@ -46648,6 +48599,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][6] = 56,
[2][0][RTW89_CHILE][6] = 48,
[2][0][RTW89_QATAR][6] = 56,
+ [2][0][RTW89_THAILAND][6] = 56,
[2][0][RTW89_FCC][7] = 66,
[2][0][RTW89_ETSI][7] = 58,
[2][0][RTW89_MKK][7] = 58,
@@ -46660,6 +48612,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][7] = 58,
[2][0][RTW89_CHILE][7] = 48,
[2][0][RTW89_QATAR][7] = 58,
+ [2][0][RTW89_THAILAND][7] = 58,
[2][0][RTW89_FCC][8] = 66,
[2][0][RTW89_ETSI][8] = 58,
[2][0][RTW89_MKK][8] = 58,
@@ -46672,6 +48625,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][8] = 58,
[2][0][RTW89_CHILE][8] = 48,
[2][0][RTW89_QATAR][8] = 58,
+ [2][0][RTW89_THAILAND][8] = 58,
[2][0][RTW89_FCC][9] = 64,
[2][0][RTW89_ETSI][9] = 58,
[2][0][RTW89_MKK][9] = 58,
@@ -46684,6 +48638,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][9] = 58,
[2][0][RTW89_CHILE][9] = 48,
[2][0][RTW89_QATAR][9] = 58,
+ [2][0][RTW89_THAILAND][9] = 58,
[2][0][RTW89_FCC][10] = 64,
[2][0][RTW89_ETSI][10] = 58,
[2][0][RTW89_MKK][10] = 58,
@@ -46696,6 +48651,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][10] = 58,
[2][0][RTW89_CHILE][10] = 64,
[2][0][RTW89_QATAR][10] = 58,
+ [2][0][RTW89_THAILAND][10] = 58,
[2][0][RTW89_FCC][11] = 48,
[2][0][RTW89_ETSI][11] = 58,
[2][0][RTW89_MKK][11] = 58,
@@ -46708,6 +48664,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][11] = 58,
[2][0][RTW89_CHILE][11] = 48,
[2][0][RTW89_QATAR][11] = 58,
+ [2][0][RTW89_THAILAND][11] = 58,
[2][0][RTW89_FCC][12] = 16,
[2][0][RTW89_ETSI][12] = 58,
[2][0][RTW89_MKK][12] = 58,
@@ -46720,6 +48677,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][12] = 58,
[2][0][RTW89_CHILE][12] = 16,
[2][0][RTW89_QATAR][12] = 58,
+ [2][0][RTW89_THAILAND][12] = 58,
[2][0][RTW89_FCC][13] = 127,
[2][0][RTW89_ETSI][13] = 127,
[2][0][RTW89_MKK][13] = 127,
@@ -46732,6 +48690,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][13] = 127,
[2][0][RTW89_CHILE][13] = 127,
[2][0][RTW89_QATAR][13] = 127,
+ [2][0][RTW89_THAILAND][13] = 127,
[2][1][RTW89_FCC][0] = 64,
[2][1][RTW89_ETSI][0] = 46,
[2][1][RTW89_MKK][0] = 46,
@@ -46744,6 +48703,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][0] = 46,
[2][1][RTW89_CHILE][0] = 64,
[2][1][RTW89_QATAR][0] = 46,
+ [2][1][RTW89_THAILAND][0] = 46,
[2][1][RTW89_FCC][1] = 64,
[2][1][RTW89_ETSI][1] = 46,
[2][1][RTW89_MKK][1] = 46,
@@ -46756,6 +48716,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][1] = 46,
[2][1][RTW89_CHILE][1] = 44,
[2][1][RTW89_QATAR][1] = 46,
+ [2][1][RTW89_THAILAND][1] = 46,
[2][1][RTW89_FCC][2] = 68,
[2][1][RTW89_ETSI][2] = 46,
[2][1][RTW89_MKK][2] = 46,
@@ -46768,6 +48729,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][2] = 46,
[2][1][RTW89_CHILE][2] = 44,
[2][1][RTW89_QATAR][2] = 46,
+ [2][1][RTW89_THAILAND][2] = 46,
[2][1][RTW89_FCC][3] = 72,
[2][1][RTW89_ETSI][3] = 46,
[2][1][RTW89_MKK][3] = 46,
@@ -46780,6 +48742,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][3] = 46,
[2][1][RTW89_CHILE][3] = 44,
[2][1][RTW89_QATAR][3] = 46,
+ [2][1][RTW89_THAILAND][3] = 46,
[2][1][RTW89_FCC][4] = 74,
[2][1][RTW89_ETSI][4] = 46,
[2][1][RTW89_MKK][4] = 46,
@@ -46792,6 +48755,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][4] = 46,
[2][1][RTW89_CHILE][4] = 44,
[2][1][RTW89_QATAR][4] = 46,
+ [2][1][RTW89_THAILAND][4] = 46,
[2][1][RTW89_FCC][5] = 82,
[2][1][RTW89_ETSI][5] = 46,
[2][1][RTW89_MKK][5] = 46,
@@ -46804,6 +48768,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][5] = 46,
[2][1][RTW89_CHILE][5] = 78,
[2][1][RTW89_QATAR][5] = 46,
+ [2][1][RTW89_THAILAND][5] = 46,
[2][1][RTW89_FCC][6] = 72,
[2][1][RTW89_ETSI][6] = 44,
[2][1][RTW89_MKK][6] = 46,
@@ -46816,6 +48781,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][6] = 44,
[2][1][RTW89_CHILE][6] = 42,
[2][1][RTW89_QATAR][6] = 44,
+ [2][1][RTW89_THAILAND][6] = 44,
[2][1][RTW89_FCC][7] = 72,
[2][1][RTW89_ETSI][7] = 46,
[2][1][RTW89_MKK][7] = 46,
@@ -46828,6 +48794,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][7] = 46,
[2][1][RTW89_CHILE][7] = 42,
[2][1][RTW89_QATAR][7] = 46,
+ [2][1][RTW89_THAILAND][7] = 46,
[2][1][RTW89_FCC][8] = 68,
[2][1][RTW89_ETSI][8] = 46,
[2][1][RTW89_MKK][8] = 46,
@@ -46840,6 +48807,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][8] = 46,
[2][1][RTW89_CHILE][8] = 42,
[2][1][RTW89_QATAR][8] = 46,
+ [2][1][RTW89_THAILAND][8] = 46,
[2][1][RTW89_FCC][9] = 64,
[2][1][RTW89_ETSI][9] = 46,
[2][1][RTW89_MKK][9] = 46,
@@ -46852,6 +48820,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][9] = 46,
[2][1][RTW89_CHILE][9] = 42,
[2][1][RTW89_QATAR][9] = 46,
+ [2][1][RTW89_THAILAND][9] = 46,
[2][1][RTW89_FCC][10] = 64,
[2][1][RTW89_ETSI][10] = 46,
[2][1][RTW89_MKK][10] = 46,
@@ -46864,6 +48833,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][10] = 46,
[2][1][RTW89_CHILE][10] = 64,
[2][1][RTW89_QATAR][10] = 46,
+ [2][1][RTW89_THAILAND][10] = 46,
[2][1][RTW89_FCC][11] = 46,
[2][1][RTW89_ETSI][11] = 46,
[2][1][RTW89_MKK][11] = 46,
@@ -46876,6 +48846,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][11] = 46,
[2][1][RTW89_CHILE][11] = 46,
[2][1][RTW89_QATAR][11] = 46,
+ [2][1][RTW89_THAILAND][11] = 46,
[2][1][RTW89_FCC][12] = 6,
[2][1][RTW89_ETSI][12] = 44,
[2][1][RTW89_MKK][12] = 46,
@@ -46888,6 +48859,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][12] = 44,
[2][1][RTW89_CHILE][12] = 6,
[2][1][RTW89_QATAR][12] = 44,
+ [2][1][RTW89_THAILAND][12] = 44,
[2][1][RTW89_FCC][13] = 127,
[2][1][RTW89_ETSI][13] = 127,
[2][1][RTW89_MKK][13] = 127,
@@ -46900,6 +48872,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_2g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][13] = 127,
[2][1][RTW89_CHILE][13] = 127,
[2][1][RTW89_QATAR][13] = 127,
+ [2][1][RTW89_THAILAND][13] = 127,
};
static
@@ -47085,6 +49058,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][0] = 22,
[0][0][RTW89_CHILE][0] = 50,
[0][0][RTW89_QATAR][0] = 30,
+ [0][0][RTW89_THAILAND][0] = 30,
[0][0][RTW89_FCC][2] = 50,
[0][0][RTW89_ETSI][2] = 30,
[0][0][RTW89_MKK][2] = 36,
@@ -47097,6 +49071,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][2] = 22,
[0][0][RTW89_CHILE][2] = 50,
[0][0][RTW89_QATAR][2] = 30,
+ [0][0][RTW89_THAILAND][2] = 30,
[0][0][RTW89_FCC][4] = 50,
[0][0][RTW89_ETSI][4] = 30,
[0][0][RTW89_MKK][4] = 22,
@@ -47109,6 +49084,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][4] = 22,
[0][0][RTW89_CHILE][4] = 50,
[0][0][RTW89_QATAR][4] = 30,
+ [0][0][RTW89_THAILAND][4] = 30,
[0][0][RTW89_FCC][6] = 50,
[0][0][RTW89_ETSI][6] = 30,
[0][0][RTW89_MKK][6] = 22,
@@ -47121,6 +49097,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][6] = 22,
[0][0][RTW89_CHILE][6] = 50,
[0][0][RTW89_QATAR][6] = 30,
+ [0][0][RTW89_THAILAND][6] = 30,
[0][0][RTW89_FCC][8] = 52,
[0][0][RTW89_ETSI][8] = 28,
[0][0][RTW89_MKK][8] = 18,
@@ -47133,6 +49110,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][8] = 22,
[0][0][RTW89_CHILE][8] = 52,
[0][0][RTW89_QATAR][8] = 28,
+ [0][0][RTW89_THAILAND][8] = 28,
[0][0][RTW89_FCC][10] = 52,
[0][0][RTW89_ETSI][10] = 28,
[0][0][RTW89_MKK][10] = 18,
@@ -47145,6 +49123,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][10] = 22,
[0][0][RTW89_CHILE][10] = 52,
[0][0][RTW89_QATAR][10] = 28,
+ [0][0][RTW89_THAILAND][10] = 28,
[0][0][RTW89_FCC][12] = 52,
[0][0][RTW89_ETSI][12] = 28,
[0][0][RTW89_MKK][12] = 34,
@@ -47157,6 +49136,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][12] = 22,
[0][0][RTW89_CHILE][12] = 52,
[0][0][RTW89_QATAR][12] = 28,
+ [0][0][RTW89_THAILAND][12] = 28,
[0][0][RTW89_FCC][14] = 52,
[0][0][RTW89_ETSI][14] = 28,
[0][0][RTW89_MKK][14] = 34,
@@ -47169,6 +49149,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][14] = 22,
[0][0][RTW89_CHILE][14] = 52,
[0][0][RTW89_QATAR][14] = 28,
+ [0][0][RTW89_THAILAND][14] = 28,
[0][0][RTW89_FCC][15] = 52,
[0][0][RTW89_ETSI][15] = 30,
[0][0][RTW89_MKK][15] = 56,
@@ -47181,6 +49162,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][15] = 22,
[0][0][RTW89_CHILE][15] = 52,
[0][0][RTW89_QATAR][15] = 30,
+ [0][0][RTW89_THAILAND][15] = 30,
[0][0][RTW89_FCC][17] = 52,
[0][0][RTW89_ETSI][17] = 30,
[0][0][RTW89_MKK][17] = 58,
@@ -47193,6 +49175,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][17] = 22,
[0][0][RTW89_CHILE][17] = 52,
[0][0][RTW89_QATAR][17] = 30,
+ [0][0][RTW89_THAILAND][17] = 30,
[0][0][RTW89_FCC][19] = 52,
[0][0][RTW89_ETSI][19] = 30,
[0][0][RTW89_MKK][19] = 58,
@@ -47205,6 +49188,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][19] = 22,
[0][0][RTW89_CHILE][19] = 52,
[0][0][RTW89_QATAR][19] = 30,
+ [0][0][RTW89_THAILAND][19] = 30,
[0][0][RTW89_FCC][21] = 52,
[0][0][RTW89_ETSI][21] = 30,
[0][0][RTW89_MKK][21] = 58,
@@ -47217,6 +49201,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][21] = 22,
[0][0][RTW89_CHILE][21] = 52,
[0][0][RTW89_QATAR][21] = 30,
+ [0][0][RTW89_THAILAND][21] = 30,
[0][0][RTW89_FCC][23] = 52,
[0][0][RTW89_ETSI][23] = 30,
[0][0][RTW89_MKK][23] = 58,
@@ -47229,6 +49214,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][23] = 22,
[0][0][RTW89_CHILE][23] = 52,
[0][0][RTW89_QATAR][23] = 30,
+ [0][0][RTW89_THAILAND][23] = 30,
[0][0][RTW89_FCC][25] = 52,
[0][0][RTW89_ETSI][25] = 30,
[0][0][RTW89_MKK][25] = 58,
@@ -47241,6 +49227,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][25] = 22,
[0][0][RTW89_CHILE][25] = 52,
[0][0][RTW89_QATAR][25] = 30,
+ [0][0][RTW89_THAILAND][25] = 30,
[0][0][RTW89_FCC][27] = 52,
[0][0][RTW89_ETSI][27] = 30,
[0][0][RTW89_MKK][27] = 58,
@@ -47253,6 +49240,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][27] = 22,
[0][0][RTW89_CHILE][27] = 52,
[0][0][RTW89_QATAR][27] = 30,
+ [0][0][RTW89_THAILAND][27] = 30,
[0][0][RTW89_FCC][29] = 52,
[0][0][RTW89_ETSI][29] = 30,
[0][0][RTW89_MKK][29] = 58,
@@ -47265,6 +49253,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][29] = 22,
[0][0][RTW89_CHILE][29] = 52,
[0][0][RTW89_QATAR][29] = 30,
+ [0][0][RTW89_THAILAND][29] = 30,
[0][0][RTW89_FCC][31] = 52,
[0][0][RTW89_ETSI][31] = 30,
[0][0][RTW89_MKK][31] = 58,
@@ -47277,6 +49266,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][31] = 22,
[0][0][RTW89_CHILE][31] = 52,
[0][0][RTW89_QATAR][31] = 30,
+ [0][0][RTW89_THAILAND][31] = 30,
[0][0][RTW89_FCC][33] = 44,
[0][0][RTW89_ETSI][33] = 30,
[0][0][RTW89_MKK][33] = 58,
@@ -47289,6 +49279,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][33] = 22,
[0][0][RTW89_CHILE][33] = 44,
[0][0][RTW89_QATAR][33] = 30,
+ [0][0][RTW89_THAILAND][33] = 30,
[0][0][RTW89_FCC][35] = 44,
[0][0][RTW89_ETSI][35] = 30,
[0][0][RTW89_MKK][35] = 58,
@@ -47301,6 +49292,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][35] = 22,
[0][0][RTW89_CHILE][35] = 44,
[0][0][RTW89_QATAR][35] = 30,
+ [0][0][RTW89_THAILAND][35] = 30,
[0][0][RTW89_FCC][37] = 52,
[0][0][RTW89_ETSI][37] = 127,
[0][0][RTW89_MKK][37] = 58,
@@ -47313,6 +49305,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][37] = 127,
[0][0][RTW89_CHILE][37] = 52,
[0][0][RTW89_QATAR][37] = 127,
+ [0][0][RTW89_THAILAND][37] = 127,
[0][0][RTW89_FCC][38] = 64,
[0][0][RTW89_ETSI][38] = 28,
[0][0][RTW89_MKK][38] = 127,
@@ -47325,6 +49318,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][38] = 26,
[0][0][RTW89_CHILE][38] = 64,
[0][0][RTW89_QATAR][38] = 26,
+ [0][0][RTW89_THAILAND][38] = 28,
[0][0][RTW89_FCC][40] = 64,
[0][0][RTW89_ETSI][40] = 28,
[0][0][RTW89_MKK][40] = 127,
@@ -47337,6 +49331,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][40] = 26,
[0][0][RTW89_CHILE][40] = 64,
[0][0][RTW89_QATAR][40] = 26,
+ [0][0][RTW89_THAILAND][40] = 28,
[0][0][RTW89_FCC][42] = 60,
[0][0][RTW89_ETSI][42] = 28,
[0][0][RTW89_MKK][42] = 127,
@@ -47349,6 +49344,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][42] = 26,
[0][0][RTW89_CHILE][42] = 60,
[0][0][RTW89_QATAR][42] = 26,
+ [0][0][RTW89_THAILAND][42] = 28,
[0][0][RTW89_FCC][44] = 60,
[0][0][RTW89_ETSI][44] = 28,
[0][0][RTW89_MKK][44] = 127,
@@ -47361,6 +49357,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][44] = 26,
[0][0][RTW89_CHILE][44] = 60,
[0][0][RTW89_QATAR][44] = 26,
+ [0][0][RTW89_THAILAND][44] = 28,
[0][0][RTW89_FCC][46] = 60,
[0][0][RTW89_ETSI][46] = 28,
[0][0][RTW89_MKK][46] = 127,
@@ -47373,6 +49370,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][46] = 26,
[0][0][RTW89_CHILE][46] = 60,
[0][0][RTW89_QATAR][46] = 26,
+ [0][0][RTW89_THAILAND][46] = 28,
[0][0][RTW89_FCC][48] = 46,
[0][0][RTW89_ETSI][48] = 127,
[0][0][RTW89_MKK][48] = 127,
@@ -47385,6 +49383,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][48] = 127,
[0][0][RTW89_CHILE][48] = 127,
[0][0][RTW89_QATAR][48] = 127,
+ [0][0][RTW89_THAILAND][48] = 127,
[0][0][RTW89_FCC][50] = 44,
[0][0][RTW89_ETSI][50] = 127,
[0][0][RTW89_MKK][50] = 127,
@@ -47397,6 +49396,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][50] = 127,
[0][0][RTW89_CHILE][50] = 127,
[0][0][RTW89_QATAR][50] = 127,
+ [0][0][RTW89_THAILAND][50] = 127,
[0][0][RTW89_FCC][52] = 34,
[0][0][RTW89_ETSI][52] = 127,
[0][0][RTW89_MKK][52] = 127,
@@ -47409,6 +49409,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_UKRAINE][52] = 127,
[0][0][RTW89_CHILE][52] = 127,
[0][0][RTW89_QATAR][52] = 127,
+ [0][0][RTW89_THAILAND][52] = 127,
[0][1][RTW89_FCC][0] = 30,
[0][1][RTW89_ETSI][0] = 18,
[0][1][RTW89_MKK][0] = 20,
@@ -47421,6 +49422,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][0] = 10,
[0][1][RTW89_CHILE][0] = 30,
[0][1][RTW89_QATAR][0] = 18,
+ [0][1][RTW89_THAILAND][0] = 18,
[0][1][RTW89_FCC][2] = 32,
[0][1][RTW89_ETSI][2] = 18,
[0][1][RTW89_MKK][2] = 20,
@@ -47433,6 +49435,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][2] = 10,
[0][1][RTW89_CHILE][2] = 32,
[0][1][RTW89_QATAR][2] = 18,
+ [0][1][RTW89_THAILAND][2] = 18,
[0][1][RTW89_FCC][4] = 30,
[0][1][RTW89_ETSI][4] = 18,
[0][1][RTW89_MKK][4] = 8,
@@ -47445,6 +49448,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][4] = 10,
[0][1][RTW89_CHILE][4] = 30,
[0][1][RTW89_QATAR][4] = 18,
+ [0][1][RTW89_THAILAND][4] = 18,
[0][1][RTW89_FCC][6] = 30,
[0][1][RTW89_ETSI][6] = 18,
[0][1][RTW89_MKK][6] = 8,
@@ -47457,6 +49461,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][6] = 10,
[0][1][RTW89_CHILE][6] = 30,
[0][1][RTW89_QATAR][6] = 18,
+ [0][1][RTW89_THAILAND][6] = 18,
[0][1][RTW89_FCC][8] = 30,
[0][1][RTW89_ETSI][8] = 16,
[0][1][RTW89_MKK][8] = 20,
@@ -47469,6 +49474,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][8] = 10,
[0][1][RTW89_CHILE][8] = 30,
[0][1][RTW89_QATAR][8] = 16,
+ [0][1][RTW89_THAILAND][8] = 16,
[0][1][RTW89_FCC][10] = 30,
[0][1][RTW89_ETSI][10] = 16,
[0][1][RTW89_MKK][10] = 20,
@@ -47481,6 +49487,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][10] = 10,
[0][1][RTW89_CHILE][10] = 30,
[0][1][RTW89_QATAR][10] = 16,
+ [0][1][RTW89_THAILAND][10] = 16,
[0][1][RTW89_FCC][12] = 30,
[0][1][RTW89_ETSI][12] = 16,
[0][1][RTW89_MKK][12] = 34,
@@ -47493,6 +49500,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][12] = 10,
[0][1][RTW89_CHILE][12] = 30,
[0][1][RTW89_QATAR][12] = 16,
+ [0][1][RTW89_THAILAND][12] = 16,
[0][1][RTW89_FCC][14] = 30,
[0][1][RTW89_ETSI][14] = 16,
[0][1][RTW89_MKK][14] = 34,
@@ -47505,6 +49513,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][14] = 10,
[0][1][RTW89_CHILE][14] = 30,
[0][1][RTW89_QATAR][14] = 16,
+ [0][1][RTW89_THAILAND][14] = 16,
[0][1][RTW89_FCC][15] = 32,
[0][1][RTW89_ETSI][15] = 18,
[0][1][RTW89_MKK][15] = 44,
@@ -47517,6 +49526,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][15] = 10,
[0][1][RTW89_CHILE][15] = 32,
[0][1][RTW89_QATAR][15] = 18,
+ [0][1][RTW89_THAILAND][15] = 18,
[0][1][RTW89_FCC][17] = 32,
[0][1][RTW89_ETSI][17] = 18,
[0][1][RTW89_MKK][17] = 44,
@@ -47529,6 +49539,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][17] = 10,
[0][1][RTW89_CHILE][17] = 32,
[0][1][RTW89_QATAR][17] = 18,
+ [0][1][RTW89_THAILAND][17] = 18,
[0][1][RTW89_FCC][19] = 32,
[0][1][RTW89_ETSI][19] = 18,
[0][1][RTW89_MKK][19] = 44,
@@ -47541,6 +49552,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][19] = 10,
[0][1][RTW89_CHILE][19] = 32,
[0][1][RTW89_QATAR][19] = 18,
+ [0][1][RTW89_THAILAND][19] = 18,
[0][1][RTW89_FCC][21] = 32,
[0][1][RTW89_ETSI][21] = 18,
[0][1][RTW89_MKK][21] = 44,
@@ -47553,6 +49565,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][21] = 10,
[0][1][RTW89_CHILE][21] = 32,
[0][1][RTW89_QATAR][21] = 18,
+ [0][1][RTW89_THAILAND][21] = 18,
[0][1][RTW89_FCC][23] = 32,
[0][1][RTW89_ETSI][23] = 18,
[0][1][RTW89_MKK][23] = 44,
@@ -47565,6 +49578,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][23] = 10,
[0][1][RTW89_CHILE][23] = 32,
[0][1][RTW89_QATAR][23] = 18,
+ [0][1][RTW89_THAILAND][23] = 18,
[0][1][RTW89_FCC][25] = 32,
[0][1][RTW89_ETSI][25] = 18,
[0][1][RTW89_MKK][25] = 44,
@@ -47577,6 +49591,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][25] = 10,
[0][1][RTW89_CHILE][25] = 32,
[0][1][RTW89_QATAR][25] = 18,
+ [0][1][RTW89_THAILAND][25] = 18,
[0][1][RTW89_FCC][27] = 32,
[0][1][RTW89_ETSI][27] = 16,
[0][1][RTW89_MKK][27] = 44,
@@ -47589,6 +49604,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][27] = 10,
[0][1][RTW89_CHILE][27] = 32,
[0][1][RTW89_QATAR][27] = 16,
+ [0][1][RTW89_THAILAND][27] = 16,
[0][1][RTW89_FCC][29] = 32,
[0][1][RTW89_ETSI][29] = 16,
[0][1][RTW89_MKK][29] = 44,
@@ -47601,6 +49617,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][29] = 10,
[0][1][RTW89_CHILE][29] = 32,
[0][1][RTW89_QATAR][29] = 16,
+ [0][1][RTW89_THAILAND][29] = 16,
[0][1][RTW89_FCC][31] = 32,
[0][1][RTW89_ETSI][31] = 16,
[0][1][RTW89_MKK][31] = 44,
@@ -47613,6 +49630,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][31] = 10,
[0][1][RTW89_CHILE][31] = 32,
[0][1][RTW89_QATAR][31] = 16,
+ [0][1][RTW89_THAILAND][31] = 16,
[0][1][RTW89_FCC][33] = 30,
[0][1][RTW89_ETSI][33] = 16,
[0][1][RTW89_MKK][33] = 44,
@@ -47625,6 +49643,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][33] = 10,
[0][1][RTW89_CHILE][33] = 30,
[0][1][RTW89_QATAR][33] = 16,
+ [0][1][RTW89_THAILAND][33] = 16,
[0][1][RTW89_FCC][35] = 30,
[0][1][RTW89_ETSI][35] = 16,
[0][1][RTW89_MKK][35] = 44,
@@ -47637,6 +49656,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][35] = 10,
[0][1][RTW89_CHILE][35] = 30,
[0][1][RTW89_QATAR][35] = 16,
+ [0][1][RTW89_THAILAND][35] = 16,
[0][1][RTW89_FCC][37] = 34,
[0][1][RTW89_ETSI][37] = 127,
[0][1][RTW89_MKK][37] = 44,
@@ -47649,6 +49669,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][37] = 127,
[0][1][RTW89_CHILE][37] = 34,
[0][1][RTW89_QATAR][37] = 127,
+ [0][1][RTW89_THAILAND][37] = 127,
[0][1][RTW89_FCC][38] = 62,
[0][1][RTW89_ETSI][38] = 16,
[0][1][RTW89_MKK][38] = 127,
@@ -47661,6 +49682,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][38] = 14,
[0][1][RTW89_CHILE][38] = 62,
[0][1][RTW89_QATAR][38] = 14,
+ [0][1][RTW89_THAILAND][38] = 16,
[0][1][RTW89_FCC][40] = 62,
[0][1][RTW89_ETSI][40] = 16,
[0][1][RTW89_MKK][40] = 127,
@@ -47673,6 +49695,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][40] = 14,
[0][1][RTW89_CHILE][40] = 62,
[0][1][RTW89_QATAR][40] = 14,
+ [0][1][RTW89_THAILAND][40] = 16,
[0][1][RTW89_FCC][42] = 58,
[0][1][RTW89_ETSI][42] = 16,
[0][1][RTW89_MKK][42] = 127,
@@ -47685,6 +49708,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][42] = 14,
[0][1][RTW89_CHILE][42] = 58,
[0][1][RTW89_QATAR][42] = 14,
+ [0][1][RTW89_THAILAND][42] = 16,
[0][1][RTW89_FCC][44] = 56,
[0][1][RTW89_ETSI][44] = 16,
[0][1][RTW89_MKK][44] = 127,
@@ -47697,6 +49721,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][44] = 14,
[0][1][RTW89_CHILE][44] = 56,
[0][1][RTW89_QATAR][44] = 14,
+ [0][1][RTW89_THAILAND][44] = 16,
[0][1][RTW89_FCC][46] = 56,
[0][1][RTW89_ETSI][46] = 16,
[0][1][RTW89_MKK][46] = 127,
@@ -47709,6 +49734,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][46] = 14,
[0][1][RTW89_CHILE][46] = 56,
[0][1][RTW89_QATAR][46] = 14,
+ [0][1][RTW89_THAILAND][46] = 16,
[0][1][RTW89_FCC][48] = 20,
[0][1][RTW89_ETSI][48] = 127,
[0][1][RTW89_MKK][48] = 127,
@@ -47721,6 +49747,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][48] = 127,
[0][1][RTW89_CHILE][48] = 127,
[0][1][RTW89_QATAR][48] = 127,
+ [0][1][RTW89_THAILAND][48] = 127,
[0][1][RTW89_FCC][50] = 20,
[0][1][RTW89_ETSI][50] = 127,
[0][1][RTW89_MKK][50] = 127,
@@ -47733,6 +49760,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][50] = 127,
[0][1][RTW89_CHILE][50] = 127,
[0][1][RTW89_QATAR][50] = 127,
+ [0][1][RTW89_THAILAND][50] = 127,
[0][1][RTW89_FCC][52] = 8,
[0][1][RTW89_ETSI][52] = 127,
[0][1][RTW89_MKK][52] = 127,
@@ -47745,6 +49773,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_UKRAINE][52] = 127,
[0][1][RTW89_CHILE][52] = 127,
[0][1][RTW89_QATAR][52] = 127,
+ [0][1][RTW89_THAILAND][52] = 127,
[1][0][RTW89_FCC][0] = 62,
[1][0][RTW89_ETSI][0] = 40,
[1][0][RTW89_MKK][0] = 48,
@@ -47757,6 +49786,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][0] = 32,
[1][0][RTW89_CHILE][0] = 62,
[1][0][RTW89_QATAR][0] = 40,
+ [1][0][RTW89_THAILAND][0] = 40,
[1][0][RTW89_FCC][2] = 62,
[1][0][RTW89_ETSI][2] = 40,
[1][0][RTW89_MKK][2] = 48,
@@ -47769,6 +49799,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][2] = 32,
[1][0][RTW89_CHILE][2] = 62,
[1][0][RTW89_QATAR][2] = 40,
+ [1][0][RTW89_THAILAND][2] = 40,
[1][0][RTW89_FCC][4] = 64,
[1][0][RTW89_ETSI][4] = 40,
[1][0][RTW89_MKK][4] = 40,
@@ -47781,6 +49812,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][4] = 32,
[1][0][RTW89_CHILE][4] = 64,
[1][0][RTW89_QATAR][4] = 40,
+ [1][0][RTW89_THAILAND][4] = 40,
[1][0][RTW89_FCC][6] = 64,
[1][0][RTW89_ETSI][6] = 40,
[1][0][RTW89_MKK][6] = 40,
@@ -47793,6 +49825,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][6] = 32,
[1][0][RTW89_CHILE][6] = 64,
[1][0][RTW89_QATAR][6] = 40,
+ [1][0][RTW89_THAILAND][6] = 40,
[1][0][RTW89_FCC][8] = 62,
[1][0][RTW89_ETSI][8] = 40,
[1][0][RTW89_MKK][8] = 34,
@@ -47805,6 +49838,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][8] = 32,
[1][0][RTW89_CHILE][8] = 62,
[1][0][RTW89_QATAR][8] = 40,
+ [1][0][RTW89_THAILAND][8] = 40,
[1][0][RTW89_FCC][10] = 62,
[1][0][RTW89_ETSI][10] = 40,
[1][0][RTW89_MKK][10] = 34,
@@ -47817,6 +49851,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][10] = 32,
[1][0][RTW89_CHILE][10] = 62,
[1][0][RTW89_QATAR][10] = 40,
+ [1][0][RTW89_THAILAND][10] = 40,
[1][0][RTW89_FCC][12] = 62,
[1][0][RTW89_ETSI][12] = 40,
[1][0][RTW89_MKK][12] = 46,
@@ -47829,6 +49864,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][12] = 32,
[1][0][RTW89_CHILE][12] = 62,
[1][0][RTW89_QATAR][12] = 40,
+ [1][0][RTW89_THAILAND][12] = 40,
[1][0][RTW89_FCC][14] = 62,
[1][0][RTW89_ETSI][14] = 40,
[1][0][RTW89_MKK][14] = 46,
@@ -47841,6 +49877,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][14] = 32,
[1][0][RTW89_CHILE][14] = 62,
[1][0][RTW89_QATAR][14] = 40,
+ [1][0][RTW89_THAILAND][14] = 40,
[1][0][RTW89_FCC][15] = 62,
[1][0][RTW89_ETSI][15] = 40,
[1][0][RTW89_MKK][15] = 62,
@@ -47853,6 +49890,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][15] = 32,
[1][0][RTW89_CHILE][15] = 62,
[1][0][RTW89_QATAR][15] = 40,
+ [1][0][RTW89_THAILAND][15] = 40,
[1][0][RTW89_FCC][17] = 62,
[1][0][RTW89_ETSI][17] = 40,
[1][0][RTW89_MKK][17] = 68,
@@ -47865,6 +49903,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][17] = 32,
[1][0][RTW89_CHILE][17] = 62,
[1][0][RTW89_QATAR][17] = 40,
+ [1][0][RTW89_THAILAND][17] = 40,
[1][0][RTW89_FCC][19] = 64,
[1][0][RTW89_ETSI][19] = 40,
[1][0][RTW89_MKK][19] = 68,
@@ -47877,6 +49916,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][19] = 32,
[1][0][RTW89_CHILE][19] = 64,
[1][0][RTW89_QATAR][19] = 40,
+ [1][0][RTW89_THAILAND][19] = 40,
[1][0][RTW89_FCC][21] = 64,
[1][0][RTW89_ETSI][21] = 40,
[1][0][RTW89_MKK][21] = 68,
@@ -47889,6 +49929,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][21] = 32,
[1][0][RTW89_CHILE][21] = 64,
[1][0][RTW89_QATAR][21] = 40,
+ [1][0][RTW89_THAILAND][21] = 40,
[1][0][RTW89_FCC][23] = 64,
[1][0][RTW89_ETSI][23] = 40,
[1][0][RTW89_MKK][23] = 68,
@@ -47901,6 +49942,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][23] = 32,
[1][0][RTW89_CHILE][23] = 64,
[1][0][RTW89_QATAR][23] = 40,
+ [1][0][RTW89_THAILAND][23] = 40,
[1][0][RTW89_FCC][25] = 64,
[1][0][RTW89_ETSI][25] = 40,
[1][0][RTW89_MKK][25] = 68,
@@ -47913,6 +49955,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][25] = 32,
[1][0][RTW89_CHILE][25] = 64,
[1][0][RTW89_QATAR][25] = 40,
+ [1][0][RTW89_THAILAND][25] = 40,
[1][0][RTW89_FCC][27] = 64,
[1][0][RTW89_ETSI][27] = 42,
[1][0][RTW89_MKK][27] = 68,
@@ -47925,6 +49968,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][27] = 32,
[1][0][RTW89_CHILE][27] = 64,
[1][0][RTW89_QATAR][27] = 42,
+ [1][0][RTW89_THAILAND][27] = 42,
[1][0][RTW89_FCC][29] = 64,
[1][0][RTW89_ETSI][29] = 42,
[1][0][RTW89_MKK][29] = 68,
@@ -47937,6 +49981,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][29] = 32,
[1][0][RTW89_CHILE][29] = 64,
[1][0][RTW89_QATAR][29] = 42,
+ [1][0][RTW89_THAILAND][29] = 42,
[1][0][RTW89_FCC][31] = 64,
[1][0][RTW89_ETSI][31] = 42,
[1][0][RTW89_MKK][31] = 68,
@@ -47949,6 +49994,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][31] = 32,
[1][0][RTW89_CHILE][31] = 64,
[1][0][RTW89_QATAR][31] = 42,
+ [1][0][RTW89_THAILAND][31] = 42,
[1][0][RTW89_FCC][33] = 56,
[1][0][RTW89_ETSI][33] = 42,
[1][0][RTW89_MKK][33] = 68,
@@ -47961,6 +50007,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][33] = 32,
[1][0][RTW89_CHILE][33] = 56,
[1][0][RTW89_QATAR][33] = 42,
+ [1][0][RTW89_THAILAND][33] = 42,
[1][0][RTW89_FCC][35] = 56,
[1][0][RTW89_ETSI][35] = 42,
[1][0][RTW89_MKK][35] = 68,
@@ -47973,6 +50020,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][35] = 32,
[1][0][RTW89_CHILE][35] = 56,
[1][0][RTW89_QATAR][35] = 42,
+ [1][0][RTW89_THAILAND][35] = 42,
[1][0][RTW89_FCC][37] = 66,
[1][0][RTW89_ETSI][37] = 127,
[1][0][RTW89_MKK][37] = 68,
@@ -47985,66 +50033,72 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][37] = 127,
[1][0][RTW89_CHILE][37] = 66,
[1][0][RTW89_QATAR][37] = 127,
+ [1][0][RTW89_THAILAND][37] = 127,
[1][0][RTW89_FCC][38] = 76,
[1][0][RTW89_ETSI][38] = 28,
[1][0][RTW89_MKK][38] = 127,
[1][0][RTW89_IC][38] = 76,
[1][0][RTW89_KCC][38] = 54,
[1][0][RTW89_ACMA][38] = 76,
- [1][0][RTW89_CN][38] = 66,
+ [1][0][RTW89_CN][38] = 56,
[1][0][RTW89_UK][38] = 44,
[1][0][RTW89_MEXICO][38] = 76,
[1][0][RTW89_UKRAINE][38] = 26,
[1][0][RTW89_CHILE][38] = 76,
[1][0][RTW89_QATAR][38] = 26,
+ [1][0][RTW89_THAILAND][38] = 28,
[1][0][RTW89_FCC][40] = 76,
[1][0][RTW89_ETSI][40] = 28,
[1][0][RTW89_MKK][40] = 127,
[1][0][RTW89_IC][40] = 76,
[1][0][RTW89_KCC][40] = 54,
[1][0][RTW89_ACMA][40] = 76,
- [1][0][RTW89_CN][40] = 66,
+ [1][0][RTW89_CN][40] = 56,
[1][0][RTW89_UK][40] = 44,
[1][0][RTW89_MEXICO][40] = 76,
[1][0][RTW89_UKRAINE][40] = 26,
[1][0][RTW89_CHILE][40] = 76,
[1][0][RTW89_QATAR][40] = 26,
+ [1][0][RTW89_THAILAND][40] = 28,
[1][0][RTW89_FCC][42] = 68,
[1][0][RTW89_ETSI][42] = 28,
[1][0][RTW89_MKK][42] = 127,
[1][0][RTW89_IC][42] = 68,
[1][0][RTW89_KCC][42] = 54,
[1][0][RTW89_ACMA][42] = 68,
- [1][0][RTW89_CN][42] = 66,
+ [1][0][RTW89_CN][42] = 56,
[1][0][RTW89_UK][42] = 44,
[1][0][RTW89_MEXICO][42] = 68,
[1][0][RTW89_UKRAINE][42] = 26,
[1][0][RTW89_CHILE][42] = 68,
[1][0][RTW89_QATAR][42] = 26,
+ [1][0][RTW89_THAILAND][42] = 28,
[1][0][RTW89_FCC][44] = 70,
[1][0][RTW89_ETSI][44] = 28,
[1][0][RTW89_MKK][44] = 127,
[1][0][RTW89_IC][44] = 70,
[1][0][RTW89_KCC][44] = 54,
[1][0][RTW89_ACMA][44] = 70,
- [1][0][RTW89_CN][44] = 66,
+ [1][0][RTW89_CN][44] = 56,
[1][0][RTW89_UK][44] = 42,
[1][0][RTW89_MEXICO][44] = 70,
[1][0][RTW89_UKRAINE][44] = 26,
[1][0][RTW89_CHILE][44] = 70,
[1][0][RTW89_QATAR][44] = 26,
+ [1][0][RTW89_THAILAND][44] = 28,
[1][0][RTW89_FCC][46] = 70,
[1][0][RTW89_ETSI][46] = 28,
[1][0][RTW89_MKK][46] = 127,
[1][0][RTW89_IC][46] = 70,
[1][0][RTW89_KCC][46] = 54,
[1][0][RTW89_ACMA][46] = 70,
- [1][0][RTW89_CN][46] = 66,
+ [1][0][RTW89_CN][46] = 56,
[1][0][RTW89_UK][46] = 42,
[1][0][RTW89_MEXICO][46] = 70,
[1][0][RTW89_UKRAINE][46] = 26,
[1][0][RTW89_CHILE][46] = 70,
[1][0][RTW89_QATAR][46] = 26,
+ [1][0][RTW89_THAILAND][46] = 28,
[1][0][RTW89_FCC][48] = 56,
[1][0][RTW89_ETSI][48] = 127,
[1][0][RTW89_MKK][48] = 127,
@@ -48057,6 +50111,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][48] = 127,
[1][0][RTW89_CHILE][48] = 127,
[1][0][RTW89_QATAR][48] = 127,
+ [1][0][RTW89_THAILAND][48] = 127,
[1][0][RTW89_FCC][50] = 58,
[1][0][RTW89_ETSI][50] = 127,
[1][0][RTW89_MKK][50] = 127,
@@ -48069,6 +50124,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][50] = 127,
[1][0][RTW89_CHILE][50] = 127,
[1][0][RTW89_QATAR][50] = 127,
+ [1][0][RTW89_THAILAND][50] = 127,
[1][0][RTW89_FCC][52] = 56,
[1][0][RTW89_ETSI][52] = 127,
[1][0][RTW89_MKK][52] = 127,
@@ -48081,6 +50137,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_UKRAINE][52] = 127,
[1][0][RTW89_CHILE][52] = 127,
[1][0][RTW89_QATAR][52] = 127,
+ [1][0][RTW89_THAILAND][52] = 127,
[1][1][RTW89_FCC][0] = 44,
[1][1][RTW89_ETSI][0] = 30,
[1][1][RTW89_MKK][0] = 34,
@@ -48093,6 +50150,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][0] = 20,
[1][1][RTW89_CHILE][0] = 44,
[1][1][RTW89_QATAR][0] = 30,
+ [1][1][RTW89_THAILAND][0] = 30,
[1][1][RTW89_FCC][2] = 44,
[1][1][RTW89_ETSI][2] = 30,
[1][1][RTW89_MKK][2] = 34,
@@ -48105,6 +50163,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][2] = 20,
[1][1][RTW89_CHILE][2] = 44,
[1][1][RTW89_QATAR][2] = 30,
+ [1][1][RTW89_THAILAND][2] = 30,
[1][1][RTW89_FCC][4] = 46,
[1][1][RTW89_ETSI][4] = 30,
[1][1][RTW89_MKK][4] = 26,
@@ -48117,6 +50176,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][4] = 20,
[1][1][RTW89_CHILE][4] = 46,
[1][1][RTW89_QATAR][4] = 30,
+ [1][1][RTW89_THAILAND][4] = 30,
[1][1][RTW89_FCC][6] = 46,
[1][1][RTW89_ETSI][6] = 30,
[1][1][RTW89_MKK][6] = 26,
@@ -48129,6 +50189,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][6] = 20,
[1][1][RTW89_CHILE][6] = 46,
[1][1][RTW89_QATAR][6] = 30,
+ [1][1][RTW89_THAILAND][6] = 30,
[1][1][RTW89_FCC][8] = 44,
[1][1][RTW89_ETSI][8] = 30,
[1][1][RTW89_MKK][8] = 20,
@@ -48141,6 +50202,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][8] = 20,
[1][1][RTW89_CHILE][8] = 44,
[1][1][RTW89_QATAR][8] = 30,
+ [1][1][RTW89_THAILAND][8] = 30,
[1][1][RTW89_FCC][10] = 44,
[1][1][RTW89_ETSI][10] = 30,
[1][1][RTW89_MKK][10] = 20,
@@ -48153,6 +50215,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][10] = 20,
[1][1][RTW89_CHILE][10] = 44,
[1][1][RTW89_QATAR][10] = 30,
+ [1][1][RTW89_THAILAND][10] = 30,
[1][1][RTW89_FCC][12] = 44,
[1][1][RTW89_ETSI][12] = 30,
[1][1][RTW89_MKK][12] = 34,
@@ -48165,6 +50228,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][12] = 20,
[1][1][RTW89_CHILE][12] = 44,
[1][1][RTW89_QATAR][12] = 30,
+ [1][1][RTW89_THAILAND][12] = 30,
[1][1][RTW89_FCC][14] = 44,
[1][1][RTW89_ETSI][14] = 30,
[1][1][RTW89_MKK][14] = 34,
@@ -48177,6 +50241,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][14] = 20,
[1][1][RTW89_CHILE][14] = 44,
[1][1][RTW89_QATAR][14] = 30,
+ [1][1][RTW89_THAILAND][14] = 30,
[1][1][RTW89_FCC][15] = 44,
[1][1][RTW89_ETSI][15] = 28,
[1][1][RTW89_MKK][15] = 56,
@@ -48189,6 +50254,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][15] = 20,
[1][1][RTW89_CHILE][15] = 44,
[1][1][RTW89_QATAR][15] = 28,
+ [1][1][RTW89_THAILAND][15] = 28,
[1][1][RTW89_FCC][17] = 44,
[1][1][RTW89_ETSI][17] = 28,
[1][1][RTW89_MKK][17] = 58,
@@ -48201,6 +50267,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][17] = 20,
[1][1][RTW89_CHILE][17] = 44,
[1][1][RTW89_QATAR][17] = 28,
+ [1][1][RTW89_THAILAND][17] = 28,
[1][1][RTW89_FCC][19] = 44,
[1][1][RTW89_ETSI][19] = 28,
[1][1][RTW89_MKK][19] = 58,
@@ -48213,6 +50280,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][19] = 20,
[1][1][RTW89_CHILE][19] = 44,
[1][1][RTW89_QATAR][19] = 28,
+ [1][1][RTW89_THAILAND][19] = 28,
[1][1][RTW89_FCC][21] = 44,
[1][1][RTW89_ETSI][21] = 28,
[1][1][RTW89_MKK][21] = 58,
@@ -48225,6 +50293,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][21] = 20,
[1][1][RTW89_CHILE][21] = 44,
[1][1][RTW89_QATAR][21] = 28,
+ [1][1][RTW89_THAILAND][21] = 28,
[1][1][RTW89_FCC][23] = 44,
[1][1][RTW89_ETSI][23] = 28,
[1][1][RTW89_MKK][23] = 58,
@@ -48237,6 +50306,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][23] = 20,
[1][1][RTW89_CHILE][23] = 44,
[1][1][RTW89_QATAR][23] = 28,
+ [1][1][RTW89_THAILAND][23] = 28,
[1][1][RTW89_FCC][25] = 44,
[1][1][RTW89_ETSI][25] = 28,
[1][1][RTW89_MKK][25] = 58,
@@ -48249,6 +50319,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][25] = 20,
[1][1][RTW89_CHILE][25] = 44,
[1][1][RTW89_QATAR][25] = 28,
+ [1][1][RTW89_THAILAND][25] = 28,
[1][1][RTW89_FCC][27] = 44,
[1][1][RTW89_ETSI][27] = 30,
[1][1][RTW89_MKK][27] = 58,
@@ -48261,6 +50332,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][27] = 20,
[1][1][RTW89_CHILE][27] = 44,
[1][1][RTW89_QATAR][27] = 30,
+ [1][1][RTW89_THAILAND][27] = 30,
[1][1][RTW89_FCC][29] = 44,
[1][1][RTW89_ETSI][29] = 30,
[1][1][RTW89_MKK][29] = 58,
@@ -48273,6 +50345,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][29] = 20,
[1][1][RTW89_CHILE][29] = 44,
[1][1][RTW89_QATAR][29] = 30,
+ [1][1][RTW89_THAILAND][29] = 30,
[1][1][RTW89_FCC][31] = 44,
[1][1][RTW89_ETSI][31] = 30,
[1][1][RTW89_MKK][31] = 58,
@@ -48285,6 +50358,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][31] = 20,
[1][1][RTW89_CHILE][31] = 44,
[1][1][RTW89_QATAR][31] = 30,
+ [1][1][RTW89_THAILAND][31] = 30,
[1][1][RTW89_FCC][33] = 38,
[1][1][RTW89_ETSI][33] = 30,
[1][1][RTW89_MKK][33] = 58,
@@ -48297,6 +50371,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][33] = 20,
[1][1][RTW89_CHILE][33] = 38,
[1][1][RTW89_QATAR][33] = 30,
+ [1][1][RTW89_THAILAND][33] = 30,
[1][1][RTW89_FCC][35] = 38,
[1][1][RTW89_ETSI][35] = 30,
[1][1][RTW89_MKK][35] = 58,
@@ -48309,6 +50384,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][35] = 20,
[1][1][RTW89_CHILE][35] = 38,
[1][1][RTW89_QATAR][35] = 30,
+ [1][1][RTW89_THAILAND][35] = 30,
[1][1][RTW89_FCC][37] = 46,
[1][1][RTW89_ETSI][37] = 127,
[1][1][RTW89_MKK][37] = 58,
@@ -48321,6 +50397,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][37] = 127,
[1][1][RTW89_CHILE][37] = 46,
[1][1][RTW89_QATAR][37] = 127,
+ [1][1][RTW89_THAILAND][37] = 127,
[1][1][RTW89_FCC][38] = 74,
[1][1][RTW89_ETSI][38] = 16,
[1][1][RTW89_MKK][38] = 127,
@@ -48333,6 +50410,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][38] = 14,
[1][1][RTW89_CHILE][38] = 72,
[1][1][RTW89_QATAR][38] = 14,
+ [1][1][RTW89_THAILAND][38] = 16,
[1][1][RTW89_FCC][40] = 74,
[1][1][RTW89_ETSI][40] = 16,
[1][1][RTW89_MKK][40] = 127,
@@ -48345,6 +50423,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][40] = 14,
[1][1][RTW89_CHILE][40] = 72,
[1][1][RTW89_QATAR][40] = 14,
+ [1][1][RTW89_THAILAND][40] = 16,
[1][1][RTW89_FCC][42] = 74,
[1][1][RTW89_ETSI][42] = 16,
[1][1][RTW89_MKK][42] = 127,
@@ -48357,6 +50436,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][42] = 14,
[1][1][RTW89_CHILE][42] = 72,
[1][1][RTW89_QATAR][42] = 14,
+ [1][1][RTW89_THAILAND][42] = 16,
[1][1][RTW89_FCC][44] = 74,
[1][1][RTW89_ETSI][44] = 16,
[1][1][RTW89_MKK][44] = 127,
@@ -48369,6 +50449,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][44] = 14,
[1][1][RTW89_CHILE][44] = 72,
[1][1][RTW89_QATAR][44] = 14,
+ [1][1][RTW89_THAILAND][44] = 16,
[1][1][RTW89_FCC][46] = 74,
[1][1][RTW89_ETSI][46] = 16,
[1][1][RTW89_MKK][46] = 127,
@@ -48381,6 +50462,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][46] = 14,
[1][1][RTW89_CHILE][46] = 72,
[1][1][RTW89_QATAR][46] = 14,
+ [1][1][RTW89_THAILAND][46] = 16,
[1][1][RTW89_FCC][48] = 34,
[1][1][RTW89_ETSI][48] = 127,
[1][1][RTW89_MKK][48] = 127,
@@ -48393,6 +50475,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][48] = 127,
[1][1][RTW89_CHILE][48] = 127,
[1][1][RTW89_QATAR][48] = 127,
+ [1][1][RTW89_THAILAND][48] = 127,
[1][1][RTW89_FCC][50] = 34,
[1][1][RTW89_ETSI][50] = 127,
[1][1][RTW89_MKK][50] = 127,
@@ -48405,6 +50488,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][50] = 127,
[1][1][RTW89_CHILE][50] = 127,
[1][1][RTW89_QATAR][50] = 127,
+ [1][1][RTW89_THAILAND][50] = 127,
[1][1][RTW89_FCC][52] = 30,
[1][1][RTW89_ETSI][52] = 127,
[1][1][RTW89_MKK][52] = 127,
@@ -48417,6 +50501,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_UKRAINE][52] = 127,
[1][1][RTW89_CHILE][52] = 127,
[1][1][RTW89_QATAR][52] = 127,
+ [1][1][RTW89_THAILAND][52] = 127,
[2][0][RTW89_FCC][0] = 68,
[2][0][RTW89_ETSI][0] = 52,
[2][0][RTW89_MKK][0] = 60,
@@ -48429,6 +50514,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][0] = 46,
[2][0][RTW89_CHILE][0] = 68,
[2][0][RTW89_QATAR][0] = 52,
+ [2][0][RTW89_THAILAND][0] = 52,
[2][0][RTW89_FCC][2] = 64,
[2][0][RTW89_ETSI][2] = 52,
[2][0][RTW89_MKK][2] = 60,
@@ -48441,6 +50527,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][2] = 46,
[2][0][RTW89_CHILE][2] = 64,
[2][0][RTW89_QATAR][2] = 52,
+ [2][0][RTW89_THAILAND][2] = 52,
[2][0][RTW89_FCC][4] = 68,
[2][0][RTW89_ETSI][4] = 52,
[2][0][RTW89_MKK][4] = 50,
@@ -48453,6 +50540,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][4] = 46,
[2][0][RTW89_CHILE][4] = 68,
[2][0][RTW89_QATAR][4] = 52,
+ [2][0][RTW89_THAILAND][4] = 52,
[2][0][RTW89_FCC][6] = 68,
[2][0][RTW89_ETSI][6] = 52,
[2][0][RTW89_MKK][6] = 50,
@@ -48465,6 +50553,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][6] = 46,
[2][0][RTW89_CHILE][6] = 68,
[2][0][RTW89_QATAR][6] = 52,
+ [2][0][RTW89_THAILAND][6] = 52,
[2][0][RTW89_FCC][8] = 68,
[2][0][RTW89_ETSI][8] = 52,
[2][0][RTW89_MKK][8] = 44,
@@ -48477,6 +50566,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][8] = 46,
[2][0][RTW89_CHILE][8] = 68,
[2][0][RTW89_QATAR][8] = 52,
+ [2][0][RTW89_THAILAND][8] = 52,
[2][0][RTW89_FCC][10] = 68,
[2][0][RTW89_ETSI][10] = 52,
[2][0][RTW89_MKK][10] = 44,
@@ -48489,6 +50579,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][10] = 46,
[2][0][RTW89_CHILE][10] = 68,
[2][0][RTW89_QATAR][10] = 52,
+ [2][0][RTW89_THAILAND][10] = 52,
[2][0][RTW89_FCC][12] = 68,
[2][0][RTW89_ETSI][12] = 52,
[2][0][RTW89_MKK][12] = 58,
@@ -48501,6 +50592,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][12] = 46,
[2][0][RTW89_CHILE][12] = 68,
[2][0][RTW89_QATAR][12] = 52,
+ [2][0][RTW89_THAILAND][12] = 52,
[2][0][RTW89_FCC][14] = 68,
[2][0][RTW89_ETSI][14] = 52,
[2][0][RTW89_MKK][14] = 58,
@@ -48513,6 +50605,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][14] = 46,
[2][0][RTW89_CHILE][14] = 68,
[2][0][RTW89_QATAR][14] = 52,
+ [2][0][RTW89_THAILAND][14] = 52,
[2][0][RTW89_FCC][15] = 68,
[2][0][RTW89_ETSI][15] = 52,
[2][0][RTW89_MKK][15] = 68,
@@ -48525,6 +50618,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][15] = 46,
[2][0][RTW89_CHILE][15] = 68,
[2][0][RTW89_QATAR][15] = 52,
+ [2][0][RTW89_THAILAND][15] = 52,
[2][0][RTW89_FCC][17] = 68,
[2][0][RTW89_ETSI][17] = 52,
[2][0][RTW89_MKK][17] = 74,
@@ -48537,6 +50631,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][17] = 46,
[2][0][RTW89_CHILE][17] = 68,
[2][0][RTW89_QATAR][17] = 52,
+ [2][0][RTW89_THAILAND][17] = 52,
[2][0][RTW89_FCC][19] = 70,
[2][0][RTW89_ETSI][19] = 52,
[2][0][RTW89_MKK][19] = 74,
@@ -48549,6 +50644,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][19] = 46,
[2][0][RTW89_CHILE][19] = 70,
[2][0][RTW89_QATAR][19] = 52,
+ [2][0][RTW89_THAILAND][19] = 52,
[2][0][RTW89_FCC][21] = 70,
[2][0][RTW89_ETSI][21] = 52,
[2][0][RTW89_MKK][21] = 74,
@@ -48561,6 +50657,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][21] = 46,
[2][0][RTW89_CHILE][21] = 70,
[2][0][RTW89_QATAR][21] = 52,
+ [2][0][RTW89_THAILAND][21] = 52,
[2][0][RTW89_FCC][23] = 70,
[2][0][RTW89_ETSI][23] = 52,
[2][0][RTW89_MKK][23] = 74,
@@ -48573,6 +50670,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][23] = 46,
[2][0][RTW89_CHILE][23] = 70,
[2][0][RTW89_QATAR][23] = 52,
+ [2][0][RTW89_THAILAND][23] = 52,
[2][0][RTW89_FCC][25] = 70,
[2][0][RTW89_ETSI][25] = 52,
[2][0][RTW89_MKK][25] = 74,
@@ -48585,6 +50683,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][25] = 46,
[2][0][RTW89_CHILE][25] = 70,
[2][0][RTW89_QATAR][25] = 52,
+ [2][0][RTW89_THAILAND][25] = 52,
[2][0][RTW89_FCC][27] = 70,
[2][0][RTW89_ETSI][27] = 52,
[2][0][RTW89_MKK][27] = 74,
@@ -48597,6 +50696,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][27] = 46,
[2][0][RTW89_CHILE][27] = 70,
[2][0][RTW89_QATAR][27] = 52,
+ [2][0][RTW89_THAILAND][27] = 52,
[2][0][RTW89_FCC][29] = 70,
[2][0][RTW89_ETSI][29] = 52,
[2][0][RTW89_MKK][29] = 74,
@@ -48609,6 +50709,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][29] = 46,
[2][0][RTW89_CHILE][29] = 70,
[2][0][RTW89_QATAR][29] = 52,
+ [2][0][RTW89_THAILAND][29] = 52,
[2][0][RTW89_FCC][31] = 70,
[2][0][RTW89_ETSI][31] = 52,
[2][0][RTW89_MKK][31] = 74,
@@ -48621,6 +50722,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][31] = 46,
[2][0][RTW89_CHILE][31] = 70,
[2][0][RTW89_QATAR][31] = 52,
+ [2][0][RTW89_THAILAND][31] = 52,
[2][0][RTW89_FCC][33] = 62,
[2][0][RTW89_ETSI][33] = 52,
[2][0][RTW89_MKK][33] = 74,
@@ -48633,6 +50735,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][33] = 46,
[2][0][RTW89_CHILE][33] = 62,
[2][0][RTW89_QATAR][33] = 52,
+ [2][0][RTW89_THAILAND][33] = 52,
[2][0][RTW89_FCC][35] = 62,
[2][0][RTW89_ETSI][35] = 52,
[2][0][RTW89_MKK][35] = 74,
@@ -48645,6 +50748,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][35] = 46,
[2][0][RTW89_CHILE][35] = 62,
[2][0][RTW89_QATAR][35] = 52,
+ [2][0][RTW89_THAILAND][35] = 52,
[2][0][RTW89_FCC][37] = 70,
[2][0][RTW89_ETSI][37] = 127,
[2][0][RTW89_MKK][37] = 74,
@@ -48657,66 +50761,72 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][37] = 127,
[2][0][RTW89_CHILE][37] = 70,
[2][0][RTW89_QATAR][37] = 127,
+ [2][0][RTW89_THAILAND][37] = 127,
[2][0][RTW89_FCC][38] = 82,
[2][0][RTW89_ETSI][38] = 28,
[2][0][RTW89_MKK][38] = 127,
[2][0][RTW89_IC][38] = 82,
[2][0][RTW89_KCC][38] = 60,
[2][0][RTW89_ACMA][38] = 82,
- [2][0][RTW89_CN][38] = 68,
+ [2][0][RTW89_CN][38] = 56,
[2][0][RTW89_UK][38] = 54,
[2][0][RTW89_MEXICO][38] = 82,
[2][0][RTW89_UKRAINE][38] = 26,
[2][0][RTW89_CHILE][38] = 82,
[2][0][RTW89_QATAR][38] = 26,
+ [2][0][RTW89_THAILAND][38] = 28,
[2][0][RTW89_FCC][40] = 82,
[2][0][RTW89_ETSI][40] = 28,
[2][0][RTW89_MKK][40] = 127,
[2][0][RTW89_IC][40] = 82,
[2][0][RTW89_KCC][40] = 60,
[2][0][RTW89_ACMA][40] = 82,
- [2][0][RTW89_CN][40] = 68,
+ [2][0][RTW89_CN][40] = 56,
[2][0][RTW89_UK][40] = 54,
[2][0][RTW89_MEXICO][40] = 82,
[2][0][RTW89_UKRAINE][40] = 26,
[2][0][RTW89_CHILE][40] = 82,
[2][0][RTW89_QATAR][40] = 26,
+ [2][0][RTW89_THAILAND][40] = 28,
[2][0][RTW89_FCC][42] = 76,
[2][0][RTW89_ETSI][42] = 28,
[2][0][RTW89_MKK][42] = 127,
[2][0][RTW89_IC][42] = 76,
[2][0][RTW89_KCC][42] = 60,
[2][0][RTW89_ACMA][42] = 76,
- [2][0][RTW89_CN][42] = 68,
+ [2][0][RTW89_CN][42] = 56,
[2][0][RTW89_UK][42] = 54,
[2][0][RTW89_MEXICO][42] = 76,
[2][0][RTW89_UKRAINE][42] = 26,
[2][0][RTW89_CHILE][42] = 76,
[2][0][RTW89_QATAR][42] = 26,
+ [2][0][RTW89_THAILAND][42] = 28,
[2][0][RTW89_FCC][44] = 80,
[2][0][RTW89_ETSI][44] = 28,
[2][0][RTW89_MKK][44] = 127,
[2][0][RTW89_IC][44] = 80,
[2][0][RTW89_KCC][44] = 60,
[2][0][RTW89_ACMA][44] = 80,
- [2][0][RTW89_CN][44] = 68,
+ [2][0][RTW89_CN][44] = 56,
[2][0][RTW89_UK][44] = 54,
[2][0][RTW89_MEXICO][44] = 80,
[2][0][RTW89_UKRAINE][44] = 26,
[2][0][RTW89_CHILE][44] = 80,
[2][0][RTW89_QATAR][44] = 26,
+ [2][0][RTW89_THAILAND][44] = 28,
[2][0][RTW89_FCC][46] = 80,
[2][0][RTW89_ETSI][46] = 28,
[2][0][RTW89_MKK][46] = 127,
[2][0][RTW89_IC][46] = 80,
[2][0][RTW89_KCC][46] = 60,
[2][0][RTW89_ACMA][46] = 80,
- [2][0][RTW89_CN][46] = 68,
+ [2][0][RTW89_CN][46] = 56,
[2][0][RTW89_UK][46] = 54,
[2][0][RTW89_MEXICO][46] = 80,
[2][0][RTW89_UKRAINE][46] = 26,
[2][0][RTW89_CHILE][46] = 80,
[2][0][RTW89_QATAR][46] = 26,
+ [2][0][RTW89_THAILAND][46] = 28,
[2][0][RTW89_FCC][48] = 64,
[2][0][RTW89_ETSI][48] = 127,
[2][0][RTW89_MKK][48] = 127,
@@ -48729,6 +50839,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][48] = 127,
[2][0][RTW89_CHILE][48] = 127,
[2][0][RTW89_QATAR][48] = 127,
+ [2][0][RTW89_THAILAND][48] = 127,
[2][0][RTW89_FCC][50] = 64,
[2][0][RTW89_ETSI][50] = 127,
[2][0][RTW89_MKK][50] = 127,
@@ -48741,6 +50852,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][50] = 127,
[2][0][RTW89_CHILE][50] = 127,
[2][0][RTW89_QATAR][50] = 127,
+ [2][0][RTW89_THAILAND][50] = 127,
[2][0][RTW89_FCC][52] = 64,
[2][0][RTW89_ETSI][52] = 127,
[2][0][RTW89_MKK][52] = 127,
@@ -48753,6 +50865,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_UKRAINE][52] = 127,
[2][0][RTW89_CHILE][52] = 127,
[2][0][RTW89_QATAR][52] = 127,
+ [2][0][RTW89_THAILAND][52] = 127,
[2][1][RTW89_FCC][0] = 50,
[2][1][RTW89_ETSI][0] = 40,
[2][1][RTW89_MKK][0] = 44,
@@ -48765,6 +50878,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][0] = 34,
[2][1][RTW89_CHILE][0] = 50,
[2][1][RTW89_QATAR][0] = 40,
+ [2][1][RTW89_THAILAND][0] = 40,
[2][1][RTW89_FCC][2] = 50,
[2][1][RTW89_ETSI][2] = 40,
[2][1][RTW89_MKK][2] = 44,
@@ -48777,6 +50891,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][2] = 34,
[2][1][RTW89_CHILE][2] = 50,
[2][1][RTW89_QATAR][2] = 40,
+ [2][1][RTW89_THAILAND][2] = 40,
[2][1][RTW89_FCC][4] = 50,
[2][1][RTW89_ETSI][4] = 40,
[2][1][RTW89_MKK][4] = 36,
@@ -48789,6 +50904,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][4] = 34,
[2][1][RTW89_CHILE][4] = 50,
[2][1][RTW89_QATAR][4] = 40,
+ [2][1][RTW89_THAILAND][4] = 40,
[2][1][RTW89_FCC][6] = 50,
[2][1][RTW89_ETSI][6] = 40,
[2][1][RTW89_MKK][6] = 36,
@@ -48801,6 +50917,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][6] = 34,
[2][1][RTW89_CHILE][6] = 50,
[2][1][RTW89_QATAR][6] = 40,
+ [2][1][RTW89_THAILAND][6] = 40,
[2][1][RTW89_FCC][8] = 50,
[2][1][RTW89_ETSI][8] = 40,
[2][1][RTW89_MKK][8] = 32,
@@ -48813,6 +50930,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][8] = 34,
[2][1][RTW89_CHILE][8] = 50,
[2][1][RTW89_QATAR][8] = 40,
+ [2][1][RTW89_THAILAND][8] = 40,
[2][1][RTW89_FCC][10] = 50,
[2][1][RTW89_ETSI][10] = 40,
[2][1][RTW89_MKK][10] = 32,
@@ -48825,6 +50943,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][10] = 34,
[2][1][RTW89_CHILE][10] = 50,
[2][1][RTW89_QATAR][10] = 40,
+ [2][1][RTW89_THAILAND][10] = 40,
[2][1][RTW89_FCC][12] = 48,
[2][1][RTW89_ETSI][12] = 40,
[2][1][RTW89_MKK][12] = 44,
@@ -48837,6 +50956,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][12] = 34,
[2][1][RTW89_CHILE][12] = 48,
[2][1][RTW89_QATAR][12] = 40,
+ [2][1][RTW89_THAILAND][12] = 40,
[2][1][RTW89_FCC][14] = 48,
[2][1][RTW89_ETSI][14] = 40,
[2][1][RTW89_MKK][14] = 44,
@@ -48849,6 +50969,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][14] = 34,
[2][1][RTW89_CHILE][14] = 48,
[2][1][RTW89_QATAR][14] = 40,
+ [2][1][RTW89_THAILAND][14] = 40,
[2][1][RTW89_FCC][15] = 50,
[2][1][RTW89_ETSI][15] = 40,
[2][1][RTW89_MKK][15] = 66,
@@ -48861,6 +50982,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][15] = 34,
[2][1][RTW89_CHILE][15] = 50,
[2][1][RTW89_QATAR][15] = 40,
+ [2][1][RTW89_THAILAND][15] = 40,
[2][1][RTW89_FCC][17] = 50,
[2][1][RTW89_ETSI][17] = 40,
[2][1][RTW89_MKK][17] = 66,
@@ -48873,6 +50995,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][17] = 34,
[2][1][RTW89_CHILE][17] = 50,
[2][1][RTW89_QATAR][17] = 40,
+ [2][1][RTW89_THAILAND][17] = 40,
[2][1][RTW89_FCC][19] = 50,
[2][1][RTW89_ETSI][19] = 40,
[2][1][RTW89_MKK][19] = 66,
@@ -48885,6 +51008,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][19] = 34,
[2][1][RTW89_CHILE][19] = 50,
[2][1][RTW89_QATAR][19] = 40,
+ [2][1][RTW89_THAILAND][19] = 40,
[2][1][RTW89_FCC][21] = 50,
[2][1][RTW89_ETSI][21] = 40,
[2][1][RTW89_MKK][21] = 66,
@@ -48897,6 +51021,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][21] = 34,
[2][1][RTW89_CHILE][21] = 50,
[2][1][RTW89_QATAR][21] = 40,
+ [2][1][RTW89_THAILAND][21] = 40,
[2][1][RTW89_FCC][23] = 50,
[2][1][RTW89_ETSI][23] = 40,
[2][1][RTW89_MKK][23] = 66,
@@ -48909,6 +51034,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][23] = 34,
[2][1][RTW89_CHILE][23] = 50,
[2][1][RTW89_QATAR][23] = 40,
+ [2][1][RTW89_THAILAND][23] = 40,
[2][1][RTW89_FCC][25] = 50,
[2][1][RTW89_ETSI][25] = 40,
[2][1][RTW89_MKK][25] = 66,
@@ -48921,6 +51047,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][25] = 34,
[2][1][RTW89_CHILE][25] = 50,
[2][1][RTW89_QATAR][25] = 40,
+ [2][1][RTW89_THAILAND][25] = 40,
[2][1][RTW89_FCC][27] = 50,
[2][1][RTW89_ETSI][27] = 40,
[2][1][RTW89_MKK][27] = 66,
@@ -48933,6 +51060,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][27] = 34,
[2][1][RTW89_CHILE][27] = 50,
[2][1][RTW89_QATAR][27] = 40,
+ [2][1][RTW89_THAILAND][27] = 40,
[2][1][RTW89_FCC][29] = 50,
[2][1][RTW89_ETSI][29] = 40,
[2][1][RTW89_MKK][29] = 66,
@@ -48945,6 +51073,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][29] = 34,
[2][1][RTW89_CHILE][29] = 50,
[2][1][RTW89_QATAR][29] = 40,
+ [2][1][RTW89_THAILAND][29] = 40,
[2][1][RTW89_FCC][31] = 50,
[2][1][RTW89_ETSI][31] = 40,
[2][1][RTW89_MKK][31] = 66,
@@ -48957,6 +51086,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][31] = 34,
[2][1][RTW89_CHILE][31] = 50,
[2][1][RTW89_QATAR][31] = 40,
+ [2][1][RTW89_THAILAND][31] = 40,
[2][1][RTW89_FCC][33] = 48,
[2][1][RTW89_ETSI][33] = 40,
[2][1][RTW89_MKK][33] = 66,
@@ -48969,6 +51099,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][33] = 34,
[2][1][RTW89_CHILE][33] = 48,
[2][1][RTW89_QATAR][33] = 40,
+ [2][1][RTW89_THAILAND][33] = 40,
[2][1][RTW89_FCC][35] = 48,
[2][1][RTW89_ETSI][35] = 40,
[2][1][RTW89_MKK][35] = 66,
@@ -48981,6 +51112,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][35] = 34,
[2][1][RTW89_CHILE][35] = 48,
[2][1][RTW89_QATAR][35] = 40,
+ [2][1][RTW89_THAILAND][35] = 40,
[2][1][RTW89_FCC][37] = 52,
[2][1][RTW89_ETSI][37] = 127,
[2][1][RTW89_MKK][37] = 66,
@@ -48993,6 +51125,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][37] = 127,
[2][1][RTW89_CHILE][37] = 52,
[2][1][RTW89_QATAR][37] = 127,
+ [2][1][RTW89_THAILAND][37] = 127,
[2][1][RTW89_FCC][38] = 78,
[2][1][RTW89_ETSI][38] = 16,
[2][1][RTW89_MKK][38] = 127,
@@ -49005,6 +51138,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][38] = 14,
[2][1][RTW89_CHILE][38] = 72,
[2][1][RTW89_QATAR][38] = 14,
+ [2][1][RTW89_THAILAND][38] = 16,
[2][1][RTW89_FCC][40] = 78,
[2][1][RTW89_ETSI][40] = 16,
[2][1][RTW89_MKK][40] = 127,
@@ -49017,6 +51151,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][40] = 14,
[2][1][RTW89_CHILE][40] = 72,
[2][1][RTW89_QATAR][40] = 14,
+ [2][1][RTW89_THAILAND][40] = 16,
[2][1][RTW89_FCC][42] = 78,
[2][1][RTW89_ETSI][42] = 16,
[2][1][RTW89_MKK][42] = 127,
@@ -49029,6 +51164,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][42] = 14,
[2][1][RTW89_CHILE][42] = 72,
[2][1][RTW89_QATAR][42] = 14,
+ [2][1][RTW89_THAILAND][42] = 16,
[2][1][RTW89_FCC][44] = 74,
[2][1][RTW89_ETSI][44] = 16,
[2][1][RTW89_MKK][44] = 127,
@@ -49041,6 +51177,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][44] = 14,
[2][1][RTW89_CHILE][44] = 72,
[2][1][RTW89_QATAR][44] = 14,
+ [2][1][RTW89_THAILAND][44] = 16,
[2][1][RTW89_FCC][46] = 74,
[2][1][RTW89_ETSI][46] = 16,
[2][1][RTW89_MKK][46] = 127,
@@ -49053,6 +51190,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][46] = 14,
[2][1][RTW89_CHILE][46] = 72,
[2][1][RTW89_QATAR][46] = 14,
+ [2][1][RTW89_THAILAND][46] = 16,
[2][1][RTW89_FCC][48] = 40,
[2][1][RTW89_ETSI][48] = 127,
[2][1][RTW89_MKK][48] = 127,
@@ -49065,6 +51203,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][48] = 127,
[2][1][RTW89_CHILE][48] = 127,
[2][1][RTW89_QATAR][48] = 127,
+ [2][1][RTW89_THAILAND][48] = 127,
[2][1][RTW89_FCC][50] = 40,
[2][1][RTW89_ETSI][50] = 127,
[2][1][RTW89_MKK][50] = 127,
@@ -49077,6 +51216,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][50] = 127,
[2][1][RTW89_CHILE][50] = 127,
[2][1][RTW89_QATAR][50] = 127,
+ [2][1][RTW89_THAILAND][50] = 127,
[2][1][RTW89_FCC][52] = 40,
[2][1][RTW89_ETSI][52] = 127,
[2][1][RTW89_MKK][52] = 127,
@@ -49089,6 +51229,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_5g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_UKRAINE][52] = 127,
[2][1][RTW89_CHILE][52] = 127,
[2][1][RTW89_QATAR][52] = 127,
+ [2][1][RTW89_THAILAND][52] = 127,
};
static
@@ -49169,19 +51310,19 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_WW][2][44] = 56,
[0][0][RTW89_WW][0][45] = -16,
[0][0][RTW89_WW][1][45] = -16,
- [0][0][RTW89_WW][2][45] = 0,
+ [0][0][RTW89_WW][2][45] = 56,
[0][0][RTW89_WW][0][47] = -18,
[0][0][RTW89_WW][1][47] = -18,
- [0][0][RTW89_WW][2][47] = 0,
+ [0][0][RTW89_WW][2][47] = 56,
[0][0][RTW89_WW][0][49] = -18,
[0][0][RTW89_WW][1][49] = -18,
- [0][0][RTW89_WW][2][49] = 0,
+ [0][0][RTW89_WW][2][49] = 56,
[0][0][RTW89_WW][0][51] = -18,
[0][0][RTW89_WW][1][51] = -18,
- [0][0][RTW89_WW][2][51] = 0,
+ [0][0][RTW89_WW][2][51] = 56,
[0][0][RTW89_WW][0][53] = -16,
[0][0][RTW89_WW][1][53] = -16,
- [0][0][RTW89_WW][2][53] = 0,
+ [0][0][RTW89_WW][2][53] = 56,
[0][0][RTW89_WW][0][55] = -18,
[0][0][RTW89_WW][1][55] = -18,
[0][0][RTW89_WW][2][55] = 56,
@@ -49361,19 +51502,19 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_WW][2][44] = 32,
[0][1][RTW89_WW][0][45] = -40,
[0][1][RTW89_WW][1][45] = -40,
- [0][1][RTW89_WW][2][45] = 0,
+ [0][1][RTW89_WW][2][45] = 32,
[0][1][RTW89_WW][0][47] = -40,
[0][1][RTW89_WW][1][47] = -40,
- [0][1][RTW89_WW][2][47] = 0,
+ [0][1][RTW89_WW][2][47] = 32,
[0][1][RTW89_WW][0][49] = -40,
[0][1][RTW89_WW][1][49] = -40,
- [0][1][RTW89_WW][2][49] = 0,
+ [0][1][RTW89_WW][2][49] = 32,
[0][1][RTW89_WW][0][51] = -40,
[0][1][RTW89_WW][1][51] = -40,
- [0][1][RTW89_WW][2][51] = 0,
+ [0][1][RTW89_WW][2][51] = 32,
[0][1][RTW89_WW][0][53] = -40,
[0][1][RTW89_WW][1][53] = -40,
- [0][1][RTW89_WW][2][53] = 0,
+ [0][1][RTW89_WW][2][53] = 32,
[0][1][RTW89_WW][0][55] = -40,
[0][1][RTW89_WW][1][55] = -40,
[0][1][RTW89_WW][2][55] = 30,
@@ -49553,19 +51694,19 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_WW][2][44] = 66,
[1][0][RTW89_WW][0][45] = -4,
[1][0][RTW89_WW][1][45] = -4,
- [1][0][RTW89_WW][2][45] = 0,
+ [1][0][RTW89_WW][2][45] = 68,
[1][0][RTW89_WW][0][47] = -4,
[1][0][RTW89_WW][1][47] = -4,
- [1][0][RTW89_WW][2][47] = 0,
+ [1][0][RTW89_WW][2][47] = 68,
[1][0][RTW89_WW][0][49] = -4,
[1][0][RTW89_WW][1][49] = -4,
- [1][0][RTW89_WW][2][49] = 0,
+ [1][0][RTW89_WW][2][49] = 68,
[1][0][RTW89_WW][0][51] = -4,
[1][0][RTW89_WW][1][51] = -4,
- [1][0][RTW89_WW][2][51] = 0,
+ [1][0][RTW89_WW][2][51] = 68,
[1][0][RTW89_WW][0][53] = -4,
[1][0][RTW89_WW][1][53] = -4,
- [1][0][RTW89_WW][2][53] = 0,
+ [1][0][RTW89_WW][2][53] = 68,
[1][0][RTW89_WW][0][55] = -4,
[1][0][RTW89_WW][1][55] = -4,
[1][0][RTW89_WW][2][55] = 68,
@@ -49745,19 +51886,19 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_WW][2][44] = 44,
[1][1][RTW89_WW][0][45] = -26,
[1][1][RTW89_WW][1][45] = -26,
- [1][1][RTW89_WW][2][45] = 0,
+ [1][1][RTW89_WW][2][45] = 44,
[1][1][RTW89_WW][0][47] = -28,
[1][1][RTW89_WW][1][47] = -28,
- [1][1][RTW89_WW][2][47] = 0,
+ [1][1][RTW89_WW][2][47] = 44,
[1][1][RTW89_WW][0][49] = -28,
[1][1][RTW89_WW][1][49] = -28,
- [1][1][RTW89_WW][2][49] = 0,
+ [1][1][RTW89_WW][2][49] = 44,
[1][1][RTW89_WW][0][51] = -28,
[1][1][RTW89_WW][1][51] = -28,
- [1][1][RTW89_WW][2][51] = 0,
+ [1][1][RTW89_WW][2][51] = 44,
[1][1][RTW89_WW][0][53] = -26,
[1][1][RTW89_WW][1][53] = -26,
- [1][1][RTW89_WW][2][53] = 0,
+ [1][1][RTW89_WW][2][53] = 44,
[1][1][RTW89_WW][0][55] = -28,
[1][1][RTW89_WW][1][55] = -28,
[1][1][RTW89_WW][2][55] = 44,
@@ -49901,106 +52042,106 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_WW][2][21] = 60,
[2][0][RTW89_WW][0][23] = -2,
[2][0][RTW89_WW][1][23] = -2,
- [2][0][RTW89_WW][2][23] = 78,
+ [2][0][RTW89_WW][2][23] = 70,
[2][0][RTW89_WW][0][25] = -2,
[2][0][RTW89_WW][1][25] = -2,
- [2][0][RTW89_WW][2][25] = 78,
+ [2][0][RTW89_WW][2][25] = 70,
[2][0][RTW89_WW][0][27] = -2,
[2][0][RTW89_WW][1][27] = -2,
- [2][0][RTW89_WW][2][27] = 78,
+ [2][0][RTW89_WW][2][27] = 70,
[2][0][RTW89_WW][0][29] = -2,
[2][0][RTW89_WW][1][29] = -2,
- [2][0][RTW89_WW][2][29] = 78,
+ [2][0][RTW89_WW][2][29] = 70,
[2][0][RTW89_WW][0][30] = -2,
[2][0][RTW89_WW][1][30] = -2,
- [2][0][RTW89_WW][2][30] = 78,
+ [2][0][RTW89_WW][2][30] = 70,
[2][0][RTW89_WW][0][32] = -2,
[2][0][RTW89_WW][1][32] = -2,
- [2][0][RTW89_WW][2][32] = 78,
+ [2][0][RTW89_WW][2][32] = 70,
[2][0][RTW89_WW][0][34] = -2,
[2][0][RTW89_WW][1][34] = -2,
- [2][0][RTW89_WW][2][34] = 78,
+ [2][0][RTW89_WW][2][34] = 70,
[2][0][RTW89_WW][0][36] = -2,
[2][0][RTW89_WW][1][36] = -2,
- [2][0][RTW89_WW][2][36] = 78,
+ [2][0][RTW89_WW][2][36] = 70,
[2][0][RTW89_WW][0][38] = -2,
[2][0][RTW89_WW][1][38] = -2,
- [2][0][RTW89_WW][2][38] = 78,
+ [2][0][RTW89_WW][2][38] = 70,
[2][0][RTW89_WW][0][40] = -2,
[2][0][RTW89_WW][1][40] = -2,
- [2][0][RTW89_WW][2][40] = 78,
+ [2][0][RTW89_WW][2][40] = 70,
[2][0][RTW89_WW][0][42] = -2,
[2][0][RTW89_WW][1][42] = -2,
- [2][0][RTW89_WW][2][42] = 78,
+ [2][0][RTW89_WW][2][42] = 70,
[2][0][RTW89_WW][0][44] = -2,
[2][0][RTW89_WW][1][44] = -2,
- [2][0][RTW89_WW][2][44] = 78,
+ [2][0][RTW89_WW][2][44] = 70,
[2][0][RTW89_WW][0][45] = -2,
[2][0][RTW89_WW][1][45] = -2,
- [2][0][RTW89_WW][2][45] = 0,
+ [2][0][RTW89_WW][2][45] = 70,
[2][0][RTW89_WW][0][47] = -2,
[2][0][RTW89_WW][1][47] = -2,
- [2][0][RTW89_WW][2][47] = 0,
+ [2][0][RTW89_WW][2][47] = 70,
[2][0][RTW89_WW][0][49] = -2,
[2][0][RTW89_WW][1][49] = -2,
- [2][0][RTW89_WW][2][49] = 0,
+ [2][0][RTW89_WW][2][49] = 70,
[2][0][RTW89_WW][0][51] = -2,
[2][0][RTW89_WW][1][51] = -2,
- [2][0][RTW89_WW][2][51] = 0,
+ [2][0][RTW89_WW][2][51] = 70,
[2][0][RTW89_WW][0][53] = -2,
[2][0][RTW89_WW][1][53] = -2,
- [2][0][RTW89_WW][2][53] = 0,
+ [2][0][RTW89_WW][2][53] = 70,
[2][0][RTW89_WW][0][55] = -2,
[2][0][RTW89_WW][1][55] = -2,
- [2][0][RTW89_WW][2][55] = 78,
+ [2][0][RTW89_WW][2][55] = 68,
[2][0][RTW89_WW][0][57] = -2,
[2][0][RTW89_WW][1][57] = -2,
- [2][0][RTW89_WW][2][57] = 78,
+ [2][0][RTW89_WW][2][57] = 68,
[2][0][RTW89_WW][0][59] = -2,
[2][0][RTW89_WW][1][59] = -2,
- [2][0][RTW89_WW][2][59] = 78,
+ [2][0][RTW89_WW][2][59] = 68,
[2][0][RTW89_WW][0][60] = -2,
[2][0][RTW89_WW][1][60] = -2,
- [2][0][RTW89_WW][2][60] = 78,
+ [2][0][RTW89_WW][2][60] = 68,
[2][0][RTW89_WW][0][62] = -2,
[2][0][RTW89_WW][1][62] = -2,
- [2][0][RTW89_WW][2][62] = 78,
+ [2][0][RTW89_WW][2][62] = 68,
[2][0][RTW89_WW][0][64] = -2,
[2][0][RTW89_WW][1][64] = -2,
- [2][0][RTW89_WW][2][64] = 78,
+ [2][0][RTW89_WW][2][64] = 68,
[2][0][RTW89_WW][0][66] = -2,
[2][0][RTW89_WW][1][66] = -2,
- [2][0][RTW89_WW][2][66] = 78,
+ [2][0][RTW89_WW][2][66] = 68,
[2][0][RTW89_WW][0][68] = -2,
[2][0][RTW89_WW][1][68] = -2,
- [2][0][RTW89_WW][2][68] = 78,
+ [2][0][RTW89_WW][2][68] = 68,
[2][0][RTW89_WW][0][70] = -2,
[2][0][RTW89_WW][1][70] = -2,
- [2][0][RTW89_WW][2][70] = 78,
+ [2][0][RTW89_WW][2][70] = 68,
[2][0][RTW89_WW][0][72] = -2,
[2][0][RTW89_WW][1][72] = -2,
- [2][0][RTW89_WW][2][72] = 78,
+ [2][0][RTW89_WW][2][72] = 68,
[2][0][RTW89_WW][0][74] = -2,
[2][0][RTW89_WW][1][74] = -2,
- [2][0][RTW89_WW][2][74] = 78,
+ [2][0][RTW89_WW][2][74] = 68,
[2][0][RTW89_WW][0][75] = -2,
[2][0][RTW89_WW][1][75] = -2,
- [2][0][RTW89_WW][2][75] = 78,
+ [2][0][RTW89_WW][2][75] = 68,
[2][0][RTW89_WW][0][77] = -2,
[2][0][RTW89_WW][1][77] = -2,
- [2][0][RTW89_WW][2][77] = 78,
+ [2][0][RTW89_WW][2][77] = 68,
[2][0][RTW89_WW][0][79] = -2,
[2][0][RTW89_WW][1][79] = -2,
- [2][0][RTW89_WW][2][79] = 78,
+ [2][0][RTW89_WW][2][79] = 68,
[2][0][RTW89_WW][0][81] = -2,
[2][0][RTW89_WW][1][81] = -2,
- [2][0][RTW89_WW][2][81] = 78,
+ [2][0][RTW89_WW][2][81] = 68,
[2][0][RTW89_WW][0][83] = -2,
[2][0][RTW89_WW][1][83] = -2,
- [2][0][RTW89_WW][2][83] = 78,
+ [2][0][RTW89_WW][2][83] = 68,
[2][0][RTW89_WW][0][85] = -2,
[2][0][RTW89_WW][1][85] = -2,
- [2][0][RTW89_WW][2][85] = 78,
+ [2][0][RTW89_WW][2][85] = 68,
[2][0][RTW89_WW][0][87] = -2,
[2][0][RTW89_WW][1][87] = -2,
[2][0][RTW89_WW][2][87] = 0,
@@ -50129,19 +52270,19 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_WW][2][44] = 54,
[2][1][RTW89_WW][0][45] = -16,
[2][1][RTW89_WW][1][45] = -16,
- [2][1][RTW89_WW][2][45] = 0,
+ [2][1][RTW89_WW][2][45] = 56,
[2][1][RTW89_WW][0][47] = -16,
[2][1][RTW89_WW][1][47] = -16,
- [2][1][RTW89_WW][2][47] = 0,
+ [2][1][RTW89_WW][2][47] = 56,
[2][1][RTW89_WW][0][49] = -16,
[2][1][RTW89_WW][1][49] = -16,
- [2][1][RTW89_WW][2][49] = 0,
+ [2][1][RTW89_WW][2][49] = 56,
[2][1][RTW89_WW][0][51] = -16,
[2][1][RTW89_WW][1][51] = -16,
- [2][1][RTW89_WW][2][51] = 0,
+ [2][1][RTW89_WW][2][51] = 56,
[2][1][RTW89_WW][0][53] = -16,
[2][1][RTW89_WW][1][53] = -16,
- [2][1][RTW89_WW][2][53] = 0,
+ [2][1][RTW89_WW][2][53] = 56,
[2][1][RTW89_WW][0][55] = -16,
[2][1][RTW89_WW][1][55] = -16,
[2][1][RTW89_WW][2][55] = 54,
@@ -50254,6 +52395,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][0] = 30,
[0][0][RTW89_MKK][0][0] = -8,
[0][0][RTW89_IC][1][0] = -16,
+ [0][0][RTW89_IC][2][0] = 44,
[0][0][RTW89_KCC][1][0] = -2,
[0][0][RTW89_KCC][0][0] = -2,
[0][0][RTW89_ACMA][1][0] = 32,
@@ -50263,6 +52405,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][0] = -8,
[0][0][RTW89_UK][1][0] = 32,
[0][0][RTW89_UK][0][0] = -8,
+ [0][0][RTW89_THAILAND][1][0] = 30,
+ [0][0][RTW89_THAILAND][0][0] = -16,
[0][0][RTW89_FCC][1][2] = -18,
[0][0][RTW89_FCC][2][2] = 44,
[0][0][RTW89_ETSI][1][2] = 32,
@@ -50270,6 +52414,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][2] = 30,
[0][0][RTW89_MKK][0][2] = -8,
[0][0][RTW89_IC][1][2] = -18,
+ [0][0][RTW89_IC][2][2] = 44,
[0][0][RTW89_KCC][1][2] = -2,
[0][0][RTW89_KCC][0][2] = -2,
[0][0][RTW89_ACMA][1][2] = 32,
@@ -50279,6 +52424,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][2] = -8,
[0][0][RTW89_UK][1][2] = 32,
[0][0][RTW89_UK][0][2] = -8,
+ [0][0][RTW89_THAILAND][1][2] = 30,
+ [0][0][RTW89_THAILAND][0][2] = -18,
[0][0][RTW89_FCC][1][4] = -18,
[0][0][RTW89_FCC][2][4] = 44,
[0][0][RTW89_ETSI][1][4] = 32,
@@ -50286,6 +52433,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][4] = 30,
[0][0][RTW89_MKK][0][4] = -8,
[0][0][RTW89_IC][1][4] = -18,
+ [0][0][RTW89_IC][2][4] = 44,
[0][0][RTW89_KCC][1][4] = -2,
[0][0][RTW89_KCC][0][4] = -2,
[0][0][RTW89_ACMA][1][4] = 32,
@@ -50295,6 +52443,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][4] = -8,
[0][0][RTW89_UK][1][4] = 32,
[0][0][RTW89_UK][0][4] = -8,
+ [0][0][RTW89_THAILAND][1][4] = 30,
+ [0][0][RTW89_THAILAND][0][4] = -18,
[0][0][RTW89_FCC][1][6] = -18,
[0][0][RTW89_FCC][2][6] = 44,
[0][0][RTW89_ETSI][1][6] = 32,
@@ -50302,6 +52452,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][6] = 30,
[0][0][RTW89_MKK][0][6] = -8,
[0][0][RTW89_IC][1][6] = -18,
+ [0][0][RTW89_IC][2][6] = 44,
[0][0][RTW89_KCC][1][6] = -2,
[0][0][RTW89_KCC][0][6] = -2,
[0][0][RTW89_ACMA][1][6] = 32,
@@ -50311,6 +52462,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][6] = -8,
[0][0][RTW89_UK][1][6] = 32,
[0][0][RTW89_UK][0][6] = -8,
+ [0][0][RTW89_THAILAND][1][6] = 30,
+ [0][0][RTW89_THAILAND][0][6] = -18,
[0][0][RTW89_FCC][1][8] = -18,
[0][0][RTW89_FCC][2][8] = 44,
[0][0][RTW89_ETSI][1][8] = 32,
@@ -50318,6 +52471,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][8] = 30,
[0][0][RTW89_MKK][0][8] = -8,
[0][0][RTW89_IC][1][8] = -18,
+ [0][0][RTW89_IC][2][8] = 44,
[0][0][RTW89_KCC][1][8] = -2,
[0][0][RTW89_KCC][0][8] = -2,
[0][0][RTW89_ACMA][1][8] = 32,
@@ -50327,6 +52481,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][8] = -8,
[0][0][RTW89_UK][1][8] = 32,
[0][0][RTW89_UK][0][8] = -8,
+ [0][0][RTW89_THAILAND][1][8] = 30,
+ [0][0][RTW89_THAILAND][0][8] = -18,
[0][0][RTW89_FCC][1][10] = -18,
[0][0][RTW89_FCC][2][10] = 44,
[0][0][RTW89_ETSI][1][10] = 32,
@@ -50334,6 +52490,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][10] = 30,
[0][0][RTW89_MKK][0][10] = -8,
[0][0][RTW89_IC][1][10] = -18,
+ [0][0][RTW89_IC][2][10] = 44,
[0][0][RTW89_KCC][1][10] = -2,
[0][0][RTW89_KCC][0][10] = -2,
[0][0][RTW89_ACMA][1][10] = 32,
@@ -50343,6 +52500,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][10] = -8,
[0][0][RTW89_UK][1][10] = 32,
[0][0][RTW89_UK][0][10] = -8,
+ [0][0][RTW89_THAILAND][1][10] = 30,
+ [0][0][RTW89_THAILAND][0][10] = -18,
[0][0][RTW89_FCC][1][12] = -18,
[0][0][RTW89_FCC][2][12] = 44,
[0][0][RTW89_ETSI][1][12] = 32,
@@ -50350,6 +52509,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][12] = 30,
[0][0][RTW89_MKK][0][12] = -8,
[0][0][RTW89_IC][1][12] = -18,
+ [0][0][RTW89_IC][2][12] = 44,
[0][0][RTW89_KCC][1][12] = -2,
[0][0][RTW89_KCC][0][12] = -2,
[0][0][RTW89_ACMA][1][12] = 32,
@@ -50359,6 +52519,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][12] = -8,
[0][0][RTW89_UK][1][12] = 32,
[0][0][RTW89_UK][0][12] = -8,
+ [0][0][RTW89_THAILAND][1][12] = 30,
+ [0][0][RTW89_THAILAND][0][12] = -18,
[0][0][RTW89_FCC][1][14] = -18,
[0][0][RTW89_FCC][2][14] = 44,
[0][0][RTW89_ETSI][1][14] = 32,
@@ -50366,6 +52528,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][14] = 30,
[0][0][RTW89_MKK][0][14] = -8,
[0][0][RTW89_IC][1][14] = -18,
+ [0][0][RTW89_IC][2][14] = 44,
[0][0][RTW89_KCC][1][14] = -2,
[0][0][RTW89_KCC][0][14] = -2,
[0][0][RTW89_ACMA][1][14] = 32,
@@ -50375,6 +52538,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][14] = -8,
[0][0][RTW89_UK][1][14] = 32,
[0][0][RTW89_UK][0][14] = -8,
+ [0][0][RTW89_THAILAND][1][14] = 30,
+ [0][0][RTW89_THAILAND][0][14] = -18,
[0][0][RTW89_FCC][1][15] = -18,
[0][0][RTW89_FCC][2][15] = 44,
[0][0][RTW89_ETSI][1][15] = 32,
@@ -50382,6 +52547,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][15] = 30,
[0][0][RTW89_MKK][0][15] = -8,
[0][0][RTW89_IC][1][15] = -18,
+ [0][0][RTW89_IC][2][15] = 44,
[0][0][RTW89_KCC][1][15] = -2,
[0][0][RTW89_KCC][0][15] = -2,
[0][0][RTW89_ACMA][1][15] = 32,
@@ -50391,6 +52557,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][15] = -8,
[0][0][RTW89_UK][1][15] = 32,
[0][0][RTW89_UK][0][15] = -8,
+ [0][0][RTW89_THAILAND][1][15] = 30,
+ [0][0][RTW89_THAILAND][0][15] = -18,
[0][0][RTW89_FCC][1][17] = -18,
[0][0][RTW89_FCC][2][17] = 44,
[0][0][RTW89_ETSI][1][17] = 32,
@@ -50398,6 +52566,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][17] = 30,
[0][0][RTW89_MKK][0][17] = -8,
[0][0][RTW89_IC][1][17] = -18,
+ [0][0][RTW89_IC][2][17] = 44,
[0][0][RTW89_KCC][1][17] = -2,
[0][0][RTW89_KCC][0][17] = -2,
[0][0][RTW89_ACMA][1][17] = 32,
@@ -50407,6 +52576,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][17] = -8,
[0][0][RTW89_UK][1][17] = 32,
[0][0][RTW89_UK][0][17] = -8,
+ [0][0][RTW89_THAILAND][1][17] = 30,
+ [0][0][RTW89_THAILAND][0][17] = -18,
[0][0][RTW89_FCC][1][19] = -18,
[0][0][RTW89_FCC][2][19] = 44,
[0][0][RTW89_ETSI][1][19] = 32,
@@ -50414,6 +52585,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][19] = 30,
[0][0][RTW89_MKK][0][19] = -8,
[0][0][RTW89_IC][1][19] = -18,
+ [0][0][RTW89_IC][2][19] = 44,
[0][0][RTW89_KCC][1][19] = -2,
[0][0][RTW89_KCC][0][19] = -2,
[0][0][RTW89_ACMA][1][19] = 32,
@@ -50423,6 +52595,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][19] = -8,
[0][0][RTW89_UK][1][19] = 32,
[0][0][RTW89_UK][0][19] = -8,
+ [0][0][RTW89_THAILAND][1][19] = 30,
+ [0][0][RTW89_THAILAND][0][19] = -18,
[0][0][RTW89_FCC][1][21] = -18,
[0][0][RTW89_FCC][2][21] = 44,
[0][0][RTW89_ETSI][1][21] = 32,
@@ -50430,6 +52604,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][21] = 30,
[0][0][RTW89_MKK][0][21] = -8,
[0][0][RTW89_IC][1][21] = -18,
+ [0][0][RTW89_IC][2][21] = 44,
[0][0][RTW89_KCC][1][21] = -2,
[0][0][RTW89_KCC][0][21] = -2,
[0][0][RTW89_ACMA][1][21] = 32,
@@ -50439,6 +52614,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][21] = -8,
[0][0][RTW89_UK][1][21] = 32,
[0][0][RTW89_UK][0][21] = -8,
+ [0][0][RTW89_THAILAND][1][21] = 30,
+ [0][0][RTW89_THAILAND][0][21] = -18,
[0][0][RTW89_FCC][1][23] = -18,
[0][0][RTW89_FCC][2][23] = 54,
[0][0][RTW89_ETSI][1][23] = 32,
@@ -50446,6 +52623,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][23] = 30,
[0][0][RTW89_MKK][0][23] = -8,
[0][0][RTW89_IC][1][23] = -18,
+ [0][0][RTW89_IC][2][23] = 54,
[0][0][RTW89_KCC][1][23] = -2,
[0][0][RTW89_KCC][0][23] = -2,
[0][0][RTW89_ACMA][1][23] = 32,
@@ -50455,6 +52633,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][23] = -8,
[0][0][RTW89_UK][1][23] = 32,
[0][0][RTW89_UK][0][23] = -8,
+ [0][0][RTW89_THAILAND][1][23] = 30,
+ [0][0][RTW89_THAILAND][0][23] = -18,
[0][0][RTW89_FCC][1][25] = -18,
[0][0][RTW89_FCC][2][25] = 54,
[0][0][RTW89_ETSI][1][25] = 32,
@@ -50462,6 +52642,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][25] = 30,
[0][0][RTW89_MKK][0][25] = -8,
[0][0][RTW89_IC][1][25] = -18,
+ [0][0][RTW89_IC][2][25] = 54,
[0][0][RTW89_KCC][1][25] = -2,
[0][0][RTW89_KCC][0][25] = -2,
[0][0][RTW89_ACMA][1][25] = 32,
@@ -50471,6 +52652,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][25] = -8,
[0][0][RTW89_UK][1][25] = 32,
[0][0][RTW89_UK][0][25] = -8,
+ [0][0][RTW89_THAILAND][1][25] = 30,
+ [0][0][RTW89_THAILAND][0][25] = -18,
[0][0][RTW89_FCC][1][27] = -18,
[0][0][RTW89_FCC][2][27] = 54,
[0][0][RTW89_ETSI][1][27] = 32,
@@ -50478,6 +52661,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][27] = 30,
[0][0][RTW89_MKK][0][27] = -8,
[0][0][RTW89_IC][1][27] = -18,
+ [0][0][RTW89_IC][2][27] = 54,
[0][0][RTW89_KCC][1][27] = -2,
[0][0][RTW89_KCC][0][27] = -2,
[0][0][RTW89_ACMA][1][27] = 32,
@@ -50487,6 +52671,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][27] = -8,
[0][0][RTW89_UK][1][27] = 32,
[0][0][RTW89_UK][0][27] = -8,
+ [0][0][RTW89_THAILAND][1][27] = 30,
+ [0][0][RTW89_THAILAND][0][27] = -18,
[0][0][RTW89_FCC][1][29] = -18,
[0][0][RTW89_FCC][2][29] = 54,
[0][0][RTW89_ETSI][1][29] = 32,
@@ -50494,6 +52680,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][29] = 30,
[0][0][RTW89_MKK][0][29] = -8,
[0][0][RTW89_IC][1][29] = -18,
+ [0][0][RTW89_IC][2][29] = 54,
[0][0][RTW89_KCC][1][29] = -2,
[0][0][RTW89_KCC][0][29] = -2,
[0][0][RTW89_ACMA][1][29] = 32,
@@ -50503,6 +52690,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][29] = -8,
[0][0][RTW89_UK][1][29] = 32,
[0][0][RTW89_UK][0][29] = -8,
+ [0][0][RTW89_THAILAND][1][29] = 30,
+ [0][0][RTW89_THAILAND][0][29] = -18,
[0][0][RTW89_FCC][1][30] = -18,
[0][0][RTW89_FCC][2][30] = 54,
[0][0][RTW89_ETSI][1][30] = 32,
@@ -50510,6 +52699,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][30] = 30,
[0][0][RTW89_MKK][0][30] = -8,
[0][0][RTW89_IC][1][30] = -18,
+ [0][0][RTW89_IC][2][30] = 54,
[0][0][RTW89_KCC][1][30] = -2,
[0][0][RTW89_KCC][0][30] = -2,
[0][0][RTW89_ACMA][1][30] = 32,
@@ -50519,6 +52709,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][30] = -8,
[0][0][RTW89_UK][1][30] = 32,
[0][0][RTW89_UK][0][30] = -8,
+ [0][0][RTW89_THAILAND][1][30] = 30,
+ [0][0][RTW89_THAILAND][0][30] = -18,
[0][0][RTW89_FCC][1][32] = -18,
[0][0][RTW89_FCC][2][32] = 54,
[0][0][RTW89_ETSI][1][32] = 32,
@@ -50526,6 +52718,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][32] = 30,
[0][0][RTW89_MKK][0][32] = -8,
[0][0][RTW89_IC][1][32] = -18,
+ [0][0][RTW89_IC][2][32] = 54,
[0][0][RTW89_KCC][1][32] = -2,
[0][0][RTW89_KCC][0][32] = -2,
[0][0][RTW89_ACMA][1][32] = 32,
@@ -50535,6 +52728,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][32] = -8,
[0][0][RTW89_UK][1][32] = 32,
[0][0][RTW89_UK][0][32] = -8,
+ [0][0][RTW89_THAILAND][1][32] = 30,
+ [0][0][RTW89_THAILAND][0][32] = -18,
[0][0][RTW89_FCC][1][34] = -18,
[0][0][RTW89_FCC][2][34] = 54,
[0][0][RTW89_ETSI][1][34] = 32,
@@ -50542,6 +52737,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][34] = 30,
[0][0][RTW89_MKK][0][34] = -8,
[0][0][RTW89_IC][1][34] = -18,
+ [0][0][RTW89_IC][2][34] = 54,
[0][0][RTW89_KCC][1][34] = -2,
[0][0][RTW89_KCC][0][34] = -2,
[0][0][RTW89_ACMA][1][34] = 32,
@@ -50551,6 +52747,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][34] = -8,
[0][0][RTW89_UK][1][34] = 32,
[0][0][RTW89_UK][0][34] = -8,
+ [0][0][RTW89_THAILAND][1][34] = 30,
+ [0][0][RTW89_THAILAND][0][34] = -18,
[0][0][RTW89_FCC][1][36] = -18,
[0][0][RTW89_FCC][2][36] = 54,
[0][0][RTW89_ETSI][1][36] = 32,
@@ -50558,6 +52756,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][36] = 30,
[0][0][RTW89_MKK][0][36] = -8,
[0][0][RTW89_IC][1][36] = -18,
+ [0][0][RTW89_IC][2][36] = 54,
[0][0][RTW89_KCC][1][36] = -2,
[0][0][RTW89_KCC][0][36] = -2,
[0][0][RTW89_ACMA][1][36] = 32,
@@ -50567,6 +52766,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][36] = -8,
[0][0][RTW89_UK][1][36] = 32,
[0][0][RTW89_UK][0][36] = -8,
+ [0][0][RTW89_THAILAND][1][36] = 30,
+ [0][0][RTW89_THAILAND][0][36] = -18,
[0][0][RTW89_FCC][1][38] = -18,
[0][0][RTW89_FCC][2][38] = 54,
[0][0][RTW89_ETSI][1][38] = 32,
@@ -50574,6 +52775,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][38] = 30,
[0][0][RTW89_MKK][0][38] = -8,
[0][0][RTW89_IC][1][38] = -18,
+ [0][0][RTW89_IC][2][38] = 54,
[0][0][RTW89_KCC][1][38] = -2,
[0][0][RTW89_KCC][0][38] = -2,
[0][0][RTW89_ACMA][1][38] = 32,
@@ -50583,6 +52785,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][38] = -8,
[0][0][RTW89_UK][1][38] = 32,
[0][0][RTW89_UK][0][38] = -8,
+ [0][0][RTW89_THAILAND][1][38] = 30,
+ [0][0][RTW89_THAILAND][0][38] = -18,
[0][0][RTW89_FCC][1][40] = -18,
[0][0][RTW89_FCC][2][40] = 54,
[0][0][RTW89_ETSI][1][40] = 32,
@@ -50590,6 +52794,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][40] = 30,
[0][0][RTW89_MKK][0][40] = -8,
[0][0][RTW89_IC][1][40] = -18,
+ [0][0][RTW89_IC][2][40] = 54,
[0][0][RTW89_KCC][1][40] = -2,
[0][0][RTW89_KCC][0][40] = -2,
[0][0][RTW89_ACMA][1][40] = 32,
@@ -50599,6 +52804,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][40] = -8,
[0][0][RTW89_UK][1][40] = 32,
[0][0][RTW89_UK][0][40] = -8,
+ [0][0][RTW89_THAILAND][1][40] = 30,
+ [0][0][RTW89_THAILAND][0][40] = -18,
[0][0][RTW89_FCC][1][42] = -18,
[0][0][RTW89_FCC][2][42] = 54,
[0][0][RTW89_ETSI][1][42] = 32,
@@ -50606,6 +52813,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][42] = 30,
[0][0][RTW89_MKK][0][42] = -8,
[0][0][RTW89_IC][1][42] = -18,
+ [0][0][RTW89_IC][2][42] = 54,
[0][0][RTW89_KCC][1][42] = -2,
[0][0][RTW89_KCC][0][42] = -2,
[0][0][RTW89_ACMA][1][42] = 32,
@@ -50615,6 +52823,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][42] = -8,
[0][0][RTW89_UK][1][42] = 32,
[0][0][RTW89_UK][0][42] = -8,
+ [0][0][RTW89_THAILAND][1][42] = 30,
+ [0][0][RTW89_THAILAND][0][42] = -18,
[0][0][RTW89_FCC][1][44] = -16,
[0][0][RTW89_FCC][2][44] = 56,
[0][0][RTW89_ETSI][1][44] = 32,
@@ -50622,6 +52832,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][44] = 8,
[0][0][RTW89_MKK][0][44] = -10,
[0][0][RTW89_IC][1][44] = -16,
+ [0][0][RTW89_IC][2][44] = 56,
[0][0][RTW89_KCC][1][44] = -2,
[0][0][RTW89_KCC][0][44] = -2,
[0][0][RTW89_ACMA][1][44] = 32,
@@ -50631,6 +52842,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][44] = -6,
[0][0][RTW89_UK][1][44] = 32,
[0][0][RTW89_UK][0][44] = -6,
+ [0][0][RTW89_THAILAND][1][44] = 30,
+ [0][0][RTW89_THAILAND][0][44] = -16,
[0][0][RTW89_FCC][1][45] = -16,
[0][0][RTW89_FCC][2][45] = 127,
[0][0][RTW89_ETSI][1][45] = 127,
@@ -50638,6 +52851,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][45] = 127,
[0][0][RTW89_MKK][0][45] = 127,
[0][0][RTW89_IC][1][45] = -16,
+ [0][0][RTW89_IC][2][45] = 56,
[0][0][RTW89_KCC][1][45] = -2,
[0][0][RTW89_KCC][0][45] = 127,
[0][0][RTW89_ACMA][1][45] = 127,
@@ -50647,6 +52861,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][45] = 127,
[0][0][RTW89_UK][1][45] = 127,
[0][0][RTW89_UK][0][45] = 127,
+ [0][0][RTW89_THAILAND][1][45] = 127,
+ [0][0][RTW89_THAILAND][0][45] = 127,
[0][0][RTW89_FCC][1][47] = -18,
[0][0][RTW89_FCC][2][47] = 127,
[0][0][RTW89_ETSI][1][47] = 127,
@@ -50654,6 +52870,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][47] = 127,
[0][0][RTW89_MKK][0][47] = 127,
[0][0][RTW89_IC][1][47] = -18,
+ [0][0][RTW89_IC][2][47] = 56,
[0][0][RTW89_KCC][1][47] = -2,
[0][0][RTW89_KCC][0][47] = 127,
[0][0][RTW89_ACMA][1][47] = 127,
@@ -50663,6 +52880,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][47] = 127,
[0][0][RTW89_UK][1][47] = 127,
[0][0][RTW89_UK][0][47] = 127,
+ [0][0][RTW89_THAILAND][1][47] = 127,
+ [0][0][RTW89_THAILAND][0][47] = 127,
[0][0][RTW89_FCC][1][49] = -18,
[0][0][RTW89_FCC][2][49] = 127,
[0][0][RTW89_ETSI][1][49] = 127,
@@ -50670,6 +52889,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][49] = 127,
[0][0][RTW89_MKK][0][49] = 127,
[0][0][RTW89_IC][1][49] = -18,
+ [0][0][RTW89_IC][2][49] = 56,
[0][0][RTW89_KCC][1][49] = -2,
[0][0][RTW89_KCC][0][49] = 127,
[0][0][RTW89_ACMA][1][49] = 127,
@@ -50679,6 +52899,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][49] = 127,
[0][0][RTW89_UK][1][49] = 127,
[0][0][RTW89_UK][0][49] = 127,
+ [0][0][RTW89_THAILAND][1][49] = 127,
+ [0][0][RTW89_THAILAND][0][49] = 127,
[0][0][RTW89_FCC][1][51] = -18,
[0][0][RTW89_FCC][2][51] = 127,
[0][0][RTW89_ETSI][1][51] = 127,
@@ -50686,6 +52908,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][51] = 127,
[0][0][RTW89_MKK][0][51] = 127,
[0][0][RTW89_IC][1][51] = -18,
+ [0][0][RTW89_IC][2][51] = 56,
[0][0][RTW89_KCC][1][51] = -2,
[0][0][RTW89_KCC][0][51] = 127,
[0][0][RTW89_ACMA][1][51] = 127,
@@ -50695,6 +52918,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][51] = 127,
[0][0][RTW89_UK][1][51] = 127,
[0][0][RTW89_UK][0][51] = 127,
+ [0][0][RTW89_THAILAND][1][51] = 127,
+ [0][0][RTW89_THAILAND][0][51] = 127,
[0][0][RTW89_FCC][1][53] = -16,
[0][0][RTW89_FCC][2][53] = 127,
[0][0][RTW89_ETSI][1][53] = 127,
@@ -50702,6 +52927,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][53] = 127,
[0][0][RTW89_MKK][0][53] = 127,
[0][0][RTW89_IC][1][53] = -16,
+ [0][0][RTW89_IC][2][53] = 56,
[0][0][RTW89_KCC][1][53] = -2,
[0][0][RTW89_KCC][0][53] = 127,
[0][0][RTW89_ACMA][1][53] = 127,
@@ -50711,6 +52937,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][53] = 127,
[0][0][RTW89_UK][1][53] = 127,
[0][0][RTW89_UK][0][53] = 127,
+ [0][0][RTW89_THAILAND][1][53] = 127,
+ [0][0][RTW89_THAILAND][0][53] = 127,
[0][0][RTW89_FCC][1][55] = -18,
[0][0][RTW89_FCC][2][55] = 56,
[0][0][RTW89_ETSI][1][55] = 127,
@@ -50718,6 +52946,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][55] = 127,
[0][0][RTW89_MKK][0][55] = 127,
[0][0][RTW89_IC][1][55] = -18,
+ [0][0][RTW89_IC][2][55] = 56,
[0][0][RTW89_KCC][1][55] = -2,
[0][0][RTW89_KCC][0][55] = 127,
[0][0][RTW89_ACMA][1][55] = 127,
@@ -50727,6 +52956,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][55] = 127,
[0][0][RTW89_UK][1][55] = 127,
[0][0][RTW89_UK][0][55] = 127,
+ [0][0][RTW89_THAILAND][1][55] = 127,
+ [0][0][RTW89_THAILAND][0][55] = 127,
[0][0][RTW89_FCC][1][57] = -18,
[0][0][RTW89_FCC][2][57] = 56,
[0][0][RTW89_ETSI][1][57] = 127,
@@ -50734,6 +52965,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][57] = 127,
[0][0][RTW89_MKK][0][57] = 127,
[0][0][RTW89_IC][1][57] = -18,
+ [0][0][RTW89_IC][2][57] = 56,
[0][0][RTW89_KCC][1][57] = -2,
[0][0][RTW89_KCC][0][57] = 127,
[0][0][RTW89_ACMA][1][57] = 127,
@@ -50743,6 +52975,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][57] = 127,
[0][0][RTW89_UK][1][57] = 127,
[0][0][RTW89_UK][0][57] = 127,
+ [0][0][RTW89_THAILAND][1][57] = 127,
+ [0][0][RTW89_THAILAND][0][57] = 127,
[0][0][RTW89_FCC][1][59] = -18,
[0][0][RTW89_FCC][2][59] = 56,
[0][0][RTW89_ETSI][1][59] = 127,
@@ -50750,6 +52984,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][59] = 127,
[0][0][RTW89_MKK][0][59] = 127,
[0][0][RTW89_IC][1][59] = -18,
+ [0][0][RTW89_IC][2][59] = 56,
[0][0][RTW89_KCC][1][59] = -2,
[0][0][RTW89_KCC][0][59] = 127,
[0][0][RTW89_ACMA][1][59] = 127,
@@ -50759,6 +52994,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][59] = 127,
[0][0][RTW89_UK][1][59] = 127,
[0][0][RTW89_UK][0][59] = 127,
+ [0][0][RTW89_THAILAND][1][59] = 127,
+ [0][0][RTW89_THAILAND][0][59] = 127,
[0][0][RTW89_FCC][1][60] = -18,
[0][0][RTW89_FCC][2][60] = 56,
[0][0][RTW89_ETSI][1][60] = 127,
@@ -50766,6 +53003,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][60] = 127,
[0][0][RTW89_MKK][0][60] = 127,
[0][0][RTW89_IC][1][60] = -18,
+ [0][0][RTW89_IC][2][60] = 56,
[0][0][RTW89_KCC][1][60] = -2,
[0][0][RTW89_KCC][0][60] = 127,
[0][0][RTW89_ACMA][1][60] = 127,
@@ -50775,6 +53013,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][60] = 127,
[0][0][RTW89_UK][1][60] = 127,
[0][0][RTW89_UK][0][60] = 127,
+ [0][0][RTW89_THAILAND][1][60] = 127,
+ [0][0][RTW89_THAILAND][0][60] = 127,
[0][0][RTW89_FCC][1][62] = -18,
[0][0][RTW89_FCC][2][62] = 56,
[0][0][RTW89_ETSI][1][62] = 127,
@@ -50782,6 +53022,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][62] = 127,
[0][0][RTW89_MKK][0][62] = 127,
[0][0][RTW89_IC][1][62] = -18,
+ [0][0][RTW89_IC][2][62] = 56,
[0][0][RTW89_KCC][1][62] = -2,
[0][0][RTW89_KCC][0][62] = 127,
[0][0][RTW89_ACMA][1][62] = 127,
@@ -50791,6 +53032,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][62] = 127,
[0][0][RTW89_UK][1][62] = 127,
[0][0][RTW89_UK][0][62] = 127,
+ [0][0][RTW89_THAILAND][1][62] = 127,
+ [0][0][RTW89_THAILAND][0][62] = 127,
[0][0][RTW89_FCC][1][64] = -18,
[0][0][RTW89_FCC][2][64] = 56,
[0][0][RTW89_ETSI][1][64] = 127,
@@ -50798,6 +53041,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][64] = 127,
[0][0][RTW89_MKK][0][64] = 127,
[0][0][RTW89_IC][1][64] = -18,
+ [0][0][RTW89_IC][2][64] = 56,
[0][0][RTW89_KCC][1][64] = -2,
[0][0][RTW89_KCC][0][64] = 127,
[0][0][RTW89_ACMA][1][64] = 127,
@@ -50807,6 +53051,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][64] = 127,
[0][0][RTW89_UK][1][64] = 127,
[0][0][RTW89_UK][0][64] = 127,
+ [0][0][RTW89_THAILAND][1][64] = 127,
+ [0][0][RTW89_THAILAND][0][64] = 127,
[0][0][RTW89_FCC][1][66] = -18,
[0][0][RTW89_FCC][2][66] = 56,
[0][0][RTW89_ETSI][1][66] = 127,
@@ -50814,6 +53060,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][66] = 127,
[0][0][RTW89_MKK][0][66] = 127,
[0][0][RTW89_IC][1][66] = -18,
+ [0][0][RTW89_IC][2][66] = 56,
[0][0][RTW89_KCC][1][66] = -2,
[0][0][RTW89_KCC][0][66] = 127,
[0][0][RTW89_ACMA][1][66] = 127,
@@ -50823,6 +53070,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][66] = 127,
[0][0][RTW89_UK][1][66] = 127,
[0][0][RTW89_UK][0][66] = 127,
+ [0][0][RTW89_THAILAND][1][66] = 127,
+ [0][0][RTW89_THAILAND][0][66] = 127,
[0][0][RTW89_FCC][1][68] = -18,
[0][0][RTW89_FCC][2][68] = 56,
[0][0][RTW89_ETSI][1][68] = 127,
@@ -50830,6 +53079,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][68] = 127,
[0][0][RTW89_MKK][0][68] = 127,
[0][0][RTW89_IC][1][68] = -18,
+ [0][0][RTW89_IC][2][68] = 56,
[0][0][RTW89_KCC][1][68] = -2,
[0][0][RTW89_KCC][0][68] = 127,
[0][0][RTW89_ACMA][1][68] = 127,
@@ -50839,6 +53089,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][68] = 127,
[0][0][RTW89_UK][1][68] = 127,
[0][0][RTW89_UK][0][68] = 127,
+ [0][0][RTW89_THAILAND][1][68] = 127,
+ [0][0][RTW89_THAILAND][0][68] = 127,
[0][0][RTW89_FCC][1][70] = -16,
[0][0][RTW89_FCC][2][70] = 56,
[0][0][RTW89_ETSI][1][70] = 127,
@@ -50846,6 +53098,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][70] = 127,
[0][0][RTW89_MKK][0][70] = 127,
[0][0][RTW89_IC][1][70] = -16,
+ [0][0][RTW89_IC][2][70] = 56,
[0][0][RTW89_KCC][1][70] = -2,
[0][0][RTW89_KCC][0][70] = 127,
[0][0][RTW89_ACMA][1][70] = 127,
@@ -50855,6 +53108,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][70] = 127,
[0][0][RTW89_UK][1][70] = 127,
[0][0][RTW89_UK][0][70] = 127,
+ [0][0][RTW89_THAILAND][1][70] = 127,
+ [0][0][RTW89_THAILAND][0][70] = 127,
[0][0][RTW89_FCC][1][72] = -18,
[0][0][RTW89_FCC][2][72] = 56,
[0][0][RTW89_ETSI][1][72] = 127,
@@ -50862,6 +53117,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][72] = 127,
[0][0][RTW89_MKK][0][72] = 127,
[0][0][RTW89_IC][1][72] = -18,
+ [0][0][RTW89_IC][2][72] = 56,
[0][0][RTW89_KCC][1][72] = -2,
[0][0][RTW89_KCC][0][72] = 127,
[0][0][RTW89_ACMA][1][72] = 127,
@@ -50871,6 +53127,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][72] = 127,
[0][0][RTW89_UK][1][72] = 127,
[0][0][RTW89_UK][0][72] = 127,
+ [0][0][RTW89_THAILAND][1][72] = 127,
+ [0][0][RTW89_THAILAND][0][72] = 127,
[0][0][RTW89_FCC][1][74] = -18,
[0][0][RTW89_FCC][2][74] = 56,
[0][0][RTW89_ETSI][1][74] = 127,
@@ -50878,6 +53136,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][74] = 127,
[0][0][RTW89_MKK][0][74] = 127,
[0][0][RTW89_IC][1][74] = -18,
+ [0][0][RTW89_IC][2][74] = 56,
[0][0][RTW89_KCC][1][74] = -2,
[0][0][RTW89_KCC][0][74] = 127,
[0][0][RTW89_ACMA][1][74] = 127,
@@ -50887,6 +53146,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][74] = 127,
[0][0][RTW89_UK][1][74] = 127,
[0][0][RTW89_UK][0][74] = 127,
+ [0][0][RTW89_THAILAND][1][74] = 127,
+ [0][0][RTW89_THAILAND][0][74] = 127,
[0][0][RTW89_FCC][1][75] = -18,
[0][0][RTW89_FCC][2][75] = 56,
[0][0][RTW89_ETSI][1][75] = 127,
@@ -50894,6 +53155,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][75] = 127,
[0][0][RTW89_MKK][0][75] = 127,
[0][0][RTW89_IC][1][75] = -18,
+ [0][0][RTW89_IC][2][75] = 56,
[0][0][RTW89_KCC][1][75] = -2,
[0][0][RTW89_KCC][0][75] = 127,
[0][0][RTW89_ACMA][1][75] = 127,
@@ -50903,6 +53165,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][75] = 127,
[0][0][RTW89_UK][1][75] = 127,
[0][0][RTW89_UK][0][75] = 127,
+ [0][0][RTW89_THAILAND][1][75] = 127,
+ [0][0][RTW89_THAILAND][0][75] = 127,
[0][0][RTW89_FCC][1][77] = -18,
[0][0][RTW89_FCC][2][77] = 56,
[0][0][RTW89_ETSI][1][77] = 127,
@@ -50910,6 +53174,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][77] = 127,
[0][0][RTW89_MKK][0][77] = 127,
[0][0][RTW89_IC][1][77] = -18,
+ [0][0][RTW89_IC][2][77] = 56,
[0][0][RTW89_KCC][1][77] = -2,
[0][0][RTW89_KCC][0][77] = 127,
[0][0][RTW89_ACMA][1][77] = 127,
@@ -50919,6 +53184,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][77] = 127,
[0][0][RTW89_UK][1][77] = 127,
[0][0][RTW89_UK][0][77] = 127,
+ [0][0][RTW89_THAILAND][1][77] = 127,
+ [0][0][RTW89_THAILAND][0][77] = 127,
[0][0][RTW89_FCC][1][79] = -18,
[0][0][RTW89_FCC][2][79] = 56,
[0][0][RTW89_ETSI][1][79] = 127,
@@ -50926,6 +53193,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][79] = 127,
[0][0][RTW89_MKK][0][79] = 127,
[0][0][RTW89_IC][1][79] = -18,
+ [0][0][RTW89_IC][2][79] = 56,
[0][0][RTW89_KCC][1][79] = -2,
[0][0][RTW89_KCC][0][79] = 127,
[0][0][RTW89_ACMA][1][79] = 127,
@@ -50935,6 +53203,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][79] = 127,
[0][0][RTW89_UK][1][79] = 127,
[0][0][RTW89_UK][0][79] = 127,
+ [0][0][RTW89_THAILAND][1][79] = 127,
+ [0][0][RTW89_THAILAND][0][79] = 127,
[0][0][RTW89_FCC][1][81] = -18,
[0][0][RTW89_FCC][2][81] = 56,
[0][0][RTW89_ETSI][1][81] = 127,
@@ -50942,6 +53212,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][81] = 127,
[0][0][RTW89_MKK][0][81] = 127,
[0][0][RTW89_IC][1][81] = -18,
+ [0][0][RTW89_IC][2][81] = 56,
[0][0][RTW89_KCC][1][81] = -2,
[0][0][RTW89_KCC][0][81] = 127,
[0][0][RTW89_ACMA][1][81] = 127,
@@ -50951,6 +53222,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][81] = 127,
[0][0][RTW89_UK][1][81] = 127,
[0][0][RTW89_UK][0][81] = 127,
+ [0][0][RTW89_THAILAND][1][81] = 127,
+ [0][0][RTW89_THAILAND][0][81] = 127,
[0][0][RTW89_FCC][1][83] = -18,
[0][0][RTW89_FCC][2][83] = 56,
[0][0][RTW89_ETSI][1][83] = 127,
@@ -50958,6 +53231,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][83] = 127,
[0][0][RTW89_MKK][0][83] = 127,
[0][0][RTW89_IC][1][83] = -18,
+ [0][0][RTW89_IC][2][83] = 56,
[0][0][RTW89_KCC][1][83] = -2,
[0][0][RTW89_KCC][0][83] = 127,
[0][0][RTW89_ACMA][1][83] = 127,
@@ -50967,6 +53241,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][83] = 127,
[0][0][RTW89_UK][1][83] = 127,
[0][0][RTW89_UK][0][83] = 127,
+ [0][0][RTW89_THAILAND][1][83] = 127,
+ [0][0][RTW89_THAILAND][0][83] = 127,
[0][0][RTW89_FCC][1][85] = -18,
[0][0][RTW89_FCC][2][85] = 56,
[0][0][RTW89_ETSI][1][85] = 127,
@@ -50974,6 +53250,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][85] = 127,
[0][0][RTW89_MKK][0][85] = 127,
[0][0][RTW89_IC][1][85] = -18,
+ [0][0][RTW89_IC][2][85] = 56,
[0][0][RTW89_KCC][1][85] = -2,
[0][0][RTW89_KCC][0][85] = 127,
[0][0][RTW89_ACMA][1][85] = 127,
@@ -50983,6 +53260,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][85] = 127,
[0][0][RTW89_UK][1][85] = 127,
[0][0][RTW89_UK][0][85] = 127,
+ [0][0][RTW89_THAILAND][1][85] = 127,
+ [0][0][RTW89_THAILAND][0][85] = 127,
[0][0][RTW89_FCC][1][87] = -16,
[0][0][RTW89_FCC][2][87] = 127,
[0][0][RTW89_ETSI][1][87] = 127,
@@ -50990,6 +53269,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][87] = 127,
[0][0][RTW89_MKK][0][87] = 127,
[0][0][RTW89_IC][1][87] = -16,
+ [0][0][RTW89_IC][2][87] = 127,
[0][0][RTW89_KCC][1][87] = -2,
[0][0][RTW89_KCC][0][87] = 127,
[0][0][RTW89_ACMA][1][87] = 127,
@@ -50999,6 +53279,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][87] = 127,
[0][0][RTW89_UK][1][87] = 127,
[0][0][RTW89_UK][0][87] = 127,
+ [0][0][RTW89_THAILAND][1][87] = 127,
+ [0][0][RTW89_THAILAND][0][87] = 127,
[0][0][RTW89_FCC][1][89] = -16,
[0][0][RTW89_FCC][2][89] = 127,
[0][0][RTW89_ETSI][1][89] = 127,
@@ -51006,6 +53288,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][89] = 127,
[0][0][RTW89_MKK][0][89] = 127,
[0][0][RTW89_IC][1][89] = -16,
+ [0][0][RTW89_IC][2][89] = 127,
[0][0][RTW89_KCC][1][89] = -2,
[0][0][RTW89_KCC][0][89] = 127,
[0][0][RTW89_ACMA][1][89] = 127,
@@ -51015,6 +53298,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][89] = 127,
[0][0][RTW89_UK][1][89] = 127,
[0][0][RTW89_UK][0][89] = 127,
+ [0][0][RTW89_THAILAND][1][89] = 127,
+ [0][0][RTW89_THAILAND][0][89] = 127,
[0][0][RTW89_FCC][1][90] = -16,
[0][0][RTW89_FCC][2][90] = 127,
[0][0][RTW89_ETSI][1][90] = 127,
@@ -51022,6 +53307,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][90] = 127,
[0][0][RTW89_MKK][0][90] = 127,
[0][0][RTW89_IC][1][90] = -16,
+ [0][0][RTW89_IC][2][90] = 127,
[0][0][RTW89_KCC][1][90] = -2,
[0][0][RTW89_KCC][0][90] = 127,
[0][0][RTW89_ACMA][1][90] = 127,
@@ -51031,6 +53317,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][90] = 127,
[0][0][RTW89_UK][1][90] = 127,
[0][0][RTW89_UK][0][90] = 127,
+ [0][0][RTW89_THAILAND][1][90] = 127,
+ [0][0][RTW89_THAILAND][0][90] = 127,
[0][0][RTW89_FCC][1][92] = -16,
[0][0][RTW89_FCC][2][92] = 127,
[0][0][RTW89_ETSI][1][92] = 127,
@@ -51038,6 +53326,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][92] = 127,
[0][0][RTW89_MKK][0][92] = 127,
[0][0][RTW89_IC][1][92] = -16,
+ [0][0][RTW89_IC][2][92] = 127,
[0][0][RTW89_KCC][1][92] = -2,
[0][0][RTW89_KCC][0][92] = 127,
[0][0][RTW89_ACMA][1][92] = 127,
@@ -51047,6 +53336,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][92] = 127,
[0][0][RTW89_UK][1][92] = 127,
[0][0][RTW89_UK][0][92] = 127,
+ [0][0][RTW89_THAILAND][1][92] = 127,
+ [0][0][RTW89_THAILAND][0][92] = 127,
[0][0][RTW89_FCC][1][94] = -16,
[0][0][RTW89_FCC][2][94] = 127,
[0][0][RTW89_ETSI][1][94] = 127,
@@ -51054,6 +53345,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][94] = 127,
[0][0][RTW89_MKK][0][94] = 127,
[0][0][RTW89_IC][1][94] = -16,
+ [0][0][RTW89_IC][2][94] = 127,
[0][0][RTW89_KCC][1][94] = -2,
[0][0][RTW89_KCC][0][94] = 127,
[0][0][RTW89_ACMA][1][94] = 127,
@@ -51063,6 +53355,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][94] = 127,
[0][0][RTW89_UK][1][94] = 127,
[0][0][RTW89_UK][0][94] = 127,
+ [0][0][RTW89_THAILAND][1][94] = 127,
+ [0][0][RTW89_THAILAND][0][94] = 127,
[0][0][RTW89_FCC][1][96] = -16,
[0][0][RTW89_FCC][2][96] = 127,
[0][0][RTW89_ETSI][1][96] = 127,
@@ -51070,6 +53364,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][96] = 127,
[0][0][RTW89_MKK][0][96] = 127,
[0][0][RTW89_IC][1][96] = -16,
+ [0][0][RTW89_IC][2][96] = 127,
[0][0][RTW89_KCC][1][96] = -2,
[0][0][RTW89_KCC][0][96] = 127,
[0][0][RTW89_ACMA][1][96] = 127,
@@ -51079,6 +53374,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][96] = 127,
[0][0][RTW89_UK][1][96] = 127,
[0][0][RTW89_UK][0][96] = 127,
+ [0][0][RTW89_THAILAND][1][96] = 127,
+ [0][0][RTW89_THAILAND][0][96] = 127,
[0][0][RTW89_FCC][1][98] = -16,
[0][0][RTW89_FCC][2][98] = 127,
[0][0][RTW89_ETSI][1][98] = 127,
@@ -51086,6 +53383,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][98] = 127,
[0][0][RTW89_MKK][0][98] = 127,
[0][0][RTW89_IC][1][98] = -16,
+ [0][0][RTW89_IC][2][98] = 127,
[0][0][RTW89_KCC][1][98] = -2,
[0][0][RTW89_KCC][0][98] = 127,
[0][0][RTW89_ACMA][1][98] = 127,
@@ -51095,6 +53393,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][98] = 127,
[0][0][RTW89_UK][1][98] = 127,
[0][0][RTW89_UK][0][98] = 127,
+ [0][0][RTW89_THAILAND][1][98] = 127,
+ [0][0][RTW89_THAILAND][0][98] = 127,
[0][0][RTW89_FCC][1][100] = -16,
[0][0][RTW89_FCC][2][100] = 127,
[0][0][RTW89_ETSI][1][100] = 127,
@@ -51102,6 +53402,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][100] = 127,
[0][0][RTW89_MKK][0][100] = 127,
[0][0][RTW89_IC][1][100] = -16,
+ [0][0][RTW89_IC][2][100] = 127,
[0][0][RTW89_KCC][1][100] = -2,
[0][0][RTW89_KCC][0][100] = 127,
[0][0][RTW89_ACMA][1][100] = 127,
@@ -51111,6 +53412,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][100] = 127,
[0][0][RTW89_UK][1][100] = 127,
[0][0][RTW89_UK][0][100] = 127,
+ [0][0][RTW89_THAILAND][1][100] = 127,
+ [0][0][RTW89_THAILAND][0][100] = 127,
[0][0][RTW89_FCC][1][102] = -16,
[0][0][RTW89_FCC][2][102] = 127,
[0][0][RTW89_ETSI][1][102] = 127,
@@ -51118,6 +53421,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][102] = 127,
[0][0][RTW89_MKK][0][102] = 127,
[0][0][RTW89_IC][1][102] = -16,
+ [0][0][RTW89_IC][2][102] = 127,
[0][0][RTW89_KCC][1][102] = -2,
[0][0][RTW89_KCC][0][102] = 127,
[0][0][RTW89_ACMA][1][102] = 127,
@@ -51127,6 +53431,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][102] = 127,
[0][0][RTW89_UK][1][102] = 127,
[0][0][RTW89_UK][0][102] = 127,
+ [0][0][RTW89_THAILAND][1][102] = 127,
+ [0][0][RTW89_THAILAND][0][102] = 127,
[0][0][RTW89_FCC][1][104] = -16,
[0][0][RTW89_FCC][2][104] = 127,
[0][0][RTW89_ETSI][1][104] = 127,
@@ -51134,6 +53440,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][104] = 127,
[0][0][RTW89_MKK][0][104] = 127,
[0][0][RTW89_IC][1][104] = -16,
+ [0][0][RTW89_IC][2][104] = 127,
[0][0][RTW89_KCC][1][104] = -2,
[0][0][RTW89_KCC][0][104] = 127,
[0][0][RTW89_ACMA][1][104] = 127,
@@ -51143,6 +53450,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][104] = 127,
[0][0][RTW89_UK][1][104] = 127,
[0][0][RTW89_UK][0][104] = 127,
+ [0][0][RTW89_THAILAND][1][104] = 127,
+ [0][0][RTW89_THAILAND][0][104] = 127,
[0][0][RTW89_FCC][1][105] = -16,
[0][0][RTW89_FCC][2][105] = 127,
[0][0][RTW89_ETSI][1][105] = 127,
@@ -51150,6 +53459,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][105] = 127,
[0][0][RTW89_MKK][0][105] = 127,
[0][0][RTW89_IC][1][105] = -16,
+ [0][0][RTW89_IC][2][105] = 127,
[0][0][RTW89_KCC][1][105] = -2,
[0][0][RTW89_KCC][0][105] = 127,
[0][0][RTW89_ACMA][1][105] = 127,
@@ -51159,6 +53469,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][105] = 127,
[0][0][RTW89_UK][1][105] = 127,
[0][0][RTW89_UK][0][105] = 127,
+ [0][0][RTW89_THAILAND][1][105] = 127,
+ [0][0][RTW89_THAILAND][0][105] = 127,
[0][0][RTW89_FCC][1][107] = -12,
[0][0][RTW89_FCC][2][107] = 127,
[0][0][RTW89_ETSI][1][107] = 127,
@@ -51166,6 +53478,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][107] = 127,
[0][0][RTW89_MKK][0][107] = 127,
[0][0][RTW89_IC][1][107] = -12,
+ [0][0][RTW89_IC][2][107] = 127,
[0][0][RTW89_KCC][1][107] = -2,
[0][0][RTW89_KCC][0][107] = 127,
[0][0][RTW89_ACMA][1][107] = 127,
@@ -51175,6 +53488,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][107] = 127,
[0][0][RTW89_UK][1][107] = 127,
[0][0][RTW89_UK][0][107] = 127,
+ [0][0][RTW89_THAILAND][1][107] = 127,
+ [0][0][RTW89_THAILAND][0][107] = 127,
[0][0][RTW89_FCC][1][109] = -12,
[0][0][RTW89_FCC][2][109] = 127,
[0][0][RTW89_ETSI][1][109] = 127,
@@ -51182,6 +53497,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][109] = 127,
[0][0][RTW89_MKK][0][109] = 127,
[0][0][RTW89_IC][1][109] = -12,
+ [0][0][RTW89_IC][2][109] = 127,
[0][0][RTW89_KCC][1][109] = 127,
[0][0][RTW89_KCC][0][109] = 127,
[0][0][RTW89_ACMA][1][109] = 127,
@@ -51191,6 +53507,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][109] = 127,
[0][0][RTW89_UK][1][109] = 127,
[0][0][RTW89_UK][0][109] = 127,
+ [0][0][RTW89_THAILAND][1][109] = 127,
+ [0][0][RTW89_THAILAND][0][109] = 127,
[0][0][RTW89_FCC][1][111] = 127,
[0][0][RTW89_FCC][2][111] = 127,
[0][0][RTW89_ETSI][1][111] = 127,
@@ -51198,6 +53516,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][111] = 127,
[0][0][RTW89_MKK][0][111] = 127,
[0][0][RTW89_IC][1][111] = 127,
+ [0][0][RTW89_IC][2][111] = 127,
[0][0][RTW89_KCC][1][111] = 127,
[0][0][RTW89_KCC][0][111] = 127,
[0][0][RTW89_ACMA][1][111] = 127,
@@ -51207,6 +53526,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][111] = 127,
[0][0][RTW89_UK][1][111] = 127,
[0][0][RTW89_UK][0][111] = 127,
+ [0][0][RTW89_THAILAND][1][111] = 127,
+ [0][0][RTW89_THAILAND][0][111] = 127,
[0][0][RTW89_FCC][1][113] = 127,
[0][0][RTW89_FCC][2][113] = 127,
[0][0][RTW89_ETSI][1][113] = 127,
@@ -51214,6 +53535,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][113] = 127,
[0][0][RTW89_MKK][0][113] = 127,
[0][0][RTW89_IC][1][113] = 127,
+ [0][0][RTW89_IC][2][113] = 127,
[0][0][RTW89_KCC][1][113] = 127,
[0][0][RTW89_KCC][0][113] = 127,
[0][0][RTW89_ACMA][1][113] = 127,
@@ -51223,6 +53545,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][113] = 127,
[0][0][RTW89_UK][1][113] = 127,
[0][0][RTW89_UK][0][113] = 127,
+ [0][0][RTW89_THAILAND][1][113] = 127,
+ [0][0][RTW89_THAILAND][0][113] = 127,
[0][0][RTW89_FCC][1][115] = 127,
[0][0][RTW89_FCC][2][115] = 127,
[0][0][RTW89_ETSI][1][115] = 127,
@@ -51230,6 +53554,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][115] = 127,
[0][0][RTW89_MKK][0][115] = 127,
[0][0][RTW89_IC][1][115] = 127,
+ [0][0][RTW89_IC][2][115] = 127,
[0][0][RTW89_KCC][1][115] = 127,
[0][0][RTW89_KCC][0][115] = 127,
[0][0][RTW89_ACMA][1][115] = 127,
@@ -51239,6 +53564,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][115] = 127,
[0][0][RTW89_UK][1][115] = 127,
[0][0][RTW89_UK][0][115] = 127,
+ [0][0][RTW89_THAILAND][1][115] = 127,
+ [0][0][RTW89_THAILAND][0][115] = 127,
[0][0][RTW89_FCC][1][117] = 127,
[0][0][RTW89_FCC][2][117] = 127,
[0][0][RTW89_ETSI][1][117] = 127,
@@ -51246,6 +53573,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][117] = 127,
[0][0][RTW89_MKK][0][117] = 127,
[0][0][RTW89_IC][1][117] = 127,
+ [0][0][RTW89_IC][2][117] = 127,
[0][0][RTW89_KCC][1][117] = 127,
[0][0][RTW89_KCC][0][117] = 127,
[0][0][RTW89_ACMA][1][117] = 127,
@@ -51255,6 +53583,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][117] = 127,
[0][0][RTW89_UK][1][117] = 127,
[0][0][RTW89_UK][0][117] = 127,
+ [0][0][RTW89_THAILAND][1][117] = 127,
+ [0][0][RTW89_THAILAND][0][117] = 127,
[0][0][RTW89_FCC][1][119] = 127,
[0][0][RTW89_FCC][2][119] = 127,
[0][0][RTW89_ETSI][1][119] = 127,
@@ -51262,6 +53592,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_MKK][1][119] = 127,
[0][0][RTW89_MKK][0][119] = 127,
[0][0][RTW89_IC][1][119] = 127,
+ [0][0][RTW89_IC][2][119] = 127,
[0][0][RTW89_KCC][1][119] = 127,
[0][0][RTW89_KCC][0][119] = 127,
[0][0][RTW89_ACMA][1][119] = 127,
@@ -51271,6 +53602,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][0][RTW89_QATAR][0][119] = 127,
[0][0][RTW89_UK][1][119] = 127,
[0][0][RTW89_UK][0][119] = 127,
+ [0][0][RTW89_THAILAND][1][119] = 127,
+ [0][0][RTW89_THAILAND][0][119] = 127,
[0][1][RTW89_FCC][1][0] = -40,
[0][1][RTW89_FCC][2][0] = 32,
[0][1][RTW89_ETSI][1][0] = 20,
@@ -51278,6 +53611,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][0] = 18,
[0][1][RTW89_MKK][0][0] = -20,
[0][1][RTW89_IC][1][0] = -40,
+ [0][1][RTW89_IC][2][0] = 32,
[0][1][RTW89_KCC][1][0] = -14,
[0][1][RTW89_KCC][0][0] = -14,
[0][1][RTW89_ACMA][1][0] = 20,
@@ -51287,6 +53621,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][0] = -18,
[0][1][RTW89_UK][1][0] = 20,
[0][1][RTW89_UK][0][0] = -18,
+ [0][1][RTW89_THAILAND][1][0] = 6,
+ [0][1][RTW89_THAILAND][0][0] = -40,
[0][1][RTW89_FCC][1][2] = -40,
[0][1][RTW89_FCC][2][2] = 32,
[0][1][RTW89_ETSI][1][2] = 20,
@@ -51294,6 +53630,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][2] = 18,
[0][1][RTW89_MKK][0][2] = -22,
[0][1][RTW89_IC][1][2] = -40,
+ [0][1][RTW89_IC][2][2] = 32,
[0][1][RTW89_KCC][1][2] = -14,
[0][1][RTW89_KCC][0][2] = -14,
[0][1][RTW89_ACMA][1][2] = 20,
@@ -51303,6 +53640,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][2] = -18,
[0][1][RTW89_UK][1][2] = 20,
[0][1][RTW89_UK][0][2] = -18,
+ [0][1][RTW89_THAILAND][1][2] = 6,
+ [0][1][RTW89_THAILAND][0][2] = -40,
[0][1][RTW89_FCC][1][4] = -40,
[0][1][RTW89_FCC][2][4] = 32,
[0][1][RTW89_ETSI][1][4] = 20,
@@ -51310,6 +53649,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][4] = 18,
[0][1][RTW89_MKK][0][4] = -22,
[0][1][RTW89_IC][1][4] = -40,
+ [0][1][RTW89_IC][2][4] = 32,
[0][1][RTW89_KCC][1][4] = -14,
[0][1][RTW89_KCC][0][4] = -14,
[0][1][RTW89_ACMA][1][4] = 20,
@@ -51319,6 +53659,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][4] = -18,
[0][1][RTW89_UK][1][4] = 20,
[0][1][RTW89_UK][0][4] = -18,
+ [0][1][RTW89_THAILAND][1][4] = 6,
+ [0][1][RTW89_THAILAND][0][4] = -40,
[0][1][RTW89_FCC][1][6] = -40,
[0][1][RTW89_FCC][2][6] = 32,
[0][1][RTW89_ETSI][1][6] = 20,
@@ -51326,6 +53668,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][6] = 18,
[0][1][RTW89_MKK][0][6] = -22,
[0][1][RTW89_IC][1][6] = -40,
+ [0][1][RTW89_IC][2][6] = 32,
[0][1][RTW89_KCC][1][6] = -14,
[0][1][RTW89_KCC][0][6] = -14,
[0][1][RTW89_ACMA][1][6] = 20,
@@ -51335,6 +53678,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][6] = -18,
[0][1][RTW89_UK][1][6] = 20,
[0][1][RTW89_UK][0][6] = -18,
+ [0][1][RTW89_THAILAND][1][6] = 6,
+ [0][1][RTW89_THAILAND][0][6] = -40,
[0][1][RTW89_FCC][1][8] = -40,
[0][1][RTW89_FCC][2][8] = 32,
[0][1][RTW89_ETSI][1][8] = 20,
@@ -51342,6 +53687,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][8] = 18,
[0][1][RTW89_MKK][0][8] = -22,
[0][1][RTW89_IC][1][8] = -40,
+ [0][1][RTW89_IC][2][8] = 32,
[0][1][RTW89_KCC][1][8] = -14,
[0][1][RTW89_KCC][0][8] = -14,
[0][1][RTW89_ACMA][1][8] = 20,
@@ -51351,6 +53697,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][8] = -18,
[0][1][RTW89_UK][1][8] = 20,
[0][1][RTW89_UK][0][8] = -18,
+ [0][1][RTW89_THAILAND][1][8] = 6,
+ [0][1][RTW89_THAILAND][0][8] = -40,
[0][1][RTW89_FCC][1][10] = -40,
[0][1][RTW89_FCC][2][10] = 32,
[0][1][RTW89_ETSI][1][10] = 20,
@@ -51358,6 +53706,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][10] = 18,
[0][1][RTW89_MKK][0][10] = -22,
[0][1][RTW89_IC][1][10] = -40,
+ [0][1][RTW89_IC][2][10] = 32,
[0][1][RTW89_KCC][1][10] = -14,
[0][1][RTW89_KCC][0][10] = -14,
[0][1][RTW89_ACMA][1][10] = 20,
@@ -51367,6 +53716,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][10] = -18,
[0][1][RTW89_UK][1][10] = 20,
[0][1][RTW89_UK][0][10] = -18,
+ [0][1][RTW89_THAILAND][1][10] = 6,
+ [0][1][RTW89_THAILAND][0][10] = -40,
[0][1][RTW89_FCC][1][12] = -40,
[0][1][RTW89_FCC][2][12] = 32,
[0][1][RTW89_ETSI][1][12] = 20,
@@ -51374,6 +53725,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][12] = 18,
[0][1][RTW89_MKK][0][12] = -22,
[0][1][RTW89_IC][1][12] = -40,
+ [0][1][RTW89_IC][2][12] = 32,
[0][1][RTW89_KCC][1][12] = -14,
[0][1][RTW89_KCC][0][12] = -14,
[0][1][RTW89_ACMA][1][12] = 20,
@@ -51383,6 +53735,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][12] = -18,
[0][1][RTW89_UK][1][12] = 20,
[0][1][RTW89_UK][0][12] = -18,
+ [0][1][RTW89_THAILAND][1][12] = 6,
+ [0][1][RTW89_THAILAND][0][12] = -40,
[0][1][RTW89_FCC][1][14] = -40,
[0][1][RTW89_FCC][2][14] = 32,
[0][1][RTW89_ETSI][1][14] = 20,
@@ -51390,6 +53744,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][14] = 18,
[0][1][RTW89_MKK][0][14] = -22,
[0][1][RTW89_IC][1][14] = -40,
+ [0][1][RTW89_IC][2][14] = 32,
[0][1][RTW89_KCC][1][14] = -14,
[0][1][RTW89_KCC][0][14] = -14,
[0][1][RTW89_ACMA][1][14] = 20,
@@ -51399,6 +53754,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][14] = -18,
[0][1][RTW89_UK][1][14] = 20,
[0][1][RTW89_UK][0][14] = -18,
+ [0][1][RTW89_THAILAND][1][14] = 6,
+ [0][1][RTW89_THAILAND][0][14] = -40,
[0][1][RTW89_FCC][1][15] = -40,
[0][1][RTW89_FCC][2][15] = 32,
[0][1][RTW89_ETSI][1][15] = 20,
@@ -51406,6 +53763,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][15] = 18,
[0][1][RTW89_MKK][0][15] = -22,
[0][1][RTW89_IC][1][15] = -40,
+ [0][1][RTW89_IC][2][15] = 32,
[0][1][RTW89_KCC][1][15] = -14,
[0][1][RTW89_KCC][0][15] = -14,
[0][1][RTW89_ACMA][1][15] = 20,
@@ -51415,6 +53773,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][15] = -18,
[0][1][RTW89_UK][1][15] = 20,
[0][1][RTW89_UK][0][15] = -18,
+ [0][1][RTW89_THAILAND][1][15] = 6,
+ [0][1][RTW89_THAILAND][0][15] = -40,
[0][1][RTW89_FCC][1][17] = -40,
[0][1][RTW89_FCC][2][17] = 32,
[0][1][RTW89_ETSI][1][17] = 20,
@@ -51422,6 +53782,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][17] = 18,
[0][1][RTW89_MKK][0][17] = -22,
[0][1][RTW89_IC][1][17] = -40,
+ [0][1][RTW89_IC][2][17] = 32,
[0][1][RTW89_KCC][1][17] = -14,
[0][1][RTW89_KCC][0][17] = -14,
[0][1][RTW89_ACMA][1][17] = 20,
@@ -51431,6 +53792,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][17] = -18,
[0][1][RTW89_UK][1][17] = 20,
[0][1][RTW89_UK][0][17] = -18,
+ [0][1][RTW89_THAILAND][1][17] = 6,
+ [0][1][RTW89_THAILAND][0][17] = -40,
[0][1][RTW89_FCC][1][19] = -40,
[0][1][RTW89_FCC][2][19] = 32,
[0][1][RTW89_ETSI][1][19] = 20,
@@ -51438,6 +53801,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][19] = 18,
[0][1][RTW89_MKK][0][19] = -22,
[0][1][RTW89_IC][1][19] = -40,
+ [0][1][RTW89_IC][2][19] = 32,
[0][1][RTW89_KCC][1][19] = -14,
[0][1][RTW89_KCC][0][19] = -14,
[0][1][RTW89_ACMA][1][19] = 20,
@@ -51447,6 +53811,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][19] = -18,
[0][1][RTW89_UK][1][19] = 20,
[0][1][RTW89_UK][0][19] = -18,
+ [0][1][RTW89_THAILAND][1][19] = 6,
+ [0][1][RTW89_THAILAND][0][19] = -40,
[0][1][RTW89_FCC][1][21] = -40,
[0][1][RTW89_FCC][2][21] = 32,
[0][1][RTW89_ETSI][1][21] = 20,
@@ -51454,6 +53820,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][21] = 18,
[0][1][RTW89_MKK][0][21] = -22,
[0][1][RTW89_IC][1][21] = -40,
+ [0][1][RTW89_IC][2][21] = 32,
[0][1][RTW89_KCC][1][21] = -14,
[0][1][RTW89_KCC][0][21] = -14,
[0][1][RTW89_ACMA][1][21] = 20,
@@ -51463,6 +53830,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][21] = -18,
[0][1][RTW89_UK][1][21] = 20,
[0][1][RTW89_UK][0][21] = -18,
+ [0][1][RTW89_THAILAND][1][21] = 6,
+ [0][1][RTW89_THAILAND][0][21] = -40,
[0][1][RTW89_FCC][1][23] = -40,
[0][1][RTW89_FCC][2][23] = 32,
[0][1][RTW89_ETSI][1][23] = 20,
@@ -51470,6 +53839,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][23] = 18,
[0][1][RTW89_MKK][0][23] = -22,
[0][1][RTW89_IC][1][23] = -40,
+ [0][1][RTW89_IC][2][23] = 32,
[0][1][RTW89_KCC][1][23] = -14,
[0][1][RTW89_KCC][0][23] = -14,
[0][1][RTW89_ACMA][1][23] = 20,
@@ -51479,6 +53849,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][23] = -18,
[0][1][RTW89_UK][1][23] = 20,
[0][1][RTW89_UK][0][23] = -18,
+ [0][1][RTW89_THAILAND][1][23] = 6,
+ [0][1][RTW89_THAILAND][0][23] = -40,
[0][1][RTW89_FCC][1][25] = -40,
[0][1][RTW89_FCC][2][25] = 32,
[0][1][RTW89_ETSI][1][25] = 20,
@@ -51486,6 +53858,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][25] = -4,
[0][1][RTW89_MKK][0][25] = -22,
[0][1][RTW89_IC][1][25] = -40,
+ [0][1][RTW89_IC][2][25] = 32,
[0][1][RTW89_KCC][1][25] = -14,
[0][1][RTW89_KCC][0][25] = -14,
[0][1][RTW89_ACMA][1][25] = 20,
@@ -51495,6 +53868,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][25] = -18,
[0][1][RTW89_UK][1][25] = 20,
[0][1][RTW89_UK][0][25] = -18,
+ [0][1][RTW89_THAILAND][1][25] = 6,
+ [0][1][RTW89_THAILAND][0][25] = -40,
[0][1][RTW89_FCC][1][27] = -40,
[0][1][RTW89_FCC][2][27] = 32,
[0][1][RTW89_ETSI][1][27] = 20,
@@ -51502,6 +53877,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][27] = -4,
[0][1][RTW89_MKK][0][27] = -22,
[0][1][RTW89_IC][1][27] = -40,
+ [0][1][RTW89_IC][2][27] = 32,
[0][1][RTW89_KCC][1][27] = -14,
[0][1][RTW89_KCC][0][27] = -14,
[0][1][RTW89_ACMA][1][27] = 20,
@@ -51511,6 +53887,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][27] = -18,
[0][1][RTW89_UK][1][27] = 20,
[0][1][RTW89_UK][0][27] = -18,
+ [0][1][RTW89_THAILAND][1][27] = 6,
+ [0][1][RTW89_THAILAND][0][27] = -40,
[0][1][RTW89_FCC][1][29] = -40,
[0][1][RTW89_FCC][2][29] = 32,
[0][1][RTW89_ETSI][1][29] = 20,
@@ -51518,6 +53896,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][29] = -4,
[0][1][RTW89_MKK][0][29] = -22,
[0][1][RTW89_IC][1][29] = -40,
+ [0][1][RTW89_IC][2][29] = 32,
[0][1][RTW89_KCC][1][29] = -14,
[0][1][RTW89_KCC][0][29] = -14,
[0][1][RTW89_ACMA][1][29] = 20,
@@ -51527,6 +53906,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][29] = -18,
[0][1][RTW89_UK][1][29] = 20,
[0][1][RTW89_UK][0][29] = -18,
+ [0][1][RTW89_THAILAND][1][29] = 6,
+ [0][1][RTW89_THAILAND][0][29] = -40,
[0][1][RTW89_FCC][1][30] = -40,
[0][1][RTW89_FCC][2][30] = 32,
[0][1][RTW89_ETSI][1][30] = 20,
@@ -51534,6 +53915,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][30] = -4,
[0][1][RTW89_MKK][0][30] = -22,
[0][1][RTW89_IC][1][30] = -40,
+ [0][1][RTW89_IC][2][30] = 32,
[0][1][RTW89_KCC][1][30] = -14,
[0][1][RTW89_KCC][0][30] = -14,
[0][1][RTW89_ACMA][1][30] = 20,
@@ -51543,6 +53925,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][30] = -18,
[0][1][RTW89_UK][1][30] = 20,
[0][1][RTW89_UK][0][30] = -18,
+ [0][1][RTW89_THAILAND][1][30] = 6,
+ [0][1][RTW89_THAILAND][0][30] = -40,
[0][1][RTW89_FCC][1][32] = -40,
[0][1][RTW89_FCC][2][32] = 32,
[0][1][RTW89_ETSI][1][32] = 20,
@@ -51550,6 +53934,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][32] = -4,
[0][1][RTW89_MKK][0][32] = -22,
[0][1][RTW89_IC][1][32] = -40,
+ [0][1][RTW89_IC][2][32] = 32,
[0][1][RTW89_KCC][1][32] = -14,
[0][1][RTW89_KCC][0][32] = -14,
[0][1][RTW89_ACMA][1][32] = 20,
@@ -51559,6 +53944,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][32] = -18,
[0][1][RTW89_UK][1][32] = 20,
[0][1][RTW89_UK][0][32] = -18,
+ [0][1][RTW89_THAILAND][1][32] = 6,
+ [0][1][RTW89_THAILAND][0][32] = -40,
[0][1][RTW89_FCC][1][34] = -40,
[0][1][RTW89_FCC][2][34] = 32,
[0][1][RTW89_ETSI][1][34] = 20,
@@ -51566,6 +53953,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][34] = -4,
[0][1][RTW89_MKK][0][34] = -22,
[0][1][RTW89_IC][1][34] = -40,
+ [0][1][RTW89_IC][2][34] = 32,
[0][1][RTW89_KCC][1][34] = -14,
[0][1][RTW89_KCC][0][34] = -14,
[0][1][RTW89_ACMA][1][34] = 20,
@@ -51575,6 +53963,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][34] = -18,
[0][1][RTW89_UK][1][34] = 20,
[0][1][RTW89_UK][0][34] = -18,
+ [0][1][RTW89_THAILAND][1][34] = 6,
+ [0][1][RTW89_THAILAND][0][34] = -40,
[0][1][RTW89_FCC][1][36] = -40,
[0][1][RTW89_FCC][2][36] = 32,
[0][1][RTW89_ETSI][1][36] = 20,
@@ -51582,6 +53972,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][36] = -4,
[0][1][RTW89_MKK][0][36] = -22,
[0][1][RTW89_IC][1][36] = -40,
+ [0][1][RTW89_IC][2][36] = 32,
[0][1][RTW89_KCC][1][36] = -14,
[0][1][RTW89_KCC][0][36] = -14,
[0][1][RTW89_ACMA][1][36] = 20,
@@ -51591,6 +53982,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][36] = -18,
[0][1][RTW89_UK][1][36] = 20,
[0][1][RTW89_UK][0][36] = -18,
+ [0][1][RTW89_THAILAND][1][36] = 6,
+ [0][1][RTW89_THAILAND][0][36] = -40,
[0][1][RTW89_FCC][1][38] = -40,
[0][1][RTW89_FCC][2][38] = 32,
[0][1][RTW89_ETSI][1][38] = 20,
@@ -51598,6 +53991,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][38] = -4,
[0][1][RTW89_MKK][0][38] = -22,
[0][1][RTW89_IC][1][38] = -40,
+ [0][1][RTW89_IC][2][38] = 32,
[0][1][RTW89_KCC][1][38] = -14,
[0][1][RTW89_KCC][0][38] = -14,
[0][1][RTW89_ACMA][1][38] = 20,
@@ -51607,6 +54001,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][38] = -18,
[0][1][RTW89_UK][1][38] = 20,
[0][1][RTW89_UK][0][38] = -18,
+ [0][1][RTW89_THAILAND][1][38] = 6,
+ [0][1][RTW89_THAILAND][0][38] = -40,
[0][1][RTW89_FCC][1][40] = -40,
[0][1][RTW89_FCC][2][40] = 32,
[0][1][RTW89_ETSI][1][40] = 20,
@@ -51614,6 +54010,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][40] = -4,
[0][1][RTW89_MKK][0][40] = -22,
[0][1][RTW89_IC][1][40] = -40,
+ [0][1][RTW89_IC][2][40] = 32,
[0][1][RTW89_KCC][1][40] = -14,
[0][1][RTW89_KCC][0][40] = -14,
[0][1][RTW89_ACMA][1][40] = 20,
@@ -51623,6 +54020,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][40] = -18,
[0][1][RTW89_UK][1][40] = 20,
[0][1][RTW89_UK][0][40] = -18,
+ [0][1][RTW89_THAILAND][1][40] = 6,
+ [0][1][RTW89_THAILAND][0][40] = -40,
[0][1][RTW89_FCC][1][42] = -40,
[0][1][RTW89_FCC][2][42] = 32,
[0][1][RTW89_ETSI][1][42] = 20,
@@ -51630,6 +54029,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][42] = -4,
[0][1][RTW89_MKK][0][42] = -22,
[0][1][RTW89_IC][1][42] = -40,
+ [0][1][RTW89_IC][2][42] = 32,
[0][1][RTW89_KCC][1][42] = -14,
[0][1][RTW89_KCC][0][42] = -14,
[0][1][RTW89_ACMA][1][42] = 20,
@@ -51639,6 +54039,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][42] = -18,
[0][1][RTW89_UK][1][42] = 20,
[0][1][RTW89_UK][0][42] = -18,
+ [0][1][RTW89_THAILAND][1][42] = 6,
+ [0][1][RTW89_THAILAND][0][42] = -40,
[0][1][RTW89_FCC][1][44] = -40,
[0][1][RTW89_FCC][2][44] = 32,
[0][1][RTW89_ETSI][1][44] = 20,
@@ -51646,6 +54048,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][44] = -4,
[0][1][RTW89_MKK][0][44] = -22,
[0][1][RTW89_IC][1][44] = -40,
+ [0][1][RTW89_IC][2][44] = 32,
[0][1][RTW89_KCC][1][44] = -14,
[0][1][RTW89_KCC][0][44] = -14,
[0][1][RTW89_ACMA][1][44] = 20,
@@ -51655,6 +54058,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][44] = -18,
[0][1][RTW89_UK][1][44] = 20,
[0][1][RTW89_UK][0][44] = -18,
+ [0][1][RTW89_THAILAND][1][44] = 6,
+ [0][1][RTW89_THAILAND][0][44] = -40,
[0][1][RTW89_FCC][1][45] = -40,
[0][1][RTW89_FCC][2][45] = 127,
[0][1][RTW89_ETSI][1][45] = 127,
@@ -51662,6 +54067,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][45] = 127,
[0][1][RTW89_MKK][0][45] = 127,
[0][1][RTW89_IC][1][45] = -40,
+ [0][1][RTW89_IC][2][45] = 32,
[0][1][RTW89_KCC][1][45] = -14,
[0][1][RTW89_KCC][0][45] = 127,
[0][1][RTW89_ACMA][1][45] = 127,
@@ -51671,6 +54077,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][45] = 127,
[0][1][RTW89_UK][1][45] = 127,
[0][1][RTW89_UK][0][45] = 127,
+ [0][1][RTW89_THAILAND][1][45] = 127,
+ [0][1][RTW89_THAILAND][0][45] = 127,
[0][1][RTW89_FCC][1][47] = -40,
[0][1][RTW89_FCC][2][47] = 127,
[0][1][RTW89_ETSI][1][47] = 127,
@@ -51678,6 +54086,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][47] = 127,
[0][1][RTW89_MKK][0][47] = 127,
[0][1][RTW89_IC][1][47] = -40,
+ [0][1][RTW89_IC][2][47] = 32,
[0][1][RTW89_KCC][1][47] = -14,
[0][1][RTW89_KCC][0][47] = 127,
[0][1][RTW89_ACMA][1][47] = 127,
@@ -51687,6 +54096,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][47] = 127,
[0][1][RTW89_UK][1][47] = 127,
[0][1][RTW89_UK][0][47] = 127,
+ [0][1][RTW89_THAILAND][1][47] = 127,
+ [0][1][RTW89_THAILAND][0][47] = 127,
[0][1][RTW89_FCC][1][49] = -40,
[0][1][RTW89_FCC][2][49] = 127,
[0][1][RTW89_ETSI][1][49] = 127,
@@ -51694,6 +54105,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][49] = 127,
[0][1][RTW89_MKK][0][49] = 127,
[0][1][RTW89_IC][1][49] = -40,
+ [0][1][RTW89_IC][2][49] = 32,
[0][1][RTW89_KCC][1][49] = -14,
[0][1][RTW89_KCC][0][49] = 127,
[0][1][RTW89_ACMA][1][49] = 127,
@@ -51703,6 +54115,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][49] = 127,
[0][1][RTW89_UK][1][49] = 127,
[0][1][RTW89_UK][0][49] = 127,
+ [0][1][RTW89_THAILAND][1][49] = 127,
+ [0][1][RTW89_THAILAND][0][49] = 127,
[0][1][RTW89_FCC][1][51] = -40,
[0][1][RTW89_FCC][2][51] = 127,
[0][1][RTW89_ETSI][1][51] = 127,
@@ -51710,6 +54124,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][51] = 127,
[0][1][RTW89_MKK][0][51] = 127,
[0][1][RTW89_IC][1][51] = -40,
+ [0][1][RTW89_IC][2][51] = 32,
[0][1][RTW89_KCC][1][51] = -14,
[0][1][RTW89_KCC][0][51] = 127,
[0][1][RTW89_ACMA][1][51] = 127,
@@ -51719,6 +54134,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][51] = 127,
[0][1][RTW89_UK][1][51] = 127,
[0][1][RTW89_UK][0][51] = 127,
+ [0][1][RTW89_THAILAND][1][51] = 127,
+ [0][1][RTW89_THAILAND][0][51] = 127,
[0][1][RTW89_FCC][1][53] = -40,
[0][1][RTW89_FCC][2][53] = 127,
[0][1][RTW89_ETSI][1][53] = 127,
@@ -51726,6 +54143,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][53] = 127,
[0][1][RTW89_MKK][0][53] = 127,
[0][1][RTW89_IC][1][53] = -40,
+ [0][1][RTW89_IC][2][53] = 32,
[0][1][RTW89_KCC][1][53] = -14,
[0][1][RTW89_KCC][0][53] = 127,
[0][1][RTW89_ACMA][1][53] = 127,
@@ -51735,6 +54153,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][53] = 127,
[0][1][RTW89_UK][1][53] = 127,
[0][1][RTW89_UK][0][53] = 127,
+ [0][1][RTW89_THAILAND][1][53] = 127,
+ [0][1][RTW89_THAILAND][0][53] = 127,
[0][1][RTW89_FCC][1][55] = -40,
[0][1][RTW89_FCC][2][55] = 30,
[0][1][RTW89_ETSI][1][55] = 127,
@@ -51742,6 +54162,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][55] = 127,
[0][1][RTW89_MKK][0][55] = 127,
[0][1][RTW89_IC][1][55] = -40,
+ [0][1][RTW89_IC][2][55] = 30,
[0][1][RTW89_KCC][1][55] = -14,
[0][1][RTW89_KCC][0][55] = 127,
[0][1][RTW89_ACMA][1][55] = 127,
@@ -51751,6 +54172,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][55] = 127,
[0][1][RTW89_UK][1][55] = 127,
[0][1][RTW89_UK][0][55] = 127,
+ [0][1][RTW89_THAILAND][1][55] = 127,
+ [0][1][RTW89_THAILAND][0][55] = 127,
[0][1][RTW89_FCC][1][57] = -40,
[0][1][RTW89_FCC][2][57] = 30,
[0][1][RTW89_ETSI][1][57] = 127,
@@ -51758,6 +54181,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][57] = 127,
[0][1][RTW89_MKK][0][57] = 127,
[0][1][RTW89_IC][1][57] = -40,
+ [0][1][RTW89_IC][2][57] = 30,
[0][1][RTW89_KCC][1][57] = -14,
[0][1][RTW89_KCC][0][57] = 127,
[0][1][RTW89_ACMA][1][57] = 127,
@@ -51767,6 +54191,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][57] = 127,
[0][1][RTW89_UK][1][57] = 127,
[0][1][RTW89_UK][0][57] = 127,
+ [0][1][RTW89_THAILAND][1][57] = 127,
+ [0][1][RTW89_THAILAND][0][57] = 127,
[0][1][RTW89_FCC][1][59] = -40,
[0][1][RTW89_FCC][2][59] = 30,
[0][1][RTW89_ETSI][1][59] = 127,
@@ -51774,6 +54200,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][59] = 127,
[0][1][RTW89_MKK][0][59] = 127,
[0][1][RTW89_IC][1][59] = -40,
+ [0][1][RTW89_IC][2][59] = 30,
[0][1][RTW89_KCC][1][59] = -14,
[0][1][RTW89_KCC][0][59] = 127,
[0][1][RTW89_ACMA][1][59] = 127,
@@ -51783,6 +54210,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][59] = 127,
[0][1][RTW89_UK][1][59] = 127,
[0][1][RTW89_UK][0][59] = 127,
+ [0][1][RTW89_THAILAND][1][59] = 127,
+ [0][1][RTW89_THAILAND][0][59] = 127,
[0][1][RTW89_FCC][1][60] = -40,
[0][1][RTW89_FCC][2][60] = 30,
[0][1][RTW89_ETSI][1][60] = 127,
@@ -51790,6 +54219,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][60] = 127,
[0][1][RTW89_MKK][0][60] = 127,
[0][1][RTW89_IC][1][60] = -40,
+ [0][1][RTW89_IC][2][60] = 30,
[0][1][RTW89_KCC][1][60] = -14,
[0][1][RTW89_KCC][0][60] = 127,
[0][1][RTW89_ACMA][1][60] = 127,
@@ -51799,6 +54229,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][60] = 127,
[0][1][RTW89_UK][1][60] = 127,
[0][1][RTW89_UK][0][60] = 127,
+ [0][1][RTW89_THAILAND][1][60] = 127,
+ [0][1][RTW89_THAILAND][0][60] = 127,
[0][1][RTW89_FCC][1][62] = -40,
[0][1][RTW89_FCC][2][62] = 30,
[0][1][RTW89_ETSI][1][62] = 127,
@@ -51806,6 +54238,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][62] = 127,
[0][1][RTW89_MKK][0][62] = 127,
[0][1][RTW89_IC][1][62] = -40,
+ [0][1][RTW89_IC][2][62] = 30,
[0][1][RTW89_KCC][1][62] = -14,
[0][1][RTW89_KCC][0][62] = 127,
[0][1][RTW89_ACMA][1][62] = 127,
@@ -51815,6 +54248,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][62] = 127,
[0][1][RTW89_UK][1][62] = 127,
[0][1][RTW89_UK][0][62] = 127,
+ [0][1][RTW89_THAILAND][1][62] = 127,
+ [0][1][RTW89_THAILAND][0][62] = 127,
[0][1][RTW89_FCC][1][64] = -40,
[0][1][RTW89_FCC][2][64] = 30,
[0][1][RTW89_ETSI][1][64] = 127,
@@ -51822,6 +54257,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][64] = 127,
[0][1][RTW89_MKK][0][64] = 127,
[0][1][RTW89_IC][1][64] = -40,
+ [0][1][RTW89_IC][2][64] = 30,
[0][1][RTW89_KCC][1][64] = -14,
[0][1][RTW89_KCC][0][64] = 127,
[0][1][RTW89_ACMA][1][64] = 127,
@@ -51831,6 +54267,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][64] = 127,
[0][1][RTW89_UK][1][64] = 127,
[0][1][RTW89_UK][0][64] = 127,
+ [0][1][RTW89_THAILAND][1][64] = 127,
+ [0][1][RTW89_THAILAND][0][64] = 127,
[0][1][RTW89_FCC][1][66] = -40,
[0][1][RTW89_FCC][2][66] = 30,
[0][1][RTW89_ETSI][1][66] = 127,
@@ -51838,6 +54276,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][66] = 127,
[0][1][RTW89_MKK][0][66] = 127,
[0][1][RTW89_IC][1][66] = -40,
+ [0][1][RTW89_IC][2][66] = 30,
[0][1][RTW89_KCC][1][66] = -14,
[0][1][RTW89_KCC][0][66] = 127,
[0][1][RTW89_ACMA][1][66] = 127,
@@ -51847,6 +54286,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][66] = 127,
[0][1][RTW89_UK][1][66] = 127,
[0][1][RTW89_UK][0][66] = 127,
+ [0][1][RTW89_THAILAND][1][66] = 127,
+ [0][1][RTW89_THAILAND][0][66] = 127,
[0][1][RTW89_FCC][1][68] = -40,
[0][1][RTW89_FCC][2][68] = 30,
[0][1][RTW89_ETSI][1][68] = 127,
@@ -51854,6 +54295,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][68] = 127,
[0][1][RTW89_MKK][0][68] = 127,
[0][1][RTW89_IC][1][68] = -40,
+ [0][1][RTW89_IC][2][68] = 30,
[0][1][RTW89_KCC][1][68] = -14,
[0][1][RTW89_KCC][0][68] = 127,
[0][1][RTW89_ACMA][1][68] = 127,
@@ -51863,6 +54305,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][68] = 127,
[0][1][RTW89_UK][1][68] = 127,
[0][1][RTW89_UK][0][68] = 127,
+ [0][1][RTW89_THAILAND][1][68] = 127,
+ [0][1][RTW89_THAILAND][0][68] = 127,
[0][1][RTW89_FCC][1][70] = -38,
[0][1][RTW89_FCC][2][70] = 30,
[0][1][RTW89_ETSI][1][70] = 127,
@@ -51870,6 +54314,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][70] = 127,
[0][1][RTW89_MKK][0][70] = 127,
[0][1][RTW89_IC][1][70] = -38,
+ [0][1][RTW89_IC][2][70] = 30,
[0][1][RTW89_KCC][1][70] = -14,
[0][1][RTW89_KCC][0][70] = 127,
[0][1][RTW89_ACMA][1][70] = 127,
@@ -51879,6 +54324,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][70] = 127,
[0][1][RTW89_UK][1][70] = 127,
[0][1][RTW89_UK][0][70] = 127,
+ [0][1][RTW89_THAILAND][1][70] = 127,
+ [0][1][RTW89_THAILAND][0][70] = 127,
[0][1][RTW89_FCC][1][72] = -38,
[0][1][RTW89_FCC][2][72] = 30,
[0][1][RTW89_ETSI][1][72] = 127,
@@ -51886,6 +54333,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][72] = 127,
[0][1][RTW89_MKK][0][72] = 127,
[0][1][RTW89_IC][1][72] = -38,
+ [0][1][RTW89_IC][2][72] = 30,
[0][1][RTW89_KCC][1][72] = -14,
[0][1][RTW89_KCC][0][72] = 127,
[0][1][RTW89_ACMA][1][72] = 127,
@@ -51895,6 +54343,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][72] = 127,
[0][1][RTW89_UK][1][72] = 127,
[0][1][RTW89_UK][0][72] = 127,
+ [0][1][RTW89_THAILAND][1][72] = 127,
+ [0][1][RTW89_THAILAND][0][72] = 127,
[0][1][RTW89_FCC][1][74] = -38,
[0][1][RTW89_FCC][2][74] = 30,
[0][1][RTW89_ETSI][1][74] = 127,
@@ -51902,6 +54352,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][74] = 127,
[0][1][RTW89_MKK][0][74] = 127,
[0][1][RTW89_IC][1][74] = -38,
+ [0][1][RTW89_IC][2][74] = 30,
[0][1][RTW89_KCC][1][74] = -14,
[0][1][RTW89_KCC][0][74] = 127,
[0][1][RTW89_ACMA][1][74] = 127,
@@ -51911,6 +54362,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][74] = 127,
[0][1][RTW89_UK][1][74] = 127,
[0][1][RTW89_UK][0][74] = 127,
+ [0][1][RTW89_THAILAND][1][74] = 127,
+ [0][1][RTW89_THAILAND][0][74] = 127,
[0][1][RTW89_FCC][1][75] = -38,
[0][1][RTW89_FCC][2][75] = 30,
[0][1][RTW89_ETSI][1][75] = 127,
@@ -51918,6 +54371,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][75] = 127,
[0][1][RTW89_MKK][0][75] = 127,
[0][1][RTW89_IC][1][75] = -38,
+ [0][1][RTW89_IC][2][75] = 30,
[0][1][RTW89_KCC][1][75] = -14,
[0][1][RTW89_KCC][0][75] = 127,
[0][1][RTW89_ACMA][1][75] = 127,
@@ -51927,6 +54381,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][75] = 127,
[0][1][RTW89_UK][1][75] = 127,
[0][1][RTW89_UK][0][75] = 127,
+ [0][1][RTW89_THAILAND][1][75] = 127,
+ [0][1][RTW89_THAILAND][0][75] = 127,
[0][1][RTW89_FCC][1][77] = -38,
[0][1][RTW89_FCC][2][77] = 30,
[0][1][RTW89_ETSI][1][77] = 127,
@@ -51934,6 +54390,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][77] = 127,
[0][1][RTW89_MKK][0][77] = 127,
[0][1][RTW89_IC][1][77] = -38,
+ [0][1][RTW89_IC][2][77] = 30,
[0][1][RTW89_KCC][1][77] = -14,
[0][1][RTW89_KCC][0][77] = 127,
[0][1][RTW89_ACMA][1][77] = 127,
@@ -51943,6 +54400,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][77] = 127,
[0][1][RTW89_UK][1][77] = 127,
[0][1][RTW89_UK][0][77] = 127,
+ [0][1][RTW89_THAILAND][1][77] = 127,
+ [0][1][RTW89_THAILAND][0][77] = 127,
[0][1][RTW89_FCC][1][79] = -38,
[0][1][RTW89_FCC][2][79] = 30,
[0][1][RTW89_ETSI][1][79] = 127,
@@ -51950,6 +54409,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][79] = 127,
[0][1][RTW89_MKK][0][79] = 127,
[0][1][RTW89_IC][1][79] = -38,
+ [0][1][RTW89_IC][2][79] = 30,
[0][1][RTW89_KCC][1][79] = -14,
[0][1][RTW89_KCC][0][79] = 127,
[0][1][RTW89_ACMA][1][79] = 127,
@@ -51959,6 +54419,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][79] = 127,
[0][1][RTW89_UK][1][79] = 127,
[0][1][RTW89_UK][0][79] = 127,
+ [0][1][RTW89_THAILAND][1][79] = 127,
+ [0][1][RTW89_THAILAND][0][79] = 127,
[0][1][RTW89_FCC][1][81] = -38,
[0][1][RTW89_FCC][2][81] = 30,
[0][1][RTW89_ETSI][1][81] = 127,
@@ -51966,6 +54428,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][81] = 127,
[0][1][RTW89_MKK][0][81] = 127,
[0][1][RTW89_IC][1][81] = -38,
+ [0][1][RTW89_IC][2][81] = 30,
[0][1][RTW89_KCC][1][81] = -14,
[0][1][RTW89_KCC][0][81] = 127,
[0][1][RTW89_ACMA][1][81] = 127,
@@ -51975,6 +54438,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][81] = 127,
[0][1][RTW89_UK][1][81] = 127,
[0][1][RTW89_UK][0][81] = 127,
+ [0][1][RTW89_THAILAND][1][81] = 127,
+ [0][1][RTW89_THAILAND][0][81] = 127,
[0][1][RTW89_FCC][1][83] = -38,
[0][1][RTW89_FCC][2][83] = 30,
[0][1][RTW89_ETSI][1][83] = 127,
@@ -51982,6 +54447,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][83] = 127,
[0][1][RTW89_MKK][0][83] = 127,
[0][1][RTW89_IC][1][83] = -38,
+ [0][1][RTW89_IC][2][83] = 30,
[0][1][RTW89_KCC][1][83] = -14,
[0][1][RTW89_KCC][0][83] = 127,
[0][1][RTW89_ACMA][1][83] = 127,
@@ -51991,6 +54457,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][83] = 127,
[0][1][RTW89_UK][1][83] = 127,
[0][1][RTW89_UK][0][83] = 127,
+ [0][1][RTW89_THAILAND][1][83] = 127,
+ [0][1][RTW89_THAILAND][0][83] = 127,
[0][1][RTW89_FCC][1][85] = -38,
[0][1][RTW89_FCC][2][85] = 30,
[0][1][RTW89_ETSI][1][85] = 127,
@@ -51998,6 +54466,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][85] = 127,
[0][1][RTW89_MKK][0][85] = 127,
[0][1][RTW89_IC][1][85] = -38,
+ [0][1][RTW89_IC][2][85] = 30,
[0][1][RTW89_KCC][1][85] = -14,
[0][1][RTW89_KCC][0][85] = 127,
[0][1][RTW89_ACMA][1][85] = 127,
@@ -52007,6 +54476,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][85] = 127,
[0][1][RTW89_UK][1][85] = 127,
[0][1][RTW89_UK][0][85] = 127,
+ [0][1][RTW89_THAILAND][1][85] = 127,
+ [0][1][RTW89_THAILAND][0][85] = 127,
[0][1][RTW89_FCC][1][87] = -40,
[0][1][RTW89_FCC][2][87] = 127,
[0][1][RTW89_ETSI][1][87] = 127,
@@ -52014,6 +54485,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][87] = 127,
[0][1][RTW89_MKK][0][87] = 127,
[0][1][RTW89_IC][1][87] = -40,
+ [0][1][RTW89_IC][2][87] = 127,
[0][1][RTW89_KCC][1][87] = -14,
[0][1][RTW89_KCC][0][87] = 127,
[0][1][RTW89_ACMA][1][87] = 127,
@@ -52023,6 +54495,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][87] = 127,
[0][1][RTW89_UK][1][87] = 127,
[0][1][RTW89_UK][0][87] = 127,
+ [0][1][RTW89_THAILAND][1][87] = 127,
+ [0][1][RTW89_THAILAND][0][87] = 127,
[0][1][RTW89_FCC][1][89] = -38,
[0][1][RTW89_FCC][2][89] = 127,
[0][1][RTW89_ETSI][1][89] = 127,
@@ -52030,6 +54504,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][89] = 127,
[0][1][RTW89_MKK][0][89] = 127,
[0][1][RTW89_IC][1][89] = -38,
+ [0][1][RTW89_IC][2][89] = 127,
[0][1][RTW89_KCC][1][89] = -14,
[0][1][RTW89_KCC][0][89] = 127,
[0][1][RTW89_ACMA][1][89] = 127,
@@ -52039,6 +54514,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][89] = 127,
[0][1][RTW89_UK][1][89] = 127,
[0][1][RTW89_UK][0][89] = 127,
+ [0][1][RTW89_THAILAND][1][89] = 127,
+ [0][1][RTW89_THAILAND][0][89] = 127,
[0][1][RTW89_FCC][1][90] = -38,
[0][1][RTW89_FCC][2][90] = 127,
[0][1][RTW89_ETSI][1][90] = 127,
@@ -52046,6 +54523,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][90] = 127,
[0][1][RTW89_MKK][0][90] = 127,
[0][1][RTW89_IC][1][90] = -38,
+ [0][1][RTW89_IC][2][90] = 127,
[0][1][RTW89_KCC][1][90] = -14,
[0][1][RTW89_KCC][0][90] = 127,
[0][1][RTW89_ACMA][1][90] = 127,
@@ -52055,6 +54533,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][90] = 127,
[0][1][RTW89_UK][1][90] = 127,
[0][1][RTW89_UK][0][90] = 127,
+ [0][1][RTW89_THAILAND][1][90] = 127,
+ [0][1][RTW89_THAILAND][0][90] = 127,
[0][1][RTW89_FCC][1][92] = -38,
[0][1][RTW89_FCC][2][92] = 127,
[0][1][RTW89_ETSI][1][92] = 127,
@@ -52062,6 +54542,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][92] = 127,
[0][1][RTW89_MKK][0][92] = 127,
[0][1][RTW89_IC][1][92] = -38,
+ [0][1][RTW89_IC][2][92] = 127,
[0][1][RTW89_KCC][1][92] = -14,
[0][1][RTW89_KCC][0][92] = 127,
[0][1][RTW89_ACMA][1][92] = 127,
@@ -52071,6 +54552,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][92] = 127,
[0][1][RTW89_UK][1][92] = 127,
[0][1][RTW89_UK][0][92] = 127,
+ [0][1][RTW89_THAILAND][1][92] = 127,
+ [0][1][RTW89_THAILAND][0][92] = 127,
[0][1][RTW89_FCC][1][94] = -38,
[0][1][RTW89_FCC][2][94] = 127,
[0][1][RTW89_ETSI][1][94] = 127,
@@ -52078,6 +54561,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][94] = 127,
[0][1][RTW89_MKK][0][94] = 127,
[0][1][RTW89_IC][1][94] = -38,
+ [0][1][RTW89_IC][2][94] = 127,
[0][1][RTW89_KCC][1][94] = -14,
[0][1][RTW89_KCC][0][94] = 127,
[0][1][RTW89_ACMA][1][94] = 127,
@@ -52087,6 +54571,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][94] = 127,
[0][1][RTW89_UK][1][94] = 127,
[0][1][RTW89_UK][0][94] = 127,
+ [0][1][RTW89_THAILAND][1][94] = 127,
+ [0][1][RTW89_THAILAND][0][94] = 127,
[0][1][RTW89_FCC][1][96] = -38,
[0][1][RTW89_FCC][2][96] = 127,
[0][1][RTW89_ETSI][1][96] = 127,
@@ -52094,6 +54580,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][96] = 127,
[0][1][RTW89_MKK][0][96] = 127,
[0][1][RTW89_IC][1][96] = -38,
+ [0][1][RTW89_IC][2][96] = 127,
[0][1][RTW89_KCC][1][96] = -14,
[0][1][RTW89_KCC][0][96] = 127,
[0][1][RTW89_ACMA][1][96] = 127,
@@ -52103,6 +54590,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][96] = 127,
[0][1][RTW89_UK][1][96] = 127,
[0][1][RTW89_UK][0][96] = 127,
+ [0][1][RTW89_THAILAND][1][96] = 127,
+ [0][1][RTW89_THAILAND][0][96] = 127,
[0][1][RTW89_FCC][1][98] = -38,
[0][1][RTW89_FCC][2][98] = 127,
[0][1][RTW89_ETSI][1][98] = 127,
@@ -52110,6 +54599,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][98] = 127,
[0][1][RTW89_MKK][0][98] = 127,
[0][1][RTW89_IC][1][98] = -38,
+ [0][1][RTW89_IC][2][98] = 127,
[0][1][RTW89_KCC][1][98] = -14,
[0][1][RTW89_KCC][0][98] = 127,
[0][1][RTW89_ACMA][1][98] = 127,
@@ -52119,6 +54609,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][98] = 127,
[0][1][RTW89_UK][1][98] = 127,
[0][1][RTW89_UK][0][98] = 127,
+ [0][1][RTW89_THAILAND][1][98] = 127,
+ [0][1][RTW89_THAILAND][0][98] = 127,
[0][1][RTW89_FCC][1][100] = -38,
[0][1][RTW89_FCC][2][100] = 127,
[0][1][RTW89_ETSI][1][100] = 127,
@@ -52126,6 +54618,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][100] = 127,
[0][1][RTW89_MKK][0][100] = 127,
[0][1][RTW89_IC][1][100] = -38,
+ [0][1][RTW89_IC][2][100] = 127,
[0][1][RTW89_KCC][1][100] = -14,
[0][1][RTW89_KCC][0][100] = 127,
[0][1][RTW89_ACMA][1][100] = 127,
@@ -52135,6 +54628,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][100] = 127,
[0][1][RTW89_UK][1][100] = 127,
[0][1][RTW89_UK][0][100] = 127,
+ [0][1][RTW89_THAILAND][1][100] = 127,
+ [0][1][RTW89_THAILAND][0][100] = 127,
[0][1][RTW89_FCC][1][102] = -38,
[0][1][RTW89_FCC][2][102] = 127,
[0][1][RTW89_ETSI][1][102] = 127,
@@ -52142,6 +54637,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][102] = 127,
[0][1][RTW89_MKK][0][102] = 127,
[0][1][RTW89_IC][1][102] = -38,
+ [0][1][RTW89_IC][2][102] = 127,
[0][1][RTW89_KCC][1][102] = -14,
[0][1][RTW89_KCC][0][102] = 127,
[0][1][RTW89_ACMA][1][102] = 127,
@@ -52151,6 +54647,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][102] = 127,
[0][1][RTW89_UK][1][102] = 127,
[0][1][RTW89_UK][0][102] = 127,
+ [0][1][RTW89_THAILAND][1][102] = 127,
+ [0][1][RTW89_THAILAND][0][102] = 127,
[0][1][RTW89_FCC][1][104] = -38,
[0][1][RTW89_FCC][2][104] = 127,
[0][1][RTW89_ETSI][1][104] = 127,
@@ -52158,6 +54656,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][104] = 127,
[0][1][RTW89_MKK][0][104] = 127,
[0][1][RTW89_IC][1][104] = -38,
+ [0][1][RTW89_IC][2][104] = 127,
[0][1][RTW89_KCC][1][104] = -14,
[0][1][RTW89_KCC][0][104] = 127,
[0][1][RTW89_ACMA][1][104] = 127,
@@ -52167,6 +54666,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][104] = 127,
[0][1][RTW89_UK][1][104] = 127,
[0][1][RTW89_UK][0][104] = 127,
+ [0][1][RTW89_THAILAND][1][104] = 127,
+ [0][1][RTW89_THAILAND][0][104] = 127,
[0][1][RTW89_FCC][1][105] = -38,
[0][1][RTW89_FCC][2][105] = 127,
[0][1][RTW89_ETSI][1][105] = 127,
@@ -52174,6 +54675,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][105] = 127,
[0][1][RTW89_MKK][0][105] = 127,
[0][1][RTW89_IC][1][105] = -38,
+ [0][1][RTW89_IC][2][105] = 127,
[0][1][RTW89_KCC][1][105] = -14,
[0][1][RTW89_KCC][0][105] = 127,
[0][1][RTW89_ACMA][1][105] = 127,
@@ -52183,6 +54685,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][105] = 127,
[0][1][RTW89_UK][1][105] = 127,
[0][1][RTW89_UK][0][105] = 127,
+ [0][1][RTW89_THAILAND][1][105] = 127,
+ [0][1][RTW89_THAILAND][0][105] = 127,
[0][1][RTW89_FCC][1][107] = -34,
[0][1][RTW89_FCC][2][107] = 127,
[0][1][RTW89_ETSI][1][107] = 127,
@@ -52190,6 +54694,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][107] = 127,
[0][1][RTW89_MKK][0][107] = 127,
[0][1][RTW89_IC][1][107] = -34,
+ [0][1][RTW89_IC][2][107] = 127,
[0][1][RTW89_KCC][1][107] = -14,
[0][1][RTW89_KCC][0][107] = 127,
[0][1][RTW89_ACMA][1][107] = 127,
@@ -52199,6 +54704,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][107] = 127,
[0][1][RTW89_UK][1][107] = 127,
[0][1][RTW89_UK][0][107] = 127,
+ [0][1][RTW89_THAILAND][1][107] = 127,
+ [0][1][RTW89_THAILAND][0][107] = 127,
[0][1][RTW89_FCC][1][109] = -34,
[0][1][RTW89_FCC][2][109] = 127,
[0][1][RTW89_ETSI][1][109] = 127,
@@ -52206,6 +54713,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][109] = 127,
[0][1][RTW89_MKK][0][109] = 127,
[0][1][RTW89_IC][1][109] = -34,
+ [0][1][RTW89_IC][2][109] = 127,
[0][1][RTW89_KCC][1][109] = 127,
[0][1][RTW89_KCC][0][109] = 127,
[0][1][RTW89_ACMA][1][109] = 127,
@@ -52215,6 +54723,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][109] = 127,
[0][1][RTW89_UK][1][109] = 127,
[0][1][RTW89_UK][0][109] = 127,
+ [0][1][RTW89_THAILAND][1][109] = 127,
+ [0][1][RTW89_THAILAND][0][109] = 127,
[0][1][RTW89_FCC][1][111] = 127,
[0][1][RTW89_FCC][2][111] = 127,
[0][1][RTW89_ETSI][1][111] = 127,
@@ -52222,6 +54732,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][111] = 127,
[0][1][RTW89_MKK][0][111] = 127,
[0][1][RTW89_IC][1][111] = 127,
+ [0][1][RTW89_IC][2][111] = 127,
[0][1][RTW89_KCC][1][111] = 127,
[0][1][RTW89_KCC][0][111] = 127,
[0][1][RTW89_ACMA][1][111] = 127,
@@ -52231,6 +54742,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][111] = 127,
[0][1][RTW89_UK][1][111] = 127,
[0][1][RTW89_UK][0][111] = 127,
+ [0][1][RTW89_THAILAND][1][111] = 127,
+ [0][1][RTW89_THAILAND][0][111] = 127,
[0][1][RTW89_FCC][1][113] = 127,
[0][1][RTW89_FCC][2][113] = 127,
[0][1][RTW89_ETSI][1][113] = 127,
@@ -52238,6 +54751,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][113] = 127,
[0][1][RTW89_MKK][0][113] = 127,
[0][1][RTW89_IC][1][113] = 127,
+ [0][1][RTW89_IC][2][113] = 127,
[0][1][RTW89_KCC][1][113] = 127,
[0][1][RTW89_KCC][0][113] = 127,
[0][1][RTW89_ACMA][1][113] = 127,
@@ -52247,6 +54761,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][113] = 127,
[0][1][RTW89_UK][1][113] = 127,
[0][1][RTW89_UK][0][113] = 127,
+ [0][1][RTW89_THAILAND][1][113] = 127,
+ [0][1][RTW89_THAILAND][0][113] = 127,
[0][1][RTW89_FCC][1][115] = 127,
[0][1][RTW89_FCC][2][115] = 127,
[0][1][RTW89_ETSI][1][115] = 127,
@@ -52254,6 +54770,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][115] = 127,
[0][1][RTW89_MKK][0][115] = 127,
[0][1][RTW89_IC][1][115] = 127,
+ [0][1][RTW89_IC][2][115] = 127,
[0][1][RTW89_KCC][1][115] = 127,
[0][1][RTW89_KCC][0][115] = 127,
[0][1][RTW89_ACMA][1][115] = 127,
@@ -52263,6 +54780,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][115] = 127,
[0][1][RTW89_UK][1][115] = 127,
[0][1][RTW89_UK][0][115] = 127,
+ [0][1][RTW89_THAILAND][1][115] = 127,
+ [0][1][RTW89_THAILAND][0][115] = 127,
[0][1][RTW89_FCC][1][117] = 127,
[0][1][RTW89_FCC][2][117] = 127,
[0][1][RTW89_ETSI][1][117] = 127,
@@ -52270,6 +54789,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][117] = 127,
[0][1][RTW89_MKK][0][117] = 127,
[0][1][RTW89_IC][1][117] = 127,
+ [0][1][RTW89_IC][2][117] = 127,
[0][1][RTW89_KCC][1][117] = 127,
[0][1][RTW89_KCC][0][117] = 127,
[0][1][RTW89_ACMA][1][117] = 127,
@@ -52279,6 +54799,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][117] = 127,
[0][1][RTW89_UK][1][117] = 127,
[0][1][RTW89_UK][0][117] = 127,
+ [0][1][RTW89_THAILAND][1][117] = 127,
+ [0][1][RTW89_THAILAND][0][117] = 127,
[0][1][RTW89_FCC][1][119] = 127,
[0][1][RTW89_FCC][2][119] = 127,
[0][1][RTW89_ETSI][1][119] = 127,
@@ -52286,6 +54808,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_MKK][1][119] = 127,
[0][1][RTW89_MKK][0][119] = 127,
[0][1][RTW89_IC][1][119] = 127,
+ [0][1][RTW89_IC][2][119] = 127,
[0][1][RTW89_KCC][1][119] = 127,
[0][1][RTW89_KCC][0][119] = 127,
[0][1][RTW89_ACMA][1][119] = 127,
@@ -52295,6 +54818,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[0][1][RTW89_QATAR][0][119] = 127,
[0][1][RTW89_UK][1][119] = 127,
[0][1][RTW89_UK][0][119] = 127,
+ [0][1][RTW89_THAILAND][1][119] = 127,
+ [0][1][RTW89_THAILAND][0][119] = 127,
[1][0][RTW89_FCC][1][0] = -4,
[1][0][RTW89_FCC][2][0] = 52,
[1][0][RTW89_ETSI][1][0] = 46,
@@ -52302,6 +54827,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][0] = 42,
[1][0][RTW89_MKK][0][0] = 2,
[1][0][RTW89_IC][1][0] = -4,
+ [1][0][RTW89_IC][2][0] = 52,
[1][0][RTW89_KCC][1][0] = -2,
[1][0][RTW89_KCC][0][0] = -2,
[1][0][RTW89_ACMA][1][0] = 46,
@@ -52311,6 +54837,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][0] = 6,
[1][0][RTW89_UK][1][0] = 46,
[1][0][RTW89_UK][0][0] = 6,
+ [1][0][RTW89_THAILAND][1][0] = 42,
+ [1][0][RTW89_THAILAND][0][0] = -4,
[1][0][RTW89_FCC][1][2] = -4,
[1][0][RTW89_FCC][2][2] = 52,
[1][0][RTW89_ETSI][1][2] = 46,
@@ -52318,6 +54846,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][2] = 42,
[1][0][RTW89_MKK][0][2] = 2,
[1][0][RTW89_IC][1][2] = -4,
+ [1][0][RTW89_IC][2][2] = 52,
[1][0][RTW89_KCC][1][2] = -2,
[1][0][RTW89_KCC][0][2] = -2,
[1][0][RTW89_ACMA][1][2] = 46,
@@ -52327,6 +54856,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][2] = 6,
[1][0][RTW89_UK][1][2] = 46,
[1][0][RTW89_UK][0][2] = 6,
+ [1][0][RTW89_THAILAND][1][2] = 42,
+ [1][0][RTW89_THAILAND][0][2] = -4,
[1][0][RTW89_FCC][1][4] = -4,
[1][0][RTW89_FCC][2][4] = 52,
[1][0][RTW89_ETSI][1][4] = 46,
@@ -52334,6 +54865,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][4] = 42,
[1][0][RTW89_MKK][0][4] = 2,
[1][0][RTW89_IC][1][4] = -4,
+ [1][0][RTW89_IC][2][4] = 52,
[1][0][RTW89_KCC][1][4] = -2,
[1][0][RTW89_KCC][0][4] = -2,
[1][0][RTW89_ACMA][1][4] = 46,
@@ -52343,6 +54875,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][4] = 6,
[1][0][RTW89_UK][1][4] = 46,
[1][0][RTW89_UK][0][4] = 6,
+ [1][0][RTW89_THAILAND][1][4] = 42,
+ [1][0][RTW89_THAILAND][0][4] = -4,
[1][0][RTW89_FCC][1][6] = -4,
[1][0][RTW89_FCC][2][6] = 52,
[1][0][RTW89_ETSI][1][6] = 46,
@@ -52350,6 +54884,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][6] = 42,
[1][0][RTW89_MKK][0][6] = 2,
[1][0][RTW89_IC][1][6] = -4,
+ [1][0][RTW89_IC][2][6] = 52,
[1][0][RTW89_KCC][1][6] = -2,
[1][0][RTW89_KCC][0][6] = -2,
[1][0][RTW89_ACMA][1][6] = 46,
@@ -52359,6 +54894,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][6] = 6,
[1][0][RTW89_UK][1][6] = 46,
[1][0][RTW89_UK][0][6] = 6,
+ [1][0][RTW89_THAILAND][1][6] = 42,
+ [1][0][RTW89_THAILAND][0][6] = -4,
[1][0][RTW89_FCC][1][8] = -4,
[1][0][RTW89_FCC][2][8] = 52,
[1][0][RTW89_ETSI][1][8] = 46,
@@ -52366,6 +54903,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][8] = 42,
[1][0][RTW89_MKK][0][8] = 2,
[1][0][RTW89_IC][1][8] = -4,
+ [1][0][RTW89_IC][2][8] = 52,
[1][0][RTW89_KCC][1][8] = -2,
[1][0][RTW89_KCC][0][8] = -2,
[1][0][RTW89_ACMA][1][8] = 46,
@@ -52375,6 +54913,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][8] = 6,
[1][0][RTW89_UK][1][8] = 46,
[1][0][RTW89_UK][0][8] = 6,
+ [1][0][RTW89_THAILAND][1][8] = 42,
+ [1][0][RTW89_THAILAND][0][8] = -4,
[1][0][RTW89_FCC][1][10] = -4,
[1][0][RTW89_FCC][2][10] = 52,
[1][0][RTW89_ETSI][1][10] = 46,
@@ -52382,6 +54922,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][10] = 42,
[1][0][RTW89_MKK][0][10] = 2,
[1][0][RTW89_IC][1][10] = -4,
+ [1][0][RTW89_IC][2][10] = 52,
[1][0][RTW89_KCC][1][10] = -2,
[1][0][RTW89_KCC][0][10] = -2,
[1][0][RTW89_ACMA][1][10] = 46,
@@ -52391,6 +54932,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][10] = 6,
[1][0][RTW89_UK][1][10] = 46,
[1][0][RTW89_UK][0][10] = 6,
+ [1][0][RTW89_THAILAND][1][10] = 42,
+ [1][0][RTW89_THAILAND][0][10] = -4,
[1][0][RTW89_FCC][1][12] = -4,
[1][0][RTW89_FCC][2][12] = 52,
[1][0][RTW89_ETSI][1][12] = 46,
@@ -52398,6 +54941,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][12] = 42,
[1][0][RTW89_MKK][0][12] = 2,
[1][0][RTW89_IC][1][12] = -4,
+ [1][0][RTW89_IC][2][12] = 52,
[1][0][RTW89_KCC][1][12] = -2,
[1][0][RTW89_KCC][0][12] = -2,
[1][0][RTW89_ACMA][1][12] = 46,
@@ -52407,6 +54951,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][12] = 6,
[1][0][RTW89_UK][1][12] = 46,
[1][0][RTW89_UK][0][12] = 6,
+ [1][0][RTW89_THAILAND][1][12] = 42,
+ [1][0][RTW89_THAILAND][0][12] = -4,
[1][0][RTW89_FCC][1][14] = -4,
[1][0][RTW89_FCC][2][14] = 52,
[1][0][RTW89_ETSI][1][14] = 46,
@@ -52414,6 +54960,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][14] = 42,
[1][0][RTW89_MKK][0][14] = 2,
[1][0][RTW89_IC][1][14] = -4,
+ [1][0][RTW89_IC][2][14] = 52,
[1][0][RTW89_KCC][1][14] = -2,
[1][0][RTW89_KCC][0][14] = -2,
[1][0][RTW89_ACMA][1][14] = 46,
@@ -52423,6 +54970,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][14] = 6,
[1][0][RTW89_UK][1][14] = 46,
[1][0][RTW89_UK][0][14] = 6,
+ [1][0][RTW89_THAILAND][1][14] = 42,
+ [1][0][RTW89_THAILAND][0][14] = -4,
[1][0][RTW89_FCC][1][15] = -4,
[1][0][RTW89_FCC][2][15] = 52,
[1][0][RTW89_ETSI][1][15] = 46,
@@ -52430,6 +54979,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][15] = 42,
[1][0][RTW89_MKK][0][15] = 2,
[1][0][RTW89_IC][1][15] = -4,
+ [1][0][RTW89_IC][2][15] = 52,
[1][0][RTW89_KCC][1][15] = -2,
[1][0][RTW89_KCC][0][15] = -2,
[1][0][RTW89_ACMA][1][15] = 46,
@@ -52439,6 +54989,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][15] = 6,
[1][0][RTW89_UK][1][15] = 46,
[1][0][RTW89_UK][0][15] = 6,
+ [1][0][RTW89_THAILAND][1][15] = 42,
+ [1][0][RTW89_THAILAND][0][15] = -4,
[1][0][RTW89_FCC][1][17] = -4,
[1][0][RTW89_FCC][2][17] = 52,
[1][0][RTW89_ETSI][1][17] = 46,
@@ -52446,6 +54998,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][17] = 42,
[1][0][RTW89_MKK][0][17] = 2,
[1][0][RTW89_IC][1][17] = -4,
+ [1][0][RTW89_IC][2][17] = 52,
[1][0][RTW89_KCC][1][17] = -2,
[1][0][RTW89_KCC][0][17] = -2,
[1][0][RTW89_ACMA][1][17] = 46,
@@ -52455,6 +55008,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][17] = 6,
[1][0][RTW89_UK][1][17] = 46,
[1][0][RTW89_UK][0][17] = 6,
+ [1][0][RTW89_THAILAND][1][17] = 42,
+ [1][0][RTW89_THAILAND][0][17] = -4,
[1][0][RTW89_FCC][1][19] = -4,
[1][0][RTW89_FCC][2][19] = 52,
[1][0][RTW89_ETSI][1][19] = 46,
@@ -52462,6 +55017,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][19] = 42,
[1][0][RTW89_MKK][0][19] = 2,
[1][0][RTW89_IC][1][19] = -4,
+ [1][0][RTW89_IC][2][19] = 52,
[1][0][RTW89_KCC][1][19] = -2,
[1][0][RTW89_KCC][0][19] = -2,
[1][0][RTW89_ACMA][1][19] = 46,
@@ -52471,6 +55027,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][19] = 6,
[1][0][RTW89_UK][1][19] = 46,
[1][0][RTW89_UK][0][19] = 6,
+ [1][0][RTW89_THAILAND][1][19] = 42,
+ [1][0][RTW89_THAILAND][0][19] = -4,
[1][0][RTW89_FCC][1][21] = -4,
[1][0][RTW89_FCC][2][21] = 52,
[1][0][RTW89_ETSI][1][21] = 46,
@@ -52478,6 +55036,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][21] = 42,
[1][0][RTW89_MKK][0][21] = 2,
[1][0][RTW89_IC][1][21] = -4,
+ [1][0][RTW89_IC][2][21] = 52,
[1][0][RTW89_KCC][1][21] = -2,
[1][0][RTW89_KCC][0][21] = -2,
[1][0][RTW89_ACMA][1][21] = 46,
@@ -52487,6 +55046,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][21] = 6,
[1][0][RTW89_UK][1][21] = 46,
[1][0][RTW89_UK][0][21] = 6,
+ [1][0][RTW89_THAILAND][1][21] = 42,
+ [1][0][RTW89_THAILAND][0][21] = -4,
[1][0][RTW89_FCC][1][23] = -4,
[1][0][RTW89_FCC][2][23] = 66,
[1][0][RTW89_ETSI][1][23] = 46,
@@ -52494,6 +55055,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][23] = 42,
[1][0][RTW89_MKK][0][23] = 2,
[1][0][RTW89_IC][1][23] = -4,
+ [1][0][RTW89_IC][2][23] = 66,
[1][0][RTW89_KCC][1][23] = -2,
[1][0][RTW89_KCC][0][23] = -2,
[1][0][RTW89_ACMA][1][23] = 46,
@@ -52503,6 +55065,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][23] = 6,
[1][0][RTW89_UK][1][23] = 46,
[1][0][RTW89_UK][0][23] = 6,
+ [1][0][RTW89_THAILAND][1][23] = 42,
+ [1][0][RTW89_THAILAND][0][23] = -4,
[1][0][RTW89_FCC][1][25] = -4,
[1][0][RTW89_FCC][2][25] = 66,
[1][0][RTW89_ETSI][1][25] = 46,
@@ -52510,6 +55074,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][25] = 42,
[1][0][RTW89_MKK][0][25] = 2,
[1][0][RTW89_IC][1][25] = -4,
+ [1][0][RTW89_IC][2][25] = 66,
[1][0][RTW89_KCC][1][25] = -2,
[1][0][RTW89_KCC][0][25] = -2,
[1][0][RTW89_ACMA][1][25] = 46,
@@ -52519,6 +55084,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][25] = 6,
[1][0][RTW89_UK][1][25] = 46,
[1][0][RTW89_UK][0][25] = 6,
+ [1][0][RTW89_THAILAND][1][25] = 42,
+ [1][0][RTW89_THAILAND][0][25] = -4,
[1][0][RTW89_FCC][1][27] = -4,
[1][0][RTW89_FCC][2][27] = 66,
[1][0][RTW89_ETSI][1][27] = 46,
@@ -52526,6 +55093,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][27] = 42,
[1][0][RTW89_MKK][0][27] = 2,
[1][0][RTW89_IC][1][27] = -4,
+ [1][0][RTW89_IC][2][27] = 66,
[1][0][RTW89_KCC][1][27] = -2,
[1][0][RTW89_KCC][0][27] = -2,
[1][0][RTW89_ACMA][1][27] = 46,
@@ -52535,6 +55103,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][27] = 6,
[1][0][RTW89_UK][1][27] = 46,
[1][0][RTW89_UK][0][27] = 6,
+ [1][0][RTW89_THAILAND][1][27] = 42,
+ [1][0][RTW89_THAILAND][0][27] = -4,
[1][0][RTW89_FCC][1][29] = -4,
[1][0][RTW89_FCC][2][29] = 66,
[1][0][RTW89_ETSI][1][29] = 46,
@@ -52542,6 +55112,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][29] = 42,
[1][0][RTW89_MKK][0][29] = 2,
[1][0][RTW89_IC][1][29] = -4,
+ [1][0][RTW89_IC][2][29] = 66,
[1][0][RTW89_KCC][1][29] = -2,
[1][0][RTW89_KCC][0][29] = -2,
[1][0][RTW89_ACMA][1][29] = 46,
@@ -52551,6 +55122,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][29] = 6,
[1][0][RTW89_UK][1][29] = 46,
[1][0][RTW89_UK][0][29] = 6,
+ [1][0][RTW89_THAILAND][1][29] = 42,
+ [1][0][RTW89_THAILAND][0][29] = -4,
[1][0][RTW89_FCC][1][30] = -4,
[1][0][RTW89_FCC][2][30] = 66,
[1][0][RTW89_ETSI][1][30] = 46,
@@ -52558,6 +55131,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][30] = 42,
[1][0][RTW89_MKK][0][30] = 2,
[1][0][RTW89_IC][1][30] = -4,
+ [1][0][RTW89_IC][2][30] = 66,
[1][0][RTW89_KCC][1][30] = -2,
[1][0][RTW89_KCC][0][30] = -2,
[1][0][RTW89_ACMA][1][30] = 46,
@@ -52567,6 +55141,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][30] = 6,
[1][0][RTW89_UK][1][30] = 46,
[1][0][RTW89_UK][0][30] = 6,
+ [1][0][RTW89_THAILAND][1][30] = 42,
+ [1][0][RTW89_THAILAND][0][30] = -4,
[1][0][RTW89_FCC][1][32] = -4,
[1][0][RTW89_FCC][2][32] = 66,
[1][0][RTW89_ETSI][1][32] = 46,
@@ -52574,6 +55150,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][32] = 42,
[1][0][RTW89_MKK][0][32] = 2,
[1][0][RTW89_IC][1][32] = -4,
+ [1][0][RTW89_IC][2][32] = 66,
[1][0][RTW89_KCC][1][32] = -2,
[1][0][RTW89_KCC][0][32] = -2,
[1][0][RTW89_ACMA][1][32] = 46,
@@ -52583,6 +55160,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][32] = 6,
[1][0][RTW89_UK][1][32] = 46,
[1][0][RTW89_UK][0][32] = 6,
+ [1][0][RTW89_THAILAND][1][32] = 42,
+ [1][0][RTW89_THAILAND][0][32] = -4,
[1][0][RTW89_FCC][1][34] = -4,
[1][0][RTW89_FCC][2][34] = 66,
[1][0][RTW89_ETSI][1][34] = 46,
@@ -52590,6 +55169,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][34] = 42,
[1][0][RTW89_MKK][0][34] = 2,
[1][0][RTW89_IC][1][34] = -4,
+ [1][0][RTW89_IC][2][34] = 66,
[1][0][RTW89_KCC][1][34] = -2,
[1][0][RTW89_KCC][0][34] = -2,
[1][0][RTW89_ACMA][1][34] = 46,
@@ -52599,6 +55179,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][34] = 6,
[1][0][RTW89_UK][1][34] = 46,
[1][0][RTW89_UK][0][34] = 6,
+ [1][0][RTW89_THAILAND][1][34] = 42,
+ [1][0][RTW89_THAILAND][0][34] = -4,
[1][0][RTW89_FCC][1][36] = -4,
[1][0][RTW89_FCC][2][36] = 66,
[1][0][RTW89_ETSI][1][36] = 46,
@@ -52606,6 +55188,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][36] = 42,
[1][0][RTW89_MKK][0][36] = 2,
[1][0][RTW89_IC][1][36] = -4,
+ [1][0][RTW89_IC][2][36] = 66,
[1][0][RTW89_KCC][1][36] = -2,
[1][0][RTW89_KCC][0][36] = -2,
[1][0][RTW89_ACMA][1][36] = 46,
@@ -52615,6 +55198,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][36] = 6,
[1][0][RTW89_UK][1][36] = 46,
[1][0][RTW89_UK][0][36] = 6,
+ [1][0][RTW89_THAILAND][1][36] = 42,
+ [1][0][RTW89_THAILAND][0][36] = -4,
[1][0][RTW89_FCC][1][38] = -4,
[1][0][RTW89_FCC][2][38] = 66,
[1][0][RTW89_ETSI][1][38] = 46,
@@ -52622,6 +55207,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][38] = 42,
[1][0][RTW89_MKK][0][38] = 2,
[1][0][RTW89_IC][1][38] = -4,
+ [1][0][RTW89_IC][2][38] = 66,
[1][0][RTW89_KCC][1][38] = -2,
[1][0][RTW89_KCC][0][38] = -2,
[1][0][RTW89_ACMA][1][38] = 46,
@@ -52631,6 +55217,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][38] = 6,
[1][0][RTW89_UK][1][38] = 46,
[1][0][RTW89_UK][0][38] = 6,
+ [1][0][RTW89_THAILAND][1][38] = 42,
+ [1][0][RTW89_THAILAND][0][38] = -4,
[1][0][RTW89_FCC][1][40] = -4,
[1][0][RTW89_FCC][2][40] = 66,
[1][0][RTW89_ETSI][1][40] = 46,
@@ -52638,6 +55226,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][40] = 42,
[1][0][RTW89_MKK][0][40] = 2,
[1][0][RTW89_IC][1][40] = -4,
+ [1][0][RTW89_IC][2][40] = 66,
[1][0][RTW89_KCC][1][40] = -2,
[1][0][RTW89_KCC][0][40] = -2,
[1][0][RTW89_ACMA][1][40] = 46,
@@ -52647,6 +55236,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][40] = 6,
[1][0][RTW89_UK][1][40] = 46,
[1][0][RTW89_UK][0][40] = 6,
+ [1][0][RTW89_THAILAND][1][40] = 42,
+ [1][0][RTW89_THAILAND][0][40] = -4,
[1][0][RTW89_FCC][1][42] = -4,
[1][0][RTW89_FCC][2][42] = 66,
[1][0][RTW89_ETSI][1][42] = 46,
@@ -52654,6 +55245,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][42] = 42,
[1][0][RTW89_MKK][0][42] = 2,
[1][0][RTW89_IC][1][42] = -4,
+ [1][0][RTW89_IC][2][42] = 66,
[1][0][RTW89_KCC][1][42] = -2,
[1][0][RTW89_KCC][0][42] = -2,
[1][0][RTW89_ACMA][1][42] = 46,
@@ -52663,6 +55255,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][42] = 6,
[1][0][RTW89_UK][1][42] = 46,
[1][0][RTW89_UK][0][42] = 6,
+ [1][0][RTW89_THAILAND][1][42] = 42,
+ [1][0][RTW89_THAILAND][0][42] = -4,
[1][0][RTW89_FCC][1][44] = -4,
[1][0][RTW89_FCC][2][44] = 66,
[1][0][RTW89_ETSI][1][44] = 46,
@@ -52670,6 +55264,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][44] = 22,
[1][0][RTW89_MKK][0][44] = 4,
[1][0][RTW89_IC][1][44] = -4,
+ [1][0][RTW89_IC][2][44] = 66,
[1][0][RTW89_KCC][1][44] = -2,
[1][0][RTW89_KCC][0][44] = -2,
[1][0][RTW89_ACMA][1][44] = 46,
@@ -52679,6 +55274,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][44] = 8,
[1][0][RTW89_UK][1][44] = 46,
[1][0][RTW89_UK][0][44] = 8,
+ [1][0][RTW89_THAILAND][1][44] = 42,
+ [1][0][RTW89_THAILAND][0][44] = -4,
[1][0][RTW89_FCC][1][45] = -4,
[1][0][RTW89_FCC][2][45] = 127,
[1][0][RTW89_ETSI][1][45] = 127,
@@ -52686,6 +55283,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][45] = 127,
[1][0][RTW89_MKK][0][45] = 127,
[1][0][RTW89_IC][1][45] = -4,
+ [1][0][RTW89_IC][2][45] = 68,
[1][0][RTW89_KCC][1][45] = -2,
[1][0][RTW89_KCC][0][45] = 127,
[1][0][RTW89_ACMA][1][45] = 127,
@@ -52695,6 +55293,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][45] = 127,
[1][0][RTW89_UK][1][45] = 127,
[1][0][RTW89_UK][0][45] = 127,
+ [1][0][RTW89_THAILAND][1][45] = 127,
+ [1][0][RTW89_THAILAND][0][45] = 127,
[1][0][RTW89_FCC][1][47] = -4,
[1][0][RTW89_FCC][2][47] = 127,
[1][0][RTW89_ETSI][1][47] = 127,
@@ -52702,6 +55302,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][47] = 127,
[1][0][RTW89_MKK][0][47] = 127,
[1][0][RTW89_IC][1][47] = -4,
+ [1][0][RTW89_IC][2][47] = 68,
[1][0][RTW89_KCC][1][47] = -2,
[1][0][RTW89_KCC][0][47] = 127,
[1][0][RTW89_ACMA][1][47] = 127,
@@ -52711,6 +55312,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][47] = 127,
[1][0][RTW89_UK][1][47] = 127,
[1][0][RTW89_UK][0][47] = 127,
+ [1][0][RTW89_THAILAND][1][47] = 127,
+ [1][0][RTW89_THAILAND][0][47] = 127,
[1][0][RTW89_FCC][1][49] = -4,
[1][0][RTW89_FCC][2][49] = 127,
[1][0][RTW89_ETSI][1][49] = 127,
@@ -52718,6 +55321,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][49] = 127,
[1][0][RTW89_MKK][0][49] = 127,
[1][0][RTW89_IC][1][49] = -4,
+ [1][0][RTW89_IC][2][49] = 68,
[1][0][RTW89_KCC][1][49] = -2,
[1][0][RTW89_KCC][0][49] = 127,
[1][0][RTW89_ACMA][1][49] = 127,
@@ -52727,6 +55331,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][49] = 127,
[1][0][RTW89_UK][1][49] = 127,
[1][0][RTW89_UK][0][49] = 127,
+ [1][0][RTW89_THAILAND][1][49] = 127,
+ [1][0][RTW89_THAILAND][0][49] = 127,
[1][0][RTW89_FCC][1][51] = -4,
[1][0][RTW89_FCC][2][51] = 127,
[1][0][RTW89_ETSI][1][51] = 127,
@@ -52734,6 +55340,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][51] = 127,
[1][0][RTW89_MKK][0][51] = 127,
[1][0][RTW89_IC][1][51] = -4,
+ [1][0][RTW89_IC][2][51] = 68,
[1][0][RTW89_KCC][1][51] = -2,
[1][0][RTW89_KCC][0][51] = 127,
[1][0][RTW89_ACMA][1][51] = 127,
@@ -52743,6 +55350,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][51] = 127,
[1][0][RTW89_UK][1][51] = 127,
[1][0][RTW89_UK][0][51] = 127,
+ [1][0][RTW89_THAILAND][1][51] = 127,
+ [1][0][RTW89_THAILAND][0][51] = 127,
[1][0][RTW89_FCC][1][53] = -4,
[1][0][RTW89_FCC][2][53] = 127,
[1][0][RTW89_ETSI][1][53] = 127,
@@ -52750,6 +55359,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][53] = 127,
[1][0][RTW89_MKK][0][53] = 127,
[1][0][RTW89_IC][1][53] = -4,
+ [1][0][RTW89_IC][2][53] = 68,
[1][0][RTW89_KCC][1][53] = -2,
[1][0][RTW89_KCC][0][53] = 127,
[1][0][RTW89_ACMA][1][53] = 127,
@@ -52759,6 +55369,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][53] = 127,
[1][0][RTW89_UK][1][53] = 127,
[1][0][RTW89_UK][0][53] = 127,
+ [1][0][RTW89_THAILAND][1][53] = 127,
+ [1][0][RTW89_THAILAND][0][53] = 127,
[1][0][RTW89_FCC][1][55] = -4,
[1][0][RTW89_FCC][2][55] = 68,
[1][0][RTW89_ETSI][1][55] = 127,
@@ -52766,6 +55378,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][55] = 127,
[1][0][RTW89_MKK][0][55] = 127,
[1][0][RTW89_IC][1][55] = -4,
+ [1][0][RTW89_IC][2][55] = 68,
[1][0][RTW89_KCC][1][55] = -2,
[1][0][RTW89_KCC][0][55] = 127,
[1][0][RTW89_ACMA][1][55] = 127,
@@ -52775,6 +55388,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][55] = 127,
[1][0][RTW89_UK][1][55] = 127,
[1][0][RTW89_UK][0][55] = 127,
+ [1][0][RTW89_THAILAND][1][55] = 127,
+ [1][0][RTW89_THAILAND][0][55] = 127,
[1][0][RTW89_FCC][1][57] = -4,
[1][0][RTW89_FCC][2][57] = 68,
[1][0][RTW89_ETSI][1][57] = 127,
@@ -52782,6 +55397,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][57] = 127,
[1][0][RTW89_MKK][0][57] = 127,
[1][0][RTW89_IC][1][57] = -4,
+ [1][0][RTW89_IC][2][57] = 68,
[1][0][RTW89_KCC][1][57] = -2,
[1][0][RTW89_KCC][0][57] = 127,
[1][0][RTW89_ACMA][1][57] = 127,
@@ -52791,6 +55407,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][57] = 127,
[1][0][RTW89_UK][1][57] = 127,
[1][0][RTW89_UK][0][57] = 127,
+ [1][0][RTW89_THAILAND][1][57] = 127,
+ [1][0][RTW89_THAILAND][0][57] = 127,
[1][0][RTW89_FCC][1][59] = -4,
[1][0][RTW89_FCC][2][59] = 68,
[1][0][RTW89_ETSI][1][59] = 127,
@@ -52798,6 +55416,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][59] = 127,
[1][0][RTW89_MKK][0][59] = 127,
[1][0][RTW89_IC][1][59] = -4,
+ [1][0][RTW89_IC][2][59] = 68,
[1][0][RTW89_KCC][1][59] = -2,
[1][0][RTW89_KCC][0][59] = 127,
[1][0][RTW89_ACMA][1][59] = 127,
@@ -52807,6 +55426,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][59] = 127,
[1][0][RTW89_UK][1][59] = 127,
[1][0][RTW89_UK][0][59] = 127,
+ [1][0][RTW89_THAILAND][1][59] = 127,
+ [1][0][RTW89_THAILAND][0][59] = 127,
[1][0][RTW89_FCC][1][60] = -4,
[1][0][RTW89_FCC][2][60] = 68,
[1][0][RTW89_ETSI][1][60] = 127,
@@ -52814,6 +55435,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][60] = 127,
[1][0][RTW89_MKK][0][60] = 127,
[1][0][RTW89_IC][1][60] = -4,
+ [1][0][RTW89_IC][2][60] = 68,
[1][0][RTW89_KCC][1][60] = -2,
[1][0][RTW89_KCC][0][60] = 127,
[1][0][RTW89_ACMA][1][60] = 127,
@@ -52823,6 +55445,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][60] = 127,
[1][0][RTW89_UK][1][60] = 127,
[1][0][RTW89_UK][0][60] = 127,
+ [1][0][RTW89_THAILAND][1][60] = 127,
+ [1][0][RTW89_THAILAND][0][60] = 127,
[1][0][RTW89_FCC][1][62] = -4,
[1][0][RTW89_FCC][2][62] = 68,
[1][0][RTW89_ETSI][1][62] = 127,
@@ -52830,6 +55454,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][62] = 127,
[1][0][RTW89_MKK][0][62] = 127,
[1][0][RTW89_IC][1][62] = -4,
+ [1][0][RTW89_IC][2][62] = 68,
[1][0][RTW89_KCC][1][62] = -2,
[1][0][RTW89_KCC][0][62] = 127,
[1][0][RTW89_ACMA][1][62] = 127,
@@ -52839,6 +55464,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][62] = 127,
[1][0][RTW89_UK][1][62] = 127,
[1][0][RTW89_UK][0][62] = 127,
+ [1][0][RTW89_THAILAND][1][62] = 127,
+ [1][0][RTW89_THAILAND][0][62] = 127,
[1][0][RTW89_FCC][1][64] = -4,
[1][0][RTW89_FCC][2][64] = 68,
[1][0][RTW89_ETSI][1][64] = 127,
@@ -52846,6 +55473,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][64] = 127,
[1][0][RTW89_MKK][0][64] = 127,
[1][0][RTW89_IC][1][64] = -4,
+ [1][0][RTW89_IC][2][64] = 68,
[1][0][RTW89_KCC][1][64] = -2,
[1][0][RTW89_KCC][0][64] = 127,
[1][0][RTW89_ACMA][1][64] = 127,
@@ -52855,6 +55483,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][64] = 127,
[1][0][RTW89_UK][1][64] = 127,
[1][0][RTW89_UK][0][64] = 127,
+ [1][0][RTW89_THAILAND][1][64] = 127,
+ [1][0][RTW89_THAILAND][0][64] = 127,
[1][0][RTW89_FCC][1][66] = -4,
[1][0][RTW89_FCC][2][66] = 68,
[1][0][RTW89_ETSI][1][66] = 127,
@@ -52862,6 +55492,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][66] = 127,
[1][0][RTW89_MKK][0][66] = 127,
[1][0][RTW89_IC][1][66] = -4,
+ [1][0][RTW89_IC][2][66] = 68,
[1][0][RTW89_KCC][1][66] = -2,
[1][0][RTW89_KCC][0][66] = 127,
[1][0][RTW89_ACMA][1][66] = 127,
@@ -52871,6 +55502,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][66] = 127,
[1][0][RTW89_UK][1][66] = 127,
[1][0][RTW89_UK][0][66] = 127,
+ [1][0][RTW89_THAILAND][1][66] = 127,
+ [1][0][RTW89_THAILAND][0][66] = 127,
[1][0][RTW89_FCC][1][68] = -4,
[1][0][RTW89_FCC][2][68] = 68,
[1][0][RTW89_ETSI][1][68] = 127,
@@ -52878,6 +55511,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][68] = 127,
[1][0][RTW89_MKK][0][68] = 127,
[1][0][RTW89_IC][1][68] = -4,
+ [1][0][RTW89_IC][2][68] = 68,
[1][0][RTW89_KCC][1][68] = -2,
[1][0][RTW89_KCC][0][68] = 127,
[1][0][RTW89_ACMA][1][68] = 127,
@@ -52887,6 +55521,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][68] = 127,
[1][0][RTW89_UK][1][68] = 127,
[1][0][RTW89_UK][0][68] = 127,
+ [1][0][RTW89_THAILAND][1][68] = 127,
+ [1][0][RTW89_THAILAND][0][68] = 127,
[1][0][RTW89_FCC][1][70] = -4,
[1][0][RTW89_FCC][2][70] = 68,
[1][0][RTW89_ETSI][1][70] = 127,
@@ -52894,6 +55530,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][70] = 127,
[1][0][RTW89_MKK][0][70] = 127,
[1][0][RTW89_IC][1][70] = -4,
+ [1][0][RTW89_IC][2][70] = 68,
[1][0][RTW89_KCC][1][70] = -2,
[1][0][RTW89_KCC][0][70] = 127,
[1][0][RTW89_ACMA][1][70] = 127,
@@ -52903,6 +55540,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][70] = 127,
[1][0][RTW89_UK][1][70] = 127,
[1][0][RTW89_UK][0][70] = 127,
+ [1][0][RTW89_THAILAND][1][70] = 127,
+ [1][0][RTW89_THAILAND][0][70] = 127,
[1][0][RTW89_FCC][1][72] = -4,
[1][0][RTW89_FCC][2][72] = 68,
[1][0][RTW89_ETSI][1][72] = 127,
@@ -52910,6 +55549,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][72] = 127,
[1][0][RTW89_MKK][0][72] = 127,
[1][0][RTW89_IC][1][72] = -4,
+ [1][0][RTW89_IC][2][72] = 68,
[1][0][RTW89_KCC][1][72] = -2,
[1][0][RTW89_KCC][0][72] = 127,
[1][0][RTW89_ACMA][1][72] = 127,
@@ -52919,6 +55559,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][72] = 127,
[1][0][RTW89_UK][1][72] = 127,
[1][0][RTW89_UK][0][72] = 127,
+ [1][0][RTW89_THAILAND][1][72] = 127,
+ [1][0][RTW89_THAILAND][0][72] = 127,
[1][0][RTW89_FCC][1][74] = -4,
[1][0][RTW89_FCC][2][74] = 68,
[1][0][RTW89_ETSI][1][74] = 127,
@@ -52926,6 +55568,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][74] = 127,
[1][0][RTW89_MKK][0][74] = 127,
[1][0][RTW89_IC][1][74] = -4,
+ [1][0][RTW89_IC][2][74] = 68,
[1][0][RTW89_KCC][1][74] = -2,
[1][0][RTW89_KCC][0][74] = 127,
[1][0][RTW89_ACMA][1][74] = 127,
@@ -52935,6 +55578,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][74] = 127,
[1][0][RTW89_UK][1][74] = 127,
[1][0][RTW89_UK][0][74] = 127,
+ [1][0][RTW89_THAILAND][1][74] = 127,
+ [1][0][RTW89_THAILAND][0][74] = 127,
[1][0][RTW89_FCC][1][75] = -4,
[1][0][RTW89_FCC][2][75] = 68,
[1][0][RTW89_ETSI][1][75] = 127,
@@ -52942,6 +55587,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][75] = 127,
[1][0][RTW89_MKK][0][75] = 127,
[1][0][RTW89_IC][1][75] = -4,
+ [1][0][RTW89_IC][2][75] = 68,
[1][0][RTW89_KCC][1][75] = -2,
[1][0][RTW89_KCC][0][75] = 127,
[1][0][RTW89_ACMA][1][75] = 127,
@@ -52951,6 +55597,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][75] = 127,
[1][0][RTW89_UK][1][75] = 127,
[1][0][RTW89_UK][0][75] = 127,
+ [1][0][RTW89_THAILAND][1][75] = 127,
+ [1][0][RTW89_THAILAND][0][75] = 127,
[1][0][RTW89_FCC][1][77] = -4,
[1][0][RTW89_FCC][2][77] = 68,
[1][0][RTW89_ETSI][1][77] = 127,
@@ -52958,6 +55606,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][77] = 127,
[1][0][RTW89_MKK][0][77] = 127,
[1][0][RTW89_IC][1][77] = -4,
+ [1][0][RTW89_IC][2][77] = 68,
[1][0][RTW89_KCC][1][77] = -2,
[1][0][RTW89_KCC][0][77] = 127,
[1][0][RTW89_ACMA][1][77] = 127,
@@ -52967,6 +55616,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][77] = 127,
[1][0][RTW89_UK][1][77] = 127,
[1][0][RTW89_UK][0][77] = 127,
+ [1][0][RTW89_THAILAND][1][77] = 127,
+ [1][0][RTW89_THAILAND][0][77] = 127,
[1][0][RTW89_FCC][1][79] = -4,
[1][0][RTW89_FCC][2][79] = 68,
[1][0][RTW89_ETSI][1][79] = 127,
@@ -52974,6 +55625,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][79] = 127,
[1][0][RTW89_MKK][0][79] = 127,
[1][0][RTW89_IC][1][79] = -4,
+ [1][0][RTW89_IC][2][79] = 68,
[1][0][RTW89_KCC][1][79] = -2,
[1][0][RTW89_KCC][0][79] = 127,
[1][0][RTW89_ACMA][1][79] = 127,
@@ -52983,6 +55635,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][79] = 127,
[1][0][RTW89_UK][1][79] = 127,
[1][0][RTW89_UK][0][79] = 127,
+ [1][0][RTW89_THAILAND][1][79] = 127,
+ [1][0][RTW89_THAILAND][0][79] = 127,
[1][0][RTW89_FCC][1][81] = -4,
[1][0][RTW89_FCC][2][81] = 68,
[1][0][RTW89_ETSI][1][81] = 127,
@@ -52990,6 +55644,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][81] = 127,
[1][0][RTW89_MKK][0][81] = 127,
[1][0][RTW89_IC][1][81] = -4,
+ [1][0][RTW89_IC][2][81] = 68,
[1][0][RTW89_KCC][1][81] = -2,
[1][0][RTW89_KCC][0][81] = 127,
[1][0][RTW89_ACMA][1][81] = 127,
@@ -52999,6 +55654,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][81] = 127,
[1][0][RTW89_UK][1][81] = 127,
[1][0][RTW89_UK][0][81] = 127,
+ [1][0][RTW89_THAILAND][1][81] = 127,
+ [1][0][RTW89_THAILAND][0][81] = 127,
[1][0][RTW89_FCC][1][83] = -4,
[1][0][RTW89_FCC][2][83] = 68,
[1][0][RTW89_ETSI][1][83] = 127,
@@ -53006,6 +55663,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][83] = 127,
[1][0][RTW89_MKK][0][83] = 127,
[1][0][RTW89_IC][1][83] = -4,
+ [1][0][RTW89_IC][2][83] = 68,
[1][0][RTW89_KCC][1][83] = -2,
[1][0][RTW89_KCC][0][83] = 127,
[1][0][RTW89_ACMA][1][83] = 127,
@@ -53015,6 +55673,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][83] = 127,
[1][0][RTW89_UK][1][83] = 127,
[1][0][RTW89_UK][0][83] = 127,
+ [1][0][RTW89_THAILAND][1][83] = 127,
+ [1][0][RTW89_THAILAND][0][83] = 127,
[1][0][RTW89_FCC][1][85] = -4,
[1][0][RTW89_FCC][2][85] = 68,
[1][0][RTW89_ETSI][1][85] = 127,
@@ -53022,6 +55682,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][85] = 127,
[1][0][RTW89_MKK][0][85] = 127,
[1][0][RTW89_IC][1][85] = -4,
+ [1][0][RTW89_IC][2][85] = 68,
[1][0][RTW89_KCC][1][85] = -2,
[1][0][RTW89_KCC][0][85] = 127,
[1][0][RTW89_ACMA][1][85] = 127,
@@ -53031,6 +55692,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][85] = 127,
[1][0][RTW89_UK][1][85] = 127,
[1][0][RTW89_UK][0][85] = 127,
+ [1][0][RTW89_THAILAND][1][85] = 127,
+ [1][0][RTW89_THAILAND][0][85] = 127,
[1][0][RTW89_FCC][1][87] = -4,
[1][0][RTW89_FCC][2][87] = 127,
[1][0][RTW89_ETSI][1][87] = 127,
@@ -53038,6 +55701,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][87] = 127,
[1][0][RTW89_MKK][0][87] = 127,
[1][0][RTW89_IC][1][87] = -4,
+ [1][0][RTW89_IC][2][87] = 127,
[1][0][RTW89_KCC][1][87] = -2,
[1][0][RTW89_KCC][0][87] = 127,
[1][0][RTW89_ACMA][1][87] = 127,
@@ -53047,6 +55711,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][87] = 127,
[1][0][RTW89_UK][1][87] = 127,
[1][0][RTW89_UK][0][87] = 127,
+ [1][0][RTW89_THAILAND][1][87] = 127,
+ [1][0][RTW89_THAILAND][0][87] = 127,
[1][0][RTW89_FCC][1][89] = -4,
[1][0][RTW89_FCC][2][89] = 127,
[1][0][RTW89_ETSI][1][89] = 127,
@@ -53054,6 +55720,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][89] = 127,
[1][0][RTW89_MKK][0][89] = 127,
[1][0][RTW89_IC][1][89] = -4,
+ [1][0][RTW89_IC][2][89] = 127,
[1][0][RTW89_KCC][1][89] = -2,
[1][0][RTW89_KCC][0][89] = 127,
[1][0][RTW89_ACMA][1][89] = 127,
@@ -53063,6 +55730,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][89] = 127,
[1][0][RTW89_UK][1][89] = 127,
[1][0][RTW89_UK][0][89] = 127,
+ [1][0][RTW89_THAILAND][1][89] = 127,
+ [1][0][RTW89_THAILAND][0][89] = 127,
[1][0][RTW89_FCC][1][90] = -4,
[1][0][RTW89_FCC][2][90] = 127,
[1][0][RTW89_ETSI][1][90] = 127,
@@ -53070,6 +55739,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][90] = 127,
[1][0][RTW89_MKK][0][90] = 127,
[1][0][RTW89_IC][1][90] = -4,
+ [1][0][RTW89_IC][2][90] = 127,
[1][0][RTW89_KCC][1][90] = -2,
[1][0][RTW89_KCC][0][90] = 127,
[1][0][RTW89_ACMA][1][90] = 127,
@@ -53079,6 +55749,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][90] = 127,
[1][0][RTW89_UK][1][90] = 127,
[1][0][RTW89_UK][0][90] = 127,
+ [1][0][RTW89_THAILAND][1][90] = 127,
+ [1][0][RTW89_THAILAND][0][90] = 127,
[1][0][RTW89_FCC][1][92] = -4,
[1][0][RTW89_FCC][2][92] = 127,
[1][0][RTW89_ETSI][1][92] = 127,
@@ -53086,6 +55758,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][92] = 127,
[1][0][RTW89_MKK][0][92] = 127,
[1][0][RTW89_IC][1][92] = -4,
+ [1][0][RTW89_IC][2][92] = 127,
[1][0][RTW89_KCC][1][92] = -2,
[1][0][RTW89_KCC][0][92] = 127,
[1][0][RTW89_ACMA][1][92] = 127,
@@ -53095,6 +55768,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][92] = 127,
[1][0][RTW89_UK][1][92] = 127,
[1][0][RTW89_UK][0][92] = 127,
+ [1][0][RTW89_THAILAND][1][92] = 127,
+ [1][0][RTW89_THAILAND][0][92] = 127,
[1][0][RTW89_FCC][1][94] = -4,
[1][0][RTW89_FCC][2][94] = 127,
[1][0][RTW89_ETSI][1][94] = 127,
@@ -53102,6 +55777,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][94] = 127,
[1][0][RTW89_MKK][0][94] = 127,
[1][0][RTW89_IC][1][94] = -4,
+ [1][0][RTW89_IC][2][94] = 127,
[1][0][RTW89_KCC][1][94] = -2,
[1][0][RTW89_KCC][0][94] = 127,
[1][0][RTW89_ACMA][1][94] = 127,
@@ -53111,6 +55787,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][94] = 127,
[1][0][RTW89_UK][1][94] = 127,
[1][0][RTW89_UK][0][94] = 127,
+ [1][0][RTW89_THAILAND][1][94] = 127,
+ [1][0][RTW89_THAILAND][0][94] = 127,
[1][0][RTW89_FCC][1][96] = -4,
[1][0][RTW89_FCC][2][96] = 127,
[1][0][RTW89_ETSI][1][96] = 127,
@@ -53118,6 +55796,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][96] = 127,
[1][0][RTW89_MKK][0][96] = 127,
[1][0][RTW89_IC][1][96] = -4,
+ [1][0][RTW89_IC][2][96] = 127,
[1][0][RTW89_KCC][1][96] = -2,
[1][0][RTW89_KCC][0][96] = 127,
[1][0][RTW89_ACMA][1][96] = 127,
@@ -53127,6 +55806,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][96] = 127,
[1][0][RTW89_UK][1][96] = 127,
[1][0][RTW89_UK][0][96] = 127,
+ [1][0][RTW89_THAILAND][1][96] = 127,
+ [1][0][RTW89_THAILAND][0][96] = 127,
[1][0][RTW89_FCC][1][98] = -4,
[1][0][RTW89_FCC][2][98] = 127,
[1][0][RTW89_ETSI][1][98] = 127,
@@ -53134,6 +55815,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][98] = 127,
[1][0][RTW89_MKK][0][98] = 127,
[1][0][RTW89_IC][1][98] = -4,
+ [1][0][RTW89_IC][2][98] = 127,
[1][0][RTW89_KCC][1][98] = -2,
[1][0][RTW89_KCC][0][98] = 127,
[1][0][RTW89_ACMA][1][98] = 127,
@@ -53143,6 +55825,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][98] = 127,
[1][0][RTW89_UK][1][98] = 127,
[1][0][RTW89_UK][0][98] = 127,
+ [1][0][RTW89_THAILAND][1][98] = 127,
+ [1][0][RTW89_THAILAND][0][98] = 127,
[1][0][RTW89_FCC][1][100] = -4,
[1][0][RTW89_FCC][2][100] = 127,
[1][0][RTW89_ETSI][1][100] = 127,
@@ -53150,6 +55834,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][100] = 127,
[1][0][RTW89_MKK][0][100] = 127,
[1][0][RTW89_IC][1][100] = -4,
+ [1][0][RTW89_IC][2][100] = 127,
[1][0][RTW89_KCC][1][100] = -2,
[1][0][RTW89_KCC][0][100] = 127,
[1][0][RTW89_ACMA][1][100] = 127,
@@ -53159,6 +55844,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][100] = 127,
[1][0][RTW89_UK][1][100] = 127,
[1][0][RTW89_UK][0][100] = 127,
+ [1][0][RTW89_THAILAND][1][100] = 127,
+ [1][0][RTW89_THAILAND][0][100] = 127,
[1][0][RTW89_FCC][1][102] = -4,
[1][0][RTW89_FCC][2][102] = 127,
[1][0][RTW89_ETSI][1][102] = 127,
@@ -53166,6 +55853,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][102] = 127,
[1][0][RTW89_MKK][0][102] = 127,
[1][0][RTW89_IC][1][102] = -4,
+ [1][0][RTW89_IC][2][102] = 127,
[1][0][RTW89_KCC][1][102] = -2,
[1][0][RTW89_KCC][0][102] = 127,
[1][0][RTW89_ACMA][1][102] = 127,
@@ -53175,6 +55863,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][102] = 127,
[1][0][RTW89_UK][1][102] = 127,
[1][0][RTW89_UK][0][102] = 127,
+ [1][0][RTW89_THAILAND][1][102] = 127,
+ [1][0][RTW89_THAILAND][0][102] = 127,
[1][0][RTW89_FCC][1][104] = -4,
[1][0][RTW89_FCC][2][104] = 127,
[1][0][RTW89_ETSI][1][104] = 127,
@@ -53182,6 +55872,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][104] = 127,
[1][0][RTW89_MKK][0][104] = 127,
[1][0][RTW89_IC][1][104] = -4,
+ [1][0][RTW89_IC][2][104] = 127,
[1][0][RTW89_KCC][1][104] = -2,
[1][0][RTW89_KCC][0][104] = 127,
[1][0][RTW89_ACMA][1][104] = 127,
@@ -53191,6 +55882,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][104] = 127,
[1][0][RTW89_UK][1][104] = 127,
[1][0][RTW89_UK][0][104] = 127,
+ [1][0][RTW89_THAILAND][1][104] = 127,
+ [1][0][RTW89_THAILAND][0][104] = 127,
[1][0][RTW89_FCC][1][105] = -4,
[1][0][RTW89_FCC][2][105] = 127,
[1][0][RTW89_ETSI][1][105] = 127,
@@ -53198,6 +55891,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][105] = 127,
[1][0][RTW89_MKK][0][105] = 127,
[1][0][RTW89_IC][1][105] = -4,
+ [1][0][RTW89_IC][2][105] = 127,
[1][0][RTW89_KCC][1][105] = -2,
[1][0][RTW89_KCC][0][105] = 127,
[1][0][RTW89_ACMA][1][105] = 127,
@@ -53207,6 +55901,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][105] = 127,
[1][0][RTW89_UK][1][105] = 127,
[1][0][RTW89_UK][0][105] = 127,
+ [1][0][RTW89_THAILAND][1][105] = 127,
+ [1][0][RTW89_THAILAND][0][105] = 127,
[1][0][RTW89_FCC][1][107] = 1,
[1][0][RTW89_FCC][2][107] = 127,
[1][0][RTW89_ETSI][1][107] = 127,
@@ -53214,6 +55910,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][107] = 127,
[1][0][RTW89_MKK][0][107] = 127,
[1][0][RTW89_IC][1][107] = 1,
+ [1][0][RTW89_IC][2][107] = 127,
[1][0][RTW89_KCC][1][107] = -2,
[1][0][RTW89_KCC][0][107] = 127,
[1][0][RTW89_ACMA][1][107] = 127,
@@ -53223,6 +55920,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][107] = 127,
[1][0][RTW89_UK][1][107] = 127,
[1][0][RTW89_UK][0][107] = 127,
+ [1][0][RTW89_THAILAND][1][107] = 127,
+ [1][0][RTW89_THAILAND][0][107] = 127,
[1][0][RTW89_FCC][1][109] = 2,
[1][0][RTW89_FCC][2][109] = 127,
[1][0][RTW89_ETSI][1][109] = 127,
@@ -53230,6 +55929,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][109] = 127,
[1][0][RTW89_MKK][0][109] = 127,
[1][0][RTW89_IC][1][109] = 2,
+ [1][0][RTW89_IC][2][109] = 127,
[1][0][RTW89_KCC][1][109] = 127,
[1][0][RTW89_KCC][0][109] = 127,
[1][0][RTW89_ACMA][1][109] = 127,
@@ -53239,6 +55939,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][109] = 127,
[1][0][RTW89_UK][1][109] = 127,
[1][0][RTW89_UK][0][109] = 127,
+ [1][0][RTW89_THAILAND][1][109] = 127,
+ [1][0][RTW89_THAILAND][0][109] = 127,
[1][0][RTW89_FCC][1][111] = 127,
[1][0][RTW89_FCC][2][111] = 127,
[1][0][RTW89_ETSI][1][111] = 127,
@@ -53246,6 +55948,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][111] = 127,
[1][0][RTW89_MKK][0][111] = 127,
[1][0][RTW89_IC][1][111] = 127,
+ [1][0][RTW89_IC][2][111] = 127,
[1][0][RTW89_KCC][1][111] = 127,
[1][0][RTW89_KCC][0][111] = 127,
[1][0][RTW89_ACMA][1][111] = 127,
@@ -53255,6 +55958,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][111] = 127,
[1][0][RTW89_UK][1][111] = 127,
[1][0][RTW89_UK][0][111] = 127,
+ [1][0][RTW89_THAILAND][1][111] = 127,
+ [1][0][RTW89_THAILAND][0][111] = 127,
[1][0][RTW89_FCC][1][113] = 127,
[1][0][RTW89_FCC][2][113] = 127,
[1][0][RTW89_ETSI][1][113] = 127,
@@ -53262,6 +55967,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][113] = 127,
[1][0][RTW89_MKK][0][113] = 127,
[1][0][RTW89_IC][1][113] = 127,
+ [1][0][RTW89_IC][2][113] = 127,
[1][0][RTW89_KCC][1][113] = 127,
[1][0][RTW89_KCC][0][113] = 127,
[1][0][RTW89_ACMA][1][113] = 127,
@@ -53271,6 +55977,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][113] = 127,
[1][0][RTW89_UK][1][113] = 127,
[1][0][RTW89_UK][0][113] = 127,
+ [1][0][RTW89_THAILAND][1][113] = 127,
+ [1][0][RTW89_THAILAND][0][113] = 127,
[1][0][RTW89_FCC][1][115] = 127,
[1][0][RTW89_FCC][2][115] = 127,
[1][0][RTW89_ETSI][1][115] = 127,
@@ -53278,6 +55986,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][115] = 127,
[1][0][RTW89_MKK][0][115] = 127,
[1][0][RTW89_IC][1][115] = 127,
+ [1][0][RTW89_IC][2][115] = 127,
[1][0][RTW89_KCC][1][115] = 127,
[1][0][RTW89_KCC][0][115] = 127,
[1][0][RTW89_ACMA][1][115] = 127,
@@ -53287,6 +55996,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][115] = 127,
[1][0][RTW89_UK][1][115] = 127,
[1][0][RTW89_UK][0][115] = 127,
+ [1][0][RTW89_THAILAND][1][115] = 127,
+ [1][0][RTW89_THAILAND][0][115] = 127,
[1][0][RTW89_FCC][1][117] = 127,
[1][0][RTW89_FCC][2][117] = 127,
[1][0][RTW89_ETSI][1][117] = 127,
@@ -53294,6 +56005,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][117] = 127,
[1][0][RTW89_MKK][0][117] = 127,
[1][0][RTW89_IC][1][117] = 127,
+ [1][0][RTW89_IC][2][117] = 127,
[1][0][RTW89_KCC][1][117] = 127,
[1][0][RTW89_KCC][0][117] = 127,
[1][0][RTW89_ACMA][1][117] = 127,
@@ -53303,6 +56015,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][117] = 127,
[1][0][RTW89_UK][1][117] = 127,
[1][0][RTW89_UK][0][117] = 127,
+ [1][0][RTW89_THAILAND][1][117] = 127,
+ [1][0][RTW89_THAILAND][0][117] = 127,
[1][0][RTW89_FCC][1][119] = 127,
[1][0][RTW89_FCC][2][119] = 127,
[1][0][RTW89_ETSI][1][119] = 127,
@@ -53310,6 +56024,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_MKK][1][119] = 127,
[1][0][RTW89_MKK][0][119] = 127,
[1][0][RTW89_IC][1][119] = 127,
+ [1][0][RTW89_IC][2][119] = 127,
[1][0][RTW89_KCC][1][119] = 127,
[1][0][RTW89_KCC][0][119] = 127,
[1][0][RTW89_ACMA][1][119] = 127,
@@ -53319,6 +56034,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][0][RTW89_QATAR][0][119] = 127,
[1][0][RTW89_UK][1][119] = 127,
[1][0][RTW89_UK][0][119] = 127,
+ [1][0][RTW89_THAILAND][1][119] = 127,
+ [1][0][RTW89_THAILAND][0][119] = 127,
[1][1][RTW89_FCC][1][0] = -26,
[1][1][RTW89_FCC][2][0] = 44,
[1][1][RTW89_ETSI][1][0] = 32,
@@ -53326,6 +56043,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][0] = 30,
[1][1][RTW89_MKK][0][0] = -10,
[1][1][RTW89_IC][1][0] = -26,
+ [1][1][RTW89_IC][2][0] = 44,
[1][1][RTW89_KCC][1][0] = -14,
[1][1][RTW89_KCC][0][0] = -14,
[1][1][RTW89_ACMA][1][0] = 32,
@@ -53335,6 +56053,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][0] = -6,
[1][1][RTW89_UK][1][0] = 32,
[1][1][RTW89_UK][0][0] = -6,
+ [1][1][RTW89_THAILAND][1][0] = 18,
+ [1][1][RTW89_THAILAND][0][0] = -26,
[1][1][RTW89_FCC][1][2] = -28,
[1][1][RTW89_FCC][2][2] = 44,
[1][1][RTW89_ETSI][1][2] = 32,
@@ -53342,6 +56062,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][2] = 30,
[1][1][RTW89_MKK][0][2] = -10,
[1][1][RTW89_IC][1][2] = -28,
+ [1][1][RTW89_IC][2][2] = 44,
[1][1][RTW89_KCC][1][2] = -14,
[1][1][RTW89_KCC][0][2] = -14,
[1][1][RTW89_ACMA][1][2] = 32,
@@ -53351,6 +56072,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][2] = -6,
[1][1][RTW89_UK][1][2] = 32,
[1][1][RTW89_UK][0][2] = -6,
+ [1][1][RTW89_THAILAND][1][2] = 18,
+ [1][1][RTW89_THAILAND][0][2] = -28,
[1][1][RTW89_FCC][1][4] = -28,
[1][1][RTW89_FCC][2][4] = 44,
[1][1][RTW89_ETSI][1][4] = 32,
@@ -53358,6 +56081,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][4] = 30,
[1][1][RTW89_MKK][0][4] = -10,
[1][1][RTW89_IC][1][4] = -28,
+ [1][1][RTW89_IC][2][4] = 44,
[1][1][RTW89_KCC][1][4] = -14,
[1][1][RTW89_KCC][0][4] = -14,
[1][1][RTW89_ACMA][1][4] = 32,
@@ -53367,6 +56091,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][4] = -6,
[1][1][RTW89_UK][1][4] = 32,
[1][1][RTW89_UK][0][4] = -6,
+ [1][1][RTW89_THAILAND][1][4] = 18,
+ [1][1][RTW89_THAILAND][0][4] = -28,
[1][1][RTW89_FCC][1][6] = -28,
[1][1][RTW89_FCC][2][6] = 44,
[1][1][RTW89_ETSI][1][6] = 32,
@@ -53374,6 +56100,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][6] = 30,
[1][1][RTW89_MKK][0][6] = -10,
[1][1][RTW89_IC][1][6] = -28,
+ [1][1][RTW89_IC][2][6] = 44,
[1][1][RTW89_KCC][1][6] = -14,
[1][1][RTW89_KCC][0][6] = -14,
[1][1][RTW89_ACMA][1][6] = 32,
@@ -53383,6 +56110,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][6] = -6,
[1][1][RTW89_UK][1][6] = 32,
[1][1][RTW89_UK][0][6] = -6,
+ [1][1][RTW89_THAILAND][1][6] = 18,
+ [1][1][RTW89_THAILAND][0][6] = -28,
[1][1][RTW89_FCC][1][8] = -28,
[1][1][RTW89_FCC][2][8] = 44,
[1][1][RTW89_ETSI][1][8] = 32,
@@ -53390,6 +56119,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][8] = 30,
[1][1][RTW89_MKK][0][8] = -10,
[1][1][RTW89_IC][1][8] = -28,
+ [1][1][RTW89_IC][2][8] = 44,
[1][1][RTW89_KCC][1][8] = -14,
[1][1][RTW89_KCC][0][8] = -14,
[1][1][RTW89_ACMA][1][8] = 32,
@@ -53399,6 +56129,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][8] = -6,
[1][1][RTW89_UK][1][8] = 32,
[1][1][RTW89_UK][0][8] = -6,
+ [1][1][RTW89_THAILAND][1][8] = 18,
+ [1][1][RTW89_THAILAND][0][8] = -28,
[1][1][RTW89_FCC][1][10] = -28,
[1][1][RTW89_FCC][2][10] = 44,
[1][1][RTW89_ETSI][1][10] = 32,
@@ -53406,6 +56138,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][10] = 30,
[1][1][RTW89_MKK][0][10] = -10,
[1][1][RTW89_IC][1][10] = -28,
+ [1][1][RTW89_IC][2][10] = 44,
[1][1][RTW89_KCC][1][10] = -14,
[1][1][RTW89_KCC][0][10] = -14,
[1][1][RTW89_ACMA][1][10] = 32,
@@ -53415,6 +56148,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][10] = -6,
[1][1][RTW89_UK][1][10] = 32,
[1][1][RTW89_UK][0][10] = -6,
+ [1][1][RTW89_THAILAND][1][10] = 18,
+ [1][1][RTW89_THAILAND][0][10] = -28,
[1][1][RTW89_FCC][1][12] = -28,
[1][1][RTW89_FCC][2][12] = 44,
[1][1][RTW89_ETSI][1][12] = 32,
@@ -53422,6 +56157,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][12] = 30,
[1][1][RTW89_MKK][0][12] = -10,
[1][1][RTW89_IC][1][12] = -28,
+ [1][1][RTW89_IC][2][12] = 44,
[1][1][RTW89_KCC][1][12] = -14,
[1][1][RTW89_KCC][0][12] = -14,
[1][1][RTW89_ACMA][1][12] = 32,
@@ -53431,6 +56167,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][12] = -6,
[1][1][RTW89_UK][1][12] = 32,
[1][1][RTW89_UK][0][12] = -6,
+ [1][1][RTW89_THAILAND][1][12] = 18,
+ [1][1][RTW89_THAILAND][0][12] = -28,
[1][1][RTW89_FCC][1][14] = -28,
[1][1][RTW89_FCC][2][14] = 44,
[1][1][RTW89_ETSI][1][14] = 32,
@@ -53438,6 +56176,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][14] = 30,
[1][1][RTW89_MKK][0][14] = -10,
[1][1][RTW89_IC][1][14] = -28,
+ [1][1][RTW89_IC][2][14] = 44,
[1][1][RTW89_KCC][1][14] = -14,
[1][1][RTW89_KCC][0][14] = -14,
[1][1][RTW89_ACMA][1][14] = 32,
@@ -53447,6 +56186,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][14] = -6,
[1][1][RTW89_UK][1][14] = 32,
[1][1][RTW89_UK][0][14] = -6,
+ [1][1][RTW89_THAILAND][1][14] = 18,
+ [1][1][RTW89_THAILAND][0][14] = -28,
[1][1][RTW89_FCC][1][15] = -28,
[1][1][RTW89_FCC][2][15] = 44,
[1][1][RTW89_ETSI][1][15] = 32,
@@ -53454,6 +56195,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][15] = 30,
[1][1][RTW89_MKK][0][15] = -10,
[1][1][RTW89_IC][1][15] = -28,
+ [1][1][RTW89_IC][2][15] = 44,
[1][1][RTW89_KCC][1][15] = -14,
[1][1][RTW89_KCC][0][15] = -14,
[1][1][RTW89_ACMA][1][15] = 32,
@@ -53463,6 +56205,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][15] = -6,
[1][1][RTW89_UK][1][15] = 32,
[1][1][RTW89_UK][0][15] = -6,
+ [1][1][RTW89_THAILAND][1][15] = 18,
+ [1][1][RTW89_THAILAND][0][15] = -28,
[1][1][RTW89_FCC][1][17] = -28,
[1][1][RTW89_FCC][2][17] = 44,
[1][1][RTW89_ETSI][1][17] = 32,
@@ -53470,6 +56214,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][17] = 30,
[1][1][RTW89_MKK][0][17] = -10,
[1][1][RTW89_IC][1][17] = -28,
+ [1][1][RTW89_IC][2][17] = 44,
[1][1][RTW89_KCC][1][17] = -14,
[1][1][RTW89_KCC][0][17] = -14,
[1][1][RTW89_ACMA][1][17] = 32,
@@ -53479,6 +56224,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][17] = -6,
[1][1][RTW89_UK][1][17] = 32,
[1][1][RTW89_UK][0][17] = -6,
+ [1][1][RTW89_THAILAND][1][17] = 18,
+ [1][1][RTW89_THAILAND][0][17] = -28,
[1][1][RTW89_FCC][1][19] = -28,
[1][1][RTW89_FCC][2][19] = 44,
[1][1][RTW89_ETSI][1][19] = 32,
@@ -53486,6 +56233,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][19] = 30,
[1][1][RTW89_MKK][0][19] = -10,
[1][1][RTW89_IC][1][19] = -28,
+ [1][1][RTW89_IC][2][19] = 44,
[1][1][RTW89_KCC][1][19] = -14,
[1][1][RTW89_KCC][0][19] = -14,
[1][1][RTW89_ACMA][1][19] = 32,
@@ -53495,6 +56243,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][19] = -6,
[1][1][RTW89_UK][1][19] = 32,
[1][1][RTW89_UK][0][19] = -6,
+ [1][1][RTW89_THAILAND][1][19] = 18,
+ [1][1][RTW89_THAILAND][0][19] = -28,
[1][1][RTW89_FCC][1][21] = -28,
[1][1][RTW89_FCC][2][21] = 44,
[1][1][RTW89_ETSI][1][21] = 32,
@@ -53502,6 +56252,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][21] = 30,
[1][1][RTW89_MKK][0][21] = -10,
[1][1][RTW89_IC][1][21] = -28,
+ [1][1][RTW89_IC][2][21] = 44,
[1][1][RTW89_KCC][1][21] = -14,
[1][1][RTW89_KCC][0][21] = -14,
[1][1][RTW89_ACMA][1][21] = 32,
@@ -53511,6 +56262,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][21] = -6,
[1][1][RTW89_UK][1][21] = 32,
[1][1][RTW89_UK][0][21] = -6,
+ [1][1][RTW89_THAILAND][1][21] = 18,
+ [1][1][RTW89_THAILAND][0][21] = -28,
[1][1][RTW89_FCC][1][23] = -28,
[1][1][RTW89_FCC][2][23] = 44,
[1][1][RTW89_ETSI][1][23] = 32,
@@ -53518,6 +56271,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][23] = 32,
[1][1][RTW89_MKK][0][23] = -10,
[1][1][RTW89_IC][1][23] = -28,
+ [1][1][RTW89_IC][2][23] = 44,
[1][1][RTW89_KCC][1][23] = -14,
[1][1][RTW89_KCC][0][23] = -14,
[1][1][RTW89_ACMA][1][23] = 32,
@@ -53527,6 +56281,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][23] = -6,
[1][1][RTW89_UK][1][23] = 32,
[1][1][RTW89_UK][0][23] = -6,
+ [1][1][RTW89_THAILAND][1][23] = 18,
+ [1][1][RTW89_THAILAND][0][23] = -28,
[1][1][RTW89_FCC][1][25] = -28,
[1][1][RTW89_FCC][2][25] = 44,
[1][1][RTW89_ETSI][1][25] = 32,
@@ -53534,6 +56290,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][25] = 32,
[1][1][RTW89_MKK][0][25] = -10,
[1][1][RTW89_IC][1][25] = -28,
+ [1][1][RTW89_IC][2][25] = 44,
[1][1][RTW89_KCC][1][25] = -14,
[1][1][RTW89_KCC][0][25] = -14,
[1][1][RTW89_ACMA][1][25] = 32,
@@ -53543,6 +56300,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][25] = -6,
[1][1][RTW89_UK][1][25] = 32,
[1][1][RTW89_UK][0][25] = -6,
+ [1][1][RTW89_THAILAND][1][25] = 18,
+ [1][1][RTW89_THAILAND][0][25] = -28,
[1][1][RTW89_FCC][1][27] = -28,
[1][1][RTW89_FCC][2][27] = 44,
[1][1][RTW89_ETSI][1][27] = 32,
@@ -53550,6 +56309,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][27] = 32,
[1][1][RTW89_MKK][0][27] = -10,
[1][1][RTW89_IC][1][27] = -28,
+ [1][1][RTW89_IC][2][27] = 44,
[1][1][RTW89_KCC][1][27] = -14,
[1][1][RTW89_KCC][0][27] = -14,
[1][1][RTW89_ACMA][1][27] = 32,
@@ -53559,6 +56319,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][27] = -6,
[1][1][RTW89_UK][1][27] = 32,
[1][1][RTW89_UK][0][27] = -6,
+ [1][1][RTW89_THAILAND][1][27] = 18,
+ [1][1][RTW89_THAILAND][0][27] = -28,
[1][1][RTW89_FCC][1][29] = -28,
[1][1][RTW89_FCC][2][29] = 44,
[1][1][RTW89_ETSI][1][29] = 32,
@@ -53566,6 +56328,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][29] = 32,
[1][1][RTW89_MKK][0][29] = -10,
[1][1][RTW89_IC][1][29] = -28,
+ [1][1][RTW89_IC][2][29] = 44,
[1][1][RTW89_KCC][1][29] = -14,
[1][1][RTW89_KCC][0][29] = -14,
[1][1][RTW89_ACMA][1][29] = 32,
@@ -53575,6 +56338,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][29] = -6,
[1][1][RTW89_UK][1][29] = 32,
[1][1][RTW89_UK][0][29] = -6,
+ [1][1][RTW89_THAILAND][1][29] = 18,
+ [1][1][RTW89_THAILAND][0][29] = -28,
[1][1][RTW89_FCC][1][30] = -28,
[1][1][RTW89_FCC][2][30] = 44,
[1][1][RTW89_ETSI][1][30] = 32,
@@ -53582,6 +56347,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][30] = 32,
[1][1][RTW89_MKK][0][30] = -10,
[1][1][RTW89_IC][1][30] = -28,
+ [1][1][RTW89_IC][2][30] = 44,
[1][1][RTW89_KCC][1][30] = -14,
[1][1][RTW89_KCC][0][30] = -14,
[1][1][RTW89_ACMA][1][30] = 32,
@@ -53591,6 +56357,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][30] = -6,
[1][1][RTW89_UK][1][30] = 32,
[1][1][RTW89_UK][0][30] = -6,
+ [1][1][RTW89_THAILAND][1][30] = 18,
+ [1][1][RTW89_THAILAND][0][30] = -28,
[1][1][RTW89_FCC][1][32] = -28,
[1][1][RTW89_FCC][2][32] = 44,
[1][1][RTW89_ETSI][1][32] = 32,
@@ -53598,6 +56366,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][32] = 32,
[1][1][RTW89_MKK][0][32] = -10,
[1][1][RTW89_IC][1][32] = -28,
+ [1][1][RTW89_IC][2][32] = 44,
[1][1][RTW89_KCC][1][32] = -14,
[1][1][RTW89_KCC][0][32] = -14,
[1][1][RTW89_ACMA][1][32] = 32,
@@ -53607,6 +56376,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][32] = -6,
[1][1][RTW89_UK][1][32] = 32,
[1][1][RTW89_UK][0][32] = -6,
+ [1][1][RTW89_THAILAND][1][32] = 18,
+ [1][1][RTW89_THAILAND][0][32] = -28,
[1][1][RTW89_FCC][1][34] = -28,
[1][1][RTW89_FCC][2][34] = 44,
[1][1][RTW89_ETSI][1][34] = 32,
@@ -53614,6 +56385,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][34] = 32,
[1][1][RTW89_MKK][0][34] = -10,
[1][1][RTW89_IC][1][34] = -28,
+ [1][1][RTW89_IC][2][34] = 44,
[1][1][RTW89_KCC][1][34] = -14,
[1][1][RTW89_KCC][0][34] = -14,
[1][1][RTW89_ACMA][1][34] = 32,
@@ -53623,6 +56395,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][34] = -6,
[1][1][RTW89_UK][1][34] = 32,
[1][1][RTW89_UK][0][34] = -6,
+ [1][1][RTW89_THAILAND][1][34] = 18,
+ [1][1][RTW89_THAILAND][0][34] = -28,
[1][1][RTW89_FCC][1][36] = -28,
[1][1][RTW89_FCC][2][36] = 44,
[1][1][RTW89_ETSI][1][36] = 32,
@@ -53630,6 +56404,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][36] = 32,
[1][1][RTW89_MKK][0][36] = -10,
[1][1][RTW89_IC][1][36] = -28,
+ [1][1][RTW89_IC][2][36] = 44,
[1][1][RTW89_KCC][1][36] = -14,
[1][1][RTW89_KCC][0][36] = -14,
[1][1][RTW89_ACMA][1][36] = 32,
@@ -53639,6 +56414,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][36] = -6,
[1][1][RTW89_UK][1][36] = 32,
[1][1][RTW89_UK][0][36] = -6,
+ [1][1][RTW89_THAILAND][1][36] = 18,
+ [1][1][RTW89_THAILAND][0][36] = -28,
[1][1][RTW89_FCC][1][38] = -28,
[1][1][RTW89_FCC][2][38] = 44,
[1][1][RTW89_ETSI][1][38] = 32,
@@ -53646,6 +56423,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][38] = 32,
[1][1][RTW89_MKK][0][38] = -10,
[1][1][RTW89_IC][1][38] = -28,
+ [1][1][RTW89_IC][2][38] = 44,
[1][1][RTW89_KCC][1][38] = -14,
[1][1][RTW89_KCC][0][38] = -14,
[1][1][RTW89_ACMA][1][38] = 32,
@@ -53655,6 +56433,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][38] = -6,
[1][1][RTW89_UK][1][38] = 32,
[1][1][RTW89_UK][0][38] = -6,
+ [1][1][RTW89_THAILAND][1][38] = 18,
+ [1][1][RTW89_THAILAND][0][38] = -28,
[1][1][RTW89_FCC][1][40] = -28,
[1][1][RTW89_FCC][2][40] = 44,
[1][1][RTW89_ETSI][1][40] = 32,
@@ -53662,6 +56442,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][40] = 32,
[1][1][RTW89_MKK][0][40] = -10,
[1][1][RTW89_IC][1][40] = -28,
+ [1][1][RTW89_IC][2][40] = 44,
[1][1][RTW89_KCC][1][40] = -14,
[1][1][RTW89_KCC][0][40] = -14,
[1][1][RTW89_ACMA][1][40] = 32,
@@ -53671,6 +56452,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][40] = -6,
[1][1][RTW89_UK][1][40] = 32,
[1][1][RTW89_UK][0][40] = -6,
+ [1][1][RTW89_THAILAND][1][40] = 18,
+ [1][1][RTW89_THAILAND][0][40] = -28,
[1][1][RTW89_FCC][1][42] = -28,
[1][1][RTW89_FCC][2][42] = 44,
[1][1][RTW89_ETSI][1][42] = 32,
@@ -53678,6 +56461,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][42] = 32,
[1][1][RTW89_MKK][0][42] = -10,
[1][1][RTW89_IC][1][42] = -28,
+ [1][1][RTW89_IC][2][42] = 44,
[1][1][RTW89_KCC][1][42] = -14,
[1][1][RTW89_KCC][0][42] = -14,
[1][1][RTW89_ACMA][1][42] = 32,
@@ -53687,6 +56471,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][42] = -6,
[1][1][RTW89_UK][1][42] = 32,
[1][1][RTW89_UK][0][42] = -6,
+ [1][1][RTW89_THAILAND][1][42] = 18,
+ [1][1][RTW89_THAILAND][0][42] = -28,
[1][1][RTW89_FCC][1][44] = -28,
[1][1][RTW89_FCC][2][44] = 44,
[1][1][RTW89_ETSI][1][44] = 34,
@@ -53694,6 +56480,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][44] = 4,
[1][1][RTW89_MKK][0][44] = -8,
[1][1][RTW89_IC][1][44] = -28,
+ [1][1][RTW89_IC][2][44] = 44,
[1][1][RTW89_KCC][1][44] = -14,
[1][1][RTW89_KCC][0][44] = -14,
[1][1][RTW89_ACMA][1][44] = 34,
@@ -53703,6 +56490,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][44] = -4,
[1][1][RTW89_UK][1][44] = 34,
[1][1][RTW89_UK][0][44] = -4,
+ [1][1][RTW89_THAILAND][1][44] = 18,
+ [1][1][RTW89_THAILAND][0][44] = -28,
[1][1][RTW89_FCC][1][45] = -26,
[1][1][RTW89_FCC][2][45] = 127,
[1][1][RTW89_ETSI][1][45] = 127,
@@ -53710,6 +56499,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][45] = 127,
[1][1][RTW89_MKK][0][45] = 127,
[1][1][RTW89_IC][1][45] = -26,
+ [1][1][RTW89_IC][2][45] = 44,
[1][1][RTW89_KCC][1][45] = -14,
[1][1][RTW89_KCC][0][45] = 127,
[1][1][RTW89_ACMA][1][45] = 127,
@@ -53719,6 +56509,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][45] = 127,
[1][1][RTW89_UK][1][45] = 127,
[1][1][RTW89_UK][0][45] = 127,
+ [1][1][RTW89_THAILAND][1][45] = 127,
+ [1][1][RTW89_THAILAND][0][45] = 127,
[1][1][RTW89_FCC][1][47] = -28,
[1][1][RTW89_FCC][2][47] = 127,
[1][1][RTW89_ETSI][1][47] = 127,
@@ -53726,6 +56518,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][47] = 127,
[1][1][RTW89_MKK][0][47] = 127,
[1][1][RTW89_IC][1][47] = -28,
+ [1][1][RTW89_IC][2][47] = 44,
[1][1][RTW89_KCC][1][47] = -14,
[1][1][RTW89_KCC][0][47] = 127,
[1][1][RTW89_ACMA][1][47] = 127,
@@ -53735,6 +56528,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][47] = 127,
[1][1][RTW89_UK][1][47] = 127,
[1][1][RTW89_UK][0][47] = 127,
+ [1][1][RTW89_THAILAND][1][47] = 127,
+ [1][1][RTW89_THAILAND][0][47] = 127,
[1][1][RTW89_FCC][1][49] = -28,
[1][1][RTW89_FCC][2][49] = 127,
[1][1][RTW89_ETSI][1][49] = 127,
@@ -53742,6 +56537,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][49] = 127,
[1][1][RTW89_MKK][0][49] = 127,
[1][1][RTW89_IC][1][49] = -28,
+ [1][1][RTW89_IC][2][49] = 44,
[1][1][RTW89_KCC][1][49] = -14,
[1][1][RTW89_KCC][0][49] = 127,
[1][1][RTW89_ACMA][1][49] = 127,
@@ -53751,6 +56547,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][49] = 127,
[1][1][RTW89_UK][1][49] = 127,
[1][1][RTW89_UK][0][49] = 127,
+ [1][1][RTW89_THAILAND][1][49] = 127,
+ [1][1][RTW89_THAILAND][0][49] = 127,
[1][1][RTW89_FCC][1][51] = -28,
[1][1][RTW89_FCC][2][51] = 127,
[1][1][RTW89_ETSI][1][51] = 127,
@@ -53758,6 +56556,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][51] = 127,
[1][1][RTW89_MKK][0][51] = 127,
[1][1][RTW89_IC][1][51] = -28,
+ [1][1][RTW89_IC][2][51] = 44,
[1][1][RTW89_KCC][1][51] = -14,
[1][1][RTW89_KCC][0][51] = 127,
[1][1][RTW89_ACMA][1][51] = 127,
@@ -53767,6 +56566,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][51] = 127,
[1][1][RTW89_UK][1][51] = 127,
[1][1][RTW89_UK][0][51] = 127,
+ [1][1][RTW89_THAILAND][1][51] = 127,
+ [1][1][RTW89_THAILAND][0][51] = 127,
[1][1][RTW89_FCC][1][53] = -26,
[1][1][RTW89_FCC][2][53] = 127,
[1][1][RTW89_ETSI][1][53] = 127,
@@ -53774,6 +56575,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][53] = 127,
[1][1][RTW89_MKK][0][53] = 127,
[1][1][RTW89_IC][1][53] = -26,
+ [1][1][RTW89_IC][2][53] = 44,
[1][1][RTW89_KCC][1][53] = -14,
[1][1][RTW89_KCC][0][53] = 127,
[1][1][RTW89_ACMA][1][53] = 127,
@@ -53783,6 +56585,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][53] = 127,
[1][1][RTW89_UK][1][53] = 127,
[1][1][RTW89_UK][0][53] = 127,
+ [1][1][RTW89_THAILAND][1][53] = 127,
+ [1][1][RTW89_THAILAND][0][53] = 127,
[1][1][RTW89_FCC][1][55] = -28,
[1][1][RTW89_FCC][2][55] = 44,
[1][1][RTW89_ETSI][1][55] = 127,
@@ -53790,6 +56594,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][55] = 127,
[1][1][RTW89_MKK][0][55] = 127,
[1][1][RTW89_IC][1][55] = -28,
+ [1][1][RTW89_IC][2][55] = 44,
[1][1][RTW89_KCC][1][55] = -14,
[1][1][RTW89_KCC][0][55] = 127,
[1][1][RTW89_ACMA][1][55] = 127,
@@ -53799,6 +56604,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][55] = 127,
[1][1][RTW89_UK][1][55] = 127,
[1][1][RTW89_UK][0][55] = 127,
+ [1][1][RTW89_THAILAND][1][55] = 127,
+ [1][1][RTW89_THAILAND][0][55] = 127,
[1][1][RTW89_FCC][1][57] = -28,
[1][1][RTW89_FCC][2][57] = 44,
[1][1][RTW89_ETSI][1][57] = 127,
@@ -53806,6 +56613,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][57] = 127,
[1][1][RTW89_MKK][0][57] = 127,
[1][1][RTW89_IC][1][57] = -28,
+ [1][1][RTW89_IC][2][57] = 44,
[1][1][RTW89_KCC][1][57] = -14,
[1][1][RTW89_KCC][0][57] = 127,
[1][1][RTW89_ACMA][1][57] = 127,
@@ -53815,6 +56623,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][57] = 127,
[1][1][RTW89_UK][1][57] = 127,
[1][1][RTW89_UK][0][57] = 127,
+ [1][1][RTW89_THAILAND][1][57] = 127,
+ [1][1][RTW89_THAILAND][0][57] = 127,
[1][1][RTW89_FCC][1][59] = -28,
[1][1][RTW89_FCC][2][59] = 44,
[1][1][RTW89_ETSI][1][59] = 127,
@@ -53822,6 +56632,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][59] = 127,
[1][1][RTW89_MKK][0][59] = 127,
[1][1][RTW89_IC][1][59] = -28,
+ [1][1][RTW89_IC][2][59] = 44,
[1][1][RTW89_KCC][1][59] = -14,
[1][1][RTW89_KCC][0][59] = 127,
[1][1][RTW89_ACMA][1][59] = 127,
@@ -53831,6 +56642,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][59] = 127,
[1][1][RTW89_UK][1][59] = 127,
[1][1][RTW89_UK][0][59] = 127,
+ [1][1][RTW89_THAILAND][1][59] = 127,
+ [1][1][RTW89_THAILAND][0][59] = 127,
[1][1][RTW89_FCC][1][60] = -28,
[1][1][RTW89_FCC][2][60] = 44,
[1][1][RTW89_ETSI][1][60] = 127,
@@ -53838,6 +56651,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][60] = 127,
[1][1][RTW89_MKK][0][60] = 127,
[1][1][RTW89_IC][1][60] = -28,
+ [1][1][RTW89_IC][2][60] = 44,
[1][1][RTW89_KCC][1][60] = -14,
[1][1][RTW89_KCC][0][60] = 127,
[1][1][RTW89_ACMA][1][60] = 127,
@@ -53847,6 +56661,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][60] = 127,
[1][1][RTW89_UK][1][60] = 127,
[1][1][RTW89_UK][0][60] = 127,
+ [1][1][RTW89_THAILAND][1][60] = 127,
+ [1][1][RTW89_THAILAND][0][60] = 127,
[1][1][RTW89_FCC][1][62] = -28,
[1][1][RTW89_FCC][2][62] = 44,
[1][1][RTW89_ETSI][1][62] = 127,
@@ -53854,6 +56670,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][62] = 127,
[1][1][RTW89_MKK][0][62] = 127,
[1][1][RTW89_IC][1][62] = -28,
+ [1][1][RTW89_IC][2][62] = 44,
[1][1][RTW89_KCC][1][62] = -14,
[1][1][RTW89_KCC][0][62] = 127,
[1][1][RTW89_ACMA][1][62] = 127,
@@ -53863,6 +56680,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][62] = 127,
[1][1][RTW89_UK][1][62] = 127,
[1][1][RTW89_UK][0][62] = 127,
+ [1][1][RTW89_THAILAND][1][62] = 127,
+ [1][1][RTW89_THAILAND][0][62] = 127,
[1][1][RTW89_FCC][1][64] = -28,
[1][1][RTW89_FCC][2][64] = 44,
[1][1][RTW89_ETSI][1][64] = 127,
@@ -53870,6 +56689,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][64] = 127,
[1][1][RTW89_MKK][0][64] = 127,
[1][1][RTW89_IC][1][64] = -28,
+ [1][1][RTW89_IC][2][64] = 44,
[1][1][RTW89_KCC][1][64] = -14,
[1][1][RTW89_KCC][0][64] = 127,
[1][1][RTW89_ACMA][1][64] = 127,
@@ -53879,6 +56699,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][64] = 127,
[1][1][RTW89_UK][1][64] = 127,
[1][1][RTW89_UK][0][64] = 127,
+ [1][1][RTW89_THAILAND][1][64] = 127,
+ [1][1][RTW89_THAILAND][0][64] = 127,
[1][1][RTW89_FCC][1][66] = -28,
[1][1][RTW89_FCC][2][66] = 44,
[1][1][RTW89_ETSI][1][66] = 127,
@@ -53886,6 +56708,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][66] = 127,
[1][1][RTW89_MKK][0][66] = 127,
[1][1][RTW89_IC][1][66] = -28,
+ [1][1][RTW89_IC][2][66] = 44,
[1][1][RTW89_KCC][1][66] = -14,
[1][1][RTW89_KCC][0][66] = 127,
[1][1][RTW89_ACMA][1][66] = 127,
@@ -53895,6 +56718,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][66] = 127,
[1][1][RTW89_UK][1][66] = 127,
[1][1][RTW89_UK][0][66] = 127,
+ [1][1][RTW89_THAILAND][1][66] = 127,
+ [1][1][RTW89_THAILAND][0][66] = 127,
[1][1][RTW89_FCC][1][68] = -28,
[1][1][RTW89_FCC][2][68] = 44,
[1][1][RTW89_ETSI][1][68] = 127,
@@ -53902,6 +56727,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][68] = 127,
[1][1][RTW89_MKK][0][68] = 127,
[1][1][RTW89_IC][1][68] = -28,
+ [1][1][RTW89_IC][2][68] = 44,
[1][1][RTW89_KCC][1][68] = -14,
[1][1][RTW89_KCC][0][68] = 127,
[1][1][RTW89_ACMA][1][68] = 127,
@@ -53911,6 +56737,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][68] = 127,
[1][1][RTW89_UK][1][68] = 127,
[1][1][RTW89_UK][0][68] = 127,
+ [1][1][RTW89_THAILAND][1][68] = 127,
+ [1][1][RTW89_THAILAND][0][68] = 127,
[1][1][RTW89_FCC][1][70] = -26,
[1][1][RTW89_FCC][2][70] = 44,
[1][1][RTW89_ETSI][1][70] = 127,
@@ -53918,6 +56746,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][70] = 127,
[1][1][RTW89_MKK][0][70] = 127,
[1][1][RTW89_IC][1][70] = -26,
+ [1][1][RTW89_IC][2][70] = 44,
[1][1][RTW89_KCC][1][70] = -14,
[1][1][RTW89_KCC][0][70] = 127,
[1][1][RTW89_ACMA][1][70] = 127,
@@ -53927,6 +56756,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][70] = 127,
[1][1][RTW89_UK][1][70] = 127,
[1][1][RTW89_UK][0][70] = 127,
+ [1][1][RTW89_THAILAND][1][70] = 127,
+ [1][1][RTW89_THAILAND][0][70] = 127,
[1][1][RTW89_FCC][1][72] = -28,
[1][1][RTW89_FCC][2][72] = 44,
[1][1][RTW89_ETSI][1][72] = 127,
@@ -53934,6 +56765,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][72] = 127,
[1][1][RTW89_MKK][0][72] = 127,
[1][1][RTW89_IC][1][72] = -28,
+ [1][1][RTW89_IC][2][72] = 44,
[1][1][RTW89_KCC][1][72] = -14,
[1][1][RTW89_KCC][0][72] = 127,
[1][1][RTW89_ACMA][1][72] = 127,
@@ -53943,6 +56775,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][72] = 127,
[1][1][RTW89_UK][1][72] = 127,
[1][1][RTW89_UK][0][72] = 127,
+ [1][1][RTW89_THAILAND][1][72] = 127,
+ [1][1][RTW89_THAILAND][0][72] = 127,
[1][1][RTW89_FCC][1][74] = -28,
[1][1][RTW89_FCC][2][74] = 44,
[1][1][RTW89_ETSI][1][74] = 127,
@@ -53950,6 +56784,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][74] = 127,
[1][1][RTW89_MKK][0][74] = 127,
[1][1][RTW89_IC][1][74] = -28,
+ [1][1][RTW89_IC][2][74] = 44,
[1][1][RTW89_KCC][1][74] = -14,
[1][1][RTW89_KCC][0][74] = 127,
[1][1][RTW89_ACMA][1][74] = 127,
@@ -53959,6 +56794,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][74] = 127,
[1][1][RTW89_UK][1][74] = 127,
[1][1][RTW89_UK][0][74] = 127,
+ [1][1][RTW89_THAILAND][1][74] = 127,
+ [1][1][RTW89_THAILAND][0][74] = 127,
[1][1][RTW89_FCC][1][75] = -28,
[1][1][RTW89_FCC][2][75] = 44,
[1][1][RTW89_ETSI][1][75] = 127,
@@ -53966,6 +56803,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][75] = 127,
[1][1][RTW89_MKK][0][75] = 127,
[1][1][RTW89_IC][1][75] = -28,
+ [1][1][RTW89_IC][2][75] = 44,
[1][1][RTW89_KCC][1][75] = -14,
[1][1][RTW89_KCC][0][75] = 127,
[1][1][RTW89_ACMA][1][75] = 127,
@@ -53975,6 +56813,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][75] = 127,
[1][1][RTW89_UK][1][75] = 127,
[1][1][RTW89_UK][0][75] = 127,
+ [1][1][RTW89_THAILAND][1][75] = 127,
+ [1][1][RTW89_THAILAND][0][75] = 127,
[1][1][RTW89_FCC][1][77] = -28,
[1][1][RTW89_FCC][2][77] = 44,
[1][1][RTW89_ETSI][1][77] = 127,
@@ -53982,6 +56822,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][77] = 127,
[1][1][RTW89_MKK][0][77] = 127,
[1][1][RTW89_IC][1][77] = -28,
+ [1][1][RTW89_IC][2][77] = 44,
[1][1][RTW89_KCC][1][77] = -14,
[1][1][RTW89_KCC][0][77] = 127,
[1][1][RTW89_ACMA][1][77] = 127,
@@ -53991,6 +56832,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][77] = 127,
[1][1][RTW89_UK][1][77] = 127,
[1][1][RTW89_UK][0][77] = 127,
+ [1][1][RTW89_THAILAND][1][77] = 127,
+ [1][1][RTW89_THAILAND][0][77] = 127,
[1][1][RTW89_FCC][1][79] = -28,
[1][1][RTW89_FCC][2][79] = 44,
[1][1][RTW89_ETSI][1][79] = 127,
@@ -53998,6 +56841,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][79] = 127,
[1][1][RTW89_MKK][0][79] = 127,
[1][1][RTW89_IC][1][79] = -28,
+ [1][1][RTW89_IC][2][79] = 44,
[1][1][RTW89_KCC][1][79] = -14,
[1][1][RTW89_KCC][0][79] = 127,
[1][1][RTW89_ACMA][1][79] = 127,
@@ -54007,6 +56851,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][79] = 127,
[1][1][RTW89_UK][1][79] = 127,
[1][1][RTW89_UK][0][79] = 127,
+ [1][1][RTW89_THAILAND][1][79] = 127,
+ [1][1][RTW89_THAILAND][0][79] = 127,
[1][1][RTW89_FCC][1][81] = -28,
[1][1][RTW89_FCC][2][81] = 44,
[1][1][RTW89_ETSI][1][81] = 127,
@@ -54014,6 +56860,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][81] = 127,
[1][1][RTW89_MKK][0][81] = 127,
[1][1][RTW89_IC][1][81] = -28,
+ [1][1][RTW89_IC][2][81] = 44,
[1][1][RTW89_KCC][1][81] = -14,
[1][1][RTW89_KCC][0][81] = 127,
[1][1][RTW89_ACMA][1][81] = 127,
@@ -54023,6 +56870,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][81] = 127,
[1][1][RTW89_UK][1][81] = 127,
[1][1][RTW89_UK][0][81] = 127,
+ [1][1][RTW89_THAILAND][1][81] = 127,
+ [1][1][RTW89_THAILAND][0][81] = 127,
[1][1][RTW89_FCC][1][83] = -28,
[1][1][RTW89_FCC][2][83] = 44,
[1][1][RTW89_ETSI][1][83] = 127,
@@ -54030,6 +56879,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][83] = 127,
[1][1][RTW89_MKK][0][83] = 127,
[1][1][RTW89_IC][1][83] = -28,
+ [1][1][RTW89_IC][2][83] = 44,
[1][1][RTW89_KCC][1][83] = -14,
[1][1][RTW89_KCC][0][83] = 127,
[1][1][RTW89_ACMA][1][83] = 127,
@@ -54039,6 +56889,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][83] = 127,
[1][1][RTW89_UK][1][83] = 127,
[1][1][RTW89_UK][0][83] = 127,
+ [1][1][RTW89_THAILAND][1][83] = 127,
+ [1][1][RTW89_THAILAND][0][83] = 127,
[1][1][RTW89_FCC][1][85] = -28,
[1][1][RTW89_FCC][2][85] = 44,
[1][1][RTW89_ETSI][1][85] = 127,
@@ -54046,6 +56898,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][85] = 127,
[1][1][RTW89_MKK][0][85] = 127,
[1][1][RTW89_IC][1][85] = -28,
+ [1][1][RTW89_IC][2][85] = 44,
[1][1][RTW89_KCC][1][85] = -14,
[1][1][RTW89_KCC][0][85] = 127,
[1][1][RTW89_ACMA][1][85] = 127,
@@ -54055,6 +56908,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][85] = 127,
[1][1][RTW89_UK][1][85] = 127,
[1][1][RTW89_UK][0][85] = 127,
+ [1][1][RTW89_THAILAND][1][85] = 127,
+ [1][1][RTW89_THAILAND][0][85] = 127,
[1][1][RTW89_FCC][1][87] = -28,
[1][1][RTW89_FCC][2][87] = 127,
[1][1][RTW89_ETSI][1][87] = 127,
@@ -54062,6 +56917,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][87] = 127,
[1][1][RTW89_MKK][0][87] = 127,
[1][1][RTW89_IC][1][87] = -28,
+ [1][1][RTW89_IC][2][87] = 127,
[1][1][RTW89_KCC][1][87] = -14,
[1][1][RTW89_KCC][0][87] = 127,
[1][1][RTW89_ACMA][1][87] = 127,
@@ -54071,6 +56927,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][87] = 127,
[1][1][RTW89_UK][1][87] = 127,
[1][1][RTW89_UK][0][87] = 127,
+ [1][1][RTW89_THAILAND][1][87] = 127,
+ [1][1][RTW89_THAILAND][0][87] = 127,
[1][1][RTW89_FCC][1][89] = -26,
[1][1][RTW89_FCC][2][89] = 127,
[1][1][RTW89_ETSI][1][89] = 127,
@@ -54078,6 +56936,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][89] = 127,
[1][1][RTW89_MKK][0][89] = 127,
[1][1][RTW89_IC][1][89] = -26,
+ [1][1][RTW89_IC][2][89] = 127,
[1][1][RTW89_KCC][1][89] = -14,
[1][1][RTW89_KCC][0][89] = 127,
[1][1][RTW89_ACMA][1][89] = 127,
@@ -54087,6 +56946,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][89] = 127,
[1][1][RTW89_UK][1][89] = 127,
[1][1][RTW89_UK][0][89] = 127,
+ [1][1][RTW89_THAILAND][1][89] = 127,
+ [1][1][RTW89_THAILAND][0][89] = 127,
[1][1][RTW89_FCC][1][90] = -26,
[1][1][RTW89_FCC][2][90] = 127,
[1][1][RTW89_ETSI][1][90] = 127,
@@ -54094,6 +56955,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][90] = 127,
[1][1][RTW89_MKK][0][90] = 127,
[1][1][RTW89_IC][1][90] = -26,
+ [1][1][RTW89_IC][2][90] = 127,
[1][1][RTW89_KCC][1][90] = -14,
[1][1][RTW89_KCC][0][90] = 127,
[1][1][RTW89_ACMA][1][90] = 127,
@@ -54103,6 +56965,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][90] = 127,
[1][1][RTW89_UK][1][90] = 127,
[1][1][RTW89_UK][0][90] = 127,
+ [1][1][RTW89_THAILAND][1][90] = 127,
+ [1][1][RTW89_THAILAND][0][90] = 127,
[1][1][RTW89_FCC][1][92] = -26,
[1][1][RTW89_FCC][2][92] = 127,
[1][1][RTW89_ETSI][1][92] = 127,
@@ -54110,6 +56974,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][92] = 127,
[1][1][RTW89_MKK][0][92] = 127,
[1][1][RTW89_IC][1][92] = -26,
+ [1][1][RTW89_IC][2][92] = 127,
[1][1][RTW89_KCC][1][92] = -14,
[1][1][RTW89_KCC][0][92] = 127,
[1][1][RTW89_ACMA][1][92] = 127,
@@ -54119,6 +56984,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][92] = 127,
[1][1][RTW89_UK][1][92] = 127,
[1][1][RTW89_UK][0][92] = 127,
+ [1][1][RTW89_THAILAND][1][92] = 127,
+ [1][1][RTW89_THAILAND][0][92] = 127,
[1][1][RTW89_FCC][1][94] = -26,
[1][1][RTW89_FCC][2][94] = 127,
[1][1][RTW89_ETSI][1][94] = 127,
@@ -54126,6 +56993,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][94] = 127,
[1][1][RTW89_MKK][0][94] = 127,
[1][1][RTW89_IC][1][94] = -26,
+ [1][1][RTW89_IC][2][94] = 127,
[1][1][RTW89_KCC][1][94] = -14,
[1][1][RTW89_KCC][0][94] = 127,
[1][1][RTW89_ACMA][1][94] = 127,
@@ -54135,6 +57003,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][94] = 127,
[1][1][RTW89_UK][1][94] = 127,
[1][1][RTW89_UK][0][94] = 127,
+ [1][1][RTW89_THAILAND][1][94] = 127,
+ [1][1][RTW89_THAILAND][0][94] = 127,
[1][1][RTW89_FCC][1][96] = -26,
[1][1][RTW89_FCC][2][96] = 127,
[1][1][RTW89_ETSI][1][96] = 127,
@@ -54142,6 +57012,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][96] = 127,
[1][1][RTW89_MKK][0][96] = 127,
[1][1][RTW89_IC][1][96] = -26,
+ [1][1][RTW89_IC][2][96] = 127,
[1][1][RTW89_KCC][1][96] = -14,
[1][1][RTW89_KCC][0][96] = 127,
[1][1][RTW89_ACMA][1][96] = 127,
@@ -54151,6 +57022,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][96] = 127,
[1][1][RTW89_UK][1][96] = 127,
[1][1][RTW89_UK][0][96] = 127,
+ [1][1][RTW89_THAILAND][1][96] = 127,
+ [1][1][RTW89_THAILAND][0][96] = 127,
[1][1][RTW89_FCC][1][98] = -26,
[1][1][RTW89_FCC][2][98] = 127,
[1][1][RTW89_ETSI][1][98] = 127,
@@ -54158,6 +57031,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][98] = 127,
[1][1][RTW89_MKK][0][98] = 127,
[1][1][RTW89_IC][1][98] = -26,
+ [1][1][RTW89_IC][2][98] = 127,
[1][1][RTW89_KCC][1][98] = -14,
[1][1][RTW89_KCC][0][98] = 127,
[1][1][RTW89_ACMA][1][98] = 127,
@@ -54167,6 +57041,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][98] = 127,
[1][1][RTW89_UK][1][98] = 127,
[1][1][RTW89_UK][0][98] = 127,
+ [1][1][RTW89_THAILAND][1][98] = 127,
+ [1][1][RTW89_THAILAND][0][98] = 127,
[1][1][RTW89_FCC][1][100] = -26,
[1][1][RTW89_FCC][2][100] = 127,
[1][1][RTW89_ETSI][1][100] = 127,
@@ -54174,6 +57050,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][100] = 127,
[1][1][RTW89_MKK][0][100] = 127,
[1][1][RTW89_IC][1][100] = -26,
+ [1][1][RTW89_IC][2][100] = 127,
[1][1][RTW89_KCC][1][100] = -14,
[1][1][RTW89_KCC][0][100] = 127,
[1][1][RTW89_ACMA][1][100] = 127,
@@ -54183,6 +57060,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][100] = 127,
[1][1][RTW89_UK][1][100] = 127,
[1][1][RTW89_UK][0][100] = 127,
+ [1][1][RTW89_THAILAND][1][100] = 127,
+ [1][1][RTW89_THAILAND][0][100] = 127,
[1][1][RTW89_FCC][1][102] = -26,
[1][1][RTW89_FCC][2][102] = 127,
[1][1][RTW89_ETSI][1][102] = 127,
@@ -54190,6 +57069,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][102] = 127,
[1][1][RTW89_MKK][0][102] = 127,
[1][1][RTW89_IC][1][102] = -26,
+ [1][1][RTW89_IC][2][102] = 127,
[1][1][RTW89_KCC][1][102] = -14,
[1][1][RTW89_KCC][0][102] = 127,
[1][1][RTW89_ACMA][1][102] = 127,
@@ -54199,6 +57079,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][102] = 127,
[1][1][RTW89_UK][1][102] = 127,
[1][1][RTW89_UK][0][102] = 127,
+ [1][1][RTW89_THAILAND][1][102] = 127,
+ [1][1][RTW89_THAILAND][0][102] = 127,
[1][1][RTW89_FCC][1][104] = -26,
[1][1][RTW89_FCC][2][104] = 127,
[1][1][RTW89_ETSI][1][104] = 127,
@@ -54206,6 +57088,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][104] = 127,
[1][1][RTW89_MKK][0][104] = 127,
[1][1][RTW89_IC][1][104] = -26,
+ [1][1][RTW89_IC][2][104] = 127,
[1][1][RTW89_KCC][1][104] = -14,
[1][1][RTW89_KCC][0][104] = 127,
[1][1][RTW89_ACMA][1][104] = 127,
@@ -54215,6 +57098,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][104] = 127,
[1][1][RTW89_UK][1][104] = 127,
[1][1][RTW89_UK][0][104] = 127,
+ [1][1][RTW89_THAILAND][1][104] = 127,
+ [1][1][RTW89_THAILAND][0][104] = 127,
[1][1][RTW89_FCC][1][105] = -26,
[1][1][RTW89_FCC][2][105] = 127,
[1][1][RTW89_ETSI][1][105] = 127,
@@ -54222,6 +57107,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][105] = 127,
[1][1][RTW89_MKK][0][105] = 127,
[1][1][RTW89_IC][1][105] = -26,
+ [1][1][RTW89_IC][2][105] = 127,
[1][1][RTW89_KCC][1][105] = -14,
[1][1][RTW89_KCC][0][105] = 127,
[1][1][RTW89_ACMA][1][105] = 127,
@@ -54231,6 +57117,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][105] = 127,
[1][1][RTW89_UK][1][105] = 127,
[1][1][RTW89_UK][0][105] = 127,
+ [1][1][RTW89_THAILAND][1][105] = 127,
+ [1][1][RTW89_THAILAND][0][105] = 127,
[1][1][RTW89_FCC][1][107] = -22,
[1][1][RTW89_FCC][2][107] = 127,
[1][1][RTW89_ETSI][1][107] = 127,
@@ -54238,6 +57126,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][107] = 127,
[1][1][RTW89_MKK][0][107] = 127,
[1][1][RTW89_IC][1][107] = -22,
+ [1][1][RTW89_IC][2][107] = 127,
[1][1][RTW89_KCC][1][107] = -14,
[1][1][RTW89_KCC][0][107] = 127,
[1][1][RTW89_ACMA][1][107] = 127,
@@ -54247,6 +57136,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][107] = 127,
[1][1][RTW89_UK][1][107] = 127,
[1][1][RTW89_UK][0][107] = 127,
+ [1][1][RTW89_THAILAND][1][107] = 127,
+ [1][1][RTW89_THAILAND][0][107] = 127,
[1][1][RTW89_FCC][1][109] = -22,
[1][1][RTW89_FCC][2][109] = 127,
[1][1][RTW89_ETSI][1][109] = 127,
@@ -54254,6 +57145,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][109] = 127,
[1][1][RTW89_MKK][0][109] = 127,
[1][1][RTW89_IC][1][109] = -22,
+ [1][1][RTW89_IC][2][109] = 127,
[1][1][RTW89_KCC][1][109] = 127,
[1][1][RTW89_KCC][0][109] = 127,
[1][1][RTW89_ACMA][1][109] = 127,
@@ -54263,6 +57155,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][109] = 127,
[1][1][RTW89_UK][1][109] = 127,
[1][1][RTW89_UK][0][109] = 127,
+ [1][1][RTW89_THAILAND][1][109] = 127,
+ [1][1][RTW89_THAILAND][0][109] = 127,
[1][1][RTW89_FCC][1][111] = 127,
[1][1][RTW89_FCC][2][111] = 127,
[1][1][RTW89_ETSI][1][111] = 127,
@@ -54270,6 +57164,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][111] = 127,
[1][1][RTW89_MKK][0][111] = 127,
[1][1][RTW89_IC][1][111] = 127,
+ [1][1][RTW89_IC][2][111] = 127,
[1][1][RTW89_KCC][1][111] = 127,
[1][1][RTW89_KCC][0][111] = 127,
[1][1][RTW89_ACMA][1][111] = 127,
@@ -54279,6 +57174,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][111] = 127,
[1][1][RTW89_UK][1][111] = 127,
[1][1][RTW89_UK][0][111] = 127,
+ [1][1][RTW89_THAILAND][1][111] = 127,
+ [1][1][RTW89_THAILAND][0][111] = 127,
[1][1][RTW89_FCC][1][113] = 127,
[1][1][RTW89_FCC][2][113] = 127,
[1][1][RTW89_ETSI][1][113] = 127,
@@ -54286,6 +57183,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][113] = 127,
[1][1][RTW89_MKK][0][113] = 127,
[1][1][RTW89_IC][1][113] = 127,
+ [1][1][RTW89_IC][2][113] = 127,
[1][1][RTW89_KCC][1][113] = 127,
[1][1][RTW89_KCC][0][113] = 127,
[1][1][RTW89_ACMA][1][113] = 127,
@@ -54295,6 +57193,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][113] = 127,
[1][1][RTW89_UK][1][113] = 127,
[1][1][RTW89_UK][0][113] = 127,
+ [1][1][RTW89_THAILAND][1][113] = 127,
+ [1][1][RTW89_THAILAND][0][113] = 127,
[1][1][RTW89_FCC][1][115] = 127,
[1][1][RTW89_FCC][2][115] = 127,
[1][1][RTW89_ETSI][1][115] = 127,
@@ -54302,6 +57202,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][115] = 127,
[1][1][RTW89_MKK][0][115] = 127,
[1][1][RTW89_IC][1][115] = 127,
+ [1][1][RTW89_IC][2][115] = 127,
[1][1][RTW89_KCC][1][115] = 127,
[1][1][RTW89_KCC][0][115] = 127,
[1][1][RTW89_ACMA][1][115] = 127,
@@ -54311,6 +57212,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][115] = 127,
[1][1][RTW89_UK][1][115] = 127,
[1][1][RTW89_UK][0][115] = 127,
+ [1][1][RTW89_THAILAND][1][115] = 127,
+ [1][1][RTW89_THAILAND][0][115] = 127,
[1][1][RTW89_FCC][1][117] = 127,
[1][1][RTW89_FCC][2][117] = 127,
[1][1][RTW89_ETSI][1][117] = 127,
@@ -54318,6 +57221,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][117] = 127,
[1][1][RTW89_MKK][0][117] = 127,
[1][1][RTW89_IC][1][117] = 127,
+ [1][1][RTW89_IC][2][117] = 127,
[1][1][RTW89_KCC][1][117] = 127,
[1][1][RTW89_KCC][0][117] = 127,
[1][1][RTW89_ACMA][1][117] = 127,
@@ -54327,6 +57231,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][117] = 127,
[1][1][RTW89_UK][1][117] = 127,
[1][1][RTW89_UK][0][117] = 127,
+ [1][1][RTW89_THAILAND][1][117] = 127,
+ [1][1][RTW89_THAILAND][0][117] = 127,
[1][1][RTW89_FCC][1][119] = 127,
[1][1][RTW89_FCC][2][119] = 127,
[1][1][RTW89_ETSI][1][119] = 127,
@@ -54334,6 +57240,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_MKK][1][119] = 127,
[1][1][RTW89_MKK][0][119] = 127,
[1][1][RTW89_IC][1][119] = 127,
+ [1][1][RTW89_IC][2][119] = 127,
[1][1][RTW89_KCC][1][119] = 127,
[1][1][RTW89_KCC][0][119] = 127,
[1][1][RTW89_ACMA][1][119] = 127,
@@ -54343,6 +57250,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[1][1][RTW89_QATAR][0][119] = 127,
[1][1][RTW89_UK][1][119] = 127,
[1][1][RTW89_UK][0][119] = 127,
+ [1][1][RTW89_THAILAND][1][119] = 127,
+ [1][1][RTW89_THAILAND][0][119] = 127,
[2][0][RTW89_FCC][1][0] = 8,
[2][0][RTW89_FCC][2][0] = 60,
[2][0][RTW89_ETSI][1][0] = 56,
@@ -54350,6 +57259,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][0] = 54,
[2][0][RTW89_MKK][0][0] = 14,
[2][0][RTW89_IC][1][0] = 8,
+ [2][0][RTW89_IC][2][0] = 60,
[2][0][RTW89_KCC][1][0] = -2,
[2][0][RTW89_KCC][0][0] = -2,
[2][0][RTW89_ACMA][1][0] = 56,
@@ -54359,6 +57269,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][0] = 18,
[2][0][RTW89_UK][1][0] = 56,
[2][0][RTW89_UK][0][0] = 18,
+ [2][0][RTW89_THAILAND][1][0] = 52,
+ [2][0][RTW89_THAILAND][0][0] = 8,
[2][0][RTW89_FCC][1][2] = 8,
[2][0][RTW89_FCC][2][2] = 60,
[2][0][RTW89_ETSI][1][2] = 56,
@@ -54366,6 +57278,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][2] = 54,
[2][0][RTW89_MKK][0][2] = 14,
[2][0][RTW89_IC][1][2] = 8,
+ [2][0][RTW89_IC][2][2] = 60,
[2][0][RTW89_KCC][1][2] = -2,
[2][0][RTW89_KCC][0][2] = -2,
[2][0][RTW89_ACMA][1][2] = 56,
@@ -54375,6 +57288,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][2] = 18,
[2][0][RTW89_UK][1][2] = 56,
[2][0][RTW89_UK][0][2] = 18,
+ [2][0][RTW89_THAILAND][1][2] = 52,
+ [2][0][RTW89_THAILAND][0][2] = 8,
[2][0][RTW89_FCC][1][4] = 8,
[2][0][RTW89_FCC][2][4] = 60,
[2][0][RTW89_ETSI][1][4] = 56,
@@ -54382,6 +57297,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][4] = 54,
[2][0][RTW89_MKK][0][4] = 14,
[2][0][RTW89_IC][1][4] = 8,
+ [2][0][RTW89_IC][2][4] = 60,
[2][0][RTW89_KCC][1][4] = -2,
[2][0][RTW89_KCC][0][4] = -2,
[2][0][RTW89_ACMA][1][4] = 56,
@@ -54391,6 +57307,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][4] = 18,
[2][0][RTW89_UK][1][4] = 56,
[2][0][RTW89_UK][0][4] = 18,
+ [2][0][RTW89_THAILAND][1][4] = 52,
+ [2][0][RTW89_THAILAND][0][4] = 8,
[2][0][RTW89_FCC][1][6] = 8,
[2][0][RTW89_FCC][2][6] = 60,
[2][0][RTW89_ETSI][1][6] = 56,
@@ -54398,6 +57316,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][6] = 54,
[2][0][RTW89_MKK][0][6] = 14,
[2][0][RTW89_IC][1][6] = 8,
+ [2][0][RTW89_IC][2][6] = 60,
[2][0][RTW89_KCC][1][6] = -2,
[2][0][RTW89_KCC][0][6] = -2,
[2][0][RTW89_ACMA][1][6] = 56,
@@ -54407,6 +57326,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][6] = 18,
[2][0][RTW89_UK][1][6] = 56,
[2][0][RTW89_UK][0][6] = 18,
+ [2][0][RTW89_THAILAND][1][6] = 52,
+ [2][0][RTW89_THAILAND][0][6] = 8,
[2][0][RTW89_FCC][1][8] = 8,
[2][0][RTW89_FCC][2][8] = 60,
[2][0][RTW89_ETSI][1][8] = 56,
@@ -54414,6 +57335,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][8] = 54,
[2][0][RTW89_MKK][0][8] = 14,
[2][0][RTW89_IC][1][8] = 8,
+ [2][0][RTW89_IC][2][8] = 60,
[2][0][RTW89_KCC][1][8] = -2,
[2][0][RTW89_KCC][0][8] = -2,
[2][0][RTW89_ACMA][1][8] = 56,
@@ -54423,6 +57345,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][8] = 18,
[2][0][RTW89_UK][1][8] = 56,
[2][0][RTW89_UK][0][8] = 18,
+ [2][0][RTW89_THAILAND][1][8] = 52,
+ [2][0][RTW89_THAILAND][0][8] = 8,
[2][0][RTW89_FCC][1][10] = 8,
[2][0][RTW89_FCC][2][10] = 60,
[2][0][RTW89_ETSI][1][10] = 56,
@@ -54430,6 +57354,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][10] = 54,
[2][0][RTW89_MKK][0][10] = 14,
[2][0][RTW89_IC][1][10] = 8,
+ [2][0][RTW89_IC][2][10] = 60,
[2][0][RTW89_KCC][1][10] = -2,
[2][0][RTW89_KCC][0][10] = -2,
[2][0][RTW89_ACMA][1][10] = 56,
@@ -54439,6 +57364,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][10] = 18,
[2][0][RTW89_UK][1][10] = 56,
[2][0][RTW89_UK][0][10] = 18,
+ [2][0][RTW89_THAILAND][1][10] = 52,
+ [2][0][RTW89_THAILAND][0][10] = 8,
[2][0][RTW89_FCC][1][12] = 8,
[2][0][RTW89_FCC][2][12] = 60,
[2][0][RTW89_ETSI][1][12] = 56,
@@ -54446,6 +57373,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][12] = 54,
[2][0][RTW89_MKK][0][12] = 14,
[2][0][RTW89_IC][1][12] = 8,
+ [2][0][RTW89_IC][2][12] = 60,
[2][0][RTW89_KCC][1][12] = -2,
[2][0][RTW89_KCC][0][12] = -2,
[2][0][RTW89_ACMA][1][12] = 56,
@@ -54455,6 +57383,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][12] = 18,
[2][0][RTW89_UK][1][12] = 56,
[2][0][RTW89_UK][0][12] = 18,
+ [2][0][RTW89_THAILAND][1][12] = 52,
+ [2][0][RTW89_THAILAND][0][12] = 8,
[2][0][RTW89_FCC][1][14] = 8,
[2][0][RTW89_FCC][2][14] = 60,
[2][0][RTW89_ETSI][1][14] = 56,
@@ -54462,6 +57392,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][14] = 54,
[2][0][RTW89_MKK][0][14] = 14,
[2][0][RTW89_IC][1][14] = 8,
+ [2][0][RTW89_IC][2][14] = 60,
[2][0][RTW89_KCC][1][14] = -2,
[2][0][RTW89_KCC][0][14] = -2,
[2][0][RTW89_ACMA][1][14] = 56,
@@ -54471,6 +57402,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][14] = 18,
[2][0][RTW89_UK][1][14] = 56,
[2][0][RTW89_UK][0][14] = 18,
+ [2][0][RTW89_THAILAND][1][14] = 52,
+ [2][0][RTW89_THAILAND][0][14] = 8,
[2][0][RTW89_FCC][1][15] = 8,
[2][0][RTW89_FCC][2][15] = 60,
[2][0][RTW89_ETSI][1][15] = 56,
@@ -54478,6 +57411,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][15] = 54,
[2][0][RTW89_MKK][0][15] = 14,
[2][0][RTW89_IC][1][15] = 8,
+ [2][0][RTW89_IC][2][15] = 60,
[2][0][RTW89_KCC][1][15] = -2,
[2][0][RTW89_KCC][0][15] = -2,
[2][0][RTW89_ACMA][1][15] = 56,
@@ -54487,6 +57421,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][15] = 18,
[2][0][RTW89_UK][1][15] = 56,
[2][0][RTW89_UK][0][15] = 18,
+ [2][0][RTW89_THAILAND][1][15] = 52,
+ [2][0][RTW89_THAILAND][0][15] = 8,
[2][0][RTW89_FCC][1][17] = 8,
[2][0][RTW89_FCC][2][17] = 60,
[2][0][RTW89_ETSI][1][17] = 56,
@@ -54494,6 +57430,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][17] = 54,
[2][0][RTW89_MKK][0][17] = 14,
[2][0][RTW89_IC][1][17] = 8,
+ [2][0][RTW89_IC][2][17] = 60,
[2][0][RTW89_KCC][1][17] = -2,
[2][0][RTW89_KCC][0][17] = -2,
[2][0][RTW89_ACMA][1][17] = 56,
@@ -54503,6 +57440,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][17] = 18,
[2][0][RTW89_UK][1][17] = 56,
[2][0][RTW89_UK][0][17] = 18,
+ [2][0][RTW89_THAILAND][1][17] = 52,
+ [2][0][RTW89_THAILAND][0][17] = 8,
[2][0][RTW89_FCC][1][19] = 8,
[2][0][RTW89_FCC][2][19] = 60,
[2][0][RTW89_ETSI][1][19] = 56,
@@ -54510,6 +57449,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][19] = 54,
[2][0][RTW89_MKK][0][19] = 14,
[2][0][RTW89_IC][1][19] = 8,
+ [2][0][RTW89_IC][2][19] = 60,
[2][0][RTW89_KCC][1][19] = -2,
[2][0][RTW89_KCC][0][19] = -2,
[2][0][RTW89_ACMA][1][19] = 56,
@@ -54519,6 +57459,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][19] = 18,
[2][0][RTW89_UK][1][19] = 56,
[2][0][RTW89_UK][0][19] = 18,
+ [2][0][RTW89_THAILAND][1][19] = 52,
+ [2][0][RTW89_THAILAND][0][19] = 8,
[2][0][RTW89_FCC][1][21] = 8,
[2][0][RTW89_FCC][2][21] = 60,
[2][0][RTW89_ETSI][1][21] = 56,
@@ -54526,6 +57468,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][21] = 54,
[2][0][RTW89_MKK][0][21] = 14,
[2][0][RTW89_IC][1][21] = 8,
+ [2][0][RTW89_IC][2][21] = 60,
[2][0][RTW89_KCC][1][21] = -2,
[2][0][RTW89_KCC][0][21] = -2,
[2][0][RTW89_ACMA][1][21] = 56,
@@ -54535,13 +57478,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][21] = 18,
[2][0][RTW89_UK][1][21] = 56,
[2][0][RTW89_UK][0][21] = 18,
+ [2][0][RTW89_THAILAND][1][21] = 52,
+ [2][0][RTW89_THAILAND][0][21] = 8,
[2][0][RTW89_FCC][1][23] = 8,
- [2][0][RTW89_FCC][2][23] = 78,
+ [2][0][RTW89_FCC][2][23] = 70,
[2][0][RTW89_ETSI][1][23] = 56,
[2][0][RTW89_ETSI][0][23] = 18,
[2][0][RTW89_MKK][1][23] = 56,
[2][0][RTW89_MKK][0][23] = 14,
[2][0][RTW89_IC][1][23] = 8,
+ [2][0][RTW89_IC][2][23] = 70,
[2][0][RTW89_KCC][1][23] = -2,
[2][0][RTW89_KCC][0][23] = -2,
[2][0][RTW89_ACMA][1][23] = 56,
@@ -54551,13 +57497,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][23] = 18,
[2][0][RTW89_UK][1][23] = 56,
[2][0][RTW89_UK][0][23] = 18,
+ [2][0][RTW89_THAILAND][1][23] = 52,
+ [2][0][RTW89_THAILAND][0][23] = 8,
[2][0][RTW89_FCC][1][25] = 8,
- [2][0][RTW89_FCC][2][25] = 78,
+ [2][0][RTW89_FCC][2][25] = 70,
[2][0][RTW89_ETSI][1][25] = 56,
[2][0][RTW89_ETSI][0][25] = 18,
[2][0][RTW89_MKK][1][25] = 56,
[2][0][RTW89_MKK][0][25] = 14,
[2][0][RTW89_IC][1][25] = 8,
+ [2][0][RTW89_IC][2][25] = 70,
[2][0][RTW89_KCC][1][25] = -2,
[2][0][RTW89_KCC][0][25] = -2,
[2][0][RTW89_ACMA][1][25] = 56,
@@ -54567,13 +57516,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][25] = 18,
[2][0][RTW89_UK][1][25] = 56,
[2][0][RTW89_UK][0][25] = 18,
+ [2][0][RTW89_THAILAND][1][25] = 52,
+ [2][0][RTW89_THAILAND][0][25] = 8,
[2][0][RTW89_FCC][1][27] = 8,
- [2][0][RTW89_FCC][2][27] = 78,
+ [2][0][RTW89_FCC][2][27] = 70,
[2][0][RTW89_ETSI][1][27] = 56,
[2][0][RTW89_ETSI][0][27] = 18,
[2][0][RTW89_MKK][1][27] = 56,
[2][0][RTW89_MKK][0][27] = 14,
[2][0][RTW89_IC][1][27] = 8,
+ [2][0][RTW89_IC][2][27] = 70,
[2][0][RTW89_KCC][1][27] = -2,
[2][0][RTW89_KCC][0][27] = -2,
[2][0][RTW89_ACMA][1][27] = 56,
@@ -54583,13 +57535,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][27] = 18,
[2][0][RTW89_UK][1][27] = 56,
[2][0][RTW89_UK][0][27] = 18,
+ [2][0][RTW89_THAILAND][1][27] = 52,
+ [2][0][RTW89_THAILAND][0][27] = 8,
[2][0][RTW89_FCC][1][29] = 8,
- [2][0][RTW89_FCC][2][29] = 78,
+ [2][0][RTW89_FCC][2][29] = 70,
[2][0][RTW89_ETSI][1][29] = 56,
[2][0][RTW89_ETSI][0][29] = 18,
[2][0][RTW89_MKK][1][29] = 56,
[2][0][RTW89_MKK][0][29] = 14,
[2][0][RTW89_IC][1][29] = 8,
+ [2][0][RTW89_IC][2][29] = 70,
[2][0][RTW89_KCC][1][29] = -2,
[2][0][RTW89_KCC][0][29] = -2,
[2][0][RTW89_ACMA][1][29] = 56,
@@ -54599,13 +57554,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][29] = 18,
[2][0][RTW89_UK][1][29] = 56,
[2][0][RTW89_UK][0][29] = 18,
+ [2][0][RTW89_THAILAND][1][29] = 52,
+ [2][0][RTW89_THAILAND][0][29] = 8,
[2][0][RTW89_FCC][1][30] = 8,
- [2][0][RTW89_FCC][2][30] = 78,
+ [2][0][RTW89_FCC][2][30] = 70,
[2][0][RTW89_ETSI][1][30] = 56,
[2][0][RTW89_ETSI][0][30] = 18,
[2][0][RTW89_MKK][1][30] = 56,
[2][0][RTW89_MKK][0][30] = 14,
[2][0][RTW89_IC][1][30] = 8,
+ [2][0][RTW89_IC][2][30] = 70,
[2][0][RTW89_KCC][1][30] = -2,
[2][0][RTW89_KCC][0][30] = -2,
[2][0][RTW89_ACMA][1][30] = 56,
@@ -54615,13 +57573,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][30] = 18,
[2][0][RTW89_UK][1][30] = 56,
[2][0][RTW89_UK][0][30] = 18,
+ [2][0][RTW89_THAILAND][1][30] = 52,
+ [2][0][RTW89_THAILAND][0][30] = 8,
[2][0][RTW89_FCC][1][32] = 8,
- [2][0][RTW89_FCC][2][32] = 78,
+ [2][0][RTW89_FCC][2][32] = 70,
[2][0][RTW89_ETSI][1][32] = 56,
[2][0][RTW89_ETSI][0][32] = 18,
[2][0][RTW89_MKK][1][32] = 56,
[2][0][RTW89_MKK][0][32] = 14,
[2][0][RTW89_IC][1][32] = 8,
+ [2][0][RTW89_IC][2][32] = 70,
[2][0][RTW89_KCC][1][32] = -2,
[2][0][RTW89_KCC][0][32] = -2,
[2][0][RTW89_ACMA][1][32] = 56,
@@ -54631,13 +57592,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][32] = 18,
[2][0][RTW89_UK][1][32] = 56,
[2][0][RTW89_UK][0][32] = 18,
+ [2][0][RTW89_THAILAND][1][32] = 52,
+ [2][0][RTW89_THAILAND][0][32] = 8,
[2][0][RTW89_FCC][1][34] = 8,
- [2][0][RTW89_FCC][2][34] = 78,
+ [2][0][RTW89_FCC][2][34] = 70,
[2][0][RTW89_ETSI][1][34] = 56,
[2][0][RTW89_ETSI][0][34] = 18,
[2][0][RTW89_MKK][1][34] = 56,
[2][0][RTW89_MKK][0][34] = 14,
[2][0][RTW89_IC][1][34] = 8,
+ [2][0][RTW89_IC][2][34] = 70,
[2][0][RTW89_KCC][1][34] = -2,
[2][0][RTW89_KCC][0][34] = -2,
[2][0][RTW89_ACMA][1][34] = 56,
@@ -54647,13 +57611,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][34] = 18,
[2][0][RTW89_UK][1][34] = 56,
[2][0][RTW89_UK][0][34] = 18,
+ [2][0][RTW89_THAILAND][1][34] = 52,
+ [2][0][RTW89_THAILAND][0][34] = 8,
[2][0][RTW89_FCC][1][36] = 8,
- [2][0][RTW89_FCC][2][36] = 78,
+ [2][0][RTW89_FCC][2][36] = 70,
[2][0][RTW89_ETSI][1][36] = 56,
[2][0][RTW89_ETSI][0][36] = 18,
[2][0][RTW89_MKK][1][36] = 56,
[2][0][RTW89_MKK][0][36] = 14,
[2][0][RTW89_IC][1][36] = 8,
+ [2][0][RTW89_IC][2][36] = 70,
[2][0][RTW89_KCC][1][36] = -2,
[2][0][RTW89_KCC][0][36] = -2,
[2][0][RTW89_ACMA][1][36] = 56,
@@ -54663,13 +57630,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][36] = 18,
[2][0][RTW89_UK][1][36] = 56,
[2][0][RTW89_UK][0][36] = 18,
+ [2][0][RTW89_THAILAND][1][36] = 52,
+ [2][0][RTW89_THAILAND][0][36] = 8,
[2][0][RTW89_FCC][1][38] = 8,
- [2][0][RTW89_FCC][2][38] = 78,
+ [2][0][RTW89_FCC][2][38] = 70,
[2][0][RTW89_ETSI][1][38] = 56,
[2][0][RTW89_ETSI][0][38] = 18,
[2][0][RTW89_MKK][1][38] = 56,
[2][0][RTW89_MKK][0][38] = 14,
[2][0][RTW89_IC][1][38] = 8,
+ [2][0][RTW89_IC][2][38] = 70,
[2][0][RTW89_KCC][1][38] = -2,
[2][0][RTW89_KCC][0][38] = -2,
[2][0][RTW89_ACMA][1][38] = 56,
@@ -54679,13 +57649,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][38] = 18,
[2][0][RTW89_UK][1][38] = 56,
[2][0][RTW89_UK][0][38] = 18,
+ [2][0][RTW89_THAILAND][1][38] = 52,
+ [2][0][RTW89_THAILAND][0][38] = 8,
[2][0][RTW89_FCC][1][40] = 8,
- [2][0][RTW89_FCC][2][40] = 78,
+ [2][0][RTW89_FCC][2][40] = 70,
[2][0][RTW89_ETSI][1][40] = 56,
[2][0][RTW89_ETSI][0][40] = 18,
[2][0][RTW89_MKK][1][40] = 56,
[2][0][RTW89_MKK][0][40] = 14,
[2][0][RTW89_IC][1][40] = 8,
+ [2][0][RTW89_IC][2][40] = 70,
[2][0][RTW89_KCC][1][40] = -2,
[2][0][RTW89_KCC][0][40] = -2,
[2][0][RTW89_ACMA][1][40] = 56,
@@ -54695,13 +57668,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][40] = 18,
[2][0][RTW89_UK][1][40] = 56,
[2][0][RTW89_UK][0][40] = 18,
+ [2][0][RTW89_THAILAND][1][40] = 52,
+ [2][0][RTW89_THAILAND][0][40] = 8,
[2][0][RTW89_FCC][1][42] = 8,
- [2][0][RTW89_FCC][2][42] = 78,
+ [2][0][RTW89_FCC][2][42] = 70,
[2][0][RTW89_ETSI][1][42] = 56,
[2][0][RTW89_ETSI][0][42] = 18,
[2][0][RTW89_MKK][1][42] = 56,
[2][0][RTW89_MKK][0][42] = 14,
[2][0][RTW89_IC][1][42] = 8,
+ [2][0][RTW89_IC][2][42] = 70,
[2][0][RTW89_KCC][1][42] = -2,
[2][0][RTW89_KCC][0][42] = -2,
[2][0][RTW89_ACMA][1][42] = 56,
@@ -54711,13 +57687,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][42] = 18,
[2][0][RTW89_UK][1][42] = 56,
[2][0][RTW89_UK][0][42] = 18,
+ [2][0][RTW89_THAILAND][1][42] = 52,
+ [2][0][RTW89_THAILAND][0][42] = 8,
[2][0][RTW89_FCC][1][44] = 8,
- [2][0][RTW89_FCC][2][44] = 78,
+ [2][0][RTW89_FCC][2][44] = 70,
[2][0][RTW89_ETSI][1][44] = 56,
[2][0][RTW89_ETSI][0][44] = 18,
[2][0][RTW89_MKK][1][44] = 32,
[2][0][RTW89_MKK][0][44] = 14,
[2][0][RTW89_IC][1][44] = 8,
+ [2][0][RTW89_IC][2][44] = 70,
[2][0][RTW89_KCC][1][44] = -2,
[2][0][RTW89_KCC][0][44] = -2,
[2][0][RTW89_ACMA][1][44] = 56,
@@ -54727,6 +57706,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][44] = 18,
[2][0][RTW89_UK][1][44] = 56,
[2][0][RTW89_UK][0][44] = 18,
+ [2][0][RTW89_THAILAND][1][44] = 52,
+ [2][0][RTW89_THAILAND][0][44] = 8,
[2][0][RTW89_FCC][1][45] = 8,
[2][0][RTW89_FCC][2][45] = 127,
[2][0][RTW89_ETSI][1][45] = 127,
@@ -54734,6 +57715,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][45] = 127,
[2][0][RTW89_MKK][0][45] = 127,
[2][0][RTW89_IC][1][45] = 8,
+ [2][0][RTW89_IC][2][45] = 70,
[2][0][RTW89_KCC][1][45] = -2,
[2][0][RTW89_KCC][0][45] = 127,
[2][0][RTW89_ACMA][1][45] = 127,
@@ -54743,6 +57725,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][45] = 127,
[2][0][RTW89_UK][1][45] = 127,
[2][0][RTW89_UK][0][45] = 127,
+ [2][0][RTW89_THAILAND][1][45] = 127,
+ [2][0][RTW89_THAILAND][0][45] = 127,
[2][0][RTW89_FCC][1][47] = 8,
[2][0][RTW89_FCC][2][47] = 127,
[2][0][RTW89_ETSI][1][47] = 127,
@@ -54750,6 +57734,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][47] = 127,
[2][0][RTW89_MKK][0][47] = 127,
[2][0][RTW89_IC][1][47] = 8,
+ [2][0][RTW89_IC][2][47] = 70,
[2][0][RTW89_KCC][1][47] = -2,
[2][0][RTW89_KCC][0][47] = 127,
[2][0][RTW89_ACMA][1][47] = 127,
@@ -54759,6 +57744,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][47] = 127,
[2][0][RTW89_UK][1][47] = 127,
[2][0][RTW89_UK][0][47] = 127,
+ [2][0][RTW89_THAILAND][1][47] = 127,
+ [2][0][RTW89_THAILAND][0][47] = 127,
[2][0][RTW89_FCC][1][49] = 8,
[2][0][RTW89_FCC][2][49] = 127,
[2][0][RTW89_ETSI][1][49] = 127,
@@ -54766,6 +57753,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][49] = 127,
[2][0][RTW89_MKK][0][49] = 127,
[2][0][RTW89_IC][1][49] = 8,
+ [2][0][RTW89_IC][2][49] = 70,
[2][0][RTW89_KCC][1][49] = -2,
[2][0][RTW89_KCC][0][49] = 127,
[2][0][RTW89_ACMA][1][49] = 127,
@@ -54775,6 +57763,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][49] = 127,
[2][0][RTW89_UK][1][49] = 127,
[2][0][RTW89_UK][0][49] = 127,
+ [2][0][RTW89_THAILAND][1][49] = 127,
+ [2][0][RTW89_THAILAND][0][49] = 127,
[2][0][RTW89_FCC][1][51] = 8,
[2][0][RTW89_FCC][2][51] = 127,
[2][0][RTW89_ETSI][1][51] = 127,
@@ -54782,6 +57772,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][51] = 127,
[2][0][RTW89_MKK][0][51] = 127,
[2][0][RTW89_IC][1][51] = 8,
+ [2][0][RTW89_IC][2][51] = 70,
[2][0][RTW89_KCC][1][51] = -2,
[2][0][RTW89_KCC][0][51] = 127,
[2][0][RTW89_ACMA][1][51] = 127,
@@ -54791,6 +57782,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][51] = 127,
[2][0][RTW89_UK][1][51] = 127,
[2][0][RTW89_UK][0][51] = 127,
+ [2][0][RTW89_THAILAND][1][51] = 127,
+ [2][0][RTW89_THAILAND][0][51] = 127,
[2][0][RTW89_FCC][1][53] = 8,
[2][0][RTW89_FCC][2][53] = 127,
[2][0][RTW89_ETSI][1][53] = 127,
@@ -54798,6 +57791,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][53] = 127,
[2][0][RTW89_MKK][0][53] = 127,
[2][0][RTW89_IC][1][53] = 8,
+ [2][0][RTW89_IC][2][53] = 70,
[2][0][RTW89_KCC][1][53] = -2,
[2][0][RTW89_KCC][0][53] = 127,
[2][0][RTW89_ACMA][1][53] = 127,
@@ -54807,13 +57801,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][53] = 127,
[2][0][RTW89_UK][1][53] = 127,
[2][0][RTW89_UK][0][53] = 127,
+ [2][0][RTW89_THAILAND][1][53] = 127,
+ [2][0][RTW89_THAILAND][0][53] = 127,
[2][0][RTW89_FCC][1][55] = 8,
- [2][0][RTW89_FCC][2][55] = 78,
+ [2][0][RTW89_FCC][2][55] = 68,
[2][0][RTW89_ETSI][1][55] = 127,
[2][0][RTW89_ETSI][0][55] = 127,
[2][0][RTW89_MKK][1][55] = 127,
[2][0][RTW89_MKK][0][55] = 127,
[2][0][RTW89_IC][1][55] = 8,
+ [2][0][RTW89_IC][2][55] = 68,
[2][0][RTW89_KCC][1][55] = -2,
[2][0][RTW89_KCC][0][55] = 127,
[2][0][RTW89_ACMA][1][55] = 127,
@@ -54823,13 +57820,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][55] = 127,
[2][0][RTW89_UK][1][55] = 127,
[2][0][RTW89_UK][0][55] = 127,
+ [2][0][RTW89_THAILAND][1][55] = 127,
+ [2][0][RTW89_THAILAND][0][55] = 127,
[2][0][RTW89_FCC][1][57] = 8,
- [2][0][RTW89_FCC][2][57] = 78,
+ [2][0][RTW89_FCC][2][57] = 68,
[2][0][RTW89_ETSI][1][57] = 127,
[2][0][RTW89_ETSI][0][57] = 127,
[2][0][RTW89_MKK][1][57] = 127,
[2][0][RTW89_MKK][0][57] = 127,
[2][0][RTW89_IC][1][57] = 8,
+ [2][0][RTW89_IC][2][57] = 68,
[2][0][RTW89_KCC][1][57] = -2,
[2][0][RTW89_KCC][0][57] = 127,
[2][0][RTW89_ACMA][1][57] = 127,
@@ -54839,13 +57839,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][57] = 127,
[2][0][RTW89_UK][1][57] = 127,
[2][0][RTW89_UK][0][57] = 127,
+ [2][0][RTW89_THAILAND][1][57] = 127,
+ [2][0][RTW89_THAILAND][0][57] = 127,
[2][0][RTW89_FCC][1][59] = 8,
- [2][0][RTW89_FCC][2][59] = 78,
+ [2][0][RTW89_FCC][2][59] = 68,
[2][0][RTW89_ETSI][1][59] = 127,
[2][0][RTW89_ETSI][0][59] = 127,
[2][0][RTW89_MKK][1][59] = 127,
[2][0][RTW89_MKK][0][59] = 127,
[2][0][RTW89_IC][1][59] = 8,
+ [2][0][RTW89_IC][2][59] = 68,
[2][0][RTW89_KCC][1][59] = -2,
[2][0][RTW89_KCC][0][59] = 127,
[2][0][RTW89_ACMA][1][59] = 127,
@@ -54855,13 +57858,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][59] = 127,
[2][0][RTW89_UK][1][59] = 127,
[2][0][RTW89_UK][0][59] = 127,
+ [2][0][RTW89_THAILAND][1][59] = 127,
+ [2][0][RTW89_THAILAND][0][59] = 127,
[2][0][RTW89_FCC][1][60] = 8,
- [2][0][RTW89_FCC][2][60] = 78,
+ [2][0][RTW89_FCC][2][60] = 68,
[2][0][RTW89_ETSI][1][60] = 127,
[2][0][RTW89_ETSI][0][60] = 127,
[2][0][RTW89_MKK][1][60] = 127,
[2][0][RTW89_MKK][0][60] = 127,
[2][0][RTW89_IC][1][60] = 8,
+ [2][0][RTW89_IC][2][60] = 68,
[2][0][RTW89_KCC][1][60] = -2,
[2][0][RTW89_KCC][0][60] = 127,
[2][0][RTW89_ACMA][1][60] = 127,
@@ -54871,13 +57877,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][60] = 127,
[2][0][RTW89_UK][1][60] = 127,
[2][0][RTW89_UK][0][60] = 127,
+ [2][0][RTW89_THAILAND][1][60] = 127,
+ [2][0][RTW89_THAILAND][0][60] = 127,
[2][0][RTW89_FCC][1][62] = 8,
- [2][0][RTW89_FCC][2][62] = 78,
+ [2][0][RTW89_FCC][2][62] = 68,
[2][0][RTW89_ETSI][1][62] = 127,
[2][0][RTW89_ETSI][0][62] = 127,
[2][0][RTW89_MKK][1][62] = 127,
[2][0][RTW89_MKK][0][62] = 127,
[2][0][RTW89_IC][1][62] = 8,
+ [2][0][RTW89_IC][2][62] = 68,
[2][0][RTW89_KCC][1][62] = -2,
[2][0][RTW89_KCC][0][62] = 127,
[2][0][RTW89_ACMA][1][62] = 127,
@@ -54887,13 +57896,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][62] = 127,
[2][0][RTW89_UK][1][62] = 127,
[2][0][RTW89_UK][0][62] = 127,
+ [2][0][RTW89_THAILAND][1][62] = 127,
+ [2][0][RTW89_THAILAND][0][62] = 127,
[2][0][RTW89_FCC][1][64] = 8,
- [2][0][RTW89_FCC][2][64] = 78,
+ [2][0][RTW89_FCC][2][64] = 68,
[2][0][RTW89_ETSI][1][64] = 127,
[2][0][RTW89_ETSI][0][64] = 127,
[2][0][RTW89_MKK][1][64] = 127,
[2][0][RTW89_MKK][0][64] = 127,
[2][0][RTW89_IC][1][64] = 8,
+ [2][0][RTW89_IC][2][64] = 68,
[2][0][RTW89_KCC][1][64] = -2,
[2][0][RTW89_KCC][0][64] = 127,
[2][0][RTW89_ACMA][1][64] = 127,
@@ -54903,13 +57915,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][64] = 127,
[2][0][RTW89_UK][1][64] = 127,
[2][0][RTW89_UK][0][64] = 127,
+ [2][0][RTW89_THAILAND][1][64] = 127,
+ [2][0][RTW89_THAILAND][0][64] = 127,
[2][0][RTW89_FCC][1][66] = 8,
- [2][0][RTW89_FCC][2][66] = 78,
+ [2][0][RTW89_FCC][2][66] = 68,
[2][0][RTW89_ETSI][1][66] = 127,
[2][0][RTW89_ETSI][0][66] = 127,
[2][0][RTW89_MKK][1][66] = 127,
[2][0][RTW89_MKK][0][66] = 127,
[2][0][RTW89_IC][1][66] = 8,
+ [2][0][RTW89_IC][2][66] = 68,
[2][0][RTW89_KCC][1][66] = -2,
[2][0][RTW89_KCC][0][66] = 127,
[2][0][RTW89_ACMA][1][66] = 127,
@@ -54919,13 +57934,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][66] = 127,
[2][0][RTW89_UK][1][66] = 127,
[2][0][RTW89_UK][0][66] = 127,
+ [2][0][RTW89_THAILAND][1][66] = 127,
+ [2][0][RTW89_THAILAND][0][66] = 127,
[2][0][RTW89_FCC][1][68] = 8,
- [2][0][RTW89_FCC][2][68] = 78,
+ [2][0][RTW89_FCC][2][68] = 68,
[2][0][RTW89_ETSI][1][68] = 127,
[2][0][RTW89_ETSI][0][68] = 127,
[2][0][RTW89_MKK][1][68] = 127,
[2][0][RTW89_MKK][0][68] = 127,
[2][0][RTW89_IC][1][68] = 8,
+ [2][0][RTW89_IC][2][68] = 68,
[2][0][RTW89_KCC][1][68] = -2,
[2][0][RTW89_KCC][0][68] = 127,
[2][0][RTW89_ACMA][1][68] = 127,
@@ -54935,13 +57953,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][68] = 127,
[2][0][RTW89_UK][1][68] = 127,
[2][0][RTW89_UK][0][68] = 127,
+ [2][0][RTW89_THAILAND][1][68] = 127,
+ [2][0][RTW89_THAILAND][0][68] = 127,
[2][0][RTW89_FCC][1][70] = 8,
- [2][0][RTW89_FCC][2][70] = 78,
+ [2][0][RTW89_FCC][2][70] = 68,
[2][0][RTW89_ETSI][1][70] = 127,
[2][0][RTW89_ETSI][0][70] = 127,
[2][0][RTW89_MKK][1][70] = 127,
[2][0][RTW89_MKK][0][70] = 127,
[2][0][RTW89_IC][1][70] = 8,
+ [2][0][RTW89_IC][2][70] = 68,
[2][0][RTW89_KCC][1][70] = -2,
[2][0][RTW89_KCC][0][70] = 127,
[2][0][RTW89_ACMA][1][70] = 127,
@@ -54951,13 +57972,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][70] = 127,
[2][0][RTW89_UK][1][70] = 127,
[2][0][RTW89_UK][0][70] = 127,
+ [2][0][RTW89_THAILAND][1][70] = 127,
+ [2][0][RTW89_THAILAND][0][70] = 127,
[2][0][RTW89_FCC][1][72] = 8,
- [2][0][RTW89_FCC][2][72] = 78,
+ [2][0][RTW89_FCC][2][72] = 68,
[2][0][RTW89_ETSI][1][72] = 127,
[2][0][RTW89_ETSI][0][72] = 127,
[2][0][RTW89_MKK][1][72] = 127,
[2][0][RTW89_MKK][0][72] = 127,
[2][0][RTW89_IC][1][72] = 8,
+ [2][0][RTW89_IC][2][72] = 68,
[2][0][RTW89_KCC][1][72] = -2,
[2][0][RTW89_KCC][0][72] = 127,
[2][0][RTW89_ACMA][1][72] = 127,
@@ -54967,13 +57991,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][72] = 127,
[2][0][RTW89_UK][1][72] = 127,
[2][0][RTW89_UK][0][72] = 127,
+ [2][0][RTW89_THAILAND][1][72] = 127,
+ [2][0][RTW89_THAILAND][0][72] = 127,
[2][0][RTW89_FCC][1][74] = 8,
- [2][0][RTW89_FCC][2][74] = 78,
+ [2][0][RTW89_FCC][2][74] = 68,
[2][0][RTW89_ETSI][1][74] = 127,
[2][0][RTW89_ETSI][0][74] = 127,
[2][0][RTW89_MKK][1][74] = 127,
[2][0][RTW89_MKK][0][74] = 127,
[2][0][RTW89_IC][1][74] = 8,
+ [2][0][RTW89_IC][2][74] = 68,
[2][0][RTW89_KCC][1][74] = -2,
[2][0][RTW89_KCC][0][74] = 127,
[2][0][RTW89_ACMA][1][74] = 127,
@@ -54983,13 +58010,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][74] = 127,
[2][0][RTW89_UK][1][74] = 127,
[2][0][RTW89_UK][0][74] = 127,
+ [2][0][RTW89_THAILAND][1][74] = 127,
+ [2][0][RTW89_THAILAND][0][74] = 127,
[2][0][RTW89_FCC][1][75] = 8,
- [2][0][RTW89_FCC][2][75] = 78,
+ [2][0][RTW89_FCC][2][75] = 68,
[2][0][RTW89_ETSI][1][75] = 127,
[2][0][RTW89_ETSI][0][75] = 127,
[2][0][RTW89_MKK][1][75] = 127,
[2][0][RTW89_MKK][0][75] = 127,
[2][0][RTW89_IC][1][75] = 8,
+ [2][0][RTW89_IC][2][75] = 68,
[2][0][RTW89_KCC][1][75] = -2,
[2][0][RTW89_KCC][0][75] = 127,
[2][0][RTW89_ACMA][1][75] = 127,
@@ -54999,13 +58029,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][75] = 127,
[2][0][RTW89_UK][1][75] = 127,
[2][0][RTW89_UK][0][75] = 127,
+ [2][0][RTW89_THAILAND][1][75] = 127,
+ [2][0][RTW89_THAILAND][0][75] = 127,
[2][0][RTW89_FCC][1][77] = 8,
- [2][0][RTW89_FCC][2][77] = 78,
+ [2][0][RTW89_FCC][2][77] = 68,
[2][0][RTW89_ETSI][1][77] = 127,
[2][0][RTW89_ETSI][0][77] = 127,
[2][0][RTW89_MKK][1][77] = 127,
[2][0][RTW89_MKK][0][77] = 127,
[2][0][RTW89_IC][1][77] = 8,
+ [2][0][RTW89_IC][2][77] = 68,
[2][0][RTW89_KCC][1][77] = -2,
[2][0][RTW89_KCC][0][77] = 127,
[2][0][RTW89_ACMA][1][77] = 127,
@@ -55015,13 +58048,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][77] = 127,
[2][0][RTW89_UK][1][77] = 127,
[2][0][RTW89_UK][0][77] = 127,
+ [2][0][RTW89_THAILAND][1][77] = 127,
+ [2][0][RTW89_THAILAND][0][77] = 127,
[2][0][RTW89_FCC][1][79] = 8,
- [2][0][RTW89_FCC][2][79] = 78,
+ [2][0][RTW89_FCC][2][79] = 68,
[2][0][RTW89_ETSI][1][79] = 127,
[2][0][RTW89_ETSI][0][79] = 127,
[2][0][RTW89_MKK][1][79] = 127,
[2][0][RTW89_MKK][0][79] = 127,
[2][0][RTW89_IC][1][79] = 8,
+ [2][0][RTW89_IC][2][79] = 68,
[2][0][RTW89_KCC][1][79] = -2,
[2][0][RTW89_KCC][0][79] = 127,
[2][0][RTW89_ACMA][1][79] = 127,
@@ -55031,13 +58067,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][79] = 127,
[2][0][RTW89_UK][1][79] = 127,
[2][0][RTW89_UK][0][79] = 127,
+ [2][0][RTW89_THAILAND][1][79] = 127,
+ [2][0][RTW89_THAILAND][0][79] = 127,
[2][0][RTW89_FCC][1][81] = 8,
- [2][0][RTW89_FCC][2][81] = 78,
+ [2][0][RTW89_FCC][2][81] = 68,
[2][0][RTW89_ETSI][1][81] = 127,
[2][0][RTW89_ETSI][0][81] = 127,
[2][0][RTW89_MKK][1][81] = 127,
[2][0][RTW89_MKK][0][81] = 127,
[2][0][RTW89_IC][1][81] = 8,
+ [2][0][RTW89_IC][2][81] = 68,
[2][0][RTW89_KCC][1][81] = -2,
[2][0][RTW89_KCC][0][81] = 127,
[2][0][RTW89_ACMA][1][81] = 127,
@@ -55047,13 +58086,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][81] = 127,
[2][0][RTW89_UK][1][81] = 127,
[2][0][RTW89_UK][0][81] = 127,
+ [2][0][RTW89_THAILAND][1][81] = 127,
+ [2][0][RTW89_THAILAND][0][81] = 127,
[2][0][RTW89_FCC][1][83] = 8,
- [2][0][RTW89_FCC][2][83] = 78,
+ [2][0][RTW89_FCC][2][83] = 68,
[2][0][RTW89_ETSI][1][83] = 127,
[2][0][RTW89_ETSI][0][83] = 127,
[2][0][RTW89_MKK][1][83] = 127,
[2][0][RTW89_MKK][0][83] = 127,
[2][0][RTW89_IC][1][83] = 8,
+ [2][0][RTW89_IC][2][83] = 68,
[2][0][RTW89_KCC][1][83] = -2,
[2][0][RTW89_KCC][0][83] = 127,
[2][0][RTW89_ACMA][1][83] = 127,
@@ -55063,13 +58105,16 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][83] = 127,
[2][0][RTW89_UK][1][83] = 127,
[2][0][RTW89_UK][0][83] = 127,
+ [2][0][RTW89_THAILAND][1][83] = 127,
+ [2][0][RTW89_THAILAND][0][83] = 127,
[2][0][RTW89_FCC][1][85] = 8,
- [2][0][RTW89_FCC][2][85] = 78,
+ [2][0][RTW89_FCC][2][85] = 68,
[2][0][RTW89_ETSI][1][85] = 127,
[2][0][RTW89_ETSI][0][85] = 127,
[2][0][RTW89_MKK][1][85] = 127,
[2][0][RTW89_MKK][0][85] = 127,
[2][0][RTW89_IC][1][85] = 8,
+ [2][0][RTW89_IC][2][85] = 68,
[2][0][RTW89_KCC][1][85] = -2,
[2][0][RTW89_KCC][0][85] = 127,
[2][0][RTW89_ACMA][1][85] = 127,
@@ -55079,6 +58124,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][85] = 127,
[2][0][RTW89_UK][1][85] = 127,
[2][0][RTW89_UK][0][85] = 127,
+ [2][0][RTW89_THAILAND][1][85] = 127,
+ [2][0][RTW89_THAILAND][0][85] = 127,
[2][0][RTW89_FCC][1][87] = 8,
[2][0][RTW89_FCC][2][87] = 127,
[2][0][RTW89_ETSI][1][87] = 127,
@@ -55086,6 +58133,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][87] = 127,
[2][0][RTW89_MKK][0][87] = 127,
[2][0][RTW89_IC][1][87] = 8,
+ [2][0][RTW89_IC][2][87] = 127,
[2][0][RTW89_KCC][1][87] = -2,
[2][0][RTW89_KCC][0][87] = 127,
[2][0][RTW89_ACMA][1][87] = 127,
@@ -55095,6 +58143,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][87] = 127,
[2][0][RTW89_UK][1][87] = 127,
[2][0][RTW89_UK][0][87] = 127,
+ [2][0][RTW89_THAILAND][1][87] = 127,
+ [2][0][RTW89_THAILAND][0][87] = 127,
[2][0][RTW89_FCC][1][89] = 8,
[2][0][RTW89_FCC][2][89] = 127,
[2][0][RTW89_ETSI][1][89] = 127,
@@ -55102,6 +58152,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][89] = 127,
[2][0][RTW89_MKK][0][89] = 127,
[2][0][RTW89_IC][1][89] = 8,
+ [2][0][RTW89_IC][2][89] = 127,
[2][0][RTW89_KCC][1][89] = -2,
[2][0][RTW89_KCC][0][89] = 127,
[2][0][RTW89_ACMA][1][89] = 127,
@@ -55111,6 +58162,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][89] = 127,
[2][0][RTW89_UK][1][89] = 127,
[2][0][RTW89_UK][0][89] = 127,
+ [2][0][RTW89_THAILAND][1][89] = 127,
+ [2][0][RTW89_THAILAND][0][89] = 127,
[2][0][RTW89_FCC][1][90] = 8,
[2][0][RTW89_FCC][2][90] = 127,
[2][0][RTW89_ETSI][1][90] = 127,
@@ -55118,6 +58171,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][90] = 127,
[2][0][RTW89_MKK][0][90] = 127,
[2][0][RTW89_IC][1][90] = 8,
+ [2][0][RTW89_IC][2][90] = 127,
[2][0][RTW89_KCC][1][90] = -2,
[2][0][RTW89_KCC][0][90] = 127,
[2][0][RTW89_ACMA][1][90] = 127,
@@ -55127,6 +58181,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][90] = 127,
[2][0][RTW89_UK][1][90] = 127,
[2][0][RTW89_UK][0][90] = 127,
+ [2][0][RTW89_THAILAND][1][90] = 127,
+ [2][0][RTW89_THAILAND][0][90] = 127,
[2][0][RTW89_FCC][1][92] = 8,
[2][0][RTW89_FCC][2][92] = 127,
[2][0][RTW89_ETSI][1][92] = 127,
@@ -55134,6 +58190,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][92] = 127,
[2][0][RTW89_MKK][0][92] = 127,
[2][0][RTW89_IC][1][92] = 8,
+ [2][0][RTW89_IC][2][92] = 127,
[2][0][RTW89_KCC][1][92] = -2,
[2][0][RTW89_KCC][0][92] = 127,
[2][0][RTW89_ACMA][1][92] = 127,
@@ -55143,6 +58200,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][92] = 127,
[2][0][RTW89_UK][1][92] = 127,
[2][0][RTW89_UK][0][92] = 127,
+ [2][0][RTW89_THAILAND][1][92] = 127,
+ [2][0][RTW89_THAILAND][0][92] = 127,
[2][0][RTW89_FCC][1][94] = 8,
[2][0][RTW89_FCC][2][94] = 127,
[2][0][RTW89_ETSI][1][94] = 127,
@@ -55150,6 +58209,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][94] = 127,
[2][0][RTW89_MKK][0][94] = 127,
[2][0][RTW89_IC][1][94] = 8,
+ [2][0][RTW89_IC][2][94] = 127,
[2][0][RTW89_KCC][1][94] = -2,
[2][0][RTW89_KCC][0][94] = 127,
[2][0][RTW89_ACMA][1][94] = 127,
@@ -55159,6 +58219,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][94] = 127,
[2][0][RTW89_UK][1][94] = 127,
[2][0][RTW89_UK][0][94] = 127,
+ [2][0][RTW89_THAILAND][1][94] = 127,
+ [2][0][RTW89_THAILAND][0][94] = 127,
[2][0][RTW89_FCC][1][96] = 8,
[2][0][RTW89_FCC][2][96] = 127,
[2][0][RTW89_ETSI][1][96] = 127,
@@ -55166,6 +58228,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][96] = 127,
[2][0][RTW89_MKK][0][96] = 127,
[2][0][RTW89_IC][1][96] = 8,
+ [2][0][RTW89_IC][2][96] = 127,
[2][0][RTW89_KCC][1][96] = -2,
[2][0][RTW89_KCC][0][96] = 127,
[2][0][RTW89_ACMA][1][96] = 127,
@@ -55175,6 +58238,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][96] = 127,
[2][0][RTW89_UK][1][96] = 127,
[2][0][RTW89_UK][0][96] = 127,
+ [2][0][RTW89_THAILAND][1][96] = 127,
+ [2][0][RTW89_THAILAND][0][96] = 127,
[2][0][RTW89_FCC][1][98] = 8,
[2][0][RTW89_FCC][2][98] = 127,
[2][0][RTW89_ETSI][1][98] = 127,
@@ -55182,6 +58247,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][98] = 127,
[2][0][RTW89_MKK][0][98] = 127,
[2][0][RTW89_IC][1][98] = 8,
+ [2][0][RTW89_IC][2][98] = 127,
[2][0][RTW89_KCC][1][98] = -2,
[2][0][RTW89_KCC][0][98] = 127,
[2][0][RTW89_ACMA][1][98] = 127,
@@ -55191,6 +58257,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][98] = 127,
[2][0][RTW89_UK][1][98] = 127,
[2][0][RTW89_UK][0][98] = 127,
+ [2][0][RTW89_THAILAND][1][98] = 127,
+ [2][0][RTW89_THAILAND][0][98] = 127,
[2][0][RTW89_FCC][1][100] = 8,
[2][0][RTW89_FCC][2][100] = 127,
[2][0][RTW89_ETSI][1][100] = 127,
@@ -55198,6 +58266,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][100] = 127,
[2][0][RTW89_MKK][0][100] = 127,
[2][0][RTW89_IC][1][100] = 8,
+ [2][0][RTW89_IC][2][100] = 127,
[2][0][RTW89_KCC][1][100] = -2,
[2][0][RTW89_KCC][0][100] = 127,
[2][0][RTW89_ACMA][1][100] = 127,
@@ -55207,6 +58276,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][100] = 127,
[2][0][RTW89_UK][1][100] = 127,
[2][0][RTW89_UK][0][100] = 127,
+ [2][0][RTW89_THAILAND][1][100] = 127,
+ [2][0][RTW89_THAILAND][0][100] = 127,
[2][0][RTW89_FCC][1][102] = 8,
[2][0][RTW89_FCC][2][102] = 127,
[2][0][RTW89_ETSI][1][102] = 127,
@@ -55214,6 +58285,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][102] = 127,
[2][0][RTW89_MKK][0][102] = 127,
[2][0][RTW89_IC][1][102] = 8,
+ [2][0][RTW89_IC][2][102] = 127,
[2][0][RTW89_KCC][1][102] = -2,
[2][0][RTW89_KCC][0][102] = 127,
[2][0][RTW89_ACMA][1][102] = 127,
@@ -55223,6 +58295,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][102] = 127,
[2][0][RTW89_UK][1][102] = 127,
[2][0][RTW89_UK][0][102] = 127,
+ [2][0][RTW89_THAILAND][1][102] = 127,
+ [2][0][RTW89_THAILAND][0][102] = 127,
[2][0][RTW89_FCC][1][104] = 8,
[2][0][RTW89_FCC][2][104] = 127,
[2][0][RTW89_ETSI][1][104] = 127,
@@ -55230,6 +58304,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][104] = 127,
[2][0][RTW89_MKK][0][104] = 127,
[2][0][RTW89_IC][1][104] = 8,
+ [2][0][RTW89_IC][2][104] = 127,
[2][0][RTW89_KCC][1][104] = -2,
[2][0][RTW89_KCC][0][104] = 127,
[2][0][RTW89_ACMA][1][104] = 127,
@@ -55239,6 +58314,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][104] = 127,
[2][0][RTW89_UK][1][104] = 127,
[2][0][RTW89_UK][0][104] = 127,
+ [2][0][RTW89_THAILAND][1][104] = 127,
+ [2][0][RTW89_THAILAND][0][104] = 127,
[2][0][RTW89_FCC][1][105] = 8,
[2][0][RTW89_FCC][2][105] = 127,
[2][0][RTW89_ETSI][1][105] = 127,
@@ -55246,6 +58323,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][105] = 127,
[2][0][RTW89_MKK][0][105] = 127,
[2][0][RTW89_IC][1][105] = 8,
+ [2][0][RTW89_IC][2][105] = 127,
[2][0][RTW89_KCC][1][105] = -2,
[2][0][RTW89_KCC][0][105] = 127,
[2][0][RTW89_ACMA][1][105] = 127,
@@ -55255,6 +58333,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][105] = 127,
[2][0][RTW89_UK][1][105] = 127,
[2][0][RTW89_UK][0][105] = 127,
+ [2][0][RTW89_THAILAND][1][105] = 127,
+ [2][0][RTW89_THAILAND][0][105] = 127,
[2][0][RTW89_FCC][1][107] = 10,
[2][0][RTW89_FCC][2][107] = 127,
[2][0][RTW89_ETSI][1][107] = 127,
@@ -55262,6 +58342,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][107] = 127,
[2][0][RTW89_MKK][0][107] = 127,
[2][0][RTW89_IC][1][107] = 10,
+ [2][0][RTW89_IC][2][107] = 127,
[2][0][RTW89_KCC][1][107] = -2,
[2][0][RTW89_KCC][0][107] = 127,
[2][0][RTW89_ACMA][1][107] = 127,
@@ -55271,6 +58352,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][107] = 127,
[2][0][RTW89_UK][1][107] = 127,
[2][0][RTW89_UK][0][107] = 127,
+ [2][0][RTW89_THAILAND][1][107] = 127,
+ [2][0][RTW89_THAILAND][0][107] = 127,
[2][0][RTW89_FCC][1][109] = 12,
[2][0][RTW89_FCC][2][109] = 127,
[2][0][RTW89_ETSI][1][109] = 127,
@@ -55278,6 +58361,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][109] = 127,
[2][0][RTW89_MKK][0][109] = 127,
[2][0][RTW89_IC][1][109] = 12,
+ [2][0][RTW89_IC][2][109] = 127,
[2][0][RTW89_KCC][1][109] = 127,
[2][0][RTW89_KCC][0][109] = 127,
[2][0][RTW89_ACMA][1][109] = 127,
@@ -55287,6 +58371,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][109] = 127,
[2][0][RTW89_UK][1][109] = 127,
[2][0][RTW89_UK][0][109] = 127,
+ [2][0][RTW89_THAILAND][1][109] = 127,
+ [2][0][RTW89_THAILAND][0][109] = 127,
[2][0][RTW89_FCC][1][111] = 127,
[2][0][RTW89_FCC][2][111] = 127,
[2][0][RTW89_ETSI][1][111] = 127,
@@ -55294,6 +58380,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][111] = 127,
[2][0][RTW89_MKK][0][111] = 127,
[2][0][RTW89_IC][1][111] = 127,
+ [2][0][RTW89_IC][2][111] = 127,
[2][0][RTW89_KCC][1][111] = 127,
[2][0][RTW89_KCC][0][111] = 127,
[2][0][RTW89_ACMA][1][111] = 127,
@@ -55303,6 +58390,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][111] = 127,
[2][0][RTW89_UK][1][111] = 127,
[2][0][RTW89_UK][0][111] = 127,
+ [2][0][RTW89_THAILAND][1][111] = 127,
+ [2][0][RTW89_THAILAND][0][111] = 127,
[2][0][RTW89_FCC][1][113] = 127,
[2][0][RTW89_FCC][2][113] = 127,
[2][0][RTW89_ETSI][1][113] = 127,
@@ -55310,6 +58399,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][113] = 127,
[2][0][RTW89_MKK][0][113] = 127,
[2][0][RTW89_IC][1][113] = 127,
+ [2][0][RTW89_IC][2][113] = 127,
[2][0][RTW89_KCC][1][113] = 127,
[2][0][RTW89_KCC][0][113] = 127,
[2][0][RTW89_ACMA][1][113] = 127,
@@ -55319,6 +58409,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][113] = 127,
[2][0][RTW89_UK][1][113] = 127,
[2][0][RTW89_UK][0][113] = 127,
+ [2][0][RTW89_THAILAND][1][113] = 127,
+ [2][0][RTW89_THAILAND][0][113] = 127,
[2][0][RTW89_FCC][1][115] = 127,
[2][0][RTW89_FCC][2][115] = 127,
[2][0][RTW89_ETSI][1][115] = 127,
@@ -55326,6 +58418,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][115] = 127,
[2][0][RTW89_MKK][0][115] = 127,
[2][0][RTW89_IC][1][115] = 127,
+ [2][0][RTW89_IC][2][115] = 127,
[2][0][RTW89_KCC][1][115] = 127,
[2][0][RTW89_KCC][0][115] = 127,
[2][0][RTW89_ACMA][1][115] = 127,
@@ -55335,6 +58428,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][115] = 127,
[2][0][RTW89_UK][1][115] = 127,
[2][0][RTW89_UK][0][115] = 127,
+ [2][0][RTW89_THAILAND][1][115] = 127,
+ [2][0][RTW89_THAILAND][0][115] = 127,
[2][0][RTW89_FCC][1][117] = 127,
[2][0][RTW89_FCC][2][117] = 127,
[2][0][RTW89_ETSI][1][117] = 127,
@@ -55342,6 +58437,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][117] = 127,
[2][0][RTW89_MKK][0][117] = 127,
[2][0][RTW89_IC][1][117] = 127,
+ [2][0][RTW89_IC][2][117] = 127,
[2][0][RTW89_KCC][1][117] = 127,
[2][0][RTW89_KCC][0][117] = 127,
[2][0][RTW89_ACMA][1][117] = 127,
@@ -55351,6 +58447,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][117] = 127,
[2][0][RTW89_UK][1][117] = 127,
[2][0][RTW89_UK][0][117] = 127,
+ [2][0][RTW89_THAILAND][1][117] = 127,
+ [2][0][RTW89_THAILAND][0][117] = 127,
[2][0][RTW89_FCC][1][119] = 127,
[2][0][RTW89_FCC][2][119] = 127,
[2][0][RTW89_ETSI][1][119] = 127,
@@ -55358,6 +58456,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_MKK][1][119] = 127,
[2][0][RTW89_MKK][0][119] = 127,
[2][0][RTW89_IC][1][119] = 127,
+ [2][0][RTW89_IC][2][119] = 127,
[2][0][RTW89_KCC][1][119] = 127,
[2][0][RTW89_KCC][0][119] = 127,
[2][0][RTW89_ACMA][1][119] = 127,
@@ -55367,6 +58466,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][0][RTW89_QATAR][0][119] = 127,
[2][0][RTW89_UK][1][119] = 127,
[2][0][RTW89_UK][0][119] = 127,
+ [2][0][RTW89_THAILAND][1][119] = 127,
+ [2][0][RTW89_THAILAND][0][119] = 127,
[2][1][RTW89_FCC][1][0] = -16,
[2][1][RTW89_FCC][2][0] = 54,
[2][1][RTW89_ETSI][1][0] = 44,
@@ -55374,6 +58475,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][0] = 42,
[2][1][RTW89_MKK][0][0] = 2,
[2][1][RTW89_IC][1][0] = -16,
+ [2][1][RTW89_IC][2][0] = 54,
[2][1][RTW89_KCC][1][0] = -14,
[2][1][RTW89_KCC][0][0] = -14,
[2][1][RTW89_ACMA][1][0] = 44,
@@ -55383,6 +58485,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][0] = 6,
[2][1][RTW89_UK][1][0] = 44,
[2][1][RTW89_UK][0][0] = 6,
+ [2][1][RTW89_THAILAND][1][0] = 28,
+ [2][1][RTW89_THAILAND][0][0] = -16,
[2][1][RTW89_FCC][1][2] = -16,
[2][1][RTW89_FCC][2][2] = 54,
[2][1][RTW89_ETSI][1][2] = 44,
@@ -55390,6 +58494,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][2] = 40,
[2][1][RTW89_MKK][0][2] = 2,
[2][1][RTW89_IC][1][2] = -16,
+ [2][1][RTW89_IC][2][2] = 54,
[2][1][RTW89_KCC][1][2] = -14,
[2][1][RTW89_KCC][0][2] = -14,
[2][1][RTW89_ACMA][1][2] = 44,
@@ -55399,6 +58504,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][2] = 6,
[2][1][RTW89_UK][1][2] = 44,
[2][1][RTW89_UK][0][2] = 6,
+ [2][1][RTW89_THAILAND][1][2] = 28,
+ [2][1][RTW89_THAILAND][0][2] = -16,
[2][1][RTW89_FCC][1][4] = -16,
[2][1][RTW89_FCC][2][4] = 54,
[2][1][RTW89_ETSI][1][4] = 44,
@@ -55406,6 +58513,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][4] = 40,
[2][1][RTW89_MKK][0][4] = 2,
[2][1][RTW89_IC][1][4] = -16,
+ [2][1][RTW89_IC][2][4] = 54,
[2][1][RTW89_KCC][1][4] = -14,
[2][1][RTW89_KCC][0][4] = -14,
[2][1][RTW89_ACMA][1][4] = 44,
@@ -55415,6 +58523,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][4] = 6,
[2][1][RTW89_UK][1][4] = 44,
[2][1][RTW89_UK][0][4] = 6,
+ [2][1][RTW89_THAILAND][1][4] = 28,
+ [2][1][RTW89_THAILAND][0][4] = -16,
[2][1][RTW89_FCC][1][6] = -16,
[2][1][RTW89_FCC][2][6] = 54,
[2][1][RTW89_ETSI][1][6] = 44,
@@ -55422,6 +58532,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][6] = 40,
[2][1][RTW89_MKK][0][6] = 2,
[2][1][RTW89_IC][1][6] = -16,
+ [2][1][RTW89_IC][2][6] = 54,
[2][1][RTW89_KCC][1][6] = -14,
[2][1][RTW89_KCC][0][6] = -14,
[2][1][RTW89_ACMA][1][6] = 44,
@@ -55431,6 +58542,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][6] = 6,
[2][1][RTW89_UK][1][6] = 44,
[2][1][RTW89_UK][0][6] = 6,
+ [2][1][RTW89_THAILAND][1][6] = 28,
+ [2][1][RTW89_THAILAND][0][6] = -16,
[2][1][RTW89_FCC][1][8] = -16,
[2][1][RTW89_FCC][2][8] = 54,
[2][1][RTW89_ETSI][1][8] = 44,
@@ -55438,6 +58551,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][8] = 40,
[2][1][RTW89_MKK][0][8] = 2,
[2][1][RTW89_IC][1][8] = -16,
+ [2][1][RTW89_IC][2][8] = 54,
[2][1][RTW89_KCC][1][8] = -14,
[2][1][RTW89_KCC][0][8] = -14,
[2][1][RTW89_ACMA][1][8] = 44,
@@ -55447,6 +58561,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][8] = 6,
[2][1][RTW89_UK][1][8] = 44,
[2][1][RTW89_UK][0][8] = 6,
+ [2][1][RTW89_THAILAND][1][8] = 28,
+ [2][1][RTW89_THAILAND][0][8] = -16,
[2][1][RTW89_FCC][1][10] = -16,
[2][1][RTW89_FCC][2][10] = 54,
[2][1][RTW89_ETSI][1][10] = 44,
@@ -55454,6 +58570,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][10] = 40,
[2][1][RTW89_MKK][0][10] = 2,
[2][1][RTW89_IC][1][10] = -16,
+ [2][1][RTW89_IC][2][10] = 54,
[2][1][RTW89_KCC][1][10] = -14,
[2][1][RTW89_KCC][0][10] = -14,
[2][1][RTW89_ACMA][1][10] = 44,
@@ -55463,6 +58580,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][10] = 6,
[2][1][RTW89_UK][1][10] = 44,
[2][1][RTW89_UK][0][10] = 6,
+ [2][1][RTW89_THAILAND][1][10] = 28,
+ [2][1][RTW89_THAILAND][0][10] = -16,
[2][1][RTW89_FCC][1][12] = -16,
[2][1][RTW89_FCC][2][12] = 54,
[2][1][RTW89_ETSI][1][12] = 44,
@@ -55470,6 +58589,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][12] = 40,
[2][1][RTW89_MKK][0][12] = 2,
[2][1][RTW89_IC][1][12] = -16,
+ [2][1][RTW89_IC][2][12] = 54,
[2][1][RTW89_KCC][1][12] = -14,
[2][1][RTW89_KCC][0][12] = -14,
[2][1][RTW89_ACMA][1][12] = 44,
@@ -55479,6 +58599,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][12] = 6,
[2][1][RTW89_UK][1][12] = 44,
[2][1][RTW89_UK][0][12] = 6,
+ [2][1][RTW89_THAILAND][1][12] = 28,
+ [2][1][RTW89_THAILAND][0][12] = -16,
[2][1][RTW89_FCC][1][14] = -16,
[2][1][RTW89_FCC][2][14] = 54,
[2][1][RTW89_ETSI][1][14] = 44,
@@ -55486,6 +58608,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][14] = 40,
[2][1][RTW89_MKK][0][14] = 2,
[2][1][RTW89_IC][1][14] = -16,
+ [2][1][RTW89_IC][2][14] = 54,
[2][1][RTW89_KCC][1][14] = -14,
[2][1][RTW89_KCC][0][14] = -14,
[2][1][RTW89_ACMA][1][14] = 44,
@@ -55495,6 +58618,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][14] = 6,
[2][1][RTW89_UK][1][14] = 44,
[2][1][RTW89_UK][0][14] = 6,
+ [2][1][RTW89_THAILAND][1][14] = 28,
+ [2][1][RTW89_THAILAND][0][14] = -16,
[2][1][RTW89_FCC][1][15] = -16,
[2][1][RTW89_FCC][2][15] = 54,
[2][1][RTW89_ETSI][1][15] = 44,
@@ -55502,6 +58627,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][15] = 40,
[2][1][RTW89_MKK][0][15] = 2,
[2][1][RTW89_IC][1][15] = -16,
+ [2][1][RTW89_IC][2][15] = 54,
[2][1][RTW89_KCC][1][15] = -14,
[2][1][RTW89_KCC][0][15] = -14,
[2][1][RTW89_ACMA][1][15] = 44,
@@ -55511,6 +58637,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][15] = 6,
[2][1][RTW89_UK][1][15] = 44,
[2][1][RTW89_UK][0][15] = 6,
+ [2][1][RTW89_THAILAND][1][15] = 28,
+ [2][1][RTW89_THAILAND][0][15] = -16,
[2][1][RTW89_FCC][1][17] = -16,
[2][1][RTW89_FCC][2][17] = 54,
[2][1][RTW89_ETSI][1][17] = 44,
@@ -55518,6 +58646,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][17] = 40,
[2][1][RTW89_MKK][0][17] = 2,
[2][1][RTW89_IC][1][17] = -16,
+ [2][1][RTW89_IC][2][17] = 54,
[2][1][RTW89_KCC][1][17] = -14,
[2][1][RTW89_KCC][0][17] = -14,
[2][1][RTW89_ACMA][1][17] = 44,
@@ -55527,6 +58656,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][17] = 6,
[2][1][RTW89_UK][1][17] = 44,
[2][1][RTW89_UK][0][17] = 6,
+ [2][1][RTW89_THAILAND][1][17] = 28,
+ [2][1][RTW89_THAILAND][0][17] = -16,
[2][1][RTW89_FCC][1][19] = -16,
[2][1][RTW89_FCC][2][19] = 54,
[2][1][RTW89_ETSI][1][19] = 44,
@@ -55534,6 +58665,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][19] = 40,
[2][1][RTW89_MKK][0][19] = 2,
[2][1][RTW89_IC][1][19] = -16,
+ [2][1][RTW89_IC][2][19] = 54,
[2][1][RTW89_KCC][1][19] = -14,
[2][1][RTW89_KCC][0][19] = -14,
[2][1][RTW89_ACMA][1][19] = 44,
@@ -55543,6 +58675,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][19] = 6,
[2][1][RTW89_UK][1][19] = 44,
[2][1][RTW89_UK][0][19] = 6,
+ [2][1][RTW89_THAILAND][1][19] = 28,
+ [2][1][RTW89_THAILAND][0][19] = -16,
[2][1][RTW89_FCC][1][21] = -16,
[2][1][RTW89_FCC][2][21] = 54,
[2][1][RTW89_ETSI][1][21] = 44,
@@ -55550,6 +58684,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][21] = 40,
[2][1][RTW89_MKK][0][21] = 2,
[2][1][RTW89_IC][1][21] = -16,
+ [2][1][RTW89_IC][2][21] = 54,
[2][1][RTW89_KCC][1][21] = -14,
[2][1][RTW89_KCC][0][21] = -14,
[2][1][RTW89_ACMA][1][21] = 44,
@@ -55559,6 +58694,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][21] = 6,
[2][1][RTW89_UK][1][21] = 44,
[2][1][RTW89_UK][0][21] = 6,
+ [2][1][RTW89_THAILAND][1][21] = 28,
+ [2][1][RTW89_THAILAND][0][21] = -16,
[2][1][RTW89_FCC][1][23] = -16,
[2][1][RTW89_FCC][2][23] = 54,
[2][1][RTW89_ETSI][1][23] = 44,
@@ -55566,6 +58703,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][23] = 40,
[2][1][RTW89_MKK][0][23] = 2,
[2][1][RTW89_IC][1][23] = -16,
+ [2][1][RTW89_IC][2][23] = 54,
[2][1][RTW89_KCC][1][23] = -14,
[2][1][RTW89_KCC][0][23] = -14,
[2][1][RTW89_ACMA][1][23] = 44,
@@ -55575,6 +58713,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][23] = 6,
[2][1][RTW89_UK][1][23] = 44,
[2][1][RTW89_UK][0][23] = 6,
+ [2][1][RTW89_THAILAND][1][23] = 30,
+ [2][1][RTW89_THAILAND][0][23] = -16,
[2][1][RTW89_FCC][1][25] = -16,
[2][1][RTW89_FCC][2][25] = 54,
[2][1][RTW89_ETSI][1][25] = 44,
@@ -55582,6 +58722,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][25] = 40,
[2][1][RTW89_MKK][0][25] = 2,
[2][1][RTW89_IC][1][25] = -16,
+ [2][1][RTW89_IC][2][25] = 54,
[2][1][RTW89_KCC][1][25] = -14,
[2][1][RTW89_KCC][0][25] = -14,
[2][1][RTW89_ACMA][1][25] = 44,
@@ -55591,6 +58732,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][25] = 6,
[2][1][RTW89_UK][1][25] = 44,
[2][1][RTW89_UK][0][25] = 6,
+ [2][1][RTW89_THAILAND][1][25] = 28,
+ [2][1][RTW89_THAILAND][0][25] = -16,
[2][1][RTW89_FCC][1][27] = -16,
[2][1][RTW89_FCC][2][27] = 54,
[2][1][RTW89_ETSI][1][27] = 44,
@@ -55598,6 +58741,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][27] = 40,
[2][1][RTW89_MKK][0][27] = 2,
[2][1][RTW89_IC][1][27] = -16,
+ [2][1][RTW89_IC][2][27] = 54,
[2][1][RTW89_KCC][1][27] = -14,
[2][1][RTW89_KCC][0][27] = -14,
[2][1][RTW89_ACMA][1][27] = 44,
@@ -55607,6 +58751,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][27] = 6,
[2][1][RTW89_UK][1][27] = 44,
[2][1][RTW89_UK][0][27] = 6,
+ [2][1][RTW89_THAILAND][1][27] = 28,
+ [2][1][RTW89_THAILAND][0][27] = -16,
[2][1][RTW89_FCC][1][29] = -16,
[2][1][RTW89_FCC][2][29] = 54,
[2][1][RTW89_ETSI][1][29] = 44,
@@ -55614,6 +58760,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][29] = 40,
[2][1][RTW89_MKK][0][29] = 2,
[2][1][RTW89_IC][1][29] = -16,
+ [2][1][RTW89_IC][2][29] = 54,
[2][1][RTW89_KCC][1][29] = -14,
[2][1][RTW89_KCC][0][29] = -14,
[2][1][RTW89_ACMA][1][29] = 44,
@@ -55623,6 +58770,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][29] = 6,
[2][1][RTW89_UK][1][29] = 44,
[2][1][RTW89_UK][0][29] = 6,
+ [2][1][RTW89_THAILAND][1][29] = 28,
+ [2][1][RTW89_THAILAND][0][29] = -16,
[2][1][RTW89_FCC][1][30] = -16,
[2][1][RTW89_FCC][2][30] = 54,
[2][1][RTW89_ETSI][1][30] = 44,
@@ -55630,6 +58779,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][30] = 40,
[2][1][RTW89_MKK][0][30] = 2,
[2][1][RTW89_IC][1][30] = -16,
+ [2][1][RTW89_IC][2][30] = 54,
[2][1][RTW89_KCC][1][30] = -14,
[2][1][RTW89_KCC][0][30] = -14,
[2][1][RTW89_ACMA][1][30] = 44,
@@ -55639,6 +58789,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][30] = 6,
[2][1][RTW89_UK][1][30] = 44,
[2][1][RTW89_UK][0][30] = 6,
+ [2][1][RTW89_THAILAND][1][30] = 28,
+ [2][1][RTW89_THAILAND][0][30] = -16,
[2][1][RTW89_FCC][1][32] = -16,
[2][1][RTW89_FCC][2][32] = 54,
[2][1][RTW89_ETSI][1][32] = 44,
@@ -55646,6 +58798,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][32] = 40,
[2][1][RTW89_MKK][0][32] = 2,
[2][1][RTW89_IC][1][32] = -16,
+ [2][1][RTW89_IC][2][32] = 54,
[2][1][RTW89_KCC][1][32] = -14,
[2][1][RTW89_KCC][0][32] = -14,
[2][1][RTW89_ACMA][1][32] = 44,
@@ -55655,6 +58808,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][32] = 6,
[2][1][RTW89_UK][1][32] = 44,
[2][1][RTW89_UK][0][32] = 6,
+ [2][1][RTW89_THAILAND][1][32] = 28,
+ [2][1][RTW89_THAILAND][0][32] = -16,
[2][1][RTW89_FCC][1][34] = -16,
[2][1][RTW89_FCC][2][34] = 54,
[2][1][RTW89_ETSI][1][34] = 44,
@@ -55662,6 +58817,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][34] = 40,
[2][1][RTW89_MKK][0][34] = 2,
[2][1][RTW89_IC][1][34] = -16,
+ [2][1][RTW89_IC][2][34] = 54,
[2][1][RTW89_KCC][1][34] = -14,
[2][1][RTW89_KCC][0][34] = -14,
[2][1][RTW89_ACMA][1][34] = 44,
@@ -55671,6 +58827,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][34] = 6,
[2][1][RTW89_UK][1][34] = 44,
[2][1][RTW89_UK][0][34] = 6,
+ [2][1][RTW89_THAILAND][1][34] = 28,
+ [2][1][RTW89_THAILAND][0][34] = -16,
[2][1][RTW89_FCC][1][36] = -16,
[2][1][RTW89_FCC][2][36] = 54,
[2][1][RTW89_ETSI][1][36] = 44,
@@ -55678,6 +58836,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][36] = 40,
[2][1][RTW89_MKK][0][36] = 2,
[2][1][RTW89_IC][1][36] = -16,
+ [2][1][RTW89_IC][2][36] = 54,
[2][1][RTW89_KCC][1][36] = -14,
[2][1][RTW89_KCC][0][36] = -14,
[2][1][RTW89_ACMA][1][36] = 44,
@@ -55687,6 +58846,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][36] = 6,
[2][1][RTW89_UK][1][36] = 44,
[2][1][RTW89_UK][0][36] = 6,
+ [2][1][RTW89_THAILAND][1][36] = 28,
+ [2][1][RTW89_THAILAND][0][36] = -16,
[2][1][RTW89_FCC][1][38] = -16,
[2][1][RTW89_FCC][2][38] = 54,
[2][1][RTW89_ETSI][1][38] = 44,
@@ -55694,6 +58855,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][38] = 40,
[2][1][RTW89_MKK][0][38] = 2,
[2][1][RTW89_IC][1][38] = -16,
+ [2][1][RTW89_IC][2][38] = 54,
[2][1][RTW89_KCC][1][38] = -14,
[2][1][RTW89_KCC][0][38] = -14,
[2][1][RTW89_ACMA][1][38] = 44,
@@ -55703,6 +58865,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][38] = 6,
[2][1][RTW89_UK][1][38] = 44,
[2][1][RTW89_UK][0][38] = 6,
+ [2][1][RTW89_THAILAND][1][38] = 28,
+ [2][1][RTW89_THAILAND][0][38] = -16,
[2][1][RTW89_FCC][1][40] = -16,
[2][1][RTW89_FCC][2][40] = 54,
[2][1][RTW89_ETSI][1][40] = 44,
@@ -55710,6 +58874,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][40] = 40,
[2][1][RTW89_MKK][0][40] = 2,
[2][1][RTW89_IC][1][40] = -16,
+ [2][1][RTW89_IC][2][40] = 54,
[2][1][RTW89_KCC][1][40] = -14,
[2][1][RTW89_KCC][0][40] = -14,
[2][1][RTW89_ACMA][1][40] = 44,
@@ -55719,6 +58884,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][40] = 6,
[2][1][RTW89_UK][1][40] = 44,
[2][1][RTW89_UK][0][40] = 6,
+ [2][1][RTW89_THAILAND][1][40] = 28,
+ [2][1][RTW89_THAILAND][0][40] = -16,
[2][1][RTW89_FCC][1][42] = -16,
[2][1][RTW89_FCC][2][42] = 54,
[2][1][RTW89_ETSI][1][42] = 44,
@@ -55726,6 +58893,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][42] = 40,
[2][1][RTW89_MKK][0][42] = 2,
[2][1][RTW89_IC][1][42] = -16,
+ [2][1][RTW89_IC][2][42] = 54,
[2][1][RTW89_KCC][1][42] = -14,
[2][1][RTW89_KCC][0][42] = -14,
[2][1][RTW89_ACMA][1][42] = 44,
@@ -55735,6 +58903,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][42] = 6,
[2][1][RTW89_UK][1][42] = 44,
[2][1][RTW89_UK][0][42] = 6,
+ [2][1][RTW89_THAILAND][1][42] = 28,
+ [2][1][RTW89_THAILAND][0][42] = -16,
[2][1][RTW89_FCC][1][44] = -16,
[2][1][RTW89_FCC][2][44] = 54,
[2][1][RTW89_ETSI][1][44] = 44,
@@ -55742,6 +58912,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][44] = 16,
[2][1][RTW89_MKK][0][44] = 2,
[2][1][RTW89_IC][1][44] = -16,
+ [2][1][RTW89_IC][2][44] = 54,
[2][1][RTW89_KCC][1][44] = -14,
[2][1][RTW89_KCC][0][44] = -14,
[2][1][RTW89_ACMA][1][44] = 44,
@@ -55751,6 +58922,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][44] = 6,
[2][1][RTW89_UK][1][44] = 44,
[2][1][RTW89_UK][0][44] = 6,
+ [2][1][RTW89_THAILAND][1][44] = 28,
+ [2][1][RTW89_THAILAND][0][44] = -16,
[2][1][RTW89_FCC][1][45] = -16,
[2][1][RTW89_FCC][2][45] = 127,
[2][1][RTW89_ETSI][1][45] = 127,
@@ -55758,6 +58931,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][45] = 127,
[2][1][RTW89_MKK][0][45] = 127,
[2][1][RTW89_IC][1][45] = -16,
+ [2][1][RTW89_IC][2][45] = 56,
[2][1][RTW89_KCC][1][45] = -14,
[2][1][RTW89_KCC][0][45] = 127,
[2][1][RTW89_ACMA][1][45] = 127,
@@ -55767,6 +58941,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][45] = 127,
[2][1][RTW89_UK][1][45] = 127,
[2][1][RTW89_UK][0][45] = 127,
+ [2][1][RTW89_THAILAND][1][45] = 127,
+ [2][1][RTW89_THAILAND][0][45] = 127,
[2][1][RTW89_FCC][1][47] = -16,
[2][1][RTW89_FCC][2][47] = 127,
[2][1][RTW89_ETSI][1][47] = 127,
@@ -55774,6 +58950,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][47] = 127,
[2][1][RTW89_MKK][0][47] = 127,
[2][1][RTW89_IC][1][47] = -16,
+ [2][1][RTW89_IC][2][47] = 56,
[2][1][RTW89_KCC][1][47] = -14,
[2][1][RTW89_KCC][0][47] = 127,
[2][1][RTW89_ACMA][1][47] = 127,
@@ -55783,6 +58960,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][47] = 127,
[2][1][RTW89_UK][1][47] = 127,
[2][1][RTW89_UK][0][47] = 127,
+ [2][1][RTW89_THAILAND][1][47] = 127,
+ [2][1][RTW89_THAILAND][0][47] = 127,
[2][1][RTW89_FCC][1][49] = -16,
[2][1][RTW89_FCC][2][49] = 127,
[2][1][RTW89_ETSI][1][49] = 127,
@@ -55790,6 +58969,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][49] = 127,
[2][1][RTW89_MKK][0][49] = 127,
[2][1][RTW89_IC][1][49] = -16,
+ [2][1][RTW89_IC][2][49] = 56,
[2][1][RTW89_KCC][1][49] = -14,
[2][1][RTW89_KCC][0][49] = 127,
[2][1][RTW89_ACMA][1][49] = 127,
@@ -55799,6 +58979,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][49] = 127,
[2][1][RTW89_UK][1][49] = 127,
[2][1][RTW89_UK][0][49] = 127,
+ [2][1][RTW89_THAILAND][1][49] = 127,
+ [2][1][RTW89_THAILAND][0][49] = 127,
[2][1][RTW89_FCC][1][51] = -16,
[2][1][RTW89_FCC][2][51] = 127,
[2][1][RTW89_ETSI][1][51] = 127,
@@ -55806,6 +58988,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][51] = 127,
[2][1][RTW89_MKK][0][51] = 127,
[2][1][RTW89_IC][1][51] = -16,
+ [2][1][RTW89_IC][2][51] = 56,
[2][1][RTW89_KCC][1][51] = -14,
[2][1][RTW89_KCC][0][51] = 127,
[2][1][RTW89_ACMA][1][51] = 127,
@@ -55815,6 +58998,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][51] = 127,
[2][1][RTW89_UK][1][51] = 127,
[2][1][RTW89_UK][0][51] = 127,
+ [2][1][RTW89_THAILAND][1][51] = 127,
+ [2][1][RTW89_THAILAND][0][51] = 127,
[2][1][RTW89_FCC][1][53] = -16,
[2][1][RTW89_FCC][2][53] = 127,
[2][1][RTW89_ETSI][1][53] = 127,
@@ -55822,6 +59007,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][53] = 127,
[2][1][RTW89_MKK][0][53] = 127,
[2][1][RTW89_IC][1][53] = -16,
+ [2][1][RTW89_IC][2][53] = 56,
[2][1][RTW89_KCC][1][53] = -14,
[2][1][RTW89_KCC][0][53] = 127,
[2][1][RTW89_ACMA][1][53] = 127,
@@ -55831,6 +59017,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][53] = 127,
[2][1][RTW89_UK][1][53] = 127,
[2][1][RTW89_UK][0][53] = 127,
+ [2][1][RTW89_THAILAND][1][53] = 127,
+ [2][1][RTW89_THAILAND][0][53] = 127,
[2][1][RTW89_FCC][1][55] = -16,
[2][1][RTW89_FCC][2][55] = 54,
[2][1][RTW89_ETSI][1][55] = 127,
@@ -55838,6 +59026,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][55] = 127,
[2][1][RTW89_MKK][0][55] = 127,
[2][1][RTW89_IC][1][55] = -16,
+ [2][1][RTW89_IC][2][55] = 54,
[2][1][RTW89_KCC][1][55] = -14,
[2][1][RTW89_KCC][0][55] = 127,
[2][1][RTW89_ACMA][1][55] = 127,
@@ -55847,6 +59036,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][55] = 127,
[2][1][RTW89_UK][1][55] = 127,
[2][1][RTW89_UK][0][55] = 127,
+ [2][1][RTW89_THAILAND][1][55] = 127,
+ [2][1][RTW89_THAILAND][0][55] = 127,
[2][1][RTW89_FCC][1][57] = -16,
[2][1][RTW89_FCC][2][57] = 54,
[2][1][RTW89_ETSI][1][57] = 127,
@@ -55854,6 +59045,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][57] = 127,
[2][1][RTW89_MKK][0][57] = 127,
[2][1][RTW89_IC][1][57] = -16,
+ [2][1][RTW89_IC][2][57] = 54,
[2][1][RTW89_KCC][1][57] = -14,
[2][1][RTW89_KCC][0][57] = 127,
[2][1][RTW89_ACMA][1][57] = 127,
@@ -55863,6 +59055,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][57] = 127,
[2][1][RTW89_UK][1][57] = 127,
[2][1][RTW89_UK][0][57] = 127,
+ [2][1][RTW89_THAILAND][1][57] = 127,
+ [2][1][RTW89_THAILAND][0][57] = 127,
[2][1][RTW89_FCC][1][59] = -16,
[2][1][RTW89_FCC][2][59] = 54,
[2][1][RTW89_ETSI][1][59] = 127,
@@ -55870,6 +59064,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][59] = 127,
[2][1][RTW89_MKK][0][59] = 127,
[2][1][RTW89_IC][1][59] = -16,
+ [2][1][RTW89_IC][2][59] = 54,
[2][1][RTW89_KCC][1][59] = -14,
[2][1][RTW89_KCC][0][59] = 127,
[2][1][RTW89_ACMA][1][59] = 127,
@@ -55879,6 +59074,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][59] = 127,
[2][1][RTW89_UK][1][59] = 127,
[2][1][RTW89_UK][0][59] = 127,
+ [2][1][RTW89_THAILAND][1][59] = 127,
+ [2][1][RTW89_THAILAND][0][59] = 127,
[2][1][RTW89_FCC][1][60] = -16,
[2][1][RTW89_FCC][2][60] = 54,
[2][1][RTW89_ETSI][1][60] = 127,
@@ -55886,6 +59083,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][60] = 127,
[2][1][RTW89_MKK][0][60] = 127,
[2][1][RTW89_IC][1][60] = -16,
+ [2][1][RTW89_IC][2][60] = 54,
[2][1][RTW89_KCC][1][60] = -14,
[2][1][RTW89_KCC][0][60] = 127,
[2][1][RTW89_ACMA][1][60] = 127,
@@ -55895,6 +59093,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][60] = 127,
[2][1][RTW89_UK][1][60] = 127,
[2][1][RTW89_UK][0][60] = 127,
+ [2][1][RTW89_THAILAND][1][60] = 127,
+ [2][1][RTW89_THAILAND][0][60] = 127,
[2][1][RTW89_FCC][1][62] = -16,
[2][1][RTW89_FCC][2][62] = 54,
[2][1][RTW89_ETSI][1][62] = 127,
@@ -55902,6 +59102,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][62] = 127,
[2][1][RTW89_MKK][0][62] = 127,
[2][1][RTW89_IC][1][62] = -16,
+ [2][1][RTW89_IC][2][62] = 54,
[2][1][RTW89_KCC][1][62] = -14,
[2][1][RTW89_KCC][0][62] = 127,
[2][1][RTW89_ACMA][1][62] = 127,
@@ -55911,6 +59112,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][62] = 127,
[2][1][RTW89_UK][1][62] = 127,
[2][1][RTW89_UK][0][62] = 127,
+ [2][1][RTW89_THAILAND][1][62] = 127,
+ [2][1][RTW89_THAILAND][0][62] = 127,
[2][1][RTW89_FCC][1][64] = -16,
[2][1][RTW89_FCC][2][64] = 54,
[2][1][RTW89_ETSI][1][64] = 127,
@@ -55918,6 +59121,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][64] = 127,
[2][1][RTW89_MKK][0][64] = 127,
[2][1][RTW89_IC][1][64] = -16,
+ [2][1][RTW89_IC][2][64] = 54,
[2][1][RTW89_KCC][1][64] = -14,
[2][1][RTW89_KCC][0][64] = 127,
[2][1][RTW89_ACMA][1][64] = 127,
@@ -55927,6 +59131,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][64] = 127,
[2][1][RTW89_UK][1][64] = 127,
[2][1][RTW89_UK][0][64] = 127,
+ [2][1][RTW89_THAILAND][1][64] = 127,
+ [2][1][RTW89_THAILAND][0][64] = 127,
[2][1][RTW89_FCC][1][66] = -16,
[2][1][RTW89_FCC][2][66] = 54,
[2][1][RTW89_ETSI][1][66] = 127,
@@ -55934,6 +59140,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][66] = 127,
[2][1][RTW89_MKK][0][66] = 127,
[2][1][RTW89_IC][1][66] = -16,
+ [2][1][RTW89_IC][2][66] = 54,
[2][1][RTW89_KCC][1][66] = -14,
[2][1][RTW89_KCC][0][66] = 127,
[2][1][RTW89_ACMA][1][66] = 127,
@@ -55943,6 +59150,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][66] = 127,
[2][1][RTW89_UK][1][66] = 127,
[2][1][RTW89_UK][0][66] = 127,
+ [2][1][RTW89_THAILAND][1][66] = 127,
+ [2][1][RTW89_THAILAND][0][66] = 127,
[2][1][RTW89_FCC][1][68] = -16,
[2][1][RTW89_FCC][2][68] = 54,
[2][1][RTW89_ETSI][1][68] = 127,
@@ -55950,6 +59159,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][68] = 127,
[2][1][RTW89_MKK][0][68] = 127,
[2][1][RTW89_IC][1][68] = -16,
+ [2][1][RTW89_IC][2][68] = 54,
[2][1][RTW89_KCC][1][68] = -14,
[2][1][RTW89_KCC][0][68] = 127,
[2][1][RTW89_ACMA][1][68] = 127,
@@ -55959,6 +59169,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][68] = 127,
[2][1][RTW89_UK][1][68] = 127,
[2][1][RTW89_UK][0][68] = 127,
+ [2][1][RTW89_THAILAND][1][68] = 127,
+ [2][1][RTW89_THAILAND][0][68] = 127,
[2][1][RTW89_FCC][1][70] = -16,
[2][1][RTW89_FCC][2][70] = 56,
[2][1][RTW89_ETSI][1][70] = 127,
@@ -55966,6 +59178,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][70] = 127,
[2][1][RTW89_MKK][0][70] = 127,
[2][1][RTW89_IC][1][70] = -16,
+ [2][1][RTW89_IC][2][70] = 56,
[2][1][RTW89_KCC][1][70] = -14,
[2][1][RTW89_KCC][0][70] = 127,
[2][1][RTW89_ACMA][1][70] = 127,
@@ -55975,6 +59188,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][70] = 127,
[2][1][RTW89_UK][1][70] = 127,
[2][1][RTW89_UK][0][70] = 127,
+ [2][1][RTW89_THAILAND][1][70] = 127,
+ [2][1][RTW89_THAILAND][0][70] = 127,
[2][1][RTW89_FCC][1][72] = -16,
[2][1][RTW89_FCC][2][72] = 56,
[2][1][RTW89_ETSI][1][72] = 127,
@@ -55982,6 +59197,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][72] = 127,
[2][1][RTW89_MKK][0][72] = 127,
[2][1][RTW89_IC][1][72] = -16,
+ [2][1][RTW89_IC][2][72] = 56,
[2][1][RTW89_KCC][1][72] = -14,
[2][1][RTW89_KCC][0][72] = 127,
[2][1][RTW89_ACMA][1][72] = 127,
@@ -55991,6 +59207,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][72] = 127,
[2][1][RTW89_UK][1][72] = 127,
[2][1][RTW89_UK][0][72] = 127,
+ [2][1][RTW89_THAILAND][1][72] = 127,
+ [2][1][RTW89_THAILAND][0][72] = 127,
[2][1][RTW89_FCC][1][74] = -16,
[2][1][RTW89_FCC][2][74] = 56,
[2][1][RTW89_ETSI][1][74] = 127,
@@ -55998,6 +59216,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][74] = 127,
[2][1][RTW89_MKK][0][74] = 127,
[2][1][RTW89_IC][1][74] = -16,
+ [2][1][RTW89_IC][2][74] = 56,
[2][1][RTW89_KCC][1][74] = -14,
[2][1][RTW89_KCC][0][74] = 127,
[2][1][RTW89_ACMA][1][74] = 127,
@@ -56007,6 +59226,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][74] = 127,
[2][1][RTW89_UK][1][74] = 127,
[2][1][RTW89_UK][0][74] = 127,
+ [2][1][RTW89_THAILAND][1][74] = 127,
+ [2][1][RTW89_THAILAND][0][74] = 127,
[2][1][RTW89_FCC][1][75] = -16,
[2][1][RTW89_FCC][2][75] = 56,
[2][1][RTW89_ETSI][1][75] = 127,
@@ -56014,6 +59235,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][75] = 127,
[2][1][RTW89_MKK][0][75] = 127,
[2][1][RTW89_IC][1][75] = -16,
+ [2][1][RTW89_IC][2][75] = 56,
[2][1][RTW89_KCC][1][75] = -14,
[2][1][RTW89_KCC][0][75] = 127,
[2][1][RTW89_ACMA][1][75] = 127,
@@ -56023,6 +59245,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][75] = 127,
[2][1][RTW89_UK][1][75] = 127,
[2][1][RTW89_UK][0][75] = 127,
+ [2][1][RTW89_THAILAND][1][75] = 127,
+ [2][1][RTW89_THAILAND][0][75] = 127,
[2][1][RTW89_FCC][1][77] = -16,
[2][1][RTW89_FCC][2][77] = 56,
[2][1][RTW89_ETSI][1][77] = 127,
@@ -56030,6 +59254,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][77] = 127,
[2][1][RTW89_MKK][0][77] = 127,
[2][1][RTW89_IC][1][77] = -16,
+ [2][1][RTW89_IC][2][77] = 56,
[2][1][RTW89_KCC][1][77] = -14,
[2][1][RTW89_KCC][0][77] = 127,
[2][1][RTW89_ACMA][1][77] = 127,
@@ -56039,6 +59264,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][77] = 127,
[2][1][RTW89_UK][1][77] = 127,
[2][1][RTW89_UK][0][77] = 127,
+ [2][1][RTW89_THAILAND][1][77] = 127,
+ [2][1][RTW89_THAILAND][0][77] = 127,
[2][1][RTW89_FCC][1][79] = -16,
[2][1][RTW89_FCC][2][79] = 56,
[2][1][RTW89_ETSI][1][79] = 127,
@@ -56046,6 +59273,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][79] = 127,
[2][1][RTW89_MKK][0][79] = 127,
[2][1][RTW89_IC][1][79] = -16,
+ [2][1][RTW89_IC][2][79] = 56,
[2][1][RTW89_KCC][1][79] = -14,
[2][1][RTW89_KCC][0][79] = 127,
[2][1][RTW89_ACMA][1][79] = 127,
@@ -56055,6 +59283,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][79] = 127,
[2][1][RTW89_UK][1][79] = 127,
[2][1][RTW89_UK][0][79] = 127,
+ [2][1][RTW89_THAILAND][1][79] = 127,
+ [2][1][RTW89_THAILAND][0][79] = 127,
[2][1][RTW89_FCC][1][81] = -16,
[2][1][RTW89_FCC][2][81] = 56,
[2][1][RTW89_ETSI][1][81] = 127,
@@ -56062,6 +59292,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][81] = 127,
[2][1][RTW89_MKK][0][81] = 127,
[2][1][RTW89_IC][1][81] = -16,
+ [2][1][RTW89_IC][2][81] = 56,
[2][1][RTW89_KCC][1][81] = -14,
[2][1][RTW89_KCC][0][81] = 127,
[2][1][RTW89_ACMA][1][81] = 127,
@@ -56071,6 +59302,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][81] = 127,
[2][1][RTW89_UK][1][81] = 127,
[2][1][RTW89_UK][0][81] = 127,
+ [2][1][RTW89_THAILAND][1][81] = 127,
+ [2][1][RTW89_THAILAND][0][81] = 127,
[2][1][RTW89_FCC][1][83] = -16,
[2][1][RTW89_FCC][2][83] = 56,
[2][1][RTW89_ETSI][1][83] = 127,
@@ -56078,6 +59311,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][83] = 127,
[2][1][RTW89_MKK][0][83] = 127,
[2][1][RTW89_IC][1][83] = -16,
+ [2][1][RTW89_IC][2][83] = 56,
[2][1][RTW89_KCC][1][83] = -14,
[2][1][RTW89_KCC][0][83] = 127,
[2][1][RTW89_ACMA][1][83] = 127,
@@ -56087,6 +59321,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][83] = 127,
[2][1][RTW89_UK][1][83] = 127,
[2][1][RTW89_UK][0][83] = 127,
+ [2][1][RTW89_THAILAND][1][83] = 127,
+ [2][1][RTW89_THAILAND][0][83] = 127,
[2][1][RTW89_FCC][1][85] = -18,
[2][1][RTW89_FCC][2][85] = 56,
[2][1][RTW89_ETSI][1][85] = 127,
@@ -56094,6 +59330,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][85] = 127,
[2][1][RTW89_MKK][0][85] = 127,
[2][1][RTW89_IC][1][85] = -18,
+ [2][1][RTW89_IC][2][85] = 56,
[2][1][RTW89_KCC][1][85] = -14,
[2][1][RTW89_KCC][0][85] = 127,
[2][1][RTW89_ACMA][1][85] = 127,
@@ -56103,6 +59340,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][85] = 127,
[2][1][RTW89_UK][1][85] = 127,
[2][1][RTW89_UK][0][85] = 127,
+ [2][1][RTW89_THAILAND][1][85] = 127,
+ [2][1][RTW89_THAILAND][0][85] = 127,
[2][1][RTW89_FCC][1][87] = -16,
[2][1][RTW89_FCC][2][87] = 127,
[2][1][RTW89_ETSI][1][87] = 127,
@@ -56110,6 +59349,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][87] = 127,
[2][1][RTW89_MKK][0][87] = 127,
[2][1][RTW89_IC][1][87] = -16,
+ [2][1][RTW89_IC][2][87] = 127,
[2][1][RTW89_KCC][1][87] = -14,
[2][1][RTW89_KCC][0][87] = 127,
[2][1][RTW89_ACMA][1][87] = 127,
@@ -56119,6 +59359,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][87] = 127,
[2][1][RTW89_UK][1][87] = 127,
[2][1][RTW89_UK][0][87] = 127,
+ [2][1][RTW89_THAILAND][1][87] = 127,
+ [2][1][RTW89_THAILAND][0][87] = 127,
[2][1][RTW89_FCC][1][89] = -16,
[2][1][RTW89_FCC][2][89] = 127,
[2][1][RTW89_ETSI][1][89] = 127,
@@ -56126,6 +59368,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][89] = 127,
[2][1][RTW89_MKK][0][89] = 127,
[2][1][RTW89_IC][1][89] = -16,
+ [2][1][RTW89_IC][2][89] = 127,
[2][1][RTW89_KCC][1][89] = -14,
[2][1][RTW89_KCC][0][89] = 127,
[2][1][RTW89_ACMA][1][89] = 127,
@@ -56135,6 +59378,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][89] = 127,
[2][1][RTW89_UK][1][89] = 127,
[2][1][RTW89_UK][0][89] = 127,
+ [2][1][RTW89_THAILAND][1][89] = 127,
+ [2][1][RTW89_THAILAND][0][89] = 127,
[2][1][RTW89_FCC][1][90] = -16,
[2][1][RTW89_FCC][2][90] = 127,
[2][1][RTW89_ETSI][1][90] = 127,
@@ -56142,6 +59387,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][90] = 127,
[2][1][RTW89_MKK][0][90] = 127,
[2][1][RTW89_IC][1][90] = -16,
+ [2][1][RTW89_IC][2][90] = 127,
[2][1][RTW89_KCC][1][90] = -14,
[2][1][RTW89_KCC][0][90] = 127,
[2][1][RTW89_ACMA][1][90] = 127,
@@ -56151,6 +59397,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][90] = 127,
[2][1][RTW89_UK][1][90] = 127,
[2][1][RTW89_UK][0][90] = 127,
+ [2][1][RTW89_THAILAND][1][90] = 127,
+ [2][1][RTW89_THAILAND][0][90] = 127,
[2][1][RTW89_FCC][1][92] = -16,
[2][1][RTW89_FCC][2][92] = 127,
[2][1][RTW89_ETSI][1][92] = 127,
@@ -56158,6 +59406,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][92] = 127,
[2][1][RTW89_MKK][0][92] = 127,
[2][1][RTW89_IC][1][92] = -16,
+ [2][1][RTW89_IC][2][92] = 127,
[2][1][RTW89_KCC][1][92] = -14,
[2][1][RTW89_KCC][0][92] = 127,
[2][1][RTW89_ACMA][1][92] = 127,
@@ -56167,6 +59416,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][92] = 127,
[2][1][RTW89_UK][1][92] = 127,
[2][1][RTW89_UK][0][92] = 127,
+ [2][1][RTW89_THAILAND][1][92] = 127,
+ [2][1][RTW89_THAILAND][0][92] = 127,
[2][1][RTW89_FCC][1][94] = -16,
[2][1][RTW89_FCC][2][94] = 127,
[2][1][RTW89_ETSI][1][94] = 127,
@@ -56174,6 +59425,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][94] = 127,
[2][1][RTW89_MKK][0][94] = 127,
[2][1][RTW89_IC][1][94] = -16,
+ [2][1][RTW89_IC][2][94] = 127,
[2][1][RTW89_KCC][1][94] = -14,
[2][1][RTW89_KCC][0][94] = 127,
[2][1][RTW89_ACMA][1][94] = 127,
@@ -56183,6 +59435,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][94] = 127,
[2][1][RTW89_UK][1][94] = 127,
[2][1][RTW89_UK][0][94] = 127,
+ [2][1][RTW89_THAILAND][1][94] = 127,
+ [2][1][RTW89_THAILAND][0][94] = 127,
[2][1][RTW89_FCC][1][96] = -16,
[2][1][RTW89_FCC][2][96] = 127,
[2][1][RTW89_ETSI][1][96] = 127,
@@ -56190,6 +59444,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][96] = 127,
[2][1][RTW89_MKK][0][96] = 127,
[2][1][RTW89_IC][1][96] = -16,
+ [2][1][RTW89_IC][2][96] = 127,
[2][1][RTW89_KCC][1][96] = -14,
[2][1][RTW89_KCC][0][96] = 127,
[2][1][RTW89_ACMA][1][96] = 127,
@@ -56199,6 +59454,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][96] = 127,
[2][1][RTW89_UK][1][96] = 127,
[2][1][RTW89_UK][0][96] = 127,
+ [2][1][RTW89_THAILAND][1][96] = 127,
+ [2][1][RTW89_THAILAND][0][96] = 127,
[2][1][RTW89_FCC][1][98] = -16,
[2][1][RTW89_FCC][2][98] = 127,
[2][1][RTW89_ETSI][1][98] = 127,
@@ -56206,6 +59463,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][98] = 127,
[2][1][RTW89_MKK][0][98] = 127,
[2][1][RTW89_IC][1][98] = -16,
+ [2][1][RTW89_IC][2][98] = 127,
[2][1][RTW89_KCC][1][98] = -14,
[2][1][RTW89_KCC][0][98] = 127,
[2][1][RTW89_ACMA][1][98] = 127,
@@ -56215,6 +59473,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][98] = 127,
[2][1][RTW89_UK][1][98] = 127,
[2][1][RTW89_UK][0][98] = 127,
+ [2][1][RTW89_THAILAND][1][98] = 127,
+ [2][1][RTW89_THAILAND][0][98] = 127,
[2][1][RTW89_FCC][1][100] = -16,
[2][1][RTW89_FCC][2][100] = 127,
[2][1][RTW89_ETSI][1][100] = 127,
@@ -56222,6 +59482,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][100] = 127,
[2][1][RTW89_MKK][0][100] = 127,
[2][1][RTW89_IC][1][100] = -16,
+ [2][1][RTW89_IC][2][100] = 127,
[2][1][RTW89_KCC][1][100] = -14,
[2][1][RTW89_KCC][0][100] = 127,
[2][1][RTW89_ACMA][1][100] = 127,
@@ -56231,6 +59492,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][100] = 127,
[2][1][RTW89_UK][1][100] = 127,
[2][1][RTW89_UK][0][100] = 127,
+ [2][1][RTW89_THAILAND][1][100] = 127,
+ [2][1][RTW89_THAILAND][0][100] = 127,
[2][1][RTW89_FCC][1][102] = -16,
[2][1][RTW89_FCC][2][102] = 127,
[2][1][RTW89_ETSI][1][102] = 127,
@@ -56238,6 +59501,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][102] = 127,
[2][1][RTW89_MKK][0][102] = 127,
[2][1][RTW89_IC][1][102] = -16,
+ [2][1][RTW89_IC][2][102] = 127,
[2][1][RTW89_KCC][1][102] = -14,
[2][1][RTW89_KCC][0][102] = 127,
[2][1][RTW89_ACMA][1][102] = 127,
@@ -56247,6 +59511,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][102] = 127,
[2][1][RTW89_UK][1][102] = 127,
[2][1][RTW89_UK][0][102] = 127,
+ [2][1][RTW89_THAILAND][1][102] = 127,
+ [2][1][RTW89_THAILAND][0][102] = 127,
[2][1][RTW89_FCC][1][104] = -16,
[2][1][RTW89_FCC][2][104] = 127,
[2][1][RTW89_ETSI][1][104] = 127,
@@ -56254,6 +59520,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][104] = 127,
[2][1][RTW89_MKK][0][104] = 127,
[2][1][RTW89_IC][1][104] = -16,
+ [2][1][RTW89_IC][2][104] = 127,
[2][1][RTW89_KCC][1][104] = -14,
[2][1][RTW89_KCC][0][104] = 127,
[2][1][RTW89_ACMA][1][104] = 127,
@@ -56263,6 +59530,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][104] = 127,
[2][1][RTW89_UK][1][104] = 127,
[2][1][RTW89_UK][0][104] = 127,
+ [2][1][RTW89_THAILAND][1][104] = 127,
+ [2][1][RTW89_THAILAND][0][104] = 127,
[2][1][RTW89_FCC][1][105] = -16,
[2][1][RTW89_FCC][2][105] = 127,
[2][1][RTW89_ETSI][1][105] = 127,
@@ -56270,6 +59539,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][105] = 127,
[2][1][RTW89_MKK][0][105] = 127,
[2][1][RTW89_IC][1][105] = -16,
+ [2][1][RTW89_IC][2][105] = 127,
[2][1][RTW89_KCC][1][105] = -14,
[2][1][RTW89_KCC][0][105] = 127,
[2][1][RTW89_ACMA][1][105] = 127,
@@ -56279,6 +59549,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][105] = 127,
[2][1][RTW89_UK][1][105] = 127,
[2][1][RTW89_UK][0][105] = 127,
+ [2][1][RTW89_THAILAND][1][105] = 127,
+ [2][1][RTW89_THAILAND][0][105] = 127,
[2][1][RTW89_FCC][1][107] = -12,
[2][1][RTW89_FCC][2][107] = 127,
[2][1][RTW89_ETSI][1][107] = 127,
@@ -56286,6 +59558,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][107] = 127,
[2][1][RTW89_MKK][0][107] = 127,
[2][1][RTW89_IC][1][107] = -12,
+ [2][1][RTW89_IC][2][107] = 127,
[2][1][RTW89_KCC][1][107] = -14,
[2][1][RTW89_KCC][0][107] = 127,
[2][1][RTW89_ACMA][1][107] = 127,
@@ -56295,6 +59568,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][107] = 127,
[2][1][RTW89_UK][1][107] = 127,
[2][1][RTW89_UK][0][107] = 127,
+ [2][1][RTW89_THAILAND][1][107] = 127,
+ [2][1][RTW89_THAILAND][0][107] = 127,
[2][1][RTW89_FCC][1][109] = -10,
[2][1][RTW89_FCC][2][109] = 127,
[2][1][RTW89_ETSI][1][109] = 127,
@@ -56302,6 +59577,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][109] = 127,
[2][1][RTW89_MKK][0][109] = 127,
[2][1][RTW89_IC][1][109] = -10,
+ [2][1][RTW89_IC][2][109] = 127,
[2][1][RTW89_KCC][1][109] = 127,
[2][1][RTW89_KCC][0][109] = 127,
[2][1][RTW89_ACMA][1][109] = 127,
@@ -56311,6 +59587,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][109] = 127,
[2][1][RTW89_UK][1][109] = 127,
[2][1][RTW89_UK][0][109] = 127,
+ [2][1][RTW89_THAILAND][1][109] = 127,
+ [2][1][RTW89_THAILAND][0][109] = 127,
[2][1][RTW89_FCC][1][111] = 127,
[2][1][RTW89_FCC][2][111] = 127,
[2][1][RTW89_ETSI][1][111] = 127,
@@ -56318,6 +59596,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][111] = 127,
[2][1][RTW89_MKK][0][111] = 127,
[2][1][RTW89_IC][1][111] = 127,
+ [2][1][RTW89_IC][2][111] = 127,
[2][1][RTW89_KCC][1][111] = 127,
[2][1][RTW89_KCC][0][111] = 127,
[2][1][RTW89_ACMA][1][111] = 127,
@@ -56327,6 +59606,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][111] = 127,
[2][1][RTW89_UK][1][111] = 127,
[2][1][RTW89_UK][0][111] = 127,
+ [2][1][RTW89_THAILAND][1][111] = 127,
+ [2][1][RTW89_THAILAND][0][111] = 127,
[2][1][RTW89_FCC][1][113] = 127,
[2][1][RTW89_FCC][2][113] = 127,
[2][1][RTW89_ETSI][1][113] = 127,
@@ -56334,6 +59615,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][113] = 127,
[2][1][RTW89_MKK][0][113] = 127,
[2][1][RTW89_IC][1][113] = 127,
+ [2][1][RTW89_IC][2][113] = 127,
[2][1][RTW89_KCC][1][113] = 127,
[2][1][RTW89_KCC][0][113] = 127,
[2][1][RTW89_ACMA][1][113] = 127,
@@ -56343,6 +59625,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][113] = 127,
[2][1][RTW89_UK][1][113] = 127,
[2][1][RTW89_UK][0][113] = 127,
+ [2][1][RTW89_THAILAND][1][113] = 127,
+ [2][1][RTW89_THAILAND][0][113] = 127,
[2][1][RTW89_FCC][1][115] = 127,
[2][1][RTW89_FCC][2][115] = 127,
[2][1][RTW89_ETSI][1][115] = 127,
@@ -56350,6 +59634,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][115] = 127,
[2][1][RTW89_MKK][0][115] = 127,
[2][1][RTW89_IC][1][115] = 127,
+ [2][1][RTW89_IC][2][115] = 127,
[2][1][RTW89_KCC][1][115] = 127,
[2][1][RTW89_KCC][0][115] = 127,
[2][1][RTW89_ACMA][1][115] = 127,
@@ -56359,6 +59644,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][115] = 127,
[2][1][RTW89_UK][1][115] = 127,
[2][1][RTW89_UK][0][115] = 127,
+ [2][1][RTW89_THAILAND][1][115] = 127,
+ [2][1][RTW89_THAILAND][0][115] = 127,
[2][1][RTW89_FCC][1][117] = 127,
[2][1][RTW89_FCC][2][117] = 127,
[2][1][RTW89_ETSI][1][117] = 127,
@@ -56366,6 +59653,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][117] = 127,
[2][1][RTW89_MKK][0][117] = 127,
[2][1][RTW89_IC][1][117] = 127,
+ [2][1][RTW89_IC][2][117] = 127,
[2][1][RTW89_KCC][1][117] = 127,
[2][1][RTW89_KCC][0][117] = 127,
[2][1][RTW89_ACMA][1][117] = 127,
@@ -56375,6 +59663,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][117] = 127,
[2][1][RTW89_UK][1][117] = 127,
[2][1][RTW89_UK][0][117] = 127,
+ [2][1][RTW89_THAILAND][1][117] = 127,
+ [2][1][RTW89_THAILAND][0][117] = 127,
[2][1][RTW89_FCC][1][119] = 127,
[2][1][RTW89_FCC][2][119] = 127,
[2][1][RTW89_ETSI][1][119] = 127,
@@ -56382,6 +59672,7 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_MKK][1][119] = 127,
[2][1][RTW89_MKK][0][119] = 127,
[2][1][RTW89_IC][1][119] = 127,
+ [2][1][RTW89_IC][2][119] = 127,
[2][1][RTW89_KCC][1][119] = 127,
[2][1][RTW89_KCC][0][119] = 127,
[2][1][RTW89_ACMA][1][119] = 127,
@@ -56391,6 +59682,8 @@ const s8 rtw89_8852c_txpwr_lmt_ru_6g[RTW89_RU_NUM][RTW89_NTX_NUM]
[2][1][RTW89_QATAR][0][119] = 127,
[2][1][RTW89_UK][1][119] = 127,
[2][1][RTW89_UK][0][119] = 127,
+ [2][1][RTW89_THAILAND][1][119] = 127,
+ [2][1][RTW89_THAILAND][0][119] = 127,
};
const struct rtw89_phy_table rtw89_8852c_phy_bb_table = {
@@ -56425,6 +59718,7 @@ const struct rtw89_phy_table rtw89_8852c_phy_nctl_table = {
.rf_path = 0, /* don't care */
};
+static
const struct rtw89_txpwr_table rtw89_8852c_byr_table = {
.data = rtw89_8852c_txpwr_byrate,
.size = ARRAY_SIZE(rtw89_8852c_txpwr_byrate),
@@ -56452,12 +59746,16 @@ const struct rtw89_txpwr_track_cfg rtw89_8852c_trk_cfg = {
const struct rtw89_phy_tssi_dbw_table rtw89_8852c_tssi_dbw_table = {
.data[RTW89_TSSI_BANDEDGE_FLAT] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
- .data[RTW89_TSSI_BANDEDGE_LOW] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
- .data[RTW89_TSSI_BANDEDGE_MID] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
- .data[RTW89_TSSI_BANDEDGE_HIGH] = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+ .data[RTW89_TSSI_BANDEDGE_LOW] = {0x1d, 0x1d, 0x1d, 0x2f, 0xf, 0xf, 0x2f, 0x38,
+ 0x28, 0x18, 0x8, 0x8, 0x18, 0x28, 0x38},
+ .data[RTW89_TSSI_BANDEDGE_MID] = {0x24, 0x24, 0x24, 0x3b, 0x13, 0x13, 0x3b, 0x46,
+ 0x32, 0x1e, 0xa, 0xa, 0x1e, 0x32, 0x46},
+ .data[RTW89_TSSI_BANDEDGE_HIGH] = {0x2a, 0x2a, 0x2a, 0x46, 0x17, 0x17, 0x46, 0x53,
+ 0x3b, 0x24, 0xc, 0xc, 0x24, 0x3b, 0x53},
};
const struct rtw89_rfe_parms rtw89_8852c_dflt_parms = {
+ .byr_tbl = &rtw89_8852c_byr_table,
.rule_2ghz = {
.lmt = &rtw89_8852c_txpwr_lmt_2g,
.lmt_ru = &rtw89_8852c_txpwr_lmt_ru_2g,
@@ -56470,4 +59768,8 @@ const struct rtw89_rfe_parms rtw89_8852c_dflt_parms = {
.lmt = &rtw89_8852c_txpwr_lmt_6g,
.lmt_ru = &rtw89_8852c_txpwr_lmt_ru_6g,
},
+ .tx_shape = {
+ .lmt = &rtw89_8852c_tx_shape_lmt,
+ .lmt_ru = &rtw89_8852c_tx_shape_lmt_ru,
+ },
};
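The limit tables above are arrays of s8, and the new RTW89_THAILAND rows mix real entries (for example 28 at the low channel indexes) with long runs of 127. Since 127 is the s8 maximum, it presumably serves as a "no entry for this domain/channel" placeholder rather than a usable power value; the user-space sketch below only illustrates that assumed convention, and the helper name is hypothetical.

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

typedef signed char s8;

/* Assumption: 127 (SCHAR_MAX) marks a table slot with no applicable limit. */
static bool txpwr_entry_is_set(s8 raw)
{
	return raw != SCHAR_MAX;
}

int main(void)
{
	s8 set_entry = 28;      /* value taken from a THAILAND row above */
	s8 blank_entry = 127;   /* placeholder run seen on the high channel indexes */

	printf("set: %d, blank: %d\n",
	       txpwr_entry_is_set(set_entry),
	       txpwr_entry_is_set(blank_entry));
	return 0;
}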
diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c_table.h b/drivers/net/wireless/realtek/rtw89/rtw8852c_table.h
index 3eb0c4995174..7c9f3ecdc4e7 100644
--- a/drivers/net/wireless/realtek/rtw89/rtw8852c_table.h
+++ b/drivers/net/wireless/realtek/rtw89/rtw8852c_table.h
@@ -12,11 +12,8 @@ extern const struct rtw89_phy_table rtw89_8852c_phy_bb_gain_table;
extern const struct rtw89_phy_table rtw89_8852c_phy_radioa_table;
extern const struct rtw89_phy_table rtw89_8852c_phy_radiob_table;
extern const struct rtw89_phy_table rtw89_8852c_phy_nctl_table;
-extern const struct rtw89_txpwr_table rtw89_8852c_byr_table;
extern const struct rtw89_phy_tssi_dbw_table rtw89_8852c_tssi_dbw_table;
extern const struct rtw89_txpwr_track_cfg rtw89_8852c_trk_cfg;
-extern const u8 rtw89_8852c_tx_shape[RTW89_BAND_NUM][RTW89_RS_TX_SHAPE_NUM]
- [RTW89_REGD_NUM];
extern const struct rtw89_rfe_parms rtw89_8852c_dflt_parms;
#endif
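This header hunk is the other half of the refactor seen in the table file: rtw89_8852c_byr_table becomes static and the standalone tx-shape array is no longer exported, so callers are expected to reach both through rtw89_8852c_dflt_parms via its new .byr_tbl and .tx_shape members. A generic sketch of that "export one parameter bundle instead of many table symbols" pattern follows; the names and values are illustrative, not the driver's.

/* chip_tables.c: the tables stay private to this translation unit. */
#include <stddef.h>

struct chip_parms {
	const signed char *byrate;    /* per-rate base power table */
	const signed char *tx_shape;  /* spectral-shaping index table */
	size_t num;
};

static const signed char byrate_tbl[4]   = { 48, 44, 40, 36 };
static const signed char tx_shape_tbl[4] = { 0, 1, 1, 2 };

/* The only symbol other files need: a bundle pointing at the static tables. */
const struct chip_parms chip_dflt_parms = {
	.byrate   = byrate_tbl,
	.tx_shape = tx_shape_tbl,
	.num      = 4,
};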
diff --git a/drivers/net/wireless/realtek/rtw89/txrx.h b/drivers/net/wireless/realtek/rtw89/txrx.h
index 02cff0f7d86b..7142cce167de 100644
--- a/drivers/net/wireless/realtek/rtw89/txrx.h
+++ b/drivers/net/wireless/realtek/rtw89/txrx.h
@@ -137,6 +137,181 @@ static inline u8 rtw89_get_data_nss(struct rtw89_dev *rtwdev, u16 hw_rate)
/* TX WD INFO DWORD 5 */
+/* TX WD BODY DWORD 0 */
+#define BE_TXD_BODY0_EN_HWSEQ_MODE GENMASK(1, 0)
+#define BE_TXD_BODY0_HW_SSN_SEL GENMASK(4, 2)
+#define BE_TXD_BODY0_HWAMSDU BIT(5)
+#define BE_TXD_BODY0_HW_SEC_IV BIT(6)
+#define BE_TXD_BODY0_WD_PAGE BIT(7)
+#define BE_TXD_BODY0_CHK_EN BIT(8)
+#define BE_TXD_BODY0_WP_INT BIT(9)
+#define BE_TXD_BODY0_STF_MODE BIT(10)
+#define BE_TXD_BODY0_HDR_LLC_LEN GENMASK(15, 11)
+#define BE_TXD_BODY0_CH_DMA GENMASK(19, 16)
+#define BE_TXD_BODY0_SMH_EN BIT(20)
+#define BE_TXD_BODY0_PKT_OFFSET BIT(21)
+#define BE_TXD_BODY0_WDINFO_EN BIT(22)
+#define BE_TXD_BODY0_MOREDATA BIT(23)
+#define BE_TXD_BODY0_WP_OFFSET_V1 GENMASK(27, 24)
+#define BE_TXD_BODY0_AZ_FTM_SEC_V1 BIT(28)
+#define BE_TXD_BODY0_WD_SOURCE GENMASK(30, 29)
+#define BE_TXD_BODY0_HCI_SEQNUM_MODE BIT(31)
+
+/* TX WD BODY DWORD 1 */
+#define BE_TXD_BODY1_DMA_TXAGG_NUM GENMASK(6, 0)
+#define BE_TXD_BODY1_REUSE_NUM GENMASK(11, 7)
+#define BE_TXD_BODY1_SEC_TYPE GENMASK(15, 12)
+#define BE_TXD_BODY1_SEC_KEYID GENMASK(17, 16)
+#define BE_TXD_BODY1_SW_SEC_IV BIT(18)
+#define BE_TXD_BODY1_REUSE_SIZE GENMASK(23, 20)
+#define BE_TXD_BODY1_REUSE_START_OFFSET GENMASK(25, 24)
+#define BE_TXD_BODY1_ADDR_INFO_NUM GENMASK(31, 26)
+
+/* TX WD BODY DWORD 2 */
+#define BE_TXD_BODY2_TXPKTSIZE GENMASK(13, 0)
+#define BE_TXD_BODY2_AGG_EN BIT(14)
+#define BE_TXD_BODY2_BK BIT(15)
+#define BE_TXD_BODY2_MACID_EXTEND BIT(16)
+#define BE_TXD_BODY2_QSEL GENMASK(22, 17)
+#define BE_TXD_BODY2_TID_IND BIT(23)
+#define BE_TXD_BODY2_MACID GENMASK(31, 24)
+
+/* TX WD BODY DWORD 3 */
+#define BE_TXD_BODY3_WIFI_SEQ GENMASK(11, 0)
+#define BE_TXD_BODY3_MLO_FLAG BIT(12)
+#define BE_TXD_BODY3_IS_MLD_SW_EN BIT(13)
+#define BE_TXD_BODY3_TRY_RATE BIT(14)
+#define BE_TXD_BODY3_RELINK_FLAG_V1 BIT(15)
+#define BE_TXD_BODY3_BAND0_SU_TC_V1 GENMASK(21, 16)
+#define BE_TXD_BODY3_TOTAL_TC GENMASK(27, 22)
+#define BE_TXD_BODY3_RU_RTY BIT(28)
+#define BE_TXD_BODY3_MU_PRI_RTY BIT(29)
+#define BE_TXD_BODY3_MU_2ND_RTY BIT(30)
+#define BE_TXD_BODY3_BAND1_SU_RTY_V1 BIT(31)
+
+/* TX WD BODY DWORD 4 */
+#define BE_TXD_BODY4_TXDESC_CHECKSUM GENMASK(15, 0)
+#define BE_TXD_BODY4_SEC_IV_L0 GENMASK(23, 16)
+#define BE_TXD_BODY4_SEC_IV_L1 GENMASK(31, 24)
+
+/* TX WD BODY DWORD 5 */
+#define BE_TXD_BODY5_SEC_IV_H2 GENMASK(7, 0)
+#define BE_TXD_BODY5_SEC_IV_H3 GENMASK(15, 8)
+#define BE_TXD_BODY5_SEC_IV_H4 GENMASK(23, 16)
+#define BE_TXD_BODY5_SEC_IV_H5 GENMASK(31, 24)
+
+/* TX WD BODY DWORD 6 */
+#define BE_TXD_BODY6_MU_TC GENMASK(4, 0)
+#define BE_TXD_BODY6_RU_TC GENMASK(9, 5)
+#define BE_TXD_BODY6_PS160 BIT(10)
+#define BE_TXD_BODY6_BMC BIT(11)
+#define BE_TXD_BODY6_NO_ACK BIT(12)
+#define BE_TXD_BODY6_UPD_WLAN_HDR BIT(13)
+#define BE_TXD_BODY6_A4_HDR BIT(14)
+#define BE_TXD_BODY6_EOSP_BIT BIT(15)
+#define BE_TXD_BODY6_S_IDX GENMASK(23, 16)
+#define BE_TXD_BODY6_RU_POS GENMASK(31, 24)
+
+/* TX WD BODY DWORD 7 */
+#define BE_TXD_BODY7_RTS_TC GENMASK(5, 0)
+#define BE_TXD_BODY7_MSDU_NUM GENMASK(9, 6)
+#define BE_TXD_BODY7_DATA_ER BIT(10)
+#define BE_TXD_BODY7_DATA_BW_ER BIT(11)
+#define BE_TXD_BODY7_DATA_DCM BIT(12)
+#define BE_TXD_BODY7_GI_LTF GENMASK(15, 13)
+#define BE_TXD_BODY7_DATARATE GENMASK(27, 16)
+#define BE_TXD_BODY7_DATA_BW GENMASK(30, 28)
+#define BE_TXD_BODY7_USERATE_SEL BIT(31)
+
+/* TX WD INFO DWORD 0 */
+#define BE_TXD_INFO0_MBSSID GENMASK(3, 0)
+#define BE_TXD_INFO0_MULTIPORT_ID GENMASK(6, 4)
+#define BE_TXD_INFO0_DISRTSFB BIT(9)
+#define BE_TXD_INFO0_DISDATAFB BIT(10)
+#define BE_TXD_INFO0_DATA_LDPC BIT(11)
+#define BE_TXD_INFO0_DATA_STBC BIT(12)
+#define BE_TXD_INFO0_DATA_TXCNT_LMT GENMASK(21, 16)
+#define BE_TXD_INFO0_DATA_TXCNT_LMT_SEL BIT(22)
+#define BE_TXD_INFO0_RESP_PHYSTS_CSI_EN_V1 BIT(23)
+#define BE_TXD_INFO0_RLS_TO_CPUIO BIT(30)
+#define BE_TXD_INFO0_ACK_CH_INFO BIT(31)
+
+/* TX WD INFO DWORD 1 */
+#define BE_TXD_INFO1_MAX_AGG_NUM GENMASK(7, 0)
+#define BE_TXD_INFO1_BCN_SRCH_SEQ GENMASK(9, 8)
+#define BE_TXD_INFO1_NAVUSEHDR BIT(10)
+#define BE_TXD_INFO1_A_CTRL_BQR BIT(12)
+#define BE_TXD_INFO1_A_CTRL_BSR BIT(14)
+#define BE_TXD_INFO1_A_CTRL_CAS BIT(15)
+#define BE_TXD_INFO1_DATA_RTY_LOWEST_RATE GENMASK(27, 16)
+#define BE_TXD_INFO1_SW_DEFINE GENMASK(31, 28)
+
+/* TX WD INFO DWORD 2 */
+#define BE_TXD_INFO2_SEC_CAM_IDX GENMASK(7, 0)
+#define BE_TXD_INFO2_FORCE_KEY_EN BIT(8)
+#define BE_TXD_INFO2_LIFETIME_SEL GENMASK(15, 13)
+#define BE_TXD_INFO2_FORCE_TXOP BIT(17)
+#define BE_TXD_INFO2_AMPDU_DENSITY GENMASK(20, 18)
+#define BE_TXD_INFO2_LSIG_TXOP_EN BIT(21)
+#define BE_TXD_INFO2_OBW_CTS2SELF_DUP_TYPE GENMASK(29, 26)
+#define BE_TXD_INFO2_SPE_RPT_V1 BIT(30)
+#define BE_TXD_INFO2_SIFS_TX_V1 BIT(31)
+
+/* TX WD INFO DWORD 3 */
+#define BE_TXD_INFO3_SPE_PKT GENMASK(3, 0)
+#define BE_TXD_INFO3_SPE_PKT_TYPE GENMASK(7, 4)
+#define BE_TXD_INFO3_CQI_SND BIT(8)
+#define BE_TXD_INFO3_RTT_EN BIT(9)
+#define BE_TXD_INFO3_HT_DATA_SND_V1 BIT(10)
+#define BE_TXD_INFO3_BT_NULL BIT(11)
+#define BE_TXD_INFO3_TRI_FRAME BIT(12)
+#define BE_TXD_INFO3_NULL_0 BIT(13)
+#define BE_TXD_INFO3_NULL_1 BIT(14)
+#define BE_TXD_INFO3_RAW BIT(15)
+#define BE_TXD_INFO3_GROUP_BIT_IE_OFFSET GENMASK(23, 16)
+#define BE_TXD_INFO3_SIGNALING_TA_PKT_EN BIT(25)
+#define BE_TXD_INFO3_BCNPKT_TSF_CTRL BIT(26)
+#define BE_TXD_INFO3_SIGNALING_TA_PKT_SC GENMASK(30, 27)
+#define BE_TXD_INFO3_FORCE_BSS_CLR BIT(31)
+
+/* TX WD INFO DWORD 4 */
+#define BE_TXD_INFO4_PUNCTURE_PATTERN GENMASK(15, 0)
+#define BE_TXD_INFO4_PUNC_MODE GENMASK(17, 16)
+#define BE_TXD_INFO4_SW_TX_OK_0 BIT(18)
+#define BE_TXD_INFO4_SW_TX_OK_1 BIT(19)
+#define BE_TXD_INFO4_SW_TX_PWR_DBM GENMASK(26, 23)
+#define BE_TXD_INFO4_RTS_EN BIT(27)
+#define BE_TXD_INFO4_CTS2SELF BIT(28)
+#define BE_TXD_INFO4_CCA_RTS GENMASK(30, 29)
+#define BE_TXD_INFO4_HW_RTS_EN BIT(31)
+
+/* TX WD INFO DWORD 5 */
+#define BE_TXD_INFO5_SR_RATE_V1 GENMASK(4, 0)
+#define BE_TXD_INFO5_SR_EN_V1 BIT(5)
+#define BE_TXD_INFO5_NDPA_DURATION GENMASK(31, 16)
+
+/* TX WD INFO DWORD 6 */
+#define BE_TXD_INFO6_UL_APEP_LEN GENMASK(11, 0)
+#define BE_TXD_INFO6_UL_GI_LTF GENMASK(14, 12)
+#define BE_TXD_INFO6_UL_DOPPLER BIT(15)
+#define BE_TXD_INFO6_UL_STBC BIT(16)
+#define BE_TXD_INFO6_UL_LENGTH_REF GENMASK(21, 18)
+#define BE_TXD_INFO6_UL_RF_GAIN_IDX GENMASK(31, 22)
+
+/* TX WD INFO DWORD 7 */
+#define BE_TXD_INFO7_UL_FIXED_GAIN_EN BIT(0)
+#define BE_TXD_INFO7_UL_PRI_EXP_RSSI_DBM GENMASK(7, 1)
+#define BE_TXD_INFO7_ELNA_IDX BIT(8)
+#define BE_TXD_INFO7_UL_APEP_UNIT GENMASK(10, 9)
+#define BE_TXD_INFO7_UL_TRI_PAD GENMASK(13, 11)
+#define BE_TXD_INFO7_UL_T_PE GENMASK(15, 14)
+#define BE_TXD_INFO7_UL_EHT_USR_PRES BIT(16)
+#define BE_TXD_INFO7_UL_HELTF_SYMBOL_NUM GENMASK(19, 17)
+#define BE_TXD_INFO7_ULBW GENMASK(21, 20)
+#define BE_TXD_INFO7_ULBW_EXT GENMASK(23, 22)
+#define BE_TXD_INFO7_USE_WD_UL GENMASK(25, 24)
+#define BE_TXD_INFO7_EXTEND_MODE_SEL GENMASK(31, 28)
+
/* RX WD dword0 */
#define AX_RXD_RPKT_LEN_MASK GENMASK(13, 0)
#define AX_RXD_SHIFT_MASK GENMASK(15, 14)
@@ -269,6 +444,102 @@ struct rtw89_phy_sts_iehdr {
#define RTW89_PHY_STS_IEHDR_TYPE GENMASK(4, 0)
#define RTW89_PHY_STS_IEHDR_LEN GENMASK(11, 5)
+/* BE RXD dword0 */
+#define BE_RXD_RPKT_LEN_MASK GENMASK(13, 0)
+#define BE_RXD_SHIFT_MASK GENMASK(15, 14)
+#define BE_RXD_DRV_INFO_SZ_MASK GENMASK(19, 18)
+#define BE_RXD_HDR_CNV_SZ_MASK GENMASK(21, 20)
+#define BE_RXD_PHY_RPT_SZ_MASK GENMASK(23, 22)
+#define BE_RXD_RPKT_TYPE_MASK GENMASK(29, 24)
+#define BE_RXD_BB_SEL BIT(30)
+#define BE_RXD_LONG_RXD BIT(31)
+
+/* BE RXD dword1 */
+#define BE_RXD_PKT_ID_MASK GENMASK(11, 0)
+#define BE_RXD_FWD_TARGET_MASK GENMASK(23, 16)
+#define BE_RXD_BCN_FW_INFO_MASK GENMASK(25, 24)
+#define BE_RXD_FW_RLS BIT(26)
+
+/* BE RXD dword2 */
+#define BE_RXD_MAC_ID_MASK GENMASK(7, 0)
+#define BE_RXD_TYPE_MASK GENMASK(11, 10)
+#define BE_RXD_LAST_MSDU BIT(12)
+#define BE_RXD_AMSDU_CUT BIT(13)
+#define BE_RXD_ADDR_CAM_VLD BIT(14)
+#define BE_RXD_REORDER BIT(15)
+#define BE_RXD_SEQ_MASK GENMASK(27, 16)
+#define BE_RXD_TID_MASK GENMASK(31, 28)
+
+/* BE RXD dword3 */
+#define BE_RXD_SEC_TYPE_MASK GENMASK(3, 0)
+#define BE_RXD_BIP_KEYID BIT(4)
+#define BE_RXD_BIP_ENC BIT(5)
+#define BE_RXD_CRC32_ERR BIT(6)
+#define BE_RXD_ICV_ERR BIT(7)
+#define BE_RXD_HW_DEC BIT(8)
+#define BE_RXD_SW_DEC BIT(9)
+#define BE_RXD_A1_MATCH BIT(10)
+#define BE_RXD_AMPDU BIT(11)
+#define BE_RXD_AMPDU_EOF BIT(12)
+#define BE_RXD_AMSDU BIT(13)
+#define BE_RXD_MC BIT(14)
+#define BE_RXD_BC BIT(15)
+#define BE_RXD_MD BIT(16)
+#define BE_RXD_MF BIT(17)
+#define BE_RXD_PWR BIT(18)
+#define BE_RXD_QOS BIT(19)
+#define BE_RXD_EOSP BIT(20)
+#define BE_RXD_HTC BIT(21)
+#define BE_RXD_QNULL BIT(22)
+#define BE_RXD_A4_FRAME BIT(23)
+#define BE_RXD_FRAG_MASK GENMASK(27, 24)
+#define BE_RXD_GET_CH_INFO_V1_MASK GENMASK(31, 30)
+
+/* BE RXD dword4 */
+#define BE_RXD_PPDU_TYPE_MASK GENMASK(7, 0)
+#define BE_RXD_PPDU_CNT_MASK GENMASK(10, 8)
+#define BE_RXD_BW_MASK GENMASK(14, 12)
+#define BE_RXD_RX_GI_LTF_MASK GENMASK(18, 16)
+#define BE_RXD_RX_REORDER_FIELD_EN BIT(19)
+#define BE_RXD_RX_DATARATE_MASK GENMASK(31, 20)
+
+/* BE RXD dword5 */
+#define BE_RXD_FREERUN_CNT_MASK GENMASK(31, 0)
+
+/* BE RXD dword6 */
+#define BE_RXD_ADDR_CAM_MASK GENMASK(7, 0)
+#define BE_RXD_SR_EN BIT(13)
+#define BE_RXD_NON_SRG_PPDU BIT(14)
+#define BE_RXD_INTER_PPDU BIT(15)
+#define BE_RXD_USER_ID_MASK GENMASK(21, 16)
+#define BE_RXD_RX_STATISTICS BIT(22)
+#define BE_RXD_SMART_ANT BIT(23)
+#define BE_RXD_SEC_CAM_IDX_MASK GENMASK(31, 24)
+
+/* BE RXD dword7 */
+#define BE_RXD_PATTERN_IDX_MASK GENMASK(4, 0)
+#define BE_RXD_MAGIC_WAKE BIT(5)
+#define BE_RXD_UNICAST_WAKE BIT(6)
+#define BE_RXD_PATTERN_WAKE BIT(7)
+#define BE_RXD_RX_PL_MATCH BIT(8)
+#define BE_RXD_RX_PL_ID_MASK GENMASK(15, 12)
+#define BE_RXD_HDR_CNV BIT(16)
+#define BE_RXD_NAT25_HIT BIT(17)
+#define BE_RXD_IS_DA BIT(18)
+#define BE_RXD_CHKSUM_OFFLOAD_EN BIT(19)
+#define BE_RXD_RXSC_ENTRY_MASK GENMASK(22, 20)
+#define BE_RXD_RXSC_HIT BIT(23)
+#define BE_RXD_WITH_LLC BIT(24)
+#define BE_RXD_RX_AGG_FIELD_EN BIT(25)
+
+/* BE RXD dword8 */
+#define BE_RXD_MAC_ADDR_MASK GENMASK(31, 0)
+
+/* BE RXD dword9 */
+#define BE_RXD_MAC_ADDR_H_MASK GENMASK(15, 0)
+#define BE_RXD_HDR_OFFSET_MASK GENMASK(20, 16)
+#define BE_RXD_WL_HD_IV_LEN_MASK GENMASK(26, 21)
+
struct rtw89_phy_sts_ie0 {
__le32 w0;
__le32 w1;
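The BE_TXD_* and BE_RXD_* additions above lay out the WiFi 7 ("BE") TX and RX descriptor dwords as GENMASK()/BIT() field masks; in kernel code such fields are normally filled with FIELD_PREP()/le32_encode_bits()-style helpers from <linux/bitfield.h>. The standalone sketch below only approximates that packing for three of the TX body dword-2 fields, with GENMASK() and a field_prep() stand-in redefined locally for illustration.

#include <stdint.h>
#include <stdio.h>

#define GENMASK(h, l)  (((~0u) << (l)) & (~0u >> (31 - (h))))

/* Copies of three field masks from the hunk above. */
#define BE_TXD_BODY2_TXPKTSIZE GENMASK(13, 0)
#define BE_TXD_BODY2_QSEL      GENMASK(22, 17)
#define BE_TXD_BODY2_MACID     GENMASK(31, 24)

/* Minimal stand-in for the kernel's FIELD_PREP(): shift the value into the
 * mask's bit position and clamp it to the mask. */
static uint32_t field_prep(uint32_t mask, uint32_t val)
{
	return (val << __builtin_ctz(mask)) & mask;
}

int main(void)
{
	uint32_t body_dw2 = field_prep(BE_TXD_BODY2_TXPKTSIZE, 1500) |
			    field_prep(BE_TXD_BODY2_QSEL, 5) |
			    field_prep(BE_TXD_BODY2_MACID, 3);

	printf("TX WD body dword 2 = 0x%08x\n", body_dw2);
	return 0;
}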
diff --git a/drivers/net/wireless/realtek/rtw89/wow.c b/drivers/net/wireless/realtek/rtw89/wow.c
index aa9efca04025..660bf2ece927 100644
--- a/drivers/net/wireless/realtek/rtw89/wow.c
+++ b/drivers/net/wireless/realtek/rtw89/wow.c
@@ -488,6 +488,8 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ bool include_bb = !!chip->bbmcu_nr;
struct ieee80211_sta *wow_sta;
struct rtw89_sta *rtwsta = NULL;
bool is_conn = true;
@@ -501,7 +503,7 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
else
is_conn = false;
- ret = rtw89_fw_download(rtwdev, fw_type);
+ ret = rtw89_fw_download(rtwdev, fw_type, include_bb);
if (ret) {
rtw89_warn(rtwdev, "download fw failed\n");
return ret;
diff --git a/drivers/net/wireless/silabs/wfx/data_tx.c b/drivers/net/wireless/silabs/wfx/data_tx.c
index 6a5e52a96d18..a44a7403ce8d 100644
--- a/drivers/net/wireless/silabs/wfx/data_tx.c
+++ b/drivers/net/wireless/silabs/wfx/data_tx.c
@@ -208,6 +208,36 @@ static bool wfx_is_action_back(struct ieee80211_hdr *hdr)
return true;
}
+struct wfx_tx_priv *wfx_skb_tx_priv(struct sk_buff *skb)
+{
+ struct ieee80211_tx_info *tx_info;
+
+ if (!skb)
+ return NULL;
+ tx_info = IEEE80211_SKB_CB(skb);
+ return (struct wfx_tx_priv *)tx_info->rate_driver_data;
+}
+
+struct wfx_hif_req_tx *wfx_skb_txreq(struct sk_buff *skb)
+{
+ struct wfx_hif_msg *hif = (struct wfx_hif_msg *)skb->data;
+ struct wfx_hif_req_tx *req = (struct wfx_hif_req_tx *)hif->body;
+
+ return req;
+}
+
+struct wfx_vif *wfx_skb_wvif(struct wfx_dev *wdev, struct sk_buff *skb)
+{
+ struct wfx_tx_priv *tx_priv = wfx_skb_tx_priv(skb);
+ struct wfx_hif_msg *hif = (struct wfx_hif_msg *)skb->data;
+
+ if (tx_priv->vif_id != hif->interface && hif->interface != 2) {
+ dev_err(wdev->dev, "corrupted skb");
+ return wdev_to_wvif(wdev, hif->interface);
+ }
+ return wdev_to_wvif(wdev, tx_priv->vif_id);
+}
+
static u8 wfx_tx_get_link_id(struct wfx_vif *wvif, struct ieee80211_sta *sta,
struct ieee80211_hdr *hdr)
{
@@ -226,53 +256,40 @@ static u8 wfx_tx_get_link_id(struct wfx_vif *wvif, struct ieee80211_sta *sta,
static void wfx_tx_fixup_rates(struct ieee80211_tx_rate *rates)
{
- int i;
- bool finished;
+ bool has_rate0 = false;
+ int i, j;
- /* Firmware is not able to mix rates with different flags */
- for (i = 0; i < IEEE80211_TX_MAX_RATES; i++) {
- if (rates[0].flags & IEEE80211_TX_RC_SHORT_GI)
- rates[i].flags |= IEEE80211_TX_RC_SHORT_GI;
- if (!(rates[0].flags & IEEE80211_TX_RC_SHORT_GI))
+ for (i = 1, j = 1; j < IEEE80211_TX_MAX_RATES; j++) {
+ if (rates[j].idx == -1)
+ break;
+ /* The device uses the rates in descending order, whatever minstrel requests.
+ * We have to trade off here. Most important is to respect the primary rate
+ * requested by minstrel, so we drop the entries whose rate is higher than
+ * the previous one.
+ */
+ if (rates[j].idx >= rates[i - 1].idx) {
+ rates[i - 1].count += rates[j].count;
+ rates[i - 1].count = min_t(u16, 15, rates[i - 1].count);
+ } else {
+ memcpy(rates + i, rates + j, sizeof(rates[i]));
+ if (rates[i].idx == 0)
+ has_rate0 = true;
+ /* The device applies Short GI only to the first rate */
rates[i].flags &= ~IEEE80211_TX_RC_SHORT_GI;
- if (!(rates[0].flags & IEEE80211_TX_RC_USE_RTS_CTS))
- rates[i].flags &= ~IEEE80211_TX_RC_USE_RTS_CTS;
- }
-
- /* Sort rates and remove duplicates */
- do {
- finished = true;
- for (i = 0; i < IEEE80211_TX_MAX_RATES - 1; i++) {
- if (rates[i + 1].idx == rates[i].idx &&
- rates[i].idx != -1) {
- rates[i].count += rates[i + 1].count;
- if (rates[i].count > 15)
- rates[i].count = 15;
- rates[i + 1].idx = -1;
- rates[i + 1].count = 0;
-
- finished = false;
- }
- if (rates[i + 1].idx > rates[i].idx) {
- swap(rates[i + 1], rates[i]);
- finished = false;
- }
+ i++;
}
- } while (!finished);
+ }
/* Ensure that MCS0 or 1Mbps is present at the end of the retry list */
- for (i = 0; i < IEEE80211_TX_MAX_RATES; i++) {
- if (rates[i].idx == 0)
- break;
- if (rates[i].idx == -1) {
- rates[i].idx = 0;
- rates[i].count = 8; /* == hw->max_rate_tries */
- rates[i].flags = rates[i - 1].flags & IEEE80211_TX_RC_MCS;
- break;
- }
+ if (!has_rate0 && i < IEEE80211_TX_MAX_RATES) {
+ rates[i].idx = 0;
+ rates[i].count = 8; /* == hw->max_rate_tries */
+ rates[i].flags = rates[0].flags & IEEE80211_TX_RC_MCS;
+ i++;
+ }
+ for (; i < IEEE80211_TX_MAX_RATES; i++) {
+ memset(rates + i, 0, sizeof(rates[i]));
+ rates[i].idx = -1;
}
- /* All retries use long GI */
- for (i = 1; i < IEEE80211_TX_MAX_RATES; i++)
- rates[i].flags &= ~IEEE80211_TX_RC_SHORT_GI;
}
static u8 wfx_tx_get_retry_policy_id(struct wfx_vif *wvif, struct ieee80211_tx_info *tx_info)
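The reworked wfx_tx_fixup_rates() above collapses minstrel's retry table into a strictly descending list in a single pass. The following standalone sketch models that pass on a toy structure; the type, the sample values and the printed result are illustrative only and are not the driver's code.

#include <stdio.h>

/* Toy stand-in for ieee80211_tx_rate: only the fields the pass cares about. */
struct toy_rate { int idx; int count; };

#define MAX_RATES 4

/* Keep the list strictly descending: fold any entry whose rate is not lower
 * than the previously kept one into that entry, capping the retry count at 15.
 */
static void fixup(struct toy_rate *rates)
{
	int i = 1, j;

	for (j = 1; j < MAX_RATES && rates[j].idx != -1; j++) {
		if (rates[j].idx >= rates[i - 1].idx) {
			rates[i - 1].count += rates[j].count;
			if (rates[i - 1].count > 15)
				rates[i - 1].count = 15;
		} else {
			rates[i++] = rates[j];
		}
	}
	for (; i < MAX_RATES; i++)
		rates[i] = (struct toy_rate){ .idx = -1, .count = 0 };
}

int main(void)
{
	struct toy_rate r[MAX_RATES] = { { 7, 8 }, { 7, 8 }, { 5, 4 }, { 6, 2 } };
	int i;

	fixup(r);
	for (i = 0; i < MAX_RATES; i++)
		printf("idx=%d count=%d\n", r[i].idx, r[i].count);
	/* Prints idx=7 count=15 (duplicate merged, capped at 15), idx=5 count=6
	 * (the out-of-order idx=6 entry is folded into its predecessor), then
	 * two idx=-1 slots.
	 */
	return 0;
}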
@@ -334,6 +351,7 @@ static int wfx_tx_inner(struct wfx_vif *wvif, struct ieee80211_sta *sta, struct
/* Fill tx_priv */
tx_priv = (struct wfx_tx_priv *)tx_info->rate_driver_data;
tx_priv->icv_size = wfx_tx_get_icv_len(hw_key);
+ tx_priv->vif_id = wvif->id;
/* Fill hif_msg */
WARN(skb_headroom(skb) < wmsg_len, "not enough space in skb");
@@ -344,7 +362,10 @@ static int wfx_tx_inner(struct wfx_vif *wvif, struct ieee80211_sta *sta, struct
hif_msg = (struct wfx_hif_msg *)skb->data;
hif_msg->len = cpu_to_le16(skb->len);
hif_msg->id = HIF_REQ_ID_TX;
- hif_msg->interface = wvif->id;
+ if (tx_info->flags & IEEE80211_TX_CTL_TX_OFFCHAN)
+ hif_msg->interface = 2;
+ else
+ hif_msg->interface = wvif->id;
if (skb->len > le16_to_cpu(wvif->wdev->hw_caps.size_inp_ch_buf)) {
dev_warn(wvif->wdev->dev,
"requested frame size (%d) is larger than maximum supported (%d)\n",
@@ -365,9 +386,15 @@ static int wfx_tx_inner(struct wfx_vif *wvif, struct ieee80211_sta *sta, struct
req->fc_offset = offset;
/* Queue index are inverted between firmware and Linux */
req->queue_id = 3 - queue_id;
- req->peer_sta_id = wfx_tx_get_link_id(wvif, sta, hdr);
- req->retry_policy_index = wfx_tx_get_retry_policy_id(wvif, tx_info);
- req->frame_format = wfx_tx_get_frame_format(tx_info);
+ if (tx_info->flags & IEEE80211_TX_CTL_TX_OFFCHAN) {
+ req->peer_sta_id = HIF_LINK_ID_NOT_ASSOCIATED;
+ req->retry_policy_index = HIF_TX_RETRY_POLICY_INVALID;
+ req->frame_format = HIF_FRAME_FORMAT_NON_HT;
+ } else {
+ req->peer_sta_id = wfx_tx_get_link_id(wvif, sta, hdr);
+ req->retry_policy_index = wfx_tx_get_retry_policy_id(wvif, tx_info);
+ req->frame_format = wfx_tx_get_frame_format(tx_info);
+ }
if (tx_info->driver_rates[0].flags & IEEE80211_TX_RC_SHORT_GI)
req->short_gi = 1;
if (tx_info->flags & IEEE80211_TX_CTL_SEND_AFTER_DTIM)
@@ -483,7 +510,7 @@ void wfx_tx_confirm_cb(struct wfx_dev *wdev, const struct wfx_hif_cnf_tx *arg)
}
tx_info = IEEE80211_SKB_CB(skb);
tx_priv = wfx_skb_tx_priv(skb);
- wvif = wdev_to_wvif(wdev, ((struct wfx_hif_msg *)skb->data)->interface);
+ wvif = wfx_skb_wvif(wdev, skb);
WARN_ON(!wvif);
if (!wvif)
return;
@@ -545,7 +572,6 @@ void wfx_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif, u32 queues, b
struct wfx_dev *wdev = hw->priv;
struct sk_buff_head dropped;
struct wfx_vif *wvif;
- struct wfx_hif_msg *hif;
struct sk_buff *skb;
skb_queue_head_init(&dropped);
@@ -561,8 +587,7 @@ void wfx_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif, u32 queues, b
if (wdev->chip_frozen)
wfx_pending_drop(wdev, &dropped);
while ((skb = skb_dequeue(&dropped)) != NULL) {
- hif = (struct wfx_hif_msg *)skb->data;
- wvif = wdev_to_wvif(wdev, hif->interface);
+ wvif = wfx_skb_wvif(wdev, skb);
ieee80211_tx_info_clear_status(IEEE80211_SKB_CB(skb));
wfx_skb_dtor(wvif, skb);
}
diff --git a/drivers/net/wireless/silabs/wfx/data_tx.h b/drivers/net/wireless/silabs/wfx/data_tx.h
index 983470705e4b..0621b82103be 100644
--- a/drivers/net/wireless/silabs/wfx/data_tx.h
+++ b/drivers/net/wireless/silabs/wfx/data_tx.h
@@ -36,6 +36,7 @@ struct wfx_tx_policy_cache {
struct wfx_tx_priv {
ktime_t xmit_timestamp;
unsigned char icv_size;
+ unsigned char vif_id;
};
void wfx_tx_policy_init(struct wfx_vif *wvif);
@@ -45,22 +46,8 @@ void wfx_tx(struct ieee80211_hw *hw, struct ieee80211_tx_control *control, struc
void wfx_tx_confirm_cb(struct wfx_dev *wdev, const struct wfx_hif_cnf_tx *arg);
void wfx_flush(struct ieee80211_hw *hw, struct ieee80211_vif *vif, u32 queues, bool drop);
-static inline struct wfx_tx_priv *wfx_skb_tx_priv(struct sk_buff *skb)
-{
- struct ieee80211_tx_info *tx_info;
-
- if (!skb)
- return NULL;
- tx_info = IEEE80211_SKB_CB(skb);
- return (struct wfx_tx_priv *)tx_info->rate_driver_data;
-}
-
-static inline struct wfx_hif_req_tx *wfx_skb_txreq(struct sk_buff *skb)
-{
- struct wfx_hif_msg *hif = (struct wfx_hif_msg *)skb->data;
- struct wfx_hif_req_tx *req = (struct wfx_hif_req_tx *)hif->body;
-
- return req;
-}
+struct wfx_tx_priv *wfx_skb_tx_priv(struct sk_buff *skb);
+struct wfx_hif_req_tx *wfx_skb_txreq(struct sk_buff *skb);
+struct wfx_vif *wfx_skb_wvif(struct wfx_dev *wdev, struct sk_buff *skb);
#endif
diff --git a/drivers/net/wireless/silabs/wfx/hif_tx.c b/drivers/net/wireless/silabs/wfx/hif_tx.c
index 9402503fbde3..9f403d275cb1 100644
--- a/drivers/net/wireless/silabs/wfx/hif_tx.c
+++ b/drivers/net/wireless/silabs/wfx/hif_tx.c
@@ -45,6 +45,24 @@ static void *wfx_alloc_hif(size_t body_len, struct wfx_hif_msg **hif)
return NULL;
}
+static u32 wfx_rate_mask_to_hw(struct wfx_dev *wdev, u32 rates)
+{
+ int i;
+ u32 ret = 0;
+ /* The device only supports 2GHz */
+ struct ieee80211_supported_band *sband = wdev->hw->wiphy->bands[NL80211_BAND_2GHZ];
+
+ for (i = 0; i < sband->n_bitrates; i++) {
+ if (rates & BIT(i)) {
+ if (i >= sband->n_bitrates)
+ dev_warn(wdev->dev, "unsupported basic rate\n");
+ else
+ ret |= BIT(sband->bitrates[i].hw_value);
+ }
+ }
+ return ret;
+}
+
int wfx_cmd_send(struct wfx_dev *wdev, struct wfx_hif_msg *request,
void *reply, size_t reply_len, bool no_reply)
{
@@ -220,6 +238,31 @@ int wfx_hif_write_mib(struct wfx_dev *wdev, int vif_id, u16 mib_id, void *val, s
return ret;
}
+/* Hijack scan request to implement Remain-On-Channel */
+int wfx_hif_scan_uniq(struct wfx_vif *wvif, struct ieee80211_channel *chan, int duration)
+{
+ int ret;
+ struct wfx_hif_msg *hif;
+ size_t buf_len = sizeof(struct wfx_hif_req_start_scan_alt) + sizeof(u8);
+ struct wfx_hif_req_start_scan_alt *body = wfx_alloc_hif(buf_len, &hif);
+
+ if (!hif)
+ return -ENOMEM;
+ body->num_of_ssids = HIF_API_MAX_NB_SSIDS;
+ body->maintain_current_bss = 1;
+ body->disallow_ps = 1;
+ body->tx_power_level = cpu_to_le32(chan->max_power);
+ body->num_of_channels = 1;
+ body->channel_list[0] = chan->hw_value;
+ body->max_transmit_rate = API_RATE_INDEX_B_1MBPS;
+ body->min_channel_time = cpu_to_le32(duration);
+ body->max_channel_time = cpu_to_le32(duration * 110 / 100);
+ wfx_fill_header(hif, wvif->id, HIF_REQ_ID_START_SCAN, buf_len);
+ ret = wfx_cmd_send(wvif->wdev, hif, NULL, 0, false);
+ kfree(hif);
+ return ret;
+}
+
int wfx_hif_scan(struct wfx_vif *wvif, struct cfg80211_scan_request *req,
int chan_start_idx, int chan_num)
{
diff --git a/drivers/net/wireless/silabs/wfx/hif_tx.h b/drivers/net/wireless/silabs/wfx/hif_tx.h
index 71817a6571f0..aab54df6aafa 100644
--- a/drivers/net/wireless/silabs/wfx/hif_tx.h
+++ b/drivers/net/wireless/silabs/wfx/hif_tx.h
@@ -54,6 +54,7 @@ int wfx_hif_beacon_transmit(struct wfx_vif *wvif, bool enable);
int wfx_hif_update_ie_beacon(struct wfx_vif *wvif, const u8 *ies, size_t ies_len);
int wfx_hif_scan(struct wfx_vif *wvif, struct cfg80211_scan_request *req80211,
int chan_start, int chan_num);
+int wfx_hif_scan_uniq(struct wfx_vif *wvif, struct ieee80211_channel *chan, int duration);
int wfx_hif_stop_scan(struct wfx_vif *wvif);
int wfx_hif_configuration(struct wfx_dev *wdev, const u8 *conf, size_t len);
int wfx_hif_shutdown(struct wfx_dev *wdev);
diff --git a/drivers/net/wireless/silabs/wfx/main.c b/drivers/net/wireless/silabs/wfx/main.c
index ede822d771aa..e7198520bdff 100644
--- a/drivers/net/wireless/silabs/wfx/main.c
+++ b/drivers/net/wireless/silabs/wfx/main.c
@@ -151,6 +151,8 @@ static const struct ieee80211_ops wfx_ops = {
.change_chanctx = wfx_change_chanctx,
.assign_vif_chanctx = wfx_assign_vif_chanctx,
.unassign_vif_chanctx = wfx_unassign_vif_chanctx,
+ .remain_on_channel = wfx_remain_on_channel,
+ .cancel_remain_on_channel = wfx_cancel_remain_on_channel,
};
bool wfx_api_older_than(struct wfx_dev *wdev, int major, int minor)
@@ -246,6 +248,7 @@ static void wfx_free_common(void *data)
mutex_destroy(&wdev->tx_power_loop_info_lock);
mutex_destroy(&wdev->rx_stats_lock);
+ mutex_destroy(&wdev->scan_lock);
mutex_destroy(&wdev->conf_mutex);
ieee80211_free_hw(wdev->hw);
}
@@ -288,6 +291,7 @@ struct wfx_dev *wfx_init_common(struct device *dev, const struct wfx_platform_da
hw->wiphy->features |= NL80211_FEATURE_AP_SCAN;
hw->wiphy->flags |= WIPHY_FLAG_AP_PROBE_RESP_OFFLOAD;
hw->wiphy->flags |= WIPHY_FLAG_AP_UAPSD;
+ hw->wiphy->max_remain_on_channel_duration = 5000;
hw->wiphy->max_ap_assoc_sta = HIF_LINK_ID_MAX;
hw->wiphy->max_scan_ssids = 2;
hw->wiphy->max_scan_ie_len = IEEE80211_MAX_DATA_LEN;
@@ -314,6 +318,7 @@ struct wfx_dev *wfx_init_common(struct device *dev, const struct wfx_platform_da
gpiod_set_consumer_name(wdev->pdata.gpio_wakeup, "wfx wakeup");
mutex_init(&wdev->conf_mutex);
+ mutex_init(&wdev->scan_lock);
mutex_init(&wdev->rx_stats_lock);
mutex_init(&wdev->tx_power_loop_info_lock);
init_completion(&wdev->firmware_ready);
diff --git a/drivers/net/wireless/silabs/wfx/queue.c b/drivers/net/wireless/silabs/wfx/queue.c
index 37f492e5d3be..e61b86f211e5 100644
--- a/drivers/net/wireless/silabs/wfx/queue.c
+++ b/drivers/net/wireless/silabs/wfx/queue.c
@@ -68,13 +68,16 @@ void wfx_tx_queues_init(struct wfx_vif *wvif)
for (i = 0; i < IEEE80211_NUM_ACS; ++i) {
skb_queue_head_init(&wvif->tx_queue[i].normal);
skb_queue_head_init(&wvif->tx_queue[i].cab);
+ skb_queue_head_init(&wvif->tx_queue[i].offchan);
wvif->tx_queue[i].priority = priorities[i];
}
}
bool wfx_tx_queue_empty(struct wfx_vif *wvif, struct wfx_queue *queue)
{
- return skb_queue_empty_lockless(&queue->normal) && skb_queue_empty_lockless(&queue->cab);
+ return skb_queue_empty_lockless(&queue->normal) &&
+ skb_queue_empty_lockless(&queue->cab) &&
+ skb_queue_empty_lockless(&queue->offchan);
}
void wfx_tx_queues_check_empty(struct wfx_vif *wvif)
@@ -103,8 +106,9 @@ static void __wfx_tx_queue_drop(struct wfx_vif *wvif,
void wfx_tx_queue_drop(struct wfx_vif *wvif, struct wfx_queue *queue,
struct sk_buff_head *dropped)
{
- __wfx_tx_queue_drop(wvif, &queue->cab, dropped);
__wfx_tx_queue_drop(wvif, &queue->normal, dropped);
+ __wfx_tx_queue_drop(wvif, &queue->cab, dropped);
+ __wfx_tx_queue_drop(wvif, &queue->offchan, dropped);
wake_up(&wvif->wdev->tx_dequeue);
}
@@ -113,7 +117,9 @@ void wfx_tx_queues_put(struct wfx_vif *wvif, struct sk_buff *skb)
struct wfx_queue *queue = &wvif->tx_queue[skb_get_queue_mapping(skb)];
struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
- if (tx_info->flags & IEEE80211_TX_CTL_SEND_AFTER_DTIM)
+ if (tx_info->flags & IEEE80211_TX_CTL_TX_OFFCHAN)
+ skb_queue_tail(&queue->offchan, skb);
+ else if (tx_info->flags & IEEE80211_TX_CTL_SEND_AFTER_DTIM)
skb_queue_tail(&queue->cab, skb);
else
skb_queue_tail(&queue->normal, skb);
@@ -123,13 +129,11 @@ void wfx_pending_drop(struct wfx_dev *wdev, struct sk_buff_head *dropped)
{
struct wfx_queue *queue;
struct wfx_vif *wvif;
- struct wfx_hif_msg *hif;
struct sk_buff *skb;
WARN(!wdev->chip_frozen, "%s should only be used to recover a frozen device", __func__);
while ((skb = skb_dequeue(&wdev->tx_pending)) != NULL) {
- hif = (struct wfx_hif_msg *)skb->data;
- wvif = wdev_to_wvif(wdev, hif->interface);
+ wvif = wfx_skb_wvif(wdev, skb);
if (wvif) {
queue = &wvif->tx_queue[skb_get_queue_mapping(skb)];
WARN_ON(skb_get_queue_mapping(skb) > 3);
@@ -155,7 +159,7 @@ struct sk_buff *wfx_pending_get(struct wfx_dev *wdev, u32 packet_id)
if (req->packet_id != packet_id)
continue;
spin_unlock_bh(&wdev->tx_pending.lock);
- wvif = wdev_to_wvif(wdev, hif->interface);
+ wvif = wfx_skb_wvif(wdev, skb);
if (wvif) {
queue = &wvif->tx_queue[skb_get_queue_mapping(skb)];
WARN_ON(skb_get_queue_mapping(skb) > 3);
@@ -248,6 +252,26 @@ static struct sk_buff *wfx_tx_queues_get_skb(struct wfx_dev *wdev)
wvif = NULL;
while ((wvif = wvif_iterate(wdev, wvif)) != NULL) {
+ for (i = 0; i < num_queues; i++) {
+ skb = skb_dequeue(&queues[i]->offchan);
+ if (!skb)
+ continue;
+ hif = (struct wfx_hif_msg *)skb->data;
+ /* Offchan frames are assigned to a special interface,
+ * the only one allowed to send data during a scan.
+ */
+ WARN_ON(hif->interface != 2);
+ atomic_inc(&queues[i]->pending_frames);
+ trace_queues_stats(wdev, queues[i]);
+ return skb;
+ }
+ }
+
+ if (mutex_is_locked(&wdev->scan_lock))
+ return NULL;
+
+ wvif = NULL;
+ while ((wvif = wvif_iterate(wdev, wvif)) != NULL) {
if (!wvif->after_dtim_tx_allowed)
continue;
for (i = 0; i < num_queues; i++) {
diff --git a/drivers/net/wireless/silabs/wfx/queue.h b/drivers/net/wireless/silabs/wfx/queue.h
index 4731debca93d..6857fbd60fba 100644
--- a/drivers/net/wireless/silabs/wfx/queue.h
+++ b/drivers/net/wireless/silabs/wfx/queue.h
@@ -17,6 +17,7 @@ struct wfx_vif;
struct wfx_queue {
struct sk_buff_head normal;
struct sk_buff_head cab; /* Content After (DTIM) Beacon */
+ struct sk_buff_head offchan;
atomic_t pending_frames;
int priority;
};
diff --git a/drivers/net/wireless/silabs/wfx/scan.c b/drivers/net/wireless/silabs/wfx/scan.c
index 16f619ed22e0..c3c103ff88cc 100644
--- a/drivers/net/wireless/silabs/wfx/scan.c
+++ b/drivers/net/wireless/silabs/wfx/scan.c
@@ -95,7 +95,7 @@ void wfx_hw_scan_work(struct work_struct *work)
int chan_cur, ret, err;
mutex_lock(&wvif->wdev->conf_mutex);
- mutex_lock(&wvif->scan_lock);
+ mutex_lock(&wvif->wdev->scan_lock);
if (wvif->join_in_progress) {
dev_info(wvif->wdev->dev, "abort in-progress REQ_JOIN");
wfx_reset(wvif);
@@ -116,7 +116,7 @@ void wfx_hw_scan_work(struct work_struct *work)
ret = -ETIMEDOUT;
}
} while (ret >= 0 && chan_cur < hw_req->req.n_channels);
- mutex_unlock(&wvif->scan_lock);
+ mutex_unlock(&wvif->wdev->scan_lock);
mutex_unlock(&wvif->wdev->conf_mutex);
wfx_ieee80211_scan_completed_compat(wvif->wdev->hw, ret < 0);
}
@@ -145,3 +145,65 @@ void wfx_scan_complete(struct wfx_vif *wvif, int nb_chan_done)
wvif->scan_nb_chan_done = nb_chan_done;
complete(&wvif->scan_complete);
}
+
+void wfx_remain_on_channel_work(struct work_struct *work)
+{
+ struct wfx_vif *wvif = container_of(work, struct wfx_vif, remain_on_channel_work);
+ struct ieee80211_channel *chan = wvif->remain_on_channel_chan;
+ int duration = wvif->remain_on_channel_duration;
+ int ret;
+
+ /* Hijack scan request to implement Remain-On-Channel */
+ mutex_lock(&wvif->wdev->conf_mutex);
+ mutex_lock(&wvif->wdev->scan_lock);
+ if (wvif->join_in_progress) {
+ dev_info(wvif->wdev->dev, "abort in-progress REQ_JOIN");
+ wfx_reset(wvif);
+ }
+ wfx_tx_flush(wvif->wdev);
+
+ reinit_completion(&wvif->scan_complete);
+ ret = wfx_hif_scan_uniq(wvif, chan, duration);
+ if (ret)
+ goto end;
+ ieee80211_ready_on_channel(wvif->wdev->hw);
+ ret = wait_for_completion_timeout(&wvif->scan_complete,
+ msecs_to_jiffies(duration * 120 / 100));
+ if (!ret) {
+ wfx_hif_stop_scan(wvif);
+ ret = wait_for_completion_timeout(&wvif->scan_complete, 1 * HZ);
+ dev_dbg(wvif->wdev->dev, "roc timeout\n");
+ }
+ if (!ret)
+ dev_err(wvif->wdev->dev, "roc didn't stop\n");
+ ieee80211_remain_on_channel_expired(wvif->wdev->hw);
+end:
+ mutex_unlock(&wvif->wdev->scan_lock);
+ mutex_unlock(&wvif->wdev->conf_mutex);
+ wfx_bh_request_tx(wvif->wdev);
+}
+
+int wfx_remain_on_channel(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_channel *chan, int duration,
+ enum ieee80211_roc_type type)
+{
+ struct wfx_dev *wdev = hw->priv;
+ struct wfx_vif *wvif = (struct wfx_vif *)vif->drv_priv;
+
+ if (wfx_api_older_than(wdev, 3, 10))
+ return -EOPNOTSUPP;
+
+ wvif->remain_on_channel_duration = duration;
+ wvif->remain_on_channel_chan = chan;
+ schedule_work(&wvif->remain_on_channel_work);
+ return 0;
+}
+
+int wfx_cancel_remain_on_channel(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+{
+ struct wfx_vif *wvif = (struct wfx_vif *)vif->drv_priv;
+
+ wfx_hif_stop_scan(wvif);
+ flush_work(&wvif->remain_on_channel_work);
+ return 0;
+}
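For context on the remain-on-channel work added above: mac80211 only requires the driver to report when it has reached the requested channel and when the dwell period ends. A minimal kernel-context skeleton of that contract, with hypothetical driver names (this is not the wfx implementation), might look like this:

static void example_roc_work(struct work_struct *work)
{
	struct example_vif *evif = container_of(work, struct example_vif, roc_work);

	example_go_offchannel(evif);                   /* hypothetical: tune to the ROC channel */
	ieee80211_ready_on_channel(evif->hw);          /* tell mac80211 the channel is usable */
	example_wait_roc_done(evif);                   /* hypothetical: wait out the duration */
	ieee80211_remain_on_channel_expired(evif->hw); /* tell mac80211 the period ended */
}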
diff --git a/drivers/net/wireless/silabs/wfx/scan.h b/drivers/net/wireless/silabs/wfx/scan.h
index 78e3b984f375..995ab8c6cb5e 100644
--- a/drivers/net/wireless/silabs/wfx/scan.h
+++ b/drivers/net/wireless/silabs/wfx/scan.h
@@ -19,4 +19,10 @@ int wfx_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
void wfx_cancel_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif);
void wfx_scan_complete(struct wfx_vif *wvif, int nb_chan_done);
+void wfx_remain_on_channel_work(struct work_struct *work);
+int wfx_remain_on_channel(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_channel *chan, int duration,
+ enum ieee80211_roc_type type);
+int wfx_cancel_remain_on_channel(struct ieee80211_hw *hw, struct ieee80211_vif *vif);
+
#endif
diff --git a/drivers/net/wireless/silabs/wfx/sta.c b/drivers/net/wireless/silabs/wfx/sta.c
index 626dfb4b7a55..1b6c158457b4 100644
--- a/drivers/net/wireless/silabs/wfx/sta.c
+++ b/drivers/net/wireless/silabs/wfx/sta.c
@@ -20,24 +20,6 @@
#define HIF_MAX_ARP_IP_ADDRTABLE_ENTRIES 2
-u32 wfx_rate_mask_to_hw(struct wfx_dev *wdev, u32 rates)
-{
- int i;
- u32 ret = 0;
- /* The device only supports 2GHz */
- struct ieee80211_supported_band *sband = wdev->hw->wiphy->bands[NL80211_BAND_2GHZ];
-
- for (i = 0; i < sband->n_bitrates; i++) {
- if (rates & BIT(i)) {
- if (i >= sband->n_bitrates)
- dev_warn(wdev->dev, "unsupported basic rate\n");
- else
- ret |= BIT(sband->bitrates[i].hw_value);
- }
- }
- return ret;
-}
-
void wfx_cooling_timeout_work(struct work_struct *work)
{
struct wfx_dev *wdev = container_of(to_delayed_work(work), struct wfx_dev,
@@ -114,10 +96,12 @@ void wfx_configure_filter(struct ieee80211_hw *hw, unsigned int changed_flags,
*total_flags &= FIF_BCN_PRBRESP_PROMISC | FIF_ALLMULTI | FIF_OTHER_BSS |
FIF_PROBE_REQ | FIF_PSPOLL;
+ /* Filters are ignored during the scan. No frames are filtered. */
+ if (mutex_is_locked(&wdev->scan_lock))
+ return;
+
mutex_lock(&wdev->conf_mutex);
while ((wvif = wvif_iterate(wdev, wvif)) != NULL) {
- mutex_lock(&wvif->scan_lock);
-
/* Note: FIF_BCN_PRBRESP_PROMISC covers probe response and
* beacons from other BSS
*/
@@ -144,8 +128,6 @@ void wfx_configure_filter(struct ieee80211_hw *hw, unsigned int changed_flags,
else
filter_prbreq = true;
wfx_hif_set_rx_filter(wvif, filter_bssid, filter_prbreq);
-
- mutex_unlock(&wvif->scan_lock);
}
mutex_unlock(&wdev->conf_mutex);
}
@@ -402,7 +384,12 @@ void wfx_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
struct ieee80211_bss_conf *link_conf)
{
struct wfx_vif *wvif = (struct wfx_vif *)vif->drv_priv;
+ struct wfx_dev *wdev = wvif->wdev;
+ wvif = NULL;
+ while ((wvif = wvif_iterate(wdev, wvif)) != NULL)
+ wfx_update_pm(wvif);
+ wvif = (struct wfx_vif *)vif->drv_priv;
wfx_reset(wvif);
}
@@ -634,18 +621,14 @@ int wfx_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta, bool set)
void wfx_suspend_resume_mc(struct wfx_vif *wvif, enum sta_notify_cmd notify_cmd)
{
- struct wfx_vif *wvif_it;
-
if (notify_cmd != STA_NOTIFY_AWAKE)
return;
/* Device won't be able to honor CAB if a scan is in progress on any interface. Prefer to
* skip this DTIM and wait for the next one.
*/
- wvif_it = NULL;
- while ((wvif_it = wvif_iterate(wvif->wdev, wvif_it)) != NULL)
- if (mutex_is_locked(&wvif_it->scan_lock))
- return;
+ if (mutex_is_locked(&wvif->wdev->scan_lock))
+ return;
if (!wfx_tx_queues_has_cab(wvif) || wvif->after_dtim_tx_allowed)
dev_warn(wvif->wdev->dev, "incorrect sequence (%d CAB in queue)",
@@ -743,9 +726,9 @@ int wfx_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
complete(&wvif->set_pm_mode_complete);
INIT_WORK(&wvif->tx_policy_upload_work, wfx_tx_policy_upload_work);
- mutex_init(&wvif->scan_lock);
init_completion(&wvif->scan_complete);
INIT_WORK(&wvif->scan_work, wfx_hw_scan_work);
+ INIT_WORK(&wvif->remain_on_channel_work, wfx_remain_on_channel_work);
wfx_tx_queues_init(wvif);
wfx_tx_policy_init(wvif);
diff --git a/drivers/net/wireless/silabs/wfx/sta.h b/drivers/net/wireless/silabs/wfx/sta.h
index 888db5cd3206..c478ddcb934b 100644
--- a/drivers/net/wireless/silabs/wfx/sta.h
+++ b/drivers/net/wireless/silabs/wfx/sta.h
@@ -66,6 +66,5 @@ int wfx_update_pm(struct wfx_vif *wvif);
/* Other Helpers */
void wfx_reset(struct wfx_vif *wvif);
-u32 wfx_rate_mask_to_hw(struct wfx_dev *wdev, u32 rates);
#endif
diff --git a/drivers/net/wireless/silabs/wfx/wfx.h b/drivers/net/wireless/silabs/wfx/wfx.h
index 13ba84b3b2c3..bd0df2e1ea99 100644
--- a/drivers/net/wireless/silabs/wfx/wfx.h
+++ b/drivers/net/wireless/silabs/wfx/wfx.h
@@ -43,6 +43,7 @@ struct wfx_dev {
struct delayed_work cooling_timeout_work;
bool poll_irq;
bool chip_frozen;
+ struct mutex scan_lock;
struct mutex conf_mutex;
struct wfx_hif_cmd hif_cmd;
@@ -69,6 +70,7 @@ struct wfx_vif {
bool after_dtim_tx_allowed;
bool join_in_progress;
+ struct completion set_pm_mode_complete;
struct delayed_work beacon_loss_work;
@@ -80,15 +82,15 @@ struct wfx_vif {
unsigned long uapsd_mask;
- /* avoid some operations in parallel with scan */
- struct mutex scan_lock;
struct work_struct scan_work;
struct completion scan_complete;
int scan_nb_chan_done;
bool scan_abort;
struct ieee80211_scan_request *scan_req;
- struct completion set_pm_mode_complete;
+ struct ieee80211_channel *remain_on_channel_chan;
+ int remain_on_channel_duration;
+ struct work_struct remain_on_channel_work;
};
static inline struct ieee80211_vif *wvif_to_vif(struct wfx_vif *wvif)
diff --git a/drivers/net/wireless/st/cw1200/txrx.c b/drivers/net/wireless/st/cw1200/txrx.c
index 6894b919ff94..084d52b11f5b 100644
--- a/drivers/net/wireless/st/cw1200/txrx.c
+++ b/drivers/net/wireless/st/cw1200/txrx.c
@@ -994,7 +994,7 @@ void cw1200_skb_dtor(struct cw1200_common *priv,
txpriv->raw_link_id, txpriv->tid);
tx_policy_put(priv, txpriv->rate_id);
}
- ieee80211_tx_status(priv->hw, skb);
+ ieee80211_tx_status_skb(priv->hw, skb);
}
void cw1200_rx_cb(struct cw1200_common *priv,
@@ -1166,7 +1166,7 @@ void cw1200_rx_cb(struct cw1200_common *priv,
size_t ies_len = skb->len - (ies - (u8 *)(skb->data));
tim_ie = cfg80211_find_ie(WLAN_EID_TIM, ies, ies_len);
- if (tim_ie) {
+ if (tim_ie && tim_ie[1] >= sizeof(struct ieee80211_tim_ie)) {
struct ieee80211_tim_ie *tim =
(struct ieee80211_tim_ie *)&tim_ie[2];
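The added length check above guards the cast of the TIM element. As a general pattern (hypothetical variable names, not the cw1200 code): an information element is laid out as id, length, body, so byte 1 must be validated against the structure being overlaid before dereferencing it.

const u8 *ie = cfg80211_find_ie(WLAN_EID_TIM, ies, ies_len);

if (ie && ie[1] >= sizeof(struct ieee80211_tim_ie)) {
	const struct ieee80211_tim_ie *tim = (const void *)&ie[2];

	/* ie[1] covers the fixed part, so tim->dtim_count and
	 * tim->dtim_period are safe to read here.
	 */
}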
diff --git a/drivers/net/wireless/ti/wl1251/main.c b/drivers/net/wireless/ti/wl1251/main.c
index eded284af600..cd9a41f59f32 100644
--- a/drivers/net/wireless/ti/wl1251/main.c
+++ b/drivers/net/wireless/ti/wl1251/main.c
@@ -404,7 +404,7 @@ static int wl1251_op_start(struct ieee80211_hw *hw)
/* update hw/fw version info in wiphy struct */
wiphy->hw_version = wl->chip_id;
- strncpy(wiphy->fw_version, wl->fw_ver, sizeof(wiphy->fw_version));
+ strscpy(wiphy->fw_version, wl->fw_ver, sizeof(wiphy->fw_version));
out:
if (ret < 0)
diff --git a/drivers/net/wireless/ti/wl1251/tx.c b/drivers/net/wireless/ti/wl1251/tx.c
index e9dc3c72bb11..474b603c121c 100644
--- a/drivers/net/wireless/ti/wl1251/tx.c
+++ b/drivers/net/wireless/ti/wl1251/tx.c
@@ -434,7 +434,7 @@ static void wl1251_tx_packet_cb(struct wl1251 *wl,
result->status, wl1251_tx_parse_status(result->status));
- ieee80211_tx_status(wl->hw, skb);
+ ieee80211_tx_status_skb(wl->hw, skb);
wl->tx_frames[result->id] = NULL;
}
@@ -566,7 +566,7 @@ void wl1251_tx_flush(struct wl1251 *wl)
if (!(info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS))
continue;
- ieee80211_tx_status(wl->hw, skb);
+ ieee80211_tx_status_skb(wl->hw, skb);
}
for (i = 0; i < FW_TX_CMPLT_BLOCK_SIZE; i++)
@@ -577,7 +577,7 @@ void wl1251_tx_flush(struct wl1251 *wl)
if (!(info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS))
continue;
- ieee80211_tx_status(wl->hw, skb);
+ ieee80211_tx_status_skb(wl->hw, skb);
wl->tx_frames[i] = NULL;
}
}
diff --git a/drivers/net/wireless/ti/wl12xx/main.c b/drivers/net/wireless/ti/wl12xx/main.c
index d06a2c419447..de045fe4ca1e 100644
--- a/drivers/net/wireless/ti/wl12xx/main.c
+++ b/drivers/net/wireless/ti/wl12xx/main.c
@@ -1919,7 +1919,7 @@ out:
return ret;
}
-static int wl12xx_remove(struct platform_device *pdev)
+static void wl12xx_remove(struct platform_device *pdev)
{
struct wl1271 *wl = platform_get_drvdata(pdev);
struct wl12xx_priv *priv;
@@ -1928,7 +1928,7 @@ static int wl12xx_remove(struct platform_device *pdev)
kfree(priv->rx_mem_addr);
- return wlcore_remove(pdev);
+ wlcore_remove(pdev);
}
static const struct platform_device_id wl12xx_id_table[] = {
@@ -1939,7 +1939,7 @@ MODULE_DEVICE_TABLE(platform, wl12xx_id_table);
static struct platform_driver wl12xx_driver = {
.probe = wl12xx_probe,
- .remove = wl12xx_remove,
+ .remove_new = wl12xx_remove,
.id_table = wl12xx_id_table,
.driver = {
.name = "wl12xx_driver",
diff --git a/drivers/net/wireless/ti/wl18xx/main.c b/drivers/net/wireless/ti/wl18xx/main.c
index 0b3cf8477c6c..20d9181b3410 100644
--- a/drivers/net/wireless/ti/wl18xx/main.c
+++ b/drivers/net/wireless/ti/wl18xx/main.c
@@ -1516,12 +1516,9 @@ static int wl18xx_handle_static_data(struct wl1271 *wl,
struct wl18xx_static_data_priv *static_data_priv =
(struct wl18xx_static_data_priv *) static_data->priv;
- strncpy(wl->chip.phy_fw_ver_str, static_data_priv->phy_version,
+ strscpy(wl->chip.phy_fw_ver_str, static_data_priv->phy_version,
sizeof(wl->chip.phy_fw_ver_str));
- /* make sure the string is NULL-terminated */
- wl->chip.phy_fw_ver_str[sizeof(wl->chip.phy_fw_ver_str) - 1] = '\0';
-
wl1271_info("PHY firmware version: %s", static_data_priv->phy_version);
return 0;
@@ -2033,7 +2030,7 @@ MODULE_DEVICE_TABLE(platform, wl18xx_id_table);
static struct platform_driver wl18xx_driver = {
.probe = wl18xx_probe,
- .remove = wlcore_remove,
+ .remove_new = wlcore_remove,
.id_table = wl18xx_id_table,
.driver = {
.name = "wl18xx_driver",
diff --git a/drivers/net/wireless/ti/wlcore/boot.c b/drivers/net/wireless/ti/wlcore/boot.c
index 85abd0a2d1c9..f481c2e3dbc8 100644
--- a/drivers/net/wireless/ti/wlcore/boot.c
+++ b/drivers/net/wireless/ti/wlcore/boot.c
@@ -41,12 +41,9 @@ static int wlcore_boot_parse_fw_ver(struct wl1271 *wl,
{
int ret;
- strncpy(wl->chip.fw_ver_str, static_data->fw_version,
+ strscpy(wl->chip.fw_ver_str, static_data->fw_version,
sizeof(wl->chip.fw_ver_str));
- /* make sure the string is NULL-terminated */
- wl->chip.fw_ver_str[sizeof(wl->chip.fw_ver_str) - 1] = '\0';
-
ret = sscanf(wl->chip.fw_ver_str + 4, "%u.%u.%u.%u.%u",
&wl->chip.fw_ver[0], &wl->chip.fw_ver[1],
&wl->chip.fw_ver[2], &wl->chip.fw_ver[3],
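The strncpy() to strscpy() conversions above also drop the manual terminator writes, because strscpy() always NUL-terminates the destination and reports truncation. A small kernel-context sketch with hypothetical names:

char fw_ver[32];
ssize_t n;

n = strscpy(fw_ver, reported_version, sizeof(fw_ver));
if (n == -E2BIG)
	dev_warn(dev, "firmware version string truncated\n");
/* fw_ver is NUL-terminated either way; no fw_ver[sizeof(fw_ver) - 1] = '\0' needed. */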
diff --git a/drivers/net/wireless/ti/wlcore/event.c b/drivers/net/wireless/ti/wlcore/event.c
index 46ab69eab26a..1e082d039b82 100644
--- a/drivers/net/wireless/ti/wlcore/event.c
+++ b/drivers/net/wireless/ti/wlcore/event.c
@@ -229,7 +229,7 @@ void wlcore_event_channel_switch(struct wl1271 *wl,
vif = wl12xx_wlvif_to_vif(wlvif);
if (wlvif->bss_type == BSS_TYPE_STA_BSS) {
- ieee80211_chswitch_done(vif, success);
+ ieee80211_chswitch_done(vif, success, 0);
cancel_delayed_work(&wlvif->channel_switch_work);
} else {
set_bit(WLVIF_FLAG_BEACON_DISABLED, &wlvif->flags);
diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c
index bf21611872a3..fb9ed97774c7 100644
--- a/drivers/net/wireless/ti/wlcore/main.c
+++ b/drivers/net/wireless/ti/wlcore/main.c
@@ -1126,7 +1126,7 @@ int wl1271_plt_start(struct wl1271 *wl, const enum plt_mode plt_mode)
/* update hw/fw version info in wiphy struct */
wiphy->hw_version = wl->chip.id;
- strncpy(wiphy->fw_version, wl->chip.fw_ver_str,
+ strscpy(wiphy->fw_version, wl->chip.fw_ver_str,
sizeof(wiphy->fw_version));
goto out;
@@ -2043,7 +2043,7 @@ static void wlcore_channel_switch_work(struct work_struct *work)
goto out;
vif = wl12xx_wlvif_to_vif(wlvif);
- ieee80211_chswitch_done(vif, false);
+ ieee80211_chswitch_done(vif, false, 0);
ret = pm_runtime_resume_and_get(wl->dev);
if (ret < 0)
@@ -2344,7 +2344,7 @@ power_off:
/* update hw/fw version info in wiphy struct */
wiphy->hw_version = wl->chip.id;
- strncpy(wiphy->fw_version, wl->chip.fw_ver_str,
+ strscpy(wiphy->fw_version, wl->chip.fw_ver_str,
sizeof(wiphy->fw_version));
/*
@@ -3030,7 +3030,7 @@ static int wlcore_unset_assoc(struct wl1271 *wl, struct wl12xx_vif *wlvif)
struct ieee80211_vif *vif = wl12xx_wlvif_to_vif(wlvif);
wl12xx_cmd_stop_channel_switch(wl, wlvif);
- ieee80211_chswitch_done(vif, false);
+ ieee80211_chswitch_done(vif, false, 0);
cancel_delayed_work(&wlvif->channel_switch_work);
}
@@ -5451,7 +5451,7 @@ static void wl12xx_op_channel_switch(struct ieee80211_hw *hw,
if (unlikely(wl->state == WLCORE_STATE_OFF)) {
if (test_bit(WLVIF_FLAG_STA_ASSOCIATED, &wlvif->flags))
- ieee80211_chswitch_done(vif, false);
+ ieee80211_chswitch_done(vif, false, 0);
goto out;
} else if (unlikely(wl->state != WLCORE_STATE_ON)) {
goto out;
@@ -6737,7 +6737,7 @@ int wlcore_probe(struct wl1271 *wl, struct platform_device *pdev)
}
EXPORT_SYMBOL_GPL(wlcore_probe);
-int wlcore_remove(struct platform_device *pdev)
+void wlcore_remove(struct platform_device *pdev)
{
struct wlcore_platdev_data *pdev_data = dev_get_platdata(&pdev->dev);
struct wl1271 *wl = platform_get_drvdata(pdev);
@@ -6752,7 +6752,7 @@ int wlcore_remove(struct platform_device *pdev)
if (pdev_data->family && pdev_data->family->nvs_name)
wait_for_completion(&wl->nvs_loading_complete);
if (!wl->initialized)
- return 0;
+ return;
if (wl->wakeirq >= 0) {
dev_pm_clear_wake_irq(wl->dev);
@@ -6772,8 +6772,6 @@ int wlcore_remove(struct platform_device *pdev)
free_irq(wl->irq, wl);
wlcore_free_hw(wl);
-
- return 0;
}
EXPORT_SYMBOL_GPL(wlcore_remove);
diff --git a/drivers/net/wireless/ti/wlcore/wlcore.h b/drivers/net/wireless/ti/wlcore/wlcore.h
index 81c94d390623..1f8511bf9bb3 100644
--- a/drivers/net/wireless/ti/wlcore/wlcore.h
+++ b/drivers/net/wireless/ti/wlcore/wlcore.h
@@ -497,7 +497,7 @@ struct wl1271 {
};
int wlcore_probe(struct wl1271 *wl, struct platform_device *pdev);
-int wlcore_remove(struct platform_device *pdev);
+void wlcore_remove(struct platform_device *pdev);
struct ieee80211_hw *wlcore_alloc_hw(size_t priv_size, u32 aggr_buf_size,
u32 mbox_size);
int wlcore_free_hw(struct wl1271 *wl);
diff --git a/drivers/net/wireless/virtual/mac80211_hwsim.c b/drivers/net/wireless/virtual/mac80211_hwsim.c
index 1f524030b186..c7b4414cc6c3 100644
--- a/drivers/net/wireless/virtual/mac80211_hwsim.c
+++ b/drivers/net/wireless/virtual/mac80211_hwsim.c
@@ -72,15 +72,6 @@ MODULE_PARM_DESC(mlo, "Support MLO");
/**
* enum hwsim_regtest - the type of regulatory tests we offer
*
- * These are the different values you can use for the regtest
- * module parameter. This is useful to help test world roaming
- * and the driver regulatory_hint() call and combinations of these.
- * If you want to do specific alpha2 regulatory domain tests simply
- * use the userspace regulatory request as that will be respected as
- * well without the need of this module parameter. This is designed
- * only for testing the driver regulatory request, world roaming
- * and all possible combinations.
- *
* @HWSIM_REGTEST_DISABLED: No regulatory tests are performed,
* this is the default value.
* @HWSIM_REGTEST_DRIVER_REG_FOLLOW: Used for testing the driver regulatory
@@ -125,6 +116,15 @@ MODULE_PARM_DESC(mlo, "Support MLO");
* domain request
6 and on - should follow the intersection of the 3rd, 4th and 5th radio
* regulatory requests.
+ *
+ * These are the different values you can use for the regtest
+ * module parameter. This is useful to help test world roaming
+ * and the driver regulatory_hint() call and combinations of these.
+ * If you want to do specific alpha2 regulatory domain tests simply
+ * use the userspace regulatory request as that will be respected as
+ * well without the need of this module parameter. This is designed
+ * only for testing the driver regulatory request, world roaming
+ * and all possible combinations.
*/
enum hwsim_regtest {
HWSIM_REGTEST_DISABLED = 0,
@@ -2445,6 +2445,14 @@ static void mac80211_hwsim_vif_info_changed(struct ieee80211_hw *hw,
vp->assoc = vif->cfg.assoc;
vp->aid = vif->cfg.aid;
}
+
+ if (vif->type == NL80211_IFTYPE_STATION &&
+ changed & BSS_CHANGED_MLD_VALID_LINKS) {
+ u16 usable_links = ieee80211_vif_usable_links(vif);
+
+ if (vif->active_links != usable_links)
+ ieee80211_set_active_links_async(vif, usable_links);
+ }
}
static void mac80211_hwsim_link_info_changed(struct ieee80211_hw *hw,
@@ -3170,7 +3178,7 @@ static void mac80211_hwsim_get_et_strings(struct ieee80211_hw *hw,
u32 sset, u8 *data)
{
if (sset == ETH_SS_STATS)
- memcpy(data, *mac80211_hwsim_gstrings_stats,
+ memcpy(data, mac80211_hwsim_gstrings_stats,
sizeof(mac80211_hwsim_gstrings_stats));
}
@@ -4899,25 +4907,19 @@ static const struct ieee80211_sband_iftype_data sband_capa_6ghz[] = {
static void mac80211_hwsim_sband_capab(struct ieee80211_supported_band *sband)
{
- u16 n_iftype_data;
-
- if (sband->band == NL80211_BAND_2GHZ) {
- n_iftype_data = ARRAY_SIZE(sband_capa_2ghz);
- sband->iftype_data =
- (struct ieee80211_sband_iftype_data *)sband_capa_2ghz;
- } else if (sband->band == NL80211_BAND_5GHZ) {
- n_iftype_data = ARRAY_SIZE(sband_capa_5ghz);
- sband->iftype_data =
- (struct ieee80211_sband_iftype_data *)sband_capa_5ghz;
- } else if (sband->band == NL80211_BAND_6GHZ) {
- n_iftype_data = ARRAY_SIZE(sband_capa_6ghz);
- sband->iftype_data =
- (struct ieee80211_sband_iftype_data *)sband_capa_6ghz;
- } else {
- return;
+ switch (sband->band) {
+ case NL80211_BAND_2GHZ:
+ ieee80211_set_sband_iftype_data(sband, sband_capa_2ghz);
+ break;
+ case NL80211_BAND_5GHZ:
+ ieee80211_set_sband_iftype_data(sband, sband_capa_5ghz);
+ break;
+ case NL80211_BAND_6GHZ:
+ ieee80211_set_sband_iftype_data(sband, sband_capa_6ghz);
+ break;
+ default:
+ break;
}
-
- sband->n_iftype_data = n_iftype_data;
}
#ifdef CONFIG_MAC80211_MESH
diff --git a/drivers/net/wireless/virtual/mac80211_hwsim.h b/drivers/net/wireless/virtual/mac80211_hwsim.h
index 92126f02c58f..4676cdaf4cfd 100644
--- a/drivers/net/wireless/virtual/mac80211_hwsim.h
+++ b/drivers/net/wireless/virtual/mac80211_hwsim.h
@@ -3,7 +3,7 @@
* mac80211_hwsim - software simulator of 802.11 radio(s) for mac80211
* Copyright (c) 2008, Jouni Malinen <j@w1.fi>
* Copyright (c) 2011, Javier Lopez <jlopex@gmail.com>
- * Copyright (C) 2020, 2022 Intel Corporation
+ * Copyright (C) 2020, 2022-2023 Intel Corporation
*/
#ifndef __MAC80211_HWSIM_H
@@ -86,7 +86,7 @@ enum hwsim_tx_control_flags {
* with %HWSIM_CMD_REPORT_PMSR.
* @__HWSIM_CMD_MAX: enum limit
*/
-enum {
+enum hwsim_commands {
HWSIM_CMD_UNSPEC,
HWSIM_CMD_REGISTER,
HWSIM_CMD_FRAME,
@@ -117,11 +117,11 @@ enum {
* the frame was broadcasted from
* @HWSIM_ATTR_FRAME: Data array
* @HWSIM_ATTR_FLAGS: mac80211 transmission flags, used to process
- properly the frame at user space
+ * properly the frame at user space
* @HWSIM_ATTR_RX_RATE: estimated rx rate index for this frame at user
- space
+ * space
* @HWSIM_ATTR_SIGNAL: estimated RX signal for this frame at user
- space
+ * space
* @HWSIM_ATTR_TX_INFO: ieee80211_tx_rate array
* @HWSIM_ATTR_COOKIE: sk_buff cookie to identify the frame
* @HWSIM_ATTR_CHANNELS: u32 attribute used with the %HWSIM_CMD_CREATE_RADIO
@@ -140,6 +140,7 @@ enum {
* command to force radio removal when process that created the radio dies
* @HWSIM_ATTR_RADIO_NAME: Name of radio, e.g. phy666
* @HWSIM_ATTR_NO_VIF: Do not create vif (wlanX) when creating radio.
+ * @HWSIM_ATTR_PAD: padding attribute for 64-bit values, ignore
* @HWSIM_ATTR_FREQ: Frequency at which packet is transmitted or received.
* @HWSIM_ATTR_TX_INFO_FLAGS: additional flags for corresponding
* rates of %HWSIM_ATTR_TX_INFO
@@ -156,9 +157,7 @@ enum {
* to provide peer measurement result (nl80211_peer_measurement_attrs)
* @__HWSIM_ATTR_MAX: enum limit
*/
-
-
-enum {
+enum hwsim_attrs {
HWSIM_ATTR_UNSPEC,
HWSIM_ATTR_ADDR_RECEIVER,
HWSIM_ATTR_ADDR_TRANSMITTER,
@@ -259,7 +258,7 @@ enum hwsim_tx_rate_flags {
* struct hwsim_tx_rate - rate selection/status
*
* @idx: rate index to attempt to send with
- * @count: number of tries in this rate before going to the next rate
+ * @flags: the rate flags according to &enum hwsim_tx_rate_flags
*
* A value of -1 for @idx indicates an invalid rate and, if used
* in an array of retry rates, that no more rates should be tried.
@@ -287,7 +286,7 @@ struct hwsim_tx_rate_flag {
* @HWSIM_VQ_RX: receive frames and transmission info reports
* @HWSIM_NUM_VQS: enum limit
*/
-enum {
+enum hwsim_vqs {
HWSIM_VQ_TX,
HWSIM_VQ_RX,
HWSIM_NUM_VQS,
diff --git a/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h b/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h
index e77084e76718..fdc211bbeda7 100644
--- a/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h
+++ b/drivers/net/wwan/iosm/iosm_ipc_chnl_cfg.h
@@ -51,7 +51,7 @@ struct ipc_chnl_cfg {
/**
* ipc_chnl_cfg_get - Get pipe configuration.
* @chnl_cfg: Array of ipc_chnl_cfg struct
- * @index: Channel index (upto MAX_CHANNELS)
+ * @index: Channel index (up to MAX_CHANNELS)
*
* Return: 0 on success and failure value on error
*/
diff --git a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
index 026c5bd0f999..6bd0290e8be7 100644
--- a/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
+++ b/drivers/net/wwan/iosm/iosm_ipc_imem_ops.h
@@ -36,8 +36,8 @@
/**
* ipc_imem_sys_port_open - Open a port link to CP.
* @ipc_imem: Imem instance.
- * @chl_id: Channel Indentifier.
- * @hp_id: HP Indentifier.
+ * @chl_id: Channel Identifier.
+ * @hp_id: HP Identifier.
*
* Return: channel instance on success, NULL for failure
*/
diff --git a/drivers/net/wwan/iosm/iosm_ipc_mux.h b/drivers/net/wwan/iosm/iosm_ipc_mux.h
index 17ca8d1f9397..db5f1f9ebf26 100644
--- a/drivers/net/wwan/iosm/iosm_ipc_mux.h
+++ b/drivers/net/wwan/iosm/iosm_ipc_mux.h
@@ -432,7 +432,7 @@ int ipc_mux_open_session(struct iosm_mux *ipc_mux, int session_nr);
int ipc_mux_close_session(struct iosm_mux *ipc_mux, int session_nr);
/**
- * ipc_mux_get_max_sessions - Retuns the maximum sessions supported on the
+ * ipc_mux_get_max_sessions - Returns the maximum sessions supported on the
* provided MUX instance.
* @ipc_mux: Pointer to MUX data-struct
*
diff --git a/drivers/net/wwan/iosm/iosm_ipc_pm.h b/drivers/net/wwan/iosm/iosm_ipc_pm.h
index e7c00f388cb0..5f14d7932af9 100644
--- a/drivers/net/wwan/iosm/iosm_ipc_pm.h
+++ b/drivers/net/wwan/iosm/iosm_ipc_pm.h
@@ -172,7 +172,7 @@ bool ipc_pm_prepare_host_sleep(struct iosm_pm *ipc_pm);
bool ipc_pm_prepare_host_active(struct iosm_pm *ipc_pm);
/**
- * ipc_pm_wait_for_device_active - Wait upto IPC_PM_ACTIVE_TIMEOUT_MS ms
+ * ipc_pm_wait_for_device_active - Wait up to IPC_PM_ACTIVE_TIMEOUT_MS ms
* for the device to reach active state
* @ipc_pm: Pointer to power management component
*
diff --git a/drivers/net/wwan/iosm/iosm_ipc_port.h b/drivers/net/wwan/iosm/iosm_ipc_port.h
index 11bc8ed21616..d33c52aebf66 100644
--- a/drivers/net/wwan/iosm/iosm_ipc_port.h
+++ b/drivers/net/wwan/iosm/iosm_ipc_port.h
@@ -18,7 +18,7 @@
* @pcie: PCIe component
* @port_type: WWAN port type
* @channel: Channel instance
- * @chl_id: Channel Indentifier
+ * @chl_id: Channel Identifier
*/
struct iosm_cdev {
struct wwan_port *iosm_port;
diff --git a/drivers/net/wwan/iosm/iosm_ipc_trace.h b/drivers/net/wwan/iosm/iosm_ipc_trace.h
index 5ebe7790585c..3e7c7f163e1d 100644
--- a/drivers/net/wwan/iosm/iosm_ipc_trace.h
+++ b/drivers/net/wwan/iosm/iosm_ipc_trace.h
@@ -29,7 +29,7 @@ enum trace_ctrl_mode {
* @ipc_imem: Imem instance
* @dev: Pointer to device struct
* @channel: Channel instance
- * @chl_id: Channel Indentifier
+ * @chl_id: Channel Identifier
* @trc_mutex: Mutex used for read and write mode
* @mode: Mode for enable and disable trace
*/
diff --git a/drivers/net/wwan/rpmsg_wwan_ctrl.c b/drivers/net/wwan/rpmsg_wwan_ctrl.c
index 86b60aadfa11..26756ff0e44d 100644
--- a/drivers/net/wwan/rpmsg_wwan_ctrl.c
+++ b/drivers/net/wwan/rpmsg_wwan_ctrl.c
@@ -37,7 +37,7 @@ static int rpmsg_wwan_ctrl_start(struct wwan_port *port)
.dst = RPMSG_ADDR_ANY,
};
- strncpy(chinfo.name, rpwwan->rpdev->id.name, RPMSG_NAME_SIZE);
+ strscpy(chinfo.name, rpwwan->rpdev->id.name, sizeof(chinfo.name));
rpwwan->ept = rpmsg_create_ept(rpwwan->rpdev, rpmsg_wwan_ctrl_callback,
rpwwan, chinfo);
if (!rpwwan->ept)
diff --git a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
index f4ff2198b5ef..210d84c67ef9 100644
--- a/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
+++ b/drivers/net/wwan/t7xx/t7xx_hif_dpmaif_rx.c
@@ -852,7 +852,7 @@ int t7xx_dpmaif_napi_rx_poll(struct napi_struct *napi, const int budget)
if (!ret) {
napi_complete_done(napi, work_done);
rxq->sleep_lock_pending = true;
- napi_reschedule(napi);
+ napi_schedule(napi);
return work_done;
}
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
index 80edb8e75a6a..0bc97430211b 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
@@ -445,7 +445,8 @@ int t7xx_fsm_append_event(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_event_state ev
return -EINVAL;
}
- event = kmalloc(sizeof(*event) + length, in_interrupt() ? GFP_ATOMIC : GFP_KERNEL);
+ event = kmalloc(struct_size(event, data, length),
+ in_interrupt() ? GFP_ATOMIC : GFP_KERNEL);
if (!event)
return -ENOMEM;
diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.h b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
index b6e76f3903c8..b0b3662ae6d7 100644
--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.h
+++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
@@ -102,7 +102,7 @@ struct t7xx_fsm_event {
struct list_head entry;
enum t7xx_fsm_event_state event_id;
unsigned int length;
- unsigned char data[];
+ unsigned char data[] __counted_by(length);
};
struct t7xx_fsm_command {
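The two t7xx hunks above pair struct_size() with a __counted_by() annotation on the flexible array. A kernel-context sketch of the idiom, with hypothetical names:

struct example_event {
	unsigned int length;
	unsigned char data[] __counted_by(length);   /* bounds-checked against ->length */
};

static struct example_event *example_alloc(unsigned int length, gfp_t gfp)
{
	struct example_event *ev;

	/* struct_size() computes sizeof(*ev) + length * sizeof(ev->data[0]) with
	 * overflow checking, instead of open-coded size arithmetic.
	 */
	ev = kmalloc(struct_size(ev, data, length), gfp);
	if (ev)
		ev->length = length;   /* set the counter before touching data[] */
	return ev;
}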
diff --git a/drivers/net/wwan/wwan_core.c b/drivers/net/wwan/wwan_core.c
index 284ab1f56391..72e01e550a16 100644
--- a/drivers/net/wwan/wwan_core.c
+++ b/drivers/net/wwan/wwan_core.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
/* Copyright (c) 2021, Linaro Ltd <loic.poulain@linaro.org> */
+#include <linux/bitmap.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/debugfs.h>
@@ -301,7 +302,7 @@ static void wwan_remove_dev(struct wwan_device *wwandev)
static const struct {
const char * const name; /* Port type name */
- const char * const devsuf; /* Port devce name suffix */
+ const char * const devsuf; /* Port device name suffix */
} wwan_port_types[WWAN_PORT_MAX + 1] = {
[WWAN_PORT_AT] = {
.name = "AT",
@@ -395,7 +396,7 @@ static int __wwan_port_dev_assign_name(struct wwan_port *port, const char *fmt)
char buf[0x20];
int id;
- idmap = (unsigned long *)get_zeroed_page(GFP_KERNEL);
+ idmap = bitmap_zalloc(max_ports, GFP_KERNEL);
if (!idmap)
return -ENOMEM;
@@ -414,7 +415,7 @@ static int __wwan_port_dev_assign_name(struct wwan_port *port, const char *fmt)
/* Allocate unique id */
id = find_first_zero_bit(idmap, max_ports);
- free_page((unsigned long)idmap);
+ bitmap_free(idmap);
snprintf(buf, sizeof(buf), fmt, id); /* Name generation */
@@ -1183,7 +1184,7 @@ void wwan_unregister_ops(struct device *parent)
*/
put_device(&wwandev->dev);
- rtnl_lock(); /* Prevent concurent netdev(s) creation/destroying */
+ rtnl_lock(); /* Prevent concurrent netdev(s) creation/destroying */
/* Remove all child netdev(s), using batch removing */
device_for_each_child(&wwandev->dev, &kill_list,
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index fc3bb63b9ac3..db304f178136 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -252,6 +252,9 @@ xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (vif->hash.alg == XEN_NETIF_CTRL_HASH_ALGORITHM_NONE)
skb_clear_hash(skb);
+ /* timestamp packet in software */
+ skb_tx_timestamp(skb);
+
if (!xenvif_rx_queue_tail(queue, skb))
goto drop;
@@ -458,7 +461,7 @@ static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data)
static const struct ethtool_ops xenvif_ethtool_ops = {
.get_link = ethtool_op_get_link,
-
+ .get_ts_info = ethtool_op_get_ts_info,
.get_sset_count = xenvif_get_sset_count,
.get_ethtool_stats = xenvif_get_ethtool_stats,
.get_strings = xenvif_get_strings,
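With skb_tx_timestamp() and ethtool_op_get_ts_info wired up above, xen-netback can satisfy software TX timestamp requests. A user-space sketch of requesting them (the socket descriptor "sk" is assumed; error handling omitted):

int flags = SOF_TIMESTAMPING_TX_SOFTWARE | SOF_TIMESTAMPING_SOFTWARE;

setsockopt(sk, SOL_SOCKET, SO_TIMESTAMPING, &flags, sizeof(flags));
/* timestamps are then delivered on the socket error queue (recvmsg with MSG_ERRQUEUE) */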
diff --git a/drivers/ptp/Kconfig b/drivers/ptp/Kconfig
index ed9d97a032f1..5dd5f188e14f 100644
--- a/drivers/ptp/Kconfig
+++ b/drivers/ptp/Kconfig
@@ -188,6 +188,7 @@ config PTP_1588_CLOCK_OCP
depends on COMMON_CLK
select NET_DEVLINK
select CRC16
+ select DPLL
help
This driver adds support for an OpenCompute time card.
diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c
index 362bf756e6b7..282cd7d24077 100644
--- a/drivers/ptp/ptp_chardev.c
+++ b/drivers/ptp/ptp_chardev.c
@@ -10,6 +10,7 @@
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/timekeeping.h>
+#include <linux/debugfs.h>
#include <linux/nospec.h>
@@ -101,19 +102,67 @@ int ptp_set_pinfunc(struct ptp_clock *ptp, unsigned int pin,
return 0;
}
-int ptp_open(struct posix_clock *pc, fmode_t fmode)
+int ptp_open(struct posix_clock_context *pccontext, fmode_t fmode)
{
+ struct ptp_clock *ptp =
+ container_of(pccontext->clk, struct ptp_clock, clock);
+ struct timestamp_event_queue *queue;
+ char debugfsname[32];
+
+ queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+ if (!queue)
+ return -EINVAL;
+ queue->mask = bitmap_alloc(PTP_MAX_CHANNELS, GFP_KERNEL);
+ if (!queue->mask) {
+ kfree(queue);
+ return -EINVAL;
+ }
+ bitmap_set(queue->mask, 0, PTP_MAX_CHANNELS);
+ spin_lock_init(&queue->lock);
+ list_add_tail(&queue->qlist, &ptp->tsevqs);
+ pccontext->private_clkdata = queue;
+
+ /* Debugfs contents */
+ sprintf(debugfsname, "0x%p", queue);
+ queue->debugfs_instance =
+ debugfs_create_dir(debugfsname, ptp->debugfs_root);
+ queue->dfs_bitmap.array = (u32 *)queue->mask;
+ queue->dfs_bitmap.n_elements =
+ DIV_ROUND_UP(PTP_MAX_CHANNELS, BITS_PER_BYTE * sizeof(u32));
+ debugfs_create_u32_array("mask", 0444, queue->debugfs_instance,
+ &queue->dfs_bitmap);
+
+ return 0;
+}
+
+int ptp_release(struct posix_clock_context *pccontext)
+{
+ struct timestamp_event_queue *queue = pccontext->private_clkdata;
+ unsigned long flags;
+
+ if (queue) {
+ debugfs_remove(queue->debugfs_instance);
+ pccontext->private_clkdata = NULL;
+ spin_lock_irqsave(&queue->lock, flags);
+ list_del(&queue->qlist);
+ spin_unlock_irqrestore(&queue->lock, flags);
+ bitmap_free(queue->mask);
+ kfree(queue);
+ }
return 0;
}
-long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
+long ptp_ioctl(struct posix_clock_context *pccontext, unsigned int cmd,
+ unsigned long arg)
{
- struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock);
+ struct ptp_clock *ptp =
+ container_of(pccontext->clk, struct ptp_clock, clock);
struct ptp_sys_offset_extended *extoff = NULL;
struct ptp_sys_offset_precise precise_offset;
struct system_device_crosststamp xtstamp;
struct ptp_clock_info *ops = ptp->info;
struct ptp_sys_offset *sysoff = NULL;
+ struct timestamp_event_queue *tsevq;
struct ptp_system_timestamp sts;
struct ptp_clock_request req;
struct ptp_clock_caps caps;
@@ -123,6 +172,8 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
struct timespec64 ts;
int enable, err = 0;
+ tsevq = pccontext->private_clkdata;
+
switch (cmd) {
case PTP_CLOCK_GETCAPS:
@@ -421,6 +472,22 @@ long ptp_ioctl(struct posix_clock *pc, unsigned int cmd, unsigned long arg)
mutex_unlock(&ptp->pincfg_mux);
break;
+ case PTP_MASK_CLEAR_ALL:
+ bitmap_clear(tsevq->mask, 0, PTP_MAX_CHANNELS);
+ break;
+
+ case PTP_MASK_EN_SINGLE:
+ if (copy_from_user(&i, (void __user *)arg, sizeof(i))) {
+ err = -EFAULT;
+ break;
+ }
+ if (i >= PTP_MAX_CHANNELS) {
+ err = -EFAULT;
+ break;
+ }
+ set_bit(i, tsevq->mask);
+ break;
+
default:
err = -ENOTTY;
break;
@@ -432,53 +499,65 @@ out:
return err;
}
-__poll_t ptp_poll(struct posix_clock *pc, struct file *fp, poll_table *wait)
+__poll_t ptp_poll(struct posix_clock_context *pccontext, struct file *fp,
+ poll_table *wait)
{
- struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock);
+ struct ptp_clock *ptp =
+ container_of(pccontext->clk, struct ptp_clock, clock);
+ struct timestamp_event_queue *queue;
+
+ queue = pccontext->private_clkdata;
+ if (!queue)
+ return EPOLLERR;
poll_wait(fp, &ptp->tsev_wq, wait);
- return queue_cnt(&ptp->tsevq) ? EPOLLIN : 0;
+ return queue_cnt(queue) ? EPOLLIN : 0;
}
#define EXTTS_BUFSIZE (PTP_BUF_TIMESTAMPS * sizeof(struct ptp_extts_event))
-ssize_t ptp_read(struct posix_clock *pc,
- uint rdflags, char __user *buf, size_t cnt)
+ssize_t ptp_read(struct posix_clock_context *pccontext, uint rdflags,
+ char __user *buf, size_t cnt)
{
- struct ptp_clock *ptp = container_of(pc, struct ptp_clock, clock);
- struct timestamp_event_queue *queue = &ptp->tsevq;
+ struct ptp_clock *ptp =
+ container_of(pccontext->clk, struct ptp_clock, clock);
+ struct timestamp_event_queue *queue;
struct ptp_extts_event *event;
unsigned long flags;
size_t qcnt, i;
int result;
- if (cnt % sizeof(struct ptp_extts_event) != 0)
- return -EINVAL;
+ queue = pccontext->private_clkdata;
+ if (!queue) {
+ result = -EINVAL;
+ goto exit;
+ }
+
+ if (cnt % sizeof(struct ptp_extts_event) != 0) {
+ result = -EINVAL;
+ goto exit;
+ }
if (cnt > EXTTS_BUFSIZE)
cnt = EXTTS_BUFSIZE;
cnt = cnt / sizeof(struct ptp_extts_event);
- if (mutex_lock_interruptible(&ptp->tsevq_mux))
- return -ERESTARTSYS;
-
if (wait_event_interruptible(ptp->tsev_wq,
ptp->defunct || queue_cnt(queue))) {
- mutex_unlock(&ptp->tsevq_mux);
return -ERESTARTSYS;
}
if (ptp->defunct) {
- mutex_unlock(&ptp->tsevq_mux);
- return -ENODEV;
+ result = -ENODEV;
+ goto exit;
}
event = kmalloc(EXTTS_BUFSIZE, GFP_KERNEL);
if (!event) {
- mutex_unlock(&ptp->tsevq_mux);
- return -ENOMEM;
+ result = -ENOMEM;
+ goto exit;
}
spin_lock_irqsave(&queue->lock, flags);
@@ -497,12 +576,16 @@ ssize_t ptp_read(struct posix_clock *pc,
cnt = cnt * sizeof(struct ptp_extts_event);
- mutex_unlock(&ptp->tsevq_mux);
-
result = cnt;
- if (copy_to_user(buf, event, cnt))
+ if (copy_to_user(buf, event, cnt)) {
result = -EFAULT;
+ goto free_event;
+ }
+free_event:
kfree(event);
+exit:
+ if (result < 0)
+ ptp_release(pccontext);
return result;
}
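To see the effect of the per-descriptor event queues and the channel-mask ioctls handled above, here is a user-space sketch; error handling is trimmed, the device path and channel number are assumptions, and PTP_MASK_CLEAR_ALL / PTP_MASK_EN_SINGLE are the ioctls introduced by this series.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ptp_clock.h>

int main(void)
{
	struct ptp_extts_request req = { .index = 0, .flags = PTP_ENABLE_FEATURE };
	struct ptp_extts_event ev;
	unsigned int chan = 0;
	int fd = open("/dev/ptp0", O_RDWR);

	if (fd < 0)
		return 1;
	ioctl(fd, PTP_MASK_CLEAR_ALL);         /* clear this descriptor's channel mask */
	ioctl(fd, PTP_MASK_EN_SINGLE, &chan);  /* only channel 0 feeds this queue */
	ioctl(fd, PTP_EXTTS_REQUEST, &req);    /* arm external timestamps on channel 0 */
	if (read(fd, &ev, sizeof(ev)) == sizeof(ev))
		printf("extts ch%u: %lld.%09u\n", ev.index,
		       (long long)ev.t.sec, ev.t.nsec);
	close(fd);
	return 0;
}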
diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
index 80f74e38c2da..3d1b0a97301c 100644
--- a/drivers/ptp/ptp_clock.c
+++ b/drivers/ptp/ptp_clock.c
@@ -15,6 +15,7 @@
#include <linux/slab.h>
#include <linux/syscalls.h>
#include <linux/uaccess.h>
+#include <linux/debugfs.h>
#include <uapi/linux/sched/types.h>
#include "ptp_private.h"
@@ -162,6 +163,7 @@ static struct posix_clock_operations ptp_clock_ops = {
.clock_settime = ptp_clock_settime,
.ioctl = ptp_ioctl,
.open = ptp_open,
+ .release = ptp_release,
.poll = ptp_poll,
.read = ptp_read,
};
@@ -169,12 +171,22 @@ static struct posix_clock_operations ptp_clock_ops = {
static void ptp_clock_release(struct device *dev)
{
struct ptp_clock *ptp = container_of(dev, struct ptp_clock, dev);
+ struct timestamp_event_queue *tsevq;
+ unsigned long flags;
ptp_cleanup_pin_groups(ptp);
kfree(ptp->vclock_index);
- mutex_destroy(&ptp->tsevq_mux);
mutex_destroy(&ptp->pincfg_mux);
mutex_destroy(&ptp->n_vclocks_mux);
+ /* Delete first entry */
+ tsevq = list_first_entry(&ptp->tsevqs, struct timestamp_event_queue,
+ qlist);
+ spin_lock_irqsave(&tsevq->lock, flags);
+ list_del(&tsevq->qlist);
+ spin_unlock_irqrestore(&tsevq->lock, flags);
+ bitmap_free(tsevq->mask);
+ kfree(tsevq);
+ debugfs_remove(ptp->debugfs_root);
ida_free(&ptp_clocks_map, ptp->index);
kfree(ptp);
}
@@ -206,7 +218,9 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info,
struct device *parent)
{
struct ptp_clock *ptp;
+ struct timestamp_event_queue *queue = NULL;
int err = 0, index, major = MAJOR(ptp_devt);
+ char debugfsname[16];
size_t size;
if (info->n_alarm > PTP_MAX_ALARMS)
@@ -228,8 +242,16 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info,
ptp->info = info;
ptp->devid = MKDEV(major, index);
ptp->index = index;
- spin_lock_init(&ptp->tsevq.lock);
- mutex_init(&ptp->tsevq_mux);
+ INIT_LIST_HEAD(&ptp->tsevqs);
+ queue = kzalloc(sizeof(*queue), GFP_KERNEL);
+ if (!queue)
+ goto no_memory_queue;
+ list_add_tail(&queue->qlist, &ptp->tsevqs);
+ queue->mask = bitmap_alloc(PTP_MAX_CHANNELS, GFP_KERNEL);
+ if (!queue->mask)
+ goto no_memory_bitmap;
+ bitmap_set(queue->mask, 0, PTP_MAX_CHANNELS);
+ spin_lock_init(&queue->lock);
mutex_init(&ptp->pincfg_mux);
mutex_init(&ptp->n_vclocks_mux);
init_waitqueue_head(&ptp->tsev_wq);
@@ -320,6 +342,10 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info,
return ERR_PTR(err);
}
+ /* Debugfs initialization */
+ snprintf(debugfsname, sizeof(debugfsname), "ptp%d", ptp->index);
+ ptp->debugfs_root = debugfs_create_dir(debugfsname, NULL);
+
return ptp;
no_pps:
@@ -330,9 +356,13 @@ no_mem_for_vclocks:
if (ptp->kworker)
kthread_destroy_worker(ptp->kworker);
kworker_err:
- mutex_destroy(&ptp->tsevq_mux);
mutex_destroy(&ptp->pincfg_mux);
mutex_destroy(&ptp->n_vclocks_mux);
+ bitmap_free(queue->mask);
+no_memory_bitmap:
+ list_del(&queue->qlist);
+ kfree(queue);
+no_memory_queue:
ida_free(&ptp_clocks_map, index);
no_slot:
kfree(ptp);
@@ -375,6 +405,7 @@ EXPORT_SYMBOL(ptp_clock_unregister);
void ptp_clock_event(struct ptp_clock *ptp, struct ptp_clock_event *event)
{
+ struct timestamp_event_queue *tsevq;
struct pps_event_time evt;
switch (event->type) {
@@ -383,7 +414,11 @@ void ptp_clock_event(struct ptp_clock *ptp, struct ptp_clock_event *event)
break;
case PTP_CLOCK_EXTTS:
- enqueue_external_timestamp(&ptp->tsevq, event);
+ /* Enqueue timestamp on selected queues */
+ list_for_each_entry(tsevq, &ptp->tsevqs, qlist) {
+ if (test_bit((unsigned int)event->index, tsevq->mask))
+ enqueue_external_timestamp(tsevq, event);
+ }
wake_up_interruptible(&ptp->tsev_wq);
break;
diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
index a7a6947ab4bc..4021d3d325f9 100644
--- a/drivers/ptp/ptp_ocp.c
+++ b/drivers/ptp/ptp_ocp.c
@@ -23,6 +23,7 @@
#include <linux/mtd/mtd.h>
#include <linux/nvmem-consumer.h>
#include <linux/crc16.h>
+#include <linux/dpll.h>
#define PCI_VENDOR_ID_FACEBOOK 0x1d9b
#define PCI_DEVICE_ID_FACEBOOK_TIMECARD 0x0400
@@ -260,12 +261,21 @@ enum ptp_ocp_sma_mode {
SMA_MODE_OUT,
};
+static struct dpll_pin_frequency ptp_ocp_sma_freq[] = {
+ DPLL_PIN_FREQUENCY_1PPS,
+ DPLL_PIN_FREQUENCY_10MHZ,
+ DPLL_PIN_FREQUENCY_IRIG_B,
+ DPLL_PIN_FREQUENCY_DCF77,
+};
+
struct ptp_ocp_sma_connector {
enum ptp_ocp_sma_mode mode;
bool fixed_fcn;
bool fixed_dir;
bool disabled;
u8 default_fcn;
+ struct dpll_pin *dpll_pin;
+ struct dpll_pin_properties dpll_prop;
};
struct ocp_attr_group {
@@ -294,6 +304,7 @@ struct ptp_ocp_serial_port {
#define OCP_BOARD_ID_LEN 13
#define OCP_SERIAL_LEN 6
+#define OCP_SMA_NUM 4
struct ptp_ocp {
struct pci_dev *pdev;
@@ -331,7 +342,9 @@ struct ptp_ocp {
const struct attribute_group **attr_group;
const struct ptp_ocp_eeprom_map *eeprom_map;
struct dentry *debug_root;
+ bool sync;
time64_t gnss_lost;
+ struct delayed_work sync_work;
int id;
int n_irqs;
struct ptp_ocp_serial_port gnss_port;
@@ -350,8 +363,9 @@ struct ptp_ocp {
u32 ts_window_adjust;
u64 fw_cap;
struct ptp_ocp_signal signal[4];
- struct ptp_ocp_sma_connector sma[4];
+ struct ptp_ocp_sma_connector sma[OCP_SMA_NUM];
const struct ocp_sma_op *sma_op;
+ struct dpll_device *dpll;
};
#define OCP_REQ_TIMESTAMP BIT(0)
@@ -835,6 +849,7 @@ static DEFINE_IDR(ptp_ocp_idr);
struct ocp_selector {
const char *name;
int value;
+ u64 frequency;
};
static const struct ocp_selector ptp_ocp_clock[] = {
@@ -855,31 +870,31 @@ static const struct ocp_selector ptp_ocp_clock[] = {
#define SMA_SELECT_MASK GENMASK(14, 0)
static const struct ocp_selector ptp_ocp_sma_in[] = {
- { .name = "10Mhz", .value = 0x0000 },
- { .name = "PPS1", .value = 0x0001 },
- { .name = "PPS2", .value = 0x0002 },
- { .name = "TS1", .value = 0x0004 },
- { .name = "TS2", .value = 0x0008 },
- { .name = "IRIG", .value = 0x0010 },
- { .name = "DCF", .value = 0x0020 },
- { .name = "TS3", .value = 0x0040 },
- { .name = "TS4", .value = 0x0080 },
- { .name = "FREQ1", .value = 0x0100 },
- { .name = "FREQ2", .value = 0x0200 },
- { .name = "FREQ3", .value = 0x0400 },
- { .name = "FREQ4", .value = 0x0800 },
- { .name = "None", .value = SMA_DISABLE },
+ { .name = "10Mhz", .value = 0x0000, .frequency = 10000000 },
+ { .name = "PPS1", .value = 0x0001, .frequency = 1 },
+ { .name = "PPS2", .value = 0x0002, .frequency = 1 },
+ { .name = "TS1", .value = 0x0004, .frequency = 0 },
+ { .name = "TS2", .value = 0x0008, .frequency = 0 },
+ { .name = "IRIG", .value = 0x0010, .frequency = 10000 },
+ { .name = "DCF", .value = 0x0020, .frequency = 77500 },
+ { .name = "TS3", .value = 0x0040, .frequency = 0 },
+ { .name = "TS4", .value = 0x0080, .frequency = 0 },
+ { .name = "FREQ1", .value = 0x0100, .frequency = 0 },
+ { .name = "FREQ2", .value = 0x0200, .frequency = 0 },
+ { .name = "FREQ3", .value = 0x0400, .frequency = 0 },
+ { .name = "FREQ4", .value = 0x0800, .frequency = 0 },
+ { .name = "None", .value = SMA_DISABLE, .frequency = 0 },
{ }
};
static const struct ocp_selector ptp_ocp_sma_out[] = {
- { .name = "10Mhz", .value = 0x0000 },
- { .name = "PHC", .value = 0x0001 },
- { .name = "MAC", .value = 0x0002 },
- { .name = "GNSS1", .value = 0x0004 },
- { .name = "GNSS2", .value = 0x0008 },
- { .name = "IRIG", .value = 0x0010 },
- { .name = "DCF", .value = 0x0020 },
+ { .name = "10Mhz", .value = 0x0000, .frequency = 10000000 },
+ { .name = "PHC", .value = 0x0001, .frequency = 1 },
+ { .name = "MAC", .value = 0x0002, .frequency = 1 },
+ { .name = "GNSS1", .value = 0x0004, .frequency = 1 },
+ { .name = "GNSS2", .value = 0x0008, .frequency = 1 },
+ { .name = "IRIG", .value = 0x0010, .frequency = 10000 },
+ { .name = "DCF", .value = 0x0020, .frequency = 77000 },
{ .name = "GEN1", .value = 0x0040 },
{ .name = "GEN2", .value = 0x0080 },
{ .name = "GEN3", .value = 0x0100 },
@@ -890,15 +905,15 @@ static const struct ocp_selector ptp_ocp_sma_out[] = {
};
static const struct ocp_selector ptp_ocp_art_sma_in[] = {
- { .name = "PPS1", .value = 0x0001 },
- { .name = "10Mhz", .value = 0x0008 },
+ { .name = "PPS1", .value = 0x0001, .frequency = 1 },
+ { .name = "10Mhz", .value = 0x0008, .frequency = 1000000 },
{ }
};
static const struct ocp_selector ptp_ocp_art_sma_out[] = {
- { .name = "PHC", .value = 0x0002 },
- { .name = "GNSS", .value = 0x0004 },
- { .name = "10Mhz", .value = 0x0010 },
+ { .name = "PHC", .value = 0x0002, .frequency = 1 },
+ { .name = "GNSS", .value = 0x0004, .frequency = 1 },
+ { .name = "10Mhz", .value = 0x0010, .frequency = 10000000 },
{ }
};
@@ -1351,7 +1366,6 @@ static int
ptp_ocp_init_clock(struct ptp_ocp *bp)
{
struct timespec64 ts;
- bool sync;
u32 ctrl;
ctrl = OCP_CTRL_ENABLE;
@@ -1375,8 +1389,8 @@ ptp_ocp_init_clock(struct ptp_ocp *bp)
ptp_ocp_estimate_pci_timing(bp);
- sync = ioread32(&bp->reg->status) & OCP_STATUS_IN_SYNC;
- if (!sync) {
+ bp->sync = ioread32(&bp->reg->status) & OCP_STATUS_IN_SYNC;
+ if (!bp->sync) {
ktime_get_clocktai_ts64(&ts);
ptp_ocp_settime(&bp->ptp_info, &ts);
}
@@ -2289,22 +2303,35 @@ ptp_ocp_sma_fb_set_inputs(struct ptp_ocp *bp, int sma_nr, u32 val)
static void
ptp_ocp_sma_fb_init(struct ptp_ocp *bp)
{
+ struct dpll_pin_properties prop = {
+ .board_label = NULL,
+ .type = DPLL_PIN_TYPE_EXT,
+ .capabilities = DPLL_PIN_CAPABILITIES_DIRECTION_CAN_CHANGE,
+ .freq_supported_num = ARRAY_SIZE(ptp_ocp_sma_freq),
+ .freq_supported = ptp_ocp_sma_freq,
+
+ };
u32 reg;
int i;
/* defaults */
+ for (i = 0; i < OCP_SMA_NUM; i++) {
+ bp->sma[i].default_fcn = i & 1;
+ bp->sma[i].dpll_prop = prop;
+ bp->sma[i].dpll_prop.board_label =
+ bp->ptp_info.pin_config[i].name;
+ }
bp->sma[0].mode = SMA_MODE_IN;
bp->sma[1].mode = SMA_MODE_IN;
bp->sma[2].mode = SMA_MODE_OUT;
bp->sma[3].mode = SMA_MODE_OUT;
- for (i = 0; i < 4; i++)
- bp->sma[i].default_fcn = i & 1;
-
/* If no SMA1 map, the pin functions and directions are fixed. */
if (!bp->sma_map1) {
- for (i = 0; i < 4; i++) {
+ for (i = 0; i < OCP_SMA_NUM; i++) {
bp->sma[i].fixed_fcn = true;
bp->sma[i].fixed_dir = true;
+ bp->sma[i].dpll_prop.capabilities &=
+ ~DPLL_PIN_CAPABILITIES_DIRECTION_CAN_CHANGE;
}
return;
}
@@ -2314,7 +2341,7 @@ ptp_ocp_sma_fb_init(struct ptp_ocp *bp)
*/
reg = ioread32(&bp->sma_map2->gpio2);
if (reg == 0xffffffff) {
- for (i = 0; i < 4; i++)
+ for (i = 0; i < OCP_SMA_NUM; i++)
bp->sma[i].fixed_dir = true;
} else {
reg = ioread32(&bp->sma_map1->gpio1);
@@ -2336,7 +2363,7 @@ static const struct ocp_sma_op ocp_fb_sma_op = {
};
static int
-ptp_ocp_fb_set_pins(struct ptp_ocp *bp)
+ptp_ocp_set_pins(struct ptp_ocp *bp)
{
struct ptp_pin_desc *config;
int i;
@@ -2403,16 +2430,16 @@ ptp_ocp_fb_board_init(struct ptp_ocp *bp, struct ocp_resource *r)
ptp_ocp_tod_init(bp);
ptp_ocp_nmea_out_init(bp);
- ptp_ocp_sma_init(bp);
ptp_ocp_signal_init(bp);
err = ptp_ocp_attr_group_add(bp, fb_timecard_groups);
if (err)
return err;
- err = ptp_ocp_fb_set_pins(bp);
+ err = ptp_ocp_set_pins(bp);
if (err)
return err;
+ ptp_ocp_sma_init(bp);
return ptp_ocp_init_clock(bp);
}
@@ -2452,6 +2479,14 @@ ptp_ocp_register_resources(struct ptp_ocp *bp, kernel_ulong_t driver_data)
static void
ptp_ocp_art_sma_init(struct ptp_ocp *bp)
{
+ struct dpll_pin_properties prop = {
+ .board_label = NULL,
+ .type = DPLL_PIN_TYPE_EXT,
+ .capabilities = 0,
+ .freq_supported_num = ARRAY_SIZE(ptp_ocp_sma_freq),
+ .freq_supported = ptp_ocp_sma_freq,
+
+ };
u32 reg;
int i;
@@ -2466,16 +2501,16 @@ ptp_ocp_art_sma_init(struct ptp_ocp *bp)
bp->sma[2].default_fcn = 0x10; /* OUT: 10Mhz */
bp->sma[3].default_fcn = 0x02; /* OUT: PHC */
- /* If no SMA map, the pin functions and directions are fixed. */
- if (!bp->art_sma) {
- for (i = 0; i < 4; i++) {
+ for (i = 0; i < OCP_SMA_NUM; i++) {
+ /* If no SMA map, the pin functions and directions are fixed. */
+ bp->sma[i].dpll_prop = prop;
+ bp->sma[i].dpll_prop.board_label =
+ bp->ptp_info.pin_config[i].name;
+ if (!bp->art_sma) {
bp->sma[i].fixed_fcn = true;
bp->sma[i].fixed_dir = true;
+ continue;
}
- return;
- }
-
- for (i = 0; i < 4; i++) {
reg = ioread32(&bp->art_sma->map[i].gpio);
switch (reg & 0xff) {
@@ -2486,9 +2521,13 @@ ptp_ocp_art_sma_init(struct ptp_ocp *bp)
case 1:
case 8:
bp->sma[i].mode = SMA_MODE_IN;
+ bp->sma[i].dpll_prop.capabilities =
+ DPLL_PIN_CAPABILITIES_DIRECTION_CAN_CHANGE;
break;
default:
bp->sma[i].mode = SMA_MODE_OUT;
+ bp->sma[i].dpll_prop.capabilities =
+ DPLL_PIN_CAPABILITIES_DIRECTION_CAN_CHANGE;
break;
}
}
@@ -2555,6 +2594,9 @@ ptp_ocp_art_board_init(struct ptp_ocp *bp, struct ocp_resource *r)
/* Enable MAC serial port during initialisation */
iowrite32(1, &bp->board_config->mro50_serial_activate);
+ err = ptp_ocp_set_pins(bp);
+ if (err)
+ return err;
ptp_ocp_sma_init(bp);
err = ptp_ocp_attr_group_add(bp, art_timecard_groups);
@@ -2696,16 +2738,9 @@ sma4_show(struct device *dev, struct device_attribute *attr, char *buf)
}
static int
-ptp_ocp_sma_store(struct ptp_ocp *bp, const char *buf, int sma_nr)
+ptp_ocp_sma_store_val(struct ptp_ocp *bp, int val, enum ptp_ocp_sma_mode mode, int sma_nr)
{
struct ptp_ocp_sma_connector *sma = &bp->sma[sma_nr - 1];
- enum ptp_ocp_sma_mode mode;
- int val;
-
- mode = sma->mode;
- val = sma_parse_inputs(bp->sma_op->tbl, buf, &mode);
- if (val < 0)
- return val;
if (sma->fixed_dir && (mode != sma->mode || val & SMA_DISABLE))
return -EOPNOTSUPP;
@@ -2740,6 +2775,20 @@ ptp_ocp_sma_store(struct ptp_ocp *bp, const char *buf, int sma_nr)
return val;
}
+static int
+ptp_ocp_sma_store(struct ptp_ocp *bp, const char *buf, int sma_nr)
+{
+ struct ptp_ocp_sma_connector *sma = &bp->sma[sma_nr - 1];
+ enum ptp_ocp_sma_mode mode;
+ int val;
+
+ mode = sma->mode;
+ val = sma_parse_inputs(bp->sma_op->tbl, buf, &mode);
+ if (val < 0)
+ return val;
+ return ptp_ocp_sma_store_val(bp, val, mode, sma_nr);
+}
+
static ssize_t
sma1_store(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
@@ -3834,9 +3883,8 @@ ptp_ocp_summary_show(struct seq_file *s, void *data)
strcpy(buf, "unknown");
break;
}
- val = ioread32(&bp->reg->status);
seq_printf(s, "%7s: %s, state: %s\n", "PHC src", buf,
- val & OCP_STATUS_IN_SYNC ? "sync" : "unsynced");
+ bp->sync ? "sync" : "unsynced");
if (!ptp_ocp_gettimex(&bp->ptp_info, &ts, &sts)) {
struct timespec64 sys_ts;
@@ -4067,7 +4115,6 @@ ptp_ocp_phc_info(struct ptp_ocp *bp)
{
struct timespec64 ts;
u32 version, select;
- bool sync;
version = ioread32(&bp->reg->version);
select = ioread32(&bp->reg->select);
@@ -4076,11 +4123,10 @@ ptp_ocp_phc_info(struct ptp_ocp *bp)
ptp_ocp_select_name_from_val(ptp_ocp_clock, select >> 16),
ptp_clock_index(bp->ptp));
- sync = ioread32(&bp->reg->status) & OCP_STATUS_IN_SYNC;
if (!ptp_ocp_gettimex(&bp->ptp_info, &ts, NULL))
dev_info(&bp->pdev->dev, "Time: %lld.%ld, %s\n",
ts.tv_sec, ts.tv_nsec,
- sync ? "in-sync" : "UNSYNCED");
+ bp->sync ? "in-sync" : "UNSYNCED");
}
static void
@@ -4177,12 +4223,168 @@ ptp_ocp_detach(struct ptp_ocp *bp)
device_unregister(&bp->dev);
}
+static int ptp_ocp_dpll_lock_status_get(const struct dpll_device *dpll,
+ void *priv,
+ enum dpll_lock_status *status,
+ struct netlink_ext_ack *extack)
+{
+ struct ptp_ocp *bp = priv;
+
+ *status = bp->sync ? DPLL_LOCK_STATUS_LOCKED : DPLL_LOCK_STATUS_UNLOCKED;
+
+ return 0;
+}
+
+static int ptp_ocp_dpll_state_get(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *priv,
+ enum dpll_pin_state *state,
+ struct netlink_ext_ack *extack)
+{
+ struct ptp_ocp *bp = priv;
+ int idx;
+
+ if (bp->pps_select) {
+ idx = ioread32(&bp->pps_select->gpio1);
+ *state = (&bp->sma[idx] == pin_priv) ? DPLL_PIN_STATE_CONNECTED :
+ DPLL_PIN_STATE_SELECTABLE;
+ return 0;
+ }
+ NL_SET_ERR_MSG(extack, "pin selection is not supported on current HW");
+ return -EINVAL;
+}
+
+static int ptp_ocp_dpll_mode_get(const struct dpll_device *dpll, void *priv,
+ enum dpll_mode *mode, struct netlink_ext_ack *extack)
+{
+ *mode = DPLL_MODE_AUTOMATIC;
+ return 0;
+}
+
+static bool ptp_ocp_dpll_mode_supported(const struct dpll_device *dpll,
+ void *priv, const enum dpll_mode mode,
+ struct netlink_ext_ack *extack)
+{
+ return mode == DPLL_MODE_AUTOMATIC;
+}
+
+static int ptp_ocp_dpll_direction_get(const struct dpll_pin *pin,
+ void *pin_priv,
+ const struct dpll_device *dpll,
+ void *priv,
+ enum dpll_pin_direction *direction,
+ struct netlink_ext_ack *extack)
+{
+ struct ptp_ocp_sma_connector *sma = pin_priv;
+
+ *direction = sma->mode == SMA_MODE_IN ?
+ DPLL_PIN_DIRECTION_INPUT :
+ DPLL_PIN_DIRECTION_OUTPUT;
+ return 0;
+}
+
+static int ptp_ocp_dpll_direction_set(const struct dpll_pin *pin,
+ void *pin_priv,
+ const struct dpll_device *dpll,
+ void *dpll_priv,
+ enum dpll_pin_direction direction,
+ struct netlink_ext_ack *extack)
+{
+ struct ptp_ocp_sma_connector *sma = pin_priv;
+ struct ptp_ocp *bp = dpll_priv;
+ enum ptp_ocp_sma_mode mode;
+ int sma_nr = (sma - bp->sma);
+
+ if (sma->fixed_dir)
+ return -EOPNOTSUPP;
+ mode = direction == DPLL_PIN_DIRECTION_INPUT ?
+ SMA_MODE_IN : SMA_MODE_OUT;
+ return ptp_ocp_sma_store_val(bp, 0, mode, sma_nr);
+}
+
+static int ptp_ocp_dpll_frequency_set(const struct dpll_pin *pin,
+ void *pin_priv,
+ const struct dpll_device *dpll,
+ void *dpll_priv, u64 frequency,
+ struct netlink_ext_ack *extack)
+{
+ struct ptp_ocp_sma_connector *sma = pin_priv;
+ struct ptp_ocp *bp = dpll_priv;
+ const struct ocp_selector *tbl;
+ int sma_nr = (sma - bp->sma);
+ int i;
+
+ if (sma->fixed_fcn)
+ return -EOPNOTSUPP;
+
+ tbl = bp->sma_op->tbl[sma->mode];
+ for (i = 0; tbl[i].name; i++)
+ if (tbl[i].frequency == frequency)
+ return ptp_ocp_sma_store_val(bp, i, sma->mode, sma_nr);
+ return -EINVAL;
+}
+
+static int ptp_ocp_dpll_frequency_get(const struct dpll_pin *pin,
+ void *pin_priv,
+ const struct dpll_device *dpll,
+ void *dpll_priv, u64 *frequency,
+ struct netlink_ext_ack *extack)
+{
+ struct ptp_ocp_sma_connector *sma = pin_priv;
+ struct ptp_ocp *bp = dpll_priv;
+ const struct ocp_selector *tbl;
+ int sma_nr = (sma - bp->sma);
+ u32 val;
+ int i;
+
+ val = bp->sma_op->get(bp, sma_nr);
+ tbl = bp->sma_op->tbl[sma->mode];
+ for (i = 0; tbl[i].name; i++)
+ if (val == tbl[i].value) {
+ *frequency = tbl[i].frequency;
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static const struct dpll_device_ops dpll_ops = {
+ .lock_status_get = ptp_ocp_dpll_lock_status_get,
+ .mode_get = ptp_ocp_dpll_mode_get,
+ .mode_supported = ptp_ocp_dpll_mode_supported,
+};
+
+static const struct dpll_pin_ops dpll_pins_ops = {
+ .frequency_get = ptp_ocp_dpll_frequency_get,
+ .frequency_set = ptp_ocp_dpll_frequency_set,
+ .direction_get = ptp_ocp_dpll_direction_get,
+ .direction_set = ptp_ocp_dpll_direction_set,
+ .state_on_dpll_get = ptp_ocp_dpll_state_get,
+};
+
+static void
+ptp_ocp_sync_work(struct work_struct *work)
+{
+ struct ptp_ocp *bp;
+ bool sync;
+
+ bp = container_of(work, struct ptp_ocp, sync_work.work);
+ sync = !!(ioread32(&bp->reg->status) & OCP_STATUS_IN_SYNC);
+
+ if (bp->sync != sync)
+ dpll_device_change_ntf(bp->dpll);
+
+ bp->sync = sync;
+
+ queue_delayed_work(system_power_efficient_wq, &bp->sync_work, HZ);
+}
+
static int
ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
struct devlink *devlink;
struct ptp_ocp *bp;
- int err;
+ int err, i;
+ u64 clkid;
devlink = devlink_alloc(&ptp_ocp_devlink_ops, sizeof(*bp), &pdev->dev);
if (!devlink) {
@@ -4201,6 +4403,8 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
if (err)
goto out_disable;
+ INIT_DELAYED_WORK(&bp->sync_work, ptp_ocp_sync_work);
+
/* compat mode.
* Older FPGA firmware only returns 2 irq's.
* allow this - if not all of the IRQ's are returned, skip the
@@ -4232,8 +4436,43 @@ ptp_ocp_probe(struct pci_dev *pdev, const struct pci_device_id *id)
ptp_ocp_info(bp);
devlink_register(devlink);
- return 0;
+ clkid = pci_get_dsn(pdev);
+ bp->dpll = dpll_device_get(clkid, 0, THIS_MODULE);
+ if (IS_ERR(bp->dpll)) {
+ err = PTR_ERR(bp->dpll);
+ dev_err(&pdev->dev, "dpll_device_alloc failed\n");
+ goto out;
+ }
+
+ err = dpll_device_register(bp->dpll, DPLL_TYPE_PPS, &dpll_ops, bp);
+ if (err)
+ goto out;
+
+ for (i = 0; i < OCP_SMA_NUM; i++) {
+ bp->sma[i].dpll_pin = dpll_pin_get(clkid, i, THIS_MODULE, &bp->sma[i].dpll_prop);
+ if (IS_ERR(bp->sma[i].dpll_pin)) {
+ err = PTR_ERR(bp->sma[i].dpll_pin);
+ goto out_dpll;
+ }
+
+ err = dpll_pin_register(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops,
+ &bp->sma[i]);
+ if (err) {
+ dpll_pin_put(bp->sma[i].dpll_pin);
+ goto out_dpll;
+ }
+ }
+ queue_delayed_work(system_power_efficient_wq, &bp->sync_work, HZ);
+
+ return 0;
+out_dpll:
+ while (i) {
+ --i;
+ dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, &bp->sma[i]);
+ dpll_pin_put(bp->sma[i].dpll_pin);
+ }
+ dpll_device_put(bp->dpll);
out:
ptp_ocp_detach(bp);
out_disable:
@@ -4248,7 +4487,17 @@ ptp_ocp_remove(struct pci_dev *pdev)
{
struct ptp_ocp *bp = pci_get_drvdata(pdev);
struct devlink *devlink = priv_to_devlink(bp);
+ int i;
+ cancel_delayed_work_sync(&bp->sync_work);
+ for (i = 0; i < OCP_SMA_NUM; i++) {
+ if (bp->sma[i].dpll_pin) {
+ dpll_pin_unregister(bp->dpll, bp->sma[i].dpll_pin, &dpll_pins_ops, bp);
+ dpll_pin_put(bp->sma[i].dpll_pin);
+ }
+ }
+ dpll_device_unregister(bp->dpll, &dpll_ops, bp);
+ dpll_device_put(bp->dpll);
devlink_unregister(devlink);
ptp_ocp_detach(bp);
pci_disable_device(pdev);
diff --git a/drivers/ptp/ptp_private.h b/drivers/ptp/ptp_private.h
index 75f58fc468a7..52f87e394aa6 100644
--- a/drivers/ptp/ptp_private.h
+++ b/drivers/ptp/ptp_private.h
@@ -15,16 +15,24 @@
#include <linux/ptp_clock.h>
#include <linux/ptp_clock_kernel.h>
#include <linux/time.h>
+#include <linux/list.h>
+#include <linux/bitmap.h>
+#include <linux/debugfs.h>
#define PTP_MAX_TIMESTAMPS 128
#define PTP_BUF_TIMESTAMPS 30
#define PTP_DEFAULT_MAX_VCLOCKS 20
+#define PTP_MAX_CHANNELS 2048
struct timestamp_event_queue {
struct ptp_extts_event buf[PTP_MAX_TIMESTAMPS];
int head;
int tail;
spinlock_t lock;
+ struct list_head qlist;
+ unsigned long *mask;
+ struct dentry *debugfs_instance;
+ struct debugfs_u32_array dfs_bitmap;
};
struct ptp_clock {
@@ -35,8 +43,7 @@ struct ptp_clock {
int index; /* index into clocks.map */
struct pps_device *pps_source;
long dialed_frequency; /* remembers the frequency adjustment */
- struct timestamp_event_queue tsevq; /* simple fifo for time stamps */
- struct mutex tsevq_mux; /* one process at a time reading the fifo */
+ struct list_head tsevqs; /* timestamp fifo list */
struct mutex pincfg_mux; /* protect concurrent info->pin_config access */
wait_queue_head_t tsev_wq;
int defunct; /* tells readers to go away when clock is being removed */
@@ -53,6 +60,7 @@ struct ptp_clock {
struct mutex n_vclocks_mux; /* protect concurrent n_vclocks access */
bool is_virtual_clock;
bool has_cycles;
+ struct dentry *debugfs_root;
};
#define info_to_vclock(d) container_of((d), struct ptp_vclock, info)
@@ -117,16 +125,18 @@ extern struct class *ptp_class;
int ptp_set_pinfunc(struct ptp_clock *ptp, unsigned int pin,
enum ptp_pin_function func, unsigned int chan);
-long ptp_ioctl(struct posix_clock *pc,
- unsigned int cmd, unsigned long arg);
+long ptp_ioctl(struct posix_clock_context *pccontext, unsigned int cmd,
+ unsigned long arg);
-int ptp_open(struct posix_clock *pc, fmode_t fmode);
+int ptp_open(struct posix_clock_context *pccontext, fmode_t fmode);
-ssize_t ptp_read(struct posix_clock *pc,
- uint flags, char __user *buf, size_t cnt);
+int ptp_release(struct posix_clock_context *pccontext);
-__poll_t ptp_poll(struct posix_clock *pc,
- struct file *fp, poll_table *wait);
+ssize_t ptp_read(struct posix_clock_context *pccontext, uint flags, char __user *buf,
+ size_t cnt);
+
+__poll_t ptp_poll(struct posix_clock_context *pccontext, struct file *fp,
+ poll_table *wait);
/*
* see ptp_sysfs.c
diff --git a/drivers/ptp/ptp_sysfs.c b/drivers/ptp/ptp_sysfs.c
index 6e4d5456a885..7d023d9d0acb 100644
--- a/drivers/ptp/ptp_sysfs.c
+++ b/drivers/ptp/ptp_sysfs.c
@@ -75,17 +75,21 @@ static ssize_t extts_fifo_show(struct device *dev,
struct device_attribute *attr, char *page)
{
struct ptp_clock *ptp = dev_get_drvdata(dev);
- struct timestamp_event_queue *queue = &ptp->tsevq;
+ struct timestamp_event_queue *queue;
struct ptp_extts_event event;
unsigned long flags;
size_t qcnt;
int cnt = 0;
- memset(&event, 0, sizeof(event));
+ cnt = list_count_nodes(&ptp->tsevqs);
+ if (cnt <= 0)
+ goto out;
- if (mutex_lock_interruptible(&ptp->tsevq_mux))
- return -ERESTARTSYS;
+ /* The sysfs fifo will always draw from the first queue */
+ queue = list_first_entry(&ptp->tsevqs, struct timestamp_event_queue,
+ qlist);
+ memset(&event, 0, sizeof(event));
spin_lock_irqsave(&queue->lock, flags);
qcnt = queue_cnt(queue);
if (qcnt) {
@@ -100,7 +104,6 @@ static ssize_t extts_fifo_show(struct device *dev,
cnt = snprintf(page, PAGE_SIZE, "%u %lld %u\n",
event.index, event.t.sec, event.t.nsec);
out:
- mutex_unlock(&ptp->tsevq_mux);
return cnt;
}
static DEVICE_ATTR(fifo, 0444, extts_fifo_show, NULL);
diff --git a/drivers/s390/net/ctcm_main.c b/drivers/s390/net/ctcm_main.c
index 6faf27136024..ac15d7c2b200 100644
--- a/drivers/s390/net/ctcm_main.c
+++ b/drivers/s390/net/ctcm_main.c
@@ -200,13 +200,13 @@ static void channel_free(struct channel *ch)
static void channel_remove(struct channel *ch)
{
struct channel **c = &channels;
- char chid[CTCM_ID_SIZE+1];
+ char chid[CTCM_ID_SIZE];
int ok = 0;
if (ch == NULL)
return;
else
- strncpy(chid, ch->id, CTCM_ID_SIZE);
+ strscpy(chid, ch->id, sizeof(chid));
channel_free(ch);
while (*c) {
diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
index cd783290bde5..6af2511e070c 100644
--- a/drivers/s390/net/qeth_core_main.c
+++ b/drivers/s390/net/qeth_core_main.c
@@ -6226,7 +6226,7 @@ static int qeth_add_dbf_entry(struct qeth_card *card, char *name)
new_entry = kzalloc(sizeof(struct qeth_dbf_entry), GFP_KERNEL);
if (!new_entry)
goto err_dbg;
- strncpy(new_entry->dbf_name, name, DBF_NAME_LEN);
+ strscpy(new_entry->dbf_name, name, sizeof(new_entry->dbf_name));
new_entry->dbf_info = card->debug;
mutex_lock(&qeth_dbf_list_mutex);
list_add(&new_entry->dbf_list, &qeth_dbf_list);
diff --git a/drivers/ssb/Kconfig b/drivers/ssb/Kconfig
index 34fa19d4b3f1..1cf1a98952fa 100644
--- a/drivers/ssb/Kconfig
+++ b/drivers/ssb/Kconfig
@@ -134,7 +134,8 @@ config SSB_SFLASH
# Assumption: We are on embedded, if we compile the MIPS core.
config SSB_EMBEDDED
bool
- depends on SSB_DRIVER_MIPS && SSB_PCICORE_HOSTMODE
+ depends on SSB_DRIVER_MIPS
+ depends on PCI=n || SSB_PCICORE_HOSTMODE
default y
config SSB_DRIVER_EXTIF
diff --git a/drivers/ssb/main.c b/drivers/ssb/main.c
index ab080cf26c9f..b9934b9c2d70 100644
--- a/drivers/ssb/main.c
+++ b/drivers/ssb/main.c
@@ -837,7 +837,7 @@ static u32 clkfactor_f6_resolve(u32 v)
case SSB_CHIPCO_CLK_F6_7:
return 7;
}
- return 0;
+ return 1;
}
/* Calculate the speed the backplane would run at a given set of clockcontrol values */
diff --git a/drivers/staging/qlge/qlge_devlink.c b/drivers/staging/qlge/qlge_devlink.c
index 0ab02d6d3817..0b19363ca2e9 100644
--- a/drivers/staging/qlge/qlge_devlink.c
+++ b/drivers/staging/qlge/qlge_devlink.c
@@ -2,51 +2,29 @@
#include "qlge.h"
#include "qlge_devlink.h"
-static int qlge_fill_seg_(struct devlink_fmsg *fmsg,
- struct mpi_coredump_segment_header *seg_header,
- u32 *reg_data)
+static void qlge_fill_seg_(struct devlink_fmsg *fmsg,
+ struct mpi_coredump_segment_header *seg_header,
+ u32 *reg_data)
{
int regs_num = (seg_header->seg_size
- sizeof(struct mpi_coredump_segment_header)) / sizeof(u32);
- int err;
int i;
- err = devlink_fmsg_pair_nest_start(fmsg, seg_header->description);
- if (err)
- return err;
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- return err;
- err = devlink_fmsg_u32_pair_put(fmsg, "segment", seg_header->seg_num);
- if (err)
- return err;
- err = devlink_fmsg_arr_pair_nest_start(fmsg, "values");
- if (err)
- return err;
+ devlink_fmsg_pair_nest_start(fmsg, seg_header->description);
+ devlink_fmsg_obj_nest_start(fmsg);
+ devlink_fmsg_u32_pair_put(fmsg, "segment", seg_header->seg_num);
+ devlink_fmsg_arr_pair_nest_start(fmsg, "values");
for (i = 0; i < regs_num; i++) {
- err = devlink_fmsg_u32_put(fmsg, *reg_data);
- if (err)
- return err;
+ devlink_fmsg_u32_put(fmsg, *reg_data);
reg_data++;
}
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- return err;
- err = devlink_fmsg_arr_pair_nest_end(fmsg);
- if (err)
- return err;
- err = devlink_fmsg_pair_nest_end(fmsg);
- return err;
+ devlink_fmsg_obj_nest_end(fmsg);
+ devlink_fmsg_arr_pair_nest_end(fmsg);
+ devlink_fmsg_pair_nest_end(fmsg);
}
-#define FILL_SEG(seg_hdr, seg_regs) \
- do { \
- err = qlge_fill_seg_(fmsg, &dump->seg_hdr, dump->seg_regs); \
- if (err) { \
- kvfree(dump); \
- return err; \
- } \
- } while (0)
+#define FILL_SEG(seg_hdr, seg_regs) \
+ qlge_fill_seg_(fmsg, &dump->seg_hdr, dump->seg_regs)
static int qlge_reporter_coredump(struct devlink_health_reporter *reporter,
struct devlink_fmsg *fmsg, void *priv_ctx,
@@ -114,14 +92,8 @@ static int qlge_reporter_coredump(struct devlink_health_reporter *reporter,
FILL_SEG(xfi_hss_tx_hdr, serdes_xfi_hss_tx);
FILL_SEG(xfi_hss_rx_hdr, serdes_xfi_hss_rx);
FILL_SEG(xfi_hss_pll_hdr, serdes_xfi_hss_pll);
-
- err = qlge_fill_seg_(fmsg, &dump->misc_nic_seg_hdr,
- (u32 *)&dump->misc_nic_info);
- if (err) {
- kvfree(dump);
- return err;
- }
-
+ qlge_fill_seg_(fmsg, &dump->misc_nic_seg_hdr,
+ (u32 *)&dump->misc_nic_info);
FILL_SEG(intr_states_seg_hdr, intr_states);
FILL_SEG(cam_entries_seg_hdr, cam_entries);
FILL_SEG(nic_routing_words_seg_hdr, nic_routing_words);
@@ -140,7 +112,7 @@ static int qlge_reporter_coredump(struct devlink_health_reporter *reporter,
FILL_SEG(sem_regs_seg_hdr, sem_regs);
kvfree(dump);
- return err;
+ return 0;
}
static const struct devlink_health_reporter_ops qlge_reporter_ops = {
diff --git a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
index af155fca39b8..1ff763c10064 100644
--- a/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
+++ b/drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
@@ -2317,13 +2317,14 @@ static int cfg80211_rtw_start_ap(struct wiphy *wiphy, struct net_device *ndev,
return ret;
}
-static int cfg80211_rtw_change_beacon(struct wiphy *wiphy,
- struct net_device *ndev,
- struct cfg80211_beacon_data *info)
+static int cfg80211_rtw_change_beacon(struct wiphy *wiphy, struct net_device *ndev,
+ struct cfg80211_ap_update *info)
{
struct adapter *adapter = rtw_netdev_priv(ndev);
- return rtw_add_beacon(adapter, info->head, info->head_len, info->tail, info->tail_len);
+ return rtw_add_beacon(adapter, info->beacon.head,
+ info->beacon.head_len, info->beacon.tail,
+ info->beacon.tail_len);
}
static int cfg80211_rtw_stop_ap(struct wiphy *wiphy, struct net_device *ndev,
diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
index 817d377a3f36..f75731396b7e 100644
--- a/drivers/vhost/vsock.c
+++ b/drivers/vhost/vsock.c
@@ -114,6 +114,7 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
struct sk_buff *skb;
unsigned out, in;
size_t nbytes;
+ u32 offset;
int head;
skb = virtio_vsock_skb_dequeue(&vsock->send_pkt_queue);
@@ -156,7 +157,8 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
}
iov_iter_init(&iov_iter, ITER_DEST, &vq->iov[out], in, iov_len);
- payload_len = skb->len;
+ offset = VIRTIO_VSOCK_SKB_CB(skb)->offset;
+ payload_len = skb->len - offset;
hdr = virtio_vsock_hdr(skb);
/* If the packet is greater than the space available in the
@@ -197,8 +199,10 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
break;
}
- nbytes = copy_to_iter(skb->data, payload_len, &iov_iter);
- if (nbytes != payload_len) {
+ if (skb_copy_datagram_iter(skb,
+ offset,
+ &iov_iter,
+ payload_len)) {
kfree_skb(skb);
vq_err(vq, "Faulted on copying pkt buf\n");
break;
@@ -212,13 +216,13 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
vhost_add_used(vq, head, sizeof(*hdr) + payload_len);
added = true;
- skb_pull(skb, payload_len);
+ VIRTIO_VSOCK_SKB_CB(skb)->offset += payload_len;
total_len += payload_len;
/* If we didn't send all the payload we can requeue the packet
* to send it with the next available buffer.
*/
- if (skb->len > 0) {
+ if (VIRTIO_VSOCK_SKB_CB(skb)->offset < skb->len) {
hdr->flags |= cpu_to_le32(flags_to_restore);
/* We are queueing the same skb to handle
@@ -394,6 +398,11 @@ static bool vhost_vsock_more_replies(struct vhost_vsock *vsock)
return val < vq->num;
}
+static bool vhost_transport_msgzerocopy_allow(void)
+{
+ return true;
+}
+
static bool vhost_transport_seqpacket_allow(u32 remote_cid);
static struct virtio_transport vhost_transport = {
@@ -427,6 +436,8 @@ static struct virtio_transport vhost_transport = {
.seqpacket_allow = vhost_transport_seqpacket_allow,
.seqpacket_has_data = virtio_transport_seqpacket_has_data,
+ .msgzerocopy_allow = vhost_transport_msgzerocopy_allow,
+
.notify_poll_in = virtio_transport_notify_poll_in,
.notify_poll_out = virtio_transport_notify_poll_out,
.notify_recv_init = virtio_transport_notify_recv_init,
diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
index d0807ad43f93..6b3acf15be5c 100644
--- a/include/linux/avf/virtchnl.h
+++ b/include/linux/avf/virtchnl.h
@@ -4,6 +4,10 @@
#ifndef _VIRTCHNL_H_
#define _VIRTCHNL_H_
+#include <linux/bitops.h>
+#include <linux/overflow.h>
+#include <uapi/linux/if_ether.h>
+
/* Description:
* This header file describes the Virtual Function (VF) - Physical Function
* (PF) communication protocol used by the drivers for all devices starting
@@ -240,6 +244,7 @@ VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
#define VIRTCHNL_VF_OFFLOAD_REQ_QUEUES BIT(6)
/* used to negotiate communicating link speeds in Mbps */
#define VIRTCHNL_VF_CAP_ADV_LINK_SPEED BIT(7)
+#define VIRTCHNL_VF_OFFLOAD_CRC BIT(10)
#define VIRTCHNL_VF_OFFLOAD_VLAN_V2 BIT(15)
#define VIRTCHNL_VF_OFFLOAD_VLAN BIT(16)
#define VIRTCHNL_VF_OFFLOAD_RX_POLLING BIT(17)
@@ -295,7 +300,13 @@ VIRTCHNL_CHECK_STRUCT_LEN(24, virtchnl_txq_info);
/* VIRTCHNL_OP_CONFIG_RX_QUEUE
* VF sends this message to set up parameters for one RX queue.
* External data buffer contains one instance of virtchnl_rxq_info.
- * PF configures requested queue and returns a status code.
+ * PF configures requested queue and returns a status code. The
+ * crc_disable flag disables CRC stripping on the VF. Setting
+ * the crc_disable flag to 1 will disable CRC stripping for each
+ * queue in the VF where the flag is set. The VIRTCHNL_VF_OFFLOAD_CRC
+ * offload must have been set prior to sending this info or the PF
+ * will ignore the request. This flag should be set the same for
+ * all of the queues for a VF.
*/
/* Rx queue config info */
@@ -307,7 +318,7 @@ struct virtchnl_rxq_info {
u16 splithdr_enabled; /* deprecated with AVF 1.0 */
u32 databuffer_size;
u32 max_pkt_size;
- u8 pad0;
+ u8 crc_disable;
u8 rxdid;
u8 pad1[2];
u64 dma_ring_addr;
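
As the comment above notes, crc_disable is only honoured when VIRTCHNL_VF_OFFLOAD_CRC was negotiated, and it must be set identically for every queue of a VF. A rough VF-side sketch follows; the helper name, the negotiated_caps parameter and the buffer sizes are chosen only for illustration.

/* Hypothetical VF-side helper: only request that CRC stripping be
 * disabled when VIRTCHNL_VF_OFFLOAD_CRC was negotiated, and set the
 * flag the same way for every queue, as required above.
 */
static void vf_fill_rxq_info(struct virtchnl_rxq_info *rxq,
			     u32 negotiated_caps, bool keep_crc)
{
	rxq->databuffer_size = 2048;	/* illustrative values */
	rxq->max_pkt_size = 1522;
	rxq->crc_disable = (keep_crc &&
			    (negotiated_caps & VIRTCHNL_VF_OFFLOAD_CRC)) ? 1 : 0;
}
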
diff --git a/include/linux/bpf-cgroup-defs.h b/include/linux/bpf-cgroup-defs.h
index 7b121bd780eb..0985221d5478 100644
--- a/include/linux/bpf-cgroup-defs.h
+++ b/include/linux/bpf-cgroup-defs.h
@@ -28,19 +28,24 @@ enum cgroup_bpf_attach_type {
CGROUP_INET6_BIND,
CGROUP_INET4_CONNECT,
CGROUP_INET6_CONNECT,
+ CGROUP_UNIX_CONNECT,
CGROUP_INET4_POST_BIND,
CGROUP_INET6_POST_BIND,
CGROUP_UDP4_SENDMSG,
CGROUP_UDP6_SENDMSG,
+ CGROUP_UNIX_SENDMSG,
CGROUP_SYSCTL,
CGROUP_UDP4_RECVMSG,
CGROUP_UDP6_RECVMSG,
+ CGROUP_UNIX_RECVMSG,
CGROUP_GETSOCKOPT,
CGROUP_SETSOCKOPT,
CGROUP_INET4_GETPEERNAME,
CGROUP_INET6_GETPEERNAME,
+ CGROUP_UNIX_GETPEERNAME,
CGROUP_INET4_GETSOCKNAME,
CGROUP_INET6_GETSOCKNAME,
+ CGROUP_UNIX_GETSOCKNAME,
CGROUP_INET_SOCK_RELEASE,
CGROUP_LSM_START,
CGROUP_LSM_END = CGROUP_LSM_START + CGROUP_LSM_NUM - 1,
diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
index 8506690dbb9c..98b8cea904fe 100644
--- a/include/linux/bpf-cgroup.h
+++ b/include/linux/bpf-cgroup.h
@@ -48,19 +48,24 @@ to_cgroup_bpf_attach_type(enum bpf_attach_type attach_type)
CGROUP_ATYPE(CGROUP_INET6_BIND);
CGROUP_ATYPE(CGROUP_INET4_CONNECT);
CGROUP_ATYPE(CGROUP_INET6_CONNECT);
+ CGROUP_ATYPE(CGROUP_UNIX_CONNECT);
CGROUP_ATYPE(CGROUP_INET4_POST_BIND);
CGROUP_ATYPE(CGROUP_INET6_POST_BIND);
CGROUP_ATYPE(CGROUP_UDP4_SENDMSG);
CGROUP_ATYPE(CGROUP_UDP6_SENDMSG);
+ CGROUP_ATYPE(CGROUP_UNIX_SENDMSG);
CGROUP_ATYPE(CGROUP_SYSCTL);
CGROUP_ATYPE(CGROUP_UDP4_RECVMSG);
CGROUP_ATYPE(CGROUP_UDP6_RECVMSG);
+ CGROUP_ATYPE(CGROUP_UNIX_RECVMSG);
CGROUP_ATYPE(CGROUP_GETSOCKOPT);
CGROUP_ATYPE(CGROUP_SETSOCKOPT);
CGROUP_ATYPE(CGROUP_INET4_GETPEERNAME);
CGROUP_ATYPE(CGROUP_INET6_GETPEERNAME);
+ CGROUP_ATYPE(CGROUP_UNIX_GETPEERNAME);
CGROUP_ATYPE(CGROUP_INET4_GETSOCKNAME);
CGROUP_ATYPE(CGROUP_INET6_GETSOCKNAME);
+ CGROUP_ATYPE(CGROUP_UNIX_GETSOCKNAME);
CGROUP_ATYPE(CGROUP_INET_SOCK_RELEASE);
default:
return CGROUP_BPF_ATTACH_TYPE_INVALID;
@@ -120,6 +125,7 @@ int __cgroup_bpf_run_filter_sk(struct sock *sk,
int __cgroup_bpf_run_filter_sock_addr(struct sock *sk,
struct sockaddr *uaddr,
+ int *uaddrlen,
enum cgroup_bpf_attach_type atype,
void *t_ctx,
u32 *flags);
@@ -230,22 +236,22 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
#define BPF_CGROUP_RUN_PROG_INET6_POST_BIND(sk) \
BPF_CGROUP_RUN_SK_PROG(sk, CGROUP_INET6_POST_BIND)
-#define BPF_CGROUP_RUN_SA_PROG(sk, uaddr, atype) \
+#define BPF_CGROUP_RUN_SA_PROG(sk, uaddr, uaddrlen, atype) \
({ \
int __ret = 0; \
if (cgroup_bpf_enabled(atype)) \
- __ret = __cgroup_bpf_run_filter_sock_addr(sk, uaddr, atype, \
- NULL, NULL); \
+ __ret = __cgroup_bpf_run_filter_sock_addr(sk, uaddr, uaddrlen, \
+ atype, NULL, NULL); \
__ret; \
})
-#define BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, atype, t_ctx) \
+#define BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, uaddrlen, atype, t_ctx) \
({ \
int __ret = 0; \
if (cgroup_bpf_enabled(atype)) { \
lock_sock(sk); \
- __ret = __cgroup_bpf_run_filter_sock_addr(sk, uaddr, atype, \
- t_ctx, NULL); \
+ __ret = __cgroup_bpf_run_filter_sock_addr(sk, uaddr, uaddrlen, \
+ atype, t_ctx, NULL); \
release_sock(sk); \
} \
__ret; \
@@ -256,14 +262,14 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
* (at bit position 0) is to indicate CAP_NET_BIND_SERVICE capability check
* should be bypassed (BPF_RET_BIND_NO_CAP_NET_BIND_SERVICE).
*/
-#define BPF_CGROUP_RUN_PROG_INET_BIND_LOCK(sk, uaddr, atype, bind_flags) \
+#define BPF_CGROUP_RUN_PROG_INET_BIND_LOCK(sk, uaddr, uaddrlen, atype, bind_flags) \
({ \
u32 __flags = 0; \
int __ret = 0; \
if (cgroup_bpf_enabled(atype)) { \
lock_sock(sk); \
- __ret = __cgroup_bpf_run_filter_sock_addr(sk, uaddr, atype, \
- NULL, &__flags); \
+ __ret = __cgroup_bpf_run_filter_sock_addr(sk, uaddr, uaddrlen, \
+ atype, NULL, &__flags); \
release_sock(sk); \
if (__flags & BPF_RET_BIND_NO_CAP_NET_BIND_SERVICE) \
*bind_flags |= BIND_NO_CAP_NET_BIND_SERVICE; \
@@ -276,29 +282,38 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
cgroup_bpf_enabled(CGROUP_INET6_CONNECT)) && \
(sk)->sk_prot->pre_connect)
-#define BPF_CGROUP_RUN_PROG_INET4_CONNECT(sk, uaddr) \
- BPF_CGROUP_RUN_SA_PROG(sk, uaddr, CGROUP_INET4_CONNECT)
+#define BPF_CGROUP_RUN_PROG_INET4_CONNECT(sk, uaddr, uaddrlen) \
+ BPF_CGROUP_RUN_SA_PROG(sk, uaddr, uaddrlen, CGROUP_INET4_CONNECT)
-#define BPF_CGROUP_RUN_PROG_INET6_CONNECT(sk, uaddr) \
- BPF_CGROUP_RUN_SA_PROG(sk, uaddr, CGROUP_INET6_CONNECT)
+#define BPF_CGROUP_RUN_PROG_INET6_CONNECT(sk, uaddr, uaddrlen) \
+ BPF_CGROUP_RUN_SA_PROG(sk, uaddr, uaddrlen, CGROUP_INET6_CONNECT)
-#define BPF_CGROUP_RUN_PROG_INET4_CONNECT_LOCK(sk, uaddr) \
- BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, CGROUP_INET4_CONNECT, NULL)
+#define BPF_CGROUP_RUN_PROG_INET4_CONNECT_LOCK(sk, uaddr, uaddrlen) \
+ BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, uaddrlen, CGROUP_INET4_CONNECT, NULL)
-#define BPF_CGROUP_RUN_PROG_INET6_CONNECT_LOCK(sk, uaddr) \
- BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, CGROUP_INET6_CONNECT, NULL)
+#define BPF_CGROUP_RUN_PROG_INET6_CONNECT_LOCK(sk, uaddr, uaddrlen) \
+ BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, uaddrlen, CGROUP_INET6_CONNECT, NULL)
-#define BPF_CGROUP_RUN_PROG_UDP4_SENDMSG_LOCK(sk, uaddr, t_ctx) \
- BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, CGROUP_UDP4_SENDMSG, t_ctx)
+#define BPF_CGROUP_RUN_PROG_UNIX_CONNECT_LOCK(sk, uaddr, uaddrlen) \
+ BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, uaddrlen, CGROUP_UNIX_CONNECT, NULL)
-#define BPF_CGROUP_RUN_PROG_UDP6_SENDMSG_LOCK(sk, uaddr, t_ctx) \
- BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, CGROUP_UDP6_SENDMSG, t_ctx)
+#define BPF_CGROUP_RUN_PROG_UDP4_SENDMSG_LOCK(sk, uaddr, uaddrlen, t_ctx) \
+ BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, uaddrlen, CGROUP_UDP4_SENDMSG, t_ctx)
-#define BPF_CGROUP_RUN_PROG_UDP4_RECVMSG_LOCK(sk, uaddr) \
- BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, CGROUP_UDP4_RECVMSG, NULL)
+#define BPF_CGROUP_RUN_PROG_UDP6_SENDMSG_LOCK(sk, uaddr, uaddrlen, t_ctx) \
+ BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, uaddrlen, CGROUP_UDP6_SENDMSG, t_ctx)
-#define BPF_CGROUP_RUN_PROG_UDP6_RECVMSG_LOCK(sk, uaddr) \
- BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, CGROUP_UDP6_RECVMSG, NULL)
+#define BPF_CGROUP_RUN_PROG_UNIX_SENDMSG_LOCK(sk, uaddr, uaddrlen, t_ctx) \
+ BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, uaddrlen, CGROUP_UNIX_SENDMSG, t_ctx)
+
+#define BPF_CGROUP_RUN_PROG_UDP4_RECVMSG_LOCK(sk, uaddr, uaddrlen) \
+ BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, uaddrlen, CGROUP_UDP4_RECVMSG, NULL)
+
+#define BPF_CGROUP_RUN_PROG_UDP6_RECVMSG_LOCK(sk, uaddr, uaddrlen) \
+ BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, uaddrlen, CGROUP_UDP6_RECVMSG, NULL)
+
+#define BPF_CGROUP_RUN_PROG_UNIX_RECVMSG_LOCK(sk, uaddr, uaddrlen) \
+ BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, uaddrlen, CGROUP_UNIX_RECVMSG, NULL)
/* The SOCK_OPS"_SK" macro should be used when sock_ops->sk is not a
* fullsock and its parent fullsock cannot be traced by
@@ -477,24 +492,27 @@ static inline int bpf_percpu_cgroup_storage_update(struct bpf_map *map,
}
#define cgroup_bpf_enabled(atype) (0)
-#define BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, atype, t_ctx) ({ 0; })
-#define BPF_CGROUP_RUN_SA_PROG(sk, uaddr, atype) ({ 0; })
+#define BPF_CGROUP_RUN_SA_PROG_LOCK(sk, uaddr, uaddrlen, atype, t_ctx) ({ 0; })
+#define BPF_CGROUP_RUN_SA_PROG(sk, uaddr, uaddrlen, atype) ({ 0; })
#define BPF_CGROUP_PRE_CONNECT_ENABLED(sk) (0)
#define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk,skb) ({ 0; })
#define BPF_CGROUP_RUN_PROG_INET_EGRESS(sk,skb) ({ 0; })
#define BPF_CGROUP_RUN_PROG_INET_SOCK(sk) ({ 0; })
#define BPF_CGROUP_RUN_PROG_INET_SOCK_RELEASE(sk) ({ 0; })
-#define BPF_CGROUP_RUN_PROG_INET_BIND_LOCK(sk, uaddr, atype, flags) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_INET_BIND_LOCK(sk, uaddr, uaddrlen, atype, flags) ({ 0; })
#define BPF_CGROUP_RUN_PROG_INET4_POST_BIND(sk) ({ 0; })
#define BPF_CGROUP_RUN_PROG_INET6_POST_BIND(sk) ({ 0; })
-#define BPF_CGROUP_RUN_PROG_INET4_CONNECT(sk, uaddr) ({ 0; })
-#define BPF_CGROUP_RUN_PROG_INET4_CONNECT_LOCK(sk, uaddr) ({ 0; })
-#define BPF_CGROUP_RUN_PROG_INET6_CONNECT(sk, uaddr) ({ 0; })
-#define BPF_CGROUP_RUN_PROG_INET6_CONNECT_LOCK(sk, uaddr) ({ 0; })
-#define BPF_CGROUP_RUN_PROG_UDP4_SENDMSG_LOCK(sk, uaddr, t_ctx) ({ 0; })
-#define BPF_CGROUP_RUN_PROG_UDP6_SENDMSG_LOCK(sk, uaddr, t_ctx) ({ 0; })
-#define BPF_CGROUP_RUN_PROG_UDP4_RECVMSG_LOCK(sk, uaddr) ({ 0; })
-#define BPF_CGROUP_RUN_PROG_UDP6_RECVMSG_LOCK(sk, uaddr) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_INET4_CONNECT(sk, uaddr, uaddrlen) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_INET4_CONNECT_LOCK(sk, uaddr, uaddrlen) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_INET6_CONNECT(sk, uaddr, uaddrlen) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_INET6_CONNECT_LOCK(sk, uaddr, uaddrlen) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_UNIX_CONNECT_LOCK(sk, uaddr, uaddrlen) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_UDP4_SENDMSG_LOCK(sk, uaddr, uaddrlen, t_ctx) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_UDP6_SENDMSG_LOCK(sk, uaddr, uaddrlen, t_ctx) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_UNIX_SENDMSG_LOCK(sk, uaddr, uaddrlen, t_ctx) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_UDP4_RECVMSG_LOCK(sk, uaddr, uaddrlen) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_UDP6_RECVMSG_LOCK(sk, uaddr, uaddrlen) ({ 0; })
+#define BPF_CGROUP_RUN_PROG_UNIX_RECVMSG_LOCK(sk, uaddr, uaddrlen) ({ 0; })
#define BPF_CGROUP_RUN_PROG_SOCK_OPS(sock_ops) ({ 0; })
#define BPF_CGROUP_RUN_PROG_DEVICE_CGROUP(atype, major, minor, access) ({ 0; })
#define BPF_CGROUP_RUN_PROG_SYSCTL(head,table,write,buf,count,pos) ({ 0; })
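
All sockaddr hooks now also take a pointer to the address length, so a program can rewrite both the sockaddr and its length, which matters for variable-length AF_UNIX addresses. A hedged sketch of how a sendmsg path might invoke the new UNIX hook, assuming the destination sits in msg_name/msg_namelen of struct msghdr; the wrapper function itself is hypothetical.

/* Hypothetical call site: run the cgroup UNIX sendmsg hook and let the
 * attached program rewrite both the destination address and its length
 * before the message is routed.
 */
static int example_unix_sendmsg_hook(struct sock *sk, struct msghdr *msg)
{
	if (!msg->msg_name)
		return 0;

	return BPF_CGROUP_RUN_PROG_UNIX_SENDMSG_LOCK(sk, msg->msg_name,
						      &msg->msg_namelen, NULL);
}
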
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 49f8b691496c..b4825d3cdb29 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -55,8 +55,8 @@ struct cgroup;
extern struct idr btf_idr;
extern spinlock_t btf_idr_lock;
extern struct kobject *btf_kobj;
-extern struct bpf_mem_alloc bpf_global_ma;
-extern bool bpf_global_ma_set;
+extern struct bpf_mem_alloc bpf_global_ma, bpf_global_percpu_ma;
+extern bool bpf_global_ma_set, bpf_global_percpu_ma_set;
typedef u64 (*bpf_callback_t)(u64, u64, u64, u64, u64);
typedef int (*bpf_iter_init_seq_priv_t)(void *private_data,
@@ -180,14 +180,15 @@ enum btf_field_type {
BPF_TIMER = (1 << 1),
BPF_KPTR_UNREF = (1 << 2),
BPF_KPTR_REF = (1 << 3),
- BPF_KPTR = BPF_KPTR_UNREF | BPF_KPTR_REF,
- BPF_LIST_HEAD = (1 << 4),
- BPF_LIST_NODE = (1 << 5),
- BPF_RB_ROOT = (1 << 6),
- BPF_RB_NODE = (1 << 7),
+ BPF_KPTR_PERCPU = (1 << 4),
+ BPF_KPTR = BPF_KPTR_UNREF | BPF_KPTR_REF | BPF_KPTR_PERCPU,
+ BPF_LIST_HEAD = (1 << 5),
+ BPF_LIST_NODE = (1 << 6),
+ BPF_RB_ROOT = (1 << 7),
+ BPF_RB_NODE = (1 << 8),
BPF_GRAPH_NODE_OR_ROOT = BPF_LIST_NODE | BPF_LIST_HEAD |
BPF_RB_NODE | BPF_RB_ROOT,
- BPF_REFCOUNT = (1 << 8),
+ BPF_REFCOUNT = (1 << 9),
};
typedef void (*btf_dtor_kfunc_t)(void *);
@@ -300,6 +301,8 @@ static inline const char *btf_field_type_name(enum btf_field_type type)
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
return "kptr";
+ case BPF_KPTR_PERCPU:
+ return "percpu_kptr";
case BPF_LIST_HEAD:
return "bpf_list_head";
case BPF_LIST_NODE:
@@ -325,6 +328,7 @@ static inline u32 btf_field_type_size(enum btf_field_type type)
return sizeof(struct bpf_timer);
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
return sizeof(u64);
case BPF_LIST_HEAD:
return sizeof(struct bpf_list_head);
@@ -351,6 +355,7 @@ static inline u32 btf_field_type_align(enum btf_field_type type)
return __alignof__(struct bpf_timer);
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
return __alignof__(u64);
case BPF_LIST_HEAD:
return __alignof__(struct bpf_list_head);
@@ -389,6 +394,7 @@ static inline void bpf_obj_init_field(const struct btf_field *field, void *addr)
case BPF_TIMER:
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
break;
default:
WARN_ON_ONCE(1);
@@ -1029,6 +1035,11 @@ struct btf_func_model {
*/
#define BPF_TRAMP_F_SHARE_IPMODIFY BIT(6)
+/* Indicate that the current trampoline is in a tail call context. It then has
+ * to cache and restore tail_call_cnt to avoid an infinite tail call loop.
+ */
+#define BPF_TRAMP_F_TAIL_CALL_CTX BIT(7)
+
/* Each call __bpf_prog_enter + call bpf_func + call __bpf_prog_exit is ~50
* bytes on x86.
*/
@@ -1378,6 +1389,7 @@ struct bpf_prog_aux {
u32 stack_depth;
u32 id;
u32 func_cnt; /* used by non-func prog as the number of func progs */
+ u32 real_func_cnt; /* includes hidden progs, only used for JIT and freeing progs */
u32 func_idx; /* 0 for non-func prog, the index in func array for func prog */
u32 attach_btf_id; /* in-kernel BTF type id to attach to */
u32 ctx_arg_info_size;
@@ -1398,6 +1410,8 @@ struct bpf_prog_aux {
bool sleepable;
bool tail_call_reachable;
bool xdp_has_frags;
+ bool exception_cb;
+ bool exception_boundary;
/* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
const struct btf_type *attach_func_proto;
/* function name for valid attach_btf_id */
@@ -1420,6 +1434,7 @@ struct bpf_prog_aux {
int cgroup_atype; /* enum cgroup_bpf_attach_type */
struct bpf_map *cgroup_storage[MAX_BPF_CGROUP_STORAGE_TYPE];
char name[BPF_OBJ_NAME_LEN];
+ unsigned int (*bpf_exception_cb)(u64 cookie, u64 sp, u64 bp);
#ifdef CONFIG_SECURITY
void *security;
#endif
@@ -2043,6 +2058,7 @@ struct btf_record *btf_record_dup(const struct btf_record *rec);
bool btf_record_equal(const struct btf_record *rec_a, const struct btf_record *rec_b);
void bpf_obj_free_timer(const struct btf_record *rec, void *obj);
void bpf_obj_free_fields(const struct btf_record *rec, void *obj);
+void __bpf_obj_drop_impl(void *p, const struct btf_record *rec, bool percpu);
struct bpf_map *bpf_map_get(u32 ufd);
struct bpf_map *bpf_map_get_with_uref(u32 ufd);
@@ -2149,12 +2165,12 @@ static inline bool bpf_allow_uninit_stack(void)
static inline bool bpf_bypass_spec_v1(void)
{
- return perfmon_capable();
+ return cpu_mitigations_off() || perfmon_capable();
}
static inline bool bpf_bypass_spec_v4(void)
{
- return perfmon_capable();
+ return cpu_mitigations_off() || perfmon_capable();
}
int bpf_map_new_fd(struct bpf_map *map, int flags);
@@ -2407,9 +2423,11 @@ int btf_check_subprog_arg_match(struct bpf_verifier_env *env, int subprog,
int btf_check_subprog_call(struct bpf_verifier_env *env, int subprog,
struct bpf_reg_state *regs);
int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog,
- struct bpf_reg_state *reg);
+ struct bpf_reg_state *reg, bool is_ex_cb);
int btf_check_type_match(struct bpf_verifier_log *log, const struct bpf_prog *prog,
struct btf *btf, const struct btf_type *t);
+const char *btf_find_decl_tag_value(const struct btf *btf, const struct btf_type *pt,
+ int comp_idx, const char *tag_key);
struct bpf_prog *bpf_prog_by_id(u32 id);
struct bpf_link *bpf_link_by_id(u32 id);
@@ -2461,6 +2479,9 @@ void bpf_dynptr_init(struct bpf_dynptr_kern *ptr, void *data,
enum bpf_dynptr_type type, u32 offset, u32 size);
void bpf_dynptr_set_null(struct bpf_dynptr_kern *ptr);
void bpf_dynptr_set_rdonly(struct bpf_dynptr_kern *ptr);
+
+bool dev_check_flush(void);
+bool cpu_map_check_flush(void);
#else /* !CONFIG_BPF_SYSCALL */
static inline struct bpf_prog *bpf_prog_get(u32 ufd)
{
@@ -2905,6 +2926,22 @@ static inline int sock_map_bpf_prog_query(const union bpf_attr *attr,
#endif /* CONFIG_BPF_SYSCALL */
#endif /* CONFIG_NET && CONFIG_BPF_SYSCALL */
+static __always_inline void
+bpf_prog_inc_misses_counters(const struct bpf_prog_array *array)
+{
+ const struct bpf_prog_array_item *item;
+ struct bpf_prog *prog;
+
+ if (unlikely(!array))
+ return;
+
+ item = &array->items[0];
+ while ((prog = READ_ONCE(item->prog))) {
+ bpf_prog_inc_misses_counter(prog);
+ item++;
+ }
+}
+
#if defined(CONFIG_INET) && defined(CONFIG_BPF_SYSCALL)
void bpf_sk_reuseport_detach(struct sock *sk);
int bpf_fd_reuseport_array_lookup_elem(struct bpf_map *map, void *key,
@@ -3183,4 +3220,9 @@ static inline gfp_t bpf_memcg_flags(gfp_t flags)
return flags;
}
+static inline bool bpf_is_subprog(const struct bpf_prog *prog)
+{
+ return prog->aux->func_idx != 0;
+}
+
#endif /* _LINUX_BPF_H */
diff --git a/include/linux/bpf_mem_alloc.h b/include/linux/bpf_mem_alloc.h
index d644bbb298af..bb1223b21308 100644
--- a/include/linux/bpf_mem_alloc.h
+++ b/include/linux/bpf_mem_alloc.h
@@ -11,6 +11,7 @@ struct bpf_mem_caches;
struct bpf_mem_alloc {
struct bpf_mem_caches __percpu *caches;
struct bpf_mem_cache __percpu *cache;
+ bool percpu;
struct work_struct work;
};
diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
index b6e58dab8e27..24213a99cc79 100644
--- a/include/linux/bpf_verifier.h
+++ b/include/linux/bpf_verifier.h
@@ -300,6 +300,7 @@ struct bpf_func_state {
bool in_callback_fn;
struct tnum callback_ret_range;
bool in_async_callback_fn;
+ bool in_exception_callback_fn;
/* The following fields should be last. See copy_func_state() */
int acquired_refs;
@@ -372,10 +373,25 @@ struct bpf_verifier_state {
struct bpf_active_lock active_lock;
bool speculative;
bool active_rcu_lock;
+ /* If this state was ever pointed to by another state's loop_entry field,
+ * this flag is set to true. Used to avoid freeing such states
+ * while they are still in use.
+ */
+ bool used_as_loop_entry;
/* first and last insn idx of this verifier state */
u32 first_insn_idx;
u32 last_insn_idx;
+ /* If this state is part of a states loop, this field points to some
+ * parent of this state such that:
+ * - it is also a member of the same states loop;
+ * - DFS states traversal starting from initial state visits loop_entry
+ * state before this state.
+ * Used to compute topmost loop entry for state loops.
+ * State loops might appear because of open-coded iterator logic.
+ * See get_loop_entry() for more information.
+ */
+ struct bpf_verifier_state *loop_entry;
/* jmp history recorded from first to last.
* backtracking is using it to go from last to first.
* For most states jmp_history_cnt is [0-3].
@@ -383,21 +399,21 @@ struct bpf_verifier_state {
*/
struct bpf_idx_pair *jmp_history;
u32 jmp_history_cnt;
+ u32 dfs_depth;
};
-#define bpf_get_spilled_reg(slot, frame) \
+#define bpf_get_spilled_reg(slot, frame, mask) \
(((slot < frame->allocated_stack / BPF_REG_SIZE) && \
- (frame->stack[slot].slot_type[0] == STACK_SPILL)) \
+ ((1 << frame->stack[slot].slot_type[0]) & (mask))) \
? &frame->stack[slot].spilled_ptr : NULL)
/* Iterate over 'frame', setting 'reg' to either NULL or a spilled register. */
-#define bpf_for_each_spilled_reg(iter, frame, reg) \
- for (iter = 0, reg = bpf_get_spilled_reg(iter, frame); \
+#define bpf_for_each_spilled_reg(iter, frame, reg, mask) \
+ for (iter = 0, reg = bpf_get_spilled_reg(iter, frame, mask); \
iter < frame->allocated_stack / BPF_REG_SIZE; \
- iter++, reg = bpf_get_spilled_reg(iter, frame))
+ iter++, reg = bpf_get_spilled_reg(iter, frame, mask))
-/* Invoke __expr over regsiters in __vst, setting __state and __reg */
-#define bpf_for_each_reg_in_vstate(__vst, __state, __reg, __expr) \
+#define bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, __mask, __expr) \
({ \
struct bpf_verifier_state *___vstate = __vst; \
int ___i, ___j; \
@@ -409,7 +425,7 @@ struct bpf_verifier_state {
__reg = &___regs[___j]; \
(void)(__expr); \
} \
- bpf_for_each_spilled_reg(___j, __state, __reg) { \
+ bpf_for_each_spilled_reg(___j, __state, __reg, __mask) { \
if (!__reg) \
continue; \
(void)(__expr); \
@@ -417,6 +433,10 @@ struct bpf_verifier_state {
} \
})
+/* Invoke __expr over registers in __vst, setting __state and __reg */
+#define bpf_for_each_reg_in_vstate(__vst, __state, __reg, __expr) \
+ bpf_for_each_reg_in_vstate_mask(__vst, __state, __reg, 1 << STACK_SPILL, __expr)
+
/* linked list of verifier states used to prune search */
struct bpf_verifier_state_list {
struct bpf_verifier_state state;
@@ -480,6 +500,7 @@ struct bpf_insn_aux_data {
bool zext_dst; /* this insn zero extends dst reg */
bool storage_get_func_atomic; /* bpf_*_storage_get() with atomic memory alloc */
bool is_iter_next; /* bpf_iter_<type>_next() kfunc call */
+ bool call_with_percpu_alloc_ptr; /* {this,per}_cpu_ptr() with prog percpu alloc */
u8 alu_state; /* used in combination with alu_limit */
/* below fields are initialized once */
@@ -540,7 +561,9 @@ struct bpf_subprog_info {
bool has_tail_call;
bool tail_call_reachable;
bool has_ld_abs;
+ bool is_cb;
bool is_async_cb;
+ bool is_exception_cb;
};
struct bpf_verifier_env;
@@ -587,6 +610,8 @@ struct bpf_verifier_env {
u32 used_map_cnt; /* number of used maps */
u32 used_btf_cnt; /* number of used BTF objects */
u32 id_gen; /* used to generate unique reg IDs */
+ u32 hidden_subprog_cnt; /* number of hidden subprogs */
+ int exception_callback_subprog;
bool explore_alu_limits;
bool allow_ptr_leaks;
bool allow_uninit_stack;
@@ -594,10 +619,11 @@ struct bpf_verifier_env {
bool bypass_spec_v1;
bool bypass_spec_v4;
bool seen_direct_write;
+ bool seen_exception;
struct bpf_insn_aux_data *insn_aux_data; /* array of per-insn state */
const struct bpf_line_info *prev_linfo;
struct bpf_verifier_log log;
- struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 1];
+ struct bpf_subprog_info subprog_info[BPF_MAX_SUBPROGS + 2]; /* max + 2 for the fake and exception subprogs */
union {
struct bpf_idmap idmap_scratch;
struct bpf_idset idset_scratch;
diff --git a/include/linux/brcmphy.h b/include/linux/brcmphy.h
index c55810a43541..1394ba302367 100644
--- a/include/linux/brcmphy.h
+++ b/include/linux/brcmphy.h
@@ -11,6 +11,7 @@
#define PHY_ID_BCM50610 0x0143bd60
#define PHY_ID_BCM50610M 0x0143bd70
+#define PHY_ID_BCM5221 0x004061e0
#define PHY_ID_BCM5241 0x0143bc30
#define PHY_ID_BCMAC131 0x0143bc70
#define PHY_ID_BCM5481 0x0143bca0
@@ -331,6 +332,15 @@
#define BCM54XX_WOL_INT_STATUS (MII_BCM54XX_EXP_SEL_WOL + 0x94)
+/* BCM5221 Registers */
+#define BCM5221_AEGSR 0x1C
+#define BCM5221_AEGSR_MDIX_STATUS BIT(13)
+#define BCM5221_AEGSR_MDIX_MAN_SWAP BIT(12)
+#define BCM5221_AEGSR_MDIX_DIS BIT(11)
+
+#define BCM5221_SHDW_AM4_EN_CLK_LPM BIT(2)
+#define BCM5221_SHDW_AM4_FORCE_LPM BIT(1)
+
/*****************************************************************************/
/* Fast Ethernet Transceiver definitions. */
/*****************************************************************************/
diff --git a/include/linux/btf.h b/include/linux/btf.h
index 928113a80a95..c2231c64d60b 100644
--- a/include/linux/btf.h
+++ b/include/linux/btf.h
@@ -74,6 +74,7 @@
#define KF_ITER_NEW (1 << 8) /* kfunc implements BPF iter constructor */
#define KF_ITER_NEXT (1 << 9) /* kfunc implements BPF iter next method */
#define KF_ITER_DESTROY (1 << 10) /* kfunc implements BPF iter destructor */
+#define KF_RCU_PROTECTED (1 << 11) /* kfunc should be protected by rcu cs when they are invoked */
/*
* Tag marking a kernel function as a kfunc. This is meant to minimize the
diff --git a/include/linux/can/dev.h b/include/linux/can/dev.h
index 982ba245eb41..1b92aed49363 100644
--- a/include/linux/can/dev.h
+++ b/include/linux/can/dev.h
@@ -195,6 +195,10 @@ int can_restart_now(struct net_device *dev);
void can_bus_off(struct net_device *dev);
const char *can_get_state_str(const enum can_state state);
+void can_state_get_by_berr_counter(const struct net_device *dev,
+ const struct can_berr_counter *bec,
+ enum can_state *tx_state,
+ enum can_state *rx_state);
void can_change_state(struct net_device *dev, struct can_frame *cf,
enum can_state tx_state, enum can_state rx_state);
diff --git a/include/linux/ceph/mon_client.h b/include/linux/ceph/mon_client.h
index b658961156a0..7a9a40163c0f 100644
--- a/include/linux/ceph/mon_client.h
+++ b/include/linux/ceph/mon_client.h
@@ -19,7 +19,7 @@ struct ceph_monmap {
struct ceph_fsid fsid;
u32 epoch;
u32 num_mon;
- struct ceph_entity_inst mon_inst[];
+ struct ceph_entity_inst mon_inst[] __counted_by(num_mon);
};
struct ceph_mon_client;
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index b307013b9c6c..0ef0af66080e 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -40,13 +40,11 @@ struct kernel_clone_args;
#define CGROUP_WEIGHT_DFL 100
#define CGROUP_WEIGHT_MAX 10000
-/* walk only threadgroup leaders */
-#define CSS_TASK_ITER_PROCS (1U << 0)
-/* walk all threaded css_sets in the domain */
-#define CSS_TASK_ITER_THREADED (1U << 1)
-
-/* internal flags */
-#define CSS_TASK_ITER_SKIPPED (1U << 16)
+enum {
+ CSS_TASK_ITER_PROCS = (1U << 0), /* walk only threadgroup leaders */
+ CSS_TASK_ITER_THREADED = (1U << 1), /* walk all threaded css_sets in the domain */
+ CSS_TASK_ITER_SKIPPED = (1U << 16), /* internal flags */
+};
/* a css_task_iter should be treated as an opaque object */
struct css_task_iter {
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index c523c6683789..6f1ca49306d2 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -2,6 +2,15 @@
#ifndef __LINUX_COMPILER_TYPES_H
#define __LINUX_COMPILER_TYPES_H
+/*
+ * __has_builtin is supported on gcc >= 10, clang >= 3 and icc >= 21.
+ * In the meantime, to support gcc < 10, we implement __has_builtin
+ * by hand.
+ */
+#ifndef __has_builtin
+#define __has_builtin(x) (0)
+#endif
+
#ifndef __ASSEMBLY__
/*
@@ -134,17 +143,6 @@ static inline void __chk_io_ptr(const volatile void __iomem *ptr) { }
# define __preserve_most
#endif
-/* Builtins */
-
-/*
- * __has_builtin is supported on gcc >= 10, clang >= 3 and icc >= 21.
- * In the meantime, to support gcc < 10, we implement __has_builtin
- * by hand.
- */
-#ifndef __has_builtin
-#define __has_builtin(x) (0)
-#endif
-
/* Compiler specific macros. */
#ifdef __clang__
#include <linux/compiler-clang.h>
@@ -352,6 +350,18 @@ struct ftrace_likely_data {
# define __realloc_size(x, ...)
#endif
+/*
+ * When the size of an allocated object is needed, use the best available
+ * mechanism to find it. (For cases where sizeof() cannot be used.)
+ */
+#if __has_builtin(__builtin_dynamic_object_size)
+#define __struct_size(p) __builtin_dynamic_object_size(p, 0)
+#define __member_size(p) __builtin_dynamic_object_size(p, 1)
+#else
+#define __struct_size(p) __builtin_object_size(p, 0)
+#define __member_size(p) __builtin_object_size(p, 1)
+#endif
+
#ifndef asm_volatile_goto
#define asm_volatile_goto(x...) asm goto(x)
#endif
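
The new __struct_size()/__member_size() helpers prefer __builtin_dynamic_object_size(), which can also see allocation sizes that are only known at run time. A small stand-alone C illustration of the difference, assuming GCC 12+ or a recent Clang; this is not kernel code, and the printed values depend on optimization level.

#include <stdio.h>
#include <stdlib.h>

struct msg {
	int len;
	char data[];	/* flexible array member */
};

int main(void)
{
	struct msg *m = malloc(sizeof(*m) + 32);

	if (!m)
		return 1;
	/* Mode 1 asks for the size of the closest surrounding member only.
	 * With optimizations enabled and a compiler that can track the
	 * malloc() size, the dynamic variant typically reports 32 here,
	 * while the static variant reports SIZE_MAX (unknown).
	 */
	printf("dynamic: %zu\n", __builtin_dynamic_object_size(m->data, 1));
	printf("static:  %zu\n", __builtin_object_size(m->data, 1));
	free(m);
	return 0;
}
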
diff --git a/include/linux/dpll.h b/include/linux/dpll.h
new file mode 100644
index 000000000000..578fc5fa3750
--- /dev/null
+++ b/include/linux/dpll.h
@@ -0,0 +1,170 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2023 Meta Platforms, Inc. and affiliates
+ * Copyright (c) 2023 Intel and affiliates
+ */
+
+#ifndef __DPLL_H__
+#define __DPLL_H__
+
+#include <uapi/linux/dpll.h>
+#include <linux/device.h>
+#include <linux/netlink.h>
+
+struct dpll_device;
+struct dpll_pin;
+
+struct dpll_device_ops {
+ int (*mode_get)(const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_mode *mode, struct netlink_ext_ack *extack);
+ bool (*mode_supported)(const struct dpll_device *dpll, void *dpll_priv,
+ const enum dpll_mode mode,
+ struct netlink_ext_ack *extack);
+ int (*lock_status_get)(const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_lock_status *status,
+ struct netlink_ext_ack *extack);
+ int (*temp_get)(const struct dpll_device *dpll, void *dpll_priv,
+ s32 *temp, struct netlink_ext_ack *extack);
+};
+
+struct dpll_pin_ops {
+ int (*frequency_set)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ const u64 frequency,
+ struct netlink_ext_ack *extack);
+ int (*frequency_get)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ u64 *frequency, struct netlink_ext_ack *extack);
+ int (*direction_set)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ const enum dpll_pin_direction direction,
+ struct netlink_ext_ack *extack);
+ int (*direction_get)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ enum dpll_pin_direction *direction,
+ struct netlink_ext_ack *extack);
+ int (*state_on_pin_get)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_pin *parent_pin,
+ void *parent_pin_priv,
+ enum dpll_pin_state *state,
+ struct netlink_ext_ack *extack);
+ int (*state_on_dpll_get)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll,
+ void *dpll_priv, enum dpll_pin_state *state,
+ struct netlink_ext_ack *extack);
+ int (*state_on_pin_set)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_pin *parent_pin,
+ void *parent_pin_priv,
+ const enum dpll_pin_state state,
+ struct netlink_ext_ack *extack);
+ int (*state_on_dpll_set)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll,
+ void *dpll_priv,
+ const enum dpll_pin_state state,
+ struct netlink_ext_ack *extack);
+ int (*prio_get)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ u32 *prio, struct netlink_ext_ack *extack);
+ int (*prio_set)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ const u32 prio, struct netlink_ext_ack *extack);
+ int (*phase_offset_get)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ s64 *phase_offset,
+ struct netlink_ext_ack *extack);
+ int (*phase_adjust_get)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ s32 *phase_adjust,
+ struct netlink_ext_ack *extack);
+ int (*phase_adjust_set)(const struct dpll_pin *pin, void *pin_priv,
+ const struct dpll_device *dpll, void *dpll_priv,
+ const s32 phase_adjust,
+ struct netlink_ext_ack *extack);
+};
+
+struct dpll_pin_frequency {
+ u64 min;
+ u64 max;
+};
+
+#define DPLL_PIN_FREQUENCY_RANGE(_min, _max) \
+ { \
+ .min = _min, \
+ .max = _max, \
+ }
+
+#define DPLL_PIN_FREQUENCY(_val) DPLL_PIN_FREQUENCY_RANGE(_val, _val)
+#define DPLL_PIN_FREQUENCY_1PPS \
+ DPLL_PIN_FREQUENCY(DPLL_PIN_FREQUENCY_1_HZ)
+#define DPLL_PIN_FREQUENCY_10MHZ \
+ DPLL_PIN_FREQUENCY(DPLL_PIN_FREQUENCY_10_MHZ)
+#define DPLL_PIN_FREQUENCY_IRIG_B \
+ DPLL_PIN_FREQUENCY(DPLL_PIN_FREQUENCY_10_KHZ)
+#define DPLL_PIN_FREQUENCY_DCF77 \
+ DPLL_PIN_FREQUENCY(DPLL_PIN_FREQUENCY_77_5_KHZ)
+
+struct dpll_pin_phase_adjust_range {
+ s32 min;
+ s32 max;
+};
+
+struct dpll_pin_properties {
+ const char *board_label;
+ const char *panel_label;
+ const char *package_label;
+ enum dpll_pin_type type;
+ unsigned long capabilities;
+ u32 freq_supported_num;
+ struct dpll_pin_frequency *freq_supported;
+ struct dpll_pin_phase_adjust_range phase_range;
+};
+
+#if IS_ENABLED(CONFIG_DPLL)
+size_t dpll_msg_pin_handle_size(struct dpll_pin *pin);
+int dpll_msg_add_pin_handle(struct sk_buff *msg, struct dpll_pin *pin);
+#else
+static inline size_t dpll_msg_pin_handle_size(struct dpll_pin *pin)
+{
+ return 0;
+}
+
+static inline int dpll_msg_add_pin_handle(struct sk_buff *msg, struct dpll_pin *pin)
+{
+ return 0;
+}
+#endif
+
+struct dpll_device *
+dpll_device_get(u64 clock_id, u32 dev_driver_id, struct module *module);
+
+void dpll_device_put(struct dpll_device *dpll);
+
+int dpll_device_register(struct dpll_device *dpll, enum dpll_type type,
+ const struct dpll_device_ops *ops, void *priv);
+
+void dpll_device_unregister(struct dpll_device *dpll,
+ const struct dpll_device_ops *ops, void *priv);
+
+struct dpll_pin *
+dpll_pin_get(u64 clock_id, u32 dev_driver_id, struct module *module,
+ const struct dpll_pin_properties *prop);
+
+int dpll_pin_register(struct dpll_device *dpll, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv);
+
+void dpll_pin_unregister(struct dpll_device *dpll, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv);
+
+void dpll_pin_put(struct dpll_pin *pin);
+
+int dpll_pin_on_pin_register(struct dpll_pin *parent, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv);
+
+void dpll_pin_on_pin_unregister(struct dpll_pin *parent, struct dpll_pin *pin,
+ const struct dpll_pin_ops *ops, void *priv);
+
+int dpll_device_change_ntf(struct dpll_device *dpll);
+
+int dpll_pin_change_ntf(struct dpll_pin *pin);
+
+#endif
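
A hedged sketch of how a driver might hook into the DPLL API declared above; the clock id, the DPLL_TYPE_EEC / DPLL_LOCK_STATUS_LOCKED enum values (expected from the uAPI header) and all demo_* names are illustrative assumptions, not taken from this patch.

#include <linux/dpll.h>
#include <linux/err.h>

static int demo_lock_status_get(const struct dpll_device *dpll, void *dpll_priv,
				enum dpll_lock_status *status,
				struct netlink_ext_ack *extack)
{
	*status = DPLL_LOCK_STATUS_LOCKED;	/* assumed uAPI value */
	return 0;
}

static const struct dpll_device_ops demo_dpll_ops = {
	.lock_status_get = demo_lock_status_get,
	/* .mode_get, .mode_supported, .temp_get elided in this sketch */
};

static int demo_dpll_probe(struct module *owner, void *priv)
{
	struct dpll_device *dpll;
	int err;

	/* Look up (or create) the device handle keyed by clock id, then
	 * register it with the subsystem; the id values are illustrative.
	 */
	dpll = dpll_device_get(0x1122334455667788ULL, 0, owner);
	if (IS_ERR(dpll))
		return PTR_ERR(dpll);

	err = dpll_device_register(dpll, DPLL_TYPE_EEC, &demo_dpll_ops, priv);
	if (err)
		dpll_device_put(dpll);

	return err;
}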
diff --git a/include/linux/dsa/sja1105.h b/include/linux/dsa/sja1105.h
index c177322f793d..b9dd35d4b8f5 100644
--- a/include/linux/dsa/sja1105.h
+++ b/include/linux/dsa/sja1105.h
@@ -28,7 +28,7 @@
/* Source and Destination MAC of follow-up meta frames.
* Whereas the choice of SMAC only affects the unique identification of the
* switch as sender of meta frames, the DMAC must be an address that is present
- * in the DSA master port's multicast MAC filter.
+ * in the DSA conduit port's multicast MAC filter.
* 01-80-C2-00-00-0E is a good choice for this, as all profiles of IEEE 1588
* over L2 use this address for some purpose already.
*/
diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
index 62b61527bcc4..226a36ed5aa1 100644
--- a/include/linux/ethtool.h
+++ b/include/linux/ethtool.h
@@ -1052,4 +1052,23 @@ static inline int ethtool_mm_frag_size_min_to_add(u32 val_min, u32 *val_add,
* next string.
*/
extern __printf(2, 3) void ethtool_sprintf(u8 **data, const char *fmt, ...);
+
+/* Link mode to forced speed capabilities maps */
+struct ethtool_forced_speed_map {
+ u32 speed;
+ __ETHTOOL_DECLARE_LINK_MODE_MASK(caps);
+
+ const u32 *cap_arr;
+ u32 arr_size;
+};
+
+#define ETHTOOL_FORCED_SPEED_MAP(prefix, value) \
+{ \
+ .speed = SPEED_##value, \
+ .cap_arr = prefix##_##value, \
+ .arr_size = ARRAY_SIZE(prefix##_##value), \
+}
+
+void
+ethtool_forced_speed_maps_init(struct ethtool_forced_speed_map *maps, u32 size);
#endif /* _LINUX_ETHTOOL_H */
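
An illustrative use of the new forced-speed map helpers (the demo_caps_* tables are assumptions): ETHTOOL_FORCED_SPEED_MAP() ties a SPEED_* value to an array of link-mode bit numbers, and ethtool_forced_speed_maps_init() converts each array into the caps linkmode mask once at init time.

#include <linux/ethtool.h>
#include <linux/kernel.h>

static const u32 demo_caps_1000[] = {
	ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
	ETHTOOL_LINK_MODE_1000baseX_Full_BIT,
};

static const u32 demo_caps_10000[] = {
	ETHTOOL_LINK_MODE_10000baseT_Full_BIT,
	ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
};

static struct ethtool_forced_speed_map demo_speed_maps[] = {
	ETHTOOL_FORCED_SPEED_MAP(demo_caps, 1000),
	ETHTOOL_FORCED_SPEED_MAP(demo_caps, 10000),
};

static void demo_speed_maps_init(void)
{
	/* Fills each map's "caps" linkmode mask from its cap_arr table. */
	ethtool_forced_speed_maps_init(demo_speed_maps,
				       ARRAY_SIZE(demo_speed_maps));
}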
diff --git a/include/linux/filter.h b/include/linux/filter.h
index 761af6b3cf2b..a4953fafc8cb 100644
--- a/include/linux/filter.h
+++ b/include/linux/filter.h
@@ -117,21 +117,25 @@ struct ctl_table_header;
/* ALU ops on immediates, bpf_add|sub|...: dst_reg += imm32 */
-#define BPF_ALU64_IMM(OP, DST, IMM) \
+#define BPF_ALU64_IMM_OFF(OP, DST, IMM, OFF) \
((struct bpf_insn) { \
.code = BPF_ALU64 | BPF_OP(OP) | BPF_K, \
.dst_reg = DST, \
.src_reg = 0, \
- .off = 0, \
+ .off = OFF, \
.imm = IMM })
+#define BPF_ALU64_IMM(OP, DST, IMM) \
+ BPF_ALU64_IMM_OFF(OP, DST, IMM, 0)
-#define BPF_ALU32_IMM(OP, DST, IMM) \
+#define BPF_ALU32_IMM_OFF(OP, DST, IMM, OFF) \
((struct bpf_insn) { \
.code = BPF_ALU | BPF_OP(OP) | BPF_K, \
.dst_reg = DST, \
.src_reg = 0, \
- .off = 0, \
+ .off = OFF, \
.imm = IMM })
+#define BPF_ALU32_IMM(OP, DST, IMM) \
+ BPF_ALU32_IMM_OFF(OP, DST, IMM, 0)
/* Endianess conversion, cpu_to_{l,b}e(), {l,b}e_to_cpu() */
@@ -143,6 +147,16 @@ struct ctl_table_header;
.off = 0, \
.imm = LEN })
+/* Byte Swap, bswap16/32/64 */
+
+#define BPF_BSWAP(DST, LEN) \
+ ((struct bpf_insn) { \
+ .code = BPF_ALU64 | BPF_END | BPF_SRC(BPF_TO_LE), \
+ .dst_reg = DST, \
+ .src_reg = 0, \
+ .off = 0, \
+ .imm = LEN })
+
/* Short form of mov, dst_reg = src_reg */
#define BPF_MOV64_REG(DST, SRC) \
@@ -179,6 +193,24 @@ struct ctl_table_header;
.off = 0, \
.imm = IMM })
+/* Short form of movsx, dst_reg = (s8,s16,s32)src_reg */
+
+#define BPF_MOVSX64_REG(DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_ALU64 | BPF_MOV | BPF_X, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = 0 })
+
+#define BPF_MOVSX32_REG(DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_ALU | BPF_MOV | BPF_X, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = 0 })
+
/* Special form of mov32, used for doing explicit zero extension on dst. */
#define BPF_ZEXT_REG(DST) \
((struct bpf_insn) { \
@@ -263,6 +295,16 @@ static inline bool insn_is_zext(const struct bpf_insn *insn)
.off = OFF, \
.imm = 0 })
+/* Memory load, dst_reg = *(signed size *) (src_reg + off16) */
+
+#define BPF_LDX_MEMSX(SIZE, DST, SRC, OFF) \
+ ((struct bpf_insn) { \
+ .code = BPF_LDX | BPF_SIZE(SIZE) | BPF_MEMSX, \
+ .dst_reg = DST, \
+ .src_reg = SRC, \
+ .off = OFF, \
+ .imm = 0 })
+
/* Memory store, *(uint *) (dst_reg + off16) = src_reg */
#define BPF_STX_MEM(SIZE, DST, SRC, OFF) \
@@ -694,7 +736,7 @@ static inline void bpf_compute_and_save_data_end(
cb->data_end = skb->data + skb_headlen(skb);
}
-/* Restore data saved by bpf_compute_data_pointers(). */
+/* Restore data saved by bpf_compute_and_save_data_end(). */
static inline void bpf_restore_data_end(
struct sk_buff *skb, void *saved_data_end)
{
@@ -912,6 +954,8 @@ bool bpf_jit_needs_zext(void);
bool bpf_jit_supports_subprog_tailcalls(void);
bool bpf_jit_supports_kfunc_call(void);
bool bpf_jit_supports_far_kfunc_call(void);
+bool bpf_jit_supports_exceptions(void);
+void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp), void *cookie);
bool bpf_helper_changes_pkt_data(void *func);
static inline bool bpf_dump_raw_ok(const struct cred *cred)
@@ -981,12 +1025,6 @@ int xdp_do_redirect_frame(struct net_device *dev,
struct bpf_prog *prog);
void xdp_do_flush(void);
-/* The xdp_do_flush_map() helper has been renamed to drop the _map suffix, as
- * it is no longer only flushing maps. Keep this define for compatibility
- * until all drivers are updated - do not use xdp_do_flush_map() in new code!
- */
-#define xdp_do_flush_map xdp_do_flush
-
void bpf_warn_invalid_xdp_action(struct net_device *dev, struct bpf_prog *prog, u32 act);
#ifdef CONFIG_INET
@@ -1127,6 +1165,7 @@ const char *__bpf_address_lookup(unsigned long addr, unsigned long *size,
bool is_bpf_text_address(unsigned long addr);
int bpf_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
char *sym);
+struct bpf_prog *bpf_prog_ksym_find(unsigned long addr);
static inline const char *
bpf_address_lookup(unsigned long addr, unsigned long *size,
@@ -1194,6 +1233,11 @@ static inline int bpf_get_kallsym(unsigned int symnum, unsigned long *value,
return -ERANGE;
}
+static inline struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
+{
+ return NULL;
+}
+
static inline const char *
bpf_address_lookup(unsigned long addr, unsigned long *size,
unsigned long *off, char **modname, char *sym)
@@ -1285,6 +1329,7 @@ struct bpf_sock_addr_kern {
*/
u64 tmp_reg;
void *t_ctx; /* Attach type specific context. */
+ u32 uaddrlen;
};
struct bpf_sock_ops_kern {
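
A small, test-style snippet (not from this patch) exercising the new instruction macros above; register choices and offsets are arbitrary.

#include <linux/filter.h>

static const struct bpf_insn demo_insns[] = {
	/* r1 = *(s8 *)(r2 + 0), sign-extending load (BPF_MEMSX) */
	BPF_LDX_MEMSX(BPF_B, BPF_REG_1, BPF_REG_2, 0),
	/* r3 = (s32)r1, sign-extending register move (off encodes width) */
	BPF_MOVSX64_REG(BPF_REG_3, BPF_REG_1, 32),
	/* r3 = bswap64(r3), unconditional byte swap (BPF v4) */
	BPF_BSWAP(BPF_REG_3, 64),
	BPF_EXIT_INSN(),
};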
diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
index da51a83b2829..1e7711185ec6 100644
--- a/include/linux/fortify-string.h
+++ b/include/linux/fortify-string.h
@@ -93,13 +93,9 @@ extern char *__underlying_strncpy(char *p, const char *q, __kernel_size_t size)
#if __has_builtin(__builtin_dynamic_object_size)
#define POS __pass_dynamic_object_size(1)
#define POS0 __pass_dynamic_object_size(0)
-#define __struct_size(p) __builtin_dynamic_object_size(p, 0)
-#define __member_size(p) __builtin_dynamic_object_size(p, 1)
#else
#define POS __pass_object_size(1)
#define POS0 __pass_object_size(0)
-#define __struct_size(p) __builtin_object_size(p, 0)
-#define __member_size(p) __builtin_object_size(p, 1)
#endif
#define __compiletime_lessthan(bounds, length) ( \
diff --git a/include/linux/i3c/master.h b/include/linux/i3c/master.h
index 0b52da4f2346..db909ef79be4 100644
--- a/include/linux/i3c/master.h
+++ b/include/linux/i3c/master.h
@@ -24,6 +24,12 @@
struct i2c_client;
+/* notifier actions. notifier call data is the struct i3c_bus */
+enum {
+ I3C_NOTIFY_BUS_ADD,
+ I3C_NOTIFY_BUS_REMOVE,
+};
+
struct i3c_master_controller;
struct i3c_bus;
struct i3c_device;
@@ -652,4 +658,9 @@ void i3c_master_queue_ibi(struct i3c_dev_desc *dev, struct i3c_ibi_slot *slot);
struct i3c_ibi_slot *i3c_master_get_free_ibi_slot(struct i3c_dev_desc *dev);
+void i3c_for_each_bus_locked(int (*fn)(struct i3c_bus *bus, void *data),
+ void *data);
+int i3c_register_notifier(struct notifier_block *nb);
+int i3c_unregister_notifier(struct notifier_block *nb);
+
#endif /* I3C_MASTER_H */
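
A hedged sketch of a consumer of the new I3C bus notifier (the demo_* names are hypothetical); per the comment above, the notifier call data is the struct i3c_bus being added or removed.

#include <linux/i3c/master.h>
#include <linux/notifier.h>
#include <linux/printk.h>
#include <linux/init.h>

static int demo_i3c_bus_event(struct notifier_block *nb,
			      unsigned long action, void *data)
{
	/* "data" is the struct i3c_bus that was added or removed. */
	if (action == I3C_NOTIFY_BUS_ADD)
		pr_info("i3c bus added\n");
	else if (action == I3C_NOTIFY_BUS_REMOVE)
		pr_info("i3c bus removed\n");

	return NOTIFY_OK;
}

static struct notifier_block demo_i3c_nb = {
	.notifier_call = demo_i3c_bus_event,
};

static int __init demo_i3c_init(void)
{
	return i3c_register_notifier(&demo_i3c_nb);
}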
diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
index b24fb80782c5..958771bac9c0 100644
--- a/include/linux/ieee80211.h
+++ b/include/linux/ieee80211.h
@@ -307,6 +307,13 @@ static inline u16 ieee80211_sn_sub(u16 sn1, u16 sn2)
#define IEEE80211_TRIGGER_TYPE_BQRP 0x6
#define IEEE80211_TRIGGER_TYPE_NFRP 0x7
+/* UL-bandwidth within common_info of trigger frame */
+#define IEEE80211_TRIGGER_ULBW_MASK 0xc0000
+#define IEEE80211_TRIGGER_ULBW_20MHZ 0x0
+#define IEEE80211_TRIGGER_ULBW_40MHZ 0x1
+#define IEEE80211_TRIGGER_ULBW_80MHZ 0x2
+#define IEEE80211_TRIGGER_ULBW_160_80P80MHZ 0x3
+
struct ieee80211_hdr {
__le16 frame_control;
__le16 duration_id;
@@ -951,17 +958,24 @@ struct ieee80211_wide_bw_chansw_ie {
* @dtim_count: DTIM Count
* @dtim_period: DTIM Period
* @bitmap_ctrl: Bitmap Control
+ * @required_octet: "Syntactic sugar" to force the struct size to the

+ * minimum valid size when carried in a non-S1G PPDU
* @virtual_map: Partial Virtual Bitmap
*
* This structure represents the payload of the "TIM element" as
- * described in IEEE Std 802.11-2020 section 9.4.2.5.
+ * described in IEEE Std 802.11-2020 section 9.4.2.5. Note that this
+ * definition is only applicable when the element is carried in a
+ * non-S1G PPDU. When the TIM is carried in an S1G PPDU, the Bitmap
+ * Control and Partial Virtual Bitmap may not be present.
*/
struct ieee80211_tim_ie {
u8 dtim_count;
u8 dtim_period;
u8 bitmap_ctrl;
- /* variable size: 1 - 251 bytes */
- u8 virtual_map[1];
+ union {
+ u8 required_octet;
+ DECLARE_FLEX_ARRAY(u8, virtual_map);
+ };
} __packed;
/**
@@ -1239,6 +1253,30 @@ struct ieee80211_twt_setup {
u8 params[];
} __packed;
+#define IEEE80211_TTLM_MAX_CNT 2
+#define IEEE80211_TTLM_CONTROL_DIRECTION 0x03
+#define IEEE80211_TTLM_CONTROL_DEF_LINK_MAP 0x04
+#define IEEE80211_TTLM_CONTROL_SWITCH_TIME_PRESENT 0x08
+#define IEEE80211_TTLM_CONTROL_EXPECTED_DUR_PRESENT 0x10
+#define IEEE80211_TTLM_CONTROL_LINK_MAP_SIZE 0x20
+
+#define IEEE80211_TTLM_DIRECTION_DOWN 0
+#define IEEE80211_TTLM_DIRECTION_UP 1
+#define IEEE80211_TTLM_DIRECTION_BOTH 2
+
+/**
+ * struct ieee80211_ttlm_elem - TID-To-Link Mapping element
+ *
+ * Defined in section 9.4.2.314 in P802.11be_D4
+ *
+ * @control: the first part of control field
+ * @optional: the second part of control field
+ */
+struct ieee80211_ttlm_elem {
+ u8 control;
+ u8 optional[];
+} __packed;
+
struct ieee80211_mgmt {
__le16 frame_control;
__le16 duration;
@@ -1674,6 +1712,8 @@ struct ieee80211_mcs_info {
#define IEEE80211_HT_MCS_TX_MAX_STREAMS 4
#define IEEE80211_HT_MCS_TX_UNEQUAL_MODULATION 0x10
+#define IEEE80211_HT_MCS_CHAINS(mcs) ((mcs) == 32 ? 1 : (1 + ((mcs) >> 3)))
+
/*
* 802.11n D5.0 20.3.5 / 20.6 says:
* - indices 0 to 7 and 32 are single spatial stream
@@ -3132,6 +3172,28 @@ ieee80211_eht_oper_size_ok(const u8 *data, u8 len)
return len >= needed;
}
+#define IEEE80211_BW_IND_DIS_SUBCH_PRESENT BIT(1)
+
+struct ieee80211_bandwidth_indication {
+ u8 params;
+ struct ieee80211_eht_operation_info info;
+} __packed;
+
+static inline bool
+ieee80211_bandwidth_indication_size_ok(const u8 *data, u8 len)
+{
+ const struct ieee80211_bandwidth_indication *bwi = (const void *)data;
+
+ if (len < sizeof(*bwi))
+ return false;
+
+ if (bwi->params & IEEE80211_BW_IND_DIS_SUBCH_PRESENT &&
+ len < sizeof(*bwi) + 2)
+ return false;
+
+ return true;
+}
+
#define LISTEN_INT_USF GENMASK(15, 14)
#define LISTEN_INT_UI GENMASK(13, 0)
@@ -3589,6 +3651,8 @@ enum ieee80211_eid_ext {
WLAN_EID_EXT_EHT_OPERATION = 106,
WLAN_EID_EXT_EHT_MULTI_LINK = 107,
WLAN_EID_EXT_EHT_CAPABILITY = 108,
+ WLAN_EID_EXT_TID_TO_LINK_MAPPING = 109,
+ WLAN_EID_EXT_BANDWIDTH_INDICATION = 135,
};
/* Action category code */
@@ -4455,12 +4519,11 @@ static inline bool ieee80211_check_tim(const struct ieee80211_tim_ie *tim,
/**
* ieee80211_get_tdls_action - get tdls packet action (or -1, if not tdls packet)
* @skb: the skb containing the frame, length will not be checked
- * @hdr_size: the size of the ieee80211_hdr that starts at skb->data
*
* This function assumes the frame is a data frame, and that the network header
* is in the correct place.
*/
-static inline int ieee80211_get_tdls_action(struct sk_buff *skb, u32 hdr_size)
+static inline int ieee80211_get_tdls_action(struct sk_buff *skb)
{
if (!skb_is_nonlinear(skb) &&
skb->len > (skb_network_offset(skb) + 2)) {
@@ -5154,6 +5217,39 @@ static inline bool ieee80211_mle_reconf_sta_prof_size_ok(const u8 *data,
fixed + prof->sta_info_len - 1 <= len;
}
+static inline bool ieee80211_tid_to_link_map_size_ok(const u8 *data, size_t len)
+{
+ const struct ieee80211_ttlm_elem *t2l = (const void *)data;
+ u8 control, fixed = sizeof(*t2l), elem_len = 0;
+
+ if (len < fixed)
+ return false;
+
+ control = t2l->control;
+
+ if (control & IEEE80211_TTLM_CONTROL_SWITCH_TIME_PRESENT)
+ elem_len += 2;
+ if (control & IEEE80211_TTLM_CONTROL_EXPECTED_DUR_PRESENT)
+ elem_len += 3;
+
+ if (!(control & IEEE80211_TTLM_CONTROL_DEF_LINK_MAP)) {
+ u8 bm_size;
+
+ elem_len += 1;
+ if (len < fixed + elem_len)
+ return false;
+
+ if (control & IEEE80211_TTLM_CONTROL_LINK_MAP_SIZE)
+ bm_size = 1;
+ else
+ bm_size = 2;
+
+ elem_len += hweight8(t2l->optional[0]) * bm_size;
+ }
+
+ return len >= fixed + elem_len;
+}
+
#define for_each_mle_subelement(_elem, _data, _len) \
if (ieee80211_mle_size_ok(_data, _len)) \
for_each_element(_elem, \
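
A minimal sketch (hypothetical parser code, not from the patch) of how the new TID-to-link mapping helpers might be used: validate the element body with ieee80211_tid_to_link_map_size_ok() before reading its control bits.

#include <linux/ieee80211.h>

static bool demo_parse_ttlm(const u8 *data, size_t len, u8 *direction)
{
	const struct ieee80211_ttlm_elem *ttlm = (const void *)data;

	/* data/len point at the element body (past element ID and length) */
	if (!ieee80211_tid_to_link_map_size_ok(data, len))
		return false;

	*direction = ttlm->control & IEEE80211_TTLM_CONTROL_DIRECTION;
	return true;
}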
diff --git a/include/linux/igmp.h b/include/linux/igmp.h
index ebf4349a53af..5171231f70a8 100644
--- a/include/linux/igmp.h
+++ b/include/linux/igmp.h
@@ -39,7 +39,7 @@ struct ip_sf_socklist {
unsigned int sl_max;
unsigned int sl_count;
struct rcu_head rcu;
- __be32 sl_addr[];
+ __be32 sl_addr[] __counted_by(sl_max);
};
#define IP_SFBLOCK 10 /* allocate this many at once */
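
For context, a sketch of the allocation pattern the new __counted_by(sl_max) annotation expects (hypothetical helper): sl_max must describe how many sl_addr[] slots were actually allocated so that bounds checks can use it.

#include <linux/igmp.h>
#include <linux/overflow.h>
#include <linux/slab.h>

static struct ip_sf_socklist *demo_alloc_socklist(unsigned int max)
{
	struct ip_sf_socklist *sl;

	sl = kzalloc(struct_size(sl, sl_addr, max), GFP_KERNEL);
	if (sl)
		sl->sl_max = max;	/* keep the counter in sync */

	return sl;
}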
diff --git a/include/linux/ipv6.h b/include/linux/ipv6.h
index af8a771a053c..5e605e384aac 100644
--- a/include/linux/ipv6.h
+++ b/include/linux/ipv6.h
@@ -82,6 +82,7 @@ struct ipv6_devconf {
__u32 ioam6_id_wide;
__u8 ioam6_enabled;
__u8 ndisc_evict_nocarrier;
+ __u8 ra_honor_pio_life;
struct ctl_table_header *sysctl_header;
};
@@ -213,28 +214,9 @@ struct ipv6_pinfo {
__be32 flow_label;
__u32 frag_size;
- /*
- * Packed in 16bits.
- * Omit one shift by putting the signed field at MSB.
- */
-#if defined(__BIG_ENDIAN_BITFIELD)
- __s16 hop_limit:9;
- __u16 __unused_1:7;
-#else
- __u16 __unused_1:7;
- __s16 hop_limit:9;
-#endif
+ s16 hop_limit;
+ u8 mcast_hops;
-#if defined(__BIG_ENDIAN_BITFIELD)
- /* Packed in 16bits. */
- __s16 mcast_hops:9;
- __u16 __unused_2:6,
- mc_loop:1;
-#else
- __u16 mc_loop:1,
- __unused_2:6;
- __s16 mcast_hops:9;
-#endif
int ucast_oif;
int mcast_oif;
@@ -262,21 +244,11 @@ struct ipv6_pinfo {
} rxopt;
/* sockopt flags */
- __u16 recverr:1,
- sndflow:1,
- repflow:1,
- pmtudisc:3,
- padding:1, /* 1 bit hole */
- srcprefs:3, /* 001: prefer temporary address
+ __u8 srcprefs; /* 001: prefer temporary address
* 010: prefer public address
* 100: prefer care-of address
*/
- dontfrag:1,
- autoflowlabel:1,
- autoflowlabel_set:1,
- mc_all:1,
- recverr_rfc4884:1,
- rtalert_isolate:1;
+ __u8 pmtudisc;
__u8 min_hopcount;
__u8 tclass;
__be32 rcv_flowinfo;
@@ -293,6 +265,18 @@ struct ipv6_pinfo {
struct inet6_cork cork;
};
+/* We currently use available bits from inet_sk(sk)->inet_flags,
+ * this could change in the future.
+ */
+#define inet6_test_bit(nr, sk) \
+ test_bit(INET_FLAGS_##nr, &inet_sk(sk)->inet_flags)
+#define inet6_set_bit(nr, sk) \
+ set_bit(INET_FLAGS_##nr, &inet_sk(sk)->inet_flags)
+#define inet6_clear_bit(nr, sk) \
+ clear_bit(INET_FLAGS_##nr, &inet_sk(sk)->inet_flags)
+#define inet6_assign_bit(nr, sk, val) \
+ assign_bit(INET_FLAGS_##nr, &inet_sk(sk)->inet_flags, val)
+
/* WARNING: don't change the layout of the members in {raw,udp,tcp}6_sock! */
struct raw6_sock {
/* inet_sock has to be the first member of raw6_sock */
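
A hedged sketch of the new accessor macros: per the comment above, the former ipv6_pinfo bitfields now live in inet_sk(sk)->inet_flags. The SNDFLOW flag name (i.e. an INET_FLAGS_SNDFLOW enum entry) and the demo helper are assumptions.

#include <linux/ipv6.h>
#include <linux/printk.h>
#include <net/inet_sock.h>

static void demo_toggle_sndflow(struct sock *sk, bool on)
{
	/* Atomically set or clear the per-socket flag bit. */
	inet6_assign_bit(SNDFLOW, sk, on);

	if (inet6_test_bit(SNDFLOW, sk))
		pr_debug("flow labels enabled on this socket\n");
}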
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 71fa9a40fb5a..72cb693b075b 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -285,8 +285,10 @@ static inline bool kasan_check_byte(const void *address)
#if defined(CONFIG_KASAN) && defined(CONFIG_KASAN_STACK)
void kasan_unpoison_task_stack(struct task_struct *task);
+asmlinkage void kasan_unpoison_task_stack_below(const void *watermark);
#else
static inline void kasan_unpoison_task_stack(struct task_struct *task) {}
+static inline void kasan_unpoison_task_stack_below(const void *watermark) {}
#endif
#ifdef CONFIG_KASAN_GENERIC
diff --git a/include/linux/linkmode.h b/include/linux/linkmode.h
index 15e0e0209da4..7303b4bc2ce0 100644
--- a/include/linux/linkmode.h
+++ b/include/linux/linkmode.h
@@ -43,15 +43,6 @@ static inline void linkmode_set_bit(int nr, volatile unsigned long *addr)
__set_bit(nr, addr);
}
-static inline void linkmode_set_bit_array(const int *array, int array_size,
- unsigned long *addr)
-{
- int i;
-
- for (i = 0; i < array_size; i++)
- linkmode_set_bit(array[i], addr);
-}
-
static inline void linkmode_clear_bit(int nr, volatile unsigned long *addr)
{
__clear_bit(nr, addr);
@@ -71,6 +62,15 @@ static inline int linkmode_test_bit(int nr, const volatile unsigned long *addr)
return test_bit(nr, addr);
}
+static inline void linkmode_set_bit_array(const int *array, int array_size,
+ unsigned long *addr)
+{
+ int i;
+
+ for (i = 0; i < array_size; i++)
+ linkmode_set_bit(array[i], addr);
+}
+
static inline int linkmode_equal(const unsigned long *src1,
const unsigned long *src2)
{
diff --git a/include/linux/micrel_phy.h b/include/linux/micrel_phy.h
index 4e27ca7c49de..591bf5b5e8dc 100644
--- a/include/linux/micrel_phy.h
+++ b/include/linux/micrel_phy.h
@@ -64,6 +64,10 @@
#define KSZ886X_BMCR_DISABLE_TRANSMIT BIT(1)
#define KSZ886X_BMCR_DISABLE_LED BIT(0)
+/* PHY Special Control/Status Register (Reg 31) */
#define KSZ886X_CTRL_MDIX_STAT BIT(4)
+#define KSZ886X_CTRL_FORCE_LINK BIT(3)
+#define KSZ886X_CTRL_PWRSAVE BIT(2)
+#define KSZ886X_CTRL_REMOTE_LOOPBACK BIT(1)
#endif /* _MICREL_PHY_H */
diff --git a/include/linux/mlx5/device.h b/include/linux/mlx5/device.h
index 4d5be378fa8c..820bca965fb6 100644
--- a/include/linux/mlx5/device.h
+++ b/include/linux/mlx5/device.h
@@ -366,6 +366,9 @@ enum mlx5_driver_event {
MLX5_DRIVER_EVENT_UPLINK_NETDEV,
MLX5_DRIVER_EVENT_MACSEC_SA_ADDED,
MLX5_DRIVER_EVENT_MACSEC_SA_DELETED,
+ MLX5_DRIVER_EVENT_SF_PEER_DEVLINK,
+ MLX5_DRIVER_EVENT_AFFILIATION_DONE,
+ MLX5_DRIVER_EVENT_AFFILIATION_REMOVED,
};
enum {
diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
index 3033bbaeac81..d2b8d4a74a30 100644
--- a/include/linux/mlx5/driver.h
+++ b/include/linux/mlx5/driver.h
@@ -155,6 +155,8 @@ enum {
MLX5_REG_MCC = 0x9062,
MLX5_REG_MCDA = 0x9063,
MLX5_REG_MCAM = 0x907f,
+ MLX5_REG_MSECQ = 0x9155,
+ MLX5_REG_MSEES = 0x9156,
MLX5_REG_MIRC = 0x9162,
MLX5_REG_SBCAM = 0xB01F,
MLX5_REG_RESOURCE_DUMP = 0xC000,
@@ -613,6 +615,7 @@ struct mlx5_priv {
int adev_idx;
int sw_vhca_id;
struct mlx5_events *events;
+ struct mlx5_vhca_events *vhca_events;
struct mlx5_flow_steering *steering;
struct mlx5_mpfs *mpfs;
@@ -621,6 +624,7 @@ struct mlx5_priv {
struct mlx5_lag *lag;
u32 flags;
struct mlx5_devcom_dev *devc;
+ struct mlx5_devcom_comp_dev *hca_devcom_comp;
struct mlx5_fw_reset *fw_reset;
struct mlx5_core_roce roce;
struct mlx5_fc_stats fc_stats;
@@ -1027,6 +1031,8 @@ bool mlx5_cmd_is_down(struct mlx5_core_dev *dev);
void mlx5_core_uplink_netdev_set(struct mlx5_core_dev *mdev, struct net_device *netdev);
void mlx5_core_uplink_netdev_event_replay(struct mlx5_core_dev *mdev);
+void mlx5_core_mp_event_replay(struct mlx5_core_dev *dev, u32 event, void *data);
+
void mlx5_health_cleanup(struct mlx5_core_dev *dev);
int mlx5_health_init(struct mlx5_core_dev *dev);
void mlx5_start_health_poll(struct mlx5_core_dev *dev);
@@ -1037,10 +1043,6 @@ void mlx5_trigger_health_work(struct mlx5_core_dev *dev);
int mlx5_frag_buf_alloc_node(struct mlx5_core_dev *dev, int size,
struct mlx5_frag_buf *buf, int node);
void mlx5_frag_buf_free(struct mlx5_core_dev *dev, struct mlx5_frag_buf *buf);
-struct mlx5_cmd_mailbox *mlx5_alloc_cmd_mailbox_chain(struct mlx5_core_dev *dev,
- gfp_t flags, int npages);
-void mlx5_free_cmd_mailbox_chain(struct mlx5_core_dev *dev,
- struct mlx5_cmd_mailbox *head);
int mlx5_core_create_mkey(struct mlx5_core_dev *dev, u32 *mkey, u32 *in,
int inlen);
int mlx5_core_destroy_mkey(struct mlx5_core_dev *dev, u32 mkey);
@@ -1054,8 +1056,6 @@ void mlx5_pagealloc_start(struct mlx5_core_dev *dev);
void mlx5_pagealloc_stop(struct mlx5_core_dev *dev);
void mlx5_pages_debugfs_init(struct mlx5_core_dev *dev);
void mlx5_pages_debugfs_cleanup(struct mlx5_core_dev *dev);
-void mlx5_core_req_pages_handler(struct mlx5_core_dev *dev, u16 func_id,
- s32 npages, bool ec_function);
int mlx5_satisfy_startup_pages(struct mlx5_core_dev *dev, int boot);
int mlx5_reclaim_startup_pages(struct mlx5_core_dev *dev);
void mlx5_register_debugfs(void);
@@ -1095,8 +1095,6 @@ int mlx5_core_create_psv(struct mlx5_core_dev *dev, u32 pdn,
int mlx5_core_destroy_psv(struct mlx5_core_dev *dev, int psv_num);
__be32 mlx5_core_get_terminate_scatter_list_mkey(struct mlx5_core_dev *dev);
void mlx5_core_put_rsc(struct mlx5_core_rsc_common *common);
-int mlx5_query_odp_caps(struct mlx5_core_dev *dev,
- struct mlx5_odp_caps *odp_caps);
int mlx5_init_rl_table(struct mlx5_core_dev *dev);
void mlx5_cleanup_rl_table(struct mlx5_core_dev *dev);
@@ -1198,12 +1196,6 @@ int mlx5_sriov_blocking_notifier_register(struct mlx5_core_dev *mdev,
void mlx5_sriov_blocking_notifier_unregister(struct mlx5_core_dev *mdev,
int vf_id,
struct notifier_block *nb);
-#ifdef CONFIG_MLX5_CORE_IPOIB
-struct net_device *mlx5_rdma_netdev_alloc(struct mlx5_core_dev *mdev,
- struct ib_device *ibdev,
- const char *name,
- void (*setup)(struct net_device *));
-#endif /* CONFIG_MLX5_CORE_IPOIB */
int mlx5_rdma_rn_get_params(struct mlx5_core_dev *mdev,
struct ib_device *device,
struct rdma_netdev_alloc_params *params);
diff --git a/include/linux/mlx5/fs.h b/include/linux/mlx5/fs.h
index 1e00c2436377..6f7725238abc 100644
--- a/include/linux/mlx5/fs.h
+++ b/include/linux/mlx5/fs.h
@@ -67,6 +67,7 @@ enum {
MLX5_FLOW_TABLE_TERMINATION = BIT(2),
MLX5_FLOW_TABLE_UNMANAGED = BIT(3),
MLX5_FLOW_TABLE_OTHER_VPORT = BIT(4),
+ MLX5_FLOW_TABLE_UPLINK_VPORT = BIT(5),
};
#define LEFTOVERS_RULE_NUM 2
diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
index fc3db401f8a2..4df6d1c12437 100644
--- a/include/linux/mlx5/mlx5_ifc.h
+++ b/include/linux/mlx5/mlx5_ifc.h
@@ -312,6 +312,7 @@ enum {
MLX5_CMD_OP_QUERY_VHCA_STATE = 0xb0d,
MLX5_CMD_OP_MODIFY_VHCA_STATE = 0xb0e,
MLX5_CMD_OP_SYNC_CRYPTO = 0xb12,
+ MLX5_CMD_OP_ALLOW_OTHER_VHCA_ACCESS = 0xb16,
MLX5_CMD_OP_MAX
};
@@ -1934,6 +1935,14 @@ struct mlx5_ifc_cmd_hca_cap_bits {
u8 match_definer_format_supported[0x40];
};
+enum {
+ MLX5_CROSS_VHCA_OBJ_TO_OBJ_SUPPORTED_LOCAL_FLOW_TABLE_TO_REMOTE_FLOW_TABLE_MISS = 0x80000,
+};
+
+enum {
+ MLX5_ALLOWED_OBJ_FOR_OTHER_VHCA_ACCESS_FLOW_TABLE = 0x200,
+};
+
struct mlx5_ifc_cmd_hca_cap_2_bits {
u8 reserved_at_0[0x80];
@@ -1948,9 +1957,15 @@ struct mlx5_ifc_cmd_hca_cap_2_bits {
u8 reserved_at_c0[0x8];
u8 migration_multi_load[0x1];
u8 migration_tracking_state[0x1];
- u8 reserved_at_ca[0x16];
+ u8 reserved_at_ca[0x6];
+ u8 migration_in_chunks[0x1];
+ u8 reserved_at_d1[0xf];
+
+ u8 cross_vhca_object_to_object_supported[0x20];
+
+ u8 allowed_object_for_other_vhca_access[0x40];
- u8 reserved_at_e0[0xc0];
+ u8 reserved_at_140[0x60];
u8 flow_table_type_2_type[0x8];
u8 reserved_at_1a8[0x3];
@@ -6369,6 +6384,28 @@ struct mlx5_ifc_general_obj_out_cmd_hdr_bits {
u8 reserved_at_60[0x20];
};
+struct mlx5_ifc_allow_other_vhca_access_in_bits {
+ u8 opcode[0x10];
+ u8 uid[0x10];
+ u8 reserved_at_20[0x10];
+ u8 op_mod[0x10];
+ u8 reserved_at_40[0x50];
+ u8 object_type_to_be_accessed[0x10];
+ u8 object_id_to_be_accessed[0x20];
+ u8 reserved_at_c0[0x40];
+ union {
+ u8 access_key_raw[0x100];
+ u8 access_key[8][0x20];
+ };
+};
+
+struct mlx5_ifc_allow_other_vhca_access_out_bits {
+ u8 status[0x8];
+ u8 reserved_at_8[0x18];
+ u8 syndrome[0x20];
+ u8 reserved_at_40[0x40];
+};
+
struct mlx5_ifc_modify_header_arg_bits {
u8 reserved_at_0[0x80];
@@ -6391,6 +6428,24 @@ struct mlx5_ifc_create_match_definer_out_bits {
struct mlx5_ifc_general_obj_out_cmd_hdr_bits general_obj_out_cmd_hdr;
};
+struct mlx5_ifc_alias_context_bits {
+ u8 vhca_id_to_be_accessed[0x10];
+ u8 reserved_at_10[0xd];
+ u8 status[0x3];
+ u8 object_id_to_be_accessed[0x20];
+ u8 reserved_at_40[0x40];
+ union {
+ u8 access_key_raw[0x100];
+ u8 access_key[8][0x20];
+ };
+ u8 metadata[0x80];
+};
+
+struct mlx5_ifc_create_alias_obj_in_bits {
+ struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+ struct mlx5_ifc_alias_context_bits alias_ctx;
+};
+
enum {
MLX5_QUERY_FLOW_GROUP_OUT_MATCH_CRITERIA_ENABLE_OUTER_HEADERS = 0x0,
MLX5_QUERY_FLOW_GROUP_OUT_MATCH_CRITERIA_ENABLE_MISC_PARAMETERS = 0x1,
@@ -10176,7 +10231,9 @@ struct mlx5_ifc_mcam_access_reg_bits2 {
u8 mirc[0x1];
u8 regs_97_to_96[0x2];
- u8 regs_95_to_64[0x20];
+ u8 regs_95_to_87[0x09];
+ u8 synce_registers[0x2];
+ u8 regs_84_to_64[0x15];
u8 regs_63_to_32[0x20];
@@ -10572,6 +10629,7 @@ enum {
MLX5_INITIAL_SEG_HEALTH_SYNDROME_EQ_INV = 0xe,
MLX5_INITIAL_SEG_HEALTH_SYNDROME_FFSER_ERR = 0xf,
MLX5_INITIAL_SEG_HEALTH_SYNDROME_HIGH_TEMP_ERR = 0x10,
+ MLX5_INITIAL_SEG_HEALTH_SYNDROME_ICM_PCI_POISONED_ERR = 0x12,
};
struct mlx5_ifc_initial_seg_bits {
@@ -11917,6 +11975,7 @@ enum {
MLX5_GENERAL_OBJECT_TYPES_FLOW_METER_ASO = 0x24,
MLX5_GENERAL_OBJECT_TYPES_MACSEC = 0x27,
MLX5_GENERAL_OBJECT_TYPES_INT_KEK = 0x47,
+ MLX5_GENERAL_OBJECT_TYPES_FLOW_TABLE_ALIAS = 0xff15,
};
enum {
@@ -12392,7 +12451,8 @@ struct mlx5_ifc_query_vhca_migration_state_in_bits {
u8 op_mod[0x10];
u8 incremental[0x1];
- u8 reserved_at_41[0xf];
+ u8 chunk[0x1];
+ u8 reserved_at_42[0xe];
u8 vhca_id[0x10];
u8 reserved_at_60[0x20];
@@ -12408,7 +12468,11 @@ struct mlx5_ifc_query_vhca_migration_state_out_bits {
u8 required_umem_size[0x20];
- u8 reserved_at_a0[0x160];
+ u8 reserved_at_a0[0x20];
+
+ u8 remaining_total_size[0x40];
+
+ u8 reserved_at_100[0x100];
};
struct mlx5_ifc_save_vhca_state_in_bits {
@@ -12440,7 +12504,7 @@ struct mlx5_ifc_save_vhca_state_out_bits {
u8 actual_image_size[0x20];
- u8 reserved_at_60[0x20];
+ u8 next_required_umem_size[0x20];
};
struct mlx5_ifc_load_vhca_state_in_bits {
@@ -12549,4 +12613,59 @@ struct mlx5_ifc_modify_page_track_obj_in_bits {
struct mlx5_ifc_page_track_bits obj_context;
};
+struct mlx5_ifc_msecq_reg_bits {
+ u8 reserved_at_0[0x20];
+
+ u8 reserved_at_20[0x12];
+ u8 network_option[0x2];
+ u8 local_ssm_code[0x4];
+ u8 local_enhanced_ssm_code[0x8];
+
+ u8 local_clock_identity[0x40];
+
+ u8 reserved_at_80[0x180];
+};
+
+enum {
+ MLX5_MSEES_FIELD_SELECT_ENABLE = BIT(0),
+ MLX5_MSEES_FIELD_SELECT_ADMIN_STATUS = BIT(1),
+ MLX5_MSEES_FIELD_SELECT_ADMIN_FREQ_MEASURE = BIT(2),
+};
+
+enum mlx5_msees_admin_status {
+ MLX5_MSEES_ADMIN_STATUS_FREE_RUNNING = 0x0,
+ MLX5_MSEES_ADMIN_STATUS_TRACK = 0x1,
+};
+
+enum mlx5_msees_oper_status {
+ MLX5_MSEES_OPER_STATUS_FREE_RUNNING = 0x0,
+ MLX5_MSEES_OPER_STATUS_SELF_TRACK = 0x1,
+ MLX5_MSEES_OPER_STATUS_OTHER_TRACK = 0x2,
+ MLX5_MSEES_OPER_STATUS_HOLDOVER = 0x3,
+ MLX5_MSEES_OPER_STATUS_FAIL_HOLDOVER = 0x4,
+ MLX5_MSEES_OPER_STATUS_FAIL_FREE_RUNNING = 0x5,
+};
+
+struct mlx5_ifc_msees_reg_bits {
+ u8 reserved_at_0[0x8];
+ u8 local_port[0x8];
+ u8 pnat[0x2];
+ u8 lp_msb[0x2];
+ u8 reserved_at_14[0xc];
+
+ u8 field_select[0x20];
+
+ u8 admin_status[0x4];
+ u8 oper_status[0x4];
+ u8 ho_acq[0x1];
+ u8 reserved_at_49[0xc];
+ u8 admin_freq_measure[0x1];
+ u8 oper_freq_measure[0x1];
+ u8 failure_reason[0x9];
+
+ u8 frequency_diff[0x20];
+
+ u8 reserved_at_80[0x180];
+};
+
#endif /* MLX5_IFC_H */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 589f31ef2e84..4be8e310b189 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -125,18 +125,7 @@ struct page {
struct page_pool *pp;
unsigned long _pp_mapping_pad;
unsigned long dma_addr;
- union {
- /**
- * dma_addr_upper: might require a 64-bit
- * value on 32-bit architectures.
- */
- unsigned long dma_addr_upper;
- /**
- * For frag page support, not supported in
- * 32-bit architectures with 64-bit DMA.
- */
- atomic_long_t pp_frag_count;
- };
+ atomic_long_t pp_frag_count;
};
struct { /* Tail pages of compound page */
unsigned long compound_head; /* Bit zero is set */
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 0896aaa91dd7..a16c9cc063fe 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -79,6 +79,8 @@ struct xdp_buff;
struct xdp_frame;
struct xdp_metadata_ops;
struct xdp_md;
+/* DPLL specific */
+struct dpll_pin;
typedef u32 xdp_features_t;
@@ -480,6 +482,29 @@ static inline bool napi_prefer_busy_poll(struct napi_struct *n)
return test_bit(NAPI_STATE_PREFER_BUSY_POLL, &n->state);
}
+/**
+ * napi_is_scheduled - test if NAPI is scheduled
+ * @n: NAPI context
+ *
+ * This check is "best-effort": with no locking implemented, a NAPI
+ * can be scheduled or can terminate right after this check, so the
+ * result may not be precise.
+ *
+ * NAPI_STATE_SCHED is an internal state; napi_is_scheduled() should
+ * not normally be used, and napi_schedule() should be used instead.
+ *
+ * Use this only if the driver really needs to check whether a NAPI
+ * is scheduled, for example in the context of a delayed timer that
+ * can be skipped if a NAPI is already scheduled.
+ *
+ * Return: true if NAPI is scheduled, false otherwise.
+ */
+static inline bool napi_is_scheduled(struct napi_struct *n)
+{
+ return test_bit(NAPI_STATE_SCHED, &n->state);
+}
+
bool napi_schedule_prep(struct napi_struct *n);
/**
@@ -488,11 +513,18 @@ bool napi_schedule_prep(struct napi_struct *n);
*
* Schedule NAPI poll routine to be called if it is not already
* running.
+ * Return true if we schedule a NAPI or false if not.
+ * Refer to napi_schedule_prep() for additional reasons why
+ * a NAPI might not be scheduled.
*/
-static inline void napi_schedule(struct napi_struct *n)
+static inline bool napi_schedule(struct napi_struct *n)
{
- if (napi_schedule_prep(n))
+ if (napi_schedule_prep(n)) {
__napi_schedule(n);
+ return true;
+ }
+
+ return false;
}
/**
@@ -507,16 +539,6 @@ static inline void napi_schedule_irqoff(struct napi_struct *n)
__napi_schedule_irqoff(n);
}
-/* Try to reschedule poll. Called by dev->poll() after napi_complete(). */
-static inline bool napi_reschedule(struct napi_struct *napi)
-{
- if (napi_schedule_prep(napi)) {
- __napi_schedule(napi);
- return true;
- }
- return false;
-}
-
/**
* napi_complete_done - NAPI processing complete
* @n: NAPI context
@@ -917,6 +939,7 @@ struct net_device_path {
u8 queue;
u16 wcid;
u8 bss;
+ u8 amsdu;
} mtk_wdma;
};
};
@@ -1287,9 +1310,7 @@ struct netdev_net_notifier {
* struct net_device *dev,
* const unsigned char *addr, u16 vid)
* Deletes the FDB entry from dev corresponding to addr.
- * int (*ndo_fdb_del_bulk)(struct ndmsg *ndm, struct nlattr *tb[],
- * struct net_device *dev,
- * u16 vid,
+ * int (*ndo_fdb_del_bulk)(struct nlmsghdr *nlh, struct net_device *dev,
* struct netlink_ext_ack *extack);
* int (*ndo_fdb_dump)(struct sk_buff *skb, struct netlink_callback *cb,
* struct net_device *dev, struct net_device *filter_dev,
@@ -1564,10 +1585,8 @@ struct net_device_ops {
struct net_device *dev,
const unsigned char *addr,
u16 vid, struct netlink_ext_ack *extack);
- int (*ndo_fdb_del_bulk)(struct ndmsg *ndm,
- struct nlattr *tb[],
+ int (*ndo_fdb_del_bulk)(struct nlmsghdr *nlh,
struct net_device *dev,
- u16 vid,
struct netlink_ext_ack *extack);
int (*ndo_fdb_dump)(struct sk_buff *skb,
struct netlink_callback *cb,
@@ -1590,6 +1609,10 @@ struct net_device_ops {
int (*ndo_mdb_dump)(struct net_device *dev,
struct sk_buff *skb,
struct netlink_callback *cb);
+ int (*ndo_mdb_get)(struct net_device *dev,
+ struct nlattr *tb[], u32 portid,
+ u32 seq,
+ struct netlink_ext_ack *extack);
int (*ndo_bridge_setlink)(struct net_device *dev,
struct nlmsghdr *nlh,
u16 flags,
@@ -2049,6 +2072,9 @@ enum netdev_ml_priv_type {
* SET_NETDEV_DEVLINK_PORT macro. This pointer is static
* during the time netdevice is registered.
*
+ * @dpll_pin: Pointer to the SyncE source pin of a DPLL subsystem,
+ * where the clock is recovered.
+ *
* FIXME: cleanup struct net_device such that network protocol info
* moves out.
*/
@@ -2405,6 +2431,10 @@ struct net_device {
struct rtnl_hw_stats64 *offload_xstats_l3;
struct devlink_port *devlink_port;
+
+#if IS_ENABLED(CONFIG_DPLL)
+ struct dpll_pin *dpll_pin;
+#endif
};
#define to_net_dev(d) container_of(d, struct net_device, dev)
@@ -3940,6 +3970,18 @@ int dev_get_mac_address(struct sockaddr *sa, struct net *net, char *dev_name);
int dev_get_port_parent_id(struct net_device *dev,
struct netdev_phys_item_id *ppid, bool recurse);
bool netdev_port_same_parent_id(struct net_device *a, struct net_device *b);
+void netdev_dpll_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin);
+void netdev_dpll_pin_clear(struct net_device *dev);
+
+static inline struct dpll_pin *netdev_dpll_pin(const struct net_device *dev)
+{
+#if IS_ENABLED(CONFIG_DPLL)
+ return dev->dpll_pin;
+#else
+ return NULL;
+#endif
+}
+
struct sk_buff *validate_xmit_skb_list(struct sk_buff *skb, struct net_device *dev, bool *again);
struct sk_buff *dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
struct netdev_queue *txq, int *ret);
@@ -3980,32 +4022,19 @@ static __always_inline bool __is_skb_forwardable(const struct net_device *dev,
return false;
}
-struct net_device_core_stats __percpu *netdev_core_stats_alloc(struct net_device *dev);
-
-static inline struct net_device_core_stats __percpu *dev_core_stats(struct net_device *dev)
-{
- /* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
- struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
-
- if (likely(p))
- return p;
-
- return netdev_core_stats_alloc(dev);
-}
+void netdev_core_stats_inc(struct net_device *dev, u32 offset);
#define DEV_CORE_STATS_INC(FIELD) \
static inline void dev_core_stats_##FIELD##_inc(struct net_device *dev) \
{ \
- struct net_device_core_stats __percpu *p; \
- \
- p = dev_core_stats(dev); \
- if (p) \
- this_cpu_inc(p->FIELD); \
+ netdev_core_stats_inc(dev, \
+ offsetof(struct net_device_core_stats, FIELD)); \
}
DEV_CORE_STATS_INC(rx_dropped)
DEV_CORE_STATS_INC(tx_dropped)
DEV_CORE_STATS_INC(rx_nohandler)
DEV_CORE_STATS_INC(rx_otherhost_dropped)
+#undef DEV_CORE_STATS_INC
static __always_inline int ____dev_forward_skb(struct net_device *dev,
struct sk_buff *skb,
@@ -5214,5 +5243,6 @@ extern struct net_device *blackhole_netdev;
#define DEV_STATS_INC(DEV, FIELD) atomic_long_inc(&(DEV)->stats.__##FIELD)
#define DEV_STATS_ADD(DEV, FIELD, VAL) \
atomic_long_add((VAL), &(DEV)->stats.__##FIELD)
+#define DEV_STATS_READ(DEV, FIELD) atomic_long_read(&(DEV)->stats.__##FIELD)
#endif /* _LINUX_NETDEVICE_H */
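
A short sketch (hypothetical driver code) of the napi_schedule() change above, which now reports whether NAPI was actually scheduled so callers no longer need the removed napi_reschedule().

#include <linux/netdevice.h>
#include <linux/interrupt.h>
#include <linux/printk.h>

static irqreturn_t demo_irq_handler(int irq, void *dev_id)
{
	struct napi_struct *napi = dev_id;

	/* True only if NAPI was not already scheduled or running; a
	 * driver would typically mask its interrupts here until the
	 * poll routine re-enables them.
	 */
	if (napi_schedule(napi))
		pr_debug("NAPI poll scheduled from IRQ %d\n", irq);

	return IRQ_HANDLED;
}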
diff --git a/include/linux/netfilter.h b/include/linux/netfilter.h
index d68644b7c299..80900d910992 100644
--- a/include/linux/netfilter.h
+++ b/include/linux/netfilter.h
@@ -22,6 +22,16 @@ static inline int NF_DROP_GETERR(int verdict)
return -(verdict >> NF_VERDICT_QBITS);
}
+static __always_inline int
+NF_DROP_REASON(struct sk_buff *skb, enum skb_drop_reason reason, u32 err)
+{
+ BUILD_BUG_ON(err > 0xffff);
+
+ kfree_skb_reason(skb, reason);
+
+ return ((err << 16) | NF_STOLEN);
+}
+
static inline int nf_inet_addr_cmp(const union nf_inet_addr *a1,
const union nf_inet_addr *a2)
{
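
A hedged example of the new NF_DROP_REASON() helper in a hypothetical hook: the skb is freed with a drop reason visible to tracing, while netfilter still sees an NF_STOLEN verdict carrying the errno. SKB_DROP_REASON_NETFILTER_DROP is an assumed, pre-existing drop reason value.

#include <linux/netfilter.h>
#include <linux/errno.h>
#include <net/dropreason.h>

static unsigned int demo_nf_hook(void *priv, struct sk_buff *skb,
				 const struct nf_hook_state *state)
{
	if (!skb->dev)
		return NF_DROP_REASON(skb, SKB_DROP_REASON_NETFILTER_DROP,
				      EPERM);

	return NF_ACCEPT;
}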
diff --git a/include/linux/overflow.h b/include/linux/overflow.h
index f9b60313eaea..7b5cf4a5cd19 100644
--- a/include/linux/overflow.h
+++ b/include/linux/overflow.h
@@ -309,4 +309,39 @@ static inline size_t __must_check size_sub(size_t minuend, size_t subtrahend)
#define struct_size_t(type, member, count) \
struct_size((type *)NULL, member, count)
+/**
+ * _DEFINE_FLEX() - helper macro for DEFINE_FLEX() family.
+ * Enables caller macro to pass (different) initializer.
+ *
+ * @type: structure type name, including "struct" keyword.
+ * @name: Name for a variable to define.
+ * @member: Name of the array member.
+ * @count: Number of elements in the array; must be compile-time const.
+ * @initializer: initializer expression (could be empty for no init).
+ */
+#define _DEFINE_FLEX(type, name, member, count, initializer) \
+ _Static_assert(__builtin_constant_p(count), \
+ "onstack flex array members require compile-time const count"); \
+ union { \
+ u8 bytes[struct_size_t(type, member, count)]; \
+ type obj; \
+ } name##_u initializer; \
+ type *name = (type *)&name##_u
+
+/**
+ * DEFINE_FLEX() - Define an on-stack instance of structure with a trailing
+ * flexible array member.
+ *
+ * @type: structure type name, including "struct" keyword.
+ * @name: Name for a variable to define.
+ * @member: Name of the array member.
+ * @count: Number of elements in the array; must be compile-time const.
+ *
+ * Define a zeroed, on-stack, instance of @type structure with a trailing
+ * flexible array member.
+ * Use __struct_size(@name) to get compile-time size of it afterwards.
+ */
+#define DEFINE_FLEX(type, name, member, count) \
+ _DEFINE_FLEX(type, name, member, count, = {})
+
#endif /* __LINUX_OVERFLOW_H */
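
An on-stack usage sketch of DEFINE_FLEX() (the demo_cmd layout is hypothetical); the element count must be a compile-time constant and __struct_size() yields the size of the whole backing object.

#include <linux/overflow.h>
#include <linux/string.h>
#include <linux/types.h>

struct demo_cmd {
	u8 opcode;
	u8 len;
	u8 payload[];
};

static void demo_build_cmd(void)
{
	DEFINE_FLEX(struct demo_cmd, cmd, payload, 8);

	cmd->opcode = 0x2a;
	cmd->len = 8;
	memset(cmd->payload, 0xff, 8);
	/* __struct_size(cmd) == struct_size_t(struct demo_cmd, payload, 8) */
}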
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 91b457de262e..91c1f6d5b44f 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -180,6 +180,8 @@
#define PCI_DEVICE_ID_BERKOM_A4T 0xffa4
#define PCI_DEVICE_ID_BERKOM_SCITEL_QUADRO 0xffa8
+#define PCI_VENDOR_ID_ITTIM 0x0b48
+
#define PCI_VENDOR_ID_COMPAQ 0x0e11
#define PCI_DEVICE_ID_COMPAQ_TOKENRING 0x0508
#define PCI_DEVICE_ID_COMPAQ_TACHYON 0xa0fc
diff --git a/include/linux/pds/pds_core_if.h b/include/linux/pds/pds_core_if.h
index e838a2b90440..17a87c1a55d7 100644
--- a/include/linux/pds/pds_core_if.h
+++ b/include/linux/pds/pds_core_if.h
@@ -79,6 +79,7 @@ enum pds_core_status_code {
PDS_RC_EVFID = 31, /* VF ID does not exist */
PDS_RC_BAD_FW = 32, /* FW file is invalid or corrupted */
PDS_RC_ECLIENT = 33, /* No such client id */
+ PDS_RC_BAD_PCI = 255, /* Broken PCI when reading status */
};
/**
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 68fac2e7cbe6..8c677f185901 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -132,6 +132,7 @@ extern void __init setup_per_cpu_areas(void);
extern void __percpu *__alloc_percpu_gfp(size_t size, size_t align, gfp_t gfp) __alloc_size(1);
extern void __percpu *__alloc_percpu(size_t size, size_t align) __alloc_size(1);
extern void free_percpu(void __percpu *__pdata);
+extern size_t pcpu_alloc_size(void __percpu *__pdata);
DEFINE_FREE(free_percpu, void __percpu *, free_percpu(_T))
diff --git a/include/linux/phy.h b/include/linux/phy.h
index 1351b802ffcf..3cc52826f18e 100644
--- a/include/linux/phy.h
+++ b/include/linux/phy.h
@@ -1736,6 +1736,7 @@ void phy_detach(struct phy_device *phydev);
void phy_start(struct phy_device *phydev);
void phy_stop(struct phy_device *phydev);
int phy_config_aneg(struct phy_device *phydev);
+int _phy_start_aneg(struct phy_device *phydev);
int phy_start_aneg(struct phy_device *phydev);
int phy_aneg_done(struct phy_device *phydev);
int phy_speed_down(struct phy_device *phydev, bool sync);
diff --git a/include/linux/phylink.h b/include/linux/phylink.h
index 2b886ea654bb..875439ab45de 100644
--- a/include/linux/phylink.h
+++ b/include/linux/phylink.h
@@ -227,7 +227,7 @@ void phylink_limit_mac_speed(struct phylink_config *config, u32 max_speed);
/**
* struct phylink_mac_ops - MAC operations structure.
- * @validate: Validate and update the link configuration.
+ * @mac_get_caps: Get MAC capabilities for interface mode.
* @mac_select_pcs: Select a PCS for the interface mode.
* @mac_prepare: prepare for a major reconfiguration of the interface.
* @mac_config: configure the MAC for the selected mode and state.
@@ -238,9 +238,8 @@ void phylink_limit_mac_speed(struct phylink_config *config, u32 max_speed);
* The individual methods are described more fully below.
*/
struct phylink_mac_ops {
- void (*validate)(struct phylink_config *config,
- unsigned long *supported,
- struct phylink_link_state *state);
+ unsigned long (*mac_get_caps)(struct phylink_config *config,
+ phy_interface_t interface);
struct phylink_pcs *(*mac_select_pcs)(struct phylink_config *config,
phy_interface_t interface);
int (*mac_prepare)(struct phylink_config *config, unsigned int mode,
@@ -259,39 +258,17 @@ struct phylink_mac_ops {
#if 0 /* For kernel-doc purposes only. */
/**
- * validate - Validate and update the link configuration
+ * mac_get_caps: Get MAC capabilities for interface mode.
* @config: a pointer to a &struct phylink_config.
- * @supported: ethtool bitmask for supported link modes.
- * @state: a pointer to a &struct phylink_link_state.
- *
- * Clear bits in the @supported and @state->advertising masks that
- * are not supportable by the MAC.
- *
- * Note that the PHY may be able to transform from one connection
- * technology to another, so, eg, don't clear 1000BaseX just
- * because the MAC is unable to BaseX mode. This is more about
- * clearing unsupported speeds and duplex settings. The port modes
- * should not be cleared; phylink_set_port_modes() will help with this.
- *
- * When @config->supported_interfaces has been set, phylink will iterate
- * over the supported interfaces to determine the full capability of the
- * MAC. The validation function must not print errors if @state->interface
- * is set to an unexpected value.
+ * @interface: PHY interface mode.
*
- * When @config->supported_interfaces is empty, phylink will call this
- * function with @state->interface set to %PHY_INTERFACE_MODE_NA, and
- * expects the MAC driver to return all supported link modes.
- *
- * If the @state->interface mode is not supported, then the @supported
- * mask must be cleared.
- *
- * This member is optional; if not set, the generic validator will be
- * used making use of @config->mac_capabilities and
- * @config->supported_interfaces to determine which link modes are
- * supported.
+ * Optional method. When not provided, config->mac_capabilities will be used.
+ * When implemented, this returns the MAC capabilities for the specified
+ * interface mode where there is some special handling required by the MAC
+ * driver (e.g. not supporting half-duplex in certain interface modes.)
*/
-void validate(struct phylink_config *config, unsigned long *supported,
- struct phylink_link_state *state);
+unsigned long mac_get_caps(struct phylink_config *config,
+ phy_interface_t interface);
/**
* mac_select_pcs: Select a PCS for the interface mode.
* @config: a pointer to a &struct phylink_config.
@@ -636,17 +613,6 @@ void pcs_link_up(struct phylink_pcs *pcs, unsigned int neg_mode,
phy_interface_t interface, int speed, int duplex);
#endif
-void phylink_caps_to_linkmodes(unsigned long *linkmodes, unsigned long caps);
-unsigned long phylink_get_capabilities(phy_interface_t interface,
- unsigned long mac_capabilities,
- int rate_matching);
-void phylink_validate_mask_caps(unsigned long *supported,
- struct phylink_link_state *state,
- unsigned long caps);
-void phylink_generic_validate(struct phylink_config *config,
- unsigned long *supported,
- struct phylink_link_state *state);
-
struct phylink *phylink_create(struct phylink_config *,
const struct fwnode_handle *,
phy_interface_t,
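
A hedged sketch of a driver-side mac_get_caps() implementation replacing the removed validate() method; the capability mix and the RGMII restriction are purely illustrative.

#include <linux/phylink.h>
#include <linux/phy.h>

static unsigned long demo_mac_get_caps(struct phylink_config *config,
				       phy_interface_t interface)
{
	unsigned long caps = MAC_SYM_PAUSE | MAC_ASYM_PAUSE |
			     MAC_10FD | MAC_100FD | MAC_1000FD;

	/* Hypothetical restriction: no half duplex on RGMII variants. */
	if (!phy_interface_mode_is_rgmii(interface))
		caps |= MAC_10HD | MAC_100HD;

	return caps;
}

static const struct phylink_mac_ops demo_phylink_mac_ops = {
	.mac_get_caps	= demo_mac_get_caps,
	/* .mac_select_pcs, .mac_config, .mac_link_up, ... elided */
};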
diff --git a/include/linux/posix-clock.h b/include/linux/posix-clock.h
index 468328b1e1dd..ef8619f48920 100644
--- a/include/linux/posix-clock.h
+++ b/include/linux/posix-clock.h
@@ -14,6 +14,7 @@
#include <linux/rwsem.h>
struct posix_clock;
+struct posix_clock_context;
/**
* struct posix_clock_operations - functional interface to the clock
@@ -50,18 +51,18 @@ struct posix_clock_operations {
/*
* Optional character device methods:
*/
- long (*ioctl) (struct posix_clock *pc,
- unsigned int cmd, unsigned long arg);
+ long (*ioctl)(struct posix_clock_context *pccontext, unsigned int cmd,
+ unsigned long arg);
- int (*open) (struct posix_clock *pc, fmode_t f_mode);
+ int (*open)(struct posix_clock_context *pccontext, fmode_t f_mode);
- __poll_t (*poll) (struct posix_clock *pc,
- struct file *file, poll_table *wait);
+ __poll_t (*poll)(struct posix_clock_context *pccontext, struct file *file,
+ poll_table *wait);
- int (*release) (struct posix_clock *pc);
+ int (*release)(struct posix_clock_context *pccontext);
- ssize_t (*read) (struct posix_clock *pc,
- uint flags, char __user *buf, size_t cnt);
+ ssize_t (*read)(struct posix_clock_context *pccontext, uint flags,
+ char __user *buf, size_t cnt);
};
/**
@@ -91,6 +92,24 @@ struct posix_clock {
};
/**
+ * struct posix_clock_context - represents clock file operations context
+ *
+ * @clk: Pointer to the clock
+ * @private_clkdata: Pointer to user data
+ *
+ * Drivers should use struct posix_clock_context during specific character
+ * device file operation methods to access the posix clock.
+ *
+ * Drivers can store a private data structure during the open operation
+ * if they have specific information that is required in other file
+ * operations.
+ */
+struct posix_clock_context {
+ struct posix_clock *clk;
+ void *private_clkdata;
+};
+
+/**
* posix_clock_register() - register a new clock
* @clk: Pointer to the clock. Caller must provide 'ops' field
* @dev: Pointer to the initialized device. Caller must provide
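
A sketch of the per-file-descriptor context the reworked methods enable (hypothetical driver): private state is attached in open() via pccontext->private_clkdata and freed in release().

#include <linux/posix-clock.h>
#include <linux/slab.h>

struct demo_clk_fd {
	unsigned int event_mask;	/* hypothetical per-reader filter */
};

static int demo_clock_open(struct posix_clock_context *pccontext,
			   fmode_t fmode)
{
	struct demo_clk_fd *priv;

	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	/* Retrieved again in the read/ioctl/poll/release methods. */
	pccontext->private_clkdata = priv;
	return 0;
}

static int demo_clock_release(struct posix_clock_context *pccontext)
{
	kfree(pccontext->private_clkdata);
	pccontext->private_clkdata = NULL;
	return 0;
}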
diff --git a/include/linux/soc/mediatek/mtk_wed.h b/include/linux/soc/mediatek/mtk_wed.h
index b2b28180dff7..a476648858a6 100644
--- a/include/linux/soc/mediatek/mtk_wed.h
+++ b/include/linux/soc/mediatek/mtk_wed.h
@@ -10,6 +10,7 @@
#define MTK_WED_TX_QUEUES 2
#define MTK_WED_RX_QUEUES 2
+#define MTK_WED_RX_PAGE_QUEUES 3
#define WED_WO_STA_REC 0x6
@@ -45,7 +46,7 @@ enum mtk_wed_wo_cmd {
MTK_WED_WO_CMD_WED_END
};
-struct mtk_rxbm_desc {
+struct mtk_wed_bm_desc {
__le32 buf0;
__le32 token;
} __packed __aligned(4);
@@ -76,6 +77,11 @@ struct mtk_wed_wo_rx_stats {
__le32 rx_drop_cnt;
};
+struct mtk_wed_buf {
+ void *p;
+ dma_addr_t phy_addr;
+};
+
struct mtk_wed_device {
#ifdef CONFIG_NET_MEDIATEK_SOC_WED
const struct mtk_wed_ops *ops;
@@ -94,17 +100,20 @@ struct mtk_wed_device {
struct mtk_wed_ring txfree_ring;
struct mtk_wed_ring tx_wdma[MTK_WED_TX_QUEUES];
struct mtk_wed_ring rx_wdma[MTK_WED_RX_QUEUES];
+ struct mtk_wed_ring rx_rro_ring[MTK_WED_RX_QUEUES];
+ struct mtk_wed_ring rx_page_ring[MTK_WED_RX_PAGE_QUEUES];
+ struct mtk_wed_ring ind_cmd_ring;
struct {
int size;
- void **pages;
+ struct mtk_wed_buf *pages;
struct mtk_wdma_desc *desc;
dma_addr_t desc_phys;
} tx_buf_ring;
struct {
int size;
- struct mtk_rxbm_desc *desc;
+ struct mtk_wed_bm_desc *desc;
dma_addr_t desc_phys;
} rx_buf_ring;
@@ -114,6 +123,13 @@ struct mtk_wed_device {
dma_addr_t fdbk_phys;
} rro;
+ struct {
+ int size;
+ struct mtk_wed_buf *pages;
+ struct mtk_wed_bm_desc *desc;
+ dma_addr_t desc_phys;
+ } hw_rro;
+
/* filled by driver: */
struct {
union {
@@ -123,6 +139,7 @@ struct mtk_wed_device {
enum mtk_wed_bus_tye bus_type;
void __iomem *base;
u32 phy_base;
+ u32 id;
u32 wpdma_phys;
u32 wpdma_int;
@@ -131,18 +148,35 @@ struct mtk_wed_device {
u32 wpdma_txfree;
u32 wpdma_rx_glo;
u32 wpdma_rx;
+ u32 wpdma_rx_rro[MTK_WED_RX_QUEUES];
+ u32 wpdma_rx_pg;
bool wcid_512;
+ bool hw_rro;
+ bool msi;
u16 token_start;
unsigned int nbuf;
unsigned int rx_nbuf;
unsigned int rx_npkt;
unsigned int rx_size;
+ unsigned int amsdu_max_len;
u8 tx_tbit[MTK_WED_TX_QUEUES];
u8 rx_tbit[MTK_WED_RX_QUEUES];
+ u8 rro_rx_tbit[MTK_WED_RX_QUEUES];
+ u8 rx_pg_tbit[MTK_WED_RX_PAGE_QUEUES];
u8 txfree_tbit;
+ u8 amsdu_max_subframes;
+
+ struct {
+ u8 se_group_nums;
+ u16 win_size;
+ u16 particular_sid;
+ u32 ack_sn_addr;
+ dma_addr_t particular_se_phys;
+ dma_addr_t addr_elem_phys[1024];
+ } ind_cmd;
u32 (*init_buf)(void *ptr, dma_addr_t phys, int token_id);
int (*offload_enable)(struct mtk_wed_device *wed);
@@ -182,6 +216,14 @@ struct mtk_wed_ops {
void (*irq_set_mask)(struct mtk_wed_device *dev, u32 mask);
int (*setup_tc)(struct mtk_wed_device *wed, struct net_device *dev,
enum tc_setup_type type, void *type_data);
+ void (*start_hw_rro)(struct mtk_wed_device *dev, u32 irq_mask,
+ bool reset);
+ void (*rro_rx_ring_setup)(struct mtk_wed_device *dev, int ring,
+ void __iomem *regs);
+ void (*msdu_pg_rx_ring_setup)(struct mtk_wed_device *dev, int ring,
+ void __iomem *regs);
+ int (*ind_rx_ring_setup)(struct mtk_wed_device *dev,
+ void __iomem *regs);
};
extern const struct mtk_wed_ops __rcu *mtk_soc_wed_ops;
@@ -206,16 +248,27 @@ mtk_wed_device_attach(struct mtk_wed_device *dev)
return ret;
}
-static inline bool
-mtk_wed_get_rx_capa(struct mtk_wed_device *dev)
+static inline bool mtk_wed_get_rx_capa(struct mtk_wed_device *dev)
{
#ifdef CONFIG_NET_MEDIATEK_SOC_WED
+ if (dev->version == 3)
+ return dev->wlan.hw_rro;
+
return dev->version != 1;
#else
return false;
#endif
}
+static inline bool mtk_wed_is_amsdu_supported(struct mtk_wed_device *dev)
+{
+#ifdef CONFIG_NET_MEDIATEK_SOC_WED
+ return dev->version == 3;
+#else
+ return false;
+#endif
+}
+
#ifdef CONFIG_NET_MEDIATEK_SOC_WED
#define mtk_wed_device_active(_dev) !!(_dev)->ops
#define mtk_wed_device_detach(_dev) (_dev)->ops->detach(_dev)
@@ -242,6 +295,15 @@ mtk_wed_get_rx_capa(struct mtk_wed_device *dev)
#define mtk_wed_device_dma_reset(_dev) (_dev)->ops->reset_dma(_dev)
#define mtk_wed_device_setup_tc(_dev, _netdev, _type, _type_data) \
(_dev)->ops->setup_tc(_dev, _netdev, _type, _type_data)
+#define mtk_wed_device_start_hw_rro(_dev, _mask, _reset) \
+ (_dev)->ops->start_hw_rro(_dev, _mask, _reset)
+#define mtk_wed_device_rro_rx_ring_setup(_dev, _ring, _regs) \
+ (_dev)->ops->rro_rx_ring_setup(_dev, _ring, _regs)
+#define mtk_wed_device_msdu_pg_rx_ring_setup(_dev, _ring, _regs) \
+ (_dev)->ops->msdu_pg_rx_ring_setup(_dev, _ring, _regs)
+#define mtk_wed_device_ind_rx_ring_setup(_dev, _regs) \
+ (_dev)->ops->ind_rx_ring_setup(_dev, _regs)
+
#else
static inline bool mtk_wed_device_active(struct mtk_wed_device *dev)
{
@@ -261,6 +323,10 @@ static inline bool mtk_wed_device_active(struct mtk_wed_device *dev)
#define mtk_wed_device_stop(_dev) do {} while (0)
#define mtk_wed_device_dma_reset(_dev) do {} while (0)
#define mtk_wed_device_setup_tc(_dev, _netdev, _type, _type_data) -EOPNOTSUPP
+#define mtk_wed_device_start_hw_rro(_dev, _mask, _reset) do {} while (0)
+#define mtk_wed_device_rro_rx_ring_setup(_dev, _ring, _regs) -ENODEV
+#define mtk_wed_device_msdu_pg_rx_ring_setup(_dev, _ring, _regs) -ENODEV
+#define mtk_wed_device_ind_rx_ring_setup(_dev, _regs) -ENODEV
#endif
#endif
diff --git a/include/linux/socket.h b/include/linux/socket.h
index 39b74d83c7c4..cfcb7e2c3813 100644
--- a/include/linux/socket.h
+++ b/include/linux/socket.h
@@ -383,6 +383,7 @@ struct ucred {
#define SOL_MPTCP 284
#define SOL_MCTP 285
#define SOL_SMC 286
+#define SOL_VSOCK 287
/* IPX options */
#define IPX_TYPE 1
diff --git a/include/linux/sockptr.h b/include/linux/sockptr.h
index bae5e2369b4f..307961b41541 100644
--- a/include/linux/sockptr.h
+++ b/include/linux/sockptr.h
@@ -55,6 +55,29 @@ static inline int copy_from_sockptr(void *dst, sockptr_t src, size_t size)
return copy_from_sockptr_offset(dst, src, 0, size);
}
+static inline int copy_struct_from_sockptr(void *dst, size_t ksize,
+ sockptr_t src, size_t usize)
+{
+ size_t size = min(ksize, usize);
+ size_t rest = max(ksize, usize) - size;
+
+ if (!sockptr_is_kernel(src))
+ return copy_struct_from_user(dst, ksize, src.user, size);
+
+ if (usize < ksize) {
+ memset(dst + size, 0, rest);
+ } else if (usize > ksize) {
+ char *p = src.kernel;
+
+ while (rest--) {
+ if (*p++)
+ return -E2BIG;
+ }
+ }
+ memcpy(dst, src.kernel, size);
+ return 0;
+}
+
static inline int copy_to_sockptr_offset(sockptr_t dst, size_t offset,
const void *src, size_t size)
{
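
A usage sketch (hypothetical setsockopt path) of copy_struct_from_sockptr(): it tolerates shorter, older userspace structs as well as zero-padded longer ones, returning -E2BIG if unknown trailing bytes are non-zero.

#include <linux/sockptr.h>
#include <linux/types.h>

struct demo_opt {
	__u32 flags;
	__u32 timeout_ms;
};

static int demo_setsockopt(sockptr_t optval, unsigned int optlen)
{
	struct demo_opt opt = {};
	int err;

	err = copy_struct_from_sockptr(&opt, sizeof(opt), optval, optlen);
	if (err)
		return err;

	/* ... apply opt.flags / opt.timeout_ms ... */
	return 0;
}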
diff --git a/include/linux/stmmac.h b/include/linux/stmmac.h
index ce89cc3e4913..0b4658a7eceb 100644
--- a/include/linux/stmmac.h
+++ b/include/linux/stmmac.h
@@ -139,6 +139,7 @@ struct stmmac_rxq_cfg {
struct stmmac_txq_cfg {
u32 weight;
+ bool coe_unsupported;
u8 mode_to_use;
/* Credit Base Shaper parameters */
u32 send_slope;
@@ -302,7 +303,6 @@ struct plat_stmmacenet_data {
unsigned int eee_usecs_rate;
struct pci_dev *pdev;
int int_snapshot_num;
- int ext_snapshot_num;
int msi_mac_vec;
int msi_wol_vec;
int msi_lpi_vec;
diff --git a/include/linux/tcp.h b/include/linux/tcp.h
index 3c5efeeb024f..ec4e9367f5b0 100644
--- a/include/linux/tcp.h
+++ b/include/linux/tcp.h
@@ -152,6 +152,7 @@ struct tcp_request_sock {
u64 snt_synack; /* first SYNACK sent time */
bool tfo_listener;
bool is_mptcp;
+ s8 req_usec_ts;
#if IS_ENABLED(CONFIG_MPTCP)
bool drop_req;
#endif
@@ -165,6 +166,11 @@ struct tcp_request_sock {
* after data-in-SYN.
*/
u8 syn_tos;
+#ifdef CONFIG_TCP_AO
+ u8 ao_keyid;
+ u8 ao_rcv_next;
+ u8 maclen;
+#endif
};
static inline struct tcp_request_sock *tcp_rsk(const struct request_sock *req)
@@ -172,6 +178,19 @@ static inline struct tcp_request_sock *tcp_rsk(const struct request_sock *req)
return (struct tcp_request_sock *)req;
}
+static inline bool tcp_rsk_used_ao(const struct request_sock *req)
+{
+ /* The real length of MAC is saved in the request socket,
+ * signing anything with zero-length makes no sense, so here is
+ * a little hack..
+ */
+#ifndef CONFIG_TCP_AO
+ return false;
+#else
+ return tcp_rsk(req)->maclen != 0;
+#endif
+}
+
#define TCP_RMEM_TO_WIN_SCALE 8
struct tcp_sock {
@@ -257,7 +276,8 @@ struct tcp_sock {
u8 compressed_ack;
u8 dup_ack_counter:2,
tlp_retrans:1, /* TLP is a retransmission */
- unused:5;
+ tcp_usec_ts:1, /* TSval values in usec */
+ unused:4;
u32 chrono_start; /* Start time in jiffies of a TCP chrono */
u32 chrono_stat[3]; /* Time in jiffies for chrono_stat stats */
u8 chrono_type:2, /* current chronograph type */
@@ -377,6 +397,14 @@ struct tcp_sock {
* Total data bytes retransmitted
*/
u32 total_retrans; /* Total retransmits for entire connection */
+ u32 rto_stamp; /* Start time (ms) of last CA_Loss recovery */
+ u16 total_rto; /* Total number of RTO timeouts, including
+ * SYN/SYN-ACK and recurring timeouts.
+ */
+ u16 total_rto_recoveries; /* Total number of RTO recoveries,
+ * including any unfinished recovery.
+ */
+ u32 total_rto_time; /* ms spent in (completed) RTO recoveries. */
u32 urg_seq; /* Seq of received urgent pointer */
unsigned int keepalive_time; /* time before keep alive takes place */
@@ -437,13 +465,18 @@ struct tcp_sock {
bool syn_smc; /* SYN includes SMC */
#endif
-#ifdef CONFIG_TCP_MD5SIG
-/* TCP AF-Specific parts; only used by MD5 Signature support so far */
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
+/* TCP AF-Specific parts; only used by TCP-AO/MD5 Signature support so far */
const struct tcp_sock_af_ops *af_specific;
+#ifdef CONFIG_TCP_MD5SIG
/* TCP MD5 Signature Option information */
struct tcp_md5sig_info __rcu *md5sig_info;
#endif
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_info __rcu *ao_info;
+#endif
+#endif
/* TCP fastopen related information */
struct tcp_fastopen_request *fastopen_req;
@@ -463,15 +496,17 @@ enum tsq_enum {
TCP_MTU_REDUCED_DEFERRED, /* tcp_v{4|6}_err() could not call
* tcp_v{4|6}_mtu_reduced()
*/
+ TCP_ACK_DEFERRED, /* TX pure ack is deferred */
};
enum tsq_flags {
- TSQF_THROTTLED = (1UL << TSQ_THROTTLED),
- TSQF_QUEUED = (1UL << TSQ_QUEUED),
- TCPF_TSQ_DEFERRED = (1UL << TCP_TSQ_DEFERRED),
- TCPF_WRITE_TIMER_DEFERRED = (1UL << TCP_WRITE_TIMER_DEFERRED),
- TCPF_DELACK_TIMER_DEFERRED = (1UL << TCP_DELACK_TIMER_DEFERRED),
- TCPF_MTU_REDUCED_DEFERRED = (1UL << TCP_MTU_REDUCED_DEFERRED),
+ TSQF_THROTTLED = BIT(TSQ_THROTTLED),
+ TSQF_QUEUED = BIT(TSQ_QUEUED),
+ TCPF_TSQ_DEFERRED = BIT(TCP_TSQ_DEFERRED),
+ TCPF_WRITE_TIMER_DEFERRED = BIT(TCP_WRITE_TIMER_DEFERRED),
+ TCPF_DELACK_TIMER_DEFERRED = BIT(TCP_DELACK_TIMER_DEFERRED),
+ TCPF_MTU_REDUCED_DEFERRED = BIT(TCP_MTU_REDUCED_DEFERRED),
+ TCPF_ACK_DEFERRED = BIT(TCP_ACK_DEFERRED),
};
#define tcp_sk(ptr) container_of_const(ptr, struct tcp_sock, inet_conn.icsk_inet.sk)
@@ -497,6 +532,9 @@ struct tcp_timewait_sock {
#ifdef CONFIG_TCP_MD5SIG
struct tcp_md5sig_key *tw_md5_key;
#endif
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_info __rcu *ao_info;
+#endif
};
static inline struct tcp_timewait_sock *tcp_twsk(const struct sock *sk)
@@ -566,4 +604,9 @@ void tcp_sock_set_quickack(struct sock *sk, int val);
int tcp_sock_set_syncnt(struct sock *sk, int val);
int tcp_sock_set_user_timeout(struct sock *sk, int val);
+static inline bool dst_tcp_usec_ts(const struct dst_entry *dst)
+{
+ return dst_feature(dst, RTAX_FEATURE_TCP_USEC_TS);
+}
+
#endif /* _LINUX_TCP_H */
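
A minimal sketch of how the new microsecond-timestamp state can be consulted (the wrapper function is illustrative, not from the patch): dst_tcp_usec_ts() tests the RTAX_FEATURE_TCP_USEC_TS route feature, while tp->tcp_usec_ts records that the established connection uses usec TSval resolution.

#include <linux/tcp.h>
#include <net/dst.h>

static bool example_usec_ts_enabled(const struct sock *sk,
				    const struct dst_entry *dst)
{
	const struct tcp_sock *tp = tcp_sk(sk);

	/* route opted in, or the socket already negotiated usec TSval */
	return dst_tcp_usec_ts(dst) || tp->tcp_usec_ts;
}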
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 21ae37e49319..5eb88a66eb68 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -761,7 +761,8 @@ struct bpf_raw_event_map *bpf_get_raw_tracepoint(const char *name);
void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp);
int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
u32 *fd_type, const char **buf,
- u64 *probe_offset, u64 *probe_addr);
+ u64 *probe_offset, u64 *probe_addr,
+ unsigned long *missed);
int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
int bpf_uprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
#else
@@ -801,7 +802,7 @@ static inline void bpf_put_raw_tracepoint(struct bpf_raw_event_map *btp)
static inline int bpf_get_perf_event_info(const struct perf_event *event,
u32 *prog_id, u32 *fd_type,
const char **buf, u64 *probe_offset,
- u64 *probe_addr)
+ u64 *probe_addr, unsigned long *missed)
{
return -EOPNOTSUPP;
}
@@ -877,6 +878,7 @@ extern void perf_kprobe_destroy(struct perf_event *event);
extern int bpf_get_kprobe_info(const struct perf_event *event,
u32 *fd_type, const char **symbol,
u64 *probe_offset, u64 *probe_addr,
+ unsigned long *missed,
bool perf_type_tracepoint);
#endif
#ifdef CONFIG_UPROBE_EVENTS
diff --git a/include/linux/udp.h b/include/linux/udp.h
index 43c1fb2d2c21..d04188714dca 100644
--- a/include/linux/udp.h
+++ b/include/linux/udp.h
@@ -32,25 +32,30 @@ static inline u32 udp_hashfn(const struct net *net, u32 num, u32 mask)
return (num + net_hash_mix(net)) & mask;
}
+enum {
+ UDP_FLAGS_CORK, /* Cork is required */
+ UDP_FLAGS_NO_CHECK6_TX, /* Send zero UDP6 checksums on TX? */
+ UDP_FLAGS_NO_CHECK6_RX, /* Allow zero UDP6 checksums on RX? */
+ UDP_FLAGS_GRO_ENABLED, /* Request GRO aggregation */
+ UDP_FLAGS_ACCEPT_FRAGLIST,
+ UDP_FLAGS_ACCEPT_L4,
+ UDP_FLAGS_ENCAP_ENABLED, /* This socket enabled encap */
+ UDP_FLAGS_UDPLITE_SEND_CC, /* set via udplite setsockopt */
+ UDP_FLAGS_UDPLITE_RECV_CC, /* set via udplite setsockopt */
+};
+
struct udp_sock {
/* inet_sock has to be the first member */
struct inet_sock inet;
#define udp_port_hash inet.sk.__sk_common.skc_u16hashes[0]
#define udp_portaddr_hash inet.sk.__sk_common.skc_u16hashes[1]
#define udp_portaddr_node inet.sk.__sk_common.skc_portaddr_node
+
+ unsigned long udp_flags;
+
int pending; /* Any pending frames ? */
- unsigned int corkflag; /* Cork is required */
__u8 encap_type; /* Is this an Encapsulation socket? */
- unsigned char no_check6_tx:1,/* Send zero UDP6 checksums on TX? */
- no_check6_rx:1,/* Allow zero UDP6 checksums on RX? */
- encap_enabled:1, /* This socket enabled encap
- * processing; UDP tunnels and
- * different encapsulation layer set
- * this
- */
- gro_enabled:1, /* Request GRO aggregation */
- accept_udp_l4:1,
- accept_udp_fraglist:1;
+
/*
* Following member retains the information to create a UDP header
* when the socket is uncorked.
@@ -62,12 +67,6 @@ struct udp_sock {
*/
__u16 pcslen;
__u16 pcrlen;
-/* indicator bits used by pcflag: */
-#define UDPLITE_BIT 0x1 /* set by udplite proto init function */
-#define UDPLITE_SEND_CC 0x2 /* set via udplite setsockopt */
-#define UDPLITE_RECV_CC 0x4 /* set via udplite setsocktopt */
- __u8 pcflag; /* marks socket as UDP-Lite if > 0 */
- __u8 unused[3];
/*
* For encapsulation sockets.
*/
@@ -95,28 +94,39 @@ struct udp_sock {
int forward_threshold;
};
+#define udp_test_bit(nr, sk) \
+ test_bit(UDP_FLAGS_##nr, &udp_sk(sk)->udp_flags)
+#define udp_set_bit(nr, sk) \
+ set_bit(UDP_FLAGS_##nr, &udp_sk(sk)->udp_flags)
+#define udp_test_and_set_bit(nr, sk) \
+ test_and_set_bit(UDP_FLAGS_##nr, &udp_sk(sk)->udp_flags)
+#define udp_clear_bit(nr, sk) \
+ clear_bit(UDP_FLAGS_##nr, &udp_sk(sk)->udp_flags)
+#define udp_assign_bit(nr, sk, val) \
+ assign_bit(UDP_FLAGS_##nr, &udp_sk(sk)->udp_flags, val)
+
#define UDP_MAX_SEGMENTS (1 << 6UL)
#define udp_sk(ptr) container_of_const(ptr, struct udp_sock, inet.sk)
static inline void udp_set_no_check6_tx(struct sock *sk, bool val)
{
- udp_sk(sk)->no_check6_tx = val;
+ udp_assign_bit(NO_CHECK6_TX, sk, val);
}
static inline void udp_set_no_check6_rx(struct sock *sk, bool val)
{
- udp_sk(sk)->no_check6_rx = val;
+ udp_assign_bit(NO_CHECK6_RX, sk, val);
}
-static inline bool udp_get_no_check6_tx(struct sock *sk)
+static inline bool udp_get_no_check6_tx(const struct sock *sk)
{
- return udp_sk(sk)->no_check6_tx;
+ return udp_test_bit(NO_CHECK6_TX, sk);
}
-static inline bool udp_get_no_check6_rx(struct sock *sk)
+static inline bool udp_get_no_check6_rx(const struct sock *sk)
{
- return udp_sk(sk)->no_check6_rx;
+ return udp_test_bit(NO_CHECK6_RX, sk);
}
static inline void udp_cmsg_recv(struct msghdr *msg, struct sock *sk,
@@ -135,10 +145,12 @@ static inline bool udp_unexpected_gso(struct sock *sk, struct sk_buff *skb)
if (!skb_is_gso(skb))
return false;
- if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 && !udp_sk(sk)->accept_udp_l4)
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 &&
+ !udp_test_bit(ACCEPT_L4, sk))
return true;
- if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST && !udp_sk(sk)->accept_udp_fraglist)
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_FRAGLIST &&
+ !udp_test_bit(ACCEPT_FRAGLIST, sk))
return true;
return false;
@@ -146,8 +158,8 @@ static inline bool udp_unexpected_gso(struct sock *sk, struct sk_buff *skb)
static inline void udp_allow_gso(struct sock *sk)
{
- udp_sk(sk)->accept_udp_l4 = 1;
- udp_sk(sk)->accept_udp_fraglist = 1;
+ udp_set_bit(ACCEPT_L4, sk);
+ udp_set_bit(ACCEPT_FRAGLIST, sk);
}
#define udp_portaddr_for_each_entry(__sk, list) \
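
A minimal sketch of the new udp_flags accessors (the wrapper functions are illustrative): the macros paste the UDP_FLAGS_ prefix onto their first argument and operate atomically on udp_sk(sk)->udp_flags, replacing the old bitfields.

#include <linux/udp.h>

static void example_enable_udp_gro(struct sock *sk)
{
	/* expands to set_bit(UDP_FLAGS_GRO_ENABLED, &udp_sk(sk)->udp_flags) */
	udp_set_bit(GRO_ENABLED, sk);
}

static bool example_udp_gro_enabled(const struct sock *sk)
{
	return udp_test_bit(GRO_ENABLED, sk);
}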
diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
index c58453699ee9..ebb3ce63d64d 100644
--- a/include/linux/virtio_vsock.h
+++ b/include/linux/virtio_vsock.h
@@ -12,6 +12,7 @@
struct virtio_vsock_skb_cb {
bool reply;
bool tap_delivered;
+ u32 offset;
};
#define VIRTIO_VSOCK_SKB_CB(skb) ((struct virtio_vsock_skb_cb *)((skb)->cb))
@@ -159,6 +160,15 @@ struct virtio_transport {
/* Takes ownership of the packet */
int (*send_pkt)(struct sk_buff *skb);
+
+ /* Used in MSG_ZEROCOPY mode. Checks that the provided data
+ * (number of buffers) can be transmitted in zerocopy mode.
+ * If this callback is not implemented for the current
+ * transport, the transport needs no extra checks and can
+ * perform zerocopy transmission by default.
+ */
+ bool (*can_msgzerocopy)(int bufs_num);
};
ssize_t
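
A minimal sketch of a transport wiring up the new optional ->can_msgzerocopy() hook (not from the patch; the buffer limit is a placeholder, not the logic of any in-tree transport):

#include <linux/virtio_vsock.h>

#define EXAMPLE_MAX_ZC_BUFS	64	/* illustrative limit */

static bool example_can_msgzerocopy(int bufs_num)
{
	/* refuse zerocopy when too many buffers would be needed */
	return bufs_num <= EXAMPLE_MAX_ZC_BUFS;
}

static struct virtio_transport example_transport = {
	/* .transport and .send_pkt initialisation elided */
	.can_msgzerocopy = example_can_msgzerocopy,
};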
diff --git a/include/net/Space.h b/include/net/Space.h
index c29f3d51c078..ef42629f4258 100644
--- a/include/net/Space.h
+++ b/include/net/Space.h
@@ -10,4 +10,3 @@ struct net_device *smc_init(int unit);
struct net_device *cs89x0_probe(int unit);
struct net_device *tc515_probe(int unit);
struct net_device *lance_probe(int unit);
-struct net_device *cops_probe(int unit);
diff --git a/include/net/af_vsock.h b/include/net/af_vsock.h
index b01cf9ac2437..e302c0e804d0 100644
--- a/include/net/af_vsock.h
+++ b/include/net/af_vsock.h
@@ -177,6 +177,9 @@ struct vsock_transport {
/* Read a single skb */
int (*read_skb)(struct vsock_sock *, skb_read_actor_t);
+
+ /* Zero-copy. */
+ bool (*msgzerocopy_allow)(void);
};
/**** CORE ****/
@@ -241,4 +244,8 @@ static inline void __init vsock_bpf_build_proto(void)
{}
#endif
+static inline bool vsock_msgzerocopy_allow(const struct vsock_transport *t)
+{
+ return t->msgzerocopy_allow && t->msgzerocopy_allow();
+}
#endif /* __AF_VSOCK_H__ */
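
A minimal sketch of gating MSG_ZEROCOPY on the bound transport (the wrapper function is illustrative): vsock_msgzerocopy_allow() treats a missing ->msgzerocopy_allow() hook as "not supported".

#include <net/af_vsock.h>
#include <linux/socket.h>

static int example_check_zerocopy(const struct vsock_sock *vsk, int msg_flags)
{
	if ((msg_flags & MSG_ZEROCOPY) &&
	    !vsock_msgzerocopy_allow(vsk->transport))
		return -EOPNOTSUPP;

	return 0;
}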
diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
index aa90adc3b2a4..7ffa8c192c3f 100644
--- a/include/net/bluetooth/bluetooth.h
+++ b/include/net/bluetooth/bluetooth.h
@@ -541,7 +541,7 @@ static inline struct sk_buff *bt_skb_sendmsg(struct sock *sk,
return ERR_PTR(-EFAULT);
}
- skb->priority = sk->sk_priority;
+ skb->priority = READ_ONCE(sk->sk_priority);
return skb;
}
diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
index 87d92accc26e..bdee5d649cc6 100644
--- a/include/net/bluetooth/hci.h
+++ b/include/net/bluetooth/hci.h
@@ -1,6 +1,7 @@
/*
BlueZ - Bluetooth protocol stack for Linux
Copyright (C) 2000-2001 Qualcomm Incorporated
+ Copyright 2023 NXP
Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com>
@@ -673,6 +674,8 @@ enum {
#define HCI_TX_POWER_INVALID 127
#define HCI_RSSI_INVALID 127
+#define HCI_SYNC_HANDLE_INVALID 0xffff
+
#define HCI_ROLE_MASTER 0x00
#define HCI_ROLE_SLAVE 0x01
diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
index c33348ba1657..20988623c5cc 100644
--- a/include/net/bluetooth/hci_core.h
+++ b/include/net/bluetooth/hci_core.h
@@ -350,6 +350,8 @@ struct hci_dev {
struct list_head list;
struct mutex lock;
+ struct ida unset_handle_ida;
+
const char *name;
unsigned long flags;
__u16 id;
@@ -1290,8 +1292,8 @@ static inline struct hci_conn *hci_conn_hash_lookup_big(struct hci_dev *hdev,
return NULL;
}
-static inline struct hci_conn *hci_conn_hash_lookup_big_any_dst(struct hci_dev *hdev,
- __u8 handle)
+static inline struct hci_conn *
+hci_conn_hash_lookup_pa_sync_big_handle(struct hci_dev *hdev, __u8 big)
{
struct hci_conn_hash *h = &hdev->conn_hash;
struct hci_conn *c;
@@ -1299,22 +1301,22 @@ static inline struct hci_conn *hci_conn_hash_lookup_big_any_dst(struct hci_dev *
rcu_read_lock();
list_for_each_entry_rcu(c, &h->list, list) {
- if (c->type != ISO_LINK)
+ if (c->type != ISO_LINK ||
+ !test_bit(HCI_CONN_PA_SYNC, &c->flags))
continue;
- if (handle != BT_ISO_QOS_BIG_UNSET && handle == c->iso_qos.bcast.big) {
+ if (c->iso_qos.bcast.big == big) {
rcu_read_unlock();
return c;
}
}
-
rcu_read_unlock();
return NULL;
}
static inline struct hci_conn *
-hci_conn_hash_lookup_pa_sync(struct hci_dev *hdev, __u8 big)
+hci_conn_hash_lookup_pa_sync_handle(struct hci_dev *hdev, __u16 sync_handle)
{
struct hci_conn_hash *h = &hdev->conn_hash;
struct hci_conn *c;
@@ -1326,7 +1328,7 @@ hci_conn_hash_lookup_pa_sync(struct hci_dev *hdev, __u8 big)
!test_bit(HCI_CONN_PA_SYNC, &c->flags))
continue;
- if (c->iso_qos.bcast.big == big) {
+ if (c->sync_handle == sync_handle) {
rcu_read_unlock();
return c;
}
@@ -1377,6 +1379,26 @@ static inline void hci_conn_hash_list_state(struct hci_dev *hdev,
rcu_read_unlock();
}
+static inline void hci_conn_hash_list_flag(struct hci_dev *hdev,
+ hci_conn_func_t func, __u8 type,
+ __u8 flag, void *data)
+{
+ struct hci_conn_hash *h = &hdev->conn_hash;
+ struct hci_conn *c;
+
+ if (!func)
+ return;
+
+ rcu_read_lock();
+
+ list_for_each_entry_rcu(c, &h->list, list) {
+ if (c->type == type && test_bit(flag, &c->flags))
+ func(c, data);
+ }
+
+ rcu_read_unlock();
+}
+
static inline struct hci_conn *hci_lookup_le_connect(struct hci_dev *hdev)
{
struct hci_conn_hash *h = &hdev->conn_hash;
@@ -1426,7 +1448,9 @@ int hci_le_create_cis_pending(struct hci_dev *hdev);
int hci_conn_check_create_cis(struct hci_conn *conn);
struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
- u8 role);
+ u8 role, u16 handle);
+struct hci_conn *hci_conn_add_unset(struct hci_dev *hdev, int type,
+ bdaddr_t *dst, u8 role);
void hci_conn_del(struct hci_conn *conn);
void hci_conn_hash_flush(struct hci_dev *hdev);
void hci_conn_check_pending(struct hci_dev *hdev);
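
A minimal sketch of the new flag-filtered iterator (the callback body is illustrative): hci_conn_hash_list_flag() walks connections of the given type that have the given flag set, under RCU, and invokes the hci_conn_func_t callback for each.

#include <net/bluetooth/hci_core.h>

static void example_conn_cb(struct hci_conn *conn, void *data)
{
	int *count = data;

	(*count)++;
}

static int example_count_pa_sync(struct hci_dev *hdev)
{
	int count = 0;

	hci_conn_hash_list_flag(hdev, example_conn_cb, ISO_LINK,
				HCI_CONN_PA_SYNC, &count);
	return count;
}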
diff --git a/include/net/bluetooth/hci_sync.h b/include/net/bluetooth/hci_sync.h
index 57eeb07aeb25..6efbc2152146 100644
--- a/include/net/bluetooth/hci_sync.h
+++ b/include/net/bluetooth/hci_sync.h
@@ -80,6 +80,8 @@ int hci_start_per_adv_sync(struct hci_dev *hdev, u8 instance, u8 data_len,
u8 *data, u32 flags, u16 min_interval,
u16 max_interval, u16 sync_interval);
+int hci_disable_per_advertising_sync(struct hci_dev *hdev, u8 instance);
+
int hci_remove_advertising_sync(struct hci_dev *hdev, struct sock *sk,
u8 instance, bool force);
int hci_disable_advertising_sync(struct hci_dev *hdev);
diff --git a/include/net/cfg80211.h b/include/net/cfg80211.h
index 7192346e4a22..b137a33a1b68 100644
--- a/include/net/cfg80211.h
+++ b/include/net/cfg80211.h
@@ -76,6 +76,8 @@ struct wiphy;
* @IEEE80211_CHAN_DISABLED: This channel is disabled.
* @IEEE80211_CHAN_NO_IR: do not initiate radiation, this includes
* sending probe requests or beaconing.
+ * @IEEE80211_CHAN_PSD: Power spectral density (in dBm) is set for this
+ * channel.
* @IEEE80211_CHAN_RADAR: Radar detection is required on this channel.
* @IEEE80211_CHAN_NO_HT40PLUS: extension channel above this channel
* is not permitted.
@@ -119,7 +121,7 @@ struct wiphy;
enum ieee80211_channel_flags {
IEEE80211_CHAN_DISABLED = 1<<0,
IEEE80211_CHAN_NO_IR = 1<<1,
- /* hole at 1<<2 */
+ IEEE80211_CHAN_PSD = 1<<2,
IEEE80211_CHAN_RADAR = 1<<3,
IEEE80211_CHAN_NO_HT40PLUS = 1<<4,
IEEE80211_CHAN_NO_HT40MINUS = 1<<5,
@@ -171,6 +173,7 @@ enum ieee80211_channel_flags {
* on this channel.
* @dfs_state_entered: timestamp (jiffies) when the dfs state was entered.
* @dfs_cac_ms: DFS CAC time in milliseconds, this is valid for DFS channels.
+ * @psd: power spectral density (in dBm)
*/
struct ieee80211_channel {
enum nl80211_band band;
@@ -187,6 +190,7 @@ struct ieee80211_channel {
enum nl80211_dfs_state dfs_state;
unsigned long dfs_state_entered;
unsigned int dfs_cac_ms;
+ s8 psd;
};
/**
@@ -410,6 +414,19 @@ struct ieee80211_sta_eht_cap {
u8 eht_ppe_thres[IEEE80211_EHT_PPE_THRES_MAX_LEN];
};
+/* sparse defines __CHECKER__; see Documentation/dev-tools/sparse.rst */
+#ifdef __CHECKER__
+/*
+ * This is used to mark the sband->iftype_data pointer which is supposed
+ * to be an array with special access semantics (per iftype), but a lot
+ * of code got it wrong in the past, so with this marking sparse will be
+ * noisy when the pointer is used directly.
+ */
+# define __iftd __attribute__((noderef, address_space(__iftype_data)))
+#else
+# define __iftd
+#endif /* __CHECKER__ */
+
/**
* struct ieee80211_sband_iftype_data - sband data per interface type
*
@@ -543,10 +560,48 @@ struct ieee80211_supported_band {
struct ieee80211_sta_s1g_cap s1g_cap;
struct ieee80211_edmg edmg_cap;
u16 n_iftype_data;
- const struct ieee80211_sband_iftype_data *iftype_data;
+ const struct ieee80211_sband_iftype_data __iftd *iftype_data;
};
/**
+ * _ieee80211_set_sband_iftype_data - set sband iftype data array
+ * @sband: the sband to initialize
+ * @iftd: the iftype data array pointer
+ * @n_iftd: the length of the iftype data array
+ *
+ * Set the sband iftype data array; use this where the length cannot
+ * be derived from the ARRAY_SIZE() of the argument, but prefer
+ * ieee80211_set_sband_iftype_data() where it can be used.
+ */
+static inline void
+_ieee80211_set_sband_iftype_data(struct ieee80211_supported_band *sband,
+ const struct ieee80211_sband_iftype_data *iftd,
+ u16 n_iftd)
+{
+ sband->iftype_data = (const void __iftd __force *)iftd;
+ sband->n_iftype_data = n_iftd;
+}
+
+/**
+ * ieee80211_set_sband_iftype_data - set sband iftype data array
+ * @sband: the sband to initialize
+ * @iftd: the iftype data array
+ */
+#define ieee80211_set_sband_iftype_data(sband, iftd) \
+ _ieee80211_set_sband_iftype_data(sband, iftd, ARRAY_SIZE(iftd))
+
+/**
+ * for_each_sband_iftype_data - iterate sband iftype data entries
+ * @sband: the sband whose iftype_data array to iterate
+ * @i: iterator counter
+ * @iftd: iftype data pointer to set
+ */
+#define for_each_sband_iftype_data(sband, i, iftd) \
+ for (i = 0, iftd = (const void __force *)&(sband)->iftype_data[i]; \
+ i < (sband)->n_iftype_data; \
+ i++, iftd = (const void __force *)&(sband)->iftype_data[i])
+
+/**
* ieee80211_get_sband_iftype_data - return sband data for a given iftype
* @sband: the sband to search for the STA on
* @iftype: enum nl80211_iftype
@@ -557,6 +612,7 @@ static inline const struct ieee80211_sband_iftype_data *
ieee80211_get_sband_iftype_data(const struct ieee80211_supported_band *sband,
u8 iftype)
{
+ const struct ieee80211_sband_iftype_data *data;
int i;
if (WARN_ON(iftype >= NL80211_IFTYPE_MAX))
@@ -565,10 +621,7 @@ ieee80211_get_sband_iftype_data(const struct ieee80211_supported_band *sband,
if (iftype == NL80211_IFTYPE_AP_VLAN)
iftype = NL80211_IFTYPE_AP;
- for (i = 0; i < sband->n_iftype_data; i++) {
- const struct ieee80211_sband_iftype_data *data =
- &sband->iftype_data[i];
-
+ for_each_sband_iftype_data(sband, i, data) {
if (data->types_mask & BIT(iftype))
return data;
}
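
A minimal sketch of a driver populating and walking per-iftype data with the new helpers instead of assigning sband->iftype_data directly, which sparse now flags via the __iftd address space (the array contents are illustrative):

#include <net/cfg80211.h>

static const struct ieee80211_sband_iftype_data example_iftd[] = {
	{ .types_mask = BIT(NL80211_IFTYPE_STATION) },
};

static void example_init_sband(struct ieee80211_supported_band *sband)
{
	const struct ieee80211_sband_iftype_data *iftd;
	int i;

	/* sets both the pointer and n_iftype_data from ARRAY_SIZE() */
	ieee80211_set_sband_iftype_data(sband, example_iftd);

	for_each_sband_iftype_data(sband, i, iftd) {
		if (!(iftd->types_mask & BIT(NL80211_IFTYPE_STATION)))
			continue;
		/* use iftd->he_cap / iftd->eht_cap here */
	}
}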
@@ -954,6 +1007,30 @@ int cfg80211_chandef_dfs_required(struct wiphy *wiphy,
enum nl80211_iftype iftype);
/**
+ * cfg80211_chandef_dfs_usable - checks if chandef is DFS usable and CAC
+ * can/needs to be started on such a channel
+ * @wiphy: the wiphy to validate against
+ * @chandef: the channel definition to check
+ *
+ * Return: true if all channels are available and at least
+ * one channel requires CAC (NL80211_DFS_USABLE)
+ */
+bool cfg80211_chandef_dfs_usable(struct wiphy *wiphy,
+ const struct cfg80211_chan_def *chandef);
+
+/**
+ * cfg80211_chandef_dfs_cac_time - get the DFS CAC time (in ms) for given
+ * channel definition
+ * @wiphy: the wiphy to validate against
+ * @chandef: the channel definition to check
+ *
+ * Returns: DFS CAC time (in ms) which applies for this channel definition
+ */
+unsigned int
+cfg80211_chandef_dfs_cac_time(struct wiphy *wiphy,
+ const struct cfg80211_chan_def *chandef);
+
+/**
* nl80211_send_chandef - sends the channel definition.
* @msg: the msg to send channel definition
* @chandef: the channel definition to check
@@ -1289,6 +1366,7 @@ struct cfg80211_acl_data {
* struct cfg80211_fils_discovery - FILS discovery parameters from
* IEEE Std 802.11ai-2016, Annex C.3 MIB detail.
*
+ * @update: Set to true if the feature configuration should be updated.
* @min_interval: Minimum packet interval in TUs (0 - 10000)
* @max_interval: Maximum packet interval in TUs (0 - 10000)
* @tmpl_len: Template length
@@ -1296,6 +1374,7 @@ struct cfg80211_acl_data {
* frame headers.
*/
struct cfg80211_fils_discovery {
+ bool update;
u32 min_interval;
u32 max_interval;
size_t tmpl_len;
@@ -1306,6 +1385,7 @@ struct cfg80211_fils_discovery {
* struct cfg80211_unsol_bcast_probe_resp - Unsolicited broadcast probe
* response parameters in 6GHz.
*
+ * @update: Set to true if the feature configuration should be updated.
* @interval: Packet interval in TUs. Maximum allowed is 20 TU, as mentioned
* in IEEE P802.11ax/D6.0 26.17.2.3.2 - AP behavior for fast passive
* scanning
@@ -1313,6 +1393,7 @@ struct cfg80211_fils_discovery {
* @tmpl: Template data for probe response
*/
struct cfg80211_unsol_bcast_probe_resp {
+ bool update;
u32 interval;
size_t tmpl_len;
const u8 *tmpl;
@@ -1399,6 +1480,22 @@ struct cfg80211_ap_settings {
u16 punct_bitmap;
};
+
+/**
+ * struct cfg80211_ap_update - AP configuration update
+ *
+ * Subset of &struct cfg80211_ap_settings, for updating a running AP.
+ *
+ * @beacon: beacon data
+ * @fils_discovery: FILS discovery transmission parameters
+ * @unsol_bcast_probe_resp: Unsolicited broadcast probe response parameters
+ */
+struct cfg80211_ap_update {
+ struct cfg80211_beacon_data beacon;
+ struct cfg80211_fils_discovery fils_discovery;
+ struct cfg80211_unsol_bcast_probe_resp unsol_bcast_probe_resp;
+};
+
/**
* struct cfg80211_csa_settings - channel switch settings
*
@@ -2346,7 +2443,7 @@ struct mesh_config {
* @user_mpm: userspace handles all MPM functions
* @dtim_period: DTIM period to use
* @beacon_interval: beacon interval to use
- * @mcast_rate: multicat rate for Mesh Node [6Mbps is the default for 802.11a]
+ * @mcast_rate: multicast rate for Mesh Node [6Mbps is the default for 802.11a]
* @basic_rates: basic rates to use when creating the mesh
* @beacon_rate: bitrate to be used for beacons
* @userspace_handles_dfs: whether user space controls DFS operation, i.e.
@@ -2487,7 +2584,6 @@ struct cfg80211_scan_6ghz_params {
* @n_ssids: number of SSIDs
* @channels: channels to scan on.
* @n_channels: total number of channels to scan
- * @scan_width: channel width for scanning
* @ie: optional information element(s) to add into Probe Request or %NULL
* @ie_len: length of ie in octets
* @duration: how long to listen on each channel, in TUs. If
@@ -2517,7 +2613,6 @@ struct cfg80211_scan_request {
struct cfg80211_ssid *ssids;
int n_ssids;
u32 n_channels;
- enum nl80211_bss_scan_width scan_width;
const u8 *ie;
size_t ie_len;
u16 duration;
@@ -2566,7 +2661,7 @@ static inline void get_random_mask_addr(u8 *buf, const u8 *addr, const u8 *mask)
* or no match (RSSI only)
* @rssi_thold: don't report scan results below this threshold (in s32 dBm)
* @per_band_rssi_thold: Minimum rssi threshold for each band to be applied
- * for filtering out scan results received. Drivers advertize this support
+ * for filtering out scan results received. Drivers advertise this support
* of band specific rssi based filtering through the feature capability
* %NL80211_EXT_FEATURE_SCHED_SCAN_BAND_SPECIFIC_RSSI_THOLD. These band
* specific rssi thresholds take precedence over rssi_thold, if specified.
@@ -2612,14 +2707,13 @@ struct cfg80211_bss_select_adjust {
* @ssids: SSIDs to scan for (passed in the probe_reqs in active scans)
* @n_ssids: number of SSIDs
* @n_channels: total number of channels to scan
- * @scan_width: channel width for scanning
* @ie: optional information element(s) to add into Probe Request or %NULL
* @ie_len: length of ie in octets
* @flags: control flags from &enum nl80211_scan_flags
* @match_sets: sets of parameters to be matched for a scan result
* entry to be considered valid and to be passed to the host
* (others are filtered out).
- * If ommited, all results are passed.
+ * If omitted, all results are passed.
* @n_match_sets: number of match sets
* @report_results: indicates that results were reported for this request
* @wiphy: the wiphy this was for
@@ -2653,14 +2747,13 @@ struct cfg80211_bss_select_adjust {
* to the specified band while deciding whether a better BSS is reported
* using @relative_rssi. If delta is a negative number, the BSSs that
* belong to the specified band will be penalized by delta dB in relative
- * comparisions.
+ * comparisons.
*/
struct cfg80211_sched_scan_request {
u64 reqid;
struct cfg80211_ssid *ssids;
int n_ssids;
u32 n_channels;
- enum nl80211_bss_scan_width scan_width;
const u8 *ie;
size_t ie_len;
u32 flags;
@@ -2708,7 +2801,6 @@ enum cfg80211_signal_type {
/**
* struct cfg80211_inform_bss - BSS inform data
* @chan: channel the frame was received on
- * @scan_width: scan width that was used
* @signal: signal strength value, according to the wiphy's
* signal type
* @boottime_ns: timestamp (CLOCK_BOOTTIME) when the information was
@@ -2728,7 +2820,6 @@ enum cfg80211_signal_type {
*/
struct cfg80211_inform_bss {
struct ieee80211_channel *chan;
- enum nl80211_bss_scan_width scan_width;
s32 signal;
u64 boottime_ns;
u64 parent_tsf;
@@ -2762,7 +2853,6 @@ struct cfg80211_bss_ies {
* for use in scan results and similar.
*
* @channel: channel this BSS is on
- * @scan_width: width of the control channel
* @bssid: BSSID of the BSS
* @beacon_interval: the beacon interval as from the frame
* @capability: the capability field in host byte order
@@ -2792,7 +2882,6 @@ struct cfg80211_bss_ies {
*/
struct cfg80211_bss {
struct ieee80211_channel *channel;
- enum nl80211_bss_scan_width scan_width;
const struct cfg80211_bss_ies __rcu *ies;
const struct cfg80211_bss_ies __rcu *beacon_ies;
@@ -2891,12 +2980,15 @@ struct cfg80211_auth_request {
* @elems_len: length of the elements
* @disabled: If set this link should be included during association etc. but it
* should not be used until enabled by the AP MLD.
+ * @error: per-link error code, must be <= 0. If there is an error, then the
+ * operation as a whole must fail.
*/
struct cfg80211_assoc_link {
struct cfg80211_bss *bss;
const u8 *elems;
size_t elems_len;
bool disabled;
+ int error;
};
/**
@@ -3495,7 +3587,7 @@ struct cfg80211_update_ft_ies_params {
* This structure provides information needed to transmit a mgmt frame
*
* @chan: channel to use
- * @offchan: indicates wether off channel operation is required
+ * @offchan: indicates whether off channel operation is required
* @wait: duration for ROC
* @buf: buffer to transmit
* @len: buffer length
@@ -3613,7 +3705,7 @@ struct cfg80211_nan_func_filter {
* @publish_bcast: if true, the solicited publish should be broadcasted
* @subscribe_active: if true, the subscribe is active
* @followup_id: the instance ID for follow up
- * @followup_reqid: the requestor instance ID for follow up
+ * @followup_reqid: the requester instance ID for follow up
* @followup_dest: MAC address of the recipient of the follow up
* @ttl: time to live counter in DW.
* @serv_spec_info: Service Specific Info
@@ -4450,7 +4542,7 @@ struct cfg80211_ops {
int (*start_ap)(struct wiphy *wiphy, struct net_device *dev,
struct cfg80211_ap_settings *settings);
int (*change_beacon)(struct wiphy *wiphy, struct net_device *dev,
- struct cfg80211_beacon_data *info);
+ struct cfg80211_ap_update *info);
int (*stop_ap)(struct wiphy *wiphy, struct net_device *dev,
unsigned int link_id);
@@ -4816,6 +4908,8 @@ struct cfg80211_ops {
* @WIPHY_FLAG_SUPPORTS_EXT_KCK_32: The device supports 32-byte KCK keys.
* @WIPHY_FLAG_NOTIFY_REGDOM_BY_DRIVER: The device could handle reg notify for
* NL80211_REGDOM_SET_BY_DRIVER.
+ * @WIPHY_FLAG_CHANNEL_CHANGE_ON_BEACON: reg_call_notifier() is called if driver
+ * set this flag to update channels on beacon hints.
*/
enum wiphy_flags {
WIPHY_FLAG_SUPPORTS_EXT_KEK_KCK = BIT(0),
@@ -4842,6 +4936,7 @@ enum wiphy_flags {
WIPHY_FLAG_SUPPORTS_5_10_MHZ = BIT(22),
WIPHY_FLAG_HAS_CHANNEL_SWITCH = BIT(23),
WIPHY_FLAG_NOTIFY_REGDOM_BY_DRIVER = BIT(24),
+ WIPHY_FLAG_CHANNEL_CHANGE_ON_BEACON = BIT(25),
};
/**
@@ -5826,6 +5921,16 @@ void wiphy_work_queue(struct wiphy *wiphy, struct wiphy_work *work);
*/
void wiphy_work_cancel(struct wiphy *wiphy, struct wiphy_work *work);
+/**
+ * wiphy_work_flush - flush previously queued work
+ * @wiphy: the wiphy, for debug purposes
+ * @work: the work to flush, this can be %NULL to flush all work
+ *
+ * Flush the work (i.e. run it if pending). This must be called
+ * under the wiphy mutex acquired by wiphy_lock().
+ */
+void wiphy_work_flush(struct wiphy *wiphy, struct wiphy_work *work);
+
struct wiphy_delayed_work {
struct wiphy_work work;
struct wiphy *wiphy;
@@ -5870,6 +5975,17 @@ void wiphy_delayed_work_cancel(struct wiphy *wiphy,
struct wiphy_delayed_work *dwork);
/**
+ * wiphy_delayed_work_flush - flush previously queued delayed work
+ * @wiphy: the wiphy, for debug purposes
+ * @dwork: the delayed work to flush
+ *
+ * Flush the work (i.e. run it if pending). This must be called
+ * under the wiphy mutex acquired by wiphy_lock().
+ */
+void wiphy_delayed_work_flush(struct wiphy *wiphy,
+ struct wiphy_delayed_work *dwork);
+
+/**
* struct wireless_dev - wireless device state
*
* For netdevs, this structure must be allocated by the driver
@@ -5917,8 +6033,6 @@ void wiphy_delayed_work_cancel(struct wiphy *wiphy,
* @mgmt_registrations: list of registrations for management frames
* @mgmt_registrations_need_update: mgmt registrations were updated,
* need to propagate the update to the driver
- * @mtx: mutex used to lock data in this struct, may be used by drivers
- * and some API functions require it held
* @beacon_interval: beacon interval used on this device for transmitting
* beacons, 0 when not valid
* @address: The address for this device, valid only if @netdev is %NULL
@@ -5965,8 +6079,6 @@ struct wireless_dev {
struct list_head mgmt_registrations;
u8 mgmt_registrations_need_update:1;
- struct mutex mtx;
-
bool use_4addr, is_running, registered, registering;
u8 address[ETH_ALEN] __aligned(sizeof(u16));
@@ -6257,13 +6369,11 @@ ieee80211_get_response_rate(struct ieee80211_supported_band *sband,
/**
* ieee80211_mandatory_rates - get mandatory rates for a given band
* @sband: the band to look for rates in
- * @scan_width: width of the control channel
*
* This function returns a bitmap of the mandatory rates for the given
* band, bits are set according to the rate position in the bitrates array.
*/
-u32 ieee80211_mandatory_rates(struct ieee80211_supported_band *sband,
- enum nl80211_bss_scan_width scan_width);
+u32 ieee80211_mandatory_rates(struct ieee80211_supported_band *sband);
/*
* Radiotap parsing functions -- for controlled injection support
@@ -6604,7 +6714,7 @@ static inline const u8 *cfg80211_find_ie(u8 eid, const u8 *ies, int len)
* @ies: data consisting of IEs
* @len: length of data
*
- * Return: %NULL if the etended element could not be found or if
+ * Return: %NULL if the extended element could not be found or if
* the element is invalid (claims to be longer than the given
* data) or if the byte array doesn't match; otherwise return the
* requested element struct.
@@ -6751,7 +6861,7 @@ int regulatory_hint(struct wiphy *wiphy, const char *alpha2);
/**
* regulatory_set_wiphy_regd - set regdom info for self managed drivers
* @wiphy: the wireless device we want to process the regulatory domain on
- * @rd: the regulatory domain informatoin to use for this wiphy
+ * @rd: the regulatory domain information to use for this wiphy
*
* Set the regulatory domain information for self-managed wiphys, only they
* may use this function. See %REGULATORY_WIPHY_SELF_MANAGED for more
@@ -6842,7 +6952,7 @@ bool regulatory_pre_cac_allowed(struct wiphy *wiphy);
* Regulatory self-managed driver can use it to proactively
*
* @alpha2: the ISO/IEC 3166 alpha2 wmm rule to be queried.
- * @freq: the freqency(in MHz) to be queried.
+ * @freq: the frequency (in MHz) to be queried.
* @rule: pointer to store the wmm rule from the regulatory db.
*
* Self-managed wireless drivers can use this function to query
@@ -6925,22 +7035,6 @@ cfg80211_inform_bss_frame_data(struct wiphy *wiphy,
gfp_t gfp);
static inline struct cfg80211_bss * __must_check
-cfg80211_inform_bss_width_frame(struct wiphy *wiphy,
- struct ieee80211_channel *rx_channel,
- enum nl80211_bss_scan_width scan_width,
- struct ieee80211_mgmt *mgmt, size_t len,
- s32 signal, gfp_t gfp)
-{
- struct cfg80211_inform_bss data = {
- .chan = rx_channel,
- .scan_width = scan_width,
- .signal = signal,
- };
-
- return cfg80211_inform_bss_frame_data(wiphy, &data, mgmt, len, gfp);
-}
-
-static inline struct cfg80211_bss * __must_check
cfg80211_inform_bss_frame(struct wiphy *wiphy,
struct ieee80211_channel *rx_channel,
struct ieee80211_mgmt *mgmt, size_t len,
@@ -6948,7 +7042,6 @@ cfg80211_inform_bss_frame(struct wiphy *wiphy,
{
struct cfg80211_inform_bss data = {
.chan = rx_channel,
- .scan_width = NL80211_BSS_CHAN_WIDTH_20,
.signal = signal,
};
@@ -7051,26 +7144,6 @@ cfg80211_inform_bss_data(struct wiphy *wiphy,
gfp_t gfp);
static inline struct cfg80211_bss * __must_check
-cfg80211_inform_bss_width(struct wiphy *wiphy,
- struct ieee80211_channel *rx_channel,
- enum nl80211_bss_scan_width scan_width,
- enum cfg80211_bss_frame_type ftype,
- const u8 *bssid, u64 tsf, u16 capability,
- u16 beacon_interval, const u8 *ie, size_t ielen,
- s32 signal, gfp_t gfp)
-{
- struct cfg80211_inform_bss data = {
- .chan = rx_channel,
- .scan_width = scan_width,
- .signal = signal,
- };
-
- return cfg80211_inform_bss_data(wiphy, &data, ftype, bssid, tsf,
- capability, beacon_interval, ie, ielen,
- gfp);
-}
-
-static inline struct cfg80211_bss * __must_check
cfg80211_inform_bss(struct wiphy *wiphy,
struct ieee80211_channel *rx_channel,
enum cfg80211_bss_frame_type ftype,
@@ -7080,7 +7153,6 @@ cfg80211_inform_bss(struct wiphy *wiphy,
{
struct cfg80211_inform_bss data = {
.chan = rx_channel,
- .scan_width = NL80211_BSS_CHAN_WIDTH_20,
.signal = signal,
};
@@ -7165,19 +7237,6 @@ void cfg80211_bss_iter(struct wiphy *wiphy,
void *data),
void *iter_data);
-static inline enum nl80211_bss_scan_width
-cfg80211_chandef_to_scan_width(const struct cfg80211_chan_def *chandef)
-{
- switch (chandef->width) {
- case NL80211_CHAN_WIDTH_5:
- return NL80211_BSS_CHAN_WIDTH_5;
- case NL80211_CHAN_WIDTH_10:
- return NL80211_BSS_CHAN_WIDTH_10;
- default:
- return NL80211_BSS_CHAN_WIDTH_20;
- }
-}
-
/**
* cfg80211_rx_mlme_mgmt - notification of processed MLME management frame
* @dev: network device
@@ -7210,7 +7269,7 @@ void cfg80211_rx_mlme_mgmt(struct net_device *dev, const u8 *buf, size_t len);
void cfg80211_auth_timeout(struct net_device *dev, const u8 *addr);
/**
- * struct cfg80211_rx_assoc_resp - association response data
+ * struct cfg80211_rx_assoc_resp_data - association response data
* @bss: the BSS that association was requested with, ownership of the pointer
* moves to cfg80211 in the call to cfg80211_rx_assoc_resp()
* @buf: (Re)Association Response frame (header + body)
@@ -7225,7 +7284,7 @@ void cfg80211_auth_timeout(struct net_device *dev, const u8 *addr);
* @links.status: Set this (along with a BSS pointer) for links that
* were rejected by the AP.
*/
-struct cfg80211_rx_assoc_resp {
+struct cfg80211_rx_assoc_resp_data {
const u8 *buf;
size_t len;
const u8 *req_ies;
@@ -7242,7 +7301,7 @@ struct cfg80211_rx_assoc_resp {
/**
* cfg80211_rx_assoc_resp - notification of processed association response
* @dev: network device
- * @data: association response data, &struct cfg80211_rx_assoc_resp
+ * @data: association response data, &struct cfg80211_rx_assoc_resp_data
*
* After being asked to associate via cfg80211_ops::assoc() the driver must
* call either this function or cfg80211_auth_timeout().
@@ -7250,7 +7309,7 @@ struct cfg80211_rx_assoc_resp {
* This function may sleep. The caller must hold the corresponding wdev's mutex.
*/
void cfg80211_rx_assoc_resp(struct net_device *dev,
- struct cfg80211_rx_assoc_resp *data);
+ struct cfg80211_rx_assoc_resp_data *data);
/**
* struct cfg80211_assoc_failure - association failure data
@@ -7969,7 +8028,8 @@ void cfg80211_roamed(struct net_device *dev, struct cfg80211_roam_info *info,
* cfg80211_port_authorized - notify cfg80211 of successful security association
*
* @dev: network device
- * @bssid: the BSSID of the AP
+ * @peer_addr: BSSID of the AP/P2P GO in case of STA/GC or STA/GC MAC address
+ * in case of AP/P2P GO
* @td_bitmap: transition disable policy
* @td_bitmap_len: Length of transition disable policy
* @gfp: allocation flags
@@ -7980,8 +8040,11 @@ void cfg80211_roamed(struct net_device *dev, struct cfg80211_roam_info *info,
* should be preceded with a call to cfg80211_connect_result(),
* cfg80211_connect_done(), cfg80211_connect_bss() or cfg80211_roamed() to
* indicate the 802.11 association.
+ * This function can also be called by an AP/P2P GO driver that supports
+ * authentication offload. In this case the peer_addr passed is that of the
+ * associated STA/GC.
*/
-void cfg80211_port_authorized(struct net_device *dev, const u8 *bssid,
+void cfg80211_port_authorized(struct net_device *dev, const u8 *peer_addr,
const u8* td_bitmap, u8 td_bitmap_len, gfp_t gfp);
/**
@@ -8570,7 +8633,7 @@ bool cfg80211_reg_can_beacon_relax(struct wiphy *wiphy,
* @link_id: the link ID for MLO, must be 0 for non-MLO
* @punct_bitmap: the new puncturing bitmap
*
- * Caller must acquire wdev_lock, therefore must only be called from sleepable
+ * Caller must hold wiphy mutex, therefore must only be called from sleepable
* driver context!
*/
void cfg80211_ch_switch_notify(struct net_device *dev,
@@ -8810,6 +8873,18 @@ static inline size_t ieee80211_ie_split(const u8 *ies, size_t ielen,
}
/**
+ * ieee80211_fragment_element - fragment the last element in skb
+ * @skb: The skbuf that the element was added to
+ * @len_pos: Pointer to length of the element to fragment
+ * @frag_id: The element ID to use for fragments
+ *
+ * This function fragments all data after @len_pos, adding fragmentation
+ * elements with the given ID as appropriate. The SKB will grow in size
+ * accordingly.
+ */
+void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id);
+
+/**
* cfg80211_report_wowlan_wakeup - report wakeup from WoWLAN
* @wdev: the wireless device reporting the wakeup
* @wakeup: the wakeup report
@@ -9058,9 +9133,9 @@ bool cfg80211_iftype_allowed(struct wiphy *wiphy, enum nl80211_iftype iftype,
/**
* cfg80211_assoc_comeback - notification of association that was
- * temporarly rejected with a comeback
+ * temporarily rejected with a comeback
* @netdev: network device
- * @ap_addr: AP (MLD) address that rejected the assocation
+ * @ap_addr: AP (MLD) address that rejected the association
* @timeout: timeout interval value TUs.
*
* this function may sleep. the caller must hold the corresponding wdev's mutex.
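
A minimal sketch of the reworked ->change_beacon() op, which now receives struct cfg80211_ap_update wrapping the beacon data together with the FILS discovery and unsolicited broadcast probe response updates (the driver steps implied by the comments are hypothetical):

#include <net/cfg80211.h>

static int example_change_beacon(struct wiphy *wiphy, struct net_device *dev,
				 struct cfg80211_ap_update *info)
{
	/* apply new templates from &info->beacon as before */

	if (info->fils_discovery.update) {
		/* reprogram FILS discovery from &info->fils_discovery */
	}

	if (info->unsol_bcast_probe_resp.update) {
		/* reprogram from &info->unsol_bcast_probe_resp */
	}

	return 0;
}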
diff --git a/include/net/devlink.h b/include/net/devlink.h
index 29fd1b4ee654..9ac394bdfbe4 100644
--- a/include/net/devlink.h
+++ b/include/net/devlink.h
@@ -150,6 +150,7 @@ struct devlink_port {
struct devlink_rate *devlink_rate;
struct devlink_linecard *linecard;
+ u32 rel_index;
};
struct devlink_port_new_attrs {
@@ -1697,6 +1698,8 @@ void devlink_port_attrs_pci_vf_set(struct devlink_port *devlink_port, u32 contro
void devlink_port_attrs_pci_sf_set(struct devlink_port *devlink_port,
u32 controller, u16 pf, u32 sf,
bool external);
+int devl_port_fn_devlink_set(struct devlink_port *devlink_port,
+ struct devlink *fn_devlink);
struct devlink_rate *
devl_rate_node_create(struct devlink *devlink, void *priv, char *node_name,
struct devlink_rate *parent);
@@ -1717,8 +1720,8 @@ void devlink_linecard_provision_clear(struct devlink_linecard *linecard);
void devlink_linecard_provision_fail(struct devlink_linecard *linecard);
void devlink_linecard_activate(struct devlink_linecard *linecard);
void devlink_linecard_deactivate(struct devlink_linecard *linecard);
-void devlink_linecard_nested_dl_set(struct devlink_linecard *linecard,
- struct devlink *nested_devlink);
+int devlink_linecard_nested_dl_set(struct devlink_linecard *linecard,
+ struct devlink *nested_devlink);
int devl_sb_register(struct devlink *devlink, unsigned int sb_index,
u32 size, u16 ingress_pools_count,
u16 egress_pools_count, u16 ingress_tc_count,
@@ -1851,36 +1854,36 @@ int devlink_info_version_running_put_ext(struct devlink_info_req *req,
const char *version_value,
enum devlink_info_version_type version_type);
-int devlink_fmsg_obj_nest_start(struct devlink_fmsg *fmsg);
-int devlink_fmsg_obj_nest_end(struct devlink_fmsg *fmsg);
-
-int devlink_fmsg_pair_nest_start(struct devlink_fmsg *fmsg, const char *name);
-int devlink_fmsg_pair_nest_end(struct devlink_fmsg *fmsg);
-
-int devlink_fmsg_arr_pair_nest_start(struct devlink_fmsg *fmsg,
- const char *name);
-int devlink_fmsg_arr_pair_nest_end(struct devlink_fmsg *fmsg);
-int devlink_fmsg_binary_pair_nest_start(struct devlink_fmsg *fmsg,
- const char *name);
-int devlink_fmsg_binary_pair_nest_end(struct devlink_fmsg *fmsg);
-
-int devlink_fmsg_u32_put(struct devlink_fmsg *fmsg, u32 value);
-int devlink_fmsg_string_put(struct devlink_fmsg *fmsg, const char *value);
-int devlink_fmsg_binary_put(struct devlink_fmsg *fmsg, const void *value,
- u16 value_len);
-
-int devlink_fmsg_bool_pair_put(struct devlink_fmsg *fmsg, const char *name,
- bool value);
-int devlink_fmsg_u8_pair_put(struct devlink_fmsg *fmsg, const char *name,
- u8 value);
-int devlink_fmsg_u32_pair_put(struct devlink_fmsg *fmsg, const char *name,
- u32 value);
-int devlink_fmsg_u64_pair_put(struct devlink_fmsg *fmsg, const char *name,
- u64 value);
-int devlink_fmsg_string_pair_put(struct devlink_fmsg *fmsg, const char *name,
- const char *value);
-int devlink_fmsg_binary_pair_put(struct devlink_fmsg *fmsg, const char *name,
- const void *value, u32 value_len);
+void devlink_fmsg_obj_nest_start(struct devlink_fmsg *fmsg);
+void devlink_fmsg_obj_nest_end(struct devlink_fmsg *fmsg);
+
+void devlink_fmsg_pair_nest_start(struct devlink_fmsg *fmsg, const char *name);
+void devlink_fmsg_pair_nest_end(struct devlink_fmsg *fmsg);
+
+void devlink_fmsg_arr_pair_nest_start(struct devlink_fmsg *fmsg,
+ const char *name);
+void devlink_fmsg_arr_pair_nest_end(struct devlink_fmsg *fmsg);
+void devlink_fmsg_binary_pair_nest_start(struct devlink_fmsg *fmsg,
+ const char *name);
+void devlink_fmsg_binary_pair_nest_end(struct devlink_fmsg *fmsg);
+
+void devlink_fmsg_u32_put(struct devlink_fmsg *fmsg, u32 value);
+void devlink_fmsg_string_put(struct devlink_fmsg *fmsg, const char *value);
+void devlink_fmsg_binary_put(struct devlink_fmsg *fmsg, const void *value,
+ u16 value_len);
+
+void devlink_fmsg_bool_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ bool value);
+void devlink_fmsg_u8_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ u8 value);
+void devlink_fmsg_u32_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ u32 value);
+void devlink_fmsg_u64_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ u64 value);
+void devlink_fmsg_string_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ const char *value);
+void devlink_fmsg_binary_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ const void *value, u32 value_len);
struct devlink_health_reporter *
devl_port_health_reporter_create(struct devlink_port *port,
@@ -1918,6 +1921,8 @@ devlink_health_reporter_state_update(struct devlink_health_reporter *reporter,
void
devlink_health_reporter_recovery_done(struct devlink_health_reporter *reporter);
+int devl_nested_devlink_set(struct devlink *devlink,
+ struct devlink *nested_devlink);
bool devlink_is_reload_failed(const struct devlink *devlink);
void devlink_remote_reload_actions_performed(struct devlink *devlink,
enum devlink_reload_limit limit,
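
A minimal sketch of a health reporter ->diagnose() callback with the formatted-message helpers now returning void; errors are recorded inside the fmsg and surfaced when the message is consumed, so per-call error checks go away (the reported field names are illustrative):

#include <net/devlink.h>

static int example_reporter_diagnose(struct devlink_health_reporter *reporter,
				     struct devlink_fmsg *fmsg,
				     struct netlink_ext_ack *extack)
{
	devlink_fmsg_obj_nest_start(fmsg);
	devlink_fmsg_u32_pair_put(fmsg, "tx_timeouts", 0);
	devlink_fmsg_bool_pair_put(fmsg, "fw_fatal", false);
	devlink_fmsg_obj_nest_end(fmsg);

	return 0;
}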
diff --git a/include/net/dropreason-core.h b/include/net/dropreason-core.h
index a587e83fc169..3c70ad53a49c 100644
--- a/include/net/dropreason-core.h
+++ b/include/net/dropreason-core.h
@@ -20,9 +20,14 @@
FN(IP_NOPROTO) \
FN(SOCKET_RCVBUFF) \
FN(PROTO_MEM) \
+ FN(TCP_AUTH_HDR) \
FN(TCP_MD5NOTFOUND) \
FN(TCP_MD5UNEXPECTED) \
FN(TCP_MD5FAILURE) \
+ FN(TCP_AONOTFOUND) \
+ FN(TCP_AOUNEXPECTED) \
+ FN(TCP_AOKEYNOTFOUND) \
+ FN(TCP_AOFAILURE) \
FN(SOCKET_BACKLOG) \
FN(TCP_FLAGS) \
FN(TCP_ZEROWINDOW) \
@@ -80,6 +85,7 @@
FN(IPV6_NDISC_BAD_OPTIONS) \
FN(IPV6_NDISC_NS_OTHERHOST) \
FN(QUEUE_PURGE) \
+ FN(TC_ERROR) \
FNe(MAX)
/**
@@ -142,6 +148,11 @@ enum skb_drop_reason {
*/
SKB_DROP_REASON_PROTO_MEM,
/**
+ * @SKB_DROP_REASON_TCP_AUTH_HDR: TCP-MD5 or TCP-AO hashes are met
+ * twice or set incorrectly.
+ */
+ SKB_DROP_REASON_TCP_AUTH_HDR,
+ /**
* @SKB_DROP_REASON_TCP_MD5NOTFOUND: no MD5 hash and one expected,
* corresponding to LINUX_MIB_TCPMD5NOTFOUND
*/
@@ -157,6 +168,26 @@ enum skb_drop_reason {
*/
SKB_DROP_REASON_TCP_MD5FAILURE,
/**
+ * @SKB_DROP_REASON_TCP_AONOTFOUND: no TCP-AO hash and one was expected,
+ * corresponding to LINUX_MIB_TCPAOREQUIRED
+ */
+ SKB_DROP_REASON_TCP_AONOTFOUND,
+ /**
+ * @SKB_DROP_REASON_TCP_AOUNEXPECTED: TCP-AO hash is present and it
+ * was not expected, corresponding to LINUX_MIB_TCPAOKEYNOTFOUND
+ */
+ SKB_DROP_REASON_TCP_AOUNEXPECTED,
+ /**
+ * @SKB_DROP_REASON_TCP_AOKEYNOTFOUND: TCP-AO key is unknown,
+ * corresponding to LINUX_MIB_TCPAOKEYNOTFOUND
+ */
+ SKB_DROP_REASON_TCP_AOKEYNOTFOUND,
+ /**
+ * @SKB_DROP_REASON_TCP_AOFAILURE: TCP-AO hash is wrong,
+ * corresponding to LINUX_MIB_TCPAOBAD
+ */
+ SKB_DROP_REASON_TCP_AOFAILURE,
+ /**
* @SKB_DROP_REASON_SOCKET_BACKLOG: failed to add skb to socket backlog (
* see LINUX_MIB_TCPBACKLOGDROP)
*/
@@ -345,6 +376,8 @@ enum skb_drop_reason {
SKB_DROP_REASON_IPV6_NDISC_NS_OTHERHOST,
/** @SKB_DROP_REASON_QUEUE_PURGE: bulk free. */
SKB_DROP_REASON_QUEUE_PURGE,
+ /** @SKB_DROP_REASON_TC_ERROR: generic internal tc error. */
+ SKB_DROP_REASON_TC_ERROR,
/**
* @SKB_DROP_REASON_MAX: the maximum of core drop reasons, which
* shouldn't be used as a real 'reason' - only for tracing code gen
diff --git a/include/net/dsa.h b/include/net/dsa.h
index 0b9c6aa27047..82135fbdb1e6 100644
--- a/include/net/dsa.h
+++ b/include/net/dsa.h
@@ -102,11 +102,11 @@ struct dsa_device_ops {
const char *name;
enum dsa_tag_protocol proto;
/* Some tagging protocols either mangle or shift the destination MAC
- * address, in which case the DSA master would drop packets on ingress
+ * address, in which case the DSA conduit would drop packets on ingress
* if what it understands out of the destination MAC address is not in
* its RX filter.
*/
- bool promisc_on_master;
+ bool promisc_on_conduit;
};
struct dsa_lag {
@@ -236,12 +236,12 @@ struct dsa_bridge {
};
struct dsa_port {
- /* A CPU port is physically connected to a master device.
- * A user port exposed to userspace has a slave device.
+ /* A CPU port is physically connected to a conduit device. A user port
+ * exposes a network device to user-space, called 'user' here.
*/
union {
- struct net_device *master;
- struct net_device *slave;
+ struct net_device *conduit;
+ struct net_device *user;
};
/* Copy of the tagging protocol operations, for quicker access
@@ -249,7 +249,7 @@ struct dsa_port {
*/
const struct dsa_device_ops *tag_ops;
- /* Copies for faster access in master receive hot path */
+ /* Copies for faster access in conduit receive hot path */
struct dsa_switch_tree *dst;
struct sk_buff *(*rcv)(struct sk_buff *skb, struct net_device *dev);
@@ -281,9 +281,9 @@ struct dsa_port {
u8 lag_tx_enabled:1;
- /* Master state bits, valid only on CPU ports */
- u8 master_admin_up:1;
- u8 master_oper_up:1;
+ /* conduit state bits, valid only on CPU ports */
+ u8 conduit_admin_up:1;
+ u8 conduit_oper_up:1;
/* Valid only on user ports */
u8 cpu_port_in_lag:1;
@@ -303,7 +303,7 @@ struct dsa_port {
struct list_head list;
/*
- * Original copy of the master netdev ethtool_ops
+ * Original copy of the conduit netdev ethtool_ops
*/
const struct ethtool_ops *orig_ethtool_ops;
@@ -452,10 +452,10 @@ struct dsa_switch {
const struct dsa_switch_ops *ops;
/*
- * Slave mii_bus and devices for the individual ports.
+ * User mii_bus and devices for the individual ports.
*/
u32 phys_mii_mask;
- struct mii_bus *slave_mii_bus;
+ struct mii_bus *user_mii_bus;
/* Ageing Time limits in msecs */
unsigned int ageing_time_min;
@@ -520,10 +520,10 @@ static inline bool dsa_port_is_unused(struct dsa_port *dp)
return dp->type == DSA_PORT_TYPE_UNUSED;
}
-static inline bool dsa_port_master_is_operational(struct dsa_port *dp)
+static inline bool dsa_port_conduit_is_operational(struct dsa_port *dp)
{
- return dsa_port_is_cpu(dp) && dp->master_admin_up &&
- dp->master_oper_up;
+ return dsa_port_is_cpu(dp) && dp->conduit_admin_up &&
+ dp->conduit_oper_up;
}
static inline bool dsa_is_unused_port(struct dsa_switch *ds, int p)
@@ -713,12 +713,12 @@ static inline bool dsa_port_offloads_lag(struct dsa_port *dp,
return dsa_port_lag_dev_get(dp) == lag->dev;
}
-static inline struct net_device *dsa_port_to_master(const struct dsa_port *dp)
+static inline struct net_device *dsa_port_to_conduit(const struct dsa_port *dp)
{
if (dp->cpu_port_in_lag)
return dsa_port_lag_dev_get(dp->cpu_dp);
- return dp->cpu_dp->master;
+ return dp->cpu_dp->conduit;
}
static inline
@@ -732,7 +732,7 @@ struct net_device *dsa_port_to_bridge_port(const struct dsa_port *dp)
else if (dp->hsr_dev)
return dp->hsr_dev;
- return dp->slave;
+ return dp->user;
}
static inline struct net_device *
@@ -834,9 +834,9 @@ struct dsa_switch_ops {
int (*connect_tag_protocol)(struct dsa_switch *ds,
enum dsa_tag_protocol proto);
- int (*port_change_master)(struct dsa_switch *ds, int port,
- struct net_device *master,
- struct netlink_ext_ack *extack);
+ int (*port_change_conduit)(struct dsa_switch *ds, int port,
+ struct net_device *conduit,
+ struct netlink_ext_ack *extack);
/* Optional switch-wide initialization and destruction methods */
int (*setup)(struct dsa_switch *ds);
@@ -969,6 +969,16 @@ struct dsa_switch_ops {
struct phy_device *phy);
void (*port_disable)(struct dsa_switch *ds, int port);
+
+ /*
+ * Notification for MAC address changes on user ports. Drivers can
+ * currently only veto operations. They should not use the method to
+ * program the hardware, since the operation is not rolled back in case
+ * of other errors.
+ */
+ int (*port_set_mac_address)(struct dsa_switch *ds, int port,
+ const unsigned char *addr);
+
/*
* Compatibility between device trees defining multiple CPU ports and
* drivers which are not OK to use by default the numerically smallest
@@ -1198,7 +1208,8 @@ struct dsa_switch_ops {
* HSR integration
*/
int (*port_hsr_join)(struct dsa_switch *ds, int port,
- struct net_device *hsr);
+ struct net_device *hsr,
+ struct netlink_ext_ack *extack);
int (*port_hsr_leave)(struct dsa_switch *ds, int port,
struct net_device *hsr);
@@ -1222,11 +1233,11 @@ struct dsa_switch_ops {
int (*tag_8021q_vlan_del)(struct dsa_switch *ds, int port, u16 vid);
/*
- * DSA master tracking operations
+ * DSA conduit tracking operations
*/
- void (*master_state_change)(struct dsa_switch *ds,
- const struct net_device *master,
- bool operational);
+ void (*conduit_state_change)(struct dsa_switch *ds,
+ const struct net_device *conduit,
+ bool operational);
};
#define DSA_DEVLINK_PARAM_DRIVER(_id, _name, _type, _cmodes) \
@@ -1363,9 +1374,9 @@ static inline int dsa_switch_resume(struct dsa_switch *ds)
#endif /* CONFIG_PM_SLEEP */
#if IS_ENABLED(CONFIG_NET_DSA)
-bool dsa_slave_dev_check(const struct net_device *dev);
+bool dsa_user_dev_check(const struct net_device *dev);
#else
-static inline bool dsa_slave_dev_check(const struct net_device *dev)
+static inline bool dsa_user_dev_check(const struct net_device *dev)
{
return false;
}
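
A minimal sketch of the new optional ->port_set_mac_address() switch op (the address policy is just an example): per the comment in the header it may only validate or veto the change, not program hardware, since the operation is not rolled back on later errors.

#include <net/dsa.h>
#include <linux/etherdevice.h>

static int example_port_set_mac_address(struct dsa_switch *ds, int port,
					const unsigned char *addr)
{
	if (!is_valid_ether_addr(addr))
		return -EINVAL;		/* veto the change */

	return 0;			/* accept; HW is programmed elsewhere */
}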
diff --git a/include/net/dsa_stubs.h b/include/net/dsa_stubs.h
index 361811750a54..6f384897f287 100644
--- a/include/net/dsa_stubs.h
+++ b/include/net/dsa_stubs.h
@@ -13,14 +13,14 @@
extern const struct dsa_stubs *dsa_stubs;
struct dsa_stubs {
- int (*master_hwtstamp_validate)(struct net_device *dev,
- const struct kernel_hwtstamp_config *config,
- struct netlink_ext_ack *extack);
+ int (*conduit_hwtstamp_validate)(struct net_device *dev,
+ const struct kernel_hwtstamp_config *config,
+ struct netlink_ext_ack *extack);
};
-static inline int dsa_master_hwtstamp_validate(struct net_device *dev,
- const struct kernel_hwtstamp_config *config,
- struct netlink_ext_ack *extack)
+static inline int dsa_conduit_hwtstamp_validate(struct net_device *dev,
+ const struct kernel_hwtstamp_config *config,
+ struct netlink_ext_ack *extack)
{
if (!netdev_uses_dsa(dev))
return 0;
@@ -29,18 +29,18 @@ static inline int dsa_master_hwtstamp_validate(struct net_device *dev,
* netdev_uses_dsa() returns true, the dsa_core module is still
* registered, and so, dsa_unregister_stubs() couldn't have run.
* For netdev_uses_dsa() to start returning false, it would imply that
- * dsa_master_teardown() has executed, which requires rtnl_lock().
+ * dsa_conduit_teardown() has executed, which requires rtnl_lock().
*/
ASSERT_RTNL();
- return dsa_stubs->master_hwtstamp_validate(dev, config, extack);
+ return dsa_stubs->conduit_hwtstamp_validate(dev, config, extack);
}
#else
-static inline int dsa_master_hwtstamp_validate(struct net_device *dev,
- const struct kernel_hwtstamp_config *config,
- struct netlink_ext_ack *extack)
+static inline int dsa_conduit_hwtstamp_validate(struct net_device *dev,
+ const struct kernel_hwtstamp_config *config,
+ struct netlink_ext_ack *extack)
{
return 0;
}
diff --git a/include/net/dst.h b/include/net/dst.h
index 78884429deed..f5dfc8fb7b37 100644
--- a/include/net/dst.h
+++ b/include/net/dst.h
@@ -222,13 +222,6 @@ static inline unsigned long dst_metric_rtt(const struct dst_entry *dst, int metr
return msecs_to_jiffies(dst_metric(dst, metric));
}
-static inline u32
-dst_allfrag(const struct dst_entry *dst)
-{
- int ret = dst_feature(dst, RTAX_FEATURE_ALLFRAG);
- return ret;
-}
-
static inline int
dst_metric_locked(const struct dst_entry *dst, int metric)
{
@@ -392,10 +385,10 @@ static inline int dst_discard(struct sk_buff *skb)
{
return dst_discard_out(&init_net, skb->sk, skb);
}
-void *dst_alloc(struct dst_ops *ops, struct net_device *dev, int initial_ref,
+void *dst_alloc(struct dst_ops *ops, struct net_device *dev,
int initial_obsolete, unsigned short flags);
void dst_init(struct dst_entry *dst, struct dst_ops *ops,
- struct net_device *dev, int initial_ref, int initial_obsolete,
+ struct net_device *dev, int initial_obsolete,
unsigned short flags);
struct dst_entry *dst_destroy(struct dst_entry *dst);
void dst_dev_put(struct dst_entry *dst);
diff --git a/include/net/flow_offload.h b/include/net/flow_offload.h
index 9efa9a59e81f..314087a5e181 100644
--- a/include/net/flow_offload.h
+++ b/include/net/flow_offload.h
@@ -333,7 +333,7 @@ struct flow_action_entry {
struct flow_action {
unsigned int num_entries;
- struct flow_action_entry entries[];
+ struct flow_action_entry entries[] __counted_by(num_entries);
};
static inline bool flow_action_has_entries(const struct flow_action *action)
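
A minimal sketch of the __counted_by(num_entries) contract on entries[] (in-tree users normally embed flow_action inside struct flow_rule; the standalone allocation below only illustrates the annotation): the counter should be set before the flexible array is indexed so FORTIFY/UBSAN bounds checks see a valid count, and struct_size() keeps the allocation in sync with it.

#include <net/flow_offload.h>
#include <linux/slab.h>
#include <linux/overflow.h>

static struct flow_action *example_alloc_flow_action(unsigned int n)
{
	struct flow_action *action;

	action = kzalloc(struct_size(action, entries, n), GFP_KERNEL);
	if (!action)
		return NULL;

	action->num_entries = n;	/* set the counter before use */
	return action;
}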
diff --git a/include/net/ieee80211_radiotap.h b/include/net/ieee80211_radiotap.h
index 2338f8d2a8b3..925bac726a92 100644
--- a/include/net/ieee80211_radiotap.h
+++ b/include/net/ieee80211_radiotap.h
@@ -539,6 +539,12 @@ enum ieee80211_radiotap_eht_usig_common {
IEEE80211_RADIOTAP_EHT_USIG_COMMON_VALIDATE_BITS_OK = 0x00000080,
IEEE80211_RADIOTAP_EHT_USIG_COMMON_PHY_VER = 0x00007000,
IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW = 0x00038000,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_20MHZ = 0,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_40MHZ = 1,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_80MHZ = 2,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_160MHZ = 3,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_320MHZ_1 = 4,
+ IEEE80211_RADIOTAP_EHT_USIG_COMMON_BW_320MHZ_2 = 5,
IEEE80211_RADIOTAP_EHT_USIG_COMMON_UL_DL = 0x00040000,
IEEE80211_RADIOTAP_EHT_USIG_COMMON_BSS_COLOR = 0x01f80000,
IEEE80211_RADIOTAP_EHT_USIG_COMMON_TXOP = 0xfe000000,
diff --git a/include/net/if_inet6.h b/include/net/if_inet6.h
index c8490729b4ae..3e454c4d7ba6 100644
--- a/include/net/if_inet6.h
+++ b/include/net/if_inet6.h
@@ -89,7 +89,7 @@ struct ip6_sf_socklist {
unsigned int sl_max;
unsigned int sl_count;
struct rcu_head rcu;
- struct in6_addr sl_addr[];
+ struct in6_addr sl_addr[] __counted_by(sl_max);
};
#define IP6_SFBLOCK 10 /* allocate this many at once */
diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
index 5d2fcc137b88..d0a2f827d5f2 100644
--- a/include/net/inet_connection_sock.h
+++ b/include/net/inet_connection_sock.h
@@ -44,7 +44,6 @@ struct inet_connection_sock_af_ops {
struct request_sock *req_unhash,
bool *own_req);
u16 net_header_len;
- u16 net_frag_header_len;
u16 sockaddr_len;
int (*setsockopt)(struct sock *sk, int level, int optname,
sockptr_t optval, unsigned int optlen);
@@ -114,7 +113,10 @@ struct inet_connection_sock {
__u8 quick; /* Scheduled number of quick acks */
__u8 pingpong; /* The session is interactive */
__u8 retry; /* Number of attempts */
- __u32 ato; /* Predicted tick of soft clock */
+ #define ATO_BITS 8
+ __u32 ato:ATO_BITS, /* Predicted tick of soft clock */
+ lrcv_flowlabel:20, /* last received ipv6 flowlabel */
+ unused:4;
unsigned long timeout; /* Currently scheduled timeout */
__u32 lrcvtime; /* timestamp of last received data packet */
__u16 last_seg_size; /* Size of last incoming segment */
@@ -325,11 +327,10 @@ void inet_csk_update_fastreuse(struct inet_bind_bucket *tb,
struct dst_entry *inet_csk_update_pmtu(struct sock *sk, u32 mtu);
-#define TCP_PINGPONG_THRESH 1
-
static inline void inet_csk_enter_pingpong_mode(struct sock *sk)
{
- inet_csk(sk)->icsk_ack.pingpong = TCP_PINGPONG_THRESH;
+ inet_csk(sk)->icsk_ack.pingpong =
+ READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh);
}
static inline void inet_csk_exit_pingpong_mode(struct sock *sk)
@@ -339,7 +340,16 @@ static inline void inet_csk_exit_pingpong_mode(struct sock *sk)
static inline bool inet_csk_in_pingpong_mode(struct sock *sk)
{
- return inet_csk(sk)->icsk_ack.pingpong >= TCP_PINGPONG_THRESH;
+ return inet_csk(sk)->icsk_ack.pingpong >=
+ READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_pingpong_thresh);
+}
+
+static inline void inet_csk_inc_pingpong_cnt(struct sock *sk)
+{
+ struct inet_connection_sock *icsk = inet_csk(sk);
+
+ if (icsk->icsk_ack.pingpong < U8_MAX)
+ icsk->icsk_ack.pingpong++;
}
static inline bool inet_csk_has_ulp(const struct sock *sk)
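
The hunk above replaces the fixed TCP_PINGPONG_THRESH with the per-netns sysctl_tcp_pingpong_thresh and adds a saturating counter. A stand-alone sketch of the resulting semantics (the names below are local to the sketch, not kernel symbols): the counter saturates at U8_MAX, and the connection counts as interactive once the counter reaches the configured threshold.

    #include <stdint.h>
    #include <stdio.h>

    #define U8_MAX 255

    /* Stand-in for the per-netns sysctl_tcp_pingpong_thresh value. */
    static uint8_t pingpong_thresh = 1;

    /* Mirrors inet_csk_inc_pingpong_cnt(): saturate at U8_MAX. */
    static void inc_pingpong_cnt(uint8_t *pingpong)
    {
        if (*pingpong < U8_MAX)
            (*pingpong)++;
    }

    /* Mirrors inet_csk_in_pingpong_mode(): interactive once the counter
     * reaches the configured threshold.
     */
    static int in_pingpong_mode(uint8_t pingpong)
    {
        return pingpong >= pingpong_thresh;
    }

    int main(void)
    {
        uint8_t pingpong = 0;

        pingpong_thresh = 3;    /* e.g. the sysctl raised to 3 */
        for (int i = 0; i < 4; i++) {
            printf("cnt=%u interactive=%d\n", pingpong,
                   in_pingpong_mode(pingpong));
            inc_pingpong_cnt(&pingpong);
        }
        return 0;
    }
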
diff --git a/include/net/inet_sock.h b/include/net/inet_sock.h
index 2de0e4d4a027..74db6d97cae1 100644
--- a/include/net/inet_sock.h
+++ b/include/net/inet_sock.h
@@ -244,7 +244,6 @@ struct inet_sock {
};
#define IPCORK_OPT 1 /* ip-options has been held in ipcork.opt */
-#define IPCORK_ALLFRAG 2 /* always fragment (for ipv6 for now) */
enum {
INET_FLAGS_PKTINFO = 0,
@@ -268,6 +267,16 @@ enum {
INET_FLAGS_NODEFRAG = 17,
INET_FLAGS_BIND_ADDRESS_NO_PORT = 18,
INET_FLAGS_DEFER_CONNECT = 19,
+ INET_FLAGS_MC6_LOOP = 20,
+ INET_FLAGS_RECVERR6_RFC4884 = 21,
+ INET_FLAGS_MC6_ALL = 22,
+ INET_FLAGS_AUTOFLOWLABEL_SET = 23,
+ INET_FLAGS_AUTOFLOWLABEL = 24,
+ INET_FLAGS_DONTFRAG = 25,
+ INET_FLAGS_RECVERR6 = 26,
+ INET_FLAGS_REPFLOW = 27,
+ INET_FLAGS_RTALERT_ISOLATE = 28,
+ INET_FLAGS_SNDFLOW = 29,
};
/* cmsg flags for inet */
diff --git a/include/net/inet_timewait_sock.h b/include/net/inet_timewait_sock.h
index 4a8e578405cb..b14999ff55db 100644
--- a/include/net/inet_timewait_sock.h
+++ b/include/net/inet_timewait_sock.h
@@ -67,7 +67,8 @@ struct inet_timewait_sock {
/* And these are ours. */
unsigned int tw_transparent : 1,
tw_flowlabel : 20,
- tw_pad : 3, /* 3 bits hole */
+ tw_usec_ts : 1,
+ tw_pad : 2, /* 2 bits hole */
tw_tos : 8;
u32 tw_txhash;
u32 tw_priority;
diff --git a/include/net/ip.h b/include/net/ip.h
index 3489a1cca5e7..1fc4c8d69e33 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -258,7 +258,7 @@ static inline u8 ip_sendmsg_scope(const struct inet_sock *inet,
static inline __u8 get_rttos(struct ipcm_cookie* ipc, struct inet_sock *inet)
{
- return (ipc->tos != -1) ? RT_TOS(ipc->tos) : RT_TOS(inet->tos);
+ return (ipc->tos != -1) ? RT_TOS(ipc->tos) : RT_TOS(READ_ONCE(inet->tos));
}
/* datagram.c */
@@ -434,19 +434,22 @@ int ip_dont_fragment(const struct sock *sk, const struct dst_entry *dst)
static inline bool ip_sk_accept_pmtu(const struct sock *sk)
{
- return inet_sk(sk)->pmtudisc != IP_PMTUDISC_INTERFACE &&
- inet_sk(sk)->pmtudisc != IP_PMTUDISC_OMIT;
+ u8 pmtudisc = READ_ONCE(inet_sk(sk)->pmtudisc);
+
+ return pmtudisc != IP_PMTUDISC_INTERFACE &&
+ pmtudisc != IP_PMTUDISC_OMIT;
}
static inline bool ip_sk_use_pmtu(const struct sock *sk)
{
- return inet_sk(sk)->pmtudisc < IP_PMTUDISC_PROBE;
+ return READ_ONCE(inet_sk(sk)->pmtudisc) < IP_PMTUDISC_PROBE;
}
static inline bool ip_sk_ignore_df(const struct sock *sk)
{
- return inet_sk(sk)->pmtudisc < IP_PMTUDISC_DO ||
- inet_sk(sk)->pmtudisc == IP_PMTUDISC_OMIT;
+ u8 pmtudisc = READ_ONCE(inet_sk(sk)->pmtudisc);
+
+ return pmtudisc < IP_PMTUDISC_DO || pmtudisc == IP_PMTUDISC_OMIT;
}
static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
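
The ip.h conversions above read pmtudisc once through READ_ONCE() into a local before comparing it twice, so a lockless reader cannot observe two different values within one predicate. A stand-alone illustration of the pattern; READ_ONCE is open-coded here purely for the sketch (kernel code uses the macro from linux/compiler.h), and the IP_PMTUDISC_* values mirror the uapi constants.

    #include <stdio.h>

    /* Open-coded stand-in for the kernel's READ_ONCE() (sketch only). */
    #define READ_ONCE(x) (*(volatile __typeof__(x) *)&(x))

    #define IP_PMTUDISC_DO   2
    #define IP_PMTUDISC_OMIT 5

    /* Field a concurrent setsockopt() may update at any time. */
    static unsigned char sk_pmtudisc;

    static int sk_ignore_df(void)
    {
        /* Read once, then do both comparisons on the stable local copy. */
        unsigned char pmtudisc = READ_ONCE(sk_pmtudisc);

        return pmtudisc < IP_PMTUDISC_DO || pmtudisc == IP_PMTUDISC_OMIT;
    }

    int main(void)
    {
        sk_pmtudisc = IP_PMTUDISC_OMIT;
        printf("ignore_df=%d\n", sk_ignore_df());
        return 0;
    }
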
diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
index b32539bb0fb0..28b065790261 100644
--- a/include/net/ip6_route.h
+++ b/include/net/ip6_route.h
@@ -53,13 +53,12 @@ struct route_info {
*/
static inline int rt6_srcprefs2flags(unsigned int srcprefs)
{
- /* No need to bitmask because srcprefs have only 3 bits. */
- return srcprefs << 3;
+ return (srcprefs & IPV6_PREFER_SRC_MASK) << 3;
}
static inline unsigned int rt6_flags2srcprefs(int flags)
{
- return (flags >> 3) & 7;
+ return (flags >> 3) & IPV6_PREFER_SRC_MASK;
}
static inline bool rt6_need_strict(const struct in6_addr *daddr)
@@ -266,7 +265,7 @@ static inline unsigned int ip6_skb_dst_mtu(const struct sk_buff *skb)
const struct dst_entry *dst = skb_dst(skb);
unsigned int mtu;
- if (np && np->pmtudisc >= IPV6_PMTUDISC_PROBE) {
+ if (np && READ_ONCE(np->pmtudisc) >= IPV6_PMTUDISC_PROBE) {
mtu = READ_ONCE(dst->dev->mtu);
mtu -= lwtunnel_headroom(dst->lwtstate, mtu);
} else {
@@ -277,14 +276,18 @@ static inline unsigned int ip6_skb_dst_mtu(const struct sk_buff *skb)
static inline bool ip6_sk_accept_pmtu(const struct sock *sk)
{
- return inet6_sk(sk)->pmtudisc != IPV6_PMTUDISC_INTERFACE &&
- inet6_sk(sk)->pmtudisc != IPV6_PMTUDISC_OMIT;
+ u8 pmtudisc = READ_ONCE(inet6_sk(sk)->pmtudisc);
+
+ return pmtudisc != IPV6_PMTUDISC_INTERFACE &&
+ pmtudisc != IPV6_PMTUDISC_OMIT;
}
static inline bool ip6_sk_ignore_df(const struct sock *sk)
{
- return inet6_sk(sk)->pmtudisc < IPV6_PMTUDISC_DO ||
- inet6_sk(sk)->pmtudisc == IPV6_PMTUDISC_OMIT;
+ u8 pmtudisc = READ_ONCE(inet6_sk(sk)->pmtudisc);
+
+ return pmtudisc < IPV6_PMTUDISC_DO ||
+ pmtudisc == IPV6_PMTUDISC_OMIT;
}
static inline const struct in6_addr *rt6_nexthop(const struct rt6_info *rt,
diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
index 15de07d36540..d4667b7797e3 100644
--- a/include/net/ip_fib.h
+++ b/include/net/ip_fib.h
@@ -157,7 +157,7 @@ struct fib_info {
bool pfsrc_removed;
struct nexthop *nh;
struct rcu_head rcu;
- struct fib_nh fib_nh[];
+ struct fib_nh fib_nh[] __counted_by(fib_nhs);
};
diff --git a/include/net/ipv6.h b/include/net/ipv6.h
index c6932d1a3fa8..78d38dd88aba 100644
--- a/include/net/ipv6.h
+++ b/include/net/ipv6.h
@@ -373,12 +373,12 @@ static inline void ipcm6_init(struct ipcm6_cookie *ipc6)
}
static inline void ipcm6_init_sk(struct ipcm6_cookie *ipc6,
- const struct ipv6_pinfo *np)
+ const struct sock *sk)
{
*ipc6 = (struct ipcm6_cookie) {
.hlimit = -1,
- .tclass = np->tclass,
- .dontfrag = np->dontfrag,
+ .tclass = inet6_sk(sk)->tclass,
+ .dontfrag = inet6_test_bit(DONTFRAG, sk),
};
}
@@ -428,7 +428,7 @@ int ipv6_flowlabel_opt_get(struct sock *sk, struct in6_flowlabel_req *freq,
int flags);
int ip6_flowlabel_init(void);
void ip6_flowlabel_cleanup(void);
-bool ip6_autoflowlabel(struct net *net, const struct ipv6_pinfo *np);
+bool ip6_autoflowlabel(struct net *net, const struct sock *sk);
static inline void fl6_sock_release(struct ip6_flowlabel *fl)
{
@@ -914,9 +914,9 @@ static inline int ip6_sk_dst_hoplimit(struct ipv6_pinfo *np, struct flowi6 *fl6,
int hlimit;
if (ipv6_addr_is_multicast(&fl6->daddr))
- hlimit = np->mcast_hops;
+ hlimit = READ_ONCE(np->mcast_hops);
else
- hlimit = np->hop_limit;
+ hlimit = READ_ONCE(np->hop_limit);
if (hlimit < 0)
hlimit = ip6_dst_hoplimit(dst);
return hlimit;
@@ -1133,12 +1133,6 @@ struct dst_entry *ip6_dst_lookup_flow(struct net *net, const struct sock *sk, st
struct dst_entry *ip6_sk_dst_lookup_flow(struct sock *sk, struct flowi6 *fl6,
const struct in6_addr *final_dst,
bool connected);
-struct dst_entry *ip6_dst_lookup_tunnel(struct sk_buff *skb,
- struct net_device *dev,
- struct net *net, struct socket *sock,
- struct in6_addr *saddr,
- const struct ip_tunnel_info *info,
- u8 protocol, bool use_cache);
struct dst_entry *ip6_blackhole_route(struct net *net,
struct dst_entry *orig_dst);
@@ -1303,15 +1297,16 @@ static inline int ip6_sock_set_v6only(struct sock *sk)
static inline void ip6_sock_set_recverr(struct sock *sk)
{
- lock_sock(sk);
- inet6_sk(sk)->recverr = true;
- release_sock(sk);
+ inet6_set_bit(RECVERR6, sk);
}
-static inline int __ip6_sock_set_addr_preferences(struct sock *sk, int val)
+#define IPV6_PREFER_SRC_MASK (IPV6_PREFER_SRC_TMP | IPV6_PREFER_SRC_PUBLIC | \
+ IPV6_PREFER_SRC_COA)
+
+static inline int ip6_sock_set_addr_preferences(struct sock *sk, int val)
{
+ unsigned int prefmask = ~IPV6_PREFER_SRC_MASK;
unsigned int pref = 0;
- unsigned int prefmask = ~0;
/* check PUBLIC/TMP/PUBTMP_DEFAULT conflicts */
switch (val & (IPV6_PREFER_SRC_PUBLIC |
@@ -1361,20 +1356,11 @@ static inline int __ip6_sock_set_addr_preferences(struct sock *sk, int val)
return -EINVAL;
}
- inet6_sk(sk)->srcprefs = (inet6_sk(sk)->srcprefs & prefmask) | pref;
+ WRITE_ONCE(inet6_sk(sk)->srcprefs,
+ (READ_ONCE(inet6_sk(sk)->srcprefs) & prefmask) | pref);
return 0;
}
-static inline int ip6_sock_set_addr_preferences(struct sock *sk, int val)
-{
- int ret;
-
- lock_sock(sk);
- ret = __ip6_sock_set_addr_preferences(sk, val);
- release_sock(sk);
- return ret;
-}
-
static inline void ip6_sock_set_recvpktinfo(struct sock *sk)
{
lock_sock(sk);
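
The ipv6.h and inet_sock.h hunks move several per-socket booleans into the shared inet_flags bitmap and access them with inet6_set_bit()/inet6_test_bit(), so simple options such as IPV6_RECVERR no longer need lock_sock(). A stand-alone sketch of the idea only; the kernel accessors take the flag name and the socket, while this sketch merely shows atomic bit flags replacing locked booleans.

    #include <stdatomic.h>
    #include <stdio.h>

    /* Bit numbers mirroring the new INET_FLAGS_* entries in the hunk. */
    enum {
        DEMO_FLAGS_DONTFRAG = 25,
        DEMO_FLAGS_RECVERR6 = 26,
    };

    /* Stand-in for inet->inet_flags: one atomic word of boolean options. */
    static atomic_ulong demo_inet_flags;

    /* Lockless setter/getter, the reason the lock_sock()/release_sock()
     * pair around such options can be dropped.
     */
    static void demo_set_bit(int nr)
    {
        atomic_fetch_or(&demo_inet_flags, 1UL << nr);
    }

    static int demo_test_bit(int nr)
    {
        return !!(atomic_load(&demo_inet_flags) & (1UL << nr));
    }

    int main(void)
    {
        demo_set_bit(DEMO_FLAGS_RECVERR6);   /* cf. ip6_sock_set_recverr() */
        printf("recverr6=%d dontfrag=%d\n",
               demo_test_bit(DEMO_FLAGS_RECVERR6),
               demo_test_bit(DEMO_FLAGS_DONTFRAG));
        return 0;
    }
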
diff --git a/include/net/ipv6_stubs.h b/include/net/ipv6_stubs.h
index c48186bf4737..21da31e1dff5 100644
--- a/include/net/ipv6_stubs.h
+++ b/include/net/ipv6_stubs.h
@@ -85,6 +85,11 @@ struct ipv6_bpf_stub {
sockptr_t optval, unsigned int optlen);
int (*ipv6_getsockopt)(struct sock *sk, int level, int optname,
sockptr_t optval, sockptr_t optlen);
+ int (*ipv6_dev_get_saddr)(struct net *net,
+ const struct net_device *dst_dev,
+ const struct in6_addr *daddr,
+ unsigned int prefs,
+ struct in6_addr *saddr);
};
extern const struct ipv6_bpf_stub *ipv6_bpf_stub __read_mostly;
diff --git a/include/net/mac80211.h b/include/net/mac80211.h
index 7c707358d15c..580781ff9dcf 100644
--- a/include/net/mac80211.h
+++ b/include/net/mac80211.h
@@ -79,7 +79,7 @@
* helpers for sanity checking. Drivers must ensure all work added onto the
* mac80211 workqueue should be cancelled on the driver stop() callback.
*
- * mac80211 will flushed the workqueue upon interface removal and during
+ * mac80211 will flush the workqueue upon interface removal and during
* suspend.
*
* All work performed on the mac80211 workqueue must not acquire the RTNL lock.
@@ -138,7 +138,7 @@
* field to the frame RX timestamp and report the ack TX timestamp in the
* ieee80211_rx_status struct.
*
- * Similarly, To report hardware timestamps for Timing Measurement or Fine
+ * Similarly, to report hardware timestamps for Timing Measurement or Fine
* Timing Measurement frame TX, the driver should set the SKB's hwtstamp field
* to the frame TX timestamp and report the ack RX timestamp in the
* ieee80211_tx_status struct.
@@ -341,6 +341,7 @@ struct ieee80211_vif_chanctx_switch {
* @BSS_CHANGED_UNSOL_BCAST_PROBE_RESP: Unsolicited broadcast probe response
* status changed.
* @BSS_CHANGED_EHT_PUNCTURING: The channel puncturing bitmap changed.
+ * @BSS_CHANGED_MLD_VALID_LINKS: MLD valid links status changed.
*/
enum ieee80211_bss_change {
BSS_CHANGED_ASSOC = 1<<0,
@@ -376,6 +377,7 @@ enum ieee80211_bss_change {
BSS_CHANGED_FILS_DISCOVERY = 1<<30,
BSS_CHANGED_UNSOL_BCAST_PROBE_RESP = 1<<31,
BSS_CHANGED_EHT_PUNCTURING = BIT_ULL(32),
+ BSS_CHANGED_MLD_VALID_LINKS = BIT_ULL(33),
/* when adding here, make sure to change ieee80211_reconfig */
};
@@ -643,9 +645,7 @@ struct ieee80211_fils_discovery {
* @pwr_reduction: power constraint of BSS.
* @eht_support: does this BSS support EHT
* @eht_puncturing: bitmap to indicate which channels are punctured in this BSS
- * @csa_active: marks whether a channel switch is going on. Internally it is
- * write-protected by sdata_lock and local->mtx so holding either is fine
- * for read access.
+ * @csa_active: marks whether a channel switch is going on.
* @csa_punct_bitmap: new puncturing bitmap for channel switch
* @mu_mimo_owner: indicates interface owns MU-MIMO capability
* @chanctx_conf: The channel context this interface is assigned to, or %NULL
@@ -653,9 +653,7 @@ struct ieee80211_fils_discovery {
* path needing to access it; even though the netdev carrier will always
* be off when it is %NULL there can still be races and packets could be
* processed after it switches back to %NULL.
- * @color_change_active: marks whether a color change is ongoing. Internally it is
- * write-protected by sdata_lock and local->mtx so holding either is fine
- * for read access.
+ * @color_change_active: marks whether a color change is ongoing.
* @color_change_color: the bss color that will be used after the change.
* @ht_ldpc: in AP mode, indicates interface has HT LDPC capability.
* @vht_ldpc: in AP mode, indicates interface has VHT LDPC capability.
@@ -1082,6 +1080,11 @@ struct ieee80211_tx_rate {
#define IEEE80211_MAX_TX_RETRY 31
+static inline bool ieee80211_rate_valid(struct ieee80211_tx_rate *rate)
+{
+ return rate->idx >= 0 && rate->count > 0;
+}
+
static inline void ieee80211_rate_set_vht(struct ieee80211_tx_rate *rate,
u8 mcs, u8 nss)
{
@@ -1115,7 +1118,9 @@ ieee80211_rate_get_vht_nss(const struct ieee80211_tx_rate *rate)
* not valid if the interface is an MLD since we won't know which
* link the frame will be transmitted on
* @hw_queue: HW queue to put the frame on, skb_get_queue_mapping() gives the AC
- * @ack_frame_id: internal frame ID for TX status, used internally
+ * @status_data: internal data for TX status handling, assigned privately,
+ * see also &enum ieee80211_status_data for the internal documentation
+ * @status_data_idr: indicates status data is IDR allocated ID for ack frame
* @tx_time_est: TX time estimate in units of 4us, used internally
* @control: union part for control data
* @control.rates: TX rates array to try
@@ -1155,10 +1160,11 @@ struct ieee80211_tx_info {
/* common information */
u32 flags;
u32 band:3,
- ack_frame_id:13,
+ status_data_idr:1,
+ status_data:13,
hw_queue:4,
tx_time_est:10;
- /* 2 free bits */
+ /* 1 free bit */
union {
struct {
@@ -1172,7 +1178,11 @@ struct ieee80211_tx_info {
u8 use_cts_prot:1;
u8 short_preamble:1;
u8 skip_table:1;
- /* 2 bytes free */
+
+ /* for injection only (bitmap) */
+ u8 antennas:2;
+
+ /* 14 bits free */
};
/* only needed before rate control */
unsigned long jiffies;
@@ -1757,15 +1767,15 @@ struct ieee80211_channel_switch {
* @IEEE80211_VIF_GET_NOA_UPDATE: request to handle NOA attributes
* and send P2P_PS notification to the driver if NOA changed, even
* this is not pure P2P vif.
- * @IEEE80211_VIF_DISABLE_SMPS_OVERRIDE: disable user configuration of
- * SMPS mode via debugfs.
+ * @IEEE80211_VIF_EML_ACTIVE: The driver indicates that EML operation is
+ * enabled for the interface.
*/
enum ieee80211_vif_flags {
IEEE80211_VIF_BEACON_FILTER = BIT(0),
IEEE80211_VIF_SUPPORTS_CQM_RSSI = BIT(1),
IEEE80211_VIF_SUPPORTS_UAPSD = BIT(2),
IEEE80211_VIF_GET_NOA_UPDATE = BIT(3),
- IEEE80211_VIF_DISABLE_SMPS_OVERRIDE = BIT(4),
+ IEEE80211_VIF_EML_ACTIVE = BIT(4),
};
@@ -1938,7 +1948,7 @@ static inline bool ieee80211_vif_is_mld(const struct ieee80211_vif *vif)
for (link_id = 0; link_id < ARRAY_SIZE((vif)->link_conf); link_id++) \
if ((!(vif)->active_links || \
(vif)->active_links & BIT(link_id)) && \
- (link = rcu_dereference((vif)->link_conf[link_id])))
+ (link = link_conf_dereference_check(vif, link_id)))
static inline bool ieee80211_vif_is_mesh(struct ieee80211_vif *vif)
{
@@ -1971,22 +1981,18 @@ struct ieee80211_vif *wdev_to_ieee80211_vif(struct wireless_dev *wdev);
*/
struct wireless_dev *ieee80211_vif_to_wdev(struct ieee80211_vif *vif);
-/**
- * lockdep_vif_mutex_held - for lockdep checks on link poiners
- * @vif: the interface to check
- */
-static inline bool lockdep_vif_mutex_held(struct ieee80211_vif *vif)
+static inline bool lockdep_vif_wiphy_mutex_held(struct ieee80211_vif *vif)
{
- return lockdep_is_held(&ieee80211_vif_to_wdev(vif)->mtx);
+ return lockdep_is_held(&ieee80211_vif_to_wdev(vif)->wiphy->mtx);
}
#define link_conf_dereference_protected(vif, link_id) \
rcu_dereference_protected((vif)->link_conf[link_id], \
- lockdep_vif_mutex_held(vif))
+ lockdep_vif_wiphy_mutex_held(vif))
#define link_conf_dereference_check(vif, link_id) \
rcu_dereference_check((vif)->link_conf[link_id], \
- lockdep_vif_mutex_held(vif))
+ lockdep_vif_wiphy_mutex_held(vif))
/**
* enum ieee80211_key_flags - key flags
@@ -2393,7 +2399,7 @@ static inline bool lockdep_sta_mutex_held(struct ieee80211_sta *pubsta)
for (link_id = 0; link_id < ARRAY_SIZE((sta)->link); link_id++) \
if ((!(vif)->active_links || \
(vif)->active_links & BIT(link_id)) && \
- ((link_sta) = link_sta_dereference_protected(sta, link_id)))
+ ((link_sta) = link_sta_dereference_check(sta, link_id)))
/**
* enum sta_notify_cmd - sta notify command
@@ -3056,7 +3062,7 @@ void ieee80211_free_txskb(struct ieee80211_hw *hw, struct sk_buff *skb);
* The set_key() call for the %SET_KEY command should return 0 if
* the key is now in use, -%EOPNOTSUPP or -%ENOSPC if it couldn't be
* added; if you return 0 then hw_key_idx must be assigned to the
- * hardware key index, you are free to use the full u8 range.
+ * hardware key index. You are free to use the full u8 range.
*
* Note that in the case that the @IEEE80211_HW_SW_CRYPTO_CONTROL flag is
* set, mac80211 will not automatically fall back to software crypto if
@@ -3066,7 +3072,7 @@ void ieee80211_free_txskb(struct ieee80211_hw *hw, struct sk_buff *skb);
* When the cmd is %DISABLE_KEY then it must succeed.
*
* Note that it is permissible to not decrypt a frame even if a key
- * for it has been uploaded to hardware, the stack will not make any
+ * for it has been uploaded to hardware. The stack will not make any
* decision based on whether a key has been uploaded or not but rather
* based on the receive flags.
*
@@ -3081,7 +3087,7 @@ void ieee80211_free_txskb(struct ieee80211_hw *hw, struct sk_buff *skb);
* The update_tkip_key() call updates the driver with the new phase 1 key.
* This happens every time the iv16 wraps around (every 65536 packets). The
* set_key() call will happen only once for each key (unless the AP did
- * rekeying), it will not include a valid phase 1 key. The valid phase 1 key is
+ * rekeying); it will not include a valid phase 1 key. The valid phase 1 key is
* provided by update_tkip_key only. The trigger that makes mac80211 call this
* handler is software decryption with wrap around of iv16.
*
@@ -3108,7 +3114,7 @@ void ieee80211_free_txskb(struct ieee80211_hw *hw, struct sk_buff *skb);
*
* mac80211 has support for various powersave implementations.
*
- * First, it can support hardware that handles all powersaving by itself,
+ * First, it can support hardware that handles all powersaving by itself;
* such hardware should simply set the %IEEE80211_HW_SUPPORTS_PS hardware
* flag. In that case, it will be told about the desired powersave mode
* with the %IEEE80211_CONF_PS flag depending on the association status.
@@ -3133,12 +3139,12 @@ void ieee80211_free_txskb(struct ieee80211_hw *hw, struct sk_buff *skb);
* %IEEE80211_HW_PS_NULLFUNC_STACK flags. The hardware is of course still
* required to pass up beacons. The hardware is still required to handle
* waking up for multicast traffic; if it cannot the driver must handle that
- * as best as it can, mac80211 is too slow to do that.
+ * as best as it can; mac80211 is too slow to do that.
*
* Dynamic powersave is an extension to normal powersave in which the
* hardware stays awake for a user-specified period of time after sending a
* frame so that reply frames need not be buffered and therefore delayed to
- * the next wakeup. It's compromise of getting good enough latency when
+ * the next wakeup. It's a compromise of getting good enough latency when
* there's data traffic and still saving significantly power in idle
* periods.
*
@@ -3203,7 +3209,7 @@ void ieee80211_free_txskb(struct ieee80211_hw *hw, struct sk_buff *skb);
* Note that change, for the sake of simplification, also includes information
* elements appearing or disappearing from the beacon.
*
- * Some hardware supports an "ignore list" instead, just make sure nothing
+ * Some hardware supports an "ignore list" instead. Just make sure nothing
* that was requested is on the ignore list, and include commonly changing
* information element IDs in the ignore list, for example 11 (BSS load) and
* the various vendor-assigned IEs with unknown contents (128, 129, 133-136,
@@ -3214,7 +3220,7 @@ void ieee80211_free_txskb(struct ieee80211_hw *hw, struct sk_buff *skb);
* In addition to these capabilities, hardware should support notifying the
* host of changes in the beacon RSSI. This is relevant to implement roaming
* when no traffic is flowing (when traffic is flowing we see the RSSI of
- * the received data packets). This can consist in notifying the host when
+ * the received data packets). This can consist of notifying the host when
* the RSSI changes significantly or when it drops below or rises above
* configurable thresholds. In the future these thresholds will also be
* configured by mac80211 (which gets them from userspace) to implement
@@ -3361,8 +3367,8 @@ void ieee80211_free_txskb(struct ieee80211_hw *hw, struct sk_buff *skb);
* period starts for any reason, @release_buffered_frames is called
* with the number of frames to be released and which TIDs they are
* to come from. In this case, the driver is responsible for setting
- * the EOSP (for uAPSD) and MORE_DATA bits in the released frames,
- * to help the @more_data parameter is passed to tell the driver if
+ * the EOSP (for uAPSD) and MORE_DATA bits in the released frames.
+ * To help, the @more_data parameter is passed to tell the driver if
* there is more data on other TIDs -- the TIDs to release frames
* from are ignored since mac80211 doesn't know how many frames the
* buffers for those TIDs contain.
@@ -3411,7 +3417,7 @@ void ieee80211_free_txskb(struct ieee80211_hw *hw, struct sk_buff *skb);
* Additionally, the driver has to then use these HW queue IDs for the queue
* management functions (ieee80211_stop_queue() et al.)
*
- * The driver is free to set up the queue mappings as needed, multiple virtual
+ * The driver is free to set up the queue mappings as needed; multiple virtual
* interfaces may map to the same hardware queues if needed. The setup has to
* happen during add_interface or change_interface callbacks. For example, a
* driver supporting station+station and station+AP modes might decide to have
@@ -3635,11 +3641,14 @@ enum ieee80211_reconfig_type {
* @success: whether the frame exchange was successful, only
* used with the mgd_complete_tx() method, and then only
* valid for auth and (re)assoc.
+ * @link_id: the link id on which the frame will be TX'ed.
+ * Only used with the mgd_prepare_tx() method.
*/
struct ieee80211_prep_tx_info {
u16 duration;
u16 subtype;
u8 success:1;
+ int link_id;
};
/**
@@ -3863,6 +3872,10 @@ struct ieee80211_prep_tx_info {
* the station. See @sta_pre_rcu_remove if needed.
* This callback can sleep.
*
+ * @vif_add_debugfs: Drivers can use this callback to add a debugfs vif
+ * directory with its files. This callback should be within a
+ * CONFIG_MAC80211_DEBUGFS conditional. This callback can sleep.
+ *
* @link_add_debugfs: Drivers can use this callback to add debugfs files
* when a link is added to a mac80211 vif. This callback should be within
* a CONFIG_MAC80211_DEBUGFS conditional. This callback can sleep.
@@ -4067,11 +4080,15 @@ struct ieee80211_prep_tx_info {
* This callback must be atomic.
*
* @get_et_sset_count: Ethtool API to get string-set count.
+ * Note that the wiphy mutex is not held for this callback since it's
+ * expected to return a static value.
*
* @get_et_stats: Ethtool API to get a set of u64 stats.
*
* @get_et_strings: Ethtool API to get a set of strings to describe stats
* and perhaps other supported types of ethtool data-sets.
+ * Note that the wiphy mutex is not held for this callback since it's
+ * expected to return a static value.
*
* @mgd_prepare_tx: Prepare for transmitting a management frame for association
* before associated. In multi-channel scenarios, a virtual interface is
@@ -4358,6 +4375,8 @@ struct ieee80211_ops {
int (*sta_remove)(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
struct ieee80211_sta *sta);
#ifdef CONFIG_MAC80211_DEBUGFS
+ void (*vif_add_debugfs)(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif);
void (*link_add_debugfs)(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_bss_conf *link_conf,
@@ -4506,7 +4525,8 @@ struct ieee80211_ops {
struct ieee80211_prep_tx_info *info);
void (*mgd_protect_tdls_discover)(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif);
+ struct ieee80211_vif *vif,
+ unsigned int link_id);
int (*add_chanctx)(struct ieee80211_hw *hw,
struct ieee80211_chanctx_conf *ctx);
@@ -4544,7 +4564,8 @@ struct ieee80211_ops {
struct ieee80211_channel_switch *ch_switch);
int (*post_channel_switch)(struct ieee80211_hw *hw,
- struct ieee80211_vif *vif);
+ struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf);
void (*abort_channel_switch)(struct ieee80211_hw *hw,
struct ieee80211_vif *vif);
void (*channel_switch_rx_beacon)(struct ieee80211_hw *hw,
@@ -4890,7 +4911,7 @@ void ieee80211_restart_hw(struct ieee80211_hw *hw);
* for a single hardware must be synchronized against each other. Calls to
* this function, ieee80211_rx_ni() and ieee80211_rx_irqsafe() may not be
* mixed for a single hardware. Must not run concurrently with
- * ieee80211_tx_status() or ieee80211_tx_status_ni().
+ * ieee80211_tx_status_skb() or ieee80211_tx_status_ni().
*
* This function must be called with BHs disabled and RCU read lock
*
@@ -4915,7 +4936,7 @@ void ieee80211_rx_list(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
* for a single hardware must be synchronized against each other. Calls to
* this function, ieee80211_rx_ni() and ieee80211_rx_irqsafe() may not be
* mixed for a single hardware. Must not run concurrently with
- * ieee80211_tx_status() or ieee80211_tx_status_ni().
+ * ieee80211_tx_status_skb() or ieee80211_tx_status_ni().
*
* This function must be called with BHs disabled.
*
@@ -4940,7 +4961,7 @@ void ieee80211_rx_napi(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
* for a single hardware must be synchronized against each other. Calls to
* this function, ieee80211_rx_ni() and ieee80211_rx_irqsafe() may not be
* mixed for a single hardware. Must not run concurrently with
- * ieee80211_tx_status() or ieee80211_tx_status_ni().
+ * ieee80211_tx_status_skb() or ieee80211_tx_status_ni().
*
* In process context use instead ieee80211_rx_ni().
*
@@ -4960,7 +4981,7 @@ static inline void ieee80211_rx(struct ieee80211_hw *hw, struct sk_buff *skb)
*
* Calls to this function, ieee80211_rx() or ieee80211_rx_ni() may not
* be mixed for a single hardware.Must not run concurrently with
- * ieee80211_tx_status() or ieee80211_tx_status_ni().
+ * ieee80211_tx_status_skb() or ieee80211_tx_status_ni().
*
* @hw: the hardware this frame came in on
* @skb: the buffer to receive, owned by mac80211 after this call
@@ -4975,7 +4996,7 @@ void ieee80211_rx_irqsafe(struct ieee80211_hw *hw, struct sk_buff *skb);
*
* Calls to this function, ieee80211_rx() and ieee80211_rx_irqsafe() may
* not be mixed for a single hardware. Must not run concurrently with
- * ieee80211_tx_status() or ieee80211_tx_status_ni().
+ * ieee80211_tx_status_skb() or ieee80211_tx_status_ni().
*
* @hw: the hardware this frame came in on
* @skb: the buffer to receive, owned by mac80211 after this call
@@ -5151,7 +5172,7 @@ void ieee80211_tx_rate_update(struct ieee80211_hw *hw,
struct ieee80211_tx_info *info);
/**
- * ieee80211_tx_status - transmit status callback
+ * ieee80211_tx_status_skb - transmit status callback
*
* Call this function for all transmitted frames after they have been
* transmitted. It is permissible to not call this function for
@@ -5166,13 +5187,13 @@ void ieee80211_tx_rate_update(struct ieee80211_hw *hw,
* @hw: the hardware the frame was transmitted by
* @skb: the frame that was transmitted, owned by mac80211 after this call
*/
-void ieee80211_tx_status(struct ieee80211_hw *hw,
- struct sk_buff *skb);
+void ieee80211_tx_status_skb(struct ieee80211_hw *hw,
+ struct sk_buff *skb);
/**
* ieee80211_tx_status_ext - extended transmit status callback
*
- * This function can be used as a replacement for ieee80211_tx_status
+ * This function can be used as a replacement for ieee80211_tx_status_skb()
* in drivers that may want to provide extra information that does not
* fit into &struct ieee80211_tx_info.
*
@@ -5189,7 +5210,7 @@ void ieee80211_tx_status_ext(struct ieee80211_hw *hw,
/**
* ieee80211_tx_status_noskb - transmit status callback without skb
*
- * This function can be used as a replacement for ieee80211_tx_status
+ * This function can be used as a replacement for ieee80211_tx_status_skb()
* in drivers that cannot reliably map tx status information back to
* specific skbs.
*
@@ -5217,9 +5238,9 @@ static inline void ieee80211_tx_status_noskb(struct ieee80211_hw *hw,
/**
* ieee80211_tx_status_ni - transmit status callback (in process context)
*
- * Like ieee80211_tx_status() but can be called in process context.
+ * Like ieee80211_tx_status_skb() but can be called in process context.
*
- * Calls to this function, ieee80211_tx_status() and
+ * Calls to this function, ieee80211_tx_status_skb() and
* ieee80211_tx_status_irqsafe() may not be mixed
* for a single hardware.
*
@@ -5230,17 +5251,17 @@ static inline void ieee80211_tx_status_ni(struct ieee80211_hw *hw,
struct sk_buff *skb)
{
local_bh_disable();
- ieee80211_tx_status(hw, skb);
+ ieee80211_tx_status_skb(hw, skb);
local_bh_enable();
}
/**
* ieee80211_tx_status_irqsafe - IRQ-safe transmit status callback
*
- * Like ieee80211_tx_status() but can be called in IRQ context
+ * Like ieee80211_tx_status_skb() but can be called in IRQ context
* (internally defers to a tasklet.)
*
- * Calls to this function, ieee80211_tx_status() and
+ * Calls to this function, ieee80211_tx_status_skb() and
* ieee80211_tx_status_ni() may not be mixed for a single hardware.
*
* @hw: the hardware the frame was transmitted by
@@ -6542,11 +6563,14 @@ void ieee80211_radar_detected(struct ieee80211_hw *hw);
* ieee80211_chswitch_done - Complete channel switch process
* @vif: &struct ieee80211_vif pointer from the add_interface callback.
* @success: make the channel switch successful or not
+ * @link_id: the link_id on which the switch was done. Ignored if success is
+ * false.
*
* Complete the channel switch post-process: set the new operational channel
* and wake up the suspended queues.
*/
-void ieee80211_chswitch_done(struct ieee80211_vif *vif, bool success);
+void ieee80211_chswitch_done(struct ieee80211_vif *vif, bool success,
+ unsigned int link_id);
/**
* ieee80211_channel_switch_disconnect - disconnect due to channel switch error
@@ -7242,7 +7266,7 @@ ieee80211_return_txq(struct ieee80211_hw *hw, struct ieee80211_txq *txq,
*
* This function is used to check whether given txq is allowed to transmit by
* the airtime scheduler, and can be used by drivers to access the airtime
- * fairness accounting without going using the scheduling order enfored by
+ * fairness accounting without using the scheduling order enforced by
* next_txq().
*
* Returns %true if the airtime scheduler thinks the TXQ should be allowed to
diff --git a/include/net/mana/hw_channel.h b/include/net/mana/hw_channel.h
index 3d3b5c881bc1..158b125692c2 100644
--- a/include/net/mana/hw_channel.h
+++ b/include/net/mana/hw_channel.h
@@ -121,7 +121,7 @@ struct hwc_dma_buf {
u32 gpa_mkey;
u32 num_reqs;
- struct hwc_work_request reqs[];
+ struct hwc_work_request reqs[] __counted_by(num_reqs);
};
typedef void hwc_rx_event_handler_t(void *ctx, u32 gdma_rxq_id,
diff --git a/include/net/mana/mana.h b/include/net/mana/mana.h
index 4d43adf18606..6e3e9c1363db 100644
--- a/include/net/mana/mana.h
+++ b/include/net/mana/mana.h
@@ -339,7 +339,7 @@ struct mana_rxq {
/* MUST BE THE LAST MEMBER:
* Each receive buffer has an associated mana_recv_buf_oob.
*/
- struct mana_recv_buf_oob rx_oobs[];
+ struct mana_recv_buf_oob rx_oobs[] __counted_by(num_rx_buf);
};
struct mana_tx_qp {
diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
index eb6cd43b1746..13b3a4e29fdb 100644
--- a/include/net/net_namespace.h
+++ b/include/net/net_namespace.h
@@ -368,21 +368,30 @@ static inline void put_net_track(struct net *net, netns_tracker *tracker)
typedef struct {
#ifdef CONFIG_NET_NS
- struct net *net;
+ struct net __rcu *net;
#endif
} possible_net_t;
static inline void write_pnet(possible_net_t *pnet, struct net *net)
{
#ifdef CONFIG_NET_NS
- pnet->net = net;
+ rcu_assign_pointer(pnet->net, net);
#endif
}
static inline struct net *read_pnet(const possible_net_t *pnet)
{
#ifdef CONFIG_NET_NS
- return pnet->net;
+ return rcu_dereference_protected(pnet->net, true);
+#else
+ return &init_net;
+#endif
+}
+
+static inline struct net *read_pnet_rcu(possible_net_t *pnet)
+{
+#ifdef CONFIG_NET_NS
+ return rcu_dereference(pnet->net);
#else
return &init_net;
#endif
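
With possible_net_t::net now an __rcu pointer, callers that only hold rcu_read_lock() should use the new read_pnet_rcu(), while owners that pin the object can keep using read_pnet(). A kernel-style sketch of the calling convention (a fragment, not a complete module), built around a hypothetical struct; it uses only the helpers visible in this hunk plus the standard RCU read-side markers.

    #include <net/net_namespace.h>

    /* Hypothetical object carrying a netns reference (sketch only). */
    struct demo_obj {
        possible_net_t pnet;
    };

    static void demo_obj_init(struct demo_obj *obj, struct net *net)
    {
        write_pnet(&obj->pnet, net);
    }

    /* A reader protected only by RCU must use read_pnet_rcu() inside an
     * RCU read-side critical section and must not use the result after
     * rcu_read_unlock().
     */
    static void demo_obj_report(struct demo_obj *obj)
    {
        struct net *net;

        rcu_read_lock();
        net = read_pnet_rcu(&obj->pnet);
        /* ... use 'net' only while the read-side section is held ... */
        (void)net;
        rcu_read_unlock();
    }
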
diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h
index 4085765c3370..cba3ccf03fcc 100644
--- a/include/net/netfilter/nf_conntrack.h
+++ b/include/net/netfilter/nf_conntrack.h
@@ -160,10 +160,6 @@ static inline struct net *nf_ct_net(const struct nf_conn *ct)
return read_pnet(&ct->ct_net);
}
-/* Alter reply tuple (maybe alter helper). */
-void nf_conntrack_alter_reply(struct nf_conn *ct,
- const struct nf_conntrack_tuple *newreply);
-
/* Is this tuple taken? (ignoring any belonging to the given
conntrack). */
int nf_conntrack_tuple_taken(const struct nf_conntrack_tuple *tuple,
@@ -284,6 +280,16 @@ static inline bool nf_is_loopback_packet(const struct sk_buff *skb)
return skb->dev && skb->skb_iif && skb->dev->flags & IFF_LOOPBACK;
}
+static inline void nf_conntrack_alter_reply(struct nf_conn *ct,
+ const struct nf_conntrack_tuple *newreply)
+{
+ /* Must be unconfirmed, so not in hash table yet */
+ if (WARN_ON(nf_ct_is_confirmed(ct)))
+ return;
+
+ ct->tuplehash[IP_CT_DIR_REPLY].tuple = *newreply;
+}
+
#define nfct_time_stamp ((u32)(jiffies))
/* jiffies until ct expires, 0 if already expired */
diff --git a/include/net/netfilter/nf_conntrack_labels.h b/include/net/netfilter/nf_conntrack_labels.h
index fcb19a4e8f2b..6903f72bcc15 100644
--- a/include/net/netfilter/nf_conntrack_labels.h
+++ b/include/net/netfilter/nf_conntrack_labels.h
@@ -39,7 +39,7 @@ static inline struct nf_conn_labels *nf_ct_labels_ext_add(struct nf_conn *ct)
#ifdef CONFIG_NF_CONNTRACK_LABELS
struct net *net = nf_ct_net(ct);
- if (net->ct.labels_used == 0)
+ if (atomic_read(&net->ct.labels_used) == 0)
return NULL;
return nf_ct_ext_add(ct, NF_CT_EXT_LABELS, GFP_ATOMIC);
diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
index 7c816359d5a9..3bbd13ab1ecf 100644
--- a/include/net/netfilter/nf_tables.h
+++ b/include/net/netfilter/nf_tables.h
@@ -274,6 +274,9 @@ struct nft_userdata {
unsigned char data[];
};
+/* placeholder structure for opaque set element backend representation. */
+struct nft_elem_priv { };
+
/**
* struct nft_set_elem - generic representation of set elements
*
@@ -294,9 +297,14 @@ struct nft_set_elem {
u32 buf[NFT_DATA_VALUE_MAXLEN / sizeof(u32)];
struct nft_data val;
} data;
- void *priv;
+ struct nft_elem_priv *priv;
};
+static inline void *nft_elem_priv_cast(const struct nft_elem_priv *priv)
+{
+ return (void *)priv;
+}
+
struct nft_set;
struct nft_set_iter {
u8 genmask;
@@ -306,7 +314,7 @@ struct nft_set_iter {
int (*fn)(const struct nft_ctx *ctx,
struct nft_set *set,
const struct nft_set_iter *iter,
- struct nft_set_elem *elem);
+ struct nft_elem_priv *elem_priv);
};
/**
@@ -430,7 +438,8 @@ struct nft_set_ops {
const struct nft_set_ext **ext);
bool (*update)(struct nft_set *set,
const u32 *key,
- void *(*new)(struct nft_set *,
+ struct nft_elem_priv *
+ (*new)(struct nft_set *,
const struct nft_expr *,
struct nft_regs *),
const struct nft_expr *expr,
@@ -442,27 +451,27 @@ struct nft_set_ops {
int (*insert)(const struct net *net,
const struct nft_set *set,
const struct nft_set_elem *elem,
- struct nft_set_ext **ext);
+ struct nft_elem_priv **priv);
void (*activate)(const struct net *net,
const struct nft_set *set,
- const struct nft_set_elem *elem);
- void * (*deactivate)(const struct net *net,
+ struct nft_elem_priv *elem_priv);
+ struct nft_elem_priv * (*deactivate)(const struct net *net,
const struct nft_set *set,
const struct nft_set_elem *elem);
- bool (*flush)(const struct net *net,
+ void (*flush)(const struct net *net,
const struct nft_set *set,
- void *priv);
+ struct nft_elem_priv *priv);
void (*remove)(const struct net *net,
const struct nft_set *set,
- const struct nft_set_elem *elem);
+ struct nft_elem_priv *elem_priv);
void (*walk)(const struct nft_ctx *ctx,
struct nft_set *set,
struct nft_set_iter *iter);
- void * (*get)(const struct net *net,
+ struct nft_elem_priv * (*get)(const struct net *net,
const struct nft_set *set,
const struct nft_set_elem *elem,
unsigned int flags);
- void (*commit)(const struct nft_set *set);
+ void (*commit)(struct nft_set *set);
void (*abort)(const struct nft_set *set);
u64 (*privsize)(const struct nlattr * const nla[],
const struct nft_set_desc *desc);
@@ -796,9 +805,9 @@ static inline bool nft_set_elem_expired(const struct nft_set_ext *ext)
}
static inline struct nft_set_ext *nft_set_elem_ext(const struct nft_set *set,
- void *elem)
+ const struct nft_elem_priv *elem_priv)
{
- return elem + set->ops->elemsize;
+ return (void *)elem_priv + set->ops->elemsize;
}
static inline struct nft_object **nft_set_ext_obj(const struct nft_set_ext *ext)
@@ -810,16 +819,19 @@ struct nft_expr *nft_set_elem_expr_alloc(const struct nft_ctx *ctx,
const struct nft_set *set,
const struct nlattr *attr);
-void *nft_set_elem_init(const struct nft_set *set,
- const struct nft_set_ext_tmpl *tmpl,
- const u32 *key, const u32 *key_end, const u32 *data,
- u64 timeout, u64 expiration, gfp_t gfp);
+struct nft_elem_priv *nft_set_elem_init(const struct nft_set *set,
+ const struct nft_set_ext_tmpl *tmpl,
+ const u32 *key, const u32 *key_end,
+ const u32 *data,
+ u64 timeout, u64 expiration, gfp_t gfp);
int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set,
struct nft_expr *expr_array[]);
-void nft_set_elem_destroy(const struct nft_set *set, void *elem,
+void nft_set_elem_destroy(const struct nft_set *set,
+ const struct nft_elem_priv *elem_priv,
bool destroy_expr);
void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
- const struct nft_set *set, void *elem);
+ const struct nft_set *set,
+ const struct nft_elem_priv *elem_priv);
struct nft_expr_ops;
/**
@@ -1061,7 +1073,7 @@ struct nft_chain {
int nft_chain_validate(const struct nft_ctx *ctx, const struct nft_chain *chain);
int nft_setelem_validate(const struct nft_ctx *ctx, struct nft_set *set,
const struct nft_set_iter *iter,
- struct nft_set_elem *elem);
+ struct nft_elem_priv *elem_priv);
int nft_set_catchall_validate(const struct nft_ctx *ctx, struct nft_set *set);
int nf_tables_bind_chain(const struct nft_ctx *ctx, struct nft_chain *chain);
void nf_tables_unbind_chain(const struct nft_ctx *ctx, struct nft_chain *chain);
@@ -1198,10 +1210,13 @@ static inline void nft_use_inc_restore(u32 *use)
* @hgenerator: handle generator state
* @handle: table handle
* @use: number of chain references to this table
+ * @family: address family
* @flags: table flag (see enum nft_table_flags)
* @genmask: generation mask
- * @afinfo: address family info
+ * @nlpid: netlink port ID
* @name: name of the table
+ * @udlen: length of the user data
+ * @udata: user data
* @validate_state: internal, set when transaction adds jumps
*/
struct nft_table {
@@ -1635,14 +1650,14 @@ struct nft_trans_table {
struct nft_trans_elem {
struct nft_set *set;
- struct nft_set_elem elem;
+ struct nft_elem_priv *elem_priv;
bool bound;
};
#define nft_trans_elem_set(trans) \
(((struct nft_trans_elem *)trans->data)->set)
-#define nft_trans_elem(trans) \
- (((struct nft_trans_elem *)trans->data)->elem)
+#define nft_trans_elem_priv(trans) \
+ (((struct nft_trans_elem *)trans->data)->elem_priv)
#define nft_trans_elem_set_bound(trans) \
(((struct nft_trans_elem *)trans->data)->bound)
@@ -1683,7 +1698,7 @@ struct nft_trans_gc {
struct nft_set *set;
u32 seq;
u16 count;
- void *priv[NFT_TRANS_GC_BATCHCOUNT];
+ struct nft_elem_priv *priv[NFT_TRANS_GC_BATCHCOUNT];
struct rcu_head rcu;
};
@@ -1706,7 +1721,7 @@ struct nft_trans_gc *nft_trans_gc_catchall_sync(struct nft_trans_gc *gc);
void nft_setelem_data_deactivate(const struct net *net,
const struct nft_set *set,
- struct nft_set_elem *elem);
+ struct nft_elem_priv *elem_priv);
int __init nft_chain_filter_init(void);
void nft_chain_filter_fini(void);
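
The switch from void * to the empty struct nft_elem_priv gives set-element cookies a distinct pointer type, so mixing them up with unrelated pointers becomes a compile error while backends keep their layout private; nft_elem_priv_cast() converts back when the backend needs its own type. A stand-alone sketch of this typed-opaque-handle pattern with hypothetical names (an empty struct is a GNU C extension, as used by the kernel):

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Empty marker type: pointers to it are type-checked, yet reveal
     * nothing about the backend's layout (GNU C extension).
     */
    struct elem_priv { };

    /* Hypothetical backend element embedding the marker. */
    struct backend_elem {
        struct elem_priv priv;
        int key;
    };

    static struct elem_priv *backend_elem_new(int key)
    {
        struct backend_elem *e = calloc(1, sizeof(*e));

        if (!e)
            return NULL;
        e->key = key;
        return &e->priv;                /* hand out the opaque handle */
    }

    /* Counterpart of nft_elem_priv_cast(): recover the backend type. */
    static struct backend_elem *backend_elem_cast(struct elem_priv *priv)
    {
        return (struct backend_elem *)((char *)priv -
                                       offsetof(struct backend_elem, priv));
    }

    int main(void)
    {
        struct elem_priv *h = backend_elem_new(42);

        if (h) {
            printf("key=%d\n", backend_elem_cast(h)->key);
            free(backend_elem_cast(h));
        }
        return 0;
    }
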
diff --git a/include/net/netkit.h b/include/net/netkit.h
new file mode 100644
index 000000000000..0ba2e6b847ca
--- /dev/null
+++ b/include/net/netkit.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (c) 2023 Isovalent */
+#ifndef __NET_NETKIT_H
+#define __NET_NETKIT_H
+
+#include <linux/bpf.h>
+
+#ifdef CONFIG_NETKIT
+int netkit_prog_attach(const union bpf_attr *attr, struct bpf_prog *prog);
+int netkit_link_attach(const union bpf_attr *attr, struct bpf_prog *prog);
+int netkit_prog_detach(const union bpf_attr *attr, struct bpf_prog *prog);
+int netkit_prog_query(const union bpf_attr *attr, union bpf_attr __user *uattr);
+#else
+static inline int netkit_prog_attach(const union bpf_attr *attr,
+ struct bpf_prog *prog)
+{
+ return -EINVAL;
+}
+
+static inline int netkit_link_attach(const union bpf_attr *attr,
+ struct bpf_prog *prog)
+{
+ return -EINVAL;
+}
+
+static inline int netkit_prog_detach(const union bpf_attr *attr,
+ struct bpf_prog *prog)
+{
+ return -EINVAL;
+}
+
+static inline int netkit_prog_query(const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+{
+ return -EINVAL;
+}
+#endif /* CONFIG_NETKIT */
+#endif /* __NET_NETKIT_H */
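
The #else branch of the new header shows the usual pattern for optional features: static inline stubs that return -EINVAL keep call sites free of #ifdef CONFIG_NETKIT while still compiling when the feature is disabled. A tiny stand-alone sketch of the same pattern under a hypothetical config symbol:

    #include <errno.h>
    #include <stdio.h>

    /* #define CONFIG_DEMO_FEATURE 1 -- toggle to mimic the Kconfig option */

    #ifdef CONFIG_DEMO_FEATURE
    int demo_prog_attach(int prog_fd);      /* provided by the feature */
    #else
    /* Stub: the call site always compiles and simply sees -EINVAL. */
    static inline int demo_prog_attach(int prog_fd)
    {
        (void)prog_fd;
        return -EINVAL;
    }
    #endif

    int main(void)
    {
        printf("attach -> %d\n", demo_prog_attach(3));
        return 0;
    }
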
diff --git a/include/net/netlink.h b/include/net/netlink.h
index 8a7cd1170e1f..83bdf787aeee 100644
--- a/include/net/netlink.h
+++ b/include/net/netlink.h
@@ -128,6 +128,8 @@
* nla_len(nla) length of attribute payload
*
* Attribute Payload Access for Basic Types:
+ * nla_get_uint(nla) get payload for a uint attribute
+ * nla_get_sint(nla) get payload for a sint attribute
* nla_get_u8(nla) get payload for a u8 attribute
* nla_get_u16(nla) get payload for a u16 attribute
* nla_get_u32(nla) get payload for a u32 attribute
@@ -183,6 +185,8 @@ enum {
NLA_REJECT,
NLA_BE16,
NLA_BE32,
+ NLA_SINT,
+ NLA_UINT,
__NLA_TYPE_MAX,
};
@@ -229,6 +233,7 @@ enum nla_policy_validation {
* nested header (or empty); len field is used if
* nested_policy is also used, for the max attr
* number in the nested policy.
+ * NLA_SINT, NLA_UINT,
* NLA_U8, NLA_U16,
* NLA_U32, NLA_U64,
* NLA_S8, NLA_S16,
@@ -260,12 +265,14 @@ enum nla_policy_validation {
* while an array has the nested attributes at another
* level down and the attribute types directly in the
* nesting don't matter.
+ * NLA_UINT,
* NLA_U8,
* NLA_U16,
* NLA_U32,
* NLA_U64,
* NLA_BE16,
* NLA_BE32,
+ * NLA_SINT,
* NLA_S8,
* NLA_S16,
* NLA_S32,
@@ -280,6 +287,7 @@ enum nla_policy_validation {
* or NLA_POLICY_FULL_RANGE_SIGNED() macros instead.
* Use the NLA_POLICY_MIN(), NLA_POLICY_MAX() and
* NLA_POLICY_RANGE() macros.
+ * NLA_UINT,
* NLA_U8,
* NLA_U16,
* NLA_U32,
@@ -288,6 +296,7 @@ enum nla_policy_validation {
* to a struct netlink_range_validation that indicates
* the min/max values.
* Use NLA_POLICY_FULL_RANGE().
+ * NLA_SINT,
* NLA_S8,
* NLA_S16,
* NLA_S32,
@@ -351,8 +360,8 @@ struct nla_policy {
const u32 mask;
const char *reject_message;
const struct nla_policy *nested_policy;
- struct netlink_range_validation *range;
- struct netlink_range_validation_signed *range_signed;
+ const struct netlink_range_validation *range;
+ const struct netlink_range_validation_signed *range_signed;
struct {
s16 min, max;
};
@@ -377,9 +386,11 @@ struct nla_policy {
#define __NLA_IS_UINT_TYPE(tp) \
(tp == NLA_U8 || tp == NLA_U16 || tp == NLA_U32 || \
- tp == NLA_U64 || tp == NLA_BE16 || tp == NLA_BE32)
+ tp == NLA_U64 || tp == NLA_UINT || \
+ tp == NLA_BE16 || tp == NLA_BE32)
#define __NLA_IS_SINT_TYPE(tp) \
- (tp == NLA_S8 || tp == NLA_S16 || tp == NLA_S32 || tp == NLA_S64)
+ (tp == NLA_S8 || tp == NLA_S16 || tp == NLA_S32 || tp == NLA_S64 || \
+ tp == NLA_SINT)
#define __NLA_ENSURE(condition) BUILD_BUG_ON_ZERO(!(condition))
#define NLA_ENSURE_UINT_TYPE(tp) \
@@ -1358,6 +1369,22 @@ static inline int nla_put_u32(struct sk_buff *skb, int attrtype, u32 value)
}
/**
+ * nla_put_uint - Add a variable-size unsigned int to a socket buffer
+ * @skb: socket buffer to add attribute to
+ * @attrtype: attribute type
+ * @value: numeric value
+ */
+static inline int nla_put_uint(struct sk_buff *skb, int attrtype, u64 value)
+{
+ u64 tmp64 = value;
+ u32 tmp32 = value;
+
+ if (tmp64 == tmp32)
+ return nla_put_u32(skb, attrtype, tmp32);
+ return nla_put(skb, attrtype, sizeof(u64), &tmp64);
+}
+
+/**
* nla_put_be32 - Add a __be32 netlink attribute to a socket buffer
* @skb: socket buffer to add attribute to
* @attrtype: attribute type
@@ -1512,6 +1539,22 @@ static inline int nla_put_s64(struct sk_buff *skb, int attrtype, s64 value,
}
/**
+ * nla_put_sint - Add a variable-size signed int to a socket buffer
+ * @skb: socket buffer to add attribute to
+ * @attrtype: attribute type
+ * @value: numeric value
+ */
+static inline int nla_put_sint(struct sk_buff *skb, int attrtype, s64 value)
+{
+ s64 tmp64 = value;
+ s32 tmp32 = value;
+
+ if (tmp64 == tmp32)
+ return nla_put_s32(skb, attrtype, tmp32);
+ return nla_put(skb, attrtype, sizeof(s64), &tmp64);
+}
+
+/**
* nla_put_string - Add a string netlink attribute to a socket buffer
* @skb: socket buffer to add attribute to
* @attrtype: attribute type
@@ -1668,6 +1711,17 @@ static inline u64 nla_get_u64(const struct nlattr *nla)
}
/**
+ * nla_get_uint - return payload of uint attribute
+ * @nla: uint netlink attribute
+ */
+static inline u64 nla_get_uint(const struct nlattr *nla)
+{
+ if (nla_len(nla) == sizeof(u32))
+ return nla_get_u32(nla);
+ return nla_get_u64(nla);
+}
+
+/**
* nla_get_be64 - return payload of __be64 attribute
* @nla: __be64 netlink attribute
*/
@@ -1730,6 +1784,17 @@ static inline s64 nla_get_s64(const struct nlattr *nla)
}
/**
+ * nla_get_sint - return payload of sint attribute
+ * @nla: sint netlink attribute
+ */
+static inline s64 nla_get_sint(const struct nlattr *nla)
+{
+ if (nla_len(nla) == sizeof(s32))
+ return nla_get_s32(nla);
+ return nla_get_s64(nla);
+}
+
+/**
* nla_get_flag - return payload of flag attribute
* @nla: flag netlink attribute
*/
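
NLA_UINT and NLA_SINT carry a value as 32 bits when it fits and as 64 bits otherwise, and the getters above pick the width from the attribute length. A stand-alone sketch of just that width-selection rule, with no netlink sockets involved, names local to the sketch, and host byte order for simplicity:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Mirrors nla_put_uint(): 4 bytes when the value round-trips
     * through u32, otherwise 8 bytes.
     */
    static size_t uint_encode(uint64_t value, unsigned char *buf)
    {
        if (value == (uint32_t)value) {
            uint32_t v32 = (uint32_t)value;

            memcpy(buf, &v32, sizeof(v32));
            return sizeof(v32);
        }
        memcpy(buf, &value, sizeof(value));
        return sizeof(value);
    }

    /* Mirrors nla_get_uint(): the receiver keys off the payload length. */
    static uint64_t uint_decode(const unsigned char *buf, size_t len)
    {
        if (len == sizeof(uint32_t)) {
            uint32_t v32;

            memcpy(&v32, buf, sizeof(v32));
            return v32;
        } else {
            uint64_t v64;

            memcpy(&v64, buf, sizeof(v64));
            return v64;
        }
    }

    int main(void)
    {
        uint64_t vals[] = { 7, 0xffffffffULL, 0x100000000ULL };
        unsigned char buf[8];

        for (int i = 0; i < 3; i++) {
            size_t len = uint_encode(vals[i], buf);

            printf("%llu -> %zu bytes -> %llu\n",
                   (unsigned long long)vals[i], len,
                   (unsigned long long)uint_decode(buf, len));
        }
        return 0;
    }
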
diff --git a/include/net/netns/conntrack.h b/include/net/netns/conntrack.h
index 1f463b3957c7..bae914815aa3 100644
--- a/include/net/netns/conntrack.h
+++ b/include/net/netns/conntrack.h
@@ -107,7 +107,7 @@ struct netns_ct {
struct nf_ct_event_notifier __rcu *nf_conntrack_event_cb;
struct nf_ip_net nf_ct_proto;
#if defined(CONFIG_NF_CONNTRACK_LABELS)
- unsigned int labels_used;
+ atomic_t labels_used;
#endif
};
#endif
diff --git a/include/net/netns/ipv4.h b/include/net/netns/ipv4.h
index 7a41c4791536..73f43f699199 100644
--- a/include/net/netns/ipv4.h
+++ b/include/net/netns/ipv4.h
@@ -132,6 +132,9 @@ struct netns_ipv4 {
u8 sysctl_tcp_syncookies;
u8 sysctl_tcp_migrate_req;
u8 sysctl_tcp_comp_sack_nr;
+ u8 sysctl_tcp_backlog_ack_defer;
+ u8 sysctl_tcp_pingpong_thresh;
+
int sysctl_tcp_reordering;
u8 sysctl_tcp_retries1;
u8 sysctl_tcp_retries2;
diff --git a/include/net/nexthop.h b/include/net/nexthop.h
index 2b12725de9c0..d92046a4a078 100644
--- a/include/net/nexthop.h
+++ b/include/net/nexthop.h
@@ -92,7 +92,7 @@ struct nh_res_table {
u32 unbalanced_timer;
u16 num_nh_buckets;
- struct nh_res_bucket nh_buckets[];
+ struct nh_res_bucket nh_buckets[] __counted_by(num_nh_buckets);
};
struct nh_grp_entry {
@@ -126,7 +126,7 @@ struct nh_group {
bool has_v4;
struct nh_res_table __rcu *res_table;
- struct nh_grp_entry nh_entries[];
+ struct nh_grp_entry nh_entries[] __counted_by(num_nh);
};
struct nexthop {
@@ -187,7 +187,7 @@ struct nh_notifier_grp_entry_info {
struct nh_notifier_grp_info {
u16 num_nh;
bool is_fdb;
- struct nh_notifier_grp_entry_info nh_entries[];
+ struct nh_notifier_grp_entry_info nh_entries[] __counted_by(num_nh);
};
struct nh_notifier_res_bucket_info {
@@ -200,7 +200,7 @@ struct nh_notifier_res_bucket_info {
struct nh_notifier_res_table_info {
u16 num_nh_buckets;
- struct nh_notifier_single_info nhs[];
+ struct nh_notifier_single_info nhs[] __counted_by(num_nh_buckets);
};
struct nh_notifier_info {
diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
index 8e7751464ff5..4ebd544ae977 100644
--- a/include/net/page_pool/helpers.h
+++ b/include/net/page_pool/helpers.h
@@ -8,23 +8,46 @@
/**
* DOC: page_pool allocator
*
- * The page_pool allocator is optimized for the XDP mode that
- * uses one frame per-page, but it can fallback on the
- * regular page allocator APIs.
- *
- * Basic use involves replacing alloc_pages() calls with the
- * page_pool_alloc_pages() call. Drivers should use
- * page_pool_dev_alloc_pages() replacing dev_alloc_pages().
- *
- * The API keeps track of in-flight pages, in order to let API users know
- * when it is safe to free a page_pool object. Thus, API users
- * must call page_pool_put_page() to free the page, or attach
- * the page to a page_pool-aware object like skbs marked with
+ * The page_pool allocator is optimized for recycling pages or page fragments
+ * used by skb packets and xdp frames.
+ *
+ * Basic use involves replacing alloc_pages() calls with page_pool_alloc(),
+ * which allocates memory with or without page splitting depending on the
+ * requested memory size.
+ *
+ * If the driver knows that it always requires full pages or its allocations are
+ * always smaller than half a page, it can use one of the more specific API
+ * calls:
+ *
+ * 1. page_pool_alloc_pages(): allocate memory without page splitting when the
+ * driver knows that the memory it needs is always bigger than half of the page
+ * allocated from the page pool. There is no cache line dirtying for
+ * 'struct page' when a page is recycled back to the page pool.
+ *
+ * 2. page_pool_alloc_frag(): allocate memory with page splitting when the
+ * driver knows that the memory it needs is always smaller than or equal to
+ * half of the page allocated from the page pool. Page splitting enables memory
+ * saving and thus avoids TLB/cache misses for data access, but there is also
+ * some cost to implement page splitting, mainly some cache line
+ * dirtying/bouncing for 'struct page' and an atomic operation for
+ * page->pp_frag_count.
+ *
+ * The API keeps track of in-flight pages in order to let API users know when
+ * it is safe to free a page_pool object. To that end, API users must call
+ * page_pool_put_page() or page_pool_free_va() to free the page, or attach
+ * the page to a page_pool-aware object like skbs marked with
* skb_mark_for_recycle().
*
- * API users must call page_pool_put_page() once on a page, as it
- * will either recycle the page, or in case of refcnt > 1, it will
- * release the DMA mapping and in-flight state accounting.
+ * page_pool_put_page() may be called multiple times on the same page if a page
+ * is split into multiple fragments. For the last fragment, it will either recycle the
+ * page, or in case of page->_refcount > 1, it will release the DMA mapping and
+ * in-flight state accounting.
+ *
+ * dma_sync_single_range_for_device() is only called for the last fragment when
+ * the page_pool is created with the PP_FLAG_DMA_SYNC_DEV flag, so it relies on
+ * the last freed fragment to do the sync_for_device operation for all fragments
+ * of the same page when a page is split. The API user must set up
+ * pool->p.max_len and pool->p.offset correctly and ensure that
+ * page_pool_put_page() is called with dma_sync_size being -1 for the fragment
+ * API.
*/
#ifndef _NET_PAGE_POOL_HELPERS_H
#define _NET_PAGE_POOL_HELPERS_H
@@ -73,6 +96,17 @@ static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
return page_pool_alloc_pages(pool, gfp);
}
+/**
+ * page_pool_dev_alloc_frag() - allocate a page fragment.
+ * @pool: pool from which to allocate
+ * @offset: offset to the allocated page
+ * @size: requested size
+ *
+ * Get a page fragment from the page allocator or page_pool caches.
+ *
+ * Return:
+ * Return the allocated page fragment, otherwise return NULL.
+ */
static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
unsigned int *offset,
unsigned int size)
@@ -82,6 +116,91 @@ static inline struct page *page_pool_dev_alloc_frag(struct page_pool *pool,
return page_pool_alloc_frag(pool, offset, size, gfp);
}
+static inline struct page *page_pool_alloc(struct page_pool *pool,
+ unsigned int *offset,
+ unsigned int *size, gfp_t gfp)
+{
+ unsigned int max_size = PAGE_SIZE << pool->p.order;
+ struct page *page;
+
+ if ((*size << 1) > max_size) {
+ *size = max_size;
+ *offset = 0;
+ return page_pool_alloc_pages(pool, gfp);
+ }
+
+ page = page_pool_alloc_frag(pool, offset, *size, gfp);
+ if (unlikely(!page))
+ return NULL;
+
+ /* There is very likely not enough space for another fragment, so append
+ * the remaining size to the current fragment to avoid the truesize
+ * underestimation problem.
+ */
+ if (pool->frag_offset + *size > max_size) {
+ *size = max_size - *offset;
+ pool->frag_offset = max_size;
+ }
+
+ return page;
+}
+
+/**
+ * page_pool_dev_alloc() - allocate a page or a page fragment.
+ * @pool: pool from which to allocate
+ * @offset: offset to the allocated page
+ * @size: in as the requested size, out as the allocated size
+ *
+ * Get a page or a page fragment from the page allocator or page_pool caches
+ * depending on the requested size, in order to allocate memory with the least
+ * memory utilization and performance penalty.
+ *
+ * Return:
+ * Return the allocated page or page fragment, otherwise return NULL.
+ */
+static inline struct page *page_pool_dev_alloc(struct page_pool *pool,
+ unsigned int *offset,
+ unsigned int *size)
+{
+ gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
+
+ return page_pool_alloc(pool, offset, size, gfp);
+}
+
+static inline void *page_pool_alloc_va(struct page_pool *pool,
+ unsigned int *size, gfp_t gfp)
+{
+ unsigned int offset;
+ struct page *page;
+
+ /* Mask off __GFP_HIGHMEM to ensure we can use page_address() */
+ page = page_pool_alloc(pool, &offset, size, gfp & ~__GFP_HIGHMEM);
+ if (unlikely(!page))
+ return NULL;
+
+ return page_address(page) + offset;
+}
+
+/**
+ * page_pool_dev_alloc_va() - allocate a page or a page fragment and return its
+ * va.
+ * @pool: pool from which to allocate
+ * @size: in as the requested size, out as the allocated size
+ *
+ * This is just a thin wrapper around the page_pool_alloc() API, and
+ * it returns va of the allocated page or page fragment.
+ *
+ * Return:
+ * Return the va for the allocated page or page fragment, otherwise return NULL.
+ */
+static inline void *page_pool_dev_alloc_va(struct page_pool *pool,
+ unsigned int *size)
+{
+ gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);
+
+ return page_pool_alloc_va(pool, size, gfp);
+}
+
/**
* page_pool_get_dma_dir() - Retrieve the stored DMA direction.
* @pool: pool from which page was allocated
@@ -115,28 +234,49 @@ static inline long page_pool_defrag_page(struct page *page, long nr)
long ret;
/* If nr == pp_frag_count then we have cleared all remaining
- * references to the page. No need to actually overwrite it, instead
- * we can leave this to be overwritten by the calling function.
+ * references to the page:
+ * 1. 'nr == 1': no need to actually overwrite it.
+ * 2. 'nr != 1': overwrite it with one, which is the rare case
+ * for pp_frag_count draining.
*
- * The main advantage to doing this is that an atomic_read is
- * generally a much cheaper operation than an atomic update,
- * especially when dealing with a page that may be partitioned
- * into only 2 or 3 pieces.
+ * The main advantage of doing this is that we not only avoid an atomic
+ * update, as an atomic_read is generally a much cheaper operation than
+ * an atomic update, especially when dealing with a page that may be
+ * partitioned into only 2 or 3 pieces; we also unify the pp_frag_count
+ * handling by ensuring all pages are partitioned into only 1 piece
+ * initially, and only overwrite it when the page is partitioned into
+ * more than one piece.
*/
- if (atomic_long_read(&page->pp_frag_count) == nr)
+ if (atomic_long_read(&page->pp_frag_count) == nr) {
+ /* As we have ensured nr is always one for the constant case using
+ * the BUILD_BUG_ON(), we only need to handle the non-constant case
+ * here for pp_frag_count draining, which is a rare case.
+ */
+ BUILD_BUG_ON(__builtin_constant_p(nr) && nr != 1);
+ if (!__builtin_constant_p(nr))
+ atomic_long_set(&page->pp_frag_count, 1);
+
return 0;
+ }
ret = atomic_long_sub_return(nr, &page->pp_frag_count);
WARN_ON(ret < 0);
+
+ /* We are the last user here too; reset pp_frag_count back to 1 to
+ * ensure all pages have been partitioned into 1 piece initially.
+ * This should be the rare case where the last two fragment users call
+ * page_pool_defrag_page() currently.
+ */
+ if (unlikely(!ret))
+ atomic_long_set(&page->pp_frag_count, 1);
+
return ret;
}
-static inline bool page_pool_is_last_frag(struct page_pool *pool,
- struct page *page)
+static inline bool page_pool_is_last_frag(struct page *page)
{
- /* If fragments aren't enabled or count is 0 we were the last user */
- return !(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
- (page_pool_defrag_page(page, 1) == 0);
+ /* If page_pool_defrag_page() returns 0, we were the last user */
+ return page_pool_defrag_page(page, 1) == 0;
}
/**
@@ -161,7 +301,7 @@ static inline void page_pool_put_page(struct page_pool *pool,
* allow registering MEM_TYPE_PAGE_POOL, but shield linker.
*/
#ifdef CONFIG_PAGE_POOL
- if (!page_pool_is_last_frag(pool, page))
+ if (!page_pool_is_last_frag(page))
return;
page_pool_put_defragged_page(pool, page, dma_sync_size, allow_direct);
@@ -197,10 +337,24 @@ static inline void page_pool_recycle_direct(struct page_pool *pool,
page_pool_put_full_page(pool, page, true);
}
-#define PAGE_POOL_DMA_USE_PP_FRAG_COUNT \
+#define PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA \
(sizeof(dma_addr_t) > sizeof(unsigned long))
/**
+ * page_pool_free_va() - free a va into the page_pool
+ * @pool: pool from which va was allocated
+ * @va: va to be freed
+ * @allow_direct: freed by the consumer, allow lockless caching
+ *
+ * Free a VA allocated from page_pool_alloc_va().
+ */
+static inline void page_pool_free_va(struct page_pool *pool, void *va,
+ bool allow_direct)
+{
+ page_pool_put_page(pool, virt_to_head_page(va), -1, allow_direct);
+}
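A companion sketch for the VA-based pair; again purely illustrative, with hypothetical names.

static void *my_alloc_hdr_buf(struct page_pool *pool, unsigned int *len)
{
	/* *len is in/out, exactly as for page_pool_dev_alloc() */
	return page_pool_dev_alloc_va(pool, len);
}

static void my_free_hdr_buf(struct page_pool *pool, void *va)
{
	/* true: freed from softirq/NAPI context, allow lockless caching */
	page_pool_free_va(pool, va, true);
}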
+
+/**
* page_pool_get_dma_addr() - Retrieve the stored DMA address.
* @page: page allocated from a page pool
*
@@ -211,17 +365,25 @@ static inline dma_addr_t page_pool_get_dma_addr(struct page *page)
{
dma_addr_t ret = page->dma_addr;
- if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
- ret |= (dma_addr_t)page->dma_addr_upper << 16 << 16;
+ if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA)
+ ret <<= PAGE_SHIFT;
return ret;
}
-static inline void page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
+static inline bool page_pool_set_dma_addr(struct page *page, dma_addr_t addr)
{
+ if (PAGE_POOL_32BIT_ARCH_WITH_64BIT_DMA) {
+ page->dma_addr = addr >> PAGE_SHIFT;
+
+ /* We assume page alignment to shave off the bottom bits;
+ * if this "compression" doesn't work, we need to drop the mapping.
+ */
+ return addr != (dma_addr_t)page->dma_addr << PAGE_SHIFT;
+ }
+
page->dma_addr = addr;
- if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT)
- page->dma_addr_upper = upper_32_bits(addr);
+ return false;
}
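A hedged sketch of how a caller is expected to consume the new bool return of page_pool_set_dma_addr(); this mirrors the idea in the comment above, not the actual page_pool internals.

static int my_map_page(struct page_pool *pool, struct page *page)
{
	unsigned int size = PAGE_SIZE << pool->p.order;
	dma_addr_t dma;

	dma = dma_map_page_attrs(pool->p.dev, page, 0, size, pool->p.dma_dir,
				 DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(pool->p.dev, dma))
		return -ENOMEM;

	/* On 32-bit kernels with a 64-bit dma_addr_t the address is stored
	 * shifted right by PAGE_SHIFT; a 'true' return means the low bits
	 * would be lost and the mapping cannot be used.
	 */
	if (page_pool_set_dma_addr(page, dma)) {
		dma_unmap_page_attrs(pool->p.dev, dma, size, pool->p.dma_dir,
				     DMA_ATTR_SKIP_CPU_SYNC);
		return -EREMOTEIO;
	}

	return 0;
}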
static inline bool page_pool_put(struct page_pool *pool)
diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index 887e7946a597..6fc5134095ed 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -17,10 +17,8 @@
* Please note DMA-sync-for-CPU is still
* device driver responsibility
*/
-#define PP_FLAG_PAGE_FRAG BIT(2) /* for page frag feature */
#define PP_FLAG_ALL (PP_FLAG_DMA_MAP |\
- PP_FLAG_DMA_SYNC_DEV |\
- PP_FLAG_PAGE_FRAG)
+ PP_FLAG_DMA_SYNC_DEV)
/*
* Fast allocation side cache array/stack
@@ -45,7 +43,7 @@ struct pp_alloc_cache {
/**
* struct page_pool_params - page pool parameters
- * @flags: PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV, PP_FLAG_PAGE_FRAG
+ * @flags: PP_FLAG_DMA_MAP, PP_FLAG_DMA_SYNC_DEV
* @order: 2^order pages on allocation
* @pool_size: size of the ptr_ring
* @nid: NUMA node id to allocate pages from
diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
index f308e8268651..a76c9171db0e 100644
--- a/include/net/pkt_cls.h
+++ b/include/net/pkt_cls.h
@@ -154,6 +154,12 @@ __cls_set_class(unsigned long *clp, unsigned long cl)
return xchg(clp, cl);
}
+static inline void tcf_set_drop_reason(struct tcf_result *res,
+ enum skb_drop_reason reason)
+{
+ res->drop_reason = reason;
+}
+
static inline void
__tcf_bind_filter(struct Qdisc *q, struct tcf_result *r, unsigned long base)
{
diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h
index 15960564e0c3..9fa1d0794dfa 100644
--- a/include/net/pkt_sched.h
+++ b/include/net/pkt_sched.h
@@ -20,10 +20,10 @@ struct qdisc_walker {
int (*fn)(struct Qdisc *, unsigned long cl, struct qdisc_walker *);
};
-static inline void *qdisc_priv(struct Qdisc *q)
-{
- return &q->privdata;
-}
+#define qdisc_priv(q) \
+ _Generic(q, \
+ const struct Qdisc * : (const void *)&q->privdata, \
+ struct Qdisc * : (void *)&q->privdata)
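The _Generic form preserves constness through the cast; a small illustrative use with a made-up scheduler private struct:

struct my_sched_data {
	u32 limit;
};

static u32 my_qdisc_limit(const struct Qdisc *sch)
{
	/* a const struct Qdisc * now yields a const void *, so no cast
	 * away from const is needed (or possible by accident)
	 */
	const struct my_sched_data *q = qdisc_priv(sch);

	return q->limit;
}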
static inline struct Qdisc *qdisc_from_priv(void *priv)
{
diff --git a/include/net/regulatory.h b/include/net/regulatory.h
index b2cb4a9eb04d..ebf9e028d1ef 100644
--- a/include/net/regulatory.h
+++ b/include/net/regulatory.h
@@ -213,6 +213,7 @@ struct ieee80211_reg_rule {
u32 flags;
u32 dfs_cac_ms;
bool has_wmm;
+ s8 psd;
};
struct ieee80211_regdomain {
diff --git a/include/net/route.h b/include/net/route.h
index 51a45b1887b5..980ab474eabd 100644
--- a/include/net/route.h
+++ b/include/net/route.h
@@ -37,7 +37,7 @@
#define RTO_ONLINK 0x01
-#define RT_CONN_FLAGS(sk) (RT_TOS(inet_sk(sk)->tos) | sock_flag(sk, SOCK_LOCALROUTE))
+#define RT_CONN_FLAGS(sk) (RT_TOS(READ_ONCE(inet_sk(sk)->tos)) | sock_flag(sk, SOCK_LOCALROUTE))
#define RT_CONN_FLAGS_TOS(sk,tos) (RT_TOS(tos) | sock_flag(sk, SOCK_LOCALROUTE))
static inline __u8 ip_sock_rt_scope(const struct sock *sk)
@@ -50,7 +50,7 @@ static inline __u8 ip_sock_rt_scope(const struct sock *sk)
static inline __u8 ip_sock_rt_tos(const struct sock *sk)
{
- return RT_TOS(inet_sk(sk)->tos);
+ return RT_TOS(READ_ONCE(inet_sk(sk)->tos));
}
struct ip_tunnel_info;
@@ -136,12 +136,6 @@ static inline struct rtable *__ip_route_output_key(struct net *net,
struct rtable *ip_route_output_flow(struct net *, struct flowi4 *flp,
const struct sock *sk);
-struct rtable *ip_route_output_tunnel(struct sk_buff *skb,
- struct net_device *dev,
- struct net *net, __be32 *saddr,
- const struct ip_tunnel_info *info,
- u8 protocol, bool use_cache);
-
struct dst_entry *ipv4_blackhole_route(struct net *net,
struct dst_entry *dst_orig);
diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index f232512505f8..dcb9160e6467 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -324,7 +324,6 @@ struct Qdisc_ops {
struct module *owner;
};
-
struct tcf_result {
union {
struct {
@@ -332,8 +331,8 @@ struct tcf_result {
u32 classid;
};
const struct tcf_proto *goto_tp;
-
};
+ enum skb_drop_reason drop_reason;
};
struct tcf_chain;
@@ -587,6 +586,7 @@ static inline void sch_tree_unlock(struct Qdisc *q)
extern struct Qdisc noop_qdisc;
extern struct Qdisc_ops noop_qdisc_ops;
extern struct Qdisc_ops pfifo_fast_ops;
+extern const u8 sch_default_prio2band[TC_PRIO_MAX + 1];
extern struct Qdisc_ops mq_qdisc_ops;
extern struct Qdisc_ops noqueue_qdisc_ops;
extern const struct Qdisc_ops *default_qdisc_ops;
diff --git a/include/net/sock.h b/include/net/sock.h
index 92f7ea62a915..242590308d64 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1821,12 +1821,11 @@ static inline bool sock_owned_by_user_nocheck(const struct sock *sk)
static inline void sock_release_ownership(struct sock *sk)
{
- if (sock_owned_by_user_nocheck(sk)) {
- sk->sk_lock.owned = 0;
+ DEBUG_NET_WARN_ON_ONCE(!sock_owned_by_user_nocheck(sk));
+ sk->sk_lock.owned = 0;
- /* The sk_lock has mutex_unlock() semantics: */
- mutex_release(&sk->sk_lock.dep_map, _RET_IP_);
- }
+ /* The sk_lock has mutex_unlock() semantics: */
+ mutex_release(&sk->sk_lock.dep_map, _RET_IP_);
}
/* no reclassification while locks are held */
@@ -2006,21 +2005,33 @@ static inline void sk_tx_queue_set(struct sock *sk, int tx_queue)
/* sk_tx_queue_mapping accepts only up to a 16-bit value */
if (WARN_ON_ONCE((unsigned short)tx_queue >= USHRT_MAX))
return;
- sk->sk_tx_queue_mapping = tx_queue;
+ /* Paired with READ_ONCE() in sk_tx_queue_get() and
+ * other WRITE_ONCE() because the socket lock might not be held.
+ */
+ WRITE_ONCE(sk->sk_tx_queue_mapping, tx_queue);
}
#define NO_QUEUE_MAPPING USHRT_MAX
static inline void sk_tx_queue_clear(struct sock *sk)
{
- sk->sk_tx_queue_mapping = NO_QUEUE_MAPPING;
+ /* Paired with READ_ONCE() in sk_tx_queue_get() and
+ * other WRITE_ONCE() because the socket lock might not be held.
+ */
+ WRITE_ONCE(sk->sk_tx_queue_mapping, NO_QUEUE_MAPPING);
}
static inline int sk_tx_queue_get(const struct sock *sk)
{
- if (sk && sk->sk_tx_queue_mapping != NO_QUEUE_MAPPING)
- return sk->sk_tx_queue_mapping;
+ if (sk) {
+ /* Paired with WRITE_ONCE() in sk_tx_queue_clear()
+ * and sk_tx_queue_set().
+ */
+ int val = READ_ONCE(sk->sk_tx_queue_mapping);
+ if (val != NO_QUEUE_MAPPING)
+ return val;
+ }
return -1;
}
@@ -2140,14 +2151,14 @@ static inline bool sk_rethink_txhash(struct sock *sk)
}
static inline struct dst_entry *
-__sk_dst_get(struct sock *sk)
+__sk_dst_get(const struct sock *sk)
{
return rcu_dereference_check(sk->sk_dst_cache,
lockdep_sock_is_held(sk));
}
static inline struct dst_entry *
-sk_dst_get(struct sock *sk)
+sk_dst_get(const struct sock *sk)
{
struct dst_entry *dst;
@@ -2169,7 +2180,7 @@ static inline void __dst_negative_advice(struct sock *sk)
if (ndst != dst) {
rcu_assign_pointer(sk->sk_dst_cache, ndst);
sk_tx_queue_clear(sk);
- sk->sk_dst_pending_confirm = 0;
+ WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
}
}
}
@@ -2186,7 +2197,7 @@ __sk_dst_set(struct sock *sk, struct dst_entry *dst)
struct dst_entry *old_dst;
sk_tx_queue_clear(sk);
- sk->sk_dst_pending_confirm = 0;
+ WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
old_dst = rcu_dereference_protected(sk->sk_dst_cache,
lockdep_sock_is_held(sk));
rcu_assign_pointer(sk->sk_dst_cache, dst);
@@ -2199,7 +2210,7 @@ sk_dst_set(struct sock *sk, struct dst_entry *dst)
struct dst_entry *old_dst;
sk_tx_queue_clear(sk);
- sk->sk_dst_pending_confirm = 0;
+ WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst);
dst_release(old_dst);
}
@@ -2237,7 +2248,7 @@ static inline void sock_confirm_neigh(struct sk_buff *skb, struct neighbour *n)
}
}
-bool sk_mc_loop(struct sock *sk);
+bool sk_mc_loop(const struct sock *sk);
static inline bool sk_can_gso(const struct sock *sk)
{
diff --git a/include/net/tc_act/tc_ct.h b/include/net/tc_act/tc_ct.h
index b24ea2d9400b..8a6dbfb23336 100644
--- a/include/net/tc_act/tc_ct.h
+++ b/include/net/tc_act/tc_ct.h
@@ -22,6 +22,7 @@ struct tcf_ct_params {
struct nf_nat_range2 range;
bool ipv4_range;
+ bool put_labels;
u16 ct_action;
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 4b03ca7cb8a5..d2f0736b76b8 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -37,6 +37,7 @@
#include <net/snmp.h>
#include <net/ip.h>
#include <net/tcp_states.h>
+#include <net/tcp_ao.h>
#include <net/inet_ecn.h>
#include <net/dst.h>
#include <net/mptcp.h>
@@ -131,6 +132,8 @@ void tcp_time_wait(struct sock *sk, int state, int timeo);
#define TCP_FIN_TIMEOUT_MAX (120 * HZ) /* max TCP_LINGER2 value (two minutes) */
#define TCP_DELACK_MAX ((unsigned)(HZ/5)) /* maximal time to delay before sending an ACK */
+static_assert((1 << ATO_BITS) > TCP_DELACK_MAX);
+
#if HZ >= 100
#define TCP_DELACK_MIN ((unsigned)(HZ/25)) /* minimal time to delay before sending an ACK */
#define TCP_ATO_MIN ((unsigned)(HZ/25))
@@ -164,7 +167,12 @@ void tcp_time_wait(struct sock *sk, int state, int timeo);
#define MAX_TCP_KEEPCNT 127
#define MAX_TCP_SYNCNT 127
-#define TCP_PAWS_24DAYS (60 * 60 * 24 * 24)
+/* Ensure that TCP PAWS checks are relaxed after ~2147 seconds
+ * to avoid overflows. This assumes a clock of at most 1 MHz.
+ * The default clock is 1 kHz; tcp_usec_ts uses 1 MHz.
+ */
+#define TCP_PAWS_WRAP (INT_MAX / USEC_PER_SEC)
+
#define TCP_PAWS_MSL 60 /* Per-host timestamps are invalidated
* after this time. It should be equal
* (or greater than) TCP_TIMEWAIT_LEN
@@ -187,6 +195,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo);
#define TCPOPT_SACK 5 /* SACK Block */
#define TCPOPT_TIMESTAMP 8 /* Better RTT estimations/PAWS */
#define TCPOPT_MD5SIG 19 /* MD5 Signature (RFC2385) */
+#define TCPOPT_AO 29 /* Authentication Option (RFC5925) */
#define TCPOPT_MPTCP 30 /* Multipath TCP (RFC6824) */
#define TCPOPT_FASTOPEN 34 /* Fast open (RFC7413) */
#define TCPOPT_EXP 254 /* Experimental */
@@ -429,7 +438,6 @@ int tcp_mmap(struct file *file, struct socket *sock,
void tcp_parse_options(const struct net *net, const struct sk_buff *skb,
struct tcp_options_received *opt_rx,
int estab, struct tcp_fastopen_cookie *foc);
-const u8 *tcp_parse_md5sig_option(const struct tcphdr *th);
/*
* BPF SKB-less helpers
@@ -723,8 +731,10 @@ static inline void tcp_fast_path_check(struct sock *sk)
tcp_fast_path_on(tp);
}
+u32 tcp_delack_max(const struct sock *sk);
+
/* Compute the actual rto_min value */
-static inline u32 tcp_rto_min(struct sock *sk)
+static inline u32 tcp_rto_min(const struct sock *sk)
{
const struct dst_entry *dst = __sk_dst_get(sk);
u32 rto_min = inet_csk(sk)->icsk_rto_min;
@@ -734,7 +744,7 @@ static inline u32 tcp_rto_min(struct sock *sk)
return rto_min;
}
-static inline u32 tcp_rto_min_us(struct sock *sk)
+static inline u32 tcp_rto_min_us(const struct sock *sk)
{
return jiffies_to_usecs(tcp_rto_min(sk));
}
@@ -794,22 +804,31 @@ static inline u64 tcp_clock_us(void)
return div_u64(tcp_clock_ns(), NSEC_PER_USEC);
}
-/* This should only be used in contexts where tp->tcp_mstamp is up to date */
-static inline u32 tcp_time_stamp(const struct tcp_sock *tp)
+static inline u64 tcp_clock_ms(void)
+{
+ return div_u64(tcp_clock_ns(), NSEC_PER_MSEC);
+}
+
+/* The TCP Timestamp included in the TS option (RFC 1323) can use either ms
+ * or usec resolution. Each socket carries a flag to select one or the other
+ * resolution, as the route attribute could change at any time.
+ * Each flow must stick to its initial resolution.
+ */
+static inline u32 tcp_clock_ts(bool usec_ts)
{
- return div_u64(tp->tcp_mstamp, USEC_PER_SEC / TCP_TS_HZ);
+ return usec_ts ? tcp_clock_us() : tcp_clock_ms();
}
-/* Convert a nsec timestamp into TCP TSval timestamp (ms based currently) */
-static inline u32 tcp_ns_to_ts(u64 ns)
+static inline u32 tcp_time_stamp_ms(const struct tcp_sock *tp)
{
- return div_u64(ns, NSEC_PER_SEC / TCP_TS_HZ);
+ return div_u64(tp->tcp_mstamp, USEC_PER_MSEC);
}
-/* Could use tcp_clock_us() / 1000, but this version uses a single divide */
-static inline u32 tcp_time_stamp_raw(void)
+static inline u32 tcp_time_stamp_ts(const struct tcp_sock *tp)
{
- return tcp_ns_to_ts(tcp_clock_ns());
+ if (tp->tcp_usec_ts)
+ return tp->tcp_mstamp;
+ return tcp_time_stamp_ms(tp);
}
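A hedged sketch of how a sender-side TSval could be built on top of these helpers; the use of tp->tsoffset here mirrors the usual pattern and is not introduced by this hunk.

static inline u32 my_outgoing_tsval(const struct tcp_sock *tp)
{
	/* a usec TSval wraps roughly every 71.6 minutes, a ms TSval roughly
	 * every 49.7 days; both ends of a flow must use one resolution
	 */
	return tcp_time_stamp_ts(tp) + tp->tsoffset;
}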
void tcp_mstamp_refresh(struct tcp_sock *tp);
@@ -819,17 +838,30 @@ static inline u32 tcp_stamp_us_delta(u64 t1, u64 t0)
return max_t(s64, t1 - t0, 0);
}
-static inline u32 tcp_skb_timestamp(const struct sk_buff *skb)
-{
- return tcp_ns_to_ts(skb->skb_mstamp_ns);
-}
-
/* provide the departure time in us unit */
static inline u64 tcp_skb_timestamp_us(const struct sk_buff *skb)
{
return div_u64(skb->skb_mstamp_ns, NSEC_PER_USEC);
}
+/* Provide skb TSval in usec or ms unit */
+static inline u32 tcp_skb_timestamp_ts(bool usec_ts, const struct sk_buff *skb)
+{
+ if (usec_ts)
+ return tcp_skb_timestamp_us(skb);
+
+ return div_u64(skb->skb_mstamp_ns, NSEC_PER_MSEC);
+}
+
+static inline u32 tcp_tw_tsval(const struct tcp_timewait_sock *tcptw)
+{
+ return tcp_clock_ts(tcptw->tw_sk.tw_usec_ts) + tcptw->tw_ts_offset;
+}
+
+static inline u32 tcp_rsk_tsval(const struct tcp_request_sock *treq)
+{
+ return tcp_clock_ts(treq->req_usec_ts) + treq->ts_off;
+}
#define tcp_flag_byte(th) (((u_int8_t *)th)[13])
@@ -1458,13 +1490,15 @@ static inline int tcp_space_from_win(const struct sock *sk, int win)
return __tcp_space_from_win(tcp_sk(sk)->scaling_ratio, win);
}
+/* Assume a conservative default of 1200 bytes of payload per 4K page.
+ * This may be adjusted later in tcp_measure_rcv_mss().
+ */
+#define TCP_DEFAULT_SCALING_RATIO ((1200 << TCP_RMEM_TO_WIN_SCALE) / \
+ SKB_TRUESIZE(4096))
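Rough, config-dependent arithmetic for the constant, assuming TCP_RMEM_TO_WIN_SCALE == 8 (the numbers below are ballpark, not exact):

/*
 * TCP_DEFAULT_SCALING_RATIO = (1200 << TCP_RMEM_TO_WIN_SCALE) / SKB_TRUESIZE(4096)
 *                           ~ (1200 * 256) / ~4700
 *                           ~ 65
 * i.e. roughly a quarter of the receive buffer is initially assumed usable
 * as window space, until tcp_measure_rcv_mss() refines the ratio.
 */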
+
static inline void tcp_scaling_ratio_init(struct sock *sk)
{
- /* Assume a conservative default of 1200 bytes of payload per 4K page.
- * This may be adjusted later in tcp_measure_rcv_mss().
- */
- tcp_sk(sk)->scaling_ratio = (1200 << TCP_RMEM_TO_WIN_SCALE) /
- SKB_TRUESIZE(4096);
+ tcp_sk(sk)->scaling_ratio = TCP_DEFAULT_SCALING_RATIO;
}
/* Note: caller must be prepared to deal with negative returns */
@@ -1595,7 +1629,7 @@ static inline bool tcp_paws_check(const struct tcp_options_received *rx_opt,
if ((s32)(rx_opt->ts_recent - rx_opt->rcv_tsval) <= paws_win)
return true;
if (unlikely(!time_before32(ktime_get_seconds(),
- rx_opt->ts_recent_stamp + TCP_PAWS_24DAYS)))
+ rx_opt->ts_recent_stamp + TCP_PAWS_WRAP)))
return true;
/*
* Some OSes send SYN and SYNACK messages with tsval=0 tsecr=0,
@@ -1655,12 +1689,7 @@ static inline void tcp_clear_all_retrans_hints(struct tcp_sock *tp)
tp->retransmit_skb_hint = NULL;
}
-union tcp_md5_addr {
- struct in_addr a4;
-#if IS_ENABLED(CONFIG_IPV6)
- struct in6_addr a6;
-#endif
-};
+#define tcp_md5_addr tcp_ao_addr
/* - key database */
struct tcp_md5sig_key {
@@ -1704,12 +1733,39 @@ union tcp_md5sum_block {
#endif
};
-/* - pool: digest algorithm, hash description and scratch buffer */
-struct tcp_md5sig_pool {
- struct ahash_request *md5_req;
- void *scratch;
+/*
+ * struct tcp_sigpool - per-CPU pool of ahash_requests
+ * @scratch: per-CPU temporary area that can be used between
+ * tcp_sigpool_start() and tcp_sigpool_end() to perform
+ * a crypto request
+ * @req: pre-allocated ahash request
+ */
+struct tcp_sigpool {
+ void *scratch;
+ struct ahash_request *req;
};
+int tcp_sigpool_alloc_ahash(const char *alg, size_t scratch_size);
+void tcp_sigpool_get(unsigned int id);
+void tcp_sigpool_release(unsigned int id);
+int tcp_sigpool_hash_skb_data(struct tcp_sigpool *hp,
+ const struct sk_buff *skb,
+ unsigned int header_len);
+
+/**
+ * tcp_sigpool_start - disable bh and start using tcp_sigpool_ahash
+ * @id: tcp_sigpool that was previously allocated by tcp_sigpool_alloc_ahash()
+ * @c: returned tcp_sigpool for usage (uninitialized on failure)
+ *
+ * Returns 0 on success, error otherwise.
+ */
+int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c);
+/**
+ * tcp_sigpool_end - enable bh and stop using tcp_sigpool
+ * @c: tcp_sigpool context that was returned by tcp_sigpool_start()
+ */
+void tcp_sigpool_end(struct tcp_sigpool *c);
+size_t tcp_sigpool_algo(unsigned int id, char *buf, size_t buf_len);
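A hedged sketch of the intended start/end usage of the sigpool API; it assumes <crypto/hash.h>, a digest-sized 'out' buffer, and a flow that mirrors how the MD5 code typically drives an ahash request.

static int my_hash_skb(int sigpool_id, const struct sk_buff *skb, u8 *out)
{
	struct tcp_sigpool hp;
	int err;

	err = tcp_sigpool_start(sigpool_id, &hp);	/* disables BH */
	if (err)
		return err;

	err = crypto_ahash_init(hp.req);
	if (err)
		goto out;
	err = tcp_sigpool_hash_skb_data(&hp, skb, tcp_hdrlen(skb));
	if (err)
		goto out;

	ahash_request_set_crypt(hp.req, NULL, out, 0);
	err = crypto_ahash_final(hp.req);
out:
	tcp_sigpool_end(&hp);				/* re-enables BH */
	return err;
}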
/* - functions */
int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
const struct sock *sk, const struct sk_buff *skb);
@@ -1722,6 +1778,7 @@ int tcp_md5_key_copy(struct sock *sk, const union tcp_md5_addr *addr,
int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr,
int family, u8 prefixlen, int l3index, u8 flags);
+void tcp_clear_md5_list(struct sock *sk);
struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk,
const struct sock *addr_sk);
@@ -1730,20 +1787,29 @@ struct tcp_md5sig_key *tcp_v4_md5_lookup(const struct sock *sk,
extern struct static_key_false_deferred tcp_md5_needed;
struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index,
const union tcp_md5_addr *addr,
- int family);
+ int family, bool any_l3index);
static inline struct tcp_md5sig_key *
tcp_md5_do_lookup(const struct sock *sk, int l3index,
const union tcp_md5_addr *addr, int family)
{
if (!static_branch_unlikely(&tcp_md5_needed.key))
return NULL;
- return __tcp_md5_do_lookup(sk, l3index, addr, family);
+ return __tcp_md5_do_lookup(sk, l3index, addr, family, false);
+}
+
+static inline struct tcp_md5sig_key *
+tcp_md5_do_lookup_any_l3index(const struct sock *sk,
+ const union tcp_md5_addr *addr, int family)
+{
+ if (!static_branch_unlikely(&tcp_md5_needed.key))
+ return NULL;
+ return __tcp_md5_do_lookup(sk, 0, addr, family, true);
}
enum skb_drop_reason
tcp_inbound_md5_hash(const struct sock *sk, const struct sk_buff *skb,
const void *saddr, const void *daddr,
- int family, int dif, int sdif);
+ int family, int l3index, const __u8 *hash_location);
#define tcp_twsk_md5_key(twsk) ((twsk)->tw_md5_key)
@@ -1755,27 +1821,29 @@ tcp_md5_do_lookup(const struct sock *sk, int l3index,
return NULL;
}
+static inline struct tcp_md5sig_key *
+tcp_md5_do_lookup_any_l3index(const struct sock *sk,
+ const union tcp_md5_addr *addr, int family)
+{
+ return NULL;
+}
+
static inline enum skb_drop_reason
tcp_inbound_md5_hash(const struct sock *sk, const struct sk_buff *skb,
const void *saddr, const void *daddr,
- int family, int dif, int sdif)
+ int family, int l3index, const __u8 *hash_location)
{
return SKB_NOT_DROPPED_YET;
}
#define tcp_twsk_md5_key(twsk) NULL
#endif
-bool tcp_alloc_md5sig_pool(void);
-
-struct tcp_md5sig_pool *tcp_get_md5sig_pool(void);
-static inline void tcp_put_md5sig_pool(void)
-{
- local_bh_enable();
-}
+int tcp_md5_alloc_sigpool(void);
+void tcp_md5_release_sigpool(void);
+void tcp_md5_add_sigpool(void);
+extern int tcp_md5_sigpool_id;
-int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *, const struct sk_buff *,
- unsigned int header_len);
-int tcp_md5_hash_key(struct tcp_md5sig_pool *hp,
+int tcp_md5_hash_key(struct tcp_sigpool *hp,
const struct tcp_md5sig_key *key);
/* From tcp_fastopen.c */
@@ -2080,7 +2148,11 @@ INDIRECT_CALLABLE_DECLARE(int tcp4_gro_complete(struct sk_buff *skb, int thoff))
INDIRECT_CALLABLE_DECLARE(struct sk_buff *tcp4_gro_receive(struct list_head *head, struct sk_buff *skb));
INDIRECT_CALLABLE_DECLARE(int tcp6_gro_complete(struct sk_buff *skb, int thoff));
INDIRECT_CALLABLE_DECLARE(struct sk_buff *tcp6_gro_receive(struct list_head *head, struct sk_buff *skb));
+#ifdef CONFIG_INET
void tcp_gro_complete(struct sk_buff *skb);
+#else
+static inline void tcp_gro_complete(struct sk_buff *skb) { }
+#endif
void __tcp_v4_send_check(struct sk_buff *skb, __be32 saddr, __be32 daddr);
@@ -2120,6 +2192,18 @@ struct tcp_sock_af_ops {
sockptr_t optval,
int optlen);
#endif
+#ifdef CONFIG_TCP_AO
+ int (*ao_parse)(struct sock *sk, int optname, sockptr_t optval, int optlen);
+ struct tcp_ao_key *(*ao_lookup)(const struct sock *sk,
+ struct sock *addr_sk,
+ int sndid, int rcvid);
+ int (*ao_calc_key_sk)(struct tcp_ao_key *mkt, u8 *key,
+ const struct sock *sk,
+ __be32 sisn, __be32 disn, bool send);
+ int (*calc_ao_hash)(char *location, struct tcp_ao_key *ao,
+ const struct sock *sk, const struct sk_buff *skb,
+ const u8 *tkey, int hash_offset, u32 sne);
+#endif
};
struct tcp_request_sock_ops {
@@ -2132,6 +2216,15 @@ struct tcp_request_sock_ops {
const struct sock *sk,
const struct sk_buff *skb);
#endif
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_key *(*ao_lookup)(const struct sock *sk,
+ struct request_sock *req,
+ int sndid, int rcvid);
+ int (*ao_calc_key)(struct tcp_ao_key *mkt, u8 *key, struct request_sock *sk);
+ int (*ao_synack_hash)(char *ao_hash, struct tcp_ao_key *mkt,
+ struct request_sock *req, const struct sk_buff *skb,
+ int hash_offset, u32 sne);
+#endif
#ifdef CONFIG_SYN_COOKIES
__u32 (*cookie_init_seq)(const struct sk_buff *skb,
__u16 *mss);
@@ -2172,6 +2265,76 @@ static inline __u32 cookie_init_sequence(const struct tcp_request_sock_ops *ops,
}
#endif
+struct tcp_key {
+ union {
+ struct {
+ struct tcp_ao_key *ao_key;
+ char *traffic_key;
+ u32 sne;
+ u8 rcv_next;
+ };
+ struct tcp_md5sig_key *md5_key;
+ };
+ enum {
+ TCP_KEY_NONE = 0,
+ TCP_KEY_MD5,
+ TCP_KEY_AO,
+ } type;
+};
+
+static inline void tcp_get_current_key(const struct sock *sk,
+ struct tcp_key *out)
+{
+#if defined(CONFIG_TCP_AO) || defined(CONFIG_TCP_MD5SIG)
+ const struct tcp_sock *tp = tcp_sk(sk);
+#endif
+
+#ifdef CONFIG_TCP_AO
+ if (static_branch_unlikely(&tcp_ao_needed.key)) {
+ struct tcp_ao_info *ao;
+
+ ao = rcu_dereference_protected(tp->ao_info,
+ lockdep_sock_is_held(sk));
+ if (ao) {
+ out->ao_key = READ_ONCE(ao->current_key);
+ out->type = TCP_KEY_AO;
+ return;
+ }
+ }
+#endif
+#ifdef CONFIG_TCP_MD5SIG
+ if (static_branch_unlikely(&tcp_md5_needed.key) &&
+ rcu_access_pointer(tp->md5sig_info)) {
+ out->md5_key = tp->af_specific->md5_lookup(sk, sk);
+ if (out->md5_key) {
+ out->type = TCP_KEY_MD5;
+ return;
+ }
+ }
+#endif
+ out->type = TCP_KEY_NONE;
+}
+
+static inline bool tcp_key_is_md5(const struct tcp_key *key)
+{
+#ifdef CONFIG_TCP_MD5SIG
+ if (static_branch_unlikely(&tcp_md5_needed.key) &&
+ key->type == TCP_KEY_MD5)
+ return true;
+#endif
+ return false;
+}
+
+static inline bool tcp_key_is_ao(const struct tcp_key *key)
+{
+#ifdef CONFIG_TCP_AO
+ if (static_branch_unlikely(&tcp_ao_needed.key) &&
+ key->type == TCP_KEY_AO)
+ return true;
+#endif
+ return false;
+}
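A hedged sketch of picking TCP option space from the unified key; alignment of the AO option is ignored for brevity, and TCPOLEN_MD5SIG_ALIGNED is the pre-existing MD5 constant.

static unsigned int my_auth_option_space(const struct sock *sk)
{
	struct tcp_key key;

	tcp_get_current_key(sk, &key);

	if (tcp_key_is_ao(&key))
		return tcp_ao_len(key.ao_key);	/* AO header + MAC length */
	if (tcp_key_is_md5(&key))
		return TCPOLEN_MD5SIG_ALIGNED;	/* 20 bytes */

	return 0;
}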
+
int tcpv4_offload_init(void);
void tcp_v4_init(void);
@@ -2530,4 +2693,116 @@ static inline u64 tcp_transmit_time(const struct sock *sk)
return 0;
}
+static inline int tcp_parse_auth_options(const struct tcphdr *th,
+ const u8 **md5_hash, const struct tcp_ao_hdr **aoh)
+{
+ const u8 *md5_tmp, *ao_tmp;
+ int ret;
+
+ ret = tcp_do_parse_auth_options(th, &md5_tmp, &ao_tmp);
+ if (ret)
+ return ret;
+
+ if (md5_hash)
+ *md5_hash = md5_tmp;
+
+ if (aoh) {
+ if (!ao_tmp)
+ *aoh = NULL;
+ else
+ *aoh = (struct tcp_ao_hdr *)(ao_tmp - 2);
+ }
+
+ return 0;
+}
+
+static inline bool tcp_ao_required(struct sock *sk, const void *saddr,
+ int family, int l3index, bool stat_inc)
+{
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_info *ao_info;
+ struct tcp_ao_key *ao_key;
+
+ if (!static_branch_unlikely(&tcp_ao_needed.key))
+ return false;
+
+ ao_info = rcu_dereference_check(tcp_sk(sk)->ao_info,
+ lockdep_sock_is_held(sk));
+ if (!ao_info)
+ return false;
+
+ ao_key = tcp_ao_do_lookup(sk, l3index, saddr, family, -1, -1);
+ if (ao_info->ao_required || ao_key) {
+ if (stat_inc) {
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAOREQUIRED);
+ atomic64_inc(&ao_info->counters.ao_required);
+ }
+ return true;
+ }
+#endif
+ return false;
+}
+
+/* Called with rcu_read_lock() */
+static inline enum skb_drop_reason
+tcp_inbound_hash(struct sock *sk, const struct request_sock *req,
+ const struct sk_buff *skb,
+ const void *saddr, const void *daddr,
+ int family, int dif, int sdif)
+{
+ const struct tcphdr *th = tcp_hdr(skb);
+ const struct tcp_ao_hdr *aoh;
+ const __u8 *md5_location;
+ int l3index;
+
+ /* Invalid option, or any of the auth options was present twice */
+ if (tcp_parse_auth_options(th, &md5_location, &aoh)) {
+ tcp_hash_fail("TCP segment has incorrect auth options set",
+ family, skb, "");
+ return SKB_DROP_REASON_TCP_AUTH_HDR;
+ }
+
+ if (req) {
+ if (tcp_rsk_used_ao(req) != !!aoh) {
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAOBAD);
+ tcp_hash_fail("TCP connection can't start/end using TCP-AO",
+ family, skb, "%s",
+ !aoh ? "missing AO" : "AO signed");
+ return SKB_DROP_REASON_TCP_AOFAILURE;
+ }
+ }
+
+ /* sdif set, means packet ingressed via a device
+ * in an L3 domain and dif is set to the l3mdev
+ */
+ l3index = sdif ? dif : 0;
+
+ /* Fast path: unsigned segments */
+ if (likely(!md5_location && !aoh)) {
+ /* Drop if there's a TCP-MD5 or TCP-AO key with any rcvid/sndid
+ * for the remote peer. On an established TCP-AO connection
+ * the last key is impossible to remove, so there's
+ * always at least one current_key.
+ */
+ if (tcp_ao_required(sk, saddr, family, l3index, true)) {
+ tcp_hash_fail("AO hash is required, but not found",
+ family, skb, "L3 index %d", l3index);
+ return SKB_DROP_REASON_TCP_AONOTFOUND;
+ }
+ if (unlikely(tcp_md5_do_lookup(sk, l3index, saddr, family))) {
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5NOTFOUND);
+ tcp_hash_fail("MD5 Hash not found",
+ family, skb, "L3 index %d", l3index);
+ return SKB_DROP_REASON_TCP_MD5NOTFOUND;
+ }
+ return SKB_NOT_DROPPED_YET;
+ }
+
+ if (aoh)
+ return tcp_inbound_ao_hash(sk, skb, family, req, l3index, aoh);
+
+ return tcp_inbound_md5_hash(sk, skb, saddr, daddr, family,
+ l3index, md5_location);
+}
+
#endif /* _TCP_H */
diff --git a/include/net/tcp_ao.h b/include/net/tcp_ao.h
new file mode 100644
index 000000000000..a375a171ef3c
--- /dev/null
+++ b/include/net/tcp_ao.h
@@ -0,0 +1,362 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _TCP_AO_H
+#define _TCP_AO_H
+
+#define TCP_AO_KEY_ALIGN 1
+#define __tcp_ao_key_align __aligned(TCP_AO_KEY_ALIGN)
+
+union tcp_ao_addr {
+ struct in_addr a4;
+#if IS_ENABLED(CONFIG_IPV6)
+ struct in6_addr a6;
+#endif
+};
+
+struct tcp_ao_hdr {
+ u8 kind;
+ u8 length;
+ u8 keyid;
+ u8 rnext_keyid;
+};
+
+struct tcp_ao_counters {
+ atomic64_t pkt_good;
+ atomic64_t pkt_bad;
+ atomic64_t key_not_found;
+ atomic64_t ao_required;
+ atomic64_t dropped_icmp;
+};
+
+struct tcp_ao_key {
+ struct hlist_node node;
+ union tcp_ao_addr addr;
+ u8 key[TCP_AO_MAXKEYLEN] __tcp_ao_key_align;
+ unsigned int tcp_sigpool_id;
+ unsigned int digest_size;
+ int l3index;
+ u8 prefixlen;
+ u8 family;
+ u8 keylen;
+ u8 keyflags;
+ u8 sndid;
+ u8 rcvid;
+ u8 maclen;
+ struct rcu_head rcu;
+ atomic64_t pkt_good;
+ atomic64_t pkt_bad;
+ u8 traffic_keys[];
+};
+
+static inline u8 *rcv_other_key(struct tcp_ao_key *key)
+{
+ return key->traffic_keys;
+}
+
+static inline u8 *snd_other_key(struct tcp_ao_key *key)
+{
+ return key->traffic_keys + key->digest_size;
+}
+
+static inline int tcp_ao_maclen(const struct tcp_ao_key *key)
+{
+ return key->maclen;
+}
+
+static inline int tcp_ao_len(const struct tcp_ao_key *key)
+{
+ return tcp_ao_maclen(key) + sizeof(struct tcp_ao_hdr);
+}
+
+static inline unsigned int tcp_ao_digest_size(struct tcp_ao_key *key)
+{
+ return key->digest_size;
+}
+
+static inline int tcp_ao_sizeof_key(const struct tcp_ao_key *key)
+{
+ return sizeof(struct tcp_ao_key) + (key->digest_size << 1);
+}
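The accessor trio above implies the following allocation layout (illustrative):

/*
 *  tcp_ao_sizeof_key(key) bytes, allocated as one object:
 *
 *  +---------------------------+
 *  | struct tcp_ao_key fields  |
 *  +---------------------------+  <- traffic_keys[]              == rcv_other_key()
 *  | RX traffic key            |     digest_size bytes
 *  +---------------------------+  <- traffic_keys + digest_size  == snd_other_key()
 *  | TX traffic key            |     digest_size bytes
 *  +---------------------------+
 */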
+
+struct tcp_ao_info {
+ /* List of tcp_ao_key's */
+ struct hlist_head head;
+ /* current_key and rnext_key aren't maintained on listen sockets.
+ * Their purpose is to cache keys on established connections,
+ * saving needless lookups. Never dereference any of them from
+ * listen sockets.
+ * ::current_key may change in RX to the key that was requested by
+ * the peer; use READ_ONCE()/WRITE_ONCE() in order to avoid
+ * load/store tearing.
+ * Do the same for ::rnext_key if you don't hold the socket lock
+ * (it's changed only by a userspace request in setsockopt()).
+ */
+ struct tcp_ao_key *current_key;
+ struct tcp_ao_key *rnext_key;
+ struct tcp_ao_counters counters;
+ u32 ao_required :1,
+ accept_icmps :1,
+ __unused :30;
+ __be32 lisn;
+ __be32 risn;
+ /* The Sequence Number Extension (SNE) is the upper 4 bytes of SEQ
+ * that protect a TCP-AO connection from replayed old TCP segments.
+ * See RFC5925 (6.2).
+ * In order to get a correct SNE, there's a helper, tcp_ao_compute_sne().
+ * It needs a SEQ basis to understand where the lower SEQ numbers are.
+ * According to that basis vector, it can provide an incremented SNE
+ * when SEQ rolls over, or a decremented SNE when there's
+ * a retransmitted segment from before the rollover.
+ * - for request sockets the basis is rcv_isn/snt_isn, which seems
+ * good enough as it's unexpected to receive 4 Gbytes on a reqsk.
+ * - for full sockets the basis is rcv_nxt/snd_una. snd_una is
+ * taken instead of snd_nxt as currently it's easier to track
+ * in tcp_snd_una_update(), rather than updating SNE in every
+ * WRITE_ONCE(tp->snd_nxt, ...)
+ * - for time-wait sockets the basis is tw_rcv_nxt/tw_snd_nxt.
+ * tw_snd_nxt is not expected to change, while tw_rcv_nxt may.
+ */
+ u32 snd_sne;
+ u32 rcv_sne;
+ refcount_t refcnt; /* Protects twsk destruction */
+ struct rcu_head rcu;
+};
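A minimal sketch of the SNE adjustment the comment describes; the authoritative helper is tcp_ao_compute_sne(), this is only an illustration of the basis logic.

static inline u32 my_sne_for_seq(u32 basis_sne, u32 basis_seq, u32 seq)
{
	if (before(seq, basis_seq)) {
		/* logically older segment sitting numerically above the
		 * basis: it was sent before the last SEQ rollover
		 */
		if (seq > basis_seq)
			return basis_sne - 1;
	} else if (seq < basis_seq) {
		/* logically newer segment that already rolled over */
		return basis_sne + 1;
	}
	return basis_sne;
}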
+
+#define tcp_hash_fail(msg, family, skb, fmt, ...) \
+do { \
+ const struct tcphdr *th = tcp_hdr(skb); \
+ char hdr_flags[5] = {}; \
+ char *f = hdr_flags; \
+ \
+ if (th->fin) \
+ *f++ = 'F'; \
+ if (th->syn) \
+ *f++ = 'S'; \
+ if (th->rst) \
+ *f++ = 'R'; \
+ if (th->ack) \
+ *f++ = 'A'; \
+ if (f != hdr_flags) \
+ *f = ' '; \
+ if ((family) == AF_INET) { \
+ net_info_ratelimited("%s for (%pI4, %d)->(%pI4, %d) %s" fmt "\n", \
+ msg, &ip_hdr(skb)->saddr, ntohs(th->source), \
+ &ip_hdr(skb)->daddr, ntohs(th->dest), \
+ hdr_flags, ##__VA_ARGS__); \
+ } else { \
+ net_info_ratelimited("%s for [%pI6c]:%u->[%pI6c]:%u %s" fmt "\n", \
+ msg, &ipv6_hdr(skb)->saddr, ntohs(th->source), \
+ &ipv6_hdr(skb)->daddr, ntohs(th->dest), \
+ hdr_flags, ##__VA_ARGS__); \
+ } \
+} while (0)
+
+#ifdef CONFIG_TCP_AO
+/* TCP-AO structures and functions */
+#include <linux/jump_label.h>
+extern struct static_key_false_deferred tcp_ao_needed;
+
+struct tcp4_ao_context {
+ __be32 saddr;
+ __be32 daddr;
+ __be16 sport;
+ __be16 dport;
+ __be32 sisn;
+ __be32 disn;
+};
+
+struct tcp6_ao_context {
+ struct in6_addr saddr;
+ struct in6_addr daddr;
+ __be16 sport;
+ __be16 dport;
+ __be32 sisn;
+ __be32 disn;
+};
+
+struct tcp_sigpool;
+#define TCP_AO_ESTABLISHED (TCPF_ESTABLISHED | TCPF_FIN_WAIT1 | TCPF_FIN_WAIT2 | \
+ TCPF_CLOSE | TCPF_CLOSE_WAIT | \
+ TCPF_LAST_ACK | TCPF_CLOSING)
+
+int tcp_ao_transmit_skb(struct sock *sk, struct sk_buff *skb,
+ struct tcp_ao_key *key, struct tcphdr *th,
+ __u8 *hash_location);
+int tcp_ao_hash_skb(unsigned short int family,
+ char *ao_hash, struct tcp_ao_key *key,
+ const struct sock *sk, const struct sk_buff *skb,
+ const u8 *tkey, int hash_offset, u32 sne);
+int tcp_parse_ao(struct sock *sk, int cmd, unsigned short int family,
+ sockptr_t optval, int optlen);
+struct tcp_ao_key *tcp_ao_established_key(struct tcp_ao_info *ao,
+ int sndid, int rcvid);
+int tcp_ao_copy_all_matching(const struct sock *sk, struct sock *newsk,
+ struct request_sock *req, struct sk_buff *skb,
+ int family);
+int tcp_ao_calc_traffic_key(struct tcp_ao_key *mkt, u8 *key, void *ctx,
+ unsigned int len, struct tcp_sigpool *hp);
+void tcp_ao_destroy_sock(struct sock *sk, bool twsk);
+void tcp_ao_time_wait(struct tcp_timewait_sock *tcptw, struct tcp_sock *tp);
+bool tcp_ao_ignore_icmp(const struct sock *sk, int family, int type, int code);
+int tcp_ao_get_mkts(struct sock *sk, sockptr_t optval, sockptr_t optlen);
+int tcp_ao_get_sock_info(struct sock *sk, sockptr_t optval, sockptr_t optlen);
+int tcp_ao_get_repair(struct sock *sk, sockptr_t optval, sockptr_t optlen);
+int tcp_ao_set_repair(struct sock *sk, sockptr_t optval, unsigned int optlen);
+enum skb_drop_reason tcp_inbound_ao_hash(struct sock *sk,
+ const struct sk_buff *skb, unsigned short int family,
+ const struct request_sock *req, int l3index,
+ const struct tcp_ao_hdr *aoh);
+u32 tcp_ao_compute_sne(u32 next_sne, u32 next_seq, u32 seq);
+struct tcp_ao_key *tcp_ao_do_lookup(const struct sock *sk, int l3index,
+ const union tcp_ao_addr *addr,
+ int family, int sndid, int rcvid);
+int tcp_ao_hash_hdr(unsigned short family, char *ao_hash,
+ struct tcp_ao_key *key, const u8 *tkey,
+ const union tcp_ao_addr *daddr,
+ const union tcp_ao_addr *saddr,
+ const struct tcphdr *th, u32 sne);
+int tcp_ao_prepare_reset(const struct sock *sk, struct sk_buff *skb,
+ const struct tcp_ao_hdr *aoh, int l3index, u32 seq,
+ struct tcp_ao_key **key, char **traffic_key,
+ bool *allocated_traffic_key, u8 *keyid, u32 *sne);
+
+/* ipv4 specific functions */
+int tcp_v4_parse_ao(struct sock *sk, int cmd, sockptr_t optval, int optlen);
+struct tcp_ao_key *tcp_v4_ao_lookup(const struct sock *sk, struct sock *addr_sk,
+ int sndid, int rcvid);
+int tcp_v4_ao_synack_hash(char *ao_hash, struct tcp_ao_key *mkt,
+ struct request_sock *req, const struct sk_buff *skb,
+ int hash_offset, u32 sne);
+int tcp_v4_ao_calc_key_sk(struct tcp_ao_key *mkt, u8 *key,
+ const struct sock *sk,
+ __be32 sisn, __be32 disn, bool send);
+int tcp_v4_ao_calc_key_rsk(struct tcp_ao_key *mkt, u8 *key,
+ struct request_sock *req);
+struct tcp_ao_key *tcp_v4_ao_lookup_rsk(const struct sock *sk,
+ struct request_sock *req,
+ int sndid, int rcvid);
+int tcp_v4_ao_hash_skb(char *ao_hash, struct tcp_ao_key *key,
+ const struct sock *sk, const struct sk_buff *skb,
+ const u8 *tkey, int hash_offset, u32 sne);
+/* ipv6 specific functions */
+int tcp_v6_ao_hash_pseudoheader(struct tcp_sigpool *hp,
+ const struct in6_addr *daddr,
+ const struct in6_addr *saddr, int nbytes);
+int tcp_v6_ao_calc_key_skb(struct tcp_ao_key *mkt, u8 *key,
+ const struct sk_buff *skb, __be32 sisn, __be32 disn);
+int tcp_v6_ao_calc_key_sk(struct tcp_ao_key *mkt, u8 *key,
+ const struct sock *sk, __be32 sisn,
+ __be32 disn, bool send);
+int tcp_v6_ao_calc_key_rsk(struct tcp_ao_key *mkt, u8 *key,
+ struct request_sock *req);
+struct tcp_ao_key *tcp_v6_ao_lookup(const struct sock *sk,
+ struct sock *addr_sk, int sndid, int rcvid);
+struct tcp_ao_key *tcp_v6_ao_lookup_rsk(const struct sock *sk,
+ struct request_sock *req,
+ int sndid, int rcvid);
+int tcp_v6_ao_hash_skb(char *ao_hash, struct tcp_ao_key *key,
+ const struct sock *sk, const struct sk_buff *skb,
+ const u8 *tkey, int hash_offset, u32 sne);
+int tcp_v6_parse_ao(struct sock *sk, int cmd, sockptr_t optval, int optlen);
+int tcp_v6_ao_synack_hash(char *ao_hash, struct tcp_ao_key *ao_key,
+ struct request_sock *req, const struct sk_buff *skb,
+ int hash_offset, u32 sne);
+void tcp_ao_established(struct sock *sk);
+void tcp_ao_finish_connect(struct sock *sk, struct sk_buff *skb);
+void tcp_ao_connect_init(struct sock *sk);
+void tcp_ao_syncookie(struct sock *sk, const struct sk_buff *skb,
+ struct tcp_request_sock *treq,
+ unsigned short int family, int l3index);
+#else /* CONFIG_TCP_AO */
+
+static inline int tcp_ao_transmit_skb(struct sock *sk, struct sk_buff *skb,
+ struct tcp_ao_key *key, struct tcphdr *th,
+ __u8 *hash_location)
+{
+ return 0;
+}
+
+static inline void tcp_ao_syncookie(struct sock *sk, const struct sk_buff *skb,
+ struct tcp_request_sock *treq,
+ unsigned short int family, int l3index)
+{
+}
+
+static inline bool tcp_ao_ignore_icmp(const struct sock *sk, int family,
+ int type, int code)
+{
+ return false;
+}
+
+static inline enum skb_drop_reason tcp_inbound_ao_hash(struct sock *sk,
+ const struct sk_buff *skb, unsigned short int family,
+ const struct request_sock *req, int l3index,
+ const struct tcp_ao_hdr *aoh)
+{
+ return SKB_NOT_DROPPED_YET;
+}
+
+static inline struct tcp_ao_key *tcp_ao_do_lookup(const struct sock *sk,
+ int l3index, const union tcp_ao_addr *addr,
+ int family, int sndid, int rcvid)
+{
+ return NULL;
+}
+
+static inline void tcp_ao_destroy_sock(struct sock *sk, bool twsk)
+{
+}
+
+static inline void tcp_ao_established(struct sock *sk)
+{
+}
+
+static inline void tcp_ao_finish_connect(struct sock *sk, struct sk_buff *skb)
+{
+}
+
+static inline void tcp_ao_time_wait(struct tcp_timewait_sock *tcptw,
+ struct tcp_sock *tp)
+{
+}
+
+static inline void tcp_ao_connect_init(struct sock *sk)
+{
+}
+
+static inline int tcp_ao_get_mkts(struct sock *sk, sockptr_t optval, sockptr_t optlen)
+{
+ return -ENOPROTOOPT;
+}
+
+static inline int tcp_ao_get_sock_info(struct sock *sk, sockptr_t optval, sockptr_t optlen)
+{
+ return -ENOPROTOOPT;
+}
+
+static inline int tcp_ao_get_repair(struct sock *sk,
+ sockptr_t optval, sockptr_t optlen)
+{
+ return -ENOPROTOOPT;
+}
+
+static inline int tcp_ao_set_repair(struct sock *sk,
+ sockptr_t optval, unsigned int optlen)
+{
+ return -ENOPROTOOPT;
+}
+#endif
+
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
+int tcp_do_parse_auth_options(const struct tcphdr *th,
+ const u8 **md5_hash, const u8 **ao_hash);
+#else
+static inline int tcp_do_parse_auth_options(const struct tcphdr *th,
+ const u8 **md5_hash, const u8 **ao_hash)
+{
+ *md5_hash = NULL;
+ *ao_hash = NULL;
+ return 0;
+}
+#endif
+
+#endif /* _TCP_AO_H */
diff --git a/include/net/tcx.h b/include/net/tcx.h
index 264f147953ba..04be9377785d 100644
--- a/include/net/tcx.h
+++ b/include/net/tcx.h
@@ -38,16 +38,11 @@ static inline struct tcx_entry *tcx_entry(struct bpf_mprog_entry *entry)
return container_of(bundle, struct tcx_entry, bundle);
}
-static inline struct tcx_link *tcx_link(struct bpf_link *link)
+static inline struct tcx_link *tcx_link(const struct bpf_link *link)
{
return container_of(link, struct tcx_link, link);
}
-static inline const struct tcx_link *tcx_link_const(const struct bpf_link *link)
-{
- return tcx_link((struct bpf_link *)link);
-}
-
void tcx_inc(void);
void tcx_dec(void);
diff --git a/include/net/tls.h b/include/net/tls.h
index a2b44578dcb7..962f0c501111 100644
--- a/include/net/tls.h
+++ b/include/net/tls.h
@@ -61,7 +61,8 @@ struct tls_rec;
#define TLS_AAD_SPACE_SIZE 13
-#define MAX_IV_SIZE 16
+#define TLS_MAX_IV_SIZE 16
+#define TLS_MAX_SALT_SIZE 4
#define TLS_TAG_SIZE 16
#define TLS_MAX_REC_SEQ_SIZE 8
#define TLS_MAX_AAD_SIZE TLS_AAD_SPACE_SIZE
@@ -149,6 +150,7 @@ struct tls_record_info {
skb_frag_t frags[MAX_SKB_FRAGS];
};
+#define TLS_DRIVER_STATE_SIZE_TX 16
struct tls_offload_context_tx {
struct crypto_aead *aead_send;
spinlock_t lock; /* protects records list */
@@ -162,17 +164,13 @@ struct tls_offload_context_tx {
void (*sk_destruct)(struct sock *sk);
struct work_struct destruct_work;
struct tls_context *ctx;
- u8 driver_state[] __aligned(8);
/* The TLS layer reserves room for driver specific state
* Currently the belief is that there is not enough
* driver specific state to justify another layer of indirection
*/
-#define TLS_DRIVER_STATE_SIZE_TX 16
+ u8 driver_state[TLS_DRIVER_STATE_SIZE_TX] __aligned(8);
};
-#define TLS_OFFLOAD_CONTEXT_SIZE_TX \
- (sizeof(struct tls_offload_context_tx) + TLS_DRIVER_STATE_SIZE_TX)
-
enum tls_context_flags {
/* tls_device_down was called after the netdev went down, device state
* was released, and kTLS works in software, even though rx_conf is
@@ -193,8 +191,8 @@ enum tls_context_flags {
};
struct cipher_context {
- char *iv;
- char *rec_seq;
+ char iv[TLS_MAX_IV_SIZE + TLS_MAX_SALT_SIZE];
+ char rec_seq[TLS_MAX_REC_SEQ_SIZE];
};
union tls_crypto_context {
@@ -302,6 +300,7 @@ struct tls_offload_resync_async {
u32 log[TLS_DEVICE_RESYNC_ASYNC_LOGMAX];
};
+#define TLS_DRIVER_STATE_SIZE_RX 8
struct tls_offload_context_rx {
/* sw must be the first member of tls_offload_context_rx */
struct tls_sw_context_rx sw;
@@ -325,17 +324,13 @@ struct tls_offload_context_rx {
struct tls_offload_resync_async *resync_async;
};
};
- u8 driver_state[] __aligned(8);
/* The TLS layer reserves room for driver specific state
* Currently the belief is that there is not enough
* driver specific state to justify another layer of indirection
*/
-#define TLS_DRIVER_STATE_SIZE_RX 8
+ u8 driver_state[TLS_DRIVER_STATE_SIZE_RX] __aligned(8);
};
-#define TLS_OFFLOAD_CONTEXT_SIZE_RX \
- (sizeof(struct tls_offload_context_rx) + TLS_DRIVER_STATE_SIZE_RX)
-
struct tls_record_info *tls_get_record(struct tls_offload_context_tx *context,
u32 seq, u64 *p_record_sn);
diff --git a/include/net/udp_tunnel.h b/include/net/udp_tunnel.h
index 0ca9b7a11baf..d716214fe03d 100644
--- a/include/net/udp_tunnel.h
+++ b/include/net/udp_tunnel.h
@@ -154,13 +154,30 @@ void udp_tunnel_xmit_skb(struct rtable *rt, struct sock *sk, struct sk_buff *skb
int udp_tunnel6_xmit_skb(struct dst_entry *dst, struct sock *sk,
struct sk_buff *skb,
- struct net_device *dev, struct in6_addr *saddr,
- struct in6_addr *daddr,
+ struct net_device *dev,
+ const struct in6_addr *saddr,
+ const struct in6_addr *daddr,
__u8 prio, __u8 ttl, __be32 label,
__be16 src_port, __be16 dst_port, bool nocheck);
void udp_tunnel_sock_release(struct socket *sock);
+struct rtable *udp_tunnel_dst_lookup(struct sk_buff *skb,
+ struct net_device *dev,
+ struct net *net, int oif,
+ __be32 *saddr,
+ const struct ip_tunnel_key *key,
+ __be16 sport, __be16 dport, u8 tos,
+ struct dst_cache *dst_cache);
+struct dst_entry *udp_tunnel6_dst_lookup(struct sk_buff *skb,
+ struct net_device *dev,
+ struct net *net,
+ struct socket *sock, int oif,
+ struct in6_addr *saddr,
+ const struct ip_tunnel_key *key,
+ __be16 sport, __be16 dport, u8 dsfield,
+ struct dst_cache *dst_cache);
+
struct metadata_dst *udp_tun_rx_dst(struct sk_buff *skb, unsigned short family,
__be16 flags, __be64 tunnel_id,
int md_size);
@@ -174,16 +191,13 @@ static inline int udp_tunnel_handle_offloads(struct sk_buff *skb, bool udp_csum)
}
#endif
-static inline void udp_tunnel_encap_enable(struct socket *sock)
+static inline void udp_tunnel_encap_enable(struct sock *sk)
{
- struct udp_sock *up = udp_sk(sock->sk);
-
- if (up->encap_enabled)
+ if (udp_test_and_set_bit(ENCAP_ENABLED, sk))
return;
- up->encap_enabled = 1;
#if IS_ENABLED(CONFIG_IPV6)
- if (sock->sk->sk_family == PF_INET6)
+ if (READ_ONCE(sk->sk_family) == PF_INET6)
ipv6_stub->udpv6_encap_enable();
#endif
udp_encap_enable();
diff --git a/include/net/udplite.h b/include/net/udplite.h
index bd33ff2b8f42..786919d29f8d 100644
--- a/include/net/udplite.h
+++ b/include/net/udplite.h
@@ -66,14 +66,18 @@ static inline int udplite_checksum_init(struct sk_buff *skb, struct udphdr *uh)
/* Fast-path computation of checksum. Socket may not be locked. */
static inline __wsum udplite_csum(struct sk_buff *skb)
{
- const struct udp_sock *up = udp_sk(skb->sk);
const int off = skb_transport_offset(skb);
+ const struct sock *sk = skb->sk;
int len = skb->len - off;
- if ((up->pcflag & UDPLITE_SEND_CC) && up->pcslen < len) {
- if (0 < up->pcslen)
- len = up->pcslen;
- udp_hdr(skb)->len = htons(up->pcslen);
+ if (udp_test_bit(UDPLITE_SEND_CC, sk)) {
+ u16 pcslen = READ_ONCE(udp_sk(sk)->pcslen);
+
+ if (pcslen < len) {
+ if (pcslen > 0)
+ len = pcslen;
+ udp_hdr(skb)->len = htons(pcslen);
+ }
}
skb->ip_summed = CHECKSUM_NONE; /* no HW support for checksumming */
diff --git a/include/net/xdp.h b/include/net/xdp.h
index de08c8e0d134..349c36fb5fd8 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -383,14 +383,25 @@ void xdp_attachment_setup(struct xdp_attachment_info *info,
#define DEV_MAP_BULK_SIZE XDP_BULK_QUEUE_SIZE
+/* Define the relationship between xdp-rx-metadata kfunc and
+ * various other entities:
+ * - xdp_rx_metadata enum
+ * - netdev netlink enum (Documentation/netlink/specs/netdev.yaml)
+ * - kfunc name
+ * - xdp_metadata_ops field
+ */
#define XDP_METADATA_KFUNC_xxx \
XDP_METADATA_KFUNC(XDP_METADATA_KFUNC_RX_TIMESTAMP, \
- bpf_xdp_metadata_rx_timestamp) \
+ NETDEV_XDP_RX_METADATA_TIMESTAMP, \
+ bpf_xdp_metadata_rx_timestamp, \
+ xmo_rx_timestamp) \
XDP_METADATA_KFUNC(XDP_METADATA_KFUNC_RX_HASH, \
- bpf_xdp_metadata_rx_hash) \
+ NETDEV_XDP_RX_METADATA_HASH, \
+ bpf_xdp_metadata_rx_hash, \
+ xmo_rx_hash) \
-enum {
-#define XDP_METADATA_KFUNC(name, _) name,
+enum xdp_rx_metadata {
+#define XDP_METADATA_KFUNC(name, _, __, ___) name,
XDP_METADATA_KFUNC_xxx
#undef XDP_METADATA_KFUNC
MAX_XDP_METADATA_KFUNC,
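The widened X-macro lets generic code derive per-kfunc tables; an illustrative (not in-tree) expansion building a name table indexed by the enum:

static const char * const my_xdp_rx_meta_names[] = {
#define XDP_METADATA_KFUNC(name, _, kfunc, __) [name] = #kfunc,
	XDP_METADATA_KFUNC_xxx
#undef XDP_METADATA_KFUNC
};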
diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
index 1617af380162..f83128007fb0 100644
--- a/include/net/xdp_sock.h
+++ b/include/net/xdp_sock.h
@@ -14,6 +14,8 @@
#include <linux/mm.h>
#include <net/sock.h>
+#define XDP_UMEM_SG_FLAG (1 << 1)
+
struct net_device;
struct xsk_queue;
struct xdp_buff;
@@ -61,6 +63,13 @@ struct xdp_sock {
struct xsk_queue *tx ____cacheline_aligned_in_smp;
struct list_head tx_list;
+ /* Record the number of Tx descriptors sent by this xsk; once it
+ * exceeds MAX_PER_SOCKET_BUDGET, other xsks must be given an
+ * opportunity to send Tx descriptors, thereby preventing them
+ * from being starved.
+ */
+ u32 tx_budget_spent;
+
/* Protects generic receive. */
spinlock_t rx_lock;
@@ -107,4 +116,13 @@ static inline void __xsk_map_flush(void)
#endif /* CONFIG_XDP_SOCKETS */
+#if defined(CONFIG_XDP_SOCKETS) && defined(CONFIG_DEBUG_NET)
+bool xsk_map_check_flush(void);
+#else
+static inline bool xsk_map_check_flush(void)
+{
+ return false;
+}
+#endif
+
#endif /* _LINUX_XDP_SOCK_H */
diff --git a/include/net/xfrm.h b/include/net/xfrm.h
index 363c7d510554..98d7aa78adda 100644
--- a/include/net/xfrm.h
+++ b/include/net/xfrm.h
@@ -2166,7 +2166,7 @@ static inline bool xfrm6_local_dontfrag(const struct sock *sk)
proto = sk->sk_protocol;
if (proto == IPPROTO_UDP || proto == IPPROTO_RAW)
- return inet6_sk(sk)->dontfrag;
+ return inet6_test_bit(DONTFRAG, sk);
return false;
}
diff --git a/include/trace/events/mptcp.h b/include/trace/events/mptcp.h
index 563e48617374..09e72215b9f9 100644
--- a/include/trace/events/mptcp.h
+++ b/include/trace/events/mptcp.h
@@ -44,7 +44,7 @@ TRACE_EVENT(mptcp_subflow_get_send,
ssk = mptcp_subflow_tcp_sock(subflow);
if (ssk && sk_fullsock(ssk)) {
__entry->snd_wnd = tcp_sk(ssk)->snd_wnd;
- __entry->pace = ssk->sk_pacing_rate;
+ __entry->pace = READ_ONCE(ssk->sk_pacing_rate);
} else {
__entry->snd_wnd = 0;
__entry->pace = 0;
diff --git a/include/trace/events/vsock_virtio_transport_common.h b/include/trace/events/vsock_virtio_transport_common.h
index d0b3f0ea9ba1..f1ebe36787c3 100644
--- a/include/trace/events/vsock_virtio_transport_common.h
+++ b/include/trace/events/vsock_virtio_transport_common.h
@@ -43,7 +43,8 @@ TRACE_EVENT(virtio_transport_alloc_pkt,
__u32 len,
__u16 type,
__u16 op,
- __u32 flags
+ __u32 flags,
+ bool zcopy
),
TP_ARGS(
src_cid, src_port,
@@ -51,7 +52,8 @@ TRACE_EVENT(virtio_transport_alloc_pkt,
len,
type,
op,
- flags
+ flags,
+ zcopy
),
TP_STRUCT__entry(
__field(__u32, src_cid)
@@ -62,6 +64,7 @@ TRACE_EVENT(virtio_transport_alloc_pkt,
__field(__u16, type)
__field(__u16, op)
__field(__u32, flags)
+ __field(bool, zcopy)
),
TP_fast_assign(
__entry->src_cid = src_cid;
@@ -72,14 +75,15 @@ TRACE_EVENT(virtio_transport_alloc_pkt,
__entry->type = type;
__entry->op = op;
__entry->flags = flags;
+ __entry->zcopy = zcopy;
),
- TP_printk("%u:%u -> %u:%u len=%u type=%s op=%s flags=%#x",
+ TP_printk("%u:%u -> %u:%u len=%u type=%s op=%s flags=%#x zcopy=%s",
__entry->src_cid, __entry->src_port,
__entry->dst_cid, __entry->dst_port,
__entry->len,
show_type(__entry->type),
show_op(__entry->op),
- __entry->flags)
+ __entry->flags, __entry->zcopy ? "true" : "false")
);
TRACE_EVENT(virtio_transport_recv_pkt,
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 0448700890f7..0f6cdf52b1da 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -932,7 +932,14 @@ enum bpf_map_type {
*/
BPF_MAP_TYPE_CGROUP_STORAGE = BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
- BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
+ BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED,
+ /* BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE is available to bpf programs
+ * attaching to a cgroup. The new mechanism (BPF_MAP_TYPE_CGRP_STORAGE +
+ * local percpu kptr) supports all BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE
+ * functionality and more. So mark BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE
+ * deprecated.
+ */
+ BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE = BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED,
BPF_MAP_TYPE_QUEUE,
BPF_MAP_TYPE_STACK,
BPF_MAP_TYPE_SK_STORAGE,
@@ -1040,6 +1047,13 @@ enum bpf_attach_type {
BPF_TCX_INGRESS,
BPF_TCX_EGRESS,
BPF_TRACE_UPROBE_MULTI,
+ BPF_CGROUP_UNIX_CONNECT,
+ BPF_CGROUP_UNIX_SENDMSG,
+ BPF_CGROUP_UNIX_RECVMSG,
+ BPF_CGROUP_UNIX_GETPEERNAME,
+ BPF_CGROUP_UNIX_GETSOCKNAME,
+ BPF_NETKIT_PRIMARY,
+ BPF_NETKIT_PEER,
__MAX_BPF_ATTACH_TYPE
};
@@ -1059,6 +1073,7 @@ enum bpf_link_type {
BPF_LINK_TYPE_NETFILTER = 10,
BPF_LINK_TYPE_TCX = 11,
BPF_LINK_TYPE_UPROBE_MULTI = 12,
+ BPF_LINK_TYPE_NETKIT = 13,
MAX_BPF_LINK_TYPE,
};
@@ -1644,6 +1659,13 @@ union bpf_attr {
__u32 flags;
__u32 pid;
} uprobe_multi;
+ struct {
+ union {
+ __u32 relative_fd;
+ __u32 relative_id;
+ };
+ __u64 expected_revision;
+ } netkit;
};
} link_create;
@@ -2697,8 +2719,8 @@ union bpf_attr {
* *bpf_socket* should be one of the following:
*
* * **struct bpf_sock_ops** for **BPF_PROG_TYPE_SOCK_OPS**.
- * * **struct bpf_sock_addr** for **BPF_CGROUP_INET4_CONNECT**
- * and **BPF_CGROUP_INET6_CONNECT**.
+ * * **struct bpf_sock_addr** for **BPF_CGROUP_INET4_CONNECT**,
+ * **BPF_CGROUP_INET6_CONNECT** and **BPF_CGROUP_UNIX_CONNECT**.
*
* This helper actually implements a subset of **setsockopt()**.
* It supports the following *level*\ s:
@@ -2936,8 +2958,8 @@ union bpf_attr {
* *bpf_socket* should be one of the following:
*
* * **struct bpf_sock_ops** for **BPF_PROG_TYPE_SOCK_OPS**.
- * * **struct bpf_sock_addr** for **BPF_CGROUP_INET4_CONNECT**
- * and **BPF_CGROUP_INET6_CONNECT**.
+ * * **struct bpf_sock_addr** for **BPF_CGROUP_INET4_CONNECT**,
+ * **BPF_CGROUP_INET6_CONNECT** and **BPF_CGROUP_UNIX_CONNECT**.
*
* This helper actually implements a subset of **getsockopt()**.
* It supports the same set of *optname*\ s that is supported by
@@ -3257,6 +3279,11 @@ union bpf_attr {
* and *params*->smac will not be set as output. A common
* use case is to call **bpf_redirect_neigh**\ () after
* doing **bpf_fib_lookup**\ ().
+ * **BPF_FIB_LOOKUP_SRC**
+ * Derive and set source IP addr in *params*->ipv{4,6}_src
+ * for the nexthop. If the src addr cannot be derived,
+ * **BPF_FIB_LKUP_RET_NO_SRC_ADDR** is returned. In this
+ * case, *params*->dmac and *params*->smac are not set either.
*
* *ctx* is either **struct xdp_md** for XDP programs or
* **struct sk_buff** tc cls_act programs.
@@ -5089,6 +5116,8 @@ union bpf_attr {
* **BPF_F_TIMER_ABS**
* Start the timer in absolute expire value instead of the
* default relative one.
+ * **BPF_F_TIMER_CPU_PIN**
+ * Timer will be pinned to the CPU of the caller.
*
* Return
* 0 on success.
@@ -6525,6 +6554,7 @@ struct bpf_link_info {
__aligned_u64 addrs;
__u32 count; /* in/out: kprobe_multi function count */
__u32 flags;
+ __u64 missed;
} kprobe_multi;
struct {
__u32 type; /* enum bpf_perf_event_type */
@@ -6540,6 +6570,7 @@ struct bpf_link_info {
__u32 name_len;
__u32 offset; /* offset from func_name */
__u64 addr;
+ __u64 missed;
} kprobe; /* BPF_PERF_EVENT_KPROBE, BPF_PERF_EVENT_KRETPROBE */
struct {
__aligned_u64 tp_name; /* in/out */
@@ -6555,6 +6586,10 @@ struct bpf_link_info {
__u32 ifindex;
__u32 attach_type;
} tcx;
+ struct {
+ __u32 ifindex;
+ __u32 attach_type;
+ } netkit;
};
} __attribute__((aligned(8)));
@@ -6953,6 +6988,7 @@ enum {
BPF_FIB_LOOKUP_OUTPUT = (1U << 1),
BPF_FIB_LOOKUP_SKIP_NEIGH = (1U << 2),
BPF_FIB_LOOKUP_TBID = (1U << 3),
+ BPF_FIB_LOOKUP_SRC = (1U << 4),
};
enum {
@@ -6965,6 +7001,7 @@ enum {
BPF_FIB_LKUP_RET_UNSUPP_LWT, /* fwd requires encapsulation */
BPF_FIB_LKUP_RET_NO_NEIGH, /* no neighbor entry for nh */
BPF_FIB_LKUP_RET_FRAG_NEEDED, /* fragmentation required to fwd */
+ BPF_FIB_LKUP_RET_NO_SRC_ADDR, /* failed to derive IP src addr */
};
struct bpf_fib_lookup {
@@ -6999,6 +7036,9 @@ struct bpf_fib_lookup {
__u32 rt_metric;
};
+ /* input: source address to consider for lookup
+ * output: source address result from lookup
+ */
union {
__be32 ipv4_src;
__u32 ipv6_src[4]; /* in6_addr; network order */
@@ -7300,9 +7340,11 @@ struct bpf_core_relo {
* Flags to control bpf_timer_start() behaviour.
* - BPF_F_TIMER_ABS: Timeout passed is absolute time, by default it is
* relative to current time.
+ * - BPF_F_TIMER_CPU_PIN: Timer will be pinned to the CPU of the caller.
*/
enum {
BPF_F_TIMER_ABS = (1ULL << 0),
+ BPF_F_TIMER_CPU_PIN = (1ULL << 1),
};
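A BPF-program-side sketch of the new flag; it assumes the usual bpf_timer setup (a map value embedding struct bpf_timer, already prepared with bpf_timer_init() and bpf_timer_set_callback()).

static long my_arm_pinned_timer(struct bpf_timer *t, u64 ns)
{
	/* fire 'ns' nanoseconds from now, on the CPU that armed the timer */
	return bpf_timer_start(t, ns, BPF_F_TIMER_CPU_PIN);
}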
/* BPF numbers iterator state */
diff --git a/include/uapi/linux/devlink.h b/include/uapi/linux/devlink.h
index 03875e078be8..b3c8383d342d 100644
--- a/include/uapi/linux/devlink.h
+++ b/include/uapi/linux/devlink.h
@@ -265,7 +265,7 @@ enum {
* Documentation/networking/devlink/devlink-flash.rst
*
*/
-enum {
+enum devlink_flash_overwrite {
DEVLINK_FLASH_OVERWRITE_SETTINGS_BIT,
DEVLINK_FLASH_OVERWRITE_IDENTIFIERS_BIT,
@@ -680,6 +680,7 @@ enum devlink_port_function_attr {
DEVLINK_PORT_FN_ATTR_STATE, /* u8 */
DEVLINK_PORT_FN_ATTR_OPSTATE, /* u8 */
DEVLINK_PORT_FN_ATTR_CAPS, /* bitfield32 */
+ DEVLINK_PORT_FN_ATTR_DEVLINK, /* nested */
__DEVLINK_PORT_FUNCTION_ATTR_MAX,
DEVLINK_PORT_FUNCTION_ATTR_MAX = __DEVLINK_PORT_FUNCTION_ATTR_MAX - 1
diff --git a/include/uapi/linux/dpll.h b/include/uapi/linux/dpll.h
new file mode 100644
index 000000000000..715a491d2727
--- /dev/null
+++ b/include/uapi/linux/dpll.h
@@ -0,0 +1,207 @@
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
+/* Do not edit directly, auto-generated from: */
+/* Documentation/netlink/specs/dpll.yaml */
+/* YNL-GEN uapi header */
+
+#ifndef _UAPI_LINUX_DPLL_H
+#define _UAPI_LINUX_DPLL_H
+
+#define DPLL_FAMILY_NAME "dpll"
+#define DPLL_FAMILY_VERSION 1
+
+/**
+ * enum dpll_mode - working modes a dpll can support, differentiates if and how
+ * dpll selects one of its inputs to syntonize with it, valid values for
+ * DPLL_A_MODE attribute
+ * @DPLL_MODE_MANUAL: input can be only selected by sending a request to dpll
+ * @DPLL_MODE_AUTOMATIC: highest prio input pin auto selected by dpll
+ */
+enum dpll_mode {
+ DPLL_MODE_MANUAL = 1,
+ DPLL_MODE_AUTOMATIC,
+
+ /* private: */
+ __DPLL_MODE_MAX,
+ DPLL_MODE_MAX = (__DPLL_MODE_MAX - 1)
+};
+
+/**
+ * enum dpll_lock_status - provides information of dpll device lock status,
+ * valid values for DPLL_A_LOCK_STATUS attribute
+ * @DPLL_LOCK_STATUS_UNLOCKED: dpll was not yet locked to any valid input (or
+ * forced by setting DPLL_A_MODE to DPLL_MODE_DETACHED)
+ * @DPLL_LOCK_STATUS_LOCKED: dpll is locked to a valid signal, but no holdover
+ * available
+ * @DPLL_LOCK_STATUS_LOCKED_HO_ACQ: dpll is locked and holdover acquired
+ * @DPLL_LOCK_STATUS_HOLDOVER: dpll is in holdover state - lost a valid lock or
+ * was forced by disconnecting all the pins (latter possible only when dpll
+ * lock-state was already DPLL_LOCK_STATUS_LOCKED_HO_ACQ, if dpll lock-state
+ * was not DPLL_LOCK_STATUS_LOCKED_HO_ACQ, the dpll's lock-state shall remain
+ * DPLL_LOCK_STATUS_UNLOCKED)
+ */
+enum dpll_lock_status {
+ DPLL_LOCK_STATUS_UNLOCKED = 1,
+ DPLL_LOCK_STATUS_LOCKED,
+ DPLL_LOCK_STATUS_LOCKED_HO_ACQ,
+ DPLL_LOCK_STATUS_HOLDOVER,
+
+ /* private: */
+ __DPLL_LOCK_STATUS_MAX,
+ DPLL_LOCK_STATUS_MAX = (__DPLL_LOCK_STATUS_MAX - 1)
+};
+
+#define DPLL_TEMP_DIVIDER 1000
+
+/**
+ * enum dpll_type - type of dpll, valid values for DPLL_A_TYPE attribute
+ * @DPLL_TYPE_PPS: dpll produces Pulse-Per-Second signal
+ * @DPLL_TYPE_EEC: dpll drives the Ethernet Equipment Clock
+ */
+enum dpll_type {
+ DPLL_TYPE_PPS = 1,
+ DPLL_TYPE_EEC,
+
+ /* private: */
+ __DPLL_TYPE_MAX,
+ DPLL_TYPE_MAX = (__DPLL_TYPE_MAX - 1)
+};
+
+/**
+ * enum dpll_pin_type - defines possible types of a pin, valid values for
+ * DPLL_A_PIN_TYPE attribute
+ * @DPLL_PIN_TYPE_MUX: aggregates another layer of selectable pins
+ * @DPLL_PIN_TYPE_EXT: external input
+ * @DPLL_PIN_TYPE_SYNCE_ETH_PORT: ethernet port PHY's recovered clock
+ * @DPLL_PIN_TYPE_INT_OSCILLATOR: device internal oscillator
+ * @DPLL_PIN_TYPE_GNSS: GNSS recovered clock
+ */
+enum dpll_pin_type {
+ DPLL_PIN_TYPE_MUX = 1,
+ DPLL_PIN_TYPE_EXT,
+ DPLL_PIN_TYPE_SYNCE_ETH_PORT,
+ DPLL_PIN_TYPE_INT_OSCILLATOR,
+ DPLL_PIN_TYPE_GNSS,
+
+ /* private: */
+ __DPLL_PIN_TYPE_MAX,
+ DPLL_PIN_TYPE_MAX = (__DPLL_PIN_TYPE_MAX - 1)
+};
+
+/**
+ * enum dpll_pin_direction - defines possible direction of a pin, valid values
+ * for DPLL_A_PIN_DIRECTION attribute
+ * @DPLL_PIN_DIRECTION_INPUT: pin used as an input of a signal
+ * @DPLL_PIN_DIRECTION_OUTPUT: pin used to output the signal
+ */
+enum dpll_pin_direction {
+ DPLL_PIN_DIRECTION_INPUT = 1,
+ DPLL_PIN_DIRECTION_OUTPUT,
+
+ /* private: */
+ __DPLL_PIN_DIRECTION_MAX,
+ DPLL_PIN_DIRECTION_MAX = (__DPLL_PIN_DIRECTION_MAX - 1)
+};
+
+#define DPLL_PIN_FREQUENCY_1_HZ 1
+#define DPLL_PIN_FREQUENCY_10_KHZ 10000
+#define DPLL_PIN_FREQUENCY_77_5_KHZ 77500
+#define DPLL_PIN_FREQUENCY_10_MHZ 10000000
+
+/**
+ * enum dpll_pin_state - defines possible states of a pin, valid values for
+ * DPLL_A_PIN_STATE attribute
+ * @DPLL_PIN_STATE_CONNECTED: pin connected, active input of phase locked loop
+ * @DPLL_PIN_STATE_DISCONNECTED: pin disconnected, not considered as a valid
+ * input
+ * @DPLL_PIN_STATE_SELECTABLE: pin enabled for automatic input selection
+ */
+enum dpll_pin_state {
+ DPLL_PIN_STATE_CONNECTED = 1,
+ DPLL_PIN_STATE_DISCONNECTED,
+ DPLL_PIN_STATE_SELECTABLE,
+
+ /* private: */
+ __DPLL_PIN_STATE_MAX,
+ DPLL_PIN_STATE_MAX = (__DPLL_PIN_STATE_MAX - 1)
+};
+
+/**
+ * enum dpll_pin_capabilities - defines possible capabilities of a pin, valid
+ * flags on DPLL_A_PIN_CAPABILITIES attribute
+ * @DPLL_PIN_CAPABILITIES_DIRECTION_CAN_CHANGE: pin direction can be changed
+ * @DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE: pin priority can be changed
+ * @DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE: pin state can be changed
+ */
+enum dpll_pin_capabilities {
+ DPLL_PIN_CAPABILITIES_DIRECTION_CAN_CHANGE = 1,
+ DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE = 2,
+ DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE = 4,
+};
+
+#define DPLL_PHASE_OFFSET_DIVIDER 1000
+
+enum dpll_a {
+ DPLL_A_ID = 1,
+ DPLL_A_MODULE_NAME,
+ DPLL_A_PAD,
+ DPLL_A_CLOCK_ID,
+ DPLL_A_MODE,
+ DPLL_A_MODE_SUPPORTED,
+ DPLL_A_LOCK_STATUS,
+ DPLL_A_TEMP,
+ DPLL_A_TYPE,
+
+ __DPLL_A_MAX,
+ DPLL_A_MAX = (__DPLL_A_MAX - 1)
+};
+
+enum dpll_a_pin {
+ DPLL_A_PIN_ID = 1,
+ DPLL_A_PIN_PARENT_ID,
+ DPLL_A_PIN_MODULE_NAME,
+ DPLL_A_PIN_PAD,
+ DPLL_A_PIN_CLOCK_ID,
+ DPLL_A_PIN_BOARD_LABEL,
+ DPLL_A_PIN_PANEL_LABEL,
+ DPLL_A_PIN_PACKAGE_LABEL,
+ DPLL_A_PIN_TYPE,
+ DPLL_A_PIN_DIRECTION,
+ DPLL_A_PIN_FREQUENCY,
+ DPLL_A_PIN_FREQUENCY_SUPPORTED,
+ DPLL_A_PIN_FREQUENCY_MIN,
+ DPLL_A_PIN_FREQUENCY_MAX,
+ DPLL_A_PIN_PRIO,
+ DPLL_A_PIN_STATE,
+ DPLL_A_PIN_CAPABILITIES,
+ DPLL_A_PIN_PARENT_DEVICE,
+ DPLL_A_PIN_PARENT_PIN,
+ DPLL_A_PIN_PHASE_ADJUST_MIN,
+ DPLL_A_PIN_PHASE_ADJUST_MAX,
+ DPLL_A_PIN_PHASE_ADJUST,
+ DPLL_A_PIN_PHASE_OFFSET,
+
+ __DPLL_A_PIN_MAX,
+ DPLL_A_PIN_MAX = (__DPLL_A_PIN_MAX - 1)
+};
+
+enum dpll_cmd {
+ DPLL_CMD_DEVICE_ID_GET = 1,
+ DPLL_CMD_DEVICE_GET,
+ DPLL_CMD_DEVICE_SET,
+ DPLL_CMD_DEVICE_CREATE_NTF,
+ DPLL_CMD_DEVICE_DELETE_NTF,
+ DPLL_CMD_DEVICE_CHANGE_NTF,
+ DPLL_CMD_PIN_ID_GET,
+ DPLL_CMD_PIN_GET,
+ DPLL_CMD_PIN_SET,
+ DPLL_CMD_PIN_CREATE_NTF,
+ DPLL_CMD_PIN_DELETE_NTF,
+ DPLL_CMD_PIN_CHANGE_NTF,
+
+ __DPLL_CMD_MAX,
+ DPLL_CMD_MAX = (__DPLL_CMD_MAX - 1)
+};
+
+#define DPLL_MCGRP_MONITOR "monitor"
+
+#endif /* _UAPI_LINUX_DPLL_H */
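/*
 * Illustrative sketch: a userspace helper turning the DPLL_A_LOCK_STATUS
 * values above into printable names, e.g. for a monitoring tool consuming
 * the "dpll" generic netlink family.
 */
#include <linux/dpll.h>

static const char *dpll_lock_status_str(enum dpll_lock_status s)
{
	switch (s) {
	case DPLL_LOCK_STATUS_UNLOCKED:		return "unlocked";
	case DPLL_LOCK_STATUS_LOCKED:		return "locked";
	case DPLL_LOCK_STATUS_LOCKED_HO_ACQ:	return "locked-ho-acq";
	case DPLL_LOCK_STATUS_HOLDOVER:		return "holdover";
	default:				return "unknown";
	}
}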
diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h
index f95326fce6bb..2e23f99dc0f1 100644
--- a/include/uapi/linux/if_bridge.h
+++ b/include/uapi/linux/if_bridge.h
@@ -723,6 +723,24 @@ enum {
};
#define MDBA_SET_ENTRY_MAX (__MDBA_SET_ENTRY_MAX - 1)
+/* [MDBA_GET_ENTRY] = {
+ * struct br_mdb_entry
+ * [MDBA_GET_ENTRY_ATTRS] = {
+ * [MDBE_ATTR_SOURCE]
+ * struct in_addr / struct in6_addr
+ * [MDBE_ATTR_SRC_VNI]
+ * u32
+ * }
+ * }
+ */
+enum {
+ MDBA_GET_ENTRY_UNSPEC,
+ MDBA_GET_ENTRY,
+ MDBA_GET_ENTRY_ATTRS,
+ __MDBA_GET_ENTRY_MAX,
+};
+#define MDBA_GET_ENTRY_MAX (__MDBA_GET_ENTRY_MAX - 1)
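/*
 * Illustrative sketch: filling the attribute layout documented in the comment
 * above for an RTM_GETMDB request with libmnl. The surrounding netlink
 * boilerplate (nlmsghdr, struct br_port_msg, send/receive) is omitted, and
 * MDBE_ATTR_SRC_VNI would be added the same way as MDBE_ATTR_SOURCE.
 */
#include <netinet/in.h>
#include <libmnl/libmnl.h>
#include <linux/if_bridge.h>

static void put_mdb_get_entry(struct nlmsghdr *nlh,
			      const struct br_mdb_entry *entry,
			      const struct in_addr *src)
{
	struct nlattr *attrs;

	mnl_attr_put(nlh, MDBA_GET_ENTRY, sizeof(*entry), entry);
	attrs = mnl_attr_nest_start(nlh, MDBA_GET_ENTRY_ATTRS);
	if (src)
		mnl_attr_put(nlh, MDBE_ATTR_SOURCE, sizeof(*src), src);
	mnl_attr_nest_end(nlh, attrs);
}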
+
/* [MDBA_SET_ENTRY_ATTRS] = {
* [MDBE_ATTR_xxx]
* ...
diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
index ce3117df9cec..29ff80da2775 100644
--- a/include/uapi/linux/if_link.h
+++ b/include/uapi/linux/if_link.h
@@ -376,7 +376,7 @@ enum {
IFLA_GSO_IPV4_MAX_SIZE,
IFLA_GRO_IPV4_MAX_SIZE,
-
+ IFLA_DPLL_PIN,
__IFLA_MAX
};
@@ -510,6 +510,8 @@ enum {
IFLA_BR_VLAN_STATS_PER_PORT,
IFLA_BR_MULTI_BOOLOPT,
IFLA_BR_MCAST_QUERIER_STATE,
+ IFLA_BR_FDB_N_LEARNED,
+ IFLA_BR_FDB_MAX_LEARNED,
__IFLA_BR_MAX,
};
@@ -756,6 +758,30 @@ struct tunnel_msg {
__u32 ifindex;
};
+/* netkit section */
+enum netkit_action {
+ NETKIT_NEXT = -1,
+ NETKIT_PASS = 0,
+ NETKIT_DROP = 2,
+ NETKIT_REDIRECT = 7,
+};
+
+enum netkit_mode {
+ NETKIT_L2,
+ NETKIT_L3,
+};
+
+enum {
+ IFLA_NETKIT_UNSPEC,
+ IFLA_NETKIT_PEER_INFO,
+ IFLA_NETKIT_PRIMARY,
+ IFLA_NETKIT_POLICY,
+ IFLA_NETKIT_PEER_POLICY,
+ IFLA_NETKIT_MODE,
+ __IFLA_NETKIT_MAX,
+};
+#define IFLA_NETKIT_MAX (__IFLA_NETKIT_MAX - 1)
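/*
 * Illustrative sketch: a BPF program for a netkit device using the action
 * codes above. It is written as a regular TC-style program; attaching it to
 * the netkit device through a BPF link is assumed and not shown.
 */
#include <linux/bpf.h>
#include <linux/if_link.h>
#include <bpf/bpf_helpers.h>

SEC("tc")
int netkit_drop_all(struct __sk_buff *skb)
{
	/* NETKIT_PASS would instead hand the packet to the peer device */
	return NETKIT_DROP;
}

char LICENSE[] SEC("license") = "GPL";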
+
/* VXLAN section */
/* include statistics in the dump */
@@ -1392,7 +1418,9 @@ enum {
enum {
IFLA_DSA_UNSPEC,
- IFLA_DSA_MASTER,
+ IFLA_DSA_CONDUIT,
+ /* Deprecated, use IFLA_DSA_CONDUIT instead */
+ IFLA_DSA_MASTER = IFLA_DSA_CONDUIT,
__IFLA_DSA_MAX,
};
diff --git a/include/uapi/linux/mptcp.h b/include/uapi/linux/mptcp.h
index ee9c49f949a2..a6451561f3f8 100644
--- a/include/uapi/linux/mptcp.h
+++ b/include/uapi/linux/mptcp.h
@@ -23,91 +23,20 @@
#define MPTCP_SUBFLOW_FLAG_CONNECTED _BITUL(7)
#define MPTCP_SUBFLOW_FLAG_MAPVALID _BITUL(8)
-enum {
- MPTCP_SUBFLOW_ATTR_UNSPEC,
- MPTCP_SUBFLOW_ATTR_TOKEN_REM,
- MPTCP_SUBFLOW_ATTR_TOKEN_LOC,
- MPTCP_SUBFLOW_ATTR_RELWRITE_SEQ,
- MPTCP_SUBFLOW_ATTR_MAP_SEQ,
- MPTCP_SUBFLOW_ATTR_MAP_SFSEQ,
- MPTCP_SUBFLOW_ATTR_SSN_OFFSET,
- MPTCP_SUBFLOW_ATTR_MAP_DATALEN,
- MPTCP_SUBFLOW_ATTR_FLAGS,
- MPTCP_SUBFLOW_ATTR_ID_REM,
- MPTCP_SUBFLOW_ATTR_ID_LOC,
- MPTCP_SUBFLOW_ATTR_PAD,
- __MPTCP_SUBFLOW_ATTR_MAX
-};
-
-#define MPTCP_SUBFLOW_ATTR_MAX (__MPTCP_SUBFLOW_ATTR_MAX - 1)
-
-/* netlink interface */
-#define MPTCP_PM_NAME "mptcp_pm"
#define MPTCP_PM_CMD_GRP_NAME "mptcp_pm_cmds"
#define MPTCP_PM_EV_GRP_NAME "mptcp_pm_events"
-#define MPTCP_PM_VER 0x1
-
-/*
- * ATTR types defined for MPTCP
- */
-enum {
- MPTCP_PM_ATTR_UNSPEC,
-
- MPTCP_PM_ATTR_ADDR, /* nested address */
- MPTCP_PM_ATTR_RCV_ADD_ADDRS, /* u32 */
- MPTCP_PM_ATTR_SUBFLOWS, /* u32 */
- MPTCP_PM_ATTR_TOKEN, /* u32 */
- MPTCP_PM_ATTR_LOC_ID, /* u8 */
- MPTCP_PM_ATTR_ADDR_REMOTE, /* nested address */
-
- __MPTCP_PM_ATTR_MAX
-};
-
-#define MPTCP_PM_ATTR_MAX (__MPTCP_PM_ATTR_MAX - 1)
-
-enum {
- MPTCP_PM_ADDR_ATTR_UNSPEC,
-
- MPTCP_PM_ADDR_ATTR_FAMILY, /* u16 */
- MPTCP_PM_ADDR_ATTR_ID, /* u8 */
- MPTCP_PM_ADDR_ATTR_ADDR4, /* struct in_addr */
- MPTCP_PM_ADDR_ATTR_ADDR6, /* struct in6_addr */
- MPTCP_PM_ADDR_ATTR_PORT, /* u16 */
- MPTCP_PM_ADDR_ATTR_FLAGS, /* u32 */
- MPTCP_PM_ADDR_ATTR_IF_IDX, /* s32 */
-
- __MPTCP_PM_ADDR_ATTR_MAX
-};
-
-#define MPTCP_PM_ADDR_ATTR_MAX (__MPTCP_PM_ADDR_ATTR_MAX - 1)
-
-#define MPTCP_PM_ADDR_FLAG_SIGNAL (1 << 0)
-#define MPTCP_PM_ADDR_FLAG_SUBFLOW (1 << 1)
-#define MPTCP_PM_ADDR_FLAG_BACKUP (1 << 2)
-#define MPTCP_PM_ADDR_FLAG_FULLMESH (1 << 3)
-#define MPTCP_PM_ADDR_FLAG_IMPLICIT (1 << 4)
-
-enum {
- MPTCP_PM_CMD_UNSPEC,
- MPTCP_PM_CMD_ADD_ADDR,
- MPTCP_PM_CMD_DEL_ADDR,
- MPTCP_PM_CMD_GET_ADDR,
- MPTCP_PM_CMD_FLUSH_ADDRS,
- MPTCP_PM_CMD_SET_LIMITS,
- MPTCP_PM_CMD_GET_LIMITS,
- MPTCP_PM_CMD_SET_FLAGS,
- MPTCP_PM_CMD_ANNOUNCE,
- MPTCP_PM_CMD_REMOVE,
- MPTCP_PM_CMD_SUBFLOW_CREATE,
- MPTCP_PM_CMD_SUBFLOW_DESTROY,
-
- __MPTCP_PM_CMD_AFTER_LAST
-};
+#include <linux/mptcp_pm.h>
#define MPTCP_INFO_FLAG_FALLBACK _BITUL(0)
#define MPTCP_INFO_FLAG_REMOTE_KEY_RECEIVED _BITUL(1)
+#define MPTCP_PM_ADDR_FLAG_SIGNAL (1 << 0)
+#define MPTCP_PM_ADDR_FLAG_SUBFLOW (1 << 1)
+#define MPTCP_PM_ADDR_FLAG_BACKUP (1 << 2)
+#define MPTCP_PM_ADDR_FLAG_FULLMESH (1 << 3)
+#define MPTCP_PM_ADDR_FLAG_IMPLICIT (1 << 4)
+
struct mptcp_info {
__u8 mptcpi_subflows;
__u8 mptcpi_add_addr_signal;
@@ -130,93 +59,6 @@ struct mptcp_info {
__u64 mptcpi_bytes_acked;
};
-/*
- * MPTCP_EVENT_CREATED: token, family, saddr4 | saddr6, daddr4 | daddr6,
- * sport, dport
- * A new MPTCP connection has been created. It is the good time to allocate
- * memory and send ADD_ADDR if needed. Depending on the traffic-patterns
- * it can take a long time until the MPTCP_EVENT_ESTABLISHED is sent.
- *
- * MPTCP_EVENT_ESTABLISHED: token, family, saddr4 | saddr6, daddr4 | daddr6,
- * sport, dport
- * A MPTCP connection is established (can start new subflows).
- *
- * MPTCP_EVENT_CLOSED: token
- * A MPTCP connection has stopped.
- *
- * MPTCP_EVENT_ANNOUNCED: token, rem_id, family, daddr4 | daddr6 [, dport]
- * A new address has been announced by the peer.
- *
- * MPTCP_EVENT_REMOVED: token, rem_id
- * An address has been lost by the peer.
- *
- * MPTCP_EVENT_SUB_ESTABLISHED: token, family, loc_id, rem_id,
- * saddr4 | saddr6, daddr4 | daddr6, sport,
- * dport, backup, if_idx [, error]
- * A new subflow has been established. 'error' should not be set.
- *
- * MPTCP_EVENT_SUB_CLOSED: token, family, loc_id, rem_id, saddr4 | saddr6,
- * daddr4 | daddr6, sport, dport, backup, if_idx
- * [, error]
- * A subflow has been closed. An error (copy of sk_err) could be set if an
- * error has been detected for this subflow.
- *
- * MPTCP_EVENT_SUB_PRIORITY: token, family, loc_id, rem_id, saddr4 | saddr6,
- * daddr4 | daddr6, sport, dport, backup, if_idx
- * [, error]
- * The priority of a subflow has changed. 'error' should not be set.
- *
- * MPTCP_EVENT_LISTENER_CREATED: family, sport, saddr4 | saddr6
- * A new PM listener is created.
- *
- * MPTCP_EVENT_LISTENER_CLOSED: family, sport, saddr4 | saddr6
- * A PM listener is closed.
- */
-enum mptcp_event_type {
- MPTCP_EVENT_UNSPEC = 0,
- MPTCP_EVENT_CREATED = 1,
- MPTCP_EVENT_ESTABLISHED = 2,
- MPTCP_EVENT_CLOSED = 3,
-
- MPTCP_EVENT_ANNOUNCED = 6,
- MPTCP_EVENT_REMOVED = 7,
-
- MPTCP_EVENT_SUB_ESTABLISHED = 10,
- MPTCP_EVENT_SUB_CLOSED = 11,
-
- MPTCP_EVENT_SUB_PRIORITY = 13,
-
- MPTCP_EVENT_LISTENER_CREATED = 15,
- MPTCP_EVENT_LISTENER_CLOSED = 16,
-};
-
-enum mptcp_event_attr {
- MPTCP_ATTR_UNSPEC = 0,
-
- MPTCP_ATTR_TOKEN, /* u32 */
- MPTCP_ATTR_FAMILY, /* u16 */
- MPTCP_ATTR_LOC_ID, /* u8 */
- MPTCP_ATTR_REM_ID, /* u8 */
- MPTCP_ATTR_SADDR4, /* be32 */
- MPTCP_ATTR_SADDR6, /* struct in6_addr */
- MPTCP_ATTR_DADDR4, /* be32 */
- MPTCP_ATTR_DADDR6, /* struct in6_addr */
- MPTCP_ATTR_SPORT, /* be16 */
- MPTCP_ATTR_DPORT, /* be16 */
- MPTCP_ATTR_BACKUP, /* u8 */
- MPTCP_ATTR_ERROR, /* u8 */
- MPTCP_ATTR_FLAGS, /* u16 */
- MPTCP_ATTR_TIMEOUT, /* u32 */
- MPTCP_ATTR_IF_IDX, /* s32 */
- MPTCP_ATTR_RESET_REASON,/* u32 */
- MPTCP_ATTR_RESET_FLAGS, /* u32 */
- MPTCP_ATTR_SERVER_SIDE, /* u8 */
-
- __MPTCP_ATTR_AFTER_LAST
-};
-
-#define MPTCP_ATTR_MAX (__MPTCP_ATTR_AFTER_LAST - 1)
-
/* MPTCP Reset reason codes, rfc8684 */
#define MPTCP_RST_EUNSPEC 0
#define MPTCP_RST_EMPTCP 1
diff --git a/include/uapi/linux/mptcp_pm.h b/include/uapi/linux/mptcp_pm.h
new file mode 100644
index 000000000000..b5d11aece408
--- /dev/null
+++ b/include/uapi/linux/mptcp_pm.h
@@ -0,0 +1,150 @@
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
+/* Do not edit directly, auto-generated from: */
+/* Documentation/netlink/specs/mptcp.yaml */
+/* YNL-GEN uapi header */
+
+#ifndef _UAPI_LINUX_MPTCP_PM_H
+#define _UAPI_LINUX_MPTCP_PM_H
+
+#define MPTCP_PM_NAME "mptcp_pm"
+#define MPTCP_PM_VER 1
+
+/**
+ * enum mptcp_event_type
+ * @MPTCP_EVENT_UNSPEC: unused event
+ * @MPTCP_EVENT_CREATED: token, family, saddr4 | saddr6, daddr4 | daddr6,
+ * sport, dport A new MPTCP connection has been created. It is the good time
+ * to allocate memory and send ADD_ADDR if needed. Depending on the
+ * traffic-patterns it can take a long time until the MPTCP_EVENT_ESTABLISHED
+ * is sent.
+ * @MPTCP_EVENT_ESTABLISHED: token, family, saddr4 | saddr6, daddr4 | daddr6,
+ * sport, dport A MPTCP connection is established (can start new subflows).
+ * @MPTCP_EVENT_CLOSED: token A MPTCP connection has stopped.
+ * @MPTCP_EVENT_ANNOUNCED: token, rem_id, family, daddr4 | daddr6 [, dport] A
+ * new address has been announced by the peer.
+ * @MPTCP_EVENT_REMOVED: token, rem_id An address has been lost by the peer.
+ * @MPTCP_EVENT_SUB_ESTABLISHED: token, family, loc_id, rem_id, saddr4 |
+ * saddr6, daddr4 | daddr6, sport, dport, backup, if_idx [, error] A new
+ * subflow has been established. 'error' should not be set.
+ * @MPTCP_EVENT_SUB_CLOSED: token, family, loc_id, rem_id, saddr4 | saddr6,
+ * daddr4 | daddr6, sport, dport, backup, if_idx [, error] A subflow has been
+ * closed. An error (copy of sk_err) could be set if an error has been
+ * detected for this subflow.
+ * @MPTCP_EVENT_SUB_PRIORITY: token, family, loc_id, rem_id, saddr4 | saddr6,
+ * daddr4 | daddr6, sport, dport, backup, if_idx [, error] The priority of a
+ * subflow has changed. 'error' should not be set.
+ * @MPTCP_EVENT_LISTENER_CREATED: family, sport, saddr4 | saddr6 A new PM
+ * listener is created.
+ * @MPTCP_EVENT_LISTENER_CLOSED: family, sport, saddr4 | saddr6 A PM listener
+ * is closed.
+ */
+enum mptcp_event_type {
+ MPTCP_EVENT_UNSPEC,
+ MPTCP_EVENT_CREATED,
+ MPTCP_EVENT_ESTABLISHED,
+ MPTCP_EVENT_CLOSED,
+ MPTCP_EVENT_ANNOUNCED = 6,
+ MPTCP_EVENT_REMOVED,
+ MPTCP_EVENT_SUB_ESTABLISHED = 10,
+ MPTCP_EVENT_SUB_CLOSED,
+ MPTCP_EVENT_SUB_PRIORITY = 13,
+ MPTCP_EVENT_LISTENER_CREATED = 15,
+ MPTCP_EVENT_LISTENER_CLOSED,
+};
+
+enum {
+ MPTCP_PM_ADDR_ATTR_UNSPEC,
+ MPTCP_PM_ADDR_ATTR_FAMILY,
+ MPTCP_PM_ADDR_ATTR_ID,
+ MPTCP_PM_ADDR_ATTR_ADDR4,
+ MPTCP_PM_ADDR_ATTR_ADDR6,
+ MPTCP_PM_ADDR_ATTR_PORT,
+ MPTCP_PM_ADDR_ATTR_FLAGS,
+ MPTCP_PM_ADDR_ATTR_IF_IDX,
+
+ __MPTCP_PM_ADDR_ATTR_MAX
+};
+#define MPTCP_PM_ADDR_ATTR_MAX (__MPTCP_PM_ADDR_ATTR_MAX - 1)
+
+enum {
+ MPTCP_SUBFLOW_ATTR_UNSPEC,
+ MPTCP_SUBFLOW_ATTR_TOKEN_REM,
+ MPTCP_SUBFLOW_ATTR_TOKEN_LOC,
+ MPTCP_SUBFLOW_ATTR_RELWRITE_SEQ,
+ MPTCP_SUBFLOW_ATTR_MAP_SEQ,
+ MPTCP_SUBFLOW_ATTR_MAP_SFSEQ,
+ MPTCP_SUBFLOW_ATTR_SSN_OFFSET,
+ MPTCP_SUBFLOW_ATTR_MAP_DATALEN,
+ MPTCP_SUBFLOW_ATTR_FLAGS,
+ MPTCP_SUBFLOW_ATTR_ID_REM,
+ MPTCP_SUBFLOW_ATTR_ID_LOC,
+ MPTCP_SUBFLOW_ATTR_PAD,
+
+ __MPTCP_SUBFLOW_ATTR_MAX
+};
+#define MPTCP_SUBFLOW_ATTR_MAX (__MPTCP_SUBFLOW_ATTR_MAX - 1)
+
+enum {
+ MPTCP_PM_ENDPOINT_ADDR = 1,
+
+ __MPTCP_PM_ENDPOINT_MAX
+};
+#define MPTCP_PM_ENDPOINT_MAX (__MPTCP_PM_ENDPOINT_MAX - 1)
+
+enum {
+ MPTCP_PM_ATTR_UNSPEC,
+ MPTCP_PM_ATTR_ADDR,
+ MPTCP_PM_ATTR_RCV_ADD_ADDRS,
+ MPTCP_PM_ATTR_SUBFLOWS,
+ MPTCP_PM_ATTR_TOKEN,
+ MPTCP_PM_ATTR_LOC_ID,
+ MPTCP_PM_ATTR_ADDR_REMOTE,
+
+ __MPTCP_ATTR_AFTER_LAST
+};
+#define MPTCP_PM_ATTR_MAX (__MPTCP_ATTR_AFTER_LAST - 1)
+
+enum mptcp_event_attr {
+ MPTCP_ATTR_UNSPEC,
+ MPTCP_ATTR_TOKEN,
+ MPTCP_ATTR_FAMILY,
+ MPTCP_ATTR_LOC_ID,
+ MPTCP_ATTR_REM_ID,
+ MPTCP_ATTR_SADDR4,
+ MPTCP_ATTR_SADDR6,
+ MPTCP_ATTR_DADDR4,
+ MPTCP_ATTR_DADDR6,
+ MPTCP_ATTR_SPORT,
+ MPTCP_ATTR_DPORT,
+ MPTCP_ATTR_BACKUP,
+ MPTCP_ATTR_ERROR,
+ MPTCP_ATTR_FLAGS,
+ MPTCP_ATTR_TIMEOUT,
+ MPTCP_ATTR_IF_IDX,
+ MPTCP_ATTR_RESET_REASON,
+ MPTCP_ATTR_RESET_FLAGS,
+ MPTCP_ATTR_SERVER_SIDE,
+
+ __MPTCP_ATTR_MAX
+};
+#define MPTCP_ATTR_MAX (__MPTCP_ATTR_MAX - 1)
+
+enum {
+ MPTCP_PM_CMD_UNSPEC,
+ MPTCP_PM_CMD_ADD_ADDR,
+ MPTCP_PM_CMD_DEL_ADDR,
+ MPTCP_PM_CMD_GET_ADDR,
+ MPTCP_PM_CMD_FLUSH_ADDRS,
+ MPTCP_PM_CMD_SET_LIMITS,
+ MPTCP_PM_CMD_GET_LIMITS,
+ MPTCP_PM_CMD_SET_FLAGS,
+ MPTCP_PM_CMD_ANNOUNCE,
+ MPTCP_PM_CMD_REMOVE,
+ MPTCP_PM_CMD_SUBFLOW_CREATE,
+ MPTCP_PM_CMD_SUBFLOW_DESTROY,
+
+ __MPTCP_PM_CMD_AFTER_LAST
+};
+#define MPTCP_PM_CMD_MAX (__MPTCP_PM_CMD_AFTER_LAST - 1)
+
+#endif /* _UAPI_LINUX_MPTCP_PM_H */
diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h
index c1634b95c223..2943a151d4f1 100644
--- a/include/uapi/linux/netdev.h
+++ b/include/uapi/linux/netdev.h
@@ -38,11 +38,27 @@ enum netdev_xdp_act {
NETDEV_XDP_ACT_MASK = 127,
};
+/**
+ * enum netdev_xdp_rx_metadata
+ * @NETDEV_XDP_RX_METADATA_TIMESTAMP: Device is capable of exposing receive HW
+ * timestamp via bpf_xdp_metadata_rx_timestamp().
+ * @NETDEV_XDP_RX_METADATA_HASH: Device is capable of exposing receive packet
+ * hash via bpf_xdp_metadata_rx_hash().
+ */
+enum netdev_xdp_rx_metadata {
+ NETDEV_XDP_RX_METADATA_TIMESTAMP = 1,
+ NETDEV_XDP_RX_METADATA_HASH = 2,
+
+ /* private: */
+ NETDEV_XDP_RX_METADATA_MASK = 3,
+};
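/*
 * Illustrative sketch: using the corresponding kfunc from an XDP program once
 * userspace has checked NETDEV_XDP_RX_METADATA_TIMESTAMP for the device; the
 * extern declaration is assumed to match the kernel's kfunc signature.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
					 __u64 *timestamp) __ksym;

SEC("xdp")
int rx_tstamp(struct xdp_md *ctx)
{
	__u64 ts;

	if (!bpf_xdp_metadata_rx_timestamp(ctx, &ts))
		bpf_printk("rx hw timestamp: %llu", ts);
	return XDP_PASS;
}

char LICENSE[] SEC("license") = "GPL";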
+
enum {
NETDEV_A_DEV_IFINDEX = 1,
NETDEV_A_DEV_PAD,
NETDEV_A_DEV_XDP_FEATURES,
NETDEV_A_DEV_XDP_ZC_MAX_SEGS,
+ NETDEV_A_DEV_XDP_RX_METADATA_FEATURES,
__NETDEV_A_DEV_MAX,
NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1)
diff --git a/include/uapi/linux/netlink.h b/include/uapi/linux/netlink.h
index e2ae82e3f9f7..f87aaf28a649 100644
--- a/include/uapi/linux/netlink.h
+++ b/include/uapi/linux/netlink.h
@@ -298,6 +298,8 @@ struct nla_bitfield32 {
* entry has attributes again, the policy for those inner ones
* and the corresponding maxtype may be specified.
* @NL_ATTR_TYPE_BITFIELD32: &struct nla_bitfield32 attribute
+ * @NL_ATTR_TYPE_SINT: 32-bit or 64-bit signed attribute, aligned to 4B
+ * @NL_ATTR_TYPE_UINT: 32-bit or 64-bit unsigned attribute, aligned to 4B
*/
enum netlink_attribute_type {
NL_ATTR_TYPE_INVALID,
@@ -322,6 +324,9 @@ enum netlink_attribute_type {
NL_ATTR_TYPE_NESTED_ARRAY,
NL_ATTR_TYPE_BITFIELD32,
+
+ NL_ATTR_TYPE_SINT,
+ NL_ATTR_TYPE_UINT,
};
/**
diff --git a/include/uapi/linux/nl80211.h b/include/uapi/linux/nl80211.h
index 88eb85c63029..dced2c49daec 100644
--- a/include/uapi/linux/nl80211.h
+++ b/include/uapi/linux/nl80211.h
@@ -167,7 +167,7 @@
* following events occur.
* a) Expiration of hardware timer whose expiration time is set to maximum
* coalescing delay of matching coalesce rule.
- * b) Coalescing buffer in hardware reaches it's limit.
+ * b) Coalescing buffer in hardware reaches its limit.
* c) Packet doesn't match any of the configured coalesce rules.
*
* User needs to configure following parameters for creating a coalesce
@@ -326,7 +326,7 @@
/**
* DOC: Multi-Link Operation
*
- * In Multi-Link Operation, a connection between to MLDs utilizes multiple
+ * In Multi-Link Operation, a connection between two MLDs utilizes multiple
* links. To use this in nl80211, various commands and responses now need
* to or will include the new %NL80211_ATTR_MLO_LINKS attribute.
* Additionally, various commands that need to operate on a specific link
@@ -335,6 +335,15 @@
*/
/**
+ * DOC: OWE DH IE handling offload
+ *
+ * By setting the @NL80211_EXT_FEATURE_OWE_OFFLOAD flag, drivers can indicate
+ * to kernel/application space that DH IE handling can be avoided. When this flag is
+ * advertised, the driver/device will take care of DH IE inclusion and
+ * processing of peer DH IE to generate PMK.
+ */
+
+/**
* enum nl80211_commands - supported nl80211 commands
*
* @NL80211_CMD_UNSPEC: unspecified command to catch errors
@@ -2690,11 +2699,13 @@ enum nl80211_commands {
*
* @NL80211_ATTR_FILS_DISCOVERY: Optional parameter to configure FILS
* discovery. It is a nested attribute, see
- * &enum nl80211_fils_discovery_attributes.
+ * &enum nl80211_fils_discovery_attributes. Userspace should pass an empty
+ * nested attribute to disable this feature and delete the templates.
*
* @NL80211_ATTR_UNSOL_BCAST_PROBE_RESP: Optional parameter to configure
* unsolicited broadcast probe response. It is a nested attribute, see
- * &enum nl80211_unsol_bcast_probe_resp_attributes.
+ * &enum nl80211_unsol_bcast_probe_resp_attributes. Userspace should pass an empty
+ * nested attribute to disable this feature and delete the templates.
*
* @NL80211_ATTR_S1G_CAPABILITY: S1G Capability information element (from
* association request when used with NL80211_CMD_NEW_STATION)
@@ -4213,6 +4224,8 @@ enum nl80211_wmm_rule {
* as the primary or any of the secondary channels isn't possible
* @NL80211_FREQUENCY_ATTR_NO_EHT: EHT operation is not allowed on this channel
* in current regulatory domain.
+ * @NL80211_FREQUENCY_ATTR_PSD: Power spectral density (in dBm) that
+ * is allowed on this channel in current regulatory domain.
* @NL80211_FREQUENCY_ATTR_MAX: highest frequency attribute number
* currently defined
* @__NL80211_FREQUENCY_ATTR_AFTER_LAST: internal use
@@ -4251,6 +4264,7 @@ enum nl80211_frequency_attr {
NL80211_FREQUENCY_ATTR_16MHZ,
NL80211_FREQUENCY_ATTR_NO_320MHZ,
NL80211_FREQUENCY_ATTR_NO_EHT,
+ NL80211_FREQUENCY_ATTR_PSD,
/* keep last */
__NL80211_FREQUENCY_ATTR_AFTER_LAST,
@@ -4351,6 +4365,8 @@ enum nl80211_reg_type {
* a given frequency range. The value is in mBm (100 * dBm).
* @NL80211_ATTR_DFS_CAC_TIME: DFS CAC time in milliseconds.
* If not present or 0 default CAC time will be used.
+ * @NL80211_ATTR_POWER_RULE_PSD: power spectral density (in dBm).
+ * This could be negative.
* @NL80211_REG_RULE_ATTR_MAX: highest regulatory rule attribute number
* currently defined
* @__NL80211_REG_RULE_ATTR_AFTER_LAST: internal use
@@ -4368,6 +4384,8 @@ enum nl80211_reg_rule_attr {
NL80211_ATTR_DFS_CAC_TIME,
+ NL80211_ATTR_POWER_RULE_PSD,
+
/* keep last */
__NL80211_REG_RULE_ATTR_AFTER_LAST,
NL80211_REG_RULE_ATTR_MAX = __NL80211_REG_RULE_ATTR_AFTER_LAST - 1
@@ -4451,6 +4469,7 @@ enum nl80211_sched_scan_match_attr {
* @NL80211_RRF_NO_HE: HE operation not allowed
* @NL80211_RRF_NO_320MHZ: 320MHz operation not allowed
* @NL80211_RRF_NO_EHT: EHT operation not allowed
+ * @NL80211_RRF_PSD: Ruleset has power spectral density value
*/
enum nl80211_reg_rule_flags {
NL80211_RRF_NO_OFDM = 1<<0,
@@ -4471,6 +4490,7 @@ enum nl80211_reg_rule_flags {
NL80211_RRF_NO_HE = 1<<17,
NL80211_RRF_NO_320MHZ = 1<<18,
NL80211_RRF_NO_EHT = 1<<19,
+ NL80211_RRF_PSD = 1<<20,
};
#define NL80211_RRF_PASSIVE_SCAN NL80211_RRF_NO_IR
@@ -5038,7 +5058,7 @@ enum nl80211_bss_scan_width {
* elements from a Beacon frame (bin); not present if no Beacon frame has
* yet been received
* @NL80211_BSS_CHAN_WIDTH: channel width of the control channel
- * (u32, enum nl80211_bss_scan_width)
+ * (u32, enum nl80211_bss_scan_width) - No longer used!
* @NL80211_BSS_BEACON_TSF: TSF of the last received beacon (u64)
* (not present if no beacon frame has been received yet)
* @NL80211_BSS_PRESP_DATA: the data in @NL80211_BSS_INFORMATION_ELEMENTS and
@@ -6400,6 +6420,12 @@ enum nl80211_feature_flags {
* in authentication and deauthentication frames sent to unassociated peer
* using @NL80211_CMD_FRAME.
*
+ * @NL80211_EXT_FEATURE_OWE_OFFLOAD: Driver/Device wants to do OWE DH IE
+ * handling in station mode.
+ *
+ * @NL80211_EXT_FEATURE_OWE_OFFLOAD_AP: Driver/Device wants to do OWE DH IE
+ * handling in AP mode.
+ *
* @NUM_NL80211_EXT_FEATURES: number of extended features.
* @MAX_NL80211_EXT_FEATURES: highest extended feature index.
*/
@@ -6471,6 +6497,8 @@ enum nl80211_ext_feature_index {
NL80211_EXT_FEATURE_PUNCT,
NL80211_EXT_FEATURE_SECURE_NAN,
NL80211_EXT_FEATURE_AUTH_AND_DEAUTH_RANDOM_TA,
+ NL80211_EXT_FEATURE_OWE_OFFLOAD,
+ NL80211_EXT_FEATURE_OWE_OFFLOAD_AP,
/* add new features before the definition below */
NUM_NL80211_EXT_FEATURES,
@@ -7606,7 +7634,7 @@ enum nl80211_iftype_akm_attributes {
* @NL80211_FILS_DISCOVERY_ATTR_INT_MIN: Minimum packet interval (u32, TU).
* Allowed range: 0..10000 (TU = Time Unit)
* @NL80211_FILS_DISCOVERY_ATTR_INT_MAX: Maximum packet interval (u32, TU).
- * Allowed range: 0..10000 (TU = Time Unit)
+ * Allowed range: 0..10000 (TU = Time Unit). If set to 0, the feature is disabled.
* @NL80211_FILS_DISCOVERY_ATTR_TMPL: Template data for FILS discovery action
* frame including the headers.
*
@@ -7639,7 +7667,8 @@ enum nl80211_fils_discovery_attributes {
*
* @NL80211_UNSOL_BCAST_PROBE_RESP_ATTR_INT: Maximum packet interval (u32, TU).
* Allowed range: 0..20 (TU = Time Unit). IEEE P802.11ax/D6.0
- * 26.17.2.3.2 (AP behavior for fast passive scanning).
+ * 26.17.2.3.2 (AP behavior for fast passive scanning). If set to 0, the feature is
+ * disabled.
* @NL80211_UNSOL_BCAST_PROBE_RESP_ATTR_TMPL: Unsolicited broadcast probe response
* frame template (binary).
*
diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
index 3f85ae578056..f762a10bfb78 100644
--- a/include/uapi/linux/pkt_sched.h
+++ b/include/uapi/linux/pkt_sched.h
@@ -941,15 +941,22 @@ enum {
TCA_FQ_HORIZON_DROP, /* drop packets beyond horizon, or cap their EDT */
+ TCA_FQ_PRIOMAP, /* prio2band */
+
+ TCA_FQ_WEIGHTS, /* Weights for each band */
+
__TCA_FQ_MAX
};
#define TCA_FQ_MAX (__TCA_FQ_MAX - 1)
+#define FQ_BANDS 3
+#define FQ_MIN_WEIGHT 16384
+
struct tc_fq_qd_stats {
__u64 gc_flows;
- __u64 highprio_packets;
- __u64 tcp_retrans;
+ __u64 highprio_packets; /* obsolete */
+ __u64 tcp_retrans; /* obsolete */
__u64 throttled;
__u64 flows_plimit;
__u64 pkts_too_long;
@@ -962,6 +969,10 @@ struct tc_fq_qd_stats {
__u64 ce_mark; /* packets above ce_threshold */
__u64 horizon_drops;
__u64 horizon_caps;
+ __u64 fastpath_packets;
+ __u64 band_drops[FQ_BANDS];
+ __u32 band_pkt_count[FQ_BANDS];
+ __u32 pad;
};
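/*
 * Illustrative sketch: decoding the new per-band FQ counters, e.g. after
 * fetching struct tc_fq_qd_stats from the qdisc's TCA_STATS_APP attribute
 * (netlink parsing not shown).
 */
#include <stdio.h>
#include <linux/pkt_sched.h>

static void print_fq_bands(const struct tc_fq_qd_stats *st)
{
	int band;

	for (band = 0; band < FQ_BANDS; band++)
		printf("band %d: %u queued packets, %llu drops\n", band,
		       st->band_pkt_count[band],
		       (unsigned long long)st->band_drops[band]);
	printf("fastpath packets: %llu\n",
	       (unsigned long long)st->fastpath_packets);
}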
/* Heavy-Hitter Filter */
diff --git a/include/uapi/linux/ptp_clock.h b/include/uapi/linux/ptp_clock.h
index 05cc35fc94ac..da700999cad4 100644
--- a/include/uapi/linux/ptp_clock.h
+++ b/include/uapi/linux/ptp_clock.h
@@ -224,6 +224,8 @@ struct ptp_pin_desc {
_IOWR(PTP_CLK_MAGIC, 17, struct ptp_sys_offset_precise)
#define PTP_SYS_OFFSET_EXTENDED2 \
_IOWR(PTP_CLK_MAGIC, 18, struct ptp_sys_offset_extended)
+#define PTP_MASK_CLEAR_ALL _IO(PTP_CLK_MAGIC, 19)
+#define PTP_MASK_EN_SINGLE _IOW(PTP_CLK_MAGIC, 20, unsigned int)
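/*
 * Illustrative sketch: a reader limiting its PTP event queue to a single
 * external-timestamp channel with the new ioctls; fd is assumed to be an
 * open /dev/ptpN descriptor.
 */
#include <sys/ioctl.h>
#include <linux/ptp_clock.h>

static int watch_only_channel(int fd, unsigned int channel)
{
	if (ioctl(fd, PTP_MASK_CLEAR_ALL))
		return -1;
	return ioctl(fd, PTP_MASK_EN_SINGLE, &channel);
}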
struct ptp_extts_event {
struct ptp_clock_time t; /* Time event occurred. */
diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h
index 51c13cf9c5ae..3b687d20c9ed 100644
--- a/include/uapi/linux/rtnetlink.h
+++ b/include/uapi/linux/rtnetlink.h
@@ -502,13 +502,17 @@ enum {
#define RTAX_MAX (__RTAX_MAX - 1)
-#define RTAX_FEATURE_ECN (1 << 0)
-#define RTAX_FEATURE_SACK (1 << 1)
-#define RTAX_FEATURE_TIMESTAMP (1 << 2)
-#define RTAX_FEATURE_ALLFRAG (1 << 3)
-
-#define RTAX_FEATURE_MASK (RTAX_FEATURE_ECN | RTAX_FEATURE_SACK | \
- RTAX_FEATURE_TIMESTAMP | RTAX_FEATURE_ALLFRAG)
+#define RTAX_FEATURE_ECN (1 << 0)
+#define RTAX_FEATURE_SACK (1 << 1) /* unused */
+#define RTAX_FEATURE_TIMESTAMP (1 << 2) /* unused */
+#define RTAX_FEATURE_ALLFRAG (1 << 3) /* unused */
+#define RTAX_FEATURE_TCP_USEC_TS (1 << 4)
+
+#define RTAX_FEATURE_MASK (RTAX_FEATURE_ECN | \
+ RTAX_FEATURE_SACK | \
+ RTAX_FEATURE_TIMESTAMP | \
+ RTAX_FEATURE_ALLFRAG | \
+ RTAX_FEATURE_TCP_USEC_TS)
struct rta_session {
__u8 proto;
diff --git a/include/uapi/linux/snmp.h b/include/uapi/linux/snmp.h
index 26f33a4c253d..a0819c6a5988 100644
--- a/include/uapi/linux/snmp.h
+++ b/include/uapi/linux/snmp.h
@@ -24,7 +24,7 @@ enum
IPSTATS_MIB_INOCTETS, /* InOctets */
IPSTATS_MIB_INDELIVERS, /* InDelivers */
IPSTATS_MIB_OUTFORWDATAGRAMS, /* OutForwDatagrams */
- IPSTATS_MIB_OUTPKTS, /* OutRequests */
+ IPSTATS_MIB_OUTREQUESTS, /* OutRequests */
IPSTATS_MIB_OUTOCTETS, /* OutOctets */
/* other fields */
IPSTATS_MIB_INHDRERRORS, /* InHdrErrors */
@@ -57,6 +57,7 @@ enum
IPSTATS_MIB_ECT0PKTS, /* InECT0Pkts */
IPSTATS_MIB_CEPKTS, /* InCEPkts */
IPSTATS_MIB_REASM_OVERLAPS, /* ReasmOverlaps */
+ IPSTATS_MIB_OUTPKTS, /* OutTransmits */
__IPSTATS_MIB_MAX
};
@@ -296,6 +297,11 @@ enum
LINUX_MIB_TCPMIGRATEREQSUCCESS, /* TCPMigrateReqSuccess */
LINUX_MIB_TCPMIGRATEREQFAILURE, /* TCPMigrateReqFailure */
LINUX_MIB_TCPPLBREHASH, /* TCPPLBRehash */
+ LINUX_MIB_TCPAOREQUIRED, /* TCPAORequired */
+ LINUX_MIB_TCPAOBAD, /* TCPAOBad */
+ LINUX_MIB_TCPAOKEYNOTFOUND, /* TCPAOKeyNotFound */
+ LINUX_MIB_TCPAOGOOD, /* TCPAOGood */
+ LINUX_MIB_TCPAODROPPEDICMPS, /* TCPAODroppedIcmps */
__LINUX_MIB_MAX
};
diff --git a/include/uapi/linux/tcp.h b/include/uapi/linux/tcp.h
index 879eeb0a084b..c07e9f90c084 100644
--- a/include/uapi/linux/tcp.h
+++ b/include/uapi/linux/tcp.h
@@ -129,6 +129,11 @@ enum {
#define TCP_TX_DELAY 37 /* delay outgoing packets by XX usec */
+#define TCP_AO_ADD_KEY 38 /* Add/Set MKT */
+#define TCP_AO_DEL_KEY 39 /* Delete MKT */
+#define TCP_AO_INFO 40 /* Set/list TCP-AO per-socket options */
+#define TCP_AO_GET_KEYS 41 /* List MKT(s) */
+#define TCP_AO_REPAIR 42 /* Get/Set SNEs and ISNs */
#define TCP_REPAIR_ON 1
#define TCP_REPAIR_OFF 0
@@ -170,6 +175,7 @@ enum tcp_fastopen_client_fail {
#define TCPI_OPT_ECN 8 /* ECN was negotiated at TCP session init */
#define TCPI_OPT_ECN_SEEN 16 /* we received at least one packet with ECT */
#define TCPI_OPT_SYN_DATA 32 /* SYN-ACK acked data in SYN sent or rcvd */
+#define TCPI_OPT_USEC_TS 64 /* usec timestamps */
/*
* Sender's congestion state indicating normal or abnormal situations
@@ -289,6 +295,18 @@ struct tcp_info {
*/
__u32 tcpi_rehash; /* PLB or timeout triggered rehash attempts */
+
+ __u16 tcpi_total_rto; /* Total number of RTO timeouts, including
+ * SYN/SYN-ACK and recurring timeouts.
+ */
+ __u16 tcpi_total_rto_recoveries; /* Total number of RTO
+ * recoveries, including any
+ * unfinished recovery.
+ */
+ __u32 tcpi_total_rto_time; /* Total time spent in RTO recoveries
+ * in milliseconds, including any
+ * unfinished recovery.
+ */
};
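/*
 * Illustrative sketch: reading the new RTO statistics (and the usec
 * timestamp option bit) for a connected TCP socket.
 */
#include <stdio.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <linux/tcp.h>

static void dump_rto_stats(int fd)
{
	struct tcp_info info;
	socklen_t len = sizeof(info);

	if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &len))
		return;
	printf("RTO timeouts: %u, recoveries: %u, time in recovery: %u ms\n",
	       info.tcpi_total_rto, info.tcpi_total_rto_recoveries,
	       info.tcpi_total_rto_time);
	if (info.tcpi_options & TCPI_OPT_USEC_TS)
		printf("usec TCP timestamps in use\n");
}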
/* netlink attributes types for SCM_TIMESTAMPING_OPT_STATS */
@@ -348,6 +366,106 @@ struct tcp_diag_md5sig {
__u8 tcpm_key[TCP_MD5SIG_MAXKEYLEN];
};
+#define TCP_AO_MAXKEYLEN 80
+
+#define TCP_AO_KEYF_IFINDEX (1 << 0) /* L3 ifindex for VRF */
+#define TCP_AO_KEYF_EXCLUDE_OPT (1 << 1) /* "Indicates whether TCP
+ * options other than TCP-AO
+ * are included in the MAC
+ * calculation"
+ */
+
+struct tcp_ao_add { /* setsockopt(TCP_AO_ADD_KEY) */
+ struct __kernel_sockaddr_storage addr; /* peer's address for the key */
+ char alg_name[64]; /* crypto hash algorithm to use */
+ __s32 ifindex; /* L3 dev index for VRF */
+ __u32 set_current :1, /* set key as Current_key at once */
+ set_rnext :1, /* request it from peer with RNext_key */
+ reserved :30; /* must be 0 */
+ __u16 reserved2; /* padding, must be 0 */
+ __u8 prefix; /* peer's address prefix */
+ __u8 sndid; /* SendID for outgoing segments */
+ __u8 rcvid; /* RecvID to match for incoming seg */
+ __u8 maclen; /* length of authentication code (hash) */
+ __u8 keyflags; /* see TCP_AO_KEYF_ */
+ __u8 keylen; /* length of ::key */
+ __u8 key[TCP_AO_MAXKEYLEN];
+} __attribute__((aligned(8)));
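/*
 * Illustrative sketch: installing a TCP-AO key on a socket before connect().
 * The algorithm string, SendID/RecvID values and /32 prefix are arbitrary
 * choices for the example, not requirements of the interface.
 */
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <linux/tcp.h>

static int add_ao_key(int fd, const struct sockaddr_in *peer,
		      const void *key, __u8 keylen)
{
	struct tcp_ao_add ao = {};

	memcpy(&ao.addr, peer, sizeof(*peer));
	strcpy(ao.alg_name, "hmac(sha1)");	/* assumed crypto algorithm name */
	ao.prefix = 32;				/* match the full IPv4 address */
	ao.sndid = 100;
	ao.rcvid = 100;
	ao.keylen = keylen;
	memcpy(ao.key, key, keylen);
	return setsockopt(fd, IPPROTO_TCP, TCP_AO_ADD_KEY, &ao, sizeof(ao));
}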
+
+struct tcp_ao_del { /* setsockopt(TCP_AO_DEL_KEY) */
+ struct __kernel_sockaddr_storage addr; /* peer's address for the key */
+ __s32 ifindex; /* L3 dev index for VRF */
+ __u32 set_current :1, /* corresponding ::current_key */
+ set_rnext :1, /* corresponding ::rnext */
+ del_async :1, /* only valid for listen sockets */
+ reserved :29; /* must be 0 */
+ __u16 reserved2; /* padding, must be 0 */
+ __u8 prefix; /* peer's address prefix */
+ __u8 sndid; /* SendID for outgoing segments */
+ __u8 rcvid; /* RecvID to match for incoming seg */
+ __u8 current_key; /* KeyID to set as Current_key */
+ __u8 rnext; /* KeyID to set as Rnext_key */
+ __u8 keyflags; /* see TCP_AO_KEYF_ */
+} __attribute__((aligned(8)));
+
+struct tcp_ao_info_opt { /* setsockopt(TCP_AO_INFO), getsockopt(TCP_AO_INFO) */
+ /* Here 'in' is for setsockopt(), 'out' is for getsockopt() */
+ __u32 set_current :1, /* in/out: corresponding ::current_key */
+ set_rnext :1, /* in/out: corresponding ::rnext */
+ ao_required :1, /* in/out: don't accept non-AO connects */
+ set_counters :1, /* in: set/clear ::pkt_* counters */
+ accept_icmps :1, /* in/out: accept incoming ICMPs */
+ reserved :27; /* must be 0 */
+ __u16 reserved2; /* padding, must be 0 */
+ __u8 current_key; /* in/out: KeyID of Current_key */
+ __u8 rnext; /* in/out: keyid of RNext_key */
+ __u64 pkt_good; /* in/out: verified segments */
+ __u64 pkt_bad; /* in/out: failed verification */
+ __u64 pkt_key_not_found; /* in/out: could not find a key to verify */
+ __u64 pkt_ao_required; /* in/out: segments missing TCP-AO sign */
+ __u64 pkt_dropped_icmp; /* in/out: ICMPs that were ignored */
+} __attribute__((aligned(8)));
+
+struct tcp_ao_getsockopt { /* getsockopt(TCP_AO_GET_KEYS) */
+ struct __kernel_sockaddr_storage addr; /* in/out: dump keys for peer
+ * with this address/prefix
+ */
+ char alg_name[64]; /* out: crypto hash algorithm */
+ __u8 key[TCP_AO_MAXKEYLEN];
+ __u32 nkeys; /* in: size of the userspace buffer
+ * @optval, measured in @optlen - the
+ * sizeof(struct tcp_ao_getsockopt)
+ * out: number of keys that matched
+ */
+ __u16 is_current :1, /* in: match and dump Current_key,
+ * out: the dumped key is Current_key
+ */
+
+ is_rnext :1, /* in: match and dump RNext_key,
+ * out: the dumped key is RNext_key
+ */
+ get_all :1, /* in: dump all keys */
+ reserved :13; /* padding, must be 0 */
+ __u8 sndid; /* in/out: dump keys with SendID */
+ __u8 rcvid; /* in/out: dump keys with RecvID */
+ __u8 prefix; /* in/out: dump keys with address/prefix */
+ __u8 maclen; /* out: key's length of authentication
+ * code (hash)
+ */
+ __u8 keyflags; /* in/out: see TCP_AO_KEYF_ */
+ __u8 keylen; /* out: length of ::key */
+ __s32 ifindex; /* in/out: L3 dev index for VRF */
+ __u64 pkt_good; /* out: verified segments */
+ __u64 pkt_bad; /* out: segments that failed verification */
+} __attribute__((aligned(8)));
+
+struct tcp_ao_repair { /* {s,g}etsockopt(TCP_AO_REPAIR) */
+ __be32 snt_isn;
+ __be32 rcv_isn;
+ __u32 snd_sne;
+ __u32 rcv_sne;
+} __attribute__((aligned(8)));
+
/* setsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE, ...) */
#define TCP_RECEIVE_ZEROCOPY_FLAG_TLB_CLEAN_HINT 0x1
diff --git a/include/uapi/linux/vm_sockets.h b/include/uapi/linux/vm_sockets.h
index c60ca33eac59..ed07181d4eff 100644
--- a/include/uapi/linux/vm_sockets.h
+++ b/include/uapi/linux/vm_sockets.h
@@ -191,4 +191,21 @@ struct sockaddr_vm {
#define IOCTL_VM_SOCKETS_GET_LOCAL_CID _IO(7, 0xb9)
+/* MSG_ZEROCOPY notifications are encoded in the standard error format,
+ * sock_extended_err. See Documentation/networking/msg_zerocopy.rst in
+ * kernel source tree for more details.
+ */
+
+/* 'cmsg_level' field value of 'struct cmsghdr' for notification parsing
+ * when MSG_ZEROCOPY flag is used on transmissions.
+ */
+
+#define SOL_VSOCK 287
+
+/* 'cmsg_type' field value of 'struct cmsghdr' for notification parsing
+ * when MSG_ZEROCOPY flag is used on transmissions.
+ */
+
+#define VSOCK_RECVERR 1
+
#endif /* _UAPI_VM_SOCKETS_H */
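/*
 * Illustrative sketch: draining a MSG_ZEROCOPY completion notification from a
 * vsock socket's error queue using the new SOL_VSOCK/VSOCK_RECVERR values;
 * the overall pattern follows Documentation/networking/msg_zerocopy.rst.
 */
#include <string.h>
#include <sys/socket.h>
#include <linux/errqueue.h>
#include <linux/vm_sockets.h>

static void read_zerocopy_completion(int fd)
{
	char control[128];
	struct msghdr msg = { .msg_control = control,
			      .msg_controllen = sizeof(control) };
	struct cmsghdr *cm;
	struct sock_extended_err serr;

	if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
		return;
	for (cm = CMSG_FIRSTHDR(&msg); cm; cm = CMSG_NXTHDR(&msg, cm)) {
		if (cm->cmsg_level != SOL_VSOCK || cm->cmsg_type != VSOCK_RECVERR)
			continue;
		memcpy(&serr, CMSG_DATA(cm), sizeof(serr));
		/* serr.ee_info .. serr.ee_data is the completed range */
	}
}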
diff --git a/kernel/bpf/bpf_iter.c b/kernel/bpf/bpf_iter.c
index 96856f130cbf..833faa04461b 100644
--- a/kernel/bpf/bpf_iter.c
+++ b/kernel/bpf/bpf_iter.c
@@ -793,8 +793,6 @@ __bpf_kfunc int bpf_iter_num_new(struct bpf_iter_num *it, int start, int end)
BUILD_BUG_ON(sizeof(struct bpf_iter_num_kern) != sizeof(struct bpf_iter_num));
BUILD_BUG_ON(__alignof__(struct bpf_iter_num_kern) != __alignof__(struct bpf_iter_num));
- BTF_TYPE_EMIT(struct btf_iter_num);
-
/* start == end is legit, it's an empty range and we'll just get NULL
* on first (and any subsequent) bpf_iter_num_next() call
*/
diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
index fdc3e8705a3c..db6176fb64dc 100644
--- a/kernel/bpf/bpf_struct_ops.c
+++ b/kernel/bpf/bpf_struct_ops.c
@@ -615,7 +615,10 @@ static void __bpf_struct_ops_map_free(struct bpf_map *map)
if (st_map->links)
bpf_struct_ops_map_put_progs(st_map);
bpf_map_area_free(st_map->links);
- bpf_jit_free_exec(st_map->image);
+ if (st_map->image) {
+ bpf_jit_free_exec(st_map->image);
+ bpf_jit_uncharge_modmem(PAGE_SIZE);
+ }
bpf_map_area_free(st_map->uvalue);
bpf_map_area_free(st_map);
}
@@ -657,6 +660,7 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
struct bpf_struct_ops_map *st_map;
const struct btf_type *t, *vt;
struct bpf_map *map;
+ int ret;
st_ops = bpf_struct_ops_find_value(attr->btf_vmlinux_value_type_id);
if (!st_ops)
@@ -681,12 +685,27 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
st_map->st_ops = st_ops;
map = &st_map->map;
+ ret = bpf_jit_charge_modmem(PAGE_SIZE);
+ if (ret) {
+ __bpf_struct_ops_map_free(map);
+ return ERR_PTR(ret);
+ }
+
+ st_map->image = bpf_jit_alloc_exec(PAGE_SIZE);
+ if (!st_map->image) {
+ /* __bpf_struct_ops_map_free() uses st_map->image as flag
+ * for "charged or not". In this case, we need to unchange
+ * here.
+ */
+ bpf_jit_uncharge_modmem(PAGE_SIZE);
+ __bpf_struct_ops_map_free(map);
+ return ERR_PTR(-ENOMEM);
+ }
st_map->uvalue = bpf_map_area_alloc(vt->size, NUMA_NO_NODE);
st_map->links =
bpf_map_area_alloc(btf_type_vlen(t) * sizeof(struct bpf_links *),
NUMA_NO_NODE);
- st_map->image = bpf_jit_alloc_exec(PAGE_SIZE);
- if (!st_map->uvalue || !st_map->links || !st_map->image) {
+ if (!st_map->uvalue || !st_map->links) {
__bpf_struct_ops_map_free(map);
return ERR_PTR(-ENOMEM);
}
@@ -907,4 +926,3 @@ err_out:
kfree(link);
return err;
}
-
diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
index 8090d7fb11ef..15d71d2986d3 100644
--- a/kernel/bpf/btf.c
+++ b/kernel/bpf/btf.c
@@ -3293,6 +3293,8 @@ static int btf_find_kptr(const struct btf *btf, const struct btf_type *t,
type = BPF_KPTR_UNREF;
else if (!strcmp("kptr", __btf_name_by_offset(btf, t->name_off)))
type = BPF_KPTR_REF;
+ else if (!strcmp("percpu_kptr", __btf_name_by_offset(btf, t->name_off)))
+ type = BPF_KPTR_PERCPU;
else
return -EINVAL;
@@ -3308,10 +3310,10 @@ static int btf_find_kptr(const struct btf *btf, const struct btf_type *t,
return BTF_FIELD_FOUND;
}
-static const char *btf_find_decl_tag_value(const struct btf *btf,
- const struct btf_type *pt,
- int comp_idx, const char *tag_key)
+const char *btf_find_decl_tag_value(const struct btf *btf, const struct btf_type *pt,
+ int comp_idx, const char *tag_key)
{
+ const char *value = NULL;
int i;
for (i = 1; i < btf_nr_types(btf); i++) {
@@ -3325,9 +3327,14 @@ static const char *btf_find_decl_tag_value(const struct btf *btf,
continue;
if (strncmp(__btf_name_by_offset(btf, t->name_off), tag_key, len))
continue;
- return __btf_name_by_offset(btf, t->name_off) + len;
+ /* Prevent duplicate entries for same type */
+ if (value)
+ return ERR_PTR(-EEXIST);
+ value = __btf_name_by_offset(btf, t->name_off) + len;
}
- return NULL;
+ if (!value)
+ return ERR_PTR(-ENOENT);
+ return value;
}
static int
@@ -3345,7 +3352,7 @@ btf_find_graph_root(const struct btf *btf, const struct btf_type *pt,
if (t->size != sz)
return BTF_FIELD_IGNORE;
value_type = btf_find_decl_tag_value(btf, pt, comp_idx, "contains:");
- if (!value_type)
+ if (IS_ERR(value_type))
return -EINVAL;
node_field_name = strstr(value_type, ":");
if (!node_field_name)
@@ -3457,6 +3464,7 @@ static int btf_find_struct_field(const struct btf *btf,
break;
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
ret = btf_find_kptr(btf, member_type, off, sz,
idx < info_cnt ? &info[idx] : &tmp);
if (ret < 0)
@@ -3523,6 +3531,7 @@ static int btf_find_datasec_var(const struct btf *btf, const struct btf_type *t,
break;
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
ret = btf_find_kptr(btf, var_type, off, sz,
idx < info_cnt ? &info[idx] : &tmp);
if (ret < 0)
@@ -3783,6 +3792,7 @@ struct btf_record *btf_parse_fields(const struct btf *btf, const struct btf_type
break;
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
ret = btf_parse_kptr(btf, &rec->fields[i], &info_arr[i]);
if (ret < 0)
goto end;
@@ -6949,7 +6959,7 @@ int btf_check_subprog_call(struct bpf_verifier_env *env, int subprog,
* (either PTR_TO_CTX or SCALAR_VALUE).
*/
int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog,
- struct bpf_reg_state *regs)
+ struct bpf_reg_state *regs, bool is_ex_cb)
{
struct bpf_verifier_log *log = &env->log;
struct bpf_prog *prog = env->prog;
@@ -7006,7 +7016,7 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog,
tname, nargs, MAX_BPF_FUNC_REG_ARGS);
return -EINVAL;
}
- /* check that function returns int */
+ /* check that function returns int, exception cb also requires this */
t = btf_type_by_id(btf, t->type);
while (btf_type_is_modifier(t))
t = btf_type_by_id(btf, t->type);
@@ -7055,6 +7065,14 @@ int btf_prepare_func_args(struct bpf_verifier_env *env, int subprog,
i, btf_type_str(t), tname);
return -EINVAL;
}
+ /* We have already ensured that the callback returns an integer, just
+ * like all global subprogs. We need to determine that it only has a single
+ * scalar argument.
+ */
+ if (is_ex_cb && (nargs != 1 || regs[BPF_REG_1].type != SCALAR_VALUE)) {
+ bpf_log(log, "exception cb only supports single integer argument\n");
+ return -EINVAL;
+ }
return 0;
}
@@ -7832,6 +7850,7 @@ static int bpf_prog_type_to_kfunc_hook(enum bpf_prog_type prog_type)
case BPF_PROG_TYPE_SYSCALL:
return BTF_KFUNC_HOOK_SYSCALL;
case BPF_PROG_TYPE_CGROUP_SKB:
+ case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
return BTF_KFUNC_HOOK_CGROUP_SKB;
case BPF_PROG_TYPE_SCHED_ACT:
return BTF_KFUNC_HOOK_SCHED_ACT;
diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
index 03b3d4492980..74ad2215e1ba 100644
--- a/kernel/bpf/cgroup.c
+++ b/kernel/bpf/cgroup.c
@@ -1450,18 +1450,22 @@ EXPORT_SYMBOL(__cgroup_bpf_run_filter_sk);
* provided by user sockaddr
* @sk: sock struct that will use sockaddr
* @uaddr: sockaddr struct provided by user
+ * @uaddrlen: Pointer to the size of the sockaddr struct provided by user. It is
+ * read-only for AF_INET[6] uaddr but can be modified for AF_UNIX
+ * uaddr.
* @atype: The type of program to be executed
* @t_ctx: Pointer to attach type specific context
* @flags: Pointer to u32 which contains higher bits of BPF program
* return value (OR'ed together).
*
- * socket is expected to be of type INET or INET6.
+ * socket is expected to be of type INET, INET6 or UNIX.
*
* This function will return %-EPERM if an attached program is found and
* returned value != 1 during execution. In all other cases, 0 is returned.
*/
int __cgroup_bpf_run_filter_sock_addr(struct sock *sk,
struct sockaddr *uaddr,
+ int *uaddrlen,
enum cgroup_bpf_attach_type atype,
void *t_ctx,
u32 *flags)
@@ -1473,21 +1477,31 @@ int __cgroup_bpf_run_filter_sock_addr(struct sock *sk,
};
struct sockaddr_storage unspec;
struct cgroup *cgrp;
+ int ret;
/* Check socket family since not all sockets represent network
* endpoint (e.g. AF_UNIX).
*/
- if (sk->sk_family != AF_INET && sk->sk_family != AF_INET6)
+ if (sk->sk_family != AF_INET && sk->sk_family != AF_INET6 &&
+ sk->sk_family != AF_UNIX)
return 0;
if (!ctx.uaddr) {
memset(&unspec, 0, sizeof(unspec));
ctx.uaddr = (struct sockaddr *)&unspec;
+ ctx.uaddrlen = 0;
+ } else {
+ ctx.uaddrlen = *uaddrlen;
}
cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
- return bpf_prog_run_array_cg(&cgrp->bpf, atype, &ctx, bpf_prog_run,
- 0, flags);
+ ret = bpf_prog_run_array_cg(&cgrp->bpf, atype, &ctx, bpf_prog_run,
+ 0, flags);
+
+ if (!ret && uaddr)
+ *uaddrlen = ctx.uaddrlen;
+
+ return ret;
}
EXPORT_SYMBOL(__cgroup_bpf_run_filter_sock_addr);
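/*
 * Illustrative sketch: a cgroup sockaddr program covering AF_UNIX connect();
 * the "cgroup/connect_unix" section name is assumed to be what the loader
 * maps to BPF_CGROUP_UNIX_CONNECT.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("cgroup/connect_unix")
int audit_unix_connect(struct bpf_sock_addr *ctx)
{
	/* the AF_UNIX sockaddr can be inspected or rewritten via ctx/kfuncs */
	return 1;	/* 1 allows the connect(), any other value yields -EPERM */
}

char LICENSE[] SEC("license") = "GPL";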
@@ -2520,10 +2534,13 @@ cgroup_common_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
case BPF_CGROUP_SOCK_OPS:
case BPF_CGROUP_UDP4_RECVMSG:
case BPF_CGROUP_UDP6_RECVMSG:
+ case BPF_CGROUP_UNIX_RECVMSG:
case BPF_CGROUP_INET4_GETPEERNAME:
case BPF_CGROUP_INET6_GETPEERNAME:
+ case BPF_CGROUP_UNIX_GETPEERNAME:
case BPF_CGROUP_INET4_GETSOCKNAME:
case BPF_CGROUP_INET6_GETSOCKNAME:
+ case BPF_CGROUP_UNIX_GETSOCKNAME:
return NULL;
default:
return &bpf_get_retval_proto;
@@ -2535,10 +2552,13 @@ cgroup_common_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
case BPF_CGROUP_SOCK_OPS:
case BPF_CGROUP_UDP4_RECVMSG:
case BPF_CGROUP_UDP6_RECVMSG:
+ case BPF_CGROUP_UNIX_RECVMSG:
case BPF_CGROUP_INET4_GETPEERNAME:
case BPF_CGROUP_INET6_GETPEERNAME:
+ case BPF_CGROUP_UNIX_GETPEERNAME:
case BPF_CGROUP_INET4_GETSOCKNAME:
case BPF_CGROUP_INET6_GETSOCKNAME:
+ case BPF_CGROUP_UNIX_GETSOCKNAME:
return NULL;
default:
return &bpf_set_retval_proto;
diff --git a/kernel/bpf/cgroup_iter.c b/kernel/bpf/cgroup_iter.c
index 810378f04fbc..209e5135f9fb 100644
--- a/kernel/bpf/cgroup_iter.c
+++ b/kernel/bpf/cgroup_iter.c
@@ -294,3 +294,68 @@ static int __init bpf_cgroup_iter_init(void)
}
late_initcall(bpf_cgroup_iter_init);
+
+struct bpf_iter_css {
+ __u64 __opaque[3];
+} __attribute__((aligned(8)));
+
+struct bpf_iter_css_kern {
+ struct cgroup_subsys_state *start;
+ struct cgroup_subsys_state *pos;
+ unsigned int flags;
+} __attribute__((aligned(8)));
+
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+ "Global functions as their definitions will be in vmlinux BTF");
+
+__bpf_kfunc int bpf_iter_css_new(struct bpf_iter_css *it,
+ struct cgroup_subsys_state *start, unsigned int flags)
+{
+ struct bpf_iter_css_kern *kit = (void *)it;
+
+ BUILD_BUG_ON(sizeof(struct bpf_iter_css_kern) > sizeof(struct bpf_iter_css));
+ BUILD_BUG_ON(__alignof__(struct bpf_iter_css_kern) != __alignof__(struct bpf_iter_css));
+
+ kit->start = NULL;
+ switch (flags) {
+ case BPF_CGROUP_ITER_DESCENDANTS_PRE:
+ case BPF_CGROUP_ITER_DESCENDANTS_POST:
+ case BPF_CGROUP_ITER_ANCESTORS_UP:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ kit->start = start;
+ kit->pos = NULL;
+ kit->flags = flags;
+ return 0;
+}
+
+__bpf_kfunc struct cgroup_subsys_state *bpf_iter_css_next(struct bpf_iter_css *it)
+{
+ struct bpf_iter_css_kern *kit = (void *)it;
+
+ if (!kit->start)
+ return NULL;
+
+ switch (kit->flags) {
+ case BPF_CGROUP_ITER_DESCENDANTS_PRE:
+ kit->pos = css_next_descendant_pre(kit->pos, kit->start);
+ break;
+ case BPF_CGROUP_ITER_DESCENDANTS_POST:
+ kit->pos = css_next_descendant_post(kit->pos, kit->start);
+ break;
+ case BPF_CGROUP_ITER_ANCESTORS_UP:
+ kit->pos = kit->pos ? kit->pos->parent : kit->start;
+ }
+
+ return kit->pos;
+}
+
+__bpf_kfunc void bpf_iter_css_destroy(struct bpf_iter_css *it)
+{
+}
+
+__diag_pop(); \ No newline at end of file
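/*
 * Illustrative sketch: walking a cgroup subtree with the css open-coded
 * iterator from a BPF program. Types and kfunc declarations are assumed to
 * come from vmlinux.h/BTF, cgrp is assumed to be a trusted pointer, and the
 * walk must stay inside a bpf_rcu_read_lock() section because the iterator
 * kfuncs are RCU-protected.
 */
static int count_descendant_css(struct cgroup *cgrp)
{
	struct bpf_iter_css it;
	struct cgroup_subsys_state *pos;
	int n = 0;

	bpf_rcu_read_lock();
	bpf_iter_css_new(&it, &cgrp->self, BPF_CGROUP_ITER_DESCENDANTS_PRE);
	while ((pos = bpf_iter_css_next(&it)))
		n++;
	bpf_iter_css_destroy(&it);
	bpf_rcu_read_unlock();
	return n;
}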
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 4e3ce0542e31..08626b519ce2 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -64,8 +64,8 @@
#define OFF insn->off
#define IMM insn->imm
-struct bpf_mem_alloc bpf_global_ma;
-bool bpf_global_ma_set;
+struct bpf_mem_alloc bpf_global_ma, bpf_global_percpu_ma;
+bool bpf_global_ma_set, bpf_global_percpu_ma_set;
/* No hurry in this branch
*
@@ -212,7 +212,7 @@ void bpf_prog_fill_jited_linfo(struct bpf_prog *prog,
const struct bpf_line_info *linfo;
void **jited_linfo;
- if (!prog->aux->jited_linfo)
+ if (!prog->aux->jited_linfo || prog->aux->func_idx > prog->aux->func_cnt)
/* Userspace did not provide linfo */
return;
@@ -539,7 +539,7 @@ static void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp)
{
int i;
- for (i = 0; i < fp->aux->func_cnt; i++)
+ for (i = 0; i < fp->aux->real_func_cnt; i++)
bpf_prog_kallsyms_del(fp->aux->func[i]);
}
@@ -589,7 +589,7 @@ bpf_prog_ksym_set_name(struct bpf_prog *prog)
sym = bin2hex(sym, prog->tag, sizeof(prog->tag));
/* prog->aux->name will be ignored if full btf name is available */
- if (prog->aux->func_info_cnt) {
+ if (prog->aux->func_info_cnt && prog->aux->func_idx < prog->aux->func_info_cnt) {
type = btf_type_by_id(prog->aux->btf,
prog->aux->func_info[prog->aux->func_idx].type_id);
func_name = btf_name_by_offset(prog->aux->btf, type->name_off);
@@ -623,7 +623,11 @@ static __always_inline int bpf_tree_comp(void *key, struct latch_tree_node *n)
if (val < ksym->start)
return -1;
- if (val >= ksym->end)
+ /* Ensure that we detect return addresses as part of the program, when
+ * the final instruction is a call for a program part of the stack
+ * trace. Therefore, do val > ksym->end instead of val >= ksym->end.
+ */
+ if (val > ksym->end)
return 1;
return 0;
@@ -733,7 +737,7 @@ bool is_bpf_text_address(unsigned long addr)
return ret;
}
-static struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
+struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
{
struct bpf_ksym *ksym = bpf_ksym_find(addr);
@@ -1208,7 +1212,7 @@ int bpf_jit_get_func_addr(const struct bpf_prog *prog,
if (!extra_pass)
addr = NULL;
else if (prog->aux->func &&
- off >= 0 && off < prog->aux->func_cnt)
+ off >= 0 && off < prog->aux->real_func_cnt)
addr = (u8 *)prog->aux->func[off]->bpf_func;
else
return -EINVAL;
@@ -2721,7 +2725,7 @@ static void bpf_prog_free_deferred(struct work_struct *work)
#endif
if (aux->dst_trampoline)
bpf_trampoline_put(aux->dst_trampoline);
- for (i = 0; i < aux->func_cnt; i++) {
+ for (i = 0; i < aux->real_func_cnt; i++) {
/* We can just unlink the subprog poke descriptor table as
* it was originally linked to the main program and is also
* released along with it.
@@ -2729,7 +2733,7 @@ static void bpf_prog_free_deferred(struct work_struct *work)
aux->func[i]->aux->poke_tab = NULL;
bpf_jit_free(aux->func[i]);
}
- if (aux->func_cnt) {
+ if (aux->real_func_cnt) {
kfree(aux->func);
bpf_prog_unlock_free(aux->prog);
} else {
@@ -2914,6 +2918,15 @@ int __weak bpf_arch_text_invalidate(void *dst, size_t len)
return -ENOTSUPP;
}
+bool __weak bpf_jit_supports_exceptions(void)
+{
+ return false;
+}
+
+void __weak arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp), void *cookie)
+{
+}
+
#ifdef CONFIG_BPF_SYSCALL
static int __init bpf_global_ma_init(void)
{
@@ -2921,7 +2934,9 @@ static int __init bpf_global_ma_init(void)
ret = bpf_mem_alloc_init(&bpf_global_ma, 0, false);
bpf_global_ma_set = !ret;
- return ret;
+ ret = bpf_mem_alloc_init(&bpf_global_percpu_ma, 0, true);
+ bpf_global_percpu_ma_set = !ret;
+ return !bpf_global_ma_set || !bpf_global_percpu_ma_set;
}
late_initcall(bpf_global_ma_init);
#endif
diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index e42a1bdb7f53..8a0bb80fe48a 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -764,6 +764,16 @@ void __cpu_map_flush(void)
}
}
+#ifdef CONFIG_DEBUG_NET
+bool cpu_map_check_flush(void)
+{
+ if (list_empty(this_cpu_ptr(&cpu_map_flush_list)))
+ return false;
+ __cpu_map_flush();
+ return true;
+}
+#endif
+
static int __init cpu_map_init(void)
{
int cpu;
diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
index 4d42f6ed6c11..a936c704d4e7 100644
--- a/kernel/bpf/devmap.c
+++ b/kernel/bpf/devmap.c
@@ -418,6 +418,16 @@ void __dev_flush(void)
}
}
+#ifdef CONFIG_DEBUG_NET
+bool dev_check_flush(void)
+{
+ if (list_empty(this_cpu_ptr(&dev_flush_list)))
+ return false;
+ __dev_flush();
+ return true;
+}
+#endif
+
/* Elements are kept alive by RCU; either by rcu_read_lock() (from syscall) or
* by local_bh_disable() (from XDP calls inside NAPI). The
* rcu_read_lock_bh_held() below makes lockdep accept both.
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index a8c7e1c5abfa..fd8d4b0addfc 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -155,13 +155,15 @@ static inline int htab_lock_bucket(const struct bpf_htab *htab,
hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
preempt_disable();
+ local_irq_save(flags);
if (unlikely(__this_cpu_inc_return(*(htab->map_locked[hash])) != 1)) {
__this_cpu_dec(*(htab->map_locked[hash]));
+ local_irq_restore(flags);
preempt_enable();
return -EBUSY;
}
- raw_spin_lock_irqsave(&b->raw_lock, flags);
+ raw_spin_lock(&b->raw_lock);
*pflags = flags;
return 0;
@@ -172,8 +174,9 @@ static inline void htab_unlock_bucket(const struct bpf_htab *htab,
unsigned long flags)
{
hash = hash & min_t(u32, HASHTAB_MAP_LOCK_MASK, htab->n_buckets - 1);
- raw_spin_unlock_irqrestore(&b->raw_lock, flags);
+ raw_spin_unlock(&b->raw_lock);
__this_cpu_dec(*(htab->map_locked[hash]));
+ local_irq_restore(flags);
preempt_enable();
}
diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
index 8bd3812fb8df..e46ac288a108 100644
--- a/kernel/bpf/helpers.c
+++ b/kernel/bpf/helpers.c
@@ -22,6 +22,7 @@
#include <linux/security.h>
#include <linux/btf_ids.h>
#include <linux/bpf_mem_alloc.h>
+#include <linux/kasan.h>
#include "../../lib/kstrtox.h"
@@ -1271,7 +1272,7 @@ BPF_CALL_3(bpf_timer_start, struct bpf_timer_kern *, timer, u64, nsecs, u64, fla
if (in_nmi())
return -EOPNOTSUPP;
- if (flags > BPF_F_TIMER_ABS)
+ if (flags & ~(BPF_F_TIMER_ABS | BPF_F_TIMER_CPU_PIN))
return -EINVAL;
__bpf_spin_lock_irqsave(&timer->lock);
t = timer->timer;
@@ -1285,6 +1286,9 @@ BPF_CALL_3(bpf_timer_start, struct bpf_timer_kern *, timer, u64, nsecs, u64, fla
else
mode = HRTIMER_MODE_REL_SOFT;
+ if (flags & BPF_F_TIMER_CPU_PIN)
+ mode |= HRTIMER_MODE_PINNED;
+
hrtimer_start(&t->timer, ns_to_ktime(nsecs), mode);
out:
__bpf_spin_unlock_irqrestore(&timer->lock);
@@ -1807,8 +1811,6 @@ bpf_base_func_proto(enum bpf_func_id func_id)
}
}
-void __bpf_obj_drop_impl(void *p, const struct btf_record *rec);
-
void bpf_list_head_free(const struct btf_field *field, void *list_head,
struct bpf_spin_lock *spin_lock)
{
@@ -1840,7 +1842,7 @@ unlock:
* bpf_list_head which needs to be freed.
*/
migrate_disable();
- __bpf_obj_drop_impl(obj, field->graph_root.value_rec);
+ __bpf_obj_drop_impl(obj, field->graph_root.value_rec, false);
migrate_enable();
}
}
@@ -1879,7 +1881,7 @@ void bpf_rb_root_free(const struct btf_field *field, void *rb_root,
migrate_disable();
- __bpf_obj_drop_impl(obj, field->graph_root.value_rec);
+ __bpf_obj_drop_impl(obj, field->graph_root.value_rec, false);
migrate_enable();
}
}
@@ -1902,9 +1904,19 @@ __bpf_kfunc void *bpf_obj_new_impl(u64 local_type_id__k, void *meta__ign)
return p;
}
+__bpf_kfunc void *bpf_percpu_obj_new_impl(u64 local_type_id__k, void *meta__ign)
+{
+ u64 size = local_type_id__k;
+
+ /* The verifier has ensured that meta__ign must be NULL */
+ return bpf_mem_alloc(&bpf_global_percpu_ma, size);
+}
+
/* Must be called under migrate_disable(), as required by bpf_mem_free */
-void __bpf_obj_drop_impl(void *p, const struct btf_record *rec)
+void __bpf_obj_drop_impl(void *p, const struct btf_record *rec, bool percpu)
{
+ struct bpf_mem_alloc *ma;
+
if (rec && rec->refcount_off >= 0 &&
!refcount_dec_and_test((refcount_t *)(p + rec->refcount_off))) {
/* Object is refcounted and refcount_dec didn't result in 0
@@ -1916,10 +1928,14 @@ void __bpf_obj_drop_impl(void *p, const struct btf_record *rec)
if (rec)
bpf_obj_free_fields(rec, p);
+ if (percpu)
+ ma = &bpf_global_percpu_ma;
+ else
+ ma = &bpf_global_ma;
if (rec && rec->refcount_off >= 0)
- bpf_mem_free_rcu(&bpf_global_ma, p);
+ bpf_mem_free_rcu(ma, p);
else
- bpf_mem_free(&bpf_global_ma, p);
+ bpf_mem_free(ma, p);
}
__bpf_kfunc void bpf_obj_drop_impl(void *p__alloc, void *meta__ign)
@@ -1927,7 +1943,13 @@ __bpf_kfunc void bpf_obj_drop_impl(void *p__alloc, void *meta__ign)
struct btf_struct_meta *meta = meta__ign;
void *p = p__alloc;
- __bpf_obj_drop_impl(p, meta ? meta->record : NULL);
+ __bpf_obj_drop_impl(p, meta ? meta->record : NULL, false);
+}
+
+__bpf_kfunc void bpf_percpu_obj_drop_impl(void *p__alloc, void *meta__ign)
+{
+ /* The verifier has ensured that meta__ign must be NULL */
+ bpf_mem_free_rcu(&bpf_global_percpu_ma, p__alloc);
}
__bpf_kfunc void *bpf_refcount_acquire_impl(void *p__refcounted_kptr, void *meta__ign)
@@ -1965,7 +1987,7 @@ static int __bpf_list_add(struct bpf_list_node_kern *node,
*/
if (cmpxchg(&node->owner, NULL, BPF_PTR_POISON)) {
/* Only called from BPF prog, no need to migrate_disable */
- __bpf_obj_drop_impl((void *)n - off, rec);
+ __bpf_obj_drop_impl((void *)n - off, rec, false);
return -EINVAL;
}
@@ -2064,7 +2086,7 @@ static int __bpf_rbtree_add(struct bpf_rb_root *root,
*/
if (cmpxchg(&node->owner, NULL, BPF_PTR_POISON)) {
/* Only called from BPF prog, no need to migrate_disable */
- __bpf_obj_drop_impl((void *)n - off, rec);
+ __bpf_obj_drop_impl((void *)n - off, rec, false);
return -EINVAL;
}
@@ -2197,7 +2219,12 @@ __bpf_kfunc struct cgroup *bpf_cgroup_from_id(u64 cgid)
__bpf_kfunc long bpf_task_under_cgroup(struct task_struct *task,
struct cgroup *ancestor)
{
- return task_under_cgroup_hierarchy(task, ancestor);
+ long ret;
+
+ rcu_read_lock();
+ ret = task_under_cgroup_hierarchy(task, ancestor);
+ rcu_read_unlock();
+ return ret;
}
#endif /* CONFIG_CGROUPS */
@@ -2435,6 +2462,49 @@ __bpf_kfunc void bpf_rcu_read_unlock(void)
rcu_read_unlock();
}
+struct bpf_throw_ctx {
+ struct bpf_prog_aux *aux;
+ u64 sp;
+ u64 bp;
+ int cnt;
+};
+
+static bool bpf_stack_walker(void *cookie, u64 ip, u64 sp, u64 bp)
+{
+ struct bpf_throw_ctx *ctx = cookie;
+ struct bpf_prog *prog;
+
+ if (!is_bpf_text_address(ip))
+ return !ctx->cnt;
+ prog = bpf_prog_ksym_find(ip);
+ ctx->cnt++;
+ if (bpf_is_subprog(prog))
+ return true;
+ ctx->aux = prog->aux;
+ ctx->sp = sp;
+ ctx->bp = bp;
+ return false;
+}
+
+__bpf_kfunc void bpf_throw(u64 cookie)
+{
+ struct bpf_throw_ctx ctx = {};
+
+ arch_bpf_stack_walk(bpf_stack_walker, &ctx);
+ WARN_ON_ONCE(!ctx.aux);
+ if (ctx.aux)
+ WARN_ON_ONCE(!ctx.aux->exception_boundary);
+ WARN_ON_ONCE(!ctx.bp);
+ WARN_ON_ONCE(!ctx.cnt);
+ /* Prevent KASAN false positives for CONFIG_KASAN_STACK by unpoisoning
+ * deeper stack depths than ctx.sp as we do not return from bpf_throw,
+ * which skips compiler generated instrumentation to do the same.
+ */
+ kasan_unpoison_task_stack_below((void *)(long)ctx.sp);
+ ctx.aux->bpf_exception_cb(cookie, ctx.sp, ctx.bp);
+ WARN(1, "A call to BPF exception callback should never return\n");
+}
+
__diag_pop();
BTF_SET8_START(generic_btf_ids)
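Usage-wise, bpf_throw() aborts the program and unwinds into a callback designated via a BTF decl tag on the main subprog (the tag lookup is part of the verifier changes further down); a sketch assuming the __exception_cb() convenience macro from the selftests' bpf_misc.h:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"
#include "bpf_experimental.h"

/* must be a global function; receives the cookie passed to bpf_throw() */
__noinline int exception_cb(u64 cookie)
{
	return (int)cookie;
}

SEC("tc")
__exception_cb(exception_cb)
int throw_on_short_skb(struct __sk_buff *ctx)
{
	if (ctx->len < 14)
		bpf_throw(2);	/* program "returns" 2 via exception_cb() */
	return 0;
}

char _license[] SEC("license") = "GPL";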
@@ -2442,7 +2512,9 @@ BTF_SET8_START(generic_btf_ids)
BTF_ID_FLAGS(func, crash_kexec, KF_DESTRUCTIVE)
#endif
BTF_ID_FLAGS(func, bpf_obj_new_impl, KF_ACQUIRE | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_percpu_obj_new_impl, KF_ACQUIRE | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_obj_drop_impl, KF_RELEASE)
+BTF_ID_FLAGS(func, bpf_percpu_obj_drop_impl, KF_RELEASE)
BTF_ID_FLAGS(func, bpf_refcount_acquire_impl, KF_ACQUIRE | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_list_push_front_impl)
BTF_ID_FLAGS(func, bpf_list_push_back_impl)
@@ -2462,6 +2534,7 @@ BTF_ID_FLAGS(func, bpf_cgroup_from_id, KF_ACQUIRE | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_task_under_cgroup, KF_RCU)
#endif
BTF_ID_FLAGS(func, bpf_task_from_pid, KF_ACQUIRE | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_throw)
BTF_SET8_END(generic_btf_ids)
static const struct btf_kfunc_id_set generic_kfunc_set = {
@@ -2488,6 +2561,18 @@ BTF_ID_FLAGS(func, bpf_dynptr_slice_rdwr, KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_iter_num_new, KF_ITER_NEW)
BTF_ID_FLAGS(func, bpf_iter_num_next, KF_ITER_NEXT | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_iter_num_destroy, KF_ITER_DESTROY)
+BTF_ID_FLAGS(func, bpf_iter_task_vma_new, KF_ITER_NEW | KF_RCU)
+BTF_ID_FLAGS(func, bpf_iter_task_vma_next, KF_ITER_NEXT | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_iter_task_vma_destroy, KF_ITER_DESTROY)
+BTF_ID_FLAGS(func, bpf_iter_css_task_new, KF_ITER_NEW | KF_TRUSTED_ARGS)
+BTF_ID_FLAGS(func, bpf_iter_css_task_next, KF_ITER_NEXT | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_iter_css_task_destroy, KF_ITER_DESTROY)
+BTF_ID_FLAGS(func, bpf_iter_task_new, KF_ITER_NEW | KF_TRUSTED_ARGS | KF_RCU_PROTECTED)
+BTF_ID_FLAGS(func, bpf_iter_task_next, KF_ITER_NEXT | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_iter_task_destroy, KF_ITER_DESTROY)
+BTF_ID_FLAGS(func, bpf_iter_css_new, KF_ITER_NEW | KF_TRUSTED_ARGS | KF_RCU_PROTECTED)
+BTF_ID_FLAGS(func, bpf_iter_css_next, KF_ITER_NEXT | KF_RET_NULL)
+BTF_ID_FLAGS(func, bpf_iter_css_destroy, KF_ITER_DESTROY)
BTF_ID_FLAGS(func, bpf_dynptr_adjust)
BTF_ID_FLAGS(func, bpf_dynptr_is_null)
BTF_ID_FLAGS(func, bpf_dynptr_is_rdonly)
diff --git a/kernel/bpf/memalloc.c b/kernel/bpf/memalloc.c
index d93ddac283d4..63b909d277d4 100644
--- a/kernel/bpf/memalloc.c
+++ b/kernel/bpf/memalloc.c
@@ -340,6 +340,7 @@ static void free_bulk(struct bpf_mem_cache *c)
int cnt;
WARN_ON_ONCE(tgt->unit_size != c->unit_size);
+ WARN_ON_ONCE(tgt->percpu_size != c->percpu_size);
do {
inc_active(c, &flags);
@@ -365,6 +366,9 @@ static void __free_by_rcu(struct rcu_head *head)
struct bpf_mem_cache *tgt = c->tgt;
struct llist_node *llnode;
+ WARN_ON_ONCE(tgt->unit_size != c->unit_size);
+ WARN_ON_ONCE(tgt->percpu_size != c->percpu_size);
+
llnode = llist_del_all(&c->waiting_for_gp);
if (!llnode)
goto out;
@@ -491,21 +495,17 @@ static int check_obj_size(struct bpf_mem_cache *c, unsigned int idx)
struct llist_node *first;
unsigned int obj_size;
- /* For per-cpu allocator, the size of free objects in free list doesn't
- * match with unit_size and now there is no way to get the size of
- * per-cpu pointer saved in free object, so just skip the checking.
- */
- if (c->percpu_size)
- return 0;
-
first = c->free_llist.first;
if (!first)
return 0;
- obj_size = ksize(first);
+ if (c->percpu_size)
+ obj_size = pcpu_alloc_size(((void **)first)[1]);
+ else
+ obj_size = ksize(first);
if (obj_size != c->unit_size) {
- WARN_ONCE(1, "bpf_mem_cache[%u]: unexpected object size %u, expect %u\n",
- idx, obj_size, c->unit_size);
+ WARN_ONCE(1, "bpf_mem_cache[%u]: percpu %d, unexpected object size %u, expect %u\n",
+ idx, c->percpu_size, obj_size, c->unit_size);
return -EINVAL;
}
return 0;
@@ -526,15 +526,17 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
struct bpf_mem_cache *c, __percpu *pc;
struct obj_cgroup *objcg = NULL;
+ /* room for llist_node and per-cpu pointer */
+ if (percpu)
+ percpu_size = LLIST_NODE_SZ + sizeof(void *);
+ ma->percpu = percpu;
+
if (size) {
pc = __alloc_percpu_gfp(sizeof(*pc), 8, GFP_KERNEL);
if (!pc)
return -ENOMEM;
- if (percpu)
- /* room for llist_node and per-cpu pointer */
- percpu_size = LLIST_NODE_SZ + sizeof(void *);
- else
+ if (!percpu)
size += LLIST_NODE_SZ; /* room for llist_node */
unit_size = size;
@@ -555,10 +557,6 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
return 0;
}
- /* size == 0 && percpu is an invalid combination */
- if (WARN_ON_ONCE(percpu))
- return -EINVAL;
-
pcc = __alloc_percpu_gfp(sizeof(*cc), 8, GFP_KERNEL);
if (!pcc)
return -ENOMEM;
@@ -572,6 +570,7 @@ int bpf_mem_alloc_init(struct bpf_mem_alloc *ma, int size, bool percpu)
c = &cc->cache[i];
c->unit_size = sizes[i];
c->objcg = objcg;
+ c->percpu_size = percpu_size;
c->tgt = c;
init_refill_work(c);
@@ -782,12 +781,17 @@ static void notrace *unit_alloc(struct bpf_mem_cache *c)
}
}
local_dec(&c->active);
- local_irq_restore(flags);
WARN_ON(cnt < 0);
if (cnt < c->low_watermark)
irq_work_raise(c);
+ /* Enable IRQs only after the enqueue of the irq work completes, so the
+ * irq work runs once IRQs are enabled and free_llist may be refilled by
+ * it before another task preempts the current one.
+ */
+ local_irq_restore(flags);
+
return llnode;
}
@@ -823,11 +827,16 @@ static void notrace unit_free(struct bpf_mem_cache *c, void *ptr)
llist_add(llnode, &c->free_llist_extra);
}
local_dec(&c->active);
- local_irq_restore(flags);
if (cnt > c->high_watermark)
/* free few objects from current cpu into global kmalloc pool */
irq_work_raise(c);
+ /* Enable IRQs after irq_work_raise() completes, otherwise when the
+ * current task is preempted by a task which does unit_alloc(),
+ * unit_alloc() may return NULL unexpectedly because the irq work is
+ * already pending but cannot be triggered and free_llist cannot be
+ * refilled in time.
+ */
+ local_irq_restore(flags);
}
static void notrace unit_free_rcu(struct bpf_mem_cache *c, void *ptr)
@@ -845,10 +854,10 @@ static void notrace unit_free_rcu(struct bpf_mem_cache *c, void *ptr)
llist_add(llnode, &c->free_llist_extra_rcu);
}
local_dec(&c->active);
- local_irq_restore(flags);
if (!atomic_read(&c->call_rcu_in_progress))
irq_work_raise(c);
+ local_irq_restore(flags);
}
/* Called from BPF program or from sys_bpf syscall.
@@ -870,6 +879,17 @@ void notrace *bpf_mem_alloc(struct bpf_mem_alloc *ma, size_t size)
return !ret ? NULL : ret + LLIST_NODE_SZ;
}
+static notrace int bpf_mem_free_idx(void *ptr, bool percpu)
+{
+ size_t size;
+
+ if (percpu)
+ size = pcpu_alloc_size(*((void **)ptr));
+ else
+ size = ksize(ptr - LLIST_NODE_SZ);
+ return bpf_mem_cache_idx(size);
+}
+
void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
{
int idx;
@@ -877,7 +897,7 @@ void notrace bpf_mem_free(struct bpf_mem_alloc *ma, void *ptr)
if (!ptr)
return;
- idx = bpf_mem_cache_idx(ksize(ptr - LLIST_NODE_SZ));
+ idx = bpf_mem_free_idx(ptr, ma->percpu);
if (idx < 0)
return;
@@ -891,7 +911,7 @@ void notrace bpf_mem_free_rcu(struct bpf_mem_alloc *ma, void *ptr)
if (!ptr)
return;
- idx = bpf_mem_cache_idx(ksize(ptr - LLIST_NODE_SZ));
+ idx = bpf_mem_free_idx(ptr, ma->percpu);
if (idx < 0)
return;
@@ -965,6 +985,12 @@ void notrace *bpf_mem_cache_alloc_flags(struct bpf_mem_alloc *ma, gfp_t flags)
return !ret ? NULL : ret + LLIST_NODE_SZ;
}
+/* The alignment of the dynamic per-cpu area is 8, so c->unit_size and the
+ * actual size of the dynamic per-cpu area will always match and there is
+ * no need to adjust size_index for per-cpu allocations. However, for
+ * simplicity of the implementation, use a unified size_index for both
+ * kmalloc and per-cpu allocation.
+ */
static __init int bpf_mem_cache_adjust_size(void)
{
unsigned int size;
diff --git a/kernel/bpf/offload.c b/kernel/bpf/offload.c
index 87d6693d8233..1a4fec330eaa 100644
--- a/kernel/bpf/offload.c
+++ b/kernel/bpf/offload.c
@@ -234,7 +234,14 @@ int bpf_prog_dev_bound_init(struct bpf_prog *prog, union bpf_attr *attr)
attr->prog_type != BPF_PROG_TYPE_XDP)
return -EINVAL;
- if (attr->prog_flags & ~BPF_F_XDP_DEV_BOUND_ONLY)
+ if (attr->prog_flags & ~(BPF_F_XDP_DEV_BOUND_ONLY | BPF_F_XDP_HAS_FRAGS))
+ return -EINVAL;
+
+ /* Frags are allowed only if program is dev-bound-only, but not
+ * if it is requesting bpf offload.
+ */
+ if (attr->prog_flags & BPF_F_XDP_HAS_FRAGS &&
+ !(attr->prog_flags & BPF_F_XDP_DEV_BOUND_ONLY))
return -EINVAL;
if (attr->prog_type == BPF_PROG_TYPE_SCHED_CLS &&
@@ -847,10 +854,11 @@ void *bpf_dev_bound_resolve_kfunc(struct bpf_prog *prog, u32 func_id)
if (!ops)
goto out;
- if (func_id == bpf_xdp_metadata_kfunc_id(XDP_METADATA_KFUNC_RX_TIMESTAMP))
- p = ops->xmo_rx_timestamp;
- else if (func_id == bpf_xdp_metadata_kfunc_id(XDP_METADATA_KFUNC_RX_HASH))
- p = ops->xmo_rx_hash;
+#define XDP_METADATA_KFUNC(name, _, __, xmo) \
+ if (func_id == bpf_xdp_metadata_kfunc_id(name)) p = ops->xmo;
+ XDP_METADATA_KFUNC_xxx
+#undef XDP_METADATA_KFUNC
+
out:
up_read(&bpf_devs_lock);
diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
index f045fde632e5..0ee653a936ea 100644
--- a/kernel/bpf/ringbuf.c
+++ b/kernel/bpf/ringbuf.c
@@ -770,8 +770,7 @@ schedule_work_return:
/* Prevent the clearing of the busy-bit from being reordered before the
* storing of any rb consumer or producer positions.
*/
- smp_mb__before_atomic();
- atomic_set(&rb->busy, 0);
+ atomic_set_release(&rb->busy, 0);
if (flags & BPF_RB_FORCE_WAKEUP)
irq_work_queue(&rb->work);
diff --git a/kernel/bpf/stackmap.c b/kernel/bpf/stackmap.c
index 458bb80b14d5..d6b277482085 100644
--- a/kernel/bpf/stackmap.c
+++ b/kernel/bpf/stackmap.c
@@ -28,7 +28,7 @@ struct bpf_stack_map {
void *elems;
struct pcpu_freelist freelist;
u32 n_buckets;
- struct stack_map_bucket *buckets[];
+ struct stack_map_bucket *buckets[] __counted_by(n_buckets);
};
static inline bool stack_map_use_build_id(struct bpf_map *map)
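__counted_by() (from <linux/compiler_attributes.h>) ties the flexible array's bound to n_buckets so that fortified accessors and __builtin_dynamic_object_size() can bounds-check indexing; generically the pattern looks like this (illustrative struct, not from the patch):

struct bucket;

struct example_table {
	u32 n_buckets;
	/* slots[i] accesses can now be checked against n_buckets */
	struct bucket *slots[] __counted_by(n_buckets);
};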
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index d77b2f8b9364..0ed286b8a0f0 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -35,8 +35,9 @@
#include <linux/rcupdate_trace.h>
#include <linux/memcontrol.h>
#include <linux/trace_events.h>
-#include <net/netfilter/nf_bpf_link.h>
+#include <net/netfilter/nf_bpf_link.h>
+#include <net/netkit.h>
#include <net/tcx.h>
#define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \
@@ -514,6 +515,7 @@ void btf_record_free(struct btf_record *rec)
switch (rec->fields[i].type) {
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
if (rec->fields[i].kptr.module)
module_put(rec->fields[i].kptr.module);
btf_put(rec->fields[i].kptr.btf);
@@ -560,6 +562,7 @@ struct btf_record *btf_record_dup(const struct btf_record *rec)
switch (fields[i].type) {
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
btf_get(fields[i].kptr.btf);
if (fields[i].kptr.module && !try_module_get(fields[i].kptr.module)) {
ret = -ENXIO;
@@ -624,8 +627,6 @@ void bpf_obj_free_timer(const struct btf_record *rec, void *obj)
bpf_timer_cancel_and_free(obj + rec->timer_off);
}
-extern void __bpf_obj_drop_impl(void *p, const struct btf_record *rec);
-
void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
{
const struct btf_field *fields;
@@ -650,6 +651,7 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
WRITE_ONCE(*(u64 *)field_ptr, 0);
break;
case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
xchgd_field = (void *)xchg((unsigned long *)field_ptr, 0);
if (!xchgd_field)
break;
@@ -659,8 +661,8 @@ void bpf_obj_free_fields(const struct btf_record *rec, void *obj)
field->kptr.btf_id);
migrate_disable();
__bpf_obj_drop_impl(xchgd_field, pointee_struct_meta ?
- pointee_struct_meta->record :
- NULL);
+ pointee_struct_meta->record : NULL,
+ fields[i].type == BPF_KPTR_PERCPU);
migrate_enable();
} else {
field->kptr.dtor(xchgd_field);
@@ -1045,6 +1047,7 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
break;
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
case BPF_REFCOUNT:
if (map->map_type != BPF_MAP_TYPE_HASH &&
map->map_type != BPF_MAP_TYPE_PERCPU_HASH &&
@@ -2442,14 +2445,19 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
case BPF_CGROUP_INET6_BIND:
case BPF_CGROUP_INET4_CONNECT:
case BPF_CGROUP_INET6_CONNECT:
+ case BPF_CGROUP_UNIX_CONNECT:
case BPF_CGROUP_INET4_GETPEERNAME:
case BPF_CGROUP_INET6_GETPEERNAME:
+ case BPF_CGROUP_UNIX_GETPEERNAME:
case BPF_CGROUP_INET4_GETSOCKNAME:
case BPF_CGROUP_INET6_GETSOCKNAME:
+ case BPF_CGROUP_UNIX_GETSOCKNAME:
case BPF_CGROUP_UDP4_SENDMSG:
case BPF_CGROUP_UDP6_SENDMSG:
+ case BPF_CGROUP_UNIX_SENDMSG:
case BPF_CGROUP_UDP4_RECVMSG:
case BPF_CGROUP_UDP6_RECVMSG:
+ case BPF_CGROUP_UNIX_RECVMSG:
return 0;
default:
return -EINVAL;
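A program for one of the new AF_UNIX attach points looks just like the existing inet sockaddr hooks; a sketch assuming the cgroup/connect_unix section name from the matching libbpf update:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

#define AF_UNIX 1

__u64 unix_connects;

SEC("cgroup/connect_unix")
int count_unix_connect(struct bpf_sock_addr *ctx)
{
	if (ctx->user_family == AF_UNIX)
		__sync_fetch_and_add(&unix_connects, 1);
	return 1;	/* 1 = allow the connect(), 0 = reject it */
}

char _license[] SEC("license") = "GPL";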
@@ -2745,7 +2753,7 @@ free_used_maps:
* period before we can tear down JIT memory since symbols
* are already exposed under kallsyms.
*/
- __bpf_prog_put_noref(prog, prog->aux->func_cnt);
+ __bpf_prog_put_noref(prog, prog->aux->real_func_cnt);
return err;
free_prog_sec:
free_uid(prog->aux->user);
@@ -3370,7 +3378,7 @@ static void bpf_perf_link_dealloc(struct bpf_link *link)
static int bpf_perf_link_fill_common(const struct perf_event *event,
char __user *uname, u32 ulen,
u64 *probe_offset, u64 *probe_addr,
- u32 *fd_type)
+ u32 *fd_type, unsigned long *missed)
{
const char *buf;
u32 prog_id;
@@ -3381,7 +3389,7 @@ static int bpf_perf_link_fill_common(const struct perf_event *event,
return -EINVAL;
err = bpf_get_perf_event_info(event, &prog_id, fd_type, &buf,
- probe_offset, probe_addr);
+ probe_offset, probe_addr, missed);
if (err)
return err;
if (!uname)
@@ -3404,6 +3412,7 @@ static int bpf_perf_link_fill_common(const struct perf_event *event,
static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
struct bpf_link_info *info)
{
+ unsigned long missed;
char __user *uname;
u64 addr, offset;
u32 ulen, type;
@@ -3412,7 +3421,7 @@ static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
uname = u64_to_user_ptr(info->perf_event.kprobe.func_name);
ulen = info->perf_event.kprobe.name_len;
err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
- &type);
+ &type, &missed);
if (err)
return err;
if (type == BPF_FD_TYPE_KRETPROBE)
@@ -3421,6 +3430,7 @@ static int bpf_perf_link_fill_kprobe(const struct perf_event *event,
info->perf_event.type = BPF_PERF_EVENT_KPROBE;
info->perf_event.kprobe.offset = offset;
+ info->perf_event.kprobe.missed = missed;
if (!kallsyms_show_value(current_cred()))
addr = 0;
info->perf_event.kprobe.addr = addr;
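User space can read the new counter back with bpf_link_get_info_by_fd(); a short sketch (link_fd is assumed to be an fd for a kprobe perf-event link):

#include <bpf/bpf.h>
#include <linux/bpf.h>
#include <stdio.h>
#include <string.h>

static void print_kprobe_missed(int link_fd)
{
	struct bpf_link_info info;
	__u32 len = sizeof(info);

	memset(&info, 0, sizeof(info));
	if (bpf_link_get_info_by_fd(link_fd, &info, &len))
		return;
	if (info.type == BPF_LINK_TYPE_PERF_EVENT &&
	    info.perf_event.type == BPF_PERF_EVENT_KPROBE)
		printf("missed kprobe executions: %llu\n",
		       (unsigned long long)info.perf_event.kprobe.missed);
}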
@@ -3440,7 +3450,7 @@ static int bpf_perf_link_fill_uprobe(const struct perf_event *event,
uname = u64_to_user_ptr(info->perf_event.uprobe.file_name);
ulen = info->perf_event.uprobe.name_len;
err = bpf_perf_link_fill_common(event, uname, ulen, &offset, &addr,
- &type);
+ &type, NULL);
if (err)
return err;
@@ -3476,7 +3486,7 @@ static int bpf_perf_link_fill_tracepoint(const struct perf_event *event,
uname = u64_to_user_ptr(info->perf_event.tracepoint.tp_name);
ulen = info->perf_event.tracepoint.name_len;
info->perf_event.type = BPF_PERF_EVENT_TRACEPOINT;
- return bpf_perf_link_fill_common(event, uname, ulen, NULL, NULL, NULL);
+ return bpf_perf_link_fill_common(event, uname, ulen, NULL, NULL, NULL, NULL);
}
static int bpf_perf_link_fill_perf_event(const struct perf_event *event,
@@ -3672,14 +3682,19 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
case BPF_CGROUP_INET6_BIND:
case BPF_CGROUP_INET4_CONNECT:
case BPF_CGROUP_INET6_CONNECT:
+ case BPF_CGROUP_UNIX_CONNECT:
case BPF_CGROUP_INET4_GETPEERNAME:
case BPF_CGROUP_INET6_GETPEERNAME:
+ case BPF_CGROUP_UNIX_GETPEERNAME:
case BPF_CGROUP_INET4_GETSOCKNAME:
case BPF_CGROUP_INET6_GETSOCKNAME:
+ case BPF_CGROUP_UNIX_GETSOCKNAME:
case BPF_CGROUP_UDP4_SENDMSG:
case BPF_CGROUP_UDP6_SENDMSG:
+ case BPF_CGROUP_UNIX_SENDMSG:
case BPF_CGROUP_UDP4_RECVMSG:
case BPF_CGROUP_UDP6_RECVMSG:
+ case BPF_CGROUP_UNIX_RECVMSG:
return BPF_PROG_TYPE_CGROUP_SOCK_ADDR;
case BPF_CGROUP_SOCK_OPS:
return BPF_PROG_TYPE_SOCK_OPS;
@@ -3716,6 +3731,8 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
return BPF_PROG_TYPE_LSM;
case BPF_TCX_INGRESS:
case BPF_TCX_EGRESS:
+ case BPF_NETKIT_PRIMARY:
+ case BPF_NETKIT_PEER:
return BPF_PROG_TYPE_SCHED_CLS;
default:
return BPF_PROG_TYPE_UNSPEC;
@@ -3767,7 +3784,9 @@ static int bpf_prog_attach_check_attach_type(const struct bpf_prog *prog,
return 0;
case BPF_PROG_TYPE_SCHED_CLS:
if (attach_type != BPF_TCX_INGRESS &&
- attach_type != BPF_TCX_EGRESS)
+ attach_type != BPF_TCX_EGRESS &&
+ attach_type != BPF_NETKIT_PRIMARY &&
+ attach_type != BPF_NETKIT_PEER)
return -EINVAL;
return 0;
default:
@@ -3850,7 +3869,11 @@ static int bpf_prog_attach(const union bpf_attr *attr)
ret = cgroup_bpf_prog_attach(attr, ptype, prog);
break;
case BPF_PROG_TYPE_SCHED_CLS:
- ret = tcx_prog_attach(attr, prog);
+ if (attr->attach_type == BPF_TCX_INGRESS ||
+ attr->attach_type == BPF_TCX_EGRESS)
+ ret = tcx_prog_attach(attr, prog);
+ else
+ ret = netkit_prog_attach(attr, prog);
break;
default:
ret = -EINVAL;
@@ -3911,7 +3934,11 @@ static int bpf_prog_detach(const union bpf_attr *attr)
ret = cgroup_bpf_prog_detach(attr, ptype);
break;
case BPF_PROG_TYPE_SCHED_CLS:
- ret = tcx_prog_detach(attr, prog);
+ if (attr->attach_type == BPF_TCX_INGRESS ||
+ attr->attach_type == BPF_TCX_EGRESS)
+ ret = tcx_prog_detach(attr, prog);
+ else
+ ret = netkit_prog_detach(attr, prog);
break;
default:
ret = -EINVAL;
@@ -3945,14 +3972,19 @@ static int bpf_prog_query(const union bpf_attr *attr,
case BPF_CGROUP_INET6_POST_BIND:
case BPF_CGROUP_INET4_CONNECT:
case BPF_CGROUP_INET6_CONNECT:
+ case BPF_CGROUP_UNIX_CONNECT:
case BPF_CGROUP_INET4_GETPEERNAME:
case BPF_CGROUP_INET6_GETPEERNAME:
+ case BPF_CGROUP_UNIX_GETPEERNAME:
case BPF_CGROUP_INET4_GETSOCKNAME:
case BPF_CGROUP_INET6_GETSOCKNAME:
+ case BPF_CGROUP_UNIX_GETSOCKNAME:
case BPF_CGROUP_UDP4_SENDMSG:
case BPF_CGROUP_UDP6_SENDMSG:
+ case BPF_CGROUP_UNIX_SENDMSG:
case BPF_CGROUP_UDP4_RECVMSG:
case BPF_CGROUP_UDP6_RECVMSG:
+ case BPF_CGROUP_UNIX_RECVMSG:
case BPF_CGROUP_SOCK_OPS:
case BPF_CGROUP_DEVICE:
case BPF_CGROUP_SYSCTL:
@@ -3973,6 +4005,9 @@ static int bpf_prog_query(const union bpf_attr *attr,
case BPF_TCX_INGRESS:
case BPF_TCX_EGRESS:
return tcx_prog_query(attr, uattr);
+ case BPF_NETKIT_PRIMARY:
+ case BPF_NETKIT_PEER:
+ return netkit_prog_query(attr, uattr);
default:
return -EINVAL;
}
@@ -4818,7 +4853,7 @@ static int bpf_task_fd_query(const union bpf_attr *attr,
err = bpf_get_perf_event_info(event, &prog_id, &fd_type,
&buf, &probe_offset,
- &probe_addr);
+ &probe_addr, NULL);
if (!err)
err = bpf_task_fd_query_copy(attr, uattr, prog_id,
fd_type, buf,
@@ -4954,7 +4989,11 @@ static int link_create(union bpf_attr *attr, bpfptr_t uattr)
ret = bpf_xdp_link_attach(attr, prog);
break;
case BPF_PROG_TYPE_SCHED_CLS:
- ret = tcx_link_attach(attr, prog);
+ if (attr->link_create.attach_type == BPF_TCX_INGRESS ||
+ attr->link_create.attach_type == BPF_TCX_EGRESS)
+ ret = tcx_link_attach(attr, prog);
+ else
+ ret = netkit_link_attach(attr, prog);
break;
case BPF_PROG_TYPE_NETFILTER:
ret = bpf_nf_link_attach(attr, prog);
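Attaching to the new netkit device from user space mirrors tcx; a sketch assuming libbpf's bpf_program__attach_netkit() helper and a SEC("netkit/primary") program added alongside the driver (interface and program names are illustrative):

#include <bpf/libbpf.h>
#include <net/if.h>

static int attach_netkit_primary(struct bpf_object *obj, const char *ifname)
{
	struct bpf_program *prog;
	struct bpf_link *link;
	int ifindex;

	prog = bpf_object__find_program_by_name(obj, "nk_main");
	ifindex = if_nametoindex(ifname);
	if (!prog || !ifindex)
		return -1;

	/* primary vs. peer side comes from the program's expected attach
	 * type, i.e. its SEC("netkit/primary") annotation in this sketch
	 */
	link = bpf_program__attach_netkit(prog, ifindex, NULL);
	return link ? 0 : -1;
}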
diff --git a/kernel/bpf/task_iter.c b/kernel/bpf/task_iter.c
index 82ad23b1d257..654601dd6b49 100644
--- a/kernel/bpf/task_iter.c
+++ b/kernel/bpf/task_iter.c
@@ -7,7 +7,9 @@
#include <linux/fs.h>
#include <linux/fdtable.h>
#include <linux/filter.h>
+#include <linux/bpf_mem_alloc.h>
#include <linux/btf_ids.h>
+#include <linux/mm_types.h>
#include "mmap_unlock_work.h"
static const char * const iter_task_type_names[] = {
@@ -35,16 +37,13 @@ static struct task_struct *task_group_seq_get_next(struct bpf_iter_seq_task_comm
u32 *tid,
bool skip_if_dup_files)
{
- struct task_struct *task, *next_task;
+ struct task_struct *task;
struct pid *pid;
- u32 saved_tid;
+ u32 next_tid;
if (!*tid) {
/* The first time, the iterator calls this function. */
pid = find_pid_ns(common->pid, common->ns);
- if (!pid)
- return NULL;
-
task = get_pid_task(pid, PIDTYPE_TGID);
if (!task)
return NULL;
@@ -66,44 +65,27 @@ static struct task_struct *task_group_seq_get_next(struct bpf_iter_seq_task_comm
return task;
}
- pid = find_pid_ns(common->pid_visiting, common->ns);
- if (!pid)
- return NULL;
-
- task = get_pid_task(pid, PIDTYPE_PID);
+ task = find_task_by_pid_ns(common->pid_visiting, common->ns);
if (!task)
return NULL;
retry:
- if (!pid_alive(task)) {
- put_task_struct(task);
- return NULL;
- }
+ task = next_thread(task);
- next_task = next_thread(task);
- put_task_struct(task);
- if (!next_task)
- return NULL;
-
- saved_tid = *tid;
- *tid = __task_pid_nr_ns(next_task, PIDTYPE_PID, common->ns);
- if (!*tid || *tid == common->pid) {
+ next_tid = __task_pid_nr_ns(task, PIDTYPE_PID, common->ns);
+ if (!next_tid || next_tid == common->pid) {
/* Run out of tasks of a process. The tasks of a
* thread_group are linked as circular linked list.
*/
- *tid = saved_tid;
return NULL;
}
- get_task_struct(next_task);
- common->pid_visiting = *tid;
-
- if (skip_if_dup_files && task->files == task->group_leader->files) {
- task = next_task;
+ if (skip_if_dup_files && task->files == task->group_leader->files)
goto retry;
- }
- return next_task;
+ *tid = common->pid_visiting = next_tid;
+ get_task_struct(task);
+ return task;
}
static struct task_struct *task_seq_get_next(struct bpf_iter_seq_task_common *common,
@@ -821,6 +803,246 @@ const struct bpf_func_proto bpf_find_vma_proto = {
.arg5_type = ARG_ANYTHING,
};
+struct bpf_iter_task_vma_kern_data {
+ struct task_struct *task;
+ struct mm_struct *mm;
+ struct mmap_unlock_irq_work *work;
+ struct vma_iterator vmi;
+};
+
+struct bpf_iter_task_vma {
+ /* opaque iterator state; having __u64 here preserves the correct
+ * alignment requirements in vmlinux.h, generated from BTF
+ */
+ __u64 __opaque[1];
+} __attribute__((aligned(8)));
+
+/* Non-opaque version of bpf_iter_task_vma */
+struct bpf_iter_task_vma_kern {
+ struct bpf_iter_task_vma_kern_data *data;
+} __attribute__((aligned(8)));
+
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+ "Global functions as their definitions will be in vmlinux BTF");
+
+__bpf_kfunc int bpf_iter_task_vma_new(struct bpf_iter_task_vma *it,
+ struct task_struct *task, u64 addr)
+{
+ struct bpf_iter_task_vma_kern *kit = (void *)it;
+ bool irq_work_busy = false;
+ int err;
+
+ BUILD_BUG_ON(sizeof(struct bpf_iter_task_vma_kern) != sizeof(struct bpf_iter_task_vma));
+ BUILD_BUG_ON(__alignof__(struct bpf_iter_task_vma_kern) != __alignof__(struct bpf_iter_task_vma));
+
+ /* is_iter_reg_valid_uninit guarantees that kit hasn't been initialized
+ * before, so non-NULL kit->data doesn't point to previously
+ * bpf_mem_alloc'd bpf_iter_task_vma_kern_data
+ */
+ kit->data = bpf_mem_alloc(&bpf_global_ma, sizeof(struct bpf_iter_task_vma_kern_data));
+ if (!kit->data)
+ return -ENOMEM;
+
+ kit->data->task = get_task_struct(task);
+ kit->data->mm = task->mm;
+ if (!kit->data->mm) {
+ err = -ENOENT;
+ goto err_cleanup_iter;
+ }
+
+ /* kit->data->work == NULL is valid after bpf_mmap_unlock_get_irq_work */
+ irq_work_busy = bpf_mmap_unlock_get_irq_work(&kit->data->work);
+ if (irq_work_busy || !mmap_read_trylock(kit->data->mm)) {
+ err = -EBUSY;
+ goto err_cleanup_iter;
+ }
+
+ vma_iter_init(&kit->data->vmi, kit->data->mm, addr);
+ return 0;
+
+err_cleanup_iter:
+ if (kit->data->task)
+ put_task_struct(kit->data->task);
+ bpf_mem_free(&bpf_global_ma, kit->data);
+ /* NULL kit->data signals failed bpf_iter_task_vma initialization */
+ kit->data = NULL;
+ return err;
+}
+
+__bpf_kfunc struct vm_area_struct *bpf_iter_task_vma_next(struct bpf_iter_task_vma *it)
+{
+ struct bpf_iter_task_vma_kern *kit = (void *)it;
+
+ if (!kit->data) /* bpf_iter_task_vma_new failed */
+ return NULL;
+ return vma_next(&kit->data->vmi);
+}
+
+__bpf_kfunc void bpf_iter_task_vma_destroy(struct bpf_iter_task_vma *it)
+{
+ struct bpf_iter_task_vma_kern *kit = (void *)it;
+
+ if (kit->data) {
+ bpf_mmap_unlock_mm(kit->data->work, kit->data->mm);
+ put_task_struct(kit->data->task);
+ bpf_mem_free(&bpf_global_ma, kit->data);
+ }
+}
+
+__diag_pop();
+
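With these kfuncs a tracing program can walk a task's VMAs as a plain loop; a sketch using the bpf_for_each() convenience macro from the selftests' bpf_experimental.h:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

SEC("raw_tp/sys_enter")
int sum_vma_sizes(const void *ctx)
{
	struct task_struct *task = bpf_get_current_task_btf();
	struct vm_area_struct *vma;
	__u64 total = 0;

	bpf_for_each(task_vma, vma, task, 0) {
		total += vma->vm_end - vma->vm_start;
	}
	bpf_printk("mapped bytes: %llu", total);
	return 0;
}

char _license[] SEC("license") = "GPL";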
+struct bpf_iter_css_task {
+ __u64 __opaque[1];
+} __attribute__((aligned(8)));
+
+struct bpf_iter_css_task_kern {
+ struct css_task_iter *css_it;
+} __attribute__((aligned(8)));
+
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+ "Global functions as their definitions will be in vmlinux BTF");
+
+__bpf_kfunc int bpf_iter_css_task_new(struct bpf_iter_css_task *it,
+ struct cgroup_subsys_state *css, unsigned int flags)
+{
+ struct bpf_iter_css_task_kern *kit = (void *)it;
+
+ BUILD_BUG_ON(sizeof(struct bpf_iter_css_task_kern) != sizeof(struct bpf_iter_css_task));
+ BUILD_BUG_ON(__alignof__(struct bpf_iter_css_task_kern) !=
+ __alignof__(struct bpf_iter_css_task));
+ kit->css_it = NULL;
+ switch (flags) {
+ case CSS_TASK_ITER_PROCS | CSS_TASK_ITER_THREADED:
+ case CSS_TASK_ITER_PROCS:
+ case 0:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ kit->css_it = bpf_mem_alloc(&bpf_global_ma, sizeof(struct css_task_iter));
+ if (!kit->css_it)
+ return -ENOMEM;
+ css_task_iter_start(css, flags, kit->css_it);
+ return 0;
+}
+
+__bpf_kfunc struct task_struct *bpf_iter_css_task_next(struct bpf_iter_css_task *it)
+{
+ struct bpf_iter_css_task_kern *kit = (void *)it;
+
+ if (!kit->css_it)
+ return NULL;
+ return css_task_iter_next(kit->css_it);
+}
+
+__bpf_kfunc void bpf_iter_css_task_destroy(struct bpf_iter_css_task *it)
+{
+ struct bpf_iter_css_task_kern *kit = (void *)it;
+
+ if (!kit->css_it)
+ return;
+ css_task_iter_end(kit->css_it);
+ bpf_mem_free(&bpf_global_ma, kit->css_it);
+}
+
+__diag_pop();
+
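The css_task iterator takes a trusted css pointer (hence KF_TRUSTED_ARGS in its registration earlier in helpers.c) and the verifier only accepts it from a restricted set of program types such as LSM; a sketch patterned after the selftests, with cg_id filled in by user space:

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_experimental.h"

__u64 cg_id;	/* cgroup id to inspect, set by user space */
__u64 nr_tasks;

SEC("lsm.s/bpf")
int BPF_PROG(count_cgroup_procs, int cmd, union bpf_attr *attr, unsigned int size)
{
	struct cgroup_subsys_state *css;
	struct task_struct *task;
	struct cgroup *cgrp;

	cgrp = bpf_cgroup_from_id(cg_id);
	if (!cgrp)
		return 0;
	css = &cgrp->self;

	bpf_for_each(css_task, task, css, CSS_TASK_ITER_PROCS)
		nr_tasks++;

	bpf_cgroup_release(cgrp);
	return 0;
}

char _license[] SEC("license") = "GPL";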
+struct bpf_iter_task {
+ __u64 __opaque[3];
+} __attribute__((aligned(8)));
+
+struct bpf_iter_task_kern {
+ struct task_struct *task;
+ struct task_struct *pos;
+ unsigned int flags;
+} __attribute__((aligned(8)));
+
+enum {
+ /* all processes in the system */
+ BPF_TASK_ITER_ALL_PROCS,
+ /* all threads in the system */
+ BPF_TASK_ITER_ALL_THREADS,
+ /* all threads of a specific process */
+ BPF_TASK_ITER_PROC_THREADS
+};
+
+__diag_push();
+__diag_ignore_all("-Wmissing-prototypes",
+ "Global functions as their definitions will be in vmlinux BTF");
+
+__bpf_kfunc int bpf_iter_task_new(struct bpf_iter_task *it,
+ struct task_struct *task__nullable, unsigned int flags)
+{
+ struct bpf_iter_task_kern *kit = (void *)it;
+
+ BUILD_BUG_ON(sizeof(struct bpf_iter_task_kern) > sizeof(struct bpf_iter_task));
+ BUILD_BUG_ON(__alignof__(struct bpf_iter_task_kern) !=
+ __alignof__(struct bpf_iter_task));
+
+ kit->task = kit->pos = NULL;
+ switch (flags) {
+ case BPF_TASK_ITER_ALL_THREADS:
+ case BPF_TASK_ITER_ALL_PROCS:
+ break;
+ case BPF_TASK_ITER_PROC_THREADS:
+ if (!task__nullable)
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (flags == BPF_TASK_ITER_PROC_THREADS)
+ kit->task = task__nullable;
+ else
+ kit->task = &init_task;
+ kit->pos = kit->task;
+ kit->flags = flags;
+ return 0;
+}
+
+__bpf_kfunc struct task_struct *bpf_iter_task_next(struct bpf_iter_task *it)
+{
+ struct bpf_iter_task_kern *kit = (void *)it;
+ struct task_struct *pos;
+ unsigned int flags;
+
+ flags = kit->flags;
+ pos = kit->pos;
+
+ if (!pos)
+ return pos;
+
+ if (flags == BPF_TASK_ITER_ALL_PROCS)
+ goto get_next_task;
+
+ kit->pos = next_thread(kit->pos);
+ if (kit->pos == kit->task) {
+ if (flags == BPF_TASK_ITER_PROC_THREADS) {
+ kit->pos = NULL;
+ return pos;
+ }
+ } else
+ return pos;
+
+get_next_task:
+ kit->pos = next_task(kit->pos);
+ kit->task = kit->pos;
+ if (kit->pos == &init_task)
+ kit->pos = NULL;
+
+ return pos;
+}
+
+__bpf_kfunc void bpf_iter_task_destroy(struct bpf_iter_task *it)
+{
+}
+
+__diag_pop();
+
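And the matching open-coded task iterator: BPF_TASK_ITER_ALL_PROCS walks one task per process, BPF_TASK_ITER_ALL_THREADS every thread, and BPF_TASK_ITER_PROC_THREADS the threads of the task passed in. Since the kfunc is registered KF_RCU_PROTECTED, the walk has to sit inside an RCU read section in sleepable programs; a sketch (attach point is illustrative):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

__u64 nr_procs;

SEC("fentry.s/__x64_sys_getpgid")
int count_procs(void *ctx)
{
	struct task_struct *pos;

	bpf_rcu_read_lock();
	bpf_for_each(task, pos, NULL, BPF_TASK_ITER_ALL_PROCS)
		nr_procs++;
	bpf_rcu_read_unlock();
	return 0;
}

char _license[] SEC("license") = "GPL";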
DEFINE_PER_CPU(struct mmap_unlock_irq_work, mmap_unlock_work);
static void do_mmap_read_unlock(struct irq_work *entry)
diff --git a/kernel/bpf/tcx.c b/kernel/bpf/tcx.c
index 1338a13a8b64..2e4885e7781f 100644
--- a/kernel/bpf/tcx.c
+++ b/kernel/bpf/tcx.c
@@ -250,7 +250,7 @@ static void tcx_link_dealloc(struct bpf_link *link)
static void tcx_link_fdinfo(const struct bpf_link *link, struct seq_file *seq)
{
- const struct tcx_link *tcx = tcx_link_const(link);
+ const struct tcx_link *tcx = tcx_link(link);
u32 ifindex = 0;
rtnl_lock();
@@ -267,7 +267,7 @@ static void tcx_link_fdinfo(const struct bpf_link *link, struct seq_file *seq)
static int tcx_link_fill_info(const struct bpf_link *link,
struct bpf_link_info *info)
{
- const struct tcx_link *tcx = tcx_link_const(link);
+ const struct tcx_link *tcx = tcx_link(link);
u32 ifindex = 0;
rtnl_lock();
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index 53ff50cac61e..e97aeda3a86b 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -415,8 +415,8 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr, bool lock_direct_mut
goto out;
}
- /* clear all bits except SHARE_IPMODIFY */
- tr->flags &= BPF_TRAMP_F_SHARE_IPMODIFY;
+ /* clear all bits except SHARE_IPMODIFY and TAIL_CALL_CTX */
+ tr->flags &= (BPF_TRAMP_F_SHARE_IPMODIFY | BPF_TRAMP_F_TAIL_CALL_CTX);
if (tlinks[BPF_TRAMP_FEXIT].nr_links ||
tlinks[BPF_TRAMP_MODIFY_RETURN].nr_links) {
diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 873ade146f3d..857d76694517 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -304,7 +304,7 @@ struct bpf_kfunc_call_arg_meta {
/* arg_{btf,btf_id,owning_ref} are used by kfunc-specific handling,
* generally to pass info about user-defined local kptr types to later
* verification logic
- * bpf_obj_drop
+ * bpf_obj_drop/bpf_percpu_obj_drop
* Record the local kptr type to be drop'd
* bpf_refcount_acquire (via KF_ARG_PTR_TO_REFCOUNTED_KPTR arg type)
* Record the local kptr type to be refcount_incr'd and use
@@ -543,6 +543,7 @@ static bool is_dynptr_ref_function(enum bpf_func_id func_id)
}
static bool is_callback_calling_kfunc(u32 btf_id);
+static bool is_bpf_throw_kfunc(struct bpf_insn *insn);
static bool is_callback_calling_function(enum bpf_func_id func_id)
{
@@ -1172,7 +1173,12 @@ static bool is_dynptr_type_expected(struct bpf_verifier_env *env, struct bpf_reg
static void __mark_reg_known_zero(struct bpf_reg_state *reg);
+static bool in_rcu_cs(struct bpf_verifier_env *env);
+
+static bool is_kfunc_rcu_protected(struct bpf_kfunc_call_arg_meta *meta);
+
static int mark_stack_slots_iter(struct bpf_verifier_env *env,
+ struct bpf_kfunc_call_arg_meta *meta,
struct bpf_reg_state *reg, int insn_idx,
struct btf *btf, u32 btf_id, int nr_slots)
{
@@ -1193,6 +1199,12 @@ static int mark_stack_slots_iter(struct bpf_verifier_env *env,
__mark_reg_known_zero(st);
st->type = PTR_TO_STACK; /* we don't have dedicated reg type */
+ if (is_kfunc_rcu_protected(meta)) {
+ if (in_rcu_cs(env))
+ st->type |= MEM_RCU;
+ else
+ st->type |= PTR_UNTRUSTED;
+ }
st->live |= REG_LIVE_WRITTEN;
st->ref_obj_id = i == 0 ? id : 0;
st->iter.btf = btf;
@@ -1267,7 +1279,7 @@ static bool is_iter_reg_valid_uninit(struct bpf_verifier_env *env,
return true;
}
-static bool is_iter_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
+static int is_iter_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_state *reg,
struct btf *btf, u32 btf_id, int nr_slots)
{
struct bpf_func_state *state = func(env, reg);
@@ -1275,26 +1287,28 @@ static bool is_iter_reg_valid_init(struct bpf_verifier_env *env, struct bpf_reg_
spi = iter_get_spi(env, reg, nr_slots);
if (spi < 0)
- return false;
+ return -EINVAL;
for (i = 0; i < nr_slots; i++) {
struct bpf_stack_state *slot = &state->stack[spi - i];
struct bpf_reg_state *st = &slot->spilled_ptr;
+ if (st->type & PTR_UNTRUSTED)
+ return -EPROTO;
/* only main (first) slot has ref_obj_id set */
if (i == 0 && !st->ref_obj_id)
- return false;
+ return -EINVAL;
if (i != 0 && st->ref_obj_id)
- return false;
+ return -EINVAL;
if (st->iter.btf != btf || st->iter.btf_id != btf_id)
- return false;
+ return -EINVAL;
for (j = 0; j < BPF_REG_SIZE; j++)
if (slot->slot_type[j] != STACK_ITER)
- return false;
+ return -EINVAL;
}
- return true;
+ return 0;
}
/* Check if given stack slot is "special":
@@ -1341,6 +1355,50 @@ static void scrub_spilled_slot(u8 *stype)
*stype = STACK_MISC;
}
+static void print_scalar_ranges(struct bpf_verifier_env *env,
+ const struct bpf_reg_state *reg,
+ const char **sep)
+{
+ struct {
+ const char *name;
+ u64 val;
+ bool omit;
+ } minmaxs[] = {
+ {"smin", reg->smin_value, reg->smin_value == S64_MIN},
+ {"smax", reg->smax_value, reg->smax_value == S64_MAX},
+ {"umin", reg->umin_value, reg->umin_value == 0},
+ {"umax", reg->umax_value, reg->umax_value == U64_MAX},
+ {"smin32", (s64)reg->s32_min_value, reg->s32_min_value == S32_MIN},
+ {"smax32", (s64)reg->s32_max_value, reg->s32_max_value == S32_MAX},
+ {"umin32", reg->u32_min_value, reg->u32_min_value == 0},
+ {"umax32", reg->u32_max_value, reg->u32_max_value == U32_MAX},
+ }, *m1, *m2, *mend = &minmaxs[ARRAY_SIZE(minmaxs)];
+ bool neg1, neg2;
+
+ for (m1 = &minmaxs[0]; m1 < mend; m1++) {
+ if (m1->omit)
+ continue;
+
+ neg1 = m1->name[0] == 's' && (s64)m1->val < 0;
+
+ verbose(env, "%s%s=", *sep, m1->name);
+ *sep = ",";
+
+ for (m2 = m1 + 2; m2 < mend; m2 += 2) {
+ if (m2->omit || m2->val != m1->val)
+ continue;
+ /* don't mix negatives with positives */
+ neg2 = m2->name[0] == 's' && (s64)m2->val < 0;
+ if (neg2 != neg1)
+ continue;
+ m2->omit = true;
+ verbose(env, "%s=", m2->name);
+ }
+
+ verbose(env, m1->name[0] == 's' ? "%lld" : "%llu", m1->val);
+ }
+}
+
static void print_verifier_state(struct bpf_verifier_env *env,
const struct bpf_func_state *state,
bool print_all)
@@ -1404,34 +1462,13 @@ static void print_verifier_state(struct bpf_verifier_env *env,
*/
verbose_a("imm=%llx", reg->var_off.value);
} else {
- if (reg->smin_value != reg->umin_value &&
- reg->smin_value != S64_MIN)
- verbose_a("smin=%lld", (long long)reg->smin_value);
- if (reg->smax_value != reg->umax_value &&
- reg->smax_value != S64_MAX)
- verbose_a("smax=%lld", (long long)reg->smax_value);
- if (reg->umin_value != 0)
- verbose_a("umin=%llu", (unsigned long long)reg->umin_value);
- if (reg->umax_value != U64_MAX)
- verbose_a("umax=%llu", (unsigned long long)reg->umax_value);
+ print_scalar_ranges(env, reg, &sep);
if (!tnum_is_unknown(reg->var_off)) {
char tn_buf[48];
tnum_strn(tn_buf, sizeof(tn_buf), reg->var_off);
verbose_a("var_off=%s", tn_buf);
}
- if (reg->s32_min_value != reg->smin_value &&
- reg->s32_min_value != S32_MIN)
- verbose_a("s32_min=%d", (int)(reg->s32_min_value));
- if (reg->s32_max_value != reg->smax_value &&
- reg->s32_max_value != S32_MAX)
- verbose_a("s32_max=%d", (int)(reg->s32_max_value));
- if (reg->u32_min_value != reg->umin_value &&
- reg->u32_min_value != U32_MIN)
- verbose_a("u32_min=%d", (int)(reg->u32_min_value));
- if (reg->u32_max_value != reg->umax_value &&
- reg->u32_max_value != U32_MAX)
- verbose_a("u32_max=%d", (int)(reg->u32_max_value));
}
#undef verbose_a
@@ -1515,7 +1552,8 @@ static void print_verifier_state(struct bpf_verifier_env *env,
if (state->in_async_callback_fn)
verbose(env, " async_cb");
verbose(env, "\n");
- mark_verifier_state_clean(env);
+ if (!print_all)
+ mark_verifier_state_clean(env);
}
static inline u32 vlog_alignment(u32 pos)
@@ -1748,7 +1786,9 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
return -ENOMEM;
dst_state->jmp_history_cnt = src->jmp_history_cnt;
- /* if dst has more stack frames then src frame, free them */
+ /* if dst has more stack frames than src, free them; this is also
+ * necessary in case of exceptional exits using bpf_throw.
+ */
for (i = src->curframe + 1; i <= dst_state->curframe; i++) {
free_func_state(dst_state->frame[i]);
dst_state->frame[i] = NULL;
@@ -1762,6 +1802,8 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
dst_state->parent = src->parent;
dst_state->first_insn_idx = src->first_insn_idx;
dst_state->last_insn_idx = src->last_insn_idx;
+ dst_state->dfs_depth = src->dfs_depth;
+ dst_state->used_as_loop_entry = src->used_as_loop_entry;
for (i = 0; i <= src->curframe; i++) {
dst = dst_state->frame[i];
if (!dst) {
@@ -1777,11 +1819,203 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state,
return 0;
}
+static u32 state_htab_size(struct bpf_verifier_env *env)
+{
+ return env->prog->len;
+}
+
+static struct bpf_verifier_state_list **explored_state(struct bpf_verifier_env *env, int idx)
+{
+ struct bpf_verifier_state *cur = env->cur_state;
+ struct bpf_func_state *state = cur->frame[cur->curframe];
+
+ return &env->explored_states[(idx ^ state->callsite) % state_htab_size(env)];
+}
+
+static bool same_callsites(struct bpf_verifier_state *a, struct bpf_verifier_state *b)
+{
+ int fr;
+
+ if (a->curframe != b->curframe)
+ return false;
+
+ for (fr = a->curframe; fr >= 0; fr--)
+ if (a->frame[fr]->callsite != b->frame[fr]->callsite)
+ return false;
+
+ return true;
+}
+
+/* Open coded iterators allow back-edges in the state graph in order to
+ * verify unbounded loops that use iterators.
+ *
+ * In is_state_visited() it is necessary to know if explored states are
+ * part of some loops in order to decide whether non-exact states
+ * comparison could be used:
+ * - non-exact states comparison establishes sub-state relation and uses
+ * read and precision marks to do so, these marks are propagated from
+ * children states and thus are not guaranteed to be final in a loop;
+ * - exact states comparison just checks if current and explored states
+ * are identical (and thus form a back-edge).
+ *
+ * Paper "A New Algorithm for Identifying Loops in Decompilation"
+ * by Tao Wei, Jian Mao, Wei Zou and Yu Chen [1] presents a convenient
+ * algorithm for loop structure detection and gives an overview of
+ * relevant terminology. It also has helpful illustrations.
+ *
+ * [1] https://api.semanticscholar.org/CorpusID:15784067
+ *
+ * We use a similar algorithm but, because loop nesting structure is
+ * irrelevant for the verifier, ours is significantly simpler and resembles
+ * the strongly connected components algorithm from Sedgewick's textbook.
+ *
+ * Define the topmost loop entry as the first node of the loop traversed in a
+ * depth first search starting from the initial state. The goal of the loop
+ * tracking algorithm is to associate topmost loop entries with states
+ * derived from these entries.
+ *
+ * For each step in the DFS states traversal the algorithm needs to identify
+ * the following situations:
+ *
+ * initial initial initial
+ * | | |
+ * V V V
+ * ... ... .---------> hdr
+ * | | | |
+ * V V | V
+ * cur .-> succ | .------...
+ * | | | | | |
+ * V | V | V V
+ * succ '-- cur | ... ...
+ * | | |
+ * | V V
+ * | succ <- cur
+ * | |
+ * | V
+ * | ...
+ * | |
+ * '----'
+ *
+ * (A) successor state of cur (B) successor state of cur or its entry
+ * not yet traversed are in current DFS path, thus cur and succ
+ * are members of the same outermost loop
+ *
+ * initial initial
+ * | |
+ * V V
+ * ... ...
+ * | |
+ * V V
+ * .------... .------...
+ * | | | |
+ * V V V V
+ * .-> hdr ... ... ...
+ * | | | | |
+ * | V V V V
+ * | succ <- cur succ <- cur
+ * | | |
+ * | V V
+ * | ... ...
+ * | | |
+ * '----' exit
+ *
+ * (C) successor state of cur is a part of some loop but this loop
+ * does not include cur or successor state is not in a loop at all.
+ *
+ * Algorithm could be described as the following python code:
+ *
+ * traversed = set() # Set of traversed nodes
+ * entries = {} # Mapping from node to loop entry
+ * depths = {} # Depth level assigned to graph node
+ * path = set() # Current DFS path
+ *
+ * # Find outermost loop entry known for n
+ * def get_loop_entry(n):
+ * h = entries.get(n, None)
+ * while h in entries and entries[h] != h:
+ * h = entries[h]
+ * return h
+ *
+ * # Update n's loop entry if h's outermost entry comes
+ * # before n's outermost entry in current DFS path.
+ * def update_loop_entry(n, h):
+ * n1 = get_loop_entry(n) or n
+ * h1 = get_loop_entry(h) or h
+ * if h1 in path and depths[h1] <= depths[n1]:
+ * entries[n] = h1
+ *
+ * def dfs(n, depth):
+ * traversed.add(n)
+ * path.add(n)
+ * depths[n] = depth
+ * for succ in G.successors(n):
+ * if succ not in traversed:
+ * # Case A: explore succ and update cur's loop entry
+ * # only if succ's entry is in current DFS path.
+ * dfs(succ, depth + 1)
+ * h = get_loop_entry(succ)
+ * update_loop_entry(n, h)
+ * else:
+ * # Case B or C depending on `h1 in path` check in update_loop_entry().
+ * update_loop_entry(n, succ)
+ * path.remove(n)
+ *
+ * To adapt this algorithm for use with verifier:
+ * - use st->branch == 0 as a signal that DFS of succ had been finished
+ * and cur's loop entry has to be updated (case A), handle this in
+ * update_branch_counts();
+ * - use st->branch > 0 as a signal that st is in the current DFS path;
+ * - handle cases B and C in is_state_visited();
+ * - update topmost loop entry for intermediate states in get_loop_entry().
+ */
+static struct bpf_verifier_state *get_loop_entry(struct bpf_verifier_state *st)
+{
+ struct bpf_verifier_state *topmost = st->loop_entry, *old;
+
+ while (topmost && topmost->loop_entry && topmost != topmost->loop_entry)
+ topmost = topmost->loop_entry;
+ /* Update loop entries for intermediate states to avoid this
+ * traversal in future get_loop_entry() calls.
+ */
+ while (st && st->loop_entry != topmost) {
+ old = st->loop_entry;
+ st->loop_entry = topmost;
+ st = old;
+ }
+ return topmost;
+}
+
+static void update_loop_entry(struct bpf_verifier_state *cur, struct bpf_verifier_state *hdr)
+{
+ struct bpf_verifier_state *cur1, *hdr1;
+
+ cur1 = get_loop_entry(cur) ?: cur;
+ hdr1 = get_loop_entry(hdr) ?: hdr;
+ /* The hdr1->branches check decides between cases B and C in
+ * comment for get_loop_entry(). If hdr1->branches == 0 then
+ * hdr's topmost loop entry is not in current DFS path,
+ * hence 'cur' and 'hdr' are not in the same loop and there is
+ * no need to update cur->loop_entry.
+ */
+ if (hdr1->branches && hdr1->dfs_depth <= cur1->dfs_depth) {
+ cur->loop_entry = hdr;
+ hdr->used_as_loop_entry = true;
+ }
+}
+
static void update_branch_counts(struct bpf_verifier_env *env, struct bpf_verifier_state *st)
{
while (st) {
u32 br = --st->branches;
+ /* br == 0 signals that DFS exploration for 'st' is finished,
+ * thus it is necessary to update parent's loop entry if it
+ * turned out that st is a part of some loop.
+ * This is a part of 'case A' in get_loop_entry() comment.
+ */
+ if (br == 0 && st->parent && st->loop_entry)
+ update_loop_entry(st->parent, st->loop_entry);
+
/* WARN_ON(br > 1) technically makes sense here,
* but see comment in push_stack(), hence:
*/
@@ -2454,6 +2688,68 @@ static int add_subprog(struct bpf_verifier_env *env, int off)
return env->subprog_cnt - 1;
}
+static int bpf_find_exception_callback_insn_off(struct bpf_verifier_env *env)
+{
+ struct bpf_prog_aux *aux = env->prog->aux;
+ struct btf *btf = aux->btf;
+ const struct btf_type *t;
+ u32 main_btf_id, id;
+ const char *name;
+ int ret, i;
+
+ /* Non-zero func_info_cnt implies valid btf */
+ if (!aux->func_info_cnt)
+ return 0;
+ main_btf_id = aux->func_info[0].type_id;
+
+ t = btf_type_by_id(btf, main_btf_id);
+ if (!t) {
+ verbose(env, "invalid btf id for main subprog in func_info\n");
+ return -EINVAL;
+ }
+
+ name = btf_find_decl_tag_value(btf, t, -1, "exception_callback:");
+ if (IS_ERR(name)) {
+ ret = PTR_ERR(name);
+ /* If there is no tag present, there is no exception callback */
+ if (ret == -ENOENT)
+ ret = 0;
+ else if (ret == -EEXIST)
+ verbose(env, "multiple exception callback tags for main subprog\n");
+ return ret;
+ }
+
+ ret = btf_find_by_name_kind(btf, name, BTF_KIND_FUNC);
+ if (ret < 0) {
+ verbose(env, "exception callback '%s' could not be found in BTF\n", name);
+ return ret;
+ }
+ id = ret;
+ t = btf_type_by_id(btf, id);
+ if (btf_func_linkage(t) != BTF_FUNC_GLOBAL) {
+ verbose(env, "exception callback '%s' must have global linkage\n", name);
+ return -EINVAL;
+ }
+ ret = 0;
+ for (i = 0; i < aux->func_info_cnt; i++) {
+ if (aux->func_info[i].type_id != id)
+ continue;
+ ret = aux->func_info[i].insn_off;
+ /* Further func_info and subprog checks will also happen
+ * later, so assume this is the right insn_off for now.
+ */
+ if (!ret) {
+ verbose(env, "invalid exception callback insn_off in func_info: 0\n");
+ ret = -EINVAL;
+ }
+ }
+ if (!ret) {
+ verbose(env, "exception callback type id not found in func_info\n");
+ ret = -EINVAL;
+ }
+ return ret;
+}
+
#define MAX_KFUNC_DESCS 256
#define MAX_KFUNC_BTFS 256
@@ -2793,8 +3089,8 @@ bpf_jit_find_kfunc_model(const struct bpf_prog *prog,
static int add_subprog_and_kfunc(struct bpf_verifier_env *env)
{
struct bpf_subprog_info *subprog = env->subprog_info;
+ int i, ret, insn_cnt = env->prog->len, ex_cb_insn;
struct bpf_insn *insn = env->prog->insnsi;
- int i, ret, insn_cnt = env->prog->len;
/* Add entry function. */
ret = add_subprog(env, 0);
@@ -2820,6 +3116,26 @@ static int add_subprog_and_kfunc(struct bpf_verifier_env *env)
return ret;
}
+ ret = bpf_find_exception_callback_insn_off(env);
+ if (ret < 0)
+ return ret;
+ ex_cb_insn = ret;
+
+ /* If ex_cb_insn > 0, this means that the main program has a subprog
+ * marked using BTF decl tag to serve as the exception callback.
+ */
+ if (ex_cb_insn) {
+ ret = add_subprog(env, ex_cb_insn);
+ if (ret < 0)
+ return ret;
+ for (i = 1; i < env->subprog_cnt; i++) {
+ if (env->subprog_info[i].start != ex_cb_insn)
+ continue;
+ env->exception_callback_subprog = i;
+ break;
+ }
+ }
+
/* Add a fake 'exit' subprog which could simplify subprog iteration
* logic. 'subprog_cnt' should not be increased.
*/
@@ -2868,7 +3184,7 @@ next:
if (i == subprog_end - 1) {
/* to avoid fall-through from one subprog into another
* the last insn of the subprog should be either exit
- * or unconditional jump back
+ * or unconditional jump back or bpf_throw call
*/
if (code != (BPF_JMP | BPF_EXIT) &&
code != (BPF_JMP32 | BPF_JA) &&
@@ -3029,7 +3345,7 @@ static bool is_reg64(struct bpf_verifier_env *env, struct bpf_insn *insn,
if (class == BPF_LDX) {
if (t != SRC_OP)
- return BPF_SIZE(code) == BPF_DW;
+ return BPF_SIZE(code) == BPF_DW || BPF_MODE(code) == BPF_MEMSX;
/* LDX source must be ptr. */
return true;
}
@@ -4999,6 +5315,8 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
perm_flags |= PTR_UNTRUSTED;
} else {
perm_flags = PTR_MAYBE_NULL | MEM_ALLOC;
+ if (kptr_field->type == BPF_KPTR_PERCPU)
+ perm_flags |= MEM_PERCPU;
}
if (base_type(reg->type) != PTR_TO_BTF_ID || (type_flag(reg->type) & ~perm_flags))
@@ -5042,7 +5360,7 @@ static int map_kptr_match_type(struct bpf_verifier_env *env,
*/
if (!btf_struct_ids_match(&env->log, reg->btf, reg->btf_id, reg->off,
kptr_field->kptr.btf, kptr_field->kptr.btf_id,
- kptr_field->type == BPF_KPTR_REF))
+ kptr_field->type != BPF_KPTR_UNREF))
goto bad_type;
return 0;
bad_type:
@@ -5086,7 +5404,18 @@ static bool rcu_safe_kptr(const struct btf_field *field)
{
const struct btf_field_kptr *kptr = &field->kptr;
- return field->type == BPF_KPTR_REF && rcu_protected_object(kptr->btf, kptr->btf_id);
+ return field->type == BPF_KPTR_PERCPU ||
+ (field->type == BPF_KPTR_REF && rcu_protected_object(kptr->btf, kptr->btf_id));
+}
+
+static u32 btf_ld_kptr_type(struct bpf_verifier_env *env, struct btf_field *kptr_field)
+{
+ if (rcu_safe_kptr(kptr_field) && in_rcu_cs(env)) {
+ if (kptr_field->type != BPF_KPTR_PERCPU)
+ return PTR_MAYBE_NULL | MEM_RCU;
+ return PTR_MAYBE_NULL | MEM_RCU | MEM_PERCPU;
+ }
+ return PTR_MAYBE_NULL | PTR_UNTRUSTED;
}
static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
@@ -5112,7 +5441,8 @@ static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
/* We only allow loading referenced kptr, since it will be marked as
* untrusted, similar to unreferenced kptr.
*/
- if (class != BPF_LDX && kptr_field->type == BPF_KPTR_REF) {
+ if (class != BPF_LDX &&
+ (kptr_field->type == BPF_KPTR_REF || kptr_field->type == BPF_KPTR_PERCPU)) {
verbose(env, "store to referenced kptr disallowed\n");
return -EACCES;
}
@@ -5123,10 +5453,7 @@ static int check_map_kptr_access(struct bpf_verifier_env *env, u32 regno,
* value from map as PTR_TO_BTF_ID, with the correct type.
*/
mark_btf_ld_reg(env, cur_regs(env), value_regno, PTR_TO_BTF_ID, kptr_field->kptr.btf,
- kptr_field->kptr.btf_id,
- rcu_safe_kptr(kptr_field) && in_rcu_cs(env) ?
- PTR_MAYBE_NULL | MEM_RCU :
- PTR_MAYBE_NULL | PTR_UNTRUSTED);
+ kptr_field->kptr.btf_id, btf_ld_kptr_type(env, kptr_field));
/* For mark_ptr_or_null_reg */
val_reg->id = ++env->id_gen;
} else if (class == BPF_STX) {
@@ -5180,6 +5507,7 @@ static int check_map_access(struct bpf_verifier_env *env, u32 regno,
switch (field->type) {
case BPF_KPTR_UNREF:
case BPF_KPTR_REF:
+ case BPF_KPTR_PERCPU:
if (src != ACCESS_DIRECT) {
verbose(env, "kptr cannot be accessed indirectly by helper\n");
return -EACCES;
@@ -5647,6 +5975,27 @@ continue_func:
for (; i < subprog_end; i++) {
int next_insn, sidx;
+ if (bpf_pseudo_kfunc_call(insn + i) && !insn[i].off) {
+ bool err = false;
+
+ if (!is_bpf_throw_kfunc(insn + i))
+ continue;
+ if (subprog[idx].is_cb)
+ err = true;
+ for (int c = 0; c < frame && !err; c++) {
+ if (subprog[ret_prog[c]].is_cb) {
+ err = true;
+ break;
+ }
+ }
+ if (!err)
+ continue;
+ verbose(env,
+ "bpf_throw kfunc (insn %d) cannot be called from callback subprog %d\n",
+ i, idx);
+ return -EINVAL;
+ }
+
if (!bpf_pseudo_call(insn + i) && !bpf_pseudo_func(insn + i))
continue;
/* remember insn and function to return to */
@@ -5669,6 +6018,10 @@ continue_func:
/* async callbacks don't increase bpf prog stack size unless called directly */
if (!bpf_pseudo_call(insn + i))
continue;
+ if (subprog[sidx].is_exception_cb) {
+ verbose(env, "insn %d cannot call exception cb directly\n", i);
+ return -EINVAL;
+ }
}
i = next_insn;
idx = sidx;
@@ -5690,8 +6043,13 @@ continue_func:
* tail call counter throughout bpf2bpf calls combined with tailcalls
*/
if (tail_call_reachable)
- for (j = 0; j < frame; j++)
+ for (j = 0; j < frame; j++) {
+ if (subprog[ret_prog[j]].is_exception_cb) {
+ verbose(env, "cannot tail call within exception cb\n");
+ return -EINVAL;
+ }
subprog[ret_prog[j]].tail_call_reachable = true;
+ }
if (subprog[0].tail_call_reachable)
env->prog->aux->tail_call_reachable = true;
@@ -6207,7 +6565,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
}
if (type_is_alloc(reg->type) && !type_is_non_owning_ref(reg->type) &&
- !reg->ref_obj_id) {
+ !(reg->type & MEM_RCU) && !reg->ref_obj_id) {
verbose(env, "verifier internal error: ref_obj_id for allocated object must be non-zero\n");
return -EFAULT;
}
@@ -7318,7 +7676,7 @@ static int process_kptr_func(struct bpf_verifier_env *env, int regno,
verbose(env, "off=%d doesn't point to kptr\n", kptr_off);
return -EACCES;
}
- if (kptr_field->type != BPF_KPTR_REF) {
+ if (kptr_field->type != BPF_KPTR_REF && kptr_field->type != BPF_KPTR_PERCPU) {
verbose(env, "off=%d kptr isn't referenced kptr\n", kptr_off);
return -EACCES;
}
@@ -7489,15 +7847,24 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
return err;
}
- err = mark_stack_slots_iter(env, reg, insn_idx, meta->btf, btf_id, nr_slots);
+ err = mark_stack_slots_iter(env, meta, reg, insn_idx, meta->btf, btf_id, nr_slots);
if (err)
return err;
} else {
/* iter_next() or iter_destroy() expect initialized iter state*/
- if (!is_iter_reg_valid_init(env, reg, meta->btf, btf_id, nr_slots)) {
+ err = is_iter_reg_valid_init(env, reg, meta->btf, btf_id, nr_slots);
+ switch (err) {
+ case 0:
+ break;
+ case -EINVAL:
verbose(env, "expected an initialized iter_%s as arg #%d\n",
iter_type_str(meta->btf, btf_id), regno);
- return -EINVAL;
+ return err;
+ case -EPROTO:
+ verbose(env, "expected an RCU CS when using %s\n", meta->func_name);
+ return err;
+ default:
+ return err;
}
spi = iter_get_spi(env, reg, nr_slots);
@@ -7523,6 +7890,81 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
return 0;
}
+/* Look for a previous loop entry at insn_idx: nearest parent state
+ * stopped at insn_idx with callsites matching those in cur->frame.
+ */
+static struct bpf_verifier_state *find_prev_entry(struct bpf_verifier_env *env,
+ struct bpf_verifier_state *cur,
+ int insn_idx)
+{
+ struct bpf_verifier_state_list *sl;
+ struct bpf_verifier_state *st;
+
+ /* Explored states are pushed in stack order, most recent states come first */
+ sl = *explored_state(env, insn_idx);
+ for (; sl; sl = sl->next) {
+ /* If st->branches != 0, the state is a part of the current DFS
+ * verification path, hence cur and st form a loop.
+ */
+ st = &sl->state;
+ if (st->insn_idx == insn_idx && st->branches && same_callsites(st, cur) &&
+ st->dfs_depth < cur->dfs_depth)
+ return st;
+ }
+
+ return NULL;
+}
+
+static void reset_idmap_scratch(struct bpf_verifier_env *env);
+static bool regs_exact(const struct bpf_reg_state *rold,
+ const struct bpf_reg_state *rcur,
+ struct bpf_idmap *idmap);
+
+static void maybe_widen_reg(struct bpf_verifier_env *env,
+ struct bpf_reg_state *rold, struct bpf_reg_state *rcur,
+ struct bpf_idmap *idmap)
+{
+ if (rold->type != SCALAR_VALUE)
+ return;
+ if (rold->type != rcur->type)
+ return;
+ if (rold->precise || rcur->precise || regs_exact(rold, rcur, idmap))
+ return;
+ __mark_reg_unknown(env, rcur);
+}
+
+static int widen_imprecise_scalars(struct bpf_verifier_env *env,
+ struct bpf_verifier_state *old,
+ struct bpf_verifier_state *cur)
+{
+ struct bpf_func_state *fold, *fcur;
+ int i, fr;
+
+ reset_idmap_scratch(env);
+ for (fr = old->curframe; fr >= 0; fr--) {
+ fold = old->frame[fr];
+ fcur = cur->frame[fr];
+
+ for (i = 0; i < MAX_BPF_REG; i++)
+ maybe_widen_reg(env,
+ &fold->regs[i],
+ &fcur->regs[i],
+ &env->idmap_scratch);
+
+ for (i = 0; i < fold->allocated_stack / BPF_REG_SIZE; i++) {
+ if (!is_spilled_reg(&fold->stack[i]) ||
+ !is_spilled_reg(&fcur->stack[i]))
+ continue;
+
+ maybe_widen_reg(env,
+ &fold->stack[i].spilled_ptr,
+ &fcur->stack[i].spilled_ptr,
+ &env->idmap_scratch);
+ }
+ }
+ return 0;
+}
+
/* process_iter_next_call() is called when verifier gets to iterator's next
* "method" (e.g., bpf_iter_num_next() for numbers iterator) call. We'll refer
* to it as just "iter_next()" in comments below.
@@ -7564,25 +8006,47 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
* is some statically known limit on number of iterations (e.g., if there is
* an explicit `if n > 100 then break;` statement somewhere in the loop).
*
- * One very subtle but very important aspect is that we *always* simulate NULL
- * condition first (as the current state) before we simulate non-NULL case.
- * This has to do with intricacies of scalar precision tracking. By simulating
- * "exit condition" of iter_next() returning NULL first, we make sure all the
- * relevant precision marks *that will be set **after** we exit iterator loop*
- * are propagated backwards to common parent state of NULL and non-NULL
- * branches. Thanks to that, state equivalence checks done later in forked
- * state, when reaching iter_next() for ACTIVE iterator, can assume that
- * precision marks are finalized and won't change. Because simulating another
- * ACTIVE iterator iteration won't change them (because given same input
- * states we'll end up with exactly same output states which we are currently
- * comparing; and verification after the loop already propagated back what
- * needs to be **additionally** tracked as precise). It's subtle, grok
- * precision tracking for more intuitive understanding.
+ * Iteration convergence logic in is_state_visited() relies on exact
+ * state comparison, which ignores read and precision marks.
+ * This is necessary because read and precision marks are not finalized
+ * while in the loop. Exact comparison might preclude convergence for
+ * simple programs like below:
+ *
+ * i = 0;
+ * while(iter_next(&it))
+ * i++;
+ *
+ * At each iteration step i++ would produce a new distinct state and
+ * eventually instruction processing limit would be reached.
+ *
+ * To avoid such behavior, speculatively forget (widen) the range of
+ * imprecise scalar registers if those registers were not precise at the
+ * end of the previous iteration and do not match exactly.
+ *
+ * This is a conservative heuristic that allows verification of a wide range
+ * of programs; however, it precludes verification of programs that conjure an
+ * imprecise value on the first loop iteration and use it as precise on the second.
+ * For example, the following safe program would fail to verify:
+ *
+ * struct bpf_num_iter it;
+ * int arr[10];
+ * int i = 0, a = 0;
+ * bpf_iter_num_new(&it, 0, 10);
+ * while (bpf_iter_num_next(&it)) {
+ * if (a == 0) {
+ * a = 1;
+ * i = 7; // Because i changed, the verifier would forget
+ * // its range on the second loop entry.
+ * } else {
+ * arr[i] = 42; // This would fail to verify.
+ * }
+ * }
+ * bpf_iter_num_destroy(&it);
*/
static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
struct bpf_kfunc_call_arg_meta *meta)
{
- struct bpf_verifier_state *cur_st = env->cur_state, *queued_st;
+ struct bpf_verifier_state *cur_st = env->cur_state, *queued_st, *prev_st;
struct bpf_func_state *cur_fr = cur_st->frame[cur_st->curframe], *queued_fr;
struct bpf_reg_state *cur_iter, *queued_iter;
int iter_frameno = meta->iter.frameno;
@@ -7600,6 +8064,19 @@ static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
}
if (cur_iter->iter.state == BPF_ITER_STATE_ACTIVE) {
+ /* Because the iter_next() call is a checkpoint, is_state_visited()
+ * should guarantee a parent state with the same call sites and insn_idx.
+ */
+ if (!cur_st->parent || cur_st->parent->insn_idx != insn_idx ||
+ !same_callsites(cur_st->parent, cur_st)) {
+ verbose(env, "bug: bad parent state for iter next call");
+ return -EFAULT;
+ }
+ /* Note cur_st->parent in the call below: it is necessary to skip
+ * the checkpoint created for cur_st by is_state_visited()
+ * right at this instruction.
+ */
+ prev_st = find_prev_entry(env, cur_st->parent, insn_idx);
/* branch out active iter state */
queued_st = push_stack(env, insn_idx + 1, insn_idx, false);
if (!queued_st)
@@ -7608,6 +8085,8 @@ static int process_iter_next_call(struct bpf_verifier_env *env, int insn_idx,
queued_iter = &queued_st->frame[iter_frameno]->stack[iter_spi].spilled_ptr;
queued_iter->iter.state = BPF_ITER_STATE_ACTIVE;
queued_iter->iter.depth++;
+ if (prev_st)
+ widen_imprecise_scalars(env, prev_st, queued_st);
queued_fr = queued_st->frame[queued_st->curframe];
mark_ptr_not_null_reg(&queued_fr->regs[BPF_REG_0]);
@@ -7751,6 +8230,7 @@ static const struct bpf_reg_types btf_ptr_types = {
static const struct bpf_reg_types percpu_btf_ptr_types = {
.types = {
PTR_TO_BTF_ID | MEM_PERCPU,
+ PTR_TO_BTF_ID | MEM_PERCPU | MEM_RCU,
PTR_TO_BTF_ID | MEM_PERCPU | PTR_TRUSTED,
}
};
@@ -7829,8 +8309,10 @@ static int check_reg_type(struct bpf_verifier_env *env, u32 regno,
if (base_type(arg_type) == ARG_PTR_TO_MEM)
type &= ~DYNPTR_TYPE_FLAG_MASK;
- if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type))
+ if (meta->func_id == BPF_FUNC_kptr_xchg && type_is_alloc(type)) {
type &= ~MEM_ALLOC;
+ type &= ~MEM_PERCPU;
+ }
for (i = 0; i < ARRAY_SIZE(compatible->types); i++) {
expected = compatible->types[i];
@@ -7913,6 +8395,7 @@ found:
break;
}
case PTR_TO_BTF_ID | MEM_ALLOC:
+ case PTR_TO_BTF_ID | MEM_PERCPU | MEM_ALLOC:
if (meta->func_id != BPF_FUNC_spin_lock && meta->func_id != BPF_FUNC_spin_unlock &&
meta->func_id != BPF_FUNC_kptr_xchg) {
verbose(env, "verifier internal error: unimplemented handling of MEM_ALLOC\n");
@@ -7924,6 +8407,7 @@ found:
}
break;
case PTR_TO_BTF_ID | MEM_PERCPU:
+ case PTR_TO_BTF_ID | MEM_PERCPU | MEM_RCU:
case PTR_TO_BTF_ID | MEM_PERCPU | PTR_TRUSTED:
/* Handled by helper specific checks */
break;
@@ -8900,6 +9384,7 @@ static int __check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn
* callbacks
*/
if (set_callee_state_cb != set_callee_state) {
+ env->subprog_info[subprog].is_cb = true;
if (bpf_pseudo_kfunc_call(insn) &&
!is_callback_calling_kfunc(insn->imm)) {
verbose(env, "verifier bug: kfunc %s#%d not marked as callback-calling\n",
@@ -9289,7 +9774,8 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
verbose(env, "to caller at %d:\n", *insn_idx);
print_verifier_state(env, caller, true);
}
- /* clear everything in the callee */
+ /* clear everything in the callee. In case of exceptional exits using
+ * bpf_throw, this will be done by copy_verifier_state for extra frames.
+ */
free_func_state(callee);
state->frame[state->curframe--] = NULL;
return 0;
@@ -9413,17 +9899,17 @@ record_func_key(struct bpf_verifier_env *env, struct bpf_call_arg_meta *meta,
return 0;
}
-static int check_reference_leak(struct bpf_verifier_env *env)
+static int check_reference_leak(struct bpf_verifier_env *env, bool exception_exit)
{
struct bpf_func_state *state = cur_func(env);
bool refs_lingering = false;
int i;
- if (state->frameno && !state->in_callback_fn)
+ if (!exception_exit && state->frameno && !state->in_callback_fn)
return 0;
for (i = 0; i < state->acquired_refs; i++) {
- if (state->in_callback_fn && state->refs[i].callback_ref != state->frameno)
+ if (!exception_exit && state->in_callback_fn && state->refs[i].callback_ref != state->frameno)
continue;
verbose(env, "Unreleased reference id=%d alloc_insn=%d\n",
state->refs[i].id, state->refs[i].insn_idx);
@@ -9530,6 +10016,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
int *insn_idx_p)
{
enum bpf_prog_type prog_type = resolve_prog_type(env->prog);
+ bool returns_cpu_specific_alloc_ptr = false;
const struct bpf_func_proto *fn = NULL;
enum bpf_return_type ret_type;
enum bpf_type_flag ret_flag;
@@ -9640,6 +10127,26 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
return -EFAULT;
}
err = unmark_stack_slots_dynptr(env, &regs[meta.release_regno]);
+ } else if (func_id == BPF_FUNC_kptr_xchg && meta.ref_obj_id) {
+ u32 ref_obj_id = meta.ref_obj_id;
+ bool in_rcu = in_rcu_cs(env);
+ struct bpf_func_state *state;
+ struct bpf_reg_state *reg;
+
+ err = release_reference_state(cur_func(env), ref_obj_id);
+ if (!err) {
+ bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
+ if (reg->ref_obj_id == ref_obj_id) {
+ if (in_rcu && (reg->type & MEM_ALLOC) && (reg->type & MEM_PERCPU)) {
+ reg->ref_obj_id = 0;
+ reg->type &= ~MEM_ALLOC;
+ reg->type |= MEM_RCU;
+ } else {
+ mark_reg_invalid(env, reg);
+ }
+ }
+ }));
+ }
} else if (meta.ref_obj_id) {
err = release_reference(env, meta.ref_obj_id);
} else if (register_is_null(&regs[meta.release_regno])) {
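
Together with the MEM_PERCPU | MEM_RCU register type added above, this lets a program keep using a per-cpu pointer after handing it to bpf_kptr_xchg() inside an RCU critical section. A sketch in the shape of the selftests, assuming the bpf_percpu_obj_new()/bpf_percpu_obj_drop() wrappers from bpf_experimental.h and the __percpu_kptr tag from bpf_helpers.h:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_experimental.h"

char _license[] SEC("license") = "GPL";

struct val_t { long a, b, c; };		/* scalars only, no special fields */

struct elem { struct val_t __percpu_kptr *pc; };

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} array SEC(".maps");

SEC("fentry/bpf_fentry_test1")
int BPF_PROG(percpu_kptr_demo, int a)
{
	struct val_t __percpu_kptr *p, *old;
	struct val_t *v;
	struct elem *e;
	int key = 0;

	e = bpf_map_lookup_elem(&array, &key);
	if (!e)
		return 0;

	p = bpf_percpu_obj_new(struct val_t);
	if (!p)
		return 0;

	/* Publish the object; a displaced old kptr comes back referenced
	 * and has to be dropped.
	 */
	old = bpf_kptr_xchg(&e->pc, p);
	if (old)
		bpf_percpu_obj_drop(old);

	/* Non-sleepable programs run inside an RCU critical section, so 'p'
	 * is downgraded to an RCU-protected per-cpu pointer here instead of
	 * being invalidated, and the per-cpu accessors still accept it.
	 */
	v = bpf_this_cpu_ptr(p);
	v->a = 1;
	return 0;
}
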
@@ -9657,7 +10164,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
switch (func_id) {
case BPF_FUNC_tail_call:
- err = check_reference_leak(env);
+ err = check_reference_leak(env, false);
if (err) {
verbose(env, "tail_call would lead to reference leak\n");
return err;
@@ -9768,6 +10275,23 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
break;
}
+ case BPF_FUNC_per_cpu_ptr:
+ case BPF_FUNC_this_cpu_ptr:
+ {
+ struct bpf_reg_state *reg = &regs[BPF_REG_1];
+ const struct btf_type *type;
+
+ if (reg->type & MEM_RCU) {
+ type = btf_type_by_id(reg->btf, reg->btf_id);
+ if (!type || !btf_type_is_struct(type)) {
+ verbose(env, "Helper has invalid btf/btf_id in R1\n");
+ return -EFAULT;
+ }
+ returns_cpu_specific_alloc_ptr = true;
+ env->insn_aux_data[insn_idx].call_with_percpu_alloc_ptr = true;
+ }
+ break;
+ }
case BPF_FUNC_user_ringbuf_drain:
err = __check_func_call(env, insn, insn_idx_p, meta.subprogno,
set_user_ringbuf_callback_state);
@@ -9857,14 +10381,18 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
regs[BPF_REG_0].type = PTR_TO_MEM | ret_flag;
regs[BPF_REG_0].mem_size = tsize;
} else {
- /* MEM_RDONLY may be carried from ret_flag, but it
- * doesn't apply on PTR_TO_BTF_ID. Fold it, otherwise
- * it will confuse the check of PTR_TO_BTF_ID in
- * check_mem_access().
- */
- ret_flag &= ~MEM_RDONLY;
+ if (returns_cpu_specific_alloc_ptr) {
+ regs[BPF_REG_0].type = PTR_TO_BTF_ID | MEM_ALLOC | MEM_RCU;
+ } else {
+ /* MEM_RDONLY may be carried from ret_flag, but it
+ * doesn't apply on PTR_TO_BTF_ID. Fold it, otherwise
+ * it will confuse the check of PTR_TO_BTF_ID in
+ * check_mem_access().
+ */
+ ret_flag &= ~MEM_RDONLY;
+ regs[BPF_REG_0].type = PTR_TO_BTF_ID | ret_flag;
+ }
- regs[BPF_REG_0].type = PTR_TO_BTF_ID | ret_flag;
regs[BPF_REG_0].btf = meta.ret_btf;
regs[BPF_REG_0].btf_id = meta.ret_btf_id;
}
@@ -9880,8 +10408,11 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
if (func_id == BPF_FUNC_kptr_xchg) {
ret_btf = meta.kptr_field->kptr.btf;
ret_btf_id = meta.kptr_field->kptr.btf_id;
- if (!btf_is_kernel(ret_btf))
+ if (!btf_is_kernel(ret_btf)) {
regs[BPF_REG_0].type |= MEM_ALLOC;
+ if (meta.kptr_field->type == BPF_KPTR_PERCPU)
+ regs[BPF_REG_0].type |= MEM_PERCPU;
+ }
} else {
if (fn->ret_btf_id == BPF_PTR_POISON) {
verbose(env, "verifier internal error:");
@@ -10028,6 +10559,11 @@ static bool is_kfunc_rcu(struct bpf_kfunc_call_arg_meta *meta)
return meta->kfunc_flags & KF_RCU;
}
+static bool is_kfunc_rcu_protected(struct bpf_kfunc_call_arg_meta *meta)
+{
+ return meta->kfunc_flags & KF_RCU_PROTECTED;
+}
+
static bool __kfunc_param_match_suffix(const struct btf *btf,
const struct btf_param *arg,
const char *suffix)
@@ -10102,6 +10638,11 @@ static bool is_kfunc_arg_refcounted_kptr(const struct btf *btf, const struct btf
return __kfunc_param_match_suffix(btf, arg, "__refcounted_kptr");
}
+static bool is_kfunc_arg_nullable(const struct btf *btf, const struct btf_param *arg)
+{
+ return __kfunc_param_match_suffix(btf, arg, "__nullable");
+}
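
The suffix is matched against the parameter name itself, so opting a kfunc argument into NULL is purely a naming convention on the kernel side. A hypothetical kfunc (bpf_foo_probe and its semantics are made up for illustration) would look like this:

/* Kernel side: the __nullable suffix tells the verifier that BPF callers
 * may pass NULL for this otherwise trusted argument.
 */
__bpf_kfunc int bpf_foo_probe(struct task_struct *task__nullable)
{
	if (!task__nullable)
		return -EINVAL;
	return task__nullable->pid;
}

On the BPF side only a register the verifier knows to be exactly NULL takes this path; it is classified as KF_ARG_PTR_TO_NULL below and skipped by the per-argument checks, while non-NULL pointers keep going through the usual trusted/RCU argument handling.
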
+
static bool is_kfunc_arg_scalar_with_name(const struct btf *btf,
const struct btf_param *arg,
const char *name)
@@ -10244,6 +10785,7 @@ enum kfunc_ptr_arg_type {
KF_ARG_PTR_TO_CALLBACK,
KF_ARG_PTR_TO_RB_ROOT,
KF_ARG_PTR_TO_RB_NODE,
+ KF_ARG_PTR_TO_NULL,
};
enum special_kfunc_type {
@@ -10266,6 +10808,10 @@ enum special_kfunc_type {
KF_bpf_dynptr_slice,
KF_bpf_dynptr_slice_rdwr,
KF_bpf_dynptr_clone,
+ KF_bpf_percpu_obj_new_impl,
+ KF_bpf_percpu_obj_drop_impl,
+ KF_bpf_throw,
+ KF_bpf_iter_css_task_new,
};
BTF_SET_START(special_kfunc_set)
@@ -10286,6 +10832,10 @@ BTF_ID(func, bpf_dynptr_from_xdp)
BTF_ID(func, bpf_dynptr_slice)
BTF_ID(func, bpf_dynptr_slice_rdwr)
BTF_ID(func, bpf_dynptr_clone)
+BTF_ID(func, bpf_percpu_obj_new_impl)
+BTF_ID(func, bpf_percpu_obj_drop_impl)
+BTF_ID(func, bpf_throw)
+BTF_ID(func, bpf_iter_css_task_new)
BTF_SET_END(special_kfunc_set)
BTF_ID_LIST(special_kfunc_list)
@@ -10308,6 +10858,10 @@ BTF_ID(func, bpf_dynptr_from_xdp)
BTF_ID(func, bpf_dynptr_slice)
BTF_ID(func, bpf_dynptr_slice_rdwr)
BTF_ID(func, bpf_dynptr_clone)
+BTF_ID(func, bpf_percpu_obj_new_impl)
+BTF_ID(func, bpf_percpu_obj_drop_impl)
+BTF_ID(func, bpf_throw)
+BTF_ID(func, bpf_iter_css_task_new)
static bool is_kfunc_ret_null(struct bpf_kfunc_call_arg_meta *meta)
{
@@ -10388,6 +10942,8 @@ get_kfunc_ptr_arg_type(struct bpf_verifier_env *env,
if (is_kfunc_arg_callback(env, meta->btf, &args[argno]))
return KF_ARG_PTR_TO_CALLBACK;
+ if (is_kfunc_arg_nullable(meta->btf, &args[argno]) && register_is_null(reg))
+ return KF_ARG_PTR_TO_NULL;
if (argno + 1 < nargs &&
(is_kfunc_arg_mem_size(meta->btf, &args[argno + 1], &regs[regno + 1]) ||
@@ -10625,6 +11181,12 @@ static bool is_callback_calling_kfunc(u32 btf_id)
return btf_id == special_kfunc_list[KF_bpf_rbtree_add_impl];
}
+static bool is_bpf_throw_kfunc(struct bpf_insn *insn)
+{
+ return bpf_pseudo_kfunc_call(insn) && insn->off == 0 &&
+ insn->imm == special_kfunc_list[KF_bpf_throw];
+}
+
static bool is_rbtree_lock_required_kfunc(u32 btf_id)
{
return is_bpf_rbtree_api_kfunc(btf_id);
@@ -10832,6 +11394,20 @@ static int process_kf_arg_ptr_to_rbtree_node(struct bpf_verifier_env *env,
&meta->arg_rbtree_root.field);
}
+static bool check_css_task_iter_allowlist(struct bpf_verifier_env *env)
+{
+ enum bpf_prog_type prog_type = resolve_prog_type(env->prog);
+
+ switch (prog_type) {
+ case BPF_PROG_TYPE_LSM:
+ return true;
+ case BPF_PROG_TYPE_TRACING:
+ return env->prog->expected_attach_type == BPF_TRACE_ITER &&
+ env->prog->aux->sleepable;
+ default:
+ return false;
+ }
+}
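
A sketch of the kind of program this allowlist admits, following the shape of the selftests; the bpf_iter_css_task_* and bpf_cgroup_* declarations below mirror the kernel kfunc signatures and are assumptions, not part of this diff:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

extern struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
extern void bpf_cgroup_release(struct cgroup *cgrp) __ksym;
extern int bpf_iter_css_task_new(struct bpf_iter_css_task *it,
				 struct cgroup_subsys_state *css,
				 unsigned int flags) __ksym;
extern struct task_struct *bpf_iter_css_task_next(struct bpf_iter_css_task *it) __ksym;
extern void bpf_iter_css_task_destroy(struct bpf_iter_css_task *it) __ksym;

int nr_tasks;

SEC("lsm/file_open")
int BPF_PROG(count_cgroup_tasks, struct file *file)
{
	struct cgroup_subsys_state *css;
	struct bpf_iter_css_task it;
	struct task_struct *task;
	struct cgroup *cgrp;

	cgrp = bpf_cgroup_from_id(bpf_get_current_cgroup_id());
	if (!cgrp)
		return 0;

	css = &cgrp->self;
	bpf_iter_css_task_new(&it, css, CSS_TASK_ITER_PROCS);
	while ((task = bpf_iter_css_task_next(&it)))
		nr_tasks++;
	bpf_iter_css_task_destroy(&it);

	bpf_cgroup_release(cgrp);
	return 0;
}
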
+
static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_arg_meta *meta,
int insn_idx)
{
@@ -10918,7 +11494,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
if ((is_kfunc_trusted_args(meta) || is_kfunc_rcu(meta)) &&
- (register_is_null(reg) || type_may_be_null(reg->type))) {
+ (register_is_null(reg) || type_may_be_null(reg->type)) &&
+ !is_kfunc_arg_nullable(meta->btf, &args[i])) {
verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
return -EACCES;
}
@@ -10943,6 +11520,8 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
return kf_arg_type;
switch (kf_arg_type) {
+ case KF_ARG_PTR_TO_NULL:
+ continue;
case KF_ARG_PTR_TO_ALLOC_BTF_ID:
case KF_ARG_PTR_TO_BTF_ID:
if (!is_kfunc_trusted_args(meta) && !is_kfunc_rcu(meta))
@@ -11002,7 +11581,17 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
}
break;
case KF_ARG_PTR_TO_ALLOC_BTF_ID:
- if (reg->type != (PTR_TO_BTF_ID | MEM_ALLOC)) {
+ if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC)) {
+ if (meta->func_id != special_kfunc_list[KF_bpf_obj_drop_impl]) {
+ verbose(env, "arg#%d expected for bpf_obj_drop_impl()\n", i);
+ return -EINVAL;
+ }
+ } else if (reg->type == (PTR_TO_BTF_ID | MEM_ALLOC | MEM_PERCPU)) {
+ if (meta->func_id != special_kfunc_list[KF_bpf_percpu_obj_drop_impl]) {
+ verbose(env, "arg#%d expected for bpf_percpu_obj_drop_impl()\n", i);
+ return -EINVAL;
+ }
+ } else {
verbose(env, "arg#%d expected pointer to allocated object\n", i);
return -EINVAL;
}
@@ -11010,8 +11599,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
verbose(env, "allocated object must be referenced\n");
return -EINVAL;
}
- if (meta->btf == btf_vmlinux &&
- meta->func_id == special_kfunc_list[KF_bpf_obj_drop_impl]) {
+ if (meta->btf == btf_vmlinux) {
meta->arg_btf = reg->btf;
meta->arg_btf_id = reg->btf_id;
}
@@ -11073,6 +11661,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
break;
}
case KF_ARG_PTR_TO_ITER:
+ if (meta->func_id == special_kfunc_list[KF_bpf_iter_css_task_new]) {
+ if (!check_css_task_iter_allowlist(env)) {
+ verbose(env, "css_task_iter is only allowed in bpf_lsm and bpf iter-s\n");
+ return -EINVAL;
+ }
+ }
ret = process_iter_arg(env, regno, insn_idx, meta);
if (ret < 0)
return ret;
@@ -11202,6 +11796,10 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
break;
}
case KF_ARG_PTR_TO_CALLBACK:
+ if (reg->type != PTR_TO_FUNC) {
+ verbose(env, "arg%d expected pointer to func\n", i);
+ return -EINVAL;
+ }
meta->subprogno = reg->subprogno;
break;
case KF_ARG_PTR_TO_REFCOUNTED_KPTR:
@@ -11280,6 +11878,8 @@ static int fetch_kfunc_meta(struct bpf_verifier_env *env,
return 0;
}
+static int check_return_code(struct bpf_verifier_env *env, int regno);
+
static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
int *insn_idx_p)
{
@@ -11326,6 +11926,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
if (env->cur_state->active_rcu_lock) {
struct bpf_func_state *state;
struct bpf_reg_state *reg;
+ u32 clear_mask = (1 << STACK_SPILL) | (1 << STACK_ITER);
if (in_rbtree_lock_required_cb(env) && (rcu_lock || rcu_unlock)) {
verbose(env, "Calling bpf_rcu_read_{lock,unlock} in unnecessary rbtree callback\n");
@@ -11336,7 +11937,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
verbose(env, "nested rcu read lock (kernel function %s)\n", func_name);
return -EINVAL;
} else if (rcu_unlock) {
- bpf_for_each_reg_in_vstate(env->cur_state, state, reg, ({
+ bpf_for_each_reg_in_vstate_mask(env->cur_state, state, reg, clear_mask, ({
if (reg->type & MEM_RCU) {
reg->type &= ~(MEM_RCU | PTR_MAYBE_NULL);
reg->type |= PTR_UNTRUSTED;
@@ -11401,6 +12002,24 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
}
}
+ if (meta.func_id == special_kfunc_list[KF_bpf_throw]) {
+ if (!bpf_jit_supports_exceptions()) {
+ verbose(env, "JIT does not support calling kfunc %s#%d\n",
+ func_name, meta.func_id);
+ return -ENOTSUPP;
+ }
+ env->seen_exception = true;
+
+ /* In the case of the default callback, the cookie value passed
+ * to bpf_throw becomes the return value of the program.
+ */
+ if (!env->exception_callback_subprog) {
+ err = check_return_code(env, BPF_REG_1);
+ if (err < 0)
+ return err;
+ }
+ }
+
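
On the program side this boils down to calling the new kfunc; with no explicit exception callback the cookie becomes the program's return value, which is why check_return_code() is run on R1 here. A minimal sketch, with bpf_throw() declared the way the selftests do it:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

extern void bpf_throw(u64 cookie) __ksym;

char _license[] SEC("license") = "GPL";

SEC("tc")
int drop_empty_packets(struct __sk_buff *ctx)
{
	/* Unwinds all active frames and ends the program; with the default
	 * (hidden) exception callback the cookie is what the program
	 * returns, so 2 here acts like TC_ACT_SHOT.
	 */
	if (ctx->len == 0)
		bpf_throw(2);
	return 0; /* TC_ACT_OK */
}
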
for (i = 0; i < CALLER_SAVED_REGS; i++)
mark_reg_not_init(env, regs, caller_saved[i]);
@@ -11411,6 +12030,7 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
/* Only exception is bpf_obj_new_impl */
if (meta.btf != btf_vmlinux ||
(meta.func_id != special_kfunc_list[KF_bpf_obj_new_impl] &&
+ meta.func_id != special_kfunc_list[KF_bpf_percpu_obj_new_impl] &&
meta.func_id != special_kfunc_list[KF_bpf_refcount_acquire_impl])) {
verbose(env, "acquire kernel function does not return PTR_TO_BTF_ID\n");
return -EINVAL;
@@ -11424,11 +12044,16 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
ptr_type = btf_type_skip_modifiers(desc_btf, t->type, &ptr_type_id);
if (meta.btf == btf_vmlinux && btf_id_set_contains(&special_kfunc_set, meta.func_id)) {
- if (meta.func_id == special_kfunc_list[KF_bpf_obj_new_impl]) {
+ if (meta.func_id == special_kfunc_list[KF_bpf_obj_new_impl] ||
+ meta.func_id == special_kfunc_list[KF_bpf_percpu_obj_new_impl]) {
+ struct btf_struct_meta *struct_meta;
struct btf *ret_btf;
u32 ret_btf_id;
- if (unlikely(!bpf_global_ma_set))
+ if (meta.func_id == special_kfunc_list[KF_bpf_obj_new_impl] && !bpf_global_ma_set)
+ return -ENOMEM;
+
+ if (meta.func_id == special_kfunc_list[KF_bpf_percpu_obj_new_impl] && !bpf_global_percpu_ma_set)
return -ENOMEM;
if (((u64)(u32)meta.arg_constant.value) != meta.arg_constant.value) {
@@ -11441,24 +12066,38 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
/* This may be NULL due to user not supplying a BTF */
if (!ret_btf) {
- verbose(env, "bpf_obj_new requires prog BTF\n");
+ verbose(env, "bpf_obj_new/bpf_percpu_obj_new requires prog BTF\n");
return -EINVAL;
}
ret_t = btf_type_by_id(ret_btf, ret_btf_id);
if (!ret_t || !__btf_type_is_struct(ret_t)) {
- verbose(env, "bpf_obj_new type ID argument must be of a struct\n");
+ verbose(env, "bpf_obj_new/bpf_percpu_obj_new type ID argument must be of a struct\n");
return -EINVAL;
}
+ struct_meta = btf_find_struct_meta(ret_btf, ret_btf_id);
+ if (meta.func_id == special_kfunc_list[KF_bpf_percpu_obj_new_impl]) {
+ if (!__btf_type_is_scalar_struct(env, ret_btf, ret_t, 0)) {
+ verbose(env, "bpf_percpu_obj_new type ID argument must be of a struct of scalars\n");
+ return -EINVAL;
+ }
+
+ if (struct_meta) {
+ verbose(env, "bpf_percpu_obj_new type ID argument must not contain special fields\n");
+ return -EINVAL;
+ }
+ }
+
mark_reg_known_zero(env, regs, BPF_REG_0);
regs[BPF_REG_0].type = PTR_TO_BTF_ID | MEM_ALLOC;
regs[BPF_REG_0].btf = ret_btf;
regs[BPF_REG_0].btf_id = ret_btf_id;
+ if (meta.func_id == special_kfunc_list[KF_bpf_percpu_obj_new_impl])
+ regs[BPF_REG_0].type |= MEM_PERCPU;
insn_aux->obj_new_size = ret_t->size;
- insn_aux->kptr_struct_meta =
- btf_find_struct_meta(ret_btf, ret_btf_id);
+ insn_aux->kptr_struct_meta = struct_meta;
} else if (meta.func_id == special_kfunc_list[KF_bpf_refcount_acquire_impl]) {
mark_reg_known_zero(env, regs, BPF_REG_0);
regs[BPF_REG_0].type = PTR_TO_BTF_ID | MEM_ALLOC;
@@ -11595,7 +12234,8 @@ static int check_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
regs[BPF_REG_0].id = ++env->id_gen;
} else if (btf_type_is_void(t)) {
if (meta.btf == btf_vmlinux && btf_id_set_contains(&special_kfunc_set, meta.func_id)) {
- if (meta.func_id == special_kfunc_list[KF_bpf_obj_drop_impl]) {
+ if (meta.func_id == special_kfunc_list[KF_bpf_obj_drop_impl] ||
+ meta.func_id == special_kfunc_list[KF_bpf_percpu_obj_drop_impl]) {
insn_aux->kptr_struct_meta =
btf_find_struct_meta(meta.arg_btf,
meta.arg_btf_id);
@@ -13391,12 +14031,16 @@ static int is_branch32_taken(struct bpf_reg_state *reg, u32 val, u8 opcode)
return !!tnum_equals_const(subreg, val);
else if (val < reg->u32_min_value || val > reg->u32_max_value)
return 0;
+ else if (sval < reg->s32_min_value || sval > reg->s32_max_value)
+ return 0;
break;
case BPF_JNE:
if (tnum_is_const(subreg))
return !tnum_equals_const(subreg, val);
else if (val < reg->u32_min_value || val > reg->u32_max_value)
return 1;
+ else if (sval < reg->s32_min_value || sval > reg->s32_max_value)
+ return 1;
break;
case BPF_JSET:
if ((~subreg.mask & subreg.value) & val)
@@ -13468,12 +14112,16 @@ static int is_branch64_taken(struct bpf_reg_state *reg, u64 val, u8 opcode)
return !!tnum_equals_const(reg->var_off, val);
else if (val < reg->umin_value || val > reg->umax_value)
return 0;
+ else if (sval < reg->smin_value || sval > reg->smax_value)
+ return 0;
break;
case BPF_JNE:
if (tnum_is_const(reg->var_off))
return !tnum_equals_const(reg->var_off, val);
else if (val < reg->umin_value || val > reg->umax_value)
return 1;
+ else if (sval < reg->smin_value || sval > reg->smax_value)
+ return 1;
break;
case BPF_JSET:
if ((~reg->var_off.mask & reg->var_off.value) & val)
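
The practical effect of the added sval checks: a register whose signed range is tight but whose unsigned range wraps around can now settle JEQ/JNE branches. A toy model of the combined test (illustration only, not the kernel code):

#include <stdbool.h>
#include <stdint.h>

struct toy_bounds { int64_t smin, smax; uint64_t umin, umax; };

/* Returns true when "reg == val" can be ruled out statically. */
static bool jeq_never_taken(const struct toy_bounds *r, uint64_t val)
{
	int64_t sval = (int64_t)val;

	if (val < r->umin || val > r->umax)
		return true;	/* pre-existing unsigned-range check */
	if (sval < r->smin || sval > r->smax)
		return true;	/* the signed-range check added above */
	return false;
}

/* A register known to lie in [-1, 1] signed has the wrapped unsigned range
 * [0, U64_MAX], so only the signed bounds can rule out "reg == 5":
 *
 *	struct toy_bounds r = { .smin = -1, .smax = 1, .umin = 0, .umax = ~0ULL };
 *	jeq_never_taken(&r, 5);		// true: the JEQ branch is never taken
 */
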
@@ -14135,6 +14783,8 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
!sanitize_speculative_path(env, insn, *insn_idx + 1,
*insn_idx))
return -EFAULT;
+ if (env->log.level & BPF_LOG_LEVEL)
+ print_insn_state(env, this_branch->frame[this_branch->curframe]);
*insn_idx += insn->off;
return 0;
} else if (pred == 0) {
@@ -14147,6 +14797,8 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env,
*insn_idx + insn->off + 1,
*insn_idx))
return -EFAULT;
+ if (env->log.level & BPF_LOG_LEVEL)
+ print_insn_state(env, this_branch->frame[this_branch->curframe]);
return 0;
}
@@ -14425,7 +15077,7 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
* gen_ld_abs() may terminate the program at runtime, leading to
* reference leak.
*/
- err = check_reference_leak(env);
+ err = check_reference_leak(env, false);
if (err) {
verbose(env, "BPF_LD_[ABS|IND] cannot be mixed with socket references\n");
return err;
@@ -14474,7 +15126,7 @@ static int check_ld_abs(struct bpf_verifier_env *env, struct bpf_insn *insn)
return 0;
}
-static int check_return_code(struct bpf_verifier_env *env)
+static int check_return_code(struct bpf_verifier_env *env, int regno)
{
struct tnum enforce_attach_type_range = tnum_unknown;
const struct bpf_prog *prog = env->prog;
@@ -14486,7 +15138,7 @@ static int check_return_code(struct bpf_verifier_env *env)
const bool is_subprog = frame->subprogno;
/* LSM and struct_ops func-ptr's return type could be "void" */
- if (!is_subprog) {
+ if (!is_subprog || frame->in_exception_callback_fn) {
switch (prog_type) {
case BPF_PROG_TYPE_LSM:
if (prog->expected_attach_type == BPF_LSM_CGROUP)
@@ -14508,22 +15160,22 @@ static int check_return_code(struct bpf_verifier_env *env)
* of bpf_exit, which means that program wrote
* something into it earlier
*/
- err = check_reg_arg(env, BPF_REG_0, SRC_OP);
+ err = check_reg_arg(env, regno, SRC_OP);
if (err)
return err;
- if (is_pointer_value(env, BPF_REG_0)) {
- verbose(env, "R0 leaks addr as return value\n");
+ if (is_pointer_value(env, regno)) {
+ verbose(env, "R%d leaks addr as return value\n", regno);
return -EACCES;
}
- reg = cur_regs(env) + BPF_REG_0;
+ reg = cur_regs(env) + regno;
if (frame->in_async_callback_fn) {
/* enforce return zero from async callbacks like timer */
if (reg->type != SCALAR_VALUE) {
- verbose(env, "In async callback the register R0 is not a known value (%s)\n",
- reg_type_str(env, reg->type));
+ verbose(env, "In async callback the register R%d is not a known value (%s)\n",
+ regno, reg_type_str(env, reg->type));
return -EINVAL;
}
@@ -14534,10 +15186,10 @@ static int check_return_code(struct bpf_verifier_env *env)
return 0;
}
- if (is_subprog) {
+ if (is_subprog && !frame->in_exception_callback_fn) {
if (reg->type != SCALAR_VALUE) {
- verbose(env, "At subprogram exit the register R0 is not a scalar value (%s)\n",
- reg_type_str(env, reg->type));
+ verbose(env, "At subprogram exit the register R%d is not a scalar value (%s)\n",
+ regno, reg_type_str(env, reg->type));
return -EINVAL;
}
return 0;
@@ -14547,10 +15199,13 @@ static int check_return_code(struct bpf_verifier_env *env)
case BPF_PROG_TYPE_CGROUP_SOCK_ADDR:
if (env->prog->expected_attach_type == BPF_CGROUP_UDP4_RECVMSG ||
env->prog->expected_attach_type == BPF_CGROUP_UDP6_RECVMSG ||
+ env->prog->expected_attach_type == BPF_CGROUP_UNIX_RECVMSG ||
env->prog->expected_attach_type == BPF_CGROUP_INET4_GETPEERNAME ||
env->prog->expected_attach_type == BPF_CGROUP_INET6_GETPEERNAME ||
+ env->prog->expected_attach_type == BPF_CGROUP_UNIX_GETPEERNAME ||
env->prog->expected_attach_type == BPF_CGROUP_INET4_GETSOCKNAME ||
- env->prog->expected_attach_type == BPF_CGROUP_INET6_GETSOCKNAME)
+ env->prog->expected_attach_type == BPF_CGROUP_INET6_GETSOCKNAME ||
+ env->prog->expected_attach_type == BPF_CGROUP_UNIX_GETSOCKNAME)
range = tnum_range(1, 1);
if (env->prog->expected_attach_type == BPF_CGROUP_INET4_BIND ||
env->prog->expected_attach_type == BPF_CGROUP_INET6_BIND)
@@ -14619,8 +15274,8 @@ static int check_return_code(struct bpf_verifier_env *env)
}
if (reg->type != SCALAR_VALUE) {
- verbose(env, "At program exit the register R0 is not a known value (%s)\n",
- reg_type_str(env, reg->type));
+ verbose(env, "At program exit the register R%d is not a known value (%s)\n",
+ regno, reg_type_str(env, reg->type));
return -EINVAL;
}
@@ -14679,21 +15334,6 @@ enum {
BRANCH = 2,
};
-static u32 state_htab_size(struct bpf_verifier_env *env)
-{
- return env->prog->len;
-}
-
-static struct bpf_verifier_state_list **explored_state(
- struct bpf_verifier_env *env,
- int idx)
-{
- struct bpf_verifier_state *cur = env->cur_state;
- struct bpf_func_state *state = cur->frame[cur->curframe];
-
- return &env->explored_states[(idx ^ state->callsite) % state_htab_size(env)];
-}
-
static void mark_prune_point(struct bpf_verifier_env *env, int idx)
{
env->insn_aux_data[idx].prune_point = true;
@@ -14891,8 +15531,8 @@ static int check_cfg(struct bpf_verifier_env *env)
{
int insn_cnt = env->prog->len;
int *insn_stack, *insn_state;
- int ret = 0;
- int i;
+ int ex_insn_beg, i, ret = 0;
+ bool ex_done = false;
insn_state = env->cfg.insn_state = kvcalloc(insn_cnt, sizeof(int), GFP_KERNEL);
if (!insn_state)
@@ -14908,6 +15548,7 @@ static int check_cfg(struct bpf_verifier_env *env)
insn_stack[0] = 0; /* 0 is the first instruction */
env->cfg.cur_stack = 1;
+walk_cfg:
while (env->cfg.cur_stack > 0) {
int t = insn_stack[env->cfg.cur_stack - 1];
@@ -14934,6 +15575,16 @@ static int check_cfg(struct bpf_verifier_env *env)
goto err_free;
}
+ if (env->exception_callback_subprog && !ex_done) {
+ ex_insn_beg = env->subprog_info[env->exception_callback_subprog].start;
+
+ insn_state[ex_insn_beg] = DISCOVERED;
+ insn_stack[0] = ex_insn_beg;
+ env->cfg.cur_stack = 1;
+ ex_done = true;
+ goto walk_cfg;
+ }
+
for (i = 0; i < insn_cnt; i++) {
if (insn_state[i] != EXPLORED) {
verbose(env, "unreachable insn %d\n", i);
@@ -14971,20 +15622,18 @@ static int check_abnormal_return(struct bpf_verifier_env *env)
#define MIN_BPF_FUNCINFO_SIZE 8
#define MAX_FUNCINFO_REC_SIZE 252
-static int check_btf_func(struct bpf_verifier_env *env,
- const union bpf_attr *attr,
- bpfptr_t uattr)
+static int check_btf_func_early(struct bpf_verifier_env *env,
+ const union bpf_attr *attr,
+ bpfptr_t uattr)
{
- const struct btf_type *type, *func_proto, *ret_type;
- u32 i, nfuncs, urec_size, min_size;
u32 krec_size = sizeof(struct bpf_func_info);
+ const struct btf_type *type, *func_proto;
+ u32 i, nfuncs, urec_size, min_size;
struct bpf_func_info *krecord;
- struct bpf_func_info_aux *info_aux = NULL;
struct bpf_prog *prog;
const struct btf *btf;
- bpfptr_t urecord;
u32 prev_offset = 0;
- bool scalar_return;
+ bpfptr_t urecord;
int ret = -ENOMEM;
nfuncs = attr->func_info_cnt;
@@ -14994,11 +15643,6 @@ static int check_btf_func(struct bpf_verifier_env *env,
return 0;
}
- if (nfuncs != env->subprog_cnt) {
- verbose(env, "number of funcs in func_info doesn't match number of subprogs\n");
- return -EINVAL;
- }
-
urec_size = attr->func_info_rec_size;
if (urec_size < MIN_BPF_FUNCINFO_SIZE ||
urec_size > MAX_FUNCINFO_REC_SIZE ||
@@ -15016,9 +15660,6 @@ static int check_btf_func(struct bpf_verifier_env *env,
krecord = kvcalloc(nfuncs, krec_size, GFP_KERNEL | __GFP_NOWARN);
if (!krecord)
return -ENOMEM;
- info_aux = kcalloc(nfuncs, sizeof(*info_aux), GFP_KERNEL | __GFP_NOWARN);
- if (!info_aux)
- goto err_free;
for (i = 0; i < nfuncs; i++) {
ret = bpf_check_uarg_tail_zero(urecord, krec_size, urec_size);
@@ -15057,11 +15698,6 @@ static int check_btf_func(struct bpf_verifier_env *env,
goto err_free;
}
- if (env->subprog_info[i].start != krecord[i].insn_off) {
- verbose(env, "func_info BTF section doesn't match subprog layout in BPF program\n");
- goto err_free;
- }
-
/* check type_id */
type = btf_type_by_id(btf, krecord[i].type_id);
if (!type || !btf_type_is_func(type)) {
@@ -15069,12 +15705,77 @@ static int check_btf_func(struct bpf_verifier_env *env,
krecord[i].type_id);
goto err_free;
}
- info_aux[i].linkage = BTF_INFO_VLEN(type->info);
func_proto = btf_type_by_id(btf, type->type);
if (unlikely(!func_proto || !btf_type_is_func_proto(func_proto)))
/* btf_func_check() already verified it during BTF load */
goto err_free;
+
+ prev_offset = krecord[i].insn_off;
+ bpfptr_add(&urecord, urec_size);
+ }
+
+ prog->aux->func_info = krecord;
+ prog->aux->func_info_cnt = nfuncs;
+ return 0;
+
+err_free:
+ kvfree(krecord);
+ return ret;
+}
+
+static int check_btf_func(struct bpf_verifier_env *env,
+ const union bpf_attr *attr,
+ bpfptr_t uattr)
+{
+ const struct btf_type *type, *func_proto, *ret_type;
+ u32 i, nfuncs, urec_size;
+ struct bpf_func_info *krecord;
+ struct bpf_func_info_aux *info_aux = NULL;
+ struct bpf_prog *prog;
+ const struct btf *btf;
+ bpfptr_t urecord;
+ bool scalar_return;
+ int ret = -ENOMEM;
+
+ nfuncs = attr->func_info_cnt;
+ if (!nfuncs) {
+ if (check_abnormal_return(env))
+ return -EINVAL;
+ return 0;
+ }
+ if (nfuncs != env->subprog_cnt) {
+ verbose(env, "number of funcs in func_info doesn't match number of subprogs\n");
+ return -EINVAL;
+ }
+
+ urec_size = attr->func_info_rec_size;
+
+ prog = env->prog;
+ btf = prog->aux->btf;
+
+ urecord = make_bpfptr(attr->func_info, uattr.is_kernel);
+
+ krecord = prog->aux->func_info;
+ info_aux = kcalloc(nfuncs, sizeof(*info_aux), GFP_KERNEL | __GFP_NOWARN);
+ if (!info_aux)
+ return -ENOMEM;
+
+ for (i = 0; i < nfuncs; i++) {
+ /* check insn_off */
+ ret = -EINVAL;
+
+ if (env->subprog_info[i].start != krecord[i].insn_off) {
+ verbose(env, "func_info BTF section doesn't match subprog layout in BPF program\n");
+ goto err_free;
+ }
+
+ /* Already checked type_id */
+ type = btf_type_by_id(btf, krecord[i].type_id);
+ info_aux[i].linkage = BTF_INFO_VLEN(type->info);
+ /* Already checked func_proto */
+ func_proto = btf_type_by_id(btf, type->type);
+
ret_type = btf_type_skip_modifiers(btf, func_proto->type, NULL);
scalar_return =
btf_type_is_small_int(ret_type) || btf_is_any_enum(ret_type);
@@ -15087,17 +15788,13 @@ static int check_btf_func(struct bpf_verifier_env *env,
goto err_free;
}
- prev_offset = krecord[i].insn_off;
bpfptr_add(&urecord, urec_size);
}
- prog->aux->func_info = krecord;
- prog->aux->func_info_cnt = nfuncs;
prog->aux->func_info_aux = info_aux;
return 0;
err_free:
- kvfree(krecord);
kfree(info_aux);
return ret;
}
@@ -15110,7 +15807,8 @@ static void adjust_btf_func(struct bpf_verifier_env *env)
if (!aux->func_info)
return;
- for (i = 0; i < env->subprog_cnt; i++)
+ /* func_info is not available for hidden subprogs */
+ for (i = 0; i < env->subprog_cnt - env->hidden_subprog_cnt; i++)
aux->func_info[i].insn_off = env->subprog_info[i].start;
}
@@ -15314,9 +16012,9 @@ static int check_core_relo(struct bpf_verifier_env *env,
return err;
}
-static int check_btf_info(struct bpf_verifier_env *env,
- const union bpf_attr *attr,
- bpfptr_t uattr)
+static int check_btf_info_early(struct bpf_verifier_env *env,
+ const union bpf_attr *attr,
+ bpfptr_t uattr)
{
struct btf *btf;
int err;
@@ -15336,6 +16034,24 @@ static int check_btf_info(struct bpf_verifier_env *env,
}
env->prog->aux->btf = btf;
+ err = check_btf_func_early(env, attr, uattr);
+ if (err)
+ return err;
+ return 0;
+}
+
+static int check_btf_info(struct bpf_verifier_env *env,
+ const union bpf_attr *attr,
+ bpfptr_t uattr)
+{
+ int err;
+
+ if (!attr->func_info_cnt && !attr->line_info_cnt) {
+ if (check_abnormal_return(env))
+ return -EINVAL;
+ return 0;
+ }
+
err = check_btf_func(env, attr, uattr);
if (err)
return err;
@@ -15494,18 +16210,14 @@ static void clean_live_states(struct bpf_verifier_env *env, int insn,
struct bpf_verifier_state *cur)
{
struct bpf_verifier_state_list *sl;
- int i;
sl = *explored_state(env, insn);
while (sl) {
if (sl->state.branches)
goto next;
if (sl->state.insn_idx != insn ||
- sl->state.curframe != cur->curframe)
+ !same_callsites(&sl->state, cur))
goto next;
- for (i = 0; i <= cur->curframe; i++)
- if (sl->state.frame[i]->callsite != cur->frame[i]->callsite)
- goto next;
clean_verifier_state(env, &sl->state);
next:
sl = sl->next;
@@ -15523,8 +16235,11 @@ static bool regs_exact(const struct bpf_reg_state *rold,
/* Returns true if (rold safe implies rcur safe) */
static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
- struct bpf_reg_state *rcur, struct bpf_idmap *idmap)
+ struct bpf_reg_state *rcur, struct bpf_idmap *idmap, bool exact)
{
+ if (exact)
+ return regs_exact(rold, rcur, idmap);
+
if (!(rold->live & REG_LIVE_READ))
/* explored state didn't use this */
return true;
@@ -15641,7 +16356,7 @@ static bool regsafe(struct bpf_verifier_env *env, struct bpf_reg_state *rold,
}
static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
- struct bpf_func_state *cur, struct bpf_idmap *idmap)
+ struct bpf_func_state *cur, struct bpf_idmap *idmap, bool exact)
{
int i, spi;
@@ -15654,7 +16369,12 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
spi = i / BPF_REG_SIZE;
- if (!(old->stack[spi].spilled_ptr.live & REG_LIVE_READ)) {
+ if (exact &&
+ old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
+ cur->stack[spi].slot_type[i % BPF_REG_SIZE])
+ return false;
+
+ if (!(old->stack[spi].spilled_ptr.live & REG_LIVE_READ) && !exact) {
i += BPF_REG_SIZE - 1;
/* explored state didn't use this */
continue;
@@ -15704,7 +16424,7 @@ static bool stacksafe(struct bpf_verifier_env *env, struct bpf_func_state *old,
* return false to continue verification of this path
*/
if (!regsafe(env, &old->stack[spi].spilled_ptr,
- &cur->stack[spi].spilled_ptr, idmap))
+ &cur->stack[spi].spilled_ptr, idmap, exact))
return false;
break;
case STACK_DYNPTR:
@@ -15786,16 +16506,16 @@ static bool refsafe(struct bpf_func_state *old, struct bpf_func_state *cur,
* the current state will reach 'bpf_exit' instruction safely
*/
static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_state *old,
- struct bpf_func_state *cur)
+ struct bpf_func_state *cur, bool exact)
{
int i;
for (i = 0; i < MAX_BPF_REG; i++)
if (!regsafe(env, &old->regs[i], &cur->regs[i],
- &env->idmap_scratch))
+ &env->idmap_scratch, exact))
return false;
- if (!stacksafe(env, old, cur, &env->idmap_scratch))
+ if (!stacksafe(env, old, cur, &env->idmap_scratch, exact))
return false;
if (!refsafe(old, cur, &env->idmap_scratch))
@@ -15804,17 +16524,23 @@ static bool func_states_equal(struct bpf_verifier_env *env, struct bpf_func_stat
return true;
}
+static void reset_idmap_scratch(struct bpf_verifier_env *env)
+{
+ env->idmap_scratch.tmp_id_gen = env->id_gen;
+ memset(&env->idmap_scratch.map, 0, sizeof(env->idmap_scratch.map));
+}
+
static bool states_equal(struct bpf_verifier_env *env,
struct bpf_verifier_state *old,
- struct bpf_verifier_state *cur)
+ struct bpf_verifier_state *cur,
+ bool exact)
{
int i;
if (old->curframe != cur->curframe)
return false;
- env->idmap_scratch.tmp_id_gen = env->id_gen;
- memset(&env->idmap_scratch.map, 0, sizeof(env->idmap_scratch.map));
+ reset_idmap_scratch(env);
/* Verification state from speculative execution simulation
* must never prune a non-speculative execution one.
@@ -15844,7 +16570,7 @@ static bool states_equal(struct bpf_verifier_env *env,
for (i = 0; i <= old->curframe; i++) {
if (old->frame[i]->callsite != cur->frame[i]->callsite)
return false;
- if (!func_states_equal(env, old->frame[i], cur->frame[i]))
+ if (!func_states_equal(env, old->frame[i], cur->frame[i], exact))
return false;
}
return true;
@@ -16098,10 +16824,11 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
{
struct bpf_verifier_state_list *new_sl;
struct bpf_verifier_state_list *sl, **pprev;
- struct bpf_verifier_state *cur = env->cur_state, *new;
- int i, j, err, states_cnt = 0;
+ struct bpf_verifier_state *cur = env->cur_state, *new, *loop_entry;
+ int i, j, n, err, states_cnt = 0;
bool force_new_state = env->test_state_freq || is_force_checkpoint(env, insn_idx);
bool add_new_state = force_new_state;
+ bool force_exact;
/* bpf progs typically have pruning point every 4 instructions
* http://vger.kernel.org/bpfconf2019.html#session-1
@@ -16154,9 +16881,33 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
* It's safe to assume that iterator loop will finish, taking into
* account iter_next() contract of eventually returning
* sticky NULL result.
+ *
+ * Note that states have to be compared exactly in this case because
+ * read and precision marks might not be finalized inside the loop.
+ * E.g. as in the program below:
+ *
+ * 1. r7 = -16
+ * 2. r6 = bpf_get_prandom_u32()
+ * 3. while (bpf_iter_num_next(&fp[-8])) {
+ * 4. if (r6 != 42) {
+ * 5. r7 = -32
+ * 6. r6 = bpf_get_prandom_u32()
+ * 7. continue
+ * 8. }
+ * 9. r0 = r10
+ * 10. r0 += r7
+ * 11. r8 = *(u64 *)(r0 + 0)
+ * 12. r6 = bpf_get_prandom_u32()
+ * 13. }
+ *
+ * Here the verifier would first visit path 1-3, create a checkpoint at 3
+ * with r7=-16, and continue along 4-7,3. The existing checkpoint at 3 does
+ * not have read or precision marks for r7 yet, thus an inexact state
+ * comparison would discard the current state with r7=-32
+ * => the unsafe memory access at 11 would not be caught.
*/
if (is_iter_next_insn(env, insn_idx)) {
- if (states_equal(env, &sl->state, cur)) {
+ if (states_equal(env, &sl->state, cur, true)) {
struct bpf_func_state *cur_frame;
struct bpf_reg_state *iter_state, *iter_reg;
int spi;
@@ -16172,17 +16923,23 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx)
*/
spi = __get_spi(iter_reg->off + iter_reg->var_off.value);
iter_state = &func(env, iter_reg)->stack[spi].spilled_ptr;
- if (iter_state->iter.state == BPF_ITER_STATE_ACTIVE)
+ if (iter_state->iter.state == BPF_ITER_STATE_ACTIVE) {
+ update_loop_entry(cur, &sl->state);
goto hit;
+ }
}
goto skip_inf_loop_check;
}
/* attempt to detect infinite loop to avoid unnecessary doomed work */
if (states_maybe_looping(&sl->state, cur) &&
- states_equal(env, &sl->state, cur) &&
+ states_equal(env, &sl->state, cur, false) &&
!iter_active_depths_differ(&sl->state, cur)) {
verbose_linfo(env, insn_idx, "; ");
verbose(env, "infinite loop detected at insn %d\n", insn_idx);
+ verbose(env, "cur state:");
+ print_verifier_state(env, cur->frame[cur->curframe], true);
+ verbose(env, "old state:");
+ print_verifier_state(env, sl->state.frame[cur->curframe], true);
return -EINVAL;
}
/* if the verifier is processing a loop, avoid adding new state
@@ -16204,7 +16961,36 @@ skip_inf_loop_check:
add_new_state = false;
goto miss;
}
- if (states_equal(env, &sl->state, cur)) {
+ /* If sl->state is a part of a loop and this loop's entry is a part of
+ * current verification path then states have to be compared exactly.
+ * 'force_exact' is needed to catch the following case:
+ *
+ * initial Here state 'succ' was processed first,
+ * | it was eventually tracked to produce a
+ * V state identical to 'hdr'.
+ * .---------> hdr All branches from 'succ' had been explored
+ * | | and thus 'succ' has its .branches == 0.
+ * | V
+ * | .------... Suppose states 'cur' and 'succ' correspond
+ * | | | to the same instruction + callsites.
+ * | V V In such case it is necessary to check
+ * | ... ... if 'succ' and 'cur' are states_equal().
+ * | | | If 'succ' and 'cur' are a part of the
+ * | V V same loop exact flag has to be set.
+ * | succ <- cur To check if that is the case, verify
+ * | | if loop entry of 'succ' is in current
+ * | V DFS path.
+ * | ...
+ * | |
+ * '----'
+ *
+ * Additional details are in the comment before get_loop_entry().
+ */
+ loop_entry = get_loop_entry(&sl->state);
+ force_exact = loop_entry && loop_entry->branches > 0;
+ if (states_equal(env, &sl->state, cur, force_exact)) {
+ if (force_exact)
+ update_loop_entry(cur, loop_entry);
hit:
sl->hit_cnt++;
/* reached equivalent register/stack state,
@@ -16243,13 +17029,18 @@ miss:
* to keep checking from state equivalence point of view.
* Higher numbers increase max_states_per_insn and verification time,
* but do not meaningfully decrease insn_processed.
+ * 'n' controls how many times a state can miss before eviction.
+ * Use bigger 'n' for checkpoints because evicting checkpoint states
+ * too early would hinder iterator convergence.
*/
- if (sl->miss_cnt > sl->hit_cnt * 3 + 3) {
+ n = is_force_checkpoint(env, insn_idx) && sl->state.branches > 0 ? 64 : 3;
+ if (sl->miss_cnt > sl->hit_cnt * n + n) {
/* the state is unlikely to be useful. Remove it to
* speed up verification
*/
*pprev = sl->next;
- if (sl->state.frame[0]->regs[0].live & REG_LIVE_DONE) {
+ if (sl->state.frame[0]->regs[0].live & REG_LIVE_DONE &&
+ !sl->state.used_as_loop_entry) {
u32 br = sl->state.branches;
WARN_ONCE(br,
@@ -16318,6 +17109,7 @@ next:
cur->parent = new;
cur->first_insn_idx = insn_idx;
+ cur->dfs_depth = new->dfs_depth + 1;
clear_jmp_history(cur);
new_sl->next = *explored_state(env, insn_idx);
*explored_state(env, insn_idx) = new_sl;
@@ -16438,6 +17230,7 @@ static int do_check(struct bpf_verifier_env *env)
int prev_insn_idx = -1;
for (;;) {
+ bool exception_exit = false;
struct bpf_insn *insn;
u8 class;
int err;
@@ -16652,12 +17445,17 @@ static int do_check(struct bpf_verifier_env *env)
return -EINVAL;
}
}
- if (insn->src_reg == BPF_PSEUDO_CALL)
+ if (insn->src_reg == BPF_PSEUDO_CALL) {
err = check_func_call(env, insn, &env->insn_idx);
- else if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL)
+ } else if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
err = check_kfunc_call(env, insn, &env->insn_idx);
- else
+ if (!err && is_bpf_throw_kfunc(insn)) {
+ exception_exit = true;
+ goto process_bpf_exit_full;
+ }
+ } else {
err = check_helper_call(env, insn, &env->insn_idx);
+ }
if (err)
return err;
@@ -16687,7 +17485,7 @@ static int do_check(struct bpf_verifier_env *env)
verbose(env, "BPF_EXIT uses reserved fields\n");
return -EINVAL;
}
-
+process_bpf_exit_full:
if (env->cur_state->active_lock.ptr &&
!in_rbtree_lock_required_cb(env)) {
verbose(env, "bpf_spin_unlock is missing\n");
@@ -16706,10 +17504,23 @@ static int do_check(struct bpf_verifier_env *env)
* function, for which reference_state must
* match caller reference state when it exits.
*/
- err = check_reference_leak(env);
+ err = check_reference_leak(env, exception_exit);
if (err)
return err;
+ /* The side effect of the prepare_func_exit
+ * which is being skipped is that it frees
+ * bpf_func_state. Typically, process_bpf_exit
+ * will only be hit with outermost exit.
+ * copy_verifier_state in pop_stack will handle
+ * freeing of any extra bpf_func_state left over
+ * from not processing all nested function
+ * exits. We also skip return code checks as
+ * they are not needed for exceptional exits.
+ */
+ if (exception_exit)
+ goto process_bpf_exit;
+
if (state->curframe) {
/* exit from nested function */
err = prepare_func_exit(env, &env->insn_idx);
@@ -16719,7 +17530,7 @@ static int do_check(struct bpf_verifier_env *env)
continue;
}
- err = check_return_code(env);
+ err = check_return_code(env, BPF_REG_0);
if (err)
return err;
process_bpf_exit:
@@ -18012,6 +18823,9 @@ static int jit_subprogs(struct bpf_verifier_env *env)
}
func[i]->aux->num_exentries = num_exentries;
func[i]->aux->tail_call_reachable = env->subprog_info[i].tail_call_reachable;
+ func[i]->aux->exception_cb = env->subprog_info[i].is_exception_cb;
+ if (!i)
+ func[i]->aux->exception_boundary = env->seen_exception;
func[i] = bpf_int_jit_compile(func[i]);
if (!func[i]->jited) {
err = -ENOTSUPP;
@@ -18051,7 +18865,8 @@ static int jit_subprogs(struct bpf_verifier_env *env)
* the call instruction, as an index for this list
*/
func[i]->aux->func = func;
- func[i]->aux->func_cnt = env->subprog_cnt;
+ func[i]->aux->func_cnt = env->subprog_cnt - env->hidden_subprog_cnt;
+ func[i]->aux->real_func_cnt = env->subprog_cnt;
}
for (i = 0; i < env->subprog_cnt; i++) {
old_bpf_func = func[i]->bpf_func;
@@ -18097,7 +18912,10 @@ static int jit_subprogs(struct bpf_verifier_env *env)
prog->aux->extable = func[0]->aux->extable;
prog->aux->num_exentries = func[0]->aux->num_exentries;
prog->aux->func = func;
- prog->aux->func_cnt = env->subprog_cnt;
+ prog->aux->func_cnt = env->subprog_cnt - env->hidden_subprog_cnt;
+ prog->aux->real_func_cnt = env->subprog_cnt;
+ prog->aux->bpf_exception_cb = (void *)func[env->exception_callback_subprog]->bpf_func;
+ prog->aux->exception_boundary = func[0]->aux->exception_boundary;
bpf_prog_jit_attempt_done(prog);
return 0;
out_free:
@@ -18264,21 +19082,35 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
insn->imm = BPF_CALL_IMM(desc->addr);
if (insn->off)
return 0;
- if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl]) {
+ if (desc->func_id == special_kfunc_list[KF_bpf_obj_new_impl] ||
+ desc->func_id == special_kfunc_list[KF_bpf_percpu_obj_new_impl]) {
struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta;
struct bpf_insn addr[2] = { BPF_LD_IMM64(BPF_REG_2, (long)kptr_struct_meta) };
u64 obj_new_size = env->insn_aux_data[insn_idx].obj_new_size;
+ if (desc->func_id == special_kfunc_list[KF_bpf_percpu_obj_new_impl] && kptr_struct_meta) {
+ verbose(env, "verifier internal error: NULL kptr_struct_meta expected at insn_idx %d\n",
+ insn_idx);
+ return -EFAULT;
+ }
+
insn_buf[0] = BPF_MOV64_IMM(BPF_REG_1, obj_new_size);
insn_buf[1] = addr[0];
insn_buf[2] = addr[1];
insn_buf[3] = *insn;
*cnt = 4;
} else if (desc->func_id == special_kfunc_list[KF_bpf_obj_drop_impl] ||
+ desc->func_id == special_kfunc_list[KF_bpf_percpu_obj_drop_impl] ||
desc->func_id == special_kfunc_list[KF_bpf_refcount_acquire_impl]) {
struct btf_struct_meta *kptr_struct_meta = env->insn_aux_data[insn_idx].kptr_struct_meta;
struct bpf_insn addr[2] = { BPF_LD_IMM64(BPF_REG_2, (long)kptr_struct_meta) };
+ if (desc->func_id == special_kfunc_list[KF_bpf_percpu_obj_drop_impl] && kptr_struct_meta) {
+ verbose(env, "verifier internal error: NULL kptr_struct_meta expected at insn_idx %d\n",
+ insn_idx);
+ return -EFAULT;
+ }
+
if (desc->func_id == special_kfunc_list[KF_bpf_refcount_acquire_impl] &&
!kptr_struct_meta) {
verbose(env, "verifier internal error: kptr_struct_meta expected at insn_idx %d\n",
@@ -18319,6 +19151,33 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
return 0;
}
+/* The function requires that first instruction in 'patch' is insnsi[prog->len - 1] */
+static int add_hidden_subprog(struct bpf_verifier_env *env, struct bpf_insn *patch, int len)
+{
+ struct bpf_subprog_info *info = env->subprog_info;
+ int cnt = env->subprog_cnt;
+ struct bpf_prog *prog;
+
+ /* We only reserve one slot for hidden subprogs in subprog_info. */
+ if (env->hidden_subprog_cnt) {
+ verbose(env, "verifier internal error: only one hidden subprog supported\n");
+ return -EFAULT;
+ }
+ /* We're not patching any existing instruction, just appending the new
+ * ones for the hidden subprog. Hence all of the adjustment operations
+ * in bpf_patch_insn_data are no-ops.
+ */
+ prog = bpf_patch_insn_data(env, env->prog->len - 1, patch, len);
+ if (!prog)
+ return -ENOMEM;
+ env->prog = prog;
+ info[cnt + 1].start = info[cnt].start;
+ info[cnt].start = prog->len - len + 1;
+ env->subprog_cnt++;
+ env->hidden_subprog_cnt++;
+ return 0;
+}
+
/* Do various post-verification rewrites in a single program pass.
* These rewrites simplify JIT and interpreter implementations.
*/
@@ -18337,6 +19196,26 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
struct bpf_map *map_ptr;
int i, ret, cnt, delta = 0;
+ if (env->seen_exception && !env->exception_callback_subprog) {
+ struct bpf_insn patch[] = {
+ env->prog->insnsi[insn_cnt - 1],
+ BPF_MOV64_REG(BPF_REG_0, BPF_REG_1),
+ BPF_EXIT_INSN(),
+ };
+
+ ret = add_hidden_subprog(env, patch, ARRAY_SIZE(patch));
+ if (ret < 0)
+ return ret;
+ prog = env->prog;
+ insn = prog->insnsi;
+
+ env->exception_callback_subprog = env->subprog_cnt - 1;
+ /* Don't update insn_cnt, as add_hidden_subprog always appends insns */
+ env->subprog_info[env->exception_callback_subprog].is_cb = true;
+ env->subprog_info[env->exception_callback_subprog].is_async_cb = true;
+ env->subprog_info[env->exception_callback_subprog].is_exception_cb = true;
+ }
+
for (i = 0; i < insn_cnt; i++, insn++) {
/* Make divide-by-zero exceptions impossible. */
if (insn->code == (BPF_ALU64 | BPF_MOD | BPF_X) ||
@@ -18606,6 +19485,25 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
goto patch_call_imm;
}
+ /* bpf_per_cpu_ptr() and bpf_this_cpu_ptr() */
+ if (env->insn_aux_data[i + delta].call_with_percpu_alloc_ptr) {
+ /* patch with 'r1 = *(u64 *)(r1 + 0)' since for percpu data,
+ * bpf_mem_alloc() returns a ptr to the percpu data ptr.
+ */
+ insn_buf[0] = BPF_LDX_MEM(BPF_DW, BPF_REG_1, BPF_REG_1, 0);
+ insn_buf[1] = *insn;
+ cnt = 2;
+
+ new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+ if (!new_prog)
+ return -ENOMEM;
+
+ delta += cnt - 1;
+ env->prog = prog = new_prog;
+ insn = new_prog->insnsi + i + delta;
+ goto patch_call_imm;
+ }
+
/* BPF_EMIT_CALL() assumptions in some of the map_gen_lookup
* and other inlining handlers are currently limited to 64 bit
* only.
@@ -19015,7 +19913,7 @@ static void free_states(struct bpf_verifier_env *env)
}
}
-static int do_check_common(struct bpf_verifier_env *env, int subprog)
+static int do_check_common(struct bpf_verifier_env *env, int subprog, bool is_ex_cb)
{
bool pop_log = !(env->log.level & BPF_LOG_LEVEL2);
struct bpf_verifier_state *state;
@@ -19046,7 +19944,7 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
regs = state->frame[state->curframe]->regs;
if (subprog || env->prog->type == BPF_PROG_TYPE_EXT) {
- ret = btf_prepare_func_args(env, subprog, regs);
+ ret = btf_prepare_func_args(env, subprog, regs, is_ex_cb);
if (ret)
goto out;
for (i = BPF_REG_1; i <= BPF_REG_5; i++) {
@@ -19062,6 +19960,12 @@ static int do_check_common(struct bpf_verifier_env *env, int subprog)
regs[i].id = ++env->id_gen;
}
}
+ if (is_ex_cb) {
+ state->frame[0]->in_exception_callback_fn = true;
+ env->subprog_info[subprog].is_cb = true;
+ env->subprog_info[subprog].is_async_cb = true;
+ env->subprog_info[subprog].is_exception_cb = true;
+ }
} else {
/* 1st arg to a function */
regs[BPF_REG_1].type = PTR_TO_CTX;
@@ -19126,7 +20030,7 @@ static int do_check_subprogs(struct bpf_verifier_env *env)
continue;
env->insn_idx = env->subprog_info[i].start;
WARN_ON_ONCE(env->insn_idx == 0);
- ret = do_check_common(env, i);
+ ret = do_check_common(env, i, env->exception_callback_subprog == i);
if (ret) {
return ret;
} else if (env->log.level & BPF_LOG_LEVEL) {
@@ -19143,7 +20047,7 @@ static int do_check_main(struct bpf_verifier_env *env)
int ret;
env->insn_idx = 0;
- ret = do_check_common(env, 0);
+ ret = do_check_common(env, 0, false);
if (!ret)
env->prog->aux->stack_depth = env->subprog_info[0].stack_depth;
return ret;
@@ -19312,6 +20216,12 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
bpf_log(log, "Subprog %s doesn't exist\n", tname);
return -EINVAL;
}
+ if (aux->func && aux->func[subprog]->aux->exception_cb) {
+ bpf_log(log,
+ "%s programs cannot attach to exception callback\n",
+ prog_extension ? "Extension" : "FENTRY/FEXIT");
+ return -EINVAL;
+ }
conservative = aux->func_info_aux[subprog].unreliable;
if (prog_extension) {
if (conservative) {
@@ -19641,6 +20551,9 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
if (!tr)
return -ENOMEM;
+ if (tgt_prog && tgt_prog->aux->tail_call_reachable)
+ tr->flags = BPF_TRAMP_F_TAIL_CALL_CTX;
+
prog->aux->dst_trampoline = tr;
return 0;
}
@@ -19736,6 +20649,10 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
if (!env->explored_states)
goto skip_full_check;
+ ret = check_btf_info_early(env, attr, uattr);
+ if (ret < 0)
+ goto skip_full_check;
+
ret = add_subprog_and_kfunc(env);
if (ret < 0)
goto skip_full_check;
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index c09531bace38..484adb375b15 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -4923,9 +4923,11 @@ repeat:
void css_task_iter_start(struct cgroup_subsys_state *css, unsigned int flags,
struct css_task_iter *it)
{
+ unsigned long irqflags;
+
memset(it, 0, sizeof(*it));
- spin_lock_irq(&css_set_lock);
+ spin_lock_irqsave(&css_set_lock, irqflags);
it->ss = css->ss;
it->flags = flags;
@@ -4939,7 +4941,7 @@ void css_task_iter_start(struct cgroup_subsys_state *css, unsigned int flags,
css_task_iter_advance(it);
- spin_unlock_irq(&css_set_lock);
+ spin_unlock_irqrestore(&css_set_lock, irqflags);
}
/**
@@ -4952,12 +4954,14 @@ void css_task_iter_start(struct cgroup_subsys_state *css, unsigned int flags,
*/
struct task_struct *css_task_iter_next(struct css_task_iter *it)
{
+ unsigned long irqflags;
+
if (it->cur_task) {
put_task_struct(it->cur_task);
it->cur_task = NULL;
}
- spin_lock_irq(&css_set_lock);
+ spin_lock_irqsave(&css_set_lock, irqflags);
/* @it may be half-advanced by skips, finish advancing */
if (it->flags & CSS_TASK_ITER_SKIPPED)
@@ -4970,7 +4974,7 @@ struct task_struct *css_task_iter_next(struct css_task_iter *it)
css_task_iter_advance(it);
}
- spin_unlock_irq(&css_set_lock);
+ spin_unlock_irqrestore(&css_set_lock, irqflags);
return it->cur_task;
}
@@ -4983,11 +4987,13 @@ struct task_struct *css_task_iter_next(struct css_task_iter *it)
*/
void css_task_iter_end(struct css_task_iter *it)
{
+ unsigned long irqflags;
+
if (it->cur_cset) {
- spin_lock_irq(&css_set_lock);
+ spin_lock_irqsave(&css_set_lock, irqflags);
list_del(&it->iters_node);
put_css_set_locked(it->cur_cset);
- spin_unlock_irq(&css_set_lock);
+ spin_unlock_irqrestore(&css_set_lock, irqflags);
}
if (it->cur_dcset)
diff --git a/kernel/time/posix-clock.c b/kernel/time/posix-clock.c
index 77c0c2370b6d..9de66bbbb3d1 100644
--- a/kernel/time/posix-clock.c
+++ b/kernel/time/posix-clock.c
@@ -19,7 +19,8 @@
*/
static struct posix_clock *get_posix_clock(struct file *fp)
{
- struct posix_clock *clk = fp->private_data;
+ struct posix_clock_context *pccontext = fp->private_data;
+ struct posix_clock *clk = pccontext->clk;
down_read(&clk->rwsem);
@@ -39,6 +40,7 @@ static void put_posix_clock(struct posix_clock *clk)
static ssize_t posix_clock_read(struct file *fp, char __user *buf,
size_t count, loff_t *ppos)
{
+ struct posix_clock_context *pccontext = fp->private_data;
struct posix_clock *clk = get_posix_clock(fp);
int err = -EINVAL;
@@ -46,7 +48,7 @@ static ssize_t posix_clock_read(struct file *fp, char __user *buf,
return -ENODEV;
if (clk->ops.read)
- err = clk->ops.read(clk, fp->f_flags, buf, count);
+ err = clk->ops.read(pccontext, fp->f_flags, buf, count);
put_posix_clock(clk);
@@ -55,6 +57,7 @@ static ssize_t posix_clock_read(struct file *fp, char __user *buf,
static __poll_t posix_clock_poll(struct file *fp, poll_table *wait)
{
+ struct posix_clock_context *pccontext = fp->private_data;
struct posix_clock *clk = get_posix_clock(fp);
__poll_t result = 0;
@@ -62,7 +65,7 @@ static __poll_t posix_clock_poll(struct file *fp, poll_table *wait)
return EPOLLERR;
if (clk->ops.poll)
- result = clk->ops.poll(clk, fp, wait);
+ result = clk->ops.poll(pccontext, fp, wait);
put_posix_clock(clk);
@@ -72,6 +75,7 @@ static __poll_t posix_clock_poll(struct file *fp, poll_table *wait)
static long posix_clock_ioctl(struct file *fp,
unsigned int cmd, unsigned long arg)
{
+ struct posix_clock_context *pccontext = fp->private_data;
struct posix_clock *clk = get_posix_clock(fp);
int err = -ENOTTY;
@@ -79,7 +83,7 @@ static long posix_clock_ioctl(struct file *fp,
return -ENODEV;
if (clk->ops.ioctl)
- err = clk->ops.ioctl(clk, cmd, arg);
+ err = clk->ops.ioctl(pccontext, cmd, arg);
put_posix_clock(clk);
@@ -90,6 +94,7 @@ static long posix_clock_ioctl(struct file *fp,
static long posix_clock_compat_ioctl(struct file *fp,
unsigned int cmd, unsigned long arg)
{
+ struct posix_clock_context *pccontext = fp->private_data;
struct posix_clock *clk = get_posix_clock(fp);
int err = -ENOTTY;
@@ -97,7 +102,7 @@ static long posix_clock_compat_ioctl(struct file *fp,
return -ENODEV;
if (clk->ops.ioctl)
- err = clk->ops.ioctl(clk, cmd, arg);
+ err = clk->ops.ioctl(pccontext, cmd, arg);
put_posix_clock(clk);
@@ -110,6 +115,7 @@ static int posix_clock_open(struct inode *inode, struct file *fp)
int err;
struct posix_clock *clk =
container_of(inode->i_cdev, struct posix_clock, cdev);
+ struct posix_clock_context *pccontext;
down_read(&clk->rwsem);
@@ -117,14 +123,20 @@ static int posix_clock_open(struct inode *inode, struct file *fp)
err = -ENODEV;
goto out;
}
+ pccontext = kzalloc(sizeof(*pccontext), GFP_KERNEL);
+ if (!pccontext) {
+ err = -ENOMEM;
+ goto out;
+ }
+ pccontext->clk = clk;
+ fp->private_data = pccontext;
if (clk->ops.open)
- err = clk->ops.open(clk, fp->f_mode);
+ err = clk->ops.open(pccontext, fp->f_mode);
else
err = 0;
if (!err) {
get_device(clk->dev);
- fp->private_data = clk;
}
out:
up_read(&clk->rwsem);
@@ -133,14 +145,20 @@ out:
static int posix_clock_release(struct inode *inode, struct file *fp)
{
- struct posix_clock *clk = fp->private_data;
+ struct posix_clock_context *pccontext = fp->private_data;
+ struct posix_clock *clk;
int err = 0;
+ if (!pccontext)
+ return -ENODEV;
+ clk = pccontext->clk;
+
if (clk->ops.release)
- err = clk->ops.release(clk);
+ err = clk->ops.release(pccontext);
put_device(clk->dev);
+ kfree(pccontext);
fp->private_data = NULL;
return err;
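
The posix-clock conversion stops storing the clock itself in file->private_data and instead allocates a small per-open context in open() and frees it in release(), giving every file descriptor its own state (needed for multiple PTP event-queue readers with different filters). A minimal sketch of the same per-open-context idiom for a generic character device, assuming kernel context; all names below are hypothetical.

#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/slab.h>

struct example_dev {                    /* hypothetical device wrapper */
        struct cdev cdev;
};

struct example_ctx {                    /* hypothetical per-fd state */
        struct example_dev *dev;
        unsigned long flags;
};

static int example_open(struct inode *inode, struct file *fp)
{
        struct example_ctx *ctx;

        ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
        if (!ctx)
                return -ENOMEM;

        ctx->dev = container_of(inode->i_cdev, struct example_dev, cdev);
        fp->private_data = ctx;         /* each open() gets its own context */
        return 0;
}

static int example_release(struct inode *inode, struct file *fp)
{
        kfree(fp->private_data);
        fp->private_data = NULL;
        return 0;
}
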
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 868008f56fec..df697c74d519 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -117,6 +117,9 @@ unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
* and don't send kprobe event into ring-buffer,
* so return zero here
*/
+ rcu_read_lock();
+ bpf_prog_inc_misses_counters(rcu_dereference(call->prog_array));
+ rcu_read_unlock();
ret = 0;
goto out;
}
@@ -2384,7 +2387,8 @@ int bpf_probe_unregister(struct bpf_raw_event_map *btp, struct bpf_prog *prog)
int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
u32 *fd_type, const char **buf,
- u64 *probe_offset, u64 *probe_addr)
+ u64 *probe_offset, u64 *probe_addr,
+ unsigned long *missed)
{
bool is_tracepoint, is_syscall_tp;
struct bpf_prog *prog;
@@ -2419,7 +2423,7 @@ int bpf_get_perf_event_info(const struct perf_event *event, u32 *prog_id,
#ifdef CONFIG_KPROBE_EVENTS
if (flags & TRACE_EVENT_FL_KPROBE)
err = bpf_get_kprobe_info(event, fd_type, buf,
- probe_offset, probe_addr,
+ probe_offset, probe_addr, missed,
event->attr.type == PERF_TYPE_TRACEPOINT);
#endif
#ifdef CONFIG_UPROBE_EVENTS
@@ -2614,6 +2618,7 @@ static int bpf_kprobe_multi_link_fill_link_info(const struct bpf_link *link,
kmulti_link = container_of(link, struct bpf_kprobe_multi_link, link);
info->kprobe_multi.count = kmulti_link->cnt;
info->kprobe_multi.flags = kmulti_link->flags;
+ info->kprobe_multi.missed = kmulti_link->fp.nmissed;
if (!uaddrs)
return 0;
@@ -2710,6 +2715,7 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
int err;
if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
+ bpf_prog_inc_misses_counter(link->link.prog);
err = 0;
goto out;
}
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index e834f149695b..a3442db35670 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -1249,6 +1249,12 @@ static const struct file_operations kprobe_events_ops = {
.write = probes_write,
};
+static unsigned long trace_kprobe_missed(struct trace_kprobe *tk)
+{
+ return trace_kprobe_is_return(tk) ?
+ tk->rp.kp.nmissed + tk->rp.nmissed : tk->rp.kp.nmissed;
+}
+
/* Probes profiling interfaces */
static int probes_profile_seq_show(struct seq_file *m, void *v)
{
@@ -1260,8 +1266,7 @@ static int probes_profile_seq_show(struct seq_file *m, void *v)
return 0;
tk = to_trace_kprobe(ev);
- nmissed = trace_kprobe_is_return(tk) ?
- tk->rp.kp.nmissed + tk->rp.nmissed : tk->rp.kp.nmissed;
+ nmissed = trace_kprobe_missed(tk);
seq_printf(m, " %-44s %15lu %15lu\n",
trace_probe_name(&tk->tp),
trace_kprobe_nhit(tk),
@@ -1607,7 +1612,8 @@ NOKPROBE_SYMBOL(kretprobe_perf_func);
int bpf_get_kprobe_info(const struct perf_event *event, u32 *fd_type,
const char **symbol, u64 *probe_offset,
- u64 *probe_addr, bool perf_type_tracepoint)
+ u64 *probe_addr, unsigned long *missed,
+ bool perf_type_tracepoint)
{
const char *pevent = trace_event_name(event->tp_event);
const char *group = event->tp_event->class->system;
@@ -1626,6 +1632,8 @@ int bpf_get_kprobe_info(const struct perf_event *event, u32 *fd_type,
*probe_addr = kallsyms_show_value(current_cred()) ?
(unsigned long)tk->rp.kp.addr : 0;
*symbol = tk->symbol;
+ if (missed)
+ *missed = trace_kprobe_missed(tk);
return 0;
}
#endif /* CONFIG_PERF_EVENTS */
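
Both tracing hunks above feed the new "missed" statistics: bpf_trace.c bumps the per-program miss counter when the bpf_prog_active recursion guard prevents a kprobe-attached program from running, and trace_kprobe.c exposes the kprobe-level nmissed count through bpf_get_kprobe_info(). A minimal sketch of the recursion-guard-plus-counter idea, assuming kernel context; the names below are illustrative.

#include <linux/percpu.h>
#include <linux/atomic.h>

static DEFINE_PER_CPU(int, example_active);          /* hypothetical recursion guard */
static atomic64_t example_missed = ATOMIC64_INIT(0); /* hypothetical miss counter */

static void example_run_prog(void)
{
        if (__this_cpu_inc_return(example_active) != 1) {
                /* Re-entered on this CPU: count the miss instead of running. */
                atomic64_inc(&example_missed);
                goto out;
        }

        /* ... run the attached program here ... */

out:
        __this_cpu_dec(example_active);
}
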
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index de753403cdaf..9c581d6da843 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -556,7 +556,7 @@ static int perf_call_bpf_enter(struct trace_event_call *call, struct pt_regs *re
{
struct syscall_tp_t {
struct trace_entry ent;
- unsigned long syscall_nr;
+ int syscall_nr;
unsigned long args[SYSCALL_DEFINE_MAXARGS];
} __aligned(8) param;
int i;
@@ -661,7 +661,7 @@ static int perf_call_bpf_exit(struct trace_event_call *call, struct pt_regs *reg
{
struct syscall_tp_t {
struct trace_entry ent;
- unsigned long syscall_nr;
+ int syscall_nr;
unsigned long ret;
} __aligned(8) param;
diff --git a/lib/nlattr.c b/lib/nlattr.c
index 7a2b6c38fd59..dc15e7888fc1 100644
--- a/lib/nlattr.c
+++ b/lib/nlattr.c
@@ -134,6 +134,7 @@ void nla_get_range_unsigned(const struct nla_policy *pt,
range->max = U32_MAX;
break;
case NLA_U64:
+ case NLA_UINT:
case NLA_MSECS:
range->max = U64_MAX;
break;
@@ -183,6 +184,9 @@ static int nla_validate_range_unsigned(const struct nla_policy *pt,
case NLA_U64:
value = nla_get_u64(nla);
break;
+ case NLA_UINT:
+ value = nla_get_uint(nla);
+ break;
case NLA_MSECS:
value = nla_get_u64(nla);
break;
@@ -248,6 +252,7 @@ void nla_get_range_signed(const struct nla_policy *pt,
range->max = S32_MAX;
break;
case NLA_S64:
+ case NLA_SINT:
range->min = S64_MIN;
range->max = S64_MAX;
break;
@@ -295,6 +300,9 @@ static int nla_validate_int_range_signed(const struct nla_policy *pt,
case NLA_S64:
value = nla_get_s64(nla);
break;
+ case NLA_SINT:
+ value = nla_get_sint(nla);
+ break;
default:
return -EINVAL;
}
@@ -320,6 +328,7 @@ static int nla_validate_int_range(const struct nla_policy *pt,
case NLA_U16:
case NLA_U32:
case NLA_U64:
+ case NLA_UINT:
case NLA_MSECS:
case NLA_BINARY:
case NLA_BE16:
@@ -329,6 +338,7 @@ static int nla_validate_int_range(const struct nla_policy *pt,
case NLA_S16:
case NLA_S32:
case NLA_S64:
+ case NLA_SINT:
return nla_validate_int_range_signed(pt, nla, extack);
default:
WARN_ON(1);
@@ -355,6 +365,9 @@ static int nla_validate_mask(const struct nla_policy *pt,
case NLA_U64:
value = nla_get_u64(nla);
break;
+ case NLA_UINT:
+ value = nla_get_uint(nla);
+ break;
case NLA_BE16:
value = ntohs(nla_get_be16(nla));
break;
@@ -433,6 +446,15 @@ static int validate_nla(const struct nlattr *nla, int maxtype,
goto out_err;
break;
+ case NLA_SINT:
+ case NLA_UINT:
+ if (attrlen != sizeof(u32) && attrlen != sizeof(u64)) {
+ NL_SET_ERR_MSG_ATTR_POL(extack, nla, pt,
+ "invalid attribute length");
+ return -EINVAL;
+ }
+ break;
+
case NLA_BITFIELD32:
if (attrlen != sizeof(struct nla_bitfield32))
goto out_err;
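
NLA_UINT and NLA_SINT are variable-width integer attributes: the payload may be either 32 or 64 bits on the wire, which is why validate_nla() accepts an attrlen of sizeof(u32) or sizeof(u64) and why the getters widen to 64 bits. A minimal policy sketch, assuming kernel netlink context; the attribute names below are hypothetical.

#include <net/netlink.h>

enum {
        EXAMPLE_ATTR_UNSPEC,
        EXAMPLE_ATTR_BYTES,     /* hypothetical unsigned counter */
        EXAMPLE_ATTR_DELTA,     /* hypothetical signed adjustment */
        __EXAMPLE_ATTR_MAX,
};
#define EXAMPLE_ATTR_MAX (__EXAMPLE_ATTR_MAX - 1)

static const struct nla_policy example_policy[EXAMPLE_ATTR_MAX + 1] = {
        [EXAMPLE_ATTR_BYTES] = { .type = NLA_UINT },
        [EXAMPLE_ATTR_DELTA] = { .type = NLA_SINT },
};

/* The getters transparently widen a 32-bit payload to 64 bits. */
static void example_read(struct nlattr **tb, u64 *bytes, s64 *delta)
{
        if (tb[EXAMPLE_ATTR_BYTES])
                *bytes = nla_get_uint(tb[EXAMPLE_ATTR_BYTES]);
        if (tb[EXAMPLE_ATTR_DELTA])
                *delta = nla_get_sint(tb[EXAMPLE_ATTR_DELTA]);
}
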
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ecde4216201e..7916503e6a6a 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -5111,6 +5111,104 @@ static struct bpf_test tests[] = {
{ },
{ { 0, 0xffffffff } }
},
+ /* MOVSX32 */
+ {
+ "ALU_MOVSX | BPF_B",
+ .u.insns_int = {
+ BPF_LD_IMM64(R2, 0x00000000ffffffefLL),
+ BPF_LD_IMM64(R3, 0xdeadbeefdeadbeefLL),
+ BPF_MOVSX32_REG(R1, R3, 8),
+ BPF_JMP_REG(BPF_JEQ, R2, R1, 2),
+ BPF_MOV32_IMM(R0, 2),
+ BPF_EXIT_INSN(),
+ BPF_MOV32_IMM(R0, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0x1 } },
+ },
+ {
+ "ALU_MOVSX | BPF_H",
+ .u.insns_int = {
+ BPF_LD_IMM64(R2, 0x00000000ffffbeefLL),
+ BPF_LD_IMM64(R3, 0xdeadbeefdeadbeefLL),
+ BPF_MOVSX32_REG(R1, R3, 16),
+ BPF_JMP_REG(BPF_JEQ, R2, R1, 2),
+ BPF_MOV32_IMM(R0, 2),
+ BPF_EXIT_INSN(),
+ BPF_MOV32_IMM(R0, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0x1 } },
+ },
+ {
+ "ALU_MOVSX | BPF_W",
+ .u.insns_int = {
+ BPF_LD_IMM64(R2, 0x00000000deadbeefLL),
+ BPF_LD_IMM64(R3, 0xdeadbeefdeadbeefLL),
+ BPF_MOVSX32_REG(R1, R3, 32),
+ BPF_JMP_REG(BPF_JEQ, R2, R1, 2),
+ BPF_MOV32_IMM(R0, 2),
+ BPF_EXIT_INSN(),
+ BPF_MOV32_IMM(R0, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0x1 } },
+ },
+ /* MOVSX64 REG */
+ {
+ "ALU64_MOVSX | BPF_B",
+ .u.insns_int = {
+ BPF_LD_IMM64(R2, 0xffffffffffffffefLL),
+ BPF_LD_IMM64(R3, 0xdeadbeefdeadbeefLL),
+ BPF_MOVSX64_REG(R1, R3, 8),
+ BPF_JMP_REG(BPF_JEQ, R2, R1, 2),
+ BPF_MOV32_IMM(R0, 2),
+ BPF_EXIT_INSN(),
+ BPF_MOV32_IMM(R0, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0x1 } },
+ },
+ {
+ "ALU64_MOVSX | BPF_H",
+ .u.insns_int = {
+ BPF_LD_IMM64(R2, 0xffffffffffffbeefLL),
+ BPF_LD_IMM64(R3, 0xdeadbeefdeadbeefLL),
+ BPF_MOVSX64_REG(R1, R3, 16),
+ BPF_JMP_REG(BPF_JEQ, R2, R1, 2),
+ BPF_MOV32_IMM(R0, 2),
+ BPF_EXIT_INSN(),
+ BPF_MOV32_IMM(R0, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0x1 } },
+ },
+ {
+ "ALU64_MOVSX | BPF_W",
+ .u.insns_int = {
+ BPF_LD_IMM64(R2, 0xffffffffdeadbeefLL),
+ BPF_LD_IMM64(R3, 0xdeadbeefdeadbeefLL),
+ BPF_MOVSX64_REG(R1, R3, 32),
+ BPF_JMP_REG(BPF_JEQ, R2, R1, 2),
+ BPF_MOV32_IMM(R0, 2),
+ BPF_EXIT_INSN(),
+ BPF_MOV32_IMM(R0, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0x1 } },
+ },
/* BPF_ALU | BPF_ADD | BPF_X */
{
"ALU_ADD_X: 1 + 2 = 3",
@@ -6105,6 +6203,106 @@ static struct bpf_test tests[] = {
{ },
{ { 0, 2 } },
},
+ /* BPF_ALU | BPF_DIV | BPF_X off=1 (SDIV) */
+ {
+ "ALU_SDIV_X: -6 / 2 = -3",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, -6),
+ BPF_ALU32_IMM(BPF_MOV, R1, 2),
+ BPF_ALU32_REG_OFF(BPF_DIV, R0, R1, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, -3 } },
+ },
+ /* BPF_ALU | BPF_DIV | BPF_K off=1 (SDIV) */
+ {
+ "ALU_SDIV_K: -6 / 2 = -3",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, -6),
+ BPF_ALU32_IMM_OFF(BPF_DIV, R0, 2, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, -3 } },
+ },
+ /* BPF_ALU64 | BPF_DIV | BPF_X off=1 (SDIV64) */
+ {
+ "ALU64_SDIV_X: -6 / 2 = -3",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, -6),
+ BPF_ALU32_IMM(BPF_MOV, R1, 2),
+ BPF_ALU64_REG_OFF(BPF_DIV, R0, R1, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, -3 } },
+ },
+ /* BPF_ALU64 | BPF_DIV | BPF_K off=1 (SDIV64) */
+ {
+ "ALU64_SDIV_K: -6 / 2 = -3",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, -6),
+ BPF_ALU64_IMM_OFF(BPF_DIV, R0, 2, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, -3 } },
+ },
+ /* BPF_ALU | BPF_MOD | BPF_X off=1 (SMOD) */
+ {
+ "ALU_SMOD_X: -7 % 2 = -1",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, -7),
+ BPF_ALU32_IMM(BPF_MOV, R1, 2),
+ BPF_ALU32_REG_OFF(BPF_MOD, R0, R1, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, -1 } },
+ },
+ /* BPF_ALU | BPF_MOD | BPF_K off=1 (SMOD) */
+ {
+ "ALU_SMOD_K: -7 % 2 = -1",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, -7),
+ BPF_ALU32_IMM_OFF(BPF_MOD, R0, 2, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, -1 } },
+ },
+ /* BPF_ALU64 | BPF_MOD | BPF_X off=1 (SMOD64) */
+ {
+ "ALU64_SMOD_X: -7 % 2 = -1",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, -7),
+ BPF_ALU32_IMM(BPF_MOV, R1, 2),
+ BPF_ALU64_REG_OFF(BPF_MOD, R0, R1, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, -1 } },
+ },
+ /* BPF_ALU64 | BPF_MOD | BPF_K off=1 (SMOD64) */
+ {
+ "ALU64_SMOD_X: -7 % 2 = -1",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, -7),
+ BPF_ALU64_IMM_OFF(BPF_MOD, R0, 2, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, -1 } },
+ },
/* BPF_ALU | BPF_AND | BPF_X */
{
"ALU_AND_X: 3 & 2 = 2",
@@ -7837,6 +8035,104 @@ static struct bpf_test tests[] = {
{ },
{ { 0, (u32) (cpu_to_le64(0xfedcba9876543210ULL) >> 32) } },
},
+ /* BSWAP */
+ {
+ "BSWAP 16: 0x0123456789abcdef -> 0xefcd",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+ BPF_BSWAP(R0, 16),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0xefcd } },
+ },
+ {
+ "BSWAP 32: 0x0123456789abcdef -> 0xefcdab89",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+ BPF_BSWAP(R0, 32),
+ BPF_ALU64_REG(BPF_MOV, R1, R0),
+ BPF_ALU64_IMM(BPF_RSH, R1, 32),
+ BPF_ALU32_REG(BPF_ADD, R0, R1), /* R1 = 0 */
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0xefcdab89 } },
+ },
+ {
+ "BSWAP 64: 0x0123456789abcdef -> 0x67452301",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+ BPF_BSWAP(R0, 64),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0x67452301 } },
+ },
+ {
+ "BSWAP 64: 0x0123456789abcdef >> 32 -> 0xefcdab89",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, 0x0123456789abcdefLL),
+ BPF_BSWAP(R0, 64),
+ BPF_ALU64_IMM(BPF_RSH, R0, 32),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0xefcdab89 } },
+ },
+ /* BSWAP, reversed */
+ {
+ "BSWAP 16: 0xfedcba9876543210 -> 0x1032",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+ BPF_BSWAP(R0, 16),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0x1032 } },
+ },
+ {
+ "BSWAP 32: 0xfedcba9876543210 -> 0x10325476",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+ BPF_BSWAP(R0, 32),
+ BPF_ALU64_REG(BPF_MOV, R1, R0),
+ BPF_ALU64_IMM(BPF_RSH, R1, 32),
+ BPF_ALU32_REG(BPF_ADD, R0, R1), /* R1 = 0 */
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0x10325476 } },
+ },
+ {
+ "BSWAP 64: 0xfedcba9876543210 -> 0x98badcfe",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+ BPF_BSWAP(R0, 64),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0x98badcfe } },
+ },
+ {
+ "BSWAP 64: 0xfedcba9876543210 >> 32 -> 0x10325476",
+ .u.insns_int = {
+ BPF_LD_IMM64(R0, 0xfedcba9876543210ULL),
+ BPF_BSWAP(R0, 64),
+ BPF_ALU64_IMM(BPF_RSH, R0, 32),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0x10325476 } },
+ },
/* BPF_LDX_MEM B/H/W/DW */
{
"BPF_LDX_MEM | BPF_B, base",
@@ -8228,6 +8524,67 @@ static struct bpf_test tests[] = {
{ { 32, 0 } },
.stack_depth = 0,
},
+ /* BPF_LDX_MEMSX B/H/W */
+ {
+ "BPF_LDX_MEMSX | BPF_B",
+ .u.insns_int = {
+ BPF_LD_IMM64(R1, 0xdead0000000000f0ULL),
+ BPF_LD_IMM64(R2, 0xfffffffffffffff0ULL),
+ BPF_STX_MEM(BPF_DW, R10, R1, -8),
+#ifdef __BIG_ENDIAN
+ BPF_LDX_MEMSX(BPF_B, R0, R10, -1),
+#else
+ BPF_LDX_MEMSX(BPF_B, R0, R10, -8),
+#endif
+ BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+ BPF_ALU64_IMM(BPF_MOV, R0, 0),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0 } },
+ .stack_depth = 8,
+ },
+ {
+ "BPF_LDX_MEMSX | BPF_H",
+ .u.insns_int = {
+ BPF_LD_IMM64(R1, 0xdead00000000f123ULL),
+ BPF_LD_IMM64(R2, 0xfffffffffffff123ULL),
+ BPF_STX_MEM(BPF_DW, R10, R1, -8),
+#ifdef __BIG_ENDIAN
+ BPF_LDX_MEMSX(BPF_H, R0, R10, -2),
+#else
+ BPF_LDX_MEMSX(BPF_H, R0, R10, -8),
+#endif
+ BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+ BPF_ALU64_IMM(BPF_MOV, R0, 0),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0 } },
+ .stack_depth = 8,
+ },
+ {
+ "BPF_LDX_MEMSX | BPF_W",
+ .u.insns_int = {
+ BPF_LD_IMM64(R1, 0x00000000deadbeefULL),
+ BPF_LD_IMM64(R2, 0xffffffffdeadbeefULL),
+ BPF_STX_MEM(BPF_DW, R10, R1, -8),
+#ifdef __BIG_ENDIAN
+ BPF_LDX_MEMSX(BPF_W, R0, R10, -4),
+#else
+ BPF_LDX_MEMSX(BPF_W, R0, R10, -8),
+#endif
+ BPF_JMP_REG(BPF_JNE, R0, R2, 1),
+ BPF_ALU64_IMM(BPF_MOV, R0, 0),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 0 } },
+ .stack_depth = 8,
+ },
/* BPF_STX_MEM B/H/W/DW */
{
"BPF_STX_MEM | BPF_B",
@@ -9474,6 +9831,20 @@ static struct bpf_test tests[] = {
{ },
{ { 0, 1 } },
},
+ /* BPF_JMP32 | BPF_JA */
+ {
+ "JMP32_JA: Unconditional jump: if (true) return 1",
+ .u.insns_int = {
+ BPF_ALU32_IMM(BPF_MOV, R0, 0),
+ BPF_JMP32_IMM(BPF_JA, 0, 1, 0),
+ BPF_EXIT_INSN(),
+ BPF_ALU32_IMM(BPF_MOV, R0, 1),
+ BPF_EXIT_INSN(),
+ },
+ INTERNAL,
+ { },
+ { { 0, 1 } },
+ },
/* BPF_JMP | BPF_JSLT | BPF_K */
{
"JMP_JSLT_K: Signed jump: if (-2 < -1) return 1",
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index d37831b8511c..8b06bab5c406 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -562,7 +562,6 @@ void kasan_restore_multi_shot(bool enabled);
* code. Declared here to avoid warnings about missing declarations.
*/
-asmlinkage void kasan_unpoison_task_stack_below(const void *watermark);
void __asan_register_globals(void *globals, ssize_t size);
void __asan_unregister_globals(void *globals, ssize_t size);
void __asan_handle_no_return(void);
diff --git a/mm/percpu.c b/mm/percpu.c
index a7665de8485f..60ed078e4cd0 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -2245,6 +2245,37 @@ static void pcpu_balance_workfn(struct work_struct *work)
}
/**
+ * pcpu_alloc_size - the size of the dynamic percpu area
+ * @ptr: pointer to the dynamic percpu area
+ *
+ * Returns the size of the @ptr allocation. This is undefined for statically
+ * defined percpu variables as there is no corresponding chunk->bound_map.
+ *
+ * RETURNS:
+ * The size of the dynamic percpu area.
+ *
+ * CONTEXT:
+ * Can be called from atomic context.
+ */
+size_t pcpu_alloc_size(void __percpu *ptr)
+{
+ struct pcpu_chunk *chunk;
+ unsigned long bit_off, end;
+ void *addr;
+
+ if (!ptr)
+ return 0;
+
+ addr = __pcpu_ptr_to_addr(ptr);
+ /* No pcpu_lock here: ptr has not been freed, so chunk is still alive */
+ chunk = pcpu_chunk_addr_search(addr);
+ bit_off = (addr - chunk->base_addr) / PCPU_MIN_ALLOC_SIZE;
+ end = find_next_bit(chunk->bound_map, pcpu_chunk_map_bits(chunk),
+ bit_off + 1);
+ return (end - bit_off) * PCPU_MIN_ALLOC_SIZE;
+}
+
+/**
* free_percpu - free percpu area
* @ptr: pointer to area to free
*
@@ -2267,12 +2298,10 @@ void free_percpu(void __percpu *ptr)
kmemleak_free_percpu(ptr);
addr = __pcpu_ptr_to_addr(ptr);
-
- spin_lock_irqsave(&pcpu_lock, flags);
-
chunk = pcpu_chunk_addr_search(addr);
off = addr - chunk->base_addr;
+ spin_lock_irqsave(&pcpu_lock, flags);
size = pcpu_free_area(chunk, off);
pcpu_memcg_free_hook(chunk, off, size);
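
pcpu_alloc_size() recovers the size of a dynamic per-cpu allocation by locating its chunk, turning the pointer into a bit offset in PCPU_MIN_ALLOC_SIZE units, and scanning chunk->bound_map for the next boundary; the distance between the two boundaries is the allocation size. A minimal usage sketch, assuming kernel context; the caller below is illustrative (in this series the real consumer is the BPF per-cpu object allocator).

#include <linux/percpu.h>
#include <linux/errno.h>

/* Illustrative only: allocate a dynamic per-cpu area and query its size. */
static int example_percpu_size_check(void)
{
        void __percpu *p = __alloc_percpu(64, 8);
        size_t sz;

        if (!p)
                return -ENOMEM;

        /* For dynamic allocations this is at least the requested size,
         * rounded to the chunk's PCPU_MIN_ALLOC_SIZE granularity. */
        sz = pcpu_alloc_size(p);

        free_percpu(p);
        return sz >= 64 ? 0 : -EINVAL;
}
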
diff --git a/net/Kconfig b/net/Kconfig
index d532ec33f1fe..3ec6bc98fa05 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -246,7 +246,7 @@ source "net/bridge/Kconfig"
source "net/dsa/Kconfig"
source "net/8021q/Kconfig"
source "net/llc/Kconfig"
-source "drivers/net/appletalk/Kconfig"
+source "net/appletalk/Kconfig"
source "net/x25/Kconfig"
source "net/lapb/Kconfig"
source "net/phonet/Kconfig"
@@ -508,4 +508,13 @@ config NETDEV_ADDR_LIST_TEST
default KUNIT_ALL_TESTS
depends on KUNIT
+config NET_TEST
+ tristate "KUnit tests for networking" if !KUNIT_ALL_TESTS
+ depends on KUNIT
+ default KUNIT_ALL_TESTS
+ help
+ KUnit tests covering core networking infra, such as sk_buff.
+
+ If unsure, say N.
+
endif # if NET
diff --git a/net/appletalk/Kconfig b/net/appletalk/Kconfig
new file mode 100644
index 000000000000..041141abf925
--- /dev/null
+++ b/net/appletalk/Kconfig
@@ -0,0 +1,30 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Appletalk configuration
+#
+config ATALK
+ tristate "Appletalk protocol support"
+ select LLC
+ help
+ AppleTalk is the protocol that Apple computers can use to communicate
+ on a network. If your Linux box is connected to such a network and you
+ wish to connect to it, say Y. You will need to use the netatalk package
+ so that your Linux box can act as a print and file server for Macs as
+ well as access AppleTalk printers. Check out
+ <http://www.zettabyte.net/netatalk/> on the WWW for details.
+ EtherTalk is the name used for AppleTalk over Ethernet and the
+ cheaper and slower LocalTalk is AppleTalk over a proprietary Apple
+ network using serial links. EtherTalk and LocalTalk are fully
+ supported by Linux.
+
+ General information about how to connect Linux, Windows machines and
+ Macs is on the WWW at <http://www.eats.com/linux_mac_win.html>. The
+ NET3-4-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>, contains valuable
+ information as well.
+
+ To compile this driver as a module, choose M here: the module will be
+ called appletalk. You almost certainly want to compile it as a
+ module so you can restart your AppleTalk stack without rebooting
+ your machine. I hear that the GNU boycott of Apple is over, so
+ even politically correct people are allowed to say Y here.
diff --git a/net/appletalk/aarp.c b/net/appletalk/aarp.c
index c7236daa2415..9fa0b246902b 100644
--- a/net/appletalk/aarp.c
+++ b/net/appletalk/aarp.c
@@ -664,7 +664,7 @@ out_unlock:
sendit:
if (skb->sk)
- skb->priority = skb->sk->sk_priority;
+ skb->priority = READ_ONCE(skb->sk->sk_priority);
if (dev_queue_xmit(skb))
goto drop;
sent:
diff --git a/net/appletalk/ddp.c b/net/appletalk/ddp.c
index 8978fb6212ff..9ba04a69ec2a 100644
--- a/net/appletalk/ddp.c
+++ b/net/appletalk/ddp.c
@@ -1284,39 +1284,6 @@ out:
return err;
}
-#if IS_ENABLED(CONFIG_IPDDP)
-static __inline__ int is_ip_over_ddp(struct sk_buff *skb)
-{
- return skb->data[12] == 22;
-}
-
-static int handle_ip_over_ddp(struct sk_buff *skb)
-{
- struct net_device *dev = __dev_get_by_name(&init_net, "ipddp0");
- struct net_device_stats *stats;
-
- /* This needs to be able to handle ipddp"N" devices */
- if (!dev) {
- kfree_skb(skb);
- return NET_RX_DROP;
- }
-
- skb->protocol = htons(ETH_P_IP);
- skb_pull(skb, 13);
- skb->dev = dev;
- skb_reset_transport_header(skb);
-
- stats = netdev_priv(dev);
- stats->rx_packets++;
- stats->rx_bytes += skb->len + 13;
- return netif_rx(skb); /* Send the SKB up to a higher place. */
-}
-#else
-/* make it easy for gcc to optimize this test out, i.e. kill the code */
-#define is_ip_over_ddp(skb) 0
-#define handle_ip_over_ddp(skb) 0
-#endif
-
static int atalk_route_packet(struct sk_buff *skb, struct net_device *dev,
struct ddpehdr *ddp, __u16 len_hops, int origlen)
{
@@ -1480,9 +1447,6 @@ static int atalk_rcv(struct sk_buff *skb, struct net_device *dev,
return atalk_route_packet(skb, dev, ddp, len_hops, origlen);
}
- /* if IP over DDP is not selected this code will be optimized out */
- if (is_ip_over_ddp(skb))
- return handle_ip_over_ddp(skb);
/*
* Which socket - atalk_search_socket() looks for a *full match*
* of the <net, node, port> tuple.
diff --git a/net/atm/atm_sysfs.c b/net/atm/atm_sysfs.c
index 466353b3dde4..54e7fb1a4ee5 100644
--- a/net/atm/atm_sysfs.c
+++ b/net/atm/atm_sysfs.c
@@ -116,8 +116,6 @@ static int atm_uevent(const struct device *cdev, struct kobj_uevent_env *env)
return -ENODEV;
adev = to_atm_dev(cdev);
- if (!adev)
- return -ENODEV;
if (add_uevent_var(env, "NAME=%s%d", adev->type, adev->number))
return -ENOMEM;
diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
index 5db805d5f74d..558e158c98d0 100644
--- a/net/ax25/af_ax25.c
+++ b/net/ax25/af_ax25.c
@@ -939,7 +939,7 @@ struct sock *ax25_make_new(struct sock *osk, struct ax25_dev *ax25_dev)
sock_init_data(NULL, sk);
sk->sk_type = osk->sk_type;
- sk->sk_priority = osk->sk_priority;
+ sk->sk_priority = READ_ONCE(osk->sk_priority);
sk->sk_protocol = osk->sk_protocol;
sk->sk_rcvbuf = osk->sk_rcvbuf;
sk->sk_sndbuf = osk->sk_sndbuf;
diff --git a/net/bluetooth/amp.c b/net/bluetooth/amp.c
index 2134f92bd7ac..5d698f19868c 100644
--- a/net/bluetooth/amp.c
+++ b/net/bluetooth/amp.c
@@ -109,7 +109,7 @@ struct hci_conn *phylink_add(struct hci_dev *hdev, struct amp_mgr *mgr,
struct hci_conn *hcon;
u8 role = out ? HCI_ROLE_MASTER : HCI_ROLE_SLAVE;
- hcon = hci_conn_add(hdev, AMP_LINK, dst, role);
+ hcon = hci_conn_add(hdev, AMP_LINK, dst, role, __next_handle(mgr));
if (!hcon)
return NULL;
@@ -117,7 +117,6 @@ struct hci_conn *phylink_add(struct hci_dev *hdev, struct amp_mgr *mgr,
hcon->state = BT_CONNECT;
hcon->attempt++;
- hcon->handle = __next_handle(mgr);
hcon->remote_id = remote_id;
hcon->amp_mgr = amp_mgr_get(mgr);
diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
index 73470cc3518a..2cee330188ce 100644
--- a/net/bluetooth/hci_conn.c
+++ b/net/bluetooth/hci_conn.c
@@ -153,6 +153,9 @@ static void hci_conn_cleanup(struct hci_conn *conn)
hci_conn_hash_del(hdev, conn);
+ if (HCI_CONN_HANDLE_UNSET(conn->handle))
+ ida_free(&hdev->unset_handle_ida, conn->handle);
+
if (conn->cleanup)
conn->cleanup(conn);
@@ -169,13 +172,11 @@ static void hci_conn_cleanup(struct hci_conn *conn)
hdev->notify(hdev, HCI_NOTIFY_CONN_DEL);
}
- hci_conn_del_sysfs(conn);
-
debugfs_remove_recursive(conn->debugfs);
- hci_dev_put(hdev);
+ hci_conn_del_sysfs(conn);
- hci_conn_put(conn);
+ hci_dev_put(hdev);
}
static void hci_acl_create_connection(struct hci_conn *conn)
@@ -759,6 +760,7 @@ static int terminate_big_sync(struct hci_dev *hdev, void *data)
bt_dev_dbg(hdev, "big 0x%2.2x bis 0x%2.2x", d->big, d->bis);
+ hci_disable_per_advertising_sync(hdev, d->bis);
hci_remove_ext_adv_instance_sync(hdev, d->bis, NULL);
/* Only terminate BIG if it has been created */
@@ -814,6 +816,17 @@ static int big_terminate_sync(struct hci_dev *hdev, void *data)
return 0;
}
+static void find_bis(struct hci_conn *conn, void *data)
+{
+ struct iso_list_data *d = data;
+
+ /* Ignore if BIG doesn't match */
+ if (d->big != conn->iso_qos.bcast.big)
+ return;
+
+ d->count++;
+}
+
static int hci_le_big_terminate(struct hci_dev *hdev, u8 big, struct hci_conn *conn)
{
struct iso_list_data *d;
@@ -825,10 +838,27 @@ static int hci_le_big_terminate(struct hci_dev *hdev, u8 big, struct hci_conn *c
if (!d)
return -ENOMEM;
+ memset(d, 0, sizeof(*d));
d->big = big;
d->sync_handle = conn->sync_handle;
- d->pa_sync_term = test_and_clear_bit(HCI_CONN_PA_SYNC, &conn->flags);
- d->big_sync_term = test_and_clear_bit(HCI_CONN_BIG_SYNC, &conn->flags);
+
+ if (test_and_clear_bit(HCI_CONN_PA_SYNC, &conn->flags)) {
+ hci_conn_hash_list_flag(hdev, find_bis, ISO_LINK,
+ HCI_CONN_PA_SYNC, d);
+
+ if (!d->count)
+ d->pa_sync_term = true;
+
+ d->count = 0;
+ }
+
+ if (test_and_clear_bit(HCI_CONN_BIG_SYNC, &conn->flags)) {
+ hci_conn_hash_list_flag(hdev, find_bis, ISO_LINK,
+ HCI_CONN_BIG_SYNC, d);
+
+ if (!d->count)
+ d->big_sync_term = true;
+ }
ret = hci_cmd_sync_queue(hdev, big_terminate_sync, d,
terminate_big_destroy);
@@ -864,12 +894,6 @@ static void bis_cleanup(struct hci_conn *conn)
hci_le_terminate_big(hdev, conn);
} else {
- bis = hci_conn_hash_lookup_big_any_dst(hdev,
- conn->iso_qos.bcast.big);
-
- if (bis)
- return;
-
hci_le_big_terminate(hdev, conn->iso_qos.bcast.big,
conn);
}
@@ -928,31 +952,18 @@ static void cis_cleanup(struct hci_conn *conn)
hci_le_remove_cig(hdev, conn->iso_qos.ucast.cig);
}
-static u16 hci_conn_hash_alloc_unset(struct hci_dev *hdev)
+static int hci_conn_hash_alloc_unset(struct hci_dev *hdev)
{
- struct hci_conn_hash *h = &hdev->conn_hash;
- struct hci_conn *c;
- u16 handle = HCI_CONN_HANDLE_MAX + 1;
-
- rcu_read_lock();
-
- list_for_each_entry_rcu(c, &h->list, list) {
- /* Find the first unused handle */
- if (handle == 0xffff || c->handle != handle)
- break;
- handle++;
- }
- rcu_read_unlock();
-
- return handle;
+ return ida_alloc_range(&hdev->unset_handle_ida, HCI_CONN_HANDLE_MAX + 1,
+ U16_MAX, GFP_ATOMIC);
}
struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
- u8 role)
+ u8 role, u16 handle)
{
struct hci_conn *conn;
- BT_DBG("%s dst %pMR", hdev->name, dst);
+ bt_dev_dbg(hdev, "dst %pMR handle 0x%4.4x", dst, handle);
conn = kzalloc(sizeof(*conn), GFP_KERNEL);
if (!conn)
@@ -960,7 +971,7 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
bacpy(&conn->dst, dst);
bacpy(&conn->src, &hdev->bdaddr);
- conn->handle = hci_conn_hash_alloc_unset(hdev);
+ conn->handle = handle;
conn->hdev = hdev;
conn->type = type;
conn->role = role;
@@ -973,6 +984,7 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
conn->rssi = HCI_RSSI_INVALID;
conn->tx_power = HCI_TX_POWER_INVALID;
conn->max_tx_power = HCI_TX_POWER_INVALID;
+ conn->sync_handle = HCI_SYNC_HANDLE_INVALID;
set_bit(HCI_CONN_POWER_SAVE, &conn->flags);
conn->disc_timeout = HCI_DISCONN_TIMEOUT;
@@ -1044,6 +1056,20 @@ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
return conn;
}
+struct hci_conn *hci_conn_add_unset(struct hci_dev *hdev, int type,
+ bdaddr_t *dst, u8 role)
+{
+ int handle;
+
+ bt_dev_dbg(hdev, "dst %pMR", dst);
+
+ handle = hci_conn_hash_alloc_unset(hdev);
+ if (unlikely(handle < 0))
+ return NULL;
+
+ return hci_conn_add(hdev, type, dst, role, handle);
+}
+
static void hci_conn_cleanup_child(struct hci_conn *conn, u8 reason)
{
if (!reason)
@@ -1247,6 +1273,12 @@ void hci_conn_failed(struct hci_conn *conn, u8 status)
break;
}
+ /* In case of BIG/PA sync failed, clear conn flags so that
+ * the conns will be correctly cleaned up by ISO layer
+ */
+ test_and_clear_bit(HCI_CONN_BIG_SYNC_FAILED, &conn->flags);
+ test_and_clear_bit(HCI_CONN_PA_SYNC_FAILED, &conn->flags);
+
conn->state = BT_CLOSED;
hci_connect_cfm(conn, status);
hci_conn_del(conn);
@@ -1274,6 +1306,9 @@ u8 hci_conn_set_handle(struct hci_conn *conn, u16 handle)
if (conn->abort_reason)
return conn->abort_reason;
+ if (HCI_CONN_HANDLE_UNSET(conn->handle))
+ ida_free(&hdev->unset_handle_ida, conn->handle);
+
conn->handle = handle;
return 0;
@@ -1381,7 +1416,7 @@ struct hci_conn *hci_connect_le(struct hci_dev *hdev, bdaddr_t *dst,
if (conn) {
bacpy(&conn->dst, dst);
} else {
- conn = hci_conn_add(hdev, LE_LINK, dst, role);
+ conn = hci_conn_add_unset(hdev, LE_LINK, dst, role);
if (!conn)
return ERR_PTR(-ENOMEM);
hci_conn_hold(conn);
@@ -1486,6 +1521,18 @@ static int qos_set_bis(struct hci_dev *hdev, struct bt_iso_qos *qos)
/* Allocate BIS if not set */
if (qos->bcast.bis == BT_ISO_QOS_BIS_UNSET) {
+ if (qos->bcast.big != BT_ISO_QOS_BIG_UNSET) {
+ conn = hci_conn_hash_lookup_big(hdev, qos->bcast.big);
+
+ if (conn) {
+ /* If the BIG handle is already matched to an advertising
+ * handle, do not allocate a new one.
+ */
+ qos->bcast.bis = conn->iso_qos.bcast.bis;
+ return 0;
+ }
+ }
+
/* Find an unused adv set to advertise BIS, skip instance 0x00
* since it is reserved as general purpose set.
*/
@@ -1546,7 +1593,7 @@ static struct hci_conn *hci_add_bis(struct hci_dev *hdev, bdaddr_t *dst,
memcmp(conn->le_per_adv_data, base, base_len)))
return ERR_PTR(-EADDRINUSE);
- conn = hci_conn_add(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
+ conn = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
if (!conn)
return ERR_PTR(-ENOMEM);
@@ -1590,7 +1637,7 @@ struct hci_conn *hci_connect_le_scan(struct hci_dev *hdev, bdaddr_t *dst,
BT_DBG("requesting refresh of dst_addr");
- conn = hci_conn_add(hdev, LE_LINK, dst, HCI_ROLE_MASTER);
+ conn = hci_conn_add_unset(hdev, LE_LINK, dst, HCI_ROLE_MASTER);
if (!conn)
return ERR_PTR(-ENOMEM);
@@ -1638,7 +1685,7 @@ struct hci_conn *hci_connect_acl(struct hci_dev *hdev, bdaddr_t *dst,
acl = hci_conn_hash_lookup_ba(hdev, ACL_LINK, dst);
if (!acl) {
- acl = hci_conn_add(hdev, ACL_LINK, dst, HCI_ROLE_MASTER);
+ acl = hci_conn_add_unset(hdev, ACL_LINK, dst, HCI_ROLE_MASTER);
if (!acl)
return ERR_PTR(-ENOMEM);
}
@@ -1698,7 +1745,7 @@ struct hci_conn *hci_connect_sco(struct hci_dev *hdev, int type, bdaddr_t *dst,
sco = hci_conn_hash_lookup_ba(hdev, type, dst);
if (!sco) {
- sco = hci_conn_add(hdev, type, dst, HCI_ROLE_MASTER);
+ sco = hci_conn_add_unset(hdev, type, dst, HCI_ROLE_MASTER);
if (!sco) {
hci_conn_drop(acl);
return ERR_PTR(-ENOMEM);
@@ -1890,7 +1937,7 @@ struct hci_conn *hci_bind_cis(struct hci_dev *hdev, bdaddr_t *dst,
cis = hci_conn_hash_lookup_cis(hdev, dst, dst_type, qos->ucast.cig,
qos->ucast.cis);
if (!cis) {
- cis = hci_conn_add(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
+ cis = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_MASTER);
if (!cis)
return ERR_PTR(-ENOMEM);
cis->cleanup = cis_cleanup;
@@ -2139,7 +2186,7 @@ int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
} pdu;
int err;
- if (num_bis > sizeof(pdu.bis))
+ if (num_bis < 0x01 || num_bis > sizeof(pdu.bis))
return -EINVAL;
err = qos_set_big(hdev, qos);
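
The hci_conn changes above replace the hand-rolled "first unused handle" scan with an IDA: connections created before the controller assigns a handle get a placeholder from ida_alloc_range() in the range above the valid handle space, and the placeholder is returned with ida_free() once a real handle arrives or the connection is torn down. A minimal sketch of that allocator pattern, assuming kernel context; the names and the handle ceiling below are illustrative.

#include <linux/idr.h>
#include <linux/limits.h>

static DEFINE_IDA(example_unset_ida);   /* hypothetical IDA */

#define EXAMPLE_HANDLE_MAX 0x0eff       /* hypothetical valid-handle ceiling */

/* Hand out a placeholder handle strictly above the valid range. */
static int example_alloc_unset_handle(void)
{
        return ida_alloc_range(&example_unset_ida, EXAMPLE_HANDLE_MAX + 1,
                               U16_MAX, GFP_ATOMIC);
}

/* Return the placeholder once a real handle is known. */
static void example_release_unset_handle(int handle)
{
        if (handle > EXAMPLE_HANDLE_MAX)
                ida_free(&example_unset_ida, handle);
}
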
diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
index 195aea2198a9..65601aa52e0d 100644
--- a/net/bluetooth/hci_core.c
+++ b/net/bluetooth/hci_core.c
@@ -2535,6 +2535,8 @@ struct hci_dev *hci_alloc_dev_priv(int sizeof_priv)
mutex_init(&hdev->lock);
mutex_init(&hdev->req_lock);
+ ida_init(&hdev->unset_handle_ida);
+
INIT_LIST_HEAD(&hdev->mesh_pending);
INIT_LIST_HEAD(&hdev->mgmt_pending);
INIT_LIST_HEAD(&hdev->reject_list);
@@ -2789,6 +2791,7 @@ void hci_release_dev(struct hci_dev *hdev)
hci_codec_list_clear(&hdev->local_codecs);
hci_dev_unlock(hdev);
+ ida_destroy(&hdev->unset_handle_ida);
ida_simple_remove(&hci_index_ida, hdev->id);
kfree_skb(hdev->sent_cmd);
kfree_skb(hdev->recv_event);
diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
index 1e1c9147356c..0849e0dafa95 100644
--- a/net/bluetooth/hci_event.c
+++ b/net/bluetooth/hci_event.c
@@ -2335,8 +2335,8 @@ static void hci_cs_create_conn(struct hci_dev *hdev, __u8 status)
}
} else {
if (!conn) {
- conn = hci_conn_add(hdev, ACL_LINK, &cp->bdaddr,
- HCI_ROLE_MASTER);
+ conn = hci_conn_add_unset(hdev, ACL_LINK, &cp->bdaddr,
+ HCI_ROLE_MASTER);
if (!conn)
bt_dev_err(hdev, "no memory for new connection");
}
@@ -3151,8 +3151,8 @@ static void hci_conn_complete_evt(struct hci_dev *hdev, void *data,
hci_bdaddr_list_lookup_with_flags(&hdev->accept_list,
&ev->bdaddr,
BDADDR_BREDR)) {
- conn = hci_conn_add(hdev, ev->link_type, &ev->bdaddr,
- HCI_ROLE_SLAVE);
+ conn = hci_conn_add_unset(hdev, ev->link_type,
+ &ev->bdaddr, HCI_ROLE_SLAVE);
if (!conn) {
bt_dev_err(hdev, "no memory for new conn");
goto unlock;
@@ -3317,8 +3317,8 @@ static void hci_conn_request_evt(struct hci_dev *hdev, void *data,
conn = hci_conn_hash_lookup_ba(hdev, ev->link_type,
&ev->bdaddr);
if (!conn) {
- conn = hci_conn_add(hdev, ev->link_type, &ev->bdaddr,
- HCI_ROLE_SLAVE);
+ conn = hci_conn_add_unset(hdev, ev->link_type, &ev->bdaddr,
+ HCI_ROLE_SLAVE);
if (!conn) {
bt_dev_err(hdev, "no memory for new connection");
goto unlock;
@@ -5890,7 +5890,7 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
if (status)
goto unlock;
- conn = hci_conn_add(hdev, LE_LINK, bdaddr, role);
+ conn = hci_conn_add_unset(hdev, LE_LINK, bdaddr, role);
if (!conn) {
bt_dev_err(hdev, "no memory for new connection");
goto unlock;
@@ -5952,17 +5952,11 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
conn->dst_type = ev_bdaddr_type(hdev, conn->dst_type, NULL);
- if (handle > HCI_CONN_HANDLE_MAX) {
- bt_dev_err(hdev, "Invalid handle: 0x%4.4x > 0x%4.4x", handle,
- HCI_CONN_HANDLE_MAX);
- status = HCI_ERROR_INVALID_PARAMETERS;
- }
-
/* All connection failure handling is taken care of by the
* hci_conn_failed function which is triggered by the HCI
* request completion callbacks used for connecting.
*/
- if (status)
+ if (status || hci_conn_set_handle(conn, handle))
goto unlock;
/* Drop the connection if it has been aborted */
@@ -5986,7 +5980,6 @@ static void le_conn_complete_evt(struct hci_dev *hdev, u8 status,
mgmt_device_connected(hdev, conn, NULL, 0);
conn->sec_level = BT_SECURITY_LOW;
- conn->handle = handle;
conn->state = BT_CONFIG;
/* Store current advertising instance as connection advertising instance
@@ -6603,7 +6596,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
struct hci_ev_le_pa_sync_established *ev = data;
int mask = hdev->link_mode;
__u8 flags = 0;
- struct hci_conn *bis;
+ struct hci_conn *pa_sync;
bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
@@ -6620,20 +6613,19 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
if (!(flags & HCI_PROTO_DEFER))
goto unlock;
- /* Add connection to indicate the PA sync event */
- bis = hci_conn_add(hdev, ISO_LINK, BDADDR_ANY,
- HCI_ROLE_SLAVE);
+ if (ev->status) {
+ /* Add connection to indicate the failed PA sync event */
+ pa_sync = hci_conn_add_unset(hdev, ISO_LINK, BDADDR_ANY,
+ HCI_ROLE_SLAVE);
- if (!bis)
- goto unlock;
+ if (!pa_sync)
+ goto unlock;
- if (ev->status)
- set_bit(HCI_CONN_PA_SYNC_FAILED, &bis->flags);
- else
- set_bit(HCI_CONN_PA_SYNC, &bis->flags);
+ set_bit(HCI_CONN_PA_SYNC_FAILED, &pa_sync->flags);
- /* Notify connection to iso layer */
- hci_connect_cfm(bis, ev->status);
+ /* Notify iso layer */
+ hci_connect_cfm(pa_sync, ev->status);
+ }
unlock:
hci_dev_unlock(hdev);
@@ -7020,12 +7012,12 @@ static void hci_le_cis_req_evt(struct hci_dev *hdev, void *data,
cis = hci_conn_hash_lookup_handle(hdev, cis_handle);
if (!cis) {
- cis = hci_conn_add(hdev, ISO_LINK, &acl->dst, HCI_ROLE_SLAVE);
+ cis = hci_conn_add(hdev, ISO_LINK, &acl->dst, HCI_ROLE_SLAVE,
+ cis_handle);
if (!cis) {
hci_le_reject_cis(hdev, ev->cis_handle);
goto unlock;
}
- cis->handle = cis_handle;
}
cis->iso_qos.ucast.cig = ev->cig_id;
@@ -7113,7 +7105,6 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
{
struct hci_evt_le_big_sync_estabilished *ev = data;
struct hci_conn *bis;
- struct hci_conn *pa_sync;
int i;
bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
@@ -7124,15 +7115,6 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
hci_dev_lock(hdev);
- if (!ev->status) {
- pa_sync = hci_conn_hash_lookup_pa_sync(hdev, ev->handle);
- if (pa_sync)
- /* Also mark the BIG sync established event on the
- * associated PA sync hcon
- */
- set_bit(HCI_CONN_BIG_SYNC, &pa_sync->flags);
- }
-
for (i = 0; i < ev->num_bis; i++) {
u16 handle = le16_to_cpu(ev->bis[i]);
__le32 interval;
@@ -7140,10 +7122,9 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
bis = hci_conn_hash_lookup_handle(hdev, handle);
if (!bis) {
bis = hci_conn_add(hdev, ISO_LINK, BDADDR_ANY,
- HCI_ROLE_SLAVE);
+ HCI_ROLE_SLAVE, handle);
if (!bis)
continue;
- bis->handle = handle;
}
if (ev->status != 0x42)
@@ -7186,15 +7167,42 @@ static void hci_le_big_info_adv_report_evt(struct hci_dev *hdev, void *data,
struct hci_evt_le_big_info_adv_report *ev = data;
int mask = hdev->link_mode;
__u8 flags = 0;
+ struct hci_conn *pa_sync;
bt_dev_dbg(hdev, "sync_handle 0x%4.4x", le16_to_cpu(ev->sync_handle));
hci_dev_lock(hdev);
mask |= hci_proto_connect_ind(hdev, BDADDR_ANY, ISO_LINK, &flags);
- if (!(mask & HCI_LM_ACCEPT))
+ if (!(mask & HCI_LM_ACCEPT)) {
hci_le_pa_term_sync(hdev, ev->sync_handle);
+ goto unlock;
+ }
+ if (!(flags & HCI_PROTO_DEFER))
+ goto unlock;
+
+ pa_sync = hci_conn_hash_lookup_pa_sync_handle
+ (hdev,
+ le16_to_cpu(ev->sync_handle));
+
+ if (pa_sync)
+ goto unlock;
+
+ /* Add connection to indicate the PA sync event */
+ pa_sync = hci_conn_add_unset(hdev, ISO_LINK, BDADDR_ANY,
+ HCI_ROLE_SLAVE);
+
+ if (!pa_sync)
+ goto unlock;
+
+ pa_sync->sync_handle = le16_to_cpu(ev->sync_handle);
+ set_bit(HCI_CONN_PA_SYNC, &pa_sync->flags);
+
+ /* Notify iso layer */
+ hci_connect_cfm(pa_sync, 0x00);
+
+unlock:
hci_dev_unlock(hdev);
}
diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
index a15ab0b874a9..d85a7091a116 100644
--- a/net/bluetooth/hci_sync.c
+++ b/net/bluetooth/hci_sync.c
@@ -152,7 +152,7 @@ struct sk_buff *__hci_cmd_sync_sk(struct hci_dev *hdev, u16 opcode, u32 plen,
struct sk_buff *skb;
int err = 0;
- bt_dev_dbg(hdev, "Opcode 0x%4x", opcode);
+ bt_dev_dbg(hdev, "Opcode 0x%4.4x", opcode);
hci_req_init(&req, hdev);
@@ -248,7 +248,7 @@ int __hci_cmd_sync_status_sk(struct hci_dev *hdev, u16 opcode, u32 plen,
skb = __hci_cmd_sync_sk(hdev, opcode, plen, param, event, timeout, sk);
if (IS_ERR(skb)) {
if (!event)
- bt_dev_err(hdev, "Opcode 0x%4x failed: %ld", opcode,
+ bt_dev_err(hdev, "Opcode 0x%4.4x failed: %ld", opcode,
PTR_ERR(skb));
return PTR_ERR(skb);
}
@@ -1312,7 +1312,7 @@ int hci_start_ext_adv_sync(struct hci_dev *hdev, u8 instance)
return hci_enable_ext_advertising_sync(hdev, instance);
}
-static int hci_disable_per_advertising_sync(struct hci_dev *hdev, u8 instance)
+int hci_disable_per_advertising_sync(struct hci_dev *hdev, u8 instance)
{
struct hci_cp_le_set_per_adv_enable cp;
struct adv_info *adv = NULL;
@@ -4264,12 +4264,12 @@ static int hci_le_set_host_feature_sync(struct hci_dev *hdev)
{
struct hci_cp_le_set_host_feature cp;
- if (!iso_capable(hdev))
+ if (!cis_capable(hdev))
return 0;
memset(&cp, 0, sizeof(cp));
- /* Isochronous Channels (Host Support) */
+ /* Connected Isochronous Channels (Host Support) */
cp.bit_number = 32;
cp.bit_value = 1;
@@ -5232,6 +5232,17 @@ static int hci_disconnect_sync(struct hci_dev *hdev, struct hci_conn *conn,
if (conn->type == AMP_LINK)
return hci_disconnect_phy_link_sync(hdev, conn->handle, reason);
+ if (test_bit(HCI_CONN_BIG_CREATED, &conn->flags)) {
+ /* This is a BIS connection, hci_conn_del will
+ * do the necessary cleanup.
+ */
+ hci_dev_lock(hdev);
+ hci_conn_failed(conn, reason);
+ hci_dev_unlock(hdev);
+
+ return 0;
+ }
+
memset(&cp, 0, sizeof(cp));
cp.handle = cpu_to_le16(conn->handle);
cp.reason = reason;
@@ -5384,21 +5395,6 @@ int hci_abort_conn_sync(struct hci_dev *hdev, struct hci_conn *conn, u8 reason)
err = hci_reject_conn_sync(hdev, conn, reason);
break;
case BT_OPEN:
- hci_dev_lock(hdev);
-
- /* Cleanup bis or pa sync connections */
- if (test_and_clear_bit(HCI_CONN_BIG_SYNC_FAILED, &conn->flags) ||
- test_and_clear_bit(HCI_CONN_PA_SYNC_FAILED, &conn->flags)) {
- hci_conn_failed(conn, reason);
- } else if (test_bit(HCI_CONN_PA_SYNC, &conn->flags) ||
- test_bit(HCI_CONN_BIG_SYNC, &conn->flags)) {
- conn->state = BT_CLOSED;
- hci_disconn_cfm(conn, reason);
- hci_conn_del(conn);
- }
-
- hci_dev_unlock(hdev);
- return 0;
case BT_BOUND:
break;
default:
diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c
index 15b33579007c..367e32fe30eb 100644
--- a/net/bluetooth/hci_sysfs.c
+++ b/net/bluetooth/hci_sysfs.c
@@ -35,7 +35,7 @@ void hci_conn_init_sysfs(struct hci_conn *conn)
{
struct hci_dev *hdev = conn->hdev;
- BT_DBG("conn %p", conn);
+ bt_dev_dbg(hdev, "conn %p", conn);
conn->dev.type = &bt_link;
conn->dev.class = &bt_class;
@@ -48,27 +48,30 @@ void hci_conn_add_sysfs(struct hci_conn *conn)
{
struct hci_dev *hdev = conn->hdev;
- BT_DBG("conn %p", conn);
+ bt_dev_dbg(hdev, "conn %p", conn);
if (device_is_registered(&conn->dev))
return;
dev_set_name(&conn->dev, "%s:%d", hdev->name, conn->handle);
- if (device_add(&conn->dev) < 0) {
+ if (device_add(&conn->dev) < 0)
bt_dev_err(hdev, "failed to register connection device");
- return;
- }
-
- hci_dev_hold(hdev);
}
void hci_conn_del_sysfs(struct hci_conn *conn)
{
struct hci_dev *hdev = conn->hdev;
- if (!device_is_registered(&conn->dev))
+ bt_dev_dbg(hdev, "conn %p", conn);
+
+ if (!device_is_registered(&conn->dev)) {
+ /* If device_add() has *not* succeeded, use *only* put_device()
+ * to drop the reference count.
+ */
+ put_device(&conn->dev);
return;
+ }
while (1) {
struct device *dev;
@@ -80,9 +83,7 @@ void hci_conn_del_sysfs(struct hci_conn *conn)
put_device(dev);
}
- device_del(&conn->dev);
-
- hci_dev_put(hdev);
+ device_unregister(&conn->dev);
}
static void bt_host_release(struct device *dev)
diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
index 71248163ce9a..07b80e97aead 100644
--- a/net/bluetooth/iso.c
+++ b/net/bluetooth/iso.c
@@ -14,6 +14,7 @@
#include <net/bluetooth/bluetooth.h>
#include <net/bluetooth/hci_core.h>
#include <net/bluetooth/iso.h>
+#include "eir.h"
static const struct proto_ops iso_sock_ops;
@@ -47,6 +48,7 @@ static void iso_sock_kill(struct sock *sk);
#define EIR_SERVICE_DATA_LENGTH 4
#define BASE_MAX_LENGTH (HCI_MAX_PER_AD_LENGTH - EIR_SERVICE_DATA_LENGTH)
+#define EIR_BAA_SERVICE_UUID 0x1851
/* iso_pinfo flags values */
enum {
@@ -77,6 +79,7 @@ static struct bt_iso_qos default_qos;
static bool check_ucast_qos(struct bt_iso_qos *qos);
static bool check_bcast_qos(struct bt_iso_qos *qos);
static bool iso_match_sid(struct sock *sk, void *data);
+static bool iso_match_sync_handle(struct sock *sk, void *data);
static void iso_sock_disconn(struct sock *sk);
/* ---- ISO timers ---- */
@@ -789,8 +792,7 @@ static int iso_sock_bind_bc(struct socket *sock, struct sockaddr *addr,
BT_DBG("sk %p bc_sid %u bc_num_bis %u", sk, sa->iso_bc->bc_sid,
sa->iso_bc->bc_num_bis);
- if (addr_len > sizeof(*sa) + sizeof(*sa->iso_bc) ||
- sa->iso_bc->bc_num_bis < 0x01 || sa->iso_bc->bc_num_bis > 0x1f)
+ if (addr_len > sizeof(*sa) + sizeof(*sa->iso_bc))
return -EINVAL;
bacpy(&iso_pi(sk)->dst, &sa->iso_bc->bc_bdaddr);
@@ -1202,7 +1204,6 @@ static int iso_sock_recvmsg(struct socket *sock, struct msghdr *msg,
test_bit(HCI_CONN_PA_SYNC, &pi->conn->hcon->flags)) {
iso_conn_big_sync(sk);
sk->sk_state = BT_LISTEN;
- set_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags);
} else {
iso_conn_defer_accept(pi->conn->hcon);
sk->sk_state = BT_CONFIG;
@@ -1461,6 +1462,8 @@ static int iso_sock_getsockopt(struct socket *sock, int level, int optname,
len = min_t(unsigned int, len, base_len);
if (copy_to_user(optval, base, len))
err = -EFAULT;
+ if (put_user(len, optlen))
+ err = -EFAULT;
break;
@@ -1579,6 +1582,7 @@ static void iso_conn_ready(struct iso_conn *conn)
struct sock *sk = conn->sk;
struct hci_ev_le_big_sync_estabilished *ev = NULL;
struct hci_ev_le_pa_sync_established *ev2 = NULL;
+ struct hci_evt_le_big_info_adv_report *ev3 = NULL;
struct hci_conn *hcon;
BT_DBG("conn %p", conn);
@@ -1603,14 +1607,20 @@ static void iso_conn_ready(struct iso_conn *conn)
parent = iso_get_sock_listen(&hcon->src,
&hcon->dst,
iso_match_big, ev);
- } else if (test_bit(HCI_CONN_PA_SYNC, &hcon->flags) ||
- test_bit(HCI_CONN_PA_SYNC_FAILED, &hcon->flags)) {
+ } else if (test_bit(HCI_CONN_PA_SYNC_FAILED, &hcon->flags)) {
ev2 = hci_recv_event_data(hcon->hdev,
HCI_EV_LE_PA_SYNC_ESTABLISHED);
if (ev2)
parent = iso_get_sock_listen(&hcon->src,
&hcon->dst,
iso_match_sid, ev2);
+ } else if (test_bit(HCI_CONN_PA_SYNC, &hcon->flags)) {
+ ev3 = hci_recv_event_data(hcon->hdev,
+ HCI_EVT_LE_BIG_INFO_ADV_REPORT);
+ if (ev3)
+ parent = iso_get_sock_listen(&hcon->src,
+ &hcon->dst,
+ iso_match_sync_handle, ev3);
}
if (!parent)
@@ -1650,11 +1660,13 @@ static void iso_conn_ready(struct iso_conn *conn)
hcon->sync_handle = iso_pi(parent)->sync_handle;
}
- if (ev2 && !ev2->status) {
- iso_pi(sk)->sync_handle = iso_pi(parent)->sync_handle;
+ if (ev3) {
iso_pi(sk)->qos = iso_pi(parent)->qos;
+ iso_pi(sk)->qos.bcast.encryption = ev3->encryption;
+ hcon->iso_qos = iso_pi(sk)->qos;
iso_pi(sk)->bc_num_bis = iso_pi(parent)->bc_num_bis;
memcpy(iso_pi(sk)->bc_bis, iso_pi(parent)->bc_bis, ISO_MAX_NUM_BIS);
+ set_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags);
}
bacpy(&iso_pi(sk)->dst, &hcon->dst);
@@ -1774,12 +1786,16 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
ev3 = hci_recv_event_data(hdev, HCI_EV_LE_PER_ADV_REPORT);
if (ev3) {
+ size_t base_len = ev3->length;
+ u8 *base;
+
sk = iso_get_sock_listen(&hdev->bdaddr, bdaddr,
iso_match_sync_handle_pa_report, ev3);
-
- if (sk) {
- memcpy(iso_pi(sk)->base, ev3->data, ev3->length);
- iso_pi(sk)->base_len = ev3->length;
+ base = eir_get_service_data(ev3->data, ev3->length,
+ EIR_BAA_SERVICE_UUID, &base_len);
+ if (base && sk && base_len <= sizeof(iso_pi(sk)->base)) {
+ memcpy(iso_pi(sk)->base, base, base_len);
+ iso_pi(sk)->base_len = base_len;
}
} else {
sk = iso_get_sock_listen(&hdev->bdaddr, BDADDR_ANY, NULL, NULL);
diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
index 3bdfc3f1e73d..e50d3d102078 100644
--- a/net/bluetooth/l2cap_sock.c
+++ b/net/bluetooth/l2cap_sock.c
@@ -1615,7 +1615,7 @@ static struct sk_buff *l2cap_sock_alloc_skb_cb(struct l2cap_chan *chan,
return ERR_PTR(-ENOTCONN);
}
- skb->priority = sk->sk_priority;
+ skb->priority = READ_ONCE(sk->sk_priority);
bt_cb(skb)->l2cap.chan = chan;
diff --git a/net/bluetooth/msft.c b/net/bluetooth/msft.c
index abbafa6194ca..630e3023273b 100644
--- a/net/bluetooth/msft.c
+++ b/net/bluetooth/msft.c
@@ -150,10 +150,7 @@ static bool read_supported_features(struct hci_dev *hdev,
skb = __hci_cmd_sync(hdev, hdev->msft_opcode, sizeof(cp), &cp,
HCI_CMD_TIMEOUT);
- if (IS_ERR_OR_NULL(skb)) {
- if (!skb)
- skb = ERR_PTR(-EIO);
-
+ if (IS_ERR(skb)) {
bt_dev_err(hdev, "Failed to read MSFT supported features (%ld)",
PTR_ERR(skb));
return false;
@@ -353,7 +350,7 @@ static void msft_remove_addr_filters_sync(struct hci_dev *hdev, u8 handle)
skb = __hci_cmd_sync(hdev, hdev->msft_opcode, sizeof(cp), &cp,
HCI_CMD_TIMEOUT);
- if (IS_ERR_OR_NULL(skb)) {
+ if (IS_ERR(skb)) {
kfree(address_filter);
continue;
}
@@ -442,11 +439,8 @@ static int msft_remove_monitor_sync(struct hci_dev *hdev,
skb = __hci_cmd_sync(hdev, hdev->msft_opcode, sizeof(cp), &cp,
HCI_CMD_TIMEOUT);
- if (IS_ERR_OR_NULL(skb)) {
- if (!skb)
- return -EIO;
+ if (IS_ERR(skb))
return PTR_ERR(skb);
- }
return msft_le_cancel_monitor_advertisement_cb(hdev, hdev->msft_opcode,
monitor, skb);
@@ -559,7 +553,7 @@ static int msft_add_monitor_sync(struct hci_dev *hdev,
skb = __hci_cmd_sync(hdev, hdev->msft_opcode, total_size, cp,
HCI_CMD_TIMEOUT);
- if (IS_ERR_OR_NULL(skb)) {
+ if (IS_ERR(skb)) {
err = PTR_ERR(skb);
goto out_free;
}
@@ -740,10 +734,10 @@ static int msft_cancel_address_filter_sync(struct hci_dev *hdev, void *data)
skb = __hci_cmd_sync(hdev, hdev->msft_opcode, sizeof(cp), &cp,
HCI_CMD_TIMEOUT);
- if (IS_ERR_OR_NULL(skb)) {
+ if (IS_ERR(skb)) {
bt_dev_err(hdev, "MSFT: Failed to cancel address (%pMR) filter",
&address_filter->bdaddr);
- err = -EIO;
+ err = PTR_ERR(skb);
goto done;
}
kfree_skb(skb);
@@ -893,7 +887,7 @@ static int msft_add_address_filter_sync(struct hci_dev *hdev, void *data)
skb = __hci_cmd_sync(hdev, hdev->msft_opcode, size, cp,
HCI_CMD_TIMEOUT);
- if (IS_ERR_OR_NULL(skb)) {
+ if (IS_ERR(skb)) {
bt_dev_err(hdev, "Failed to enable address %pMR filter",
&address_filter->bdaddr);
skb = NULL;
diff --git a/net/bridge/br.c b/net/bridge/br.c
index a6e94ceb7c9a..ac19b797dbec 100644
--- a/net/bridge/br.c
+++ b/net/bridge/br.c
@@ -477,3 +477,4 @@ module_exit(br_deinit)
MODULE_LICENSE("GPL");
MODULE_VERSION(BR_VERSION);
MODULE_ALIAS_RTNL_LINK("bridge");
+MODULE_DESCRIPTION("Ethernet bridge driver");
diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c
index 9a5ea06236bd..8f40de3af154 100644
--- a/net/bridge/br_device.c
+++ b/net/bridge/br_device.c
@@ -92,7 +92,7 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
goto out;
}
- mdst = br_mdb_get(brmctx, skb, vid);
+ mdst = br_mdb_entry_skb_get(brmctx, skb, vid);
if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
br_multicast_querier_exists(brmctx, eth_hdr(skb), mdst))
br_multicast_flood(mdst, skb, brmctx, false, true);
@@ -472,6 +472,7 @@ static const struct net_device_ops br_netdev_ops = {
.ndo_mdb_add = br_mdb_add,
.ndo_mdb_del = br_mdb_del,
.ndo_mdb_dump = br_mdb_dump,
+ .ndo_mdb_get = br_mdb_get,
.ndo_bridge_getlink = br_getlink,
.ndo_bridge_setlink = br_setlink,
.ndo_bridge_dellink = br_dellink,
diff --git a/net/bridge/br_fdb.c b/net/bridge/br_fdb.c
index e69a872bfc1d..c622de5eccd0 100644
--- a/net/bridge/br_fdb.c
+++ b/net/bridge/br_fdb.c
@@ -329,11 +329,18 @@ static void fdb_delete(struct net_bridge *br, struct net_bridge_fdb_entry *f,
hlist_del_init_rcu(&f->fdb_node);
rhashtable_remove_fast(&br->fdb_hash_tbl, &f->rhnode,
br_fdb_rht_params);
+ if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &f->flags))
+ atomic_dec(&br->fdb_n_learned);
fdb_notify(br, f, RTM_DELNEIGH, swdev_notify);
call_rcu(&f->rcu, fdb_rcu_free);
}
-/* Delete a local entry if no other port had the same address. */
+/* Delete a local entry if no other port had the same address.
+ *
+ * This function should only be called on entries with BR_FDB_LOCAL set,
+ * so even with BR_FDB_ADDED_BY_USER cleared we never need to increase
+ * the accounting for dynamically learned entries again.
+ */
static void fdb_delete_local(struct net_bridge *br,
const struct net_bridge_port *p,
struct net_bridge_fdb_entry *f)
@@ -388,9 +395,20 @@ static struct net_bridge_fdb_entry *fdb_create(struct net_bridge *br,
__u16 vid,
unsigned long flags)
{
+ bool learned = !test_bit(BR_FDB_ADDED_BY_USER, &flags) &&
+ !test_bit(BR_FDB_LOCAL, &flags);
+ u32 max_learned = READ_ONCE(br->fdb_max_learned);
struct net_bridge_fdb_entry *fdb;
int err;
+ if (likely(learned)) {
+ int n_learned = atomic_read(&br->fdb_n_learned);
+
+ if (unlikely(max_learned && n_learned >= max_learned))
+ return NULL;
+ __set_bit(BR_FDB_DYNAMIC_LEARNED, &flags);
+ }
+
fdb = kmem_cache_alloc(br_fdb_cache, GFP_ATOMIC);
if (!fdb)
return NULL;
@@ -407,6 +425,9 @@ static struct net_bridge_fdb_entry *fdb_create(struct net_bridge *br,
return NULL;
}
+ if (likely(learned))
+ atomic_inc(&br->fdb_n_learned);
+
hlist_add_head_rcu(&fdb->fdb_node, &br->fdb_list);
return fdb;
@@ -661,14 +682,30 @@ static int __fdb_flush_validate_ifindex(const struct net_bridge *br,
return 0;
}
-int br_fdb_delete_bulk(struct ndmsg *ndm, struct nlattr *tb[],
- struct net_device *dev, u16 vid,
+static const struct nla_policy br_fdb_del_bulk_policy[NDA_MAX + 1] = {
+ [NDA_VLAN] = NLA_POLICY_RANGE(NLA_U16, 1, VLAN_N_VID - 2),
+ [NDA_IFINDEX] = NLA_POLICY_MIN(NLA_S32, 1),
+ [NDA_NDM_STATE_MASK] = { .type = NLA_U16 },
+ [NDA_NDM_FLAGS_MASK] = { .type = NLA_U8 },
+};
+
+int br_fdb_delete_bulk(struct nlmsghdr *nlh, struct net_device *dev,
struct netlink_ext_ack *extack)
{
- u8 ndm_flags = ndm->ndm_flags & ~FDB_FLUSH_IGNORED_NDM_FLAGS;
- struct net_bridge_fdb_flush_desc desc = { .vlan_id = vid };
+ struct net_bridge_fdb_flush_desc desc = {};
+ struct ndmsg *ndm = nlmsg_data(nlh);
struct net_bridge_port *p = NULL;
+ struct nlattr *tb[NDA_MAX + 1];
struct net_bridge *br;
+ u8 ndm_flags;
+ int err;
+
+ ndm_flags = ndm->ndm_flags & ~FDB_FLUSH_IGNORED_NDM_FLAGS;
+
+ err = nlmsg_parse(nlh, sizeof(*ndm), tb, NDA_MAX,
+ br_fdb_del_bulk_policy, extack);
+ if (err)
+ return err;
if (netif_is_bridge_master(dev)) {
br = netdev_priv(dev);
@@ -681,6 +718,9 @@ int br_fdb_delete_bulk(struct ndmsg *ndm, struct nlattr *tb[],
br = p->br;
}
+ if (tb[NDA_VLAN])
+ desc.vlan_id = nla_get_u16(tb[NDA_VLAN]);
+
if (ndm_flags & ~FDB_FLUSH_ALLOWED_NDM_FLAGS) {
NL_SET_ERR_MSG(extack, "Unsupported fdb flush ndm flag bits set");
return -EINVAL;
@@ -703,7 +743,7 @@ int br_fdb_delete_bulk(struct ndmsg *ndm, struct nlattr *tb[],
desc.flags_mask |= __ndm_flags_to_fdb_flags(ndm_flags_mask);
}
if (tb[NDA_IFINDEX]) {
- int err, ifidx = nla_get_s32(tb[NDA_IFINDEX]);
+ int ifidx = nla_get_s32(tb[NDA_IFINDEX]);
err = __fdb_flush_validate_ifindex(br, ifidx, extack);
if (err)
@@ -893,8 +933,12 @@ void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source,
clear_bit(BR_FDB_LOCKED, &fdb->flags);
}
- if (unlikely(test_bit(BR_FDB_ADDED_BY_USER, &flags)))
+ if (unlikely(test_bit(BR_FDB_ADDED_BY_USER, &flags))) {
set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags);
+ if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED,
+ &fdb->flags))
+ atomic_dec(&br->fdb_n_learned);
+ }
if (unlikely(fdb_modified)) {
trace_br_fdb_update(br, source, addr, vid, flags);
fdb_notify(br, fdb, RTM_NEWNEIGH, true);
@@ -1056,7 +1100,8 @@ static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source,
if (!(flags & NLM_F_CREATE))
return -ENOENT;
- fdb = fdb_create(br, source, addr, vid, 0);
+ fdb = fdb_create(br, source, addr, vid,
+ BIT(BR_FDB_ADDED_BY_USER));
if (!fdb)
return -ENOMEM;
@@ -1069,6 +1114,10 @@ static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source,
WRITE_ONCE(fdb->dst, source);
modified = true;
}
+
+ set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags);
+ if (test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &fdb->flags))
+ atomic_dec(&br->fdb_n_learned);
}
if (fdb_to_nud(br, fdb) != state) {
@@ -1100,8 +1149,6 @@ static int fdb_add_entry(struct net_bridge *br, struct net_bridge_port *source,
if (fdb_handle_notify(fdb, notify))
modified = true;
- set_bit(BR_FDB_ADDED_BY_USER, &fdb->flags);
-
fdb->used = jiffies;
if (modified) {
if (refresh)
@@ -1445,6 +1492,10 @@ int br_fdb_external_learn_add(struct net_bridge *br, struct net_bridge_port *p,
if (!p)
set_bit(BR_FDB_LOCAL, &fdb->flags);
+ if ((swdev_notify || !p) &&
+ test_and_clear_bit(BR_FDB_DYNAMIC_LEARNED, &fdb->flags))
+ atomic_dec(&br->fdb_n_learned);
+
if (modified)
fdb_notify(br, fdb, RTM_NEWNEIGH, swdev_notify);
}
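
The bridge FDB hunks add an atomic count of dynamically learned entries that is checked against a user-configurable cap in fdb_create(), so learning stops at the limit while user-added and local entries are unaffected. A minimal sketch of the counting pattern, assuming kernel context; the structure and field names below are stand-ins, not the bridge's own.

#include <linux/atomic.h>
#include <linux/types.h>

struct example_table {                  /* hypothetical table state */
        atomic_t n_learned;
        u32 max_learned;                /* 0 means "no limit" */
};

/* Returns true if a new dynamically learned entry may be created,
 * and accounts for it; entries added by the user are not counted. */
static bool example_may_learn(struct example_table *tbl)
{
        u32 max = READ_ONCE(tbl->max_learned);

        if (max && atomic_read(&tbl->n_learned) >= max)
                return false;

        atomic_inc(&tbl->n_learned);
        return true;
}

/* Call when a learned entry is deleted or converted to a user entry. */
static void example_unlearn(struct example_table *tbl)
{
        atomic_dec(&tbl->n_learned);
}
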
diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c
index c729528b5e85..f21097e73482 100644
--- a/net/bridge/br_input.c
+++ b/net/bridge/br_input.c
@@ -175,7 +175,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
switch (pkt_type) {
case BR_PKT_MULTICAST:
- mdst = br_mdb_get(brmctx, skb, vid);
+ mdst = br_mdb_entry_skb_get(brmctx, skb, vid);
if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
br_multicast_querier_exists(brmctx, eth_hdr(skb), mdst)) {
if ((mdst && mdst->host_joined) ||
diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index 7305f5f8215c..8cc526067bc2 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -323,9 +323,6 @@ static int br_mdb_fill_info(struct sk_buff *skb, struct netlink_callback *cb,
struct net_bridge_mdb_entry *mp;
struct nlattr *nest, *nest2;
- if (!br_opt_get(br, BROPT_MULTICAST_ENABLED))
- return 0;
-
nest = nla_nest_start_noflag(skb, MDBA_MDB);
if (nest == NULL)
return -EMSGSIZE;
@@ -453,13 +450,15 @@ cancel:
return -EMSGSIZE;
}
-static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg)
+static size_t rtnl_mdb_nlmsg_pg_size(const struct net_bridge_port_group *pg)
{
- size_t nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
- nla_total_size(sizeof(struct br_mdb_entry)) +
- nla_total_size(sizeof(u32));
struct net_bridge_group_src *ent;
- size_t addr_size = 0;
+ size_t nlmsg_size, addr_size = 0;
+
+ /* MDBA_MDB_ENTRY_INFO */
+ nlmsg_size = nla_total_size(sizeof(struct br_mdb_entry)) +
+ /* MDBA_MDB_EATTR_TIMER */
+ nla_total_size(sizeof(u32));
if (!pg)
goto out;
@@ -507,6 +506,17 @@ out:
return nlmsg_size;
}
+static size_t rtnl_mdb_nlmsg_size(const struct net_bridge_port_group *pg)
+{
+ return NLMSG_ALIGN(sizeof(struct br_port_msg)) +
+ /* MDBA_MDB */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY */
+ nla_total_size(0) +
+ /* Port group entry */
+ rtnl_mdb_nlmsg_pg_size(pg);
+}
+
void br_mdb_notify(struct net_device *dev,
struct net_bridge_mdb_entry *mp,
struct net_bridge_port_group *pg,
@@ -1401,3 +1411,161 @@ int br_mdb_del(struct net_device *dev, struct nlattr *tb[],
br_mdb_config_fini(&cfg);
return err;
}
+
+static const struct nla_policy br_mdbe_attrs_get_pol[MDBE_ATTR_MAX + 1] = {
+ [MDBE_ATTR_SOURCE] = NLA_POLICY_RANGE(NLA_BINARY,
+ sizeof(struct in_addr),
+ sizeof(struct in6_addr)),
+};
+
+static int br_mdb_get_parse(struct net_device *dev, struct nlattr *tb[],
+ struct br_ip *group, struct netlink_ext_ack *extack)
+{
+ struct br_mdb_entry *entry = nla_data(tb[MDBA_GET_ENTRY]);
+ struct nlattr *mdbe_attrs[MDBE_ATTR_MAX + 1];
+ int err;
+
+ if (!tb[MDBA_GET_ENTRY_ATTRS]) {
+ __mdb_entry_to_br_ip(entry, group, NULL);
+ return 0;
+ }
+
+ err = nla_parse_nested(mdbe_attrs, MDBE_ATTR_MAX,
+ tb[MDBA_GET_ENTRY_ATTRS], br_mdbe_attrs_get_pol,
+ extack);
+ if (err)
+ return err;
+
+ if (mdbe_attrs[MDBE_ATTR_SOURCE] &&
+ !is_valid_mdb_source(mdbe_attrs[MDBE_ATTR_SOURCE],
+ entry->addr.proto, extack))
+ return -EINVAL;
+
+ __mdb_entry_to_br_ip(entry, group, mdbe_attrs);
+
+ return 0;
+}
+
+static struct sk_buff *
+br_mdb_get_reply_alloc(const struct net_bridge_mdb_entry *mp)
+{
+ struct net_bridge_port_group *pg;
+ size_t nlmsg_size;
+
+ nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
+ /* MDBA_MDB */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY */
+ nla_total_size(0);
+
+ if (mp->host_joined)
+ nlmsg_size += rtnl_mdb_nlmsg_pg_size(NULL);
+
+ for (pg = mlock_dereference(mp->ports, mp->br); pg;
+ pg = mlock_dereference(pg->next, mp->br))
+ nlmsg_size += rtnl_mdb_nlmsg_pg_size(pg);
+
+ return nlmsg_new(nlmsg_size, GFP_ATOMIC);
+}
+
+static int br_mdb_get_reply_fill(struct sk_buff *skb,
+ struct net_bridge_mdb_entry *mp, u32 portid,
+ u32 seq)
+{
+ struct nlattr *mdb_nest, *mdb_entry_nest;
+ struct net_bridge_port_group *pg;
+ struct br_port_msg *bpm;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = nlmsg_put(skb, portid, seq, RTM_NEWMDB, sizeof(*bpm), 0);
+ if (!nlh)
+ return -EMSGSIZE;
+
+ bpm = nlmsg_data(nlh);
+ memset(bpm, 0, sizeof(*bpm));
+ bpm->family = AF_BRIDGE;
+ bpm->ifindex = mp->br->dev->ifindex;
+ mdb_nest = nla_nest_start_noflag(skb, MDBA_MDB);
+ if (!mdb_nest) {
+ err = -EMSGSIZE;
+ goto cancel;
+ }
+ mdb_entry_nest = nla_nest_start_noflag(skb, MDBA_MDB_ENTRY);
+ if (!mdb_entry_nest) {
+ err = -EMSGSIZE;
+ goto cancel;
+ }
+
+ if (mp->host_joined) {
+ err = __mdb_fill_info(skb, mp, NULL);
+ if (err)
+ goto cancel;
+ }
+
+ for (pg = mlock_dereference(mp->ports, mp->br); pg;
+ pg = mlock_dereference(pg->next, mp->br)) {
+ err = __mdb_fill_info(skb, mp, pg);
+ if (err)
+ goto cancel;
+ }
+
+ nla_nest_end(skb, mdb_entry_nest);
+ nla_nest_end(skb, mdb_nest);
+ nlmsg_end(skb, nlh);
+
+ return 0;
+
+cancel:
+ nlmsg_cancel(skb, nlh);
+ return err;
+}
+
+int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
+ struct netlink_ext_ack *extack)
+{
+ struct net_bridge *br = netdev_priv(dev);
+ struct net_bridge_mdb_entry *mp;
+ struct sk_buff *skb;
+ struct br_ip group;
+ int err;
+
+ err = br_mdb_get_parse(dev, tb, &group, extack);
+ if (err)
+ return err;
+
+ /* Hold the multicast lock to ensure that the MDB entry does not change
+ * between the time the reply size is determined and when the reply is
+ * filled in.
+ */
+ spin_lock_bh(&br->multicast_lock);
+
+ mp = br_mdb_ip_get(br, &group);
+ if (!mp) {
+ NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
+ err = -ENOENT;
+ goto unlock;
+ }
+
+ skb = br_mdb_get_reply_alloc(mp);
+ if (!skb) {
+ err = -ENOMEM;
+ goto unlock;
+ }
+
+ err = br_mdb_get_reply_fill(skb, mp, portid, seq);
+ if (err) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to fill MDB get reply");
+ goto free;
+ }
+
+ spin_unlock_bh(&br->multicast_lock);
+
+ return rtnl_unicast(skb, dev_net(dev), portid);
+
+free:
+ kfree_skb(skb);
+unlock:
+ spin_unlock_bh(&br->multicast_lock);
+ return err;
+}
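
Editor's note: the new RTM_GETMDB handler follows a common single-object GET shape: take the lock, look the object up, allocate a reply sized from the object, fill it while still holding the lock, then unicast it. A compressed sketch of that shape with hypothetical foo_* helpers; only nlmsg_new(), kfree_skb(), rtnl_unicast() and spin_lock_bh() are real kernel APIs here.

static int foo_obj_get(struct foo_table *tbl, const struct foo_key *key,
                       struct net *net, u32 portid, u32 seq)
{
        struct foo_obj *obj;
        struct sk_buff *skb;
        int err;

        spin_lock_bh(&tbl->lock);
        obj = foo_obj_lookup(tbl, key);
        if (!obj) {
                err = -ENOENT;
                goto unlock;
        }

        skb = nlmsg_new(foo_obj_nlmsg_size(obj), GFP_ATOMIC);
        if (!skb) {
                err = -ENOMEM;
                goto unlock;
        }

        err = foo_obj_fill(skb, obj, portid, seq);
        if (err) {
                kfree_skb(skb);
                goto unlock;
        }
        spin_unlock_bh(&tbl->lock);

        return rtnl_unicast(skb, net, portid);

unlock:
        spin_unlock_bh(&tbl->lock);
        return err;
}
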
diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index 96d1fc78dd39..d7d021af1029 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -145,8 +145,9 @@ static struct net_bridge_mdb_entry *br_mdb_ip6_get(struct net_bridge *br,
}
#endif
-struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx,
- struct sk_buff *skb, u16 vid)
+struct net_bridge_mdb_entry *
+br_mdb_entry_skb_get(struct net_bridge_mcast *brmctx, struct sk_buff *skb,
+ u16 vid)
{
struct net_bridge *br = brmctx->br;
struct br_ip ip;
diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
index 033034d68f1f..6adcb45bca75 100644
--- a/net/bridge/br_netfilter_hooks.c
+++ b/net/bridge/br_netfilter_hooks.c
@@ -486,11 +486,11 @@ static unsigned int br_nf_pre_routing(void *priv,
struct brnf_net *brnet;
if (unlikely(!pskb_may_pull(skb, len)))
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_PKT_TOO_SMALL, 0);
p = br_port_get_rcu(state->in);
if (p == NULL)
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_DEV_READY, 0);
br = p->br;
brnet = net_generic(state->net, brnf_net_id);
@@ -501,7 +501,7 @@ static unsigned int br_nf_pre_routing(void *priv,
return NF_ACCEPT;
if (!ipv6_mod_enabled()) {
pr_warn_once("Module ipv6 is disabled, so call_ip6tables is not supported.");
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_IPV6DISABLED, 0);
}
nf_bridge_pull_encap_header_rcsum(skb);
@@ -518,12 +518,12 @@ static unsigned int br_nf_pre_routing(void *priv,
nf_bridge_pull_encap_header_rcsum(skb);
if (br_validate_ipv4(state->net, skb))
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_IP_INHDR, 0);
if (!nf_bridge_alloc(skb))
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_NOMEM, 0);
if (!setup_pre_routing(skb, state->net))
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_DEV_READY, 0);
nf_bridge = nf_bridge_info_get(skb);
nf_bridge->ipv4_daddr = ip_hdr(skb)->daddr;
@@ -570,18 +570,12 @@ static int br_nf_forward_finish(struct net *net, struct sock *sk, struct sk_buff
}
-/* This is the 'purely bridged' case. For IP, we pass the packet to
- * netfilter with indev and outdev set to the bridge device,
- * but we are still able to filter on the 'real' indev/outdev
- * because of the physdev module. For ARP, indev and outdev are the
- * bridge ports. */
-static unsigned int br_nf_forward_ip(void *priv,
- struct sk_buff *skb,
- const struct nf_hook_state *state)
+static unsigned int br_nf_forward_ip(struct sk_buff *skb,
+ const struct nf_hook_state *state,
+ u8 pf)
{
struct nf_bridge_info *nf_bridge;
struct net_device *parent;
- u_int8_t pf;
nf_bridge = nf_bridge_info_get(skb);
if (!nf_bridge)
@@ -590,24 +584,15 @@ static unsigned int br_nf_forward_ip(void *priv,
/* Need exclusive nf_bridge_info since we might have multiple
* different physoutdevs. */
if (!nf_bridge_unshare(skb))
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_NOMEM, 0);
nf_bridge = nf_bridge_info_get(skb);
if (!nf_bridge)
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_NOMEM, 0);
parent = bridge_parent(state->out);
if (!parent)
- return NF_DROP;
-
- if (IS_IP(skb) || is_vlan_ip(skb, state->net) ||
- is_pppoe_ip(skb, state->net))
- pf = NFPROTO_IPV4;
- else if (IS_IPV6(skb) || is_vlan_ipv6(skb, state->net) ||
- is_pppoe_ipv6(skb, state->net))
- pf = NFPROTO_IPV6;
- else
- return NF_ACCEPT;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_DEV_READY, 0);
nf_bridge_pull_encap_header(skb);
@@ -618,21 +603,20 @@ static unsigned int br_nf_forward_ip(void *priv,
if (pf == NFPROTO_IPV4) {
if (br_validate_ipv4(state->net, skb))
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_IP_INHDR, 0);
IPCB(skb)->frag_max_size = nf_bridge->frag_max_size;
- }
-
- if (pf == NFPROTO_IPV6) {
+ skb->protocol = htons(ETH_P_IP);
+ } else if (pf == NFPROTO_IPV6) {
if (br_validate_ipv6(state->net, skb))
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_IP_INHDR, 0);
IP6CB(skb)->frag_max_size = nf_bridge->frag_max_size;
+ skb->protocol = htons(ETH_P_IPV6);
+ } else {
+ WARN_ON_ONCE(1);
+ return NF_DROP;
}
nf_bridge->physoutdev = skb->dev;
- if (pf == NFPROTO_IPV4)
- skb->protocol = htons(ETH_P_IP);
- else
- skb->protocol = htons(ETH_P_IPV6);
NF_HOOK(pf, NF_INET_FORWARD, state->net, NULL, skb,
brnf_get_logical_dev(skb, state->in, state->net),
@@ -641,8 +625,7 @@ static unsigned int br_nf_forward_ip(void *priv,
return NF_STOLEN;
}
-static unsigned int br_nf_forward_arp(void *priv,
- struct sk_buff *skb,
+static unsigned int br_nf_forward_arp(struct sk_buff *skb,
const struct nf_hook_state *state)
{
struct net_bridge_port *p;
@@ -659,14 +642,11 @@ static unsigned int br_nf_forward_arp(void *priv,
if (!brnet->call_arptables && !br_opt_get(br, BROPT_NF_CALL_ARPTABLES))
return NF_ACCEPT;
- if (!IS_ARP(skb)) {
- if (!is_vlan_arp(skb, state->net))
- return NF_ACCEPT;
+ if (is_vlan_arp(skb, state->net))
nf_bridge_pull_encap_header(skb);
- }
if (unlikely(!pskb_may_pull(skb, sizeof(struct arphdr))))
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_PKT_TOO_SMALL, 0);
if (arp_hdr(skb)->ar_pln != 4) {
if (is_vlan_arp(skb, state->net))
@@ -680,6 +660,28 @@ static unsigned int br_nf_forward_arp(void *priv,
return NF_STOLEN;
}
+/* This is the 'purely bridged' case. For IP, we pass the packet to
+ * netfilter with indev and outdev set to the bridge device,
+ * but we are still able to filter on the 'real' indev/outdev
+ * because of the physdev module. For ARP, indev and outdev are the
+ * bridge ports.
+ */
+static unsigned int br_nf_forward(void *priv,
+ struct sk_buff *skb,
+ const struct nf_hook_state *state)
+{
+ if (IS_IP(skb) || is_vlan_ip(skb, state->net) ||
+ is_pppoe_ip(skb, state->net))
+ return br_nf_forward_ip(skb, state, NFPROTO_IPV4);
+ if (IS_IPV6(skb) || is_vlan_ipv6(skb, state->net) ||
+ is_pppoe_ipv6(skb, state->net))
+ return br_nf_forward_ip(skb, state, NFPROTO_IPV6);
+ if (IS_ARP(skb) || is_vlan_arp(skb, state->net))
+ return br_nf_forward_arp(skb, state);
+
+ return NF_ACCEPT;
+}
+
static int br_nf_push_frag_xmit(struct net *net, struct sock *sk, struct sk_buff *skb)
{
struct brnf_frag_data *data;
@@ -831,7 +833,7 @@ static unsigned int br_nf_post_routing(void *priv,
return NF_ACCEPT;
if (!realoutdev)
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_DEV_READY, 0);
if (IS_IP(skb) || is_vlan_ip(skb, state->net) ||
is_pppoe_ip(skb, state->net))
@@ -937,13 +939,7 @@ static const struct nf_hook_ops br_nf_ops[] = {
.priority = NF_BR_PRI_BRNF,
},
{
- .hook = br_nf_forward_ip,
- .pf = NFPROTO_BRIDGE,
- .hooknum = NF_BR_FORWARD,
- .priority = NF_BR_PRI_BRNF - 1,
- },
- {
- .hook = br_nf_forward_arp,
+ .hook = br_nf_forward,
.pf = NFPROTO_BRIDGE,
.hooknum = NF_BR_FORWARD,
.priority = NF_BR_PRI_BRNF,
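
Editor's note: this conversion replaces bare NF_DROP returns with NF_DROP_REASON() so dropped skbs carry an attributable reason, and collapses the two NF_BR_FORWARD hooks into one dispatcher. A minimal sketch of a bridge-forward hook using the same macro; foo_* is hypothetical, the macro and enum values are the ones used above.

static unsigned int foo_br_forward_hook(void *priv, struct sk_buff *skb,
                                        const struct nf_hook_state *state)
{
        /* mirror of the pattern above: refuse runt frames with a reason */
        if (unlikely(!pskb_may_pull(skb, sizeof(struct arphdr))))
                return NF_DROP_REASON(skb, SKB_DROP_REASON_PKT_TOO_SMALL, 0);

        return NF_ACCEPT;
}

static const struct nf_hook_ops foo_br_ops = {
        .hook     = foo_br_forward_hook,
        .pf       = NFPROTO_BRIDGE,
        .hooknum  = NF_BR_FORWARD,
        .priority = NF_BR_PRI_FIRST,
};
/* registered with nf_register_net_hook(net, &foo_br_ops) */
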
diff --git a/net/bridge/br_netfilter_ipv6.c b/net/bridge/br_netfilter_ipv6.c
index 550039dfc31a..2e24a743f917 100644
--- a/net/bridge/br_netfilter_ipv6.c
+++ b/net/bridge/br_netfilter_ipv6.c
@@ -161,13 +161,13 @@ unsigned int br_nf_pre_routing_ipv6(void *priv,
struct nf_bridge_info *nf_bridge;
if (br_validate_ipv6(state->net, skb))
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_IP_INHDR, 0);
nf_bridge = nf_bridge_alloc(skb);
if (!nf_bridge)
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_NOMEM, 0);
if (!setup_pre_routing(skb, state->net))
- return NF_DROP;
+ return NF_DROP_REASON(skb, SKB_DROP_REASON_DEV_READY, 0);
nf_bridge = nf_bridge_info_get(skb);
nf_bridge->ipv6_daddr = ipv6_hdr(skb)->daddr;
diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
index 10f0d33d8ccf..5ad4abfcb7ba 100644
--- a/net/bridge/br_netlink.c
+++ b/net/bridge/br_netlink.c
@@ -1229,6 +1229,8 @@ static size_t br_port_get_slave_size(const struct net_device *brdev,
}
static const struct nla_policy br_policy[IFLA_BR_MAX + 1] = {
+ [IFLA_BR_UNSPEC] = { .strict_start_type =
+ IFLA_BR_FDB_N_LEARNED },
[IFLA_BR_FORWARD_DELAY] = { .type = NLA_U32 },
[IFLA_BR_HELLO_TIME] = { .type = NLA_U32 },
[IFLA_BR_MAX_AGE] = { .type = NLA_U32 },
@@ -1265,6 +1267,8 @@ static const struct nla_policy br_policy[IFLA_BR_MAX + 1] = {
[IFLA_BR_VLAN_STATS_PER_PORT] = { .type = NLA_U8 },
[IFLA_BR_MULTI_BOOLOPT] =
NLA_POLICY_EXACT_LEN(sizeof(struct br_boolopt_multi)),
+ [IFLA_BR_FDB_N_LEARNED] = { .type = NLA_REJECT },
+ [IFLA_BR_FDB_MAX_LEARNED] = { .type = NLA_U32 },
};
static int br_changelink(struct net_device *brdev, struct nlattr *tb[],
@@ -1539,6 +1543,12 @@ static int br_changelink(struct net_device *brdev, struct nlattr *tb[],
return err;
}
+ if (data[IFLA_BR_FDB_MAX_LEARNED]) {
+ u32 val = nla_get_u32(data[IFLA_BR_FDB_MAX_LEARNED]);
+
+ WRITE_ONCE(br->fdb_max_learned, val);
+ }
+
return 0;
}
@@ -1593,6 +1603,8 @@ static size_t br_get_size(const struct net_device *brdev)
nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_TOPOLOGY_CHANGE_TIMER */
nla_total_size_64bit(sizeof(u64)) + /* IFLA_BR_GC_TIMER */
nla_total_size(ETH_ALEN) + /* IFLA_BR_GROUP_ADDR */
+ nla_total_size(sizeof(u32)) + /* IFLA_BR_FDB_N_LEARNED */
+ nla_total_size(sizeof(u32)) + /* IFLA_BR_FDB_MAX_LEARNED */
#ifdef CONFIG_BRIDGE_IGMP_SNOOPING
nla_total_size(sizeof(u8)) + /* IFLA_BR_MCAST_ROUTER */
nla_total_size(sizeof(u8)) + /* IFLA_BR_MCAST_SNOOPING */
@@ -1668,7 +1680,10 @@ static int br_fill_info(struct sk_buff *skb, const struct net_device *brdev)
nla_put_u8(skb, IFLA_BR_TOPOLOGY_CHANGE_DETECTED,
br->topology_change_detected) ||
nla_put(skb, IFLA_BR_GROUP_ADDR, ETH_ALEN, br->group_addr) ||
- nla_put(skb, IFLA_BR_MULTI_BOOLOPT, sizeof(bm), &bm))
+ nla_put(skb, IFLA_BR_MULTI_BOOLOPT, sizeof(bm), &bm) ||
+ nla_put_u32(skb, IFLA_BR_FDB_N_LEARNED,
+ atomic_read(&br->fdb_n_learned)) ||
+ nla_put_u32(skb, IFLA_BR_FDB_MAX_LEARNED, br->fdb_max_learned))
return -EMSGSIZE;
#ifdef CONFIG_BRIDGE_VLAN_FILTERING
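
Editor's note: IFLA_BR_FDB_MAX_LEARNED is stored with WRITE_ONCE() and the counter is an atomic_t, so the learning path can check the cap locklessly. The fdb_create() side of this series is not in the hunks above; the helper below is therefore an assumption about its shape rather than a quote of the patch (0 meaning "no limit" matches the attribute being optional).

static bool foo_fdb_learning_allowed(struct net_bridge *br)
{
        u32 max = READ_ONCE(br->fdb_max_learned);

        /* assumed semantics: 0 disables the limit */
        return !max || atomic_read(&br->fdb_n_learned) < max;
}
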
diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
index a1f4acfa6994..6b7f36769d03 100644
--- a/net/bridge/br_private.h
+++ b/net/bridge/br_private.h
@@ -274,6 +274,7 @@ enum {
BR_FDB_NOTIFY,
BR_FDB_NOTIFY_INACTIVE,
BR_FDB_LOCKED,
+ BR_FDB_DYNAMIC_LEARNED,
};
struct net_bridge_fdb_key {
@@ -555,6 +556,9 @@ struct net_bridge {
struct kobject *ifobj;
u32 auto_cnt;
+ atomic_t fdb_n_learned;
+ u32 fdb_max_learned;
+
#ifdef CONFIG_NET_SWITCHDEV
/* Counter used to make sure that hardware domains get unique
* identifiers in case a bridge spans multiple switchdev instances.
@@ -847,8 +851,7 @@ void br_fdb_update(struct net_bridge *br, struct net_bridge_port *source,
int br_fdb_delete(struct ndmsg *ndm, struct nlattr *tb[],
struct net_device *dev, const unsigned char *addr, u16 vid,
struct netlink_ext_ack *extack);
-int br_fdb_delete_bulk(struct ndmsg *ndm, struct nlattr *tb[],
- struct net_device *dev, u16 vid,
+int br_fdb_delete_bulk(struct nlmsghdr *nlh, struct net_device *dev,
struct netlink_ext_ack *extack);
int br_fdb_add(struct ndmsg *nlh, struct nlattr *tb[], struct net_device *dev,
const unsigned char *addr, u16 vid, u16 nlh_flags,
@@ -952,8 +955,9 @@ int br_multicast_rcv(struct net_bridge_mcast **brmctx,
struct net_bridge_mcast_port **pmctx,
struct net_bridge_vlan *vlan,
struct sk_buff *skb, u16 vid);
-struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx,
- struct sk_buff *skb, u16 vid);
+struct net_bridge_mdb_entry *
+br_mdb_entry_skb_get(struct net_bridge_mcast *brmctx, struct sk_buff *skb,
+ u16 vid);
int br_multicast_add_port(struct net_bridge_port *port);
void br_multicast_del_port(struct net_bridge_port *port);
void br_multicast_enable_port(struct net_bridge_port *port);
@@ -1018,6 +1022,8 @@ int br_mdb_del(struct net_device *dev, struct nlattr *tb[],
struct netlink_ext_ack *extack);
int br_mdb_dump(struct net_device *dev, struct sk_buff *skb,
struct netlink_callback *cb);
+int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
+ struct netlink_ext_ack *extack);
void br_multicast_host_join(const struct net_bridge_mcast *brmctx,
struct net_bridge_mdb_entry *mp, bool notify);
void br_multicast_host_leave(struct net_bridge_mdb_entry *mp, bool notify);
@@ -1342,8 +1348,9 @@ static inline int br_multicast_rcv(struct net_bridge_mcast **brmctx,
return 0;
}
-static inline struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx,
- struct sk_buff *skb, u16 vid)
+static inline struct net_bridge_mdb_entry *
+br_mdb_entry_skb_get(struct net_bridge_mcast *brmctx, struct sk_buff *skb,
+ u16 vid)
{
return NULL;
}
@@ -1427,6 +1434,13 @@ static inline int br_mdb_dump(struct net_device *dev, struct sk_buff *skb,
return 0;
}
+static inline int br_mdb_get(struct net_device *dev, struct nlattr *tb[],
+ u32 portid, u32 seq,
+ struct netlink_ext_ack *extack)
+{
+ return -EOPNOTSUPP;
+}
+
static inline int br_mdb_hash_init(struct net_bridge *br)
{
return 0;
diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
index b28c976f52a0..14c431663233 100644
--- a/net/can/j1939/socket.c
+++ b/net/can/j1939/socket.c
@@ -884,7 +884,7 @@ static struct sk_buff *j1939_sk_alloc_skb(struct net_device *ndev,
skcb = j1939_skb_to_cb(skb);
memset(skcb, 0, sizeof(*skcb));
skcb->addr = jsk->addr;
- skcb->priority = j1939_prio(sk->sk_priority);
+ skcb->priority = j1939_prio(READ_ONCE(sk->sk_priority));
if (msg->msg_name) {
struct sockaddr_can *addr = msg->msg_name;
diff --git a/net/can/raw.c b/net/can/raw.c
index d50c3f3d892f..e6b822624ba2 100644
--- a/net/can/raw.c
+++ b/net/can/raw.c
@@ -493,8 +493,7 @@ static int raw_bind(struct socket *sock, struct sockaddr *uaddr, int len)
out_put_dev:
/* remove potential reference from dev_get_by_index() */
- if (dev)
- dev_put(dev);
+ dev_put(dev);
out:
release_sock(sk);
rtnl_unlock();
@@ -881,7 +880,7 @@ static int raw_sendmsg(struct socket *sock, struct msghdr *msg, size_t size)
}
skb->dev = dev;
- skb->priority = sk->sk_priority;
+ skb->priority = READ_ONCE(sk->sk_priority);
skb->mark = READ_ONCE(sk->sk_mark);
skb->tstamp = sockc.transmit_time;
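
Editor's note: sk_priority can be changed via SO_PRIORITY concurrently with the send path, so the reads above are annotated with READ_ONCE(); the matching setter side is expected to use WRITE_ONCE(). A generic sketch of the pairing with illustrative names:

struct foo_sock {
        u32 prio;       /* may be updated concurrently with senders */
};

static void foo_set_prio(struct foo_sock *fs, u32 prio)
{
        WRITE_ONCE(fs->prio, prio);     /* single, non-torn store */
}

static u32 foo_get_prio(const struct foo_sock *fs)
{
        return READ_ONCE(fs->prio);     /* single, non-torn load */
}
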
diff --git a/net/ceph/mon_client.c b/net/ceph/mon_client.c
index faabad6603db..f263f7e91a21 100644
--- a/net/ceph/mon_client.c
+++ b/net/ceph/mon_client.c
@@ -1136,6 +1136,7 @@ static int build_initial_monmap(struct ceph_mon_client *monc)
GFP_KERNEL);
if (!monc->monmap)
return -ENOMEM;
+ monc->monmap->num_mon = num_mon;
for (i = 0; i < num_mon; i++) {
struct ceph_entity_inst *inst = &monc->monmap->mon_inst[i];
@@ -1147,7 +1148,6 @@ static int build_initial_monmap(struct ceph_mon_client *monc)
inst->name.type = CEPH_ENTITY_TYPE_MON;
inst->name.num = cpu_to_le64(i);
}
- monc->monmap->num_mon = num_mon;
return 0;
}
diff --git a/net/core/Makefile b/net/core/Makefile
index 731db2eaa610..0cb734cbc24b 100644
--- a/net/core/Makefile
+++ b/net/core/Makefile
@@ -40,3 +40,4 @@ obj-$(CONFIG_NET_SOCK_MSG) += skmsg.o
obj-$(CONFIG_BPF_SYSCALL) += sock_map.o
obj-$(CONFIG_BPF_SYSCALL) += bpf_sk_storage.o
obj-$(CONFIG_OF) += of_net.o
+obj-$(CONFIG_NET_TEST) += gso_test.o
diff --git a/net/core/dev.c b/net/core/dev.c
index 9f3f8930c691..0d548431f3fa 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1057,7 +1057,7 @@ EXPORT_SYMBOL(dev_valid_name);
* __dev_alloc_name - allocate a name for a device
* @net: network namespace to allocate the device name in
* @name: name format string
- * @buf: scratch buffer and result name string
+ * @res: result name string
*
* Passed a format string - eg "lt%d" it will try and find a suitable
* id. It scans list of devices to build up a free map, then chooses
@@ -1068,106 +1068,79 @@ EXPORT_SYMBOL(dev_valid_name);
* Returns the number of the unit assigned or a negative errno code.
*/
-static int __dev_alloc_name(struct net *net, const char *name, char *buf)
+static int __dev_alloc_name(struct net *net, const char *name, char *res)
{
int i = 0;
const char *p;
const int max_netdevices = 8*PAGE_SIZE;
unsigned long *inuse;
struct net_device *d;
+ char buf[IFNAMSIZ];
- if (!dev_valid_name(name))
- return -EINVAL;
-
+ /* Verify the string as this thing may have come from the user.
+ * There must be one "%d" and no other "%" characters.
+ */
p = strchr(name, '%');
- if (p) {
- /*
- * Verify the string as this thing may have come from
- * the user. There must be either one "%d" and no other "%"
- * characters.
- */
- if (p[1] != 'd' || strchr(p + 2, '%'))
- return -EINVAL;
-
- /* Use one page as a bit array of possible slots */
- inuse = bitmap_zalloc(max_netdevices, GFP_ATOMIC);
- if (!inuse)
- return -ENOMEM;
+ if (!p || p[1] != 'd' || strchr(p + 2, '%'))
+ return -EINVAL;
- for_each_netdev(net, d) {
- struct netdev_name_node *name_node;
+ /* Use one page as a bit array of possible slots */
+ inuse = bitmap_zalloc(max_netdevices, GFP_ATOMIC);
+ if (!inuse)
+ return -ENOMEM;
- netdev_for_each_altname(d, name_node) {
- if (!sscanf(name_node->name, name, &i))
- continue;
- if (i < 0 || i >= max_netdevices)
- continue;
+ for_each_netdev(net, d) {
+ struct netdev_name_node *name_node;
- /* avoid cases where sscanf is not exact inverse of printf */
- snprintf(buf, IFNAMSIZ, name, i);
- if (!strncmp(buf, name_node->name, IFNAMSIZ))
- __set_bit(i, inuse);
- }
- if (!sscanf(d->name, name, &i))
+ netdev_for_each_altname(d, name_node) {
+ if (!sscanf(name_node->name, name, &i))
continue;
if (i < 0 || i >= max_netdevices)
continue;
- /* avoid cases where sscanf is not exact inverse of printf */
+ /* avoid cases where sscanf is not exact inverse of printf */
snprintf(buf, IFNAMSIZ, name, i);
- if (!strncmp(buf, d->name, IFNAMSIZ))
+ if (!strncmp(buf, name_node->name, IFNAMSIZ))
__set_bit(i, inuse);
}
+ if (!sscanf(d->name, name, &i))
+ continue;
+ if (i < 0 || i >= max_netdevices)
+ continue;
- i = find_first_zero_bit(inuse, max_netdevices);
- bitmap_free(inuse);
+ /* avoid cases where sscanf is not exact inverse of printf */
+ snprintf(buf, IFNAMSIZ, name, i);
+ if (!strncmp(buf, d->name, IFNAMSIZ))
+ __set_bit(i, inuse);
}
- snprintf(buf, IFNAMSIZ, name, i);
- if (!netdev_name_in_use(net, buf))
- return i;
+ i = find_first_zero_bit(inuse, max_netdevices);
+ bitmap_free(inuse);
+ if (i == max_netdevices)
+ return -ENFILE;
- /* It is possible to run out of possible slots
- * when the name is long and there isn't enough space left
- * for the digits, or if all bits are used.
- */
- return -ENFILE;
+ snprintf(res, IFNAMSIZ, name, i);
+ return i;
}
+/* Returns negative errno or allocated unit id (see __dev_alloc_name()) */
static int dev_prep_valid_name(struct net *net, struct net_device *dev,
- const char *want_name, char *out_name)
+ const char *want_name, char *out_name,
+ int dup_errno)
{
- int ret;
-
if (!dev_valid_name(want_name))
return -EINVAL;
- if (strchr(want_name, '%')) {
- ret = __dev_alloc_name(net, want_name, out_name);
- return ret < 0 ? ret : 0;
- } else if (netdev_name_in_use(net, want_name)) {
- return -EEXIST;
- } else if (out_name != want_name) {
- strscpy(out_name, want_name, IFNAMSIZ);
- }
+ if (strchr(want_name, '%'))
+ return __dev_alloc_name(net, want_name, out_name);
+ if (netdev_name_in_use(net, want_name))
+ return -dup_errno;
+ if (out_name != want_name)
+ strscpy(out_name, want_name, IFNAMSIZ);
return 0;
}
-static int dev_alloc_name_ns(struct net *net,
- struct net_device *dev,
- const char *name)
-{
- char buf[IFNAMSIZ];
- int ret;
-
- BUG_ON(!net);
- ret = __dev_alloc_name(net, name, buf);
- if (ret >= 0)
- strscpy(dev->name, buf, IFNAMSIZ);
- return ret;
-}
-
/**
* dev_alloc_name - allocate a name for a device
* @dev: device
@@ -1184,20 +1157,17 @@ static int dev_alloc_name_ns(struct net *net,
int dev_alloc_name(struct net_device *dev, const char *name)
{
- return dev_alloc_name_ns(dev_net(dev), dev, name);
+ return dev_prep_valid_name(dev_net(dev), dev, name, dev->name, ENFILE);
}
EXPORT_SYMBOL(dev_alloc_name);
static int dev_get_valid_name(struct net *net, struct net_device *dev,
const char *name)
{
- char buf[IFNAMSIZ];
int ret;
- ret = dev_prep_valid_name(net, dev, name, buf);
- if (ret >= 0)
- strscpy(dev->name, buf, IFNAMSIZ);
- return ret;
+ ret = dev_prep_valid_name(net, dev, name, dev->name, EEXIST);
+ return ret < 0 ? ret : 0;
}
/**
@@ -3939,7 +3909,8 @@ EXPORT_SYMBOL_GPL(netdev_xmit_skip_txqueue);
#endif /* CONFIG_NET_EGRESS */
#ifdef CONFIG_NET_XGRESS
-static int tc_run(struct tcx_entry *entry, struct sk_buff *skb)
+static int tc_run(struct tcx_entry *entry, struct sk_buff *skb,
+ enum skb_drop_reason *drop_reason)
{
int ret = TC_ACT_UNSPEC;
#ifdef CONFIG_NET_CLS_ACT
@@ -3951,12 +3922,14 @@ static int tc_run(struct tcx_entry *entry, struct sk_buff *skb)
tc_skb_cb(skb)->mru = 0;
tc_skb_cb(skb)->post_ct = false;
+ res.drop_reason = *drop_reason;
mini_qdisc_bstats_cpu_update(miniq, skb);
ret = tcf_classify(skb, miniq->block, miniq->filter_list, &res, false);
/* Only tcf related quirks below. */
switch (ret) {
case TC_ACT_SHOT:
+ *drop_reason = res.drop_reason;
mini_qdisc_qstats_cpu_drop(miniq);
break;
case TC_ACT_OK:
@@ -4006,6 +3979,7 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret,
struct net_device *orig_dev, bool *another)
{
struct bpf_mprog_entry *entry = rcu_dereference_bh(skb->dev->tcx_ingress);
+ enum skb_drop_reason drop_reason = SKB_DROP_REASON_TC_INGRESS;
int sch_ret;
if (!entry)
@@ -4023,7 +3997,7 @@ sch_handle_ingress(struct sk_buff *skb, struct packet_type **pt_prev, int *ret,
if (sch_ret != TC_ACT_UNSPEC)
goto ingress_verdict;
}
- sch_ret = tc_run(tcx_entry(entry), skb);
+ sch_ret = tc_run(tcx_entry(entry), skb, &drop_reason);
ingress_verdict:
switch (sch_ret) {
case TC_ACT_REDIRECT:
@@ -4040,7 +4014,7 @@ ingress_verdict:
*ret = NET_RX_SUCCESS;
return NULL;
case TC_ACT_SHOT:
- kfree_skb_reason(skb, SKB_DROP_REASON_TC_INGRESS);
+ kfree_skb_reason(skb, drop_reason);
*ret = NET_RX_DROP;
return NULL;
/* used by tc_run */
@@ -4061,6 +4035,7 @@ static __always_inline struct sk_buff *
sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
{
struct bpf_mprog_entry *entry = rcu_dereference_bh(dev->tcx_egress);
+ enum skb_drop_reason drop_reason = SKB_DROP_REASON_TC_EGRESS;
int sch_ret;
if (!entry)
@@ -4074,7 +4049,7 @@ sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
if (sch_ret != TC_ACT_UNSPEC)
goto egress_verdict;
}
- sch_ret = tc_run(tcx_entry(entry), skb);
+ sch_ret = tc_run(tcx_entry(entry), skb, &drop_reason);
egress_verdict:
switch (sch_ret) {
case TC_ACT_REDIRECT:
@@ -4083,7 +4058,7 @@ egress_verdict:
*ret = NET_XMIT_SUCCESS;
return NULL;
case TC_ACT_SHOT:
- kfree_skb_reason(skb, SKB_DROP_REASON_TC_EGRESS);
+ kfree_skb_reason(skb, drop_reason);
*ret = NET_XMIT_DROP;
return NULL;
/* used by tc_run */
@@ -6552,9 +6527,11 @@ static int __napi_poll(struct napi_struct *n, bool *repoll)
* accidentally calling ->poll() when NAPI is not scheduled.
*/
work = 0;
- if (test_bit(NAPI_STATE_SCHED, &n->state)) {
+ if (napi_is_scheduled(n)) {
work = n->poll(n, weight);
trace_napi_poll(n, work, weight);
+
+ xdp_do_check_flushed(n);
}
if (unlikely(work > weight))
@@ -9052,6 +9029,28 @@ bool netdev_port_same_parent_id(struct net_device *a, struct net_device *b)
}
EXPORT_SYMBOL(netdev_port_same_parent_id);
+static void netdev_dpll_pin_assign(struct net_device *dev, struct dpll_pin *dpll_pin)
+{
+#if IS_ENABLED(CONFIG_DPLL)
+ rtnl_lock();
+ dev->dpll_pin = dpll_pin;
+ rtnl_unlock();
+#endif
+}
+
+void netdev_dpll_pin_set(struct net_device *dev, struct dpll_pin *dpll_pin)
+{
+ WARN_ON(!dpll_pin);
+ netdev_dpll_pin_assign(dev, dpll_pin);
+}
+EXPORT_SYMBOL(netdev_dpll_pin_set);
+
+void netdev_dpll_pin_clear(struct net_device *dev)
+{
+ netdev_dpll_pin_assign(dev, NULL);
+}
+EXPORT_SYMBOL(netdev_dpll_pin_clear);
+
/**
* dev_change_proto_down - set carrier according to proto_down.
*
@@ -10504,7 +10503,8 @@ void netdev_stats_to_stats64(struct rtnl_link_stats64 *stats64,
}
EXPORT_SYMBOL(netdev_stats_to_stats64);
-struct net_device_core_stats __percpu *netdev_core_stats_alloc(struct net_device *dev)
+static __cold struct net_device_core_stats __percpu *netdev_core_stats_alloc(
+ struct net_device *dev)
{
struct net_device_core_stats __percpu *p;
@@ -10517,7 +10517,23 @@ struct net_device_core_stats __percpu *netdev_core_stats_alloc(struct net_device
/* This READ_ONCE() pairs with the cmpxchg() above */
return READ_ONCE(dev->core_stats);
}
-EXPORT_SYMBOL(netdev_core_stats_alloc);
+
+noinline void netdev_core_stats_inc(struct net_device *dev, u32 offset)
+{
+ /* This READ_ONCE() pairs with the write in netdev_core_stats_alloc() */
+ struct net_device_core_stats __percpu *p = READ_ONCE(dev->core_stats);
+ unsigned long __percpu *field;
+
+ if (unlikely(!p)) {
+ p = netdev_core_stats_alloc(dev);
+ if (!p)
+ return;
+ }
+
+ field = (__force unsigned long __percpu *)((__force void *)p + offset);
+ this_cpu_inc(*field);
+}
+EXPORT_SYMBOL_GPL(netdev_core_stats_inc);
/**
* dev_get_stats - get network device statistics
@@ -11091,7 +11107,7 @@ int __dev_change_net_namespace(struct net_device *dev, struct net *net,
/* We get here if we can't use the current device name */
if (!pat)
goto out;
- err = dev_prep_valid_name(net, dev, pat, new_name);
+ err = dev_prep_valid_name(net, dev, pat, new_name, EEXIST);
if (err < 0)
goto out;
}
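
Editor's note: the reworked __dev_alloc_name() always builds the in-use bitmap and only accepts patterns containing exactly one "%d". A small standalone userspace sketch of the same unit-picking idea (not kernel code; the 64-slot mask stands in for the one-page bitmap):

#include <stdio.h>
#include <string.h>

#define MAX_UNITS 64

static int pick_unit(const char *pattern, const char **in_use, int n)
{
        unsigned long long inuse = 0;
        char buf[32];
        int i, unit;

        for (i = 0; i < n; i++) {
                if (sscanf(in_use[i], pattern, &unit) != 1)
                        continue;
                if (unit < 0 || unit >= MAX_UNITS)
                        continue;
                /* skip cases where sscanf is not an exact inverse of printf */
                snprintf(buf, sizeof(buf), pattern, unit);
                if (!strcmp(buf, in_use[i]))
                        inuse |= 1ULL << unit;
        }
        for (unit = 0; unit < MAX_UNITS; unit++)
                if (!(inuse & (1ULL << unit)))
                        return unit;
        return -1;      /* -ENFILE in the kernel version */
}

int main(void)
{
        const char *names[] = { "eth0", "eth1", "eth3" };

        printf("eth%%d -> eth%d\n", pick_unit("eth%d", names, 3)); /* eth2 */
        return 0;
}
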
diff --git a/net/core/dev.h b/net/core/dev.h
index fa2e9c5c4122..5aa45f0fd4ae 100644
--- a/net/core/dev.h
+++ b/net/core/dev.h
@@ -139,4 +139,10 @@ static inline void netif_set_gro_ipv4_max_size(struct net_device *dev,
}
int rps_cpumask_housekeeping(struct cpumask *mask);
+
+#if defined(CONFIG_DEBUG_NET) && defined(CONFIG_BPF_SYSCALL)
+void xdp_do_check_flushed(struct napi_struct *napi);
+#else
+static inline void xdp_do_check_flushed(struct napi_struct *napi) { }
+#endif
#endif
diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c
index b46aedc36939..feeddf95f450 100644
--- a/net/core/dev_ioctl.c
+++ b/net/core/dev_ioctl.c
@@ -382,7 +382,7 @@ static int dev_set_hwtstamp(struct net_device *dev, struct ifreq *ifr)
if (err)
return err;
- err = dsa_master_hwtstamp_validate(dev, &kernel_cfg, &extack);
+ err = dsa_conduit_hwtstamp_validate(dev, &kernel_cfg, &extack);
if (err) {
if (extack._msg)
netdev_err(dev, "%s\n", extack._msg);
diff --git a/net/core/dst.c b/net/core/dst.c
index 980e2fd2f013..6838d3212c37 100644
--- a/net/core/dst.c
+++ b/net/core/dst.c
@@ -45,7 +45,7 @@ const struct dst_metrics dst_default_metrics = {
EXPORT_SYMBOL(dst_default_metrics);
void dst_init(struct dst_entry *dst, struct dst_ops *ops,
- struct net_device *dev, int initial_ref, int initial_obsolete,
+ struct net_device *dev, int initial_obsolete,
unsigned short flags)
{
dst->dev = dev;
@@ -66,7 +66,7 @@ void dst_init(struct dst_entry *dst, struct dst_ops *ops,
dst->tclassid = 0;
#endif
dst->lwtstate = NULL;
- rcuref_init(&dst->__rcuref, initial_ref);
+ rcuref_init(&dst->__rcuref, 1);
INIT_LIST_HEAD(&dst->rt_uncached);
dst->__use = 0;
dst->lastuse = jiffies;
@@ -77,7 +77,7 @@ void dst_init(struct dst_entry *dst, struct dst_ops *ops,
EXPORT_SYMBOL(dst_init);
void *dst_alloc(struct dst_ops *ops, struct net_device *dev,
- int initial_ref, int initial_obsolete, unsigned short flags)
+ int initial_obsolete, unsigned short flags)
{
struct dst_entry *dst;
@@ -90,7 +90,7 @@ void *dst_alloc(struct dst_ops *ops, struct net_device *dev,
if (!dst)
return NULL;
- dst_init(dst, ops, dev, initial_ref, initial_obsolete, flags);
+ dst_init(dst, ops, dev, initial_obsolete, flags);
return dst;
}
@@ -270,7 +270,7 @@ static void __metadata_dst_init(struct metadata_dst *md_dst,
struct dst_entry *dst;
dst = &md_dst->dst;
- dst_init(dst, &dst_blackhole_ops, NULL, 1, DST_OBSOLETE_NONE,
+ dst_init(dst, &dst_blackhole_ops, NULL, DST_OBSOLETE_NONE,
DST_METADATA | DST_NOCOUNT);
memset(dst + 1, 0, sizeof(*md_dst) + optslen - sizeof(*dst));
md_dst->type = type;
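
Editor's note: with the initial reference count now hard-wired to 1, callers of dst_alloc()/dst_init() simply drop that argument. A before/after sketch for a hypothetical caller:

        /* before this change */
        dst = dst_alloc(&foo_dst_ops, dev, 1, DST_OBSOLETE_NONE, 0);

        /* after: the refcount always starts at 1 */
        dst = dst_alloc(&foo_dst_ops, dev, DST_OBSOLETE_NONE, 0);
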
diff --git a/net/core/filter.c b/net/core/filter.c
index a094694899c9..21d75108c2e9 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -81,6 +81,9 @@
#include <net/xdp.h>
#include <net/mptcp.h>
#include <net/netfilter/nf_conntrack_bpf.h>
+#include <linux/un.h>
+
+#include "dev.h"
static const struct bpf_func_proto *
bpf_sk_base_func_proto(enum bpf_func_id func_id);
@@ -4207,6 +4210,20 @@ void xdp_do_flush(void)
}
EXPORT_SYMBOL_GPL(xdp_do_flush);
+#if defined(CONFIG_DEBUG_NET) && defined(CONFIG_BPF_SYSCALL)
+void xdp_do_check_flushed(struct napi_struct *napi)
+{
+ bool ret;
+
+ ret = dev_check_flush();
+ ret |= cpu_map_check_flush();
+ ret |= xsk_map_check_flush();
+
+ WARN_ONCE(ret, "Missing xdp_do_flush() invocation after NAPI by %ps\n",
+ napi->poll);
+}
+#endif
+
void bpf_clear_redirect_map(struct bpf_map *map)
{
struct bpf_redirect_info *ri;
@@ -5850,6 +5867,9 @@ static int bpf_ipv4_fib_lookup(struct net *net, struct bpf_fib_lookup *params,
params->rt_metric = res.fi->fib_priority;
params->ifindex = dev->ifindex;
+ if (flags & BPF_FIB_LOOKUP_SRC)
+ params->ipv4_src = fib_result_prefsrc(net, &res);
+
/* xdp and cls_bpf programs are run in RCU-bh so
* rcu_read_lock_bh is not needed here
*/
@@ -5992,6 +6012,18 @@ static int bpf_ipv6_fib_lookup(struct net *net, struct bpf_fib_lookup *params,
params->rt_metric = res.f6i->fib6_metric;
params->ifindex = dev->ifindex;
+ if (flags & BPF_FIB_LOOKUP_SRC) {
+ if (res.f6i->fib6_prefsrc.plen) {
+ *src = res.f6i->fib6_prefsrc.addr;
+ } else {
+ err = ipv6_bpf_stub->ipv6_dev_get_saddr(net, dev,
+ &fl6.daddr, 0,
+ src);
+ if (err)
+ return BPF_FIB_LKUP_RET_NO_SRC_ADDR;
+ }
+ }
+
if (flags & BPF_FIB_LOOKUP_SKIP_NEIGH)
goto set_fwd_params;
@@ -6010,7 +6042,8 @@ set_fwd_params:
#endif
#define BPF_FIB_LOOKUP_MASK (BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_OUTPUT | \
- BPF_FIB_LOOKUP_SKIP_NEIGH | BPF_FIB_LOOKUP_TBID)
+ BPF_FIB_LOOKUP_SKIP_NEIGH | BPF_FIB_LOOKUP_TBID | \
+ BPF_FIB_LOOKUP_SRC)
BPF_CALL_4(bpf_xdp_fib_lookup, struct xdp_buff *, ctx,
struct bpf_fib_lookup *, params, int, plen, u32, flags)
@@ -7858,14 +7891,19 @@ sock_addr_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
case BPF_CGROUP_INET6_BIND:
case BPF_CGROUP_INET4_CONNECT:
case BPF_CGROUP_INET6_CONNECT:
+ case BPF_CGROUP_UNIX_CONNECT:
case BPF_CGROUP_UDP4_RECVMSG:
case BPF_CGROUP_UDP6_RECVMSG:
+ case BPF_CGROUP_UNIX_RECVMSG:
case BPF_CGROUP_UDP4_SENDMSG:
case BPF_CGROUP_UDP6_SENDMSG:
+ case BPF_CGROUP_UNIX_SENDMSG:
case BPF_CGROUP_INET4_GETPEERNAME:
case BPF_CGROUP_INET6_GETPEERNAME:
+ case BPF_CGROUP_UNIX_GETPEERNAME:
case BPF_CGROUP_INET4_GETSOCKNAME:
case BPF_CGROUP_INET6_GETSOCKNAME:
+ case BPF_CGROUP_UNIX_GETSOCKNAME:
return &bpf_sock_addr_setsockopt_proto;
default:
return NULL;
@@ -7876,14 +7914,19 @@ sock_addr_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
case BPF_CGROUP_INET6_BIND:
case BPF_CGROUP_INET4_CONNECT:
case BPF_CGROUP_INET6_CONNECT:
+ case BPF_CGROUP_UNIX_CONNECT:
case BPF_CGROUP_UDP4_RECVMSG:
case BPF_CGROUP_UDP6_RECVMSG:
+ case BPF_CGROUP_UNIX_RECVMSG:
case BPF_CGROUP_UDP4_SENDMSG:
case BPF_CGROUP_UDP6_SENDMSG:
+ case BPF_CGROUP_UNIX_SENDMSG:
case BPF_CGROUP_INET4_GETPEERNAME:
case BPF_CGROUP_INET6_GETPEERNAME:
+ case BPF_CGROUP_UNIX_GETPEERNAME:
case BPF_CGROUP_INET4_GETSOCKNAME:
case BPF_CGROUP_INET6_GETSOCKNAME:
+ case BPF_CGROUP_UNIX_GETSOCKNAME:
return &bpf_sock_addr_getsockopt_proto;
default:
return NULL;
@@ -8931,8 +8974,8 @@ static bool sock_addr_is_valid_access(int off, int size,
if (off % size != 0)
return false;
- /* Disallow access to IPv6 fields from IPv4 contex and vise
- * versa.
+ /* Disallow access to fields not belonging to the attach type's address
+ * family.
*/
switch (off) {
case bpf_ctx_range(struct bpf_sock_addr, user_ip4):
@@ -11752,6 +11795,27 @@ __bpf_kfunc int bpf_dynptr_from_xdp(struct xdp_buff *xdp, u64 flags,
return 0;
}
+
+__bpf_kfunc int bpf_sock_addr_set_sun_path(struct bpf_sock_addr_kern *sa_kern,
+ const u8 *sun_path, u32 sun_path__sz)
+{
+ struct sockaddr_un *un;
+
+ if (sa_kern->sk->sk_family != AF_UNIX)
+ return -EINVAL;
+
+ /* We do not allow changing the address to unnamed or larger than the
+ * maximum allowed address size for a unix sockaddr.
+ */
+ if (sun_path__sz == 0 || sun_path__sz > UNIX_PATH_MAX)
+ return -EINVAL;
+
+ un = (struct sockaddr_un *)sa_kern->uaddr;
+ memcpy(un->sun_path, sun_path, sun_path__sz);
+ sa_kern->uaddrlen = offsetof(struct sockaddr_un, sun_path) + sun_path__sz;
+
+ return 0;
+}
__diag_pop();
int bpf_dynptr_from_skb_rdonly(struct sk_buff *skb, u64 flags,
@@ -11776,6 +11840,10 @@ BTF_SET8_START(bpf_kfunc_check_set_xdp)
BTF_ID_FLAGS(func, bpf_dynptr_from_xdp)
BTF_SET8_END(bpf_kfunc_check_set_xdp)
+BTF_SET8_START(bpf_kfunc_check_set_sock_addr)
+BTF_ID_FLAGS(func, bpf_sock_addr_set_sun_path)
+BTF_SET8_END(bpf_kfunc_check_set_sock_addr)
+
static const struct btf_kfunc_id_set bpf_kfunc_set_skb = {
.owner = THIS_MODULE,
.set = &bpf_kfunc_check_set_skb,
@@ -11786,6 +11854,11 @@ static const struct btf_kfunc_id_set bpf_kfunc_set_xdp = {
.set = &bpf_kfunc_check_set_xdp,
};
+static const struct btf_kfunc_id_set bpf_kfunc_set_sock_addr = {
+ .owner = THIS_MODULE,
+ .set = &bpf_kfunc_check_set_sock_addr,
+};
+
static int __init bpf_kfunc_init(void)
{
int ret;
@@ -11800,7 +11873,9 @@ static int __init bpf_kfunc_init(void)
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_XMIT, &bpf_kfunc_set_skb);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_LWT_SEG6LOCAL, &bpf_kfunc_set_skb);
ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_NETFILTER, &bpf_kfunc_set_skb);
- return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
+ ret = ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_XDP, &bpf_kfunc_set_xdp);
+ return ret ?: register_btf_kfunc_id_set(BPF_PROG_TYPE_CGROUP_SOCK_ADDR,
+ &bpf_kfunc_set_sock_addr);
}
late_initcall(bpf_kfunc_init);
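
Editor's note: a sketch of how an XDP program might use the new BPF_FIB_LOOKUP_SRC flag. The flag and the ipv4_src field come from the hunks above; everything else is ordinary libbpf boilerplate and assumes uapi headers new enough to define the flag.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int pick_source(struct xdp_md *ctx)
{
        struct bpf_fib_lookup params = {
                .family   = 2,                          /* AF_INET */
                .ifindex  = ctx->ingress_ifindex,
                .ipv4_dst = bpf_htonl(0x01010101),      /* 1.1.1.1, example only */
        };
        long rc;

        rc = bpf_fib_lookup(ctx, &params, sizeof(params),
                            BPF_FIB_LOOKUP_SRC | BPF_FIB_LOOKUP_SKIP_NEIGH);
        if (rc == BPF_FIB_LKUP_RET_SUCCESS)
                bpf_printk("chosen source 0x%x", bpf_ntohl(params.ipv4_src));

        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
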
diff --git a/net/core/gso_test.c b/net/core/gso_test.c
new file mode 100644
index 000000000000..ceb684be4cbf
--- /dev/null
+++ b/net/core/gso_test.c
@@ -0,0 +1,278 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <kunit/test.h>
+#include <linux/skbuff.h>
+
+static const char hdr[] = "abcdefgh";
+#define GSO_TEST_SIZE 1000
+
+static void __init_skb(struct sk_buff *skb)
+{
+ skb_reset_mac_header(skb);
+ memcpy(skb_mac_header(skb), hdr, sizeof(hdr));
+
+ /* skb_segment expects skb->data at start of payload */
+ skb_pull(skb, sizeof(hdr));
+ skb_reset_network_header(skb);
+ skb_reset_transport_header(skb);
+
+ /* proto is arbitrary, as long as not ETH_P_TEB or vlan */
+ skb->protocol = htons(ETH_P_ATALK);
+ skb_shinfo(skb)->gso_size = GSO_TEST_SIZE;
+}
+
+enum gso_test_nr {
+ GSO_TEST_LINEAR,
+ GSO_TEST_NO_GSO,
+ GSO_TEST_FRAGS,
+ GSO_TEST_FRAGS_PURE,
+ GSO_TEST_GSO_PARTIAL,
+ GSO_TEST_FRAG_LIST,
+ GSO_TEST_FRAG_LIST_PURE,
+ GSO_TEST_FRAG_LIST_NON_UNIFORM,
+ GSO_TEST_GSO_BY_FRAGS,
+};
+
+struct gso_test_case {
+ enum gso_test_nr id;
+ const char *name;
+
+ /* input */
+ unsigned int linear_len;
+ unsigned int nr_frags;
+ const unsigned int *frags;
+ unsigned int nr_frag_skbs;
+ const unsigned int *frag_skbs;
+
+ /* output as expected */
+ unsigned int nr_segs;
+ const unsigned int *segs;
+};
+
+static struct gso_test_case cases[] = {
+ {
+ .id = GSO_TEST_NO_GSO,
+ .name = "no_gso",
+ .linear_len = GSO_TEST_SIZE,
+ .nr_segs = 1,
+ .segs = (const unsigned int[]) { GSO_TEST_SIZE },
+ },
+ {
+ .id = GSO_TEST_LINEAR,
+ .name = "linear",
+ .linear_len = GSO_TEST_SIZE + GSO_TEST_SIZE + 1,
+ .nr_segs = 3,
+ .segs = (const unsigned int[]) { GSO_TEST_SIZE, GSO_TEST_SIZE, 1 },
+ },
+ {
+ .id = GSO_TEST_FRAGS,
+ .name = "frags",
+ .linear_len = GSO_TEST_SIZE,
+ .nr_frags = 2,
+ .frags = (const unsigned int[]) { GSO_TEST_SIZE, 1 },
+ .nr_segs = 3,
+ .segs = (const unsigned int[]) { GSO_TEST_SIZE, GSO_TEST_SIZE, 1 },
+ },
+ {
+ .id = GSO_TEST_FRAGS_PURE,
+ .name = "frags_pure",
+ .nr_frags = 3,
+ .frags = (const unsigned int[]) { GSO_TEST_SIZE, GSO_TEST_SIZE, 2 },
+ .nr_segs = 3,
+ .segs = (const unsigned int[]) { GSO_TEST_SIZE, GSO_TEST_SIZE, 2 },
+ },
+ {
+ .id = GSO_TEST_GSO_PARTIAL,
+ .name = "gso_partial",
+ .linear_len = GSO_TEST_SIZE,
+ .nr_frags = 2,
+ .frags = (const unsigned int[]) { GSO_TEST_SIZE, 3 },
+ .nr_segs = 2,
+ .segs = (const unsigned int[]) { 2 * GSO_TEST_SIZE, 3 },
+ },
+ {
+ /* commit 89319d3801d1: frag_list on mss boundaries */
+ .id = GSO_TEST_FRAG_LIST,
+ .name = "frag_list",
+ .linear_len = GSO_TEST_SIZE,
+ .nr_frag_skbs = 2,
+ .frag_skbs = (const unsigned int[]) { GSO_TEST_SIZE, GSO_TEST_SIZE },
+ .nr_segs = 3,
+ .segs = (const unsigned int[]) { GSO_TEST_SIZE, GSO_TEST_SIZE, GSO_TEST_SIZE },
+ },
+ {
+ .id = GSO_TEST_FRAG_LIST_PURE,
+ .name = "frag_list_pure",
+ .nr_frag_skbs = 2,
+ .frag_skbs = (const unsigned int[]) { GSO_TEST_SIZE, GSO_TEST_SIZE },
+ .nr_segs = 2,
+ .segs = (const unsigned int[]) { GSO_TEST_SIZE, GSO_TEST_SIZE },
+ },
+ {
+ /* commit 43170c4e0ba7: GRO of frag_list trains */
+ .id = GSO_TEST_FRAG_LIST_NON_UNIFORM,
+ .name = "frag_list_non_uniform",
+ .linear_len = GSO_TEST_SIZE,
+ .nr_frag_skbs = 4,
+ .frag_skbs = (const unsigned int[]) { GSO_TEST_SIZE, 1, GSO_TEST_SIZE, 2 },
+ .nr_segs = 4,
+ .segs = (const unsigned int[]) { GSO_TEST_SIZE, GSO_TEST_SIZE, GSO_TEST_SIZE, 3 },
+ },
+ {
+ /* commit 3953c46c3ac7 ("sk_buff: allow segmenting based on frag sizes") and
+ * commit 90017accff61 ("sctp: Add GSO support")
+ *
+ * "there will be a cover skb with protocol headers and
+ * children ones containing the actual segments"
+ */
+ .id = GSO_TEST_GSO_BY_FRAGS,
+ .name = "gso_by_frags",
+ .nr_frag_skbs = 4,
+ .frag_skbs = (const unsigned int[]) { 100, 200, 300, 400 },
+ .nr_segs = 4,
+ .segs = (const unsigned int[]) { 100, 200, 300, 400 },
+ },
+};
+
+static void gso_test_case_to_desc(struct gso_test_case *t, char *desc)
+{
+ sprintf(desc, "%s", t->name);
+}
+
+KUNIT_ARRAY_PARAM(gso_test, cases, gso_test_case_to_desc);
+
+static void gso_test_func(struct kunit *test)
+{
+ const int shinfo_size = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ struct sk_buff *skb, *segs, *cur, *next, *last;
+ const struct gso_test_case *tcase;
+ netdev_features_t features;
+ struct page *page;
+ int i;
+
+ tcase = test->param_value;
+
+ page = alloc_page(GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, page);
+ skb = build_skb(page_address(page), sizeof(hdr) + tcase->linear_len + shinfo_size);
+ KUNIT_ASSERT_NOT_NULL(test, skb);
+ __skb_put(skb, sizeof(hdr) + tcase->linear_len);
+
+ __init_skb(skb);
+
+ if (tcase->nr_frags) {
+ unsigned int pg_off = 0;
+
+ page = alloc_page(GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, page);
+ page_ref_add(page, tcase->nr_frags - 1);
+
+ for (i = 0; i < tcase->nr_frags; i++) {
+ skb_fill_page_desc(skb, i, page, pg_off, tcase->frags[i]);
+ pg_off += tcase->frags[i];
+ }
+
+ KUNIT_ASSERT_LE(test, pg_off, PAGE_SIZE);
+
+ skb->data_len = pg_off;
+ skb->len += skb->data_len;
+ skb->truesize += skb->data_len;
+ }
+
+ if (tcase->frag_skbs) {
+ unsigned int total_size = 0, total_true_size = 0, alloc_size = 0;
+ struct sk_buff *frag_skb, *prev = NULL;
+
+ page = alloc_page(GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, page);
+ page_ref_add(page, tcase->nr_frag_skbs - 1);
+
+ for (i = 0; i < tcase->nr_frag_skbs; i++) {
+ unsigned int frag_size;
+
+ frag_size = tcase->frag_skbs[i];
+ frag_skb = build_skb(page_address(page) + alloc_size,
+ frag_size + shinfo_size);
+ KUNIT_ASSERT_NOT_NULL(test, frag_skb);
+ __skb_put(frag_skb, frag_size);
+
+ if (prev)
+ prev->next = frag_skb;
+ else
+ skb_shinfo(skb)->frag_list = frag_skb;
+ prev = frag_skb;
+
+ total_size += frag_size;
+ total_true_size += frag_skb->truesize;
+ alloc_size += frag_size + shinfo_size;
+ }
+
+ KUNIT_ASSERT_LE(test, alloc_size, PAGE_SIZE);
+
+ skb->len += total_size;
+ skb->data_len += total_size;
+ skb->truesize += total_true_size;
+
+ if (tcase->id == GSO_TEST_GSO_BY_FRAGS)
+ skb_shinfo(skb)->gso_size = GSO_BY_FRAGS;
+ }
+
+ features = NETIF_F_SG | NETIF_F_HW_CSUM;
+ if (tcase->id == GSO_TEST_GSO_PARTIAL)
+ features |= NETIF_F_GSO_PARTIAL;
+
+ /* TODO: this should also work with SG,
+ * rather than hit BUG_ON(i >= nfrags)
+ */
+ if (tcase->id == GSO_TEST_FRAG_LIST_NON_UNIFORM)
+ features &= ~NETIF_F_SG;
+
+ segs = skb_segment(skb, features);
+ if (IS_ERR(segs)) {
+ KUNIT_FAIL(test, "segs error %lld", PTR_ERR(segs));
+ goto free_gso_skb;
+ } else if (!segs) {
+ KUNIT_FAIL(test, "no segments");
+ goto free_gso_skb;
+ }
+
+ last = segs->prev;
+ for (cur = segs, i = 0; cur; cur = next, i++) {
+ next = cur->next;
+
+ KUNIT_ASSERT_EQ(test, cur->len, sizeof(hdr) + tcase->segs[i]);
+
+ /* segs have skb->data pointing to the mac header */
+ KUNIT_ASSERT_PTR_EQ(test, skb_mac_header(cur), cur->data);
+ KUNIT_ASSERT_PTR_EQ(test, skb_network_header(cur), cur->data + sizeof(hdr));
+
+ /* header was copied to all segs */
+ KUNIT_ASSERT_EQ(test, memcmp(skb_mac_header(cur), hdr, sizeof(hdr)), 0);
+
+ /* last seg can be found through segs->prev pointer */
+ if (!next)
+ KUNIT_ASSERT_PTR_EQ(test, cur, last);
+
+ consume_skb(cur);
+ }
+
+ KUNIT_ASSERT_EQ(test, i, tcase->nr_segs);
+
+free_gso_skb:
+ consume_skb(skb);
+}
+
+static struct kunit_case gso_test_cases[] = {
+ KUNIT_CASE_PARAM(gso_test_func, gso_test_gen_params),
+ {}
+};
+
+static struct kunit_suite gso_test_suite = {
+ .name = "net_core_gso",
+ .test_cases = gso_test_cases,
+};
+
+kunit_test_suite(gso_test_suite);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("KUnit tests for segmentation offload");
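
Editor's note: the suite is table-driven, so extending coverage means adding one element to cases[]. A hypothetical extra entry (not part of the patch) covering an exactly-divisible linear payload would look like:

        {
                .id = GSO_TEST_LINEAR,
                .name = "linear_exact",
                .linear_len = 2 * GSO_TEST_SIZE,
                .nr_segs = 2,
                .segs = (const unsigned int[]) { GSO_TEST_SIZE, GSO_TEST_SIZE },
        },

The suite builds under CONFIG_NET_TEST (see the net/core/Makefile hunk earlier) and runs with the usual KUnit tooling.
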
diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
index d6a70aeaa503..d22f0919821e 100644
--- a/net/core/netclassid_cgroup.c
+++ b/net/core/netclassid_cgroup.c
@@ -88,6 +88,12 @@ static void update_classid_task(struct task_struct *p, u32 classid)
};
unsigned int fd = 0;
+ /* Only update the group leader; for a multithreaded task this avoids
+ * repeating the same fd-table walk for every thread.
+ */
+ if (p != p->group_leader)
+ return;
+
do {
task_lock(p);
fd = iterate_fd(p->files, fd, update_classid_sock, &ctx);
diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
index c1aea8b756b6..fe61f85bcf33 100644
--- a/net/core/netdev-genl.c
+++ b/net/core/netdev-genl.c
@@ -5,6 +5,7 @@
#include <linux/rtnetlink.h>
#include <net/net_namespace.h>
#include <net/sock.h>
+#include <net/xdp.h>
#include "netdev-genl-gen.h"
@@ -12,15 +13,24 @@ static int
netdev_nl_dev_fill(struct net_device *netdev, struct sk_buff *rsp,
const struct genl_info *info)
{
+ u64 xdp_rx_meta = 0;
void *hdr;
hdr = genlmsg_iput(rsp, info);
if (!hdr)
return -EMSGSIZE;
+#define XDP_METADATA_KFUNC(_, flag, __, xmo) \
+ if (netdev->xdp_metadata_ops && netdev->xdp_metadata_ops->xmo) \
+ xdp_rx_meta |= flag;
+XDP_METADATA_KFUNC_xxx
+#undef XDP_METADATA_KFUNC
+
if (nla_put_u32(rsp, NETDEV_A_DEV_IFINDEX, netdev->ifindex) ||
nla_put_u64_64bit(rsp, NETDEV_A_DEV_XDP_FEATURES,
- netdev->xdp_features, NETDEV_A_DEV_PAD)) {
+ netdev->xdp_features, NETDEV_A_DEV_PAD) ||
+ nla_put_u64_64bit(rsp, NETDEV_A_DEV_XDP_RX_METADATA_FEATURES,
+ xdp_rx_meta, NETDEV_A_DEV_PAD)) {
genlmsg_cancel(rsp, hdr);
return -EINVAL;
}
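
Editor's note: the XDP_METADATA_KFUNC block above is an X-macro expansion: one list of (kfunc, flag, ..., ops-member) tuples generates one capability-bit test per entry. A self-contained sketch of the same pattern with made-up features, showing what the preprocessor produces:

#define FOO_FEATURES(x)                         \
        x(TIMESTAMP, 1UL << 0, get_timestamp)   \
        x(RX_HASH,   1UL << 1, get_rx_hash)

struct foo_metadata_ops {
        int (*get_timestamp)(void);
        int (*get_rx_hash)(void);
};

static unsigned long foo_feature_mask(const struct foo_metadata_ops *ops)
{
        unsigned long mask = 0;

#define FOO_FEATURE(name, flag, member)         \
        if (ops && ops->member)                 \
                mask |= (flag);
        FOO_FEATURES(FOO_FEATURE)
#undef FOO_FEATURE

        return mask;
}
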
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 77cb75e63aca..5e409b98aba0 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -211,10 +211,6 @@ static int page_pool_init(struct page_pool *pool,
*/
}
- if (PAGE_POOL_DMA_USE_PP_FRAG_COUNT &&
- pool->p.flags & PP_FLAG_PAGE_FRAG)
- return -EINVAL;
-
#ifdef CONFIG_PAGE_POOL_STATS
pool->recycle_stats = alloc_percpu(struct page_pool_recycle_stats);
if (!pool->recycle_stats)
@@ -359,12 +355,20 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
if (dma_mapping_error(pool->p.dev, dma))
return false;
- page_pool_set_dma_addr(page, dma);
+ if (page_pool_set_dma_addr(page, dma))
+ goto unmap_failed;
if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
return true;
+
+unmap_failed:
+ WARN_ON_ONCE("unexpected DMA address, please report to netdev@");
+ dma_unmap_page_attrs(pool->p.dev, dma,
+ PAGE_SIZE << pool->p.order, pool->p.dma_dir,
+ DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
+ return false;
}
static void page_pool_set_pp_info(struct page_pool *pool,
@@ -372,6 +376,14 @@ static void page_pool_set_pp_info(struct page_pool *pool,
{
page->pp = pool;
page->pp_magic |= PP_SIGNATURE;
+
+ /* Ensure all pages start out split into a single fragment:
+ * page_pool_set_pp_info() is called exactly once per page when it is
+ * allocated from the page allocator, and page_pool_fragment_page()
+ * dirties the same cache line as page->pp_magic above, so the
+ * overhead is negligible.
+ */
+ page_pool_fragment_page(page, 1);
if (pool->p.init_callback)
pool->p.init_callback(page, pool->p.init_arg);
}
@@ -668,7 +680,7 @@ void page_pool_put_page_bulk(struct page_pool *pool, void **data,
struct page *page = virt_to_head_page(data[i]);
/* It is not the last user for the page frag case */
- if (!page_pool_is_last_frag(pool, page))
+ if (!page_pool_is_last_frag(page))
continue;
page = __page_pool_put_page(pool, page, -1, false);
@@ -744,8 +756,7 @@ struct page *page_pool_alloc_frag(struct page_pool *pool,
unsigned int max_size = PAGE_SIZE << pool->p.order;
struct page *page = pool->frag_page;
- if (WARN_ON(!(pool->p.flags & PP_FLAG_PAGE_FRAG) ||
- size > max_size))
+ if (WARN_ON(size > max_size))
return NULL;
size = ALIGN(size, dma_get_cache_alignment());
@@ -798,7 +809,7 @@ static void page_pool_empty_ring(struct page_pool *pool)
}
}
-static void page_pool_free(struct page_pool *pool)
+static void __page_pool_destroy(struct page_pool *pool)
{
if (pool->disconnect)
pool->disconnect(pool);
@@ -849,7 +860,7 @@ static int page_pool_release(struct page_pool *pool)
page_pool_scrub(pool);
inflight = page_pool_inflight(pool);
if (!inflight)
- page_pool_free(pool);
+ __page_pool_destroy(pool);
return inflight;
}
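
Editor's note: pre-fragmenting every page to a count of one (the page_pool_fragment_page(page, 1) call above) lets one counter drive both the "whole page" and "split page" release paths. A userspace-flavoured sketch of that counter discipline, with illustrative names:

#include <stdatomic.h>
#include <stdbool.h>

struct foo_page {
        _Atomic long frag_count;        /* stands in for the pp frag count */
};

static void foo_page_alloc_init(struct foo_page *p)
{
        atomic_store(&p->frag_count, 1);        /* every page is "one fragment" */
}

static void foo_page_split(struct foo_page *p, long nr)
{
        atomic_store(&p->frag_count, nr);       /* owner splits before handing out */
}

/* true when the caller dropped the last fragment and may recycle the page */
static bool foo_page_put_frag(struct foo_page *p)
{
        return atomic_fetch_sub(&p->frag_count, 1) == 1;
}
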
diff --git a/net/core/pktgen.c b/net/core/pktgen.c
index 4d1696677c48..8afcfadf8d5a 100644
--- a/net/core/pktgen.c
+++ b/net/core/pktgen.c
@@ -200,6 +200,7 @@
pf(VID_RND) /* Random VLAN ID */ \
pf(SVID_RND) /* Random SVLAN ID */ \
pf(NODE) /* Node memory alloc*/ \
+ pf(SHARED) /* Shared SKB */ \
#define pf(flag) flag##_SHIFT,
enum pkt_flags {
@@ -1198,7 +1199,8 @@ static ssize_t pktgen_if_write(struct file *file,
((pkt_dev->xmit_mode == M_NETIF_RECEIVE) ||
!(pkt_dev->odev->priv_flags & IFF_TX_SKB_SHARING)))
return -ENOTSUPP;
- if (value > 0 && pkt_dev->n_imix_entries > 0)
+ if (value > 0 && (pkt_dev->n_imix_entries > 0 ||
+ !(pkt_dev->flags & F_SHARED)))
return -EINVAL;
i += len;
@@ -1257,6 +1259,10 @@ static ssize_t pktgen_if_write(struct file *file,
((pkt_dev->xmit_mode == M_START_XMIT) &&
(!(pkt_dev->odev->priv_flags & IFF_TX_SKB_SHARING)))))
return -ENOTSUPP;
+
+ if (value > 1 && !(pkt_dev->flags & F_SHARED))
+ return -EINVAL;
+
pkt_dev->burst = value < 1 ? 1 : value;
sprintf(pg_result, "OK: burst=%u", pkt_dev->burst);
return count;
@@ -1318,9 +1324,10 @@ static ssize_t pktgen_if_write(struct file *file,
return count;
}
if (!strcmp(name, "flag")) {
+ bool disable = false;
__u32 flag;
char f[32];
- bool disable = false;
+ char *end;
memset(f, 0, 32);
len = strn_len(&user_buffer[i], sizeof(f) - 1);
@@ -1332,28 +1339,42 @@ static ssize_t pktgen_if_write(struct file *file,
i += len;
flag = pktgen_read_flag(f, &disable);
-
if (flag) {
- if (disable)
+ if (disable) {
+ /* If the "clone_skb" or "burst" parameters are configured, pktgen
+ * still needs to hold a reference on the skb after transmit, so the
+ * skb must remain shared.
+ */
+ if (flag == F_SHARED && (pkt_dev->clone_skb ||
+ pkt_dev->burst > 1))
+ return -EINVAL;
pkt_dev->flags &= ~flag;
- else
+ } else {
pkt_dev->flags |= flag;
- } else {
- sprintf(pg_result,
- "Flag -:%s:- unknown\nAvailable flags, (prepend ! to un-set flag):\n%s",
- f,
- "IPSRC_RND, IPDST_RND, UDPSRC_RND, UDPDST_RND, "
- "MACSRC_RND, MACDST_RND, TXSIZE_RND, IPV6, "
- "MPLS_RND, VID_RND, SVID_RND, FLOW_SEQ, "
- "QUEUE_MAP_RND, QUEUE_MAP_CPU, UDPCSUM, "
- "NO_TIMESTAMP, "
-#ifdef CONFIG_XFRM
- "IPSEC, "
-#endif
- "NODE_ALLOC\n");
+ }
+
+ sprintf(pg_result, "OK: flags=0x%x", pkt_dev->flags);
return count;
}
- sprintf(pg_result, "OK: flags=0x%x", pkt_dev->flags);
+
+ /* Unknown flag */
+ end = pkt_dev->result + sizeof(pkt_dev->result);
+ pg_result += sprintf(pg_result,
+ "Flag -:%s:- unknown\n"
+ "Available flags, (prepend ! to un-set flag):\n", f);
+
+ for (int n = 0; n < NR_PKT_FLAGS && pg_result < end; n++) {
+ if (!IS_ENABLED(CONFIG_XFRM) && n == IPSEC_SHIFT)
+ continue;
+ pg_result += snprintf(pg_result, end - pg_result,
+ "%s, ", pkt_flag_names[n]);
+ }
+ if (!WARN_ON_ONCE(pg_result >= end)) {
+ /* Remove the comma and whitespace at the end */
+ *(pg_result - 2) = '\0';
+ }
+
return count;
}
if (!strcmp(name, "dst_min") || !strcmp(name, "dst")) {
@@ -3440,12 +3461,24 @@ static void pktgen_wait_for_skb(struct pktgen_dev *pkt_dev)
static void pktgen_xmit(struct pktgen_dev *pkt_dev)
{
- unsigned int burst = READ_ONCE(pkt_dev->burst);
+ bool skb_shared = !!(READ_ONCE(pkt_dev->flags) & F_SHARED);
struct net_device *odev = pkt_dev->odev;
struct netdev_queue *txq;
+ unsigned int burst = 1;
struct sk_buff *skb;
+ int clone_skb = 0;
int ret;
+ /* If 'skb_shared' is false, skip reading any newly written values of
+ * 'burst' and 'clone_skb' so concurrent changes cannot slip in mid-run;
+ * the updated configuration is picked up on the next pktgen_xmit() run.
+ */
+ if (skb_shared) {
+ burst = READ_ONCE(pkt_dev->burst);
+ clone_skb = READ_ONCE(pkt_dev->clone_skb);
+ }
+
/* If device is offline, then don't send */
if (unlikely(!netif_running(odev) || !netif_carrier_ok(odev))) {
pktgen_stop_device(pkt_dev);
@@ -3462,7 +3495,7 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
/* If no skb or clone count exhausted then get new one */
if (!pkt_dev->skb || (pkt_dev->last_ok &&
- ++pkt_dev->clone_count >= pkt_dev->clone_skb)) {
+ ++pkt_dev->clone_count >= clone_skb)) {
/* build a new pkt */
kfree_skb(pkt_dev->skb);
@@ -3483,7 +3516,8 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
if (pkt_dev->xmit_mode == M_NETIF_RECEIVE) {
skb = pkt_dev->skb;
skb->protocol = eth_type_trans(skb, skb->dev);
- refcount_add(burst, &skb->users);
+ if (skb_shared)
+ refcount_add(burst, &skb->users);
local_bh_disable();
do {
ret = netif_receive_skb(skb);
@@ -3491,6 +3525,10 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
pkt_dev->errors++;
pkt_dev->sofar++;
pkt_dev->seq_num++;
+ if (unlikely(!skb_shared)) {
+ pkt_dev->skb = NULL;
+ break;
+ }
if (refcount_read(&skb->users) != burst) {
/* skb was queued by rps/rfs or taps,
* so cannot reuse this skb
@@ -3509,9 +3547,14 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
goto out; /* Skips xmit_mode M_START_XMIT */
} else if (pkt_dev->xmit_mode == M_QUEUE_XMIT) {
local_bh_disable();
- refcount_inc(&pkt_dev->skb->users);
+ if (skb_shared)
+ refcount_inc(&pkt_dev->skb->users);
ret = dev_queue_xmit(pkt_dev->skb);
+
+ if (!skb_shared && dev_xmit_complete(ret))
+ pkt_dev->skb = NULL;
+
switch (ret) {
case NET_XMIT_SUCCESS:
pkt_dev->sofar++;
@@ -3549,11 +3592,15 @@ static void pktgen_xmit(struct pktgen_dev *pkt_dev)
pkt_dev->last_ok = 0;
goto unlock;
}
- refcount_add(burst, &pkt_dev->skb->users);
+ if (skb_shared)
+ refcount_add(burst, &pkt_dev->skb->users);
xmit_more:
ret = netdev_start_xmit(pkt_dev->skb, odev, txq, --burst > 0);
+ if (!skb_shared && dev_xmit_complete(ret))
+ pkt_dev->skb = NULL;
+
switch (ret) {
case NETDEV_TX_OK:
pkt_dev->last_ok = 1;
@@ -3575,7 +3622,8 @@ xmit_more:
fallthrough;
case NETDEV_TX_BUSY:
/* Retry it next time */
- refcount_dec(&(pkt_dev->skb->users));
+ if (skb_shared)
+ refcount_dec(&pkt_dev->skb->users);
pkt_dev->last_ok = 0;
}
if (unlikely(burst))
@@ -3588,7 +3636,8 @@ out:
/* If pkt_dev->count is zero, then run forever */
if ((pkt_dev->count != 0) && (pkt_dev->sofar >= pkt_dev->count)) {
- pktgen_wait_for_skb(pkt_dev);
+ if (pkt_dev->skb)
+ pktgen_wait_for_skb(pkt_dev);
/* Done with this */
pktgen_stop_device(pkt_dev);
@@ -3771,6 +3820,7 @@ static int pktgen_add_device(struct pktgen_thread *t, const char *ifname)
pkt_dev->svlan_id = 0xffff;
pkt_dev->burst = 1;
pkt_dev->node = NUMA_NO_NODE;
+ pkt_dev->flags = F_SHARED; /* SKB shared by default */
err = pktgen_setup_dev(t->net, pkt_dev, ifname);
if (err)
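
The new F_SHARED flag defaults to on, so existing setups keep the historical shared-skb behaviour; clearing it makes pktgen hand the stack a fresh, non-shared skb on every send (burst and clone_skb then fall back to 1 and 0, as the hunk above shows). Below is a minimal, hedged userspace sketch of toggling it through the usual /proc/net/pktgen interface; the flag keyword is assumed to be exposed as "SHARED", and the device file name is only an example.

/* Sketch: disable skb sharing for an already-configured pktgen device.
 * Assumes the device was previously added to a pktgen thread and that
 * the flag keyword is "SHARED" (prepend '!' to clear a flag).
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int pgset(const char *dev_file, const char *cmd)
{
	char buf[128];
	int fd, len;

	fd = open(dev_file, O_WRONLY);
	if (fd < 0) {
		perror(dev_file);
		return -1;
	}
	len = snprintf(buf, sizeof(buf), "%s\n", cmd);
	if (write(fd, buf, len) != len)
		perror("write");
	close(fd);
	return 0;
}

int main(void)
{
	const char *dev = "/proc/net/pktgen/eth0@0"; /* example device file */

	/* With sharing off, pktgen uses burst = 1 and clone_skb = 0
	 * regardless of the configured values.
	 */
	return pgset(dev, "flag !SHARED");
}
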
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
index 53c377d054f0..e8431c6c8490 100644
--- a/net/core/rtnetlink.c
+++ b/net/core/rtnetlink.c
@@ -57,6 +57,7 @@
#if IS_ENABLED(CONFIG_IPV6)
#include <net/addrconf.h>
#endif
+#include <linux/dpll.h>
#include "dev.h"
@@ -1055,6 +1056,15 @@ static size_t rtnl_devlink_port_size(const struct net_device *dev)
return size;
}
+static size_t rtnl_dpll_pin_size(const struct net_device *dev)
+{
+ size_t size = nla_total_size(0); /* nest IFLA_DPLL_PIN */
+
+ size += dpll_msg_pin_handle_size(netdev_dpll_pin(dev));
+
+ return size;
+}
+
static noinline size_t if_nlmsg_size(const struct net_device *dev,
u32 ext_filter_mask)
{
@@ -1111,6 +1121,7 @@ static noinline size_t if_nlmsg_size(const struct net_device *dev,
+ rtnl_prop_list_size(dev)
+ nla_total_size(MAX_ADDR_LEN) /* IFLA_PERM_ADDRESS */
+ rtnl_devlink_port_size(dev)
+ + rtnl_dpll_pin_size(dev)
+ 0;
}
@@ -1774,6 +1785,28 @@ nest_cancel:
return ret;
}
+static int rtnl_fill_dpll_pin(struct sk_buff *skb,
+ const struct net_device *dev)
+{
+ struct nlattr *dpll_pin_nest;
+ int ret;
+
+ dpll_pin_nest = nla_nest_start(skb, IFLA_DPLL_PIN);
+ if (!dpll_pin_nest)
+ return -EMSGSIZE;
+
+ ret = dpll_msg_add_pin_handle(skb, netdev_dpll_pin(dev));
+ if (ret < 0)
+ goto nest_cancel;
+
+ nla_nest_end(skb, dpll_pin_nest);
+ return 0;
+
+nest_cancel:
+ nla_nest_cancel(skb, dpll_pin_nest);
+ return ret;
+}
+
static int rtnl_fill_ifinfo(struct sk_buff *skb,
struct net_device *dev, struct net *src_net,
int type, u32 pid, u32 seq, u32 change,
@@ -1916,6 +1949,9 @@ static int rtnl_fill_ifinfo(struct sk_buff *skb,
if (rtnl_fill_devlink_port(skb, dev))
goto nla_put_failure;
+ if (rtnl_fill_dpll_pin(skb, dev))
+ goto nla_put_failure;
+
nlmsg_end(skb, nlh);
return 0;
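
RTM_NEWLINK messages now carry an IFLA_DPLL_PIN nest holding the device's dpll pin handle, sized by rtnl_dpll_pin_size() and filled by rtnl_fill_dpll_pin() above. A hedged userspace sketch of locating that nest in a received link message follows; it uses only the standard rtnetlink attribute macros, and IFLA_DPLL_PIN requires uapi headers from a kernel that includes this change.

/* Sketch: find the IFLA_DPLL_PIN nest in an RTM_NEWLINK payload.
 * 'nlh' points at a complete message received from a NETLINK_ROUTE
 * socket; returns the nest attribute or NULL if it is absent.
 */
#include <stddef.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>

static struct rtattr *find_dpll_pin(struct nlmsghdr *nlh)
{
	struct ifinfomsg *ifi = NLMSG_DATA(nlh);
	int len = nlh->nlmsg_len - NLMSG_LENGTH(sizeof(*ifi));
	struct rtattr *rta;

	for (rta = IFLA_RTA(ifi); RTA_OK(rta, len); rta = RTA_NEXT(rta, len)) {
		if (rta->rta_type == IFLA_DPLL_PIN)
			return rta; /* nested dpll pin handle attributes */
	}
	return NULL;
}
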
@@ -4331,13 +4367,6 @@ int ndo_dflt_fdb_del(struct ndmsg *ndm,
}
EXPORT_SYMBOL(ndo_dflt_fdb_del);
-static const struct nla_policy fdb_del_bulk_policy[NDA_MAX + 1] = {
- [NDA_VLAN] = { .type = NLA_U16 },
- [NDA_IFINDEX] = NLA_POLICY_MIN(NLA_S32, 1),
- [NDA_NDM_STATE_MASK] = { .type = NLA_U16 },
- [NDA_NDM_FLAGS_MASK] = { .type = NLA_U8 },
-};
-
static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
struct netlink_ext_ack *extack)
{
@@ -4358,8 +4387,10 @@ static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
err = nlmsg_parse_deprecated(nlh, sizeof(*ndm), tb, NDA_MAX,
NULL, extack);
} else {
- err = nlmsg_parse(nlh, sizeof(*ndm), tb, NDA_MAX,
- fdb_del_bulk_policy, extack);
+ /* For bulk delete, the driver parses the message against its
+ * own netlink policy.
+ */
+ err = nlmsg_parse(nlh, sizeof(*ndm), tb, NDA_MAX, NULL, extack);
}
if (err < 0)
return err;
@@ -4382,6 +4413,10 @@ static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
return -EINVAL;
}
addr = nla_data(tb[NDA_LLADDR]);
+
+ err = fdb_vid_parse(tb[NDA_VLAN], &vid, extack);
+ if (err)
+ return err;
}
if (dev->type != ARPHRD_ETHER) {
@@ -4389,10 +4424,6 @@ static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
return -EINVAL;
}
- err = fdb_vid_parse(tb[NDA_VLAN], &vid, extack);
- if (err)
- return err;
-
err = -EOPNOTSUPP;
/* Support fdb on master device the net/bridge default case */
@@ -4406,8 +4437,7 @@ static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
err = ops->ndo_fdb_del(ndm, tb, dev, addr, vid, extack);
} else {
if (ops->ndo_fdb_del_bulk)
- err = ops->ndo_fdb_del_bulk(ndm, tb, dev, vid,
- extack);
+ err = ops->ndo_fdb_del_bulk(nlh, dev, extack);
}
if (err)
@@ -4428,8 +4458,7 @@ static int rtnl_fdb_del(struct sk_buff *skb, struct nlmsghdr *nlh,
/* in case err was cleared by NTF_MASTER call */
err = -EOPNOTSUPP;
if (ops->ndo_fdb_del_bulk)
- err = ops->ndo_fdb_del_bulk(ndm, tb, dev, vid,
- extack);
+ err = ops->ndo_fdb_del_bulk(nlh, dev, extack);
}
if (!err) {
@@ -6190,6 +6219,93 @@ out:
return skb->len;
}
+static int rtnl_validate_mdb_entry_get(const struct nlattr *attr,
+ struct netlink_ext_ack *extack)
+{
+ struct br_mdb_entry *entry = nla_data(attr);
+
+ if (nla_len(attr) != sizeof(struct br_mdb_entry)) {
+ NL_SET_ERR_MSG_ATTR(extack, attr, "Invalid attribute length");
+ return -EINVAL;
+ }
+
+ if (entry->ifindex) {
+ NL_SET_ERR_MSG(extack, "Entry ifindex cannot be specified");
+ return -EINVAL;
+ }
+
+ if (entry->state) {
+ NL_SET_ERR_MSG(extack, "Entry state cannot be specified");
+ return -EINVAL;
+ }
+
+ if (entry->flags) {
+ NL_SET_ERR_MSG(extack, "Entry flags cannot be specified");
+ return -EINVAL;
+ }
+
+ if (entry->vid >= VLAN_VID_MASK) {
+ NL_SET_ERR_MSG(extack, "Invalid entry VLAN id");
+ return -EINVAL;
+ }
+
+ if (entry->addr.proto != htons(ETH_P_IP) &&
+ entry->addr.proto != htons(ETH_P_IPV6) &&
+ entry->addr.proto != 0) {
+ NL_SET_ERR_MSG(extack, "Unknown entry protocol");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static const struct nla_policy mdba_get_policy[MDBA_GET_ENTRY_MAX + 1] = {
+ [MDBA_GET_ENTRY] = NLA_POLICY_VALIDATE_FN(NLA_BINARY,
+ rtnl_validate_mdb_entry_get,
+ sizeof(struct br_mdb_entry)),
+ [MDBA_GET_ENTRY_ATTRS] = { .type = NLA_NESTED },
+};
+
+static int rtnl_mdb_get(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ struct netlink_ext_ack *extack)
+{
+ struct nlattr *tb[MDBA_GET_ENTRY_MAX + 1];
+ struct net *net = sock_net(in_skb->sk);
+ struct br_port_msg *bpm;
+ struct net_device *dev;
+ int err;
+
+ err = nlmsg_parse(nlh, sizeof(struct br_port_msg), tb,
+ MDBA_GET_ENTRY_MAX, mdba_get_policy, extack);
+ if (err)
+ return err;
+
+ bpm = nlmsg_data(nlh);
+ if (!bpm->ifindex) {
+ NL_SET_ERR_MSG(extack, "Invalid ifindex");
+ return -EINVAL;
+ }
+
+ dev = __dev_get_by_index(net, bpm->ifindex);
+ if (!dev) {
+ NL_SET_ERR_MSG(extack, "Device doesn't exist");
+ return -ENODEV;
+ }
+
+ if (NL_REQ_ATTR_CHECK(extack, NULL, tb, MDBA_GET_ENTRY)) {
+ NL_SET_ERR_MSG(extack, "Missing MDBA_GET_ENTRY attribute");
+ return -EINVAL;
+ }
+
+ if (!dev->netdev_ops->ndo_mdb_get) {
+ NL_SET_ERR_MSG(extack, "Device does not support MDB operations");
+ return -EOPNOTSUPP;
+ }
+
+ return dev->netdev_ops->ndo_mdb_get(dev, tb, NETLINK_CB(in_skb).portid,
+ nlh->nlmsg_seq, extack);
+}
+
static int rtnl_validate_mdb_entry(const struct nlattr *attr,
struct netlink_ext_ack *extack)
{
@@ -6566,7 +6682,7 @@ void __init rtnetlink_init(void)
0);
rtnl_register(PF_UNSPEC, RTM_SETSTATS, rtnl_stats_set, NULL, 0);
- rtnl_register(PF_BRIDGE, RTM_GETMDB, NULL, rtnl_mdb_dump, 0);
+ rtnl_register(PF_BRIDGE, RTM_GETMDB, rtnl_mdb_get, rtnl_mdb_dump, 0);
rtnl_register(PF_BRIDGE, RTM_NEWMDB, rtnl_mdb_add, NULL, 0);
rtnl_register(PF_BRIDGE, RTM_DELMDB, rtnl_mdb_del, NULL, 0);
}
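
The new RTM_GETMDB doit lets userspace request a single MDB entry instead of dumping the whole table: the message carries a struct br_port_msg with the bridge ifindex plus a mandatory MDBA_GET_ENTRY attribute holding a struct br_mdb_entry in which only the group address, protocol and (optionally) vid are set, per rtnl_validate_mdb_entry_get() above. A hedged sketch of building such a request follows; MDBA_GET_ENTRY needs uapi headers from a kernel with this change, and the bridge ifindex and group address are placeholders.

/* Sketch: request one bridge MDB entry via the new RTM_GETMDB doit. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <linux/if_bridge.h>
#include <linux/if_ether.h>

int main(void)
{
	struct sockaddr_nl dst = { .nl_family = AF_NETLINK };
	struct {
		struct nlmsghdr nlh;
		struct br_port_msg bpm;
		char attrbuf[NLA_HDRLEN + NLA_ALIGN(sizeof(struct br_mdb_entry))];
	} req;
	struct br_mdb_entry entry;
	struct nlattr *nla;
	int fd;

	memset(&req, 0, sizeof(req));
	memset(&entry, 0, sizeof(entry));

	/* Only the group address, protocol and vid may be filled in;
	 * ifindex/state/flags must stay zero or the kernel rejects the get.
	 */
	entry.addr.proto = htons(ETH_P_IP);
	inet_pton(AF_INET, "239.1.1.1", &entry.addr.u.ip4); /* example group */

	req.bpm.family = AF_BRIDGE;
	req.bpm.ifindex = 4; /* example bridge ifindex */

	nla = (struct nlattr *)req.attrbuf;
	nla->nla_type = MDBA_GET_ENTRY;
	nla->nla_len = NLA_HDRLEN + sizeof(entry);
	memcpy((char *)nla + NLA_HDRLEN, &entry, sizeof(entry));

	req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(req.bpm)) + NLA_ALIGN(nla->nla_len);
	req.nlh.nlmsg_type = RTM_GETMDB;
	req.nlh.nlmsg_flags = NLM_F_REQUEST;
	req.nlh.nlmsg_seq = 1;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (sendto(fd, &req, req.nlh.nlmsg_len, 0,
		   (struct sockaddr *)&dst, sizeof(dst)) < 0)
		perror("RTM_GETMDB");
	/* The reply (an RTM_NEWMDB message or a netlink error) would be
	 * read back with recv(); parsing is omitted in this sketch.
	 */
	close(fd);
	return 0;
}
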
diff --git a/net/core/selftests.c b/net/core/selftests.c
index acb1ee97bbd3..94fe3146a959 100644
--- a/net/core/selftests.c
+++ b/net/core/selftests.c
@@ -397,14 +397,11 @@ EXPORT_SYMBOL_GPL(net_selftest_get_count);
void net_selftest_get_strings(u8 *data)
{
- u8 *p = data;
int i;
- for (i = 0; i < net_selftest_get_count(); i++) {
- snprintf(p, ETH_GSTRING_LEN, "%2d. %s", i + 1,
- net_selftests[i].name);
- p += ETH_GSTRING_LEN;
- }
+ for (i = 0; i < net_selftest_get_count(); i++)
+ ethtool_sprintf(&data, "%2d. %s", i + 1,
+ net_selftests[i].name);
}
EXPORT_SYMBOL_GPL(net_selftest_get_strings);
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 2bfa6a7ba244..b157efea5dea 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -848,6 +848,8 @@ EXPORT_SYMBOL(__napi_alloc_skb);
void skb_add_rx_frag(struct sk_buff *skb, int i, struct page *page, int off,
int size, unsigned int truesize)
{
+ DEBUG_NET_WARN_ON_ONCE(size > truesize);
+
skb_fill_page_desc(skb, i, page, off, size);
skb->len += size;
skb->data_len += size;
@@ -860,6 +862,8 @@ void skb_coalesce_rx_frag(struct sk_buff *skb, int i, int size,
{
skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ DEBUG_NET_WARN_ON_ONCE(size > truesize);
+
skb_frag_size_add(frag, size);
skb->len += size;
skb->data_len += size;
@@ -3719,10 +3723,19 @@ EXPORT_SYMBOL(skb_dequeue_tail);
void skb_queue_purge_reason(struct sk_buff_head *list,
enum skb_drop_reason reason)
{
- struct sk_buff *skb;
+ struct sk_buff_head tmp;
+ unsigned long flags;
+
+ if (skb_queue_empty_lockless(list))
+ return;
- while ((skb = skb_dequeue(list)) != NULL)
- kfree_skb_reason(skb, reason);
+ __skb_queue_head_init(&tmp);
+
+ spin_lock_irqsave(&list->lock, flags);
+ skb_queue_splice_init(list, &tmp);
+ spin_unlock_irqrestore(&list->lock, flags);
+
+ __skb_queue_purge_reason(&tmp, reason);
}
EXPORT_SYMBOL(skb_queue_purge_reason);
@@ -4255,6 +4268,7 @@ static void skb_ts_finish(struct ts_config *conf, struct ts_state *state)
unsigned int skb_find_text(struct sk_buff *skb, unsigned int from,
unsigned int to, struct ts_config *config)
{
+ unsigned int patlen = config->ops->get_pattern_len(config);
struct ts_state state;
unsigned int ret;
@@ -4266,7 +4280,7 @@ unsigned int skb_find_text(struct sk_buff *skb, unsigned int from,
skb_prepare_seq_read(skb, from, to, TS_SKB_CB(&state));
ret = textsearch_find(config, &state);
- return (ret <= to - from ? ret : UINT_MAX);
+ return (ret + patlen <= to - from ? ret : UINT_MAX);
}
EXPORT_SYMBOL(skb_find_text);
@@ -5150,6 +5164,9 @@ struct sk_buff *sock_dequeue_err_skb(struct sock *sk)
bool icmp_next = false;
unsigned long flags;
+ if (skb_queue_empty_lockless(q))
+ return NULL;
+
spin_lock_irqsave(&q->lock, flags);
skb = __skb_dequeue(q);
if (skb && (skb_next = skb_peek(q))) {
@@ -5749,7 +5766,7 @@ bool skb_try_coalesce(struct sk_buff *to, struct sk_buff *from,
/* In general, avoid mixing page_pool and non-page_pool allocated
* pages within the same SKB. Additionally avoid dealing with clones
* with page_pool pages, in case the SKB is using page_pool fragment
- * references (PP_FLAG_PAGE_FRAG). Since we only take full page
+ * references (page_pool_alloc_frag()). Since we only take full page
* references for cloned SKBs at the moment that would result in
* inconsistent reference counts.
* In theory we could take full references if @from is cloned and
diff --git a/net/core/sock.c b/net/core/sock.c
index 16584e2dd648..1d28e3e87970 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -600,7 +600,7 @@ struct dst_entry *__sk_dst_check(struct sock *sk, u32 cookie)
INDIRECT_CALL_INET(dst->ops->check, ip6_dst_check, ipv4_dst_check,
dst, cookie) == NULL) {
sk_tx_queue_clear(sk);
- sk->sk_dst_pending_confirm = 0;
+ WRITE_ONCE(sk->sk_dst_pending_confirm, 0);
RCU_INIT_POINTER(sk->sk_dst_cache, NULL);
dst_release(dst);
return NULL;
@@ -759,7 +759,7 @@ out:
return ret;
}
-bool sk_mc_loop(struct sock *sk)
+bool sk_mc_loop(const struct sock *sk)
{
if (dev_recursion_level())
return false;
@@ -771,7 +771,7 @@ bool sk_mc_loop(struct sock *sk)
return inet_test_bit(MC_LOOP, sk);
#if IS_ENABLED(CONFIG_IPV6)
case AF_INET6:
- return inet6_sk(sk)->mc_loop;
+ return inet6_test_bit(MC6_LOOP, sk);
#endif
}
WARN_ON_ONCE(1);
@@ -806,9 +806,7 @@ EXPORT_SYMBOL(sock_no_linger);
void sock_set_priority(struct sock *sk, u32 priority)
{
- lock_sock(sk);
WRITE_ONCE(sk->sk_priority, priority);
- release_sock(sk);
}
EXPORT_SYMBOL(sock_set_priority);
@@ -1118,6 +1116,83 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
valbool = val ? 1 : 0;
+ /* handle options which do not require locking the socket. */
+ switch (optname) {
+ case SO_PRIORITY:
+ if ((val >= 0 && val <= 6) ||
+ sockopt_ns_capable(sock_net(sk)->user_ns, CAP_NET_RAW) ||
+ sockopt_ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN)) {
+ sock_set_priority(sk, val);
+ return 0;
+ }
+ return -EPERM;
+ case SO_PASSSEC:
+ assign_bit(SOCK_PASSSEC, &sock->flags, valbool);
+ return 0;
+ case SO_PASSCRED:
+ assign_bit(SOCK_PASSCRED, &sock->flags, valbool);
+ return 0;
+ case SO_PASSPIDFD:
+ assign_bit(SOCK_PASSPIDFD, &sock->flags, valbool);
+ return 0;
+ case SO_TYPE:
+ case SO_PROTOCOL:
+ case SO_DOMAIN:
+ case SO_ERROR:
+ return -ENOPROTOOPT;
+#ifdef CONFIG_NET_RX_BUSY_POLL
+ case SO_BUSY_POLL:
+ if (val < 0)
+ return -EINVAL;
+ WRITE_ONCE(sk->sk_ll_usec, val);
+ return 0;
+ case SO_PREFER_BUSY_POLL:
+ if (valbool && !sockopt_capable(CAP_NET_ADMIN))
+ return -EPERM;
+ WRITE_ONCE(sk->sk_prefer_busy_poll, valbool);
+ return 0;
+ case SO_BUSY_POLL_BUDGET:
+ if (val > READ_ONCE(sk->sk_busy_poll_budget) &&
+ !sockopt_capable(CAP_NET_ADMIN))
+ return -EPERM;
+ if (val < 0 || val > U16_MAX)
+ return -EINVAL;
+ WRITE_ONCE(sk->sk_busy_poll_budget, val);
+ return 0;
+#endif
+ case SO_MAX_PACING_RATE:
+ {
+ unsigned long ulval = (val == ~0U) ? ~0UL : (unsigned int)val;
+ unsigned long pacing_rate;
+
+ if (sizeof(ulval) != sizeof(val) &&
+ optlen >= sizeof(ulval) &&
+ copy_from_sockptr(&ulval, optval, sizeof(ulval))) {
+ return -EFAULT;
+ }
+ if (ulval != ~0UL)
+ cmpxchg(&sk->sk_pacing_status,
+ SK_PACING_NONE,
+ SK_PACING_NEEDED);
+ /* Pairs with READ_ONCE() from sk_getsockopt() */
+ WRITE_ONCE(sk->sk_max_pacing_rate, ulval);
+ pacing_rate = READ_ONCE(sk->sk_pacing_rate);
+ if (ulval < pacing_rate)
+ WRITE_ONCE(sk->sk_pacing_rate, ulval);
+ return 0;
+ }
+ case SO_TXREHASH:
+ if (val < -1 || val > 1)
+ return -EINVAL;
+ if ((u8)val == SOCK_TXREHASH_DEFAULT)
+ val = READ_ONCE(sock_net(sk)->core.sysctl_txrehash);
+ /* Paired with READ_ONCE() in tcp_rtx_synack()
+ * and sk_getsockopt().
+ */
+ WRITE_ONCE(sk->sk_txrehash, (u8)val);
+ return 0;
+ }
+
sockopt_lock_sock(sk);
switch (optname) {
@@ -1133,12 +1208,6 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
case SO_REUSEPORT:
sk->sk_reuseport = valbool;
break;
- case SO_TYPE:
- case SO_PROTOCOL:
- case SO_DOMAIN:
- case SO_ERROR:
- ret = -ENOPROTOOPT;
- break;
case SO_DONTROUTE:
sock_valbool_flag(sk, SOCK_LOCALROUTE, valbool);
sk_dst_reset(sk);
@@ -1213,15 +1282,6 @@ set_sndbuf:
sk->sk_no_check_tx = valbool;
break;
- case SO_PRIORITY:
- if ((val >= 0 && val <= 6) ||
- sockopt_ns_capable(sock_net(sk)->user_ns, CAP_NET_RAW) ||
- sockopt_ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN))
- WRITE_ONCE(sk->sk_priority, val);
- else
- ret = -EPERM;
- break;
-
case SO_LINGER:
if (optlen < sizeof(ling)) {
ret = -EINVAL; /* 1003.1g */
@@ -1247,14 +1307,6 @@ set_sndbuf:
case SO_BSDCOMPAT:
break;
- case SO_PASSCRED:
- assign_bit(SOCK_PASSCRED, &sock->flags, valbool);
- break;
-
- case SO_PASSPIDFD:
- assign_bit(SOCK_PASSPIDFD, &sock->flags, valbool);
- break;
-
case SO_TIMESTAMP_OLD:
case SO_TIMESTAMP_NEW:
case SO_TIMESTAMPNS_OLD:
@@ -1360,9 +1412,6 @@ set_sndbuf:
sock_valbool_flag(sk, SOCK_FILTER_LOCKED, valbool);
break;
- case SO_PASSSEC:
- assign_bit(SOCK_PASSSEC, &sock->flags, valbool);
- break;
case SO_MARK:
if (!sockopt_ns_capable(sock_net(sk)->user_ns, CAP_NET_RAW) &&
!sockopt_ns_capable(sock_net(sk)->user_ns, CAP_NET_ADMIN)) {
@@ -1404,50 +1453,7 @@ set_sndbuf:
sock_valbool_flag(sk, SOCK_SELECT_ERR_QUEUE, valbool);
break;
-#ifdef CONFIG_NET_RX_BUSY_POLL
- case SO_BUSY_POLL:
- if (val < 0)
- ret = -EINVAL;
- else
- WRITE_ONCE(sk->sk_ll_usec, val);
- break;
- case SO_PREFER_BUSY_POLL:
- if (valbool && !sockopt_capable(CAP_NET_ADMIN))
- ret = -EPERM;
- else
- WRITE_ONCE(sk->sk_prefer_busy_poll, valbool);
- break;
- case SO_BUSY_POLL_BUDGET:
- if (val > READ_ONCE(sk->sk_busy_poll_budget) && !sockopt_capable(CAP_NET_ADMIN)) {
- ret = -EPERM;
- } else {
- if (val < 0 || val > U16_MAX)
- ret = -EINVAL;
- else
- WRITE_ONCE(sk->sk_busy_poll_budget, val);
- }
- break;
-#endif
- case SO_MAX_PACING_RATE:
- {
- unsigned long ulval = (val == ~0U) ? ~0UL : (unsigned int)val;
-
- if (sizeof(ulval) != sizeof(val) &&
- optlen >= sizeof(ulval) &&
- copy_from_sockptr(&ulval, optval, sizeof(ulval))) {
- ret = -EFAULT;
- break;
- }
- if (ulval != ~0UL)
- cmpxchg(&sk->sk_pacing_status,
- SK_PACING_NONE,
- SK_PACING_NEEDED);
- /* Pairs with READ_ONCE() from sk_getsockopt() */
- WRITE_ONCE(sk->sk_max_pacing_rate, ulval);
- sk->sk_pacing_rate = min(sk->sk_pacing_rate, ulval);
- break;
- }
case SO_INCOMING_CPU:
reuseport_update_incoming_cpu(sk, val);
break;
@@ -1532,19 +1538,6 @@ set_sndbuf:
break;
}
- case SO_TXREHASH:
- if (val < -1 || val > 1) {
- ret = -EINVAL;
- break;
- }
- if ((u8)val == SOCK_TXREHASH_DEFAULT)
- val = READ_ONCE(sock_net(sk)->core.sysctl_txrehash);
- /* Paired with READ_ONCE() in tcp_rtx_synack()
- * and sk_getsockopt().
- */
- WRITE_ONCE(sk->sk_txrehash, (u8)val);
- break;
-
default:
ret = -ENOPROTOOPT;
break;
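
Several socket options (SO_PRIORITY, SO_PASSCRED/SO_PASSPIDFD/SO_PASSSEC, the busy-poll knobs, SO_MAX_PACING_RATE and SO_TXREHASH) are now handled before taking the socket lock, using WRITE_ONCE() and atomic bit helpers, so setting them no longer contends on lock_sock(). Nothing changes for userspace; a small hedged example exercising two of them follows (the SO_TXREHASH fallback value is an assumption for libc headers that predate the option).

/* Sketch: set a couple of the now-lockless socket options. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef SO_TXREHASH
#define SO_TXREHASH 74 /* value from asm-generic/socket.h; fallback assumption */
#endif

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	int prio = 6;   /* 0..6 are allowed without CAP_NET_ADMIN/CAP_NET_RAW */
	int rehash = 1; /* SOCK_TXREHASH_ENABLED */

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)))
		perror("SO_PRIORITY");
	if (setsockopt(fd, SOL_SOCKET, SO_TXREHASH, &rehash, sizeof(rehash)))
		perror("SO_TXREHASH");
	close(fd);
	return 0;
}
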
@@ -3001,6 +2994,11 @@ void __sk_flush_backlog(struct sock *sk)
{
spin_lock_bh(&sk->sk_lock.slock);
__release_sock(sk);
+
+ if (sk->sk_prot->release_cb)
+ INDIRECT_CALL_INET_1(sk->sk_prot->release_cb,
+ tcp_release_cb, sk);
+
spin_unlock_bh(&sk->sk_lock.slock);
}
EXPORT_SYMBOL_GPL(__sk_flush_backlog);
@@ -3037,21 +3035,29 @@ EXPORT_SYMBOL(sk_wait_data);
* @amt: pages to allocate
* @kind: allocation type
*
- * Similar to __sk_mem_schedule(), but does not update sk_forward_alloc
+ * Similar to __sk_mem_schedule(), but does not update sk_forward_alloc.
+ *
+ * Unlike the limits shared globally among all sockets of the same protocol,
+ * consuming the budget of a memcg does not directly affect other memcgs.
+ * So be optimistic about the memcg's tolerance, and leave it to the callers
+ * to decide whether or not to raise allocated, via sk_under_memory_pressure()
+ * or its variants.
*/
int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
{
- bool memcg_charge = mem_cgroup_sockets_enabled && sk->sk_memcg;
+ struct mem_cgroup *memcg = mem_cgroup_sockets_enabled ? sk->sk_memcg : NULL;
struct proto *prot = sk->sk_prot;
- bool charged = true;
+ bool charged = false;
long allocated;
sk_memory_allocated_add(sk, amt);
allocated = sk_memory_allocated(sk);
- if (memcg_charge &&
- !(charged = mem_cgroup_charge_skmem(sk->sk_memcg, amt,
- gfp_memcg_charge())))
- goto suppress_allocation;
+
+ if (memcg) {
+ if (!mem_cgroup_charge_skmem(memcg, amt, gfp_memcg_charge()))
+ goto suppress_allocation;
+ charged = true;
+ }
/* Under limit. */
if (allocated <= sk_prot_mem_limits(sk, 0)) {
@@ -3067,7 +3073,14 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
if (allocated > sk_prot_mem_limits(sk, 2))
goto suppress_allocation;
- /* guarantee minimum buffer size under pressure */
+ /* Guarantee minimum buffer size under pressure (either global
+ * or memcg) to make sure features described in RFC 7323 (TCP
+ * Extensions for High Performance) work properly.
+ *
+ * This rule does NOT apply when the usage exceeds the global or the
+ * memcg hard limit; otherwise a DoS attack could be mounted by spawning
+ * lots of sockets whose usage stays under the minimum buffer size.
+ */
if (kind == SK_MEM_RECV) {
if (atomic_read(&sk->sk_rmem_alloc) < sk_get_rmem0(sk, prot))
return 1;
@@ -3086,8 +3099,17 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
if (sk_has_memory_pressure(sk)) {
u64 alloc;
- if (!sk_under_memory_pressure(sk))
+ /* The following 'average' heuristic is within the
+ * scope of global accounting, so it only makes
+ * sense for global memory pressure.
+ */
+ if (!sk_under_global_memory_pressure(sk))
return 1;
+
+ /* Try to be fair among all the sockets under global
+ * pressure by allowing the ones whose usage is below
+ * average to raise their allocation.
+ */
alloc = sk_sockets_allocated_read_positive(sk);
if (sk_prot_mem_limits(sk, 2) > alloc *
sk_mem_pages(sk->sk_wmem_queued +
@@ -3106,8 +3128,8 @@ suppress_allocation:
*/
if (sk->sk_wmem_queued + size >= sk->sk_sndbuf) {
/* Force charge with __GFP_NOFAIL */
- if (memcg_charge && !charged) {
- mem_cgroup_charge_skmem(sk->sk_memcg, amt,
+ if (memcg && !charged) {
+ mem_cgroup_charge_skmem(memcg, amt,
gfp_memcg_charge() | __GFP_NOFAIL);
}
return 1;
@@ -3119,8 +3141,8 @@ suppress_allocation:
sk_memory_allocated_sub(sk, amt);
- if (memcg_charge && charged)
- mem_cgroup_uncharge_skmem(sk->sk_memcg, amt);
+ if (charged)
+ mem_cgroup_uncharge_skmem(memcg, amt);
return 0;
}
@@ -3519,11 +3541,9 @@ void release_sock(struct sock *sk)
if (sk->sk_backlog.tail)
__release_sock(sk);
- /* Warning : release_cb() might need to release sk ownership,
- * ie call sock_release_ownership(sk) before us.
- */
if (sk->sk_prot->release_cb)
- sk->sk_prot->release_cb(sk);
+ INDIRECT_CALL_INET_1(sk->sk_prot->release_cb,
+ tcp_release_cb, sk);
sock_release_ownership(sk);
if (waitqueue_active(&sk->sk_lock.wq))
diff --git a/net/core/xdp.c b/net/core/xdp.c
index a70670fe9a2d..df4789ab512d 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -741,7 +741,7 @@ __bpf_kfunc int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, u32 *hash,
__diag_pop();
BTF_SET8_START(xdp_metadata_kfunc_ids)
-#define XDP_METADATA_KFUNC(_, name) BTF_ID_FLAGS(func, name, KF_TRUSTED_ARGS)
+#define XDP_METADATA_KFUNC(_, __, name, ___) BTF_ID_FLAGS(func, name, KF_TRUSTED_ARGS)
XDP_METADATA_KFUNC_xxx
#undef XDP_METADATA_KFUNC
BTF_SET8_END(xdp_metadata_kfunc_ids)
@@ -752,7 +752,7 @@ static const struct btf_kfunc_id_set xdp_metadata_kfunc_set = {
};
BTF_ID_LIST(xdp_metadata_kfunc_ids_unsorted)
-#define XDP_METADATA_KFUNC(name, str) BTF_ID(func, str)
+#define XDP_METADATA_KFUNC(name, _, str, __) BTF_ID(func, str)
XDP_METADATA_KFUNC_xxx
#undef XDP_METADATA_KFUNC
diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
index 69453b936bd5..1b8cbfda6e5d 100644
--- a/net/dccp/ipv4.c
+++ b/net/dccp/ipv4.c
@@ -511,7 +511,7 @@ static int dccp_v4_send_response(const struct sock *sk, struct request_sock *req
err = ip_build_and_send_pkt(skb, sk, ireq->ir_loc_addr,
ireq->ir_rmt_addr,
rcu_dereference(ireq->ireq_opt),
- inet_sk(sk)->tos);
+ READ_ONCE(inet_sk(sk)->tos));
rcu_read_unlock();
err = net_xmit_eval(err);
}
diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
index c693a570682f..8d344b219f84 100644
--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -180,7 +180,7 @@ static int dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
goto out;
}
- if (!sock_owned_by_user(sk) && np->recverr) {
+ if (!sock_owned_by_user(sk) && inet6_test_bit(RECVERR6, sk)) {
sk->sk_err = err;
sk_error_report(sk);
} else {
@@ -239,7 +239,7 @@ static int dccp_v6_send_response(const struct sock *sk, struct request_sock *req
if (!opt)
opt = rcu_dereference(np->opt);
err = ip6_xmit(sk, skb, &fl6, READ_ONCE(sk->sk_mark), opt,
- np->tclass, sk->sk_priority);
+ np->tclass, READ_ONCE(sk->sk_priority));
rcu_read_unlock();
err = net_xmit_eval(err);
}
@@ -671,10 +671,10 @@ ipv6_pktoptions:
if (np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo)
np->mcast_oif = inet6_iif(opt_skb);
if (np->rxopt.bits.rxhlim || np->rxopt.bits.rxohlim)
- np->mcast_hops = ipv6_hdr(opt_skb)->hop_limit;
+ WRITE_ONCE(np->mcast_hops, ipv6_hdr(opt_skb)->hop_limit);
if (np->rxopt.bits.rxflow || np->rxopt.bits.rxtclass)
np->rcv_flowinfo = ip6_flowinfo(ipv6_hdr(opt_skb));
- if (np->repflow)
+ if (inet6_test_bit(REPFLOW, sk))
np->flow_label = ip6_flowlabel(ipv6_hdr(opt_skb));
if (ipv6_opt_accepted(sk, opt_skb,
&DCCP_SKB_CB(opt_skb)->header.h6)) {
@@ -839,7 +839,7 @@ static int dccp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
memset(&fl6, 0, sizeof(fl6));
- if (np->sndflow) {
+ if (inet6_test_bit(SNDFLOW, sk)) {
fl6.flowlabel = usin->sin6_flowinfo & IPV6_FLOWINFO_MASK;
IP6_ECN_flow_init(fl6.flowlabel);
if (fl6.flowlabel & IPV6_FLOWLABEL_MASK) {
diff --git a/net/dccp/timer.c b/net/dccp/timer.c
index b3255e87cc7e..a4cfb47b60e5 100644
--- a/net/dccp/timer.c
+++ b/net/dccp/timer.c
@@ -196,8 +196,8 @@ static void dccp_delack_timer(struct timer_list *t)
if (inet_csk_ack_scheduled(sk)) {
if (!inet_csk_in_pingpong_mode(sk)) {
/* Delayed ACK missed: inflate ATO. */
- icsk->icsk_ack.ato = min(icsk->icsk_ack.ato << 1,
- icsk->icsk_rto);
+ icsk->icsk_ack.ato = min_t(u32, icsk->icsk_ack.ato << 1,
+ icsk->icsk_rto);
} else {
/* Delayed ACK missed: leave pingpong mode and
* deflate ATO.
diff --git a/net/devlink/core.c b/net/devlink/core.c
index 6cec4afb01fb..6984877e9f10 100644
--- a/net/devlink/core.c
+++ b/net/devlink/core.c
@@ -16,6 +16,222 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(devlink_trap_report);
DEFINE_XARRAY_FLAGS(devlinks, XA_FLAGS_ALLOC);
+static struct devlink *devlinks_xa_get(unsigned long index)
+{
+ struct devlink *devlink;
+
+ rcu_read_lock();
+ devlink = xa_find(&devlinks, &index, index, DEVLINK_REGISTERED);
+ if (!devlink || !devlink_try_get(devlink))
+ devlink = NULL;
+ rcu_read_unlock();
+ return devlink;
+}
+
+/* The devlink_rels xarray contains 1:1 relationships between a devlink
+ * object and its related nested devlink instance. The xarray index is
+ * used by the nested-in object's code to look up the nested object.
+ */
+static DEFINE_XARRAY_FLAGS(devlink_rels, XA_FLAGS_ALLOC1);
+
+#define DEVLINK_REL_IN_USE XA_MARK_0
+
+struct devlink_rel {
+ u32 index;
+ refcount_t refcount;
+ u32 devlink_index;
+ struct {
+ u32 devlink_index;
+ u32 obj_index;
+ devlink_rel_notify_cb_t *notify_cb;
+ devlink_rel_cleanup_cb_t *cleanup_cb;
+ struct work_struct notify_work;
+ } nested_in;
+};
+
+static void devlink_rel_free(struct devlink_rel *rel)
+{
+ xa_erase(&devlink_rels, rel->index);
+ kfree(rel);
+}
+
+static void __devlink_rel_get(struct devlink_rel *rel)
+{
+ refcount_inc(&rel->refcount);
+}
+
+static void __devlink_rel_put(struct devlink_rel *rel)
+{
+ if (refcount_dec_and_test(&rel->refcount))
+ devlink_rel_free(rel);
+}
+
+static void devlink_rel_nested_in_notify_work(struct work_struct *work)
+{
+ struct devlink_rel *rel = container_of(work, struct devlink_rel,
+ nested_in.notify_work);
+ struct devlink *devlink;
+
+ devlink = devlinks_xa_get(rel->nested_in.devlink_index);
+ if (!devlink)
+ goto rel_put;
+ if (!devl_trylock(devlink)) {
+ devlink_put(devlink);
+ goto reschedule_work;
+ }
+ if (!devl_is_registered(devlink)) {
+ devl_unlock(devlink);
+ devlink_put(devlink);
+ goto rel_put;
+ }
+ if (!xa_get_mark(&devlink_rels, rel->index, DEVLINK_REL_IN_USE))
+ rel->nested_in.cleanup_cb(devlink, rel->nested_in.obj_index, rel->index);
+ rel->nested_in.notify_cb(devlink, rel->nested_in.obj_index);
+ devl_unlock(devlink);
+ devlink_put(devlink);
+
+rel_put:
+ __devlink_rel_put(rel);
+ return;
+
+reschedule_work:
+ schedule_work(&rel->nested_in.notify_work);
+}
+
+static void devlink_rel_nested_in_notify_work_schedule(struct devlink_rel *rel)
+{
+ __devlink_rel_get(rel);
+ schedule_work(&rel->nested_in.notify_work);
+}
+
+static struct devlink_rel *devlink_rel_alloc(void)
+{
+ struct devlink_rel *rel;
+ static u32 next;
+ int err;
+
+ rel = kzalloc(sizeof(*rel), GFP_KERNEL);
+ if (!rel)
+ return ERR_PTR(-ENOMEM);
+
+ err = xa_alloc_cyclic(&devlink_rels, &rel->index, rel,
+ xa_limit_32b, &next, GFP_KERNEL);
+ if (err) {
+ kfree(rel);
+ return ERR_PTR(err);
+ }
+
+ refcount_set(&rel->refcount, 1);
+ INIT_WORK(&rel->nested_in.notify_work,
+ &devlink_rel_nested_in_notify_work);
+ return rel;
+}
+
+static void devlink_rel_put(struct devlink *devlink)
+{
+ struct devlink_rel *rel = devlink->rel;
+
+ if (!rel)
+ return;
+ xa_clear_mark(&devlink_rels, rel->index, DEVLINK_REL_IN_USE);
+ devlink_rel_nested_in_notify_work_schedule(rel);
+ __devlink_rel_put(rel);
+ devlink->rel = NULL;
+}
+
+void devlink_rel_nested_in_clear(u32 rel_index)
+{
+ xa_clear_mark(&devlink_rels, rel_index, DEVLINK_REL_IN_USE);
+}
+
+int devlink_rel_nested_in_add(u32 *rel_index, u32 devlink_index,
+ u32 obj_index, devlink_rel_notify_cb_t *notify_cb,
+ devlink_rel_cleanup_cb_t *cleanup_cb,
+ struct devlink *devlink)
+{
+ struct devlink_rel *rel = devlink_rel_alloc();
+
+ ASSERT_DEVLINK_NOT_REGISTERED(devlink);
+
+ if (IS_ERR(rel))
+ return PTR_ERR(rel);
+
+ rel->devlink_index = devlink->index;
+ rel->nested_in.devlink_index = devlink_index;
+ rel->nested_in.obj_index = obj_index;
+ rel->nested_in.notify_cb = notify_cb;
+ rel->nested_in.cleanup_cb = cleanup_cb;
+ *rel_index = rel->index;
+ xa_set_mark(&devlink_rels, rel->index, DEVLINK_REL_IN_USE);
+ devlink->rel = rel;
+ return 0;
+}
+
+/**
+ * devlink_rel_nested_in_notify - Notify the object this devlink
+ * instance is nested in.
+ * @devlink: devlink
+ *
+ * This is called upon a network namespace change of the devlink instance.
+ * If this devlink instance is nested in another devlink object, a
+ * notification about the change of that object should be sent over
+ * netlink. The parent devlink instance lock needs to be held while the
+ * notification is prepared. However, since the devlink lock of the
+ * nested instance is already held here, taking it directly would give
+ * the wrong devlink instance lock ordering and deadlock. A work item is
+ * therefore used to avoid that.
+ */
+void devlink_rel_nested_in_notify(struct devlink *devlink)
+{
+ struct devlink_rel *rel = devlink->rel;
+
+ if (!rel)
+ return;
+ devlink_rel_nested_in_notify_work_schedule(rel);
+}
+
+static struct devlink_rel *devlink_rel_find(unsigned long rel_index)
+{
+ return xa_find(&devlink_rels, &rel_index, rel_index,
+ DEVLINK_REL_IN_USE);
+}
+
+static struct devlink *devlink_rel_devlink_get(u32 rel_index)
+{
+ struct devlink_rel *rel;
+ u32 devlink_index;
+
+ if (!rel_index)
+ return NULL;
+ xa_lock(&devlink_rels);
+ rel = devlink_rel_find(rel_index);
+ if (rel)
+ devlink_index = rel->devlink_index;
+ xa_unlock(&devlink_rels);
+ if (!rel)
+ return NULL;
+ return devlinks_xa_get(devlink_index);
+}
+
+int devlink_rel_devlink_handle_put(struct sk_buff *msg, struct devlink *devlink,
+ u32 rel_index, int attrtype,
+ bool *msg_updated)
+{
+ struct net *net = devlink_net(devlink);
+ struct devlink *rel_devlink;
+ int err;
+
+ rel_devlink = devlink_rel_devlink_get(rel_index);
+ if (!rel_devlink)
+ return 0;
+ err = devlink_nl_put_nested_handle(msg, net, rel_devlink, attrtype);
+ devlink_put(rel_devlink);
+ if (!err && msg_updated)
+ *msg_updated = true;
+ return err;
+}
+
void *devlink_priv(struct devlink *devlink)
{
return &devlink->priv;
@@ -97,6 +313,7 @@ static void devlink_release(struct work_struct *work)
mutex_destroy(&devlink->lock);
lockdep_unregister_key(&devlink->lock_key);
+ put_device(devlink->dev);
kfree(devlink);
}
@@ -142,6 +359,7 @@ int devl_register(struct devlink *devlink)
xa_set_mark(&devlinks, devlink->index, DEVLINK_REGISTERED);
devlink_notify_register(devlink);
+ devlink_rel_nested_in_notify(devlink);
return 0;
}
@@ -166,6 +384,7 @@ void devl_unregister(struct devlink *devlink)
devlink_notify_unregister(devlink);
xa_clear_mark(&devlinks, devlink->index, DEVLINK_REGISTERED);
+ devlink_rel_put(devlink);
}
EXPORT_SYMBOL_GPL(devl_unregister);
@@ -210,11 +429,12 @@ struct devlink *devlink_alloc_ns(const struct devlink_ops *ops,
if (ret < 0)
goto err_xa_alloc;
- devlink->dev = dev;
+ devlink->dev = get_device(dev);
devlink->ops = ops;
xa_init_flags(&devlink->ports, XA_FLAGS_ALLOC);
xa_init_flags(&devlink->params, XA_FLAGS_ALLOC);
xa_init_flags(&devlink->snapshot_ids, XA_FLAGS_ALLOC);
+ xa_init_flags(&devlink->nested_rels, XA_FLAGS_ALLOC);
write_pnet(&devlink->_net, net);
INIT_LIST_HEAD(&devlink->rate_list);
INIT_LIST_HEAD(&devlink->linecard_list);
@@ -261,6 +481,7 @@ void devlink_free(struct devlink *devlink)
WARN_ON(!list_empty(&devlink->linecard_list));
WARN_ON(!xa_empty(&devlink->ports));
+ xa_destroy(&devlink->nested_rels);
xa_destroy(&devlink->snapshot_ids);
xa_destroy(&devlink->params);
xa_destroy(&devlink->ports);
diff --git a/net/devlink/dev.c b/net/devlink/dev.c
index bba4ace7d22b..4fc7adb32663 100644
--- a/net/devlink/dev.c
+++ b/net/devlink/dev.c
@@ -138,6 +138,23 @@ nla_put_failure:
return -EMSGSIZE;
}
+static int devlink_nl_nested_fill(struct sk_buff *msg, struct devlink *devlink)
+{
+ unsigned long rel_index;
+ void *unused;
+ int err;
+
+ xa_for_each(&devlink->nested_rels, rel_index, unused) {
+ err = devlink_rel_devlink_handle_put(msg, devlink,
+ rel_index,
+ DEVLINK_ATTR_NESTED_DEVLINK,
+ NULL);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
static int devlink_nl_fill(struct sk_buff *msg, struct devlink *devlink,
enum devlink_command cmd, u32 portid,
u32 seq, int flags)
@@ -164,6 +181,10 @@ static int devlink_nl_fill(struct sk_buff *msg, struct devlink *devlink,
goto dev_stats_nest_cancel;
nla_nest_end(msg, dev_stats);
+
+ if (devlink_nl_nested_fill(msg, devlink))
+ goto nla_put_failure;
+
genlmsg_end(msg, hdr);
return 0;
@@ -230,6 +251,34 @@ int devlink_nl_get_dumpit(struct sk_buff *msg, struct netlink_callback *cb)
return devlink_nl_dumpit(msg, cb, devlink_nl_get_dump_one);
}
+static void devlink_rel_notify_cb(struct devlink *devlink, u32 obj_index)
+{
+ devlink_notify(devlink, DEVLINK_CMD_NEW);
+}
+
+static void devlink_rel_cleanup_cb(struct devlink *devlink, u32 obj_index,
+ u32 rel_index)
+{
+ xa_erase(&devlink->nested_rels, rel_index);
+}
+
+int devl_nested_devlink_set(struct devlink *devlink,
+ struct devlink *nested_devlink)
+{
+ u32 rel_index;
+ int err;
+
+ err = devlink_rel_nested_in_add(&rel_index, devlink->index, 0,
+ devlink_rel_notify_cb,
+ devlink_rel_cleanup_cb,
+ nested_devlink);
+ if (err)
+ return err;
+ return xa_insert(&devlink->nested_rels, rel_index,
+ xa_mk_value(0), GFP_KERNEL);
+}
+EXPORT_SYMBOL_GPL(devl_nested_devlink_set);
+
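
devl_nested_devlink_set() is the new exported hook a driver uses to tie an auxiliary devlink instance (a line card or SF instance, for example) to the instance it is nested in, so the nested handle shows up as DEVLINK_ATTR_NESTED_DEVLINK in dev get/dump. A hedged driver-side sketch follows; every example_* name is hypothetical, the parent's instance lock is assumed to be held (devl_ prefix convention), and the call is made after allocating the nested instance but before registering it, since devlink_rel_nested_in_add() asserts the nested instance is not yet registered.

#include <net/devlink.h>

/* example_nested_devlink_ops and struct example_nested_priv are
 * hypothetical driver definitions, not part of the devlink API.
 */
static int example_add_nested_instance(struct devlink *parent,
					struct device *dev)
{
	struct devlink *nested;
	int err;

	nested = devlink_alloc(&example_nested_devlink_ops,
			       sizeof(struct example_nested_priv), dev);
	if (!nested)
		return -ENOMEM;

	/* Link to the nested-in instance while still unregistered. */
	err = devl_nested_devlink_set(parent, nested);
	if (err) {
		devlink_free(nested);
		return err;
	}

	devlink_register(nested);
	return 0;
}
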
void devlink_notify_register(struct devlink *devlink)
{
devlink_notify(devlink, DEVLINK_CMD_NEW);
@@ -372,6 +421,7 @@ static void devlink_reload_netns_change(struct devlink *devlink,
devlink_notify_unregister(devlink);
write_pnet(&devlink->_net, dest_net);
devlink_notify_register(devlink);
+ devlink_rel_nested_in_notify(devlink);
}
int devlink_reload(struct devlink *devlink, struct net *dest_net,
@@ -442,7 +492,7 @@ free_msg:
return -EMSGSIZE;
}
-int devlink_nl_cmd_reload(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_reload_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
enum devlink_reload_action action;
@@ -608,7 +658,7 @@ nla_put_failure:
return err;
}
-int devlink_nl_cmd_eswitch_get_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_eswitch_get_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct sk_buff *msg;
@@ -629,7 +679,7 @@ int devlink_nl_cmd_eswitch_get_doit(struct sk_buff *skb, struct genl_info *info)
return genlmsg_reply(msg, info);
}
-int devlink_nl_cmd_eswitch_set_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_eswitch_set_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
const struct devlink_ops *ops = devlink->ops;
@@ -1058,7 +1108,7 @@ static int devlink_flash_component_get(struct devlink *devlink,
return 0;
}
-int devlink_nl_cmd_flash_update(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_flash_update_doit(struct sk_buff *skb, struct genl_info *info)
{
struct nlattr *nla_overwrite_mask, *nla_file_name;
struct devlink_flash_update_params params = {};
@@ -1301,7 +1351,7 @@ static const struct nla_policy devlink_selftest_nl_policy[DEVLINK_ATTR_SELFTEST_
[DEVLINK_ATTR_SELFTEST_ID_FLASH] = { .type = NLA_FLAG },
};
-int devlink_nl_cmd_selftests_run(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_selftests_run_doit(struct sk_buff *skb, struct genl_info *info)
{
struct nlattr *tb[DEVLINK_ATTR_SELFTEST_ID_MAX + 1];
struct devlink *devlink = info->user_ptr[0];
diff --git a/net/devlink/devl_internal.h b/net/devlink/devl_internal.h
index f6b5fea2e13c..183dbe3807ab 100644
--- a/net/devlink/devl_internal.h
+++ b/net/devlink/devl_internal.h
@@ -17,6 +17,8 @@
#include "netlink_gen.h"
+struct devlink_rel;
+
#define DEVLINK_REGISTERED XA_MARK_1
#define DEVLINK_RELOAD_STATS_ARRAY_SIZE \
@@ -55,6 +57,8 @@ struct devlink {
u8 reload_failed:1;
refcount_t refcount;
struct rcu_work rwork;
+ struct devlink_rel *rel;
+ struct xarray nested_rels;
char priv[] __aligned(NETDEV_ALIGN);
};
@@ -92,6 +96,20 @@ static inline bool devl_is_registered(struct devlink *devlink)
return xa_get_mark(&devlinks, devlink->index, DEVLINK_REGISTERED);
}
+typedef void devlink_rel_notify_cb_t(struct devlink *devlink, u32 obj_index);
+typedef void devlink_rel_cleanup_cb_t(struct devlink *devlink, u32 obj_index,
+ u32 rel_index);
+
+void devlink_rel_nested_in_clear(u32 rel_index);
+int devlink_rel_nested_in_add(u32 *rel_index, u32 devlink_index,
+ u32 obj_index, devlink_rel_notify_cb_t *notify_cb,
+ devlink_rel_cleanup_cb_t *cleanup_cb,
+ struct devlink *devlink);
+void devlink_rel_nested_in_notify(struct devlink *devlink);
+int devlink_rel_devlink_handle_put(struct sk_buff *msg, struct devlink *devlink,
+ u32 rel_index, int attrtype,
+ bool *msg_updated);
+
/* Netlink */
#define DEVLINK_NL_FLAG_NEED_PORT BIT(0)
#define DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT BIT(1)
@@ -145,6 +163,8 @@ devlink_nl_put_handle(struct sk_buff *msg, struct devlink *devlink)
return 0;
}
+int devlink_nl_put_nested_handle(struct sk_buff *msg, struct net *net,
+ struct devlink *devlink, int attrtype);
int devlink_nl_msg_reply_and_new(struct sk_buff **msg, struct genl_info *info);
/* Notify */
@@ -206,80 +226,4 @@ int devlink_rate_nodes_check(struct devlink *devlink, u16 mode,
struct netlink_ext_ack *extack);
/* Linecards */
-struct devlink_linecard {
- struct list_head list;
- struct devlink *devlink;
- unsigned int index;
- const struct devlink_linecard_ops *ops;
- void *priv;
- enum devlink_linecard_state state;
- struct mutex state_lock; /* Protects state */
- const char *type;
- struct devlink_linecard_type *types;
- unsigned int types_count;
- struct devlink *nested_devlink;
-};
-
-/* Devlink nl cmds */
-int devlink_nl_cmd_reload(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_eswitch_get_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_eswitch_set_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_flash_update(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_selftests_run(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_port_set_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_port_split_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_port_unsplit_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_port_new_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_port_del_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_sb_pool_set_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_sb_port_pool_set_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_sb_tc_pool_bind_set_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_sb_occ_snapshot_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_sb_occ_max_clear_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_dpipe_table_get(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_dpipe_entries_get(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_dpipe_headers_get(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_dpipe_table_counters_set(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_resource_set(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_resource_dump(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_param_set_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_port_param_get_dumpit(struct sk_buff *msg,
- struct netlink_callback *cb);
-int devlink_nl_cmd_port_param_get_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_port_param_set_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_region_new(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_region_del(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_region_read_dumpit(struct sk_buff *skb,
- struct netlink_callback *cb);
-int devlink_nl_cmd_health_reporter_set_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_health_reporter_recover_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_health_reporter_diagnose_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_health_reporter_dump_get_dumpit(struct sk_buff *skb,
- struct netlink_callback *cb);
-int devlink_nl_cmd_health_reporter_dump_clear_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_health_reporter_test_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_trap_set_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_trap_group_set_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_trap_policer_set_doit(struct sk_buff *skb,
- struct genl_info *info);
-int devlink_nl_cmd_rate_set_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_rate_new_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_rate_del_doit(struct sk_buff *skb, struct genl_info *info);
-int devlink_nl_cmd_linecard_set_doit(struct sk_buff *skb,
- struct genl_info *info);
+unsigned int devlink_linecard_index(struct devlink_linecard *linecard);
diff --git a/net/devlink/dpipe.c b/net/devlink/dpipe.c
index 431227c412e5..a72a9292efc5 100644
--- a/net/devlink/dpipe.c
+++ b/net/devlink/dpipe.c
@@ -289,7 +289,7 @@ err_table_put:
return err;
}
-int devlink_nl_cmd_dpipe_table_get(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_dpipe_table_get_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
const char *table_name = NULL;
@@ -562,8 +562,8 @@ send_done:
return genlmsg_reply(dump_ctx.skb, info);
}
-int devlink_nl_cmd_dpipe_entries_get(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_dpipe_entries_get_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_dpipe_table *table;
@@ -712,8 +712,8 @@ err_table_put:
return err;
}
-int devlink_nl_cmd_dpipe_headers_get(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_dpipe_headers_get_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
@@ -746,8 +746,8 @@ static int devlink_dpipe_table_counters_set(struct devlink *devlink,
return 0;
}
-int devlink_nl_cmd_dpipe_table_counters_set(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_dpipe_table_counters_set_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
const char *table_name;
diff --git a/net/devlink/health.c b/net/devlink/health.c
index 51e6e81e31bb..695df61f8ac2 100644
--- a/net/devlink/health.c
+++ b/net/devlink/health.c
@@ -19,6 +19,7 @@ struct devlink_fmsg_item {
struct devlink_fmsg {
struct list_head item_list;
+ int err; /* first error encountered on some devlink_fmsg_XXX() call */
bool putting_binary; /* This flag forces enclosing of binary data
* in an array brackets. It forces using
* of designated API:
@@ -451,8 +452,8 @@ int devlink_nl_health_reporter_get_dumpit(struct sk_buff *skb,
devlink_nl_health_reporter_get_dump_one);
}
-int devlink_nl_cmd_health_reporter_set_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_health_reporter_set_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_health_reporter *reporter;
@@ -562,21 +563,18 @@ static int devlink_health_do_dump(struct devlink_health_reporter *reporter,
return 0;
reporter->dump_fmsg = devlink_fmsg_alloc();
- if (!reporter->dump_fmsg) {
- err = -ENOMEM;
- return err;
- }
+ if (!reporter->dump_fmsg)
+ return -ENOMEM;
- err = devlink_fmsg_obj_nest_start(reporter->dump_fmsg);
- if (err)
- goto dump_err;
+ devlink_fmsg_obj_nest_start(reporter->dump_fmsg);
err = reporter->ops->dump(reporter, reporter->dump_fmsg,
priv_ctx, extack);
if (err)
goto dump_err;
- err = devlink_fmsg_obj_nest_end(reporter->dump_fmsg);
+ devlink_fmsg_obj_nest_end(reporter->dump_fmsg);
+ err = reporter->dump_fmsg->err;
if (err)
goto dump_err;
@@ -657,8 +655,8 @@ devlink_health_reporter_state_update(struct devlink_health_reporter *reporter,
}
EXPORT_SYMBOL_GPL(devlink_health_reporter_state_update);
-int devlink_nl_cmd_health_reporter_recover_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_health_reporter_recover_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_health_reporter *reporter;
@@ -670,373 +668,258 @@ int devlink_nl_cmd_health_reporter_recover_doit(struct sk_buff *skb,
return devlink_health_reporter_recover(reporter, NULL, info->extack);
}
-static int devlink_fmsg_nest_common(struct devlink_fmsg *fmsg,
- int attrtype)
+static void devlink_fmsg_err_if_binary(struct devlink_fmsg *fmsg)
+{
+ if (!fmsg->err && fmsg->putting_binary)
+ fmsg->err = -EINVAL;
+}
+
+static void devlink_fmsg_nest_common(struct devlink_fmsg *fmsg, int attrtype)
{
struct devlink_fmsg_item *item;
+ if (fmsg->err)
+ return;
+
item = kzalloc(sizeof(*item), GFP_KERNEL);
- if (!item)
- return -ENOMEM;
+ if (!item) {
+ fmsg->err = -ENOMEM;
+ return;
+ }
item->attrtype = attrtype;
list_add_tail(&item->list, &fmsg->item_list);
-
- return 0;
}
-int devlink_fmsg_obj_nest_start(struct devlink_fmsg *fmsg)
+void devlink_fmsg_obj_nest_start(struct devlink_fmsg *fmsg)
{
- if (fmsg->putting_binary)
- return -EINVAL;
-
- return devlink_fmsg_nest_common(fmsg, DEVLINK_ATTR_FMSG_OBJ_NEST_START);
+ devlink_fmsg_err_if_binary(fmsg);
+ devlink_fmsg_nest_common(fmsg, DEVLINK_ATTR_FMSG_OBJ_NEST_START);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_obj_nest_start);
-static int devlink_fmsg_nest_end(struct devlink_fmsg *fmsg)
+static void devlink_fmsg_nest_end(struct devlink_fmsg *fmsg)
{
- if (fmsg->putting_binary)
- return -EINVAL;
-
- return devlink_fmsg_nest_common(fmsg, DEVLINK_ATTR_FMSG_NEST_END);
+ devlink_fmsg_err_if_binary(fmsg);
+ devlink_fmsg_nest_common(fmsg, DEVLINK_ATTR_FMSG_NEST_END);
}
-int devlink_fmsg_obj_nest_end(struct devlink_fmsg *fmsg)
+void devlink_fmsg_obj_nest_end(struct devlink_fmsg *fmsg)
{
- if (fmsg->putting_binary)
- return -EINVAL;
-
- return devlink_fmsg_nest_end(fmsg);
+ devlink_fmsg_nest_end(fmsg);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_obj_nest_end);
#define DEVLINK_FMSG_MAX_SIZE (GENLMSG_DEFAULT_SIZE - GENL_HDRLEN - NLA_HDRLEN)
-static int devlink_fmsg_put_name(struct devlink_fmsg *fmsg, const char *name)
+static void devlink_fmsg_put_name(struct devlink_fmsg *fmsg, const char *name)
{
struct devlink_fmsg_item *item;
- if (fmsg->putting_binary)
- return -EINVAL;
+ devlink_fmsg_err_if_binary(fmsg);
+ if (fmsg->err)
+ return;
- if (strlen(name) + 1 > DEVLINK_FMSG_MAX_SIZE)
- return -EMSGSIZE;
+ if (strlen(name) + 1 > DEVLINK_FMSG_MAX_SIZE) {
+ fmsg->err = -EMSGSIZE;
+ return;
+ }
item = kzalloc(sizeof(*item) + strlen(name) + 1, GFP_KERNEL);
- if (!item)
- return -ENOMEM;
+ if (!item) {
+ fmsg->err = -ENOMEM;
+ return;
+ }
item->nla_type = NLA_NUL_STRING;
item->len = strlen(name) + 1;
item->attrtype = DEVLINK_ATTR_FMSG_OBJ_NAME;
memcpy(&item->value, name, item->len);
list_add_tail(&item->list, &fmsg->item_list);
-
- return 0;
}
-int devlink_fmsg_pair_nest_start(struct devlink_fmsg *fmsg, const char *name)
+void devlink_fmsg_pair_nest_start(struct devlink_fmsg *fmsg, const char *name)
{
- int err;
-
- if (fmsg->putting_binary)
- return -EINVAL;
-
- err = devlink_fmsg_nest_common(fmsg, DEVLINK_ATTR_FMSG_PAIR_NEST_START);
- if (err)
- return err;
-
- err = devlink_fmsg_put_name(fmsg, name);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_err_if_binary(fmsg);
+ devlink_fmsg_nest_common(fmsg, DEVLINK_ATTR_FMSG_PAIR_NEST_START);
+ devlink_fmsg_put_name(fmsg, name);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_pair_nest_start);
-int devlink_fmsg_pair_nest_end(struct devlink_fmsg *fmsg)
+void devlink_fmsg_pair_nest_end(struct devlink_fmsg *fmsg)
{
- if (fmsg->putting_binary)
- return -EINVAL;
-
- return devlink_fmsg_nest_end(fmsg);
+ devlink_fmsg_nest_end(fmsg);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_pair_nest_end);
-int devlink_fmsg_arr_pair_nest_start(struct devlink_fmsg *fmsg,
- const char *name)
+void devlink_fmsg_arr_pair_nest_start(struct devlink_fmsg *fmsg,
+ const char *name)
{
- int err;
-
- if (fmsg->putting_binary)
- return -EINVAL;
-
- err = devlink_fmsg_pair_nest_start(fmsg, name);
- if (err)
- return err;
-
- err = devlink_fmsg_nest_common(fmsg, DEVLINK_ATTR_FMSG_ARR_NEST_START);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_pair_nest_start(fmsg, name);
+ devlink_fmsg_nest_common(fmsg, DEVLINK_ATTR_FMSG_ARR_NEST_START);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_arr_pair_nest_start);
-int devlink_fmsg_arr_pair_nest_end(struct devlink_fmsg *fmsg)
+void devlink_fmsg_arr_pair_nest_end(struct devlink_fmsg *fmsg)
{
- int err;
-
- if (fmsg->putting_binary)
- return -EINVAL;
-
- err = devlink_fmsg_nest_end(fmsg);
- if (err)
- return err;
-
- err = devlink_fmsg_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_nest_end(fmsg);
+ devlink_fmsg_nest_end(fmsg);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_arr_pair_nest_end);
-int devlink_fmsg_binary_pair_nest_start(struct devlink_fmsg *fmsg,
- const char *name)
+void devlink_fmsg_binary_pair_nest_start(struct devlink_fmsg *fmsg,
+ const char *name)
{
- int err;
-
- err = devlink_fmsg_arr_pair_nest_start(fmsg, name);
- if (err)
- return err;
-
+ devlink_fmsg_arr_pair_nest_start(fmsg, name);
fmsg->putting_binary = true;
- return err;
}
EXPORT_SYMBOL_GPL(devlink_fmsg_binary_pair_nest_start);
-int devlink_fmsg_binary_pair_nest_end(struct devlink_fmsg *fmsg)
+void devlink_fmsg_binary_pair_nest_end(struct devlink_fmsg *fmsg)
{
+ if (fmsg->err)
+ return;
+
if (!fmsg->putting_binary)
- return -EINVAL;
+ fmsg->err = -EINVAL;
fmsg->putting_binary = false;
- return devlink_fmsg_arr_pair_nest_end(fmsg);
+ devlink_fmsg_arr_pair_nest_end(fmsg);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_binary_pair_nest_end);
-static int devlink_fmsg_put_value(struct devlink_fmsg *fmsg,
- const void *value, u16 value_len,
- u8 value_nla_type)
+static void devlink_fmsg_put_value(struct devlink_fmsg *fmsg,
+ const void *value, u16 value_len,
+ u8 value_nla_type)
{
struct devlink_fmsg_item *item;
- if (value_len > DEVLINK_FMSG_MAX_SIZE)
- return -EMSGSIZE;
+ if (fmsg->err)
+ return;
+
+ if (value_len > DEVLINK_FMSG_MAX_SIZE) {
+ fmsg->err = -EMSGSIZE;
+ return;
+ }
item = kzalloc(sizeof(*item) + value_len, GFP_KERNEL);
- if (!item)
- return -ENOMEM;
+ if (!item) {
+ fmsg->err = -ENOMEM;
+ return;
+ }
item->nla_type = value_nla_type;
item->len = value_len;
item->attrtype = DEVLINK_ATTR_FMSG_OBJ_VALUE_DATA;
memcpy(&item->value, value, item->len);
list_add_tail(&item->list, &fmsg->item_list);
-
- return 0;
}
-static int devlink_fmsg_bool_put(struct devlink_fmsg *fmsg, bool value)
+static void devlink_fmsg_bool_put(struct devlink_fmsg *fmsg, bool value)
{
- if (fmsg->putting_binary)
- return -EINVAL;
-
- return devlink_fmsg_put_value(fmsg, &value, sizeof(value), NLA_FLAG);
+ devlink_fmsg_err_if_binary(fmsg);
+ devlink_fmsg_put_value(fmsg, &value, sizeof(value), NLA_FLAG);
}
-static int devlink_fmsg_u8_put(struct devlink_fmsg *fmsg, u8 value)
+static void devlink_fmsg_u8_put(struct devlink_fmsg *fmsg, u8 value)
{
- if (fmsg->putting_binary)
- return -EINVAL;
-
- return devlink_fmsg_put_value(fmsg, &value, sizeof(value), NLA_U8);
+ devlink_fmsg_err_if_binary(fmsg);
+ devlink_fmsg_put_value(fmsg, &value, sizeof(value), NLA_U8);
}
-int devlink_fmsg_u32_put(struct devlink_fmsg *fmsg, u32 value)
+void devlink_fmsg_u32_put(struct devlink_fmsg *fmsg, u32 value)
{
- if (fmsg->putting_binary)
- return -EINVAL;
-
- return devlink_fmsg_put_value(fmsg, &value, sizeof(value), NLA_U32);
+ devlink_fmsg_err_if_binary(fmsg);
+ devlink_fmsg_put_value(fmsg, &value, sizeof(value), NLA_U32);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_u32_put);
-static int devlink_fmsg_u64_put(struct devlink_fmsg *fmsg, u64 value)
+static void devlink_fmsg_u64_put(struct devlink_fmsg *fmsg, u64 value)
{
- if (fmsg->putting_binary)
- return -EINVAL;
-
- return devlink_fmsg_put_value(fmsg, &value, sizeof(value), NLA_U64);
+ devlink_fmsg_err_if_binary(fmsg);
+ devlink_fmsg_put_value(fmsg, &value, sizeof(value), NLA_U64);
}
-int devlink_fmsg_string_put(struct devlink_fmsg *fmsg, const char *value)
+void devlink_fmsg_string_put(struct devlink_fmsg *fmsg, const char *value)
{
- if (fmsg->putting_binary)
- return -EINVAL;
-
- return devlink_fmsg_put_value(fmsg, value, strlen(value) + 1,
- NLA_NUL_STRING);
+ devlink_fmsg_err_if_binary(fmsg);
+ devlink_fmsg_put_value(fmsg, value, strlen(value) + 1, NLA_NUL_STRING);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_string_put);
-int devlink_fmsg_binary_put(struct devlink_fmsg *fmsg, const void *value,
- u16 value_len)
+void devlink_fmsg_binary_put(struct devlink_fmsg *fmsg, const void *value,
+ u16 value_len)
{
- if (!fmsg->putting_binary)
- return -EINVAL;
+ if (!fmsg->err && !fmsg->putting_binary)
+ fmsg->err = -EINVAL;
- return devlink_fmsg_put_value(fmsg, value, value_len, NLA_BINARY);
+ devlink_fmsg_put_value(fmsg, value, value_len, NLA_BINARY);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_binary_put);
-int devlink_fmsg_bool_pair_put(struct devlink_fmsg *fmsg, const char *name,
- bool value)
+void devlink_fmsg_bool_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ bool value)
{
- int err;
-
- err = devlink_fmsg_pair_nest_start(fmsg, name);
- if (err)
- return err;
-
- err = devlink_fmsg_bool_put(fmsg, value);
- if (err)
- return err;
-
- err = devlink_fmsg_pair_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_pair_nest_start(fmsg, name);
+ devlink_fmsg_bool_put(fmsg, value);
+ devlink_fmsg_pair_nest_end(fmsg);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_bool_pair_put);
-int devlink_fmsg_u8_pair_put(struct devlink_fmsg *fmsg, const char *name,
- u8 value)
+void devlink_fmsg_u8_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ u8 value)
{
- int err;
-
- err = devlink_fmsg_pair_nest_start(fmsg, name);
- if (err)
- return err;
-
- err = devlink_fmsg_u8_put(fmsg, value);
- if (err)
- return err;
-
- err = devlink_fmsg_pair_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_pair_nest_start(fmsg, name);
+ devlink_fmsg_u8_put(fmsg, value);
+ devlink_fmsg_pair_nest_end(fmsg);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_u8_pair_put);
-int devlink_fmsg_u32_pair_put(struct devlink_fmsg *fmsg, const char *name,
- u32 value)
+void devlink_fmsg_u32_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ u32 value)
{
- int err;
-
- err = devlink_fmsg_pair_nest_start(fmsg, name);
- if (err)
- return err;
-
- err = devlink_fmsg_u32_put(fmsg, value);
- if (err)
- return err;
-
- err = devlink_fmsg_pair_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_pair_nest_start(fmsg, name);
+ devlink_fmsg_u32_put(fmsg, value);
+ devlink_fmsg_pair_nest_end(fmsg);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_u32_pair_put);
-int devlink_fmsg_u64_pair_put(struct devlink_fmsg *fmsg, const char *name,
- u64 value)
+void devlink_fmsg_u64_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ u64 value)
{
- int err;
-
- err = devlink_fmsg_pair_nest_start(fmsg, name);
- if (err)
- return err;
-
- err = devlink_fmsg_u64_put(fmsg, value);
- if (err)
- return err;
-
- err = devlink_fmsg_pair_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_pair_nest_start(fmsg, name);
+ devlink_fmsg_u64_put(fmsg, value);
+ devlink_fmsg_pair_nest_end(fmsg);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_u64_pair_put);
-int devlink_fmsg_string_pair_put(struct devlink_fmsg *fmsg, const char *name,
- const char *value)
+void devlink_fmsg_string_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ const char *value)
{
- int err;
-
- err = devlink_fmsg_pair_nest_start(fmsg, name);
- if (err)
- return err;
-
- err = devlink_fmsg_string_put(fmsg, value);
- if (err)
- return err;
-
- err = devlink_fmsg_pair_nest_end(fmsg);
- if (err)
- return err;
-
- return 0;
+ devlink_fmsg_pair_nest_start(fmsg, name);
+ devlink_fmsg_string_put(fmsg, value);
+ devlink_fmsg_pair_nest_end(fmsg);
}
EXPORT_SYMBOL_GPL(devlink_fmsg_string_pair_put);
-int devlink_fmsg_binary_pair_put(struct devlink_fmsg *fmsg, const char *name,
- const void *value, u32 value_len)
+void devlink_fmsg_binary_pair_put(struct devlink_fmsg *fmsg, const char *name,
+ const void *value, u32 value_len)
{
u32 data_size;
- int end_err;
u32 offset;
- int err;
- err = devlink_fmsg_binary_pair_nest_start(fmsg, name);
- if (err)
- return err;
+ devlink_fmsg_binary_pair_nest_start(fmsg, name);
for (offset = 0; offset < value_len; offset += data_size) {
data_size = value_len - offset;
if (data_size > DEVLINK_FMSG_MAX_SIZE)
data_size = DEVLINK_FMSG_MAX_SIZE;
- err = devlink_fmsg_binary_put(fmsg, value + offset, data_size);
- if (err)
- break;
- /* Exit from loop with a break (instead of
- * return) to make sure putting_binary is turned off in
- * devlink_fmsg_binary_pair_nest_end
- */
- }
- end_err = devlink_fmsg_binary_pair_nest_end(fmsg);
- if (end_err)
- err = end_err;
+ devlink_fmsg_binary_put(fmsg, value + offset, data_size);
+ }
- return err;
+ devlink_fmsg_binary_pair_nest_end(fmsg);
+ fmsg->putting_binary = false;
}
EXPORT_SYMBOL_GPL(devlink_fmsg_binary_pair_put);
@@ -1146,6 +1029,9 @@ static int devlink_fmsg_snd(struct devlink_fmsg *fmsg,
void *hdr;
int err;
+ if (fmsg->err)
+ return fmsg->err;
+
while (!last) {
int tmp_index = index;
@@ -1199,6 +1085,9 @@ static int devlink_fmsg_dumpit(struct devlink_fmsg *fmsg, struct sk_buff *skb,
void *hdr;
int err;
+ if (fmsg->err)
+ return fmsg->err;
+
hdr = genlmsg_put(skb, NETLINK_CB(cb->skb).portid, cb->nlh->nlmsg_seq,
&devlink_nl_family, NLM_F_ACK | NLM_F_MULTI, cmd);
if (!hdr) {
@@ -1219,8 +1108,8 @@ nla_put_failure:
return err;
}
-int devlink_nl_cmd_health_reporter_diagnose_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_health_reporter_diagnose_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_health_reporter *reporter;
@@ -1238,17 +1127,13 @@ int devlink_nl_cmd_health_reporter_diagnose_doit(struct sk_buff *skb,
if (!fmsg)
return -ENOMEM;
- err = devlink_fmsg_obj_nest_start(fmsg);
- if (err)
- goto out;
+ devlink_fmsg_obj_nest_start(fmsg);
err = reporter->ops->diagnose(reporter, fmsg, info->extack);
if (err)
goto out;
- err = devlink_fmsg_obj_nest_end(fmsg);
- if (err)
- goto out;
+ devlink_fmsg_obj_nest_end(fmsg);
err = devlink_fmsg_snd(fmsg, info,
DEVLINK_CMD_HEALTH_REPORTER_DIAGNOSE, 0);
@@ -1278,8 +1163,8 @@ devlink_health_reporter_get_from_cb_lock(struct netlink_callback *cb)
return reporter;
}
-int devlink_nl_cmd_health_reporter_dump_get_dumpit(struct sk_buff *skb,
- struct netlink_callback *cb)
+int devlink_nl_health_reporter_dump_get_dumpit(struct sk_buff *skb,
+ struct netlink_callback *cb)
{
struct devlink_nl_dump_state *state = devlink_dump_state(cb);
struct devlink_health_reporter *reporter;
@@ -1317,8 +1202,8 @@ unlock:
return err;
}
-int devlink_nl_cmd_health_reporter_dump_clear_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_health_reporter_dump_clear_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_health_reporter *reporter;
@@ -1334,8 +1219,8 @@ int devlink_nl_cmd_health_reporter_dump_clear_doit(struct sk_buff *skb,
return 0;
}
-int devlink_nl_cmd_health_reporter_test_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_health_reporter_test_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_health_reporter *reporter;
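
Editor's note (not part of the patch): the hunks above convert the devlink_fmsg_*_pair_put() helpers from returning int to void; errors are now latched in the fmsg and only surfaced by the new fmsg->err checks in devlink_fmsg_snd() and devlink_fmsg_dumpit(). A minimal sketch of a driver-side diagnose callback under the void API follows; the "foo" driver name and the reported fields are invented for illustration.

/* Illustrative sketch only; "foo" and the field names are hypothetical.
 * No per-call error checks are needed: a failure is recorded in
 * fmsg->err and reported when the devlink core sends the message.
 */
static int foo_health_diagnose(struct devlink_health_reporter *reporter,
			       struct devlink_fmsg *fmsg,
			       struct netlink_ext_ack *extack)
{
	devlink_fmsg_bool_pair_put(fmsg, "fw_ready", true);
	devlink_fmsg_u32_pair_put(fmsg, "reset_count", 3);
	devlink_fmsg_string_pair_put(fmsg, "state", "active");
	return 0;
}

Such a callback would be wired up through the reporter ops' diagnose hook, which still returns int so drivers can report their own failures.
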
diff --git a/net/devlink/linecard.c b/net/devlink/linecard.c
index 85c32c314b0f..2f1c317b64cd 100644
--- a/net/devlink/linecard.c
+++ b/net/devlink/linecard.c
@@ -6,6 +6,25 @@
#include "devl_internal.h"
+struct devlink_linecard {
+ struct list_head list;
+ struct devlink *devlink;
+ unsigned int index;
+ const struct devlink_linecard_ops *ops;
+ void *priv;
+ enum devlink_linecard_state state;
+ struct mutex state_lock; /* Protects state */
+ const char *type;
+ struct devlink_linecard_type *types;
+ unsigned int types_count;
+ u32 rel_index;
+};
+
+unsigned int devlink_linecard_index(struct devlink_linecard *linecard)
+{
+ return linecard->index;
+}
+
static struct devlink_linecard *
devlink_linecard_get_by_index(struct devlink *devlink,
unsigned int linecard_index)
@@ -46,24 +65,6 @@ devlink_linecard_get_from_info(struct devlink *devlink, struct genl_info *info)
return devlink_linecard_get_from_attrs(devlink, info->attrs);
}
-static int devlink_nl_put_nested_handle(struct sk_buff *msg, struct devlink *devlink)
-{
- struct nlattr *nested_attr;
-
- nested_attr = nla_nest_start(msg, DEVLINK_ATTR_NESTED_DEVLINK);
- if (!nested_attr)
- return -EMSGSIZE;
- if (devlink_nl_put_handle(msg, devlink))
- goto nla_put_failure;
-
- nla_nest_end(msg, nested_attr);
- return 0;
-
-nla_put_failure:
- nla_nest_cancel(msg, nested_attr);
- return -EMSGSIZE;
-}
-
struct devlink_linecard_type {
const char *type;
const void *priv;
@@ -111,8 +112,10 @@ static int devlink_nl_linecard_fill(struct sk_buff *msg,
nla_nest_end(msg, attr);
}
- if (linecard->nested_devlink &&
- devlink_nl_put_nested_handle(msg, linecard->nested_devlink))
+ if (devlink_rel_devlink_handle_put(msg, devlink,
+ linecard->rel_index,
+ DEVLINK_ATTR_NESTED_DEVLINK,
+ NULL))
goto nla_put_failure;
genlmsg_end(msg, hdr);
@@ -366,8 +369,7 @@ out:
return err;
}
-int devlink_nl_cmd_linecard_set_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_linecard_set_doit(struct sk_buff *skb, struct genl_info *info)
{
struct netlink_ext_ack *extack = info->extack;
struct devlink *devlink = info->user_ptr[0];
@@ -521,7 +523,6 @@ EXPORT_SYMBOL_GPL(devlink_linecard_provision_set);
void devlink_linecard_provision_clear(struct devlink_linecard *linecard)
{
mutex_lock(&linecard->state_lock);
- WARN_ON(linecard->nested_devlink);
linecard->state = DEVLINK_LINECARD_STATE_UNPROVISIONED;
linecard->type = NULL;
devlink_linecard_notify(linecard, DEVLINK_CMD_LINECARD_NEW);
@@ -540,7 +541,6 @@ EXPORT_SYMBOL_GPL(devlink_linecard_provision_clear);
void devlink_linecard_provision_fail(struct devlink_linecard *linecard)
{
mutex_lock(&linecard->state_lock);
- WARN_ON(linecard->nested_devlink);
linecard->state = DEVLINK_LINECARD_STATE_PROVISIONING_FAILED;
devlink_linecard_notify(linecard, DEVLINK_CMD_LINECARD_NEW);
mutex_unlock(&linecard->state_lock);
@@ -588,6 +588,27 @@ void devlink_linecard_deactivate(struct devlink_linecard *linecard)
}
EXPORT_SYMBOL_GPL(devlink_linecard_deactivate);
+static void devlink_linecard_rel_notify_cb(struct devlink *devlink,
+ u32 linecard_index)
+{
+ struct devlink_linecard *linecard;
+
+ linecard = devlink_linecard_get_by_index(devlink, linecard_index);
+ if (!linecard)
+ return;
+ devlink_linecard_notify(linecard, DEVLINK_CMD_LINECARD_NEW);
+}
+
+static void devlink_linecard_rel_cleanup_cb(struct devlink *devlink,
+ u32 linecard_index, u32 rel_index)
+{
+ struct devlink_linecard *linecard;
+
+ linecard = devlink_linecard_get_by_index(devlink, linecard_index);
+ if (linecard && linecard->rel_index == rel_index)
+ linecard->rel_index = 0;
+}
+
/**
* devlink_linecard_nested_dl_set - Attach/detach nested devlink
* instance to linecard.
@@ -595,12 +616,14 @@ EXPORT_SYMBOL_GPL(devlink_linecard_deactivate);
* @linecard: devlink linecard
* @nested_devlink: devlink instance to attach or NULL to detach
*/
-void devlink_linecard_nested_dl_set(struct devlink_linecard *linecard,
- struct devlink *nested_devlink)
+int devlink_linecard_nested_dl_set(struct devlink_linecard *linecard,
+ struct devlink *nested_devlink)
{
- mutex_lock(&linecard->state_lock);
- linecard->nested_devlink = nested_devlink;
- devlink_linecard_notify(linecard, DEVLINK_CMD_LINECARD_NEW);
- mutex_unlock(&linecard->state_lock);
+ return devlink_rel_nested_in_add(&linecard->rel_index,
+ linecard->devlink->index,
+ linecard->index,
+ devlink_linecard_rel_notify_cb,
+ devlink_linecard_rel_cleanup_cb,
+ nested_devlink);
}
EXPORT_SYMBOL_GPL(devlink_linecard_nested_dl_set);
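
Editor's note (not part of the patch): devlink_linecard_nested_dl_set() now returns the error code from devlink_rel_nested_in_add() instead of unconditionally storing a nested pointer, so callers must propagate it. A hedged sketch of a caller under the new signature; the wrapper name is invented.

/* Hypothetical wrapper, shown only to illustrate the new int return. */
static int foo_linecard_attach_nested(struct devlink_linecard *linecard,
				      struct devlink *nested_devlink)
{
	/* Propagate the error from devlink_rel_nested_in_add(). */
	return devlink_linecard_nested_dl_set(linecard, nested_devlink);
}
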
diff --git a/net/devlink/netlink.c b/net/devlink/netlink.c
index fc3e7c029a3b..d0b90ebc8b15 100644
--- a/net/devlink/netlink.c
+++ b/net/devlink/netlink.c
@@ -13,74 +13,37 @@ static const struct genl_multicast_group devlink_nl_mcgrps[] = {
[DEVLINK_MCGRP_CONFIG] = { .name = DEVLINK_GENL_MCGRP_CONFIG_NAME },
};
-static const struct nla_policy devlink_nl_policy[DEVLINK_ATTR_MAX + 1] = {
- [DEVLINK_ATTR_UNSPEC] = { .strict_start_type =
- DEVLINK_ATTR_TRAP_POLICER_ID },
- [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32 },
- [DEVLINK_ATTR_PORT_TYPE] = NLA_POLICY_RANGE(NLA_U16, DEVLINK_PORT_TYPE_AUTO,
- DEVLINK_PORT_TYPE_IB),
- [DEVLINK_ATTR_PORT_SPLIT_COUNT] = { .type = NLA_U32 },
- [DEVLINK_ATTR_SB_INDEX] = { .type = NLA_U32 },
- [DEVLINK_ATTR_SB_POOL_INDEX] = { .type = NLA_U16 },
- [DEVLINK_ATTR_SB_POOL_TYPE] = { .type = NLA_U8 },
- [DEVLINK_ATTR_SB_POOL_SIZE] = { .type = NLA_U32 },
- [DEVLINK_ATTR_SB_POOL_THRESHOLD_TYPE] = { .type = NLA_U8 },
- [DEVLINK_ATTR_SB_THRESHOLD] = { .type = NLA_U32 },
- [DEVLINK_ATTR_SB_TC_INDEX] = { .type = NLA_U16 },
- [DEVLINK_ATTR_ESWITCH_MODE] = NLA_POLICY_RANGE(NLA_U16, DEVLINK_ESWITCH_MODE_LEGACY,
- DEVLINK_ESWITCH_MODE_SWITCHDEV),
- [DEVLINK_ATTR_ESWITCH_INLINE_MODE] = { .type = NLA_U8 },
- [DEVLINK_ATTR_ESWITCH_ENCAP_MODE] = { .type = NLA_U8 },
- [DEVLINK_ATTR_DPIPE_TABLE_NAME] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_DPIPE_TABLE_COUNTERS_ENABLED] = { .type = NLA_U8 },
- [DEVLINK_ATTR_RESOURCE_ID] = { .type = NLA_U64},
- [DEVLINK_ATTR_RESOURCE_SIZE] = { .type = NLA_U64},
- [DEVLINK_ATTR_PARAM_NAME] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_PARAM_TYPE] = { .type = NLA_U8 },
- [DEVLINK_ATTR_PARAM_VALUE_CMODE] = { .type = NLA_U8 },
- [DEVLINK_ATTR_REGION_NAME] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_REGION_SNAPSHOT_ID] = { .type = NLA_U32 },
- [DEVLINK_ATTR_REGION_CHUNK_ADDR] = { .type = NLA_U64 },
- [DEVLINK_ATTR_REGION_CHUNK_LEN] = { .type = NLA_U64 },
- [DEVLINK_ATTR_HEALTH_REPORTER_NAME] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_HEALTH_REPORTER_GRACEFUL_PERIOD] = { .type = NLA_U64 },
- [DEVLINK_ATTR_HEALTH_REPORTER_AUTO_RECOVER] = { .type = NLA_U8 },
- [DEVLINK_ATTR_FLASH_UPDATE_FILE_NAME] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_FLASH_UPDATE_COMPONENT] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_FLASH_UPDATE_OVERWRITE_MASK] =
- NLA_POLICY_BITFIELD32(DEVLINK_SUPPORTED_FLASH_OVERWRITE_SECTIONS),
- [DEVLINK_ATTR_TRAP_NAME] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_TRAP_ACTION] = { .type = NLA_U8 },
- [DEVLINK_ATTR_TRAP_GROUP_NAME] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_NETNS_PID] = { .type = NLA_U32 },
- [DEVLINK_ATTR_NETNS_FD] = { .type = NLA_U32 },
- [DEVLINK_ATTR_NETNS_ID] = { .type = NLA_U32 },
- [DEVLINK_ATTR_HEALTH_REPORTER_AUTO_DUMP] = { .type = NLA_U8 },
- [DEVLINK_ATTR_TRAP_POLICER_ID] = { .type = NLA_U32 },
- [DEVLINK_ATTR_TRAP_POLICER_RATE] = { .type = NLA_U64 },
- [DEVLINK_ATTR_TRAP_POLICER_BURST] = { .type = NLA_U64 },
- [DEVLINK_ATTR_PORT_FUNCTION] = { .type = NLA_NESTED },
- [DEVLINK_ATTR_RELOAD_ACTION] = NLA_POLICY_RANGE(NLA_U8, DEVLINK_RELOAD_ACTION_DRIVER_REINIT,
- DEVLINK_RELOAD_ACTION_MAX),
- [DEVLINK_ATTR_RELOAD_LIMITS] = NLA_POLICY_BITFIELD32(DEVLINK_RELOAD_LIMITS_VALID_MASK),
- [DEVLINK_ATTR_PORT_FLAVOUR] = { .type = NLA_U16 },
- [DEVLINK_ATTR_PORT_PCI_PF_NUMBER] = { .type = NLA_U16 },
- [DEVLINK_ATTR_PORT_PCI_SF_NUMBER] = { .type = NLA_U32 },
- [DEVLINK_ATTR_PORT_CONTROLLER_NUMBER] = { .type = NLA_U32 },
- [DEVLINK_ATTR_RATE_TYPE] = { .type = NLA_U16 },
- [DEVLINK_ATTR_RATE_TX_SHARE] = { .type = NLA_U64 },
- [DEVLINK_ATTR_RATE_TX_MAX] = { .type = NLA_U64 },
- [DEVLINK_ATTR_RATE_NODE_NAME] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_RATE_PARENT_NODE_NAME] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_LINECARD_INDEX] = { .type = NLA_U32 },
- [DEVLINK_ATTR_LINECARD_TYPE] = { .type = NLA_NUL_STRING },
- [DEVLINK_ATTR_SELFTESTS] = { .type = NLA_NESTED },
- [DEVLINK_ATTR_RATE_TX_PRIORITY] = { .type = NLA_U32 },
- [DEVLINK_ATTR_RATE_TX_WEIGHT] = { .type = NLA_U32 },
- [DEVLINK_ATTR_REGION_DIRECT] = { .type = NLA_FLAG },
-};
+int devlink_nl_put_nested_handle(struct sk_buff *msg, struct net *net,
+ struct devlink *devlink, int attrtype)
+{
+ struct nlattr *nested_attr;
+ struct net *devl_net;
+
+ nested_attr = nla_nest_start(msg, attrtype);
+ if (!nested_attr)
+ return -EMSGSIZE;
+ if (devlink_nl_put_handle(msg, devlink))
+ goto nla_put_failure;
+
+ rcu_read_lock();
+ devl_net = read_pnet_rcu(&devlink->_net);
+ if (!net_eq(net, devl_net)) {
+ int id = peernet2id_alloc(net, devl_net, GFP_ATOMIC);
+
+ rcu_read_unlock();
+ if (nla_put_s32(msg, DEVLINK_ATTR_NETNS_ID, id))
+ return -EMSGSIZE;
+ } else {
+ rcu_read_unlock();
+ }
+
+ nla_nest_end(msg, nested_attr);
+ return 0;
+
+nla_put_failure:
+ nla_nest_cancel(msg, nested_attr);
+ return -EMSGSIZE;
+}
int devlink_nl_msg_reply_and_new(struct sk_buff **msg, struct genl_info *info)
{
@@ -159,7 +122,7 @@ unlock:
int devlink_nl_pre_doit(const struct genl_split_ops *ops,
struct sk_buff *skb, struct genl_info *info)
{
- return __devlink_nl_pre_doit(skb, info, ops->internal_flags);
+ return __devlink_nl_pre_doit(skb, info, 0);
}
int devlink_nl_pre_doit_port(const struct genl_split_ops *ops,
@@ -255,269 +218,12 @@ int devlink_nl_dumpit(struct sk_buff *msg, struct netlink_callback *cb,
return devlink_nl_inst_iter_dumpit(msg, cb, flags, dump_one);
}
-static const struct genl_small_ops devlink_nl_small_ops[40] = {
- {
- .cmd = DEVLINK_CMD_PORT_SET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_port_set_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_PORT,
- },
- {
- .cmd = DEVLINK_CMD_RATE_SET,
- .doit = devlink_nl_cmd_rate_set_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_RATE_NEW,
- .doit = devlink_nl_cmd_rate_new_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_RATE_DEL,
- .doit = devlink_nl_cmd_rate_del_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_PORT_SPLIT,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_port_split_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_PORT,
- },
- {
- .cmd = DEVLINK_CMD_PORT_UNSPLIT,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_port_unsplit_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_PORT,
- },
- {
- .cmd = DEVLINK_CMD_PORT_NEW,
- .doit = devlink_nl_cmd_port_new_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_PORT_DEL,
- .doit = devlink_nl_cmd_port_del_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_PORT,
- },
-
- {
- .cmd = DEVLINK_CMD_LINECARD_SET,
- .doit = devlink_nl_cmd_linecard_set_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_SB_POOL_SET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_sb_pool_set_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_SB_PORT_POOL_SET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_sb_port_pool_set_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_PORT,
- },
- {
- .cmd = DEVLINK_CMD_SB_TC_POOL_BIND_SET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_sb_tc_pool_bind_set_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_PORT,
- },
- {
- .cmd = DEVLINK_CMD_SB_OCC_SNAPSHOT,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_sb_occ_snapshot_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_SB_OCC_MAX_CLEAR,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_sb_occ_max_clear_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_ESWITCH_GET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_eswitch_get_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_ESWITCH_SET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_eswitch_set_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_DPIPE_TABLE_GET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_dpipe_table_get,
- /* can be retrieved by unprivileged users */
- },
- {
- .cmd = DEVLINK_CMD_DPIPE_ENTRIES_GET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_dpipe_entries_get,
- /* can be retrieved by unprivileged users */
- },
- {
- .cmd = DEVLINK_CMD_DPIPE_HEADERS_GET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_dpipe_headers_get,
- /* can be retrieved by unprivileged users */
- },
- {
- .cmd = DEVLINK_CMD_DPIPE_TABLE_COUNTERS_SET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_dpipe_table_counters_set,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_RESOURCE_SET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_resource_set,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_RESOURCE_DUMP,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_resource_dump,
- /* can be retrieved by unprivileged users */
- },
- {
- .cmd = DEVLINK_CMD_RELOAD,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_reload,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_PARAM_SET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_param_set_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_PORT_PARAM_GET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_port_param_get_doit,
- .dumpit = devlink_nl_cmd_port_param_get_dumpit,
- .internal_flags = DEVLINK_NL_FLAG_NEED_PORT,
- /* can be retrieved by unprivileged users */
- },
- {
- .cmd = DEVLINK_CMD_PORT_PARAM_SET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_port_param_set_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_PORT,
- },
- {
- .cmd = DEVLINK_CMD_REGION_NEW,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_region_new,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_REGION_DEL,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_region_del,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_REGION_READ,
- .validate = GENL_DONT_VALIDATE_STRICT |
- GENL_DONT_VALIDATE_DUMP_STRICT,
- .dumpit = devlink_nl_cmd_region_read_dumpit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_HEALTH_REPORTER_SET,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_health_reporter_set_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT,
- },
- {
- .cmd = DEVLINK_CMD_HEALTH_REPORTER_RECOVER,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_health_reporter_recover_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT,
- },
- {
- .cmd = DEVLINK_CMD_HEALTH_REPORTER_DIAGNOSE,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_health_reporter_diagnose_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT,
- },
- {
- .cmd = DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET,
- .validate = GENL_DONT_VALIDATE_STRICT |
- GENL_DONT_VALIDATE_DUMP_STRICT,
- .dumpit = devlink_nl_cmd_health_reporter_dump_get_dumpit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_health_reporter_dump_clear_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT,
- },
- {
- .cmd = DEVLINK_CMD_HEALTH_REPORTER_TEST,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_health_reporter_test_doit,
- .flags = GENL_ADMIN_PERM,
- .internal_flags = DEVLINK_NL_FLAG_NEED_DEVLINK_OR_PORT,
- },
- {
- .cmd = DEVLINK_CMD_FLASH_UPDATE,
- .validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
- .doit = devlink_nl_cmd_flash_update,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_TRAP_SET,
- .doit = devlink_nl_cmd_trap_set_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_TRAP_GROUP_SET,
- .doit = devlink_nl_cmd_trap_group_set_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_TRAP_POLICER_SET,
- .doit = devlink_nl_cmd_trap_policer_set_doit,
- .flags = GENL_ADMIN_PERM,
- },
- {
- .cmd = DEVLINK_CMD_SELFTESTS_RUN,
- .doit = devlink_nl_cmd_selftests_run,
- .flags = GENL_ADMIN_PERM,
- },
- /* -- No new ops here! Use split ops going forward! -- */
-};
-
struct genl_family devlink_nl_family __ro_after_init = {
.name = DEVLINK_GENL_NAME,
.version = DEVLINK_GENL_VERSION,
- .maxattr = DEVLINK_ATTR_MAX,
- .policy = devlink_nl_policy,
.netnsok = true,
.parallel_ops = true,
- .pre_doit = devlink_nl_pre_doit,
- .post_doit = devlink_nl_post_doit,
.module = THIS_MODULE,
- .small_ops = devlink_nl_small_ops,
- .n_small_ops = ARRAY_SIZE(devlink_nl_small_ops),
.split_ops = devlink_nl_ops,
.n_split_ops = ARRAY_SIZE(devlink_nl_ops),
.resv_start_op = DEVLINK_CMD_SELFTESTS_RUN + 1,
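
Editor's note (not part of the patch): devlink_nl_put_nested_handle() is now a shared helper (moved out of linecard.c) and takes the requesting netns plus an explicit attribute type, emitting DEVLINK_ATTR_NETNS_ID when the nested instance lives in a different namespace than the requester. A hedged sketch of a fill-function call site; the wrapper itself is hypothetical.

/* Hypothetical caller; only the devlink_nl_put_nested_handle() call
 * reflects the signature introduced above. */
static int foo_fill_nested_handle(struct sk_buff *msg, struct net *req_net,
				  struct devlink *nested_devlink)
{
	return devlink_nl_put_nested_handle(msg, req_net, nested_devlink,
					    DEVLINK_ATTR_NESTED_DEVLINK);
}
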
diff --git a/net/devlink/netlink_gen.c b/net/devlink/netlink_gen.c
index 467b7a431de1..9cbae0169249 100644
--- a/net/devlink/netlink_gen.c
+++ b/net/devlink/netlink_gen.c
@@ -10,6 +10,18 @@
#include <uapi/linux/devlink.h>
+/* Common nested types */
+const struct nla_policy devlink_dl_port_function_nl_policy[DEVLINK_PORT_FN_ATTR_CAPS + 1] = {
+ [DEVLINK_PORT_FUNCTION_ATTR_HW_ADDR] = { .type = NLA_BINARY, },
+ [DEVLINK_PORT_FN_ATTR_STATE] = NLA_POLICY_MAX(NLA_U8, 1),
+ [DEVLINK_PORT_FN_ATTR_OPSTATE] = NLA_POLICY_MAX(NLA_U8, 1),
+ [DEVLINK_PORT_FN_ATTR_CAPS] = NLA_POLICY_BITFIELD32(3),
+};
+
+const struct nla_policy devlink_dl_selftest_id_nl_policy[DEVLINK_ATTR_SELFTEST_ID_FLASH + 1] = {
+ [DEVLINK_ATTR_SELFTEST_ID_FLASH] = { .type = NLA_FLAG, },
+};
+
/* DEVLINK_CMD_GET - do */
static const struct nla_policy devlink_get_nl_policy[DEVLINK_ATTR_DEV_NAME + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -29,6 +41,48 @@ static const struct nla_policy devlink_port_get_dump_nl_policy[DEVLINK_ATTR_DEV_
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_PORT_SET - do */
+static const struct nla_policy devlink_port_set_nl_policy[DEVLINK_ATTR_PORT_FUNCTION + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_PORT_TYPE] = NLA_POLICY_MAX(NLA_U16, 3),
+ [DEVLINK_ATTR_PORT_FUNCTION] = NLA_POLICY_NESTED(devlink_dl_port_function_nl_policy),
+};
+
+/* DEVLINK_CMD_PORT_NEW - do */
+static const struct nla_policy devlink_port_new_nl_policy[DEVLINK_ATTR_PORT_PCI_SF_NUMBER + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_PORT_FLAVOUR] = NLA_POLICY_MAX(NLA_U16, 7),
+ [DEVLINK_ATTR_PORT_PCI_PF_NUMBER] = { .type = NLA_U16, },
+ [DEVLINK_ATTR_PORT_PCI_SF_NUMBER] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_PORT_CONTROLLER_NUMBER] = { .type = NLA_U32, },
+};
+
+/* DEVLINK_CMD_PORT_DEL - do */
+static const struct nla_policy devlink_port_del_nl_policy[DEVLINK_ATTR_PORT_INDEX + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+};
+
+/* DEVLINK_CMD_PORT_SPLIT - do */
+static const struct nla_policy devlink_port_split_nl_policy[DEVLINK_ATTR_PORT_SPLIT_COUNT + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_PORT_SPLIT_COUNT] = { .type = NLA_U32, },
+};
+
+/* DEVLINK_CMD_PORT_UNSPLIT - do */
+static const struct nla_policy devlink_port_unsplit_nl_policy[DEVLINK_ATTR_PORT_INDEX + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+};
+
/* DEVLINK_CMD_SB_GET - do */
static const struct nla_policy devlink_sb_get_do_nl_policy[DEVLINK_ATTR_SB_INDEX + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -56,6 +110,16 @@ static const struct nla_policy devlink_sb_pool_get_dump_nl_policy[DEVLINK_ATTR_D
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_SB_POOL_SET - do */
+static const struct nla_policy devlink_sb_pool_set_nl_policy[DEVLINK_ATTR_SB_POOL_THRESHOLD_TYPE + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_SB_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_SB_POOL_INDEX] = { .type = NLA_U16, },
+ [DEVLINK_ATTR_SB_POOL_THRESHOLD_TYPE] = NLA_POLICY_MAX(NLA_U8, 1),
+ [DEVLINK_ATTR_SB_POOL_SIZE] = { .type = NLA_U32, },
+};
+
/* DEVLINK_CMD_SB_PORT_POOL_GET - do */
static const struct nla_policy devlink_sb_port_pool_get_do_nl_policy[DEVLINK_ATTR_SB_POOL_INDEX + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -71,6 +135,16 @@ static const struct nla_policy devlink_sb_port_pool_get_dump_nl_policy[DEVLINK_A
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_SB_PORT_POOL_SET - do */
+static const struct nla_policy devlink_sb_port_pool_set_nl_policy[DEVLINK_ATTR_SB_THRESHOLD + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_SB_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_SB_POOL_INDEX] = { .type = NLA_U16, },
+ [DEVLINK_ATTR_SB_THRESHOLD] = { .type = NLA_U32, },
+};
+
/* DEVLINK_CMD_SB_TC_POOL_BIND_GET - do */
static const struct nla_policy devlink_sb_tc_pool_bind_get_do_nl_policy[DEVLINK_ATTR_SB_TC_INDEX + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -87,6 +161,100 @@ static const struct nla_policy devlink_sb_tc_pool_bind_get_dump_nl_policy[DEVLIN
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_SB_TC_POOL_BIND_SET - do */
+static const struct nla_policy devlink_sb_tc_pool_bind_set_nl_policy[DEVLINK_ATTR_SB_TC_INDEX + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_SB_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_SB_POOL_INDEX] = { .type = NLA_U16, },
+ [DEVLINK_ATTR_SB_POOL_TYPE] = NLA_POLICY_MAX(NLA_U8, 1),
+ [DEVLINK_ATTR_SB_TC_INDEX] = { .type = NLA_U16, },
+ [DEVLINK_ATTR_SB_THRESHOLD] = { .type = NLA_U32, },
+};
+
+/* DEVLINK_CMD_SB_OCC_SNAPSHOT - do */
+static const struct nla_policy devlink_sb_occ_snapshot_nl_policy[DEVLINK_ATTR_SB_INDEX + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_SB_INDEX] = { .type = NLA_U32, },
+};
+
+/* DEVLINK_CMD_SB_OCC_MAX_CLEAR - do */
+static const struct nla_policy devlink_sb_occ_max_clear_nl_policy[DEVLINK_ATTR_SB_INDEX + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_SB_INDEX] = { .type = NLA_U32, },
+};
+
+/* DEVLINK_CMD_ESWITCH_GET - do */
+static const struct nla_policy devlink_eswitch_get_nl_policy[DEVLINK_ATTR_DEV_NAME + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+};
+
+/* DEVLINK_CMD_ESWITCH_SET - do */
+static const struct nla_policy devlink_eswitch_set_nl_policy[DEVLINK_ATTR_ESWITCH_ENCAP_MODE + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_ESWITCH_MODE] = NLA_POLICY_MAX(NLA_U16, 1),
+ [DEVLINK_ATTR_ESWITCH_INLINE_MODE] = NLA_POLICY_MAX(NLA_U16, 3),
+ [DEVLINK_ATTR_ESWITCH_ENCAP_MODE] = NLA_POLICY_MAX(NLA_U8, 1),
+};
+
+/* DEVLINK_CMD_DPIPE_TABLE_GET - do */
+static const struct nla_policy devlink_dpipe_table_get_nl_policy[DEVLINK_ATTR_DPIPE_TABLE_NAME + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DPIPE_TABLE_NAME] = { .type = NLA_NUL_STRING, },
+};
+
+/* DEVLINK_CMD_DPIPE_ENTRIES_GET - do */
+static const struct nla_policy devlink_dpipe_entries_get_nl_policy[DEVLINK_ATTR_DPIPE_TABLE_NAME + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DPIPE_TABLE_NAME] = { .type = NLA_NUL_STRING, },
+};
+
+/* DEVLINK_CMD_DPIPE_HEADERS_GET - do */
+static const struct nla_policy devlink_dpipe_headers_get_nl_policy[DEVLINK_ATTR_DEV_NAME + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+};
+
+/* DEVLINK_CMD_DPIPE_TABLE_COUNTERS_SET - do */
+static const struct nla_policy devlink_dpipe_table_counters_set_nl_policy[DEVLINK_ATTR_DPIPE_TABLE_COUNTERS_ENABLED + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DPIPE_TABLE_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DPIPE_TABLE_COUNTERS_ENABLED] = { .type = NLA_U8, },
+};
+
+/* DEVLINK_CMD_RESOURCE_SET - do */
+static const struct nla_policy devlink_resource_set_nl_policy[DEVLINK_ATTR_RESOURCE_SIZE + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_RESOURCE_ID] = { .type = NLA_U64, },
+ [DEVLINK_ATTR_RESOURCE_SIZE] = { .type = NLA_U64, },
+};
+
+/* DEVLINK_CMD_RESOURCE_DUMP - do */
+static const struct nla_policy devlink_resource_dump_nl_policy[DEVLINK_ATTR_DEV_NAME + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+};
+
+/* DEVLINK_CMD_RELOAD - do */
+static const struct nla_policy devlink_reload_nl_policy[DEVLINK_ATTR_RELOAD_LIMITS + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_RELOAD_ACTION] = NLA_POLICY_RANGE(NLA_U8, 1, 2),
+ [DEVLINK_ATTR_RELOAD_LIMITS] = NLA_POLICY_BITFIELD32(6),
+ [DEVLINK_ATTR_NETNS_PID] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_NETNS_FD] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_NETNS_ID] = { .type = NLA_U32, },
+};
+
/* DEVLINK_CMD_PARAM_GET - do */
static const struct nla_policy devlink_param_get_do_nl_policy[DEVLINK_ATTR_PARAM_NAME + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -100,6 +268,15 @@ static const struct nla_policy devlink_param_get_dump_nl_policy[DEVLINK_ATTR_DEV
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_PARAM_SET - do */
+static const struct nla_policy devlink_param_set_nl_policy[DEVLINK_ATTR_PARAM_VALUE_CMODE + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PARAM_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PARAM_TYPE] = { .type = NLA_U8, },
+ [DEVLINK_ATTR_PARAM_VALUE_CMODE] = NLA_POLICY_MAX(NLA_U8, 2),
+};
+
/* DEVLINK_CMD_REGION_GET - do */
static const struct nla_policy devlink_region_get_do_nl_policy[DEVLINK_ATTR_REGION_NAME + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -114,6 +291,50 @@ static const struct nla_policy devlink_region_get_dump_nl_policy[DEVLINK_ATTR_DE
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_REGION_NEW - do */
+static const struct nla_policy devlink_region_new_nl_policy[DEVLINK_ATTR_REGION_SNAPSHOT_ID + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_REGION_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_REGION_SNAPSHOT_ID] = { .type = NLA_U32, },
+};
+
+/* DEVLINK_CMD_REGION_DEL - do */
+static const struct nla_policy devlink_region_del_nl_policy[DEVLINK_ATTR_REGION_SNAPSHOT_ID + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_REGION_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_REGION_SNAPSHOT_ID] = { .type = NLA_U32, },
+};
+
+/* DEVLINK_CMD_REGION_READ - dump */
+static const struct nla_policy devlink_region_read_nl_policy[DEVLINK_ATTR_REGION_DIRECT + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_REGION_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_REGION_SNAPSHOT_ID] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_REGION_DIRECT] = { .type = NLA_FLAG, },
+ [DEVLINK_ATTR_REGION_CHUNK_ADDR] = { .type = NLA_U64, },
+ [DEVLINK_ATTR_REGION_CHUNK_LEN] = { .type = NLA_U64, },
+};
+
+/* DEVLINK_CMD_PORT_PARAM_GET - do */
+static const struct nla_policy devlink_port_param_get_nl_policy[DEVLINK_ATTR_PORT_INDEX + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+};
+
+/* DEVLINK_CMD_PORT_PARAM_SET - do */
+static const struct nla_policy devlink_port_param_set_nl_policy[DEVLINK_ATTR_PORT_INDEX + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+};
+
/* DEVLINK_CMD_INFO_GET - do */
static const struct nla_policy devlink_info_get_nl_policy[DEVLINK_ATTR_DEV_NAME + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -135,6 +356,58 @@ static const struct nla_policy devlink_health_reporter_get_dump_nl_policy[DEVLIN
[DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
};
+/* DEVLINK_CMD_HEALTH_REPORTER_SET - do */
+static const struct nla_policy devlink_health_reporter_set_nl_policy[DEVLINK_ATTR_HEALTH_REPORTER_AUTO_DUMP + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_GRACEFUL_PERIOD] = { .type = NLA_U64, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_AUTO_RECOVER] = { .type = NLA_U8, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_AUTO_DUMP] = { .type = NLA_U8, },
+};
+
+/* DEVLINK_CMD_HEALTH_REPORTER_RECOVER - do */
+static const struct nla_policy devlink_health_reporter_recover_nl_policy[DEVLINK_ATTR_HEALTH_REPORTER_NAME + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_NAME] = { .type = NLA_NUL_STRING, },
+};
+
+/* DEVLINK_CMD_HEALTH_REPORTER_DIAGNOSE - do */
+static const struct nla_policy devlink_health_reporter_diagnose_nl_policy[DEVLINK_ATTR_HEALTH_REPORTER_NAME + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_NAME] = { .type = NLA_NUL_STRING, },
+};
+
+/* DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET - dump */
+static const struct nla_policy devlink_health_reporter_dump_get_nl_policy[DEVLINK_ATTR_HEALTH_REPORTER_NAME + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_NAME] = { .type = NLA_NUL_STRING, },
+};
+
+/* DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR - do */
+static const struct nla_policy devlink_health_reporter_dump_clear_nl_policy[DEVLINK_ATTR_HEALTH_REPORTER_NAME + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_NAME] = { .type = NLA_NUL_STRING, },
+};
+
+/* DEVLINK_CMD_FLASH_UPDATE - do */
+static const struct nla_policy devlink_flash_update_nl_policy[DEVLINK_ATTR_FLASH_UPDATE_OVERWRITE_MASK + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_FLASH_UPDATE_FILE_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_FLASH_UPDATE_COMPONENT] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_FLASH_UPDATE_OVERWRITE_MASK] = NLA_POLICY_BITFIELD32(3),
+};
+
/* DEVLINK_CMD_TRAP_GET - do */
static const struct nla_policy devlink_trap_get_do_nl_policy[DEVLINK_ATTR_TRAP_NAME + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -148,6 +421,14 @@ static const struct nla_policy devlink_trap_get_dump_nl_policy[DEVLINK_ATTR_DEV_
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_TRAP_SET - do */
+static const struct nla_policy devlink_trap_set_nl_policy[DEVLINK_ATTR_TRAP_ACTION + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_TRAP_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_TRAP_ACTION] = NLA_POLICY_MAX(NLA_U8, 2),
+};
+
/* DEVLINK_CMD_TRAP_GROUP_GET - do */
static const struct nla_policy devlink_trap_group_get_do_nl_policy[DEVLINK_ATTR_TRAP_GROUP_NAME + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -161,6 +442,15 @@ static const struct nla_policy devlink_trap_group_get_dump_nl_policy[DEVLINK_ATT
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_TRAP_GROUP_SET - do */
+static const struct nla_policy devlink_trap_group_set_nl_policy[DEVLINK_ATTR_TRAP_POLICER_ID + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_TRAP_GROUP_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_TRAP_ACTION] = NLA_POLICY_MAX(NLA_U8, 2),
+ [DEVLINK_ATTR_TRAP_POLICER_ID] = { .type = NLA_U32, },
+};
+
/* DEVLINK_CMD_TRAP_POLICER_GET - do */
static const struct nla_policy devlink_trap_policer_get_do_nl_policy[DEVLINK_ATTR_TRAP_POLICER_ID + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -174,6 +464,23 @@ static const struct nla_policy devlink_trap_policer_get_dump_nl_policy[DEVLINK_A
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_TRAP_POLICER_SET - do */
+static const struct nla_policy devlink_trap_policer_set_nl_policy[DEVLINK_ATTR_TRAP_POLICER_BURST + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_TRAP_POLICER_ID] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_TRAP_POLICER_RATE] = { .type = NLA_U64, },
+ [DEVLINK_ATTR_TRAP_POLICER_BURST] = { .type = NLA_U64, },
+};
+
+/* DEVLINK_CMD_HEALTH_REPORTER_TEST - do */
+static const struct nla_policy devlink_health_reporter_test_nl_policy[DEVLINK_ATTR_HEALTH_REPORTER_NAME + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_PORT_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_NAME] = { .type = NLA_NUL_STRING, },
+};
+
/* DEVLINK_CMD_RATE_GET - do */
static const struct nla_policy devlink_rate_get_do_nl_policy[DEVLINK_ATTR_RATE_NODE_NAME + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -188,6 +495,37 @@ static const struct nla_policy devlink_rate_get_dump_nl_policy[DEVLINK_ATTR_DEV_
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_RATE_SET - do */
+static const struct nla_policy devlink_rate_set_nl_policy[DEVLINK_ATTR_RATE_TX_WEIGHT + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_RATE_NODE_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_RATE_TX_SHARE] = { .type = NLA_U64, },
+ [DEVLINK_ATTR_RATE_TX_MAX] = { .type = NLA_U64, },
+ [DEVLINK_ATTR_RATE_TX_PRIORITY] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_RATE_TX_WEIGHT] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_RATE_PARENT_NODE_NAME] = { .type = NLA_NUL_STRING, },
+};
+
+/* DEVLINK_CMD_RATE_NEW - do */
+static const struct nla_policy devlink_rate_new_nl_policy[DEVLINK_ATTR_RATE_TX_WEIGHT + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_RATE_NODE_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_RATE_TX_SHARE] = { .type = NLA_U64, },
+ [DEVLINK_ATTR_RATE_TX_MAX] = { .type = NLA_U64, },
+ [DEVLINK_ATTR_RATE_TX_PRIORITY] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_RATE_TX_WEIGHT] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_RATE_PARENT_NODE_NAME] = { .type = NLA_NUL_STRING, },
+};
+
+/* DEVLINK_CMD_RATE_DEL - do */
+static const struct nla_policy devlink_rate_del_nl_policy[DEVLINK_ATTR_RATE_NODE_NAME + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_RATE_NODE_NAME] = { .type = NLA_NUL_STRING, },
+};
+
/* DEVLINK_CMD_LINECARD_GET - do */
static const struct nla_policy devlink_linecard_get_do_nl_policy[DEVLINK_ATTR_LINECARD_INDEX + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
@@ -201,14 +539,29 @@ static const struct nla_policy devlink_linecard_get_dump_nl_policy[DEVLINK_ATTR_
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_LINECARD_SET - do */
+static const struct nla_policy devlink_linecard_set_nl_policy[DEVLINK_ATTR_LINECARD_TYPE + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_LINECARD_INDEX] = { .type = NLA_U32, },
+ [DEVLINK_ATTR_LINECARD_TYPE] = { .type = NLA_NUL_STRING, },
+};
+
/* DEVLINK_CMD_SELFTESTS_GET - do */
static const struct nla_policy devlink_selftests_get_nl_policy[DEVLINK_ATTR_DEV_NAME + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
[DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
};
+/* DEVLINK_CMD_SELFTESTS_RUN - do */
+static const struct nla_policy devlink_selftests_run_nl_policy[DEVLINK_ATTR_SELFTESTS + 1] = {
+ [DEVLINK_ATTR_BUS_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_DEV_NAME] = { .type = NLA_NUL_STRING, },
+ [DEVLINK_ATTR_SELFTESTS] = NLA_POLICY_NESTED(devlink_dl_selftest_id_nl_policy),
+};
+
/* Ops table for devlink */
-const struct genl_split_ops devlink_nl_ops[32] = {
+const struct genl_split_ops devlink_nl_ops[73] = {
{
.cmd = DEVLINK_CMD_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
@@ -243,6 +596,56 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_PORT_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port,
+ .doit = devlink_nl_port_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_port_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_PORT_FUNCTION,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_PORT_NEW,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_port_new_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_port_new_nl_policy,
+ .maxattr = DEVLINK_ATTR_PORT_PCI_SF_NUMBER,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_PORT_DEL,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port,
+ .doit = devlink_nl_port_del_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_port_del_nl_policy,
+ .maxattr = DEVLINK_ATTR_PORT_INDEX,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_PORT_SPLIT,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port,
+ .doit = devlink_nl_port_split_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_port_split_nl_policy,
+ .maxattr = DEVLINK_ATTR_PORT_SPLIT_COUNT,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_PORT_UNSPLIT,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port,
+ .doit = devlink_nl_port_unsplit_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_port_unsplit_nl_policy,
+ .maxattr = DEVLINK_ATTR_PORT_INDEX,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_SB_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit,
@@ -277,6 +680,16 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_SB_POOL_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_sb_pool_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_sb_pool_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_SB_POOL_THRESHOLD_TYPE,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_SB_PORT_POOL_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit_port,
@@ -294,6 +707,16 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_SB_PORT_POOL_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port,
+ .doit = devlink_nl_sb_port_pool_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_sb_port_pool_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_SB_THRESHOLD,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_SB_TC_POOL_BIND_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit_port,
@@ -311,6 +734,126 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_SB_TC_POOL_BIND_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port,
+ .doit = devlink_nl_sb_tc_pool_bind_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_sb_tc_pool_bind_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_SB_TC_INDEX,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_SB_OCC_SNAPSHOT,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_sb_occ_snapshot_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_sb_occ_snapshot_nl_policy,
+ .maxattr = DEVLINK_ATTR_SB_INDEX,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_SB_OCC_MAX_CLEAR,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_sb_occ_max_clear_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_sb_occ_max_clear_nl_policy,
+ .maxattr = DEVLINK_ATTR_SB_INDEX,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_ESWITCH_GET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_eswitch_get_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_eswitch_get_nl_policy,
+ .maxattr = DEVLINK_ATTR_DEV_NAME,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_ESWITCH_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_eswitch_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_eswitch_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_ESWITCH_ENCAP_MODE,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_DPIPE_TABLE_GET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_dpipe_table_get_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_dpipe_table_get_nl_policy,
+ .maxattr = DEVLINK_ATTR_DPIPE_TABLE_NAME,
+ .flags = GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_DPIPE_ENTRIES_GET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_dpipe_entries_get_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_dpipe_entries_get_nl_policy,
+ .maxattr = DEVLINK_ATTR_DPIPE_TABLE_NAME,
+ .flags = GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_DPIPE_HEADERS_GET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_dpipe_headers_get_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_dpipe_headers_get_nl_policy,
+ .maxattr = DEVLINK_ATTR_DEV_NAME,
+ .flags = GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_DPIPE_TABLE_COUNTERS_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_dpipe_table_counters_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_dpipe_table_counters_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_DPIPE_TABLE_COUNTERS_ENABLED,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_RESOURCE_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_resource_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_resource_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_RESOURCE_SIZE,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_RESOURCE_DUMP,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_resource_dump_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_resource_dump_nl_policy,
+ .maxattr = DEVLINK_ATTR_DEV_NAME,
+ .flags = GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_RELOAD,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_reload_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_reload_nl_policy,
+ .maxattr = DEVLINK_ATTR_RELOAD_LIMITS,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_PARAM_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit,
@@ -328,6 +871,16 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_PARAM_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_param_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_param_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_PARAM_VALUE_CMODE,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_REGION_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit_port_optional,
@@ -345,6 +898,60 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_REGION_NEW,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port_optional,
+ .doit = devlink_nl_region_new_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_region_new_nl_policy,
+ .maxattr = DEVLINK_ATTR_REGION_SNAPSHOT_ID,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_REGION_DEL,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port_optional,
+ .doit = devlink_nl_region_del_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_region_del_nl_policy,
+ .maxattr = DEVLINK_ATTR_REGION_SNAPSHOT_ID,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_REGION_READ,
+ .validate = GENL_DONT_VALIDATE_DUMP_STRICT,
+ .dumpit = devlink_nl_region_read_dumpit,
+ .policy = devlink_region_read_nl_policy,
+ .maxattr = DEVLINK_ATTR_REGION_DIRECT,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DUMP,
+ },
+ {
+ .cmd = DEVLINK_CMD_PORT_PARAM_GET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port,
+ .doit = devlink_nl_port_param_get_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_port_param_get_nl_policy,
+ .maxattr = DEVLINK_ATTR_PORT_INDEX,
+ .flags = GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_PORT_PARAM_GET,
+ .validate = GENL_DONT_VALIDATE_DUMP_STRICT,
+ .dumpit = devlink_nl_port_param_get_dumpit,
+ .flags = GENL_CMD_CAP_DUMP,
+ },
+ {
+ .cmd = DEVLINK_CMD_PORT_PARAM_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port,
+ .doit = devlink_nl_port_param_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_port_param_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_PORT_INDEX,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_INFO_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit,
@@ -378,6 +985,64 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_HEALTH_REPORTER_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port_optional,
+ .doit = devlink_nl_health_reporter_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_health_reporter_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_HEALTH_REPORTER_AUTO_DUMP,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_HEALTH_REPORTER_RECOVER,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port_optional,
+ .doit = devlink_nl_health_reporter_recover_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_health_reporter_recover_nl_policy,
+ .maxattr = DEVLINK_ATTR_HEALTH_REPORTER_NAME,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_HEALTH_REPORTER_DIAGNOSE,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port_optional,
+ .doit = devlink_nl_health_reporter_diagnose_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_health_reporter_diagnose_nl_policy,
+ .maxattr = DEVLINK_ATTR_HEALTH_REPORTER_NAME,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET,
+ .validate = GENL_DONT_VALIDATE_DUMP_STRICT,
+ .dumpit = devlink_nl_health_reporter_dump_get_dumpit,
+ .policy = devlink_health_reporter_dump_get_nl_policy,
+ .maxattr = DEVLINK_ATTR_HEALTH_REPORTER_NAME,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DUMP,
+ },
+ {
+ .cmd = DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port_optional,
+ .doit = devlink_nl_health_reporter_dump_clear_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_health_reporter_dump_clear_nl_policy,
+ .maxattr = DEVLINK_ATTR_HEALTH_REPORTER_NAME,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_FLASH_UPDATE,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_flash_update_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_flash_update_nl_policy,
+ .maxattr = DEVLINK_ATTR_FLASH_UPDATE_OVERWRITE_MASK,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_TRAP_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit,
@@ -395,6 +1060,16 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_TRAP_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_trap_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_trap_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_TRAP_ACTION,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_TRAP_GROUP_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit,
@@ -412,6 +1087,16 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_TRAP_GROUP_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_trap_group_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_trap_group_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_TRAP_POLICER_ID,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_TRAP_POLICER_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit,
@@ -429,6 +1114,26 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_TRAP_POLICER_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_trap_policer_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_trap_policer_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_TRAP_POLICER_BURST,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_HEALTH_REPORTER_TEST,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit_port_optional,
+ .doit = devlink_nl_health_reporter_test_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_health_reporter_test_nl_policy,
+ .maxattr = DEVLINK_ATTR_HEALTH_REPORTER_NAME,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_RATE_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit,
@@ -446,6 +1151,36 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_RATE_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_rate_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_rate_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_RATE_TX_WEIGHT,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_RATE_NEW,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_rate_new_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_rate_new_nl_policy,
+ .maxattr = DEVLINK_ATTR_RATE_TX_WEIGHT,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
+ .cmd = DEVLINK_CMD_RATE_DEL,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_rate_del_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_rate_del_nl_policy,
+ .maxattr = DEVLINK_ATTR_RATE_NODE_NAME,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_LINECARD_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit,
@@ -463,6 +1198,16 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.flags = GENL_CMD_CAP_DUMP,
},
{
+ .cmd = DEVLINK_CMD_LINECARD_SET,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_linecard_set_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_linecard_set_nl_policy,
+ .maxattr = DEVLINK_ATTR_LINECARD_TYPE,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
+ {
.cmd = DEVLINK_CMD_SELFTESTS_GET,
.validate = GENL_DONT_VALIDATE_STRICT,
.pre_doit = devlink_nl_pre_doit,
@@ -478,4 +1223,14 @@ const struct genl_split_ops devlink_nl_ops[32] = {
.dumpit = devlink_nl_selftests_get_dumpit,
.flags = GENL_CMD_CAP_DUMP,
},
+ {
+ .cmd = DEVLINK_CMD_SELFTESTS_RUN,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .pre_doit = devlink_nl_pre_doit,
+ .doit = devlink_nl_selftests_run_doit,
+ .post_doit = devlink_nl_post_doit,
+ .policy = devlink_selftests_run_nl_policy,
+ .maxattr = DEVLINK_ATTR_SELFTESTS,
+ .flags = GENL_ADMIN_PERM | GENL_CMD_CAP_DO,
+ },
};
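
The array above is the generated split-ops table: each entry binds one devlink command to its doit/dumpit handler, a per-command policy and maxattr, and the pre/post hooks. As a rough sketch of how such a table is consumed (not part of this patch; the real registration lives in net/devlink/netlink.c, and everything except the "devlink" family name and the split_ops fields is illustrative):

/* Illustrative sketch only, not from this patch: a generic netlink family
 * wired to a generated split-ops array like devlink_nl_ops[] above.
 */
#include <net/genetlink.h>

static struct genl_family example_nl_family __ro_after_init = {
	.name		= "devlink",
	.version	= 1,
	.netnsok	= true,
	.parallel_ops	= true,
	.split_ops	= devlink_nl_ops,
	.n_split_ops	= ARRAY_SIZE(devlink_nl_ops),
};
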
diff --git a/net/devlink/netlink_gen.h b/net/devlink/netlink_gen.h
index f8bbc93e39be..0e9e89c31c31 100644
--- a/net/devlink/netlink_gen.h
+++ b/net/devlink/netlink_gen.h
@@ -11,8 +11,12 @@
#include <uapi/linux/devlink.h>
+/* Common nested types */
+extern const struct nla_policy devlink_dl_port_function_nl_policy[DEVLINK_PORT_FN_ATTR_CAPS + 1];
+extern const struct nla_policy devlink_dl_selftest_id_nl_policy[DEVLINK_ATTR_SELFTEST_ID_FLASH + 1];
+
/* Ops table for devlink */
-extern const struct genl_split_ops devlink_nl_ops[32];
+extern const struct genl_split_ops devlink_nl_ops[73];
int devlink_nl_pre_doit(const struct genl_split_ops *ops, struct sk_buff *skb,
struct genl_info *info);
@@ -30,25 +34,61 @@ int devlink_nl_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);
int devlink_nl_port_get_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_port_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_port_set_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_port_new_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_port_del_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_port_split_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_port_unsplit_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_sb_get_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_sb_get_dumpit(struct sk_buff *skb, struct netlink_callback *cb);
int devlink_nl_sb_pool_get_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_sb_pool_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_sb_pool_set_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_sb_port_pool_get_doit(struct sk_buff *skb,
struct genl_info *info);
int devlink_nl_sb_port_pool_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_sb_port_pool_set_doit(struct sk_buff *skb,
+ struct genl_info *info);
int devlink_nl_sb_tc_pool_bind_get_doit(struct sk_buff *skb,
struct genl_info *info);
int devlink_nl_sb_tc_pool_bind_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_sb_tc_pool_bind_set_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_sb_occ_snapshot_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_sb_occ_max_clear_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_eswitch_get_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_eswitch_set_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_dpipe_table_get_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_dpipe_entries_get_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_dpipe_headers_get_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_dpipe_table_counters_set_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_resource_set_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_resource_dump_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_reload_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_param_get_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_param_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_param_set_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_region_get_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_region_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_region_new_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_region_del_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_region_read_dumpit(struct sk_buff *skb,
+ struct netlink_callback *cb);
+int devlink_nl_port_param_get_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_port_param_get_dumpit(struct sk_buff *skb,
+ struct netlink_callback *cb);
+int devlink_nl_port_param_set_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_info_get_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_info_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
@@ -56,24 +96,46 @@ int devlink_nl_health_reporter_get_doit(struct sk_buff *skb,
struct genl_info *info);
int devlink_nl_health_reporter_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_health_reporter_set_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_health_reporter_recover_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_health_reporter_diagnose_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_health_reporter_dump_get_dumpit(struct sk_buff *skb,
+ struct netlink_callback *cb);
+int devlink_nl_health_reporter_dump_clear_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_flash_update_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_trap_get_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_trap_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_trap_set_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_trap_group_get_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_trap_group_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_trap_group_set_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_trap_policer_get_doit(struct sk_buff *skb,
struct genl_info *info);
int devlink_nl_trap_policer_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_trap_policer_set_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int devlink_nl_health_reporter_test_doit(struct sk_buff *skb,
+ struct genl_info *info);
int devlink_nl_rate_get_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_rate_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_rate_set_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_rate_new_doit(struct sk_buff *skb, struct genl_info *info);
+int devlink_nl_rate_del_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_linecard_get_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_linecard_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_linecard_set_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_selftests_get_doit(struct sk_buff *skb, struct genl_info *info);
int devlink_nl_selftests_get_dumpit(struct sk_buff *skb,
struct netlink_callback *cb);
+int devlink_nl_selftests_run_doit(struct sk_buff *skb, struct genl_info *info);
#endif /* _LINUX_DEVLINK_GEN_H */
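
All of the handlers declared above share one calling convention. A minimal, hypothetical doit skeleton matching those prototypes (the handler name and the attribute being checked are placeholders; devlink_nl_pre_doit() has already resolved and locked the devlink instance into info->user_ptr[0]):

/* Hypothetical example handler -- only the calling convention is real. */
#include <net/genetlink.h>
#include "devl_internal.h"

int devlink_nl_example_set_doit(struct sk_buff *skb, struct genl_info *info)
{
	struct devlink *devlink = info->user_ptr[0];

	/* reject requests missing a mandatory attribute */
	if (GENL_REQ_ATTR_CHECK(info, DEVLINK_ATTR_HEALTH_REPORTER_NAME))
		return -EINVAL;

	/* ... apply the requested change to @devlink ... */
	return 0;
}
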
diff --git a/net/devlink/param.c b/net/devlink/param.c
index 31275f9d4cb7..d74df09311a9 100644
--- a/net/devlink/param.c
+++ b/net/devlink/param.c
@@ -581,7 +581,7 @@ static int __devlink_nl_cmd_param_set_doit(struct devlink *devlink,
return 0;
}
-int devlink_nl_cmd_param_set_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_param_set_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
@@ -589,22 +589,22 @@ int devlink_nl_cmd_param_set_doit(struct sk_buff *skb, struct genl_info *info)
info, DEVLINK_CMD_PARAM_NEW);
}
-int devlink_nl_cmd_port_param_get_dumpit(struct sk_buff *msg,
- struct netlink_callback *cb)
+int devlink_nl_port_param_get_dumpit(struct sk_buff *msg,
+ struct netlink_callback *cb)
{
NL_SET_ERR_MSG(cb->extack, "Port params are not supported");
return msg->len;
}
-int devlink_nl_cmd_port_param_get_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_port_param_get_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
NL_SET_ERR_MSG(info->extack, "Port params are not supported");
return -EINVAL;
}
-int devlink_nl_cmd_port_param_set_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_port_param_set_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
NL_SET_ERR_MSG(info->extack, "Port params are not supported");
return -EINVAL;
diff --git a/net/devlink/port.c b/net/devlink/port.c
index 4763b42885fb..7634f187fa50 100644
--- a/net/devlink/port.c
+++ b/net/devlink/port.c
@@ -428,6 +428,13 @@ devlink_nl_port_function_attrs_put(struct sk_buff *msg, struct devlink_port *por
if (err)
goto out;
err = devlink_port_fn_state_fill(port, msg, extack, &msg_updated);
+ if (err)
+ goto out;
+ err = devlink_rel_devlink_handle_put(msg, port->devlink,
+ port->rel_index,
+ DEVLINK_PORT_FN_ATTR_DEVLINK,
+ &msg_updated);
+
out:
if (err || !msg_updated)
nla_nest_cancel(msg, function_attr);
@@ -483,7 +490,7 @@ static int devlink_nl_port_fill(struct sk_buff *msg,
goto nla_put_failure;
if (devlink_port->linecard &&
nla_put_u32(msg, DEVLINK_ATTR_LINECARD_INDEX,
- devlink_port->linecard->index))
+ devlink_linecard_index(devlink_port->linecard)))
goto nla_put_failure;
genlmsg_end(msg, hdr);
@@ -765,7 +772,7 @@ static int devlink_port_function_set(struct devlink_port *port,
return err;
}
-int devlink_nl_cmd_port_set_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_port_set_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink_port *devlink_port = info->user_ptr[1];
int err;
@@ -791,7 +798,7 @@ int devlink_nl_cmd_port_set_doit(struct sk_buff *skb, struct genl_info *info)
return 0;
}
-int devlink_nl_cmd_port_split_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_port_split_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink_port *devlink_port = info->user_ptr[1];
struct devlink *devlink = info->user_ptr[0];
@@ -822,8 +829,7 @@ int devlink_nl_cmd_port_split_doit(struct sk_buff *skb, struct genl_info *info)
info->extack);
}
-int devlink_nl_cmd_port_unsplit_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_port_unsplit_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink_port *devlink_port = info->user_ptr[1];
struct devlink *devlink = info->user_ptr[0];
@@ -833,7 +839,7 @@ int devlink_nl_cmd_port_unsplit_doit(struct sk_buff *skb,
return devlink_port->ops->port_unsplit(devlink, devlink_port, info->extack);
}
-int devlink_nl_cmd_port_new_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_port_new_doit(struct sk_buff *skb, struct genl_info *info)
{
struct netlink_ext_ack *extack = info->extack;
struct devlink_port_new_attrs new_attrs = {};
@@ -897,7 +903,7 @@ err_out_port_del:
return err;
}
-int devlink_nl_cmd_port_del_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_port_del_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink_port *devlink_port = info->user_ptr[1];
struct netlink_ext_ack *extack = info->extack;
@@ -1392,6 +1398,50 @@ void devlink_port_attrs_pci_sf_set(struct devlink_port *devlink_port, u32 contro
}
EXPORT_SYMBOL_GPL(devlink_port_attrs_pci_sf_set);
+static void devlink_port_rel_notify_cb(struct devlink *devlink, u32 port_index)
+{
+ struct devlink_port *devlink_port;
+
+ devlink_port = devlink_port_get_by_index(devlink, port_index);
+ if (!devlink_port)
+ return;
+ devlink_port_notify(devlink_port, DEVLINK_CMD_PORT_NEW);
+}
+
+static void devlink_port_rel_cleanup_cb(struct devlink *devlink, u32 port_index,
+ u32 rel_index)
+{
+ struct devlink_port *devlink_port;
+
+ devlink_port = devlink_port_get_by_index(devlink, port_index);
+ if (devlink_port && devlink_port->rel_index == rel_index)
+ devlink_port->rel_index = 0;
+}
+
+/**
+ * devl_port_fn_devlink_set - Attach peer devlink
+ * instance to port function.
+ * @devlink_port: devlink port
+ * @fn_devlink: devlink instance to attach
+ */
+int devl_port_fn_devlink_set(struct devlink_port *devlink_port,
+ struct devlink *fn_devlink)
+{
+ ASSERT_DEVLINK_PORT_REGISTERED(devlink_port);
+
+ if (WARN_ON(devlink_port->attrs.flavour != DEVLINK_PORT_FLAVOUR_PCI_SF ||
+ devlink_port->attrs.pci_sf.external))
+ return -EINVAL;
+
+ return devlink_rel_nested_in_add(&devlink_port->rel_index,
+ devlink_port->devlink->index,
+ devlink_port->index,
+ devlink_port_rel_notify_cb,
+ devlink_port_rel_cleanup_cb,
+ fn_devlink);
+}
+EXPORT_SYMBOL_GPL(devl_port_fn_devlink_set);
+
/**
* devlink_port_linecard_set - Link port with a linecard
*
@@ -1420,7 +1470,7 @@ static int __devlink_port_phys_port_name_get(struct devlink_port *devlink_port,
case DEVLINK_PORT_FLAVOUR_PHYSICAL:
if (devlink_port->linecard)
n = snprintf(name, len, "l%u",
- devlink_port->linecard->index);
+ devlink_linecard_index(devlink_port->linecard));
if (n < len)
n += snprintf(name + n, len - n, "p%u",
attrs->phys.port_number);
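
devl_port_fn_devlink_set() above is a new export for drivers that instantiate a nested devlink instance for a non-external SF port function. A hedged usage sketch, assuming the devl_ prefix means the parent instance lock must be held; everything except devl_port_fn_devlink_set(), devl_lock() and devl_unlock() is a made-up driver-side name:

/* Hypothetical driver code illustrating the new API above. */
static int example_sf_attach_devlink(struct devlink_port *sf_port,
				     struct devlink *sf_devlink)
{
	struct devlink *parent = sf_port->devlink;
	int err;

	devl_lock(parent);
	err = devl_port_fn_devlink_set(sf_port, sf_devlink);
	devl_unlock(parent);

	return err;
}
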
diff --git a/net/devlink/rate.c b/net/devlink/rate.c
index dff1593b8406..94b289b93ff2 100644
--- a/net/devlink/rate.c
+++ b/net/devlink/rate.c
@@ -458,7 +458,7 @@ static bool devlink_rate_set_ops_supported(const struct devlink_ops *ops,
return true;
}
-int devlink_nl_cmd_rate_set_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_rate_set_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_rate *devlink_rate;
@@ -480,7 +480,7 @@ int devlink_nl_cmd_rate_set_doit(struct sk_buff *skb, struct genl_info *info)
return err;
}
-int devlink_nl_cmd_rate_new_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_rate_new_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_rate *rate_node;
@@ -536,7 +536,7 @@ err_strdup:
return err;
}
-int devlink_nl_cmd_rate_del_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_rate_del_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_rate *rate_node;
diff --git a/net/devlink/region.c b/net/devlink/region.c
index d197cdb662db..0aab7b82d678 100644
--- a/net/devlink/region.c
+++ b/net/devlink/region.c
@@ -588,7 +588,7 @@ int devlink_nl_region_get_dumpit(struct sk_buff *skb,
return devlink_nl_dumpit(skb, cb, devlink_nl_region_get_dump_one);
}
-int devlink_nl_cmd_region_del(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_region_del_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_snapshot *snapshot;
@@ -633,7 +633,7 @@ int devlink_nl_cmd_region_del(struct sk_buff *skb, struct genl_info *info)
return 0;
}
-int devlink_nl_cmd_region_new(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_region_new_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_snapshot *snapshot;
@@ -863,8 +863,8 @@ devlink_region_direct_fill(void *cb_priv, u8 *chunk, u32 chunk_size,
curr_offset, chunk_size, chunk);
}
-int devlink_nl_cmd_region_read_dumpit(struct sk_buff *skb,
- struct netlink_callback *cb)
+int devlink_nl_region_read_dumpit(struct sk_buff *skb,
+ struct netlink_callback *cb)
{
const struct genl_dumpit_info *info = genl_dumpit_info(cb);
struct devlink_nl_dump_state *state = devlink_dump_state(cb);
diff --git a/net/devlink/resource.c b/net/devlink/resource.c
index c8b615e4c385..594c8aeb3bfa 100644
--- a/net/devlink/resource.c
+++ b/net/devlink/resource.c
@@ -105,7 +105,7 @@ devlink_resource_validate_size(struct devlink_resource *resource, u64 size,
return err;
}
-int devlink_nl_cmd_resource_set(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_resource_set_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
struct devlink_resource *resource;
@@ -285,7 +285,7 @@ err_resource_put:
return err;
}
-int devlink_nl_cmd_resource_dump(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_resource_dump_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
diff --git a/net/devlink/sb.c b/net/devlink/sb.c
index bd677fff5ec8..0a76bb32502b 100644
--- a/net/devlink/sb.c
+++ b/net/devlink/sb.c
@@ -413,7 +413,7 @@ static int devlink_sb_pool_set(struct devlink *devlink, unsigned int sb_index,
return -EOPNOTSUPP;
}
-int devlink_nl_cmd_sb_pool_set_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_sb_pool_set_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
enum devlink_sb_threshold_type threshold_type;
@@ -621,8 +621,8 @@ static int devlink_sb_port_pool_set(struct devlink_port *devlink_port,
return -EOPNOTSUPP;
}
-int devlink_nl_cmd_sb_port_pool_set_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_sb_port_pool_set_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink_port *devlink_port = info->user_ptr[1];
struct devlink *devlink = info->user_ptr[0];
@@ -861,8 +861,8 @@ static int devlink_sb_tc_pool_bind_set(struct devlink_port *devlink_port,
return -EOPNOTSUPP;
}
-int devlink_nl_cmd_sb_tc_pool_bind_set_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_sb_tc_pool_bind_set_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink_port *devlink_port = info->user_ptr[1];
struct devlink *devlink = info->user_ptr[0];
@@ -900,8 +900,7 @@ int devlink_nl_cmd_sb_tc_pool_bind_set_doit(struct sk_buff *skb,
pool_index, threshold, info->extack);
}
-int devlink_nl_cmd_sb_occ_snapshot_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_sb_occ_snapshot_doit(struct sk_buff *skb, struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
const struct devlink_ops *ops = devlink->ops;
@@ -916,8 +915,8 @@ int devlink_nl_cmd_sb_occ_snapshot_doit(struct sk_buff *skb,
return -EOPNOTSUPP;
}
-int devlink_nl_cmd_sb_occ_max_clear_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_sb_occ_max_clear_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink *devlink = info->user_ptr[0];
const struct devlink_ops *ops = devlink->ops;
diff --git a/net/devlink/trap.c b/net/devlink/trap.c
index c26bf9b29bca..c26313e7ca08 100644
--- a/net/devlink/trap.c
+++ b/net/devlink/trap.c
@@ -414,7 +414,7 @@ static int devlink_trap_action_set(struct devlink *devlink,
info->extack);
}
-int devlink_nl_cmd_trap_set_doit(struct sk_buff *skb, struct genl_info *info)
+int devlink_nl_trap_set_doit(struct sk_buff *skb, struct genl_info *info)
{
struct netlink_ext_ack *extack = info->extack;
struct devlink *devlink = info->user_ptr[0];
@@ -684,8 +684,7 @@ static int devlink_trap_group_set(struct devlink *devlink,
return 0;
}
-int devlink_nl_cmd_trap_group_set_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_trap_group_set_doit(struct sk_buff *skb, struct genl_info *info)
{
struct netlink_ext_ack *extack = info->extack;
struct devlink *devlink = info->user_ptr[0];
@@ -926,8 +925,8 @@ devlink_trap_policer_set(struct devlink *devlink,
return 0;
}
-int devlink_nl_cmd_trap_policer_set_doit(struct sk_buff *skb,
- struct genl_info *info)
+int devlink_nl_trap_policer_set_doit(struct sk_buff *skb,
+ struct genl_info *info)
{
struct devlink_trap_policer_item *policer_item;
struct netlink_ext_ack *extack = info->extack;
diff --git a/net/dsa/Makefile b/net/dsa/Makefile
index 12e305824a96..8a1894a42552 100644
--- a/net/dsa/Makefile
+++ b/net/dsa/Makefile
@@ -8,16 +8,16 @@ endif
# the core
obj-$(CONFIG_NET_DSA) += dsa_core.o
dsa_core-y += \
+ conduit.o \
devlink.o \
dsa.o \
- master.o \
netlink.o \
port.o \
- slave.o \
switch.o \
tag.o \
tag_8021q.o \
- trace.o
+ trace.o \
+ user.o
# tagging formats
obj-$(CONFIG_NET_DSA_TAG_AR9331) += tag_ar9331.o
diff --git a/net/dsa/master.c b/net/dsa/conduit.c
index 6be89ab0cc01..3dfdb3cb47dc 100644
--- a/net/dsa/master.c
+++ b/net/dsa/conduit.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
- * Handling of a master device, switching frames via its switch fabric CPU port
+ * Handling of a conduit device, switching frames via its switch fabric CPU port
*
* Copyright (c) 2017 Savoir-faire Linux Inc.
* Vivien Didelot <vivien.didelot@savoirfairelinux.com>
@@ -11,12 +11,12 @@
#include <linux/netlink.h>
#include <net/dsa.h>
+#include "conduit.h"
#include "dsa.h"
-#include "master.h"
#include "port.h"
#include "tag.h"
-static int dsa_master_get_regs_len(struct net_device *dev)
+static int dsa_conduit_get_regs_len(struct net_device *dev)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@@ -45,8 +45,8 @@ static int dsa_master_get_regs_len(struct net_device *dev)
return ret;
}
-static void dsa_master_get_regs(struct net_device *dev,
- struct ethtool_regs *regs, void *data)
+static void dsa_conduit_get_regs(struct net_device *dev,
+ struct ethtool_regs *regs, void *data)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@@ -80,9 +80,9 @@ static void dsa_master_get_regs(struct net_device *dev,
}
}
-static void dsa_master_get_ethtool_stats(struct net_device *dev,
- struct ethtool_stats *stats,
- uint64_t *data)
+static void dsa_conduit_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats,
+ uint64_t *data)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@@ -99,9 +99,9 @@ static void dsa_master_get_ethtool_stats(struct net_device *dev,
ds->ops->get_ethtool_stats(ds, port, data + count);
}
-static void dsa_master_get_ethtool_phy_stats(struct net_device *dev,
- struct ethtool_stats *stats,
- uint64_t *data)
+static void dsa_conduit_get_ethtool_phy_stats(struct net_device *dev,
+ struct ethtool_stats *stats,
+ uint64_t *data)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@@ -125,7 +125,7 @@ static void dsa_master_get_ethtool_phy_stats(struct net_device *dev,
ds->ops->get_ethtool_phy_stats(ds, port, data + count);
}
-static int dsa_master_get_sset_count(struct net_device *dev, int sset)
+static int dsa_conduit_get_sset_count(struct net_device *dev, int sset)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@@ -147,8 +147,8 @@ static int dsa_master_get_sset_count(struct net_device *dev, int sset)
return count;
}
-static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset,
- uint8_t *data)
+static void dsa_conduit_get_strings(struct net_device *dev, uint32_t stringset,
+ uint8_t *data)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
const struct ethtool_ops *ops = cpu_dp->orig_ethtool_ops;
@@ -195,12 +195,12 @@ static void dsa_master_get_strings(struct net_device *dev, uint32_t stringset,
}
}
-/* Deny PTP operations on master if there is at least one switch in the tree
+/* Deny PTP operations on conduit if there is at least one switch in the tree
* that is PTP capable.
*/
-int __dsa_master_hwtstamp_validate(struct net_device *dev,
- const struct kernel_hwtstamp_config *config,
- struct netlink_ext_ack *extack)
+int __dsa_conduit_hwtstamp_validate(struct net_device *dev,
+ const struct kernel_hwtstamp_config *config,
+ struct netlink_ext_ack *extack)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
struct dsa_switch *ds = cpu_dp->ds;
@@ -212,7 +212,7 @@ int __dsa_master_hwtstamp_validate(struct net_device *dev,
list_for_each_entry(dp, &dst->ports, list) {
if (dsa_port_supports_hwtstamp(dp)) {
NL_SET_ERR_MSG(extack,
- "HW timestamping not allowed on DSA master when switch supports the operation");
+ "HW timestamping not allowed on DSA conduit when switch supports the operation");
return -EBUSY;
}
}
@@ -220,7 +220,7 @@ int __dsa_master_hwtstamp_validate(struct net_device *dev,
return 0;
}
-static int dsa_master_ethtool_setup(struct net_device *dev)
+static int dsa_conduit_ethtool_setup(struct net_device *dev)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
struct dsa_switch *ds = cpu_dp->ds;
@@ -237,19 +237,19 @@ static int dsa_master_ethtool_setup(struct net_device *dev)
if (cpu_dp->orig_ethtool_ops)
memcpy(ops, cpu_dp->orig_ethtool_ops, sizeof(*ops));
- ops->get_regs_len = dsa_master_get_regs_len;
- ops->get_regs = dsa_master_get_regs;
- ops->get_sset_count = dsa_master_get_sset_count;
- ops->get_ethtool_stats = dsa_master_get_ethtool_stats;
- ops->get_strings = dsa_master_get_strings;
- ops->get_ethtool_phy_stats = dsa_master_get_ethtool_phy_stats;
+ ops->get_regs_len = dsa_conduit_get_regs_len;
+ ops->get_regs = dsa_conduit_get_regs;
+ ops->get_sset_count = dsa_conduit_get_sset_count;
+ ops->get_ethtool_stats = dsa_conduit_get_ethtool_stats;
+ ops->get_strings = dsa_conduit_get_strings;
+ ops->get_ethtool_phy_stats = dsa_conduit_get_ethtool_phy_stats;
dev->ethtool_ops = ops;
return 0;
}
-static void dsa_master_ethtool_teardown(struct net_device *dev)
+static void dsa_conduit_ethtool_teardown(struct net_device *dev)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
@@ -260,16 +260,16 @@ static void dsa_master_ethtool_teardown(struct net_device *dev)
cpu_dp->orig_ethtool_ops = NULL;
}
-/* Keep the master always promiscuous if the tagging protocol requires that
+/* Keep the conduit always promiscuous if the tagging protocol requires that
* (garbles MAC DA) or if it doesn't support unicast filtering, case in which
* it would revert to promiscuous mode as soon as we call dev_uc_add() on it
* anyway.
*/
-static void dsa_master_set_promiscuity(struct net_device *dev, int inc)
+static void dsa_conduit_set_promiscuity(struct net_device *dev, int inc)
{
const struct dsa_device_ops *ops = dev->dsa_ptr->tag_ops;
- if ((dev->priv_flags & IFF_UNICAST_FLT) && !ops->promisc_on_master)
+ if ((dev->priv_flags & IFF_UNICAST_FLT) && !ops->promisc_on_conduit)
return;
ASSERT_RTNL();
@@ -336,17 +336,17 @@ out:
}
static DEVICE_ATTR_RW(tagging);
-static struct attribute *dsa_slave_attrs[] = {
+static struct attribute *dsa_user_attrs[] = {
&dev_attr_tagging.attr,
NULL
};
static const struct attribute_group dsa_group = {
.name = "dsa",
- .attrs = dsa_slave_attrs,
+ .attrs = dsa_user_attrs,
};
-static void dsa_master_reset_mtu(struct net_device *dev)
+static void dsa_conduit_reset_mtu(struct net_device *dev)
{
int err;
@@ -356,7 +356,7 @@ static void dsa_master_reset_mtu(struct net_device *dev)
"Unable to reset MTU to exclude DSA overheads\n");
}
-int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
+int dsa_conduit_setup(struct net_device *dev, struct dsa_port *cpu_dp)
{
const struct dsa_device_ops *tag_ops = cpu_dp->tag_ops;
struct dsa_switch *ds = cpu_dp->ds;
@@ -365,7 +365,7 @@ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
mtu = ETH_DATA_LEN + dsa_tag_protocol_overhead(tag_ops);
- /* The DSA master must use SET_NETDEV_DEV for this to work. */
+ /* The DSA conduit must use SET_NETDEV_DEV for this to work. */
if (!netif_is_lag_master(dev)) {
consumer_link = device_link_add(ds->dev, dev->dev.parent,
DL_FLAG_AUTOREMOVE_CONSUMER);
@@ -376,7 +376,7 @@ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
}
/* The switch driver may not implement ->port_change_mtu(), case in
- * which dsa_slave_change_mtu() will not update the master MTU either,
+ * which dsa_user_change_mtu() will not update the conduit MTU either,
* so we need to do that here.
*/
ret = dev_set_mtu(dev, mtu);
@@ -392,9 +392,9 @@ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
dev->dsa_ptr = cpu_dp;
- dsa_master_set_promiscuity(dev, 1);
+ dsa_conduit_set_promiscuity(dev, 1);
- ret = dsa_master_ethtool_setup(dev);
+ ret = dsa_conduit_ethtool_setup(dev);
if (ret)
goto out_err_reset_promisc;
@@ -405,18 +405,18 @@ int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)
return ret;
out_err_ethtool_teardown:
- dsa_master_ethtool_teardown(dev);
+ dsa_conduit_ethtool_teardown(dev);
out_err_reset_promisc:
- dsa_master_set_promiscuity(dev, -1);
+ dsa_conduit_set_promiscuity(dev, -1);
return ret;
}
-void dsa_master_teardown(struct net_device *dev)
+void dsa_conduit_teardown(struct net_device *dev)
{
sysfs_remove_group(&dev->dev.kobj, &dsa_group);
- dsa_master_ethtool_teardown(dev);
- dsa_master_reset_mtu(dev);
- dsa_master_set_promiscuity(dev, -1);
+ dsa_conduit_ethtool_teardown(dev);
+ dsa_conduit_reset_mtu(dev);
+ dsa_conduit_set_promiscuity(dev, -1);
dev->dsa_ptr = NULL;
@@ -427,40 +427,40 @@ void dsa_master_teardown(struct net_device *dev)
wmb();
}
-int dsa_master_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp,
- struct netdev_lag_upper_info *uinfo,
- struct netlink_ext_ack *extack)
+int dsa_conduit_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp,
+ struct netdev_lag_upper_info *uinfo,
+ struct netlink_ext_ack *extack)
{
- bool master_setup = false;
+ bool conduit_setup = false;
int err;
if (!netdev_uses_dsa(lag_dev)) {
- err = dsa_master_setup(lag_dev, cpu_dp);
+ err = dsa_conduit_setup(lag_dev, cpu_dp);
if (err)
return err;
- master_setup = true;
+ conduit_setup = true;
}
err = dsa_port_lag_join(cpu_dp, lag_dev, uinfo, extack);
if (err) {
NL_SET_ERR_MSG_WEAK_MOD(extack, "CPU port failed to join LAG");
- goto out_master_teardown;
+ goto out_conduit_teardown;
}
return 0;
-out_master_teardown:
- if (master_setup)
- dsa_master_teardown(lag_dev);
+out_conduit_teardown:
+ if (conduit_setup)
+ dsa_conduit_teardown(lag_dev);
return err;
}
-/* Tear down a master if there isn't any other user port on it,
+/* Tear down a conduit if there isn't any other user port on it,
* optionally also destroying LAG information.
*/
-void dsa_master_lag_teardown(struct net_device *lag_dev,
- struct dsa_port *cpu_dp)
+void dsa_conduit_lag_teardown(struct net_device *lag_dev,
+ struct dsa_port *cpu_dp)
{
struct net_device *upper;
struct list_head *iter;
@@ -468,8 +468,8 @@ void dsa_master_lag_teardown(struct net_device *lag_dev,
dsa_port_lag_leave(cpu_dp, lag_dev);
netdev_for_each_upper_dev_rcu(lag_dev, upper, iter)
- if (dsa_slave_dev_check(upper))
+ if (dsa_user_dev_check(upper))
return;
- dsa_master_teardown(lag_dev);
+ dsa_conduit_teardown(lag_dev);
}
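
Alongside the file rename, the tagger flag promisc_on_master becomes promisc_on_conduit (consumed by dsa_conduit_set_promiscuity() above). A sketch of a tagging driver setting the renamed flag; the driver name, protocol value, headroom and callbacks are placeholders, only the struct fields are real:

/* Placeholder tagger, only to illustrate the renamed flag. */
#include <net/dsa.h>

static struct sk_buff *example_tag_xmit(struct sk_buff *skb,
					struct net_device *dev)
{
	return skb;	/* a real tagger prepends its header here */
}

static struct sk_buff *example_tag_rcv(struct sk_buff *skb,
				       struct net_device *dev)
{
	return skb;	/* a real tagger parses and strips its header here */
}

static const struct dsa_device_ops example_tag_ops = {
	.name			= "example",
	.proto			= DSA_TAG_PROTO_NONE,
	.xmit			= example_tag_xmit,
	.rcv			= example_tag_rcv,
	.needed_headroom	= 4,
	/* tag overwrites the MAC DA, keep the conduit promiscuous */
	.promisc_on_conduit	= true,
};
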
diff --git a/net/dsa/conduit.h b/net/dsa/conduit.h
new file mode 100644
index 000000000000..31f8834f54bb
--- /dev/null
+++ b/net/dsa/conduit.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef __DSA_CONDUIT_H
+#define __DSA_CONDUIT_H
+
+struct dsa_port;
+struct net_device;
+struct netdev_lag_upper_info;
+struct netlink_ext_ack;
+
+int dsa_conduit_setup(struct net_device *dev, struct dsa_port *cpu_dp);
+void dsa_conduit_teardown(struct net_device *dev);
+int dsa_conduit_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp,
+ struct netdev_lag_upper_info *uinfo,
+ struct netlink_ext_ack *extack);
+void dsa_conduit_lag_teardown(struct net_device *lag_dev,
+ struct dsa_port *cpu_dp);
+int __dsa_conduit_hwtstamp_validate(struct net_device *dev,
+ const struct kernel_hwtstamp_config *config,
+ struct netlink_ext_ack *extack);
+
+#endif
diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
index ccbdb98109f8..ac7be864e80d 100644
--- a/net/dsa/dsa.c
+++ b/net/dsa/dsa.c
@@ -20,14 +20,14 @@
#include <net/dsa_stubs.h>
#include <net/sch_generic.h>
+#include "conduit.h"
#include "devlink.h"
#include "dsa.h"
-#include "master.h"
#include "netlink.h"
#include "port.h"
-#include "slave.h"
#include "switch.h"
#include "tag.h"
+#include "user.h"
#define DSA_MAX_NUM_OFFLOADING_BRIDGES BITS_PER_LONG
@@ -365,18 +365,18 @@ static struct dsa_port *dsa_tree_find_first_cpu(struct dsa_switch_tree *dst)
return NULL;
}
-struct net_device *dsa_tree_find_first_master(struct dsa_switch_tree *dst)
+struct net_device *dsa_tree_find_first_conduit(struct dsa_switch_tree *dst)
{
struct device_node *ethernet;
- struct net_device *master;
+ struct net_device *conduit;
struct dsa_port *cpu_dp;
cpu_dp = dsa_tree_find_first_cpu(dst);
ethernet = of_parse_phandle(cpu_dp->dn, "ethernet", 0);
- master = of_find_net_device_by_node(ethernet);
+ conduit = of_find_net_device_by_node(ethernet);
of_node_put(ethernet);
- return master;
+ return conduit;
}
/* Assign the default CPU port (the first one in the tree) to all ports of the
@@ -517,7 +517,7 @@ static int dsa_port_setup(struct dsa_port *dp)
break;
case DSA_PORT_TYPE_USER:
of_get_mac_address(dp->dn, dp->mac);
- err = dsa_slave_create(dp);
+ err = dsa_user_create(dp);
break;
}
@@ -554,9 +554,9 @@ static void dsa_port_teardown(struct dsa_port *dp)
dsa_shared_port_link_unregister_of(dp);
break;
case DSA_PORT_TYPE_USER:
- if (dp->slave) {
- dsa_slave_destroy(dp->slave);
- dp->slave = NULL;
+ if (dp->user) {
+ dsa_user_destroy(dp->user);
+ dp->user = NULL;
}
break;
}
@@ -632,9 +632,9 @@ static int dsa_switch_setup(struct dsa_switch *ds)
if (ds->setup)
return 0;
- /* Initialize ds->phys_mii_mask before registering the slave MDIO bus
+ /* Initialize ds->phys_mii_mask before registering the user MDIO bus
* driver and before ops->setup() has run, since the switch drivers and
- * the slave MDIO bus driver rely on these values for probing PHY
+ * the user MDIO bus driver rely on these values for probing PHY
* devices or not
*/
ds->phys_mii_mask |= dsa_user_ports(ds);
@@ -657,21 +657,21 @@ static int dsa_switch_setup(struct dsa_switch *ds)
if (err)
goto teardown;
- if (!ds->slave_mii_bus && ds->ops->phy_read) {
- ds->slave_mii_bus = mdiobus_alloc();
- if (!ds->slave_mii_bus) {
+ if (!ds->user_mii_bus && ds->ops->phy_read) {
+ ds->user_mii_bus = mdiobus_alloc();
+ if (!ds->user_mii_bus) {
err = -ENOMEM;
goto teardown;
}
- dsa_slave_mii_bus_init(ds);
+ dsa_user_mii_bus_init(ds);
dn = of_get_child_by_name(ds->dev->of_node, "mdio");
- err = of_mdiobus_register(ds->slave_mii_bus, dn);
+ err = of_mdiobus_register(ds->user_mii_bus, dn);
of_node_put(dn);
if (err < 0)
- goto free_slave_mii_bus;
+ goto free_user_mii_bus;
}
dsa_switch_devlink_register(ds);
@@ -679,9 +679,9 @@ static int dsa_switch_setup(struct dsa_switch *ds)
ds->setup = true;
return 0;
-free_slave_mii_bus:
- if (ds->slave_mii_bus && ds->ops->phy_read)
- mdiobus_free(ds->slave_mii_bus);
+free_user_mii_bus:
+ if (ds->user_mii_bus && ds->ops->phy_read)
+ mdiobus_free(ds->user_mii_bus);
teardown:
if (ds->ops->teardown)
ds->ops->teardown(ds);
@@ -699,10 +699,10 @@ static void dsa_switch_teardown(struct dsa_switch *ds)
dsa_switch_devlink_unregister(ds);
- if (ds->slave_mii_bus && ds->ops->phy_read) {
- mdiobus_unregister(ds->slave_mii_bus);
- mdiobus_free(ds->slave_mii_bus);
- ds->slave_mii_bus = NULL;
+ if (ds->user_mii_bus && ds->ops->phy_read) {
+ mdiobus_unregister(ds->user_mii_bus);
+ mdiobus_free(ds->user_mii_bus);
+ ds->user_mii_bus = NULL;
}
dsa_switch_teardown_tag_protocol(ds);
@@ -793,7 +793,7 @@ static int dsa_tree_setup_switches(struct dsa_switch_tree *dst)
return err;
}
-static int dsa_tree_setup_master(struct dsa_switch_tree *dst)
+static int dsa_tree_setup_conduit(struct dsa_switch_tree *dst)
{
struct dsa_port *cpu_dp;
int err = 0;
@@ -801,18 +801,18 @@ static int dsa_tree_setup_master(struct dsa_switch_tree *dst)
rtnl_lock();
dsa_tree_for_each_cpu_port(cpu_dp, dst) {
- struct net_device *master = cpu_dp->master;
- bool admin_up = (master->flags & IFF_UP) &&
- !qdisc_tx_is_noop(master);
+ struct net_device *conduit = cpu_dp->conduit;
+ bool admin_up = (conduit->flags & IFF_UP) &&
+ !qdisc_tx_is_noop(conduit);
- err = dsa_master_setup(master, cpu_dp);
+ err = dsa_conduit_setup(conduit, cpu_dp);
if (err)
break;
- /* Replay master state event */
- dsa_tree_master_admin_state_change(dst, master, admin_up);
- dsa_tree_master_oper_state_change(dst, master,
- netif_oper_up(master));
+ /* Replay conduit state event */
+ dsa_tree_conduit_admin_state_change(dst, conduit, admin_up);
+ dsa_tree_conduit_oper_state_change(dst, conduit,
+ netif_oper_up(conduit));
}
rtnl_unlock();
@@ -820,22 +820,22 @@ static int dsa_tree_setup_master(struct dsa_switch_tree *dst)
return err;
}
-static void dsa_tree_teardown_master(struct dsa_switch_tree *dst)
+static void dsa_tree_teardown_conduit(struct dsa_switch_tree *dst)
{
struct dsa_port *cpu_dp;
rtnl_lock();
dsa_tree_for_each_cpu_port(cpu_dp, dst) {
- struct net_device *master = cpu_dp->master;
+ struct net_device *conduit = cpu_dp->conduit;
/* Synthesizing an "admin down" state is sufficient for
- * the switches to get a notification if the master is
+ * the switches to get a notification if the conduit is
* currently up and running.
*/
- dsa_tree_master_admin_state_change(dst, master, false);
+ dsa_tree_conduit_admin_state_change(dst, conduit, false);
- dsa_master_teardown(master);
+ dsa_conduit_teardown(conduit);
}
rtnl_unlock();
@@ -894,13 +894,13 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst)
if (err)
goto teardown_switches;
- err = dsa_tree_setup_master(dst);
+ err = dsa_tree_setup_conduit(dst);
if (err)
goto teardown_ports;
err = dsa_tree_setup_lags(dst);
if (err)
- goto teardown_master;
+ goto teardown_conduit;
dst->setup = true;
@@ -908,8 +908,8 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst)
return 0;
-teardown_master:
- dsa_tree_teardown_master(dst);
+teardown_conduit:
+ dsa_tree_teardown_conduit(dst);
teardown_ports:
dsa_tree_teardown_ports(dst);
teardown_switches:
@@ -929,7 +929,7 @@ static void dsa_tree_teardown(struct dsa_switch_tree *dst)
dsa_tree_teardown_lags(dst);
- dsa_tree_teardown_master(dst);
+ dsa_tree_teardown_conduit(dst);
dsa_tree_teardown_ports(dst);
@@ -978,7 +978,7 @@ out_disconnect:
return err;
}
-/* Since the dsa/tagging sysfs device attribute is per master, the assumption
+/* Since the dsa/tagging sysfs device attribute is per conduit, the assumption
* is that all DSA switches within a tree share the same tagger, otherwise
* they would have formed disjoint trees (different "dsa,member" values).
*/
@@ -999,10 +999,10 @@ int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
* restriction, there needs to be another mutex which serializes this.
*/
dsa_tree_for_each_user_port(dp, dst) {
- if (dsa_port_to_master(dp)->flags & IFF_UP)
+ if (dsa_port_to_conduit(dp)->flags & IFF_UP)
goto out_unlock;
- if (dp->slave->flags & IFF_UP)
+ if (dp->user->flags & IFF_UP)
goto out_unlock;
}
@@ -1028,62 +1028,62 @@ out_unlock:
return err;
}
-static void dsa_tree_master_state_change(struct dsa_switch_tree *dst,
- struct net_device *master)
+static void dsa_tree_conduit_state_change(struct dsa_switch_tree *dst,
+ struct net_device *conduit)
{
- struct dsa_notifier_master_state_info info;
- struct dsa_port *cpu_dp = master->dsa_ptr;
+ struct dsa_notifier_conduit_state_info info;
+ struct dsa_port *cpu_dp = conduit->dsa_ptr;
- info.master = master;
- info.operational = dsa_port_master_is_operational(cpu_dp);
+ info.conduit = conduit;
+ info.operational = dsa_port_conduit_is_operational(cpu_dp);
- dsa_tree_notify(dst, DSA_NOTIFIER_MASTER_STATE_CHANGE, &info);
+ dsa_tree_notify(dst, DSA_NOTIFIER_CONDUIT_STATE_CHANGE, &info);
}
-void dsa_tree_master_admin_state_change(struct dsa_switch_tree *dst,
- struct net_device *master,
- bool up)
+void dsa_tree_conduit_admin_state_change(struct dsa_switch_tree *dst,
+ struct net_device *conduit,
+ bool up)
{
- struct dsa_port *cpu_dp = master->dsa_ptr;
+ struct dsa_port *cpu_dp = conduit->dsa_ptr;
bool notify = false;
- /* Don't keep track of admin state on LAG DSA masters,
- * but rather just of physical DSA masters
+ /* Don't keep track of admin state on LAG DSA conduits,
+ * but rather just of physical DSA conduits
*/
- if (netif_is_lag_master(master))
+ if (netif_is_lag_master(conduit))
return;
- if ((dsa_port_master_is_operational(cpu_dp)) !=
- (up && cpu_dp->master_oper_up))
+ if ((dsa_port_conduit_is_operational(cpu_dp)) !=
+ (up && cpu_dp->conduit_oper_up))
notify = true;
- cpu_dp->master_admin_up = up;
+ cpu_dp->conduit_admin_up = up;
if (notify)
- dsa_tree_master_state_change(dst, master);
+ dsa_tree_conduit_state_change(dst, conduit);
}
-void dsa_tree_master_oper_state_change(struct dsa_switch_tree *dst,
- struct net_device *master,
- bool up)
+void dsa_tree_conduit_oper_state_change(struct dsa_switch_tree *dst,
+ struct net_device *conduit,
+ bool up)
{
- struct dsa_port *cpu_dp = master->dsa_ptr;
+ struct dsa_port *cpu_dp = conduit->dsa_ptr;
bool notify = false;
- /* Don't keep track of oper state on LAG DSA masters,
- * but rather just of physical DSA masters
+ /* Don't keep track of oper state on LAG DSA conduits,
+ * but rather just of physical DSA conduits
*/
- if (netif_is_lag_master(master))
+ if (netif_is_lag_master(conduit))
return;
- if ((dsa_port_master_is_operational(cpu_dp)) !=
- (cpu_dp->master_admin_up && up))
+ if ((dsa_port_conduit_is_operational(cpu_dp)) !=
+ (cpu_dp->conduit_admin_up && up))
notify = true;
- cpu_dp->master_oper_up = up;
+ cpu_dp->conduit_oper_up = up;
if (notify)
- dsa_tree_master_state_change(dst, master);
+ dsa_tree_conduit_state_change(dst, conduit);
}
static struct dsa_port *dsa_port_touch(struct dsa_switch *ds, int index)
@@ -1129,7 +1129,7 @@ static int dsa_port_parse_dsa(struct dsa_port *dp)
}
static enum dsa_tag_protocol dsa_get_tag_protocol(struct dsa_port *dp,
- struct net_device *master)
+ struct net_device *conduit)
{
enum dsa_tag_protocol tag_protocol = DSA_TAG_PROTO_NONE;
struct dsa_switch *mds, *ds = dp->ds;
@@ -1140,21 +1140,21 @@ static enum dsa_tag_protocol dsa_get_tag_protocol(struct dsa_port *dp,
* happens the switch driver may want to know if its tagging protocol
* is going to work in such a configuration.
*/
- if (dsa_slave_dev_check(master)) {
- mdp = dsa_slave_to_port(master);
+ if (dsa_user_dev_check(conduit)) {
+ mdp = dsa_user_to_port(conduit);
mds = mdp->ds;
mdp_upstream = dsa_upstream_port(mds, mdp->index);
tag_protocol = mds->ops->get_tag_protocol(mds, mdp_upstream,
DSA_TAG_PROTO_NONE);
}
- /* If the master device is not itself a DSA slave in a disjoint DSA
+ /* If the conduit device is not itself a DSA user in a disjoint DSA
* tree, then return immediately.
*/
return ds->ops->get_tag_protocol(ds, dp->index, tag_protocol);
}
-static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master,
+static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *conduit,
const char *user_protocol)
{
const struct dsa_device_ops *tag_ops = NULL;
@@ -1163,7 +1163,7 @@ static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master,
enum dsa_tag_protocol default_proto;
/* Find out which protocol the switch would prefer. */
- default_proto = dsa_get_tag_protocol(dp, master);
+ default_proto = dsa_get_tag_protocol(dp, conduit);
if (dst->default_proto) {
if (dst->default_proto != default_proto) {
dev_err(ds->dev,
@@ -1218,7 +1218,7 @@ static int dsa_port_parse_cpu(struct dsa_port *dp, struct net_device *master,
dst->tag_ops = tag_ops;
}
- dp->master = master;
+ dp->conduit = conduit;
dp->type = DSA_PORT_TYPE_CPU;
dsa_port_set_tag_protocol(dp, dst->tag_ops);
dp->dst = dst;
@@ -1248,16 +1248,16 @@ static int dsa_port_parse_of(struct dsa_port *dp, struct device_node *dn)
dp->dn = dn;
if (ethernet) {
- struct net_device *master;
+ struct net_device *conduit;
const char *user_protocol;
- master = of_find_net_device_by_node(ethernet);
+ conduit = of_find_net_device_by_node(ethernet);
of_node_put(ethernet);
- if (!master)
+ if (!conduit)
return -EPROBE_DEFER;
user_protocol = of_get_property(dn, "dsa-tag-protocol", NULL);
- return dsa_port_parse_cpu(dp, master, user_protocol);
+ return dsa_port_parse_cpu(dp, conduit, user_protocol);
}
if (link)
@@ -1412,15 +1412,15 @@ static int dsa_port_parse(struct dsa_port *dp, const char *name,
struct device *dev)
{
if (!strcmp(name, "cpu")) {
- struct net_device *master;
+ struct net_device *conduit;
- master = dsa_dev_to_net_device(dev);
- if (!master)
+ conduit = dsa_dev_to_net_device(dev);
+ if (!conduit)
return -EPROBE_DEFER;
- dev_put(master);
+ dev_put(conduit);
- return dsa_port_parse_cpu(dp, master, NULL);
+ return dsa_port_parse_cpu(dp, conduit, NULL);
}
if (!strcmp(name, "dsa"))
@@ -1566,14 +1566,14 @@ void dsa_unregister_switch(struct dsa_switch *ds)
}
EXPORT_SYMBOL_GPL(dsa_unregister_switch);
-/* If the DSA master chooses to unregister its net_device on .shutdown, DSA is
+/* If the DSA conduit chooses to unregister its net_device on .shutdown, DSA is
* blocking that operation from completion, due to the dev_hold taken inside
- * netdev_upper_dev_link. Unlink the DSA slave interfaces from being uppers of
- * the DSA master, so that the system can reboot successfully.
+ * netdev_upper_dev_link. Unlink the DSA user interfaces from being uppers of
+ * the DSA conduit, so that the system can reboot successfully.
*/
void dsa_switch_shutdown(struct dsa_switch *ds)
{
- struct net_device *master, *slave_dev;
+ struct net_device *conduit, *user_dev;
struct dsa_port *dp;
mutex_lock(&dsa2_mutex);
@@ -1584,17 +1584,17 @@ void dsa_switch_shutdown(struct dsa_switch *ds)
rtnl_lock();
dsa_switch_for_each_user_port(dp, ds) {
- master = dsa_port_to_master(dp);
- slave_dev = dp->slave;
+ conduit = dsa_port_to_conduit(dp);
+ user_dev = dp->user;
- netdev_upper_dev_unlink(master, slave_dev);
+ netdev_upper_dev_unlink(conduit, user_dev);
}
- /* Disconnect from further netdevice notifiers on the master,
+ /* Disconnect from further netdevice notifiers on the conduit,
* since netdev_uses_dsa() will now return false.
*/
dsa_switch_for_each_cpu_port(dp, ds)
- dp->master->dsa_ptr = NULL;
+ dp->conduit->dsa_ptr = NULL;
rtnl_unlock();
out:
@@ -1605,7 +1605,7 @@ EXPORT_SYMBOL_GPL(dsa_switch_shutdown);
#ifdef CONFIG_PM_SLEEP
static bool dsa_port_is_initialized(const struct dsa_port *dp)
{
- return dp->type == DSA_PORT_TYPE_USER && dp->slave;
+ return dp->type == DSA_PORT_TYPE_USER && dp->user;
}
int dsa_switch_suspend(struct dsa_switch *ds)
@@ -1613,12 +1613,12 @@ int dsa_switch_suspend(struct dsa_switch *ds)
struct dsa_port *dp;
int ret = 0;
- /* Suspend slave network devices */
+ /* Suspend user network devices */
dsa_switch_for_each_port(dp, ds) {
if (!dsa_port_is_initialized(dp))
continue;
- ret = dsa_slave_suspend(dp->slave);
+ ret = dsa_user_suspend(dp->user);
if (ret)
return ret;
}
@@ -1641,12 +1641,12 @@ int dsa_switch_resume(struct dsa_switch *ds)
if (ret)
return ret;
- /* Resume slave network devices */
+ /* Resume user network devices */
dsa_switch_for_each_port(dp, ds) {
if (!dsa_port_is_initialized(dp))
continue;
- ret = dsa_slave_resume(dp->slave);
+ ret = dsa_user_resume(dp->user);
if (ret)
return ret;
}
@@ -1658,10 +1658,10 @@ EXPORT_SYMBOL_GPL(dsa_switch_resume);
struct dsa_port *dsa_port_from_netdev(struct net_device *netdev)
{
- if (!netdev || !dsa_slave_dev_check(netdev))
+ if (!netdev || !dsa_user_dev_check(netdev))
return ERR_PTR(-ENODEV);
- return dsa_slave_to_port(netdev);
+ return dsa_user_to_port(netdev);
}
EXPORT_SYMBOL_GPL(dsa_port_from_netdev);
@@ -1726,7 +1726,7 @@ bool dsa_mdb_present_in_other_db(struct dsa_switch *ds, int port,
EXPORT_SYMBOL_GPL(dsa_mdb_present_in_other_db);
static const struct dsa_stubs __dsa_stubs = {
- .master_hwtstamp_validate = __dsa_master_hwtstamp_validate,
+ .conduit_hwtstamp_validate = __dsa_conduit_hwtstamp_validate,
};
static void dsa_register_stubs(void)
@@ -1748,7 +1748,7 @@ static int __init dsa_init_module(void)
if (!dsa_owq)
return -ENOMEM;
- rc = dsa_slave_register_notifier();
+ rc = dsa_user_register_notifier();
if (rc)
goto register_notifier_fail;
@@ -1763,7 +1763,7 @@ static int __init dsa_init_module(void)
return 0;
netlink_register_fail:
- dsa_slave_unregister_notifier();
+ dsa_user_unregister_notifier();
dev_remove_pack(&dsa_pack_type);
register_notifier_fail:
destroy_workqueue(dsa_owq);
@@ -1778,7 +1778,7 @@ static void __exit dsa_cleanup_module(void)
rtnl_link_unregister(&dsa_link_ops);
- dsa_slave_unregister_notifier();
+ dsa_user_unregister_notifier();
dev_remove_pack(&dsa_pack_type);
destroy_workqueue(dsa_owq);
}
diff --git a/net/dsa/dsa.h b/net/dsa/dsa.h
index b7e17ae1094d..3cc7823e9ef3 100644
--- a/net/dsa/dsa.h
+++ b/net/dsa/dsa.h
@@ -21,16 +21,16 @@ void dsa_lag_map(struct dsa_switch_tree *dst, struct dsa_lag *lag);
void dsa_lag_unmap(struct dsa_switch_tree *dst, struct dsa_lag *lag);
struct dsa_lag *dsa_tree_lag_find(struct dsa_switch_tree *dst,
const struct net_device *lag_dev);
-struct net_device *dsa_tree_find_first_master(struct dsa_switch_tree *dst);
+struct net_device *dsa_tree_find_first_conduit(struct dsa_switch_tree *dst);
int dsa_tree_change_tag_proto(struct dsa_switch_tree *dst,
const struct dsa_device_ops *tag_ops,
const struct dsa_device_ops *old_tag_ops);
-void dsa_tree_master_admin_state_change(struct dsa_switch_tree *dst,
- struct net_device *master,
+void dsa_tree_conduit_admin_state_change(struct dsa_switch_tree *dst,
+ struct net_device *conduit,
+ bool up);
+void dsa_tree_conduit_oper_state_change(struct dsa_switch_tree *dst,
+ struct net_device *conduit,
bool up);
-void dsa_tree_master_oper_state_change(struct dsa_switch_tree *dst,
- struct net_device *master,
- bool up);
unsigned int dsa_bridge_num_get(const struct net_device *bridge_dev, int max);
void dsa_bridge_num_put(const struct net_device *bridge_dev,
unsigned int bridge_num);
diff --git a/net/dsa/master.h b/net/dsa/master.h
deleted file mode 100644
index 76e39d3ec909..000000000000
--- a/net/dsa/master.h
+++ /dev/null
@@ -1,22 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-
-#ifndef __DSA_MASTER_H
-#define __DSA_MASTER_H
-
-struct dsa_port;
-struct net_device;
-struct netdev_lag_upper_info;
-struct netlink_ext_ack;
-
-int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp);
-void dsa_master_teardown(struct net_device *dev);
-int dsa_master_lag_setup(struct net_device *lag_dev, struct dsa_port *cpu_dp,
- struct netdev_lag_upper_info *uinfo,
- struct netlink_ext_ack *extack);
-void dsa_master_lag_teardown(struct net_device *lag_dev,
- struct dsa_port *cpu_dp);
-int __dsa_master_hwtstamp_validate(struct net_device *dev,
- const struct kernel_hwtstamp_config *config,
- struct netlink_ext_ack *extack);
-
-#endif
diff --git a/net/dsa/netlink.c b/net/dsa/netlink.c
index bd4bbaf851de..1332e56349e5 100644
--- a/net/dsa/netlink.c
+++ b/net/dsa/netlink.c
@@ -5,10 +5,10 @@
#include <net/rtnetlink.h>
#include "netlink.h"
-#include "slave.h"
+#include "user.h"
static const struct nla_policy dsa_policy[IFLA_DSA_MAX + 1] = {
- [IFLA_DSA_MASTER] = { .type = NLA_U32 },
+ [IFLA_DSA_CONDUIT] = { .type = NLA_U32 },
};
static int dsa_changelink(struct net_device *dev, struct nlattr *tb[],
@@ -20,15 +20,15 @@ static int dsa_changelink(struct net_device *dev, struct nlattr *tb[],
if (!data)
return 0;
- if (data[IFLA_DSA_MASTER]) {
- u32 ifindex = nla_get_u32(data[IFLA_DSA_MASTER]);
- struct net_device *master;
+ if (data[IFLA_DSA_CONDUIT]) {
+ u32 ifindex = nla_get_u32(data[IFLA_DSA_CONDUIT]);
+ struct net_device *conduit;
- master = __dev_get_by_index(dev_net(dev), ifindex);
- if (!master)
+ conduit = __dev_get_by_index(dev_net(dev), ifindex);
+ if (!conduit)
return -EINVAL;
- err = dsa_slave_change_master(dev, master, extack);
+ err = dsa_user_change_conduit(dev, conduit, extack);
if (err)
return err;
}
@@ -38,15 +38,15 @@ static int dsa_changelink(struct net_device *dev, struct nlattr *tb[],
static size_t dsa_get_size(const struct net_device *dev)
{
- return nla_total_size(sizeof(u32)) + /* IFLA_DSA_MASTER */
+ return nla_total_size(sizeof(u32)) + /* IFLA_DSA_CONDUIT */
0;
}
static int dsa_fill_info(struct sk_buff *skb, const struct net_device *dev)
{
- struct net_device *master = dsa_slave_to_master(dev);
+ struct net_device *conduit = dsa_user_to_conduit(dev);
- if (nla_put_u32(skb, IFLA_DSA_MASTER, master->ifindex))
+ if (nla_put_u32(skb, IFLA_DSA_CONDUIT, conduit->ifindex))
return -EMSGSIZE;
return 0;
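
On the uapi side the link attribute is now IFLA_DSA_CONDUIT (with IFLA_DSA_MASTER kept as an alias). A hedged userspace sketch of the request that dsa_changelink() above parses, nesting the attribute under IFLA_LINKINFO/IFLA_INFO_DATA; it uses libmnl, omits ACK handling, and assumes kernel headers new enough to define IFLA_DSA_CONDUIT:

/* Illustrative userspace sketch (not from this patch): ask the kernel to
 * move a DSA user port to a different conduit via RTM_NEWLINK.
 */
#include <libmnl/libmnl.h>
#include <linux/if_link.h>
#include <linux/rtnetlink.h>
#include <net/if.h>
#include <time.h>

static int dsa_user_set_conduit(const char *user_ifname,
				const char *conduit_ifname)
{
	char buf[MNL_SOCKET_BUFFER_SIZE];
	struct nlattr *linkinfo, *infodata;
	struct mnl_socket *nl;
	struct nlmsghdr *nlh;
	struct ifinfomsg *ifm;
	int ret;

	nlh = mnl_nlmsg_put_header(buf);
	nlh->nlmsg_type = RTM_NEWLINK;
	nlh->nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	nlh->nlmsg_seq = time(NULL);

	ifm = mnl_nlmsg_put_extra_header(nlh, sizeof(*ifm));
	ifm->ifi_family = AF_UNSPEC;
	ifm->ifi_index = if_nametoindex(user_ifname);

	linkinfo = mnl_attr_nest_start(nlh, IFLA_LINKINFO);
	mnl_attr_put_strz(nlh, IFLA_INFO_KIND, "dsa");
	infodata = mnl_attr_nest_start(nlh, IFLA_INFO_DATA);
	mnl_attr_put_u32(nlh, IFLA_DSA_CONDUIT,
			 if_nametoindex(conduit_ifname));
	mnl_attr_nest_end(nlh, infodata);
	mnl_attr_nest_end(nlh, linkinfo);

	nl = mnl_socket_open(NETLINK_ROUTE);
	if (!nl)
		return -1;
	if (mnl_socket_bind(nl, 0, MNL_SOCKET_AUTOPID) < 0 ||
	    mnl_socket_sendto(nl, nlh, nlh->nlmsg_len) < 0)
		ret = -1;
	else
		ret = 0;	/* ACK handling omitted for brevity */
	mnl_socket_close(nl);
	return ret;
}
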
diff --git a/net/dsa/port.c b/net/dsa/port.c
index 37ab238e8304..c42dac87671b 100644
--- a/net/dsa/port.c
+++ b/net/dsa/port.c
@@ -14,9 +14,9 @@
#include "dsa.h"
#include "port.h"
-#include "slave.h"
#include "switch.h"
#include "tag_8021q.h"
+#include "user.h"
/**
* dsa_port_notify - Notify the switching fabric of changes to a port
@@ -289,7 +289,7 @@ static void dsa_port_reset_vlan_filtering(struct dsa_port *dp,
}
/* If the bridge was vlan_filtering, the bridge core doesn't trigger an
- * event for changing vlan_filtering setting upon slave ports leaving
+ * event for changing vlan_filtering setting upon user ports leaving
* it. That is a good thing, because that lets us handle it and also
* handle the case where the switch's vlan_filtering setting is global
* (not per port). When that happens, the correct moment to trigger the
@@ -489,7 +489,7 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
.dp = dp,
.extack = extack,
};
- struct net_device *dev = dp->slave;
+ struct net_device *dev = dp->user;
struct net_device *brport_dev;
int err;
@@ -514,8 +514,8 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
dp->bridge->tx_fwd_offload = info.tx_fwd_offload;
err = switchdev_bridge_port_offload(brport_dev, dev, dp,
- &dsa_slave_switchdev_notifier,
- &dsa_slave_switchdev_blocking_notifier,
+ &dsa_user_switchdev_notifier,
+ &dsa_user_switchdev_blocking_notifier,
dp->bridge->tx_fwd_offload, extack);
if (err)
goto out_rollback_unbridge;
@@ -528,8 +528,8 @@ int dsa_port_bridge_join(struct dsa_port *dp, struct net_device *br,
out_rollback_unoffload:
switchdev_bridge_port_unoffload(brport_dev, dp,
- &dsa_slave_switchdev_notifier,
- &dsa_slave_switchdev_blocking_notifier);
+ &dsa_user_switchdev_notifier,
+ &dsa_user_switchdev_blocking_notifier);
dsa_flush_workqueue();
out_rollback_unbridge:
dsa_broadcast(DSA_NOTIFIER_BRIDGE_LEAVE, &info);
@@ -547,8 +547,8 @@ void dsa_port_pre_bridge_leave(struct dsa_port *dp, struct net_device *br)
return;
switchdev_bridge_port_unoffload(brport_dev, dp,
- &dsa_slave_switchdev_notifier,
- &dsa_slave_switchdev_blocking_notifier);
+ &dsa_user_switchdev_notifier,
+ &dsa_user_switchdev_blocking_notifier);
dsa_flush_workqueue();
}
@@ -741,10 +741,10 @@ static bool dsa_port_can_apply_vlan_filtering(struct dsa_port *dp,
*/
if (vlan_filtering && dsa_port_is_user(dp)) {
struct net_device *br = dsa_port_bridge_dev_get(dp);
- struct net_device *upper_dev, *slave = dp->slave;
+ struct net_device *upper_dev, *user = dp->user;
struct list_head *iter;
- netdev_for_each_upper_dev_rcu(slave, upper_dev, iter) {
+ netdev_for_each_upper_dev_rcu(user, upper_dev, iter) {
struct bridge_vlan_info br_info;
u16 vid;
@@ -803,9 +803,9 @@ int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
if (!ds->ops->port_vlan_filtering)
return -EOPNOTSUPP;
- /* We are called from dsa_slave_switchdev_blocking_event(),
+ /* We are called from dsa_user_switchdev_blocking_event(),
* which is not under rcu_read_lock(), unlike
- * dsa_slave_switchdev_event().
+ * dsa_user_switchdev_event().
*/
rcu_read_lock();
apply = dsa_port_can_apply_vlan_filtering(dp, vlan_filtering, extack);
@@ -827,24 +827,24 @@ int dsa_port_vlan_filtering(struct dsa_port *dp, bool vlan_filtering,
ds->vlan_filtering = vlan_filtering;
dsa_switch_for_each_user_port(other_dp, ds) {
- struct net_device *slave = other_dp->slave;
+ struct net_device *user = other_dp->user;
/* We might be called in the unbind path, so not
- * all slave devices might still be registered.
+ * all user devices might still be registered.
*/
- if (!slave)
+ if (!user)
continue;
- err = dsa_slave_manage_vlan_filtering(slave,
- vlan_filtering);
+ err = dsa_user_manage_vlan_filtering(user,
+ vlan_filtering);
if (err)
goto restore;
}
} else {
dp->vlan_filtering = vlan_filtering;
- err = dsa_slave_manage_vlan_filtering(dp->slave,
- vlan_filtering);
+ err = dsa_user_manage_vlan_filtering(dp->user,
+ vlan_filtering);
if (err)
goto restore;
}
@@ -863,7 +863,7 @@ restore:
}
/* This enforces legacy behavior for switch drivers which assume they can't
- * receive VLAN configuration when enslaved to a bridge with vlan_filtering=0
+ * receive VLAN configuration when joining a bridge with vlan_filtering=0
*/
bool dsa_port_skip_vlan_configuration(struct dsa_port *dp)
{
@@ -1047,7 +1047,7 @@ int dsa_port_standalone_host_fdb_add(struct dsa_port *dp,
int dsa_port_bridge_host_fdb_add(struct dsa_port *dp,
const unsigned char *addr, u16 vid)
{
- struct net_device *master = dsa_port_to_master(dp);
+ struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_db db = {
.type = DSA_DB_BRIDGE,
.bridge = *dp->bridge,
@@ -1057,12 +1057,12 @@ int dsa_port_bridge_host_fdb_add(struct dsa_port *dp,
if (!dp->ds->fdb_isolation)
db.bridge.num = 0;
- /* Avoid a call to __dev_set_promiscuity() on the master, which
+ /* Avoid a call to __dev_set_promiscuity() on the conduit, which
* requires rtnl_lock(), since we can't guarantee that is held here,
* and we can't take it either.
*/
- if (master->priv_flags & IFF_UNICAST_FLT) {
- err = dev_uc_add(master, addr);
+ if (conduit->priv_flags & IFF_UNICAST_FLT) {
+ err = dev_uc_add(conduit, addr);
if (err)
return err;
}
@@ -1098,7 +1098,7 @@ int dsa_port_standalone_host_fdb_del(struct dsa_port *dp,
int dsa_port_bridge_host_fdb_del(struct dsa_port *dp,
const unsigned char *addr, u16 vid)
{
- struct net_device *master = dsa_port_to_master(dp);
+ struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_db db = {
.type = DSA_DB_BRIDGE,
.bridge = *dp->bridge,
@@ -1108,8 +1108,8 @@ int dsa_port_bridge_host_fdb_del(struct dsa_port *dp,
if (!dp->ds->fdb_isolation)
db.bridge.num = 0;
- if (master->priv_flags & IFF_UNICAST_FLT) {
- err = dev_uc_del(master, addr);
+ if (conduit->priv_flags & IFF_UNICAST_FLT) {
+ err = dev_uc_del(conduit, addr);
if (err)
return err;
}
@@ -1229,7 +1229,7 @@ int dsa_port_standalone_host_mdb_add(const struct dsa_port *dp,
int dsa_port_bridge_host_mdb_add(const struct dsa_port *dp,
const struct switchdev_obj_port_mdb *mdb)
{
- struct net_device *master = dsa_port_to_master(dp);
+ struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_db db = {
.type = DSA_DB_BRIDGE,
.bridge = *dp->bridge,
@@ -1239,7 +1239,7 @@ int dsa_port_bridge_host_mdb_add(const struct dsa_port *dp,
if (!dp->ds->fdb_isolation)
db.bridge.num = 0;
- err = dev_mc_add(master, mdb->addr);
+ err = dev_mc_add(conduit, mdb->addr);
if (err)
return err;
@@ -1273,7 +1273,7 @@ int dsa_port_standalone_host_mdb_del(const struct dsa_port *dp,
int dsa_port_bridge_host_mdb_del(const struct dsa_port *dp,
const struct switchdev_obj_port_mdb *mdb)
{
- struct net_device *master = dsa_port_to_master(dp);
+ struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_db db = {
.type = DSA_DB_BRIDGE,
.bridge = *dp->bridge,
@@ -1283,7 +1283,7 @@ int dsa_port_bridge_host_mdb_del(const struct dsa_port *dp,
if (!dp->ds->fdb_isolation)
db.bridge.num = 0;
- err = dev_mc_del(master, mdb->addr);
+ err = dev_mc_del(conduit, mdb->addr);
if (err)
return err;
@@ -1318,7 +1318,7 @@ int dsa_port_host_vlan_add(struct dsa_port *dp,
const struct switchdev_obj_port_vlan *vlan,
struct netlink_ext_ack *extack)
{
- struct net_device *master = dsa_port_to_master(dp);
+ struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_notifier_vlan_info info = {
.dp = dp,
.vlan = vlan,
@@ -1330,7 +1330,7 @@ int dsa_port_host_vlan_add(struct dsa_port *dp,
if (err && err != -EOPNOTSUPP)
return err;
- vlan_vid_add(master, htons(ETH_P_8021Q), vlan->vid);
+ vlan_vid_add(conduit, htons(ETH_P_8021Q), vlan->vid);
return err;
}
@@ -1338,7 +1338,7 @@ int dsa_port_host_vlan_add(struct dsa_port *dp,
int dsa_port_host_vlan_del(struct dsa_port *dp,
const struct switchdev_obj_port_vlan *vlan)
{
- struct net_device *master = dsa_port_to_master(dp);
+ struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_notifier_vlan_info info = {
.dp = dp,
.vlan = vlan,
@@ -1349,7 +1349,7 @@ int dsa_port_host_vlan_del(struct dsa_port *dp,
if (err && err != -EOPNOTSUPP)
return err;
- vlan_vid_del(master, htons(ETH_P_8021Q), vlan->vid);
+ vlan_vid_del(conduit, htons(ETH_P_8021Q), vlan->vid);
return err;
}
@@ -1398,24 +1398,24 @@ int dsa_port_mrp_del_ring_role(const struct dsa_port *dp,
return ds->ops->port_mrp_del_ring_role(ds, dp->index, mrp);
}
-static int dsa_port_assign_master(struct dsa_port *dp,
- struct net_device *master,
- struct netlink_ext_ack *extack,
- bool fail_on_err)
+static int dsa_port_assign_conduit(struct dsa_port *dp,
+ struct net_device *conduit,
+ struct netlink_ext_ack *extack,
+ bool fail_on_err)
{
struct dsa_switch *ds = dp->ds;
int port = dp->index, err;
- err = ds->ops->port_change_master(ds, port, master, extack);
+ err = ds->ops->port_change_conduit(ds, port, conduit, extack);
if (err && !fail_on_err)
- dev_err(ds->dev, "port %d failed to assign master %s: %pe\n",
- port, master->name, ERR_PTR(err));
+ dev_err(ds->dev, "port %d failed to assign conduit %s: %pe\n",
+ port, conduit->name, ERR_PTR(err));
if (err && fail_on_err)
return err;
- dp->cpu_dp = master->dsa_ptr;
- dp->cpu_port_in_lag = netif_is_lag_master(master);
+ dp->cpu_dp = conduit->dsa_ptr;
+ dp->cpu_port_in_lag = netif_is_lag_master(conduit);
return 0;
}
@@ -1428,12 +1428,12 @@ static int dsa_port_assign_master(struct dsa_port *dp,
* the old CPU port before changing it, and restore it on errors during the
* bringup of the new one.
*/
-int dsa_port_change_master(struct dsa_port *dp, struct net_device *master,
- struct netlink_ext_ack *extack)
+int dsa_port_change_conduit(struct dsa_port *dp, struct net_device *conduit,
+ struct netlink_ext_ack *extack)
{
struct net_device *bridge_dev = dsa_port_bridge_dev_get(dp);
- struct net_device *old_master = dsa_port_to_master(dp);
- struct net_device *dev = dp->slave;
+ struct net_device *old_conduit = dsa_port_to_conduit(dp);
+ struct net_device *dev = dp->user;
struct dsa_switch *ds = dp->ds;
bool vlan_filtering;
int err, tmp;
@@ -1454,7 +1454,7 @@ int dsa_port_change_master(struct dsa_port *dp, struct net_device *master,
*/
vlan_filtering = dsa_port_is_vlan_filtering(dp);
if (vlan_filtering) {
- err = dsa_slave_manage_vlan_filtering(dev, false);
+ err = dsa_user_manage_vlan_filtering(dev, false);
if (err) {
NL_SET_ERR_MSG_MOD(extack,
"Failed to remove standalone VLANs");
@@ -1465,16 +1465,16 @@ int dsa_port_change_master(struct dsa_port *dp, struct net_device *master,
/* Standalone addresses, and addresses of upper interfaces like
* VLAN, LAG, HSR need to be migrated.
*/
- dsa_slave_unsync_ha(dev);
+ dsa_user_unsync_ha(dev);
- err = dsa_port_assign_master(dp, master, extack, true);
+ err = dsa_port_assign_conduit(dp, conduit, extack, true);
if (err)
goto rewind_old_addrs;
- dsa_slave_sync_ha(dev);
+ dsa_user_sync_ha(dev);
if (vlan_filtering) {
- err = dsa_slave_manage_vlan_filtering(dev, true);
+ err = dsa_user_manage_vlan_filtering(dev, true);
if (err) {
NL_SET_ERR_MSG_MOD(extack,
"Failed to restore standalone VLANs");
@@ -1495,19 +1495,19 @@ int dsa_port_change_master(struct dsa_port *dp, struct net_device *master,
rewind_new_vlan:
if (vlan_filtering)
- dsa_slave_manage_vlan_filtering(dev, false);
+ dsa_user_manage_vlan_filtering(dev, false);
rewind_new_addrs:
- dsa_slave_unsync_ha(dev);
+ dsa_user_unsync_ha(dev);
- dsa_port_assign_master(dp, old_master, NULL, false);
+ dsa_port_assign_conduit(dp, old_conduit, NULL, false);
/* Restore the objects on the old CPU port */
rewind_old_addrs:
- dsa_slave_sync_ha(dev);
+ dsa_user_sync_ha(dev);
if (vlan_filtering) {
- tmp = dsa_slave_manage_vlan_filtering(dev, true);
+ tmp = dsa_user_manage_vlan_filtering(dev, true);
if (tmp) {
dev_err(ds->dev,
"port %d failed to restore standalone VLANs: %pe\n",
@@ -1554,20 +1554,6 @@ static struct phy_device *dsa_port_get_phy_device(struct dsa_port *dp)
return phydev;
}
-static void dsa_port_phylink_validate(struct phylink_config *config,
- unsigned long *supported,
- struct phylink_link_state *state)
-{
- /* Skip call for drivers which don't yet set mac_capabilities,
- * since validating in that case would mean their PHY will advertise
- * nothing. In turn, skipping validation makes them advertise
- * everything that the PHY supports, so those drivers should be
- * converted ASAP.
- */
- if (config->mac_capabilities)
- phylink_generic_validate(config, supported, state);
-}
-
static struct phylink_pcs *
dsa_port_phylink_mac_select_pcs(struct phylink_config *config,
phy_interface_t interface)
@@ -1634,7 +1620,7 @@ static void dsa_port_phylink_mac_link_down(struct phylink_config *config,
struct dsa_switch *ds = dp->ds;
if (dsa_port_is_user(dp))
- phydev = dp->slave->phydev;
+ phydev = dp->user->phydev;
if (!ds->ops->phylink_mac_link_down) {
if (ds->ops->adjust_link && phydev)
@@ -1666,7 +1652,6 @@ static void dsa_port_phylink_mac_link_up(struct phylink_config *config,
}
static const struct phylink_mac_ops dsa_port_phylink_mac_ops = {
- .validate = dsa_port_phylink_validate,
.mac_select_pcs = dsa_port_phylink_mac_select_pcs,
.mac_prepare = dsa_port_phylink_mac_prepare,
.mac_config = dsa_port_phylink_mac_config,
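
With dsa_port_phylink_validate() and the .validate hook removed above, phylink is left to its generic validation, which only works when the driver describes its MAC via mac_capabilities and supported_interfaces. A minimal sketch of the kind of ->phylink_get_caps() hook this relies on (illustrative only, not part of this patch; the foo_ prefix and the chosen capability set are invented for the example):

static void foo_phylink_get_caps(struct dsa_switch *ds, int port,
				 struct phylink_config *config)
{
	/* Advertise what the example MAC can actually do, so that
	 * phylink's generic validation has real data to work with.
	 */
	config->mac_capabilities = MAC_SYM_PAUSE | MAC_ASYM_PAUSE |
				   MAC_10 | MAC_100 | MAC_1000FD;

	__set_bit(PHY_INTERFACE_MODE_RGMII_ID,
		  config->supported_interfaces);
	__set_bit(PHY_INTERFACE_MODE_SGMII,
		  config->supported_interfaces);
}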
@@ -1823,7 +1808,7 @@ err_phy_connect:
* their type.
*
* User ports with no phy-handle or fixed-link are expected to connect to an
- * internal PHY located on the ds->slave_mii_bus at an MDIO address equal to
+ * internal PHY located on the ds->user_mii_bus at an MDIO address equal to
* the port number. This description is still actively supported.
*
* Shared (CPU and DSA) ports with no phy-handle or fixed-link are expected to
@@ -1844,7 +1829,7 @@ err_phy_connect:
* a fixed-link, a phy-handle, or a managed = "in-band-status" property.
* It becomes the responsibility of the driver to ensure that these ports
* operate at the maximum speed (whatever this means) and will interoperate
- * with the DSA master or other cascade port, since phylink methods will not be
+ * with the DSA conduit or other cascade port, since phylink methods will not be
* invoked for them.
*
* If you are considering expanding this table for newly introduced switches,
@@ -2024,7 +2009,8 @@ void dsa_shared_port_link_unregister_of(struct dsa_port *dp)
dsa_shared_port_setup_phy_of(dp, false);
}
-int dsa_port_hsr_join(struct dsa_port *dp, struct net_device *hsr)
+int dsa_port_hsr_join(struct dsa_port *dp, struct net_device *hsr,
+ struct netlink_ext_ack *extack)
{
struct dsa_switch *ds = dp->ds;
int err;
@@ -2034,7 +2020,7 @@ int dsa_port_hsr_join(struct dsa_port *dp, struct net_device *hsr)
dp->hsr_dev = hsr;
- err = ds->ops->port_hsr_join(ds, dp->index, hsr);
+ err = ds->ops->port_hsr_join(ds, dp->index, hsr, extack);
if (err)
dp->hsr_dev = NULL;
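
The dsa_port_hsr_join() change above threads a struct netlink_ext_ack from the caller down into the driver's ->port_hsr_join() hook, so a driver can report exactly why an HSR offload request was refused. A rough sketch of a driver-side implementation using it (illustrative only, not part of this patch; the foo_ names and the single-ring restriction are assumptions made for the example):

static int foo_port_hsr_join(struct dsa_switch *ds, int port,
			     struct net_device *hsr,
			     struct netlink_ext_ack *extack)
{
	struct foo_priv *priv = ds->priv;	/* hypothetical driver state */

	/* Pretend the hardware can offload only one HSR ring at a time */
	if (priv->hsr_dev && priv->hsr_dev != hsr) {
		NL_SET_ERR_MSG_MOD(extack,
				   "Hardware can offload only one HSR ring");
		return -EOPNOTSUPP;
	}

	priv->hsr_dev = hsr;
	priv->hsr_ports |= BIT(port);

	return 0;
}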
diff --git a/net/dsa/port.h b/net/dsa/port.h
index dc812512fd0e..6bc3291573c0 100644
--- a/net/dsa/port.h
+++ b/net/dsa/port.h
@@ -103,12 +103,13 @@ int dsa_port_phylink_create(struct dsa_port *dp);
void dsa_port_phylink_destroy(struct dsa_port *dp);
int dsa_shared_port_link_register_of(struct dsa_port *dp);
void dsa_shared_port_link_unregister_of(struct dsa_port *dp);
-int dsa_port_hsr_join(struct dsa_port *dp, struct net_device *hsr);
+int dsa_port_hsr_join(struct dsa_port *dp, struct net_device *hsr,
+ struct netlink_ext_ack *extack);
void dsa_port_hsr_leave(struct dsa_port *dp, struct net_device *hsr);
int dsa_port_tag_8021q_vlan_add(struct dsa_port *dp, u16 vid, bool broadcast);
void dsa_port_tag_8021q_vlan_del(struct dsa_port *dp, u16 vid, bool broadcast);
void dsa_port_set_host_flood(struct dsa_port *dp, bool uc, bool mc);
-int dsa_port_change_master(struct dsa_port *dp, struct net_device *master,
- struct netlink_ext_ack *extack);
+int dsa_port_change_conduit(struct dsa_port *dp, struct net_device *conduit,
+ struct netlink_ext_ack *extack);
#endif
diff --git a/net/dsa/slave.h b/net/dsa/slave.h
deleted file mode 100644
index d0abe609e00d..000000000000
--- a/net/dsa/slave.h
+++ /dev/null
@@ -1,69 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0-or-later */
-
-#ifndef __DSA_SLAVE_H
-#define __DSA_SLAVE_H
-
-#include <linux/if_bridge.h>
-#include <linux/if_vlan.h>
-#include <linux/list.h>
-#include <linux/netpoll.h>
-#include <linux/types.h>
-#include <net/dsa.h>
-#include <net/gro_cells.h>
-
-struct net_device;
-struct netlink_ext_ack;
-
-extern struct notifier_block dsa_slave_switchdev_notifier;
-extern struct notifier_block dsa_slave_switchdev_blocking_notifier;
-
-struct dsa_slave_priv {
- /* Copy of CPU port xmit for faster access in slave transmit hot path */
- struct sk_buff * (*xmit)(struct sk_buff *skb,
- struct net_device *dev);
-
- struct gro_cells gcells;
-
- /* DSA port data, such as switch, port index, etc. */
- struct dsa_port *dp;
-
-#ifdef CONFIG_NET_POLL_CONTROLLER
- struct netpoll *netpoll;
-#endif
-
- /* TC context */
- struct list_head mall_tc_list;
-};
-
-void dsa_slave_mii_bus_init(struct dsa_switch *ds);
-int dsa_slave_create(struct dsa_port *dp);
-void dsa_slave_destroy(struct net_device *slave_dev);
-int dsa_slave_suspend(struct net_device *slave_dev);
-int dsa_slave_resume(struct net_device *slave_dev);
-int dsa_slave_register_notifier(void);
-void dsa_slave_unregister_notifier(void);
-void dsa_slave_sync_ha(struct net_device *dev);
-void dsa_slave_unsync_ha(struct net_device *dev);
-void dsa_slave_setup_tagger(struct net_device *slave);
-int dsa_slave_change_mtu(struct net_device *dev, int new_mtu);
-int dsa_slave_change_master(struct net_device *dev, struct net_device *master,
- struct netlink_ext_ack *extack);
-int dsa_slave_manage_vlan_filtering(struct net_device *dev,
- bool vlan_filtering);
-
-static inline struct dsa_port *dsa_slave_to_port(const struct net_device *dev)
-{
- struct dsa_slave_priv *p = netdev_priv(dev);
-
- return p->dp;
-}
-
-static inline struct net_device *
-dsa_slave_to_master(const struct net_device *dev)
-{
- struct dsa_port *dp = dsa_slave_to_port(dev);
-
- return dsa_port_to_master(dp);
-}
-
-#endif
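
net/dsa/slave.h is deleted above and replaced by net/dsa/user.h, whose contents do not appear in this hunk. Judging by the renamed call sites throughout the patch, it is presumably a mechanical rename of the header just removed; a sketch of the core pieces under that assumption:

/* Presumed shape of net/dsa/user.h (not shown in this diff) */
struct dsa_user_priv {
	/* Copy of CPU port xmit for faster access in user transmit hot path */
	struct sk_buff * (*xmit)(struct sk_buff *skb,
				 struct net_device *dev);

	struct gro_cells gcells;

	/* DSA port data, such as switch, port index, etc. */
	struct dsa_port *dp;

#ifdef CONFIG_NET_POLL_CONTROLLER
	struct netpoll *netpoll;
#endif

	/* TC context */
	struct list_head mall_tc_list;
};

static inline struct dsa_port *dsa_user_to_port(const struct net_device *dev)
{
	struct dsa_user_priv *p = netdev_priv(dev);

	return p->dp;
}

static inline struct net_device *
dsa_user_to_conduit(const struct net_device *dev)
{
	struct dsa_port *dp = dsa_user_to_port(dev);

	return dsa_port_to_conduit(dp);
}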
diff --git a/net/dsa/switch.c b/net/dsa/switch.c
index 1a42f9317334..3d2feeea897b 100644
--- a/net/dsa/switch.c
+++ b/net/dsa/switch.c
@@ -15,10 +15,10 @@
#include "dsa.h"
#include "netlink.h"
#include "port.h"
-#include "slave.h"
#include "switch.h"
#include "tag_8021q.h"
#include "trace.h"
+#include "user.h"
static unsigned int dsa_switch_fastest_ageing_time(struct dsa_switch *ds,
unsigned int ageing_time)
@@ -894,12 +894,12 @@ static int dsa_switch_change_tag_proto(struct dsa_switch *ds,
* bits that depend on the tagger, such as the MTU.
*/
dsa_switch_for_each_user_port(dp, ds) {
- struct net_device *slave = dp->slave;
+ struct net_device *user = dp->user;
- dsa_slave_setup_tagger(slave);
+ dsa_user_setup_tagger(user);
/* rtnl_mutex is held in dsa_tree_change_tag_proto */
- dsa_slave_change_mtu(slave, slave->mtu);
+ dsa_user_change_mtu(user, user->mtu);
}
return 0;
@@ -960,13 +960,13 @@ dsa_switch_disconnect_tag_proto(struct dsa_switch *ds,
}
static int
-dsa_switch_master_state_change(struct dsa_switch *ds,
- struct dsa_notifier_master_state_info *info)
+dsa_switch_conduit_state_change(struct dsa_switch *ds,
+ struct dsa_notifier_conduit_state_info *info)
{
- if (!ds->ops->master_state_change)
+ if (!ds->ops->conduit_state_change)
return 0;
- ds->ops->master_state_change(ds, info->master, info->operational);
+ ds->ops->conduit_state_change(ds, info->conduit, info->operational);
return 0;
}
@@ -1056,8 +1056,8 @@ static int dsa_switch_event(struct notifier_block *nb,
case DSA_NOTIFIER_TAG_8021Q_VLAN_DEL:
err = dsa_switch_tag_8021q_vlan_del(ds, info);
break;
- case DSA_NOTIFIER_MASTER_STATE_CHANGE:
- err = dsa_switch_master_state_change(ds, info);
+ case DSA_NOTIFIER_CONDUIT_STATE_CHANGE:
+ err = dsa_switch_conduit_state_change(ds, info);
break;
default:
err = -EOPNOTSUPP;
diff --git a/net/dsa/switch.h b/net/dsa/switch.h
index ea034677da15..be0a2749cd97 100644
--- a/net/dsa/switch.h
+++ b/net/dsa/switch.h
@@ -34,7 +34,7 @@ enum {
DSA_NOTIFIER_TAG_PROTO_DISCONNECT,
DSA_NOTIFIER_TAG_8021Q_VLAN_ADD,
DSA_NOTIFIER_TAG_8021Q_VLAN_DEL,
- DSA_NOTIFIER_MASTER_STATE_CHANGE,
+ DSA_NOTIFIER_CONDUIT_STATE_CHANGE,
};
/* DSA_NOTIFIER_AGEING_TIME */
@@ -105,9 +105,9 @@ struct dsa_notifier_tag_8021q_vlan_info {
u16 vid;
};
-/* DSA_NOTIFIER_MASTER_STATE_CHANGE */
-struct dsa_notifier_master_state_info {
- const struct net_device *master;
+/* DSA_NOTIFIER_CONDUIT_STATE_CHANGE */
+struct dsa_notifier_conduit_state_info {
+ const struct net_device *conduit;
bool operational;
};
diff --git a/net/dsa/tag.c b/net/dsa/tag.c
index 5105a5ff58fa..6e402d49afd3 100644
--- a/net/dsa/tag.c
+++ b/net/dsa/tag.c
@@ -13,8 +13,8 @@
#include <net/dsa.h>
#include <net/dst_metadata.h>
-#include "slave.h"
#include "tag.h"
+#include "user.h"
static LIST_HEAD(dsa_tag_drivers_list);
static DEFINE_MUTEX(dsa_tag_drivers_lock);
@@ -27,7 +27,7 @@ static DEFINE_MUTEX(dsa_tag_drivers_lock);
* switch, the DSA driver owning the interface to which the packet is
* delivered is never notified unless we do so here.
*/
-static bool dsa_skb_defer_rx_timestamp(struct dsa_slave_priv *p,
+static bool dsa_skb_defer_rx_timestamp(struct dsa_user_priv *p,
struct sk_buff *skb)
{
struct dsa_switch *ds = p->dp->ds;
@@ -57,7 +57,7 @@ static int dsa_switch_rcv(struct sk_buff *skb, struct net_device *dev,
struct metadata_dst *md_dst = skb_metadata_dst(skb);
struct dsa_port *cpu_dp = dev->dsa_ptr;
struct sk_buff *nskb = NULL;
- struct dsa_slave_priv *p;
+ struct dsa_user_priv *p;
if (unlikely(!cpu_dp)) {
kfree_skb(skb);
@@ -75,7 +75,7 @@ static int dsa_switch_rcv(struct sk_buff *skb, struct net_device *dev,
if (!skb_has_extensions(skb))
skb->slow_gro = 0;
- skb->dev = dsa_master_find_slave(dev, 0, port);
+ skb->dev = dsa_conduit_find_user(dev, 0, port);
if (likely(skb->dev)) {
dsa_default_offload_fwd_mark(skb);
nskb = skb;
@@ -94,7 +94,7 @@ static int dsa_switch_rcv(struct sk_buff *skb, struct net_device *dev,
skb->pkt_type = PACKET_HOST;
skb->protocol = eth_type_trans(skb, skb->dev);
- if (unlikely(!dsa_slave_dev_check(skb->dev))) {
+ if (unlikely(!dsa_user_dev_check(skb->dev))) {
/* Packet is to be injected directly on an upper
* device, e.g. a team/bond, so skip all DSA-port
* specific actions.
diff --git a/net/dsa/tag.h b/net/dsa/tag.h
index 32d12f4a9d73..f6b9c73718df 100644
--- a/net/dsa/tag.h
+++ b/net/dsa/tag.h
@@ -9,7 +9,7 @@
#include <net/dsa.h>
#include "port.h"
-#include "slave.h"
+#include "user.h"
struct dsa_tag_driver {
const struct dsa_device_ops *ops;
@@ -29,7 +29,7 @@ static inline int dsa_tag_protocol_overhead(const struct dsa_device_ops *ops)
return ops->needed_headroom + ops->needed_tailroom;
}
-static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
+static inline struct net_device *dsa_conduit_find_user(struct net_device *dev,
int device, int port)
{
struct dsa_port *cpu_dp = dev->dsa_ptr;
@@ -39,7 +39,7 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
list_for_each_entry(dp, &dst->ports, list)
if (dp->ds->index == device && dp->index == port &&
dp->type == DSA_PORT_TYPE_USER)
- return dp->slave;
+ return dp->user;
return NULL;
}
@@ -49,7 +49,7 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
*/
static inline struct sk_buff *dsa_untag_bridge_pvid(struct sk_buff *skb)
{
- struct dsa_port *dp = dsa_slave_to_port(skb->dev);
+ struct dsa_port *dp = dsa_user_to_port(skb->dev);
struct net_device *br = dsa_port_bridge_dev_get(dp);
struct net_device *dev = skb->dev;
struct net_device *upper_dev;
@@ -107,12 +107,12 @@ static inline struct sk_buff *dsa_untag_bridge_pvid(struct sk_buff *skb)
* to support termination through the bridge.
*/
static inline struct net_device *
-dsa_find_designated_bridge_port_by_vid(struct net_device *master, u16 vid)
+dsa_find_designated_bridge_port_by_vid(struct net_device *conduit, u16 vid)
{
- struct dsa_port *cpu_dp = master->dsa_ptr;
+ struct dsa_port *cpu_dp = conduit->dsa_ptr;
struct dsa_switch_tree *dst = cpu_dp->dst;
struct bridge_vlan_info vinfo;
- struct net_device *slave;
+ struct net_device *user;
struct dsa_port *dp;
int err;
@@ -134,13 +134,13 @@ dsa_find_designated_bridge_port_by_vid(struct net_device *master, u16 vid)
if (dp->cpu_dp != cpu_dp)
continue;
- slave = dp->slave;
+ user = dp->user;
- err = br_vlan_get_info_rcu(slave, vid, &vinfo);
+ err = br_vlan_get_info_rcu(user, vid, &vinfo);
if (err)
continue;
- return slave;
+ return user;
}
return NULL;
@@ -155,7 +155,7 @@ dsa_find_designated_bridge_port_by_vid(struct net_device *master, u16 vid)
*/
static inline void dsa_default_offload_fwd_mark(struct sk_buff *skb)
{
- struct dsa_port *dp = dsa_slave_to_port(skb->dev);
+ struct dsa_port *dp = dsa_user_to_port(skb->dev);
skb->offload_fwd_mark = !!(dp->bridge);
}
@@ -215,9 +215,9 @@ static inline void dsa_alloc_etype_header(struct sk_buff *skb, int len)
memmove(skb->data, skb->data + len, 2 * ETH_ALEN);
}
-/* On RX, eth_type_trans() on the DSA master pulls ETH_HLEN bytes starting from
+/* On RX, eth_type_trans() on the DSA conduit pulls ETH_HLEN bytes starting from
* skb_mac_header(skb), which leaves skb->data pointing at the first byte after
- * what the DSA master perceives as the EtherType (the beginning of the L3
+ * what the DSA conduit perceives as the EtherType (the beginning of the L3
* protocol). Since DSA EtherType header taggers treat the EtherType as part of
* the DSA tag itself, and the EtherType is 2 bytes in length, the DSA header
* is located 2 bytes behind skb->data. Note that EtherType in this context
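
The comment above pins down where an EtherType DSA header sits relative to skb->data on receive. Tying that layout to the renamed dsa_conduit_find_user() helper, the rcv hook of a hypothetical 4-byte EtherType tagger would look roughly like this (illustrative only, not part of this patch; the tag field layout is invented for the example):

static struct sk_buff *example_etype_rcv(struct sk_buff *skb,
					 struct net_device *dev)
{
	u8 *tag;
	int source_port;

	/* Make sure the 2 tag bytes past the EtherType are readable */
	if (unlikely(!pskb_may_pull(skb, 2)))
		return NULL;

	tag = dsa_etype_header_pos_rx(skb);	/* skb->data - 2, see above */
	source_port = tag[3] & 0x0f;		/* hypothetical field layout */

	skb->dev = dsa_conduit_find_user(dev, 0, source_port);
	if (!skb->dev)
		return NULL;

	/* Drop the 4-byte header and close the gap between the MAC
	 * addresses and the real EtherType.
	 */
	skb_pull_rcsum(skb, 4);
	dsa_strip_etype_header(skb, 4);

	dsa_default_offload_fwd_mark(skb);

	return skb;
}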
diff --git a/net/dsa/tag_8021q.c b/net/dsa/tag_8021q.c
index cbdfc392f7e0..71b26ae6db39 100644
--- a/net/dsa/tag_8021q.c
+++ b/net/dsa/tag_8021q.c
@@ -73,7 +73,7 @@ struct dsa_tag_8021q_vlan {
struct dsa_8021q_context {
struct dsa_switch *ds;
struct list_head vlans;
- /* EtherType of RX VID, used for filtering on master interface */
+ /* EtherType of RX VID, used for filtering on conduit interface */
__be16 proto;
};
@@ -338,7 +338,7 @@ static int dsa_tag_8021q_port_setup(struct dsa_switch *ds, int port)
struct dsa_8021q_context *ctx = ds->tag_8021q_ctx;
struct dsa_port *dp = dsa_to_port(ds, port);
u16 vid = dsa_tag_8021q_standalone_vid(dp);
- struct net_device *master;
+ struct net_device *conduit;
int err;
/* The CPU port is implicitly configured by
@@ -347,7 +347,7 @@ static int dsa_tag_8021q_port_setup(struct dsa_switch *ds, int port)
if (!dsa_port_is_user(dp))
return 0;
- master = dsa_port_to_master(dp);
+ conduit = dsa_port_to_conduit(dp);
err = dsa_port_tag_8021q_vlan_add(dp, vid, false);
if (err) {
@@ -357,8 +357,8 @@ static int dsa_tag_8021q_port_setup(struct dsa_switch *ds, int port)
return err;
}
- /* Add the VLAN to the master's RX filter. */
- vlan_vid_add(master, ctx->proto, vid);
+ /* Add the VLAN to the conduit's RX filter. */
+ vlan_vid_add(conduit, ctx->proto, vid);
return err;
}
@@ -368,7 +368,7 @@ static void dsa_tag_8021q_port_teardown(struct dsa_switch *ds, int port)
struct dsa_8021q_context *ctx = ds->tag_8021q_ctx;
struct dsa_port *dp = dsa_to_port(ds, port);
u16 vid = dsa_tag_8021q_standalone_vid(dp);
- struct net_device *master;
+ struct net_device *conduit;
/* The CPU port is implicitly configured by
* configuring the front-panel ports
@@ -376,11 +376,11 @@ static void dsa_tag_8021q_port_teardown(struct dsa_switch *ds, int port)
if (!dsa_port_is_user(dp))
return;
- master = dsa_port_to_master(dp);
+ conduit = dsa_port_to_conduit(dp);
dsa_port_tag_8021q_vlan_del(dp, vid, false);
- vlan_vid_del(master, ctx->proto, vid);
+ vlan_vid_del(conduit, ctx->proto, vid);
}
static int dsa_tag_8021q_setup(struct dsa_switch *ds)
@@ -468,10 +468,10 @@ struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
}
EXPORT_SYMBOL_GPL(dsa_8021q_xmit);
-struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *master,
+struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *conduit,
int vbid)
{
- struct dsa_port *cpu_dp = master->dsa_ptr;
+ struct dsa_port *cpu_dp = conduit->dsa_ptr;
struct dsa_switch_tree *dst = cpu_dp->dst;
struct dsa_port *dp;
@@ -490,7 +490,7 @@ struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *master,
continue;
if (dsa_port_bridge_num_get(dp) == vbid)
- return dp->slave;
+ return dp->user;
}
return NULL;
diff --git a/net/dsa/tag_8021q.h b/net/dsa/tag_8021q.h
index b75cbaa028ef..41f7167ac520 100644
--- a/net/dsa/tag_8021q.h
+++ b/net/dsa/tag_8021q.h
@@ -16,7 +16,7 @@ struct sk_buff *dsa_8021q_xmit(struct sk_buff *skb, struct net_device *netdev,
void dsa_8021q_rcv(struct sk_buff *skb, int *source_port, int *switch_id,
int *vbid);
-struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *master,
+struct net_device *dsa_tag_8021q_find_port_by_vbid(struct net_device *conduit,
int vbid);
int dsa_switch_tag_8021q_vlan_add(struct dsa_switch *ds,
diff --git a/net/dsa/tag_ar9331.c b/net/dsa/tag_ar9331.c
index 7f3b7d730b85..92ce67b93a58 100644
--- a/net/dsa/tag_ar9331.c
+++ b/net/dsa/tag_ar9331.c
@@ -29,7 +29,7 @@
static struct sk_buff *ar9331_tag_xmit(struct sk_buff *skb,
struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
__le16 *phdr;
u16 hdr;
@@ -74,7 +74,7 @@ static struct sk_buff *ar9331_tag_rcv(struct sk_buff *skb,
/* Get source port information */
port = FIELD_GET(AR9331_HDR_PORT_NUM_MASK, hdr);
- skb->dev = dsa_master_find_slave(ndev, 0, port);
+ skb->dev = dsa_conduit_find_user(ndev, 0, port);
if (!skb->dev)
return NULL;
diff --git a/net/dsa/tag_brcm.c b/net/dsa/tag_brcm.c
index cacdafb41200..83d283a5d27e 100644
--- a/net/dsa/tag_brcm.c
+++ b/net/dsa/tag_brcm.c
@@ -85,7 +85,7 @@ static struct sk_buff *brcm_tag_xmit_ll(struct sk_buff *skb,
struct net_device *dev,
unsigned int offset)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
u16 queue = skb_get_queue_mapping(skb);
u8 *brcm_tag;
@@ -96,7 +96,7 @@ static struct sk_buff *brcm_tag_xmit_ll(struct sk_buff *skb,
* (including FCS and tag) because the length verification is done after
* the Broadcom tag is stripped off the ingress packet.
*
- * Let dsa_slave_xmit() free the SKB
+ * Let dsa_user_xmit() free the SKB
*/
if (__skb_put_padto(skb, ETH_ZLEN + BRCM_TAG_LEN, false))
return NULL;
@@ -119,7 +119,7 @@ static struct sk_buff *brcm_tag_xmit_ll(struct sk_buff *skb,
brcm_tag[2] = BRCM_IG_DSTMAP2_MASK;
brcm_tag[3] = (1 << dp->index) & BRCM_IG_DSTMAP1_MASK;
- /* Now tell the master network device about the desired output queue
+ /* Now tell the conduit network device about the desired output queue
* as well
*/
skb_set_queue_mapping(skb, BRCM_TAG_SET_PORT_QUEUE(dp->index, queue));
@@ -164,7 +164,7 @@ static struct sk_buff *brcm_tag_rcv_ll(struct sk_buff *skb,
/* Locate which port this is coming from */
source_port = brcm_tag[3] & BRCM_EG_PID_MASK;
- skb->dev = dsa_master_find_slave(dev, 0, source_port);
+ skb->dev = dsa_conduit_find_user(dev, 0, source_port);
if (!skb->dev)
return NULL;
@@ -216,7 +216,7 @@ MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_BRCM, BRCM_NAME);
static struct sk_buff *brcm_leg_tag_xmit(struct sk_buff *skb,
struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
u8 *brcm_tag;
/* The Ethernet switch we are interfaced with needs packets to be at
@@ -226,7 +226,7 @@ static struct sk_buff *brcm_leg_tag_xmit(struct sk_buff *skb,
* (including FCS and tag) because the length verification is done after
* the Broadcom tag is stripped off the ingress packet.
*
- * Let dsa_slave_xmit() free the SKB
+ * Let dsa_user_xmit() free the SKB
*/
if (__skb_put_padto(skb, ETH_ZLEN + BRCM_LEG_TAG_LEN, false))
return NULL;
@@ -264,7 +264,7 @@ static struct sk_buff *brcm_leg_tag_rcv(struct sk_buff *skb,
source_port = brcm_tag[5] & BRCM_LEG_PORT_ID;
- skb->dev = dsa_master_find_slave(dev, 0, source_port);
+ skb->dev = dsa_conduit_find_user(dev, 0, source_port);
if (!skb->dev)
return NULL;
diff --git a/net/dsa/tag_dsa.c b/net/dsa/tag_dsa.c
index 1fd7fa26db64..8ed52dd663ab 100644
--- a/net/dsa/tag_dsa.c
+++ b/net/dsa/tag_dsa.c
@@ -129,7 +129,7 @@ enum dsa_code {
static struct sk_buff *dsa_xmit_ll(struct sk_buff *skb, struct net_device *dev,
u8 extra)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct net_device *br_dev;
u8 tag_dev, tag_port;
enum dsa_cmd cmd;
@@ -267,14 +267,14 @@ static struct sk_buff *dsa_rcv_ll(struct sk_buff *skb, struct net_device *dev,
lag = dsa_lag_by_id(cpu_dp->dst, source_port + 1);
skb->dev = lag ? lag->dev : NULL;
} else {
- skb->dev = dsa_master_find_slave(dev, source_device,
+ skb->dev = dsa_conduit_find_user(dev, source_device,
source_port);
}
if (!skb->dev)
return NULL;
- /* When using LAG offload, skb->dev is not a DSA slave interface,
+ /* When using LAG offload, skb->dev is not a DSA user interface,
* so we cannot call dsa_default_offload_fwd_mark and we need to
* special-case it.
*/
diff --git a/net/dsa/tag_gswip.c b/net/dsa/tag_gswip.c
index e279cd9057b0..3539141b5350 100644
--- a/net/dsa/tag_gswip.c
+++ b/net/dsa/tag_gswip.c
@@ -61,7 +61,7 @@
static struct sk_buff *gswip_tag_xmit(struct sk_buff *skb,
struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
u8 *gswip_tag;
skb_push(skb, GSWIP_TX_HEADER_LEN);
@@ -89,7 +89,7 @@ static struct sk_buff *gswip_tag_rcv(struct sk_buff *skb,
/* Get source port information */
port = (gswip_tag[7] & GSWIP_RX_SPPID_MASK) >> GSWIP_RX_SPPID_SHIFT;
- skb->dev = dsa_master_find_slave(dev, 0, port);
+ skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev)
return NULL;
diff --git a/net/dsa/tag_hellcreek.c b/net/dsa/tag_hellcreek.c
index 03a1fb9c87a9..6e233cd0aa38 100644
--- a/net/dsa/tag_hellcreek.c
+++ b/net/dsa/tag_hellcreek.c
@@ -20,7 +20,7 @@
static struct sk_buff *hellcreek_xmit(struct sk_buff *skb,
struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
u8 *tag;
/* Calculate checksums (if required) before adding the trailer tag to
@@ -45,7 +45,7 @@ static struct sk_buff *hellcreek_rcv(struct sk_buff *skb,
u8 *tag = skb_tail_pointer(skb) - HELLCREEK_TAG_LEN;
unsigned int port = tag[0] & 0x03;
- skb->dev = dsa_master_find_slave(dev, 0, port);
+ skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev) {
netdev_warn_once(dev, "Failed to get source port: %d\n", port);
return NULL;
diff --git a/net/dsa/tag_ksz.c b/net/dsa/tag_ksz.c
index ea100bd25939..9be341fa88f0 100644
--- a/net/dsa/tag_ksz.c
+++ b/net/dsa/tag_ksz.c
@@ -87,7 +87,7 @@ static struct sk_buff *ksz_common_rcv(struct sk_buff *skb,
struct net_device *dev,
unsigned int port, unsigned int len)
{
- skb->dev = dsa_master_find_slave(dev, 0, port);
+ skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev)
return NULL;
@@ -119,7 +119,7 @@ static struct sk_buff *ksz_common_rcv(struct sk_buff *skb,
static struct sk_buff *ksz8795_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct ethhdr *hdr;
u8 *tag;
@@ -256,7 +256,7 @@ static struct sk_buff *ksz_defer_xmit(struct dsa_port *dp, struct sk_buff *skb)
return NULL;
kthread_init_work(&xmit_work->work, xmit_work_fn);
- /* Increase refcount so the kfree_skb in dsa_slave_xmit
+ /* Increase refcount so the kfree_skb in dsa_user_xmit
* won't really free the packet.
*/
xmit_work->dp = dp;
@@ -272,7 +272,7 @@ static struct sk_buff *ksz9477_xmit(struct sk_buff *skb,
{
u16 queue_mapping = skb_get_queue_mapping(skb);
u8 prio = netdev_txq_to_tc(dev, queue_mapping);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct ethhdr *hdr;
__be16 *tag;
u16 val;
@@ -293,6 +293,14 @@ static struct sk_buff *ksz9477_xmit(struct sk_buff *skb,
if (is_link_local_ether_addr(hdr->h_dest))
val |= KSZ9477_TAIL_TAG_OVERRIDE;
+ if (dev->features & NETIF_F_HW_HSR_DUP) {
+ struct net_device *hsr_dev = dp->hsr_dev;
+ struct dsa_port *other_dp;
+
+ dsa_hsr_foreach_port(other_dp, dp->ds, hsr_dev)
+ val |= BIT(other_dp->index);
+ }
+
*tag = cpu_to_be16(val);
return ksz_defer_xmit(dp, skb);
@@ -336,7 +344,7 @@ static struct sk_buff *ksz9893_xmit(struct sk_buff *skb,
{
u16 queue_mapping = skb_get_queue_mapping(skb);
u8 prio = netdev_txq_to_tc(dev, queue_mapping);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct ethhdr *hdr;
u8 *tag;
@@ -402,7 +410,7 @@ static struct sk_buff *lan937x_xmit(struct sk_buff *skb,
{
u16 queue_mapping = skb_get_queue_mapping(skb);
u8 prio = netdev_txq_to_tc(dev, queue_mapping);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
const struct ethhdr *hdr = eth_hdr(skb);
__be16 *tag;
u16 val;
diff --git a/net/dsa/tag_lan9303.c b/net/dsa/tag_lan9303.c
index c25f5536706b..1ed8ee24855d 100644
--- a/net/dsa/tag_lan9303.c
+++ b/net/dsa/tag_lan9303.c
@@ -56,7 +56,7 @@ static int lan9303_xmit_use_arl(struct dsa_port *dp, u8 *dest_addr)
static struct sk_buff *lan9303_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
__be16 *lan9303_tag;
u16 tag;
@@ -99,7 +99,7 @@ static struct sk_buff *lan9303_rcv(struct sk_buff *skb, struct net_device *dev)
source_port = lan9303_tag1 & 0x3;
- skb->dev = dsa_master_find_slave(dev, 0, source_port);
+ skb->dev = dsa_conduit_find_user(dev, 0, source_port);
if (!skb->dev) {
dev_warn_ratelimited(&dev->dev, "Dropping packet due to invalid source port\n");
return NULL;
diff --git a/net/dsa/tag_mtk.c b/net/dsa/tag_mtk.c
index 40af80452747..2483785f6ab1 100644
--- a/net/dsa/tag_mtk.c
+++ b/net/dsa/tag_mtk.c
@@ -23,7 +23,7 @@
static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
u8 xmit_tpid;
u8 *mtk_tag;
@@ -85,7 +85,7 @@ static struct sk_buff *mtk_tag_rcv(struct sk_buff *skb, struct net_device *dev)
/* Get source port information */
port = (hdr & MTK_HDR_RECV_SOURCE_PORT_MASK);
- skb->dev = dsa_master_find_slave(dev, 0, port);
+ skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev)
return NULL;
diff --git a/net/dsa/tag_none.c b/net/dsa/tag_none.c
index d2fd179c4227..9a473624db50 100644
--- a/net/dsa/tag_none.c
+++ b/net/dsa/tag_none.c
@@ -12,8 +12,8 @@
#define NONE_NAME "none"
-static struct sk_buff *dsa_slave_notag_xmit(struct sk_buff *skb,
- struct net_device *dev)
+static struct sk_buff *dsa_user_notag_xmit(struct sk_buff *skb,
+ struct net_device *dev)
{
/* Just return the original SKB */
return skb;
@@ -22,7 +22,7 @@ static struct sk_buff *dsa_slave_notag_xmit(struct sk_buff *skb,
static const struct dsa_device_ops none_ops = {
.name = NONE_NAME,
.proto = DSA_TAG_PROTO_NONE,
- .xmit = dsa_slave_notag_xmit,
+ .xmit = dsa_user_notag_xmit,
};
module_dsa_tag_driver(none_ops);
diff --git a/net/dsa/tag_ocelot.c b/net/dsa/tag_ocelot.c
index 20bf7074d5a6..ef2f8fffb2c7 100644
--- a/net/dsa/tag_ocelot.c
+++ b/net/dsa/tag_ocelot.c
@@ -45,7 +45,7 @@ static void ocelot_xmit_get_vlan_info(struct sk_buff *skb, struct dsa_port *dp,
static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
__be32 ifh_prefix, void **ifh)
{
- struct dsa_port *dp = dsa_slave_to_port(netdev);
+ struct dsa_port *dp = dsa_user_to_port(netdev);
struct dsa_switch *ds = dp->ds;
u64 vlan_tci, tag_type;
void *injection;
@@ -79,7 +79,7 @@ static void ocelot_xmit_common(struct sk_buff *skb, struct net_device *netdev,
static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
- struct dsa_port *dp = dsa_slave_to_port(netdev);
+ struct dsa_port *dp = dsa_user_to_port(netdev);
void *injection;
ocelot_xmit_common(skb, netdev, cpu_to_be32(0x8880000a), &injection);
@@ -91,7 +91,7 @@ static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
static struct sk_buff *seville_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
- struct dsa_port *dp = dsa_slave_to_port(netdev);
+ struct dsa_port *dp = dsa_user_to_port(netdev);
void *injection;
ocelot_xmit_common(skb, netdev, cpu_to_be32(0x88800005), &injection);
@@ -111,12 +111,12 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
u16 vlan_tpid;
u64 rew_val;
- /* Revert skb->data by the amount consumed by the DSA master,
+ /* Revert skb->data by the amount consumed by the DSA conduit,
* so it points to the beginning of the frame.
*/
skb_push(skb, ETH_HLEN);
/* We don't care about the short prefix, it is just for easy entrance
- * into the DSA master's RX filter. Discard it now by moving it into
+ * into the DSA conduit's RX filter. Discard it now by moving it into
* the headroom.
*/
skb_pull(skb, OCELOT_SHORT_PREFIX_LEN);
@@ -141,12 +141,12 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
ocelot_xfh_get_vlan_tci(extraction, &vlan_tci);
ocelot_xfh_get_rew_val(extraction, &rew_val);
- skb->dev = dsa_master_find_slave(netdev, 0, src_port);
+ skb->dev = dsa_conduit_find_user(netdev, 0, src_port);
if (!skb->dev)
/* The switch will reflect back some frames sent through
- * sockets opened on the bare DSA master. These will come back
+ * sockets opened on the bare DSA conduit. These will come back
* with src_port equal to the index of the CPU port, for which
- * there is no slave registered. So don't print any error
+ * there is no user registered. So don't print any error
* message here (ignore and drop those frames).
*/
return NULL;
@@ -170,7 +170,7 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
* equal to the pvid of the ingress port and should not be used for
* processing.
*/
- dp = dsa_slave_to_port(skb->dev);
+ dp = dsa_user_to_port(skb->dev);
vlan_tpid = tag_type ? ETH_P_8021AD : ETH_P_8021Q;
if (dsa_port_is_vlan_filtering(dp) &&
@@ -192,7 +192,7 @@ static const struct dsa_device_ops ocelot_netdev_ops = {
.xmit = ocelot_xmit,
.rcv = ocelot_rcv,
.needed_headroom = OCELOT_TOTAL_TAG_LEN,
- .promisc_on_master = true,
+ .promisc_on_conduit = true,
};
DSA_TAG_DRIVER(ocelot_netdev_ops);
@@ -204,7 +204,7 @@ static const struct dsa_device_ops seville_netdev_ops = {
.xmit = seville_xmit,
.rcv = ocelot_rcv,
.needed_headroom = OCELOT_TOTAL_TAG_LEN,
- .promisc_on_master = true,
+ .promisc_on_conduit = true,
};
DSA_TAG_DRIVER(seville_netdev_ops);
diff --git a/net/dsa/tag_ocelot_8021q.c b/net/dsa/tag_ocelot_8021q.c
index 1f0b8c20eba5..210039320888 100644
--- a/net/dsa/tag_ocelot_8021q.c
+++ b/net/dsa/tag_ocelot_8021q.c
@@ -37,8 +37,8 @@ static struct sk_buff *ocelot_defer_xmit(struct dsa_port *dp,
return NULL;
/* PTP over IP packets need UDP checksumming. We may have inherited
- * NETIF_F_HW_CSUM from the DSA master, but these packets are not sent
- * through the DSA master, so calculate the checksum here.
+ * NETIF_F_HW_CSUM from the DSA conduit, but these packets are not sent
+ * through the DSA conduit, so calculate the checksum here.
*/
if (skb->ip_summed == CHECKSUM_PARTIAL && skb_checksum_help(skb))
return NULL;
@@ -49,7 +49,7 @@ static struct sk_buff *ocelot_defer_xmit(struct dsa_port *dp,
/* Calls felix_port_deferred_xmit in felix.c */
kthread_init_work(&xmit_work->work, xmit_work_fn);
- /* Increase refcount so the kfree_skb in dsa_slave_xmit
+ /* Increase refcount so the kfree_skb in dsa_user_xmit
* won't really free the packet.
*/
xmit_work->dp = dp;
@@ -63,7 +63,7 @@ static struct sk_buff *ocelot_defer_xmit(struct dsa_port *dp,
static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
- struct dsa_port *dp = dsa_slave_to_port(netdev);
+ struct dsa_port *dp = dsa_user_to_port(netdev);
u16 queue_mapping = skb_get_queue_mapping(skb);
u8 pcp = netdev_txq_to_tc(netdev, queue_mapping);
u16 tx_vid = dsa_tag_8021q_standalone_vid(dp);
@@ -83,7 +83,7 @@ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
dsa_8021q_rcv(skb, &src_port, &switch_id, NULL);
- skb->dev = dsa_master_find_slave(netdev, switch_id, src_port);
+ skb->dev = dsa_conduit_find_user(netdev, switch_id, src_port);
if (!skb->dev)
return NULL;
@@ -130,7 +130,7 @@ static const struct dsa_device_ops ocelot_8021q_netdev_ops = {
.connect = ocelot_connect,
.disconnect = ocelot_disconnect,
.needed_headroom = VLAN_HLEN,
- .promisc_on_master = true,
+ .promisc_on_conduit = true,
};
MODULE_LICENSE("GPL v2");
diff --git a/net/dsa/tag_qca.c b/net/dsa/tag_qca.c
index e5ff7c34e577..6514aa7993ce 100644
--- a/net/dsa/tag_qca.c
+++ b/net/dsa/tag_qca.c
@@ -14,7 +14,7 @@
static struct sk_buff *qca_tag_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
__be16 *phdr;
u16 hdr;
@@ -78,7 +78,7 @@ static struct sk_buff *qca_tag_rcv(struct sk_buff *skb, struct net_device *dev)
/* Get source port information */
port = FIELD_GET(QCA_HDR_RECV_SOURCE_PORT, hdr);
- skb->dev = dsa_master_find_slave(dev, 0, port);
+ skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev)
return NULL;
@@ -116,7 +116,7 @@ static const struct dsa_device_ops qca_netdev_ops = {
.xmit = qca_tag_xmit,
.rcv = qca_tag_rcv,
.needed_headroom = QCA_HDR_LEN,
- .promisc_on_master = true,
+ .promisc_on_conduit = true,
};
MODULE_LICENSE("GPL");
diff --git a/net/dsa/tag_rtl4_a.c b/net/dsa/tag_rtl4_a.c
index c327314b95e3..4da5bad1a7aa 100644
--- a/net/dsa/tag_rtl4_a.c
+++ b/net/dsa/tag_rtl4_a.c
@@ -36,7 +36,7 @@
static struct sk_buff *rtl4a_tag_xmit(struct sk_buff *skb,
struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
__be16 *p;
u8 *tag;
u16 out;
@@ -97,9 +97,9 @@ static struct sk_buff *rtl4a_tag_rcv(struct sk_buff *skb,
}
port = protport & 0xff;
- skb->dev = dsa_master_find_slave(dev, 0, port);
+ skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev) {
- netdev_dbg(dev, "could not find slave for port %d\n", port);
+ netdev_dbg(dev, "could not find user for port %d\n", port);
return NULL;
}
diff --git a/net/dsa/tag_rtl8_4.c b/net/dsa/tag_rtl8_4.c
index 4f67834fd121..07e857debabf 100644
--- a/net/dsa/tag_rtl8_4.c
+++ b/net/dsa/tag_rtl8_4.c
@@ -103,7 +103,7 @@
static void rtl8_4_write_tag(struct sk_buff *skb, struct net_device *dev,
void *tag)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
__be16 tag16[RTL8_4_TAG_LEN / 2];
/* Set Realtek EtherType */
@@ -180,10 +180,10 @@ static int rtl8_4_read_tag(struct sk_buff *skb, struct net_device *dev,
/* Parse TX (switch->CPU) */
port = FIELD_GET(RTL8_4_TX, ntohs(tag16[3]));
- skb->dev = dsa_master_find_slave(dev, 0, port);
+ skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev) {
dev_warn_ratelimited(&dev->dev,
- "could not find slave for port %d\n",
+ "could not find user for port %d\n",
port);
return -ENOENT;
}
diff --git a/net/dsa/tag_rzn1_a5psw.c b/net/dsa/tag_rzn1_a5psw.c
index 437a6820ac42..2ce866b45615 100644
--- a/net/dsa/tag_rzn1_a5psw.c
+++ b/net/dsa/tag_rzn1_a5psw.c
@@ -39,7 +39,7 @@ struct a5psw_tag {
static struct sk_buff *a5psw_tag_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct a5psw_tag *ptag;
u32 data2_val;
@@ -90,7 +90,7 @@ static struct sk_buff *a5psw_tag_rcv(struct sk_buff *skb,
port = FIELD_GET(A5PSW_CTRL_DATA_PORT, ntohs(tag->ctrl_data));
- skb->dev = dsa_master_find_slave(dev, 0, port);
+ skb->dev = dsa_conduit_find_user(dev, 0, port);
if (!skb->dev)
return NULL;
diff --git a/net/dsa/tag_sja1105.c b/net/dsa/tag_sja1105.c
index ade3eeb2f3e6..1fffe8c2b589 100644
--- a/net/dsa/tag_sja1105.c
+++ b/net/dsa/tag_sja1105.c
@@ -157,7 +157,7 @@ static struct sk_buff *sja1105_defer_xmit(struct dsa_port *dp,
return NULL;
kthread_init_work(&xmit_work->work, xmit_work_fn);
- /* Increase refcount so the kfree_skb in dsa_slave_xmit
+ /* Increase refcount so the kfree_skb in dsa_user_xmit
* won't really free the packet.
*/
xmit_work->dp = dp;
@@ -210,7 +210,7 @@ static u16 sja1105_xmit_tpid(struct dsa_port *dp)
static struct sk_buff *sja1105_imprecise_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
- struct dsa_port *dp = dsa_slave_to_port(netdev);
+ struct dsa_port *dp = dsa_user_to_port(netdev);
unsigned int bridge_num = dsa_port_bridge_num_get(dp);
struct net_device *br = dsa_port_bridge_dev_get(dp);
u16 tx_vid;
@@ -235,7 +235,7 @@ static struct sk_buff *sja1105_imprecise_xmit(struct sk_buff *skb,
/* Transform untagged control packets into pvid-tagged control packets so that
* all packets sent by this tagger are VLAN-tagged and we can configure the
- * switch to drop untagged packets coming from the DSA master.
+ * switch to drop untagged packets coming from the DSA conduit.
*/
static struct sk_buff *sja1105_pvid_tag_control_pkt(struct dsa_port *dp,
struct sk_buff *skb, u8 pcp)
@@ -266,7 +266,7 @@ static struct sk_buff *sja1105_pvid_tag_control_pkt(struct dsa_port *dp,
static struct sk_buff *sja1105_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
- struct dsa_port *dp = dsa_slave_to_port(netdev);
+ struct dsa_port *dp = dsa_user_to_port(netdev);
u16 queue_mapping = skb_get_queue_mapping(skb);
u8 pcp = netdev_txq_to_tc(netdev, queue_mapping);
u16 tx_vid = dsa_tag_8021q_standalone_vid(dp);
@@ -294,7 +294,7 @@ static struct sk_buff *sja1110_xmit(struct sk_buff *skb,
struct net_device *netdev)
{
struct sk_buff *clone = SJA1105_SKB_CB(skb)->clone;
- struct dsa_port *dp = dsa_slave_to_port(netdev);
+ struct dsa_port *dp = dsa_user_to_port(netdev);
u16 queue_mapping = skb_get_queue_mapping(skb);
u8 pcp = netdev_txq_to_tc(netdev, queue_mapping);
u16 tx_vid = dsa_tag_8021q_standalone_vid(dp);
@@ -383,7 +383,7 @@ static struct sk_buff
* Buffer it until we get its meta frame.
*/
if (is_link_local) {
- struct dsa_port *dp = dsa_slave_to_port(skb->dev);
+ struct dsa_port *dp = dsa_user_to_port(skb->dev);
struct sja1105_tagger_private *priv;
struct dsa_switch *ds = dp->ds;
@@ -396,7 +396,7 @@ static struct sk_buff
if (priv->stampable_skb) {
dev_err_ratelimited(ds->dev,
"Expected meta frame, is %12llx "
- "in the DSA master multicast filter?\n",
+ "in the DSA conduit multicast filter?\n",
SJA1105_META_DMAC);
kfree_skb(priv->stampable_skb);
}
@@ -417,7 +417,7 @@ static struct sk_buff
* frame, which serves no further purpose).
*/
} else if (is_meta) {
- struct dsa_port *dp = dsa_slave_to_port(skb->dev);
+ struct dsa_port *dp = dsa_user_to_port(skb->dev);
struct sja1105_tagger_private *priv;
struct dsa_switch *ds = dp->ds;
struct sk_buff *stampable_skb;
@@ -550,7 +550,7 @@ static struct sk_buff *sja1105_rcv(struct sk_buff *skb,
}
if (source_port != -1 && switch_id != -1)
- skb->dev = dsa_master_find_slave(netdev, switch_id, source_port);
+ skb->dev = dsa_conduit_find_user(netdev, switch_id, source_port);
else if (vbid >= 1)
skb->dev = dsa_tag_8021q_find_port_by_vbid(netdev, vbid);
else
@@ -573,16 +573,16 @@ static struct sk_buff *sja1110_rcv_meta(struct sk_buff *skb, u16 rx_header)
int switch_id = SJA1110_RX_HEADER_SWITCH_ID(rx_header);
int n_ts = SJA1110_RX_HEADER_N_TS(rx_header);
struct sja1105_tagger_data *tagger_data;
- struct net_device *master = skb->dev;
+ struct net_device *conduit = skb->dev;
struct dsa_port *cpu_dp;
struct dsa_switch *ds;
int i;
- cpu_dp = master->dsa_ptr;
+ cpu_dp = conduit->dsa_ptr;
ds = dsa_switch_find(cpu_dp->dst->index, switch_id);
if (!ds) {
net_err_ratelimited("%s: cannot find switch id %d\n",
- master->name, switch_id);
+ conduit->name, switch_id);
return NULL;
}
@@ -649,7 +649,7 @@ static struct sk_buff *sja1110_rcv_inband_control_extension(struct sk_buff *skb,
/* skb->len counts from skb->data, while start_of_padding
* counts from the destination MAC address. Right now skb->data
- * is still as set by the DSA master, so to trim away the
+ * is still as set by the DSA conduit, so to trim away the
* padding and trailer we need to account for the fact that
* skb->data points to skb_mac_header(skb) + ETH_HLEN.
*/
@@ -698,7 +698,7 @@ static struct sk_buff *sja1110_rcv(struct sk_buff *skb,
else if (source_port == -1 || switch_id == -1)
skb->dev = dsa_find_designated_bridge_port_by_vid(netdev, vid);
else
- skb->dev = dsa_master_find_slave(netdev, switch_id, source_port);
+ skb->dev = dsa_conduit_find_user(netdev, switch_id, source_port);
if (!skb->dev) {
netdev_warn(netdev, "Couldn't decode source port\n");
return NULL;
@@ -778,7 +778,7 @@ static const struct dsa_device_ops sja1105_netdev_ops = {
.disconnect = sja1105_disconnect,
.needed_headroom = VLAN_HLEN,
.flow_dissect = sja1105_flow_dissect,
- .promisc_on_master = true,
+ .promisc_on_conduit = true,
};
DSA_TAG_DRIVER(sja1105_netdev_ops);
diff --git a/net/dsa/tag_trailer.c b/net/dsa/tag_trailer.c
index 7361b9106382..1ebb25a8b140 100644
--- a/net/dsa/tag_trailer.c
+++ b/net/dsa/tag_trailer.c
@@ -14,7 +14,7 @@
static struct sk_buff *trailer_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
u8 *trailer;
trailer = skb_put(skb, 4);
@@ -41,7 +41,7 @@ static struct sk_buff *trailer_rcv(struct sk_buff *skb, struct net_device *dev)
source_port = trailer[1] & 7;
- skb->dev = dsa_master_find_slave(dev, 0, source_port);
+ skb->dev = dsa_conduit_find_user(dev, 0, source_port);
if (!skb->dev)
return NULL;
diff --git a/net/dsa/tag_xrs700x.c b/net/dsa/tag_xrs700x.c
index af19969f9bc4..c9c163598ef2 100644
--- a/net/dsa/tag_xrs700x.c
+++ b/net/dsa/tag_xrs700x.c
@@ -13,7 +13,7 @@
static struct sk_buff *xrs700x_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct dsa_port *partner, *dp = dsa_slave_to_port(dev);
+ struct dsa_port *partner, *dp = dsa_user_to_port(dev);
u8 *trailer;
trailer = skb_put(skb, 1);
@@ -39,7 +39,7 @@ static struct sk_buff *xrs700x_rcv(struct sk_buff *skb, struct net_device *dev)
if (source_port < 0)
return NULL;
- skb->dev = dsa_master_find_slave(dev, 0, source_port);
+ skb->dev = dsa_conduit_find_user(dev, 0, source_port);
if (!skb->dev)
return NULL;
diff --git a/net/dsa/slave.c b/net/dsa/user.c
index 48db91b33390..d438884a4eb0 100644
--- a/net/dsa/slave.c
+++ b/net/dsa/user.c
@@ -1,6 +1,6 @@
// SPDX-License-Identifier: GPL-2.0-or-later
/*
- * net/dsa/slave.c - Slave device handling
+ * net/dsa/user.c - user device handling
* Copyright (c) 2008-2009 Marvell Semiconductor
*/
@@ -23,13 +23,13 @@
#include <linux/netpoll.h>
#include <linux/string.h>
+#include "conduit.h"
#include "dsa.h"
-#include "port.h"
-#include "master.h"
#include "netlink.h"
-#include "slave.h"
+#include "port.h"
#include "switch.h"
#include "tag.h"
+#include "user.h"
struct dsa_switchdev_event_work {
struct net_device *dev;
@@ -79,13 +79,13 @@ static bool dsa_switch_supports_mc_filtering(struct dsa_switch *ds)
!ds->needs_standalone_vlan_filtering;
}
-static void dsa_slave_standalone_event_work(struct work_struct *work)
+static void dsa_user_standalone_event_work(struct work_struct *work)
{
struct dsa_standalone_event_work *standalone_work =
container_of(work, struct dsa_standalone_event_work, work);
const unsigned char *addr = standalone_work->addr;
struct net_device *dev = standalone_work->dev;
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct switchdev_obj_port_mdb mdb;
struct dsa_switch *ds = dp->ds;
u16 vid = standalone_work->vid;
@@ -140,10 +140,10 @@ static void dsa_slave_standalone_event_work(struct work_struct *work)
kfree(standalone_work);
}
-static int dsa_slave_schedule_standalone_work(struct net_device *dev,
- enum dsa_standalone_event event,
- const unsigned char *addr,
- u16 vid)
+static int dsa_user_schedule_standalone_work(struct net_device *dev,
+ enum dsa_standalone_event event,
+ const unsigned char *addr,
+ u16 vid)
{
struct dsa_standalone_event_work *standalone_work;
@@ -151,7 +151,7 @@ static int dsa_slave_schedule_standalone_work(struct net_device *dev,
if (!standalone_work)
return -ENOMEM;
- INIT_WORK(&standalone_work->work, dsa_slave_standalone_event_work);
+ INIT_WORK(&standalone_work->work, dsa_user_standalone_event_work);
standalone_work->event = event;
standalone_work->dev = dev;
@@ -163,18 +163,18 @@ static int dsa_slave_schedule_standalone_work(struct net_device *dev,
return 0;
}
-static int dsa_slave_host_vlan_rx_filtering(void *arg, int vid)
+static int dsa_user_host_vlan_rx_filtering(void *arg, int vid)
{
struct dsa_host_vlan_rx_filtering_ctx *ctx = arg;
- return dsa_slave_schedule_standalone_work(ctx->dev, ctx->event,
+ return dsa_user_schedule_standalone_work(ctx->dev, ctx->event,
ctx->addr, vid);
}
-static int dsa_slave_vlan_for_each(struct net_device *dev,
- int (*cb)(void *arg, int vid), void *arg)
+static int dsa_user_vlan_for_each(struct net_device *dev,
+ int (*cb)(void *arg, int vid), void *arg)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_vlan *v;
int err;
@@ -193,99 +193,99 @@ static int dsa_slave_vlan_for_each(struct net_device *dev,
return 0;
}
-static int dsa_slave_sync_uc(struct net_device *dev,
- const unsigned char *addr)
+static int dsa_user_sync_uc(struct net_device *dev,
+ const unsigned char *addr)
{
- struct net_device *master = dsa_slave_to_master(dev);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct net_device *conduit = dsa_user_to_conduit(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_host_vlan_rx_filtering_ctx ctx = {
.dev = dev,
.addr = addr,
.event = DSA_UC_ADD,
};
- dev_uc_add(master, addr);
+ dev_uc_add(conduit, addr);
if (!dsa_switch_supports_uc_filtering(dp->ds))
return 0;
- return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
+ return dsa_user_vlan_for_each(dev, dsa_user_host_vlan_rx_filtering,
&ctx);
}
-static int dsa_slave_unsync_uc(struct net_device *dev,
- const unsigned char *addr)
+static int dsa_user_unsync_uc(struct net_device *dev,
+ const unsigned char *addr)
{
- struct net_device *master = dsa_slave_to_master(dev);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct net_device *conduit = dsa_user_to_conduit(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_host_vlan_rx_filtering_ctx ctx = {
.dev = dev,
.addr = addr,
.event = DSA_UC_DEL,
};
- dev_uc_del(master, addr);
+ dev_uc_del(conduit, addr);
if (!dsa_switch_supports_uc_filtering(dp->ds))
return 0;
- return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
+ return dsa_user_vlan_for_each(dev, dsa_user_host_vlan_rx_filtering,
&ctx);
}
-static int dsa_slave_sync_mc(struct net_device *dev,
- const unsigned char *addr)
+static int dsa_user_sync_mc(struct net_device *dev,
+ const unsigned char *addr)
{
- struct net_device *master = dsa_slave_to_master(dev);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct net_device *conduit = dsa_user_to_conduit(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_host_vlan_rx_filtering_ctx ctx = {
.dev = dev,
.addr = addr,
.event = DSA_MC_ADD,
};
- dev_mc_add(master, addr);
+ dev_mc_add(conduit, addr);
if (!dsa_switch_supports_mc_filtering(dp->ds))
return 0;
- return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
+ return dsa_user_vlan_for_each(dev, dsa_user_host_vlan_rx_filtering,
&ctx);
}
-static int dsa_slave_unsync_mc(struct net_device *dev,
- const unsigned char *addr)
+static int dsa_user_unsync_mc(struct net_device *dev,
+ const unsigned char *addr)
{
- struct net_device *master = dsa_slave_to_master(dev);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct net_device *conduit = dsa_user_to_conduit(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_host_vlan_rx_filtering_ctx ctx = {
.dev = dev,
.addr = addr,
.event = DSA_MC_DEL,
};
- dev_mc_del(master, addr);
+ dev_mc_del(conduit, addr);
if (!dsa_switch_supports_mc_filtering(dp->ds))
return 0;
- return dsa_slave_vlan_for_each(dev, dsa_slave_host_vlan_rx_filtering,
+ return dsa_user_vlan_for_each(dev, dsa_user_host_vlan_rx_filtering,
&ctx);
}
-void dsa_slave_sync_ha(struct net_device *dev)
+void dsa_user_sync_ha(struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
struct netdev_hw_addr *ha;
netif_addr_lock_bh(dev);
netdev_for_each_synced_mc_addr(ha, dev)
- dsa_slave_sync_mc(dev, ha->addr);
+ dsa_user_sync_mc(dev, ha->addr);
netdev_for_each_synced_uc_addr(ha, dev)
- dsa_slave_sync_uc(dev, ha->addr);
+ dsa_user_sync_uc(dev, ha->addr);
netif_addr_unlock_bh(dev);
@@ -294,19 +294,19 @@ void dsa_slave_sync_ha(struct net_device *dev)
dsa_flush_workqueue();
}
-void dsa_slave_unsync_ha(struct net_device *dev)
+void dsa_user_unsync_ha(struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
struct netdev_hw_addr *ha;
netif_addr_lock_bh(dev);
netdev_for_each_synced_uc_addr(ha, dev)
- dsa_slave_unsync_uc(dev, ha->addr);
+ dsa_user_unsync_uc(dev, ha->addr);
netdev_for_each_synced_mc_addr(ha, dev)
- dsa_slave_unsync_mc(dev, ha->addr);
+ dsa_user_unsync_mc(dev, ha->addr);
netif_addr_unlock_bh(dev);
@@ -315,8 +315,8 @@ void dsa_slave_unsync_ha(struct net_device *dev)
dsa_flush_workqueue();
}
-/* slave mii_bus handling ***************************************************/
-static int dsa_slave_phy_read(struct mii_bus *bus, int addr, int reg)
+/* user mii_bus handling ***************************************************/
+static int dsa_user_phy_read(struct mii_bus *bus, int addr, int reg)
{
struct dsa_switch *ds = bus->priv;
@@ -326,7 +326,7 @@ static int dsa_slave_phy_read(struct mii_bus *bus, int addr, int reg)
return 0xffff;
}
-static int dsa_slave_phy_write(struct mii_bus *bus, int addr, int reg, u16 val)
+static int dsa_user_phy_write(struct mii_bus *bus, int addr, int reg, u16 val)
{
struct dsa_switch *ds = bus->priv;
@@ -336,35 +336,35 @@ static int dsa_slave_phy_write(struct mii_bus *bus, int addr, int reg, u16 val)
return 0;
}
-void dsa_slave_mii_bus_init(struct dsa_switch *ds)
+void dsa_user_mii_bus_init(struct dsa_switch *ds)
{
- ds->slave_mii_bus->priv = (void *)ds;
- ds->slave_mii_bus->name = "dsa slave smi";
- ds->slave_mii_bus->read = dsa_slave_phy_read;
- ds->slave_mii_bus->write = dsa_slave_phy_write;
- snprintf(ds->slave_mii_bus->id, MII_BUS_ID_SIZE, "dsa-%d.%d",
+ ds->user_mii_bus->priv = (void *)ds;
+ ds->user_mii_bus->name = "dsa user smi";
+ ds->user_mii_bus->read = dsa_user_phy_read;
+ ds->user_mii_bus->write = dsa_user_phy_write;
+ snprintf(ds->user_mii_bus->id, MII_BUS_ID_SIZE, "dsa-%d.%d",
ds->dst->index, ds->index);
- ds->slave_mii_bus->parent = ds->dev;
- ds->slave_mii_bus->phy_mask = ~ds->phys_mii_mask;
+ ds->user_mii_bus->parent = ds->dev;
+ ds->user_mii_bus->phy_mask = ~ds->phys_mii_mask;
}
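For context, dsa_user_phy_read()/dsa_user_phy_write() above simply dispatch to the switch driver's optional phy_read/phy_write ops, which back the user MII bus configured here. A minimal sketch of that driver side, assuming a hypothetical struct mydrv_priv and mydrv_mdio_*() register accessors:

/* Hypothetical driver hooks behind the user MII bus; struct mydrv_priv
 * and the mydrv_mdio_*() helpers are invented for illustration.
 */
static int mydrv_phy_read(struct dsa_switch *ds, int port, int regnum)
{
	struct mydrv_priv *priv = ds->priv;

	return mydrv_mdio_read(priv, port, regnum);	/* register value or -errno */
}

static int mydrv_phy_write(struct dsa_switch *ds, int port, int regnum, u16 val)
{
	struct mydrv_priv *priv = ds->priv;

	return mydrv_mdio_write(priv, port, regnum, val);
}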
-/* slave device handling ****************************************************/
-static int dsa_slave_get_iflink(const struct net_device *dev)
+/* user device handling ****************************************************/
+static int dsa_user_get_iflink(const struct net_device *dev)
{
- return dsa_slave_to_master(dev)->ifindex;
+ return dsa_user_to_conduit(dev)->ifindex;
}
-static int dsa_slave_open(struct net_device *dev)
+static int dsa_user_open(struct net_device *dev)
{
- struct net_device *master = dsa_slave_to_master(dev);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct net_device *conduit = dsa_user_to_conduit(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
int err;
- err = dev_open(master, NULL);
+ err = dev_open(conduit, NULL);
if (err < 0) {
- netdev_err(dev, "failed to open master %s\n", master->name);
+ netdev_err(dev, "failed to open conduit %s\n", conduit->name);
goto out;
}
@@ -374,8 +374,8 @@ static int dsa_slave_open(struct net_device *dev)
goto out;
}
- if (!ether_addr_equal(dev->dev_addr, master->dev_addr)) {
- err = dev_uc_add(master, dev->dev_addr);
+ if (!ether_addr_equal(dev->dev_addr, conduit->dev_addr)) {
+ err = dev_uc_add(conduit, dev->dev_addr);
if (err < 0)
goto del_host_addr;
}
@@ -387,8 +387,8 @@ static int dsa_slave_open(struct net_device *dev)
return 0;
del_unicast:
- if (!ether_addr_equal(dev->dev_addr, master->dev_addr))
- dev_uc_del(master, dev->dev_addr);
+ if (!ether_addr_equal(dev->dev_addr, conduit->dev_addr))
+ dev_uc_del(conduit, dev->dev_addr);
del_host_addr:
if (dsa_switch_supports_uc_filtering(ds))
dsa_port_standalone_host_fdb_del(dp, dev->dev_addr, 0);
@@ -396,16 +396,16 @@ out:
return err;
}
-static int dsa_slave_close(struct net_device *dev)
+static int dsa_user_close(struct net_device *dev)
{
- struct net_device *master = dsa_slave_to_master(dev);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct net_device *conduit = dsa_user_to_conduit(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
dsa_port_disable_rt(dp);
- if (!ether_addr_equal(dev->dev_addr, master->dev_addr))
- dev_uc_del(master, dev->dev_addr);
+ if (!ether_addr_equal(dev->dev_addr, conduit->dev_addr))
+ dev_uc_del(conduit, dev->dev_addr);
if (dsa_switch_supports_uc_filtering(ds))
dsa_port_standalone_host_fdb_del(dp, dev->dev_addr, 0);
@@ -413,43 +413,43 @@ static int dsa_slave_close(struct net_device *dev)
return 0;
}
-static void dsa_slave_manage_host_flood(struct net_device *dev)
+static void dsa_user_manage_host_flood(struct net_device *dev)
{
bool mc = dev->flags & (IFF_PROMISC | IFF_ALLMULTI);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
bool uc = dev->flags & IFF_PROMISC;
dsa_port_set_host_flood(dp, uc, mc);
}
-static void dsa_slave_change_rx_flags(struct net_device *dev, int change)
+static void dsa_user_change_rx_flags(struct net_device *dev, int change)
{
- struct net_device *master = dsa_slave_to_master(dev);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct net_device *conduit = dsa_user_to_conduit(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (change & IFF_ALLMULTI)
- dev_set_allmulti(master,
+ dev_set_allmulti(conduit,
dev->flags & IFF_ALLMULTI ? 1 : -1);
if (change & IFF_PROMISC)
- dev_set_promiscuity(master,
+ dev_set_promiscuity(conduit,
dev->flags & IFF_PROMISC ? 1 : -1);
if (dsa_switch_supports_uc_filtering(ds) &&
dsa_switch_supports_mc_filtering(ds))
- dsa_slave_manage_host_flood(dev);
+ dsa_user_manage_host_flood(dev);
}
-static void dsa_slave_set_rx_mode(struct net_device *dev)
+static void dsa_user_set_rx_mode(struct net_device *dev)
{
- __dev_mc_sync(dev, dsa_slave_sync_mc, dsa_slave_unsync_mc);
- __dev_uc_sync(dev, dsa_slave_sync_uc, dsa_slave_unsync_uc);
+ __dev_mc_sync(dev, dsa_user_sync_mc, dsa_user_unsync_mc);
+ __dev_uc_sync(dev, dsa_user_sync_uc, dsa_user_unsync_uc);
}
-static int dsa_slave_set_mac_address(struct net_device *dev, void *a)
+static int dsa_user_set_mac_address(struct net_device *dev, void *a)
{
- struct net_device *master = dsa_slave_to_master(dev);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct net_device *conduit = dsa_user_to_conduit(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
struct sockaddr *addr = a;
int err;
@@ -457,8 +457,15 @@ static int dsa_slave_set_mac_address(struct net_device *dev, void *a)
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
+ if (ds->ops->port_set_mac_address) {
+ err = ds->ops->port_set_mac_address(ds, dp->index,
+ addr->sa_data);
+ if (err)
+ return err;
+ }
+
/* If the port is down, the address isn't synced yet to hardware or
- * to the DSA master, so there is nothing to change.
+ * to the DSA conduit, so there is nothing to change.
*/
if (!(dev->flags & IFF_UP))
goto out_change_dev_addr;
@@ -469,14 +476,14 @@ static int dsa_slave_set_mac_address(struct net_device *dev, void *a)
return err;
}
- if (!ether_addr_equal(addr->sa_data, master->dev_addr)) {
- err = dev_uc_add(master, addr->sa_data);
+ if (!ether_addr_equal(addr->sa_data, conduit->dev_addr)) {
+ err = dev_uc_add(conduit, addr->sa_data);
if (err < 0)
goto del_unicast;
}
- if (!ether_addr_equal(dev->dev_addr, master->dev_addr))
- dev_uc_del(master, dev->dev_addr);
+ if (!ether_addr_equal(dev->dev_addr, conduit->dev_addr))
+ dev_uc_del(conduit, dev->dev_addr);
if (dsa_switch_supports_uc_filtering(ds))
dsa_port_standalone_host_fdb_del(dp, dev->dev_addr, 0);
@@ -493,7 +500,7 @@ del_unicast:
return err;
}
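The new hunk above has dsa_user_set_mac_address() call an optional port_set_mac_address switch op before it touches the standalone host FDB and the conduit's unicast list. A sketch of what a driver-side implementation might look like; the prototype is only inferred from the call site, and the PORT_SA_* register layout plus mydrv_write() are invented:

/* Hypothetical implementation of the op invoked above; registers and
 * helpers are made up, only the (ds, port, addr) shape comes from the
 * call site.
 */
static int mydrv_port_set_mac_address(struct dsa_switch *ds, int port,
				      const unsigned char *addr)
{
	struct mydrv_priv *priv = ds->priv;
	int err;

	err = mydrv_write(priv, PORT_SA_HI(port), (addr[0] << 8) | addr[1]);
	if (err)
		return err;

	return mydrv_write(priv, PORT_SA_LO(port),
			   (addr[2] << 24) | (addr[3] << 16) |
			   (addr[4] << 8) | addr[5]);
}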
-struct dsa_slave_dump_ctx {
+struct dsa_user_dump_ctx {
struct net_device *dev;
struct sk_buff *skb;
struct netlink_callback *cb;
@@ -501,10 +508,10 @@ struct dsa_slave_dump_ctx {
};
static int
-dsa_slave_port_fdb_do_dump(const unsigned char *addr, u16 vid,
- bool is_static, void *data)
+dsa_user_port_fdb_do_dump(const unsigned char *addr, u16 vid,
+ bool is_static, void *data)
{
- struct dsa_slave_dump_ctx *dump = data;
+ struct dsa_user_dump_ctx *dump = data;
u32 portid = NETLINK_CB(dump->cb->skb).portid;
u32 seq = dump->cb->nlh->nlmsg_seq;
struct nlmsghdr *nlh;
@@ -545,12 +552,12 @@ nla_put_failure:
}
static int
-dsa_slave_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
- struct net_device *dev, struct net_device *filter_dev,
- int *idx)
+dsa_user_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
+ struct net_device *dev, struct net_device *filter_dev,
+ int *idx)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
- struct dsa_slave_dump_ctx dump = {
+ struct dsa_port *dp = dsa_user_to_port(dev);
+ struct dsa_user_dump_ctx dump = {
.dev = dev,
.skb = skb,
.cb = cb,
@@ -558,15 +565,15 @@ dsa_slave_fdb_dump(struct sk_buff *skb, struct netlink_callback *cb,
};
int err;
- err = dsa_port_fdb_dump(dp, dsa_slave_port_fdb_do_dump, &dump);
+ err = dsa_port_fdb_dump(dp, dsa_user_port_fdb_do_dump, &dump);
*idx = dump.idx;
return err;
}
-static int dsa_slave_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+static int dsa_user_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
{
- struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_user_priv *p = netdev_priv(dev);
struct dsa_switch *ds = p->dp->ds;
int port = p->dp->index;
@@ -585,11 +592,11 @@ static int dsa_slave_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
return phylink_mii_ioctl(p->dp->pl, ifr, cmd);
}
-static int dsa_slave_port_attr_set(struct net_device *dev, const void *ctx,
- const struct switchdev_attr *attr,
- struct netlink_ext_ack *extack)
+static int dsa_user_port_attr_set(struct net_device *dev, const void *ctx,
+ const struct switchdev_attr *attr,
+ struct netlink_ext_ack *extack)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
int ret;
if (ctx && ctx != dp)
@@ -656,13 +663,13 @@ static int dsa_slave_port_attr_set(struct net_device *dev, const void *ctx,
/* Must be called under rcu_read_lock() */
static int
-dsa_slave_vlan_check_for_8021q_uppers(struct net_device *slave,
- const struct switchdev_obj_port_vlan *vlan)
+dsa_user_vlan_check_for_8021q_uppers(struct net_device *user,
+ const struct switchdev_obj_port_vlan *vlan)
{
struct net_device *upper_dev;
struct list_head *iter;
- netdev_for_each_upper_dev_rcu(slave, upper_dev, iter) {
+ netdev_for_each_upper_dev_rcu(user, upper_dev, iter) {
u16 vid;
if (!is_vlan_dev(upper_dev))
@@ -676,11 +683,11 @@ dsa_slave_vlan_check_for_8021q_uppers(struct net_device *slave,
return 0;
}
-static int dsa_slave_vlan_add(struct net_device *dev,
- const struct switchdev_obj *obj,
- struct netlink_ext_ack *extack)
+static int dsa_user_vlan_add(struct net_device *dev,
+ const struct switchdev_obj *obj,
+ struct netlink_ext_ack *extack)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct switchdev_obj_port_vlan *vlan;
int err;
@@ -696,7 +703,7 @@ static int dsa_slave_vlan_add(struct net_device *dev,
*/
if (br_vlan_enabled(dsa_port_bridge_dev_get(dp))) {
rcu_read_lock();
- err = dsa_slave_vlan_check_for_8021q_uppers(dev, vlan);
+ err = dsa_user_vlan_check_for_8021q_uppers(dev, vlan);
rcu_read_unlock();
if (err) {
NL_SET_ERR_MSG_MOD(extack,
@@ -711,11 +718,11 @@ static int dsa_slave_vlan_add(struct net_device *dev,
/* Offload a VLAN installed on the bridge or on a foreign interface by
* installing it as a VLAN towards the CPU port.
*/
-static int dsa_slave_host_vlan_add(struct net_device *dev,
- const struct switchdev_obj *obj,
- struct netlink_ext_ack *extack)
+static int dsa_user_host_vlan_add(struct net_device *dev,
+ const struct switchdev_obj *obj,
+ struct netlink_ext_ack *extack)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct switchdev_obj_port_vlan vlan;
/* Do nothing if this is a software bridge */
@@ -737,11 +744,11 @@ static int dsa_slave_host_vlan_add(struct net_device *dev,
return dsa_port_host_vlan_add(dp, &vlan, extack);
}
-static int dsa_slave_port_obj_add(struct net_device *dev, const void *ctx,
- const struct switchdev_obj *obj,
- struct netlink_ext_ack *extack)
+static int dsa_user_port_obj_add(struct net_device *dev, const void *ctx,
+ const struct switchdev_obj *obj,
+ struct netlink_ext_ack *extack)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
int err;
if (ctx && ctx != dp)
@@ -762,9 +769,9 @@ static int dsa_slave_port_obj_add(struct net_device *dev, const void *ctx,
break;
case SWITCHDEV_OBJ_ID_PORT_VLAN:
if (dsa_port_offloads_bridge_port(dp, obj->orig_dev))
- err = dsa_slave_vlan_add(dev, obj, extack);
+ err = dsa_user_vlan_add(dev, obj, extack);
else
- err = dsa_slave_host_vlan_add(dev, obj, extack);
+ err = dsa_user_host_vlan_add(dev, obj, extack);
break;
case SWITCHDEV_OBJ_ID_MRP:
if (!dsa_port_offloads_bridge_dev(dp, obj->orig_dev))
@@ -787,10 +794,10 @@ static int dsa_slave_port_obj_add(struct net_device *dev, const void *ctx,
return err;
}
-static int dsa_slave_vlan_del(struct net_device *dev,
- const struct switchdev_obj *obj)
+static int dsa_user_vlan_del(struct net_device *dev,
+ const struct switchdev_obj *obj)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct switchdev_obj_port_vlan *vlan;
if (dsa_port_skip_vlan_configuration(dp))
@@ -801,10 +808,10 @@ static int dsa_slave_vlan_del(struct net_device *dev,
return dsa_port_vlan_del(dp, vlan);
}
-static int dsa_slave_host_vlan_del(struct net_device *dev,
- const struct switchdev_obj *obj)
+static int dsa_user_host_vlan_del(struct net_device *dev,
+ const struct switchdev_obj *obj)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct switchdev_obj_port_vlan *vlan;
/* Do nothing if this is a software bridge */
@@ -819,10 +826,10 @@ static int dsa_slave_host_vlan_del(struct net_device *dev,
return dsa_port_host_vlan_del(dp, vlan);
}
-static int dsa_slave_port_obj_del(struct net_device *dev, const void *ctx,
- const struct switchdev_obj *obj)
+static int dsa_user_port_obj_del(struct net_device *dev, const void *ctx,
+ const struct switchdev_obj *obj)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
int err;
if (ctx && ctx != dp)
@@ -843,9 +850,9 @@ static int dsa_slave_port_obj_del(struct net_device *dev, const void *ctx,
break;
case SWITCHDEV_OBJ_ID_PORT_VLAN:
if (dsa_port_offloads_bridge_port(dp, obj->orig_dev))
- err = dsa_slave_vlan_del(dev, obj);
+ err = dsa_user_vlan_del(dev, obj);
else
- err = dsa_slave_host_vlan_del(dev, obj);
+ err = dsa_user_host_vlan_del(dev, obj);
break;
case SWITCHDEV_OBJ_ID_MRP:
if (!dsa_port_offloads_bridge_dev(dp, obj->orig_dev))
@@ -868,11 +875,11 @@ static int dsa_slave_port_obj_del(struct net_device *dev, const void *ctx,
return err;
}
-static inline netdev_tx_t dsa_slave_netpoll_send_skb(struct net_device *dev,
- struct sk_buff *skb)
+static inline netdev_tx_t dsa_user_netpoll_send_skb(struct net_device *dev,
+ struct sk_buff *skb)
{
#ifdef CONFIG_NET_POLL_CONTROLLER
- struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_user_priv *p = netdev_priv(dev);
return netpoll_send_skb(p->netpoll, skb);
#else
@@ -881,7 +888,7 @@ static inline netdev_tx_t dsa_slave_netpoll_send_skb(struct net_device *dev,
#endif
}
-static void dsa_skb_tx_timestamp(struct dsa_slave_priv *p,
+static void dsa_skb_tx_timestamp(struct dsa_user_priv *p,
struct sk_buff *skb)
{
struct dsa_switch *ds = p->dp->ds;
@@ -901,12 +908,12 @@ netdev_tx_t dsa_enqueue_skb(struct sk_buff *skb, struct net_device *dev)
* tag to be successfully transmitted
*/
if (unlikely(netpoll_tx_running(dev)))
- return dsa_slave_netpoll_send_skb(dev, skb);
+ return dsa_user_netpoll_send_skb(dev, skb);
/* Queue the SKB for transmission on the parent interface, but
* do not modify its EtherType
*/
- skb->dev = dsa_slave_to_master(dev);
+ skb->dev = dsa_user_to_conduit(dev);
dev_queue_xmit(skb);
return NETDEV_TX_OK;
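dsa_enqueue_skb() is the tail of the transmit path: dsa_user_xmit() first runs the skb through the tagging protocol's xmit hook and then queues the tagged frame on the conduit here. A minimal sketch of such a tagger hook, with an invented 4-byte MYTAG header placed between the MAC addresses and the EtherType (only the dsa_alloc_etype_header()/dsa_etype_header_pos_tx() helpers are the real ones taggers use):

/* Illustrative tagger xmit hook; MYTAG_HLEN and the field layout are
 * invented for this sketch.
 */
#define MYTAG_HLEN	4

static struct sk_buff *mytag_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct dsa_port *dp = dsa_user_to_port(dev);
	u8 *tag;

	skb_push(skb, MYTAG_HLEN);
	dsa_alloc_etype_header(skb, MYTAG_HLEN);	/* slide MAC addresses up */

	tag = dsa_etype_header_pos_tx(skb);
	tag[0] = 0x80;			/* hypothetical FROM_CPU opcode */
	tag[1] = dp->index;		/* egress port on the switch */
	tag[2] = 0;
	tag[3] = 0;

	return skb;
}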
@@ -920,7 +927,7 @@ static int dsa_realloc_skb(struct sk_buff *skb, struct net_device *dev)
/* For tail taggers, we need to pad short frames ourselves, to ensure
* that the tail tag does not fail at its role of being at the end of
- * the packet, once the master interface pads the frame. Account for
+ * the packet, once the conduit interface pads the frame. Account for
* that pad length here, and pad later.
*/
if (unlikely(needed_tailroom && skb->len < ETH_ZLEN))
@@ -937,9 +944,9 @@ static int dsa_realloc_skb(struct sk_buff *skb, struct net_device *dev)
GFP_ATOMIC);
}
-static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev)
+static netdev_tx_t dsa_user_xmit(struct sk_buff *skb, struct net_device *dev)
{
- struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_user_priv *p = netdev_priv(dev);
struct sk_buff *nskb;
dev_sw_netstats_tx_add(dev, 1, skb->len);
@@ -974,17 +981,17 @@ static netdev_tx_t dsa_slave_xmit(struct sk_buff *skb, struct net_device *dev)
/* ethtool operations *******************************************************/
-static void dsa_slave_get_drvinfo(struct net_device *dev,
- struct ethtool_drvinfo *drvinfo)
+static void dsa_user_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *drvinfo)
{
strscpy(drvinfo->driver, "dsa", sizeof(drvinfo->driver));
strscpy(drvinfo->fw_version, "N/A", sizeof(drvinfo->fw_version));
strscpy(drvinfo->bus_info, "platform", sizeof(drvinfo->bus_info));
}
-static int dsa_slave_get_regs_len(struct net_device *dev)
+static int dsa_user_get_regs_len(struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->get_regs_len)
@@ -994,25 +1001,25 @@ static int dsa_slave_get_regs_len(struct net_device *dev)
}
static void
-dsa_slave_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *_p)
+dsa_user_get_regs(struct net_device *dev, struct ethtool_regs *regs, void *_p)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->get_regs)
ds->ops->get_regs(ds, dp->index, regs, _p);
}
-static int dsa_slave_nway_reset(struct net_device *dev)
+static int dsa_user_nway_reset(struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
return phylink_ethtool_nway_reset(dp->pl);
}
-static int dsa_slave_get_eeprom_len(struct net_device *dev)
+static int dsa_user_get_eeprom_len(struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->cd && ds->cd->eeprom_len)
@@ -1024,10 +1031,10 @@ static int dsa_slave_get_eeprom_len(struct net_device *dev)
return 0;
}
-static int dsa_slave_get_eeprom(struct net_device *dev,
- struct ethtool_eeprom *eeprom, u8 *data)
+static int dsa_user_get_eeprom(struct net_device *dev,
+ struct ethtool_eeprom *eeprom, u8 *data)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->get_eeprom)
@@ -1036,10 +1043,10 @@ static int dsa_slave_get_eeprom(struct net_device *dev,
return -EOPNOTSUPP;
}
-static int dsa_slave_set_eeprom(struct net_device *dev,
- struct ethtool_eeprom *eeprom, u8 *data)
+static int dsa_user_set_eeprom(struct net_device *dev,
+ struct ethtool_eeprom *eeprom, u8 *data)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->set_eeprom)
@@ -1048,10 +1055,10 @@ static int dsa_slave_set_eeprom(struct net_device *dev,
return -EOPNOTSUPP;
}
-static void dsa_slave_get_strings(struct net_device *dev,
- uint32_t stringset, uint8_t *data)
+static void dsa_user_get_strings(struct net_device *dev,
+ uint32_t stringset, uint8_t *data)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (stringset == ETH_SS_STATS) {
@@ -1070,11 +1077,11 @@ static void dsa_slave_get_strings(struct net_device *dev,
}
-static void dsa_slave_get_ethtool_stats(struct net_device *dev,
- struct ethtool_stats *stats,
- uint64_t *data)
+static void dsa_user_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats,
+ uint64_t *data)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
struct pcpu_sw_netstats *s;
unsigned int start;
@@ -1100,9 +1107,9 @@ static void dsa_slave_get_ethtool_stats(struct net_device *dev,
ds->ops->get_ethtool_stats(ds, dp->index, data + 4);
}
-static int dsa_slave_get_sset_count(struct net_device *dev, int sset)
+static int dsa_user_get_sset_count(struct net_device *dev, int sset)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (sset == ETH_SS_STATS) {
@@ -1122,20 +1129,20 @@ static int dsa_slave_get_sset_count(struct net_device *dev, int sset)
return -EOPNOTSUPP;
}
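The wrappers above keep four software counters at the front of the stats array (note the data + 4 in the call above) and delegate the rest to the driver's get_strings/get_ethtool_stats/get_sset_count ops. A driver-side sketch with an invented two-counter table; mydrv_read_counter() is hypothetical:

/* Hypothetical driver-side counterpart; the stat list and
 * mydrv_read_counter() are invented for illustration.
 */
static const char * const mydrv_stat_names[] = {
	"in_good_octets",
	"out_good_octets",
};

static int mydrv_get_sset_count(struct dsa_switch *ds, int port, int sset)
{
	return sset == ETH_SS_STATS ? ARRAY_SIZE(mydrv_stat_names) : 0;
}

static void mydrv_get_strings(struct dsa_switch *ds, int port,
			      u32 stringset, u8 *data)
{
	int i;

	if (stringset != ETH_SS_STATS)
		return;

	for (i = 0; i < ARRAY_SIZE(mydrv_stat_names); i++)
		ethtool_sprintf(&data, "%s", mydrv_stat_names[i]);
}

static void mydrv_get_ethtool_stats(struct dsa_switch *ds, int port,
				    uint64_t *data)
{
	int i;

	for (i = 0; i < ARRAY_SIZE(mydrv_stat_names); i++)
		data[i] = mydrv_read_counter(ds->priv, port, i);
}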
-static void dsa_slave_get_eth_phy_stats(struct net_device *dev,
- struct ethtool_eth_phy_stats *phy_stats)
+static void dsa_user_get_eth_phy_stats(struct net_device *dev,
+ struct ethtool_eth_phy_stats *phy_stats)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->get_eth_phy_stats)
ds->ops->get_eth_phy_stats(ds, dp->index, phy_stats);
}
-static void dsa_slave_get_eth_mac_stats(struct net_device *dev,
- struct ethtool_eth_mac_stats *mac_stats)
+static void dsa_user_get_eth_mac_stats(struct net_device *dev,
+ struct ethtool_eth_mac_stats *mac_stats)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->get_eth_mac_stats)
@@ -1143,10 +1150,10 @@ static void dsa_slave_get_eth_mac_stats(struct net_device *dev,
}
static void
-dsa_slave_get_eth_ctrl_stats(struct net_device *dev,
- struct ethtool_eth_ctrl_stats *ctrl_stats)
+dsa_user_get_eth_ctrl_stats(struct net_device *dev,
+ struct ethtool_eth_ctrl_stats *ctrl_stats)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->get_eth_ctrl_stats)
@@ -1154,21 +1161,21 @@ dsa_slave_get_eth_ctrl_stats(struct net_device *dev,
}
static void
-dsa_slave_get_rmon_stats(struct net_device *dev,
- struct ethtool_rmon_stats *rmon_stats,
- const struct ethtool_rmon_hist_range **ranges)
+dsa_user_get_rmon_stats(struct net_device *dev,
+ struct ethtool_rmon_stats *rmon_stats,
+ const struct ethtool_rmon_hist_range **ranges)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->get_rmon_stats)
ds->ops->get_rmon_stats(ds, dp->index, rmon_stats, ranges);
}
-static void dsa_slave_net_selftest(struct net_device *ndev,
- struct ethtool_test *etest, u64 *buf)
+static void dsa_user_net_selftest(struct net_device *ndev,
+ struct ethtool_test *etest, u64 *buf)
{
- struct dsa_port *dp = dsa_slave_to_port(ndev);
+ struct dsa_port *dp = dsa_user_to_port(ndev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->self_test) {
@@ -1179,10 +1186,10 @@ static void dsa_slave_net_selftest(struct net_device *ndev,
net_selftest(ndev, etest, buf);
}
-static int dsa_slave_get_mm(struct net_device *dev,
- struct ethtool_mm_state *state)
+static int dsa_user_get_mm(struct net_device *dev,
+ struct ethtool_mm_state *state)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (!ds->ops->get_mm)
@@ -1191,10 +1198,10 @@ static int dsa_slave_get_mm(struct net_device *dev,
return ds->ops->get_mm(ds, dp->index, state);
}
-static int dsa_slave_set_mm(struct net_device *dev, struct ethtool_mm_cfg *cfg,
- struct netlink_ext_ack *extack)
+static int dsa_user_set_mm(struct net_device *dev, struct ethtool_mm_cfg *cfg,
+ struct netlink_ext_ack *extack)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (!ds->ops->set_mm)
@@ -1203,19 +1210,19 @@ static int dsa_slave_set_mm(struct net_device *dev, struct ethtool_mm_cfg *cfg,
return ds->ops->set_mm(ds, dp->index, cfg, extack);
}
-static void dsa_slave_get_mm_stats(struct net_device *dev,
- struct ethtool_mm_stats *stats)
+static void dsa_user_get_mm_stats(struct net_device *dev,
+ struct ethtool_mm_stats *stats)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->get_mm_stats)
ds->ops->get_mm_stats(ds, dp->index, stats);
}
-static void dsa_slave_get_wol(struct net_device *dev, struct ethtool_wolinfo *w)
+static void dsa_user_get_wol(struct net_device *dev, struct ethtool_wolinfo *w)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
phylink_ethtool_get_wol(dp->pl, w);
@@ -1224,9 +1231,9 @@ static void dsa_slave_get_wol(struct net_device *dev, struct ethtool_wolinfo *w)
ds->ops->get_wol(ds, dp->index, w);
}
-static int dsa_slave_set_wol(struct net_device *dev, struct ethtool_wolinfo *w)
+static int dsa_user_set_wol(struct net_device *dev, struct ethtool_wolinfo *w)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
int ret = -EOPNOTSUPP;
@@ -1238,9 +1245,9 @@ static int dsa_slave_set_wol(struct net_device *dev, struct ethtool_wolinfo *w)
return ret;
}
-static int dsa_slave_set_eee(struct net_device *dev, struct ethtool_eee *e)
+static int dsa_user_set_eee(struct net_device *dev, struct ethtool_eee *e)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
int ret;
@@ -1258,9 +1265,9 @@ static int dsa_slave_set_eee(struct net_device *dev, struct ethtool_eee *e)
return phylink_ethtool_set_eee(dp->pl, e);
}
-static int dsa_slave_get_eee(struct net_device *dev, struct ethtool_eee *e)
+static int dsa_user_get_eee(struct net_device *dev, struct ethtool_eee *e)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
int ret;
@@ -1278,54 +1285,54 @@ static int dsa_slave_get_eee(struct net_device *dev, struct ethtool_eee *e)
return phylink_ethtool_get_eee(dp->pl, e);
}
-static int dsa_slave_get_link_ksettings(struct net_device *dev,
- struct ethtool_link_ksettings *cmd)
+static int dsa_user_get_link_ksettings(struct net_device *dev,
+ struct ethtool_link_ksettings *cmd)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
return phylink_ethtool_ksettings_get(dp->pl, cmd);
}
-static int dsa_slave_set_link_ksettings(struct net_device *dev,
- const struct ethtool_link_ksettings *cmd)
+static int dsa_user_set_link_ksettings(struct net_device *dev,
+ const struct ethtool_link_ksettings *cmd)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
return phylink_ethtool_ksettings_set(dp->pl, cmd);
}
-static void dsa_slave_get_pause_stats(struct net_device *dev,
- struct ethtool_pause_stats *pause_stats)
+static void dsa_user_get_pause_stats(struct net_device *dev,
+ struct ethtool_pause_stats *pause_stats)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->get_pause_stats)
ds->ops->get_pause_stats(ds, dp->index, pause_stats);
}
-static void dsa_slave_get_pauseparam(struct net_device *dev,
- struct ethtool_pauseparam *pause)
+static void dsa_user_get_pauseparam(struct net_device *dev,
+ struct ethtool_pauseparam *pause)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
phylink_ethtool_get_pauseparam(dp->pl, pause);
}
-static int dsa_slave_set_pauseparam(struct net_device *dev,
- struct ethtool_pauseparam *pause)
+static int dsa_user_set_pauseparam(struct net_device *dev,
+ struct ethtool_pauseparam *pause)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
return phylink_ethtool_set_pauseparam(dp->pl, pause);
}
#ifdef CONFIG_NET_POLL_CONTROLLER
-static int dsa_slave_netpoll_setup(struct net_device *dev,
- struct netpoll_info *ni)
+static int dsa_user_netpoll_setup(struct net_device *dev,
+ struct netpoll_info *ni)
{
- struct net_device *master = dsa_slave_to_master(dev);
- struct dsa_slave_priv *p = netdev_priv(dev);
+ struct net_device *conduit = dsa_user_to_conduit(dev);
+ struct dsa_user_priv *p = netdev_priv(dev);
struct netpoll *netpoll;
int err = 0;
@@ -1333,7 +1340,7 @@ static int dsa_slave_netpoll_setup(struct net_device *dev,
if (!netpoll)
return -ENOMEM;
- err = __netpoll_setup(netpoll, master);
+ err = __netpoll_setup(netpoll, conduit);
if (err) {
kfree(netpoll);
goto out;
@@ -1344,9 +1351,9 @@ out:
return err;
}
-static void dsa_slave_netpoll_cleanup(struct net_device *dev)
+static void dsa_user_netpoll_cleanup(struct net_device *dev)
{
- struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_user_priv *p = netdev_priv(dev);
struct netpoll *netpoll = p->netpoll;
if (!netpoll)
@@ -1357,15 +1364,15 @@ static void dsa_slave_netpoll_cleanup(struct net_device *dev)
__netpoll_free(netpoll);
}
-static void dsa_slave_poll_controller(struct net_device *dev)
+static void dsa_user_poll_controller(struct net_device *dev)
{
}
#endif
static struct dsa_mall_tc_entry *
-dsa_slave_mall_tc_entry_find(struct net_device *dev, unsigned long cookie)
+dsa_user_mall_tc_entry_find(struct net_device *dev, unsigned long cookie)
{
- struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_user_priv *p = netdev_priv(dev);
struct dsa_mall_tc_entry *mall_tc_entry;
list_for_each_entry(mall_tc_entry, &p->mall_tc_list, list)
@@ -1376,13 +1383,13 @@ dsa_slave_mall_tc_entry_find(struct net_device *dev, unsigned long cookie)
}
static int
-dsa_slave_add_cls_matchall_mirred(struct net_device *dev,
- struct tc_cls_matchall_offload *cls,
- bool ingress)
+dsa_user_add_cls_matchall_mirred(struct net_device *dev,
+ struct tc_cls_matchall_offload *cls,
+ bool ingress)
{
struct netlink_ext_ack *extack = cls->common.extack;
- struct dsa_port *dp = dsa_slave_to_port(dev);
- struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
+ struct dsa_user_priv *p = netdev_priv(dev);
struct dsa_mall_mirror_tc_entry *mirror;
struct dsa_mall_tc_entry *mall_tc_entry;
struct dsa_switch *ds = dp->ds;
@@ -1402,7 +1409,7 @@ dsa_slave_add_cls_matchall_mirred(struct net_device *dev,
if (!act->dev)
return -EINVAL;
- if (!dsa_slave_dev_check(act->dev))
+ if (!dsa_user_dev_check(act->dev))
return -EOPNOTSUPP;
mall_tc_entry = kzalloc(sizeof(*mall_tc_entry), GFP_KERNEL);
@@ -1413,7 +1420,7 @@ dsa_slave_add_cls_matchall_mirred(struct net_device *dev,
mall_tc_entry->type = DSA_PORT_MALL_MIRROR;
mirror = &mall_tc_entry->mirror;
- to_dp = dsa_slave_to_port(act->dev);
+ to_dp = dsa_user_to_port(act->dev);
mirror->to_local_port = to_dp->index;
mirror->ingress = ingress;
@@ -1430,13 +1437,13 @@ dsa_slave_add_cls_matchall_mirred(struct net_device *dev,
}
static int
-dsa_slave_add_cls_matchall_police(struct net_device *dev,
- struct tc_cls_matchall_offload *cls,
- bool ingress)
+dsa_user_add_cls_matchall_police(struct net_device *dev,
+ struct tc_cls_matchall_offload *cls,
+ bool ingress)
{
struct netlink_ext_ack *extack = cls->common.extack;
- struct dsa_port *dp = dsa_slave_to_port(dev);
- struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
+ struct dsa_user_priv *p = netdev_priv(dev);
struct dsa_mall_policer_tc_entry *policer;
struct dsa_mall_tc_entry *mall_tc_entry;
struct dsa_switch *ds = dp->ds;
@@ -1490,31 +1497,31 @@ dsa_slave_add_cls_matchall_police(struct net_device *dev,
return err;
}
-static int dsa_slave_add_cls_matchall(struct net_device *dev,
- struct tc_cls_matchall_offload *cls,
- bool ingress)
+static int dsa_user_add_cls_matchall(struct net_device *dev,
+ struct tc_cls_matchall_offload *cls,
+ bool ingress)
{
int err = -EOPNOTSUPP;
if (cls->common.protocol == htons(ETH_P_ALL) &&
flow_offload_has_one_action(&cls->rule->action) &&
cls->rule->action.entries[0].id == FLOW_ACTION_MIRRED)
- err = dsa_slave_add_cls_matchall_mirred(dev, cls, ingress);
+ err = dsa_user_add_cls_matchall_mirred(dev, cls, ingress);
else if (flow_offload_has_one_action(&cls->rule->action) &&
cls->rule->action.entries[0].id == FLOW_ACTION_POLICE)
- err = dsa_slave_add_cls_matchall_police(dev, cls, ingress);
+ err = dsa_user_add_cls_matchall_police(dev, cls, ingress);
return err;
}
-static void dsa_slave_del_cls_matchall(struct net_device *dev,
- struct tc_cls_matchall_offload *cls)
+static void dsa_user_del_cls_matchall(struct net_device *dev,
+ struct tc_cls_matchall_offload *cls)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_mall_tc_entry *mall_tc_entry;
struct dsa_switch *ds = dp->ds;
- mall_tc_entry = dsa_slave_mall_tc_entry_find(dev, cls->cookie);
+ mall_tc_entry = dsa_user_mall_tc_entry_find(dev, cls->cookie);
if (!mall_tc_entry)
return;
@@ -1537,29 +1544,29 @@ static void dsa_slave_del_cls_matchall(struct net_device *dev,
kfree(mall_tc_entry);
}
-static int dsa_slave_setup_tc_cls_matchall(struct net_device *dev,
- struct tc_cls_matchall_offload *cls,
- bool ingress)
+static int dsa_user_setup_tc_cls_matchall(struct net_device *dev,
+ struct tc_cls_matchall_offload *cls,
+ bool ingress)
{
if (cls->common.chain_index)
return -EOPNOTSUPP;
switch (cls->command) {
case TC_CLSMATCHALL_REPLACE:
- return dsa_slave_add_cls_matchall(dev, cls, ingress);
+ return dsa_user_add_cls_matchall(dev, cls, ingress);
case TC_CLSMATCHALL_DESTROY:
- dsa_slave_del_cls_matchall(dev, cls);
+ dsa_user_del_cls_matchall(dev, cls);
return 0;
default:
return -EOPNOTSUPP;
}
}
-static int dsa_slave_add_cls_flower(struct net_device *dev,
- struct flow_cls_offload *cls,
- bool ingress)
+static int dsa_user_add_cls_flower(struct net_device *dev,
+ struct flow_cls_offload *cls,
+ bool ingress)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
int port = dp->index;
@@ -1569,11 +1576,11 @@ static int dsa_slave_add_cls_flower(struct net_device *dev,
return ds->ops->cls_flower_add(ds, port, cls, ingress);
}
-static int dsa_slave_del_cls_flower(struct net_device *dev,
- struct flow_cls_offload *cls,
- bool ingress)
+static int dsa_user_del_cls_flower(struct net_device *dev,
+ struct flow_cls_offload *cls,
+ bool ingress)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
int port = dp->index;
@@ -1583,11 +1590,11 @@ static int dsa_slave_del_cls_flower(struct net_device *dev,
return ds->ops->cls_flower_del(ds, port, cls, ingress);
}
-static int dsa_slave_stats_cls_flower(struct net_device *dev,
- struct flow_cls_offload *cls,
- bool ingress)
+static int dsa_user_stats_cls_flower(struct net_device *dev,
+ struct flow_cls_offload *cls,
+ bool ingress)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
int port = dp->index;
@@ -1597,24 +1604,24 @@ static int dsa_slave_stats_cls_flower(struct net_device *dev,
return ds->ops->cls_flower_stats(ds, port, cls, ingress);
}
-static int dsa_slave_setup_tc_cls_flower(struct net_device *dev,
- struct flow_cls_offload *cls,
- bool ingress)
+static int dsa_user_setup_tc_cls_flower(struct net_device *dev,
+ struct flow_cls_offload *cls,
+ bool ingress)
{
switch (cls->command) {
case FLOW_CLS_REPLACE:
- return dsa_slave_add_cls_flower(dev, cls, ingress);
+ return dsa_user_add_cls_flower(dev, cls, ingress);
case FLOW_CLS_DESTROY:
- return dsa_slave_del_cls_flower(dev, cls, ingress);
+ return dsa_user_del_cls_flower(dev, cls, ingress);
case FLOW_CLS_STATS:
- return dsa_slave_stats_cls_flower(dev, cls, ingress);
+ return dsa_user_stats_cls_flower(dev, cls, ingress);
default:
return -EOPNOTSUPP;
}
}
-static int dsa_slave_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
- void *cb_priv, bool ingress)
+static int dsa_user_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
+ void *cb_priv, bool ingress)
{
struct net_device *dev = cb_priv;
@@ -1623,46 +1630,46 @@ static int dsa_slave_setup_tc_block_cb(enum tc_setup_type type, void *type_data,
switch (type) {
case TC_SETUP_CLSMATCHALL:
- return dsa_slave_setup_tc_cls_matchall(dev, type_data, ingress);
+ return dsa_user_setup_tc_cls_matchall(dev, type_data, ingress);
case TC_SETUP_CLSFLOWER:
- return dsa_slave_setup_tc_cls_flower(dev, type_data, ingress);
+ return dsa_user_setup_tc_cls_flower(dev, type_data, ingress);
default:
return -EOPNOTSUPP;
}
}
-static int dsa_slave_setup_tc_block_cb_ig(enum tc_setup_type type,
- void *type_data, void *cb_priv)
+static int dsa_user_setup_tc_block_cb_ig(enum tc_setup_type type,
+ void *type_data, void *cb_priv)
{
- return dsa_slave_setup_tc_block_cb(type, type_data, cb_priv, true);
+ return dsa_user_setup_tc_block_cb(type, type_data, cb_priv, true);
}
-static int dsa_slave_setup_tc_block_cb_eg(enum tc_setup_type type,
- void *type_data, void *cb_priv)
+static int dsa_user_setup_tc_block_cb_eg(enum tc_setup_type type,
+ void *type_data, void *cb_priv)
{
- return dsa_slave_setup_tc_block_cb(type, type_data, cb_priv, false);
+ return dsa_user_setup_tc_block_cb(type, type_data, cb_priv, false);
}
-static LIST_HEAD(dsa_slave_block_cb_list);
+static LIST_HEAD(dsa_user_block_cb_list);
-static int dsa_slave_setup_tc_block(struct net_device *dev,
- struct flow_block_offload *f)
+static int dsa_user_setup_tc_block(struct net_device *dev,
+ struct flow_block_offload *f)
{
struct flow_block_cb *block_cb;
flow_setup_cb_t *cb;
if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_INGRESS)
- cb = dsa_slave_setup_tc_block_cb_ig;
+ cb = dsa_user_setup_tc_block_cb_ig;
else if (f->binder_type == FLOW_BLOCK_BINDER_TYPE_CLSACT_EGRESS)
- cb = dsa_slave_setup_tc_block_cb_eg;
+ cb = dsa_user_setup_tc_block_cb_eg;
else
return -EOPNOTSUPP;
- f->driver_block_list = &dsa_slave_block_cb_list;
+ f->driver_block_list = &dsa_user_block_cb_list;
switch (f->command) {
case FLOW_BLOCK_BIND:
- if (flow_block_cb_is_busy(cb, dev, &dsa_slave_block_cb_list))
+ if (flow_block_cb_is_busy(cb, dev, &dsa_user_block_cb_list))
return -EBUSY;
block_cb = flow_block_cb_alloc(cb, dev, dev, NULL);
@@ -1670,7 +1677,7 @@ static int dsa_slave_setup_tc_block(struct net_device *dev,
return PTR_ERR(block_cb);
flow_block_cb_add(block_cb, f);
- list_add_tail(&block_cb->driver_list, &dsa_slave_block_cb_list);
+ list_add_tail(&block_cb->driver_list, &dsa_user_block_cb_list);
return 0;
case FLOW_BLOCK_UNBIND:
block_cb = flow_block_cb_lookup(f->block, cb, dev);
@@ -1685,28 +1692,28 @@ static int dsa_slave_setup_tc_block(struct net_device *dev,
}
}
-static int dsa_slave_setup_ft_block(struct dsa_switch *ds, int port,
- void *type_data)
+static int dsa_user_setup_ft_block(struct dsa_switch *ds, int port,
+ void *type_data)
{
- struct net_device *master = dsa_port_to_master(dsa_to_port(ds, port));
+ struct net_device *conduit = dsa_port_to_conduit(dsa_to_port(ds, port));
- if (!master->netdev_ops->ndo_setup_tc)
+ if (!conduit->netdev_ops->ndo_setup_tc)
return -EOPNOTSUPP;
- return master->netdev_ops->ndo_setup_tc(master, TC_SETUP_FT, type_data);
+ return conduit->netdev_ops->ndo_setup_tc(conduit, TC_SETUP_FT, type_data);
}
-static int dsa_slave_setup_tc(struct net_device *dev, enum tc_setup_type type,
- void *type_data)
+static int dsa_user_setup_tc(struct net_device *dev, enum tc_setup_type type,
+ void *type_data)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
switch (type) {
case TC_SETUP_BLOCK:
- return dsa_slave_setup_tc_block(dev, type_data);
+ return dsa_user_setup_tc_block(dev, type_data);
case TC_SETUP_FT:
- return dsa_slave_setup_ft_block(ds, dp->index, type_data);
+ return dsa_user_setup_ft_block(ds, dp->index, type_data);
default:
break;
}
@@ -1717,10 +1724,10 @@ static int dsa_slave_setup_tc(struct net_device *dev, enum tc_setup_type type,
return ds->ops->port_setup_tc(ds, dp->index, type, type_data);
}
-static int dsa_slave_get_rxnfc(struct net_device *dev,
- struct ethtool_rxnfc *nfc, u32 *rule_locs)
+static int dsa_user_get_rxnfc(struct net_device *dev,
+ struct ethtool_rxnfc *nfc, u32 *rule_locs)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (!ds->ops->get_rxnfc)
@@ -1729,10 +1736,10 @@ static int dsa_slave_get_rxnfc(struct net_device *dev,
return ds->ops->get_rxnfc(ds, dp->index, nfc, rule_locs);
}
-static int dsa_slave_set_rxnfc(struct net_device *dev,
- struct ethtool_rxnfc *nfc)
+static int dsa_user_set_rxnfc(struct net_device *dev,
+ struct ethtool_rxnfc *nfc)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (!ds->ops->set_rxnfc)
@@ -1741,10 +1748,10 @@ static int dsa_slave_set_rxnfc(struct net_device *dev,
return ds->ops->set_rxnfc(ds, dp->index, nfc);
}
-static int dsa_slave_get_ts_info(struct net_device *dev,
- struct ethtool_ts_info *ts)
+static int dsa_user_get_ts_info(struct net_device *dev,
+ struct ethtool_ts_info *ts)
{
- struct dsa_slave_priv *p = netdev_priv(dev);
+ struct dsa_user_priv *p = netdev_priv(dev);
struct dsa_switch *ds = p->dp->ds;
if (!ds->ops->get_ts_info)
@@ -1753,10 +1760,10 @@ static int dsa_slave_get_ts_info(struct net_device *dev,
return ds->ops->get_ts_info(ds, p->dp->index, ts);
}
-static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
- u16 vid)
+static int dsa_user_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
+ u16 vid)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct switchdev_obj_port_vlan vlan = {
.obj.id = SWITCHDEV_OBJ_ID_PORT_VLAN,
.vid = vid,
@@ -1803,15 +1810,15 @@ static int dsa_slave_vlan_rx_add_vid(struct net_device *dev, __be16 proto,
if (dsa_switch_supports_mc_filtering(ds)) {
netdev_for_each_synced_mc_addr(ha, dev) {
- dsa_slave_schedule_standalone_work(dev, DSA_MC_ADD,
- ha->addr, vid);
+ dsa_user_schedule_standalone_work(dev, DSA_MC_ADD,
+ ha->addr, vid);
}
}
if (dsa_switch_supports_uc_filtering(ds)) {
netdev_for_each_synced_uc_addr(ha, dev) {
- dsa_slave_schedule_standalone_work(dev, DSA_UC_ADD,
- ha->addr, vid);
+ dsa_user_schedule_standalone_work(dev, DSA_UC_ADD,
+ ha->addr, vid);
}
}
@@ -1828,10 +1835,10 @@ rollback:
return ret;
}
-static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
- u16 vid)
+static int dsa_user_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
+ u16 vid)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct switchdev_obj_port_vlan vlan = {
.vid = vid,
/* This API only allows programming tagged, non-PVID VIDs */
@@ -1867,15 +1874,15 @@ static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
if (dsa_switch_supports_mc_filtering(ds)) {
netdev_for_each_synced_mc_addr(ha, dev) {
- dsa_slave_schedule_standalone_work(dev, DSA_MC_DEL,
- ha->addr, vid);
+ dsa_user_schedule_standalone_work(dev, DSA_MC_DEL,
+ ha->addr, vid);
}
}
if (dsa_switch_supports_uc_filtering(ds)) {
netdev_for_each_synced_uc_addr(ha, dev) {
- dsa_slave_schedule_standalone_work(dev, DSA_UC_DEL,
- ha->addr, vid);
+ dsa_user_schedule_standalone_work(dev, DSA_UC_DEL,
+ ha->addr, vid);
}
}
@@ -1886,18 +1893,18 @@ static int dsa_slave_vlan_rx_kill_vid(struct net_device *dev, __be16 proto,
return 0;
}
-static int dsa_slave_restore_vlan(struct net_device *vdev, int vid, void *arg)
+static int dsa_user_restore_vlan(struct net_device *vdev, int vid, void *arg)
{
__be16 proto = vdev ? vlan_dev_vlan_proto(vdev) : htons(ETH_P_8021Q);
- return dsa_slave_vlan_rx_add_vid(arg, proto, vid);
+ return dsa_user_vlan_rx_add_vid(arg, proto, vid);
}
-static int dsa_slave_clear_vlan(struct net_device *vdev, int vid, void *arg)
+static int dsa_user_clear_vlan(struct net_device *vdev, int vid, void *arg)
{
__be16 proto = vdev ? vlan_dev_vlan_proto(vdev) : htons(ETH_P_8021Q);
- return dsa_slave_vlan_rx_kill_vid(arg, proto, vid);
+ return dsa_user_vlan_rx_kill_vid(arg, proto, vid);
}
/* Keep the VLAN RX filtering list in sync with the hardware only if VLAN
@@ -1931,26 +1938,26 @@ static int dsa_slave_clear_vlan(struct net_device *vdev, int vid, void *arg)
* - the bridge VLANs
* - the 8021q upper VLANs
*/
-int dsa_slave_manage_vlan_filtering(struct net_device *slave,
- bool vlan_filtering)
+int dsa_user_manage_vlan_filtering(struct net_device *user,
+ bool vlan_filtering)
{
int err;
if (vlan_filtering) {
- slave->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
+ user->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
- err = vlan_for_each(slave, dsa_slave_restore_vlan, slave);
+ err = vlan_for_each(user, dsa_user_restore_vlan, user);
if (err) {
- vlan_for_each(slave, dsa_slave_clear_vlan, slave);
- slave->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER;
+ vlan_for_each(user, dsa_user_clear_vlan, user);
+ user->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER;
return err;
}
} else {
- err = vlan_for_each(slave, dsa_slave_clear_vlan, slave);
+ err = vlan_for_each(user, dsa_user_clear_vlan, user);
if (err)
return err;
- slave->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER;
+ user->features &= ~NETIF_F_HW_VLAN_CTAG_FILTER;
}
return 0;
@@ -2021,7 +2028,7 @@ static void dsa_bridge_mtu_normalization(struct dsa_port *dp)
list_for_each_entry(dst, &dsa_tree_list, list) {
list_for_each_entry(other_dp, &dst->ports, list) {
struct dsa_hw_port *hw_port;
- struct net_device *slave;
+ struct net_device *user;
if (other_dp->type != DSA_PORT_TYPE_USER)
continue;
@@ -2032,17 +2039,17 @@ static void dsa_bridge_mtu_normalization(struct dsa_port *dp)
if (!other_dp->ds->mtu_enforcement_ingress)
continue;
- slave = other_dp->slave;
+ user = other_dp->user;
- if (min_mtu > slave->mtu)
- min_mtu = slave->mtu;
+ if (min_mtu > user->mtu)
+ min_mtu = user->mtu;
hw_port = kzalloc(sizeof(*hw_port), GFP_KERNEL);
if (!hw_port)
goto out;
- hw_port->dev = slave;
- hw_port->old_mtu = slave->mtu;
+ hw_port->dev = user;
+ hw_port->old_mtu = user->mtu;
list_add(&hw_port->list, &hw_port_list);
}
@@ -2052,7 +2059,7 @@ static void dsa_bridge_mtu_normalization(struct dsa_port *dp)
* interface's MTU first, regardless of whether the intention of the
* user was to raise or lower it.
*/
- err = dsa_hw_port_list_set_mtu(&hw_port_list, dp->slave->mtu);
+ err = dsa_hw_port_list_set_mtu(&hw_port_list, dp->user->mtu);
if (!err)
goto out;
@@ -2066,16 +2073,16 @@ out:
dsa_hw_port_list_free(&hw_port_list);
}
-int dsa_slave_change_mtu(struct net_device *dev, int new_mtu)
+int dsa_user_change_mtu(struct net_device *dev, int new_mtu)
{
- struct net_device *master = dsa_slave_to_master(dev);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct net_device *conduit = dsa_user_to_conduit(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_port *cpu_dp = dp->cpu_dp;
struct dsa_switch *ds = dp->ds;
struct dsa_port *other_dp;
int largest_mtu = 0;
- int new_master_mtu;
- int old_master_mtu;
+ int new_conduit_mtu;
+ int old_conduit_mtu;
int mtu_limit;
int overhead;
int cpu_mtu;
@@ -2085,44 +2092,44 @@ int dsa_slave_change_mtu(struct net_device *dev, int new_mtu)
return -EOPNOTSUPP;
dsa_tree_for_each_user_port(other_dp, ds->dst) {
- int slave_mtu;
+ int user_mtu;
- /* During probe, this function will be called for each slave
+ /* During probe, this function will be called for each user
* device, while not all of them have been allocated. That's
* ok, it doesn't change what the maximum is, so ignore it.
*/
- if (!other_dp->slave)
+ if (!other_dp->user)
continue;
/* Pretend that we already applied the setting, which we
* actually haven't (still haven't done all integrity checks)
*/
if (dp == other_dp)
- slave_mtu = new_mtu;
+ user_mtu = new_mtu;
else
- slave_mtu = other_dp->slave->mtu;
+ user_mtu = other_dp->user->mtu;
- if (largest_mtu < slave_mtu)
- largest_mtu = slave_mtu;
+ if (largest_mtu < user_mtu)
+ largest_mtu = user_mtu;
}
overhead = dsa_tag_protocol_overhead(cpu_dp->tag_ops);
- mtu_limit = min_t(int, master->max_mtu, dev->max_mtu + overhead);
- old_master_mtu = master->mtu;
- new_master_mtu = largest_mtu + overhead;
- if (new_master_mtu > mtu_limit)
+ mtu_limit = min_t(int, conduit->max_mtu, dev->max_mtu + overhead);
+ old_conduit_mtu = conduit->mtu;
+ new_conduit_mtu = largest_mtu + overhead;
+ if (new_conduit_mtu > mtu_limit)
return -ERANGE;
- /* If the master MTU isn't over limit, there's no need to check the CPU
+ /* If the conduit MTU isn't over limit, there's no need to check the CPU
* MTU, since that surely isn't either.
*/
cpu_mtu = largest_mtu;
/* Start applying stuff */
- if (new_master_mtu != old_master_mtu) {
- err = dev_set_mtu(master, new_master_mtu);
+ if (new_conduit_mtu != old_conduit_mtu) {
+ err = dev_set_mtu(conduit, new_conduit_mtu);
if (err < 0)
- goto out_master_failed;
+ goto out_conduit_failed;
/* We only need to propagate the MTU of the CPU port to
* upstream switches, so emit a notifier which updates them.
@@ -2143,19 +2150,19 @@ int dsa_slave_change_mtu(struct net_device *dev, int new_mtu)
return 0;
out_port_failed:
- if (new_master_mtu != old_master_mtu)
- dsa_port_mtu_change(cpu_dp, old_master_mtu - overhead);
+ if (new_conduit_mtu != old_conduit_mtu)
+ dsa_port_mtu_change(cpu_dp, old_conduit_mtu - overhead);
out_cpu_failed:
- if (new_master_mtu != old_master_mtu)
- dev_set_mtu(master, old_master_mtu);
-out_master_failed:
+ if (new_conduit_mtu != old_conduit_mtu)
+ dev_set_mtu(conduit, old_conduit_mtu);
+out_conduit_failed:
return err;
}
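To make the arithmetic above concrete: with a 1500-byte user MTU and, say, an 8-byte tag, the conduit must be raised to a 1508-byte MTU, while each switch port only has to honour the per-port value it is told. The per-driver half of that is the port_change_mtu switch op; a sketch, where mydrv_set_max_frame() is invented:

/* Hypothetical driver hook: translate the payload MTU into the longest
 * frame the port must accept. mydrv_set_max_frame() is made up.
 */
static int mydrv_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)
{
	return mydrv_set_max_frame(ds->priv, port,
				   new_mtu + ETH_HLEN + ETH_FCS_LEN);
}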
static int __maybe_unused
-dsa_slave_dcbnl_set_default_prio(struct net_device *dev, struct dcb_app *app)
+dsa_user_dcbnl_set_default_prio(struct net_device *dev, struct dcb_app *app)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
unsigned long mask, new_prio;
int err, port = dp->index;
@@ -2180,9 +2187,9 @@ dsa_slave_dcbnl_set_default_prio(struct net_device *dev, struct dcb_app *app)
}
static int __maybe_unused
-dsa_slave_dcbnl_add_dscp_prio(struct net_device *dev, struct dcb_app *app)
+dsa_user_dcbnl_add_dscp_prio(struct net_device *dev, struct dcb_app *app)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
unsigned long mask, new_prio;
int err, port = dp->index;
@@ -2213,29 +2220,29 @@ dsa_slave_dcbnl_add_dscp_prio(struct net_device *dev, struct dcb_app *app)
return 0;
}
-static int __maybe_unused dsa_slave_dcbnl_ieee_setapp(struct net_device *dev,
- struct dcb_app *app)
+static int __maybe_unused dsa_user_dcbnl_ieee_setapp(struct net_device *dev,
+ struct dcb_app *app)
{
switch (app->selector) {
case IEEE_8021QAZ_APP_SEL_ETHERTYPE:
switch (app->protocol) {
case 0:
- return dsa_slave_dcbnl_set_default_prio(dev, app);
+ return dsa_user_dcbnl_set_default_prio(dev, app);
default:
return -EOPNOTSUPP;
}
break;
case IEEE_8021QAZ_APP_SEL_DSCP:
- return dsa_slave_dcbnl_add_dscp_prio(dev, app);
+ return dsa_user_dcbnl_add_dscp_prio(dev, app);
default:
return -EOPNOTSUPP;
}
}
static int __maybe_unused
-dsa_slave_dcbnl_del_default_prio(struct net_device *dev, struct dcb_app *app)
+dsa_user_dcbnl_del_default_prio(struct net_device *dev, struct dcb_app *app)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
unsigned long mask, new_prio;
int err, port = dp->index;
@@ -2260,9 +2267,9 @@ dsa_slave_dcbnl_del_default_prio(struct net_device *dev, struct dcb_app *app)
}
static int __maybe_unused
-dsa_slave_dcbnl_del_dscp_prio(struct net_device *dev, struct dcb_app *app)
+dsa_user_dcbnl_del_dscp_prio(struct net_device *dev, struct dcb_app *app)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
int err, port = dp->index;
u8 dscp = app->protocol;
@@ -2283,20 +2290,20 @@ dsa_slave_dcbnl_del_dscp_prio(struct net_device *dev, struct dcb_app *app)
return 0;
}
-static int __maybe_unused dsa_slave_dcbnl_ieee_delapp(struct net_device *dev,
- struct dcb_app *app)
+static int __maybe_unused dsa_user_dcbnl_ieee_delapp(struct net_device *dev,
+ struct dcb_app *app)
{
switch (app->selector) {
case IEEE_8021QAZ_APP_SEL_ETHERTYPE:
switch (app->protocol) {
case 0:
- return dsa_slave_dcbnl_del_default_prio(dev, app);
+ return dsa_user_dcbnl_del_default_prio(dev, app);
default:
return -EOPNOTSUPP;
}
break;
case IEEE_8021QAZ_APP_SEL_DSCP:
- return dsa_slave_dcbnl_del_dscp_prio(dev, app);
+ return dsa_user_dcbnl_del_dscp_prio(dev, app);
default:
return -EOPNOTSUPP;
}
@@ -2305,9 +2312,9 @@ static int __maybe_unused dsa_slave_dcbnl_ieee_delapp(struct net_device *dev,
/* Pre-populate the DCB application priority table with the priorities
* configured during switch setup, which we read from hardware here.
*/
-static int dsa_slave_dcbnl_init(struct net_device *dev)
+static int dsa_user_dcbnl_init(struct net_device *dev)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
int port = dp->index;
int err;
@@ -2355,49 +2362,49 @@ static int dsa_slave_dcbnl_init(struct net_device *dev)
return 0;
}
-static const struct ethtool_ops dsa_slave_ethtool_ops = {
- .get_drvinfo = dsa_slave_get_drvinfo,
- .get_regs_len = dsa_slave_get_regs_len,
- .get_regs = dsa_slave_get_regs,
- .nway_reset = dsa_slave_nway_reset,
+static const struct ethtool_ops dsa_user_ethtool_ops = {
+ .get_drvinfo = dsa_user_get_drvinfo,
+ .get_regs_len = dsa_user_get_regs_len,
+ .get_regs = dsa_user_get_regs,
+ .nway_reset = dsa_user_nway_reset,
.get_link = ethtool_op_get_link,
- .get_eeprom_len = dsa_slave_get_eeprom_len,
- .get_eeprom = dsa_slave_get_eeprom,
- .set_eeprom = dsa_slave_set_eeprom,
- .get_strings = dsa_slave_get_strings,
- .get_ethtool_stats = dsa_slave_get_ethtool_stats,
- .get_sset_count = dsa_slave_get_sset_count,
- .get_eth_phy_stats = dsa_slave_get_eth_phy_stats,
- .get_eth_mac_stats = dsa_slave_get_eth_mac_stats,
- .get_eth_ctrl_stats = dsa_slave_get_eth_ctrl_stats,
- .get_rmon_stats = dsa_slave_get_rmon_stats,
- .set_wol = dsa_slave_set_wol,
- .get_wol = dsa_slave_get_wol,
- .set_eee = dsa_slave_set_eee,
- .get_eee = dsa_slave_get_eee,
- .get_link_ksettings = dsa_slave_get_link_ksettings,
- .set_link_ksettings = dsa_slave_set_link_ksettings,
- .get_pause_stats = dsa_slave_get_pause_stats,
- .get_pauseparam = dsa_slave_get_pauseparam,
- .set_pauseparam = dsa_slave_set_pauseparam,
- .get_rxnfc = dsa_slave_get_rxnfc,
- .set_rxnfc = dsa_slave_set_rxnfc,
- .get_ts_info = dsa_slave_get_ts_info,
- .self_test = dsa_slave_net_selftest,
- .get_mm = dsa_slave_get_mm,
- .set_mm = dsa_slave_set_mm,
- .get_mm_stats = dsa_slave_get_mm_stats,
+ .get_eeprom_len = dsa_user_get_eeprom_len,
+ .get_eeprom = dsa_user_get_eeprom,
+ .set_eeprom = dsa_user_set_eeprom,
+ .get_strings = dsa_user_get_strings,
+ .get_ethtool_stats = dsa_user_get_ethtool_stats,
+ .get_sset_count = dsa_user_get_sset_count,
+ .get_eth_phy_stats = dsa_user_get_eth_phy_stats,
+ .get_eth_mac_stats = dsa_user_get_eth_mac_stats,
+ .get_eth_ctrl_stats = dsa_user_get_eth_ctrl_stats,
+ .get_rmon_stats = dsa_user_get_rmon_stats,
+ .set_wol = dsa_user_set_wol,
+ .get_wol = dsa_user_get_wol,
+ .set_eee = dsa_user_set_eee,
+ .get_eee = dsa_user_get_eee,
+ .get_link_ksettings = dsa_user_get_link_ksettings,
+ .set_link_ksettings = dsa_user_set_link_ksettings,
+ .get_pause_stats = dsa_user_get_pause_stats,
+ .get_pauseparam = dsa_user_get_pauseparam,
+ .set_pauseparam = dsa_user_set_pauseparam,
+ .get_rxnfc = dsa_user_get_rxnfc,
+ .set_rxnfc = dsa_user_set_rxnfc,
+ .get_ts_info = dsa_user_get_ts_info,
+ .self_test = dsa_user_net_selftest,
+ .get_mm = dsa_user_get_mm,
+ .set_mm = dsa_user_set_mm,
+ .get_mm_stats = dsa_user_get_mm_stats,
};
-static const struct dcbnl_rtnl_ops __maybe_unused dsa_slave_dcbnl_ops = {
- .ieee_setapp = dsa_slave_dcbnl_ieee_setapp,
- .ieee_delapp = dsa_slave_dcbnl_ieee_delapp,
+static const struct dcbnl_rtnl_ops __maybe_unused dsa_user_dcbnl_ops = {
+ .ieee_setapp = dsa_user_dcbnl_ieee_setapp,
+ .ieee_delapp = dsa_user_dcbnl_ieee_delapp,
};
-static void dsa_slave_get_stats64(struct net_device *dev,
- struct rtnl_link_stats64 *s)
+static void dsa_user_get_stats64(struct net_device *dev,
+ struct rtnl_link_stats64 *s)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
if (ds->ops->get_stats64)
@@ -2406,43 +2413,43 @@ static void dsa_slave_get_stats64(struct net_device *dev,
dev_get_tstats64(dev, s);
}
-static int dsa_slave_fill_forward_path(struct net_device_path_ctx *ctx,
- struct net_device_path *path)
+static int dsa_user_fill_forward_path(struct net_device_path_ctx *ctx,
+ struct net_device_path *path)
{
- struct dsa_port *dp = dsa_slave_to_port(ctx->dev);
- struct net_device *master = dsa_port_to_master(dp);
+ struct dsa_port *dp = dsa_user_to_port(ctx->dev);
+ struct net_device *conduit = dsa_port_to_conduit(dp);
struct dsa_port *cpu_dp = dp->cpu_dp;
path->dev = ctx->dev;
path->type = DEV_PATH_DSA;
path->dsa.proto = cpu_dp->tag_ops->proto;
path->dsa.port = dp->index;
- ctx->dev = master;
+ ctx->dev = conduit;
return 0;
}
-static const struct net_device_ops dsa_slave_netdev_ops = {
- .ndo_open = dsa_slave_open,
- .ndo_stop = dsa_slave_close,
- .ndo_start_xmit = dsa_slave_xmit,
- .ndo_change_rx_flags = dsa_slave_change_rx_flags,
- .ndo_set_rx_mode = dsa_slave_set_rx_mode,
- .ndo_set_mac_address = dsa_slave_set_mac_address,
- .ndo_fdb_dump = dsa_slave_fdb_dump,
- .ndo_eth_ioctl = dsa_slave_ioctl,
- .ndo_get_iflink = dsa_slave_get_iflink,
+static const struct net_device_ops dsa_user_netdev_ops = {
+ .ndo_open = dsa_user_open,
+ .ndo_stop = dsa_user_close,
+ .ndo_start_xmit = dsa_user_xmit,
+ .ndo_change_rx_flags = dsa_user_change_rx_flags,
+ .ndo_set_rx_mode = dsa_user_set_rx_mode,
+ .ndo_set_mac_address = dsa_user_set_mac_address,
+ .ndo_fdb_dump = dsa_user_fdb_dump,
+ .ndo_eth_ioctl = dsa_user_ioctl,
+ .ndo_get_iflink = dsa_user_get_iflink,
#ifdef CONFIG_NET_POLL_CONTROLLER
- .ndo_netpoll_setup = dsa_slave_netpoll_setup,
- .ndo_netpoll_cleanup = dsa_slave_netpoll_cleanup,
- .ndo_poll_controller = dsa_slave_poll_controller,
+ .ndo_netpoll_setup = dsa_user_netpoll_setup,
+ .ndo_netpoll_cleanup = dsa_user_netpoll_cleanup,
+ .ndo_poll_controller = dsa_user_poll_controller,
#endif
- .ndo_setup_tc = dsa_slave_setup_tc,
- .ndo_get_stats64 = dsa_slave_get_stats64,
- .ndo_vlan_rx_add_vid = dsa_slave_vlan_rx_add_vid,
- .ndo_vlan_rx_kill_vid = dsa_slave_vlan_rx_kill_vid,
- .ndo_change_mtu = dsa_slave_change_mtu,
- .ndo_fill_forward_path = dsa_slave_fill_forward_path,
+ .ndo_setup_tc = dsa_user_setup_tc,
+ .ndo_get_stats64 = dsa_user_get_stats64,
+ .ndo_vlan_rx_add_vid = dsa_user_vlan_rx_add_vid,
+ .ndo_vlan_rx_kill_vid = dsa_user_vlan_rx_kill_vid,
+ .ndo_change_mtu = dsa_user_change_mtu,
+ .ndo_fill_forward_path = dsa_user_fill_forward_path,
};
static struct device_type dsa_type = {
@@ -2458,8 +2465,8 @@ void dsa_port_phylink_mac_change(struct dsa_switch *ds, int port, bool up)
}
EXPORT_SYMBOL_GPL(dsa_port_phylink_mac_change);
-static void dsa_slave_phylink_fixed_state(struct phylink_config *config,
- struct phylink_link_state *state)
+static void dsa_user_phylink_fixed_state(struct phylink_config *config,
+ struct phylink_link_state *state)
{
struct dsa_port *dp = container_of(config, struct dsa_port, pl_config);
struct dsa_switch *ds = dp->ds;
@@ -2470,33 +2477,33 @@ static void dsa_slave_phylink_fixed_state(struct phylink_config *config,
ds->ops->phylink_fixed_state(ds, dp->index, state);
}
-/* slave device setup *******************************************************/
-static int dsa_slave_phy_connect(struct net_device *slave_dev, int addr,
- u32 flags)
+/* user device setup *******************************************************/
+static int dsa_user_phy_connect(struct net_device *user_dev, int addr,
+ u32 flags)
{
- struct dsa_port *dp = dsa_slave_to_port(slave_dev);
+ struct dsa_port *dp = dsa_user_to_port(user_dev);
struct dsa_switch *ds = dp->ds;
- slave_dev->phydev = mdiobus_get_phy(ds->slave_mii_bus, addr);
- if (!slave_dev->phydev) {
- netdev_err(slave_dev, "no phy at %d\n", addr);
+ user_dev->phydev = mdiobus_get_phy(ds->user_mii_bus, addr);
+ if (!user_dev->phydev) {
+ netdev_err(user_dev, "no phy at %d\n", addr);
return -ENODEV;
}
- slave_dev->phydev->dev_flags |= flags;
+ user_dev->phydev->dev_flags |= flags;
- return phylink_connect_phy(dp->pl, slave_dev->phydev);
+ return phylink_connect_phy(dp->pl, user_dev->phydev);
}
-static int dsa_slave_phy_setup(struct net_device *slave_dev)
+static int dsa_user_phy_setup(struct net_device *user_dev)
{
- struct dsa_port *dp = dsa_slave_to_port(slave_dev);
+ struct dsa_port *dp = dsa_user_to_port(user_dev);
struct device_node *port_dn = dp->dn;
struct dsa_switch *ds = dp->ds;
u32 phy_flags = 0;
int ret;
- dp->pl_config.dev = &slave_dev->dev;
+ dp->pl_config.dev = &user_dev->dev;
dp->pl_config.type = PHYLINK_NETDEV;
/* The get_fixed_state callback takes precedence over polling the
@@ -2504,7 +2511,7 @@ static int dsa_slave_phy_setup(struct net_device *slave_dev)
* this if the switch provides such a callback.
*/
if (ds->ops->phylink_fixed_state) {
- dp->pl_config.get_fixed_state = dsa_slave_phylink_fixed_state;
+ dp->pl_config.get_fixed_state = dsa_user_phylink_fixed_state;
dp->pl_config.poll_fixed_state = true;
}
@@ -2516,14 +2523,14 @@ static int dsa_slave_phy_setup(struct net_device *slave_dev)
phy_flags = ds->ops->get_phy_flags(ds, dp->index);
ret = phylink_of_phy_connect(dp->pl, port_dn, phy_flags);
- if (ret == -ENODEV && ds->slave_mii_bus) {
+ if (ret == -ENODEV && ds->user_mii_bus) {
/* We could not connect to a designated PHY or SFP, so try to
* use the switch internal MDIO bus instead
*/
- ret = dsa_slave_phy_connect(slave_dev, dp->index, phy_flags);
+ ret = dsa_user_phy_connect(user_dev, dp->index, phy_flags);
}
if (ret) {
- netdev_err(slave_dev, "failed to connect to PHY: %pe\n",
+ netdev_err(user_dev, "failed to connect to PHY: %pe\n",
ERR_PTR(ret));
dsa_port_phylink_destroy(dp);
}
@@ -2531,42 +2538,42 @@ static int dsa_slave_phy_setup(struct net_device *slave_dev)
return ret;
}
-void dsa_slave_setup_tagger(struct net_device *slave)
+void dsa_user_setup_tagger(struct net_device *user)
{
- struct dsa_port *dp = dsa_slave_to_port(slave);
- struct net_device *master = dsa_port_to_master(dp);
- struct dsa_slave_priv *p = netdev_priv(slave);
+ struct dsa_port *dp = dsa_user_to_port(user);
+ struct net_device *conduit = dsa_port_to_conduit(dp);
+ struct dsa_user_priv *p = netdev_priv(user);
const struct dsa_port *cpu_dp = dp->cpu_dp;
const struct dsa_switch *ds = dp->ds;
- slave->needed_headroom = cpu_dp->tag_ops->needed_headroom;
- slave->needed_tailroom = cpu_dp->tag_ops->needed_tailroom;
- /* Try to save one extra realloc later in the TX path (in the master)
- * by also inheriting the master's needed headroom and tailroom.
+ user->needed_headroom = cpu_dp->tag_ops->needed_headroom;
+ user->needed_tailroom = cpu_dp->tag_ops->needed_tailroom;
+ /* Try to save one extra realloc later in the TX path (in the conduit)
+ * by also inheriting the conduit's needed headroom and tailroom.
* The 8021q driver also does this.
*/
- slave->needed_headroom += master->needed_headroom;
- slave->needed_tailroom += master->needed_tailroom;
+ user->needed_headroom += conduit->needed_headroom;
+ user->needed_tailroom += conduit->needed_tailroom;
p->xmit = cpu_dp->tag_ops->xmit;
- slave->features = master->vlan_features | NETIF_F_HW_TC;
- slave->hw_features |= NETIF_F_HW_TC;
- slave->features |= NETIF_F_LLTX;
- if (slave->needed_tailroom)
- slave->features &= ~(NETIF_F_SG | NETIF_F_FRAGLIST);
+ user->features = conduit->vlan_features | NETIF_F_HW_TC;
+ user->hw_features |= NETIF_F_HW_TC;
+ user->features |= NETIF_F_LLTX;
+ if (user->needed_tailroom)
+ user->features &= ~(NETIF_F_SG | NETIF_F_FRAGLIST);
if (ds->needs_standalone_vlan_filtering)
- slave->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
+ user->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
}
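For illustration, the headroom/tailroom inheritance in dsa_user_setup_tagger() above is what lets a tagging protocol prepend its header in the TX hot path without reallocating. A minimal sketch of a hypothetical tagger xmit (not from this patch; the 4-byte tag format is made up, only dsa_user_to_port(), skb_cow_head() and skb_push() are real):

static struct sk_buff *example_tag_xmit(struct sk_buff *skb,
					struct net_device *dev)
{
	struct dsa_port *dp = dsa_user_to_port(dev);
	u8 *tag;

	/* Usually a no-op: user->needed_headroom already covers the tag */
	if (skb_cow_head(skb, 4))
		return NULL;

	/* Prepend a hypothetical 4-byte tag carrying the port index */
	tag = skb_push(skb, 4);
	tag[0] = dp->index;
	tag[1] = tag[2] = tag[3] = 0;

	return skb;
}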
-int dsa_slave_suspend(struct net_device *slave_dev)
+int dsa_user_suspend(struct net_device *user_dev)
{
- struct dsa_port *dp = dsa_slave_to_port(slave_dev);
+ struct dsa_port *dp = dsa_user_to_port(user_dev);
- if (!netif_running(slave_dev))
+ if (!netif_running(user_dev))
return 0;
- netif_device_detach(slave_dev);
+ netif_device_detach(user_dev);
rtnl_lock();
phylink_stop(dp->pl);
@@ -2575,14 +2582,14 @@ int dsa_slave_suspend(struct net_device *slave_dev)
return 0;
}
-int dsa_slave_resume(struct net_device *slave_dev)
+int dsa_user_resume(struct net_device *user_dev)
{
- struct dsa_port *dp = dsa_slave_to_port(slave_dev);
+ struct dsa_port *dp = dsa_user_to_port(user_dev);
- if (!netif_running(slave_dev))
+ if (!netif_running(user_dev))
return 0;
- netif_device_attach(slave_dev);
+ netif_device_attach(user_dev);
rtnl_lock();
phylink_start(dp->pl);
@@ -2591,12 +2598,12 @@ int dsa_slave_resume(struct net_device *slave_dev)
return 0;
}
-int dsa_slave_create(struct dsa_port *port)
+int dsa_user_create(struct dsa_port *port)
{
- struct net_device *master = dsa_port_to_master(port);
+ struct net_device *conduit = dsa_port_to_conduit(port);
struct dsa_switch *ds = port->ds;
- struct net_device *slave_dev;
- struct dsa_slave_priv *p;
+ struct net_device *user_dev;
+ struct dsa_user_priv *p;
const char *name;
int assign_type;
int ret;
@@ -2612,55 +2619,55 @@ int dsa_slave_create(struct dsa_port *port)
assign_type = NET_NAME_ENUM;
}
- slave_dev = alloc_netdev_mqs(sizeof(struct dsa_slave_priv), name,
- assign_type, ether_setup,
- ds->num_tx_queues, 1);
- if (slave_dev == NULL)
+ user_dev = alloc_netdev_mqs(sizeof(struct dsa_user_priv), name,
+ assign_type, ether_setup,
+ ds->num_tx_queues, 1);
+ if (user_dev == NULL)
return -ENOMEM;
- slave_dev->rtnl_link_ops = &dsa_link_ops;
- slave_dev->ethtool_ops = &dsa_slave_ethtool_ops;
+ user_dev->rtnl_link_ops = &dsa_link_ops;
+ user_dev->ethtool_ops = &dsa_user_ethtool_ops;
#if IS_ENABLED(CONFIG_DCB)
- slave_dev->dcbnl_ops = &dsa_slave_dcbnl_ops;
+ user_dev->dcbnl_ops = &dsa_user_dcbnl_ops;
#endif
if (!is_zero_ether_addr(port->mac))
- eth_hw_addr_set(slave_dev, port->mac);
+ eth_hw_addr_set(user_dev, port->mac);
else
- eth_hw_addr_inherit(slave_dev, master);
- slave_dev->priv_flags |= IFF_NO_QUEUE;
+ eth_hw_addr_inherit(user_dev, conduit);
+ user_dev->priv_flags |= IFF_NO_QUEUE;
if (dsa_switch_supports_uc_filtering(ds))
- slave_dev->priv_flags |= IFF_UNICAST_FLT;
- slave_dev->netdev_ops = &dsa_slave_netdev_ops;
+ user_dev->priv_flags |= IFF_UNICAST_FLT;
+ user_dev->netdev_ops = &dsa_user_netdev_ops;
if (ds->ops->port_max_mtu)
- slave_dev->max_mtu = ds->ops->port_max_mtu(ds, port->index);
- SET_NETDEV_DEVTYPE(slave_dev, &dsa_type);
-
- SET_NETDEV_DEV(slave_dev, port->ds->dev);
- SET_NETDEV_DEVLINK_PORT(slave_dev, &port->devlink_port);
- slave_dev->dev.of_node = port->dn;
- slave_dev->vlan_features = master->vlan_features;
-
- p = netdev_priv(slave_dev);
- slave_dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
- if (!slave_dev->tstats) {
- free_netdev(slave_dev);
+ user_dev->max_mtu = ds->ops->port_max_mtu(ds, port->index);
+ SET_NETDEV_DEVTYPE(user_dev, &dsa_type);
+
+ SET_NETDEV_DEV(user_dev, port->ds->dev);
+ SET_NETDEV_DEVLINK_PORT(user_dev, &port->devlink_port);
+ user_dev->dev.of_node = port->dn;
+ user_dev->vlan_features = conduit->vlan_features;
+
+ p = netdev_priv(user_dev);
+ user_dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
+ if (!user_dev->tstats) {
+ free_netdev(user_dev);
return -ENOMEM;
}
- ret = gro_cells_init(&p->gcells, slave_dev);
+ ret = gro_cells_init(&p->gcells, user_dev);
if (ret)
goto out_free;
p->dp = port;
INIT_LIST_HEAD(&p->mall_tc_list);
- port->slave = slave_dev;
- dsa_slave_setup_tagger(slave_dev);
+ port->user = user_dev;
+ dsa_user_setup_tagger(user_dev);
- netif_carrier_off(slave_dev);
+ netif_carrier_off(user_dev);
- ret = dsa_slave_phy_setup(slave_dev);
+ ret = dsa_user_phy_setup(user_dev);
if (ret) {
- netdev_err(slave_dev,
+ netdev_err(user_dev,
"error %d setting up PHY for tree %d, switch %d, port %d\n",
ret, ds->dst->index, ds->index, port->index);
goto out_gcells;
@@ -2668,23 +2675,23 @@ int dsa_slave_create(struct dsa_port *port)
rtnl_lock();
- ret = dsa_slave_change_mtu(slave_dev, ETH_DATA_LEN);
+ ret = dsa_user_change_mtu(user_dev, ETH_DATA_LEN);
if (ret && ret != -EOPNOTSUPP)
dev_warn(ds->dev, "nonfatal error %d setting MTU to %d on port %d\n",
ret, ETH_DATA_LEN, port->index);
- ret = register_netdevice(slave_dev);
+ ret = register_netdevice(user_dev);
if (ret) {
- netdev_err(master, "error %d registering interface %s\n",
- ret, slave_dev->name);
+ netdev_err(conduit, "error %d registering interface %s\n",
+ ret, user_dev->name);
rtnl_unlock();
goto out_phy;
}
if (IS_ENABLED(CONFIG_DCB)) {
- ret = dsa_slave_dcbnl_init(slave_dev);
+ ret = dsa_user_dcbnl_init(user_dev);
if (ret) {
- netdev_err(slave_dev,
+ netdev_err(user_dev,
"failed to initialize DCB: %pe\n",
ERR_PTR(ret));
rtnl_unlock();
@@ -2692,7 +2699,7 @@ int dsa_slave_create(struct dsa_port *port)
}
}
- ret = netdev_upper_dev_link(master, slave_dev, NULL);
+ ret = netdev_upper_dev_link(conduit, user_dev, NULL);
rtnl_unlock();
@@ -2702,7 +2709,7 @@ int dsa_slave_create(struct dsa_port *port)
return 0;
out_unregister:
- unregister_netdev(slave_dev);
+ unregister_netdev(user_dev);
out_phy:
rtnl_lock();
phylink_disconnect_phy(p->dp->pl);
@@ -2711,122 +2718,122 @@ out_phy:
out_gcells:
gro_cells_destroy(&p->gcells);
out_free:
- free_percpu(slave_dev->tstats);
- free_netdev(slave_dev);
- port->slave = NULL;
+ free_percpu(user_dev->tstats);
+ free_netdev(user_dev);
+ port->user = NULL;
return ret;
}
-void dsa_slave_destroy(struct net_device *slave_dev)
+void dsa_user_destroy(struct net_device *user_dev)
{
- struct net_device *master = dsa_slave_to_master(slave_dev);
- struct dsa_port *dp = dsa_slave_to_port(slave_dev);
- struct dsa_slave_priv *p = netdev_priv(slave_dev);
+ struct net_device *conduit = dsa_user_to_conduit(user_dev);
+ struct dsa_port *dp = dsa_user_to_port(user_dev);
+ struct dsa_user_priv *p = netdev_priv(user_dev);
- netif_carrier_off(slave_dev);
+ netif_carrier_off(user_dev);
rtnl_lock();
- netdev_upper_dev_unlink(master, slave_dev);
- unregister_netdevice(slave_dev);
+ netdev_upper_dev_unlink(conduit, user_dev);
+ unregister_netdevice(user_dev);
phylink_disconnect_phy(dp->pl);
rtnl_unlock();
dsa_port_phylink_destroy(dp);
gro_cells_destroy(&p->gcells);
- free_percpu(slave_dev->tstats);
- free_netdev(slave_dev);
+ free_percpu(user_dev->tstats);
+ free_netdev(user_dev);
}
-int dsa_slave_change_master(struct net_device *dev, struct net_device *master,
+int dsa_user_change_conduit(struct net_device *dev, struct net_device *conduit,
struct netlink_ext_ack *extack)
{
- struct net_device *old_master = dsa_slave_to_master(dev);
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct net_device *old_conduit = dsa_user_to_conduit(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch *ds = dp->ds;
struct net_device *upper;
struct list_head *iter;
int err;
- if (master == old_master)
+ if (conduit == old_conduit)
return 0;
- if (!ds->ops->port_change_master) {
+ if (!ds->ops->port_change_conduit) {
NL_SET_ERR_MSG_MOD(extack,
- "Driver does not support changing DSA master");
+ "Driver does not support changing DSA conduit");
return -EOPNOTSUPP;
}
- if (!netdev_uses_dsa(master)) {
+ if (!netdev_uses_dsa(conduit)) {
NL_SET_ERR_MSG_MOD(extack,
- "Interface not eligible as DSA master");
+ "Interface not eligible as DSA conduit");
return -EOPNOTSUPP;
}
- netdev_for_each_upper_dev_rcu(master, upper, iter) {
- if (dsa_slave_dev_check(upper))
+ netdev_for_each_upper_dev_rcu(conduit, upper, iter) {
+ if (dsa_user_dev_check(upper))
continue;
if (netif_is_bridge_master(upper))
continue;
- NL_SET_ERR_MSG_MOD(extack, "Cannot join master with unknown uppers");
+ NL_SET_ERR_MSG_MOD(extack, "Cannot join conduit with unknown uppers");
return -EOPNOTSUPP;
}
- /* Since we allow live-changing the DSA master, plus we auto-open the
- * DSA master when the user port opens => we need to ensure that the
- * new DSA master is open too.
+ /* Since we allow live-changing the DSA conduit, plus we auto-open the
+ * DSA conduit when the user port opens => we need to ensure that the
+ * new DSA conduit is open too.
*/
if (dev->flags & IFF_UP) {
- err = dev_open(master, extack);
+ err = dev_open(conduit, extack);
if (err)
return err;
}
- netdev_upper_dev_unlink(old_master, dev);
+ netdev_upper_dev_unlink(old_conduit, dev);
- err = netdev_upper_dev_link(master, dev, extack);
+ err = netdev_upper_dev_link(conduit, dev, extack);
if (err)
- goto out_revert_old_master_unlink;
+ goto out_revert_old_conduit_unlink;
- err = dsa_port_change_master(dp, master, extack);
+ err = dsa_port_change_conduit(dp, conduit, extack);
if (err)
- goto out_revert_master_link;
+ goto out_revert_conduit_link;
/* Update the MTU of the new CPU port through cross-chip notifiers */
- err = dsa_slave_change_mtu(dev, dev->mtu);
+ err = dsa_user_change_mtu(dev, dev->mtu);
if (err && err != -EOPNOTSUPP) {
netdev_warn(dev,
- "nonfatal error updating MTU with new master: %pe\n",
+ "nonfatal error updating MTU with new conduit: %pe\n",
ERR_PTR(err));
}
/* If the port doesn't have its own MAC address and relies on the DSA
- * master's one, inherit it again from the new DSA master.
+ * conduit's one, inherit it again from the new DSA conduit.
*/
if (is_zero_ether_addr(dp->mac))
- eth_hw_addr_inherit(dev, master);
+ eth_hw_addr_inherit(dev, conduit);
return 0;
-out_revert_master_link:
- netdev_upper_dev_unlink(master, dev);
-out_revert_old_master_unlink:
- netdev_upper_dev_link(old_master, dev, NULL);
+out_revert_conduit_link:
+ netdev_upper_dev_unlink(conduit, dev);
+out_revert_old_conduit_unlink:
+ netdev_upper_dev_link(old_conduit, dev, NULL);
return err;
}
-bool dsa_slave_dev_check(const struct net_device *dev)
+bool dsa_user_dev_check(const struct net_device *dev)
{
- return dev->netdev_ops == &dsa_slave_netdev_ops;
+ return dev->netdev_ops == &dsa_user_netdev_ops;
}
-EXPORT_SYMBOL_GPL(dsa_slave_dev_check);
+EXPORT_SYMBOL_GPL(dsa_user_dev_check);
-static int dsa_slave_changeupper(struct net_device *dev,
- struct netdev_notifier_changeupper_info *info)
+static int dsa_user_changeupper(struct net_device *dev,
+ struct netdev_notifier_changeupper_info *info)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct netlink_ext_ack *extack;
int err = NOTIFY_DONE;
- if (!dsa_slave_dev_check(dev))
+ if (!dsa_user_dev_check(dev))
return err;
extack = netdev_notifier_info_to_extack(&info->info);
@@ -2862,7 +2869,7 @@ static int dsa_slave_changeupper(struct net_device *dev,
}
} else if (is_hsr_master(info->upper_dev)) {
if (info->linking) {
- err = dsa_port_hsr_join(dp, info->upper_dev);
+ err = dsa_port_hsr_join(dp, info->upper_dev, extack);
if (err == -EOPNOTSUPP) {
NL_SET_ERR_MSG_WEAK_MOD(extack,
"Offloading not supported");
@@ -2878,28 +2885,28 @@ static int dsa_slave_changeupper(struct net_device *dev,
return err;
}
-static int dsa_slave_prechangeupper(struct net_device *dev,
- struct netdev_notifier_changeupper_info *info)
+static int dsa_user_prechangeupper(struct net_device *dev,
+ struct netdev_notifier_changeupper_info *info)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
- if (!dsa_slave_dev_check(dev))
+ if (!dsa_user_dev_check(dev))
return NOTIFY_DONE;
if (netif_is_bridge_master(info->upper_dev) && !info->linking)
dsa_port_pre_bridge_leave(dp, info->upper_dev);
else if (netif_is_lag_master(info->upper_dev) && !info->linking)
dsa_port_pre_lag_leave(dp, info->upper_dev);
- /* dsa_port_pre_hsr_leave is not yet necessary since hsr cannot be
- * meaningfully enslaved to a bridge yet
+ /* dsa_port_pre_hsr_leave is not yet necessary since hsr devices cannot
+ * meaningfully placed under a bridge yet
*/
return NOTIFY_DONE;
}
static int
-dsa_slave_lag_changeupper(struct net_device *dev,
- struct netdev_notifier_changeupper_info *info)
+dsa_user_lag_changeupper(struct net_device *dev,
+ struct netdev_notifier_changeupper_info *info)
{
struct net_device *lower;
struct list_head *iter;
@@ -2910,15 +2917,15 @@ dsa_slave_lag_changeupper(struct net_device *dev,
return err;
netdev_for_each_lower_dev(dev, lower, iter) {
- if (!dsa_slave_dev_check(lower))
+ if (!dsa_user_dev_check(lower))
continue;
- dp = dsa_slave_to_port(lower);
+ dp = dsa_user_to_port(lower);
if (!dp->lag)
/* Software LAG */
continue;
- err = dsa_slave_changeupper(lower, info);
+ err = dsa_user_changeupper(lower, info);
if (notifier_to_errno(err))
break;
}
@@ -2926,12 +2933,12 @@ dsa_slave_lag_changeupper(struct net_device *dev,
return err;
}
-/* Same as dsa_slave_lag_changeupper() except that it calls
- * dsa_slave_prechangeupper()
+/* Same as dsa_user_lag_changeupper() except that it calls
+ * dsa_user_prechangeupper()
*/
static int
-dsa_slave_lag_prechangeupper(struct net_device *dev,
- struct netdev_notifier_changeupper_info *info)
+dsa_user_lag_prechangeupper(struct net_device *dev,
+ struct netdev_notifier_changeupper_info *info)
{
struct net_device *lower;
struct list_head *iter;
@@ -2942,15 +2949,15 @@ dsa_slave_lag_prechangeupper(struct net_device *dev,
return err;
netdev_for_each_lower_dev(dev, lower, iter) {
- if (!dsa_slave_dev_check(lower))
+ if (!dsa_user_dev_check(lower))
continue;
- dp = dsa_slave_to_port(lower);
+ dp = dsa_user_to_port(lower);
if (!dp->lag)
/* Software LAG */
continue;
- err = dsa_slave_prechangeupper(lower, info);
+ err = dsa_user_prechangeupper(lower, info);
if (notifier_to_errno(err))
break;
}
@@ -2963,7 +2970,7 @@ dsa_prevent_bridging_8021q_upper(struct net_device *dev,
struct netdev_notifier_changeupper_info *info)
{
struct netlink_ext_ack *ext_ack;
- struct net_device *slave, *br;
+ struct net_device *user, *br;
struct dsa_port *dp;
ext_ack = netdev_notifier_info_to_extack(&info->info);
@@ -2971,11 +2978,11 @@ dsa_prevent_bridging_8021q_upper(struct net_device *dev,
if (!is_vlan_dev(dev))
return NOTIFY_DONE;
- slave = vlan_dev_real_dev(dev);
- if (!dsa_slave_dev_check(slave))
+ user = vlan_dev_real_dev(dev);
+ if (!dsa_user_dev_check(user))
return NOTIFY_DONE;
- dp = dsa_slave_to_port(slave);
+ dp = dsa_user_to_port(user);
br = dsa_port_bridge_dev_get(dp);
if (!br)
return NOTIFY_DONE;
@@ -2984,7 +2991,7 @@ dsa_prevent_bridging_8021q_upper(struct net_device *dev,
if (br_vlan_enabled(br) &&
netif_is_bridge_master(info->upper_dev) && info->linking) {
NL_SET_ERR_MSG_MOD(ext_ack,
- "Cannot enslave VLAN device into VLAN aware bridge");
+ "Cannot make VLAN device join VLAN-aware bridge");
return notifier_from_errno(-EINVAL);
}
@@ -2992,10 +2999,10 @@ dsa_prevent_bridging_8021q_upper(struct net_device *dev,
}
static int
-dsa_slave_check_8021q_upper(struct net_device *dev,
- struct netdev_notifier_changeupper_info *info)
+dsa_user_check_8021q_upper(struct net_device *dev,
+ struct netdev_notifier_changeupper_info *info)
{
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
struct net_device *br = dsa_port_bridge_dev_get(dp);
struct bridge_vlan_info br_info;
struct netlink_ext_ack *extack;
@@ -3023,17 +3030,17 @@ dsa_slave_check_8021q_upper(struct net_device *dev,
}
static int
-dsa_slave_prechangeupper_sanity_check(struct net_device *dev,
- struct netdev_notifier_changeupper_info *info)
+dsa_user_prechangeupper_sanity_check(struct net_device *dev,
+ struct netdev_notifier_changeupper_info *info)
{
struct dsa_switch *ds;
struct dsa_port *dp;
int err;
- if (!dsa_slave_dev_check(dev))
+ if (!dsa_user_dev_check(dev))
return dsa_prevent_bridging_8021q_upper(dev, info);
- dp = dsa_slave_to_port(dev);
+ dp = dsa_user_to_port(dev);
ds = dp->ds;
if (ds->ops->port_prechangeupper) {
@@ -3043,17 +3050,17 @@ dsa_slave_prechangeupper_sanity_check(struct net_device *dev,
}
if (is_vlan_dev(info->upper_dev))
- return dsa_slave_check_8021q_upper(dev, info);
+ return dsa_user_check_8021q_upper(dev, info);
return NOTIFY_DONE;
}
-/* To be eligible as a DSA master, a LAG must have all lower interfaces be
- * eligible DSA masters. Additionally, all LAG slaves must be DSA masters of
+/* To be eligible as a DSA conduit, a LAG must have all lower interfaces be
+ * eligible DSA conduits. Additionally, all LAG slaves must be DSA conduits of
* switches in the same switch tree.
*/
-static int dsa_lag_master_validate(struct net_device *lag_dev,
- struct netlink_ext_ack *extack)
+static int dsa_lag_conduit_validate(struct net_device *lag_dev,
+ struct netlink_ext_ack *extack)
{
struct net_device *lower1, *lower2;
struct list_head *iter1, *iter2;
@@ -3063,7 +3070,7 @@ static int dsa_lag_master_validate(struct net_device *lag_dev,
if (!netdev_uses_dsa(lower1) ||
!netdev_uses_dsa(lower2)) {
NL_SET_ERR_MSG_MOD(extack,
- "All LAG ports must be eligible as DSA masters");
+ "All LAG ports must be eligible as DSA conduits");
return notifier_from_errno(-EINVAL);
}
@@ -3073,7 +3080,7 @@ static int dsa_lag_master_validate(struct net_device *lag_dev,
if (!dsa_port_tree_same(lower1->dsa_ptr,
lower2->dsa_ptr)) {
NL_SET_ERR_MSG_MOD(extack,
- "LAG contains DSA masters of disjoint switch trees");
+ "LAG contains DSA conduits of disjoint switch trees");
return notifier_from_errno(-EINVAL);
}
}
@@ -3083,41 +3090,41 @@ static int dsa_lag_master_validate(struct net_device *lag_dev,
}
static int
-dsa_master_prechangeupper_sanity_check(struct net_device *master,
- struct netdev_notifier_changeupper_info *info)
+dsa_conduit_prechangeupper_sanity_check(struct net_device *conduit,
+ struct netdev_notifier_changeupper_info *info)
{
struct netlink_ext_ack *extack = netdev_notifier_info_to_extack(&info->info);
- if (!netdev_uses_dsa(master))
+ if (!netdev_uses_dsa(conduit))
return NOTIFY_DONE;
if (!info->linking)
return NOTIFY_DONE;
/* Allow DSA switch uppers */
- if (dsa_slave_dev_check(info->upper_dev))
+ if (dsa_user_dev_check(info->upper_dev))
return NOTIFY_DONE;
- /* Allow bridge uppers of DSA masters, subject to further
+ /* Allow bridge uppers of DSA conduits, subject to further
* restrictions in dsa_bridge_prechangelower_sanity_check()
*/
if (netif_is_bridge_master(info->upper_dev))
return NOTIFY_DONE;
/* Allow LAG uppers, subject to further restrictions in
- * dsa_lag_master_prechangelower_sanity_check()
+ * dsa_lag_conduit_prechangelower_sanity_check()
*/
if (netif_is_lag_master(info->upper_dev))
- return dsa_lag_master_validate(info->upper_dev, extack);
+ return dsa_lag_conduit_validate(info->upper_dev, extack);
NL_SET_ERR_MSG_MOD(extack,
- "DSA master cannot join unknown upper interfaces");
+ "DSA conduit cannot join unknown upper interfaces");
return notifier_from_errno(-EBUSY);
}
static int
-dsa_lag_master_prechangelower_sanity_check(struct net_device *dev,
- struct netdev_notifier_changeupper_info *info)
+dsa_lag_conduit_prechangelower_sanity_check(struct net_device *dev,
+ struct netdev_notifier_changeupper_info *info)
{
struct netlink_ext_ack *extack = netdev_notifier_info_to_extack(&info->info);
struct net_device *lag_dev = info->upper_dev;
@@ -3132,14 +3139,14 @@ dsa_lag_master_prechangelower_sanity_check(struct net_device *dev,
if (!netdev_uses_dsa(dev)) {
NL_SET_ERR_MSG(extack,
- "Only DSA masters can join a LAG DSA master");
+ "Only DSA conduits can join a LAG DSA conduit");
return notifier_from_errno(-EINVAL);
}
netdev_for_each_lower_dev(lag_dev, lower, iter) {
if (!dsa_port_tree_same(dev->dsa_ptr, lower->dsa_ptr)) {
NL_SET_ERR_MSG(extack,
- "Interface is DSA master for a different switch tree than this LAG");
+ "Interface is DSA conduit for a different switch tree than this LAG");
return notifier_from_errno(-EINVAL);
}
@@ -3149,13 +3156,13 @@ dsa_lag_master_prechangelower_sanity_check(struct net_device *dev,
return NOTIFY_DONE;
}
-/* Don't allow bridging of DSA masters, since the bridge layer rx_handler
+/* Don't allow bridging of DSA conduits, since the bridge layer rx_handler
* prevents the DSA fake ethertype handler to be invoked, so we don't get the
* chance to strip off and parse the DSA switch tag protocol header (the bridge
* layer just returns RX_HANDLER_CONSUMED, stopping RX processing for these
* frames).
* The only case where that would not be an issue is when bridging can already
- * be offloaded, such as when the DSA master is itself a DSA or plain switchdev
+ * be offloaded, such as when the DSA conduit is itself a DSA or plain switchdev
* port, and is bridged only with other ports from the same hardware device.
*/
static int
@@ -3181,7 +3188,7 @@ dsa_bridge_prechangelower_sanity_check(struct net_device *new_lower,
if (!netdev_port_same_parent_id(lower, new_lower)) {
NL_SET_ERR_MSG(extack,
- "Cannot do software bridging with a DSA master");
+ "Cannot do software bridging with a DSA conduit");
return notifier_from_errno(-EINVAL);
}
}
@@ -3189,45 +3196,45 @@ dsa_bridge_prechangelower_sanity_check(struct net_device *new_lower,
return NOTIFY_DONE;
}
-static void dsa_tree_migrate_ports_from_lag_master(struct dsa_switch_tree *dst,
- struct net_device *lag_dev)
+static void dsa_tree_migrate_ports_from_lag_conduit(struct dsa_switch_tree *dst,
+ struct net_device *lag_dev)
{
- struct net_device *new_master = dsa_tree_find_first_master(dst);
+ struct net_device *new_conduit = dsa_tree_find_first_conduit(dst);
struct dsa_port *dp;
int err;
dsa_tree_for_each_user_port(dp, dst) {
- if (dsa_port_to_master(dp) != lag_dev)
+ if (dsa_port_to_conduit(dp) != lag_dev)
continue;
- err = dsa_slave_change_master(dp->slave, new_master, NULL);
+ err = dsa_user_change_conduit(dp->user, new_conduit, NULL);
if (err) {
- netdev_err(dp->slave,
- "failed to restore master to %s: %pe\n",
- new_master->name, ERR_PTR(err));
+ netdev_err(dp->user,
+ "failed to restore conduit to %s: %pe\n",
+ new_conduit->name, ERR_PTR(err));
}
}
}
-static int dsa_master_lag_join(struct net_device *master,
- struct net_device *lag_dev,
- struct netdev_lag_upper_info *uinfo,
- struct netlink_ext_ack *extack)
+static int dsa_conduit_lag_join(struct net_device *conduit,
+ struct net_device *lag_dev,
+ struct netdev_lag_upper_info *uinfo,
+ struct netlink_ext_ack *extack)
{
- struct dsa_port *cpu_dp = master->dsa_ptr;
+ struct dsa_port *cpu_dp = conduit->dsa_ptr;
struct dsa_switch_tree *dst = cpu_dp->dst;
struct dsa_port *dp;
int err;
- err = dsa_master_lag_setup(lag_dev, cpu_dp, uinfo, extack);
+ err = dsa_conduit_lag_setup(lag_dev, cpu_dp, uinfo, extack);
if (err)
return err;
dsa_tree_for_each_user_port(dp, dst) {
- if (dsa_port_to_master(dp) != master)
+ if (dsa_port_to_conduit(dp) != conduit)
continue;
- err = dsa_slave_change_master(dp->slave, lag_dev, extack);
+ err = dsa_user_change_conduit(dp->user, lag_dev, extack);
if (err)
goto restore;
}
@@ -3236,24 +3243,24 @@ static int dsa_master_lag_join(struct net_device *master,
restore:
dsa_tree_for_each_user_port_continue_reverse(dp, dst) {
- if (dsa_port_to_master(dp) != lag_dev)
+ if (dsa_port_to_conduit(dp) != lag_dev)
continue;
- err = dsa_slave_change_master(dp->slave, master, NULL);
+ err = dsa_user_change_conduit(dp->user, conduit, NULL);
if (err) {
- netdev_err(dp->slave,
- "failed to restore master to %s: %pe\n",
- master->name, ERR_PTR(err));
+ netdev_err(dp->user,
+ "failed to restore conduit to %s: %pe\n",
+ conduit->name, ERR_PTR(err));
}
}
- dsa_master_lag_teardown(lag_dev, master->dsa_ptr);
+ dsa_conduit_lag_teardown(lag_dev, conduit->dsa_ptr);
return err;
}
-static void dsa_master_lag_leave(struct net_device *master,
- struct net_device *lag_dev)
+static void dsa_conduit_lag_leave(struct net_device *conduit,
+ struct net_device *lag_dev)
{
struct dsa_port *dp, *cpu_dp = lag_dev->dsa_ptr;
struct dsa_switch_tree *dst = cpu_dp->dst;
@@ -3270,10 +3277,10 @@ static void dsa_master_lag_leave(struct net_device *master,
if (new_cpu_dp) {
/* Update the CPU port of the user ports still under the LAG
- * so that dsa_port_to_master() continues to work properly
+ * so that dsa_port_to_conduit() continues to work properly
*/
dsa_tree_for_each_user_port(dp, dst)
- if (dsa_port_to_master(dp) == lag_dev)
+ if (dsa_port_to_conduit(dp) == lag_dev)
dp->cpu_dp = new_cpu_dp;
/* Update the index of the virtual CPU port to match the lowest
@@ -3282,20 +3289,20 @@ static void dsa_master_lag_leave(struct net_device *master,
lag_dev->dsa_ptr = new_cpu_dp;
wmb();
} else {
- /* If the LAG DSA master has no ports left, migrate back all
+ /* If the LAG DSA conduit has no ports left, migrate back all
* user ports to the first physical CPU port
*/
- dsa_tree_migrate_ports_from_lag_master(dst, lag_dev);
+ dsa_tree_migrate_ports_from_lag_conduit(dst, lag_dev);
}
- /* This DSA master has left its LAG in any case, so let
+ /* This DSA conduit has left its LAG in any case, so let
* the CPU port leave the hardware LAG as well
*/
- dsa_master_lag_teardown(lag_dev, master->dsa_ptr);
+ dsa_conduit_lag_teardown(lag_dev, conduit->dsa_ptr);
}
-static int dsa_master_changeupper(struct net_device *dev,
- struct netdev_notifier_changeupper_info *info)
+static int dsa_conduit_changeupper(struct net_device *dev,
+ struct netdev_notifier_changeupper_info *info)
{
struct netlink_ext_ack *extack;
int err = NOTIFY_DONE;
@@ -3307,11 +3314,11 @@ static int dsa_master_changeupper(struct net_device *dev,
if (netif_is_lag_master(info->upper_dev)) {
if (info->linking) {
- err = dsa_master_lag_join(dev, info->upper_dev,
- info->upper_info, extack);
+ err = dsa_conduit_lag_join(dev, info->upper_dev,
+ info->upper_info, extack);
err = notifier_from_errno(err);
} else {
- dsa_master_lag_leave(dev, info->upper_dev);
+ dsa_conduit_lag_leave(dev, info->upper_dev);
err = NOTIFY_OK;
}
}
@@ -3319,8 +3326,8 @@ static int dsa_master_changeupper(struct net_device *dev,
return err;
}
-static int dsa_slave_netdevice_event(struct notifier_block *nb,
- unsigned long event, void *ptr)
+static int dsa_user_netdevice_event(struct notifier_block *nb,
+ unsigned long event, void *ptr)
{
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
@@ -3329,15 +3336,15 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
struct netdev_notifier_changeupper_info *info = ptr;
int err;
- err = dsa_slave_prechangeupper_sanity_check(dev, info);
+ err = dsa_user_prechangeupper_sanity_check(dev, info);
if (notifier_to_errno(err))
return err;
- err = dsa_master_prechangeupper_sanity_check(dev, info);
+ err = dsa_conduit_prechangeupper_sanity_check(dev, info);
if (notifier_to_errno(err))
return err;
- err = dsa_lag_master_prechangelower_sanity_check(dev, info);
+ err = dsa_lag_conduit_prechangelower_sanity_check(dev, info);
if (notifier_to_errno(err))
return err;
@@ -3345,11 +3352,11 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
if (notifier_to_errno(err))
return err;
- err = dsa_slave_prechangeupper(dev, ptr);
+ err = dsa_user_prechangeupper(dev, ptr);
if (notifier_to_errno(err))
return err;
- err = dsa_slave_lag_prechangeupper(dev, ptr);
+ err = dsa_user_lag_prechangeupper(dev, ptr);
if (notifier_to_errno(err))
return err;
@@ -3358,15 +3365,15 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
case NETDEV_CHANGEUPPER: {
int err;
- err = dsa_slave_changeupper(dev, ptr);
+ err = dsa_user_changeupper(dev, ptr);
if (notifier_to_errno(err))
return err;
- err = dsa_slave_lag_changeupper(dev, ptr);
+ err = dsa_user_lag_changeupper(dev, ptr);
if (notifier_to_errno(err))
return err;
- err = dsa_master_changeupper(dev, ptr);
+ err = dsa_conduit_changeupper(dev, ptr);
if (notifier_to_errno(err))
return err;
@@ -3377,13 +3384,13 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
struct dsa_port *dp;
int err = 0;
- if (dsa_slave_dev_check(dev)) {
- dp = dsa_slave_to_port(dev);
+ if (dsa_user_dev_check(dev)) {
+ dp = dsa_user_to_port(dev);
err = dsa_port_lag_change(dp, info->lower_state_info);
}
- /* Mirror LAG port events on DSA masters that are in
+ /* Mirror LAG port events on DSA conduits that are in
* a LAG towards their respective switch CPU ports
*/
if (netdev_uses_dsa(dev)) {
@@ -3396,28 +3403,28 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
}
case NETDEV_CHANGE:
case NETDEV_UP: {
- /* Track state of master port.
- * DSA driver may require the master port (and indirectly
+ /* Track state of conduit port.
+ * DSA driver may require the conduit port (and indirectly
* the tagger) to be available for some special operation.
*/
if (netdev_uses_dsa(dev)) {
struct dsa_port *cpu_dp = dev->dsa_ptr;
struct dsa_switch_tree *dst = cpu_dp->ds->dst;
- /* Track when the master port is UP */
- dsa_tree_master_oper_state_change(dst, dev,
- netif_oper_up(dev));
+ /* Track when the conduit port is UP */
+ dsa_tree_conduit_oper_state_change(dst, dev,
+ netif_oper_up(dev));
- /* Track when the master port is ready and can accept
+ /* Track when the conduit port is ready and can accept
* packet.
* NETDEV_UP event is not enough to flag a port as ready.
* We also have to wait for linkwatch_do_dev to dev_activate
* and emit a NETDEV_CHANGE event.
- * We check if a master port is ready by checking if the dev
+ * We check if a conduit port is ready by checking if the dev
* have a qdisc assigned and is not noop.
*/
- dsa_tree_master_admin_state_change(dst, dev,
- !qdisc_tx_is_noop(dev));
+ dsa_tree_conduit_admin_state_change(dst, dev,
+ !qdisc_tx_is_noop(dev));
return NOTIFY_OK;
}
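As the comments above describe, a conduit is only treated as usable once it is both operationally up and has a real qdisc attached (NETDEV_UP alone is not enough). Conceptually, as an illustrative sketch only (the real code tracks admin and oper state as separate flags on the switch tree):

/* Illustrative only: the combined condition under which the conduit
 * can actually carry tagged traffic for its user ports.
 */
static bool example_conduit_usable(const struct net_device *conduit)
{
	return netif_oper_up(conduit) && !qdisc_tx_is_noop(conduit);
}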
@@ -3435,7 +3442,7 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
cpu_dp = dev->dsa_ptr;
dst = cpu_dp->ds->dst;
- dsa_tree_master_admin_state_change(dst, dev, false);
+ dsa_tree_conduit_admin_state_change(dst, dev, false);
list_for_each_entry(dp, &dst->ports, list) {
if (!dsa_port_is_user(dp))
@@ -3444,7 +3451,7 @@ static int dsa_slave_netdevice_event(struct notifier_block *nb,
if (dp->cpu_dp != cpu_dp)
continue;
- list_add(&dp->slave->close_list, &close_list);
+ list_add(&dp->user->close_list, &close_list);
}
dev_close_many(&close_list, true);
@@ -3470,7 +3477,7 @@ dsa_fdb_offload_notify(struct dsa_switchdev_event_work *switchdev_work)
switchdev_work->orig_dev, &info.info, NULL);
}
-static void dsa_slave_switchdev_event_work(struct work_struct *work)
+static void dsa_user_switchdev_event_work(struct work_struct *work)
{
struct dsa_switchdev_event_work *switchdev_work =
container_of(work, struct dsa_switchdev_event_work, work);
@@ -3481,7 +3488,7 @@ static void dsa_slave_switchdev_event_work(struct work_struct *work)
struct dsa_port *dp;
int err;
- dp = dsa_slave_to_port(dev);
+ dp = dsa_user_to_port(dev);
ds = dp->ds;
switch (switchdev_work->event) {
@@ -3523,7 +3530,7 @@ static void dsa_slave_switchdev_event_work(struct work_struct *work)
static bool dsa_foreign_dev_check(const struct net_device *dev,
const struct net_device *foreign_dev)
{
- const struct dsa_port *dp = dsa_slave_to_port(dev);
+ const struct dsa_port *dp = dsa_user_to_port(dev);
struct dsa_switch_tree *dst = dp->ds->dst;
if (netif_is_bridge_master(foreign_dev))
@@ -3536,13 +3543,13 @@ static bool dsa_foreign_dev_check(const struct net_device *dev,
return true;
}
-static int dsa_slave_fdb_event(struct net_device *dev,
- struct net_device *orig_dev,
- unsigned long event, const void *ctx,
- const struct switchdev_notifier_fdb_info *fdb_info)
+static int dsa_user_fdb_event(struct net_device *dev,
+ struct net_device *orig_dev,
+ unsigned long event, const void *ctx,
+ const struct switchdev_notifier_fdb_info *fdb_info)
{
struct dsa_switchdev_event_work *switchdev_work;
- struct dsa_port *dp = dsa_slave_to_port(dev);
+ struct dsa_port *dp = dsa_user_to_port(dev);
bool host_addr = fdb_info->is_local;
struct dsa_switch *ds = dp->ds;
@@ -3591,7 +3598,7 @@ static int dsa_slave_fdb_event(struct net_device *dev,
orig_dev->name, fdb_info->addr, fdb_info->vid,
host_addr ? " as host address" : "");
- INIT_WORK(&switchdev_work->work, dsa_slave_switchdev_event_work);
+ INIT_WORK(&switchdev_work->work, dsa_user_switchdev_event_work);
switchdev_work->event = event;
switchdev_work->dev = dev;
switchdev_work->orig_dev = orig_dev;
@@ -3606,8 +3613,8 @@ static int dsa_slave_fdb_event(struct net_device *dev,
}
/* Called under rcu_read_lock() */
-static int dsa_slave_switchdev_event(struct notifier_block *unused,
- unsigned long event, void *ptr)
+static int dsa_user_switchdev_event(struct notifier_block *unused,
+ unsigned long event, void *ptr)
{
struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
int err;
@@ -3615,15 +3622,15 @@ static int dsa_slave_switchdev_event(struct notifier_block *unused,
switch (event) {
case SWITCHDEV_PORT_ATTR_SET:
err = switchdev_handle_port_attr_set(dev, ptr,
- dsa_slave_dev_check,
- dsa_slave_port_attr_set);
+ dsa_user_dev_check,
+ dsa_user_port_attr_set);
return notifier_from_errno(err);
case SWITCHDEV_FDB_ADD_TO_DEVICE:
case SWITCHDEV_FDB_DEL_TO_DEVICE:
err = switchdev_handle_fdb_event_to_device(dev, event, ptr,
- dsa_slave_dev_check,
+ dsa_user_dev_check,
dsa_foreign_dev_check,
- dsa_slave_fdb_event);
+ dsa_user_fdb_event);
return notifier_from_errno(err);
default:
return NOTIFY_DONE;
@@ -3632,8 +3639,8 @@ static int dsa_slave_switchdev_event(struct notifier_block *unused,
return NOTIFY_OK;
}
-static int dsa_slave_switchdev_blocking_event(struct notifier_block *unused,
- unsigned long event, void *ptr)
+static int dsa_user_switchdev_blocking_event(struct notifier_block *unused,
+ unsigned long event, void *ptr)
{
struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
int err;
@@ -3641,52 +3648,52 @@ static int dsa_slave_switchdev_blocking_event(struct notifier_block *unused,
switch (event) {
case SWITCHDEV_PORT_OBJ_ADD:
err = switchdev_handle_port_obj_add_foreign(dev, ptr,
- dsa_slave_dev_check,
+ dsa_user_dev_check,
dsa_foreign_dev_check,
- dsa_slave_port_obj_add);
+ dsa_user_port_obj_add);
return notifier_from_errno(err);
case SWITCHDEV_PORT_OBJ_DEL:
err = switchdev_handle_port_obj_del_foreign(dev, ptr,
- dsa_slave_dev_check,
+ dsa_user_dev_check,
dsa_foreign_dev_check,
- dsa_slave_port_obj_del);
+ dsa_user_port_obj_del);
return notifier_from_errno(err);
case SWITCHDEV_PORT_ATTR_SET:
err = switchdev_handle_port_attr_set(dev, ptr,
- dsa_slave_dev_check,
- dsa_slave_port_attr_set);
+ dsa_user_dev_check,
+ dsa_user_port_attr_set);
return notifier_from_errno(err);
}
return NOTIFY_DONE;
}
-static struct notifier_block dsa_slave_nb __read_mostly = {
- .notifier_call = dsa_slave_netdevice_event,
+static struct notifier_block dsa_user_nb __read_mostly = {
+ .notifier_call = dsa_user_netdevice_event,
};
-struct notifier_block dsa_slave_switchdev_notifier = {
- .notifier_call = dsa_slave_switchdev_event,
+struct notifier_block dsa_user_switchdev_notifier = {
+ .notifier_call = dsa_user_switchdev_event,
};
-struct notifier_block dsa_slave_switchdev_blocking_notifier = {
- .notifier_call = dsa_slave_switchdev_blocking_event,
+struct notifier_block dsa_user_switchdev_blocking_notifier = {
+ .notifier_call = dsa_user_switchdev_blocking_event,
};
-int dsa_slave_register_notifier(void)
+int dsa_user_register_notifier(void)
{
struct notifier_block *nb;
int err;
- err = register_netdevice_notifier(&dsa_slave_nb);
+ err = register_netdevice_notifier(&dsa_user_nb);
if (err)
return err;
- err = register_switchdev_notifier(&dsa_slave_switchdev_notifier);
+ err = register_switchdev_notifier(&dsa_user_switchdev_notifier);
if (err)
goto err_switchdev_nb;
- nb = &dsa_slave_switchdev_blocking_notifier;
+ nb = &dsa_user_switchdev_blocking_notifier;
err = register_switchdev_blocking_notifier(nb);
if (err)
goto err_switchdev_blocking_nb;
@@ -3694,27 +3701,27 @@ int dsa_slave_register_notifier(void)
return 0;
err_switchdev_blocking_nb:
- unregister_switchdev_notifier(&dsa_slave_switchdev_notifier);
+ unregister_switchdev_notifier(&dsa_user_switchdev_notifier);
err_switchdev_nb:
- unregister_netdevice_notifier(&dsa_slave_nb);
+ unregister_netdevice_notifier(&dsa_user_nb);
return err;
}
-void dsa_slave_unregister_notifier(void)
+void dsa_user_unregister_notifier(void)
{
struct notifier_block *nb;
int err;
- nb = &dsa_slave_switchdev_blocking_notifier;
+ nb = &dsa_user_switchdev_blocking_notifier;
err = unregister_switchdev_blocking_notifier(nb);
if (err)
pr_err("DSA: failed to unregister switchdev blocking notifier (%d)\n", err);
- err = unregister_switchdev_notifier(&dsa_slave_switchdev_notifier);
+ err = unregister_switchdev_notifier(&dsa_user_switchdev_notifier);
if (err)
pr_err("DSA: failed to unregister switchdev notifier (%d)\n", err);
- err = unregister_netdevice_notifier(&dsa_slave_nb);
+ err = unregister_netdevice_notifier(&dsa_user_nb);
if (err)
- pr_err("DSA: failed to unregister slave notifier (%d)\n", err);
+ pr_err("DSA: failed to unregister user notifier (%d)\n", err);
}
diff --git a/net/dsa/user.h b/net/dsa/user.h
new file mode 100644
index 000000000000..996069130bea
--- /dev/null
+++ b/net/dsa/user.h
@@ -0,0 +1,69 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef __DSA_USER_H
+#define __DSA_USER_H
+
+#include <linux/if_bridge.h>
+#include <linux/if_vlan.h>
+#include <linux/list.h>
+#include <linux/netpoll.h>
+#include <linux/types.h>
+#include <net/dsa.h>
+#include <net/gro_cells.h>
+
+struct net_device;
+struct netlink_ext_ack;
+
+extern struct notifier_block dsa_user_switchdev_notifier;
+extern struct notifier_block dsa_user_switchdev_blocking_notifier;
+
+struct dsa_user_priv {
+ /* Copy of CPU port xmit for faster access in user transmit hot path */
+ struct sk_buff * (*xmit)(struct sk_buff *skb,
+ struct net_device *dev);
+
+ struct gro_cells gcells;
+
+ /* DSA port data, such as switch, port index, etc. */
+ struct dsa_port *dp;
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ struct netpoll *netpoll;
+#endif
+
+ /* TC context */
+ struct list_head mall_tc_list;
+};
+
+void dsa_user_mii_bus_init(struct dsa_switch *ds);
+int dsa_user_create(struct dsa_port *dp);
+void dsa_user_destroy(struct net_device *user_dev);
+int dsa_user_suspend(struct net_device *user_dev);
+int dsa_user_resume(struct net_device *user_dev);
+int dsa_user_register_notifier(void);
+void dsa_user_unregister_notifier(void);
+void dsa_user_sync_ha(struct net_device *dev);
+void dsa_user_unsync_ha(struct net_device *dev);
+void dsa_user_setup_tagger(struct net_device *user);
+int dsa_user_change_mtu(struct net_device *dev, int new_mtu);
+int dsa_user_change_conduit(struct net_device *dev, struct net_device *conduit,
+ struct netlink_ext_ack *extack);
+int dsa_user_manage_vlan_filtering(struct net_device *dev,
+ bool vlan_filtering);
+
+static inline struct dsa_port *dsa_user_to_port(const struct net_device *dev)
+{
+ struct dsa_user_priv *p = netdev_priv(dev);
+
+ return p->dp;
+}
+
+static inline struct net_device *
+dsa_user_to_conduit(const struct net_device *dev)
+{
+ struct dsa_port *dp = dsa_user_to_port(dev);
+
+ return dsa_port_to_conduit(dp);
+}
+
+#endif
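For orientation, the inline helpers above are how every user-port callback reaches its struct dsa_port and, through it, the conduit interface. A minimal sketch in the style of the ndo implementations in user.c (the function name is illustrative):

static int example_user_get_iflink(const struct net_device *dev)
{
	/* A user port reports its conduit's ifindex as the lower link */
	return dsa_user_to_conduit(dev)->ifindex;
}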
diff --git a/net/ethtool/common.c b/net/ethtool/common.c
index f5598c5f50de..b4419fb6df6a 100644
--- a/net/ethtool/common.c
+++ b/net/ethtool/common.c
@@ -685,3 +685,24 @@ ethtool_params_from_link_mode(struct ethtool_link_ksettings *link_ksettings,
link_ksettings->base.duplex = link_info->duplex;
}
EXPORT_SYMBOL_GPL(ethtool_params_from_link_mode);
+
+/**
+ * ethtool_forced_speed_maps_init
+ * @maps: Pointer to an array of Ethtool forced speed maps
+ * @size: Array size
+ *
+ * Initialize an array of Ethtool forced speed maps to Ethtool link modes. This
+ * should be called during driver module init.
+ */
+void
+ethtool_forced_speed_maps_init(struct ethtool_forced_speed_map *maps, u32 size)
+{
+ for (u32 i = 0; i < size; i++) {
+ struct ethtool_forced_speed_map *map = &maps[i];
+
+ linkmode_set_bit_array(map->cap_arr, map->arr_size, map->caps);
+ map->cap_arr = NULL;
+ map->arr_size = 0;
+ }
+}
+EXPORT_SYMBOL_GPL(ethtool_forced_speed_maps_init);
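A hedged sketch of the intended driver-side usage, assuming each map entry carries a speed plus the cap_arr/arr_size pair consumed by the loop above (the .speed field, the chosen link modes, and all names are assumptions for illustration, not taken from this patch):

/* Hypothetical driver-side usage; assumes <linux/ethtool.h> definitions */
static const u32 example_10g_modes[] = {
	ETHTOOL_LINK_MODE_10000baseT_Full_BIT,
	ETHTOOL_LINK_MODE_10000baseKR_Full_BIT,
};

static struct ethtool_forced_speed_map example_maps[] = {
	{
		.speed	  = SPEED_10000,	/* assumed field, see note above */
		.cap_arr  = example_10g_modes,
		.arr_size = ARRAY_SIZE(example_10g_modes),
	},
};

static int __init example_driver_init(void)
{
	/* Resolve the bit arrays into link-mode masks once, at module init;
	 * a real driver would wire this up via module_init().
	 */
	ethtool_forced_speed_maps_init(example_maps, ARRAY_SIZE(example_maps));
	return 0;
}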
diff --git a/net/handshake/genl.c b/net/handshake/genl.c
index 233be5cbfec9..f55d14d7b726 100644
--- a/net/handshake/genl.c
+++ b/net/handshake/genl.c
@@ -18,7 +18,7 @@ static const struct nla_policy handshake_accept_nl_policy[HANDSHAKE_A_ACCEPT_HAN
/* HANDSHAKE_CMD_DONE - do */
static const struct nla_policy handshake_done_nl_policy[HANDSHAKE_A_DONE_REMOTE_AUTH + 1] = {
[HANDSHAKE_A_DONE_STATUS] = { .type = NLA_U32, },
- [HANDSHAKE_A_DONE_SOCKFD] = { .type = NLA_U32, },
+ [HANDSHAKE_A_DONE_SOCKFD] = { .type = NLA_S32, },
[HANDSHAKE_A_DONE_REMOTE_AUTH] = { .type = NLA_U32, },
};
diff --git a/net/handshake/netlink.c b/net/handshake/netlink.c
index 80c7302692c7..89637e732866 100644
--- a/net/handshake/netlink.c
+++ b/net/handshake/netlink.c
@@ -143,7 +143,7 @@ int handshake_nl_done_doit(struct sk_buff *skb, struct genl_info *info)
if (GENL_REQ_ATTR_CHECK(info, HANDSHAKE_A_DONE_SOCKFD))
return -EINVAL;
- fd = nla_get_u32(info->attrs[HANDSHAKE_A_DONE_SOCKFD]);
+ fd = nla_get_s32(info->attrs[HANDSHAKE_A_DONE_SOCKFD]);
sock = sockfd_lookup(fd, &err);
if (!sock)
diff --git a/net/handshake/tlshd.c b/net/handshake/tlshd.c
index bbfb4095ddd6..d697f68c598c 100644
--- a/net/handshake/tlshd.c
+++ b/net/handshake/tlshd.c
@@ -173,9 +173,9 @@ static int tls_handshake_put_certificate(struct sk_buff *msg,
if (!entry_attr)
return -EMSGSIZE;
- if (nla_put_u32(msg, HANDSHAKE_A_X509_CERT,
+ if (nla_put_s32(msg, HANDSHAKE_A_X509_CERT,
treq->th_certificate) ||
- nla_put_u32(msg, HANDSHAKE_A_X509_PRIVKEY,
+ nla_put_s32(msg, HANDSHAKE_A_X509_PRIVKEY,
treq->th_privkey)) {
nla_nest_cancel(msg, entry_attr);
return -EMSGSIZE;
@@ -214,7 +214,7 @@ static int tls_handshake_accept(struct handshake_req *req,
goto out_cancel;
ret = -EMSGSIZE;
- ret = nla_put_u32(msg, HANDSHAKE_A_ACCEPT_SOCKFD, fd);
+ ret = nla_put_s32(msg, HANDSHAKE_A_ACCEPT_SOCKFD, fd);
if (ret < 0)
goto out_cancel;
ret = nla_put_u32(msg, HANDSHAKE_A_ACCEPT_MESSAGE_TYPE, treq->th_type);
diff --git a/net/ipv4/Kconfig b/net/ipv4/Kconfig
index 2dfb12230f08..8e94ed7c56a0 100644
--- a/net/ipv4/Kconfig
+++ b/net/ipv4/Kconfig
@@ -741,10 +741,27 @@ config DEFAULT_TCP_CONG
default "bbr" if DEFAULT_BBR
default "cubic"
+config TCP_SIGPOOL
+ tristate
+
+config TCP_AO
+ bool "TCP: Authentication Option (RFC5925)"
+ select CRYPTO
+ select TCP_SIGPOOL
+ depends on 64BIT && IPV6 != m # seq-number extension needs WRITE_ONCE(u64)
+ help
+ TCP-AO specifies the use of stronger Message Authentication Codes (MACs),
+ protects against replays for long-lived TCP connections, and
+ provides more details on the association of security with TCP
+ connections than TCP MD5 (See RFC5925)
+
+ If unsure, say N.
+
config TCP_MD5SIG
bool "TCP: MD5 Signature Option support (RFC2385)"
select CRYPTO
select CRYPTO_MD5
+ select TCP_SIGPOOL
help
RFC2385 specifies a method of giving MD5 protection to TCP sessions.
Its main (only?) use is to protect BGP sessions between core routers
diff --git a/net/ipv4/Makefile b/net/ipv4/Makefile
index b18ba8ef93ad..e144a02a6a61 100644
--- a/net/ipv4/Makefile
+++ b/net/ipv4/Makefile
@@ -62,12 +62,14 @@ obj-$(CONFIG_TCP_CONG_SCALABLE) += tcp_scalable.o
obj-$(CONFIG_TCP_CONG_LP) += tcp_lp.o
obj-$(CONFIG_TCP_CONG_YEAH) += tcp_yeah.o
obj-$(CONFIG_TCP_CONG_ILLINOIS) += tcp_illinois.o
+obj-$(CONFIG_TCP_SIGPOOL) += tcp_sigpool.o
obj-$(CONFIG_NET_SOCK_MSG) += tcp_bpf.o
obj-$(CONFIG_BPF_SYSCALL) += udp_bpf.o
obj-$(CONFIG_NETLABEL) += cipso_ipv4.o
obj-$(CONFIG_XFRM) += xfrm4_policy.o xfrm4_state.o xfrm4_input.o \
xfrm4_output.o xfrm4_protocol.o
+obj-$(CONFIG_TCP_AO) += tcp_ao.o
ifeq ($(CONFIG_BPF_JIT),y)
obj-$(CONFIG_BPF_SYSCALL) += bpf_tcp_ca.o
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index 2713c9b06c4c..fb81de10d332 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -452,7 +452,7 @@ int inet_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len)
/* BPF prog is run before any checks are done so that if the prog
* changes context in a wrong way it will be caught.
*/
- err = BPF_CGROUP_RUN_PROG_INET_BIND_LOCK(sk, uaddr,
+ err = BPF_CGROUP_RUN_PROG_INET_BIND_LOCK(sk, uaddr, &addr_len,
CGROUP_INET4_BIND, &flags);
if (err)
return err;
@@ -794,6 +794,7 @@ int inet_getname(struct socket *sock, struct sockaddr *uaddr,
struct sock *sk = sock->sk;
struct inet_sock *inet = inet_sk(sk);
DECLARE_SOCKADDR(struct sockaddr_in *, sin, uaddr);
+ int sin_addr_len = sizeof(*sin);
sin->sin_family = AF_INET;
lock_sock(sk);
@@ -806,7 +807,7 @@ int inet_getname(struct socket *sock, struct sockaddr *uaddr,
}
sin->sin_port = inet->inet_dport;
sin->sin_addr.s_addr = inet->inet_daddr;
- BPF_CGROUP_RUN_SA_PROG(sk, (struct sockaddr *)sin,
+ BPF_CGROUP_RUN_SA_PROG(sk, (struct sockaddr *)sin, &sin_addr_len,
CGROUP_INET4_GETPEERNAME);
} else {
__be32 addr = inet->inet_rcv_saddr;
@@ -814,12 +815,12 @@ int inet_getname(struct socket *sock, struct sockaddr *uaddr,
addr = inet->inet_saddr;
sin->sin_port = inet->inet_sport;
sin->sin_addr.s_addr = addr;
- BPF_CGROUP_RUN_SA_PROG(sk, (struct sockaddr *)sin,
+ BPF_CGROUP_RUN_SA_PROG(sk, (struct sockaddr *)sin, &sin_addr_len,
CGROUP_INET4_GETSOCKNAME);
}
release_sock(sk);
memset(sin->sin_zero, 0, sizeof(sin->sin_zero));
- return sizeof(*sin);
+ return sin_addr_len;
}
EXPORT_SYMBOL(inet_getname);
diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c
index cb5dbee9e018..2cc50cbfc2a3 100644
--- a/net/ipv4/datagram.c
+++ b/net/ipv4/datagram.c
@@ -39,11 +39,11 @@ int __ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len
saddr = inet->inet_saddr;
if (ipv4_is_multicast(usin->sin_addr.s_addr)) {
if (!oif || netif_index_is_l3_master(sock_net(sk), oif))
- oif = inet->mc_index;
+ oif = READ_ONCE(inet->mc_index);
if (!saddr)
- saddr = inet->mc_addr;
+ saddr = READ_ONCE(inet->mc_addr);
} else if (!oif) {
- oif = inet->uc_index;
+ oif = READ_ONCE(inet->uc_index);
}
fl4 = &inet->cork.fl.u.ip4;
rt = ip_route_connect(fl4, usin->sin_addr.s_addr, saddr, oif,
diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
index 418e5fb58fd3..76c3ea75b8dd 100644
--- a/net/ipv4/igmp.c
+++ b/net/ipv4/igmp.c
@@ -2944,8 +2944,6 @@ static struct ip_sf_list *igmp_mcf_get_next(struct seq_file *seq, struct ip_sf_l
continue;
state->im = rcu_dereference(state->idev->mc_list);
}
- if (!state->im)
- break;
spin_lock_bh(&state->im->lock);
psf = state->im->sources;
}
diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c
index e13a84433413..f01aee832aab 100644
--- a/net/ipv4/inet_diag.c
+++ b/net/ipv4/inet_diag.c
@@ -134,7 +134,7 @@ int inet_diag_msg_attrs_fill(struct sock *sk, struct sk_buff *skb,
* hence this needs to be included regardless of socket family.
*/
if (ext & (1 << (INET_DIAG_TOS - 1)))
- if (nla_put_u8(skb, INET_DIAG_TOS, inet->tos) < 0)
+ if (nla_put_u8(skb, INET_DIAG_TOS, READ_ONCE(inet->tos)) < 0)
goto errout;
#if IS_ENABLED(CONFIG_IPV6)
@@ -165,7 +165,7 @@ int inet_diag_msg_attrs_fill(struct sock *sk, struct sk_buff *skb,
* For cgroup2 classid is always zero.
*/
if (!classid)
- classid = sk->sk_priority;
+ classid = READ_ONCE(sk->sk_priority);
if (nla_put_u32(skb, INET_DIAG_CLASS_ID, classid))
goto errout;
diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
index 66fac1216d46..8b65f12583eb 100644
--- a/net/ipv4/ip_forward.c
+++ b/net/ipv4/ip_forward.c
@@ -66,8 +66,6 @@ static int ip_forward_finish(struct net *net, struct sock *sk, struct sk_buff *s
{
struct ip_options *opt = &(IPCB(skb)->opt);
- __IP_INC_STATS(net, IPSTATS_MIB_OUTFORWDATAGRAMS);
-
#ifdef CONFIG_NET_SWITCHDEV
if (skb->offload_l3_fwd_mark) {
consume_skb(skb);
@@ -130,6 +128,8 @@ int ip_forward(struct sk_buff *skb)
if (opt->is_strictroute && rt->rt_uses_gateway)
goto sr_failed;
+ __IP_INC_STATS(net, IPSTATS_MIB_OUTFORWDATAGRAMS);
+
IPCB(skb)->flags |= IPSKB_FORWARDED;
mtu = ip_dst_mtu_maybe_forward(&rt->dst, true);
if (ip_exceeds_mtu(skb, mtu)) {
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index 4ab877cf6d35..b06f678b03a1 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -101,6 +101,8 @@ int __ip_local_out(struct net *net, struct sock *sk, struct sk_buff *skb)
{
struct iphdr *iph = ip_hdr(skb);
+ IP_INC_STATS(net, IPSTATS_MIB_OUTREQUESTS);
+
iph_set_totlen(iph, skb->len);
ip_send_check(iph);
@@ -544,7 +546,7 @@ EXPORT_SYMBOL(__ip_queue_xmit);
int ip_queue_xmit(struct sock *sk, struct sk_buff *skb, struct flowi *fl)
{
- return __ip_queue_xmit(sk, skb, fl, inet_sk(sk)->tos);
+ return __ip_queue_xmit(sk, skb, fl, READ_ONCE(inet_sk(sk)->tos));
}
EXPORT_SYMBOL(ip_queue_xmit);
@@ -1387,8 +1389,8 @@ struct sk_buff *__ip_make_skb(struct sock *sk,
struct ip_options *opt = NULL;
struct rtable *rt = (struct rtable *)cork->dst;
struct iphdr *iph;
+ u8 pmtudisc, ttl;
__be16 df = 0;
- __u8 ttl;
skb = __skb_dequeue(queue);
if (!skb)
@@ -1418,8 +1420,9 @@ struct sk_buff *__ip_make_skb(struct sock *sk,
/* DF bit is set when we want to see DF on outgoing frames.
* If ignore_df is set too, we still allow to fragment this frame
* locally. */
- if (inet->pmtudisc == IP_PMTUDISC_DO ||
- inet->pmtudisc == IP_PMTUDISC_PROBE ||
+ pmtudisc = READ_ONCE(inet->pmtudisc);
+ if (pmtudisc == IP_PMTUDISC_DO ||
+ pmtudisc == IP_PMTUDISC_PROBE ||
(skb->len <= dst_mtu(&rt->dst) &&
ip_dont_fragment(sk, &rt->dst)))
df = htons(IP_DF);
@@ -1430,14 +1433,14 @@ struct sk_buff *__ip_make_skb(struct sock *sk,
if (cork->ttl != 0)
ttl = cork->ttl;
else if (rt->rt_type == RTN_MULTICAST)
- ttl = inet->mc_ttl;
+ ttl = READ_ONCE(inet->mc_ttl);
else
ttl = ip_select_ttl(inet, &rt->dst);
iph = ip_hdr(skb);
iph->version = 4;
iph->ihl = 5;
- iph->tos = (cork->tos != -1) ? cork->tos : inet->tos;
+ iph->tos = (cork->tos != -1) ? cork->tos : READ_ONCE(inet->tos);
iph->frag_off = df;
iph->ttl = ttl;
iph->protocol = sk->sk_protocol;
@@ -1449,7 +1452,7 @@ struct sk_buff *__ip_make_skb(struct sock *sk,
ip_options_build(skb, opt, cork->addr, rt);
}
- skb->priority = (cork->tos != -1) ? cork->priority: sk->sk_priority;
+ skb->priority = (cork->tos != -1) ? cork->priority: READ_ONCE(sk->sk_priority);
skb->mark = cork->mark;
skb->tstamp = cork->transmit_time;
/*
diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
index cce9cb25f3b3..2efc53526a38 100644
--- a/net/ipv4/ip_sockglue.c
+++ b/net/ipv4/ip_sockglue.c
@@ -587,12 +587,14 @@ out:
void __ip_sock_set_tos(struct sock *sk, int val)
{
+ u8 old_tos = inet_sk(sk)->tos;
+
if (sk->sk_type == SOCK_STREAM) {
val &= ~INET_ECN_MASK;
- val |= inet_sk(sk)->tos & INET_ECN_MASK;
+ val |= old_tos & INET_ECN_MASK;
}
- if (inet_sk(sk)->tos != val) {
- inet_sk(sk)->tos = val;
+ if (old_tos != val) {
+ WRITE_ONCE(inet_sk(sk)->tos, val);
WRITE_ONCE(sk->sk_priority, rt_tos2priority(val));
sk_dst_reset(sk);
}
@@ -600,9 +602,9 @@ void __ip_sock_set_tos(struct sock *sk, int val)
void ip_sock_set_tos(struct sock *sk, int val)
{
- lock_sock(sk);
+ sockopt_lock_sock(sk);
__ip_sock_set_tos(sk, val);
- release_sock(sk);
+ sockopt_release_sock(sk);
}
EXPORT_SYMBOL(ip_sock_set_tos);
@@ -622,9 +624,7 @@ int ip_sock_set_mtu_discover(struct sock *sk, int val)
{
if (val < IP_PMTUDISC_DONT || val > IP_PMTUDISC_OMIT)
return -EINVAL;
- lock_sock(sk);
- inet_sk(sk)->pmtudisc = val;
- release_sock(sk);
+ WRITE_ONCE(inet_sk(sk)->pmtudisc, val);
return 0;
}
EXPORT_SYMBOL(ip_sock_set_mtu_discover);
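These ip_sockglue.c hunks (and the ip_output.c/inet_diag.c ones above) follow one pattern: the socket lock still serializes the single writer, WRITE_ONCE() publishes the new value, and readers use READ_ONCE() instead of taking the lock. A minimal stand-alone sketch of that pairing, with simplified macro stand-ins and a hypothetical field name:

/* Minimal sketch of the single-writer / lockless-reader pattern used above.
 * READ_ONCE()/WRITE_ONCE() are approximated with volatile accesses; in the
 * kernel they additionally document intent and prevent load/store tearing.
 */
#include <stdio.h>

#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile const __typeof__(x) *)&(x))

struct fake_inet_sock {
	unsigned char tos;		/* hypothetical stand-in for inet->tos */
};

static void set_tos(struct fake_inet_sock *sk, unsigned char val)
{
	/* setter path: still serialized elsewhere, publishes with WRITE_ONCE() */
	if (READ_ONCE(sk->tos) != val)
		WRITE_ONCE(sk->tos, val);
}

static unsigned char get_tos(const struct fake_inet_sock *sk)
{
	/* reader path: no lock, a single READ_ONCE() yields a consistent value */
	return READ_ONCE(sk->tos);
}

int main(void)
{
	struct fake_inet_sock sk = { .tos = 0 };

	set_tos(&sk, 0x10);
	printf("tos=0x%x\n", get_tos(&sk));
	return 0;
}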
@@ -1039,6 +1039,22 @@ int do_ip_setsockopt(struct sock *sk, int level, int optname,
WRITE_ONCE(inet->min_ttl, val);
return 0;
+ case IP_MULTICAST_TTL:
+ if (sk->sk_type == SOCK_STREAM)
+ return -EINVAL;
+ if (optlen < 1)
+ return -EINVAL;
+ if (val == -1)
+ val = 1;
+ if (val < 0 || val > 255)
+ return -EINVAL;
+ WRITE_ONCE(inet->mc_ttl, val);
+ return 0;
+ case IP_MTU_DISCOVER:
+ return ip_sock_set_mtu_discover(sk, val);
+ case IP_TOS: /* This sets both TOS and Precedence */
+ ip_sock_set_tos(sk, val);
+ return 0;
}
err = 0;
@@ -1093,25 +1109,6 @@ int do_ip_setsockopt(struct sock *sk, int level, int optname,
}
}
break;
- case IP_TOS: /* This sets both TOS and Precedence */
- __ip_sock_set_tos(sk, val);
- break;
- case IP_MTU_DISCOVER:
- if (val < IP_PMTUDISC_DONT || val > IP_PMTUDISC_OMIT)
- goto e_inval;
- inet->pmtudisc = val;
- break;
- case IP_MULTICAST_TTL:
- if (sk->sk_type == SOCK_STREAM)
- goto e_inval;
- if (optlen < 1)
- goto e_inval;
- if (val == -1)
- val = 1;
- if (val < 0 || val > 255)
- goto e_inval;
- inet->mc_ttl = val;
- break;
case IP_UNICAST_IF:
{
struct net_device *dev = NULL;
@@ -1123,7 +1120,7 @@ int do_ip_setsockopt(struct sock *sk, int level, int optname,
ifindex = (__force int)ntohl((__force __be32)val);
if (ifindex == 0) {
- inet->uc_index = 0;
+ WRITE_ONCE(inet->uc_index, 0);
err = 0;
break;
}
@@ -1140,7 +1137,7 @@ int do_ip_setsockopt(struct sock *sk, int level, int optname,
if (sk->sk_bound_dev_if && midx != sk->sk_bound_dev_if)
break;
- inet->uc_index = ifindex;
+ WRITE_ONCE(inet->uc_index, ifindex);
err = 0;
break;
}
@@ -1178,8 +1175,8 @@ int do_ip_setsockopt(struct sock *sk, int level, int optname,
if (!mreq.imr_ifindex) {
if (mreq.imr_address.s_addr == htonl(INADDR_ANY)) {
- inet->mc_index = 0;
- inet->mc_addr = 0;
+ WRITE_ONCE(inet->mc_index, 0);
+ WRITE_ONCE(inet->mc_addr, 0);
err = 0;
break;
}
@@ -1204,8 +1201,8 @@ int do_ip_setsockopt(struct sock *sk, int level, int optname,
midx != sk->sk_bound_dev_if)
break;
- inet->mc_index = mreq.imr_ifindex;
- inet->mc_addr = mreq.imr_address.s_addr;
+ WRITE_ONCE(inet->mc_index, mreq.imr_ifindex);
+ WRITE_ONCE(inet->mc_addr, mreq.imr_address.s_addr);
err = 0;
break;
}
@@ -1592,27 +1589,29 @@ int do_ip_getsockopt(struct sock *sk, int level, int optname,
case IP_MINTTL:
val = READ_ONCE(inet->min_ttl);
goto copyval;
- }
-
- if (needs_rtnl)
- rtnl_lock();
- sockopt_lock_sock(sk);
-
- switch (optname) {
+ case IP_MULTICAST_TTL:
+ val = READ_ONCE(inet->mc_ttl);
+ goto copyval;
+ case IP_MTU_DISCOVER:
+ val = READ_ONCE(inet->pmtudisc);
+ goto copyval;
+ case IP_TOS:
+ val = READ_ONCE(inet->tos);
+ goto copyval;
case IP_OPTIONS:
{
unsigned char optbuf[sizeof(struct ip_options)+40];
struct ip_options *opt = (struct ip_options *)optbuf;
struct ip_options_rcu *inet_opt;
- inet_opt = rcu_dereference_protected(inet->inet_opt,
- lockdep_sock_is_held(sk));
+ rcu_read_lock();
+ inet_opt = rcu_dereference(inet->inet_opt);
opt->optlen = 0;
if (inet_opt)
memcpy(optbuf, &inet_opt->opt,
sizeof(struct ip_options) +
inet_opt->opt.optlen);
- sockopt_release_sock(sk);
+ rcu_read_unlock();
if (opt->optlen == 0) {
len = 0;
@@ -1628,12 +1627,6 @@ int do_ip_getsockopt(struct sock *sk, int level, int optname,
return -EFAULT;
return 0;
}
- case IP_TOS:
- val = inet->tos;
- break;
- case IP_MTU_DISCOVER:
- val = inet->pmtudisc;
- break;
case IP_MTU:
{
struct dst_entry *dst;
@@ -1643,24 +1636,55 @@ int do_ip_getsockopt(struct sock *sk, int level, int optname,
val = dst_mtu(dst);
dst_release(dst);
}
- if (!val) {
- sockopt_release_sock(sk);
+ if (!val)
return -ENOTCONN;
+ goto copyval;
+ }
+ case IP_PKTOPTIONS:
+ {
+ struct msghdr msg;
+
+ if (sk->sk_type != SOCK_STREAM)
+ return -ENOPROTOOPT;
+
+ if (optval.is_kernel) {
+ msg.msg_control_is_user = false;
+ msg.msg_control = optval.kernel;
+ } else {
+ msg.msg_control_is_user = true;
+ msg.msg_control_user = optval.user;
}
- break;
+ msg.msg_controllen = len;
+ msg.msg_flags = in_compat_syscall() ? MSG_CMSG_COMPAT : 0;
+
+ if (inet_test_bit(PKTINFO, sk)) {
+ struct in_pktinfo info;
+
+ info.ipi_addr.s_addr = READ_ONCE(inet->inet_rcv_saddr);
+ info.ipi_spec_dst.s_addr = READ_ONCE(inet->inet_rcv_saddr);
+ info.ipi_ifindex = READ_ONCE(inet->mc_index);
+ put_cmsg(&msg, SOL_IP, IP_PKTINFO, sizeof(info), &info);
+ }
+ if (inet_test_bit(TTL, sk)) {
+ int hlim = READ_ONCE(inet->mc_ttl);
+
+ put_cmsg(&msg, SOL_IP, IP_TTL, sizeof(hlim), &hlim);
+ }
+ if (inet_test_bit(TOS, sk)) {
+ int tos = READ_ONCE(inet->rcv_tos);
+ put_cmsg(&msg, SOL_IP, IP_TOS, sizeof(tos), &tos);
+ }
+ len -= msg.msg_controllen;
+ return copy_to_sockptr(optlen, &len, sizeof(int));
}
- case IP_MULTICAST_TTL:
- val = inet->mc_ttl;
- break;
case IP_UNICAST_IF:
- val = (__force int)htonl((__u32) inet->uc_index);
- break;
+ val = (__force int)htonl((__u32) READ_ONCE(inet->uc_index));
+ goto copyval;
case IP_MULTICAST_IF:
{
struct in_addr addr;
len = min_t(unsigned int, len, sizeof(struct in_addr));
- addr.s_addr = inet->mc_addr;
- sockopt_release_sock(sk);
+ addr.s_addr = READ_ONCE(inet->mc_addr);
if (copy_to_sockptr(optlen, &len, sizeof(int)))
return -EFAULT;
@@ -1668,6 +1692,13 @@ int do_ip_getsockopt(struct sock *sk, int level, int optname,
return -EFAULT;
return 0;
}
+ }
+
+ if (needs_rtnl)
+ rtnl_lock();
+ sockopt_lock_sock(sk);
+
+ switch (optname) {
case IP_MSFILTER:
{
struct ip_msfilter msf;
@@ -1690,44 +1721,6 @@ int do_ip_getsockopt(struct sock *sk, int level, int optname,
else
err = ip_get_mcast_msfilter(sk, optval, optlen, len);
goto out;
- case IP_PKTOPTIONS:
- {
- struct msghdr msg;
-
- sockopt_release_sock(sk);
-
- if (sk->sk_type != SOCK_STREAM)
- return -ENOPROTOOPT;
-
- if (optval.is_kernel) {
- msg.msg_control_is_user = false;
- msg.msg_control = optval.kernel;
- } else {
- msg.msg_control_is_user = true;
- msg.msg_control_user = optval.user;
- }
- msg.msg_controllen = len;
- msg.msg_flags = in_compat_syscall() ? MSG_CMSG_COMPAT : 0;
-
- if (inet_test_bit(PKTINFO, sk)) {
- struct in_pktinfo info;
-
- info.ipi_addr.s_addr = inet->inet_rcv_saddr;
- info.ipi_spec_dst.s_addr = inet->inet_rcv_saddr;
- info.ipi_ifindex = inet->mc_index;
- put_cmsg(&msg, SOL_IP, IP_PKTINFO, sizeof(info), &info);
- }
- if (inet_test_bit(TTL, sk)) {
- int hlim = inet->mc_ttl;
- put_cmsg(&msg, SOL_IP, IP_TTL, sizeof(hlim), &hlim);
- }
- if (inet_test_bit(TOS, sk)) {
- int tos = inet->rcv_tos;
- put_cmsg(&msg, SOL_IP, IP_TOS, sizeof(tos), &tos);
- }
- len -= msg.msg_controllen;
- return copy_to_sockptr(optlen, &len, sizeof(int));
- }
case IP_LOCAL_PORT_RANGE:
val = inet->local_port_range.hi << 16 | inet->local_port_range.lo;
break;
diff --git a/net/ipv4/netfilter/iptable_mangle.c b/net/ipv4/netfilter/iptable_mangle.c
index 3abb430af9e6..385d945d8ebe 100644
--- a/net/ipv4/netfilter/iptable_mangle.c
+++ b/net/ipv4/netfilter/iptable_mangle.c
@@ -36,12 +36,12 @@ static const struct xt_table packet_mangler = {
static unsigned int
ipt_mangle_out(void *priv, struct sk_buff *skb, const struct nf_hook_state *state)
{
- unsigned int ret;
+ unsigned int ret, verdict;
const struct iphdr *iph;
- u_int8_t tos;
__be32 saddr, daddr;
- u_int32_t mark;
+ u32 mark;
int err;
+ u8 tos;
/* Save things which could affect route */
mark = skb->mark;
@@ -51,8 +51,9 @@ ipt_mangle_out(void *priv, struct sk_buff *skb, const struct nf_hook_state *stat
tos = iph->tos;
ret = ipt_do_table(priv, skb, state);
+ verdict = ret & NF_VERDICT_MASK;
/* Reroute for ANY change. */
- if (ret != NF_DROP && ret != NF_STOLEN) {
+ if (verdict != NF_DROP && verdict != NF_STOLEN) {
iph = ip_hdr(skb);
if (iph->saddr != saddr ||
diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
index 75e0aee35eb7..823306487a82 100644
--- a/net/ipv4/ping.c
+++ b/net/ipv4/ping.c
@@ -301,7 +301,7 @@ static int ping_pre_connect(struct sock *sk, struct sockaddr *uaddr,
if (addr_len < sizeof(struct sockaddr_in))
return -EINVAL;
- return BPF_CGROUP_RUN_PROG_INET4_CONNECT_LOCK(sk, uaddr);
+ return BPF_CGROUP_RUN_PROG_INET4_CONNECT_LOCK(sk, uaddr, &addr_len);
}
/* Checks the bind address and possibly modifies sk->sk_bound_dev_if. */
@@ -551,7 +551,7 @@ void ping_err(struct sk_buff *skb, int offset, u32 info)
case ICMP_DEST_UNREACH:
if (code == ICMP_FRAG_NEEDED) { /* Path MTU discovery */
ipv4_sk_update_pmtu(skb, sk, info);
- if (inet_sock->pmtudisc != IP_PMTUDISC_DONT) {
+ if (READ_ONCE(inet_sock->pmtudisc) != IP_PMTUDISC_DONT) {
err = EMSGSIZE;
harderr = 1;
break;
@@ -581,7 +581,7 @@ void ping_err(struct sk_buff *skb, int offset, u32 info)
* 4.1.3.3.
*/
if ((family == AF_INET && !inet_test_bit(RECVERR, sk)) ||
- (family == AF_INET6 && !inet6_sk(sk)->recverr)) {
+ (family == AF_INET6 && !inet6_test_bit(RECVERR6, sk))) {
if (!harderr || sk->sk_state != TCP_ESTABLISHED)
goto out;
} else {
@@ -773,11 +773,11 @@ static int ping_v4_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
if (ipv4_is_multicast(daddr)) {
if (!ipc.oif || netif_index_is_l3_master(sock_net(sk), ipc.oif))
- ipc.oif = inet->mc_index;
+ ipc.oif = READ_ONCE(inet->mc_index);
if (!saddr)
- saddr = inet->mc_addr;
+ saddr = READ_ONCE(inet->mc_addr);
} else if (!ipc.oif)
- ipc.oif = inet->uc_index;
+ ipc.oif = READ_ONCE(inet->uc_index);
flowi4_init_output(&fl4, ipc.oif, ipc.sockc.mark, tos, scope,
sk->sk_protocol, inet_sk_flowi_flags(sk), faddr,
@@ -899,7 +899,6 @@ int ping_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int flags,
#if IS_ENABLED(CONFIG_IPV6)
} else if (family == AF_INET6) {
- struct ipv6_pinfo *np = inet6_sk(sk);
struct ipv6hdr *ip6 = ipv6_hdr(skb);
DECLARE_SOCKADDR(struct sockaddr_in6 *, sin6, msg->msg_name);
@@ -908,7 +907,7 @@ int ping_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, int flags,
sin6->sin6_port = 0;
sin6->sin6_addr = ip6->saddr;
sin6->sin6_flowinfo = 0;
- if (np->sndflow)
+ if (inet6_test_bit(SNDFLOW, sk))
sin6->sin6_flowinfo = ip6_flowinfo(ip6);
sin6->sin6_scope_id =
ipv6_iface_scope_id(&sin6->sin6_addr,
diff --git a/net/ipv4/proc.c b/net/ipv4/proc.c
index eaf1d3113b62..5f4654ebff48 100644
--- a/net/ipv4/proc.c
+++ b/net/ipv4/proc.c
@@ -83,7 +83,7 @@ static const struct snmp_mib snmp4_ipstats_list[] = {
SNMP_MIB_ITEM("InUnknownProtos", IPSTATS_MIB_INUNKNOWNPROTOS),
SNMP_MIB_ITEM("InDiscards", IPSTATS_MIB_INDISCARDS),
SNMP_MIB_ITEM("InDelivers", IPSTATS_MIB_INDELIVERS),
- SNMP_MIB_ITEM("OutRequests", IPSTATS_MIB_OUTPKTS),
+ SNMP_MIB_ITEM("OutRequests", IPSTATS_MIB_OUTREQUESTS),
SNMP_MIB_ITEM("OutDiscards", IPSTATS_MIB_OUTDISCARDS),
SNMP_MIB_ITEM("OutNoRoutes", IPSTATS_MIB_OUTNOROUTES),
SNMP_MIB_ITEM("ReasmTimeout", IPSTATS_MIB_REASMTIMEOUT),
@@ -93,6 +93,7 @@ static const struct snmp_mib snmp4_ipstats_list[] = {
SNMP_MIB_ITEM("FragOKs", IPSTATS_MIB_FRAGOKS),
SNMP_MIB_ITEM("FragFails", IPSTATS_MIB_FRAGFAILS),
SNMP_MIB_ITEM("FragCreates", IPSTATS_MIB_FRAGCREATES),
+ SNMP_MIB_ITEM("OutTransmits", IPSTATS_MIB_OUTPKTS),
SNMP_MIB_SENTINEL
};
@@ -298,6 +299,11 @@ static const struct snmp_mib snmp4_net_list[] = {
SNMP_MIB_ITEM("TCPMigrateReqSuccess", LINUX_MIB_TCPMIGRATEREQSUCCESS),
SNMP_MIB_ITEM("TCPMigrateReqFailure", LINUX_MIB_TCPMIGRATEREQFAILURE),
SNMP_MIB_ITEM("TCPPLBRehash", LINUX_MIB_TCPPLBREHASH),
+ SNMP_MIB_ITEM("TCPAORequired", LINUX_MIB_TCPAOREQUIRED),
+ SNMP_MIB_ITEM("TCPAOBad", LINUX_MIB_TCPAOBAD),
+ SNMP_MIB_ITEM("TCPAOKeyNotFound", LINUX_MIB_TCPAOKEYNOTFOUND),
+ SNMP_MIB_ITEM("TCPAOGood", LINUX_MIB_TCPAOGOOD),
+ SNMP_MIB_ITEM("TCPAODroppedIcmps", LINUX_MIB_TCPAODROPPEDICMPS),
SNMP_MIB_SENTINEL
};
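The new MIB entries surface as "OutRequests"/"OutTransmits" on the Ip: line of /proc/net/snmp and as TCPAO* counters on the TcpExt: line of /proc/net/netstat. A minimal sketch that picks the TCP-AO counter names out of netstat:

/* Minimal sketch: scanning /proc/net/netstat for the new TCP-AO counters.
 * The names are exactly the SNMP_MIB_ITEM strings added above.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[4096];
	FILE *f = fopen("/proc/net/netstat", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f)) {
		/* matches the TcpExt header line that lists the counter names */
		if (strstr(line, "TCPAOGood") || strstr(line, "TCPAOBad"))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}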
diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
index 4b5db5d1edc2..27da9d7294c0 100644
--- a/net/ipv4/raw.c
+++ b/net/ipv4/raw.c
@@ -239,7 +239,7 @@ static void raw_err(struct sock *sk, struct sk_buff *skb, u32 info)
if (code > NR_ICMP_UNREACH)
break;
if (code == ICMP_FRAG_NEEDED) {
- harderr = inet->pmtudisc != IP_PMTUDISC_DONT;
+ harderr = READ_ONCE(inet->pmtudisc) != IP_PMTUDISC_DONT;
err = EMSGSIZE;
} else {
err = icmp_err_convert[code].errno;
@@ -482,7 +482,7 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
int free = 0;
__be32 daddr;
__be32 saddr;
- int err;
+ int uc_index, err;
struct ip_options_data opt_copy;
struct raw_frag_vec rfv;
int hdrincl;
@@ -576,24 +576,25 @@ static int raw_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
tos = get_rttos(&ipc, inet);
scope = ip_sendmsg_scope(inet, &ipc, msg);
+ uc_index = READ_ONCE(inet->uc_index);
if (ipv4_is_multicast(daddr)) {
if (!ipc.oif || netif_index_is_l3_master(sock_net(sk), ipc.oif))
- ipc.oif = inet->mc_index;
+ ipc.oif = READ_ONCE(inet->mc_index);
if (!saddr)
- saddr = inet->mc_addr;
+ saddr = READ_ONCE(inet->mc_addr);
} else if (!ipc.oif) {
- ipc.oif = inet->uc_index;
- } else if (ipv4_is_lbcast(daddr) && inet->uc_index) {
+ ipc.oif = uc_index;
+ } else if (ipv4_is_lbcast(daddr) && uc_index) {
/* oif is set, packet is to local broadcast
* and uc_index is set. oif is most likely set
* by sk_bound_dev_if. If uc_index != oif check if the
* oif is an L3 master and uc_index is an L3 slave.
* If so, we want to allow the send using the uc_index.
*/
- if (ipc.oif != inet->uc_index &&
+ if (ipc.oif != uc_index &&
ipc.oif == l3mdev_master_ifindex_by_index(sock_net(sk),
- inet->uc_index)) {
- ipc.oif = inet->uc_index;
+ uc_index)) {
+ ipc.oif = uc_index;
}
}
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index b214b5a2e045..3290a4442b4a 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -1632,7 +1632,7 @@ struct rtable *rt_dst_alloc(struct net_device *dev,
{
struct rtable *rt;
- rt = dst_alloc(&ipv4_dst_ops, dev, 1, DST_OBSOLETE_FORCE_CHK,
+ rt = dst_alloc(&ipv4_dst_ops, dev, DST_OBSOLETE_FORCE_CHK,
(noxfrm ? DST_NOXFRM : 0));
if (rt) {
@@ -1660,7 +1660,7 @@ struct rtable *rt_dst_clone(struct net_device *dev, struct rtable *rt)
{
struct rtable *new_rt;
- new_rt = dst_alloc(&ipv4_dst_ops, dev, 1, DST_OBSOLETE_FORCE_CHK,
+ new_rt = dst_alloc(&ipv4_dst_ops, dev, DST_OBSOLETE_FORCE_CHK,
rt->dst.flags);
if (new_rt) {
@@ -2834,7 +2834,7 @@ struct dst_entry *ipv4_blackhole_route(struct net *net, struct dst_entry *dst_or
struct rtable *ort = (struct rtable *) dst_orig;
struct rtable *rt;
- rt = dst_alloc(&ipv4_dst_blackhole_ops, NULL, 1, DST_OBSOLETE_DEAD, 0);
+ rt = dst_alloc(&ipv4_dst_blackhole_ops, NULL, DST_OBSOLETE_DEAD, 0);
if (rt) {
struct dst_entry *new = &rt->dst;
@@ -2885,54 +2885,6 @@ struct rtable *ip_route_output_flow(struct net *net, struct flowi4 *flp4,
}
EXPORT_SYMBOL_GPL(ip_route_output_flow);
-struct rtable *ip_route_output_tunnel(struct sk_buff *skb,
- struct net_device *dev,
- struct net *net, __be32 *saddr,
- const struct ip_tunnel_info *info,
- u8 protocol, bool use_cache)
-{
-#ifdef CONFIG_DST_CACHE
- struct dst_cache *dst_cache;
-#endif
- struct rtable *rt = NULL;
- struct flowi4 fl4;
- __u8 tos;
-
-#ifdef CONFIG_DST_CACHE
- dst_cache = (struct dst_cache *)&info->dst_cache;
- if (use_cache) {
- rt = dst_cache_get_ip4(dst_cache, saddr);
- if (rt)
- return rt;
- }
-#endif
- memset(&fl4, 0, sizeof(fl4));
- fl4.flowi4_mark = skb->mark;
- fl4.flowi4_proto = protocol;
- fl4.daddr = info->key.u.ipv4.dst;
- fl4.saddr = info->key.u.ipv4.src;
- tos = info->key.tos;
- fl4.flowi4_tos = RT_TOS(tos);
-
- rt = ip_route_output_key(net, &fl4);
- if (IS_ERR(rt)) {
- netdev_dbg(dev, "no route to %pI4\n", &fl4.daddr);
- return ERR_PTR(-ENETUNREACH);
- }
- if (rt->dst.dev == dev) { /* is this necessary? */
- netdev_dbg(dev, "circular route to %pI4\n", &fl4.daddr);
- ip_rt_put(rt);
- return ERR_PTR(-ELOOP);
- }
-#ifdef CONFIG_DST_CACHE
- if (use_cache)
- dst_cache_set_ip4(dst_cache, &rt->dst, fl4.saddr);
-#endif
- *saddr = fl4.saddr;
- return rt;
-}
-EXPORT_SYMBOL_GPL(ip_route_output_tunnel);
-
/* called with rcu_read_lock held */
static int rt_fill_info(struct net *net, __be32 dst, __be32 src,
struct rtable *rt, u32 table_id, struct flowi4 *fl4,
diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
index dc478a0574cb..98b25e5d147b 100644
--- a/net/ipv4/syncookies.c
+++ b/net/ipv4/syncookies.c
@@ -41,7 +41,6 @@ static siphash_aligned_key_t syncookie_secret[2];
* requested/supported by the syn/synack exchange.
*/
#define TSBITS 6
-#define TSMASK (((__u32)1 << TSBITS) - 1)
static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
u32 count, int c)
@@ -52,6 +51,14 @@ static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
count, &syncookie_secret[c]);
}
+/* Convert one nsec 64bit timestamp to ts (ms or usec resolution) */
+static u64 tcp_ns_to_ts(bool usec_ts, u64 val)
+{
+ if (usec_ts)
+ return div_u64(val, NSEC_PER_USEC);
+
+ return div_u64(val, NSEC_PER_MSEC);
+}
/*
* when syncookies are in effect and tcp timestamps are enabled we encode
@@ -62,27 +69,24 @@ static u32 cookie_hash(__be32 saddr, __be32 daddr, __be16 sport, __be16 dport,
*/
u64 cookie_init_timestamp(struct request_sock *req, u64 now)
{
- struct inet_request_sock *ireq;
- u32 ts, ts_now = tcp_ns_to_ts(now);
+ const struct inet_request_sock *ireq = inet_rsk(req);
+ u64 ts, ts_now = tcp_ns_to_ts(false, now);
u32 options = 0;
- ireq = inet_rsk(req);
-
options = ireq->wscale_ok ? ireq->snd_wscale : TS_OPT_WSCALE_MASK;
if (ireq->sack_ok)
options |= TS_OPT_SACK;
if (ireq->ecn_ok)
options |= TS_OPT_ECN;
- ts = ts_now & ~TSMASK;
+ ts = (ts_now >> TSBITS) << TSBITS;
ts |= options;
- if (ts > ts_now) {
- ts >>= TSBITS;
- ts--;
- ts <<= TSBITS;
- ts |= options;
- }
- return (u64)ts * (NSEC_PER_SEC / TCP_TS_HZ);
+ if (ts > ts_now)
+ ts -= (1UL << TSBITS);
+
+ if (tcp_rsk(req)->req_usec_ts)
+ return ts * NSEC_PER_USEC;
+ return ts * NSEC_PER_MSEC;
}
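The rewritten cookie_init_timestamp() keeps the peer-visible timestamp a multiple of 2^TSBITS plus the encoded wscale/SACK/ECN bits, stepping back one whole window rather than running ahead of the clock, and only at the end scales to nanoseconds at ms or usec resolution. A small stand-alone sketch of the encoding step (made-up inputs; the option bit layout is an assumption, not taken from this diff):

/* Worked example of the timestamp/option encoding in cookie_init_timestamp().
 * Stand-alone sketch with illustrative inputs; TSBITS mirrors the kernel value.
 */
#include <stdio.h>
#include <stdint.h>

#define TSBITS 6

static uint64_t encode_ts(uint64_t ts_now, uint32_t options)
{
	uint64_t ts = (ts_now >> TSBITS) << TSBITS;	/* clear the low option bits */

	ts |= options;
	if (ts > ts_now)				/* never run ahead of the clock */
		ts -= (1ULL << TSBITS);
	return ts;
}

int main(void)
{
	uint64_t ts_now = 1000;		/* pretend "now" in ms (or usec) units */
	uint32_t options = 0x37;	/* hypothetical wscale + SACK/ECN bits */
	uint64_t ts = encode_ts(ts_now, options);

	printf("ts=%llu low_bits=0x%llx behind_now=%llu\n",
	       (unsigned long long)ts,
	       (unsigned long long)(ts & ((1u << TSBITS) - 1)),
	       (unsigned long long)(ts_now - ts));
	return 0;
}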
@@ -302,6 +306,8 @@ struct request_sock *cookie_tcp_reqsk_alloc(const struct request_sock_ops *ops,
treq->af_specific = af_ops;
treq->syn_tos = TCP_SKB_CB(skb)->ip_dsfield;
+ treq->req_usec_ts = -1;
+
#if IS_ENABLED(CONFIG_MPTCP)
treq->is_mptcp = sk_is_mptcp(sk);
if (treq->is_mptcp) {
@@ -338,6 +344,7 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb)
__u8 rcv_wscale;
struct flowi4 fl4;
u32 tsoff = 0;
+ int l3index;
if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies) ||
!th->ack || th->rst)
@@ -399,6 +406,9 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb)
ireq->ir_iif = inet_request_bound_dev_if(sk, skb);
+ l3index = l3mdev_master_ifindex_by_index(sock_net(sk), ireq->ir_iif);
+ tcp_ao_syncookie(sk, skb, treq, AF_INET, l3index);
+
/* We throwed the options of the initial SYN away, so we hope
* the ACK carries the same options again (see RFC1122 4.2.3.8)
*/
diff --git a/net/ipv4/sysctl_net_ipv4.c b/net/ipv4/sysctl_net_ipv4.c
index 6ac890b4073f..f63a545a7374 100644
--- a/net/ipv4/sysctl_net_ipv4.c
+++ b/net/ipv4/sysctl_net_ipv4.c
@@ -1367,6 +1367,15 @@ static struct ctl_table ipv4_net_table[] = {
.extra1 = SYSCTL_ZERO,
},
{
+ .procname = "tcp_backlog_ack_defer",
+ .data = &init_net.ipv4.sysctl_tcp_backlog_ack_defer,
+ .maxlen = sizeof(u8),
+ .mode = 0644,
+ .proc_handler = proc_dou8vec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ .extra2 = SYSCTL_ONE,
+ },
+ {
.procname = "tcp_reflect_tos",
.data = &init_net.ipv4.sysctl_tcp_reflect_tos,
.maxlen = sizeof(u8),
@@ -1489,6 +1498,14 @@ static struct ctl_table ipv4_net_table[] = {
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
},
+ {
+ .procname = "tcp_pingpong_thresh",
+ .data = &init_net.ipv4.sysctl_tcp_pingpong_thresh,
+ .maxlen = sizeof(u8),
+ .mode = 0644,
+ .proc_handler = proc_dou8vec_minmax,
+ .extra1 = SYSCTL_ONE,
+ },
{ }
};
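Both new sysctls are per-netns u8 knobs under /proc/sys/net/ipv4/. A minimal sketch of setting them from C (a plain `sysctl -w` works equally well):

/* Minimal sketch: flipping the new knobs added above via procfs.
 * Paths follow the .procname fields; the values here are illustrative.
 */
#include <stdio.h>

static int write_sysctl(const char *path, const char *val)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	fprintf(f, "%s\n", val);
	return fclose(f);
}

int main(void)
{
	/* defer backlog ACKs (boolean), raise the ping-pong threshold */
	write_sysctl("/proc/sys/net/ipv4/tcp_backlog_ack_defer", "1");
	write_sysctl("/proc/sys/net/ipv4/tcp_pingpong_thresh", "3");
	return 0;
}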
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 3d3a24f79573..53bcc17c91e4 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -3593,6 +3593,31 @@ int do_tcp_setsockopt(struct sock *sk, int level, int optname,
__tcp_sock_set_quickack(sk, val);
break;
+ case TCP_AO_REPAIR:
+ err = tcp_ao_set_repair(sk, optval, optlen);
+ break;
+#ifdef CONFIG_TCP_AO
+ case TCP_AO_ADD_KEY:
+ case TCP_AO_DEL_KEY:
+ case TCP_AO_INFO: {
+ /* If this is the first TCP-AO setsockopt() on the socket,
+ * sk_state has to be LISTEN or CLOSE. Allow TCP_REPAIR
+ * in any state.
+ */
+ if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE))
+ goto ao_parse;
+ if (rcu_dereference_protected(tcp_sk(sk)->ao_info,
+ lockdep_sock_is_held(sk)))
+ goto ao_parse;
+ if (tp->repair)
+ goto ao_parse;
+ err = -EISCONN;
+ break;
+ao_parse:
+ err = tp->af_specific->ao_parse(sk, optname, optval, optlen);
+ break;
+ }
+#endif
#ifdef CONFIG_TCP_MD5SIG
case TCP_MD5SIG:
case TCP_MD5SIG_EXT:
@@ -3631,10 +3656,16 @@ int do_tcp_setsockopt(struct sock *sk, int level, int optname,
tp->fastopen_no_cookie = val;
break;
case TCP_TIMESTAMP:
- if (!tp->repair)
+ if (!tp->repair) {
err = -EPERM;
- else
- WRITE_ONCE(tp->tsoffset, val - tcp_time_stamp_raw());
+ break;
+ }
+ /* val is an opaque field,
+ * and low order bit contains usec_ts enable bit.
+ * It's a best effort, and we do not care if the user makes an error.
+ */
+ tp->tcp_usec_ts = val & 1;
+ WRITE_ONCE(tp->tsoffset, val - tcp_clock_ts(tp->tcp_usec_ts));
break;
case TCP_REPAIR_WINDOW:
err = tcp_repair_set_window(tp, optval, optlen);
@@ -3756,9 +3787,12 @@ void tcp_get_info(struct sock *sk, struct tcp_info *info)
info->tcpi_options |= TCPI_OPT_ECN_SEEN;
if (tp->syn_data_acked)
info->tcpi_options |= TCPI_OPT_SYN_DATA;
+ if (tp->tcp_usec_ts)
+ info->tcpi_options |= TCPI_OPT_USEC_TS;
info->tcpi_rto = jiffies_to_usecs(icsk->icsk_rto);
- info->tcpi_ato = jiffies_to_usecs(icsk->icsk_ack.ato);
+ info->tcpi_ato = jiffies_to_usecs(min_t(u32, icsk->icsk_ack.ato,
+ tcp_delack_max(sk)));
info->tcpi_snd_mss = tp->mss_cache;
info->tcpi_rcv_mss = icsk->icsk_ack.rcv_mss;
@@ -3814,6 +3848,13 @@ void tcp_get_info(struct sock *sk, struct tcp_info *info)
info->tcpi_rcv_wnd = tp->rcv_wnd;
info->tcpi_rehash = tp->plb_rehash + tp->timeout_rehash;
info->tcpi_fastopen_client_fail = tp->fastopen_client_fail;
+
+ info->tcpi_total_rto = tp->total_rto;
+ info->tcpi_total_rto_recoveries = tp->total_rto_recoveries;
+ info->tcpi_total_rto_time = tp->total_rto_time;
+ if (tp->rto_stamp)
+ info->tcpi_total_rto_time += tcp_clock_ms() - tp->rto_stamp;
+
unlock_sock_fast(sk, slow);
}
EXPORT_SYMBOL_GPL(tcp_get_info);
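tcp_get_info() now fills in cumulative RTO statistics. A hedged user-space sketch of reading them, assuming UAPI headers new enough to carry the tcpi_total_rto* fields:

/* Hedged sketch: reading the new RTO counters via TCP_INFO.
 * Requires a linux/tcp.h with the 6.7 tcp_info layout; field names as above.
 */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/tcp.h>

static void dump_rto_stats(int fd)
{
	struct tcp_info ti;
	socklen_t len = sizeof(ti);

	memset(&ti, 0, sizeof(ti));
	if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) < 0)
		return;

	printf("total_rto=%u recoveries=%u rto_time_ms=%u\n",
	       ti.tcpi_total_rto, ti.tcpi_total_rto_recoveries,
	       ti.tcpi_total_rto_time);
}

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return 1;
	dump_rto_stats(fd);
	return 0;
}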
@@ -4137,7 +4178,11 @@ int do_tcp_getsockopt(struct sock *sk, int level,
break;
case TCP_TIMESTAMP:
- val = tcp_time_stamp_raw() + READ_ONCE(tp->tsoffset);
+ val = tcp_clock_ts(tp->tcp_usec_ts) + READ_ONCE(tp->tsoffset);
+ if (tp->tcp_usec_ts)
+ val |= 1;
+ else
+ val &= ~1;
break;
case TCP_NOTSENT_LOWAT:
val = READ_ONCE(tp->notsent_lowat);
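With tcp_usec_ts, TCP_TIMESTAMP becomes an opaque value whose low bit carries the usec-resolution flag, both when written under TCP_REPAIR and when read back, as the two hunks above show. A hedged user-space sketch of reading it (fallback define only in case libc headers predate the option):

/* Hedged sketch: reading the opaque TCP_TIMESTAMP value on a TCP socket.
 * The low bit reporting usec resolution is the new behaviour from this series.
 */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

#ifndef TCP_TIMESTAMP
#define TCP_TIMESTAMP 24	/* value from linux/tcp.h */
#endif

static int dump_tcp_timestamp(int fd)
{
	unsigned int val;
	socklen_t len = sizeof(val);

	if (getsockopt(fd, IPPROTO_TCP, TCP_TIMESTAMP, &val, &len) < 0)
		return -1;

	printf("raw=%u usec_ts=%u offset_part=%u\n", val, val & 1u, val & ~1u);
	return 0;
}

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);

	if (fd < 0)
		return 1;
	return dump_tcp_timestamp(fd);
}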
@@ -4247,6 +4292,21 @@ zerocopy_rcv_out:
return err;
}
#endif
+ case TCP_AO_REPAIR:
+ return tcp_ao_get_repair(sk, optval, optlen);
+ case TCP_AO_GET_KEYS:
+ case TCP_AO_INFO: {
+ int err;
+
+ sockopt_lock_sock(sk);
+ if (optname == TCP_AO_GET_KEYS)
+ err = tcp_ao_get_mkts(sk, optval, optlen);
+ else
+ err = tcp_ao_get_sock_info(sk, optval, optlen);
+ sockopt_release_sock(sk);
+
+ return err;
+ }
default:
return -ENOPROTOOPT;
}
@@ -4285,141 +4345,52 @@ int tcp_getsockopt(struct sock *sk, int level, int optname, char __user *optval,
EXPORT_SYMBOL(tcp_getsockopt);
#ifdef CONFIG_TCP_MD5SIG
-static DEFINE_PER_CPU(struct tcp_md5sig_pool, tcp_md5sig_pool);
-static DEFINE_MUTEX(tcp_md5sig_mutex);
-static bool tcp_md5sig_pool_populated = false;
+int tcp_md5_sigpool_id = -1;
+EXPORT_SYMBOL_GPL(tcp_md5_sigpool_id);
-static void __tcp_alloc_md5sig_pool(void)
+int tcp_md5_alloc_sigpool(void)
{
- struct crypto_ahash *hash;
- int cpu;
-
- hash = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
- if (IS_ERR(hash))
- return;
-
- for_each_possible_cpu(cpu) {
- void *scratch = per_cpu(tcp_md5sig_pool, cpu).scratch;
- struct ahash_request *req;
-
- if (!scratch) {
- scratch = kmalloc_node(sizeof(union tcp_md5sum_block) +
- sizeof(struct tcphdr),
- GFP_KERNEL,
- cpu_to_node(cpu));
- if (!scratch)
- return;
- per_cpu(tcp_md5sig_pool, cpu).scratch = scratch;
- }
- if (per_cpu(tcp_md5sig_pool, cpu).md5_req)
- continue;
-
- req = ahash_request_alloc(hash, GFP_KERNEL);
- if (!req)
- return;
-
- ahash_request_set_callback(req, 0, NULL, NULL);
-
- per_cpu(tcp_md5sig_pool, cpu).md5_req = req;
- }
- /* before setting tcp_md5sig_pool_populated, we must commit all writes
- * to memory. See smp_rmb() in tcp_get_md5sig_pool()
- */
- smp_wmb();
- /* Paired with READ_ONCE() from tcp_alloc_md5sig_pool()
- * and tcp_get_md5sig_pool().
- */
- WRITE_ONCE(tcp_md5sig_pool_populated, true);
-}
-
-bool tcp_alloc_md5sig_pool(void)
-{
- /* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
- if (unlikely(!READ_ONCE(tcp_md5sig_pool_populated))) {
- mutex_lock(&tcp_md5sig_mutex);
-
- if (!tcp_md5sig_pool_populated)
- __tcp_alloc_md5sig_pool();
+ size_t scratch_size;
+ int ret;
- mutex_unlock(&tcp_md5sig_mutex);
+ scratch_size = sizeof(union tcp_md5sum_block) + sizeof(struct tcphdr);
+ ret = tcp_sigpool_alloc_ahash("md5", scratch_size);
+ if (ret >= 0) {
+ /* As long as any md5 sigpool was allocated, the return
+ * id would stay the same. Re-write the id only for the case
+ * when previously all MD5 keys were deleted and this call
+ * allocates the first MD5 key, which may return a different
+ * sigpool id than was used previously.
+ */
+ WRITE_ONCE(tcp_md5_sigpool_id, ret); /* Avoids the compiler potentially being smart here */
+ return 0;
}
- /* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
- return READ_ONCE(tcp_md5sig_pool_populated);
+ return ret;
}
-EXPORT_SYMBOL(tcp_alloc_md5sig_pool);
-
-/**
- * tcp_get_md5sig_pool - get md5sig_pool for this user
- *
- * We use percpu structure, so if we succeed, we exit with preemption
- * and BH disabled, to make sure another thread or softirq handling
- * wont try to get same context.
- */
-struct tcp_md5sig_pool *tcp_get_md5sig_pool(void)
+void tcp_md5_release_sigpool(void)
{
- local_bh_disable();
-
- /* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
- if (READ_ONCE(tcp_md5sig_pool_populated)) {
- /* coupled with smp_wmb() in __tcp_alloc_md5sig_pool() */
- smp_rmb();
- return this_cpu_ptr(&tcp_md5sig_pool);
- }
- local_bh_enable();
- return NULL;
+ tcp_sigpool_release(READ_ONCE(tcp_md5_sigpool_id));
}
-EXPORT_SYMBOL(tcp_get_md5sig_pool);
-int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp,
- const struct sk_buff *skb, unsigned int header_len)
+void tcp_md5_add_sigpool(void)
{
- struct scatterlist sg;
- const struct tcphdr *tp = tcp_hdr(skb);
- struct ahash_request *req = hp->md5_req;
- unsigned int i;
- const unsigned int head_data_len = skb_headlen(skb) > header_len ?
- skb_headlen(skb) - header_len : 0;
- const struct skb_shared_info *shi = skb_shinfo(skb);
- struct sk_buff *frag_iter;
-
- sg_init_table(&sg, 1);
-
- sg_set_buf(&sg, ((u8 *) tp) + header_len, head_data_len);
- ahash_request_set_crypt(req, &sg, NULL, head_data_len);
- if (crypto_ahash_update(req))
- return 1;
-
- for (i = 0; i < shi->nr_frags; ++i) {
- const skb_frag_t *f = &shi->frags[i];
- unsigned int offset = skb_frag_off(f);
- struct page *page = skb_frag_page(f) + (offset >> PAGE_SHIFT);
-
- sg_set_page(&sg, page, skb_frag_size(f),
- offset_in_page(offset));
- ahash_request_set_crypt(req, &sg, NULL, skb_frag_size(f));
- if (crypto_ahash_update(req))
- return 1;
- }
-
- skb_walk_frags(skb, frag_iter)
- if (tcp_md5_hash_skb_data(hp, frag_iter, 0))
- return 1;
-
- return 0;
+ tcp_sigpool_get(READ_ONCE(tcp_md5_sigpool_id));
}
-EXPORT_SYMBOL(tcp_md5_hash_skb_data);
-int tcp_md5_hash_key(struct tcp_md5sig_pool *hp, const struct tcp_md5sig_key *key)
+int tcp_md5_hash_key(struct tcp_sigpool *hp,
+ const struct tcp_md5sig_key *key)
{
u8 keylen = READ_ONCE(key->keylen); /* paired with WRITE_ONCE() in tcp_md5_do_add */
struct scatterlist sg;
sg_init_one(&sg, key->key, keylen);
- ahash_request_set_crypt(hp->md5_req, &sg, NULL, keylen);
+ ahash_request_set_crypt(hp->req, &sg, NULL, keylen);
- /* We use data_race() because tcp_md5_do_add() might change key->key under us */
- return data_race(crypto_ahash_update(hp->md5_req));
+ /* We use data_race() because tcp_md5_do_add() might change
+ * key->key under us
+ */
+ return data_race(crypto_ahash_update(hp->req));
}
EXPORT_SYMBOL(tcp_md5_hash_key);
@@ -4427,42 +4398,24 @@ EXPORT_SYMBOL(tcp_md5_hash_key);
enum skb_drop_reason
tcp_inbound_md5_hash(const struct sock *sk, const struct sk_buff *skb,
const void *saddr, const void *daddr,
- int family, int dif, int sdif)
+ int family, int l3index, const __u8 *hash_location)
{
- /*
- * This gets called for each TCP segment that arrives
- * so we want to be efficient.
+ /* This gets called for each TCP segment that has TCP-MD5 option.
* We have 3 drop cases:
* o No MD5 hash and one expected.
* o MD5 hash and we're not expecting one.
* o MD5 hash and its wrong.
*/
- const __u8 *hash_location = NULL;
- struct tcp_md5sig_key *hash_expected;
- const struct tcphdr *th = tcp_hdr(skb);
const struct tcp_sock *tp = tcp_sk(sk);
- int genhash, l3index;
+ struct tcp_md5sig_key *key;
u8 newhash[16];
+ int genhash;
- /* sdif set, means packet ingressed via a device
- * in an L3 domain and dif is set to the l3mdev
- */
- l3index = sdif ? dif : 0;
-
- hash_expected = tcp_md5_do_lookup(sk, l3index, saddr, family);
- hash_location = tcp_parse_md5sig_option(th);
-
- /* We've parsed the options - do we have a hash? */
- if (!hash_expected && !hash_location)
- return SKB_NOT_DROPPED_YET;
-
- if (hash_expected && !hash_location) {
- NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5NOTFOUND);
- return SKB_DROP_REASON_TCP_MD5NOTFOUND;
- }
+ key = tcp_md5_do_lookup(sk, l3index, saddr, family);
- if (!hash_expected && hash_location) {
+ if (!key && hash_location) {
NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5UNEXPECTED);
+ tcp_hash_fail("Unexpected MD5 Hash found", family, skb, "");
return SKB_DROP_REASON_TCP_MD5UNEXPECTED;
}
@@ -4471,27 +4424,26 @@ tcp_inbound_md5_hash(const struct sock *sk, const struct sk_buff *skb,
* IPv4-mapped case.
*/
if (family == AF_INET)
- genhash = tcp_v4_md5_hash_skb(newhash,
- hash_expected,
- NULL, skb);
+ genhash = tcp_v4_md5_hash_skb(newhash, key, NULL, skb);
else
- genhash = tp->af_specific->calc_md5_hash(newhash,
- hash_expected,
+ genhash = tp->af_specific->calc_md5_hash(newhash, key,
NULL, skb);
-
if (genhash || memcmp(hash_location, newhash, 16) != 0) {
NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMD5FAILURE);
if (family == AF_INET) {
- net_info_ratelimited("MD5 Hash failed for (%pI4, %d)->(%pI4, %d)%s L3 index %d\n",
- saddr, ntohs(th->source),
- daddr, ntohs(th->dest),
- genhash ? " tcp_v4_calc_md5_hash failed"
- : "", l3index);
+ tcp_hash_fail("MD5 Hash failed", AF_INET, skb, "%s L3 index %d",
+ genhash ? "tcp_v4_calc_md5_hash failed"
+ : "", l3index);
} else {
- net_info_ratelimited("MD5 Hash %s for [%pI6c]:%u->[%pI6c]:%u L3 index %d\n",
- genhash ? "failed" : "mismatch",
- saddr, ntohs(th->source),
- daddr, ntohs(th->dest), l3index);
+ if (genhash) {
+ tcp_hash_fail("MD5 Hash failed",
+ AF_INET6, skb, "L3 index %d",
+ l3index);
+ } else {
+ tcp_hash_fail("MD5 Hash mismatch",
+ AF_INET6, skb, "L3 index %d",
+ l3index);
+ }
}
return SKB_DROP_REASON_TCP_MD5FAILURE;
}
diff --git a/net/ipv4/tcp_ao.c b/net/ipv4/tcp_ao.c
new file mode 100644
index 000000000000..6a845e906a1d
--- /dev/null
+++ b/net/ipv4/tcp_ao.c
@@ -0,0 +1,2392 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * INET An implementation of the TCP Authentication Option (TCP-AO).
+ * See RFC5925.
+ *
+ * Authors: Dmitry Safonov <dima@arista.com>
+ * Francesco Ruggeri <fruggeri@arista.com>
+ * Salam Noureddine <noureddine@arista.com>
+ */
+#define pr_fmt(fmt) "TCP: " fmt
+
+#include <crypto/hash.h>
+#include <linux/inetdevice.h>
+#include <linux/tcp.h>
+
+#include <net/tcp.h>
+#include <net/ipv6.h>
+#include <net/icmp.h>
+
+DEFINE_STATIC_KEY_DEFERRED_FALSE(tcp_ao_needed, HZ);
+
+int tcp_ao_calc_traffic_key(struct tcp_ao_key *mkt, u8 *key, void *ctx,
+ unsigned int len, struct tcp_sigpool *hp)
+{
+ struct scatterlist sg;
+ int ret;
+
+ if (crypto_ahash_setkey(crypto_ahash_reqtfm(hp->req),
+ mkt->key, mkt->keylen))
+ goto clear_hash;
+
+ ret = crypto_ahash_init(hp->req);
+ if (ret)
+ goto clear_hash;
+
+ sg_init_one(&sg, ctx, len);
+ ahash_request_set_crypt(hp->req, &sg, key, len);
+ crypto_ahash_update(hp->req);
+
+ ret = crypto_ahash_final(hp->req);
+ if (ret)
+ goto clear_hash;
+
+ return 0;
+clear_hash:
+ memset(key, 0, tcp_ao_digest_size(mkt));
+ return 1;
+}
+
+bool tcp_ao_ignore_icmp(const struct sock *sk, int family, int type, int code)
+{
+ bool ignore_icmp = false;
+ struct tcp_ao_info *ao;
+
+ if (!static_branch_unlikely(&tcp_ao_needed.key))
+ return false;
+
+ /* RFC5925, 7.8:
+ * >> A TCP-AO implementation MUST default to ignore incoming ICMPv4
+ * messages of Type 3 (destination unreachable), Codes 2-4 (protocol
+ * unreachable, port unreachable, and fragmentation needed -- ’hard
+ * errors’), and ICMPv6 Type 1 (destination unreachable), Code 1
+ * (administratively prohibited) and Code 4 (port unreachable) intended
+ * for connections in synchronized states (ESTABLISHED, FIN-WAIT-1, FIN-
+ * WAIT-2, CLOSE-WAIT, CLOSING, LAST-ACK, TIME-WAIT) that match MKTs.
+ */
+ if (family == AF_INET) {
+ if (type != ICMP_DEST_UNREACH)
+ return false;
+ if (code < ICMP_PROT_UNREACH || code > ICMP_FRAG_NEEDED)
+ return false;
+ } else {
+ if (type != ICMPV6_DEST_UNREACH)
+ return false;
+ if (code != ICMPV6_ADM_PROHIBITED && code != ICMPV6_PORT_UNREACH)
+ return false;
+ }
+
+ rcu_read_lock();
+ switch (sk->sk_state) {
+ case TCP_TIME_WAIT:
+ ao = rcu_dereference(tcp_twsk(sk)->ao_info);
+ break;
+ case TCP_SYN_SENT:
+ case TCP_SYN_RECV:
+ case TCP_LISTEN:
+ case TCP_NEW_SYN_RECV:
+ /* RFC5925 specifies to ignore ICMPs *only* on connections
+ * in synchronized states.
+ */
+ rcu_read_unlock();
+ return false;
+ default:
+ ao = rcu_dereference(tcp_sk(sk)->ao_info);
+ }
+
+ if (ao && !ao->accept_icmps) {
+ ignore_icmp = true;
+ __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAODROPPEDICMPS);
+ atomic64_inc(&ao->counters.dropped_icmp);
+ }
+ rcu_read_unlock();
+
+ return ignore_icmp;
+}
+
+/* Optimized version of tcp_ao_do_lookup(): only for sockets for which
+ * it's known that the keys in ao_info are matching peer's
+ * family/address/VRF/etc.
+ */
+struct tcp_ao_key *tcp_ao_established_key(struct tcp_ao_info *ao,
+ int sndid, int rcvid)
+{
+ struct tcp_ao_key *key;
+
+ hlist_for_each_entry_rcu(key, &ao->head, node) {
+ if ((sndid >= 0 && key->sndid != sndid) ||
+ (rcvid >= 0 && key->rcvid != rcvid))
+ continue;
+ return key;
+ }
+
+ return NULL;
+}
+
+static int ipv4_prefix_cmp(const struct in_addr *addr1,
+ const struct in_addr *addr2,
+ unsigned int prefixlen)
+{
+ __be32 mask = inet_make_mask(prefixlen);
+ __be32 a1 = addr1->s_addr & mask;
+ __be32 a2 = addr2->s_addr & mask;
+
+ if (a1 == a2)
+ return 0;
+ return memcmp(&a1, &a2, sizeof(a1));
+}
+
+static int __tcp_ao_key_cmp(const struct tcp_ao_key *key, int l3index,
+ const union tcp_ao_addr *addr, u8 prefixlen,
+ int family, int sndid, int rcvid)
+{
+ if (sndid >= 0 && key->sndid != sndid)
+ return (key->sndid > sndid) ? 1 : -1;
+ if (rcvid >= 0 && key->rcvid != rcvid)
+ return (key->rcvid > rcvid) ? 1 : -1;
+ if (l3index >= 0 && (key->keyflags & TCP_AO_KEYF_IFINDEX)) {
+ if (key->l3index != l3index)
+ return (key->l3index > l3index) ? 1 : -1;
+ }
+
+ if (family == AF_UNSPEC)
+ return 0;
+ if (key->family != family)
+ return (key->family > family) ? 1 : -1;
+
+ if (family == AF_INET) {
+ if (ntohl(key->addr.a4.s_addr) == INADDR_ANY)
+ return 0;
+ if (ntohl(addr->a4.s_addr) == INADDR_ANY)
+ return 0;
+ return ipv4_prefix_cmp(&key->addr.a4, &addr->a4, prefixlen);
+#if IS_ENABLED(CONFIG_IPV6)
+ } else {
+ if (ipv6_addr_any(&key->addr.a6) || ipv6_addr_any(&addr->a6))
+ return 0;
+ if (ipv6_prefix_equal(&key->addr.a6, &addr->a6, prefixlen))
+ return 0;
+ return memcmp(&key->addr.a6, &addr->a6, sizeof(addr->a6));
+#endif
+ }
+ return -1;
+}
+
+static int tcp_ao_key_cmp(const struct tcp_ao_key *key, int l3index,
+ const union tcp_ao_addr *addr, u8 prefixlen,
+ int family, int sndid, int rcvid)
+{
+#if IS_ENABLED(CONFIG_IPV6)
+ if (family == AF_INET6 && ipv6_addr_v4mapped(&addr->a6)) {
+ __be32 addr4 = addr->a6.s6_addr32[3];
+
+ return __tcp_ao_key_cmp(key, l3index,
+ (union tcp_ao_addr *)&addr4,
+ prefixlen, AF_INET, sndid, rcvid);
+ }
+#endif
+ return __tcp_ao_key_cmp(key, l3index, addr,
+ prefixlen, family, sndid, rcvid);
+}
+
+static struct tcp_ao_key *__tcp_ao_do_lookup(const struct sock *sk, int l3index,
+ const union tcp_ao_addr *addr, int family, u8 prefix,
+ int sndid, int rcvid)
+{
+ struct tcp_ao_key *key;
+ struct tcp_ao_info *ao;
+
+ if (!static_branch_unlikely(&tcp_ao_needed.key))
+ return NULL;
+
+ ao = rcu_dereference_check(tcp_sk(sk)->ao_info,
+ lockdep_sock_is_held(sk));
+ if (!ao)
+ return NULL;
+
+ hlist_for_each_entry_rcu(key, &ao->head, node) {
+ u8 prefixlen = min(prefix, key->prefixlen);
+
+ if (!tcp_ao_key_cmp(key, l3index, addr, prefixlen,
+ family, sndid, rcvid))
+ return key;
+ }
+ return NULL;
+}
+
+struct tcp_ao_key *tcp_ao_do_lookup(const struct sock *sk, int l3index,
+ const union tcp_ao_addr *addr,
+ int family, int sndid, int rcvid)
+{
+ return __tcp_ao_do_lookup(sk, l3index, addr, family, U8_MAX, sndid, rcvid);
+}
+
+static struct tcp_ao_info *tcp_ao_alloc_info(gfp_t flags)
+{
+ struct tcp_ao_info *ao;
+
+ ao = kzalloc(sizeof(*ao), flags);
+ if (!ao)
+ return NULL;
+ INIT_HLIST_HEAD(&ao->head);
+ refcount_set(&ao->refcnt, 1);
+
+ return ao;
+}
+
+static void tcp_ao_link_mkt(struct tcp_ao_info *ao, struct tcp_ao_key *mkt)
+{
+ hlist_add_head_rcu(&mkt->node, &ao->head);
+}
+
+static struct tcp_ao_key *tcp_ao_copy_key(struct sock *sk,
+ struct tcp_ao_key *key)
+{
+ struct tcp_ao_key *new_key;
+
+ new_key = sock_kmalloc(sk, tcp_ao_sizeof_key(key),
+ GFP_ATOMIC);
+ if (!new_key)
+ return NULL;
+
+ *new_key = *key;
+ INIT_HLIST_NODE(&new_key->node);
+ tcp_sigpool_get(new_key->tcp_sigpool_id);
+ atomic64_set(&new_key->pkt_good, 0);
+ atomic64_set(&new_key->pkt_bad, 0);
+
+ return new_key;
+}
+
+static void tcp_ao_key_free_rcu(struct rcu_head *head)
+{
+ struct tcp_ao_key *key = container_of(head, struct tcp_ao_key, rcu);
+
+ tcp_sigpool_release(key->tcp_sigpool_id);
+ kfree_sensitive(key);
+}
+
+void tcp_ao_destroy_sock(struct sock *sk, bool twsk)
+{
+ struct tcp_ao_info *ao;
+ struct tcp_ao_key *key;
+ struct hlist_node *n;
+
+ if (twsk) {
+ ao = rcu_dereference_protected(tcp_twsk(sk)->ao_info, 1);
+ tcp_twsk(sk)->ao_info = NULL;
+ } else {
+ ao = rcu_dereference_protected(tcp_sk(sk)->ao_info, 1);
+ tcp_sk(sk)->ao_info = NULL;
+ }
+
+ if (!ao || !refcount_dec_and_test(&ao->refcnt))
+ return;
+
+ hlist_for_each_entry_safe(key, n, &ao->head, node) {
+ hlist_del_rcu(&key->node);
+ if (!twsk)
+ atomic_sub(tcp_ao_sizeof_key(key), &sk->sk_omem_alloc);
+ call_rcu(&key->rcu, tcp_ao_key_free_rcu);
+ }
+
+ kfree_rcu(ao, rcu);
+ static_branch_slow_dec_deferred(&tcp_ao_needed);
+}
+
+void tcp_ao_time_wait(struct tcp_timewait_sock *tcptw, struct tcp_sock *tp)
+{
+ struct tcp_ao_info *ao_info = rcu_dereference_protected(tp->ao_info, 1);
+
+ if (ao_info) {
+ struct tcp_ao_key *key;
+ struct hlist_node *n;
+ int omem = 0;
+
+ hlist_for_each_entry_safe(key, n, &ao_info->head, node) {
+ omem += tcp_ao_sizeof_key(key);
+ }
+
+ refcount_inc(&ao_info->refcnt);
+ atomic_sub(omem, &(((struct sock *)tp)->sk_omem_alloc));
+ rcu_assign_pointer(tcptw->ao_info, ao_info);
+ } else {
+ tcptw->ao_info = NULL;
+ }
+}
+
+/* 4 tuple and ISNs are expected in NBO */
+static int tcp_v4_ao_calc_key(struct tcp_ao_key *mkt, u8 *key,
+ __be32 saddr, __be32 daddr,
+ __be16 sport, __be16 dport,
+ __be32 sisn, __be32 disn)
+{
+ /* See RFC5926 3.1.1 */
+ struct kdf_input_block {
+ u8 counter;
+ u8 label[6];
+ struct tcp4_ao_context ctx;
+ __be16 outlen;
+ } __packed * tmp;
+ struct tcp_sigpool hp;
+ int err;
+
+ err = tcp_sigpool_start(mkt->tcp_sigpool_id, &hp);
+ if (err)
+ return err;
+
+ tmp = hp.scratch;
+ tmp->counter = 1;
+ memcpy(tmp->label, "TCP-AO", 6);
+ tmp->ctx.saddr = saddr;
+ tmp->ctx.daddr = daddr;
+ tmp->ctx.sport = sport;
+ tmp->ctx.dport = dport;
+ tmp->ctx.sisn = sisn;
+ tmp->ctx.disn = disn;
+ tmp->outlen = htons(tcp_ao_digest_size(mkt) * 8); /* in bits */
+
+ err = tcp_ao_calc_traffic_key(mkt, key, tmp, sizeof(*tmp), &hp);
+ tcp_sigpool_end(&hp);
+
+ return err;
+}
+
+int tcp_v4_ao_calc_key_sk(struct tcp_ao_key *mkt, u8 *key,
+ const struct sock *sk,
+ __be32 sisn, __be32 disn, bool send)
+{
+ if (send)
+ return tcp_v4_ao_calc_key(mkt, key, sk->sk_rcv_saddr,
+ sk->sk_daddr, htons(sk->sk_num),
+ sk->sk_dport, sisn, disn);
+ else
+ return tcp_v4_ao_calc_key(mkt, key, sk->sk_daddr,
+ sk->sk_rcv_saddr, sk->sk_dport,
+ htons(sk->sk_num), disn, sisn);
+}
+
+static int tcp_ao_calc_key_sk(struct tcp_ao_key *mkt, u8 *key,
+ const struct sock *sk,
+ __be32 sisn, __be32 disn, bool send)
+{
+ if (mkt->family == AF_INET)
+ return tcp_v4_ao_calc_key_sk(mkt, key, sk, sisn, disn, send);
+#if IS_ENABLED(CONFIG_IPV6)
+ else if (mkt->family == AF_INET6)
+ return tcp_v6_ao_calc_key_sk(mkt, key, sk, sisn, disn, send);
+#endif
+ else
+ return -EOPNOTSUPP;
+}
+
+int tcp_v4_ao_calc_key_rsk(struct tcp_ao_key *mkt, u8 *key,
+ struct request_sock *req)
+{
+ struct inet_request_sock *ireq = inet_rsk(req);
+
+ return tcp_v4_ao_calc_key(mkt, key,
+ ireq->ir_loc_addr, ireq->ir_rmt_addr,
+ htons(ireq->ir_num), ireq->ir_rmt_port,
+ htonl(tcp_rsk(req)->snt_isn),
+ htonl(tcp_rsk(req)->rcv_isn));
+}
+
+static int tcp_v4_ao_calc_key_skb(struct tcp_ao_key *mkt, u8 *key,
+ const struct sk_buff *skb,
+ __be32 sisn, __be32 disn)
+{
+ const struct iphdr *iph = ip_hdr(skb);
+ const struct tcphdr *th = tcp_hdr(skb);
+
+ return tcp_v4_ao_calc_key(mkt, key, iph->saddr, iph->daddr,
+ th->source, th->dest, sisn, disn);
+}
+
+static int tcp_ao_calc_key_skb(struct tcp_ao_key *mkt, u8 *key,
+ const struct sk_buff *skb,
+ __be32 sisn, __be32 disn, int family)
+{
+ if (family == AF_INET)
+ return tcp_v4_ao_calc_key_skb(mkt, key, skb, sisn, disn);
+#if IS_ENABLED(CONFIG_IPV6)
+ else if (family == AF_INET6)
+ return tcp_v6_ao_calc_key_skb(mkt, key, skb, sisn, disn);
+#endif
+ return -EAFNOSUPPORT;
+}
+
+static int tcp_v4_ao_hash_pseudoheader(struct tcp_sigpool *hp,
+ __be32 daddr, __be32 saddr,
+ int nbytes)
+{
+ struct tcp4_pseudohdr *bp;
+ struct scatterlist sg;
+
+ bp = hp->scratch;
+ bp->saddr = saddr;
+ bp->daddr = daddr;
+ bp->pad = 0;
+ bp->protocol = IPPROTO_TCP;
+ bp->len = cpu_to_be16(nbytes);
+
+ sg_init_one(&sg, bp, sizeof(*bp));
+ ahash_request_set_crypt(hp->req, &sg, NULL, sizeof(*bp));
+ return crypto_ahash_update(hp->req);
+}
+
+static int tcp_ao_hash_pseudoheader(unsigned short int family,
+ const struct sock *sk,
+ const struct sk_buff *skb,
+ struct tcp_sigpool *hp, int nbytes)
+{
+ const struct tcphdr *th = tcp_hdr(skb);
+
+ /* TODO: Can we rely on checksum being zero to mean outbound pkt? */
+ if (!th->check) {
+ if (family == AF_INET)
+ return tcp_v4_ao_hash_pseudoheader(hp, sk->sk_daddr,
+ sk->sk_rcv_saddr, skb->len);
+#if IS_ENABLED(CONFIG_IPV6)
+ else if (family == AF_INET6)
+ return tcp_v6_ao_hash_pseudoheader(hp, &sk->sk_v6_daddr,
+ &sk->sk_v6_rcv_saddr, skb->len);
+#endif
+ else
+ return -EAFNOSUPPORT;
+ }
+
+ if (family == AF_INET) {
+ const struct iphdr *iph = ip_hdr(skb);
+
+ return tcp_v4_ao_hash_pseudoheader(hp, iph->daddr,
+ iph->saddr, skb->len);
+#if IS_ENABLED(CONFIG_IPV6)
+ } else if (family == AF_INET6) {
+ const struct ipv6hdr *iph = ipv6_hdr(skb);
+
+ return tcp_v6_ao_hash_pseudoheader(hp, &iph->daddr,
+ &iph->saddr, skb->len);
+#endif
+ }
+ return -EAFNOSUPPORT;
+}
+
+u32 tcp_ao_compute_sne(u32 next_sne, u32 next_seq, u32 seq)
+{
+ u32 sne = next_sne;
+
+ if (before(seq, next_seq)) {
+ if (seq > next_seq)
+ sne--;
+ } else {
+ if (seq < next_seq)
+ sne++;
+ }
+
+ return sne;
+}
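tcp_ao_compute_sne() tracks RFC 5925 sequence number extension (SNE) rollovers; a stand-alone sketch reproducing its behaviour, with before() mirroring the kernel's signed 32-bit comparison:

/* Stand-alone illustration of the SNE update implemented just above. */
#include <stdio.h>
#include <stdint.h>

static int before(uint32_t seq1, uint32_t seq2)
{
	return (int32_t)(seq1 - seq2) < 0;
}

static uint32_t compute_sne(uint32_t next_sne, uint32_t next_seq, uint32_t seq)
{
	uint32_t sne = next_sne;

	if (before(seq, next_seq)) {
		if (seq > next_seq)	/* old segment from the previous wrap */
			sne--;
	} else {
		if (seq < next_seq)	/* sequence space wrapped forward */
			sne++;
	}
	return sne;
}

int main(void)
{
	printf("%u\n", compute_sne(5, 0x00000010u, 0xfffffff0u));	/* 4 */
	printf("%u\n", compute_sne(5, 0xfffffff0u, 0x00000010u));	/* 6 */
	printf("%u\n", compute_sne(5, 0x00000010u, 0x00000008u));	/* 5 */
	return 0;
}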
+
+/* tcp_ao_hash_sne(struct tcp_sigpool *hp, u32 sne)
+ * @hp - used for hashing
+ * @sne - sne value
+ */
+static int tcp_ao_hash_sne(struct tcp_sigpool *hp, u32 sne)
+{
+ struct scatterlist sg;
+ __be32 *bp;
+
+ bp = (__be32 *)hp->scratch;
+ *bp = htonl(sne);
+
+ sg_init_one(&sg, bp, sizeof(*bp));
+ ahash_request_set_crypt(hp->req, &sg, NULL, sizeof(*bp));
+ return crypto_ahash_update(hp->req);
+}
+
+static int tcp_ao_hash_header(struct tcp_sigpool *hp,
+ const struct tcphdr *th,
+ bool exclude_options, u8 *hash,
+ int hash_offset, int hash_len)
+{
+ int err, len = th->doff << 2;
+ struct scatterlist sg;
+ u8 *hdr = hp->scratch;
+
+ /* We are not allowed to change tcphdr, make a local copy */
+ if (exclude_options) {
+ len = sizeof(*th) + sizeof(struct tcp_ao_hdr) + hash_len;
+ memcpy(hdr, th, sizeof(*th));
+ memcpy(hdr + sizeof(*th),
+ (u8 *)th + hash_offset - sizeof(struct tcp_ao_hdr),
+ sizeof(struct tcp_ao_hdr));
+ memset(hdr + sizeof(*th) + sizeof(struct tcp_ao_hdr),
+ 0, hash_len);
+ ((struct tcphdr *)hdr)->check = 0;
+ } else {
+ len = th->doff << 2;
+ memcpy(hdr, th, len);
+ /* zero out tcp-ao hash */
+ ((struct tcphdr *)hdr)->check = 0;
+ memset(hdr + hash_offset, 0, hash_len);
+ }
+
+ sg_init_one(&sg, hdr, len);
+ ahash_request_set_crypt(hp->req, &sg, NULL, len);
+ err = crypto_ahash_update(hp->req);
+ WARN_ON_ONCE(err != 0);
+ return err;
+}
+
+int tcp_ao_hash_hdr(unsigned short int family, char *ao_hash,
+ struct tcp_ao_key *key, const u8 *tkey,
+ const union tcp_ao_addr *daddr,
+ const union tcp_ao_addr *saddr,
+ const struct tcphdr *th, u32 sne)
+{
+ int tkey_len = tcp_ao_digest_size(key);
+ int hash_offset = ao_hash - (char *)th;
+ struct tcp_sigpool hp;
+ void *hash_buf = NULL;
+
+ hash_buf = kmalloc(tkey_len, GFP_ATOMIC);
+ if (!hash_buf)
+ goto clear_hash_noput;
+
+ if (tcp_sigpool_start(key->tcp_sigpool_id, &hp))
+ goto clear_hash_noput;
+
+ if (crypto_ahash_setkey(crypto_ahash_reqtfm(hp.req), tkey, tkey_len))
+ goto clear_hash;
+
+ if (crypto_ahash_init(hp.req))
+ goto clear_hash;
+
+ if (tcp_ao_hash_sne(&hp, sne))
+ goto clear_hash;
+ if (family == AF_INET) {
+ if (tcp_v4_ao_hash_pseudoheader(&hp, daddr->a4.s_addr,
+ saddr->a4.s_addr, th->doff * 4))
+ goto clear_hash;
+#if IS_ENABLED(CONFIG_IPV6)
+ } else if (family == AF_INET6) {
+ if (tcp_v6_ao_hash_pseudoheader(&hp, &daddr->a6,
+ &saddr->a6, th->doff * 4))
+ goto clear_hash;
+#endif
+ } else {
+ WARN_ON_ONCE(1);
+ goto clear_hash;
+ }
+ if (tcp_ao_hash_header(&hp, th,
+ !!(key->keyflags & TCP_AO_KEYF_EXCLUDE_OPT),
+ ao_hash, hash_offset, tcp_ao_maclen(key)))
+ goto clear_hash;
+ ahash_request_set_crypt(hp.req, NULL, hash_buf, 0);
+ if (crypto_ahash_final(hp.req))
+ goto clear_hash;
+
+ memcpy(ao_hash, hash_buf, tcp_ao_maclen(key));
+ tcp_sigpool_end(&hp);
+ kfree(hash_buf);
+ return 0;
+
+clear_hash:
+ tcp_sigpool_end(&hp);
+clear_hash_noput:
+ memset(ao_hash, 0, tcp_ao_maclen(key));
+ kfree(hash_buf);
+ return 1;
+}
+
+int tcp_ao_hash_skb(unsigned short int family,
+ char *ao_hash, struct tcp_ao_key *key,
+ const struct sock *sk, const struct sk_buff *skb,
+ const u8 *tkey, int hash_offset, u32 sne)
+{
+ const struct tcphdr *th = tcp_hdr(skb);
+ int tkey_len = tcp_ao_digest_size(key);
+ struct tcp_sigpool hp;
+ void *hash_buf = NULL;
+
+ hash_buf = kmalloc(tkey_len, GFP_ATOMIC);
+ if (!hash_buf)
+ goto clear_hash_noput;
+
+ if (tcp_sigpool_start(key->tcp_sigpool_id, &hp))
+ goto clear_hash_noput;
+
+ if (crypto_ahash_setkey(crypto_ahash_reqtfm(hp.req), tkey, tkey_len))
+ goto clear_hash;
+
+ /* For now use sha1 by default. Depends on alg in tcp_ao_key */
+ if (crypto_ahash_init(hp.req))
+ goto clear_hash;
+
+ if (tcp_ao_hash_sne(&hp, sne))
+ goto clear_hash;
+ if (tcp_ao_hash_pseudoheader(family, sk, skb, &hp, skb->len))
+ goto clear_hash;
+ if (tcp_ao_hash_header(&hp, th,
+ !!(key->keyflags & TCP_AO_KEYF_EXCLUDE_OPT),
+ ao_hash, hash_offset, tcp_ao_maclen(key)))
+ goto clear_hash;
+ if (tcp_sigpool_hash_skb_data(&hp, skb, th->doff << 2))
+ goto clear_hash;
+ ahash_request_set_crypt(hp.req, NULL, hash_buf, 0);
+ if (crypto_ahash_final(hp.req))
+ goto clear_hash;
+
+ memcpy(ao_hash, hash_buf, tcp_ao_maclen(key));
+ tcp_sigpool_end(&hp);
+ kfree(hash_buf);
+ return 0;
+
+clear_hash:
+ tcp_sigpool_end(&hp);
+clear_hash_noput:
+ memset(ao_hash, 0, tcp_ao_maclen(key));
+ kfree(hash_buf);
+ return 1;
+}
+
+int tcp_v4_ao_hash_skb(char *ao_hash, struct tcp_ao_key *key,
+ const struct sock *sk, const struct sk_buff *skb,
+ const u8 *tkey, int hash_offset, u32 sne)
+{
+ return tcp_ao_hash_skb(AF_INET, ao_hash, key, sk, skb,
+ tkey, hash_offset, sne);
+}
+
+int tcp_v4_ao_synack_hash(char *ao_hash, struct tcp_ao_key *ao_key,
+ struct request_sock *req, const struct sk_buff *skb,
+ int hash_offset, u32 sne)
+{
+ void *hash_buf = NULL;
+ int err;
+
+ hash_buf = kmalloc(tcp_ao_digest_size(ao_key), GFP_ATOMIC);
+ if (!hash_buf)
+ return -ENOMEM;
+
+ err = tcp_v4_ao_calc_key_rsk(ao_key, hash_buf, req);
+ if (err)
+ goto out;
+
+ err = tcp_ao_hash_skb(AF_INET, ao_hash, ao_key, req_to_sk(req), skb,
+ hash_buf, hash_offset, sne);
+out:
+ kfree(hash_buf);
+ return err;
+}
+
+struct tcp_ao_key *tcp_v4_ao_lookup_rsk(const struct sock *sk,
+ struct request_sock *req,
+ int sndid, int rcvid)
+{
+ struct inet_request_sock *ireq = inet_rsk(req);
+ union tcp_ao_addr *addr = (union tcp_ao_addr *)&ireq->ir_rmt_addr;
+ int l3index;
+
+ l3index = l3mdev_master_ifindex_by_index(sock_net(sk), ireq->ir_iif);
+ return tcp_ao_do_lookup(sk, l3index, addr, AF_INET, sndid, rcvid);
+}
+
+struct tcp_ao_key *tcp_v4_ao_lookup(const struct sock *sk, struct sock *addr_sk,
+ int sndid, int rcvid)
+{
+ int l3index = l3mdev_master_ifindex_by_index(sock_net(sk),
+ addr_sk->sk_bound_dev_if);
+ union tcp_ao_addr *addr = (union tcp_ao_addr *)&addr_sk->sk_daddr;
+
+ return tcp_ao_do_lookup(sk, l3index, addr, AF_INET, sndid, rcvid);
+}
+
+int tcp_ao_prepare_reset(const struct sock *sk, struct sk_buff *skb,
+ const struct tcp_ao_hdr *aoh, int l3index, u32 seq,
+ struct tcp_ao_key **key, char **traffic_key,
+ bool *allocated_traffic_key, u8 *keyid, u32 *sne)
+{
+ const struct tcphdr *th = tcp_hdr(skb);
+ struct tcp_ao_info *ao_info;
+
+ *allocated_traffic_key = false;
+ /* If there's no socket - then the initial sisn/disn are unknown.
+ * Drop the segment. RFC5925 (7.7) advises to require graceful
+ * restart [RFC4724]. Alternatively, the RFC5925 advises to
+ * save/restore traffic keys before/after reboot.
+ * Linux TCP-AO support provides TCP_AO_ADD_KEY and TCP_AO_REPAIR
+ * options to restore a socket post-reboot.
+ */
+ if (!sk)
+ return -ENOTCONN;
+
+ if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_NEW_SYN_RECV)) {
+ unsigned int family = READ_ONCE(sk->sk_family);
+ union tcp_ao_addr *addr;
+ __be32 disn, sisn;
+
+ if (sk->sk_state == TCP_NEW_SYN_RECV) {
+ struct request_sock *req = inet_reqsk(sk);
+
+ sisn = htonl(tcp_rsk(req)->rcv_isn);
+ disn = htonl(tcp_rsk(req)->snt_isn);
+ *sne = tcp_ao_compute_sne(0, tcp_rsk(req)->snt_isn, seq);
+ } else {
+ sisn = th->seq;
+ disn = 0;
+ }
+ if (IS_ENABLED(CONFIG_IPV6) && family == AF_INET6)
+ addr = (union tcp_md5_addr *)&ipv6_hdr(skb)->saddr;
+ else
+ addr = (union tcp_md5_addr *)&ip_hdr(skb)->saddr;
+#if IS_ENABLED(CONFIG_IPV6)
+ if (family == AF_INET6 && ipv6_addr_v4mapped(&sk->sk_v6_daddr))
+ family = AF_INET;
+#endif
+
+ sk = sk_const_to_full_sk(sk);
+ ao_info = rcu_dereference(tcp_sk(sk)->ao_info);
+ if (!ao_info)
+ return -ENOENT;
+ *key = tcp_ao_do_lookup(sk, l3index, addr, family,
+ -1, aoh->rnext_keyid);
+ if (!*key)
+ return -ENOENT;
+ *traffic_key = kmalloc(tcp_ao_digest_size(*key), GFP_ATOMIC);
+ if (!*traffic_key)
+ return -ENOMEM;
+ *allocated_traffic_key = true;
+ if (tcp_ao_calc_key_skb(*key, *traffic_key, skb,
+ sisn, disn, family))
+ return -1;
+ *keyid = (*key)->rcvid;
+ } else {
+ struct tcp_ao_key *rnext_key;
+ u32 snd_basis;
+
+ if (sk->sk_state == TCP_TIME_WAIT) {
+ ao_info = rcu_dereference(tcp_twsk(sk)->ao_info);
+ snd_basis = tcp_twsk(sk)->tw_snd_nxt;
+ } else {
+ ao_info = rcu_dereference(tcp_sk(sk)->ao_info);
+ snd_basis = tcp_sk(sk)->snd_una;
+ }
+ if (!ao_info)
+ return -ENOENT;
+
+ *key = tcp_ao_established_key(ao_info, aoh->rnext_keyid, -1);
+ if (!*key)
+ return -ENOENT;
+ *traffic_key = snd_other_key(*key);
+ rnext_key = READ_ONCE(ao_info->rnext_key);
+ *keyid = rnext_key->rcvid;
+ *sne = tcp_ao_compute_sne(READ_ONCE(ao_info->snd_sne),
+ snd_basis, seq);
+ }
+ return 0;
+}
+
+int tcp_ao_transmit_skb(struct sock *sk, struct sk_buff *skb,
+ struct tcp_ao_key *key, struct tcphdr *th,
+ __u8 *hash_location)
+{
+ struct tcp_skb_cb *tcb = TCP_SKB_CB(skb);
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct tcp_ao_info *ao;
+ void *tkey_buf = NULL;
+ u8 *traffic_key;
+ u32 sne;
+
+ ao = rcu_dereference_protected(tcp_sk(sk)->ao_info,
+ lockdep_sock_is_held(sk));
+ traffic_key = snd_other_key(key);
+ if (unlikely(tcb->tcp_flags & TCPHDR_SYN)) {
+ __be32 disn;
+
+ if (!(tcb->tcp_flags & TCPHDR_ACK)) {
+ disn = 0;
+ tkey_buf = kmalloc(tcp_ao_digest_size(key), GFP_ATOMIC);
+ if (!tkey_buf)
+ return -ENOMEM;
+ traffic_key = tkey_buf;
+ } else {
+ disn = ao->risn;
+ }
+ tp->af_specific->ao_calc_key_sk(key, traffic_key,
+ sk, ao->lisn, disn, true);
+ }
+ sne = tcp_ao_compute_sne(READ_ONCE(ao->snd_sne), READ_ONCE(tp->snd_una),
+ ntohl(th->seq));
+ tp->af_specific->calc_ao_hash(hash_location, key, sk, skb, traffic_key,
+ hash_location - (u8 *)th, sne);
+ kfree(tkey_buf);
+ return 0;
+}
+
+static struct tcp_ao_key *tcp_ao_inbound_lookup(unsigned short int family,
+ const struct sock *sk, const struct sk_buff *skb,
+ int sndid, int rcvid, int l3index)
+{
+ if (family == AF_INET) {
+ const struct iphdr *iph = ip_hdr(skb);
+
+ return tcp_ao_do_lookup(sk, l3index,
+ (union tcp_ao_addr *)&iph->saddr,
+ AF_INET, sndid, rcvid);
+ } else {
+ const struct ipv6hdr *iph = ipv6_hdr(skb);
+
+ return tcp_ao_do_lookup(sk, l3index,
+ (union tcp_ao_addr *)&iph->saddr,
+ AF_INET6, sndid, rcvid);
+ }
+}
+
+void tcp_ao_syncookie(struct sock *sk, const struct sk_buff *skb,
+ struct tcp_request_sock *treq,
+ unsigned short int family, int l3index)
+{
+ const struct tcphdr *th = tcp_hdr(skb);
+ const struct tcp_ao_hdr *aoh;
+ struct tcp_ao_key *key;
+
+ treq->maclen = 0;
+
+ if (tcp_parse_auth_options(th, NULL, &aoh) || !aoh)
+ return;
+
+ key = tcp_ao_inbound_lookup(family, sk, skb, -1, aoh->keyid, l3index);
+ if (!key)
+ /* Key not found, continue without TCP-AO */
+ return;
+
+ treq->ao_rcv_next = aoh->keyid;
+ treq->ao_keyid = aoh->rnext_keyid;
+ treq->maclen = tcp_ao_maclen(key);
+}
+
+static enum skb_drop_reason
+tcp_ao_verify_hash(const struct sock *sk, const struct sk_buff *skb,
+ unsigned short int family, struct tcp_ao_info *info,
+ const struct tcp_ao_hdr *aoh, struct tcp_ao_key *key,
+ u8 *traffic_key, u8 *phash, u32 sne, int l3index)
+{
+ u8 maclen = aoh->length - sizeof(struct tcp_ao_hdr);
+ const struct tcphdr *th = tcp_hdr(skb);
+ void *hash_buf = NULL;
+
+ if (maclen != tcp_ao_maclen(key)) {
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAOBAD);
+ atomic64_inc(&info->counters.pkt_bad);
+ atomic64_inc(&key->pkt_bad);
+ tcp_hash_fail("AO hash wrong length", family, skb,
+ "%u != %d L3index: %d", maclen,
+ tcp_ao_maclen(key), l3index);
+ return SKB_DROP_REASON_TCP_AOFAILURE;
+ }
+
+ hash_buf = kmalloc(tcp_ao_digest_size(key), GFP_ATOMIC);
+ if (!hash_buf)
+ return SKB_DROP_REASON_NOT_SPECIFIED;
+
+ /* XXX: make it per-AF callback? */
+ tcp_ao_hash_skb(family, hash_buf, key, sk, skb, traffic_key,
+ (phash - (u8 *)th), sne);
+ if (memcmp(phash, hash_buf, maclen)) {
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAOBAD);
+ atomic64_inc(&info->counters.pkt_bad);
+ atomic64_inc(&key->pkt_bad);
+ tcp_hash_fail("AO hash mismatch", family, skb,
+ "L3index: %d", l3index);
+ kfree(hash_buf);
+ return SKB_DROP_REASON_TCP_AOFAILURE;
+ }
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAOGOOD);
+ atomic64_inc(&info->counters.pkt_good);
+ atomic64_inc(&key->pkt_good);
+ kfree(hash_buf);
+ return SKB_NOT_DROPPED_YET;
+}
+
+enum skb_drop_reason
+tcp_inbound_ao_hash(struct sock *sk, const struct sk_buff *skb,
+ unsigned short int family, const struct request_sock *req,
+ int l3index, const struct tcp_ao_hdr *aoh)
+{
+ const struct tcphdr *th = tcp_hdr(skb);
+ u8 *phash = (u8 *)(aoh + 1); /* hash goes just after the header */
+ struct tcp_ao_info *info;
+ enum skb_drop_reason ret;
+ struct tcp_ao_key *key;
+ __be32 sisn, disn;
+ u8 *traffic_key;
+ u32 sne = 0;
+
+ info = rcu_dereference(tcp_sk(sk)->ao_info);
+ if (!info) {
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAOKEYNOTFOUND);
+ tcp_hash_fail("AO key not found", family, skb,
+ "keyid: %u L3index: %d", aoh->keyid, l3index);
+ return SKB_DROP_REASON_TCP_AOUNEXPECTED;
+ }
+
+ if (unlikely(th->syn)) {
+ sisn = th->seq;
+ disn = 0;
+ }
+
+ /* Fast-path */
+ if (likely((1 << sk->sk_state) & TCP_AO_ESTABLISHED)) {
+ enum skb_drop_reason err;
+ struct tcp_ao_key *current_key;
+
+ /* Check if this socket's rnext_key matches the keyid in the
+ * packet. If not, we look up the key whose rcvid matches the
+ * keyid in the packet.
+ */
+ key = READ_ONCE(info->rnext_key);
+ if (key->rcvid != aoh->keyid) {
+ key = tcp_ao_established_key(info, -1, aoh->keyid);
+ if (!key)
+ goto key_not_found;
+ }
+
+ /* Delayed retransmitted SYN */
+ if (unlikely(th->syn && !th->ack))
+ goto verify_hash;
+
+ sne = tcp_ao_compute_sne(info->rcv_sne, tcp_sk(sk)->rcv_nxt,
+ ntohl(th->seq));
+ /* Established socket, traffic keys are cached */
+ traffic_key = rcv_other_key(key);
+ err = tcp_ao_verify_hash(sk, skb, family, info, aoh, key,
+ traffic_key, phash, sne, l3index);
+ if (err)
+ return err;
+ current_key = READ_ONCE(info->current_key);
+ /* Key rotation: the peer asks us to use new key (RNext) */
+ if (unlikely(aoh->rnext_keyid != current_key->sndid)) {
+ /* If the key is not found we do nothing. */
+ key = tcp_ao_established_key(info, aoh->rnext_keyid, -1);
+ if (key)
+ /* pairs with tcp_ao_del_cmd */
+ WRITE_ONCE(info->current_key, key);
+ }
+ return SKB_NOT_DROPPED_YET;
+ }
+
+ /* Look up the key based on peer address and keyid.
+ * current_key and rnext_key must not be used on tcp listen
+ * sockets as otherwise:
+ * - request sockets would race on those key pointers
+ * - tcp_ao_del_cmd() allows async key removal
+ */
+ key = tcp_ao_inbound_lookup(family, sk, skb, -1, aoh->keyid, l3index);
+ if (!key)
+ goto key_not_found;
+
+ if (th->syn && !th->ack)
+ goto verify_hash;
+
+ if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_NEW_SYN_RECV)) {
+ /* Make the initial syn the likely case here */
+ if (unlikely(req)) {
+ sne = tcp_ao_compute_sne(0, tcp_rsk(req)->rcv_isn,
+ ntohl(th->seq));
+ sisn = htonl(tcp_rsk(req)->rcv_isn);
+ disn = htonl(tcp_rsk(req)->snt_isn);
+ } else if (unlikely(th->ack && !th->syn)) {
+ /* Possible syncookie packet */
+ sisn = htonl(ntohl(th->seq) - 1);
+ disn = htonl(ntohl(th->ack_seq) - 1);
+ sne = tcp_ao_compute_sne(0, ntohl(sisn),
+ ntohl(th->seq));
+ } else if (unlikely(!th->syn)) {
+ /* no way to figure out initial sisn/disn - drop */
+ return SKB_DROP_REASON_TCP_FLAGS;
+ }
+ } else if ((1 << sk->sk_state) & (TCPF_SYN_SENT | TCPF_SYN_RECV)) {
+ disn = info->lisn;
+ if (th->syn || th->rst)
+ sisn = th->seq;
+ else
+ sisn = info->risn;
+ } else {
+ WARN_ONCE(1, "TCP-AO: Unexpected sk_state %d", sk->sk_state);
+ return SKB_DROP_REASON_TCP_AOFAILURE;
+ }
+verify_hash:
+ traffic_key = kmalloc(tcp_ao_digest_size(key), GFP_ATOMIC);
+ if (!traffic_key)
+ return SKB_DROP_REASON_NOT_SPECIFIED;
+ tcp_ao_calc_key_skb(key, traffic_key, skb, sisn, disn, family);
+ ret = tcp_ao_verify_hash(sk, skb, family, info, aoh, key,
+ traffic_key, phash, sne, l3index);
+ kfree(traffic_key);
+ return ret;
+
+key_not_found:
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPAOKEYNOTFOUND);
+ atomic64_inc(&info->counters.key_not_found);
+ tcp_hash_fail("Requested by the peer AO key id not found",
+ family, skb, "L3index: %d", l3index);
+ return SKB_DROP_REASON_TCP_AOKEYNOTFOUND;
+}
+
+static int tcp_ao_cache_traffic_keys(const struct sock *sk,
+ struct tcp_ao_info *ao,
+ struct tcp_ao_key *ao_key)
+{
+ u8 *traffic_key = snd_other_key(ao_key);
+ int ret;
+
+ ret = tcp_ao_calc_key_sk(ao_key, traffic_key, sk,
+ ao->lisn, ao->risn, true);
+ if (ret)
+ return ret;
+
+ traffic_key = rcv_other_key(ao_key);
+ ret = tcp_ao_calc_key_sk(ao_key, traffic_key, sk,
+ ao->lisn, ao->risn, false);
+ return ret;
+}
+
+void tcp_ao_connect_init(struct sock *sk)
+{
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct tcp_ao_info *ao_info;
+ union tcp_ao_addr *addr;
+ struct tcp_ao_key *key;
+ int family, l3index;
+
+ ao_info = rcu_dereference_protected(tp->ao_info,
+ lockdep_sock_is_held(sk));
+ if (!ao_info)
+ return;
+
+ /* Remove all keys that don't match the peer */
+ family = sk->sk_family;
+ if (family == AF_INET)
+ addr = (union tcp_ao_addr *)&sk->sk_daddr;
+#if IS_ENABLED(CONFIG_IPV6)
+ else if (family == AF_INET6)
+ addr = (union tcp_ao_addr *)&sk->sk_v6_daddr;
+#endif
+ else
+ return;
+ l3index = l3mdev_master_ifindex_by_index(sock_net(sk),
+ sk->sk_bound_dev_if);
+
+ hlist_for_each_entry_rcu(key, &ao_info->head, node) {
+ if (!tcp_ao_key_cmp(key, l3index, addr, key->prefixlen, family, -1, -1))
+ continue;
+
+ if (key == ao_info->current_key)
+ ao_info->current_key = NULL;
+ if (key == ao_info->rnext_key)
+ ao_info->rnext_key = NULL;
+ hlist_del_rcu(&key->node);
+ atomic_sub(tcp_ao_sizeof_key(key), &sk->sk_omem_alloc);
+ call_rcu(&key->rcu, tcp_ao_key_free_rcu);
+ }
+
+ key = tp->af_specific->ao_lookup(sk, sk, -1, -1);
+ if (key) {
+ /* if current_key or rnext_key were not provided,
+ * use the first key matching the peer
+ */
+ if (!ao_info->current_key)
+ ao_info->current_key = key;
+ if (!ao_info->rnext_key)
+ ao_info->rnext_key = key;
+ tp->tcp_header_len += tcp_ao_len(key);
+
+ ao_info->lisn = htonl(tp->write_seq);
+ ao_info->snd_sne = 0;
+ } else {
+ /* Can't happen: tcp_connect() verifies that there's
+ * at least one tcp-ao key that matches the remote peer.
+ */
+ WARN_ON_ONCE(1);
+ rcu_assign_pointer(tp->ao_info, NULL);
+ kfree(ao_info);
+ }
+}
+
+void tcp_ao_established(struct sock *sk)
+{
+ struct tcp_ao_info *ao;
+ struct tcp_ao_key *key;
+
+ ao = rcu_dereference_protected(tcp_sk(sk)->ao_info,
+ lockdep_sock_is_held(sk));
+ if (!ao)
+ return;
+
+ hlist_for_each_entry_rcu(key, &ao->head, node)
+ tcp_ao_cache_traffic_keys(sk, ao, key);
+}
+
+void tcp_ao_finish_connect(struct sock *sk, struct sk_buff *skb)
+{
+ struct tcp_ao_info *ao;
+ struct tcp_ao_key *key;
+
+ ao = rcu_dereference_protected(tcp_sk(sk)->ao_info,
+ lockdep_sock_is_held(sk));
+ if (!ao)
+ return;
+
+ WRITE_ONCE(ao->risn, tcp_hdr(skb)->seq);
+ ao->rcv_sne = 0;
+
+ hlist_for_each_entry_rcu(key, &ao->head, node)
+ tcp_ao_cache_traffic_keys(sk, ao, key);
+}
+
+int tcp_ao_copy_all_matching(const struct sock *sk, struct sock *newsk,
+ struct request_sock *req, struct sk_buff *skb,
+ int family)
+{
+ struct tcp_ao_key *key, *new_key, *first_key;
+ struct tcp_ao_info *new_ao, *ao;
+ struct hlist_node *key_head;
+ int l3index, ret = -ENOMEM;
+ union tcp_ao_addr *addr;
+ bool match = false;
+
+ ao = rcu_dereference(tcp_sk(sk)->ao_info);
+ if (!ao)
+ return 0;
+
+ /* New socket without TCP-AO on it */
+ if (!tcp_rsk_used_ao(req))
+ return 0;
+
+ new_ao = tcp_ao_alloc_info(GFP_ATOMIC);
+ if (!new_ao)
+ return -ENOMEM;
+ new_ao->lisn = htonl(tcp_rsk(req)->snt_isn);
+ new_ao->risn = htonl(tcp_rsk(req)->rcv_isn);
+ new_ao->ao_required = ao->ao_required;
+ new_ao->accept_icmps = ao->accept_icmps;
+
+ if (family == AF_INET) {
+ addr = (union tcp_ao_addr *)&newsk->sk_daddr;
+#if IS_ENABLED(CONFIG_IPV6)
+ } else if (family == AF_INET6) {
+ addr = (union tcp_ao_addr *)&newsk->sk_v6_daddr;
+#endif
+ } else {
+ ret = -EAFNOSUPPORT;
+ goto free_ao;
+ }
+ l3index = l3mdev_master_ifindex_by_index(sock_net(newsk),
+ newsk->sk_bound_dev_if);
+
+ hlist_for_each_entry_rcu(key, &ao->head, node) {
+ if (tcp_ao_key_cmp(key, l3index, addr, key->prefixlen, family, -1, -1))
+ continue;
+
+ new_key = tcp_ao_copy_key(newsk, key);
+ if (!new_key)
+ goto free_and_exit;
+
+ tcp_ao_cache_traffic_keys(newsk, new_ao, new_key);
+ tcp_ao_link_mkt(new_ao, new_key);
+ match = true;
+ }
+
+ if (!match) {
+ /* RFC5925 (7.4.1) specifies that the TCP-AO status
+ * of a connection is determined on the initial SYN.
+ * At this point the connection was TCP-AO enabled, so
+ * it can't switch to being unsigned if the peer's key
+ * disappears on the listening socket.
+ */
+ ret = -EKEYREJECTED;
+ goto free_and_exit;
+ }
+
+ if (!static_key_fast_inc_not_disabled(&tcp_ao_needed.key.key)) {
+ ret = -EUSERS;
+ goto free_and_exit;
+ }
+
+ key_head = rcu_dereference(hlist_first_rcu(&new_ao->head));
+ first_key = hlist_entry_safe(key_head, struct tcp_ao_key, node);
+
+ key = tcp_ao_established_key(new_ao, tcp_rsk(req)->ao_keyid, -1);
+ if (key)
+ new_ao->current_key = key;
+ else
+ new_ao->current_key = first_key;
+
+ /* set rnext_key */
+ key = tcp_ao_established_key(new_ao, -1, tcp_rsk(req)->ao_rcv_next);
+ if (key)
+ new_ao->rnext_key = key;
+ else
+ new_ao->rnext_key = first_key;
+
+ sk_gso_disable(newsk);
+ rcu_assign_pointer(tcp_sk(newsk)->ao_info, new_ao);
+
+ return 0;
+
+free_and_exit:
+ hlist_for_each_entry_safe(key, key_head, &new_ao->head, node) {
+ hlist_del(&key->node);
+ tcp_sigpool_release(key->tcp_sigpool_id);
+ atomic_sub(tcp_ao_sizeof_key(key), &newsk->sk_omem_alloc);
+ kfree_sensitive(key);
+ }
+free_ao:
+ kfree(new_ao);
+ return ret;
+}
+
+static bool tcp_ao_can_set_current_rnext(struct sock *sk)
+{
+ /* There aren't current/rnext keys on TCP_LISTEN sockets */
+ if (sk->sk_state == TCP_LISTEN)
+ return false;
+ return true;
+}
+
+static int tcp_ao_verify_ipv4(struct sock *sk, struct tcp_ao_add *cmd,
+ union tcp_ao_addr **addr)
+{
+ struct sockaddr_in *sin = (struct sockaddr_in *)&cmd->addr;
+ struct inet_sock *inet = inet_sk(sk);
+
+ if (sin->sin_family != AF_INET)
+ return -EINVAL;
+
+ /* Currently matching is not performed on port (or port ranges) */
+ if (sin->sin_port != 0)
+ return -EINVAL;
+
+ /* Check prefix and trailing 0's in addr */
+ if (cmd->prefix != 0) {
+ __be32 mask;
+
+ if (ntohl(sin->sin_addr.s_addr) == INADDR_ANY)
+ return -EINVAL;
+ if (cmd->prefix > 32)
+ return -EINVAL;
+
+ mask = inet_make_mask(cmd->prefix);
+ if (sin->sin_addr.s_addr & ~mask)
+ return -EINVAL;
+
+ /* Check that MKT address is consistent with socket */
+ if (ntohl(inet->inet_daddr) != INADDR_ANY &&
+ (inet->inet_daddr & mask) != sin->sin_addr.s_addr)
+ return -EINVAL;
+ } else {
+ if (ntohl(sin->sin_addr.s_addr) != INADDR_ANY)
+ return -EINVAL;
+ }
+
+ *addr = (union tcp_ao_addr *)&sin->sin_addr;
+ return 0;
+}
+
+static int tcp_ao_parse_crypto(struct tcp_ao_add *cmd, struct tcp_ao_key *key)
+{
+ unsigned int syn_tcp_option_space;
+ bool is_kdf_aes_128_cmac = false;
+ struct crypto_ahash *tfm;
+ struct tcp_sigpool hp;
+ void *tmp_key = NULL;
+ int err;
+
+ /* RFC5926, 3.1.1.2. KDF_AES_128_CMAC */
+ if (!strcmp("cmac(aes128)", cmd->alg_name)) {
+ strscpy(cmd->alg_name, "cmac(aes)", sizeof(cmd->alg_name));
+ is_kdf_aes_128_cmac = (cmd->keylen != 16);
+ tmp_key = kmalloc(cmd->keylen, GFP_KERNEL);
+ if (!tmp_key)
+ return -ENOMEM;
+ }
+
+ key->maclen = cmd->maclen ?: 12; /* 12 is the default in RFC5925 */
+
+ /* Check: maclen + tcp-ao header <= (MAX_TCP_OPTION_SPACE - mss
+ * - tstamp - wscale - sackperm),
+ * see tcp_syn_options(), tcp_synack_options(), commit 33ad798c924b.
+ *
+ * In order to allow D-SACK with TCP-AO, the header size should be:
+ * (MAX_TCP_OPTION_SPACE - TCPOLEN_TSTAMP_ALIGNED
+ * - TCPOLEN_SACK_BASE_ALIGNED
+ * - 2 * TCPOLEN_SACK_PERBLOCK) = 8 (maclen = 4),
+ * see tcp_established_options().
+ *
+ * RFC5925, 2.2:
+ * Typical MACs are 96-128 bits (12-16 bytes), but any length
+ * that fits in the header of the segment being authenticated
+ * is allowed.
+ *
+ * RFC5925, 7.6:
+ * TCP-AO continues to consume 16 bytes in non-SYN segments,
+ * leaving a total of 24 bytes for other options, of which
+ * the timestamp consumes 10. This leaves 14 bytes, of which 10
+ * are used for a single SACK block. When two SACK blocks are used,
+ * such as to handle D-SACK, a smaller TCP-AO MAC would be required
+ * to make room for the additional SACK block (i.e., to leave 18
+ * bytes for the D-SACK variant of the SACK option) [RFC2883].
+ * Note that D-SACK is not supportable in TCP MD5 in the presence
+ * of timestamps, because TCP MD5’s MAC length is fixed and too
+ * large to leave sufficient option space.
+ */
+ syn_tcp_option_space = MAX_TCP_OPTION_SPACE;
+ syn_tcp_option_space -= TCPOLEN_TSTAMP_ALIGNED;
+ syn_tcp_option_space -= TCPOLEN_WSCALE_ALIGNED;
+ syn_tcp_option_space -= TCPOLEN_SACKPERM_ALIGNED;
+ if (tcp_ao_len(key) > syn_tcp_option_space) {
+ err = -EMSGSIZE;
+ goto err_kfree;
+ }
+
+ key->keylen = cmd->keylen;
+ memcpy(key->key, cmd->key, cmd->keylen);
+
+ err = tcp_sigpool_start(key->tcp_sigpool_id, &hp);
+ if (err)
+ goto err_kfree;
+
+ tfm = crypto_ahash_reqtfm(hp.req);
+ if (is_kdf_aes_128_cmac) {
+ void *scratch = hp.scratch;
+ struct scatterlist sg;
+
+ memcpy(tmp_key, cmd->key, cmd->keylen);
+ sg_init_one(&sg, tmp_key, cmd->keylen);
+
+ /* Using zero-key of 16 bytes as described in RFC5926 */
+ memset(scratch, 0, 16);
+ err = crypto_ahash_setkey(tfm, scratch, 16);
+ if (err)
+ goto err_pool_end;
+
+ err = crypto_ahash_init(hp.req);
+ if (err)
+ goto err_pool_end;
+
+ ahash_request_set_crypt(hp.req, &sg, key->key, cmd->keylen);
+ err = crypto_ahash_update(hp.req);
+ if (err)
+ goto err_pool_end;
+
+ err |= crypto_ahash_final(hp.req);
+ if (err)
+ goto err_pool_end;
+ key->keylen = 16;
+ }
+
+ err = crypto_ahash_setkey(tfm, key->key, key->keylen);
+ if (err)
+ goto err_pool_end;
+
+ tcp_sigpool_end(&hp);
+ kfree_sensitive(tmp_key);
+
+ if (tcp_ao_maclen(key) > key->digest_size)
+ return -EINVAL;
+
+ return 0;
+
+err_pool_end:
+ tcp_sigpool_end(&hp);
+err_kfree:
+ kfree_sensitive(tmp_key);
+ return err;
+}
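The -EMSGSIZE check above is plain option-space arithmetic for the SYN. The sketch below walks through it with the conventional option sizes (40 bytes of TCP option space, 12-byte aligned timestamp, 4-byte aligned window scale and SACK-permitted) and a 4-byte TCP-AO header; treat the constant values as assumptions of the sketch rather than a restatement of include/net/tcp.h.

/* ao_option_budget.c - rough model of the SYN option-space check above.
 * Constant values are the conventional ones; they are assumptions of
 * this sketch.
 */
#include <stdio.h>

#define MAX_TCP_OPTION_SPACE		40
#define TCPOLEN_TSTAMP_ALIGNED		12
#define TCPOLEN_WSCALE_ALIGNED		4
#define TCPOLEN_SACKPERM_ALIGNED	4
#define TCP_AO_HDR_LEN			4	/* kind, length, keyid, rnext_keyid */

int main(void)
{
	int budget = MAX_TCP_OPTION_SPACE - TCPOLEN_TSTAMP_ALIGNED -
		     TCPOLEN_WSCALE_ALIGNED - TCPOLEN_SACKPERM_ALIGNED;
	int maclen;

	for (maclen = 4; maclen <= 20; maclen += 4) {
		int ao_len = TCP_AO_HDR_LEN + maclen;

		printf("maclen %2d -> AO option %2d bytes: %s\n", maclen, ao_len,
		       ao_len > budget ? "rejected (-EMSGSIZE)" : "fits");
	}
	return 0;	/* budget is 20, so MACs longer than 16 bytes are rejected */
}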
+
+#if IS_ENABLED(CONFIG_IPV6)
+static int tcp_ao_verify_ipv6(struct sock *sk, struct tcp_ao_add *cmd,
+ union tcp_ao_addr **paddr,
+ unsigned short int *family)
+{
+ struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)&cmd->addr;
+ struct in6_addr *addr = &sin6->sin6_addr;
+ u8 prefix = cmd->prefix;
+
+ if (sin6->sin6_family != AF_INET6)
+ return -EINVAL;
+
+ /* Currently matching is not performed on port (or port ranges) */
+ if (sin6->sin6_port != 0)
+ return -EINVAL;
+
+ /* Check prefix and trailing 0's in addr */
+ if (cmd->prefix != 0 && ipv6_addr_v4mapped(addr)) {
+ __be32 addr4 = addr->s6_addr32[3];
+ __be32 mask;
+
+ if (prefix > 32 || ntohl(addr4) == INADDR_ANY)
+ return -EINVAL;
+
+ mask = inet_make_mask(prefix);
+ if (addr4 & ~mask)
+ return -EINVAL;
+
+ /* Check that MKT address is consistent with socket */
+ if (!ipv6_addr_any(&sk->sk_v6_daddr)) {
+ __be32 daddr4 = sk->sk_v6_daddr.s6_addr32[3];
+
+ if (!ipv6_addr_v4mapped(&sk->sk_v6_daddr))
+ return -EINVAL;
+ if ((daddr4 & mask) != addr4)
+ return -EINVAL;
+ }
+
+ *paddr = (union tcp_ao_addr *)&addr->s6_addr32[3];
+ *family = AF_INET;
+ return 0;
+ } else if (cmd->prefix != 0) {
+ struct in6_addr pfx;
+
+ if (ipv6_addr_any(addr) || prefix > 128)
+ return -EINVAL;
+
+ ipv6_addr_prefix(&pfx, addr, prefix);
+ if (ipv6_addr_cmp(&pfx, addr))
+ return -EINVAL;
+
+ /* Check that MKT address is consistent with socket */
+ if (!ipv6_addr_any(&sk->sk_v6_daddr) &&
+ !ipv6_prefix_equal(&sk->sk_v6_daddr, addr, prefix))
+
+ return -EINVAL;
+ } else {
+ if (!ipv6_addr_any(addr))
+ return -EINVAL;
+ }
+
+ *paddr = (union tcp_ao_addr *)addr;
+ return 0;
+}
+#else
+static int tcp_ao_verify_ipv6(struct sock *sk, struct tcp_ao_add *cmd,
+ union tcp_ao_addr **paddr,
+ unsigned short int *family)
+{
+ return -EOPNOTSUPP;
+}
+#endif
+
+static struct tcp_ao_info *setsockopt_ao_info(struct sock *sk)
+{
+ if (sk_fullsock(sk)) {
+ return rcu_dereference_protected(tcp_sk(sk)->ao_info,
+ lockdep_sock_is_held(sk));
+ } else if (sk->sk_state == TCP_TIME_WAIT) {
+ return rcu_dereference_protected(tcp_twsk(sk)->ao_info,
+ lockdep_sock_is_held(sk));
+ }
+ return ERR_PTR(-ESOCKTNOSUPPORT);
+}
+
+static struct tcp_ao_info *getsockopt_ao_info(struct sock *sk)
+{
+ if (sk_fullsock(sk))
+ return rcu_dereference(tcp_sk(sk)->ao_info);
+ else if (sk->sk_state == TCP_TIME_WAIT)
+ return rcu_dereference(tcp_twsk(sk)->ao_info);
+
+ return ERR_PTR(-ESOCKTNOSUPPORT);
+}
+
+#define TCP_AO_KEYF_ALL (TCP_AO_KEYF_IFINDEX | TCP_AO_KEYF_EXCLUDE_OPT)
+#define TCP_AO_GET_KEYF_VALID (TCP_AO_KEYF_IFINDEX)
+
+static struct tcp_ao_key *tcp_ao_key_alloc(struct sock *sk,
+ struct tcp_ao_add *cmd)
+{
+ const char *algo = cmd->alg_name;
+ unsigned int digest_size;
+ struct crypto_ahash *tfm;
+ struct tcp_ao_key *key;
+ struct tcp_sigpool hp;
+ int err, pool_id;
+ size_t size;
+
+ /* Force null-termination of alg_name */
+ cmd->alg_name[ARRAY_SIZE(cmd->alg_name) - 1] = '\0';
+
+ /* RFC5926, 3.1.1.2. KDF_AES_128_CMAC */
+ if (!strcmp("cmac(aes128)", algo))
+ algo = "cmac(aes)";
+
+ /* Full TCP header (th->doff << 2) should fit into scratch area,
+ * see tcp_ao_hash_header().
+ */
+ pool_id = tcp_sigpool_alloc_ahash(algo, 60);
+ if (pool_id < 0)
+ return ERR_PTR(pool_id);
+
+ err = tcp_sigpool_start(pool_id, &hp);
+ if (err)
+ goto err_free_pool;
+
+ tfm = crypto_ahash_reqtfm(hp.req);
+ if (crypto_ahash_alignmask(tfm) > TCP_AO_KEY_ALIGN) {
+ err = -EOPNOTSUPP;
+ goto err_pool_end;
+ }
+ digest_size = crypto_ahash_digestsize(tfm);
+ tcp_sigpool_end(&hp);
+
+ size = sizeof(struct tcp_ao_key) + (digest_size << 1);
+ key = sock_kmalloc(sk, size, GFP_KERNEL);
+ if (!key) {
+ err = -ENOMEM;
+ goto err_free_pool;
+ }
+
+ key->tcp_sigpool_id = pool_id;
+ key->digest_size = digest_size;
+ return key;
+
+err_pool_end:
+ tcp_sigpool_end(&hp);
+err_free_pool:
+ tcp_sigpool_release(pool_id);
+ return ERR_PTR(err);
+}
+
+static int tcp_ao_add_cmd(struct sock *sk, unsigned short int family,
+ sockptr_t optval, int optlen)
+{
+ struct tcp_ao_info *ao_info;
+ union tcp_ao_addr *addr;
+ struct tcp_ao_key *key;
+ struct tcp_ao_add cmd;
+ int ret, l3index = 0;
+ bool first = false;
+
+ if (optlen < sizeof(cmd))
+ return -EINVAL;
+
+ ret = copy_struct_from_sockptr(&cmd, sizeof(cmd), optval, optlen);
+ if (ret)
+ return ret;
+
+ if (cmd.keylen > TCP_AO_MAXKEYLEN)
+ return -EINVAL;
+
+ if (cmd.reserved != 0 || cmd.reserved2 != 0)
+ return -EINVAL;
+
+ if (family == AF_INET)
+ ret = tcp_ao_verify_ipv4(sk, &cmd, &addr);
+ else
+ ret = tcp_ao_verify_ipv6(sk, &cmd, &addr, &family);
+ if (ret)
+ return ret;
+
+ if (cmd.keyflags & ~TCP_AO_KEYF_ALL)
+ return -EINVAL;
+
+ if (cmd.set_current || cmd.set_rnext) {
+ if (!tcp_ao_can_set_current_rnext(sk))
+ return -EINVAL;
+ }
+
+ if (cmd.ifindex && !(cmd.keyflags & TCP_AO_KEYF_IFINDEX))
+ return -EINVAL;
+
+ /* For cmd.ifindex = 0 the key will apply to the default VRF */
+ if (cmd.keyflags & TCP_AO_KEYF_IFINDEX && cmd.ifindex) {
+ int bound_dev_if = READ_ONCE(sk->sk_bound_dev_if);
+ struct net_device *dev;
+
+ rcu_read_lock();
+ dev = dev_get_by_index_rcu(sock_net(sk), cmd.ifindex);
+ if (dev && netif_is_l3_master(dev))
+ l3index = dev->ifindex;
+ rcu_read_unlock();
+
+ if (!dev || !l3index)
+ return -EINVAL;
+
+ /* It's still possible to bind after adding keys or even
+ * re-bind to a different dev (with CAP_NET_RAW).
+ * So, no reason to return an error here; rather, try to be
+ * nice and warn the user.
+ */
+ if (bound_dev_if && bound_dev_if != cmd.ifindex)
+ net_warn_ratelimited("AO key ifindex %d != sk bound ifindex %d\n",
+ cmd.ifindex, bound_dev_if);
+ }
+
+ /* Don't allow keys for peers that have a matching TCP-MD5 key */
+ if (cmd.keyflags & TCP_AO_KEYF_IFINDEX) {
+ /* The non-_exact version of tcp_md5_do_lookup() will
+ * also match keys that aren't bound to a specific VRF
+ * (which makes them clash with an AO key when
+ * sysctl_tcp_l3dev_accept = 1).
+ */
+ if (tcp_md5_do_lookup(sk, l3index, addr, family))
+ return -EKEYREJECTED;
+ } else {
+ if (tcp_md5_do_lookup_any_l3index(sk, addr, family))
+ return -EKEYREJECTED;
+ }
+
+ ao_info = setsockopt_ao_info(sk);
+ if (IS_ERR(ao_info))
+ return PTR_ERR(ao_info);
+
+ if (!ao_info) {
+ ao_info = tcp_ao_alloc_info(GFP_KERNEL);
+ if (!ao_info)
+ return -ENOMEM;
+ first = true;
+ } else {
+ /* Check that neither RecvID nor SendID match any
+ * existing key for the peer; RFC5925 3.1:
+ * > The IDs of MKTs MUST NOT overlap where their
+ * > TCP connection identifiers overlap.
+ */
+ if (__tcp_ao_do_lookup(sk, l3index, addr, family, cmd.prefix, -1, cmd.rcvid))
+ return -EEXIST;
+ if (__tcp_ao_do_lookup(sk, l3index, addr, family,
+ cmd.prefix, cmd.sndid, -1))
+ return -EEXIST;
+ }
+
+ key = tcp_ao_key_alloc(sk, &cmd);
+ if (IS_ERR(key)) {
+ ret = PTR_ERR(key);
+ goto err_free_ao;
+ }
+
+ INIT_HLIST_NODE(&key->node);
+ memcpy(&key->addr, addr, (family == AF_INET) ? sizeof(struct in_addr) :
+ sizeof(struct in6_addr));
+ key->prefixlen = cmd.prefix;
+ key->family = family;
+ key->keyflags = cmd.keyflags;
+ key->sndid = cmd.sndid;
+ key->rcvid = cmd.rcvid;
+ key->l3index = l3index;
+ atomic64_set(&key->pkt_good, 0);
+ atomic64_set(&key->pkt_bad, 0);
+
+ ret = tcp_ao_parse_crypto(&cmd, key);
+ if (ret < 0)
+ goto err_free_sock;
+
+ if (!((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE))) {
+ tcp_ao_cache_traffic_keys(sk, ao_info, key);
+ if (first) {
+ ao_info->current_key = key;
+ ao_info->rnext_key = key;
+ }
+ }
+
+ tcp_ao_link_mkt(ao_info, key);
+ if (first) {
+ if (!static_branch_inc(&tcp_ao_needed.key)) {
+ ret = -EUSERS;
+ goto err_free_sock;
+ }
+ sk_gso_disable(sk);
+ rcu_assign_pointer(tcp_sk(sk)->ao_info, ao_info);
+ }
+
+ if (cmd.set_current)
+ WRITE_ONCE(ao_info->current_key, key);
+ if (cmd.set_rnext)
+ WRITE_ONCE(ao_info->rnext_key, key);
+ return 0;
+
+err_free_sock:
+ atomic_sub(tcp_ao_sizeof_key(key), &sk->sk_omem_alloc);
+ tcp_sigpool_release(key->tcp_sigpool_id);
+ kfree_sensitive(key);
+err_free_ao:
+ if (first)
+ kfree(ao_info);
+ return ret;
+}
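From userspace, tcp_ao_add_cmd() is reached via setsockopt(IPPROTO_TCP, TCP_AO_ADD_KEY) with a struct tcp_ao_add. The sketch below fills in the fields exercised by the checks above; it assumes the uapi definitions introduced by this series are available in <linux/tcp.h>, and the peer address, password and SendID/RecvID values are illustrative only.

/* ao_add_key.c - minimal sketch: install one TCP-AO MKT before connect().
 * Assumes <linux/tcp.h> provides struct tcp_ao_add and TCP_AO_ADD_KEY
 * (added by this series); all values below are examples.
 */
#include <arpa/inet.h>
#include <linux/tcp.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	const char secret[] = "test-ao-password";	/* example key material */
	struct tcp_ao_add ao = {};
	struct sockaddr_in *peer = (struct sockaddr_in *)&ao.addr;
	int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);

	if (fd < 0)
		return 1;

	peer->sin_family = AF_INET;
	inet_pton(AF_INET, "192.0.2.1", &peer->sin_addr);	/* peer to match */

	strcpy(ao.alg_name, "hmac(sha1)");	/* any ahash the kernel knows */
	ao.prefix = 32;				/* exact-host match */
	ao.sndid = 100;				/* SendID put in outgoing segments */
	ao.rcvid = 100;				/* RecvID matched on receive */
	ao.keylen = strlen(secret);
	memcpy(ao.key, secret, ao.keylen);	/* maclen 0 -> default 12 bytes */

	if (setsockopt(fd, IPPROTO_TCP, TCP_AO_ADD_KEY, &ao, sizeof(ao)))
		perror("TCP_AO_ADD_KEY");

	/* connect(fd, ...) would now send AO-signed segments */
	close(fd);
	return 0;
}

Matching keys have to be installed on both peers (with mirrored SendID/RecvID) before the connection is set up, since RFC 5925 fixes the TCP-AO status of a connection at the initial SYN.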
+
+static int tcp_ao_delete_key(struct sock *sk, struct tcp_ao_info *ao_info,
+ bool del_async, struct tcp_ao_key *key,
+ struct tcp_ao_key *new_current,
+ struct tcp_ao_key *new_rnext)
+{
+ int err;
+
+ hlist_del_rcu(&key->node);
+
+ /* Support for async delete on listening sockets: as they don't
+ * need current_key/rnext_key maintained, we don't need to check
+ * them and can just free all resources in RCU fashion.
+ */
+ if (del_async) {
+ atomic_sub(tcp_ao_sizeof_key(key), &sk->sk_omem_alloc);
+ call_rcu(&key->rcu, tcp_ao_key_free_rcu);
+ return 0;
+ }
+
+ /* At this moment another CPU could have looked this key up
+ * while it was unlinked from the list. Wait for an RCU grace period,
+ * after which the key is off-list and can't be looked up again;
+ * the rx path [just before RCU came] might have used it and set it
+ * as current_key (very unlikely).
+ * Free the key with the next RCU grace period (in case it was
+ * current_key before tcp_ao_current_rnext() might have
+ * changed it in a forced delete).
+ */
+ synchronize_rcu();
+ if (new_current)
+ WRITE_ONCE(ao_info->current_key, new_current);
+ if (new_rnext)
+ WRITE_ONCE(ao_info->rnext_key, new_rnext);
+
+ if (unlikely(READ_ONCE(ao_info->current_key) == key ||
+ READ_ONCE(ao_info->rnext_key) == key)) {
+ err = -EBUSY;
+ goto add_key;
+ }
+
+ atomic_sub(tcp_ao_sizeof_key(key), &sk->sk_omem_alloc);
+ call_rcu(&key->rcu, tcp_ao_key_free_rcu);
+
+ return 0;
+add_key:
+ hlist_add_head_rcu(&key->node, &ao_info->head);
+ return err;
+}
+
+#define TCP_AO_DEL_KEYF_ALL (TCP_AO_KEYF_IFINDEX)
+static int tcp_ao_del_cmd(struct sock *sk, unsigned short int family,
+ sockptr_t optval, int optlen)
+{
+ struct tcp_ao_key *key, *new_current = NULL, *new_rnext = NULL;
+ int err, addr_len, l3index = 0;
+ struct tcp_ao_info *ao_info;
+ union tcp_ao_addr *addr;
+ struct tcp_ao_del cmd;
+ __u8 prefix;
+ u16 port;
+
+ if (optlen < sizeof(cmd))
+ return -EINVAL;
+
+ err = copy_struct_from_sockptr(&cmd, sizeof(cmd), optval, optlen);
+ if (err)
+ return err;
+
+ if (cmd.reserved != 0 || cmd.reserved2 != 0)
+ return -EINVAL;
+
+ if (cmd.set_current || cmd.set_rnext) {
+ if (!tcp_ao_can_set_current_rnext(sk))
+ return -EINVAL;
+ }
+
+ if (cmd.keyflags & ~TCP_AO_DEL_KEYF_ALL)
+ return -EINVAL;
+
+ /* No sanity check for TCP_AO_KEYF_IFINDEX: if a VRF
+ * was destroyed, there should still be a way to delete keys
+ * that were bound to that l3intf. So, fail late at the lookup
+ * stage if there is no key for that ifindex.
+ */
+ if (cmd.ifindex && !(cmd.keyflags & TCP_AO_KEYF_IFINDEX))
+ return -EINVAL;
+
+ ao_info = setsockopt_ao_info(sk);
+ if (IS_ERR(ao_info))
+ return PTR_ERR(ao_info);
+ if (!ao_info)
+ return -ENOENT;
+
+ /* For sockets in TCP_CLOSED it's possible to set keys that don't
+ * match the future peer (address/VRF/etc);
+ * tcp_ao_connect_init() will choose a correct matching MKT
+ * if there is any.
+ */
+ if (cmd.set_current) {
+ new_current = tcp_ao_established_key(ao_info, cmd.current_key, -1);
+ if (!new_current)
+ return -ENOENT;
+ }
+ if (cmd.set_rnext) {
+ new_rnext = tcp_ao_established_key(ao_info, -1, cmd.rnext);
+ if (!new_rnext)
+ return -ENOENT;
+ }
+ if (cmd.del_async && sk->sk_state != TCP_LISTEN)
+ return -EINVAL;
+
+ if (family == AF_INET) {
+ struct sockaddr_in *sin = (struct sockaddr_in *)&cmd.addr;
+
+ addr = (union tcp_ao_addr *)&sin->sin_addr;
+ addr_len = sizeof(struct in_addr);
+ port = ntohs(sin->sin_port);
+ } else {
+ struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)&cmd.addr;
+ struct in6_addr *addr6 = &sin6->sin6_addr;
+
+ if (ipv6_addr_v4mapped(addr6)) {
+ addr = (union tcp_ao_addr *)&addr6->s6_addr32[3];
+ addr_len = sizeof(struct in_addr);
+ family = AF_INET;
+ } else {
+ addr = (union tcp_ao_addr *)addr6;
+ addr_len = sizeof(struct in6_addr);
+ }
+ port = ntohs(sin6->sin6_port);
+ }
+ prefix = cmd.prefix;
+
+ /* Currently matching is not performed on port (or port ranges) */
+ if (port != 0)
+ return -EINVAL;
+
+ /* We could choose a random present key here for current/rnext,
+ * but that's less predictable. Let's be strict and not
+ * allow removing a key that's in use. RFC5925 doesn't
+ * specify how to coordinate key removal, but says:
+ * "It is presumed that an MKT affecting a particular
+ * connection cannot be destroyed during an active connection"
+ */
+ hlist_for_each_entry_rcu(key, &ao_info->head, node) {
+ if (cmd.sndid != key->sndid ||
+ cmd.rcvid != key->rcvid)
+ continue;
+
+ if (family != key->family ||
+ prefix != key->prefixlen ||
+ memcmp(addr, &key->addr, addr_len))
+ continue;
+
+ if ((cmd.keyflags & TCP_AO_KEYF_IFINDEX) !=
+ (key->keyflags & TCP_AO_KEYF_IFINDEX))
+ continue;
+
+ if (key->l3index != l3index)
+ continue;
+
+ if (key == new_current || key == new_rnext)
+ continue;
+
+ return tcp_ao_delete_key(sk, ao_info, cmd.del_async, key,
+ new_current, new_rnext);
+ }
+ return -ENOENT;
+}
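The matching rules above translate into a simple userspace call: setsockopt(IPPROTO_TCP, TCP_AO_DEL_KEY) with a struct tcp_ao_del whose <addr, prefix, sndid, rcvid> tuple identifies the key. A minimal sketch, under the same uapi assumptions as the TCP_AO_ADD_KEY example above:

/* ao_del_key.c - minimal sketch: remove the MKT installed earlier.
 * The <sndid, rcvid, addr, prefix> tuple must match the key exactly,
 * and the key must not be the current/rnext key of a live connection.
 */
#include <arpa/inet.h>
#include <linux/tcp.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

static int tcp_ao_del_example(int fd)
{
	struct tcp_ao_del del = {};
	struct sockaddr_in *peer = (struct sockaddr_in *)&del.addr;

	peer->sin_family = AF_INET;
	inet_pton(AF_INET, "192.0.2.1", &peer->sin_addr);
	del.prefix = 32;
	del.sndid = 100;
	del.rcvid = 100;

	if (setsockopt(fd, IPPROTO_TCP, TCP_AO_DEL_KEY, &del, sizeof(del))) {
		perror("TCP_AO_DEL_KEY");
		return -1;
	}
	return 0;
}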
+
+/* cmd.ao_required makes a socket TCP-AO only.
+ * Don't allow any md5 keys for any l3intf on the socket together with it.
+ * Restricting it early in setsockopt() removes a check for
+ * ao_info->ao_required on inbound tcp segment fast-path.
+ */
+static int tcp_ao_required_verify(struct sock *sk)
+{
+#ifdef CONFIG_TCP_MD5SIG
+ const struct tcp_md5sig_info *md5sig;
+
+ if (!static_branch_unlikely(&tcp_md5_needed.key))
+ return 0;
+
+ md5sig = rcu_dereference_check(tcp_sk(sk)->md5sig_info,
+ lockdep_sock_is_held(sk));
+ if (!md5sig)
+ return 0;
+
+ if (rcu_dereference_check(hlist_first_rcu(&md5sig->head),
+ lockdep_sock_is_held(sk)))
+ return 1;
+#endif
+ return 0;
+}
+
+static int tcp_ao_info_cmd(struct sock *sk, unsigned short int family,
+ sockptr_t optval, int optlen)
+{
+ struct tcp_ao_key *new_current = NULL, *new_rnext = NULL;
+ struct tcp_ao_info *ao_info;
+ struct tcp_ao_info_opt cmd;
+ bool first = false;
+ int err;
+
+ if (optlen < sizeof(cmd))
+ return -EINVAL;
+
+ err = copy_struct_from_sockptr(&cmd, sizeof(cmd), optval, optlen);
+ if (err)
+ return err;
+
+ if (cmd.set_current || cmd.set_rnext) {
+ if (!tcp_ao_can_set_current_rnext(sk))
+ return -EINVAL;
+ }
+
+ if (cmd.reserved != 0 || cmd.reserved2 != 0)
+ return -EINVAL;
+
+ ao_info = setsockopt_ao_info(sk);
+ if (IS_ERR(ao_info))
+ return PTR_ERR(ao_info);
+ if (!ao_info) {
+ if (!((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE)))
+ return -EINVAL;
+ ao_info = tcp_ao_alloc_info(GFP_KERNEL);
+ if (!ao_info)
+ return -ENOMEM;
+ first = true;
+ }
+
+ if (cmd.ao_required && tcp_ao_required_verify(sk))
+ return -EKEYREJECTED;
+
+ /* For sockets in TCP_CLOSED it's possible to set keys that don't
+ * match the future peer (address/port/VRF/etc);
+ * tcp_ao_connect_init() will choose a correct matching MKT
+ * if there is any.
+ */
+ if (cmd.set_current) {
+ new_current = tcp_ao_established_key(ao_info, cmd.current_key, -1);
+ if (!new_current) {
+ err = -ENOENT;
+ goto out;
+ }
+ }
+ if (cmd.set_rnext) {
+ new_rnext = tcp_ao_established_key(ao_info, -1, cmd.rnext);
+ if (!new_rnext) {
+ err = -ENOENT;
+ goto out;
+ }
+ }
+ if (cmd.set_counters) {
+ atomic64_set(&ao_info->counters.pkt_good, cmd.pkt_good);
+ atomic64_set(&ao_info->counters.pkt_bad, cmd.pkt_bad);
+ atomic64_set(&ao_info->counters.key_not_found, cmd.pkt_key_not_found);
+ atomic64_set(&ao_info->counters.ao_required, cmd.pkt_ao_required);
+ atomic64_set(&ao_info->counters.dropped_icmp, cmd.pkt_dropped_icmp);
+ }
+
+ ao_info->ao_required = cmd.ao_required;
+ ao_info->accept_icmps = cmd.accept_icmps;
+ if (new_current)
+ WRITE_ONCE(ao_info->current_key, new_current);
+ if (new_rnext)
+ WRITE_ONCE(ao_info->rnext_key, new_rnext);
+ if (first) {
+ if (!static_branch_inc(&tcp_ao_needed.key)) {
+ err = -EUSERS;
+ goto out;
+ }
+ sk_gso_disable(sk);
+ rcu_assign_pointer(tcp_sk(sk)->ao_info, ao_info);
+ }
+ return 0;
+out:
+ if (first)
+ kfree(ao_info);
+ return err;
+}
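tcp_ao_info_cmd() is the setsockopt(TCP_AO_INFO) path. The sketch below marks a socket as TCP-AO-only via ::ao_required, which, per tcp_ao_required_verify() just below, also forbids coexisting MD5 keys; field names follow the struct used above, and the uapi header availability is assumed.

/* ao_set_info.c - minimal sketch: make a socket TCP-AO-only.
 * Assumes struct tcp_ao_info_opt and TCP_AO_INFO from this series.
 */
#include <linux/tcp.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

static int tcp_ao_require(int fd)
{
	struct tcp_ao_info_opt info = {};

	info.ao_required = 1;	/* require TCP-AO; unsigned peers get rejected */
	info.accept_icmps = 1;	/* keep processing ICMP errors for this socket */

	if (setsockopt(fd, IPPROTO_TCP, TCP_AO_INFO, &info, sizeof(info))) {
		perror("TCP_AO_INFO");
		return -1;
	}
	return 0;
}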
+
+int tcp_parse_ao(struct sock *sk, int cmd, unsigned short int family,
+ sockptr_t optval, int optlen)
+{
+ if (WARN_ON_ONCE(family != AF_INET && family != AF_INET6))
+ return -EAFNOSUPPORT;
+
+ switch (cmd) {
+ case TCP_AO_ADD_KEY:
+ return tcp_ao_add_cmd(sk, family, optval, optlen);
+ case TCP_AO_DEL_KEY:
+ return tcp_ao_del_cmd(sk, family, optval, optlen);
+ case TCP_AO_INFO:
+ return tcp_ao_info_cmd(sk, family, optval, optlen);
+ default:
+ WARN_ON_ONCE(1);
+ return -EINVAL;
+ }
+}
+
+int tcp_v4_parse_ao(struct sock *sk, int cmd, sockptr_t optval, int optlen)
+{
+ return tcp_parse_ao(sk, cmd, AF_INET, optval, optlen);
+}
+
+/* tcp_ao_copy_mkts_to_user(ao_info, optval, optlen)
+ *
+ * @ao_info: struct tcp_ao_info on the socket that
+ * socket getsockopt(TCP_AO_GET_KEYS) is executed on
+ * @optval: pointer to array of tcp_ao_getsockopt structures in user space.
+ * Must be != NULL.
+ * @optlen: pointer to size of tcp_ao_getsockopt structure.
+ * Must be != NULL.
+ *
+ * Return value: 0 on success, a negative error number otherwise.
+ *
+ * optval points to an array of tcp_ao_getsockopt structures in user space.
+ * optval[0] is used as both input and output to getsockopt. It determines
+ * which keys are returned by the kernel.
+ * optval[0].nkeys is the size of the array in user space. On return it contains
+ * the number of keys matching the search criteria.
+ * If tcp_ao_getsockopt::get_all is set, then all keys in the socket are
+ * returned, otherwise only keys matching <addr, prefix, sndid, rcvid>
+ * in optval[0] are returned.
+ * optlen is also used as both input and output. The user provides the size
+ * of struct tcp_ao_getsockopt in user space, and the kernel returns the size
+ * of the structure in kernel space.
+ * The size of struct tcp_ao_getsockopt may differ between user and kernel.
+ * There are three cases to consider:
+ * * If usize == ksize, then keys are copied verbatim.
+ * * If usize < ksize, then the userspace has passed an old struct to a
+ * newer kernel. The rest of the trailing bytes in optval[0]
+ * (ksize - usize) are interpreted as 0 by the kernel.
+ * * If usize > ksize, then the userspace has passed a new struct to an
+ * older kernel. The trailing bytes unknown to the kernel (usize - ksize)
+ * are checked to ensure they are zeroed, otherwise -E2BIG is returned.
+ * On return the kernel fills in min(usize, ksize) in each entry of the array.
+ * The layout of the fields in the user and kernel structures is expected to
+ * be the same (including in the 32bit vs 64bit case).
+ */
+static int tcp_ao_copy_mkts_to_user(struct tcp_ao_info *ao_info,
+ sockptr_t optval, sockptr_t optlen)
+{
+ struct tcp_ao_getsockopt opt_in, opt_out;
+ struct tcp_ao_key *key, *current_key;
+ bool do_address_matching = true;
+ union tcp_ao_addr *addr = NULL;
+ int err, l3index, user_len;
+ unsigned int max_keys; /* maximum number of keys to copy to user */
+ size_t out_offset = 0;
+ size_t bytes_to_write; /* number of bytes to write to user level */
+ u32 matched_keys; /* keys from ao_info matched so far */
+ int optlen_out;
+ __be16 port = 0;
+
+ if (copy_from_sockptr(&user_len, optlen, sizeof(int)))
+ return -EFAULT;
+
+ if (user_len <= 0)
+ return -EINVAL;
+
+ memset(&opt_in, 0, sizeof(struct tcp_ao_getsockopt));
+ err = copy_struct_from_sockptr(&opt_in, sizeof(opt_in),
+ optval, user_len);
+ if (err < 0)
+ return err;
+
+ if (opt_in.pkt_good || opt_in.pkt_bad)
+ return -EINVAL;
+ if (opt_in.keyflags & ~TCP_AO_GET_KEYF_VALID)
+ return -EINVAL;
+ if (opt_in.ifindex && !(opt_in.keyflags & TCP_AO_KEYF_IFINDEX))
+ return -EINVAL;
+
+ if (opt_in.reserved != 0)
+ return -EINVAL;
+
+ max_keys = opt_in.nkeys;
+ l3index = (opt_in.keyflags & TCP_AO_KEYF_IFINDEX) ? opt_in.ifindex : -1;
+
+ if (opt_in.get_all || opt_in.is_current || opt_in.is_rnext) {
+ if (opt_in.get_all && (opt_in.is_current || opt_in.is_rnext))
+ return -EINVAL;
+ do_address_matching = false;
+ }
+
+ switch (opt_in.addr.ss_family) {
+ case AF_INET: {
+ struct sockaddr_in *sin;
+ __be32 mask;
+
+ sin = (struct sockaddr_in *)&opt_in.addr;
+ port = sin->sin_port;
+ addr = (union tcp_ao_addr *)&sin->sin_addr;
+
+ if (opt_in.prefix > 32)
+ return -EINVAL;
+
+ if (ntohl(sin->sin_addr.s_addr) == INADDR_ANY &&
+ opt_in.prefix != 0)
+ return -EINVAL;
+
+ mask = inet_make_mask(opt_in.prefix);
+ if (sin->sin_addr.s_addr & ~mask)
+ return -EINVAL;
+
+ break;
+ }
+ case AF_INET6: {
+ struct sockaddr_in6 *sin6;
+ struct in6_addr *addr6;
+
+ sin6 = (struct sockaddr_in6 *)&opt_in.addr;
+ addr = (union tcp_ao_addr *)&sin6->sin6_addr;
+ addr6 = &sin6->sin6_addr;
+ port = sin6->sin6_port;
+
+ /* We don't have to convert family and @addr here for
+ * v4-mapped addresses the way key adding does:
+ * tcp_ao_key_cmp() handles it. Do the sanity checks though.
+ */
+ if (opt_in.prefix != 0) {
+ if (ipv6_addr_v4mapped(addr6)) {
+ __be32 mask, addr4 = addr6->s6_addr32[3];
+
+ if (opt_in.prefix > 32 ||
+ ntohl(addr4) == INADDR_ANY)
+ return -EINVAL;
+ mask = inet_make_mask(opt_in.prefix);
+ if (addr4 & ~mask)
+ return -EINVAL;
+ } else {
+ struct in6_addr pfx;
+
+ if (ipv6_addr_any(addr6) ||
+ opt_in.prefix > 128)
+ return -EINVAL;
+
+ ipv6_addr_prefix(&pfx, addr6, opt_in.prefix);
+ if (ipv6_addr_cmp(&pfx, addr6))
+ return -EINVAL;
+ }
+ } else if (!ipv6_addr_any(addr6)) {
+ return -EINVAL;
+ }
+ break;
+ }
+ case 0:
+ if (!do_address_matching)
+ break;
+ fallthrough;
+ default:
+ return -EAFNOSUPPORT;
+ }
+
+ if (!do_address_matching) {
+ /* We could just ignore those, but let's do stricter checks */
+ if (addr || port)
+ return -EINVAL;
+ if (opt_in.prefix || opt_in.sndid || opt_in.rcvid)
+ return -EINVAL;
+ }
+
+ bytes_to_write = min_t(int, user_len, sizeof(struct tcp_ao_getsockopt));
+ matched_keys = 0;
+ /* May change in RX while we're dumping; pre-fetch it */
+ current_key = READ_ONCE(ao_info->current_key);
+
+ hlist_for_each_entry_rcu(key, &ao_info->head, node) {
+ if (opt_in.get_all)
+ goto match;
+
+ if (opt_in.is_current || opt_in.is_rnext) {
+ if (opt_in.is_current && key == current_key)
+ goto match;
+ if (opt_in.is_rnext && key == ao_info->rnext_key)
+ goto match;
+ continue;
+ }
+
+ if (tcp_ao_key_cmp(key, l3index, addr, opt_in.prefix,
+ opt_in.addr.ss_family,
+ opt_in.sndid, opt_in.rcvid) != 0)
+ continue;
+match:
+ matched_keys++;
+ if (matched_keys > max_keys)
+ continue;
+
+ memset(&opt_out, 0, sizeof(struct tcp_ao_getsockopt));
+
+ if (key->family == AF_INET) {
+ struct sockaddr_in *sin_out = (struct sockaddr_in *)&opt_out.addr;
+
+ sin_out->sin_family = key->family;
+ sin_out->sin_port = 0;
+ memcpy(&sin_out->sin_addr, &key->addr, sizeof(struct in_addr));
+ } else {
+ struct sockaddr_in6 *sin6_out = (struct sockaddr_in6 *)&opt_out.addr;
+
+ sin6_out->sin6_family = key->family;
+ sin6_out->sin6_port = 0;
+ memcpy(&sin6_out->sin6_addr, &key->addr, sizeof(struct in6_addr));
+ }
+ opt_out.sndid = key->sndid;
+ opt_out.rcvid = key->rcvid;
+ opt_out.prefix = key->prefixlen;
+ opt_out.keyflags = key->keyflags;
+ opt_out.is_current = (key == current_key);
+ opt_out.is_rnext = (key == ao_info->rnext_key);
+ opt_out.nkeys = 0;
+ opt_out.maclen = key->maclen;
+ opt_out.keylen = key->keylen;
+ opt_out.ifindex = key->l3index;
+ opt_out.pkt_good = atomic64_read(&key->pkt_good);
+ opt_out.pkt_bad = atomic64_read(&key->pkt_bad);
+ memcpy(&opt_out.key, key->key, key->keylen);
+ tcp_sigpool_algo(key->tcp_sigpool_id, opt_out.alg_name, 64);
+
+ /* Copy key to user */
+ if (copy_to_sockptr_offset(optval, out_offset,
+ &opt_out, bytes_to_write))
+ return -EFAULT;
+ out_offset += user_len;
+ }
+
+ optlen_out = (int)sizeof(struct tcp_ao_getsockopt);
+ if (copy_to_sockptr(optlen, &optlen_out, sizeof(int)))
+ return -EFAULT;
+
+ out_offset = offsetof(struct tcp_ao_getsockopt, nkeys);
+ if (copy_to_sockptr_offset(optval, out_offset,
+ &matched_keys, sizeof(u32)))
+ return -EFAULT;
+
+ return 0;
+}
+
+int tcp_ao_get_mkts(struct sock *sk, sockptr_t optval, sockptr_t optlen)
+{
+ struct tcp_ao_info *ao_info;
+
+ ao_info = setsockopt_ao_info(sk);
+ if (IS_ERR(ao_info))
+ return PTR_ERR(ao_info);
+ if (!ao_info)
+ return -ENOENT;
+
+ return tcp_ao_copy_mkts_to_user(ao_info, optval, optlen);
+}
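Following the dump semantics documented above, a userspace caller passes an array of struct tcp_ao_getsockopt where element 0 carries the filter and the array size, and optlen is the size of a single element. A minimal sketch, under the same uapi assumptions as the earlier examples:

/* ao_dump_keys.c - minimal sketch of the getsockopt(TCP_AO_GET_KEYS)
 * dump interface: optval[0] is the filter, optlen is one element's size.
 */
#include <linux/tcp.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

#define NKEYS 8

static void tcp_ao_dump_keys(int fd)
{
	struct tcp_ao_getsockopt keys[NKEYS];
	socklen_t optlen = sizeof(keys[0]);
	unsigned int i, matched;

	memset(keys, 0, sizeof(keys));
	keys[0].nkeys = NKEYS;		/* room in this array */
	keys[0].get_all = 1;		/* no <addr, prefix, sndid, rcvid> filter */

	if (getsockopt(fd, IPPROTO_TCP, TCP_AO_GET_KEYS, keys, &optlen)) {
		perror("TCP_AO_GET_KEYS");
		return;
	}

	matched = keys[0].nkeys;	/* total matches, may exceed NKEYS */
	for (i = 0; i < matched && i < NKEYS; i++)
		printf("key sndid=%u rcvid=%u alg=%s%s\n",
		       keys[i].sndid, keys[i].rcvid, keys[i].alg_name,
		       keys[i].is_current ? " [current]" : "");
}

If more keys matched than fit, keys[0].nkeys on return is larger than the array size and the caller can retry with a bigger array.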
+
+int tcp_ao_get_sock_info(struct sock *sk, sockptr_t optval, sockptr_t optlen)
+{
+ struct tcp_ao_info_opt out, in = {};
+ struct tcp_ao_key *current_key;
+ struct tcp_ao_info *ao;
+ int err, len;
+
+ if (copy_from_sockptr(&len, optlen, sizeof(int)))
+ return -EFAULT;
+
+ if (len <= 0)
+ return -EINVAL;
+
+ /* Copying this "in" only to check ::reserved and ::reserved2,
+ * which may be needed to extend (struct tcp_ao_info_opt) and
+ * what getsockopt() provides in the future.
+ */
+ err = copy_struct_from_sockptr(&in, sizeof(in), optval, len);
+ if (err)
+ return err;
+
+ if (in.reserved != 0 || in.reserved2 != 0)
+ return -EINVAL;
+
+ ao = setsockopt_ao_info(sk);
+ if (IS_ERR(ao))
+ return PTR_ERR(ao);
+ if (!ao)
+ return -ENOENT;
+
+ memset(&out, 0, sizeof(out));
+ out.ao_required = ao->ao_required;
+ out.accept_icmps = ao->accept_icmps;
+ out.pkt_good = atomic64_read(&ao->counters.pkt_good);
+ out.pkt_bad = atomic64_read(&ao->counters.pkt_bad);
+ out.pkt_key_not_found = atomic64_read(&ao->counters.key_not_found);
+ out.pkt_ao_required = atomic64_read(&ao->counters.ao_required);
+ out.pkt_dropped_icmp = atomic64_read(&ao->counters.dropped_icmp);
+
+ current_key = READ_ONCE(ao->current_key);
+ if (current_key) {
+ out.set_current = 1;
+ out.current_key = current_key->sndid;
+ }
+ if (ao->rnext_key) {
+ out.set_rnext = 1;
+ out.rnext = ao->rnext_key->rcvid;
+ }
+
+ if (copy_to_sockptr(optval, &out, min_t(int, len, sizeof(out))))
+ return -EFAULT;
+
+ return 0;
+}
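The getsockopt(TCP_AO_INFO) counterpart above reports the per-socket counters; a short sketch of reading them, with the same uapi assumptions as before:

/* ao_get_info.c - minimal sketch: read the per-socket TCP-AO counters
 * filled in by tcp_ao_get_sock_info() above.
 */
#include <linux/tcp.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

static void tcp_ao_print_counters(int fd)
{
	struct tcp_ao_info_opt info = {};
	socklen_t len = sizeof(info);

	if (getsockopt(fd, IPPROTO_TCP, TCP_AO_INFO, &info, &len)) {
		perror("TCP_AO_INFO");
		return;
	}
	printf("good %llu bad %llu key-not-found %llu ao-required %llu dropped-icmp %llu\n",
	       (unsigned long long)info.pkt_good,
	       (unsigned long long)info.pkt_bad,
	       (unsigned long long)info.pkt_key_not_found,
	       (unsigned long long)info.pkt_ao_required,
	       (unsigned long long)info.pkt_dropped_icmp);
}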
+
+int tcp_ao_set_repair(struct sock *sk, sockptr_t optval, unsigned int optlen)
+{
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct tcp_ao_repair cmd;
+ struct tcp_ao_key *key;
+ struct tcp_ao_info *ao;
+ int err;
+
+ if (optlen < sizeof(cmd))
+ return -EINVAL;
+
+ err = copy_struct_from_sockptr(&cmd, sizeof(cmd), optval, optlen);
+ if (err)
+ return err;
+
+ if (!tp->repair)
+ return -EPERM;
+
+ ao = setsockopt_ao_info(sk);
+ if (IS_ERR(ao))
+ return PTR_ERR(ao);
+ if (!ao)
+ return -ENOENT;
+
+ WRITE_ONCE(ao->lisn, cmd.snt_isn);
+ WRITE_ONCE(ao->risn, cmd.rcv_isn);
+ WRITE_ONCE(ao->snd_sne, cmd.snd_sne);
+ WRITE_ONCE(ao->rcv_sne, cmd.rcv_sne);
+
+ hlist_for_each_entry_rcu(key, &ao->head, node)
+ tcp_ao_cache_traffic_keys(sk, ao, key);
+
+ return 0;
+}
+
+int tcp_ao_get_repair(struct sock *sk, sockptr_t optval, sockptr_t optlen)
+{
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct tcp_ao_repair opt;
+ struct tcp_ao_info *ao;
+ int len;
+
+ if (copy_from_sockptr(&len, optlen, sizeof(int)))
+ return -EFAULT;
+
+ if (len <= 0)
+ return -EINVAL;
+
+ if (!tp->repair)
+ return -EPERM;
+
+ rcu_read_lock();
+ ao = getsockopt_ao_info(sk);
+ if (IS_ERR_OR_NULL(ao)) {
+ rcu_read_unlock();
+ return ao ? PTR_ERR(ao) : -ENOENT;
+ }
+
+ opt.snt_isn = ao->lisn;
+ opt.rcv_isn = ao->risn;
+ opt.snd_sne = READ_ONCE(ao->snd_sne);
+ opt.rcv_sne = READ_ONCE(ao->rcv_sne);
+ rcu_read_unlock();
+
+ if (copy_to_sockptr(optval, &opt, min_t(int, len, sizeof(opt))))
+ return -EFAULT;
+ return 0;
+}
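TCP_AO_REPAIR pairs with the TCP_REPAIR infrastructure for checkpoint/restore: both ISNs and both SNEs can be read and written only while the socket is in repair mode. A minimal read-side sketch, assuming struct tcp_ao_repair carries the ISNs in network byte order as the code above stores them:

/* ao_repair.c - minimal sketch: dump AO sequence-extension state for
 * checkpoint/restore. The socket must be in TCP_REPAIR mode first,
 * which needs CAP_NET_ADMIN.
 */
#include <arpa/inet.h>
#include <linux/tcp.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>

static void tcp_ao_dump_repair(int fd)
{
	struct tcp_ao_repair rep;
	socklen_t len = sizeof(rep);
	int on = 1;

	if (setsockopt(fd, IPPROTO_TCP, TCP_REPAIR, &on, sizeof(on))) {
		perror("TCP_REPAIR");
		return;
	}
	if (getsockopt(fd, IPPROTO_TCP, TCP_AO_REPAIR, &rep, &len)) {
		perror("TCP_AO_REPAIR");
	} else {
		printf("snt_isn %u rcv_isn %u snd_sne %u rcv_sne %u\n",
		       ntohl(rep.snt_isn), ntohl(rep.rcv_isn),
		       rep.snd_sne, rep.rcv_sne);
	}
	on = 0;
	setsockopt(fd, IPPROTO_TCP, TCP_REPAIR, &on, sizeof(on));
}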
diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
index 146792cd26fe..22358032dd48 100644
--- a/net/ipv4/tcp_bbr.c
+++ b/net/ipv4/tcp_bbr.c
@@ -258,7 +258,7 @@ static unsigned long bbr_bw_to_pacing_rate(struct sock *sk, u32 bw, int gain)
u64 rate = bw;
rate = bbr_rate_bytes_per_sec(sk, rate, gain);
- rate = min_t(u64, rate, sk->sk_max_pacing_rate);
+ rate = min_t(u64, rate, READ_ONCE(sk->sk_max_pacing_rate));
return rate;
}
@@ -278,7 +278,8 @@ static void bbr_init_pacing_rate_from_rtt(struct sock *sk)
}
bw = (u64)tcp_snd_cwnd(tp) * BW_UNIT;
do_div(bw, rtt_us);
- sk->sk_pacing_rate = bbr_bw_to_pacing_rate(sk, bw, bbr_high_gain);
+ WRITE_ONCE(sk->sk_pacing_rate,
+ bbr_bw_to_pacing_rate(sk, bw, bbr_high_gain));
}
/* Pace using current bw estimate and a gain factor. */
@@ -290,14 +291,14 @@ static void bbr_set_pacing_rate(struct sock *sk, u32 bw, int gain)
if (unlikely(!bbr->has_seen_rtt && tp->srtt_us))
bbr_init_pacing_rate_from_rtt(sk);
- if (bbr_full_bw_reached(sk) || rate > sk->sk_pacing_rate)
- sk->sk_pacing_rate = rate;
+ if (bbr_full_bw_reached(sk) || rate > READ_ONCE(sk->sk_pacing_rate))
+ WRITE_ONCE(sk->sk_pacing_rate, rate);
}
/* override sysctl_tcp_min_tso_segs */
__bpf_kfunc static u32 bbr_min_tso_segs(struct sock *sk)
{
- return sk->sk_pacing_rate < (bbr_min_tso_rate >> 3) ? 1 : 2;
+ return READ_ONCE(sk->sk_pacing_rate) < (bbr_min_tso_rate >> 3) ? 1 : 2;
}
static u32 bbr_tso_segs_goal(struct sock *sk)
@@ -309,7 +310,7 @@ static u32 bbr_tso_segs_goal(struct sock *sk)
* driver provided sk_gso_max_size.
*/
bytes = min_t(unsigned long,
- sk->sk_pacing_rate >> READ_ONCE(sk->sk_pacing_shift),
+ READ_ONCE(sk->sk_pacing_rate) >> READ_ONCE(sk->sk_pacing_shift),
GSO_LEGACY_MAX_SIZE - 1 - MAX_TCP_HEADER);
segs = max_t(u32, bytes / tp->mss_cache, bbr_min_tso_segs(sk));
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 804821d6bd4d..50aaa1527150 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -693,6 +693,23 @@ new_measure:
tp->rcv_rtt_est.time = tp->tcp_mstamp;
}
+static s32 tcp_rtt_tsopt_us(const struct tcp_sock *tp)
+{
+ u32 delta, delta_us;
+
+ delta = tcp_time_stamp_ts(tp) - tp->rx_opt.rcv_tsecr;
+ if (tp->tcp_usec_ts)
+ return delta;
+
+ if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) {
+ if (!delta)
+ delta = 1;
+ delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
+ return delta_us;
+ }
+ return -1;
+}
+
static inline void tcp_rcv_rtt_measure_ts(struct sock *sk,
const struct sk_buff *skb)
{
@@ -704,15 +721,10 @@ static inline void tcp_rcv_rtt_measure_ts(struct sock *sk,
if (TCP_SKB_CB(skb)->end_seq -
TCP_SKB_CB(skb)->seq >= inet_csk(sk)->icsk_ack.rcv_mss) {
- u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr;
- u32 delta_us;
-
- if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) {
- if (!delta)
- delta = 1;
- delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
- tcp_rcv_rtt_update(tp, delta_us, 0);
- }
+ s32 delta = tcp_rtt_tsopt_us(tp);
+
+ if (delta >= 0)
+ tcp_rcv_rtt_update(tp, delta, 0);
}
}
@@ -778,6 +790,16 @@ new_measure:
tp->rcvq_space.time = tp->tcp_mstamp;
}
+static void tcp_save_lrcv_flowlabel(struct sock *sk, const struct sk_buff *skb)
+{
+#if IS_ENABLED(CONFIG_IPV6)
+ struct inet_connection_sock *icsk = inet_csk(sk);
+
+ if (skb->protocol == htons(ETH_P_IPV6))
+ icsk->icsk_ack.lrcv_flowlabel = ntohl(ip6_flowlabel(ipv6_hdr(skb)));
+#endif
+}
+
/* There is something which you must keep in mind when you analyze the
* behavior of the tp->ato delayed ack timeout interval. When a
* connection starts up, we want to ack as quickly as possible. The
@@ -826,6 +848,7 @@ static void tcp_event_data_recv(struct sock *sk, struct sk_buff *skb)
}
}
icsk->icsk_ack.lrcvtime = now;
+ tcp_save_lrcv_flowlabel(sk, skb);
tcp_ecn_check_ce(sk, skb);
@@ -940,8 +963,8 @@ static void tcp_update_pacing_rate(struct sock *sk)
* without any lock. We want to make sure compiler wont store
* intermediate values in this location.
*/
- WRITE_ONCE(sk->sk_pacing_rate, min_t(u64, rate,
- sk->sk_max_pacing_rate));
+ WRITE_ONCE(sk->sk_pacing_rate,
+ min_t(u64, rate, READ_ONCE(sk->sk_max_pacing_rate)));
}
/* Calculate rto without backoff. This is the second half of Van Jacobson's
@@ -2101,6 +2124,10 @@ void tcp_clear_retrans(struct tcp_sock *tp)
tp->undo_marker = 0;
tp->undo_retrans = -1;
tp->sacked_out = 0;
+ tp->rto_stamp = 0;
+ tp->total_rto = 0;
+ tp->total_rto_recoveries = 0;
+ tp->total_rto_time = 0;
}
static inline void tcp_init_undo(struct tcp_sock *tp)
@@ -2428,7 +2455,7 @@ static bool tcp_skb_spurious_retrans(const struct tcp_sock *tp,
const struct sk_buff *skb)
{
return (TCP_SKB_CB(skb)->sacked & TCPCB_RETRANS) &&
- tcp_tsopt_ecr_before(tp, tcp_skb_timestamp(skb));
+ tcp_tsopt_ecr_before(tp, tcp_skb_timestamp_ts(tp->tcp_usec_ts, skb));
}
/* Nothing was retransmitted or returned timestamp is less
@@ -2839,6 +2866,14 @@ void tcp_enter_recovery(struct sock *sk, bool ece_ack)
tcp_set_ca_state(sk, TCP_CA_Recovery);
}
+static void tcp_update_rto_time(struct tcp_sock *tp)
+{
+ if (tp->rto_stamp) {
+ tp->total_rto_time += tcp_time_stamp_ms(tp) - tp->rto_stamp;
+ tp->rto_stamp = 0;
+ }
+}
+
/* Process an ACK in CA_Loss state. Move to CA_Open if lost data are
* recovered or spurious. Otherwise retransmits more on partial ACKs.
*/
@@ -3043,6 +3078,8 @@ static void tcp_fastretrans_alert(struct sock *sk, const u32 prior_snd_una,
break;
case TCP_CA_Loss:
tcp_process_loss(sk, flag, num_dupack, rexmit);
+ if (icsk->icsk_ca_state != TCP_CA_Loss)
+ tcp_update_rto_time(tp);
tcp_identify_packet_loss(sk, ack_flag);
if (!(icsk->icsk_ca_state == TCP_CA_Open ||
(*ack_flag & FLAG_LOST_RETRANS)))
@@ -3122,17 +3159,10 @@ static bool tcp_ack_update_rtt(struct sock *sk, const int flag,
* left edge of the send window.
* See draft-ietf-tcplw-high-performance-00, section 3.3.
*/
- if (seq_rtt_us < 0 && tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr &&
- flag & FLAG_ACKED) {
- u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr;
-
- if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) {
- if (!delta)
- delta = 1;
- seq_rtt_us = delta * (USEC_PER_SEC / TCP_TS_HZ);
- ca_rtt_us = seq_rtt_us;
- }
- }
+ if (seq_rtt_us < 0 && tp->rx_opt.saw_tstamp &&
+ tp->rx_opt.rcv_tsecr && flag & FLAG_ACKED)
+ seq_rtt_us = ca_rtt_us = tcp_rtt_tsopt_us(tp);
+
rs->rtt_us = ca_rtt_us; /* RTT of last (S)ACKed packet (or -1) */
if (seq_rtt_us < 0)
return false;
@@ -3542,6 +3572,21 @@ static inline bool tcp_may_update_window(const struct tcp_sock *tp,
(ack_seq == tp->snd_wl1 && (nwin > tp->snd_wnd || !nwin));
}
+static void tcp_snd_sne_update(struct tcp_sock *tp, u32 ack)
+{
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_info *ao;
+
+ if (!static_branch_unlikely(&tcp_ao_needed.key))
+ return;
+
+ ao = rcu_dereference_protected(tp->ao_info,
+ lockdep_sock_is_held((struct sock *)tp));
+ if (ao && ack < tp->snd_una)
+ ao->snd_sne++;
+#endif
+}
+
/* If we update tp->snd_una, also update tp->bytes_acked */
static void tcp_snd_una_update(struct tcp_sock *tp, u32 ack)
{
@@ -3549,9 +3594,25 @@ static void tcp_snd_una_update(struct tcp_sock *tp, u32 ack)
sock_owned_by_me((struct sock *)tp);
tp->bytes_acked += delta;
+ tcp_snd_sne_update(tp, ack);
tp->snd_una = ack;
}
+static void tcp_rcv_sne_update(struct tcp_sock *tp, u32 seq)
+{
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_info *ao;
+
+ if (!static_branch_unlikely(&tcp_ao_needed.key))
+ return;
+
+ ao = rcu_dereference_protected(tp->ao_info,
+ lockdep_sock_is_held((struct sock *)tp));
+ if (ao && seq < tp->rcv_nxt)
+ ao->rcv_sne++;
+#endif
+}
+
/* If we update tp->rcv_nxt, also update tp->bytes_received */
static void tcp_rcv_nxt_update(struct tcp_sock *tp, u32 seq)
{
@@ -3559,6 +3620,7 @@ static void tcp_rcv_nxt_update(struct tcp_sock *tp, u32 seq)
sock_owned_by_me((struct sock *)tp);
tp->bytes_received += delta;
+ tcp_rcv_sne_update(tp, seq);
WRITE_ONCE(tp->rcv_nxt, seq);
}
@@ -4225,39 +4287,58 @@ static bool tcp_fast_parse_options(const struct net *net,
return true;
}
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
/*
- * Parse MD5 Signature option
+ * Parse Signature options
*/
-const u8 *tcp_parse_md5sig_option(const struct tcphdr *th)
+int tcp_do_parse_auth_options(const struct tcphdr *th,
+ const u8 **md5_hash, const u8 **ao_hash)
{
int length = (th->doff << 2) - sizeof(*th);
const u8 *ptr = (const u8 *)(th + 1);
+ unsigned int minlen = TCPOLEN_MD5SIG;
+
+ if (IS_ENABLED(CONFIG_TCP_AO))
+ minlen = sizeof(struct tcp_ao_hdr) + 1;
+
+ *md5_hash = NULL;
+ *ao_hash = NULL;
/* If not enough data remaining, we can short cut */
- while (length >= TCPOLEN_MD5SIG) {
+ while (length >= minlen) {
int opcode = *ptr++;
int opsize;
switch (opcode) {
case TCPOPT_EOL:
- return NULL;
+ return 0;
case TCPOPT_NOP:
length--;
continue;
default:
opsize = *ptr++;
if (opsize < 2 || opsize > length)
- return NULL;
- if (opcode == TCPOPT_MD5SIG)
- return opsize == TCPOLEN_MD5SIG ? ptr : NULL;
+ return -EINVAL;
+ if (opcode == TCPOPT_MD5SIG) {
+ if (opsize != TCPOLEN_MD5SIG)
+ return -EINVAL;
+ if (unlikely(*md5_hash || *ao_hash))
+ return -EEXIST;
+ *md5_hash = ptr;
+ } else if (opcode == TCPOPT_AO) {
+ if (opsize <= sizeof(struct tcp_ao_hdr))
+ return -EINVAL;
+ if (unlikely(*md5_hash || *ao_hash))
+ return -EEXIST;
+ *ao_hash = ptr;
+ }
}
ptr += opsize - 2;
length -= opsize;
}
- return NULL;
+ return 0;
}
-EXPORT_SYMBOL(tcp_parse_md5sig_option);
+EXPORT_SYMBOL(tcp_do_parse_auth_options);
#endif
/* Sorry, PAWS as specified is broken wrt. pure-ACKs -DaveM
@@ -4500,12 +4581,23 @@ static void tcp_rcv_spurious_retrans(struct sock *sk, const struct sk_buff *skb)
{
/* When the ACK path fails or drops most ACKs, the sender would
* timeout and spuriously retransmit the same segment repeatedly.
- * The receiver remembers and reflects via DSACKs. Leverage the
- * DSACK state and change the txhash to re-route speculatively.
+ * If it seems our ACKs are not reaching the other side,
+ * based on receiving a duplicate data segment with new flowlabel
+ * (suggesting the sender suffered an RTO), and we are not already
+ * repathing due to our own RTO, then rehash the socket to repath our
+ * packets.
*/
- if (TCP_SKB_CB(skb)->seq == tcp_sk(sk)->duplicate_sack[0].start_seq &&
+#if IS_ENABLED(CONFIG_IPV6)
+ if (inet_csk(sk)->icsk_ca_state != TCP_CA_Loss &&
+ skb->protocol == htons(ETH_P_IPV6) &&
+ (tcp_sk(sk)->inet_conn.icsk_ack.lrcv_flowlabel !=
+ ntohl(ip6_flowlabel(ipv6_hdr(skb)))) &&
sk_rethink_txhash(sk))
NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPDUPLICATEDATAREHASH);
+
+ /* Save last flowlabel after a spurious retrans. */
+ tcp_save_lrcv_flowlabel(sk, skb);
+#endif
}
static void tcp_send_dupack(struct sock *sk, const struct sk_buff *skb)
@@ -4822,6 +4914,7 @@ static void tcp_data_queue_ofo(struct sock *sk, struct sk_buff *skb)
u32 seq, end_seq;
bool fragstolen;
+ tcp_save_lrcv_flowlabel(sk, skb);
tcp_ecn_check_ce(sk, skb);
if (unlikely(tcp_try_rmem_schedule(sk, skb, skb->truesize))) {
@@ -5567,6 +5660,14 @@ static void __tcp_ack_snd_check(struct sock *sk, int ofo_possible)
tcp_in_quickack_mode(sk) ||
/* Protocol state mandates a one-time immediate ACK */
inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOW) {
+ /* If we are running from __release_sock() in user context,
+ * defer the ACK until tcp_release_cb().
+ */
+ if (sock_owned_by_user_nocheck(sk) &&
+ READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_backlog_ack_defer)) {
+ set_bit(TCP_ACK_DEFERRED, &sk->sk_tsq_flags);
+ return;
+ }
send_now:
tcp_send_ack(sk);
return;
@@ -6101,6 +6202,7 @@ void tcp_finish_connect(struct sock *sk, struct sk_buff *skb)
struct tcp_sock *tp = tcp_sk(sk);
struct inet_connection_sock *icsk = inet_csk(sk);
+ tcp_ao_finish_connect(sk, skb);
tcp_set_state(sk, TCP_ESTABLISHED);
icsk->icsk_ack.lrcvtime = tcp_jiffies32;
@@ -6249,7 +6351,7 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb,
if (tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr &&
!between(tp->rx_opt.rcv_tsecr, tp->retrans_stamp,
- tcp_time_stamp(tp))) {
+ tcp_time_stamp_ts(tp))) {
NET_INC_STATS(sock_net(sk),
LINUX_MIB_PAWSACTIVEREJECTED);
goto reset_and_undo;
@@ -6386,6 +6488,16 @@ consume:
* simultaneous connect with crossed SYNs.
* Particularly, it can be connect to self.
*/
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_info *ao;
+
+ ao = rcu_dereference_protected(tp->ao_info,
+ lockdep_sock_is_held(sk));
+ if (ao) {
+ WRITE_ONCE(ao->risn, th->seq);
+ ao->rcv_sne = 0;
+ }
+#endif
tcp_set_state(sk, TCP_SYN_RECV);
if (tp->rx_opt.saw_tstamp) {
@@ -6450,22 +6562,24 @@ reset_and_undo:
static void tcp_rcv_synrecv_state_fastopen(struct sock *sk)
{
+ struct tcp_sock *tp = tcp_sk(sk);
struct request_sock *req;
/* If we are still handling the SYNACK RTO, see if timestamp ECR allows
* undo. If peer SACKs triggered fast recovery, we can't undo here.
*/
- if (inet_csk(sk)->icsk_ca_state == TCP_CA_Loss)
- tcp_try_undo_loss(sk, false);
+ if (inet_csk(sk)->icsk_ca_state == TCP_CA_Loss && !tp->packets_out)
+ tcp_try_undo_recovery(sk);
/* Reset rtx states to prevent spurious retransmits_timed_out() */
- tcp_sk(sk)->retrans_stamp = 0;
+ tcp_update_rto_time(tp);
+ tp->retrans_stamp = 0;
inet_csk(sk)->icsk_retransmits = 0;
/* Once we leave TCP_SYN_RECV or TCP_FIN_WAIT_1,
* we no longer need req so release it.
*/
- req = rcu_dereference_protected(tcp_sk(sk)->fastopen_rsk,
+ req = rcu_dereference_protected(tp->fastopen_rsk,
lockdep_sock_is_held(sk));
reqsk_fastopen_remove(sk, req, false);
@@ -6596,6 +6710,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
skb);
WRITE_ONCE(tp->copied_seq, tp->rcv_nxt);
}
+ tcp_ao_established(sk);
smp_mb();
tcp_set_state(sk, TCP_ESTABLISHED);
sk->sk_state_change(sk);
@@ -6972,6 +7087,10 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
struct flowi fl;
u8 syncookies;
+#ifdef CONFIG_TCP_AO
+ const struct tcp_ao_hdr *aoh;
+#endif
+
syncookies = READ_ONCE(net->ipv4.sysctl_tcp_syncookies);
/* TW buckets are converted to open requests without
@@ -6996,6 +7115,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
req->syncookie = want_cookie;
tcp_rsk(req)->af_specific = af_ops;
tcp_rsk(req)->ts_off = 0;
+ tcp_rsk(req)->req_usec_ts = -1;
#if IS_ENABLED(CONFIG_MPTCP)
tcp_rsk(req)->is_mptcp = 0;
#endif
@@ -7057,6 +7177,17 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
inet_rsk(req)->ecn_ok = 0;
}
+#ifdef CONFIG_TCP_AO
+ if (tcp_parse_auth_options(tcp_hdr(skb), NULL, &aoh))
+ goto drop_and_release; /* Invalid TCP options */
+ if (aoh) {
+ tcp_rsk(req)->maclen = aoh->length - sizeof(struct tcp_ao_hdr);
+ tcp_rsk(req)->ao_rcv_next = aoh->keyid;
+ tcp_rsk(req)->ao_keyid = aoh->rnext_keyid;
+ } else {
+ tcp_rsk(req)->maclen = 0;
+ }
+#endif
tcp_rsk(req)->snt_isn = isn;
tcp_rsk(req)->txhash = net_tx_rndhash();
tcp_rsk(req)->syn_tos = TCP_SKB_CB(skb)->ip_dsfield;
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 4167e8a48b60..5f693bbd578d 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -194,7 +194,7 @@ static int tcp_v4_pre_connect(struct sock *sk, struct sockaddr *uaddr,
sock_owned_by_me(sk);
- return BPF_CGROUP_RUN_PROG_INET4_CONNECT(sk, uaddr);
+ return BPF_CGROUP_RUN_PROG_INET4_CONNECT(sk, uaddr, &addr_len);
}
/* This will initiate an outgoing connection. */
@@ -296,6 +296,7 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
rt = NULL;
goto failure;
}
+ tp->tcp_usec_ts = dst_tcp_usec_ts(&rt->dst);
/* OK, now commit destination to socket. */
sk->sk_gso_type = SKB_GSO_TCPV4;
sk_setup_caps(sk, &rt->dst);
@@ -493,6 +494,8 @@ int tcp_v4_err(struct sk_buff *skb, u32 info)
return -ENOENT;
}
if (sk->sk_state == TCP_TIME_WAIT) {
+ /* To increase the counter of ignored icmps for TCP-AO */
+ tcp_ao_ignore_icmp(sk, AF_INET, type, code);
inet_twsk_put(inet_twsk(sk));
return 0;
}
@@ -506,6 +509,11 @@ int tcp_v4_err(struct sk_buff *skb, u32 info)
return 0;
}
+ if (tcp_ao_ignore_icmp(sk, AF_INET, type, code)) {
+ sock_put(sk);
+ return 0;
+ }
+
bh_lock_sock(sk);
/* If too many ICMPs get dropped on busy
* servers this needs to be solved differently.
@@ -656,6 +664,52 @@ void tcp_v4_send_check(struct sock *sk, struct sk_buff *skb)
}
EXPORT_SYMBOL(tcp_v4_send_check);
+#define REPLY_OPTIONS_LEN (MAX_TCP_OPTION_SPACE / sizeof(__be32))
+
+static bool tcp_v4_ao_sign_reset(const struct sock *sk, struct sk_buff *skb,
+ const struct tcp_ao_hdr *aoh,
+ struct ip_reply_arg *arg, struct tcphdr *reply,
+ __be32 reply_options[REPLY_OPTIONS_LEN])
+{
+#ifdef CONFIG_TCP_AO
+ int sdif = tcp_v4_sdif(skb);
+ int dif = inet_iif(skb);
+ int l3index = sdif ? dif : 0;
+ bool allocated_traffic_key;
+ struct tcp_ao_key *key;
+ char *traffic_key;
+ bool drop = true;
+ u32 ao_sne = 0;
+ u8 keyid;
+
+ rcu_read_lock();
+ if (tcp_ao_prepare_reset(sk, skb, aoh, l3index, ntohl(reply->seq),
+ &key, &traffic_key, &allocated_traffic_key,
+ &keyid, &ao_sne))
+ goto out;
+
+ reply_options[0] = htonl((TCPOPT_AO << 24) | (tcp_ao_len(key) << 16) |
+ (aoh->rnext_keyid << 8) | keyid);
+ arg->iov[0].iov_len += round_up(tcp_ao_len(key), 4);
+ reply->doff = arg->iov[0].iov_len / 4;
+
+ if (tcp_ao_hash_hdr(AF_INET, (char *)&reply_options[1],
+ key, traffic_key,
+ (union tcp_ao_addr *)&ip_hdr(skb)->saddr,
+ (union tcp_ao_addr *)&ip_hdr(skb)->daddr,
+ reply, ao_sne))
+ goto out;
+ drop = false;
+out:
+ rcu_read_unlock();
+ if (allocated_traffic_key)
+ kfree(traffic_key);
+ return drop;
+#else
+ return true;
+#endif
+}
+
/*
* This routine will send an RST to the other tcp.
*
@@ -669,26 +723,21 @@ EXPORT_SYMBOL(tcp_v4_send_check);
* Exception: precedence violation. We do not implement it in any case.
*/
-#ifdef CONFIG_TCP_MD5SIG
-#define OPTION_BYTES TCPOLEN_MD5SIG_ALIGNED
-#else
-#define OPTION_BYTES sizeof(__be32)
-#endif
-
static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
{
const struct tcphdr *th = tcp_hdr(skb);
struct {
struct tcphdr th;
- __be32 opt[OPTION_BYTES / sizeof(__be32)];
+ __be32 opt[REPLY_OPTIONS_LEN];
} rep;
+ const __u8 *md5_hash_location = NULL;
+ const struct tcp_ao_hdr *aoh;
struct ip_reply_arg arg;
#ifdef CONFIG_TCP_MD5SIG
struct tcp_md5sig_key *key = NULL;
- const __u8 *hash_location = NULL;
unsigned char newhash[16];
- int genhash;
struct sock *sk1 = NULL;
+ int genhash;
#endif
u64 transmit_time = 0;
struct sock *ctl_sk;
@@ -725,9 +774,16 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
arg.iov[0].iov_len = sizeof(rep.th);
net = sk ? sock_net(sk) : dev_net(skb_dst(skb)->dev);
+
+ /* Invalid TCP option size or twice included auth */
+ if (tcp_parse_auth_options(tcp_hdr(skb), &md5_hash_location, &aoh))
+ return;
+
+ if (aoh && tcp_v4_ao_sign_reset(sk, skb, aoh, &arg, &rep.th, rep.opt))
+ return;
+
#ifdef CONFIG_TCP_MD5SIG
rcu_read_lock();
- hash_location = tcp_parse_md5sig_option(th);
if (sk && sk_fullsock(sk)) {
const union tcp_md5_addr *addr;
int l3index;
@@ -738,7 +794,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
l3index = tcp_v4_sdif(skb) ? inet_iif(skb) : 0;
addr = (union tcp_md5_addr *)&ip_hdr(skb)->saddr;
key = tcp_md5_do_lookup(sk, l3index, addr, AF_INET);
- } else if (hash_location) {
+ } else if (md5_hash_location) {
const union tcp_md5_addr *addr;
int sdif = tcp_v4_sdif(skb);
int dif = inet_iif(skb);
@@ -770,7 +826,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
genhash = tcp_v4_md5_hash_skb(newhash, key, NULL, skb);
- if (genhash || memcmp(hash_location, newhash, 16) != 0)
+ if (genhash || memcmp(md5_hash_location, newhash, 16) != 0)
goto out;
}
@@ -828,7 +884,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ?
inet_twsk(sk)->tw_mark : sk->sk_mark;
ctl_sk->sk_priority = (sk->sk_state == TCP_TIME_WAIT) ?
- inet_twsk(sk)->tw_priority : sk->sk_priority;
+ inet_twsk(sk)->tw_priority : READ_ONCE(sk->sk_priority);
transmit_time = tcp_transmit_time(sk);
xfrm_sk_clone_policy(ctl_sk, sk);
txhash = (sk->sk_state == TCP_TIME_WAIT) ?
@@ -862,17 +918,13 @@ out:
static void tcp_v4_send_ack(const struct sock *sk,
struct sk_buff *skb, u32 seq, u32 ack,
u32 win, u32 tsval, u32 tsecr, int oif,
- struct tcp_md5sig_key *key,
+ struct tcp_key *key,
int reply_flags, u8 tos, u32 txhash)
{
const struct tcphdr *th = tcp_hdr(skb);
struct {
struct tcphdr th;
- __be32 opt[(TCPOLEN_TSTAMP_ALIGNED >> 2)
-#ifdef CONFIG_TCP_MD5SIG
- + (TCPOLEN_MD5SIG_ALIGNED >> 2)
-#endif
- ];
+ __be32 opt[(MAX_TCP_OPTION_SPACE >> 2)];
} rep;
struct net *net = sock_net(sk);
struct ip_reply_arg arg;
@@ -903,7 +955,7 @@ static void tcp_v4_send_ack(const struct sock *sk,
rep.th.window = htons(win);
#ifdef CONFIG_TCP_MD5SIG
- if (key) {
+ if (tcp_key_is_md5(key)) {
int offset = (tsecr) ? 3 : 0;
rep.opt[offset++] = htonl((TCPOPT_NOP << 24) |
@@ -914,10 +966,28 @@ static void tcp_v4_send_ack(const struct sock *sk,
rep.th.doff = arg.iov[0].iov_len/4;
tcp_v4_md5_hash_hdr((__u8 *) &rep.opt[offset],
- key, ip_hdr(skb)->saddr,
+ key->md5_key, ip_hdr(skb)->saddr,
ip_hdr(skb)->daddr, &rep.th);
}
#endif
+#ifdef CONFIG_TCP_AO
+ if (tcp_key_is_ao(key)) {
+ int offset = (tsecr) ? 3 : 0;
+
+ rep.opt[offset++] = htonl((TCPOPT_AO << 24) |
+ (tcp_ao_len(key->ao_key) << 16) |
+ (key->ao_key->sndid << 8) |
+ key->rcv_next);
+ arg.iov[0].iov_len += round_up(tcp_ao_len(key->ao_key), 4);
+ rep.th.doff = arg.iov[0].iov_len / 4;
+
+ tcp_ao_hash_hdr(AF_INET, (char *)&rep.opt[offset],
+ key->ao_key, key->traffic_key,
+ (union tcp_ao_addr *)&ip_hdr(skb)->saddr,
+ (union tcp_ao_addr *)&ip_hdr(skb)->daddr,
+ &rep.th, key->sne);
+ }
+#endif
arg.flags = reply_flags;
arg.csum = csum_tcpudp_nofold(ip_hdr(skb)->daddr,
ip_hdr(skb)->saddr, /* XXX */
@@ -950,18 +1020,53 @@ static void tcp_v4_timewait_ack(struct sock *sk, struct sk_buff *skb)
{
struct inet_timewait_sock *tw = inet_twsk(sk);
struct tcp_timewait_sock *tcptw = tcp_twsk(sk);
+ struct tcp_key key = {};
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_info *ao_info;
+
+ if (static_branch_unlikely(&tcp_ao_needed.key)) {
+ /* FIXME: the segment to-be-acked is not verified yet */
+ ao_info = rcu_dereference(tcptw->ao_info);
+ if (ao_info) {
+ const struct tcp_ao_hdr *aoh;
+
+ if (tcp_parse_auth_options(tcp_hdr(skb), NULL, &aoh)) {
+ inet_twsk_put(tw);
+ return;
+ }
+
+ if (aoh)
+ key.ao_key = tcp_ao_established_key(ao_info, aoh->rnext_keyid, -1);
+ }
+ }
+ if (key.ao_key) {
+ struct tcp_ao_key *rnext_key;
+
+ key.traffic_key = snd_other_key(key.ao_key);
+ key.sne = READ_ONCE(ao_info->snd_sne);
+ rnext_key = READ_ONCE(ao_info->rnext_key);
+ key.rcv_next = rnext_key->rcvid;
+ key.type = TCP_KEY_AO;
+#else
+ if (0) {
+#endif
+#ifdef CONFIG_TCP_MD5SIG
+ } else if (static_branch_unlikely(&tcp_md5_needed.key)) {
+ key.md5_key = tcp_twsk_md5_key(tcptw);
+ if (key.md5_key)
+ key.type = TCP_KEY_MD5;
+#endif
+ }
tcp_v4_send_ack(sk, skb,
tcptw->tw_snd_nxt, tcptw->tw_rcv_nxt,
tcptw->tw_rcv_wnd >> tw->tw_rcv_wscale,
- tcp_time_stamp_raw() + tcptw->tw_ts_offset,
+ tcp_tw_tsval(tcptw),
tcptw->tw_ts_recent,
- tw->tw_bound_dev_if,
- tcp_twsk_md5_key(tcptw),
+ tw->tw_bound_dev_if, &key,
tw->tw_transparent ? IP_REPLY_ARG_NOSRCCHECK : 0,
tw->tw_tos,
- tw->tw_txhash
- );
+ tw->tw_txhash);
inet_twsk_put(tw);
}
@@ -969,8 +1074,7 @@ static void tcp_v4_timewait_ack(struct sock *sk, struct sk_buff *skb)
static void tcp_v4_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb,
struct request_sock *req)
{
- const union tcp_md5_addr *addr;
- int l3index;
+ struct tcp_key key = {};
/* sk->sk_state == TCP_LISTEN -> for regular TCP_SYN_RECV
* sk->sk_state == TCP_SYN_RECV -> for Fast Open.
@@ -978,23 +1082,77 @@ static void tcp_v4_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb,
u32 seq = (sk->sk_state == TCP_LISTEN) ? tcp_rsk(req)->snt_isn + 1 :
tcp_sk(sk)->snd_nxt;
+#ifdef CONFIG_TCP_AO
+ if (static_branch_unlikely(&tcp_ao_needed.key) &&
+ tcp_rsk_used_ao(req)) {
+ const union tcp_md5_addr *addr;
+ const struct tcp_ao_hdr *aoh;
+ int l3index;
+
+ /* Invalid TCP option size or twice included auth */
+ if (tcp_parse_auth_options(tcp_hdr(skb), NULL, &aoh))
+ return;
+ if (!aoh)
+ return;
+
+ addr = (union tcp_md5_addr *)&ip_hdr(skb)->saddr;
+ l3index = tcp_v4_sdif(skb) ? inet_iif(skb) : 0;
+ key.ao_key = tcp_ao_do_lookup(sk, l3index, addr, AF_INET,
+ aoh->rnext_keyid, -1);
+ if (unlikely(!key.ao_key)) {
+ /* Send ACK with any matching MKT for the peer */
+ key.ao_key = tcp_ao_do_lookup(sk, l3index, addr, AF_INET, -1, -1);
+ /* Matching key disappeared (user removed the key?),
+ * so let the handshake time out.
+ */
+ if (!key.ao_key) {
+ net_info_ratelimited("TCP-AO key for (%pI4, %d)->(%pI4, %d) suddenly disappeared, won't ACK new connection\n",
+ addr,
+ ntohs(tcp_hdr(skb)->source),
+ &ip_hdr(skb)->daddr,
+ ntohs(tcp_hdr(skb)->dest));
+ return;
+ }
+ }
+ key.traffic_key = kmalloc(tcp_ao_digest_size(key.ao_key), GFP_ATOMIC);
+ if (!key.traffic_key)
+ return;
+
+ key.type = TCP_KEY_AO;
+ key.rcv_next = aoh->keyid;
+ tcp_v4_ao_calc_key_rsk(key.ao_key, key.traffic_key, req);
+#else
+ if (0) {
+#endif
+#ifdef CONFIG_TCP_MD5SIG
+ } else if (static_branch_unlikely(&tcp_md5_needed.key)) {
+ const union tcp_md5_addr *addr;
+ int l3index;
+
+ addr = (union tcp_md5_addr *)&ip_hdr(skb)->saddr;
+ l3index = tcp_v4_sdif(skb) ? inet_iif(skb) : 0;
+ key.md5_key = tcp_md5_do_lookup(sk, l3index, addr, AF_INET);
+ if (key.md5_key)
+ key.type = TCP_KEY_MD5;
+#endif
+ }
+
/* RFC 7323 2.3
* The window field (SEG.WND) of every outgoing segment, with the
* exception of <SYN> segments, MUST be right-shifted by
* Rcv.Wind.Shift bits:
*/
- addr = (union tcp_md5_addr *)&ip_hdr(skb)->saddr;
- l3index = tcp_v4_sdif(skb) ? inet_iif(skb) : 0;
tcp_v4_send_ack(sk, skb, seq,
tcp_rsk(req)->rcv_nxt,
req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale,
- tcp_time_stamp_raw() + tcp_rsk(req)->ts_off,
+ tcp_rsk_tsval(tcp_rsk(req)),
READ_ONCE(req->ts_recent),
- 0,
- tcp_md5_do_lookup(sk, l3index, addr, AF_INET),
+ 0, &key,
inet_rsk(req)->no_srccheck ? IP_REPLY_ARG_NOSRCCHECK : 0,
ip_hdr(skb)->tos,
READ_ONCE(tcp_rsk(req)->txhash));
+ if (tcp_key_is_ao(&key))
+ kfree(key.traffic_key);
}
/*
@@ -1024,10 +1182,11 @@ static int tcp_v4_send_synack(const struct sock *sk, struct dst_entry *dst,
if (skb) {
__tcp_v4_send_check(skb, ireq->ir_loc_addr, ireq->ir_rmt_addr);
- tos = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos) ?
- (tcp_rsk(req)->syn_tos & ~INET_ECN_MASK) |
- (inet_sk(sk)->tos & INET_ECN_MASK) :
- inet_sk(sk)->tos;
+ tos = READ_ONCE(inet_sk(sk)->tos);
+
+ if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos))
+ tos = (tcp_rsk(req)->syn_tos & ~INET_ECN_MASK) |
+ (tos & INET_ECN_MASK);
if (!INET_ECN_is_capable(tos) &&
tcp_bpf_ca_needs_ecn((struct sock *)req))
@@ -1080,7 +1239,7 @@ static bool better_md5_match(struct tcp_md5sig_key *old, struct tcp_md5sig_key *
/* Find the Key structure for an address. */
struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index,
const union tcp_md5_addr *addr,
- int family)
+ int family, bool any_l3index)
{
const struct tcp_sock *tp = tcp_sk(sk);
struct tcp_md5sig_key *key;
@@ -1099,7 +1258,8 @@ struct tcp_md5sig_key *__tcp_md5_do_lookup(const struct sock *sk, int l3index,
lockdep_sock_is_held(sk)) {
if (key->family != family)
continue;
- if (key->flags & TCP_MD5SIG_FLAG_IFINDEX && key->l3index != l3index)
+ if (!any_l3index && key->flags & TCP_MD5SIG_FLAG_IFINDEX &&
+ key->l3index != l3index)
continue;
if (family == AF_INET) {
mask = inet_make_mask(key->prefixlen);
@@ -1219,10 +1379,6 @@ static int __tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
key = sock_kmalloc(sk, sizeof(*key), gfp | __GFP_ZERO);
if (!key)
return -ENOMEM;
- if (!tcp_alloc_md5sig_pool()) {
- sock_kfree_s(sk, key, sizeof(*key));
- return -ENOMEM;
- }
memcpy(key->key, newkey, newkeylen);
key->keylen = newkeylen;
@@ -1244,15 +1400,21 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
struct tcp_sock *tp = tcp_sk(sk);
if (!rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk))) {
- if (tcp_md5sig_info_add(sk, GFP_KERNEL))
+ if (tcp_md5_alloc_sigpool())
return -ENOMEM;
+ if (tcp_md5sig_info_add(sk, GFP_KERNEL)) {
+ tcp_md5_release_sigpool();
+ return -ENOMEM;
+ }
+
if (!static_branch_inc(&tcp_md5_needed.key)) {
struct tcp_md5sig_info *md5sig;
md5sig = rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk));
rcu_assign_pointer(tp->md5sig_info, NULL);
kfree_rcu(md5sig, rcu);
+ tcp_md5_release_sigpool();
return -EUSERS;
}
}
@@ -1269,8 +1431,12 @@ int tcp_md5_key_copy(struct sock *sk, const union tcp_md5_addr *addr,
struct tcp_sock *tp = tcp_sk(sk);
if (!rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk))) {
- if (tcp_md5sig_info_add(sk, sk_gfp_mask(sk, GFP_ATOMIC)))
+ tcp_md5_add_sigpool();
+
+ if (tcp_md5sig_info_add(sk, sk_gfp_mask(sk, GFP_ATOMIC))) {
+ tcp_md5_release_sigpool();
return -ENOMEM;
+ }
if (!static_key_fast_inc_not_disabled(&tcp_md5_needed.key.key)) {
struct tcp_md5sig_info *md5sig;
@@ -1279,6 +1445,7 @@ int tcp_md5_key_copy(struct sock *sk, const union tcp_md5_addr *addr,
net_warn_ratelimited("Too many TCP-MD5 keys in the system\n");
rcu_assign_pointer(tp->md5sig_info, NULL);
kfree_rcu(md5sig, rcu);
+ tcp_md5_release_sigpool();
return -EUSERS;
}
}
@@ -1304,7 +1471,7 @@ int tcp_md5_do_del(struct sock *sk, const union tcp_md5_addr *addr, int family,
}
EXPORT_SYMBOL(tcp_md5_do_del);
-static void tcp_clear_md5_list(struct sock *sk)
+void tcp_clear_md5_list(struct sock *sk)
{
struct tcp_sock *tp = tcp_sk(sk);
struct tcp_md5sig_key *key;
@@ -1328,6 +1495,7 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname,
const union tcp_md5_addr *addr;
u8 prefixlen = 32;
int l3index = 0;
+ bool l3flag;
u8 flags;
if (optlen < sizeof(cmd))
@@ -1340,6 +1508,7 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname,
return -EINVAL;
flags = cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX;
+ l3flag = cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX;
if (optname == TCP_MD5SIG_EXT &&
cmd.tcpm_flags & TCP_MD5SIG_FLAG_PREFIX) {
@@ -1374,11 +1543,17 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname,
if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN)
return -EINVAL;
+ /* Don't allow keys for peers that have a matching TCP-AO key.
+ * See the comment in tcp_ao_add_cmd()
+ */
+ if (tcp_ao_required(sk, addr, AF_INET, l3flag ? l3index : -1, false))
+ return -EKEYREJECTED;
+
return tcp_md5_do_add(sk, addr, AF_INET, prefixlen, l3index, flags,
cmd.tcpm_key, cmd.tcpm_keylen);
}
-static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp,
+static int tcp_v4_md5_hash_headers(struct tcp_sigpool *hp,
__be32 daddr, __be32 saddr,
const struct tcphdr *th, int nbytes)
{
@@ -1398,38 +1573,35 @@ static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp,
_th->check = 0;
sg_init_one(&sg, bp, sizeof(*bp) + sizeof(*th));
- ahash_request_set_crypt(hp->md5_req, &sg, NULL,
+ ahash_request_set_crypt(hp->req, &sg, NULL,
sizeof(*bp) + sizeof(*th));
- return crypto_ahash_update(hp->md5_req);
+ return crypto_ahash_update(hp->req);
}
static int tcp_v4_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
__be32 daddr, __be32 saddr, const struct tcphdr *th)
{
- struct tcp_md5sig_pool *hp;
- struct ahash_request *req;
+ struct tcp_sigpool hp;
- hp = tcp_get_md5sig_pool();
- if (!hp)
- goto clear_hash_noput;
- req = hp->md5_req;
+ if (tcp_sigpool_start(tcp_md5_sigpool_id, &hp))
+ goto clear_hash_nostart;
- if (crypto_ahash_init(req))
+ if (crypto_ahash_init(hp.req))
goto clear_hash;
- if (tcp_v4_md5_hash_headers(hp, daddr, saddr, th, th->doff << 2))
+ if (tcp_v4_md5_hash_headers(&hp, daddr, saddr, th, th->doff << 2))
goto clear_hash;
- if (tcp_md5_hash_key(hp, key))
+ if (tcp_md5_hash_key(&hp, key))
goto clear_hash;
- ahash_request_set_crypt(req, NULL, md5_hash, 0);
- if (crypto_ahash_final(req))
+ ahash_request_set_crypt(hp.req, NULL, md5_hash, 0);
+ if (crypto_ahash_final(hp.req))
goto clear_hash;
- tcp_put_md5sig_pool();
+ tcp_sigpool_end(&hp);
return 0;
clear_hash:
- tcp_put_md5sig_pool();
-clear_hash_noput:
+ tcp_sigpool_end(&hp);
+clear_hash_nostart:
memset(md5_hash, 0, 16);
return 1;
}
@@ -1438,9 +1610,8 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
const struct sock *sk,
const struct sk_buff *skb)
{
- struct tcp_md5sig_pool *hp;
- struct ahash_request *req;
const struct tcphdr *th = tcp_hdr(skb);
+ struct tcp_sigpool hp;
__be32 saddr, daddr;
if (sk) { /* valid for establish/request sockets */
@@ -1452,30 +1623,28 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
daddr = iph->daddr;
}
- hp = tcp_get_md5sig_pool();
- if (!hp)
- goto clear_hash_noput;
- req = hp->md5_req;
+ if (tcp_sigpool_start(tcp_md5_sigpool_id, &hp))
+ goto clear_hash_nostart;
- if (crypto_ahash_init(req))
+ if (crypto_ahash_init(hp.req))
goto clear_hash;
- if (tcp_v4_md5_hash_headers(hp, daddr, saddr, th, skb->len))
+ if (tcp_v4_md5_hash_headers(&hp, daddr, saddr, th, skb->len))
goto clear_hash;
- if (tcp_md5_hash_skb_data(hp, skb, th->doff << 2))
+ if (tcp_sigpool_hash_skb_data(&hp, skb, th->doff << 2))
goto clear_hash;
- if (tcp_md5_hash_key(hp, key))
+ if (tcp_md5_hash_key(&hp, key))
goto clear_hash;
- ahash_request_set_crypt(req, NULL, md5_hash, 0);
- if (crypto_ahash_final(req))
+ ahash_request_set_crypt(hp.req, NULL, md5_hash, 0);
+ if (crypto_ahash_final(hp.req))
goto clear_hash;
- tcp_put_md5sig_pool();
+ tcp_sigpool_end(&hp);
return 0;
clear_hash:
- tcp_put_md5sig_pool();
-clear_hash_noput:
+ tcp_sigpool_end(&hp);
+clear_hash_nostart:
memset(md5_hash, 0, 16);
return 1;
}
@@ -1524,6 +1693,11 @@ const struct tcp_request_sock_ops tcp_request_sock_ipv4_ops = {
.req_md5_lookup = tcp_v4_md5_lookup,
.calc_md5_hash = tcp_v4_md5_hash_skb,
#endif
+#ifdef CONFIG_TCP_AO
+ .ao_lookup = tcp_v4_ao_lookup_rsk,
+ .ao_calc_key = tcp_v4_ao_calc_key_rsk,
+ .ao_synack_hash = tcp_v4_ao_synack_hash,
+#endif
#ifdef CONFIG_SYN_COOKIES
.cookie_init_seq = cookie_v4_init_sequence,
#endif
@@ -1625,12 +1799,16 @@ struct sock *tcp_v4_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
/* Copy over the MD5 key from the original socket */
addr = (union tcp_md5_addr *)&newinet->inet_daddr;
key = tcp_md5_do_lookup(sk, l3index, addr, AF_INET);
- if (key) {
+ if (key && !tcp_rsk_used_ao(req)) {
if (tcp_md5_key_copy(newsk, addr, AF_INET, 32, l3index, key))
goto put_and_exit;
sk_gso_disable(newsk);
}
#endif
+#ifdef CONFIG_TCP_AO
+ if (tcp_ao_copy_all_matching(sk, newsk, req, skb, AF_INET))
+ goto put_and_exit; /* OOM, release back memory */
+#endif
if (__inet_inherit_port(sk, newsk) < 0)
goto put_and_exit;
@@ -2041,9 +2219,9 @@ process:
if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb))
drop_reason = SKB_DROP_REASON_XFRM_POLICY;
else
- drop_reason = tcp_inbound_md5_hash(sk, skb,
- &iph->saddr, &iph->daddr,
- AF_INET, dif, sdif);
+ drop_reason = tcp_inbound_hash(sk, req, skb,
+ &iph->saddr, &iph->daddr,
+ AF_INET, dif, sdif);
if (unlikely(drop_reason)) {
sk_drops_add(sk, skb);
reqsk_put(req);
@@ -2120,8 +2298,8 @@ process:
goto discard_and_relse;
}
- drop_reason = tcp_inbound_md5_hash(sk, skb, &iph->saddr,
- &iph->daddr, AF_INET, dif, sdif);
+ drop_reason = tcp_inbound_hash(sk, NULL, skb, &iph->saddr, &iph->daddr,
+ AF_INET, dif, sdif);
if (drop_reason)
goto discard_and_relse;
@@ -2268,11 +2446,19 @@ const struct inet_connection_sock_af_ops ipv4_specific = {
};
EXPORT_SYMBOL(ipv4_specific);
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
static const struct tcp_sock_af_ops tcp_sock_ipv4_specific = {
+#ifdef CONFIG_TCP_MD5SIG
.md5_lookup = tcp_v4_md5_lookup,
.calc_md5_hash = tcp_v4_md5_hash_skb,
.md5_parse = tcp_v4_parse_md5_keys,
+#endif
+#ifdef CONFIG_TCP_AO
+ .ao_lookup = tcp_v4_ao_lookup,
+ .calc_ao_hash = tcp_v4_ao_hash_skb,
+ .ao_parse = tcp_v4_parse_ao,
+ .ao_calc_key_sk = tcp_v4_ao_calc_key_sk,
+#endif
};
#endif
@@ -2287,13 +2473,25 @@ static int tcp_v4_init_sock(struct sock *sk)
icsk->icsk_af_ops = &ipv4_specific;
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
tcp_sk(sk)->af_specific = &tcp_sock_ipv4_specific;
#endif
return 0;
}
+#ifdef CONFIG_TCP_MD5SIG
+static void tcp_md5sig_info_free_rcu(struct rcu_head *head)
+{
+ struct tcp_md5sig_info *md5sig;
+
+ md5sig = container_of(head, struct tcp_md5sig_info, rcu);
+ kfree(md5sig);
+ static_branch_slow_dec_deferred(&tcp_md5_needed);
+ tcp_md5_release_sigpool();
+}
+#endif
+
void tcp_v4_destroy_sock(struct sock *sk)
{
struct tcp_sock *tp = tcp_sk(sk);
@@ -2318,12 +2516,15 @@ void tcp_v4_destroy_sock(struct sock *sk)
#ifdef CONFIG_TCP_MD5SIG
/* Clean up the MD5 key list, if any */
if (tp->md5sig_info) {
+ struct tcp_md5sig_info *md5sig;
+
+ md5sig = rcu_dereference_protected(tp->md5sig_info, 1);
tcp_clear_md5_list(sk);
- kfree_rcu(rcu_dereference_protected(tp->md5sig_info, 1), rcu);
- tp->md5sig_info = NULL;
- static_branch_slow_dec_deferred(&tcp_md5_needed);
+ call_rcu(&md5sig->rcu, tcp_md5sig_info_free_rcu);
+ rcu_assign_pointer(tp->md5sig_info, NULL);
}
#endif
+ tcp_ao_destroy_sock(sk, false);
/* Clean up a referenced TCP bind bucket. */
if (inet_csk(sk)->icsk_bind_hash)
@@ -3264,6 +3465,7 @@ static int __net_init tcp_sk_init(struct net *net)
net->ipv4.sysctl_tcp_comp_sack_delay_ns = NSEC_PER_MSEC;
net->ipv4.sysctl_tcp_comp_sack_slack_ns = 100 * NSEC_PER_USEC;
net->ipv4.sysctl_tcp_comp_sack_nr = 44;
+ net->ipv4.sysctl_tcp_backlog_ack_defer = 1;
net->ipv4.sysctl_tcp_fastopen = TFO_CLIENT_ENABLE;
net->ipv4.sysctl_tcp_fastopen_blackhole_timeout = 0;
atomic_set(&net->ipv4.tfo_active_disable_times, 0);
@@ -3287,6 +3489,8 @@ static int __net_init tcp_sk_init(struct net *net)
net->ipv4.sysctl_tcp_syn_linear_timeouts = 4;
net->ipv4.sysctl_tcp_shrink_window = 0;
+ net->ipv4.sysctl_tcp_pingpong_thresh = 1;
+
return 0;
}
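Several reply paths changed above (tcp_v4_send_ack(), tcp_v4_timewait_ack(), tcp_v4_reqsk_send_ack()) now carry a single struct tcp_key instead of a bare MD5 key pointer, and pick the signing method from it. A compressed sketch of that decision order — AO first, then MD5, otherwise unsigned — using simplified stand-in types rather than the kernel's:

#include <stddef.h>

enum key_type { KEY_NONE, KEY_MD5, KEY_AO };

struct reply_key {
	enum key_type type;
	const void *md5_key; /* stand-in for struct tcp_md5sig_key * */
	const void *ao_key;  /* stand-in for struct tcp_ao_key * */
};

/* Prefer a matching AO key, fall back to MD5, else send unsigned —
 * mirroring the order used by the ACK/RST reply paths above.
 */
static void pick_reply_key(struct reply_key *k,
			   const void *ao_match, const void *md5_match)
{
	if (ao_match) {
		k->ao_key = ao_match;
		k->type = KEY_AO;
	} else if (md5_match) {
		k->md5_key = md5_match;
		k->type = KEY_MD5;
	} else {
		k->type = KEY_NONE;
	}
}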
diff --git a/net/ipv4/tcp_lp.c b/net/ipv4/tcp_lp.c
index ae36780977d2..52fe17167460 100644
--- a/net/ipv4/tcp_lp.c
+++ b/net/ipv4/tcp_lp.c
@@ -272,7 +272,7 @@ static void tcp_lp_pkts_acked(struct sock *sk, const struct ack_sample *sample)
{
struct tcp_sock *tp = tcp_sk(sk);
struct lp *lp = inet_csk_ca(sk);
- u32 now = tcp_time_stamp(tp);
+ u32 now = tcp_time_stamp_ts(tp);
u32 delta;
if (sample->rtt_us > 0)
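This one-line change follows the series-wide switch from tcp_time_stamp() to tcp_time_stamp_ts(), which yields microsecond-resolution values when the connection negotiated usec timestamps and millisecond values otherwise. A hedged, stand-alone sketch of only that unit selection (not the kernel's clock plumbing):

#include <stdint.h>
#include <stdbool.h>

/* usec_ts stands in for the per-connection flag tp->tcp_usec_ts. */
static inline uint32_t ts_value(bool usec_ts, uint64_t clock_us)
{
	/* microsecond resolution if negotiated, else classic 1 ms ticks */
	return usec_ts ? (uint32_t)clock_us : (uint32_t)(clock_us / 1000);
}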
diff --git a/net/ipv4/tcp_metrics.c b/net/ipv4/tcp_metrics.c
index c196759f1d3b..c2a925538542 100644
--- a/net/ipv4/tcp_metrics.c
+++ b/net/ipv4/tcp_metrics.c
@@ -470,11 +470,15 @@ void tcp_init_metrics(struct sock *sk)
u32 val, crtt = 0; /* cached RTT scaled by 8 */
sk_dst_confirm(sk);
+ /* ssthresh may have been reduced unnecessarily during 3WHS.
+ * Restore it to its initial default.
+ */
+ tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
if (!dst)
goto reset;
rcu_read_lock();
- tm = tcp_get_metrics(sk, dst, true);
+ tm = tcp_get_metrics(sk, dst, false);
if (!tm) {
rcu_read_unlock();
goto reset;
@@ -489,11 +493,6 @@ void tcp_init_metrics(struct sock *sk)
tp->snd_ssthresh = val;
if (tp->snd_ssthresh > tp->snd_cwnd_clamp)
tp->snd_ssthresh = tp->snd_cwnd_clamp;
- } else {
- /* ssthresh may have been reduced unnecessarily during.
- * 3WHS. Restore it back to its initial default.
- */
- tp->snd_ssthresh = TCP_INFINITE_SSTHRESH;
}
val = tcp_metric_get(tm, TCP_METRIC_REORDERING);
if (val && tp->reordering != val)
@@ -899,22 +898,25 @@ static void tcp_metrics_flush_all(struct net *net)
unsigned int row;
for (row = 0; row < max_rows; row++, hb++) {
- struct tcp_metrics_block __rcu **pp;
+ struct tcp_metrics_block __rcu **pp = &hb->chain;
bool match;
+ if (!rcu_access_pointer(*pp))
+ continue;
+
spin_lock_bh(&tcp_metrics_lock);
- pp = &hb->chain;
for (tm = deref_locked(*pp); tm; tm = deref_locked(*pp)) {
match = net ? net_eq(tm_net(tm), net) :
!refcount_read(&tm_net(tm)->ns.count);
if (match) {
- *pp = tm->tcpm_next;
+ rcu_assign_pointer(*pp, tm->tcpm_next);
kfree_rcu(tm, rcu_head);
} else {
pp = &tm->tcpm_next;
}
}
spin_unlock_bh(&tcp_metrics_lock);
+ cond_resched();
}
}
@@ -949,7 +951,7 @@ static int tcp_metrics_nl_cmd_del(struct sk_buff *skb, struct genl_info *info)
if (addr_same(&tm->tcpm_daddr, &daddr) &&
(!src || addr_same(&tm->tcpm_saddr, &saddr)) &&
net_eq(tm_net(tm), net)) {
- *pp = tm->tcpm_next;
+ rcu_assign_pointer(*pp, tm->tcpm_next);
kfree_rcu(tm, rcu_head);
found = true;
} else {
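Both tcp_metrics hunks above replace the bare pointer store with rcu_assign_pointer() when unlinking an entry, so lockless readers never observe a torn update of the chain. A minimal sketch of the same pointer-to-pointer unlink pattern, in plain userspace C with comments marking where the RCU helpers sit in the kernel version:

#include <stdlib.h>

struct tm_node {
	struct tm_node *next;
	int key;
};

/* Walk with a cursor that points at either the head or the previous
 * node's ->next, as tcp_metrics_flush_all() does.
 */
static void unlink_matching(struct tm_node **pp, int key)
{
	struct tm_node *n;

	while ((n = *pp) != NULL) {
		if (n->key == key) {
			*pp = n->next; /* kernel: rcu_assign_pointer(*pp, n->next) */
			free(n);       /* kernel: kfree_rcu(n, rcu_head) */
		} else {
			pp = &n->next;
		}
	}
}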
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index b98d476f1594..a9807eeb311c 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -51,6 +51,18 @@ tcp_timewait_check_oow_rate_limit(struct inet_timewait_sock *tw,
return TCP_TW_SUCCESS;
}
+static void twsk_rcv_nxt_update(struct tcp_timewait_sock *tcptw, u32 seq)
+{
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_info *ao;
+
+ ao = rcu_dereference(tcptw->ao_info);
+ if (unlikely(ao && seq < tcptw->tw_rcv_nxt))
+ WRITE_ONCE(ao->rcv_sne, ao->rcv_sne + 1);
+#endif
+ tcptw->tw_rcv_nxt = seq;
+}
+
/*
* * Main purpose of TIME-WAIT state is to close connection gracefully,
* when one of ends sits in LAST-ACK or CLOSING retransmitting FIN
@@ -136,7 +148,8 @@ tcp_timewait_state_process(struct inet_timewait_sock *tw, struct sk_buff *skb,
/* FIN arrived, enter true time-wait state. */
tw->tw_substate = TCP_TIME_WAIT;
- tcptw->tw_rcv_nxt = TCP_SKB_CB(skb)->end_seq;
+ twsk_rcv_nxt_update(tcptw, TCP_SKB_CB(skb)->end_seq);
+
if (tmp_opt.saw_tstamp) {
tcptw->tw_ts_recent_stamp = ktime_get_seconds();
tcptw->tw_ts_recent = tmp_opt.rcv_tsval;
@@ -261,10 +274,9 @@ static void tcp_time_wait_init(struct sock *sk, struct tcp_timewait_sock *tcptw)
tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC);
if (!tcptw->tw_md5_key)
return;
- if (!tcp_alloc_md5sig_pool())
- goto out_free;
if (!static_key_fast_inc_not_disabled(&tcp_md5_needed.key.key))
goto out_free;
+ tcp_md5_add_sigpool();
}
return;
out_free:
@@ -280,7 +292,7 @@ out_free:
void tcp_time_wait(struct sock *sk, int state, int timeo)
{
const struct inet_connection_sock *icsk = inet_csk(sk);
- const struct tcp_sock *tp = tcp_sk(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
struct net *net = sock_net(sk);
struct inet_timewait_sock *tw;
@@ -292,7 +304,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
tw->tw_transparent = inet_test_bit(TRANSPARENT, sk);
tw->tw_mark = sk->sk_mark;
- tw->tw_priority = sk->sk_priority;
+ tw->tw_priority = READ_ONCE(sk->sk_priority);
tw->tw_rcv_wscale = tp->rx_opt.rcv_wscale;
tcptw->tw_rcv_nxt = tp->rcv_nxt;
tcptw->tw_snd_nxt = tp->snd_nxt;
@@ -300,6 +312,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
tcptw->tw_ts_recent = tp->rx_opt.ts_recent;
tcptw->tw_ts_recent_stamp = tp->rx_opt.ts_recent_stamp;
tcptw->tw_ts_offset = tp->tsoffset;
+ tw->tw_usec_ts = tp->tcp_usec_ts;
tcptw->tw_last_oow_ack_time = 0;
tcptw->tw_tx_delay = tp->tcp_tx_delay;
tw->tw_txhash = sk->sk_txhash;
@@ -316,6 +329,7 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
#endif
tcp_time_wait_init(sk, tcptw);
+ tcp_ao_time_wait(tcptw, tp);
/* Get the TIME_WAIT timeout firing. */
if (timeo < rto)
@@ -348,18 +362,29 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
}
EXPORT_SYMBOL(tcp_time_wait);
+#ifdef CONFIG_TCP_MD5SIG
+static void tcp_md5_twsk_free_rcu(struct rcu_head *head)
+{
+ struct tcp_md5sig_key *key;
+
+ key = container_of(head, struct tcp_md5sig_key, rcu);
+ kfree(key);
+ static_branch_slow_dec_deferred(&tcp_md5_needed);
+ tcp_md5_release_sigpool();
+}
+#endif
+
void tcp_twsk_destructor(struct sock *sk)
{
#ifdef CONFIG_TCP_MD5SIG
if (static_branch_unlikely(&tcp_md5_needed.key)) {
struct tcp_timewait_sock *twsk = tcp_twsk(sk);
- if (twsk->tw_md5_key) {
- kfree_rcu(twsk->tw_md5_key, rcu);
- static_branch_slow_dec_deferred(&tcp_md5_needed);
- }
+ if (twsk->tw_md5_key)
+ call_rcu(&twsk->tw_md5_key->rcu, tcp_md5_twsk_free_rcu);
}
#endif
+ tcp_ao_destroy_sock(sk, true);
}
EXPORT_SYMBOL_GPL(tcp_twsk_destructor);
@@ -494,6 +519,9 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
const struct tcp_sock *oldtp;
struct tcp_sock *newtp;
u32 seq;
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_key *ao_key;
+#endif
if (!newsk)
return NULL;
@@ -554,22 +582,41 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
newtp->max_window = newtp->snd_wnd;
if (newtp->rx_opt.tstamp_ok) {
+ newtp->tcp_usec_ts = treq->req_usec_ts;
newtp->rx_opt.ts_recent = READ_ONCE(req->ts_recent);
newtp->rx_opt.ts_recent_stamp = ktime_get_seconds();
newtp->tcp_header_len = sizeof(struct tcphdr) + TCPOLEN_TSTAMP_ALIGNED;
} else {
+ newtp->tcp_usec_ts = 0;
newtp->rx_opt.ts_recent_stamp = 0;
newtp->tcp_header_len = sizeof(struct tcphdr);
}
if (req->num_timeout) {
+ newtp->total_rto = req->num_timeout;
newtp->undo_marker = treq->snt_isn;
- newtp->retrans_stamp = div_u64(treq->snt_synack,
- USEC_PER_SEC / TCP_TS_HZ);
+ if (newtp->tcp_usec_ts) {
+ newtp->retrans_stamp = treq->snt_synack;
+ newtp->total_rto_time = (u32)(tcp_clock_us() -
+ newtp->retrans_stamp) / USEC_PER_MSEC;
+ } else {
+ newtp->retrans_stamp = div_u64(treq->snt_synack,
+ USEC_PER_SEC / TCP_TS_HZ);
+ newtp->total_rto_time = tcp_clock_ms() -
+ newtp->retrans_stamp;
+ }
+ newtp->total_rto_recoveries = 1;
}
newtp->tsoffset = treq->ts_off;
#ifdef CONFIG_TCP_MD5SIG
newtp->md5sig_info = NULL; /*XXX*/
#endif
+#ifdef CONFIG_TCP_AO
+ newtp->ao_info = NULL;
+ ao_key = treq->af_specific->ao_lookup(sk, req,
+ tcp_rsk(req)->ao_keyid, -1);
+ if (ao_key)
+ newtp->tcp_header_len += tcp_ao_len(ao_key);
+ #endif
if (skb->len >= TCP_MSS_DEFAULT + newtp->tcp_header_len)
newicsk->icsk_ack.last_seg_size = skb->len - newtp->tcp_header_len;
newtp->rx_opt.mss_clamp = req->mss;
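The new twsk_rcv_nxt_update() above bumps the TCP-AO sequence number extension (SNE) when the 32-bit receive sequence wraps while the socket sits in TIME-WAIT. A rough stand-alone sketch of that wrap detection (RFC 5925 SNE bookkeeping in miniature; names are illustrative, not the kernel's):

#include <stdint.h>

struct sne_state {
	uint32_t rcv_nxt; /* last accepted sequence number */
	uint32_t rcv_sne; /* high-order extension per RFC 5925 */
};

/* If the new sequence number is numerically smaller than the previous
 * one, the 32-bit space wrapped, so the extension is incremented —
 * mirroring the "seq < tcptw->tw_rcv_nxt" test above.
 */
static void sne_advance(struct sne_state *s, uint32_t new_seq)
{
	if (new_seq < s->rcv_nxt)
		s->rcv_sne++;
	s->rcv_nxt = new_seq;
}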
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index f0723460753c..f558c054cf6e 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -170,10 +170,10 @@ static void tcp_event_data_sent(struct tcp_sock *tp,
tp->lsndtime = now;
/* If it is a reply for ato after last received
- * packet, enter pingpong mode.
+ * packet, increase pingpong count.
*/
if ((u32)(now - icsk->icsk_ack.lrcvtime) < icsk->icsk_ack.ato)
- inet_csk_enter_pingpong_mode(sk);
+ inet_csk_inc_pingpong_cnt(sk);
}
/* Account for an ACK we sent. */
@@ -422,6 +422,7 @@ static inline bool tcp_urg_mode(const struct tcp_sock *tp)
#define OPTION_FAST_OPEN_COOKIE BIT(8)
#define OPTION_SMC BIT(9)
#define OPTION_MPTCP BIT(10)
+#define OPTION_AO BIT(11)
static void smc_options_write(__be32 *ptr, u16 *options)
{
@@ -614,19 +615,52 @@ static void bpf_skops_write_hdr_opt(struct sock *sk, struct sk_buff *skb,
* (but it may well be that other scenarios fail similarly).
*/
static void tcp_options_write(struct tcphdr *th, struct tcp_sock *tp,
- struct tcp_out_options *opts)
+ const struct tcp_request_sock *tcprsk,
+ struct tcp_out_options *opts,
+ struct tcp_key *key)
{
__be32 *ptr = (__be32 *)(th + 1);
u16 options = opts->options; /* mungable copy */
- if (unlikely(OPTION_MD5 & options)) {
+ if (tcp_key_is_md5(key)) {
*ptr++ = htonl((TCPOPT_NOP << 24) | (TCPOPT_NOP << 16) |
(TCPOPT_MD5SIG << 8) | TCPOLEN_MD5SIG);
/* overload cookie hash location */
opts->hash_location = (__u8 *)ptr;
ptr += 4;
- }
+ } else if (tcp_key_is_ao(key)) {
+#ifdef CONFIG_TCP_AO
+ u8 maclen = tcp_ao_maclen(key->ao_key);
+
+ if (tcprsk) {
+ u8 aolen = maclen + sizeof(struct tcp_ao_hdr);
+ *ptr++ = htonl((TCPOPT_AO << 24) | (aolen << 16) |
+ (tcprsk->ao_keyid << 8) |
+ (tcprsk->ao_rcv_next));
+ } else {
+ struct tcp_ao_key *rnext_key;
+ struct tcp_ao_info *ao_info;
+
+ ao_info = rcu_dereference_check(tp->ao_info,
+ lockdep_sock_is_held(&tp->inet_conn.icsk_inet.sk));
+ rnext_key = READ_ONCE(ao_info->rnext_key);
+ if (WARN_ON_ONCE(!rnext_key))
+ goto out_ao;
+ *ptr++ = htonl((TCPOPT_AO << 24) |
+ (tcp_ao_len(key->ao_key) << 16) |
+ (key->ao_key->sndid << 8) |
+ (rnext_key->rcvid));
+ }
+ opts->hash_location = (__u8 *)ptr;
+ ptr += maclen / sizeof(*ptr);
+ if (unlikely(maclen % sizeof(*ptr))) {
+ memset(ptr, TCPOPT_NOP, sizeof(*ptr));
+ ptr++;
+ }
+out_ao:
+#endif
+ }
if (unlikely(opts->mss)) {
*ptr++ = htonl((TCPOPT_MSS << 24) |
(TCPOLEN_MSS << 16) |
@@ -767,23 +801,25 @@ static void mptcp_set_option_cond(const struct request_sock *req,
*/
static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb,
struct tcp_out_options *opts,
- struct tcp_md5sig_key **md5)
+ struct tcp_key *key)
{
struct tcp_sock *tp = tcp_sk(sk);
unsigned int remaining = MAX_TCP_OPTION_SPACE;
struct tcp_fastopen_request *fastopen = tp->fastopen_req;
+ bool timestamps;
- *md5 = NULL;
-#ifdef CONFIG_TCP_MD5SIG
- if (static_branch_unlikely(&tcp_md5_needed.key) &&
- rcu_access_pointer(tp->md5sig_info)) {
- *md5 = tp->af_specific->md5_lookup(sk, sk);
- if (*md5) {
- opts->options |= OPTION_MD5;
- remaining -= TCPOLEN_MD5SIG_ALIGNED;
+ /* Better than switch (key.type) as it has static branches */
+ if (tcp_key_is_md5(key)) {
+ timestamps = false;
+ opts->options |= OPTION_MD5;
+ remaining -= TCPOLEN_MD5SIG_ALIGNED;
+ } else {
+ timestamps = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_timestamps);
+ if (tcp_key_is_ao(key)) {
+ opts->options |= OPTION_AO;
+ remaining -= tcp_ao_len(key->ao_key);
}
}
-#endif
/* We always get an MSS option. The option bytes which will be seen in
* normal data packets should timestamps be used, must be in the MSS
@@ -797,9 +833,9 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb,
opts->mss = tcp_advertise_mss(sk);
remaining -= TCPOLEN_MSS_ALIGNED;
- if (likely(READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_timestamps) && !*md5)) {
+ if (likely(timestamps)) {
opts->options |= OPTION_TS;
- opts->tsval = tcp_skb_timestamp(skb) + tp->tsoffset;
+ opts->tsval = tcp_skb_timestamp_ts(tp->tcp_usec_ts, skb) + tp->tsoffset;
opts->tsecr = tp->rx_opt.ts_recent;
remaining -= TCPOLEN_TSTAMP_ALIGNED;
}
@@ -850,7 +886,7 @@ static unsigned int tcp_synack_options(const struct sock *sk,
struct request_sock *req,
unsigned int mss, struct sk_buff *skb,
struct tcp_out_options *opts,
- const struct tcp_md5sig_key *md5,
+ const struct tcp_key *key,
struct tcp_fastopen_cookie *foc,
enum tcp_synack_type synack_type,
struct sk_buff *syn_skb)
@@ -858,8 +894,7 @@ static unsigned int tcp_synack_options(const struct sock *sk,
struct inet_request_sock *ireq = inet_rsk(req);
unsigned int remaining = MAX_TCP_OPTION_SPACE;
-#ifdef CONFIG_TCP_MD5SIG
- if (md5) {
+ if (tcp_key_is_md5(key)) {
opts->options |= OPTION_MD5;
remaining -= TCPOLEN_MD5SIG_ALIGNED;
@@ -870,8 +905,11 @@ static unsigned int tcp_synack_options(const struct sock *sk,
*/
if (synack_type != TCP_SYNACK_COOKIE)
ireq->tstamp_ok &= !ireq->sack_ok;
+ } else if (tcp_key_is_ao(key)) {
+ opts->options |= OPTION_AO;
+ remaining -= tcp_ao_len(key->ao_key);
+ ireq->tstamp_ok &= !ireq->sack_ok;
}
-#endif
/* We always send an MSS option. */
opts->mss = mss;
@@ -884,7 +922,8 @@ static unsigned int tcp_synack_options(const struct sock *sk,
}
if (likely(ireq->tstamp_ok)) {
opts->options |= OPTION_TS;
- opts->tsval = tcp_skb_timestamp(skb) + tcp_rsk(req)->ts_off;
+ opts->tsval = tcp_skb_timestamp_ts(tcp_rsk(req)->req_usec_ts, skb) +
+ tcp_rsk(req)->ts_off;
opts->tsecr = READ_ONCE(req->ts_recent);
remaining -= TCPOLEN_TSTAMP_ALIGNED;
}
@@ -921,7 +960,7 @@ static unsigned int tcp_synack_options(const struct sock *sk,
*/
static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb,
struct tcp_out_options *opts,
- struct tcp_md5sig_key **md5)
+ struct tcp_key *key)
{
struct tcp_sock *tp = tcp_sk(sk);
unsigned int size = 0;
@@ -929,21 +968,19 @@ static unsigned int tcp_established_options(struct sock *sk, struct sk_buff *skb
opts->options = 0;
- *md5 = NULL;
-#ifdef CONFIG_TCP_MD5SIG
- if (static_branch_unlikely(&tcp_md5_needed.key) &&
- rcu_access_pointer(tp->md5sig_info)) {
- *md5 = tp->af_specific->md5_lookup(sk, sk);
- if (*md5) {
- opts->options |= OPTION_MD5;
- size += TCPOLEN_MD5SIG_ALIGNED;
- }
+ /* Better than switch (key.type) as it has static branches */
+ if (tcp_key_is_md5(key)) {
+ opts->options |= OPTION_MD5;
+ size += TCPOLEN_MD5SIG_ALIGNED;
+ } else if (tcp_key_is_ao(key)) {
+ opts->options |= OPTION_AO;
+ size += tcp_ao_len(key->ao_key);
}
-#endif
if (likely(tp->rx_opt.tstamp_ok)) {
opts->options |= OPTION_TS;
- opts->tsval = skb ? tcp_skb_timestamp(skb) + tp->tsoffset : 0;
+ opts->tsval = skb ? tcp_skb_timestamp_ts(tp->tcp_usec_ts, skb) +
+ tp->tsoffset : 0;
opts->tsecr = tp->rx_opt.ts_recent;
size += TCPOLEN_TSTAMP_ALIGNED;
}
@@ -1076,7 +1113,8 @@ static void tcp_tasklet_func(struct tasklet_struct *t)
#define TCP_DEFERRED_ALL (TCPF_TSQ_DEFERRED | \
TCPF_WRITE_TIMER_DEFERRED | \
TCPF_DELACK_TIMER_DEFERRED | \
- TCPF_MTU_REDUCED_DEFERRED)
+ TCPF_MTU_REDUCED_DEFERRED | \
+ TCPF_ACK_DEFERRED)
/**
* tcp_release_cb - tcp release_sock() callback
* @sk: socket
@@ -1100,16 +1138,6 @@ void tcp_release_cb(struct sock *sk)
tcp_tsq_write(sk);
__sock_put(sk);
}
- /* Here begins the tricky part :
- * We are called from release_sock() with :
- * 1) BH disabled
- * 2) sk_lock.slock spinlock held
- * 3) socket owned by us (sk->sk_lock.owned == 1)
- *
- * But following code is meant to be called from BH handlers,
- * so we should keep BH disabled, but early release socket ownership
- */
- sock_release_ownership(sk);
if (flags & TCPF_WRITE_TIMER_DEFERRED) {
tcp_write_timer_handler(sk);
@@ -1123,6 +1151,8 @@ void tcp_release_cb(struct sock *sk)
inet_csk(sk)->icsk_af_ops->mtu_reduced(sk);
__sock_put(sk);
}
+ if ((flags & TCPF_ACK_DEFERRED) && inet_csk_ack_scheduled(sk))
+ tcp_send_ack(sk);
}
EXPORT_SYMBOL(tcp_release_cb);
@@ -1207,7 +1237,7 @@ static void tcp_update_skb_after_send(struct sock *sk, struct sk_buff *skb,
struct tcp_sock *tp = tcp_sk(sk);
if (sk->sk_pacing_status != SK_PACING_NONE) {
- unsigned long rate = sk->sk_pacing_rate;
+ unsigned long rate = READ_ONCE(sk->sk_pacing_rate);
/* Original sch_fq does not pace first 10 MSS
* Note that tp->data_segs_out overflows after 2^32 packets,
@@ -1250,7 +1280,7 @@ static int __tcp_transmit_skb(struct sock *sk, struct sk_buff *skb,
struct tcp_out_options opts;
unsigned int tcp_options_size, tcp_header_size;
struct sk_buff *oskb = NULL;
- struct tcp_md5sig_key *md5;
+ struct tcp_key key;
struct tcphdr *th;
u64 prior_wstamp;
int err;
@@ -1282,11 +1312,11 @@ static int __tcp_transmit_skb(struct sock *sk, struct sk_buff *skb,
tcb = TCP_SKB_CB(skb);
memset(&opts, 0, sizeof(opts));
+ tcp_get_current_key(sk, &key);
if (unlikely(tcb->tcp_flags & TCPHDR_SYN)) {
- tcp_options_size = tcp_syn_options(sk, skb, &opts, &md5);
+ tcp_options_size = tcp_syn_options(sk, skb, &opts, &key);
} else {
- tcp_options_size = tcp_established_options(sk, skb, &opts,
- &md5);
+ tcp_options_size = tcp_established_options(sk, skb, &opts, &key);
/* Force a PSH flag on all (GSO) packets to expedite GRO flush
* at receiver : This slightly improve GRO performance.
* Note that we do not force the PSH flag for non GSO packets,
@@ -1331,7 +1361,7 @@ static int __tcp_transmit_skb(struct sock *sk, struct sk_buff *skb,
skb->destructor = skb_is_tcp_pure_ack(skb) ? __sock_wfree : tcp_wfree;
refcount_add(skb->truesize, &sk->sk_wmem_alloc);
- skb_set_dst_pending_confirm(skb, sk->sk_dst_pending_confirm);
+ skb_set_dst_pending_confirm(skb, READ_ONCE(sk->sk_dst_pending_confirm));
/* Build TCP header and checksum it. */
th = (struct tcphdr *)skb->data;
@@ -1367,16 +1397,25 @@ static int __tcp_transmit_skb(struct sock *sk, struct sk_buff *skb,
th->window = htons(min(tp->rcv_wnd, 65535U));
}
- tcp_options_write(th, tp, &opts);
+ tcp_options_write(th, tp, NULL, &opts, &key);
+ if (tcp_key_is_md5(&key)) {
#ifdef CONFIG_TCP_MD5SIG
- /* Calculate the MD5 hash, as we have all we need now */
- if (md5) {
+ /* Calculate the MD5 hash, as we have all we need now */
sk_gso_disable(sk);
tp->af_specific->calc_md5_hash(opts.hash_location,
- md5, sk, skb);
- }
+ key.md5_key, sk, skb);
#endif
+ } else if (tcp_key_is_ao(&key)) {
+ int err;
+
+ err = tcp_ao_transmit_skb(sk, skb, key.ao_key, th,
+ opts.hash_location);
+ if (err) {
+ kfree_skb_reason(skb, SKB_DROP_REASON_NOT_SPECIFIED);
+ return -ENOMEM;
+ }
+ }
/* BPF prog is the last one writing header option */
bpf_skops_write_hdr_opt(sk, skb, NULL, NULL, 0, &opts);
@@ -1703,14 +1742,6 @@ static inline int __tcp_mtu_to_mss(struct sock *sk, int pmtu)
*/
mss_now = pmtu - icsk->icsk_af_ops->net_header_len - sizeof(struct tcphdr);
- /* IPv6 adds a frag_hdr in case RTAX_FEATURE_ALLFRAG is set */
- if (icsk->icsk_af_ops->net_frag_header_len) {
- const struct dst_entry *dst = __sk_dst_get(sk);
-
- if (dst && dst_allfrag(dst))
- mss_now -= icsk->icsk_af_ops->net_frag_header_len;
- }
-
/* Clamp it (mss_clamp does not include tcp options) */
if (mss_now > tp->rx_opt.mss_clamp)
mss_now = tp->rx_opt.mss_clamp;
@@ -1738,21 +1769,11 @@ int tcp_mss_to_mtu(struct sock *sk, int mss)
{
const struct tcp_sock *tp = tcp_sk(sk);
const struct inet_connection_sock *icsk = inet_csk(sk);
- int mtu;
- mtu = mss +
+ return mss +
tp->tcp_header_len +
icsk->icsk_ext_hdr_len +
icsk->icsk_af_ops->net_header_len;
-
- /* IPv6 adds a frag_hdr in case RTAX_FEATURE_ALLFRAG is set */
- if (icsk->icsk_af_ops->net_frag_header_len) {
- const struct dst_entry *dst = __sk_dst_get(sk);
-
- if (dst && dst_allfrag(dst))
- mtu += icsk->icsk_af_ops->net_frag_header_len;
- }
- return mtu;
}
EXPORT_SYMBOL(tcp_mss_to_mtu);
@@ -1827,7 +1848,7 @@ unsigned int tcp_current_mss(struct sock *sk)
u32 mss_now;
unsigned int header_len;
struct tcp_out_options opts;
- struct tcp_md5sig_key *md5;
+ struct tcp_key key;
mss_now = tp->mss_cache;
@@ -1836,8 +1857,8 @@ unsigned int tcp_current_mss(struct sock *sk)
if (mtu != inet_csk(sk)->icsk_pmtu_cookie)
mss_now = tcp_sync_mss(sk, mtu);
}
-
- header_len = tcp_established_options(sk, NULL, &opts, &md5) +
+ tcp_get_current_key(sk, &key);
+ header_len = tcp_established_options(sk, NULL, &opts, &key) +
sizeof(struct tcphdr);
/* The mss_cache is sized based on tp->tcp_header_len, which assumes
* some common options. If this is an odd packet (because we have SACK
@@ -1979,7 +2000,7 @@ static u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
unsigned long bytes;
u32 r;
- bytes = sk->sk_pacing_rate >> READ_ONCE(sk->sk_pacing_shift);
+ bytes = READ_ONCE(sk->sk_pacing_rate) >> READ_ONCE(sk->sk_pacing_shift);
r = tcp_min_rtt(tcp_sk(sk)) >> READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_tso_rtt_log);
if (r < BITS_PER_TYPE(sk->sk_gso_max_size))
@@ -2572,7 +2593,7 @@ static bool tcp_small_queue_check(struct sock *sk, const struct sk_buff *skb,
limit = max_t(unsigned long,
2 * skb->truesize,
- sk->sk_pacing_rate >> READ_ONCE(sk->sk_pacing_shift));
+ READ_ONCE(sk->sk_pacing_rate) >> READ_ONCE(sk->sk_pacing_shift));
if (sk->sk_pacing_status == SK_PACING_NONE)
limit = min_t(unsigned long, limit,
READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_limit_output_bytes));
@@ -2580,7 +2601,8 @@ static bool tcp_small_queue_check(struct sock *sk, const struct sk_buff *skb,
if (static_branch_unlikely(&tcp_tx_delay_enabled) &&
tcp_sk(sk)->tcp_tx_delay) {
- u64 extra_bytes = (u64)sk->sk_pacing_rate * tcp_sk(sk)->tcp_tx_delay;
+ u64 extra_bytes = (u64)READ_ONCE(sk->sk_pacing_rate) *
+ tcp_sk(sk)->tcp_tx_delay;
/* TSQ is based on skb truesize sum (sk_wmem_alloc), so we
* approximate our needs assuming an ~100% skb->truesize overhead.
@@ -3385,7 +3407,7 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb, int segs)
/* Save stamp of the first (attempted) retransmit. */
if (!tp->retrans_stamp)
- tp->retrans_stamp = tcp_skb_timestamp(skb);
+ tp->retrans_stamp = tcp_skb_timestamp_ts(tp->tcp_usec_ts, skb);
if (tp->undo_retrans < 0)
tp->undo_retrans = 0;
@@ -3633,8 +3655,8 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
{
struct inet_request_sock *ireq = inet_rsk(req);
const struct tcp_sock *tp = tcp_sk(sk);
- struct tcp_md5sig_key *md5 = NULL;
struct tcp_out_options opts;
+ struct tcp_key key = {};
struct sk_buff *skb;
int tcp_header_size;
struct tcphdr *th;
@@ -3671,6 +3693,8 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
mss = tcp_mss_clamp(tp, dst_metric_advmss(dst));
memset(&opts, 0, sizeof(opts));
+ if (tcp_rsk(req)->req_usec_ts < 0)
+ tcp_rsk(req)->req_usec_ts = dst_tcp_usec_ts(dst);
now = tcp_clock_ns();
#ifdef CONFIG_SYN_COOKIES
if (unlikely(synack_type == TCP_SYNACK_COOKIE && ireq->tstamp_ok))
@@ -3684,16 +3708,48 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
tcp_rsk(req)->snt_synack = tcp_skb_timestamp_us(skb);
}
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
rcu_read_lock();
- md5 = tcp_rsk(req)->af_specific->req_md5_lookup(sk, req_to_sk(req));
#endif
+ if (tcp_rsk_used_ao(req)) {
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_key *ao_key = NULL;
+ u8 maclen = tcp_rsk(req)->maclen;
+ u8 keyid = tcp_rsk(req)->ao_keyid;
+
+ ao_key = tcp_sk(sk)->af_specific->ao_lookup(sk, req_to_sk(req),
+ keyid, -1);
+ /* If there is no matching key - avoid sending anything,
+ * especially unsigned segments. It could try harder and look up
+ * another peer-matching key, but the peer has requested
+ * ao_keyid (RFC5925 RNextKeyID), so let's keep it simple here.
+ */
+ if (unlikely(!ao_key || tcp_ao_maclen(ao_key) != maclen)) {
+ u8 key_maclen = ao_key ? tcp_ao_maclen(ao_key) : 0;
+
+ rcu_read_unlock();
+ kfree_skb(skb);
+ net_warn_ratelimited("TCP-AO: the keyid %u with maclen %u|%u from SYN packet is not present - not sending SYNACK\n",
+ keyid, maclen, key_maclen);
+ return NULL;
+ }
+ key.ao_key = ao_key;
+ key.type = TCP_KEY_AO;
+#endif
+ } else {
+#ifdef CONFIG_TCP_MD5SIG
+ key.md5_key = tcp_rsk(req)->af_specific->req_md5_lookup(sk,
+ req_to_sk(req));
+ if (key.md5_key)
+ key.type = TCP_KEY_MD5;
+#endif
+ }
skb_set_hash(skb, READ_ONCE(tcp_rsk(req)->txhash), PKT_HASH_TYPE_L4);
/* bpf program will be interested in the tcp_flags */
TCP_SKB_CB(skb)->tcp_flags = TCPHDR_SYN | TCPHDR_ACK;
- tcp_header_size = tcp_synack_options(sk, req, mss, skb, &opts, md5,
- foc, synack_type,
- syn_skb) + sizeof(*th);
+ tcp_header_size = tcp_synack_options(sk, req, mss, skb, &opts,
+ &key, foc, synack_type, syn_skb)
+ + sizeof(*th);
skb_push(skb, tcp_header_size);
skb_reset_transport_header(skb);
@@ -3713,15 +3769,24 @@ struct sk_buff *tcp_make_synack(const struct sock *sk, struct dst_entry *dst,
/* RFC1323: The window in SYN & SYN/ACK segments is never scaled. */
th->window = htons(min(req->rsk_rcv_wnd, 65535U));
- tcp_options_write(th, NULL, &opts);
+ tcp_options_write(th, NULL, tcp_rsk(req), &opts, &key);
th->doff = (tcp_header_size >> 2);
TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTSEGS);
-#ifdef CONFIG_TCP_MD5SIG
/* Okay, we have all we need - do the md5 hash if needed */
- if (md5)
+ if (tcp_key_is_md5(&key)) {
+#ifdef CONFIG_TCP_MD5SIG
tcp_rsk(req)->af_specific->calc_md5_hash(opts.hash_location,
- md5, req_to_sk(req), skb);
+ key.md5_key, req_to_sk(req), skb);
+#endif
+ } else if (tcp_key_is_ao(&key)) {
+#ifdef CONFIG_TCP_AO
+ tcp_rsk(req)->af_specific->ao_synack_hash(opts.hash_location,
+ key.ao_key, req, skb,
+ opts.hash_location - (u8 *)th, 0);
+#endif
+ }
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
rcu_read_unlock();
#endif
@@ -3769,6 +3834,8 @@ static void tcp_connect_init(struct sock *sk)
if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_timestamps))
tp->tcp_header_len += TCPOLEN_TSTAMP_ALIGNED;
+ tcp_ao_connect_init(sk);
+
/* If user gave his TCP_MAXSEG, record it to clamp */
if (tp->rx_opt.user_mss)
tp->rx_opt.mss_clamp = tp->rx_opt.user_mss;
@@ -3951,6 +4018,53 @@ int tcp_connect(struct sock *sk)
tcp_call_bpf(sk, BPF_SOCK_OPS_TCP_CONNECT_CB, 0, NULL);
+#if defined(CONFIG_TCP_MD5SIG) && defined(CONFIG_TCP_AO)
+ /* Has to be checked late, after setting daddr/saddr/ops.
+ * Return error if the peer has both a md5 and a tcp-ao key
+ * configured as this is ambiguous.
+ */
+ if (unlikely(rcu_dereference_protected(tp->md5sig_info,
+ lockdep_sock_is_held(sk)))) {
+ bool needs_ao = !!tp->af_specific->ao_lookup(sk, sk, -1, -1);
+ bool needs_md5 = !!tp->af_specific->md5_lookup(sk, sk);
+ struct tcp_ao_info *ao_info;
+
+ ao_info = rcu_dereference_check(tp->ao_info,
+ lockdep_sock_is_held(sk));
+ if (ao_info) {
+ /* This is an extra check: tcp_ao_required() in
+ * tcp_v{4,6}_parse_md5_keys() should prevent adding
+ * md5 keys on ao_required socket.
+ */
+ needs_ao |= ao_info->ao_required;
+ WARN_ON_ONCE(ao_info->ao_required && needs_md5);
+ }
+ if (needs_md5 && needs_ao)
+ return -EKEYREJECTED;
+
+ /* If we have a matching md5 key and no matching tcp-ao key
+ * then free up ao_info if allocated.
+ */
+ if (needs_md5) {
+ tcp_ao_destroy_sock(sk, false);
+ } else if (needs_ao) {
+ tcp_clear_md5_list(sk);
+ kfree(rcu_replace_pointer(tp->md5sig_info, NULL,
+ lockdep_sock_is_held(sk)));
+ }
+ }
+#endif
+#ifdef CONFIG_TCP_AO
+ if (unlikely(rcu_dereference_protected(tp->ao_info,
+ lockdep_sock_is_held(sk)))) {
+ /* Don't allow connecting if ao is configured but no
+ * matching key is found.
+ */
+ if (!tp->af_specific->ao_lookup(sk, sk, -1, -1))
+ return -EKEYREJECTED;
+ }
+#endif
+
if (inet_csk(sk)->icsk_af_ops->rebuild_header(sk))
return -EHOSTUNREACH; /* Routing failure or similar. */
@@ -3967,7 +4081,7 @@ int tcp_connect(struct sock *sk)
tcp_init_nondata_skb(buff, tp->write_seq++, TCPHDR_SYN);
tcp_mstamp_refresh(tp);
- tp->retrans_stamp = tcp_time_stamp(tp);
+ tp->retrans_stamp = tcp_time_stamp_ts(tp);
tcp_connect_queue_skb(sk, buff);
tcp_ecn_send_syn(sk, buff);
tcp_rbtree_insert(&sk->tcp_rtx_queue, buff);
@@ -3997,6 +4111,20 @@ int tcp_connect(struct sock *sk)
}
EXPORT_SYMBOL(tcp_connect);
+u32 tcp_delack_max(const struct sock *sk)
+{
+ const struct dst_entry *dst = __sk_dst_get(sk);
+ u32 delack_max = inet_csk(sk)->icsk_delack_max;
+
+ if (dst && dst_metric_locked(dst, RTAX_RTO_MIN)) {
+ u32 rto_min = dst_metric_rtt(dst, RTAX_RTO_MIN);
+ u32 delack_from_rto_min = max_t(int, 1, rto_min - 1);
+
+ delack_max = min_t(u32, delack_max, delack_from_rto_min);
+ }
+ return delack_max;
+}
+
/* Send out a delayed ack, the caller does the policy checking
* to see if we should even be here. See tcp_input.c:tcp_ack_snd_check()
* for details.
@@ -4032,7 +4160,7 @@ void tcp_send_delayed_ack(struct sock *sk)
ato = min(ato, max_ato);
}
- ato = min_t(u32, ato, inet_csk(sk)->icsk_delack_max);
+ ato = min_t(u32, ato, tcp_delack_max(sk));
/* Stay within the limit we were given */
timeout = jiffies + ato;
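The new tcp_delack_max() above caps the delayed-ACK timer by a route-locked RTAX_RTO_MIN, so a per-route RTO floor also bounds how long an ACK may be held back. A small worked sketch of just the clamp (plain integers instead of jiffies; not the kernel function itself):

#include <stdint.h>
#include <stdbool.h>

static inline uint32_t min_u32(uint32_t a, uint32_t b)
{
	return a < b ? a : b;
}

/* If the route pins rto_min, never delay an ACK longer than
 * rto_min - 1, but at least 1 tick — matching max_t(int, 1, rto_min - 1).
 */
static uint32_t delack_max(uint32_t icsk_delack_max,
			   bool rto_min_locked, uint32_t rto_min)
{
	uint32_t limit = icsk_delack_max;

	if (rto_min_locked) {
		uint32_t from_rto_min = rto_min > 1 ? rto_min - 1 : 1;

		limit = min_u32(limit, from_rto_min);
	}
	return limit;
}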
diff --git a/net/ipv4/tcp_sigpool.c b/net/ipv4/tcp_sigpool.c
new file mode 100644
index 000000000000..65a8eaae2fec
--- /dev/null
+++ b/net/ipv4/tcp_sigpool.c
@@ -0,0 +1,358 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <crypto/hash.h>
+#include <linux/cpu.h>
+#include <linux/kref.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/percpu.h>
+#include <linux/workqueue.h>
+#include <net/tcp.h>
+
+static size_t __scratch_size;
+static DEFINE_PER_CPU(void __rcu *, sigpool_scratch);
+
+struct sigpool_entry {
+ struct crypto_ahash *hash;
+ const char *alg;
+ struct kref kref;
+ uint16_t needs_key:1,
+ reserved:15;
+};
+
+#define CPOOL_SIZE (PAGE_SIZE / sizeof(struct sigpool_entry))
+static struct sigpool_entry cpool[CPOOL_SIZE];
+static unsigned int cpool_populated;
+static DEFINE_MUTEX(cpool_mutex);
+
+/* Slow-path */
+struct scratches_to_free {
+ struct rcu_head rcu;
+ unsigned int cnt;
+ void *scratches[];
+};
+
+static void free_old_scratches(struct rcu_head *head)
+{
+ struct scratches_to_free *stf;
+
+ stf = container_of(head, struct scratches_to_free, rcu);
+ while (stf->cnt--)
+ kfree(stf->scratches[stf->cnt]);
+ kfree(stf);
+}
+
+/**
+ * sigpool_reserve_scratch - re-allocates scratch buffer, slow-path
+ * @size: request size for the scratch/temp buffer
+ */
+static int sigpool_reserve_scratch(size_t size)
+{
+ struct scratches_to_free *stf;
+ size_t stf_sz = struct_size(stf, scratches, num_possible_cpus());
+ int cpu, err = 0;
+
+ lockdep_assert_held(&cpool_mutex);
+ if (__scratch_size >= size)
+ return 0;
+
+ stf = kmalloc(stf_sz, GFP_KERNEL);
+ if (!stf)
+ return -ENOMEM;
+ stf->cnt = 0;
+
+ size = max(size, __scratch_size);
+ cpus_read_lock();
+ for_each_possible_cpu(cpu) {
+ void *scratch, *old_scratch;
+
+ scratch = kmalloc_node(size, GFP_KERNEL, cpu_to_node(cpu));
+ if (!scratch) {
+ err = -ENOMEM;
+ break;
+ }
+
+ old_scratch = rcu_replace_pointer(per_cpu(sigpool_scratch, cpu),
+ scratch, lockdep_is_held(&cpool_mutex));
+ if (!cpu_online(cpu) || !old_scratch) {
+ kfree(old_scratch);
+ continue;
+ }
+ stf->scratches[stf->cnt++] = old_scratch;
+ }
+ cpus_read_unlock();
+ if (!err)
+ __scratch_size = size;
+
+ call_rcu(&stf->rcu, free_old_scratches);
+ return err;
+}
+
+static void sigpool_scratch_free(void)
+{
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ kfree(rcu_replace_pointer(per_cpu(sigpool_scratch, cpu),
+ NULL, lockdep_is_held(&cpool_mutex)));
+ __scratch_size = 0;
+}
+
+static int __cpool_try_clone(struct crypto_ahash *hash)
+{
+ struct crypto_ahash *tmp;
+
+ tmp = crypto_clone_ahash(hash);
+ if (IS_ERR(tmp))
+ return PTR_ERR(tmp);
+
+ crypto_free_ahash(tmp);
+ return 0;
+}
+
+static int __cpool_alloc_ahash(struct sigpool_entry *e, const char *alg)
+{
+ struct crypto_ahash *cpu0_hash;
+ int ret;
+
+ e->alg = kstrdup(alg, GFP_KERNEL);
+ if (!e->alg)
+ return -ENOMEM;
+
+ cpu0_hash = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC);
+ if (IS_ERR(cpu0_hash)) {
+ ret = PTR_ERR(cpu0_hash);
+ goto out_free_alg;
+ }
+
+ e->needs_key = crypto_ahash_get_flags(cpu0_hash) & CRYPTO_TFM_NEED_KEY;
+
+ ret = __cpool_try_clone(cpu0_hash);
+ if (ret)
+ goto out_free_cpu0_hash;
+ e->hash = cpu0_hash;
+ kref_init(&e->kref);
+ return 0;
+
+out_free_cpu0_hash:
+ crypto_free_ahash(cpu0_hash);
+out_free_alg:
+ kfree(e->alg);
+ e->alg = NULL;
+ return ret;
+}
+
+/**
+ * tcp_sigpool_alloc_ahash - allocates pool for ahash requests
+ * @alg: name of async hash algorithm
+ * @scratch_size: reserve a tcp_sigpool::scratch buffer of this size
+ */
+int tcp_sigpool_alloc_ahash(const char *alg, size_t scratch_size)
+{
+ int i, ret;
+
+ /* slow-path */
+ mutex_lock(&cpool_mutex);
+ ret = sigpool_reserve_scratch(scratch_size);
+ if (ret)
+ goto out;
+ for (i = 0; i < cpool_populated; i++) {
+ if (!cpool[i].alg)
+ continue;
+ if (strcmp(cpool[i].alg, alg))
+ continue;
+
+ if (kref_read(&cpool[i].kref) > 0)
+ kref_get(&cpool[i].kref);
+ else
+ kref_init(&cpool[i].kref);
+ ret = i;
+ goto out;
+ }
+
+ for (i = 0; i < cpool_populated; i++) {
+ if (!cpool[i].alg)
+ break;
+ }
+ if (i >= CPOOL_SIZE) {
+ ret = -ENOSPC;
+ goto out;
+ }
+
+ ret = __cpool_alloc_ahash(&cpool[i], alg);
+ if (!ret) {
+ ret = i;
+ if (i == cpool_populated)
+ cpool_populated++;
+ }
+out:
+ mutex_unlock(&cpool_mutex);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(tcp_sigpool_alloc_ahash);
+
+static void __cpool_free_entry(struct sigpool_entry *e)
+{
+ crypto_free_ahash(e->hash);
+ kfree(e->alg);
+ memset(e, 0, sizeof(*e));
+}
+
+static void cpool_cleanup_work_cb(struct work_struct *work)
+{
+ bool free_scratch = true;
+ unsigned int i;
+
+ mutex_lock(&cpool_mutex);
+ for (i = 0; i < cpool_populated; i++) {
+ if (kref_read(&cpool[i].kref) > 0) {
+ free_scratch = false;
+ continue;
+ }
+ if (!cpool[i].alg)
+ continue;
+ __cpool_free_entry(&cpool[i]);
+ }
+ if (free_scratch)
+ sigpool_scratch_free();
+ mutex_unlock(&cpool_mutex);
+}
+
+static DECLARE_WORK(cpool_cleanup_work, cpool_cleanup_work_cb);
+static void cpool_schedule_cleanup(struct kref *kref)
+{
+ schedule_work(&cpool_cleanup_work);
+}
+
+/**
+ * tcp_sigpool_release - decreases number of users for a pool. If it was
+ * the last user of the pool, releases any memory that was consumed.
+ * @id: tcp_sigpool that was previously allocated by tcp_sigpool_alloc_ahash()
+ */
+void tcp_sigpool_release(unsigned int id)
+{
+ if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg))
+ return;
+
+ /* slow-path */
+ kref_put(&cpool[id].kref, cpool_schedule_cleanup);
+}
+EXPORT_SYMBOL_GPL(tcp_sigpool_release);
+
+/**
+ * tcp_sigpool_get - increases number of users (refcounter) for a pool
+ * @id: tcp_sigpool that was previously allocated by tcp_sigpool_alloc_ahash()
+ */
+void tcp_sigpool_get(unsigned int id)
+{
+ if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg))
+ return;
+ kref_get(&cpool[id].kref);
+}
+EXPORT_SYMBOL_GPL(tcp_sigpool_get);
+
+int tcp_sigpool_start(unsigned int id, struct tcp_sigpool *c) __cond_acquires(RCU_BH)
+{
+ struct crypto_ahash *hash;
+
+ rcu_read_lock_bh();
+ if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg)) {
+ rcu_read_unlock_bh();
+ return -EINVAL;
+ }
+
+ hash = crypto_clone_ahash(cpool[id].hash);
+ if (IS_ERR(hash)) {
+ rcu_read_unlock_bh();
+ return PTR_ERR(hash);
+ }
+
+ c->req = ahash_request_alloc(hash, GFP_ATOMIC);
+ if (!c->req) {
+ crypto_free_ahash(hash);
+ rcu_read_unlock_bh();
+ return -ENOMEM;
+ }
+ ahash_request_set_callback(c->req, 0, NULL, NULL);
+
+ /* Pairs with sigpool_reserve_scratch(): the scratch area stays
+ * valid (allocated) until tcp_sigpool_end().
+ */
+ c->scratch = rcu_dereference_bh(*this_cpu_ptr(&sigpool_scratch));
+ return 0;
+}
+EXPORT_SYMBOL_GPL(tcp_sigpool_start);
+
+void tcp_sigpool_end(struct tcp_sigpool *c) __releases(RCU_BH)
+{
+ struct crypto_ahash *hash = crypto_ahash_reqtfm(c->req);
+
+ rcu_read_unlock_bh();
+ ahash_request_free(c->req);
+ crypto_free_ahash(hash);
+}
+EXPORT_SYMBOL_GPL(tcp_sigpool_end);
+
+/**
+ * tcp_sigpool_algo - return algorithm of tcp_sigpool
+ * @id: tcp_sigpool that was previously allocated by tcp_sigpool_alloc_ahash()
+ * @buf: buffer to return name of algorithm
+ * @buf_len: size of @buf
+ */
+size_t tcp_sigpool_algo(unsigned int id, char *buf, size_t buf_len)
+{
+ if (WARN_ON_ONCE(id >= cpool_populated || !cpool[id].alg))
+ return -EINVAL;
+
+ return strscpy(buf, cpool[id].alg, buf_len);
+}
+EXPORT_SYMBOL_GPL(tcp_sigpool_algo);
+
+/**
+ * tcp_sigpool_hash_skb_data - hash data in skb with initialized tcp_sigpool
+ * @hp: tcp_sigpool pointer
+ * @skb: buffer to add sign for
+ * @header_len: TCP header length for this segment
+ */
+int tcp_sigpool_hash_skb_data(struct tcp_sigpool *hp,
+ const struct sk_buff *skb,
+ unsigned int header_len)
+{
+ const unsigned int head_data_len = skb_headlen(skb) > header_len ?
+ skb_headlen(skb) - header_len : 0;
+ const struct skb_shared_info *shi = skb_shinfo(skb);
+ const struct tcphdr *tp = tcp_hdr(skb);
+ struct ahash_request *req = hp->req;
+ struct sk_buff *frag_iter;
+ struct scatterlist sg;
+ unsigned int i;
+
+ sg_init_table(&sg, 1);
+
+ sg_set_buf(&sg, ((u8 *)tp) + header_len, head_data_len);
+ ahash_request_set_crypt(req, &sg, NULL, head_data_len);
+ if (crypto_ahash_update(req))
+ return 1;
+
+ for (i = 0; i < shi->nr_frags; ++i) {
+ const skb_frag_t *f = &shi->frags[i];
+ unsigned int offset = skb_frag_off(f);
+ struct page *page;
+
+ page = skb_frag_page(f) + (offset >> PAGE_SHIFT);
+ sg_set_page(&sg, page, skb_frag_size(f), offset_in_page(offset));
+ ahash_request_set_crypt(req, &sg, NULL, skb_frag_size(f));
+ if (crypto_ahash_update(req))
+ return 1;
+ }
+
+ skb_walk_frags(skb, frag_iter)
+ if (tcp_sigpool_hash_skb_data(hp, frag_iter, 0))
+ return 1;
+
+ return 0;
+}
+EXPORT_SYMBOL(tcp_sigpool_hash_skb_data);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Per-CPU pool of crypto requests");
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index 984ab4a0421e..1f9f6c1c196b 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -26,14 +26,18 @@
static u32 tcp_clamp_rto_to_user_timeout(const struct sock *sk)
{
struct inet_connection_sock *icsk = inet_csk(sk);
- u32 elapsed, start_ts, user_timeout;
+ const struct tcp_sock *tp = tcp_sk(sk);
+ u32 elapsed, user_timeout;
s32 remaining;
- start_ts = tcp_sk(sk)->retrans_stamp;
user_timeout = READ_ONCE(icsk->icsk_user_timeout);
if (!user_timeout)
return icsk->icsk_rto;
- elapsed = tcp_time_stamp(tcp_sk(sk)) - start_ts;
+
+ elapsed = tcp_time_stamp_ts(tp) - tp->retrans_stamp;
+ if (tp->tcp_usec_ts)
+ elapsed /= USEC_PER_MSEC;
+
remaining = user_timeout - elapsed;
if (remaining <= 0)
return 1; /* user timeout has passed; fire ASAP */
@@ -212,12 +216,13 @@ static bool retransmits_timed_out(struct sock *sk,
unsigned int boundary,
unsigned int timeout)
{
- unsigned int start_ts;
+ struct tcp_sock *tp = tcp_sk(sk);
+ unsigned int start_ts, delta;
if (!inet_csk(sk)->icsk_retransmits)
return false;
- start_ts = tcp_sk(sk)->retrans_stamp;
+ start_ts = tp->retrans_stamp;
if (likely(timeout == 0)) {
unsigned int rto_base = TCP_RTO_MIN;
@@ -226,7 +231,12 @@ static bool retransmits_timed_out(struct sock *sk,
timeout = tcp_model_timeout(sk, boundary, rto_base);
}
- return (s32)(tcp_time_stamp(tcp_sk(sk)) - start_ts - timeout) >= 0;
+ if (tp->tcp_usec_ts) {
+ /* delta may be off by up to a jiffy due to timer granularity. */
+ delta = tp->tcp_mstamp - start_ts + jiffies_to_usecs(1);
+ return (s32)(delta - timeout * USEC_PER_MSEC) >= 0;
+ }
+ return (s32)(tcp_time_stamp_ts(tp) - start_ts - timeout) >= 0;
}
/* A write timeout has occurred. Process the after effects. */
@@ -322,7 +332,7 @@ void tcp_delack_timer_handler(struct sock *sk)
if (inet_csk_ack_scheduled(sk)) {
if (!inet_csk_in_pingpong_mode(sk)) {
/* Delayed ACK missed: inflate ATO. */
- icsk->icsk_ack.ato = min(icsk->icsk_ack.ato << 1, icsk->icsk_rto);
+ icsk->icsk_ack.ato = min_t(u32, icsk->icsk_ack.ato << 1, icsk->icsk_rto);
} else {
/* Delayed ACK missed: leave pingpong mode and
* deflate ATO.
@@ -394,7 +404,7 @@ static void tcp_probe_timer(struct sock *sk)
if (user_timeout &&
(s32)(tcp_jiffies32 - icsk->icsk_probes_tstamp) >=
msecs_to_jiffies(user_timeout))
- goto abort;
+ goto abort;
}
max_probes = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_retries2);
if (sock_flag(sk, SOCK_DEAD)) {
@@ -415,6 +425,19 @@ abort: tcp_write_err(sk);
}
}
+static void tcp_update_rto_stats(struct sock *sk)
+{
+ struct inet_connection_sock *icsk = inet_csk(sk);
+ struct tcp_sock *tp = tcp_sk(sk);
+
+ if (!icsk->icsk_retransmits) {
+ tp->total_rto_recoveries++;
+ tp->rto_stamp = tcp_time_stamp_ms(tp);
+ }
+ icsk->icsk_retransmits++;
+ tp->total_rto++;
+}
+
/*
* Timer for Fast Open socket to retransmit SYNACK. Note that the
* sk here is the child socket, not the parent (listener) socket.
@@ -447,28 +470,26 @@ static void tcp_fastopen_synack_timer(struct sock *sk, struct request_sock *req)
*/
inet_rtx_syn_ack(sk, req);
req->num_timeout++;
- icsk->icsk_retransmits++;
+ tcp_update_rto_stats(sk);
if (!tp->retrans_stamp)
- tp->retrans_stamp = tcp_time_stamp(tp);
+ tp->retrans_stamp = tcp_time_stamp_ts(tp);
inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS,
req->timeout << req->num_timeout, TCP_RTO_MAX);
}
static bool tcp_rtx_probe0_timed_out(const struct sock *sk,
- const struct sk_buff *skb)
+ const struct sk_buff *skb,
+ u32 rtx_delta)
{
const struct tcp_sock *tp = tcp_sk(sk);
const int timeout = TCP_RTO_MAX * 2;
- u32 rcv_delta, rtx_delta;
+ u32 rcv_delta;
rcv_delta = inet_csk(sk)->icsk_timeout - tp->rcv_tstamp;
if (rcv_delta <= timeout)
return false;
- rtx_delta = (u32)msecs_to_jiffies(tcp_time_stamp(tp) -
- (tp->retrans_stamp ?: tcp_skb_timestamp(skb)));
-
- return rtx_delta > timeout;
+ return msecs_to_jiffies(rtx_delta) > timeout;
}
/**
@@ -521,7 +542,11 @@ void tcp_retransmit_timer(struct sock *sk)
struct inet_sock *inet = inet_sk(sk);
u32 rtx_delta;
- rtx_delta = tcp_time_stamp(tp) - (tp->retrans_stamp ?: tcp_skb_timestamp(skb));
+ rtx_delta = tcp_time_stamp_ts(tp) - (tp->retrans_stamp ?:
+ tcp_skb_timestamp_ts(tp->tcp_usec_ts, skb));
+ if (tp->tcp_usec_ts)
+ rtx_delta /= USEC_PER_MSEC;
+
if (sk->sk_family == AF_INET) {
net_dbg_ratelimited("Probing zero-window on %pI4:%u/%u, seq=%u:%u, recv %ums ago, lasting %ums\n",
&inet->inet_daddr, ntohs(inet->inet_dport),
@@ -538,7 +563,7 @@ void tcp_retransmit_timer(struct sock *sk)
rtx_delta);
}
#endif
- if (tcp_rtx_probe0_timed_out(sk, skb)) {
+ if (tcp_rtx_probe0_timed_out(sk, skb, rtx_delta)) {
tcp_write_err(sk);
goto out;
}
@@ -575,7 +600,7 @@ void tcp_retransmit_timer(struct sock *sk)
tcp_enter_loss(sk);
- icsk->icsk_retransmits++;
+ tcp_update_rto_stats(sk);
if (tcp_retransmit_skb(sk, tcp_rtx_queue_head(sk), 1) > 0) {
/* Retransmission failed because of local congestion,
* Let senders fight for local resources conservatively.
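The tcp_timer.c hunks above convert retransmission timestamp deltas to milliseconds only when the socket keeps microsecond-resolution timestamps (tp->tcp_usec_ts). A hypothetical helper capturing that normalisation, purely for illustration and not part of the patch:

#include <net/tcp.h>

static u32 example_elapsed_ms(const struct tcp_sock *tp, u32 start_ts)
{
	u32 elapsed = tcp_time_stamp_ts(tp) - start_ts;

	/* with usec timestamps the delta is in usec; scale it down */
	if (tp->tcp_usec_ts)
		elapsed /= USEC_PER_MSEC;
	return elapsed;
}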
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index f39b9c844580..1734fd6a1ce0 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -714,7 +714,7 @@ int __udp4_lib_err(struct sk_buff *skb, u32 info, struct udp_table *udptable)
iph->saddr, uh->source, skb->dev->ifindex,
inet_sdif(skb), udptable, NULL);
- if (!sk || udp_sk(sk)->encap_type) {
+ if (!sk || READ_ONCE(udp_sk(sk)->encap_type)) {
/* No socket for error: try tunnels before discarding */
if (static_branch_unlikely(&udp_encap_needed_key)) {
sk = __udp4_lib_err_encap(net, iph, uh, udptable, sk, skb,
@@ -750,7 +750,7 @@ int __udp4_lib_err(struct sk_buff *skb, u32 info, struct udp_table *udptable)
case ICMP_DEST_UNREACH:
if (code == ICMP_FRAG_NEEDED) { /* Path MTU discovery */
ipv4_sk_update_pmtu(skb, sk, info);
- if (inet->pmtudisc != IP_PMTUDISC_DONT) {
+ if (READ_ONCE(inet->pmtudisc) != IP_PMTUDISC_DONT) {
err = EMSGSIZE;
harderr = 1;
break;
@@ -1051,10 +1051,11 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
u8 tos, scope;
__be16 dport;
int err, is_udplite = IS_UDPLITE(sk);
- int corkreq = READ_ONCE(up->corkflag) || msg->msg_flags&MSG_MORE;
+ int corkreq = udp_test_bit(CORK, sk) || msg->msg_flags & MSG_MORE;
int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
struct sk_buff *skb;
struct ip_options_data opt_copy;
+ int uc_index;
if (len > 0xFFFF)
return -EMSGSIZE;
@@ -1143,7 +1144,9 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
if (cgroup_bpf_enabled(CGROUP_UDP4_SENDMSG) && !connected) {
err = BPF_CGROUP_RUN_PROG_UDP4_SENDMSG_LOCK(sk,
- (struct sockaddr *)usin, &ipc.addr);
+ (struct sockaddr *)usin,
+ &msg->msg_namelen,
+ &ipc.addr);
if (err)
goto out_free;
if (usin) {
@@ -1173,25 +1176,26 @@ int udp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
if (scope == RT_SCOPE_LINK)
connected = 0;
+ uc_index = READ_ONCE(inet->uc_index);
if (ipv4_is_multicast(daddr)) {
if (!ipc.oif || netif_index_is_l3_master(sock_net(sk), ipc.oif))
- ipc.oif = inet->mc_index;
+ ipc.oif = READ_ONCE(inet->mc_index);
if (!saddr)
- saddr = inet->mc_addr;
+ saddr = READ_ONCE(inet->mc_addr);
connected = 0;
} else if (!ipc.oif) {
- ipc.oif = inet->uc_index;
- } else if (ipv4_is_lbcast(daddr) && inet->uc_index) {
+ ipc.oif = uc_index;
+ } else if (ipv4_is_lbcast(daddr) && uc_index) {
/* oif is set, packet is to local broadcast and
* uc_index is set. oif is most likely set
* by sk_bound_dev_if. If uc_index != oif check if the
* oif is an L3 master and uc_index is an L3 slave.
* If so, we want to allow the send using the uc_index.
*/
- if (ipc.oif != inet->uc_index &&
+ if (ipc.oif != uc_index &&
ipc.oif == l3mdev_master_ifindex_by_index(sock_net(sk),
- inet->uc_index)) {
- ipc.oif = inet->uc_index;
+ uc_index)) {
+ ipc.oif = uc_index;
}
}
@@ -1315,11 +1319,11 @@ void udp_splice_eof(struct socket *sock)
struct sock *sk = sock->sk;
struct udp_sock *up = udp_sk(sk);
- if (!up->pending || READ_ONCE(up->corkflag))
+ if (!up->pending || udp_test_bit(CORK, sk))
return;
lock_sock(sk);
- if (up->pending && !READ_ONCE(up->corkflag))
+ if (up->pending && !udp_test_bit(CORK, sk))
udp_push_pending_frames(sk);
release_sock(sk);
}
@@ -1865,10 +1869,11 @@ try_again:
*addr_len = sizeof(*sin);
BPF_CGROUP_RUN_PROG_UDP4_RECVMSG_LOCK(sk,
- (struct sockaddr *)sin);
+ (struct sockaddr *)sin,
+ addr_len);
}
- if (udp_sk(sk)->gro_enabled)
+ if (udp_test_bit(GRO_ENABLED, sk))
udp_cmsg_recv(msg, sk, skb);
if (inet_cmsg_flags(inet))
@@ -1904,7 +1909,7 @@ int udp_pre_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
if (addr_len < sizeof(struct sockaddr_in))
return -EINVAL;
- return BPF_CGROUP_RUN_PROG_INET4_CONNECT_LOCK(sk, uaddr);
+ return BPF_CGROUP_RUN_PROG_INET4_CONNECT_LOCK(sk, uaddr, &addr_len);
}
EXPORT_SYMBOL(udp_pre_connect);
@@ -2081,7 +2086,8 @@ static int udp_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
}
nf_reset_ct(skb);
- if (static_branch_unlikely(&udp_encap_needed_key) && up->encap_type) {
+ if (static_branch_unlikely(&udp_encap_needed_key) &&
+ READ_ONCE(up->encap_type)) {
int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
/*
@@ -2119,7 +2125,8 @@ static int udp_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
/*
* UDP-Lite specific tests, ignored on UDP sockets
*/
- if ((up->pcflag & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) {
+ if (udp_test_bit(UDPLITE_RECV_CC, sk) && UDP_SKB_CB(skb)->partial_cov) {
+ u16 pcrlen = READ_ONCE(up->pcrlen);
/*
* MIB statistics other than incrementing the error count are
@@ -2132,7 +2139,7 @@ static int udp_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
* delivery of packets with coverage values less than a value
* provided by the application."
*/
- if (up->pcrlen == 0) { /* full coverage was set */
+ if (pcrlen == 0) { /* full coverage was set */
net_dbg_ratelimited("UDPLite: partial coverage %d while full coverage %d requested\n",
UDP_SKB_CB(skb)->cscov, skb->len);
goto drop;
@@ -2143,9 +2150,9 @@ static int udp_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
* that it wants x while sender emits packets of smaller size y.
* Therefore the above ...()->partial_cov statement is essential.
*/
- if (UDP_SKB_CB(skb)->cscov < up->pcrlen) {
+ if (UDP_SKB_CB(skb)->cscov < pcrlen) {
net_dbg_ratelimited("UDPLite: coverage %d too small, need min %d\n",
- UDP_SKB_CB(skb)->cscov, up->pcrlen);
+ UDP_SKB_CB(skb)->cscov, pcrlen);
goto drop;
}
}
@@ -2618,7 +2625,7 @@ void udp_destroy_sock(struct sock *sk)
if (encap_destroy)
encap_destroy(sk);
}
- if (up->encap_enabled)
+ if (udp_test_bit(ENCAP_ENABLED, sk))
static_branch_dec(&udp_encap_needed_key);
}
}
@@ -2658,9 +2665,9 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
switch (optname) {
case UDP_CORK:
if (val != 0) {
- WRITE_ONCE(up->corkflag, 1);
+ udp_set_bit(CORK, sk);
} else {
- WRITE_ONCE(up->corkflag, 0);
+ udp_clear_bit(CORK, sk);
lock_sock(sk);
push_pending_frames(sk);
release_sock(sk);
@@ -2675,17 +2682,17 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
case UDP_ENCAP_ESPINUDP_NON_IKE:
#if IS_ENABLED(CONFIG_IPV6)
if (sk->sk_family == AF_INET6)
- up->encap_rcv = ipv6_stub->xfrm6_udp_encap_rcv;
+ WRITE_ONCE(up->encap_rcv,
+ ipv6_stub->xfrm6_udp_encap_rcv);
else
#endif
- up->encap_rcv = xfrm4_udp_encap_rcv;
+ WRITE_ONCE(up->encap_rcv,
+ xfrm4_udp_encap_rcv);
#endif
fallthrough;
case UDP_ENCAP_L2TPINUDP:
- up->encap_type = val;
- lock_sock(sk);
- udp_tunnel_encap_enable(sk->sk_socket);
- release_sock(sk);
+ WRITE_ONCE(up->encap_type, val);
+ udp_tunnel_encap_enable(sk);
break;
default:
err = -ENOPROTOOPT;
@@ -2694,11 +2701,11 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
break;
case UDP_NO_CHECK6_TX:
- up->no_check6_tx = valbool;
+ udp_set_no_check6_tx(sk, valbool);
break;
case UDP_NO_CHECK6_RX:
- up->no_check6_rx = valbool;
+ udp_set_no_check6_rx(sk, valbool);
break;
case UDP_SEGMENT:
@@ -2708,14 +2715,12 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
break;
case UDP_GRO:
- lock_sock(sk);
/* when enabling GRO, accept the related GSO packet type */
if (valbool)
- udp_tunnel_encap_enable(sk->sk_socket);
- up->gro_enabled = valbool;
- up->accept_udp_l4 = valbool;
- release_sock(sk);
+ udp_tunnel_encap_enable(sk);
+ udp_assign_bit(GRO_ENABLED, sk, valbool);
+ udp_assign_bit(ACCEPT_L4, sk, valbool);
break;
/*
@@ -2730,8 +2735,8 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
val = 8;
else if (val > USHRT_MAX)
val = USHRT_MAX;
- up->pcslen = val;
- up->pcflag |= UDPLITE_SEND_CC;
+ WRITE_ONCE(up->pcslen, val);
+ udp_set_bit(UDPLITE_SEND_CC, sk);
break;
/* The receiver specifies a minimum checksum coverage value. To make
@@ -2744,8 +2749,8 @@ int udp_lib_setsockopt(struct sock *sk, int level, int optname,
val = 8;
else if (val > USHRT_MAX)
val = USHRT_MAX;
- up->pcrlen = val;
- up->pcflag |= UDPLITE_RECV_CC;
+ WRITE_ONCE(up->pcrlen, val);
+ udp_set_bit(UDPLITE_RECV_CC, sk);
break;
default:
@@ -2783,19 +2788,19 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname,
switch (optname) {
case UDP_CORK:
- val = READ_ONCE(up->corkflag);
+ val = udp_test_bit(CORK, sk);
break;
case UDP_ENCAP:
- val = up->encap_type;
+ val = READ_ONCE(up->encap_type);
break;
case UDP_NO_CHECK6_TX:
- val = up->no_check6_tx;
+ val = udp_get_no_check6_tx(sk);
break;
case UDP_NO_CHECK6_RX:
- val = up->no_check6_rx;
+ val = udp_get_no_check6_rx(sk);
break;
case UDP_SEGMENT:
@@ -2803,17 +2808,17 @@ int udp_lib_getsockopt(struct sock *sk, int level, int optname,
break;
case UDP_GRO:
- val = up->gro_enabled;
+ val = udp_test_bit(GRO_ENABLED, sk);
break;
/* The following two cannot be changed on UDP sockets, the return is
* always 0 (which corresponds to the full checksum coverage of UDP). */
case UDPLITE_SEND_CSCOV:
- val = up->pcslen;
+ val = READ_ONCE(up->pcslen);
break;
case UDPLITE_RECV_CSCOV:
- val = up->pcrlen;
+ val = READ_ONCE(up->pcrlen);
break;
default:
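The udp_set_bit()/udp_test_bit() conversions above replace plain struct fields with atomic bits in udp_sk(sk)->udp_flags, so option state can be read without the socket lock. A minimal sketch of the pattern, with hypothetical example_* wrappers:

#include <linux/udp.h>	/* udp_set_bit()/udp_clear_bit()/udp_test_bit() */

/* setsockopt(UDP_CORK) side: an atomic bit flip, no lock_sock() needed */
static void example_set_cork(struct sock *sk, bool on)
{
	if (on)
		udp_set_bit(CORK, sk);
	else
		udp_clear_bit(CORK, sk);
}

/* data-path side: lockless reader, as in udp_sendmsg() above */
static bool example_corked(const struct sock *sk)
{
	return udp_test_bit(CORK, sk);
}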
diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
index 0f46b3c2e4ac..6c95d28d0c4a 100644
--- a/net/ipv4/udp_offload.c
+++ b/net/ipv4/udp_offload.c
@@ -557,10 +557,10 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
NAPI_GRO_CB(skb)->is_flist = 0;
if (!sk || !udp_sk(sk)->gro_receive) {
if (skb->dev->features & NETIF_F_GRO_FRAGLIST)
- NAPI_GRO_CB(skb)->is_flist = sk ? !udp_sk(sk)->gro_enabled : 1;
+ NAPI_GRO_CB(skb)->is_flist = sk ? !udp_test_bit(GRO_ENABLED, sk) : 1;
if ((!sk && (skb->dev->features & NETIF_F_GRO_UDP_FWD)) ||
- (sk && udp_sk(sk)->gro_enabled) || NAPI_GRO_CB(skb)->is_flist)
+ (sk && udp_test_bit(GRO_ENABLED, sk)) || NAPI_GRO_CB(skb)->is_flist)
return call_gro_receive(udp_gro_receive_segment, head, skb);
/* no GRO, be sure flush the current packet */
diff --git a/net/ipv4/udp_tunnel_core.c b/net/ipv4/udp_tunnel_core.c
index 9b18f371af0d..a87defb2b167 100644
--- a/net/ipv4/udp_tunnel_core.c
+++ b/net/ipv4/udp_tunnel_core.c
@@ -78,7 +78,7 @@ void setup_udp_tunnel_sock(struct net *net, struct socket *sock,
udp_sk(sk)->gro_receive = cfg->gro_receive;
udp_sk(sk)->gro_complete = cfg->gro_complete;
- udp_tunnel_encap_enable(sock);
+ udp_tunnel_encap_enable(sk);
}
EXPORT_SYMBOL_GPL(setup_udp_tunnel_sock);
@@ -204,4 +204,53 @@ struct metadata_dst *udp_tun_rx_dst(struct sk_buff *skb, unsigned short family,
}
EXPORT_SYMBOL_GPL(udp_tun_rx_dst);
+struct rtable *udp_tunnel_dst_lookup(struct sk_buff *skb,
+ struct net_device *dev,
+ struct net *net, int oif,
+ __be32 *saddr,
+ const struct ip_tunnel_key *key,
+ __be16 sport, __be16 dport, u8 tos,
+ struct dst_cache *dst_cache)
+{
+ struct rtable *rt = NULL;
+ struct flowi4 fl4;
+
+#ifdef CONFIG_DST_CACHE
+ if (dst_cache) {
+ rt = dst_cache_get_ip4(dst_cache, saddr);
+ if (rt)
+ return rt;
+ }
+#endif
+
+ memset(&fl4, 0, sizeof(fl4));
+ fl4.flowi4_mark = skb->mark;
+ fl4.flowi4_proto = IPPROTO_UDP;
+ fl4.flowi4_oif = oif;
+ fl4.daddr = key->u.ipv4.dst;
+ fl4.saddr = key->u.ipv4.src;
+ fl4.fl4_dport = dport;
+ fl4.fl4_sport = sport;
+ fl4.flowi4_tos = RT_TOS(tos);
+ fl4.flowi4_flags = key->flow_flags;
+
+ rt = ip_route_output_key(net, &fl4);
+ if (IS_ERR(rt)) {
+ netdev_dbg(dev, "no route to %pI4\n", &fl4.daddr);
+ return ERR_PTR(-ENETUNREACH);
+ }
+ if (rt->dst.dev == dev) { /* is this necessary? */
+ netdev_dbg(dev, "circular route to %pI4\n", &fl4.daddr);
+ ip_rt_put(rt);
+ return ERR_PTR(-ELOOP);
+ }
+#ifdef CONFIG_DST_CACHE
+ if (dst_cache)
+ dst_cache_set_ip4(dst_cache, &rt->dst, fl4.saddr);
+#endif
+ *saddr = fl4.saddr;
+ return rt;
+}
+EXPORT_SYMBOL_GPL(udp_tunnel_dst_lookup);
+
MODULE_LICENSE("GPL");
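udp_tunnel_dst_lookup() added above centralises the IPv4 route lookup that UDP tunnel drivers used to open-code. A hypothetical transmit-path caller might use it as sketched below; all example_* names are assumptions and the dst_cache argument is simply left NULL here.

#include <linux/err.h>
#include <net/route.h>		/* ip_rt_put() */
#include <net/ip_tunnels.h>	/* struct ip_tunnel_info / ip_tunnel_key */
#include <net/udp_tunnel.h>	/* udp_tunnel_dst_lookup() */

static int example_tunnel_route(struct sk_buff *skb, struct net_device *dev,
				struct net *net,
				const struct ip_tunnel_info *info,
				__be16 sport, __be16 dport)
{
	const struct ip_tunnel_key *key = &info->key;
	__be32 saddr;
	struct rtable *rt;

	rt = udp_tunnel_dst_lookup(skb, dev, net, 0 /* oif */, &saddr, key,
				   sport, dport, key->tos,
				   NULL /* no dst_cache in this sketch */);
	if (IS_ERR(rt))
		return PTR_ERR(rt);

	/* ... build headers, then hand off with udp_tunnel_xmit_skb(),
	 * using rt and the source address stored in saddr ...
	 */
	ip_rt_put(rt);
	return 0;
}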
diff --git a/net/ipv4/udp_tunnel_nic.c b/net/ipv4/udp_tunnel_nic.c
index 029219749785..b6d2d16189c0 100644
--- a/net/ipv4/udp_tunnel_nic.c
+++ b/net/ipv4/udp_tunnel_nic.c
@@ -47,7 +47,7 @@ struct udp_tunnel_nic {
unsigned int n_tables;
unsigned long missed;
- struct udp_tunnel_nic_table_entry **entries;
+ struct udp_tunnel_nic_table_entry *entries[] __counted_by(n_tables);
};
/* We ensure all work structs are done using driver state, but not the code.
@@ -725,16 +725,12 @@ udp_tunnel_nic_alloc(const struct udp_tunnel_nic_info *info,
struct udp_tunnel_nic *utn;
unsigned int i;
- utn = kzalloc(sizeof(*utn), GFP_KERNEL);
+ utn = kzalloc(struct_size(utn, entries, n_tables), GFP_KERNEL);
if (!utn)
return NULL;
utn->n_tables = n_tables;
INIT_WORK(&utn->work, udp_tunnel_nic_device_sync_work);
- utn->entries = kmalloc_array(n_tables, sizeof(void *), GFP_KERNEL);
- if (!utn->entries)
- goto err_free_utn;
-
for (i = 0; i < n_tables; i++) {
utn->entries[i] = kcalloc(info->tables[i].n_entries,
sizeof(*utn->entries[i]), GFP_KERNEL);
@@ -747,8 +743,6 @@ udp_tunnel_nic_alloc(const struct udp_tunnel_nic_info *info,
err_free_prev_entries:
while (i--)
kfree(utn->entries[i]);
- kfree(utn->entries);
-err_free_utn:
kfree(utn);
return NULL;
}
@@ -759,7 +753,6 @@ static void udp_tunnel_nic_free(struct udp_tunnel_nic *utn)
for (i = 0; i < utn->n_tables; i++)
kfree(utn->entries[i]);
- kfree(utn->entries);
kfree(utn);
}
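The udp_tunnel_nic change above folds the entries pointer array into the containing structure as a flexible array annotated with __counted_by(), allocated in one shot with struct_size(). A stand-alone sketch of the same pattern (struct example_table is hypothetical):

#include <linux/overflow.h>	/* struct_size() */
#include <linux/slab.h>		/* kzalloc()/kfree() */

struct example_table {
	unsigned int n_rows;
	int rows[] __counted_by(n_rows);	/* bounds-checked by FORTIFY/UBSAN */
};

static struct example_table *example_table_alloc(unsigned int n_rows)
{
	struct example_table *t;

	/* one allocation covers the header plus n_rows trailing elements */
	t = kzalloc(struct_size(t, rows, n_rows), GFP_KERNEL);
	if (!t)
		return NULL;
	t->n_rows = n_rows;	/* must match the count __counted_by() names */
	return t;
}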
diff --git a/net/ipv4/udplite.c b/net/ipv4/udplite.c
index 39ecdad1b50c..af37af3ab727 100644
--- a/net/ipv4/udplite.c
+++ b/net/ipv4/udplite.c
@@ -21,7 +21,6 @@ EXPORT_SYMBOL(udplite_table);
static int udplite_sk_init(struct sock *sk)
{
udp_init_sock(sk);
- udp_sk(sk)->pcflag = UDPLITE_BIT;
pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, "
"please contact the netdev mailing list\n");
return 0;
diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c
index eac206a290d0..183f6dc37242 100644
--- a/net/ipv4/xfrm4_input.c
+++ b/net/ipv4/xfrm4_input.c
@@ -85,11 +85,11 @@ int xfrm4_udp_encap_rcv(struct sock *sk, struct sk_buff *skb)
struct udphdr *uh;
struct iphdr *iph;
int iphlen, len;
-
__u8 *udpdata;
__be32 *udpdata32;
- __u16 encap_type = up->encap_type;
+ u16 encap_type;
+ encap_type = READ_ONCE(up->encap_type);
/* if this is not encapsulated socket, then just return now */
if (!encap_type)
return 1;
diff --git a/net/ipv6/Makefile b/net/ipv6/Makefile
index 3036a45e8a1e..d283c59df4c1 100644
--- a/net/ipv6/Makefile
+++ b/net/ipv6/Makefile
@@ -52,4 +52,5 @@ obj-$(subst m,y,$(CONFIG_IPV6)) += inet6_hashtables.o
ifneq ($(CONFIG_IPV6),)
obj-$(CONFIG_NET_UDP_TUNNEL) += ip6_udp_tunnel.o
obj-y += mcast_snoop.o
+obj-$(CONFIG_TCP_AO) += tcp_ao.o
endif
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 0b6ee962c84e..3aaea56b5166 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -236,6 +236,7 @@ static struct ipv6_devconf ipv6_devconf __read_mostly = {
.ioam6_id = IOAM6_DEFAULT_IF_ID,
.ioam6_id_wide = IOAM6_DEFAULT_IF_ID_WIDE,
.ndisc_evict_nocarrier = 1,
+ .ra_honor_pio_life = 0,
};
static struct ipv6_devconf ipv6_devconf_dflt __read_mostly = {
@@ -297,6 +298,7 @@ static struct ipv6_devconf ipv6_devconf_dflt __read_mostly = {
.ioam6_id = IOAM6_DEFAULT_IF_ID,
.ioam6_id_wide = IOAM6_DEFAULT_IF_ID_WIDE,
.ndisc_evict_nocarrier = 1,
+ .ra_honor_pio_life = 0,
};
/* Check if link is ready: is it up and is a valid qdisc available */
@@ -1397,6 +1399,7 @@ retry:
idev->cnf.temp_valid_lft + age);
cfg.preferred_lft = cnf_temp_preferred_lft + age - idev->desync_factor;
cfg.preferred_lft = min_t(__u32, ifp->prefered_lft, cfg.preferred_lft);
+ cfg.preferred_lft = min_t(__u32, cfg.valid_lft, cfg.preferred_lft);
cfg.plen = ifp->prefix_len;
tmp_tstamp = ifp->tstamp;
@@ -1404,15 +1407,23 @@ retry:
write_unlock_bh(&idev->lock);
- /* A temporary address is created only if this calculated Preferred
- * Lifetime is greater than REGEN_ADVANCE time units. In particular,
- * an implementation must not create a temporary address with a zero
- * Preferred Lifetime.
+ /* From RFC 4941:
+ *
+ * A temporary address is created only if this calculated Preferred
+ * Lifetime is greater than REGEN_ADVANCE time units. In
+ * particular, an implementation must not create a temporary address
+ * with a zero Preferred Lifetime.
+ *
+ * Clamp the preferred lifetime to a minimum of regen_advance, unless
+ * that would exceed valid_lft.
+ *
* Use age calculation as in addrconf_verify to avoid unnecessary
* temporary addresses being generated.
*/
age = (now - tmp_tstamp + ADDRCONF_TIMER_FUZZ_MINUS) / HZ;
- if (cfg.preferred_lft <= regen_advance + age) {
+ if (cfg.preferred_lft <= regen_advance + age)
+ cfg.preferred_lft = regen_advance + age + 1;
+ if (cfg.preferred_lft > cfg.valid_lft) {
in6_ifa_put(ifp);
in6_dev_put(idev);
ret = -1;
@@ -2657,22 +2668,23 @@ int addrconf_prefix_rcv_add_addr(struct net *net, struct net_device *dev,
stored_lft = ifp->valid_lft - (now - ifp->tstamp) / HZ;
else
stored_lft = 0;
- if (!create && stored_lft) {
+
+ /* RFC4862 Section 5.5.3e:
+ * "Note that the preferred lifetime of the
+ * corresponding address is always reset to
+ * the Preferred Lifetime in the received
+ * Prefix Information option, regardless of
+ * whether the valid lifetime is also reset or
+ * ignored."
+ *
+ * So we should always update prefered_lft here.
+ */
+ update_lft = !create && stored_lft;
+
+ if (update_lft && !in6_dev->cnf.ra_honor_pio_life) {
const u32 minimum_lft = min_t(u32,
stored_lft, MIN_VALID_LIFETIME);
valid_lft = max(valid_lft, minimum_lft);
-
- /* RFC4862 Section 5.5.3e:
- * "Note that the preferred lifetime of the
- * corresponding address is always reset to
- * the Preferred Lifetime in the received
- * Prefix Information option, regardless of
- * whether the valid lifetime is also reset or
- * ignored."
- *
- * So we should always update prefered_lft here.
- */
- update_lft = 1;
}
if (update_lft) {
@@ -6846,6 +6858,15 @@ static const struct ctl_table addrconf_sysctl[] = {
.mode = 0644,
.proc_handler = proc_dointvec,
},
+ {
+ .procname = "ra_honor_pio_life",
+ .data = &ipv6_devconf.ra_honor_pio_life,
+ .maxlen = sizeof(u8),
+ .mode = 0644,
+ .proc_handler = proc_dou8vec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ .extra2 = SYSCTL_ONE,
+ },
#ifdef CONFIG_IPV6_ROUTER_PREF
{
.procname = "accept_ra_rtr_pref",
diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
index 368824fe9719..c35d302a3da9 100644
--- a/net/ipv6/af_inet6.c
+++ b/net/ipv6/af_inet6.c
@@ -217,10 +217,11 @@ lookup_protocol:
inet_sk(sk)->pinet6 = np = inet6_sk_generic(sk);
np->hop_limit = -1;
np->mcast_hops = IPV6_DEFAULT_MCASTHOPS;
- np->mc_loop = 1;
- np->mc_all = 1;
+ inet6_set_bit(MC6_LOOP, sk);
+ inet6_set_bit(MC6_ALL, sk);
np->pmtudisc = IPV6_PMTUDISC_WANT;
- np->repflow = net->ipv6.sysctl.flowlabel_reflect & FLOWLABEL_REFLECT_ESTABLISHED;
+ inet6_assign_bit(REPFLOW, sk, net->ipv6.sysctl.flowlabel_reflect &
+ FLOWLABEL_REFLECT_ESTABLISHED);
sk->sk_ipv6only = net->ipv6.sysctl.bindv6only;
sk->sk_txrehash = READ_ONCE(net->core.sysctl_txrehash);
@@ -453,7 +454,7 @@ int inet6_bind_sk(struct sock *sk, struct sockaddr *uaddr, int addr_len)
/* BPF prog is run before any checks are done so that if the prog
* changes context in a wrong way it will be caught.
*/
- err = BPF_CGROUP_RUN_PROG_INET_BIND_LOCK(sk, uaddr,
+ err = BPF_CGROUP_RUN_PROG_INET_BIND_LOCK(sk, uaddr, &addr_len,
CGROUP_INET6_BIND, &flags);
if (err)
return err;
@@ -519,6 +520,7 @@ int inet6_getname(struct socket *sock, struct sockaddr *uaddr,
int peer)
{
struct sockaddr_in6 *sin = (struct sockaddr_in6 *)uaddr;
+ int sin_addr_len = sizeof(*sin);
struct sock *sk = sock->sk;
struct inet_sock *inet = inet_sk(sk);
struct ipv6_pinfo *np = inet6_sk(sk);
@@ -536,9 +538,9 @@ int inet6_getname(struct socket *sock, struct sockaddr *uaddr,
}
sin->sin6_port = inet->inet_dport;
sin->sin6_addr = sk->sk_v6_daddr;
- if (np->sndflow)
+ if (inet6_test_bit(SNDFLOW, sk))
sin->sin6_flowinfo = np->flow_label;
- BPF_CGROUP_RUN_SA_PROG(sk, (struct sockaddr *)sin,
+ BPF_CGROUP_RUN_SA_PROG(sk, (struct sockaddr *)sin, &sin_addr_len,
CGROUP_INET6_GETPEERNAME);
} else {
if (ipv6_addr_any(&sk->sk_v6_rcv_saddr))
@@ -546,13 +548,13 @@ int inet6_getname(struct socket *sock, struct sockaddr *uaddr,
else
sin->sin6_addr = sk->sk_v6_rcv_saddr;
sin->sin6_port = inet->inet_sport;
- BPF_CGROUP_RUN_SA_PROG(sk, (struct sockaddr *)sin,
+ BPF_CGROUP_RUN_SA_PROG(sk, (struct sockaddr *)sin, &sin_addr_len,
CGROUP_INET6_GETSOCKNAME);
}
sin->sin6_scope_id = ipv6_iface_scope_id(&sin->sin6_addr,
sk->sk_bound_dev_if);
release_sock(sk);
- return sizeof(*sin);
+ return sin_addr_len;
}
EXPORT_SYMBOL(inet6_getname);
@@ -1060,6 +1062,7 @@ static const struct ipv6_bpf_stub ipv6_bpf_stub_impl = {
.udp6_lib_lookup = __udp6_lib_lookup,
.ipv6_setsockopt = do_ipv6_setsockopt,
.ipv6_getsockopt = do_ipv6_getsockopt,
+ .ipv6_dev_get_saddr = ipv6_dev_get_saddr,
};
static int __init inet6_init(void)
diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
index 41ebc4e57473..cc6a502db39d 100644
--- a/net/ipv6/datagram.c
+++ b/net/ipv6/datagram.c
@@ -80,7 +80,8 @@ int ip6_datagram_dst_update(struct sock *sk, bool fix_sk_saddr)
struct flowi6 fl6;
int err = 0;
- if (np->sndflow && (np->flow_label & IPV6_FLOWLABEL_MASK)) {
+ if (inet6_test_bit(SNDFLOW, sk) &&
+ (np->flow_label & IPV6_FLOWLABEL_MASK)) {
flowlabel = fl6_sock_lookup(sk, np->flow_label);
if (IS_ERR(flowlabel))
return -EINVAL;
@@ -163,7 +164,7 @@ int __ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr,
if (usin->sin6_family != AF_INET6)
return -EAFNOSUPPORT;
- if (np->sndflow)
+ if (inet6_test_bit(SNDFLOW, sk))
fl6_flowlabel = usin->sin6_flowinfo & IPV6_FLOWINFO_MASK;
if (ipv6_addr_any(&usin->sin6_addr)) {
@@ -305,11 +306,10 @@ static void ipv6_icmp_error_rfc4884(const struct sk_buff *skb,
void ipv6_icmp_error(struct sock *sk, struct sk_buff *skb, int err,
__be16 port, u32 info, u8 *payload)
{
- struct ipv6_pinfo *np = inet6_sk(sk);
struct icmp6hdr *icmph = icmp6_hdr(skb);
struct sock_exterr_skb *serr;
- if (!np->recverr)
+ if (!inet6_test_bit(RECVERR6, sk))
return;
skb = skb_clone(skb, GFP_ATOMIC);
@@ -332,7 +332,7 @@ void ipv6_icmp_error(struct sock *sk, struct sk_buff *skb, int err,
__skb_pull(skb, payload - skb->data);
- if (inet6_sk(sk)->recverr_rfc4884)
+ if (inet6_test_bit(RECVERR6_RFC4884, sk))
ipv6_icmp_error_rfc4884(skb, &serr->ee.ee_rfc4884);
skb_reset_transport_header(skb);
@@ -344,12 +344,11 @@ EXPORT_SYMBOL_GPL(ipv6_icmp_error);
void ipv6_local_error(struct sock *sk, int err, struct flowi6 *fl6, u32 info)
{
- const struct ipv6_pinfo *np = inet6_sk(sk);
struct sock_exterr_skb *serr;
struct ipv6hdr *iph;
struct sk_buff *skb;
- if (!np->recverr)
+ if (!inet6_test_bit(RECVERR6, sk))
return;
skb = alloc_skb(sizeof(struct ipv6hdr), GFP_ATOMIC);
@@ -493,7 +492,7 @@ int ipv6_recv_error(struct sock *sk, struct msghdr *msg, int len, int *addr_len)
const struct ipv6hdr *ip6h = container_of((struct in6_addr *)(nh + serr->addr_offset),
struct ipv6hdr, daddr);
sin->sin6_addr = ip6h->daddr;
- if (np->sndflow)
+ if (inet6_test_bit(SNDFLOW, sk))
sin->sin6_flowinfo = ip6_flowinfo(ip6h);
sin->sin6_scope_id =
ipv6_iface_scope_id(&sin->sin6_addr,
diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
index 93a594a901d1..8fb4a791881a 100644
--- a/net/ipv6/icmp.c
+++ b/net/ipv6/icmp.c
@@ -588,7 +588,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
else if (!fl6.flowi6_oif)
fl6.flowi6_oif = np->ucast_oif;
- ipcm6_init_sk(&ipc6, np);
+ ipcm6_init_sk(&ipc6, sk);
ipc6.sockc.mark = mark;
fl6.flowlabel = ip6_make_flowinfo(ipc6.tclass, fl6.flowlabel);
@@ -791,7 +791,7 @@ static enum skb_drop_reason icmpv6_echo_reply(struct sk_buff *skb)
msg.offset = 0;
msg.type = type;
- ipcm6_init_sk(&ipc6, np);
+ ipcm6_init_sk(&ipc6, sk);
ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
ipc6.tclass = ipv6_get_dsfield(ipv6_hdr(skb));
ipc6.sockc.mark = mark;
diff --git a/net/ipv6/inet6_connection_sock.c b/net/ipv6/inet6_connection_sock.c
index 0c50dcd35fe8..80043e46117c 100644
--- a/net/ipv6/inet6_connection_sock.c
+++ b/net/ipv6/inet6_connection_sock.c
@@ -133,7 +133,7 @@ int inet6_csk_xmit(struct sock *sk, struct sk_buff *skb, struct flowi *fl_unused
fl6.daddr = sk->sk_v6_daddr;
res = ip6_xmit(sk, skb, &fl6, sk->sk_mark, rcu_dereference(np->opt),
- np->tclass, sk->sk_priority);
+ np->tclass, READ_ONCE(sk->sk_priority));
rcu_read_unlock();
return res;
}
diff --git a/net/ipv6/ioam6_iptunnel.c b/net/ipv6/ioam6_iptunnel.c
index f6f5b83dd954..7563f8c6aa87 100644
--- a/net/ipv6/ioam6_iptunnel.c
+++ b/net/ipv6/ioam6_iptunnel.c
@@ -46,7 +46,7 @@ struct ioam6_lwt {
struct ioam6_lwt_encap tuninfo;
};
-static struct netlink_range_validation freq_range = {
+static const struct netlink_range_validation freq_range = {
.min = IOAM6_IPTUNNEL_FREQ_MIN,
.max = IOAM6_IPTUNNEL_FREQ_MAX,
};
diff --git a/net/ipv6/ip6_flowlabel.c b/net/ipv6/ip6_flowlabel.c
index b3ca4beb4405..eca07e10e21f 100644
--- a/net/ipv6/ip6_flowlabel.c
+++ b/net/ipv6/ip6_flowlabel.c
@@ -513,7 +513,7 @@ int ipv6_flowlabel_opt_get(struct sock *sk, struct in6_flowlabel_req *freq,
return 0;
}
- if (np->repflow) {
+ if (inet6_test_bit(REPFLOW, sk)) {
freq->flr_label = np->flow_label;
return 0;
}
@@ -551,10 +551,10 @@ static int ipv6_flowlabel_put(struct sock *sk, struct in6_flowlabel_req *freq)
if (freq->flr_flags & IPV6_FL_F_REFLECT) {
if (sk->sk_protocol != IPPROTO_TCP)
return -ENOPROTOOPT;
- if (!np->repflow)
+ if (!inet6_test_bit(REPFLOW, sk))
return -ESRCH;
np->flow_label = 0;
- np->repflow = 0;
+ inet6_clear_bit(REPFLOW, sk);
return 0;
}
@@ -626,7 +626,7 @@ static int ipv6_flowlabel_get(struct sock *sk, struct in6_flowlabel_req *freq,
if (sk->sk_protocol != IPPROTO_TCP)
return -ENOPROTOOPT;
- np->repflow = 1;
+ inet6_set_bit(REPFLOW, sk);
return 0;
}
diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
index 54fc4c711f2c..a722a43dd668 100644
--- a/net/ipv6/ip6_output.c
+++ b/net/ipv6/ip6_output.c
@@ -117,6 +117,8 @@ static int ip6_finish_output2(struct net *net, struct sock *sk, struct sk_buff *
return res;
}
+ IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUT, skb->len);
+
rcu_read_lock();
nexthop = rt6_nexthop((struct rt6_info *)dst, daddr);
neigh = __ipv6_neigh_lookup_noref(dev, nexthop);
@@ -162,7 +164,13 @@ ip6_finish_output_gso_slowpath_drop(struct net *net, struct sock *sk,
int err;
skb_mark_not_on_list(segs);
- err = ip6_fragment(net, sk, segs, ip6_finish_output2);
+ /* Last GSO segment can be smaller than gso_size (and MTU).
+ * Adding a fragment header would produce an "atomic fragment",
+ * which is considered harmful (RFC-8021). Avoid that.
+ */
+ err = segs->len > mtu ?
+ ip6_fragment(net, sk, segs, ip6_finish_output2) :
+ ip6_finish_output2(net, sk, segs);
if (err && ret == 0)
ret = err;
}
@@ -170,6 +178,16 @@ ip6_finish_output_gso_slowpath_drop(struct net *net, struct sock *sk,
return ret;
}
+static int ip6_finish_output_gso(struct net *net, struct sock *sk,
+ struct sk_buff *skb, unsigned int mtu)
+{
+ if (!(IP6CB(skb)->flags & IP6SKB_FAKEJUMBO) &&
+ !skb_gso_validate_network_len(skb, mtu))
+ return ip6_finish_output_gso_slowpath_drop(net, sk, skb, mtu);
+
+ return ip6_finish_output2(net, sk, skb);
+}
+
static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *skb)
{
unsigned int mtu;
@@ -183,17 +201,14 @@ static int __ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff
#endif
mtu = ip6_skb_dst_mtu(skb);
- if (skb_is_gso(skb) &&
- !(IP6CB(skb)->flags & IP6SKB_FAKEJUMBO) &&
- !skb_gso_validate_network_len(skb, mtu))
- return ip6_finish_output_gso_slowpath_drop(net, sk, skb, mtu);
+ if (skb_is_gso(skb))
+ return ip6_finish_output_gso(net, sk, skb, mtu);
- if ((skb->len > mtu && !skb_is_gso(skb)) ||
- dst_allfrag(skb_dst(skb)) ||
+ if (skb->len > mtu ||
(IP6CB(skb)->frag_max_size && skb->len > IP6CB(skb)->frag_max_size))
return ip6_fragment(net, sk, skb, ip6_finish_output2);
- else
- return ip6_finish_output2(net, sk, skb);
+
+ return ip6_finish_output2(net, sk, skb);
}
static int ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *skb)
@@ -232,12 +247,11 @@ int ip6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
}
EXPORT_SYMBOL(ip6_output);
-bool ip6_autoflowlabel(struct net *net, const struct ipv6_pinfo *np)
+bool ip6_autoflowlabel(struct net *net, const struct sock *sk)
{
- if (!np->autoflowlabel_set)
+ if (!inet6_test_bit(AUTOFLOWLABEL_SET, sk))
return ip6_default_np_autolabel(net);
- else
- return np->autoflowlabel;
+ return inet6_test_bit(AUTOFLOWLABEL, sk);
}
/*
@@ -309,12 +323,12 @@ int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,
* Fill in the IPv6 header
*/
if (np)
- hlimit = np->hop_limit;
+ hlimit = READ_ONCE(np->hop_limit);
if (hlimit < 0)
hlimit = ip6_dst_hoplimit(dst);
ip6_flow_hdr(hdr, tclass, ip6_make_flowlabel(net, skb, fl6->flowlabel,
- ip6_autoflowlabel(net, np), fl6));
+ ip6_autoflowlabel(net, sk), fl6));
hdr->payload_len = htons(seg_len);
hdr->nexthdr = proto;
@@ -329,7 +343,7 @@ int ip6_xmit(const struct sock *sk, struct sk_buff *skb, struct flowi6 *fl6,
mtu = dst_mtu(dst);
if ((skb->len <= mtu) || skb->ignore_df || skb_is_gso(skb)) {
- IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUT, skb->len);
+ IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);
/* if egress device is enslaved to an L3 master device pass the
* skb to its handler for processing
@@ -369,9 +383,8 @@ static int ip6_call_ra_chain(struct sk_buff *skb, int sel)
if (sk && ra->sel == sel &&
(!sk->sk_bound_dev_if ||
sk->sk_bound_dev_if == skb->dev->ifindex)) {
- struct ipv6_pinfo *np = inet6_sk(sk);
- if (np && np->rtalert_isolate &&
+ if (inet6_test_bit(RTALERT_ISOLATE, sk) &&
!net_eq(sock_net(sk), dev_net(skb->dev))) {
continue;
}
@@ -448,10 +461,6 @@ static int ip6_forward_proxy_check(struct sk_buff *skb)
static inline int ip6_forward_finish(struct net *net, struct sock *sk,
struct sk_buff *skb)
{
- struct dst_entry *dst = skb_dst(skb);
-
- __IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTFORWDATAGRAMS);
-
#ifdef CONFIG_NET_SWITCHDEV
if (skb->offload_l3_fwd_mark) {
consume_skb(skb);
@@ -619,6 +628,8 @@ int ip6_forward(struct sk_buff *skb)
}
}
+ __IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTFORWDATAGRAMS);
+
mtu = ip6_dst_mtu_maybe_forward(dst, true);
if (mtu < IPV6_MIN_MTU)
mtu = IPV6_MIN_MTU;
@@ -881,9 +892,11 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
mtu = IPV6_MIN_MTU;
}
- if (np && np->frag_size < mtu) {
- if (np->frag_size)
- mtu = np->frag_size;
+ if (np) {
+ u32 frag_size = READ_ONCE(np->frag_size);
+
+ if (frag_size && frag_size < mtu)
+ mtu = frag_size;
}
if (mtu < hlen + sizeof(struct frag_hdr) + 8)
goto fail_toobig;
@@ -1017,9 +1030,6 @@ slow_path:
return err;
fail_toobig:
- if (skb->sk && dst_allfrag(skb_dst(skb)))
- sk_gso_disable(skb->sk);
-
icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu);
err = -EMSGSIZE;
@@ -1113,7 +1123,7 @@ static int ip6_dst_lookup_tail(struct net *net, const struct sock *sk,
rcu_read_lock();
from = rt ? rcu_dereference(rt->from) : NULL;
err = ip6_route_get_saddr(net, from, &fl6->daddr,
- sk ? inet6_sk(sk)->srcprefs : 0,
+ sk ? READ_ONCE(inet6_sk(sk)->srcprefs) : 0,
&fl6->saddr);
rcu_read_unlock();
@@ -1283,74 +1293,6 @@ struct dst_entry *ip6_sk_dst_lookup_flow(struct sock *sk, struct flowi6 *fl6,
}
EXPORT_SYMBOL_GPL(ip6_sk_dst_lookup_flow);
-/**
- * ip6_dst_lookup_tunnel - perform route lookup on tunnel
- * @skb: Packet for which lookup is done
- * @dev: Tunnel device
- * @net: Network namespace of tunnel device
- * @sock: Socket which provides route info
- * @saddr: Memory to store the src ip address
- * @info: Tunnel information
- * @protocol: IP protocol
- * @use_cache: Flag to enable cache usage
- * This function performs a route lookup on a tunnel
- *
- * It returns a valid dst pointer and stores src address to be used in
- * tunnel in param saddr on success, else a pointer encoded error code.
- */
-
-struct dst_entry *ip6_dst_lookup_tunnel(struct sk_buff *skb,
- struct net_device *dev,
- struct net *net,
- struct socket *sock,
- struct in6_addr *saddr,
- const struct ip_tunnel_info *info,
- u8 protocol,
- bool use_cache)
-{
- struct dst_entry *dst = NULL;
-#ifdef CONFIG_DST_CACHE
- struct dst_cache *dst_cache;
-#endif
- struct flowi6 fl6;
- __u8 prio;
-
-#ifdef CONFIG_DST_CACHE
- dst_cache = (struct dst_cache *)&info->dst_cache;
- if (use_cache) {
- dst = dst_cache_get_ip6(dst_cache, saddr);
- if (dst)
- return dst;
- }
-#endif
- memset(&fl6, 0, sizeof(fl6));
- fl6.flowi6_mark = skb->mark;
- fl6.flowi6_proto = protocol;
- fl6.daddr = info->key.u.ipv6.dst;
- fl6.saddr = info->key.u.ipv6.src;
- prio = info->key.tos;
- fl6.flowlabel = ip6_make_flowinfo(prio, info->key.label);
-
- dst = ipv6_stub->ipv6_dst_lookup_flow(net, sock->sk, &fl6,
- NULL);
- if (IS_ERR(dst)) {
- netdev_dbg(dev, "no route to %pI6\n", &fl6.daddr);
- return ERR_PTR(-ENETUNREACH);
- }
- if (dst->dev == dev) { /* is this necessary? */
- netdev_dbg(dev, "circular route to %pI6\n", &fl6.daddr);
- dst_release(dst);
- return ERR_PTR(-ELOOP);
- }
-#ifdef CONFIG_DST_CACHE
- if (use_cache)
- dst_cache_set_ip6(dst_cache, dst, &fl6.saddr);
-#endif
- *saddr = fl6.saddr;
- return dst;
-}
-EXPORT_SYMBOL_GPL(ip6_dst_lookup_tunnel);
-
static inline struct ipv6_opt_hdr *ip6_opt_dup(struct ipv6_opt_hdr *src,
gfp_t gfp)
{
@@ -1392,7 +1334,7 @@ static int ip6_setup_cork(struct sock *sk, struct inet_cork_full *cork,
struct rt6_info *rt)
{
struct ipv6_pinfo *np = inet6_sk(sk);
- unsigned int mtu;
+ unsigned int mtu, frag_size;
struct ipv6_txoptions *nopt, *opt = ipc6->opt;
/* callers pass dst together with a reference, set it first so
@@ -1436,25 +1378,23 @@ static int ip6_setup_cork(struct sock *sk, struct inet_cork_full *cork,
v6_cork->hop_limit = ipc6->hlimit;
v6_cork->tclass = ipc6->tclass;
if (rt->dst.flags & DST_XFRM_TUNNEL)
- mtu = np->pmtudisc >= IPV6_PMTUDISC_PROBE ?
+ mtu = READ_ONCE(np->pmtudisc) >= IPV6_PMTUDISC_PROBE ?
READ_ONCE(rt->dst.dev->mtu) : dst_mtu(&rt->dst);
else
- mtu = np->pmtudisc >= IPV6_PMTUDISC_PROBE ?
+ mtu = READ_ONCE(np->pmtudisc) >= IPV6_PMTUDISC_PROBE ?
READ_ONCE(rt->dst.dev->mtu) : dst_mtu(xfrm_dst_path(&rt->dst));
- if (np->frag_size < mtu) {
- if (np->frag_size)
- mtu = np->frag_size;
- }
+
+ frag_size = READ_ONCE(np->frag_size);
+ if (frag_size && frag_size < mtu)
+ mtu = frag_size;
+
cork->base.fragsize = mtu;
cork->base.gso_size = ipc6->gso_size;
cork->base.tx_flags = 0;
cork->base.mark = ipc6->sockc.mark;
sock_tx_timestamp(sk, ipc6->sockc.tsflags, &cork->base.tx_flags);
- if (dst_allfrag(xfrm_dst_path(&rt->dst)))
- cork->base.flags |= IPCORK_ALLFRAG;
cork->base.length = 0;
-
cork->base.transmit_time = ipc6->sockc.transmit_time;
return 0;
@@ -1511,8 +1451,6 @@ static int __ip6_append_data(struct sock *sk,
headersize = sizeof(struct ipv6hdr) +
(opt ? opt->opt_flen + opt->opt_nflen : 0) +
- (dst_allfrag(&rt->dst) ?
- sizeof(struct frag_hdr) : 0) +
rt->rt6i_nfheader_len;
if (mtu <= fragheaderlen ||
@@ -1622,7 +1560,7 @@ emsgsize:
while (length > 0) {
/* Check if the remaining data fits into current packet. */
- copy = (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - skb->len;
+ copy = (cork->length <= mtu ? mtu : maxfraglen) - skb->len;
if (copy < length)
copy = maxfraglen - skb->len;
@@ -1653,7 +1591,7 @@ alloc_new_skb:
*/
datalen = length + fraggap;
- if (datalen > (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - fragheaderlen)
+ if (datalen > (cork->length <= mtu ? mtu : maxfraglen) - fragheaderlen)
datalen = maxfraglen - fragheaderlen - rt->dst.trailer_len;
fraglen = datalen + fragheaderlen;
pagedlen = 0;
@@ -1902,7 +1840,6 @@ static void ip6_cork_steal_dst(struct sk_buff *skb, struct inet_cork_full *cork)
struct dst_entry *dst = cork->base.dst;
cork->base.dst = NULL;
- cork->base.flags &= ~IPCORK_ALLFRAG;
skb_dst_set(skb, dst);
}
@@ -1923,7 +1860,6 @@ static void ip6_cork_release(struct inet_cork_full *cork,
if (cork->base.dst) {
dst_release(cork->base.dst);
cork->base.dst = NULL;
- cork->base.flags &= ~IPCORK_ALLFRAG;
}
}
@@ -1935,7 +1871,6 @@ struct sk_buff *__ip6_make_skb(struct sock *sk,
struct sk_buff *skb, *tmp_skb;
struct sk_buff **tail_skb;
struct in6_addr *final_dst;
- struct ipv6_pinfo *np = inet6_sk(sk);
struct net *net = sock_net(sk);
struct ipv6hdr *hdr;
struct ipv6_txoptions *opt = v6_cork->opt;
@@ -1978,18 +1913,18 @@ struct sk_buff *__ip6_make_skb(struct sock *sk,
ip6_flow_hdr(hdr, v6_cork->tclass,
ip6_make_flowlabel(net, skb, fl6->flowlabel,
- ip6_autoflowlabel(net, np), fl6));
+ ip6_autoflowlabel(net, sk), fl6));
hdr->hop_limit = v6_cork->hop_limit;
hdr->nexthdr = proto;
hdr->saddr = fl6->saddr;
hdr->daddr = *final_dst;
- skb->priority = sk->sk_priority;
+ skb->priority = READ_ONCE(sk->sk_priority);
skb->mark = cork->base.mark;
skb->tstamp = cork->base.transmit_time;
ip6_cork_steal_dst(skb, cork);
- IP6_UPD_PO_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUT, skb->len);
+ IP6_INC_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUTREQUESTS);
if (proto == IPPROTO_ICMPV6) {
struct inet6_dev *idev = ip6_dst_idev(skb_dst(skb));
u8 icmp6_type;
@@ -2091,7 +2026,7 @@ struct sk_buff *ip6_make_skb(struct sock *sk,
return ERR_PTR(err);
}
if (ipc6->dontfrag < 0)
- ipc6->dontfrag = inet6_sk(sk)->dontfrag;
+ ipc6->dontfrag = inet6_test_bit(DONTFRAG, sk);
err = __ip6_append_data(sk, &queue, cork, &v6_cork,
&current->task_frag, getfrag, from,
diff --git a/net/ipv6/ip6_udp_tunnel.c b/net/ipv6/ip6_udp_tunnel.c
index cdc4d4ee2420..a7bf0327b380 100644
--- a/net/ipv6/ip6_udp_tunnel.c
+++ b/net/ipv6/ip6_udp_tunnel.c
@@ -75,8 +76,9 @@ EXPORT_SYMBOL_GPL(udp_sock_create6);
int udp_tunnel6_xmit_skb(struct dst_entry *dst, struct sock *sk,
struct sk_buff *skb,
- struct net_device *dev, struct in6_addr *saddr,
- struct in6_addr *daddr,
+ struct net_device *dev,
+ const struct in6_addr *saddr,
+ const struct in6_addr *daddr,
__u8 prio, __u8 ttl, __be32 label,
__be16 src_port, __be16 dst_port, bool nocheck)
{
@@ -111,4 +113,73 @@ int udp_tunnel6_xmit_skb(struct dst_entry *dst, struct sock *sk,
}
EXPORT_SYMBOL_GPL(udp_tunnel6_xmit_skb);
+/**
+ * udp_tunnel6_dst_lookup - perform route lookup on UDP tunnel
+ * @skb: Packet for which lookup is done
+ * @dev: Tunnel device
+ * @net: Network namespace of tunnel device
+ * @sock: Socket which provides route info
+ * @oif: Index of the output interface
+ * @saddr: Memory to store the src ip address
+ * @key: Tunnel information
+ * @sport: UDP source port
+ * @dport: UDP destination port
+ * @dsfield: The traffic class field
+ * @dst_cache: The dst cache to use for lookup
+ * This function performs a route lookup on a UDP tunnel
+ *
+ * It returns a valid dst pointer and stores src address to be used in
+ * tunnel in param saddr on success, else a pointer encoded error code.
+ */
+
+struct dst_entry *udp_tunnel6_dst_lookup(struct sk_buff *skb,
+ struct net_device *dev,
+ struct net *net,
+ struct socket *sock,
+ int oif,
+ struct in6_addr *saddr,
+ const struct ip_tunnel_key *key,
+ __be16 sport, __be16 dport, u8 dsfield,
+ struct dst_cache *dst_cache)
+{
+ struct dst_entry *dst = NULL;
+ struct flowi6 fl6;
+
+#ifdef CONFIG_DST_CACHE
+ if (dst_cache) {
+ dst = dst_cache_get_ip6(dst_cache, saddr);
+ if (dst)
+ return dst;
+ }
+#endif
+ memset(&fl6, 0, sizeof(fl6));
+ fl6.flowi6_mark = skb->mark;
+ fl6.flowi6_proto = IPPROTO_UDP;
+ fl6.flowi6_oif = oif;
+ fl6.daddr = key->u.ipv6.dst;
+ fl6.saddr = key->u.ipv6.src;
+ fl6.fl6_sport = sport;
+ fl6.fl6_dport = dport;
+ fl6.flowlabel = ip6_make_flowinfo(dsfield, key->label);
+
+ dst = ipv6_stub->ipv6_dst_lookup_flow(net, sock->sk, &fl6,
+ NULL);
+ if (IS_ERR(dst)) {
+ netdev_dbg(dev, "no route to %pI6\n", &fl6.daddr);
+ return ERR_PTR(-ENETUNREACH);
+ }
+ if (dst->dev == dev) { /* is this necessary? */
+ netdev_dbg(dev, "circular route to %pI6\n", &fl6.daddr);
+ dst_release(dst);
+ return ERR_PTR(-ELOOP);
+ }
+#ifdef CONFIG_DST_CACHE
+ if (dst_cache)
+ dst_cache_set_ip6(dst_cache, dst, &fl6.saddr);
+#endif
+ *saddr = fl6.saddr;
+ return dst;
+}
+EXPORT_SYMBOL_GPL(udp_tunnel6_dst_lookup);
+
MODULE_LICENSE("GPL");
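udp_tunnel6_dst_lookup() is the IPv6 counterpart of the IPv4 helper above and takes over from the ip6_dst_lookup_tunnel() routine removed from ip6_output.c earlier in this series. A hypothetical caller, analogous to the IPv4 sketch and with the same caveats:

#include <linux/err.h>
#include <net/ip_tunnels.h>
#include <net/udp_tunnel.h>

static struct dst_entry *example_tunnel6_route(struct sk_buff *skb,
					       struct net_device *dev,
					       struct net *net,
					       struct socket *sock,
					       const struct ip_tunnel_info *info,
					       __be16 sport, __be16 dport,
					       struct in6_addr *saddr)
{
	const struct ip_tunnel_key *key = &info->key;

	/* dsfield comes from the tunnel key; no dst_cache in this sketch */
	return udp_tunnel6_dst_lookup(skb, dev, net, sock, 0 /* oif */,
				      saddr, key, sport, dport,
				      key->tos, NULL);
}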
diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
index 0e2a0847b387..7d661735cb9d 100644
--- a/net/ipv6/ipv6_sockglue.c
+++ b/net/ipv6/ipv6_sockglue.c
@@ -415,6 +415,101 @@ int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
if (ip6_mroute_opt(optname))
return ip6_mroute_setsockopt(sk, optname, optval, optlen);
+ /* Handle options that can be set without locking the socket. */
+ switch (optname) {
+ case IPV6_UNICAST_HOPS:
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ if (val > 255 || val < -1)
+ return -EINVAL;
+ WRITE_ONCE(np->hop_limit, val);
+ return 0;
+ case IPV6_MULTICAST_LOOP:
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ if (val != valbool)
+ return -EINVAL;
+ inet6_assign_bit(MC6_LOOP, sk, valbool);
+ return 0;
+ case IPV6_MULTICAST_HOPS:
+ if (sk->sk_type == SOCK_STREAM)
+ return retv;
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ if (val > 255 || val < -1)
+ return -EINVAL;
+ WRITE_ONCE(np->mcast_hops,
+ val == -1 ? IPV6_DEFAULT_MCASTHOPS : val);
+ return 0;
+ case IPV6_MTU:
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ if (val && val < IPV6_MIN_MTU)
+ return -EINVAL;
+ WRITE_ONCE(np->frag_size, val);
+ return 0;
+ case IPV6_MINHOPCOUNT:
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ if (val < 0 || val > 255)
+ return -EINVAL;
+
+ if (val)
+ static_branch_enable(&ip6_min_hopcount);
+
+ /* tcp_v6_err() and tcp_v6_rcv() might read min_hopcount
+ * while we are changing it.
+ */
+ WRITE_ONCE(np->min_hopcount, val);
+ return 0;
+ case IPV6_RECVERR_RFC4884:
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ if (val < 0 || val > 1)
+ return -EINVAL;
+ inet6_assign_bit(RECVERR6_RFC4884, sk, valbool);
+ return 0;
+ case IPV6_MULTICAST_ALL:
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ inet6_assign_bit(MC6_ALL, sk, valbool);
+ return 0;
+ case IPV6_AUTOFLOWLABEL:
+ inet6_assign_bit(AUTOFLOWLABEL, sk, valbool);
+ inet6_set_bit(AUTOFLOWLABEL_SET, sk);
+ return 0;
+ case IPV6_DONTFRAG:
+ inet6_assign_bit(DONTFRAG, sk, valbool);
+ return 0;
+ case IPV6_RECVERR:
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ inet6_assign_bit(RECVERR6, sk, valbool);
+ if (!val)
+ skb_errqueue_purge(&sk->sk_error_queue);
+ return 0;
+ case IPV6_ROUTER_ALERT_ISOLATE:
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ inet6_assign_bit(RTALERT_ISOLATE, sk, valbool);
+ return 0;
+ case IPV6_MTU_DISCOVER:
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ if (val < IPV6_PMTUDISC_DONT || val > IPV6_PMTUDISC_OMIT)
+ return -EINVAL;
+ WRITE_ONCE(np->pmtudisc, val);
+ return 0;
+ case IPV6_FLOWINFO_SEND:
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ inet6_assign_bit(SNDFLOW, sk, valbool);
+ return 0;
+ case IPV6_ADDR_PREFERENCES:
+ if (optlen < sizeof(int))
+ return -EINVAL;
+ return ip6_sock_set_addr_preferences(sk, val);
+ }
if (needs_rtnl)
rtnl_lock();
sockopt_lock_sock(sk);
@@ -733,34 +828,7 @@ done:
}
break;
}
- case IPV6_UNICAST_HOPS:
- if (optlen < sizeof(int))
- goto e_inval;
- if (val > 255 || val < -1)
- goto e_inval;
- np->hop_limit = val;
- retv = 0;
- break;
- case IPV6_MULTICAST_HOPS:
- if (sk->sk_type == SOCK_STREAM)
- break;
- if (optlen < sizeof(int))
- goto e_inval;
- if (val > 255 || val < -1)
- goto e_inval;
- np->mcast_hops = (val == -1 ? IPV6_DEFAULT_MCASTHOPS : val);
- retv = 0;
- break;
-
- case IPV6_MULTICAST_LOOP:
- if (optlen < sizeof(int))
- goto e_inval;
- if (val != valbool)
- goto e_inval;
- np->mc_loop = valbool;
- retv = 0;
- break;
case IPV6_UNICAST_IF:
{
@@ -862,13 +930,6 @@ done:
retv = ipv6_sock_ac_drop(sk, mreq.ipv6mr_ifindex, &mreq.ipv6mr_acaddr);
break;
}
- case IPV6_MULTICAST_ALL:
- if (optlen < sizeof(int))
- goto e_inval;
- np->mc_all = valbool;
- retv = 0;
- break;
-
case MCAST_JOIN_GROUP:
case MCAST_LEAVE_GROUP:
if (in_compat_syscall())
@@ -896,42 +957,6 @@ done:
goto e_inval;
retv = ip6_ra_control(sk, val);
break;
- case IPV6_ROUTER_ALERT_ISOLATE:
- if (optlen < sizeof(int))
- goto e_inval;
- np->rtalert_isolate = valbool;
- retv = 0;
- break;
- case IPV6_MTU_DISCOVER:
- if (optlen < sizeof(int))
- goto e_inval;
- if (val < IPV6_PMTUDISC_DONT || val > IPV6_PMTUDISC_OMIT)
- goto e_inval;
- np->pmtudisc = val;
- retv = 0;
- break;
- case IPV6_MTU:
- if (optlen < sizeof(int))
- goto e_inval;
- if (val && val < IPV6_MIN_MTU)
- goto e_inval;
- np->frag_size = val;
- retv = 0;
- break;
- case IPV6_RECVERR:
- if (optlen < sizeof(int))
- goto e_inval;
- np->recverr = valbool;
- if (!val)
- skb_errqueue_purge(&sk->sk_error_queue);
- retv = 0;
- break;
- case IPV6_FLOWINFO_SEND:
- if (optlen < sizeof(int))
- goto e_inval;
- np->sndflow = valbool;
- retv = 0;
- break;
case IPV6_FLOWLABEL_MGR:
retv = ipv6_flowlabel_opt(sk, optval, optlen);
break;
@@ -943,47 +968,10 @@ done:
retv = xfrm_user_policy(sk, optname, optval, optlen);
break;
- case IPV6_ADDR_PREFERENCES:
- if (optlen < sizeof(int))
- goto e_inval;
- retv = __ip6_sock_set_addr_preferences(sk, val);
- break;
- case IPV6_MINHOPCOUNT:
- if (optlen < sizeof(int))
- goto e_inval;
- if (val < 0 || val > 255)
- goto e_inval;
-
- if (val)
- static_branch_enable(&ip6_min_hopcount);
-
- /* tcp_v6_err() and tcp_v6_rcv() might read min_hopcount
- * while we are changing it.
- */
- WRITE_ONCE(np->min_hopcount, val);
- retv = 0;
- break;
- case IPV6_DONTFRAG:
- np->dontfrag = valbool;
- retv = 0;
- break;
- case IPV6_AUTOFLOWLABEL:
- np->autoflowlabel = valbool;
- np->autoflowlabel_set = 1;
- retv = 0;
- break;
case IPV6_RECVFRAGSIZE:
np->rxopt.bits.recvfragsize = valbool;
retv = 0;
break;
- case IPV6_RECVERR_RFC4884:
- if (optlen < sizeof(int))
- goto e_inval;
- if (val < 0 || val > 1)
- goto e_inval;
- np->recverr_rfc4884 = valbool;
- retv = 0;
- break;
}
unlock:
@@ -1180,7 +1168,8 @@ int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
put_cmsg(&msg, SOL_IPV6, IPV6_PKTINFO, sizeof(src_info), &src_info);
}
if (np->rxopt.bits.rxhlim) {
- int hlim = np->mcast_hops;
+ int hlim = READ_ONCE(np->mcast_hops);
+
put_cmsg(&msg, SOL_IPV6, IPV6_HOPLIMIT, sizeof(hlim), &hlim);
}
if (np->rxopt.bits.rxtclass) {
@@ -1197,7 +1186,8 @@ int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
put_cmsg(&msg, SOL_IPV6, IPV6_2292PKTINFO, sizeof(src_info), &src_info);
}
if (np->rxopt.bits.rxohlim) {
- int hlim = np->mcast_hops;
+ int hlim = READ_ONCE(np->mcast_hops);
+
put_cmsg(&msg, SOL_IPV6, IPV6_2292HOPLIMIT, sizeof(hlim), &hlim);
}
if (np->rxopt.bits.rxflow) {
@@ -1347,9 +1337,9 @@ int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
struct dst_entry *dst;
if (optname == IPV6_UNICAST_HOPS)
- val = np->hop_limit;
+ val = READ_ONCE(np->hop_limit);
else
- val = np->mcast_hops;
+ val = READ_ONCE(np->mcast_hops);
if (val < 0) {
rcu_read_lock();
@@ -1365,7 +1355,7 @@ int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
}
case IPV6_MULTICAST_LOOP:
- val = np->mc_loop;
+ val = inet6_test_bit(MC6_LOOP, sk);
break;
case IPV6_MULTICAST_IF:
@@ -1373,7 +1363,7 @@ int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
break;
case IPV6_MULTICAST_ALL:
- val = np->mc_all;
+ val = inet6_test_bit(MC6_ALL, sk);
break;
case IPV6_UNICAST_IF:
@@ -1381,15 +1371,15 @@ int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
break;
case IPV6_MTU_DISCOVER:
- val = np->pmtudisc;
+ val = READ_ONCE(np->pmtudisc);
break;
case IPV6_RECVERR:
- val = np->recverr;
+ val = inet6_test_bit(RECVERR6, sk);
break;
case IPV6_FLOWINFO_SEND:
- val = np->sndflow;
+ val = inet6_test_bit(SNDFLOW, sk);
break;
case IPV6_FLOWLABEL_MGR:
@@ -1424,33 +1414,35 @@ int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
}
case IPV6_ADDR_PREFERENCES:
+ {
+ u8 srcprefs = READ_ONCE(np->srcprefs);
val = 0;
- if (np->srcprefs & IPV6_PREFER_SRC_TMP)
+ if (srcprefs & IPV6_PREFER_SRC_TMP)
val |= IPV6_PREFER_SRC_TMP;
- else if (np->srcprefs & IPV6_PREFER_SRC_PUBLIC)
+ else if (srcprefs & IPV6_PREFER_SRC_PUBLIC)
val |= IPV6_PREFER_SRC_PUBLIC;
else {
/* XXX: should we return system default? */
val |= IPV6_PREFER_SRC_PUBTMP_DEFAULT;
}
- if (np->srcprefs & IPV6_PREFER_SRC_COA)
+ if (srcprefs & IPV6_PREFER_SRC_COA)
val |= IPV6_PREFER_SRC_COA;
else
val |= IPV6_PREFER_SRC_HOME;
break;
-
+ }
case IPV6_MINHOPCOUNT:
- val = np->min_hopcount;
+ val = READ_ONCE(np->min_hopcount);
break;
case IPV6_DONTFRAG:
- val = np->dontfrag;
+ val = inet6_test_bit(DONTFRAG, sk);
break;
case IPV6_AUTOFLOWLABEL:
- val = ip6_autoflowlabel(sock_net(sk), np);
+ val = ip6_autoflowlabel(sock_net(sk), sk);
break;
case IPV6_RECVFRAGSIZE:
@@ -1458,11 +1450,11 @@ int do_ipv6_getsockopt(struct sock *sk, int level, int optname,
break;
case IPV6_ROUTER_ALERT_ISOLATE:
- val = np->rtalert_isolate;
+ val = inet6_test_bit(RTALERT_ISOLATE, sk);
break;
case IPV6_RECVERR_RFC4884:
- val = np->recverr_rfc4884;
+ val = inet6_test_bit(RECVERR6_RFC4884, sk);
break;
default:
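The do_ipv6_setsockopt() rework above handles a set of options before taking the socket lock: boolean options become inet6_*_bit() atomic flags and scalar fields are published with WRITE_ONCE(), so every reader must pair with inet6_test_bit() or READ_ONCE(). A minimal sketch of that writer/reader pairing (example_* names are illustrative only):

#include <linux/ipv6.h>
#include <net/ipv6.h>

/* lockless setsockopt side, as in the IPV6_UNICAST_HOPS case above */
static void example_publish_hop_limit(struct sock *sk, int val)
{
	WRITE_ONCE(inet6_sk(sk)->hop_limit, val);
}

/* lockless reader, as on the ip6_xmit() / getsockopt side */
static int example_read_hop_limit(const struct sock *sk)
{
	return READ_ONCE(inet6_sk(sk)->hop_limit);
}

/* boolean option stored as an atomic bit, e.g. IPV6_DONTFRAG above */
static void example_set_dontfrag(struct sock *sk, bool on)
{
	inet6_assign_bit(DONTFRAG, sk, on);
}

static bool example_get_dontfrag(const struct sock *sk)
{
	return inet6_test_bit(DONTFRAG, sk);
}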
diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
index 5ce25bcb9974..b75d3c9d41bb 100644
--- a/net/ipv6/mcast.c
+++ b/net/ipv6/mcast.c
@@ -642,7 +642,7 @@ bool inet6_mc_check(const struct sock *sk, const struct in6_addr *mc_addr,
}
if (!mc) {
rcu_read_unlock();
- return np->mc_all;
+ return inet6_test_bit(MC6_ALL, sk);
}
psl = rcu_dereference(mc->sflist);
if (!psl) {
@@ -1716,7 +1716,7 @@ static void ip6_mc_hdr(const struct sock *sk, struct sk_buff *skb,
hdr->payload_len = htons(len);
hdr->nexthdr = proto;
- hdr->hop_limit = inet6_sk(sk)->hop_limit;
+ hdr->hop_limit = READ_ONCE(inet6_sk(sk)->hop_limit);
hdr->saddr = *saddr;
hdr->daddr = *daddr;
@@ -1789,7 +1789,7 @@ static void mld_sendpack(struct sk_buff *skb)
rcu_read_lock();
idev = __in6_dev_get(skb->dev);
- IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUT, skb->len);
+ IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);
payload_len = (skb_tail_pointer(skb) - skb_network_header(skb)) -
sizeof(*pip6);
@@ -2147,8 +2147,7 @@ static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type)
full_len = sizeof(struct ipv6hdr) + payload_len;
rcu_read_lock();
- IP6_UPD_PO_STATS(net, __in6_dev_get(dev),
- IPSTATS_MIB_OUT, full_len);
+ IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_OUTREQUESTS);
rcu_read_unlock();
skb = sock_alloc_send_skb(sk, hlen + tlen + full_len, 1, &err);
@@ -3011,8 +3010,6 @@ static struct ip6_sf_list *igmp6_mcf_get_next(struct seq_file *seq, struct ip6_s
continue;
state->im = rcu_dereference(state->idev->mc_list);
}
- if (!state->im)
- break;
psf = rcu_dereference(state->im->mca_sources);
}
out:
diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
index 553c8664e0a7..a19999b30bc0 100644
--- a/net/ipv6/ndisc.c
+++ b/net/ipv6/ndisc.c
@@ -500,11 +500,11 @@ void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
csum_partial(icmp6h,
skb->len, 0));
- ip6_nd_hdr(skb, saddr, daddr, inet6_sk(sk)->hop_limit, skb->len);
+ ip6_nd_hdr(skb, saddr, daddr, READ_ONCE(inet6_sk(sk)->hop_limit), skb->len);
rcu_read_lock();
idev = __in6_dev_get(dst->dev);
- IP6_UPD_PO_STATS(net, idev, IPSTATS_MIB_OUT, skb->len);
+ IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);
err = NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT,
net, sk, skb, NULL, dst->dev,
@@ -1996,7 +1996,7 @@ static int __net_init ndisc_net_init(struct net *net)
np = inet6_sk(sk);
np->hop_limit = 255;
/* Do not loopback ndisc messages */
- np->mc_loop = 0;
+ inet6_clear_bit(MC6_LOOP, sk);
return 0;
}
diff --git a/net/ipv6/netfilter/ip6table_mangle.c b/net/ipv6/netfilter/ip6table_mangle.c
index a88b2ce4a3cb..8dd4cd0c47bd 100644
--- a/net/ipv6/netfilter/ip6table_mangle.c
+++ b/net/ipv6/netfilter/ip6table_mangle.c
@@ -31,10 +31,10 @@ static const struct xt_table packet_mangler = {
static unsigned int
ip6t_mangle_out(void *priv, struct sk_buff *skb, const struct nf_hook_state *state)
{
- unsigned int ret;
struct in6_addr saddr, daddr;
- u_int8_t hop_limit;
- u_int32_t flowlabel, mark;
+ unsigned int ret, verdict;
+ u32 flowlabel, mark;
+ u8 hop_limit;
int err;
/* save source/dest address, mark, hoplimit, flowlabel, priority, */
@@ -47,8 +47,9 @@ ip6t_mangle_out(void *priv, struct sk_buff *skb, const struct nf_hook_state *sta
flowlabel = *((u_int32_t *)ipv6_hdr(skb));
ret = ip6t_do_table(priv, skb, state);
+ verdict = ret & NF_VERDICT_MASK;
- if (ret != NF_DROP && ret != NF_STOLEN &&
+ if (verdict != NF_DROP && verdict != NF_STOLEN &&
(!ipv6_addr_equal(&ipv6_hdr(skb)->saddr, &saddr) ||
!ipv6_addr_equal(&ipv6_hdr(skb)->daddr, &daddr) ||
skb->mark != mark ||
diff --git a/net/ipv6/ping.c b/net/ipv6/ping.c
index 5831aaa53d75..d2098dd4ceae 100644
--- a/net/ipv6/ping.c
+++ b/net/ipv6/ping.c
@@ -56,7 +56,7 @@ static int ping_v6_pre_connect(struct sock *sk, struct sockaddr *uaddr,
if (addr_len < SIN6_LEN_RFC2133)
return -EINVAL;
- return BPF_CGROUP_RUN_PROG_INET6_CONNECT_LOCK(sk, uaddr);
+ return BPF_CGROUP_RUN_PROG_INET6_CONNECT_LOCK(sk, uaddr, &addr_len);
}
static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
@@ -89,7 +89,7 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
return -EAFNOSUPPORT;
}
daddr = &(u->sin6_addr);
- if (np->sndflow)
+ if (inet6_test_bit(SNDFLOW, sk))
fl6.flowlabel = u->sin6_flowinfo & IPV6_FLOWINFO_MASK;
if (__ipv6_addr_needs_scope_id(ipv6_addr_type(daddr)))
oif = u->sin6_scope_id;
@@ -118,7 +118,7 @@ static int ping_v6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
l3mdev_master_ifindex_by_index(sock_net(sk), oif) != sk->sk_bound_dev_if))
return -EINVAL;
- ipcm6_init_sk(&ipc6, np);
+ ipcm6_init_sk(&ipc6, sk);
ipc6.sockc.tsflags = READ_ONCE(sk->sk_tsflags);
ipc6.sockc.mark = READ_ONCE(sk->sk_mark);
diff --git a/net/ipv6/proc.c b/net/ipv6/proc.c
index e20b3705c2d2..6d1d9221649d 100644
--- a/net/ipv6/proc.c
+++ b/net/ipv6/proc.c
@@ -61,7 +61,7 @@ static const struct snmp_mib snmp6_ipstats_list[] = {
SNMP_MIB_ITEM("Ip6InDiscards", IPSTATS_MIB_INDISCARDS),
SNMP_MIB_ITEM("Ip6InDelivers", IPSTATS_MIB_INDELIVERS),
SNMP_MIB_ITEM("Ip6OutForwDatagrams", IPSTATS_MIB_OUTFORWDATAGRAMS),
- SNMP_MIB_ITEM("Ip6OutRequests", IPSTATS_MIB_OUTPKTS),
+ SNMP_MIB_ITEM("Ip6OutRequests", IPSTATS_MIB_OUTREQUESTS),
SNMP_MIB_ITEM("Ip6OutDiscards", IPSTATS_MIB_OUTDISCARDS),
SNMP_MIB_ITEM("Ip6OutNoRoutes", IPSTATS_MIB_OUTNOROUTES),
SNMP_MIB_ITEM("Ip6ReasmTimeout", IPSTATS_MIB_REASMTIMEOUT),
@@ -84,6 +84,7 @@ static const struct snmp_mib snmp6_ipstats_list[] = {
SNMP_MIB_ITEM("Ip6InECT1Pkts", IPSTATS_MIB_ECT1PKTS),
SNMP_MIB_ITEM("Ip6InECT0Pkts", IPSTATS_MIB_ECT0PKTS),
SNMP_MIB_ITEM("Ip6InCEPkts", IPSTATS_MIB_CEPKTS),
+ SNMP_MIB_ITEM("Ip6OutTransmits", IPSTATS_MIB_OUTPKTS),
SNMP_MIB_SENTINEL
};
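With the proc.c hunk above, Ip6OutRequests is backed by the new IPSTATS_MIB_OUTREQUESTS counter (bumped at the origination points converted earlier in this diff), while the pre-existing IPSTATS_MIB_OUTPKTS counter is now exported as a separate Ip6OutTransmits row. A small user-space sketch that prints both rows is below; note the Ip6OutTransmits line only appears on kernels carrying this change.

#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[64];
	unsigned long long val;
	FILE *f = fopen("/proc/net/snmp6", "r");

	if (!f) {
		perror("/proc/net/snmp6");
		return 1;
	}
	/* each line is "<counter name> <value>" */
	while (fscanf(f, "%63s %llu", name, &val) == 2) {
		if (!strcmp(name, "Ip6OutRequests") ||
		    !strcmp(name, "Ip6OutTransmits"))
			printf("%s = %llu\n", name, val);
	}
	fclose(f);
	return 0;
}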
diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
index 42fcec3ecf5e..dd0a4e73e602 100644
--- a/net/ipv6/raw.c
+++ b/net/ipv6/raw.c
@@ -291,6 +291,7 @@ static void rawv6_err(struct sock *sk, struct sk_buff *skb,
struct inet6_skb_parm *opt,
u8 type, u8 code, int offset, __be32 info)
{
+ bool recverr = inet6_test_bit(RECVERR6, sk);
struct ipv6_pinfo *np = inet6_sk(sk);
int err;
int harderr;
@@ -300,26 +301,26 @@ static void rawv6_err(struct sock *sk, struct sk_buff *skb,
2. Socket is connected (otherwise the error indication
is useless without recverr and error is hard.
*/
- if (!np->recverr && sk->sk_state != TCP_ESTABLISHED)
+ if (!recverr && sk->sk_state != TCP_ESTABLISHED)
return;
harderr = icmpv6_err_convert(type, code, &err);
if (type == ICMPV6_PKT_TOOBIG) {
ip6_sk_update_pmtu(skb, sk, info);
- harderr = (np->pmtudisc == IPV6_PMTUDISC_DO);
+ harderr = (READ_ONCE(np->pmtudisc) == IPV6_PMTUDISC_DO);
}
if (type == NDISC_REDIRECT) {
ip6_sk_redirect(skb, sk);
return;
}
- if (np->recverr) {
+ if (recverr) {
u8 *payload = skb->data;
if (!inet_test_bit(HDRINCL, sk))
payload += offset;
ipv6_icmp_error(sk, skb, err, 0, ntohl(info), payload);
}
- if (np->recverr || harderr) {
+ if (recverr || harderr) {
sk->sk_err = err;
sk_error_report(sk);
}
@@ -587,7 +588,6 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
struct flowi6 *fl6, struct dst_entry **dstp,
unsigned int flags, const struct sockcm_cookie *sockc)
{
- struct ipv6_pinfo *np = inet6_sk(sk);
struct net *net = sock_net(sk);
struct ipv6hdr *iph;
struct sk_buff *skb;
@@ -651,7 +651,7 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
* have been queued for deletion.
*/
rcu_read_lock();
- IP6_UPD_PO_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUT, skb->len);
+ IP6_INC_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUTREQUESTS);
err = NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk, skb,
NULL, rt->dst.dev, dst_output);
if (err > 0)
@@ -668,7 +668,7 @@ out:
error:
IP6_INC_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUTDISCARDS);
error_check:
- if (err == -ENOBUFS && !np->recverr)
+ if (err == -ENOBUFS && !inet6_test_bit(RECVERR6, sk))
err = 0;
return err;
}
@@ -795,7 +795,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
return -EINVAL;
daddr = &sin6->sin6_addr;
- if (np->sndflow) {
+ if (inet6_test_bit(SNDFLOW, sk)) {
fl6.flowlabel = sin6->sin6_flowinfo&IPV6_FLOWINFO_MASK;
if (fl6.flowlabel&IPV6_FLOWLABEL_MASK) {
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
@@ -898,7 +898,7 @@ static int rawv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
if (ipc6.dontfrag < 0)
- ipc6.dontfrag = np->dontfrag;
+ ipc6.dontfrag = inet6_test_bit(DONTFRAG, sk);
if (msg->msg_flags&MSG_CONFIRM)
goto do_confirm;
diff --git a/net/ipv6/route.c b/net/ipv6/route.c
index 9c687b357e6a..b132feae3393 100644
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -341,7 +341,7 @@ struct rt6_info *ip6_dst_alloc(struct net *net, struct net_device *dev,
int flags)
{
struct rt6_info *rt = dst_alloc(&net->ipv6.ip6_dst_ops, dev,
- 1, DST_OBSOLETE_FORCE_CHK, flags);
+ DST_OBSOLETE_FORCE_CHK, flags);
if (rt) {
rt6_info_init(rt);
@@ -2622,7 +2622,7 @@ static struct dst_entry *ip6_route_output_flags_noref(struct net *net,
if (!any_src)
flags |= RT6_LOOKUP_F_HAS_SADDR;
else if (sk)
- flags |= rt6_srcprefs2flags(inet6_sk(sk)->srcprefs);
+ flags |= rt6_srcprefs2flags(READ_ONCE(inet6_sk(sk)->srcprefs));
return fib6_rule_lookup(net, fl6, NULL, flags, ip6_pol_route_output);
}
@@ -2655,7 +2655,7 @@ struct dst_entry *ip6_blackhole_route(struct net *net, struct dst_entry *dst_ori
struct net_device *loopback_dev = net->loopback_dev;
struct dst_entry *new = NULL;
- rt = dst_alloc(&ip6_dst_blackhole_ops, loopback_dev, 1,
+ rt = dst_alloc(&ip6_dst_blackhole_ops, loopback_dev,
DST_OBSOLETE_DEAD, 0);
if (rt) {
rt6_info_init(rt);
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
index 5014aa663452..500f6ed3b8cf 100644
--- a/net/ipv6/syncookies.c
+++ b/net/ipv6/syncookies.c
@@ -140,6 +140,7 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
struct dst_entry *dst;
__u8 rcv_wscale;
u32 tsoff = 0;
+ int l3index;
if (!READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_syncookies) ||
!th->ack || th->rst)
@@ -214,6 +215,10 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
treq->snt_isn = cookie;
treq->ts_off = 0;
treq->txhash = net_tx_rndhash();
+
+ l3index = l3mdev_master_ifindex_by_index(sock_net(sk), ireq->ir_iif);
+ tcp_ao_syncookie(sk, skb, treq, AF_INET6, l3index);
+
if (IS_ENABLED(CONFIG_SMC))
ireq->smc_ok = 0;
diff --git a/net/ipv6/tcp_ao.c b/net/ipv6/tcp_ao.c
new file mode 100644
index 000000000000..3c09ac26206e
--- /dev/null
+++ b/net/ipv6/tcp_ao.c
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * INET An implementation of the TCP Authentication Option (TCP-AO).
+ * See RFC5925.
+ *
+ * Authors: Dmitry Safonov <dima@arista.com>
+ * Francesco Ruggeri <fruggeri@arista.com>
+ * Salam Noureddine <noureddine@arista.com>
+ */
+#include <crypto/hash.h>
+#include <linux/tcp.h>
+
+#include <net/tcp.h>
+#include <net/ipv6.h>
+
+static int tcp_v6_ao_calc_key(struct tcp_ao_key *mkt, u8 *key,
+ const struct in6_addr *saddr,
+ const struct in6_addr *daddr,
+ __be16 sport, __be16 dport,
+ __be32 sisn, __be32 disn)
+{
+ struct kdf_input_block {
+ u8 counter;
+ u8 label[6];
+ struct tcp6_ao_context ctx;
+ __be16 outlen;
+ } __packed * tmp;
+ struct tcp_sigpool hp;
+ int err;
+
+ err = tcp_sigpool_start(mkt->tcp_sigpool_id, &hp);
+ if (err)
+ return err;
+
+ tmp = hp.scratch;
+ tmp->counter = 1;
+ memcpy(tmp->label, "TCP-AO", 6);
+ tmp->ctx.saddr = *saddr;
+ tmp->ctx.daddr = *daddr;
+ tmp->ctx.sport = sport;
+ tmp->ctx.dport = dport;
+ tmp->ctx.sisn = sisn;
+ tmp->ctx.disn = disn;
+ tmp->outlen = htons(tcp_ao_digest_size(mkt) * 8); /* in bits */
+
+ err = tcp_ao_calc_traffic_key(mkt, key, tmp, sizeof(*tmp), &hp);
+ tcp_sigpool_end(&hp);
+
+ return err;
+}
+
+int tcp_v6_ao_calc_key_skb(struct tcp_ao_key *mkt, u8 *key,
+ const struct sk_buff *skb,
+ __be32 sisn, __be32 disn)
+{
+ const struct ipv6hdr *iph = ipv6_hdr(skb);
+ const struct tcphdr *th = tcp_hdr(skb);
+
+ return tcp_v6_ao_calc_key(mkt, key, &iph->saddr,
+ &iph->daddr, th->source,
+ th->dest, sisn, disn);
+}
+
+int tcp_v6_ao_calc_key_sk(struct tcp_ao_key *mkt, u8 *key,
+ const struct sock *sk, __be32 sisn,
+ __be32 disn, bool send)
+{
+ if (send)
+ return tcp_v6_ao_calc_key(mkt, key, &sk->sk_v6_rcv_saddr,
+ &sk->sk_v6_daddr, htons(sk->sk_num),
+ sk->sk_dport, sisn, disn);
+ else
+ return tcp_v6_ao_calc_key(mkt, key, &sk->sk_v6_daddr,
+ &sk->sk_v6_rcv_saddr, sk->sk_dport,
+ htons(sk->sk_num), disn, sisn);
+}
+
+int tcp_v6_ao_calc_key_rsk(struct tcp_ao_key *mkt, u8 *key,
+ struct request_sock *req)
+{
+ struct inet_request_sock *ireq = inet_rsk(req);
+
+ return tcp_v6_ao_calc_key(mkt, key,
+ &ireq->ir_v6_loc_addr, &ireq->ir_v6_rmt_addr,
+ htons(ireq->ir_num), ireq->ir_rmt_port,
+ htonl(tcp_rsk(req)->snt_isn),
+ htonl(tcp_rsk(req)->rcv_isn));
+}
+
+struct tcp_ao_key *tcp_v6_ao_lookup(const struct sock *sk,
+ struct sock *addr_sk,
+ int sndid, int rcvid)
+{
+ int l3index = l3mdev_master_ifindex_by_index(sock_net(sk),
+ addr_sk->sk_bound_dev_if);
+ struct in6_addr *addr = &addr_sk->sk_v6_daddr;
+
+ return tcp_ao_do_lookup(sk, l3index, (union tcp_ao_addr *)addr,
+ AF_INET6, sndid, rcvid);
+}
+
+struct tcp_ao_key *tcp_v6_ao_lookup_rsk(const struct sock *sk,
+ struct request_sock *req,
+ int sndid, int rcvid)
+{
+ struct inet_request_sock *ireq = inet_rsk(req);
+ struct in6_addr *addr = &ireq->ir_v6_rmt_addr;
+ int l3index;
+
+ l3index = l3mdev_master_ifindex_by_index(sock_net(sk), ireq->ir_iif);
+ return tcp_ao_do_lookup(sk, l3index, (union tcp_ao_addr *)addr,
+ AF_INET6, sndid, rcvid);
+}
+
+int tcp_v6_ao_hash_pseudoheader(struct tcp_sigpool *hp,
+ const struct in6_addr *daddr,
+ const struct in6_addr *saddr, int nbytes)
+{
+ struct tcp6_pseudohdr *bp;
+ struct scatterlist sg;
+
+ bp = hp->scratch;
+ /* 1. TCP pseudo-header (RFC2460) */
+ bp->saddr = *saddr;
+ bp->daddr = *daddr;
+ bp->len = cpu_to_be32(nbytes);
+ bp->protocol = cpu_to_be32(IPPROTO_TCP);
+
+ sg_init_one(&sg, bp, sizeof(*bp));
+ ahash_request_set_crypt(hp->req, &sg, NULL, sizeof(*bp));
+ return crypto_ahash_update(hp->req);
+}
+
+int tcp_v6_ao_hash_skb(char *ao_hash, struct tcp_ao_key *key,
+ const struct sock *sk, const struct sk_buff *skb,
+ const u8 *tkey, int hash_offset, u32 sne)
+{
+ return tcp_ao_hash_skb(AF_INET6, ao_hash, key, sk, skb, tkey,
+ hash_offset, sne);
+}
+
+int tcp_v6_parse_ao(struct sock *sk, int cmd,
+ sockptr_t optval, int optlen)
+{
+ return tcp_parse_ao(sk, cmd, AF_INET6, optval, optlen);
+}
+
+int tcp_v6_ao_synack_hash(char *ao_hash, struct tcp_ao_key *ao_key,
+ struct request_sock *req, const struct sk_buff *skb,
+ int hash_offset, u32 sne)
+{
+ void *hash_buf = NULL;
+ int err;
+
+ hash_buf = kmalloc(tcp_ao_digest_size(ao_key), GFP_ATOMIC);
+ if (!hash_buf)
+ return -ENOMEM;
+
+ err = tcp_v6_ao_calc_key_rsk(ao_key, hash_buf, req);
+ if (err)
+ goto out;
+
+ err = tcp_ao_hash_skb(AF_INET6, ao_hash, ao_key, req_to_sk(req), skb,
+ hash_buf, hash_offset, sne);
+out:
+ kfree(hash_buf);
+ return err;
+}
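tcp_v6_ao_calc_key() above packs the RFC 5926 KDF input — a one-byte counter (always a single iteration here), the ASCII label "TCP-AO", the IPv6 connection context and the requested output length in bits — into the sigpool scratch area before deriving the traffic key. The standalone sketch below only mirrors that layout to cross-check the field widths (44-byte context, 53-byte block); it is not the kernel code.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct in6_addr_demo { uint8_t s6_addr[16]; };

struct tcp6_ao_context_demo {	/* mirrors the fields assigned above */
	struct in6_addr_demo saddr;
	struct in6_addr_demo daddr;
	uint16_t sport, dport;	/* __be16 on the wire */
	uint32_t sisn, disn;	/* __be32 on the wire */
} __attribute__((packed));

struct kdf_input_block_demo {
	uint8_t counter;	/* 1: single KDF iteration */
	uint8_t label[6];	/* "TCP-AO", no terminating NUL */
	struct tcp6_ao_context_demo ctx;
	uint16_t outlen;	/* traffic-key length, in bits */
} __attribute__((packed));

int main(void)
{
	printf("context: %zu bytes, whole KDF input: %zu bytes\n",
	       sizeof(struct tcp6_ao_context_demo),
	       sizeof(struct kdf_input_block_demo));	/* 44 and 53 */
	printf("outlen offset: %zu\n",
	       offsetof(struct kdf_input_block_demo, outlen));
	return 0;
}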
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 44b6949d72b2..937a02c2e534 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -76,16 +76,9 @@ INDIRECT_CALLABLE_SCOPE int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb);
static const struct inet_connection_sock_af_ops ipv6_mapped;
const struct inet_connection_sock_af_ops ipv6_specific;
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
static const struct tcp_sock_af_ops tcp_sock_ipv6_specific;
static const struct tcp_sock_af_ops tcp_sock_ipv6_mapped_specific;
-#else
-static struct tcp_md5sig_key *tcp_v6_md5_do_lookup(const struct sock *sk,
- const struct in6_addr *addr,
- int l3index)
-{
- return NULL;
-}
#endif
/* Helper returning the inet6 address from a given tcp socket.
@@ -135,7 +128,7 @@ static int tcp_v6_pre_connect(struct sock *sk, struct sockaddr *uaddr,
sock_owned_by_me(sk);
- return BPF_CGROUP_RUN_PROG_INET6_CONNECT(sk, uaddr);
+ return BPF_CGROUP_RUN_PROG_INET6_CONNECT(sk, uaddr, &addr_len);
}
static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
@@ -163,7 +156,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
memset(&fl6, 0, sizeof(fl6));
- if (np->sndflow) {
+ if (inet6_test_bit(SNDFLOW, sk)) {
fl6.flowlabel = usin->sin6_flowinfo&IPV6_FLOWINFO_MASK;
IP6_ECN_flow_init(fl6.flowlabel);
if (fl6.flowlabel&IPV6_FLOWLABEL_MASK) {
@@ -239,7 +232,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
if (sk_is_mptcp(sk))
mptcpv6_handle_mapped(sk, true);
sk->sk_backlog_rcv = tcp_v4_do_rcv;
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
tp->af_specific = &tcp_sock_ipv6_mapped_specific;
#endif
@@ -252,7 +245,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
if (sk_is_mptcp(sk))
mptcpv6_handle_mapped(sk, false);
sk->sk_backlog_rcv = tcp_v6_do_rcv;
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
tp->af_specific = &tcp_sock_ipv6_specific;
#endif
goto failure;
@@ -286,6 +279,7 @@ static int tcp_v6_connect(struct sock *sk, struct sockaddr *uaddr,
goto failure;
}
+ tp->tcp_usec_ts = dst_tcp_usec_ts(dst);
tcp_death_row = &sock_net(sk)->ipv4.tcp_death_row;
if (!saddr) {
@@ -402,6 +396,8 @@ static int tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
}
if (sk->sk_state == TCP_TIME_WAIT) {
+ /* To increase the counter of ignored icmps for TCP-AO */
+ tcp_ao_ignore_icmp(sk, AF_INET6, type, code);
inet_twsk_put(inet_twsk(sk));
return 0;
}
@@ -412,6 +408,11 @@ static int tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
return 0;
}
+ if (tcp_ao_ignore_icmp(sk, AF_INET6, type, code)) {
+ sock_put(sk);
+ return 0;
+ }
+
bh_lock_sock(sk);
if (sock_owned_by_user(sk) && type != ICMPV6_PKT_TOOBIG)
__NET_INC_STATS(net, LINUX_MIB_LOCKDROPPEDICMPS);
@@ -508,7 +509,7 @@ static int tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
tcp_ld_RTO_revert(sk, seq);
}
- if (!sock_owned_by_user(sk) && np->recverr) {
+ if (!sock_owned_by_user(sk) && inet6_test_bit(RECVERR6, sk)) {
WRITE_ONCE(sk->sk_err, err);
sk_error_report(sk);
} else {
@@ -548,7 +549,7 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst,
&ireq->ir_v6_rmt_addr);
fl6->daddr = ireq->ir_v6_rmt_addr;
- if (np->repflow && ireq->pktopts)
+ if (inet6_test_bit(REPFLOW, sk) && ireq->pktopts)
fl6->flowlabel = ip6_flowlabel(ipv6_hdr(ireq->pktopts));
tclass = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_reflect_tos) ?
@@ -565,7 +566,7 @@ static int tcp_v6_send_synack(const struct sock *sk, struct dst_entry *dst,
if (!opt)
opt = rcu_dereference(np->opt);
err = ip6_xmit(sk, skb, fl6, skb->mark ? : READ_ONCE(sk->sk_mark),
- opt, tclass, sk->sk_priority);
+ opt, tclass, READ_ONCE(sk->sk_priority));
rcu_read_unlock();
err = net_xmit_eval(err);
}
@@ -606,8 +607,10 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname,
{
struct tcp_md5sig cmd;
struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)&cmd.tcpm_addr;
+ union tcp_ao_addr *addr;
int l3index = 0;
u8 prefixlen;
+ bool l3flag;
u8 flags;
if (optlen < sizeof(cmd))
@@ -620,6 +623,7 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname,
return -EINVAL;
flags = cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX;
+ l3flag = cmd.tcpm_flags & TCP_MD5SIG_FLAG_IFINDEX;
if (optname == TCP_MD5SIG_EXT &&
cmd.tcpm_flags & TCP_MD5SIG_FLAG_PREFIX) {
@@ -660,17 +664,33 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname,
if (cmd.tcpm_keylen > TCP_MD5SIG_MAXKEYLEN)
return -EINVAL;
- if (ipv6_addr_v4mapped(&sin6->sin6_addr))
- return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3],
+ if (ipv6_addr_v4mapped(&sin6->sin6_addr)) {
+ addr = (union tcp_md5_addr *)&sin6->sin6_addr.s6_addr32[3];
+
+ /* Don't allow keys for peers that have a matching TCP-AO key.
+ * See the comment in tcp_ao_add_cmd()
+ */
+ if (tcp_ao_required(sk, addr, AF_INET,
+ l3flag ? l3index : -1, false))
+ return -EKEYREJECTED;
+ return tcp_md5_do_add(sk, addr,
AF_INET, prefixlen, l3index, flags,
cmd.tcpm_key, cmd.tcpm_keylen);
+ }
+
+ addr = (union tcp_md5_addr *)&sin6->sin6_addr;
- return tcp_md5_do_add(sk, (union tcp_md5_addr *)&sin6->sin6_addr,
- AF_INET6, prefixlen, l3index, flags,
+ /* Don't allow keys for peers that have a matching TCP-AO key.
+ * See the comment in tcp_ao_add_cmd()
+ */
+ if (tcp_ao_required(sk, addr, AF_INET6, l3flag ? l3index : -1, false))
+ return -EKEYREJECTED;
+
+ return tcp_md5_do_add(sk, addr, AF_INET6, prefixlen, l3index, flags,
cmd.tcpm_key, cmd.tcpm_keylen);
}
-static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp,
+static int tcp_v6_md5_hash_headers(struct tcp_sigpool *hp,
const struct in6_addr *daddr,
const struct in6_addr *saddr,
const struct tcphdr *th, int nbytes)
@@ -691,39 +711,36 @@ static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp,
_th->check = 0;
sg_init_one(&sg, bp, sizeof(*bp) + sizeof(*th));
- ahash_request_set_crypt(hp->md5_req, &sg, NULL,
+ ahash_request_set_crypt(hp->req, &sg, NULL,
sizeof(*bp) + sizeof(*th));
- return crypto_ahash_update(hp->md5_req);
+ return crypto_ahash_update(hp->req);
}
static int tcp_v6_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
const struct in6_addr *daddr, struct in6_addr *saddr,
const struct tcphdr *th)
{
- struct tcp_md5sig_pool *hp;
- struct ahash_request *req;
+ struct tcp_sigpool hp;
- hp = tcp_get_md5sig_pool();
- if (!hp)
- goto clear_hash_noput;
- req = hp->md5_req;
+ if (tcp_sigpool_start(tcp_md5_sigpool_id, &hp))
+ goto clear_hash_nostart;
- if (crypto_ahash_init(req))
+ if (crypto_ahash_init(hp.req))
goto clear_hash;
- if (tcp_v6_md5_hash_headers(hp, daddr, saddr, th, th->doff << 2))
+ if (tcp_v6_md5_hash_headers(&hp, daddr, saddr, th, th->doff << 2))
goto clear_hash;
- if (tcp_md5_hash_key(hp, key))
+ if (tcp_md5_hash_key(&hp, key))
goto clear_hash;
- ahash_request_set_crypt(req, NULL, md5_hash, 0);
- if (crypto_ahash_final(req))
+ ahash_request_set_crypt(hp.req, NULL, md5_hash, 0);
+ if (crypto_ahash_final(hp.req))
goto clear_hash;
- tcp_put_md5sig_pool();
+ tcp_sigpool_end(&hp);
return 0;
clear_hash:
- tcp_put_md5sig_pool();
-clear_hash_noput:
+ tcp_sigpool_end(&hp);
+clear_hash_nostart:
memset(md5_hash, 0, 16);
return 1;
}
@@ -733,10 +750,9 @@ static int tcp_v6_md5_hash_skb(char *md5_hash,
const struct sock *sk,
const struct sk_buff *skb)
{
- const struct in6_addr *saddr, *daddr;
- struct tcp_md5sig_pool *hp;
- struct ahash_request *req;
const struct tcphdr *th = tcp_hdr(skb);
+ const struct in6_addr *saddr, *daddr;
+ struct tcp_sigpool hp;
if (sk) { /* valid for establish/request sockets */
saddr = &sk->sk_v6_rcv_saddr;
@@ -747,34 +763,31 @@ static int tcp_v6_md5_hash_skb(char *md5_hash,
daddr = &ip6h->daddr;
}
- hp = tcp_get_md5sig_pool();
- if (!hp)
- goto clear_hash_noput;
- req = hp->md5_req;
+ if (tcp_sigpool_start(tcp_md5_sigpool_id, &hp))
+ goto clear_hash_nostart;
- if (crypto_ahash_init(req))
+ if (crypto_ahash_init(hp.req))
goto clear_hash;
- if (tcp_v6_md5_hash_headers(hp, daddr, saddr, th, skb->len))
+ if (tcp_v6_md5_hash_headers(&hp, daddr, saddr, th, skb->len))
goto clear_hash;
- if (tcp_md5_hash_skb_data(hp, skb, th->doff << 2))
+ if (tcp_sigpool_hash_skb_data(&hp, skb, th->doff << 2))
goto clear_hash;
- if (tcp_md5_hash_key(hp, key))
+ if (tcp_md5_hash_key(&hp, key))
goto clear_hash;
- ahash_request_set_crypt(req, NULL, md5_hash, 0);
- if (crypto_ahash_final(req))
+ ahash_request_set_crypt(hp.req, NULL, md5_hash, 0);
+ if (crypto_ahash_final(hp.req))
goto clear_hash;
- tcp_put_md5sig_pool();
+ tcp_sigpool_end(&hp);
return 0;
clear_hash:
- tcp_put_md5sig_pool();
-clear_hash_noput:
+ tcp_sigpool_end(&hp);
+clear_hash_nostart:
memset(md5_hash, 0, 16);
return 1;
}
-
#endif
static void tcp_v6_init_req(struct request_sock *req,
@@ -797,7 +810,7 @@ static void tcp_v6_init_req(struct request_sock *req,
(ipv6_opt_accepted(sk_listener, skb, &TCP_SKB_CB(skb)->header.h6) ||
np->rxopt.bits.rxinfo ||
np->rxopt.bits.rxoinfo || np->rxopt.bits.rxhlim ||
- np->rxopt.bits.rxohlim || np->repflow)) {
+ np->rxopt.bits.rxohlim || inet6_test_bit(REPFLOW, sk_listener))) {
refcount_inc(&skb->users);
ireq->pktopts = skb;
}
@@ -833,6 +846,11 @@ const struct tcp_request_sock_ops tcp_request_sock_ipv6_ops = {
.req_md5_lookup = tcp_v6_md5_lookup,
.calc_md5_hash = tcp_v6_md5_hash_skb,
#endif
+#ifdef CONFIG_TCP_AO
+ .ao_lookup = tcp_v6_ao_lookup_rsk,
+ .ao_calc_key = tcp_v6_ao_calc_key_rsk,
+ .ao_synack_hash = tcp_v6_ao_synack_hash,
+#endif
#ifdef CONFIG_SYN_COOKIES
.cookie_init_seq = cookie_v6_init_sequence,
#endif
@@ -844,8 +862,8 @@ const struct tcp_request_sock_ops tcp_request_sock_ipv6_ops = {
static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32 seq,
u32 ack, u32 win, u32 tsval, u32 tsecr,
- int oif, struct tcp_md5sig_key *key, int rst,
- u8 tclass, __be32 label, u32 priority, u32 txhash)
+ int oif, int rst, u8 tclass, __be32 label,
+ u32 priority, u32 txhash, struct tcp_key *key)
{
const struct tcphdr *th = tcp_hdr(skb);
struct tcphdr *t1;
@@ -860,13 +878,13 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32
if (tsecr)
tot_len += TCPOLEN_TSTAMP_ALIGNED;
-#ifdef CONFIG_TCP_MD5SIG
- if (key)
+ if (tcp_key_is_md5(key))
tot_len += TCPOLEN_MD5SIG_ALIGNED;
-#endif
+ if (tcp_key_is_ao(key))
+ tot_len += tcp_ao_len(key->ao_key);
#ifdef CONFIG_MPTCP
- if (rst && !key) {
+ if (rst && !tcp_key_is_md5(key)) {
mrst = mptcp_reset_option(skb);
if (mrst)
@@ -907,14 +925,28 @@ static void tcp_v6_send_response(const struct sock *sk, struct sk_buff *skb, u32
*topt++ = mrst;
#ifdef CONFIG_TCP_MD5SIG
- if (key) {
+ if (tcp_key_is_md5(key)) {
*topt++ = htonl((TCPOPT_NOP << 24) | (TCPOPT_NOP << 16) |
(TCPOPT_MD5SIG << 8) | TCPOLEN_MD5SIG);
- tcp_v6_md5_hash_hdr((__u8 *)topt, key,
+ tcp_v6_md5_hash_hdr((__u8 *)topt, key->md5_key,
&ipv6_hdr(skb)->saddr,
&ipv6_hdr(skb)->daddr, t1);
}
#endif
+#ifdef CONFIG_TCP_AO
+ if (tcp_key_is_ao(key)) {
+ *topt++ = htonl((TCPOPT_AO << 24) |
+ (tcp_ao_len(key->ao_key) << 16) |
+ (key->ao_key->sndid << 8) |
+ (key->rcv_next));
+
+ tcp_ao_hash_hdr(AF_INET6, (char *)topt, key->ao_key,
+ key->traffic_key,
+ (union tcp_ao_addr *)&ipv6_hdr(skb)->saddr,
+ (union tcp_ao_addr *)&ipv6_hdr(skb)->daddr,
+ t1, key->sne);
+ }
+#endif
memset(&fl6, 0, sizeof(fl6));
fl6.daddr = ipv6_hdr(skb)->saddr;
@@ -977,19 +1009,23 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb)
{
const struct tcphdr *th = tcp_hdr(skb);
struct ipv6hdr *ipv6h = ipv6_hdr(skb);
- u32 seq = 0, ack_seq = 0;
- struct tcp_md5sig_key *key = NULL;
-#ifdef CONFIG_TCP_MD5SIG
- const __u8 *hash_location = NULL;
- unsigned char newhash[16];
- int genhash;
- struct sock *sk1 = NULL;
+ const __u8 *md5_hash_location = NULL;
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
+ bool allocated_traffic_key = false;
#endif
+ const struct tcp_ao_hdr *aoh;
+ struct tcp_key key = {};
+ u32 seq = 0, ack_seq = 0;
__be32 label = 0;
u32 priority = 0;
struct net *net;
u32 txhash = 0;
int oif = 0;
+#ifdef CONFIG_TCP_MD5SIG
+ unsigned char newhash[16];
+ int genhash;
+ struct sock *sk1 = NULL;
+#endif
if (th->rst)
return;
@@ -1001,9 +1037,13 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb)
return;
net = sk ? sock_net(sk) : dev_net(skb_dst(skb)->dev);
-#ifdef CONFIG_TCP_MD5SIG
+ /* Invalid TCP option size or twice included auth */
+ if (tcp_parse_auth_options(th, &md5_hash_location, &aoh))
+ return;
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
rcu_read_lock();
- hash_location = tcp_parse_md5sig_option(th);
+#endif
+#ifdef CONFIG_TCP_MD5SIG
if (sk && sk_fullsock(sk)) {
int l3index;
@@ -1011,8 +1051,10 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb)
* in an L3 domain and inet_iif is set to it.
*/
l3index = tcp_v6_sdif(skb) ? tcp_v6_iif_l3_slave(skb) : 0;
- key = tcp_v6_md5_do_lookup(sk, &ipv6h->saddr, l3index);
- } else if (hash_location) {
+ key.md5_key = tcp_v6_md5_do_lookup(sk, &ipv6h->saddr, l3index);
+ if (key.md5_key)
+ key.type = TCP_KEY_MD5;
+ } else if (md5_hash_location) {
int dif = tcp_v6_iif_l3_slave(skb);
int sdif = tcp_v6_sdif(skb);
int l3index;
@@ -1036,12 +1078,13 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb)
*/
l3index = tcp_v6_sdif(skb) ? dif : 0;
- key = tcp_v6_md5_do_lookup(sk1, &ipv6h->saddr, l3index);
- if (!key)
+ key.md5_key = tcp_v6_md5_do_lookup(sk1, &ipv6h->saddr, l3index);
+ if (!key.md5_key)
goto out;
+ key.type = TCP_KEY_MD5;
- genhash = tcp_v6_md5_hash_skb(newhash, key, NULL, skb);
- if (genhash || memcmp(hash_location, newhash, 16) != 0)
+ genhash = tcp_v6_md5_hash_skb(newhash, key.md5_key, NULL, skb);
+ if (genhash || memcmp(md5_hash_location, newhash, 16) != 0)
goto out;
}
#endif
@@ -1052,15 +1095,27 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb)
ack_seq = ntohl(th->seq) + th->syn + th->fin + skb->len -
(th->doff << 2);
+#ifdef CONFIG_TCP_AO
+ if (aoh) {
+ int l3index;
+
+ l3index = tcp_v6_sdif(skb) ? tcp_v6_iif_l3_slave(skb) : 0;
+ if (tcp_ao_prepare_reset(sk, skb, aoh, l3index, seq,
+ &key.ao_key, &key.traffic_key,
+ &allocated_traffic_key,
+ &key.rcv_next, &key.sne))
+ goto out;
+ key.type = TCP_KEY_AO;
+ }
+#endif
+
if (sk) {
oif = sk->sk_bound_dev_if;
if (sk_fullsock(sk)) {
- const struct ipv6_pinfo *np = tcp_inet6_sk(sk);
-
trace_tcp_send_reset(sk, skb);
- if (np->repflow)
+ if (inet6_test_bit(REPFLOW, sk))
label = ip6_flowlabel(ipv6h);
- priority = sk->sk_priority;
+ priority = READ_ONCE(sk->sk_priority);
txhash = sk->sk_txhash;
}
if (sk->sk_state == TCP_TIME_WAIT) {
@@ -1073,45 +1128,141 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb)
label = ip6_flowlabel(ipv6h);
}
- tcp_v6_send_response(sk, skb, seq, ack_seq, 0, 0, 0, oif, key, 1,
- ipv6_get_dsfield(ipv6h), label, priority, txhash);
+ tcp_v6_send_response(sk, skb, seq, ack_seq, 0, 0, 0, oif, 1,
+ ipv6_get_dsfield(ipv6h), label, priority, txhash,
+ &key);
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
out:
+ if (allocated_traffic_key)
+ kfree(key.traffic_key);
rcu_read_unlock();
#endif
}
static void tcp_v6_send_ack(const struct sock *sk, struct sk_buff *skb, u32 seq,
u32 ack, u32 win, u32 tsval, u32 tsecr, int oif,
- struct tcp_md5sig_key *key, u8 tclass,
+ struct tcp_key *key, u8 tclass,
__be32 label, u32 priority, u32 txhash)
{
- tcp_v6_send_response(sk, skb, seq, ack, win, tsval, tsecr, oif, key, 0,
- tclass, label, priority, txhash);
+ tcp_v6_send_response(sk, skb, seq, ack, win, tsval, tsecr, oif, 0,
+ tclass, label, priority, txhash, key);
}
static void tcp_v6_timewait_ack(struct sock *sk, struct sk_buff *skb)
{
struct inet_timewait_sock *tw = inet_twsk(sk);
struct tcp_timewait_sock *tcptw = tcp_twsk(sk);
+ struct tcp_key key = {};
+#ifdef CONFIG_TCP_AO
+ struct tcp_ao_info *ao_info;
+
+ if (static_branch_unlikely(&tcp_ao_needed.key)) {
+
+ /* FIXME: the segment to-be-acked is not verified yet */
+ ao_info = rcu_dereference(tcptw->ao_info);
+ if (ao_info) {
+ const struct tcp_ao_hdr *aoh;
+
+ /* Invalid TCP option size or twice included auth */
+ if (tcp_parse_auth_options(tcp_hdr(skb), NULL, &aoh))
+ goto out;
+ if (aoh)
+ key.ao_key = tcp_ao_established_key(ao_info,
+ aoh->rnext_keyid, -1);
+ }
+ }
+ if (key.ao_key) {
+ struct tcp_ao_key *rnext_key;
+
+ key.traffic_key = snd_other_key(key.ao_key);
+ /* rcv_next switches to our rcv_next */
+ rnext_key = READ_ONCE(ao_info->rnext_key);
+ key.rcv_next = rnext_key->rcvid;
+ key.sne = READ_ONCE(ao_info->snd_sne);
+ key.type = TCP_KEY_AO;
+#else
+ if (0) {
+#endif
+#ifdef CONFIG_TCP_MD5SIG
+ } else if (static_branch_unlikely(&tcp_md5_needed.key)) {
+ key.md5_key = tcp_twsk_md5_key(tcptw);
+ if (key.md5_key)
+ key.type = TCP_KEY_MD5;
+#endif
+ }
tcp_v6_send_ack(sk, skb, tcptw->tw_snd_nxt, tcptw->tw_rcv_nxt,
tcptw->tw_rcv_wnd >> tw->tw_rcv_wscale,
- tcp_time_stamp_raw() + tcptw->tw_ts_offset,
- tcptw->tw_ts_recent, tw->tw_bound_dev_if, tcp_twsk_md5_key(tcptw),
+ tcp_tw_tsval(tcptw),
+ tcptw->tw_ts_recent, tw->tw_bound_dev_if, &key,
tw->tw_tclass, cpu_to_be32(tw->tw_flowlabel), tw->tw_priority,
tw->tw_txhash);
+#ifdef CONFIG_TCP_AO
+out:
+#endif
inet_twsk_put(tw);
}
static void tcp_v6_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb,
struct request_sock *req)
{
- int l3index;
+ struct tcp_key key = {};
+
+#ifdef CONFIG_TCP_AO
+ if (static_branch_unlikely(&tcp_ao_needed.key) &&
+ tcp_rsk_used_ao(req)) {
+ const struct in6_addr *addr = &ipv6_hdr(skb)->saddr;
+ const struct tcp_ao_hdr *aoh;
+ int l3index;
+
+ l3index = tcp_v6_sdif(skb) ? tcp_v6_iif_l3_slave(skb) : 0;
+ /* Invalid TCP option size or twice included auth */
+ if (tcp_parse_auth_options(tcp_hdr(skb), NULL, &aoh))
+ return;
+ if (!aoh)
+ return;
+ key.ao_key = tcp_ao_do_lookup(sk, l3index,
+ (union tcp_ao_addr *)addr,
+ AF_INET6, aoh->rnext_keyid, -1);
+ if (unlikely(!key.ao_key)) {
+ /* Send ACK with any matching MKT for the peer */
+ key.ao_key = tcp_ao_do_lookup(sk, l3index,
+ (union tcp_ao_addr *)addr,
+ AF_INET6, -1, -1);
+ /* Matching key disappeared (user removed the key?)
+ * let the handshake timeout.
+ */
+ if (!key.ao_key) {
+ net_info_ratelimited("TCP-AO key for (%pI6, %d)->(%pI6, %d) suddenly disappeared, won't ACK new connection\n",
+ addr,
+ ntohs(tcp_hdr(skb)->source),
+ &ipv6_hdr(skb)->daddr,
+ ntohs(tcp_hdr(skb)->dest));
+ return;
+ }
+ }
+ key.traffic_key = kmalloc(tcp_ao_digest_size(key.ao_key), GFP_ATOMIC);
+ if (!key.traffic_key)
+ return;
- l3index = tcp_v6_sdif(skb) ? tcp_v6_iif_l3_slave(skb) : 0;
+ key.type = TCP_KEY_AO;
+ key.rcv_next = aoh->keyid;
+ tcp_v6_ao_calc_key_rsk(key.ao_key, key.traffic_key, req);
+#else
+ if (0) {
+#endif
+#ifdef CONFIG_TCP_MD5SIG
+ } else if (static_branch_unlikely(&tcp_md5_needed.key)) {
+ int l3index = tcp_v6_sdif(skb) ? tcp_v6_iif_l3_slave(skb) : 0;
+
+ key.md5_key = tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->saddr,
+ l3index);
+ if (key.md5_key)
+ key.type = TCP_KEY_MD5;
+#endif
+ }
/* sk->sk_state == TCP_LISTEN -> for regular TCP_SYN_RECV
* sk->sk_state == TCP_SYN_RECV -> for Fast Open.
@@ -1125,12 +1276,13 @@ static void tcp_v6_reqsk_send_ack(const struct sock *sk, struct sk_buff *skb,
tcp_rsk(req)->snt_isn + 1 : tcp_sk(sk)->snd_nxt,
tcp_rsk(req)->rcv_nxt,
req->rsk_rcv_wnd >> inet_rsk(req)->rcv_wscale,
- tcp_time_stamp_raw() + tcp_rsk(req)->ts_off,
+ tcp_rsk_tsval(tcp_rsk(req)),
READ_ONCE(req->ts_recent), sk->sk_bound_dev_if,
- tcp_v6_md5_do_lookup(sk, &ipv6_hdr(skb)->saddr, l3index),
- ipv6_get_dsfield(ipv6_hdr(skb)), 0,
+ &key, ipv6_get_dsfield(ipv6_hdr(skb)), 0,
READ_ONCE(sk->sk_priority),
READ_ONCE(tcp_rsk(req)->txhash));
+ if (tcp_key_is_ao(&key))
+ kfree(key.traffic_key);
}
@@ -1235,7 +1387,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
if (sk_is_mptcp(newsk))
mptcpv6_handle_mapped(newsk, true);
newsk->sk_backlog_rcv = tcp_v4_do_rcv;
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
newtp->af_specific = &tcp_sock_ipv6_mapped_specific;
#endif
@@ -1247,7 +1399,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
newnp->mcast_oif = inet_iif(skb);
newnp->mcast_hops = ip_hdr(skb)->ttl;
newnp->rcv_flowinfo = 0;
- if (np->repflow)
+ if (inet6_test_bit(REPFLOW, sk))
newnp->flow_label = 0;
/*
@@ -1320,7 +1472,7 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
newnp->mcast_oif = tcp_v6_iif(skb);
newnp->mcast_hops = ipv6_hdr(skb)->hop_limit;
newnp->rcv_flowinfo = ip6_flowinfo(ipv6_hdr(skb));
- if (np->repflow)
+ if (inet6_test_bit(REPFLOW, sk))
newnp->flow_label = ip6_flowlabel(ipv6_hdr(skb));
/* Set ToS of the new socket based upon the value of incoming SYN.
@@ -1360,19 +1512,26 @@ static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *
#ifdef CONFIG_TCP_MD5SIG
l3index = l3mdev_master_ifindex_by_index(sock_net(sk), ireq->ir_iif);
- /* Copy over the MD5 key from the original socket */
- key = tcp_v6_md5_do_lookup(sk, &newsk->sk_v6_daddr, l3index);
- if (key) {
- const union tcp_md5_addr *addr;
-
- addr = (union tcp_md5_addr *)&newsk->sk_v6_daddr;
- if (tcp_md5_key_copy(newsk, addr, AF_INET6, 128, l3index, key)) {
- inet_csk_prepare_forced_close(newsk);
- tcp_done(newsk);
- goto out;
+ if (!tcp_rsk_used_ao(req)) {
+ /* Copy over the MD5 key from the original socket */
+ key = tcp_v6_md5_do_lookup(sk, &newsk->sk_v6_daddr, l3index);
+ if (key) {
+ const union tcp_md5_addr *addr;
+
+ addr = (union tcp_md5_addr *)&newsk->sk_v6_daddr;
+ if (tcp_md5_key_copy(newsk, addr, AF_INET6, 128, l3index, key)) {
+ inet_csk_prepare_forced_close(newsk);
+ tcp_done(newsk);
+ goto out;
+ }
}
}
#endif
+#ifdef CONFIG_TCP_AO
+ /* Copy over tcp_ao_info if any */
+ if (tcp_ao_copy_all_matching(sk, newsk, req, skb, AF_INET6))
+ goto out; /* OOM */
+#endif
if (__inet_inherit_port(sk, newsk) < 0) {
inet_csk_prepare_forced_close(newsk);
@@ -1542,10 +1701,11 @@ ipv6_pktoptions:
if (np->rxopt.bits.rxinfo || np->rxopt.bits.rxoinfo)
np->mcast_oif = tcp_v6_iif(opt_skb);
if (np->rxopt.bits.rxhlim || np->rxopt.bits.rxohlim)
- np->mcast_hops = ipv6_hdr(opt_skb)->hop_limit;
+ WRITE_ONCE(np->mcast_hops,
+ ipv6_hdr(opt_skb)->hop_limit);
if (np->rxopt.bits.rxflow || np->rxopt.bits.rxtclass)
np->rcv_flowinfo = ip6_flowinfo(ipv6_hdr(opt_skb));
- if (np->repflow)
+ if (inet6_test_bit(REPFLOW, sk))
np->flow_label = ip6_flowlabel(ipv6_hdr(opt_skb));
if (ipv6_opt_accepted(sk, opt_skb, &TCP_SKB_CB(opt_skb)->header.h6)) {
tcp_v6_restore_cb(opt_skb);
@@ -1643,9 +1803,9 @@ process:
if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb))
drop_reason = SKB_DROP_REASON_XFRM_POLICY;
else
- drop_reason = tcp_inbound_md5_hash(sk, skb,
- &hdr->saddr, &hdr->daddr,
- AF_INET6, dif, sdif);
+ drop_reason = tcp_inbound_hash(sk, req, skb,
+ &hdr->saddr, &hdr->daddr,
+ AF_INET6, dif, sdif);
if (drop_reason) {
sk_drops_add(sk, skb);
reqsk_put(req);
@@ -1719,8 +1879,8 @@ process:
goto discard_and_relse;
}
- drop_reason = tcp_inbound_md5_hash(sk, skb, &hdr->saddr, &hdr->daddr,
- AF_INET6, dif, sdif);
+ drop_reason = tcp_inbound_hash(sk, NULL, skb, &hdr->saddr, &hdr->daddr,
+ AF_INET6, dif, sdif);
if (drop_reason)
goto discard_and_relse;
@@ -1895,7 +2055,6 @@ const struct inet_connection_sock_af_ops ipv6_specific = {
.conn_request = tcp_v6_conn_request,
.syn_recv_sock = tcp_v6_syn_recv_sock,
.net_header_len = sizeof(struct ipv6hdr),
- .net_frag_header_len = sizeof(struct frag_hdr),
.setsockopt = ipv6_setsockopt,
.getsockopt = ipv6_getsockopt,
.addr2sockaddr = inet6_csk_addr2sockaddr,
@@ -1903,11 +2062,19 @@ const struct inet_connection_sock_af_ops ipv6_specific = {
.mtu_reduced = tcp_v6_mtu_reduced,
};
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
static const struct tcp_sock_af_ops tcp_sock_ipv6_specific = {
+#ifdef CONFIG_TCP_MD5SIG
.md5_lookup = tcp_v6_md5_lookup,
.calc_md5_hash = tcp_v6_md5_hash_skb,
.md5_parse = tcp_v6_parse_md5_keys,
+#endif
+#ifdef CONFIG_TCP_AO
+ .ao_lookup = tcp_v6_ao_lookup,
+ .calc_ao_hash = tcp_v6_ao_hash_skb,
+ .ao_parse = tcp_v6_parse_ao,
+ .ao_calc_key_sk = tcp_v6_ao_calc_key_sk,
+#endif
};
#endif
@@ -1929,11 +2096,19 @@ static const struct inet_connection_sock_af_ops ipv6_mapped = {
.mtu_reduced = tcp_v4_mtu_reduced,
};
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
static const struct tcp_sock_af_ops tcp_sock_ipv6_mapped_specific = {
+#ifdef CONFIG_TCP_MD5SIG
.md5_lookup = tcp_v4_md5_lookup,
.calc_md5_hash = tcp_v4_md5_hash_skb,
.md5_parse = tcp_v6_parse_md5_keys,
+#endif
+#ifdef CONFIG_TCP_AO
+ .ao_lookup = tcp_v6_ao_lookup,
+ .calc_ao_hash = tcp_v4_ao_hash_skb,
+ .ao_parse = tcp_v6_parse_ao,
+ .ao_calc_key_sk = tcp_v4_ao_calc_key_sk,
+#endif
};
#endif
@@ -1948,7 +2123,7 @@ static int tcp_v6_init_sock(struct sock *sk)
icsk->icsk_af_ops = &ipv6_specific;
-#ifdef CONFIG_TCP_MD5SIG
+#if defined(CONFIG_TCP_MD5SIG) || defined(CONFIG_TCP_AO)
tcp_sk(sk)->af_specific = &tcp_sock_ipv6_specific;
#endif
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 86b5d509a468..622b10a549f7 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -410,10 +410,11 @@ try_again:
*addr_len = sizeof(*sin6);
BPF_CGROUP_RUN_PROG_UDP6_RECVMSG_LOCK(sk,
- (struct sockaddr *)sin6);
+ (struct sockaddr *)sin6,
+ addr_len);
}
- if (udp_sk(sk)->gro_enabled)
+ if (udp_test_bit(GRO_ENABLED, sk))
udp_cmsg_recv(msg, sk, skb);
if (np->rxopt.all)
@@ -571,7 +572,7 @@ int __udp6_lib_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
sk = __udp6_lib_lookup(net, daddr, uh->dest, saddr, uh->source,
inet6_iif(skb), inet6_sdif(skb), udptable, NULL);
- if (!sk || udp_sk(sk)->encap_type) {
+ if (!sk || READ_ONCE(udp_sk(sk)->encap_type)) {
/* No socket for error: try tunnels before discarding */
if (static_branch_unlikely(&udpv6_encap_needed_key)) {
sk = __udp6_lib_err_encap(net, hdr, offset, uh,
@@ -598,7 +599,7 @@ int __udp6_lib_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
if (!ip6_sk_accept_pmtu(sk))
goto out;
ip6_sk_update_pmtu(skb, sk, info);
- if (np->pmtudisc != IPV6_PMTUDISC_DONT)
+ if (READ_ONCE(np->pmtudisc) != IPV6_PMTUDISC_DONT)
harderr = 1;
}
if (type == NDISC_REDIRECT) {
@@ -619,7 +620,7 @@ int __udp6_lib_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
goto out;
}
- if (!np->recverr) {
+ if (!inet6_test_bit(RECVERR6, sk)) {
if (!harderr || sk->sk_state != TCP_ESTABLISHED)
goto out;
} else {
@@ -688,7 +689,8 @@ static int udpv6_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
}
nf_reset_ct(skb);
- if (static_branch_unlikely(&udpv6_encap_needed_key) && up->encap_type) {
+ if (static_branch_unlikely(&udpv6_encap_needed_key) &&
+ READ_ONCE(up->encap_type)) {
int (*encap_rcv)(struct sock *sk, struct sk_buff *skb);
/*
@@ -726,16 +728,17 @@ static int udpv6_queue_rcv_one_skb(struct sock *sk, struct sk_buff *skb)
/*
* UDP-Lite specific tests, ignored on UDP sockets (see net/ipv4/udp.c).
*/
- if ((up->pcflag & UDPLITE_RECV_CC) && UDP_SKB_CB(skb)->partial_cov) {
+ if (udp_test_bit(UDPLITE_RECV_CC, sk) && UDP_SKB_CB(skb)->partial_cov) {
+ u16 pcrlen = READ_ONCE(up->pcrlen);
- if (up->pcrlen == 0) { /* full coverage was set */
+ if (pcrlen == 0) { /* full coverage was set */
net_dbg_ratelimited("UDPLITE6: partial coverage %d while full coverage %d requested\n",
UDP_SKB_CB(skb)->cscov, skb->len);
goto drop;
}
- if (UDP_SKB_CB(skb)->cscov < up->pcrlen) {
+ if (UDP_SKB_CB(skb)->cscov < pcrlen) {
net_dbg_ratelimited("UDPLITE6: coverage %d too small, need min %d\n",
- UDP_SKB_CB(skb)->cscov, up->pcrlen);
+ UDP_SKB_CB(skb)->cscov, pcrlen);
goto drop;
}
}
@@ -858,7 +861,7 @@ start_lookup:
/* If zero checksum and no_check is not on for
* the socket then skip it.
*/
- if (!uh->check && !udp_sk(sk)->no_check6_rx)
+ if (!uh->check && !udp_get_no_check6_rx(sk))
continue;
if (!first) {
first = sk;
@@ -980,7 +983,7 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
if (unlikely(rcu_dereference(sk->sk_rx_dst) != dst))
udp6_sk_rx_dst_set(sk, dst);
- if (!uh->check && !udp_sk(sk)->no_check6_rx) {
+ if (!uh->check && !udp_get_no_check6_rx(sk)) {
if (refcounted)
sock_put(sk);
goto report_csum_error;
@@ -1002,7 +1005,7 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
/* Unicast */
sk = __udp6_lib_lookup_skb(skb, uh->source, uh->dest, udptable);
if (sk) {
- if (!uh->check && !udp_sk(sk)->no_check6_rx)
+ if (!uh->check && !udp_get_no_check6_rx(sk))
goto report_csum_error;
return udp6_unicast_rcv_skb(sk, skb, uh);
}
@@ -1155,7 +1158,7 @@ static int udpv6_pre_connect(struct sock *sk, struct sockaddr *uaddr,
if (addr_len < SIN6_LEN_RFC2133)
return -EINVAL;
- return BPF_CGROUP_RUN_PROG_INET6_CONNECT_LOCK(sk, uaddr);
+ return BPF_CGROUP_RUN_PROG_INET6_CONNECT_LOCK(sk, uaddr, &addr_len);
}
/**
@@ -1241,7 +1244,7 @@ static int udp_v6_send_skb(struct sk_buff *skb, struct flowi6 *fl6,
kfree_skb(skb);
return -EINVAL;
}
- if (udp_sk(sk)->no_check6_tx) {
+ if (udp_get_no_check6_tx(sk)) {
kfree_skb(skb);
return -EINVAL;
}
@@ -1262,7 +1265,7 @@ static int udp_v6_send_skb(struct sk_buff *skb, struct flowi6 *fl6,
if (is_udplite)
csum = udplite_csum(skb);
- else if (udp_sk(sk)->no_check6_tx) { /* UDP csum disabled */
+ else if (udp_get_no_check6_tx(sk)) { /* UDP csum disabled */
skb->ip_summed = CHECKSUM_NONE;
goto send;
} else if (skb->ip_summed == CHECKSUM_PARTIAL) { /* UDP hardware csum */
@@ -1281,7 +1284,7 @@ csum_partial:
send:
err = ip6_send_skb(skb);
if (err) {
- if (err == -ENOBUFS && !inet6_sk(sk)->recverr) {
+ if (err == -ENOBUFS && !inet6_test_bit(RECVERR6, sk)) {
UDP6_INC_STATS(sock_net(sk),
UDP_MIB_SNDBUFERRORS, is_udplite);
err = 0;
@@ -1332,7 +1335,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
int addr_len = msg->msg_namelen;
bool connected = false;
int ulen = len;
- int corkreq = READ_ONCE(up->corkflag) || msg->msg_flags&MSG_MORE;
+ int corkreq = udp_test_bit(CORK, sk) || msg->msg_flags & MSG_MORE;
int err;
int is_udplite = IS_UDPLITE(sk);
int (*getfrag)(void *, char *, int, int, int, struct sk_buff *);
@@ -1427,7 +1430,7 @@ do_udp_sendmsg:
fl6->fl6_dport = sin6->sin6_port;
daddr = &sin6->sin6_addr;
- if (np->sndflow) {
+ if (inet6_test_bit(SNDFLOW, sk)) {
fl6->flowlabel = sin6->sin6_flowinfo&IPV6_FLOWINFO_MASK;
if (fl6->flowlabel & IPV6_FLOWLABEL_MASK) {
flowlabel = fl6_sock_lookup(sk, fl6->flowlabel);
@@ -1508,6 +1511,7 @@ do_udp_sendmsg:
if (cgroup_bpf_enabled(CGROUP_UDP6_SENDMSG) && !connected) {
err = BPF_CGROUP_RUN_PROG_UDP6_SENDMSG_LOCK(sk,
(struct sockaddr *)sin6,
+ &addr_len,
&fl6->saddr);
if (err)
goto out_no_dst;
@@ -1593,7 +1597,7 @@ back_from_confirm:
do_append_data:
if (ipc6.dontfrag < 0)
- ipc6.dontfrag = np->dontfrag;
+ ipc6.dontfrag = inet6_test_bit(DONTFRAG, sk);
up->len += ulen;
err = ip6_append_data(sk, getfrag, msg, ulen, sizeof(struct udphdr),
&ipc6, fl6, (struct rt6_info *)dst,
@@ -1606,7 +1610,7 @@ do_append_data:
up->pending = 0;
if (err > 0)
- err = np->recverr ? net_xmit_errno(err) : 0;
+ err = inet6_test_bit(RECVERR6, sk) ? net_xmit_errno(err) : 0;
release_sock(sk);
out:
@@ -1644,11 +1648,11 @@ static void udpv6_splice_eof(struct socket *sock)
struct sock *sk = sock->sk;
struct udp_sock *up = udp_sk(sk);
- if (!up->pending || READ_ONCE(up->corkflag))
+ if (!up->pending || udp_test_bit(CORK, sk))
return;
lock_sock(sk);
- if (up->pending && !READ_ONCE(up->corkflag))
+ if (up->pending && !udp_test_bit(CORK, sk))
udp_v6_push_pending_frames(sk);
release_sock(sk);
}
@@ -1670,7 +1674,7 @@ void udpv6_destroy_sock(struct sock *sk)
if (encap_destroy)
encap_destroy(sk);
}
- if (up->encap_enabled) {
+ if (udp_test_bit(ENCAP_ENABLED, sk)) {
static_branch_dec(&udpv6_encap_needed_key);
udp_encap_disable();
}
diff --git a/net/ipv6/udplite.c b/net/ipv6/udplite.c
index 267d491e9707..a60bec9b14f1 100644
--- a/net/ipv6/udplite.c
+++ b/net/ipv6/udplite.c
@@ -17,7 +17,6 @@
static int udplitev6_sk_init(struct sock *sk)
{
udpv6_init_sock(sk);
- udp_sk(sk)->pcflag = UDPLITE_BIT;
pr_warn_once("UDP-Lite is deprecated and scheduled to be removed in 2025, "
"please contact the netdev mailing list\n");
return 0;
diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c
index 4907ab241d6b..4156387248e4 100644
--- a/net/ipv6/xfrm6_input.c
+++ b/net/ipv6/xfrm6_input.c
@@ -81,14 +81,14 @@ int xfrm6_udp_encap_rcv(struct sock *sk, struct sk_buff *skb)
struct ipv6hdr *ip6h;
int len;
int ip6hlen = sizeof(struct ipv6hdr);
-
__u8 *udpdata;
__be32 *udpdata32;
- __u16 encap_type = up->encap_type;
+ u16 encap_type;
if (skb->protocol == htons(ETH_P_IP))
return xfrm4_udp_encap_rcv(sk, skb);
+ encap_type = READ_ONCE(up->encap_type);
/* if this is not encapsulated socket, then just return now */
if (!encap_type)
return 1;
diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
index ad07904642ca..5f7b1fdbffe6 100644
--- a/net/ipv6/xfrm6_output.c
+++ b/net/ipv6/xfrm6_output.c
@@ -95,7 +95,7 @@ static int __xfrm6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
return -EMSGSIZE;
}
- if (toobig || dst_allfrag(skb_dst(skb)))
+ if (toobig)
return ip6_fragment(net, sk, skb,
__xfrm6_output_finish);
diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
index 03608d3ded4b..8d21ff25f160 100644
--- a/net/l2tp/l2tp_core.c
+++ b/net/l2tp/l2tp_core.c
@@ -1139,9 +1139,9 @@ static void l2tp_tunnel_destruct(struct sock *sk)
switch (tunnel->encap) {
case L2TP_ENCAPTYPE_UDP:
/* No longer an encapsulation socket. See net/ipv4/udp.c */
- (udp_sk(sk))->encap_type = 0;
- (udp_sk(sk))->encap_rcv = NULL;
- (udp_sk(sk))->encap_destroy = NULL;
+ WRITE_ONCE(udp_sk(sk)->encap_type, 0);
+ udp_sk(sk)->encap_rcv = NULL;
+ udp_sk(sk)->encap_destroy = NULL;
break;
case L2TP_ENCAPTYPE_IP:
break;
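The l2tp hunk above clears encap_type with WRITE_ONCE() because the udp.c and xfrm6_input.c hunks in this same series now read it locklessly with READ_ONCE(), so the paired annotations keep the compiler from tearing or refetching the access. A hypothetical sketch of that writer/reader pairing (the demo_* names are made up, not kernel API):

#include <linux/compiler.h>
#include <linux/types.h>

struct demo_sock {
	u16 encap_type;		/* written under lock, read locklessly */
};

/* writer side, called with the socket lock held */
static void demo_set_encap(struct demo_sock *ds, u16 val)
{
	WRITE_ONCE(ds->encap_type, val);
}

/* reader side, may run without the lock (e.g. from the receive path) */
static u16 demo_peek_encap(const struct demo_sock *ds)
{
	return READ_ONCE(ds->encap_type);
}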
diff --git a/net/l2tp/l2tp_eth.c b/net/l2tp/l2tp_eth.c
index f2ae03c40473..25ca89f80414 100644
--- a/net/l2tp/l2tp_eth.c
+++ b/net/l2tp/l2tp_eth.c
@@ -37,12 +37,6 @@
/* via netdev_priv() */
struct l2tp_eth {
struct l2tp_session *session;
- atomic_long_t tx_bytes;
- atomic_long_t tx_packets;
- atomic_long_t tx_dropped;
- atomic_long_t rx_bytes;
- atomic_long_t rx_packets;
- atomic_long_t rx_errors;
};
/* via l2tp_session_priv() */
@@ -79,10 +73,10 @@ static netdev_tx_t l2tp_eth_dev_xmit(struct sk_buff *skb, struct net_device *dev
int ret = l2tp_xmit_skb(session, skb);
if (likely(ret == NET_XMIT_SUCCESS)) {
- atomic_long_add(len, &priv->tx_bytes);
- atomic_long_inc(&priv->tx_packets);
+ DEV_STATS_ADD(dev, tx_bytes, len);
+ DEV_STATS_INC(dev, tx_packets);
} else {
- atomic_long_inc(&priv->tx_dropped);
+ DEV_STATS_INC(dev, tx_dropped);
}
return NETDEV_TX_OK;
}
@@ -90,14 +84,12 @@ static netdev_tx_t l2tp_eth_dev_xmit(struct sk_buff *skb, struct net_device *dev
static void l2tp_eth_get_stats64(struct net_device *dev,
struct rtnl_link_stats64 *stats)
{
- struct l2tp_eth *priv = netdev_priv(dev);
-
- stats->tx_bytes = (unsigned long)atomic_long_read(&priv->tx_bytes);
- stats->tx_packets = (unsigned long)atomic_long_read(&priv->tx_packets);
- stats->tx_dropped = (unsigned long)atomic_long_read(&priv->tx_dropped);
- stats->rx_bytes = (unsigned long)atomic_long_read(&priv->rx_bytes);
- stats->rx_packets = (unsigned long)atomic_long_read(&priv->rx_packets);
- stats->rx_errors = (unsigned long)atomic_long_read(&priv->rx_errors);
+ stats->tx_bytes = DEV_STATS_READ(dev, tx_bytes);
+ stats->tx_packets = DEV_STATS_READ(dev, tx_packets);
+ stats->tx_dropped = DEV_STATS_READ(dev, tx_dropped);
+ stats->rx_bytes = DEV_STATS_READ(dev, rx_bytes);
+ stats->rx_packets = DEV_STATS_READ(dev, rx_packets);
+ stats->rx_errors = DEV_STATS_READ(dev, rx_errors);
}
static const struct net_device_ops l2tp_eth_netdev_ops = {
@@ -126,7 +118,6 @@ static void l2tp_eth_dev_recv(struct l2tp_session *session, struct sk_buff *skb,
{
struct l2tp_eth_sess *spriv = l2tp_session_priv(session);
struct net_device *dev;
- struct l2tp_eth *priv;
if (!pskb_may_pull(skb, ETH_HLEN))
goto error;
@@ -144,12 +135,11 @@ static void l2tp_eth_dev_recv(struct l2tp_session *session, struct sk_buff *skb,
if (!dev)
goto error_rcu;
- priv = netdev_priv(dev);
if (dev_forward_skb(dev, skb) == NET_RX_SUCCESS) {
- atomic_long_inc(&priv->rx_packets);
- atomic_long_add(data_len, &priv->rx_bytes);
+ DEV_STATS_INC(dev, rx_packets);
+ DEV_STATS_ADD(dev, rx_bytes, data_len);
} else {
- atomic_long_inc(&priv->rx_errors);
+ DEV_STATS_INC(dev, rx_errors);
}
rcu_read_unlock();
diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
index 11f3d375cec0..bb373e249237 100644
--- a/net/l2tp/l2tp_ip6.c
+++ b/net/l2tp/l2tp_ip6.c
@@ -431,7 +431,7 @@ static int l2tp_ip6_getname(struct socket *sock, struct sockaddr *uaddr,
return -ENOTCONN;
lsa->l2tp_conn_id = lsk->peer_conn_id;
lsa->l2tp_addr = sk->sk_v6_daddr;
- if (np->sndflow)
+ if (inet6_test_bit(SNDFLOW, sk))
lsa->l2tp_flowinfo = np->flow_label;
} else {
if (ipv6_addr_any(&sk->sk_v6_rcv_saddr))
@@ -528,7 +528,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
return -EAFNOSUPPORT;
daddr = &lsa->l2tp_addr;
- if (np->sndflow) {
+ if (inet6_test_bit(SNDFLOW, sk)) {
fl6.flowlabel = lsa->l2tp_flowinfo & IPV6_FLOWINFO_MASK;
if (fl6.flowlabel & IPV6_FLOWLABEL_MASK) {
flowlabel = fl6_sock_lookup(sk, fl6.flowlabel);
@@ -620,7 +620,7 @@ static int l2tp_ip6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
if (ipc6.dontfrag < 0)
- ipc6.dontfrag = np->dontfrag;
+ ipc6.dontfrag = inet6_test_bit(DONTFRAG, sk);
if (msg->msg_flags & MSG_CONFIRM)
goto do_confirm;
diff --git a/net/mac80211/Kconfig b/net/mac80211/Kconfig
index 51ec8256b7fa..037ab74f5ade 100644
--- a/net/mac80211/Kconfig
+++ b/net/mac80211/Kconfig
@@ -57,6 +57,17 @@ endif
comment "Some wireless drivers require a rate control algorithm"
depends on MAC80211 && MAC80211_HAS_RC=n
+config MAC80211_KUNIT_TEST
+ tristate "KUnit tests for mac80211" if !KUNIT_ALL_TESTS
+ depends on KUNIT
+ depends on MAC80211
+ default KUNIT_ALL_TESTS
+ depends on !KERNEL_6_2
+ help
+ Enable this option to test mac80211 internals with kunit.
+
+ If unsure, say N.
+
config MAC80211_MESH
bool "Enable mac80211 mesh networking support"
depends on MAC80211
diff --git a/net/mac80211/Makefile b/net/mac80211/Makefile
index b8de44da1fb8..c9eb52768133 100644
--- a/net/mac80211/Makefile
+++ b/net/mac80211/Makefile
@@ -65,4 +65,6 @@ rc80211_minstrel-$(CONFIG_MAC80211_DEBUGFS) += \
mac80211-$(CONFIG_MAC80211_RC_MINSTREL) += $(rc80211_minstrel-y)
+obj-y += tests/
+
ccflags-y += -DDEBUG
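The Kconfig and Makefile hunks above hook a new tests/ directory into the mac80211 build when MAC80211_KUNIT_TEST is set. A KUnit suite of the shape such a directory holds looks roughly like the sketch below; the suite name and test body are illustrative, not the tests this series adds.

// SPDX-License-Identifier: GPL-2.0
/* Minimal KUnit suite sketch; "mac80211-demo" is a made-up name. */
#include <kunit/test.h>
#include <linux/module.h>

static void demo_addition_test(struct kunit *test)
{
	KUNIT_EXPECT_EQ(test, 2 + 2, 4);
}

static struct kunit_case demo_test_cases[] = {
	KUNIT_CASE(demo_addition_test),
	{}
};

static struct kunit_suite demo_suite = {
	.name = "mac80211-demo",
	.test_cases = demo_test_cases,
};

kunit_test_suite(demo_suite);

MODULE_LICENSE("GPL");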
diff --git a/net/mac80211/agg-rx.c b/net/mac80211/agg-rx.c
index c6fa53230450..9bffac7a4974 100644
--- a/net/mac80211/agg-rx.c
+++ b/net/mac80211/agg-rx.c
@@ -9,7 +9,7 @@
* Copyright 2007, Michael Wu <flamingice@sourmilk.net>
* Copyright 2007-2010, Intel Corporation
* Copyright(c) 2015-2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2022 Intel Corporation
+ * Copyright (C) 2018-2023 Intel Corporation
*/
/**
@@ -55,8 +55,8 @@ static void ieee80211_free_tid_rx(struct rcu_head *h)
kfree(tid_rx);
}
-void ___ieee80211_stop_rx_ba_session(struct sta_info *sta, u16 tid,
- u16 initiator, u16 reason, bool tx)
+void __ieee80211_stop_rx_ba_session(struct sta_info *sta, u16 tid,
+ u16 initiator, u16 reason, bool tx)
{
struct ieee80211_local *local = sta->local;
struct tid_ampdu_rx *tid_rx;
@@ -69,10 +69,10 @@ void ___ieee80211_stop_rx_ba_session(struct sta_info *sta, u16 tid,
.ssn = 0,
};
- lockdep_assert_held(&sta->ampdu_mlme.mtx);
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
tid_rx = rcu_dereference_protected(sta->ampdu_mlme.tid_rx[tid],
- lockdep_is_held(&sta->ampdu_mlme.mtx));
+ lockdep_is_held(&sta->local->hw.wiphy->mtx));
if (!test_bit(tid, sta->ampdu_mlme.agg_session_valid))
return;
@@ -114,14 +114,6 @@ void ___ieee80211_stop_rx_ba_session(struct sta_info *sta, u16 tid,
call_rcu(&tid_rx->rcu_head, ieee80211_free_tid_rx);
}
-void __ieee80211_stop_rx_ba_session(struct sta_info *sta, u16 tid,
- u16 initiator, u16 reason, bool tx)
-{
- mutex_lock(&sta->ampdu_mlme.mtx);
- ___ieee80211_stop_rx_ba_session(sta, tid, initiator, reason, tx);
- mutex_unlock(&sta->ampdu_mlme.mtx);
-}
-
void ieee80211_stop_rx_ba_session(struct ieee80211_vif *vif, u16 ba_rx_bitmap,
const u8 *addr)
{
@@ -140,7 +132,7 @@ void ieee80211_stop_rx_ba_session(struct ieee80211_vif *vif, u16 ba_rx_bitmap,
if (ba_rx_bitmap & BIT(i))
set_bit(i, sta->ampdu_mlme.tid_rx_stop_requested);
- ieee80211_queue_work(&sta->local->hw, &sta->ampdu_mlme.work);
+ wiphy_work_queue(sta->local->hw.wiphy, &sta->ampdu_mlme.work);
rcu_read_unlock();
}
EXPORT_SYMBOL(ieee80211_stop_rx_ba_session);
@@ -166,7 +158,7 @@ static void sta_rx_agg_session_timer_expired(struct timer_list *t)
sta->sta.addr, tid);
set_bit(tid, sta->ampdu_mlme.tid_rx_timer_expired);
- ieee80211_queue_work(&sta->local->hw, &sta->ampdu_mlme.work);
+ wiphy_work_queue(sta->local->hw.wiphy, &sta->ampdu_mlme.work);
}
static void sta_rx_agg_reorder_timer_expired(struct timer_list *t)
@@ -250,11 +242,11 @@ static void ieee80211_send_addba_resp(struct sta_info *sta, u8 *da, u16 tid,
ieee80211_tx_skb(sdata, skb);
}
-void ___ieee80211_start_rx_ba_session(struct sta_info *sta,
- u8 dialog_token, u16 timeout,
- u16 start_seq_num, u16 ba_policy, u16 tid,
- u16 buf_size, bool tx, bool auto_seq,
- const struct ieee80211_addba_ext_ie *addbaext)
+void __ieee80211_start_rx_ba_session(struct sta_info *sta,
+ u8 dialog_token, u16 timeout,
+ u16 start_seq_num, u16 ba_policy, u16 tid,
+ u16 buf_size, bool tx, bool auto_seq,
+ const struct ieee80211_addba_ext_ie *addbaext)
{
struct ieee80211_local *local = sta->sdata->local;
struct tid_ampdu_rx *tid_agg_rx;
@@ -270,6 +262,8 @@ void ___ieee80211_start_rx_ba_session(struct sta_info *sta,
u16 status = WLAN_STATUS_REQUEST_DECLINED;
u16 max_buf_size;
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
+
if (tid >= IEEE80211_FIRST_TSPEC_TSID) {
ht_dbg(sta->sdata,
"STA %pM requests BA session on unsupported tid %d\n",
@@ -325,9 +319,6 @@ void ___ieee80211_start_rx_ba_session(struct sta_info *sta,
ht_dbg(sta->sdata, "AddBA Req buf_size=%d for %pM\n",
buf_size, sta->sta.addr);
- /* examine state machine */
- lockdep_assert_held(&sta->ampdu_mlme.mtx);
-
if (test_bit(tid, sta->ampdu_mlme.agg_session_valid)) {
if (sta->ampdu_mlme.tid_rx_token[tid] == dialog_token) {
struct tid_ampdu_rx *tid_rx;
@@ -355,9 +346,9 @@ void ___ieee80211_start_rx_ba_session(struct sta_info *sta,
sta->sta.addr, tid);
/* delete existing Rx BA session on the same tid */
- ___ieee80211_stop_rx_ba_session(sta, tid, WLAN_BACK_RECIPIENT,
- WLAN_STATUS_UNSPECIFIED_QOS,
- false);
+ __ieee80211_stop_rx_ba_session(sta, tid, WLAN_BACK_RECIPIENT,
+ WLAN_STATUS_UNSPECIFIED_QOS,
+ false);
}
if (ieee80211_hw_check(&local->hw, SUPPORTS_REORDERING_BUFFER)) {
@@ -444,20 +435,6 @@ end:
timeout, addbaext);
}
-static void __ieee80211_start_rx_ba_session(struct sta_info *sta,
- u8 dialog_token, u16 timeout,
- u16 start_seq_num, u16 ba_policy,
- u16 tid, u16 buf_size, bool tx,
- bool auto_seq,
- const struct ieee80211_addba_ext_ie *addbaext)
-{
- mutex_lock(&sta->ampdu_mlme.mtx);
- ___ieee80211_start_rx_ba_session(sta, dialog_token, timeout,
- start_seq_num, ba_policy, tid,
- buf_size, tx, auto_seq, addbaext);
- mutex_unlock(&sta->ampdu_mlme.mtx);
-}
-
void ieee80211_process_addba_request(struct ieee80211_local *local,
struct sta_info *sta,
struct ieee80211_mgmt *mgmt,
@@ -507,7 +484,6 @@ void ieee80211_manage_rx_ba_offl(struct ieee80211_vif *vif,
const u8 *addr, unsigned int tid)
{
struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
- struct ieee80211_local *local = sdata->local;
struct sta_info *sta;
rcu_read_lock();
@@ -516,7 +492,7 @@ void ieee80211_manage_rx_ba_offl(struct ieee80211_vif *vif,
goto unlock;
set_bit(tid, sta->ampdu_mlme.tid_rx_manage_offl);
- ieee80211_queue_work(&local->hw, &sta->ampdu_mlme.work);
+ wiphy_work_queue(sta->local->hw.wiphy, &sta->ampdu_mlme.work);
unlock:
rcu_read_unlock();
}
@@ -526,7 +502,6 @@ void ieee80211_rx_ba_timer_expired(struct ieee80211_vif *vif,
const u8 *addr, unsigned int tid)
{
struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
- struct ieee80211_local *local = sdata->local;
struct sta_info *sta;
rcu_read_lock();
@@ -535,7 +510,7 @@ void ieee80211_rx_ba_timer_expired(struct ieee80211_vif *vif,
goto unlock;
set_bit(tid, sta->ampdu_mlme.tid_rx_timer_expired);
- ieee80211_queue_work(&local->hw, &sta->ampdu_mlme.work);
+ wiphy_work_queue(sta->local->hw.wiphy, &sta->ampdu_mlme.work);
unlock:
rcu_read_unlock();
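/*
 * Editor's illustrative sketch, not part of the patch above: the agg-rx.c
 * hunks replace ieee80211_queue_work() with wiphy_work_queue() and drop the
 * ampdu_mlme.mtx wrapper functions, because a wiphy_work always runs with
 * the wiphy mutex held.  The names my_drv_priv/my_ba_work_fn below are
 * hypothetical; wiphy_work_init(), wiphy_work_queue() and
 * lockdep_assert_wiphy() are the real cfg80211 helpers the patch relies on.
 */
#include <net/cfg80211.h>

struct my_drv_priv {
	struct wiphy *wiphy;
	struct wiphy_work ba_work;
};

static void my_ba_work_fn(struct wiphy *wiphy, struct wiphy_work *work)
{
	struct my_drv_priv *priv = container_of(work, struct my_drv_priv, ba_work);

	/* invoked with wiphy->mtx held, so no extra per-station mutex
	 * (the old ampdu_mlme.mtx) is needed to serialize BA state */
	lockdep_assert_wiphy(wiphy);
	(void)priv;
}

static void my_init_and_kick(struct my_drv_priv *priv)
{
	wiphy_work_init(&priv->ba_work, my_ba_work_fn);
	/* like the hunks above, this may be called from timer or RX context */
	wiphy_work_queue(priv->wiphy, &priv->ba_work);
}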
diff --git a/net/mac80211/agg-tx.c b/net/mac80211/agg-tx.c
index b6b772685881..b8a278355e18 100644
--- a/net/mac80211/agg-tx.c
+++ b/net/mac80211/agg-tx.c
@@ -142,7 +142,7 @@ EXPORT_SYMBOL(ieee80211_send_bar);
void ieee80211_assign_tid_tx(struct sta_info *sta, int tid,
struct tid_ampdu_tx *tid_tx)
{
- lockdep_assert_held(&sta->ampdu_mlme.mtx);
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
lockdep_assert_held(&sta->lock);
rcu_assign_pointer(sta->ampdu_mlme.tid_tx[tid], tid_tx);
}
@@ -213,7 +213,7 @@ ieee80211_agg_start_txq(struct sta_info *sta, int tid, bool enable)
struct ieee80211_txq *txq = sta->sta.txq[tid];
struct txq_info *txqi;
- lockdep_assert_held(&sta->ampdu_mlme.mtx);
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
if (!txq)
return;
@@ -271,7 +271,7 @@ static void ieee80211_remove_tid_tx(struct sta_info *sta, int tid)
{
struct tid_ampdu_tx *tid_tx;
- lockdep_assert_held(&sta->ampdu_mlme.mtx);
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
lockdep_assert_held(&sta->lock);
tid_tx = rcu_dereference_protected_tid_tx(sta, tid);
@@ -296,8 +296,8 @@ static void ieee80211_remove_tid_tx(struct sta_info *sta, int tid)
kfree_rcu(tid_tx, rcu_head);
}
-int ___ieee80211_stop_tx_ba_session(struct sta_info *sta, u16 tid,
- enum ieee80211_agg_stop_reason reason)
+int __ieee80211_stop_tx_ba_session(struct sta_info *sta, u16 tid,
+ enum ieee80211_agg_stop_reason reason)
{
struct ieee80211_local *local = sta->local;
struct tid_ampdu_tx *tid_tx;
@@ -311,7 +311,7 @@ int ___ieee80211_stop_tx_ba_session(struct sta_info *sta, u16 tid,
};
int ret;
- lockdep_assert_held(&sta->ampdu_mlme.mtx);
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
switch (reason) {
case AGG_STOP_DECLINED:
@@ -461,7 +461,7 @@ static void ieee80211_send_addba_with_timeout(struct sta_info *sta,
test_bit(HT_AGG_STATE_WANT_STOP, &tid_tx->state)))
return;
- lockdep_assert_held(&sta->ampdu_mlme.mtx);
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
/* activate the timer for the recipient's addBA response */
mod_timer(&tid_tx->addba_resp_timer, jiffies + ADDBA_RESP_INTERVAL);
@@ -497,7 +497,7 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
{
struct tid_ampdu_tx *tid_tx;
struct ieee80211_local *local = sta->local;
- struct ieee80211_sub_if_data *sdata;
+ struct ieee80211_sub_if_data *sdata = sta->sdata;
struct ieee80211_ampdu_params params = {
.sta = &sta->sta,
.action = IEEE80211_AMPDU_TX_START,
@@ -525,7 +525,6 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
*/
synchronize_net();
- sdata = sta->sdata;
params.ssn = sta->tid_seq[tid] >> 4;
ret = drv_ampdu_action(local, sdata, &params);
tid_tx->ssn = params.ssn;
@@ -539,9 +538,6 @@ void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid)
*/
set_bit(HT_AGG_STATE_DRV_READY, &tid_tx->state);
} else if (ret) {
- if (!sdata)
- return;
-
ht_dbg(sdata,
"BA request denied - HW unavailable for %pM tid %d\n",
sta->sta.addr, tid);
@@ -743,7 +739,7 @@ int ieee80211_start_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid,
*/
sta->ampdu_mlme.tid_start_tx[tid] = tid_tx;
- ieee80211_queue_work(&local->hw, &sta->ampdu_mlme.work);
+ wiphy_work_queue(local->hw.wiphy, &sta->ampdu_mlme.work);
/* this flow continues off the work */
err_unlock_sta:
@@ -764,7 +760,7 @@ static void ieee80211_agg_tx_operational(struct ieee80211_local *local,
.ssn = 0,
};
- lockdep_assert_held(&sta->ampdu_mlme.mtx);
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
tid_tx = rcu_dereference_protected_tid_tx(sta, tid);
params.buf_size = tid_tx->buf_size;
@@ -801,7 +797,7 @@ void ieee80211_start_tx_ba_cb(struct sta_info *sta, int tid,
struct ieee80211_sub_if_data *sdata = sta->sdata;
struct ieee80211_local *local = sdata->local;
- lockdep_assert_held(&sta->ampdu_mlme.mtx);
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
if (WARN_ON(test_and_set_bit(HT_AGG_STATE_DRV_READY, &tid_tx->state)))
return;
@@ -862,26 +858,12 @@ void ieee80211_start_tx_ba_cb_irqsafe(struct ieee80211_vif *vif,
goto out;
set_bit(HT_AGG_STATE_START_CB, &tid_tx->state);
- ieee80211_queue_work(&local->hw, &sta->ampdu_mlme.work);
+ wiphy_work_queue(local->hw.wiphy, &sta->ampdu_mlme.work);
out:
rcu_read_unlock();
}
EXPORT_SYMBOL(ieee80211_start_tx_ba_cb_irqsafe);
-int __ieee80211_stop_tx_ba_session(struct sta_info *sta, u16 tid,
- enum ieee80211_agg_stop_reason reason)
-{
- int ret;
-
- mutex_lock(&sta->ampdu_mlme.mtx);
-
- ret = ___ieee80211_stop_tx_ba_session(sta, tid, reason);
-
- mutex_unlock(&sta->ampdu_mlme.mtx);
-
- return ret;
-}
-
int ieee80211_stop_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid)
{
struct sta_info *sta = container_of(pubsta, struct sta_info, sta);
@@ -916,7 +898,7 @@ int ieee80211_stop_tx_ba_session(struct ieee80211_sta *pubsta, u16 tid)
}
set_bit(HT_AGG_STATE_WANT_STOP, &tid_tx->state);
- ieee80211_queue_work(&local->hw, &sta->ampdu_mlme.work);
+ wiphy_work_queue(local->hw.wiphy, &sta->ampdu_mlme.work);
unlock:
spin_unlock_bh(&sta->lock);
@@ -976,7 +958,7 @@ void ieee80211_stop_tx_ba_cb_irqsafe(struct ieee80211_vif *vif,
goto out;
set_bit(HT_AGG_STATE_STOP_CB, &tid_tx->state);
- ieee80211_queue_work(&local->hw, &sta->ampdu_mlme.work);
+ wiphy_work_queue(local->hw.wiphy, &sta->ampdu_mlme.work);
out:
rcu_read_unlock();
}
@@ -993,6 +975,8 @@ void ieee80211_process_addba_resp(struct ieee80211_local *local,
u16 capab, tid, buf_size;
bool amsdu;
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
+
capab = le16_to_cpu(mgmt->u.action.u.addba_resp.capab);
amsdu = capab & IEEE80211_ADDBA_PARAM_AMSDU_MASK;
tid = u16_get_bits(capab, IEEE80211_ADDBA_PARAM_TID_MASK);
@@ -1003,16 +987,14 @@ void ieee80211_process_addba_resp(struct ieee80211_local *local,
if (!amsdu && txq)
set_bit(IEEE80211_TXQ_NO_AMSDU, &to_txq_info(txq)->flags);
- mutex_lock(&sta->ampdu_mlme.mtx);
-
tid_tx = rcu_dereference_protected_tid_tx(sta, tid);
if (!tid_tx)
- goto out;
+ return;
if (mgmt->u.action.u.addba_resp.dialog_token != tid_tx->dialog_token) {
ht_dbg(sta->sdata, "wrong addBA response token, %pM tid %d\n",
sta->sta.addr, tid);
- goto out;
+ return;
}
del_timer_sync(&tid_tx->addba_resp_timer);
@@ -1030,7 +1012,7 @@ void ieee80211_process_addba_resp(struct ieee80211_local *local,
ht_dbg(sta->sdata,
"got addBA resp for %pM tid %d but we already gave up\n",
sta->sta.addr, tid);
- goto out;
+ return;
}
/*
@@ -1044,7 +1026,7 @@ void ieee80211_process_addba_resp(struct ieee80211_local *local,
if (test_and_set_bit(HT_AGG_STATE_RESPONSE_RECEIVED,
&tid_tx->state)) {
/* ignore duplicate response */
- goto out;
+ return;
}
tid_tx->buf_size = buf_size;
@@ -1065,9 +1047,6 @@ void ieee80211_process_addba_resp(struct ieee80211_local *local,
}
} else {
- ___ieee80211_stop_tx_ba_session(sta, tid, AGG_STOP_DECLINED);
+ __ieee80211_stop_tx_ba_session(sta, tid, AGG_STOP_DECLINED);
}
-
- out:
- mutex_unlock(&sta->ampdu_mlme.mtx);
}
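/*
 * Editor's illustrative sketch, not part of the patch above: with the TX BA
 * state machine serialized by the wiphy mutex instead of
 * sta->ampdu_mlme.mtx, the old ___/__ wrapper pair collapses into a single
 * function that only asserts the lock.  my_stop_one()/my_stop_all() are
 * hypothetical and just show where wiphy_lock()/wiphy_unlock() sit for a
 * caller that is not already running under the wiphy mutex.
 */
#include <net/cfg80211.h>

static int my_stop_one(struct wiphy *wiphy)
{
	/* lockdep_assert_wiphy(w) expands to lockdep_assert_held(&w->mtx) */
	lockdep_assert_wiphy(wiphy);
	return 0;
}

static int my_stop_all(struct wiphy *wiphy)
{
	int ret;

	wiphy_lock(wiphy);	/* takes the role of the removed mutex_lock() */
	ret = my_stop_one(wiphy);
	wiphy_unlock(wiphy);

	return ret;
}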
diff --git a/net/mac80211/airtime.c b/net/mac80211/airtime.c
index e8ebd343e2bf..fdf8b658fede 100644
--- a/net/mac80211/airtime.c
+++ b/net/mac80211/airtime.c
@@ -557,7 +557,7 @@ static int ieee80211_fill_rx_status(struct ieee80211_rx_status *stat,
if (ieee80211_fill_rate_info(hw, stat, band, ri))
return 0;
- if (rate->idx < 0 || !rate->count)
+ if (!ieee80211_rate_valid(rate))
return -1;
if (rate->flags & IEEE80211_TX_RC_160_MHZ_WIDTH)
@@ -632,7 +632,7 @@ u32 ieee80211_calc_expected_tx_airtime(struct ieee80211_hw *hw,
{
struct ieee80211_supported_band *sband;
struct ieee80211_chanctx_conf *conf;
- int rateidx, shift = 0;
+ int rateidx;
bool cck, short_pream;
u32 basic_rates;
u8 band = 0;
@@ -641,10 +641,8 @@ u32 ieee80211_calc_expected_tx_airtime(struct ieee80211_hw *hw,
len += 38; /* Ethernet header length */
conf = rcu_dereference(vif->bss_conf.chanctx_conf);
- if (conf) {
+ if (conf)
band = conf->def.chan->band;
- shift = ieee80211_chandef_get_shift(&conf->def);
- }
if (pubsta) {
struct sta_info *sta = container_of(pubsta, struct sta_info,
@@ -704,7 +702,7 @@ u32 ieee80211_calc_expected_tx_airtime(struct ieee80211_hw *hw,
short_pream = vif->bss_conf.use_short_preamble;
rateidx = basic_rates ? ffs(basic_rates) - 1 : 0;
- rate = sband->bitrates[rateidx].bitrate << shift;
+ rate = sband->bitrates[rateidx].bitrate;
cck = sband->bitrates[rateidx].flags & IEEE80211_RATE_MANDATORY_B;
return ieee80211_calc_legacy_rate_duration(rate, short_pream, cck, len);
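/*
 * Editor's note with a sketch, not part of the patch above: the airtime.c
 * hunks replace the open-coded "rate->idx < 0 || !rate->count" test with
 * the existing ieee80211_rate_valid() helper and drop the per-chandef rate
 * "shift", so legacy bitrates are now used unscaled.  example_rate_valid()
 * is a hypothetical restatement of the condition being checked.
 */
#include <net/mac80211.h>

static bool example_rate_valid(const struct ieee80211_tx_rate *rate)
{
	/* same condition the removed lines checked by hand */
	return rate->idx >= 0 && rate->count;
}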
diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
index 0e3a1753a51c..606b1b2e4123 100644
--- a/net/mac80211/cfg.c
+++ b/net/mac80211/cfg.c
@@ -214,6 +214,8 @@ static int ieee80211_change_iface(struct wiphy *wiphy,
struct sta_info *sta;
int ret;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
ret = ieee80211_if_change_type(sdata, type);
if (ret)
return ret;
@@ -235,12 +237,10 @@ static int ieee80211_change_iface(struct wiphy *wiphy,
if (!ifmgd->associated)
return 0;
- mutex_lock(&local->sta_mtx);
sta = sta_info_get(sdata, sdata->deflink.u.mgd.bssid);
if (sta)
drv_sta_set_4addr(local, sdata, &sta->sta,
params->use_4addr);
- mutex_unlock(&local->sta_mtx);
if (params->use_4addr)
ieee80211_send_4addr_nullfunc(local, sdata);
@@ -261,9 +261,9 @@ static int ieee80211_start_p2p_device(struct wiphy *wiphy,
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
int ret;
- mutex_lock(&sdata->local->chanctx_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
ret = ieee80211_check_combinations(sdata, NULL, 0, 0);
- mutex_unlock(&sdata->local->chanctx_mtx);
if (ret < 0)
return ret;
@@ -283,9 +283,9 @@ static int ieee80211_start_nan(struct wiphy *wiphy,
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
int ret;
- mutex_lock(&sdata->local->chanctx_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
ret = ieee80211_check_combinations(sdata, NULL, 0, 0);
- mutex_unlock(&sdata->local->chanctx_mtx);
if (ret < 0)
return ret;
@@ -452,13 +452,11 @@ static int ieee80211_set_tx(struct ieee80211_sub_if_data *sdata,
if (sta->ptk_idx == key_idx)
return 0;
- mutex_lock(&local->key_mtx);
- key = key_mtx_dereference(local, sta->ptk[key_idx]);
+ key = wiphy_dereference(local->hw.wiphy, sta->ptk[key_idx]);
if (key && key->conf.flags & IEEE80211_KEY_FLAG_NO_AUTO_TX)
ret = ieee80211_set_tx_key(key);
- mutex_unlock(&local->key_mtx);
return ret;
}
@@ -474,6 +472,8 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
struct ieee80211_key *key;
int err;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!ieee80211_sdata_running(sdata))
return -ENETDOWN;
@@ -510,8 +510,6 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
if (params->mode == NL80211_KEY_NO_TX)
key->conf.flags |= IEEE80211_KEY_FLAG_NO_AUTO_TX;
- mutex_lock(&local->sta_mtx);
-
if (mac_addr) {
sta = sta_info_get_bss(sdata, mac_addr);
/*
@@ -526,8 +524,7 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
*/
if (!sta || !test_sta_flag(sta, WLAN_STA_ASSOC)) {
ieee80211_key_free_unused(key);
- err = -ENOENT;
- goto out_unlock;
+ return -ENOENT;
}
}
@@ -570,9 +567,6 @@ static int ieee80211_add_key(struct wiphy *wiphy, struct net_device *dev,
if (err == -EALREADY)
err = 0;
- out_unlock:
- mutex_unlock(&local->sta_mtx);
-
return err;
}
@@ -585,8 +579,7 @@ ieee80211_lookup_key(struct ieee80211_sub_if_data *sdata, int link_id,
struct ieee80211_key *key;
if (link_id >= 0) {
- link = rcu_dereference_check(sdata->link[link_id],
- lockdep_is_held(&sdata->wdev.mtx));
+ link = sdata_dereference(sdata->link[link_id], sdata);
if (!link)
return NULL;
}
@@ -601,7 +594,7 @@ ieee80211_lookup_key(struct ieee80211_sub_if_data *sdata, int link_id,
if (link_id >= 0) {
link_sta = rcu_dereference_check(sta->link[link_id],
- lockdep_is_held(&local->sta_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (!link_sta)
return NULL;
} else {
@@ -609,30 +602,29 @@ ieee80211_lookup_key(struct ieee80211_sub_if_data *sdata, int link_id,
}
if (pairwise && key_idx < NUM_DEFAULT_KEYS)
- return rcu_dereference_check_key_mtx(local,
- sta->ptk[key_idx]);
+ return wiphy_dereference(local->hw.wiphy,
+ sta->ptk[key_idx]);
if (!pairwise &&
key_idx < NUM_DEFAULT_KEYS +
NUM_DEFAULT_MGMT_KEYS +
NUM_DEFAULT_BEACON_KEYS)
- return rcu_dereference_check_key_mtx(local,
- link_sta->gtk[key_idx]);
+ return wiphy_dereference(local->hw.wiphy,
+ link_sta->gtk[key_idx]);
return NULL;
}
if (pairwise && key_idx < NUM_DEFAULT_KEYS)
- return rcu_dereference_check_key_mtx(local,
- sdata->keys[key_idx]);
+ return wiphy_dereference(local->hw.wiphy, sdata->keys[key_idx]);
- key = rcu_dereference_check_key_mtx(local, link->gtk[key_idx]);
+ key = wiphy_dereference(local->hw.wiphy, link->gtk[key_idx]);
if (key)
return key;
/* or maybe it was a WEP key */
if (key_idx < NUM_DEFAULT_KEYS)
- return rcu_dereference_check_key_mtx(local, sdata->keys[key_idx]);
+ return wiphy_dereference(local->hw.wiphy, sdata->keys[key_idx]);
return NULL;
}
@@ -644,25 +636,16 @@ static int ieee80211_del_key(struct wiphy *wiphy, struct net_device *dev,
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
struct ieee80211_local *local = sdata->local;
struct ieee80211_key *key;
- int ret;
- mutex_lock(&local->sta_mtx);
- mutex_lock(&local->key_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
key = ieee80211_lookup_key(sdata, link_id, key_idx, pairwise, mac_addr);
- if (!key) {
- ret = -ENOENT;
- goto out_unlock;
- }
+ if (!key)
+ return -ENOENT;
ieee80211_key_free(key, sdata->vif.type == NL80211_IFTYPE_STATION);
- ret = 0;
- out_unlock:
- mutex_unlock(&local->key_mtx);
- mutex_unlock(&local->sta_mtx);
-
- return ret;
+ return 0;
}
static int ieee80211_get_key(struct wiphy *wiphy, struct net_device *dev,
@@ -833,15 +816,11 @@ void sta_set_rate_info_tx(struct sta_info *sta,
rinfo->nss = ieee80211_rate_get_vht_nss(rate);
} else {
struct ieee80211_supported_band *sband;
- int shift = ieee80211_vif_get_shift(&sta->sdata->vif);
- u16 brate;
sband = ieee80211_get_sband(sta->sdata);
WARN_ON_ONCE(sband && !sband->bitrates);
- if (sband && sband->bitrates) {
- brate = sband->bitrates[rate->idx].bitrate;
- rinfo->legacy = DIV_ROUND_UP(brate, 1 << shift);
- }
+ if (sband && sband->bitrates)
+ rinfo->legacy = sband->bitrates[rate->idx].bitrate;
}
if (rate->flags & IEEE80211_TX_RC_40_MHZ_WIDTH)
rinfo->bw = RATE_INFO_BW_40;
@@ -863,7 +842,7 @@ static int ieee80211_dump_station(struct wiphy *wiphy, struct net_device *dev,
struct sta_info *sta;
int ret = -ENOENT;
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
sta = sta_info_get_by_idx(sdata, idx);
if (sta) {
@@ -872,8 +851,6 @@ static int ieee80211_dump_station(struct wiphy *wiphy, struct net_device *dev,
sta_set_sinfo(sta, sinfo, true);
}
- mutex_unlock(&local->sta_mtx);
-
return ret;
}
@@ -893,7 +870,7 @@ static int ieee80211_get_station(struct wiphy *wiphy, struct net_device *dev,
struct sta_info *sta;
int ret = -ENOENT;
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
sta = sta_info_get_bss(sdata, mac);
if (sta) {
@@ -901,8 +878,6 @@ static int ieee80211_get_station(struct wiphy *wiphy, struct net_device *dev,
sta_set_sinfo(sta, sinfo, true);
}
- mutex_unlock(&local->sta_mtx);
-
return ret;
}
@@ -913,6 +888,8 @@ static int ieee80211_set_monitor_channel(struct wiphy *wiphy,
struct ieee80211_sub_if_data *sdata;
int ret = 0;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (cfg80211_chandef_identical(&local->monitor_chandef, chandef))
return 0;
@@ -920,22 +897,16 @@ static int ieee80211_set_monitor_channel(struct wiphy *wiphy,
sdata = wiphy_dereference(local->hw.wiphy,
local->monitor_sdata);
if (sdata) {
- sdata_lock(sdata);
- mutex_lock(&local->mtx);
ieee80211_link_release_channel(&sdata->deflink);
ret = ieee80211_link_use_channel(&sdata->deflink,
chandef,
IEEE80211_CHANCTX_EXCLUSIVE);
- mutex_unlock(&local->mtx);
- sdata_unlock(sdata);
}
} else {
- mutex_lock(&local->mtx);
if (local->open_count == local->monitors) {
local->_oper_chandef = *chandef;
ieee80211_hw_config(local, 0);
}
- mutex_unlock(&local->mtx);
}
if (ret == 0)
@@ -987,25 +958,29 @@ static int ieee80211_set_fils_discovery(struct ieee80211_sub_if_data *sdata,
struct fils_discovery_data *new, *old = NULL;
struct ieee80211_fils_discovery *fd;
- if (!params->tmpl || !params->tmpl_len)
- return -EINVAL;
+ if (!params->update)
+ return 0;
fd = &link_conf->fils_discovery;
fd->min_interval = params->min_interval;
fd->max_interval = params->max_interval;
old = sdata_dereference(link->u.ap.fils_discovery, sdata);
- new = kzalloc(sizeof(*new) + params->tmpl_len, GFP_KERNEL);
- if (!new)
- return -ENOMEM;
- new->len = params->tmpl_len;
- memcpy(new->data, params->tmpl, params->tmpl_len);
- rcu_assign_pointer(link->u.ap.fils_discovery, new);
-
if (old)
kfree_rcu(old, rcu_head);
- return 0;
+ if (params->tmpl && params->tmpl_len) {
+ new = kzalloc(sizeof(*new) + params->tmpl_len, GFP_KERNEL);
+ if (!new)
+ return -ENOMEM;
+ new->len = params->tmpl_len;
+ memcpy(new->data, params->tmpl, params->tmpl_len);
+ rcu_assign_pointer(link->u.ap.fils_discovery, new);
+ } else {
+ RCU_INIT_POINTER(link->u.ap.fils_discovery, NULL);
+ }
+
+ return BSS_CHANGED_FILS_DISCOVERY;
}
static int
@@ -1016,23 +991,27 @@ ieee80211_set_unsol_bcast_probe_resp(struct ieee80211_sub_if_data *sdata,
{
struct unsol_bcast_probe_resp_data *new, *old = NULL;
- if (!params->tmpl || !params->tmpl_len)
- return -EINVAL;
+ if (!params->update)
+ return 0;
- old = sdata_dereference(link->u.ap.unsol_bcast_probe_resp, sdata);
- new = kzalloc(sizeof(*new) + params->tmpl_len, GFP_KERNEL);
- if (!new)
- return -ENOMEM;
- new->len = params->tmpl_len;
- memcpy(new->data, params->tmpl, params->tmpl_len);
- rcu_assign_pointer(link->u.ap.unsol_bcast_probe_resp, new);
+ link_conf->unsol_bcast_probe_resp_interval = params->interval;
+ old = sdata_dereference(link->u.ap.unsol_bcast_probe_resp, sdata);
if (old)
kfree_rcu(old, rcu_head);
- link_conf->unsol_bcast_probe_resp_interval = params->interval;
+ if (params->tmpl && params->tmpl_len) {
+ new = kzalloc(sizeof(*new) + params->tmpl_len, GFP_KERNEL);
+ if (!new)
+ return -ENOMEM;
+ new->len = params->tmpl_len;
+ memcpy(new->data, params->tmpl, params->tmpl_len);
+ rcu_assign_pointer(link->u.ap.unsol_bcast_probe_resp, new);
+ } else {
+ RCU_INIT_POINTER(link->u.ap.unsol_bcast_probe_resp, NULL);
+ }
- return 0;
+ return BSS_CHANGED_UNSOL_BCAST_PROBE_RESP;
}
static int ieee80211_set_ftm_responder_params(
@@ -1278,6 +1257,8 @@ static int ieee80211_start_ap(struct wiphy *wiphy, struct net_device *dev,
struct ieee80211_link_data *link;
struct ieee80211_bss_conf *link_conf;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
link = sdata_dereference(sdata->link[link_id], sdata);
if (!link)
return -ENOLINK;
@@ -1387,12 +1368,10 @@ static int ieee80211_start_ap(struct wiphy *wiphy, struct net_device *dev,
return err;
}
- mutex_lock(&local->mtx);
err = ieee80211_link_use_channel(link, &params->chandef,
IEEE80211_CHANCTX_SHARED);
if (!err)
ieee80211_link_copy_chanctx_to_vlans(link, false);
- mutex_unlock(&local->mtx);
if (err) {
link_conf->beacon_int = prev_beacon_int;
return err;
@@ -1463,23 +1442,18 @@ static int ieee80211_start_ap(struct wiphy *wiphy, struct net_device *dev,
if (err < 0)
goto error;
- if (params->fils_discovery.max_interval) {
- err = ieee80211_set_fils_discovery(sdata,
- &params->fils_discovery,
- link, link_conf);
- if (err < 0)
- goto error;
- changed |= BSS_CHANGED_FILS_DISCOVERY;
- }
+ err = ieee80211_set_fils_discovery(sdata, &params->fils_discovery,
+ link, link_conf);
+ if (err < 0)
+ goto error;
+ changed |= err;
- if (params->unsol_bcast_probe_resp.interval) {
- err = ieee80211_set_unsol_bcast_probe_resp(sdata,
- &params->unsol_bcast_probe_resp,
- link, link_conf);
- if (err < 0)
- goto error;
- changed |= BSS_CHANGED_UNSOL_BCAST_PROBE_RESP;
- }
+ err = ieee80211_set_unsol_bcast_probe_resp(sdata,
+ &params->unsol_bcast_probe_resp,
+ link, link_conf);
+ if (err < 0)
+ goto error;
+ changed |= err;
err = drv_start_ap(sdata->local, sdata, link_conf);
if (err) {
@@ -1503,26 +1477,26 @@ static int ieee80211_start_ap(struct wiphy *wiphy, struct net_device *dev,
return 0;
error:
- mutex_lock(&local->mtx);
ieee80211_link_release_channel(link);
- mutex_unlock(&local->mtx);
return err;
}
static int ieee80211_change_beacon(struct wiphy *wiphy, struct net_device *dev,
- struct cfg80211_beacon_data *params)
+ struct cfg80211_ap_update *params)
+
{
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
struct ieee80211_link_data *link;
+ struct cfg80211_beacon_data *beacon = &params->beacon;
struct beacon_data *old;
int err;
struct ieee80211_bss_conf *link_conf;
u64 changed = 0;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(wiphy);
- link = sdata_dereference(sdata->link[params->link_id], sdata);
+ link = sdata_dereference(sdata->link[beacon->link_id], sdata);
if (!link)
return -ENOLINK;
@@ -1538,14 +1512,27 @@ static int ieee80211_change_beacon(struct wiphy *wiphy, struct net_device *dev,
if (!old)
return -ENOENT;
- err = ieee80211_assign_beacon(sdata, link, params, NULL, NULL,
+ err = ieee80211_assign_beacon(sdata, link, beacon, NULL, NULL,
&changed);
if (err < 0)
return err;
- if (params->he_bss_color_valid &&
- params->he_bss_color.enabled != link_conf->he_bss_color.enabled) {
- link_conf->he_bss_color.enabled = params->he_bss_color.enabled;
+ err = ieee80211_set_fils_discovery(sdata, &params->fils_discovery,
+ link, link_conf);
+ if (err < 0)
+ return err;
+ changed |= err;
+
+ err = ieee80211_set_unsol_bcast_probe_resp(sdata,
+ &params->unsol_bcast_probe_resp,
+ link, link_conf);
+ if (err < 0)
+ return err;
+ changed |= err;
+
+ if (beacon->he_bss_color_valid &&
+ beacon->he_bss_color.enabled != link_conf->he_bss_color.enabled) {
+ link_conf->he_bss_color.enabled = beacon->he_bss_color.enabled;
changed |= BSS_CHANGED_HE_BSS_COLOR;
}
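/*
 * Editor's illustrative sketch, not part of the patch above:
 * ieee80211_set_fils_discovery() and ieee80211_set_unsol_bcast_probe_resp()
 * now return a negative errno, 0 when there is nothing to update, or a
 * positive BSS_CHANGED_* bit, so both ieee80211_start_ap() and
 * ieee80211_change_beacon() can fold the result into "changed" directly.
 * my_set_template()/my_caller() are hypothetical and only mirror that
 * calling convention.
 */
#include <linux/errno.h>
#include <net/mac80211.h>

static int my_set_template(bool update, bool alloc_failed)
{
	if (!update)
		return 0;			/* nothing requested */
	if (alloc_failed)
		return -ENOMEM;			/* plain error */
	return BSS_CHANGED_FILS_DISCOVERY;	/* positive "changed" bit */
}

static int my_caller(u64 *changed)
{
	int err = my_set_template(true, false);

	if (err < 0)
		return err;
	*changed |= err;
	return 0;
}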
@@ -1579,7 +1566,7 @@ static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev,
sdata_dereference(sdata->link[link_id], sdata);
struct ieee80211_bss_conf *link_conf = link->conf;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(local->hw.wiphy);
old_beacon = sdata_dereference(link->u.ap.beacon, sdata);
if (!old_beacon)
@@ -1593,7 +1580,6 @@ static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev,
sdata);
/* abort any running channel switch or color change */
- mutex_lock(&local->mtx);
link_conf->csa_active = false;
link_conf->color_change_active = false;
if (link->csa_block_tx) {
@@ -1602,8 +1588,6 @@ static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev,
link->csa_block_tx = false;
}
- mutex_unlock(&local->mtx);
-
ieee80211_free_next_beacon(link);
/* turn off carrier for this interface and dependent VLANs */
@@ -1646,7 +1630,7 @@ static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev,
if (sdata->wdev.cac_started) {
chandef = link_conf->chandef;
- cancel_delayed_work_sync(&link->dfs_cac_timer_work);
+ wiphy_delayed_work_cancel(wiphy, &link->dfs_cac_timer_work);
cfg80211_cac_event(sdata->dev, &chandef,
NL80211_RADAR_CAC_ABORTED,
GFP_KERNEL);
@@ -1658,10 +1642,8 @@ static int ieee80211_stop_ap(struct wiphy *wiphy, struct net_device *dev,
local->total_ps_buffered -= skb_queue_len(&sdata->u.ap.ps.bc_buf);
ieee80211_purge_tx_queue(&local->hw, &sdata->u.ap.ps.bc_buf);
- mutex_lock(&local->mtx);
ieee80211_link_copy_chanctx_to_vlans(link, true);
ieee80211_link_release_channel(link);
- mutex_unlock(&local->mtx);
return 0;
}
@@ -1803,7 +1785,7 @@ static int sta_link_apply_parameters(struct ieee80211_local *local,
sdata_dereference(sdata->link[link_id], sdata);
struct link_sta_info *link_sta =
rcu_dereference_protected(sta->link[link_id],
- lockdep_is_held(&local->sta_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
/*
* If there are no changes, then accept a link that doesn't exist,
@@ -2038,6 +2020,8 @@ static int ieee80211_add_station(struct wiphy *wiphy, struct net_device *dev,
struct ieee80211_sub_if_data *sdata;
int err;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (params->vlan) {
sdata = IEEE80211_DEV_TO_SUB_IF(params->vlan);
@@ -2081,9 +2065,7 @@ static int ieee80211_add_station(struct wiphy *wiphy, struct net_device *dev,
* visible yet), sta_apply_parameters (and inner functions) require
* the mutex due to other paths.
*/
- mutex_lock(&local->sta_mtx);
err = sta_apply_parameters(local, sta, params);
- mutex_unlock(&local->sta_mtx);
if (err) {
sta_info_free(local, sta);
return err;
@@ -2126,13 +2108,11 @@ static int ieee80211_change_station(struct wiphy *wiphy,
enum cfg80211_station_type statype;
int err;
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
sta = sta_info_get_bss(sdata, mac);
- if (!sta) {
- err = -ENOENT;
- goto out_err;
- }
+ if (!sta)
+ return -ENOENT;
switch (sdata->vif.type) {
case NL80211_IFTYPE_MESH_POINT:
@@ -2162,22 +2142,19 @@ static int ieee80211_change_station(struct wiphy *wiphy,
statype = CFG80211_STA_AP_CLIENT_UNASSOC;
break;
default:
- err = -EOPNOTSUPP;
- goto out_err;
+ return -EOPNOTSUPP;
}
err = cfg80211_check_station_change(wiphy, params, statype);
if (err)
- goto out_err;
+ return err;
if (params->vlan && params->vlan != sta->sdata->dev) {
vlansdata = IEEE80211_DEV_TO_SUB_IF(params->vlan);
if (params->vlan->ieee80211_ptr->use_4addr) {
- if (vlansdata->u.vlan.sta) {
- err = -EBUSY;
- goto out_err;
- }
+ if (vlansdata->u.vlan.sta)
+ return -EBUSY;
rcu_assign_pointer(vlansdata->u.vlan.sta, sta);
__ieee80211_check_fast_rx_iface(vlansdata);
@@ -2203,18 +2180,9 @@ static int ieee80211_change_station(struct wiphy *wiphy,
}
}
- /* we use sta_info_get_bss() so this might be different */
- if (sdata != sta->sdata) {
- mutex_lock_nested(&sta->sdata->wdev.mtx, 1);
- err = sta_apply_parameters(local, sta, params);
- mutex_unlock(&sta->sdata->wdev.mtx);
- } else {
- err = sta_apply_parameters(local, sta, params);
- }
+ err = sta_apply_parameters(local, sta, params);
if (err)
- goto out_err;
-
- mutex_unlock(&local->sta_mtx);
+ return err;
if (sdata->vif.type == NL80211_IFTYPE_STATION &&
params->sta_flags_mask & BIT(NL80211_STA_FLAG_AUTHORIZED)) {
@@ -2223,9 +2191,6 @@ static int ieee80211_change_station(struct wiphy *wiphy,
}
return 0;
-out_err:
- mutex_unlock(&local->sta_mtx);
- return err;
}
#ifdef CONFIG_MAC80211_MESH
@@ -2638,6 +2603,8 @@ static int ieee80211_join_mesh(struct wiphy *wiphy, struct net_device *dev,
struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
int err;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
memcpy(&ifmsh->mshcfg, conf, sizeof(struct mesh_config));
err = copy_mesh_setup(ifmsh, setup);
if (err)
@@ -2649,10 +2616,8 @@ static int ieee80211_join_mesh(struct wiphy *wiphy, struct net_device *dev,
sdata->deflink.smps_mode = IEEE80211_SMPS_OFF;
sdata->deflink.needed_rx_chains = sdata->local->rx_chains;
- mutex_lock(&sdata->local->mtx);
err = ieee80211_link_use_channel(&sdata->deflink, &setup->chandef,
IEEE80211_CHANCTX_SHARED);
- mutex_unlock(&sdata->local->mtx);
if (err)
return err;
@@ -2663,11 +2628,11 @@ static int ieee80211_leave_mesh(struct wiphy *wiphy, struct net_device *dev)
{
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
ieee80211_stop_mesh(sdata);
- mutex_lock(&sdata->local->mtx);
ieee80211_link_release_channel(&sdata->deflink);
kfree(sdata->u.mesh.ie);
- mutex_unlock(&sdata->local->mtx);
return 0;
}
@@ -3025,6 +2990,8 @@ static int ieee80211_set_tx_power(struct wiphy *wiphy,
bool update_txp_type = false;
bool has_monitor = false;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (wdev) {
sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
@@ -3072,7 +3039,6 @@ static int ieee80211_set_tx_power(struct wiphy *wiphy,
break;
}
- mutex_lock(&local->iflist_mtx);
list_for_each_entry(sdata, &local->interfaces, list) {
if (sdata->vif.type == NL80211_IFTYPE_MONITOR) {
has_monitor = true;
@@ -3088,7 +3054,6 @@ static int ieee80211_set_tx_power(struct wiphy *wiphy,
continue;
ieee80211_recalc_txpower(sdata, update_txp_type);
}
- mutex_unlock(&local->iflist_mtx);
if (has_monitor) {
sdata = wiphy_dereference(local->hw.wiphy,
@@ -3121,6 +3086,10 @@ static int ieee80211_get_tx_power(struct wiphy *wiphy,
else
*dbm = sdata->vif.bss_conf.txpower;
+ /* INT_MIN indicates no power level was set yet */
+ if (*dbm == INT_MIN)
+ return -EINVAL;
+
return 0;
}
@@ -3177,14 +3146,24 @@ int __ieee80211_request_smps_mgd(struct ieee80211_sub_if_data *sdata,
struct sta_info *sta;
bool tdls_peer_found = false;
- lockdep_assert_held(&sdata->wdev.mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (WARN_ON_ONCE(sdata->vif.type != NL80211_IFTYPE_STATION))
return -EINVAL;
+ if (ieee80211_vif_is_mld(&sdata->vif) &&
+ !(sdata->vif.active_links & BIT(link->link_id)))
+ return 0;
+
old_req = link->u.mgd.req_smps;
link->u.mgd.req_smps = smps_mode;
+ /* The driver indicated that EML is enabled for the interface, which
+ * implies that SMPS flows towards the AP should be stopped.
+ */
+ if (sdata->vif.driver_flags & IEEE80211_VIF_EML_ACTIVE)
+ return 0;
+
if (old_req == smps_mode &&
smps_mode != IEEE80211_SMPS_AUTOMATIC)
return 0;
@@ -3198,7 +3177,7 @@ int __ieee80211_request_smps_mgd(struct ieee80211_sub_if_data *sdata,
link->conf->chandef.width == NL80211_CHAN_WIDTH_20_NOHT)
return 0;
- ap = link->u.mgd.bssid;
+ ap = sdata->vif.cfg.ap_addr;
rcu_read_lock();
list_for_each_entry_rcu(sta, &sdata->local->sta_list, list) {
@@ -3220,7 +3199,9 @@ int __ieee80211_request_smps_mgd(struct ieee80211_sub_if_data *sdata,
/* send SM PS frame to AP */
err = ieee80211_send_smps_action(sdata, smps_mode,
- ap, ap);
+ ap, ap,
+ ieee80211_vif_is_mld(&sdata->vif) ?
+ link->link_id : -1);
if (err)
link->u.mgd.req_smps = old_req;
else if (smps_mode != IEEE80211_SMPS_OFF && tdls_peer_found)
@@ -3250,7 +3231,6 @@ static int ieee80211_set_power_mgmt(struct wiphy *wiphy, struct net_device *dev,
local->dynamic_ps_forced_timeout = timeout;
/* no change, but if automatic follow powersave */
- sdata_lock(sdata);
for (link_id = 0; link_id < ARRAY_SIZE(sdata->link); link_id++) {
struct ieee80211_link_data *link;
@@ -3261,7 +3241,6 @@ static int ieee80211_set_power_mgmt(struct wiphy *wiphy, struct net_device *dev,
__ieee80211_request_smps_mgd(sdata, link,
link->u.mgd.req_smps);
}
- sdata_unlock(sdata);
if (ieee80211_hw_check(&local->hw, SUPPORTS_DYNAMIC_PS))
ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS);
@@ -3407,7 +3386,8 @@ static int ieee80211_start_radar_detection(struct wiphy *wiphy,
struct ieee80211_local *local = sdata->local;
int err;
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!list_empty(&local->roc_list) || local->scanning) {
err = -EBUSY;
goto out_unlock;
@@ -3422,12 +3402,10 @@ static int ieee80211_start_radar_detection(struct wiphy *wiphy,
if (err)
goto out_unlock;
- ieee80211_queue_delayed_work(&sdata->local->hw,
- &sdata->deflink.dfs_cac_timer_work,
- msecs_to_jiffies(cac_time_ms));
+ wiphy_delayed_work_queue(wiphy, &sdata->deflink.dfs_cac_timer_work,
+ msecs_to_jiffies(cac_time_ms));
out_unlock:
- mutex_unlock(&local->mtx);
return err;
}
@@ -3437,20 +3415,21 @@ static void ieee80211_end_cac(struct wiphy *wiphy,
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
struct ieee80211_local *local = sdata->local;
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
list_for_each_entry(sdata, &local->interfaces, list) {
/* it might be waiting for the local->mtx, but then
* by the time it gets it, sdata->wdev.cac_started
* will no longer be true
*/
- cancel_delayed_work(&sdata->deflink.dfs_cac_timer_work);
+ wiphy_delayed_work_cancel(wiphy,
+ &sdata->deflink.dfs_cac_timer_work);
if (sdata->wdev.cac_started) {
ieee80211_link_release_channel(&sdata->deflink);
sdata->wdev.cac_started = false;
}
}
- mutex_unlock(&local->mtx);
}
static struct cfg80211_beacon_data *
@@ -3582,11 +3561,11 @@ void ieee80211_csa_finish(struct ieee80211_vif *vif)
if (iter == sdata || iter->vif.mbssid_tx_vif != vif)
continue;
- ieee80211_queue_work(&iter->local->hw,
- &iter->deflink.csa_finalize_work);
+ wiphy_work_queue(iter->local->hw.wiphy,
+ &iter->deflink.csa_finalize_work);
}
}
- ieee80211_queue_work(&local->hw, &sdata->deflink.csa_finalize_work);
+ wiphy_work_queue(local->hw.wiphy, &sdata->deflink.csa_finalize_work);
rcu_read_unlock();
}
@@ -3642,15 +3621,14 @@ static int ieee80211_set_after_csa_beacon(struct ieee80211_sub_if_data *sdata,
return 0;
}
-static int __ieee80211_csa_finalize(struct ieee80211_sub_if_data *sdata)
+static int __ieee80211_csa_finalize(struct ieee80211_link_data *link_data)
{
+ struct ieee80211_sub_if_data *sdata = link_data->sdata;
struct ieee80211_local *local = sdata->local;
u64 changed = 0;
int err;
- sdata_assert_lock(sdata);
- lockdep_assert_held(&local->mtx);
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/*
* using reservation isn't immediate as it may be deferred until later
@@ -3659,20 +3637,20 @@ static int __ieee80211_csa_finalize(struct ieee80211_sub_if_data *sdata)
* completed successfully
*/
- if (sdata->deflink.reserved_chanctx) {
+ if (link_data->reserved_chanctx) {
/*
* with multi-vif csa driver may call ieee80211_csa_finish()
* many times while waiting for other interfaces to use their
* reservations
*/
- if (sdata->deflink.reserved_ready)
+ if (link_data->reserved_ready)
return 0;
return ieee80211_link_use_reserved_context(&sdata->deflink);
}
- if (!cfg80211_chandef_identical(&sdata->vif.bss_conf.chandef,
- &sdata->deflink.csa_chandef))
+ if (!cfg80211_chandef_identical(&link_data->conf->chandef,
+ &link_data->csa_chandef))
return -EINVAL;
sdata->vif.bss_conf.csa_active = false;
@@ -3687,57 +3665,53 @@ static int __ieee80211_csa_finalize(struct ieee80211_sub_if_data *sdata)
changed |= BSS_CHANGED_EHT_PUNCTURING;
}
- ieee80211_link_info_change_notify(sdata, &sdata->deflink, changed);
+ ieee80211_link_info_change_notify(sdata, link_data, changed);
- if (sdata->deflink.csa_block_tx) {
+ if (link_data->csa_block_tx) {
ieee80211_wake_vif_queues(local, sdata,
IEEE80211_QUEUE_STOP_REASON_CSA);
- sdata->deflink.csa_block_tx = false;
+ link_data->csa_block_tx = false;
}
- err = drv_post_channel_switch(sdata);
+ err = drv_post_channel_switch(link_data);
if (err)
return err;
- cfg80211_ch_switch_notify(sdata->dev, &sdata->deflink.csa_chandef, 0,
- sdata->vif.bss_conf.eht_puncturing);
+ cfg80211_ch_switch_notify(sdata->dev, &link_data->csa_chandef,
+ link_data->link_id,
+ link_data->conf->eht_puncturing);
return 0;
}
-static void ieee80211_csa_finalize(struct ieee80211_sub_if_data *sdata)
+static void ieee80211_csa_finalize(struct ieee80211_link_data *link_data)
{
- if (__ieee80211_csa_finalize(sdata)) {
+ struct ieee80211_sub_if_data *sdata = link_data->sdata;
+
+ if (__ieee80211_csa_finalize(link_data)) {
sdata_info(sdata, "failed to finalize CSA, disconnecting\n");
cfg80211_stop_iface(sdata->local->hw.wiphy, &sdata->wdev,
GFP_KERNEL);
}
}
-void ieee80211_csa_finalize_work(struct work_struct *work)
+void ieee80211_csa_finalize_work(struct wiphy *wiphy, struct wiphy_work *work)
{
- struct ieee80211_sub_if_data *sdata =
- container_of(work, struct ieee80211_sub_if_data,
- deflink.csa_finalize_work);
+ struct ieee80211_link_data *link =
+ container_of(work, struct ieee80211_link_data, csa_finalize_work);
+ struct ieee80211_sub_if_data *sdata = link->sdata;
struct ieee80211_local *local = sdata->local;
- sdata_lock(sdata);
- mutex_lock(&local->mtx);
- mutex_lock(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/* AP might have been stopped while waiting for the lock. */
- if (!sdata->vif.bss_conf.csa_active)
- goto unlock;
+ if (!link->conf->csa_active)
+ return;
if (!ieee80211_sdata_running(sdata))
- goto unlock;
-
- ieee80211_csa_finalize(sdata);
+ return;
-unlock:
- mutex_unlock(&local->chanctx_mtx);
- mutex_unlock(&local->mtx);
- sdata_unlock(sdata);
+ ieee80211_csa_finalize(link);
}
static int ieee80211_set_csa_beacon(struct ieee80211_sub_if_data *sdata,
@@ -3893,8 +3867,7 @@ __ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
u64 changed = 0;
int err;
- sdata_assert_lock(sdata);
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!list_empty(&local->roc_list) || local->scanning)
return -EBUSY;
@@ -3910,9 +3883,8 @@ __ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
if (sdata->vif.bss_conf.csa_active)
return -EBUSY;
- mutex_lock(&local->chanctx_mtx);
conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (!conf) {
err = -EBUSY;
goto out;
@@ -3982,11 +3954,10 @@ __ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
drv_channel_switch_beacon(sdata, &params->chandef);
} else {
/* if the beacon didn't change, we can finalize immediately */
- ieee80211_csa_finalize(sdata);
+ ieee80211_csa_finalize(&sdata->deflink);
}
out:
- mutex_unlock(&local->chanctx_mtx);
return err;
}
@@ -3995,18 +3966,15 @@ int ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
{
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
struct ieee80211_local *local = sdata->local;
- int err;
- mutex_lock(&local->mtx);
- err = __ieee80211_channel_switch(wiphy, dev, params);
- mutex_unlock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
- return err;
+ return __ieee80211_channel_switch(wiphy, dev, params);
}
u64 ieee80211_mgmt_tx_cookie(struct ieee80211_local *local)
{
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
local->roc_cookie_counter++;
@@ -4038,7 +4006,8 @@ int ieee80211_attach_ack_skb(struct ieee80211_local *local, struct sk_buff *skb,
return -ENOMEM;
}
- IEEE80211_SKB_CB(skb)->ack_frame_id = id;
+ IEEE80211_SKB_CB(skb)->status_data_idr = 1;
+ IEEE80211_SKB_CB(skb)->status_data = id;
*cookie = ieee80211_mgmt_tx_cookie(local);
IEEE80211_SKB_CB(ack_skb)->ack.cookie = *cookie;
@@ -4088,11 +4057,17 @@ ieee80211_update_mgmt_frame_registrations(struct wiphy *wiphy,
static int ieee80211_set_antenna(struct wiphy *wiphy, u32 tx_ant, u32 rx_ant)
{
struct ieee80211_local *local = wiphy_priv(wiphy);
+ int ret;
if (local->started)
return -EOPNOTSUPP;
- return drv_set_antenna(local, tx_ant, rx_ant);
+ ret = drv_set_antenna(local, tx_ant, rx_ant);
+ if (ret)
+ return ret;
+
+ local->rx_chains = hweight8(rx_ant);
+ return 0;
}
static int ieee80211_get_antenna(struct wiphy *wiphy, u32 *tx_ant, u32 *rx_ant)
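/*
 * Editor's note with a sketch, not part of the patch above: the
 * ieee80211_set_antenna() hunk keeps local->rx_chains in sync with the RX
 * antenna bitmap via hweight8(), i.e. the chain count is the number of
 * selected antennas.  example_rx_chains() is a hypothetical helper showing
 * just that calculation (a 0x5 mask, antennas 0 and 2, gives 2 chains).
 */
#include <linux/bitops.h>
#include <linux/types.h>

static u8 example_rx_chains(u32 rx_ant)
{
	return hweight8(rx_ant);
}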
@@ -4134,7 +4109,7 @@ static int ieee80211_probe_client(struct wiphy *wiphy, struct net_device *dev,
int ret;
/* the lock is needed to assign the cookie later */
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
rcu_read_lock();
sta = sta_info_get_bss(sdata, peer);
@@ -4205,7 +4180,6 @@ static int ieee80211_probe_client(struct wiphy *wiphy, struct net_device *dev,
ret = 0;
unlock:
rcu_read_unlock();
- mutex_unlock(&local->mtx);
return ret;
}
@@ -4563,7 +4537,8 @@ static int ieee80211_set_tid_config(struct wiphy *wiphy,
{
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
struct sta_info *sta;
- int ret;
+
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (!sdata->local->ops->set_tid_config)
return -EOPNOTSUPP;
@@ -4571,17 +4546,11 @@ static int ieee80211_set_tid_config(struct wiphy *wiphy,
if (!tid_conf->peer)
return drv_set_tid_config(sdata->local, sdata, NULL, tid_conf);
- mutex_lock(&sdata->local->sta_mtx);
sta = sta_info_get_bss(sdata, tid_conf->peer);
- if (!sta) {
- mutex_unlock(&sdata->local->sta_mtx);
+ if (!sta)
return -ENOENT;
- }
- ret = drv_set_tid_config(sdata->local, sdata, &sta->sta, tid_conf);
- mutex_unlock(&sdata->local->sta_mtx);
-
- return ret;
+ return drv_set_tid_config(sdata->local, sdata, &sta->sta, tid_conf);
}
static int ieee80211_reset_tid_config(struct wiphy *wiphy,
@@ -4590,7 +4559,8 @@ static int ieee80211_reset_tid_config(struct wiphy *wiphy,
{
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
struct sta_info *sta;
- int ret;
+
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (!sdata->local->ops->reset_tid_config)
return -EOPNOTSUPP;
@@ -4598,17 +4568,11 @@ static int ieee80211_reset_tid_config(struct wiphy *wiphy,
if (!peer)
return drv_reset_tid_config(sdata->local, sdata, NULL, tids);
- mutex_lock(&sdata->local->sta_mtx);
sta = sta_info_get_bss(sdata, peer);
- if (!sta) {
- mutex_unlock(&sdata->local->sta_mtx);
+ if (!sta)
return -ENOENT;
- }
- ret = drv_reset_tid_config(sdata->local, sdata, &sta->sta, tids);
- mutex_unlock(&sdata->local->sta_mtx);
-
- return ret;
+ return drv_reset_tid_config(sdata->local, sdata, &sta->sta, tids);
}
static int ieee80211_set_sar_specs(struct wiphy *wiphy,
@@ -4694,6 +4658,8 @@ static void
ieee80211_color_change_bss_config_notify(struct ieee80211_sub_if_data *sdata,
u8 color, int enable, u64 changed)
{
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
sdata->vif.bss_conf.he_bss_color.color = color;
sdata->vif.bss_conf.he_bss_color.enabled = enable;
changed |= BSS_CHANGED_HE_BSS_COLOR;
@@ -4703,7 +4669,6 @@ ieee80211_color_change_bss_config_notify(struct ieee80211_sub_if_data *sdata,
if (!sdata->vif.bss_conf.nontransmitted && sdata->vif.mbssid_tx_vif) {
struct ieee80211_sub_if_data *child;
- mutex_lock(&sdata->local->iflist_mtx);
list_for_each_entry(child, &sdata->local->interfaces, list) {
if (child != sdata && child->vif.mbssid_tx_vif == &sdata->vif) {
child->vif.bss_conf.he_bss_color.color = color;
@@ -4713,7 +4678,6 @@ ieee80211_color_change_bss_config_notify(struct ieee80211_sub_if_data *sdata,
BSS_CHANGED_HE_BSS_COLOR);
}
}
- mutex_unlock(&sdata->local->iflist_mtx);
}
}
@@ -4723,8 +4687,7 @@ static int ieee80211_color_change_finalize(struct ieee80211_sub_if_data *sdata)
u64 changed = 0;
int err;
- sdata_assert_lock(sdata);
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
sdata->vif.bss_conf.color_change_active = false;
@@ -4742,28 +4705,24 @@ static int ieee80211_color_change_finalize(struct ieee80211_sub_if_data *sdata)
return 0;
}
-void ieee80211_color_change_finalize_work(struct work_struct *work)
+void ieee80211_color_change_finalize_work(struct wiphy *wiphy,
+ struct wiphy_work *work)
{
struct ieee80211_sub_if_data *sdata =
container_of(work, struct ieee80211_sub_if_data,
deflink.color_change_finalize_work);
struct ieee80211_local *local = sdata->local;
- sdata_lock(sdata);
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/* AP might have been stopped while waiting for the lock. */
if (!sdata->vif.bss_conf.color_change_active)
- goto unlock;
+ return;
if (!ieee80211_sdata_running(sdata))
- goto unlock;
+ return;
ieee80211_color_change_finalize(sdata);
-
-unlock:
- mutex_unlock(&local->mtx);
- sdata_unlock(sdata);
}
void ieee80211_color_collision_detection_work(struct work_struct *work)
@@ -4774,17 +4733,15 @@ void ieee80211_color_collision_detection_work(struct work_struct *work)
color_collision_detect_work);
struct ieee80211_sub_if_data *sdata = link->sdata;
- sdata_lock(sdata);
cfg80211_obss_color_collision_notify(sdata->dev, link->color_bitmap);
- sdata_unlock(sdata);
}
void ieee80211_color_change_finish(struct ieee80211_vif *vif)
{
struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
- ieee80211_queue_work(&sdata->local->hw,
- &sdata->deflink.color_change_finalize_work);
+ wiphy_work_queue(sdata->local->hw.wiphy,
+ &sdata->deflink.color_change_finalize_work);
}
EXPORT_SYMBOL_GPL(ieee80211_color_change_finish);
@@ -4820,13 +4777,11 @@ ieee80211_color_change(struct wiphy *wiphy, struct net_device *dev,
u64 changed = 0;
int err;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (sdata->vif.bss_conf.nontransmitted)
return -EINVAL;
- mutex_lock(&local->mtx);
-
/* don't allow another color change if one is already active or if csa
* is active
*/
@@ -4851,7 +4806,6 @@ ieee80211_color_change(struct wiphy *wiphy, struct net_device *dev,
ieee80211_color_change_finalize(sdata);
out:
- mutex_unlock(&local->mtx);
return err;
}
@@ -4873,16 +4827,13 @@ static int ieee80211_add_intf_link(struct wiphy *wiphy,
unsigned int link_id)
{
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
- int res;
+
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (wdev->use_4addr)
return -EOPNOTSUPP;
- mutex_lock(&sdata->local->mtx);
- res = ieee80211_vif_set_links(sdata, wdev->valid_links, 0);
- mutex_unlock(&sdata->local->mtx);
-
- return res;
+ return ieee80211_vif_set_links(sdata, wdev->valid_links, 0);
}
static void ieee80211_del_intf_link(struct wiphy *wiphy,
@@ -4891,9 +4842,9 @@ static void ieee80211_del_intf_link(struct wiphy *wiphy,
{
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
- mutex_lock(&sdata->local->mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
ieee80211_vif_set_links(sdata, wdev->valid_links, 0);
- mutex_unlock(&sdata->local->mtx);
}
static int sta_add_link_station(struct ieee80211_local *local,
@@ -4933,13 +4884,10 @@ ieee80211_add_link_station(struct wiphy *wiphy, struct net_device *dev,
{
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
struct ieee80211_local *local = wiphy_priv(wiphy);
- int ret;
- mutex_lock(&sdata->local->sta_mtx);
- ret = sta_add_link_station(local, sdata, params);
- mutex_unlock(&sdata->local->sta_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
- return ret;
+ return sta_add_link_station(local, sdata, params);
}
static int sta_mod_link_station(struct ieee80211_local *local,
@@ -4964,13 +4912,10 @@ ieee80211_mod_link_station(struct wiphy *wiphy, struct net_device *dev,
{
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
struct ieee80211_local *local = wiphy_priv(wiphy);
- int ret;
- mutex_lock(&sdata->local->sta_mtx);
- ret = sta_mod_link_station(local, sdata, params);
- mutex_unlock(&sdata->local->sta_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
- return ret;
+ return sta_mod_link_station(local, sdata, params);
}
static int sta_del_link_station(struct ieee80211_sub_if_data *sdata,
@@ -4999,13 +4944,10 @@ ieee80211_del_link_station(struct wiphy *wiphy, struct net_device *dev,
struct link_station_del_parameters *params)
{
struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
- int ret;
- mutex_lock(&sdata->local->sta_mtx);
- ret = sta_del_link_station(sdata, params);
- mutex_unlock(&sdata->local->sta_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
- return ret;
+ return sta_del_link_station(sdata, params);
}
static int ieee80211_set_hw_timestamp(struct wiphy *wiphy,
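/*
 * Editor's illustrative sketch, not part of the patch above: throughout
 * cfg.c the per-area mutexes (sta_mtx, key_mtx, chanctx_mtx, local->mtx)
 * are dropped because cfg80211 already calls these ops with wiphy->mtx
 * held; RCU-managed pointers are then read with wiphy_dereference(), which
 * is rcu_dereference_protected() keyed on that same mutex.  my_obj,
 * my_priv and my_get_obj are hypothetical names.
 */
#include <net/cfg80211.h>

struct my_obj {
	int dummy;
};

struct my_priv {
	struct wiphy *wiphy;
	struct my_obj __rcu *obj;
};

static struct my_obj *my_get_obj(struct my_priv *priv)
{
	/* valid because the cfg80211 op caller holds wiphy->mtx */
	lockdep_assert_wiphy(priv->wiphy);

	return wiphy_dereference(priv->wiphy, priv->obj);
}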
diff --git a/net/mac80211/chan.c b/net/mac80211/chan.c
index 68952752b599..1d928f29ad6f 100644
--- a/net/mac80211/chan.c
+++ b/net/mac80211/chan.c
@@ -18,7 +18,7 @@ static int ieee80211_chanctx_num_assigned(struct ieee80211_local *local,
struct ieee80211_link_data *link;
int num = 0;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(link, &ctx->assigned_links, assigned_chanctx_list)
num++;
@@ -32,7 +32,7 @@ static int ieee80211_chanctx_num_reserved(struct ieee80211_local *local,
struct ieee80211_link_data *link;
int num = 0;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(link, &ctx->reserved_links, reserved_chanctx_list)
num++;
@@ -52,7 +52,7 @@ static int ieee80211_num_chanctx(struct ieee80211_local *local)
struct ieee80211_chanctx *ctx;
int num = 0;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(ctx, &local->chanctx_list, list)
num++;
@@ -62,7 +62,8 @@ static int ieee80211_num_chanctx(struct ieee80211_local *local)
static bool ieee80211_can_create_new_chanctx(struct ieee80211_local *local)
{
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
return ieee80211_num_chanctx(local) < ieee80211_max_num_channels(local);
}
@@ -73,7 +74,7 @@ ieee80211_link_get_chanctx(struct ieee80211_link_data *link)
struct ieee80211_chanctx_conf *conf;
conf = rcu_dereference_protected(link->conf->chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (!conf)
return NULL;
@@ -87,7 +88,7 @@ ieee80211_chanctx_reserved_chandef(struct ieee80211_local *local,
{
struct ieee80211_link_data *link;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(link, &ctx->reserved_links,
reserved_chanctx_list) {
@@ -110,7 +111,7 @@ ieee80211_chanctx_non_reserved_chandef(struct ieee80211_local *local,
{
struct ieee80211_link_data *link;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(link, &ctx->assigned_links,
assigned_chanctx_list) {
@@ -136,7 +137,7 @@ ieee80211_chanctx_combined_chandef(struct ieee80211_local *local,
struct ieee80211_chanctx *ctx,
const struct cfg80211_chan_def *compat)
{
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
compat = ieee80211_chanctx_reserved_chandef(local, ctx, compat);
if (!compat)
@@ -154,7 +155,7 @@ ieee80211_chanctx_can_reserve_chandef(struct ieee80211_local *local,
struct ieee80211_chanctx *ctx,
const struct cfg80211_chan_def *def)
{
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (ieee80211_chanctx_combined_chandef(local, ctx, def))
return true;
@@ -173,7 +174,7 @@ ieee80211_find_reservation_chanctx(struct ieee80211_local *local,
{
struct ieee80211_chanctx *ctx;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (mode == IEEE80211_CHANCTX_EXCLUSIVE)
return NULL;
@@ -361,7 +362,7 @@ _ieee80211_recalc_chanctx_min_def(struct ieee80211_local *local,
enum nl80211_chan_width max_bw;
struct cfg80211_chan_def min_def;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/* don't optimize non-20MHz based and radar_enabled confs */
if (ctx->conf.def.width == NL80211_CHAN_WIDTH_5 ||
@@ -537,7 +538,7 @@ ieee80211_find_chanctx(struct ieee80211_local *local,
{
struct ieee80211_chanctx *ctx;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (mode == IEEE80211_CHANCTX_EXCLUSIVE)
return NULL;
@@ -572,7 +573,7 @@ bool ieee80211_is_radar_required(struct ieee80211_local *local)
{
struct ieee80211_sub_if_data *sdata;
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
rcu_read_lock();
list_for_each_entry_rcu(sdata, &local->interfaces, list) {
@@ -602,8 +603,7 @@ ieee80211_chanctx_radar_required(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata;
bool required = false;
- lockdep_assert_held(&local->chanctx_mtx);
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
rcu_read_lock();
list_for_each_entry_rcu(sdata, &local->interfaces, list) {
@@ -641,7 +641,7 @@ ieee80211_alloc_chanctx(struct ieee80211_local *local,
{
struct ieee80211_chanctx *ctx;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
ctx = kzalloc(sizeof(*ctx) + local->hw.chanctx_data_size, GFP_KERNEL);
if (!ctx)
@@ -665,8 +665,7 @@ static int ieee80211_add_chanctx(struct ieee80211_local *local,
u32 changed;
int err;
- lockdep_assert_held(&local->mtx);
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!local->use_chanctx)
local->hw.conf.radar_enabled = ctx->conf.radar_enabled;
@@ -698,8 +697,7 @@ ieee80211_new_chanctx(struct ieee80211_local *local,
struct ieee80211_chanctx *ctx;
int err;
- lockdep_assert_held(&local->mtx);
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
ctx = ieee80211_alloc_chanctx(local, chandef, mode);
if (!ctx)
@@ -718,7 +716,7 @@ ieee80211_new_chanctx(struct ieee80211_local *local,
static void ieee80211_del_chanctx(struct ieee80211_local *local,
struct ieee80211_chanctx *ctx)
{
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!local->use_chanctx) {
struct cfg80211_chan_def *chandef = &local->_oper_chandef;
@@ -753,7 +751,7 @@ static void ieee80211_del_chanctx(struct ieee80211_local *local,
static void ieee80211_free_chanctx(struct ieee80211_local *local,
struct ieee80211_chanctx *ctx)
{
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
WARN_ON_ONCE(ieee80211_chanctx_refcount(local, ctx) != 0);
@@ -770,7 +768,7 @@ void ieee80211_recalc_chanctx_chantype(struct ieee80211_local *local,
const struct cfg80211_chan_def *compat = NULL;
struct sta_info *sta;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
rcu_read_lock();
list_for_each_entry_rcu(sdata, &local->interfaces, list) {
@@ -833,9 +831,7 @@ static void ieee80211_recalc_radar_chanctx(struct ieee80211_local *local,
{
bool radar_enabled;
- lockdep_assert_held(&local->chanctx_mtx);
- /* for ieee80211_is_radar_required */
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
radar_enabled = ieee80211_chanctx_radar_required(local, chanctx);
@@ -865,7 +861,7 @@ static int ieee80211_assign_link_chanctx(struct ieee80211_link_data *link,
return -ENOTSUPP;
conf = rcu_dereference_protected(link->conf->chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (conf) {
curr_ctx = container_of(conf, struct ieee80211_chanctx, conf);
@@ -920,7 +916,7 @@ void ieee80211_recalc_smps_chanctx(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata;
u8 rx_chains_static, rx_chains_dynamic;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
rx_chains_static = 1;
rx_chains_dynamic = 1;
@@ -1023,7 +1019,7 @@ __ieee80211_link_copy_chanctx_to_vlans(struct ieee80211_link_data *link,
if (WARN_ON(sdata->vif.type != NL80211_IFTYPE_AP))
return;
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/* Check that conf exists, even when clearing this function
* must be called with the AP's channel context still there
@@ -1032,7 +1028,7 @@ __ieee80211_link_copy_chanctx_to_vlans(struct ieee80211_link_data *link,
* to a channel context that has already been freed.
*/
conf = rcu_dereference_protected(link_conf->chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
WARN_ON(!conf);
if (clear)
@@ -1056,11 +1052,9 @@ void ieee80211_link_copy_chanctx_to_vlans(struct ieee80211_link_data *link,
{
struct ieee80211_local *local = link->sdata->local;
- mutex_lock(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
__ieee80211_link_copy_chanctx_to_vlans(link, clear);
-
- mutex_unlock(&local->chanctx_mtx);
}
int ieee80211_link_unreserve_chanctx(struct ieee80211_link_data *link)
@@ -1068,7 +1062,7 @@ int ieee80211_link_unreserve_chanctx(struct ieee80211_link_data *link)
struct ieee80211_sub_if_data *sdata = link->sdata;
struct ieee80211_chanctx *ctx = link->reserved_chanctx;
- lockdep_assert_held(&sdata->local->chanctx_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (WARN_ON(!ctx))
return -EINVAL;
@@ -1108,7 +1102,7 @@ int ieee80211_link_reserve_chanctx(struct ieee80211_link_data *link,
struct ieee80211_local *local = sdata->local;
struct ieee80211_chanctx *new_ctx, *curr_ctx, *ctx;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
curr_ctx = ieee80211_link_get_chanctx(link);
if (curr_ctx && local->use_chanctx && !local->ops->switch_vif_chanctx)
@@ -1206,8 +1200,8 @@ ieee80211_link_chanctx_reservation_complete(struct ieee80211_link_data *link)
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_MESH_POINT:
case NL80211_IFTYPE_OCB:
- ieee80211_queue_work(&sdata->local->hw,
- &link->csa_finalize_work);
+ wiphy_work_queue(sdata->local->hw.wiphy,
+ &link->csa_finalize_work);
break;
case NL80211_IFTYPE_STATION:
wiphy_delayed_work_queue(sdata->local->hw.wiphy,
@@ -1265,8 +1259,7 @@ ieee80211_link_use_reserved_reassign(struct ieee80211_link_data *link)
u64 changed = 0;
int err;
- lockdep_assert_held(&local->mtx);
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
new_ctx = link->reserved_chanctx;
old_ctx = ieee80211_link_get_chanctx(link);
@@ -1390,7 +1383,7 @@ ieee80211_link_has_in_place_reservation(struct ieee80211_link_data *link)
struct ieee80211_sub_if_data *sdata = link->sdata;
struct ieee80211_chanctx *old_ctx, *new_ctx;
- lockdep_assert_held(&sdata->local->chanctx_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
new_ctx = link->reserved_chanctx;
old_ctx = ieee80211_link_get_chanctx(link);
@@ -1415,8 +1408,7 @@ static int ieee80211_chsw_switch_hwconf(struct ieee80211_local *local,
{
const struct cfg80211_chan_def *chandef;
- lockdep_assert_held(&local->mtx);
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
chandef = ieee80211_chanctx_reserved_chandef(local, new_ctx, NULL);
if (WARN_ON(!chandef))
@@ -1437,8 +1429,7 @@ static int ieee80211_chsw_switch_vifs(struct ieee80211_local *local,
struct ieee80211_chanctx *ctx, *old_ctx;
int i, err;
- lockdep_assert_held(&local->mtx);
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
vif_chsw = kcalloc(n_vifs, sizeof(vif_chsw[0]), GFP_KERNEL);
if (!vif_chsw)
@@ -1482,8 +1473,7 @@ static int ieee80211_chsw_switch_ctxs(struct ieee80211_local *local)
struct ieee80211_chanctx *ctx;
int err;
- lockdep_assert_held(&local->mtx);
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(ctx, &local->chanctx_list, list) {
if (ctx->replace_state != IEEE80211_CHANCTX_REPLACES_OTHER)
@@ -1523,8 +1513,7 @@ static int ieee80211_vif_use_reserved_switch(struct ieee80211_local *local)
int err, n_assigned, n_reserved, n_ready;
int n_ctx = 0, n_vifs_switch = 0, n_vifs_assign = 0, n_vifs_ctxless = 0;
- lockdep_assert_held(&local->mtx);
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/*
* If there are 2 independent pairs of channel contexts performing
@@ -1783,10 +1772,10 @@ static void __ieee80211_link_release_channel(struct ieee80211_link_data *link)
struct ieee80211_chanctx *ctx;
bool use_reserved_switch = false;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
conf = rcu_dereference_protected(link_conf->chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (!conf)
return;
@@ -1821,7 +1810,7 @@ int ieee80211_link_use_channel(struct ieee80211_link_data *link,
u8 radar_detect_width = 0;
int ret;
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (sdata->vif.active_links &&
!(sdata->vif.active_links & BIT(link->link_id))) {
@@ -1829,8 +1818,6 @@ int ieee80211_link_use_channel(struct ieee80211_link_data *link,
return 0;
}
- mutex_lock(&local->chanctx_mtx);
-
ret = cfg80211_chandef_dfs_required(local->hw.wiphy,
chandef,
sdata->wdev.iftype);
@@ -1872,7 +1859,6 @@ int ieee80211_link_use_channel(struct ieee80211_link_data *link,
if (ret)
link->radar_required = false;
- mutex_unlock(&local->chanctx_mtx);
return ret;
}
@@ -1884,8 +1870,7 @@ int ieee80211_link_use_reserved_context(struct ieee80211_link_data *link)
struct ieee80211_chanctx *old_ctx;
int err;
- lockdep_assert_held(&local->mtx);
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
new_ctx = link->reserved_chanctx;
old_ctx = ieee80211_link_get_chanctx(link);
@@ -1948,51 +1933,40 @@ int ieee80211_link_change_bandwidth(struct ieee80211_link_data *link,
struct ieee80211_chanctx_conf *conf;
struct ieee80211_chanctx *ctx;
const struct cfg80211_chan_def *compat;
- int ret;
+
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!cfg80211_chandef_usable(sdata->local->hw.wiphy, chandef,
IEEE80211_CHAN_DISABLED))
return -EINVAL;
- mutex_lock(&local->chanctx_mtx);
- if (cfg80211_chandef_identical(chandef, &link_conf->chandef)) {
- ret = 0;
- goto out;
- }
+ if (cfg80211_chandef_identical(chandef, &link_conf->chandef))
+ return 0;
if (chandef->width == NL80211_CHAN_WIDTH_20_NOHT ||
- link_conf->chandef.width == NL80211_CHAN_WIDTH_20_NOHT) {
- ret = -EINVAL;
- goto out;
- }
+ link_conf->chandef.width == NL80211_CHAN_WIDTH_20_NOHT)
+ return -EINVAL;
conf = rcu_dereference_protected(link_conf->chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
- if (!conf) {
- ret = -EINVAL;
- goto out;
- }
+ lockdep_is_held(&local->hw.wiphy->mtx));
+ if (!conf)
+ return -EINVAL;
ctx = container_of(conf, struct ieee80211_chanctx, conf);
compat = cfg80211_chandef_compatible(&conf->def, chandef);
- if (!compat) {
- ret = -EINVAL;
- goto out;
- }
+ if (!compat)
+ return -EINVAL;
switch (ctx->replace_state) {
case IEEE80211_CHANCTX_REPLACE_NONE:
- if (!ieee80211_chanctx_reserved_chandef(local, ctx, compat)) {
- ret = -EBUSY;
- goto out;
- }
+ if (!ieee80211_chanctx_reserved_chandef(local, ctx, compat))
+ return -EBUSY;
break;
case IEEE80211_CHANCTX_WILL_BE_REPLACED:
/* TODO: Perhaps the bandwidth change could be treated as a
* reservation itself? */
- ret = -EBUSY;
- goto out;
+ return -EBUSY;
case IEEE80211_CHANCTX_REPLACES_OTHER:
/* channel context that is going to replace another channel
* context doesn't really exist and shouldn't be assigned
@@ -2006,22 +1980,17 @@ int ieee80211_link_change_bandwidth(struct ieee80211_link_data *link,
ieee80211_recalc_chanctx_chantype(local, ctx);
*changed |= BSS_CHANGED_BANDWIDTH;
- ret = 0;
- out:
- mutex_unlock(&local->chanctx_mtx);
- return ret;
+ return 0;
}
void ieee80211_link_release_channel(struct ieee80211_link_data *link)
{
struct ieee80211_sub_if_data *sdata = link->sdata;
- mutex_lock(&sdata->local->chanctx_mtx);
- if (rcu_access_pointer(link->conf->chanctx_conf)) {
- lockdep_assert_held(&sdata->local->mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
+ if (rcu_access_pointer(link->conf->chanctx_conf))
__ieee80211_link_release_channel(link);
- }
- mutex_unlock(&sdata->local->chanctx_mtx);
}
void ieee80211_link_vlan_copy_chanctx(struct ieee80211_link_data *link)
@@ -2034,20 +2003,19 @@ void ieee80211_link_vlan_copy_chanctx(struct ieee80211_link_data *link)
struct ieee80211_sub_if_data *ap;
struct ieee80211_chanctx_conf *conf;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (WARN_ON(sdata->vif.type != NL80211_IFTYPE_AP_VLAN || !sdata->bss))
return;
ap = container_of(sdata->bss, struct ieee80211_sub_if_data, u.ap);
- mutex_lock(&local->chanctx_mtx);
-
rcu_read_lock();
ap_conf = rcu_dereference(ap->vif.link_conf[link_id]);
conf = rcu_dereference_protected(ap_conf->chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
rcu_assign_pointer(link_conf->chanctx_conf, conf);
rcu_read_unlock();
- mutex_unlock(&local->chanctx_mtx);
}
void ieee80211_iter_chan_contexts_atomic(
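[Editor's sketch, not part of the patch] The chan.c hunks above collapse the old chanctx_mtx / local->mtx pair into the single per-wiphy mutex. As a rough illustration only (example_set_link_channel is a made-up name), a caller of one of the converted helpers now looks roughly like this:

static int example_set_link_channel(struct ieee80211_link_data *link,
                                    const struct cfg80211_chan_def *chandef)
{
        struct ieee80211_local *local = link->sdata->local;
        int err;

        /* one lock now covers what chanctx_mtx + local->mtx used to */
        wiphy_lock(local->hw.wiphy);
        err = ieee80211_link_use_channel(link, chandef,
                                         IEEE80211_CHANCTX_SHARED);
        wiphy_unlock(local->hw.wiphy);

        return err;
}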
diff --git a/net/mac80211/debugfs.c b/net/mac80211/debugfs.c
index 207f772bd8ce..b575ae90e57f 100644
--- a/net/mac80211/debugfs.c
+++ b/net/mac80211/debugfs.c
@@ -4,7 +4,7 @@
*
* Copyright 2007 Johannes Berg <johannes@sipsolutions.net>
* Copyright 2013-2014 Intel Mobile Communications GmbH
- * Copyright (C) 2018 - 2019, 2021-2022 Intel Corporation
+ * Copyright (C) 2018 - 2019, 2021-2023 Intel Corporation
*/
#include <linux/debugfs.h>
@@ -288,10 +288,10 @@ static ssize_t aql_txq_limit_write(struct file *file,
q_limit_low_old = local->aql_txq_limit_low[ac];
q_limit_high_old = local->aql_txq_limit_high[ac];
+ wiphy_lock(local->hw.wiphy);
local->aql_txq_limit_low[ac] = q_limit_low;
local->aql_txq_limit_high[ac] = q_limit_high;
- mutex_lock(&local->sta_mtx);
list_for_each_entry(sta, &local->sta_list, list) {
/* If a sta has customized queue limits, keep it */
if (sta->airtime[ac].aql_limit_low == q_limit_low_old &&
@@ -300,7 +300,8 @@ static ssize_t aql_txq_limit_write(struct file *file,
sta->airtime[ac].aql_limit_high = q_limit_high;
}
}
- mutex_unlock(&local->sta_mtx);
+ wiphy_unlock(local->hw.wiphy);
+
return count;
}
@@ -594,9 +595,9 @@ static ssize_t format_devstat_counter(struct ieee80211_local *local,
char buf[20];
int res;
- rtnl_lock();
+ wiphy_lock(local->hw.wiphy);
res = drv_get_stats(local, &stats);
- rtnl_unlock();
+ wiphy_unlock(local->hw.wiphy);
if (res)
return res;
res = printvalue(&stats, buf, sizeof(buf));
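[Editor's sketch, not part of the patch] The aql_txq_limit_write() change above drops sta_mtx in favour of the wiphy mutex. A minimal sketch of the resulting convention, with a hypothetical helper name, assuming the caller already holds wiphy_lock():

static void example_set_sta_aql_limits(struct ieee80211_local *local, int ac,
                                       u32 limit_low, u32 limit_high)
{
        struct sta_info *sta;

        /* caller holds wiphy_lock(); no separate sta_mtx any more */
        lockdep_assert_wiphy(local->hw.wiphy);

        list_for_each_entry(sta, &local->sta_list, list) {
                sta->airtime[ac].aql_limit_low = limit_low;
                sta->airtime[ac].aql_limit_high = limit_high;
        }
}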
diff --git a/net/mac80211/debugfs_key.c b/net/mac80211/debugfs_key.c
index 16a04330e7dc..7e54da508765 100644
--- a/net/mac80211/debugfs_key.c
+++ b/net/mac80211/debugfs_key.c
@@ -4,7 +4,7 @@
* Copyright (c) 2006 Jiri Benc <jbenc@suse.cz>
* Copyright 2007 Johannes Berg <johannes@sipsolutions.net>
* Copyright (C) 2015 Intel Deutschland GmbH
- * Copyright (C) 2021-2022 Intel Corporation
+ * Copyright (C) 2021-2023 Intel Corporation
*/
#include <linux/kobject.h>
@@ -378,14 +378,14 @@ void ieee80211_debugfs_key_update_default(struct ieee80211_sub_if_data *sdata)
if (!sdata->vif.debugfs_dir)
return;
- lockdep_assert_held(&sdata->local->key_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
debugfs_remove(sdata->debugfs.default_unicast_key);
sdata->debugfs.default_unicast_key = NULL;
if (sdata->default_unicast_key) {
- key = key_mtx_dereference(sdata->local,
- sdata->default_unicast_key);
+ key = wiphy_dereference(sdata->local->hw.wiphy,
+ sdata->default_unicast_key);
sprintf(buf, "../keys/%d", key->debugfs.cnt);
sdata->debugfs.default_unicast_key =
debugfs_create_symlink("default_unicast_key",
@@ -396,8 +396,8 @@ void ieee80211_debugfs_key_update_default(struct ieee80211_sub_if_data *sdata)
sdata->debugfs.default_multicast_key = NULL;
if (sdata->deflink.default_multicast_key) {
- key = key_mtx_dereference(sdata->local,
- sdata->deflink.default_multicast_key);
+ key = wiphy_dereference(sdata->local->hw.wiphy,
+ sdata->deflink.default_multicast_key);
sprintf(buf, "../keys/%d", key->debugfs.cnt);
sdata->debugfs.default_multicast_key =
debugfs_create_symlink("default_multicast_key",
@@ -413,8 +413,8 @@ void ieee80211_debugfs_key_add_mgmt_default(struct ieee80211_sub_if_data *sdata)
if (!sdata->vif.debugfs_dir)
return;
- key = key_mtx_dereference(sdata->local,
- sdata->deflink.default_mgmt_key);
+ key = wiphy_dereference(sdata->local->hw.wiphy,
+ sdata->deflink.default_mgmt_key);
if (key) {
sprintf(buf, "../keys/%d", key->debugfs.cnt);
sdata->debugfs.default_mgmt_key =
@@ -442,8 +442,8 @@ ieee80211_debugfs_key_add_beacon_default(struct ieee80211_sub_if_data *sdata)
if (!sdata->vif.debugfs_dir)
return;
- key = key_mtx_dereference(sdata->local,
- sdata->deflink.default_beacon_key);
+ key = wiphy_dereference(sdata->local->hw.wiphy,
+ sdata->deflink.default_beacon_key);
if (key) {
sprintf(buf, "../keys/%d", key->debugfs.cnt);
sdata->debugfs.default_beacon_key =
diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
index 63250286dc8b..ec91e131b29e 100644
--- a/net/mac80211/debugfs_netdev.c
+++ b/net/mac80211/debugfs_netdev.c
@@ -22,18 +22,18 @@
#include "debugfs_netdev.h"
#include "driver-ops.h"
-static ssize_t ieee80211_if_read(
- void *data,
+static ssize_t ieee80211_if_read_sdata(
+ struct ieee80211_sub_if_data *sdata,
char __user *userbuf,
size_t count, loff_t *ppos,
- ssize_t (*format)(const void *, char *, int))
+ ssize_t (*format)(const struct ieee80211_sub_if_data *sdata, char *, int))
{
char buf[200];
ssize_t ret = -EINVAL;
- read_lock(&dev_base_lock);
- ret = (*format)(data, buf, sizeof(buf));
- read_unlock(&dev_base_lock);
+ wiphy_lock(sdata->local->hw.wiphy);
+ ret = (*format)(sdata, buf, sizeof(buf));
+ wiphy_unlock(sdata->local->hw.wiphy);
if (ret >= 0)
ret = simple_read_from_buffer(userbuf, count, ppos, buf, ret);
@@ -41,11 +41,11 @@ static ssize_t ieee80211_if_read(
return ret;
}
-static ssize_t ieee80211_if_write(
- void *data,
+static ssize_t ieee80211_if_write_sdata(
+ struct ieee80211_sub_if_data *sdata,
const char __user *userbuf,
size_t count, loff_t *ppos,
- ssize_t (*write)(void *, const char *, int))
+ ssize_t (*write)(struct ieee80211_sub_if_data *sdata, const char *, int))
{
char buf[64];
ssize_t ret;
@@ -57,9 +57,51 @@ static ssize_t ieee80211_if_write(
return -EFAULT;
buf[count] = '\0';
- rtnl_lock();
- ret = (*write)(data, buf, count);
- rtnl_unlock();
+ wiphy_lock(sdata->local->hw.wiphy);
+ ret = (*write)(sdata, buf, count);
+ wiphy_unlock(sdata->local->hw.wiphy);
+
+ return ret;
+}
+
+static ssize_t ieee80211_if_read_link(
+ struct ieee80211_link_data *link,
+ char __user *userbuf,
+ size_t count, loff_t *ppos,
+ ssize_t (*format)(const struct ieee80211_link_data *link, char *, int))
+{
+ char buf[200];
+ ssize_t ret = -EINVAL;
+
+ wiphy_lock(link->sdata->local->hw.wiphy);
+ ret = (*format)(link, buf, sizeof(buf));
+ wiphy_unlock(link->sdata->local->hw.wiphy);
+
+ if (ret >= 0)
+ ret = simple_read_from_buffer(userbuf, count, ppos, buf, ret);
+
+ return ret;
+}
+
+static ssize_t ieee80211_if_write_link(
+ struct ieee80211_link_data *link,
+ const char __user *userbuf,
+ size_t count, loff_t *ppos,
+ ssize_t (*write)(struct ieee80211_link_data *link, const char *, int))
+{
+ char buf[64];
+ ssize_t ret;
+
+ if (count >= sizeof(buf))
+ return -E2BIG;
+
+ if (copy_from_user(buf, userbuf, count))
+ return -EFAULT;
+ buf[count] = '\0';
+
+ wiphy_lock(link->sdata->local->hw.wiphy);
+ ret = (*write)(link, buf, count);
+ wiphy_unlock(link->sdata->local->hw.wiphy);
return ret;
}
@@ -126,41 +168,37 @@ static const struct file_operations name##_ops = { \
.llseek = generic_file_llseek, \
}
-#define _IEEE80211_IF_FILE_R_FN(name, type) \
+#define _IEEE80211_IF_FILE_R_FN(name) \
static ssize_t ieee80211_if_read_##name(struct file *file, \
char __user *userbuf, \
size_t count, loff_t *ppos) \
{ \
- ssize_t (*fn)(const void *, char *, int) = (void *) \
- ((ssize_t (*)(const type, char *, int)) \
- ieee80211_if_fmt_##name); \
- return ieee80211_if_read(file->private_data, \
- userbuf, count, ppos, fn); \
+ return ieee80211_if_read_sdata(file->private_data, \
+ userbuf, count, ppos, \
+ ieee80211_if_fmt_##name); \
}
-#define _IEEE80211_IF_FILE_W_FN(name, type) \
+#define _IEEE80211_IF_FILE_W_FN(name) \
static ssize_t ieee80211_if_write_##name(struct file *file, \
const char __user *userbuf, \
size_t count, loff_t *ppos) \
{ \
- ssize_t (*fn)(void *, const char *, int) = (void *) \
- ((ssize_t (*)(type, const char *, int)) \
- ieee80211_if_parse_##name); \
- return ieee80211_if_write(file->private_data, userbuf, count, \
- ppos, fn); \
+ return ieee80211_if_write_sdata(file->private_data, userbuf, \
+ count, ppos, \
+ ieee80211_if_parse_##name); \
}
#define IEEE80211_IF_FILE_R(name) \
- _IEEE80211_IF_FILE_R_FN(name, struct ieee80211_sub_if_data *) \
+ _IEEE80211_IF_FILE_R_FN(name) \
_IEEE80211_IF_FILE_OPS(name, ieee80211_if_read_##name, NULL)
#define IEEE80211_IF_FILE_W(name) \
- _IEEE80211_IF_FILE_W_FN(name, struct ieee80211_sub_if_data *) \
+ _IEEE80211_IF_FILE_W_FN(name) \
_IEEE80211_IF_FILE_OPS(name, NULL, ieee80211_if_write_##name)
#define IEEE80211_IF_FILE_RW(name) \
- _IEEE80211_IF_FILE_R_FN(name, struct ieee80211_sub_if_data *) \
- _IEEE80211_IF_FILE_W_FN(name, struct ieee80211_sub_if_data *) \
+ _IEEE80211_IF_FILE_R_FN(name) \
+ _IEEE80211_IF_FILE_W_FN(name) \
_IEEE80211_IF_FILE_OPS(name, ieee80211_if_read_##name, \
ieee80211_if_write_##name)
@@ -168,18 +206,37 @@ static ssize_t ieee80211_if_write_##name(struct file *file, \
IEEE80211_IF_FMT_##format(name, struct ieee80211_sub_if_data, field) \
IEEE80211_IF_FILE_R(name)
-/* Same but with a link_ prefix in the ops variable name and different type */
+#define _IEEE80211_IF_LINK_R_FN(name) \
+static ssize_t ieee80211_if_read_##name(struct file *file, \
+ char __user *userbuf, \
+ size_t count, loff_t *ppos) \
+{ \
+ return ieee80211_if_read_link(file->private_data, \
+ userbuf, count, ppos, \
+ ieee80211_if_fmt_##name); \
+}
+
+#define _IEEE80211_IF_LINK_W_FN(name) \
+static ssize_t ieee80211_if_write_##name(struct file *file, \
+ const char __user *userbuf, \
+ size_t count, loff_t *ppos) \
+{ \
+ return ieee80211_if_write_link(file->private_data, userbuf, \
+ count, ppos, \
+ ieee80211_if_parse_##name); \
+}
+
#define IEEE80211_IF_LINK_FILE_R(name) \
- _IEEE80211_IF_FILE_R_FN(name, struct ieee80211_link_data *) \
+ _IEEE80211_IF_LINK_R_FN(name) \
_IEEE80211_IF_FILE_OPS(link_##name, ieee80211_if_read_##name, NULL)
#define IEEE80211_IF_LINK_FILE_W(name) \
- _IEEE80211_IF_FILE_W_FN(name) \
+ _IEEE80211_IF_LINK_W_FN(name) \
_IEEE80211_IF_FILE_OPS(link_##name, NULL, ieee80211_if_write_##name)
#define IEEE80211_IF_LINK_FILE_RW(name) \
- _IEEE80211_IF_FILE_R_FN(name, struct ieee80211_link_data *) \
- _IEEE80211_IF_FILE_W_FN(name, struct ieee80211_link_data *) \
+ _IEEE80211_IF_LINK_R_FN(name) \
+ _IEEE80211_IF_LINK_W_FN(name) \
_IEEE80211_IF_FILE_OPS(link_##name, ieee80211_if_read_##name, \
ieee80211_if_write_##name)
@@ -265,9 +322,11 @@ static int ieee80211_set_smps(struct ieee80211_link_data *link,
{
struct ieee80211_sub_if_data *sdata = link->sdata;
struct ieee80211_local *local = sdata->local;
- int err;
- if (sdata->vif.driver_flags & IEEE80211_VIF_DISABLE_SMPS_OVERRIDE)
+ /* The driver indicated that EML is enabled for the interface, thus do
+ * not allow to override the SMPS state.
+ */
+ if (sdata->vif.driver_flags & IEEE80211_VIF_EML_ACTIVE)
return -EOPNOTSUPP;
if (!(local->hw.wiphy->features & NL80211_FEATURE_STATIC_SMPS) &&
@@ -283,11 +342,7 @@ static int ieee80211_set_smps(struct ieee80211_link_data *link,
if (sdata->vif.type != NL80211_IFTYPE_STATION)
return -EOPNOTSUPP;
- sdata_lock(sdata);
- err = __ieee80211_request_smps_mgd(link->sdata, link, smps_mode);
- sdata_unlock(sdata);
-
- return err;
+ return __ieee80211_request_smps_mgd(link->sdata, link, smps_mode);
}
static const char *smps_modes[IEEE80211_SMPS_NUM_MODES] = {
@@ -359,16 +414,13 @@ static ssize_t ieee80211_if_parse_tkip_mic_test(
case NL80211_IFTYPE_STATION:
fc |= cpu_to_le16(IEEE80211_FCTL_TODS);
/* BSSID SA DA */
- sdata_lock(sdata);
if (!sdata->u.mgd.associated) {
- sdata_unlock(sdata);
dev_kfree_skb(skb);
return -ENOTCONN;
}
memcpy(hdr->addr1, sdata->deflink.u.mgd.bssid, ETH_ALEN);
memcpy(hdr->addr2, sdata->vif.addr, ETH_ALEN);
memcpy(hdr->addr3, addr, ETH_ALEN);
- sdata_unlock(sdata);
break;
default:
dev_kfree_skb(skb);
@@ -885,18 +937,20 @@ static void add_link_files(struct ieee80211_link_data *link,
}
}
-void ieee80211_debugfs_add_netdev(struct ieee80211_sub_if_data *sdata)
+void ieee80211_debugfs_add_netdev(struct ieee80211_sub_if_data *sdata,
+ bool mld_vif)
{
char buf[10+IFNAMSIZ];
sprintf(buf, "netdev:%s", sdata->name);
sdata->vif.debugfs_dir = debugfs_create_dir(buf,
sdata->local->hw.wiphy->debugfsdir);
+ /* deflink also has this */
+ sdata->deflink.debugfs_dir = sdata->vif.debugfs_dir;
sdata->debugfs.subdir_stations = debugfs_create_dir("stations",
sdata->vif.debugfs_dir);
add_files(sdata);
-
- if (!(sdata->local->hw.wiphy->flags & WIPHY_FLAG_SUPPORTS_MLO))
+ if (!mld_vif)
add_link_files(&sdata->deflink, sdata->vif.debugfs_dir);
}
@@ -924,11 +978,21 @@ void ieee80211_debugfs_rename_netdev(struct ieee80211_sub_if_data *sdata)
debugfs_rename(dir->d_parent, dir, dir->d_parent, buf);
}
+void ieee80211_debugfs_recreate_netdev(struct ieee80211_sub_if_data *sdata,
+ bool mld_vif)
+{
+ ieee80211_debugfs_remove_netdev(sdata);
+ ieee80211_debugfs_add_netdev(sdata, mld_vif);
+ drv_vif_add_debugfs(sdata->local, sdata);
+ if (!mld_vif)
+ ieee80211_link_debugfs_drv_add(&sdata->deflink);
+}
+
void ieee80211_link_debugfs_add(struct ieee80211_link_data *link)
{
char link_dir_name[10];
- if (WARN_ON(!link->sdata->vif.debugfs_dir))
+ if (WARN_ON(!link->sdata->vif.debugfs_dir || link->debugfs_dir))
return;
/* For now, this should not be called for non-MLO capable drivers */
@@ -965,7 +1029,8 @@ void ieee80211_link_debugfs_remove(struct ieee80211_link_data *link)
void ieee80211_link_debugfs_drv_add(struct ieee80211_link_data *link)
{
- if (WARN_ON(!link->debugfs_dir))
+ if (link->sdata->vif.type == NL80211_IFTYPE_MONITOR ||
+ WARN_ON(!link->debugfs_dir))
return;
drv_link_add_debugfs(link->sdata->local, link->sdata,
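[Editor's sketch, not part of the patch] With the typed ieee80211_if_read_sdata()/ieee80211_if_write_sdata() helpers above, the old void-pointer casts in the file macros go away. Assuming the macros shown in the hunk, a hypothetical attribute (example_flag is not a real debugfs file) would be wired up like this:

/* format callback with the prototype the typed helpers expect */
static ssize_t ieee80211_if_fmt_example_flag(
        const struct ieee80211_sub_if_data *sdata, char *buf, int buflen)
{
        return scnprintf(buf, buflen, "%d\n",
                         !!(sdata->flags & IEEE80211_SDATA_IN_DRIVER));
}
IEEE80211_IF_FILE_R(example_flag);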
diff --git a/net/mac80211/debugfs_netdev.h b/net/mac80211/debugfs_netdev.h
index 99e688dcabd6..b226b1aae88a 100644
--- a/net/mac80211/debugfs_netdev.h
+++ b/net/mac80211/debugfs_netdev.h
@@ -1,4 +1,8 @@
/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Portions:
+ * Copyright (C) 2023 Intel Corporation
+ */
/* routines exported for debugfs handling */
#ifndef __IEEE80211_DEBUGFS_NETDEV_H
@@ -7,9 +11,12 @@
#include "ieee80211_i.h"
#ifdef CONFIG_MAC80211_DEBUGFS
-void ieee80211_debugfs_add_netdev(struct ieee80211_sub_if_data *sdata);
+void ieee80211_debugfs_add_netdev(struct ieee80211_sub_if_data *sdata,
+ bool mld_vif);
void ieee80211_debugfs_remove_netdev(struct ieee80211_sub_if_data *sdata);
void ieee80211_debugfs_rename_netdev(struct ieee80211_sub_if_data *sdata);
+void ieee80211_debugfs_recreate_netdev(struct ieee80211_sub_if_data *sdata,
+ bool mld_vif);
void ieee80211_link_debugfs_add(struct ieee80211_link_data *link);
void ieee80211_link_debugfs_remove(struct ieee80211_link_data *link);
@@ -18,7 +25,7 @@ void ieee80211_link_debugfs_drv_add(struct ieee80211_link_data *link);
void ieee80211_link_debugfs_drv_remove(struct ieee80211_link_data *link);
#else
static inline void ieee80211_debugfs_add_netdev(
- struct ieee80211_sub_if_data *sdata)
+ struct ieee80211_sub_if_data *sdata, bool mld_vif)
{}
static inline void ieee80211_debugfs_remove_netdev(
struct ieee80211_sub_if_data *sdata)
@@ -26,7 +33,9 @@ static inline void ieee80211_debugfs_remove_netdev(
static inline void ieee80211_debugfs_rename_netdev(
struct ieee80211_sub_if_data *sdata)
{}
-
+static inline void ieee80211_debugfs_recreate_netdev(
+ struct ieee80211_sub_if_data *sdata, bool mld_vif)
+{}
static inline void ieee80211_link_debugfs_add(struct ieee80211_link_data *link)
{}
static inline void ieee80211_link_debugfs_remove(struct ieee80211_link_data *link)
diff --git a/net/mac80211/debugfs_sta.c b/net/mac80211/debugfs_sta.c
index 5a97fb248c85..06e3613bf46b 100644
--- a/net/mac80211/debugfs_sta.c
+++ b/net/mac80211/debugfs_sta.c
@@ -5,7 +5,7 @@
* Copyright 2007 Johannes Berg <johannes@sipsolutions.net>
* Copyright 2013-2014 Intel Mobile Communications GmbH
* Copyright(c) 2016 Intel Deutschland GmbH
- * Copyright (C) 2018 - 2022 Intel Corporation
+ * Copyright (C) 2018 - 2023 Intel Corporation
*/
#include <linux/debugfs.h>
@@ -420,6 +420,7 @@ static ssize_t sta_agg_status_write(struct file *file, const char __user *userbu
if (ret || tid >= IEEE80211_NUM_TIDS)
return -EINVAL;
+ wiphy_lock(sta->local->hw.wiphy);
if (tx) {
if (start)
ret = ieee80211_start_tx_ba_session(&sta->sta, tid,
@@ -431,6 +432,7 @@ static ssize_t sta_agg_status_write(struct file *file, const char __user *userbu
3, true);
ret = 0;
}
+ wiphy_unlock(sta->local->hw.wiphy);
return ret ?: count;
}
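[Editor's sketch, not part of the patch] sta_agg_status_write() above now brackets the BA-session calls with wiphy_lock(). The same pattern in isolation (example_trigger_tx_agg and the 5000 TU timeout are illustrative values only):

static int example_trigger_tx_agg(struct sta_info *sta, u16 tid)
{
        int ret;

        wiphy_lock(sta->local->hw.wiphy);
        ret = ieee80211_start_tx_ba_session(&sta->sta, tid, 5000);
        wiphy_unlock(sta->local->hw.wiphy);

        return ret;
}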
diff --git a/net/mac80211/driver-ops.c b/net/mac80211/driver-ops.c
index 30cd0c905a24..7938ec87ef25 100644
--- a/net/mac80211/driver-ops.c
+++ b/net/mac80211/driver-ops.c
@@ -15,6 +15,7 @@ int drv_start(struct ieee80211_local *local)
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (WARN_ON(local->started))
return -EALREADY;
@@ -35,6 +36,7 @@ int drv_start(struct ieee80211_local *local)
void drv_stop(struct ieee80211_local *local)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (WARN_ON(!local->started))
return;
@@ -58,6 +60,7 @@ int drv_add_interface(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (WARN_ON(sdata->vif.type == NL80211_IFTYPE_AP_VLAN ||
(sdata->vif.type == NL80211_IFTYPE_MONITOR &&
@@ -69,10 +72,18 @@ int drv_add_interface(struct ieee80211_local *local,
ret = local->ops->add_interface(&local->hw, &sdata->vif);
trace_drv_return_int(local, ret);
- if (ret == 0)
- sdata->flags |= IEEE80211_SDATA_IN_DRIVER;
+ if (ret)
+ return ret;
- return ret;
+ sdata->flags |= IEEE80211_SDATA_IN_DRIVER;
+
+ if (!local->in_reconfig) {
+ drv_vif_add_debugfs(local, sdata);
+ /* initially vif is not MLD */
+ ieee80211_link_debugfs_drv_add(&sdata->deflink);
+ }
+
+ return 0;
}
int drv_change_interface(struct ieee80211_local *local,
@@ -82,6 +93,7 @@ int drv_change_interface(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -96,6 +108,7 @@ void drv_remove_interface(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -116,6 +129,7 @@ int drv_sta_state(struct ieee80211_local *local,
int ret = 0;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
@@ -149,6 +163,7 @@ int drv_sta_set_txpwr(struct ieee80211_local *local,
int ret = -EOPNOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
@@ -190,6 +205,7 @@ int drv_conf_tx(struct ieee80211_local *local,
int ret = -EOPNOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -223,6 +239,7 @@ u64 drv_get_tsf(struct ieee80211_local *local,
u64 ret = -1ULL;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return ret;
@@ -239,6 +256,7 @@ void drv_set_tsf(struct ieee80211_local *local,
u64 tsf)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -254,6 +272,7 @@ void drv_offset_tsf(struct ieee80211_local *local,
s64 offset)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -268,6 +287,7 @@ void drv_reset_tsf(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -285,7 +305,9 @@ int drv_assign_vif_chanctx(struct ieee80211_local *local,
{
int ret = 0;
- drv_verify_link_exists(sdata, link_conf);
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -312,8 +334,8 @@ void drv_unassign_vif_chanctx(struct ieee80211_local *local,
struct ieee80211_chanctx *ctx)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
- drv_verify_link_exists(sdata, link_conf);
if (!check_sdata_in_driver(sdata))
return;
@@ -340,6 +362,7 @@ int drv_switch_vif_chanctx(struct ieee80211_local *local,
int i;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!local->ops->switch_vif_chanctx)
return -EOPNOTSUPP;
@@ -392,9 +415,7 @@ int drv_ampdu_action(struct ieee80211_local *local,
int ret = -EOPNOTSUPP;
might_sleep();
-
- if (!sdata)
- return -EIO;
+ lockdep_assert_wiphy(local->hw.wiphy);
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
@@ -416,6 +437,7 @@ void drv_link_info_changed(struct ieee80211_local *local,
int link_id, u64 changed)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (WARN_ON_ONCE(changed & (BSS_CHANGED_BEACON |
BSS_CHANGED_BEACON_ENABLED) &&
@@ -458,6 +480,7 @@ int drv_set_key(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
@@ -485,6 +508,7 @@ int drv_change_vif_links(struct ieee80211_local *local,
int ret = -EOPNOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -510,10 +534,13 @@ int drv_change_vif_links(struct ieee80211_local *local,
if (ret)
return ret;
- for_each_set_bit(link_id, &links_to_add, IEEE80211_MLD_MAX_NUM_LINKS) {
- link = rcu_access_pointer(sdata->link[link_id]);
+ if (!local->in_reconfig) {
+ for_each_set_bit(link_id, &links_to_add,
+ IEEE80211_MLD_MAX_NUM_LINKS) {
+ link = rcu_access_pointer(sdata->link[link_id]);
- ieee80211_link_debugfs_drv_add(link);
+ ieee80211_link_debugfs_drv_add(link);
+ }
}
return 0;
@@ -532,6 +559,7 @@ int drv_change_sta_links(struct ieee80211_local *local,
int ret = -EOPNOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -547,7 +575,7 @@ int drv_change_sta_links(struct ieee80211_local *local,
for_each_set_bit(link_id, &links_to_rem, IEEE80211_MLD_MAX_NUM_LINKS) {
link_sta = rcu_dereference_protected(info->link[link_id],
- lockdep_is_held(&local->sta_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
ieee80211_link_sta_debugfs_drv_remove(link_sta);
}
@@ -563,7 +591,7 @@ int drv_change_sta_links(struct ieee80211_local *local,
for_each_set_bit(link_id, &links_to_add, IEEE80211_MLD_MAX_NUM_LINKS) {
link_sta = rcu_dereference_protected(info->link[link_id],
- lockdep_is_held(&local->sta_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
ieee80211_link_sta_debugfs_drv_add(link_sta);
}
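[Editor's sketch, not part of the patch] In drv_change_sta_links() the link-station pointers are now dereferenced against the wiphy mutex instead of sta_mtx. The same accessor pattern, pulled out into a hypothetical helper for clarity:

static struct ieee80211_link_sta *
example_get_link_sta(struct ieee80211_local *local,
                     struct ieee80211_sta *sta, unsigned int link_id)
{
        /* valid because the wiphy mutex protects the link configuration */
        return rcu_dereference_protected(sta->link[link_id],
                                         lockdep_is_held(&local->hw.wiphy->mtx));
}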
diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
index c4505593ba7a..568633b38c47 100644
--- a/net/mac80211/driver-ops.h
+++ b/net/mac80211/driver-ops.h
@@ -40,6 +40,9 @@ static inline void drv_tx(struct ieee80211_local *local,
static inline void drv_sync_rx_queues(struct ieee80211_local *local,
struct sta_info *sta)
{
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (local->ops->sync_rx_queues) {
trace_drv_sync_rx_queues(local, sta->sdata, &sta->sta);
local->ops->sync_rx_queues(&local->hw);
@@ -94,6 +97,7 @@ static inline int drv_suspend(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_suspend(local);
ret = local->ops->suspend(&local->hw, wowlan);
@@ -106,6 +110,7 @@ static inline int drv_resume(struct ieee80211_local *local)
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_resume(local);
ret = local->ops->resume(&local->hw);
@@ -117,6 +122,7 @@ static inline void drv_set_wakeup(struct ieee80211_local *local,
bool enabled)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!local->ops->set_wakeup)
return;
@@ -142,6 +148,7 @@ static inline int drv_config(struct ieee80211_local *local, u32 changed)
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_config(local, changed);
ret = local->ops->config(&local->hw, changed);
@@ -154,6 +161,7 @@ static inline void drv_vif_cfg_changed(struct ieee80211_local *local,
u64 changed)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -193,6 +201,7 @@ static inline void drv_configure_filter(struct ieee80211_local *local,
u64 multicast)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_configure_filter(local, changed_flags, total_flags,
multicast);
@@ -207,6 +216,7 @@ static inline void drv_config_iface_filter(struct ieee80211_local *local,
unsigned int changed_flags)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_config_iface_filter(local, sdata, filter_flags,
changed_flags);
@@ -263,6 +273,7 @@ static inline int drv_hw_scan(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -277,6 +288,7 @@ static inline void drv_cancel_hw_scan(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -295,6 +307,7 @@ drv_sched_scan_start(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -312,6 +325,7 @@ static inline int drv_sched_scan_stop(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -328,6 +342,7 @@ static inline void drv_sw_scan_start(struct ieee80211_local *local,
const u8 *mac_addr)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_sw_scan_start(local, sdata, mac_addr);
if (local->ops->sw_scan_start)
@@ -339,6 +354,7 @@ static inline void drv_sw_scan_complete(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_sw_scan_complete(local, sdata);
if (local->ops->sw_scan_complete)
@@ -352,6 +368,7 @@ static inline int drv_get_stats(struct ieee80211_local *local,
int ret = -EOPNOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (local->ops->get_stats)
ret = local->ops->get_stats(&local->hw, stats);
@@ -375,6 +392,7 @@ static inline int drv_set_frag_threshold(struct ieee80211_local *local,
int ret = 0;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_set_frag_threshold(local, value);
if (local->ops->set_frag_threshold)
@@ -389,6 +407,7 @@ static inline int drv_set_rts_threshold(struct ieee80211_local *local,
int ret = 0;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_set_rts_threshold(local, value);
if (local->ops->set_rts_threshold)
@@ -402,6 +421,7 @@ static inline int drv_set_coverage_class(struct ieee80211_local *local,
{
int ret = 0;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_set_coverage_class(local, value);
if (local->ops->set_coverage_class)
@@ -435,6 +455,7 @@ static inline int drv_sta_add(struct ieee80211_local *local,
int ret = 0;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
@@ -454,6 +475,7 @@ static inline void drv_sta_remove(struct ieee80211_local *local,
struct ieee80211_sta *sta)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
@@ -467,12 +489,30 @@ static inline void drv_sta_remove(struct ieee80211_local *local,
}
#ifdef CONFIG_MAC80211_DEBUGFS
+static inline void drv_vif_add_debugfs(struct ieee80211_local *local,
+ struct ieee80211_sub_if_data *sdata)
+{
+ might_sleep();
+
+ if (sdata->vif.type == NL80211_IFTYPE_MONITOR ||
+ WARN_ON(!sdata->vif.debugfs_dir))
+ return;
+
+ sdata = get_bss_sdata(sdata);
+ if (!check_sdata_in_driver(sdata))
+ return;
+
+ if (local->ops->vif_add_debugfs)
+ local->ops->vif_add_debugfs(&local->hw, &sdata->vif);
+}
+
static inline void drv_link_add_debugfs(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata,
struct ieee80211_bss_conf *link_conf,
struct dentry *dir)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
@@ -489,6 +529,7 @@ static inline void drv_sta_add_debugfs(struct ieee80211_local *local,
struct dentry *dir)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
@@ -505,6 +546,7 @@ static inline void drv_link_sta_add_debugfs(struct ieee80211_local *local,
struct dentry *dir)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
@@ -514,6 +556,12 @@ static inline void drv_link_sta_add_debugfs(struct ieee80211_local *local,
local->ops->link_sta_add_debugfs(&local->hw, &sdata->vif,
link_sta, dir);
}
+#else
+static inline void drv_vif_add_debugfs(struct ieee80211_local *local,
+ struct ieee80211_sub_if_data *sdata)
+{
+ might_sleep();
+}
#endif
static inline void drv_sta_pre_rcu_remove(struct ieee80211_local *local,
@@ -521,6 +569,7 @@ static inline void drv_sta_pre_rcu_remove(struct ieee80211_local *local,
struct sta_info *sta)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
@@ -569,6 +618,9 @@ static inline void drv_sta_statistics(struct ieee80211_local *local,
struct ieee80211_sta *sta,
struct station_info *sinfo)
{
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
sdata = get_bss_sdata(sdata);
if (!check_sdata_in_driver(sdata))
return;
@@ -599,6 +651,7 @@ static inline int drv_tx_last_beacon(struct ieee80211_local *local)
int ret = 0; /* default unsupported op for less congestion */
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_tx_last_beacon(local);
if (local->ops->tx_last_beacon)
@@ -616,6 +669,9 @@ static inline int drv_get_survey(struct ieee80211_local *local, int idx,
{
int ret = -EOPNOTSUPP;
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
trace_drv_get_survey(local, idx, survey);
if (local->ops->get_survey)
@@ -629,6 +685,7 @@ static inline int drv_get_survey(struct ieee80211_local *local, int idx,
static inline void drv_rfkill_poll(struct ieee80211_local *local)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (local->ops->rfkill_poll)
local->ops->rfkill_poll(&local->hw);
@@ -641,6 +698,7 @@ static inline void drv_flush(struct ieee80211_local *local,
struct ieee80211_vif *vif = sdata ? &sdata->vif : NULL;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (sdata && !check_sdata_in_driver(sdata))
return;
@@ -656,6 +714,7 @@ static inline void drv_flush_sta(struct ieee80211_local *local,
struct sta_info *sta)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (sdata && !check_sdata_in_driver(sdata))
return;
@@ -671,6 +730,7 @@ static inline void drv_channel_switch(struct ieee80211_local *local,
struct ieee80211_channel_switch *ch_switch)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_channel_switch(local, sdata, ch_switch);
local->ops->channel_switch(&local->hw, &sdata->vif, ch_switch);
@@ -683,6 +743,7 @@ static inline int drv_set_antenna(struct ieee80211_local *local,
{
int ret = -EOPNOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (local->ops->set_antenna)
ret = local->ops->set_antenna(&local->hw, tx_ant, rx_ant);
trace_drv_set_antenna(local, tx_ant, rx_ant, ret);
@@ -694,6 +755,7 @@ static inline int drv_get_antenna(struct ieee80211_local *local,
{
int ret = -EOPNOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (local->ops->get_antenna)
ret = local->ops->get_antenna(&local->hw, tx_ant, rx_ant);
trace_drv_get_antenna(local, *tx_ant, *rx_ant, ret);
@@ -709,6 +771,7 @@ static inline int drv_remain_on_channel(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_remain_on_channel(local, sdata, chan, duration, type);
ret = local->ops->remain_on_channel(&local->hw, &sdata->vif,
@@ -725,6 +788,7 @@ drv_cancel_remain_on_channel(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_cancel_remain_on_channel(local, sdata);
ret = local->ops->cancel_remain_on_channel(&local->hw, &sdata->vif);
@@ -739,6 +803,7 @@ static inline int drv_set_ringparam(struct ieee80211_local *local,
int ret = -ENOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_set_ringparam(local, tx, rx);
if (local->ops->set_ringparam)
@@ -752,6 +817,7 @@ static inline void drv_get_ringparam(struct ieee80211_local *local,
u32 *tx, u32 *tx_max, u32 *rx, u32 *rx_max)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_get_ringparam(local, tx, tx_max, rx, rx_max);
if (local->ops->get_ringparam)
@@ -764,6 +830,7 @@ static inline bool drv_tx_frames_pending(struct ieee80211_local *local)
bool ret = false;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_tx_frames_pending(local);
if (local->ops->tx_frames_pending)
@@ -780,6 +847,7 @@ static inline int drv_set_bitrate_mask(struct ieee80211_local *local,
int ret = -EOPNOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -797,6 +865,9 @@ static inline void drv_set_rekey_data(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata,
struct cfg80211_gtk_rekey_data *data)
{
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!check_sdata_in_driver(sdata))
return;
@@ -851,11 +922,13 @@ static inline void drv_mgd_prepare_tx(struct ieee80211_local *local,
struct ieee80211_prep_tx_info *info)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
WARN_ON_ONCE(sdata->vif.type != NL80211_IFTYPE_STATION);
+ info->link_id = info->link_id < 0 ? 0 : info->link_id;
trace_drv_mgd_prepare_tx(local, sdata, info->duration,
info->subtype, info->success);
if (local->ops->mgd_prepare_tx)
@@ -868,6 +941,7 @@ static inline void drv_mgd_complete_tx(struct ieee80211_local *local,
struct ieee80211_prep_tx_info *info)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -882,17 +956,22 @@ static inline void drv_mgd_complete_tx(struct ieee80211_local *local,
static inline void
drv_mgd_protect_tdls_discover(struct ieee80211_local *local,
- struct ieee80211_sub_if_data *sdata)
+ struct ieee80211_sub_if_data *sdata,
+ int link_id)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
WARN_ON_ONCE(sdata->vif.type != NL80211_IFTYPE_STATION);
+ link_id = link_id > 0 ? link_id : 0;
+
trace_drv_mgd_protect_tdls_discover(local, sdata);
if (local->ops->mgd_protect_tdls_discover)
- local->ops->mgd_protect_tdls_discover(&local->hw, &sdata->vif);
+ local->ops->mgd_protect_tdls_discover(&local->hw, &sdata->vif,
+ link_id);
trace_drv_return_void(local);
}
@@ -902,6 +981,7 @@ static inline int drv_add_chanctx(struct ieee80211_local *local,
int ret = -EOPNOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_add_chanctx(local, ctx);
if (local->ops->add_chanctx)
@@ -917,6 +997,7 @@ static inline void drv_remove_chanctx(struct ieee80211_local *local,
struct ieee80211_chanctx *ctx)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (WARN_ON(!ctx->driver_present))
return;
@@ -933,6 +1014,7 @@ static inline void drv_change_chanctx(struct ieee80211_local *local,
u32 changed)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_change_chanctx(local, ctx, changed);
if (local->ops->change_chanctx) {
@@ -942,14 +1024,6 @@ static inline void drv_change_chanctx(struct ieee80211_local *local,
trace_drv_return_void(local);
}
-static inline void drv_verify_link_exists(struct ieee80211_sub_if_data *sdata,
- struct ieee80211_bss_conf *link_conf)
-{
- /* deflink always exists, so need to check only for other links */
- if (sdata->deflink.conf != link_conf)
- sdata_assert_lock(sdata);
-}
-
int drv_assign_vif_chanctx(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata,
struct ieee80211_bss_conf *link_conf,
@@ -968,10 +1042,8 @@ static inline int drv_start_ap(struct ieee80211_local *local,
{
int ret = 0;
- /* make sure link_conf is protected */
- drv_verify_link_exists(sdata, link_conf);
-
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -987,8 +1059,8 @@ static inline void drv_stop_ap(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata,
struct ieee80211_bss_conf *link_conf)
{
- /* make sure link_conf is protected */
- drv_verify_link_exists(sdata, link_conf);
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -1004,6 +1076,7 @@ drv_reconfig_complete(struct ieee80211_local *local,
enum ieee80211_reconfig_type reconfig_type)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
trace_drv_reconfig_complete(local, reconfig_type);
if (local->ops->reconfig_complete)
@@ -1016,6 +1089,9 @@ drv_set_default_unicast_key(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata,
int key_idx)
{
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!check_sdata_in_driver(sdata))
return;
@@ -1046,6 +1122,9 @@ drv_channel_switch_beacon(struct ieee80211_sub_if_data *sdata,
{
struct ieee80211_local *local = sdata->local;
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (local->ops->channel_switch_beacon) {
trace_drv_channel_switch_beacon(local, sdata, chandef);
local->ops->channel_switch_beacon(&local->hw, &sdata->vif,
@@ -1060,6 +1139,9 @@ drv_pre_channel_switch(struct ieee80211_sub_if_data *sdata,
struct ieee80211_local *local = sdata->local;
int ret = 0;
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -1072,17 +1154,22 @@ drv_pre_channel_switch(struct ieee80211_sub_if_data *sdata,
}
static inline int
-drv_post_channel_switch(struct ieee80211_sub_if_data *sdata)
+drv_post_channel_switch(struct ieee80211_link_data *link)
{
+ struct ieee80211_sub_if_data *sdata = link->sdata;
struct ieee80211_local *local = sdata->local;
int ret = 0;
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!check_sdata_in_driver(sdata))
return -EIO;
trace_drv_post_channel_switch(local, sdata);
if (local->ops->post_channel_switch)
- ret = local->ops->post_channel_switch(&local->hw, &sdata->vif);
+ ret = local->ops->post_channel_switch(&local->hw, &sdata->vif,
+ link->conf);
trace_drv_return_int(local, ret);
return ret;
}
@@ -1092,6 +1179,9 @@ drv_abort_channel_switch(struct ieee80211_sub_if_data *sdata)
{
struct ieee80211_local *local = sdata->local;
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!check_sdata_in_driver(sdata))
return;
@@ -1107,6 +1197,9 @@ drv_channel_switch_rx_beacon(struct ieee80211_sub_if_data *sdata,
{
struct ieee80211_local *local = sdata->local;
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!check_sdata_in_driver(sdata))
return;
@@ -1122,6 +1215,7 @@ static inline int drv_join_ibss(struct ieee80211_local *local,
int ret = 0;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -1136,6 +1230,7 @@ static inline void drv_leave_ibss(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -1163,6 +1258,9 @@ static inline int drv_get_txpower(struct ieee80211_local *local,
{
int ret;
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!local->ops->get_txpower)
return -EOPNOTSUPP;
@@ -1182,6 +1280,7 @@ drv_tdls_channel_switch(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -1202,6 +1301,7 @@ drv_tdls_cancel_channel_switch(struct ieee80211_local *local,
struct ieee80211_sta *sta)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -1267,6 +1367,11 @@ drv_get_ftm_responder_stats(struct ieee80211_local *local,
{
u32 ret = -EOPNOTSUPP;
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
+ if (!check_sdata_in_driver(sdata))
+ return -EIO;
+
if (local->ops->get_ftm_responder_stats)
ret = local->ops->get_ftm_responder_stats(&local->hw,
&sdata->vif,
@@ -1283,6 +1388,7 @@ static inline int drv_start_pmsr(struct ieee80211_local *local,
int ret = -EOPNOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return -EIO;
@@ -1302,6 +1408,7 @@ static inline void drv_abort_pmsr(struct ieee80211_local *local,
trace_drv_abort_pmsr(local, sdata);
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -1317,6 +1424,7 @@ static inline int drv_start_nan(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
check_sdata_in_driver(sdata);
trace_drv_start_nan(local, sdata, conf);
@@ -1329,6 +1437,7 @@ static inline void drv_stop_nan(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
check_sdata_in_driver(sdata);
trace_drv_stop_nan(local, sdata);
@@ -1344,6 +1453,7 @@ static inline int drv_nan_change_conf(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
check_sdata_in_driver(sdata);
if (!local->ops->nan_change_conf)
@@ -1364,6 +1474,7 @@ static inline int drv_add_nan_func(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
check_sdata_in_driver(sdata);
if (!local->ops->add_nan_func)
@@ -1381,6 +1492,7 @@ static inline void drv_del_nan_func(struct ieee80211_local *local,
u8 instance_id)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
check_sdata_in_driver(sdata);
trace_drv_del_nan_func(local, sdata, instance_id);
@@ -1397,6 +1509,7 @@ static inline int drv_set_tid_config(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
ret = local->ops->set_tid_config(&local->hw, &sdata->vif, sta,
tid_conf);
trace_drv_return_int(local, ret);
@@ -1411,6 +1524,7 @@ static inline int drv_reset_tid_config(struct ieee80211_local *local,
int ret;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
ret = local->ops->reset_tid_config(&local->hw, &sdata->vif, sta, tids);
trace_drv_return_int(local, ret);
@@ -1421,6 +1535,7 @@ static inline void drv_update_vif_offload(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
check_sdata_in_driver(sdata);
if (!local->ops->update_vif_offload)
@@ -1436,6 +1551,9 @@ static inline void drv_sta_set_4addr(struct ieee80211_local *local,
struct ieee80211_sta *sta, bool enabled)
{
sdata = get_bss_sdata(sdata);
+
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -1451,6 +1569,9 @@ static inline void drv_sta_set_decap_offload(struct ieee80211_local *local,
bool enabled)
{
sdata = get_bss_sdata(sdata);
+
+ might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -1469,6 +1590,7 @@ static inline void drv_add_twt_setup(struct ieee80211_local *local,
struct ieee80211_twt_params *twt_agrt;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -1486,6 +1608,7 @@ static inline void drv_twt_teardown_request(struct ieee80211_local *local,
u8 flowid)
{
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!check_sdata_in_driver(sdata))
return;
@@ -1526,6 +1649,8 @@ static inline int drv_net_setup_tc(struct ieee80211_local *local,
{
int ret = -EOPNOTSUPP;
+ might_sleep();
+
sdata = get_bss_sdata(sdata);
trace_drv_net_setup_tc(local, sdata, type);
if (local->ops->net_setup_tc)
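[Editor's note, not part of the patch] All of the drv_*() wrappers above now pair might_sleep() with lockdep_assert_wiphy(). Per the lockdep_is_held(&local->hw.wiphy->mtx) uses elsewhere in this series, the assertion boils down to checking the wiphy's own mutex; an equivalent open-coded sketch:

static inline void example_assert_wiphy_held(struct ieee80211_local *local)
{
        /* what lockdep_assert_wiphy(local->hw.wiphy) amounts to */
        lockdep_assert_held(&local->hw.wiphy->mtx);
}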
diff --git a/net/mac80211/drop.h b/net/mac80211/drop.h
index 49dc809cab29..12a6f0e9eca6 100644
--- a/net/mac80211/drop.h
+++ b/net/mac80211/drop.h
@@ -18,9 +18,54 @@ typedef unsigned int __bitwise ieee80211_rx_result;
/* this line for the trailing \ - add before this */
#define MAC80211_DROP_REASONS_UNUSABLE(R) \
+ /* 0x00 == ___RX_DROP_UNUSABLE */ \
R(RX_DROP_U_MIC_FAIL) \
R(RX_DROP_U_REPLAY) \
R(RX_DROP_U_BAD_MMIE) \
+ R(RX_DROP_U_DUP) \
+ R(RX_DROP_U_SPURIOUS) \
+ R(RX_DROP_U_DECRYPT_FAIL) \
+ R(RX_DROP_U_NO_KEY_ID) \
+ R(RX_DROP_U_BAD_CIPHER) \
+ R(RX_DROP_U_OOM) \
+ R(RX_DROP_U_NONSEQ_PN) \
+ R(RX_DROP_U_BAD_KEY_COLOR) \
+ R(RX_DROP_U_BAD_4ADDR) \
+ R(RX_DROP_U_BAD_AMSDU) \
+ R(RX_DROP_U_BAD_AMSDU_CIPHER) \
+ R(RX_DROP_U_INVALID_8023) \
+ /* 0x10 */ \
+ R(RX_DROP_U_RUNT_ACTION) \
+ R(RX_DROP_U_UNPROT_ACTION) \
+ R(RX_DROP_U_UNPROT_DUAL) \
+ R(RX_DROP_U_UNPROT_UCAST_MGMT) \
+ R(RX_DROP_U_UNPROT_MCAST_MGMT) \
+ R(RX_DROP_U_UNPROT_BEACON) \
+ R(RX_DROP_U_UNPROT_UNICAST_PUB_ACTION) \
+ R(RX_DROP_U_UNPROT_ROBUST_ACTION) \
+ R(RX_DROP_U_ACTION_UNKNOWN_SRC) \
+ R(RX_DROP_U_REJECTED_ACTION_RESPONSE) \
+ R(RX_DROP_U_EXPECT_DEFRAG_PROT) \
+ R(RX_DROP_U_WEP_DEC_FAIL) \
+ R(RX_DROP_U_NO_IV) \
+ R(RX_DROP_U_NO_ICV) \
+ R(RX_DROP_U_AP_RX_GROUPCAST) \
+ R(RX_DROP_U_SHORT_MMIC) \
+ /* 0x20 */ \
+ R(RX_DROP_U_MMIC_FAIL) \
+ R(RX_DROP_U_SHORT_TKIP) \
+ R(RX_DROP_U_TKIP_FAIL) \
+ R(RX_DROP_U_SHORT_CCMP) \
+ R(RX_DROP_U_SHORT_CCMP_MIC) \
+ R(RX_DROP_U_SHORT_GCMP) \
+ R(RX_DROP_U_SHORT_GCMP_MIC) \
+ R(RX_DROP_U_SHORT_CMAC) \
+ R(RX_DROP_U_SHORT_CMAC256) \
+ R(RX_DROP_U_SHORT_GMAC) \
+ R(RX_DROP_U_UNEXPECTED_VLAN_4ADDR) \
+ R(RX_DROP_U_UNEXPECTED_STA_4ADDR) \
+ R(RX_DROP_U_UNEXPECTED_VLAN_MCAST) \
+ R(RX_DROP_U_NOT_PORT_CONTROL) \
/* this line for the trailing \ - add before this */
/* having two enums allows for checking ieee80211_rx_result use with sparse */
@@ -46,11 +91,13 @@ enum mac80211_drop_reason {
RX_CONTINUE = (__force ieee80211_rx_result)___RX_CONTINUE,
RX_QUEUED = (__force ieee80211_rx_result)___RX_QUEUED,
RX_DROP_MONITOR = (__force ieee80211_rx_result)___RX_DROP_MONITOR,
- RX_DROP_UNUSABLE = (__force ieee80211_rx_result)___RX_DROP_UNUSABLE,
#define DEF(x) x = (__force ieee80211_rx_result)___ ## x,
MAC80211_DROP_REASONS_MONITOR(DEF)
MAC80211_DROP_REASONS_UNUSABLE(DEF)
#undef DEF
};
+#define RX_RES_IS_UNUSABLE(result) \
+ (((__force u32)(result) & SKB_DROP_REASON_SUBSYS_MASK) == ___RX_DROP_UNUSABLE)
+
#endif /* MAC80211_DROP_H */
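[Editor's sketch, not part of the patch] drop.h now enumerates every "unusable" drop reason and adds RX_RES_IS_UNUSABLE() in place of the single RX_DROP_UNUSABLE value. A brief usage sketch (example_should_free_skb is illustrative):

static bool example_should_free_skb(ieee80211_rx_result res)
{
        /* true only for the RX_DROP_U_* class, per the subsystem bits */
        return RX_RES_IS_UNUSABLE(res);
}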
diff --git a/net/mac80211/ethtool.c b/net/mac80211/ethtool.c
index a3830d925cc2..99f6174a9d69 100644
--- a/net/mac80211/ethtool.c
+++ b/net/mac80211/ethtool.c
@@ -5,7 +5,7 @@
* Copied from cfg.c - originally
* Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net>
* Copyright 2014 Intel Corporation (Author: Johannes Berg)
- * Copyright (C) 2018, 2022 Intel Corporation
+ * Copyright (C) 2018, 2022-2023 Intel Corporation
*/
#include <linux/types.h>
#include <net/cfg80211.h>
@@ -19,11 +19,16 @@ static int ieee80211_set_ringparam(struct net_device *dev,
struct netlink_ext_ack *extack)
{
struct ieee80211_local *local = wiphy_priv(dev->ieee80211_ptr->wiphy);
+ int ret;
if (rp->rx_mini_pending != 0 || rp->rx_jumbo_pending != 0)
return -EINVAL;
- return drv_set_ringparam(local, rp->tx_pending, rp->rx_pending);
+ wiphy_lock(local->hw.wiphy);
+ ret = drv_set_ringparam(local, rp->tx_pending, rp->rx_pending);
+ wiphy_unlock(local->hw.wiphy);
+
+ return ret;
}
static void ieee80211_get_ringparam(struct net_device *dev,
@@ -35,8 +40,10 @@ static void ieee80211_get_ringparam(struct net_device *dev,
memset(rp, 0, sizeof(*rp));
+ wiphy_lock(local->hw.wiphy);
drv_get_ringparam(local, &rp->tx_pending, &rp->tx_max_pending,
&rp->rx_pending, &rp->rx_max_pending);
+ wiphy_unlock(local->hw.wiphy);
}
static const char ieee80211_gstrings_sta_stats[][ETH_GSTRING_LEN] = {
@@ -102,7 +109,7 @@ static void ieee80211_get_stats(struct net_device *dev,
* network device.
*/
- mutex_lock(&local->sta_mtx);
+ wiphy_lock(local->hw.wiphy);
if (sdata->vif.type == NL80211_IFTYPE_STATION) {
sta = sta_info_get_bss(sdata, sdata->deflink.u.mgd.bssid);
@@ -198,12 +205,13 @@ do_survey:
else
data[i++] = -1LL;
- mutex_unlock(&local->sta_mtx);
-
- if (WARN_ON(i != STA_STATS_LEN))
+ if (WARN_ON(i != STA_STATS_LEN)) {
+ wiphy_unlock(local->hw.wiphy);
return;
+ }
drv_get_et_stats(sdata, stats, &(data[STA_STATS_LEN]));
+ wiphy_unlock(local->hw.wiphy);
}
static void ieee80211_get_strings(struct net_device *dev, u32 sset, u8 *data)
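[Editor's sketch, not part of the patch] ethtool.c takes the wiphy mutex explicitly because these entry points do not come in via cfg80211, which takes the mutex itself before calling into mac80211. The same bracketing applies to any other drv_*() call reached from such a path; an illustrative (hypothetical) wrapper:

static int example_read_antenna(struct ieee80211_local *local,
                                u32 *tx_ant, u32 *rx_ant)
{
        int ret;

        wiphy_lock(local->hw.wiphy);
        ret = drv_get_antenna(local, tx_ant, rx_ant); /* asserts the wiphy mutex */
        wiphy_unlock(local->hw.wiphy);

        return ret;
}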
diff --git a/net/mac80211/ht.c b/net/mac80211/ht.c
index 33729870ad8a..68cea2685224 100644
--- a/net/mac80211/ht.c
+++ b/net/mac80211/ht.c
@@ -316,16 +316,16 @@ void ieee80211_sta_tear_down_BA_sessions(struct sta_info *sta,
{
int i;
- mutex_lock(&sta->ampdu_mlme.mtx);
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
+
for (i = 0; i < IEEE80211_NUM_TIDS; i++)
- ___ieee80211_stop_rx_ba_session(sta, i, WLAN_BACK_RECIPIENT,
- WLAN_REASON_QSTA_LEAVE_QBSS,
- reason != AGG_STOP_DESTROY_STA &&
- reason != AGG_STOP_PEER_REQUEST);
+ __ieee80211_stop_rx_ba_session(sta, i, WLAN_BACK_RECIPIENT,
+ WLAN_REASON_QSTA_LEAVE_QBSS,
+ reason != AGG_STOP_DESTROY_STA &&
+ reason != AGG_STOP_PEER_REQUEST);
for (i = 0; i < IEEE80211_NUM_TIDS; i++)
- ___ieee80211_stop_tx_ba_session(sta, i, reason);
- mutex_unlock(&sta->ampdu_mlme.mtx);
+ __ieee80211_stop_tx_ba_session(sta, i, reason);
/*
* In case the tear down is part of a reconfigure due to HW restart
@@ -333,9 +333,8 @@ void ieee80211_sta_tear_down_BA_sessions(struct sta_info *sta,
* the BA session, so handle it to properly clean tid_tx data.
*/
if (reason == AGG_STOP_DESTROY_STA) {
- cancel_work_sync(&sta->ampdu_mlme.work);
+ wiphy_work_cancel(sta->local->hw.wiphy, &sta->ampdu_mlme.work);
- mutex_lock(&sta->ampdu_mlme.mtx);
for (i = 0; i < IEEE80211_NUM_TIDS; i++) {
struct tid_ampdu_tx *tid_tx =
rcu_dereference_protected_tid_tx(sta, i);
@@ -346,11 +345,10 @@ void ieee80211_sta_tear_down_BA_sessions(struct sta_info *sta,
if (test_and_clear_bit(HT_AGG_STATE_STOP_CB, &tid_tx->state))
ieee80211_stop_tx_ba_cb(sta, i, tid_tx);
}
- mutex_unlock(&sta->ampdu_mlme.mtx);
}
}
-void ieee80211_ba_session_work(struct work_struct *work)
+void ieee80211_ba_session_work(struct wiphy *wiphy, struct wiphy_work *work)
{
struct sta_info *sta =
container_of(work, struct sta_info, ampdu_mlme.work);
@@ -358,32 +356,33 @@ void ieee80211_ba_session_work(struct work_struct *work)
bool blocked;
int tid;
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
+
/* When this flag is set, new sessions should be blocked. */
blocked = test_sta_flag(sta, WLAN_STA_BLOCK_BA);
- mutex_lock(&sta->ampdu_mlme.mtx);
for (tid = 0; tid < IEEE80211_NUM_TIDS; tid++) {
if (test_and_clear_bit(tid, sta->ampdu_mlme.tid_rx_timer_expired))
- ___ieee80211_stop_rx_ba_session(
+ __ieee80211_stop_rx_ba_session(
sta, tid, WLAN_BACK_RECIPIENT,
WLAN_REASON_QSTA_TIMEOUT, true);
if (test_and_clear_bit(tid,
sta->ampdu_mlme.tid_rx_stop_requested))
- ___ieee80211_stop_rx_ba_session(
+ __ieee80211_stop_rx_ba_session(
sta, tid, WLAN_BACK_RECIPIENT,
WLAN_REASON_UNSPECIFIED, true);
if (!blocked &&
test_and_clear_bit(tid,
sta->ampdu_mlme.tid_rx_manage_offl))
- ___ieee80211_start_rx_ba_session(sta, 0, 0, 0, 1, tid,
- IEEE80211_MAX_AMPDU_BUF_HT,
- false, true, NULL);
+ __ieee80211_start_rx_ba_session(sta, 0, 0, 0, 1, tid,
+ IEEE80211_MAX_AMPDU_BUF_HT,
+ false, true, NULL);
if (test_and_clear_bit(tid + IEEE80211_NUM_TIDS,
sta->ampdu_mlme.tid_rx_manage_offl))
- ___ieee80211_stop_rx_ba_session(
+ __ieee80211_stop_rx_ba_session(
sta, tid, WLAN_BACK_RECIPIENT,
0, false);
@@ -414,9 +413,7 @@ void ieee80211_ba_session_work(struct work_struct *work)
*/
synchronize_net();
- mutex_unlock(&sta->ampdu_mlme.mtx);
-
- ieee80211_queue_work(&sdata->local->hw, work);
+ wiphy_work_queue(sdata->local->hw.wiphy, work);
return;
}
@@ -448,12 +445,11 @@ void ieee80211_ba_session_work(struct work_struct *work)
test_and_clear_bit(HT_AGG_STATE_START_CB, &tid_tx->state))
ieee80211_start_tx_ba_cb(sta, tid, tid_tx);
if (test_and_clear_bit(HT_AGG_STATE_WANT_STOP, &tid_tx->state))
- ___ieee80211_stop_tx_ba_session(sta, tid,
- AGG_STOP_LOCAL_REQUEST);
+ __ieee80211_stop_tx_ba_session(sta, tid,
+ AGG_STOP_LOCAL_REQUEST);
if (test_and_clear_bit(HT_AGG_STATE_STOP_CB, &tid_tx->state))
ieee80211_stop_tx_ba_cb(sta, tid, tid_tx);
}
- mutex_unlock(&sta->ampdu_mlme.mtx);
}
void ieee80211_send_delba(struct ieee80211_sub_if_data *sdata,
@@ -538,11 +534,13 @@ ieee80211_smps_mode_to_smps_mode(enum ieee80211_smps_mode smps)
int ieee80211_send_smps_action(struct ieee80211_sub_if_data *sdata,
enum ieee80211_smps_mode smps, const u8 *da,
- const u8 *bssid)
+ const u8 *bssid, int link_id)
{
struct ieee80211_local *local = sdata->local;
struct sk_buff *skb;
struct ieee80211_mgmt *action_frame;
+ struct ieee80211_tx_info *info;
+ u8 status_link_id = link_id < 0 ? 0 : link_id;
/* 27 = header + category + action + smps mode */
skb = dev_alloc_skb(27 + local->hw.extra_tx_headroom);
@@ -562,6 +560,7 @@ int ieee80211_send_smps_action(struct ieee80211_sub_if_data *sdata,
case IEEE80211_SMPS_AUTOMATIC:
case IEEE80211_SMPS_NUM_MODES:
WARN_ON(1);
+ smps = IEEE80211_SMPS_OFF;
fallthrough;
case IEEE80211_SMPS_OFF:
action_frame->u.action.u.ht_smps.smps_control =
@@ -578,8 +577,13 @@ int ieee80211_send_smps_action(struct ieee80211_sub_if_data *sdata,
}
/* we'll do more on status of this frame */
- IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
- ieee80211_tx_skb(sdata, skb);
+ info = IEEE80211_SKB_CB(skb);
+ info->flags |= IEEE80211_TX_CTL_REQ_TX_STATUS;
+ /* we have 12 bits, and need 6: link_id 4, smps 2 */
+ info->status_data = IEEE80211_STATUS_TYPE_SMPS |
+ u16_encode_bits(status_link_id << 2 | smps,
+ IEEE80211_STATUS_SUBDATA_MASK);
+ ieee80211_tx_skb_tid(sdata, skb, 7, link_id);
return 0;
}
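The SMPS status handling above packs a 4-bit status type plus 8 bits of subdata (link_id in bits 2..5, SMPS mode in bits 0..1) into the skb's status_data. A small stand-alone sketch of that encoding, using local copies of the masks from the enum added to ieee80211_i.h further down, might be:

#include <linux/bitfield.h>
#include <linux/types.h>

/* local copies of the values from enum ieee80211_status_data */
#define STATUS_TYPE_SMPS	1
#define STATUS_SUBDATA_MASK	0xff0

static u16 smps_status_data(u8 link_id, u8 smps)
{
	/* subdata layout: link_id in bits 2..5, smps mode in bits 0..1 */
	return STATUS_TYPE_SMPS |
	       u16_encode_bits(link_id << 2 | smps, STATUS_SUBDATA_MASK);
}

static void smps_status_unpack(u16 status_data, u8 *link_id, u8 *smps)
{
	u16 sub = u16_get_bits(status_data, STATUS_SUBDATA_MASK);

	*link_id = sub >> 2;
	*smps = sub & 0x3;
}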
diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
index 5542c93edfba..8b1e02f2f9ae 100644
--- a/net/mac80211/ibss.c
+++ b/net/mac80211/ibss.c
@@ -51,7 +51,6 @@ ieee80211_ibss_build_presp(struct ieee80211_sub_if_data *sdata,
u32 rate_flags, rates = 0, rates_added = 0;
struct beacon_data *presp;
int frame_len;
- int shift;
/* Build IBSS probe response */
frame_len = sizeof(struct ieee80211_hdr_3addr) +
@@ -92,7 +91,6 @@ ieee80211_ibss_build_presp(struct ieee80211_sub_if_data *sdata,
sband = local->hw.wiphy->bands[chandef->chan->band];
rate_flags = ieee80211_chandef_rate_flags(chandef);
- shift = ieee80211_chandef_get_shift(chandef);
rates_n = 0;
if (have_higher_than_11mbit)
*have_higher_than_11mbit = false;
@@ -111,8 +109,7 @@ ieee80211_ibss_build_presp(struct ieee80211_sub_if_data *sdata,
*pos++ = WLAN_EID_SUPP_RATES;
*pos++ = min_t(int, 8, rates_n);
for (ri = 0; ri < sband->n_bitrates; ri++) {
- int rate = DIV_ROUND_UP(sband->bitrates[ri].bitrate,
- 5 * (1 << shift));
+ int rate = DIV_ROUND_UP(sband->bitrates[ri].bitrate, 5);
u8 basic = 0;
if (!(rates & BIT(ri)))
continue;
@@ -155,8 +152,7 @@ ieee80211_ibss_build_presp(struct ieee80211_sub_if_data *sdata,
*pos++ = WLAN_EID_EXT_SUPP_RATES;
*pos++ = rates_n - 8;
for (; ri < sband->n_bitrates; ri++) {
- int rate = DIV_ROUND_UP(sband->bitrates[ri].bitrate,
- 5 * (1 << shift));
+ int rate = DIV_ROUND_UP(sband->bitrates[ri].bitrate, 5);
u8 basic = 0;
if (!(rates & BIT(ri)))
continue;
@@ -235,7 +231,7 @@ static void __ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
bool radar_required;
int err;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(local->hw.wiphy);
/* Reset own TSF to allow time synchronization work. */
drv_reset_tsf(local, sdata);
@@ -299,17 +295,14 @@ static void __ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
radar_required = err;
- mutex_lock(&local->mtx);
if (ieee80211_link_use_channel(&sdata->deflink, &chandef,
ifibss->fixed_channel ?
IEEE80211_CHANCTX_SHARED :
IEEE80211_CHANCTX_EXCLUSIVE)) {
sdata_info(sdata, "Failed to join IBSS, no channel context\n");
- mutex_unlock(&local->mtx);
return;
}
sdata->deflink.radar_required = radar_required;
- mutex_unlock(&local->mtx);
memcpy(ifibss->bssid, bssid, ETH_ALEN);
@@ -367,9 +360,7 @@ static void __ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
sdata->vif.cfg.ssid_len = 0;
RCU_INIT_POINTER(ifibss->presp, NULL);
kfree_rcu(presp, rcu_head);
- mutex_lock(&local->mtx);
ieee80211_link_release_channel(&sdata->deflink);
- mutex_unlock(&local->mtx);
sdata_info(sdata, "Failed to join IBSS, driver failure: %d\n",
err);
return;
@@ -382,7 +373,6 @@ static void __ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
round_jiffies(jiffies + IEEE80211_IBSS_MERGE_INTERVAL));
bss_meta.chan = chan;
- bss_meta.scan_width = cfg80211_chandef_to_scan_width(&chandef);
bss = cfg80211_inform_bss_frame_data(local->hw.wiphy, &bss_meta, mgmt,
presp->head_len, GFP_KERNEL);
@@ -405,9 +395,8 @@ static void ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
enum nl80211_channel_type chan_type;
u64 tsf;
u32 rate_flags;
- int shift;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (beacon_int < 10)
beacon_int = 10;
@@ -440,7 +429,6 @@ static void ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
sband = sdata->local->hw.wiphy->bands[cbss->channel->band];
rate_flags = ieee80211_chandef_rate_flags(&sdata->u.ibss.chandef);
- shift = ieee80211_vif_get_shift(&sdata->vif);
basic_rates = 0;
@@ -454,8 +442,7 @@ static void ieee80211_sta_join_ibss(struct ieee80211_sub_if_data *sdata,
!= rate_flags)
continue;
- brate = DIV_ROUND_UP(sband->bitrates[j].bitrate,
- 5 * (1 << shift));
+ brate = DIV_ROUND_UP(sband->bitrates[j].bitrate, 5);
if (brate == rate) {
if (is_basic)
basic_rates |= BIT(j);
@@ -488,7 +475,7 @@ int ieee80211_ibss_csa_beacon(struct ieee80211_sub_if_data *sdata,
u16 capability = WLAN_CAPABILITY_IBSS;
u64 tsf;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (ifibss->privacy)
capability |= WLAN_CAPABILITY_PRIVACY;
@@ -530,7 +517,7 @@ int ieee80211_ibss_finish_csa(struct ieee80211_sub_if_data *sdata, u64 *changed)
struct ieee80211_if_ibss *ifibss = &sdata->u.ibss;
struct cfg80211_bss *cbss;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
/* When not connected/joined, sending CSA doesn't make sense. */
if (ifibss->state != IEEE80211_IBSS_MLME_JOINED)
@@ -600,7 +587,6 @@ ieee80211_ibss_add_sta(struct ieee80211_sub_if_data *sdata, const u8 *bssid,
struct sta_info *sta;
struct ieee80211_chanctx_conf *chanctx_conf;
struct ieee80211_supported_band *sband;
- enum nl80211_bss_scan_width scan_width;
int band;
/*
@@ -629,7 +615,6 @@ ieee80211_ibss_add_sta(struct ieee80211_sub_if_data *sdata, const u8 *bssid,
if (WARN_ON_ONCE(!chanctx_conf))
return NULL;
band = chanctx_conf->def.chan->band;
- scan_width = cfg80211_chandef_to_scan_width(&chanctx_conf->def);
rcu_read_unlock();
sta = sta_info_alloc(sdata, addr, GFP_KERNEL);
@@ -641,7 +626,7 @@ ieee80211_ibss_add_sta(struct ieee80211_sub_if_data *sdata, const u8 *bssid,
/* make sure mandatory rates are always added */
sband = local->hw.wiphy->bands[band];
sta->sta.deflink.supp_rates[band] = supp_rates |
- ieee80211_mandatory_rates(sband, scan_width);
+ ieee80211_mandatory_rates(sband);
return ieee80211_ibss_finish_sta(sta);
}
@@ -652,7 +637,7 @@ static int ieee80211_sta_active_ibss(struct ieee80211_sub_if_data *sdata)
int active = 0;
struct sta_info *sta;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
rcu_read_lock();
@@ -680,6 +665,8 @@ static void ieee80211_ibss_disconnect(struct ieee80211_sub_if_data *sdata)
struct beacon_data *presp;
struct sta_info *sta;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!is_zero_ether_addr(ifibss->bssid)) {
cbss = cfg80211_get_bss(local->hw.wiphy, ifibss->chandef.chan,
ifibss->bssid, ifibss->ssid,
@@ -726,9 +713,7 @@ static void ieee80211_ibss_disconnect(struct ieee80211_sub_if_data *sdata)
ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_BEACON_ENABLED |
BSS_CHANGED_IBSS);
drv_leave_ibss(local, sdata);
- mutex_lock(&local->mtx);
ieee80211_link_release_channel(&sdata->deflink);
- mutex_unlock(&local->mtx);
}
static void ieee80211_csa_connection_drop_work(struct wiphy *wiphy,
@@ -738,16 +723,12 @@ static void ieee80211_csa_connection_drop_work(struct wiphy *wiphy,
container_of(work, struct ieee80211_sub_if_data,
u.ibss.csa_connection_drop_work);
- sdata_lock(sdata);
-
ieee80211_ibss_disconnect(sdata);
synchronize_rcu();
skb_queue_purge(&sdata->skb_queue);
/* trigger a scan to find another IBSS network to join */
wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
-
- sdata_unlock(sdata);
}
static void ieee80211_ibss_csa_mark_radar(struct ieee80211_sub_if_data *sdata)
@@ -779,7 +760,7 @@ ieee80211_ibss_process_chanswitch(struct ieee80211_sub_if_data *sdata,
ieee80211_conn_flags_t conn_flags;
u32 vht_cap_info = 0;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
conn_flags = IEEE80211_CONN_DISABLE_VHT;
@@ -951,7 +932,7 @@ static void ieee80211_rx_mgmt_auth_ibss(struct ieee80211_sub_if_data *sdata,
{
u16 auth_alg, auth_transaction;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (len < 24 + 6)
return;
@@ -984,7 +965,6 @@ static void ieee80211_update_sta_info(struct ieee80211_sub_if_data *sdata,
{
struct sta_info *sta;
enum nl80211_band band = rx_status->band;
- enum nl80211_bss_scan_width scan_width;
struct ieee80211_local *local = sdata->local;
struct ieee80211_supported_band *sband;
bool rates_updated = false;
@@ -1010,15 +990,9 @@ static void ieee80211_update_sta_info(struct ieee80211_sub_if_data *sdata,
u32 prev_rates;
prev_rates = sta->sta.deflink.supp_rates[band];
- /* make sure mandatory rates are always added */
- scan_width = NL80211_BSS_CHAN_WIDTH_20;
- if (rx_status->bw == RATE_INFO_BW_5)
- scan_width = NL80211_BSS_CHAN_WIDTH_5;
- else if (rx_status->bw == RATE_INFO_BW_10)
- scan_width = NL80211_BSS_CHAN_WIDTH_10;
sta->sta.deflink.supp_rates[band] = supp_rates |
- ieee80211_mandatory_rates(sband, scan_width);
+ ieee80211_mandatory_rates(sband);
if (sta->sta.deflink.supp_rates[band] != prev_rates) {
ibss_dbg(sdata,
"updated supp_rates set for %pM based on beacon/probe_resp (0x%x -> 0x%x)\n",
@@ -1205,7 +1179,6 @@ void ieee80211_ibss_rx_no_sta(struct ieee80211_sub_if_data *sdata,
struct sta_info *sta;
struct ieee80211_chanctx_conf *chanctx_conf;
struct ieee80211_supported_band *sband;
- enum nl80211_bss_scan_width scan_width;
int band;
/*
@@ -1231,7 +1204,6 @@ void ieee80211_ibss_rx_no_sta(struct ieee80211_sub_if_data *sdata,
return;
}
band = chanctx_conf->def.chan->band;
- scan_width = cfg80211_chandef_to_scan_width(&chanctx_conf->def);
rcu_read_unlock();
sta = sta_info_alloc(sdata, addr, GFP_ATOMIC);
@@ -1241,7 +1213,7 @@ void ieee80211_ibss_rx_no_sta(struct ieee80211_sub_if_data *sdata,
/* make sure mandatory rates are always added */
sband = local->hw.wiphy->bands[band];
sta->sta.deflink.supp_rates[band] = supp_rates |
- ieee80211_mandatory_rates(sband, scan_width);
+ ieee80211_mandatory_rates(sband);
spin_lock(&ifibss->incomplete_lock);
list_add(&sta->list, &ifibss->incomplete_stations);
@@ -1257,7 +1229,7 @@ static void ieee80211_ibss_sta_expire(struct ieee80211_sub_if_data *sdata)
unsigned long exp_time = IEEE80211_IBSS_INACTIVITY_LIMIT;
unsigned long exp_rsn = IEEE80211_IBSS_RSN_INACTIVITY_LIMIT;
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry_safe(sta, tmp, &local->sta_list, list) {
unsigned long last_active = ieee80211_sta_last_active(sta);
@@ -1282,8 +1254,6 @@ static void ieee80211_ibss_sta_expire(struct ieee80211_sub_if_data *sdata)
WARN_ON(__sta_info_destroy(sta));
}
}
-
- mutex_unlock(&local->sta_mtx);
}
/*
@@ -1293,9 +1263,8 @@ static void ieee80211_ibss_sta_expire(struct ieee80211_sub_if_data *sdata)
static void ieee80211_sta_merge_ibss(struct ieee80211_sub_if_data *sdata)
{
struct ieee80211_if_ibss *ifibss = &sdata->u.ibss;
- enum nl80211_bss_scan_width scan_width;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
mod_timer(&ifibss->timer,
round_jiffies(jiffies + IEEE80211_IBSS_MERGE_INTERVAL));
@@ -1315,9 +1284,8 @@ static void ieee80211_sta_merge_ibss(struct ieee80211_sub_if_data *sdata)
sdata_info(sdata,
"No active IBSS STAs - trying to scan for other IBSS networks with same SSID (merge)\n");
- scan_width = cfg80211_chandef_to_scan_width(&ifibss->chandef);
ieee80211_request_ibss_scan(sdata, ifibss->ssid, ifibss->ssid_len,
- NULL, 0, scan_width);
+ NULL, 0);
}
static void ieee80211_sta_create_ibss(struct ieee80211_sub_if_data *sdata)
@@ -1327,7 +1295,7 @@ static void ieee80211_sta_create_ibss(struct ieee80211_sub_if_data *sdata)
u16 capability;
int i;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (ifibss->fixed_bssid) {
memcpy(bssid, ifibss->bssid, ETH_ALEN);
@@ -1435,10 +1403,9 @@ static void ieee80211_sta_find_ibss(struct ieee80211_sub_if_data *sdata)
struct cfg80211_bss *cbss;
struct ieee80211_channel *chan = NULL;
const u8 *bssid = NULL;
- enum nl80211_bss_scan_width scan_width;
int active_ibss;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
active_ibss = ieee80211_sta_active_ibss(sdata);
ibss_dbg(sdata, "sta_find_ibss (active_ibss=%d)\n", active_ibss);
@@ -1494,8 +1461,6 @@ static void ieee80211_sta_find_ibss(struct ieee80211_sub_if_data *sdata)
sdata_info(sdata, "Trigger new scan to find an IBSS to join\n");
- scan_width = cfg80211_chandef_to_scan_width(&ifibss->chandef);
-
if (ifibss->fixed_channel) {
num = ieee80211_ibss_setup_scan_channels(local->hw.wiphy,
&ifibss->chandef,
@@ -1503,11 +1468,10 @@ static void ieee80211_sta_find_ibss(struct ieee80211_sub_if_data *sdata)
ARRAY_SIZE(channels));
ieee80211_request_ibss_scan(sdata, ifibss->ssid,
ifibss->ssid_len, channels,
- num, scan_width);
+ num);
} else {
ieee80211_request_ibss_scan(sdata, ifibss->ssid,
- ifibss->ssid_len, NULL,
- 0, scan_width);
+ ifibss->ssid_len, NULL, 0);
}
} else {
int interval = IEEE80211_SCAN_INTERVAL;
@@ -1532,7 +1496,7 @@ static void ieee80211_rx_mgmt_probe_req(struct ieee80211_sub_if_data *sdata,
struct beacon_data *presp;
u8 *pos, *end;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
presp = sdata_dereference(ifibss->presp, sdata);
@@ -1628,10 +1592,8 @@ void ieee80211_ibss_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
mgmt = (struct ieee80211_mgmt *) skb->data;
fc = le16_to_cpu(mgmt->frame_control);
- sdata_lock(sdata);
-
if (!sdata->u.ibss.ssid_len)
- goto mgmt_out; /* not ready to merge yet */
+ return; /* not ready to merge yet */
switch (fc & IEEE80211_FCTL_STYPE) {
case IEEE80211_STYPE_PROBE_REQ:
@@ -1671,9 +1633,6 @@ void ieee80211_ibss_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
break;
}
}
-
- mgmt_out:
- sdata_unlock(sdata);
}
void ieee80211_ibss_work(struct ieee80211_sub_if_data *sdata)
@@ -1681,15 +1640,13 @@ void ieee80211_ibss_work(struct ieee80211_sub_if_data *sdata)
struct ieee80211_if_ibss *ifibss = &sdata->u.ibss;
struct sta_info *sta;
- sdata_lock(sdata);
-
/*
* Work could be scheduled after scan or similar
* when we aren't even joined (or trying) with a
* network.
*/
if (!ifibss->ssid_len)
- goto out;
+ return;
spin_lock_bh(&ifibss->incomplete_lock);
while (!list_empty(&ifibss->incomplete_stations)) {
@@ -1715,9 +1672,6 @@ void ieee80211_ibss_work(struct ieee80211_sub_if_data *sdata)
WARN_ON(1);
break;
}
-
- out:
- sdata_unlock(sdata);
}
static void ieee80211_ibss_timer(struct timer_list *t)
@@ -1744,7 +1698,8 @@ void ieee80211_ibss_notify_scan_completed(struct ieee80211_local *local)
{
struct ieee80211_sub_if_data *sdata;
- mutex_lock(&local->iflist_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
list_for_each_entry(sdata, &local->interfaces, list) {
if (!ieee80211_sdata_running(sdata))
continue;
@@ -1752,7 +1707,6 @@ void ieee80211_ibss_notify_scan_completed(struct ieee80211_local *local)
continue;
sdata->u.ibss.last_scan_completed = jiffies;
}
- mutex_unlock(&local->iflist_mtx);
}
int ieee80211_ibss_join(struct ieee80211_sub_if_data *sdata,
@@ -1767,6 +1721,8 @@ int ieee80211_ibss_join(struct ieee80211_sub_if_data *sdata,
int i;
int ret;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (params->chandef.chan->freq_offset) {
/* this may work, but is untested */
return -EOPNOTSUPP;
@@ -1787,10 +1743,8 @@ int ieee80211_ibss_join(struct ieee80211_sub_if_data *sdata,
chanmode = (params->channel_fixed && !ret) ?
IEEE80211_CHANCTX_SHARED : IEEE80211_CHANCTX_EXCLUSIVE;
- mutex_lock(&local->chanctx_mtx);
ret = ieee80211_check_combinations(sdata, &params->chandef, chanmode,
radar_detect_width);
- mutex_unlock(&local->chanctx_mtx);
if (ret < 0)
return ret;
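With 5/10 MHz support gone, the rate conversions above no longer scale by the channel "shift". Bitrates in struct ieee80211_rate are stored in units of 100 kbit/s while the Supported Rates element uses 500 kbit/s units, so the remaining math is a plain divide by five; the helper name below is illustrative only:

#include <linux/math.h>

/* 100 kbit/s units -> 500 kbit/s units, e.g. 54 Mbit/s: 540 -> 108 */
static inline int rate_to_supp_rate_units(int bitrate_100kbps)
{
	return DIV_ROUND_UP(bitrate_100kbps, 5);
}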
diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
index 98ef1fe1226e..84df104f272b 100644
--- a/net/mac80211/ieee80211_i.h
+++ b/net/mac80211/ieee80211_i.h
@@ -85,6 +85,12 @@ extern const u8 ieee80211_ac_to_qos_mask[IEEE80211_NUM_ACS];
#define IEEE80211_MAX_NAN_INSTANCE_ID 255
+enum ieee80211_status_data {
+ IEEE80211_STATUS_TYPE_MASK = 0x00f,
+ IEEE80211_STATUS_TYPE_INVALID = 0,
+ IEEE80211_STATUS_TYPE_SMPS = 1,
+ IEEE80211_STATUS_SUBDATA_MASK = 0xff0,
+};
/*
* Keep a station's queues on the active list for deficit accounting purposes
@@ -461,13 +467,24 @@ struct ieee80211_sta_tx_tspec {
bool downgraded;
};
+/* Advertised TID-to-link mapping info */
+struct ieee80211_adv_ttlm_info {
+ /* time in TUs at which the new mapping is established, or 0 if there is
+ * no planned advertised TID-to-link mapping
+ */
+ u16 switch_time;
+ u32 duration; /* duration of the planned T2L map in TUs */
+ u16 map; /* map of usable links for all TIDs */
+ bool active; /* whether the advertised mapping is active or not */
+};
+
DECLARE_EWMA(beacon_signal, 4, 4)
struct ieee80211_if_managed {
struct timer_list timer;
struct timer_list conn_mon_timer;
struct timer_list bcn_mon_timer;
- struct work_struct monitor_work;
+ struct wiphy_work monitor_work;
struct wiphy_work beacon_connection_loss_work;
struct wiphy_work csa_connection_drop_work;
@@ -530,7 +547,7 @@ struct ieee80211_if_managed {
/* TDLS support */
u8 tdls_peer[ETH_ALEN] __aligned(2);
- struct delayed_work tdls_peer_del_work;
+ struct wiphy_delayed_work tdls_peer_del_work;
struct sk_buff *orig_teardown_skb; /* The original teardown skb */
struct sk_buff *teardown_skb; /* A copy to send through the AP */
spinlock_t teardown_lock; /* To lock changing teardown_skb */
@@ -544,7 +561,7 @@ struct ieee80211_if_managed {
* on the BE queue, but there's a lot of VO traffic, we might
* get stuck in a downgraded situation and flush takes forever.
*/
- struct delayed_work tx_tspec_wk;
+ struct wiphy_delayed_work tx_tspec_wk;
/* Information elements from the last transmitted (Re)Association
* Request frame.
@@ -554,6 +571,10 @@ struct ieee80211_if_managed {
struct wiphy_delayed_work ml_reconf_work;
u16 removed_links;
+
+ /* TID-to-link mapping support */
+ struct wiphy_delayed_work ttlm_work;
+ struct ieee80211_adv_ttlm_info ttlm_info;
};
struct ieee80211_if_ibss {
@@ -618,8 +639,9 @@ struct ieee80211_if_ocb {
* these declarations define the interface, which enables
* vendor-specific mesh synchronization
*
+ * @rx_bcn_presp: beacon/probe response was received
+ * @adjust_tsf: TSF adjustment method
*/
-struct ieee802_11_elems;
struct ieee80211_mesh_sync_ops {
void (*rx_bcn_presp)(struct ieee80211_sub_if_data *sdata, u16 stype,
struct ieee80211_mgmt *mgmt, unsigned int len,
@@ -859,12 +881,13 @@ enum txq_info_flags {
* struct txq_info - per tid queue
*
* @tin: contains packets split into multiple flows
- * @def_flow: used as a fallback flow when a packet destined to @tin hashes to
- * a fq_flow which is already owned by a different tin
- * @def_cvars: codel vars for @def_flow
+ * @def_cvars: codel vars for the @tin's default_flow
+ * @cstats: codel statistics for this queue
* @frags: used to keep fragments created after dequeue
* @schedule_order: used with ieee80211_local->active_txqs
* @schedule_round: counter to prevent infinite loops on TXQ scheduling
+ * @flags: TXQ flags from &enum txq_info_flags
+ * @txq: the driver visible part
*/
struct txq_info {
struct fq_tin tin;
@@ -893,7 +916,8 @@ struct ieee80211_if_mntr {
* struct ieee80211_if_nan - NAN state
*
* @conf: current NAN configuration
- * @func_ids: a bitmap of available instance_id's
+ * @func_lock: lock for @func_inst_ids
+ * @function_inst_ids: a bitmap of available instance_id's
*/
struct ieee80211_if_nan {
struct cfg80211_nan_conf conf;
@@ -926,6 +950,9 @@ struct ieee80211_link_data_managed {
struct wiphy_delayed_work chswitch_work;
struct wiphy_work request_smps_work;
+ /* used to reconfigure hardware SM PS */
+ struct wiphy_work recalc_smps;
+
bool beacon_crc_valid;
u32 beacon_crc;
struct ewma_beacon_signal ave_beacon_signal;
@@ -970,8 +997,8 @@ struct ieee80211_link_data {
struct ieee80211_sub_if_data *sdata;
unsigned int link_id;
- struct list_head assigned_chanctx_list; /* protected by chanctx_mtx */
- struct list_head reserved_chanctx_list; /* protected by chanctx_mtx */
+ struct list_head assigned_chanctx_list; /* protected by wiphy mutex */
+ struct list_head reserved_chanctx_list; /* protected by wiphy mutex */
/* multicast keys only */
struct ieee80211_key __rcu *gtk[NUM_DEFAULT_KEYS +
@@ -981,18 +1008,18 @@ struct ieee80211_link_data {
struct ieee80211_key __rcu *default_mgmt_key;
struct ieee80211_key __rcu *default_beacon_key;
- struct work_struct csa_finalize_work;
- bool csa_block_tx; /* write-protected by sdata_lock and local->mtx */
+ struct wiphy_work csa_finalize_work;
+ bool csa_block_tx;
bool operating_11g_mode;
struct cfg80211_chan_def csa_chandef;
- struct work_struct color_change_finalize_work;
+ struct wiphy_work color_change_finalize_work;
struct delayed_work color_collision_detect_work;
u64 color_bitmap;
- /* context reservation -- protected with chanctx_mtx */
+ /* context reservation -- protected with wiphy mutex */
struct ieee80211_chanctx *reserved_chanctx;
struct cfg80211_chan_def reserved_chandef;
bool reserved_radar_required;
@@ -1005,7 +1032,7 @@ struct ieee80211_link_data {
int ap_power_level; /* in dBm */
bool radar_required;
- struct delayed_work dfs_cac_timer_work;
+ struct wiphy_delayed_work dfs_cac_timer_work;
union {
struct ieee80211_link_data_managed mgd;
@@ -1032,7 +1059,7 @@ struct ieee80211_sub_if_data {
/* count for keys needing tailroom space allocation */
int crypto_tx_tailroom_needed_cnt;
int crypto_tx_tailroom_pending_dec;
- struct delayed_work dec_tailroom_needed_wk;
+ struct wiphy_delayed_work dec_tailroom_needed_wk;
struct net_device *dev;
struct ieee80211_local *local;
@@ -1064,9 +1091,6 @@ struct ieee80211_sub_if_data {
atomic_t num_tx_queued;
struct mac80211_qos_map __rcu *qos_map;
- /* used to reconfigure hardware SM PS */
- struct work_struct recalc_smps;
-
struct wiphy_work work;
struct sk_buff_head skb_queue;
struct sk_buff_head status_queue;
@@ -1106,7 +1130,7 @@ struct ieee80211_sub_if_data {
struct ieee80211_link_data __rcu *link[IEEE80211_MLD_MAX_NUM_LINKS];
/* for ieee80211_set_active_links_async() */
- struct work_struct activate_links_work;
+ struct wiphy_work activate_links_work;
u16 desired_active_links;
#ifdef CONFIG_MAC80211_DEBUGFS
@@ -1129,62 +1153,8 @@ struct ieee80211_sub_if_data *vif_to_sdata(struct ieee80211_vif *p)
return container_of(p, struct ieee80211_sub_if_data, vif);
}
-static inline void sdata_lock(struct ieee80211_sub_if_data *sdata)
- __acquires(&sdata->wdev.mtx)
-{
- mutex_lock(&sdata->wdev.mtx);
- __acquire(&sdata->wdev.mtx);
-}
-
-static inline void sdata_unlock(struct ieee80211_sub_if_data *sdata)
- __releases(&sdata->wdev.mtx)
-{
- mutex_unlock(&sdata->wdev.mtx);
- __release(&sdata->wdev.mtx);
-}
-
#define sdata_dereference(p, sdata) \
- rcu_dereference_protected(p, lockdep_is_held(&sdata->wdev.mtx))
-
-static inline void
-sdata_assert_lock(struct ieee80211_sub_if_data *sdata)
-{
- lockdep_assert_held(&sdata->wdev.mtx);
-}
-
-static inline int
-ieee80211_chanwidth_get_shift(enum nl80211_chan_width width)
-{
- switch (width) {
- case NL80211_CHAN_WIDTH_5:
- return 2;
- case NL80211_CHAN_WIDTH_10:
- return 1;
- default:
- return 0;
- }
-}
-
-static inline int
-ieee80211_chandef_get_shift(struct cfg80211_chan_def *chandef)
-{
- return ieee80211_chanwidth_get_shift(chandef->width);
-}
-
-static inline int
-ieee80211_vif_get_shift(struct ieee80211_vif *vif)
-{
- struct ieee80211_chanctx_conf *chanctx_conf;
- int shift = 0;
-
- rcu_read_lock();
- chanctx_conf = rcu_dereference(vif->bss_conf.chanctx_conf);
- if (chanctx_conf)
- shift = ieee80211_chandef_get_shift(&chanctx_conf->def);
- rcu_read_unlock();
-
- return shift;
-}
+ wiphy_dereference(sdata->local->hw.wiphy, p)
static inline int
ieee80211_get_mbssid_beacon_len(struct cfg80211_mbssid_elems *elems,
@@ -1254,7 +1224,7 @@ struct tpt_led_trigger {
#endif
/**
- * mac80211 scan flags - currently active scan mode
+ * enum mac80211_scan_flags - currently active scan mode
*
* @SCAN_SW_SCANNING: We're currently in the process of scanning but may as
* well be on the operating channel
@@ -1272,7 +1242,7 @@ struct tpt_led_trigger {
* and could send a probe request after receiving a beacon.
* @SCAN_BEACON_DONE: Beacon received, we can now send a probe request
*/
-enum {
+enum mac80211_scan_flags {
SCAN_SW_SCANNING,
SCAN_HW_SCANNING,
SCAN_ONCHANNEL_SCANNING,
@@ -1362,7 +1332,7 @@ struct ieee80211_local {
spinlock_t filter_lock;
/* used for uploading changed mc list */
- struct work_struct reconfig_filter;
+ struct wiphy_work reconfig_filter;
/* aggregated multicast list */
struct netdev_hw_addr_list mc_list;
@@ -1406,7 +1376,7 @@ struct ieee80211_local {
/* wowlan is enabled -- don't reconfig on resume */
bool wowlan;
- struct work_struct radar_detected_work;
+ struct wiphy_work radar_detected_work;
/* number of RX chains the hardware has */
u8 rx_chains;
@@ -1429,10 +1399,9 @@ struct ieee80211_local {
/* Station data */
/*
- * The mutex only protects the list, hash table and
- * counter, reads are done with RCU.
+ * The list, hash table and counter are protected
+ * by the wiphy mutex, reads are done with RCU.
*/
- struct mutex sta_mtx;
spinlock_t tim_lock;
unsigned long num_sta;
struct list_head sta_list;
@@ -1461,15 +1430,6 @@ struct ieee80211_local {
struct list_head mon_list; /* only that are IFF_UP && !cooked */
struct mutex iflist_mtx;
- /*
- * Key mutex, protects sdata's key_list and sta_info's
- * key pointers and ptk_idx (write access, they're RCU.)
- */
- struct mutex key_mtx;
-
- /* mutex for scan and work locking */
- struct mutex mtx;
-
/* Scanning and BSS list */
unsigned long scanning;
struct cfg80211_ssid scan_ssid;
@@ -1483,14 +1443,14 @@ struct ieee80211_local {
int hw_scan_ies_bufsize;
struct cfg80211_scan_info scan_info;
- struct work_struct sched_scan_stopped_work;
+ struct wiphy_work sched_scan_stopped_work;
struct ieee80211_sub_if_data __rcu *sched_scan_sdata;
struct cfg80211_sched_scan_request __rcu *sched_scan_req;
u8 scan_addr[ETH_ALEN];
unsigned long leave_oper_channel_time;
enum mac80211_scan_state next_scan_state;
- struct delayed_work scan_work;
+ struct wiphy_delayed_work scan_work;
struct ieee80211_sub_if_data __rcu *scan_sdata;
/* For backward compatibility only -- do not use */
struct cfg80211_chan_def _oper_chandef;
@@ -1500,7 +1460,6 @@ struct ieee80211_local {
/* channel contexts */
struct list_head chanctx_list;
- struct mutex chanctx_mtx;
#ifdef CONFIG_MAC80211_LEDS
struct led_trigger tx_led, rx_led, assoc_led, radio_led;
@@ -1554,8 +1513,8 @@ struct ieee80211_local {
* interface (and monitors) in PS, this then points there.
*/
struct ieee80211_sub_if_data *ps_sdata;
- struct work_struct dynamic_ps_enable_work;
- struct work_struct dynamic_ps_disable_work;
+ struct wiphy_work dynamic_ps_enable_work;
+ struct wiphy_work dynamic_ps_disable_work;
struct timer_list dynamic_ps_timer;
struct notifier_block ifa_notifier;
struct notifier_block ifa6_notifier;
@@ -1583,9 +1542,9 @@ struct ieee80211_local {
/*
* Remain-on-channel support
*/
- struct delayed_work roc_work;
+ struct wiphy_delayed_work roc_work;
struct list_head roc_list;
- struct work_struct hw_roc_start, hw_roc_done;
+ struct wiphy_work hw_roc_start, hw_roc_done;
unsigned long hw_roc_start_time;
u64 roc_cookie_counter;
@@ -1733,6 +1692,8 @@ struct ieee802_11_elems {
const struct ieee80211_eht_operation *eht_operation;
const struct ieee80211_multi_link_elem *ml_basic;
const struct ieee80211_multi_link_elem *ml_reconf;
+ const struct ieee80211_bandwidth_indication *bandwidth_indication;
+ const struct ieee80211_ttlm_elem *ttlm[IEEE80211_TTLM_MAX_CNT];
/* length of them, respectively */
u8 ext_capab_len;
@@ -1766,6 +1727,8 @@ struct ieee802_11_elems {
/* The reconfiguration Multi-Link element in the original IEs */
const struct element *ml_reconf_elem;
+ u8 ttlm_num;
+
/*
* store the per station profile pointer and length in case that the
* parsing also handled Multi-Link element parsing for a specific link
@@ -1783,7 +1746,7 @@ struct ieee802_11_elems {
*/
size_t scratch_len;
u8 *scratch_pos;
- u8 scratch[];
+ u8 scratch[] __counted_by(scratch_len);
};
static inline struct ieee80211_local *hw_to_local(
@@ -1929,12 +1892,11 @@ int ieee80211_mesh_finish_csa(struct ieee80211_sub_if_data *sdata,
u64 *changed);
/* scan/BSS handling */
-void ieee80211_scan_work(struct work_struct *work);
+void ieee80211_scan_work(struct wiphy *wiphy, struct wiphy_work *work);
int ieee80211_request_ibss_scan(struct ieee80211_sub_if_data *sdata,
const u8 *ssid, u8 ssid_len,
struct ieee80211_channel **channels,
- unsigned int n_channels,
- enum nl80211_bss_scan_width scan_width);
+ unsigned int n_channels);
int ieee80211_request_scan(struct ieee80211_sub_if_data *sdata,
struct cfg80211_scan_request *req);
void ieee80211_scan_cancel(struct ieee80211_local *local);
@@ -1962,7 +1924,8 @@ int ieee80211_request_sched_scan_start(struct ieee80211_sub_if_data *sdata,
struct cfg80211_sched_scan_request *req);
int ieee80211_request_sched_scan_stop(struct ieee80211_local *local);
void ieee80211_sched_scan_end(struct ieee80211_local *local);
-void ieee80211_sched_scan_stopped_work(struct work_struct *work);
+void ieee80211_sched_scan_stopped_work(struct wiphy *wiphy,
+ struct wiphy_work *work);
/* off-channel/mgmt-tx */
void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local);
@@ -1982,12 +1945,13 @@ int ieee80211_mgmt_tx_cancel_wait(struct wiphy *wiphy,
struct wireless_dev *wdev, u64 cookie);
/* channel switch handling */
-void ieee80211_csa_finalize_work(struct work_struct *work);
+void ieee80211_csa_finalize_work(struct wiphy *wiphy, struct wiphy_work *work);
int ieee80211_channel_switch(struct wiphy *wiphy, struct net_device *dev,
struct cfg80211_csa_settings *params);
/* color change handling */
-void ieee80211_color_change_finalize_work(struct work_struct *work);
+void ieee80211_color_change_finalize_work(struct wiphy *wiphy,
+ struct wiphy_work *work);
void ieee80211_color_collision_detection_work(struct work_struct *work);
/* interface handling */
@@ -2037,8 +2001,10 @@ void ieee80211_link_init(struct ieee80211_sub_if_data *sdata,
void ieee80211_link_stop(struct ieee80211_link_data *link);
int ieee80211_vif_set_links(struct ieee80211_sub_if_data *sdata,
u16 new_links, u16 dormant_links);
-void ieee80211_vif_clear_links(struct ieee80211_sub_if_data *sdata);
-int __ieee80211_set_active_links(struct ieee80211_vif *vif, u16 active_links);
+static inline void ieee80211_vif_clear_links(struct ieee80211_sub_if_data *sdata)
+{
+ ieee80211_vif_set_links(sdata, 0, 0);
+}
/* tx handling */
void ieee80211_clear_tx_pending(struct ieee80211_local *local);
@@ -2060,7 +2026,7 @@ struct sk_buff *
ieee80211_build_data_template(struct ieee80211_sub_if_data *sdata,
struct sk_buff *skb, u32 info_flags);
void ieee80211_tx_monitor(struct ieee80211_local *local, struct sk_buff *skb,
- int retry_count, int shift, bool send_to_cooked,
+ int retry_count, bool send_to_cooked,
struct ieee80211_tx_status *status);
void ieee80211_check_fast_xmit(struct sta_info *sta);
@@ -2093,19 +2059,17 @@ void ieee80211_send_delba(struct ieee80211_sub_if_data *sdata,
u16 initiator, u16 reason_code);
int ieee80211_send_smps_action(struct ieee80211_sub_if_data *sdata,
enum ieee80211_smps_mode smps, const u8 *da,
- const u8 *bssid);
+ const u8 *bssid, int link_id);
bool ieee80211_smps_is_restrictive(enum ieee80211_smps_mode smps_mode_old,
enum ieee80211_smps_mode smps_mode_new);
-void ___ieee80211_stop_rx_ba_session(struct sta_info *sta, u16 tid,
- u16 initiator, u16 reason, bool stop);
void __ieee80211_stop_rx_ba_session(struct sta_info *sta, u16 tid,
u16 initiator, u16 reason, bool stop);
-void ___ieee80211_start_rx_ba_session(struct sta_info *sta,
- u8 dialog_token, u16 timeout,
- u16 start_seq_num, u16 ba_policy, u16 tid,
- u16 buf_size, bool tx, bool auto_seq,
- const struct ieee80211_addba_ext_ie *addbaext);
+void __ieee80211_start_rx_ba_session(struct sta_info *sta,
+ u8 dialog_token, u16 timeout,
+ u16 start_seq_num, u16 ba_policy, u16 tid,
+ u16 buf_size, bool tx, bool auto_seq,
+ const struct ieee80211_addba_ext_ie *addbaext);
void ieee80211_sta_tear_down_BA_sessions(struct sta_info *sta,
enum ieee80211_agg_stop_reason reason);
void ieee80211_process_delba(struct ieee80211_sub_if_data *sdata,
@@ -2122,13 +2086,11 @@ void ieee80211_process_addba_request(struct ieee80211_local *local,
int __ieee80211_stop_tx_ba_session(struct sta_info *sta, u16 tid,
enum ieee80211_agg_stop_reason reason);
-int ___ieee80211_stop_tx_ba_session(struct sta_info *sta, u16 tid,
- enum ieee80211_agg_stop_reason reason);
void ieee80211_start_tx_ba_cb(struct sta_info *sta, int tid,
struct tid_ampdu_tx *tid_tx);
void ieee80211_stop_tx_ba_cb(struct sta_info *sta, int tid,
struct tid_ampdu_tx *tid_tx);
-void ieee80211_ba_session_work(struct work_struct *work);
+void ieee80211_ba_session_work(struct wiphy *wiphy, struct wiphy_work *work);
void ieee80211_tx_ba_session_handle_start(struct sta_info *sta, int tid);
void ieee80211_release_reorder_timeout(struct sta_info *sta, int tid);
@@ -2206,7 +2168,7 @@ void ieee80211_process_measurement_req(struct ieee80211_sub_if_data *sdata,
* flags from &enum ieee80211_conn_flags.
* @bssid: the currently connected bssid (for reporting)
* @csa_ie: parsed 802.11 csa elements on count, mode, chandef and mesh ttl.
- All of them will be filled with if success only.
+ * All of them will be filled with if success only.
* Return: 0 on success, <0 on error and >0 if there is nothing to parse.
*/
int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata,
@@ -2238,8 +2200,7 @@ static inline int __ieee80211_resume(struct ieee80211_hw *hw)
/* utility functions/constants */
extern const void *const mac80211_wiphy_privid; /* for wiphy privid */
int ieee80211_frame_duration(enum nl80211_band band, size_t len,
- int rate, int erp, int short_preamble,
- int shift);
+ int rate, int erp, int short_preamble);
void ieee80211_regulatory_limit_wmm_params(struct ieee80211_sub_if_data *sdata,
struct ieee80211_tx_queue_params *qparam,
int ac);
@@ -2334,8 +2295,6 @@ ieee802_11_parse_elems(const u8 *start, size_t len, bool action,
return ieee802_11_parse_elems_crc(start, len, action, 0, 0, bss);
}
-void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id);
-
extern const int ieee802_1d_to_ac[8];
static inline int ieee80211_ac_from_tid(int tid)
@@ -2343,8 +2302,10 @@ static inline int ieee80211_ac_from_tid(int tid)
return ieee802_1d_to_ac[tid & 7];
}
-void ieee80211_dynamic_ps_enable_work(struct work_struct *work);
-void ieee80211_dynamic_ps_disable_work(struct work_struct *work);
+void ieee80211_dynamic_ps_enable_work(struct wiphy *wiphy,
+ struct wiphy_work *work);
+void ieee80211_dynamic_ps_disable_work(struct wiphy *wiphy,
+ struct wiphy_work *work);
void ieee80211_dynamic_ps_timer(struct timer_list *t);
void ieee80211_send_nullfunc(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata,
@@ -2429,6 +2390,7 @@ void ieee80211_txq_init(struct ieee80211_sub_if_data *sdata,
struct txq_info *txq, int tid);
void ieee80211_txq_purge(struct ieee80211_local *local,
struct txq_info *txqi);
+void ieee80211_purge_sta_txqs(struct sta_info *sta);
void ieee80211_txq_remove_vlan(struct ieee80211_local *local,
struct ieee80211_sub_if_data *sdata);
void ieee80211_fill_txq_stats(struct cfg80211_txq_stats *txqstats,
@@ -2522,7 +2484,7 @@ bool ieee80211_chandef_vht_oper(struct ieee80211_hw *hw, u32 vht_cap_info,
const struct ieee80211_vht_operation *oper,
const struct ieee80211_ht_operation *htop,
struct cfg80211_chan_def *chandef);
-void ieee80211_chandef_eht_oper(const struct ieee80211_eht_operation *eht_oper,
+void ieee80211_chandef_eht_oper(const struct ieee80211_eht_operation_info *info,
bool support_160, bool support_320,
struct cfg80211_chan_def *chandef);
bool ieee80211_chandef_he_6ghz_oper(struct ieee80211_sub_if_data *sdata,
@@ -2564,9 +2526,10 @@ void ieee80211_recalc_chanctx_min_def(struct ieee80211_local *local,
struct ieee80211_link_data *rsvd_for);
bool ieee80211_is_radar_required(struct ieee80211_local *local);
-void ieee80211_dfs_cac_timer_work(struct work_struct *work);
+void ieee80211_dfs_cac_timer_work(struct wiphy *wiphy, struct wiphy_work *work);
void ieee80211_dfs_cac_cancel(struct ieee80211_local *local);
-void ieee80211_dfs_radar_detected_work(struct work_struct *work);
+void ieee80211_dfs_radar_detected_work(struct wiphy *wiphy,
+ struct wiphy_work *work);
int ieee80211_send_action_csa(struct ieee80211_sub_if_data *sdata,
struct cfg80211_csa_settings *csa_settings);
@@ -2588,7 +2551,7 @@ int ieee80211_tdls_mgmt(struct wiphy *wiphy, struct net_device *dev,
const u8 *extra_ies, size_t extra_ies_len);
int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev,
const u8 *peer, enum nl80211_tdls_operation oper);
-void ieee80211_tdls_peer_del_work(struct work_struct *wk);
+void ieee80211_tdls_peer_del_work(struct wiphy *wiphy, struct wiphy_work *wk);
int ieee80211_tdls_channel_switch(struct wiphy *wiphy, struct net_device *dev,
const u8 *addr, u8 oper_class,
struct cfg80211_chan_def *chandef);
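Most of the work items in this header move from struct work_struct to struct wiphy_work, whose handlers run with the wiphy mutex already held. A minimal sketch of that conversion, with a hypothetical my_state container, could be:

#include <linux/container_of.h>
#include <net/cfg80211.h>

struct my_state {
	struct wiphy *wiphy;
	struct wiphy_work recalc_work;
};

static void my_recalc_work(struct wiphy *wiphy, struct wiphy_work *work)
{
	struct my_state *st = container_of(work, struct my_state, recalc_work);

	/* the wiphy workqueue holds the wiphy mutex for us */
	lockdep_assert_wiphy(st->wiphy);
	/* ... recalculate state here ... */
}

static void my_state_init(struct my_state *st, struct wiphy *wiphy)
{
	st->wiphy = wiphy;
	wiphy_work_init(&st->recalc_work, my_recalc_work);
}

static void my_state_kick(struct my_state *st)
{
	/* queue from (almost) any context; cancel requires the wiphy mutex */
	wiphy_work_queue(st->wiphy, &st->recalc_work);
}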
diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
index be586bc0b5b7..e4e7c0b38cb6 100644
--- a/net/mac80211/iface.c
+++ b/net/mac80211/iface.c
@@ -33,14 +33,13 @@
* The interface list in each struct ieee80211_local is protected
* three-fold:
*
- * (1) modifications may only be done under the RTNL
- * (2) modifications and readers are protected against each other by
- * the iflist_mtx.
- * (3) modifications are done in an RCU manner so atomic readers
+ * (1) modifications may only be done under the RTNL *and* wiphy mutex
+ * *and* iflist_mtx
+ * (2) modifications are done in an RCU manner so atomic readers
* can traverse the list in RCU-safe blocks.
*
* As a consequence, reads (traversals) of the list can be protected
- * by either the RTNL, the iflist_mtx or RCU.
+ * by either the RTNL, the wiphy mutex, the iflist_mtx or RCU.
*/
static void ieee80211_iface_work(struct wiphy *wiphy, struct wiphy_work *work);
@@ -110,7 +109,7 @@ static u32 __ieee80211_recalc_idle(struct ieee80211_local *local,
bool working, scanning, active;
unsigned int led_trig_start = 0, led_trig_stop = 0;
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
active = force_active ||
!list_empty(&local->chanctx_list) ||
@@ -160,6 +159,8 @@ static int ieee80211_verify_mac(struct ieee80211_sub_if_data *sdata, u8 *addr,
u8 *m;
int ret = 0;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (is_zero_ether_addr(local->hw.wiphy->addr_mask))
return 0;
@@ -176,7 +177,6 @@ static int ieee80211_verify_mac(struct ieee80211_sub_if_data *sdata, u8 *addr,
if (!check_dup)
return ret;
- mutex_lock(&local->iflist_mtx);
list_for_each_entry(iter, &local->interfaces, list) {
if (iter == sdata)
continue;
@@ -195,7 +195,6 @@ static int ieee80211_verify_mac(struct ieee80211_sub_if_data *sdata, u8 *addr,
break;
}
}
- mutex_unlock(&local->iflist_mtx);
return ret;
}
@@ -207,6 +206,8 @@ static int ieee80211_can_powered_addr_change(struct ieee80211_sub_if_data *sdata
struct ieee80211_sub_if_data *scan_sdata;
int ret = 0;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
/* To be the most flexible here we want to only limit changing the
* address if the specific interface is doing offchannel work or
* scanning.
@@ -214,8 +215,6 @@ static int ieee80211_can_powered_addr_change(struct ieee80211_sub_if_data *sdata
if (netif_carrier_ok(sdata->dev))
return -EBUSY;
- mutex_lock(&local->mtx);
-
/* First check no ROC work is happening on this iface */
list_for_each_entry(roc, &local->roc_list, list) {
if (roc->sdata != sdata)
@@ -230,7 +229,7 @@ static int ieee80211_can_powered_addr_change(struct ieee80211_sub_if_data *sdata
/* And if this iface is scanning */
if (local->scanning) {
scan_sdata = rcu_dereference_protected(local->scan_sdata,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (sdata == scan_sdata)
ret = -EBUSY;
}
@@ -247,13 +246,12 @@ static int ieee80211_can_powered_addr_change(struct ieee80211_sub_if_data *sdata
}
unlock:
- mutex_unlock(&local->mtx);
return ret;
}
-static int ieee80211_change_mac(struct net_device *dev, void *addr)
+static int _ieee80211_change_mac(struct ieee80211_sub_if_data *sdata,
+ void *addr)
{
- struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
struct ieee80211_local *local = sdata->local;
struct sockaddr *sa = addr;
bool check_dup = true;
@@ -278,7 +276,7 @@ static int ieee80211_change_mac(struct net_device *dev, void *addr)
if (live)
drv_remove_interface(local, sdata);
- ret = eth_mac_addr(dev, sa);
+ ret = eth_mac_addr(sdata->dev, sa);
if (ret == 0) {
memcpy(sdata->vif.addr, sa->sa_data, ETH_ALEN);
@@ -294,6 +292,27 @@ static int ieee80211_change_mac(struct net_device *dev, void *addr)
return ret;
}
+static int ieee80211_change_mac(struct net_device *dev, void *addr)
+{
+ struct ieee80211_sub_if_data *sdata = IEEE80211_DEV_TO_SUB_IF(dev);
+ struct ieee80211_local *local = sdata->local;
+ int ret;
+
+ /*
+ * This happens during unregistration if there's a bond device
+ * active (maybe other cases?) and we must get removed from it.
+ * But we really don't care anymore if it's not registered now.
+ */
+ if (!dev->ieee80211_ptr->registered)
+ return 0;
+
+ wiphy_lock(local->hw.wiphy);
+ ret = _ieee80211_change_mac(sdata, addr);
+ wiphy_unlock(local->hw.wiphy);
+
+ return ret;
+}
+
static inline int identical_mac_addr_allowed(int type1, int type2)
{
return type1 == NL80211_IFTYPE_MONITOR ||
@@ -311,9 +330,9 @@ static int ieee80211_check_concurrent_iface(struct ieee80211_sub_if_data *sdata,
{
struct ieee80211_local *local = sdata->local;
struct ieee80211_sub_if_data *nsdata;
- int ret;
ASSERT_RTNL();
+ lockdep_assert_wiphy(local->hw.wiphy);
/* we hold the RTNL here so can safely walk the list */
list_for_each_entry(nsdata, &local->interfaces, list) {
@@ -378,10 +397,7 @@ static int ieee80211_check_concurrent_iface(struct ieee80211_sub_if_data *sdata,
}
}
- mutex_lock(&local->chanctx_mtx);
- ret = ieee80211_check_combinations(sdata, NULL, 0, 0);
- mutex_unlock(&local->chanctx_mtx);
- return ret;
+ return ieee80211_check_combinations(sdata, NULL, 0, 0);
}
static int ieee80211_check_queues(struct ieee80211_sub_if_data *sdata,
@@ -430,12 +446,13 @@ static int ieee80211_open(struct net_device *dev)
if (!is_valid_ether_addr(dev->dev_addr))
return -EADDRNOTAVAIL;
+ wiphy_lock(sdata->local->hw.wiphy);
err = ieee80211_check_concurrent_iface(sdata, sdata->vif.type);
if (err)
- return err;
+ goto out;
- wiphy_lock(sdata->local->hw.wiphy);
err = ieee80211_do_open(&sdata->wdev, true);
+out:
wiphy_unlock(sdata->local->hw.wiphy);
return err;
@@ -453,6 +470,8 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
bool cancel_scan;
struct cfg80211_nan_func *func;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
clear_bit(SDATA_STATE_RUNNING, &sdata->state);
synchronize_rcu(); /* flush _ieee80211_wake_txqs() */
@@ -516,16 +535,12 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
}
del_timer_sync(&local->dynamic_ps_timer);
- cancel_work_sync(&local->dynamic_ps_enable_work);
+ wiphy_work_cancel(local->hw.wiphy, &local->dynamic_ps_enable_work);
- cancel_work_sync(&sdata->recalc_smps);
-
- sdata_lock(sdata);
WARN(ieee80211_vif_is_mld(&sdata->vif),
"destroying interface with valid links 0x%04x\n",
sdata->vif.valid_links);
- mutex_lock(&local->mtx);
sdata->vif.bss_conf.csa_active = false;
if (sdata->vif.type == NL80211_IFTYPE_STATION)
sdata->deflink.u.mgd.csa_waiting_bcn = false;
@@ -534,20 +549,17 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
IEEE80211_QUEUE_STOP_REASON_CSA);
sdata->deflink.csa_block_tx = false;
}
- mutex_unlock(&local->mtx);
- sdata_unlock(sdata);
-
- cancel_work_sync(&sdata->deflink.csa_finalize_work);
- cancel_work_sync(&sdata->deflink.color_change_finalize_work);
- cancel_delayed_work_sync(&sdata->deflink.dfs_cac_timer_work);
+ wiphy_work_cancel(local->hw.wiphy, &sdata->deflink.csa_finalize_work);
+ wiphy_work_cancel(local->hw.wiphy,
+ &sdata->deflink.color_change_finalize_work);
+ wiphy_delayed_work_cancel(local->hw.wiphy,
+ &sdata->deflink.dfs_cac_timer_work);
if (sdata->wdev.cac_started) {
chandef = sdata->vif.bss_conf.chandef;
WARN_ON(local->suspended);
- mutex_lock(&local->mtx);
ieee80211_link_release_channel(&sdata->deflink);
- mutex_unlock(&local->mtx);
cfg80211_cac_event(sdata->dev, &chandef,
NL80211_RADAR_CAC_ABORTED,
GFP_KERNEL);
@@ -575,9 +587,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
switch (sdata->vif.type) {
case NL80211_IFTYPE_AP_VLAN:
- mutex_lock(&local->mtx);
list_del(&sdata->u.vlan.list);
- mutex_unlock(&local->mtx);
RCU_INIT_POINTER(sdata->vif.bss_conf.chanctx_conf, NULL);
/* see comment in the default case below */
ieee80211_free_keys(sdata, true);
@@ -675,9 +685,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
if (local->monitors == 0)
ieee80211_del_virtual_monitor(local);
- mutex_lock(&local->mtx);
ieee80211_recalc_idle(local);
- mutex_unlock(&local->mtx);
if (!(sdata->u.mntr.flags & MONITOR_FLAG_ACTIVE))
break;
@@ -691,7 +699,7 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
ieee80211_recalc_ps(local);
if (cancel_scan)
- flush_delayed_work(&local->scan_work);
+ wiphy_delayed_work_flush(local->hw.wiphy, &local->scan_work);
if (local->open_count == 0) {
ieee80211_stop_device(local);
@@ -750,9 +758,9 @@ static int ieee80211_stop(struct net_device *dev)
ieee80211_stop_mbssid(sdata);
}
- cancel_work_sync(&sdata->activate_links_work);
-
wiphy_lock(sdata->local->hw.wiphy);
+ wiphy_work_cancel(sdata->local->hw.wiphy, &sdata->activate_links_work);
+
ieee80211_do_stop(sdata, true);
wiphy_unlock(sdata->local->hw.wiphy);
@@ -779,7 +787,7 @@ static void ieee80211_set_multicast_list(struct net_device *dev)
spin_lock_bh(&local->filter_lock);
__hw_addr_sync(&local->mc_list, &dev->mc, dev->addr_len);
spin_unlock_bh(&local->filter_lock);
- ieee80211_queue_work(&local->hw, &local->reconfig_filter);
+ wiphy_work_queue(local->hw.wiphy, &local->reconfig_filter);
}
/*
@@ -1046,7 +1054,7 @@ void ieee80211_recalc_offload(struct ieee80211_local *local)
if (!ieee80211_hw_check(&local->hw, SUPPORTS_TX_ENCAP_OFFLOAD))
return;
- mutex_lock(&local->iflist_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(sdata, &local->interfaces, list) {
if (!ieee80211_sdata_running(sdata))
@@ -1054,8 +1062,6 @@ void ieee80211_recalc_offload(struct ieee80211_local *local)
ieee80211_recalc_sdata_offload(sdata);
}
-
- mutex_unlock(&local->iflist_mtx);
}
void ieee80211_adjust_monitor_flags(struct ieee80211_sub_if_data *sdata,
@@ -1133,7 +1139,7 @@ int ieee80211_add_virtual_monitor(struct ieee80211_local *local)
snprintf(sdata->name, IFNAMSIZ, "%s-monitor",
wiphy_name(local->hw.wiphy));
sdata->wdev.iftype = NL80211_IFTYPE_MONITOR;
- mutex_init(&sdata->wdev.mtx);
+ sdata->wdev.wiphy = local->hw.wiphy;
ieee80211_sdata_init(local, sdata);
@@ -1158,19 +1164,14 @@ int ieee80211_add_virtual_monitor(struct ieee80211_local *local)
rcu_assign_pointer(local->monitor_sdata, sdata);
mutex_unlock(&local->iflist_mtx);
- sdata_lock(sdata);
- mutex_lock(&local->mtx);
ret = ieee80211_link_use_channel(&sdata->deflink, &local->monitor_chandef,
IEEE80211_CHANCTX_EXCLUSIVE);
- mutex_unlock(&local->mtx);
- sdata_unlock(sdata);
if (ret) {
mutex_lock(&local->iflist_mtx);
RCU_INIT_POINTER(local->monitor_sdata, NULL);
mutex_unlock(&local->iflist_mtx);
synchronize_net();
drv_remove_interface(local, sdata);
- mutex_destroy(&sdata->wdev.mtx);
kfree(sdata);
return ret;
}
@@ -1206,15 +1207,10 @@ void ieee80211_del_virtual_monitor(struct ieee80211_local *local)
synchronize_net();
- sdata_lock(sdata);
- mutex_lock(&local->mtx);
ieee80211_link_release_channel(&sdata->deflink);
- mutex_unlock(&local->mtx);
- sdata_unlock(sdata);
drv_remove_interface(local, sdata);
- mutex_destroy(&sdata->wdev.mtx);
kfree(sdata);
}
@@ -1232,6 +1228,8 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
int res;
u32 hw_reconf_flags = 0;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
switch (sdata->vif.type) {
case NL80211_IFTYPE_AP_VLAN: {
struct ieee80211_sub_if_data *master;
@@ -1239,9 +1237,7 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
if (!sdata->bss)
return -ENOLINK;
- mutex_lock(&local->mtx);
list_add(&sdata->u.vlan.list, &sdata->bss->vlans);
- mutex_unlock(&local->mtx);
master = container_of(sdata->bss,
struct ieee80211_sub_if_data, u.ap);
@@ -1258,10 +1254,8 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
sizeof(sdata->vif.hw_queue));
sdata->vif.bss_conf.chandef = master->vif.bss_conf.chandef;
- mutex_lock(&local->key_mtx);
sdata->crypto_tx_tailroom_needed_cnt +=
master->crypto_tx_tailroom_needed_cnt;
- mutex_unlock(&local->key_mtx);
break;
}
@@ -1352,9 +1346,7 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
ieee80211_adjust_monitor_flags(sdata, 1);
ieee80211_configure_filter(local);
ieee80211_recalc_offload(local);
- mutex_lock(&local->mtx);
ieee80211_recalc_idle(local);
- mutex_unlock(&local->mtx);
netif_carrier_on(dev);
break;
@@ -1459,11 +1451,8 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
drv_stop(local);
err_del_bss:
sdata->bss = NULL;
- if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN) {
- mutex_lock(&local->mtx);
+ if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
list_del(&sdata->u.vlan.list);
- mutex_unlock(&local->mtx);
- }
/* might already be clear but that doesn't matter */
clear_bit(SDATA_STATE_RUNNING, &sdata->state);
return res;
@@ -1490,12 +1479,13 @@ static void ieee80211_iface_process_skb(struct ieee80211_local *local,
{
struct ieee80211_mgmt *mgmt = (void *)skb->data;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (ieee80211_is_action(mgmt->frame_control) &&
mgmt->u.action.category == WLAN_CATEGORY_BACK) {
struct sta_info *sta;
int len = skb->len;
- mutex_lock(&local->sta_mtx);
sta = sta_info_get_bss(sdata, mgmt->sa);
if (sta) {
switch (mgmt->u.action.u.addba_req.action_code) {
@@ -1516,7 +1506,6 @@ static void ieee80211_iface_process_skb(struct ieee80211_local *local,
break;
}
}
- mutex_unlock(&local->sta_mtx);
} else if (ieee80211_is_action(mgmt->frame_control) &&
mgmt->u.action.category == WLAN_CATEGORY_VHT) {
switch (mgmt->u.action.u.vht_group_notif.action_code) {
@@ -1530,7 +1519,6 @@ static void ieee80211_iface_process_skb(struct ieee80211_local *local,
band = status->band;
opmode = mgmt->u.action.u.vht_opmode_notif.operating_mode;
- mutex_lock(&local->sta_mtx);
sta = sta_info_get_bss(sdata, mgmt->sa);
if (sta)
@@ -1538,7 +1526,6 @@ static void ieee80211_iface_process_skb(struct ieee80211_local *local,
&sta->deflink,
opmode, band);
- mutex_unlock(&local->sta_mtx);
break;
}
case WLAN_VHT_ACTION_GROUPID_MGMT:
@@ -1585,7 +1572,6 @@ static void ieee80211_iface_process_skb(struct ieee80211_local *local,
* a block-ack session was active. That cannot be
* right, so terminate the session.
*/
- mutex_lock(&local->sta_mtx);
sta = sta_info_get_bss(sdata, mgmt->sa);
if (sta) {
u16 tid = ieee80211_get_tid(hdr);
@@ -1595,7 +1581,6 @@ static void ieee80211_iface_process_skb(struct ieee80211_local *local,
WLAN_REASON_QSTA_REQUIRE_SETUP,
true);
}
- mutex_unlock(&local->sta_mtx);
} else switch (sdata->vif.type) {
case NL80211_IFTYPE_STATION:
ieee80211_sta_rx_queued_mgmt(sdata, skb);
@@ -1692,15 +1677,8 @@ static void ieee80211_iface_work(struct wiphy *wiphy, struct wiphy_work *work)
}
}
-static void ieee80211_recalc_smps_work(struct work_struct *work)
-{
- struct ieee80211_sub_if_data *sdata =
- container_of(work, struct ieee80211_sub_if_data, recalc_smps);
-
- ieee80211_recalc_smps(sdata, &sdata->deflink);
-}
-
-static void ieee80211_activate_links_work(struct work_struct *work)
+static void ieee80211_activate_links_work(struct wiphy *wiphy,
+ struct wiphy_work *work)
{
struct ieee80211_sub_if_data *sdata =
container_of(work, struct ieee80211_sub_if_data,
@@ -1745,8 +1723,8 @@ static void ieee80211_setup_sdata(struct ieee80211_sub_if_data *sdata,
skb_queue_head_init(&sdata->skb_queue);
skb_queue_head_init(&sdata->status_queue);
wiphy_work_init(&sdata->work, ieee80211_iface_work);
- INIT_WORK(&sdata->recalc_smps, ieee80211_recalc_smps_work);
- INIT_WORK(&sdata->activate_links_work, ieee80211_activate_links_work);
+ wiphy_work_init(&sdata->activate_links_work,
+ ieee80211_activate_links_work);
switch (type) {
case NL80211_IFTYPE_P2P_GO:
@@ -1805,7 +1783,7 @@ static void ieee80211_setup_sdata(struct ieee80211_sub_if_data *sdata,
/* need to do this after the switch so vif.type is correct */
ieee80211_link_setup(&sdata->deflink);
- ieee80211_debugfs_add_netdev(sdata);
+ ieee80211_debugfs_add_netdev(sdata, false);
}
static int ieee80211_runtime_change_iftype(struct ieee80211_sub_if_data *sdata,
@@ -1936,6 +1914,8 @@ static void ieee80211_assign_perm_addr(struct ieee80211_local *local,
u8 tmp_addr[ETH_ALEN];
int i;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
/* default ... something at least */
memcpy(perm_addr, local->hw.wiphy->perm_addr, ETH_ALEN);
@@ -1943,8 +1923,6 @@ static void ieee80211_assign_perm_addr(struct ieee80211_local *local,
local->hw.wiphy->n_addresses <= 1)
return;
- mutex_lock(&local->iflist_mtx);
-
switch (type) {
case NL80211_IFTYPE_MONITOR:
/* doesn't matter */
@@ -1968,7 +1946,7 @@ static void ieee80211_assign_perm_addr(struct ieee80211_local *local,
if (!ieee80211_sdata_running(sdata))
continue;
memcpy(perm_addr, sdata->vif.addr, ETH_ALEN);
- goto out_unlock;
+ return;
}
}
fallthrough;
@@ -2054,9 +2032,6 @@ static void ieee80211_assign_perm_addr(struct ieee80211_local *local,
break;
}
-
- out_unlock:
- mutex_unlock(&local->iflist_mtx);
}
int ieee80211_if_add(struct ieee80211_local *local, const char *name,
@@ -2070,6 +2045,7 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
int ret, i;
ASSERT_RTNL();
+ lockdep_assert_wiphy(local->hw.wiphy);
if (type == NL80211_IFTYPE_P2P_DEVICE || type == NL80211_IFTYPE_NAN) {
struct wireless_dev *wdev;
@@ -2157,8 +2133,8 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
INIT_LIST_HEAD(&sdata->key_list);
- INIT_DELAYED_WORK(&sdata->dec_tailroom_needed_wk,
- ieee80211_delayed_tailroom_dec);
+ wiphy_delayed_work_init(&sdata->dec_tailroom_needed_wk,
+ ieee80211_delayed_tailroom_dec);
for (i = 0; i < NUM_NL80211_BANDS; i++) {
struct ieee80211_supported_band *sband;
@@ -2236,6 +2212,7 @@ int ieee80211_if_add(struct ieee80211_local *local, const char *name,
void ieee80211_if_remove(struct ieee80211_sub_if_data *sdata)
{
ASSERT_RTNL();
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
mutex_lock(&sdata->local->iflist_mtx);
list_del_rcu(&sdata->list);
@@ -2281,19 +2258,30 @@ void ieee80211_remove_interfaces(struct ieee80211_local *local)
*/
cfg80211_shutdown_all_interfaces(local->hw.wiphy);
+ wiphy_lock(local->hw.wiphy);
+
WARN(local->open_count, "%s: open count remains %d\n",
wiphy_name(local->hw.wiphy), local->open_count);
- ieee80211_txq_teardown_flows(local);
-
mutex_lock(&local->iflist_mtx);
list_splice_init(&local->interfaces, &unreg_list);
mutex_unlock(&local->iflist_mtx);
- wiphy_lock(local->hw.wiphy);
list_for_each_entry_safe(sdata, tmp, &unreg_list, list) {
bool netdev = sdata->dev;
+ /*
+ * Remove IP addresses explicitly, since the notifier will
+ * skip the callbacks if wdev->registered is false, since
+ * we can't acquire the wiphy_lock() again there if already
+ * inside this locked section.
+ */
+ sdata->vif.cfg.arp_addr_cnt = 0;
+ if (sdata->vif.type == NL80211_IFTYPE_STATION &&
+ sdata->u.mgd.associated)
+ ieee80211_vif_cfg_change_notify(sdata,
+ BSS_CHANGED_ARP_FILTER);
+
list_del(&sdata->list);
cfg80211_unregister_wdev(&sdata->wdev);
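
The iface.c hunks above all follow one conversion: a handler that used to be a plain workqueue item becomes a wiphy_work, gains a struct wiphy * argument, and may then assume the wiphy mutex is held instead of taking sta_mtx or iflist_mtx itself. A minimal sketch of the resulting shape, using invented my_dev/my_recalc names rather than anything from the patch:

#include <net/cfg80211.h>

struct my_dev {
        struct wiphy *wiphy;
        bool dirty;
        struct wiphy_work recalc_work;          /* was: struct work_struct */
};

/* the wiphy_work infrastructure runs this with wiphy->mtx already held */
static void my_recalc_work(struct wiphy *wiphy, struct wiphy_work *work)
{
        struct my_dev *dev = container_of(work, struct my_dev, recalc_work);

        lockdep_assert_wiphy(wiphy);            /* documents the locking rule */
        dev->dirty = false;                     /* ... recalculate state ... */
}

static void my_dev_setup(struct my_dev *dev)
{
        wiphy_work_init(&dev->recalc_work, my_recalc_work);
}

static void my_dev_kick(struct my_dev *dev)
{
        dev->dirty = true;
        wiphy_work_queue(dev->wiphy, &dev->recalc_work);
}

Queueing does not require the wiphy mutex, only running the handler does, which is why call sites can stay lock-free while the handlers drop their own locking.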
diff --git a/net/mac80211/key.c b/net/mac80211/key.c
index a2db0585dce0..af74d7f9d94d 100644
--- a/net/mac80211/key.c
+++ b/net/mac80211/key.c
@@ -53,11 +53,6 @@
static const u8 bcast_addr[ETH_ALEN] = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF };
-static void assert_key_lock(struct ieee80211_local *local)
-{
- lockdep_assert_held(&local->key_mtx);
-}
-
static void
update_vlan_tailroom_need_count(struct ieee80211_sub_if_data *sdata, int delta)
{
@@ -67,7 +62,7 @@ update_vlan_tailroom_need_count(struct ieee80211_sub_if_data *sdata, int delta)
return;
/* crypto_tx_tailroom_needed_cnt is protected by this */
- assert_key_lock(sdata->local);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
rcu_read_lock();
@@ -98,7 +93,7 @@ static void increment_tailroom_need_count(struct ieee80211_sub_if_data *sdata)
* http://mid.gmane.org/1308590980.4322.19.camel@jlt3.sipsolutions.net
*/
- assert_key_lock(sdata->local);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
update_vlan_tailroom_need_count(sdata, 1);
@@ -114,7 +109,7 @@ static void increment_tailroom_need_count(struct ieee80211_sub_if_data *sdata)
static void decrease_tailroom_need_count(struct ieee80211_sub_if_data *sdata,
int delta)
{
- assert_key_lock(sdata->local);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
WARN_ON_ONCE(sdata->crypto_tx_tailroom_needed_cnt < delta);
@@ -129,6 +124,7 @@ static int ieee80211_key_enable_hw_accel(struct ieee80211_key *key)
int ret = -EOPNOTSUPP;
might_sleep();
+ lockdep_assert_wiphy(key->local->hw.wiphy);
if (key->flags & KEY_FLAG_TAINTED) {
/* If we get here, it's during resume and the key is
@@ -151,8 +147,6 @@ static int ieee80211_key_enable_hw_accel(struct ieee80211_key *key)
if (!key->local->ops->set_key)
goto out_unsupported;
- assert_key_lock(key->local);
-
sta = key->sta;
/*
@@ -242,14 +236,14 @@ static void ieee80211_key_disable_hw_accel(struct ieee80211_key *key)
if (!key || !key->local->ops->set_key)
return;
- assert_key_lock(key->local);
-
if (!(key->flags & KEY_FLAG_UPLOADED_TO_HARDWARE))
return;
sta = key->sta;
sdata = key->sdata;
+ lockdep_assert_wiphy(key->local->hw.wiphy);
+
if (key->conf.link_id >= 0 && sdata->vif.active_links &&
!(sdata->vif.active_links & BIT(key->conf.link_id)))
return;
@@ -275,7 +269,7 @@ static int _ieee80211_set_tx_key(struct ieee80211_key *key, bool force)
struct sta_info *sta = key->sta;
struct ieee80211_local *local = key->local;
- assert_key_lock(local);
+ lockdep_assert_wiphy(local->hw.wiphy);
set_sta_flag(sta, WLAN_STA_USES_ENCRYPTION);
@@ -300,7 +294,7 @@ static void ieee80211_pairwise_rekey(struct ieee80211_key *old,
struct sta_info *sta = new->sta;
int i;
- assert_key_lock(local);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (new->conf.flags & IEEE80211_KEY_FLAG_NO_AUTO_TX) {
/* Extended Key ID key install, initial one or rekey */
@@ -317,11 +311,9 @@ static void ieee80211_pairwise_rekey(struct ieee80211_key *old,
* job done for the few ms we need it.)
*/
set_sta_flag(sta, WLAN_STA_BLOCK_BA);
- mutex_lock(&sta->ampdu_mlme.mtx);
for (i = 0; i < IEEE80211_NUM_TIDS; i++)
- ___ieee80211_stop_tx_ba_session(sta, i,
- AGG_STOP_LOCAL_REQUEST);
- mutex_unlock(&sta->ampdu_mlme.mtx);
+ __ieee80211_stop_tx_ba_session(sta, i,
+ AGG_STOP_LOCAL_REQUEST);
}
} else if (old) {
/* Rekey without Extended Key ID.
@@ -358,12 +350,14 @@ static void __ieee80211_set_default_key(struct ieee80211_link_data *link,
struct ieee80211_sub_if_data *sdata = link->sdata;
struct ieee80211_key *key = NULL;
- assert_key_lock(sdata->local);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (idx >= 0 && idx < NUM_DEFAULT_KEYS) {
- key = key_mtx_dereference(sdata->local, sdata->keys[idx]);
+ key = wiphy_dereference(sdata->local->hw.wiphy,
+ sdata->keys[idx]);
if (!key)
- key = key_mtx_dereference(sdata->local, link->gtk[idx]);
+ key = wiphy_dereference(sdata->local->hw.wiphy,
+ link->gtk[idx]);
}
if (uni) {
@@ -382,9 +376,9 @@ static void __ieee80211_set_default_key(struct ieee80211_link_data *link,
void ieee80211_set_default_key(struct ieee80211_link_data *link, int idx,
bool uni, bool multi)
{
- mutex_lock(&link->sdata->local->key_mtx);
+ lockdep_assert_wiphy(link->sdata->local->hw.wiphy);
+
__ieee80211_set_default_key(link, idx, uni, multi);
- mutex_unlock(&link->sdata->local->key_mtx);
}
static void
@@ -393,11 +387,12 @@ __ieee80211_set_default_mgmt_key(struct ieee80211_link_data *link, int idx)
struct ieee80211_sub_if_data *sdata = link->sdata;
struct ieee80211_key *key = NULL;
- assert_key_lock(sdata->local);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (idx >= NUM_DEFAULT_KEYS &&
idx < NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS)
- key = key_mtx_dereference(sdata->local, link->gtk[idx]);
+ key = wiphy_dereference(sdata->local->hw.wiphy,
+ link->gtk[idx]);
rcu_assign_pointer(link->default_mgmt_key, key);
@@ -407,9 +402,9 @@ __ieee80211_set_default_mgmt_key(struct ieee80211_link_data *link, int idx)
void ieee80211_set_default_mgmt_key(struct ieee80211_link_data *link,
int idx)
{
- mutex_lock(&link->sdata->local->key_mtx);
+ lockdep_assert_wiphy(link->sdata->local->hw.wiphy);
+
__ieee80211_set_default_mgmt_key(link, idx);
- mutex_unlock(&link->sdata->local->key_mtx);
}
static void
@@ -418,12 +413,13 @@ __ieee80211_set_default_beacon_key(struct ieee80211_link_data *link, int idx)
struct ieee80211_sub_if_data *sdata = link->sdata;
struct ieee80211_key *key = NULL;
- assert_key_lock(sdata->local);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (idx >= NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS &&
idx < NUM_DEFAULT_KEYS + NUM_DEFAULT_MGMT_KEYS +
NUM_DEFAULT_BEACON_KEYS)
- key = key_mtx_dereference(sdata->local, link->gtk[idx]);
+ key = wiphy_dereference(sdata->local->hw.wiphy,
+ link->gtk[idx]);
rcu_assign_pointer(link->default_beacon_key, key);
@@ -433,9 +429,9 @@ __ieee80211_set_default_beacon_key(struct ieee80211_link_data *link, int idx)
void ieee80211_set_default_beacon_key(struct ieee80211_link_data *link,
int idx)
{
- mutex_lock(&link->sdata->local->key_mtx);
+ lockdep_assert_wiphy(link->sdata->local->hw.wiphy);
+
__ieee80211_set_default_beacon_key(link, idx);
- mutex_unlock(&link->sdata->local->key_mtx);
}
static int ieee80211_key_replace(struct ieee80211_sub_if_data *sdata,
@@ -452,6 +448,8 @@ static int ieee80211_key_replace(struct ieee80211_sub_if_data *sdata,
bool defunikey, defmultikey, defmgmtkey, defbeaconkey;
bool is_wep;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
/* caller must provide at least one old/new */
if (WARN_ON(!new && !old))
return 0;
@@ -482,7 +480,7 @@ static int ieee80211_key_replace(struct ieee80211_sub_if_data *sdata,
if (sta) {
link_sta = rcu_dereference_protected(sta->link[link_id],
- lockdep_is_held(&sta->local->sta_mtx));
+ lockdep_is_held(&sta->local->hw.wiphy->mtx));
if (!link_sta)
return -ENOLINK;
}
@@ -510,12 +508,10 @@ static int ieee80211_key_replace(struct ieee80211_sub_if_data *sdata,
ret = ieee80211_key_enable_hw_accel(new);
}
} else {
- if (!new->local->wowlan) {
+ if (!new->local->wowlan)
ret = ieee80211_key_enable_hw_accel(new);
- } else {
- assert_key_lock(new->local);
+ else
new->flags |= KEY_FLAG_UPLOADED_TO_HARDWARE;
- }
}
if (ret)
@@ -541,17 +537,17 @@ static int ieee80211_key_replace(struct ieee80211_sub_if_data *sdata,
ieee80211_check_fast_rx(sta);
} else {
defunikey = old &&
- old == key_mtx_dereference(sdata->local,
- sdata->default_unicast_key);
+ old == wiphy_dereference(sdata->local->hw.wiphy,
+ sdata->default_unicast_key);
defmultikey = old &&
- old == key_mtx_dereference(sdata->local,
- link->default_multicast_key);
+ old == wiphy_dereference(sdata->local->hw.wiphy,
+ link->default_multicast_key);
defmgmtkey = old &&
- old == key_mtx_dereference(sdata->local,
- link->default_mgmt_key);
+ old == wiphy_dereference(sdata->local->hw.wiphy,
+ link->default_mgmt_key);
defbeaconkey = old &&
- old == key_mtx_dereference(sdata->local,
- link->default_beacon_key);
+ old == wiphy_dereference(sdata->local->hw.wiphy,
+ link->default_beacon_key);
if (defunikey && !new)
__ieee80211_set_default_key(link, -1, true, false);
@@ -775,8 +771,9 @@ static void __ieee80211_key_destroy(struct ieee80211_key *key,
if (delay_tailroom) {
/* see ieee80211_delayed_tailroom_dec */
sdata->crypto_tx_tailroom_pending_dec++;
- schedule_delayed_work(&sdata->dec_tailroom_needed_wk,
- HZ/2);
+ wiphy_delayed_work_queue(sdata->local->hw.wiphy,
+ &sdata->dec_tailroom_needed_wk,
+ HZ / 2);
} else {
decrease_tailroom_need_count(sdata, 1);
}
@@ -859,13 +856,15 @@ int ieee80211_key_link(struct ieee80211_key *key,
bool delay_tailroom = sdata->vif.type == NL80211_IFTYPE_STATION;
int ret;
- mutex_lock(&sdata->local->key_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (sta && pairwise) {
struct ieee80211_key *alt_key;
- old_key = key_mtx_dereference(sdata->local, sta->ptk[idx]);
- alt_key = key_mtx_dereference(sdata->local, sta->ptk[idx ^ 1]);
+ old_key = wiphy_dereference(sdata->local->hw.wiphy,
+ sta->ptk[idx]);
+ alt_key = wiphy_dereference(sdata->local->hw.wiphy,
+ sta->ptk[idx ^ 1]);
/* The rekey code assumes that the old and new key are using
* the same cipher. Enforce the assumption for pairwise keys.
@@ -881,21 +880,22 @@ int ieee80211_key_link(struct ieee80211_key *key,
if (link_id >= 0) {
link_sta = rcu_dereference_protected(sta->link[link_id],
- lockdep_is_held(&sta->local->sta_mtx));
+ lockdep_is_held(&sta->local->hw.wiphy->mtx));
if (!link_sta) {
ret = -ENOLINK;
goto out;
}
}
- old_key = key_mtx_dereference(sdata->local, link_sta->gtk[idx]);
+ old_key = wiphy_dereference(sdata->local->hw.wiphy,
+ link_sta->gtk[idx]);
} else {
if (idx < NUM_DEFAULT_KEYS)
- old_key = key_mtx_dereference(sdata->local,
- sdata->keys[idx]);
+ old_key = wiphy_dereference(sdata->local->hw.wiphy,
+ sdata->keys[idx]);
if (!old_key)
- old_key = key_mtx_dereference(sdata->local,
- link->gtk[idx]);
+ old_key = wiphy_dereference(sdata->local->hw.wiphy,
+ link->gtk[idx]);
}
/* Non-pairwise keys must also not switch the cipher on rekey */
@@ -940,8 +940,6 @@ int ieee80211_key_link(struct ieee80211_key *key,
out:
ieee80211_key_free_unused(key);
- mutex_unlock(&sdata->local->key_mtx);
-
return ret;
}
@@ -967,8 +965,6 @@ void ieee80211_reenable_keys(struct ieee80211_sub_if_data *sdata)
lockdep_assert_wiphy(sdata->local->hw.wiphy);
- mutex_lock(&sdata->local->key_mtx);
-
sdata->crypto_tx_tailroom_needed_cnt = 0;
sdata->crypto_tx_tailroom_pending_dec = 0;
@@ -985,8 +981,6 @@ void ieee80211_reenable_keys(struct ieee80211_sub_if_data *sdata)
ieee80211_key_enable_hw_accel(key);
}
}
-
- mutex_unlock(&sdata->local->key_mtx);
}
void ieee80211_iter_keys(struct ieee80211_hw *hw,
@@ -1004,7 +998,6 @@ void ieee80211_iter_keys(struct ieee80211_hw *hw,
lockdep_assert_wiphy(hw->wiphy);
- mutex_lock(&local->key_mtx);
if (vif) {
sdata = vif_to_sdata(vif);
list_for_each_entry_safe(key, tmp, &sdata->key_list, list)
@@ -1019,7 +1012,6 @@ void ieee80211_iter_keys(struct ieee80211_hw *hw,
key->sta ? &key->sta->sta : NULL,
&key->conf, iter_data);
}
- mutex_unlock(&local->key_mtx);
}
EXPORT_SYMBOL(ieee80211_iter_keys);
@@ -1099,7 +1091,8 @@ void ieee80211_remove_link_keys(struct ieee80211_link_data *link,
struct ieee80211_local *local = sdata->local;
struct ieee80211_key *key, *tmp;
- mutex_lock(&local->key_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
list_for_each_entry_safe(key, tmp, &sdata->key_list, list) {
if (key->conf.link_id != link->link_id)
continue;
@@ -1108,7 +1101,6 @@ void ieee80211_remove_link_keys(struct ieee80211_link_data *link,
key, NULL);
list_add_tail(&key->list, keys);
}
- mutex_unlock(&local->key_mtx);
}
void ieee80211_free_key_list(struct ieee80211_local *local,
@@ -1116,10 +1108,10 @@ void ieee80211_free_key_list(struct ieee80211_local *local,
{
struct ieee80211_key *key, *tmp;
- mutex_lock(&local->key_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
list_for_each_entry_safe(key, tmp, keys, list)
__ieee80211_key_destroy(key, false);
- mutex_unlock(&local->key_mtx);
}
void ieee80211_free_keys(struct ieee80211_sub_if_data *sdata,
@@ -1131,9 +1123,10 @@ void ieee80211_free_keys(struct ieee80211_sub_if_data *sdata,
struct ieee80211_key *key, *tmp;
LIST_HEAD(keys);
- cancel_delayed_work_sync(&sdata->dec_tailroom_needed_wk);
+ wiphy_delayed_work_cancel(local->hw.wiphy,
+ &sdata->dec_tailroom_needed_wk);
- mutex_lock(&local->key_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
ieee80211_free_keys_iface(sdata, &keys);
@@ -1166,8 +1159,6 @@ void ieee80211_free_keys(struct ieee80211_sub_if_data *sdata,
WARN_ON_ONCE(vlan->crypto_tx_tailroom_needed_cnt ||
vlan->crypto_tx_tailroom_pending_dec);
}
-
- mutex_unlock(&local->key_mtx);
}
void ieee80211_free_sta_keys(struct ieee80211_local *local,
@@ -1176,9 +1167,10 @@ void ieee80211_free_sta_keys(struct ieee80211_local *local,
struct ieee80211_key *key;
int i;
- mutex_lock(&local->key_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
for (i = 0; i < ARRAY_SIZE(sta->deflink.gtk); i++) {
- key = key_mtx_dereference(local, sta->deflink.gtk[i]);
+ key = wiphy_dereference(local->hw.wiphy, sta->deflink.gtk[i]);
if (!key)
continue;
ieee80211_key_replace(key->sdata, NULL, key->sta,
@@ -1189,7 +1181,7 @@ void ieee80211_free_sta_keys(struct ieee80211_local *local,
}
for (i = 0; i < NUM_DEFAULT_KEYS; i++) {
- key = key_mtx_dereference(local, sta->ptk[i]);
+ key = wiphy_dereference(local->hw.wiphy, sta->ptk[i]);
if (!key)
continue;
ieee80211_key_replace(key->sdata, NULL, key->sta,
@@ -1198,11 +1190,10 @@ void ieee80211_free_sta_keys(struct ieee80211_local *local,
__ieee80211_key_destroy(key, key->sdata->vif.type ==
NL80211_IFTYPE_STATION);
}
-
- mutex_unlock(&local->key_mtx);
}
-void ieee80211_delayed_tailroom_dec(struct work_struct *wk)
+void ieee80211_delayed_tailroom_dec(struct wiphy *wiphy,
+ struct wiphy_work *wk)
{
struct ieee80211_sub_if_data *sdata;
@@ -1225,11 +1216,9 @@ void ieee80211_delayed_tailroom_dec(struct work_struct *wk)
* within an ESS this usually won't happen.
*/
- mutex_lock(&sdata->local->key_mtx);
decrease_tailroom_need_count(sdata,
sdata->crypto_tx_tailroom_pending_dec);
sdata->crypto_tx_tailroom_pending_dec = 0;
- mutex_unlock(&sdata->local->key_mtx);
}
void ieee80211_gtk_rekey_notify(struct ieee80211_vif *vif, const u8 *bssid,
@@ -1358,7 +1347,7 @@ void ieee80211_remove_key(struct ieee80211_key_conf *keyconf)
key = container_of(keyconf, struct ieee80211_key, conf);
- assert_key_lock(key->local);
+ lockdep_assert_wiphy(key->local->hw.wiphy);
/*
* if key was uploaded, we assume the driver will/has remove(d)
diff --git a/net/mac80211/key.h b/net/mac80211/key.h
index f3df97df4b72..1fa0f4f78962 100644
--- a/net/mac80211/key.h
+++ b/net/mac80211/key.h
@@ -2,7 +2,7 @@
/*
* Copyright 2002-2004, Instant802 Networks, Inc.
* Copyright 2005, Devicescape Software, Inc.
- * Copyright (C) 2019, 2022 Intel Corporation
+ * Copyright (C) 2019, 2022-2023 Intel Corporation
*/
#ifndef IEEE80211_KEY_H
@@ -168,12 +168,7 @@ void ieee80211_reenable_keys(struct ieee80211_sub_if_data *sdata);
int ieee80211_key_switch_links(struct ieee80211_sub_if_data *sdata,
unsigned long del_links_mask,
unsigned long add_links_mask);
-
-#define key_mtx_dereference(local, ref) \
- rcu_dereference_protected(ref, lockdep_is_held(&((local)->key_mtx)))
-#define rcu_dereference_check_key_mtx(local, ref) \
- rcu_dereference_check(ref, lockdep_is_held(&((local)->key_mtx)))
-
-void ieee80211_delayed_tailroom_dec(struct work_struct *wk);
+void ieee80211_delayed_tailroom_dec(struct wiphy *wiphy,
+ struct wiphy_work *wk);
#endif /* IEEE80211_KEY_H */
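
key.c drops key_mtx entirely: assert_key_lock() becomes lockdep_assert_wiphy(), and the key_mtx_dereference() helper deleted from key.h above is replaced by cfg80211's generic wiphy_dereference(), which annotates the RCU access as protected by wiphy->mtx. A small sketch of that access pattern, with an invented my_keys structure standing in for the real key tables:

#include <net/cfg80211.h>

struct my_key;                          /* opaque for the example */

struct my_keys {
        struct my_key __rcu *default_key;
};

/* caller must hold wiphy->mtx; both lines below encode that for lockdep */
static struct my_key *my_default_key(struct wiphy *wiphy, struct my_keys *keys)
{
        lockdep_assert_wiphy(wiphy);

        return wiphy_dereference(wiphy, keys->default_key);
}

wiphy_dereference() expands to rcu_dereference_protected() with lockdep_is_held(&wiphy->mtx), the same condition the hunks above spell out by hand for the link_sta lookups.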
diff --git a/net/mac80211/link.c b/net/mac80211/link.c
index 6148208b320e..bf7bd880d062 100644
--- a/net/mac80211/link.c
+++ b/net/mac80211/link.c
@@ -37,16 +37,16 @@ void ieee80211_link_init(struct ieee80211_sub_if_data *sdata,
link_conf->link_id = link_id;
link_conf->vif = &sdata->vif;
- INIT_WORK(&link->csa_finalize_work,
- ieee80211_csa_finalize_work);
- INIT_WORK(&link->color_change_finalize_work,
- ieee80211_color_change_finalize_work);
+ wiphy_work_init(&link->csa_finalize_work,
+ ieee80211_csa_finalize_work);
+ wiphy_work_init(&link->color_change_finalize_work,
+ ieee80211_color_change_finalize_work);
INIT_DELAYED_WORK(&link->color_collision_detect_work,
ieee80211_color_collision_detection_work);
INIT_LIST_HEAD(&link->assigned_chanctx_list);
INIT_LIST_HEAD(&link->reserved_chanctx_list);
- INIT_DELAYED_WORK(&link->dfs_cac_timer_work,
- ieee80211_dfs_cac_timer_work);
+ wiphy_delayed_work_init(&link->dfs_cac_timer_work,
+ ieee80211_dfs_cac_timer_work);
if (!deflink) {
switch (sdata->vif.type) {
@@ -191,11 +191,11 @@ static int ieee80211_vif_update_links(struct ieee80211_sub_if_data *sdata,
struct ieee80211_link_data *old_data[IEEE80211_MLD_MAX_NUM_LINKS];
bool use_deflink = old_links == 0; /* set for error case */
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
memset(to_free, 0, sizeof(links));
- if (old_links == new_links)
+ if (old_links == new_links && dormant_links == sdata->vif.dormant_links)
return 0;
/* if there were no old links, need to clear the pointers to deflink */
@@ -235,6 +235,9 @@ static int ieee80211_vif_update_links(struct ieee80211_sub_if_data *sdata,
RCU_INIT_POINTER(sdata->vif.link_conf[link_id], NULL);
}
+ if (!old_links)
+ ieee80211_debugfs_recreate_netdev(sdata, true);
+
/* link them into data structures */
for_each_set_bit(link_id, &add, IEEE80211_MLD_MAX_NUM_LINKS) {
WARN_ON(!use_deflink &&
@@ -261,6 +264,8 @@ static int ieee80211_vif_update_links(struct ieee80211_sub_if_data *sdata,
old_links & old_active,
new_links & sdata->vif.active_links,
old);
+ if (!new_links)
+ ieee80211_debugfs_recreate_netdev(sdata, false);
}
if (ret) {
@@ -303,23 +308,6 @@ int ieee80211_vif_set_links(struct ieee80211_sub_if_data *sdata,
return ret;
}
-void ieee80211_vif_clear_links(struct ieee80211_sub_if_data *sdata)
-{
- struct link_container *links[IEEE80211_MLD_MAX_NUM_LINKS];
-
- /*
- * The locking here is different because when we free links
- * in the station case we need to be able to cancel_work_sync()
- * something that also takes the lock.
- */
-
- sdata_lock(sdata);
- ieee80211_vif_update_links(sdata, links, 0, 0);
- sdata_unlock(sdata);
-
- ieee80211_free_links(sdata, links);
-}
-
static int _ieee80211_set_active_links(struct ieee80211_sub_if_data *sdata,
u16 active_links)
{
@@ -447,17 +435,15 @@ static int _ieee80211_set_active_links(struct ieee80211_sub_if_data *sdata,
return 0;
}
-int __ieee80211_set_active_links(struct ieee80211_vif *vif, u16 active_links)
+int ieee80211_set_active_links(struct ieee80211_vif *vif, u16 active_links)
{
struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
struct ieee80211_local *local = sdata->local;
u16 old_active;
int ret;
- sdata_assert_lock(sdata);
- mutex_lock(&local->sta_mtx);
- mutex_lock(&local->mtx);
- mutex_lock(&local->key_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
old_active = sdata->vif.active_links;
if (old_active & active_links) {
/*
@@ -473,21 +459,6 @@ int __ieee80211_set_active_links(struct ieee80211_vif *vif, u16 active_links)
/* otherwise switch directly */
ret = _ieee80211_set_active_links(sdata, active_links);
}
- mutex_unlock(&local->key_mtx);
- mutex_unlock(&local->mtx);
- mutex_unlock(&local->sta_mtx);
-
- return ret;
-}
-
-int ieee80211_set_active_links(struct ieee80211_vif *vif, u16 active_links)
-{
- struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
- int ret;
-
- sdata_lock(sdata);
- ret = __ieee80211_set_active_links(vif, active_links);
- sdata_unlock(sdata);
return ret;
}
@@ -512,6 +483,6 @@ void ieee80211_set_active_links_async(struct ieee80211_vif *vif,
return;
sdata->desired_active_links = active_links;
- schedule_work(&sdata->activate_links_work);
+ wiphy_work_queue(sdata->local->hw.wiphy, &sdata->activate_links_work);
}
EXPORT_SYMBOL_GPL(ieee80211_set_active_links_async);
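
Delayed work gets the same treatment in key.c and link.c above: INIT_DELAYED_WORK()/schedule_delayed_work()/cancel_delayed_work_sync() become wiphy_delayed_work_init()/_queue()/_cancel(), and the handler recovers its context through the embedded .work member. A sketch, assuming a hypothetical my_link object:

#include <net/cfg80211.h>

struct my_link {
        struct wiphy *wiphy;
        bool cac_running;
        struct wiphy_delayed_work cac_work;     /* was: struct delayed_work */
};

static void my_cac_done(struct wiphy *wiphy, struct wiphy_work *work)
{
        /* note: container_of() goes through the embedded .work member */
        struct my_link *link = container_of(work, struct my_link,
                                            cac_work.work);

        lockdep_assert_wiphy(wiphy);
        link->cac_running = false;
}

static void my_link_init(struct my_link *link)
{
        wiphy_delayed_work_init(&link->cac_work, my_cac_done);
}

static void my_link_start_cac(struct my_link *link, unsigned long timeout)
{
        link->cac_running = true;
        wiphy_delayed_work_queue(link->wiphy, &link->cac_work, timeout);
}

static void my_link_stop(struct my_link *link)
{
        wiphy_delayed_work_cancel(link->wiphy, &link->cac_work);
        link->cac_running = false;
}

Since cancellation happens under the same wiphy mutex the handler runs with, the old concern about cancel_work_sync() needing a lock the worker also takes (the comment removed from ieee80211_vif_clear_links() above) presumably no longer applies, which is why that wrapper could be deleted.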
diff --git a/net/mac80211/main.c b/net/mac80211/main.c
index 24315d7b3126..033a5261ac3a 100644
--- a/net/mac80211/main.c
+++ b/net/mac80211/main.c
@@ -84,7 +84,8 @@ void ieee80211_configure_filter(struct ieee80211_local *local)
local->filter_flags = new_flags & ~(1<<31);
}
-static void ieee80211_reconfig_filter(struct work_struct *work)
+static void ieee80211_reconfig_filter(struct wiphy *wiphy,
+ struct wiphy_work *work)
{
struct ieee80211_local *local =
container_of(work, struct ieee80211_local, reconfig_filter);
@@ -206,7 +207,8 @@ int ieee80211_hw_config(struct ieee80211_local *local, u32 changed)
BSS_CHANGED_PS |\
BSS_CHANGED_IBSS |\
BSS_CHANGED_ARP_FILTER |\
- BSS_CHANGED_SSID)
+ BSS_CHANGED_SSID |\
+ BSS_CHANGED_MLD_VALID_LINKS)
void ieee80211_bss_info_change_notify(struct ieee80211_sub_if_data *sdata,
u64 changed)
@@ -317,7 +319,7 @@ static void ieee80211_tasklet_handler(struct tasklet_struct *t)
break;
case IEEE80211_TX_STATUS_MSG:
skb->pkt_type = 0;
- ieee80211_tx_status(&local->hw, skb);
+ ieee80211_tx_status_skb(&local->hw, skb);
break;
default:
WARN(1, "mac80211: Packet is of unknown type %d\n",
@@ -335,14 +337,12 @@ static void ieee80211_restart_work(struct work_struct *work)
struct ieee80211_sub_if_data *sdata;
int ret;
- /* wait for scan work complete */
flush_workqueue(local->workqueue);
- flush_work(&local->sched_scan_stopped_work);
- flush_work(&local->radar_detected_work);
rtnl_lock();
/* we might do interface manipulations, so need both */
wiphy_lock(local->hw.wiphy);
+ wiphy_work_flush(local->hw.wiphy, NULL);
WARN(test_bit(SCAN_HW_SCANNING, &local->scanning),
"%s called with hardware scan in progress\n", __func__);
@@ -366,21 +366,19 @@ static void ieee80211_restart_work(struct work_struct *work)
*/
wiphy_work_cancel(local->hw.wiphy,
&sdata->u.mgd.csa_connection_drop_work);
- if (sdata->vif.bss_conf.csa_active) {
- sdata_lock(sdata);
+ if (sdata->vif.bss_conf.csa_active)
ieee80211_sta_connection_lost(sdata,
WLAN_REASON_UNSPECIFIED,
false);
- sdata_unlock(sdata);
- }
}
- flush_delayed_work(&sdata->dec_tailroom_needed_wk);
+ wiphy_delayed_work_flush(local->hw.wiphy,
+ &sdata->dec_tailroom_needed_wk);
}
ieee80211_scan_cancel(local);
/* make sure any new ROC will consider local->in_reconfig */
- flush_delayed_work(&local->roc_work);
- flush_work(&local->hw_roc_done);
+ wiphy_delayed_work_flush(local->hw.wiphy, &local->roc_work);
+ wiphy_work_flush(local->hw.wiphy, &local->hw_roc_done);
/* wait for all packet processing to be done */
synchronize_net();
@@ -439,7 +437,7 @@ static int ieee80211_ifa_changed(struct notifier_block *nb,
if (!wdev)
return NOTIFY_DONE;
- if (wdev->wiphy != local->hw.wiphy)
+ if (wdev->wiphy != local->hw.wiphy || !wdev->registered)
return NOTIFY_DONE;
sdata = IEEE80211_DEV_TO_SUB_IF(ndev);
@@ -454,7 +452,25 @@ static int ieee80211_ifa_changed(struct notifier_block *nb,
return NOTIFY_DONE;
ifmgd = &sdata->u.mgd;
- sdata_lock(sdata);
+
+ /*
+ * The nested here is needed to convince lockdep that this is
+ * all OK. Yes, we lock the wiphy mutex here while we already
+ * hold the notifier rwsem, that's the normal case. And yes,
+ * we also acquire the notifier rwsem again when unregistering
+ * a netdev while we already hold the wiphy mutex, so it does
+ * look like a typical ABBA deadlock.
+ *
+ * However, both of these things happen with the RTNL held
+ * already. Therefore, they can't actually happen, since the
+ * lock orders really are ABC and ACB, which is fine due to
+ * the RTNL (A).
+ *
+ * We still need to prevent recursion, which is accomplished
+ * by the !wdev->registered check above.
+ */
+ mutex_lock_nested(&local->hw.wiphy->mtx, 1);
+ __acquire(&local->hw.wiphy->mtx);
/* Copy the addresses to the vif config list */
ifa = rtnl_dereference(idev->ifa_list);
@@ -471,7 +487,7 @@ static int ieee80211_ifa_changed(struct notifier_block *nb,
if (ifmgd->associated)
ieee80211_vif_cfg_change_notify(sdata, BSS_CHANGED_ARP_FILTER);
- sdata_unlock(sdata);
+ wiphy_unlock(local->hw.wiphy);
return NOTIFY_OK;
}
@@ -784,9 +800,6 @@ struct ieee80211_hw *ieee80211_alloc_hw_nm(size_t priv_data_len,
__hw_addr_init(&local->mc_list);
mutex_init(&local->iflist_mtx);
- mutex_init(&local->mtx);
-
- mutex_init(&local->key_mtx);
spin_lock_init(&local->filter_lock);
spin_lock_init(&local->rx_path_lock);
spin_lock_init(&local->queue_stop_reason_lock);
@@ -807,26 +820,25 @@ struct ieee80211_hw *ieee80211_alloc_hw_nm(size_t priv_data_len,
spin_lock_init(&local->handle_wake_tx_queue_lock);
INIT_LIST_HEAD(&local->chanctx_list);
- mutex_init(&local->chanctx_mtx);
- INIT_DELAYED_WORK(&local->scan_work, ieee80211_scan_work);
+ wiphy_delayed_work_init(&local->scan_work, ieee80211_scan_work);
INIT_WORK(&local->restart_work, ieee80211_restart_work);
- INIT_WORK(&local->radar_detected_work,
- ieee80211_dfs_radar_detected_work);
+ wiphy_work_init(&local->radar_detected_work,
+ ieee80211_dfs_radar_detected_work);
- INIT_WORK(&local->reconfig_filter, ieee80211_reconfig_filter);
+ wiphy_work_init(&local->reconfig_filter, ieee80211_reconfig_filter);
local->smps_mode = IEEE80211_SMPS_OFF;
- INIT_WORK(&local->dynamic_ps_enable_work,
- ieee80211_dynamic_ps_enable_work);
- INIT_WORK(&local->dynamic_ps_disable_work,
- ieee80211_dynamic_ps_disable_work);
+ wiphy_work_init(&local->dynamic_ps_enable_work,
+ ieee80211_dynamic_ps_enable_work);
+ wiphy_work_init(&local->dynamic_ps_disable_work,
+ ieee80211_dynamic_ps_disable_work);
timer_setup(&local->dynamic_ps_timer, ieee80211_dynamic_ps_timer, 0);
- INIT_WORK(&local->sched_scan_stopped_work,
- ieee80211_sched_scan_stopped_work);
+ wiphy_work_init(&local->sched_scan_stopped_work,
+ ieee80211_sched_scan_stopped_work);
spin_lock_init(&local->ack_status_lock);
idr_init(&local->ack_status_frames);
@@ -1055,6 +1067,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
supp_he = false;
supp_eht = false;
for (band = 0; band < NUM_NL80211_BANDS; band++) {
+ const struct ieee80211_sband_iftype_data *iftd;
struct ieee80211_supported_band *sband;
sband = local->hw.wiphy->bands[band];
@@ -1101,11 +1114,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
supp_ht = supp_ht || sband->ht_cap.ht_supported;
supp_vht = supp_vht || sband->vht_cap.vht_supported;
- for (i = 0; i < sband->n_iftype_data; i++) {
- const struct ieee80211_sband_iftype_data *iftd;
-
- iftd = &sband->iftype_data[i];
-
+ for_each_sband_iftype_data(sband, i, iftd) {
supp_he = supp_he || iftd->he_cap.has_he;
supp_eht = supp_eht || iftd->eht_cap.has_eht;
}
@@ -1446,6 +1455,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
ieee80211_remove_interfaces(local);
rtnl_unlock();
fail_rate:
+ ieee80211_txq_teardown_flows(local);
fail_flows:
ieee80211_led_exit(local);
destroy_workqueue(local->workqueue);
@@ -1482,13 +1492,17 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw)
*/
ieee80211_remove_interfaces(local);
+ ieee80211_txq_teardown_flows(local);
+
+ wiphy_lock(local->hw.wiphy);
+ wiphy_delayed_work_cancel(local->hw.wiphy, &local->roc_work);
+ wiphy_work_cancel(local->hw.wiphy, &local->reconfig_filter);
+ wiphy_work_cancel(local->hw.wiphy, &local->sched_scan_stopped_work);
+ wiphy_work_cancel(local->hw.wiphy, &local->radar_detected_work);
+ wiphy_unlock(local->hw.wiphy);
rtnl_unlock();
- cancel_delayed_work_sync(&local->roc_work);
cancel_work_sync(&local->restart_work);
- cancel_work_sync(&local->reconfig_filter);
- flush_work(&local->sched_scan_stopped_work);
- flush_work(&local->radar_detected_work);
ieee80211_clear_tx_pending(local);
rate_control_deinitialize(local);
@@ -1519,7 +1533,6 @@ void ieee80211_free_hw(struct ieee80211_hw *hw)
enum nl80211_band band;
mutex_destroy(&local->iflist_mtx);
- mutex_destroy(&local->mtx);
if (local->wiphy_ciphers_allocated) {
kfree(local->hw.wiphy->cipher_suites);
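
The unregister path above illustrates the teardown rule: wiphy works are cancelled while holding wiphy_lock(), and only the remaining plain workqueue items (restart_work here) keep using cancel_work_sync() outside of it. In sketch form, with an invented my_hw container:

#include <linux/workqueue.h>
#include <net/cfg80211.h>

struct my_hw {
        struct wiphy *wiphy;
        struct wiphy_work filter_work;
        struct wiphy_delayed_work roc_work;
        struct work_struct restart_work;
};

static void my_hw_teardown(struct my_hw *hw)
{
        /* wiphy work cancellation must happen with wiphy->mtx held */
        wiphy_lock(hw->wiphy);
        wiphy_delayed_work_cancel(hw->wiphy, &hw->roc_work);
        wiphy_work_cancel(hw->wiphy, &hw->filter_work);
        wiphy_unlock(hw->wiphy);

        /* ordinary workqueue items are still cancelled outside the lock */
        cancel_work_sync(&hw->restart_work);
}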
diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
index e31c312c124a..fccbcde3359a 100644
--- a/net/mac80211/mesh.c
+++ b/net/mac80211/mesh.c
@@ -56,6 +56,8 @@ static void ieee80211_mesh_housekeeping_timer(struct timer_list *t)
*
* This function checks if the mesh configuration of a mesh point matches the
* local mesh configuration, i.e. if both nodes belong to the same mesh network.
+ *
+ * Returns: %true if both nodes belong to the same mesh
*/
bool mesh_matches_local(struct ieee80211_sub_if_data *sdata,
struct ieee802_11_elems *ie)
@@ -119,6 +121,8 @@ bool mesh_matches_local(struct ieee80211_sub_if_data *sdata,
* mesh_peer_accepts_plinks - check if an mp is willing to establish peer links
*
* @ie: information elements of a management frame from the mesh peer
+ *
+ * Returns: %true if the mesh peer is willing to establish peer links
*/
bool mesh_peer_accepts_plinks(struct ieee802_11_elems *ie)
{
@@ -858,7 +862,7 @@ bool ieee80211_mesh_xmit_fast(struct ieee80211_sub_if_data *sdata,
* @meshsa: source address in the mesh. Same as TA, as frame is
* locally originated.
*
- * Return the length of the 802.11 (does not include a mesh control header)
+ * Returns: the length of the 802.11 frame header (excludes mesh control header)
*/
int ieee80211_fill_mesh_addresses(struct ieee80211_hdr *hdr, __le16 *fc,
const u8 *meshda, const u8 *meshsa)
@@ -891,7 +895,7 @@ int ieee80211_fill_mesh_addresses(struct ieee80211_hdr *hdr, __le16 *fc,
* @addr6: 2nd address in the ae header, which corresponds to addr6 of the
* mesh frame
*
- * Return the header length.
+ * Returns: the header length
*/
unsigned int ieee80211_new_mesh_header(struct ieee80211_sub_if_data *sdata,
struct ieee80211s_hdr *meshhdr,
@@ -1291,7 +1295,7 @@ ieee80211_mesh_process_chnswitch(struct ieee80211_sub_if_data *sdata,
ieee80211_conn_flags_t conn_flags = 0;
u32 vht_cap_info = 0;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
sband = ieee80211_get_sband(sdata);
if (!sband)
@@ -1559,7 +1563,7 @@ int ieee80211_mesh_csa_beacon(struct ieee80211_sub_if_data *sdata,
struct mesh_csa_settings *tmp_csa_settings;
int ret = 0;
- lockdep_assert_held(&sdata->wdev.mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
tmp_csa_settings = kmalloc(sizeof(*tmp_csa_settings),
GFP_ATOMIC);
@@ -1691,11 +1695,11 @@ void ieee80211_mesh_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
struct ieee80211_mgmt *mgmt;
u16 stype;
- sdata_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
/* mesh already went down */
if (!sdata->u.mesh.mesh_id_len)
- goto out;
+ return;
rx_status = IEEE80211_SKB_RXCB(skb);
mgmt = (struct ieee80211_mgmt *) skb->data;
@@ -1714,8 +1718,6 @@ void ieee80211_mesh_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
ieee80211_mesh_rx_mgmt_action(sdata, mgmt, skb->len, rx_status);
break;
}
-out:
- sdata_unlock(sdata);
}
static void mesh_bss_info_changed(struct ieee80211_sub_if_data *sdata)
@@ -1745,11 +1747,11 @@ void ieee80211_mesh_work(struct ieee80211_sub_if_data *sdata)
{
struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
- sdata_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
/* mesh already went down */
if (!sdata->u.mesh.mesh_id_len)
- goto out;
+ return;
if (ifmsh->preq_queue_len &&
time_after(jiffies,
@@ -1767,8 +1769,6 @@ void ieee80211_mesh_work(struct ieee80211_sub_if_data *sdata)
if (test_and_clear_bit(MESH_WORK_MBSS_CHANGED, &ifmsh->wrkq_flags))
mesh_bss_info_changed(sdata);
-out:
- sdata_unlock(sdata);
}
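
Most of the mesh changes here and in the following files are documentation-only: each kernel-doc comment gains a "Returns:" line so scripts/kernel-doc stops warning about undocumented return values. For reference, a hedged sketch of the expected comment layout on an invented helper:

#include <linux/bitops.h>

/**
 * my_count_peers - count peers marked active in a bitmap
 * @peers: bitmap with one bit per peer
 *
 * Summary line first, then the parameter descriptions, then the long
 * description, and finally the return value on its own "Returns:" line.
 *
 * Returns: the number of active peers
 */
static unsigned int my_count_peers(unsigned long peers)
{
        return hweight_long(peers);
}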
diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
index 51369072984e..775d52561c54 100644
--- a/net/mac80211/mesh_hwmp.c
+++ b/net/mac80211/mesh_hwmp.c
@@ -230,6 +230,8 @@ static void prepare_frame_for_deferred_tx(struct ieee80211_sub_if_data *sdata,
* Note: This function may be called with driver locks taken that the driver
* also acquires in the TX path. To avoid a deadlock we don't transmit the
* frame directly but add it to the pending queue instead.
+ *
+ * Returns: 0 on success
*/
int mesh_path_error_tx(struct ieee80211_sub_if_data *sdata,
u8 ttl, const u8 *target, u32 target_sn,
diff --git a/net/mac80211/mesh_pathtbl.c b/net/mac80211/mesh_pathtbl.c
index d32e304eeb4b..8a3f44ce3e04 100644
--- a/net/mac80211/mesh_pathtbl.c
+++ b/net/mac80211/mesh_pathtbl.c
@@ -1,6 +1,7 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Copyright (c) 2008, 2009 open80211s Ltd.
+ * Copyright (C) 2023 Intel Corporation
* Author: Luis Carlos Cobo <luisca@cozybit.com>
*/
@@ -173,6 +174,11 @@ static void prepare_for_gate(struct sk_buff *skb, char *dst_addr,
/**
* mesh_path_move_to_queue - Move or copy frames from one mpath queue to another
*
+ * @gate_mpath: An active mpath the frames will be sent to (i.e. the gate)
+ * @from_mpath: The failed mpath
+ * @copy: When true, copy all the frames to the new mpath queue. When false,
+ * move them.
+ *
* This function is used to transfer or copy frames from an unresolved mpath to
* a gate mpath. The function also adds the Address Extension field and
* updates the next hop.
@@ -181,11 +187,6 @@ static void prepare_for_gate(struct sk_buff *skb, char *dst_addr,
* destination addresses are updated.
*
* The gate mpath must be an active mpath with a valid mpath->next_hop.
- *
- * @gate_mpath: An active mpath the frames will be sent to (i.e. the gate)
- * @from_mpath: The failed mpath
- * @copy: When true, copy all the frames to the new mpath queue. When false,
- * move them.
*/
static void mesh_path_move_to_queue(struct mesh_path *gate_mpath,
struct mesh_path *from_mpath,
@@ -330,6 +331,8 @@ mpp_path_lookup_by_idx(struct ieee80211_sub_if_data *sdata, int idx)
/**
* mesh_path_add_gate - add the given mpath to a mesh gate to our path table
* @mpath: gate path to add to table
+ *
+ * Returns: 0 on success, -EEXIST
*/
int mesh_path_add_gate(struct mesh_path *mpath)
{
@@ -388,6 +391,8 @@ static void mesh_gate_del(struct mesh_table *tbl, struct mesh_path *mpath)
/**
* mesh_gate_num - number of gates known to this interface
* @sdata: subif data
+ *
+ * Returns: The number of gates
*/
int mesh_gate_num(struct ieee80211_sub_if_data *sdata)
{
@@ -648,7 +653,7 @@ void mesh_fast_tx_flush_addr(struct ieee80211_sub_if_data *sdata,
cache = &sdata->u.mesh.tx_cache;
spin_lock_bh(&cache->walk_lock);
- entry = rhashtable_lookup(&cache->rht, addr, fast_tx_rht_params);
+ entry = rhashtable_lookup_fast(&cache->rht, addr, fast_tx_rht_params);
if (entry)
mesh_fast_tx_entry_free(cache, entry);
spin_unlock_bh(&cache->walk_lock);
@@ -861,10 +866,9 @@ static void table_flush_by_iface(struct mesh_table *tbl)
/**
* mesh_path_flush_by_iface - Deletes all mesh paths associated with a given iface
*
- * This function deletes both mesh paths as well as mesh portal paths.
- *
* @sdata: interface data to match
*
+ * This function deletes both mesh paths as well as mesh portal paths.
*/
void mesh_path_flush_by_iface(struct ieee80211_sub_if_data *sdata)
{
@@ -944,6 +948,8 @@ void mesh_path_tx_pending(struct mesh_path *mpath)
* queue to that gate's queue. If there are more than one gates, the frames
* are copied from each gate to the next. After frames are copied, the
* mpath queues are emptied onto the transmission queue.
+ *
+ * Returns: 0 on success, -EHOSTUNREACH
*/
int mesh_path_send_to_gates(struct mesh_path *mpath)
{
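
One functional detail in mesh_pathtbl.c above: the fast-xmit flush switches from rhashtable_lookup() to rhashtable_lookup_fast(). The former must be called inside an RCU read-side section, while the _fast variant takes rcu_read_lock() itself, so it is safe from a caller that only holds the cache spinlock. A self-contained sketch with an invented entry type:

#include <linux/if_ether.h>
#include <linux/rhashtable.h>

struct my_entry {
        u8 addr[ETH_ALEN];
        struct rhash_head rhash;
};

static const struct rhashtable_params my_rht_params = {
        .key_len = ETH_ALEN,
        .key_offset = offsetof(struct my_entry, addr),
        .head_offset = offsetof(struct my_entry, rhash),
        .automatic_shrinking = true,
};

/* no rcu_read_lock() needed here; the _fast variant handles it */
static struct my_entry *my_lookup(struct rhashtable *rht, const u8 *addr)
{
        return rhashtable_lookup_fast(rht, addr, my_rht_params);
}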
diff --git a/net/mac80211/mesh_plink.c b/net/mac80211/mesh_plink.c
index a1e526419e9d..dbabeefe4515 100644
--- a/net/mac80211/mesh_plink.c
+++ b/net/mac80211/mesh_plink.c
@@ -153,6 +153,8 @@ out:
* selected if any non-HT peers are present in our MBSS. 20MHz-protection mode
* is selected if all peers in our 20/40MHz MBSS support HT and at least one
* HT20 peer is present. Otherwise no-protection mode is selected.
+ *
+ * Returns: BSS_CHANGED_HT or 0 for no change
*/
static u64 mesh_set_ht_prot_mode(struct ieee80211_sub_if_data *sdata)
{
@@ -362,7 +364,7 @@ free:
* Mesh paths with this peer as next hop should be flushed
* by the caller outside of plink_lock.
*
- * Returns beacon changed flag if the beacon content changed.
+ * Returns: beacon changed flag if the beacon content changed.
*
* Locking: the caller must hold sta->mesh->plink_lock
*/
@@ -390,6 +392,8 @@ static u64 __mesh_plink_deactivate(struct sta_info *sta)
* @sta: mesh peer link to deactivate
*
* All mesh paths with this peer as next hop will be flushed
+ *
+ * Returns: beacon changed flag if the beacon content changed.
*/
u64 mesh_plink_deactivate(struct sta_info *sta)
{
diff --git a/net/mac80211/mesh_ps.c b/net/mac80211/mesh_ps.c
index 35eacca43e49..20e022a03933 100644
--- a/net/mac80211/mesh_ps.c
+++ b/net/mac80211/mesh_ps.c
@@ -15,6 +15,8 @@
/**
* mps_qos_null_get - create pre-addressed QoS Null frame for mesh powersave
* @sta: the station to get the frame for
+ *
+ * Returns: A newly allocated SKB
*/
static struct sk_buff *mps_qos_null_get(struct sta_info *sta)
{
@@ -77,6 +79,8 @@ static void mps_qos_null_tx(struct sta_info *sta)
*
* sets the non-peer power mode and triggers the driver PS (re-)configuration
* Return BSS_CHANGED_BEACON if a beacon update is necessary.
+ *
+ * Returns: BSS_CHANGED_BEACON if a beacon update is in order.
*/
u64 ieee80211_mps_local_status_update(struct ieee80211_sub_if_data *sdata)
{
@@ -147,7 +151,7 @@ u64 ieee80211_mps_local_status_update(struct ieee80211_sub_if_data *sdata)
*
* @sta: mesh STA
* @pm: the power mode to set
- * Return BSS_CHANGED_BEACON if a beacon update is in order.
+ * Returns: BSS_CHANGED_BEACON if a beacon update is in order.
*/
u64 ieee80211_mps_set_sta_local_pm(struct sta_info *sta,
enum nl80211_mesh_power_mode pm)
diff --git a/net/mac80211/mesh_sync.c b/net/mac80211/mesh_sync.c
index 9e342cc2504c..8cf3f395f52f 100644
--- a/net/mac80211/mesh_sync.c
+++ b/net/mac80211/mesh_sync.c
@@ -3,7 +3,7 @@
* Copyright 2011-2012, Pavel Zubarev <pavel.zubarev@gmail.com>
* Copyright 2011-2012, Marco Porsch <marco.porsch@s2005.tu-chemnitz.de>
* Copyright 2011-2012, cozybit Inc.
- * Copyright (C) 2021 Intel Corporation
+ * Copyright (C) 2021,2023 Intel Corporation
*/
#include "ieee80211_i.h"
@@ -37,6 +37,8 @@ struct sync_method {
* mesh_peer_tbtt_adjusting - check if an mp is currently adjusting its TBTT
*
* @cfg: mesh config element from the mesh peer (or %NULL)
+ *
+ * Returns: If the mesh peer is currently adjusting its TBTT
*/
static bool mesh_peer_tbtt_adjusting(const struct ieee80211_meshconf_ie *cfg)
{
diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
index 0c9198997482..887b496f2b81 100644
--- a/net/mac80211/mlme.c
+++ b/net/mac80211/mlme.c
@@ -110,7 +110,8 @@ ieee80211_extract_dis_subch_bmap(const struct ieee80211_eht_operation *eht_oper,
return 0;
/* set 160/320 supported to get the full AP definition */
- ieee80211_chandef_eht_oper(eht_oper, true, true, &ap_chandef);
+ ieee80211_chandef_eht_oper((const void *)eht_oper->optional,
+ true, true, &ap_chandef);
ap_center_freq = ap_chandef.center_freq1;
ap_bw = 20 * BIT(u8_get_bits(info->control,
IEEE80211_EHT_OPER_CHAN_WIDTH));
@@ -175,7 +176,7 @@ ieee80211_handle_puncturing_bitmap(struct ieee80211_link_data *link,
static void run_again(struct ieee80211_sub_if_data *sdata,
unsigned long timeout)
{
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (!timer_pending(&sdata->u.mgd.timer) ||
time_before(timeout, sdata->u.mgd.timer.expires))
@@ -388,7 +389,7 @@ ieee80211_determine_chantype(struct ieee80211_sub_if_data *sdata,
if (eht_oper && (eht_oper->params & IEEE80211_EHT_OPER_INFO_PRESENT)) {
struct cfg80211_chan_def eht_chandef = *chandef;
- ieee80211_chandef_eht_oper(eht_oper,
+ ieee80211_chandef_eht_oper((const void *)eht_oper->optional,
eht_chandef.width ==
NL80211_CHAN_WIDTH_160,
false, &eht_chandef);
@@ -830,7 +831,6 @@ static void ieee80211_assoc_add_rates(struct sk_buff *skb,
struct ieee80211_supported_band *sband,
struct ieee80211_mgd_assoc_data *assoc_data)
{
- unsigned int shift = ieee80211_chanwidth_get_shift(width);
unsigned int rates_len, supp_rates_len;
u32 rates = 0;
int i, count;
@@ -869,8 +869,7 @@ static void ieee80211_assoc_add_rates(struct sk_buff *skb,
count = 0;
for (i = 0; i < sband->n_bitrates; i++) {
if (BIT(i) & rates) {
- int rate = DIV_ROUND_UP(sband->bitrates[i].bitrate,
- 5 * (1 << shift));
+ int rate = DIV_ROUND_UP(sband->bitrates[i].bitrate, 5);
*pos++ = (u8)rate;
if (++count == 8)
break;
@@ -886,8 +885,7 @@ static void ieee80211_assoc_add_rates(struct sk_buff *skb,
if (BIT(i) & rates) {
int rate;
- rate = DIV_ROUND_UP(sband->bitrates[i].bitrate,
- 5 * (1 << shift));
+ rate = DIV_ROUND_UP(sband->bitrates[i].bitrate, 5);
*pos++ = (u8)rate;
}
}
@@ -1401,7 +1399,7 @@ static int ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
assoc_data->ie,
assoc_data->ie_len);
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
size = local->hw.extra_tx_headroom +
sizeof(*mgmt) + /* bit too much but doesn't matter */
@@ -1586,6 +1584,7 @@ static int ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
ifmgd->assoc_req_ies_len = pos - ie_start;
+ info.link_id = assoc_data->assoc_link_id;
drv_mgd_prepare_tx(local, sdata, &info);
IEEE80211_SKB_CB(skb)->flags |= IEEE80211_TX_INTFL_DONT_ENCRYPT;
@@ -1689,15 +1688,13 @@ static void ieee80211_chswitch_work(struct wiphy *wiphy,
if (!ieee80211_sdata_running(sdata))
return;
- sdata_lock(sdata);
- mutex_lock(&local->mtx);
- mutex_lock(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!ifmgd->associated)
- goto out;
+ return;
if (!link->conf->csa_active)
- goto out;
+ return;
/*
* using reservation isn't immediate as it may be deferred until later
@@ -1713,7 +1710,7 @@ static void ieee80211_chswitch_work(struct wiphy *wiphy,
* reservations
*/
if (link->reserved_ready)
- goto out;
+ return;
ret = ieee80211_link_use_reserved_context(link);
if (ret) {
@@ -1722,10 +1719,8 @@ static void ieee80211_chswitch_work(struct wiphy *wiphy,
ret);
wiphy_work_queue(sdata->local->hw.wiphy,
&ifmgd->csa_connection_drop_work);
- goto out;
}
-
- goto out;
+ return;
}
if (!cfg80211_chandef_identical(&link->conf->chandef,
@@ -1734,18 +1729,13 @@ static void ieee80211_chswitch_work(struct wiphy *wiphy,
"failed to finalize channel switch, disconnecting\n");
wiphy_work_queue(sdata->local->hw.wiphy,
&ifmgd->csa_connection_drop_work);
- goto out;
+ return;
}
link->u.mgd.csa_waiting_bcn = true;
ieee80211_sta_reset_beacon_monitor(sdata);
ieee80211_sta_reset_conn_monitor(sdata);
-
-out:
- mutex_unlock(&local->chanctx_mtx);
- mutex_unlock(&local->mtx);
- sdata_unlock(sdata);
}
static void ieee80211_chswitch_post_beacon(struct ieee80211_link_data *link)
@@ -1755,7 +1745,7 @@ static void ieee80211_chswitch_post_beacon(struct ieee80211_link_data *link)
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
int ret;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
WARN_ON(!link->conf->csa_active);
@@ -1773,7 +1763,7 @@ static void ieee80211_chswitch_post_beacon(struct ieee80211_link_data *link)
*/
link->u.mgd.beacon_crc_valid = false;
- ret = drv_post_channel_switch(sdata);
+ ret = drv_post_channel_switch(link);
if (ret) {
sdata_info(sdata,
"driver post channel switch failed, disconnecting\n");
@@ -1782,28 +1772,38 @@ static void ieee80211_chswitch_post_beacon(struct ieee80211_link_data *link)
return;
}
- cfg80211_ch_switch_notify(sdata->dev, &link->reserved_chandef, 0, 0);
+ cfg80211_ch_switch_notify(sdata->dev, &link->reserved_chandef,
+ link->link_id, 0);
}
-void ieee80211_chswitch_done(struct ieee80211_vif *vif, bool success)
+void ieee80211_chswitch_done(struct ieee80211_vif *vif, bool success,
+ unsigned int link_id)
{
struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
- struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
- if (WARN_ON(ieee80211_vif_is_mld(&sdata->vif)))
- success = false;
+ trace_api_chswitch_done(sdata, success, link_id);
+
+ rcu_read_lock();
- trace_api_chswitch_done(sdata, success);
if (!success) {
sdata_info(sdata,
"driver channel switch failed, disconnecting\n");
wiphy_work_queue(sdata->local->hw.wiphy,
- &ifmgd->csa_connection_drop_work);
+ &sdata->u.mgd.csa_connection_drop_work);
} else {
+ struct ieee80211_link_data *link =
+ rcu_dereference(sdata->link[link_id]);
+
+ if (WARN_ON(!link)) {
+ rcu_read_unlock();
+ return;
+ }
+
wiphy_delayed_work_queue(sdata->local->hw.wiphy,
- &sdata->deflink.u.mgd.chswitch_work,
- 0);
+ &link->u.mgd.chswitch_work, 0);
}
+
+ rcu_read_unlock();
}
EXPORT_SYMBOL(ieee80211_chswitch_done);
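
The reworked ieee80211_chswitch_done() above now takes a link_id and resolves the per-link data under rcu_read_lock() before queueing the wiphy-locked switch work, rather than assuming the default link. The lookup pattern, reduced to a hedged sketch with an invented vif structure and link count:

#include <linux/rcupdate.h>

#define MY_MAX_LINKS 15                 /* stand-in for the MLD link limit */

struct my_link;                         /* opaque for the example */

struct my_vif {
        struct my_link __rcu *link[MY_MAX_LINKS];
};

/* safe from any context, provided the caller holds the RCU read lock */
static struct my_link *my_get_link(struct my_vif *vif, unsigned int link_id)
{
        if (link_id >= MY_MAX_LINKS)
                return NULL;

        return rcu_dereference(vif->link[link_id]);
}

A caller brackets this with rcu_read_lock()/rcu_read_unlock() and only queues follow-up work if the link is non-NULL, mirroring the success path in the hunk above; queueing a wiphy_work is fine in that atomic context because it does not take the wiphy mutex itself.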
@@ -1813,14 +1813,12 @@ ieee80211_sta_abort_chanswitch(struct ieee80211_link_data *link)
struct ieee80211_sub_if_data *sdata = link->sdata;
struct ieee80211_local *local = sdata->local;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!local->ops->abort_channel_switch)
return;
- mutex_lock(&local->mtx);
-
- mutex_lock(&local->chanctx_mtx);
ieee80211_link_unreserve_chanctx(link);
- mutex_unlock(&local->chanctx_mtx);
if (link->csa_block_tx)
ieee80211_wake_vif_queues(local, sdata,
@@ -1829,8 +1827,6 @@ ieee80211_sta_abort_chanswitch(struct ieee80211_link_data *link)
link->csa_block_tx = false;
link->conf->csa_active = false;
- mutex_unlock(&local->mtx);
-
drv_abort_channel_switch(sdata);
}
@@ -1853,7 +1849,7 @@ ieee80211_sta_process_chanswitch(struct ieee80211_link_data *link,
unsigned long timeout;
int res;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!cbss)
return;
@@ -1875,7 +1871,7 @@ ieee80211_sta_process_chanswitch(struct ieee80211_link_data *link,
}
if (res < 0)
- goto lock_and_drop_connection;
+ goto drop_connection;
if (beacon && link->conf->csa_active &&
!link->u.mgd.csa_waiting_bcn) {
@@ -1897,7 +1893,7 @@ ieee80211_sta_process_chanswitch(struct ieee80211_link_data *link,
csa_ie.chandef.chan->center_freq,
csa_ie.chandef.width, csa_ie.chandef.center_freq1,
csa_ie.chandef.center_freq2);
- goto lock_and_drop_connection;
+ goto drop_connection;
}
if (!cfg80211_chandef_usable(local->hw.wiphy, &csa_ie.chandef,
@@ -1912,7 +1908,7 @@ ieee80211_sta_process_chanswitch(struct ieee80211_link_data *link,
csa_ie.chandef.width, csa_ie.chandef.center_freq1,
csa_ie.chandef.freq1_offset,
csa_ie.chandef.center_freq2);
- goto lock_and_drop_connection;
+ goto drop_connection;
}
if (cfg80211_chandef_identical(&csa_ie.chandef,
@@ -1935,10 +1931,8 @@ ieee80211_sta_process_chanswitch(struct ieee80211_link_data *link,
*/
ieee80211_teardown_tdls_peers(sdata);
- mutex_lock(&local->mtx);
- mutex_lock(&local->chanctx_mtx);
conf = rcu_dereference_protected(link->conf->chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (!conf) {
sdata_info(sdata,
"no channel context assigned to vif?, disconnecting\n");
@@ -1968,7 +1962,6 @@ ieee80211_sta_process_chanswitch(struct ieee80211_link_data *link,
res);
goto drop_connection;
}
- mutex_unlock(&local->chanctx_mtx);
link->conf->csa_active = true;
link->csa_chandef = csa_ie.chandef;
@@ -1979,7 +1972,6 @@ ieee80211_sta_process_chanswitch(struct ieee80211_link_data *link,
if (link->csa_block_tx)
ieee80211_stop_vif_queues(local, sdata,
IEEE80211_QUEUE_STOP_REASON_CSA);
- mutex_unlock(&local->mtx);
cfg80211_ch_switch_started_notify(sdata->dev, &csa_ie.chandef,
link->link_id, csa_ie.count,
@@ -1998,9 +1990,6 @@ ieee80211_sta_process_chanswitch(struct ieee80211_link_data *link,
&link->u.mgd.chswitch_work,
timeout);
return;
- lock_and_drop_connection:
- mutex_lock(&local->mtx);
- mutex_lock(&local->chanctx_mtx);
drop_connection:
/*
* This is just so that the disconnect flow will know that
@@ -2014,8 +2003,6 @@ ieee80211_sta_process_chanswitch(struct ieee80211_link_data *link,
wiphy_work_queue(sdata->local->hw.wiphy,
&ifmgd->csa_connection_drop_work);
- mutex_unlock(&local->chanctx_mtx);
- mutex_unlock(&local->mtx);
}
static bool
@@ -2211,7 +2198,8 @@ static void ieee80211_change_ps(struct ieee80211_local *local)
conf->flags &= ~IEEE80211_CONF_PS;
ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_PS);
del_timer_sync(&local->dynamic_ps_timer);
- cancel_work_sync(&local->dynamic_ps_enable_work);
+ wiphy_work_cancel(local->hw.wiphy,
+ &local->dynamic_ps_enable_work);
}
}
@@ -2308,7 +2296,8 @@ void ieee80211_recalc_ps_vif(struct ieee80211_sub_if_data *sdata)
}
}
-void ieee80211_dynamic_ps_disable_work(struct work_struct *work)
+void ieee80211_dynamic_ps_disable_work(struct wiphy *wiphy,
+ struct wiphy_work *work)
{
struct ieee80211_local *local =
container_of(work, struct ieee80211_local,
@@ -2325,7 +2314,8 @@ void ieee80211_dynamic_ps_disable_work(struct work_struct *work)
false);
}
-void ieee80211_dynamic_ps_enable_work(struct work_struct *work)
+void ieee80211_dynamic_ps_enable_work(struct wiphy *wiphy,
+ struct wiphy_work *work)
{
struct ieee80211_local *local =
container_of(work, struct ieee80211_local,
@@ -2398,26 +2388,25 @@ void ieee80211_dynamic_ps_timer(struct timer_list *t)
{
struct ieee80211_local *local = from_timer(local, t, dynamic_ps_timer);
- ieee80211_queue_work(&local->hw, &local->dynamic_ps_enable_work);
+ wiphy_work_queue(local->hw.wiphy, &local->dynamic_ps_enable_work);
}
-void ieee80211_dfs_cac_timer_work(struct work_struct *work)
+void ieee80211_dfs_cac_timer_work(struct wiphy *wiphy, struct wiphy_work *work)
{
- struct delayed_work *delayed_work = to_delayed_work(work);
struct ieee80211_link_data *link =
- container_of(delayed_work, struct ieee80211_link_data,
- dfs_cac_timer_work);
+ container_of(work, struct ieee80211_link_data,
+ dfs_cac_timer_work.work);
struct cfg80211_chan_def chandef = link->conf->chandef;
struct ieee80211_sub_if_data *sdata = link->sdata;
- mutex_lock(&sdata->local->mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
if (sdata->wdev.cac_started) {
ieee80211_link_release_channel(link);
cfg80211_cac_event(sdata->dev, &chandef,
NL80211_RADAR_CAC_FINISHED,
GFP_KERNEL);
}
- mutex_unlock(&sdata->local->mtx);
}
static bool
@@ -2487,8 +2476,10 @@ __ieee80211_sta_handle_tspec_ac_params(struct ieee80211_sub_if_data *sdata)
ac);
tx_tspec->action = TX_TSPEC_ACTION_NONE;
ret = true;
- schedule_delayed_work(&ifmgd->tx_tspec_wk,
- tx_tspec->time_slice_start + HZ - now + 1);
+ wiphy_delayed_work_queue(local->hw.wiphy,
+ &ifmgd->tx_tspec_wk,
+ tx_tspec->time_slice_start +
+ HZ - now + 1);
break;
case TX_TSPEC_ACTION_NONE:
/* nothing now */
@@ -2506,7 +2497,8 @@ void ieee80211_sta_handle_tspec_ac_params(struct ieee80211_sub_if_data *sdata)
BSS_CHANGED_QOS);
}
-static void ieee80211_sta_handle_tspec_ac_params_wk(struct work_struct *work)
+static void ieee80211_sta_handle_tspec_ac_params_wk(struct wiphy *wiphy,
+ struct wiphy_work *work)
{
struct ieee80211_sub_if_data *sdata;
@@ -2681,7 +2673,7 @@ ieee80211_sta_wmm_params(struct ieee80211_local *local,
static void __ieee80211_stop_poll(struct ieee80211_sub_if_data *sdata)
{
- lockdep_assert_held(&sdata->local->mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
sdata->u.mgd.flags &= ~IEEE80211_STA_CONNECTION_POLL;
ieee80211_run_deferred_scan(sdata->local);
@@ -2689,9 +2681,9 @@ static void __ieee80211_stop_poll(struct ieee80211_sub_if_data *sdata)
static void ieee80211_stop_poll(struct ieee80211_sub_if_data *sdata)
{
- mutex_lock(&sdata->local->mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
__ieee80211_stop_poll(sdata);
- mutex_unlock(&sdata->local->mtx);
}
static u64 ieee80211_handle_bss_capability(struct ieee80211_link_data *link,
@@ -2809,6 +2801,8 @@ static void ieee80211_set_associated(struct ieee80211_sub_if_data *sdata,
u64 vif_changed = BSS_CHANGED_ASSOC;
unsigned int link_id;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
sdata->u.mgd.associated = true;
for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) {
@@ -2870,9 +2864,7 @@ static void ieee80211_set_associated(struct ieee80211_sub_if_data *sdata,
vif_changed | changed[0]);
}
- mutex_lock(&local->iflist_mtx);
ieee80211_recalc_ps(local);
- mutex_unlock(&local->iflist_mtx);
/* leave this here to not change ordering in non-MLO cases */
if (!ieee80211_vif_is_mld(&sdata->vif))
@@ -2894,7 +2886,7 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
.subtype = stype,
};
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (WARN_ON_ONCE(tx && !frame_buf))
return;
@@ -2945,9 +2937,22 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
* deauthentication frame by calling mgd_prepare_tx, if the
* driver requested so.
*/
- if (ieee80211_hw_check(&local->hw, DEAUTH_NEED_MGD_TX_PREP) &&
- !sdata->deflink.u.mgd.have_beacon) {
- drv_mgd_prepare_tx(sdata->local, sdata, &info);
+ if (ieee80211_hw_check(&local->hw, DEAUTH_NEED_MGD_TX_PREP)) {
+ for (link_id = 0; link_id < ARRAY_SIZE(sdata->link);
+ link_id++) {
+ struct ieee80211_link_data *link;
+
+ link = sdata_dereference(sdata->link[link_id],
+ sdata);
+ if (!link)
+ continue;
+ if (link->u.mgd.have_beacon)
+ break;
+ }
+ if (link_id == IEEE80211_MLD_MAX_NUM_LINKS) {
+ info.link_id = ffs(sdata->vif.active_links) - 1;
+ drv_mgd_prepare_tx(sdata->local, sdata, &info);
+ }
}
ieee80211_send_deauth_disassoc(sdata, sdata->vif.cfg.ap_addr,
@@ -3003,7 +3008,7 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
sdata->deflink.ap_power_level = IEEE80211_UNSET_POWER_LEVEL;
del_timer_sync(&local->dynamic_ps_timer);
- cancel_work_sync(&local->dynamic_ps_enable_work);
+ wiphy_work_cancel(local->hw.wiphy, &local->dynamic_ps_enable_work);
/* Disable ARP filtering */
if (sdata->vif.cfg.arp_addr_cnt)
@@ -3035,7 +3040,6 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
ifmgd->flags = 0;
sdata->deflink.u.mgd.conn_flags = 0;
- mutex_lock(&local->mtx);
for (link_id = 0; link_id < ARRAY_SIZE(sdata->link); link_id++) {
struct ieee80211_link_data *link;
@@ -3054,17 +3058,19 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
IEEE80211_QUEUE_STOP_REASON_CSA);
sdata->deflink.csa_block_tx = false;
}
- mutex_unlock(&local->mtx);
/* existing TX TSPEC sessions no longer exist */
memset(ifmgd->tx_tspec, 0, sizeof(ifmgd->tx_tspec));
- cancel_delayed_work_sync(&ifmgd->tx_tspec_wk);
+ wiphy_delayed_work_cancel(local->hw.wiphy, &ifmgd->tx_tspec_wk);
sdata->vif.bss_conf.pwr_reduction = 0;
sdata->vif.bss_conf.tx_pwr_env_num = 0;
memset(sdata->vif.bss_conf.tx_pwr_env, 0,
sizeof(sdata->vif.bss_conf.tx_pwr_env));
+ memset(&sdata->u.mgd.ttlm_info, 0,
+ sizeof(sdata->u.mgd.ttlm_info));
+ wiphy_delayed_work_cancel(sdata->local->hw.wiphy, &ifmgd->ttlm_work);
ieee80211_vif_set_links(sdata, 0, 0);
}
@@ -3073,18 +3079,17 @@ static void ieee80211_reset_ap_probe(struct ieee80211_sub_if_data *sdata)
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
struct ieee80211_local *local = sdata->local;
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!(ifmgd->flags & IEEE80211_STA_CONNECTION_POLL))
- goto out;
+ return;
__ieee80211_stop_poll(sdata);
- mutex_lock(&local->iflist_mtx);
ieee80211_recalc_ps(local);
- mutex_unlock(&local->iflist_mtx);
if (ieee80211_hw_check(&sdata->local->hw, CONNECTION_MONITOR))
- goto out;
+ return;
/*
* We've received a probe response, but are not sure whether
@@ -3096,8 +3101,6 @@ static void ieee80211_reset_ap_probe(struct ieee80211_sub_if_data *sdata)
mod_timer(&ifmgd->conn_mon_timer,
round_jiffies_up(jiffies +
IEEE80211_CONNECTION_IDLE_TIME));
-out:
- mutex_unlock(&local->mtx);
}
static void ieee80211_sta_tx_wmm_ac_notify(struct ieee80211_sub_if_data *sdata,
@@ -3126,7 +3129,8 @@ static void ieee80211_sta_tx_wmm_ac_notify(struct ieee80211_sub_if_data *sdata,
if (tx_tspec->downgraded) {
tx_tspec->action = TX_TSPEC_ACTION_STOP_DOWNGRADE;
- schedule_delayed_work(&ifmgd->tx_tspec_wk, 0);
+ wiphy_delayed_work_queue(sdata->local->hw.wiphy,
+ &ifmgd->tx_tspec_wk, 0);
}
}
@@ -3138,7 +3142,8 @@ static void ieee80211_sta_tx_wmm_ac_notify(struct ieee80211_sub_if_data *sdata,
if (tx_tspec->consumed_tx_time >= tx_tspec->admitted_time) {
tx_tspec->downgraded = true;
tx_tspec->action = TX_TSPEC_ACTION_DOWNGRADE;
- schedule_delayed_work(&ifmgd->tx_tspec_wk, 0);
+ wiphy_delayed_work_queue(sdata->local->hw.wiphy,
+ &ifmgd->tx_tspec_wk, 0);
}
}
@@ -3179,6 +3184,8 @@ static void ieee80211_mgd_probe_ap_send(struct ieee80211_sub_if_data *sdata)
u8 unicast_limit = max(1, max_probe_tries - 3);
struct sta_info *sta;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
if (WARN_ON(ieee80211_vif_is_mld(&sdata->vif)))
return;
@@ -3200,11 +3207,9 @@ static void ieee80211_mgd_probe_ap_send(struct ieee80211_sub_if_data *sdata)
ifmgd->probe_send_count++;
if (dst) {
- mutex_lock(&sdata->local->sta_mtx);
sta = sta_info_get(sdata, dst);
if (!WARN_ON(!sta))
ieee80211_check_fast_rx(sta);
- mutex_unlock(&sdata->local->sta_mtx);
}
if (ieee80211_hw_check(&sdata->local->hw, REPORTS_TX_ACK_STATUS)) {
@@ -3227,29 +3232,24 @@ static void ieee80211_mgd_probe_ap(struct ieee80211_sub_if_data *sdata,
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
bool already = false;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
if (WARN_ON_ONCE(ieee80211_vif_is_mld(&sdata->vif)))
return;
if (!ieee80211_sdata_running(sdata))
return;
- sdata_lock(sdata);
-
if (!ifmgd->associated)
- goto out;
-
- mutex_lock(&sdata->local->mtx);
+ return;
- if (sdata->local->tmp_channel || sdata->local->scanning) {
- mutex_unlock(&sdata->local->mtx);
- goto out;
- }
+ if (sdata->local->tmp_channel || sdata->local->scanning)
+ return;
if (sdata->local->suspending) {
/* reschedule after resume */
- mutex_unlock(&sdata->local->mtx);
ieee80211_reset_ap_probe(sdata);
- goto out;
+ return;
}
if (beacon) {
@@ -3276,19 +3276,13 @@ static void ieee80211_mgd_probe_ap(struct ieee80211_sub_if_data *sdata,
ifmgd->flags |= IEEE80211_STA_CONNECTION_POLL;
- mutex_unlock(&sdata->local->mtx);
-
if (already)
- goto out;
+ return;
- mutex_lock(&sdata->local->iflist_mtx);
ieee80211_recalc_ps(sdata->local);
- mutex_unlock(&sdata->local->iflist_mtx);
ifmgd->probe_send_count = 0;
ieee80211_mgd_probe_ap_send(sdata);
- out:
- sdata_unlock(sdata);
}
struct sk_buff *ieee80211_ap_probereq_get(struct ieee80211_hw *hw,
@@ -3301,12 +3295,12 @@ struct sk_buff *ieee80211_ap_probereq_get(struct ieee80211_hw *hw,
const struct element *ssid;
int ssid_len;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
if (WARN_ON(sdata->vif.type != NL80211_IFTYPE_STATION ||
ieee80211_vif_is_mld(&sdata->vif)))
return NULL;
- sdata_assert_lock(sdata);
-
if (ifmgd->associated)
cbss = sdata->deflink.u.mgd.bss;
else if (ifmgd->auth_data)
@@ -3353,13 +3347,15 @@ static void ieee80211_report_disconnect(struct ieee80211_sub_if_data *sdata,
drv_event_callback(sdata->local, sdata, &event);
}
-static void ___ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
+static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
{
struct ieee80211_local *local = sdata->local;
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
bool tx;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!ifmgd->associated)
return;
@@ -3395,7 +3391,6 @@ static void ___ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
WLAN_REASON_DEAUTH_LEAVING :
WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY,
tx, frame_buf);
- mutex_lock(&local->mtx);
/* the other links will be destroyed */
sdata->vif.bss_conf.csa_active = false;
sdata->deflink.u.mgd.csa_waiting_bcn = false;
@@ -3404,7 +3399,6 @@ static void ___ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
IEEE80211_QUEUE_STOP_REASON_CSA);
sdata->deflink.csa_block_tx = false;
}
- mutex_unlock(&local->mtx);
ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), tx,
WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY,
@@ -3412,13 +3406,6 @@ static void ___ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
ifmgd->reconnect = false;
}
-static void __ieee80211_disconnect(struct ieee80211_sub_if_data *sdata)
-{
- sdata_lock(sdata);
- ___ieee80211_disconnect(sdata);
- sdata_unlock(sdata);
-}
-
static void ieee80211_beacon_connection_loss_work(struct wiphy *wiphy,
struct wiphy_work *work)
{
@@ -3500,7 +3487,7 @@ static void ieee80211_destroy_auth_data(struct ieee80211_sub_if_data *sdata,
{
struct ieee80211_mgd_auth_data *auth_data = sdata->u.mgd.auth_data;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (!assoc) {
/*
@@ -3518,10 +3505,8 @@ static void ieee80211_destroy_auth_data(struct ieee80211_sub_if_data *sdata,
BSS_CHANGED_BSSID);
sdata->u.mgd.flags = 0;
- mutex_lock(&sdata->local->mtx);
ieee80211_link_release_channel(&sdata->deflink);
ieee80211_vif_set_links(sdata, 0, 0);
- mutex_unlock(&sdata->local->mtx);
}
cfg80211_put_bss(sdata->local->hw.wiphy, auth_data->bss);
@@ -3541,7 +3526,7 @@ static void ieee80211_destroy_assoc_data(struct ieee80211_sub_if_data *sdata,
{
struct ieee80211_mgd_assoc_data *assoc_data = sdata->u.mgd.assoc_data;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (status != ASSOC_SUCCESS) {
/*
@@ -3577,10 +3562,8 @@ static void ieee80211_destroy_assoc_data(struct ieee80211_sub_if_data *sdata,
cfg80211_assoc_failure(sdata->dev, &data);
}
- mutex_lock(&sdata->local->mtx);
ieee80211_link_release_channel(&sdata->deflink);
ieee80211_vif_set_links(sdata, 0, 0);
- mutex_unlock(&sdata->local->mtx);
}
kfree(assoc_data);
@@ -3597,6 +3580,7 @@ static void ieee80211_auth_challenge(struct ieee80211_sub_if_data *sdata,
u32 tx_flags = 0;
struct ieee80211_prep_tx_info info = {
.subtype = IEEE80211_STYPE_AUTH,
+ .link_id = auth_data->link_id,
};
pos = mgmt->u.auth.variable;
@@ -3622,7 +3606,8 @@ static bool ieee80211_mark_sta_auth(struct ieee80211_sub_if_data *sdata)
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
const u8 *ap_addr = ifmgd->auth_data->ap_addr;
struct sta_info *sta;
- bool result = true;
+
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
sdata_info(sdata, "authenticated\n");
ifmgd->auth_data->done = true;
@@ -3631,22 +3616,17 @@ static bool ieee80211_mark_sta_auth(struct ieee80211_sub_if_data *sdata)
run_again(sdata, ifmgd->auth_data->timeout);
/* move station state to auth */
- mutex_lock(&sdata->local->sta_mtx);
sta = sta_info_get(sdata, ap_addr);
if (!sta) {
WARN_ONCE(1, "%s: STA %pM not found", sdata->name, ap_addr);
- result = false;
- goto out;
+ return false;
}
if (sta_info_move_state(sta, IEEE80211_STA_AUTH)) {
sdata_info(sdata, "failed moving %pM to auth\n", ap_addr);
- result = false;
- goto out;
+ return false;
}
-out:
- mutex_unlock(&sdata->local->sta_mtx);
- return result;
+ return true;
}
static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
@@ -3662,7 +3642,7 @@ static void ieee80211_rx_mgmt_auth(struct ieee80211_sub_if_data *sdata,
.subtype = IEEE80211_STYPE_AUTH,
};
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (len < 24 + 6)
return;
@@ -3820,7 +3800,7 @@ static void ieee80211_rx_mgmt_deauth(struct ieee80211_sub_if_data *sdata,
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
u16 reason_code = le16_to_cpu(mgmt->u.deauth.reason_code);
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (len < 24 + 2)
return;
@@ -3864,7 +3844,7 @@ static void ieee80211_rx_mgmt_disassoc(struct ieee80211_sub_if_data *sdata,
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
u16 reason_code;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (len < 24 + 2)
return;
@@ -3894,8 +3874,7 @@ static void ieee80211_get_rates(struct ieee80211_supported_band *sband,
u8 *supp_rates, unsigned int supp_rates_len,
u32 *rates, u32 *basic_rates,
bool *have_higher_than_11mbit,
- int *min_rate, int *min_rate_index,
- int shift)
+ int *min_rate, int *min_rate_index)
{
int i, j;
@@ -3903,7 +3882,7 @@ static void ieee80211_get_rates(struct ieee80211_supported_band *sband,
int rate = supp_rates[i] & 0x7f;
bool is_basic = !!(supp_rates[i] & 0x80);
- if ((rate * 5 * (1 << shift)) > 110)
+ if ((rate * 5) > 110)
*have_higher_than_11mbit = true;
/*
@@ -3927,7 +3906,7 @@ static void ieee80211_get_rates(struct ieee80211_supported_band *sband,
br = &sband->bitrates[j];
- brate = DIV_ROUND_UP(br->bitrate, (1 << shift) * 5);
+ brate = DIV_ROUND_UP(br->bitrate, 5);
if (brate == rate) {
*rates |= BIT(j);
if (is_basic)
@@ -4394,8 +4373,6 @@ static int ieee80211_mgd_setup_link_sta(struct ieee80211_link_data *link,
u32 rates = 0, basic_rates = 0;
bool have_higher_than_11mbit = false;
int min_rate = INT_MAX, min_rate_index = -1;
- /* this is clearly wrong for MLO but we'll just remove it later */
- int shift = ieee80211_vif_get_shift(&sdata->vif);
struct ieee80211_supported_band *sband;
memcpy(link_sta->addr, cbss->bssid, ETH_ALEN);
@@ -4411,7 +4388,7 @@ static int ieee80211_mgd_setup_link_sta(struct ieee80211_link_data *link,
ieee80211_get_rates(sband, bss->supp_rates, bss->supp_rates_len,
&rates, &basic_rates, &have_higher_than_11mbit,
- &min_rate, &min_rate_index, shift);
+ &min_rate, &min_rate_index);
/*
* This used to be a workaround for basic rates missing
@@ -4817,6 +4794,7 @@ ieee80211_verify_sta_eht_mcs_support(struct ieee80211_sub_if_data *sdata,
static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
struct ieee80211_link_data *link,
struct cfg80211_bss *cbss,
+ bool mlo,
ieee80211_conn_flags_t *conn_flags)
{
struct ieee80211_local *local = sdata->local;
@@ -4830,6 +4808,7 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
struct cfg80211_chan_def chandef;
bool is_6ghz = cbss->channel->band == NL80211_BAND_6GHZ;
bool is_5ghz = cbss->channel->band == NL80211_BAND_5GHZ;
+ bool supports_mlo = false;
struct ieee80211_bss *bss = (void *)cbss->priv;
struct ieee80211_elems_parse_params parse_params = {
.link_id = -1,
@@ -4841,6 +4820,8 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
u32 i;
bool have_80mhz;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
rcu_read_lock();
ies = rcu_dereference(cbss->ies);
@@ -4981,6 +4962,8 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
ieee80211_mle_type_ok(eht_ml_elem->data + 1,
IEEE80211_ML_CONTROL_TYPE_BASIC,
eht_ml_elem->datalen - 1)) {
+ supports_mlo = true;
+
sdata->vif.cfg.eml_cap =
ieee80211_mle_get_eml_cap(eht_ml_elem->data + 1);
sdata->vif.cfg.eml_med_sync_delay =
@@ -5036,13 +5019,17 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
return -EINVAL;
}
+ if (mlo && !supports_mlo) {
+ sdata_info(sdata, "Rejecting MLO as it is not supported by AP\n");
+ return -EINVAL;
+ }
+
if (!link)
return 0;
/* will change later if needed */
link->smps_mode = IEEE80211_SMPS_OFF;
- mutex_lock(&local->mtx);
/*
* If this fails (possibly due to channel context sharing
* on incompatible channels, e.g. 80+80 and 160 sharing the
@@ -5063,7 +5050,6 @@ static int ieee80211_prep_channel(struct ieee80211_sub_if_data *sdata,
IEEE80211_CHANCTX_SHARED);
}
out:
- mutex_unlock(&local->mtx);
return ret;
}
@@ -5115,7 +5101,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
u16 valid_links = 0, dormant_links = 0;
int err;
- mutex_lock(&sdata->local->sta_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
/*
* station info was already allocated and inserted before
* the association and should be available to us
@@ -5164,7 +5150,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
" (assoc)" : "");
link_sta = rcu_dereference_protected(sta->link[link_id],
- lockdep_is_held(&local->sta_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (WARN_ON(!link_sta))
goto out_err;
@@ -5187,7 +5173,7 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
link->conf->dtim_period = link->u.mgd.dtim_period ?: 1;
if (link_id != assoc_data->assoc_link_id) {
- err = ieee80211_prep_channel(sdata, link, cbss,
+ err = ieee80211_prep_channel(sdata, link, cbss, true,
&link->u.mgd.conn_flags);
if (err) {
link_info(link, "prep_channel failed\n");
@@ -5251,8 +5237,6 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
if (sdata->wdev.use_4addr)
drv_sta_set_4addr(local, sdata, &sta->sta, true);
- mutex_unlock(&sdata->local->sta_mtx);
-
ieee80211_set_associated(sdata, assoc_data, changed);
/*
@@ -5272,7 +5256,6 @@ static bool ieee80211_assoc_success(struct ieee80211_sub_if_data *sdata,
return true;
out_err:
eth_zero_addr(sdata->vif.cfg.ap_addr);
- mutex_unlock(&sdata->local->sta_mtx);
return false;
}
@@ -5298,13 +5281,13 @@ static void ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
.u.mlme.data = ASSOC_EVENT,
};
struct ieee80211_prep_tx_info info = {};
- struct cfg80211_rx_assoc_resp resp = {
+ struct cfg80211_rx_assoc_resp_data resp = {
.uapsd_queues = -1,
};
u8 ap_mld_addr[ETH_ALEN] __aligned(2);
unsigned int link_id;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (!assoc_data)
return;
@@ -5505,7 +5488,7 @@ static void ieee80211_rx_bss_info(struct ieee80211_link_data *link,
struct ieee80211_bss *bss;
struct ieee80211_channel *channel;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
channel = ieee80211_get_channel_khz(local->hw.wiphy,
ieee80211_rx_status_to_khz(rx_status));
@@ -5532,7 +5515,7 @@ static void ieee80211_rx_mgmt_probe_resp(struct ieee80211_link_data *link,
ifmgd = &sdata->u.mgd;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
/*
* According to Draft P802.11ax D6.0 clause 26.17.2.3.2:
@@ -5743,21 +5726,16 @@ static void ieee80211_ml_reconf_work(struct wiphy *wiphy,
u16 new_valid_links, new_active_links, new_dormant_links;
int ret;
- sdata_lock(sdata);
- if (!sdata->u.mgd.removed_links) {
- sdata_unlock(sdata);
+ if (!sdata->u.mgd.removed_links)
return;
- }
sdata_info(sdata,
"MLO Reconfiguration: work: valid=0x%x, removed=0x%x\n",
sdata->vif.valid_links, sdata->u.mgd.removed_links);
new_valid_links = sdata->vif.valid_links & ~sdata->u.mgd.removed_links;
- if (new_valid_links == sdata->vif.valid_links) {
- sdata_unlock(sdata);
+ if (new_valid_links == sdata->vif.valid_links)
return;
- }
if (!new_valid_links ||
!(new_valid_links & ~sdata->vif.dormant_links)) {
@@ -5773,8 +5751,7 @@ static void ieee80211_ml_reconf_work(struct wiphy *wiphy,
BIT(ffs(new_valid_links &
~sdata->vif.dormant_links) - 1);
- ret = __ieee80211_set_active_links(&sdata->vif,
- new_active_links);
+ ret = ieee80211_set_active_links(&sdata->vif, new_active_links);
if (ret) {
sdata_info(sdata,
"Failed setting active links\n");
@@ -5789,15 +5766,15 @@ static void ieee80211_ml_reconf_work(struct wiphy *wiphy,
if (ret)
sdata_info(sdata, "Failed setting valid links\n");
+ ieee80211_vif_cfg_change_notify(sdata, BSS_CHANGED_MLD_VALID_LINKS);
+
out:
if (!ret)
cfg80211_links_removed(sdata->dev, sdata->u.mgd.removed_links);
else
- ___ieee80211_disconnect(sdata);
+ __ieee80211_disconnect(sdata);
sdata->u.mgd.removed_links = 0;
-
- sdata_unlock(sdata);
}
static void ieee80211_ml_reconfiguration(struct ieee80211_sub_if_data *sdata,
@@ -5897,6 +5874,194 @@ static void ieee80211_ml_reconfiguration(struct ieee80211_sub_if_data *sdata,
TU_TO_JIFFIES(delay));
}
+static void ieee80211_tid_to_link_map_work(struct wiphy *wiphy,
+ struct wiphy_work *work)
+{
+ u16 new_active_links, new_dormant_links;
+ struct ieee80211_sub_if_data *sdata =
+ container_of(work, struct ieee80211_sub_if_data,
+ u.mgd.ttlm_work.work);
+ int ret;
+
+ new_active_links = sdata->u.mgd.ttlm_info.map &
+ sdata->vif.valid_links;
+ new_dormant_links = ~sdata->u.mgd.ttlm_info.map &
+ sdata->vif.valid_links;
+ if (!new_active_links) {
+ ieee80211_disconnect(&sdata->vif, false);
+ return;
+ }
+
+ ieee80211_vif_set_links(sdata, sdata->vif.valid_links, 0);
+ new_active_links = BIT(ffs(new_active_links) - 1);
+ ieee80211_set_active_links(&sdata->vif, new_active_links);
+
+ ret = ieee80211_vif_set_links(sdata, sdata->vif.valid_links,
+ new_dormant_links);
+
+ sdata->u.mgd.ttlm_info.active = true;
+ sdata->u.mgd.ttlm_info.switch_time = 0;
+
+ if (!ret)
+ ieee80211_vif_cfg_change_notify(sdata,
+ BSS_CHANGED_MLD_VALID_LINKS);
+}
+
+static u16 ieee80211_get_ttlm(u8 bm_size, u8 *data)
+{
+ if (bm_size == 1)
+ return *data;
+ else
+ return get_unaligned_le16(data);
+}
+
+static int
+ieee80211_parse_adv_t2l(struct ieee80211_sub_if_data *sdata,
+ const struct ieee80211_ttlm_elem *ttlm,
+ struct ieee80211_adv_ttlm_info *ttlm_info)
+{
+ /* The element size was already validated in
+ * ieee80211_tid_to_link_map_size_ok()
+ */
+ u8 control, link_map_presence, map_size, tid;
+ u8 *pos;
+
+ memset(ttlm_info, 0, sizeof(*ttlm_info));
+ pos = (void *)ttlm->optional;
+ control = ttlm->control;
+
+ if ((control & IEEE80211_TTLM_CONTROL_DEF_LINK_MAP) ||
+ !(control & IEEE80211_TTLM_CONTROL_SWITCH_TIME_PRESENT))
+ return 0;
+
+ if ((control & IEEE80211_TTLM_CONTROL_DIRECTION) !=
+ IEEE80211_TTLM_DIRECTION_BOTH) {
+ sdata_info(sdata, "Invalid advertised T2L map direction\n");
+ return -EINVAL;
+ }
+
+ link_map_presence = *pos;
+ pos++;
+
+ ttlm_info->switch_time = get_unaligned_le16(pos);
+ pos += 2;
+
+ if (control & IEEE80211_TTLM_CONTROL_EXPECTED_DUR_PRESENT) {
+ ttlm_info->duration = pos[0] | pos[1] << 8 | pos[2] << 16;
+ pos += 3;
+ }
+
+ if (control & IEEE80211_TTLM_CONTROL_LINK_MAP_SIZE)
+ map_size = 1;
+ else
+ map_size = 2;
+
+ /* According to Draft P802.11be_D3.0 clause 35.3.7.1.7, an AP MLD shall
+ * not advertise a TID-to-link mapping that does not map all TIDs to the
+ * same link set; reject the frame if not every TID has a mapping present
+ */
+ if (link_map_presence != 0xff) {
+ sdata_info(sdata,
+ "Invalid advertised T2L mapping presence indicator\n");
+ return -EINVAL;
+ }
+
+ ttlm_info->map = ieee80211_get_ttlm(map_size, pos);
+ if (!ttlm_info->map) {
+ sdata_info(sdata,
+ "Invalid advertised T2L map for TID 0\n");
+ return -EINVAL;
+ }
+
+ pos += map_size;
+
+ for (tid = 1; tid < 8; tid++) {
+ u16 map = ieee80211_get_ttlm(map_size, pos);
+
+ if (map != ttlm_info->map) {
+ sdata_info(sdata, "Invalid advertised T2L map for tid %d\n",
+ tid);
+ return -EINVAL;
+ }
+
+ pos += map_size;
+ }
+ return 0;
+}
+
+static void ieee80211_process_adv_ttlm(struct ieee80211_sub_if_data *sdata,
+ struct ieee802_11_elems *elems,
+ u64 beacon_ts)
+{
+ u8 i;
+ int ret;
+
+ if (!ieee80211_vif_is_mld(&sdata->vif))
+ return;
+
+ if (!elems->ttlm_num) {
+ if (sdata->u.mgd.ttlm_info.switch_time) {
+ /* if a planned TID-to-link mapping was cancelled -
+ * abort it
+ */
+ wiphy_delayed_work_cancel(sdata->local->hw.wiphy,
+ &sdata->u.mgd.ttlm_work);
+ } else if (sdata->u.mgd.ttlm_info.active) {
+ /* if no TID-to-link element, set to default mapping in
+ * which all TIDs are mapped to all setup links
+ */
+ ret = ieee80211_vif_set_links(sdata,
+ sdata->vif.valid_links,
+ 0);
+ if (ret) {
+ sdata_info(sdata, "Failed setting valid/dormant links\n");
+ return;
+ }
+ ieee80211_vif_cfg_change_notify(sdata,
+ BSS_CHANGED_MLD_VALID_LINKS);
+ }
+ memset(&sdata->u.mgd.ttlm_info, 0,
+ sizeof(sdata->u.mgd.ttlm_info));
+ return;
+ }
+
+ for (i = 0; i < elems->ttlm_num; i++) {
+ struct ieee80211_adv_ttlm_info ttlm_info;
+ u32 res;
+
+ res = ieee80211_parse_adv_t2l(sdata, elems->ttlm[i],
+ &ttlm_info);
+
+ if (res) {
+ __ieee80211_disconnect(sdata);
+ return;
+ }
+
+ if (ttlm_info.switch_time) {
+ u32 st_us, delay = 0;
+ u32 ts_l26 = beacon_ts & GENMASK(25, 0);
+
+ /* The T2L map switch time is indicated as a partial
+ * TSF value; convert it to TSF and calculate the delay
+ * to the start time.
+ */
+ st_us = ieee80211_tu_to_usec(ttlm_info.switch_time);
+ if (st_us > ts_l26)
+ delay = st_us - ts_l26;
+ else
+ continue;
+
+ sdata->u.mgd.ttlm_info = ttlm_info;
+ wiphy_delayed_work_cancel(sdata->local->hw.wiphy,
+ &sdata->u.mgd.ttlm_work);
+ wiphy_delayed_work_queue(sdata->local->hw.wiphy,
+ &sdata->u.mgd.ttlm_work,
+ usecs_to_jiffies(delay));
+ return;
+ }
+ }
+}
+
static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
struct ieee80211_hdr *hdr, size_t len,
struct ieee80211_rx_status *rx_status)
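The switch-time handling added in ieee80211_process_adv_ttlm() above converts the advertised TID-to-link mapping switch time (a partial TSF, in TUs) into a delay relative to the low 26 bits of the beacon timestamp. Below is a minimal standalone model of that arithmetic, assuming only that 1 TU is 1024 microseconds (as ieee80211_tu_to_usec() computes); the helper names are local to this sketch and not taken from mac80211.

#include <stdint.h>
#include <stdio.h>

#define TU_TO_USEC(tu)   ((uint64_t)(tu) * 1024)                /* 1 TU = 1024 us */
#define TSF_LOW26(tsf)   ((uint64_t)(tsf) & ((1ULL << 26) - 1)) /* GENMASK(25, 0) */

/* Delay (in us) from the beacon timestamp to the advertised switch time,
 * or 0 if the switch time already lies in the past of this partial-TSF window. */
static uint64_t ttlm_switch_delay_us(uint16_t switch_time_tu, uint64_t beacon_tsf)
{
	uint64_t st_us = TU_TO_USEC(switch_time_tu);
	uint64_t ts_l26 = TSF_LOW26(beacon_tsf);

	return st_us > ts_l26 ? st_us - ts_l26 : 0;
}

int main(void)
{
	/* switch time 100 TUs into the partial-TSF window, beacon seen 50 TUs in */
	printf("delay = %llu us\n",
	       (unsigned long long)ttlm_switch_delay_us(100, 50 * 1024));
	return 0;
}

With these inputs the model prints a delay of 51200 us, i.e. the remaining 50 TUs until the advertised switch.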
@@ -5925,7 +6090,7 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
.from_ap = true,
};
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(local->hw.wiphy);
/* Process beacon from the current BSS */
bssid = ieee80211_get_bssid(hdr, len, sdata->vif.type);
@@ -6141,9 +6306,7 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
changed |= BSS_CHANGED_BEACON_INFO;
link->u.mgd.have_beacon = true;
- mutex_lock(&local->iflist_mtx);
ieee80211_recalc_ps(local);
- mutex_unlock(&local->iflist_mtx);
ieee80211_recalc_ps_vif(sdata);
}
@@ -6160,16 +6323,13 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
le16_to_cpu(mgmt->u.beacon.capab_info),
erp_valid, erp_value);
- mutex_lock(&local->sta_mtx);
sta = sta_info_get(sdata, sdata->vif.cfg.ap_addr);
if (WARN_ON(!sta)) {
- mutex_unlock(&local->sta_mtx);
goto free;
}
link_sta = rcu_dereference_protected(sta->link[link->link_id],
- lockdep_is_held(&local->sta_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (WARN_ON(!link_sta)) {
- mutex_unlock(&local->sta_mtx);
goto free;
}
@@ -6185,7 +6345,6 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
elems->vht_operation, elems->he_operation,
elems->eht_operation,
elems->s1g_oper, bssid, &changed)) {
- mutex_unlock(&local->sta_mtx);
sdata_info(sdata,
"failed to follow AP %pM bandwidth change, disconnect\n",
bssid);
@@ -6203,7 +6362,6 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
ieee80211_vht_handle_opmode(sdata, link_sta,
*elems->opmode_notif,
rx_status->band);
- mutex_unlock(&local->sta_mtx);
changed |= ieee80211_handle_pwr_constr(link, chan, mgmt,
elems->country_elem,
@@ -6227,6 +6385,8 @@ static void ieee80211_rx_mgmt_beacon(struct ieee80211_link_data *link,
}
ieee80211_ml_reconfiguration(sdata, elems);
+ ieee80211_process_adv_ttlm(sdata, elems,
+ le64_to_cpu(mgmt->u.beacon.timestamp));
ieee80211_link_info_change_notify(sdata, link, changed);
free:
@@ -6241,17 +6401,17 @@ void ieee80211_sta_rx_queued_ext(struct ieee80211_sub_if_data *sdata,
struct ieee80211_hdr *hdr;
u16 fc;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
rx_status = (struct ieee80211_rx_status *) skb->cb;
hdr = (struct ieee80211_hdr *) skb->data;
fc = le16_to_cpu(hdr->frame_control);
- sdata_lock(sdata);
switch (fc & IEEE80211_FCTL_STYPE) {
case IEEE80211_STYPE_S1G_BEACON:
ieee80211_rx_mgmt_beacon(link, hdr, skb->len, rx_status);
break;
}
- sdata_unlock(sdata);
}
void ieee80211_sta_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
@@ -6263,17 +6423,17 @@ void ieee80211_sta_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
u16 fc;
int ies_len;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
rx_status = (struct ieee80211_rx_status *) skb->cb;
mgmt = (struct ieee80211_mgmt *) skb->data;
fc = le16_to_cpu(mgmt->frame_control);
- sdata_lock(sdata);
-
if (rx_status->link_valid) {
link = sdata_dereference(sdata->link[rx_status->link_id],
sdata);
if (!link)
- goto out;
+ return;
}
switch (fc & IEEE80211_FCTL_STYPE) {
@@ -6356,8 +6516,6 @@ void ieee80211_sta_rx_queued_mgmt(struct ieee80211_sub_if_data *sdata,
}
break;
}
-out:
- sdata_unlock(sdata);
}
static void ieee80211_sta_timer(struct timer_list *t)
@@ -6392,7 +6550,7 @@ static int ieee80211_auth(struct ieee80211_sub_if_data *sdata)
.subtype = IEEE80211_STYPE_AUTH,
};
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (WARN_ON_ONCE(!auth_data))
return -EINVAL;
@@ -6415,6 +6573,7 @@ static int ieee80211_auth(struct ieee80211_sub_if_data *sdata)
if (auth_data->algorithm == WLAN_AUTH_SAE)
info.duration = jiffies_to_msecs(IEEE80211_AUTH_TIMEOUT_SAE);
+ info.link_id = auth_data->link_id;
drv_mgd_prepare_tx(local, sdata, &info);
sdata_info(sdata, "send auth to %pM (try %d/%d)\n",
@@ -6461,7 +6620,7 @@ static int ieee80211_do_assoc(struct ieee80211_sub_if_data *sdata)
struct ieee80211_local *local = sdata->local;
int ret;
- sdata_assert_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
assoc_data->tries++;
if (assoc_data->tries > IEEE80211_ASSOC_MAX_TRIES) {
@@ -6517,7 +6676,7 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
struct ieee80211_local *local = sdata->local;
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
- sdata_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (ifmgd->status_received) {
__le16 fc = ifmgd->status_fc;
@@ -6652,8 +6811,6 @@ void ieee80211_sta_work(struct ieee80211_sub_if_data *sdata)
WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY, false);
}
}
-
- sdata_unlock(sdata);
}
static void ieee80211_sta_bcn_mon_timer(struct timer_list *t)
@@ -6709,10 +6866,11 @@ static void ieee80211_sta_conn_mon_timer(struct timer_list *t)
return;
}
- ieee80211_queue_work(&local->hw, &ifmgd->monitor_work);
+ wiphy_work_queue(local->hw.wiphy, &sdata->u.mgd.monitor_work);
}
-static void ieee80211_sta_monitor_work(struct work_struct *work)
+static void ieee80211_sta_monitor_work(struct wiphy *wiphy,
+ struct wiphy_work *work)
{
struct ieee80211_sub_if_data *sdata =
container_of(work, struct ieee80211_sub_if_data,
@@ -6728,8 +6886,8 @@ static void ieee80211_restart_sta_timer(struct ieee80211_sub_if_data *sdata)
/* let's probe the connection once */
if (!ieee80211_hw_check(&sdata->local->hw, CONNECTION_MONITOR))
- ieee80211_queue_work(&sdata->local->hw,
- &sdata->u.mgd.monitor_work);
+ wiphy_work_queue(sdata->local->hw.wiphy,
+ &sdata->u.mgd.monitor_work);
}
}
@@ -6739,7 +6897,7 @@ void ieee80211_mgd_quiesce(struct ieee80211_sub_if_data *sdata)
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
- sdata_lock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
if (ifmgd->auth_data || ifmgd->assoc_data) {
const u8 *ap_addr = ifmgd->auth_data ?
@@ -6791,8 +6949,6 @@ void ieee80211_mgd_quiesce(struct ieee80211_sub_if_data *sdata)
memcpy(bssid, sdata->vif.cfg.ap_addr, ETH_ALEN);
ieee80211_mgd_deauth(sdata, &req);
}
-
- sdata_unlock(sdata);
}
#endif
@@ -6800,11 +6956,10 @@ void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata)
{
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
- sdata_lock(sdata);
- if (!ifmgd->associated) {
- sdata_unlock(sdata);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
+ if (!ifmgd->associated)
return;
- }
if (sdata->flags & IEEE80211_SDATA_DISCONNECT_RESUME) {
sdata->flags &= ~IEEE80211_SDATA_DISCONNECT_RESUME;
@@ -6812,7 +6967,6 @@ void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata)
ieee80211_sta_connection_lost(sdata,
WLAN_REASON_UNSPECIFIED,
true);
- sdata_unlock(sdata);
return;
}
@@ -6822,11 +6976,8 @@ void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata)
ieee80211_sta_connection_lost(sdata,
WLAN_REASON_UNSPECIFIED,
true);
- sdata_unlock(sdata);
return;
}
-
- sdata_unlock(sdata);
}
static void ieee80211_request_smps_mgd_work(struct wiphy *wiphy,
@@ -6836,10 +6987,8 @@ static void ieee80211_request_smps_mgd_work(struct wiphy *wiphy,
container_of(work, struct ieee80211_link_data,
u.mgd.request_smps_work);
- sdata_lock(link->sdata);
__ieee80211_request_smps_mgd(link->sdata, link,
link->u.mgd.driver_smps_mode);
- sdata_unlock(link->sdata);
}
/* interface setup */
@@ -6847,20 +6996,22 @@ void ieee80211_sta_setup_sdata(struct ieee80211_sub_if_data *sdata)
{
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
- INIT_WORK(&ifmgd->monitor_work, ieee80211_sta_monitor_work);
+ wiphy_work_init(&ifmgd->monitor_work, ieee80211_sta_monitor_work);
wiphy_work_init(&ifmgd->beacon_connection_loss_work,
ieee80211_beacon_connection_loss_work);
wiphy_work_init(&ifmgd->csa_connection_drop_work,
ieee80211_csa_connection_drop_work);
- INIT_DELAYED_WORK(&ifmgd->tdls_peer_del_work,
- ieee80211_tdls_peer_del_work);
+ wiphy_delayed_work_init(&ifmgd->tdls_peer_del_work,
+ ieee80211_tdls_peer_del_work);
wiphy_delayed_work_init(&ifmgd->ml_reconf_work,
ieee80211_ml_reconf_work);
timer_setup(&ifmgd->timer, ieee80211_sta_timer, 0);
timer_setup(&ifmgd->bcn_mon_timer, ieee80211_sta_bcn_mon_timer, 0);
timer_setup(&ifmgd->conn_mon_timer, ieee80211_sta_conn_mon_timer, 0);
- INIT_DELAYED_WORK(&ifmgd->tx_tspec_wk,
- ieee80211_sta_handle_tspec_ac_params_wk);
+ wiphy_delayed_work_init(&ifmgd->tx_tspec_wk,
+ ieee80211_sta_handle_tspec_ac_params_wk);
+ wiphy_delayed_work_init(&ifmgd->ttlm_work,
+ ieee80211_tid_to_link_map_work);
ifmgd->flags = 0;
ifmgd->powersave = sdata->wdev.ps;
@@ -6872,6 +7023,16 @@ void ieee80211_sta_setup_sdata(struct ieee80211_sub_if_data *sdata)
ifmgd->orig_teardown_skb = NULL;
}
+static void ieee80211_recalc_smps_work(struct wiphy *wiphy,
+ struct wiphy_work *work)
+{
+ struct ieee80211_link_data *link =
+ container_of(work, struct ieee80211_link_data,
+ u.mgd.recalc_smps);
+
+ ieee80211_recalc_smps(link->sdata, link);
+}
+
void ieee80211_mgd_setup_link(struct ieee80211_link_data *link)
{
struct ieee80211_sub_if_data *sdata = link->sdata;
@@ -6884,6 +7045,8 @@ void ieee80211_mgd_setup_link(struct ieee80211_link_data *link)
wiphy_work_init(&link->u.mgd.request_smps_work,
ieee80211_request_smps_mgd_work);
+ wiphy_work_init(&link->u.mgd.recalc_smps,
+ ieee80211_recalc_smps_work);
if (local->hw.wiphy->features & NL80211_FEATURE_DYNAMIC_SMPS)
link->u.mgd.req_smps = IEEE80211_SMPS_AUTOMATIC;
else
@@ -7047,7 +7210,7 @@ static int ieee80211_prep_connection(struct ieee80211_sub_if_data *sdata,
}
if (new_sta || override) {
- err = ieee80211_prep_channel(sdata, link, cbss,
+ err = ieee80211_prep_channel(sdata, link, cbss, mlo,
&link->u.mgd.conn_flags);
if (err) {
if (new_sta)
@@ -7099,10 +7262,14 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
struct ieee80211_local *local = sdata->local;
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
struct ieee80211_mgd_auth_data *auth_data;
+ struct ieee80211_link_data *link;
+ const struct element *csa_elem, *ecsa_elem;
u16 auth_alg;
int err;
bool cont_auth;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
/* prepare auth data structure */
switch (req->auth_type) {
@@ -7139,6 +7306,22 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
if (ifmgd->assoc_data)
return -EBUSY;
+ rcu_read_lock();
+ csa_elem = ieee80211_bss_get_elem(req->bss, WLAN_EID_CHANNEL_SWITCH);
+ ecsa_elem = ieee80211_bss_get_elem(req->bss,
+ WLAN_EID_EXT_CHANSWITCH_ANN);
+ if ((csa_elem &&
+ csa_elem->datalen == sizeof(struct ieee80211_channel_sw_ie) &&
+ ((struct ieee80211_channel_sw_ie *)csa_elem->data)->count != 0) ||
+ (ecsa_elem &&
+ ecsa_elem->datalen == sizeof(struct ieee80211_ext_chansw_ie) &&
+ ((struct ieee80211_ext_chansw_ie *)ecsa_elem->data)->count != 0)) {
+ rcu_read_unlock();
+ sdata_info(sdata, "AP is in CSA process, reject auth\n");
+ return -EINVAL;
+ }
+ rcu_read_unlock();
+
auth_data = kzalloc(sizeof(*auth_data) + req->auth_data_len +
req->ie_len, GFP_KERNEL);
if (!auth_data)
@@ -7222,8 +7405,6 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
false);
}
- sdata_info(sdata, "authenticate with %pM\n", auth_data->ap_addr);
-
/* needed for transmitting the auth frame(s) properly */
memcpy(sdata->vif.cfg.ap_addr, auth_data->ap_addr, ETH_ALEN);
@@ -7232,6 +7413,19 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
if (err)
goto err_clear;
+ if (req->link_id > 0)
+ link = sdata_dereference(sdata->link[req->link_id], sdata);
+ else
+ link = sdata_dereference(sdata->link[0], sdata);
+
+ if (WARN_ON(!link)) {
+ err = -ENOLINK;
+ goto err_clear;
+ }
+
+ sdata_info(sdata, "authenticate with %pM (local address=%pM)\n",
+ auth_data->ap_addr, link->conf->addr);
+
err = ieee80211_auth(sdata);
if (err) {
sta_info_destroy_addr(sdata, auth_data->ap_addr);
@@ -7247,9 +7441,7 @@ int ieee80211_mgd_auth(struct ieee80211_sub_if_data *sdata,
eth_zero_addr(sdata->deflink.u.mgd.bssid);
ieee80211_link_info_change_notify(sdata, &sdata->deflink,
BSS_CHANGED_BSSID);
- mutex_lock(&sdata->local->mtx);
ieee80211_link_release_channel(&sdata->deflink);
- mutex_unlock(&sdata->local->mtx);
}
ifmgd->auth_data = NULL;
kfree(auth_data);
@@ -7264,7 +7456,7 @@ ieee80211_setup_assoc_link(struct ieee80211_sub_if_data *sdata,
unsigned int link_id)
{
struct ieee80211_local *local = sdata->local;
- const struct cfg80211_bss_ies *beacon_ies;
+ const struct cfg80211_bss_ies *bss_ies;
struct ieee80211_supported_band *sband;
const struct element *ht_elem, *vht_elem;
struct ieee80211_link_data *link;
@@ -7339,32 +7531,37 @@ ieee80211_setup_assoc_link(struct ieee80211_sub_if_data *sdata,
link->conf->eht_puncturing = 0;
rcu_read_lock();
- beacon_ies = rcu_dereference(cbss->beacon_ies);
- if (beacon_ies) {
- const struct ieee80211_eht_operation *eht_oper;
- const struct element *elem;
+ bss_ies = rcu_dereference(cbss->beacon_ies);
+ if (bss_ies) {
u8 dtim_count = 0;
- ieee80211_get_dtim(beacon_ies, &dtim_count,
+ ieee80211_get_dtim(bss_ies, &dtim_count,
&link->u.mgd.dtim_period);
sdata->deflink.u.mgd.have_beacon = true;
if (ieee80211_hw_check(&local->hw, TIMING_BEACON_ONLY)) {
- link->conf->sync_tsf = beacon_ies->tsf;
+ link->conf->sync_tsf = bss_ies->tsf;
link->conf->sync_device_ts = bss->device_ts_beacon;
link->conf->sync_dtim_count = dtim_count;
}
+ } else {
+ bss_ies = rcu_dereference(cbss->ies);
+ }
+
+ if (bss_ies) {
+ const struct ieee80211_eht_operation *eht_oper;
+ const struct element *elem;
elem = cfg80211_find_ext_elem(WLAN_EID_EXT_MULTIPLE_BSSID_CONFIGURATION,
- beacon_ies->data, beacon_ies->len);
+ bss_ies->data, bss_ies->len);
if (elem && elem->datalen >= 3)
link->conf->profile_periodicity = elem->data[2];
else
link->conf->profile_periodicity = 0;
elem = cfg80211_find_elem(WLAN_EID_EXT_CAPABILITY,
- beacon_ies->data, beacon_ies->len);
+ bss_ies->data, bss_ies->len);
if (elem && elem->datalen >= 11 &&
(elem->data[10] & WLAN_EXT_CAPA11_EMA_SUPPORT))
link->conf->ema_ap = true;
@@ -7372,7 +7569,7 @@ ieee80211_setup_assoc_link(struct ieee80211_sub_if_data *sdata,
link->conf->ema_ap = false;
elem = cfg80211_find_ext_elem(WLAN_EID_EXT_EHT_OPERATION,
- beacon_ies->data, beacon_ies->len);
+ bss_ies->data, bss_ies->len);
eht_oper = (const void *)(elem->data + 1);
if (elem &&
@@ -7432,7 +7629,7 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
struct ieee80211_local *local = sdata->local;
struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
struct ieee80211_mgd_assoc_data *assoc_data;
- const struct element *ssid_elem;
+ const struct element *ssid_elem, *csa_elem, *ecsa_elem;
struct ieee80211_vif_cfg *vif_cfg = &sdata->vif.cfg;
ieee80211_conn_flags_t conn_flags = 0;
struct ieee80211_link_data *link;
@@ -7462,6 +7659,21 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
kfree(assoc_data);
return -EINVAL;
}
+
+ csa_elem = ieee80211_bss_get_elem(cbss, WLAN_EID_CHANNEL_SWITCH);
+ ecsa_elem = ieee80211_bss_get_elem(cbss, WLAN_EID_EXT_CHANSWITCH_ANN);
+ if ((csa_elem &&
+ csa_elem->datalen == sizeof(struct ieee80211_channel_sw_ie) &&
+ ((struct ieee80211_channel_sw_ie *)csa_elem->data)->count != 0) ||
+ (ecsa_elem &&
+ ecsa_elem->datalen == sizeof(struct ieee80211_ext_chansw_ie) &&
+ ((struct ieee80211_ext_chansw_ie *)ecsa_elem->data)->count != 0)) {
+ sdata_info(sdata, "AP is in CSA process, reject assoc\n");
+ rcu_read_unlock();
+ kfree(assoc_data);
+ return -EINVAL;
+ }
+
memcpy(assoc_data->ssid, ssid_elem->data, ssid_elem->datalen);
assoc_data->ssid_len = ssid_elem->datalen;
memcpy(vif_cfg->ssid, assoc_data->ssid, assoc_data->ssid_len);
@@ -7522,7 +7734,10 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
match = ether_addr_equal(ifmgd->auth_data->ap_addr,
assoc_data->ap_addr) &&
ifmgd->auth_data->link_id == req->link_id;
- ieee80211_destroy_auth_data(sdata, match);
+
+ /* Cleanup is delayed if auth_data matches */
+ if (!match)
+ ieee80211_destroy_auth_data(sdata, false);
}
/* prepare assoc data */
@@ -7703,10 +7918,13 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
if (i == assoc_data->assoc_link_id)
continue;
/* only calculate the flags, hence link == NULL */
- err = ieee80211_prep_channel(sdata, NULL, assoc_data->link[i].bss,
+ err = ieee80211_prep_channel(sdata, NULL,
+ assoc_data->link[i].bss, true,
&assoc_data->link[i].conn_flags);
- if (err)
+ if (err) {
+ req->links[i].error = err;
goto err_clear;
+ }
}
/* needed for transmitting the assoc frames properly */
@@ -7742,11 +7960,17 @@ int ieee80211_mgd_assoc(struct ieee80211_sub_if_data *sdata,
run_again(sdata, assoc_data->timeout);
+ /* We are associating; clean up auth_data */
+ if (ifmgd->auth_data)
+ ieee80211_destroy_auth_data(sdata, true);
+
return 0;
err_clear:
- eth_zero_addr(sdata->deflink.u.mgd.bssid);
- ieee80211_link_info_change_notify(sdata, &sdata->deflink,
- BSS_CHANGED_BSSID);
+ if (!ifmgd->auth_data) {
+ eth_zero_addr(sdata->deflink.u.mgd.bssid);
+ ieee80211_link_info_change_notify(sdata, &sdata->deflink,
+ BSS_CHANGED_BSSID);
+ }
ifmgd->assoc_data = NULL;
err_free:
kfree(assoc_data);
@@ -7770,6 +7994,7 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
req->bssid, req->reason_code,
ieee80211_get_reason_code_string(req->reason_code));
+ info.link_id = ifmgd->auth_data->link_id;
drv_mgd_prepare_tx(sdata->local, sdata, &info);
ieee80211_send_deauth_disassoc(sdata, req->bssid, req->bssid,
IEEE80211_STYPE_DEAUTH,
@@ -7790,6 +8015,7 @@ int ieee80211_mgd_deauth(struct ieee80211_sub_if_data *sdata,
req->bssid, req->reason_code,
ieee80211_get_reason_code_string(req->reason_code));
+ info.link_id = ifmgd->assoc_data->assoc_link_id;
drv_mgd_prepare_tx(sdata->local, sdata, &info);
ieee80211_send_deauth_disassoc(sdata, req->bssid, req->bssid,
IEEE80211_STYPE_DEAUTH,
@@ -7849,6 +8075,8 @@ void ieee80211_mgd_stop_link(struct ieee80211_link_data *link)
{
wiphy_work_cancel(link->sdata->local->hw.wiphy,
&link->u.mgd.request_smps_work);
+ wiphy_work_cancel(link->sdata->local->hw.wiphy,
+ &link->u.mgd.recalc_smps);
wiphy_delayed_work_cancel(link->sdata->local->hw.wiphy,
&link->u.mgd.chswitch_work);
}
@@ -7862,16 +8090,18 @@ void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata)
* they will not do anything but might not have been
* cancelled when disconnecting.
*/
- cancel_work_sync(&ifmgd->monitor_work);
+ wiphy_work_cancel(sdata->local->hw.wiphy,
+ &ifmgd->monitor_work);
wiphy_work_cancel(sdata->local->hw.wiphy,
&ifmgd->beacon_connection_loss_work);
wiphy_work_cancel(sdata->local->hw.wiphy,
&ifmgd->csa_connection_drop_work);
- cancel_delayed_work_sync(&ifmgd->tdls_peer_del_work);
+ wiphy_delayed_work_cancel(sdata->local->hw.wiphy,
+ &ifmgd->tdls_peer_del_work);
wiphy_delayed_work_cancel(sdata->local->hw.wiphy,
&ifmgd->ml_reconf_work);
+ wiphy_delayed_work_cancel(sdata->local->hw.wiphy, &ifmgd->ttlm_work);
- sdata_lock(sdata);
if (ifmgd->assoc_data)
ieee80211_destroy_assoc_data(sdata, ASSOC_TIMEOUT);
if (ifmgd->auth_data)
@@ -7887,7 +8117,6 @@ void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata)
ifmgd->assoc_req_ies_len = 0;
spin_unlock_bh(&ifmgd->teardown_lock);
del_timer_sync(&ifmgd->timer);
- sdata_unlock(sdata);
}
void ieee80211_cqm_rssi_notify(struct ieee80211_vif *vif,
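The mlme.c changes above repeat one pattern: paths that previously took sdata or local mutexes now only assert that the per-wiphy mutex is held, because they are reached from wiphy_work handlers (or cfg80211 callbacks) that already run under wiphy->mtx. The following is a rough before/after sketch of that conversion, where struct foo and foo_do_something() are placeholders invented for illustration and not taken from the patch.

#include <linux/mutex.h>
#include <linux/workqueue.h>
#include <net/cfg80211.h>

struct foo {
	struct mutex mtx;              /* old-style private lock */
	struct work_struct work;       /* old: generic workqueue item */
	struct wiphy_work wiphy_work;  /* new: wiphy work item */
};

static void foo_do_something(struct foo *foo) { }

/* before: handler ran on a generic workqueue and took its own lock */
static void foo_work_old(struct work_struct *work)
{
	struct foo *foo = container_of(work, struct foo, work);

	mutex_lock(&foo->mtx);
	foo_do_something(foo);
	mutex_unlock(&foo->mtx);
}

/* after: queued with wiphy_work_queue(), so wiphy->mtx is already held
 * and the handler only documents that fact for lockdep */
static void foo_work_new(struct wiphy *wiphy, struct wiphy_work *work)
{
	struct foo *foo = container_of(work, struct foo, wiphy_work);

	lockdep_assert_wiphy(wiphy);
	foo_do_something(foo);
}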
diff --git a/net/mac80211/ocb.c b/net/mac80211/ocb.c
index b44896e14522..449af4e1cca4 100644
--- a/net/mac80211/ocb.c
+++ b/net/mac80211/ocb.c
@@ -44,7 +44,6 @@ void ieee80211_ocb_rx_no_sta(struct ieee80211_sub_if_data *sdata,
struct ieee80211_local *local = sdata->local;
struct ieee80211_chanctx_conf *chanctx_conf;
struct ieee80211_supported_band *sband;
- enum nl80211_bss_scan_width scan_width;
struct sta_info *sta;
int band;
@@ -66,7 +65,6 @@ void ieee80211_ocb_rx_no_sta(struct ieee80211_sub_if_data *sdata,
return;
}
band = chanctx_conf->def.chan->band;
- scan_width = cfg80211_chandef_to_scan_width(&chanctx_conf->def);
rcu_read_unlock();
sta = sta_info_alloc(sdata, addr, GFP_ATOMIC);
@@ -75,8 +73,7 @@ void ieee80211_ocb_rx_no_sta(struct ieee80211_sub_if_data *sdata,
/* Add only mandatory rates for now */
sband = local->hw.wiphy->bands[band];
- sta->sta.deflink.supp_rates[band] =
- ieee80211_mandatory_rates(sband, scan_width);
+ sta->sta.deflink.supp_rates[band] = ieee80211_mandatory_rates(sband);
spin_lock(&ifocb->incomplete_lock);
list_add(&sta->list, &ifocb->incomplete_stations);
@@ -124,11 +121,11 @@ void ieee80211_ocb_work(struct ieee80211_sub_if_data *sdata)
struct ieee80211_if_ocb *ifocb = &sdata->u.ocb;
struct sta_info *sta;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
if (ifocb->joined != true)
return;
- sdata_lock(sdata);
-
spin_lock_bh(&ifocb->incomplete_lock);
while (!list_empty(&ifocb->incomplete_stations)) {
sta = list_first_entry(&ifocb->incomplete_stations,
@@ -144,8 +141,6 @@ void ieee80211_ocb_work(struct ieee80211_sub_if_data *sdata)
if (test_and_clear_bit(OCB_WORK_HOUSEKEEPING, &ifocb->wrkq_flags))
ieee80211_ocb_housekeeping(sdata);
-
- sdata_unlock(sdata);
}
static void ieee80211_ocb_housekeeping_timer(struct timer_list *t)
@@ -178,6 +173,8 @@ int ieee80211_ocb_join(struct ieee80211_sub_if_data *sdata,
u64 changed = BSS_CHANGED_OCB | BSS_CHANGED_BSSID;
int err;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
if (ifocb->joined == true)
return -EINVAL;
@@ -185,10 +182,8 @@ int ieee80211_ocb_join(struct ieee80211_sub_if_data *sdata,
sdata->deflink.smps_mode = IEEE80211_SMPS_OFF;
sdata->deflink.needed_rx_chains = sdata->local->rx_chains;
- mutex_lock(&sdata->local->mtx);
err = ieee80211_link_use_channel(&sdata->deflink, &setup->chandef,
IEEE80211_CHANCTX_SHARED);
- mutex_unlock(&sdata->local->mtx);
if (err)
return err;
@@ -209,6 +204,8 @@ int ieee80211_ocb_leave(struct ieee80211_sub_if_data *sdata)
struct ieee80211_local *local = sdata->local;
struct sta_info *sta;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
ifocb->joined = false;
sta_info_flush(sdata);
@@ -228,9 +225,7 @@ int ieee80211_ocb_leave(struct ieee80211_sub_if_data *sdata)
clear_bit(SDATA_STATE_OFFCHANNEL, &sdata->state);
ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_OCB);
- mutex_lock(&sdata->local->mtx);
ieee80211_link_release_channel(&sdata->deflink);
- mutex_unlock(&sdata->local->mtx);
skb_queue_purge(&sdata->skb_queue);
diff --git a/net/mac80211/offchannel.c b/net/mac80211/offchannel.c
index cdf991e74ab9..6c4080202573 100644
--- a/net/mac80211/offchannel.c
+++ b/net/mac80211/offchannel.c
@@ -34,7 +34,7 @@ static void ieee80211_offchannel_ps_enable(struct ieee80211_sub_if_data *sdata)
del_timer_sync(&ifmgd->bcn_mon_timer);
del_timer_sync(&ifmgd->conn_mon_timer);
- cancel_work_sync(&local->dynamic_ps_enable_work);
+ wiphy_work_cancel(local->hw.wiphy, &local->dynamic_ps_enable_work);
if (local->hw.conf.flags & IEEE80211_CONF_PS) {
offchannel_ps_enabled = true;
@@ -84,6 +84,8 @@ void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local)
{
struct ieee80211_sub_if_data *sdata;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (WARN_ON(local->use_chanctx))
return;
@@ -101,7 +103,6 @@ void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local)
false);
ieee80211_flush_queues(local, NULL, false);
- mutex_lock(&local->iflist_mtx);
list_for_each_entry(sdata, &local->interfaces, list) {
if (!ieee80211_sdata_running(sdata))
continue;
@@ -127,17 +128,17 @@ void ieee80211_offchannel_stop_vifs(struct ieee80211_local *local)
sdata->u.mgd.associated)
ieee80211_offchannel_ps_enable(sdata);
}
- mutex_unlock(&local->iflist_mtx);
}
void ieee80211_offchannel_return(struct ieee80211_local *local)
{
struct ieee80211_sub_if_data *sdata;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (WARN_ON(local->use_chanctx))
return;
- mutex_lock(&local->iflist_mtx);
list_for_each_entry(sdata, &local->interfaces, list) {
if (sdata->vif.type == NL80211_IFTYPE_P2P_DEVICE)
continue;
@@ -161,7 +162,6 @@ void ieee80211_offchannel_return(struct ieee80211_local *local)
BSS_CHANGED_BEACON_ENABLED);
}
}
- mutex_unlock(&local->iflist_mtx);
ieee80211_wake_queues_by_reason(&local->hw, IEEE80211_MAX_QUEUE_MAP,
IEEE80211_QUEUE_STOP_REASON_OFFCHANNEL,
@@ -197,7 +197,7 @@ static unsigned long ieee80211_end_finished_rocs(struct ieee80211_local *local,
struct ieee80211_roc_work *roc, *tmp;
long remaining_dur_min = LONG_MAX;
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry_safe(roc, tmp, &local->roc_list, list) {
long remaining;
@@ -230,7 +230,7 @@ static bool ieee80211_recalc_sw_work(struct ieee80211_local *local,
if (dur == LONG_MAX)
return false;
- mod_delayed_work(local->workqueue, &local->roc_work, dur);
+ wiphy_delayed_work_queue(local->hw.wiphy, &local->roc_work, dur);
return true;
}
@@ -258,13 +258,13 @@ static void ieee80211_handle_roc_started(struct ieee80211_roc_work *roc,
roc->notified = true;
}
-static void ieee80211_hw_roc_start(struct work_struct *work)
+static void ieee80211_hw_roc_start(struct wiphy *wiphy, struct wiphy_work *work)
{
struct ieee80211_local *local =
container_of(work, struct ieee80211_local, hw_roc_start);
struct ieee80211_roc_work *roc;
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(roc, &local->roc_list, list) {
if (!roc->started)
@@ -273,8 +273,6 @@ static void ieee80211_hw_roc_start(struct work_struct *work)
roc->hw_begun = true;
ieee80211_handle_roc_started(roc, local->hw_roc_start_time);
}
-
- mutex_unlock(&local->mtx);
}
void ieee80211_ready_on_channel(struct ieee80211_hw *hw)
@@ -285,7 +283,7 @@ void ieee80211_ready_on_channel(struct ieee80211_hw *hw)
trace_api_ready_on_channel(local);
- ieee80211_queue_work(hw, &local->hw_roc_start);
+ wiphy_work_queue(hw->wiphy, &local->hw_roc_start);
}
EXPORT_SYMBOL_GPL(ieee80211_ready_on_channel);
@@ -295,7 +293,7 @@ static void _ieee80211_start_next_roc(struct ieee80211_local *local)
enum ieee80211_roc_type type;
u32 min_dur, max_dur;
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (WARN_ON(list_empty(&local->roc_list)))
return;
@@ -338,7 +336,7 @@ static void _ieee80211_start_next_roc(struct ieee80211_local *local)
tmp->started = true;
tmp->abort = true;
}
- ieee80211_queue_work(&local->hw, &local->hw_roc_done);
+ wiphy_work_queue(local->hw.wiphy, &local->hw_roc_done);
return;
}
@@ -368,8 +366,8 @@ static void _ieee80211_start_next_roc(struct ieee80211_local *local)
ieee80211_hw_config(local, 0);
}
- ieee80211_queue_delayed_work(&local->hw, &local->roc_work,
- msecs_to_jiffies(min_dur));
+ wiphy_delayed_work_queue(local->hw.wiphy, &local->roc_work,
+ msecs_to_jiffies(min_dur));
/* tell userspace or send frame(s) */
list_for_each_entry(tmp, &local->roc_list, list) {
@@ -386,7 +384,7 @@ void ieee80211_start_next_roc(struct ieee80211_local *local)
{
struct ieee80211_roc_work *roc;
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (list_empty(&local->roc_list)) {
ieee80211_run_deferred_scan(local);
@@ -407,8 +405,8 @@ void ieee80211_start_next_roc(struct ieee80211_local *local)
_ieee80211_start_next_roc(local);
} else {
/* delay it a bit */
- ieee80211_queue_delayed_work(&local->hw, &local->roc_work,
- round_jiffies_relative(HZ/2));
+ wiphy_delayed_work_queue(local->hw.wiphy, &local->roc_work,
+ round_jiffies_relative(HZ / 2));
}
}
@@ -417,7 +415,7 @@ static void __ieee80211_roc_work(struct ieee80211_local *local)
struct ieee80211_roc_work *roc;
bool on_channel;
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (WARN_ON(local->ops->remain_on_channel))
return;
@@ -451,29 +449,27 @@ static void __ieee80211_roc_work(struct ieee80211_local *local)
}
}
-static void ieee80211_roc_work(struct work_struct *work)
+static void ieee80211_roc_work(struct wiphy *wiphy, struct wiphy_work *work)
{
struct ieee80211_local *local =
container_of(work, struct ieee80211_local, roc_work.work);
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
__ieee80211_roc_work(local);
- mutex_unlock(&local->mtx);
}
-static void ieee80211_hw_roc_done(struct work_struct *work)
+static void ieee80211_hw_roc_done(struct wiphy *wiphy, struct wiphy_work *work)
{
struct ieee80211_local *local =
container_of(work, struct ieee80211_local, hw_roc_done);
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
ieee80211_end_finished_rocs(local, jiffies);
/* if there's another roc, start it now */
ieee80211_start_next_roc(local);
-
- mutex_unlock(&local->mtx);
}
void ieee80211_remain_on_channel_expired(struct ieee80211_hw *hw)
@@ -482,7 +478,7 @@ void ieee80211_remain_on_channel_expired(struct ieee80211_hw *hw)
trace_api_remain_on_channel_expired(local);
- ieee80211_queue_work(hw, &local->hw_roc_done);
+ wiphy_work_queue(hw->wiphy, &local->hw_roc_done);
}
EXPORT_SYMBOL_GPL(ieee80211_remain_on_channel_expired);
@@ -537,7 +533,7 @@ static int ieee80211_start_roc_work(struct ieee80211_local *local,
bool queued = false, combine_started = true;
int ret;
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (channel->freq_offset)
/* this may work, but is untested */
@@ -586,8 +582,8 @@ static int ieee80211_start_roc_work(struct ieee80211_local *local,
/* if not HW assist, just queue & schedule work */
if (!local->ops->remain_on_channel) {
list_add_tail(&roc->list, &local->roc_list);
- ieee80211_queue_delayed_work(&local->hw,
- &local->roc_work, 0);
+ wiphy_delayed_work_queue(local->hw.wiphy,
+ &local->roc_work, 0);
} else {
/* otherwise actually kick it off here
* (for error handling)
@@ -675,15 +671,12 @@ int ieee80211_remain_on_channel(struct wiphy *wiphy, struct wireless_dev *wdev,
{
struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
struct ieee80211_local *local = sdata->local;
- int ret;
- mutex_lock(&local->mtx);
- ret = ieee80211_start_roc_work(local, sdata, chan,
- duration, cookie, NULL,
- IEEE80211_ROC_TYPE_NORMAL);
- mutex_unlock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
- return ret;
+ return ieee80211_start_roc_work(local, sdata, chan,
+ duration, cookie, NULL,
+ IEEE80211_ROC_TYPE_NORMAL);
}
static int ieee80211_cancel_roc(struct ieee80211_local *local,
@@ -692,12 +685,13 @@ static int ieee80211_cancel_roc(struct ieee80211_local *local,
struct ieee80211_roc_work *roc, *tmp, *found = NULL;
int ret;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!cookie)
return -ENOENT;
- flush_work(&local->hw_roc_start);
+ wiphy_work_flush(local->hw.wiphy, &local->hw_roc_start);
- mutex_lock(&local->mtx);
list_for_each_entry_safe(roc, tmp, &local->roc_list, list) {
if (!mgmt_tx && roc->cookie != cookie)
continue;
@@ -709,7 +703,6 @@ static int ieee80211_cancel_roc(struct ieee80211_local *local,
}
if (!found) {
- mutex_unlock(&local->mtx);
return -ENOENT;
}
@@ -721,10 +714,26 @@ static int ieee80211_cancel_roc(struct ieee80211_local *local,
if (local->ops->remain_on_channel) {
ret = drv_cancel_remain_on_channel(local, roc->sdata);
if (WARN_ON_ONCE(ret)) {
- mutex_unlock(&local->mtx);
return ret;
}
+ /*
+ * We could be racing against the notification from the driver:
+ * + driver is handling the notification on CPU0
+ * + user space is cancelling the remain-on-channel and
+ * scheduling the hw_roc_done worker.
+ *
+ * Now hw_roc_done might start to run after the next roc
+ * starts, and mac80211 will think that this second roc has
+ * ended prematurely.
+ * Cancel the work to make sure that all the pending workers
+ * have completed execution.
+ * Note that this assumes that by the time the driver returns
+ * from drv_cancel_remain_on_channel, it has completed all
+ * the processing of related notifications.
+ */
+ wiphy_work_cancel(local->hw.wiphy, &local->hw_roc_done);
+
/* TODO:
* if multiple items were combined here then we really shouldn't
* cancel them all - we should wait for as much time as needed
@@ -745,11 +754,10 @@ static int ieee80211_cancel_roc(struct ieee80211_local *local,
} else {
/* go through work struct to return to the operating channel */
found->abort = true;
- mod_delayed_work(local->workqueue, &local->roc_work, 0);
+ wiphy_delayed_work_queue(local->hw.wiphy, &local->roc_work, 0);
}
out_unlock:
- mutex_unlock(&local->mtx);
return 0;
}
@@ -778,6 +786,8 @@ int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
int ret;
u8 *data;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (params->dont_wait_for_ack)
flags = IEEE80211_TX_CTL_NO_ACK;
else
@@ -833,13 +843,16 @@ int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
break;
case NL80211_IFTYPE_STATION:
case NL80211_IFTYPE_P2P_CLIENT:
- sdata_lock(sdata);
if (!sdata->u.mgd.associated ||
(params->offchan && params->wait &&
local->ops->remain_on_channel &&
- memcmp(sdata->vif.cfg.ap_addr, mgmt->bssid, ETH_ALEN)))
+ memcmp(sdata->vif.cfg.ap_addr, mgmt->bssid, ETH_ALEN))) {
need_offchan = true;
- sdata_unlock(sdata);
+ } else if (sdata->u.mgd.associated &&
+ ether_addr_equal(sdata->vif.cfg.ap_addr, mgmt->da)) {
+ sta = sta_info_get_bss(sdata, mgmt->da);
+ mlo_sta = sta && sta->sta.mlo;
+ }
break;
case NL80211_IFTYPE_P2P_DEVICE:
need_offchan = true;
@@ -855,8 +868,6 @@ int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
if (need_offchan && !params->chan)
return -EINVAL;
- mutex_lock(&local->mtx);
-
/* Check if the operating channel is the requested channel */
if (!params->chan && mlo_sta) {
need_offchan = false;
@@ -980,7 +991,6 @@ int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
if (ret)
ieee80211_free_txskb(&local->hw, skb);
out_unlock:
- mutex_unlock(&local->mtx);
return ret;
}
@@ -994,9 +1004,9 @@ int ieee80211_mgmt_tx_cancel_wait(struct wiphy *wiphy,
void ieee80211_roc_setup(struct ieee80211_local *local)
{
- INIT_WORK(&local->hw_roc_start, ieee80211_hw_roc_start);
- INIT_WORK(&local->hw_roc_done, ieee80211_hw_roc_done);
- INIT_DELAYED_WORK(&local->roc_work, ieee80211_roc_work);
+ wiphy_work_init(&local->hw_roc_start, ieee80211_hw_roc_start);
+ wiphy_work_init(&local->hw_roc_done, ieee80211_hw_roc_done);
+ wiphy_delayed_work_init(&local->roc_work, ieee80211_roc_work);
INIT_LIST_HEAD(&local->roc_list);
}
@@ -1006,7 +1016,8 @@ void ieee80211_roc_purge(struct ieee80211_local *local,
struct ieee80211_roc_work *roc, *tmp;
bool work_to_do = false;
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
list_for_each_entry_safe(roc, tmp, &local->roc_list, list) {
if (sdata && roc->sdata != sdata)
continue;
@@ -1026,5 +1037,4 @@ void ieee80211_roc_purge(struct ieee80211_local *local,
}
if (work_to_do)
__ieee80211_roc_work(local);
- mutex_unlock(&local->mtx);
}
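The offchannel/ROC conversions above move hw_roc_start, hw_roc_done and roc_work from the mac80211 workqueue onto the wiphy work infrastructure, so queueing, cancelling and flushing all go through the wiphy_work_*() / wiphy_delayed_work_*() calls and execute with wiphy->mtx held. A minimal lifecycle sketch of the delayed variant of that API follows; struct bar and the bar_*() helpers are placeholders for illustration, not code from the patch.

#include <net/cfg80211.h>

struct bar {
	struct wiphy *wiphy;
	struct wiphy_delayed_work timeout_work;
};

static void bar_timeout(struct wiphy *wiphy, struct wiphy_work *work)
{
	struct bar *bar = container_of(work, struct bar, timeout_work.work);

	/* wiphy_work handlers run on the wiphy workqueue with wiphy->mtx held */
	lockdep_assert_wiphy(wiphy);
	(void)bar;
}

static void bar_setup(struct bar *bar)
{
	/* replaces INIT_DELAYED_WORK() in the converted code */
	wiphy_delayed_work_init(&bar->timeout_work, bar_timeout);
}

static void bar_arm(struct bar *bar, unsigned long delay)
{
	/* replaces ieee80211_queue_delayed_work()/mod_delayed_work() */
	wiphy_delayed_work_queue(bar->wiphy, &bar->timeout_work, delay);
}

static void bar_stop(struct bar *bar)
{
	/* replaces cancel_delayed_work_sync(); callers hold wiphy->mtx */
	wiphy_delayed_work_cancel(bar->wiphy, &bar->timeout_work);
}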
diff --git a/net/mac80211/pm.c b/net/mac80211/pm.c
index 0ccb5701c7f3..c1fa26e09479 100644
--- a/net/mac80211/pm.c
+++ b/net/mac80211/pm.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Portions
- * Copyright (C) 2020-2021 Intel Corporation
+ * Copyright (C) 2020-2021, 2023 Intel Corporation
*/
#include <net/mac80211.h>
#include <net/rtnetlink.h>
@@ -40,13 +40,12 @@ int __ieee80211_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan)
if (ieee80211_hw_check(hw, AMPDU_AGGREGATION) &&
!(wowlan && wowlan->any)) {
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(sta, &local->sta_list, list) {
set_sta_flag(sta, WLAN_STA_BLOCK_BA);
ieee80211_sta_tear_down_BA_sessions(
sta, AGG_STOP_LOCAL_REQUEST);
}
- mutex_unlock(&local->sta_mtx);
}
/* keep sched_scan only in case of 'any' trigger */
@@ -76,7 +75,7 @@ int __ieee80211_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan)
* Note that this particular timer doesn't need to be
* restarted at resume.
*/
- cancel_work_sync(&local->dynamic_ps_enable_work);
+ wiphy_work_cancel(local->hw.wiphy, &local->dynamic_ps_enable_work);
del_timer_sync(&local->dynamic_ps_timer);
local->wowlan = wowlan;
@@ -119,12 +118,11 @@ int __ieee80211_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan)
local->quiescing = false;
local->wowlan = false;
if (ieee80211_hw_check(hw, AMPDU_AGGREGATION)) {
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(sta,
&local->sta_list, list) {
clear_sta_flag(sta, WLAN_STA_BLOCK_BA);
}
- mutex_unlock(&local->sta_mtx);
}
ieee80211_wake_queues_by_reason(hw,
IEEE80211_MAX_QUEUE_MAP,
@@ -161,7 +159,8 @@ int __ieee80211_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan)
break;
}
- flush_delayed_work(&sdata->dec_tailroom_needed_wk);
+ wiphy_delayed_work_flush(local->hw.wiphy,
+ &sdata->dec_tailroom_needed_wk);
drv_remove_interface(local, sdata);
}
diff --git a/net/mac80211/rc80211_minstrel_ht.c b/net/mac80211/rc80211_minstrel_ht.c
index b34c80522047..6bf3b4444a43 100644
--- a/net/mac80211/rc80211_minstrel_ht.c
+++ b/net/mac80211/rc80211_minstrel_ht.c
@@ -1725,16 +1725,15 @@ minstrel_ht_update_caps(void *priv, struct ieee80211_supported_band *sband,
mi->band = sband->band;
mi->last_stats_update = jiffies;
- ack_dur = ieee80211_frame_duration(sband->band, 10, 60, 1, 1, 0);
- mi->overhead = ieee80211_frame_duration(sband->band, 0, 60, 1, 1, 0);
+ ack_dur = ieee80211_frame_duration(sband->band, 10, 60, 1, 1);
+ mi->overhead = ieee80211_frame_duration(sband->band, 0, 60, 1, 1);
mi->overhead += ack_dur;
mi->overhead_rtscts = mi->overhead + 2 * ack_dur;
ctl_rate = &sband->bitrates[rate_lowest_index(sband, sta)];
erp = ctl_rate->flags & IEEE80211_RATE_ERP_G;
ack_dur = ieee80211_frame_duration(sband->band, 10,
- ctl_rate->bitrate, erp, 1,
- ieee80211_chandef_get_shift(chandef));
+ ctl_rate->bitrate, erp, 1);
mi->overhead_legacy = ack_dur;
mi->overhead_legacy_rtscts = mi->overhead_legacy + 2 * ack_dur;
diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
index 8f6b6f56b65b..64352e4e6d00 100644
--- a/net/mac80211/rx.c
+++ b/net/mac80211/rx.c
@@ -1436,7 +1436,7 @@ ieee80211_rx_h_check_dup(struct ieee80211_rx_data *rx)
rx->sta->last_seq_ctrl[rx->seqno_idx] == hdr->seq_ctrl)) {
I802_DEBUG_INC(rx->local->dot11FrameDuplicateCount);
rx->link_sta->rx_stats.num_duplicates++;
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_DUP;
} else if (!(status->flag & RX_FLAG_AMSDU_MORE)) {
rx->sta->last_seq_ctrl[rx->seqno_idx] = hdr->seq_ctrl;
}
@@ -1490,7 +1490,7 @@ ieee80211_rx_h_check(struct ieee80211_rx_data *rx)
cfg80211_rx_spurious_frame(rx->sdata->dev,
hdr->addr2,
GFP_ATOMIC))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SPURIOUS;
return RX_DROP_MONITOR;
}
@@ -1883,7 +1883,7 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(skb);
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
int keyidx;
- ieee80211_rx_result result = RX_DROP_UNUSABLE;
+ ieee80211_rx_result result = RX_DROP_U_DECRYPT_FAIL;
struct ieee80211_key *sta_ptk = NULL;
struct ieee80211_key *ptk_idx = NULL;
int mmie_keyidx = -1;
@@ -1933,7 +1933,7 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
keyid = ieee80211_get_keyid(rx->skb);
if (unlikely(keyid < 0))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_NO_KEY_ID;
ptk_idx = rcu_dereference(rx->sta->ptk[keyid]);
}
@@ -2038,7 +2038,7 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
keyidx = ieee80211_get_keyid(rx->skb);
if (unlikely(keyidx < 0))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_NO_KEY_ID;
/* check per-station GTK first, if multicast packet */
if (is_multicast_ether_addr(hdr->addr1) && rx->link_sta)
@@ -2104,7 +2104,7 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
result = ieee80211_crypto_gcmp_decrypt(rx);
break;
default:
- result = RX_DROP_UNUSABLE;
+ result = RX_DROP_U_BAD_CIPHER;
}
/* the hdr variable is invalid after the decrypt handlers */
@@ -2112,7 +2112,7 @@ ieee80211_rx_h_decrypt(struct ieee80211_rx_data *rx)
/* either the frame has been decrypted or will be dropped */
status->flag |= RX_FLAG_DECRYPTED;
- if (unlikely(ieee80211_is_beacon(fc) && (result & RX_DROP_UNUSABLE) &&
+ if (unlikely(ieee80211_is_beacon(fc) && RX_RES_IS_UNUSABLE(result) &&
rx->sdata->dev))
cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
skb->data, skb->len);
@@ -2249,7 +2249,7 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
I802_DEBUG_INC(rx->local->rx_handlers_fragments);
if (skb_linearize(rx->skb))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_OOM;
/*
* skb_linearize() might change the skb->data and
@@ -2312,11 +2312,11 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
u8 pn[IEEE80211_CCMP_PN_LEN], *rpn;
if (!requires_sequential_pn(rx, fc))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_NONSEQ_PN;
/* Prevent mixed key and fragment cache attacks */
if (entry->key_color != rx->key->color)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_BAD_KEY_COLOR;
memcpy(pn, entry->last_pn, IEEE80211_CCMP_PN_LEN);
for (i = IEEE80211_CCMP_PN_LEN - 1; i >= 0; i--) {
@@ -2327,7 +2327,7 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
rpn = rx->ccm_gcm.pn;
if (memcmp(pn, rpn, IEEE80211_CCMP_PN_LEN))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_REPLAY;
memcpy(entry->last_pn, pn, IEEE80211_CCMP_PN_LEN);
} else if (entry->is_protected &&
(!rx->key ||
@@ -2338,11 +2338,11 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
* if for TKIP Michael MIC should protect us, and WEP is a
* lost cause anyway.
*/
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_EXPECT_DEFRAG_PROT;
} else if (entry->is_protected && rx->key &&
entry->key_color != rx->key->color &&
(status->flag & RX_FLAG_DECRYPTED)) {
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_BAD_KEY_COLOR;
}
skb_pull(rx->skb, ieee80211_hdrlen(fc));
@@ -2361,7 +2361,7 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
GFP_ATOMIC))) {
I802_DEBUG_INC(rx->local->rx_handlers_drop_defrag);
__skb_queue_purge(&entry->skb_list);
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_OOM;
}
}
while ((skb = __skb_dequeue(&entry->skb_list))) {
@@ -2405,7 +2405,8 @@ static int ieee80211_drop_unencrypted(struct ieee80211_rx_data *rx, __le16 fc)
return 0;
}
-static int ieee80211_drop_unencrypted_mgmt(struct ieee80211_rx_data *rx)
+static ieee80211_rx_result
+ieee80211_drop_unencrypted_mgmt(struct ieee80211_rx_data *rx)
{
struct ieee80211_rx_status *status = IEEE80211_SKB_RXCB(rx->skb);
struct ieee80211_mgmt *mgmt = (void *)rx->skb->data;
@@ -2416,12 +2417,12 @@ static int ieee80211_drop_unencrypted_mgmt(struct ieee80211_rx_data *rx)
* decrypted them already.
*/
if (status->flag & RX_FLAG_DECRYPTED)
- return 0;
+ return RX_CONTINUE;
/* drop unicast protected dual (that wasn't protected) */
if (ieee80211_is_action(fc) &&
mgmt->u.action.category == WLAN_CATEGORY_PROTECTED_DUAL_OF_ACTION)
- return -EACCES;
+ return RX_DROP_U_UNPROT_DUAL;
if (rx->sta && test_sta_flag(rx->sta, WLAN_STA_MFP)) {
if (unlikely(!ieee80211_has_protected(fc) &&
@@ -2433,13 +2434,13 @@ static int ieee80211_drop_unencrypted_mgmt(struct ieee80211_rx_data *rx)
* during 4-way-HS (key is installed after HS).
*/
if (!rx->key)
- return 0;
+ return RX_CONTINUE;
cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
rx->skb->data,
rx->skb->len);
}
- return -EACCES;
+ return RX_DROP_U_UNPROT_UCAST_MGMT;
}
/* BIP does not use Protected field, so need to check MMIE */
if (unlikely(ieee80211_is_multicast_robust_mgmt_frame(rx->skb) &&
@@ -2449,14 +2450,14 @@ static int ieee80211_drop_unencrypted_mgmt(struct ieee80211_rx_data *rx)
cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
rx->skb->data,
rx->skb->len);
- return -EACCES;
+ return RX_DROP_U_UNPROT_MCAST_MGMT;
}
if (unlikely(ieee80211_is_beacon(fc) && rx->key &&
ieee80211_get_mmie_keyidx(rx->skb) < 0)) {
cfg80211_rx_unprot_mlme_mgmt(rx->sdata->dev,
rx->skb->data,
rx->skb->len);
- return -EACCES;
+ return RX_DROP_U_UNPROT_BEACON;
}
/*
* When using MFP, Action frames are not allowed prior to
@@ -2464,18 +2465,27 @@ static int ieee80211_drop_unencrypted_mgmt(struct ieee80211_rx_data *rx)
*/
if (unlikely(ieee80211_is_action(fc) && !rx->key &&
ieee80211_is_robust_mgmt_frame(rx->skb)))
- return -EACCES;
+ return RX_DROP_U_UNPROT_ACTION;
/* drop unicast public action frames when using MPF */
if (is_unicast_ether_addr(mgmt->da) &&
ieee80211_is_protected_dual_of_public_action(rx->skb))
- return -EACCES;
+ return RX_DROP_U_UNPROT_UNICAST_PUB_ACTION;
}
- return 0;
+ /*
+ * Drop robust action frames before assoc regardless of MFP state,
+ * after assoc we also have decided on MFP or not.
+ */
+ if (ieee80211_is_action(fc) &&
+ ieee80211_is_robust_mgmt_frame(rx->skb) &&
+ (!rx->sta || !test_sta_flag(rx->sta, WLAN_STA_ASSOC)))
+ return RX_DROP_U_UNPROT_ROBUST_ACTION;
+
+ return RX_CONTINUE;
}
-static int
+static ieee80211_rx_result
__ieee80211_data_to_8023(struct ieee80211_rx_data *rx, bool *port_control)
{
struct ieee80211_sub_if_data *sdata = rx->sdata;
@@ -2487,32 +2497,31 @@ __ieee80211_data_to_8023(struct ieee80211_rx_data *rx, bool *port_control)
*port_control = false;
if (ieee80211_has_a4(hdr->frame_control) &&
sdata->vif.type == NL80211_IFTYPE_AP_VLAN && !sdata->u.vlan.sta)
- return -1;
+ return RX_DROP_U_UNEXPECTED_VLAN_4ADDR;
if (sdata->vif.type == NL80211_IFTYPE_STATION &&
!!sdata->u.mgd.use_4addr != !!ieee80211_has_a4(hdr->frame_control)) {
-
if (!sdata->u.mgd.use_4addr)
- return -1;
+ return RX_DROP_U_UNEXPECTED_STA_4ADDR;
else if (!ether_addr_equal(hdr->addr1, sdata->vif.addr))
check_port_control = true;
}
if (is_multicast_ether_addr(hdr->addr1) &&
sdata->vif.type == NL80211_IFTYPE_AP_VLAN && sdata->u.vlan.sta)
- return -1;
+ return RX_DROP_U_UNEXPECTED_VLAN_MCAST;
ret = ieee80211_data_to_8023(rx->skb, sdata->vif.addr, sdata->vif.type);
if (ret < 0)
- return ret;
+ return RX_DROP_U_INVALID_8023;
ehdr = (struct ethhdr *) rx->skb->data;
if (ehdr->h_proto == rx->sdata->control_port_protocol)
*port_control = true;
else if (check_port_control)
- return -1;
+ return RX_DROP_U_NOT_PORT_CONTROL;
- return 0;
+ return RX_CONTINUE;
}
bool ieee80211_is_our_addr(struct ieee80211_sub_if_data *sdata,
@@ -2903,10 +2912,10 @@ ieee80211_rx_mesh_data(struct ieee80211_sub_if_data *sdata, struct sta_info *sta
skb = NULL;
if (skb_cow_head(fwd_skb, hdrlen - sizeof(struct ethhdr)))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_OOM;
if (skb_linearize(fwd_skb))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_OOM;
}
fwd_hdr = skb_push(fwd_skb, hdrlen - sizeof(struct ethhdr));
@@ -3002,7 +3011,7 @@ __ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx, u8 data_offset)
rx->sdata->vif.addr,
rx->sdata->vif.type,
data_offset, true))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_BAD_AMSDU;
if (rx->sta->amsdu_mesh_control < 0) {
s8 valid = -1;
@@ -3077,21 +3086,21 @@ ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx)
switch (rx->sdata->vif.type) {
case NL80211_IFTYPE_AP_VLAN:
if (!rx->sdata->u.vlan.sta)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_BAD_4ADDR;
break;
case NL80211_IFTYPE_STATION:
if (!rx->sdata->u.mgd.use_4addr)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_BAD_4ADDR;
break;
case NL80211_IFTYPE_MESH_POINT:
break;
default:
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_BAD_4ADDR;
}
}
if (is_multicast_ether_addr(hdr->addr1) || !rx->sta)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_BAD_AMSDU;
if (rx->key) {
/*
@@ -3104,7 +3113,7 @@ ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx)
case WLAN_CIPHER_SUITE_WEP40:
case WLAN_CIPHER_SUITE_WEP104:
case WLAN_CIPHER_SUITE_TKIP:
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_BAD_AMSDU_CIPHER;
default:
break;
}
@@ -3123,7 +3132,6 @@ ieee80211_rx_h_data(struct ieee80211_rx_data *rx)
__le16 fc = hdr->frame_control;
ieee80211_rx_result res;
bool port_control;
- int err;
if (unlikely(!ieee80211_is_data(hdr->frame_control)))
return RX_CONTINUE;
@@ -3144,9 +3152,9 @@ ieee80211_rx_h_data(struct ieee80211_rx_data *rx)
return RX_DROP_MONITOR;
}
- err = __ieee80211_data_to_8023(rx, &port_control);
- if (unlikely(err))
- return RX_DROP_UNUSABLE;
+ res = __ieee80211_data_to_8023(rx, &port_control);
+ if (unlikely(res != RX_CONTINUE))
+ return res;
res = ieee80211_rx_mesh_data(rx->sdata, rx->sta, rx->skb);
if (res != RX_CONTINUE)
@@ -3378,7 +3386,7 @@ ieee80211_rx_h_mgmt_check(struct ieee80211_rx_data *rx)
/* drop too small action frames */
if (ieee80211_is_action(mgmt->frame_control) &&
rx->skb->len < IEEE80211_MIN_ACTION_SIZE)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_RUNT_ACTION;
if (rx->sdata->vif.type == NL80211_IFTYPE_AP &&
ieee80211_is_beacon(mgmt->frame_control) &&
@@ -3399,10 +3407,7 @@ ieee80211_rx_h_mgmt_check(struct ieee80211_rx_data *rx)
rx->flags |= IEEE80211_RX_BEACON_REPORTED;
}
- if (ieee80211_drop_unencrypted_mgmt(rx))
- return RX_DROP_UNUSABLE;
-
- return RX_CONTINUE;
+ return ieee80211_drop_unencrypted_mgmt(rx);
}
static bool
@@ -3472,7 +3477,7 @@ ieee80211_rx_h_action(struct ieee80211_rx_data *rx)
if (!rx->sta && mgmt->u.action.category != WLAN_CATEGORY_PUBLIC &&
mgmt->u.action.category != WLAN_CATEGORY_SELF_PROTECTED &&
mgmt->u.action.category != WLAN_CATEGORY_SPECTRUM_MGMT)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_ACTION_UNKNOWN_SRC;
switch (mgmt->u.action.category) {
case WLAN_CATEGORY_HT:
@@ -3877,7 +3882,7 @@ ieee80211_rx_h_action_return(struct ieee80211_rx_data *rx)
/* do not return rejected action frames */
if (mgmt->u.action.category & 0x80)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_REJECTED_ACTION_RESPONSE;
nskb = skb_copy_expand(rx->skb, local->hw.extra_tx_headroom, 0,
GFP_ATOMIC);
@@ -4668,7 +4673,7 @@ void __ieee80211_check_fast_rx_iface(struct ieee80211_sub_if_data *sdata)
struct ieee80211_local *local = sdata->local;
struct sta_info *sta;
- lockdep_assert_held(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(sta, &local->sta_list, list) {
if (sdata != sta->sdata &&
@@ -4682,9 +4687,9 @@ void ieee80211_check_fast_rx_iface(struct ieee80211_sub_if_data *sdata)
{
struct ieee80211_local *local = sdata->local;
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
__ieee80211_check_fast_rx_iface(sdata);
- mutex_unlock(&local->sta_mtx);
}
static void ieee80211_rx_8023(struct ieee80211_rx_data *rx,
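
The rx.c conversion replaces the catch-all RX_DROP_UNUSABLE with per-cause codes (RX_DROP_U_DUP, RX_DROP_U_OOM, RX_DROP_U_REPLAY, ...), and bit tests such as result & RX_DROP_UNUSABLE become RX_RES_IS_UNUSABLE(result), since the specific reason now lives in the result value. The point is diagnosability: each drop site reports why the frame died, which the kernel's skb drop-reason machinery can then surface (for instance via the kfree_skb tracepoint). A sketch of how a handler looks after the conversion, reusing reason codes that appear in the hunks above; the handler itself is illustrative, not from the patch:

	static ieee80211_rx_result example_rx_handler(struct ieee80211_rx_data *rx)
	{
		/* too-short action frames get a dedicated reason instead of
		 * the old catch-all RX_DROP_UNUSABLE
		 */
		if (rx->skb->len < IEEE80211_MIN_ACTION_SIZE)
			return RX_DROP_U_RUNT_ACTION;

		/* allocation failures likewise */
		if (skb_linearize(rx->skb))
			return RX_DROP_U_OOM;

		return RX_CONTINUE;
	}

Callers that previously tested the RX_DROP_UNUSABLE bit now ask whether the result falls into the unusable class via RX_RES_IS_UNUSABLE().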
diff --git a/net/mac80211/s1g.c b/net/mac80211/s1g.c
index c1f964e9991c..d4ed0c0a335c 100644
--- a/net/mac80211/s1g.c
+++ b/net/mac80211/s1g.c
@@ -2,6 +2,7 @@
/*
* S1G handling
* Copyright(c) 2020 Adapt-IP
+ * Copyright (C) 2023 Intel Corporation
*/
#include <linux/ieee80211.h>
#include <net/mac80211.h>
@@ -153,11 +154,11 @@ void ieee80211_s1g_rx_twt_action(struct ieee80211_sub_if_data *sdata,
struct ieee80211_local *local = sdata->local;
struct sta_info *sta;
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
sta = sta_info_get_bss(sdata, mgmt->sa);
if (!sta)
- goto out;
+ return;
switch (mgmt->u.action.u.s1g.action_code) {
case WLAN_S1G_TWT_SETUP:
@@ -169,9 +170,6 @@ void ieee80211_s1g_rx_twt_action(struct ieee80211_sub_if_data *sdata,
default:
break;
}
-
-out:
- mutex_unlock(&local->sta_mtx);
}
void ieee80211_s1g_status_twt_action(struct ieee80211_sub_if_data *sdata,
@@ -181,11 +179,11 @@ void ieee80211_s1g_status_twt_action(struct ieee80211_sub_if_data *sdata,
struct ieee80211_local *local = sdata->local;
struct sta_info *sta;
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
sta = sta_info_get_bss(sdata, mgmt->da);
if (!sta)
- goto out;
+ return;
switch (mgmt->u.action.u.s1g.action_code) {
case WLAN_S1G_TWT_SETUP:
@@ -195,7 +193,4 @@ void ieee80211_s1g_status_twt_action(struct ieee80211_sub_if_data *sdata,
default:
break;
}
-
-out:
- mutex_unlock(&local->sta_mtx);
}
diff --git a/net/mac80211/scan.c b/net/mac80211/scan.c
index 0805aa8603c6..24fa06105378 100644
--- a/net/mac80211/scan.c
+++ b/net/mac80211/scan.c
@@ -187,12 +187,6 @@ ieee80211_bss_info_update(struct ieee80211_local *local,
else if (ieee80211_hw_check(&local->hw, SIGNAL_UNSPEC))
bss_meta.signal = (rx_status->signal * 100) / local->hw.max_signal;
- bss_meta.scan_width = NL80211_BSS_CHAN_WIDTH_20;
- if (rx_status->bw == RATE_INFO_BW_5)
- bss_meta.scan_width = NL80211_BSS_CHAN_WIDTH_5;
- else if (rx_status->bw == RATE_INFO_BW_10)
- bss_meta.scan_width = NL80211_BSS_CHAN_WIDTH_10;
-
bss_meta.chan = channel;
rcu_read_lock();
@@ -274,8 +268,8 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
* the beacon/proberesp rx gives us an opportunity to upgrade
* to active scan
*/
- set_bit(SCAN_BEACON_DONE, &local->scanning);
- ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);
+ set_bit(SCAN_BEACON_DONE, &local->scanning);
+ wiphy_delayed_work_queue(local->hw.wiphy, &local->scan_work, 0);
}
if (ieee80211_is_probe_resp(mgmt->frame_control)) {
@@ -315,22 +309,11 @@ void ieee80211_scan_rx(struct ieee80211_local *local, struct sk_buff *skb)
ieee80211_rx_bss_put(local, bss);
}
-static void
-ieee80211_prepare_scan_chandef(struct cfg80211_chan_def *chandef,
- enum nl80211_bss_scan_width scan_width)
+static void ieee80211_prepare_scan_chandef(struct cfg80211_chan_def *chandef)
{
memset(chandef, 0, sizeof(*chandef));
- switch (scan_width) {
- case NL80211_BSS_CHAN_WIDTH_5:
- chandef->width = NL80211_CHAN_WIDTH_5;
- break;
- case NL80211_BSS_CHAN_WIDTH_10:
- chandef->width = NL80211_CHAN_WIDTH_10;
- break;
- default:
- chandef->width = NL80211_CHAN_WIDTH_20_NOHT;
- break;
- }
+
+ chandef->width = NL80211_CHAN_WIDTH_20_NOHT;
}
/* return false if no more work */
@@ -344,7 +327,7 @@ static bool ieee80211_prep_hw_scan(struct ieee80211_sub_if_data *sdata)
u32 flags = 0;
req = rcu_dereference_protected(local->scan_req,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (test_bit(SCAN_HW_CANCELLED, &local->scanning))
return false;
@@ -378,7 +361,7 @@ static bool ieee80211_prep_hw_scan(struct ieee80211_sub_if_data *sdata)
}
local->hw_scan_req->req.n_channels = n_chans;
- ieee80211_prepare_scan_chandef(&chandef, req->scan_width);
+ ieee80211_prepare_scan_chandef(&chandef);
if (req->flags & NL80211_SCAN_FLAG_MIN_PREQ_CONTENT)
flags |= IEEE80211_PROBE_FLAG_MIN_CONTENT;
@@ -409,7 +392,7 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted)
struct ieee80211_sub_if_data *scan_sdata;
struct ieee80211_sub_if_data *sdata;
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/*
* It's ok to abort a not-yet-running scan (that
@@ -424,7 +407,7 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted)
return;
scan_sdata = rcu_dereference_protected(local->scan_sdata,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (hw_scan && !aborted &&
!ieee80211_hw_check(&local->hw, SINGLE_SCAN_ON_ALL_BANDS) &&
@@ -433,7 +416,7 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted)
rc = drv_hw_scan(local,
rcu_dereference_protected(local->scan_sdata,
- lockdep_is_held(&local->mtx)),
+ lockdep_is_held(&local->hw.wiphy->mtx)),
local->hw_scan_req);
if (rc == 0)
@@ -450,7 +433,7 @@ static void __ieee80211_scan_completed(struct ieee80211_hw *hw, bool aborted)
local->hw_scan_req = NULL;
scan_req = rcu_dereference_protected(local->scan_req,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
RCU_INIT_POINTER(local->scan_req, NULL);
RCU_INIT_POINTER(local->scan_sdata, NULL);
@@ -505,7 +488,7 @@ void ieee80211_scan_completed(struct ieee80211_hw *hw,
memcpy(&local->scan_info, info, sizeof(*info));
- ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);
+ wiphy_delayed_work_queue(local->hw.wiphy, &local->scan_work, 0);
}
EXPORT_SYMBOL(ieee80211_scan_completed);
@@ -545,8 +528,7 @@ static int ieee80211_start_sw_scan(struct ieee80211_local *local,
/* We need to set power level at maximum rate for scanning. */
ieee80211_hw_config(local, 0);
- ieee80211_queue_delayed_work(&local->hw,
- &local->scan_work, 0);
+ wiphy_delayed_work_queue(local->hw.wiphy, &local->scan_work, 0);
return 0;
}
@@ -556,20 +538,18 @@ static bool __ieee80211_can_leave_ch(struct ieee80211_sub_if_data *sdata)
struct ieee80211_local *local = sdata->local;
struct ieee80211_sub_if_data *sdata_iter;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!ieee80211_is_radar_required(local))
return true;
if (!regulatory_pre_cac_allowed(local->hw.wiphy))
return false;
- mutex_lock(&local->iflist_mtx);
list_for_each_entry(sdata_iter, &local->interfaces, list) {
- if (sdata_iter->wdev.cac_started) {
- mutex_unlock(&local->iflist_mtx);
+ if (sdata_iter->wdev.cac_started)
return false;
- }
}
- mutex_unlock(&local->iflist_mtx);
return true;
}
@@ -592,7 +572,7 @@ static bool ieee80211_can_scan(struct ieee80211_local *local,
void ieee80211_run_deferred_scan(struct ieee80211_local *local)
{
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!local->scan_req || local->scanning)
return;
@@ -600,11 +580,11 @@ void ieee80211_run_deferred_scan(struct ieee80211_local *local)
if (!ieee80211_can_scan(local,
rcu_dereference_protected(
local->scan_sdata,
- lockdep_is_held(&local->mtx))))
+ lockdep_is_held(&local->hw.wiphy->mtx))))
return;
- ieee80211_queue_delayed_work(&local->hw, &local->scan_work,
- round_jiffies_relative(0));
+ wiphy_delayed_work_queue(local->hw.wiphy, &local->scan_work,
+ round_jiffies_relative(0));
}
static void ieee80211_send_scan_probe_req(struct ieee80211_sub_if_data *sdata,
@@ -645,7 +625,7 @@ static void ieee80211_scan_state_send_probe(struct ieee80211_local *local,
u32 flags = 0, tx_flags;
scan_req = rcu_dereference_protected(local->scan_req,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
tx_flags = IEEE80211_TX_INTFL_OFFCHAN_TX_OK;
if (scan_req->no_cck)
@@ -656,7 +636,7 @@ static void ieee80211_scan_state_send_probe(struct ieee80211_local *local,
flags |= IEEE80211_PROBE_FLAG_RANDOM_SN;
sdata = rcu_dereference_protected(local->scan_sdata,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
for (i = 0; i < scan_req->n_ssids; i++)
ieee80211_send_scan_probe_req(
@@ -681,7 +661,7 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata,
bool hw_scan = local->ops->hw_scan;
int rc;
- lockdep_assert_held(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (local->scan_req)
return -EBUSY;
@@ -795,8 +775,8 @@ static int __ieee80211_start_scan(struct ieee80211_sub_if_data *sdata,
}
/* Now, just wait a bit and we are all done! */
- ieee80211_queue_delayed_work(&local->hw, &local->scan_work,
- next_delay);
+ wiphy_delayed_work_queue(local->hw.wiphy, &local->scan_work,
+ next_delay);
return 0;
} else {
/* Do normal software scan */
@@ -861,12 +841,13 @@ static void ieee80211_scan_state_decision(struct ieee80211_local *local,
enum mac80211_scan_state next_scan_state;
struct cfg80211_scan_request *scan_req;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
/*
* check if at least one STA interface is associated,
* check if at least one STA interface has pending tx frames
* and grab the lowest used beacon interval
*/
- mutex_lock(&local->iflist_mtx);
list_for_each_entry(sdata, &local->interfaces, list) {
if (!ieee80211_sdata_running(sdata))
continue;
@@ -882,10 +863,9 @@ static void ieee80211_scan_state_decision(struct ieee80211_local *local,
}
}
}
- mutex_unlock(&local->iflist_mtx);
scan_req = rcu_dereference_protected(local->scan_req,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
next_chan = scan_req->channels[local->scan_channel_idx];
@@ -922,11 +902,10 @@ static void ieee80211_scan_state_set_channel(struct ieee80211_local *local,
{
int skip;
struct ieee80211_channel *chan;
- enum nl80211_bss_scan_width oper_scan_width;
struct cfg80211_scan_request *scan_req;
scan_req = rcu_dereference_protected(local->scan_req,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
skip = 0;
chan = scan_req->channels[local->scan_channel_idx];
@@ -936,42 +915,21 @@ static void ieee80211_scan_state_set_channel(struct ieee80211_local *local,
local->scan_chandef.freq1_offset = chan->freq_offset;
local->scan_chandef.center_freq2 = 0;
- /* For scanning on the S1G band, ignore scan_width (which is constant
- * across all channels) for now since channel width is specific to each
- * channel. Detect the required channel width here and likely revisit
- * later. Maybe scan_width could be used to build the channel scan list?
+ /* For scanning on the S1G band, detect the channel width according to
+ * the channel being scanned.
*/
if (chan->band == NL80211_BAND_S1GHZ) {
local->scan_chandef.width = ieee80211_s1g_channel_width(chan);
goto set_channel;
}
- switch (scan_req->scan_width) {
- case NL80211_BSS_CHAN_WIDTH_5:
- local->scan_chandef.width = NL80211_CHAN_WIDTH_5;
- break;
- case NL80211_BSS_CHAN_WIDTH_10:
- local->scan_chandef.width = NL80211_CHAN_WIDTH_10;
- break;
- default:
- case NL80211_BSS_CHAN_WIDTH_20:
- /* If scanning on oper channel, use whatever channel-type
- * is currently in use.
- */
- oper_scan_width = cfg80211_chandef_to_scan_width(
- &local->_oper_chandef);
- if (chan == local->_oper_chandef.chan &&
- oper_scan_width == scan_req->scan_width)
- local->scan_chandef = local->_oper_chandef;
- else
- local->scan_chandef.width = NL80211_CHAN_WIDTH_20_NOHT;
- break;
- case NL80211_BSS_CHAN_WIDTH_1:
- case NL80211_BSS_CHAN_WIDTH_2:
- /* shouldn't get here, S1G handled above */
- WARN_ON(1);
- break;
- }
+ /* If scanning on oper channel, use whatever channel-type
+ * is currently in use.
+ */
+ if (chan == local->_oper_chandef.chan)
+ local->scan_chandef = local->_oper_chandef;
+ else
+ local->scan_chandef.width = NL80211_CHAN_WIDTH_20_NOHT;
set_channel:
if (ieee80211_hw_config(local, IEEE80211_CONF_CHANGE_CHANNEL))
@@ -1043,7 +1001,7 @@ static void ieee80211_scan_state_resume(struct ieee80211_local *local,
local->next_scan_state = SCAN_SET_CHANNEL;
}
-void ieee80211_scan_work(struct work_struct *work)
+void ieee80211_scan_work(struct wiphy *wiphy, struct wiphy_work *work)
{
struct ieee80211_local *local =
container_of(work, struct ieee80211_local, scan_work.work);
@@ -1052,7 +1010,7 @@ void ieee80211_scan_work(struct work_struct *work)
unsigned long next_delay = 0;
bool aborted;
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (!ieee80211_can_run_worker(local)) {
aborted = true;
@@ -1060,9 +1018,9 @@ void ieee80211_scan_work(struct work_struct *work)
}
sdata = rcu_dereference_protected(local->scan_sdata,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
scan_req = rcu_dereference_protected(local->scan_req,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
/* When scanning on-channel, the first-callback means completed. */
if (test_bit(SCAN_ONCHANNEL_SCANNING, &local->scanning)) {
@@ -1076,7 +1034,7 @@ void ieee80211_scan_work(struct work_struct *work)
}
if (!sdata || !scan_req)
- goto out;
+ return;
if (!local->scanning) {
int rc;
@@ -1085,13 +1043,12 @@ void ieee80211_scan_work(struct work_struct *work)
RCU_INIT_POINTER(local->scan_sdata, NULL);
rc = __ieee80211_start_scan(sdata, scan_req);
- if (rc) {
- /* need to complete scan in cfg80211 */
- rcu_assign_pointer(local->scan_req, scan_req);
- aborted = true;
- goto out_complete;
- } else
- goto out;
+ if (!rc)
+ return;
+ /* need to complete scan in cfg80211 */
+ rcu_assign_pointer(local->scan_req, scan_req);
+ aborted = true;
+ goto out_complete;
}
clear_bit(SCAN_BEACON_WAIT, &local->scanning);
@@ -1137,38 +1094,32 @@ void ieee80211_scan_work(struct work_struct *work)
}
} while (next_delay == 0);
- ieee80211_queue_delayed_work(&local->hw, &local->scan_work, next_delay);
- goto out;
+ wiphy_delayed_work_queue(local->hw.wiphy, &local->scan_work,
+ next_delay);
+ return;
out_complete:
__ieee80211_scan_completed(&local->hw, aborted);
-out:
- mutex_unlock(&local->mtx);
}
int ieee80211_request_scan(struct ieee80211_sub_if_data *sdata,
struct cfg80211_scan_request *req)
{
- int res;
-
- mutex_lock(&sdata->local->mtx);
- res = __ieee80211_start_scan(sdata, req);
- mutex_unlock(&sdata->local->mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
- return res;
+ return __ieee80211_start_scan(sdata, req);
}
int ieee80211_request_ibss_scan(struct ieee80211_sub_if_data *sdata,
const u8 *ssid, u8 ssid_len,
struct ieee80211_channel **channels,
- unsigned int n_channels,
- enum nl80211_bss_scan_width scan_width)
+ unsigned int n_channels)
{
struct ieee80211_local *local = sdata->local;
int ret = -EBUSY, i, n_ch = 0;
enum nl80211_band band;
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/* busy scanning */
if (local->scan_req)
@@ -1219,13 +1170,11 @@ int ieee80211_request_ibss_scan(struct ieee80211_sub_if_data *sdata,
local->int_scan_req->ssids = &local->scan_ssid;
local->int_scan_req->n_ssids = 1;
- local->int_scan_req->scan_width = scan_width;
memcpy(local->int_scan_req->ssids[0].ssid, ssid, IEEE80211_MAX_SSID_LEN);
local->int_scan_req->ssids[0].ssid_len = ssid_len;
ret = __ieee80211_start_scan(sdata, sdata->local->int_scan_req);
unlock:
- mutex_unlock(&local->mtx);
return ret;
}
@@ -1252,9 +1201,8 @@ void ieee80211_scan_cancel(struct ieee80211_local *local)
* after the scan was completed/aborted.
*/
- mutex_lock(&local->mtx);
if (!local->scan_req)
- goto out;
+ return;
/*
* We have a scan running and the driver already reported completion,
@@ -1264,7 +1212,7 @@ void ieee80211_scan_cancel(struct ieee80211_local *local)
if (test_bit(SCAN_HW_SCANNING, &local->scanning) &&
test_bit(SCAN_COMPLETED, &local->scanning)) {
set_bit(SCAN_HW_CANCELLED, &local->scanning);
- goto out;
+ return;
}
if (test_bit(SCAN_HW_SCANNING, &local->scanning)) {
@@ -1276,21 +1224,14 @@ void ieee80211_scan_cancel(struct ieee80211_local *local)
if (local->ops->cancel_hw_scan)
drv_cancel_hw_scan(local,
rcu_dereference_protected(local->scan_sdata,
- lockdep_is_held(&local->mtx)));
- goto out;
+ lockdep_is_held(&local->hw.wiphy->mtx)));
+ return;
}
- /*
- * If the work is currently running, it must be blocked on
- * the mutex, but we'll set scan_sdata = NULL and it'll
- * simply exit once it acquires the mutex.
- */
- cancel_delayed_work(&local->scan_work);
+ wiphy_delayed_work_cancel(local->hw.wiphy, &local->scan_work);
/* and clean up */
memset(&local->scan_info, 0, sizeof(local->scan_info));
__ieee80211_scan_completed(&local->hw, true);
-out:
- mutex_unlock(&local->mtx);
}
int __ieee80211_request_sched_scan_start(struct ieee80211_sub_if_data *sdata,
@@ -1305,9 +1246,9 @@ int __ieee80211_request_sched_scan_start(struct ieee80211_sub_if_data *sdata,
u8 *ie;
u32 flags = 0;
- iebufsz = local->scan_ies_len + req->ie_len;
+ lockdep_assert_wiphy(local->hw.wiphy);
- lockdep_assert_held(&local->mtx);
+ iebufsz = local->scan_ies_len + req->ie_len;
if (!local->ops->sched_scan_start)
return -ENOTSUPP;
@@ -1329,7 +1270,7 @@ int __ieee80211_request_sched_scan_start(struct ieee80211_sub_if_data *sdata,
goto out;
}
- ieee80211_prepare_scan_chandef(&chandef, req->scan_width);
+ ieee80211_prepare_scan_chandef(&chandef);
ieee80211_build_preq_ies(sdata, ie, num_bands * iebufsz,
&sched_scan_ies, req->ie,
@@ -1358,19 +1299,13 @@ int ieee80211_request_sched_scan_start(struct ieee80211_sub_if_data *sdata,
struct cfg80211_sched_scan_request *req)
{
struct ieee80211_local *local = sdata->local;
- int ret;
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
- if (rcu_access_pointer(local->sched_scan_sdata)) {
- mutex_unlock(&local->mtx);
+ if (rcu_access_pointer(local->sched_scan_sdata))
return -EBUSY;
- }
-
- ret = __ieee80211_request_sched_scan_start(sdata, req);
- mutex_unlock(&local->mtx);
- return ret;
+ return __ieee80211_request_sched_scan_start(sdata, req);
}
int ieee80211_request_sched_scan_stop(struct ieee80211_local *local)
@@ -1378,25 +1313,21 @@ int ieee80211_request_sched_scan_stop(struct ieee80211_local *local)
struct ieee80211_sub_if_data *sched_scan_sdata;
int ret = -ENOENT;
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
- if (!local->ops->sched_scan_stop) {
- ret = -ENOTSUPP;
- goto out;
- }
+ if (!local->ops->sched_scan_stop)
+ return -ENOTSUPP;
/* We don't want to restart sched scan anymore. */
RCU_INIT_POINTER(local->sched_scan_req, NULL);
sched_scan_sdata = rcu_dereference_protected(local->sched_scan_sdata,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (sched_scan_sdata) {
ret = drv_sched_scan_stop(local, sched_scan_sdata);
if (!ret)
RCU_INIT_POINTER(local->sched_scan_sdata, NULL);
}
-out:
- mutex_unlock(&local->mtx);
return ret;
}
@@ -1413,24 +1344,21 @@ EXPORT_SYMBOL(ieee80211_sched_scan_results);
void ieee80211_sched_scan_end(struct ieee80211_local *local)
{
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
- if (!rcu_access_pointer(local->sched_scan_sdata)) {
- mutex_unlock(&local->mtx);
+ if (!rcu_access_pointer(local->sched_scan_sdata))
return;
- }
RCU_INIT_POINTER(local->sched_scan_sdata, NULL);
/* If sched scan was aborted by the driver. */
RCU_INIT_POINTER(local->sched_scan_req, NULL);
- mutex_unlock(&local->mtx);
-
- cfg80211_sched_scan_stopped(local->hw.wiphy, 0);
+ cfg80211_sched_scan_stopped_locked(local->hw.wiphy, 0);
}
-void ieee80211_sched_scan_stopped_work(struct work_struct *work)
+void ieee80211_sched_scan_stopped_work(struct wiphy *wiphy,
+ struct wiphy_work *work)
{
struct ieee80211_local *local =
container_of(work, struct ieee80211_local,
@@ -1453,6 +1381,6 @@ void ieee80211_sched_scan_stopped(struct ieee80211_hw *hw)
if (local->in_reconfig)
return;
- schedule_work(&local->sched_scan_stopped_work);
+ wiphy_work_queue(hw->wiphy, &local->sched_scan_stopped_work);
}
EXPORT_SYMBOL(ieee80211_sched_scan_stopped);
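
scan.c also shows the second recurring conversion: ordinary (delayed) work items become wiphy works, which cfg80211 runs with the wiphy mutex already held. The callback gains a struct wiphy * argument, queueing and cancelling go through wiphy_delayed_work_queue()/wiphy_delayed_work_cancel(), and the body trades mutex_lock(&local->mtx) for lockdep_assert_wiphy(). A condensed sketch of the pattern outside mac80211; my_state/my_work_fn are placeholder names, and the wiphy_delayed_work_init() initializer is assumed to mirror the wiphy_work_init() call visible in the sta_info.c hunks below:

	struct my_state {
		struct wiphy *wiphy;
		struct wiphy_delayed_work dwork;
	};

	static void my_work_fn(struct wiphy *wiphy, struct wiphy_work *work)
	{
		struct my_state *st = container_of(work, struct my_state,
						   dwork.work);

		/* cfg80211 invokes this with wiphy->mtx held */
		lockdep_assert_wiphy(wiphy);

		/* ... do the deferred work, requeue if needed ... */
		wiphy_delayed_work_queue(wiphy, &st->dwork, HZ);
	}

	static void my_start(struct my_state *st)
	{
		wiphy_delayed_work_init(&st->dwork, my_work_fn);
		wiphy_delayed_work_queue(st->wiphy, &st->dwork, 0);
	}

	static void my_stop(struct my_state *st)
	{
		/* cancellation runs under the wiphy mutex as well, so it is
		 * safe against a callback that is about to execute
		 */
		lockdep_assert_wiphy(st->wiphy);
		wiphy_delayed_work_cancel(st->wiphy, &st->dwork);
	}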
diff --git a/net/mac80211/spectmgmt.c b/net/mac80211/spectmgmt.c
index 871cdac2d0f4..55959b0b24c5 100644
--- a/net/mac80211/spectmgmt.c
+++ b/net/mac80211/spectmgmt.c
@@ -9,7 +9,7 @@
* Copyright 2007, Michael Wu <flamingice@sourmilk.net>
* Copyright 2007-2008, Intel Corporation
* Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
- * Copyright (C) 2018, 2020, 2022 Intel Corporation
+ * Copyright (C) 2018, 2020, 2022-2023 Intel Corporation
*/
#include <linux/ieee80211.h>
@@ -33,12 +33,14 @@ int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata,
struct cfg80211_chan_def new_vht_chandef = {};
const struct ieee80211_sec_chan_offs_ie *sec_chan_offs;
const struct ieee80211_wide_bw_chansw_ie *wide_bw_chansw_ie;
+ const struct ieee80211_bandwidth_indication *bwi;
int secondary_channel_offset = -1;
memset(csa_ie, 0, sizeof(*csa_ie));
sec_chan_offs = elems->sec_chan_offs;
wide_bw_chansw_ie = elems->wide_bw_chansw_ie;
+ bwi = elems->bandwidth_indication;
if (conn_flags & (IEEE80211_CONN_DISABLE_HT |
IEEE80211_CONN_DISABLE_40MHZ)) {
@@ -132,7 +134,14 @@ int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata,
break;
}
- if (wide_bw_chansw_ie) {
+ if (bwi) {
+ /* start with the CSA one */
+ new_vht_chandef = csa_ie->chandef;
+ /* and update the width accordingly */
+ /* FIXME: support 160/320 */
+ ieee80211_chandef_eht_oper(&bwi->info, true, true,
+ &new_vht_chandef);
+ } else if (wide_bw_chansw_ie) {
u8 new_seg1 = wide_bw_chansw_ie->new_center_freq_seg1;
struct ieee80211_vht_operation vht_oper = {
.chan_width =
diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
index 7751f8ba960e..0ba613dd1cc4 100644
--- a/net/mac80211/sta_info.c
+++ b/net/mac80211/sta_info.c
@@ -88,7 +88,6 @@ static const struct rhashtable_params link_sta_rht_params = {
.max_size = CONFIG_MAC80211_STA_HASH_MAX_SIZE,
};
-/* Caller must hold local->sta_mtx */
static int sta_info_hash_del(struct ieee80211_local *local,
struct sta_info *sta)
{
@@ -99,19 +98,36 @@ static int sta_info_hash_del(struct ieee80211_local *local,
static int link_sta_info_hash_add(struct ieee80211_local *local,
struct link_sta_info *link_sta)
{
- lockdep_assert_held(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
return rhltable_insert(&local->link_sta_hash,
- &link_sta->link_hash_node,
- link_sta_rht_params);
+ &link_sta->link_hash_node, link_sta_rht_params);
}
static int link_sta_info_hash_del(struct ieee80211_local *local,
struct link_sta_info *link_sta)
{
- lockdep_assert_held(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
return rhltable_remove(&local->link_sta_hash,
- &link_sta->link_hash_node,
- link_sta_rht_params);
+ &link_sta->link_hash_node, link_sta_rht_params);
+}
+
+void ieee80211_purge_sta_txqs(struct sta_info *sta)
+{
+ struct ieee80211_local *local = sta->sdata->local;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(sta->sta.txq); i++) {
+ struct txq_info *txqi;
+
+ if (!sta->sta.txq[i])
+ continue;
+
+ txqi = to_txq_info(sta->sta.txq[i]);
+
+ ieee80211_txq_purge(local, txqi);
+ }
}
static void __cleanup_single_sta(struct sta_info *sta)
@@ -140,16 +156,7 @@ static void __cleanup_single_sta(struct sta_info *sta)
atomic_dec(&ps->num_sta_ps);
}
- for (i = 0; i < ARRAY_SIZE(sta->sta.txq); i++) {
- struct txq_info *txqi;
-
- if (!sta->sta.txq[i])
- continue;
-
- txqi = to_txq_info(sta->sta.txq[i]);
-
- ieee80211_txq_purge(local, txqi);
- }
+ ieee80211_purge_sta_txqs(sta);
for (ac = 0; ac < IEEE80211_NUM_ACS; ac++) {
local->total_ps_buffered -= skb_queue_len(&sta->ps_tx_buf[ac]);
@@ -331,7 +338,7 @@ struct sta_info *sta_info_get_by_idx(struct ieee80211_sub_if_data *sdata,
int i = 0;
list_for_each_entry_rcu(sta, &local->sta_list, list,
- lockdep_is_held(&local->sta_mtx)) {
+ lockdep_is_held(&local->hw.wiphy->mtx)) {
if (sdata != sta->sdata)
continue;
if (i < idx) {
@@ -355,10 +362,9 @@ static void sta_remove_link(struct sta_info *sta, unsigned int link_id,
struct sta_link_alloc *alloc = NULL;
struct link_sta_info *link_sta;
- link_sta = rcu_access_pointer(sta->link[link_id]);
- if (link_sta != &sta->deflink)
- lockdep_assert_held(&sta->local->sta_mtx);
+ lockdep_assert_wiphy(sta->local->hw.wiphy);
+ link_sta = rcu_access_pointer(sta->link[link_id]);
if (WARN_ON(!link_sta))
return;
@@ -437,7 +443,6 @@ void sta_info_free(struct ieee80211_local *local, struct sta_info *sta)
kfree(sta);
}
-/* Caller must hold local->sta_mtx */
static int sta_info_hash_add(struct ieee80211_local *local,
struct sta_info *sta)
{
@@ -556,8 +561,7 @@ __sta_info_alloc(struct ieee80211_sub_if_data *sdata,
spin_lock_init(&sta->lock);
spin_lock_init(&sta->ps_lock);
INIT_WORK(&sta->drv_deliver_wk, sta_deliver_ps_frames);
- INIT_WORK(&sta->ampdu_mlme.work, ieee80211_ba_session_work);
- mutex_init(&sta->ampdu_mlme.mtx);
+ wiphy_work_init(&sta->ampdu_mlme.work, ieee80211_ba_session_work);
#ifdef CONFIG_MAC80211_MESH
if (ieee80211_vif_is_mesh(&sdata->vif)) {
sta->mesh = kzalloc(sizeof(*sta->mesh), gfp);
@@ -717,6 +721,8 @@ static int sta_info_insert_check(struct sta_info *sta)
{
struct ieee80211_sub_if_data *sdata = sta->sdata;
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
/*
* Can't be a WARN_ON because it can be triggered through a race:
* something inserts a STA (on one CPU) without holding the RTNL
@@ -734,7 +740,6 @@ static int sta_info_insert_check(struct sta_info *sta)
* for correctness.
*/
rcu_read_lock();
- lockdep_assert_held(&sdata->local->sta_mtx);
if (ieee80211_hw_check(&sdata->local->hw, NEEDS_UNIQUE_STA_ADDR) &&
ieee80211_find_sta_by_ifaddr(&sdata->local->hw, sta->addr, NULL)) {
rcu_read_unlock();
@@ -808,11 +813,6 @@ ieee80211_recalc_p2p_go_ps_allowed(struct ieee80211_sub_if_data *sdata)
}
}
-/*
- * should be called with sta_mtx locked
- * this function replaces the mutex lock
- * with a RCU lock
- */
static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU)
{
struct ieee80211_local *local = sta->local;
@@ -820,7 +820,7 @@ static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU)
struct station_info *sinfo = NULL;
int err = 0;
- lockdep_assert_held(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/* check if STA exists already */
if (sta_info_get_bss(sdata, sta->sta.addr)) {
@@ -884,7 +884,7 @@ static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU)
struct link_sta_info *link_sta;
link_sta = rcu_dereference_protected(sta->link[i],
- lockdep_is_held(&local->sta_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (!link_sta)
continue;
@@ -906,7 +906,6 @@ static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU)
/* move reference to rcu-protected */
rcu_read_lock();
- mutex_unlock(&local->sta_mtx);
if (ieee80211_vif_is_mesh(&sdata->vif))
mesh_accept_plinks_update(sdata);
@@ -922,7 +921,6 @@ static int sta_info_insert_finish(struct sta_info *sta) __acquires(RCU)
synchronize_net();
out_cleanup:
cleanup_single_sta(sta);
- mutex_unlock(&local->sta_mtx);
kfree(sinfo);
rcu_read_lock();
return err;
@@ -934,13 +932,11 @@ int sta_info_insert_rcu(struct sta_info *sta) __acquires(RCU)
int err;
might_sleep();
-
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
err = sta_info_insert_check(sta);
if (err) {
sta_info_free(local, sta);
- mutex_unlock(&local->sta_mtx);
rcu_read_lock();
return err;
}
@@ -1219,7 +1215,7 @@ static int __must_check __sta_info_destroy_part1(struct sta_info *sta)
local = sta->local;
sdata = sta->sdata;
- lockdep_assert_held(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/*
* Before removing the station from the driver and
@@ -1244,7 +1240,7 @@ static int __must_check __sta_info_destroy_part1(struct sta_info *sta)
continue;
link_sta = rcu_dereference_protected(sta->link[i],
- lockdep_is_held(&local->sta_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
link_sta_info_hash_del(local, link_sta);
}
@@ -1279,6 +1275,8 @@ static int _sta_info_move_state(struct sta_info *sta,
enum ieee80211_sta_state new_state,
bool recalc)
{
+ struct ieee80211_local *local = sta->local;
+
might_sleep();
if (sta->sta_state == new_state)
@@ -1354,6 +1352,24 @@ static int _sta_info_move_state(struct sta_info *sta,
} else if (sta->sta_state == IEEE80211_STA_AUTHORIZED) {
ieee80211_vif_dec_num_mcast(sta->sdata);
clear_bit(WLAN_STA_AUTHORIZED, &sta->_flags);
+
+ /*
+ * If we have encryption offload, flush (station) queues
+ * (after ensuring concurrent TX completed) so we won't
+ * transmit anything later unencrypted if/when keys are
+ * also removed, which might otherwise happen depending
+ * on how the hardware offload works.
+ */
+ if (local->ops->set_key) {
+ synchronize_net();
+ if (local->ops->flush_sta)
+ drv_flush_sta(local, sta->sdata, sta);
+ else
+ ieee80211_flush_queues(local,
+ sta->sdata,
+ false);
+ }
+
ieee80211_clear_fast_xmit(sta);
ieee80211_clear_fast_rx(sta);
}
@@ -1397,26 +1413,28 @@ static void __sta_info_destroy_part2(struct sta_info *sta, bool recalc)
* after _part1 and before _part2!
*/
+ /*
+ * There's a potential race in _part1 where we set WLAN_STA_BLOCK_BA
+ * but someone might have just gotten past a check, and not yet into
+ * queuing the work/creating the data/etc.
+ *
+ * Do another round of destruction so that the worker is certainly
+ * canceled before we later free the station.
+ *
+ * Since this is after synchronize_rcu()/synchronize_net() we're now
+ * certain that nobody can actually hold a reference to the STA and
+ * be calling e.g. ieee80211_start_tx_ba_session().
+ */
+ ieee80211_sta_tear_down_BA_sessions(sta, AGG_STOP_DESTROY_STA);
+
might_sleep();
- lockdep_assert_held(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (sta->sta_state == IEEE80211_STA_AUTHORIZED) {
ret = _sta_info_move_state(sta, IEEE80211_STA_ASSOC, recalc);
WARN_ON_ONCE(ret);
}
- /* Flush queues before removing keys, as that might remove them
- * from hardware, and then depending on the offload method, any
- * frames sitting on hardware queues might be sent out without
- * any encryption at all.
- */
- if (local->ops->set_key) {
- if (local->ops->flush_sta)
- drv_flush_sta(local, sta->sdata, sta);
- else
- ieee80211_flush_queues(local, sta->sdata, false);
- }
-
/* now keys can no longer be reached */
ieee80211_free_sta_keys(local, sta);
@@ -1474,28 +1492,22 @@ int __must_check __sta_info_destroy(struct sta_info *sta)
int sta_info_destroy_addr(struct ieee80211_sub_if_data *sdata, const u8 *addr)
{
struct sta_info *sta;
- int ret;
- mutex_lock(&sdata->local->sta_mtx);
- sta = sta_info_get(sdata, addr);
- ret = __sta_info_destroy(sta);
- mutex_unlock(&sdata->local->sta_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
- return ret;
+ sta = sta_info_get(sdata, addr);
+ return __sta_info_destroy(sta);
}
int sta_info_destroy_addr_bss(struct ieee80211_sub_if_data *sdata,
const u8 *addr)
{
struct sta_info *sta;
- int ret;
- mutex_lock(&sdata->local->sta_mtx);
- sta = sta_info_get_bss(sdata, addr);
- ret = __sta_info_destroy(sta);
- mutex_unlock(&sdata->local->sta_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
- return ret;
+ sta = sta_info_get_bss(sdata, addr);
+ return __sta_info_destroy(sta);
}
static void sta_info_cleanup(struct timer_list *t)
@@ -1535,7 +1547,6 @@ int sta_info_init(struct ieee80211_local *local)
}
spin_lock_init(&local->tim_lock);
- mutex_init(&local->sta_mtx);
INIT_LIST_HEAD(&local->sta_list);
timer_setup(&local->sta_cleanup, sta_info_cleanup, 0);
@@ -1558,11 +1569,11 @@ int __sta_info_flush(struct ieee80211_sub_if_data *sdata, bool vlans)
int ret = 0;
might_sleep();
+ lockdep_assert_wiphy(local->hw.wiphy);
WARN_ON(vlans && sdata->vif.type != NL80211_IFTYPE_AP);
WARN_ON(vlans && !sdata->bss);
- mutex_lock(&local->sta_mtx);
list_for_each_entry_safe(sta, tmp, &local->sta_list, list) {
if (sdata == sta->sdata ||
(vlans && sdata->bss == sta->sdata->bss)) {
@@ -1586,7 +1597,6 @@ int __sta_info_flush(struct ieee80211_sub_if_data *sdata, bool vlans)
if (!support_p2p_ps)
ieee80211_recalc_p2p_go_ps_allowed(sdata);
}
- mutex_unlock(&local->sta_mtx);
return ret;
}
@@ -1597,7 +1607,7 @@ void ieee80211_sta_expire(struct ieee80211_sub_if_data *sdata,
struct ieee80211_local *local = sdata->local;
struct sta_info *sta, *tmp;
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry_safe(sta, tmp, &local->sta_list, list) {
unsigned long last_active = ieee80211_sta_last_active(sta);
@@ -1616,8 +1626,6 @@ void ieee80211_sta_expire(struct ieee80211_sub_if_data *sdata,
WARN_ON(__sta_info_destroy(sta));
}
}
-
- mutex_unlock(&local->sta_mtx);
}
struct ieee80211_sta *ieee80211_find_sta_by_ifaddr(struct ieee80211_hw *hw,
@@ -2711,7 +2719,8 @@ void sta_set_sinfo(struct sta_info *sta, struct station_info *sinfo,
}
if (!(sinfo->filled & BIT_ULL(NL80211_STA_INFO_TX_BITRATE)) &&
- !sta->sta.valid_links) {
+ !sta->sta.valid_links &&
+ ieee80211_rate_valid(&sta->deflink.tx_stats.last_rate)) {
sta_set_rate_info_tx(sta, &sta->deflink.tx_stats.last_rate,
&sinfo->txrate);
sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
@@ -2872,7 +2881,9 @@ int ieee80211_sta_allocate_link(struct sta_info *sta, unsigned int link_id)
struct sta_link_alloc *alloc;
int ret;
- lockdep_assert_held(&sdata->local->sta_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
+ WARN_ON(!test_sta_flag(sta, WLAN_STA_INSERTED));
/* must represent an MLD from the start */
if (WARN_ON(!sta->sta.valid_links))
@@ -2901,7 +2912,9 @@ int ieee80211_sta_allocate_link(struct sta_info *sta, unsigned int link_id)
void ieee80211_sta_free_link(struct sta_info *sta, unsigned int link_id)
{
- lockdep_assert_held(&sta->sdata->local->sta_mtx);
+ lockdep_assert_wiphy(sta->sdata->local->hw.wiphy);
+
+ WARN_ON(!test_sta_flag(sta, WLAN_STA_INSERTED));
sta_remove_link(sta, link_id, false);
}
@@ -2915,7 +2928,7 @@ int ieee80211_sta_activate_link(struct sta_info *sta, unsigned int link_id)
int ret;
link_sta = rcu_dereference_protected(sta->link[link_id],
- lockdep_is_held(&sdata->local->sta_mtx));
+ lockdep_is_held(&sdata->local->hw.wiphy->mtx));
if (WARN_ON(old_links == new_links || !link_sta))
return -EINVAL;
@@ -2930,7 +2943,7 @@ int ieee80211_sta_activate_link(struct sta_info *sta, unsigned int link_id)
sta->sta.valid_links = new_links;
- if (!test_sta_flag(sta, WLAN_STA_INSERTED))
+ if (WARN_ON(!test_sta_flag(sta, WLAN_STA_INSERTED)))
goto hash;
ieee80211_recalc_min_chandef(sdata, link_id);
@@ -2959,11 +2972,11 @@ void ieee80211_sta_remove_link(struct sta_info *sta, unsigned int link_id)
struct ieee80211_sub_if_data *sdata = sta->sdata;
u16 old_links = sta->sta.valid_links;
- lockdep_assert_held(&sdata->local->sta_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
sta->sta.valid_links &= ~BIT(link_id);
- if (test_sta_flag(sta, WLAN_STA_INSERTED))
+ if (!WARN_ON(!test_sta_flag(sta, WLAN_STA_INSERTED)))
drv_change_sta_links(sdata->local, sdata, &sta->sta,
old_links, sta->sta.valid_links);
@@ -2990,7 +3003,7 @@ void ieee80211_sta_set_max_amsdu_subframes(struct sta_info *sta,
WLAN_EXT_CAPA9_MAX_MSDU_IN_AMSDU_MSB) << 1;
if (val)
- sta->sta.max_amsdu_subframes = 4 << val;
+ sta->sta.max_amsdu_subframes = 4 << (4 - val);
}
#ifdef CONFIG_LOCKDEP
@@ -2998,7 +3011,7 @@ bool lockdep_sta_mutex_held(struct ieee80211_sta *pubsta)
{
struct sta_info *sta = container_of(pubsta, struct sta_info, sta);
- return lockdep_is_held(&sta->local->sta_mtx);
+ return lockdep_is_held(&sta->local->hw.wiphy->mtx);
}
EXPORT_SYMBOL(lockdep_sta_mutex_held);
#endif
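
One small behavioural fix is easy to miss amid the locking changes in sta_info.c: the A-MSDU subframe limit taken from the peer's extended capabilities was computed as 4 << val, which inverts the field's encoding. With the corrected 4 << (4 - val) the two-bit value maps to the limit the way it is meant to (my reading of the encoding; val == 0 means "no limit" and is already filtered out by the surrounding if (val) check):

	/*
	 *	val == 1  ->  4 << 3 == 32 subframes
	 *	val == 2  ->  4 << 2 == 16 subframes
	 *	val == 3  ->  4 << 1 ==  8 subframes
	 *
	 * the old 4 << val gave 8/16/32, i.e. the mapping reversed
	 */
	if (val)
		sta->sta.max_amsdu_subframes = 4 << (4 - val);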
diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
index 195b563132d6..7acf2223e47a 100644
--- a/net/mac80211/sta_info.h
+++ b/net/mac80211/sta_info.h
@@ -3,7 +3,7 @@
* Copyright 2002-2005, Devicescape Software, Inc.
* Copyright 2013-2014 Intel Mobile Communications GmbH
* Copyright(c) 2015-2017 Intel Deutschland GmbH
- * Copyright(c) 2020-2022 Intel Corporation
+ * Copyright(c) 2020-2023 Intel Corporation
*/
#ifndef STA_INFO_H
@@ -259,9 +259,6 @@ struct tid_ampdu_rx {
/**
* struct sta_ampdu_mlme - STA aggregation information.
*
- * @mtx: mutex to protect all TX data (except non-NULL assignments
- * to tid_tx[idx], which are protected by the sta spinlock)
- * tid_start_tx is also protected by sta->lock.
* @tid_rx: aggregation info for Rx per TID -- RCU protected
* @tid_rx_token: dialog tokens for valid aggregation sessions
* @tid_rx_timer_expired: bitmap indicating on which TIDs the
@@ -275,13 +272,13 @@ struct tid_ampdu_rx {
* unexpected aggregation related frames outside a session
* @work: work struct for starting/stopping aggregation
* @tid_tx: aggregation info for Tx per TID
- * @tid_start_tx: sessions where start was requested
+ * @tid_start_tx: sessions where start was requested, not just protected
+ * by wiphy mutex but also sta->lock
* @last_addba_req_time: timestamp of the last addBA request.
* @addba_req_num: number of times addBA request has been sent.
* @dialog_token_allocator: dialog token enumerator for each new session;
*/
struct sta_ampdu_mlme {
- struct mutex mtx;
/* rx */
struct tid_ampdu_rx __rcu *tid_rx[IEEE80211_NUM_TIDS];
u8 tid_rx_token[IEEE80211_NUM_TIDS];
@@ -291,7 +288,7 @@ struct sta_ampdu_mlme {
unsigned long agg_session_valid[BITS_TO_LONGS(IEEE80211_NUM_TIDS)];
unsigned long unexpected_agg[BITS_TO_LONGS(IEEE80211_NUM_TIDS)];
/* tx */
- struct work_struct work;
+ struct wiphy_work work;
struct tid_ampdu_tx __rcu *tid_tx[IEEE80211_NUM_TIDS];
struct tid_ampdu_tx *tid_start_tx[IEEE80211_NUM_TIDS];
unsigned long last_addba_req_time[IEEE80211_NUM_TIDS];
@@ -618,8 +615,6 @@ struct link_sta_info {
* @sta: station information we share with the driver
* @sta_state: duplicates information about station state (for debug)
* @rcu_head: RCU head used for freeing this station struct
- * @cur_max_bandwidth: maximum bandwidth to use for TX to the station,
- * taken from HT/VHT capabilities or VHT operating mode notification
* @cparams: CoDel parameters for this station.
* @reserved_tid: reserved TID (if any, otherwise IEEE80211_TID_UNRESERVED)
* @amsdu_mesh_control: track the mesh A-MSDU format used by the peer:
@@ -796,13 +791,10 @@ static inline void sta_info_pre_move_state(struct sta_info *sta,
void ieee80211_assign_tid_tx(struct sta_info *sta, int tid,
struct tid_ampdu_tx *tid_tx);
-static inline struct tid_ampdu_tx *
-rcu_dereference_protected_tid_tx(struct sta_info *sta, int tid)
-{
- return rcu_dereference_protected(sta->ampdu_mlme.tid_tx[tid],
- lockdep_is_held(&sta->lock) ||
- lockdep_is_held(&sta->ampdu_mlme.mtx));
-}
+#define rcu_dereference_protected_tid_tx(sta, tid) \
+ rcu_dereference_protected((sta)->ampdu_mlme.tid_tx[tid], \
+ lockdep_is_held(&(sta)->lock) || \
+ lockdep_is_held(&(sta)->local->hw.wiphy->mtx));
/* Maximum number of frames to buffer per power saving station per AC */
#define STA_MAX_TX_BUFFER 64
@@ -827,7 +819,7 @@ struct sta_info *sta_info_get(struct ieee80211_sub_if_data *sdata,
struct sta_info *sta_info_get_bss(struct ieee80211_sub_if_data *sdata,
const u8 *addr);
-/* user must hold sta_mtx or be in RCU critical section */
+/* user must hold wiphy mutex or be in RCU critical section */
struct sta_info *sta_info_get_by_addrs(struct ieee80211_local *local,
const u8 *sta_addr, const u8 *vif_addr);
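
With ampdu_mlme.mtx gone, the tid_tx accessor becomes a macro whose lockdep condition accepts either the sta spinlock or the wiphy mutex; call sites do not change. A minimal sketch of the two legal contexts (sta and tid are placeholders, not taken from the patch):

	struct tid_ampdu_tx *tid_tx;

	/* aggregation code running under the wiphy mutex */
	lockdep_assert_wiphy(sta->local->hw.wiphy);
	tid_tx = rcu_dereference_protected_tid_tx(sta, tid);

	/* or a short section holding only the sta spinlock */
	spin_lock_bh(&sta->lock);
	tid_tx = rcu_dereference_protected_tid_tx(sta, tid);
	spin_unlock_bh(&sta->lock);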
diff --git a/net/mac80211/status.c b/net/mac80211/status.c
index 44d83da60aee..1708b33cdc5e 100644
--- a/net/mac80211/status.c
+++ b/net/mac80211/status.c
@@ -184,8 +184,6 @@ static void ieee80211_check_pending_bar(struct sta_info *sta, u8 *addr, u8 tid)
static void ieee80211_frame_acked(struct sta_info *sta, struct sk_buff *skb)
{
struct ieee80211_mgmt *mgmt = (void *) skb->data;
- struct ieee80211_local *local = sta->local;
- struct ieee80211_sub_if_data *sdata = sta->sdata;
if (ieee80211_is_data_qos(mgmt->frame_control)) {
struct ieee80211_hdr *hdr = (void *) skb->data;
@@ -194,39 +192,6 @@ static void ieee80211_frame_acked(struct sta_info *sta, struct sk_buff *skb)
ieee80211_check_pending_bar(sta, hdr->addr1, tid);
}
-
- if (ieee80211_is_action(mgmt->frame_control) &&
- !ieee80211_has_protected(mgmt->frame_control) &&
- mgmt->u.action.category == WLAN_CATEGORY_HT &&
- mgmt->u.action.u.ht_smps.action == WLAN_HT_ACTION_SMPS &&
- ieee80211_sdata_running(sdata)) {
- enum ieee80211_smps_mode smps_mode;
-
- switch (mgmt->u.action.u.ht_smps.smps_control) {
- case WLAN_HT_SMPS_CONTROL_DYNAMIC:
- smps_mode = IEEE80211_SMPS_DYNAMIC;
- break;
- case WLAN_HT_SMPS_CONTROL_STATIC:
- smps_mode = IEEE80211_SMPS_STATIC;
- break;
- case WLAN_HT_SMPS_CONTROL_DISABLED:
- default: /* shouldn't happen since we don't send that */
- smps_mode = IEEE80211_SMPS_OFF;
- break;
- }
-
- if (sdata->vif.type == NL80211_IFTYPE_STATION) {
- /*
- * This update looks racy, but isn't -- if we come
- * here we've definitely got a station that we're
- * talking to, and on a managed interface that can
- * only be the AP. And the only other place updating
- * this variable in managed mode is before association.
- */
- sdata->deflink.smps_mode = smps_mode;
- ieee80211_queue_work(&local->hw, &sdata->recalc_smps);
- }
- }
}
static void ieee80211_set_bar_pending(struct sta_info *sta, u8 tid, u16 ssn)
@@ -291,7 +256,7 @@ static int ieee80211_tx_radiotap_len(struct ieee80211_tx_info *info,
static void
ieee80211_add_tx_radiotap_header(struct ieee80211_local *local,
struct sk_buff *skb, int retry_count,
- int rtap_len, int shift,
+ int rtap_len,
struct ieee80211_tx_status *status)
{
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
@@ -342,7 +307,7 @@ ieee80211_add_tx_radiotap_header(struct ieee80211_local *local,
if (legacy_rate) {
rthdr->it_present |= cpu_to_le32(BIT(IEEE80211_RADIOTAP_RATE));
- *pos = DIV_ROUND_UP(legacy_rate, 5 * (1 << shift));
+ *pos = DIV_ROUND_UP(legacy_rate, 5);
/* padding for tx flags */
pos += 2;
}
@@ -633,7 +598,7 @@ static void ieee80211_report_ack_skb(struct ieee80211_local *local,
unsigned long flags;
spin_lock_irqsave(&local->ack_status_lock, flags);
- skb = idr_remove(&local->ack_status_frames, info->ack_frame_id);
+ skb = idr_remove(&local->ack_status_frames, info->status_data);
spin_unlock_irqrestore(&local->ack_status_lock, flags);
if (!skb)
@@ -695,6 +660,42 @@ static void ieee80211_report_ack_skb(struct ieee80211_local *local,
}
}
+static void ieee80211_handle_smps_status(struct ieee80211_sub_if_data *sdata,
+ bool acked, u16 status_data)
+{
+ u16 sub_data = u16_get_bits(status_data, IEEE80211_STATUS_SUBDATA_MASK);
+ enum ieee80211_smps_mode smps_mode = sub_data & 3;
+ int link_id = (sub_data >> 2);
+ struct ieee80211_link_data *link;
+
+ if (!sdata || !ieee80211_sdata_running(sdata))
+ return;
+
+ if (!acked)
+ return;
+
+ if (sdata->vif.type != NL80211_IFTYPE_STATION)
+ return;
+
+ if (WARN(link_id >= ARRAY_SIZE(sdata->link),
+ "bad SMPS status link: %d\n", link_id))
+ return;
+
+ link = rcu_dereference(sdata->link[link_id]);
+ if (!link)
+ return;
+
+ /*
+ * This update looks racy, but isn't, the only other place
+ * updating this variable is in managed mode before assoc,
+ * and we have to be associated to have a status from the
+ * action frame TX, since we cannot send it while we're not
+ * associated yet.
+ */
+ link->smps_mode = smps_mode;
+ wiphy_work_queue(sdata->local->hw.wiphy, &link->u.mgd.recalc_smps);
+}
+
static void ieee80211_report_used_skb(struct ieee80211_local *local,
struct sk_buff *skb, bool dropped,
ktime_t ack_hwtstamp)
@@ -730,12 +731,9 @@ static void ieee80211_report_used_skb(struct ieee80211_local *local,
if (!sdata) {
skb->dev = NULL;
} else if (!dropped) {
- unsigned int hdr_size =
- ieee80211_hdrlen(hdr->frame_control);
-
/* Check to see if packet is a TDLS teardown packet */
if (ieee80211_is_data(hdr->frame_control) &&
- (ieee80211_get_tdls_action(skb, hdr_size) ==
+ (ieee80211_get_tdls_action(skb) ==
WLAN_TDLS_TEARDOWN)) {
ieee80211_tdls_td_tx_handle(local, sdata, skb,
info->flags);
@@ -759,9 +757,24 @@ static void ieee80211_report_used_skb(struct ieee80211_local *local,
}
rcu_read_unlock();
- } else if (info->ack_frame_id) {
+ } else if (info->status_data_idr) {
ieee80211_report_ack_skb(local, skb, acked, dropped,
ack_hwtstamp);
+ } else if (info->status_data) {
+ struct ieee80211_sub_if_data *sdata;
+
+ rcu_read_lock();
+
+ sdata = ieee80211_sdata_from_skb(local, skb);
+
+ switch (u16_get_bits(info->status_data,
+ IEEE80211_STATUS_TYPE_MASK)) {
+ case IEEE80211_STATUS_TYPE_SMPS:
+ ieee80211_handle_smps_status(sdata, acked,
+ info->status_data);
+ break;
+ }
+ rcu_read_unlock();
}
if (!dropped && skb->destructor) {
@@ -862,7 +875,7 @@ static int ieee80211_tx_get_rates(struct ieee80211_hw *hw,
}
void ieee80211_tx_monitor(struct ieee80211_local *local, struct sk_buff *skb,
- int retry_count, int shift, bool send_to_cooked,
+ int retry_count, bool send_to_cooked,
struct ieee80211_tx_status *status)
{
struct sk_buff *skb2;
@@ -879,7 +892,7 @@ void ieee80211_tx_monitor(struct ieee80211_local *local, struct sk_buff *skb,
return;
}
ieee80211_add_tx_radiotap_header(local, skb, retry_count,
- rtap_len, shift, status);
+ rtap_len, status);
/* XXX: is this sufficient for BPF? */
skb_reset_mac_header(skb);
@@ -932,14 +945,12 @@ static void __ieee80211_tx_status(struct ieee80211_hw *hw,
bool acked;
bool noack_success;
struct ieee80211_bar *bar;
- int shift = 0;
int tid = IEEE80211_NUM_TIDS;
fc = hdr->frame_control;
if (status->sta) {
sta = container_of(status->sta, struct sta_info, sta);
- shift = ieee80211_vif_get_shift(&sta->sdata->vif);
if (info->flags & IEEE80211_TX_STATUS_EOSP)
clear_sta_flag(sta, WLAN_STA_SP);
@@ -1077,11 +1088,11 @@ static void __ieee80211_tx_status(struct ieee80211_hw *hw,
}
/* send to monitor interfaces */
- ieee80211_tx_monitor(local, skb, retry_count, shift,
+ ieee80211_tx_monitor(local, skb, retry_count,
send_to_cooked, status);
}
-void ieee80211_tx_status(struct ieee80211_hw *hw, struct sk_buff *skb)
+void ieee80211_tx_status_skb(struct ieee80211_hw *hw, struct sk_buff *skb)
{
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *) skb->data;
struct ieee80211_local *local = hw_to_local(hw);
@@ -1100,7 +1111,7 @@ void ieee80211_tx_status(struct ieee80211_hw *hw, struct sk_buff *skb)
ieee80211_tx_status_ext(hw, &status);
rcu_read_unlock();
}
-EXPORT_SYMBOL(ieee80211_tx_status);
+EXPORT_SYMBOL(ieee80211_tx_status_skb);
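The export rename is mechanical: the skb-based TX status entry point is now ieee80211_tx_status_skb(), with unchanged arguments and semantics. A driver-side call site would change as in this illustrative (not in-tree) completion handler:

/* Illustrative driver TX-completion path; names are hypothetical. */
static void ex_driver_tx_complete(struct ieee80211_hw *hw, struct sk_buff *skb,
				  bool acked)
{
	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);

	ieee80211_tx_info_clear_status(info);
	if (acked)
		info->flags |= IEEE80211_TX_STAT_ACK;

	/* was: ieee80211_tx_status(hw, skb); */
	ieee80211_tx_status_skb(hw, skb);
}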
void ieee80211_tx_status_ext(struct ieee80211_hw *hw,
struct ieee80211_tx_status *status)
diff --git a/net/mac80211/tdls.c b/net/mac80211/tdls.c
index a4af3b7675ef..05a7dff69fe9 100644
--- a/net/mac80211/tdls.c
+++ b/net/mac80211/tdls.c
@@ -21,7 +21,7 @@
/* give usermode some time for retries in setting up the TDLS session */
#define TDLS_PEER_SETUP_TIMEOUT (15 * HZ)
-void ieee80211_tdls_peer_del_work(struct work_struct *wk)
+void ieee80211_tdls_peer_del_work(struct wiphy *wiphy, struct wiphy_work *wk)
{
struct ieee80211_sub_if_data *sdata;
struct ieee80211_local *local;
@@ -30,13 +30,13 @@ void ieee80211_tdls_peer_del_work(struct work_struct *wk)
u.mgd.tdls_peer_del_work.work);
local = sdata->local;
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!is_zero_ether_addr(sdata->u.mgd.tdls_peer)) {
tdls_dbg(sdata, "TDLS del peer %pM\n", sdata->u.mgd.tdls_peer);
sta_info_destroy_addr(sdata, sdata->u.mgd.tdls_peer);
eth_zero_addr(sdata->u.mgd.tdls_peer);
}
- mutex_unlock(&local->mtx);
}
static void ieee80211_tdls_add_ext_capab(struct ieee80211_link_data *link,
@@ -309,7 +309,7 @@ ieee80211_tdls_chandef_vht_upgrade(struct ieee80211_sub_if_data *sdata,
struct sta_info *sta)
{
/* IEEE802.11ac-2013 Table E-4 */
- u16 centers_80mhz[] = { 5210, 5290, 5530, 5610, 5690, 5775 };
+ static const u16 centers_80mhz[] = { 5210, 5290, 5530, 5610, 5690, 5775 };
struct cfg80211_chan_def uc = sta->tdls_chandef;
enum nl80211_chan_width max_width =
ieee80211_sta_cap_chan_bw(&sta->deflink);
@@ -1180,7 +1180,7 @@ ieee80211_tdls_mgmt_setup(struct wiphy *wiphy, struct net_device *dev,
return -ENOTSUPP;
}
- mutex_lock(&local->mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/* we don't support concurrent TDLS peer setups */
if (!is_zero_ether_addr(sdata->u.mgd.tdls_peer) &&
@@ -1208,7 +1208,6 @@ ieee80211_tdls_mgmt_setup(struct wiphy *wiphy, struct net_device *dev,
ieee80211_flush_queues(local, sdata, false);
memcpy(sdata->u.mgd.tdls_peer, peer, ETH_ALEN);
- mutex_unlock(&local->mtx);
/* we cannot take the mutex while preparing the setup packet */
ret = ieee80211_tdls_prep_mgmt_packet(wiphy, dev, peer,
@@ -1218,19 +1217,16 @@ ieee80211_tdls_mgmt_setup(struct wiphy *wiphy, struct net_device *dev,
extra_ies, extra_ies_len, 0,
NULL);
if (ret < 0) {
- mutex_lock(&local->mtx);
eth_zero_addr(sdata->u.mgd.tdls_peer);
- mutex_unlock(&local->mtx);
return ret;
}
- ieee80211_queue_delayed_work(&sdata->local->hw,
- &sdata->u.mgd.tdls_peer_del_work,
- TDLS_PEER_SETUP_TIMEOUT);
+ wiphy_delayed_work_queue(sdata->local->hw.wiphy,
+ &sdata->u.mgd.tdls_peer_del_work,
+ TDLS_PEER_SETUP_TIMEOUT);
return 0;
out_unlock:
- mutex_unlock(&local->mtx);
return ret;
}
@@ -1322,7 +1318,7 @@ int ieee80211_tdls_mgmt(struct wiphy *wiphy, struct net_device *dev,
* response frame. It is transmitted directly and not buffered
* by the AP.
*/
- drv_mgd_protect_tdls_discover(sdata->local, sdata);
+ drv_mgd_protect_tdls_discover(sdata->local, sdata, link_id);
fallthrough;
case WLAN_TDLS_SETUP_CONFIRM:
case WLAN_PUB_ACTION_TDLS_DISCOVER_RES:
@@ -1354,9 +1350,10 @@ static void iee80211_tdls_recalc_chanctx(struct ieee80211_sub_if_data *sdata,
enum nl80211_chan_width width;
struct ieee80211_supported_band *sband;
- mutex_lock(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
conf = rcu_dereference_protected(sdata->vif.bss_conf.chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (conf) {
width = conf->def.width;
sband = local->hw.wiphy->bands[conf->def.chan->band];
@@ -1384,7 +1381,6 @@ static void iee80211_tdls_recalc_chanctx(struct ieee80211_sub_if_data *sdata,
}
}
- mutex_unlock(&local->chanctx_mtx);
}
static int iee80211_tdls_have_ht_peers(struct ieee80211_sub_if_data *sdata)
@@ -1447,6 +1443,8 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev,
struct ieee80211_local *local = sdata->local;
int ret;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!(wiphy->flags & WIPHY_FLAG_SUPPORTS_TDLS))
return -ENOTSUPP;
@@ -1467,35 +1465,26 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev,
/* protect possible bss_conf changes and avoid concurrency in
* ieee80211_bss_info_change_notify()
*/
- sdata_lock(sdata);
- mutex_lock(&local->mtx);
tdls_dbg(sdata, "TDLS oper %d peer %pM\n", oper, peer);
switch (oper) {
case NL80211_TDLS_ENABLE_LINK:
if (sdata->vif.bss_conf.csa_active) {
tdls_dbg(sdata, "TDLS: disallow link during CSA\n");
- ret = -EBUSY;
- break;
+ return -EBUSY;
}
- mutex_lock(&local->sta_mtx);
sta = sta_info_get(sdata, peer);
- if (!sta) {
- mutex_unlock(&local->sta_mtx);
- ret = -ENOLINK;
- break;
- }
+ if (!sta)
+ return -ENOLINK;
iee80211_tdls_recalc_chanctx(sdata, sta);
iee80211_tdls_recalc_ht_protection(sdata, sta);
set_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH);
- mutex_unlock(&local->sta_mtx);
WARN_ON_ONCE(is_zero_ether_addr(sdata->u.mgd.tdls_peer) ||
!ether_addr_equal(sdata->u.mgd.tdls_peer, peer));
- ret = 0;
break;
case NL80211_TDLS_DISABLE_LINK:
/*
@@ -1514,29 +1503,26 @@ int ieee80211_tdls_oper(struct wiphy *wiphy, struct net_device *dev,
ret = sta_info_destroy_addr(sdata, peer);
- mutex_lock(&local->sta_mtx);
iee80211_tdls_recalc_ht_protection(sdata, NULL);
- mutex_unlock(&local->sta_mtx);
iee80211_tdls_recalc_chanctx(sdata, NULL);
+ if (ret)
+ return ret;
break;
default:
- ret = -ENOTSUPP;
- break;
+ return -ENOTSUPP;
}
- if (ret == 0 && ether_addr_equal(sdata->u.mgd.tdls_peer, peer)) {
- cancel_delayed_work(&sdata->u.mgd.tdls_peer_del_work);
+ if (ether_addr_equal(sdata->u.mgd.tdls_peer, peer)) {
+ wiphy_delayed_work_cancel(sdata->local->hw.wiphy,
+ &sdata->u.mgd.tdls_peer_del_work);
eth_zero_addr(sdata->u.mgd.tdls_peer);
}
- if (ret == 0)
- wiphy_work_queue(sdata->local->hw.wiphy,
- &sdata->deflink.u.mgd.request_smps_work);
+ wiphy_work_queue(sdata->local->hw.wiphy,
+ &sdata->deflink.u.mgd.request_smps_work);
- mutex_unlock(&local->mtx);
- sdata_unlock(sdata);
- return ret;
+ return 0;
}
void ieee80211_tdls_oper_request(struct ieee80211_vif *vif, const u8 *peer,
@@ -1669,11 +1655,12 @@ ieee80211_tdls_channel_switch(struct wiphy *wiphy, struct net_device *dev,
u32 ch_sw_tm_ie;
int ret;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (chandef->chan->freq_offset)
/* this may work, but is untested */
return -EOPNOTSUPP;
- mutex_lock(&local->sta_mtx);
sta = sta_info_get(sdata, addr);
if (!sta) {
tdls_dbg(sdata,
@@ -1703,7 +1690,6 @@ ieee80211_tdls_channel_switch(struct wiphy *wiphy, struct net_device *dev,
set_sta_flag(sta, WLAN_STA_TDLS_OFF_CHANNEL);
out:
- mutex_unlock(&local->sta_mtx);
dev_kfree_skb_any(skb);
return ret;
}
@@ -1717,26 +1703,24 @@ ieee80211_tdls_cancel_channel_switch(struct wiphy *wiphy,
struct ieee80211_local *local = sdata->local;
struct sta_info *sta;
- mutex_lock(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
sta = sta_info_get(sdata, addr);
if (!sta) {
tdls_dbg(sdata,
"Invalid TDLS peer %pM for channel switch cancel\n",
addr);
- goto out;
+ return;
}
if (!test_sta_flag(sta, WLAN_STA_TDLS_OFF_CHANNEL)) {
tdls_dbg(sdata, "TDLS channel switch not initiated by %pM\n",
addr);
- goto out;
+ return;
}
drv_tdls_cancel_channel_switch(local, sdata, &sta->sta);
clear_sta_flag(sta, WLAN_STA_TDLS_OFF_CHANNEL);
-
-out:
- mutex_unlock(&local->sta_mtx);
}
static struct sk_buff *
@@ -1798,6 +1782,8 @@ ieee80211_process_tdls_channel_switch_resp(struct ieee80211_sub_if_data *sdata,
struct ieee80211_tdls_ch_sw_params params = {};
int ret;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
params.action_code = WLAN_TDLS_CHANNEL_SWITCH_RESPONSE;
params.timestamp = rx_status->device_timestamp;
@@ -1807,7 +1793,6 @@ ieee80211_process_tdls_channel_switch_resp(struct ieee80211_sub_if_data *sdata,
return -EINVAL;
}
- mutex_lock(&local->sta_mtx);
sta = sta_info_get(sdata, tf->sa);
if (!sta || !test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH)) {
tdls_dbg(sdata, "TDLS chan switch from non-peer sta %pM\n",
@@ -1870,7 +1855,6 @@ call_drv:
tf->sa, params.status);
out:
- mutex_unlock(&local->sta_mtx);
dev_kfree_skb_any(params.tmpl_skb);
kfree(elems);
return ret;
@@ -1896,6 +1880,8 @@ ieee80211_process_tdls_channel_switch_req(struct ieee80211_sub_if_data *sdata,
struct ieee80211_tdls_ch_sw_params params = {};
int ret = 0;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
params.action_code = WLAN_TDLS_CHANNEL_SWITCH_REQUEST;
params.timestamp = rx_status->device_timestamp;
@@ -1984,7 +1970,6 @@ ieee80211_process_tdls_channel_switch_req(struct ieee80211_sub_if_data *sdata,
goto free;
}
- mutex_lock(&local->sta_mtx);
sta = sta_info_get(sdata, tf->sa);
if (!sta || !test_sta_flag(sta, WLAN_STA_TDLS_PEER_AUTH)) {
tdls_dbg(sdata, "TDLS chan switch from non-peer sta %pM\n",
@@ -2031,7 +2016,6 @@ ieee80211_process_tdls_channel_switch_req(struct ieee80211_sub_if_data *sdata,
tf->sa, params.chandef->chan->center_freq,
params.chandef->width);
out:
- mutex_unlock(&local->sta_mtx);
dev_kfree_skb_any(params.tmpl_skb);
free:
kfree(elems);
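The tdls.c hunks are part of the broader conversion away from the per-subsystem mutexes (local->mtx, sta_mtx, chanctx_mtx, the sdata lock): these paths now simply run under the wiphy mutex that cfg80211 already holds around the relevant ops, assert it with lockdep_assert_wiphy(), and move deferred work to wiphy_work so it, too, executes with the lock held. A minimal sketch of the resulting pattern (the ex_*() names are illustrative, not mac80211 code):

static void ex_touch_protected_state(struct ieee80211_local *local)
{
	/* callers must hold wiphy->mtx, which replaces the old mutexes */
	lockdep_assert_wiphy(local->hw.wiphy);
	/* ... modify state formerly guarded by local->mtx / sta_mtx ... */
}

static void ex_entry_from_outside_cfg80211(struct ieee80211_local *local)
{
	wiphy_lock(local->hw.wiphy);
	ex_touch_protected_state(local);
	wiphy_unlock(local->hw.wiphy);
}

static void ex_deferred_work(struct wiphy *wiphy, struct wiphy_work *work)
{
	/* wiphy_work callbacks already run with wiphy->mtx held */
	lockdep_assert_wiphy(wiphy);
}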
diff --git a/net/mac80211/tests/Makefile b/net/mac80211/tests/Makefile
new file mode 100644
index 000000000000..4814584f8a14
--- /dev/null
+++ b/net/mac80211/tests/Makefile
@@ -0,0 +1,3 @@
+mac80211-tests-y += module.o elems.o
+
+obj-$(CONFIG_MAC80211_KUNIT_TEST) += mac80211-tests.o
diff --git a/net/mac80211/tests/elems.c b/net/mac80211/tests/elems.c
new file mode 100644
index 000000000000..997d0cd27b2d
--- /dev/null
+++ b/net/mac80211/tests/elems.c
@@ -0,0 +1,101 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KUnit tests for element parsing
+ *
+ * Copyright (C) 2023 Intel Corporation
+ */
+#include <kunit/test.h>
+#include "../ieee80211_i.h"
+
+MODULE_IMPORT_NS(EXPORTED_FOR_KUNIT_TESTING);
+
+static void mle_defrag(struct kunit *test)
+{
+ struct ieee80211_elems_parse_params parse_params = {
+ .link_id = 12,
+ .from_ap = true,
+ };
+ struct ieee802_11_elems *parsed;
+ struct sk_buff *skb;
+ u8 *len_mle, *len_prof;
+ int i;
+
+ skb = alloc_skb(1024, GFP_KERNEL);
+ KUNIT_ASSERT_NOT_NULL(test, skb);
+
+ if (skb_pad(skb, skb_tailroom(skb))) {
+ KUNIT_FAIL(test, "failed to pad skb");
+ return;
+ }
+
+ /* build a multi-link element */
+ skb_put_u8(skb, WLAN_EID_EXTENSION);
+ len_mle = skb_put(skb, 1);
+ skb_put_u8(skb, WLAN_EID_EXT_EHT_MULTI_LINK);
+
+ put_unaligned_le16(IEEE80211_ML_CONTROL_TYPE_BASIC,
+ skb_put(skb, 2));
+ /* struct ieee80211_mle_basic_common_info */
+ skb_put_u8(skb, 7); /* includes len field */
+ skb_put_data(skb, "\x00\x00\x00\x00\x00\x00", ETH_ALEN); /* MLD addr */
+
+ /* with a STA profile inside */
+ skb_put_u8(skb, IEEE80211_MLE_SUBELEM_PER_STA_PROFILE);
+ len_prof = skb_put(skb, 1);
+ put_unaligned_le16(IEEE80211_MLE_STA_CONTROL_COMPLETE_PROFILE |
+ parse_params.link_id,
+ skb_put(skb, 2));
+ skb_put_u8(skb, 1); /* fake sta_info_len - includes itself */
+ /* put a bunch of useless elements into it */
+ for (i = 0; i < 20; i++) {
+ skb_put_u8(skb, WLAN_EID_SSID);
+ skb_put_u8(skb, 20);
+ skb_put(skb, 20);
+ }
+
+ /* fragment STA profile */
+ ieee80211_fragment_element(skb, len_prof,
+ IEEE80211_MLE_SUBELEM_FRAGMENT);
+ /* fragment MLE */
+ ieee80211_fragment_element(skb, len_mle, WLAN_EID_FRAGMENT);
+
+ parse_params.start = skb->data;
+ parse_params.len = skb->len;
+ parsed = ieee802_11_parse_elems_full(&parse_params);
+ /* should return ERR_PTR or valid, not NULL */
+ KUNIT_EXPECT_NOT_NULL(test, parsed);
+
+ if (IS_ERR_OR_NULL(parsed))
+ goto free_skb;
+
+ KUNIT_EXPECT_NOT_NULL(test, parsed->ml_basic_elem);
+ KUNIT_EXPECT_EQ(test,
+ parsed->ml_basic_len,
+ 2 /* control */ +
+ 7 /* common info */ +
+ 2 /* sta profile element header */ +
+ 3 /* sta profile header */ +
+ 20 * 22 /* sta profile data */ +
+ 2 /* sta profile fragment element */);
+ KUNIT_EXPECT_NOT_NULL(test, parsed->prof);
+ KUNIT_EXPECT_EQ(test,
+ parsed->sta_prof_len,
+ 3 /* sta profile header */ +
+ 20 * 22 /* sta profile data */);
+
+ kfree(parsed);
+free_skb:
+ kfree_skb(skb);
+}
+
+static struct kunit_case element_parsing_test_cases[] = {
+ KUNIT_CASE(mle_defrag),
+ {}
+};
+
+static struct kunit_suite element_parsing = {
+ .name = "mac80211-element-parsing",
+ .test_cases = element_parsing_test_cases,
+};
+
+kunit_test_suite(element_parsing);
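The test builds a basic multi-link element whose STA profile exceeds 255 bytes, fragments the profile and then the containing MLE, and checks the lengths reported by the parser (reachable from the test module via the EXPORTED_FOR_KUNIT_TESTING namespace). The expected values are plain arithmetic over the constructed buffer; a hedged restatement of that bookkeeping, not parser code:

/* Mirrors the KUNIT_EXPECT_EQ() arithmetic above. */
#define EX_STA_PROF_DATA	(20 * (2 + 20))	/* 20 SSID elems, 20-byte payloads */
#define EX_STA_PROF_HDR		3		/* control (2) + sta_info_len (1) */
#define EX_STA_PROF_LEN		(EX_STA_PROF_HDR + EX_STA_PROF_DATA)	/* 443 */

#define EX_ML_BASIC_LEN		(2 /* ML control */ + \
				 7 /* common info */ + \
				 2 /* per-STA profile subelement header */ + \
				 EX_STA_PROF_LEN + \
				 2 /* inner fragment subelement header */)	/* 456 */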
diff --git a/net/mac80211/tests/module.c b/net/mac80211/tests/module.c
new file mode 100644
index 000000000000..9d05f2943935
--- /dev/null
+++ b/net/mac80211/tests/module.c
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * This is just module boilerplate for the mac80211 kunit module.
+ *
+ * Copyright (C) 2023 Intel Corporation
+ */
+#include <linux/module.h>
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("tests for mac80211");
diff --git a/net/mac80211/trace.h b/net/mac80211/trace.h
index b8c53b4a710b..032718d5b298 100644
--- a/net/mac80211/trace.h
+++ b/net/mac80211/trace.h
@@ -2839,23 +2839,26 @@ TRACE_EVENT(api_sta_block_awake,
);
TRACE_EVENT(api_chswitch_done,
- TP_PROTO(struct ieee80211_sub_if_data *sdata, bool success),
+ TP_PROTO(struct ieee80211_sub_if_data *sdata, bool success,
+ unsigned int link_id),
- TP_ARGS(sdata, success),
+ TP_ARGS(sdata, success, link_id),
TP_STRUCT__entry(
VIF_ENTRY
__field(bool, success)
+ __field(unsigned int, link_id)
),
TP_fast_assign(
VIF_ASSIGN;
__entry->success = success;
+ __entry->link_id = link_id;
),
TP_printk(
- VIF_PR_FMT " success=%d",
- VIF_PR_ARG, __entry->success
+ VIF_PR_FMT " success=%d link_id=%d",
+ VIF_PR_ARG, __entry->success, __entry->link_id
)
);
diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
index d45d4be63dd8..ed4fdf655343 100644
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -43,7 +43,7 @@ static __le16 ieee80211_duration(struct ieee80211_tx_data *tx,
struct sk_buff *skb, int group_addr,
int next_frag_len)
{
- int rate, mrate, erp, dur, i, shift = 0;
+ int rate, mrate, erp, dur, i;
struct ieee80211_rate *txrate;
struct ieee80211_local *local = tx->local;
struct ieee80211_supported_band *sband;
@@ -58,10 +58,8 @@ static __le16 ieee80211_duration(struct ieee80211_tx_data *tx,
rcu_read_lock();
chanctx_conf = rcu_dereference(tx->sdata->vif.bss_conf.chanctx_conf);
- if (chanctx_conf) {
- shift = ieee80211_chandef_get_shift(&chanctx_conf->def);
+ if (chanctx_conf)
rate_flags = ieee80211_chandef_rate_flags(&chanctx_conf->def);
- }
rcu_read_unlock();
/* uh huh? */
@@ -143,7 +141,7 @@ static __le16 ieee80211_duration(struct ieee80211_tx_data *tx,
continue;
if (tx->sdata->vif.bss_conf.basic_rates & BIT(i))
- rate = DIV_ROUND_UP(r->bitrate, 1 << shift);
+ rate = r->bitrate;
switch (sband->band) {
case NL80211_BAND_2GHZ:
@@ -173,7 +171,7 @@ static __le16 ieee80211_duration(struct ieee80211_tx_data *tx,
if (rate == -1) {
/* No matching basic rate found; use highest suitable mandatory
* PHY rate */
- rate = DIV_ROUND_UP(mrate, 1 << shift);
+ rate = mrate;
}
/* Don't calculate ACKs for QoS Frames with NoAck Policy set */
@@ -185,8 +183,7 @@ static __le16 ieee80211_duration(struct ieee80211_tx_data *tx,
* (10 bytes + 4-byte FCS = 112 bits) plus SIFS; rounded up
* to closest integer */
dur = ieee80211_frame_duration(sband->band, 10, rate, erp,
- tx->sdata->vif.bss_conf.use_short_preamble,
- shift);
+ tx->sdata->vif.bss_conf.use_short_preamble);
if (next_frag_len) {
/* Frame is fragmented: duration increases with time needed to
@@ -195,8 +192,7 @@ static __le16 ieee80211_duration(struct ieee80211_tx_data *tx,
/* next fragment */
dur += ieee80211_frame_duration(sband->band, next_frag_len,
txrate->bitrate, erp,
- tx->sdata->vif.bss_conf.use_short_preamble,
- shift);
+ tx->sdata->vif.bss_conf.use_short_preamble);
}
return cpu_to_le16(dur);
@@ -266,8 +262,8 @@ ieee80211_tx_h_dynamic_ps(struct ieee80211_tx_data *tx)
IEEE80211_QUEUE_STOP_REASON_PS,
false);
ifmgd->flags &= ~IEEE80211_STA_NULLFUNC_ACKED;
- ieee80211_queue_work(&local->hw,
- &local->dynamic_ps_disable_work);
+ wiphy_work_queue(local->hw.wiphy,
+ &local->dynamic_ps_disable_work);
}
/* Don't restart the timer if we're not disassociated */
@@ -2167,6 +2163,11 @@ bool ieee80211_parse_tx_radiotap(struct sk_buff *skb,
rate_found = true;
break;
+ case IEEE80211_RADIOTAP_ANTENNA:
+ /* this can appear multiple times, keep a bitmap */
+ info->control.antennas |= BIT(*iterator.this_arg);
+ break;
+
case IEEE80211_RADIOTAP_DATA_RETRIES:
rate_retries = *iterator.this_arg;
break;
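This hunk and the next one add radiotap-based antenna selection for injected frames: every IEEE80211_RADIOTAP_ANTENNA field ORs one bit into info->control.antennas, and the bitmap is dropped again if it names fewer antennas than the requested HT chains or VHT NSS require. A condensed sketch of that logic (not the actual parser loop):

static void ex_apply_injected_antennas(struct ieee80211_tx_info *info,
				       const u8 *ant_fields, int n_fields,
				       int needed_chains)
{
	int i;

	/* one ANTENNA field per antenna, accumulated as a bitmap */
	for (i = 0; i < n_fields; i++)
		info->control.antennas |= BIT(ant_fields[i]);

	/* not enough antennas for the requested rate: let the driver pick */
	if (hweight8(info->control.antennas) < needed_chains)
		info->control.antennas = 0;
}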
@@ -2261,8 +2262,17 @@ bool ieee80211_parse_tx_radiotap(struct sk_buff *skb,
}
if (rate_flags & IEEE80211_TX_RC_MCS) {
+ /* reset antennas if not enough */
+ if (IEEE80211_HT_MCS_CHAINS(rate) >
+ hweight8(info->control.antennas))
+ info->control.antennas = 0;
+
info->control.rates[0].idx = rate;
} else if (rate_flags & IEEE80211_TX_RC_VHT_MCS) {
+ /* reset antennas if not enough */
+ if (vht_nss > hweight8(info->control.antennas))
+ info->control.antennas = 0;
+
ieee80211_rate_set_vht(info->control.rates, vht_mcs,
vht_nss);
} else if (sband) {
@@ -2856,9 +2866,10 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
goto free;
}
- if (unlikely(!multicast && ((skb->sk &&
- skb_shinfo(skb)->tx_flags & SKBTX_WIFI_STATUS) ||
- ctrl_flags & IEEE80211_TX_CTL_REQ_TX_STATUS)))
+ if (unlikely(!multicast &&
+ ((skb->sk &&
+ skb_shinfo(skb)->tx_flags & SKBTX_WIFI_STATUS) ||
+ ctrl_flags & IEEE80211_TX_CTL_REQ_TX_STATUS)))
info_id = ieee80211_store_ack_skb(local, skb, &info_flags,
cookie);
@@ -2942,7 +2953,10 @@ static struct sk_buff *ieee80211_build_hdr(struct ieee80211_sub_if_data *sdata,
memset(info, 0, sizeof(*info));
info->flags = info_flags;
- info->ack_frame_id = info_id;
+ if (info_id) {
+ info->status_data = info_id;
+ info->status_data_idr = 1;
+ }
info->band = band;
if (likely(!cookie)) {
@@ -4475,6 +4489,8 @@ static void ieee80211_mlo_multicast_tx(struct net_device *dev,
* @dev: incoming interface
*
* On failure skb will be freed.
+ *
+ * Returns: the netdev TX status (but really only %NETDEV_TX_OK)
*/
netdev_tx_t ieee80211_subif_start_xmit(struct sk_buff *skb,
struct net_device *dev)
@@ -4639,9 +4655,12 @@ static void ieee80211_8023_xmit(struct ieee80211_sub_if_data *sdata,
}
if (unlikely(skb->sk &&
- skb_shinfo(skb)->tx_flags & SKBTX_WIFI_STATUS))
- info->ack_frame_id = ieee80211_store_ack_skb(local, skb,
- &info->flags, NULL);
+ skb_shinfo(skb)->tx_flags & SKBTX_WIFI_STATUS)) {
+ info->status_data = ieee80211_store_ack_skb(local, skb,
+ &info->flags, NULL);
+ if (info->status_data)
+ info->status_data_idr = 1;
+ }
dev_sw_netstats_tx_add(dev, skbs, len);
sta->deflink.tx_stats.packets[queue] += skbs;
@@ -5550,7 +5569,6 @@ struct sk_buff *ieee80211_beacon_get_tim(struct ieee80211_hw *hw,
IEEE80211_INCLUDE_ALL_MBSSID_ELEMS,
NULL);
struct sk_buff *copy;
- int shift;
if (!bcn)
return bcn;
@@ -5570,8 +5588,7 @@ struct sk_buff *ieee80211_beacon_get_tim(struct ieee80211_hw *hw,
if (!copy)
return bcn;
- shift = ieee80211_vif_get_shift(vif);
- ieee80211_tx_monitor(hw_to_local(hw), copy, 1, shift, false, NULL);
+ ieee80211_tx_monitor(hw_to_local(hw), copy, 1, false, NULL);
return bcn;
}
@@ -5921,7 +5938,7 @@ int ieee80211_reserve_tid(struct ieee80211_sta *pubsta, u8 tid)
int ret;
u32 queues;
- lockdep_assert_held(&local->sta_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
/* only some cases are supported right now */
switch (sdata->vif.type) {
@@ -5982,7 +5999,7 @@ void ieee80211_unreserve_tid(struct ieee80211_sta *pubsta, u8 tid)
struct sta_info *sta = container_of(pubsta, struct sta_info, sta);
struct ieee80211_sub_if_data *sdata = sta->sdata;
- lockdep_assert_held(&sdata->local->sta_mtx);
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
/* only some cases are supported right now */
switch (sdata->vif.type) {
@@ -6103,6 +6120,9 @@ int ieee80211_tx_control_port(struct wiphy *wiphy, struct net_device *dev,
u32 flags = 0;
int err;
+ /* mutex lock is only needed for incrementing the cookie counter */
+ lockdep_assert_wiphy(local->hw.wiphy);
+
/* Only accept CONTROL_PORT_PROTOCOL configured in CONNECT/ASSOCIATE
* or Pre-Authentication
*/
@@ -6193,15 +6213,10 @@ int ieee80211_tx_control_port(struct wiphy *wiphy, struct net_device *dev,
rcu_read_unlock();
start_xmit:
- /* mutex lock is only needed for incrementing the cookie counter */
- mutex_lock(&local->mtx);
-
local_bh_disable();
__ieee80211_subif_start_xmit(skb, skb->dev, flags, ctrl_flags, cookie);
local_bh_enable();
- mutex_unlock(&local->mtx);
-
return 0;
}
diff --git a/net/mac80211/util.c b/net/mac80211/util.c
index 8a6917cf63cf..ed680120d5a7 100644
--- a/net/mac80211/util.c
+++ b/net/mac80211/util.c
@@ -24,6 +24,7 @@
#include <net/net_namespace.h>
#include <net/cfg80211.h>
#include <net/rtnetlink.h>
+#include <kunit/visibility.h>
#include "ieee80211_i.h"
#include "driver-ops.h"
@@ -109,8 +110,7 @@ void ieee80211_tx_set_protected(struct ieee80211_tx_data *tx)
}
int ieee80211_frame_duration(enum nl80211_band band, size_t len,
- int rate, int erp, int short_preamble,
- int shift)
+ int rate, int erp, int short_preamble)
{
int dur;
@@ -121,9 +121,6 @@ int ieee80211_frame_duration(enum nl80211_band band, size_t len,
*
* rate is in 100 kbps, so divident is multiplied by 10 in the
* DIV_ROUND_UP() operations.
- *
- * shift may be 2 for 5 MHz channels or 1 for 10 MHz channels, and
- * is assumed to be 0 otherwise.
*/
if (band == NL80211_BAND_5GHZ || erp) {
@@ -144,12 +141,6 @@ int ieee80211_frame_duration(enum nl80211_band band, size_t len,
dur += 16; /* IEEE 802.11-2012 18.3.2.4: T_PREAMBLE = 16 usec */
dur += 4; /* IEEE 802.11-2012 18.3.2.4: T_SIGNAL = 4 usec */
- /* IEEE 802.11-2012 18.3.2.4: all values above are:
- * * times 4 for 5 MHz
- * * times 2 for 10 MHz
- */
- dur *= 1 << shift;
-
/* rates should already consider the channel bandwidth,
* don't apply divisor again.
*/
@@ -184,7 +175,7 @@ __le16 ieee80211_generic_frame_duration(struct ieee80211_hw *hw,
{
struct ieee80211_sub_if_data *sdata;
u16 dur;
- int erp, shift = 0;
+ int erp;
bool short_preamble = false;
erp = 0;
@@ -193,11 +184,10 @@ __le16 ieee80211_generic_frame_duration(struct ieee80211_hw *hw,
short_preamble = sdata->vif.bss_conf.use_short_preamble;
if (sdata->deflink.operating_11g_mode)
erp = rate->flags & IEEE80211_RATE_ERP_G;
- shift = ieee80211_vif_get_shift(vif);
}
dur = ieee80211_frame_duration(band, frame_len, rate->bitrate, erp,
- short_preamble, shift);
+ short_preamble);
return cpu_to_le16(dur);
}
@@ -211,7 +201,7 @@ __le16 ieee80211_rts_duration(struct ieee80211_hw *hw,
struct ieee80211_rate *rate;
struct ieee80211_sub_if_data *sdata;
bool short_preamble;
- int erp, shift = 0, bitrate;
+ int erp, bitrate;
u16 dur;
struct ieee80211_supported_band *sband;
@@ -227,20 +217,19 @@ __le16 ieee80211_rts_duration(struct ieee80211_hw *hw,
short_preamble = sdata->vif.bss_conf.use_short_preamble;
if (sdata->deflink.operating_11g_mode)
erp = rate->flags & IEEE80211_RATE_ERP_G;
- shift = ieee80211_vif_get_shift(vif);
}
- bitrate = DIV_ROUND_UP(rate->bitrate, 1 << shift);
+ bitrate = rate->bitrate;
/* CTS duration */
dur = ieee80211_frame_duration(sband->band, 10, bitrate,
- erp, short_preamble, shift);
+ erp, short_preamble);
/* Data frame duration */
dur += ieee80211_frame_duration(sband->band, frame_len, bitrate,
- erp, short_preamble, shift);
+ erp, short_preamble);
/* ACK duration */
dur += ieee80211_frame_duration(sband->band, 10, bitrate,
- erp, short_preamble, shift);
+ erp, short_preamble);
return cpu_to_le16(dur);
}
@@ -255,7 +244,7 @@ __le16 ieee80211_ctstoself_duration(struct ieee80211_hw *hw,
struct ieee80211_rate *rate;
struct ieee80211_sub_if_data *sdata;
bool short_preamble;
- int erp, shift = 0, bitrate;
+ int erp, bitrate;
u16 dur;
struct ieee80211_supported_band *sband;
@@ -270,18 +259,17 @@ __le16 ieee80211_ctstoself_duration(struct ieee80211_hw *hw,
short_preamble = sdata->vif.bss_conf.use_short_preamble;
if (sdata->deflink.operating_11g_mode)
erp = rate->flags & IEEE80211_RATE_ERP_G;
- shift = ieee80211_vif_get_shift(vif);
}
- bitrate = DIV_ROUND_UP(rate->bitrate, 1 << shift);
+ bitrate = rate->bitrate;
/* Data frame duration */
dur = ieee80211_frame_duration(sband->band, frame_len, bitrate,
- erp, short_preamble, shift);
+ erp, short_preamble);
if (!(frame_txctl->flags & IEEE80211_TX_CTL_NO_ACK)) {
/* ACK duration */
dur += ieee80211_frame_duration(sband->band, 10, bitrate,
- erp, short_preamble, shift);
+ erp, short_preamble);
}
return cpu_to_le16(dur);
@@ -705,6 +693,19 @@ void __ieee80211_flush_queues(struct ieee80211_local *local,
IEEE80211_QUEUE_STOP_REASON_FLUSH,
false);
+ if (drop) {
+ struct sta_info *sta;
+
+ /* Purge the queues, so the frames on them won't be
+ * sent during __ieee80211_wake_queue()
+ */
+ list_for_each_entry(sta, &local->sta_list, list) {
+ if (sdata != sta->sdata)
+ continue;
+ ieee80211_purge_sta_txqs(sta);
+ }
+ }
+
drv_flush(local, sdata, queues, drop);
ieee80211_wake_queues_by_reason(&local->hw, queues,
@@ -1002,6 +1003,19 @@ ieee80211_parse_extension_element(u32 *crc,
}
}
break;
+ case WLAN_EID_EXT_BANDWIDTH_INDICATION:
+ if (ieee80211_bandwidth_indication_size_ok(data, len))
+ elems->bandwidth_indication = data;
+ calc_crc = true;
+ break;
+ case WLAN_EID_EXT_TID_TO_LINK_MAPPING:
+ calc_crc = true;
+ if (ieee80211_tid_to_link_map_size_ok(data, len) &&
+ elems->ttlm_num < ARRAY_SIZE(elems->ttlm)) {
+ elems->ttlm[elems->ttlm_num] = (void *)data;
+ elems->ttlm_num++;
+ }
+ break;
}
if (crc && calc_crc)
@@ -1017,11 +1031,11 @@ _ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params,
bool calc_crc = params->filter != 0;
DECLARE_BITMAP(seen_elems, 256);
u32 crc = params->crc;
- const u8 *ie;
bitmap_zero(seen_elems, 256);
for_each_element(elem, params->start, params->len) {
+ const struct element *subelem;
bool elem_parse_failed;
u8 id = elem->id;
u8 elen = elem->datalen;
@@ -1279,15 +1293,27 @@ _ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params,
}
/*
* This is a bit tricky, but as we only care about
- * the wide bandwidth channel switch element, so
- * just parse it out manually.
+ * a few elements, parse them out manually.
*/
- ie = cfg80211_find_ie(WLAN_EID_WIDE_BW_CHANNEL_SWITCH,
- pos, elen);
- if (ie) {
- if (ie[1] >= sizeof(*elems->wide_bw_chansw_ie))
+ subelem = cfg80211_find_elem(WLAN_EID_WIDE_BW_CHANNEL_SWITCH,
+ pos, elen);
+ if (subelem) {
+ if (subelem->datalen >= sizeof(*elems->wide_bw_chansw_ie))
elems->wide_bw_chansw_ie =
- (void *)(ie + 2);
+ (void *)subelem->data;
+ else
+ elem_parse_failed = true;
+ }
+
+ subelem = cfg80211_find_ext_elem(WLAN_EID_EXT_BANDWIDTH_INDICATION,
+ pos, elen);
+ if (subelem) {
+ const void *edata = subelem->data + 1;
+ u8 edatalen = subelem->datalen - 1;
+
+ if (ieee80211_bandwidth_indication_size_ok(edata,
+ edatalen))
+ elems->bandwidth_indication = edata;
else
elem_parse_failed = true;
}
@@ -1599,7 +1625,7 @@ ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params)
int nontransmitted_profile_len = 0;
size_t scratch_len = 3 * params->len;
- elems = kzalloc(sizeof(*elems) + scratch_len, GFP_ATOMIC);
+ elems = kzalloc(struct_size(elems, scratch, scratch_len), GFP_ATOMIC);
if (!elems)
return NULL;
elems->ie_start = params->start;
@@ -1654,6 +1680,7 @@ ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params)
return elems;
}
+EXPORT_SYMBOL_IF_KUNIT(ieee802_11_parse_elems_full);
void ieee80211_regulatory_limit_wmm_params(struct ieee80211_sub_if_data *sdata,
struct ieee80211_tx_queue_params
@@ -1942,7 +1969,6 @@ static int ieee80211_build_preq_ies_band(struct ieee80211_sub_if_data *sdata,
u8 rates[32];
int num_rates;
int ext_rates_len;
- int shift;
u32 rate_flags;
bool have_80mhz = false;
@@ -1953,7 +1979,6 @@ static int ieee80211_build_preq_ies_band(struct ieee80211_sub_if_data *sdata,
return 0;
rate_flags = ieee80211_chandef_rate_flags(chandef);
- shift = ieee80211_chandef_get_shift(chandef);
/* For direct scan add S1G IE and consider its override bits */
if (band == NL80211_BAND_S1GHZ) {
@@ -1971,8 +1996,7 @@ static int ieee80211_build_preq_ies_band(struct ieee80211_sub_if_data *sdata,
continue;
rates[num_rates++] =
- (u8) DIV_ROUND_UP(sband->bitrates[i].bitrate,
- (1 << shift) * 5);
+ (u8) DIV_ROUND_UP(sband->bitrates[i].bitrate, 5);
}
supp_rates_len = min_t(int, num_rates, 8);
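With the 5/10 MHz rate "shift" gone, legacy bitrates are used at face value: struct ieee80211_rate::bitrate is in units of 100 kbps, while the (Extended) Supported Rates element encodes rates in 500 kbps units, hence the plain DIV_ROUND_UP(bitrate, 5). A one-line worked example:

/* 6 Mbps: bitrate == 60 (100 kbps units) -> 12 (500 kbps units),
 * OR'd with 0x80 when advertised as a basic rate.
 */
static u8 ex_supp_rate_octet(u16 bitrate, bool basic)
{
	return (basic ? 0x80 : 0x00) | (u8)DIV_ROUND_UP(bitrate, 5);
}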
@@ -2265,14 +2289,13 @@ u32 ieee80211_sta_get_rates(struct ieee80211_sub_if_data *sdata,
struct ieee80211_supported_band *sband;
size_t num_rates;
u32 supp_rates, rate_flags;
- int i, j, shift;
+ int i, j;
sband = sdata->local->hw.wiphy->bands[band];
if (WARN_ON(!sband))
return 1;
rate_flags = ieee80211_chandef_rate_flags(&sdata->vif.bss_conf.chandef);
- shift = ieee80211_vif_get_shift(&sdata->vif);
num_rates = sband->n_bitrates;
supp_rates = 0;
@@ -2298,8 +2321,7 @@ u32 ieee80211_sta_get_rates(struct ieee80211_sub_if_data *sdata,
!= rate_flags)
continue;
- brate = DIV_ROUND_UP(sband->bitrates[j].bitrate,
- 1 << shift);
+ brate = sband->bitrates[j].bitrate;
if (brate == own_rate) {
supp_rates |= BIT(j);
@@ -2316,9 +2338,10 @@ void ieee80211_stop_device(struct ieee80211_local *local)
ieee80211_led_radio(local, false);
ieee80211_mod_tpt_led_trig(local, 0, IEEE80211_TPT_LEDTRIG_FL_RADIO);
- cancel_work_sync(&local->reconfig_filter);
+ wiphy_work_cancel(local->hw.wiphy, &local->reconfig_filter);
flush_workqueue(local->workqueue);
+ wiphy_work_flush(local->hw.wiphy, NULL);
drv_stop(local);
}
@@ -2340,8 +2363,8 @@ static void ieee80211_flush_completed_scan(struct ieee80211_local *local,
*/
if (aborted)
set_bit(SCAN_ABORTED, &local->scanning);
- ieee80211_queue_delayed_work(&local->hw, &local->scan_work, 0);
- flush_delayed_work(&local->scan_work);
+ wiphy_delayed_work_queue(local->hw.wiphy, &local->scan_work, 0);
+ wiphy_delayed_work_flush(local->hw.wiphy, &local->scan_work);
}
}
@@ -2350,6 +2373,8 @@ static void ieee80211_handle_reconfig_failure(struct ieee80211_local *local)
struct ieee80211_sub_if_data *sdata;
struct ieee80211_chanctx *ctx;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
/*
* We get here if during resume the device can't be restarted properly.
* We might also get here if this happens during HW reset, which is a
@@ -2378,10 +2403,8 @@ static void ieee80211_handle_reconfig_failure(struct ieee80211_local *local)
/* Mark channel contexts as not being in the driver any more to avoid
* removing them from the driver during the shutdown process...
*/
- mutex_lock(&local->chanctx_mtx);
list_for_each_entry(ctx, &local->chanctx_list, list)
ctx->driver_present = false;
- mutex_unlock(&local->chanctx_mtx);
}
static void ieee80211_assign_chanctx(struct ieee80211_local *local,
@@ -2391,17 +2414,17 @@ static void ieee80211_assign_chanctx(struct ieee80211_local *local,
struct ieee80211_chanctx_conf *conf;
struct ieee80211_chanctx *ctx;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (!local->use_chanctx)
return;
- mutex_lock(&local->chanctx_mtx);
conf = rcu_dereference_protected(link->conf->chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (conf) {
ctx = container_of(conf, struct ieee80211_chanctx, conf);
drv_assign_vif_chanctx(local, sdata, link->conf, ctx);
}
- mutex_unlock(&local->chanctx_mtx);
}
static void ieee80211_reconfig_stations(struct ieee80211_sub_if_data *sdata)
@@ -2409,8 +2432,9 @@ static void ieee80211_reconfig_stations(struct ieee80211_sub_if_data *sdata)
struct ieee80211_local *local = sdata->local;
struct sta_info *sta;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
/* add STAs back */
- mutex_lock(&local->sta_mtx);
list_for_each_entry(sta, &local->sta_list, list) {
enum ieee80211_sta_state state;
@@ -2422,7 +2446,6 @@ static void ieee80211_reconfig_stations(struct ieee80211_sub_if_data *sdata)
WARN_ON(drv_sta_state(local, sta->sdata, sta, state,
state + 1));
}
- mutex_unlock(&local->sta_mtx);
}
static int ieee80211_reconfig_nan(struct ieee80211_sub_if_data *sdata)
@@ -2509,6 +2532,8 @@ int ieee80211_reconfig(struct ieee80211_local *local)
bool suspended = local->suspended;
bool in_reconfig = false;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
/* nothing to do if HW shouldn't run */
if (!local->open_count)
goto wake_up;
@@ -2624,12 +2649,10 @@ int ieee80211_reconfig(struct ieee80211_local *local)
/* add channel contexts */
if (local->use_chanctx) {
- mutex_lock(&local->chanctx_mtx);
list_for_each_entry(ctx, &local->chanctx_list, list)
if (ctx->replace_state !=
IEEE80211_CHANCTX_REPLACES_OTHER)
WARN_ON(drv_add_chanctx(local, ctx));
- mutex_unlock(&local->chanctx_mtx);
sdata = wiphy_dereference(local->hw.wiphy,
local->monitor_sdata);
@@ -2663,7 +2686,6 @@ int ieee80211_reconfig(struct ieee80211_local *local)
if (!ieee80211_sdata_running(sdata))
continue;
- sdata_lock(sdata);
if (ieee80211_vif_is_mld(&sdata->vif)) {
struct ieee80211_bss_conf *old[IEEE80211_MLD_MAX_NUM_LINKS] = {
[0] = &sdata->vif.bss_conf,
@@ -2795,7 +2817,6 @@ int ieee80211_reconfig(struct ieee80211_local *local)
case NL80211_IFTYPE_NAN:
res = ieee80211_reconfig_nan(sdata);
if (res < 0) {
- sdata_unlock(sdata);
ieee80211_handle_reconfig_failure(local);
return res;
}
@@ -2813,7 +2834,6 @@ int ieee80211_reconfig(struct ieee80211_local *local)
WARN_ON(1);
break;
}
- sdata_unlock(sdata);
if (active_links)
ieee80211_set_active_links(&sdata->vif, active_links);
@@ -2843,7 +2863,6 @@ int ieee80211_reconfig(struct ieee80211_local *local)
if (!ieee80211_sdata_running(sdata))
continue;
- sdata_lock(sdata);
switch (sdata->vif.type) {
case NL80211_IFTYPE_AP_VLAN:
case NL80211_IFTYPE_AP:
@@ -2852,7 +2871,6 @@ int ieee80211_reconfig(struct ieee80211_local *local)
default:
break;
}
- sdata_unlock(sdata);
}
/* add back keys */
@@ -2860,11 +2878,10 @@ int ieee80211_reconfig(struct ieee80211_local *local)
ieee80211_reenable_keys(sdata);
/* Reconfigure sched scan if it was interrupted by FW restart */
- mutex_lock(&local->mtx);
sched_scan_sdata = rcu_dereference_protected(local->sched_scan_sdata,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
sched_scan_req = rcu_dereference_protected(local->sched_scan_req,
- lockdep_is_held(&local->mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
if (sched_scan_sdata && sched_scan_req)
/*
* Sched scan stopped, but we don't want to report it. Instead,
@@ -2880,7 +2897,6 @@ int ieee80211_reconfig(struct ieee80211_local *local)
RCU_INIT_POINTER(local->sched_scan_req, NULL);
sched_scan_stopped = true;
}
- mutex_unlock(&local->mtx);
if (sched_scan_stopped)
cfg80211_sched_scan_stopped_locked(local->hw.wiphy, 0);
@@ -2901,16 +2917,12 @@ int ieee80211_reconfig(struct ieee80211_local *local)
* are active. This is really a workaround though.
*/
if (ieee80211_hw_check(hw, AMPDU_AGGREGATION)) {
- mutex_lock(&local->sta_mtx);
-
list_for_each_entry(sta, &local->sta_list, list) {
if (!local->resuming)
ieee80211_sta_tear_down_BA_sessions(
sta, AGG_STOP_LOCAL_REQUEST);
clear_sta_flag(sta, WLAN_STA_BLOCK_BA);
}
-
- mutex_unlock(&local->sta_mtx);
}
/*
@@ -2926,9 +2938,7 @@ int ieee80211_reconfig(struct ieee80211_local *local)
barrier();
/* Restart deferred ROCs */
- mutex_lock(&local->mtx);
ieee80211_start_next_roc(local);
- mutex_unlock(&local->mtx);
/* Requeue all works */
list_for_each_entry(sdata, &local->interfaces, list)
@@ -2989,6 +2999,8 @@ static void ieee80211_reconfig_disconnect(struct ieee80211_vif *vif, u8 flag)
sdata = vif_to_sdata(vif);
local = sdata->local;
+ lockdep_assert_wiphy(local->hw.wiphy);
+
if (WARN_ON(flag & IEEE80211_SDATA_DISCONNECT_RESUME &&
!local->resuming))
return;
@@ -3002,10 +3014,8 @@ static void ieee80211_reconfig_disconnect(struct ieee80211_vif *vif, u8 flag)
sdata->flags |= flag;
- mutex_lock(&local->key_mtx);
list_for_each_entry(key, &sdata->key_list, list)
key->flags |= KEY_FLAG_TAINTED;
- mutex_unlock(&local->key_mtx);
}
void ieee80211_hw_restart_disconnect(struct ieee80211_vif *vif)
@@ -3027,10 +3037,10 @@ void ieee80211_recalc_smps(struct ieee80211_sub_if_data *sdata,
struct ieee80211_chanctx_conf *chanctx_conf;
struct ieee80211_chanctx *chanctx;
- mutex_lock(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
chanctx_conf = rcu_dereference_protected(link->conf->chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
/*
* This function can be called from a work, thus it may be possible
@@ -3039,12 +3049,10 @@ void ieee80211_recalc_smps(struct ieee80211_sub_if_data *sdata,
* So nothing should be done in such case.
*/
if (!chanctx_conf)
- goto unlock;
+ return;
chanctx = container_of(chanctx_conf, struct ieee80211_chanctx, conf);
ieee80211_recalc_smps_chanctx(local, chanctx);
- unlock:
- mutex_unlock(&local->chanctx_mtx);
}
void ieee80211_recalc_min_chandef(struct ieee80211_sub_if_data *sdata,
@@ -3055,7 +3063,7 @@ void ieee80211_recalc_min_chandef(struct ieee80211_sub_if_data *sdata,
struct ieee80211_chanctx *chanctx;
int i;
- mutex_lock(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
for (i = 0; i < ARRAY_SIZE(sdata->vif.link_conf); i++) {
struct ieee80211_bss_conf *bss_conf;
@@ -3071,9 +3079,9 @@ void ieee80211_recalc_min_chandef(struct ieee80211_sub_if_data *sdata,
}
chanctx_conf = rcu_dereference_protected(bss_conf->chanctx_conf,
- lockdep_is_held(&local->chanctx_mtx));
+ lockdep_is_held(&local->hw.wiphy->mtx));
/*
- * Since we hold the chanctx_mtx (checked above)
+ * Since we hold the wiphy mutex (checked above)
* we can take the chanctx_conf pointer out of the
* RCU critical section, it cannot go away without
* the mutex. Just the way we reached it could - in
@@ -3083,14 +3091,12 @@ void ieee80211_recalc_min_chandef(struct ieee80211_sub_if_data *sdata,
rcu_read_unlock();
if (!chanctx_conf)
- goto unlock;
+ return;
chanctx = container_of(chanctx_conf, struct ieee80211_chanctx,
conf);
ieee80211_recalc_chanctx_min_def(local, chanctx, NULL);
}
- unlock:
- mutex_unlock(&local->chanctx_mtx);
}
size_t ieee80211_ie_split_vendor(const u8 *ies, size_t ielen, size_t offset)
@@ -3778,12 +3784,10 @@ bool ieee80211_chandef_vht_oper(struct ieee80211_hw *hw, u32 vht_cap_info,
return true;
}
-void ieee80211_chandef_eht_oper(const struct ieee80211_eht_operation *eht_oper,
+void ieee80211_chandef_eht_oper(const struct ieee80211_eht_operation_info *info,
bool support_160, bool support_320,
struct cfg80211_chan_def *chandef)
{
- struct ieee80211_eht_operation_info *info = (void *)eht_oper->optional;
-
chandef->center_freq1 =
ieee80211_channel_to_frequency(info->ccfs0,
chandef->chan->band);
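ieee80211_chandef_eht_oper() now takes the EHT operation info directly, so the caller extracts it rather than having the helper cast eht_oper->optional itself. An illustrative caller under the new signature, assuming the usual "operation info present" gate (a sketch, not the in-tree call chain):

static void ex_apply_eht_oper(const struct ieee80211_eht_operation *eht_oper,
			      bool supp_160, bool supp_320,
			      struct cfg80211_chan_def *chandef)
{
	const struct ieee80211_eht_operation_info *info;

	/* assumed check; the element must carry the optional oper info */
	if (!(eht_oper->params & IEEE80211_EHT_OPER_INFO_PRESENT))
		return;

	info = (const void *)eht_oper->optional;
	ieee80211_chandef_eht_oper(info, supp_160, supp_320, chandef);
}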
@@ -3952,8 +3956,9 @@ bool ieee80211_chandef_he_6ghz_oper(struct ieee80211_sub_if_data *sdata,
support_320 =
eht_phy_cap & IEEE80211_EHT_PHY_CAP0_320MHZ_IN_6GHZ;
- ieee80211_chandef_eht_oper(eht_oper, support_160,
- support_320, &he_chandef);
+ ieee80211_chandef_eht_oper((const void *)eht_oper->optional,
+ support_160, support_320,
+ &he_chandef);
}
if (!cfg80211_chandef_valid(&he_chandef)) {
@@ -4012,7 +4017,6 @@ int ieee80211_parse_bitrates(enum nl80211_chan_width width,
const u8 *srates, int srates_len, u32 *rates)
{
u32 rate_flags = ieee80211_chanwidth_rate_flags(width);
- int shift = ieee80211_chanwidth_get_shift(width);
struct ieee80211_rate *br;
int brate, rate, i, j, count = 0;
@@ -4026,7 +4030,7 @@ int ieee80211_parse_bitrates(enum nl80211_chan_width width,
if ((rate_flags & br->flags) != rate_flags)
continue;
- brate = DIV_ROUND_UP(br->bitrate, (1 << shift) * 5);
+ brate = DIV_ROUND_UP(br->bitrate, 5);
if (brate == rate) {
*rates |= BIT(j);
count++;
@@ -4043,12 +4047,11 @@ int ieee80211_add_srates_ie(struct ieee80211_sub_if_data *sdata,
{
struct ieee80211_local *local = sdata->local;
struct ieee80211_supported_band *sband;
- int rate, shift;
+ int rate;
u8 i, rates, *pos;
u32 basic_rates = sdata->vif.bss_conf.basic_rates;
u32 rate_flags;
- shift = ieee80211_vif_get_shift(&sdata->vif);
rate_flags = ieee80211_chandef_rate_flags(&sdata->vif.bss_conf.chandef);
sband = local->hw.wiphy->bands[band];
rates = 0;
@@ -4073,8 +4076,7 @@ int ieee80211_add_srates_ie(struct ieee80211_sub_if_data *sdata,
if (need_basic && basic_rates & BIT(i))
basic = 0x80;
- rate = DIV_ROUND_UP(sband->bitrates[i].bitrate,
- 5 * (1 << shift));
+ rate = DIV_ROUND_UP(sband->bitrates[i].bitrate, 5);
*pos++ = basic | (u8) rate;
}
@@ -4087,13 +4089,12 @@ int ieee80211_add_ext_srates_ie(struct ieee80211_sub_if_data *sdata,
{
struct ieee80211_local *local = sdata->local;
struct ieee80211_supported_band *sband;
- int rate, shift;
+ int rate;
u8 i, exrates, *pos;
u32 basic_rates = sdata->vif.bss_conf.basic_rates;
u32 rate_flags;
rate_flags = ieee80211_chandef_rate_flags(&sdata->vif.bss_conf.chandef);
- shift = ieee80211_vif_get_shift(&sdata->vif);
sband = local->hw.wiphy->bands[band];
exrates = 0;
@@ -4122,8 +4123,7 @@ int ieee80211_add_ext_srates_ie(struct ieee80211_sub_if_data *sdata,
continue;
if (need_basic && basic_rates & BIT(i))
basic = 0x80;
- rate = DIV_ROUND_UP(sband->bitrates[i].bitrate,
- 5 * (1 << shift));
+ rate = DIV_ROUND_UP(sband->bitrates[i].bitrate, 5);
*pos++ = basic | (u8) rate;
}
}
@@ -4167,6 +4167,8 @@ u8 ieee80211_mcs_to_chains(const struct ieee80211_mcs_info *mcs)
* This function calculates the RX timestamp at the given MPDU offset, taking
* into account what the RX timestamp was. An offset of 0 will just normalize
* the timestamp to TSF at beginning of MPDU reception.
+ *
+ * Returns: the calculated timestamp
*/
u64 ieee80211_calculate_rx_timestamp(struct ieee80211_local *local,
struct ieee80211_rx_status *status,
@@ -4282,25 +4284,13 @@ u64 ieee80211_calculate_rx_timestamp(struct ieee80211_local *local,
fallthrough;
case RX_ENC_LEGACY: {
struct ieee80211_supported_band *sband;
- int shift = 0;
- int bitrate;
-
- switch (status->bw) {
- case RATE_INFO_BW_10:
- shift = 1;
- break;
- case RATE_INFO_BW_5:
- shift = 2;
- break;
- }
sband = local->hw.wiphy->bands[status->band];
- bitrate = sband->bitrates[status->rate_idx].bitrate;
- ri.legacy = DIV_ROUND_UP(bitrate, (1 << shift));
+ ri.legacy = sband->bitrates[status->rate_idx].bitrate;
if (status->flag & RX_FLAG_MACTIME_PLCP_START) {
if (status->band == NL80211_BAND_5GHZ) {
- ts += 20 << shift;
+ ts += 20;
mpdu_offset += 2;
} else if (status->enc_flags & RX_ENC_FLAG_SHORTPRE) {
ts += 96;
@@ -4333,16 +4323,15 @@ void ieee80211_dfs_cac_cancel(struct ieee80211_local *local)
struct ieee80211_sub_if_data *sdata;
struct cfg80211_chan_def chandef;
- /* for interface list, to avoid linking iflist_mtx and chanctx_mtx */
lockdep_assert_wiphy(local->hw.wiphy);
- mutex_lock(&local->mtx);
list_for_each_entry(sdata, &local->interfaces, list) {
/* it might be waiting for the local->mtx, but then
* by the time it gets it, sdata->wdev.cac_started
* will no longer be true
*/
- cancel_delayed_work(&sdata->deflink.dfs_cac_timer_work);
+ wiphy_delayed_work_cancel(local->hw.wiphy,
+ &sdata->deflink.dfs_cac_timer_work);
if (sdata->wdev.cac_started) {
chandef = sdata->vif.bss_conf.chandef;
@@ -4353,10 +4342,10 @@ void ieee80211_dfs_cac_cancel(struct ieee80211_local *local)
GFP_KERNEL);
}
}
- mutex_unlock(&local->mtx);
}
-void ieee80211_dfs_radar_detected_work(struct work_struct *work)
+void ieee80211_dfs_radar_detected_work(struct wiphy *wiphy,
+ struct wiphy_work *work)
{
struct ieee80211_local *local =
container_of(work, struct ieee80211_local, radar_detected_work);
@@ -4364,7 +4353,8 @@ void ieee80211_dfs_radar_detected_work(struct work_struct *work)
struct ieee80211_chanctx *ctx;
int num_chanctx = 0;
- mutex_lock(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
+
list_for_each_entry(ctx, &local->chanctx_list, list) {
if (ctx->replace_state == IEEE80211_CHANCTX_REPLACES_OTHER)
continue;
@@ -4372,11 +4362,8 @@ void ieee80211_dfs_radar_detected_work(struct work_struct *work)
num_chanctx++;
chandef = ctx->conf.def;
}
- mutex_unlock(&local->chanctx_mtx);
- wiphy_lock(local->hw.wiphy);
ieee80211_dfs_cac_cancel(local);
- wiphy_unlock(local->hw.wiphy);
if (num_chanctx > 1)
/* XXX: multi-channel is not supported yet */
@@ -4391,7 +4378,7 @@ void ieee80211_radar_detected(struct ieee80211_hw *hw)
trace_api_radar_detected(local);
- schedule_work(&local->radar_detected_work);
+ wiphy_work_queue(hw->wiphy, &local->radar_detected_work);
}
EXPORT_SYMBOL(ieee80211_radar_detected);
@@ -4775,7 +4762,7 @@ static u8 ieee80211_chanctx_radar_detect(struct ieee80211_local *local,
struct ieee80211_link_data *link;
u8 radar_detect = 0;
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (WARN_ON(ctx->replace_state == IEEE80211_CHANCTX_WILL_BE_REPLACED))
return 0;
@@ -4816,7 +4803,7 @@ int ieee80211_check_combinations(struct ieee80211_sub_if_data *sdata,
.radar_detect = radar_detect,
};
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
if (WARN_ON(hweight32(radar_detect) > 1))
return -EINVAL;
@@ -4906,7 +4893,7 @@ int ieee80211_max_num_channels(struct ieee80211_local *local)
int err;
struct iface_combination_params params = {0};
- lockdep_assert_held(&local->chanctx_mtx);
+ lockdep_assert_wiphy(local->hw.wiphy);
list_for_each_entry(ctx, &local->chanctx_list, list) {
if (ctx->replace_state == IEEE80211_CHANCTX_WILL_BE_REPLACED)
@@ -5118,31 +5105,3 @@ u8 *ieee80211_ie_build_eht_cap(u8 *pos,
return pos;
}
-
-void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id)
-{
- unsigned int elem_len;
-
- if (!len_pos)
- return;
-
- elem_len = skb->data + skb->len - len_pos - 1;
-
- while (elem_len > 255) {
- /* this one is 255 */
- *len_pos = 255;
- /* remaining data gets smaller */
- elem_len -= 255;
- /* make space for the fragment ID/len in SKB */
- skb_put(skb, 2);
- /* shift back the remaining data to place fragment ID/len */
- memmove(len_pos + 255 + 3, len_pos + 255 + 1, elem_len);
- /* place the fragment ID */
- len_pos += 255 + 1;
- *len_pos = frag_id;
- /* and point to fragment length to update later */
- len_pos++;
- }
-
- *len_pos = elem_len;
-}
diff --git a/net/mac80211/wep.c b/net/mac80211/wep.c
index 9a6e11d7b4db..5c01e121481a 100644
--- a/net/mac80211/wep.c
+++ b/net/mac80211/wep.c
@@ -3,6 +3,7 @@
* Software WEP encryption implementation
* Copyright 2002, Jouni Malinen <jkmaline@cc.hut.fi>
* Copyright 2003, Instant802 Networks, Inc.
+ * Copyright (C) 2023 Intel Corporation
*/
#include <linux/netdevice.h>
@@ -250,18 +251,18 @@ ieee80211_crypto_wep_decrypt(struct ieee80211_rx_data *rx)
if (!(status->flag & RX_FLAG_DECRYPTED)) {
if (skb_linearize(rx->skb))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_OOM;
if (ieee80211_wep_decrypt(rx->local, rx->skb, rx->key))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_WEP_DEC_FAIL;
} else if (!(status->flag & RX_FLAG_IV_STRIPPED)) {
if (!pskb_may_pull(rx->skb, ieee80211_hdrlen(fc) +
IEEE80211_WEP_IV_LEN))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_NO_IV;
ieee80211_wep_remove_iv(rx->local, rx->skb, rx->key);
/* remove ICV */
if (!(status->flag & RX_FLAG_ICV_STRIPPED) &&
pskb_trim(rx->skb, rx->skb->len - IEEE80211_WEP_ICV_LEN))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_NO_ICV;
}
return RX_CONTINUE;
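Here and in wpa.c below, the catch-all RX_DROP_UNUSABLE returns become specific RX_DROP_U_* codes, so every crypto failure reports its own SKB drop reason instead of one generic value. An illustrative (not in-tree) handler showing the idiom:

static ieee80211_rx_result ex_crypto_precheck(struct ieee80211_rx_data *rx,
					      unsigned int min_len)
{
	if (skb_linearize(rx->skb))
		return RX_DROP_U_OOM;		/* allocation failure */

	if (rx->skb->len < min_len)
		return RX_DROP_U_SHORT_CCMP;	/* truncated frame */

	return RX_CONTINUE;
}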
diff --git a/net/mac80211/wpa.c b/net/mac80211/wpa.c
index 2d8e38b3bcb5..94dae7cb6dbd 100644
--- a/net/mac80211/wpa.c
+++ b/net/mac80211/wpa.c
@@ -3,7 +3,7 @@
* Copyright 2002-2004, Instant802 Networks, Inc.
* Copyright 2008, Jouni Malinen <j@w1.fi>
* Copyright (C) 2016-2017 Intel Deutschland GmbH
- * Copyright (C) 2020-2022 Intel Corporation
+ * Copyright (C) 2020-2023 Intel Corporation
*/
#include <linux/netdevice.h>
@@ -142,7 +142,7 @@ ieee80211_rx_h_michael_mic_verify(struct ieee80211_rx_data *rx)
* group keys and only the AP is sending real multicast
* frames in the BSS.
*/
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_AP_RX_GROUPCAST;
}
if (status->flag & RX_FLAG_MMIC_ERROR)
@@ -150,10 +150,10 @@ ieee80211_rx_h_michael_mic_verify(struct ieee80211_rx_data *rx)
hdrlen = ieee80211_hdrlen(hdr->frame_control);
if (skb->len < hdrlen + MICHAEL_MIC_LEN)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SHORT_MMIC;
if (skb_linearize(rx->skb))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_OOM;
hdr = (void *)skb->data;
data = skb->data + hdrlen;
@@ -188,7 +188,7 @@ mic_fail_no_key:
NL80211_KEYTYPE_PAIRWISE,
rx->key ? rx->key->conf.keyidx : -1,
NULL, GFP_ATOMIC);
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_MMIC_FAIL;
}
static int tkip_encrypt_skb(struct ieee80211_tx_data *tx, struct sk_buff *skb)
@@ -276,11 +276,11 @@ ieee80211_crypto_tkip_decrypt(struct ieee80211_rx_data *rx)
return RX_CONTINUE;
if (!rx->sta || skb->len - hdrlen < 12)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SHORT_TKIP;
/* it may be possible to optimize this a bit more */
if (skb_linearize(rx->skb))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_OOM;
hdr = (void *)skb->data;
/*
@@ -298,7 +298,7 @@ ieee80211_crypto_tkip_decrypt(struct ieee80211_rx_data *rx)
&rx->tkip.iv32,
&rx->tkip.iv16);
if (res != TKIP_DECRYPT_OK)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_TKIP_FAIL;
/* Trim ICV */
if (!(status->flag & RX_FLAG_ICV_STRIPPED))
@@ -523,12 +523,12 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx,
if (status->flag & RX_FLAG_DECRYPTED) {
if (!pskb_may_pull(rx->skb, hdrlen + IEEE80211_CCMP_HDR_LEN))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SHORT_CCMP;
if (status->flag & RX_FLAG_MIC_STRIPPED)
mic_len = 0;
} else {
if (skb_linearize(rx->skb))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_OOM;
}
/* reload hdr - skb might have been reallocated */
@@ -536,7 +536,7 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx,
data_len = skb->len - hdrlen - IEEE80211_CCMP_HDR_LEN - mic_len;
if (!rx->sta || data_len < 0)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SHORT_CCMP;
if (!(status->flag & RX_FLAG_PN_VALIDATED)) {
int res;
@@ -574,7 +574,7 @@ ieee80211_crypto_ccmp_decrypt(struct ieee80211_rx_data *rx,
/* Remove CCMP header and MIC */
if (pskb_trim(skb, skb->len - mic_len))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SHORT_CCMP_MIC;
memmove(skb->data + IEEE80211_CCMP_HDR_LEN, skb->data, hdrlen);
skb_pull(skb, IEEE80211_CCMP_HDR_LEN);
@@ -719,12 +719,12 @@ ieee80211_crypto_gcmp_decrypt(struct ieee80211_rx_data *rx)
if (status->flag & RX_FLAG_DECRYPTED) {
if (!pskb_may_pull(rx->skb, hdrlen + IEEE80211_GCMP_HDR_LEN))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SHORT_GCMP;
if (status->flag & RX_FLAG_MIC_STRIPPED)
mic_len = 0;
} else {
if (skb_linearize(rx->skb))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_OOM;
}
/* reload hdr - skb might have been reallocated */
@@ -732,7 +732,7 @@ ieee80211_crypto_gcmp_decrypt(struct ieee80211_rx_data *rx)
data_len = skb->len - hdrlen - IEEE80211_GCMP_HDR_LEN - mic_len;
if (!rx->sta || data_len < 0)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SHORT_GCMP;
if (!(status->flag & RX_FLAG_PN_VALIDATED)) {
int res;
@@ -771,7 +771,7 @@ ieee80211_crypto_gcmp_decrypt(struct ieee80211_rx_data *rx)
/* Remove GCMP header and MIC */
if (pskb_trim(skb, skb->len - mic_len))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SHORT_GCMP_MIC;
memmove(skb->data + IEEE80211_GCMP_HDR_LEN, skb->data, hdrlen);
skb_pull(skb, IEEE80211_GCMP_HDR_LEN);
@@ -924,7 +924,7 @@ ieee80211_crypto_aes_cmac_decrypt(struct ieee80211_rx_data *rx)
/* management frames are already linear */
if (skb->len < 24 + sizeof(*mmie))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SHORT_CMAC;
mmie = (struct ieee80211_mmie *)
(skb->data + skb->len - sizeof(*mmie));
@@ -974,13 +974,13 @@ ieee80211_crypto_aes_cmac_256_decrypt(struct ieee80211_rx_data *rx)
/* management frames are already linear */
if (skb->len < 24 + sizeof(*mmie))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SHORT_CMAC256;
mmie = (struct ieee80211_mmie_16 *)
(skb->data + skb->len - sizeof(*mmie));
if (mmie->element_id != WLAN_EID_MMIE ||
mmie->length != sizeof(*mmie) - 2)
- return RX_DROP_UNUSABLE; /* Invalid MMIE */
+ return RX_DROP_U_BAD_MMIE; /* Invalid MMIE */
bip_ipn_swap(ipn, mmie->sequence_number);
@@ -1073,7 +1073,7 @@ ieee80211_crypto_aes_gmac_decrypt(struct ieee80211_rx_data *rx)
/* management frames are already linear */
if (skb->len < 24 + sizeof(*mmie))
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_SHORT_GMAC;
mmie = (struct ieee80211_mmie_16 *)
(skb->data + skb->len - sizeof(*mmie));
@@ -1097,7 +1097,7 @@ ieee80211_crypto_aes_gmac_decrypt(struct ieee80211_rx_data *rx)
mic = kmalloc(GMAC_MIC_LEN, GFP_ATOMIC);
if (!mic)
- return RX_DROP_UNUSABLE;
+ return RX_DROP_U_OOM;
if (ieee80211_aes_gmac(key->u.aes_gmac.tfm, aad, nonce,
skb->data + 24, skb->len - 24,
mic) < 0 ||
diff --git a/net/mptcp/Makefile b/net/mptcp/Makefile
index 84e531f86b82..bcf1dbf3a432 100644
--- a/net/mptcp/Makefile
+++ b/net/mptcp/Makefile
@@ -2,7 +2,8 @@
obj-$(CONFIG_MPTCP) += mptcp.o
mptcp-y := protocol.o subflow.o options.o token.o crypto.o ctrl.o pm.o diag.o \
- mib.o pm_netlink.o sockopt.o pm_userspace.o fastopen.o sched.o
+ mib.o pm_netlink.o sockopt.o pm_userspace.o fastopen.o sched.o \
+ mptcp_pm_gen.o
obj-$(CONFIG_SYN_COOKIES) += syncookies.o
obj-$(CONFIG_INET_MPTCP_DIAG) += mptcp_diag.o
diff --git a/net/mptcp/ctrl.c b/net/mptcp/ctrl.c
index e72b518c5d02..13fe0748dde8 100644
--- a/net/mptcp/ctrl.c
+++ b/net/mptcp/ctrl.c
@@ -27,6 +27,7 @@ struct mptcp_pernet {
#endif
unsigned int add_addr_timeout;
+ unsigned int close_timeout;
unsigned int stale_loss_cnt;
u8 mptcp_enabled;
u8 checksum_enabled;
@@ -65,6 +66,13 @@ unsigned int mptcp_stale_loss_cnt(const struct net *net)
return mptcp_get_pernet(net)->stale_loss_cnt;
}
+unsigned int mptcp_close_timeout(const struct sock *sk)
+{
+ if (sock_flag(sk, SOCK_DEAD))
+ return TCP_TIMEWAIT_LEN;
+ return mptcp_get_pernet(sock_net(sk))->close_timeout;
+}
+
int mptcp_get_pm_type(const struct net *net)
{
return mptcp_get_pernet(net)->pm_type;
@@ -79,6 +87,7 @@ static void mptcp_pernet_set_defaults(struct mptcp_pernet *pernet)
{
pernet->mptcp_enabled = 1;
pernet->add_addr_timeout = TCP_RTO_MAX;
+ pernet->close_timeout = TCP_TIMEWAIT_LEN;
pernet->checksum_enabled = 0;
pernet->allow_join_initial_addr_port = 1;
pernet->stale_loss_cnt = 4;
@@ -141,6 +150,12 @@ static struct ctl_table mptcp_sysctl_table[] = {
.mode = 0644,
.proc_handler = proc_dostring,
},
+ {
+ .procname = "close_timeout",
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_jiffies,
+ },
{}
};
@@ -163,6 +178,7 @@ static int mptcp_pernet_new_table(struct net *net, struct mptcp_pernet *pernet)
table[4].data = &pernet->stale_loss_cnt;
table[5].data = &pernet->pm_type;
table[6].data = &pernet->scheduler;
+ table[7].data = &pernet->close_timeout;
hdr = register_net_sysctl_sz(net, MPTCP_SYSCTL_PATH, table,
ARRAY_SIZE(mptcp_sysctl_table));
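The new close_timeout sysctl (under net.mptcp) bounds how long a closing MPTCP socket lingers before cleanup; it is stored in jiffies via proc_dointvec_jiffies (userspace reads and writes seconds), defaults to TCP_TIMEWAIT_LEN, and mptcp_close_timeout() keeps returning TCP_TIMEWAIT_LEN for orphaned (SOCK_DEAD) sockets. A hypothetical caller sketch; the timer used here is only for illustration, not the in-tree close path:

static void ex_mptcp_reset_close_timer(struct sock *sk)
{
	unsigned long timeout = mptcp_close_timeout(sk);	/* jiffies */

	sk_reset_timer(sk, &sk->sk_timer, jiffies + timeout);
}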
diff --git a/net/mptcp/fastopen.c b/net/mptcp/fastopen.c
index bceaab8dd8e4..74698582a285 100644
--- a/net/mptcp/fastopen.c
+++ b/net/mptcp/fastopen.c
@@ -52,6 +52,7 @@ void mptcp_fastopen_subflow_synack_set_params(struct mptcp_subflow_context *subf
mptcp_set_owner_r(skb, sk);
__skb_queue_tail(&sk->sk_receive_queue, skb);
+ mptcp_sk(sk)->bytes_received += skb->len;
sk->sk_data_ready(sk);
diff --git a/net/mptcp/mptcp_pm_gen.c b/net/mptcp/mptcp_pm_gen.c
new file mode 100644
index 000000000000..a2325e70ddab
--- /dev/null
+++ b/net/mptcp/mptcp_pm_gen.c
@@ -0,0 +1,179 @@
+// SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)
+/* Do not edit directly, auto-generated from: */
+/* Documentation/netlink/specs/mptcp.yaml */
+/* YNL-GEN kernel source */
+
+#include <net/netlink.h>
+#include <net/genetlink.h>
+
+#include "mptcp_pm_gen.h"
+
+#include <uapi/linux/mptcp_pm.h>
+
+/* Common nested types */
+const struct nla_policy mptcp_pm_address_nl_policy[MPTCP_PM_ADDR_ATTR_IF_IDX + 1] = {
+ [MPTCP_PM_ADDR_ATTR_FAMILY] = { .type = NLA_U16, },
+ [MPTCP_PM_ADDR_ATTR_ID] = { .type = NLA_U8, },
+ [MPTCP_PM_ADDR_ATTR_ADDR4] = { .type = NLA_U32, },
+ [MPTCP_PM_ADDR_ATTR_ADDR6] = NLA_POLICY_EXACT_LEN(16),
+ [MPTCP_PM_ADDR_ATTR_PORT] = { .type = NLA_U16, },
+ [MPTCP_PM_ADDR_ATTR_FLAGS] = { .type = NLA_U32, },
+ [MPTCP_PM_ADDR_ATTR_IF_IDX] = { .type = NLA_S32, },
+};
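The generated nested policy describes an MPTCP address blob once and is referenced by every command that carries one; genetlink validates top-level attributes against the per-op policies below, while nested address attributes are parsed in the handlers. An illustrative fragment of such parsing (not copied from the in-tree path-manager code):

static int ex_parse_pm_addr(struct genl_info *info, struct nlattr *attr)
{
	struct nlattr *tb[MPTCP_PM_ADDR_ATTR_IF_IDX + 1];
	int err;

	/* validate the nest against the generated address policy */
	err = nla_parse_nested(tb, MPTCP_PM_ADDR_ATTR_IF_IDX, attr,
			       mptcp_pm_address_nl_policy, info->extack);
	if (err)
		return err;

	if (tb[MPTCP_PM_ADDR_ATTR_ID])
		pr_debug("addr id %u\n",
			 nla_get_u8(tb[MPTCP_PM_ADDR_ATTR_ID]));

	return 0;
}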
+
+/* MPTCP_PM_CMD_ADD_ADDR - do */
+const struct nla_policy mptcp_pm_add_addr_nl_policy[MPTCP_PM_ENDPOINT_ADDR + 1] = {
+ [MPTCP_PM_ENDPOINT_ADDR] = NLA_POLICY_NESTED(mptcp_pm_address_nl_policy),
+};
+
+/* MPTCP_PM_CMD_DEL_ADDR - do */
+const struct nla_policy mptcp_pm_del_addr_nl_policy[MPTCP_PM_ENDPOINT_ADDR + 1] = {
+ [MPTCP_PM_ENDPOINT_ADDR] = NLA_POLICY_NESTED(mptcp_pm_address_nl_policy),
+};
+
+/* MPTCP_PM_CMD_GET_ADDR - do */
+const struct nla_policy mptcp_pm_get_addr_nl_policy[MPTCP_PM_ENDPOINT_ADDR + 1] = {
+ [MPTCP_PM_ENDPOINT_ADDR] = NLA_POLICY_NESTED(mptcp_pm_address_nl_policy),
+};
+
+/* MPTCP_PM_CMD_FLUSH_ADDRS - do */
+const struct nla_policy mptcp_pm_flush_addrs_nl_policy[MPTCP_PM_ENDPOINT_ADDR + 1] = {
+ [MPTCP_PM_ENDPOINT_ADDR] = NLA_POLICY_NESTED(mptcp_pm_address_nl_policy),
+};
+
+/* MPTCP_PM_CMD_SET_LIMITS - do */
+const struct nla_policy mptcp_pm_set_limits_nl_policy[MPTCP_PM_ATTR_SUBFLOWS + 1] = {
+ [MPTCP_PM_ATTR_RCV_ADD_ADDRS] = { .type = NLA_U32, },
+ [MPTCP_PM_ATTR_SUBFLOWS] = { .type = NLA_U32, },
+};
+
+/* MPTCP_PM_CMD_GET_LIMITS - do */
+const struct nla_policy mptcp_pm_get_limits_nl_policy[MPTCP_PM_ATTR_SUBFLOWS + 1] = {
+ [MPTCP_PM_ATTR_RCV_ADD_ADDRS] = { .type = NLA_U32, },
+ [MPTCP_PM_ATTR_SUBFLOWS] = { .type = NLA_U32, },
+};
+
+/* MPTCP_PM_CMD_SET_FLAGS - do */
+const struct nla_policy mptcp_pm_set_flags_nl_policy[MPTCP_PM_ATTR_ADDR_REMOTE + 1] = {
+ [MPTCP_PM_ATTR_ADDR] = NLA_POLICY_NESTED(mptcp_pm_address_nl_policy),
+ [MPTCP_PM_ATTR_TOKEN] = { .type = NLA_U32, },
+ [MPTCP_PM_ATTR_ADDR_REMOTE] = NLA_POLICY_NESTED(mptcp_pm_address_nl_policy),
+};
+
+/* MPTCP_PM_CMD_ANNOUNCE - do */
+const struct nla_policy mptcp_pm_announce_nl_policy[MPTCP_PM_ATTR_TOKEN + 1] = {
+ [MPTCP_PM_ATTR_ADDR] = NLA_POLICY_NESTED(mptcp_pm_address_nl_policy),
+ [MPTCP_PM_ATTR_TOKEN] = { .type = NLA_U32, },
+};
+
+/* MPTCP_PM_CMD_REMOVE - do */
+const struct nla_policy mptcp_pm_remove_nl_policy[MPTCP_PM_ATTR_LOC_ID + 1] = {
+ [MPTCP_PM_ATTR_TOKEN] = { .type = NLA_U32, },
+ [MPTCP_PM_ATTR_LOC_ID] = { .type = NLA_U8, },
+};
+
+/* MPTCP_PM_CMD_SUBFLOW_CREATE - do */
+const struct nla_policy mptcp_pm_subflow_create_nl_policy[MPTCP_PM_ATTR_ADDR_REMOTE + 1] = {
+ [MPTCP_PM_ATTR_ADDR] = NLA_POLICY_NESTED(mptcp_pm_address_nl_policy),
+ [MPTCP_PM_ATTR_TOKEN] = { .type = NLA_U32, },
+ [MPTCP_PM_ATTR_ADDR_REMOTE] = NLA_POLICY_NESTED(mptcp_pm_address_nl_policy),
+};
+
+/* MPTCP_PM_CMD_SUBFLOW_DESTROY - do */
+const struct nla_policy mptcp_pm_subflow_destroy_nl_policy[MPTCP_PM_ATTR_ADDR_REMOTE + 1] = {
+ [MPTCP_PM_ATTR_ADDR] = NLA_POLICY_NESTED(mptcp_pm_address_nl_policy),
+ [MPTCP_PM_ATTR_TOKEN] = { .type = NLA_U32, },
+ [MPTCP_PM_ATTR_ADDR_REMOTE] = NLA_POLICY_NESTED(mptcp_pm_address_nl_policy),
+};
+
+/* Ops table for mptcp_pm */
+const struct genl_ops mptcp_pm_nl_ops[11] = {
+ {
+ .cmd = MPTCP_PM_CMD_ADD_ADDR,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .doit = mptcp_pm_nl_add_addr_doit,
+ .policy = mptcp_pm_add_addr_nl_policy,
+ .maxattr = MPTCP_PM_ENDPOINT_ADDR,
+ .flags = GENL_UNS_ADMIN_PERM,
+ },
+ {
+ .cmd = MPTCP_PM_CMD_DEL_ADDR,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .doit = mptcp_pm_nl_del_addr_doit,
+ .policy = mptcp_pm_del_addr_nl_policy,
+ .maxattr = MPTCP_PM_ENDPOINT_ADDR,
+ .flags = GENL_UNS_ADMIN_PERM,
+ },
+ {
+ .cmd = MPTCP_PM_CMD_GET_ADDR,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .doit = mptcp_pm_nl_get_addr_doit,
+ .dumpit = mptcp_pm_nl_get_addr_dumpit,
+ .policy = mptcp_pm_get_addr_nl_policy,
+ .maxattr = MPTCP_PM_ENDPOINT_ADDR,
+ .flags = GENL_UNS_ADMIN_PERM,
+ },
+ {
+ .cmd = MPTCP_PM_CMD_FLUSH_ADDRS,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .doit = mptcp_pm_nl_flush_addrs_doit,
+ .policy = mptcp_pm_flush_addrs_nl_policy,
+ .maxattr = MPTCP_PM_ENDPOINT_ADDR,
+ .flags = GENL_UNS_ADMIN_PERM,
+ },
+ {
+ .cmd = MPTCP_PM_CMD_SET_LIMITS,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .doit = mptcp_pm_nl_set_limits_doit,
+ .policy = mptcp_pm_set_limits_nl_policy,
+ .maxattr = MPTCP_PM_ATTR_SUBFLOWS,
+ .flags = GENL_UNS_ADMIN_PERM,
+ },
+ {
+ .cmd = MPTCP_PM_CMD_GET_LIMITS,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .doit = mptcp_pm_nl_get_limits_doit,
+ .policy = mptcp_pm_get_limits_nl_policy,
+ .maxattr = MPTCP_PM_ATTR_SUBFLOWS,
+ },
+ {
+ .cmd = MPTCP_PM_CMD_SET_FLAGS,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .doit = mptcp_pm_nl_set_flags_doit,
+ .policy = mptcp_pm_set_flags_nl_policy,
+ .maxattr = MPTCP_PM_ATTR_ADDR_REMOTE,
+ .flags = GENL_UNS_ADMIN_PERM,
+ },
+ {
+ .cmd = MPTCP_PM_CMD_ANNOUNCE,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .doit = mptcp_pm_nl_announce_doit,
+ .policy = mptcp_pm_announce_nl_policy,
+ .maxattr = MPTCP_PM_ATTR_TOKEN,
+ .flags = GENL_UNS_ADMIN_PERM,
+ },
+ {
+ .cmd = MPTCP_PM_CMD_REMOVE,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .doit = mptcp_pm_nl_remove_doit,
+ .policy = mptcp_pm_remove_nl_policy,
+ .maxattr = MPTCP_PM_ATTR_LOC_ID,
+ .flags = GENL_UNS_ADMIN_PERM,
+ },
+ {
+ .cmd = MPTCP_PM_CMD_SUBFLOW_CREATE,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .doit = mptcp_pm_nl_subflow_create_doit,
+ .policy = mptcp_pm_subflow_create_nl_policy,
+ .maxattr = MPTCP_PM_ATTR_ADDR_REMOTE,
+ .flags = GENL_UNS_ADMIN_PERM,
+ },
+ {
+ .cmd = MPTCP_PM_CMD_SUBFLOW_DESTROY,
+ .validate = GENL_DONT_VALIDATE_STRICT,
+ .doit = mptcp_pm_nl_subflow_destroy_doit,
+ .policy = mptcp_pm_subflow_destroy_nl_policy,
+ .maxattr = MPTCP_PM_ATTR_ADDR_REMOTE,
+ .flags = GENL_UNS_ADMIN_PERM,
+ },
+};
diff --git a/net/mptcp/mptcp_pm_gen.h b/net/mptcp/mptcp_pm_gen.h
new file mode 100644
index 000000000000..10579d184587
--- /dev/null
+++ b/net/mptcp/mptcp_pm_gen.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */
+/* Do not edit directly, auto-generated from: */
+/* Documentation/netlink/specs/mptcp.yaml */
+/* YNL-GEN kernel header */
+
+#ifndef _LINUX_MPTCP_PM_GEN_H
+#define _LINUX_MPTCP_PM_GEN_H
+
+#include <net/netlink.h>
+#include <net/genetlink.h>
+
+#include <uapi/linux/mptcp_pm.h>
+
+/* Common nested types */
+extern const struct nla_policy mptcp_pm_address_nl_policy[MPTCP_PM_ADDR_ATTR_IF_IDX + 1];
+
+extern const struct nla_policy mptcp_pm_add_addr_nl_policy[MPTCP_PM_ENDPOINT_ADDR + 1];
+
+extern const struct nla_policy mptcp_pm_del_addr_nl_policy[MPTCP_PM_ENDPOINT_ADDR + 1];
+
+extern const struct nla_policy mptcp_pm_get_addr_nl_policy[MPTCP_PM_ENDPOINT_ADDR + 1];
+
+extern const struct nla_policy mptcp_pm_flush_addrs_nl_policy[MPTCP_PM_ENDPOINT_ADDR + 1];
+
+extern const struct nla_policy mptcp_pm_set_limits_nl_policy[MPTCP_PM_ATTR_SUBFLOWS + 1];
+
+extern const struct nla_policy mptcp_pm_get_limits_nl_policy[MPTCP_PM_ATTR_SUBFLOWS + 1];
+
+extern const struct nla_policy mptcp_pm_set_flags_nl_policy[MPTCP_PM_ATTR_ADDR_REMOTE + 1];
+
+extern const struct nla_policy mptcp_pm_announce_nl_policy[MPTCP_PM_ATTR_TOKEN + 1];
+
+extern const struct nla_policy mptcp_pm_remove_nl_policy[MPTCP_PM_ATTR_LOC_ID + 1];
+
+extern const struct nla_policy mptcp_pm_subflow_create_nl_policy[MPTCP_PM_ATTR_ADDR_REMOTE + 1];
+
+extern const struct nla_policy mptcp_pm_subflow_destroy_nl_policy[MPTCP_PM_ATTR_ADDR_REMOTE + 1];
+
+/* Ops table for mptcp_pm */
+extern const struct genl_ops mptcp_pm_nl_ops[11];
+
+int mptcp_pm_nl_add_addr_doit(struct sk_buff *skb, struct genl_info *info);
+int mptcp_pm_nl_del_addr_doit(struct sk_buff *skb, struct genl_info *info);
+int mptcp_pm_nl_get_addr_doit(struct sk_buff *skb, struct genl_info *info);
+int mptcp_pm_nl_get_addr_dumpit(struct sk_buff *skb,
+ struct netlink_callback *cb);
+int mptcp_pm_nl_flush_addrs_doit(struct sk_buff *skb, struct genl_info *info);
+int mptcp_pm_nl_set_limits_doit(struct sk_buff *skb, struct genl_info *info);
+int mptcp_pm_nl_get_limits_doit(struct sk_buff *skb, struct genl_info *info);
+int mptcp_pm_nl_set_flags_doit(struct sk_buff *skb, struct genl_info *info);
+int mptcp_pm_nl_announce_doit(struct sk_buff *skb, struct genl_info *info);
+int mptcp_pm_nl_remove_doit(struct sk_buff *skb, struct genl_info *info);
+int mptcp_pm_nl_subflow_create_doit(struct sk_buff *skb,
+ struct genl_info *info);
+int mptcp_pm_nl_subflow_destroy_doit(struct sk_buff *skb,
+ struct genl_info *info);
+
+#endif /* _LINUX_MPTCP_PM_GEN_H */
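The generated policy and ops tables above are plugged into the mptcp_pm generic netlink family further down in pm_netlink.c (the .small_ops to .ops switch). Since the family id is assigned dynamically, any user-space client has to resolve it through the nlctrl family before issuing the MPTCP_PM_CMD_* requests listed here; a rough, dependency-free sketch of that lookup, assuming the family name is "mptcp_pm" (MPTCP_PM_NAME):

/* Sketch: resolve the dynamically assigned generic netlink family id
 * for "mptcp_pm", the first step a user-space client needs before it
 * can send any MPTCP_PM_CMD_* request. Error handling is minimal.
 */
#include <linux/genetlink.h>
#include <linux/netlink.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_nl sa = { .nl_family = AF_NETLINK };
	struct {
		struct nlmsghdr nlh;
		struct genlmsghdr genl;
		char attrs[32];
	} req;
	char buf[4096];
	struct nlmsghdr *nlh;
	struct nlattr *na;
	ssize_t len;
	int fd, rem;

	fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	/* CTRL_CMD_GETFAMILY with CTRL_ATTR_FAMILY_NAME = "mptcp_pm" */
	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_type = GENL_ID_CTRL;
	req.nlh.nlmsg_flags = NLM_F_REQUEST;
	req.genl.cmd = CTRL_CMD_GETFAMILY;
	req.genl.version = 2;
	na = (struct nlattr *)req.attrs;
	na->nla_type = CTRL_ATTR_FAMILY_NAME;
	na->nla_len = NLA_HDRLEN + sizeof("mptcp_pm");
	memcpy((char *)na + NLA_HDRLEN, "mptcp_pm", sizeof("mptcp_pm"));
	req.nlh.nlmsg_len = NLMSG_LENGTH(GENL_HDRLEN) + NLA_ALIGN(na->nla_len);

	if (sendto(fd, &req, req.nlh.nlmsg_len, 0,
		   (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		perror("sendto");
		return 1;
	}

	len = recv(fd, buf, sizeof(buf), 0);
	nlh = (struct nlmsghdr *)buf;
	if (len <= 0 || !NLMSG_OK(nlh, (unsigned int)len) ||
	    nlh->nlmsg_type == NLMSG_ERROR) {
		fprintf(stderr, "family lookup failed\n");
		return 1;
	}

	/* walk the CTRL_CMD_NEWFAMILY attributes for CTRL_ATTR_FAMILY_ID */
	na = (struct nlattr *)((char *)NLMSG_DATA(nlh) + GENL_HDRLEN);
	rem = nlh->nlmsg_len - NLMSG_LENGTH(GENL_HDRLEN);
	while (rem >= (int)sizeof(*na) && na->nla_len >= sizeof(*na)) {
		if (na->nla_type == CTRL_ATTR_FAMILY_ID) {
			unsigned short id;

			memcpy(&id, (char *)na + NLA_HDRLEN, sizeof(id));
			printf("mptcp_pm family id: %u\n", (unsigned int)id);
			break;
		}
		rem -= NLA_ALIGN(na->nla_len);
		na = (struct nlattr *)((char *)na + NLA_ALIGN(na->nla_len));
	}

	close(fd);
	return 0;
}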
diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
index d8da5374d9e1..4ae19113b8eb 100644
--- a/net/mptcp/pm.c
+++ b/net/mptcp/pm.c
@@ -184,7 +184,7 @@ void mptcp_pm_subflow_established(struct mptcp_sock *msk)
spin_unlock_bh(&pm->lock);
}
-void mptcp_pm_subflow_check_next(struct mptcp_sock *msk, const struct sock *ssk,
+void mptcp_pm_subflow_check_next(struct mptcp_sock *msk,
const struct mptcp_subflow_context *subflow)
{
struct mptcp_pm_data *pm = &msk->pm;
diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
index 9661f3812682..1529ec358815 100644
--- a/net/mptcp/pm_netlink.c
+++ b/net/mptcp/pm_netlink.c
@@ -1104,29 +1104,6 @@ static const struct genl_multicast_group mptcp_pm_mcgrps[] = {
},
};
-static const struct nla_policy
-mptcp_pm_addr_policy[MPTCP_PM_ADDR_ATTR_MAX + 1] = {
- [MPTCP_PM_ADDR_ATTR_FAMILY] = { .type = NLA_U16, },
- [MPTCP_PM_ADDR_ATTR_ID] = { .type = NLA_U8, },
- [MPTCP_PM_ADDR_ATTR_ADDR4] = { .type = NLA_U32, },
- [MPTCP_PM_ADDR_ATTR_ADDR6] =
- NLA_POLICY_EXACT_LEN(sizeof(struct in6_addr)),
- [MPTCP_PM_ADDR_ATTR_PORT] = { .type = NLA_U16 },
- [MPTCP_PM_ADDR_ATTR_FLAGS] = { .type = NLA_U32 },
- [MPTCP_PM_ADDR_ATTR_IF_IDX] = { .type = NLA_S32 },
-};
-
-static const struct nla_policy mptcp_pm_policy[MPTCP_PM_ATTR_MAX + 1] = {
- [MPTCP_PM_ATTR_ADDR] =
- NLA_POLICY_NESTED(mptcp_pm_addr_policy),
- [MPTCP_PM_ATTR_RCV_ADD_ADDRS] = { .type = NLA_U32, },
- [MPTCP_PM_ATTR_SUBFLOWS] = { .type = NLA_U32, },
- [MPTCP_PM_ATTR_TOKEN] = { .type = NLA_U32, },
- [MPTCP_PM_ATTR_LOC_ID] = { .type = NLA_U8, },
- [MPTCP_PM_ATTR_ADDR_REMOTE] =
- NLA_POLICY_NESTED(mptcp_pm_addr_policy),
-};
-
void mptcp_pm_nl_subflow_chk_stale(const struct mptcp_sock *msk, struct sock *ssk)
{
struct mptcp_subflow_context *iter, *subflow = mptcp_subflow_ctx(ssk);
@@ -1188,7 +1165,7 @@ static int mptcp_pm_parse_pm_addr_attr(struct nlattr *tb[],
/* no validation needed - was already done via nested policy */
err = nla_parse_nested_deprecated(tb, MPTCP_PM_ADDR_ATTR_MAX, attr,
- mptcp_pm_addr_policy, info->extack);
+ mptcp_pm_address_nl_policy, info->extack);
if (err)
return err;
@@ -1303,9 +1280,9 @@ next:
return 0;
}
-static int mptcp_nl_cmd_add_addr(struct sk_buff *skb, struct genl_info *info)
+int mptcp_pm_nl_add_addr_doit(struct sk_buff *skb, struct genl_info *info)
{
- struct nlattr *attr = info->attrs[MPTCP_PM_ATTR_ADDR];
+ struct nlattr *attr = info->attrs[MPTCP_PM_ENDPOINT_ADDR];
struct pm_nl_pernet *pernet = genl_info_pm_nl(info);
struct mptcp_pm_addr_entry addr, *entry;
int ret;
@@ -1484,9 +1461,9 @@ next:
return 0;
}
-static int mptcp_nl_cmd_del_addr(struct sk_buff *skb, struct genl_info *info)
+int mptcp_pm_nl_del_addr_doit(struct sk_buff *skb, struct genl_info *info)
{
- struct nlattr *attr = info->attrs[MPTCP_PM_ATTR_ADDR];
+ struct nlattr *attr = info->attrs[MPTCP_PM_ENDPOINT_ADDR];
struct pm_nl_pernet *pernet = genl_info_pm_nl(info);
struct mptcp_pm_addr_entry addr, *entry;
unsigned int addr_max;
@@ -1619,7 +1596,7 @@ static void __reset_counters(struct pm_nl_pernet *pernet)
pernet->addrs = 0;
}
-static int mptcp_nl_cmd_flush_addrs(struct sk_buff *skb, struct genl_info *info)
+int mptcp_pm_nl_flush_addrs_doit(struct sk_buff *skb, struct genl_info *info)
{
struct pm_nl_pernet *pernet = genl_info_pm_nl(info);
LIST_HEAD(free_list);
@@ -1675,9 +1652,9 @@ nla_put_failure:
return -EMSGSIZE;
}
-static int mptcp_nl_cmd_get_addr(struct sk_buff *skb, struct genl_info *info)
+int mptcp_pm_nl_get_addr_doit(struct sk_buff *skb, struct genl_info *info)
{
- struct nlattr *attr = info->attrs[MPTCP_PM_ATTR_ADDR];
+ struct nlattr *attr = info->attrs[MPTCP_PM_ENDPOINT_ADDR];
struct pm_nl_pernet *pernet = genl_info_pm_nl(info);
struct mptcp_pm_addr_entry addr, *entry;
struct sk_buff *msg;
@@ -1725,8 +1702,8 @@ fail:
return ret;
}
-static int mptcp_nl_cmd_dump_addrs(struct sk_buff *msg,
- struct netlink_callback *cb)
+int mptcp_pm_nl_get_addr_dumpit(struct sk_buff *msg,
+ struct netlink_callback *cb)
{
struct net *net = sock_net(msg->sk);
struct mptcp_pm_addr_entry *entry;
@@ -1783,8 +1760,7 @@ static int parse_limit(struct genl_info *info, int id, unsigned int *limit)
return 0;
}
-static int
-mptcp_nl_cmd_set_limits(struct sk_buff *skb, struct genl_info *info)
+int mptcp_pm_nl_set_limits_doit(struct sk_buff *skb, struct genl_info *info)
{
struct pm_nl_pernet *pernet = genl_info_pm_nl(info);
unsigned int rcv_addrs, subflows;
@@ -1809,8 +1785,7 @@ unlock:
return ret;
}
-static int
-mptcp_nl_cmd_get_limits(struct sk_buff *skb, struct genl_info *info)
+int mptcp_pm_nl_get_limits_doit(struct sk_buff *skb, struct genl_info *info)
{
struct pm_nl_pernet *pernet = genl_info_pm_nl(info);
struct sk_buff *msg;
@@ -1919,7 +1894,7 @@ int mptcp_pm_nl_set_flags(struct net *net, struct mptcp_pm_addr_entry *addr, u8
return 0;
}
-static int mptcp_nl_cmd_set_flags(struct sk_buff *skb, struct genl_info *info)
+int mptcp_pm_nl_set_flags_doit(struct sk_buff *skb, struct genl_info *info)
{
struct mptcp_pm_addr_entry remote = { .addr = { .family = AF_UNSPEC }, };
struct mptcp_pm_addr_entry addr = { .addr = { .family = AF_UNSPEC }, };
@@ -2283,72 +2258,13 @@ nla_put_failure:
nlmsg_free(skb);
}
-static const struct genl_small_ops mptcp_pm_ops[] = {
- {
- .cmd = MPTCP_PM_CMD_ADD_ADDR,
- .doit = mptcp_nl_cmd_add_addr,
- .flags = GENL_UNS_ADMIN_PERM,
- },
- {
- .cmd = MPTCP_PM_CMD_DEL_ADDR,
- .doit = mptcp_nl_cmd_del_addr,
- .flags = GENL_UNS_ADMIN_PERM,
- },
- {
- .cmd = MPTCP_PM_CMD_FLUSH_ADDRS,
- .doit = mptcp_nl_cmd_flush_addrs,
- .flags = GENL_UNS_ADMIN_PERM,
- },
- {
- .cmd = MPTCP_PM_CMD_GET_ADDR,
- .doit = mptcp_nl_cmd_get_addr,
- .dumpit = mptcp_nl_cmd_dump_addrs,
- },
- {
- .cmd = MPTCP_PM_CMD_SET_LIMITS,
- .doit = mptcp_nl_cmd_set_limits,
- .flags = GENL_UNS_ADMIN_PERM,
- },
- {
- .cmd = MPTCP_PM_CMD_GET_LIMITS,
- .doit = mptcp_nl_cmd_get_limits,
- },
- {
- .cmd = MPTCP_PM_CMD_SET_FLAGS,
- .doit = mptcp_nl_cmd_set_flags,
- .flags = GENL_UNS_ADMIN_PERM,
- },
- {
- .cmd = MPTCP_PM_CMD_ANNOUNCE,
- .doit = mptcp_nl_cmd_announce,
- .flags = GENL_UNS_ADMIN_PERM,
- },
- {
- .cmd = MPTCP_PM_CMD_REMOVE,
- .doit = mptcp_nl_cmd_remove,
- .flags = GENL_UNS_ADMIN_PERM,
- },
- {
- .cmd = MPTCP_PM_CMD_SUBFLOW_CREATE,
- .doit = mptcp_nl_cmd_sf_create,
- .flags = GENL_UNS_ADMIN_PERM,
- },
- {
- .cmd = MPTCP_PM_CMD_SUBFLOW_DESTROY,
- .doit = mptcp_nl_cmd_sf_destroy,
- .flags = GENL_UNS_ADMIN_PERM,
- },
-};
-
static struct genl_family mptcp_genl_family __ro_after_init = {
.name = MPTCP_PM_NAME,
.version = MPTCP_PM_VER,
- .maxattr = MPTCP_PM_ATTR_MAX,
- .policy = mptcp_pm_policy,
.netnsok = true,
.module = THIS_MODULE,
- .small_ops = mptcp_pm_ops,
- .n_small_ops = ARRAY_SIZE(mptcp_pm_ops),
+ .ops = mptcp_pm_nl_ops,
+ .n_ops = ARRAY_SIZE(mptcp_pm_nl_ops),
.resv_start_op = MPTCP_PM_CMD_SUBFLOW_DESTROY + 1,
.mcgrps = mptcp_pm_mcgrps,
.n_mcgrps = ARRAY_SIZE(mptcp_pm_mcgrps),
diff --git a/net/mptcp/pm_userspace.c b/net/mptcp/pm_userspace.c
index d042d32beb4d..5c01b9bc619a 100644
--- a/net/mptcp/pm_userspace.c
+++ b/net/mptcp/pm_userspace.c
@@ -145,13 +145,14 @@ int mptcp_userspace_pm_get_local_id(struct mptcp_sock *msk,
return mptcp_userspace_pm_append_new_local_addr(msk, &new_entry);
}
-int mptcp_nl_cmd_announce(struct sk_buff *skb, struct genl_info *info)
+int mptcp_pm_nl_announce_doit(struct sk_buff *skb, struct genl_info *info)
{
struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN];
struct nlattr *addr = info->attrs[MPTCP_PM_ATTR_ADDR];
struct mptcp_pm_addr_entry addr_val;
struct mptcp_sock *msk;
int err = -EINVAL;
+ struct sock *sk;
u32 token_val;
if (!addr || !token) {
@@ -167,6 +168,8 @@ int mptcp_nl_cmd_announce(struct sk_buff *skb, struct genl_info *info)
return err;
}
+ sk = (struct sock *)msk;
+
if (!mptcp_pm_is_userspace(msk)) {
GENL_SET_ERR_MSG(info, "invalid request; userspace PM not selected");
goto announce_err;
@@ -190,7 +193,7 @@ int mptcp_nl_cmd_announce(struct sk_buff *skb, struct genl_info *info)
goto announce_err;
}
- lock_sock((struct sock *)msk);
+ lock_sock(sk);
spin_lock_bh(&msk->pm.lock);
if (mptcp_pm_alloc_anno_list(msk, &addr_val.addr)) {
@@ -200,15 +203,49 @@ int mptcp_nl_cmd_announce(struct sk_buff *skb, struct genl_info *info)
}
spin_unlock_bh(&msk->pm.lock);
- release_sock((struct sock *)msk);
+ release_sock(sk);
err = 0;
announce_err:
- sock_put((struct sock *)msk);
+ sock_put(sk);
+ return err;
+}
+
+static int mptcp_userspace_pm_remove_id_zero_address(struct mptcp_sock *msk,
+ struct genl_info *info)
+{
+ struct mptcp_rm_list list = { .nr = 0 };
+ struct mptcp_subflow_context *subflow;
+ struct sock *sk = (struct sock *)msk;
+ bool has_id_0 = false;
+ int err = -EINVAL;
+
+ lock_sock(sk);
+ mptcp_for_each_subflow(msk, subflow) {
+ if (subflow->local_id == 0) {
+ has_id_0 = true;
+ break;
+ }
+ }
+ if (!has_id_0) {
+ GENL_SET_ERR_MSG(info, "address with id 0 not found");
+ goto remove_err;
+ }
+
+ list.ids[list.nr++] = 0;
+
+ spin_lock_bh(&msk->pm.lock);
+ mptcp_pm_remove_addr(msk, &list);
+ spin_unlock_bh(&msk->pm.lock);
+
+ err = 0;
+
+remove_err:
+ release_sock(sk);
return err;
}
-int mptcp_nl_cmd_remove(struct sk_buff *skb, struct genl_info *info)
+int mptcp_pm_nl_remove_doit(struct sk_buff *skb, struct genl_info *info)
{
struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN];
struct nlattr *id = info->attrs[MPTCP_PM_ATTR_LOC_ID];
@@ -217,6 +254,7 @@ int mptcp_nl_cmd_remove(struct sk_buff *skb, struct genl_info *info)
struct mptcp_sock *msk;
LIST_HEAD(free_list);
int err = -EINVAL;
+ struct sock *sk;
u32 token_val;
u8 id_val;
@@ -234,12 +272,19 @@ int mptcp_nl_cmd_remove(struct sk_buff *skb, struct genl_info *info)
return err;
}
+ sk = (struct sock *)msk;
+
if (!mptcp_pm_is_userspace(msk)) {
GENL_SET_ERR_MSG(info, "invalid request; userspace PM not selected");
goto remove_err;
}
- lock_sock((struct sock *)msk);
+ if (id_val == 0) {
+ err = mptcp_userspace_pm_remove_id_zero_address(msk, info);
+ goto remove_err;
+ }
+
+ lock_sock(sk);
list_for_each_entry(entry, &msk->pm.userspace_pm_local_addr_list, list) {
if (entry->addr.id == id_val) {
@@ -250,7 +295,7 @@ int mptcp_nl_cmd_remove(struct sk_buff *skb, struct genl_info *info)
if (!match) {
GENL_SET_ERR_MSG(info, "address with specified id not found");
- release_sock((struct sock *)msk);
+ release_sock(sk);
goto remove_err;
}
@@ -258,19 +303,19 @@ int mptcp_nl_cmd_remove(struct sk_buff *skb, struct genl_info *info)
mptcp_pm_remove_addrs(msk, &free_list);
- release_sock((struct sock *)msk);
+ release_sock(sk);
list_for_each_entry_safe(match, entry, &free_list, list) {
- sock_kfree_s((struct sock *)msk, match, sizeof(*match));
+ sock_kfree_s(sk, match, sizeof(*match));
}
err = 0;
remove_err:
- sock_put((struct sock *)msk);
+ sock_put(sk);
return err;
}
-int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info)
+int mptcp_pm_nl_subflow_create_doit(struct sk_buff *skb, struct genl_info *info)
{
struct nlattr *raddr = info->attrs[MPTCP_PM_ATTR_ADDR_REMOTE];
struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN];
@@ -296,6 +341,8 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info)
return err;
}
+ sk = (struct sock *)msk;
+
if (!mptcp_pm_is_userspace(msk)) {
GENL_SET_ERR_MSG(info, "invalid request; userspace PM not selected");
goto create_err;
@@ -313,8 +360,6 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info)
goto create_err;
}
- sk = (struct sock *)msk;
-
if (!mptcp_pm_addr_families_match(sk, &addr_l, &addr_r)) {
GENL_SET_ERR_MSG(info, "families mismatch");
err = -EINVAL;
@@ -342,7 +387,7 @@ int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info)
spin_unlock_bh(&msk->pm.lock);
create_err:
- sock_put((struct sock *)msk);
+ sock_put(sk);
return err;
}
@@ -394,7 +439,7 @@ static struct sock *mptcp_nl_find_ssk(struct mptcp_sock *msk,
return NULL;
}
-int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info)
+int mptcp_pm_nl_subflow_destroy_doit(struct sk_buff *skb, struct genl_info *info)
{
struct nlattr *raddr = info->attrs[MPTCP_PM_ATTR_ADDR_REMOTE];
struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN];
@@ -419,6 +464,8 @@ int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info)
return err;
}
+ sk = (struct sock *)msk;
+
if (!mptcp_pm_is_userspace(msk)) {
GENL_SET_ERR_MSG(info, "invalid request; userspace PM not selected");
goto destroy_err;
@@ -448,7 +495,6 @@ int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info)
goto destroy_err;
}
- sk = (struct sock *)msk;
lock_sock(sk);
ssk = mptcp_nl_find_ssk(msk, &addr_l, &addr_r);
if (ssk) {
@@ -468,7 +514,7 @@ int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info)
release_sock(sk);
destroy_err:
- sock_put((struct sock *)msk);
+ sock_put(sk);
return err;
}
@@ -478,6 +524,7 @@ int mptcp_userspace_pm_set_flags(struct net *net, struct nlattr *token,
{
struct mptcp_sock *msk;
int ret = -EINVAL;
+ struct sock *sk;
u32 token_val;
token_val = nla_get_u32(token);
@@ -486,6 +533,8 @@ int mptcp_userspace_pm_set_flags(struct net *net, struct nlattr *token,
if (!msk)
return ret;
+ sk = (struct sock *)msk;
+
if (!mptcp_pm_is_userspace(msk))
goto set_flags_err;
@@ -493,11 +542,11 @@ int mptcp_userspace_pm_set_flags(struct net *net, struct nlattr *token,
rem->addr.family == AF_UNSPEC)
goto set_flags_err;
- lock_sock((struct sock *)msk);
+ lock_sock(sk);
ret = mptcp_pm_nl_mp_prio_send_ack(msk, &loc->addr, &rem->addr, bkup);
- release_sock((struct sock *)msk);
+ release_sock(sk);
set_flags_err:
- sock_put((struct sock *)msk);
+ sock_put(sk);
return ret;
}
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 886ab689a8ae..a0b8356cd8c5 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -121,8 +121,6 @@ struct sock *__mptcp_nmpc_sk(struct mptcp_sock *msk)
ret = __mptcp_socket_create(msk);
if (ret)
return ERR_PTR(ret);
-
- mptcp_sockopt_sync(msk, msk->first);
}
return msk->first;
@@ -863,9 +861,8 @@ void mptcp_data_ready(struct sock *sk, struct sock *ssk)
/* Wake-up the reader only for in-sequence data */
mptcp_data_lock(sk);
- if (move_skbs_to_msk(msk, ssk))
+ if (move_skbs_to_msk(msk, ssk) && mptcp_epollin_ready(sk))
sk->sk_data_ready(sk);
-
mptcp_data_unlock(sk);
}
@@ -893,6 +890,7 @@ static bool __mptcp_finish_join(struct mptcp_sock *msk, struct sock *ssk)
mptcp_sockopt_sync_locked(msk, ssk);
mptcp_subflow_joined(msk, ssk);
mptcp_stop_tout_timer(sk);
+ __mptcp_propagate_sndbuf(sk, ssk);
return true;
}
@@ -1079,15 +1077,16 @@ static void mptcp_enter_memory_pressure(struct sock *sk)
struct mptcp_sock *msk = mptcp_sk(sk);
bool first = true;
- sk_stream_moderate_sndbuf(sk);
mptcp_for_each_subflow(msk, subflow) {
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
if (first)
tcp_enter_memory_pressure(ssk);
sk_stream_moderate_sndbuf(ssk);
+
first = false;
}
+ __mptcp_sync_sndbuf(sk);
}
/* ensure we get enough memory for the frag hdr, beyond some minimal amount of
@@ -1268,7 +1267,7 @@ static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk,
* queue management operation, to avoid breaking the ext <->
* SSN association set here
*/
- mpext = skb_ext_find(skb, SKB_EXT_MPTCP);
+ mpext = mptcp_get_ext(skb);
if (!mptcp_skb_can_collapse_to(data_seq, skb, mpext)) {
TCP_SKB_CB(skb)->eor = 1;
goto alloc_skb;
@@ -1290,7 +1289,7 @@ alloc_skb:
i = skb_shinfo(skb)->nr_frags;
reuse_skb = false;
- mpext = skb_ext_find(skb, SKB_EXT_MPTCP);
+ mpext = mptcp_get_ext(skb);
}
/* Zero window and all data acked? Probe. */
@@ -1761,6 +1760,18 @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg,
return ret;
}
+static int do_copy_data_nocache(struct sock *sk, int copy,
+ struct iov_iter *from, char *to)
+{
+ if (sk->sk_route_caps & NETIF_F_NOCACHE_COPY) {
+ if (!copy_from_iter_full_nocache(to, copy, from))
+ return -EFAULT;
+ } else if (!copy_from_iter_full(to, copy, from)) {
+ return -EFAULT;
+ }
+ return 0;
+}
+
static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
{
struct mptcp_sock *msk = mptcp_sk(sk);
@@ -1834,11 +1845,10 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
if (!sk_wmem_schedule(sk, total_ts))
goto wait_for_memory;
- if (copy_page_from_iter(dfrag->page, offset, psize,
- &msg->msg_iter) != psize) {
- ret = -EFAULT;
+ ret = do_copy_data_nocache(sk, psize, &msg->msg_iter,
+ page_address(dfrag->page) + offset);
+ if (ret)
goto do_error;
- }
/* data successfully copied into the write queue */
sk_forward_alloc_add(sk, -total_ts);
@@ -1922,6 +1932,7 @@ static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk,
if (!(flags & MSG_PEEK)) {
MPTCP_SKB_CB(skb)->offset += count;
MPTCP_SKB_CB(skb)->map_seq += count;
+ msk->bytes_consumed += count;
}
break;
}
@@ -1932,6 +1943,7 @@ static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk,
WRITE_ONCE(msk->rmem_released, msk->rmem_released + skb->truesize);
__skb_unlink(skb, &msk->receive_queue);
__kfree_skb(skb);
+ msk->bytes_consumed += count;
}
if (copied >= len)
@@ -2391,8 +2403,8 @@ static void __mptcp_close_ssk(struct sock *sk, struct sock *ssk,
if (msk->in_accept_queue && msk->first == ssk &&
(sock_flag(sk, SOCK_DEAD) || sock_flag(ssk, SOCK_DEAD))) {
/* ensure later check in mptcp_worker() will dispose the msk */
- mptcp_set_close_tout(sk, tcp_jiffies32 - (TCP_TIMEWAIT_LEN + 1));
sock_set_flag(sk, SOCK_DEAD);
+ mptcp_set_close_tout(sk, tcp_jiffies32 - (mptcp_close_timeout(sk) + 1));
lock_sock_nested(ssk, SINGLE_DEPTH_NESTING);
mptcp_subflow_drop_ctx(ssk);
goto out_release;
@@ -2448,6 +2460,7 @@ out_release:
WRITE_ONCE(msk->first, NULL);
out:
+ __mptcp_sync_sndbuf(sk);
if (need_push)
__mptcp_push_pending(sk, 0);
@@ -2477,7 +2490,7 @@ void mptcp_close_ssk(struct sock *sk, struct sock *ssk,
/* subflow aborted before reaching the fully_established status
* attempt the creation of the next subflow
*/
- mptcp_pm_subflow_check_next(mptcp_sk(sk), ssk, subflow);
+ mptcp_pm_subflow_check_next(mptcp_sk(sk), subflow);
__mptcp_close_ssk(sk, ssk, subflow, MPTCP_CF_PUSH);
}
@@ -2516,7 +2529,7 @@ static bool mptcp_close_tout_expired(const struct sock *sk)
return false;
return time_after32(tcp_jiffies32,
- inet_csk(sk)->icsk_mtup.probe_timestamp + TCP_TIMEWAIT_LEN);
+ inet_csk(sk)->icsk_mtup.probe_timestamp + mptcp_close_timeout(sk));
}
static void mptcp_check_fastclose(struct mptcp_sock *msk)
@@ -2659,7 +2672,7 @@ void mptcp_reset_tout_timer(struct mptcp_sock *msk, unsigned long fail_tout)
return;
close_timeout = inet_csk(sk)->icsk_mtup.probe_timestamp - tcp_jiffies32 + jiffies +
- TCP_TIMEWAIT_LEN;
+ mptcp_close_timeout(sk);
 /* the close timeout takes precedence over the fail one, and here at least one of
* them is active
@@ -2755,6 +2768,7 @@ static void __mptcp_init_sock(struct sock *sk)
msk->rmem_fwd_alloc = 0;
WRITE_ONCE(msk->rmem_released, 0);
msk->timer_ival = TCP_RTO_MIN;
+ msk->scaling_ratio = TCP_DEFAULT_SCALING_RATIO;
WRITE_ONCE(msk->first, NULL);
inet_csk(sk)->icsk_sync_mss = mptcp_sync_mss;
@@ -2964,16 +2978,9 @@ void __mptcp_unaccepted_force_close(struct sock *sk)
__mptcp_destroy_sock(sk);
}
-static __poll_t mptcp_check_readable(struct mptcp_sock *msk)
+static __poll_t mptcp_check_readable(struct sock *sk)
{
- /* Concurrent splices from sk_receive_queue into receive_queue will
- * always show at least one non-empty queue when checked in this order.
- */
- if (skb_queue_empty_lockless(&((struct sock *)msk)->sk_receive_queue) &&
- skb_queue_empty_lockless(&msk->receive_queue))
- return 0;
-
- return EPOLLIN | EPOLLRDNORM;
+ return mptcp_epollin_ready(sk) ? EPOLLIN | EPOLLRDNORM : 0;
}
static void mptcp_check_listen_stop(struct sock *sk)
@@ -3011,7 +3018,7 @@ bool __mptcp_close(struct sock *sk, long timeout)
goto cleanup;
}
- if (mptcp_check_readable(msk) || timeout < 0) {
+ if (mptcp_data_avail(msk) || timeout < 0) {
 /* If the msk has read data, or the caller explicitly asks for it,
* do the MPTCP equivalent of TCP reset, aka MPTCP fastclose
*/
@@ -3138,6 +3145,7 @@ static int mptcp_disconnect(struct sock *sk, int flags)
msk->snd_data_fin_enable = false;
msk->rcv_fastclose = false;
msk->use_64bit_ack = false;
+ msk->bytes_consumed = 0;
WRITE_ONCE(msk->csum_enabled, mptcp_is_checksum_enabled(sock_net(sk)));
mptcp_pm_data_reset(msk);
mptcp_ca_reset(sk);
@@ -3219,7 +3227,7 @@ struct sock *mptcp_sk_clone_init(const struct sock *sk,
* uses the correct data
*/
mptcp_copy_inaddrs(nsk, ssk);
- mptcp_propagate_sndbuf(nsk, ssk);
+ __mptcp_propagate_sndbuf(nsk, ssk);
mptcp_rcv_space_init(msk, ssk);
bh_unlock_sock(nsk);
@@ -3397,6 +3405,8 @@ static void mptcp_release_cb(struct sock *sk)
__mptcp_set_connected(sk);
if (__test_and_clear_bit(MPTCP_ERROR_REPORT, &msk->cb_flags))
__mptcp_error_report(sk);
+ if (__test_and_clear_bit(MPTCP_SYNC_SNDBUF, &msk->cb_flags))
+ __mptcp_sync_sndbuf(sk);
}
__mptcp_update_rmem(sk);
@@ -3441,6 +3451,14 @@ void mptcp_subflow_process_delegated(struct sock *ssk, long status)
__set_bit(MPTCP_PUSH_PENDING, &mptcp_sk(sk)->cb_flags);
mptcp_data_unlock(sk);
}
+ if (status & BIT(MPTCP_DELEGATE_SNDBUF)) {
+ mptcp_data_lock(sk);
+ if (!sock_owned_by_user(sk))
+ __mptcp_sync_sndbuf(sk);
+ else
+ __set_bit(MPTCP_SYNC_SNDBUF, &mptcp_sk(sk)->cb_flags);
+ mptcp_data_unlock(sk);
+ }
if (status & BIT(MPTCP_DELEGATE_ACK))
schedule_3rdack_retransmission(ssk);
}
@@ -3525,6 +3543,7 @@ bool mptcp_finish_join(struct sock *ssk)
/* active subflow, already present inside the conn_list */
if (!list_empty(&subflow->node)) {
mptcp_subflow_joined(msk, ssk);
+ mptcp_propagate_sndbuf(parent, ssk);
return true;
}
@@ -3909,7 +3928,7 @@ static __poll_t mptcp_poll(struct file *file, struct socket *sock,
mask |= EPOLLIN | EPOLLRDNORM | EPOLLRDHUP;
if (state != TCP_SYN_SENT && state != TCP_SYN_RECV) {
- mask |= mptcp_check_readable(msk);
+ mask |= mptcp_check_readable(sk);
if (shutdown & SEND_SHUTDOWN)
mask |= EPOLLOUT | EPOLLWRNORM;
else
@@ -3947,6 +3966,7 @@ static const struct proto_ops mptcp_stream_ops = {
.sendmsg = inet_sendmsg,
.recvmsg = inet_recvmsg,
.mmap = sock_no_mmap,
+ .set_rcvlowat = mptcp_set_rcvlowat,
};
static struct inet_protosw mptcp_protosw = {
@@ -4048,6 +4068,7 @@ static const struct proto_ops mptcp_v6_stream_ops = {
#ifdef CONFIG_COMPAT
.compat_ioctl = inet6_compat_ioctl,
#endif
+ .set_rcvlowat = mptcp_set_rcvlowat,
};
static struct proto mptcp_v6_prot;
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 3612545fa62e..fe6f2d399ee8 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -13,6 +13,8 @@
#include <uapi/linux/mptcp.h>
#include <net/genetlink.h>
+#include "mptcp_pm_gen.h"
+
#define MPTCP_SUPPORTED_VERSION 1
/* MPTCP option bits */
@@ -123,6 +125,7 @@
#define MPTCP_RETRANSMIT 4
#define MPTCP_FLUSH_JOIN_LIST 5
#define MPTCP_CONNECTED 6
+#define MPTCP_SYNC_SNDBUF 7
struct mptcp_skb_cb {
u64 map_seq;
@@ -267,6 +270,7 @@ struct mptcp_sock {
atomic64_t rcv_wnd_sent;
u64 rcv_data_fin_seq;
u64 bytes_retrans;
+ u64 bytes_consumed;
int rmem_fwd_alloc;
int snd_burst;
int old_wspace;
@@ -432,11 +436,6 @@ mptcp_subflow_rsk(const struct request_sock *rsk)
return (struct mptcp_subflow_request_sock *)rsk;
}
-enum mptcp_data_avail {
- MPTCP_SUBFLOW_NODATA,
- MPTCP_SUBFLOW_DATA_AVAIL,
-};
-
struct mptcp_delegated_action {
struct napi_struct napi;
struct list_head head;
@@ -447,6 +446,7 @@ DECLARE_PER_CPU(struct mptcp_delegated_action, mptcp_delegated_actions);
#define MPTCP_DELEGATE_SCHEDULED 0
#define MPTCP_DELEGATE_SEND 1
#define MPTCP_DELEGATE_ACK 2
+#define MPTCP_DELEGATE_SNDBUF 3
#define MPTCP_DELEGATE_ACTIONS_MASK (~BIT(MPTCP_DELEGATE_SCHEDULED))
/* MPTCP subflow context */
@@ -492,7 +492,7 @@ struct mptcp_subflow_context {
valid_csum_seen : 1, /* at least one csum validated */
is_mptfo : 1, /* subflow is doing TFO */
__unused : 9;
- enum mptcp_data_avail data_avail;
+ bool data_avail;
bool scheduled;
u32 remote_nonce;
u64 thmac;
@@ -520,6 +520,9 @@ struct mptcp_subflow_context {
u32 setsockopt_seq;
u32 stale_rcv_tstamp;
+ int cached_sndbuf; /* sndbuf size when last synced with the msk sndbuf,
+ * protected by the msk socket lock
+ */
struct sock *tcp_sock; /* tcp sk backpointer */
struct sock *conn; /* parent mptcp_sock */
@@ -613,6 +616,7 @@ unsigned int mptcp_get_add_addr_timeout(const struct net *net);
int mptcp_is_checksum_enabled(const struct net *net);
int mptcp_allow_join_id0(const struct net *net);
unsigned int mptcp_stale_loss_cnt(const struct net *net);
+unsigned int mptcp_close_timeout(const struct sock *sk);
int mptcp_get_pm_type(const struct net *net);
const char *mptcp_get_scheduler(const struct net *net);
void mptcp_subflow_fully_established(struct mptcp_subflow_context *subflow,
@@ -661,6 +665,24 @@ struct sock *mptcp_subflow_get_retrans(struct mptcp_sock *msk);
int mptcp_sched_get_send(struct mptcp_sock *msk);
int mptcp_sched_get_retrans(struct mptcp_sock *msk);
+static inline u64 mptcp_data_avail(const struct mptcp_sock *msk)
+{
+ return READ_ONCE(msk->bytes_received) - READ_ONCE(msk->bytes_consumed);
+}
+
+static inline bool mptcp_epollin_ready(const struct sock *sk)
+{
+ /* mptcp doesn't have to deal with small skbs in the receive queue,
+ * as it can always coalesce them
+ */
+ return (mptcp_data_avail(mptcp_sk(sk)) >= sk->sk_rcvlowat) ||
+ (mem_cgroup_sockets_enabled && sk->sk_memcg &&
+ mem_cgroup_under_socket_pressure(sk->sk_memcg)) ||
+ READ_ONCE(tcp_memory_pressure);
+}
+
+int mptcp_set_rcvlowat(struct sock *sk, int val);
+
static inline bool __tcp_can_send(const struct sock *ssk)
{
/* only send if our side has not closed yet */
@@ -735,6 +757,7 @@ static inline bool mptcp_is_fully_established(struct sock *sk)
return inet_sk_state_load(sk) == TCP_ESTABLISHED &&
READ_ONCE(mptcp_sk(sk)->fully_established);
}
+
void mptcp_rcv_space_init(struct mptcp_sock *msk, const struct sock *ssk);
void mptcp_data_ready(struct sock *sk, struct sock *ssk);
bool mptcp_finish_join(struct sock *sk);
@@ -762,13 +785,52 @@ static inline bool mptcp_data_fin_enabled(const struct mptcp_sock *msk)
READ_ONCE(msk->write_seq) == READ_ONCE(msk->snd_nxt);
}
-static inline bool mptcp_propagate_sndbuf(struct sock *sk, struct sock *ssk)
+static inline void __mptcp_sync_sndbuf(struct sock *sk)
{
- if ((sk->sk_userlocks & SOCK_SNDBUF_LOCK) || ssk->sk_sndbuf <= READ_ONCE(sk->sk_sndbuf))
- return false;
+ struct mptcp_subflow_context *subflow;
+ int ssk_sndbuf, new_sndbuf;
+
+ if (sk->sk_userlocks & SOCK_SNDBUF_LOCK)
+ return;
+
+ new_sndbuf = sock_net(sk)->ipv4.sysctl_tcp_wmem[0];
+ mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
+ ssk_sndbuf = READ_ONCE(mptcp_subflow_tcp_sock(subflow)->sk_sndbuf);
+
+ subflow->cached_sndbuf = ssk_sndbuf;
+ new_sndbuf += ssk_sndbuf;
+ }
+
+ /* the msk max wmem limit is <nr_subflows> * tcp wmem[2] */
+ WRITE_ONCE(sk->sk_sndbuf, new_sndbuf);
+}
+
+/* The caller holds both the msk socket and the subflow socket locks,
+ * possibly under BH
+ */
+static inline void __mptcp_propagate_sndbuf(struct sock *sk, struct sock *ssk)
+{
+ struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+
+ if (READ_ONCE(ssk->sk_sndbuf) != subflow->cached_sndbuf)
+ __mptcp_sync_sndbuf(sk);
+}
+
+/* the caller holds only the subflow socket lock, either in process or
+ * BH context. Additionally this can be called under the msk data lock,
+ * so we can't acquire such lock here: let the delegated action acquire
+ * the needed locks in a suitable order.
+ */
+static inline void mptcp_propagate_sndbuf(struct sock *sk, struct sock *ssk)
+{
+ struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+
+ if (likely(READ_ONCE(ssk->sk_sndbuf) == subflow->cached_sndbuf))
+ return;
- WRITE_ONCE(sk->sk_sndbuf, ssk->sk_sndbuf);
- return true;
+ local_bh_disable();
+ mptcp_subflow_delegate(subflow, MPTCP_DELEGATE_SNDBUF);
+ local_bh_enable();
}
static inline void mptcp_write_space(struct sock *sk)
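A standalone illustration of the aggregation rule implemented by __mptcp_sync_sndbuf() above (values are invented): the msk send buffer becomes the tcp_wmem[0] floor plus the sum of the per-subflow send buffers cached in subflow->cached_sndbuf, so it tracks subflows joining, leaving or resizing.

/* Worked example of the msk sndbuf aggregation; all numbers made up. */
#include <stdio.h>

int main(void)
{
	int tcp_wmem_min = 4096;                          /* sysctl_tcp_wmem[0] */
	int subflow_sndbuf[] = { 262144, 131072, 87380 }; /* per-ssk sk_sndbuf */
	int i, msk_sndbuf = tcp_wmem_min;

	for (i = 0; i < 3; i++)
		msk_sndbuf += subflow_sndbuf[i];

	/* prints 484692; recomputed whenever a subflow's sndbuf diverges
	 * from its cached value
	 */
	printf("aggregate msk sndbuf: %d\n", msk_sndbuf);
	return 0;
}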
@@ -826,7 +888,7 @@ bool mptcp_pm_allow_new_subflow(struct mptcp_sock *msk);
void mptcp_pm_connection_closed(struct mptcp_sock *msk);
void mptcp_pm_subflow_established(struct mptcp_sock *msk);
bool mptcp_pm_nl_check_work_pending(struct mptcp_sock *msk);
-void mptcp_pm_subflow_check_next(struct mptcp_sock *msk, const struct sock *ssk,
+void mptcp_pm_subflow_check_next(struct mptcp_sock *msk,
const struct mptcp_subflow_context *subflow);
void mptcp_pm_add_addr_received(const struct sock *ssk,
const struct mptcp_addr_info *addr);
@@ -877,10 +939,6 @@ void mptcp_pm_remove_addrs_and_subflows(struct mptcp_sock *msk,
struct list_head *rm_list);
void mptcp_free_local_addr_list(struct mptcp_sock *msk);
-int mptcp_nl_cmd_announce(struct sk_buff *skb, struct genl_info *info);
-int mptcp_nl_cmd_remove(struct sk_buff *skb, struct genl_info *info);
-int mptcp_nl_cmd_sf_create(struct sk_buff *skb, struct genl_info *info);
-int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info);
void mptcp_event(enum mptcp_event_type type, const struct mptcp_sock *msk,
const struct sock *ssk, gfp_t gfp);
@@ -1007,7 +1065,7 @@ static inline bool mptcp_check_fallback(const struct sock *sk)
static inline void __mptcp_do_fallback(struct mptcp_sock *msk)
{
- if (test_bit(MPTCP_FALLBACK_DONE, &msk->flags)) {
+ if (__mptcp_check_fallback(msk)) {
pr_debug("TCP fallback already done (msk=%p)", msk);
return;
}
diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
index 8260202c0066..77f5e8932abf 100644
--- a/net/mptcp/sockopt.c
+++ b/net/mptcp/sockopt.c
@@ -89,12 +89,13 @@ static void mptcp_sol_socket_sync_intval(struct mptcp_sock *msk, int optname, in
sock_valbool_flag(ssk, SOCK_KEEPOPEN, !!val);
break;
case SO_PRIORITY:
- ssk->sk_priority = val;
+ WRITE_ONCE(ssk->sk_priority, val);
break;
case SO_SNDBUF:
case SO_SNDBUFFORCE:
ssk->sk_userlocks |= SOCK_SNDBUF_LOCK;
WRITE_ONCE(ssk->sk_sndbuf, sk->sk_sndbuf);
+ mptcp_subflow_ctx(ssk)->cached_sndbuf = sk->sk_sndbuf;
break;
case SO_RCVBUF:
case SO_RCVBUFFORCE:
@@ -734,7 +735,7 @@ static int mptcp_setsockopt_v4_set_tos(struct mptcp_sock *msk, int optname,
lock_sock(sk);
sockopt_seq_inc(msk);
- val = inet_sk(sk)->tos;
+ val = READ_ONCE(inet_sk(sk)->tos);
mptcp_for_each_subflow(msk, subflow) {
struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
@@ -915,7 +916,7 @@ void mptcp_diag_fill_info(struct mptcp_sock *msk, struct mptcp_info *info)
mptcp_pm_get_local_addr_max(msk);
}
- if (test_bit(MPTCP_FALLBACK_DONE, &msk->flags))
+ if (__mptcp_check_fallback(msk))
flags |= MPTCP_INFO_FLAG_FALLBACK;
if (READ_ONCE(msk->can_ack))
flags |= MPTCP_INFO_FLAG_REMOTE_KEY_RECEIVED;
@@ -1343,7 +1344,7 @@ static int mptcp_getsockopt_v4(struct mptcp_sock *msk, int optname,
switch (optname) {
case IP_TOS:
- return mptcp_put_int_option(msk, optval, optlen, inet_sk(sk)->tos);
+ return mptcp_put_int_option(msk, optval, optlen, READ_ONCE(inet_sk(sk)->tos));
}
return -EOPNOTSUPP;
@@ -1415,8 +1416,10 @@ static void sync_socket_options(struct mptcp_sock *msk, struct sock *ssk)
if (sk->sk_userlocks & tx_rx_locks) {
ssk->sk_userlocks |= sk->sk_userlocks & tx_rx_locks;
- if (sk->sk_userlocks & SOCK_SNDBUF_LOCK)
+ if (sk->sk_userlocks & SOCK_SNDBUF_LOCK) {
WRITE_ONCE(ssk->sk_sndbuf, sk->sk_sndbuf);
+ mptcp_subflow_ctx(ssk)->cached_sndbuf = sk->sk_sndbuf;
+ }
if (sk->sk_userlocks & SOCK_RCVBUF_LOCK)
WRITE_ONCE(ssk->sk_rcvbuf, sk->sk_rcvbuf);
}
@@ -1444,37 +1447,63 @@ static void sync_socket_options(struct mptcp_sock *msk, struct sock *ssk)
inet_assign_bit(FREEBIND, ssk, inet_test_bit(FREEBIND, sk));
}
-static void __mptcp_sockopt_sync(struct mptcp_sock *msk, struct sock *ssk)
-{
- bool slow = lock_sock_fast(ssk);
-
- sync_socket_options(msk, ssk);
-
- unlock_sock_fast(ssk, slow);
-}
-
-void mptcp_sockopt_sync(struct mptcp_sock *msk, struct sock *ssk)
+void mptcp_sockopt_sync_locked(struct mptcp_sock *msk, struct sock *ssk)
{
struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
msk_owned_by_me(msk);
+ ssk->sk_rcvlowat = 0;
+
+ /* subflows must ignore any latency-related settings: they will not
+ * affect user space - only the msk is relevant - but they would foul
+ * the mptcp scheduler
+ */
+ tcp_sk(ssk)->notsent_lowat = UINT_MAX;
+
if (READ_ONCE(subflow->setsockopt_seq) != msk->setsockopt_seq) {
- __mptcp_sockopt_sync(msk, ssk);
+ sync_socket_options(msk, ssk);
subflow->setsockopt_seq = msk->setsockopt_seq;
}
}
-void mptcp_sockopt_sync_locked(struct mptcp_sock *msk, struct sock *ssk)
+/* unfortunately this is different enough from the tcp version that
+ * we can't factor it out
+ */
+int mptcp_set_rcvlowat(struct sock *sk, int val)
{
- struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+ struct mptcp_subflow_context *subflow;
+ int space, cap;
- msk_owned_by_me(msk);
+ if (sk->sk_userlocks & SOCK_RCVBUF_LOCK)
+ cap = sk->sk_rcvbuf >> 1;
+ else
+ cap = READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]) >> 1;
+ val = min(val, cap);
+ WRITE_ONCE(sk->sk_rcvlowat, val ? : 1);
- if (READ_ONCE(subflow->setsockopt_seq) != msk->setsockopt_seq) {
- sync_socket_options(msk, ssk);
+ /* Check if we need to signal EPOLLIN right now */
+ if (mptcp_epollin_ready(sk))
+ sk->sk_data_ready(sk);
- subflow->setsockopt_seq = msk->setsockopt_seq;
+ if (sk->sk_userlocks & SOCK_RCVBUF_LOCK)
+ return 0;
+
+ space = __tcp_space_from_win(mptcp_sk(sk)->scaling_ratio, val);
+ if (space <= sk->sk_rcvbuf)
+ return 0;
+
+ /* propagate the rcvbuf changes to all the subflows */
+ WRITE_ONCE(sk->sk_rcvbuf, space);
+ mptcp_for_each_subflow(mptcp_sk(sk), subflow) {
+ struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+ bool slow;
+
+ slow = lock_sock_fast(ssk);
+ WRITE_ONCE(ssk->sk_rcvbuf, space);
+ tcp_sk(ssk)->window_clamp = val;
+ unlock_sock_fast(ssk, slow);
}
+ return 0;
}
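With mptcp_set_rcvlowat() exported through the .set_rcvlowat hooks added to both proto_ops tables in protocol.c, SO_RCVLOWAT on an MPTCP socket caps the requested value, updates sk_rcvlowat and propagates the grown receive buffer to every subflow, so poll()/epoll only report readability once mptcp_data_avail() reaches the threshold. A user-space sketch of the intended usage; the server address is purely illustrative and the IPPROTO_MPTCP fallback define is an assumption for older headers:

/* Sketch: raise SO_RCVLOWAT on an MPTCP socket so that poll() only
 * reports POLLIN once at least 64KB are queued at the msk level.
 * The 192.0.2.1:8080 destination is a placeholder.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262
#endif

int main(void)
{
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port = htons(8080),
	};
	struct pollfd pfd;
	int fd, lowat = 64 * 1024;

	inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);

	fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
	if (fd < 0) {
		perror("socket(IPPROTO_MPTCP)");
		return 1;
	}

	if (connect(fd, (struct sockaddr *)&dst, sizeof(dst)) < 0) {
		perror("connect");
		return 1;
	}

	/* handled by mptcp_set_rcvlowat() above: caps the value, updates
	 * sk_rcvlowat and grows the rcvbuf of all subflows
	 */
	if (setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat)) < 0)
		perror("setsockopt(SO_RCVLOWAT)");

	pfd.fd = fd;
	pfd.events = POLLIN;
	/* wakes up once ~64KB (or EOF/error) is available, not per skb */
	if (poll(&pfd, 1, 5000) > 0 && (pfd.revents & POLLIN))
		printf("at least %d bytes ready\n", lowat);

	close(fd);
	return 0;
}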
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 9c1f8d1d63d2..e120e9616454 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -421,6 +421,7 @@ static bool subflow_use_different_dport(struct mptcp_sock *msk, const struct soc
void __mptcp_set_connected(struct sock *sk)
{
+ __mptcp_propagate_sndbuf(sk, mptcp_sk(sk)->first);
if (sk->sk_state == TCP_SYN_SENT) {
inet_sk_state_store(sk, TCP_ESTABLISHED);
sk->sk_state_change(sk);
@@ -472,7 +473,6 @@ static void subflow_finish_connect(struct sock *sk, const struct sk_buff *skb)
return;
msk = mptcp_sk(parent);
- mptcp_propagate_sndbuf(parent, sk);
subflow->rel_write_seq = 1;
subflow->conn_finished = 1;
subflow->ssn_offset = TCP_SKB_CB(skb)->seq;
@@ -1237,7 +1237,7 @@ static bool subflow_check_data_avail(struct sock *ssk)
struct sk_buff *skb;
if (!skb_peek(&ssk->sk_receive_queue))
- WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_NODATA);
+ WRITE_ONCE(subflow->data_avail, false);
if (subflow->data_avail)
return true;
@@ -1271,7 +1271,7 @@ static bool subflow_check_data_avail(struct sock *ssk)
continue;
}
- WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_DATA_AVAIL);
+ WRITE_ONCE(subflow->data_avail, true);
break;
}
return true;
@@ -1293,7 +1293,7 @@ fallback:
goto reset;
}
mptcp_subflow_fail(msk, ssk);
- WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_DATA_AVAIL);
+ WRITE_ONCE(subflow->data_avail, true);
return true;
}
@@ -1310,7 +1310,7 @@ reset:
while ((skb = skb_peek(&ssk->sk_receive_queue)))
sk_eat_skb(ssk, skb);
tcp_send_active_reset(ssk, GFP_ATOMIC);
- WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_NODATA);
+ WRITE_ONCE(subflow->data_avail, false);
return false;
}
@@ -1322,7 +1322,7 @@ reset:
subflow->map_seq = READ_ONCE(msk->ack_seq);
subflow->map_data_len = skb->len;
subflow->map_subflow_seq = tcp_sk(ssk)->copied_seq - subflow->ssn_offset;
- WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_DATA_AVAIL);
+ WRITE_ONCE(subflow->data_avail, true);
return true;
}
@@ -1334,7 +1334,7 @@ bool mptcp_subflow_data_available(struct sock *sk)
if (subflow->map_valid &&
mptcp_subflow_get_map_offset(subflow) >= subflow->map_data_len) {
subflow->map_valid = 0;
- WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_NODATA);
+ WRITE_ONCE(subflow->data_avail, false);
pr_debug("Done with mapping: seq=%u data_len=%u",
subflow->map_subflow_seq,
@@ -1405,10 +1405,18 @@ static void subflow_data_ready(struct sock *sk)
WARN_ON_ONCE(!__mptcp_check_fallback(msk) && !subflow->mp_capable &&
!subflow->mp_join && !(state & TCPF_CLOSE));
- if (mptcp_subflow_data_available(sk))
+ if (mptcp_subflow_data_available(sk)) {
mptcp_data_ready(parent, sk);
- else if (unlikely(sk->sk_err))
+
+ /* subflow-level lowat tests are not relevant.
+ * respect the msk-level threshold, which may mandate an immediate ack
+ */
+ if (mptcp_data_avail(msk) < parent->sk_rcvlowat &&
+ (tcp_sk(sk)->rcv_nxt - tcp_sk(sk)->rcv_wup) > inet_csk(sk)->icsk_ack.rcv_mss)
+ inet_csk(sk)->icsk_ack.pending |= ICSK_ACK_NOW;
+ } else if (unlikely(sk->sk_err)) {
subflow_error_report(sk);
+ }
}
static void subflow_write_space(struct sock *ssk)
@@ -1525,8 +1533,6 @@ int __mptcp_subflow_connect(struct sock *sk, const struct mptcp_addr_info *loc,
if (addr.ss_family == AF_INET6)
addrlen = sizeof(struct sockaddr_in6);
#endif
- mptcp_sockopt_sync(msk, ssk);
-
ssk->sk_bound_dev_if = ifindex;
err = kernel_bind(sf, (struct sockaddr *)&addr, addrlen);
if (err)
@@ -1637,7 +1643,7 @@ int mptcp_subflow_create_socket(struct sock *sk, unsigned short family,
err = security_mptcp_add_subflow(sk, sf->sk);
if (err)
- goto release_ssk;
+ goto err_free;
/* the newly created socket has to be in the same cgroup as its parent */
mptcp_attach_cgroup(sk, sf->sk);
@@ -1651,15 +1657,12 @@ int mptcp_subflow_create_socket(struct sock *sk, unsigned short family,
get_net_track(net, &sf->sk->ns_tracker, GFP_KERNEL);
sock_inuse_add(net, 1);
err = tcp_set_ulp(sf->sk, "mptcp");
+ if (err)
+ goto err_free;
-release_ssk:
+ mptcp_sockopt_sync_locked(mptcp_sk(sk), sf->sk);
release_sock(sf->sk);
- if (err) {
- sock_release(sf);
- return err;
- }
-
/* the newly created socket really belongs to the owning MPTCP master
* socket, even if for additional subflows the allocation is performed
* by a kernel workqueue. Adjust inode references, so that the
@@ -1679,6 +1682,11 @@ release_ssk:
mptcp_subflow_ops_override(sf->sk);
return 0;
+
+err_free:
+ release_sock(sf->sk);
+ sock_release(sf);
+ return err;
}
static struct mptcp_subflow_context *subflow_create_ctx(struct sock *sk,
@@ -1728,7 +1736,6 @@ static void subflow_state_change(struct sock *sk)
msk = mptcp_sk(parent);
if (subflow_simultaneous_connect(sk)) {
- mptcp_propagate_sndbuf(parent, sk);
mptcp_do_fallback(sk);
mptcp_rcv_space_init(msk, sk);
pr_fallback(msk);
@@ -2044,7 +2051,6 @@ void __init mptcp_subflow_init(void)
subflow_v6m_specific.send_check = ipv4_specific.send_check;
subflow_v6m_specific.net_header_len = ipv4_specific.net_header_len;
subflow_v6m_specific.mtu_reduced = ipv4_specific.mtu_reduced;
- subflow_v6m_specific.net_frag_header_len = 0;
subflow_v6m_specific.rebuild_header = subflow_rebuild_header;
tcpv6_prot_override = tcpv6_prot;
diff --git a/net/netfilter/core.c b/net/netfilter/core.c
index ef4e76e5aef9..3126911f5042 100644
--- a/net/netfilter/core.c
+++ b/net/netfilter/core.c
@@ -639,10 +639,10 @@ int nf_hook_slow(struct sk_buff *skb, struct nf_hook_state *state,
if (ret == 1)
continue;
return ret;
+ case NF_STOLEN:
+ return NF_DROP_GETERR(verdict);
default:
- /* Implicit handling for NF_STOLEN, as well as any other
- * non conventional verdicts.
- */
+ WARN_ON_ONCE(1);
return 0;
}
}
diff --git a/net/netfilter/ipvs/ip_vs_sync.c b/net/netfilter/ipvs/ip_vs_sync.c
index 4174076c66fa..eaf9f2ed0067 100644
--- a/net/netfilter/ipvs/ip_vs_sync.c
+++ b/net/netfilter/ipvs/ip_vs_sync.c
@@ -1298,17 +1298,13 @@ static void set_sock_size(struct sock *sk, int mode, int val)
static void set_mcast_loop(struct sock *sk, u_char loop)
{
/* setsockopt(sock, SOL_IP, IP_MULTICAST_LOOP, &loop, sizeof(loop)); */
- lock_sock(sk);
inet_assign_bit(MC_LOOP, sk, loop);
#ifdef CONFIG_IP_VS_IPV6
- if (sk->sk_family == AF_INET6) {
- struct ipv6_pinfo *np = inet6_sk(sk);
-
+ if (READ_ONCE(sk->sk_family) == AF_INET6) {
/* IPV6_MULTICAST_LOOP */
- np->mc_loop = loop ? 1 : 0;
+ inet6_assign_bit(MC6_LOOP, sk, loop);
}
#endif
- release_sock(sk);
}
/*
@@ -1320,13 +1316,13 @@ static void set_mcast_ttl(struct sock *sk, u_char ttl)
/* setsockopt(sock, SOL_IP, IP_MULTICAST_TTL, &ttl, sizeof(ttl)); */
lock_sock(sk);
- inet->mc_ttl = ttl;
+ WRITE_ONCE(inet->mc_ttl, ttl);
#ifdef CONFIG_IP_VS_IPV6
if (sk->sk_family == AF_INET6) {
struct ipv6_pinfo *np = inet6_sk(sk);
/* IPV6_MULTICAST_HOPS */
- np->mcast_hops = ttl;
+ WRITE_ONCE(np->mcast_hops, ttl);
}
#endif
release_sock(sk);
@@ -1339,13 +1335,13 @@ static void set_mcast_pmtudisc(struct sock *sk, int val)
/* setsockopt(sock, SOL_IP, IP_MTU_DISCOVER, &val, sizeof(val)); */
lock_sock(sk);
- inet->pmtudisc = val;
+ WRITE_ONCE(inet->pmtudisc, val);
#ifdef CONFIG_IP_VS_IPV6
if (sk->sk_family == AF_INET6) {
struct ipv6_pinfo *np = inet6_sk(sk);
/* IPV6_MTU_DISCOVER */
- np->pmtudisc = val;
+ WRITE_ONCE(np->pmtudisc, val);
}
#endif
release_sock(sk);
diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
index 9f6f2e643575..2e5f3864d353 100644
--- a/net/netfilter/nf_conntrack_core.c
+++ b/net/netfilter/nf_conntrack_core.c
@@ -2042,24 +2042,6 @@ out:
}
EXPORT_SYMBOL_GPL(nf_conntrack_in);
-/* Alter reply tuple (maybe alter helper). This is for NAT, and is
- implicitly racy: see __nf_conntrack_confirm */
-void nf_conntrack_alter_reply(struct nf_conn *ct,
- const struct nf_conntrack_tuple *newreply)
-{
- struct nf_conn_help *help = nfct_help(ct);
-
- /* Should be unconfirmed, so not in hash table yet */
- WARN_ON(nf_ct_is_confirmed(ct));
-
- nf_ct_dump_tuple(newreply);
-
- ct->tuplehash[IP_CT_DIR_REPLY].tuple = *newreply;
- if (ct->master || (help && !hlist_empty(&help->expectations)))
- return;
-}
-EXPORT_SYMBOL_GPL(nf_conntrack_alter_reply);
-
/* Refresh conntrack for this many jiffies and do accounting if do_acct is 1 */
void __nf_ct_refresh_acct(struct nf_conn *ct,
enum ip_conntrack_info ctinfo,
@@ -2187,11 +2169,11 @@ static int __nf_conntrack_update(struct net *net, struct sk_buff *skb,
dataoff = get_l4proto(skb, skb_network_offset(skb), l3num, &l4num);
if (dataoff <= 0)
- return -1;
+ return NF_DROP;
if (!nf_ct_get_tuple(skb, skb_network_offset(skb), dataoff, l3num,
l4num, net, &tuple))
- return -1;
+ return NF_DROP;
if (ct->status & IPS_SRC_NAT) {
memcpy(tuple.src.u3.all,
@@ -2211,7 +2193,7 @@ static int __nf_conntrack_update(struct net *net, struct sk_buff *skb,
h = nf_conntrack_find_get(net, nf_ct_zone(ct), &tuple);
if (!h)
- return 0;
+ return NF_ACCEPT;
 /* Store status bits of the conntrack that is clashing to re-do NAT
 * mangling according to what has already been done to this packet.
@@ -2224,19 +2206,25 @@ static int __nf_conntrack_update(struct net *net, struct sk_buff *skb,
nat_hook = rcu_dereference(nf_nat_hook);
if (!nat_hook)
- return 0;
+ return NF_ACCEPT;
- if (status & IPS_SRC_NAT &&
- nat_hook->manip_pkt(skb, ct, NF_NAT_MANIP_SRC,
- IP_CT_DIR_ORIGINAL) == NF_DROP)
- return -1;
+ if (status & IPS_SRC_NAT) {
+ unsigned int verdict = nat_hook->manip_pkt(skb, ct,
+ NF_NAT_MANIP_SRC,
+ IP_CT_DIR_ORIGINAL);
+ if (verdict != NF_ACCEPT)
+ return verdict;
+ }
- if (status & IPS_DST_NAT &&
- nat_hook->manip_pkt(skb, ct, NF_NAT_MANIP_DST,
- IP_CT_DIR_ORIGINAL) == NF_DROP)
- return -1;
+ if (status & IPS_DST_NAT) {
+ unsigned int verdict = nat_hook->manip_pkt(skb, ct,
+ NF_NAT_MANIP_DST,
+ IP_CT_DIR_ORIGINAL);
+ if (verdict != NF_ACCEPT)
+ return verdict;
+ }
- return 0;
+ return NF_ACCEPT;
}
/* This packet is coming from userspace via nf_queue, complete the packet
@@ -2251,14 +2239,14 @@ static int nf_confirm_cthelper(struct sk_buff *skb, struct nf_conn *ct,
help = nfct_help(ct);
if (!help)
- return 0;
+ return NF_ACCEPT;
helper = rcu_dereference(help->helper);
if (!helper)
- return 0;
+ return NF_ACCEPT;
if (!(helper->flags & NF_CT_HELPER_F_USERSPACE))
- return 0;
+ return NF_ACCEPT;
switch (nf_ct_l3num(ct)) {
case NFPROTO_IPV4:
@@ -2273,42 +2261,44 @@ static int nf_confirm_cthelper(struct sk_buff *skb, struct nf_conn *ct,
protoff = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &pnum,
&frag_off);
if (protoff < 0 || (frag_off & htons(~0x7)) != 0)
- return 0;
+ return NF_ACCEPT;
break;
}
#endif
default:
- return 0;
+ return NF_ACCEPT;
}
if (test_bit(IPS_SEQ_ADJUST_BIT, &ct->status) &&
!nf_is_loopback_packet(skb)) {
if (!nf_ct_seq_adjust(skb, ct, ctinfo, protoff)) {
NF_CT_STAT_INC_ATOMIC(nf_ct_net(ct), drop);
- return -1;
+ return NF_DROP;
}
}
/* We've seen it coming out the other side: confirm it */
- return nf_conntrack_confirm(skb) == NF_DROP ? - 1 : 0;
+ return nf_conntrack_confirm(skb);
}
static int nf_conntrack_update(struct net *net, struct sk_buff *skb)
{
enum ip_conntrack_info ctinfo;
struct nf_conn *ct;
- int err;
ct = nf_ct_get(skb, &ctinfo);
if (!ct)
- return 0;
+ return NF_ACCEPT;
if (!nf_ct_is_confirmed(ct)) {
- err = __nf_conntrack_update(net, skb, ct, ctinfo);
- if (err < 0)
- return err;
+ int ret = __nf_conntrack_update(net, skb, ct, ctinfo);
+
+ if (ret != NF_ACCEPT)
+ return ret;
ct = nf_ct_get(skb, &ctinfo);
+ if (!ct)
+ return NF_ACCEPT;
}
return nf_confirm_cthelper(skb, ct, ctinfo);
diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c
index f22691f83853..4ed5878cb25b 100644
--- a/net/netfilter/nf_conntrack_helper.c
+++ b/net/netfilter/nf_conntrack_helper.c
@@ -194,12 +194,7 @@ int __nf_ct_try_assign_helper(struct nf_conn *ct, struct nf_conn *tmpl,
struct nf_conntrack_helper *helper = NULL;
struct nf_conn_help *help;
- /* We already got a helper explicitly attached. The function
- * nf_conntrack_alter_reply - in case NAT is in use - asks for looking
- * the helper up again. Since now the user is in full control of
- * making consistent helper configurations, skip this automatic
- * re-lookup, otherwise we'll lose the helper.
- */
+ /* We already got a helper explicitly attached (e.g. nft_ct) */
if (test_bit(IPS_HELPER_BIT, &ct->status))
return 0;
diff --git a/net/netfilter/nf_conntrack_labels.c b/net/netfilter/nf_conntrack_labels.c
index 6e70e137a0a6..6c46aad23313 100644
--- a/net/netfilter/nf_conntrack_labels.c
+++ b/net/netfilter/nf_conntrack_labels.c
@@ -11,8 +11,6 @@
#include <net/netfilter/nf_conntrack_ecache.h>
#include <net/netfilter/nf_conntrack_labels.h>
-static DEFINE_SPINLOCK(nf_connlabels_lock);
-
static int replace_u32(u32 *address, u32 mask, u32 new)
{
u32 old, tmp;
@@ -60,23 +58,24 @@ EXPORT_SYMBOL_GPL(nf_connlabels_replace);
int nf_connlabels_get(struct net *net, unsigned int bits)
{
+ int v;
+
if (BIT_WORD(bits) >= NF_CT_LABELS_MAX_SIZE / sizeof(long))
return -ERANGE;
- spin_lock(&nf_connlabels_lock);
- net->ct.labels_used++;
- spin_unlock(&nf_connlabels_lock);
-
BUILD_BUG_ON(NF_CT_LABELS_MAX_SIZE / sizeof(long) >= U8_MAX);
+ v = atomic_inc_return_relaxed(&net->ct.labels_used);
+ WARN_ON_ONCE(v <= 0);
+
return 0;
}
EXPORT_SYMBOL_GPL(nf_connlabels_get);
void nf_connlabels_put(struct net *net)
{
- spin_lock(&nf_connlabels_lock);
- net->ct.labels_used--;
- spin_unlock(&nf_connlabels_lock);
+ int v = atomic_dec_return_relaxed(&net->ct.labels_used);
+
+ WARN_ON_ONCE(v < 0);
}
EXPORT_SYMBOL_GPL(nf_connlabels_put);
diff --git a/net/netfilter/nf_conntrack_proto_tcp.c b/net/netfilter/nf_conntrack_proto_tcp.c
index 4018acb1d674..e573be5afde7 100644
--- a/net/netfilter/nf_conntrack_proto_tcp.c
+++ b/net/netfilter/nf_conntrack_proto_tcp.c
@@ -835,7 +835,8 @@ static bool tcp_error(const struct tcphdr *th,
static noinline bool tcp_new(struct nf_conn *ct, const struct sk_buff *skb,
unsigned int dataoff,
- const struct tcphdr *th)
+ const struct tcphdr *th,
+ const struct nf_hook_state *state)
{
enum tcp_conntrack new_state;
struct net *net = nf_ct_net(ct);
@@ -846,7 +847,7 @@ static noinline bool tcp_new(struct nf_conn *ct, const struct sk_buff *skb,
/* Invalid: delete conntrack */
if (new_state >= TCP_CONNTRACK_MAX) {
- pr_debug("nf_ct_tcp: invalid new deleting.\n");
+ tcp_error_log(skb, state, "invalid new");
return false;
}
@@ -980,7 +981,7 @@ int nf_conntrack_tcp_packet(struct nf_conn *ct,
if (tcp_error(th, skb, dataoff, state))
return -NF_ACCEPT;
- if (!nf_ct_is_confirmed(ct) && !tcp_new(ct, skb, dataoff, th))
+ if (!nf_ct_is_confirmed(ct) && !tcp_new(ct, skb, dataoff, th, state))
return -NF_ACCEPT;
spin_lock_bh(&ct->lock);
diff --git a/net/netfilter/nf_nat_proto.c b/net/netfilter/nf_nat_proto.c
index 48cc60084d28..6d969468c779 100644
--- a/net/netfilter/nf_nat_proto.c
+++ b/net/netfilter/nf_nat_proto.c
@@ -697,6 +697,31 @@ static int nf_xfrm_me_harder(struct net *net, struct sk_buff *skb, unsigned int
}
#endif
+static bool nf_nat_inet_port_was_mangled(const struct sk_buff *skb, __be16 sport)
+{
+ enum ip_conntrack_info ctinfo;
+ enum ip_conntrack_dir dir;
+ const struct nf_conn *ct;
+
+ ct = nf_ct_get(skb, &ctinfo);
+ if (!ct)
+ return false;
+
+ switch (nf_ct_protonum(ct)) {
+ case IPPROTO_TCP:
+ case IPPROTO_UDP:
+ break;
+ default:
+ return false;
+ }
+
+ dir = CTINFO2DIR(ctinfo);
+ if (dir != IP_CT_DIR_ORIGINAL)
+ return false;
+
+ return ct->tuplehash[!dir].tuple.dst.u.all != sport;
+}
+
static unsigned int
nf_nat_ipv4_local_in(void *priv, struct sk_buff *skb,
const struct nf_hook_state *state)
@@ -707,8 +732,20 @@ nf_nat_ipv4_local_in(void *priv, struct sk_buff *skb,
ret = nf_nat_ipv4_fn(priv, skb, state);
- if (ret == NF_ACCEPT && sk && saddr != ip_hdr(skb)->saddr &&
- !inet_sk_transparent(sk))
+ if (ret != NF_ACCEPT || !sk || inet_sk_transparent(sk))
+ return ret;
+
+ /* skb has a socket assigned via tcp edemux. We need to check
+ * if nf_nat_ipv4_fn() has mangled the packet in a way that
+ * edemux would not have found this socket.
+ *
+ * This includes both changes to the source address and changes
+ * to the source port, which are both handled by the
+ * nf_nat_ipv4_fn() call above -- long after tcp/udp early demux
+ * might have found a socket for the old (pre-snat) address.
+ */
+ if (saddr != ip_hdr(skb)->saddr ||
+ nf_nat_inet_port_was_mangled(skb, sk->sk_dport))
skb_orphan(skb); /* TCP edemux obtained wrong socket */
return ret;
@@ -938,14 +975,36 @@ nf_nat_ipv6_fn(void *priv, struct sk_buff *skb,
}
static unsigned int
+nf_nat_ipv6_local_in(void *priv, struct sk_buff *skb,
+ const struct nf_hook_state *state)
+{
+ struct in6_addr saddr = ipv6_hdr(skb)->saddr;
+ struct sock *sk = skb->sk;
+ unsigned int ret;
+
+ ret = nf_nat_ipv6_fn(priv, skb, state);
+
+ if (ret != NF_ACCEPT || !sk || inet_sk_transparent(sk))
+ return ret;
+
+ /* see nf_nat_ipv4_local_in */
+ if (ipv6_addr_cmp(&saddr, &ipv6_hdr(skb)->saddr) ||
+ nf_nat_inet_port_was_mangled(skb, sk->sk_dport))
+ skb_orphan(skb);
+
+ return ret;
+}
+
+static unsigned int
nf_nat_ipv6_in(void *priv, struct sk_buff *skb,
const struct nf_hook_state *state)
{
- unsigned int ret;
+ unsigned int ret, verdict;
struct in6_addr daddr = ipv6_hdr(skb)->daddr;
ret = nf_nat_ipv6_fn(priv, skb, state);
- if (ret != NF_DROP && ret != NF_STOLEN &&
+ verdict = ret & NF_VERDICT_MASK;
+ if (verdict != NF_DROP && verdict != NF_STOLEN &&
ipv6_addr_cmp(&daddr, &ipv6_hdr(skb)->daddr))
skb_dst_drop(skb);
@@ -1051,7 +1110,7 @@ static const struct nf_hook_ops nf_nat_ipv6_ops[] = {
},
/* After packet filtering, change source */
{
- .hook = nf_nat_ipv6_fn,
+ .hook = nf_nat_ipv6_local_in,
.pf = NFPROTO_IPV6,
.hooknum = NF_INET_LOCAL_IN,
.priority = NF_IP6_PRI_NAT_SRC,
diff --git a/net/netfilter/nf_synproxy_core.c b/net/netfilter/nf_synproxy_core.c
index 16915f8eef2b..467671f2d42f 100644
--- a/net/netfilter/nf_synproxy_core.c
+++ b/net/netfilter/nf_synproxy_core.c
@@ -153,7 +153,7 @@ void synproxy_init_timestamp_cookie(const struct nf_synproxy_info *info,
struct synproxy_options *opts)
{
opts->tsecr = opts->tsval;
- opts->tsval = tcp_time_stamp_raw() & ~0x3f;
+ opts->tsval = tcp_clock_ms() & ~0x3f;
if (opts->options & NF_SYNPROXY_OPT_WSCALE) {
opts->tsval |= opts->wscale;
diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
index 29c651804cb2..3c1fd8283bf4 100644
--- a/net/netfilter/nf_tables_api.c
+++ b/net/netfilter/nf_tables_api.c
@@ -591,9 +591,9 @@ static int nft_trans_set_add(const struct nft_ctx *ctx, int msg_type,
static int nft_mapelem_deactivate(const struct nft_ctx *ctx,
struct nft_set *set,
const struct nft_set_iter *iter,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- nft_setelem_data_deactivate(ctx->net, set, elem);
+ nft_setelem_data_deactivate(ctx->net, set, elem_priv);
return 0;
}
@@ -601,7 +601,7 @@ static int nft_mapelem_deactivate(const struct nft_ctx *ctx,
struct nft_set_elem_catchall {
struct list_head list;
struct rcu_head rcu;
- void *elem;
+ struct nft_elem_priv *elem;
};
static void nft_map_catchall_deactivate(const struct nft_ctx *ctx,
@@ -609,7 +609,6 @@ static void nft_map_catchall_deactivate(const struct nft_ctx *ctx,
{
u8 genmask = nft_genmask_next(ctx->net);
struct nft_set_elem_catchall *catchall;
- struct nft_set_elem elem;
struct nft_set_ext *ext;
list_for_each_entry(catchall, &set->catchall_list, list) {
@@ -617,8 +616,7 @@ static void nft_map_catchall_deactivate(const struct nft_ctx *ctx,
if (!nft_set_elem_active(ext, genmask))
continue;
- elem.priv = catchall->elem;
- nft_setelem_data_deactivate(ctx->net, set, &elem);
+ nft_setelem_data_deactivate(ctx->net, set, catchall->elem);
break;
}
}
@@ -3316,7 +3314,7 @@ static const struct nla_policy nft_rule_policy[NFTA_RULE_MAX + 1] = {
[NFTA_RULE_CHAIN] = { .type = NLA_STRING,
.len = NFT_CHAIN_MAXNAMELEN - 1 },
[NFTA_RULE_HANDLE] = { .type = NLA_U64 },
- [NFTA_RULE_EXPRESSIONS] = { .type = NLA_NESTED },
+ [NFTA_RULE_EXPRESSIONS] = NLA_POLICY_NESTED_ARRAY(nft_expr_policy),
[NFTA_RULE_COMPAT] = { .type = NLA_NESTED },
[NFTA_RULE_POSITION] = { .type = NLA_U64 },
[NFTA_RULE_USERDATA] = { .type = NLA_BINARY,
@@ -3441,20 +3439,21 @@ static void audit_log_rule_reset(const struct nft_table *table,
}
struct nft_rule_dump_ctx {
+ unsigned int s_idx;
char *table;
char *chain;
+ bool reset;
};
static int __nf_tables_dump_rules(struct sk_buff *skb,
unsigned int *idx,
struct netlink_callback *cb,
const struct nft_table *table,
- const struct nft_chain *chain,
- bool reset)
+ const struct nft_chain *chain)
{
+ struct nft_rule_dump_ctx *ctx = (void *)cb->ctx;
struct net *net = sock_net(skb->sk);
const struct nft_rule *rule, *prule;
- unsigned int s_idx = cb->args[0];
unsigned int entries = 0;
int ret = 0;
u64 handle;
@@ -3463,12 +3462,8 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
list_for_each_entry_rcu(rule, &chain->rules, list) {
if (!nft_is_active(net, rule))
goto cont_skip;
- if (*idx < s_idx)
+ if (*idx < ctx->s_idx)
goto cont;
- if (*idx > s_idx) {
- memset(&cb->args[1], 0,
- sizeof(cb->args) - sizeof(cb->args[0]));
- }
if (prule)
handle = prule->handle;
else
@@ -3479,7 +3474,7 @@ static int __nf_tables_dump_rules(struct sk_buff *skb,
NFT_MSG_NEWRULE,
NLM_F_MULTI | NLM_F_APPEND,
table->family,
- table, chain, rule, handle, reset) < 0) {
+ table, chain, rule, handle, ctx->reset) < 0) {
ret = 1;
break;
}
@@ -3491,7 +3486,7 @@ cont_skip:
(*idx)++;
}
- if (reset && entries)
+ if (ctx->reset && entries)
audit_log_rule_reset(table, cb->seq, entries);
return ret;
@@ -3501,17 +3496,13 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
struct netlink_callback *cb)
{
const struct nfgenmsg *nfmsg = nlmsg_data(cb->nlh);
- const struct nft_rule_dump_ctx *ctx = cb->data;
+ struct nft_rule_dump_ctx *ctx = (void *)cb->ctx;
struct nft_table *table;
const struct nft_chain *chain;
unsigned int idx = 0;
struct net *net = sock_net(skb->sk);
int family = nfmsg->nfgen_family;
struct nftables_pernet *nft_net;
- bool reset = false;
-
- if (NFNL_MSG_TYPE(cb->nlh->nlmsg_type) == NFT_MSG_GETRULE_RESET)
- reset = true;
rcu_read_lock();
nft_net = nft_pernet(net);
@@ -3521,10 +3512,10 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
if (family != NFPROTO_UNSPEC && family != table->family)
continue;
- if (ctx && ctx->table && strcmp(ctx->table, table->name) != 0)
+ if (ctx->table && strcmp(ctx->table, table->name) != 0)
continue;
- if (ctx && ctx->table && ctx->chain) {
+ if (ctx->table && ctx->chain) {
struct rhlist_head *list, *tmp;
list = rhltable_lookup(&table->chains_ht, ctx->chain,
@@ -3536,7 +3527,7 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
if (!nft_is_active(net, chain))
continue;
__nf_tables_dump_rules(skb, &idx,
- cb, table, chain, reset);
+ cb, table, chain);
break;
}
goto done;
@@ -3544,68 +3535,81 @@ static int nf_tables_dump_rules(struct sk_buff *skb,
list_for_each_entry_rcu(chain, &table->chains, list) {
if (__nf_tables_dump_rules(skb, &idx,
- cb, table, chain, reset))
+ cb, table, chain))
goto done;
}
- if (ctx && ctx->table)
+ if (ctx->table)
break;
}
done:
rcu_read_unlock();
- cb->args[0] = idx;
+ ctx->s_idx = idx;
return skb->len;
}
+static int nf_tables_dumpreset_rules(struct sk_buff *skb,
+ struct netlink_callback *cb)
+{
+ struct nftables_pernet *nft_net = nft_pernet(sock_net(skb->sk));
+ int ret;
+
+ /* The mutex is held to prevent two concurrent dump-and-reset calls
+ * from underrunning counters and quotas. The commit_mutex is used
+ * for lack of a better lock; this is not the transaction path.
+ */
+ mutex_lock(&nft_net->commit_mutex);
+ ret = nf_tables_dump_rules(skb, cb);
+ mutex_unlock(&nft_net->commit_mutex);
+
+ return ret;
+}
+
static int nf_tables_dump_rules_start(struct netlink_callback *cb)
{
+ struct nft_rule_dump_ctx *ctx = (void *)cb->ctx;
const struct nlattr * const *nla = cb->data;
- struct nft_rule_dump_ctx *ctx = NULL;
- if (nla[NFTA_RULE_TABLE] || nla[NFTA_RULE_CHAIN]) {
- ctx = kzalloc(sizeof(*ctx), GFP_ATOMIC);
- if (!ctx)
- return -ENOMEM;
+ BUILD_BUG_ON(sizeof(*ctx) > sizeof(cb->ctx));
- if (nla[NFTA_RULE_TABLE]) {
- ctx->table = nla_strdup(nla[NFTA_RULE_TABLE],
- GFP_ATOMIC);
- if (!ctx->table) {
- kfree(ctx);
- return -ENOMEM;
- }
- }
- if (nla[NFTA_RULE_CHAIN]) {
- ctx->chain = nla_strdup(nla[NFTA_RULE_CHAIN],
- GFP_ATOMIC);
- if (!ctx->chain) {
- kfree(ctx->table);
- kfree(ctx);
- return -ENOMEM;
- }
+ if (nla[NFTA_RULE_TABLE]) {
+ ctx->table = nla_strdup(nla[NFTA_RULE_TABLE], GFP_ATOMIC);
+ if (!ctx->table)
+ return -ENOMEM;
+ }
+ if (nla[NFTA_RULE_CHAIN]) {
+ ctx->chain = nla_strdup(nla[NFTA_RULE_CHAIN], GFP_ATOMIC);
+ if (!ctx->chain) {
+ kfree(ctx->table);
+ return -ENOMEM;
}
}
-
- cb->data = ctx;
return 0;
}
+static int nf_tables_dumpreset_rules_start(struct netlink_callback *cb)
+{
+ struct nft_rule_dump_ctx *ctx = (void *)cb->ctx;
+
+ ctx->reset = true;
+
+ return nf_tables_dump_rules_start(cb);
+}
+
static int nf_tables_dump_rules_done(struct netlink_callback *cb)
{
- struct nft_rule_dump_ctx *ctx = cb->data;
+ struct nft_rule_dump_ctx *ctx = (void *)cb->ctx;
- if (ctx) {
- kfree(ctx->table);
- kfree(ctx->chain);
- kfree(ctx);
- }
+ kfree(ctx->table);
+ kfree(ctx->chain);
return 0;
}
/* called with rcu_read_lock held */
-static int nf_tables_getrule(struct sk_buff *skb, const struct nfnl_info *info,
- const struct nlattr * const nla[])
+static struct sk_buff *
+nf_tables_getrule_single(u32 portid, const struct nfnl_info *info,
+ const struct nlattr * const nla[], bool reset)
{
struct netlink_ext_ack *extack = info->extack;
u8 genmask = nft_genmask_cur(info->net);
@@ -3615,60 +3619,110 @@ static int nf_tables_getrule(struct sk_buff *skb, const struct nfnl_info *info,
struct net *net = info->net;
struct nft_table *table;
struct sk_buff *skb2;
- bool reset = false;
int err;
- if (info->nlh->nlmsg_flags & NLM_F_DUMP) {
- struct netlink_dump_control c = {
- .start= nf_tables_dump_rules_start,
- .dump = nf_tables_dump_rules,
- .done = nf_tables_dump_rules_done,
- .module = THIS_MODULE,
- .data = (void *)nla,
- };
-
- return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c);
- }
-
table = nft_table_lookup(net, nla[NFTA_RULE_TABLE], family, genmask, 0);
if (IS_ERR(table)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_TABLE]);
- return PTR_ERR(table);
+ return ERR_CAST(table);
}
chain = nft_chain_lookup(net, table, nla[NFTA_RULE_CHAIN], genmask);
if (IS_ERR(chain)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_CHAIN]);
- return PTR_ERR(chain);
+ return ERR_CAST(chain);
}
rule = nft_rule_lookup(chain, nla[NFTA_RULE_HANDLE]);
if (IS_ERR(rule)) {
NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_HANDLE]);
- return PTR_ERR(rule);
+ return ERR_CAST(rule);
}
skb2 = alloc_skb(NLMSG_GOODSIZE, GFP_ATOMIC);
if (!skb2)
- return -ENOMEM;
-
- if (NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_GETRULE_RESET)
- reset = true;
+ return ERR_PTR(-ENOMEM);
- err = nf_tables_fill_rule_info(skb2, net, NETLINK_CB(skb).portid,
+ err = nf_tables_fill_rule_info(skb2, net, portid,
info->nlh->nlmsg_seq, NFT_MSG_NEWRULE, 0,
family, table, chain, rule, 0, reset);
- if (err < 0)
- goto err_fill_rule_info;
+ if (err < 0) {
+ kfree_skb(skb2);
+ return ERR_PTR(err);
+ }
- if (reset)
- audit_log_rule_reset(table, nft_pernet(net)->base_seq, 1);
+ return skb2;
+}
- return nfnetlink_unicast(skb2, net, NETLINK_CB(skb).portid);
+static int nf_tables_getrule(struct sk_buff *skb, const struct nfnl_info *info,
+ const struct nlattr * const nla[])
+{
+ u32 portid = NETLINK_CB(skb).portid;
+ struct net *net = info->net;
+ struct sk_buff *skb2;
-err_fill_rule_info:
- kfree_skb(skb2);
- return err;
+ if (info->nlh->nlmsg_flags & NLM_F_DUMP) {
+ struct netlink_dump_control c = {
+ .start= nf_tables_dump_rules_start,
+ .dump = nf_tables_dump_rules,
+ .done = nf_tables_dump_rules_done,
+ .module = THIS_MODULE,
+ .data = (void *)nla,
+ };
+
+ return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c);
+ }
+
+ skb2 = nf_tables_getrule_single(portid, info, nla, false);
+ if (IS_ERR(skb2))
+ return PTR_ERR(skb2);
+
+ return nfnetlink_unicast(skb2, net, portid);
+}
+
+static int nf_tables_getrule_reset(struct sk_buff *skb,
+ const struct nfnl_info *info,
+ const struct nlattr * const nla[])
+{
+ struct nftables_pernet *nft_net = nft_pernet(info->net);
+ u32 portid = NETLINK_CB(skb).portid;
+ struct net *net = info->net;
+ struct sk_buff *skb2;
+ char *buf;
+
+ if (info->nlh->nlmsg_flags & NLM_F_DUMP) {
+ struct netlink_dump_control c = {
+ .start= nf_tables_dumpreset_rules_start,
+ .dump = nf_tables_dumpreset_rules,
+ .done = nf_tables_dump_rules_done,
+ .module = THIS_MODULE,
+ .data = (void *)nla,
+ };
+
+ return nft_netlink_dump_start_rcu(info->sk, skb, info->nlh, &c);
+ }
+
+ if (!try_module_get(THIS_MODULE))
+ return -EINVAL;
+ rcu_read_unlock();
+ mutex_lock(&nft_net->commit_mutex);
+ skb2 = nf_tables_getrule_single(portid, info, nla, true);
+ mutex_unlock(&nft_net->commit_mutex);
+ rcu_read_lock();
+ module_put(THIS_MODULE);
+
+ if (IS_ERR(skb2))
+ return PTR_ERR(skb2);
+
+ buf = kasprintf(GFP_ATOMIC, "%.*s:%u",
+ nla_len(nla[NFTA_RULE_TABLE]),
+ (char *)nla_data(nla[NFTA_RULE_TABLE]),
+ nft_net->base_seq);
+ audit_log_nfcfg(buf, info->nfmsg->nfgen_family, 1,
+ AUDIT_NFT_OP_RULE_RESET, GFP_ATOMIC);
+ kfree(buf);
+
+ return nfnetlink_unicast(skb2, net, portid);
}
void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule)
@@ -3751,9 +3805,9 @@ static int nft_table_validate(struct net *net, const struct nft_table *table)
int nft_setelem_validate(const struct nft_ctx *ctx, struct nft_set *set,
const struct nft_set_iter *iter,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
struct nft_ctx *pctx = (struct nft_ctx *)ctx;
const struct nft_data *data;
int err;
@@ -3783,7 +3837,6 @@ int nft_set_catchall_validate(const struct nft_ctx *ctx, struct nft_set *set)
{
u8 genmask = nft_genmask_next(ctx->net);
struct nft_set_elem_catchall *catchall;
- struct nft_set_elem elem;
struct nft_set_ext *ext;
int ret = 0;
@@ -3792,8 +3845,7 @@ int nft_set_catchall_validate(const struct nft_ctx *ctx, struct nft_set *set)
if (!nft_set_elem_active(ext, genmask))
continue;
- elem.priv = catchall->elem;
- ret = nft_setelem_validate(ctx, set, NULL, &elem);
+ ret = nft_setelem_validate(ctx, set, NULL, catchall->elem);
if (ret < 0)
return ret;
}
@@ -4254,12 +4306,16 @@ static const struct nla_policy nft_set_policy[NFTA_SET_MAX + 1] = {
[NFTA_SET_OBJ_TYPE] = { .type = NLA_U32 },
[NFTA_SET_HANDLE] = { .type = NLA_U64 },
[NFTA_SET_EXPR] = { .type = NLA_NESTED },
- [NFTA_SET_EXPRESSIONS] = { .type = NLA_NESTED },
+ [NFTA_SET_EXPRESSIONS] = NLA_POLICY_NESTED_ARRAY(nft_expr_policy),
+};
+
+static const struct nla_policy nft_concat_policy[NFTA_SET_FIELD_MAX + 1] = {
+ [NFTA_SET_FIELD_LEN] = { .type = NLA_U32 },
};
static const struct nla_policy nft_set_desc_policy[NFTA_SET_DESC_MAX + 1] = {
[NFTA_SET_DESC_SIZE] = { .type = NLA_U32 },
- [NFTA_SET_DESC_CONCAT] = { .type = NLA_NESTED },
+ [NFTA_SET_DESC_CONCAT] = NLA_POLICY_NESTED_ARRAY(nft_concat_policy),
};
static struct nft_set *nft_set_lookup(const struct nft_table *table,
@@ -4695,8 +4751,10 @@ static int nf_tables_getset(struct sk_buff *skb, const struct nfnl_info *info,
return -EINVAL;
set = nft_set_lookup(table, nla[NFTA_SET_NAME], genmask);
- if (IS_ERR(set))
+ if (IS_ERR(set)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_SET_NAME]);
return PTR_ERR(set);
+ }
skb2 = alloc_skb(NLMSG_GOODSIZE, GFP_ATOMIC);
if (skb2 == NULL)
@@ -4713,10 +4771,6 @@ err_fill_set_info:
return err;
}
-static const struct nla_policy nft_concat_policy[NFTA_SET_FIELD_MAX + 1] = {
- [NFTA_SET_FIELD_LEN] = { .type = NLA_U32 },
-};
-
static int nft_set_desc_concat_parse(const struct nlattr *attr,
struct nft_set_desc *desc)
{
@@ -5243,9 +5297,9 @@ static int nft_validate_register_store(const struct nft_ctx *ctx,
static int nft_setelem_data_validate(const struct nft_ctx *ctx,
struct nft_set *set,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
enum nft_registers dreg;
dreg = nft_type_to_reg(set->dtype);
@@ -5258,9 +5312,9 @@ static int nft_setelem_data_validate(const struct nft_ctx *ctx,
static int nf_tables_bind_check_setelem(const struct nft_ctx *ctx,
struct nft_set *set,
const struct nft_set_iter *iter,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- return nft_setelem_data_validate(ctx, set, elem);
+ return nft_setelem_data_validate(ctx, set, elem_priv);
}
static int nft_set_catchall_bind_check(const struct nft_ctx *ctx,
@@ -5268,7 +5322,6 @@ static int nft_set_catchall_bind_check(const struct nft_ctx *ctx,
{
u8 genmask = nft_genmask_next(ctx->net);
struct nft_set_elem_catchall *catchall;
- struct nft_set_elem elem;
struct nft_set_ext *ext;
int ret = 0;
@@ -5277,8 +5330,7 @@ static int nft_set_catchall_bind_check(const struct nft_ctx *ctx,
if (!nft_set_elem_active(ext, genmask))
continue;
- elem.priv = catchall->elem;
- ret = nft_setelem_data_validate(ctx, set, &elem);
+ ret = nft_setelem_data_validate(ctx, set, catchall->elem);
if (ret < 0)
break;
}
@@ -5345,14 +5397,14 @@ static void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
static void nft_setelem_data_activate(const struct net *net,
const struct nft_set *set,
- struct nft_set_elem *elem);
+ struct nft_elem_priv *elem_priv);
static int nft_mapelem_activate(const struct nft_ctx *ctx,
struct nft_set *set,
const struct nft_set_iter *iter,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- nft_setelem_data_activate(ctx->net, set, elem);
+ nft_setelem_data_activate(ctx->net, set, elem_priv);
return 0;
}
@@ -5362,7 +5414,6 @@ static void nft_map_catchall_activate(const struct nft_ctx *ctx,
{
u8 genmask = nft_genmask_next(ctx->net);
struct nft_set_elem_catchall *catchall;
- struct nft_set_elem elem;
struct nft_set_ext *ext;
list_for_each_entry(catchall, &set->catchall_list, list) {
@@ -5370,8 +5421,7 @@ static void nft_map_catchall_activate(const struct nft_ctx *ctx,
if (!nft_set_elem_active(ext, genmask))
continue;
- elem.priv = catchall->elem;
- nft_setelem_data_activate(ctx->net, set, &elem);
+ nft_setelem_data_activate(ctx->net, set, catchall->elem);
break;
}
}
@@ -5498,7 +5548,7 @@ static const struct nla_policy nft_set_elem_policy[NFTA_SET_ELEM_MAX + 1] = {
[NFTA_SET_ELEM_OBJREF] = { .type = NLA_STRING,
.len = NFT_OBJ_MAXNAMELEN - 1 },
[NFTA_SET_ELEM_KEY_END] = { .type = NLA_NESTED },
- [NFTA_SET_ELEM_EXPRESSIONS] = { .type = NLA_NESTED },
+ [NFTA_SET_ELEM_EXPRESSIONS] = NLA_POLICY_NESTED_ARRAY(nft_expr_policy),
};
static const struct nla_policy nft_set_elem_list_policy[NFTA_SET_ELEM_LIST_MAX + 1] = {
@@ -5506,7 +5556,7 @@ static const struct nla_policy nft_set_elem_list_policy[NFTA_SET_ELEM_LIST_MAX +
.len = NFT_TABLE_MAXNAMELEN - 1 },
[NFTA_SET_ELEM_LIST_SET] = { .type = NLA_STRING,
.len = NFT_SET_MAXNAMELEN - 1 },
- [NFTA_SET_ELEM_LIST_ELEMENTS] = { .type = NLA_NESTED },
+ [NFTA_SET_ELEM_LIST_ELEMENTS] = NLA_POLICY_NESTED_ARRAY(nft_set_elem_policy),
[NFTA_SET_ELEM_LIST_SET_ID] = { .type = NLA_U32 },
};
@@ -5550,10 +5600,10 @@ nla_put_failure:
static int nf_tables_fill_setelem(struct sk_buff *skb,
const struct nft_set *set,
- const struct nft_set_elem *elem,
+ const struct nft_elem_priv *elem_priv,
bool reset)
{
- const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
unsigned char *b = skb_tail_pointer(skb);
struct nlattr *nest;
@@ -5639,16 +5689,16 @@ struct nft_set_dump_args {
static int nf_tables_dump_setelem(const struct nft_ctx *ctx,
struct nft_set *set,
const struct nft_set_iter *iter,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
struct nft_set_dump_args *args;
if (nft_set_elem_expired(ext))
return 0;
args = container_of(iter, struct nft_set_dump_args, iter);
- return nf_tables_fill_setelem(args->skb, set, elem, args->reset);
+ return nf_tables_fill_setelem(args->skb, set, elem_priv, args->reset);
}
static void audit_log_nft_set_reset(const struct nft_table *table,
@@ -5665,6 +5715,7 @@ static void audit_log_nft_set_reset(const struct nft_table *table,
struct nft_set_dump_ctx {
const struct nft_set *set;
struct nft_ctx ctx;
+ bool reset;
};
static int nft_set_catchall_dump(struct net *net, struct sk_buff *skb,
@@ -5673,7 +5724,6 @@ static int nft_set_catchall_dump(struct net *net, struct sk_buff *skb,
{
struct nft_set_elem_catchall *catchall;
u8 genmask = nft_genmask_cur(net);
- struct nft_set_elem elem;
struct nft_set_ext *ext;
int ret = 0;
@@ -5683,8 +5733,7 @@ static int nft_set_catchall_dump(struct net *net, struct sk_buff *skb,
nft_set_elem_expired(ext))
continue;
- elem.priv = catchall->elem;
- ret = nf_tables_fill_setelem(skb, set, &elem, reset);
+ ret = nf_tables_fill_setelem(skb, set, catchall->elem, reset);
if (reset && !ret)
audit_log_nft_set_reset(set->table, base_seq, 1);
break;
@@ -5704,7 +5753,6 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
bool set_found = false;
struct nlmsghdr *nlh;
struct nlattr *nest;
- bool reset = false;
u32 portid, seq;
int event;
@@ -5752,12 +5800,9 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
if (nest == NULL)
goto nla_put_failure;
- if (NFNL_MSG_TYPE(cb->nlh->nlmsg_type) == NFT_MSG_GETSETELEM_RESET)
- reset = true;
-
args.cb = cb;
args.skb = skb;
- args.reset = reset;
+ args.reset = dump_ctx->reset;
args.iter.genmask = nft_genmask_cur(net);
args.iter.skip = cb->args[0];
args.iter.count = 0;
@@ -5767,11 +5812,11 @@ static int nf_tables_dump_set(struct sk_buff *skb, struct netlink_callback *cb)
if (!args.iter.err && args.iter.count == cb->args[0])
args.iter.err = nft_set_catchall_dump(net, skb, set,
- reset, cb->seq);
+ dump_ctx->reset, cb->seq);
nla_nest_end(skb, nest);
nlmsg_end(skb, nlh);
- if (reset && args.iter.count > args.iter.skip)
+ if (dump_ctx->reset && args.iter.count > args.iter.skip)
audit_log_nft_set_reset(table, cb->seq,
args.iter.count - args.iter.skip);
@@ -5809,7 +5854,7 @@ static int nf_tables_fill_setelem_info(struct sk_buff *skb,
const struct nft_ctx *ctx, u32 seq,
u32 portid, int event, u16 flags,
const struct nft_set *set,
- const struct nft_set_elem *elem,
+ const struct nft_elem_priv *elem_priv,
bool reset)
{
struct nlmsghdr *nlh;
@@ -5831,7 +5876,7 @@ static int nf_tables_fill_setelem_info(struct sk_buff *skb,
if (nest == NULL)
goto nla_put_failure;
- err = nf_tables_fill_setelem(skb, set, elem, reset);
+ err = nf_tables_fill_setelem(skb, set, elem_priv, reset);
if (err < 0)
goto nla_put_failure;
@@ -5981,7 +6026,7 @@ static int nft_get_set_elem(struct nft_ctx *ctx, struct nft_set *set,
return err;
err = nf_tables_fill_setelem_info(skb, ctx, ctx->seq, ctx->portid,
- NFT_MSG_NEWSETELEM, 0, set, &elem,
+ NFT_MSG_NEWSETELEM, 0, set, elem.priv,
reset);
if (err < 0)
goto err_fill_setelem;
@@ -6017,11 +6062,16 @@ static int nf_tables_getsetelem(struct sk_buff *skb,
}
set = nft_set_lookup(table, nla[NFTA_SET_ELEM_LIST_SET], genmask);
- if (IS_ERR(set))
+ if (IS_ERR(set)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_SET_ELEM_LIST_SET]);
return PTR_ERR(set);
+ }
nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+ if (NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_GETSETELEM_RESET)
+ reset = true;
+
if (info->nlh->nlmsg_flags & NLM_F_DUMP) {
struct netlink_dump_control c = {
.start = nf_tables_dump_set_start,
@@ -6032,6 +6082,7 @@ static int nf_tables_getsetelem(struct sk_buff *skb,
struct nft_set_dump_ctx dump_ctx = {
.set = set,
.ctx = ctx,
+ .reset = reset,
};
c.data = &dump_ctx;
@@ -6041,9 +6092,6 @@ static int nf_tables_getsetelem(struct sk_buff *skb,
if (!nla[NFTA_SET_ELEM_LIST_ELEMENTS])
return -EINVAL;
- if (NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_GETSETELEM_RESET)
- reset = true;
-
nla_for_each_nested(attr, nla[NFTA_SET_ELEM_LIST_ELEMENTS], rem) {
err = nft_get_set_elem(&ctx, set, attr, reset);
if (err < 0) {
@@ -6062,7 +6110,7 @@ static int nf_tables_getsetelem(struct sk_buff *skb,
static void nf_tables_setelem_notify(const struct nft_ctx *ctx,
const struct nft_set *set,
- const struct nft_set_elem *elem,
+ const struct nft_elem_priv *elem_priv,
int event)
{
struct nftables_pernet *nft_net;
@@ -6083,7 +6131,7 @@ static void nf_tables_setelem_notify(const struct nft_ctx *ctx,
flags |= ctx->flags & (NLM_F_CREATE | NLM_F_EXCL);
err = nf_tables_fill_setelem_info(skb, ctx, 0, portid, event, flags,
- set, elem, false);
+ set, elem_priv, false);
if (err < 0) {
kfree_skb(skb);
goto err;
@@ -6158,10 +6206,11 @@ static int nft_set_ext_memcpy(const struct nft_set_ext_tmpl *tmpl, u8 id,
return 0;
}
-void *nft_set_elem_init(const struct nft_set *set,
- const struct nft_set_ext_tmpl *tmpl,
- const u32 *key, const u32 *key_end,
- const u32 *data, u64 timeout, u64 expiration, gfp_t gfp)
+struct nft_elem_priv *nft_set_elem_init(const struct nft_set *set,
+ const struct nft_set_ext_tmpl *tmpl,
+ const u32 *key, const u32 *key_end,
+ const u32 *data,
+ u64 timeout, u64 expiration, gfp_t gfp)
{
struct nft_set_ext *ext;
void *elem;
@@ -6226,10 +6275,11 @@ static void nft_set_elem_expr_destroy(const struct nft_ctx *ctx,
}
/* Drop references and destroy. Called from gc, dynset and abort path. */
-void nft_set_elem_destroy(const struct nft_set *set, void *elem,
+void nft_set_elem_destroy(const struct nft_set *set,
+ const struct nft_elem_priv *elem_priv,
bool destroy_expr)
{
- struct nft_set_ext *ext = nft_set_elem_ext(set, elem);
+ struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
struct nft_ctx ctx = {
.net = read_pnet(&set->net),
.family = set->table->family,
@@ -6240,10 +6290,10 @@ void nft_set_elem_destroy(const struct nft_set *set, void *elem,
nft_data_release(nft_set_ext_data(ext), set->dtype);
if (destroy_expr && nft_set_ext_exists(ext, NFT_SET_EXT_EXPRESSIONS))
nft_set_elem_expr_destroy(&ctx, nft_set_ext_expr(ext));
-
if (nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF))
nft_use_dec(&(*nft_set_ext_obj(ext))->use);
- kfree(elem);
+
+ kfree(elem_priv);
}
EXPORT_SYMBOL_GPL(nft_set_elem_destroy);
@@ -6251,14 +6301,15 @@ EXPORT_SYMBOL_GPL(nft_set_elem_destroy);
* path via nft_setelem_data_deactivate().
*/
void nf_tables_set_elem_destroy(const struct nft_ctx *ctx,
- const struct nft_set *set, void *elem)
+ const struct nft_set *set,
+ const struct nft_elem_priv *elem_priv)
{
- struct nft_set_ext *ext = nft_set_elem_ext(set, elem);
+ struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
if (nft_set_ext_exists(ext, NFT_SET_EXT_EXPRESSIONS))
nft_set_elem_expr_destroy(ctx, nft_set_ext_expr(ext));
- kfree(elem);
+ kfree(elem_priv);
}
int nft_set_elem_expr_clone(const struct nft_ctx *ctx, struct nft_set *set,
@@ -6353,7 +6404,7 @@ EXPORT_SYMBOL_GPL(nft_set_catchall_lookup);
static int nft_setelem_catchall_insert(const struct net *net,
struct nft_set *set,
const struct nft_set_elem *elem,
- struct nft_set_ext **pext)
+ struct nft_elem_priv **priv)
{
struct nft_set_elem_catchall *catchall;
u8 genmask = nft_genmask_next(net);
@@ -6362,7 +6413,7 @@ static int nft_setelem_catchall_insert(const struct net *net,
list_for_each_entry(catchall, &set->catchall_list, list) {
ext = nft_set_elem_ext(set, catchall->elem);
if (nft_set_elem_active(ext, genmask)) {
- *pext = ext;
+ *priv = catchall->elem;
return -EEXIST;
}
}
@@ -6380,22 +6431,23 @@ static int nft_setelem_catchall_insert(const struct net *net,
static int nft_setelem_insert(const struct net *net,
struct nft_set *set,
const struct nft_set_elem *elem,
- struct nft_set_ext **ext, unsigned int flags)
+ struct nft_elem_priv **elem_priv,
+ unsigned int flags)
{
int ret;
if (flags & NFT_SET_ELEM_CATCHALL)
- ret = nft_setelem_catchall_insert(net, set, elem, ext);
+ ret = nft_setelem_catchall_insert(net, set, elem, elem_priv);
else
- ret = set->ops->insert(net, set, elem, ext);
+ ret = set->ops->insert(net, set, elem, elem_priv);
return ret;
}
static bool nft_setelem_is_catchall(const struct nft_set *set,
- const struct nft_set_elem *elem)
+ const struct nft_elem_priv *elem_priv)
{
- struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
if (nft_set_ext_exists(ext, NFT_SET_EXT_FLAGS) &&
*nft_set_ext_flags(ext) & NFT_SET_ELEM_CATCHALL)
@@ -6405,14 +6457,14 @@ static bool nft_setelem_is_catchall(const struct nft_set *set,
}
static void nft_setelem_activate(struct net *net, struct nft_set *set,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
- if (nft_setelem_is_catchall(set, elem)) {
+ if (nft_setelem_is_catchall(set, elem_priv)) {
nft_set_elem_change_active(net, set, ext);
} else {
- set->ops->activate(net, set, elem);
+ set->ops->activate(net, set, elem_priv);
}
}
@@ -6470,12 +6522,12 @@ static int nft_setelem_deactivate(const struct net *net,
static void nft_setelem_catchall_remove(const struct net *net,
const struct nft_set *set,
- const struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
struct nft_set_elem_catchall *catchall, *next;
list_for_each_entry_safe(catchall, next, &set->catchall_list, list) {
- if (catchall->elem == elem->priv) {
+ if (catchall->elem == elem_priv) {
list_del_rcu(&catchall->list);
kfree_rcu(catchall, rcu);
break;
@@ -6485,12 +6537,12 @@ static void nft_setelem_catchall_remove(const struct net *net,
static void nft_setelem_remove(const struct net *net,
const struct nft_set *set,
- const struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- if (nft_setelem_is_catchall(set, elem))
- nft_setelem_catchall_remove(net, set, elem);
+ if (nft_setelem_is_catchall(set, elem_priv))
+ nft_setelem_catchall_remove(net, set, elem_priv);
else
- set->ops->remove(net, set, elem);
+ set->ops->remove(net, set, elem_priv);
}
static bool nft_setelem_valid_key_end(const struct nft_set *set,
@@ -6523,13 +6575,14 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
struct nft_set_ext *ext, *ext2;
struct nft_set_elem elem;
struct nft_set_binding *binding;
+ struct nft_elem_priv *elem_priv;
struct nft_object *obj = NULL;
struct nft_userdata *udata;
struct nft_data_desc desc;
enum nft_registers dreg;
struct nft_trans *trans;
- u64 timeout;
u64 expiration;
+ u64 timeout;
int err, i;
u8 ulen;
@@ -6822,9 +6875,10 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
ext->genmask = nft_genmask_cur(ctx->net);
- err = nft_setelem_insert(ctx->net, set, &elem, &ext2, flags);
+ err = nft_setelem_insert(ctx->net, set, &elem, &elem_priv, flags);
if (err) {
if (err == -EEXIST) {
+ ext2 = nft_set_elem_ext(set, elem_priv);
if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA) ^
nft_set_ext_exists(ext2, NFT_SET_EXT_DATA) ||
nft_set_ext_exists(ext, NFT_SET_EXT_OBJREF) ^
@@ -6858,12 +6912,12 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
}
}
- nft_trans_elem(trans) = elem;
+ nft_trans_elem_priv(trans) = elem.priv;
nft_trans_commit_list_add_tail(ctx->net, trans);
return 0;
err_set_full:
- nft_setelem_remove(ctx->net, set, &elem);
+ nft_setelem_remove(ctx->net, set, elem.priv);
err_element_clash:
kfree(trans);
err_elem_free:
@@ -6911,8 +6965,10 @@ static int nf_tables_newsetelem(struct sk_buff *skb,
set = nft_set_lookup_global(net, table, nla[NFTA_SET_ELEM_LIST_SET],
nla[NFTA_SET_ELEM_LIST_SET_ID], genmask);
- if (IS_ERR(set))
+ if (IS_ERR(set)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_SET_ELEM_LIST_SET]);
return PTR_ERR(set);
+ }
if (!list_empty(&set->bindings) &&
(set->flags & (NFT_SET_CONSTANT | NFT_SET_ANONYMOUS)))
@@ -6962,9 +7018,9 @@ void nft_data_hold(const struct nft_data *data, enum nft_data_types type)
static void nft_setelem_data_activate(const struct net *net,
const struct nft_set *set,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA))
nft_data_hold(nft_set_ext_data(ext), set->dtype);
@@ -6974,9 +7030,9 @@ static void nft_setelem_data_activate(const struct net *net,
void nft_setelem_data_deactivate(const struct net *net,
const struct nft_set *set,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
if (nft_set_ext_exists(ext, NFT_SET_EXT_DATA))
nft_data_release(nft_set_ext_data(ext), set->dtype);
@@ -7061,9 +7117,9 @@ static int nft_del_setelem(struct nft_ctx *ctx, struct nft_set *set,
if (err < 0)
goto fail_ops;
- nft_setelem_data_deactivate(ctx->net, set, &elem);
+ nft_setelem_data_deactivate(ctx->net, set, elem.priv);
- nft_trans_elem(trans) = elem;
+ nft_trans_elem_priv(trans) = elem.priv;
nft_trans_commit_list_add_tail(ctx->net, trans);
return 0;
@@ -7081,36 +7137,29 @@ fail_elem:
static int nft_setelem_flush(const struct nft_ctx *ctx,
struct nft_set *set,
const struct nft_set_iter *iter,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
struct nft_trans *trans;
- int err;
trans = nft_trans_alloc_gfp(ctx, NFT_MSG_DELSETELEM,
sizeof(struct nft_trans_elem), GFP_ATOMIC);
if (!trans)
return -ENOMEM;
- if (!set->ops->flush(ctx->net, set, elem->priv)) {
- err = -ENOENT;
- goto err1;
- }
+ set->ops->flush(ctx->net, set, elem_priv);
set->ndeact++;
- nft_setelem_data_deactivate(ctx->net, set, elem);
+ nft_setelem_data_deactivate(ctx->net, set, elem_priv);
nft_trans_elem_set(trans) = set;
- nft_trans_elem(trans) = *elem;
+ nft_trans_elem_priv(trans) = elem_priv;
nft_trans_commit_list_add_tail(ctx->net, trans);
return 0;
-err1:
- kfree(trans);
- return err;
}
static int __nft_set_catchall_flush(const struct nft_ctx *ctx,
struct nft_set *set,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
struct nft_trans *trans;
@@ -7119,9 +7168,9 @@ static int __nft_set_catchall_flush(const struct nft_ctx *ctx,
if (!trans)
return -ENOMEM;
- nft_setelem_data_deactivate(ctx->net, set, elem);
+ nft_setelem_data_deactivate(ctx->net, set, elem_priv);
nft_trans_elem_set(trans) = set;
- nft_trans_elem(trans) = *elem;
+ nft_trans_elem_priv(trans) = elem_priv;
nft_trans_commit_list_add_tail(ctx->net, trans);
return 0;
@@ -7132,7 +7181,6 @@ static int nft_set_catchall_flush(const struct nft_ctx *ctx,
{
u8 genmask = nft_genmask_next(ctx->net);
struct nft_set_elem_catchall *catchall;
- struct nft_set_elem elem;
struct nft_set_ext *ext;
int ret = 0;
@@ -7141,8 +7189,7 @@ static int nft_set_catchall_flush(const struct nft_ctx *ctx,
if (!nft_set_elem_active(ext, genmask))
continue;
- elem.priv = catchall->elem;
- ret = __nft_set_catchall_flush(ctx, set, &elem);
+ ret = __nft_set_catchall_flush(ctx, set, catchall->elem);
if (ret < 0)
break;
nft_set_elem_change_active(ctx->net, set, ext);
@@ -7187,8 +7234,10 @@ static int nf_tables_delsetelem(struct sk_buff *skb,
}
set = nft_set_lookup(table, nla[NFTA_SET_ELEM_LIST_SET], genmask);
- if (IS_ERR(set))
+ if (IS_ERR(set)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_SET_ELEM_LIST_SET]);
return PTR_ERR(set);
+ }
if (nft_set_is_anonymous(set))
return -EOPNOTSUPP;
@@ -7617,28 +7666,26 @@ static void audit_log_obj_reset(const struct nft_table *table,
kfree(buf);
}
-struct nft_obj_filter {
+struct nft_obj_dump_ctx {
+ unsigned int s_idx;
char *table;
u32 type;
+ bool reset;
};
static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
{
const struct nfgenmsg *nfmsg = nlmsg_data(cb->nlh);
- const struct nft_table *table;
- unsigned int idx = 0, s_idx = cb->args[0];
- struct nft_obj_filter *filter = cb->data;
+ struct nft_obj_dump_ctx *ctx = (void *)cb->ctx;
struct net *net = sock_net(skb->sk);
int family = nfmsg->nfgen_family;
struct nftables_pernet *nft_net;
+ const struct nft_table *table;
unsigned int entries = 0;
struct nft_object *obj;
- bool reset = false;
+ unsigned int idx = 0;
int rc = 0;
- if (NFNL_MSG_TYPE(cb->nlh->nlmsg_type) == NFT_MSG_GETOBJ_RESET)
- reset = true;
-
rcu_read_lock();
nft_net = nft_pernet(net);
cb->seq = READ_ONCE(nft_net->base_seq);
@@ -7651,17 +7698,12 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
list_for_each_entry_rcu(obj, &table->objects, list) {
if (!nft_is_active(net, obj))
goto cont;
- if (idx < s_idx)
+ if (idx < ctx->s_idx)
goto cont;
- if (idx > s_idx)
- memset(&cb->args[1], 0,
- sizeof(cb->args) - sizeof(cb->args[0]));
- if (filter && filter->table &&
- strcmp(filter->table, table->name))
+ if (ctx->table && strcmp(ctx->table, table->name))
goto cont;
- if (filter &&
- filter->type != NFT_OBJECT_UNSPEC &&
- obj->ops->type->type != filter->type)
+ if (ctx->type != NFT_OBJECT_UNSPEC &&
+ obj->ops->type->type != ctx->type)
goto cont;
rc = nf_tables_fill_obj_info(skb, net,
@@ -7670,7 +7712,7 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
NFT_MSG_NEWOBJ,
NLM_F_MULTI | NLM_F_APPEND,
table->family, table,
- obj, reset);
+ obj, ctx->reset);
if (rc < 0)
break;
@@ -7679,51 +7721,44 @@ static int nf_tables_dump_obj(struct sk_buff *skb, struct netlink_callback *cb)
cont:
idx++;
}
- if (reset && entries)
+ if (ctx->reset && entries)
audit_log_obj_reset(table, nft_net->base_seq, entries);
if (rc < 0)
break;
}
rcu_read_unlock();
- cb->args[0] = idx;
+ ctx->s_idx = idx;
return skb->len;
}
static int nf_tables_dump_obj_start(struct netlink_callback *cb)
{
+ struct nft_obj_dump_ctx *ctx = (void *)cb->ctx;
const struct nlattr * const *nla = cb->data;
- struct nft_obj_filter *filter = NULL;
- if (nla[NFTA_OBJ_TABLE] || nla[NFTA_OBJ_TYPE]) {
- filter = kzalloc(sizeof(*filter), GFP_ATOMIC);
- if (!filter)
+ BUILD_BUG_ON(sizeof(*ctx) > sizeof(cb->ctx));
+
+ if (nla[NFTA_OBJ_TABLE]) {
+ ctx->table = nla_strdup(nla[NFTA_OBJ_TABLE], GFP_ATOMIC);
+ if (!ctx->table)
return -ENOMEM;
+ }
- if (nla[NFTA_OBJ_TABLE]) {
- filter->table = nla_strdup(nla[NFTA_OBJ_TABLE], GFP_ATOMIC);
- if (!filter->table) {
- kfree(filter);
- return -ENOMEM;
- }
- }
+ if (nla[NFTA_OBJ_TYPE])
+ ctx->type = ntohl(nla_get_be32(nla[NFTA_OBJ_TYPE]));
- if (nla[NFTA_OBJ_TYPE])
- filter->type = ntohl(nla_get_be32(nla[NFTA_OBJ_TYPE]));
- }
+ if (NFNL_MSG_TYPE(cb->nlh->nlmsg_type) == NFT_MSG_GETOBJ_RESET)
+ ctx->reset = true;
- cb->data = filter;
return 0;
}
static int nf_tables_dump_obj_done(struct netlink_callback *cb)
{
- struct nft_obj_filter *filter = cb->data;
+ struct nft_obj_dump_ctx *ctx = (void *)cb->ctx;
- if (filter) {
- kfree(filter->table);
- kfree(filter);
- }
+ kfree(ctx->table);
return 0;
}
@@ -8690,6 +8725,7 @@ static int nf_tables_getflowtable(struct sk_buff *skb,
const struct nfnl_info *info,
const struct nlattr * const nla[])
{
+ struct netlink_ext_ack *extack = info->extack;
u8 genmask = nft_genmask_cur(info->net);
u8 family = info->nfmsg->nfgen_family;
struct nft_flowtable *flowtable;
@@ -8715,13 +8751,17 @@ static int nf_tables_getflowtable(struct sk_buff *skb,
table = nft_table_lookup(net, nla[NFTA_FLOWTABLE_TABLE], family,
genmask, 0);
- if (IS_ERR(table))
+ if (IS_ERR(table)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_FLOWTABLE_TABLE]);
return PTR_ERR(table);
+ }
flowtable = nft_flowtable_lookup(table, nla[NFTA_FLOWTABLE_NAME],
genmask);
- if (IS_ERR(flowtable))
+ if (IS_ERR(flowtable)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_FLOWTABLE_NAME]);
return PTR_ERR(flowtable);
+ }
skb2 = alloc_skb(NLMSG_GOODSIZE, GFP_ATOMIC);
if (!skb2)
@@ -8977,7 +9017,7 @@ static const struct nfnl_callback nf_tables_cb[NFT_MSG_MAX] = {
.policy = nft_rule_policy,
},
[NFT_MSG_GETRULE_RESET] = {
- .call = nf_tables_getrule,
+ .call = nf_tables_getrule_reset,
.type = NFNL_CB_RCU,
.attr_count = NFTA_RULE_MAX,
.policy = nft_rule_policy,
@@ -9227,7 +9267,7 @@ static void nft_commit_release(struct nft_trans *trans)
case NFT_MSG_DESTROYSETELEM:
nf_tables_set_elem_destroy(&trans->ctx,
nft_trans_elem_set(trans),
- nft_trans_elem(trans).priv);
+ nft_trans_elem_priv(trans));
break;
case NFT_MSG_DELOBJ:
case NFT_MSG_DESTROYOBJ:
@@ -9456,16 +9496,12 @@ void nft_chain_del(struct nft_chain *chain)
static void nft_trans_gc_setelem_remove(struct nft_ctx *ctx,
struct nft_trans_gc *trans)
{
- void **priv = trans->priv;
+ struct nft_elem_priv **priv = trans->priv;
unsigned int i;
for (i = 0; i < trans->count; i++) {
- struct nft_set_elem elem = {
- .priv = priv[i],
- };
-
- nft_setelem_data_deactivate(ctx->net, trans->set, &elem);
- nft_setelem_remove(ctx->net, trans->set, &elem);
+ nft_setelem_data_deactivate(ctx->net, trans->set, priv[i]);
+ nft_setelem_remove(ctx->net, trans->set, priv[i]);
}
}
@@ -9478,7 +9514,7 @@ void nft_trans_gc_destroy(struct nft_trans_gc *trans)
static void nft_trans_gc_trans_free(struct rcu_head *rcu)
{
- struct nft_set_elem elem = {};
+ struct nft_elem_priv *elem_priv;
struct nft_trans_gc *trans;
struct nft_ctx ctx = {};
unsigned int i;
@@ -9487,11 +9523,11 @@ static void nft_trans_gc_trans_free(struct rcu_head *rcu)
ctx.net = read_pnet(&trans->set->net);
for (i = 0; i < trans->count; i++) {
- elem.priv = trans->priv[i];
- if (!nft_setelem_is_catchall(trans->set, &elem))
+ elem_priv = trans->priv[i];
+ if (!nft_setelem_is_catchall(trans->set, elem_priv))
atomic_dec(&trans->set->nelems);
- nf_tables_set_elem_destroy(&ctx, trans->set, elem.priv);
+ nf_tables_set_elem_destroy(&ctx, trans->set, elem_priv);
}
nft_trans_gc_destroy(trans);
@@ -10059,9 +10095,9 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
case NFT_MSG_NEWSETELEM:
te = (struct nft_trans_elem *)trans->data;
- nft_setelem_activate(net, te->set, &te->elem);
+ nft_setelem_activate(net, te->set, te->elem_priv);
nf_tables_setelem_notify(&trans->ctx, te->set,
- &te->elem,
+ te->elem_priv,
NFT_MSG_NEWSETELEM);
if (te->set->ops->commit &&
list_empty(&te->set->pending_update)) {
@@ -10075,10 +10111,10 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
te = (struct nft_trans_elem *)trans->data;
nf_tables_setelem_notify(&trans->ctx, te->set,
- &te->elem,
+ te->elem_priv,
trans->msg_type);
- nft_setelem_remove(net, te->set, &te->elem);
- if (!nft_setelem_is_catchall(te->set, &te->elem)) {
+ nft_setelem_remove(net, te->set, te->elem_priv);
+ if (!nft_setelem_is_catchall(te->set, te->elem_priv)) {
atomic_dec(&te->set->nelems);
te->set->ndeact--;
}
@@ -10198,7 +10234,7 @@ static void nf_tables_abort_release(struct nft_trans *trans)
break;
case NFT_MSG_NEWSETELEM:
nft_set_elem_destroy(nft_trans_elem_set(trans),
- nft_trans_elem(trans).priv, true);
+ nft_trans_elem_priv(trans), true);
break;
case NFT_MSG_NEWOBJ:
nft_obj_destroy(&trans->ctx, nft_trans_obj(trans));
@@ -10345,8 +10381,8 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
break;
}
te = (struct nft_trans_elem *)trans->data;
- nft_setelem_remove(net, te->set, &te->elem);
- if (!nft_setelem_is_catchall(te->set, &te->elem))
+ nft_setelem_remove(net, te->set, te->elem_priv);
+ if (!nft_setelem_is_catchall(te->set, te->elem_priv))
atomic_dec(&te->set->nelems);
if (te->set->ops->abort &&
@@ -10359,9 +10395,9 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
case NFT_MSG_DESTROYSETELEM:
te = (struct nft_trans_elem *)trans->data;
- nft_setelem_data_activate(net, te->set, &te->elem);
- nft_setelem_activate(net, te->set, &te->elem);
- if (!nft_setelem_is_catchall(te->set, &te->elem))
+ nft_setelem_data_activate(net, te->set, te->elem_priv);
+ nft_setelem_activate(net, te->set, te->elem_priv);
+ if (!nft_setelem_is_catchall(te->set, te->elem_priv))
te->set->ndeact--;
if (te->set->ops->abort &&
@@ -10537,9 +10573,9 @@ static int nft_check_loops(const struct nft_ctx *ctx,
static int nf_tables_loop_check_setelem(const struct nft_ctx *ctx,
struct nft_set *set,
const struct nft_set_iter *iter,
- struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
+ const struct nft_set_ext *ext = nft_set_elem_ext(set, elem_priv);
if (nft_set_ext_exists(ext, NFT_SET_EXT_FLAGS) &&
*nft_set_ext_flags(ext) & NFT_SET_ELEM_INTERVAL_END)
diff --git a/net/netfilter/nf_tables_core.c b/net/netfilter/nf_tables_core.c
index 4d0ce12221f6..8b536d7ef6c2 100644
--- a/net/netfilter/nf_tables_core.c
+++ b/net/netfilter/nf_tables_core.c
@@ -115,7 +115,7 @@ static noinline void __nft_trace_verdict(const struct nft_pktinfo *pkt,
{
enum nft_trace_types type;
- switch (regs->verdict.code) {
+ switch (regs->verdict.code & NF_VERDICT_MASK) {
case NFT_CONTINUE:
case NFT_RETURN:
type = NFT_TRACETYPE_RETURN;
@@ -308,10 +308,11 @@ next_rule:
switch (regs.verdict.code & NF_VERDICT_MASK) {
case NF_ACCEPT:
- case NF_DROP:
case NF_QUEUE:
case NF_STOLEN:
return regs.verdict.code;
+ case NF_DROP:
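+ /* Drop with an explicit reason so the packet is visible to
+ * drop monitoring.
+ */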
+ return NF_DROP_REASON(pkt->skb, SKB_DROP_REASON_NETFILTER_DROP, EPERM);
}
switch (regs.verdict.code) {
@@ -342,6 +343,9 @@ next_rule:
if (static_branch_unlikely(&nft_counters_enabled))
nft_update_chain_stats(basechain, pkt);
+ if (nft_base_chain(basechain)->policy == NF_DROP)
+ return NF_DROP_REASON(pkt->skb, SKB_DROP_REASON_NETFILTER_DROP, EPERM);
+
return nft_base_chain(basechain)->policy;
}
EXPORT_SYMBOL_GPL(nft_do_chain);
diff --git a/net/netfilter/nf_tables_trace.c b/net/netfilter/nf_tables_trace.c
index 6d41c0bd3d78..a83637e3f455 100644
--- a/net/netfilter/nf_tables_trace.c
+++ b/net/netfilter/nf_tables_trace.c
@@ -258,17 +258,21 @@ void nft_trace_notify(const struct nft_pktinfo *pkt,
case __NFT_TRACETYPE_MAX:
break;
case NFT_TRACETYPE_RETURN:
- case NFT_TRACETYPE_RULE:
+ case NFT_TRACETYPE_RULE: {
+ unsigned int v;
+
if (nft_verdict_dump(skb, NFTA_TRACE_VERDICT, verdict))
goto nla_put_failure;
/* pkt->skb undefined iff NF_STOLEN, disable dump */
- if (verdict->code == NF_STOLEN)
+ v = verdict->code & NF_VERDICT_MASK;
+ if (v == NF_STOLEN)
info->packet_dumped = true;
else
mark = pkt->skb->mark;
break;
+ }
case NFT_TRACETYPE_POLICY:
mark = pkt->skb->mark;
diff --git a/net/netfilter/nfnetlink_queue.c b/net/netfilter/nfnetlink_queue.c
index 556bc902af00..171d1f52d3dd 100644
--- a/net/netfilter/nfnetlink_queue.c
+++ b/net/netfilter/nfnetlink_queue.c
@@ -228,19 +228,22 @@ find_dequeue_entry(struct nfqnl_instance *queue, unsigned int id)
static void nfqnl_reinject(struct nf_queue_entry *entry, unsigned int verdict)
{
const struct nf_ct_hook *ct_hook;
- int err;
if (verdict == NF_ACCEPT ||
verdict == NF_REPEAT ||
verdict == NF_STOP) {
rcu_read_lock();
ct_hook = rcu_dereference(nf_ct_hook);
- if (ct_hook) {
- err = ct_hook->update(entry->state.net, entry->skb);
- if (err < 0)
- verdict = NF_DROP;
- }
+ if (ct_hook)
+ verdict = ct_hook->update(entry->state.net, entry->skb);
rcu_read_unlock();
+
+ switch (verdict & NF_VERDICT_MASK) {
+ case NF_STOLEN:
+ nf_queue_entry_free(entry);
+ return;
+ }
+
}
nf_reinject(entry, verdict);
}
diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c
index 5c5cc01c73c5..b18a79039125 100644
--- a/net/netfilter/nft_dynset.c
+++ b/net/netfilter/nft_dynset.c
@@ -44,33 +44,34 @@ static int nft_dynset_expr_setup(const struct nft_dynset *priv,
return 0;
}
-static void *nft_dynset_new(struct nft_set *set, const struct nft_expr *expr,
- struct nft_regs *regs)
+static struct nft_elem_priv *nft_dynset_new(struct nft_set *set,
+ const struct nft_expr *expr,
+ struct nft_regs *regs)
{
const struct nft_dynset *priv = nft_expr_priv(expr);
struct nft_set_ext *ext;
+ void *elem_priv;
u64 timeout;
- void *elem;
if (!atomic_add_unless(&set->nelems, 1, set->size))
return NULL;
timeout = priv->timeout ? : set->timeout;
- elem = nft_set_elem_init(set, &priv->tmpl,
- &regs->data[priv->sreg_key], NULL,
- &regs->data[priv->sreg_data],
- timeout, 0, GFP_ATOMIC);
- if (IS_ERR(elem))
+ elem_priv = nft_set_elem_init(set, &priv->tmpl,
+ &regs->data[priv->sreg_key], NULL,
+ &regs->data[priv->sreg_data],
+ timeout, 0, GFP_ATOMIC);
+ if (IS_ERR(elem_priv))
goto err1;
- ext = nft_set_elem_ext(set, elem);
+ ext = nft_set_elem_ext(set, elem_priv);
if (priv->num_exprs && nft_dynset_expr_setup(priv, ext) < 0)
goto err2;
- return elem;
+ return elem_priv;
err2:
- nft_set_elem_destroy(set, elem, false);
+ nft_set_elem_destroy(set, elem_priv, false);
err1:
if (set->size)
atomic_dec(&set->nelems);
diff --git a/net/netfilter/nft_set_bitmap.c b/net/netfilter/nft_set_bitmap.c
index 1e5e7a181e0b..32df7a16835d 100644
--- a/net/netfilter/nft_set_bitmap.c
+++ b/net/netfilter/nft_set_bitmap.c
@@ -13,6 +13,7 @@
#include <net/netfilter/nf_tables_core.h>
struct nft_bitmap_elem {
+ struct nft_elem_priv priv;
struct list_head head;
struct nft_set_ext ext;
};
@@ -104,8 +105,9 @@ nft_bitmap_elem_find(const struct nft_set *set, struct nft_bitmap_elem *this,
return NULL;
}
-static void *nft_bitmap_get(const struct net *net, const struct nft_set *set,
- const struct nft_set_elem *elem, unsigned int flags)
+static struct nft_elem_priv *
+nft_bitmap_get(const struct net *net, const struct nft_set *set,
+ const struct nft_set_elem *elem, unsigned int flags)
{
const struct nft_bitmap *priv = nft_set_priv(set);
u8 genmask = nft_genmask_cur(net);
@@ -116,23 +118,23 @@ static void *nft_bitmap_get(const struct net *net, const struct nft_set *set,
!nft_set_elem_active(&be->ext, genmask))
continue;
- return be;
+ return &be->priv;
}
return ERR_PTR(-ENOENT);
}
static int nft_bitmap_insert(const struct net *net, const struct nft_set *set,
const struct nft_set_elem *elem,
- struct nft_set_ext **ext)
+ struct nft_elem_priv **elem_priv)
{
+ struct nft_bitmap_elem *new = nft_elem_priv_cast(elem->priv), *be;
struct nft_bitmap *priv = nft_set_priv(set);
- struct nft_bitmap_elem *new = elem->priv, *be;
u8 genmask = nft_genmask_next(net);
u32 idx, off;
be = nft_bitmap_elem_find(set, new, genmask);
if (be) {
- *ext = &be->ext;
+ *elem_priv = &be->priv;
return -EEXIST;
}
@@ -144,12 +146,11 @@ static int nft_bitmap_insert(const struct net *net, const struct nft_set *set,
return 0;
}
-static void nft_bitmap_remove(const struct net *net,
- const struct nft_set *set,
- const struct nft_set_elem *elem)
+static void nft_bitmap_remove(const struct net *net, const struct nft_set *set,
+ struct nft_elem_priv *elem_priv)
{
+ struct nft_bitmap_elem *be = nft_elem_priv_cast(elem_priv);
struct nft_bitmap *priv = nft_set_priv(set);
- struct nft_bitmap_elem *be = elem->priv;
u8 genmask = nft_genmask_next(net);
u32 idx, off;
@@ -161,10 +162,10 @@ static void nft_bitmap_remove(const struct net *net,
static void nft_bitmap_activate(const struct net *net,
const struct nft_set *set,
- const struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
+ struct nft_bitmap_elem *be = nft_elem_priv_cast(elem_priv);
struct nft_bitmap *priv = nft_set_priv(set);
- struct nft_bitmap_elem *be = elem->priv;
u8 genmask = nft_genmask_next(net);
u32 idx, off;
@@ -174,28 +175,27 @@ static void nft_bitmap_activate(const struct net *net,
nft_set_elem_change_active(net, set, &be->ext);
}
-static bool nft_bitmap_flush(const struct net *net,
- const struct nft_set *set, void *_be)
+static void nft_bitmap_flush(const struct net *net,
+ const struct nft_set *set,
+ struct nft_elem_priv *elem_priv)
{
+ struct nft_bitmap_elem *be = nft_elem_priv_cast(elem_priv);
struct nft_bitmap *priv = nft_set_priv(set);
u8 genmask = nft_genmask_next(net);
- struct nft_bitmap_elem *be = _be;
u32 idx, off;
nft_bitmap_location(set, nft_set_ext_key(&be->ext), &idx, &off);
/* Enter 10 state, similar to deactivation. */
priv->bitmap[idx] &= ~(genmask << off);
nft_set_elem_change_active(net, set, &be->ext);
-
- return true;
}
-static void *nft_bitmap_deactivate(const struct net *net,
- const struct nft_set *set,
- const struct nft_set_elem *elem)
+static struct nft_elem_priv *
+nft_bitmap_deactivate(const struct net *net, const struct nft_set *set,
+ const struct nft_set_elem *elem)
{
+ struct nft_bitmap_elem *this = nft_elem_priv_cast(elem->priv), *be;
struct nft_bitmap *priv = nft_set_priv(set);
- struct nft_bitmap_elem *this = elem->priv, *be;
u8 genmask = nft_genmask_next(net);
u32 idx, off;
@@ -209,7 +209,7 @@ static void *nft_bitmap_deactivate(const struct net *net,
priv->bitmap[idx] &= ~(genmask << off);
nft_set_elem_change_active(net, set, &be->ext);
- return be;
+ return &be->priv;
}
static void nft_bitmap_walk(const struct nft_ctx *ctx,
@@ -218,7 +218,6 @@ static void nft_bitmap_walk(const struct nft_ctx *ctx,
{
const struct nft_bitmap *priv = nft_set_priv(set);
struct nft_bitmap_elem *be;
- struct nft_set_elem elem;
list_for_each_entry_rcu(be, &priv->list, head) {
if (iter->count < iter->skip)
@@ -226,9 +225,7 @@ static void nft_bitmap_walk(const struct nft_ctx *ctx,
if (!nft_set_elem_active(&be->ext, iter->genmask))
goto cont;
- elem.priv = be;
-
- iter->err = iter->fn(ctx, set, iter, &elem);
+ iter->err = iter->fn(ctx, set, iter, &be->priv);
if (iter->err < 0)
return;
@@ -265,6 +262,8 @@ static int nft_bitmap_init(const struct nft_set *set,
{
struct nft_bitmap *priv = nft_set_priv(set);
+ BUILD_BUG_ON(offsetof(struct nft_bitmap_elem, priv) != 0);
+
INIT_LIST_HEAD(&priv->list);
priv->bitmap_size = nft_bitmap_size(set->klen);
@@ -278,7 +277,7 @@ static void nft_bitmap_destroy(const struct nft_ctx *ctx,
struct nft_bitmap_elem *be, *n;
list_for_each_entry_safe(be, n, &priv->list, head)
- nf_tables_set_elem_destroy(ctx, set, be);
+ nf_tables_set_elem_destroy(ctx, set, &be->priv);
}
static bool nft_bitmap_estimate(const struct nft_set_desc *desc, u32 features,
diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
index 2013de934cef..6c2061bfdae6 100644
--- a/net/netfilter/nft_set_hash.c
+++ b/net/netfilter/nft_set_hash.c
@@ -27,6 +27,7 @@ struct nft_rhash {
};
struct nft_rhash_elem {
+ struct nft_elem_priv priv;
struct rhash_head node;
struct nft_set_ext ext;
};
@@ -95,8 +96,9 @@ bool nft_rhash_lookup(const struct net *net, const struct nft_set *set,
return !!he;
}
-static void *nft_rhash_get(const struct net *net, const struct nft_set *set,
- const struct nft_set_elem *elem, unsigned int flags)
+static struct nft_elem_priv *
+nft_rhash_get(const struct net *net, const struct nft_set *set,
+ const struct nft_set_elem *elem, unsigned int flags)
{
struct nft_rhash *priv = nft_set_priv(set);
struct nft_rhash_elem *he;
@@ -108,13 +110,14 @@ static void *nft_rhash_get(const struct net *net, const struct nft_set *set,
he = rhashtable_lookup(&priv->ht, &arg, nft_rhash_params);
if (he != NULL)
- return he;
+ return &he->priv;
return ERR_PTR(-ENOENT);
}
static bool nft_rhash_update(struct nft_set *set, const u32 *key,
- void *(*new)(struct nft_set *,
+ struct nft_elem_priv *
+ (*new)(struct nft_set *,
const struct nft_expr *,
struct nft_regs *regs),
const struct nft_expr *expr,
@@ -123,6 +126,7 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key,
{
struct nft_rhash *priv = nft_set_priv(set);
struct nft_rhash_elem *he, *prev;
+ struct nft_elem_priv *elem_priv;
struct nft_rhash_cmp_arg arg = {
.genmask = NFT_GENMASK_ANY,
.set = set,
@@ -133,10 +137,11 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key,
if (he != NULL)
goto out;
- he = new(set, expr, regs);
- if (he == NULL)
+ elem_priv = new(set, expr, regs);
+ if (!elem_priv)
goto err1;
+ he = nft_elem_priv_cast(elem_priv);
prev = rhashtable_lookup_get_insert_key(&priv->ht, &arg, &he->node,
nft_rhash_params);
if (IS_ERR(prev))
@@ -144,7 +149,7 @@ static bool nft_rhash_update(struct nft_set *set, const u32 *key,
/* Another cpu may race to insert the element with the same key */
if (prev) {
- nft_set_elem_destroy(set, he, true);
+ nft_set_elem_destroy(set, &he->priv, true);
atomic_dec(&set->nelems);
he = prev;
}
@@ -154,7 +159,7 @@ out:
return true;
err2:
- nft_set_elem_destroy(set, he, true);
+ nft_set_elem_destroy(set, &he->priv, true);
atomic_dec(&set->nelems);
err1:
return false;
@@ -162,10 +167,10 @@ err1:
static int nft_rhash_insert(const struct net *net, const struct nft_set *set,
const struct nft_set_elem *elem,
- struct nft_set_ext **ext)
+ struct nft_elem_priv **elem_priv)
{
+ struct nft_rhash_elem *he = nft_elem_priv_cast(elem->priv);
struct nft_rhash *priv = nft_set_priv(set);
- struct nft_rhash_elem *he = elem->priv;
struct nft_rhash_cmp_arg arg = {
.genmask = nft_genmask_next(net),
.set = set,
@@ -178,33 +183,32 @@ static int nft_rhash_insert(const struct net *net, const struct nft_set *set,
if (IS_ERR(prev))
return PTR_ERR(prev);
if (prev) {
- *ext = &prev->ext;
+ *elem_priv = &prev->priv;
return -EEXIST;
}
return 0;
}
static void nft_rhash_activate(const struct net *net, const struct nft_set *set,
- const struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- struct nft_rhash_elem *he = elem->priv;
+ struct nft_rhash_elem *he = nft_elem_priv_cast(elem_priv);
nft_set_elem_change_active(net, set, &he->ext);
}
-static bool nft_rhash_flush(const struct net *net,
- const struct nft_set *set, void *priv)
+static void nft_rhash_flush(const struct net *net,
+ const struct nft_set *set,
+ struct nft_elem_priv *elem_priv)
{
- struct nft_rhash_elem *he = priv;
+ struct nft_rhash_elem *he = nft_elem_priv_cast(elem_priv);
nft_set_elem_change_active(net, set, &he->ext);
-
- return true;
}
-static void *nft_rhash_deactivate(const struct net *net,
- const struct nft_set *set,
- const struct nft_set_elem *elem)
+static struct nft_elem_priv *
+nft_rhash_deactivate(const struct net *net, const struct nft_set *set,
+ const struct nft_set_elem *elem)
{
struct nft_rhash *priv = nft_set_priv(set);
struct nft_rhash_elem *he;
@@ -221,15 +225,15 @@ static void *nft_rhash_deactivate(const struct net *net,
rcu_read_unlock();
- return he;
+ return &he->priv;
}
static void nft_rhash_remove(const struct net *net,
const struct nft_set *set,
- const struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
+ struct nft_rhash_elem *he = nft_elem_priv_cast(elem_priv);
struct nft_rhash *priv = nft_set_priv(set);
- struct nft_rhash_elem *he = elem->priv;
rhashtable_remove_fast(&priv->ht, &he->node, nft_rhash_params);
}
@@ -260,7 +264,6 @@ static void nft_rhash_walk(const struct nft_ctx *ctx, struct nft_set *set,
struct nft_rhash *priv = nft_set_priv(set);
struct nft_rhash_elem *he;
struct rhashtable_iter hti;
- struct nft_set_elem elem;
rhashtable_walk_enter(&priv->ht, &hti);
rhashtable_walk_start(&hti);
@@ -280,9 +283,7 @@ static void nft_rhash_walk(const struct nft_ctx *ctx, struct nft_set *set,
if (!nft_set_elem_active(&he->ext, iter->genmask))
goto cont;
- elem.priv = he;
-
- iter->err = iter->fn(ctx, set, iter, &elem);
+ iter->err = iter->fn(ctx, set, iter, &he->priv);
if (iter->err < 0)
break;
@@ -406,6 +407,8 @@ static int nft_rhash_init(const struct nft_set *set,
struct rhashtable_params params = nft_rhash_params;
int err;
+ BUILD_BUG_ON(offsetof(struct nft_rhash_elem, priv) != 0);
+
params.nelem_hint = desc->size ?: NFT_RHASH_ELEMENT_HINT;
params.key_len = set->klen;
@@ -428,8 +431,9 @@ struct nft_rhash_ctx {
static void nft_rhash_elem_destroy(void *ptr, void *arg)
{
struct nft_rhash_ctx *rhash_ctx = arg;
+ struct nft_rhash_elem *he = ptr;
- nf_tables_set_elem_destroy(&rhash_ctx->ctx, rhash_ctx->set, ptr);
+ nf_tables_set_elem_destroy(&rhash_ctx->ctx, rhash_ctx->set, &he->priv);
}
static void nft_rhash_destroy(const struct nft_ctx *ctx,
@@ -476,6 +480,7 @@ struct nft_hash {
};
struct nft_hash_elem {
+ struct nft_elem_priv priv;
struct hlist_node node;
struct nft_set_ext ext;
};
@@ -501,8 +506,9 @@ bool nft_hash_lookup(const struct net *net, const struct nft_set *set,
return false;
}
-static void *nft_hash_get(const struct net *net, const struct nft_set *set,
- const struct nft_set_elem *elem, unsigned int flags)
+static struct nft_elem_priv *
+nft_hash_get(const struct net *net, const struct nft_set *set,
+ const struct nft_set_elem *elem, unsigned int flags)
{
struct nft_hash *priv = nft_set_priv(set);
u8 genmask = nft_genmask_cur(net);
@@ -514,7 +520,7 @@ static void *nft_hash_get(const struct net *net, const struct nft_set *set,
hlist_for_each_entry_rcu(he, &priv->table[hash], node) {
if (!memcmp(nft_set_ext_key(&he->ext), elem->key.val.data, set->klen) &&
nft_set_elem_active(&he->ext, genmask))
- return he;
+ return &he->priv;
}
return ERR_PTR(-ENOENT);
}
@@ -562,9 +568,9 @@ static u32 nft_jhash(const struct nft_set *set, const struct nft_hash *priv,
static int nft_hash_insert(const struct net *net, const struct nft_set *set,
const struct nft_set_elem *elem,
- struct nft_set_ext **ext)
+ struct nft_elem_priv **elem_priv)
{
- struct nft_hash_elem *this = elem->priv, *he;
+ struct nft_hash_elem *this = nft_elem_priv_cast(elem->priv), *he;
struct nft_hash *priv = nft_set_priv(set);
u8 genmask = nft_genmask_next(net);
u32 hash;
@@ -574,7 +580,7 @@ static int nft_hash_insert(const struct net *net, const struct nft_set *set,
if (!memcmp(nft_set_ext_key(&this->ext),
nft_set_ext_key(&he->ext), set->klen) &&
nft_set_elem_active(&he->ext, genmask)) {
- *ext = &he->ext;
+ *elem_priv = &he->priv;
return -EEXIST;
}
}
@@ -583,28 +589,28 @@ static int nft_hash_insert(const struct net *net, const struct nft_set *set,
}
static void nft_hash_activate(const struct net *net, const struct nft_set *set,
- const struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- struct nft_hash_elem *he = elem->priv;
+ struct nft_hash_elem *he = nft_elem_priv_cast(elem_priv);
nft_set_elem_change_active(net, set, &he->ext);
}
-static bool nft_hash_flush(const struct net *net,
- const struct nft_set *set, void *priv)
+static void nft_hash_flush(const struct net *net,
+ const struct nft_set *set,
+ struct nft_elem_priv *elem_priv)
{
- struct nft_hash_elem *he = priv;
+ struct nft_hash_elem *he = nft_elem_priv_cast(elem_priv);
nft_set_elem_change_active(net, set, &he->ext);
- return true;
}
-static void *nft_hash_deactivate(const struct net *net,
- const struct nft_set *set,
- const struct nft_set_elem *elem)
+static struct nft_elem_priv *
+nft_hash_deactivate(const struct net *net, const struct nft_set *set,
+ const struct nft_set_elem *elem)
{
+ struct nft_hash_elem *this = nft_elem_priv_cast(elem->priv), *he;
struct nft_hash *priv = nft_set_priv(set);
- struct nft_hash_elem *this = elem->priv, *he;
u8 genmask = nft_genmask_next(net);
u32 hash;
@@ -614,7 +620,7 @@ static void *nft_hash_deactivate(const struct net *net,
set->klen) &&
nft_set_elem_active(&he->ext, genmask)) {
nft_set_elem_change_active(net, set, &he->ext);
- return he;
+ return &he->priv;
}
}
return NULL;
@@ -622,9 +628,9 @@ static void *nft_hash_deactivate(const struct net *net,
static void nft_hash_remove(const struct net *net,
const struct nft_set *set,
- const struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- struct nft_hash_elem *he = elem->priv;
+ struct nft_hash_elem *he = nft_elem_priv_cast(elem_priv);
hlist_del_rcu(&he->node);
}
@@ -634,7 +640,6 @@ static void nft_hash_walk(const struct nft_ctx *ctx, struct nft_set *set,
{
struct nft_hash *priv = nft_set_priv(set);
struct nft_hash_elem *he;
- struct nft_set_elem elem;
int i;
for (i = 0; i < priv->buckets; i++) {
@@ -644,9 +649,7 @@ static void nft_hash_walk(const struct nft_ctx *ctx, struct nft_set *set,
if (!nft_set_elem_active(&he->ext, iter->genmask))
goto cont;
- elem.priv = he;
-
- iter->err = iter->fn(ctx, set, iter, &elem);
+ iter->err = iter->fn(ctx, set, iter, &he->priv);
if (iter->err < 0)
return;
cont:
@@ -685,7 +688,7 @@ static void nft_hash_destroy(const struct nft_ctx *ctx,
for (i = 0; i < priv->buckets; i++) {
hlist_for_each_entry_safe(he, next, &priv->table[i], node) {
hlist_del_rcu(&he->node);
- nf_tables_set_elem_destroy(ctx, set, he);
+ nf_tables_set_elem_destroy(ctx, set, &he->priv);
}
}
}
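
The nft_set_hash conversion above embeds a struct nft_elem_priv placeholder as the first member of each element and converts back to the concrete type through nft_elem_priv_cast(), guarded by the BUILD_BUG_ON(offsetof(..., priv) != 0) added in nft_rhash_init(). Below is a minimal standalone sketch of that embed-and-cast pattern; the names elem_priv, my_elem and my_elem_cast are illustrative stand-ins, not the kernel definitions.

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Placeholder handle type; the kernel's struct nft_elem_priv is empty. */
struct elem_priv { char unused; };

/* A backend element embeds the placeholder as its first member. */
struct my_elem {
    struct elem_priv priv;  /* must stay at offset 0 for the plain cast */
    int key;
    int value;
};

/* Rough equivalent of nft_elem_priv_cast() under the offset-0 assumption:
 * converting the opaque handle back is a pointer reinterpretation.
 * A container_of()-style subtraction would lift that requirement.
 */
static struct my_elem *my_elem_cast(struct elem_priv *priv)
{
    return (struct my_elem *)priv;
}

int main(void)
{
    struct my_elem e = { .key = 1, .value = 42 };
    struct elem_priv *handle = &e.priv;  /* what the core code would store */

    /* Mirrors BUILD_BUG_ON(offsetof(struct nft_rhash_elem, priv) != 0). */
    static_assert(offsetof(struct my_elem, priv) == 0, "priv must be first");

    printf("value = %d\n", my_elem_cast(handle)->value);
    return 0;
}
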
diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
index c0dcc40de358..701977af3ee8 100644
--- a/net/netfilter/nft_set_pipapo.c
+++ b/net/netfilter/nft_set_pipapo.c
@@ -599,11 +599,18 @@ out:
* @elem: nftables API element representation containing key data
* @flags: Unused
*/
-static void *nft_pipapo_get(const struct net *net, const struct nft_set *set,
- const struct nft_set_elem *elem, unsigned int flags)
+static struct nft_elem_priv *
+nft_pipapo_get(const struct net *net, const struct nft_set *set,
+ const struct nft_set_elem *elem, unsigned int flags)
{
- return pipapo_get(net, set, (const u8 *)elem->key.val.data,
- nft_genmask_cur(net));
+ static struct nft_pipapo_elem *e;
+
+ e = pipapo_get(net, set, (const u8 *)elem->key.val.data,
+ nft_genmask_cur(net));
+ if (IS_ERR(e))
+ return ERR_CAST(e);
+
+ return &e->priv;
}
/**
@@ -1151,21 +1158,21 @@ static int pipapo_realloc_scratch(struct nft_pipapo_match *clone,
* @net: Network namespace
* @set: nftables API set representation
* @elem: nftables API element representation containing key data
- * @ext2: Filled with pointer to &struct nft_set_ext in inserted element
+ * @elem_priv: Filled with pointer to &struct nft_set_ext in inserted element
*
* Return: 0 on success, error pointer on failure.
*/
static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
const struct nft_set_elem *elem,
- struct nft_set_ext **ext2)
+ struct nft_elem_priv **elem_priv)
{
const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
union nft_pipapo_map_bucket rulemap[NFT_PIPAPO_MAX_FIELDS];
const u8 *start = (const u8 *)elem->key.val.data, *end;
- struct nft_pipapo_elem *e = elem->priv, *dup;
struct nft_pipapo *priv = nft_set_priv(set);
struct nft_pipapo_match *m = priv->clone;
u8 genmask = nft_genmask_next(net);
+ struct nft_pipapo_elem *e, *dup;
struct nft_pipapo_field *f;
const u8 *start_p, *end_p;
int i, bsize_max, err = 0;
@@ -1188,7 +1195,7 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
if (!memcmp(start, dup_key->data, sizeof(*dup_key->data)) &&
!memcmp(end, dup_end->data, sizeof(*dup_end->data))) {
- *ext2 = &dup->ext;
+ *elem_priv = &dup->priv;
return -EEXIST;
}
@@ -1203,7 +1210,7 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
if (PTR_ERR(dup) != -ENOENT) {
if (IS_ERR(dup))
return PTR_ERR(dup);
- *ext2 = &dup->ext;
+ *elem_priv = &dup->priv;
return -ENOTEMPTY;
}
@@ -1263,7 +1270,8 @@ static int nft_pipapo_insert(const struct net *net, const struct nft_set *set,
put_cpu_ptr(m->scratch);
}
- *ext2 = &e->ext;
+ e = nft_elem_priv_cast(elem->priv);
+ *elem_priv = &e->priv;
pipapo_map(m, rulemap, e);
@@ -1540,21 +1548,16 @@ static void nft_pipapo_gc_deactivate(struct net *net, struct nft_set *set,
struct nft_pipapo_elem *e)
{
- struct nft_set_elem elem = {
- .priv = e,
- };
-
- nft_setelem_data_deactivate(net, set, &elem);
+ nft_setelem_data_deactivate(net, set, &e->priv);
}
/**
* pipapo_gc() - Drop expired entries from set, destroy start and end elements
- * @_set: nftables API set representation
+ * @set: nftables API set representation
* @m: Matching data
*/
-static void pipapo_gc(const struct nft_set *_set, struct nft_pipapo_match *m)
+static void pipapo_gc(struct nft_set *set, struct nft_pipapo_match *m)
{
- struct nft_set *set = (struct nft_set *) _set;
struct nft_pipapo *priv = nft_set_priv(set);
struct net *net = read_pnet(&set->net);
int rules_f0, first_rule = 0;
@@ -1672,7 +1675,7 @@ static void pipapo_reclaim_match(struct rcu_head *rcu)
* We also need to create a new working copy for subsequent insertions and
* deletions.
*/
-static void nft_pipapo_commit(const struct nft_set *set)
+static void nft_pipapo_commit(struct nft_set *set)
{
struct nft_pipapo *priv = nft_set_priv(set);
struct nft_pipapo_match *new_clone, *old;
@@ -1732,7 +1735,7 @@ static void nft_pipapo_abort(const struct nft_set *set)
* nft_pipapo_activate() - Mark element reference as active given key, commit
* @net: Network namespace
* @set: nftables API set representation
- * @elem: nftables API element representation containing key data
+ * @elem_priv: nftables API element representation containing key data
*
* On insertion, elements are added to a copy of the matching data currently
* in use for lookups, and not directly inserted into current lookup data. Both
@@ -1741,9 +1744,9 @@ static void nft_pipapo_abort(const struct nft_set *set)
*/
static void nft_pipapo_activate(const struct net *net,
const struct nft_set *set,
- const struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- struct nft_pipapo_elem *e = elem->priv;
+ struct nft_pipapo_elem *e = nft_elem_priv_cast(elem_priv);
nft_set_elem_change_active(net, set, &e->ext);
}
@@ -1783,9 +1786,9 @@ static void *pipapo_deactivate(const struct net *net, const struct nft_set *set,
*
* Return: deactivated element if found, NULL otherwise.
*/
-static void *nft_pipapo_deactivate(const struct net *net,
- const struct nft_set *set,
- const struct nft_set_elem *elem)
+static struct nft_elem_priv *
+nft_pipapo_deactivate(const struct net *net, const struct nft_set *set,
+ const struct nft_set_elem *elem)
{
const struct nft_set_ext *ext = nft_set_elem_ext(set, elem->priv);
@@ -1796,7 +1799,7 @@ static void *nft_pipapo_deactivate(const struct net *net,
* nft_pipapo_flush() - Call pipapo_deactivate() to make element inactive
* @net: Network namespace
* @set: nftables API set representation
- * @elem: nftables API element representation containing key data
+ * @elem_priv: nftables API element representation containing key data
*
* This is functionally the same as nft_pipapo_deactivate(), with a slightly
* different interface, and it's also called once for each element in a set
@@ -1810,13 +1813,12 @@ static void *nft_pipapo_deactivate(const struct net *net,
*
* Return: true if element was found and deactivated.
*/
-static bool nft_pipapo_flush(const struct net *net, const struct nft_set *set,
- void *elem)
+static void nft_pipapo_flush(const struct net *net, const struct nft_set *set,
+ struct nft_elem_priv *elem_priv)
{
- struct nft_pipapo_elem *e = elem;
+ struct nft_pipapo_elem *e = nft_elem_priv_cast(elem_priv);
- return pipapo_deactivate(net, set, (const u8 *)nft_set_ext_key(&e->ext),
- &e->ext);
+ nft_set_elem_change_active(net, set, &e->ext);
}
/**
@@ -1939,7 +1941,7 @@ static bool pipapo_match_field(struct nft_pipapo_field *f,
* nft_pipapo_remove() - Remove element given key, commit
* @net: Network namespace
* @set: nftables API set representation
- * @elem: nftables API element representation containing key data
+ * @elem_priv: nftables API element representation containing key data
*
* Similarly to nft_pipapo_activate(), this is used as commit operation by the
* API, but it's called once per element in the pending transaction, so we can't
@@ -1947,14 +1949,15 @@ static bool pipapo_match_field(struct nft_pipapo_field *f,
* the matched element here, if any, and commit the updated matching data.
*/
static void nft_pipapo_remove(const struct net *net, const struct nft_set *set,
- const struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
struct nft_pipapo *priv = nft_set_priv(set);
struct nft_pipapo_match *m = priv->clone;
- struct nft_pipapo_elem *e = elem->priv;
int rules_f0, first_rule = 0;
+ struct nft_pipapo_elem *e;
const u8 *data;
+ e = nft_elem_priv_cast(elem_priv);
data = (const u8 *)nft_set_ext_key(&e->ext);
while ((rules_f0 = pipapo_rules_same_key(m->f, first_rule))) {
@@ -2031,7 +2034,6 @@ static void nft_pipapo_walk(const struct nft_ctx *ctx, struct nft_set *set,
for (r = 0; r < f->rules; r++) {
struct nft_pipapo_elem *e;
- struct nft_set_elem elem;
if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e)
continue;
@@ -2041,9 +2043,7 @@ static void nft_pipapo_walk(const struct nft_ctx *ctx, struct nft_set *set,
e = f->mt[r].e;
- elem.priv = e;
-
- iter->err = iter->fn(ctx, set, iter, &elem);
+ iter->err = iter->fn(ctx, set, iter, &e->priv);
if (iter->err < 0)
goto out;
@@ -2115,6 +2115,8 @@ static int nft_pipapo_init(const struct nft_set *set,
struct nft_pipapo_field *f;
int err, i, field_count;
+ BUILD_BUG_ON(offsetof(struct nft_pipapo_elem, priv) != 0);
+
field_count = desc->field_count ? : 1;
if (field_count > NFT_PIPAPO_MAX_FIELDS)
@@ -2209,7 +2211,7 @@ static void nft_set_pipapo_match_destroy(const struct nft_ctx *ctx,
e = f->mt[r].e;
- nf_tables_set_elem_destroy(ctx, set, e);
+ nf_tables_set_elem_destroy(ctx, set, &e->priv);
}
}
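
nft_pipapo_get() above now returns ERR_PTR(-ENOENT) and forwards failures from pipapo_get() with ERR_CAST()/IS_ERR(). The standalone sketch below shows the errno-in-pointer idiom behind those helpers; the definitions are simplified approximations of linux/err.h, and lookup() is a made-up example function.

#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO 4095

/* Simplified stand-ins for the kernel's ERR_PTR()/PTR_ERR()/IS_ERR():
 * a small negative errno is smuggled through the top of the pointer range.
 */
static void *ERR_PTR(long error) { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }
static int IS_ERR(const void *ptr)
{
    return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical lookup that reports "not found" the same way
 * nft_pipapo_get() now does.
 */
static int *lookup(int key)
{
    static int table[3] = { 10, 20, 30 };

    if (key < 0 || key > 2)
        return ERR_PTR(-ENOENT);
    return &table[key];
}

int main(void)
{
    int *p = lookup(7);

    if (IS_ERR(p))
        printf("lookup failed: %ld\n", PTR_ERR(p));
    else
        printf("found %d\n", *p);
    return 0;
}
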
diff --git a/net/netfilter/nft_set_pipapo.h b/net/netfilter/nft_set_pipapo.h
index 2e164a319945..1040223da5fa 100644
--- a/net/netfilter/nft_set_pipapo.h
+++ b/net/netfilter/nft_set_pipapo.h
@@ -170,10 +170,12 @@ struct nft_pipapo_elem;
/**
* struct nft_pipapo_elem - API-facing representation of single set element
+ * @priv: element placeholder
* @ext: nftables API extensions
*/
struct nft_pipapo_elem {
- struct nft_set_ext ext;
+ struct nft_elem_priv priv;
+ struct nft_set_ext ext;
};
int pipapo_refill(unsigned long *map, int len, int rules, unsigned long *dst,
diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
index e34662f4a71e..6f1186abd47b 100644
--- a/net/netfilter/nft_set_rbtree.c
+++ b/net/netfilter/nft_set_rbtree.c
@@ -19,10 +19,11 @@ struct nft_rbtree {
struct rb_root root;
rwlock_t lock;
seqcount_rwlock_t count;
- struct delayed_work gc_work;
+ unsigned long last_gc;
};
struct nft_rbtree_elem {
+ struct nft_elem_priv priv;
struct rb_node node;
struct nft_set_ext ext;
};
@@ -48,8 +49,7 @@ static int nft_rbtree_cmp(const struct nft_set *set,
static bool nft_rbtree_elem_expired(const struct nft_rbtree_elem *rbe)
{
- return nft_set_elem_expired(&rbe->ext) ||
- nft_set_elem_is_dead(&rbe->ext);
+ return nft_set_elem_expired(&rbe->ext);
}
static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set,
@@ -197,8 +197,9 @@ static bool __nft_rbtree_get(const struct net *net, const struct nft_set *set,
return false;
}
-static void *nft_rbtree_get(const struct net *net, const struct nft_set *set,
- const struct nft_set_elem *elem, unsigned int flags)
+static struct nft_elem_priv *
+nft_rbtree_get(const struct net *net, const struct nft_set *set,
+ const struct nft_set_elem *elem, unsigned int flags)
{
struct nft_rbtree *priv = nft_set_priv(set);
unsigned int seq = read_seqcount_begin(&priv->count);
@@ -209,27 +210,25 @@ static void *nft_rbtree_get(const struct net *net, const struct nft_set *set,
ret = __nft_rbtree_get(net, set, key, &rbe, seq, flags, genmask);
if (ret || !read_seqcount_retry(&priv->count, seq))
- return rbe;
+ return &rbe->priv;
read_lock_bh(&priv->lock);
seq = read_seqcount_begin(&priv->count);
ret = __nft_rbtree_get(net, set, key, &rbe, seq, flags, genmask);
- if (!ret)
- rbe = ERR_PTR(-ENOENT);
read_unlock_bh(&priv->lock);
- return rbe;
+ if (!ret)
+ return ERR_PTR(-ENOENT);
+
+ return &rbe->priv;
}
-static void nft_rbtree_gc_remove(struct net *net, struct nft_set *set,
- struct nft_rbtree *priv,
- struct nft_rbtree_elem *rbe)
+static void nft_rbtree_gc_elem_remove(struct net *net, struct nft_set *set,
+ struct nft_rbtree *priv,
+ struct nft_rbtree_elem *rbe)
{
- struct nft_set_elem elem = {
- .priv = rbe,
- };
-
- nft_setelem_data_deactivate(net, set, &elem);
+ lockdep_assert_held_write(&priv->lock);
+ nft_setelem_data_deactivate(net, set, &rbe->priv);
rb_erase(&rbe->node, &priv->root);
}
@@ -263,7 +262,7 @@ nft_rbtree_gc_elem(const struct nft_set *__set, struct nft_rbtree *priv,
rbe_prev = NULL;
if (prev) {
rbe_prev = rb_entry(prev, struct nft_rbtree_elem, node);
- nft_rbtree_gc_remove(net, set, priv, rbe_prev);
+ nft_rbtree_gc_elem_remove(net, set, priv, rbe_prev);
/* There is always room in this trans gc for this element,
* memory allocation never actually happens, hence, the warning
@@ -277,7 +276,7 @@ nft_rbtree_gc_elem(const struct nft_set *__set, struct nft_rbtree *priv,
nft_trans_gc_elem_add(gc, rbe_prev);
}
- nft_rbtree_gc_remove(net, set, priv, rbe);
+ nft_rbtree_gc_elem_remove(net, set, priv, rbe);
gc = nft_trans_gc_queue_sync(gc, GFP_ATOMIC);
if (WARN_ON_ONCE(!gc))
return ERR_PTR(-ENOMEM);
@@ -307,7 +306,7 @@ static bool nft_rbtree_update_first(const struct nft_set *set,
static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
struct nft_rbtree_elem *new,
- struct nft_set_ext **ext)
+ struct nft_elem_priv **elem_priv)
{
struct nft_rbtree_elem *rbe, *rbe_le = NULL, *rbe_ge = NULL;
struct rb_node *node, *next, *parent, **p, *first = NULL;
@@ -424,7 +423,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
*/
if (rbe_ge && !nft_rbtree_cmp(set, new, rbe_ge) &&
nft_rbtree_interval_start(rbe_ge) == nft_rbtree_interval_start(new)) {
- *ext = &rbe_ge->ext;
+ *elem_priv = &rbe_ge->priv;
return -EEXIST;
}
@@ -433,7 +432,7 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
*/
if (rbe_le && !nft_rbtree_cmp(set, new, rbe_le) &&
nft_rbtree_interval_end(rbe_le) == nft_rbtree_interval_end(new)) {
- *ext = &rbe_le->ext;
+ *elem_priv = &rbe_le->priv;
return -EEXIST;
}
@@ -485,10 +484,10 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
static int nft_rbtree_insert(const struct net *net, const struct nft_set *set,
const struct nft_set_elem *elem,
- struct nft_set_ext **ext)
+ struct nft_elem_priv **elem_priv)
{
+ struct nft_rbtree_elem *rbe = nft_elem_priv_cast(elem->priv);
struct nft_rbtree *priv = nft_set_priv(set);
- struct nft_rbtree_elem *rbe = elem->priv;
int err;
do {
@@ -499,7 +498,7 @@ static int nft_rbtree_insert(const struct net *net, const struct nft_set *set,
write_lock_bh(&priv->lock);
write_seqcount_begin(&priv->count);
- err = __nft_rbtree_insert(net, set, rbe, ext);
+ err = __nft_rbtree_insert(net, set, rbe, elem_priv);
write_seqcount_end(&priv->count);
write_unlock_bh(&priv->lock);
} while (err == -EAGAIN);
@@ -507,13 +506,8 @@ static int nft_rbtree_insert(const struct net *net, const struct nft_set *set,
return err;
}
-static void nft_rbtree_remove(const struct net *net,
- const struct nft_set *set,
- const struct nft_set_elem *elem)
+static void nft_rbtree_erase(struct nft_rbtree *priv, struct nft_rbtree_elem *rbe)
{
- struct nft_rbtree *priv = nft_set_priv(set);
- struct nft_rbtree_elem *rbe = elem->priv;
-
write_lock_bh(&priv->lock);
write_seqcount_begin(&priv->count);
rb_erase(&rbe->node, &priv->root);
@@ -521,32 +515,41 @@ static void nft_rbtree_remove(const struct net *net,
write_unlock_bh(&priv->lock);
}
+static void nft_rbtree_remove(const struct net *net,
+ const struct nft_set *set,
+ struct nft_elem_priv *elem_priv)
+{
+ struct nft_rbtree_elem *rbe = nft_elem_priv_cast(elem_priv);
+ struct nft_rbtree *priv = nft_set_priv(set);
+
+ nft_rbtree_erase(priv, rbe);
+}
+
static void nft_rbtree_activate(const struct net *net,
const struct nft_set *set,
- const struct nft_set_elem *elem)
+ struct nft_elem_priv *elem_priv)
{
- struct nft_rbtree_elem *rbe = elem->priv;
+ struct nft_rbtree_elem *rbe = nft_elem_priv_cast(elem_priv);
nft_set_elem_change_active(net, set, &rbe->ext);
}
-static bool nft_rbtree_flush(const struct net *net,
- const struct nft_set *set, void *priv)
+static void nft_rbtree_flush(const struct net *net,
+ const struct nft_set *set,
+ struct nft_elem_priv *elem_priv)
{
- struct nft_rbtree_elem *rbe = priv;
+ struct nft_rbtree_elem *rbe = nft_elem_priv_cast(elem_priv);
nft_set_elem_change_active(net, set, &rbe->ext);
-
- return true;
}
-static void *nft_rbtree_deactivate(const struct net *net,
- const struct nft_set *set,
- const struct nft_set_elem *elem)
+static struct nft_elem_priv *
+nft_rbtree_deactivate(const struct net *net, const struct nft_set *set,
+ const struct nft_set_elem *elem)
{
+ struct nft_rbtree_elem *rbe, *this = nft_elem_priv_cast(elem->priv);
const struct nft_rbtree *priv = nft_set_priv(set);
const struct rb_node *parent = priv->root.rb_node;
- struct nft_rbtree_elem *rbe, *this = elem->priv;
u8 genmask = nft_genmask_next(net);
int d;
@@ -574,8 +577,8 @@ static void *nft_rbtree_deactivate(const struct net *net,
parent = parent->rb_left;
continue;
}
- nft_rbtree_flush(net, set, rbe);
- return rbe;
+ nft_rbtree_flush(net, set, &rbe->priv);
+ return &rbe->priv;
}
}
return NULL;
@@ -587,7 +590,6 @@ static void nft_rbtree_walk(const struct nft_ctx *ctx,
{
struct nft_rbtree *priv = nft_set_priv(set);
struct nft_rbtree_elem *rbe;
- struct nft_set_elem elem;
struct rb_node *node;
read_lock_bh(&priv->lock);
@@ -599,9 +601,7 @@ static void nft_rbtree_walk(const struct nft_ctx *ctx,
if (!nft_set_elem_active(&rbe->ext, iter->genmask))
goto cont;
- elem.priv = rbe;
-
- iter->err = iter->fn(ctx, set, iter, &elem);
+ iter->err = iter->fn(ctx, set, iter, &rbe->priv);
if (iter->err < 0) {
read_unlock_bh(&priv->lock);
return;
@@ -612,45 +612,36 @@ cont:
read_unlock_bh(&priv->lock);
}
-static void nft_rbtree_gc(struct work_struct *work)
+static void nft_rbtree_gc_remove(struct net *net, struct nft_set *set,
+ struct nft_rbtree *priv,
+ struct nft_rbtree_elem *rbe)
+{
+ nft_setelem_data_deactivate(net, set, &rbe->priv);
+ nft_rbtree_erase(priv, rbe);
+}
+
+static void nft_rbtree_gc(struct nft_set *set)
{
+ struct nft_rbtree *priv = nft_set_priv(set);
struct nft_rbtree_elem *rbe, *rbe_end = NULL;
struct nftables_pernet *nft_net;
- struct nft_rbtree *priv;
+ struct rb_node *node, *next;
struct nft_trans_gc *gc;
- struct rb_node *node;
- struct nft_set *set;
- unsigned int gc_seq;
struct net *net;
- priv = container_of(work, struct nft_rbtree, gc_work.work);
set = nft_set_container_of(priv);
net = read_pnet(&set->net);
nft_net = nft_pernet(net);
- gc_seq = READ_ONCE(nft_net->gc_seq);
-
- if (nft_set_gc_is_pending(set))
- goto done;
- gc = nft_trans_gc_alloc(set, gc_seq, GFP_KERNEL);
+ gc = nft_trans_gc_alloc(set, 0, GFP_KERNEL);
if (!gc)
- goto done;
+ return;
- read_lock_bh(&priv->lock);
- for (node = rb_first(&priv->root); node != NULL; node = rb_next(node)) {
-
- /* Ruleset has been updated, try later. */
- if (READ_ONCE(nft_net->gc_seq) != gc_seq) {
- nft_trans_gc_destroy(gc);
- gc = NULL;
- goto try_later;
- }
+ for (node = rb_first(&priv->root); node ; node = next) {
+ next = rb_next(node);
rbe = rb_entry(node, struct nft_rbtree_elem, node);
- if (nft_set_elem_is_dead(&rbe->ext))
- goto dead_elem;
-
/* elements are reversed in the rbtree for historical reasons,
* from highest to lowest value, that is why end element is
* always visited before the start element.
@@ -662,37 +653,34 @@ static void nft_rbtree_gc(struct work_struct *work)
if (!nft_set_elem_expired(&rbe->ext))
continue;
- nft_set_elem_dead(&rbe->ext);
-
- if (!rbe_end)
- continue;
-
- nft_set_elem_dead(&rbe_end->ext);
-
- gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
+ gc = nft_trans_gc_queue_sync(gc, GFP_KERNEL);
if (!gc)
goto try_later;
- nft_trans_gc_elem_add(gc, rbe_end);
- rbe_end = NULL;
-dead_elem:
- gc = nft_trans_gc_queue_async(gc, gc_seq, GFP_ATOMIC);
+ /* end element needs to be removed first, it has
+ * no timeout extension.
+ */
+ if (rbe_end) {
+ nft_rbtree_gc_remove(net, set, priv, rbe_end);
+ nft_trans_gc_elem_add(gc, rbe_end);
+ rbe_end = NULL;
+ }
+
+ gc = nft_trans_gc_queue_sync(gc, GFP_KERNEL);
if (!gc)
goto try_later;
+ nft_rbtree_gc_remove(net, set, priv, rbe);
nft_trans_gc_elem_add(gc, rbe);
}
- gc = nft_trans_gc_catchall_async(gc, gc_seq);
-
try_later:
- read_unlock_bh(&priv->lock);
- if (gc)
- nft_trans_gc_queue_async_done(gc);
-done:
- queue_delayed_work(system_power_efficient_wq, &priv->gc_work,
- nft_set_gc_interval(set));
+ if (gc) {
+ gc = nft_trans_gc_catchall_sync(gc);
+ nft_trans_gc_queue_sync_done(gc);
+ priv->last_gc = jiffies;
+ }
}
static u64 nft_rbtree_privsize(const struct nlattr * const nla[],
@@ -707,15 +695,12 @@ static int nft_rbtree_init(const struct nft_set *set,
{
struct nft_rbtree *priv = nft_set_priv(set);
+ BUILD_BUG_ON(offsetof(struct nft_rbtree_elem, priv) != 0);
+
rwlock_init(&priv->lock);
seqcount_rwlock_init(&priv->count, &priv->lock);
priv->root = RB_ROOT;
- INIT_DEFERRABLE_WORK(&priv->gc_work, nft_rbtree_gc);
- if (set->flags & NFT_SET_TIMEOUT)
- queue_delayed_work(system_power_efficient_wq, &priv->gc_work,
- nft_set_gc_interval(set));
-
return 0;
}
@@ -726,12 +711,10 @@ static void nft_rbtree_destroy(const struct nft_ctx *ctx,
struct nft_rbtree_elem *rbe;
struct rb_node *node;
- cancel_delayed_work_sync(&priv->gc_work);
- rcu_barrier();
while ((node = priv->root.rb_node) != NULL) {
rb_erase(node, &priv->root);
rbe = rb_entry(node, struct nft_rbtree_elem, node);
- nf_tables_set_elem_destroy(ctx, set, rbe);
+ nf_tables_set_elem_destroy(ctx, set, &rbe->priv);
}
}
@@ -753,6 +736,21 @@ static bool nft_rbtree_estimate(const struct nft_set_desc *desc, u32 features,
return true;
}
+static void nft_rbtree_commit(struct nft_set *set)
+{
+ struct nft_rbtree *priv = nft_set_priv(set);
+
+ if (time_after_eq(jiffies, priv->last_gc + nft_set_gc_interval(set)))
+ nft_rbtree_gc(set);
+}
+
+static void nft_rbtree_gc_init(const struct nft_set *set)
+{
+ struct nft_rbtree *priv = nft_set_priv(set);
+
+ priv->last_gc = jiffies;
+}
+
const struct nft_set_type nft_set_rbtree_type = {
.features = NFT_SET_INTERVAL | NFT_SET_MAP | NFT_SET_OBJECT | NFT_SET_TIMEOUT,
.ops = {
@@ -766,6 +764,8 @@ const struct nft_set_type nft_set_rbtree_type = {
.deactivate = nft_rbtree_deactivate,
.flush = nft_rbtree_flush,
.activate = nft_rbtree_activate,
+ .commit = nft_rbtree_commit,
+ .gc_init = nft_rbtree_gc_init,
.lookup = nft_rbtree_lookup,
.walk = nft_rbtree_walk,
.get = nft_rbtree_get,
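
The rbtree backend above drops its deferred gc_work and instead runs garbage collection synchronously from nft_rbtree_commit(), rate limited by last_gc plus nft_set_gc_interval(). The self-contained sketch below only illustrates that "GC from the commit path, at most once per interval" shape, using an invented tick counter and a wraparound-safe comparison in the spirit of time_after_eq().

#include <stdio.h>

/* Wraparound-safe "a is at or after b" for a free-running unsigned tick
 * counter; the signed subtraction keeps the comparison correct across
 * the counter wrapping, like the kernel's time_after_eq().
 */
static int tick_after_eq(unsigned long a, unsigned long b)
{
    return (long)(a - b) >= 0;
}

struct gc_state {
    unsigned long last_gc;   /* tick of the last collection */
    unsigned long interval;  /* minimum ticks between collections */
};

/* Called from a (hypothetical) commit path: run GC at most once per
 * interval, mirroring the shape of nft_rbtree_commit() above.
 */
static void maybe_gc(struct gc_state *s, unsigned long now)
{
    if (!tick_after_eq(now, s->last_gc + s->interval))
        return;
    printf("running gc at tick %lu\n", now);
    s->last_gc = now;
}

int main(void)
{
    struct gc_state s = { .last_gc = 0, .interval = 100 };
    unsigned long t;

    for (t = 0; t <= 350; t += 50)
        maybe_gc(&s, t);   /* collects at ticks 100, 200 and 300 */
    return 0;
}
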
diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
index 8315d31b53db..92ef5ed2e7b0 100644
--- a/net/netlink/genetlink.c
+++ b/net/netlink/genetlink.c
@@ -225,7 +225,8 @@ static void genl_op_from_split(struct genl_op_iter *iter)
}
if (i + cnt < family->n_split_ops &&
- family->split_ops[i + cnt].flags & GENL_CMD_CAP_DUMP) {
+ family->split_ops[i + cnt].flags & GENL_CMD_CAP_DUMP &&
+ (!cnt || family->split_ops[i + cnt].cmd == iter->doit.cmd)) {
iter->dumpit = family->split_ops[i + cnt];
genl_op_fill_in_reject_policy_split(family, &iter->dumpit);
cnt++;
diff --git a/net/netlink/policy.c b/net/netlink/policy.c
index 87e3de0fde89..1f8909c16f14 100644
--- a/net/netlink/policy.c
+++ b/net/netlink/policy.c
@@ -21,7 +21,7 @@ struct netlink_policy_dump_state {
struct {
const struct nla_policy *policy;
unsigned int maxtype;
- } policies[];
+ } policies[] __counted_by(n_alloc);
};
static int add_policy(struct netlink_policy_dump_state **statep,
@@ -29,7 +29,7 @@ static int add_policy(struct netlink_policy_dump_state **statep,
unsigned int maxtype)
{
struct netlink_policy_dump_state *state = *statep;
- unsigned int n_alloc, i;
+ unsigned int old_n_alloc, n_alloc, i;
if (!policy || !maxtype)
return 0;
@@ -52,12 +52,13 @@ static int add_policy(struct netlink_policy_dump_state **statep,
if (!state)
return -ENOMEM;
- memset(&state->policies[state->n_alloc], 0,
- flex_array_size(state, policies, n_alloc - state->n_alloc));
-
- state->policies[state->n_alloc].policy = policy;
- state->policies[state->n_alloc].maxtype = maxtype;
+ old_n_alloc = state->n_alloc;
state->n_alloc = n_alloc;
+ memset(&state->policies[old_n_alloc], 0,
+ flex_array_size(state, policies, n_alloc - old_n_alloc));
+
+ state->policies[old_n_alloc].policy = policy;
+ state->policies[old_n_alloc].maxtype = maxtype;
*statep = state;
return 0;
@@ -229,6 +230,8 @@ int netlink_policy_dump_attr_size_estimate(const struct nla_policy *pt)
case NLA_S16:
case NLA_S32:
case NLA_S64:
+ case NLA_SINT:
+ case NLA_UINT:
/* maximum is common, u64 min/max with padding */
return common +
2 * (nla_attr_size(0) + nla_attr_size(sizeof(u64)));
@@ -287,6 +290,7 @@ __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state,
case NLA_U16:
case NLA_U32:
case NLA_U64:
+ case NLA_UINT:
case NLA_MSECS: {
struct netlink_range_validation range;
@@ -296,8 +300,10 @@ __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state,
type = NL_ATTR_TYPE_U16;
else if (pt->type == NLA_U32)
type = NL_ATTR_TYPE_U32;
- else
+ else if (pt->type == NLA_U64)
type = NL_ATTR_TYPE_U64;
+ else
+ type = NL_ATTR_TYPE_UINT;
if (pt->validation_type == NLA_VALIDATE_MASK) {
if (nla_put_u64_64bit(skb, NL_POLICY_TYPE_ATTR_MASK,
@@ -319,7 +325,8 @@ __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state,
case NLA_S8:
case NLA_S16:
case NLA_S32:
- case NLA_S64: {
+ case NLA_S64:
+ case NLA_SINT: {
struct netlink_range_validation_signed range;
if (pt->type == NLA_S8)
@@ -328,8 +335,10 @@ __netlink_policy_dump_write_attr(struct netlink_policy_dump_state *state,
type = NL_ATTR_TYPE_S16;
else if (pt->type == NLA_S32)
type = NL_ATTR_TYPE_S32;
- else
+ else if (pt->type == NLA_S64)
type = NL_ATTR_TYPE_S64;
+ else
+ type = NL_ATTR_TYPE_SINT;
nla_get_range_signed(pt, &range);
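
In add_policy() above, state->n_alloc is now raised before the freshly allocated policies[] slots are cleared and written, which matters once the array carries __counted_by(n_alloc): bounds-checking builds treat the counter as the array's live length. Below is a hedged standalone sketch of that ordering with a plain flexible-array struct; the __counted_by stub only keeps it compiling on toolchains without the attribute, and table/table_grow are invented names.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#ifndef __counted_by
#define __counted_by(member)  /* no-op on compilers without the attribute */
#endif

struct table {
    unsigned int n_alloc;               /* number of valid entries[] slots */
    int entries[] __counted_by(n_alloc);
};

static struct table *table_grow(struct table *t, unsigned int new_n)
{
    unsigned int old_n = t ? t->n_alloc : 0;

    t = realloc(t, sizeof(*t) + (size_t)new_n * sizeof(t->entries[0]));
    if (!t)
        return NULL;

    /* Raise the counter first: with __counted_by the added slots are
     * only considered in bounds once n_alloc covers them.
     */
    t->n_alloc = new_n;
    memset(&t->entries[old_n], 0, (new_n - old_n) * sizeof(t->entries[0]));
    return t;
}

int main(void)
{
    struct table *t = table_grow(NULL, 4);

    if (!t)
        return 1;
    t->entries[3] = 42;
    printf("n_alloc=%u entries[3]=%d\n", t->n_alloc, t->entries[3]);
    free(t);
    return 0;
}
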
diff --git a/net/netrom/af_netrom.c b/net/netrom/af_netrom.c
index 96e91ab71573..0eed00184adf 100644
--- a/net/netrom/af_netrom.c
+++ b/net/netrom/af_netrom.c
@@ -487,7 +487,7 @@ static struct sock *nr_make_new(struct sock *osk)
sock_init_data(NULL, sk);
sk->sk_type = osk->sk_type;
- sk->sk_priority = osk->sk_priority;
+ sk->sk_priority = READ_ONCE(osk->sk_priority);
sk->sk_protocol = osk->sk_protocol;
sk->sk_rcvbuf = osk->sk_rcvbuf;
sk->sk_sndbuf = osk->sk_sndbuf;
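
This hunk, like the af_rose and em_meta ones further down, wraps reads of osk->sk_priority in READ_ONCE() because the field can be updated locklessly. The sketch below only illustrates the annotation idiom, with simplified volatile-based stand-ins for READ_ONCE()/WRITE_ONCE() (using GNU C __typeof__) and an invented fake_sock type; it is not the kernel's implementation.

#include <stdio.h>

/* Minimal stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(): a single,
 * untorn access through a volatile-qualified lvalue, so the compiler can
 * neither fuse nor re-load it.  Sketch of the idiom only.
 */
#define WRITE_ONCE(x, val)  (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)        (*(volatile __typeof__(x) *)&(x))

struct fake_sock {
    unsigned int sk_priority;  /* may be updated locklessly elsewhere */
};

int main(void)
{
    struct fake_sock osk = { .sk_priority = 6 };
    struct fake_sock nsk;

    /* Reader side, mirroring "sk->sk_priority = READ_ONCE(osk->sk_priority)". */
    nsk.sk_priority = READ_ONCE(osk.sk_priority);

    /* Writer side, the counterpart a setsockopt()-like path would use. */
    WRITE_ONCE(osk.sk_priority, 4);

    printf("copied priority %u, new priority %u\n",
           nsk.sk_priority, osk.sk_priority);
    return 0;
}
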
diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
index fd66014d8a76..6fcd7e2ca81f 100644
--- a/net/openvswitch/actions.c
+++ b/net/openvswitch/actions.c
@@ -311,11 +311,18 @@ static int push_eth(struct sk_buff *skb, struct sw_flow_key *key,
return 0;
}
-static int push_nsh(struct sk_buff *skb, struct sw_flow_key *key,
- const struct nshhdr *nh)
+static noinline_for_stack int push_nsh(struct sk_buff *skb,
+ struct sw_flow_key *key,
+ const struct nlattr *a)
{
+ u8 buffer[NSH_HDR_MAX_LEN];
+ struct nshhdr *nh = (struct nshhdr *)buffer;
int err;
+ err = nsh_hdr_from_nlattr(a, nh, NSH_HDR_MAX_LEN);
+ if (err)
+ return err;
+
err = nsh_push(skb, nh);
if (err)
return err;
@@ -873,7 +880,7 @@ static void ovs_fragment(struct net *net, struct vport *vport,
prepare_frag(vport, skb, orig_network_offset,
ovs_key_mac_proto(key));
- dst_init(&ovs_rt.dst, &ovs_dst_ops, NULL, 1,
+ dst_init(&ovs_rt.dst, &ovs_dst_ops, NULL,
DST_OBSOLETE_NONE, DST_NOCOUNT);
ovs_rt.dst.dev = vport->dev;
@@ -890,7 +897,7 @@ static void ovs_fragment(struct net *net, struct vport *vport,
prepare_frag(vport, skb, orig_network_offset,
ovs_key_mac_proto(key));
memset(&ovs_rt, 0, sizeof(ovs_rt));
- dst_init(&ovs_rt.dst, &ovs_dst_ops, NULL, 1,
+ dst_init(&ovs_rt.dst, &ovs_dst_ops, NULL,
DST_OBSOLETE_NONE, DST_NOCOUNT);
ovs_rt.dst.dev = vport->dev;
@@ -1439,17 +1446,9 @@ static int do_execute_actions(struct datapath *dp, struct sk_buff *skb,
err = pop_eth(skb, key);
break;
- case OVS_ACTION_ATTR_PUSH_NSH: {
- u8 buffer[NSH_HDR_MAX_LEN];
- struct nshhdr *nh = (struct nshhdr *)buffer;
-
- err = nsh_hdr_from_nlattr(nla_data(a), nh,
- NSH_HDR_MAX_LEN);
- if (unlikely(err))
- break;
- err = push_nsh(skb, key, nh);
+ case OVS_ACTION_ATTR_PUSH_NSH:
+ err = push_nsh(skb, key, nla_data(a));
break;
- }
case OVS_ACTION_ATTR_POP_NSH:
err = pop_nsh(skb, key);
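
push_nsh() above now owns the NSH_HDR_MAX_LEN scratch buffer and is marked noinline_for_stack, so the large frame exists only while the helper runs instead of inflating do_execute_actions(). A minimal illustration of that technique, with invented sizes and names:

#include <stdio.h>
#include <string.h>

#define HDR_MAX_LEN 256  /* illustrative, not the real NSH_HDR_MAX_LEN */

/* Keep the big temporary buffer out of the caller's frame.  The noinline
 * attribute plays the role of the kernel's noinline_for_stack: without it
 * the compiler could inline the helper and the buffer would occupy the
 * caller's stack for the caller's whole lifetime.
 */
static __attribute__((noinline)) int push_header(const char *src, size_t len)
{
    unsigned char buffer[HDR_MAX_LEN];

    if (len > sizeof(buffer))
        return -1;
    memcpy(buffer, src, len);
    /* ... build and emit the header from buffer ... */
    return (int)len;
}

/* Long-running dispatcher with a deliberately small frame. */
static int execute_actions(void)
{
    return push_header("example", 7);
}

int main(void)
{
    printf("pushed %d bytes\n", execute_actions());
    return 0;
}
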
diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
index 4f3b1798e0b2..d108ae0bd0ee 100644
--- a/net/openvswitch/flow_table.c
+++ b/net/openvswitch/flow_table.c
@@ -220,16 +220,13 @@ static struct mask_array *tbl_mask_array_alloc(int size)
struct mask_array *new;
size = max(MASK_ARRAY_SIZE_MIN, size);
- new = kzalloc(sizeof(struct mask_array) +
- sizeof(struct sw_flow_mask *) * size +
+ new = kzalloc(struct_size(new, masks, size) +
sizeof(u64) * size, GFP_KERNEL);
if (!new)
return NULL;
new->masks_usage_zero_cntr = (u64 *)((u8 *)new +
- sizeof(struct mask_array) +
- sizeof(struct sw_flow_mask *) *
- size);
+ struct_size(new, masks, size));
new->masks_usage_stats = __alloc_percpu(sizeof(struct mask_array_stats) +
sizeof(u64) * size,
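
tbl_mask_array_alloc() above swaps the open-coded sizeof(struct) + sizeof(elem) * size arithmetic for struct_size(), which computes header-plus-n-elements and saturates instead of silently overflowing. Below is a rough userspace stand-in for that helper (the real one lives in linux/overflow.h); flex_struct_size and the trimmed mask_array are illustrative only.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct mask_array {
    int count, max;
    uint64_t *zero_cntr;
    void *masks[];            /* flexible array member */
};

/* Approximation of the kernel's struct_size(): size of the header plus
 * n array elements, saturating to SIZE_MAX on overflow so the following
 * allocation fails instead of being too small.
 */
static size_t flex_struct_size(size_t hdr, size_t elem, size_t n)
{
    if (n && elem > (SIZE_MAX - hdr) / n)
        return SIZE_MAX;
    return hdr + elem * n;
}

int main(void)
{
    size_t n = 16;
    size_t bytes = flex_struct_size(sizeof(struct mask_array),
                                    sizeof(void *), n);
    struct mask_array *new = calloc(1, bytes);

    if (!new)
        return 1;
    new->max = (int)n;
    printf("allocated %zu bytes for %zu masks\n", bytes, n);
    free(new);
    return 0;
}
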
diff --git a/net/openvswitch/flow_table.h b/net/openvswitch/flow_table.h
index 9e659db78c05..f524dc3e4862 100644
--- a/net/openvswitch/flow_table.h
+++ b/net/openvswitch/flow_table.h
@@ -48,7 +48,7 @@ struct mask_array {
int count, max;
struct mask_array_stats __percpu *masks_usage_stats;
u64 *masks_usage_zero_cntr;
- struct sw_flow_mask __rcu *masks[];
+ struct sw_flow_mask __rcu *masks[] __counted_by(max);
};
struct table_instance {
diff --git a/net/openvswitch/meter.h b/net/openvswitch/meter.h
index 0c33889a8515..ed11cd12b512 100644
--- a/net/openvswitch/meter.h
+++ b/net/openvswitch/meter.h
@@ -39,13 +39,13 @@ struct dp_meter {
u32 max_delta_t;
u64 used;
struct ovs_flow_stats stats;
- struct dp_meter_band bands[];
+ struct dp_meter_band bands[] __counted_by(n_bands);
};
struct dp_meter_instance {
struct rcu_head rcu;
u32 n_meters;
- struct dp_meter __rcu *dp_meters[];
+ struct dp_meter __rcu *dp_meters[] __counted_by(n_meters);
};
struct dp_meter_table {
diff --git a/net/packet/internal.h b/net/packet/internal.h
index 63f4865202c1..d29c94c45159 100644
--- a/net/packet/internal.h
+++ b/net/packet/internal.h
@@ -94,7 +94,7 @@ struct packet_fanout {
spinlock_t lock;
refcount_t sk_ref;
struct packet_type prot_hook ____cacheline_aligned_in_smp;
- struct sock __rcu *arr[];
+ struct sock __rcu *arr[] __counted_by(max_num_members);
};
struct packet_rollover {
diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
index 49dafe9ac72f..0cc5a4e19900 100644
--- a/net/rose/af_rose.c
+++ b/net/rose/af_rose.c
@@ -583,7 +583,7 @@ static struct sock *rose_make_new(struct sock *osk)
#endif
sk->sk_type = osk->sk_type;
- sk->sk_priority = osk->sk_priority;
+ sk->sk_priority = READ_ONCE(osk->sk_priority);
sk->sk_protocol = osk->sk_protocol;
sk->sk_rcvbuf = osk->sk_rcvbuf;
sk->sk_sndbuf = osk->sk_sndbuf;
diff --git a/net/sched/act_ct.c b/net/sched/act_ct.c
index fb52d6f9aff9..9583645e86c2 100644
--- a/net/sched/act_ct.c
+++ b/net/sched/act_ct.c
@@ -699,7 +699,6 @@ static struct tc_action_ops act_ct_ops;
struct tc_ct_action_net {
struct tc_action_net tn; /* Must be first */
- bool labels;
};
/* Determine whether skb->_nfct is equal to the result of conntrack lookup. */
@@ -838,8 +837,13 @@ static void tcf_ct_params_free(struct tcf_ct_params *params)
}
if (params->ct_ft)
tcf_ct_flow_table_put(params->ct_ft);
- if (params->tmpl)
+ if (params->tmpl) {
+ if (params->put_labels)
+ nf_connlabels_put(nf_ct_net(params->tmpl));
+
nf_ct_put(params->tmpl);
+ }
+
kfree(params);
}
@@ -1163,9 +1167,9 @@ static int tcf_ct_fill_params(struct net *net,
struct nlattr **tb,
struct netlink_ext_ack *extack)
{
- struct tc_ct_action_net *tn = net_generic(net, act_ct_ops.net_id);
struct nf_conntrack_zone zone;
int err, family, proto, len;
+ bool put_labels = false;
struct nf_conn *tmpl;
char *name;
@@ -1195,15 +1199,20 @@ static int tcf_ct_fill_params(struct net *net,
}
if (tb[TCA_CT_LABELS]) {
+ unsigned int n_bits = sizeof_field(struct tcf_ct_params, labels) * 8;
+
if (!IS_ENABLED(CONFIG_NF_CONNTRACK_LABELS)) {
NL_SET_ERR_MSG_MOD(extack, "Conntrack labels isn't enabled.");
return -EOPNOTSUPP;
}
- if (!tn->labels) {
+ if (nf_connlabels_get(net, n_bits - 1)) {
NL_SET_ERR_MSG_MOD(extack, "Failed to set connlabel length");
return -EOPNOTSUPP;
+ } else {
+ put_labels = true;
}
+
tcf_ct_set_key_val(tb,
p->labels, TCA_CT_LABELS,
p->labels_mask, TCA_CT_LABELS_MASK,
@@ -1247,10 +1256,15 @@ static int tcf_ct_fill_params(struct net *net,
}
}
+ p->put_labels = put_labels;
+
if (p->ct_action & TCA_CT_ACT_COMMIT)
__set_bit(IPS_CONFIRMED_BIT, &tmpl->status);
return 0;
err:
+ if (put_labels)
+ nf_connlabels_put(net);
+
nf_ct_put(p->tmpl);
p->tmpl = NULL;
return err;
@@ -1551,32 +1565,13 @@ static struct tc_action_ops act_ct_ops = {
static __net_init int ct_init_net(struct net *net)
{
- unsigned int n_bits = sizeof_field(struct tcf_ct_params, labels) * 8;
struct tc_ct_action_net *tn = net_generic(net, act_ct_ops.net_id);
- if (nf_connlabels_get(net, n_bits - 1)) {
- tn->labels = false;
- pr_err("act_ct: Failed to set connlabels length");
- } else {
- tn->labels = true;
- }
-
return tc_action_net_init(net, &tn->tn, &act_ct_ops);
}
static void __net_exit ct_exit_net(struct list_head *net_list)
{
- struct net *net;
-
- rtnl_lock();
- list_for_each_entry(net, net_list, exit_list) {
- struct tc_ct_action_net *tn = net_generic(net, act_ct_ops.net_id);
-
- if (tn->labels)
- nf_connlabels_put(net);
- }
- rtnl_unlock();
-
tc_action_net_exit(net_list, act_ct_ops.net_id);
}
diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
index a193cc7b3241..1daeb2182b70 100644
--- a/net/sched/cls_api.c
+++ b/net/sched/cls_api.c
@@ -1681,12 +1681,16 @@ reclassify:
* time we got here with a cookie from hardware.
*/
if (unlikely(n->tp != tp || n->tp->chain != n->chain ||
- !tp->ops->get_exts))
+ !tp->ops->get_exts)) {
+ tcf_set_drop_reason(res, SKB_DROP_REASON_TC_ERROR);
return TC_ACT_SHOT;
+ }
exts = tp->ops->get_exts(tp, n->handle);
- if (unlikely(!exts || n->exts != exts))
+ if (unlikely(!exts || n->exts != exts)) {
+ tcf_set_drop_reason(res, SKB_DROP_REASON_TC_ERROR);
return TC_ACT_SHOT;
+ }
n = NULL;
err = tcf_exts_exec_ex(skb, exts, act_index, res);
@@ -1712,8 +1716,10 @@ reclassify:
return err;
}
- if (unlikely(n))
+ if (unlikely(n)) {
+ tcf_set_drop_reason(res, SKB_DROP_REASON_TC_ERROR);
return TC_ACT_SHOT;
+ }
return TC_ACT_UNSPEC; /* signal: continue lookup */
#ifdef CONFIG_NET_CLS_ACT
@@ -1723,6 +1729,7 @@ reset:
tp->chain->block->index,
tp->prio & 0xffff,
ntohs(tp->protocol));
+ tcf_set_drop_reason(res, SKB_DROP_REASON_TC_ERROR);
return TC_ACT_SHOT;
}
@@ -1759,8 +1766,10 @@ int tcf_classify(struct sk_buff *skb,
if (ext->act_miss) {
n = tcf_exts_miss_cookie_lookup(ext->act_miss_cookie,
&act_index);
- if (!n)
+ if (!n) {
+ tcf_set_drop_reason(res, SKB_DROP_REASON_TC_ERROR);
return TC_ACT_SHOT;
+ }
chain = n->chain_index;
} else {
@@ -1768,8 +1777,10 @@ int tcf_classify(struct sk_buff *skb,
}
fchain = tcf_chain_lookup_rcu(block, chain);
- if (!fchain)
+ if (!fchain) {
+ tcf_set_drop_reason(res, SKB_DROP_REASON_TC_ERROR);
return TC_ACT_SHOT;
+ }
/* Consume, so cloned/redirect skbs won't inherit ext */
skb_ext_del(skb, TC_SKB_EXT);
@@ -1788,8 +1799,11 @@ int tcf_classify(struct sk_buff *skb,
struct tc_skb_cb *cb = tc_skb_cb(skb);
ext = tc_skb_ext_alloc(skb);
- if (WARN_ON_ONCE(!ext))
+ if (WARN_ON_ONCE(!ext)) {
+ tcf_set_drop_reason(res, SKB_DROP_REASON_TC_ERROR);
return TC_ACT_SHOT;
+ }
+
ext->chain = last_executed_chain;
ext->mru = cb->mru;
ext->post_ct = cb->post_ct;
diff --git a/net/sched/cls_route.c b/net/sched/cls_route.c
index 1e20bbd687f1..1424bfeaca73 100644
--- a/net/sched/cls_route.c
+++ b/net/sched/cls_route.c
@@ -375,9 +375,9 @@ out:
static const struct nla_policy route4_policy[TCA_ROUTE4_MAX + 1] = {
[TCA_ROUTE4_CLASSID] = { .type = NLA_U32 },
- [TCA_ROUTE4_TO] = { .type = NLA_U32 },
- [TCA_ROUTE4_FROM] = { .type = NLA_U32 },
- [TCA_ROUTE4_IIF] = { .type = NLA_U32 },
+ [TCA_ROUTE4_TO] = NLA_POLICY_MAX(NLA_U32, 0xFF),
+ [TCA_ROUTE4_FROM] = NLA_POLICY_MAX(NLA_U32, 0xFF),
+ [TCA_ROUTE4_IIF] = NLA_POLICY_MAX(NLA_U32, 0x7FFF),
};
static int route4_set_parms(struct net *net, struct tcf_proto *tp,
@@ -397,33 +397,37 @@ static int route4_set_parms(struct net *net, struct tcf_proto *tp,
return err;
if (tb[TCA_ROUTE4_TO]) {
- if (new && handle & 0x8000)
+ if (new && handle & 0x8000) {
+ NL_SET_ERR_MSG(extack, "Invalid handle");
return -EINVAL;
+ }
to = nla_get_u32(tb[TCA_ROUTE4_TO]);
- if (to > 0xFF)
- return -EINVAL;
nhandle = to;
}
+ if (tb[TCA_ROUTE4_FROM] && tb[TCA_ROUTE4_IIF]) {
+ NL_SET_ERR_MSG_ATTR(extack, tb[TCA_ROUTE4_FROM],
+ "'from' and 'fromif' are mutually exclusive");
+ return -EINVAL;
+ }
+
if (tb[TCA_ROUTE4_FROM]) {
- if (tb[TCA_ROUTE4_IIF])
- return -EINVAL;
id = nla_get_u32(tb[TCA_ROUTE4_FROM]);
- if (id > 0xFF)
- return -EINVAL;
nhandle |= id << 16;
} else if (tb[TCA_ROUTE4_IIF]) {
id = nla_get_u32(tb[TCA_ROUTE4_IIF]);
- if (id > 0x7FFF)
- return -EINVAL;
nhandle |= (id | 0x8000) << 16;
} else
nhandle |= 0xFFFF << 16;
if (handle && new) {
nhandle |= handle & 0x7F00;
- if (nhandle != handle)
+ if (nhandle != handle) {
+ NL_SET_ERR_MSG_FMT(extack,
+ "Handle mismatch constructed: %x (expected: %x)",
+ handle, nhandle);
return -EINVAL;
+ }
}
if (!nhandle) {
@@ -478,7 +482,6 @@ static int route4_change(struct net *net, struct sk_buff *in_skb,
struct route4_filter __rcu **fp;
struct route4_filter *fold, *f1, *pfp, *f = NULL;
struct route4_bucket *b;
- struct nlattr *opt = tca[TCA_OPTIONS];
struct nlattr *tb[TCA_ROUTE4_MAX + 1];
unsigned int h, th;
int err;
@@ -489,10 +492,12 @@ static int route4_change(struct net *net, struct sk_buff *in_skb,
return -EINVAL;
}
- if (opt == NULL)
+ if (NL_REQ_ATTR_CHECK(extack, NULL, tca, TCA_OPTIONS)) {
+ NL_SET_ERR_MSG_MOD(extack, "Missing options");
return -EINVAL;
+ }
- err = nla_parse_nested_deprecated(tb, TCA_ROUTE4_MAX, opt,
+ err = nla_parse_nested_deprecated(tb, TCA_ROUTE4_MAX, tca[TCA_OPTIONS],
route4_policy, NULL);
if (err < 0)
return err;
diff --git a/net/sched/em_meta.c b/net/sched/em_meta.c
index da34fd4c9269..09d8afd04a2a 100644
--- a/net/sched/em_meta.c
+++ b/net/sched/em_meta.c
@@ -546,7 +546,7 @@ META_COLLECTOR(int_sk_prio)
*err = -1;
return;
}
- dst->value = sk->sk_priority;
+ dst->value = READ_ONCE(sk->sk_priority);
}
META_COLLECTOR(int_sk_rcvlowat)
diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
index f59a2cb2c803..0fd18c344ab5 100644
--- a/net/sched/sch_fq.c
+++ b/net/sched/sch_fq.c
@@ -2,7 +2,7 @@
/*
* net/sched/sch_fq.c Fair Queue Packet Scheduler (per flow pacing)
*
- * Copyright (C) 2013-2015 Eric Dumazet <edumazet@google.com>
+ * Copyright (C) 2013-2023 Eric Dumazet <edumazet@google.com>
*
* Meant to be mostly used for locally generated traffic :
* Fast classification depends on skb->sk being set before reaching us.
@@ -51,7 +51,8 @@
#include <net/tcp.h>
struct fq_skb_cb {
- u64 time_to_send;
+ u64 time_to_send;
+ u8 band;
};
static inline struct fq_skb_cb *fq_skb_cb(struct sk_buff *skb)
@@ -73,37 +74,41 @@ struct fq_flow {
struct sk_buff *tail; /* last skb in the list */
unsigned long age; /* (jiffies | 1UL) when flow was emptied, for gc */
};
- struct rb_node fq_node; /* anchor in fq_root[] trees */
+ union {
+ struct rb_node fq_node; /* anchor in fq_root[] trees */
+ /* Following field is only used for q->internal,
+ * because q->internal is not hashed in fq_root[]
+ */
+ u64 stat_fastpath_packets;
+ };
struct sock *sk;
u32 socket_hash; /* sk_hash */
int qlen; /* number of packets in flow queue */
-/* Second cache line, used in fq_dequeue() */
+/* Second cache line */
int credit;
- /* 32bit hole on 64bit arches */
-
+ int band;
struct fq_flow *next; /* next pointer in RR lists */
struct rb_node rate_node; /* anchor in q->delayed tree */
u64 time_next_packet;
-} ____cacheline_aligned_in_smp;
+};
struct fq_flow_head {
struct fq_flow *first;
struct fq_flow *last;
};
-struct fq_sched_data {
+struct fq_perband_flows {
struct fq_flow_head new_flows;
-
struct fq_flow_head old_flows;
+ int credit;
+ int quantum; /* based on band nr : 576KB, 192KB, 64KB */
+};
- struct rb_root delayed; /* for rate limited flows */
- u64 time_next_delayed_flow;
- u64 ktime_cache; /* copy of last ktime_get_ns() */
- unsigned long unthrottle_latency_ns;
+struct fq_sched_data {
+/* Read mostly cache line */
- struct fq_flow internal; /* for non classified or high prio packets */
u32 quantum;
u32 initial_quantum;
u32 flow_refill_delay;
@@ -117,24 +122,46 @@ struct fq_sched_data {
u8 rate_enable;
u8 fq_trees_log;
u8 horizon_drop;
+ u8 prio2band[(TC_PRIO_MAX + 1) >> 2];
+ u32 timer_slack; /* hrtimer slack in ns */
+
+/* Read/Write fields. */
+
+ unsigned int band_nr; /* band being serviced in fq_dequeue() */
+
+ struct fq_perband_flows band_flows[FQ_BANDS];
+
+ struct fq_flow internal; /* fastpath queue. */
+ struct rb_root delayed; /* for rate limited flows */
+ u64 time_next_delayed_flow;
+ unsigned long unthrottle_latency_ns;
+
+ u32 band_pkt_count[FQ_BANDS];
u32 flows;
- u32 inactive_flows;
+ u32 inactive_flows; /* Flows with no packet to send. */
u32 throttled_flows;
- u64 stat_gc_flows;
- u64 stat_internal_packets;
u64 stat_throttled;
+ struct qdisc_watchdog watchdog;
+ u64 stat_gc_flows;
+
+/* Seldom used fields. */
+
+ u64 stat_band_drops[FQ_BANDS];
u64 stat_ce_mark;
u64 stat_horizon_drops;
u64 stat_horizon_caps;
u64 stat_flows_plimit;
u64 stat_pkts_too_long;
u64 stat_allocation_errors;
-
- u32 timer_slack; /* hrtimer slack in ns */
- struct qdisc_watchdog watchdog;
};
+/* return the i-th 2-bit value ("crumb") */
+static u8 fq_prio2band(const u8 *prio2band, unsigned int prio)
+{
+ return (prio2band[prio / 4] >> (2 * (prio & 0x3))) & 0x3;
+}
+
/*
* f->tail and f->age share the same location.
* We can use the low order bit to differentiate if this location points
@@ -159,8 +186,19 @@ static bool fq_flow_is_throttled(const struct fq_flow *f)
return f->next == &throttled;
}
-static void fq_flow_add_tail(struct fq_flow_head *head, struct fq_flow *flow)
+enum new_flow {
+ NEW_FLOW,
+ OLD_FLOW
+};
+
+static void fq_flow_add_tail(struct fq_sched_data *q, struct fq_flow *flow,
+ enum new_flow list_sel)
{
+ struct fq_perband_flows *pband = &q->band_flows[flow->band];
+ struct fq_flow_head *head = (list_sel == NEW_FLOW) ?
+ &pband->new_flows :
+ &pband->old_flows;
+
if (head->first)
head->last->next = flow;
else
@@ -173,7 +211,7 @@ static void fq_flow_unset_throttled(struct fq_sched_data *q, struct fq_flow *f)
{
rb_erase(&f->rate_node, &q->delayed);
q->throttled_flows--;
- fq_flow_add_tail(&q->old_flows, f);
+ fq_flow_add_tail(q, f, OLD_FLOW);
}
static void fq_flow_set_throttled(struct fq_sched_data *q, struct fq_flow *f)
@@ -258,17 +296,61 @@ static void fq_gc(struct fq_sched_data *q,
kmem_cache_free_bulk(fq_flow_cachep, fcnt, tofree);
}
-static struct fq_flow *fq_classify(struct sk_buff *skb, struct fq_sched_data *q)
+/* Fast path can be used if :
+ * 1) Packet tstamp is in the past.
+ * 2) FQ qlen == 0 OR
+ * (no flow is currently eligible for transmit,
+ * AND fast path queue has less than 8 packets)
+ * 3) No SO_MAX_PACING_RATE on the socket (if any).
+ * 4) No @maxrate attribute on this qdisc,
+ *
+ * FQ can not use generic TCQ_F_CAN_BYPASS infrastructure.
+ */
+static bool fq_fastpath_check(const struct Qdisc *sch, struct sk_buff *skb,
+ u64 now)
{
+ const struct fq_sched_data *q = qdisc_priv(sch);
+ const struct sock *sk;
+
+ if (fq_skb_cb(skb)->time_to_send > now)
+ return false;
+
+ if (sch->q.qlen != 0) {
+ /* Even if some packets are stored in this qdisc,
+ * we can still enable fast path if all of them are
+ * scheduled in the future (ie no flows are eligible)
+ * or in the fast path queue.
+ */
+ if (q->flows != q->inactive_flows + q->throttled_flows)
+ return false;
+
+ /* Do not allow fast path queue to explode, we want Fair Queue mode
+ * under pressure.
+ */
+ if (q->internal.qlen >= 8)
+ return false;
+ }
+
+ sk = skb->sk;
+ if (sk && sk_fullsock(sk) && !sk_is_tcp(sk) &&
+ sk->sk_max_pacing_rate != ~0UL)
+ return false;
+
+ if (q->flow_max_rate != ~0UL)
+ return false;
+
+ return true;
+}
+
+static struct fq_flow *fq_classify(struct Qdisc *sch, struct sk_buff *skb,
+ u64 now)
+{
+ struct fq_sched_data *q = qdisc_priv(sch);
struct rb_node **p, *parent;
struct sock *sk = skb->sk;
struct rb_root *root;
struct fq_flow *f;
- /* warning: no starvation prevention... */
- if (unlikely((skb->priority & TC_PRIO_MAX) == TC_PRIO_CONTROL))
- return &q->internal;
-
/* SYNACK messages are attached to a TCP_NEW_SYN_RECV request socket
* or a listener (SYNCOOKIE mode)
* 1) request sockets are not full blown,
@@ -299,11 +381,18 @@ static struct fq_flow *fq_classify(struct sk_buff *skb, struct fq_sched_data *q)
sk = (struct sock *)((hash << 1) | 1UL);
}
+ if (fq_fastpath_check(sch, skb, now)) {
+ q->internal.stat_fastpath_packets++;
+ if (skb->sk == sk && q->rate_enable &&
+ READ_ONCE(sk->sk_pacing_status) != SK_PACING_FQ)
+ smp_store_release(&sk->sk_pacing_status,
+ SK_PACING_FQ);
+ return &q->internal;
+ }
+
root = &q->fq_root[hash_ptr(sk, q->fq_trees_log)];
- if (q->flows >= (2U << q->fq_trees_log) &&
- q->inactive_flows > q->flows/2)
- fq_gc(q, root, sk);
+ fq_gc(q, root, sk);
p = &root->rb_node;
parent = NULL;
@@ -396,7 +485,6 @@ static void fq_dequeue_skb(struct Qdisc *sch, struct fq_flow *flow,
{
fq_erase_head(sch, flow, skb);
skb_mark_not_on_list(skb);
- flow->qlen--;
qdisc_qstats_backlog_dec(sch, skb);
sch->q.qlen--;
}
@@ -434,9 +522,9 @@ static void flow_queue_add(struct fq_flow *flow, struct sk_buff *skb)
}
static bool fq_packet_beyond_horizon(const struct sk_buff *skb,
- const struct fq_sched_data *q)
+ const struct fq_sched_data *q, u64 now)
{
- return unlikely((s64)skb->tstamp > (s64)(q->ktime_cache + q->horizon));
+ return unlikely((s64)skb->tstamp > (s64)(now + q->horizon));
}
static int fq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
@@ -444,53 +532,57 @@ static int fq_enqueue(struct sk_buff *skb, struct Qdisc *sch,
{
struct fq_sched_data *q = qdisc_priv(sch);
struct fq_flow *f;
+ u64 now;
+ u8 band;
- if (unlikely(sch->q.qlen >= sch->limit))
+ band = fq_prio2band(q->prio2band, skb->priority & TC_PRIO_MAX);
+ if (unlikely(q->band_pkt_count[band] >= sch->limit)) {
+ q->stat_band_drops[band]++;
return qdisc_drop(skb, sch, to_free);
+ }
+ now = ktime_get_ns();
if (!skb->tstamp) {
- fq_skb_cb(skb)->time_to_send = q->ktime_cache = ktime_get_ns();
+ fq_skb_cb(skb)->time_to_send = now;
} else {
- /* Check if packet timestamp is too far in the future.
- * Try first if our cached value, to avoid ktime_get_ns()
- * cost in most cases.
- */
- if (fq_packet_beyond_horizon(skb, q)) {
- /* Refresh our cache and check another time */
- q->ktime_cache = ktime_get_ns();
- if (fq_packet_beyond_horizon(skb, q)) {
- if (q->horizon_drop) {
+ /* Check if packet timestamp is too far in the future. */
+ if (fq_packet_beyond_horizon(skb, q, now)) {
+ if (q->horizon_drop) {
q->stat_horizon_drops++;
return qdisc_drop(skb, sch, to_free);
- }
- q->stat_horizon_caps++;
- skb->tstamp = q->ktime_cache + q->horizon;
}
+ q->stat_horizon_caps++;
+ skb->tstamp = now + q->horizon;
}
fq_skb_cb(skb)->time_to_send = skb->tstamp;
}
- f = fq_classify(skb, q);
- if (unlikely(f->qlen >= q->flow_plimit && f != &q->internal)) {
- q->stat_flows_plimit++;
- return qdisc_drop(skb, sch, to_free);
- }
+ f = fq_classify(sch, skb, now);
- f->qlen++;
- qdisc_qstats_backlog_inc(sch, skb);
- if (fq_flow_is_detached(f)) {
- fq_flow_add_tail(&q->new_flows, f);
- if (time_after(jiffies, f->age + q->flow_refill_delay))
- f->credit = max_t(u32, f->credit, q->quantum);
- q->inactive_flows--;
+ if (f != &q->internal) {
+ if (unlikely(f->qlen >= q->flow_plimit)) {
+ q->stat_flows_plimit++;
+ return qdisc_drop(skb, sch, to_free);
+ }
+
+ if (fq_flow_is_detached(f)) {
+ fq_flow_add_tail(q, f, NEW_FLOW);
+ if (time_after(jiffies, f->age + q->flow_refill_delay))
+ f->credit = max_t(u32, f->credit, q->quantum);
+ }
+
+ f->band = band;
+ q->band_pkt_count[band]++;
+ fq_skb_cb(skb)->band = band;
+ if (f->qlen == 0)
+ q->inactive_flows--;
}
+ f->qlen++;
/* Note: this overwrites f->age */
flow_queue_add(f, skb);
- if (unlikely(f == &q->internal)) {
- q->stat_internal_packets++;
- }
+ qdisc_qstats_backlog_inc(sch, skb);
sch->q.qlen++;
return NET_XMIT_SUCCESS;
@@ -523,13 +615,26 @@ static void fq_check_throttled(struct fq_sched_data *q, u64 now)
}
}
+static struct fq_flow_head *fq_pband_head_select(struct fq_perband_flows *pband)
+{
+ if (pband->credit <= 0)
+ return NULL;
+
+ if (pband->new_flows.first)
+ return &pband->new_flows;
+
+ return pband->old_flows.first ? &pband->old_flows : NULL;
+}
+
static struct sk_buff *fq_dequeue(struct Qdisc *sch)
{
struct fq_sched_data *q = qdisc_priv(sch);
+ struct fq_perband_flows *pband;
struct fq_flow_head *head;
struct sk_buff *skb;
struct fq_flow *f;
unsigned long rate;
+ int retry;
u32 plen;
u64 now;
@@ -538,30 +643,38 @@ static struct sk_buff *fq_dequeue(struct Qdisc *sch)
skb = fq_peek(&q->internal);
if (unlikely(skb)) {
+ q->internal.qlen--;
fq_dequeue_skb(sch, &q->internal, skb);
goto out;
}
- q->ktime_cache = now = ktime_get_ns();
+ now = ktime_get_ns();
fq_check_throttled(q, now);
+ retry = 0;
+ pband = &q->band_flows[q->band_nr];
begin:
- head = &q->new_flows;
- if (!head->first) {
- head = &q->old_flows;
- if (!head->first) {
- if (q->time_next_delayed_flow != ~0ULL)
- qdisc_watchdog_schedule_range_ns(&q->watchdog,
+ head = fq_pband_head_select(pband);
+ if (!head) {
+ while (++retry <= FQ_BANDS) {
+ if (++q->band_nr == FQ_BANDS)
+ q->band_nr = 0;
+ pband = &q->band_flows[q->band_nr];
+ pband->credit = min(pband->credit + pband->quantum,
+ pband->quantum);
+ goto begin;
+ }
+ if (q->time_next_delayed_flow != ~0ULL)
+ qdisc_watchdog_schedule_range_ns(&q->watchdog,
q->time_next_delayed_flow,
q->timer_slack);
- return NULL;
- }
+ return NULL;
}
f = head->first;
-
+ retry = 0;
if (f->credit <= 0) {
f->credit += q->quantum;
head->first = f->next;
- fq_flow_add_tail(&q->old_flows, f);
+ fq_flow_add_tail(q, f, OLD_FLOW);
goto begin;
}
@@ -581,20 +694,23 @@ begin:
INET_ECN_set_ce(skb);
q->stat_ce_mark++;
}
+ if (--f->qlen == 0)
+ q->inactive_flows++;
+ q->band_pkt_count[fq_skb_cb(skb)->band]--;
fq_dequeue_skb(sch, f, skb);
} else {
head->first = f->next;
/* force a pass through old_flows to prevent starvation */
- if ((head == &q->new_flows) && q->old_flows.first) {
- fq_flow_add_tail(&q->old_flows, f);
+ if (head == &pband->new_flows) {
+ fq_flow_add_tail(q, f, OLD_FLOW);
} else {
fq_flow_set_detached(f);
- q->inactive_flows++;
}
goto begin;
}
plen = qdisc_pkt_len(skb);
f->credit -= plen;
+ pband->credit -= plen;
if (!q->rate_enable)
goto out;
@@ -607,7 +723,7 @@ begin:
*/
if (!skb->tstamp) {
if (skb->sk)
- rate = min(skb->sk->sk_pacing_rate, rate);
+ rate = min(READ_ONCE(skb->sk->sk_pacing_rate), rate);
if (rate <= q->low_rate_threshold) {
f->credit = 0;
@@ -686,8 +802,10 @@ static void fq_reset(struct Qdisc *sch)
kmem_cache_free(fq_flow_cachep, f);
}
}
- q->new_flows.first = NULL;
- q->old_flows.first = NULL;
+ for (idx = 0; idx < FQ_BANDS; idx++) {
+ q->band_flows[idx].new_flows.first = NULL;
+ q->band_flows[idx].old_flows.first = NULL;
+ }
q->delayed = RB_ROOT;
q->flows = 0;
q->inactive_flows = 0;
@@ -779,7 +897,7 @@ static int fq_resize(struct Qdisc *sch, u32 log)
return 0;
}
-static struct netlink_range_validation iq_range = {
+static const struct netlink_range_validation iq_range = {
.max = INT_MAX,
};
@@ -801,8 +919,77 @@ static const struct nla_policy fq_policy[TCA_FQ_MAX + 1] = {
[TCA_FQ_TIMER_SLACK] = { .type = NLA_U32 },
[TCA_FQ_HORIZON] = { .type = NLA_U32 },
[TCA_FQ_HORIZON_DROP] = { .type = NLA_U8 },
+ [TCA_FQ_PRIOMAP] = {
+ .type = NLA_BINARY,
+ .len = sizeof(struct tc_prio_qopt),
+ },
+ [TCA_FQ_WEIGHTS] = {
+ .type = NLA_BINARY,
+ .len = FQ_BANDS * sizeof(s32),
+ },
};
+/* compress a u8 array with all elems <= 3 to an array of 2-bit fields */
+static void fq_prio2band_compress_crumb(const u8 *in, u8 *out)
+{
+ const int num_elems = TC_PRIO_MAX + 1;
+ int i;
+
+ memset(out, 0, num_elems / 4);
+ for (i = 0; i < num_elems; i++)
+ out[i / 4] |= in[i] << (2 * (i & 0x3));
+}
+
+static void fq_prio2band_decompress_crumb(const u8 *in, u8 *out)
+{
+ const int num_elems = TC_PRIO_MAX + 1;
+ int i;
+
+ for (i = 0; i < num_elems; i++)
+ out[i] = fq_prio2band(in, i);
+}
+
+static int fq_load_weights(struct fq_sched_data *q,
+ const struct nlattr *attr,
+ struct netlink_ext_ack *extack)
+{
+ s32 *weights = nla_data(attr);
+ int i;
+
+ for (i = 0; i < FQ_BANDS; i++) {
+ if (weights[i] < FQ_MIN_WEIGHT) {
+ NL_SET_ERR_MSG_FMT_MOD(extack, "Weight %d less than minimum allowed %d",
+ weights[i], FQ_MIN_WEIGHT);
+ return -EINVAL;
+ }
+ }
+ for (i = 0; i < FQ_BANDS; i++)
+ q->band_flows[i].quantum = weights[i];
+ return 0;
+}
+
+static int fq_load_priomap(struct fq_sched_data *q,
+ const struct nlattr *attr,
+ struct netlink_ext_ack *extack)
+{
+ const struct tc_prio_qopt *map = nla_data(attr);
+ int i;
+
+ if (map->bands != FQ_BANDS) {
+ NL_SET_ERR_MSG_MOD(extack, "FQ only supports 3 bands");
+ return -EINVAL;
+ }
+ for (i = 0; i < TC_PRIO_MAX + 1; i++) {
+ if (map->priomap[i] >= FQ_BANDS) {
+ NL_SET_ERR_MSG_FMT_MOD(extack, "FQ priomap field %d maps to a too high band %d",
+ i, map->priomap[i]);
+ return -EINVAL;
+ }
+ }
+ fq_prio2band_compress_crumb(map->priomap, q->prio2band);
+ return 0;
+}
+
static int fq_change(struct Qdisc *sch, struct nlattr *opt,
struct netlink_ext_ack *extack)
{
@@ -877,6 +1064,12 @@ static int fq_change(struct Qdisc *sch, struct nlattr *opt,
q->flow_refill_delay = usecs_to_jiffies(usecs_delay);
}
+ if (!err && tb[TCA_FQ_PRIOMAP])
+ err = fq_load_priomap(q, tb[TCA_FQ_PRIOMAP], extack);
+
+ if (!err && tb[TCA_FQ_WEIGHTS])
+ err = fq_load_weights(q, tb[TCA_FQ_WEIGHTS], extack);
+
if (tb[TCA_FQ_ORPHAN_MASK])
q->orphan_mask = nla_get_u32(tb[TCA_FQ_ORPHAN_MASK]);
@@ -928,7 +1121,7 @@ static int fq_init(struct Qdisc *sch, struct nlattr *opt,
struct netlink_ext_ack *extack)
{
struct fq_sched_data *q = qdisc_priv(sch);
- int err;
+ int i, err;
sch->limit = 10000;
q->flow_plimit = 100;
@@ -938,8 +1131,13 @@ static int fq_init(struct Qdisc *sch, struct nlattr *opt,
q->flow_max_rate = ~0UL;
q->time_next_delayed_flow = ~0ULL;
q->rate_enable = 1;
- q->new_flows.first = NULL;
- q->old_flows.first = NULL;
+ for (i = 0; i < FQ_BANDS; i++) {
+ q->band_flows[i].new_flows.first = NULL;
+ q->band_flows[i].old_flows.first = NULL;
+ }
+ q->band_flows[0].quantum = 9 << 16;
+ q->band_flows[1].quantum = 3 << 16;
+ q->band_flows[2].quantum = 1 << 16;
q->delayed = RB_ROOT;
q->fq_root = NULL;
q->fq_trees_log = ilog2(1024);
@@ -954,6 +1152,7 @@ static int fq_init(struct Qdisc *sch, struct nlattr *opt,
/* Default ce_threshold of 4294 seconds */
q->ce_threshold = (u64)NSEC_PER_USEC * ~0U;
+ fq_prio2band_compress_crumb(sch_default_prio2band, q->prio2band);
qdisc_watchdog_init_clockid(&q->watchdog, sch, CLOCK_MONOTONIC);
if (opt)
@@ -968,8 +1167,12 @@ static int fq_dump(struct Qdisc *sch, struct sk_buff *skb)
{
struct fq_sched_data *q = qdisc_priv(sch);
u64 ce_threshold = q->ce_threshold;
+ struct tc_prio_qopt prio = {
+ .bands = FQ_BANDS,
+ };
u64 horizon = q->horizon;
struct nlattr *opts;
+ s32 weights[3];
opts = nla_nest_start_noflag(skb, TCA_OPTIONS);
if (opts == NULL)
@@ -999,6 +1202,16 @@ static int fq_dump(struct Qdisc *sch, struct sk_buff *skb)
nla_put_u8(skb, TCA_FQ_HORIZON_DROP, q->horizon_drop))
goto nla_put_failure;
+ fq_prio2band_decompress_crumb(q->prio2band, prio.priomap);
+ if (nla_put(skb, TCA_FQ_PRIOMAP, sizeof(prio), &prio))
+ goto nla_put_failure;
+
+ weights[0] = q->band_flows[0].quantum;
+ weights[1] = q->band_flows[1].quantum;
+ weights[2] = q->band_flows[2].quantum;
+ if (nla_put(skb, TCA_FQ_WEIGHTS, sizeof(weights), &weights))
+ goto nla_put_failure;
+
return nla_nest_end(skb, opts);
nla_put_failure:
@@ -1009,11 +1222,15 @@ static int fq_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
{
struct fq_sched_data *q = qdisc_priv(sch);
struct tc_fq_qd_stats st;
+ int i;
+
+ st.pad = 0;
sch_tree_lock(sch);
st.gc_flows = q->stat_gc_flows;
- st.highprio_packets = q->stat_internal_packets;
+ st.highprio_packets = 0;
+ st.fastpath_packets = q->internal.stat_fastpath_packets;
st.tcp_retrans = 0;
st.throttled = q->stat_throttled;
st.flows_plimit = q->stat_flows_plimit;
@@ -1029,6 +1246,10 @@ static int fq_dump_stats(struct Qdisc *sch, struct gnet_dump *d)
st.ce_mark = q->stat_ce_mark;
st.horizon_drops = q->stat_horizon_drops;
st.horizon_caps = q->stat_horizon_caps;
+ for (i = 0; i < FQ_BANDS; i++) {
+ st.band_drops[i] = q->stat_band_drops[i];
+ st.band_pkt_count[i] = q->band_pkt_count[i];
+ }
sch_tree_unlock(sch);
return gnet_stats_copy_app(d, &st, sizeof(st));
@@ -1056,7 +1277,7 @@ static int __init fq_module_init(void)
fq_flow_cachep = kmem_cache_create("fq_flow_cache",
sizeof(struct fq_flow),
- 0, 0, NULL);
+ 0, SLAB_HWCACHE_ALIGN, NULL);
if (!fq_flow_cachep)
return -ENOMEM;
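
Illustration (not part of the patch): the 2-bit "crumb" packing introduced above (fq_prio2band_compress_crumb() and its decompress counterpart) stores the 16 priority-to-band entries, each in the range 0..3, in just 4 bytes. A standalone userspace sketch of the same bit layout, with hypothetical names and the default prio2band table used further down in sch_generic.c:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define NUM_ELEMS 16			/* TC_PRIO_MAX + 1 */

static void compress_crumb(const uint8_t *in, uint8_t *out)
{
	memset(out, 0, NUM_ELEMS / 4);
	for (int i = 0; i < NUM_ELEMS; i++)
		out[i / 4] |= in[i] << (2 * (i & 0x3));
}

static uint8_t read_crumb(const uint8_t *in, int i)
{
	return (in[i / 4] >> (2 * (i & 0x3))) & 0x3;
}

int main(void)
{
	/* same values as sch_default_prio2band[] below */
	const uint8_t prio2band[NUM_ELEMS] = { 1, 2, 2, 2, 1, 2, 0, 0,
					       1, 1, 1, 1, 1, 1, 1, 1 };
	uint8_t packed[NUM_ELEMS / 4];

	compress_crumb(prio2band, packed);
	for (int i = 0; i < NUM_ELEMS; i++)
		printf("prio %2d -> band %u\n", i, read_crumb(packed, i));
	return 0;
}
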
diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
index 68e6acd0f130..5b595773e59b 100644
--- a/net/sched/sch_fq_pie.c
+++ b/net/sched/sch_fq_pie.c
@@ -202,7 +202,7 @@ out:
return NET_XMIT_CN;
}
-static struct netlink_range_validation fq_pie_q_range = {
+static const struct netlink_range_validation fq_pie_q_range = {
.min = 1,
.max = 1 << 20,
};
diff --git a/net/sched/sch_frag.c b/net/sched/sch_frag.c
index a9bd0a235890..ce63414185fd 100644
--- a/net/sched/sch_frag.c
+++ b/net/sched/sch_frag.c
@@ -96,7 +96,7 @@ static int sch_fragment(struct net *net, struct sk_buff *skb,
unsigned long orig_dst;
sch_frag_prepare_frag(skb, xmit);
- dst_init(&sch_frag_rt.dst, &sch_frag_dst_ops, NULL, 1,
+ dst_init(&sch_frag_rt.dst, &sch_frag_dst_ops, NULL,
DST_OBSOLETE_NONE, DST_NOCOUNT);
sch_frag_rt.dst.dev = skb->dev;
@@ -112,7 +112,7 @@ static int sch_fragment(struct net *net, struct sk_buff *skb,
sch_frag_prepare_frag(skb, xmit);
memset(&sch_frag_rt, 0, sizeof(sch_frag_rt));
- dst_init(&sch_frag_rt.dst, &sch_frag_dst_ops, NULL, 1,
+ dst_init(&sch_frag_rt.dst, &sch_frag_dst_ops, NULL,
DST_OBSOLETE_NONE, DST_NOCOUNT);
sch_frag_rt.dst.dev = skb->dev;
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 5d7e23f4cc0e..4195a4bc26ca 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -694,9 +694,10 @@ struct Qdisc_ops noqueue_qdisc_ops __read_mostly = {
.owner = THIS_MODULE,
};
-static const u8 prio2band[TC_PRIO_MAX + 1] = {
- 1, 2, 2, 2, 1, 2, 0, 0 , 1, 1, 1, 1, 1, 1, 1, 1
+const u8 sch_default_prio2band[TC_PRIO_MAX + 1] = {
+ 1, 2, 2, 2, 1, 2, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1
};
+EXPORT_SYMBOL(sch_default_prio2band);
/* 3-band FIFO queue: old style, but should be a bit faster than
generic prio+fifo combination.
@@ -721,7 +722,7 @@ static inline struct skb_array *band2list(struct pfifo_fast_priv *priv,
static int pfifo_fast_enqueue(struct sk_buff *skb, struct Qdisc *qdisc,
struct sk_buff **to_free)
{
- int band = prio2band[skb->priority & TC_PRIO_MAX];
+ int band = sch_default_prio2band[skb->priority & TC_PRIO_MAX];
struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
struct skb_array *q = band2list(priv, band);
unsigned int pkt_len = qdisc_pkt_len(skb);
@@ -830,7 +831,7 @@ static int pfifo_fast_dump(struct Qdisc *qdisc, struct sk_buff *skb)
{
struct tc_prio_qopt opt = { .bands = PFIFO_FAST_BANDS };
- memcpy(&opt.priomap, prio2band, TC_PRIO_MAX + 1);
+ memcpy(&opt.priomap, sch_default_prio2band, TC_PRIO_MAX + 1);
if (nla_put(skb, TCA_OPTIONS, sizeof(opt), &opt))
goto nla_put_failure;
return skb->len;
diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
index 4ad39a4a3cf5..6ba2dc191ed9 100644
--- a/net/sched/sch_netem.c
+++ b/net/sched/sch_netem.c
@@ -67,7 +67,7 @@
struct disttable {
u32 size;
- s16 table[];
+ s16 table[] __counted_by(size);
};
struct netem_sched_data {
diff --git a/net/sched/sch_qfq.c b/net/sched/sch_qfq.c
index 546c10adcacd..28315166fe8e 100644
--- a/net/sched/sch_qfq.c
+++ b/net/sched/sch_qfq.c
@@ -213,7 +213,7 @@ static struct qfq_class *qfq_find_class(struct Qdisc *sch, u32 classid)
return container_of(clc, struct qfq_class, common);
}
-static struct netlink_range_validation lmax_range = {
+static const struct netlink_range_validation lmax_range = {
.min = QFQ_MIN_LMAX,
.max = QFQ_MAX_LMAX,
};
@@ -1003,7 +1003,7 @@ static inline struct sk_buff *qfq_peek_skb(struct qfq_aggregate *agg,
*cl = list_first_entry(&agg->active, struct qfq_class, alist);
skb = (*cl)->qdisc->ops->peek((*cl)->qdisc);
if (skb == NULL)
- WARN_ONCE(1, "qfq_dequeue: non-workconserving leaf\n");
+ qdisc_warn_nonwc("qfq_dequeue", (*cl)->qdisc);
else
*len = qdisc_pkt_len(skb);
diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
index 1cb5e41c0ec7..2e1949de4171 100644
--- a/net/sched/sch_taprio.c
+++ b/net/sched/sch_taprio.c
@@ -1015,7 +1015,7 @@ static const struct nla_policy taprio_tc_policy[TCA_TAPRIO_TC_ENTRY_MAX + 1] = {
TC_FP_PREEMPTIBLE),
};
-static struct netlink_range_validation_signed taprio_cycle_time_range = {
+static const struct netlink_range_validation_signed taprio_cycle_time_range = {
.min = 0,
.max = INT_MAX,
};
diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
index 43f2731bf590..24368f755ab1 100644
--- a/net/sctp/ipv6.c
+++ b/net/sctp/ipv6.c
@@ -128,7 +128,6 @@ static void sctp_v6_err_handle(struct sctp_transport *t, struct sk_buff *skb,
{
struct sctp_association *asoc = t->asoc;
struct sock *sk = asoc->base.sk;
- struct ipv6_pinfo *np;
int err = 0;
switch (type) {
@@ -149,9 +148,8 @@ static void sctp_v6_err_handle(struct sctp_transport *t, struct sk_buff *skb,
break;
}
- np = inet6_sk(sk);
icmpv6_err_convert(type, code, &err);
- if (!sock_owned_by_user(sk) && np->recverr) {
+ if (!sock_owned_by_user(sk) && inet6_test_bit(RECVERR6, sk)) {
sk->sk_err = err;
sk_error_report(sk);
} else {
@@ -249,7 +247,7 @@ static int sctp_v6_xmit(struct sk_buff *skb, struct sctp_transport *t)
rcu_read_lock();
res = ip6_xmit(sk, skb, fl6, sk->sk_mark,
rcu_dereference(np->opt),
- tclass, sk->sk_priority);
+ tclass, READ_ONCE(sk->sk_priority));
rcu_read_unlock();
return res;
}
@@ -298,7 +296,8 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
if (t->flowlabel & SCTP_FLOWLABEL_SET_MASK)
fl6->flowlabel = htonl(t->flowlabel & SCTP_FLOWLABEL_VAL_MASK);
- if (np->sndflow && (fl6->flowlabel & IPV6_FLOWLABEL_MASK)) {
+ if (inet6_test_bit(SNDFLOW, sk) &&
+ (fl6->flowlabel & IPV6_FLOWLABEL_MASK)) {
struct ip6_flowlabel *flowlabel;
flowlabel = fl6_sock_lookup(sk, fl6->flowlabel);
diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
index 2185f44198de..94c6dd53cd62 100644
--- a/net/sctp/protocol.c
+++ b/net/sctp/protocol.c
@@ -426,7 +426,7 @@ static void sctp_v4_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
struct dst_entry *dst = NULL;
union sctp_addr *daddr = &t->ipaddr;
union sctp_addr dst_saddr;
- __u8 tos = inet_sk(sk)->tos;
+ u8 tos = READ_ONCE(inet_sk(sk)->tos);
if (t->dscp & SCTP_DSCP_SET_MASK)
tos = t->dscp & SCTP_DSCP_VAL_MASK;
@@ -1057,7 +1057,7 @@ static inline int sctp_v4_xmit(struct sk_buff *skb, struct sctp_transport *t)
struct flowi4 *fl4 = &t->fl.u.ip4;
struct sock *sk = skb->sk;
struct inet_sock *inet = inet_sk(sk);
- __u8 dscp = inet->tos;
+ __u8 dscp = READ_ONCE(inet->tos);
__be16 df = 0;
pr_debug("%s: skb:%p, len:%d, src:%pI4, dst:%pI4\n", __func__, skb,
diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
index 08527d882e56..f80208edd6a5 100644
--- a/net/sctp/sm_make_chunk.c
+++ b/net/sctp/sm_make_chunk.c
@@ -3303,7 +3303,7 @@ struct sctp_chunk *sctp_process_asconf(struct sctp_association *asoc,
/* Process the TLVs contained within the ASCONF chunk. */
sctp_walk_params(param, addip) {
- /* Skip preceeding address parameters. */
+ /* Skip preceding address parameters. */
if (param.p->type == SCTP_PARAM_IPV4_ADDRESS ||
param.p->type == SCTP_PARAM_IPV6_ADDRESS)
continue;
diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
index 35ddebae8894..abd2667734d4 100644
--- a/net/smc/af_smc.c
+++ b/net/smc/af_smc.c
@@ -493,7 +493,7 @@ static void smc_copy_sock_settings(struct sock *nsk, struct sock *osk,
nsk->sk_sndtimeo = osk->sk_sndtimeo;
nsk->sk_rcvtimeo = osk->sk_rcvtimeo;
nsk->sk_mark = READ_ONCE(osk->sk_mark);
- nsk->sk_priority = osk->sk_priority;
+ nsk->sk_priority = READ_ONCE(osk->sk_priority);
nsk->sk_rcvlowat = osk->sk_rcvlowat;
nsk->sk_bound_dev_if = osk->sk_bound_dev_if;
nsk->sk_err = osk->sk_err;
diff --git a/net/tipc/link.c b/net/tipc/link.c
index e33b4f29f77c..d0143823658d 100644
--- a/net/tipc/link.c
+++ b/net/tipc/link.c
@@ -1446,7 +1446,7 @@ u16 tipc_get_gap_ack_blks(struct tipc_gap_ack_blks **ga, struct tipc_link *l,
p = (struct tipc_gap_ack_blks *)msg_data(hdr);
sz = ntohs(p->len);
/* Sanity check */
- if (sz == struct_size(p, gacks, p->ugack_cnt + p->bgack_cnt)) {
+ if (sz == struct_size(p, gacks, size_add(p->ugack_cnt, p->bgack_cnt))) {
/* Good, check if the desired type exists */
if ((uc && p->ugack_cnt) || (!uc && p->bgack_cnt))
goto ok;
@@ -1533,7 +1533,7 @@ static u16 tipc_build_gap_ack_blks(struct tipc_link *l, struct tipc_msg *hdr)
__tipc_build_gap_ack_blks(ga, l, ga->bgack_cnt) : 0;
/* Total len */
- len = struct_size(ga, gacks, ga->bgack_cnt + ga->ugack_cnt);
+ len = struct_size(ga, gacks, size_add(ga->bgack_cnt, ga->ugack_cnt));
ga->len = htons(len);
return len;
}
diff --git a/net/tls/tls.h b/net/tls/tls.h
index 28a8c0e80e3c..762f424ff2d5 100644
--- a/net/tls/tls.h
+++ b/net/tls/tls.h
@@ -127,7 +127,7 @@ struct tls_rec {
struct sock *sk;
char aad_space[TLS_AAD_SPACE_SIZE];
- u8 iv_data[MAX_IV_SIZE];
+ u8 iv_data[TLS_MAX_IV_SIZE];
struct aead_request aead_req;
u8 aead_req_ctx[];
};
@@ -142,7 +142,10 @@ void update_sk_prot(struct sock *sk, struct tls_context *ctx);
int wait_on_pending_writer(struct sock *sk, long *timeo);
void tls_err_abort(struct sock *sk, int err);
-int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx);
+int init_prot_info(struct tls_prot_info *prot,
+ const struct tls_crypto_info *crypto_info,
+ const struct tls_cipher_desc *cipher_desc);
+int tls_set_sw_offload(struct sock *sk, int tx);
void tls_update_rx_zc_capable(struct tls_context *tls_ctx);
void tls_sw_strparser_arm(struct sock *sk, struct tls_context *ctx);
void tls_sw_strparser_done(struct tls_context *tls_ctx);
@@ -223,7 +226,7 @@ static inline bool tls_strp_msg_mixed_decrypted(struct tls_sw_context_rx *ctx)
#ifdef CONFIG_TLS_DEVICE
int tls_device_init(void);
void tls_device_cleanup(void);
-int tls_set_device_offload(struct sock *sk, struct tls_context *ctx);
+int tls_set_device_offload(struct sock *sk);
void tls_device_free_resources_tx(struct sock *sk);
int tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx);
void tls_device_offload_cleanup_rx(struct sock *sk);
@@ -234,7 +237,7 @@ static inline int tls_device_init(void) { return 0; }
static inline void tls_device_cleanup(void) {}
static inline int
-tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+tls_set_device_offload(struct sock *sk)
{
return -EOPNOTSUPP;
}
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index 8c94c926606a..bf8ed36b1ad6 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -56,11 +56,8 @@ static struct page *dummy_page;
static void tls_device_free_ctx(struct tls_context *ctx)
{
- if (ctx->tx_conf == TLS_HW) {
+ if (ctx->tx_conf == TLS_HW)
kfree(tls_offload_ctx_tx(ctx));
- kfree(ctx->tx.rec_seq);
- kfree(ctx->tx.iv);
- }
if (ctx->rx_conf == TLS_HW)
kfree(tls_offload_ctx_rx(ctx));
@@ -891,14 +888,8 @@ tls_device_reencrypt(struct sock *sk, struct tls_context *tls_ctx)
struct strp_msg *rxm;
char *orig_buf, *buf;
- switch (tls_ctx->crypto_recv.info.cipher_type) {
- case TLS_CIPHER_AES_GCM_128:
- case TLS_CIPHER_AES_GCM_256:
- break;
- default:
- return -EINVAL;
- }
cipher_desc = get_cipher_desc(tls_ctx->crypto_recv.info.cipher_type);
+ DEBUG_NET_WARN_ON_ONCE(!cipher_desc || !cipher_desc->offloadable);
rxm = strp_msg(tls_strp_msg(sw_ctx));
orig_buf = kmalloc(rxm->full_len + TLS_HEADER_SIZE + cipher_desc->iv,
@@ -1042,22 +1033,45 @@ static void tls_device_attach(struct tls_context *ctx, struct sock *sk,
}
}
-int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
+static struct tls_offload_context_tx *alloc_offload_ctx_tx(struct tls_context *ctx)
+{
+ struct tls_offload_context_tx *offload_ctx;
+ __be64 rcd_sn;
+
+ offload_ctx = kzalloc(sizeof(*offload_ctx), GFP_KERNEL);
+ if (!offload_ctx)
+ return NULL;
+
+ INIT_WORK(&offload_ctx->destruct_work, tls_device_tx_del_task);
+ INIT_LIST_HEAD(&offload_ctx->records_list);
+ spin_lock_init(&offload_ctx->lock);
+ sg_init_table(offload_ctx->sg_tx_data,
+ ARRAY_SIZE(offload_ctx->sg_tx_data));
+
+ /* start at rec_seq - 1 to account for the start marker record */
+ memcpy(&rcd_sn, ctx->tx.rec_seq, sizeof(rcd_sn));
+ offload_ctx->unacked_record_sn = be64_to_cpu(rcd_sn) - 1;
+
+ offload_ctx->ctx = ctx;
+
+ return offload_ctx;
+}
+
+int tls_set_device_offload(struct sock *sk)
{
- struct tls_context *tls_ctx = tls_get_ctx(sk);
- struct tls_prot_info *prot = &tls_ctx->prot_info;
- const struct tls_cipher_desc *cipher_desc;
struct tls_record_info *start_marker_record;
struct tls_offload_context_tx *offload_ctx;
+ const struct tls_cipher_desc *cipher_desc;
struct tls_crypto_info *crypto_info;
+ struct tls_prot_info *prot;
struct net_device *netdev;
- char *iv, *rec_seq;
+ struct tls_context *ctx;
struct sk_buff *skb;
- __be64 rcd_sn;
+ char *iv, *rec_seq;
int rc;
- if (!ctx)
- return -EINVAL;
+ ctx = tls_get_ctx(sk);
+ prot = &ctx->prot_info;
if (ctx->priv_ctx_tx)
return -EEXIST;
@@ -1085,38 +1099,23 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
goto release_netdev;
}
+ rc = init_prot_info(prot, crypto_info, cipher_desc);
+ if (rc)
+ goto release_netdev;
+
iv = crypto_info_iv(crypto_info, cipher_desc);
rec_seq = crypto_info_rec_seq(crypto_info, cipher_desc);
- prot->version = crypto_info->version;
- prot->cipher_type = crypto_info->cipher_type;
- prot->prepend_size = TLS_HEADER_SIZE + cipher_desc->iv;
- prot->tag_size = cipher_desc->tag;
- prot->overhead_size = prot->prepend_size + prot->tag_size;
- prot->iv_size = cipher_desc->iv;
- prot->salt_size = cipher_desc->salt;
- ctx->tx.iv = kmalloc(cipher_desc->iv + cipher_desc->salt, GFP_KERNEL);
- if (!ctx->tx.iv) {
- rc = -ENOMEM;
- goto release_netdev;
- }
-
memcpy(ctx->tx.iv + cipher_desc->salt, iv, cipher_desc->iv);
-
- prot->rec_seq_size = cipher_desc->rec_seq;
- ctx->tx.rec_seq = kmemdup(rec_seq, cipher_desc->rec_seq, GFP_KERNEL);
- if (!ctx->tx.rec_seq) {
- rc = -ENOMEM;
- goto free_iv;
- }
+ memcpy(ctx->tx.rec_seq, rec_seq, cipher_desc->rec_seq);
start_marker_record = kmalloc(sizeof(*start_marker_record), GFP_KERNEL);
if (!start_marker_record) {
rc = -ENOMEM;
- goto free_rec_seq;
+ goto release_netdev;
}
- offload_ctx = kzalloc(TLS_OFFLOAD_CONTEXT_SIZE_TX, GFP_KERNEL);
+ offload_ctx = alloc_offload_ctx_tx(ctx);
if (!offload_ctx) {
rc = -ENOMEM;
goto free_marker_record;
@@ -1126,22 +1125,10 @@ int tls_set_device_offload(struct sock *sk, struct tls_context *ctx)
if (rc)
goto free_offload_ctx;
- /* start at rec_seq - 1 to account for the start marker record */
- memcpy(&rcd_sn, ctx->tx.rec_seq, sizeof(rcd_sn));
- offload_ctx->unacked_record_sn = be64_to_cpu(rcd_sn) - 1;
-
start_marker_record->end_seq = tcp_sk(sk)->write_seq;
start_marker_record->len = 0;
start_marker_record->num_frags = 0;
-
- INIT_WORK(&offload_ctx->destruct_work, tls_device_tx_del_task);
- offload_ctx->ctx = ctx;
-
- INIT_LIST_HEAD(&offload_ctx->records_list);
list_add_tail(&start_marker_record->list, &offload_ctx->records_list);
- spin_lock_init(&offload_ctx->lock);
- sg_init_table(offload_ctx->sg_tx_data,
- ARRAY_SIZE(offload_ctx->sg_tx_data));
clean_acked_data_enable(inet_csk(sk), &tls_icsk_clean_acked);
ctx->push_pending_record = tls_device_push_pending_record;
@@ -1198,10 +1185,6 @@ free_offload_ctx:
ctx->priv_ctx_tx = NULL;
free_marker_record:
kfree(start_marker_record);
-free_rec_seq:
- kfree(ctx->tx.rec_seq);
-free_iv:
- kfree(ctx->tx.iv);
release_netdev:
dev_put(netdev);
return rc;
@@ -1242,7 +1225,7 @@ int tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx)
goto release_lock;
}
- context = kzalloc(TLS_OFFLOAD_CONTEXT_SIZE_RX, GFP_KERNEL);
+ context = kzalloc(sizeof(*context), GFP_KERNEL);
if (!context) {
rc = -ENOMEM;
goto release_lock;
@@ -1250,7 +1233,7 @@ int tls_set_device_offload_rx(struct sock *sk, struct tls_context *ctx)
context->resync_nh_reset = 1;
ctx->priv_ctx_rx = context;
- rc = tls_set_sw_offload(sk, ctx, 0);
+ rc = tls_set_sw_offload(sk, 0);
if (rc)
goto release_ctx;
diff --git a/net/tls/tls_device_fallback.c b/net/tls/tls_device_fallback.c
index 1d743f310f4f..4e7228f275fa 100644
--- a/net/tls/tls_device_fallback.c
+++ b/net/tls/tls_device_fallback.c
@@ -54,7 +54,7 @@ static int tls_enc_record(struct aead_request *aead_req,
struct scatter_walk *out, int *in_len,
struct tls_prot_info *prot)
{
- unsigned char buf[TLS_HEADER_SIZE + MAX_IV_SIZE];
+ unsigned char buf[TLS_HEADER_SIZE + TLS_MAX_IV_SIZE];
const struct tls_cipher_desc *cipher_desc;
struct scatterlist sg_in[3];
struct scatterlist sg_out[3];
@@ -62,14 +62,8 @@ static int tls_enc_record(struct aead_request *aead_req,
u16 len;
int rc;
- switch (prot->cipher_type) {
- case TLS_CIPHER_AES_GCM_128:
- case TLS_CIPHER_AES_GCM_256:
- break;
- default:
- return -EINVAL;
- }
cipher_desc = get_cipher_desc(prot->cipher_type);
+ DEBUG_NET_WARN_ON_ONCE(!cipher_desc || !cipher_desc->offloadable);
buf_size = TLS_HEADER_SIZE + cipher_desc->iv;
len = min_t(int, *in_len, buf_size);
@@ -338,17 +332,9 @@ static struct sk_buff *tls_enc_skb(struct tls_context *tls_ctx,
if (!aead_req)
return NULL;
- switch (tls_ctx->crypto_send.info.cipher_type) {
- case TLS_CIPHER_AES_GCM_128:
- salt = tls_ctx->crypto_send.aes_gcm_128.salt;
- break;
- case TLS_CIPHER_AES_GCM_256:
- salt = tls_ctx->crypto_send.aes_gcm_256.salt;
- break;
- default:
- goto free_req;
- }
cipher_desc = get_cipher_desc(tls_ctx->crypto_send.info.cipher_type);
+ DEBUG_NET_WARN_ON_ONCE(!cipher_desc || !cipher_desc->offloadable);
+
buf_len = cipher_desc->salt + cipher_desc->iv + TLS_AAD_SPACE_SIZE +
sync_size + cipher_desc->tag;
buf = kmalloc(buf_len, GFP_ATOMIC);
@@ -356,6 +342,7 @@ static struct sk_buff *tls_enc_skb(struct tls_context *tls_ctx,
goto free_req;
iv = buf;
+ salt = crypto_info_salt(&tls_ctx->crypto_send.info, cipher_desc);
memcpy(iv, salt, cipher_desc->salt);
aad = buf + cipher_desc->salt + cipher_desc->iv;
dummy_buf = aad + TLS_AAD_SPACE_SIZE;
diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
index 002483e60c19..1c2c6800949d 100644
--- a/net/tls/tls_main.c
+++ b/net/tls/tls_main.c
@@ -59,7 +59,8 @@ enum {
};
#define CHECK_CIPHER_DESC(cipher,ci) \
- static_assert(cipher ## _IV_SIZE <= MAX_IV_SIZE); \
+ static_assert(cipher ## _IV_SIZE <= TLS_MAX_IV_SIZE); \
+ static_assert(cipher ## _SALT_SIZE <= TLS_MAX_SALT_SIZE); \
static_assert(cipher ## _REC_SEQ_SIZE <= TLS_MAX_REC_SEQ_SIZE); \
static_assert(cipher ## _TAG_SIZE == TLS_TAG_SIZE); \
static_assert(sizeof_field(struct ci, iv) == cipher ## _IV_SIZE); \
@@ -348,8 +349,6 @@ static void tls_sk_proto_cleanup(struct sock *sk,
/* We need these for tls_sw_fallback handling of other packets */
if (ctx->tx_conf == TLS_SW) {
- kfree(ctx->tx.rec_seq);
- kfree(ctx->tx.iv);
tls_sw_release_resources_tx(sk);
TLS_DEC_STATS(sock_net(sk), LINUX_MIB_TLSCURRTXSW);
} else if (ctx->tx_conf == TLS_HW) {
@@ -585,6 +584,31 @@ static int tls_getsockopt(struct sock *sk, int level, int optname,
return do_tls_getsockopt(sk, optname, optval, optlen);
}
+static int validate_crypto_info(const struct tls_crypto_info *crypto_info,
+ const struct tls_crypto_info *alt_crypto_info)
+{
+ if (crypto_info->version != TLS_1_2_VERSION &&
+ crypto_info->version != TLS_1_3_VERSION)
+ return -EINVAL;
+
+ switch (crypto_info->cipher_type) {
+ case TLS_CIPHER_ARIA_GCM_128:
+ case TLS_CIPHER_ARIA_GCM_256:
+ if (crypto_info->version != TLS_1_2_VERSION)
+ return -EINVAL;
+ break;
+ }
+
+ /* Ensure that TLS version and ciphers are same in both directions */
+ if (TLS_CRYPTO_INFO_READY(alt_crypto_info)) {
+ if (alt_crypto_info->version != crypto_info->version ||
+ alt_crypto_info->cipher_type != crypto_info->cipher_type)
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
static int do_tls_setsockopt_conf(struct sock *sk, sockptr_t optval,
unsigned int optlen, int tx)
{
@@ -616,21 +640,9 @@ static int do_tls_setsockopt_conf(struct sock *sk, sockptr_t optval,
goto err_crypto_info;
}
- /* check version */
- if (crypto_info->version != TLS_1_2_VERSION &&
- crypto_info->version != TLS_1_3_VERSION) {
- rc = -EINVAL;
+ rc = validate_crypto_info(crypto_info, alt_crypto_info);
+ if (rc)
goto err_crypto_info;
- }
-
- /* Ensure that TLS version and ciphers are same in both directions */
- if (TLS_CRYPTO_INFO_READY(alt_crypto_info)) {
- if (alt_crypto_info->version != crypto_info->version ||
- alt_crypto_info->cipher_type != crypto_info->cipher_type) {
- rc = -EINVAL;
- goto err_crypto_info;
- }
- }
cipher_desc = get_cipher_desc(crypto_info->cipher_type);
if (!cipher_desc) {
@@ -638,16 +650,6 @@ static int do_tls_setsockopt_conf(struct sock *sk, sockptr_t optval,
goto err_crypto_info;
}
- switch (crypto_info->cipher_type) {
- case TLS_CIPHER_ARIA_GCM_128:
- case TLS_CIPHER_ARIA_GCM_256:
- if (crypto_info->version != TLS_1_2_VERSION) {
- rc = -EINVAL;
- goto err_crypto_info;
- }
- break;
- }
-
if (optlen != cipher_desc->crypto_info) {
rc = -EINVAL;
goto err_crypto_info;
@@ -662,13 +664,13 @@ static int do_tls_setsockopt_conf(struct sock *sk, sockptr_t optval,
}
if (tx) {
- rc = tls_set_device_offload(sk, ctx);
+ rc = tls_set_device_offload(sk);
conf = TLS_HW;
if (!rc) {
TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSTXDEVICE);
TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSCURRTXDEVICE);
} else {
- rc = tls_set_sw_offload(sk, ctx, 1);
+ rc = tls_set_sw_offload(sk, 1);
if (rc)
goto err_crypto_info;
TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSTXSW);
@@ -682,7 +684,7 @@ static int do_tls_setsockopt_conf(struct sock *sk, sockptr_t optval,
TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSRXDEVICE);
TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSCURRRXDEVICE);
} else {
- rc = tls_set_sw_offload(sk, ctx, 0);
+ rc = tls_set_sw_offload(sk, 0);
if (rc)
goto err_crypto_info;
TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSRXSW);
diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
index e9d1e83a859d..a78e8e722409 100644
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c
@@ -60,7 +60,7 @@ struct tls_decrypt_arg {
struct tls_decrypt_ctx {
struct sock *sk;
- u8 iv[MAX_IV_SIZE];
+ u8 iv[TLS_MAX_IV_SIZE];
u8 aad[TLS_MAX_AAD_SIZE];
u8 tail;
struct scatterlist sg[];
@@ -1491,7 +1491,7 @@ static int tls_decrypt_sg(struct sock *sk, struct iov_iter *out_iov,
*/
aead_size = sizeof(*aead_req) + crypto_aead_reqsize(ctx->aead_recv);
aead_size = ALIGN(aead_size, __alignof__(*dctx));
- mem = kmalloc(aead_size + struct_size(dctx, sg, n_sgin + n_sgout),
+ mem = kmalloc(aead_size + struct_size(dctx, sg, size_add(n_sgin, n_sgout)),
sk->sk_allocation);
if (!mem) {
err = -ENOMEM;
@@ -2326,7 +2326,7 @@ int tls_rx_msg_size(struct tls_strparser *strp, struct sk_buff *skb)
{
struct tls_context *tls_ctx = tls_get_ctx(strp->sk);
struct tls_prot_info *prot = &tls_ctx->prot_info;
- char header[TLS_HEADER_SIZE + MAX_IV_SIZE];
+ char header[TLS_HEADER_SIZE + TLS_MAX_IV_SIZE];
size_t cipher_overhead;
size_t data_len = 0;
int ret;
@@ -2474,9 +2474,6 @@ void tls_sw_release_resources_rx(struct sock *sk)
struct tls_context *tls_ctx = tls_get_ctx(sk);
struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx);
- kfree(tls_ctx->rx.rec_seq);
- kfree(tls_ctx->rx.iv);
-
if (ctx->aead_recv) {
__skb_queue_purge(&ctx->rx_list);
crypto_free_aead(ctx->aead_recv);
@@ -2588,69 +2585,113 @@ void tls_update_rx_zc_capable(struct tls_context *tls_ctx)
tls_ctx->prot_info.version != TLS_1_3_VERSION;
}
-int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+static struct tls_sw_context_tx *init_ctx_tx(struct tls_context *ctx, struct sock *sk)
+{
+ struct tls_sw_context_tx *sw_ctx_tx;
+
+ if (!ctx->priv_ctx_tx) {
+ sw_ctx_tx = kzalloc(sizeof(*sw_ctx_tx), GFP_KERNEL);
+ if (!sw_ctx_tx)
+ return NULL;
+ } else {
+ sw_ctx_tx = ctx->priv_ctx_tx;
+ }
+
+ crypto_init_wait(&sw_ctx_tx->async_wait);
+ spin_lock_init(&sw_ctx_tx->encrypt_compl_lock);
+ INIT_LIST_HEAD(&sw_ctx_tx->tx_list);
+ INIT_DELAYED_WORK(&sw_ctx_tx->tx_work.work, tx_work_handler);
+ sw_ctx_tx->tx_work.sk = sk;
+
+ return sw_ctx_tx;
+}
+
+static struct tls_sw_context_rx *init_ctx_rx(struct tls_context *ctx)
+{
+ struct tls_sw_context_rx *sw_ctx_rx;
+
+ if (!ctx->priv_ctx_rx) {
+ sw_ctx_rx = kzalloc(sizeof(*sw_ctx_rx), GFP_KERNEL);
+ if (!sw_ctx_rx)
+ return NULL;
+ } else {
+ sw_ctx_rx = ctx->priv_ctx_rx;
+ }
+
+ crypto_init_wait(&sw_ctx_rx->async_wait);
+ spin_lock_init(&sw_ctx_rx->decrypt_compl_lock);
+ init_waitqueue_head(&sw_ctx_rx->wq);
+ skb_queue_head_init(&sw_ctx_rx->rx_list);
+ skb_queue_head_init(&sw_ctx_rx->async_hold);
+
+ return sw_ctx_rx;
+}
+
+int init_prot_info(struct tls_prot_info *prot,
+ const struct tls_crypto_info *crypto_info,
+ const struct tls_cipher_desc *cipher_desc)
+{
+ u16 nonce_size = cipher_desc->nonce;
+
+ if (crypto_info->version == TLS_1_3_VERSION) {
+ nonce_size = 0;
+ prot->aad_size = TLS_HEADER_SIZE;
+ prot->tail_size = 1;
+ } else {
+ prot->aad_size = TLS_AAD_SPACE_SIZE;
+ prot->tail_size = 0;
+ }
+
+ /* Sanity-check the sizes for stack allocations. */
+ if (nonce_size > TLS_MAX_IV_SIZE || prot->aad_size > TLS_MAX_AAD_SIZE)
+ return -EINVAL;
+
+ prot->version = crypto_info->version;
+ prot->cipher_type = crypto_info->cipher_type;
+ prot->prepend_size = TLS_HEADER_SIZE + nonce_size;
+ prot->tag_size = cipher_desc->tag;
+ prot->overhead_size = prot->prepend_size + prot->tag_size + prot->tail_size;
+ prot->iv_size = cipher_desc->iv;
+ prot->salt_size = cipher_desc->salt;
+ prot->rec_seq_size = cipher_desc->rec_seq;
+
+ return 0;
+}
+
+int tls_set_sw_offload(struct sock *sk, int tx)
{
- struct tls_context *tls_ctx = tls_get_ctx(sk);
- struct tls_prot_info *prot = &tls_ctx->prot_info;
- struct tls_crypto_info *crypto_info;
struct tls_sw_context_tx *sw_ctx_tx = NULL;
struct tls_sw_context_rx *sw_ctx_rx = NULL;
+ const struct tls_cipher_desc *cipher_desc;
+ struct tls_crypto_info *crypto_info;
+ char *iv, *rec_seq, *key, *salt;
struct cipher_context *cctx;
+ struct tls_prot_info *prot;
struct crypto_aead **aead;
+ struct tls_context *ctx;
struct crypto_tfm *tfm;
- char *iv, *rec_seq, *key, *salt;
- const struct tls_cipher_desc *cipher_desc;
- u16 nonce_size;
int rc = 0;
- if (!ctx) {
- rc = -EINVAL;
- goto out;
- }
+ ctx = tls_get_ctx(sk);
+ prot = &ctx->prot_info;
if (tx) {
- if (!ctx->priv_ctx_tx) {
- sw_ctx_tx = kzalloc(sizeof(*sw_ctx_tx), GFP_KERNEL);
- if (!sw_ctx_tx) {
- rc = -ENOMEM;
- goto out;
- }
- ctx->priv_ctx_tx = sw_ctx_tx;
- } else {
- sw_ctx_tx =
- (struct tls_sw_context_tx *)ctx->priv_ctx_tx;
- }
- } else {
- if (!ctx->priv_ctx_rx) {
- sw_ctx_rx = kzalloc(sizeof(*sw_ctx_rx), GFP_KERNEL);
- if (!sw_ctx_rx) {
- rc = -ENOMEM;
- goto out;
- }
- ctx->priv_ctx_rx = sw_ctx_rx;
- } else {
- sw_ctx_rx =
- (struct tls_sw_context_rx *)ctx->priv_ctx_rx;
- }
- }
+ ctx->priv_ctx_tx = init_ctx_tx(ctx, sk);
+ if (!ctx->priv_ctx_tx)
+ return -ENOMEM;
- if (tx) {
- crypto_init_wait(&sw_ctx_tx->async_wait);
- spin_lock_init(&sw_ctx_tx->encrypt_compl_lock);
+ sw_ctx_tx = ctx->priv_ctx_tx;
crypto_info = &ctx->crypto_send.info;
cctx = &ctx->tx;
aead = &sw_ctx_tx->aead_send;
- INIT_LIST_HEAD(&sw_ctx_tx->tx_list);
- INIT_DELAYED_WORK(&sw_ctx_tx->tx_work.work, tx_work_handler);
- sw_ctx_tx->tx_work.sk = sk;
} else {
- crypto_init_wait(&sw_ctx_rx->async_wait);
- spin_lock_init(&sw_ctx_rx->decrypt_compl_lock);
- init_waitqueue_head(&sw_ctx_rx->wq);
+ ctx->priv_ctx_rx = init_ctx_rx(ctx);
+ if (!ctx->priv_ctx_rx)
+ return -ENOMEM;
+
+ sw_ctx_rx = ctx->priv_ctx_rx;
crypto_info = &ctx->crypto_recv.info;
cctx = &ctx->rx;
- skb_queue_head_init(&sw_ctx_rx->rx_list);
- skb_queue_head_init(&sw_ctx_rx->async_hold);
aead = &sw_ctx_rx->aead_recv;
}
@@ -2660,58 +2701,25 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
goto free_priv;
}
- nonce_size = cipher_desc->nonce;
+ rc = init_prot_info(prot, crypto_info, cipher_desc);
+ if (rc)
+ goto free_priv;
iv = crypto_info_iv(crypto_info, cipher_desc);
key = crypto_info_key(crypto_info, cipher_desc);
salt = crypto_info_salt(crypto_info, cipher_desc);
rec_seq = crypto_info_rec_seq(crypto_info, cipher_desc);
- if (crypto_info->version == TLS_1_3_VERSION) {
- nonce_size = 0;
- prot->aad_size = TLS_HEADER_SIZE;
- prot->tail_size = 1;
- } else {
- prot->aad_size = TLS_AAD_SPACE_SIZE;
- prot->tail_size = 0;
- }
-
- /* Sanity-check the sizes for stack allocations. */
- if (nonce_size > MAX_IV_SIZE || prot->aad_size > TLS_MAX_AAD_SIZE) {
- rc = -EINVAL;
- goto free_priv;
- }
-
- prot->version = crypto_info->version;
- prot->cipher_type = crypto_info->cipher_type;
- prot->prepend_size = TLS_HEADER_SIZE + nonce_size;
- prot->tag_size = cipher_desc->tag;
- prot->overhead_size = prot->prepend_size +
- prot->tag_size + prot->tail_size;
- prot->iv_size = cipher_desc->iv;
- prot->salt_size = cipher_desc->salt;
- cctx->iv = kmalloc(cipher_desc->iv + cipher_desc->salt, GFP_KERNEL);
- if (!cctx->iv) {
- rc = -ENOMEM;
- goto free_priv;
- }
- /* Note: 128 & 256 bit salt are the same size */
- prot->rec_seq_size = cipher_desc->rec_seq;
memcpy(cctx->iv, salt, cipher_desc->salt);
memcpy(cctx->iv + cipher_desc->salt, iv, cipher_desc->iv);
-
- cctx->rec_seq = kmemdup(rec_seq, cipher_desc->rec_seq, GFP_KERNEL);
- if (!cctx->rec_seq) {
- rc = -ENOMEM;
- goto free_iv;
- }
+ memcpy(cctx->rec_seq, rec_seq, cipher_desc->rec_seq);
if (!*aead) {
*aead = crypto_alloc_aead(cipher_desc->cipher_name, 0, 0);
if (IS_ERR(*aead)) {
rc = PTR_ERR(*aead);
*aead = NULL;
- goto free_rec_seq;
+ goto free_priv;
}
}
@@ -2743,12 +2751,6 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
free_aead:
crypto_free_aead(*aead);
*aead = NULL;
-free_rec_seq:
- kfree(cctx->rec_seq);
- cctx->rec_seq = NULL;
-free_iv:
- kfree(cctx->iv);
- cctx->iv = NULL;
free_priv:
if (tx) {
kfree(ctx->priv_ctx_tx);
diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
index 3e8a04a13668..45506a95b25f 100644
--- a/net/unix/af_unix.c
+++ b/net/unix/af_unix.c
@@ -116,6 +116,7 @@
#include <linux/freezer.h>
#include <linux/file.h>
#include <linux/btf_ids.h>
+#include <linux/bpf-cgroup.h>
#include "scm.h"
@@ -1381,6 +1382,10 @@ static int unix_dgram_connect(struct socket *sock, struct sockaddr *addr,
if (err)
goto out;
+ err = BPF_CGROUP_RUN_PROG_UNIX_CONNECT_LOCK(sk, addr, &alen);
+ if (err)
+ goto out;
+
if ((test_bit(SOCK_PASSCRED, &sock->flags) ||
test_bit(SOCK_PASSPIDFD, &sock->flags)) &&
!unix_sk(sk)->addr) {
@@ -1490,6 +1495,10 @@ static int unix_stream_connect(struct socket *sock, struct sockaddr *uaddr,
if (err)
goto out;
+ err = BPF_CGROUP_RUN_PROG_UNIX_CONNECT_LOCK(sk, uaddr, &addr_len);
+ if (err)
+ goto out;
+
if ((test_bit(SOCK_PASSCRED, &sock->flags) ||
test_bit(SOCK_PASSPIDFD, &sock->flags)) && !u->addr) {
err = unix_autobind(sk);
@@ -1770,6 +1779,13 @@ static int unix_getname(struct socket *sock, struct sockaddr *uaddr, int peer)
} else {
err = addr->len;
memcpy(sunaddr, addr->name, addr->len);
+
+ if (peer)
+ BPF_CGROUP_RUN_SA_PROG(sk, uaddr, &err,
+ CGROUP_UNIX_GETPEERNAME);
+ else
+ BPF_CGROUP_RUN_SA_PROG(sk, uaddr, &err,
+ CGROUP_UNIX_GETSOCKNAME);
}
sock_put(sk);
out:
@@ -1922,6 +1938,13 @@ static int unix_dgram_sendmsg(struct socket *sock, struct msghdr *msg,
err = unix_validate_addr(sunaddr, msg->msg_namelen);
if (err)
goto out;
+
+ err = BPF_CGROUP_RUN_PROG_UNIX_SENDMSG_LOCK(sk,
+ msg->msg_name,
+ &msg->msg_namelen,
+ NULL);
+ if (err)
+ goto out;
} else {
sunaddr = NULL;
err = -ENOTCONN;
@@ -2390,9 +2413,14 @@ int __unix_dgram_recvmsg(struct sock *sk, struct msghdr *msg, size_t size,
EPOLLOUT | EPOLLWRNORM |
EPOLLWRBAND);
- if (msg->msg_name)
+ if (msg->msg_name) {
unix_copy_addr(msg, skb->sk);
+ BPF_CGROUP_RUN_PROG_UNIX_RECVMSG_LOCK(sk,
+ msg->msg_name,
+ &msg->msg_namelen);
+ }
+
if (size > skb->len - skip)
size = skb->len - skip;
else if (size < skb->len - skip)
@@ -2744,6 +2772,11 @@ unlock:
DECLARE_SOCKADDR(struct sockaddr_un *, sunaddr,
state->msg->msg_name);
unix_copy_addr(state->msg, skb->sk);
+
+ BPF_CGROUP_RUN_PROG_UNIX_RECVMSG_LOCK(sk,
+ state->msg->msg_name,
+ &state->msg->msg_namelen);
+
sunaddr = NULL;
}
@@ -3311,7 +3344,7 @@ static const struct seq_operations unix_seq_ops = {
.show = unix_seq_show,
};
-#if IS_BUILTIN(CONFIG_UNIX) && defined(CONFIG_BPF_SYSCALL)
+#ifdef CONFIG_BPF_SYSCALL
struct bpf_unix_iter_state {
struct seq_net_private p;
unsigned int cur_sk;
@@ -3573,7 +3606,7 @@ static struct pernet_operations unix_net_ops = {
.exit = unix_net_exit,
};
-#if IS_BUILTIN(CONFIG_UNIX) && defined(CONFIG_BPF_SYSCALL) && defined(CONFIG_PROC_FS)
+#if defined(CONFIG_BPF_SYSCALL) && defined(CONFIG_PROC_FS)
DEFINE_BPF_ITER_FUNC(unix, struct bpf_iter_meta *meta,
struct unix_sock *unix_sk, uid_t uid)
@@ -3673,7 +3706,7 @@ static int __init af_unix_init(void)
register_pernet_subsys(&unix_net_ops);
unix_bpf_build_proto();
-#if IS_BUILTIN(CONFIG_UNIX) && defined(CONFIG_BPF_SYSCALL) && defined(CONFIG_PROC_FS)
+#if defined(CONFIG_BPF_SYSCALL) && defined(CONFIG_PROC_FS)
bpf_iter_register();
#endif
@@ -3681,20 +3714,5 @@ out:
return rc;
}
-static void __exit af_unix_exit(void)
-{
- sock_unregister(PF_UNIX);
- proto_unregister(&unix_dgram_proto);
- proto_unregister(&unix_stream_proto);
- unregister_pernet_subsys(&unix_net_ops);
-}
-
-/* Earlier than device_initcall() so that other drivers invoking
- request_module() don't end up in a loop when modprobe tries
- to use a UNIX socket. But later than subsys_initcall() because
- we depend on stuff initialised there */
+/* Later than subsys_initcall() because we depend on stuff initialised there */
fs_initcall(af_unix_init);
-module_exit(af_unix_exit);
-
-MODULE_LICENSE("GPL");
-MODULE_ALIAS_NETPROTO(PF_UNIX);
diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
index 020cf17ab7e4..816725af281f 100644
--- a/net/vmw_vsock/af_vsock.c
+++ b/net/vmw_vsock/af_vsock.c
@@ -89,6 +89,7 @@
#include <linux/types.h>
#include <linux/bitops.h>
#include <linux/cred.h>
+#include <linux/errqueue.h>
#include <linux/init.h>
#include <linux/io.h>
#include <linux/kernel.h>
@@ -110,6 +111,7 @@
#include <linux/workqueue.h>
#include <net/sock.h>
#include <net/af_vsock.h>
+#include <uapi/linux/vm_sockets.h>
static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr);
static void vsock_sk_destruct(struct sock *sk);
@@ -1030,7 +1032,7 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
poll_wait(file, sk_sleep(sk), wait);
mask = 0;
- if (sk->sk_err)
+ if (sk->sk_err || !skb_queue_empty_lockless(&sk->sk_error_queue))
/* Signify that there has been an error on this socket. */
mask |= EPOLLERR;
@@ -1404,6 +1406,17 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
goto out;
}
+ if (vsock_msgzerocopy_allow(transport)) {
+ set_bit(SOCK_SUPPORT_ZC, &sk->sk_socket->flags);
+ } else if (sock_flag(sk, SOCK_ZEROCOPY)) {
+ /* If this option was set before 'connect()',
+ * when transport was unknown, check that this
+ * feature is supported here.
+ */
+ err = -EOPNOTSUPP;
+ goto out;
+ }
+
err = vsock_auto_bind(vsk);
if (err)
goto out;
@@ -1558,6 +1571,9 @@ static int vsock_accept(struct socket *sock, struct socket *newsock, int flags,
} else {
newsock->state = SS_CONNECTED;
sock_graft(connected, newsock);
+ if (vsock_msgzerocopy_allow(vconnected->transport))
+ set_bit(SOCK_SUPPORT_ZC,
+ &connected->sk_socket->flags);
}
release_sock(connected);
@@ -1635,7 +1651,7 @@ static int vsock_connectible_setsockopt(struct socket *sock,
const struct vsock_transport *transport;
u64 val;
- if (level != AF_VSOCK)
+ if (level != AF_VSOCK && level != SOL_SOCKET)
return -ENOPROTOOPT;
#define COPY_IN(_v) \
@@ -1658,6 +1674,33 @@ static int vsock_connectible_setsockopt(struct socket *sock,
transport = vsk->transport;
+ if (level == SOL_SOCKET) {
+ int zerocopy;
+
+ if (optname != SO_ZEROCOPY) {
+ release_sock(sk);
+ return sock_setsockopt(sock, level, optname, optval, optlen);
+ }
+
+ /* Use 'int' type here, because variable to
+ * set this option usually has this type.
+ */
+ COPY_IN(zerocopy);
+
+ if (zerocopy < 0 || zerocopy > 1) {
+ err = -EINVAL;
+ goto exit;
+ }
+
+ if (transport && !vsock_msgzerocopy_allow(transport)) {
+ err = -EOPNOTSUPP;
+ goto exit;
+ }
+
+ sock_valbool_flag(sk, SOCK_ZEROCOPY, zerocopy);
+ goto exit;
+ }
+
switch (optname) {
case SO_VM_SOCKETS_BUFFER_SIZE:
COPY_IN(val);
@@ -1822,6 +1865,12 @@ static int vsock_connectible_sendmsg(struct socket *sock, struct msghdr *msg,
goto out;
}
+ if (msg->msg_flags & MSG_ZEROCOPY &&
+ !vsock_msgzerocopy_allow(transport)) {
+ err = -EOPNOTSUPP;
+ goto out;
+ }
+
/* Wait for room in the produce queue to enqueue our user's data. */
timeout = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
@@ -1921,6 +1970,9 @@ out_err:
err = total_written;
}
out:
+ if (sk->sk_type == SOCK_STREAM)
+ err = sk_stream_error(sk, msg->msg_flags, err);
+
release_sock(sk);
return err;
}
@@ -2134,6 +2186,10 @@ vsock_connectible_recvmsg(struct socket *sock, struct msghdr *msg, size_t len,
int err;
sk = sock->sk;
+
+ if (unlikely(flags & MSG_ERRQUEUE))
+ return sock_recv_errqueue(sk, msg, len, SOL_VSOCK, VSOCK_RECVERR);
+
vsk = vsock_sk(sk);
err = 0;
@@ -2301,6 +2357,12 @@ static int vsock_create(struct net *net, struct socket *sock,
}
}
+ /* SOCK_DGRAM doesn't have 'setsockopt' callback set in its
+ * proto_ops, so there is no handler for custom logic.
+ */
+ if (sock_type_connectible(sock->type))
+ set_bit(SOCK_CUSTOM_SOCKOPT, &sk->sk_socket->flags);
+
vsock_insert_unbound(vsk);
return 0;
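
Illustration (not part of the patch): with the SO_ZEROCOPY/MSG_ZEROCOPY plumbing added above, AF_VSOCK follows the usual MSG_ZEROCOPY userspace pattern: opt in, send with the flag, then reap completions from the error queue (which the diff wires into recvmsg via sock_recv_errqueue() with SOL_VSOCK/VSOCK_RECVERR). The sketch below uses only the generic socket API, assumes fd is an already connected vsock stream socket, and abbreviates error handling; a real program would poll() for POLLERR instead of spinning.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/errqueue.h>	/* struct sock_extended_err */

/* Send one buffer with MSG_ZEROCOPY and wait for its completion. */
static int send_one_zerocopy(int fd, const void *buf, size_t len)
{
	int one = 1;
	struct msghdr msg;
	char control[128];

	/* Opt in; the patches above make vsock return -EOPNOTSUPP here
	 * (or at connect time) if the transport cannot do zerocopy.
	 */
	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)) < 0)
		return -errno;

	if (send(fd, buf, len, MSG_ZEROCOPY) < 0)
		return -errno;

	/* The buffer must stay untouched until the completion shows up
	 * on the error queue; MSG_ERRQUEUE reads never block.
	 */
	for (;;) {
		memset(&msg, 0, sizeof(msg));
		msg.msg_control = control;
		msg.msg_controllen = sizeof(control);
		if (recvmsg(fd, &msg, MSG_ERRQUEUE) >= 0)
			break;
		if (errno != EAGAIN && errno != EWOULDBLOCK)
			return -errno;
	}

	struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
	if (cm) {
		struct sock_extended_err *serr = (void *)CMSG_DATA(cm);

		/* ee_info..ee_data is the range of completed notifications */
		printf("zerocopy completions %u..%u\n",
		       serr->ee_info, serr->ee_data);
	}
	return 0;
}
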
diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
index b80bf681327b..af5bab1acee1 100644
--- a/net/vmw_vsock/virtio_transport.c
+++ b/net/vmw_vsock/virtio_transport.c
@@ -63,6 +63,17 @@ struct virtio_vsock {
u32 guest_cid;
bool seqpacket_allow;
+
+ /* These fields are used only in tx path in function
+ * 'virtio_transport_send_pkt_work()', so to save
+ * stack space in it, place both of them here. Each
+ * pointer from 'out_sgs' points to the corresponding
+ * element in 'out_bufs' - this is initialized in
+ * 'virtio_vsock_probe()'. Both fields are protected
+ * by 'tx_lock'. +1 is needed for packet header.
+ */
+ struct scatterlist *out_sgs[MAX_SKB_FRAGS + 1];
+ struct scatterlist out_bufs[MAX_SKB_FRAGS + 1];
};
static u32 virtio_transport_get_local_cid(void)
@@ -100,8 +111,8 @@ virtio_transport_send_pkt_work(struct work_struct *work)
vq = vsock->vqs[VSOCK_VQ_TX];
for (;;) {
- struct scatterlist hdr, buf, *sgs[2];
int ret, in_sg = 0, out_sg = 0;
+ struct scatterlist **sgs;
struct sk_buff *skb;
bool reply;
@@ -111,12 +122,43 @@ virtio_transport_send_pkt_work(struct work_struct *work)
virtio_transport_deliver_tap_pkt(skb);
reply = virtio_vsock_skb_reply(skb);
-
- sg_init_one(&hdr, virtio_vsock_hdr(skb), sizeof(*virtio_vsock_hdr(skb)));
- sgs[out_sg++] = &hdr;
- if (skb->len > 0) {
- sg_init_one(&buf, skb->data, skb->len);
- sgs[out_sg++] = &buf;
+ sgs = vsock->out_sgs;
+ sg_init_one(sgs[out_sg], virtio_vsock_hdr(skb),
+ sizeof(*virtio_vsock_hdr(skb)));
+ out_sg++;
+
+ if (!skb_is_nonlinear(skb)) {
+ if (skb->len > 0) {
+ sg_init_one(sgs[out_sg], skb->data, skb->len);
+ out_sg++;
+ }
+ } else {
+ struct skb_shared_info *si;
+ int i;
+
+ /* If skb is nonlinear, then its buffer must contain
+ * only header and nothing more. Data is stored in
+ * the fragged part.
+ */
+ WARN_ON_ONCE(skb_headroom(skb) != sizeof(*virtio_vsock_hdr(skb)));
+
+ si = skb_shinfo(skb);
+
+ for (i = 0; i < si->nr_frags; i++) {
+ skb_frag_t *skb_frag = &si->frags[i];
+ void *va;
+
+ /* We will use 'page_to_virt()' for the userspace page
+ * here, because virtio or dma-mapping layers will call
+ * 'virt_to_phys()' later to fill the buffer descriptor.
+ * We don't touch memory at "virtual" address of this page.
+ */
+ va = page_to_virt(skb_frag->bv_page);
+ sg_init_one(sgs[out_sg],
+ va + skb_frag->bv_offset,
+ skb_frag->bv_len);
+ out_sg++;
+ }
}
ret = virtqueue_add_sgs(vq, sgs, out_sg, in_sg, skb, GFP_KERNEL);
@@ -413,6 +455,42 @@ static void virtio_vsock_rx_done(struct virtqueue *vq)
queue_work(virtio_vsock_workqueue, &vsock->rx_work);
}
+static bool virtio_transport_can_msgzerocopy(int bufs_num)
+{
+ struct virtio_vsock *vsock;
+ bool res = false;
+
+ rcu_read_lock();
+
+ vsock = rcu_dereference(the_virtio_vsock);
+ if (vsock) {
+ struct virtqueue *vq = vsock->vqs[VSOCK_VQ_TX];
+
+ /* Check that tx queue is large enough to keep whole
+ * data to send. This is needed, because when there is
+ * not enough free space in the queue, current skb to
+ * send will be reinserted to the head of tx list of
+ * the socket to retry transmission later, so if skb
+ * is bigger than whole queue, it will be reinserted
+ * again and again, thus blocking other skbs to be sent.
+ * Each page of the user provided buffer will be added
+ * as a single buffer to the tx virtqueue, so compare
+ * number of pages against maximum capacity of the queue.
+ */
+ if (bufs_num <= vq->num_max)
+ res = true;
+ }
+
+ rcu_read_unlock();
+
+ return res;
+}
+
+static bool virtio_transport_msgzerocopy_allow(void)
+{
+ return true;
+}
+
static bool virtio_transport_seqpacket_allow(u32 remote_cid);
static struct virtio_transport virtio_transport = {
@@ -446,6 +524,8 @@ static struct virtio_transport virtio_transport = {
.seqpacket_allow = virtio_transport_seqpacket_allow,
.seqpacket_has_data = virtio_transport_seqpacket_has_data,
+ .msgzerocopy_allow = virtio_transport_msgzerocopy_allow,
+
.notify_poll_in = virtio_transport_notify_poll_in,
.notify_poll_out = virtio_transport_notify_poll_out,
.notify_recv_init = virtio_transport_notify_recv_init,
@@ -462,6 +542,7 @@ static struct virtio_transport virtio_transport = {
},
.send_pkt = virtio_transport_send_pkt,
+ .can_msgzerocopy = virtio_transport_can_msgzerocopy,
};
static bool virtio_transport_seqpacket_allow(u32 remote_cid)
@@ -635,6 +716,7 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
{
struct virtio_vsock *vsock = NULL;
int ret;
+ int i;
ret = mutex_lock_interruptible(&the_virtio_vsock_mutex);
if (ret)
@@ -677,6 +759,9 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
if (ret < 0)
goto out;
+ for (i = 0; i < ARRAY_SIZE(vsock->out_sgs); i++)
+ vsock->out_sgs[i] = &vsock->out_bufs[i];
+
rcu_assign_pointer(the_virtio_vsock, vsock);
virtio_vsock_vqs_start(vsock);
diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
index 352d042b130b..e22c81435ef7 100644
--- a/net/vmw_vsock/virtio_transport_common.c
+++ b/net/vmw_vsock/virtio_transport_common.c
@@ -37,27 +37,89 @@ virtio_transport_get_ops(struct vsock_sock *vsk)
return container_of(t, struct virtio_transport, transport);
}
-/* Returns a new packet on success, otherwise returns NULL.
- *
- * If NULL is returned, errp is set to a negative errno.
- */
-static struct sk_buff *
-virtio_transport_alloc_skb(struct virtio_vsock_pkt_info *info,
- size_t len,
- u32 src_cid,
- u32 src_port,
- u32 dst_cid,
- u32 dst_port)
-{
- const size_t skb_len = VIRTIO_VSOCK_SKB_HEADROOM + len;
- struct virtio_vsock_hdr *hdr;
- struct sk_buff *skb;
- void *payload;
- int err;
+static bool virtio_transport_can_zcopy(const struct virtio_transport *t_ops,
+ struct virtio_vsock_pkt_info *info,
+ size_t pkt_len)
+{
+ struct iov_iter *iov_iter;
- skb = virtio_vsock_alloc_skb(skb_len, GFP_KERNEL);
- if (!skb)
- return NULL;
+ if (!info->msg)
+ return false;
+
+ iov_iter = &info->msg->msg_iter;
+
+ if (iov_iter->iov_offset)
+ return false;
+
+ /* We can't send whole iov. */
+ if (iov_iter->count > pkt_len)
+ return false;
+
+ /* Check that transport can send data in zerocopy mode. */
+ t_ops = virtio_transport_get_ops(info->vsk);
+
+ if (t_ops->can_msgzerocopy) {
+ int pages_in_iov = iov_iter_npages(iov_iter, MAX_SKB_FRAGS);
+ int pages_to_send = min(pages_in_iov, MAX_SKB_FRAGS);
+
+ /* +1 is for packet header. */
+ return t_ops->can_msgzerocopy(pages_to_send + 1);
+ }
+
+ return true;
+}
+
+static int virtio_transport_init_zcopy_skb(struct vsock_sock *vsk,
+ struct sk_buff *skb,
+ struct msghdr *msg,
+ bool zerocopy)
+{
+ struct ubuf_info *uarg;
+
+ if (msg->msg_ubuf) {
+ uarg = msg->msg_ubuf;
+ net_zcopy_get(uarg);
+ } else {
+ struct iov_iter *iter = &msg->msg_iter;
+ struct ubuf_info_msgzc *uarg_zc;
+
+ uarg = msg_zerocopy_realloc(sk_vsock(vsk),
+ iter->count,
+ NULL);
+ if (!uarg)
+ return -1;
+
+ uarg_zc = uarg_to_msgzc(uarg);
+ uarg_zc->zerocopy = zerocopy ? 1 : 0;
+ }
+
+ skb_zcopy_init(skb, uarg);
+
+ return 0;
+}
+
+static int virtio_transport_fill_skb(struct sk_buff *skb,
+ struct virtio_vsock_pkt_info *info,
+ size_t len,
+ bool zcopy)
+{
+ if (zcopy)
+ return __zerocopy_sg_from_iter(info->msg, NULL, skb,
+ &info->msg->msg_iter,
+ len);
+
+ return memcpy_from_msg(skb_put(skb, len), info->msg, len);
+}
+
+static void virtio_transport_init_hdr(struct sk_buff *skb,
+ struct virtio_vsock_pkt_info *info,
+ size_t payload_len,
+ u32 src_cid,
+ u32 src_port,
+ u32 dst_cid,
+ u32 dst_port)
+{
+ struct virtio_vsock_hdr *hdr;
hdr = virtio_vsock_hdr(skb);
hdr->type = cpu_to_le16(info->type);
@@ -67,43 +129,28 @@ virtio_transport_alloc_skb(struct virtio_vsock_pkt_info *info,
hdr->src_port = cpu_to_le32(src_port);
hdr->dst_port = cpu_to_le32(dst_port);
hdr->flags = cpu_to_le32(info->flags);
- hdr->len = cpu_to_le32(len);
-
- if (info->msg && len > 0) {
- payload = skb_put(skb, len);
- err = memcpy_from_msg(payload, info->msg, len);
- if (err)
- goto out;
-
- if (msg_data_left(info->msg) == 0 &&
- info->type == VIRTIO_VSOCK_TYPE_SEQPACKET) {
- hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOM);
-
- if (info->msg->msg_flags & MSG_EOR)
- hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOR);
- }
- }
+ hdr->len = cpu_to_le32(payload_len);
+}
- if (info->reply)
- virtio_vsock_skb_set_reply(skb);
+static void virtio_transport_copy_nonlinear_skb(const struct sk_buff *skb,
+ void *dst,
+ size_t len)
+{
+ struct iov_iter iov_iter = { 0 };
+ struct kvec kvec;
+ size_t to_copy;
- trace_virtio_transport_alloc_pkt(src_cid, src_port,
- dst_cid, dst_port,
- len,
- info->type,
- info->op,
- info->flags);
+ kvec.iov_base = dst;
+ kvec.iov_len = len;
- if (info->vsk && !skb_set_owner_sk_safe(skb, sk_vsock(info->vsk))) {
- WARN_ONCE(1, "failed to allocate skb on vsock socket with sk_refcnt == 0\n");
- goto out;
- }
+ iov_iter.iter_type = ITER_KVEC;
+ iov_iter.kvec = &kvec;
+ iov_iter.nr_segs = 1;
- return skb;
+ to_copy = min_t(size_t, len, skb->len);
-out:
- kfree_skb(skb);
- return NULL;
+ skb_copy_datagram_iter(skb, VIRTIO_VSOCK_SKB_CB(skb)->offset,
+ &iov_iter, to_copy);
}
/* Packet capture */
@@ -114,7 +161,6 @@ static struct sk_buff *virtio_transport_build_skb(void *opaque)
struct af_vsockmon_hdr *hdr;
struct sk_buff *skb;
size_t payload_len;
- void *payload_buf;
/* A packet could be split to fit the RX buffer, so we can retrieve
* the payload length from the header and the buffer pointer taking
@@ -122,7 +168,6 @@ static struct sk_buff *virtio_transport_build_skb(void *opaque)
*/
pkt_hdr = virtio_vsock_hdr(pkt);
payload_len = pkt->len;
- payload_buf = pkt->data;
skb = alloc_skb(sizeof(*hdr) + sizeof(*pkt_hdr) + payload_len,
GFP_ATOMIC);
@@ -165,7 +210,13 @@ static struct sk_buff *virtio_transport_build_skb(void *opaque)
skb_put_data(skb, pkt_hdr, sizeof(*pkt_hdr));
if (payload_len) {
- skb_put_data(skb, payload_buf, payload_len);
+ if (skb_is_nonlinear(pkt)) {
+ void *data = skb_put(skb, payload_len);
+
+ virtio_transport_copy_nonlinear_skb(pkt, data, payload_len);
+ } else {
+ skb_put_data(skb, pkt->data, payload_len);
+ }
}
return skb;
@@ -189,6 +240,82 @@ static u16 virtio_transport_get_type(struct sock *sk)
return VIRTIO_VSOCK_TYPE_SEQPACKET;
}
+/* Returns new sk_buff on success, otherwise returns NULL. */
+static struct sk_buff *virtio_transport_alloc_skb(struct virtio_vsock_pkt_info *info,
+ size_t payload_len,
+ bool zcopy,
+ u32 src_cid,
+ u32 src_port,
+ u32 dst_cid,
+ u32 dst_port)
+{
+ struct vsock_sock *vsk;
+ struct sk_buff *skb;
+ size_t skb_len;
+
+ skb_len = VIRTIO_VSOCK_SKB_HEADROOM;
+
+ if (!zcopy)
+ skb_len += payload_len;
+
+ skb = virtio_vsock_alloc_skb(skb_len, GFP_KERNEL);
+ if (!skb)
+ return NULL;
+
+ virtio_transport_init_hdr(skb, info, payload_len, src_cid, src_port,
+ dst_cid, dst_port);
+
+ vsk = info->vsk;
+
+ /* If 'vsk' != NULL then payload is always present, so we
+ * will never call '__zerocopy_sg_from_iter()' below without
+ * setting skb owner in 'skb_set_owner_w()'. The only case
+ * when 'vsk' == NULL is VIRTIO_VSOCK_OP_RST control message
+ * without payload.
+ */
+ WARN_ON_ONCE(!(vsk && (info->msg && payload_len)) && zcopy);
+
+ /* Set owner here, because '__zerocopy_sg_from_iter()' uses
+ * owner of skb without check to update 'sk_wmem_alloc'.
+ */
+ if (vsk)
+ skb_set_owner_w(skb, sk_vsock(vsk));
+
+ if (info->msg && payload_len > 0) {
+ int err;
+
+ err = virtio_transport_fill_skb(skb, info, payload_len, zcopy);
+ if (err)
+ goto out;
+
+ if (msg_data_left(info->msg) == 0 &&
+ info->type == VIRTIO_VSOCK_TYPE_SEQPACKET) {
+ struct virtio_vsock_hdr *hdr = virtio_vsock_hdr(skb);
+
+ hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOM);
+
+ if (info->msg->msg_flags & MSG_EOR)
+ hdr->flags |= cpu_to_le32(VIRTIO_VSOCK_SEQ_EOR);
+ }
+ }
+
+ if (info->reply)
+ virtio_vsock_skb_set_reply(skb);
+
+ trace_virtio_transport_alloc_pkt(src_cid, src_port,
+ dst_cid, dst_port,
+ payload_len,
+ info->type,
+ info->op,
+ info->flags,
+ zcopy);
+
+ return skb;
+out:
+ kfree_skb(skb);
+ return NULL;
+}
+
/* This function can only be used on connecting/connected sockets,
* since a socket assigned to a transport is required.
*
@@ -197,10 +324,12 @@ static u16 virtio_transport_get_type(struct sock *sk)
static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
struct virtio_vsock_pkt_info *info)
{
+ u32 max_skb_len = VIRTIO_VSOCK_MAX_PKT_BUF_SIZE;
u32 src_cid, src_port, dst_cid, dst_port;
const struct virtio_transport *t_ops;
struct virtio_vsock_sock *vvs;
u32 pkt_len = info->pkt_len;
+ bool can_zcopy = false;
u32 rest_len;
int ret;
@@ -229,15 +358,30 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
if (pkt_len == 0 && info->op == VIRTIO_VSOCK_OP_RW)
return pkt_len;
+ if (info->msg) {
+ /* If zerocopy is not enabled by 'setsockopt()', we behave as
+ * there is no MSG_ZEROCOPY flag set.
+ */
+ if (!sock_flag(sk_vsock(vsk), SOCK_ZEROCOPY))
+ info->msg->msg_flags &= ~MSG_ZEROCOPY;
+
+ if (info->msg->msg_flags & MSG_ZEROCOPY)
+ can_zcopy = virtio_transport_can_zcopy(t_ops, info, pkt_len);
+
+ if (can_zcopy)
+ max_skb_len = min_t(u32, VIRTIO_VSOCK_MAX_PKT_BUF_SIZE,
+ (MAX_SKB_FRAGS * PAGE_SIZE));
+ }
+
rest_len = pkt_len;
do {
struct sk_buff *skb;
size_t skb_len;
- skb_len = min_t(u32, VIRTIO_VSOCK_MAX_PKT_BUF_SIZE, rest_len);
+ skb_len = min(max_skb_len, rest_len);
- skb = virtio_transport_alloc_skb(info, skb_len,
+ skb = virtio_transport_alloc_skb(info, skb_len, can_zcopy,
src_cid, src_port,
dst_cid, dst_port);
if (!skb) {
@@ -245,6 +389,21 @@ static int virtio_transport_send_pkt_info(struct vsock_sock *vsk,
break;
}
+ /* We process buffer part by part, allocating skb on
+ * each iteration. If this is last skb for this buffer
+ * and MSG_ZEROCOPY mode is in use - we must allocate
+ * completion for the current syscall.
+ */
+ if (info->msg && info->msg->msg_flags & MSG_ZEROCOPY &&
+ skb_len == rest_len && info->op == VIRTIO_VSOCK_OP_RW) {
+ if (virtio_transport_init_zcopy_skb(vsk, skb,
+ info->msg,
+ can_zcopy)) {
+ ret = -ENOMEM;
+ break;
+ }
+ }
+
virtio_transport_inc_tx_pkt(vvs, skb);
ret = t_ops->send_pkt(skb);
@@ -364,9 +523,10 @@ virtio_transport_stream_do_peek(struct vsock_sock *vsk,
spin_unlock_bh(&vvs->rx_lock);
/* sk_lock is held by caller so no one else can dequeue.
- * Unlock rx_lock since memcpy_to_msg() may sleep.
+ * Unlock rx_lock since skb_copy_datagram_iter() may sleep.
*/
- err = memcpy_to_msg(msg, skb->data, bytes);
+ err = skb_copy_datagram_iter(skb, VIRTIO_VSOCK_SKB_CB(skb)->offset,
+ &msg->msg_iter, bytes);
if (err)
goto out;
@@ -410,25 +570,27 @@ virtio_transport_stream_do_dequeue(struct vsock_sock *vsk,
while (total < len && !skb_queue_empty(&vvs->rx_queue)) {
skb = skb_peek(&vvs->rx_queue);
- bytes = len - total;
- if (bytes > skb->len)
- bytes = skb->len;
+ bytes = min_t(size_t, len - total,
+ skb->len - VIRTIO_VSOCK_SKB_CB(skb)->offset);
/* sk_lock is held by caller so no one else can dequeue.
- * Unlock rx_lock since memcpy_to_msg() may sleep.
+ * Unlock rx_lock since skb_copy_datagram_iter() may sleep.
*/
spin_unlock_bh(&vvs->rx_lock);
- err = memcpy_to_msg(msg, skb->data, bytes);
+ err = skb_copy_datagram_iter(skb,
+ VIRTIO_VSOCK_SKB_CB(skb)->offset,
+ &msg->msg_iter, bytes);
if (err)
goto out;
spin_lock_bh(&vvs->rx_lock);
total += bytes;
- skb_pull(skb, bytes);
- if (skb->len == 0) {
+ VIRTIO_VSOCK_SKB_CB(skb)->offset += bytes;
+
+ if (skb->len == VIRTIO_VSOCK_SKB_CB(skb)->offset) {
u32 pkt_len = le32_to_cpu(virtio_vsock_hdr(skb)->len);
virtio_transport_dec_rx_pkt(vvs, pkt_len);
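The receive side stops using skb_pull() and instead records how much of each (possibly non-linear) skb has been consumed in its control block; a small sketch of the pattern, using the names from the patch:

#include <linux/skbuff.h>
#include <linux/virtio_vsock.h>

/* Copy from the per-skb offset and advance it, leaving the skb data
 * itself untouched so paged (non-linear) skbs work as well.
 */
static int copy_part(struct sk_buff *skb, struct msghdr *msg, size_t want)
{
	size_t off = VIRTIO_VSOCK_SKB_CB(skb)->offset;
	size_t n = min_t(size_t, want, skb->len - off);
	int err;

	err = skb_copy_datagram_iter(skb, off, &msg->msg_iter, n);
	if (!err)
		VIRTIO_VSOCK_SKB_CB(skb)->offset += n;
	return err;
}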
@@ -492,9 +654,10 @@ virtio_transport_seqpacket_do_peek(struct vsock_sock *vsk,
spin_unlock_bh(&vvs->rx_lock);
/* sk_lock is held by caller so no one else can dequeue.
- * Unlock rx_lock since memcpy_to_msg() may sleep.
+ * Unlock rx_lock since skb_copy_datagram_iter() may sleep.
*/
- err = memcpy_to_msg(msg, skb->data, bytes);
+ err = skb_copy_datagram_iter(skb, VIRTIO_VSOCK_SKB_CB(skb)->offset,
+ &msg->msg_iter, bytes);
if (err)
return err;
@@ -553,11 +716,13 @@ static int virtio_transport_seqpacket_do_dequeue(struct vsock_sock *vsk,
int err;
/* sk_lock is held by caller so no one else can dequeue.
- * Unlock rx_lock since memcpy_to_msg() may sleep.
+ * Unlock rx_lock since skb_copy_datagram_iter() may sleep.
*/
spin_unlock_bh(&vvs->rx_lock);
- err = memcpy_to_msg(msg, skb->data, bytes_to_copy);
+ err = skb_copy_datagram_iter(skb, 0,
+ &msg->msg_iter,
+ bytes_to_copy);
if (err) {
/* Copy of message failed. Rest of
* fragments will be freed without copy.
@@ -954,7 +1119,7 @@ static int virtio_transport_reset_no_sock(const struct virtio_transport *t,
if (!t)
return -ENOTCONN;
- reply = virtio_transport_alloc_skb(&info, 0,
+ reply = virtio_transport_alloc_skb(&info, 0, false,
le64_to_cpu(hdr->dst_cid),
le32_to_cpu(hdr->dst_port),
le64_to_cpu(hdr->src_cid),
diff --git a/net/vmw_vsock/vsock_loopback.c b/net/vmw_vsock/vsock_loopback.c
index 5c6360df1f31..048640167411 100644
--- a/net/vmw_vsock/vsock_loopback.c
+++ b/net/vmw_vsock/vsock_loopback.c
@@ -47,6 +47,10 @@ static int vsock_loopback_cancel_pkt(struct vsock_sock *vsk)
}
static bool vsock_loopback_seqpacket_allow(u32 remote_cid);
+static bool vsock_loopback_msgzerocopy_allow(void)
+{
+ return true;
+}
static struct virtio_transport loopback_transport = {
.transport = {
@@ -79,6 +83,8 @@ static struct virtio_transport loopback_transport = {
.seqpacket_allow = vsock_loopback_seqpacket_allow,
.seqpacket_has_data = virtio_transport_seqpacket_has_data,
+ .msgzerocopy_allow = vsock_loopback_msgzerocopy_allow,
+
.notify_poll_in = virtio_transport_notify_poll_in,
.notify_poll_out = virtio_transport_notify_poll_out,
.notify_recv_init = virtio_transport_notify_recv_init,
diff --git a/net/wireless/Kconfig b/net/wireless/Kconfig
index f620acd2a0f5..a9ac85e09af3 100644
--- a/net/wireless/Kconfig
+++ b/net/wireless/Kconfig
@@ -201,6 +201,16 @@ config CFG80211_WEXT_EXPORT
Drivers should select this option if they require cfg80211's
wext compatibility symbols to be exported.
+config CFG80211_KUNIT_TEST
+ tristate "KUnit tests for cfg80211" if !KUNIT_ALL_TESTS
+ depends on KUNIT
+ depends on CFG80211
+ default KUNIT_ALL_TESTS
+ help
+ Enable this option to test cfg80211 functions with kunit.
+
+ If unsure, say N.
+
endif # CFG80211
config LIB80211
diff --git a/net/wireless/Makefile b/net/wireless/Makefile
index 527ae669f6f7..089c841528c8 100644
--- a/net/wireless/Makefile
+++ b/net/wireless/Makefile
@@ -4,6 +4,7 @@ obj-$(CONFIG_LIB80211) += lib80211.o
obj-$(CONFIG_LIB80211_CRYPT_WEP) += lib80211_crypt_wep.o
obj-$(CONFIG_LIB80211_CRYPT_CCMP) += lib80211_crypt_ccmp.o
obj-$(CONFIG_LIB80211_CRYPT_TKIP) += lib80211_crypt_tkip.o
+obj-y += tests/
obj-$(CONFIG_WEXT_CORE) += wext-core.o
obj-$(CONFIG_WEXT_PROC) += wext-proc.o
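With CFG80211_KUNIT_TEST and the tests/ directory wired into the build above, a cfg80211 KUnit suite has the usual shape. A minimal hedged sketch (illustrative names, not the actual tests added by this merge), buildable as a module when the option is tristate:

#include <kunit/test.h>
#include <linux/module.h>

static void cfg80211_example_test(struct kunit *test)
{
	/* Placeholder assertion; real tests exercise cfg80211 helpers. */
	KUNIT_EXPECT_EQ(test, 2 + 2, 4);
}

static struct kunit_case cfg80211_example_cases[] = {
	KUNIT_CASE(cfg80211_example_test),
	{}
};

static struct kunit_suite cfg80211_example_suite = {
	.name = "cfg80211-example",
	.test_cases = cfg80211_example_cases,
};

kunit_test_suite(cfg80211_example_suite);
MODULE_LICENSE("GPL");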
diff --git a/net/wireless/ap.c b/net/wireless/ap.c
index 0962770303b2..9a9a870806f5 100644
--- a/net/wireless/ap.c
+++ b/net/wireless/ap.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Parts of this file are
- * Copyright (C) 2022 Intel Corporation
+ * Copyright (C) 2022-2023 Intel Corporation
*/
#include <linux/ieee80211.h>
#include <linux/export.h>
@@ -18,7 +18,7 @@ static int ___cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
struct wireless_dev *wdev = dev->ieee80211_ptr;
int err;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (!rdev->ops->stop_ap)
return -EOPNOTSUPP;
@@ -52,9 +52,9 @@ static int ___cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
return err;
}
-int __cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
- struct net_device *dev, int link_id,
- bool notify)
+int cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
+ struct net_device *dev, int link_id,
+ bool notify)
{
unsigned int link;
int ret = 0;
@@ -72,17 +72,3 @@ int __cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
return ret;
}
-
-int cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
- struct net_device *dev, int link_id,
- bool notify)
-{
- struct wireless_dev *wdev = dev->ieee80211_ptr;
- int err;
-
- wdev_lock(wdev);
- err = __cfg80211_stop_ap(rdev, dev, link_id, notify);
- wdev_unlock(wdev);
-
- return err;
-}
diff --git a/net/wireless/chan.c b/net/wireless/chan.c
index 0b7e81db383d..2d21e423abdb 100644
--- a/net/wireless/chan.c
+++ b/net/wireless/chan.c
@@ -6,7 +6,7 @@
*
* Copyright 2009 Johannes Berg <johannes@sipsolutions.net>
* Copyright 2013-2014 Intel Mobile Communications GmbH
- * Copyright 2018-2022 Intel Corporation
+ * Copyright 2018-2023 Intel Corporation
*/
#include <linux/export.h>
@@ -666,6 +666,7 @@ bool cfg80211_chandef_dfs_usable(struct wiphy *wiphy,
return (r1 + r2 > 0);
}
+EXPORT_SYMBOL(cfg80211_chandef_dfs_usable);
/*
* Checks if center frequency of chan falls with in the bandwidth
@@ -713,7 +714,7 @@ bool cfg80211_beaconing_iface_active(struct wireless_dev *wdev)
{
unsigned int link;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
switch (wdev->iftype) {
case NL80211_IFTYPE_AP:
@@ -782,18 +783,14 @@ static bool cfg80211_is_wiphy_oper_chan(struct wiphy *wiphy,
{
struct wireless_dev *wdev;
+ lockdep_assert_wiphy(wiphy);
+
list_for_each_entry(wdev, &wiphy->wdev_list, list) {
- wdev_lock(wdev);
- if (!cfg80211_beaconing_iface_active(wdev)) {
- wdev_unlock(wdev);
+ if (!cfg80211_beaconing_iface_active(wdev))
continue;
- }
- if (cfg80211_wdev_on_sub_chan(wdev, chan, false)) {
- wdev_unlock(wdev);
+ if (cfg80211_wdev_on_sub_chan(wdev, chan, false))
return true;
- }
- wdev_unlock(wdev);
}
return false;
@@ -823,14 +820,18 @@ bool cfg80211_any_wiphy_oper_chan(struct wiphy *wiphy,
if (!(chan->flags & IEEE80211_CHAN_RADAR))
return false;
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
+ bool found;
+
if (!reg_dfs_domain_same(wiphy, &rdev->wiphy))
continue;
- if (cfg80211_is_wiphy_oper_chan(&rdev->wiphy, chan))
- return true;
+ wiphy_lock(&rdev->wiphy);
+ found = cfg80211_is_wiphy_oper_chan(&rdev->wiphy, chan) ||
+ cfg80211_offchan_chain_is_active(rdev, chan);
+ wiphy_unlock(&rdev->wiphy);
- if (cfg80211_offchan_chain_is_active(rdev, chan))
+ if (found)
return true;
}
@@ -965,6 +966,7 @@ cfg80211_chandef_dfs_cac_time(struct wiphy *wiphy,
return max(t1, t2);
}
+EXPORT_SYMBOL(cfg80211_chandef_dfs_cac_time);
static bool cfg80211_secondary_chans_ok(struct wiphy *wiphy,
u32 center_freq, u32 bandwidth,
@@ -1162,8 +1164,7 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy,
if (!sband)
return false;
- for (i = 0; i < sband->n_iftype_data; i++) {
- iftd = &sband->iftype_data[i];
+ for_each_sband_iftype_data(sband, i, iftd) {
if (!iftd->eht_cap.has_eht)
continue;
@@ -1321,10 +1322,7 @@ static bool cfg80211_ir_permissive_chan(struct wiphy *wiphy,
list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) {
bool ret;
- wdev_lock(wdev);
ret = cfg80211_ir_permissive_check_wdev(iftype, wdev, chan);
- wdev_unlock(wdev);
-
if (ret)
return ret;
}
@@ -1433,17 +1431,10 @@ EXPORT_SYMBOL(cfg80211_any_usable_channels);
struct cfg80211_chan_def *wdev_chandef(struct wireless_dev *wdev,
unsigned int link_id)
{
- /*
- * We need to sort out the locking here - in some cases
- * where we get here we really just don't care (yet)
- * about the valid links, but in others we do. But we
- * get here with various driver cases, so we cannot
- * easily require the wdev mutex.
- */
- if (link_id || wdev->valid_links & BIT(0)) {
- ASSERT_WDEV_LOCK(wdev);
- WARN_ON(!(wdev->valid_links & BIT(link_id)));
- }
+ lockdep_assert_wiphy(wdev->wiphy);
+
+ WARN_ON(wdev->valid_links && !(wdev->valid_links & BIT(link_id)));
+ WARN_ON(!wdev->valid_links && link_id > 0);
switch (wdev->iftype) {
case NL80211_IFTYPE_MESH_POINT:
diff --git a/net/wireless/core.c b/net/wireless/core.c
index acec41c1809a..758c9a2a12c0 100644
--- a/net/wireless/core.c
+++ b/net/wireless/core.c
@@ -5,7 +5,7 @@
* Copyright 2006-2010 Johannes Berg <johannes@sipsolutions.net>
* Copyright 2013-2014 Intel Mobile Communications GmbH
* Copyright 2015-2017 Intel Deutschland GmbH
- * Copyright (C) 2018-2022 Intel Corporation
+ * Copyright (C) 2018-2023 Intel Corporation
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -60,7 +60,7 @@ struct cfg80211_registered_device *cfg80211_rdev_by_wiphy_idx(int wiphy_idx)
ASSERT_RTNL();
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
if (rdev->wiphy_idx == wiphy_idx) {
result = rdev;
break;
@@ -116,7 +116,7 @@ static int cfg80211_dev_check_name(struct cfg80211_registered_device *rdev,
}
/* Ensure another device does not already have this name. */
- list_for_each_entry(rdev2, &cfg80211_rdev_list, list)
+ for_each_rdev(rdev2)
if (strcmp(newname, wiphy_name(&rdev2->wiphy)) == 0)
return -EINVAL;
@@ -821,6 +821,7 @@ int wiphy_register(struct wiphy *wiphy)
/* sanity check supported bands/channels */
for (band = 0; band < NUM_NL80211_BANDS; band++) {
+ const struct ieee80211_sband_iftype_data *iftd;
u16 types = 0;
bool have_he = false;
@@ -877,14 +878,11 @@ int wiphy_register(struct wiphy *wiphy)
return -EINVAL;
}
- for (i = 0; i < sband->n_iftype_data; i++) {
- const struct ieee80211_sband_iftype_data *iftd;
+ for_each_sband_iftype_data(sband, i, iftd) {
bool has_ap, has_non_ap;
u32 ap_bits = BIT(NL80211_IFTYPE_AP) |
BIT(NL80211_IFTYPE_P2P_GO);
- iftd = &sband->iftype_data[i];
-
if (WARN_ON(!iftd->types_mask))
return -EINVAL;
if (WARN_ON(types & iftd->types_mask))
@@ -1049,7 +1047,8 @@ void wiphy_rfkill_start_polling(struct wiphy *wiphy)
}
EXPORT_SYMBOL(wiphy_rfkill_start_polling);
-void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev)
+void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev,
+ struct wiphy_work *end)
{
unsigned int runaway_limit = 100;
unsigned long flags;
@@ -1068,6 +1067,10 @@ void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev)
wk->func(&rdev->wiphy, wk);
spin_lock_irqsave(&rdev->wiphy_work_lock, flags);
+
+ if (wk == end)
+ break;
+
if (WARN_ON(--runaway_limit == 0))
INIT_LIST_HEAD(&rdev->wiphy_work_list);
}
@@ -1118,7 +1121,7 @@ void wiphy_unregister(struct wiphy *wiphy)
#endif
/* surely nothing is reachable now, clean up work */
- cfg80211_process_wiphy_works(rdev);
+ cfg80211_process_wiphy_works(rdev, NULL);
wiphy_unlock(&rdev->wiphy);
rtnl_unlock();
@@ -1271,14 +1274,13 @@ void cfg80211_update_iface_num(struct cfg80211_registered_device *rdev,
rdev->num_running_monitor_ifaces += num;
}
-void __cfg80211_leave(struct cfg80211_registered_device *rdev,
- struct wireless_dev *wdev)
+void cfg80211_leave(struct cfg80211_registered_device *rdev,
+ struct wireless_dev *wdev)
{
struct net_device *dev = wdev->netdev;
struct cfg80211_sched_scan_request *pos, *tmp;
lockdep_assert_held(&rdev->wiphy.mtx);
- ASSERT_WDEV_LOCK(wdev);
cfg80211_pmsr_wdev_down(wdev);
@@ -1286,7 +1288,7 @@ void __cfg80211_leave(struct cfg80211_registered_device *rdev,
switch (wdev->iftype) {
case NL80211_IFTYPE_ADHOC:
- __cfg80211_leave_ibss(rdev, dev, true);
+ cfg80211_leave_ibss(rdev, dev, true);
break;
case NL80211_IFTYPE_P2P_CLIENT:
case NL80211_IFTYPE_STATION:
@@ -1306,14 +1308,14 @@ void __cfg80211_leave(struct cfg80211_registered_device *rdev,
WLAN_REASON_DEAUTH_LEAVING, true);
break;
case NL80211_IFTYPE_MESH_POINT:
- __cfg80211_leave_mesh(rdev, dev);
+ cfg80211_leave_mesh(rdev, dev);
break;
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_P2P_GO:
- __cfg80211_stop_ap(rdev, dev, -1, true);
+ cfg80211_stop_ap(rdev, dev, -1, true);
break;
case NL80211_IFTYPE_OCB:
- __cfg80211_leave_ocb(rdev, dev);
+ cfg80211_leave_ocb(rdev, dev);
break;
case NL80211_IFTYPE_P2P_DEVICE:
case NL80211_IFTYPE_NAN:
@@ -1331,14 +1333,6 @@ void __cfg80211_leave(struct cfg80211_registered_device *rdev,
}
}
-void cfg80211_leave(struct cfg80211_registered_device *rdev,
- struct wireless_dev *wdev)
-{
- wdev_lock(wdev);
- __cfg80211_leave(rdev, wdev);
- wdev_unlock(wdev);
-}
-
void cfg80211_stop_iface(struct wiphy *wiphy, struct wireless_dev *wdev,
gfp_t gfp)
{
@@ -1363,7 +1357,6 @@ EXPORT_SYMBOL(cfg80211_stop_iface);
void cfg80211_init_wdev(struct wireless_dev *wdev)
{
- mutex_init(&wdev->mtx);
INIT_LIST_HEAD(&wdev->event_list);
spin_lock_init(&wdev->event_lock);
INIT_LIST_HEAD(&wdev->mgmt_registrations);
@@ -1528,7 +1521,6 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
case NETDEV_UP:
wiphy_lock(&rdev->wiphy);
cfg80211_update_iface_num(rdev, wdev->iftype, 1);
- wdev_lock(wdev);
switch (wdev->iftype) {
#ifdef CONFIG_CFG80211_WEXT
case NL80211_IFTYPE_ADHOC:
@@ -1558,7 +1550,6 @@ static int cfg80211_netdev_notifier_call(struct notifier_block *nb,
default:
break;
}
- wdev_unlock(wdev);
rdev->opencount++;
/*
@@ -1601,7 +1592,7 @@ static void __net_exit cfg80211_pernet_exit(struct net *net)
struct cfg80211_registered_device *rdev;
rtnl_lock();
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
if (net_eq(wiphy_net(&rdev->wiphy), net))
WARN_ON(cfg80211_switch_netns(rdev, &init_net));
}
@@ -1640,6 +1631,21 @@ void wiphy_work_cancel(struct wiphy *wiphy, struct wiphy_work *work)
}
EXPORT_SYMBOL_GPL(wiphy_work_cancel);
+void wiphy_work_flush(struct wiphy *wiphy, struct wiphy_work *work)
+{
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+ unsigned long flags;
+ bool run;
+
+ spin_lock_irqsave(&rdev->wiphy_work_lock, flags);
+ run = !work || !list_empty(&work->entry);
+ spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags);
+
+ if (run)
+ cfg80211_process_wiphy_works(rdev, work);
+}
+EXPORT_SYMBOL_GPL(wiphy_work_flush);
+
void wiphy_delayed_work_timer(struct timer_list *t)
{
struct wiphy_delayed_work *dwork = from_timer(dwork, t, timer);
@@ -1672,6 +1678,16 @@ void wiphy_delayed_work_cancel(struct wiphy *wiphy,
}
EXPORT_SYMBOL_GPL(wiphy_delayed_work_cancel);
+void wiphy_delayed_work_flush(struct wiphy *wiphy,
+ struct wiphy_delayed_work *dwork)
+{
+ lockdep_assert_held(&wiphy->mtx);
+
+ del_timer_sync(&dwork->timer);
+ wiphy_work_flush(wiphy, &dwork->work);
+}
+EXPORT_SYMBOL_GPL(wiphy_delayed_work_flush);
+
static int __init cfg80211_init(void)
{
int err;
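wiphy_work_flush() and wiphy_delayed_work_flush() above run any pending work synchronously rather than cancelling it; a hedged driver-side sketch (my_dwork and my_teardown are hypothetical) of flushing before teardown, with the wiphy mutex held as the new lockdep assertion requires:

#include <net/cfg80211.h>

static struct wiphy_delayed_work my_dwork;	/* hypothetical driver state */

static void my_teardown(struct wiphy *wiphy)
{
	wiphy_lock(wiphy);
	/* Runs my_dwork now if it was queued, instead of cancelling it. */
	wiphy_delayed_work_flush(wiphy, &my_dwork);
	wiphy_unlock(wiphy);
}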
diff --git a/net/wireless/core.h b/net/wireless/core.h
index ba9c7170afa4..4c692c7faf30 100644
--- a/net/wireless/core.h
+++ b/net/wireless/core.h
@@ -160,6 +160,16 @@ extern struct workqueue_struct *cfg80211_wq;
extern struct list_head cfg80211_rdev_list;
extern int cfg80211_rdev_list_generation;
+/* This is constructed like this so it can be used in if/else */
+static inline int for_each_rdev_check_rtnl(void)
+{
+ ASSERT_RTNL();
+ return 0;
+}
+#define for_each_rdev(rdev) \
+ if (for_each_rdev_check_rtnl()) {} else \
+ list_for_each_entry(rdev, &cfg80211_rdev_list, list)
+
struct cfg80211_internal_bss {
struct list_head list;
struct list_head hidden_list;
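for_each_rdev() above wraps list_for_each_entry() in an if/else so the RTNL assertion runs exactly once while the macro still behaves like a single statement; a sketch of a caller inside cfg80211 (which includes core.h):

#include <linux/rtnetlink.h>

static void example_walk_rdevs(void)
{
	struct cfg80211_registered_device *rdev;

	rtnl_lock();
	for_each_rdev(rdev) {
		/* ... inspect or update rdev ... */
	}
	rtnl_unlock();
}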
@@ -225,22 +235,6 @@ void cfg80211_init_wdev(struct wireless_dev *wdev);
void cfg80211_register_wdev(struct cfg80211_registered_device *rdev,
struct wireless_dev *wdev);
-static inline void wdev_lock(struct wireless_dev *wdev)
- __acquires(wdev)
-{
- mutex_lock(&wdev->mtx);
- __acquire(wdev->mtx);
-}
-
-static inline void wdev_unlock(struct wireless_dev *wdev)
- __releases(wdev)
-{
- __release(wdev->mtx);
- mutex_unlock(&wdev->mtx);
-}
-
-#define ASSERT_WDEV_LOCK(wdev) lockdep_assert_held(&(wdev)->mtx)
-
static inline bool cfg80211_has_monitors_only(struct cfg80211_registered_device *rdev)
{
lockdep_assert_held(&rdev->wiphy.mtx);
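The removal of wdev_lock()/wdev_unlock() here is the theme of most of the cfg80211 changes in this merge: per-wdev state is now protected by the wiphy mutex alone, so the former ASSERT_WDEV_LOCK() sites become lockdep_assert_wiphy(). A hedged sketch of what callers now look like:

#include <net/cfg80211.h>

static void example_touch_wdev(struct wireless_dev *wdev)
{
	wiphy_lock(wdev->wiphy);
	lockdep_assert_wiphy(wdev->wiphy);	/* was ASSERT_WDEV_LOCK(wdev) */
	/* ... modify wdev state formerly guarded by wdev->mtx ... */
	wiphy_unlock(wdev->wiphy);
}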
@@ -276,7 +270,7 @@ struct cfg80211_event {
struct ieee80211_channel *channel;
} ij;
struct {
- u8 bssid[ETH_ALEN];
+ u8 peer_addr[ETH_ALEN];
const u8 *td_bitmap;
u8 td_bitmap_len;
} pa;
@@ -329,8 +323,6 @@ int __cfg80211_join_ibss(struct cfg80211_registered_device *rdev,
struct cfg80211_ibss_params *params,
struct cfg80211_cached_keys *connkeys);
void cfg80211_clear_ibss(struct net_device *dev, bool nowext);
-int __cfg80211_leave_ibss(struct cfg80211_registered_device *rdev,
- struct net_device *dev, bool nowext);
int cfg80211_leave_ibss(struct cfg80211_registered_device *rdev,
struct net_device *dev, bool nowext);
void __cfg80211_ibss_joined(struct net_device *dev, const u8 *bssid,
@@ -345,8 +337,6 @@ int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev,
struct net_device *dev,
struct mesh_setup *setup,
const struct mesh_config *conf);
-int __cfg80211_leave_mesh(struct cfg80211_registered_device *rdev,
- struct net_device *dev);
int cfg80211_leave_mesh(struct cfg80211_registered_device *rdev,
struct net_device *dev);
int cfg80211_set_mesh_channel(struct cfg80211_registered_device *rdev,
@@ -354,21 +344,13 @@ int cfg80211_set_mesh_channel(struct cfg80211_registered_device *rdev,
struct cfg80211_chan_def *chandef);
/* OCB */
-int __cfg80211_join_ocb(struct cfg80211_registered_device *rdev,
- struct net_device *dev,
- struct ocb_setup *setup);
int cfg80211_join_ocb(struct cfg80211_registered_device *rdev,
struct net_device *dev,
struct ocb_setup *setup);
-int __cfg80211_leave_ocb(struct cfg80211_registered_device *rdev,
- struct net_device *dev);
int cfg80211_leave_ocb(struct cfg80211_registered_device *rdev,
struct net_device *dev);
/* AP */
-int __cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
- struct net_device *dev, int link,
- bool notify);
int cfg80211_stop_ap(struct cfg80211_registered_device *rdev,
struct net_device *dev, int link,
bool notify);
@@ -422,7 +404,7 @@ int cfg80211_disconnect(struct cfg80211_registered_device *rdev,
bool wextev);
void __cfg80211_roamed(struct wireless_dev *wdev,
struct cfg80211_roam_info *info);
-void __cfg80211_port_authorized(struct wireless_dev *wdev, const u8 *bssid,
+void __cfg80211_port_authorized(struct wireless_dev *wdev, const u8 *peer_addr,
const u8 *td_bitmap, u8 td_bitmap_len);
int cfg80211_mgd_wext_connect(struct cfg80211_registered_device *rdev,
struct wireless_dev *wdev);
@@ -464,7 +446,8 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
struct net_device *dev, enum nl80211_iftype ntype,
struct vif_params *params);
void cfg80211_process_rdev_events(struct cfg80211_registered_device *rdev);
-void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev);
+void cfg80211_process_wiphy_works(struct cfg80211_registered_device *rdev,
+ struct wiphy_work *end);
void cfg80211_process_wdev_events(struct wireless_dev *wdev);
bool cfg80211_does_bw_fit_range(const struct ieee80211_freq_range *freq_range,
@@ -474,29 +457,12 @@ int cfg80211_scan(struct cfg80211_registered_device *rdev);
extern struct work_struct cfg80211_disconnect_work;
-/**
- * cfg80211_chandef_dfs_usable - checks if chandef is DFS usable
- * @wiphy: the wiphy to validate against
- * @chandef: the channel definition to check
- *
- * Checks if chandef is usable and we can/need start CAC on such channel.
- *
- * Return: true if all channels available and at least
- * one channel requires CAC (NL80211_DFS_USABLE)
- */
-bool cfg80211_chandef_dfs_usable(struct wiphy *wiphy,
- const struct cfg80211_chan_def *chandef);
-
void cfg80211_set_dfs_state(struct wiphy *wiphy,
const struct cfg80211_chan_def *chandef,
enum nl80211_dfs_state dfs_state);
void cfg80211_dfs_channels_update_work(struct work_struct *work);
-unsigned int
-cfg80211_chandef_dfs_cac_time(struct wiphy *wiphy,
- const struct cfg80211_chan_def *chandef);
-
void cfg80211_sched_dfs_chan_update(struct cfg80211_registered_device *rdev);
int
@@ -545,8 +511,6 @@ int cfg80211_validate_beacon_int(struct cfg80211_registered_device *rdev,
void cfg80211_update_iface_num(struct cfg80211_registered_device *rdev,
enum nl80211_iftype iftype, int num);
-void __cfg80211_leave(struct cfg80211_registered_device *rdev,
- struct wireless_dev *wdev);
void cfg80211_leave(struct cfg80211_registered_device *rdev,
struct wireless_dev *wdev);
diff --git a/net/wireless/ibss.c b/net/wireless/ibss.c
index e6fdb0b8187d..9f02ee5f08be 100644
--- a/net/wireless/ibss.c
+++ b/net/wireless/ibss.c
@@ -3,7 +3,7 @@
* Some IBSS support code for cfg80211.
*
* Copyright 2009 Johannes Berg <johannes@sipsolutions.net>
- * Copyright (C) 2020-2022 Intel Corporation
+ * Copyright (C) 2020-2023 Intel Corporation
*/
#include <linux/etherdevice.h>
@@ -93,7 +93,6 @@ int __cfg80211_join_ibss(struct cfg80211_registered_device *rdev,
int err;
lockdep_assert_held(&rdev->wiphy.mtx);
- ASSERT_WDEV_LOCK(wdev);
if (wdev->u.ibss.ssid_len)
return -EALREADY;
@@ -151,13 +150,13 @@ int __cfg80211_join_ibss(struct cfg80211_registered_device *rdev,
return 0;
}
-static void __cfg80211_clear_ibss(struct net_device *dev, bool nowext)
+void cfg80211_clear_ibss(struct net_device *dev, bool nowext)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
int i;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
kfree_sensitive(wdev->connect_keys);
wdev->connect_keys = NULL;
@@ -187,22 +186,13 @@ static void __cfg80211_clear_ibss(struct net_device *dev, bool nowext)
cfg80211_sched_dfs_chan_update(rdev);
}
-void cfg80211_clear_ibss(struct net_device *dev, bool nowext)
-{
- struct wireless_dev *wdev = dev->ieee80211_ptr;
-
- wdev_lock(wdev);
- __cfg80211_clear_ibss(dev, nowext);
- wdev_unlock(wdev);
-}
-
-int __cfg80211_leave_ibss(struct cfg80211_registered_device *rdev,
- struct net_device *dev, bool nowext)
+int cfg80211_leave_ibss(struct cfg80211_registered_device *rdev,
+ struct net_device *dev, bool nowext)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
int err;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (!wdev->u.ibss.ssid_len)
return -ENOLINK;
@@ -213,24 +203,11 @@ int __cfg80211_leave_ibss(struct cfg80211_registered_device *rdev,
return err;
wdev->conn_owner_nlportid = 0;
- __cfg80211_clear_ibss(dev, nowext);
+ cfg80211_clear_ibss(dev, nowext);
return 0;
}
-int cfg80211_leave_ibss(struct cfg80211_registered_device *rdev,
- struct net_device *dev, bool nowext)
-{
- struct wireless_dev *wdev = dev->ieee80211_ptr;
- int err;
-
- wdev_lock(wdev);
- err = __cfg80211_leave_ibss(rdev, dev, nowext);
- wdev_unlock(wdev);
-
- return err;
-}
-
#ifdef CONFIG_CFG80211_WEXT
int cfg80211_ibss_wext_join(struct cfg80211_registered_device *rdev,
struct wireless_dev *wdev)
@@ -239,7 +216,7 @@ int cfg80211_ibss_wext_join(struct cfg80211_registered_device *rdev,
enum nl80211_band band;
int i, err;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (!wdev->wext.ibss.beacon_interval)
wdev->wext.ibss.beacon_interval = 100;
@@ -336,11 +313,9 @@ int cfg80211_ibss_wext_siwfreq(struct net_device *dev,
if (wdev->wext.ibss.chandef.chan == chan)
return 0;
- wdev_lock(wdev);
err = 0;
if (wdev->u.ibss.ssid_len)
- err = __cfg80211_leave_ibss(rdev, dev, true);
- wdev_unlock(wdev);
+ err = cfg80211_leave_ibss(rdev, dev, true);
if (err)
return err;
@@ -354,11 +329,7 @@ int cfg80211_ibss_wext_siwfreq(struct net_device *dev,
wdev->wext.ibss.channel_fixed = false;
}
- wdev_lock(wdev);
- err = cfg80211_ibss_wext_join(rdev, wdev);
- wdev_unlock(wdev);
-
- return err;
+ return cfg80211_ibss_wext_join(rdev, wdev);
}
int cfg80211_ibss_wext_giwfreq(struct net_device *dev,
@@ -372,12 +343,10 @@ int cfg80211_ibss_wext_giwfreq(struct net_device *dev,
if (WARN_ON(wdev->iftype != NL80211_IFTYPE_ADHOC))
return -EINVAL;
- wdev_lock(wdev);
if (wdev->u.ibss.current_bss)
chan = wdev->u.ibss.current_bss->pub.channel;
else if (wdev->wext.ibss.chandef.chan)
chan = wdev->wext.ibss.chandef.chan;
- wdev_unlock(wdev);
if (chan) {
freq->m = chan->center_freq;
@@ -405,11 +374,9 @@ int cfg80211_ibss_wext_siwessid(struct net_device *dev,
if (!rdev->ops->join_ibss)
return -EOPNOTSUPP;
- wdev_lock(wdev);
err = 0;
if (wdev->u.ibss.ssid_len)
- err = __cfg80211_leave_ibss(rdev, dev, true);
- wdev_unlock(wdev);
+ err = cfg80211_leave_ibss(rdev, dev, true);
if (err)
return err;
@@ -422,11 +389,7 @@ int cfg80211_ibss_wext_siwessid(struct net_device *dev,
wdev->wext.ibss.ssid = wdev->u.ibss.ssid;
wdev->wext.ibss.ssid_len = len;
- wdev_lock(wdev);
- err = cfg80211_ibss_wext_join(rdev, wdev);
- wdev_unlock(wdev);
-
- return err;
+ return cfg80211_ibss_wext_join(rdev, wdev);
}
int cfg80211_ibss_wext_giwessid(struct net_device *dev,
@@ -441,7 +404,6 @@ int cfg80211_ibss_wext_giwessid(struct net_device *dev,
data->flags = 0;
- wdev_lock(wdev);
if (wdev->u.ibss.ssid_len) {
data->flags = 1;
data->length = wdev->u.ibss.ssid_len;
@@ -451,7 +413,6 @@ int cfg80211_ibss_wext_giwessid(struct net_device *dev,
data->length = wdev->wext.ibss.ssid_len;
memcpy(ssid, wdev->wext.ibss.ssid, data->length);
}
- wdev_unlock(wdev);
return 0;
}
@@ -491,11 +452,9 @@ int cfg80211_ibss_wext_siwap(struct net_device *dev,
ether_addr_equal(bssid, wdev->wext.ibss.bssid))
return 0;
- wdev_lock(wdev);
err = 0;
if (wdev->u.ibss.ssid_len)
- err = __cfg80211_leave_ibss(rdev, dev, true);
- wdev_unlock(wdev);
+ err = cfg80211_leave_ibss(rdev, dev, true);
if (err)
return err;
@@ -506,11 +465,7 @@ int cfg80211_ibss_wext_siwap(struct net_device *dev,
} else
wdev->wext.ibss.bssid = NULL;
- wdev_lock(wdev);
- err = cfg80211_ibss_wext_join(rdev, wdev);
- wdev_unlock(wdev);
-
- return err;
+ return cfg80211_ibss_wext_join(rdev, wdev);
}
int cfg80211_ibss_wext_giwap(struct net_device *dev,
@@ -525,7 +480,6 @@ int cfg80211_ibss_wext_giwap(struct net_device *dev,
ap_addr->sa_family = ARPHRD_ETHER;
- wdev_lock(wdev);
if (wdev->u.ibss.current_bss)
memcpy(ap_addr->sa_data, wdev->u.ibss.current_bss->pub.bssid,
ETH_ALEN);
@@ -534,8 +488,6 @@ int cfg80211_ibss_wext_giwap(struct net_device *dev,
else
eth_zero_addr(ap_addr->sa_data);
- wdev_unlock(wdev);
-
return 0;
}
#endif
diff --git a/net/wireless/lib80211_crypt_tkip.c b/net/wireless/lib80211_crypt_tkip.c
index 1b4d6c87a5c5..5c8cdf7681e3 100644
--- a/net/wireless/lib80211_crypt_tkip.c
+++ b/net/wireless/lib80211_crypt_tkip.c
@@ -662,12 +662,12 @@ static int lib80211_tkip_get_key(void *key, int len, u8 * seq, void *priv)
memcpy(key, tkey->key, TKIP_KEY_LEN);
if (seq) {
- /* Return the sequence number of the last transmitted frame. */
- u16 iv16 = tkey->tx_iv16;
- u32 iv32 = tkey->tx_iv32;
- if (iv16 == 0)
- iv32--;
- iv16--;
+ /*
+ * Not clear whether this should return the value as is or,
+ * as the code previously seemed to partially intend, subtract
+ * one from it. It has worked this way for a long time, so
+ * leave it as is.
+ */
seq[0] = tkey->tx_iv16;
seq[1] = tkey->tx_iv16 >> 8;
seq[2] = tkey->tx_iv32;
diff --git a/net/wireless/mesh.c b/net/wireless/mesh.c
index 59a3c5c092b1..83306979fbe2 100644
--- a/net/wireless/mesh.c
+++ b/net/wireless/mesh.c
@@ -1,7 +1,7 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Portions
- * Copyright (C) 2022 Intel Corporation
+ * Copyright (C) 2022-2023 Intel Corporation
*/
#include <linux/ieee80211.h>
#include <linux/export.h>
@@ -109,7 +109,7 @@ int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev,
BUILD_BUG_ON(IEEE80211_MAX_SSID_LEN != IEEE80211_MAX_MESH_ID_LEN);
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_MESH_POINT)
return -EOPNOTSUPP;
@@ -172,7 +172,6 @@ int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev,
* basic rates
*/
if (!setup->basic_rates) {
- enum nl80211_bss_scan_width scan_width;
struct ieee80211_supported_band *sband =
rdev->wiphy.bands[setup->chandef.chan->band];
@@ -193,9 +192,7 @@ int __cfg80211_join_mesh(struct cfg80211_registered_device *rdev,
}
}
} else {
- scan_width = cfg80211_chandef_to_scan_width(&setup->chandef);
- setup->basic_rates = ieee80211_mandatory_rates(sband,
- scan_width);
+ setup->basic_rates = ieee80211_mandatory_rates(sband);
}
}
@@ -257,13 +254,13 @@ int cfg80211_set_mesh_channel(struct cfg80211_registered_device *rdev,
return 0;
}
-int __cfg80211_leave_mesh(struct cfg80211_registered_device *rdev,
- struct net_device *dev)
+int cfg80211_leave_mesh(struct cfg80211_registered_device *rdev,
+ struct net_device *dev)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
int err;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_MESH_POINT)
return -EOPNOTSUPP;
@@ -287,16 +284,3 @@ int __cfg80211_leave_mesh(struct cfg80211_registered_device *rdev,
return err;
}
-
-int cfg80211_leave_mesh(struct cfg80211_registered_device *rdev,
- struct net_device *dev)
-{
- struct wireless_dev *wdev = dev->ieee80211_ptr;
- int err;
-
- wdev_lock(wdev);
- err = __cfg80211_leave_mesh(rdev, dev);
- wdev_unlock(wdev);
-
- return err;
-}
diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
index 55a1d3633853..bad9e4fd842f 100644
--- a/net/wireless/mlme.c
+++ b/net/wireless/mlme.c
@@ -4,7 +4,7 @@
*
* Copyright (c) 2009, Jouni Malinen <j@w1.fi>
* Copyright (c) 2015 Intel Deutschland GmbH
- * Copyright (C) 2019-2020, 2022 Intel Corporation
+ * Copyright (C) 2019-2020, 2022-2023 Intel Corporation
*/
#include <linux/kernel.h>
@@ -22,7 +22,7 @@
void cfg80211_rx_assoc_resp(struct net_device *dev,
- struct cfg80211_rx_assoc_resp *data)
+ struct cfg80211_rx_assoc_resp_data *data)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
struct wiphy *wiphy = wdev->wiphy;
@@ -151,7 +151,7 @@ void cfg80211_rx_mlme_mgmt(struct net_device *dev, const u8 *buf, size_t len)
struct wireless_dev *wdev = dev->ieee80211_ptr;
struct ieee80211_mgmt *mgmt = (void *)buf;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
trace_cfg80211_rx_mlme_mgmt(dev, buf, len);
@@ -216,7 +216,7 @@ void cfg80211_tx_mlme_mgmt(struct net_device *dev, const u8 *buf, size_t len,
struct wireless_dev *wdev = dev->ieee80211_ptr;
struct ieee80211_mgmt *mgmt = (void *)buf;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
trace_cfg80211_tx_mlme_mgmt(dev, buf, len, reconnect);
@@ -264,7 +264,7 @@ int cfg80211_mlme_auth(struct cfg80211_registered_device *rdev,
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (!req->bss)
return -ENOENT;
@@ -333,7 +333,7 @@ int cfg80211_mlme_assoc(struct cfg80211_registered_device *rdev,
struct wireless_dev *wdev = dev->ieee80211_ptr;
int err, i, j;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
for (i = 1; i < ARRAY_SIZE(req->links); i++) {
if (!req->links[i].bss)
@@ -395,7 +395,7 @@ int cfg80211_mlme_deauth(struct cfg80211_registered_device *rdev,
.local_state_change = local_state_change,
};
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (local_state_change &&
(!wdev->connected ||
@@ -425,7 +425,7 @@ int cfg80211_mlme_disassoc(struct cfg80211_registered_device *rdev,
};
int err;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (!wdev->connected)
return -ENOTCONN;
@@ -448,7 +448,7 @@ void cfg80211_mlme_down(struct cfg80211_registered_device *rdev,
struct wireless_dev *wdev = dev->ieee80211_ptr;
u8 bssid[ETH_ALEN];
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (!rdev->ops->deauth)
return;
@@ -728,6 +728,8 @@ int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev,
const struct ieee80211_mgmt *mgmt;
u16 stype;
+ lockdep_assert_wiphy(&rdev->wiphy);
+
if (!wdev->wiphy->mgmt_stypes)
return -EOPNOTSUPP;
@@ -750,8 +752,6 @@ int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev,
mgmt->u.action.category != WLAN_CATEGORY_PUBLIC) {
int err = 0;
- wdev_lock(wdev);
-
switch (wdev->iftype) {
case NL80211_IFTYPE_ADHOC:
/*
@@ -816,7 +816,6 @@ int cfg80211_mlme_mgmt_tx(struct cfg80211_registered_device *rdev,
err = -EOPNOTSUPP;
break;
}
- wdev_unlock(wdev);
if (err)
return err;
diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
index 931a03f4549c..569234bc2be6 100644
--- a/net/wireless/nl80211.c
+++ b/net/wireless/nl80211.c
@@ -106,7 +106,7 @@ __cfg80211_wdev_from_attrs(struct cfg80211_registered_device *rdev,
ASSERT_RTNL();
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
struct wireless_dev *wdev;
if (wiphy_net(&rdev->wiphy) != netns)
@@ -463,7 +463,7 @@ nl80211_sta_wme_policy[NL80211_STA_WME_MAX + 1] = {
[NL80211_STA_WME_MAX_SP] = { .type = NLA_U8 },
};
-static struct netlink_range_validation nl80211_punct_bitmap_range = {
+static const struct netlink_range_validation nl80211_punct_bitmap_range = {
.min = 0,
.max = 0xffff,
};
@@ -1115,6 +1115,10 @@ static int nl80211_msg_put_channel(struct sk_buff *msg, struct wiphy *wiphy,
if (nla_put_u32(msg, NL80211_FREQUENCY_ATTR_OFFSET, chan->freq_offset))
goto nla_put_failure;
+ if ((chan->flags & IEEE80211_CHAN_PSD) &&
+ nla_put_s8(msg, NL80211_FREQUENCY_ATTR_PSD, chan->psd))
+ goto nla_put_failure;
+
if ((chan->flags & IEEE80211_CHAN_DISABLED) &&
nla_put_flag(msg, NL80211_FREQUENCY_ATTR_DISABLED))
goto nla_put_failure;
@@ -1544,7 +1548,7 @@ nl80211_parse_connkeys(struct cfg80211_registered_device *rdev,
static int nl80211_key_allowed(struct wireless_dev *wdev)
{
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
switch (wdev->iftype) {
case NL80211_IFTYPE_AP:
@@ -1913,20 +1917,20 @@ static int nl80211_send_band_rateinfo(struct sk_buff *msg,
struct nlattr *nl_iftype_data =
nla_nest_start_noflag(msg,
NL80211_BAND_ATTR_IFTYPE_DATA);
+ const struct ieee80211_sband_iftype_data *iftd;
int err;
if (!nl_iftype_data)
return -ENOBUFS;
- for (i = 0; i < sband->n_iftype_data; i++) {
+ for_each_sband_iftype_data(sband, i, iftd) {
struct nlattr *iftdata;
iftdata = nla_nest_start_noflag(msg, i + 1);
if (!iftdata)
return -ENOBUFS;
- err = nl80211_send_iftype_data(msg, sband,
- &sband->iftype_data[i]);
+ err = nl80211_send_iftype_data(msg, sband, iftd);
if (err)
return err;
@@ -3075,7 +3079,7 @@ static int nl80211_dump_wiphy(struct sk_buff *skb, struct netlink_callback *cb)
cb->args[0] = (long)state;
}
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
if (!net_eq(wiphy_net(&rdev->wiphy), sock_net(skb->sk)))
continue;
if (++idx <= state->start)
@@ -3423,13 +3427,8 @@ static int nl80211_set_channel(struct sk_buff *skb, struct genl_info *info)
struct cfg80211_registered_device *rdev = info->user_ptr[0];
int link_id = nl80211_link_id_or_invalid(info->attrs);
struct net_device *netdev = info->user_ptr[1];
- int ret;
-
- wdev_lock(netdev->ieee80211_ptr);
- ret = __nl80211_set_channel(rdev, netdev, info, link_id);
- wdev_unlock(netdev->ieee80211_ptr);
- return ret;
+ return __nl80211_set_channel(rdev, netdev, info, link_id);
}
static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info)
@@ -3536,7 +3535,6 @@ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info)
txq_params.link_id =
nl80211_link_id_or_invalid(info->attrs);
- wdev_lock(netdev->ieee80211_ptr);
if (txq_params.link_id >= 0 &&
!(netdev->ieee80211_ptr->valid_links &
BIT(txq_params.link_id)))
@@ -3547,7 +3545,6 @@ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info)
else
result = rdev_set_txq_params(rdev, netdev,
&txq_params);
- wdev_unlock(netdev->ieee80211_ptr);
if (result)
goto out;
}
@@ -3557,12 +3554,10 @@ static int nl80211_set_wiphy(struct sk_buff *skb, struct genl_info *info)
int link_id = nl80211_link_id_or_invalid(info->attrs);
if (wdev) {
- wdev_lock(wdev);
result = __nl80211_set_channel(
rdev,
nl80211_can_set_dev_channel(wdev) ? netdev : NULL,
info, link_id);
- wdev_unlock(wdev);
} else {
result = __nl80211_set_channel(rdev, netdev, info, link_id);
}
@@ -3870,33 +3865,31 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag
goto nla_put_failure;
}
- wdev_lock(wdev);
switch (wdev->iftype) {
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_P2P_GO:
if (wdev->u.ap.ssid_len &&
nla_put(msg, NL80211_ATTR_SSID, wdev->u.ap.ssid_len,
wdev->u.ap.ssid))
- goto nla_put_failure_locked;
+ goto nla_put_failure;
break;
case NL80211_IFTYPE_STATION:
case NL80211_IFTYPE_P2P_CLIENT:
if (wdev->u.client.ssid_len &&
nla_put(msg, NL80211_ATTR_SSID, wdev->u.client.ssid_len,
wdev->u.client.ssid))
- goto nla_put_failure_locked;
+ goto nla_put_failure;
break;
case NL80211_IFTYPE_ADHOC:
if (wdev->u.ibss.ssid_len &&
nla_put(msg, NL80211_ATTR_SSID, wdev->u.ibss.ssid_len,
wdev->u.ibss.ssid))
- goto nla_put_failure_locked;
+ goto nla_put_failure;
break;
default:
/* nothing */
break;
}
- wdev_unlock(wdev);
if (rdev->ops->get_txq_stats) {
struct cfg80211_txq_stats txqstats = {};
@@ -3943,8 +3936,6 @@ static int nl80211_send_iface(struct sk_buff *msg, u32 portid, u32 seq, int flag
genlmsg_end(msg, hdr);
return 0;
- nla_put_failure_locked:
- wdev_unlock(wdev);
nla_put_failure:
genlmsg_cancel(msg, hdr);
return -EMSGSIZE;
@@ -3985,7 +3976,7 @@ static int nl80211_dump_interface(struct sk_buff *skb, struct netlink_callback *
filter_wiphy = cb->args[2] - 1;
}
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
if (!net_eq(wiphy_net(&rdev->wiphy), sock_net(skb->sk)))
continue;
if (wp_idx < wp_start) {
@@ -4191,7 +4182,6 @@ static int nl80211_set_interface(struct sk_buff *skb, struct genl_info *info)
if (netif_running(dev))
return -EBUSY;
- wdev_lock(wdev);
BUILD_BUG_ON(IEEE80211_MAX_SSID_LEN !=
IEEE80211_MAX_MESH_ID_LEN);
wdev->u.mesh.id_up_len =
@@ -4199,7 +4189,6 @@ static int nl80211_set_interface(struct sk_buff *skb, struct genl_info *info)
memcpy(wdev->u.mesh.id,
nla_data(info->attrs[NL80211_ATTR_MESH_ID]),
wdev->u.mesh.id_up_len);
- wdev_unlock(wdev);
}
if (info->attrs[NL80211_ATTR_4ADDR]) {
@@ -4300,7 +4289,6 @@ static int _nl80211_new_interface(struct sk_buff *skb, struct genl_info *info)
case NL80211_IFTYPE_MESH_POINT:
if (!info->attrs[NL80211_ATTR_MESH_ID])
break;
- wdev_lock(wdev);
BUILD_BUG_ON(IEEE80211_MAX_SSID_LEN !=
IEEE80211_MAX_MESH_ID_LEN);
wdev->u.mesh.id_up_len =
@@ -4308,7 +4296,6 @@ static int _nl80211_new_interface(struct sk_buff *skb, struct genl_info *info)
memcpy(wdev->u.mesh.id,
nla_data(info->attrs[NL80211_ATTR_MESH_ID]),
wdev->u.mesh.id_up_len);
- wdev_unlock(wdev);
break;
case NL80211_IFTYPE_NAN:
case NL80211_IFTYPE_P2P_DEVICE:
@@ -4599,79 +4586,67 @@ static int nl80211_set_key(struct sk_buff *skb, struct genl_info *info)
!(key.p.mode == NL80211_KEY_SET_TX))
return -EINVAL;
- wdev_lock(wdev);
-
if (key.def) {
- if (!rdev->ops->set_default_key) {
- err = -EOPNOTSUPP;
- goto out;
- }
+ if (!rdev->ops->set_default_key)
+ return -EOPNOTSUPP;
err = nl80211_key_allowed(wdev);
if (err)
- goto out;
+ return err;
err = nl80211_validate_key_link_id(info, wdev, link_id, false);
if (err)
- goto out;
+ return err;
err = rdev_set_default_key(rdev, dev, link_id, key.idx,
key.def_uni, key.def_multi);
if (err)
- goto out;
+ return err;
#ifdef CONFIG_CFG80211_WEXT
wdev->wext.default_key = key.idx;
#endif
+ return 0;
} else if (key.defmgmt) {
- if (key.def_uni || !key.def_multi) {
- err = -EINVAL;
- goto out;
- }
+ if (key.def_uni || !key.def_multi)
+ return -EINVAL;
- if (!rdev->ops->set_default_mgmt_key) {
- err = -EOPNOTSUPP;
- goto out;
- }
+ if (!rdev->ops->set_default_mgmt_key)
+ return -EOPNOTSUPP;
err = nl80211_key_allowed(wdev);
if (err)
- goto out;
+ return err;
err = nl80211_validate_key_link_id(info, wdev, link_id, false);
if (err)
- goto out;
+ return err;
err = rdev_set_default_mgmt_key(rdev, dev, link_id, key.idx);
if (err)
- goto out;
+ return err;
#ifdef CONFIG_CFG80211_WEXT
wdev->wext.default_mgmt_key = key.idx;
#endif
+ return 0;
} else if (key.defbeacon) {
- if (key.def_uni || !key.def_multi) {
- err = -EINVAL;
- goto out;
- }
+ if (key.def_uni || !key.def_multi)
+ return -EINVAL;
- if (!rdev->ops->set_default_beacon_key) {
- err = -EOPNOTSUPP;
- goto out;
- }
+ if (!rdev->ops->set_default_beacon_key)
+ return -EOPNOTSUPP;
err = nl80211_key_allowed(wdev);
if (err)
- goto out;
+ return err;
err = nl80211_validate_key_link_id(info, wdev, link_id, false);
if (err)
- goto out;
+ return err;
- err = rdev_set_default_beacon_key(rdev, dev, link_id, key.idx);
- if (err)
- goto out;
+ return rdev_set_default_beacon_key(rdev, dev, link_id, key.idx);
} else if (key.p.mode == NL80211_KEY_SET_TX &&
wiphy_ext_feature_isset(&rdev->wiphy,
NL80211_EXT_FEATURE_EXT_KEY_ID)) {
@@ -4680,25 +4655,19 @@ static int nl80211_set_key(struct sk_buff *skb, struct genl_info *info)
if (info->attrs[NL80211_ATTR_MAC])
mac_addr = nla_data(info->attrs[NL80211_ATTR_MAC]);
- if (!mac_addr || key.idx < 0 || key.idx > 1) {
- err = -EINVAL;
- goto out;
- }
+ if (!mac_addr || key.idx < 0 || key.idx > 1)
+ return -EINVAL;
err = nl80211_validate_key_link_id(info, wdev, link_id, true);
if (err)
- goto out;
+ return err;
- err = rdev_add_key(rdev, dev, link_id, key.idx,
- NL80211_KEYTYPE_PAIRWISE,
- mac_addr, &key.p);
- } else {
- err = -EINVAL;
+ return rdev_add_key(rdev, dev, link_id, key.idx,
+ NL80211_KEYTYPE_PAIRWISE,
+ mac_addr, &key.p);
}
- out:
- wdev_unlock(wdev);
- return err;
+ return -EINVAL;
}
static int nl80211_new_key(struct sk_buff *skb, struct genl_info *info)
@@ -4751,7 +4720,6 @@ static int nl80211_new_key(struct sk_buff *skb, struct genl_info *info)
return -EINVAL;
}
- wdev_lock(wdev);
err = nl80211_key_allowed(wdev);
if (err)
GENL_SET_ERR_MSG(info, "key not allowed");
@@ -4767,7 +4735,6 @@ static int nl80211_new_key(struct sk_buff *skb, struct genl_info *info)
if (err)
GENL_SET_ERR_MSG(info, "key addition failed");
}
- wdev_unlock(wdev);
return err;
}
@@ -4808,7 +4775,6 @@ static int nl80211_del_key(struct sk_buff *skb, struct genl_info *info)
if (!rdev->ops->del_key)
return -EOPNOTSUPP;
- wdev_lock(wdev);
err = nl80211_key_allowed(wdev);
if (key.type == NL80211_KEYTYPE_GROUP && mac_addr &&
@@ -4832,7 +4798,6 @@ static int nl80211_del_key(struct sk_buff *skb, struct genl_info *info)
wdev->wext.default_mgmt_key = -1;
}
#endif
- wdev_unlock(wdev);
return err;
}
@@ -5671,11 +5636,10 @@ static int nl80211_parse_he_obss_pd(struct nlattr *attrs,
static int nl80211_parse_fils_discovery(struct cfg80211_registered_device *rdev,
struct nlattr *attrs,
- struct cfg80211_ap_settings *params)
+ struct cfg80211_fils_discovery *fd)
{
struct nlattr *tb[NL80211_FILS_DISCOVERY_ATTR_MAX + 1];
int ret;
- struct cfg80211_fils_discovery *fd = &params->fils_discovery;
if (!wiphy_ext_feature_isset(&rdev->wiphy,
NL80211_EXT_FEATURE_FILS_DISCOVERY))
@@ -5686,6 +5650,13 @@ static int nl80211_parse_fils_discovery(struct cfg80211_registered_device *rdev,
if (ret)
return ret;
+ if (!tb[NL80211_FILS_DISCOVERY_ATTR_INT_MIN] &&
+ !tb[NL80211_FILS_DISCOVERY_ATTR_INT_MAX] &&
+ !tb[NL80211_FILS_DISCOVERY_ATTR_TMPL]) {
+ fd->update = true;
+ return 0;
+ }
+
if (!tb[NL80211_FILS_DISCOVERY_ATTR_INT_MIN] ||
!tb[NL80211_FILS_DISCOVERY_ATTR_INT_MAX] ||
!tb[NL80211_FILS_DISCOVERY_ATTR_TMPL])
@@ -5695,19 +5666,17 @@ static int nl80211_parse_fils_discovery(struct cfg80211_registered_device *rdev,
fd->tmpl = nla_data(tb[NL80211_FILS_DISCOVERY_ATTR_TMPL]);
fd->min_interval = nla_get_u32(tb[NL80211_FILS_DISCOVERY_ATTR_INT_MIN]);
fd->max_interval = nla_get_u32(tb[NL80211_FILS_DISCOVERY_ATTR_INT_MAX]);
-
+ fd->update = true;
return 0;
}
static int
nl80211_parse_unsol_bcast_probe_resp(struct cfg80211_registered_device *rdev,
struct nlattr *attrs,
- struct cfg80211_ap_settings *params)
+ struct cfg80211_unsol_bcast_probe_resp *presp)
{
struct nlattr *tb[NL80211_UNSOL_BCAST_PROBE_RESP_ATTR_MAX + 1];
int ret;
- struct cfg80211_unsol_bcast_probe_resp *presp =
- &params->unsol_bcast_probe_resp;
if (!wiphy_ext_feature_isset(&rdev->wiphy,
NL80211_EXT_FEATURE_UNSOL_BCAST_PROBE_RESP))
@@ -5718,6 +5687,12 @@ nl80211_parse_unsol_bcast_probe_resp(struct cfg80211_registered_device *rdev,
if (ret)
return ret;
+ if (!tb[NL80211_UNSOL_BCAST_PROBE_RESP_ATTR_INT] &&
+ !tb[NL80211_UNSOL_BCAST_PROBE_RESP_ATTR_TMPL]) {
+ presp->update = true;
+ return 0;
+ }
+
if (!tb[NL80211_UNSOL_BCAST_PROBE_RESP_ATTR_INT] ||
!tb[NL80211_UNSOL_BCAST_PROBE_RESP_ATTR_TMPL])
return -EINVAL;
@@ -5725,6 +5700,7 @@ nl80211_parse_unsol_bcast_probe_resp(struct cfg80211_registered_device *rdev,
presp->tmpl = nla_data(tb[NL80211_UNSOL_BCAST_PROBE_RESP_ATTR_TMPL]);
presp->tmpl_len = nla_len(tb[NL80211_UNSOL_BCAST_PROBE_RESP_ATTR_TMPL]);
presp->interval = nla_get_u32(tb[NL80211_UNSOL_BCAST_PROBE_RESP_ATTR_INT]);
+ presp->update = true;
return 0;
}
@@ -6087,20 +6063,18 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
goto out;
}
- wdev_lock(wdev);
-
if (info->attrs[NL80211_ATTR_TX_RATES]) {
err = nl80211_parse_tx_bitrate_mask(info, info->attrs,
NL80211_ATTR_TX_RATES,
&params->beacon_rate,
dev, false, link_id);
if (err)
- goto out_unlock;
+ goto out;
err = validate_beacon_tx_rate(rdev, params->chandef.chan->band,
&params->beacon_rate);
if (err)
- goto out_unlock;
+ goto out;
}
if (info->attrs[NL80211_ATTR_SMPS_MODE]) {
@@ -6113,19 +6087,19 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
if (!(rdev->wiphy.features &
NL80211_FEATURE_STATIC_SMPS)) {
err = -EINVAL;
- goto out_unlock;
+ goto out;
}
break;
case NL80211_SMPS_DYNAMIC:
if (!(rdev->wiphy.features &
NL80211_FEATURE_DYNAMIC_SMPS)) {
err = -EINVAL;
- goto out_unlock;
+ goto out;
}
break;
default:
err = -EINVAL;
- goto out_unlock;
+ goto out;
}
} else {
params->smps_mode = NL80211_SMPS_OFF;
@@ -6134,7 +6108,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
params->pbss = nla_get_flag(info->attrs[NL80211_ATTR_PBSS]);
if (params->pbss && !rdev->wiphy.bands[NL80211_BAND_60GHZ]) {
err = -EOPNOTSUPP;
- goto out_unlock;
+ goto out;
}
if (info->attrs[NL80211_ATTR_ACL_POLICY]) {
@@ -6142,7 +6116,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
if (IS_ERR(params->acl)) {
err = PTR_ERR(params->acl);
params->acl = NULL;
- goto out_unlock;
+ goto out;
}
}
@@ -6154,23 +6128,23 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
info->attrs[NL80211_ATTR_HE_OBSS_PD],
&params->he_obss_pd);
if (err)
- goto out_unlock;
+ goto out;
}
if (info->attrs[NL80211_ATTR_FILS_DISCOVERY]) {
err = nl80211_parse_fils_discovery(rdev,
info->attrs[NL80211_ATTR_FILS_DISCOVERY],
- params);
+ &params->fils_discovery);
if (err)
- goto out_unlock;
+ goto out;
}
if (info->attrs[NL80211_ATTR_UNSOL_BCAST_PROBE_RESP]) {
err = nl80211_parse_unsol_bcast_probe_resp(
rdev, info->attrs[NL80211_ATTR_UNSOL_BCAST_PROBE_RESP],
- params);
+ &params->unsol_bcast_probe_resp);
if (err)
- goto out_unlock;
+ goto out;
}
if (info->attrs[NL80211_ATTR_MBSSID_CONFIG]) {
@@ -6181,21 +6155,21 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
params->beacon.mbssid_ies->cnt :
0);
if (err)
- goto out_unlock;
+ goto out;
}
if (!params->mbssid_config.ema && params->beacon.rnr_ies) {
err = -EINVAL;
- goto out_unlock;
+ goto out;
}
err = nl80211_calculate_ap_params(params);
if (err)
- goto out_unlock;
+ goto out;
err = nl80211_validate_ap_phy_operation(params);
if (err)
- goto out_unlock;
+ goto out;
if (info->attrs[NL80211_ATTR_AP_SETTINGS_FLAGS])
params->flags = nla_get_u32(
@@ -6207,7 +6181,7 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
info->attrs[NL80211_ATTR_SOCKET_OWNER] &&
wdev->conn_owner_nlportid != info->snd_portid) {
err = -EINVAL;
- goto out_unlock;
+ goto out;
}
/* FIXME: validate MLO/link-id against driver capabilities */
@@ -6225,8 +6199,6 @@ static int nl80211_start_ap(struct sk_buff *skb, struct genl_info *info)
nl80211_send_ap_started(wdev, link_id);
}
-out_unlock:
- wdev_unlock(wdev);
out:
kfree(params->acl);
kfree(params->beacon.mbssid_ies);
@@ -6246,7 +6218,8 @@ static int nl80211_set_beacon(struct sk_buff *skb, struct genl_info *info)
unsigned int link_id = nl80211_link_id(info->attrs);
struct net_device *dev = info->user_ptr[1];
struct wireless_dev *wdev = dev->ieee80211_ptr;
- struct cfg80211_beacon_data params;
+ struct cfg80211_ap_update *params;
+ struct nlattr *attr;
int err;
if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_AP &&
@@ -6259,17 +6232,37 @@ static int nl80211_set_beacon(struct sk_buff *skb, struct genl_info *info)
if (!wdev->links[link_id].ap.beacon_interval)
return -EINVAL;
- err = nl80211_parse_beacon(rdev, info->attrs, &params, info->extack);
+ params = kzalloc(sizeof(*params), GFP_KERNEL);
+ if (!params)
+ return -ENOMEM;
+
+ err = nl80211_parse_beacon(rdev, info->attrs, &params->beacon,
+ info->extack);
if (err)
goto out;
- wdev_lock(wdev);
- err = rdev_change_beacon(rdev, dev, &params);
- wdev_unlock(wdev);
+ attr = info->attrs[NL80211_ATTR_FILS_DISCOVERY];
+ if (attr) {
+ err = nl80211_parse_fils_discovery(rdev, attr,
+ &params->fils_discovery);
+ if (err)
+ goto out;
+ }
+
+ attr = info->attrs[NL80211_ATTR_UNSOL_BCAST_PROBE_RESP];
+ if (attr) {
+ err = nl80211_parse_unsol_bcast_probe_resp(rdev, attr,
+ &params->unsol_bcast_probe_resp);
+ if (err)
+ goto out;
+ }
+
+ err = rdev_change_beacon(rdev, dev, params);
out:
- kfree(params.mbssid_ies);
- kfree(params.rnr_ies);
+ kfree(params->beacon.mbssid_ies);
+ kfree(params->beacon.rnr_ies);
+ kfree(params);
return err;
}
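nl80211_set_beacon() now hands drivers a bundled cfg80211_ap_update, so .change_beacon implementations can pick up FILS discovery and unsolicited broadcast probe response changes via the new 'update' flags. A hedged driver-side sketch (my_change_beacon is illustrative, not an in-tree driver):

#include <net/cfg80211.h>

static int my_change_beacon(struct wiphy *wiphy, struct net_device *dev,
			    struct cfg80211_ap_update *info)
{
	/* info->beacon carries the beacon templates as before. */
	if (info->fils_discovery.update) {
		/* reprogram FILS discovery (tmpl may be absent to disable) */
	}
	if (info->unsol_bcast_probe_resp.update) {
		/* reprogram unsolicited broadcast probe response */
	}
	return 0;
}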
@@ -7324,9 +7317,7 @@ static int nl80211_set_station(struct sk_buff *skb, struct genl_info *info)
}
/* driver will call cfg80211_check_station_change() */
- wdev_lock(dev->ieee80211_ptr);
err = rdev_change_station(rdev, dev, mac_addr, &params);
- wdev_unlock(dev->ieee80211_ptr);
out_put_vlan:
dev_put(params.vlan);
@@ -7594,7 +7585,6 @@ static int nl80211_new_station(struct sk_buff *skb, struct genl_info *info)
/* be aware of params.vlan when changing code here */
- wdev_lock(dev->ieee80211_ptr);
if (wdev->valid_links) {
if (params.link_sta_params.link_id < 0) {
err = -EINVAL;
@@ -7612,7 +7602,6 @@ static int nl80211_new_station(struct sk_buff *skb, struct genl_info *info)
}
err = rdev_add_station(rdev, dev, mac_addr, &params);
out:
- wdev_unlock(dev->ieee80211_ptr);
dev_put(params.vlan);
return err;
}
@@ -7622,7 +7611,6 @@ static int nl80211_del_station(struct sk_buff *skb, struct genl_info *info)
struct cfg80211_registered_device *rdev = info->user_ptr[0];
struct net_device *dev = info->user_ptr[1];
struct station_del_parameters params;
- int ret;
memset(&params, 0, sizeof(params));
@@ -7670,11 +7658,7 @@ static int nl80211_del_station(struct sk_buff *skb, struct genl_info *info)
params.reason_code = WLAN_REASON_PREV_AUTH_NOT_VALID;
}
- wdev_lock(dev->ieee80211_ptr);
- ret = rdev_del_station(rdev, dev, &params);
- wdev_unlock(dev->ieee80211_ptr);
-
- return ret;
+ return rdev_del_station(rdev, dev, &params);
}
static int nl80211_send_mpath(struct sk_buff *msg, u32 portid, u32 seq,
@@ -7993,9 +7977,7 @@ static int nl80211_set_bss(struct sk_buff *skb, struct genl_info *info)
{
struct cfg80211_registered_device *rdev = info->user_ptr[0];
struct net_device *dev = info->user_ptr[1];
- struct wireless_dev *wdev = dev->ieee80211_ptr;
struct bss_parameters params;
- int err;
memset(&params, 0, sizeof(params));
params.link_id = nl80211_link_id_or_invalid(info->attrs);
@@ -8058,11 +8040,7 @@ static int nl80211_set_bss(struct sk_buff *skb, struct genl_info *info)
dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_GO)
return -EOPNOTSUPP;
- wdev_lock(wdev);
- err = rdev_change_bss(rdev, dev, &params);
- wdev_unlock(wdev);
-
- return err;
+ return rdev_change_bss(rdev, dev, &params);
}
static int nl80211_req_set_reg(struct sk_buff *skb, struct genl_info *info)
@@ -8133,13 +8111,11 @@ static int nl80211_get_mesh_config(struct sk_buff *skb,
if (!rdev->ops->get_mesh_config)
return -EOPNOTSUPP;
- wdev_lock(wdev);
/* If not connected, get default parameters */
if (!wdev->u.mesh.id_len)
memcpy(&cur_params, &default_mesh_config, sizeof(cur_params));
else
err = rdev_get_mesh_config(rdev, dev, &cur_params);
- wdev_unlock(wdev);
if (err)
return err;
@@ -8515,15 +8491,12 @@ static int nl80211_update_mesh_config(struct sk_buff *skb,
if (err)
return err;
- wdev_lock(wdev);
if (!wdev->u.mesh.id_len)
err = -ENOLINK;
if (!err)
err = rdev_update_mesh_config(rdev, dev, mask, &cfg);
- wdev_unlock(wdev);
-
return err;
}
@@ -8578,6 +8551,11 @@ static int nl80211_put_regdom(const struct ieee80211_regdomain *regdom,
reg_rule->dfs_cac_ms))
goto nla_put_failure;
+ if ((reg_rule->flags & NL80211_RRF_PSD) &&
+ nla_put_s8(msg, NL80211_ATTR_POWER_RULE_PSD,
+ reg_rule->psd))
+ goto nla_put_failure;
+
nla_nest_end(msg, nl_reg_rule);
}
@@ -9014,7 +8992,7 @@ static bool cfg80211_off_channel_oper_allowed(struct wireless_dev *wdev,
unsigned int link_id;
bool all_ok = true;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (!cfg80211_beaconing_iface_active(wdev))
return true;
@@ -9264,7 +9242,6 @@ static int nl80211_trigger_scan(struct sk_buff *skb, struct genl_info *info)
request->n_channels = i;
- wdev_lock(wdev);
for (i = 0; i < request->n_channels; i++) {
struct ieee80211_channel *chan = request->channels[i];
@@ -9273,12 +9250,10 @@ static int nl80211_trigger_scan(struct sk_buff *skb, struct genl_info *info)
continue;
if (!cfg80211_wdev_on_sub_chan(wdev, chan, true)) {
- wdev_unlock(wdev);
err = -EBUSY;
goto out_free;
}
}
- wdev_unlock(wdev);
i = 0;
if (n_ssids) {
@@ -10284,9 +10259,7 @@ skip_beacons:
goto free;
}
- wdev_lock(wdev);
err = rdev_channel_switch(rdev, dev, &params);
- wdev_unlock(wdev);
free:
kfree(params.beacon_after.mbssid_ies);
@@ -10309,7 +10282,7 @@ static int nl80211_send_bss(struct sk_buff *msg, struct netlink_callback *cb,
void *hdr;
struct nlattr *bss;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
hdr = nl80211hdr_put(msg, NETLINK_CB(cb->skb).portid, seq, flags,
NL80211_CMD_NEW_SCAN_RESULTS);
@@ -10372,7 +10345,6 @@ static int nl80211_send_bss(struct sk_buff *msg, struct netlink_callback *cb,
nla_put_u32(msg, NL80211_BSS_FREQUENCY, res->channel->center_freq) ||
nla_put_u32(msg, NL80211_BSS_FREQUENCY_OFFSET,
res->channel->freq_offset) ||
- nla_put_u32(msg, NL80211_BSS_CHAN_WIDTH, res->scan_width) ||
nla_put_u32(msg, NL80211_BSS_SEEN_MS_AGO,
jiffies_to_msecs(jiffies - intbss->ts)))
goto nla_put_failure;
@@ -10458,7 +10430,6 @@ static int nl80211_dump_scan(struct sk_buff *skb, struct netlink_callback *cb)
/* nl80211_prepare_wdev_dump acquired it in the successful case */
__acquire(&rdev->wiphy.mtx);
- wdev_lock(wdev);
spin_lock_bh(&rdev->bss_lock);
/*
@@ -10484,7 +10455,6 @@ static int nl80211_dump_scan(struct sk_buff *skb, struct netlink_callback *cb)
}
spin_unlock_bh(&rdev->bss_lock);
- wdev_unlock(wdev);
cb->args[2] = idx;
wiphy_unlock(&rdev->wiphy);
@@ -10607,9 +10577,7 @@ static int nl80211_dump_survey(struct sk_buff *skb, struct netlink_callback *cb)
}
while (1) {
- wdev_lock(wdev);
res = rdev_dump_survey(rdev, wdev->netdev, survey_idx, &survey);
- wdev_unlock(wdev);
if (res == -ENOENT)
break;
if (res)
@@ -10782,9 +10750,7 @@ static int nl80211_authenticate(struct sk_buff *skb, struct genl_info *info)
if (!req.bss)
return -ENOENT;
- wdev_lock(dev->ieee80211_ptr);
err = cfg80211_mlme_auth(rdev, dev, &req);
- wdev_unlock(dev->ieee80211_ptr);
cfg80211_put_bss(&rdev->wiphy, req.bss);
@@ -10994,8 +10960,9 @@ static int nl80211_associate(struct sk_buff *skb, struct genl_info *info)
if (cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
req.ie, req.ie_len)) {
- GENL_SET_ERR_MSG(info,
- "non-inheritance makes no sense");
+ NL_SET_ERR_MSG_ATTR(info->extack,
+ info->attrs[NL80211_ATTR_IE],
+ "non-inheritance makes no sense");
return -EINVAL;
}
}
@@ -11120,6 +11087,7 @@ static int nl80211_associate(struct sk_buff *skb, struct genl_info *info)
if (!attrs[NL80211_ATTR_MLO_LINK_ID]) {
err = -EINVAL;
+ NL_SET_BAD_ATTR(info->extack, link);
goto free;
}
@@ -11127,6 +11095,7 @@ static int nl80211_associate(struct sk_buff *skb, struct genl_info *info)
/* cannot use the same link ID again */
if (req.links[link_id].bss) {
err = -EINVAL;
+ NL_SET_BAD_ATTR(info->extack, link);
goto free;
}
req.links[link_id].bss =
@@ -11134,6 +11103,8 @@ static int nl80211_associate(struct sk_buff *skb, struct genl_info *info)
if (IS_ERR(req.links[link_id].bss)) {
err = PTR_ERR(req.links[link_id].bss);
req.links[link_id].bss = NULL;
+ NL_SET_ERR_MSG_ATTR(info->extack,
+ link, "Error fetching BSS for link");
goto free;
}
@@ -11146,8 +11117,9 @@ static int nl80211_associate(struct sk_buff *skb, struct genl_info *info)
if (cfg80211_find_elem(WLAN_EID_FRAGMENT,
req.links[link_id].elems,
req.links[link_id].elems_len)) {
- GENL_SET_ERR_MSG(info,
- "cannot deal with fragmentation");
+ NL_SET_ERR_MSG_ATTR(info->extack,
+ attrs[NL80211_ATTR_IE],
+ "cannot deal with fragmentation");
err = -EINVAL;
goto free;
}
@@ -11155,8 +11127,9 @@ static int nl80211_associate(struct sk_buff *skb, struct genl_info *info)
if (cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
req.links[link_id].elems,
req.links[link_id].elems_len)) {
- GENL_SET_ERR_MSG(info,
- "cannot deal with non-inheritance");
+ NL_SET_ERR_MSG_ATTR(info->extack,
+ attrs[NL80211_ATTR_IE],
+ "cannot deal with non-inheritance");
err = -EINVAL;
goto free;
}
@@ -11199,7 +11172,8 @@ static int nl80211_associate(struct sk_buff *skb, struct genl_info *info)
err = nl80211_crypto_settings(rdev, info, &req.crypto, 1);
if (!err) {
- wdev_lock(dev->ieee80211_ptr);
+ struct nlattr *link;
+ int rem = 0;
err = cfg80211_mlme_assoc(rdev, dev, &req);
@@ -11210,7 +11184,33 @@ static int nl80211_associate(struct sk_buff *skb, struct genl_info *info)
ap_addr, ETH_ALEN);
}
- wdev_unlock(dev->ieee80211_ptr);
+ /* Report error from first problematic link */
+ if (info->attrs[NL80211_ATTR_MLO_LINKS]) {
+ nla_for_each_nested(link,
+ info->attrs[NL80211_ATTR_MLO_LINKS],
+ rem) {
+ struct nlattr *link_id_attr =
+ nla_find_nested(link, NL80211_ATTR_MLO_LINK_ID);
+
+ if (!link_id_attr)
+ continue;
+
+ link_id = nla_get_u8(link_id_attr);
+
+ if (link_id == req.link_id)
+ continue;
+
+ if (!req.links[link_id].error ||
+ WARN_ON(req.links[link_id].error > 0))
+ continue;
+
+ WARN_ON(err >= 0);
+
+ NL_SET_BAD_ATTR(info->extack, link);
+ err = req.links[link_id].error;
+ break;
+ }
+ }
}
free:
@@ -11227,7 +11227,7 @@ static int nl80211_deauthenticate(struct sk_buff *skb, struct genl_info *info)
struct cfg80211_registered_device *rdev = info->user_ptr[0];
struct net_device *dev = info->user_ptr[1];
const u8 *ie = NULL, *bssid;
- int ie_len = 0, err;
+ int ie_len = 0;
u16 reason_code;
bool local_state_change;
@@ -11263,11 +11263,8 @@ static int nl80211_deauthenticate(struct sk_buff *skb, struct genl_info *info)
local_state_change = !!info->attrs[NL80211_ATTR_LOCAL_STATE_CHANGE];
- wdev_lock(dev->ieee80211_ptr);
- err = cfg80211_mlme_deauth(rdev, dev, bssid, ie, ie_len, reason_code,
- local_state_change);
- wdev_unlock(dev->ieee80211_ptr);
- return err;
+ return cfg80211_mlme_deauth(rdev, dev, bssid, ie, ie_len, reason_code,
+ local_state_change);
}
static int nl80211_disassociate(struct sk_buff *skb, struct genl_info *info)
@@ -11275,7 +11272,7 @@ static int nl80211_disassociate(struct sk_buff *skb, struct genl_info *info)
struct cfg80211_registered_device *rdev = info->user_ptr[0];
struct net_device *dev = info->user_ptr[1];
const u8 *ie = NULL, *bssid;
- int ie_len = 0, err;
+ int ie_len = 0;
u16 reason_code;
bool local_state_change;
@@ -11311,11 +11308,8 @@ static int nl80211_disassociate(struct sk_buff *skb, struct genl_info *info)
local_state_change = !!info->attrs[NL80211_ATTR_LOCAL_STATE_CHANGE];
- wdev_lock(dev->ieee80211_ptr);
- err = cfg80211_mlme_disassoc(rdev, dev, bssid, ie, ie_len, reason_code,
- local_state_change);
- wdev_unlock(dev->ieee80211_ptr);
- return err;
+ return cfg80211_mlme_disassoc(rdev, dev, bssid, ie, ie_len, reason_code,
+ local_state_change);
}
static bool
@@ -11493,13 +11487,11 @@ static int nl80211_join_ibss(struct sk_buff *skb, struct genl_info *info)
ibss.userspace_handles_dfs =
nla_get_flag(info->attrs[NL80211_ATTR_HANDLE_DFS]);
- wdev_lock(dev->ieee80211_ptr);
err = __cfg80211_join_ibss(rdev, dev, &ibss, connkeys);
if (err)
kfree_sensitive(connkeys);
else if (info->attrs[NL80211_ATTR_SOCKET_OWNER])
dev->ieee80211_ptr->conn_owner_nlportid = info->snd_portid;
- wdev_unlock(dev->ieee80211_ptr);
return err;
}
@@ -12032,8 +12024,6 @@ static int nl80211_connect(struct sk_buff *skb, struct genl_info *info)
if (nla_get_flag(info->attrs[NL80211_ATTR_MLO_SUPPORT]))
connect.flags |= CONNECT_REQ_MLO_SUPPORT;
- wdev_lock(dev->ieee80211_ptr);
-
err = cfg80211_connect(rdev, dev, &connect, connkeys,
connect.prev_bssid);
if (err)
@@ -12048,8 +12038,6 @@ static int nl80211_connect(struct sk_buff *skb, struct genl_info *info)
eth_zero_addr(dev->ieee80211_ptr->disconnect_bssid);
}
- wdev_unlock(dev->ieee80211_ptr);
-
return err;
}
@@ -12063,7 +12051,6 @@ static int nl80211_update_connect_params(struct sk_buff *skb,
bool fils_sk_offload;
u32 auth_type;
u32 changed = 0;
- int ret;
if (!rdev->ops->update_connect_params)
return -EOPNOTSUPP;
@@ -12124,14 +12111,10 @@ static int nl80211_update_connect_params(struct sk_buff *skb,
changed |= UPDATE_AUTH_TYPE;
}
- wdev_lock(dev->ieee80211_ptr);
if (!wdev->connected)
- ret = -ENOLINK;
- else
- ret = rdev_update_connect_params(rdev, dev, &connect, changed);
- wdev_unlock(dev->ieee80211_ptr);
+ return -ENOLINK;
- return ret;
+ return rdev_update_connect_params(rdev, dev, &connect, changed);
}
static int nl80211_disconnect(struct sk_buff *skb, struct genl_info *info)
@@ -12139,7 +12122,6 @@ static int nl80211_disconnect(struct sk_buff *skb, struct genl_info *info)
struct cfg80211_registered_device *rdev = info->user_ptr[0];
struct net_device *dev = info->user_ptr[1];
u16 reason;
- int ret;
if (dev->ieee80211_ptr->conn_owner_nlportid &&
dev->ieee80211_ptr->conn_owner_nlportid != info->snd_portid)
@@ -12157,10 +12139,7 @@ static int nl80211_disconnect(struct sk_buff *skb, struct genl_info *info)
dev->ieee80211_ptr->iftype != NL80211_IFTYPE_P2P_CLIENT)
return -EOPNOTSUPP;
- wdev_lock(dev->ieee80211_ptr);
- ret = cfg80211_disconnect(rdev, dev, reason, true);
- wdev_unlock(dev->ieee80211_ptr);
- return ret;
+ return cfg80211_disconnect(rdev, dev, reason, true);
}
static int nl80211_wiphy_netns(struct sk_buff *skb, struct genl_info *info)
@@ -12371,7 +12350,6 @@ static int nl80211_remain_on_channel(struct sk_buff *skb,
if (err)
return err;
- wdev_lock(wdev);
if (!cfg80211_off_channel_oper_allowed(wdev, chandef.chan)) {
const struct cfg80211_chan_def *oper_chandef, *compat_chandef;
@@ -12380,7 +12358,6 @@ static int nl80211_remain_on_channel(struct sk_buff *skb,
if (WARN_ON(!oper_chandef)) {
/* cannot happen since we must beacon to get here */
WARN_ON(1);
- wdev_unlock(wdev);
return -EBUSY;
}
@@ -12388,12 +12365,9 @@ static int nl80211_remain_on_channel(struct sk_buff *skb,
compat_chandef = cfg80211_chandef_compatible(&chandef,
oper_chandef);
- if (compat_chandef != &chandef) {
- wdev_unlock(wdev);
+ if (compat_chandef != &chandef)
return -EBUSY;
- }
}
- wdev_unlock(wdev);
msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
if (!msg)
@@ -12452,23 +12426,18 @@ static int nl80211_set_tx_bitrate_mask(struct sk_buff *skb,
unsigned int link_id = nl80211_link_id(info->attrs);
struct cfg80211_registered_device *rdev = info->user_ptr[0];
struct net_device *dev = info->user_ptr[1];
- struct wireless_dev *wdev = dev->ieee80211_ptr;
int err;
if (!rdev->ops->set_bitrate_mask)
return -EOPNOTSUPP;
- wdev_lock(wdev);
err = nl80211_parse_tx_bitrate_mask(info, info->attrs,
NL80211_ATTR_TX_RATES, &mask,
dev, true, link_id);
if (err)
- goto out;
+ return err;
- err = rdev_set_bitrate_mask(rdev, dev, link_id, NULL, &mask);
-out:
- wdev_unlock(wdev);
- return err;
+ return rdev_set_bitrate_mask(rdev, dev, link_id, NULL, &mask);
}
static int nl80211_register_mgmt(struct sk_buff *skb, struct genl_info *info)
@@ -12597,12 +12566,9 @@ static int nl80211_tx_mgmt(struct sk_buff *skb, struct genl_info *info)
if (!chandef.chan && params.offchan)
return -EINVAL;
- wdev_lock(wdev);
if (params.offchan &&
- !cfg80211_off_channel_oper_allowed(wdev, chandef.chan)) {
- wdev_unlock(wdev);
+ !cfg80211_off_channel_oper_allowed(wdev, chandef.chan))
return -EBUSY;
- }
params.link_id = nl80211_link_id_or_invalid(info->attrs);
/*
@@ -12611,11 +12577,8 @@ static int nl80211_tx_mgmt(struct sk_buff *skb, struct genl_info *info)
* to the driver.
*/
if (params.link_id >= 0 &&
- !(wdev->valid_links & BIT(params.link_id))) {
- wdev_unlock(wdev);
+ !(wdev->valid_links & BIT(params.link_id)))
return -EINVAL;
- }
- wdev_unlock(wdev);
params.buf = nla_data(info->attrs[NL80211_ATTR_FRAME]);
params.len = nla_len(info->attrs[NL80211_ATTR_FRAME]);
@@ -12887,8 +12850,8 @@ static int nl80211_set_cqm_rssi(struct genl_info *info,
struct cfg80211_cqm_config *cqm_config = NULL, *old;
struct net_device *dev = info->user_ptr[1];
struct wireless_dev *wdev = dev->ieee80211_ptr;
- int i, err;
s32 prev = S32_MIN;
+ int i, err;
/* Check all values negative and sorted */
for (i = 0; i < n_thresholds; i++) {
@@ -12917,18 +12880,14 @@ static int nl80211_set_cqm_rssi(struct genl_info *info,
if (n_thresholds == 1 && thresholds[0] == 0) /* Disabling */
n_thresholds = 0;
- wdev_lock(wdev);
- old = rcu_dereference_protected(wdev->cqm_config,
- lockdep_is_held(&wdev->mtx));
+ old = wiphy_dereference(wdev->wiphy, wdev->cqm_config);
if (n_thresholds) {
cqm_config = kzalloc(struct_size(cqm_config, rssi_thresholds,
n_thresholds),
GFP_KERNEL);
- if (!cqm_config) {
- err = -ENOMEM;
- goto unlock;
- }
+ if (!cqm_config)
+ return -ENOMEM;
cqm_config->rssi_hyst = hysteresis;
cqm_config->n_rssi_thresholds = n_thresholds;
@@ -12948,8 +12907,6 @@ static int nl80211_set_cqm_rssi(struct genl_info *info,
} else {
kfree_rcu(old, rcu_head);
}
-unlock:
- wdev_unlock(wdev);
return err;
}
@@ -13133,11 +13090,9 @@ static int nl80211_join_mesh(struct sk_buff *skb, struct genl_info *info)
setup.control_port_over_nl80211 = true;
}
- wdev_lock(dev->ieee80211_ptr);
err = __cfg80211_join_mesh(rdev, dev, &setup, &cfg);
if (!err && info->attrs[NL80211_ATTR_SOCKET_OWNER])
dev->ieee80211_ptr->conn_owner_nlportid = info->snd_portid;
- wdev_unlock(dev->ieee80211_ptr);
return err;
}
@@ -14081,21 +14036,13 @@ static int nl80211_set_rekey_data(struct sk_buff *skb, struct genl_info *info)
if (tb[NL80211_REKEY_DATA_AKM])
rekey_data.akm = nla_get_u32(tb[NL80211_REKEY_DATA_AKM]);
- wdev_lock(wdev);
- if (!wdev->connected) {
- err = -ENOTCONN;
- goto out;
- }
+ if (!wdev->connected)
+ return -ENOTCONN;
- if (!rdev->ops->set_rekey_data) {
- err = -EOPNOTSUPP;
- goto out;
- }
+ if (!rdev->ops->set_rekey_data)
+ return -EOPNOTSUPP;
- err = rdev_set_rekey_data(rdev, dev, &rekey_data);
- out:
- wdev_unlock(wdev);
- return err;
+ return rdev_set_rekey_data(rdev, dev, &rekey_data);
}
static int nl80211_register_unexpected_frame(struct sk_buff *skb,
@@ -15299,11 +15246,9 @@ static int nl80211_set_qos_map(struct sk_buff *skb,
memcpy(qos_map->up, pos, IEEE80211_QOS_MAP_LEN_MIN);
}
- wdev_lock(dev->ieee80211_ptr);
ret = nl80211_key_allowed(dev->ieee80211_ptr);
if (!ret)
ret = rdev_set_qos_map(rdev, dev, qos_map);
- wdev_unlock(dev->ieee80211_ptr);
kfree(qos_map);
return ret;
@@ -15317,7 +15262,6 @@ static int nl80211_add_tx_ts(struct sk_buff *skb, struct genl_info *info)
const u8 *peer;
u8 tsid, up;
u16 admitted_time = 0;
- int err;
if (!(rdev->wiphy.features & NL80211_FEATURE_SUPPORTS_WMM_ADMISSION))
return -EOPNOTSUPP;
@@ -15347,34 +15291,25 @@ static int nl80211_add_tx_ts(struct sk_buff *skb, struct genl_info *info)
return -EINVAL;
}
- wdev_lock(wdev);
switch (wdev->iftype) {
case NL80211_IFTYPE_STATION:
case NL80211_IFTYPE_P2P_CLIENT:
if (wdev->connected)
break;
- err = -ENOTCONN;
- goto out;
+ return -ENOTCONN;
default:
- err = -EOPNOTSUPP;
- goto out;
+ return -EOPNOTSUPP;
}
- err = rdev_add_tx_ts(rdev, dev, tsid, peer, up, admitted_time);
-
- out:
- wdev_unlock(wdev);
- return err;
+ return rdev_add_tx_ts(rdev, dev, tsid, peer, up, admitted_time);
}
static int nl80211_del_tx_ts(struct sk_buff *skb, struct genl_info *info)
{
struct cfg80211_registered_device *rdev = info->user_ptr[0];
struct net_device *dev = info->user_ptr[1];
- struct wireless_dev *wdev = dev->ieee80211_ptr;
const u8 *peer;
u8 tsid;
- int err;
if (!info->attrs[NL80211_ATTR_TSID] || !info->attrs[NL80211_ATTR_MAC])
return -EINVAL;
@@ -15382,11 +15317,7 @@ static int nl80211_del_tx_ts(struct sk_buff *skb, struct genl_info *info)
tsid = nla_get_u8(info->attrs[NL80211_ATTR_TSID]);
peer = nla_data(info->attrs[NL80211_ATTR_MAC]);
- wdev_lock(wdev);
- err = rdev_del_tx_ts(rdev, dev, tsid, peer);
- wdev_unlock(wdev);
-
- return err;
+ return rdev_del_tx_ts(rdev, dev, tsid, peer);
}
static int nl80211_tdls_channel_switch(struct sk_buff *skb,
@@ -15442,11 +15373,7 @@ static int nl80211_tdls_channel_switch(struct sk_buff *skb,
addr = nla_data(info->attrs[NL80211_ATTR_MAC]);
oper_class = nla_get_u8(info->attrs[NL80211_ATTR_OPER_CLASS]);
- wdev_lock(wdev);
- err = rdev_tdls_channel_switch(rdev, dev, addr, oper_class, &chandef);
- wdev_unlock(wdev);
-
- return err;
+ return rdev_tdls_channel_switch(rdev, dev, addr, oper_class, &chandef);
}
static int nl80211_tdls_cancel_channel_switch(struct sk_buff *skb,
@@ -15454,7 +15381,6 @@ static int nl80211_tdls_cancel_channel_switch(struct sk_buff *skb,
{
struct cfg80211_registered_device *rdev = info->user_ptr[0];
struct net_device *dev = info->user_ptr[1];
- struct wireless_dev *wdev = dev->ieee80211_ptr;
const u8 *addr;
if (!rdev->ops->tdls_channel_switch ||
@@ -15475,9 +15401,7 @@ static int nl80211_tdls_cancel_channel_switch(struct sk_buff *skb,
addr = nla_data(info->attrs[NL80211_ATTR_MAC]);
- wdev_lock(wdev);
rdev_tdls_cancel_channel_switch(rdev, dev, addr);
- wdev_unlock(wdev);
return 0;
}
@@ -15510,7 +15434,6 @@ static int nl80211_set_pmk(struct sk_buff *skb, struct genl_info *info)
struct net_device *dev = info->user_ptr[1];
struct wireless_dev *wdev = dev->ieee80211_ptr;
struct cfg80211_pmk_conf pmk_conf = {};
- int ret;
if (wdev->iftype != NL80211_IFTYPE_STATION &&
wdev->iftype != NL80211_IFTYPE_P2P_CLIENT)
@@ -15523,34 +15446,24 @@ static int nl80211_set_pmk(struct sk_buff *skb, struct genl_info *info)
if (!info->attrs[NL80211_ATTR_MAC] || !info->attrs[NL80211_ATTR_PMK])
return -EINVAL;
- wdev_lock(wdev);
- if (!wdev->connected) {
- ret = -ENOTCONN;
- goto out;
- }
+ if (!wdev->connected)
+ return -ENOTCONN;
pmk_conf.aa = nla_data(info->attrs[NL80211_ATTR_MAC]);
- if (memcmp(pmk_conf.aa, wdev->u.client.connected_addr, ETH_ALEN)) {
- ret = -EINVAL;
- goto out;
- }
+ if (memcmp(pmk_conf.aa, wdev->u.client.connected_addr, ETH_ALEN))
+ return -EINVAL;
pmk_conf.pmk = nla_data(info->attrs[NL80211_ATTR_PMK]);
pmk_conf.pmk_len = nla_len(info->attrs[NL80211_ATTR_PMK]);
if (pmk_conf.pmk_len != WLAN_PMK_LEN &&
- pmk_conf.pmk_len != WLAN_PMK_LEN_SUITE_B_192) {
- ret = -EINVAL;
- goto out;
- }
+ pmk_conf.pmk_len != WLAN_PMK_LEN_SUITE_B_192)
+ return -EINVAL;
if (info->attrs[NL80211_ATTR_PMKR0_NAME])
pmk_conf.pmk_r0_name =
nla_data(info->attrs[NL80211_ATTR_PMKR0_NAME]);
- ret = rdev_set_pmk(rdev, dev, &pmk_conf);
-out:
- wdev_unlock(wdev);
- return ret;
+ return rdev_set_pmk(rdev, dev, &pmk_conf);
}
static int nl80211_del_pmk(struct sk_buff *skb, struct genl_info *info)
@@ -15559,7 +15472,6 @@ static int nl80211_del_pmk(struct sk_buff *skb, struct genl_info *info)
struct net_device *dev = info->user_ptr[1];
struct wireless_dev *wdev = dev->ieee80211_ptr;
const u8 *aa;
- int ret;
if (wdev->iftype != NL80211_IFTYPE_STATION &&
wdev->iftype != NL80211_IFTYPE_P2P_CLIENT)
@@ -15572,12 +15484,8 @@ static int nl80211_del_pmk(struct sk_buff *skb, struct genl_info *info)
if (!info->attrs[NL80211_ATTR_MAC])
return -EINVAL;
- wdev_lock(wdev);
aa = nla_data(info->attrs[NL80211_ATTR_MAC]);
- ret = rdev_del_pmk(rdev, dev, aa);
- wdev_unlock(wdev);
-
- return ret;
+ return rdev_del_pmk(rdev, dev, aa);
}
static int nl80211_external_auth(struct sk_buff *skb, struct genl_info *info)
@@ -15651,8 +15559,6 @@ static int nl80211_tx_control_port(struct sk_buff *skb, struct genl_info *info)
return -EINVAL;
}
- wdev_lock(wdev);
-
switch (wdev->iftype) {
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_P2P_GO:
@@ -15661,21 +15567,16 @@ static int nl80211_tx_control_port(struct sk_buff *skb, struct genl_info *info)
case NL80211_IFTYPE_ADHOC:
if (wdev->u.ibss.current_bss)
break;
- err = -ENOTCONN;
- goto out;
+ return -ENOTCONN;
case NL80211_IFTYPE_STATION:
case NL80211_IFTYPE_P2P_CLIENT:
if (wdev->connected)
break;
- err = -ENOTCONN;
- goto out;
+ return -ENOTCONN;
default:
- err = -EOPNOTSUPP;
- goto out;
+ return -EOPNOTSUPP;
}
- wdev_unlock(wdev);
-
buf = nla_data(info->attrs[NL80211_ATTR_FRAME]);
len = nla_len(info->attrs[NL80211_ATTR_FRAME]);
dest = nla_data(info->attrs[NL80211_ATTR_MAC]);
@@ -15691,9 +15592,6 @@ static int nl80211_tx_control_port(struct sk_buff *skb, struct genl_info *info)
if (!err && !dont_wait_for_ack)
nl_set_extack_cookie_u64(info->extack, cookie);
return err;
- out:
- wdev_unlock(wdev);
- return err;
}
static int nl80211_get_ftm_responder_stats(struct sk_buff *skb,
@@ -15971,8 +15869,6 @@ static int nl80211_set_tid_config(struct sk_buff *skb,
if (info->attrs[NL80211_ATTR_MAC])
tid_config->peer = nla_data(info->attrs[NL80211_ATTR_MAC]);
- wdev_lock(dev->ieee80211_ptr);
-
nla_for_each_nested(tid, info->attrs[NL80211_ATTR_TID_CONFIG],
rem_conf) {
ret = nla_parse_nested(attrs, NL80211_TID_CONFIG_ATTR_MAX,
@@ -15994,7 +15890,6 @@ static int nl80211_set_tid_config(struct sk_buff *skb,
bad_tid_conf:
kfree(tid_config);
- wdev_unlock(dev->ieee80211_ptr);
return ret;
}
@@ -16091,9 +15986,7 @@ static int nl80211_color_change(struct sk_buff *skb, struct genl_info *info)
params.counter_offset_presp = offset;
}
- wdev_lock(wdev);
err = rdev_color_change(rdev, dev, &params);
- wdev_unlock(wdev);
out:
kfree(params.beacon_next.mbssid_ies);
@@ -16149,7 +16042,6 @@ static int nl80211_add_link(struct sk_buff *skb, struct genl_info *info)
!is_valid_ether_addr(nla_data(info->attrs[NL80211_ATTR_MAC])))
return -EINVAL;
- wdev_lock(wdev);
wdev->valid_links |= BIT(link_id);
ether_addr_copy(wdev->links[link_id].addr,
nla_data(info->attrs[NL80211_ATTR_MAC]));
@@ -16159,7 +16051,6 @@ static int nl80211_add_link(struct sk_buff *skb, struct genl_info *info)
wdev->valid_links &= ~BIT(link_id);
eth_zero_addr(wdev->links[link_id].addr);
}
- wdev_unlock(wdev);
return ret;
}
@@ -16181,9 +16072,7 @@ static int nl80211_remove_link(struct sk_buff *skb, struct genl_info *info)
return -EINVAL;
}
- wdev_lock(wdev);
cfg80211_remove_link(wdev, link_id);
- wdev_unlock(wdev);
return 0;
}
@@ -16273,14 +16162,10 @@ nl80211_add_mod_link_station(struct sk_buff *skb, struct genl_info *info,
if (err)
return err;
- wdev_lock(dev->ieee80211_ptr);
if (add)
- err = rdev_add_link_station(rdev, dev, &params);
- else
- err = rdev_mod_link_station(rdev, dev, &params);
- wdev_unlock(dev->ieee80211_ptr);
+ return rdev_add_link_station(rdev, dev, &params);
- return err;
+ return rdev_mod_link_station(rdev, dev, &params);
}
static int
@@ -16301,7 +16186,6 @@ nl80211_remove_link_station(struct sk_buff *skb, struct genl_info *info)
struct link_station_del_parameters params = {};
struct cfg80211_registered_device *rdev = info->user_ptr[0];
struct net_device *dev = info->user_ptr[1];
- int ret;
if (!rdev->ops->del_link_station)
return -EOPNOTSUPP;
@@ -16313,11 +16197,7 @@ nl80211_remove_link_station(struct sk_buff *skb, struct genl_info *info)
params.mld_mac = nla_data(info->attrs[NL80211_ATTR_MLD_ADDR]);
params.link_id = nla_get_u8(info->attrs[NL80211_ATTR_MLO_LINK_ID]);
- wdev_lock(dev->ieee80211_ptr);
- ret = rdev_del_link_station(rdev, dev, &params);
- wdev_unlock(dev->ieee80211_ptr);
-
- return ret;
+ return rdev_del_link_station(rdev, dev, &params);
}
static int nl80211_set_hw_timestamp(struct sk_buff *skb,
@@ -17919,7 +17799,7 @@ void nl80211_send_rx_auth(struct cfg80211_registered_device *rdev,
void nl80211_send_rx_assoc(struct cfg80211_registered_device *rdev,
struct net_device *netdev,
- struct cfg80211_rx_assoc_resp *data)
+ struct cfg80211_rx_assoc_resp_data *data)
{
nl80211_send_mlme_event(rdev, netdev, data->buf, data->len,
NL80211_CMD_ASSOCIATE, GFP_KERNEL,
@@ -18244,7 +18124,7 @@ void nl80211_send_roamed(struct cfg80211_registered_device *rdev,
}
void nl80211_send_port_authorized(struct cfg80211_registered_device *rdev,
- struct net_device *netdev, const u8 *bssid,
+ struct net_device *netdev, const u8 *peer_addr,
const u8 *td_bitmap, u8 td_bitmap_len)
{
struct sk_buff *msg;
@@ -18262,7 +18142,7 @@ void nl80211_send_port_authorized(struct cfg80211_registered_device *rdev,
if (nla_put_u32(msg, NL80211_ATTR_WIPHY, rdev->wiphy_idx) ||
nla_put_u32(msg, NL80211_ATTR_IFINDEX, netdev->ifindex) ||
- nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, bssid))
+ nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, peer_addr))
goto nla_put_failure;
if ((td_bitmap_len > 0) && td_bitmap)
@@ -18325,7 +18205,7 @@ void cfg80211_links_removed(struct net_device *dev, u16 link_mask)
struct nlattr *links;
void *hdr;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
trace_cfg80211_links_removed(dev, link_mask);
if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION &&
@@ -19128,11 +19008,9 @@ void cfg80211_cqm_rssi_notify_work(struct wiphy *wiphy, struct wiphy_work *work)
struct sk_buff *msg;
s32 rssi_level;
- wdev_lock(wdev);
- cqm_config = rcu_dereference_protected(wdev->cqm_config,
- lockdep_is_held(&wdev->mtx));
+ cqm_config = wiphy_dereference(wdev->wiphy, wdev->cqm_config);
if (!wdev->cqm_config)
- goto unlock;
+ return;
cfg80211_cqm_rssi_update(rdev, wdev->netdev, cqm_config);
@@ -19141,7 +19019,7 @@ void cfg80211_cqm_rssi_notify_work(struct wiphy *wiphy, struct wiphy_work *work)
msg = cfg80211_prepare_cqm(wdev->netdev, NULL, GFP_KERNEL);
if (!msg)
- goto unlock;
+ return;
if (nla_put_u32(msg, NL80211_ATTR_CQM_RSSI_THRESHOLD_EVENT,
rssi_event))
@@ -19153,12 +19031,10 @@ void cfg80211_cqm_rssi_notify_work(struct wiphy *wiphy, struct wiphy_work *work)
cfg80211_send_cqm(msg, GFP_KERNEL);
- goto unlock;
+ return;
nla_put_failure:
nlmsg_free(msg);
- unlock:
- wdev_unlock(wdev);
}
void cfg80211_cqm_txe_notify(struct net_device *dev,
@@ -19402,7 +19278,7 @@ void cfg80211_ch_switch_notify(struct net_device *dev,
struct wiphy *wiphy = wdev->wiphy;
struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
WARN_INVALID_LINK_ID(wdev, link_id);
trace_cfg80211_ch_switch_notify(dev, chandef, link_id, punct_bitmap);
@@ -19447,7 +19323,7 @@ void cfg80211_ch_switch_started_notify(struct net_device *dev,
struct wiphy *wiphy = wdev->wiphy;
struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
WARN_INVALID_LINK_ID(wdev, link_id);
trace_cfg80211_ch_switch_started_notify(dev, chandef, link_id,
@@ -19470,7 +19346,7 @@ int cfg80211_bss_color_notify(struct net_device *dev,
struct sk_buff *msg;
void *hdr;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
trace_cfg80211_bss_color_notify(dev, cmd, count, color_bitmap);
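The nl80211.c hunks above all apply one conversion: the per-wdev mutex (wdev_lock()/wdev_unlock(), ASSERT_WDEV_LOCK()) is removed in favour of the wiphy-wide mutex the callers already hold, with lockdep_assert_wiphy() documenting the requirement. A minimal sketch of the before/after shape, using a hypothetical helper do_something() that is not part of the patch:

/* Before: every operation took the per-wdev mutex around its work. */
static int op_locked_per_wdev(struct wireless_dev *wdev)
{
	int err;

	wdev_lock(wdev);
	err = do_something(wdev);	/* hypothetical helper */
	wdev_unlock(wdev);
	return err;
}

/* After: the caller already holds the wiphy mutex (e.g. via wiphy_lock()),
 * so the operation only asserts that and returns the result directly.
 */
static int op_under_wiphy_mutex(struct wireless_dev *wdev)
{
	lockdep_assert_wiphy(wdev->wiphy);
	return do_something(wdev);	/* hypothetical helper */
}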
diff --git a/net/wireless/nl80211.h b/net/wireless/nl80211.h
index b4af53f9b227..aad40240d9cb 100644
--- a/net/wireless/nl80211.h
+++ b/net/wireless/nl80211.h
@@ -60,7 +60,7 @@ void nl80211_send_rx_auth(struct cfg80211_registered_device *rdev,
const u8 *buf, size_t len, gfp_t gfp);
void nl80211_send_rx_assoc(struct cfg80211_registered_device *rdev,
struct net_device *netdev,
- struct cfg80211_rx_assoc_resp *data);
+ struct cfg80211_rx_assoc_resp_data *data);
void nl80211_send_deauth(struct cfg80211_registered_device *rdev,
struct net_device *netdev,
const u8 *buf, size_t len,
@@ -82,8 +82,11 @@ void nl80211_send_connect_result(struct cfg80211_registered_device *rdev,
void nl80211_send_roamed(struct cfg80211_registered_device *rdev,
struct net_device *netdev,
struct cfg80211_roam_info *info, gfp_t gfp);
+/* For STA/GC, indicate port authorized with AP/GO bssid.
+ * For GO/AP, use peer GC/STA mac_addr.
+ */
void nl80211_send_port_authorized(struct cfg80211_registered_device *rdev,
- struct net_device *netdev, const u8 *bssid,
+ struct net_device *netdev, const u8 *peer_addr,
const u8 *td_bitmap, u8 td_bitmap_len);
void nl80211_send_disconnected(struct cfg80211_registered_device *rdev,
struct net_device *netdev, u16 reason,
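The bssid -> peer_addr rename above goes with the comment added to the declaration: port-authorized events are no longer client-only. On a STA/P2P-client interface the address is still the AP/GO BSSID; on an AP/P2P-GO interface it is the MAC address of the peer that was authorized. A hedged sketch of the two driver-side call shapes (the netdev and address variables are placeholders, not from the patch):

/* STA / P2P client: report the BSSID of the AP/GO we are connected to */
cfg80211_port_authorized(sta_netdev, ap_bssid, NULL, 0, GFP_KERNEL);

/* AP / P2P GO: report the peer STA/GC that completed the handshake */
cfg80211_port_authorized(go_netdev, peer_sta_addr, NULL, 0, GFP_KERNEL);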
diff --git a/net/wireless/ocb.c b/net/wireless/ocb.c
index 29afaf3da54f..7d2d67f13ad9 100644
--- a/net/wireless/ocb.c
+++ b/net/wireless/ocb.c
@@ -4,7 +4,7 @@
*
* Copyright: (c) 2014 Czech Technical University in Prague
* (c) 2014 Volkswagen Group Research
- * Copyright (C) 2022 Intel Corporation
+ * Copyright (C) 2022-2023 Intel Corporation
* Author: Rostislav Lisovy <rostislav.lisovy@fel.cvut.cz>
* Funded by: Volkswagen Group Research
*/
@@ -15,14 +15,14 @@
#include "core.h"
#include "rdev-ops.h"
-int __cfg80211_join_ocb(struct cfg80211_registered_device *rdev,
- struct net_device *dev,
- struct ocb_setup *setup)
+int cfg80211_join_ocb(struct cfg80211_registered_device *rdev,
+ struct net_device *dev,
+ struct ocb_setup *setup)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
int err;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_OCB)
return -EOPNOTSUPP;
@@ -40,27 +40,13 @@ int __cfg80211_join_ocb(struct cfg80211_registered_device *rdev,
return err;
}
-int cfg80211_join_ocb(struct cfg80211_registered_device *rdev,
- struct net_device *dev,
- struct ocb_setup *setup)
-{
- struct wireless_dev *wdev = dev->ieee80211_ptr;
- int err;
-
- wdev_lock(wdev);
- err = __cfg80211_join_ocb(rdev, dev, setup);
- wdev_unlock(wdev);
-
- return err;
-}
-
-int __cfg80211_leave_ocb(struct cfg80211_registered_device *rdev,
- struct net_device *dev)
+int cfg80211_leave_ocb(struct cfg80211_registered_device *rdev,
+ struct net_device *dev)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
int err;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (dev->ieee80211_ptr->iftype != NL80211_IFTYPE_OCB)
return -EOPNOTSUPP;
@@ -77,16 +63,3 @@ int __cfg80211_leave_ocb(struct cfg80211_registered_device *rdev,
return err;
}
-
-int cfg80211_leave_ocb(struct cfg80211_registered_device *rdev,
- struct net_device *dev)
-{
- struct wireless_dev *wdev = dev->ieee80211_ptr;
- int err;
-
- wdev_lock(wdev);
- err = __cfg80211_leave_ocb(rdev, dev);
- wdev_unlock(wdev);
-
- return err;
-}
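With the locked wrappers deleted, cfg80211_join_ocb() and cfg80211_leave_ocb() now assume the wiphy mutex is already held by the caller, as the nl80211 entry points arrange. A short sketch of the calling convention for a context that does not already hold the lock, assuming rdev/dev/setup are in scope:

	int err;

	wiphy_lock(&rdev->wiphy);
	err = cfg80211_join_ocb(rdev, dev, &setup);	/* asserts lockdep_assert_wiphy() */
	wiphy_unlock(&rdev->wiphy);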
diff --git a/net/wireless/pmsr.c b/net/wireless/pmsr.c
index 9611aa0bd051..e106dcea3977 100644
--- a/net/wireless/pmsr.c
+++ b/net/wireless/pmsr.c
@@ -600,7 +600,7 @@ static void cfg80211_pmsr_process_abort(struct wireless_dev *wdev)
struct cfg80211_pmsr_request *req, *tmp;
LIST_HEAD(free_list);
- lockdep_assert_held(&wdev->mtx);
+ lockdep_assert_wiphy(wdev->wiphy);
spin_lock_bh(&wdev->pmsr_lock);
list_for_each_entry_safe(req, tmp, &wdev->pmsr_list, list) {
@@ -623,9 +623,7 @@ void cfg80211_pmsr_free_wk(struct work_struct *work)
pmsr_free_wk);
wiphy_lock(wdev->wiphy);
- wdev_lock(wdev);
cfg80211_pmsr_process_abort(wdev);
- wdev_unlock(wdev);
wiphy_unlock(wdev->wiphy);
}
diff --git a/net/wireless/rdev-ops.h b/net/wireless/rdev-ops.h
index 90bb7ac4b930..2214a90cf101 100644
--- a/net/wireless/rdev-ops.h
+++ b/net/wireless/rdev-ops.h
@@ -173,7 +173,7 @@ static inline int rdev_start_ap(struct cfg80211_registered_device *rdev,
static inline int rdev_change_beacon(struct cfg80211_registered_device *rdev,
struct net_device *dev,
- struct cfg80211_beacon_data *info)
+ struct cfg80211_ap_update *info)
{
int ret;
trace_rdev_change_beacon(&rdev->wiphy, dev, info);
diff --git a/net/wireless/reg.c b/net/wireless/reg.c
index 0317cf9da307..2ef4f6cc7a32 100644
--- a/net/wireless/reg.c
+++ b/net/wireless/reg.c
@@ -1283,7 +1283,9 @@ static bool is_valid_rd(const struct ieee80211_regdomain *rd)
* 60 GHz band.
* This resolution can be lowered and should be considered as we add
* regulatory rule support for other "bands".
- **/
+ *
+ * Returns: whether or not the frequency is in the range
+ */
static bool freq_in_rule_band(const struct ieee80211_freq_range *freq_range,
u32 freq_khz)
{
@@ -1492,6 +1494,8 @@ static void add_rule(struct ieee80211_reg_rule *rule,
* Returns a pointer to the regulatory domain structure which will hold the
* resulting intersection of rules between rd1 and rd2. We will
* kzalloc() this structure for you.
+ *
+ * Returns: the intersected regdomain
*/
static struct ieee80211_regdomain *
regdom_intersect(const struct ieee80211_regdomain *rd1,
@@ -1589,6 +1593,8 @@ static u32 map_regdom_flags(u32 rd_flags)
channel_flags |= IEEE80211_CHAN_NO_320MHZ;
if (rd_flags & NL80211_RRF_NO_EHT)
channel_flags |= IEEE80211_CHAN_NO_EHT;
+ if (rd_flags & NL80211_RRF_PSD)
+ channel_flags |= IEEE80211_CHAN_PSD;
return channel_flags;
}
@@ -1795,6 +1801,9 @@ static void handle_channel_single_rule(struct wiphy *wiphy,
chan->dfs_cac_ms = reg_rule->dfs_cac_ms;
}
+ if (chan->flags & IEEE80211_CHAN_PSD)
+ chan->psd = reg_rule->psd;
+
return;
}
@@ -1815,6 +1824,9 @@ static void handle_channel_single_rule(struct wiphy *wiphy,
chan->dfs_cac_ms = IEEE80211_DFS_MIN_CAC_TIME_MS;
}
+ if (chan->flags & IEEE80211_CHAN_PSD)
+ chan->psd = reg_rule->psd;
+
if (chan->orig_mpwr) {
/*
* Devices that use REGULATORY_COUNTRY_IE_FOLLOW_POWER
@@ -1884,6 +1896,12 @@ static void handle_channel_adjacent_rules(struct wiphy *wiphy,
rrule2->dfs_cac_ms);
}
+ if ((rrule1->flags & NL80211_RRF_PSD) &&
+ (rrule2->flags & NL80211_RRF_PSD))
+ chan->psd = min_t(s8, rrule1->psd, rrule2->psd);
+ else
+ chan->flags &= ~NL80211_RRF_PSD;
+
return;
}
@@ -2151,6 +2169,13 @@ static bool reg_is_world_roaming(struct wiphy *wiphy)
return false;
}
+static void reg_call_notifier(struct wiphy *wiphy,
+ struct regulatory_request *request)
+{
+ if (wiphy->reg_notifier)
+ wiphy->reg_notifier(wiphy, request);
+}
+
static void handle_reg_beacon(struct wiphy *wiphy, unsigned int chan_idx,
struct reg_beacon *reg_beacon)
{
@@ -2158,6 +2183,7 @@ static void handle_reg_beacon(struct wiphy *wiphy, unsigned int chan_idx,
struct ieee80211_channel *chan;
bool channel_changed = false;
struct ieee80211_channel chan_before;
+ struct regulatory_request *lr = get_last_request();
sband = wiphy->bands[reg_beacon->chan.band];
chan = &sband->channels[chan_idx];
@@ -2183,8 +2209,11 @@ static void handle_reg_beacon(struct wiphy *wiphy, unsigned int chan_idx,
channel_changed = true;
}
- if (channel_changed)
+ if (channel_changed) {
nl80211_send_beacon_hint_event(wiphy, &chan_before, chan);
+ if (wiphy->flags & WIPHY_FLAG_CHANNEL_CHANGE_ON_BEACON)
+ reg_call_notifier(wiphy, lr);
+ }
}
/*
@@ -2327,13 +2356,6 @@ static void reg_process_ht_flags(struct wiphy *wiphy)
reg_process_ht_flags_band(wiphy, wiphy->bands[band]);
}
-static void reg_call_notifier(struct wiphy *wiphy,
- struct regulatory_request *request)
-{
- if (wiphy->reg_notifier)
- wiphy->reg_notifier(wiphy, request);
-}
-
static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
{
struct cfg80211_chan_def chandef = {};
@@ -2342,12 +2364,11 @@ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
bool ret;
int link;
- wdev_lock(wdev);
iftype = wdev->iftype;
/* make sure the interface is active */
if (!wdev->netdev || !netif_running(wdev->netdev))
- goto wdev_inactive_unlock;
+ return true;
for (link = 0; link < ARRAY_SIZE(wdev->links); link++) {
struct ieee80211_channel *chan;
@@ -2407,8 +2428,6 @@ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
break;
}
- wdev_unlock(wdev);
-
switch (iftype) {
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_P2P_GO:
@@ -2429,16 +2448,8 @@ static bool reg_wdev_chan_valid(struct wiphy *wiphy, struct wireless_dev *wdev)
default:
break;
}
-
- wdev_lock(wdev);
}
- wdev_unlock(wdev);
-
- return true;
-
-wdev_inactive_unlock:
- wdev_unlock(wdev);
return true;
}
@@ -2461,7 +2472,7 @@ static void reg_check_chans_work(struct work_struct *work)
pr_debug("Verifying active interfaces after reg change\n");
rtnl_lock();
- list_for_each_entry(rdev, &cfg80211_rdev_list, list)
+ for_each_rdev(rdev)
reg_leave_invalid_chans(&rdev->wiphy);
rtnl_unlock();
@@ -2515,7 +2526,7 @@ static void update_all_wiphy_regulatory(enum nl80211_reg_initiator initiator)
ASSERT_RTNL();
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
wiphy = &rdev->wiphy;
wiphy_update_regulatory(wiphy, initiator);
}
@@ -2577,6 +2588,9 @@ static void handle_channel_custom(struct wiphy *wiphy,
chan->dfs_cac_ms = IEEE80211_DFS_MIN_CAC_TIME_MS;
}
+ if (chan->flags & IEEE80211_CHAN_PSD)
+ chan->psd = reg_rule->psd;
+
chan->max_power = chan->max_reg_power;
}
@@ -2663,6 +2677,9 @@ static void reg_set_request_processed(void)
*
* The wireless subsystem can use this function to process
* a regulatory request issued by the regulatory core.
+ *
+ * Returns: %REG_REQ_OK or %REG_REQ_IGNORE, indicating if the
+ * hint was processed or ignored
*/
static enum reg_request_treatment
reg_process_hint_core(struct regulatory_request *core_request)
@@ -2719,6 +2736,9 @@ __reg_process_hint_user(struct regulatory_request *user_request)
*
* The wireless subsystem can use this function to process
* a regulatory request initiated by userspace.
+ *
+ * Returns: %REG_REQ_OK or %REG_REQ_IGNORE, indicating if the
+ * hint was processed or ignored
*/
static enum reg_request_treatment
reg_process_hint_user(struct regulatory_request *user_request)
@@ -2774,7 +2794,7 @@ __reg_process_hint_driver(struct regulatory_request *driver_request)
* The wireless subsystem can use this function to process
* a regulatory request issued by an 802.11 driver.
*
- * Returns one of the different reg request treatment values.
+ * Returns: one of the different reg request treatment values.
*/
static enum reg_request_treatment
reg_process_hint_driver(struct wiphy *wiphy,
@@ -2878,7 +2898,7 @@ __reg_process_hint_country_ie(struct wiphy *wiphy,
* The wireless subsystem can use this function to process
* a regulatory request issued by a country Information Element.
*
- * Returns one of the different reg request treatment values.
+ * Returns: one of the different reg request treatment values.
*/
static enum reg_request_treatment
reg_process_hint_country_ie(struct wiphy *wiphy,
@@ -2991,7 +3011,7 @@ static void wiphy_all_share_dfs_chan_state(struct wiphy *wiphy)
ASSERT_RTNL();
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
if (wiphy == &rdev->wiphy)
continue;
wiphy_share_dfs_chan_state(wiphy, &rdev->wiphy);
@@ -3057,7 +3077,7 @@ static void notify_self_managed_wiphys(struct regulatory_request *request)
struct cfg80211_registered_device *rdev;
struct wiphy *wiphy;
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
wiphy = &rdev->wiphy;
if (wiphy->regulatory_flags & REGULATORY_WIPHY_SELF_MANAGED &&
request->initiator == NL80211_REGDOM_SET_BY_USER)
@@ -3122,7 +3142,7 @@ static void reg_process_pending_beacon_hints(void)
list_del_init(&pending_beacon->list);
/* Applies the beacon hint to current wiphys */
- list_for_each_entry(rdev, &cfg80211_rdev_list, list)
+ for_each_rdev(rdev)
wiphy_update_new_beacon(&rdev->wiphy, pending_beacon);
/* Remembers the beacon hint for new wiphys or reg changes */
@@ -3177,7 +3197,7 @@ static void reg_process_self_managed_hints(void)
ASSERT_RTNL();
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
wiphy_lock(&rdev->wiphy);
reg_process_self_managed_hint(&rdev->wiphy);
wiphy_unlock(&rdev->wiphy);
@@ -3517,7 +3537,7 @@ static void restore_regulatory_settings(bool reset_user, bool cached)
world_alpha2[0] = cfg80211_world_regdom->alpha2[0];
world_alpha2[1] = cfg80211_world_regdom->alpha2[1];
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
if (rdev->wiphy.regulatory_flags & REGULATORY_WIPHY_SELF_MANAGED)
continue;
if (rdev->wiphy.regulatory_flags & REGULATORY_CUSTOM_REG)
@@ -3574,15 +3594,15 @@ static bool is_wiphy_all_set_reg_flag(enum ieee80211_regulatory_flags flag)
struct cfg80211_registered_device *rdev;
struct wireless_dev *wdev;
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
+ wiphy_lock(&rdev->wiphy);
list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) {
- wdev_lock(wdev);
if (!(wdev->wiphy->regulatory_flags & flag)) {
- wdev_unlock(wdev);
+ wiphy_unlock(&rdev->wiphy);
return false;
}
- wdev_unlock(wdev);
}
+ wiphy_unlock(&rdev->wiphy);
}
return true;
@@ -3838,7 +3858,7 @@ static int reg_set_rd_driver(const struct ieee80211_regdomain *rd,
{
const struct ieee80211_regdomain *regd;
const struct ieee80211_regdomain *intersected_rd = NULL;
- const struct ieee80211_regdomain *tmp;
+ const struct ieee80211_regdomain *tmp = NULL;
struct wiphy *request_wiphy;
if (is_world_regdom(rd->alpha2))
@@ -3861,10 +3881,8 @@ static int reg_set_rd_driver(const struct ieee80211_regdomain *rd,
if (!driver_request->intersect) {
ASSERT_RTNL();
wiphy_lock(request_wiphy);
- if (request_wiphy->regd) {
- wiphy_unlock(request_wiphy);
- return -EALREADY;
- }
+ if (request_wiphy->regd)
+ tmp = get_wiphy_regdom(request_wiphy);
regd = reg_copy_regd(rd);
if (IS_ERR(regd)) {
@@ -3873,6 +3891,7 @@ static int reg_set_rd_driver(const struct ieee80211_regdomain *rd,
}
rcu_assign_pointer(request_wiphy->regd, regd);
+ rcu_free_regdom(tmp);
wiphy_unlock(request_wiphy);
reset_regdomains(false, rd);
return 0;
@@ -4244,7 +4263,7 @@ void regulatory_propagate_dfs_state(struct wiphy *wiphy,
if (WARN_ON(!cfg80211_chandef_valid(chandef)))
return;
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
if (wiphy == &rdev->wiphy)
continue;
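Besides the locking and for_each_rdev() conversions, the reg.c hunks add PSD (power spectral density) limits: NL80211_RRF_PSD on a regulatory rule maps to IEEE80211_CHAN_PSD on the channel, chan->psd carries the rule's limit, and adjacent rules are combined via min_t(s8, ...). A hedged sketch of a consumer, using a hypothetical helper name:

/* sketch: expose a channel's PSD limit, if the regulatory rule set one */
static bool chan_get_psd_limit(const struct ieee80211_channel *chan, s8 *psd)
{
	if (!(chan->flags & IEEE80211_CHAN_PSD))
		return false;

	*psd = chan->psd;
	return true;
}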
diff --git a/net/wireless/reg.h b/net/wireless/reg.h
index f3707f729024..a703e53c23ee 100644
--- a/net/wireless/reg.h
+++ b/net/wireless/reg.h
@@ -5,7 +5,7 @@
/*
* Copyright 2008-2011 Luis R. Rodriguez <mcgrof@qca.qualcomm.com>
- * Copyright (C) 2019 Intel Corporation
+ * Copyright (C) 2019, 2023 Intel Corporation
*
* Permission to use, copy, modify, and/or distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
@@ -133,7 +133,7 @@ void regulatory_hint_disconnect(void);
/**
* cfg80211_get_unii - get the U-NII band for the frequency
* @freq: the frequency for which we want to get the UNII band.
-
+ *
* Get a value specifying the U-NII band frequency belongs to.
* U-NII bands are defined by the FCC in C.F.R 47 part 15.
*
@@ -156,11 +156,11 @@ bool regulatory_indoor_allowed(void);
/**
* regulatory_propagate_dfs_state - Propagate DFS channel state to other wiphys
- * @wiphy - wiphy on which radar is detected and the event will be propagated
+ * @wiphy: wiphy on which radar is detected and the event will be propagated
* to other available wiphys having the same DFS domain
- * @chandef - Channel definition of radar detected channel
- * @dfs_state - DFS channel state to be set
- * @event - Type of radar event which triggered this DFS state change
+ * @chandef: Channel definition of radar detected channel
+ * @dfs_state: DFS channel state to be set
+ * @event: Type of radar event which triggered this DFS state change
*
* This function should be called with rtnl lock held.
*/
@@ -171,8 +171,8 @@ void regulatory_propagate_dfs_state(struct wiphy *wiphy,
/**
* reg_dfs_domain_same - Checks if both wiphy have same DFS domain configured
- * @wiphy1 - wiphy it's dfs_region to be checked against that of wiphy2
- * @wiphy2 - wiphy it's dfs_region to be checked against that of wiphy1
+ * @wiphy1: wiphy it's dfs_region to be checked against that of wiphy2
+ * @wiphy2: wiphy it's dfs_region to be checked against that of wiphy1
*/
bool reg_dfs_domain_same(struct wiphy *wiphy1, struct wiphy *wiphy2);
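The reg.h hunks are kernel-doc fixes only: parameters are written as '@name: description' rather than '@name - description', and the comment block gets its missing '*' continuation line. For reference, the shape kernel-doc expects (generic example, not from the patch):

/**
 * some_function - one-line summary
 * @first: what the first argument is for
 * @second: what the second argument is for
 *
 * Optional longer description.
 *
 * Returns: what the caller gets back.
 */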
diff --git a/net/wireless/scan.c b/net/wireless/scan.c
index 8210a6090ac1..9e5ccffd6868 100644
--- a/net/wireless/scan.c
+++ b/net/wireless/scan.c
@@ -830,10 +830,47 @@ static int cfg80211_scan_6ghz(struct cfg80211_registered_device *rdev)
list_for_each_entry(intbss, &rdev->bss_list, list) {
struct cfg80211_bss *res = &intbss->pub;
const struct cfg80211_bss_ies *ies;
+ const struct element *ssid_elem;
+ struct cfg80211_colocated_ap *entry;
+ u32 s_ssid_tmp;
+ int ret;
ies = rcu_access_pointer(res->ies);
count += cfg80211_parse_colocated_ap(ies,
&coloc_ap_list);
+
+ /* In case the scan request specified a specific BSSID
+ * and the BSS is found and operating on 6GHz band then
+ * add this AP to the collocated APs list.
+ * This is relevant for ML probe requests when the lower
+ * band APs have not been discovered.
+ */
+ if (is_broadcast_ether_addr(rdev_req->bssid) ||
+ !ether_addr_equal(rdev_req->bssid, res->bssid) ||
+ res->channel->band != NL80211_BAND_6GHZ)
+ continue;
+
+ ret = cfg80211_calc_short_ssid(ies, &ssid_elem,
+ &s_ssid_tmp);
+ if (ret)
+ continue;
+
+ entry = kzalloc(sizeof(*entry) + IEEE80211_MAX_SSID_LEN,
+ GFP_ATOMIC);
+
+ if (!entry)
+ continue;
+
+ memcpy(entry->bssid, res->bssid, ETH_ALEN);
+ entry->short_ssid = s_ssid_tmp;
+ memcpy(entry->ssid, ssid_elem->data,
+ ssid_elem->datalen);
+ entry->ssid_len = ssid_elem->datalen;
+ entry->short_ssid_valid = true;
+ entry->center_freq = res->channel->center_freq;
+
+ list_add_tail(&entry->list, &coloc_ap_list);
+ count++;
}
spin_unlock_bh(&rdev->bss_lock);
}
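The block added above feeds ML probe requests: when a scan asks for one specific BSSID that is already known on 6 GHz, that BSS is put on the colocated-AP list so it still gets probed even though no lower-band AP advertised it. The stored short SSID is, per the spec, a CRC-32 over the SSID as used in Reduced Neighbor Report entries; a hedged sketch of the helper's call shape (bss is a placeholder, and the ies pointer must be read under RCU or rdev->bss_lock):

	const struct cfg80211_bss_ies *ies = rcu_access_pointer(bss->ies);
	const struct element *ssid_elem;
	u32 s_ssid;

	if (ies && !cfg80211_calc_short_ssid(ies, &ssid_elem, &s_ssid)) {
		/* s_ssid holds the short SSID, ssid_elem points at the
		 * SSID element inside the BSS IEs
		 */
	}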
@@ -1642,8 +1679,6 @@ static bool cfg80211_combine_bsses(struct cfg80211_registered_device *rdev,
continue;
if (bss->pub.channel != new->pub.channel)
continue;
- if (bss->pub.scan_width != new->pub.scan_width)
- continue;
if (rcu_access_pointer(bss->pub.beacon_ies))
continue;
ies = rcu_access_pointer(bss->pub.ies);
@@ -1940,8 +1975,7 @@ EXPORT_SYMBOL(cfg80211_get_ies_channel_number);
*/
static struct ieee80211_channel *
cfg80211_get_bss_channel(struct wiphy *wiphy, const u8 *ie, size_t ielen,
- struct ieee80211_channel *channel,
- enum nl80211_bss_scan_width scan_width)
+ struct ieee80211_channel *channel)
{
u32 freq;
int channel_number;
@@ -1981,16 +2015,6 @@ cfg80211_get_bss_channel(struct wiphy *wiphy, const u8 *ie, size_t ielen,
return channel;
}
- if (scan_width == NL80211_BSS_CHAN_WIDTH_10 ||
- scan_width == NL80211_BSS_CHAN_WIDTH_5) {
- /*
- * Ignore channel number in 5 and 10 MHz channels where there
- * may not be an n:1 or 1:n mapping between frequencies and
- * channel numbers.
- */
- return channel;
- }
-
/*
* Use the channel determined through the payload channel number
* instead of the RX channel reported by the driver.
@@ -2050,14 +2074,12 @@ cfg80211_inform_single_bss_data(struct wiphy *wiphy,
channel = data->channel;
if (!channel)
channel = cfg80211_get_bss_channel(wiphy, data->ie, data->ielen,
- drv_data->chan,
- drv_data->scan_width);
+ drv_data->chan);
if (!channel)
return NULL;
memcpy(tmp.pub.bssid, data->bssid, ETH_ALEN);
tmp.pub.channel = channel;
- tmp.pub.scan_width = drv_data->scan_width;
if (data->bss_source != BSS_SOURCE_STA_PROFILE)
tmp.pub.signal = drv_data->signal;
else
@@ -2358,8 +2380,8 @@ ssize_t cfg80211_defragment_element(const struct element *elem, const u8 *ies,
/* elem might be invalid after the memmove */
next = (void *)(elem->data + elem->datalen);
-
elem_datalen = elem->datalen;
+
if (elem->id == WLAN_EID_EXTENSION) {
copied = elem->datalen - 1;
if (copied > data_len)
@@ -2380,7 +2402,7 @@ ssize_t cfg80211_defragment_element(const struct element *elem, const u8 *ies,
for (elem = next;
elem->data < ies + ieslen &&
- elem->data + elem->datalen < ies + ieslen;
+ elem->data + elem->datalen <= ies + ieslen;
elem = next) {
/* elem might be invalid after the memmove */
next = (void *)(elem->data + elem->datalen);
@@ -2818,8 +2840,7 @@ cfg80211_inform_single_bss_frame_data(struct wiphy *wiphy,
variable = ext->u.s1g_beacon.variable;
}
- channel = cfg80211_get_bss_channel(wiphy, variable,
- ielen, data->chan, data->scan_width);
+ channel = cfg80211_get_bss_channel(wiphy, variable, ielen, data->chan);
if (!channel)
return NULL;
@@ -2872,7 +2893,6 @@ cfg80211_inform_single_bss_frame_data(struct wiphy *wiphy,
tmp.pub.beacon_interval = beacon_int;
tmp.pub.capability = capability;
tmp.pub.channel = channel;
- tmp.pub.scan_width = data->scan_width;
tmp.pub.signal = data->signal;
tmp.ts_boottime = data->boottime_ns;
tmp.parent_tsf = data->parent_tsf;
@@ -3426,59 +3446,63 @@ ieee80211_bss(struct wiphy *wiphy, struct iw_request_info *info,
cfg = (u8 *)ie + 2;
memset(&iwe, 0, sizeof(iwe));
iwe.cmd = IWEVCUSTOM;
- sprintf(buf, "Mesh Network Path Selection Protocol ID: "
- "0x%02X", cfg[0]);
- iwe.u.data.length = strlen(buf);
+ iwe.u.data.length = sprintf(buf,
+ "Mesh Network Path Selection Protocol ID: 0x%02X",
+ cfg[0]);
current_ev = iwe_stream_add_point_check(info,
current_ev,
end_buf,
&iwe, buf);
if (IS_ERR(current_ev))
goto unlock;
- sprintf(buf, "Path Selection Metric ID: 0x%02X",
- cfg[1]);
- iwe.u.data.length = strlen(buf);
+ iwe.u.data.length = sprintf(buf,
+ "Path Selection Metric ID: 0x%02X",
+ cfg[1]);
current_ev = iwe_stream_add_point_check(info,
current_ev,
end_buf,
&iwe, buf);
if (IS_ERR(current_ev))
goto unlock;
- sprintf(buf, "Congestion Control Mode ID: 0x%02X",
- cfg[2]);
- iwe.u.data.length = strlen(buf);
+ iwe.u.data.length = sprintf(buf,
+ "Congestion Control Mode ID: 0x%02X",
+ cfg[2]);
current_ev = iwe_stream_add_point_check(info,
current_ev,
end_buf,
&iwe, buf);
if (IS_ERR(current_ev))
goto unlock;
- sprintf(buf, "Synchronization ID: 0x%02X", cfg[3]);
- iwe.u.data.length = strlen(buf);
+ iwe.u.data.length = sprintf(buf,
+ "Synchronization ID: 0x%02X",
+ cfg[3]);
current_ev = iwe_stream_add_point_check(info,
current_ev,
end_buf,
&iwe, buf);
if (IS_ERR(current_ev))
goto unlock;
- sprintf(buf, "Authentication ID: 0x%02X", cfg[4]);
- iwe.u.data.length = strlen(buf);
+ iwe.u.data.length = sprintf(buf,
+ "Authentication ID: 0x%02X",
+ cfg[4]);
current_ev = iwe_stream_add_point_check(info,
current_ev,
end_buf,
&iwe, buf);
if (IS_ERR(current_ev))
goto unlock;
- sprintf(buf, "Formation Info: 0x%02X", cfg[5]);
- iwe.u.data.length = strlen(buf);
+ iwe.u.data.length = sprintf(buf,
+ "Formation Info: 0x%02X",
+ cfg[5]);
current_ev = iwe_stream_add_point_check(info,
current_ev,
end_buf,
&iwe, buf);
if (IS_ERR(current_ev))
goto unlock;
- sprintf(buf, "Capabilities: 0x%02X", cfg[6]);
- iwe.u.data.length = strlen(buf);
+ iwe.u.data.length = sprintf(buf,
+ "Capabilities: 0x%02X",
+ cfg[6]);
current_ev = iwe_stream_add_point_check(info,
current_ev,
end_buf,
@@ -3534,17 +3558,16 @@ ieee80211_bss(struct wiphy *wiphy, struct iw_request_info *info,
memset(&iwe, 0, sizeof(iwe));
iwe.cmd = IWEVCUSTOM;
- sprintf(buf, "tsf=%016llx", (unsigned long long)(ies->tsf));
- iwe.u.data.length = strlen(buf);
+ iwe.u.data.length = sprintf(buf, "tsf=%016llx",
+ (unsigned long long)(ies->tsf));
current_ev = iwe_stream_add_point_check(info, current_ev, end_buf,
&iwe, buf);
if (IS_ERR(current_ev))
goto unlock;
memset(&iwe, 0, sizeof(iwe));
iwe.cmd = IWEVCUSTOM;
- sprintf(buf, " Last beacon: %ums ago",
- elapsed_jiffies_msecs(bss->ts));
- iwe.u.data.length = strlen(buf);
+ iwe.u.data.length = sprintf(buf, " Last beacon: %ums ago",
+ elapsed_jiffies_msecs(bss->ts));
current_ev = iwe_stream_add_point_check(info, current_ev,
end_buf, &iwe, buf);
if (IS_ERR(current_ev))
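The wireless-extensions scan dump above drops the separate strlen() calls and uses sprintf()'s return value for iwe.u.data.length directly; sprintf() already returns the number of characters written, excluding the terminating NUL. A trivial stand-alone illustration:

	char buf[32];
	int len = sprintf(buf, "tsf=%016llx", 0x123456789abcdef0ULL);

	/* len == strlen(buf), so the extra strlen() pass was redundant */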
diff --git a/net/wireless/sme.c b/net/wireless/sme.c
index 9bba233b5a6e..acfe66da7109 100644
--- a/net/wireless/sme.c
+++ b/net/wireless/sme.c
@@ -67,7 +67,7 @@ static int cfg80211_conn_scan(struct wireless_dev *wdev)
struct cfg80211_scan_request *request;
int n_channels, err;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (rdev->scan_req || rdev->scan_msg)
return -EBUSY;
@@ -151,7 +151,7 @@ static int cfg80211_conn_do_work(struct wireless_dev *wdev,
struct cfg80211_assoc_request req = {};
int err;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (!wdev->conn)
return 0;
@@ -255,16 +255,13 @@ void cfg80211_conn_work(struct work_struct *work)
if (!wdev->netdev)
continue;
- wdev_lock(wdev);
- if (!netif_running(wdev->netdev)) {
- wdev_unlock(wdev);
+ if (!netif_running(wdev->netdev))
continue;
- }
+
if (!wdev->conn ||
- wdev->conn->state == CFG80211_CONN_CONNECTED) {
- wdev_unlock(wdev);
+ wdev->conn->state == CFG80211_CONN_CONNECTED)
continue;
- }
+
if (wdev->conn->params.bssid) {
memcpy(bssid_buf, wdev->conn->params.bssid, ETH_ALEN);
bssid = bssid_buf;
@@ -279,7 +276,6 @@ void cfg80211_conn_work(struct work_struct *work)
cr.timeout_reason = treason;
__cfg80211_connect_result(wdev->netdev, &cr, false);
}
- wdev_unlock(wdev);
}
wiphy_unlock(&rdev->wiphy);
@@ -300,7 +296,7 @@ static struct cfg80211_bss *cfg80211_get_conn_bss(struct wireless_dev *wdev)
struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
struct cfg80211_bss *bss;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
bss = cfg80211_get_bss(wdev->wiphy, wdev->conn->params.channel,
wdev->conn->params.bssid,
@@ -317,13 +313,13 @@ static struct cfg80211_bss *cfg80211_get_conn_bss(struct wireless_dev *wdev)
return bss;
}
-static void __cfg80211_sme_scan_done(struct net_device *dev)
+void cfg80211_sme_scan_done(struct net_device *dev)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
struct cfg80211_bss *bss;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (!wdev->conn)
return;
@@ -339,15 +335,6 @@ static void __cfg80211_sme_scan_done(struct net_device *dev)
schedule_work(&rdev->conn_work);
}
-void cfg80211_sme_scan_done(struct net_device *dev)
-{
- struct wireless_dev *wdev = dev->ieee80211_ptr;
-
- wdev_lock(wdev);
- __cfg80211_sme_scan_done(dev);
- wdev_unlock(wdev);
-}
-
void cfg80211_sme_rx_auth(struct wireless_dev *wdev, const u8 *buf, size_t len)
{
struct wiphy *wiphy = wdev->wiphy;
@@ -355,7 +342,7 @@ void cfg80211_sme_rx_auth(struct wireless_dev *wdev, const u8 *buf, size_t len)
struct ieee80211_mgmt *mgmt = (struct ieee80211_mgmt *)buf;
u16 status_code = le16_to_cpu(mgmt->u.auth.status_code);
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (!wdev->conn || wdev->conn->state == CFG80211_CONN_CONNECTED)
return;
@@ -702,14 +689,14 @@ static bool cfg80211_is_all_idle(void)
* need not issue a disconnect hint and reset any info such
* as chan dfs state, etc.
*/
- list_for_each_entry(rdev, &cfg80211_rdev_list, list) {
+ for_each_rdev(rdev) {
+ wiphy_lock(&rdev->wiphy);
list_for_each_entry(wdev, &rdev->wiphy.wdev_list, list) {
- wdev_lock(wdev);
if (wdev->conn || wdev->connected ||
cfg80211_beaconing_iface_active(wdev))
is_all_idle = false;
- wdev_unlock(wdev);
}
+ wiphy_unlock(&rdev->wiphy);
}
return is_all_idle;
@@ -761,7 +748,7 @@ void __cfg80211_connect_result(struct net_device *dev,
const u8 *connected_addr;
bool bss_not_found = false;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION &&
wdev->iftype != NL80211_IFTYPE_P2P_CLIENT))
@@ -1093,7 +1080,7 @@ void __cfg80211_roamed(struct wireless_dev *wdev,
unsigned int link;
const u8 *connected_addr;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION &&
wdev->iftype != NL80211_IFTYPE_P2P_CLIENT))
@@ -1294,24 +1281,29 @@ out:
}
EXPORT_SYMBOL(cfg80211_roamed);
-void __cfg80211_port_authorized(struct wireless_dev *wdev, const u8 *bssid,
+void __cfg80211_port_authorized(struct wireless_dev *wdev, const u8 *peer_addr,
const u8 *td_bitmap, u8 td_bitmap_len)
{
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION &&
- wdev->iftype != NL80211_IFTYPE_P2P_CLIENT))
+ wdev->iftype != NL80211_IFTYPE_P2P_CLIENT &&
+ wdev->iftype != NL80211_IFTYPE_AP &&
+ wdev->iftype != NL80211_IFTYPE_P2P_GO))
return;
- if (WARN_ON(!wdev->connected) ||
- WARN_ON(!ether_addr_equal(wdev->u.client.connected_addr, bssid)))
- return;
+ if (wdev->iftype == NL80211_IFTYPE_STATION ||
+ wdev->iftype == NL80211_IFTYPE_P2P_CLIENT) {
+ if (WARN_ON(!wdev->connected) ||
+ WARN_ON(!ether_addr_equal(wdev->u.client.connected_addr, peer_addr)))
+ return;
+ }
nl80211_send_port_authorized(wiphy_to_rdev(wdev->wiphy), wdev->netdev,
- bssid, td_bitmap, td_bitmap_len);
+ peer_addr, td_bitmap, td_bitmap_len);
}
-void cfg80211_port_authorized(struct net_device *dev, const u8 *bssid,
+void cfg80211_port_authorized(struct net_device *dev, const u8 *peer_addr,
const u8 *td_bitmap, u8 td_bitmap_len, gfp_t gfp)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
@@ -1319,7 +1311,7 @@ void cfg80211_port_authorized(struct net_device *dev, const u8 *bssid,
struct cfg80211_event *ev;
unsigned long flags;
- if (WARN_ON(!bssid))
+ if (WARN_ON(!peer_addr))
return;
ev = kzalloc(sizeof(*ev) + td_bitmap_len, gfp);
@@ -1327,7 +1319,7 @@ void cfg80211_port_authorized(struct net_device *dev, const u8 *bssid,
return;
ev->type = EVENT_PORT_AUTHORIZED;
- memcpy(ev->pa.bssid, bssid, ETH_ALEN);
+ memcpy(ev->pa.peer_addr, peer_addr, ETH_ALEN);
ev->pa.td_bitmap = ((u8 *)ev) + sizeof(*ev);
ev->pa.td_bitmap_len = td_bitmap_len;
memcpy((void *)ev->pa.td_bitmap, td_bitmap, td_bitmap_len);
@@ -1353,7 +1345,7 @@ void __cfg80211_disconnected(struct net_device *dev, const u8 *ie,
union iwreq_data wrqu;
#endif
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (WARN_ON(wdev->iftype != NL80211_IFTYPE_STATION &&
wdev->iftype != NL80211_IFTYPE_P2P_CLIENT))
@@ -1443,7 +1435,7 @@ int cfg80211_connect(struct cfg80211_registered_device *rdev,
struct wireless_dev *wdev = dev->ieee80211_ptr;
int err;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
/*
* If we have an ssid_len, we're trying to connect or are
@@ -1549,7 +1541,7 @@ int cfg80211_disconnect(struct cfg80211_registered_device *rdev,
struct wireless_dev *wdev = dev->ieee80211_ptr;
int err = 0;
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
kfree_sensitive(wdev->connect_keys);
wdev->connect_keys = NULL;
@@ -1585,19 +1577,18 @@ void cfg80211_autodisconnect_wk(struct work_struct *work)
struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
wiphy_lock(wdev->wiphy);
- wdev_lock(wdev);
if (wdev->conn_owner_nlportid) {
switch (wdev->iftype) {
case NL80211_IFTYPE_ADHOC:
- __cfg80211_leave_ibss(rdev, wdev->netdev, false);
+ cfg80211_leave_ibss(rdev, wdev->netdev, false);
break;
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_P2P_GO:
- __cfg80211_stop_ap(rdev, wdev->netdev, -1, false);
+ cfg80211_stop_ap(rdev, wdev->netdev, -1, false);
break;
case NL80211_IFTYPE_MESH_POINT:
- __cfg80211_leave_mesh(rdev, wdev->netdev);
+ cfg80211_leave_mesh(rdev, wdev->netdev);
break;
case NL80211_IFTYPE_STATION:
case NL80211_IFTYPE_P2P_CLIENT:
@@ -1622,6 +1613,5 @@ void cfg80211_autodisconnect_wk(struct work_struct *work)
}
}
- wdev_unlock(wdev);
wiphy_unlock(wdev->wiphy);
}
diff --git a/net/wireless/sysfs.c b/net/wireless/sysfs.c
index c629bac3f298..565511a3f461 100644
--- a/net/wireless/sysfs.c
+++ b/net/wireless/sysfs.c
@@ -105,14 +105,14 @@ static int wiphy_suspend(struct device *dev)
cfg80211_leave_all(rdev);
cfg80211_process_rdev_events(rdev);
}
- cfg80211_process_wiphy_works(rdev);
+ cfg80211_process_wiphy_works(rdev, NULL);
if (rdev->ops->suspend)
ret = rdev_suspend(rdev, rdev->wiphy.wowlan_config);
if (ret == 1) {
/* Driver refuse to configure wowlan */
cfg80211_leave_all(rdev);
cfg80211_process_rdev_events(rdev);
- cfg80211_process_wiphy_works(rdev);
+ cfg80211_process_wiphy_works(rdev, NULL);
ret = rdev_suspend(rdev, NULL);
}
if (ret == 0)
diff --git a/net/wireless/tests/Makefile b/net/wireless/tests/Makefile
new file mode 100644
index 000000000000..fa8e297bbc5e
--- /dev/null
+++ b/net/wireless/tests/Makefile
@@ -0,0 +1,3 @@
+cfg80211-tests-y += module.o fragmentation.o
+
+obj-$(CONFIG_CFG80211_KUNIT_TEST) += cfg80211-tests.o
diff --git a/net/wireless/tests/fragmentation.c b/net/wireless/tests/fragmentation.c
new file mode 100644
index 000000000000..49a339ca8880
--- /dev/null
+++ b/net/wireless/tests/fragmentation.c
@@ -0,0 +1,157 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * KUnit tests for element fragmentation
+ *
+ * Copyright (C) 2023 Intel Corporation
+ */
+#include <linux/ieee80211.h>
+#include <net/cfg80211.h>
+#include <kunit/test.h>
+
+static void defragment_0(struct kunit *test)
+{
+ ssize_t ret;
+ static const u8 input[] = {
+ [0] = WLAN_EID_EXTENSION,
+ [1] = 254,
+ [2] = WLAN_EID_EXT_EHT_MULTI_LINK,
+ [27] = 27,
+ [123] = 123,
+ [254 + 2] = WLAN_EID_FRAGMENT,
+ [254 + 3] = 7,
+ [254 + 3 + 7] = 0, /* for size */
+ };
+ u8 *data = kunit_kzalloc(test, sizeof(input), GFP_KERNEL);
+
+ KUNIT_ASSERT_NOT_NULL(test, data);
+
+ ret = cfg80211_defragment_element((void *)input,
+ input, sizeof(input),
+ data, sizeof(input),
+ WLAN_EID_FRAGMENT);
+ KUNIT_EXPECT_EQ(test, ret, 253);
+ KUNIT_EXPECT_MEMEQ(test, data, input + 3, 253);
+}
+
+static void defragment_1(struct kunit *test)
+{
+ ssize_t ret;
+ static const u8 input[] = {
+ [0] = WLAN_EID_EXTENSION,
+ [1] = 255,
+ [2] = WLAN_EID_EXT_EHT_MULTI_LINK,
+ [27] = 27,
+ [123] = 123,
+ [255 + 2] = WLAN_EID_FRAGMENT,
+ [255 + 3] = 7,
+ [255 + 3 + 1] = 0xaa,
+ [255 + 3 + 8] = WLAN_EID_FRAGMENT, /* not used */
+ [255 + 3 + 9] = 1,
+ [255 + 3 + 10] = 0, /* for size */
+ };
+ u8 *data = kunit_kzalloc(test, sizeof(input), GFP_KERNEL);
+ const struct element *elem;
+ int count = 0;
+
+ KUNIT_ASSERT_NOT_NULL(test, data);
+
+ for_each_element(elem, input, sizeof(input))
+ count++;
+
+ /* check the elements are right */
+ KUNIT_ASSERT_EQ(test, count, 3);
+
+ ret = cfg80211_defragment_element((void *)input,
+ input, sizeof(input),
+ data, sizeof(input),
+ WLAN_EID_FRAGMENT);
+ /* this means the last fragment was not used */
+ KUNIT_EXPECT_EQ(test, ret, 254 + 7);
+ KUNIT_EXPECT_MEMEQ(test, data, input + 3, 254);
+ KUNIT_EXPECT_MEMEQ(test, data + 254, input + 255 + 4, 7);
+}
+
+static void defragment_2(struct kunit *test)
+{
+ ssize_t ret;
+ static const u8 input[] = {
+ [0] = WLAN_EID_EXTENSION,
+ [1] = 255,
+ [2] = WLAN_EID_EXT_EHT_MULTI_LINK,
+ [27] = 27,
+ [123] = 123,
+
+ [257 + 0] = WLAN_EID_FRAGMENT,
+ [257 + 1] = 255,
+ [257 + 20] = 0xaa,
+
+ [2 * 257 + 0] = WLAN_EID_FRAGMENT,
+ [2 * 257 + 1] = 1,
+ [2 * 257 + 2] = 0xcc,
+ [2 * 257 + 3] = WLAN_EID_FRAGMENT, /* not used */
+ [2 * 257 + 4] = 1,
+ [2 * 257 + 5] = 0, /* for size */
+ };
+ u8 *data = kunit_kzalloc(test, sizeof(input), GFP_KERNEL);
+ const struct element *elem;
+ int count = 0;
+
+ KUNIT_ASSERT_NOT_NULL(test, data);
+
+ for_each_element(elem, input, sizeof(input))
+ count++;
+
+ /* check the elements are right */
+ KUNIT_ASSERT_EQ(test, count, 4);
+
+ ret = cfg80211_defragment_element((void *)input,
+ input, sizeof(input),
+ data, sizeof(input),
+ WLAN_EID_FRAGMENT);
+ /* this means the last fragment was not used */
+ KUNIT_EXPECT_EQ(test, ret, 254 + 255 + 1);
+ KUNIT_EXPECT_MEMEQ(test, data, input + 3, 254);
+ KUNIT_EXPECT_MEMEQ(test, data + 254, input + 257 + 2, 255);
+ KUNIT_EXPECT_MEMEQ(test, data + 254 + 255, input + 2 * 257 + 2, 1);
+}
+
+static void defragment_at_end(struct kunit *test)
+{
+ ssize_t ret;
+ static const u8 input[] = {
+ [0] = WLAN_EID_EXTENSION,
+ [1] = 255,
+ [2] = WLAN_EID_EXT_EHT_MULTI_LINK,
+ [27] = 27,
+ [123] = 123,
+ [255 + 2] = WLAN_EID_FRAGMENT,
+ [255 + 3] = 7,
+ [255 + 3 + 7] = 0, /* for size */
+ };
+ u8 *data = kunit_kzalloc(test, sizeof(input), GFP_KERNEL);
+
+ KUNIT_ASSERT_NOT_NULL(test, data);
+
+ ret = cfg80211_defragment_element((void *)input,
+ input, sizeof(input),
+ data, sizeof(input),
+ WLAN_EID_FRAGMENT);
+ KUNIT_EXPECT_EQ(test, ret, 254 + 7);
+ KUNIT_EXPECT_MEMEQ(test, data, input + 3, 254);
+ KUNIT_EXPECT_MEMEQ(test, data + 254, input + 255 + 4, 7);
+}
+
+static struct kunit_case element_fragmentation_test_cases[] = {
+ KUNIT_CASE(defragment_0),
+ KUNIT_CASE(defragment_1),
+ KUNIT_CASE(defragment_2),
+ KUNIT_CASE(defragment_at_end),
+ {}
+};
+
+static struct kunit_suite element_fragmentation = {
+ .name = "cfg80211-element-defragmentation",
+ .test_cases = element_fragmentation_test_cases,
+};
+
+kunit_test_suite(element_fragmentation);
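For reference, a minimal caller sketch of the interface exercised above, assuming an IE buffer ies/ieslen and a scratch output buffer (the variable names are illustrative, not taken from the patch):

	const struct element *elem;
	ssize_t len;

	/* locate the (possibly fragmented) EHT Multi-Link element */
	elem = cfg80211_find_ext_elem(WLAN_EID_EXT_EHT_MULTI_LINK, ies, ieslen);
	if (!elem)
		return;

	/* reassemble it, following any WLAN_EID_FRAGMENT continuations */
	len = cfg80211_defragment_element(elem, ies, ieslen,
					  scratch, scratch_len,
					  WLAN_EID_FRAGMENT);
	if (len < 0)
		return;
	/* scratch now holds len bytes of element data; as the tests above show,
	 * the extension ID byte itself is not copied for extension elements */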
diff --git a/net/wireless/tests/module.c b/net/wireless/tests/module.c
new file mode 100644
index 000000000000..9ff7b2c12312
--- /dev/null
+++ b/net/wireless/tests/module.c
@@ -0,0 +1,10 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * This is just module boilerplate for the cfg80211 kunit module.
+ *
+ * Copyright (C) 2023 Intel Corporation
+ */
+#include <linux/module.h>
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("tests for cfg80211");
diff --git a/net/wireless/trace.h b/net/wireless/trace.h
index 617c0d0dfa96..30cd1bd58aac 100644
--- a/net/wireless/trace.h
+++ b/net/wireless/trace.h
@@ -615,49 +615,47 @@ TRACE_EVENT(rdev_start_ap,
TRACE_EVENT(rdev_change_beacon,
TP_PROTO(struct wiphy *wiphy, struct net_device *netdev,
- struct cfg80211_beacon_data *info),
+ struct cfg80211_ap_update *info),
TP_ARGS(wiphy, netdev, info),
TP_STRUCT__entry(
WIPHY_ENTRY
NETDEV_ENTRY
__field(int, link_id)
- __dynamic_array(u8, head, info ? info->head_len : 0)
- __dynamic_array(u8, tail, info ? info->tail_len : 0)
- __dynamic_array(u8, beacon_ies, info ? info->beacon_ies_len : 0)
- __dynamic_array(u8, proberesp_ies,
- info ? info->proberesp_ies_len : 0)
- __dynamic_array(u8, assocresp_ies,
- info ? info->assocresp_ies_len : 0)
- __dynamic_array(u8, probe_resp, info ? info->probe_resp_len : 0)
- ),
- TP_fast_assign(
- WIPHY_ASSIGN;
- NETDEV_ASSIGN;
- if (info) {
- __entry->link_id = info->link_id;
- if (info->head)
- memcpy(__get_dynamic_array(head), info->head,
- info->head_len);
- if (info->tail)
- memcpy(__get_dynamic_array(tail), info->tail,
- info->tail_len);
- if (info->beacon_ies)
- memcpy(__get_dynamic_array(beacon_ies),
- info->beacon_ies, info->beacon_ies_len);
- if (info->proberesp_ies)
- memcpy(__get_dynamic_array(proberesp_ies),
- info->proberesp_ies,
- info->proberesp_ies_len);
- if (info->assocresp_ies)
- memcpy(__get_dynamic_array(assocresp_ies),
- info->assocresp_ies,
- info->assocresp_ies_len);
- if (info->probe_resp)
- memcpy(__get_dynamic_array(probe_resp),
- info->probe_resp, info->probe_resp_len);
- } else {
- __entry->link_id = -1;
- }
+ __dynamic_array(u8, head, info->beacon.head_len)
+ __dynamic_array(u8, tail, info->beacon.tail_len)
+ __dynamic_array(u8, beacon_ies, info->beacon.beacon_ies_len)
+ __dynamic_array(u8, proberesp_ies, info->beacon.proberesp_ies_len)
+ __dynamic_array(u8, assocresp_ies, info->beacon.assocresp_ies_len)
+ __dynamic_array(u8, probe_resp, info->beacon.probe_resp_len)
+ ),
+ TP_fast_assign(
+ WIPHY_ASSIGN;
+ NETDEV_ASSIGN;
+ __entry->link_id = info->beacon.link_id;
+ if (info->beacon.head)
+ memcpy(__get_dynamic_array(head),
+ info->beacon.head,
+ info->beacon.head_len);
+ if (info->beacon.tail)
+ memcpy(__get_dynamic_array(tail),
+ info->beacon.tail,
+ info->beacon.tail_len);
+ if (info->beacon.beacon_ies)
+ memcpy(__get_dynamic_array(beacon_ies),
+ info->beacon.beacon_ies,
+ info->beacon.beacon_ies_len);
+ if (info->beacon.proberesp_ies)
+ memcpy(__get_dynamic_array(proberesp_ies),
+ info->beacon.proberesp_ies,
+ info->beacon.proberesp_ies_len);
+ if (info->beacon.assocresp_ies)
+ memcpy(__get_dynamic_array(assocresp_ies),
+ info->beacon.assocresp_ies,
+ info->beacon.assocresp_ies_len);
+ if (info->beacon.probe_resp)
+ memcpy(__get_dynamic_array(probe_resp),
+ info->beacon.probe_resp,
+ info->beacon.probe_resp_len);
),
TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", link_id:%d",
WIPHY_PR_ARG, NETDEV_PR_ARG, __entry->link_id)
@@ -1323,16 +1321,18 @@ TRACE_EVENT(rdev_deauth,
NETDEV_ENTRY
MAC_ENTRY(bssid)
__field(u16, reason_code)
+ __field(bool, local_state_change)
),
TP_fast_assign(
WIPHY_ASSIGN;
NETDEV_ASSIGN;
MAC_ASSIGN(bssid, req->bssid);
__entry->reason_code = req->reason_code;
+ __entry->local_state_change = req->local_state_change;
),
- TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", bssid: %pM, reason: %u",
+ TP_printk(WIPHY_PR_FMT ", " NETDEV_PR_FMT ", bssid: %pM, reason: %u, local_state_change:%d",
WIPHY_PR_ARG, NETDEV_PR_ARG, __entry->bssid,
- __entry->reason_code)
+ __entry->reason_code, __entry->local_state_change)
);
TRACE_EVENT(rdev_disassoc,
@@ -2928,7 +2928,7 @@ DEFINE_EVENT(netdev_evt_only, cfg80211_send_rx_auth,
TRACE_EVENT(cfg80211_send_rx_assoc,
TP_PROTO(struct net_device *netdev,
- struct cfg80211_rx_assoc_resp *data),
+ struct cfg80211_rx_assoc_resp_data *data),
TP_ARGS(netdev, data),
TP_STRUCT__entry(
NETDEV_ENTRY
@@ -3590,7 +3590,6 @@ TRACE_EVENT(cfg80211_inform_bss_frame,
TP_STRUCT__entry(
WIPHY_ENTRY
CHAN_ENTRY
- __field(enum nl80211_bss_scan_width, scan_width)
__dynamic_array(u8, mgmt, len)
__field(s32, signal)
__field(u64, ts_boottime)
@@ -3600,7 +3599,6 @@ TRACE_EVENT(cfg80211_inform_bss_frame,
TP_fast_assign(
WIPHY_ASSIGN;
CHAN_ASSIGN(data->chan);
- __entry->scan_width = data->scan_width;
if (mgmt)
memcpy(__get_dynamic_array(mgmt), mgmt, len);
__entry->signal = data->signal;
@@ -3609,8 +3607,8 @@ TRACE_EVENT(cfg80211_inform_bss_frame,
MAC_ASSIGN(parent_bssid, data->parent_bssid);
),
TP_printk(WIPHY_PR_FMT ", " CHAN_PR_FMT
- "(scan_width: %d) signal: %d, tsb:%llu, detect_tsf:%llu, tsf_bssid: %pM",
- WIPHY_PR_ARG, CHAN_PR_ARG, __entry->scan_width,
+ "signal: %d, tsb:%llu, detect_tsf:%llu, tsf_bssid: %pM",
+ WIPHY_PR_ARG, CHAN_PR_ARG,
__entry->signal, (unsigned long long)__entry->ts_boottime,
(unsigned long long)__entry->parent_tsf,
__entry->parent_bssid)
diff --git a/net/wireless/util.c b/net/wireless/util.c
index 1783ab9d57a3..626b858b4b35 100644
--- a/net/wireless/util.c
+++ b/net/wireless/util.c
@@ -43,8 +43,7 @@ ieee80211_get_response_rate(struct ieee80211_supported_band *sband,
}
EXPORT_SYMBOL(ieee80211_get_response_rate);
-u32 ieee80211_mandatory_rates(struct ieee80211_supported_band *sband,
- enum nl80211_bss_scan_width scan_width)
+u32 ieee80211_mandatory_rates(struct ieee80211_supported_band *sband)
{
struct ieee80211_rate *bitrates;
u32 mandatory_rates = 0;
@@ -54,15 +53,10 @@ u32 ieee80211_mandatory_rates(struct ieee80211_supported_band *sband,
if (WARN_ON(!sband))
return 1;
- if (sband->band == NL80211_BAND_2GHZ) {
- if (scan_width == NL80211_BSS_CHAN_WIDTH_5 ||
- scan_width == NL80211_BSS_CHAN_WIDTH_10)
- mandatory_flag = IEEE80211_RATE_MANDATORY_G;
- else
- mandatory_flag = IEEE80211_RATE_MANDATORY_B;
- } else {
+ if (sband->band == NL80211_BAND_2GHZ)
+ mandatory_flag = IEEE80211_RATE_MANDATORY_B;
+ else
mandatory_flag = IEEE80211_RATE_MANDATORY_A;
- }
bitrates = sband->bitrates;
for (i = 0; i < sband->n_bitrates; i++)
@@ -1044,7 +1038,6 @@ void cfg80211_process_wdev_events(struct wireless_dev *wdev)
list_del(&ev->list);
spin_unlock_irqrestore(&wdev->event_lock, flags);
- wdev_lock(wdev);
switch (ev->type) {
case EVENT_CONNECT_RESULT:
__cfg80211_connect_result(
@@ -1066,15 +1059,14 @@ void cfg80211_process_wdev_events(struct wireless_dev *wdev)
ev->ij.channel);
break;
case EVENT_STOPPED:
- __cfg80211_leave(wiphy_to_rdev(wdev->wiphy), wdev);
+ cfg80211_leave(wiphy_to_rdev(wdev->wiphy), wdev);
break;
case EVENT_PORT_AUTHORIZED:
- __cfg80211_port_authorized(wdev, ev->pa.bssid,
+ __cfg80211_port_authorized(wdev, ev->pa.peer_addr,
ev->pa.td_bitmap,
ev->pa.td_bitmap_len);
break;
}
- wdev_unlock(wdev);
kfree(ev);
@@ -1124,9 +1116,7 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
return -EBUSY;
dev->ieee80211_ptr->use_4addr = false;
- wdev_lock(dev->ieee80211_ptr);
rdev_set_qos_map(rdev, dev, NULL);
- wdev_unlock(dev->ieee80211_ptr);
switch (otype) {
case NL80211_IFTYPE_AP:
@@ -1138,10 +1128,8 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
break;
case NL80211_IFTYPE_STATION:
case NL80211_IFTYPE_P2P_CLIENT:
- wdev_lock(dev->ieee80211_ptr);
cfg80211_disconnect(rdev, dev,
WLAN_REASON_DEAUTH_LEAVING, true);
- wdev_unlock(dev->ieee80211_ptr);
break;
case NL80211_IFTYPE_MESH_POINT:
/* mesh should be handled? */
@@ -1972,6 +1960,35 @@ size_t ieee80211_ie_split_ric(const u8 *ies, size_t ielen,
}
EXPORT_SYMBOL(ieee80211_ie_split_ric);
+void ieee80211_fragment_element(struct sk_buff *skb, u8 *len_pos, u8 frag_id)
+{
+ unsigned int elem_len;
+
+ if (!len_pos)
+ return;
+
+ elem_len = skb->data + skb->len - len_pos - 1;
+
+ while (elem_len > 255) {
+ /* this one is 255 */
+ *len_pos = 255;
+ /* remaining data gets smaller */
+ elem_len -= 255;
+ /* make space for the fragment ID/len in SKB */
+ skb_put(skb, 2);
+ /* shift back the remaining data to place fragment ID/len */
+ memmove(len_pos + 255 + 3, len_pos + 255 + 1, elem_len);
+ /* place the fragment ID */
+ len_pos += 255 + 1;
+ *len_pos = frag_id;
+ /* and point to fragment length to update later */
+ len_pos++;
+ }
+
+ *len_pos = elem_len;
+}
+EXPORT_SYMBOL(ieee80211_fragment_element);
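A minimal usage sketch for the new export, assuming the caller builds the element into an skb with enough tailroom for the extra two-byte fragment headers and keeps a pointer to the length byte (payload/payload_len are illustrative):

	u8 *len_pos;

	skb_put_u8(skb, WLAN_EID_EXTENSION);
	len_pos = skb_put(skb, 1);			/* length byte, fixed up below */
	skb_put_u8(skb, WLAN_EID_EXT_EHT_MULTI_LINK);
	skb_put_data(skb, payload, payload_len);	/* may exceed 255 bytes */

	/* split into 255-byte chunks, inserting WLAN_EID_FRAGMENT headers */
	ieee80211_fragment_element(skb, len_pos, WLAN_EID_FRAGMENT);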
+
bool ieee80211_operating_class_to_band(u8 operating_class,
enum nl80211_band *band)
{
@@ -1982,6 +1999,7 @@ bool ieee80211_operating_class_to_band(u8 operating_class,
*band = NL80211_BAND_5GHZ;
return true;
case 131 ... 135:
+ case 137:
*band = NL80211_BAND_6GHZ;
return true;
case 81:
@@ -2647,12 +2665,12 @@ void cfg80211_remove_link(struct wireless_dev *wdev, unsigned int link_id)
{
struct cfg80211_registered_device *rdev = wiphy_to_rdev(wdev->wiphy);
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
switch (wdev->iftype) {
case NL80211_IFTYPE_AP:
case NL80211_IFTYPE_P2P_GO:
- __cfg80211_stop_ap(rdev, wdev->netdev, link_id, true);
+ cfg80211_stop_ap(rdev, wdev->netdev, link_id, true);
break;
default:
/* per-link not relevant */
@@ -2677,12 +2695,10 @@ void cfg80211_remove_links(struct wireless_dev *wdev)
if (wdev->iftype != NL80211_IFTYPE_AP)
return;
- wdev_lock(wdev);
if (wdev->valid_links) {
for_each_valid_link(wdev, link_id)
cfg80211_remove_link(wdev, link_id);
}
- wdev_unlock(wdev);
}
int cfg80211_remove_virtual_intf(struct cfg80211_registered_device *rdev,
diff --git a/net/wireless/wext-compat.c b/net/wireless/wext-compat.c
index e3acfac7430a..2371069f3c43 100644
--- a/net/wireless/wext-compat.c
+++ b/net/wireless/wext-compat.c
@@ -7,7 +7,7 @@
* we directly assign the wireless handlers of wireless interfaces.
*
* Copyright 2008-2009 Johannes Berg <johannes@sipsolutions.net>
- * Copyright (C) 2019-2022 Intel Corporation
+ * Copyright (C) 2019-2023 Intel Corporation
*/
#include <linux/export.h>
@@ -227,7 +227,7 @@ EXPORT_WEXT_HANDLER(cfg80211_wext_giwrange);
* cfg80211_wext_freq - get wext frequency for non-"auto"
* @freq: the wext freq encoding
*
- * Returns a frequency, or a negative error code, or 0 for auto.
+ * Returns: a frequency, or a negative error code, or 0 for auto.
*/
int cfg80211_wext_freq(struct iw_freq *freq)
{
@@ -415,10 +415,10 @@ int cfg80211_wext_giwretry(struct net_device *dev,
}
EXPORT_WEXT_HANDLER(cfg80211_wext_giwretry);
-static int __cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
- struct net_device *dev, bool pairwise,
- const u8 *addr, bool remove, bool tx_key,
- int idx, struct key_params *params)
+static int cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
+ struct net_device *dev, bool pairwise,
+ const u8 *addr, bool remove, bool tx_key,
+ int idx, struct key_params *params)
{
struct wireless_dev *wdev = dev->ieee80211_ptr;
int err, i;
@@ -471,7 +471,7 @@ static int __cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
*/
if (idx == wdev->wext.default_key &&
wdev->iftype == NL80211_IFTYPE_ADHOC) {
- __cfg80211_leave_ibss(rdev, wdev->netdev, true);
+ cfg80211_leave_ibss(rdev, wdev->netdev, true);
rejoin = true;
}
@@ -552,7 +552,7 @@ static int __cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
*/
if (wdev->iftype == NL80211_IFTYPE_ADHOC &&
wdev->wext.default_key == -1) {
- __cfg80211_leave_ibss(rdev, wdev->netdev, true);
+ cfg80211_leave_ibss(rdev, wdev->netdev, true);
rejoin = true;
}
err = rdev_set_default_key(rdev, dev, -1, idx, true,
@@ -580,21 +580,6 @@ static int __cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
return 0;
}
-static int cfg80211_set_encryption(struct cfg80211_registered_device *rdev,
- struct net_device *dev, bool pairwise,
- const u8 *addr, bool remove, bool tx_key,
- int idx, struct key_params *params)
-{
- int err;
-
- wdev_lock(dev->ieee80211_ptr);
- err = __cfg80211_set_encryption(rdev, dev, pairwise, addr,
- remove, tx_key, idx, params);
- wdev_unlock(dev->ieee80211_ptr);
-
- return err;
-}
-
static int cfg80211_wext_siwencode(struct net_device *dev,
struct iw_request_info *info,
union iwreq_data *wrqu, char *keybuf)
@@ -639,7 +624,6 @@ static int cfg80211_wext_siwencode(struct net_device *dev,
else if (erq->length == 0) {
/* No key data - just set the default TX key index */
err = 0;
- wdev_lock(wdev);
if (wdev->connected ||
(wdev->iftype == NL80211_IFTYPE_ADHOC &&
wdev->u.ibss.current_bss))
@@ -647,7 +631,6 @@ static int cfg80211_wext_siwencode(struct net_device *dev,
true);
if (!err)
wdev->wext.default_key = idx;
- wdev_unlock(wdev);
goto out;
}
@@ -697,12 +680,8 @@ static int cfg80211_wext_siwencodeext(struct net_device *dev,
!rdev->ops->set_default_key)
return -EOPNOTSUPP;
- wdev_lock(wdev);
- if (wdev->valid_links) {
- wdev_unlock(wdev);
+ if (wdev->valid_links)
return -EOPNOTSUPP;
- }
- wdev_unlock(wdev);
switch (ext->alg) {
case IW_ENCODE_ALG_NONE:
@@ -1341,13 +1320,11 @@ static int cfg80211_wext_giwrate(struct net_device *dev,
return -EOPNOTSUPP;
err = 0;
- wdev_lock(wdev);
if (!wdev->valid_links && wdev->links[0].client.current_bss)
memcpy(addr, wdev->links[0].client.current_bss->pub.bssid,
ETH_ALEN);
else
err = -EOPNOTSUPP;
- wdev_unlock(wdev);
if (err)
return err;
@@ -1387,17 +1364,15 @@ static struct iw_statistics *cfg80211_wireless_stats(struct net_device *dev)
return NULL;
/* Grab BSSID of current BSS, if any */
- wdev_lock(wdev);
+ wiphy_lock(&rdev->wiphy);
if (wdev->valid_links || !wdev->links[0].client.current_bss) {
- wdev_unlock(wdev);
+ wiphy_unlock(&rdev->wiphy);
return NULL;
}
memcpy(bssid, wdev->links[0].client.current_bss->pub.bssid, ETH_ALEN);
- wdev_unlock(wdev);
memset(&sinfo, 0, sizeof(sinfo));
- wiphy_lock(&rdev->wiphy);
ret = rdev_get_station(rdev, dev, bssid, &sinfo);
wiphy_unlock(&rdev->wiphy);
diff --git a/net/wireless/wext-sme.c b/net/wireless/wext-sme.c
index f3eaa3388694..8edd9ada69d0 100644
--- a/net/wireless/wext-sme.c
+++ b/net/wireless/wext-sme.c
@@ -23,7 +23,7 @@ int cfg80211_mgd_wext_connect(struct cfg80211_registered_device *rdev,
int err, i;
ASSERT_RTNL();
- ASSERT_WDEV_LOCK(wdev);
+ lockdep_assert_wiphy(wdev->wiphy);
if (!netif_running(wdev->netdev))
return 0;
@@ -87,15 +87,11 @@ int cfg80211_mgd_wext_siwfreq(struct net_device *dev,
return -EINVAL;
}
- wdev_lock(wdev);
-
if (wdev->conn) {
bool event = true;
- if (wdev->wext.connect.channel == chan) {
- err = 0;
- goto out;
- }
+ if (wdev->wext.connect.channel == chan)
+ return 0;
/* if SSID set, we'll try right again, avoid event */
if (wdev->wext.connect.ssid_len)
@@ -103,14 +99,11 @@ int cfg80211_mgd_wext_siwfreq(struct net_device *dev,
err = cfg80211_disconnect(rdev, dev,
WLAN_REASON_DEAUTH_LEAVING, event);
if (err)
- goto out;
+ return err;
}
wdev->wext.connect.channel = chan;
- err = cfg80211_mgd_wext_connect(rdev, wdev);
- out:
- wdev_unlock(wdev);
- return err;
+ return cfg80211_mgd_wext_connect(rdev, wdev);
}
int cfg80211_mgd_wext_giwfreq(struct net_device *dev,
@@ -127,12 +120,10 @@ int cfg80211_mgd_wext_giwfreq(struct net_device *dev,
if (wdev->valid_links)
return -EOPNOTSUPP;
- wdev_lock(wdev);
if (wdev->links[0].client.current_bss)
chan = wdev->links[0].client.current_bss->pub.channel;
else if (wdev->wext.connect.channel)
chan = wdev->wext.connect.channel;
- wdev_unlock(wdev);
if (chan) {
freq->m = chan->center_freq;
@@ -164,17 +155,13 @@ int cfg80211_mgd_wext_siwessid(struct net_device *dev,
if (len > 0 && ssid[len - 1] == '\0')
len--;
- wdev_lock(wdev);
-
- err = 0;
-
if (wdev->conn) {
bool event = true;
if (wdev->wext.connect.ssid && len &&
len == wdev->wext.connect.ssid_len &&
memcmp(wdev->wext.connect.ssid, ssid, len) == 0)
- goto out;
+ return 0;
/* if SSID set now, we'll try to connect, avoid event */
if (len)
@@ -182,7 +169,7 @@ int cfg80211_mgd_wext_siwessid(struct net_device *dev,
err = cfg80211_disconnect(rdev, dev,
WLAN_REASON_DEAUTH_LEAVING, event);
if (err)
- goto out;
+ return err;
}
wdev->wext.prev_bssid_valid = false;
@@ -194,10 +181,7 @@ int cfg80211_mgd_wext_siwessid(struct net_device *dev,
wdev->wext.connect.crypto.control_port_ethertype =
cpu_to_be16(ETH_P_PAE);
- err = cfg80211_mgd_wext_connect(rdev, wdev);
- out:
- wdev_unlock(wdev);
- return err;
+ return cfg80211_mgd_wext_connect(rdev, wdev);
}
int cfg80211_mgd_wext_giwessid(struct net_device *dev,
@@ -216,7 +200,6 @@ int cfg80211_mgd_wext_giwessid(struct net_device *dev,
data->flags = 0;
- wdev_lock(wdev);
if (wdev->links[0].client.current_bss) {
const struct element *ssid_elem;
@@ -238,7 +221,6 @@ int cfg80211_mgd_wext_giwessid(struct net_device *dev,
data->length = wdev->wext.connect.ssid_len;
memcpy(ssid, wdev->wext.connect.ssid, data->length);
}
- wdev_unlock(wdev);
return ret;
}
@@ -263,23 +245,20 @@ int cfg80211_mgd_wext_siwap(struct net_device *dev,
if (is_zero_ether_addr(bssid) || is_broadcast_ether_addr(bssid))
bssid = NULL;
- wdev_lock(wdev);
-
if (wdev->conn) {
- err = 0;
/* both automatic */
if (!bssid && !wdev->wext.connect.bssid)
- goto out;
+ return 0;
/* fixed already - and no change */
if (wdev->wext.connect.bssid && bssid &&
ether_addr_equal(bssid, wdev->wext.connect.bssid))
- goto out;
+ return 0;
err = cfg80211_disconnect(rdev, dev,
WLAN_REASON_DEAUTH_LEAVING, false);
if (err)
- goto out;
+ return err;
}
if (bssid) {
@@ -288,10 +267,7 @@ int cfg80211_mgd_wext_siwap(struct net_device *dev,
} else
wdev->wext.connect.bssid = NULL;
- err = cfg80211_mgd_wext_connect(rdev, wdev);
- out:
- wdev_unlock(wdev);
- return err;
+ return cfg80211_mgd_wext_connect(rdev, wdev);
}
int cfg80211_mgd_wext_giwap(struct net_device *dev,
@@ -306,18 +282,15 @@ int cfg80211_mgd_wext_giwap(struct net_device *dev,
ap_addr->sa_family = ARPHRD_ETHER;
- wdev_lock(wdev);
- if (wdev->valid_links) {
- wdev_unlock(wdev);
+ if (wdev->valid_links)
return -EOPNOTSUPP;
- }
+
if (wdev->links[0].client.current_bss)
memcpy(ap_addr->sa_data,
wdev->links[0].client.current_bss->pub.bssid,
ETH_ALEN);
else
eth_zero_addr(ap_addr->sa_data);
- wdev_unlock(wdev);
return 0;
}
@@ -339,7 +312,6 @@ int cfg80211_wext_siwgenie(struct net_device *dev,
ie = NULL;
wiphy_lock(wdev->wiphy);
- wdev_lock(wdev);
/* no change */
err = 0;
@@ -370,7 +342,6 @@ int cfg80211_wext_siwgenie(struct net_device *dev,
/* userspace better not think we'll reconnect */
err = 0;
out:
- wdev_unlock(wdev);
wiphy_unlock(wdev->wiphy);
return err;
}
@@ -396,7 +367,6 @@ int cfg80211_wext_siwmlme(struct net_device *dev,
return -EINVAL;
wiphy_lock(&rdev->wiphy);
- wdev_lock(wdev);
switch (mlme->cmd) {
case IW_MLME_DEAUTH:
case IW_MLME_DISASSOC:
@@ -406,7 +376,6 @@ int cfg80211_wext_siwmlme(struct net_device *dev,
err = -EOPNOTSUPP;
break;
}
- wdev_unlock(wdev);
wiphy_unlock(&rdev->wiphy);
return err;
diff --git a/net/x25/af_x25.c b/net/x25/af_x25.c
index 0fb5143bec7a..aad8ffeaee04 100644
--- a/net/x25/af_x25.c
+++ b/net/x25/af_x25.c
@@ -598,7 +598,7 @@ static struct sock *x25_make_new(struct sock *osk)
x25 = x25_sk(sk);
sk->sk_type = osk->sk_type;
- sk->sk_priority = osk->sk_priority;
+ sk->sk_priority = READ_ONCE(osk->sk_priority);
sk->sk_protocol = osk->sk_protocol;
sk->sk_rcvbuf = osk->sk_rcvbuf;
sk->sk_sndbuf = osk->sk_sndbuf;
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 55f8b9b0e06d..ae9f8cb611f6 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -33,6 +33,7 @@
#include "xsk.h"
#define TX_BATCH_SIZE 32
+#define MAX_PER_SOCKET_BUDGET (TX_BATCH_SIZE)
static DEFINE_PER_CPU(struct list_head, xskmap_flush_list);
@@ -391,6 +392,16 @@ void __xsk_map_flush(void)
}
}
+#ifdef CONFIG_DEBUG_NET
+bool xsk_map_check_flush(void)
+{
+ if (list_empty(this_cpu_ptr(&xskmap_flush_list)))
+ return false;
+ __xsk_map_flush();
+ return true;
+}
+#endif
+
void xsk_tx_completed(struct xsk_buff_pool *pool, u32 nb_entries)
{
xskq_prod_submit_n(pool->cq, nb_entries);
@@ -413,16 +424,25 @@ EXPORT_SYMBOL(xsk_tx_release);
bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
{
+ bool budget_exhausted = false;
struct xdp_sock *xs;
rcu_read_lock();
+again:
list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list) {
+ if (xs->tx_budget_spent >= MAX_PER_SOCKET_BUDGET) {
+ budget_exhausted = true;
+ continue;
+ }
+
if (!xskq_cons_peek_desc(xs->tx, desc, pool)) {
if (xskq_has_descs(xs->tx))
xskq_cons_release(xs->tx);
continue;
}
+ xs->tx_budget_spent++;
+
/* This is the backpressure mechanism for the Tx path.
* Reserve space in the completion queue and only proceed
* if there is space in it. This avoids having to implement
@@ -436,6 +456,14 @@ bool xsk_tx_peek_desc(struct xsk_buff_pool *pool, struct xdp_desc *desc)
return true;
}
+ if (budget_exhausted) {
+ list_for_each_entry_rcu(xs, &pool->xsk_tx_list, tx_list)
+ xs->tx_budget_spent = 0;
+
+ budget_exhausted = false;
+ goto again;
+ }
+
out:
rcu_read_unlock();
return false;
@@ -684,7 +712,7 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
}
skb->dev = dev;
- skb->priority = xs->sk.sk_priority;
+ skb->priority = READ_ONCE(xs->sk.sk_priority);
skb->mark = READ_ONCE(xs->sk.sk_mark);
skb->destructor = xsk_destruct_skb;
xsk_set_destructor_arg(skb);
@@ -1228,7 +1256,7 @@ static int xsk_bind(struct socket *sock, struct sockaddr *addr, int addr_len)
xs->dev = dev;
xs->zc = xs->umem->zc;
- xs->sg = !!(flags & XDP_USE_SG);
+ xs->sg = !!(xs->umem->flags & XDP_UMEM_SG_FLAG);
xs->queue_id = qid;
xp_add_xsk(xs->pool, xs);
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index b3f7b310811e..49cb9f9a09be 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -170,6 +170,9 @@ int xp_assign_dev(struct xsk_buff_pool *pool,
if (err)
return err;
+ if (flags & XDP_USE_SG)
+ pool->umem->flags |= XDP_UMEM_SG_FLAG;
+
if (flags & XDP_USE_NEED_WAKEUP)
pool->uses_need_wakeup = true;
/* Tx needs to be explicitly woken up the first time. Also
diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
index d24b4d4f620e..5cdd3bca3637 100644
--- a/net/xfrm/xfrm_policy.c
+++ b/net/xfrm/xfrm_policy.c
@@ -2566,7 +2566,7 @@ static inline struct xfrm_dst *xfrm_alloc_dst(struct net *net, int family)
default:
BUG();
}
- xdst = dst_alloc(dst_ops, NULL, 1, DST_OBSOLETE_NONE, 0);
+ xdst = dst_alloc(dst_ops, NULL, DST_OBSOLETE_NONE, 0);
if (likely(xdst)) {
memset_after(xdst, 0, u.dst);
diff --git a/samples/bpf/Makefile b/samples/bpf/Makefile
index 4ccf4236031c..933f6c3fe6b0 100644
--- a/samples/bpf/Makefile
+++ b/samples/bpf/Makefile
@@ -150,6 +150,9 @@ always-y += ibumad_kern.o
always-y += hbm_out_kern.o
always-y += hbm_edt_kern.o
+TPROGS_CFLAGS = $(TPROGS_USER_CFLAGS)
+TPROGS_LDFLAGS = $(TPROGS_USER_LDFLAGS)
+
ifeq ($(ARCH), arm)
# Strip all except -D__LINUX_ARM_ARCH__ option needed to handle linux
# headers when arm instruction set identification is requested.
@@ -169,12 +172,16 @@ endif
TPROGS_CFLAGS += -Wall -O2
TPROGS_CFLAGS += -Wmissing-prototypes
TPROGS_CFLAGS += -Wstrict-prototypes
+TPROGS_CFLAGS += $(call try-run,\
+ printf "int main() { return 0; }" |\
+ $(CC) -Werror -fsanitize=bounds -x c - -o "$$TMP",-fsanitize=bounds,)
TPROGS_CFLAGS += -I$(objtree)/usr/include
TPROGS_CFLAGS += -I$(srctree)/tools/testing/selftests/bpf/
TPROGS_CFLAGS += -I$(LIBBPF_INCLUDE)
TPROGS_CFLAGS += -I$(srctree)/tools/include
TPROGS_CFLAGS += -I$(srctree)/tools/perf
+TPROGS_CFLAGS += -I$(srctree)/tools/lib
TPROGS_CFLAGS += -DHAVE_ATTR_TEST=0
ifdef SYSROOT
@@ -247,14 +254,15 @@ clean:
$(LIBBPF): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(LIBBPF_OUTPUT)
# Fix up variables inherited from Kbuild that tools/ build system won't like
$(MAKE) -C $(LIBBPF_SRC) RM='rm -rf' EXTRA_CFLAGS="$(TPROGS_CFLAGS)" \
- LDFLAGS=$(TPROGS_LDFLAGS) srctree=$(BPF_SAMPLES_PATH)/../../ \
+ LDFLAGS="$(TPROGS_LDFLAGS)" srctree=$(BPF_SAMPLES_PATH)/../../ \
O= OUTPUT=$(LIBBPF_OUTPUT)/ DESTDIR=$(LIBBPF_DESTDIR) prefix= \
$@ install_headers
BPFTOOLDIR := $(TOOLS_PATH)/bpf/bpftool
BPFTOOL_OUTPUT := $(abspath $(BPF_SAMPLES_PATH))/bpftool
-BPFTOOL := $(BPFTOOL_OUTPUT)/bootstrap/bpftool
-$(BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) | $(BPFTOOL_OUTPUT)
+DEFAULT_BPFTOOL := $(BPFTOOL_OUTPUT)/bootstrap/bpftool
+BPFTOOL ?= $(DEFAULT_BPFTOOL)
+$(DEFAULT_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) | $(BPFTOOL_OUTPUT)
$(MAKE) -C $(BPFTOOLDIR) srctree=$(BPF_SAMPLES_PATH)/../../ \
OUTPUT=$(BPFTOOL_OUTPUT)/ bootstrap
@@ -312,8 +320,11 @@ XDP_SAMPLE_CFLAGS += -Wall -O2 \
-I$(LIBBPF_INCLUDE) \
-I$(src)/../../tools/testing/selftests/bpf
-$(obj)/$(XDP_SAMPLE): TPROGS_CFLAGS = $(XDP_SAMPLE_CFLAGS)
+$(obj)/$(XDP_SAMPLE): TPROGS_CFLAGS = $(XDP_SAMPLE_CFLAGS) $(TPROGS_USER_CFLAGS)
$(obj)/$(XDP_SAMPLE): $(src)/xdp_sample_user.h $(src)/xdp_sample_shared.h
+# Override includes for trace_helpers.o because __must_check won't be defined
+# in our include path.
+$(obj)/$(TRACE_HELPERS): TPROGS_CFLAGS := $(TPROGS_CFLAGS) -D__must_check=
-include $(BPF_SAMPLES_PATH)/Makefile.target
diff --git a/samples/bpf/syscall_tp_kern.c b/samples/bpf/syscall_tp_kern.c
index 090fecfe641a..58fef969a60e 100644
--- a/samples/bpf/syscall_tp_kern.c
+++ b/samples/bpf/syscall_tp_kern.c
@@ -4,6 +4,7 @@
#include <uapi/linux/bpf.h>
#include <bpf/bpf_helpers.h>
+#if !defined(__aarch64__)
struct syscalls_enter_open_args {
unsigned long long unused;
long syscall_nr;
@@ -11,6 +12,7 @@ struct syscalls_enter_open_args {
long flags;
long mode;
};
+#endif
struct syscalls_exit_open_args {
unsigned long long unused;
@@ -18,6 +20,15 @@ struct syscalls_exit_open_args {
long ret;
};
+struct syscalls_enter_open_at_args {
+ unsigned long long unused;
+ long syscall_nr;
+ long long dfd;
+ long filename_ptr;
+ long flags;
+ long mode;
+};
+
struct {
__uint(type, BPF_MAP_TYPE_ARRAY);
__type(key, u32);
@@ -54,14 +65,14 @@ int trace_enter_open(struct syscalls_enter_open_args *ctx)
#endif
SEC("tracepoint/syscalls/sys_enter_openat")
-int trace_enter_open_at(struct syscalls_enter_open_args *ctx)
+int trace_enter_open_at(struct syscalls_enter_open_at_args *ctx)
{
count(&enter_open_map);
return 0;
}
SEC("tracepoint/syscalls/sys_enter_openat2")
-int trace_enter_open_at2(struct syscalls_enter_open_args *ctx)
+int trace_enter_open_at2(struct syscalls_enter_open_at_args *ctx)
{
count(&enter_open_map);
return 0;
diff --git a/samples/bpf/syscall_tp_user.c b/samples/bpf/syscall_tp_user.c
index 7a788bb837fc..7a09ac74fac0 100644
--- a/samples/bpf/syscall_tp_user.c
+++ b/samples/bpf/syscall_tp_user.c
@@ -17,9 +17,9 @@
static void usage(const char *cmd)
{
- printf("USAGE: %s [-i num_progs] [-h]\n", cmd);
- printf(" -i num_progs # number of progs of the test\n");
- printf(" -h # help\n");
+ printf("USAGE: %s [-i nr_tests] [-h]\n", cmd);
+ printf(" -i nr_tests # rounds of test to run\n");
+ printf(" -h # help\n");
}
static void verify_map(int map_id)
@@ -45,14 +45,14 @@ static void verify_map(int map_id)
}
}
-static int test(char *filename, int num_progs)
+static int test(char *filename, int nr_tests)
{
- int map0_fds[num_progs], map1_fds[num_progs], fd, i, j = 0;
- struct bpf_link *links[num_progs * 4];
- struct bpf_object *objs[num_progs];
+ int map0_fds[nr_tests], map1_fds[nr_tests], fd, i, j = 0;
+ struct bpf_link **links = NULL;
+ struct bpf_object *objs[nr_tests];
struct bpf_program *prog;
- for (i = 0; i < num_progs; i++) {
+ for (i = 0; i < nr_tests; i++) {
objs[i] = bpf_object__open_file(filename, NULL);
if (libbpf_get_error(objs[i])) {
fprintf(stderr, "opening BPF object file failed\n");
@@ -60,6 +60,19 @@ static int test(char *filename, int num_progs)
goto cleanup;
}
+ /* One-time initialization */
+ if (!links) {
+ int nr_progs = 0;
+
+ bpf_object__for_each_program(prog, objs[i])
+ nr_progs += 1;
+
+ links = calloc(nr_progs * nr_tests, sizeof(struct bpf_link *));
+
+ if (!links)
+ goto cleanup;
+ }
+
/* load BPF program */
if (bpf_object__load(objs[i])) {
fprintf(stderr, "loading BPF object file failed\n");
@@ -101,14 +114,18 @@ static int test(char *filename, int num_progs)
close(fd);
/* verify the map */
- for (i = 0; i < num_progs; i++) {
+ for (i = 0; i < nr_tests; i++) {
verify_map(map0_fds[i]);
verify_map(map1_fds[i]);
}
cleanup:
- for (j--; j >= 0; j--)
- bpf_link__destroy(links[j]);
+ if (links) {
+ for (j--; j >= 0; j--)
+ bpf_link__destroy(links[j]);
+
+ free(links);
+ }
for (i--; i >= 0; i--)
bpf_object__close(objs[i]);
@@ -117,13 +134,13 @@ cleanup:
int main(int argc, char **argv)
{
- int opt, num_progs = 1;
+ int opt, nr_tests = 1;
char filename[256];
while ((opt = getopt(argc, argv, "i:h")) != -1) {
switch (opt) {
case 'i':
- num_progs = atoi(optarg);
+ nr_tests = atoi(optarg);
break;
case 'h':
default:
@@ -134,5 +151,5 @@ int main(int argc, char **argv)
snprintf(filename, sizeof(filename), "%s_kern.o", argv[0]);
- return test(filename, num_progs);
+ return test(filename, nr_tests);
}
diff --git a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
index bd015ec9847b..2ce900f66d6e 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
@@ -36,11 +36,14 @@ CGROUP COMMANDS
| **cgroup_device** | **cgroup_inet4_bind** | **cgroup_inet6_bind** |
| **cgroup_inet4_post_bind** | **cgroup_inet6_post_bind** |
| **cgroup_inet4_connect** | **cgroup_inet6_connect** |
-| **cgroup_inet4_getpeername** | **cgroup_inet6_getpeername** |
+| **cgroup_unix_connect** | **cgroup_inet4_getpeername** |
+| **cgroup_inet6_getpeername** | **cgroup_unix_getpeername** |
| **cgroup_inet4_getsockname** | **cgroup_inet6_getsockname** |
-| **cgroup_udp4_sendmsg** | **cgroup_udp6_sendmsg** |
+| **cgroup_unix_getsockname** | **cgroup_udp4_sendmsg** |
+| **cgroup_udp6_sendmsg** | **cgroup_unix_sendmsg** |
| **cgroup_udp4_recvmsg** | **cgroup_udp6_recvmsg** |
-| **cgroup_sysctl** | **cgroup_getsockopt** | **cgroup_setsockopt** |
+| **cgroup_unix_recvmsg** | **cgroup_sysctl** |
+| **cgroup_getsockopt** | **cgroup_setsockopt** |
| **cgroup_inet_sock_release** }
| *ATTACH_FLAGS* := { **multi** | **override** }
@@ -102,21 +105,28 @@ DESCRIPTION
**post_bind6** return from bind(2) for an inet6 socket (since 4.17);
**connect4** call to connect(2) for an inet4 socket (since 4.17);
**connect6** call to connect(2) for an inet6 socket (since 4.17);
+ **connect_unix** call to connect(2) for a unix socket (since 6.7);
**sendmsg4** call to sendto(2), sendmsg(2), sendmmsg(2) for an
unconnected udp4 socket (since 4.18);
**sendmsg6** call to sendto(2), sendmsg(2), sendmmsg(2) for an
unconnected udp6 socket (since 4.18);
+ **sendmsg_unix** call to sendto(2), sendmsg(2), sendmmsg(2) for
+ an unconnected unix socket (since 6.7);
**recvmsg4** call to recvfrom(2), recvmsg(2), recvmmsg(2) for
an unconnected udp4 socket (since 5.2);
**recvmsg6** call to recvfrom(2), recvmsg(2), recvmmsg(2) for
an unconnected udp6 socket (since 5.2);
+ **recvmsg_unix** call to recvfrom(2), recvmsg(2), recvmmsg(2) for
+ an unconnected unix socket (since 6.7);
**sysctl** sysctl access (since 5.2);
**getsockopt** call to getsockopt (since 5.3);
**setsockopt** call to setsockopt (since 5.3);
**getpeername4** call to getpeername(2) for an inet4 socket (since 5.8);
**getpeername6** call to getpeername(2) for an inet6 socket (since 5.8);
+ **getpeername_unix** call to getpeername(2) for a unix socket (since 6.7);
**getsockname4** call to getsockname(2) for an inet4 socket (since 5.8);
**getsockname6** call to getsockname(2) for an inet6 socket (since 5.8).
+ **getsockname_unix** call to getsockname(2) for a unix socket (since 6.7);
**sock_release** closing an userspace inet socket (since 5.9).
**bpftool cgroup detach** *CGROUP* *ATTACH_TYPE* *PROG*
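The new *_unix attach types listed above can also be driven programmatically; a minimal libbpf sketch, assuming an already-loaded cgroup/connect_unix program fd and an open cgroup directory fd (both hypothetical here):

	/* prog_fd: loaded BPF_PROG_TYPE_CGROUP_SOCK_ADDR program
	 * cg_fd:   open fd of the target cgroup directory
	 */
	int err = bpf_prog_attach(prog_fd, cg_fd, BPF_CGROUP_UNIX_CONNECT, 0);
	if (err)
		fprintf(stderr, "attach failed: %d\n", err);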
diff --git a/tools/bpf/bpftool/Documentation/bpftool-net.rst b/tools/bpf/bpftool/Documentation/bpftool-net.rst
index 5e2abd3de5ab..dd3f9469765b 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-net.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-net.rst
@@ -37,7 +37,7 @@ DESCRIPTION
**bpftool net { show | list }** [ **dev** *NAME* ]
List bpf program attachments in the kernel networking subsystem.
- Currently, device driver xdp attachments, tcx and old-style tc
+ Currently, device driver xdp attachments, tcx, netkit and old-style tc
classifier/action attachments, flow_dissector as well as netfilter
attachments are implemented, i.e., for
program types **BPF_PROG_TYPE_XDP**, **BPF_PROG_TYPE_SCHED_CLS**,
@@ -52,11 +52,11 @@ DESCRIPTION
bpf programs, users should consult other tools, e.g., iproute2.
The current output will start with all xdp program attachments, followed by
- all tcx, then tc class/qdisc bpf program attachments, then flow_dissector
- and finally netfilter programs. Both xdp programs and tcx/tc programs are
+ all tcx, netkit, then tc class/qdisc bpf program attachments, then flow_dissector
+ and finally netfilter programs. Both xdp programs and tcx/netkit/tc programs are
ordered based on ifindex number. If multiple bpf programs attached
to the same networking device through **tc**, the order will be first
- all bpf programs attached to tcx, then tc classes, then all bpf programs
+ all bpf programs attached to tcx, netkit, then tc classes, then all bpf programs
attached to non clsact qdiscs, and finally all bpf programs attached
to root and clsact qdisc.
diff --git a/tools/bpf/bpftool/Documentation/bpftool-prog.rst b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
index dcae81bd27ed..58e6a5b10ef7 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-prog.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
@@ -47,9 +47,11 @@ PROG COMMANDS
| **cgroup/sock** | **cgroup/dev** | **lwt_in** | **lwt_out** | **lwt_xmit** |
| **lwt_seg6local** | **sockops** | **sk_skb** | **sk_msg** | **lirc_mode2** |
| **cgroup/bind4** | **cgroup/bind6** | **cgroup/post_bind4** | **cgroup/post_bind6** |
-| **cgroup/connect4** | **cgroup/connect6** | **cgroup/getpeername4** | **cgroup/getpeername6** |
-| **cgroup/getsockname4** | **cgroup/getsockname6** | **cgroup/sendmsg4** | **cgroup/sendmsg6** |
-| **cgroup/recvmsg4** | **cgroup/recvmsg6** | **cgroup/sysctl** |
+| **cgroup/connect4** | **cgroup/connect6** | **cgroup/connect_unix** |
+| **cgroup/getpeername4** | **cgroup/getpeername6** | **cgroup/getpeername_unix** |
+| **cgroup/getsockname4** | **cgroup/getsockname6** | **cgroup/getsockname_unix** |
+| **cgroup/sendmsg4** | **cgroup/sendmsg6** | **cgroup/sendmsg_unix** |
+| **cgroup/recvmsg4** | **cgroup/recvmsg6** | **cgroup/recvmsg_unix** | **cgroup/sysctl** |
| **cgroup/getsockopt** | **cgroup/setsockopt** | **cgroup/sock_release** |
| **struct_ops** | **fentry** | **fexit** | **freplace** | **sk_lookup**
| }
diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool
index 085bf18f3659..6e4f7ce6bc01 100644
--- a/tools/bpf/bpftool/bash-completion/bpftool
+++ b/tools/bpf/bpftool/bash-completion/bpftool
@@ -480,13 +480,13 @@ _bpftool()
action tracepoint raw_tracepoint \
xdp perf_event cgroup/skb cgroup/sock \
cgroup/dev lwt_in lwt_out lwt_xmit \
- lwt_seg6local sockops sk_skb sk_msg \
- lirc_mode2 cgroup/bind4 cgroup/bind6 \
- cgroup/connect4 cgroup/connect6 \
- cgroup/getpeername4 cgroup/getpeername6 \
- cgroup/getsockname4 cgroup/getsockname6 \
- cgroup/sendmsg4 cgroup/sendmsg6 \
- cgroup/recvmsg4 cgroup/recvmsg6 \
+ lwt_seg6local sockops sk_skb sk_msg lirc_mode2 \
+ cgroup/bind4 cgroup/bind6 \
+ cgroup/connect4 cgroup/connect6 cgroup/connect_unix \
+ cgroup/getpeername4 cgroup/getpeername6 cgroup/getpeername_unix \
+ cgroup/getsockname4 cgroup/getsockname6 cgroup/getsockname_unix \
+ cgroup/sendmsg4 cgroup/sendmsg6 cgroup/sendmsg_unix \
+ cgroup/recvmsg4 cgroup/recvmsg6 cgroup/recvmsg_unix \
cgroup/post_bind4 cgroup/post_bind6 \
cgroup/sysctl cgroup/getsockopt \
cgroup/setsockopt cgroup/sock_release struct_ops \
diff --git a/tools/bpf/bpftool/btf_dumper.c b/tools/bpf/bpftool/btf_dumper.c
index 1b7f69714604..527fe867a8fb 100644
--- a/tools/bpf/bpftool/btf_dumper.c
+++ b/tools/bpf/bpftool/btf_dumper.c
@@ -127,7 +127,7 @@ static void btf_dumper_ptr(const struct btf_dumper *d,
print_ptr_value:
if (d->is_plain_text)
- jsonw_printf(d->jw, "%p", (void *)value);
+ jsonw_printf(d->jw, "\"%p\"", (void *)value);
else
jsonw_printf(d->jw, "%lu", value);
}
diff --git a/tools/bpf/bpftool/cgroup.c b/tools/bpf/bpftool/cgroup.c
index ac846b0805b4..af6898c0f388 100644
--- a/tools/bpf/bpftool/cgroup.c
+++ b/tools/bpf/bpftool/cgroup.c
@@ -28,13 +28,15 @@
" cgroup_device | cgroup_inet4_bind |\n" \
" cgroup_inet6_bind | cgroup_inet4_post_bind |\n" \
" cgroup_inet6_post_bind | cgroup_inet4_connect |\n" \
- " cgroup_inet6_connect | cgroup_inet4_getpeername |\n" \
- " cgroup_inet6_getpeername | cgroup_inet4_getsockname |\n" \
- " cgroup_inet6_getsockname | cgroup_udp4_sendmsg |\n" \
- " cgroup_udp6_sendmsg | cgroup_udp4_recvmsg |\n" \
- " cgroup_udp6_recvmsg | cgroup_sysctl |\n" \
- " cgroup_getsockopt | cgroup_setsockopt |\n" \
- " cgroup_inet_sock_release }"
+ " cgroup_inet6_connect | cgroup_unix_connect |\n" \
+ " cgroup_inet4_getpeername | cgroup_inet6_getpeername |\n" \
+ " cgroup_unix_getpeername | cgroup_inet4_getsockname |\n" \
+ " cgroup_inet6_getsockname | cgroup_unix_getsockname |\n" \
+ " cgroup_udp4_sendmsg | cgroup_udp6_sendmsg |\n" \
+ " cgroup_unix_sendmsg | cgroup_udp4_recvmsg |\n" \
+ " cgroup_udp6_recvmsg | cgroup_unix_recvmsg |\n" \
+ " cgroup_sysctl | cgroup_getsockopt |\n" \
+ " cgroup_setsockopt | cgroup_inet_sock_release }"
static unsigned int query_flags;
static struct btf *btf_vmlinux;
diff --git a/tools/bpf/bpftool/gen.c b/tools/bpf/bpftool/gen.c
index 2883660d6b67..ee3ce2b8000d 100644
--- a/tools/bpf/bpftool/gen.c
+++ b/tools/bpf/bpftool/gen.c
@@ -708,17 +708,22 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
codegen("\
\n\
- skel->%1$s = skel_prep_map_data((void *)\"\\ \n\
- ", ident);
+ { \n\
+ static const char data[] __attribute__((__aligned__(8))) = \"\\\n\
+ ");
mmap_data = bpf_map__initial_value(map, &mmap_size);
print_hex(mmap_data, mmap_size);
codegen("\
\n\
- \", %1$zd, %2$zd); \n\
- if (!skel->%3$s) \n\
- goto cleanup; \n\
- skel->maps.%3$s.initial_value = (__u64) (long) skel->%3$s;\n\
- ", bpf_map_mmap_sz(map), mmap_size, ident);
+ \"; \n\
+ \n\
+ skel->%1$s = skel_prep_map_data((void *)data, %2$zd,\n\
+ sizeof(data) - 1);\n\
+ if (!skel->%1$s) \n\
+ goto cleanup; \n\
+ skel->maps.%1$s.initial_value = (__u64) (long) skel->%1$s;\n\
+ } \n\
+ ", ident, bpf_map_mmap_sz(map));
}
codegen("\
\n\
@@ -733,32 +738,30 @@ static int gen_trace(struct bpf_object *obj, const char *obj_name, const char *h
{ \n\
struct bpf_load_and_run_opts opts = {}; \n\
int err; \n\
- \n\
- opts.ctx = (struct bpf_loader_ctx *)skel; \n\
- opts.data_sz = %2$d; \n\
- opts.data = (void *)\"\\ \n\
+ static const char opts_data[] __attribute__((__aligned__(8))) = \"\\\n\
",
- obj_name, opts.data_sz);
+ obj_name);
print_hex(opts.data, opts.data_sz);
codegen("\
\n\
\"; \n\
+ static const char opts_insn[] __attribute__((__aligned__(8))) = \"\\\n\
");
-
- codegen("\
- \n\
- opts.insns_sz = %d; \n\
- opts.insns = (void *)\"\\ \n\
- ",
- opts.insns_sz);
print_hex(opts.insns, opts.insns_sz);
codegen("\
\n\
\"; \n\
+ \n\
+ opts.ctx = (struct bpf_loader_ctx *)skel; \n\
+ opts.data_sz = sizeof(opts_data) - 1; \n\
+ opts.data = (void *)opts_data; \n\
+ opts.insns_sz = sizeof(opts_insn) - 1; \n\
+ opts.insns = (void *)opts_insn; \n\
+ \n\
err = bpf_load_and_run(&opts); \n\
if (err < 0) \n\
return err; \n\
- ", obj_name);
+ ");
bpf_object__for_each_map(map, obj) {
const char *mmap_flags;
@@ -1209,7 +1212,7 @@ static int do_skeleton(int argc, char **argv)
codegen("\
\n\
\n\
- s->data = (void *)%2$s__elf_bytes(&s->data_sz); \n\
+ s->data = %1$s__elf_bytes(&s->data_sz); \n\
\n\
obj->skeleton = s; \n\
return 0; \n\
@@ -1218,12 +1221,12 @@ static int do_skeleton(int argc, char **argv)
return err; \n\
} \n\
\n\
- static inline const void *%2$s__elf_bytes(size_t *sz) \n\
+ static inline const void *%1$s__elf_bytes(size_t *sz) \n\
{ \n\
- *sz = %1$d; \n\
- return (const void *)\"\\ \n\
- "
- , file_sz, obj_name);
+ static const char data[] __attribute__((__aligned__(8))) = \"\\\n\
+ ",
+ obj_name
+ );
/* embed contents of BPF object file */
print_hex(obj_data, file_sz);
@@ -1231,6 +1234,9 @@ static int do_skeleton(int argc, char **argv)
codegen("\
\n\
\"; \n\
+ \n\
+ *sz = sizeof(data) - 1; \n\
+ return (const void *)data; \n\
} \n\
\n\
#ifdef __cplusplus \n\
diff --git a/tools/bpf/bpftool/link.c b/tools/bpf/bpftool/link.c
index 2e5c231e08ac..a1528cde81ab 100644
--- a/tools/bpf/bpftool/link.c
+++ b/tools/bpf/bpftool/link.c
@@ -265,6 +265,7 @@ show_kprobe_multi_json(struct bpf_link_info *info, json_writer_t *wtr)
jsonw_bool_field(json_wtr, "retprobe",
info->kprobe_multi.flags & BPF_F_KPROBE_MULTI_RETURN);
jsonw_uint_field(json_wtr, "func_cnt", info->kprobe_multi.count);
+ jsonw_uint_field(json_wtr, "missed", info->kprobe_multi.missed);
jsonw_name(json_wtr, "funcs");
jsonw_start_array(json_wtr);
addrs = u64_to_ptr(info->kprobe_multi.addrs);
@@ -301,6 +302,7 @@ show_perf_event_kprobe_json(struct bpf_link_info *info, json_writer_t *wtr)
jsonw_string_field(wtr, "func",
u64_to_ptr(info->perf_event.kprobe.func_name));
jsonw_uint_field(wtr, "offset", info->perf_event.kprobe.offset);
+ jsonw_uint_field(wtr, "missed", info->perf_event.kprobe.missed);
}
static void
@@ -449,6 +451,10 @@ static int show_link_close_json(int fd, struct bpf_link_info *info)
show_link_ifindex_json(info->tcx.ifindex, json_wtr);
show_link_attach_type_json(info->tcx.attach_type, json_wtr);
break;
+ case BPF_LINK_TYPE_NETKIT:
+ show_link_ifindex_json(info->netkit.ifindex, json_wtr);
+ show_link_attach_type_json(info->netkit.attach_type, json_wtr);
+ break;
case BPF_LINK_TYPE_XDP:
show_link_ifindex_json(info->xdp.ifindex, json_wtr);
break;
@@ -641,6 +647,8 @@ static void show_kprobe_multi_plain(struct bpf_link_info *info)
else
printf("\n\tkprobe.multi ");
printf("func_cnt %u ", info->kprobe_multi.count);
+ if (info->kprobe_multi.missed)
+ printf("missed %llu ", info->kprobe_multi.missed);
addrs = (__u64 *)u64_to_ptr(info->kprobe_multi.addrs);
qsort(addrs, info->kprobe_multi.count, sizeof(__u64), cmp_u64);
@@ -683,6 +691,8 @@ static void show_perf_event_kprobe_plain(struct bpf_link_info *info)
printf("%s", buf);
if (info->perf_event.kprobe.offset)
printf("+%#x", info->perf_event.kprobe.offset);
+ if (info->perf_event.kprobe.missed)
+ printf(" missed %llu", info->perf_event.kprobe.missed);
printf(" ");
}
@@ -785,6 +795,11 @@ static int show_link_close_plain(int fd, struct bpf_link_info *info)
show_link_ifindex_plain(info->tcx.ifindex);
show_link_attach_type_plain(info->tcx.attach_type);
break;
+ case BPF_LINK_TYPE_NETKIT:
+ printf("\n\t");
+ show_link_ifindex_plain(info->netkit.ifindex);
+ show_link_attach_type_plain(info->netkit.attach_type);
+ break;
case BPF_LINK_TYPE_XDP:
printf("\n\t");
show_link_ifindex_plain(info->xdp.ifindex);
diff --git a/tools/bpf/bpftool/net.c b/tools/bpf/bpftool/net.c
index 66a8ce8ae012..968714b4c3d4 100644
--- a/tools/bpf/bpftool/net.c
+++ b/tools/bpf/bpftool/net.c
@@ -79,6 +79,8 @@ static const char * const attach_type_strings[] = {
static const char * const attach_loc_strings[] = {
[BPF_TCX_INGRESS] = "tcx/ingress",
[BPF_TCX_EGRESS] = "tcx/egress",
+ [BPF_NETKIT_PRIMARY] = "netkit/primary",
+ [BPF_NETKIT_PEER] = "netkit/peer",
};
const size_t net_attach_type_size = ARRAY_SIZE(attach_type_strings);
@@ -506,6 +508,9 @@ static void show_dev_tc_bpf(struct ip_devname_ifindex *dev)
{
__show_dev_tc_bpf(dev, BPF_TCX_INGRESS);
__show_dev_tc_bpf(dev, BPF_TCX_EGRESS);
+
+ __show_dev_tc_bpf(dev, BPF_NETKIT_PRIMARY);
+ __show_dev_tc_bpf(dev, BPF_NETKIT_PEER);
}
static int show_dev_tc_bpf_classic(int sock, unsigned int nl_pid,
@@ -926,7 +931,7 @@ static int do_help(int argc, char **argv)
" ATTACH_TYPE := { xdp | xdpgeneric | xdpdrv | xdpoffload }\n"
" " HELP_SPEC_OPTIONS " }\n"
"\n"
- "Note: Only xdp, tcx, tc, flow_dissector and netfilter attachments\n"
+ "Note: Only xdp, tcx, tc, netkit, flow_dissector and netfilter attachments\n"
" are currently supported.\n"
" For progs attached to cgroups, use \"bpftool cgroup\"\n"
" to dump program attachments. For program types\n"
diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
index 8443a149dd17..7ec4f5671e7a 100644
--- a/tools/bpf/bpftool/prog.c
+++ b/tools/bpf/bpftool/prog.c
@@ -2475,9 +2475,10 @@ static int do_help(int argc, char **argv)
" sk_reuseport | flow_dissector | cgroup/sysctl |\n"
" cgroup/bind4 | cgroup/bind6 | cgroup/post_bind4 |\n"
" cgroup/post_bind6 | cgroup/connect4 | cgroup/connect6 |\n"
- " cgroup/getpeername4 | cgroup/getpeername6 |\n"
- " cgroup/getsockname4 | cgroup/getsockname6 | cgroup/sendmsg4 |\n"
- " cgroup/sendmsg6 | cgroup/recvmsg4 | cgroup/recvmsg6 |\n"
+ " cgroup/connect_unix | cgroup/getpeername4 | cgroup/getpeername6 |\n"
+ " cgroup/getpeername_unix | cgroup/getsockname4 | cgroup/getsockname6 |\n"
+ " cgroup/getsockname_unix | cgroup/sendmsg4 | cgroup/sendmsg6 |\n"
+ " cgroup/sendmsg°unix | cgroup/recvmsg4 | cgroup/recvmsg6 | cgroup/recvmsg_unix |\n"
" cgroup/getsockopt | cgroup/setsockopt | cgroup/sock_release |\n"
" struct_ops | fentry | fexit | freplace | sk_lookup }\n"
" ATTACH_TYPE := { sk_msg_verdict | sk_skb_verdict | sk_skb_stream_verdict |\n"
diff --git a/tools/bpf/bpftool/struct_ops.c b/tools/bpf/bpftool/struct_ops.c
index 3ebc9fe91e0e..d573f2640d8e 100644
--- a/tools/bpf/bpftool/struct_ops.c
+++ b/tools/bpf/bpftool/struct_ops.c
@@ -276,6 +276,9 @@ static struct res do_one_id(const char *id_str, work_func func, void *data,
res.nr_maps++;
+ if (wtr)
+ jsonw_start_array(wtr);
+
if (func(fd, info, data, wtr))
res.nr_errs++;
else if (!wtr && json_output)
@@ -288,6 +291,9 @@ static struct res do_one_id(const char *id_str, work_func func, void *data,
*/
jsonw_null(json_wtr);
+ if (wtr)
+ jsonw_end_array(wtr);
+
done:
free(info);
close(fd);
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 0448700890f7..0f6cdf52b1da 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -932,7 +932,14 @@ enum bpf_map_type {
*/
BPF_MAP_TYPE_CGROUP_STORAGE = BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED,
BPF_MAP_TYPE_REUSEPORT_SOCKARRAY,
- BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE,
+ BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED,
+ /* BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE is available to bpf programs
+ * attaching to a cgroup. The new mechanism (BPF_MAP_TYPE_CGRP_STORAGE +
+ * local percpu kptr) supports all BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE
+ * functionality and more. So mark * BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE
+ * deprecated.
+ */
+ BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE = BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED,
BPF_MAP_TYPE_QUEUE,
BPF_MAP_TYPE_STACK,
BPF_MAP_TYPE_SK_STORAGE,
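A sketch of the replacement mechanism the deprecation comment refers to, i.e. a BPF_MAP_TYPE_CGRP_STORAGE map whose value carries a local per-cpu kptr; the struct names are illustrative and the __percpu_kptr tag / bpf_percpu_obj_new() allocation come from this cycle's per-cpu kptr support:

	struct pcpu_counter {
		__u64 cnt;
	};

	struct cgrp_val {
		/* allocated with bpf_percpu_obj_new(), freed with bpf_percpu_obj_drop() */
		struct pcpu_counter __percpu_kptr *pc;
	};

	struct {
		__uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
		__uint(map_flags, BPF_F_NO_PREALLOC);
		__type(key, int);
		__type(value, struct cgrp_val);
	} cgrp_map SEC(".maps");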
@@ -1040,6 +1047,13 @@ enum bpf_attach_type {
BPF_TCX_INGRESS,
BPF_TCX_EGRESS,
BPF_TRACE_UPROBE_MULTI,
+ BPF_CGROUP_UNIX_CONNECT,
+ BPF_CGROUP_UNIX_SENDMSG,
+ BPF_CGROUP_UNIX_RECVMSG,
+ BPF_CGROUP_UNIX_GETPEERNAME,
+ BPF_CGROUP_UNIX_GETSOCKNAME,
+ BPF_NETKIT_PRIMARY,
+ BPF_NETKIT_PEER,
__MAX_BPF_ATTACH_TYPE
};
@@ -1059,6 +1073,7 @@ enum bpf_link_type {
BPF_LINK_TYPE_NETFILTER = 10,
BPF_LINK_TYPE_TCX = 11,
BPF_LINK_TYPE_UPROBE_MULTI = 12,
+ BPF_LINK_TYPE_NETKIT = 13,
MAX_BPF_LINK_TYPE,
};
@@ -1644,6 +1659,13 @@ union bpf_attr {
__u32 flags;
__u32 pid;
} uprobe_multi;
+ struct {
+ union {
+ __u32 relative_fd;
+ __u32 relative_id;
+ };
+ __u64 expected_revision;
+ } netkit;
};
} link_create;
@@ -2697,8 +2719,8 @@ union bpf_attr {
* *bpf_socket* should be one of the following:
*
* * **struct bpf_sock_ops** for **BPF_PROG_TYPE_SOCK_OPS**.
- * * **struct bpf_sock_addr** for **BPF_CGROUP_INET4_CONNECT**
- * and **BPF_CGROUP_INET6_CONNECT**.
+ * * **struct bpf_sock_addr** for **BPF_CGROUP_INET4_CONNECT**,
+ * **BPF_CGROUP_INET6_CONNECT** and **BPF_CGROUP_UNIX_CONNECT**.
*
* This helper actually implements a subset of **setsockopt()**.
* It supports the following *level*\ s:
@@ -2936,8 +2958,8 @@ union bpf_attr {
* *bpf_socket* should be one of the following:
*
* * **struct bpf_sock_ops** for **BPF_PROG_TYPE_SOCK_OPS**.
- * * **struct bpf_sock_addr** for **BPF_CGROUP_INET4_CONNECT**
- * and **BPF_CGROUP_INET6_CONNECT**.
+ * * **struct bpf_sock_addr** for **BPF_CGROUP_INET4_CONNECT**,
+ * **BPF_CGROUP_INET6_CONNECT** and **BPF_CGROUP_UNIX_CONNECT**.
*
* This helper actually implements a subset of **getsockopt()**.
* It supports the same set of *optname*\ s that is supported by
@@ -3257,6 +3279,11 @@ union bpf_attr {
* and *params*->smac will not be set as output. A common
* use case is to call **bpf_redirect_neigh**\ () after
* doing **bpf_fib_lookup**\ ().
+ * **BPF_FIB_LOOKUP_SRC**
+ * Derive and set source IP addr in *params*->ipv{4,6}_src
+ * for the nexthop. If the src addr cannot be derived,
+ * **BPF_FIB_LKUP_RET_NO_SRC_ADDR** is returned. In this
+ * case, *params*->dmac and *params*->smac are not set either.
*
* *ctx* is either **struct xdp_md** for XDP programs or
* **struct sk_buff** tc cls_act programs.
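A minimal XDP-side sketch of the new flag (header parsing elided, AF_INET spelled numerically to keep the snippet self-contained):

	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("xdp")
	int fib_src_probe(struct xdp_md *ctx)
	{
		struct bpf_fib_lookup params = {};
		long ret;

		params.family  = 2;	/* AF_INET */
		params.ifindex = ctx->ingress_ifindex;
		/* params.ipv4_dst would be filled from the parsed IP header */

		ret = bpf_fib_lookup(ctx, &params, sizeof(params), BPF_FIB_LOOKUP_SRC);
		if (ret == BPF_FIB_LKUP_RET_NO_SRC_ADDR)
			return XDP_DROP;
		/* on success, params.ipv4_src holds the derived source address */
		return XDP_PASS;
	}

	char _license[] SEC("license") = "GPL";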
@@ -5089,6 +5116,8 @@ union bpf_attr {
* **BPF_F_TIMER_ABS**
* Start the timer in absolute expire value instead of the
* default relative one.
+ * **BPF_F_TIMER_CPU_PIN**
+ * Timer will be pinned to the CPU of the caller.
*
* Return
* 0 on success.
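A short sketch of arming a timer with the new flag; the map/value layout is illustrative and CLOCK_MONOTONIC comes from the usual uapi time headers:

	struct elem { struct bpf_timer t; };

	struct {
		__uint(type, BPF_MAP_TYPE_ARRAY);
		__uint(max_entries, 1);
		__type(key, int);
		__type(value, struct elem);
	} timer_map SEC(".maps");

	static int timer_cb(void *map, int *key, struct bpf_timer *timer)
	{
		return 0;
	}

	/* inside a program, with val = bpf_map_lookup_elem(&timer_map, &zero): */
	bpf_timer_init(&val->t, &timer_map, CLOCK_MONOTONIC);
	bpf_timer_set_callback(&val->t, timer_cb);
	/* fire ~1ms from now, pinned to the CPU this program is running on */
	bpf_timer_start(&val->t, 1000000, BPF_F_TIMER_CPU_PIN);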
@@ -6525,6 +6554,7 @@ struct bpf_link_info {
__aligned_u64 addrs;
__u32 count; /* in/out: kprobe_multi function count */
__u32 flags;
+ __u64 missed;
} kprobe_multi;
struct {
__u32 type; /* enum bpf_perf_event_type */
@@ -6540,6 +6570,7 @@ struct bpf_link_info {
__u32 name_len;
__u32 offset; /* offset from func_name */
__u64 addr;
+ __u64 missed;
} kprobe; /* BPF_PERF_EVENT_KPROBE, BPF_PERF_EVENT_KRETPROBE */
struct {
__aligned_u64 tp_name; /* in/out */
@@ -6555,6 +6586,10 @@ struct bpf_link_info {
__u32 ifindex;
__u32 attach_type;
} tcx;
+ struct {
+ __u32 ifindex;
+ __u32 attach_type;
+ } netkit;
};
} __attribute__((aligned(8)));
@@ -6953,6 +6988,7 @@ enum {
BPF_FIB_LOOKUP_OUTPUT = (1U << 1),
BPF_FIB_LOOKUP_SKIP_NEIGH = (1U << 2),
BPF_FIB_LOOKUP_TBID = (1U << 3),
+ BPF_FIB_LOOKUP_SRC = (1U << 4),
};
enum {
@@ -6965,6 +7001,7 @@ enum {
BPF_FIB_LKUP_RET_UNSUPP_LWT, /* fwd requires encapsulation */
BPF_FIB_LKUP_RET_NO_NEIGH, /* no neighbor entry for nh */
BPF_FIB_LKUP_RET_FRAG_NEEDED, /* fragmentation required to fwd */
+ BPF_FIB_LKUP_RET_NO_SRC_ADDR, /* failed to derive IP src addr */
};
struct bpf_fib_lookup {
@@ -6999,6 +7036,9 @@ struct bpf_fib_lookup {
__u32 rt_metric;
};
+ /* input: source address to consider for lookup
+ * output: source address result from lookup
+ */
union {
__be32 ipv4_src;
__u32 ipv6_src[4]; /* in6_addr; network order */
@@ -7300,9 +7340,11 @@ struct bpf_core_relo {
* Flags to control bpf_timer_start() behaviour.
* - BPF_F_TIMER_ABS: Timeout passed is absolute time, by default it is
* relative to current time.
+ * - BPF_F_TIMER_CPU_PIN: Timer will be pinned to the CPU of the caller.
*/
enum {
BPF_F_TIMER_ABS = (1ULL << 0),
+ BPF_F_TIMER_CPU_PIN = (1ULL << 1),
};
/* BPF numbers iterator state */
diff --git a/tools/include/uapi/linux/if_link.h b/tools/include/uapi/linux/if_link.h
index 39e659c83cfd..a0aa05a28cf2 100644
--- a/tools/include/uapi/linux/if_link.h
+++ b/tools/include/uapi/linux/if_link.h
@@ -211,6 +211,9 @@ struct rtnl_link_stats {
* @rx_nohandler: Number of packets received on the interface
* but dropped by the networking stack because the device is
* not designated to receive packets (e.g. backup link in a bond).
+ *
+ * @rx_otherhost_dropped: Number of packets dropped due to mismatch
+ * in destination MAC address.
*/
struct rtnl_link_stats64 {
__u64 rx_packets;
@@ -243,6 +246,23 @@ struct rtnl_link_stats64 {
__u64 rx_compressed;
__u64 tx_compressed;
__u64 rx_nohandler;
+
+ __u64 rx_otherhost_dropped;
+};
+
+/* Subset of link stats useful for in-HW collection. Meaning of the fields is as
+ * for struct rtnl_link_stats64.
+ */
+struct rtnl_hw_stats64 {
+ __u64 rx_packets;
+ __u64 tx_packets;
+ __u64 rx_bytes;
+ __u64 tx_bytes;
+ __u64 rx_errors;
+ __u64 tx_errors;
+ __u64 rx_dropped;
+ __u64 tx_dropped;
+ __u64 multicast;
};
/* The struct should be in sync with struct ifmap */
@@ -350,7 +370,13 @@ enum {
IFLA_GRO_MAX_SIZE,
IFLA_TSO_MAX_SIZE,
IFLA_TSO_MAX_SEGS,
+ IFLA_ALLMULTI, /* Allmulti count: > 0 means acts ALLMULTI */
+
+ IFLA_DEVLINK_PORT,
+ IFLA_GSO_IPV4_MAX_SIZE,
+ IFLA_GRO_IPV4_MAX_SIZE,
+ IFLA_DPLL_PIN,
__IFLA_MAX
};
@@ -539,6 +565,12 @@ enum {
IFLA_BRPORT_MRP_IN_OPEN,
IFLA_BRPORT_MCAST_EHT_HOSTS_LIMIT,
IFLA_BRPORT_MCAST_EHT_HOSTS_CNT,
+ IFLA_BRPORT_LOCKED,
+ IFLA_BRPORT_MAB,
+ IFLA_BRPORT_MCAST_N_GROUPS,
+ IFLA_BRPORT_MCAST_MAX_GROUPS,
+ IFLA_BRPORT_NEIGH_VLAN_SUPPRESS,
+ IFLA_BRPORT_BACKUP_NHID,
__IFLA_BRPORT_MAX
};
#define IFLA_BRPORT_MAX (__IFLA_BRPORT_MAX - 1)
@@ -716,7 +748,79 @@ enum ipvlan_mode {
#define IPVLAN_F_PRIVATE 0x01
#define IPVLAN_F_VEPA 0x02
+/* Tunnel RTM header */
+struct tunnel_msg {
+ __u8 family;
+ __u8 flags;
+ __u16 reserved2;
+ __u32 ifindex;
+};
+
+/* netkit section */
+enum netkit_action {
+ NETKIT_NEXT = -1,
+ NETKIT_PASS = 0,
+ NETKIT_DROP = 2,
+ NETKIT_REDIRECT = 7,
+};
+
+enum netkit_mode {
+ NETKIT_L2,
+ NETKIT_L3,
+};
+
+enum {
+ IFLA_NETKIT_UNSPEC,
+ IFLA_NETKIT_PEER_INFO,
+ IFLA_NETKIT_PRIMARY,
+ IFLA_NETKIT_POLICY,
+ IFLA_NETKIT_PEER_POLICY,
+ IFLA_NETKIT_MODE,
+ __IFLA_NETKIT_MAX,
+};
+#define IFLA_NETKIT_MAX (__IFLA_NETKIT_MAX - 1)
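A minimal program sketch for the new device type, assuming the libbpf section names introduced for netkit ("netkit/primary"/"netkit/peer") and the netkit_action return codes defined just above:

	#include <linux/bpf.h>
	#include <linux/if_link.h>
	#include <bpf/bpf_helpers.h>

	SEC("netkit/primary")
	int nk_next(struct __sk_buff *skb)
	{
		/* fall through to the next program / the device's default policy */
		return NETKIT_NEXT;
	}

	char _license[] SEC("license") = "GPL";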
+
/* VXLAN section */
+
+/* include statistics in the dump */
+#define TUNNEL_MSG_FLAG_STATS 0x01
+
+#define TUNNEL_MSG_VALID_USER_FLAGS TUNNEL_MSG_FLAG_STATS
+
+/* Embedded inside VXLAN_VNIFILTER_ENTRY_STATS */
+enum {
+ VNIFILTER_ENTRY_STATS_UNSPEC,
+ VNIFILTER_ENTRY_STATS_RX_BYTES,
+ VNIFILTER_ENTRY_STATS_RX_PKTS,
+ VNIFILTER_ENTRY_STATS_RX_DROPS,
+ VNIFILTER_ENTRY_STATS_RX_ERRORS,
+ VNIFILTER_ENTRY_STATS_TX_BYTES,
+ VNIFILTER_ENTRY_STATS_TX_PKTS,
+ VNIFILTER_ENTRY_STATS_TX_DROPS,
+ VNIFILTER_ENTRY_STATS_TX_ERRORS,
+ VNIFILTER_ENTRY_STATS_PAD,
+ __VNIFILTER_ENTRY_STATS_MAX
+};
+#define VNIFILTER_ENTRY_STATS_MAX (__VNIFILTER_ENTRY_STATS_MAX - 1)
+
+enum {
+ VXLAN_VNIFILTER_ENTRY_UNSPEC,
+ VXLAN_VNIFILTER_ENTRY_START,
+ VXLAN_VNIFILTER_ENTRY_END,
+ VXLAN_VNIFILTER_ENTRY_GROUP,
+ VXLAN_VNIFILTER_ENTRY_GROUP6,
+ VXLAN_VNIFILTER_ENTRY_STATS,
+ __VXLAN_VNIFILTER_ENTRY_MAX
+};
+#define VXLAN_VNIFILTER_ENTRY_MAX (__VXLAN_VNIFILTER_ENTRY_MAX - 1)
+
+enum {
+ VXLAN_VNIFILTER_UNSPEC,
+ VXLAN_VNIFILTER_ENTRY,
+ __VXLAN_VNIFILTER_MAX
+};
+#define VXLAN_VNIFILTER_MAX (__VXLAN_VNIFILTER_MAX - 1)
+
enum {
IFLA_VXLAN_UNSPEC,
IFLA_VXLAN_ID,
@@ -748,6 +852,8 @@ enum {
IFLA_VXLAN_GPE,
IFLA_VXLAN_TTL_INHERIT,
IFLA_VXLAN_DF,
+ IFLA_VXLAN_VNIFILTER, /* only applicable with COLLECT_METADATA mode */
+ IFLA_VXLAN_LOCALBYPASS,
__IFLA_VXLAN_MAX
};
#define IFLA_VXLAN_MAX (__IFLA_VXLAN_MAX - 1)
@@ -781,6 +887,7 @@ enum {
IFLA_GENEVE_LABEL,
IFLA_GENEVE_TTL_INHERIT,
IFLA_GENEVE_DF,
+ IFLA_GENEVE_INNER_PROTO_INHERIT,
__IFLA_GENEVE_MAX
};
#define IFLA_GENEVE_MAX (__IFLA_GENEVE_MAX - 1)
@@ -826,6 +933,8 @@ enum {
IFLA_GTP_FD1,
IFLA_GTP_PDP_HASHSIZE,
IFLA_GTP_ROLE,
+ IFLA_GTP_CREATE_SOCKETS,
+ IFLA_GTP_RESTART_COUNT,
__IFLA_GTP_MAX,
};
#define IFLA_GTP_MAX (__IFLA_GTP_MAX - 1)
@@ -1162,6 +1271,17 @@ enum {
#define IFLA_STATS_FILTER_BIT(ATTR) (1 << (ATTR - 1))
+enum {
+ IFLA_STATS_GETSET_UNSPEC,
+ IFLA_STATS_GET_FILTERS, /* Nest of IFLA_STATS_LINK_xxx, each a u32 with
+ * a filter mask for the corresponding group.
+ */
+ IFLA_STATS_SET_OFFLOAD_XSTATS_L3_STATS, /* 0 or 1 as u8 */
+ __IFLA_STATS_GETSET_MAX,
+};
+
+#define IFLA_STATS_GETSET_MAX (__IFLA_STATS_GETSET_MAX - 1)
+
/* These are embedded into IFLA_STATS_LINK_XSTATS:
* [IFLA_STATS_LINK_XSTATS]
* -> [LINK_XSTATS_TYPE_xxx]
@@ -1179,10 +1299,21 @@ enum {
enum {
IFLA_OFFLOAD_XSTATS_UNSPEC,
IFLA_OFFLOAD_XSTATS_CPU_HIT, /* struct rtnl_link_stats64 */
+ IFLA_OFFLOAD_XSTATS_HW_S_INFO, /* HW stats info. A nest */
+ IFLA_OFFLOAD_XSTATS_L3_STATS, /* struct rtnl_hw_stats64 */
__IFLA_OFFLOAD_XSTATS_MAX
};
#define IFLA_OFFLOAD_XSTATS_MAX (__IFLA_OFFLOAD_XSTATS_MAX - 1)
+enum {
+ IFLA_OFFLOAD_XSTATS_HW_S_INFO_UNSPEC,
+ IFLA_OFFLOAD_XSTATS_HW_S_INFO_REQUEST, /* u8 */
+ IFLA_OFFLOAD_XSTATS_HW_S_INFO_USED, /* u8 */
+ __IFLA_OFFLOAD_XSTATS_HW_S_INFO_MAX,
+};
+#define IFLA_OFFLOAD_XSTATS_HW_S_INFO_MAX \
+ (__IFLA_OFFLOAD_XSTATS_HW_S_INFO_MAX - 1)
+
/* XDP section */
#define XDP_FLAGS_UPDATE_IF_NOEXIST (1U << 0)
@@ -1281,4 +1412,14 @@ enum {
#define IFLA_MCTP_MAX (__IFLA_MCTP_MAX - 1)
+/* DSA section */
+
+enum {
+ IFLA_DSA_UNSPEC,
+ IFLA_DSA_MASTER,
+ __IFLA_DSA_MAX,
+};
+
+#define IFLA_DSA_MAX (__IFLA_DSA_MAX - 1)
+
#endif /* _UAPI_LINUX_IF_LINK_H */
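The netkit enums above define the verdicts a netkit program returns to the stack and the two operating modes of the device pair. A minimal sketch of a primary-side program built on those values (the SEC() name comes from the libbpf change later in this diff; the pass-everything policy is purely illustrative, and the enum is repeated locally only to keep the sketch self-contained):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Local mirror of enum netkit_action from the if_link.h hunk above. */
enum netkit_action {
	NETKIT_NEXT	= -1,	/* continue with the next program in the chain */
	NETKIT_PASS	= 0,	/* accept the packet */
	NETKIT_DROP	= 2,	/* drop the packet */
	NETKIT_REDIRECT	= 7,	/* redirect the packet */
};

SEC("netkit/primary")
int nk_primary(struct __sk_buff *skb)
{
	/* Let everything through; a real policy would inspect skb here. */
	return NETKIT_PASS;
}

char LICENSE[] SEC("license") = "GPL";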
diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h
index c1634b95c223..2943a151d4f1 100644
--- a/tools/include/uapi/linux/netdev.h
+++ b/tools/include/uapi/linux/netdev.h
@@ -38,11 +38,27 @@ enum netdev_xdp_act {
NETDEV_XDP_ACT_MASK = 127,
};
+/**
+ * enum netdev_xdp_rx_metadata
+ * @NETDEV_XDP_RX_METADATA_TIMESTAMP: Device is capable of exposing receive HW
+ * timestamp via bpf_xdp_metadata_rx_timestamp().
+ * @NETDEV_XDP_RX_METADATA_HASH: Device is capable of exposing receive packet
+ * hash via bpf_xdp_metadata_rx_hash().
+ */
+enum netdev_xdp_rx_metadata {
+ NETDEV_XDP_RX_METADATA_TIMESTAMP = 1,
+ NETDEV_XDP_RX_METADATA_HASH = 2,
+
+ /* private: */
+ NETDEV_XDP_RX_METADATA_MASK = 3,
+};
+
enum {
NETDEV_A_DEV_IFINDEX = 1,
NETDEV_A_DEV_PAD,
NETDEV_A_DEV_XDP_FEATURES,
NETDEV_A_DEV_XDP_ZC_MAX_SEGS,
+ NETDEV_A_DEV_XDP_RX_METADATA_FEATURES,
__NETDEV_A_DEV_MAX,
NETDEV_A_DEV_MAX = (__NETDEV_A_DEV_MAX - 1)
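NETDEV_A_DEV_XDP_RX_METADATA_FEATURES carries the new netdev_xdp_rx_metadata bits as a per-device mask. A small, self-contained sketch of decoding such a mask (the mask value is hard-coded here; in practice it would come from a netdev-family netlink dump):

#include <stdint.h>
#include <stdio.h>

/* Values taken from enum netdev_xdp_rx_metadata above. */
#define RX_META_TIMESTAMP	1
#define RX_META_HASH		2

static void print_rx_metadata(uint64_t mask)
{
	printf("rx hw timestamp: %s\n", (mask & RX_META_TIMESTAMP) ? "yes" : "no");
	printf("rx hw hash:      %s\n", (mask & RX_META_HASH) ? "yes" : "no");
}

int main(void)
{
	/* Pretend the device advertised both capabilities. */
	print_rx_metadata(RX_META_TIMESTAMP | RX_META_HASH);
	return 0;
}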
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index b0f1913763a3..9dc9625651dc 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -810,6 +810,22 @@ int bpf_link_create(int prog_fd, int target_fd,
if (!OPTS_ZEROED(opts, tcx))
return libbpf_err(-EINVAL);
break;
+ case BPF_NETKIT_PRIMARY:
+ case BPF_NETKIT_PEER:
+ relative_fd = OPTS_GET(opts, netkit.relative_fd, 0);
+ relative_id = OPTS_GET(opts, netkit.relative_id, 0);
+ if (relative_fd && relative_id)
+ return libbpf_err(-EINVAL);
+ if (relative_id) {
+ attr.link_create.netkit.relative_id = relative_id;
+ attr.link_create.flags |= BPF_F_ID;
+ } else {
+ attr.link_create.netkit.relative_fd = relative_fd;
+ }
+ attr.link_create.netkit.expected_revision = OPTS_GET(opts, netkit.expected_revision, 0);
+ if (!OPTS_ZEROED(opts, netkit))
+ return libbpf_err(-EINVAL);
+ break;
default:
if (!OPTS_ZEROED(opts, flags))
return libbpf_err(-EINVAL);
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 74c2887cfd24..d0f53772bdc0 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -415,6 +415,11 @@ struct bpf_link_create_opts {
__u32 relative_id;
__u64 expected_revision;
} tcx;
+ struct {
+ __u32 relative_fd;
+ __u32 relative_id;
+ __u64 expected_revision;
+ } netkit;
};
size_t :0;
};
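The new netkit branch in bpf_link_create() mirrors tcx: the target is an ifindex and the opts select placement and an expected attach revision. A minimal user-space sketch, assuming prog_fd refers to an already loaded program and ifindex to the primary netkit device:

#include <bpf/bpf.h>
#include <bpf/libbpf.h>

static int attach_netkit_primary(int prog_fd, int ifindex)
{
	LIBBPF_OPTS(bpf_link_create_opts, opts,
		/* 0 keeps the default placement; relative_fd/relative_id,
		 * combined with placement flags, pick a position relative
		 * to an already attached program, as with tcx.
		 */
		.netkit.expected_revision = 0,
	);

	/* For netkit (and tcx) the second argument is the ifindex. */
	return bpf_link_create(prog_fd, ifindex, BPF_NETKIT_PRIMARY, &opts);
}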
diff --git a/tools/lib/bpf/bpf_helpers.h b/tools/lib/bpf/bpf_helpers.h
index bbab9ad9dc5a..77ceea575dc7 100644
--- a/tools/lib/bpf/bpf_helpers.h
+++ b/tools/lib/bpf/bpf_helpers.h
@@ -181,6 +181,7 @@ enum libbpf_tristate {
#define __ksym __attribute__((section(".ksyms")))
#define __kptr_untrusted __attribute__((btf_type_tag("kptr_untrusted")))
#define __kptr __attribute__((btf_type_tag("kptr")))
+#define __percpu_kptr __attribute__((btf_type_tag("percpu_kptr")))
#define bpf_ksym_exists(sym) ({ \
_Static_assert(!__builtin_constant_p(!!sym), #sym " should be marked as __weak"); \
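__percpu_kptr tags a map value field as holding a pointer to a per-CPU allocated object (the local per-CPU kptr feature from this cycle). A declaration-only sketch; allocation and release of the object go through the per-CPU object kfuncs, which are not shown here:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct pkt_counters {
	__u64 rx;
	__u64 tx;
};

/* Map value embedding a per-CPU kptr; a program would resolve the
 * pointer for the current CPU before touching the counters.
 */
struct map_value {
	struct pkt_counters __percpu_kptr *counters;
};

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct map_value);
} counter_map SEC(".maps");

char LICENSE[] SEC("license") = "GPL";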
diff --git a/tools/lib/bpf/bpf_tracing.h b/tools/lib/bpf/bpf_tracing.h
index 3803479dbe10..1c13f8e88833 100644
--- a/tools/lib/bpf/bpf_tracing.h
+++ b/tools/lib/bpf/bpf_tracing.h
@@ -362,8 +362,6 @@ struct pt_regs___arm64 {
#define __PT_PARM7_REG a6
#define __PT_PARM8_REG a7
-/* riscv does not select ARCH_HAS_SYSCALL_WRAPPER. */
-#define PT_REGS_SYSCALL_REGS(ctx) ctx
#define __PT_PARM1_SYSCALL_REG __PT_PARM1_REG
#define __PT_PARM2_SYSCALL_REG __PT_PARM2_REG
#define __PT_PARM3_SYSCALL_REG __PT_PARM3_REG
diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
index 8484b563b53d..ee95fd379d4d 100644
--- a/tools/lib/bpf/btf.c
+++ b/tools/lib/bpf/btf.c
@@ -448,6 +448,165 @@ static int btf_parse_type_sec(struct btf *btf)
return 0;
}
+static int btf_validate_str(const struct btf *btf, __u32 str_off, const char *what, __u32 type_id)
+{
+ const char *s;
+
+ s = btf__str_by_offset(btf, str_off);
+ if (!s) {
+ pr_warn("btf: type [%u]: invalid %s (string offset %u)\n", type_id, what, str_off);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int btf_validate_id(const struct btf *btf, __u32 id, __u32 ctx_id)
+{
+ const struct btf_type *t;
+
+ t = btf__type_by_id(btf, id);
+ if (!t) {
+ pr_warn("btf: type [%u]: invalid referenced type ID %u\n", ctx_id, id);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int btf_validate_type(const struct btf *btf, const struct btf_type *t, __u32 id)
+{
+ __u32 kind = btf_kind(t);
+ int err, i, n;
+
+ err = btf_validate_str(btf, t->name_off, "type name", id);
+ if (err)
+ return err;
+
+ switch (kind) {
+ case BTF_KIND_UNKN:
+ case BTF_KIND_INT:
+ case BTF_KIND_FWD:
+ case BTF_KIND_FLOAT:
+ break;
+ case BTF_KIND_PTR:
+ case BTF_KIND_TYPEDEF:
+ case BTF_KIND_VOLATILE:
+ case BTF_KIND_CONST:
+ case BTF_KIND_RESTRICT:
+ case BTF_KIND_VAR:
+ case BTF_KIND_DECL_TAG:
+ case BTF_KIND_TYPE_TAG:
+ err = btf_validate_id(btf, t->type, id);
+ if (err)
+ return err;
+ break;
+ case BTF_KIND_ARRAY: {
+ const struct btf_array *a = btf_array(t);
+
+ err = btf_validate_id(btf, a->type, id);
+ err = err ?: btf_validate_id(btf, a->index_type, id);
+ if (err)
+ return err;
+ break;
+ }
+ case BTF_KIND_STRUCT:
+ case BTF_KIND_UNION: {
+ const struct btf_member *m = btf_members(t);
+
+ n = btf_vlen(t);
+ for (i = 0; i < n; i++, m++) {
+ err = btf_validate_str(btf, m->name_off, "field name", id);
+ err = err ?: btf_validate_id(btf, m->type, id);
+ if (err)
+ return err;
+ }
+ break;
+ }
+ case BTF_KIND_ENUM: {
+ const struct btf_enum *m = btf_enum(t);
+
+ n = btf_vlen(t);
+ for (i = 0; i < n; i++, m++) {
+ err = btf_validate_str(btf, m->name_off, "enum name", id);
+ if (err)
+ return err;
+ }
+ break;
+ }
+ case BTF_KIND_ENUM64: {
+ const struct btf_enum64 *m = btf_enum64(t);
+
+ n = btf_vlen(t);
+ for (i = 0; i < n; i++, m++) {
+ err = btf_validate_str(btf, m->name_off, "enum name", id);
+ if (err)
+ return err;
+ }
+ break;
+ }
+ case BTF_KIND_FUNC: {
+ const struct btf_type *ft;
+
+ err = btf_validate_id(btf, t->type, id);
+ if (err)
+ return err;
+ ft = btf__type_by_id(btf, t->type);
+ if (btf_kind(ft) != BTF_KIND_FUNC_PROTO) {
+ pr_warn("btf: type [%u]: referenced type [%u] is not FUNC_PROTO\n", id, t->type);
+ return -EINVAL;
+ }
+ break;
+ }
+ case BTF_KIND_FUNC_PROTO: {
+ const struct btf_param *m = btf_params(t);
+
+ n = btf_vlen(t);
+ for (i = 0; i < n; i++, m++) {
+ err = btf_validate_str(btf, m->name_off, "param name", id);
+ err = err ?: btf_validate_id(btf, m->type, id);
+ if (err)
+ return err;
+ }
+ break;
+ }
+ case BTF_KIND_DATASEC: {
+ const struct btf_var_secinfo *m = btf_var_secinfos(t);
+
+ n = btf_vlen(t);
+ for (i = 0; i < n; i++, m++) {
+ err = btf_validate_id(btf, m->type, id);
+ if (err)
+ return err;
+ }
+ break;
+ }
+ default:
+ pr_warn("btf: type [%u]: unrecognized kind %u\n", id, kind);
+ return -EINVAL;
+ }
+ return 0;
+}
+
+/* Validate basic sanity of BTF. It's intentionally less thorough than the
+ * kernel's validation and only checks properties of BTF that libbpf relies
+ * on to be correct (e.g., valid type IDs, valid string offsets, etc.).
+ */
+static int btf_sanity_check(const struct btf *btf)
+{
+ const struct btf_type *t;
+ __u32 i, n = btf__type_cnt(btf);
+ int err;
+
+ for (i = 1; i < n; i++) {
+ t = btf_type_by_id(btf, i);
+ err = btf_validate_type(btf, t, i);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
__u32 btf__type_cnt(const struct btf *btf)
{
return btf->start_id + btf->nr_types;
@@ -902,6 +1061,7 @@ static struct btf *btf_new(const void *data, __u32 size, struct btf *base_btf)
err = btf_parse_str_sec(btf);
err = err ?: btf_parse_type_sec(btf);
+ err = err ?: btf_sanity_check(btf);
if (err)
goto done;
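With btf_sanity_check() wired into btf_new(), malformed string offsets, dangling type IDs and unknown kinds are rejected as soon as the blob is parsed instead of surfacing later. A caller-side sketch, where raw_data/raw_size stand in for a .BTF section image read elsewhere:

#include <errno.h>
#include <stdio.h>
#include <bpf/btf.h>

static struct btf *parse_btf_blob(const void *raw_data, __u32 raw_size)
{
	struct btf *btf;

	btf = btf__new(raw_data, raw_size);
	if (!btf) {
		/* Validation failures are reported as -EINVAL via errno. */
		fprintf(stderr, "invalid BTF: %d\n", errno);
		return NULL;
	}
	return btf;
}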
diff --git a/tools/lib/bpf/elf.c b/tools/lib/bpf/elf.c
index 9d0296c1726a..2a62bf411bb3 100644
--- a/tools/lib/bpf/elf.c
+++ b/tools/lib/bpf/elf.c
@@ -1,5 +1,8 @@
// SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
+#ifndef _GNU_SOURCE
+#define _GNU_SOURCE
+#endif
#include <libelf.h>
#include <gelf.h>
#include <fcntl.h>
@@ -10,6 +13,17 @@
#define STRERR_BUFSIZE 128
+/* A SHT_GNU_versym section holds 16-bit words. This bit is set if
+ * the symbol is hidden and can only be seen when referenced using an
+ * explicit version number. This is a GNU extension.
+ */
+#define VERSYM_HIDDEN 0x8000
+
+/* This is the mask for the rest of the data in a word read from a
+ * SHT_GNU_versym section.
+ */
+#define VERSYM_VERSION 0x7fff
+
int elf_open(const char *binary_path, struct elf_fd *elf_fd)
{
char errmsg[STRERR_BUFSIZE];
@@ -64,13 +78,18 @@ struct elf_sym {
const char *name;
GElf_Sym sym;
GElf_Shdr sh;
+ int ver;
+ bool hidden;
};
struct elf_sym_iter {
Elf *elf;
Elf_Data *syms;
+ Elf_Data *versyms;
+ Elf_Data *verdefs;
size_t nr_syms;
size_t strtabidx;
+ size_t verdef_strtabidx;
size_t next_sym_idx;
struct elf_sym sym;
int st_type;
@@ -111,6 +130,27 @@ static int elf_sym_iter_new(struct elf_sym_iter *iter,
iter->nr_syms = iter->syms->d_size / sh.sh_entsize;
iter->elf = elf;
iter->st_type = st_type;
+
+ /* Version symbol table is meaningful to dynsym only */
+ if (sh_type != SHT_DYNSYM)
+ return 0;
+
+ scn = elf_find_next_scn_by_type(elf, SHT_GNU_versym, NULL);
+ if (!scn)
+ return 0;
+ iter->versyms = elf_getdata(scn, 0);
+
+ scn = elf_find_next_scn_by_type(elf, SHT_GNU_verdef, NULL);
+ if (!scn)
+ return 0;
+
+ iter->verdefs = elf_getdata(scn, 0);
+ if (!iter->verdefs || !gelf_getshdr(scn, &sh)) {
+ pr_warn("elf: failed to get verdef ELF section in '%s'\n", binary_path);
+ return -EINVAL;
+ }
+ iter->verdef_strtabidx = sh.sh_link;
+
return 0;
}
@@ -119,6 +159,7 @@ static struct elf_sym *elf_sym_iter_next(struct elf_sym_iter *iter)
struct elf_sym *ret = &iter->sym;
GElf_Sym *sym = &ret->sym;
const char *name = NULL;
+ GElf_Versym versym;
Elf_Scn *sym_scn;
size_t idx;
@@ -138,12 +179,83 @@ static struct elf_sym *elf_sym_iter_next(struct elf_sym_iter *iter)
iter->next_sym_idx = idx + 1;
ret->name = name;
+ ret->ver = 0;
+ ret->hidden = false;
+
+ if (iter->versyms) {
+ if (!gelf_getversym(iter->versyms, idx, &versym))
+ continue;
+ ret->ver = versym & VERSYM_VERSION;
+ ret->hidden = versym & VERSYM_HIDDEN;
+ }
return ret;
}
return NULL;
}
+static const char *elf_get_vername(struct elf_sym_iter *iter, int ver)
+{
+ GElf_Verdaux verdaux;
+ GElf_Verdef verdef;
+ int offset;
+
+ if (!iter->verdefs)
+ return NULL;
+
+ offset = 0;
+ while (gelf_getverdef(iter->verdefs, offset, &verdef)) {
+ if (verdef.vd_ndx != ver) {
+ if (!verdef.vd_next)
+ break;
+
+ offset += verdef.vd_next;
+ continue;
+ }
+
+ if (!gelf_getverdaux(iter->verdefs, offset + verdef.vd_aux, &verdaux))
+ break;
+
+ return elf_strptr(iter->elf, iter->verdef_strtabidx, verdaux.vda_name);
+
+ }
+ return NULL;
+}
+
+static bool symbol_match(struct elf_sym_iter *iter, int sh_type, struct elf_sym *sym,
+ const char *name, size_t name_len, const char *lib_ver)
+{
+ const char *ver_name;
+
+ /* Symbols come in the forms func, func@LIB_VER or func@@LIB_VER;
+ * make sure the func part matches the user-specified name.
+ */
+ if (strncmp(sym->name, name, name_len) != 0)
+ return false;
+
+ /* ...but we don't want a search for "foo" to also match "foo2", so any
+ * additional characters in sym->name should be of the form "@LIB_VER" or "@@LIB_VER".
+ */
+ if (sym->name[name_len] != '\0' && sym->name[name_len] != '@')
+ return false;
+
+ /* If the user did not specify a symbol version, we have a match */
+ if (!lib_ver)
+ return true;
+
+ /* If the user specified a symbol version then, for dynamic symbols,
+ * get the version name from the ELF verdef section for comparison.
+ */
+ if (sh_type == SHT_DYNSYM) {
+ ver_name = elf_get_vername(iter, sym->ver);
+ if (!ver_name)
+ return false;
+ return strcmp(ver_name, lib_ver) == 0;
+ }
+
+ /* For normal symbols, the name is already in the form func@LIB_VER */
+ return strcmp(sym->name, name) == 0;
+}
/* Transform symbol's virtual address (absolute for binaries and relative
* for shared libs) into file offset, which is what kernel is expecting
@@ -166,7 +278,8 @@ static unsigned long elf_sym_offset(struct elf_sym *sym)
long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name)
{
int i, sh_types[2] = { SHT_DYNSYM, SHT_SYMTAB };
- bool is_shared_lib, is_name_qualified;
+ const char *at_symbol, *lib_ver;
+ bool is_shared_lib;
long ret = -ENOENT;
size_t name_len;
GElf_Ehdr ehdr;
@@ -179,9 +292,18 @@ long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name)
/* for shared lib case, we do not need to calculate relative offset */
is_shared_lib = ehdr.e_type == ET_DYN;
- name_len = strlen(name);
- /* Does name specify "@@LIB"? */
- is_name_qualified = strstr(name, "@@") != NULL;
+ /* Does name specify "@@LIB_VER" or "@LIB_VER" ? */
+ at_symbol = strchr(name, '@');
+ if (at_symbol) {
+ name_len = at_symbol - name;
+ /* skip second @ if it's @@LIB_VER case */
+ if (at_symbol[1] == '@')
+ at_symbol++;
+ lib_ver = at_symbol + 1;
+ } else {
+ name_len = strlen(name);
+ lib_ver = NULL;
+ }
/* Search SHT_DYNSYM, SHT_SYMTAB for symbol. This search order is used because if
* a binary is stripped, it may only have SHT_DYNSYM, and a fully-statically
@@ -201,20 +323,17 @@ long elf_find_func_offset(Elf *elf, const char *binary_path, const char *name)
goto out;
while ((sym = elf_sym_iter_next(&iter))) {
- /* User can specify func, func@@LIB or func@@LIB_VERSION. */
- if (strncmp(sym->name, name, name_len) != 0)
- continue;
- /* ...but we don't want a search for "foo" to match 'foo2" also, so any
- * additional characters in sname should be of the form "@@LIB".
- */
- if (!is_name_qualified && sym->name[name_len] != '\0' && sym->name[name_len] != '@')
+ if (!symbol_match(&iter, sh_types[i], sym, name, name_len, lib_ver))
continue;
cur_bind = GELF_ST_BIND(sym->sym.st_info);
if (ret > 0) {
/* handle multiple matches */
- if (last_bind != STB_WEAK && cur_bind != STB_WEAK) {
+ if (elf_sym_offset(sym) == ret) {
+ /* same offset, no problem */
+ continue;
+ } else if (last_bind != STB_WEAK && cur_bind != STB_WEAK) {
/* Only accept one non-weak bind. */
pr_warn("elf: ambiguous match for '%s', '%s' in '%s'\n",
sym->name, name, binary_path);
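With versioned symbol matching in elf_find_func_offset(), a uprobe can be pinned to a specific symbol version using the func@LIB_VER (or func@@LIB_VER) notation. A sketch using the opts-based attach API; the libc path and the GLIBC version string are illustrative assumptions, not values taken from this series:

#include <bpf/libbpf.h>

static struct bpf_link *attach_versioned_uprobe(struct bpf_program *prog)
{
	LIBBPF_OPTS(bpf_uprobe_opts, opts,
		.func_name = "pthread_rwlock_wrlock@GLIBC_2.34",
		.retprobe = false,
	);

	/* pid -1 == all processes; offset 0 is resolved from func_name. */
	return bpf_program__attach_uprobe_opts(prog, -1,
					       "/lib/x86_64-linux-gnu/libc.so.6",
					       0, &opts);
}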
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 96ff1aa4bf6a..e067be95da3c 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -82,17 +82,22 @@ static const char * const attach_type_name[] = {
[BPF_CGROUP_INET6_BIND] = "cgroup_inet6_bind",
[BPF_CGROUP_INET4_CONNECT] = "cgroup_inet4_connect",
[BPF_CGROUP_INET6_CONNECT] = "cgroup_inet6_connect",
+ [BPF_CGROUP_UNIX_CONNECT] = "cgroup_unix_connect",
[BPF_CGROUP_INET4_POST_BIND] = "cgroup_inet4_post_bind",
[BPF_CGROUP_INET6_POST_BIND] = "cgroup_inet6_post_bind",
[BPF_CGROUP_INET4_GETPEERNAME] = "cgroup_inet4_getpeername",
[BPF_CGROUP_INET6_GETPEERNAME] = "cgroup_inet6_getpeername",
+ [BPF_CGROUP_UNIX_GETPEERNAME] = "cgroup_unix_getpeername",
[BPF_CGROUP_INET4_GETSOCKNAME] = "cgroup_inet4_getsockname",
[BPF_CGROUP_INET6_GETSOCKNAME] = "cgroup_inet6_getsockname",
+ [BPF_CGROUP_UNIX_GETSOCKNAME] = "cgroup_unix_getsockname",
[BPF_CGROUP_UDP4_SENDMSG] = "cgroup_udp4_sendmsg",
[BPF_CGROUP_UDP6_SENDMSG] = "cgroup_udp6_sendmsg",
+ [BPF_CGROUP_UNIX_SENDMSG] = "cgroup_unix_sendmsg",
[BPF_CGROUP_SYSCTL] = "cgroup_sysctl",
[BPF_CGROUP_UDP4_RECVMSG] = "cgroup_udp4_recvmsg",
[BPF_CGROUP_UDP6_RECVMSG] = "cgroup_udp6_recvmsg",
+ [BPF_CGROUP_UNIX_RECVMSG] = "cgroup_unix_recvmsg",
[BPF_CGROUP_GETSOCKOPT] = "cgroup_getsockopt",
[BPF_CGROUP_SETSOCKOPT] = "cgroup_setsockopt",
[BPF_SK_SKB_STREAM_PARSER] = "sk_skb_stream_parser",
@@ -121,6 +126,8 @@ static const char * const attach_type_name[] = {
[BPF_TCX_INGRESS] = "tcx_ingress",
[BPF_TCX_EGRESS] = "tcx_egress",
[BPF_TRACE_UPROBE_MULTI] = "trace_uprobe_multi",
+ [BPF_NETKIT_PRIMARY] = "netkit_primary",
+ [BPF_NETKIT_PEER] = "netkit_peer",
};
static const char * const link_type_name[] = {
@@ -137,6 +144,7 @@ static const char * const link_type_name[] = {
[BPF_LINK_TYPE_NETFILTER] = "netfilter",
[BPF_LINK_TYPE_TCX] = "tcx",
[BPF_LINK_TYPE_UPROBE_MULTI] = "uprobe_multi",
+ [BPF_LINK_TYPE_NETKIT] = "netkit",
};
static const char * const map_type_name[] = {
@@ -436,9 +444,11 @@ struct bpf_program {
int fd;
bool autoload;
bool autoattach;
+ bool sym_global;
bool mark_btf_static;
enum bpf_prog_type type;
enum bpf_attach_type expected_attach_type;
+ int exception_cb_idx;
int prog_ifindex;
__u32 attach_btf_obj_fd;
@@ -765,6 +775,7 @@ bpf_object__init_prog(struct bpf_object *obj, struct bpf_program *prog,
prog->type = BPF_PROG_TYPE_UNSPEC;
prog->fd = -1;
+ prog->exception_cb_idx = -1;
/* libbpf's convention for SEC("?abc...") is that it's just like
* SEC("abc...") but the corresponding bpf_program starts out with
@@ -871,14 +882,16 @@ bpf_object__add_programs(struct bpf_object *obj, Elf_Data *sec_data,
if (err)
return err;
+ if (ELF64_ST_BIND(sym->st_info) != STB_LOCAL)
+ prog->sym_global = true;
+
/* if function is a global/weak symbol, but has restricted
* (STV_HIDDEN or STV_INTERNAL) visibility, mark its BTF FUNC
* as static to enable more permissive BPF verification mode
* with more outside context available to BPF verifier
*/
- if (ELF64_ST_BIND(sym->st_info) != STB_LOCAL
- && (ELF64_ST_VISIBILITY(sym->st_other) == STV_HIDDEN
- || ELF64_ST_VISIBILITY(sym->st_other) == STV_INTERNAL))
+ if (prog->sym_global && (ELF64_ST_VISIBILITY(sym->st_other) == STV_HIDDEN
+ || ELF64_ST_VISIBILITY(sym->st_other) == STV_INTERNAL))
prog->mark_btf_static = true;
nr_progs++;
@@ -3142,6 +3155,86 @@ static int bpf_object__sanitize_and_load_btf(struct bpf_object *obj)
}
}
+ if (!kernel_supports(obj, FEAT_BTF_DECL_TAG))
+ goto skip_exception_cb;
+ for (i = 0; i < obj->nr_programs; i++) {
+ struct bpf_program *prog = &obj->programs[i];
+ int j, k, n;
+
+ if (prog_is_subprog(obj, prog))
+ continue;
+ n = btf__type_cnt(obj->btf);
+ for (j = 1; j < n; j++) {
+ const char *str = "exception_callback:", *name;
+ size_t len = strlen(str);
+ struct btf_type *t;
+
+ t = btf_type_by_id(obj->btf, j);
+ if (!btf_is_decl_tag(t) || btf_decl_tag(t)->component_idx != -1)
+ continue;
+
+ name = btf__str_by_offset(obj->btf, t->name_off);
+ if (strncmp(name, str, len))
+ continue;
+
+ t = btf_type_by_id(obj->btf, t->type);
+ if (!btf_is_func(t) || btf_func_linkage(t) != BTF_FUNC_GLOBAL) {
+ pr_warn("prog '%s': exception_callback:<value> decl tag not applied to the main program\n",
+ prog->name);
+ return -EINVAL;
+ }
+ if (strcmp(prog->name, btf__str_by_offset(obj->btf, t->name_off)))
+ continue;
+ /* Multiple callbacks are specified for the same prog;
+ * the verifier will eventually return an error for this
+ * case, so simply skip appending a subprog.
+ */
+ if (prog->exception_cb_idx >= 0) {
+ prog->exception_cb_idx = -1;
+ break;
+ }
+
+ name += len;
+ if (str_is_empty(name)) {
+ pr_warn("prog '%s': exception_callback:<value> decl tag contains empty value\n",
+ prog->name);
+ return -EINVAL;
+ }
+
+ for (k = 0; k < obj->nr_programs; k++) {
+ struct bpf_program *subprog = &obj->programs[k];
+
+ if (!prog_is_subprog(obj, subprog))
+ continue;
+ if (strcmp(name, subprog->name))
+ continue;
+ /* Enforce non-hidden visibility: from the verifier's point of
+ * view these must be global functions, whereas mark_btf_static
+ * fixes up the linkage as static.
+ */
+ if (!subprog->sym_global || subprog->mark_btf_static) {
+ pr_warn("prog '%s': exception callback %s must be a global non-hidden function\n",
+ prog->name, subprog->name);
+ return -EINVAL;
+ }
+ /* Let's see if we already saw a static exception callback with the same name */
+ if (prog->exception_cb_idx >= 0) {
+ pr_warn("prog '%s': multiple subprogs with same name as exception callback '%s'\n",
+ prog->name, subprog->name);
+ return -EINVAL;
+ }
+ prog->exception_cb_idx = k;
+ break;
+ }
+
+ if (prog->exception_cb_idx >= 0)
+ continue;
+ pr_warn("prog '%s': cannot find exception callback '%s'\n", prog->name, name);
+ return -ENOENT;
+ }
+ }
+skip_exception_cb:
+
sanitize = btf_needs_sanitization(obj);
if (sanitize) {
const void *raw_data;
@@ -6235,13 +6328,45 @@ static int append_subprog_relos(struct bpf_program *main_prog, struct bpf_progra
}
static int
+bpf_object__append_subprog_code(struct bpf_object *obj, struct bpf_program *main_prog,
+ struct bpf_program *subprog)
+{
+ struct bpf_insn *insns;
+ size_t new_cnt;
+ int err;
+
+ subprog->sub_insn_off = main_prog->insns_cnt;
+
+ new_cnt = main_prog->insns_cnt + subprog->insns_cnt;
+ insns = libbpf_reallocarray(main_prog->insns, new_cnt, sizeof(*insns));
+ if (!insns) {
+ pr_warn("prog '%s': failed to realloc prog code\n", main_prog->name);
+ return -ENOMEM;
+ }
+ main_prog->insns = insns;
+ main_prog->insns_cnt = new_cnt;
+
+ memcpy(main_prog->insns + subprog->sub_insn_off, subprog->insns,
+ subprog->insns_cnt * sizeof(*insns));
+
+ pr_debug("prog '%s': added %zu insns from sub-prog '%s'\n",
+ main_prog->name, subprog->insns_cnt, subprog->name);
+
+ /* The subprog insns are now appended. Append its relos too. */
+ err = append_subprog_relos(main_prog, subprog);
+ if (err)
+ return err;
+ return 0;
+}
+
+static int
bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog,
struct bpf_program *prog)
{
- size_t sub_insn_idx, insn_idx, new_cnt;
+ size_t sub_insn_idx, insn_idx;
struct bpf_program *subprog;
- struct bpf_insn *insns, *insn;
struct reloc_desc *relo;
+ struct bpf_insn *insn;
int err;
err = reloc_prog_func_and_line_info(obj, main_prog, prog);
@@ -6316,25 +6441,7 @@ bpf_object__reloc_code(struct bpf_object *obj, struct bpf_program *main_prog,
* and relocate.
*/
if (subprog->sub_insn_off == 0) {
- subprog->sub_insn_off = main_prog->insns_cnt;
-
- new_cnt = main_prog->insns_cnt + subprog->insns_cnt;
- insns = libbpf_reallocarray(main_prog->insns, new_cnt, sizeof(*insns));
- if (!insns) {
- pr_warn("prog '%s': failed to realloc prog code\n", main_prog->name);
- return -ENOMEM;
- }
- main_prog->insns = insns;
- main_prog->insns_cnt = new_cnt;
-
- memcpy(main_prog->insns + subprog->sub_insn_off, subprog->insns,
- subprog->insns_cnt * sizeof(*insns));
-
- pr_debug("prog '%s': added %zu insns from sub-prog '%s'\n",
- main_prog->name, subprog->insns_cnt, subprog->name);
-
- /* The subprog insns are now appended. Append its relos too. */
- err = append_subprog_relos(main_prog, subprog);
+ err = bpf_object__append_subprog_code(obj, main_prog, subprog);
if (err)
return err;
err = bpf_object__reloc_code(obj, main_prog, subprog);
@@ -6568,6 +6675,25 @@ bpf_object__relocate(struct bpf_object *obj, const char *targ_btf_path)
prog->name, err);
return err;
}
+
+ /* Now, also append exception callback if it has not been done already. */
+ if (prog->exception_cb_idx >= 0) {
+ struct bpf_program *subprog = &obj->programs[prog->exception_cb_idx];
+
+ /* Calling the exception callback directly is disallowed and the
+ * verifier will reject it later. If it has already been processed,
+ * we can skip this step; otherwise, for all other valid cases, we
+ * have to append the exception callback now.
+ */
+ if (subprog->sub_insn_off == 0) {
+ err = bpf_object__append_subprog_code(obj, prog, subprog);
+ if (err)
+ return err;
+ err = bpf_object__reloc_code(obj, prog, subprog);
+ if (err)
+ return err;
+ }
+ }
}
/* Process data relos for main programs */
for (i = 0; i < obj->nr_programs; i++) {
@@ -8792,6 +8918,8 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("tc", SCHED_CLS, 0, SEC_NONE), /* deprecated / legacy, use tcx */
SEC_DEF("classifier", SCHED_CLS, 0, SEC_NONE), /* deprecated / legacy, use tcx */
SEC_DEF("action", SCHED_ACT, 0, SEC_NONE), /* deprecated / legacy, use tcx */
+ SEC_DEF("netkit/primary", SCHED_CLS, BPF_NETKIT_PRIMARY, SEC_NONE),
+ SEC_DEF("netkit/peer", SCHED_CLS, BPF_NETKIT_PEER, SEC_NONE),
SEC_DEF("tracepoint+", TRACEPOINT, 0, SEC_NONE, attach_tp),
SEC_DEF("tp+", TRACEPOINT, 0, SEC_NONE, attach_tp),
SEC_DEF("raw_tracepoint+", RAW_TRACEPOINT, 0, SEC_NONE, attach_raw_tp),
@@ -8842,14 +8970,19 @@ static const struct bpf_sec_def section_defs[] = {
SEC_DEF("cgroup/bind6", CGROUP_SOCK_ADDR, BPF_CGROUP_INET6_BIND, SEC_ATTACHABLE),
SEC_DEF("cgroup/connect4", CGROUP_SOCK_ADDR, BPF_CGROUP_INET4_CONNECT, SEC_ATTACHABLE),
SEC_DEF("cgroup/connect6", CGROUP_SOCK_ADDR, BPF_CGROUP_INET6_CONNECT, SEC_ATTACHABLE),
+ SEC_DEF("cgroup/connect_unix", CGROUP_SOCK_ADDR, BPF_CGROUP_UNIX_CONNECT, SEC_ATTACHABLE),
SEC_DEF("cgroup/sendmsg4", CGROUP_SOCK_ADDR, BPF_CGROUP_UDP4_SENDMSG, SEC_ATTACHABLE),
SEC_DEF("cgroup/sendmsg6", CGROUP_SOCK_ADDR, BPF_CGROUP_UDP6_SENDMSG, SEC_ATTACHABLE),
+ SEC_DEF("cgroup/sendmsg_unix", CGROUP_SOCK_ADDR, BPF_CGROUP_UNIX_SENDMSG, SEC_ATTACHABLE),
SEC_DEF("cgroup/recvmsg4", CGROUP_SOCK_ADDR, BPF_CGROUP_UDP4_RECVMSG, SEC_ATTACHABLE),
SEC_DEF("cgroup/recvmsg6", CGROUP_SOCK_ADDR, BPF_CGROUP_UDP6_RECVMSG, SEC_ATTACHABLE),
+ SEC_DEF("cgroup/recvmsg_unix", CGROUP_SOCK_ADDR, BPF_CGROUP_UNIX_RECVMSG, SEC_ATTACHABLE),
SEC_DEF("cgroup/getpeername4", CGROUP_SOCK_ADDR, BPF_CGROUP_INET4_GETPEERNAME, SEC_ATTACHABLE),
SEC_DEF("cgroup/getpeername6", CGROUP_SOCK_ADDR, BPF_CGROUP_INET6_GETPEERNAME, SEC_ATTACHABLE),
+ SEC_DEF("cgroup/getpeername_unix", CGROUP_SOCK_ADDR, BPF_CGROUP_UNIX_GETPEERNAME, SEC_ATTACHABLE),
SEC_DEF("cgroup/getsockname4", CGROUP_SOCK_ADDR, BPF_CGROUP_INET4_GETSOCKNAME, SEC_ATTACHABLE),
SEC_DEF("cgroup/getsockname6", CGROUP_SOCK_ADDR, BPF_CGROUP_INET6_GETSOCKNAME, SEC_ATTACHABLE),
+ SEC_DEF("cgroup/getsockname_unix", CGROUP_SOCK_ADDR, BPF_CGROUP_UNIX_GETSOCKNAME, SEC_ATTACHABLE),
SEC_DEF("cgroup/sysctl", CGROUP_SYSCTL, BPF_CGROUP_SYSCTL, SEC_ATTACHABLE),
SEC_DEF("cgroup/getsockopt", CGROUP_SOCKOPT, BPF_CGROUP_GETSOCKOPT, SEC_ATTACHABLE),
SEC_DEF("cgroup/setsockopt", CGROUP_SOCKOPT, BPF_CGROUP_SETSOCKOPT, SEC_ATTACHABLE),
@@ -10996,7 +11129,7 @@ static int attach_uprobe_multi(const struct bpf_program *prog, long cookie, stru
*link = NULL;
- n = sscanf(prog->sec_name, "%m[^/]/%m[^:]:%ms",
+ n = sscanf(prog->sec_name, "%m[^/]/%m[^:]:%m[^\n]",
&probe_type, &binary_path, &func_name);
switch (n) {
case 1:
@@ -11506,14 +11639,14 @@ err_out:
static int attach_uprobe(const struct bpf_program *prog, long cookie, struct bpf_link **link)
{
DECLARE_LIBBPF_OPTS(bpf_uprobe_opts, opts);
- char *probe_type = NULL, *binary_path = NULL, *func_name = NULL;
- int n, ret = -EINVAL;
+ char *probe_type = NULL, *binary_path = NULL, *func_name = NULL, *func_off;
+ int n, c, ret = -EINVAL;
long offset = 0;
*link = NULL;
- n = sscanf(prog->sec_name, "%m[^/]/%m[^:]:%m[a-zA-Z0-9_.]+%li",
- &probe_type, &binary_path, &func_name, &offset);
+ n = sscanf(prog->sec_name, "%m[^/]/%m[^:]:%m[^\n]",
+ &probe_type, &binary_path, &func_name);
switch (n) {
case 1:
/* handle SEC("u[ret]probe") - format is valid, but auto-attach is impossible. */
@@ -11524,7 +11657,17 @@ static int attach_uprobe(const struct bpf_program *prog, long cookie, struct bpf
prog->name, prog->sec_name);
break;
case 3:
- case 4:
+ /* Check whether the user specified `+offset`; if so, it must be
+ * the last part of the string, so make sure sscanf read to EOL.
+ */
+ func_off = strrchr(func_name, '+');
+ if (func_off) {
+ n = sscanf(func_off, "+%li%n", &offset, &c);
+ if (n == 1 && *(func_off + c) == '\0')
+ func_off[0] = '\0';
+ else
+ offset = 0;
+ }
opts.retprobe = strcmp(probe_type, "uretprobe") == 0 ||
strcmp(probe_type, "uretprobe.s") == 0;
if (opts.retprobe && offset != 0) {
@@ -11988,6 +12131,40 @@ bpf_program__attach_tcx(const struct bpf_program *prog, int ifindex,
return bpf_program_attach_fd(prog, ifindex, "tcx", &link_create_opts);
}
+struct bpf_link *
+bpf_program__attach_netkit(const struct bpf_program *prog, int ifindex,
+ const struct bpf_netkit_opts *opts)
+{
+ LIBBPF_OPTS(bpf_link_create_opts, link_create_opts);
+ __u32 relative_id;
+ int relative_fd;
+
+ if (!OPTS_VALID(opts, bpf_netkit_opts))
+ return libbpf_err_ptr(-EINVAL);
+
+ relative_id = OPTS_GET(opts, relative_id, 0);
+ relative_fd = OPTS_GET(opts, relative_fd, 0);
+
+ /* validate we don't have unexpected combinations of non-zero fields */
+ if (!ifindex) {
+ pr_warn("prog '%s': target netdevice ifindex cannot be zero\n",
+ prog->name);
+ return libbpf_err_ptr(-EINVAL);
+ }
+ if (relative_fd && relative_id) {
+ pr_warn("prog '%s': relative_fd and relative_id cannot be set at the same time\n",
+ prog->name);
+ return libbpf_err_ptr(-EINVAL);
+ }
+
+ link_create_opts.netkit.expected_revision = OPTS_GET(opts, expected_revision, 0);
+ link_create_opts.netkit.relative_fd = relative_fd;
+ link_create_opts.netkit.relative_id = relative_id;
+ link_create_opts.flags = OPTS_GET(opts, flags, 0);
+
+ return bpf_program_attach_fd(prog, ifindex, "netkit", &link_create_opts);
+}
+
struct bpf_link *bpf_program__attach_freplace(const struct bpf_program *prog,
int target_fd,
const char *attach_func_name)
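The new cgroup UNIX sockaddr section names reuse the familiar bpf_sock_addr context of their inet counterparts, so a program skeleton stays short. A minimal sketch that simply admits every AF_UNIX connect() in the attached cgroup (address inspection and rewriting are omitted):

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("cgroup/connect_unix")
int allow_unix_connect(struct bpf_sock_addr *ctx)
{
	/* 1 permits the connection, 0 rejects it with -EPERM. */
	return 1;
}

char LICENSE[] SEC("license") = "GPL";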
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 0e52621cba43..6cd9c501624f 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -800,6 +800,21 @@ LIBBPF_API struct bpf_link *
bpf_program__attach_tcx(const struct bpf_program *prog, int ifindex,
const struct bpf_tcx_opts *opts);
+struct bpf_netkit_opts {
+ /* size of this struct, for forward/backward compatibility */
+ size_t sz;
+ __u32 flags;
+ __u32 relative_fd;
+ __u32 relative_id;
+ __u64 expected_revision;
+ size_t :0;
+};
+#define bpf_netkit_opts__last_field expected_revision
+
+LIBBPF_API struct bpf_link *
+bpf_program__attach_netkit(const struct bpf_program *prog, int ifindex,
+ const struct bpf_netkit_opts *opts);
+
struct bpf_map;
LIBBPF_API struct bpf_link *bpf_map__attach_struct_ops(const struct bpf_map *map);
@@ -1229,6 +1244,7 @@ LIBBPF_API int bpf_tc_query(const struct bpf_tc_hook *hook,
/* Ring buffer APIs */
struct ring_buffer;
+struct ring;
struct user_ring_buffer;
typedef int (*ring_buffer_sample_fn)(void *ctx, void *data, size_t size);
@@ -1249,6 +1265,78 @@ LIBBPF_API int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms);
LIBBPF_API int ring_buffer__consume(struct ring_buffer *rb);
LIBBPF_API int ring_buffer__epoll_fd(const struct ring_buffer *rb);
+/**
+ * @brief **ring_buffer__ring()** returns the ringbuffer object inside a given
+ * ringbuffer manager representing a single BPF_MAP_TYPE_RINGBUF map instance.
+ *
+ * @param rb A ringbuffer manager object.
+ * @param idx An index into the ringbuffers contained within the ringbuffer
+ * manager object. The index is 0-based and corresponds to the order in which
+ * ring_buffer__add was called.
+ * @return A ringbuffer object on success; NULL and errno set if the index is
+ * invalid.
+ */
+LIBBPF_API struct ring *ring_buffer__ring(struct ring_buffer *rb,
+ unsigned int idx);
+
+/**
+ * @brief **ring__consumer_pos()** returns the current consumer position in the
+ * given ringbuffer.
+ *
+ * @param r A ringbuffer object.
+ * @return The current consumer position.
+ */
+LIBBPF_API unsigned long ring__consumer_pos(const struct ring *r);
+
+/**
+ * @brief **ring__producer_pos()** returns the current producer position in the
+ * given ringbuffer.
+ *
+ * @param r A ringbuffer object.
+ * @return The current producer position.
+ */
+LIBBPF_API unsigned long ring__producer_pos(const struct ring *r);
+
+/**
+ * @brief **ring__avail_data_size()** returns the number of bytes in the
+ * ringbuffer not yet consumed. This has no locking associated with it, so it
+ * can be inaccurate if operations are ongoing while this is called. However, it
+ * should still show the correct trend over the long-term.
+ *
+ * @param r A ringbuffer object.
+ * @return The number of bytes not yet consumed.
+ */
+LIBBPF_API size_t ring__avail_data_size(const struct ring *r);
+
+/**
+ * @brief **ring__size()** returns the total size of the ringbuffer's map data
+ * area (excluding special producer/consumer pages). Effectively this gives the
+ * amount of usable bytes of data inside the ringbuffer.
+ *
+ * @param r A ringbuffer object.
+ * @return The total size of the ringbuffer map data area.
+ */
+LIBBPF_API size_t ring__size(const struct ring *r);
+
+/**
+ * @brief **ring__map_fd()** returns the file descriptor underlying the given
+ * ringbuffer.
+ *
+ * @param r A ringbuffer object.
+ * @return The underlying ringbuffer file descriptor.
+ */
+LIBBPF_API int ring__map_fd(const struct ring *r);
+
+/**
+ * @brief **ring__consume()** consumes available ringbuffer data without event
+ * polling.
+ *
+ * @param r A ringbuffer object.
+ * @return The number of records consumed (or INT_MAX, whichever is less), or
+ * a negative number if any of the callbacks return an error.
+ */
+LIBBPF_API int ring__consume(struct ring *r);
+
struct user_ring_buffer_opts {
size_t sz; /* size of this struct, for forward/backward compatibility */
};
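The ring accessors declared above make it possible to look inside a ring buffer manager, for example to report per-ring backlog or to drain a single ring without epoll. A sketch, assuming rb was built with ring_buffer__new() plus ring_cnt - 1 calls to ring_buffer__add():

#include <stdio.h>
#include <bpf/libbpf.h>

static int drain_all_rings(struct ring_buffer *rb, int ring_cnt)
{
	int i, n, total = 0;

	for (i = 0; i < ring_cnt; i++) {
		struct ring *r = ring_buffer__ring(rb, i);

		if (!r)
			return -1;	/* errno is set to ERANGE for a bad index */

		printf("ring %d (map fd %d): %zu of %zu bytes pending\n",
		       i, ring__map_fd(r), ring__avail_data_size(r), ring__size(r));

		n = ring__consume(r);	/* records handed to the sample callback */
		if (n < 0)
			return n;
		total += n;
	}
	return total;
}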
diff --git a/tools/lib/bpf/libbpf.map b/tools/lib/bpf/libbpf.map
index 57712321490f..b52dc28dc8af 100644
--- a/tools/lib/bpf/libbpf.map
+++ b/tools/lib/bpf/libbpf.map
@@ -398,6 +398,14 @@ LIBBPF_1.3.0 {
bpf_object__unpin;
bpf_prog_detach_opts;
bpf_program__attach_netfilter;
+ bpf_program__attach_netkit;
bpf_program__attach_tcx;
bpf_program__attach_uprobe_multi;
+ ring__avail_data_size;
+ ring__consume;
+ ring__consumer_pos;
+ ring__map_fd;
+ ring__producer_pos;
+ ring__size;
+ ring_buffer__ring;
} LIBBPF_1.2.0;
diff --git a/tools/lib/bpf/ringbuf.c b/tools/lib/bpf/ringbuf.c
index 02199364db13..aacb64278a01 100644
--- a/tools/lib/bpf/ringbuf.c
+++ b/tools/lib/bpf/ringbuf.c
@@ -34,7 +34,7 @@ struct ring {
struct ring_buffer {
struct epoll_event *events;
- struct ring *rings;
+ struct ring **rings;
size_t page_size;
int epoll_fd;
int ring_cnt;
@@ -57,7 +57,7 @@ struct ringbuf_hdr {
__u32 pad;
};
-static void ringbuf_unmap_ring(struct ring_buffer *rb, struct ring *r)
+static void ringbuf_free_ring(struct ring_buffer *rb, struct ring *r)
{
if (r->consumer_pos) {
munmap(r->consumer_pos, rb->page_size);
@@ -67,6 +67,8 @@ static void ringbuf_unmap_ring(struct ring_buffer *rb, struct ring *r)
munmap(r->producer_pos, rb->page_size + 2 * (r->mask + 1));
r->producer_pos = NULL;
}
+
+ free(r);
}
/* Add extra RINGBUF maps to this ring buffer manager */
@@ -107,8 +109,10 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
return libbpf_err(-ENOMEM);
rb->events = tmp;
- r = &rb->rings[rb->ring_cnt];
- memset(r, 0, sizeof(*r));
+ r = calloc(1, sizeof(*r));
+ if (!r)
+ return libbpf_err(-ENOMEM);
+ rb->rings[rb->ring_cnt] = r;
r->map_fd = map_fd;
r->sample_cb = sample_cb;
@@ -121,7 +125,7 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
err = -errno;
pr_warn("ringbuf: failed to mmap consumer page for map fd=%d: %d\n",
map_fd, err);
- return libbpf_err(err);
+ goto err_out;
}
r->consumer_pos = tmp;
@@ -131,16 +135,16 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
*/
mmap_sz = rb->page_size + 2 * (__u64)info.max_entries;
if (mmap_sz != (__u64)(size_t)mmap_sz) {
+ err = -E2BIG;
pr_warn("ringbuf: ring buffer size (%u) is too big\n", info.max_entries);
- return libbpf_err(-E2BIG);
+ goto err_out;
}
tmp = mmap(NULL, (size_t)mmap_sz, PROT_READ, MAP_SHARED, map_fd, rb->page_size);
if (tmp == MAP_FAILED) {
err = -errno;
- ringbuf_unmap_ring(rb, r);
pr_warn("ringbuf: failed to mmap data pages for map fd=%d: %d\n",
map_fd, err);
- return libbpf_err(err);
+ goto err_out;
}
r->producer_pos = tmp;
r->data = tmp + rb->page_size;
@@ -152,14 +156,17 @@ int ring_buffer__add(struct ring_buffer *rb, int map_fd,
e->data.fd = rb->ring_cnt;
if (epoll_ctl(rb->epoll_fd, EPOLL_CTL_ADD, map_fd, e) < 0) {
err = -errno;
- ringbuf_unmap_ring(rb, r);
pr_warn("ringbuf: failed to epoll add map fd=%d: %d\n",
map_fd, err);
- return libbpf_err(err);
+ goto err_out;
}
rb->ring_cnt++;
return 0;
+
+err_out:
+ ringbuf_free_ring(rb, r);
+ return libbpf_err(err);
}
void ring_buffer__free(struct ring_buffer *rb)
@@ -170,7 +177,7 @@ void ring_buffer__free(struct ring_buffer *rb)
return;
for (i = 0; i < rb->ring_cnt; ++i)
- ringbuf_unmap_ring(rb, &rb->rings[i]);
+ ringbuf_free_ring(rb, rb->rings[i]);
if (rb->epoll_fd >= 0)
close(rb->epoll_fd);
@@ -278,7 +285,7 @@ int ring_buffer__consume(struct ring_buffer *rb)
int i;
for (i = 0; i < rb->ring_cnt; i++) {
- struct ring *ring = &rb->rings[i];
+ struct ring *ring = rb->rings[i];
err = ringbuf_process_ring(ring);
if (err < 0)
@@ -305,7 +312,7 @@ int ring_buffer__poll(struct ring_buffer *rb, int timeout_ms)
for (i = 0; i < cnt; i++) {
__u32 ring_id = rb->events[i].data.fd;
- struct ring *ring = &rb->rings[ring_id];
+ struct ring *ring = rb->rings[ring_id];
err = ringbuf_process_ring(ring);
if (err < 0)
@@ -323,6 +330,58 @@ int ring_buffer__epoll_fd(const struct ring_buffer *rb)
return rb->epoll_fd;
}
+struct ring *ring_buffer__ring(struct ring_buffer *rb, unsigned int idx)
+{
+ if (idx >= rb->ring_cnt)
+ return errno = ERANGE, NULL;
+
+ return rb->rings[idx];
+}
+
+unsigned long ring__consumer_pos(const struct ring *r)
+{
+ /* Synchronizes with smp_store_release() in ringbuf_process_ring(). */
+ return smp_load_acquire(r->consumer_pos);
+}
+
+unsigned long ring__producer_pos(const struct ring *r)
+{
+ /* Synchronizes with smp_store_release() in __bpf_ringbuf_reserve() in
+ * the kernel.
+ */
+ return smp_load_acquire(r->producer_pos);
+}
+
+size_t ring__avail_data_size(const struct ring *r)
+{
+ unsigned long cons_pos, prod_pos;
+
+ cons_pos = ring__consumer_pos(r);
+ prod_pos = ring__producer_pos(r);
+ return prod_pos - cons_pos;
+}
+
+size_t ring__size(const struct ring *r)
+{
+ return r->mask + 1;
+}
+
+int ring__map_fd(const struct ring *r)
+{
+ return r->map_fd;
+}
+
+int ring__consume(struct ring *r)
+{
+ int64_t res;
+
+ res = ringbuf_process_ring(r);
+ if (res < 0)
+ return libbpf_err(res);
+
+ return res > INT_MAX ? INT_MAX : res;
+}
+
static void user_ringbuf_unmap_ring(struct user_ring_buffer *rb)
{
if (rb->consumer_pos) {
diff --git a/tools/net/ynl/Makefile b/tools/net/ynl/Makefile
index 8156f03e23ac..d664b36deb5b 100644
--- a/tools/net/ynl/Makefile
+++ b/tools/net/ynl/Makefile
@@ -3,7 +3,6 @@
SUBDIRS = lib generated samples
all: $(SUBDIRS)
- ./ynl-regen.sh -f -p $(PWD)/../../../
$(SUBDIRS):
@if [ -f "$@/Makefile" ] ; then \
diff --git a/tools/net/ynl/cli.py b/tools/net/ynl/cli.py
index 564ecf07cd2c..2ad9ec0f5545 100755
--- a/tools/net/ynl/cli.py
+++ b/tools/net/ynl/cli.py
@@ -27,6 +27,7 @@ def main():
const=Netlink.NLM_F_CREATE)
parser.add_argument('--append', dest='flags', action='append_const',
const=Netlink.NLM_F_APPEND)
+ parser.add_argument('--process-unknown', action=argparse.BooleanOptionalAction)
args = parser.parse_args()
if args.no_schema:
@@ -36,7 +37,7 @@ def main():
if args.json_text:
attrs = json.loads(args.json_text)
- ynl = YnlFamily(args.spec, args.schema)
+ ynl = YnlFamily(args.spec, args.schema, args.process_unknown)
if args.ntf:
ynl.ntf_subscribe(args.ntf)
diff --git a/tools/net/ynl/generated/Makefile b/tools/net/ynl/generated/Makefile
index c1935b01902e..84cbabdd02a8 100644
--- a/tools/net/ynl/generated/Makefile
+++ b/tools/net/ynl/generated/Makefile
@@ -19,7 +19,7 @@ SRCS=$(patsubst %,%-user.c,${GENS})
HDRS=$(patsubst %,%-user.h,${GENS})
OBJS=$(patsubst %,%-user.o,${GENS})
-all: protos.a $(HDRS) $(SRCS) $(KHDRS) $(KSRCS) $(UAPI) regen
+all: protos.a $(HDRS) $(SRCS) $(KHDRS) $(KSRCS) $(UAPI)
protos.a: $(OBJS)
@echo -e "\tAR $@"
@@ -27,11 +27,11 @@ protos.a: $(OBJS)
%-user.h: ../../../../Documentation/netlink/specs/%.yaml $(TOOL)
@echo -e "\tGEN $@"
- @$(TOOL) --mode user --header --spec $< $(YNL_GEN_ARG_$*) > $@
+ @$(TOOL) --mode user --header --spec $< -o $@ $(YNL_GEN_ARG_$*)
%-user.c: ../../../../Documentation/netlink/specs/%.yaml $(TOOL)
@echo -e "\tGEN $@"
- @$(TOOL) --mode user --source --spec $< $(YNL_GEN_ARG_$*) > $@
+ @$(TOOL) --mode user --source --spec $< -o $@ $(YNL_GEN_ARG_$*)
%-user.o: %-user.c %-user.h
@echo -e "\tCC $@"
diff --git a/tools/net/ynl/generated/devlink-user.c b/tools/net/ynl/generated/devlink-user.c
index 2cb2518500cb..75b744b47986 100644
--- a/tools/net/ynl/generated/devlink-user.c
+++ b/tools/net/ynl/generated/devlink-user.c
@@ -16,14 +16,25 @@
static const char * const devlink_op_strmap[] = {
[3] = "get",
[7] = "port-get",
+ [DEVLINK_CMD_PORT_NEW] = "port-new",
[13] = "sb-get",
[17] = "sb-pool-get",
[21] = "sb-port-pool-get",
[25] = "sb-tc-pool-bind-get",
+ [DEVLINK_CMD_ESWITCH_GET] = "eswitch-get",
+ [DEVLINK_CMD_DPIPE_TABLE_GET] = "dpipe-table-get",
+ [DEVLINK_CMD_DPIPE_ENTRIES_GET] = "dpipe-entries-get",
+ [DEVLINK_CMD_DPIPE_HEADERS_GET] = "dpipe-headers-get",
+ [DEVLINK_CMD_RESOURCE_DUMP] = "resource-dump",
+ [DEVLINK_CMD_RELOAD] = "reload",
[DEVLINK_CMD_PARAM_GET] = "param-get",
[DEVLINK_CMD_REGION_GET] = "region-get",
+ [DEVLINK_CMD_REGION_NEW] = "region-new",
+ [DEVLINK_CMD_REGION_READ] = "region-read",
+ [DEVLINK_CMD_PORT_PARAM_GET] = "port-param-get",
[DEVLINK_CMD_INFO_GET] = "info-get",
[DEVLINK_CMD_HEALTH_REPORTER_GET] = "health-reporter-get",
+ [DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET] = "health-reporter-dump-get",
[63] = "trap-get",
[67] = "trap-group-get",
[71] = "trap-policer-get",
@@ -51,7 +62,303 @@ const char *devlink_sb_pool_type_str(enum devlink_sb_pool_type value)
return devlink_sb_pool_type_strmap[value];
}
+static const char * const devlink_port_type_strmap[] = {
+ [0] = "notset",
+ [1] = "auto",
+ [2] = "eth",
+ [3] = "ib",
+};
+
+const char *devlink_port_type_str(enum devlink_port_type value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_port_type_strmap))
+ return NULL;
+ return devlink_port_type_strmap[value];
+}
+
+static const char * const devlink_port_flavour_strmap[] = {
+ [0] = "physical",
+ [1] = "cpu",
+ [2] = "dsa",
+ [3] = "pci_pf",
+ [4] = "pci_vf",
+ [5] = "virtual",
+ [6] = "unused",
+ [7] = "pci_sf",
+};
+
+const char *devlink_port_flavour_str(enum devlink_port_flavour value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_port_flavour_strmap))
+ return NULL;
+ return devlink_port_flavour_strmap[value];
+}
+
+static const char * const devlink_port_fn_state_strmap[] = {
+ [0] = "inactive",
+ [1] = "active",
+};
+
+const char *devlink_port_fn_state_str(enum devlink_port_fn_state value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_port_fn_state_strmap))
+ return NULL;
+ return devlink_port_fn_state_strmap[value];
+}
+
+static const char * const devlink_port_fn_opstate_strmap[] = {
+ [0] = "detached",
+ [1] = "attached",
+};
+
+const char *devlink_port_fn_opstate_str(enum devlink_port_fn_opstate value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_port_fn_opstate_strmap))
+ return NULL;
+ return devlink_port_fn_opstate_strmap[value];
+}
+
+static const char * const devlink_port_fn_attr_cap_strmap[] = {
+ [0] = "roce-bit",
+ [1] = "migratable-bit",
+};
+
+const char *devlink_port_fn_attr_cap_str(enum devlink_port_fn_attr_cap value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_port_fn_attr_cap_strmap))
+ return NULL;
+ return devlink_port_fn_attr_cap_strmap[value];
+}
+
+static const char * const devlink_sb_threshold_type_strmap[] = {
+ [0] = "static",
+ [1] = "dynamic",
+};
+
+const char *devlink_sb_threshold_type_str(enum devlink_sb_threshold_type value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_sb_threshold_type_strmap))
+ return NULL;
+ return devlink_sb_threshold_type_strmap[value];
+}
+
+static const char * const devlink_eswitch_mode_strmap[] = {
+ [0] = "legacy",
+ [1] = "switchdev",
+};
+
+const char *devlink_eswitch_mode_str(enum devlink_eswitch_mode value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_eswitch_mode_strmap))
+ return NULL;
+ return devlink_eswitch_mode_strmap[value];
+}
+
+static const char * const devlink_eswitch_inline_mode_strmap[] = {
+ [0] = "none",
+ [1] = "link",
+ [2] = "network",
+ [3] = "transport",
+};
+
+const char *
+devlink_eswitch_inline_mode_str(enum devlink_eswitch_inline_mode value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_eswitch_inline_mode_strmap))
+ return NULL;
+ return devlink_eswitch_inline_mode_strmap[value];
+}
+
+static const char * const devlink_eswitch_encap_mode_strmap[] = {
+ [0] = "none",
+ [1] = "basic",
+};
+
+const char *
+devlink_eswitch_encap_mode_str(enum devlink_eswitch_encap_mode value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_eswitch_encap_mode_strmap))
+ return NULL;
+ return devlink_eswitch_encap_mode_strmap[value];
+}
+
+static const char * const devlink_dpipe_match_type_strmap[] = {
+ [0] = "field-exact",
+};
+
+const char *devlink_dpipe_match_type_str(enum devlink_dpipe_match_type value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_dpipe_match_type_strmap))
+ return NULL;
+ return devlink_dpipe_match_type_strmap[value];
+}
+
+static const char * const devlink_dpipe_action_type_strmap[] = {
+ [0] = "field-modify",
+};
+
+const char *devlink_dpipe_action_type_str(enum devlink_dpipe_action_type value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_dpipe_action_type_strmap))
+ return NULL;
+ return devlink_dpipe_action_type_strmap[value];
+}
+
+static const char * const devlink_dpipe_field_mapping_type_strmap[] = {
+ [0] = "none",
+ [1] = "ifindex",
+};
+
+const char *
+devlink_dpipe_field_mapping_type_str(enum devlink_dpipe_field_mapping_type value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_dpipe_field_mapping_type_strmap))
+ return NULL;
+ return devlink_dpipe_field_mapping_type_strmap[value];
+}
+
+static const char * const devlink_resource_unit_strmap[] = {
+ [0] = "entry",
+};
+
+const char *devlink_resource_unit_str(enum devlink_resource_unit value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_resource_unit_strmap))
+ return NULL;
+ return devlink_resource_unit_strmap[value];
+}
+
+static const char * const devlink_reload_action_strmap[] = {
+ [1] = "driver-reinit",
+ [2] = "fw-activate",
+};
+
+const char *devlink_reload_action_str(enum devlink_reload_action value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_reload_action_strmap))
+ return NULL;
+ return devlink_reload_action_strmap[value];
+}
+
+static const char * const devlink_param_cmode_strmap[] = {
+ [0] = "runtime",
+ [1] = "driverinit",
+ [2] = "permanent",
+};
+
+const char *devlink_param_cmode_str(enum devlink_param_cmode value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_param_cmode_strmap))
+ return NULL;
+ return devlink_param_cmode_strmap[value];
+}
+
+static const char * const devlink_flash_overwrite_strmap[] = {
+ [0] = "settings-bit",
+ [1] = "identifiers-bit",
+};
+
+const char *devlink_flash_overwrite_str(enum devlink_flash_overwrite value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_flash_overwrite_strmap))
+ return NULL;
+ return devlink_flash_overwrite_strmap[value];
+}
+
+static const char * const devlink_trap_action_strmap[] = {
+ [0] = "drop",
+ [1] = "trap",
+ [2] = "mirror",
+};
+
+const char *devlink_trap_action_str(enum devlink_trap_action value)
+{
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(devlink_trap_action_strmap))
+ return NULL;
+ return devlink_trap_action_strmap[value];
+}
+
/* Policies */
+struct ynl_policy_attr devlink_dl_dpipe_match_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_MATCH_TYPE] = { .name = "dpipe-match-type", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_HEADER_ID] = { .name = "dpipe-header-id", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_HEADER_GLOBAL] = { .name = "dpipe-header-global", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_DPIPE_HEADER_INDEX] = { .name = "dpipe-header-index", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_FIELD_ID] = { .name = "dpipe-field-id", .type = YNL_PT_U32, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_match_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_match_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_match_value_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_MATCH] = { .name = "dpipe-match", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_match_nest, },
+ [DEVLINK_ATTR_DPIPE_VALUE] = { .name = "dpipe-value", .type = YNL_PT_BINARY,},
+ [DEVLINK_ATTR_DPIPE_VALUE_MASK] = { .name = "dpipe-value-mask", .type = YNL_PT_BINARY,},
+ [DEVLINK_ATTR_DPIPE_VALUE_MAPPING] = { .name = "dpipe-value-mapping", .type = YNL_PT_U32, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_match_value_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_match_value_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_action_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_ACTION_TYPE] = { .name = "dpipe-action-type", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_HEADER_ID] = { .name = "dpipe-header-id", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_HEADER_GLOBAL] = { .name = "dpipe-header-global", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_DPIPE_HEADER_INDEX] = { .name = "dpipe-header-index", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_FIELD_ID] = { .name = "dpipe-field-id", .type = YNL_PT_U32, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_action_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_action_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_action_value_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_ACTION] = { .name = "dpipe-action", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_action_nest, },
+ [DEVLINK_ATTR_DPIPE_VALUE] = { .name = "dpipe-value", .type = YNL_PT_BINARY,},
+ [DEVLINK_ATTR_DPIPE_VALUE_MASK] = { .name = "dpipe-value-mask", .type = YNL_PT_BINARY,},
+ [DEVLINK_ATTR_DPIPE_VALUE_MAPPING] = { .name = "dpipe-value-mapping", .type = YNL_PT_U32, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_action_value_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_action_value_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_field_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_FIELD_NAME] = { .name = "dpipe-field-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_DPIPE_FIELD_ID] = { .name = "dpipe-field-id", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_FIELD_BITWIDTH] = { .name = "dpipe-field-bitwidth", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_FIELD_MAPPING_TYPE] = { .name = "dpipe-field-mapping-type", .type = YNL_PT_U32, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_field_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_field_policy,
+};
+
+struct ynl_policy_attr devlink_dl_resource_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_RESOURCE_NAME] = { .name = "resource-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_RESOURCE_ID] = { .name = "resource-id", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_SIZE] = { .name = "resource-size", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_SIZE_NEW] = { .name = "resource-size-new", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_SIZE_VALID] = { .name = "resource-size-valid", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_RESOURCE_SIZE_MIN] = { .name = "resource-size-min", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_SIZE_MAX] = { .name = "resource-size-max", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_SIZE_GRAN] = { .name = "resource-size-gran", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_UNIT] = { .name = "resource-unit", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_RESOURCE_OCC] = { .name = "resource-occ", .type = YNL_PT_U64, },
+};
+
+struct ynl_policy_nest devlink_dl_resource_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_resource_policy,
+};
+
struct ynl_policy_attr devlink_dl_info_version_policy[DEVLINK_ATTR_MAX + 1] = {
[DEVLINK_ATTR_INFO_VERSION_NAME] = { .name = "info-version-name", .type = YNL_PT_NUL_STR, },
[DEVLINK_ATTR_INFO_VERSION_VALUE] = { .name = "info-version-value", .type = YNL_PT_NUL_STR, },
@@ -62,6 +369,31 @@ struct ynl_policy_nest devlink_dl_info_version_nest = {
.table = devlink_dl_info_version_policy,
};
+struct ynl_policy_attr devlink_dl_fmsg_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_FMSG_OBJ_NEST_START] = { .name = "fmsg-obj-nest-start", .type = YNL_PT_FLAG, },
+ [DEVLINK_ATTR_FMSG_PAIR_NEST_START] = { .name = "fmsg-pair-nest-start", .type = YNL_PT_FLAG, },
+ [DEVLINK_ATTR_FMSG_ARR_NEST_START] = { .name = "fmsg-arr-nest-start", .type = YNL_PT_FLAG, },
+ [DEVLINK_ATTR_FMSG_NEST_END] = { .name = "fmsg-nest-end", .type = YNL_PT_FLAG, },
+ [DEVLINK_ATTR_FMSG_OBJ_NAME] = { .name = "fmsg-obj-name", .type = YNL_PT_NUL_STR, },
+};
+
+struct ynl_policy_nest devlink_dl_fmsg_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_fmsg_policy,
+};
+
+struct ynl_policy_attr devlink_dl_port_function_policy[DEVLINK_PORT_FUNCTION_ATTR_MAX + 1] = {
+ [DEVLINK_PORT_FUNCTION_ATTR_HW_ADDR] = { .name = "hw-addr", .type = YNL_PT_BINARY,},
+ [DEVLINK_PORT_FN_ATTR_STATE] = { .name = "state", .type = YNL_PT_U8, },
+ [DEVLINK_PORT_FN_ATTR_OPSTATE] = { .name = "opstate", .type = YNL_PT_U8, },
+ [DEVLINK_PORT_FN_ATTR_CAPS] = { .name = "caps", .type = YNL_PT_BITFIELD32, },
+};
+
+struct ynl_policy_nest devlink_dl_port_function_nest = {
+ .max_attr = DEVLINK_PORT_FUNCTION_ATTR_MAX,
+ .table = devlink_dl_port_function_policy,
+};
+
struct ynl_policy_attr devlink_dl_reload_stats_entry_policy[DEVLINK_ATTR_MAX + 1] = {
[DEVLINK_ATTR_RELOAD_STATS_LIMIT] = { .name = "reload-stats-limit", .type = YNL_PT_U8, },
[DEVLINK_ATTR_RELOAD_STATS_VALUE] = { .name = "reload-stats-value", .type = YNL_PT_U32, },
@@ -81,6 +413,69 @@ struct ynl_policy_nest devlink_dl_reload_act_stats_nest = {
.table = devlink_dl_reload_act_stats_policy,
};
+struct ynl_policy_attr devlink_dl_selftest_id_policy[DEVLINK_ATTR_SELFTEST_ID_MAX + 1] = {
+ [DEVLINK_ATTR_SELFTEST_ID_FLASH] = { .name = "flash", .type = YNL_PT_FLAG, },
+};
+
+struct ynl_policy_nest devlink_dl_selftest_id_nest = {
+ .max_attr = DEVLINK_ATTR_SELFTEST_ID_MAX,
+ .table = devlink_dl_selftest_id_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_table_matches_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_MATCH] = { .name = "dpipe-match", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_match_nest, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_table_matches_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_table_matches_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_table_actions_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_ACTION] = { .name = "dpipe-action", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_action_nest, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_table_actions_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_table_actions_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_entry_match_values_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_MATCH_VALUE] = { .name = "dpipe-match-value", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_match_value_nest, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_entry_match_values_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_entry_match_values_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_entry_action_values_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_ACTION_VALUE] = { .name = "dpipe-action-value", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_action_value_nest, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_entry_action_values_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_entry_action_values_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_header_fields_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_FIELD] = { .name = "dpipe-field", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_field_nest, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_header_fields_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_header_fields_policy,
+};
+
+struct ynl_policy_attr devlink_dl_resource_list_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_RESOURCE] = { .name = "resource", .type = YNL_PT_NEST, .nest = &devlink_dl_resource_nest, },
+};
+
+struct ynl_policy_nest devlink_dl_resource_list_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_resource_list_policy,
+};
+
struct ynl_policy_attr devlink_dl_reload_act_info_policy[DEVLINK_ATTR_MAX + 1] = {
[DEVLINK_ATTR_RELOAD_ACTION] = { .name = "reload-action", .type = YNL_PT_U8, },
[DEVLINK_ATTR_RELOAD_ACTION_STATS] = { .name = "reload-action-stats", .type = YNL_PT_NEST, .nest = &devlink_dl_reload_act_stats_nest, },
@@ -91,6 +486,45 @@ struct ynl_policy_nest devlink_dl_reload_act_info_nest = {
.table = devlink_dl_reload_act_info_policy,
};
+struct ynl_policy_attr devlink_dl_dpipe_table_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_TABLE_NAME] = { .name = "dpipe-table-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_DPIPE_TABLE_SIZE] = { .name = "dpipe-table-size", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_DPIPE_TABLE_MATCHES] = { .name = "dpipe-table-matches", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_table_matches_nest, },
+ [DEVLINK_ATTR_DPIPE_TABLE_ACTIONS] = { .name = "dpipe-table-actions", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_table_actions_nest, },
+ [DEVLINK_ATTR_DPIPE_TABLE_COUNTERS_ENABLED] = { .name = "dpipe-table-counters-enabled", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_DPIPE_TABLE_RESOURCE_ID] = { .name = "dpipe-table-resource-id", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_DPIPE_TABLE_RESOURCE_UNITS] = { .name = "dpipe-table-resource-units", .type = YNL_PT_U64, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_table_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_table_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_entry_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_ENTRY_INDEX] = { .name = "dpipe-entry-index", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_DPIPE_ENTRY_MATCH_VALUES] = { .name = "dpipe-entry-match-values", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_entry_match_values_nest, },
+ [DEVLINK_ATTR_DPIPE_ENTRY_ACTION_VALUES] = { .name = "dpipe-entry-action-values", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_entry_action_values_nest, },
+ [DEVLINK_ATTR_DPIPE_ENTRY_COUNTER] = { .name = "dpipe-entry-counter", .type = YNL_PT_U64, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_entry_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_entry_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_header_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_HEADER_NAME] = { .name = "dpipe-header-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_DPIPE_HEADER_ID] = { .name = "dpipe-header-id", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_HEADER_GLOBAL] = { .name = "dpipe-header-global", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_DPIPE_HEADER_FIELDS] = { .name = "dpipe-header-fields", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_header_fields_nest, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_header_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_header_policy,
+};
+
struct ynl_policy_attr devlink_dl_reload_stats_policy[DEVLINK_ATTR_MAX + 1] = {
[DEVLINK_ATTR_RELOAD_ACTION_INFO] = { .name = "reload-action-info", .type = YNL_PT_NEST, .nest = &devlink_dl_reload_act_info_nest, },
};
@@ -100,6 +534,33 @@ struct ynl_policy_nest devlink_dl_reload_stats_nest = {
.table = devlink_dl_reload_stats_policy,
};
+struct ynl_policy_attr devlink_dl_dpipe_tables_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_TABLE] = { .name = "dpipe-table", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_table_nest, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_tables_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_tables_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_entries_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_ENTRY] = { .name = "dpipe-entry", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_entry_nest, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_entries_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_entries_policy,
+};
+
+struct ynl_policy_attr devlink_dl_dpipe_headers_policy[DEVLINK_ATTR_MAX + 1] = {
+ [DEVLINK_ATTR_DPIPE_HEADER] = { .name = "dpipe-header", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_header_nest, },
+};
+
+struct ynl_policy_nest devlink_dl_dpipe_headers_nest = {
+ .max_attr = DEVLINK_ATTR_MAX,
+ .table = devlink_dl_dpipe_headers_policy,
+};
+
struct ynl_policy_attr devlink_dl_dev_stats_policy[DEVLINK_ATTR_MAX + 1] = {
[DEVLINK_ATTR_RELOAD_STATS] = { .name = "reload-stats", .type = YNL_PT_NEST, .nest = &devlink_dl_reload_stats_nest, },
[DEVLINK_ATTR_REMOTE_RELOAD_STATS] = { .name = "remote-reload-stats", .type = YNL_PT_NEST, .nest = &devlink_dl_reload_stats_nest, },
@@ -114,12 +575,75 @@ struct ynl_policy_attr devlink_policy[DEVLINK_ATTR_MAX + 1] = {
[DEVLINK_ATTR_BUS_NAME] = { .name = "bus-name", .type = YNL_PT_NUL_STR, },
[DEVLINK_ATTR_DEV_NAME] = { .name = "dev-name", .type = YNL_PT_NUL_STR, },
[DEVLINK_ATTR_PORT_INDEX] = { .name = "port-index", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_PORT_TYPE] = { .name = "port-type", .type = YNL_PT_U16, },
+ [DEVLINK_ATTR_PORT_SPLIT_COUNT] = { .name = "port-split-count", .type = YNL_PT_U32, },
[DEVLINK_ATTR_SB_INDEX] = { .name = "sb-index", .type = YNL_PT_U32, },
[DEVLINK_ATTR_SB_POOL_INDEX] = { .name = "sb-pool-index", .type = YNL_PT_U16, },
[DEVLINK_ATTR_SB_POOL_TYPE] = { .name = "sb-pool-type", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_SB_POOL_SIZE] = { .name = "sb-pool-size", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_SB_POOL_THRESHOLD_TYPE] = { .name = "sb-pool-threshold-type", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_SB_THRESHOLD] = { .name = "sb-threshold", .type = YNL_PT_U32, },
[DEVLINK_ATTR_SB_TC_INDEX] = { .name = "sb-tc-index", .type = YNL_PT_U16, },
+ [DEVLINK_ATTR_ESWITCH_MODE] = { .name = "eswitch-mode", .type = YNL_PT_U16, },
+ [DEVLINK_ATTR_ESWITCH_INLINE_MODE] = { .name = "eswitch-inline-mode", .type = YNL_PT_U16, },
+ [DEVLINK_ATTR_DPIPE_TABLES] = { .name = "dpipe-tables", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_tables_nest, },
+ [DEVLINK_ATTR_DPIPE_TABLE] = { .name = "dpipe-table", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_table_nest, },
+ [DEVLINK_ATTR_DPIPE_TABLE_NAME] = { .name = "dpipe-table-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_DPIPE_TABLE_SIZE] = { .name = "dpipe-table-size", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_DPIPE_TABLE_MATCHES] = { .name = "dpipe-table-matches", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_table_matches_nest, },
+ [DEVLINK_ATTR_DPIPE_TABLE_ACTIONS] = { .name = "dpipe-table-actions", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_table_actions_nest, },
+ [DEVLINK_ATTR_DPIPE_TABLE_COUNTERS_ENABLED] = { .name = "dpipe-table-counters-enabled", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_DPIPE_ENTRIES] = { .name = "dpipe-entries", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_entries_nest, },
+ [DEVLINK_ATTR_DPIPE_ENTRY] = { .name = "dpipe-entry", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_entry_nest, },
+ [DEVLINK_ATTR_DPIPE_ENTRY_INDEX] = { .name = "dpipe-entry-index", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_DPIPE_ENTRY_MATCH_VALUES] = { .name = "dpipe-entry-match-values", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_entry_match_values_nest, },
+ [DEVLINK_ATTR_DPIPE_ENTRY_ACTION_VALUES] = { .name = "dpipe-entry-action-values", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_entry_action_values_nest, },
+ [DEVLINK_ATTR_DPIPE_ENTRY_COUNTER] = { .name = "dpipe-entry-counter", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_DPIPE_MATCH] = { .name = "dpipe-match", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_match_nest, },
+ [DEVLINK_ATTR_DPIPE_MATCH_VALUE] = { .name = "dpipe-match-value", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_match_value_nest, },
+ [DEVLINK_ATTR_DPIPE_MATCH_TYPE] = { .name = "dpipe-match-type", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_ACTION] = { .name = "dpipe-action", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_action_nest, },
+ [DEVLINK_ATTR_DPIPE_ACTION_VALUE] = { .name = "dpipe-action-value", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_action_value_nest, },
+ [DEVLINK_ATTR_DPIPE_ACTION_TYPE] = { .name = "dpipe-action-type", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_VALUE] = { .name = "dpipe-value", .type = YNL_PT_BINARY,},
+ [DEVLINK_ATTR_DPIPE_VALUE_MASK] = { .name = "dpipe-value-mask", .type = YNL_PT_BINARY,},
+ [DEVLINK_ATTR_DPIPE_VALUE_MAPPING] = { .name = "dpipe-value-mapping", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_HEADERS] = { .name = "dpipe-headers", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_headers_nest, },
+ [DEVLINK_ATTR_DPIPE_HEADER] = { .name = "dpipe-header", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_header_nest, },
+ [DEVLINK_ATTR_DPIPE_HEADER_NAME] = { .name = "dpipe-header-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_DPIPE_HEADER_ID] = { .name = "dpipe-header-id", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_HEADER_FIELDS] = { .name = "dpipe-header-fields", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_header_fields_nest, },
+ [DEVLINK_ATTR_DPIPE_HEADER_GLOBAL] = { .name = "dpipe-header-global", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_DPIPE_HEADER_INDEX] = { .name = "dpipe-header-index", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_FIELD] = { .name = "dpipe-field", .type = YNL_PT_NEST, .nest = &devlink_dl_dpipe_field_nest, },
+ [DEVLINK_ATTR_DPIPE_FIELD_NAME] = { .name = "dpipe-field-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_DPIPE_FIELD_ID] = { .name = "dpipe-field-id", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_FIELD_BITWIDTH] = { .name = "dpipe-field-bitwidth", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_DPIPE_FIELD_MAPPING_TYPE] = { .name = "dpipe-field-mapping-type", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_PAD] = { .name = "pad", .type = YNL_PT_IGNORE, },
+ [DEVLINK_ATTR_ESWITCH_ENCAP_MODE] = { .name = "eswitch-encap-mode", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_RESOURCE_LIST] = { .name = "resource-list", .type = YNL_PT_NEST, .nest = &devlink_dl_resource_list_nest, },
+ [DEVLINK_ATTR_RESOURCE] = { .name = "resource", .type = YNL_PT_NEST, .nest = &devlink_dl_resource_nest, },
+ [DEVLINK_ATTR_RESOURCE_NAME] = { .name = "resource-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_RESOURCE_ID] = { .name = "resource-id", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_SIZE] = { .name = "resource-size", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_SIZE_NEW] = { .name = "resource-size-new", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_SIZE_VALID] = { .name = "resource-size-valid", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_RESOURCE_SIZE_MIN] = { .name = "resource-size-min", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_SIZE_MAX] = { .name = "resource-size-max", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_SIZE_GRAN] = { .name = "resource-size-gran", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RESOURCE_UNIT] = { .name = "resource-unit", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_RESOURCE_OCC] = { .name = "resource-occ", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_DPIPE_TABLE_RESOURCE_ID] = { .name = "dpipe-table-resource-id", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_DPIPE_TABLE_RESOURCE_UNITS] = { .name = "dpipe-table-resource-units", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_PORT_FLAVOUR] = { .name = "port-flavour", .type = YNL_PT_U16, },
[DEVLINK_ATTR_PARAM_NAME] = { .name = "param-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_PARAM_TYPE] = { .name = "param-type", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_PARAM_VALUE_CMODE] = { .name = "param-value-cmode", .type = YNL_PT_U8, },
[DEVLINK_ATTR_REGION_NAME] = { .name = "region-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_REGION_SNAPSHOT_ID] = { .name = "region-snapshot-id", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_REGION_CHUNK_ADDR] = { .name = "region-chunk-addr", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_REGION_CHUNK_LEN] = { .name = "region-chunk-len", .type = YNL_PT_U64, },
[DEVLINK_ATTR_INFO_DRIVER_NAME] = { .name = "info-driver-name", .type = YNL_PT_NUL_STR, },
[DEVLINK_ATTR_INFO_SERIAL_NUMBER] = { .name = "info-serial-number", .type = YNL_PT_NUL_STR, },
[DEVLINK_ATTR_INFO_VERSION_FIXED] = { .name = "info-version-fixed", .type = YNL_PT_NEST, .nest = &devlink_dl_info_version_nest, },
@@ -127,12 +651,35 @@ struct ynl_policy_attr devlink_policy[DEVLINK_ATTR_MAX + 1] = {
[DEVLINK_ATTR_INFO_VERSION_STORED] = { .name = "info-version-stored", .type = YNL_PT_NEST, .nest = &devlink_dl_info_version_nest, },
[DEVLINK_ATTR_INFO_VERSION_NAME] = { .name = "info-version-name", .type = YNL_PT_NUL_STR, },
[DEVLINK_ATTR_INFO_VERSION_VALUE] = { .name = "info-version-value", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_FMSG] = { .name = "fmsg", .type = YNL_PT_NEST, .nest = &devlink_dl_fmsg_nest, },
+ [DEVLINK_ATTR_FMSG_OBJ_NEST_START] = { .name = "fmsg-obj-nest-start", .type = YNL_PT_FLAG, },
+ [DEVLINK_ATTR_FMSG_PAIR_NEST_START] = { .name = "fmsg-pair-nest-start", .type = YNL_PT_FLAG, },
+ [DEVLINK_ATTR_FMSG_ARR_NEST_START] = { .name = "fmsg-arr-nest-start", .type = YNL_PT_FLAG, },
+ [DEVLINK_ATTR_FMSG_NEST_END] = { .name = "fmsg-nest-end", .type = YNL_PT_FLAG, },
+ [DEVLINK_ATTR_FMSG_OBJ_NAME] = { .name = "fmsg-obj-name", .type = YNL_PT_NUL_STR, },
[DEVLINK_ATTR_HEALTH_REPORTER_NAME] = { .name = "health-reporter-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_GRACEFUL_PERIOD] = { .name = "health-reporter-graceful-period", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_AUTO_RECOVER] = { .name = "health-reporter-auto-recover", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_FLASH_UPDATE_FILE_NAME] = { .name = "flash-update-file-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_FLASH_UPDATE_COMPONENT] = { .name = "flash-update-component", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_PORT_PCI_PF_NUMBER] = { .name = "port-pci-pf-number", .type = YNL_PT_U16, },
[DEVLINK_ATTR_TRAP_NAME] = { .name = "trap-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_TRAP_ACTION] = { .name = "trap-action", .type = YNL_PT_U8, },
[DEVLINK_ATTR_TRAP_GROUP_NAME] = { .name = "trap-group-name", .type = YNL_PT_NUL_STR, },
[DEVLINK_ATTR_RELOAD_FAILED] = { .name = "reload-failed", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_NETNS_FD] = { .name = "netns-fd", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_NETNS_PID] = { .name = "netns-pid", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_NETNS_ID] = { .name = "netns-id", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_HEALTH_REPORTER_AUTO_DUMP] = { .name = "health-reporter-auto-dump", .type = YNL_PT_U8, },
[DEVLINK_ATTR_TRAP_POLICER_ID] = { .name = "trap-policer-id", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_TRAP_POLICER_RATE] = { .name = "trap-policer-rate", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_TRAP_POLICER_BURST] = { .name = "trap-policer-burst", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_PORT_FUNCTION] = { .name = "port-function", .type = YNL_PT_NEST, .nest = &devlink_dl_port_function_nest, },
+ [DEVLINK_ATTR_PORT_CONTROLLER_NUMBER] = { .name = "port-controller-number", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_FLASH_UPDATE_OVERWRITE_MASK] = { .name = "flash-update-overwrite-mask", .type = YNL_PT_BITFIELD32, },
[DEVLINK_ATTR_RELOAD_ACTION] = { .name = "reload-action", .type = YNL_PT_U8, },
+ [DEVLINK_ATTR_RELOAD_ACTIONS_PERFORMED] = { .name = "reload-actions-performed", .type = YNL_PT_BITFIELD32, },
+ [DEVLINK_ATTR_RELOAD_LIMITS] = { .name = "reload-limits", .type = YNL_PT_BITFIELD32, },
[DEVLINK_ATTR_DEV_STATS] = { .name = "dev-stats", .type = YNL_PT_NEST, .nest = &devlink_dl_dev_stats_nest, },
[DEVLINK_ATTR_RELOAD_STATS] = { .name = "reload-stats", .type = YNL_PT_NEST, .nest = &devlink_dl_reload_stats_nest, },
[DEVLINK_ATTR_RELOAD_STATS_ENTRY] = { .name = "reload-stats-entry", .type = YNL_PT_NEST, .nest = &devlink_dl_reload_stats_entry_nest, },
@@ -141,8 +688,17 @@ struct ynl_policy_attr devlink_policy[DEVLINK_ATTR_MAX + 1] = {
[DEVLINK_ATTR_REMOTE_RELOAD_STATS] = { .name = "remote-reload-stats", .type = YNL_PT_NEST, .nest = &devlink_dl_reload_stats_nest, },
[DEVLINK_ATTR_RELOAD_ACTION_INFO] = { .name = "reload-action-info", .type = YNL_PT_NEST, .nest = &devlink_dl_reload_act_info_nest, },
[DEVLINK_ATTR_RELOAD_ACTION_STATS] = { .name = "reload-action-stats", .type = YNL_PT_NEST, .nest = &devlink_dl_reload_act_stats_nest, },
+ [DEVLINK_ATTR_PORT_PCI_SF_NUMBER] = { .name = "port-pci-sf-number", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_RATE_TX_SHARE] = { .name = "rate-tx-share", .type = YNL_PT_U64, },
+ [DEVLINK_ATTR_RATE_TX_MAX] = { .name = "rate-tx-max", .type = YNL_PT_U64, },
[DEVLINK_ATTR_RATE_NODE_NAME] = { .name = "rate-node-name", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_RATE_PARENT_NODE_NAME] = { .name = "rate-parent-node-name", .type = YNL_PT_NUL_STR, },
[DEVLINK_ATTR_LINECARD_INDEX] = { .name = "linecard-index", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_LINECARD_TYPE] = { .name = "linecard-type", .type = YNL_PT_NUL_STR, },
+ [DEVLINK_ATTR_SELFTESTS] = { .name = "selftests", .type = YNL_PT_NEST, .nest = &devlink_dl_selftest_id_nest, },
+ [DEVLINK_ATTR_RATE_TX_PRIORITY] = { .name = "rate-tx-priority", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_RATE_TX_WEIGHT] = { .name = "rate-tx-weight", .type = YNL_PT_U32, },
+ [DEVLINK_ATTR_REGION_DIRECT] = { .name = "region-direct", .type = YNL_PT_FLAG, },
};
struct ynl_policy_nest devlink_nest = {
@@ -151,6 +707,370 @@ struct ynl_policy_nest devlink_nest = {
};
/* Common nested types */
+void devlink_dl_dpipe_match_free(struct devlink_dl_dpipe_match *obj)
+{
+}
+
+int devlink_dl_dpipe_match_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_match *dst = yarg->data;
+ const struct nlattr *attr;
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_MATCH_TYPE) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_match_type = 1;
+ dst->dpipe_match_type = mnl_attr_get_u32(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_HEADER_ID) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_header_id = 1;
+ dst->dpipe_header_id = mnl_attr_get_u32(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_HEADER_GLOBAL) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_header_global = 1;
+ dst->dpipe_header_global = mnl_attr_get_u8(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_HEADER_INDEX) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_header_index = 1;
+ dst->dpipe_header_index = mnl_attr_get_u32(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_FIELD_ID) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_field_id = 1;
+ dst->dpipe_field_id = mnl_attr_get_u32(attr);
+ }
+ }
+
+ return 0;
+}
+
+void
+devlink_dl_dpipe_match_value_free(struct devlink_dl_dpipe_match_value *obj)
+{
+ unsigned int i;
+
+ for (i = 0; i < obj->n_dpipe_match; i++)
+ devlink_dl_dpipe_match_free(&obj->dpipe_match[i]);
+ free(obj->dpipe_match);
+ free(obj->dpipe_value);
+ free(obj->dpipe_value_mask);
+}
+
+int devlink_dl_dpipe_match_value_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_match_value *dst = yarg->data;
+ unsigned int n_dpipe_match = 0;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+ int i;
+
+ parg.ys = yarg->ys;
+
+ if (dst->dpipe_match)
+ return ynl_error_parse(yarg, "attribute already present (dl-dpipe-match-value.dpipe-match)");
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_MATCH) {
+ n_dpipe_match++;
+ } else if (type == DEVLINK_ATTR_DPIPE_VALUE) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = mnl_attr_get_payload_len(attr);
+ dst->_present.dpipe_value_len = len;
+ dst->dpipe_value = malloc(len);
+ memcpy(dst->dpipe_value, mnl_attr_get_payload(attr), len);
+ } else if (type == DEVLINK_ATTR_DPIPE_VALUE_MASK) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = mnl_attr_get_payload_len(attr);
+ dst->_present.dpipe_value_mask_len = len;
+ dst->dpipe_value_mask = malloc(len);
+ memcpy(dst->dpipe_value_mask, mnl_attr_get_payload(attr), len);
+ } else if (type == DEVLINK_ATTR_DPIPE_VALUE_MAPPING) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_value_mapping = 1;
+ dst->dpipe_value_mapping = mnl_attr_get_u32(attr);
+ }
+ }
+
+ if (n_dpipe_match) {
+ dst->dpipe_match = calloc(n_dpipe_match, sizeof(*dst->dpipe_match));
+ dst->n_dpipe_match = n_dpipe_match;
+ i = 0;
+ parg.rsp_policy = &devlink_dl_dpipe_match_nest;
+ mnl_attr_for_each_nested(attr, nested) {
+ if (mnl_attr_get_type(attr) == DEVLINK_ATTR_DPIPE_MATCH) {
+ parg.data = &dst->dpipe_match[i];
+ if (devlink_dl_dpipe_match_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ i++;
+ }
+ }
+ }
+
+ return 0;
+}
+
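+/* The array-valued nests here and below are parsed in two passes: a first
+ * mnl_attr_for_each_nested() loop only counts the matching attributes,
+ * then a calloc()'d array is sized from that count and a second loop
+ * fills it through the per-entry *_parse() helper.
+ */
+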
+void devlink_dl_dpipe_action_free(struct devlink_dl_dpipe_action *obj)
+{
+}
+
+int devlink_dl_dpipe_action_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_action *dst = yarg->data;
+ const struct nlattr *attr;
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_ACTION_TYPE) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_action_type = 1;
+ dst->dpipe_action_type = mnl_attr_get_u32(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_HEADER_ID) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_header_id = 1;
+ dst->dpipe_header_id = mnl_attr_get_u32(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_HEADER_GLOBAL) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_header_global = 1;
+ dst->dpipe_header_global = mnl_attr_get_u8(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_HEADER_INDEX) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_header_index = 1;
+ dst->dpipe_header_index = mnl_attr_get_u32(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_FIELD_ID) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_field_id = 1;
+ dst->dpipe_field_id = mnl_attr_get_u32(attr);
+ }
+ }
+
+ return 0;
+}
+
+void
+devlink_dl_dpipe_action_value_free(struct devlink_dl_dpipe_action_value *obj)
+{
+ unsigned int i;
+
+ for (i = 0; i < obj->n_dpipe_action; i++)
+ devlink_dl_dpipe_action_free(&obj->dpipe_action[i]);
+ free(obj->dpipe_action);
+ free(obj->dpipe_value);
+ free(obj->dpipe_value_mask);
+}
+
+int devlink_dl_dpipe_action_value_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_action_value *dst = yarg->data;
+ unsigned int n_dpipe_action = 0;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+ int i;
+
+ parg.ys = yarg->ys;
+
+ if (dst->dpipe_action)
+ return ynl_error_parse(yarg, "attribute already present (dl-dpipe-action-value.dpipe-action)");
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_ACTION) {
+ n_dpipe_action++;
+ } else if (type == DEVLINK_ATTR_DPIPE_VALUE) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = mnl_attr_get_payload_len(attr);
+ dst->_present.dpipe_value_len = len;
+ dst->dpipe_value = malloc(len);
+ memcpy(dst->dpipe_value, mnl_attr_get_payload(attr), len);
+ } else if (type == DEVLINK_ATTR_DPIPE_VALUE_MASK) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = mnl_attr_get_payload_len(attr);
+ dst->_present.dpipe_value_mask_len = len;
+ dst->dpipe_value_mask = malloc(len);
+ memcpy(dst->dpipe_value_mask, mnl_attr_get_payload(attr), len);
+ } else if (type == DEVLINK_ATTR_DPIPE_VALUE_MAPPING) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_value_mapping = 1;
+ dst->dpipe_value_mapping = mnl_attr_get_u32(attr);
+ }
+ }
+
+ if (n_dpipe_action) {
+ dst->dpipe_action = calloc(n_dpipe_action, sizeof(*dst->dpipe_action));
+ dst->n_dpipe_action = n_dpipe_action;
+ i = 0;
+ parg.rsp_policy = &devlink_dl_dpipe_action_nest;
+ mnl_attr_for_each_nested(attr, nested) {
+ if (mnl_attr_get_type(attr) == DEVLINK_ATTR_DPIPE_ACTION) {
+ parg.data = &dst->dpipe_action[i];
+ if (devlink_dl_dpipe_action_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ i++;
+ }
+ }
+ }
+
+ return 0;
+}
+
+void devlink_dl_dpipe_field_free(struct devlink_dl_dpipe_field *obj)
+{
+ free(obj->dpipe_field_name);
+}
+
+int devlink_dl_dpipe_field_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_field *dst = yarg->data;
+ const struct nlattr *attr;
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_FIELD_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dpipe_field_name_len = len;
+ dst->dpipe_field_name = malloc(len + 1);
+ memcpy(dst->dpipe_field_name, mnl_attr_get_str(attr), len);
+ dst->dpipe_field_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DPIPE_FIELD_ID) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_field_id = 1;
+ dst->dpipe_field_id = mnl_attr_get_u32(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_FIELD_BITWIDTH) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_field_bitwidth = 1;
+ dst->dpipe_field_bitwidth = mnl_attr_get_u32(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_FIELD_MAPPING_TYPE) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_field_mapping_type = 1;
+ dst->dpipe_field_mapping_type = mnl_attr_get_u32(attr);
+ }
+ }
+
+ return 0;
+}
+
+void devlink_dl_resource_free(struct devlink_dl_resource *obj)
+{
+ free(obj->resource_name);
+}
+
+int devlink_dl_resource_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_resource *dst = yarg->data;
+ const struct nlattr *attr;
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_RESOURCE_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.resource_name_len = len;
+ dst->resource_name = malloc(len + 1);
+ memcpy(dst->resource_name, mnl_attr_get_str(attr), len);
+ dst->resource_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_RESOURCE_ID) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.resource_id = 1;
+ dst->resource_id = mnl_attr_get_u64(attr);
+ } else if (type == DEVLINK_ATTR_RESOURCE_SIZE) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.resource_size = 1;
+ dst->resource_size = mnl_attr_get_u64(attr);
+ } else if (type == DEVLINK_ATTR_RESOURCE_SIZE_NEW) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.resource_size_new = 1;
+ dst->resource_size_new = mnl_attr_get_u64(attr);
+ } else if (type == DEVLINK_ATTR_RESOURCE_SIZE_VALID) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.resource_size_valid = 1;
+ dst->resource_size_valid = mnl_attr_get_u8(attr);
+ } else if (type == DEVLINK_ATTR_RESOURCE_SIZE_MIN) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.resource_size_min = 1;
+ dst->resource_size_min = mnl_attr_get_u64(attr);
+ } else if (type == DEVLINK_ATTR_RESOURCE_SIZE_MAX) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.resource_size_max = 1;
+ dst->resource_size_max = mnl_attr_get_u64(attr);
+ } else if (type == DEVLINK_ATTR_RESOURCE_SIZE_GRAN) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.resource_size_gran = 1;
+ dst->resource_size_gran = mnl_attr_get_u64(attr);
+ } else if (type == DEVLINK_ATTR_RESOURCE_UNIT) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.resource_unit = 1;
+ dst->resource_unit = mnl_attr_get_u8(attr);
+ } else if (type == DEVLINK_ATTR_RESOURCE_OCC) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.resource_occ = 1;
+ dst->resource_occ = mnl_attr_get_u64(attr);
+ }
+ }
+
+ return 0;
+}
+
void devlink_dl_info_version_free(struct devlink_dl_info_version *obj)
{
free(obj->info_version_name);
@@ -194,6 +1114,77 @@ int devlink_dl_info_version_parse(struct ynl_parse_arg *yarg,
return 0;
}
+void devlink_dl_fmsg_free(struct devlink_dl_fmsg *obj)
+{
+ free(obj->fmsg_obj_name);
+}
+
+int devlink_dl_fmsg_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_fmsg *dst = yarg->data;
+ const struct nlattr *attr;
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_FMSG_OBJ_NEST_START) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.fmsg_obj_nest_start = 1;
+ } else if (type == DEVLINK_ATTR_FMSG_PAIR_NEST_START) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.fmsg_pair_nest_start = 1;
+ } else if (type == DEVLINK_ATTR_FMSG_ARR_NEST_START) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.fmsg_arr_nest_start = 1;
+ } else if (type == DEVLINK_ATTR_FMSG_NEST_END) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.fmsg_nest_end = 1;
+ } else if (type == DEVLINK_ATTR_FMSG_OBJ_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.fmsg_obj_name_len = len;
+ dst->fmsg_obj_name = malloc(len + 1);
+ memcpy(dst->fmsg_obj_name, mnl_attr_get_str(attr), len);
+ dst->fmsg_obj_name[len] = 0;
+ }
+ }
+
+ return 0;
+}
+
+void devlink_dl_port_function_free(struct devlink_dl_port_function *obj)
+{
+ free(obj->hw_addr);
+}
+
+int devlink_dl_port_function_put(struct nlmsghdr *nlh, unsigned int attr_type,
+ struct devlink_dl_port_function *obj)
+{
+ struct nlattr *nest;
+
+ nest = mnl_attr_nest_start(nlh, attr_type);
+ if (obj->_present.hw_addr_len)
+ mnl_attr_put(nlh, DEVLINK_PORT_FUNCTION_ATTR_HW_ADDR, obj->_present.hw_addr_len, obj->hw_addr);
+ if (obj->_present.state)
+ mnl_attr_put_u8(nlh, DEVLINK_PORT_FN_ATTR_STATE, obj->state);
+ if (obj->_present.opstate)
+ mnl_attr_put_u8(nlh, DEVLINK_PORT_FN_ATTR_OPSTATE, obj->opstate);
+ if (obj->_present.caps)
+ mnl_attr_put(nlh, DEVLINK_PORT_FN_ATTR_CAPS, sizeof(struct nla_bitfield32), &obj->caps);
+ mnl_attr_nest_end(nlh, nest);
+
+ return 0;
+}
+
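+/* Usage sketch (illustrative only, not part of the generated output):
+ * devlink_dl_port_function_put() is normally reached through
+ * devlink_port_set() below, with the nest filled in by the caller, e.g.:
+ *
+ *   struct devlink_dl_port_function pf = {};
+ *   static const unsigned char mac[6] = { 0x02, 0, 0, 0, 0, 0x01 };
+ *
+ *   pf.hw_addr = malloc(sizeof(mac));
+ *   memcpy(pf.hw_addr, mac, sizeof(mac));
+ *   pf._present.hw_addr_len = sizeof(mac);
+ *   pf._present.state = 1;
+ *   pf.state = DEVLINK_PORT_FN_STATE_ACTIVE;
+ *
+ * then assign pf to req->port_function and set req->_present.port_function
+ * before calling devlink_port_set().
+ */
+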
void
devlink_dl_reload_stats_entry_free(struct devlink_dl_reload_stats_entry *obj)
{
@@ -273,6 +1264,322 @@ int devlink_dl_reload_act_stats_parse(struct ynl_parse_arg *yarg,
return 0;
}
+void devlink_dl_selftest_id_free(struct devlink_dl_selftest_id *obj)
+{
+}
+
+int devlink_dl_selftest_id_put(struct nlmsghdr *nlh, unsigned int attr_type,
+ struct devlink_dl_selftest_id *obj)
+{
+ struct nlattr *nest;
+
+ nest = mnl_attr_nest_start(nlh, attr_type);
+ if (obj->_present.flash)
+ mnl_attr_put(nlh, DEVLINK_ATTR_SELFTEST_ID_FLASH, 0, NULL);
+ mnl_attr_nest_end(nlh, nest);
+
+ return 0;
+}
+
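+/* The selftest-id members are flag attributes, so the put helper above
+ * emits them as zero-length attrs (payload NULL, length 0); only the
+ * _present bits matter on the request side.
+ */
+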
+void
+devlink_dl_dpipe_table_matches_free(struct devlink_dl_dpipe_table_matches *obj)
+{
+ unsigned int i;
+
+ for (i = 0; i < obj->n_dpipe_match; i++)
+ devlink_dl_dpipe_match_free(&obj->dpipe_match[i]);
+ free(obj->dpipe_match);
+}
+
+int devlink_dl_dpipe_table_matches_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_table_matches *dst = yarg->data;
+ unsigned int n_dpipe_match = 0;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+ int i;
+
+ parg.ys = yarg->ys;
+
+ if (dst->dpipe_match)
+ return ynl_error_parse(yarg, "attribute already present (dl-dpipe-table-matches.dpipe-match)");
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_MATCH) {
+ n_dpipe_match++;
+ }
+ }
+
+ if (n_dpipe_match) {
+ dst->dpipe_match = calloc(n_dpipe_match, sizeof(*dst->dpipe_match));
+ dst->n_dpipe_match = n_dpipe_match;
+ i = 0;
+ parg.rsp_policy = &devlink_dl_dpipe_match_nest;
+ mnl_attr_for_each_nested(attr, nested) {
+ if (mnl_attr_get_type(attr) == DEVLINK_ATTR_DPIPE_MATCH) {
+ parg.data = &dst->dpipe_match[i];
+ if (devlink_dl_dpipe_match_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ i++;
+ }
+ }
+ }
+
+ return 0;
+}
+
+void
+devlink_dl_dpipe_table_actions_free(struct devlink_dl_dpipe_table_actions *obj)
+{
+ unsigned int i;
+
+ for (i = 0; i < obj->n_dpipe_action; i++)
+ devlink_dl_dpipe_action_free(&obj->dpipe_action[i]);
+ free(obj->dpipe_action);
+}
+
+int devlink_dl_dpipe_table_actions_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_table_actions *dst = yarg->data;
+ unsigned int n_dpipe_action = 0;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+ int i;
+
+ parg.ys = yarg->ys;
+
+ if (dst->dpipe_action)
+ return ynl_error_parse(yarg, "attribute already present (dl-dpipe-table-actions.dpipe-action)");
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_ACTION) {
+ n_dpipe_action++;
+ }
+ }
+
+ if (n_dpipe_action) {
+ dst->dpipe_action = calloc(n_dpipe_action, sizeof(*dst->dpipe_action));
+ dst->n_dpipe_action = n_dpipe_action;
+ i = 0;
+ parg.rsp_policy = &devlink_dl_dpipe_action_nest;
+ mnl_attr_for_each_nested(attr, nested) {
+ if (mnl_attr_get_type(attr) == DEVLINK_ATTR_DPIPE_ACTION) {
+ parg.data = &dst->dpipe_action[i];
+ if (devlink_dl_dpipe_action_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ i++;
+ }
+ }
+ }
+
+ return 0;
+}
+
+void
+devlink_dl_dpipe_entry_match_values_free(struct devlink_dl_dpipe_entry_match_values *obj)
+{
+ unsigned int i;
+
+ for (i = 0; i < obj->n_dpipe_match_value; i++)
+ devlink_dl_dpipe_match_value_free(&obj->dpipe_match_value[i]);
+ free(obj->dpipe_match_value);
+}
+
+int devlink_dl_dpipe_entry_match_values_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_entry_match_values *dst = yarg->data;
+ unsigned int n_dpipe_match_value = 0;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+ int i;
+
+ parg.ys = yarg->ys;
+
+ if (dst->dpipe_match_value)
+ return ynl_error_parse(yarg, "attribute already present (dl-dpipe-entry-match-values.dpipe-match-value)");
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_MATCH_VALUE) {
+ n_dpipe_match_value++;
+ }
+ }
+
+ if (n_dpipe_match_value) {
+ dst->dpipe_match_value = calloc(n_dpipe_match_value, sizeof(*dst->dpipe_match_value));
+ dst->n_dpipe_match_value = n_dpipe_match_value;
+ i = 0;
+ parg.rsp_policy = &devlink_dl_dpipe_match_value_nest;
+ mnl_attr_for_each_nested(attr, nested) {
+ if (mnl_attr_get_type(attr) == DEVLINK_ATTR_DPIPE_MATCH_VALUE) {
+ parg.data = &dst->dpipe_match_value[i];
+ if (devlink_dl_dpipe_match_value_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ i++;
+ }
+ }
+ }
+
+ return 0;
+}
+
+void
+devlink_dl_dpipe_entry_action_values_free(struct devlink_dl_dpipe_entry_action_values *obj)
+{
+ unsigned int i;
+
+ for (i = 0; i < obj->n_dpipe_action_value; i++)
+ devlink_dl_dpipe_action_value_free(&obj->dpipe_action_value[i]);
+ free(obj->dpipe_action_value);
+}
+
+int devlink_dl_dpipe_entry_action_values_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_entry_action_values *dst = yarg->data;
+ unsigned int n_dpipe_action_value = 0;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+ int i;
+
+ parg.ys = yarg->ys;
+
+ if (dst->dpipe_action_value)
+ return ynl_error_parse(yarg, "attribute already present (dl-dpipe-entry-action-values.dpipe-action-value)");
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_ACTION_VALUE) {
+ n_dpipe_action_value++;
+ }
+ }
+
+ if (n_dpipe_action_value) {
+ dst->dpipe_action_value = calloc(n_dpipe_action_value, sizeof(*dst->dpipe_action_value));
+ dst->n_dpipe_action_value = n_dpipe_action_value;
+ i = 0;
+ parg.rsp_policy = &devlink_dl_dpipe_action_value_nest;
+ mnl_attr_for_each_nested(attr, nested) {
+ if (mnl_attr_get_type(attr) == DEVLINK_ATTR_DPIPE_ACTION_VALUE) {
+ parg.data = &dst->dpipe_action_value[i];
+ if (devlink_dl_dpipe_action_value_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ i++;
+ }
+ }
+ }
+
+ return 0;
+}
+
+void
+devlink_dl_dpipe_header_fields_free(struct devlink_dl_dpipe_header_fields *obj)
+{
+ unsigned int i;
+
+ for (i = 0; i < obj->n_dpipe_field; i++)
+ devlink_dl_dpipe_field_free(&obj->dpipe_field[i]);
+ free(obj->dpipe_field);
+}
+
+int devlink_dl_dpipe_header_fields_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_header_fields *dst = yarg->data;
+ unsigned int n_dpipe_field = 0;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+ int i;
+
+ parg.ys = yarg->ys;
+
+ if (dst->dpipe_field)
+ return ynl_error_parse(yarg, "attribute already present (dl-dpipe-header-fields.dpipe-field)");
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_FIELD) {
+ n_dpipe_field++;
+ }
+ }
+
+ if (n_dpipe_field) {
+ dst->dpipe_field = calloc(n_dpipe_field, sizeof(*dst->dpipe_field));
+ dst->n_dpipe_field = n_dpipe_field;
+ i = 0;
+ parg.rsp_policy = &devlink_dl_dpipe_field_nest;
+ mnl_attr_for_each_nested(attr, nested) {
+ if (mnl_attr_get_type(attr) == DEVLINK_ATTR_DPIPE_FIELD) {
+ parg.data = &dst->dpipe_field[i];
+ if (devlink_dl_dpipe_field_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ i++;
+ }
+ }
+ }
+
+ return 0;
+}
+
+void devlink_dl_resource_list_free(struct devlink_dl_resource_list *obj)
+{
+ unsigned int i;
+
+ for (i = 0; i < obj->n_resource; i++)
+ devlink_dl_resource_free(&obj->resource[i]);
+ free(obj->resource);
+}
+
+int devlink_dl_resource_list_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_resource_list *dst = yarg->data;
+ unsigned int n_resource = 0;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+ int i;
+
+ parg.ys = yarg->ys;
+
+ if (dst->resource)
+ return ynl_error_parse(yarg, "attribute already present (dl-resource-list.resource)");
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_RESOURCE) {
+ n_resource++;
+ }
+ }
+
+ if (n_resource) {
+ dst->resource = calloc(n_resource, sizeof(*dst->resource));
+ dst->n_resource = n_resource;
+ i = 0;
+ parg.rsp_policy = &devlink_dl_resource_nest;
+ mnl_attr_for_each_nested(attr, nested) {
+ if (mnl_attr_get_type(attr) == DEVLINK_ATTR_RESOURCE) {
+ parg.data = &dst->resource[i];
+ if (devlink_dl_resource_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ i++;
+ }
+ }
+ }
+
+ return 0;
+}
+
void devlink_dl_reload_act_info_free(struct devlink_dl_reload_act_info *obj)
{
unsigned int i;
@@ -327,6 +1634,186 @@ int devlink_dl_reload_act_info_parse(struct ynl_parse_arg *yarg,
return 0;
}
+void devlink_dl_dpipe_table_free(struct devlink_dl_dpipe_table *obj)
+{
+ free(obj->dpipe_table_name);
+ devlink_dl_dpipe_table_matches_free(&obj->dpipe_table_matches);
+ devlink_dl_dpipe_table_actions_free(&obj->dpipe_table_actions);
+}
+
+int devlink_dl_dpipe_table_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_table *dst = yarg->data;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+
+ parg.ys = yarg->ys;
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_TABLE_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dpipe_table_name_len = len;
+ dst->dpipe_table_name = malloc(len + 1);
+ memcpy(dst->dpipe_table_name, mnl_attr_get_str(attr), len);
+ dst->dpipe_table_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DPIPE_TABLE_SIZE) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_table_size = 1;
+ dst->dpipe_table_size = mnl_attr_get_u64(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_TABLE_MATCHES) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_table_matches = 1;
+
+ parg.rsp_policy = &devlink_dl_dpipe_table_matches_nest;
+ parg.data = &dst->dpipe_table_matches;
+ if (devlink_dl_dpipe_table_matches_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ } else if (type == DEVLINK_ATTR_DPIPE_TABLE_ACTIONS) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_table_actions = 1;
+
+ parg.rsp_policy = &devlink_dl_dpipe_table_actions_nest;
+ parg.data = &dst->dpipe_table_actions;
+ if (devlink_dl_dpipe_table_actions_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ } else if (type == DEVLINK_ATTR_DPIPE_TABLE_COUNTERS_ENABLED) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_table_counters_enabled = 1;
+ dst->dpipe_table_counters_enabled = mnl_attr_get_u8(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_TABLE_RESOURCE_ID) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_table_resource_id = 1;
+ dst->dpipe_table_resource_id = mnl_attr_get_u64(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_TABLE_RESOURCE_UNITS) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_table_resource_units = 1;
+ dst->dpipe_table_resource_units = mnl_attr_get_u64(attr);
+ }
+ }
+
+ return 0;
+}
+
+void devlink_dl_dpipe_entry_free(struct devlink_dl_dpipe_entry *obj)
+{
+ devlink_dl_dpipe_entry_match_values_free(&obj->dpipe_entry_match_values);
+ devlink_dl_dpipe_entry_action_values_free(&obj->dpipe_entry_action_values);
+}
+
+int devlink_dl_dpipe_entry_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_entry *dst = yarg->data;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+
+ parg.ys = yarg->ys;
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_ENTRY_INDEX) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_entry_index = 1;
+ dst->dpipe_entry_index = mnl_attr_get_u64(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_ENTRY_MATCH_VALUES) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_entry_match_values = 1;
+
+ parg.rsp_policy = &devlink_dl_dpipe_entry_match_values_nest;
+ parg.data = &dst->dpipe_entry_match_values;
+ if (devlink_dl_dpipe_entry_match_values_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ } else if (type == DEVLINK_ATTR_DPIPE_ENTRY_ACTION_VALUES) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_entry_action_values = 1;
+
+ parg.rsp_policy = &devlink_dl_dpipe_entry_action_values_nest;
+ parg.data = &dst->dpipe_entry_action_values;
+ if (devlink_dl_dpipe_entry_action_values_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ } else if (type == DEVLINK_ATTR_DPIPE_ENTRY_COUNTER) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_entry_counter = 1;
+ dst->dpipe_entry_counter = mnl_attr_get_u64(attr);
+ }
+ }
+
+ return 0;
+}
+
+void devlink_dl_dpipe_header_free(struct devlink_dl_dpipe_header *obj)
+{
+ free(obj->dpipe_header_name);
+ devlink_dl_dpipe_header_fields_free(&obj->dpipe_header_fields);
+}
+
+int devlink_dl_dpipe_header_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_header *dst = yarg->data;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+
+ parg.ys = yarg->ys;
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_HEADER_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dpipe_header_name_len = len;
+ dst->dpipe_header_name = malloc(len + 1);
+ memcpy(dst->dpipe_header_name, mnl_attr_get_str(attr), len);
+ dst->dpipe_header_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DPIPE_HEADER_ID) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_header_id = 1;
+ dst->dpipe_header_id = mnl_attr_get_u32(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_HEADER_GLOBAL) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_header_global = 1;
+ dst->dpipe_header_global = mnl_attr_get_u8(attr);
+ } else if (type == DEVLINK_ATTR_DPIPE_HEADER_FIELDS) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_header_fields = 1;
+
+ parg.rsp_policy = &devlink_dl_dpipe_header_fields_nest;
+ parg.data = &dst->dpipe_header_fields;
+ if (devlink_dl_dpipe_header_fields_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ }
+ }
+
+ return 0;
+}
+
void devlink_dl_reload_stats_free(struct devlink_dl_reload_stats *obj)
{
unsigned int i;
@@ -376,6 +1863,153 @@ int devlink_dl_reload_stats_parse(struct ynl_parse_arg *yarg,
return 0;
}
+void devlink_dl_dpipe_tables_free(struct devlink_dl_dpipe_tables *obj)
+{
+ unsigned int i;
+
+ for (i = 0; i < obj->n_dpipe_table; i++)
+ devlink_dl_dpipe_table_free(&obj->dpipe_table[i]);
+ free(obj->dpipe_table);
+}
+
+int devlink_dl_dpipe_tables_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_tables *dst = yarg->data;
+ unsigned int n_dpipe_table = 0;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+ int i;
+
+ parg.ys = yarg->ys;
+
+ if (dst->dpipe_table)
+ return ynl_error_parse(yarg, "attribute already present (dl-dpipe-tables.dpipe-table)");
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_TABLE) {
+ n_dpipe_table++;
+ }
+ }
+
+ if (n_dpipe_table) {
+ dst->dpipe_table = calloc(n_dpipe_table, sizeof(*dst->dpipe_table));
+ dst->n_dpipe_table = n_dpipe_table;
+ i = 0;
+ parg.rsp_policy = &devlink_dl_dpipe_table_nest;
+ mnl_attr_for_each_nested(attr, nested) {
+ if (mnl_attr_get_type(attr) == DEVLINK_ATTR_DPIPE_TABLE) {
+ parg.data = &dst->dpipe_table[i];
+ if (devlink_dl_dpipe_table_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ i++;
+ }
+ }
+ }
+
+ return 0;
+}
+
+void devlink_dl_dpipe_entries_free(struct devlink_dl_dpipe_entries *obj)
+{
+ unsigned int i;
+
+ for (i = 0; i < obj->n_dpipe_entry; i++)
+ devlink_dl_dpipe_entry_free(&obj->dpipe_entry[i]);
+ free(obj->dpipe_entry);
+}
+
+int devlink_dl_dpipe_entries_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_entries *dst = yarg->data;
+ unsigned int n_dpipe_entry = 0;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+ int i;
+
+ parg.ys = yarg->ys;
+
+ if (dst->dpipe_entry)
+ return ynl_error_parse(yarg, "attribute already present (dl-dpipe-entries.dpipe-entry)");
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_ENTRY) {
+ n_dpipe_entry++;
+ }
+ }
+
+ if (n_dpipe_entry) {
+ dst->dpipe_entry = calloc(n_dpipe_entry, sizeof(*dst->dpipe_entry));
+ dst->n_dpipe_entry = n_dpipe_entry;
+ i = 0;
+ parg.rsp_policy = &devlink_dl_dpipe_entry_nest;
+ mnl_attr_for_each_nested(attr, nested) {
+ if (mnl_attr_get_type(attr) == DEVLINK_ATTR_DPIPE_ENTRY) {
+ parg.data = &dst->dpipe_entry[i];
+ if (devlink_dl_dpipe_entry_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ i++;
+ }
+ }
+ }
+
+ return 0;
+}
+
+void devlink_dl_dpipe_headers_free(struct devlink_dl_dpipe_headers *obj)
+{
+ unsigned int i;
+
+ for (i = 0; i < obj->n_dpipe_header; i++)
+ devlink_dl_dpipe_header_free(&obj->dpipe_header[i]);
+ free(obj->dpipe_header);
+}
+
+int devlink_dl_dpipe_headers_parse(struct ynl_parse_arg *yarg,
+ const struct nlattr *nested)
+{
+ struct devlink_dl_dpipe_headers *dst = yarg->data;
+ unsigned int n_dpipe_header = 0;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+ int i;
+
+ parg.ys = yarg->ys;
+
+ if (dst->dpipe_header)
+ return ynl_error_parse(yarg, "attribute already present (dl-dpipe-headers.dpipe-header)");
+
+ mnl_attr_for_each_nested(attr, nested) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_DPIPE_HEADER) {
+ n_dpipe_header++;
+ }
+ }
+
+ if (n_dpipe_header) {
+ dst->dpipe_header = calloc(n_dpipe_header, sizeof(*dst->dpipe_header));
+ dst->n_dpipe_header = n_dpipe_header;
+ i = 0;
+ parg.rsp_policy = &devlink_dl_dpipe_header_nest;
+ mnl_attr_for_each_nested(attr, nested) {
+ if (mnl_attr_get_type(attr) == DEVLINK_ATTR_DPIPE_HEADER) {
+ parg.data = &dst->dpipe_header[i];
+ if (devlink_dl_dpipe_header_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ i++;
+ }
+ }
+ }
+
+ return 0;
+}
+
void devlink_dl_dev_stats_free(struct devlink_dl_dev_stats *obj)
{
devlink_dl_reload_stats_free(&obj->reload_stats);
@@ -475,11 +2109,6 @@ int devlink_get_rsp_parse(const struct nlmsghdr *nlh, void *data)
return MNL_CB_ERROR;
dst->_present.reload_failed = 1;
dst->reload_failed = mnl_attr_get_u8(attr);
- } else if (type == DEVLINK_ATTR_RELOAD_ACTION) {
- if (ynl_attr_validate(yarg, attr))
- return MNL_CB_ERROR;
- dst->_present.reload_action = 1;
- dst->reload_action = mnl_attr_get_u8(attr);
} else if (type == DEVLINK_ATTR_DEV_STATS) {
if (ynl_attr_validate(yarg, attr))
return MNL_CB_ERROR;
@@ -756,6 +2385,241 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_PORT_SET ============== */
+/* DEVLINK_CMD_PORT_SET - do */
+void devlink_port_set_req_free(struct devlink_port_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ devlink_dl_port_function_free(&req->port_function);
+ free(req);
+}
+
+int devlink_port_set(struct ynl_sock *ys, struct devlink_port_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_PORT_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.port_type)
+ mnl_attr_put_u16(nlh, DEVLINK_ATTR_PORT_TYPE, req->port_type);
+ if (req->_present.port_function)
+ devlink_dl_port_function_put(nlh, DEVLINK_ATTR_PORT_FUNCTION, &req->port_function);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
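+/* Usage sketch (illustrative only; error handling trimmed):
+ *
+ *   struct devlink_port_set_req *req;
+ *
+ *   req = calloc(1, sizeof(*req));
+ *   req->bus_name = strdup("pci");
+ *   req->_present.bus_name_len = strlen(req->bus_name);
+ *   req->dev_name = strdup("0000:01:00.0");
+ *   req->_present.dev_name_len = strlen(req->dev_name);
+ *   req->_present.port_index = 1;
+ *   req->port_index = 1;
+ *   ret = devlink_port_set(ys, req);
+ *   devlink_port_set_req_free(req);
+ *
+ * The matching generated header is assumed to provide *_req_alloc() /
+ * *_req_set_*() helpers for this; the direct field assignment above only
+ * mirrors the _present conventions visible in this file.
+ */
+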
+/* ============== DEVLINK_CMD_PORT_NEW ============== */
+/* DEVLINK_CMD_PORT_NEW - do */
+void devlink_port_new_req_free(struct devlink_port_new_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+void devlink_port_new_rsp_free(struct devlink_port_new_rsp *rsp)
+{
+ free(rsp->bus_name);
+ free(rsp->dev_name);
+ free(rsp);
+}
+
+int devlink_port_new_rsp_parse(const struct nlmsghdr *nlh, void *data)
+{
+ struct ynl_parse_arg *yarg = data;
+ struct devlink_port_new_rsp *dst;
+ const struct nlattr *attr;
+
+ dst = yarg->data;
+
+ mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_BUS_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.bus_name_len = len;
+ dst->bus_name = malloc(len + 1);
+ memcpy(dst->bus_name, mnl_attr_get_str(attr), len);
+ dst->bus_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DEV_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dev_name_len = len;
+ dst->dev_name = malloc(len + 1);
+ memcpy(dst->dev_name, mnl_attr_get_str(attr), len);
+ dst->dev_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_PORT_INDEX) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.port_index = 1;
+ dst->port_index = mnl_attr_get_u32(attr);
+ }
+ }
+
+ return MNL_CB_OK;
+}
+
+struct devlink_port_new_rsp *
+devlink_port_new(struct ynl_sock *ys, struct devlink_port_new_req *req)
+{
+ struct ynl_req_state yrs = { .yarg = { .ys = ys, }, };
+ struct devlink_port_new_rsp *rsp;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_PORT_NEW, 1);
+ ys->req_policy = &devlink_nest;
+ yrs.yarg.rsp_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.port_flavour)
+ mnl_attr_put_u16(nlh, DEVLINK_ATTR_PORT_FLAVOUR, req->port_flavour);
+ if (req->_present.port_pci_pf_number)
+ mnl_attr_put_u16(nlh, DEVLINK_ATTR_PORT_PCI_PF_NUMBER, req->port_pci_pf_number);
+ if (req->_present.port_pci_sf_number)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_PCI_SF_NUMBER, req->port_pci_sf_number);
+ if (req->_present.port_controller_number)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_CONTROLLER_NUMBER, req->port_controller_number);
+
+ rsp = calloc(1, sizeof(*rsp));
+ yrs.yarg.data = rsp;
+ yrs.cb = devlink_port_new_rsp_parse;
+ yrs.rsp_cmd = DEVLINK_CMD_PORT_NEW;
+
+ err = ynl_exec(ys, nlh, &yrs);
+ if (err < 0)
+ goto err_free;
+
+ return rsp;
+
+err_free:
+ devlink_port_new_rsp_free(rsp);
+ return NULL;
+}
+
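+/* Usage sketch (illustrative only): devlink_port_new() returns a
+ * heap-allocated response on success and NULL on failure, e.g.:
+ *
+ *   struct devlink_port_new_rsp *rsp;
+ *
+ *   rsp = devlink_port_new(ys, req);
+ *   if (rsp) {
+ *           printf("new port index: %u\n", rsp->port_index);
+ *           devlink_port_new_rsp_free(rsp);
+ *   }
+ *
+ * rsp->port_index is only meaningful when rsp->_present.port_index was
+ * set by the parser above.
+ */
+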
+/* ============== DEVLINK_CMD_PORT_DEL ============== */
+/* DEVLINK_CMD_PORT_DEL - do */
+void devlink_port_del_req_free(struct devlink_port_del_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_port_del(struct ynl_sock *ys, struct devlink_port_del_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_PORT_DEL, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_PORT_SPLIT ============== */
+/* DEVLINK_CMD_PORT_SPLIT - do */
+void devlink_port_split_req_free(struct devlink_port_split_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_port_split(struct ynl_sock *ys, struct devlink_port_split_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_PORT_SPLIT, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.port_split_count)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_SPLIT_COUNT, req->port_split_count);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
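+/* Usage sketch (illustrative only): a split request only adds
+ * port_split_count on top of the usual bus/dev/port-index triplet:
+ *
+ *   req->_present.port_split_count = 1;
+ *   req->port_split_count = 4;
+ *   devlink_port_split(ys, req);
+ */
+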
+/* ============== DEVLINK_CMD_PORT_UNSPLIT ============== */
+/* DEVLINK_CMD_PORT_UNSPLIT - do */
+void devlink_port_unsplit_req_free(struct devlink_port_unsplit_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_port_unsplit(struct ynl_sock *ys,
+ struct devlink_port_unsplit_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_PORT_UNSPLIT, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
/* ============== DEVLINK_CMD_SB_GET ============== */
/* DEVLINK_CMD_SB_GET - do */
void devlink_sb_get_req_free(struct devlink_sb_get_req *req)
@@ -1048,6 +2912,44 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_SB_POOL_SET ============== */
+/* DEVLINK_CMD_SB_POOL_SET - do */
+void devlink_sb_pool_set_req_free(struct devlink_sb_pool_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_sb_pool_set(struct ynl_sock *ys,
+ struct devlink_sb_pool_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_SB_POOL_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.sb_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_SB_INDEX, req->sb_index);
+ if (req->_present.sb_pool_index)
+ mnl_attr_put_u16(nlh, DEVLINK_ATTR_SB_POOL_INDEX, req->sb_pool_index);
+ if (req->_present.sb_pool_threshold_type)
+ mnl_attr_put_u8(nlh, DEVLINK_ATTR_SB_POOL_THRESHOLD_TYPE, req->sb_pool_threshold_type);
+ if (req->_present.sb_pool_size)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_SB_POOL_SIZE, req->sb_pool_size);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
/* ============== DEVLINK_CMD_SB_PORT_POOL_GET ============== */
/* DEVLINK_CMD_SB_PORT_POOL_GET - do */
void
@@ -1209,6 +3111,45 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_SB_PORT_POOL_SET ============== */
+/* DEVLINK_CMD_SB_PORT_POOL_SET - do */
+void
+devlink_sb_port_pool_set_req_free(struct devlink_sb_port_pool_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_sb_port_pool_set(struct ynl_sock *ys,
+ struct devlink_sb_port_pool_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_SB_PORT_POOL_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.sb_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_SB_INDEX, req->sb_index);
+ if (req->_present.sb_pool_index)
+ mnl_attr_put_u16(nlh, DEVLINK_ATTR_SB_POOL_INDEX, req->sb_pool_index);
+ if (req->_present.sb_threshold)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_SB_THRESHOLD, req->sb_threshold);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
/* ============== DEVLINK_CMD_SB_TC_POOL_BIND_GET ============== */
/* DEVLINK_CMD_SB_TC_POOL_BIND_GET - do */
void
@@ -1378,6 +3319,840 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_SB_TC_POOL_BIND_SET ============== */
+/* DEVLINK_CMD_SB_TC_POOL_BIND_SET - do */
+void
+devlink_sb_tc_pool_bind_set_req_free(struct devlink_sb_tc_pool_bind_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_sb_tc_pool_bind_set(struct ynl_sock *ys,
+ struct devlink_sb_tc_pool_bind_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_SB_TC_POOL_BIND_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.sb_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_SB_INDEX, req->sb_index);
+ if (req->_present.sb_pool_index)
+ mnl_attr_put_u16(nlh, DEVLINK_ATTR_SB_POOL_INDEX, req->sb_pool_index);
+ if (req->_present.sb_pool_type)
+ mnl_attr_put_u8(nlh, DEVLINK_ATTR_SB_POOL_TYPE, req->sb_pool_type);
+ if (req->_present.sb_tc_index)
+ mnl_attr_put_u16(nlh, DEVLINK_ATTR_SB_TC_INDEX, req->sb_tc_index);
+ if (req->_present.sb_threshold)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_SB_THRESHOLD, req->sb_threshold);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_SB_OCC_SNAPSHOT ============== */
+/* DEVLINK_CMD_SB_OCC_SNAPSHOT - do */
+void devlink_sb_occ_snapshot_req_free(struct devlink_sb_occ_snapshot_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_sb_occ_snapshot(struct ynl_sock *ys,
+ struct devlink_sb_occ_snapshot_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_SB_OCC_SNAPSHOT, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.sb_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_SB_INDEX, req->sb_index);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_SB_OCC_MAX_CLEAR ============== */
+/* DEVLINK_CMD_SB_OCC_MAX_CLEAR - do */
+void
+devlink_sb_occ_max_clear_req_free(struct devlink_sb_occ_max_clear_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_sb_occ_max_clear(struct ynl_sock *ys,
+ struct devlink_sb_occ_max_clear_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_SB_OCC_MAX_CLEAR, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.sb_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_SB_INDEX, req->sb_index);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_ESWITCH_GET ============== */
+/* DEVLINK_CMD_ESWITCH_GET - do */
+void devlink_eswitch_get_req_free(struct devlink_eswitch_get_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+void devlink_eswitch_get_rsp_free(struct devlink_eswitch_get_rsp *rsp)
+{
+ free(rsp->bus_name);
+ free(rsp->dev_name);
+ free(rsp);
+}
+
+int devlink_eswitch_get_rsp_parse(const struct nlmsghdr *nlh, void *data)
+{
+ struct devlink_eswitch_get_rsp *dst;
+ struct ynl_parse_arg *yarg = data;
+ const struct nlattr *attr;
+
+ dst = yarg->data;
+
+ mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_BUS_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.bus_name_len = len;
+ dst->bus_name = malloc(len + 1);
+ memcpy(dst->bus_name, mnl_attr_get_str(attr), len);
+ dst->bus_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DEV_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dev_name_len = len;
+ dst->dev_name = malloc(len + 1);
+ memcpy(dst->dev_name, mnl_attr_get_str(attr), len);
+ dst->dev_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_ESWITCH_MODE) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.eswitch_mode = 1;
+ dst->eswitch_mode = mnl_attr_get_u16(attr);
+ } else if (type == DEVLINK_ATTR_ESWITCH_INLINE_MODE) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.eswitch_inline_mode = 1;
+ dst->eswitch_inline_mode = mnl_attr_get_u16(attr);
+ } else if (type == DEVLINK_ATTR_ESWITCH_ENCAP_MODE) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.eswitch_encap_mode = 1;
+ dst->eswitch_encap_mode = mnl_attr_get_u8(attr);
+ }
+ }
+
+ return MNL_CB_OK;
+}
+
+struct devlink_eswitch_get_rsp *
+devlink_eswitch_get(struct ynl_sock *ys, struct devlink_eswitch_get_req *req)
+{
+ struct ynl_req_state yrs = { .yarg = { .ys = ys, }, };
+ struct devlink_eswitch_get_rsp *rsp;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_ESWITCH_GET, 1);
+ ys->req_policy = &devlink_nest;
+ yrs.yarg.rsp_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+
+ rsp = calloc(1, sizeof(*rsp));
+ yrs.yarg.data = rsp;
+ yrs.cb = devlink_eswitch_get_rsp_parse;
+ yrs.rsp_cmd = DEVLINK_CMD_ESWITCH_GET;
+
+ err = ynl_exec(ys, nlh, &yrs);
+ if (err < 0)
+ goto err_free;
+
+ return rsp;
+
+err_free:
+ devlink_eswitch_get_rsp_free(rsp);
+ return NULL;
+}
+
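Commands with a reply, devlink_eswitch_get() included, hand back a parsed heap-allocated response, or NULL on failure, which the caller releases with the matching *_rsp_free(). A sketch under the same assumptions as the port-split example above (plus <stdio.h>); the ys->err.msg string it prints belongs to the ynl library and is an assumption of this sketch rather than something shown in this diff.

	static int show_eswitch_mode(struct ynl_sock *ys)
	{
		struct devlink_eswitch_get_req *req;
		struct devlink_eswitch_get_rsp *rsp;

		req = calloc(1, sizeof(*req));
		if (!req)
			return -1;
		req->bus_name = strdup("pci");
		req->_present.bus_name_len = strlen(req->bus_name);
		req->dev_name = strdup("0000:01:00.0");
		req->_present.dev_name_len = strlen(req->dev_name);

		rsp = devlink_eswitch_get(ys, req);
		devlink_eswitch_get_req_free(req);
		if (!rsp) {
			fprintf(stderr, "eswitch get: %s\n", ys->err.msg);
			return -1;
		}

		/* only attributes flagged in rsp->_present were actually received */
		if (rsp->_present.eswitch_mode)
			printf("eswitch mode: %u\n", rsp->eswitch_mode);
		devlink_eswitch_get_rsp_free(rsp);
		return 0;
	}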
+/* ============== DEVLINK_CMD_ESWITCH_SET ============== */
+/* DEVLINK_CMD_ESWITCH_SET - do */
+void devlink_eswitch_set_req_free(struct devlink_eswitch_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_eswitch_set(struct ynl_sock *ys,
+ struct devlink_eswitch_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_ESWITCH_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.eswitch_mode)
+ mnl_attr_put_u16(nlh, DEVLINK_ATTR_ESWITCH_MODE, req->eswitch_mode);
+ if (req->_present.eswitch_inline_mode)
+ mnl_attr_put_u16(nlh, DEVLINK_ATTR_ESWITCH_INLINE_MODE, req->eswitch_inline_mode);
+ if (req->_present.eswitch_encap_mode)
+ mnl_attr_put_u8(nlh, DEVLINK_ATTR_ESWITCH_ENCAP_MODE, req->eswitch_encap_mode);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_DPIPE_TABLE_GET ============== */
+/* DEVLINK_CMD_DPIPE_TABLE_GET - do */
+void devlink_dpipe_table_get_req_free(struct devlink_dpipe_table_get_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->dpipe_table_name);
+ free(req);
+}
+
+void devlink_dpipe_table_get_rsp_free(struct devlink_dpipe_table_get_rsp *rsp)
+{
+ free(rsp->bus_name);
+ free(rsp->dev_name);
+ devlink_dl_dpipe_tables_free(&rsp->dpipe_tables);
+ free(rsp);
+}
+
+int devlink_dpipe_table_get_rsp_parse(const struct nlmsghdr *nlh, void *data)
+{
+ struct devlink_dpipe_table_get_rsp *dst;
+ struct ynl_parse_arg *yarg = data;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+
+ dst = yarg->data;
+ parg.ys = yarg->ys;
+
+ mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_BUS_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.bus_name_len = len;
+ dst->bus_name = malloc(len + 1);
+ memcpy(dst->bus_name, mnl_attr_get_str(attr), len);
+ dst->bus_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DEV_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dev_name_len = len;
+ dst->dev_name = malloc(len + 1);
+ memcpy(dst->dev_name, mnl_attr_get_str(attr), len);
+ dst->dev_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DPIPE_TABLES) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_tables = 1;
+
+ parg.rsp_policy = &devlink_dl_dpipe_tables_nest;
+ parg.data = &dst->dpipe_tables;
+ if (devlink_dl_dpipe_tables_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ }
+ }
+
+ return MNL_CB_OK;
+}
+
+struct devlink_dpipe_table_get_rsp *
+devlink_dpipe_table_get(struct ynl_sock *ys,
+ struct devlink_dpipe_table_get_req *req)
+{
+ struct ynl_req_state yrs = { .yarg = { .ys = ys, }, };
+ struct devlink_dpipe_table_get_rsp *rsp;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_DPIPE_TABLE_GET, 1);
+ ys->req_policy = &devlink_nest;
+ yrs.yarg.rsp_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.dpipe_table_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DPIPE_TABLE_NAME, req->dpipe_table_name);
+
+ rsp = calloc(1, sizeof(*rsp));
+ yrs.yarg.data = rsp;
+ yrs.cb = devlink_dpipe_table_get_rsp_parse;
+ yrs.rsp_cmd = DEVLINK_CMD_DPIPE_TABLE_GET;
+
+ err = ynl_exec(ys, nlh, &yrs);
+ if (err < 0)
+ goto err_free;
+
+ return rsp;
+
+err_free:
+ devlink_dpipe_table_get_rsp_free(rsp);
+ return NULL;
+}
+
+/* ============== DEVLINK_CMD_DPIPE_ENTRIES_GET ============== */
+/* DEVLINK_CMD_DPIPE_ENTRIES_GET - do */
+void
+devlink_dpipe_entries_get_req_free(struct devlink_dpipe_entries_get_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->dpipe_table_name);
+ free(req);
+}
+
+void
+devlink_dpipe_entries_get_rsp_free(struct devlink_dpipe_entries_get_rsp *rsp)
+{
+ free(rsp->bus_name);
+ free(rsp->dev_name);
+ devlink_dl_dpipe_entries_free(&rsp->dpipe_entries);
+ free(rsp);
+}
+
+int devlink_dpipe_entries_get_rsp_parse(const struct nlmsghdr *nlh, void *data)
+{
+ struct devlink_dpipe_entries_get_rsp *dst;
+ struct ynl_parse_arg *yarg = data;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+
+ dst = yarg->data;
+ parg.ys = yarg->ys;
+
+ mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_BUS_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.bus_name_len = len;
+ dst->bus_name = malloc(len + 1);
+ memcpy(dst->bus_name, mnl_attr_get_str(attr), len);
+ dst->bus_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DEV_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dev_name_len = len;
+ dst->dev_name = malloc(len + 1);
+ memcpy(dst->dev_name, mnl_attr_get_str(attr), len);
+ dst->dev_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DPIPE_ENTRIES) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_entries = 1;
+
+ parg.rsp_policy = &devlink_dl_dpipe_entries_nest;
+ parg.data = &dst->dpipe_entries;
+ if (devlink_dl_dpipe_entries_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ }
+ }
+
+ return MNL_CB_OK;
+}
+
+struct devlink_dpipe_entries_get_rsp *
+devlink_dpipe_entries_get(struct ynl_sock *ys,
+ struct devlink_dpipe_entries_get_req *req)
+{
+ struct ynl_req_state yrs = { .yarg = { .ys = ys, }, };
+ struct devlink_dpipe_entries_get_rsp *rsp;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_DPIPE_ENTRIES_GET, 1);
+ ys->req_policy = &devlink_nest;
+ yrs.yarg.rsp_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.dpipe_table_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DPIPE_TABLE_NAME, req->dpipe_table_name);
+
+ rsp = calloc(1, sizeof(*rsp));
+ yrs.yarg.data = rsp;
+ yrs.cb = devlink_dpipe_entries_get_rsp_parse;
+ yrs.rsp_cmd = DEVLINK_CMD_DPIPE_ENTRIES_GET;
+
+ err = ynl_exec(ys, nlh, &yrs);
+ if (err < 0)
+ goto err_free;
+
+ return rsp;
+
+err_free:
+ devlink_dpipe_entries_get_rsp_free(rsp);
+ return NULL;
+}
+
+/* ============== DEVLINK_CMD_DPIPE_HEADERS_GET ============== */
+/* DEVLINK_CMD_DPIPE_HEADERS_GET - do */
+void
+devlink_dpipe_headers_get_req_free(struct devlink_dpipe_headers_get_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+void
+devlink_dpipe_headers_get_rsp_free(struct devlink_dpipe_headers_get_rsp *rsp)
+{
+ free(rsp->bus_name);
+ free(rsp->dev_name);
+ devlink_dl_dpipe_headers_free(&rsp->dpipe_headers);
+ free(rsp);
+}
+
+int devlink_dpipe_headers_get_rsp_parse(const struct nlmsghdr *nlh, void *data)
+{
+ struct devlink_dpipe_headers_get_rsp *dst;
+ struct ynl_parse_arg *yarg = data;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+
+ dst = yarg->data;
+ parg.ys = yarg->ys;
+
+ mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_BUS_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.bus_name_len = len;
+ dst->bus_name = malloc(len + 1);
+ memcpy(dst->bus_name, mnl_attr_get_str(attr), len);
+ dst->bus_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DEV_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dev_name_len = len;
+ dst->dev_name = malloc(len + 1);
+ memcpy(dst->dev_name, mnl_attr_get_str(attr), len);
+ dst->dev_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DPIPE_HEADERS) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.dpipe_headers = 1;
+
+ parg.rsp_policy = &devlink_dl_dpipe_headers_nest;
+ parg.data = &dst->dpipe_headers;
+ if (devlink_dl_dpipe_headers_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ }
+ }
+
+ return MNL_CB_OK;
+}
+
+struct devlink_dpipe_headers_get_rsp *
+devlink_dpipe_headers_get(struct ynl_sock *ys,
+ struct devlink_dpipe_headers_get_req *req)
+{
+ struct ynl_req_state yrs = { .yarg = { .ys = ys, }, };
+ struct devlink_dpipe_headers_get_rsp *rsp;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_DPIPE_HEADERS_GET, 1);
+ ys->req_policy = &devlink_nest;
+ yrs.yarg.rsp_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+
+ rsp = calloc(1, sizeof(*rsp));
+ yrs.yarg.data = rsp;
+ yrs.cb = devlink_dpipe_headers_get_rsp_parse;
+ yrs.rsp_cmd = DEVLINK_CMD_DPIPE_HEADERS_GET;
+
+ err = ynl_exec(ys, nlh, &yrs);
+ if (err < 0)
+ goto err_free;
+
+ return rsp;
+
+err_free:
+ devlink_dpipe_headers_get_rsp_free(rsp);
+ return NULL;
+}
+
+/* ============== DEVLINK_CMD_DPIPE_TABLE_COUNTERS_SET ============== */
+/* DEVLINK_CMD_DPIPE_TABLE_COUNTERS_SET - do */
+void
+devlink_dpipe_table_counters_set_req_free(struct devlink_dpipe_table_counters_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->dpipe_table_name);
+ free(req);
+}
+
+int devlink_dpipe_table_counters_set(struct ynl_sock *ys,
+ struct devlink_dpipe_table_counters_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_DPIPE_TABLE_COUNTERS_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.dpipe_table_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DPIPE_TABLE_NAME, req->dpipe_table_name);
+ if (req->_present.dpipe_table_counters_enabled)
+ mnl_attr_put_u8(nlh, DEVLINK_ATTR_DPIPE_TABLE_COUNTERS_ENABLED, req->dpipe_table_counters_enabled);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_RESOURCE_SET ============== */
+/* DEVLINK_CMD_RESOURCE_SET - do */
+void devlink_resource_set_req_free(struct devlink_resource_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_resource_set(struct ynl_sock *ys,
+ struct devlink_resource_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_RESOURCE_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.resource_id)
+ mnl_attr_put_u64(nlh, DEVLINK_ATTR_RESOURCE_ID, req->resource_id);
+ if (req->_present.resource_size)
+ mnl_attr_put_u64(nlh, DEVLINK_ATTR_RESOURCE_SIZE, req->resource_size);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_RESOURCE_DUMP ============== */
+/* DEVLINK_CMD_RESOURCE_DUMP - do */
+void devlink_resource_dump_req_free(struct devlink_resource_dump_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+void devlink_resource_dump_rsp_free(struct devlink_resource_dump_rsp *rsp)
+{
+ free(rsp->bus_name);
+ free(rsp->dev_name);
+ devlink_dl_resource_list_free(&rsp->resource_list);
+ free(rsp);
+}
+
+int devlink_resource_dump_rsp_parse(const struct nlmsghdr *nlh, void *data)
+{
+ struct devlink_resource_dump_rsp *dst;
+ struct ynl_parse_arg *yarg = data;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+
+ dst = yarg->data;
+ parg.ys = yarg->ys;
+
+ mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_BUS_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.bus_name_len = len;
+ dst->bus_name = malloc(len + 1);
+ memcpy(dst->bus_name, mnl_attr_get_str(attr), len);
+ dst->bus_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DEV_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dev_name_len = len;
+ dst->dev_name = malloc(len + 1);
+ memcpy(dst->dev_name, mnl_attr_get_str(attr), len);
+ dst->dev_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_RESOURCE_LIST) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.resource_list = 1;
+
+ parg.rsp_policy = &devlink_dl_resource_list_nest;
+ parg.data = &dst->resource_list;
+ if (devlink_dl_resource_list_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ }
+ }
+
+ return MNL_CB_OK;
+}
+
+struct devlink_resource_dump_rsp *
+devlink_resource_dump(struct ynl_sock *ys,
+ struct devlink_resource_dump_req *req)
+{
+ struct ynl_req_state yrs = { .yarg = { .ys = ys, }, };
+ struct devlink_resource_dump_rsp *rsp;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_RESOURCE_DUMP, 1);
+ ys->req_policy = &devlink_nest;
+ yrs.yarg.rsp_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+
+ rsp = calloc(1, sizeof(*rsp));
+ yrs.yarg.data = rsp;
+ yrs.cb = devlink_resource_dump_rsp_parse;
+ yrs.rsp_cmd = DEVLINK_CMD_RESOURCE_DUMP;
+
+ err = ynl_exec(ys, nlh, &yrs);
+ if (err < 0)
+ goto err_free;
+
+ return rsp;
+
+err_free:
+ devlink_resource_dump_rsp_free(rsp);
+ return NULL;
+}
+
+/* ============== DEVLINK_CMD_RELOAD ============== */
+/* DEVLINK_CMD_RELOAD - do */
+void devlink_reload_req_free(struct devlink_reload_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+void devlink_reload_rsp_free(struct devlink_reload_rsp *rsp)
+{
+ free(rsp->bus_name);
+ free(rsp->dev_name);
+ free(rsp);
+}
+
+int devlink_reload_rsp_parse(const struct nlmsghdr *nlh, void *data)
+{
+ struct ynl_parse_arg *yarg = data;
+ struct devlink_reload_rsp *dst;
+ const struct nlattr *attr;
+
+ dst = yarg->data;
+
+ mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_BUS_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.bus_name_len = len;
+ dst->bus_name = malloc(len + 1);
+ memcpy(dst->bus_name, mnl_attr_get_str(attr), len);
+ dst->bus_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DEV_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dev_name_len = len;
+ dst->dev_name = malloc(len + 1);
+ memcpy(dst->dev_name, mnl_attr_get_str(attr), len);
+ dst->dev_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_RELOAD_ACTIONS_PERFORMED) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.reload_actions_performed = 1;
+ memcpy(&dst->reload_actions_performed, mnl_attr_get_payload(attr), sizeof(struct nla_bitfield32));
+ }
+ }
+
+ return MNL_CB_OK;
+}
+
+struct devlink_reload_rsp *
+devlink_reload(struct ynl_sock *ys, struct devlink_reload_req *req)
+{
+ struct ynl_req_state yrs = { .yarg = { .ys = ys, }, };
+ struct devlink_reload_rsp *rsp;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_RELOAD, 1);
+ ys->req_policy = &devlink_nest;
+ yrs.yarg.rsp_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.reload_action)
+ mnl_attr_put_u8(nlh, DEVLINK_ATTR_RELOAD_ACTION, req->reload_action);
+ if (req->_present.reload_limits)
+ mnl_attr_put(nlh, DEVLINK_ATTR_RELOAD_LIMITS, sizeof(struct nla_bitfield32), &req->reload_limits);
+ if (req->_present.netns_pid)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_NETNS_PID, req->netns_pid);
+ if (req->_present.netns_fd)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_NETNS_FD, req->netns_fd);
+ if (req->_present.netns_id)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_NETNS_ID, req->netns_id);
+
+ rsp = calloc(1, sizeof(*rsp));
+ yrs.yarg.data = rsp;
+ yrs.cb = devlink_reload_rsp_parse;
+ yrs.rsp_cmd = DEVLINK_CMD_RELOAD;
+
+ err = ynl_exec(ys, nlh, &yrs);
+ if (err < 0)
+ goto err_free;
+
+ return rsp;
+
+err_free:
+ devlink_reload_rsp_free(rsp);
+ return NULL;
+}
+
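devlink_reload() is one of the few requests here that carries a bitfield32: DEVLINK_ATTR_RELOAD_LIMITS is copied straight from req->reload_limits, so the caller fills the struct nla_bitfield32 value/selector pair itself. A sketch of a fw_activate reload restricted to the no_reset limit, using the DEVLINK_RELOAD_* constants from the uapi devlink header and the same setup assumptions as the earlier sketches:

	static int reload_fw_activate(struct ynl_sock *ys)
	{
		struct devlink_reload_req *req;
		struct devlink_reload_rsp *rsp;

		req = calloc(1, sizeof(*req));
		if (!req)
			return -1;
		req->bus_name = strdup("pci");
		req->_present.bus_name_len = strlen(req->bus_name);
		req->dev_name = strdup("0000:01:00.0");
		req->_present.dev_name_len = strlen(req->dev_name);
		req->_present.reload_action = 1;
		req->reload_action = DEVLINK_RELOAD_ACTION_FW_ACTIVATE;
		req->_present.reload_limits = 1;
		req->reload_limits.selector = 1U << DEVLINK_RELOAD_LIMIT_NO_RESET;
		req->reload_limits.value = 1U << DEVLINK_RELOAD_LIMIT_NO_RESET;

		rsp = devlink_reload(ys, req);
		devlink_reload_req_free(req);
		if (!rsp)
			return -1;

		/* the kernel reports what it did in another bitfield32 */
		if (rsp->_present.reload_actions_performed)
			printf("actions performed: 0x%x\n",
			       rsp->reload_actions_performed.value);
		devlink_reload_rsp_free(rsp);
		return 0;
	}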
/* ============== DEVLINK_CMD_PARAM_GET ============== */
/* DEVLINK_CMD_PARAM_GET - do */
void devlink_param_get_req_free(struct devlink_param_get_req *req)
@@ -1530,6 +4305,42 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_PARAM_SET ============== */
+/* DEVLINK_CMD_PARAM_SET - do */
+void devlink_param_set_req_free(struct devlink_param_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->param_name);
+ free(req);
+}
+
+int devlink_param_set(struct ynl_sock *ys, struct devlink_param_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_PARAM_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.param_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_PARAM_NAME, req->param_name);
+ if (req->_present.param_type)
+ mnl_attr_put_u8(nlh, DEVLINK_ATTR_PARAM_TYPE, req->param_type);
+ if (req->_present.param_value_cmode)
+ mnl_attr_put_u8(nlh, DEVLINK_ATTR_PARAM_VALUE_CMODE, req->param_value_cmode);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
/* ============== DEVLINK_CMD_REGION_GET ============== */
/* DEVLINK_CMD_REGION_GET - do */
void devlink_region_get_req_free(struct devlink_region_get_req *req)
@@ -1689,6 +4500,446 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_REGION_NEW ============== */
+/* DEVLINK_CMD_REGION_NEW - do */
+void devlink_region_new_req_free(struct devlink_region_new_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->region_name);
+ free(req);
+}
+
+void devlink_region_new_rsp_free(struct devlink_region_new_rsp *rsp)
+{
+ free(rsp->bus_name);
+ free(rsp->dev_name);
+ free(rsp->region_name);
+ free(rsp);
+}
+
+int devlink_region_new_rsp_parse(const struct nlmsghdr *nlh, void *data)
+{
+ struct devlink_region_new_rsp *dst;
+ struct ynl_parse_arg *yarg = data;
+ const struct nlattr *attr;
+
+ dst = yarg->data;
+
+ mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_BUS_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.bus_name_len = len;
+ dst->bus_name = malloc(len + 1);
+ memcpy(dst->bus_name, mnl_attr_get_str(attr), len);
+ dst->bus_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DEV_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dev_name_len = len;
+ dst->dev_name = malloc(len + 1);
+ memcpy(dst->dev_name, mnl_attr_get_str(attr), len);
+ dst->dev_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_PORT_INDEX) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.port_index = 1;
+ dst->port_index = mnl_attr_get_u32(attr);
+ } else if (type == DEVLINK_ATTR_REGION_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.region_name_len = len;
+ dst->region_name = malloc(len + 1);
+ memcpy(dst->region_name, mnl_attr_get_str(attr), len);
+ dst->region_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_REGION_SNAPSHOT_ID) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.region_snapshot_id = 1;
+ dst->region_snapshot_id = mnl_attr_get_u32(attr);
+ }
+ }
+
+ return MNL_CB_OK;
+}
+
+struct devlink_region_new_rsp *
+devlink_region_new(struct ynl_sock *ys, struct devlink_region_new_req *req)
+{
+ struct ynl_req_state yrs = { .yarg = { .ys = ys, }, };
+ struct devlink_region_new_rsp *rsp;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_REGION_NEW, 1);
+ ys->req_policy = &devlink_nest;
+ yrs.yarg.rsp_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.region_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_REGION_NAME, req->region_name);
+ if (req->_present.region_snapshot_id)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_REGION_SNAPSHOT_ID, req->region_snapshot_id);
+
+ rsp = calloc(1, sizeof(*rsp));
+ yrs.yarg.data = rsp;
+ yrs.cb = devlink_region_new_rsp_parse;
+ yrs.rsp_cmd = DEVLINK_CMD_REGION_NEW;
+
+ err = ynl_exec(ys, nlh, &yrs);
+ if (err < 0)
+ goto err_free;
+
+ return rsp;
+
+err_free:
+ devlink_region_new_rsp_free(rsp);
+ return NULL;
+}
+
+/* ============== DEVLINK_CMD_REGION_DEL ============== */
+/* DEVLINK_CMD_REGION_DEL - do */
+void devlink_region_del_req_free(struct devlink_region_del_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->region_name);
+ free(req);
+}
+
+int devlink_region_del(struct ynl_sock *ys, struct devlink_region_del_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_REGION_DEL, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.region_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_REGION_NAME, req->region_name);
+ if (req->_present.region_snapshot_id)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_REGION_SNAPSHOT_ID, req->region_snapshot_id);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_REGION_READ ============== */
+/* DEVLINK_CMD_REGION_READ - dump */
+int devlink_region_read_rsp_dump_parse(const struct nlmsghdr *nlh, void *data)
+{
+ struct devlink_region_read_rsp_dump *dst;
+ struct ynl_parse_arg *yarg = data;
+ const struct nlattr *attr;
+
+ dst = yarg->data;
+
+ mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_BUS_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.bus_name_len = len;
+ dst->bus_name = malloc(len + 1);
+ memcpy(dst->bus_name, mnl_attr_get_str(attr), len);
+ dst->bus_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DEV_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dev_name_len = len;
+ dst->dev_name = malloc(len + 1);
+ memcpy(dst->dev_name, mnl_attr_get_str(attr), len);
+ dst->dev_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_PORT_INDEX) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.port_index = 1;
+ dst->port_index = mnl_attr_get_u32(attr);
+ } else if (type == DEVLINK_ATTR_REGION_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.region_name_len = len;
+ dst->region_name = malloc(len + 1);
+ memcpy(dst->region_name, mnl_attr_get_str(attr), len);
+ dst->region_name[len] = 0;
+ }
+ }
+
+ return MNL_CB_OK;
+}
+
+void
+devlink_region_read_rsp_list_free(struct devlink_region_read_rsp_list *rsp)
+{
+ struct devlink_region_read_rsp_list *next = rsp;
+
+ while ((void *)next != YNL_LIST_END) {
+ rsp = next;
+ next = rsp->next;
+
+ free(rsp->obj.bus_name);
+ free(rsp->obj.dev_name);
+ free(rsp->obj.region_name);
+ free(rsp);
+ }
+}
+
+struct devlink_region_read_rsp_list *
+devlink_region_read_dump(struct ynl_sock *ys,
+ struct devlink_region_read_req_dump *req)
+{
+ struct ynl_dump_state yds = {};
+ struct nlmsghdr *nlh;
+ int err;
+
+ yds.ys = ys;
+ yds.alloc_sz = sizeof(struct devlink_region_read_rsp_list);
+ yds.cb = devlink_region_read_rsp_dump_parse;
+ yds.rsp_cmd = DEVLINK_CMD_REGION_READ;
+ yds.rsp_policy = &devlink_nest;
+
+ nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_REGION_READ, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.region_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_REGION_NAME, req->region_name);
+ if (req->_present.region_snapshot_id)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_REGION_SNAPSHOT_ID, req->region_snapshot_id);
+ if (req->_present.region_direct)
+ mnl_attr_put(nlh, DEVLINK_ATTR_REGION_DIRECT, 0, NULL);
+ if (req->_present.region_chunk_addr)
+ mnl_attr_put_u64(nlh, DEVLINK_ATTR_REGION_CHUNK_ADDR, req->region_chunk_addr);
+ if (req->_present.region_chunk_len)
+ mnl_attr_put_u64(nlh, DEVLINK_ATTR_REGION_CHUNK_LEN, req->region_chunk_len);
+
+ err = ynl_exec_dump(ys, nlh, &yds);
+ if (err < 0)
+ goto free_list;
+
+ return yds.first;
+
+free_list:
+ devlink_region_read_rsp_list_free(yds.first);
+ return NULL;
+}
+
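Dump-style helpers such as devlink_region_read_dump() return a singly linked list of objects terminated by the ynl library's YNL_LIST_END sentinel, the same sentinel the *_list_free() routines above walk. A sketch that requests a 256-byte window of snapshot 0 of a hypothetical "fw-health" region and walks the replies (the generated parser above only fills the identifying attributes of each chunk message); same setup assumptions as before:

	static int read_region_window(struct ynl_sock *ys)
	{
		struct devlink_region_read_req_dump *req;
		struct devlink_region_read_rsp_list *head, *cur;

		req = calloc(1, sizeof(*req));
		if (!req)
			return -1;
		req->bus_name = strdup("pci");
		req->_present.bus_name_len = strlen(req->bus_name);
		req->dev_name = strdup("0000:01:00.0");
		req->_present.dev_name_len = strlen(req->dev_name);
		req->region_name = strdup("fw-health");	/* hypothetical region */
		req->_present.region_name_len = strlen(req->region_name);
		req->_present.region_snapshot_id = 1;
		req->region_snapshot_id = 0;
		req->_present.region_chunk_addr = 1;
		req->region_chunk_addr = 0;
		req->_present.region_chunk_len = 1;
		req->region_chunk_len = 256;

		head = devlink_region_read_dump(ys, req);
		/* the request owns its strings; release them by hand here */
		free(req->region_name);
		free(req->dev_name);
		free(req->bus_name);
		free(req);
		if (!head)
			return -1;

		for (cur = head; (void *)cur != YNL_LIST_END; cur = cur->next)
			if (cur->obj._present.region_name_len)
				printf("chunk of region %s\n", cur->obj.region_name);

		devlink_region_read_rsp_list_free(head);
		return 0;
	}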
+/* ============== DEVLINK_CMD_PORT_PARAM_GET ============== */
+/* DEVLINK_CMD_PORT_PARAM_GET - do */
+void devlink_port_param_get_req_free(struct devlink_port_param_get_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+void devlink_port_param_get_rsp_free(struct devlink_port_param_get_rsp *rsp)
+{
+ free(rsp->bus_name);
+ free(rsp->dev_name);
+ free(rsp);
+}
+
+int devlink_port_param_get_rsp_parse(const struct nlmsghdr *nlh, void *data)
+{
+ struct devlink_port_param_get_rsp *dst;
+ struct ynl_parse_arg *yarg = data;
+ const struct nlattr *attr;
+
+ dst = yarg->data;
+
+ mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_BUS_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.bus_name_len = len;
+ dst->bus_name = malloc(len + 1);
+ memcpy(dst->bus_name, mnl_attr_get_str(attr), len);
+ dst->bus_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_DEV_NAME) {
+ unsigned int len;
+
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+
+ len = strnlen(mnl_attr_get_str(attr), mnl_attr_get_payload_len(attr));
+ dst->_present.dev_name_len = len;
+ dst->dev_name = malloc(len + 1);
+ memcpy(dst->dev_name, mnl_attr_get_str(attr), len);
+ dst->dev_name[len] = 0;
+ } else if (type == DEVLINK_ATTR_PORT_INDEX) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.port_index = 1;
+ dst->port_index = mnl_attr_get_u32(attr);
+ }
+ }
+
+ return MNL_CB_OK;
+}
+
+struct devlink_port_param_get_rsp *
+devlink_port_param_get(struct ynl_sock *ys,
+ struct devlink_port_param_get_req *req)
+{
+ struct ynl_req_state yrs = { .yarg = { .ys = ys, }, };
+ struct devlink_port_param_get_rsp *rsp;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_PORT_PARAM_GET, 1);
+ ys->req_policy = &devlink_nest;
+ yrs.yarg.rsp_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+
+ rsp = calloc(1, sizeof(*rsp));
+ yrs.yarg.data = rsp;
+ yrs.cb = devlink_port_param_get_rsp_parse;
+ yrs.rsp_cmd = DEVLINK_CMD_PORT_PARAM_GET;
+
+ err = ynl_exec(ys, nlh, &yrs);
+ if (err < 0)
+ goto err_free;
+
+ return rsp;
+
+err_free:
+ devlink_port_param_get_rsp_free(rsp);
+ return NULL;
+}
+
+/* DEVLINK_CMD_PORT_PARAM_GET - dump */
+void devlink_port_param_get_list_free(struct devlink_port_param_get_list *rsp)
+{
+ struct devlink_port_param_get_list *next = rsp;
+
+ while ((void *)next != YNL_LIST_END) {
+ rsp = next;
+ next = rsp->next;
+
+ free(rsp->obj.bus_name);
+ free(rsp->obj.dev_name);
+ free(rsp);
+ }
+}
+
+struct devlink_port_param_get_list *
+devlink_port_param_get_dump(struct ynl_sock *ys)
+{
+ struct ynl_dump_state yds = {};
+ struct nlmsghdr *nlh;
+ int err;
+
+ yds.ys = ys;
+ yds.alloc_sz = sizeof(struct devlink_port_param_get_list);
+ yds.cb = devlink_port_param_get_rsp_parse;
+ yds.rsp_cmd = DEVLINK_CMD_PORT_PARAM_GET;
+ yds.rsp_policy = &devlink_nest;
+
+ nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_PORT_PARAM_GET, 1);
+
+ err = ynl_exec_dump(ys, nlh, &yds);
+ if (err < 0)
+ goto free_list;
+
+ return yds.first;
+
+free_list:
+ devlink_port_param_get_list_free(yds.first);
+ return NULL;
+}
+
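By contrast, devlink_port_param_get_dump() above takes no request at all, so it reports port parameters across every devlink instance; the caller only iterates the returned list and frees it. A short sketch, same assumptions as before:

	static int dump_port_params(struct ynl_sock *ys)
	{
		struct devlink_port_param_get_list *params, *p;

		params = devlink_port_param_get_dump(ys);
		if (!params)
			return -1;

		for (p = params; (void *)p != YNL_LIST_END; p = p->next)
			if (p->obj._present.port_index && p->obj.bus_name && p->obj.dev_name)
				printf("%s/%s port %u\n", p->obj.bus_name,
				       p->obj.dev_name, p->obj.port_index);

		devlink_port_param_get_list_free(params);
		return 0;
	}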
+/* ============== DEVLINK_CMD_PORT_PARAM_SET ============== */
+/* DEVLINK_CMD_PORT_PARAM_SET - do */
+void devlink_port_param_set_req_free(struct devlink_port_param_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_port_param_set(struct ynl_sock *ys,
+ struct devlink_port_param_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_PORT_PARAM_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
/* ============== DEVLINK_CMD_INFO_GET ============== */
/* DEVLINK_CMD_INFO_GET - do */
void devlink_info_get_req_free(struct devlink_info_get_req *req)
@@ -2093,6 +5344,276 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_SET ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_SET - do */
+void
+devlink_health_reporter_set_req_free(struct devlink_health_reporter_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->health_reporter_name);
+ free(req);
+}
+
+int devlink_health_reporter_set(struct ynl_sock *ys,
+ struct devlink_health_reporter_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_HEALTH_REPORTER_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.health_reporter_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_HEALTH_REPORTER_NAME, req->health_reporter_name);
+ if (req->_present.health_reporter_graceful_period)
+ mnl_attr_put_u64(nlh, DEVLINK_ATTR_HEALTH_REPORTER_GRACEFUL_PERIOD, req->health_reporter_graceful_period);
+ if (req->_present.health_reporter_auto_recover)
+ mnl_attr_put_u8(nlh, DEVLINK_ATTR_HEALTH_REPORTER_AUTO_RECOVER, req->health_reporter_auto_recover);
+ if (req->_present.health_reporter_auto_dump)
+ mnl_attr_put_u8(nlh, DEVLINK_ATTR_HEALTH_REPORTER_AUTO_DUMP, req->health_reporter_auto_dump);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
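devlink_health_reporter_set() tunes an existing reporter: the graceful period is a u64 in milliseconds and the auto-recover/auto-dump knobs are u8 booleans, each emitted only when its _present bit is set. A short sketch, same assumptions as before; the reporter name "fw" is just a placeholder:

	static int tune_fw_reporter(struct ynl_sock *ys)
	{
		struct devlink_health_reporter_set_req *req;
		int err;

		req = calloc(1, sizeof(*req));
		if (!req)
			return -1;
		req->bus_name = strdup("pci");
		req->_present.bus_name_len = strlen(req->bus_name);
		req->dev_name = strdup("0000:01:00.0");
		req->_present.dev_name_len = strlen(req->dev_name);
		req->health_reporter_name = strdup("fw");	/* placeholder reporter */
		req->_present.health_reporter_name_len = strlen(req->health_reporter_name);
		req->_present.health_reporter_graceful_period = 1;
		req->health_reporter_graceful_period = 1000;	/* msec */
		req->_present.health_reporter_auto_recover = 1;
		req->health_reporter_auto_recover = 1;

		err = devlink_health_reporter_set(ys, req);
		devlink_health_reporter_set_req_free(req);
		return err;
	}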
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_RECOVER ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_RECOVER - do */
+void
+devlink_health_reporter_recover_req_free(struct devlink_health_reporter_recover_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->health_reporter_name);
+ free(req);
+}
+
+int devlink_health_reporter_recover(struct ynl_sock *ys,
+ struct devlink_health_reporter_recover_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_HEALTH_REPORTER_RECOVER, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.health_reporter_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_HEALTH_REPORTER_NAME, req->health_reporter_name);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_DIAGNOSE ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_DIAGNOSE - do */
+void
+devlink_health_reporter_diagnose_req_free(struct devlink_health_reporter_diagnose_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->health_reporter_name);
+ free(req);
+}
+
+int devlink_health_reporter_diagnose(struct ynl_sock *ys,
+ struct devlink_health_reporter_diagnose_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_HEALTH_REPORTER_DIAGNOSE, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.health_reporter_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_HEALTH_REPORTER_NAME, req->health_reporter_name);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET - dump */
+int devlink_health_reporter_dump_get_rsp_dump_parse(const struct nlmsghdr *nlh,
+ void *data)
+{
+ struct devlink_health_reporter_dump_get_rsp_dump *dst;
+ struct ynl_parse_arg *yarg = data;
+ const struct nlattr *attr;
+ struct ynl_parse_arg parg;
+
+ dst = yarg->data;
+ parg.ys = yarg->ys;
+
+ mnl_attr_for_each(attr, nlh, sizeof(struct genlmsghdr)) {
+ unsigned int type = mnl_attr_get_type(attr);
+
+ if (type == DEVLINK_ATTR_FMSG) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.fmsg = 1;
+
+ parg.rsp_policy = &devlink_dl_fmsg_nest;
+ parg.data = &dst->fmsg;
+ if (devlink_dl_fmsg_parse(&parg, attr))
+ return MNL_CB_ERROR;
+ }
+ }
+
+ return MNL_CB_OK;
+}
+
+void
+devlink_health_reporter_dump_get_rsp_list_free(struct devlink_health_reporter_dump_get_rsp_list *rsp)
+{
+ struct devlink_health_reporter_dump_get_rsp_list *next = rsp;
+
+ while ((void *)next != YNL_LIST_END) {
+ rsp = next;
+ next = rsp->next;
+
+ devlink_dl_fmsg_free(&rsp->obj.fmsg);
+ free(rsp);
+ }
+}
+
+struct devlink_health_reporter_dump_get_rsp_list *
+devlink_health_reporter_dump_get_dump(struct ynl_sock *ys,
+ struct devlink_health_reporter_dump_get_req_dump *req)
+{
+ struct ynl_dump_state yds = {};
+ struct nlmsghdr *nlh;
+ int err;
+
+ yds.ys = ys;
+ yds.alloc_sz = sizeof(struct devlink_health_reporter_dump_get_rsp_list);
+ yds.cb = devlink_health_reporter_dump_get_rsp_dump_parse;
+ yds.rsp_cmd = DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET;
+ yds.rsp_policy = &devlink_nest;
+
+ nlh = ynl_gemsg_start_dump(ys, ys->family_id, DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.health_reporter_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_HEALTH_REPORTER_NAME, req->health_reporter_name);
+
+ err = ynl_exec_dump(ys, nlh, &yds);
+ if (err < 0)
+ goto free_list;
+
+ return yds.first;
+
+free_list:
+ devlink_health_reporter_dump_get_rsp_list_free(yds.first);
+ return NULL;
+}
+
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR - do */
+void
+devlink_health_reporter_dump_clear_req_free(struct devlink_health_reporter_dump_clear_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->health_reporter_name);
+ free(req);
+}
+
+int devlink_health_reporter_dump_clear(struct ynl_sock *ys,
+ struct devlink_health_reporter_dump_clear_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.health_reporter_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_HEALTH_REPORTER_NAME, req->health_reporter_name);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_FLASH_UPDATE ============== */
+/* DEVLINK_CMD_FLASH_UPDATE - do */
+void devlink_flash_update_req_free(struct devlink_flash_update_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->flash_update_file_name);
+ free(req->flash_update_component);
+ free(req);
+}
+
+int devlink_flash_update(struct ynl_sock *ys,
+ struct devlink_flash_update_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_FLASH_UPDATE, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.flash_update_file_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_FLASH_UPDATE_FILE_NAME, req->flash_update_file_name);
+ if (req->_present.flash_update_component_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_FLASH_UPDATE_COMPONENT, req->flash_update_component);
+ if (req->_present.flash_update_overwrite_mask)
+ mnl_attr_put(nlh, DEVLINK_ATTR_FLASH_UPDATE_OVERWRITE_MASK, sizeof(struct nla_bitfield32), &req->flash_update_overwrite_mask);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
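devlink_flash_update() needs the target device plus a firmware file name (resolved by the kernel relative to its firmware search path); the component and overwrite-mask attributes stay out of the message unless their _present bits are set. A brief sketch, same assumptions as before, with "fw.bin" standing in for a real image:

	static int flash_device(struct ynl_sock *ys)
	{
		struct devlink_flash_update_req *req;
		int err;

		req = calloc(1, sizeof(*req));
		if (!req)
			return -1;
		req->bus_name = strdup("pci");
		req->_present.bus_name_len = strlen(req->bus_name);
		req->dev_name = strdup("0000:01:00.0");
		req->_present.dev_name_len = strlen(req->dev_name);
		req->flash_update_file_name = strdup("fw.bin");	/* placeholder image */
		req->_present.flash_update_file_name_len =
			strlen(req->flash_update_file_name);

		err = devlink_flash_update(ys, req);
		devlink_flash_update_req_free(req);
		return err;
	}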
/* ============== DEVLINK_CMD_TRAP_GET ============== */
/* DEVLINK_CMD_TRAP_GET - do */
void devlink_trap_get_req_free(struct devlink_trap_get_req *req)
@@ -2245,6 +5766,40 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_TRAP_SET ============== */
+/* DEVLINK_CMD_TRAP_SET - do */
+void devlink_trap_set_req_free(struct devlink_trap_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->trap_name);
+ free(req);
+}
+
+int devlink_trap_set(struct ynl_sock *ys, struct devlink_trap_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_TRAP_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.trap_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_TRAP_NAME, req->trap_name);
+ if (req->_present.trap_action)
+ mnl_attr_put_u8(nlh, DEVLINK_ATTR_TRAP_ACTION, req->trap_action);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
/* ============== DEVLINK_CMD_TRAP_GROUP_GET ============== */
/* DEVLINK_CMD_TRAP_GROUP_GET - do */
void devlink_trap_group_get_req_free(struct devlink_trap_group_get_req *req)
@@ -2398,6 +5953,43 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_TRAP_GROUP_SET ============== */
+/* DEVLINK_CMD_TRAP_GROUP_SET - do */
+void devlink_trap_group_set_req_free(struct devlink_trap_group_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->trap_group_name);
+ free(req);
+}
+
+int devlink_trap_group_set(struct ynl_sock *ys,
+ struct devlink_trap_group_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_TRAP_GROUP_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.trap_group_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_TRAP_GROUP_NAME, req->trap_group_name);
+ if (req->_present.trap_action)
+ mnl_attr_put_u8(nlh, DEVLINK_ATTR_TRAP_ACTION, req->trap_action);
+ if (req->_present.trap_policer_id)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_TRAP_POLICER_ID, req->trap_policer_id);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
/* ============== DEVLINK_CMD_TRAP_POLICER_GET ============== */
/* DEVLINK_CMD_TRAP_POLICER_GET - do */
void
@@ -2545,6 +6137,79 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_TRAP_POLICER_SET ============== */
+/* DEVLINK_CMD_TRAP_POLICER_SET - do */
+void
+devlink_trap_policer_set_req_free(struct devlink_trap_policer_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req);
+}
+
+int devlink_trap_policer_set(struct ynl_sock *ys,
+ struct devlink_trap_policer_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_TRAP_POLICER_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.trap_policer_id)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_TRAP_POLICER_ID, req->trap_policer_id);
+ if (req->_present.trap_policer_rate)
+ mnl_attr_put_u64(nlh, DEVLINK_ATTR_TRAP_POLICER_RATE, req->trap_policer_rate);
+ if (req->_present.trap_policer_burst)
+ mnl_attr_put_u64(nlh, DEVLINK_ATTR_TRAP_POLICER_BURST, req->trap_policer_burst);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_TEST ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_TEST - do */
+void
+devlink_health_reporter_test_req_free(struct devlink_health_reporter_test_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->health_reporter_name);
+ free(req);
+}
+
+int devlink_health_reporter_test(struct ynl_sock *ys,
+ struct devlink_health_reporter_test_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_HEALTH_REPORTER_TEST, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.port_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_PORT_INDEX, req->port_index);
+ if (req->_present.health_reporter_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_HEALTH_REPORTER_NAME, req->health_reporter_name);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
/* ============== DEVLINK_CMD_RATE_GET ============== */
/* DEVLINK_CMD_RATE_GET - do */
void devlink_rate_get_req_free(struct devlink_rate_get_req *req)
@@ -2704,6 +6369,124 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_RATE_SET ============== */
+/* DEVLINK_CMD_RATE_SET - do */
+void devlink_rate_set_req_free(struct devlink_rate_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->rate_node_name);
+ free(req->rate_parent_node_name);
+ free(req);
+}
+
+int devlink_rate_set(struct ynl_sock *ys, struct devlink_rate_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_RATE_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.rate_node_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_RATE_NODE_NAME, req->rate_node_name);
+ if (req->_present.rate_tx_share)
+ mnl_attr_put_u64(nlh, DEVLINK_ATTR_RATE_TX_SHARE, req->rate_tx_share);
+ if (req->_present.rate_tx_max)
+ mnl_attr_put_u64(nlh, DEVLINK_ATTR_RATE_TX_MAX, req->rate_tx_max);
+ if (req->_present.rate_tx_priority)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_RATE_TX_PRIORITY, req->rate_tx_priority);
+ if (req->_present.rate_tx_weight)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_RATE_TX_WEIGHT, req->rate_tx_weight);
+ if (req->_present.rate_parent_node_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_RATE_PARENT_NODE_NAME, req->rate_parent_node_name);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_RATE_NEW ============== */
+/* DEVLINK_CMD_RATE_NEW - do */
+void devlink_rate_new_req_free(struct devlink_rate_new_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->rate_node_name);
+ free(req->rate_parent_node_name);
+ free(req);
+}
+
+int devlink_rate_new(struct ynl_sock *ys, struct devlink_rate_new_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_RATE_NEW, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.rate_node_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_RATE_NODE_NAME, req->rate_node_name);
+ if (req->_present.rate_tx_share)
+ mnl_attr_put_u64(nlh, DEVLINK_ATTR_RATE_TX_SHARE, req->rate_tx_share);
+ if (req->_present.rate_tx_max)
+ mnl_attr_put_u64(nlh, DEVLINK_ATTR_RATE_TX_MAX, req->rate_tx_max);
+ if (req->_present.rate_tx_priority)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_RATE_TX_PRIORITY, req->rate_tx_priority);
+ if (req->_present.rate_tx_weight)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_RATE_TX_WEIGHT, req->rate_tx_weight);
+ if (req->_present.rate_parent_node_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_RATE_PARENT_NODE_NAME, req->rate_parent_node_name);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
+/* ============== DEVLINK_CMD_RATE_DEL ============== */
+/* DEVLINK_CMD_RATE_DEL - do */
+void devlink_rate_del_req_free(struct devlink_rate_del_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->rate_node_name);
+ free(req);
+}
+
+int devlink_rate_del(struct ynl_sock *ys, struct devlink_rate_del_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_RATE_DEL, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.rate_node_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_RATE_NODE_NAME, req->rate_node_name);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
/* ============== DEVLINK_CMD_LINECARD_GET ============== */
/* DEVLINK_CMD_LINECARD_GET - do */
void devlink_linecard_get_req_free(struct devlink_linecard_get_req *req)
@@ -2847,6 +6630,41 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_LINECARD_SET ============== */
+/* DEVLINK_CMD_LINECARD_SET - do */
+void devlink_linecard_set_req_free(struct devlink_linecard_set_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ free(req->linecard_type);
+ free(req);
+}
+
+int devlink_linecard_set(struct ynl_sock *ys,
+ struct devlink_linecard_set_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_LINECARD_SET, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.linecard_index)
+ mnl_attr_put_u32(nlh, DEVLINK_ATTR_LINECARD_INDEX, req->linecard_index);
+ if (req->_present.linecard_type_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_LINECARD_TYPE, req->linecard_type);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
/* ============== DEVLINK_CMD_SELFTESTS_GET ============== */
/* DEVLINK_CMD_SELFTESTS_GET - do */
void devlink_selftests_get_req_free(struct devlink_selftests_get_req *req)
@@ -2977,6 +6795,39 @@ free_list:
return NULL;
}
+/* ============== DEVLINK_CMD_SELFTESTS_RUN ============== */
+/* DEVLINK_CMD_SELFTESTS_RUN - do */
+void devlink_selftests_run_req_free(struct devlink_selftests_run_req *req)
+{
+ free(req->bus_name);
+ free(req->dev_name);
+ devlink_dl_selftest_id_free(&req->selftests);
+ free(req);
+}
+
+int devlink_selftests_run(struct ynl_sock *ys,
+ struct devlink_selftests_run_req *req)
+{
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = ynl_gemsg_start_req(ys, ys->family_id, DEVLINK_CMD_SELFTESTS_RUN, 1);
+ ys->req_policy = &devlink_nest;
+
+ if (req->_present.bus_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_BUS_NAME, req->bus_name);
+ if (req->_present.dev_name_len)
+ mnl_attr_put_strz(nlh, DEVLINK_ATTR_DEV_NAME, req->dev_name);
+ if (req->_present.selftests)
+ devlink_dl_selftest_id_put(nlh, DEVLINK_ATTR_SELFTESTS, &req->selftests);
+
+ err = ynl_exec(ys, nlh, NULL);
+ if (err < 0)
+ return -1;
+
+ return 0;
+}
+
const struct ynl_family ynl_devlink_family = {
.name = "devlink",
};
diff --git a/tools/net/ynl/generated/devlink-user.h b/tools/net/ynl/generated/devlink-user.h
index 4b686d147613..1db4edc36eaa 100644
--- a/tools/net/ynl/generated/devlink-user.h
+++ b/tools/net/ynl/generated/devlink-user.h
@@ -9,6 +9,7 @@
#include <stdlib.h>
#include <string.h>
#include <linux/types.h>
+#include <linux/netlink.h>
#include <linux/devlink.h>
struct ynl_sock;
@@ -18,8 +19,130 @@ extern const struct ynl_family ynl_devlink_family;
/* Enums */
const char *devlink_op_str(int op);
const char *devlink_sb_pool_type_str(enum devlink_sb_pool_type value);
+const char *devlink_port_type_str(enum devlink_port_type value);
+const char *devlink_port_flavour_str(enum devlink_port_flavour value);
+const char *devlink_port_fn_state_str(enum devlink_port_fn_state value);
+const char *devlink_port_fn_opstate_str(enum devlink_port_fn_opstate value);
+const char *devlink_port_fn_attr_cap_str(enum devlink_port_fn_attr_cap value);
+const char *
+devlink_sb_threshold_type_str(enum devlink_sb_threshold_type value);
+const char *devlink_eswitch_mode_str(enum devlink_eswitch_mode value);
+const char *
+devlink_eswitch_inline_mode_str(enum devlink_eswitch_inline_mode value);
+const char *
+devlink_eswitch_encap_mode_str(enum devlink_eswitch_encap_mode value);
+const char *devlink_dpipe_match_type_str(enum devlink_dpipe_match_type value);
+const char *
+devlink_dpipe_action_type_str(enum devlink_dpipe_action_type value);
+const char *
+devlink_dpipe_field_mapping_type_str(enum devlink_dpipe_field_mapping_type value);
+const char *devlink_resource_unit_str(enum devlink_resource_unit value);
+const char *devlink_reload_action_str(enum devlink_reload_action value);
+const char *devlink_param_cmode_str(enum devlink_param_cmode value);
+const char *devlink_flash_overwrite_str(enum devlink_flash_overwrite value);
+const char *devlink_trap_action_str(enum devlink_trap_action value);
/* Common nested types */
+struct devlink_dl_dpipe_match {
+ struct {
+ __u32 dpipe_match_type:1;
+ __u32 dpipe_header_id:1;
+ __u32 dpipe_header_global:1;
+ __u32 dpipe_header_index:1;
+ __u32 dpipe_field_id:1;
+ } _present;
+
+ enum devlink_dpipe_match_type dpipe_match_type;
+ __u32 dpipe_header_id;
+ __u8 dpipe_header_global;
+ __u32 dpipe_header_index;
+ __u32 dpipe_field_id;
+};
+
+struct devlink_dl_dpipe_match_value {
+ struct {
+ __u32 dpipe_value_len;
+ __u32 dpipe_value_mask_len;
+ __u32 dpipe_value_mapping:1;
+ } _present;
+
+ unsigned int n_dpipe_match;
+ struct devlink_dl_dpipe_match *dpipe_match;
+ void *dpipe_value;
+ void *dpipe_value_mask;
+ __u32 dpipe_value_mapping;
+};
+
+struct devlink_dl_dpipe_action {
+ struct {
+ __u32 dpipe_action_type:1;
+ __u32 dpipe_header_id:1;
+ __u32 dpipe_header_global:1;
+ __u32 dpipe_header_index:1;
+ __u32 dpipe_field_id:1;
+ } _present;
+
+ enum devlink_dpipe_action_type dpipe_action_type;
+ __u32 dpipe_header_id;
+ __u8 dpipe_header_global;
+ __u32 dpipe_header_index;
+ __u32 dpipe_field_id;
+};
+
+struct devlink_dl_dpipe_action_value {
+ struct {
+ __u32 dpipe_value_len;
+ __u32 dpipe_value_mask_len;
+ __u32 dpipe_value_mapping:1;
+ } _present;
+
+ unsigned int n_dpipe_action;
+ struct devlink_dl_dpipe_action *dpipe_action;
+ void *dpipe_value;
+ void *dpipe_value_mask;
+ __u32 dpipe_value_mapping;
+};
+
+struct devlink_dl_dpipe_field {
+ struct {
+ __u32 dpipe_field_name_len;
+ __u32 dpipe_field_id:1;
+ __u32 dpipe_field_bitwidth:1;
+ __u32 dpipe_field_mapping_type:1;
+ } _present;
+
+ char *dpipe_field_name;
+ __u32 dpipe_field_id;
+ __u32 dpipe_field_bitwidth;
+ enum devlink_dpipe_field_mapping_type dpipe_field_mapping_type;
+};
+
+struct devlink_dl_resource {
+ struct {
+ __u32 resource_name_len;
+ __u32 resource_id:1;
+ __u32 resource_size:1;
+ __u32 resource_size_new:1;
+ __u32 resource_size_valid:1;
+ __u32 resource_size_min:1;
+ __u32 resource_size_max:1;
+ __u32 resource_size_gran:1;
+ __u32 resource_unit:1;
+ __u32 resource_occ:1;
+ } _present;
+
+ char *resource_name;
+ __u64 resource_id;
+ __u64 resource_size;
+ __u64 resource_size_new;
+ __u8 resource_size_valid;
+ __u64 resource_size_min;
+ __u64 resource_size_max;
+ __u64 resource_size_gran;
+ enum devlink_resource_unit resource_unit;
+ __u64 resource_occ;
+};
+
struct devlink_dl_info_version {
struct {
__u32 info_version_name_len;
@@ -30,6 +153,32 @@ struct devlink_dl_info_version {
char *info_version_value;
};
+struct devlink_dl_fmsg {
+ struct {
+ __u32 fmsg_obj_nest_start:1;
+ __u32 fmsg_pair_nest_start:1;
+ __u32 fmsg_arr_nest_start:1;
+ __u32 fmsg_nest_end:1;
+ __u32 fmsg_obj_name_len;
+ } _present;
+
+ char *fmsg_obj_name;
+};
+
+struct devlink_dl_port_function {
+ struct {
+ __u32 hw_addr_len;
+ __u32 state:1;
+ __u32 opstate:1;
+ __u32 caps:1;
+ } _present;
+
+ void *hw_addr;
+ enum devlink_port_fn_state state;
+ enum devlink_port_fn_opstate opstate;
+ struct nla_bitfield32 caps;
+};
+
struct devlink_dl_reload_stats_entry {
struct {
__u32 reload_stats_limit:1;
@@ -45,21 +194,120 @@ struct devlink_dl_reload_act_stats {
struct devlink_dl_reload_stats_entry *reload_stats_entry;
};
+struct devlink_dl_selftest_id {
+ struct {
+ __u32 flash:1;
+ } _present;
+};
+
+struct devlink_dl_dpipe_table_matches {
+ unsigned int n_dpipe_match;
+ struct devlink_dl_dpipe_match *dpipe_match;
+};
+
+struct devlink_dl_dpipe_table_actions {
+ unsigned int n_dpipe_action;
+ struct devlink_dl_dpipe_action *dpipe_action;
+};
+
+struct devlink_dl_dpipe_entry_match_values {
+ unsigned int n_dpipe_match_value;
+ struct devlink_dl_dpipe_match_value *dpipe_match_value;
+};
+
+struct devlink_dl_dpipe_entry_action_values {
+ unsigned int n_dpipe_action_value;
+ struct devlink_dl_dpipe_action_value *dpipe_action_value;
+};
+
+struct devlink_dl_dpipe_header_fields {
+ unsigned int n_dpipe_field;
+ struct devlink_dl_dpipe_field *dpipe_field;
+};
+
+struct devlink_dl_resource_list {
+ unsigned int n_resource;
+ struct devlink_dl_resource *resource;
+};
+
struct devlink_dl_reload_act_info {
struct {
__u32 reload_action:1;
} _present;
- __u8 reload_action;
+ enum devlink_reload_action reload_action;
unsigned int n_reload_action_stats;
struct devlink_dl_reload_act_stats *reload_action_stats;
};
+struct devlink_dl_dpipe_table {
+ struct {
+ __u32 dpipe_table_name_len;
+ __u32 dpipe_table_size:1;
+ __u32 dpipe_table_matches:1;
+ __u32 dpipe_table_actions:1;
+ __u32 dpipe_table_counters_enabled:1;
+ __u32 dpipe_table_resource_id:1;
+ __u32 dpipe_table_resource_units:1;
+ } _present;
+
+ char *dpipe_table_name;
+ __u64 dpipe_table_size;
+ struct devlink_dl_dpipe_table_matches dpipe_table_matches;
+ struct devlink_dl_dpipe_table_actions dpipe_table_actions;
+ __u8 dpipe_table_counters_enabled;
+ __u64 dpipe_table_resource_id;
+ __u64 dpipe_table_resource_units;
+};
+
+struct devlink_dl_dpipe_entry {
+ struct {
+ __u32 dpipe_entry_index:1;
+ __u32 dpipe_entry_match_values:1;
+ __u32 dpipe_entry_action_values:1;
+ __u32 dpipe_entry_counter:1;
+ } _present;
+
+ __u64 dpipe_entry_index;
+ struct devlink_dl_dpipe_entry_match_values dpipe_entry_match_values;
+ struct devlink_dl_dpipe_entry_action_values dpipe_entry_action_values;
+ __u64 dpipe_entry_counter;
+};
+
+struct devlink_dl_dpipe_header {
+ struct {
+ __u32 dpipe_header_name_len;
+ __u32 dpipe_header_id:1;
+ __u32 dpipe_header_global:1;
+ __u32 dpipe_header_fields:1;
+ } _present;
+
+ char *dpipe_header_name;
+ __u32 dpipe_header_id;
+ __u8 dpipe_header_global;
+ struct devlink_dl_dpipe_header_fields dpipe_header_fields;
+};
+
struct devlink_dl_reload_stats {
unsigned int n_reload_action_info;
struct devlink_dl_reload_act_info *reload_action_info;
};
+struct devlink_dl_dpipe_tables {
+ unsigned int n_dpipe_table;
+ struct devlink_dl_dpipe_table *dpipe_table;
+};
+
+struct devlink_dl_dpipe_entries {
+ unsigned int n_dpipe_entry;
+ struct devlink_dl_dpipe_entry *dpipe_entry;
+};
+
+struct devlink_dl_dpipe_headers {
+ unsigned int n_dpipe_header;
+ struct devlink_dl_dpipe_header *dpipe_header;
+};
+
struct devlink_dl_dev_stats {
struct {
__u32 reload_stats:1;
@@ -112,14 +360,12 @@ struct devlink_get_rsp {
__u32 bus_name_len;
__u32 dev_name_len;
__u32 reload_failed:1;
- __u32 reload_action:1;
__u32 dev_stats:1;
} _present;
char *bus_name;
char *dev_name;
__u8 reload_failed;
- __u8 reload_action;
struct devlink_dl_dev_stats dev_stats;
};
@@ -134,7 +380,7 @@ devlink_get(struct ynl_sock *ys, struct devlink_get_req *req);
/* DEVLINK_CMD_GET - dump */
struct devlink_get_list {
struct devlink_get_list *next;
- struct devlink_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_get_rsp obj __attribute__((aligned(8)));
};
void devlink_get_list_free(struct devlink_get_list *rsp);
@@ -262,7 +508,7 @@ struct devlink_port_get_rsp_dump {
struct devlink_port_get_rsp_list {
struct devlink_port_get_rsp_list *next;
- struct devlink_port_get_rsp_dump obj __attribute__ ((aligned (8)));
+ struct devlink_port_get_rsp_dump obj __attribute__((aligned(8)));
};
void devlink_port_get_rsp_list_free(struct devlink_port_get_rsp_list *rsp);
@@ -271,6 +517,377 @@ struct devlink_port_get_rsp_list *
devlink_port_get_dump(struct ynl_sock *ys,
struct devlink_port_get_req_dump *req);
+/* ============== DEVLINK_CMD_PORT_SET ============== */
+/* DEVLINK_CMD_PORT_SET - do */
+struct devlink_port_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 port_type:1;
+ __u32 port_function:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ enum devlink_port_type port_type;
+ struct devlink_dl_port_function port_function;
+};
+
+static inline struct devlink_port_set_req *devlink_port_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_port_set_req));
+}
+void devlink_port_set_req_free(struct devlink_port_set_req *req);
+
+static inline void
+devlink_port_set_req_set_bus_name(struct devlink_port_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_port_set_req_set_dev_name(struct devlink_port_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_port_set_req_set_port_index(struct devlink_port_set_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_port_set_req_set_port_type(struct devlink_port_set_req *req,
+ enum devlink_port_type port_type)
+{
+ req->_present.port_type = 1;
+ req->port_type = port_type;
+}
+static inline void
+devlink_port_set_req_set_port_function_hw_addr(struct devlink_port_set_req *req,
+ const void *hw_addr, size_t len)
+{
+ free(req->port_function.hw_addr);
+ req->port_function._present.hw_addr_len = len;
+ req->port_function.hw_addr = malloc(req->port_function._present.hw_addr_len);
+ memcpy(req->port_function.hw_addr, hw_addr, req->port_function._present.hw_addr_len);
+}
+static inline void
+devlink_port_set_req_set_port_function_state(struct devlink_port_set_req *req,
+ enum devlink_port_fn_state state)
+{
+ req->_present.port_function = 1;
+ req->port_function._present.state = 1;
+ req->port_function.state = state;
+}
+static inline void
+devlink_port_set_req_set_port_function_opstate(struct devlink_port_set_req *req,
+ enum devlink_port_fn_opstate opstate)
+{
+ req->_present.port_function = 1;
+ req->port_function._present.opstate = 1;
+ req->port_function.opstate = opstate;
+}
+static inline void
+devlink_port_set_req_set_port_function_caps(struct devlink_port_set_req *req,
+ struct nla_bitfield32 *caps)
+{
+ req->_present.port_function = 1;
+ req->port_function._present.caps = 1;
+ memcpy(&req->port_function.caps, caps, sizeof(struct nla_bitfield32));
+}
+
+/*
+ * Set devlink port instances.
+ */
+int devlink_port_set(struct ynl_sock *ys, struct devlink_port_set_req *req);
+
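A minimal usage sketch of the setters declared above — an editorial illustration, not part of the patch. It assumes a socket already created with ynl_sock_create(&ynl_devlink_family, NULL) from tools/net/ynl/lib/ynl.h and a hypothetical device pci/0000:01:00.0, port index 1:

#include "devlink-user.h"

/* Force port 1 of a hypothetical device to Ethernet type. */
static int port_set_example(struct ynl_sock *ys)
{
	struct devlink_port_set_req *req;
	int err;

	req = devlink_port_set_req_alloc();
	devlink_port_set_req_set_bus_name(req, "pci");
	devlink_port_set_req_set_dev_name(req, "0000:01:00.0");
	devlink_port_set_req_set_port_index(req, 1);
	devlink_port_set_req_set_port_type(req, DEVLINK_PORT_TYPE_ETH);
	err = devlink_port_set(ys, req);
	devlink_port_set_req_free(req);
	return err;
}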
+/* ============== DEVLINK_CMD_PORT_NEW ============== */
+/* DEVLINK_CMD_PORT_NEW - do */
+struct devlink_port_new_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 port_flavour:1;
+ __u32 port_pci_pf_number:1;
+ __u32 port_pci_sf_number:1;
+ __u32 port_controller_number:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ enum devlink_port_flavour port_flavour;
+ __u16 port_pci_pf_number;
+ __u32 port_pci_sf_number;
+ __u32 port_controller_number;
+};
+
+static inline struct devlink_port_new_req *devlink_port_new_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_port_new_req));
+}
+void devlink_port_new_req_free(struct devlink_port_new_req *req);
+
+static inline void
+devlink_port_new_req_set_bus_name(struct devlink_port_new_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_port_new_req_set_dev_name(struct devlink_port_new_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_port_new_req_set_port_index(struct devlink_port_new_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_port_new_req_set_port_flavour(struct devlink_port_new_req *req,
+ enum devlink_port_flavour port_flavour)
+{
+ req->_present.port_flavour = 1;
+ req->port_flavour = port_flavour;
+}
+static inline void
+devlink_port_new_req_set_port_pci_pf_number(struct devlink_port_new_req *req,
+ __u16 port_pci_pf_number)
+{
+ req->_present.port_pci_pf_number = 1;
+ req->port_pci_pf_number = port_pci_pf_number;
+}
+static inline void
+devlink_port_new_req_set_port_pci_sf_number(struct devlink_port_new_req *req,
+ __u32 port_pci_sf_number)
+{
+ req->_present.port_pci_sf_number = 1;
+ req->port_pci_sf_number = port_pci_sf_number;
+}
+static inline void
+devlink_port_new_req_set_port_controller_number(struct devlink_port_new_req *req,
+ __u32 port_controller_number)
+{
+ req->_present.port_controller_number = 1;
+ req->port_controller_number = port_controller_number;
+}
+
+struct devlink_port_new_rsp {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+};
+
+void devlink_port_new_rsp_free(struct devlink_port_new_rsp *rsp);
+
+/*
+ * Create devlink port instances.
+ */
+struct devlink_port_new_rsp *
+devlink_port_new(struct ynl_sock *ys, struct devlink_port_new_req *req);
+
+/* ============== DEVLINK_CMD_PORT_DEL ============== */
+/* DEVLINK_CMD_PORT_DEL - do */
+struct devlink_port_del_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+};
+
+static inline struct devlink_port_del_req *devlink_port_del_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_port_del_req));
+}
+void devlink_port_del_req_free(struct devlink_port_del_req *req);
+
+static inline void
+devlink_port_del_req_set_bus_name(struct devlink_port_del_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_port_del_req_set_dev_name(struct devlink_port_del_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_port_del_req_set_port_index(struct devlink_port_del_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+
+/*
+ * Delete devlink port instances.
+ */
+int devlink_port_del(struct ynl_sock *ys, struct devlink_port_del_req *req);
+
+/* ============== DEVLINK_CMD_PORT_SPLIT ============== */
+/* DEVLINK_CMD_PORT_SPLIT - do */
+struct devlink_port_split_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 port_split_count:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ __u32 port_split_count;
+};
+
+static inline struct devlink_port_split_req *devlink_port_split_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_port_split_req));
+}
+void devlink_port_split_req_free(struct devlink_port_split_req *req);
+
+static inline void
+devlink_port_split_req_set_bus_name(struct devlink_port_split_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_port_split_req_set_dev_name(struct devlink_port_split_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_port_split_req_set_port_index(struct devlink_port_split_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_port_split_req_set_port_split_count(struct devlink_port_split_req *req,
+ __u32 port_split_count)
+{
+ req->_present.port_split_count = 1;
+ req->port_split_count = port_split_count;
+}
+
+/*
+ * Split devlink port instances.
+ */
+int devlink_port_split(struct ynl_sock *ys, struct devlink_port_split_req *req);
+
+/* ============== DEVLINK_CMD_PORT_UNSPLIT ============== */
+/* DEVLINK_CMD_PORT_UNSPLIT - do */
+struct devlink_port_unsplit_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+};
+
+static inline struct devlink_port_unsplit_req *
+devlink_port_unsplit_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_port_unsplit_req));
+}
+void devlink_port_unsplit_req_free(struct devlink_port_unsplit_req *req);
+
+static inline void
+devlink_port_unsplit_req_set_bus_name(struct devlink_port_unsplit_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_port_unsplit_req_set_dev_name(struct devlink_port_unsplit_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_port_unsplit_req_set_port_index(struct devlink_port_unsplit_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+
+/*
+ * Unsplit devlink port instances.
+ */
+int devlink_port_unsplit(struct ynl_sock *ys,
+ struct devlink_port_unsplit_req *req);
+
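A short sketch of the split/unsplit helpers above — an editorial illustration, not part of the patch — assuming an already-created ynl_sock and the hypothetical device pci/0000:01:00.0:

#include "devlink-user.h"

/* Split port 0 into 4 ports; unsplit follows the same pattern with
 * devlink_port_unsplit_req_*() and devlink_port_unsplit(). */
static int port_split_example(struct ynl_sock *ys)
{
	struct devlink_port_split_req *req;
	int err;

	req = devlink_port_split_req_alloc();
	devlink_port_split_req_set_bus_name(req, "pci");
	devlink_port_split_req_set_dev_name(req, "0000:01:00.0");
	devlink_port_split_req_set_port_index(req, 0);
	devlink_port_split_req_set_port_split_count(req, 4);
	err = devlink_port_split(ys, req);
	devlink_port_split_req_free(req);
	return err;
}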
/* ============== DEVLINK_CMD_SB_GET ============== */
/* DEVLINK_CMD_SB_GET - do */
struct devlink_sb_get_req {
@@ -379,7 +996,7 @@ devlink_sb_get_req_dump_set_dev_name(struct devlink_sb_get_req_dump *req,
struct devlink_sb_get_list {
struct devlink_sb_get_list *next;
- struct devlink_sb_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_sb_get_rsp obj __attribute__((aligned(8)));
};
void devlink_sb_get_list_free(struct devlink_sb_get_list *rsp);
@@ -509,7 +1126,7 @@ devlink_sb_pool_get_req_dump_set_dev_name(struct devlink_sb_pool_get_req_dump *r
struct devlink_sb_pool_get_list {
struct devlink_sb_pool_get_list *next;
- struct devlink_sb_pool_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_sb_pool_get_rsp obj __attribute__((aligned(8)));
};
void devlink_sb_pool_get_list_free(struct devlink_sb_pool_get_list *rsp);
@@ -518,6 +1135,88 @@ struct devlink_sb_pool_get_list *
devlink_sb_pool_get_dump(struct ynl_sock *ys,
struct devlink_sb_pool_get_req_dump *req);
+/* ============== DEVLINK_CMD_SB_POOL_SET ============== */
+/* DEVLINK_CMD_SB_POOL_SET - do */
+struct devlink_sb_pool_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 sb_index:1;
+ __u32 sb_pool_index:1;
+ __u32 sb_pool_threshold_type:1;
+ __u32 sb_pool_size:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 sb_index;
+ __u16 sb_pool_index;
+ enum devlink_sb_threshold_type sb_pool_threshold_type;
+ __u32 sb_pool_size;
+};
+
+static inline struct devlink_sb_pool_set_req *
+devlink_sb_pool_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_sb_pool_set_req));
+}
+void devlink_sb_pool_set_req_free(struct devlink_sb_pool_set_req *req);
+
+static inline void
+devlink_sb_pool_set_req_set_bus_name(struct devlink_sb_pool_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_sb_pool_set_req_set_dev_name(struct devlink_sb_pool_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_sb_pool_set_req_set_sb_index(struct devlink_sb_pool_set_req *req,
+ __u32 sb_index)
+{
+ req->_present.sb_index = 1;
+ req->sb_index = sb_index;
+}
+static inline void
+devlink_sb_pool_set_req_set_sb_pool_index(struct devlink_sb_pool_set_req *req,
+ __u16 sb_pool_index)
+{
+ req->_present.sb_pool_index = 1;
+ req->sb_pool_index = sb_pool_index;
+}
+static inline void
+devlink_sb_pool_set_req_set_sb_pool_threshold_type(struct devlink_sb_pool_set_req *req,
+ enum devlink_sb_threshold_type sb_pool_threshold_type)
+{
+ req->_present.sb_pool_threshold_type = 1;
+ req->sb_pool_threshold_type = sb_pool_threshold_type;
+}
+static inline void
+devlink_sb_pool_set_req_set_sb_pool_size(struct devlink_sb_pool_set_req *req,
+ __u32 sb_pool_size)
+{
+ req->_present.sb_pool_size = 1;
+ req->sb_pool_size = sb_pool_size;
+}
+
+/*
+ * Set shared buffer pool instances.
+ */
+int devlink_sb_pool_set(struct ynl_sock *ys,
+ struct devlink_sb_pool_set_req *req);
+
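A minimal sketch of the shared-buffer pool setter above — editorial illustration only, not part of the patch — assuming an open ynl_sock and the hypothetical device pci/0000:01:00.0 with shared buffer 0, pool 0:

#include "devlink-user.h"

/* Resize pool 0 of shared buffer 0 and make its threshold dynamic. */
static int sb_pool_set_example(struct ynl_sock *ys)
{
	struct devlink_sb_pool_set_req *req;
	int err;

	req = devlink_sb_pool_set_req_alloc();
	devlink_sb_pool_set_req_set_bus_name(req, "pci");
	devlink_sb_pool_set_req_set_dev_name(req, "0000:01:00.0");
	devlink_sb_pool_set_req_set_sb_index(req, 0);
	devlink_sb_pool_set_req_set_sb_pool_index(req, 0);
	devlink_sb_pool_set_req_set_sb_pool_threshold_type(req, DEVLINK_SB_THRESHOLD_TYPE_DYNAMIC);
	devlink_sb_pool_set_req_set_sb_pool_size(req, 1 << 20); /* bytes */
	err = devlink_sb_pool_set(ys, req);
	devlink_sb_pool_set_req_free(req);
	return err;
}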
/* ============== DEVLINK_CMD_SB_PORT_POOL_GET ============== */
/* DEVLINK_CMD_SB_PORT_POOL_GET - do */
struct devlink_sb_port_pool_get_req {
@@ -654,7 +1353,7 @@ devlink_sb_port_pool_get_req_dump_set_dev_name(struct devlink_sb_port_pool_get_r
struct devlink_sb_port_pool_get_list {
struct devlink_sb_port_pool_get_list *next;
- struct devlink_sb_port_pool_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_sb_port_pool_get_rsp obj __attribute__((aligned(8)));
};
void
@@ -664,6 +1363,89 @@ struct devlink_sb_port_pool_get_list *
devlink_sb_port_pool_get_dump(struct ynl_sock *ys,
struct devlink_sb_port_pool_get_req_dump *req);
+/* ============== DEVLINK_CMD_SB_PORT_POOL_SET ============== */
+/* DEVLINK_CMD_SB_PORT_POOL_SET - do */
+struct devlink_sb_port_pool_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 sb_index:1;
+ __u32 sb_pool_index:1;
+ __u32 sb_threshold:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ __u32 sb_index;
+ __u16 sb_pool_index;
+ __u32 sb_threshold;
+};
+
+static inline struct devlink_sb_port_pool_set_req *
+devlink_sb_port_pool_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_sb_port_pool_set_req));
+}
+void
+devlink_sb_port_pool_set_req_free(struct devlink_sb_port_pool_set_req *req);
+
+static inline void
+devlink_sb_port_pool_set_req_set_bus_name(struct devlink_sb_port_pool_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_sb_port_pool_set_req_set_dev_name(struct devlink_sb_port_pool_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_sb_port_pool_set_req_set_port_index(struct devlink_sb_port_pool_set_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_sb_port_pool_set_req_set_sb_index(struct devlink_sb_port_pool_set_req *req,
+ __u32 sb_index)
+{
+ req->_present.sb_index = 1;
+ req->sb_index = sb_index;
+}
+static inline void
+devlink_sb_port_pool_set_req_set_sb_pool_index(struct devlink_sb_port_pool_set_req *req,
+ __u16 sb_pool_index)
+{
+ req->_present.sb_pool_index = 1;
+ req->sb_pool_index = sb_pool_index;
+}
+static inline void
+devlink_sb_port_pool_set_req_set_sb_threshold(struct devlink_sb_port_pool_set_req *req,
+ __u32 sb_threshold)
+{
+ req->_present.sb_threshold = 1;
+ req->sb_threshold = sb_threshold;
+}
+
+/*
+ * Set shared buffer port-pool combinations and threshold.
+ */
+int devlink_sb_port_pool_set(struct ynl_sock *ys,
+ struct devlink_sb_port_pool_set_req *req);
+
/* ============== DEVLINK_CMD_SB_TC_POOL_BIND_GET ============== */
/* DEVLINK_CMD_SB_TC_POOL_BIND_GET - do */
struct devlink_sb_tc_pool_bind_get_req {
@@ -811,7 +1593,7 @@ devlink_sb_tc_pool_bind_get_req_dump_set_dev_name(struct devlink_sb_tc_pool_bind
struct devlink_sb_tc_pool_bind_get_list {
struct devlink_sb_tc_pool_bind_get_list *next;
- struct devlink_sb_tc_pool_bind_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_sb_tc_pool_bind_get_rsp obj __attribute__((aligned(8)));
};
void
@@ -821,6 +1603,861 @@ struct devlink_sb_tc_pool_bind_get_list *
devlink_sb_tc_pool_bind_get_dump(struct ynl_sock *ys,
struct devlink_sb_tc_pool_bind_get_req_dump *req);
+/* ============== DEVLINK_CMD_SB_TC_POOL_BIND_SET ============== */
+/* DEVLINK_CMD_SB_TC_POOL_BIND_SET - do */
+struct devlink_sb_tc_pool_bind_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 sb_index:1;
+ __u32 sb_pool_index:1;
+ __u32 sb_pool_type:1;
+ __u32 sb_tc_index:1;
+ __u32 sb_threshold:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ __u32 sb_index;
+ __u16 sb_pool_index;
+ enum devlink_sb_pool_type sb_pool_type;
+ __u16 sb_tc_index;
+ __u32 sb_threshold;
+};
+
+static inline struct devlink_sb_tc_pool_bind_set_req *
+devlink_sb_tc_pool_bind_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_sb_tc_pool_bind_set_req));
+}
+void
+devlink_sb_tc_pool_bind_set_req_free(struct devlink_sb_tc_pool_bind_set_req *req);
+
+static inline void
+devlink_sb_tc_pool_bind_set_req_set_bus_name(struct devlink_sb_tc_pool_bind_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_sb_tc_pool_bind_set_req_set_dev_name(struct devlink_sb_tc_pool_bind_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_sb_tc_pool_bind_set_req_set_port_index(struct devlink_sb_tc_pool_bind_set_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_sb_tc_pool_bind_set_req_set_sb_index(struct devlink_sb_tc_pool_bind_set_req *req,
+ __u32 sb_index)
+{
+ req->_present.sb_index = 1;
+ req->sb_index = sb_index;
+}
+static inline void
+devlink_sb_tc_pool_bind_set_req_set_sb_pool_index(struct devlink_sb_tc_pool_bind_set_req *req,
+ __u16 sb_pool_index)
+{
+ req->_present.sb_pool_index = 1;
+ req->sb_pool_index = sb_pool_index;
+}
+static inline void
+devlink_sb_tc_pool_bind_set_req_set_sb_pool_type(struct devlink_sb_tc_pool_bind_set_req *req,
+ enum devlink_sb_pool_type sb_pool_type)
+{
+ req->_present.sb_pool_type = 1;
+ req->sb_pool_type = sb_pool_type;
+}
+static inline void
+devlink_sb_tc_pool_bind_set_req_set_sb_tc_index(struct devlink_sb_tc_pool_bind_set_req *req,
+ __u16 sb_tc_index)
+{
+ req->_present.sb_tc_index = 1;
+ req->sb_tc_index = sb_tc_index;
+}
+static inline void
+devlink_sb_tc_pool_bind_set_req_set_sb_threshold(struct devlink_sb_tc_pool_bind_set_req *req,
+ __u32 sb_threshold)
+{
+ req->_present.sb_threshold = 1;
+ req->sb_threshold = sb_threshold;
+}
+
+/*
+ * Set shared buffer port-TC to pool bindings and threshold.
+ */
+int devlink_sb_tc_pool_bind_set(struct ynl_sock *ys,
+ struct devlink_sb_tc_pool_bind_set_req *req);
+
+/* ============== DEVLINK_CMD_SB_OCC_SNAPSHOT ============== */
+/* DEVLINK_CMD_SB_OCC_SNAPSHOT - do */
+struct devlink_sb_occ_snapshot_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 sb_index:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 sb_index;
+};
+
+static inline struct devlink_sb_occ_snapshot_req *
+devlink_sb_occ_snapshot_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_sb_occ_snapshot_req));
+}
+void devlink_sb_occ_snapshot_req_free(struct devlink_sb_occ_snapshot_req *req);
+
+static inline void
+devlink_sb_occ_snapshot_req_set_bus_name(struct devlink_sb_occ_snapshot_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_sb_occ_snapshot_req_set_dev_name(struct devlink_sb_occ_snapshot_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_sb_occ_snapshot_req_set_sb_index(struct devlink_sb_occ_snapshot_req *req,
+ __u32 sb_index)
+{
+ req->_present.sb_index = 1;
+ req->sb_index = sb_index;
+}
+
+/*
+ * Take occupancy snapshot of shared buffer.
+ */
+int devlink_sb_occ_snapshot(struct ynl_sock *ys,
+ struct devlink_sb_occ_snapshot_req *req);
+
+/* ============== DEVLINK_CMD_SB_OCC_MAX_CLEAR ============== */
+/* DEVLINK_CMD_SB_OCC_MAX_CLEAR - do */
+struct devlink_sb_occ_max_clear_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 sb_index:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 sb_index;
+};
+
+static inline struct devlink_sb_occ_max_clear_req *
+devlink_sb_occ_max_clear_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_sb_occ_max_clear_req));
+}
+void
+devlink_sb_occ_max_clear_req_free(struct devlink_sb_occ_max_clear_req *req);
+
+static inline void
+devlink_sb_occ_max_clear_req_set_bus_name(struct devlink_sb_occ_max_clear_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_sb_occ_max_clear_req_set_dev_name(struct devlink_sb_occ_max_clear_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_sb_occ_max_clear_req_set_sb_index(struct devlink_sb_occ_max_clear_req *req,
+ __u32 sb_index)
+{
+ req->_present.sb_index = 1;
+ req->sb_index = sb_index;
+}
+
+/*
+ * Clear occupancy watermarks of shared buffer.
+ */
+int devlink_sb_occ_max_clear(struct ynl_sock *ys,
+ struct devlink_sb_occ_max_clear_req *req);
+
+/* ============== DEVLINK_CMD_ESWITCH_GET ============== */
+/* DEVLINK_CMD_ESWITCH_GET - do */
+struct devlink_eswitch_get_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+};
+
+static inline struct devlink_eswitch_get_req *
+devlink_eswitch_get_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_eswitch_get_req));
+}
+void devlink_eswitch_get_req_free(struct devlink_eswitch_get_req *req);
+
+static inline void
+devlink_eswitch_get_req_set_bus_name(struct devlink_eswitch_get_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_eswitch_get_req_set_dev_name(struct devlink_eswitch_get_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+
+struct devlink_eswitch_get_rsp {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 eswitch_mode:1;
+ __u32 eswitch_inline_mode:1;
+ __u32 eswitch_encap_mode:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ enum devlink_eswitch_mode eswitch_mode;
+ enum devlink_eswitch_inline_mode eswitch_inline_mode;
+ enum devlink_eswitch_encap_mode eswitch_encap_mode;
+};
+
+void devlink_eswitch_get_rsp_free(struct devlink_eswitch_get_rsp *rsp);
+
+/*
+ * Get eswitch attributes.
+ */
+struct devlink_eswitch_get_rsp *
+devlink_eswitch_get(struct ynl_sock *ys, struct devlink_eswitch_get_req *req);
+
+/* ============== DEVLINK_CMD_ESWITCH_SET ============== */
+/* DEVLINK_CMD_ESWITCH_SET - do */
+struct devlink_eswitch_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 eswitch_mode:1;
+ __u32 eswitch_inline_mode:1;
+ __u32 eswitch_encap_mode:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ enum devlink_eswitch_mode eswitch_mode;
+ enum devlink_eswitch_inline_mode eswitch_inline_mode;
+ enum devlink_eswitch_encap_mode eswitch_encap_mode;
+};
+
+static inline struct devlink_eswitch_set_req *
+devlink_eswitch_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_eswitch_set_req));
+}
+void devlink_eswitch_set_req_free(struct devlink_eswitch_set_req *req);
+
+static inline void
+devlink_eswitch_set_req_set_bus_name(struct devlink_eswitch_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_eswitch_set_req_set_dev_name(struct devlink_eswitch_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_eswitch_set_req_set_eswitch_mode(struct devlink_eswitch_set_req *req,
+ enum devlink_eswitch_mode eswitch_mode)
+{
+ req->_present.eswitch_mode = 1;
+ req->eswitch_mode = eswitch_mode;
+}
+static inline void
+devlink_eswitch_set_req_set_eswitch_inline_mode(struct devlink_eswitch_set_req *req,
+ enum devlink_eswitch_inline_mode eswitch_inline_mode)
+{
+ req->_present.eswitch_inline_mode = 1;
+ req->eswitch_inline_mode = eswitch_inline_mode;
+}
+static inline void
+devlink_eswitch_set_req_set_eswitch_encap_mode(struct devlink_eswitch_set_req *req,
+ enum devlink_eswitch_encap_mode eswitch_encap_mode)
+{
+ req->_present.eswitch_encap_mode = 1;
+ req->eswitch_encap_mode = eswitch_encap_mode;
+}
+
+/*
+ * Set eswitch attributes.
+ */
+int devlink_eswitch_set(struct ynl_sock *ys,
+ struct devlink_eswitch_set_req *req);
+
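A minimal sketch of the eswitch setter above — editorial illustration, not part of the patch — assuming an open ynl_sock and the hypothetical device pci/0000:01:00.0:

#include "devlink-user.h"

/* Move the embedded switch of a hypothetical device to switchdev mode. */
static int eswitch_set_example(struct ynl_sock *ys)
{
	struct devlink_eswitch_set_req *req;
	int err;

	req = devlink_eswitch_set_req_alloc();
	devlink_eswitch_set_req_set_bus_name(req, "pci");
	devlink_eswitch_set_req_set_dev_name(req, "0000:01:00.0");
	devlink_eswitch_set_req_set_eswitch_mode(req, DEVLINK_ESWITCH_MODE_SWITCHDEV);
	err = devlink_eswitch_set(ys, req);
	devlink_eswitch_set_req_free(req);
	return err;
}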
+/* ============== DEVLINK_CMD_DPIPE_TABLE_GET ============== */
+/* DEVLINK_CMD_DPIPE_TABLE_GET - do */
+struct devlink_dpipe_table_get_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 dpipe_table_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ char *dpipe_table_name;
+};
+
+static inline struct devlink_dpipe_table_get_req *
+devlink_dpipe_table_get_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_dpipe_table_get_req));
+}
+void devlink_dpipe_table_get_req_free(struct devlink_dpipe_table_get_req *req);
+
+static inline void
+devlink_dpipe_table_get_req_set_bus_name(struct devlink_dpipe_table_get_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_dpipe_table_get_req_set_dev_name(struct devlink_dpipe_table_get_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_dpipe_table_get_req_set_dpipe_table_name(struct devlink_dpipe_table_get_req *req,
+ const char *dpipe_table_name)
+{
+ free(req->dpipe_table_name);
+ req->_present.dpipe_table_name_len = strlen(dpipe_table_name);
+ req->dpipe_table_name = malloc(req->_present.dpipe_table_name_len + 1);
+ memcpy(req->dpipe_table_name, dpipe_table_name, req->_present.dpipe_table_name_len);
+ req->dpipe_table_name[req->_present.dpipe_table_name_len] = 0;
+}
+
+struct devlink_dpipe_table_get_rsp {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 dpipe_tables:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ struct devlink_dl_dpipe_tables dpipe_tables;
+};
+
+void devlink_dpipe_table_get_rsp_free(struct devlink_dpipe_table_get_rsp *rsp);
+
+/*
+ * Get dpipe table attributes.
+ */
+struct devlink_dpipe_table_get_rsp *
+devlink_dpipe_table_get(struct ynl_sock *ys,
+ struct devlink_dpipe_table_get_req *req);
+
+/* ============== DEVLINK_CMD_DPIPE_ENTRIES_GET ============== */
+/* DEVLINK_CMD_DPIPE_ENTRIES_GET - do */
+struct devlink_dpipe_entries_get_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 dpipe_table_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ char *dpipe_table_name;
+};
+
+static inline struct devlink_dpipe_entries_get_req *
+devlink_dpipe_entries_get_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_dpipe_entries_get_req));
+}
+void
+devlink_dpipe_entries_get_req_free(struct devlink_dpipe_entries_get_req *req);
+
+static inline void
+devlink_dpipe_entries_get_req_set_bus_name(struct devlink_dpipe_entries_get_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_dpipe_entries_get_req_set_dev_name(struct devlink_dpipe_entries_get_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_dpipe_entries_get_req_set_dpipe_table_name(struct devlink_dpipe_entries_get_req *req,
+ const char *dpipe_table_name)
+{
+ free(req->dpipe_table_name);
+ req->_present.dpipe_table_name_len = strlen(dpipe_table_name);
+ req->dpipe_table_name = malloc(req->_present.dpipe_table_name_len + 1);
+ memcpy(req->dpipe_table_name, dpipe_table_name, req->_present.dpipe_table_name_len);
+ req->dpipe_table_name[req->_present.dpipe_table_name_len] = 0;
+}
+
+struct devlink_dpipe_entries_get_rsp {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 dpipe_entries:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ struct devlink_dl_dpipe_entries dpipe_entries;
+};
+
+void
+devlink_dpipe_entries_get_rsp_free(struct devlink_dpipe_entries_get_rsp *rsp);
+
+/*
+ * Get dpipe entries attributes.
+ */
+struct devlink_dpipe_entries_get_rsp *
+devlink_dpipe_entries_get(struct ynl_sock *ys,
+ struct devlink_dpipe_entries_get_req *req);
+
+/* ============== DEVLINK_CMD_DPIPE_HEADERS_GET ============== */
+/* DEVLINK_CMD_DPIPE_HEADERS_GET - do */
+struct devlink_dpipe_headers_get_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+};
+
+static inline struct devlink_dpipe_headers_get_req *
+devlink_dpipe_headers_get_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_dpipe_headers_get_req));
+}
+void
+devlink_dpipe_headers_get_req_free(struct devlink_dpipe_headers_get_req *req);
+
+static inline void
+devlink_dpipe_headers_get_req_set_bus_name(struct devlink_dpipe_headers_get_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_dpipe_headers_get_req_set_dev_name(struct devlink_dpipe_headers_get_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+
+struct devlink_dpipe_headers_get_rsp {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 dpipe_headers:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ struct devlink_dl_dpipe_headers dpipe_headers;
+};
+
+void
+devlink_dpipe_headers_get_rsp_free(struct devlink_dpipe_headers_get_rsp *rsp);
+
+/*
+ * Get dpipe headers attributes.
+ */
+struct devlink_dpipe_headers_get_rsp *
+devlink_dpipe_headers_get(struct ynl_sock *ys,
+ struct devlink_dpipe_headers_get_req *req);
+
+/* ============== DEVLINK_CMD_DPIPE_TABLE_COUNTERS_SET ============== */
+/* DEVLINK_CMD_DPIPE_TABLE_COUNTERS_SET - do */
+struct devlink_dpipe_table_counters_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 dpipe_table_name_len;
+ __u32 dpipe_table_counters_enabled:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ char *dpipe_table_name;
+ __u8 dpipe_table_counters_enabled;
+};
+
+static inline struct devlink_dpipe_table_counters_set_req *
+devlink_dpipe_table_counters_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_dpipe_table_counters_set_req));
+}
+void
+devlink_dpipe_table_counters_set_req_free(struct devlink_dpipe_table_counters_set_req *req);
+
+static inline void
+devlink_dpipe_table_counters_set_req_set_bus_name(struct devlink_dpipe_table_counters_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_dpipe_table_counters_set_req_set_dev_name(struct devlink_dpipe_table_counters_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_dpipe_table_counters_set_req_set_dpipe_table_name(struct devlink_dpipe_table_counters_set_req *req,
+ const char *dpipe_table_name)
+{
+ free(req->dpipe_table_name);
+ req->_present.dpipe_table_name_len = strlen(dpipe_table_name);
+ req->dpipe_table_name = malloc(req->_present.dpipe_table_name_len + 1);
+ memcpy(req->dpipe_table_name, dpipe_table_name, req->_present.dpipe_table_name_len);
+ req->dpipe_table_name[req->_present.dpipe_table_name_len] = 0;
+}
+static inline void
+devlink_dpipe_table_counters_set_req_set_dpipe_table_counters_enabled(struct devlink_dpipe_table_counters_set_req *req,
+ __u8 dpipe_table_counters_enabled)
+{
+ req->_present.dpipe_table_counters_enabled = 1;
+ req->dpipe_table_counters_enabled = dpipe_table_counters_enabled;
+}
+
+/*
+ * Set dpipe counter attributes.
+ */
+int devlink_dpipe_table_counters_set(struct ynl_sock *ys,
+ struct devlink_dpipe_table_counters_set_req *req);
+
+/* ============== DEVLINK_CMD_RESOURCE_SET ============== */
+/* DEVLINK_CMD_RESOURCE_SET - do */
+struct devlink_resource_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 resource_id:1;
+ __u32 resource_size:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u64 resource_id;
+ __u64 resource_size;
+};
+
+static inline struct devlink_resource_set_req *
+devlink_resource_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_resource_set_req));
+}
+void devlink_resource_set_req_free(struct devlink_resource_set_req *req);
+
+static inline void
+devlink_resource_set_req_set_bus_name(struct devlink_resource_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_resource_set_req_set_dev_name(struct devlink_resource_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_resource_set_req_set_resource_id(struct devlink_resource_set_req *req,
+ __u64 resource_id)
+{
+ req->_present.resource_id = 1;
+ req->resource_id = resource_id;
+}
+static inline void
+devlink_resource_set_req_set_resource_size(struct devlink_resource_set_req *req,
+ __u64 resource_size)
+{
+ req->_present.resource_size = 1;
+ req->resource_size = resource_size;
+}
+
+/*
+ * Set resource attributes.
+ */
+int devlink_resource_set(struct ynl_sock *ys,
+ struct devlink_resource_set_req *req);
+
+/* ============== DEVLINK_CMD_RESOURCE_DUMP ============== */
+/* DEVLINK_CMD_RESOURCE_DUMP - do */
+struct devlink_resource_dump_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+};
+
+static inline struct devlink_resource_dump_req *
+devlink_resource_dump_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_resource_dump_req));
+}
+void devlink_resource_dump_req_free(struct devlink_resource_dump_req *req);
+
+static inline void
+devlink_resource_dump_req_set_bus_name(struct devlink_resource_dump_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_resource_dump_req_set_dev_name(struct devlink_resource_dump_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+
+struct devlink_resource_dump_rsp {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 resource_list:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ struct devlink_dl_resource_list resource_list;
+};
+
+void devlink_resource_dump_rsp_free(struct devlink_resource_dump_rsp *rsp);
+
+/*
+ * Get resource attributes.
+ */
+struct devlink_resource_dump_rsp *
+devlink_resource_dump(struct ynl_sock *ys,
+ struct devlink_resource_dump_req *req);
+
+/* ============== DEVLINK_CMD_RELOAD ============== */
+/* DEVLINK_CMD_RELOAD - do */
+struct devlink_reload_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 reload_action:1;
+ __u32 reload_limits:1;
+ __u32 netns_pid:1;
+ __u32 netns_fd:1;
+ __u32 netns_id:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ enum devlink_reload_action reload_action;
+ struct nla_bitfield32 reload_limits;
+ __u32 netns_pid;
+ __u32 netns_fd;
+ __u32 netns_id;
+};
+
+static inline struct devlink_reload_req *devlink_reload_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_reload_req));
+}
+void devlink_reload_req_free(struct devlink_reload_req *req);
+
+static inline void
+devlink_reload_req_set_bus_name(struct devlink_reload_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_reload_req_set_dev_name(struct devlink_reload_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_reload_req_set_reload_action(struct devlink_reload_req *req,
+ enum devlink_reload_action reload_action)
+{
+ req->_present.reload_action = 1;
+ req->reload_action = reload_action;
+}
+static inline void
+devlink_reload_req_set_reload_limits(struct devlink_reload_req *req,
+ struct nla_bitfield32 *reload_limits)
+{
+ req->_present.reload_limits = 1;
+ memcpy(&req->reload_limits, reload_limits, sizeof(struct nla_bitfield32));
+}
+static inline void
+devlink_reload_req_set_netns_pid(struct devlink_reload_req *req,
+ __u32 netns_pid)
+{
+ req->_present.netns_pid = 1;
+ req->netns_pid = netns_pid;
+}
+static inline void
+devlink_reload_req_set_netns_fd(struct devlink_reload_req *req, __u32 netns_fd)
+{
+ req->_present.netns_fd = 1;
+ req->netns_fd = netns_fd;
+}
+static inline void
+devlink_reload_req_set_netns_id(struct devlink_reload_req *req, __u32 netns_id)
+{
+ req->_present.netns_id = 1;
+ req->netns_id = netns_id;
+}
+
+struct devlink_reload_rsp {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 reload_actions_performed:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ struct nla_bitfield32 reload_actions_performed;
+};
+
+void devlink_reload_rsp_free(struct devlink_reload_rsp *rsp);
+
+/*
+ * Reload devlink.
+ */
+struct devlink_reload_rsp *
+devlink_reload(struct ynl_sock *ys, struct devlink_reload_req *req);
+
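A minimal sketch of the reload API above — editorial illustration, not part of the patch — assuming an open ynl_sock and the hypothetical device pci/0000:01:00.0:

#include "devlink-user.h"

/* Ask the driver to activate pending firmware via devlink reload. */
static int reload_fw_activate_example(struct ynl_sock *ys)
{
	struct devlink_reload_req *req;
	struct devlink_reload_rsp *rsp;

	req = devlink_reload_req_alloc();
	devlink_reload_req_set_bus_name(req, "pci");
	devlink_reload_req_set_dev_name(req, "0000:01:00.0");
	devlink_reload_req_set_reload_action(req, DEVLINK_RELOAD_ACTION_FW_ACTIVATE);
	rsp = devlink_reload(ys, req);
	devlink_reload_req_free(req);
	if (!rsp)
		return -1;
	/* rsp->reload_actions_performed reports what the driver actually did. */
	devlink_reload_rsp_free(rsp);
	return 0;
}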
/* ============== DEVLINK_CMD_PARAM_GET ============== */
/* DEVLINK_CMD_PARAM_GET - do */
struct devlink_param_get_req {
@@ -933,7 +2570,7 @@ devlink_param_get_req_dump_set_dev_name(struct devlink_param_get_req_dump *req,
struct devlink_param_get_list {
struct devlink_param_get_list *next;
- struct devlink_param_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_param_get_rsp obj __attribute__((aligned(8)));
};
void devlink_param_get_list_free(struct devlink_param_get_list *rsp);
@@ -942,6 +2579,80 @@ struct devlink_param_get_list *
devlink_param_get_dump(struct ynl_sock *ys,
struct devlink_param_get_req_dump *req);
+/* ============== DEVLINK_CMD_PARAM_SET ============== */
+/* DEVLINK_CMD_PARAM_SET - do */
+struct devlink_param_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 param_name_len;
+ __u32 param_type:1;
+ __u32 param_value_cmode:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ char *param_name;
+ __u8 param_type;
+ enum devlink_param_cmode param_value_cmode;
+};
+
+static inline struct devlink_param_set_req *devlink_param_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_param_set_req));
+}
+void devlink_param_set_req_free(struct devlink_param_set_req *req);
+
+static inline void
+devlink_param_set_req_set_bus_name(struct devlink_param_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_param_set_req_set_dev_name(struct devlink_param_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_param_set_req_set_param_name(struct devlink_param_set_req *req,
+ const char *param_name)
+{
+ free(req->param_name);
+ req->_present.param_name_len = strlen(param_name);
+ req->param_name = malloc(req->_present.param_name_len + 1);
+ memcpy(req->param_name, param_name, req->_present.param_name_len);
+ req->param_name[req->_present.param_name_len] = 0;
+}
+static inline void
+devlink_param_set_req_set_param_type(struct devlink_param_set_req *req,
+ __u8 param_type)
+{
+ req->_present.param_type = 1;
+ req->param_type = param_type;
+}
+static inline void
+devlink_param_set_req_set_param_value_cmode(struct devlink_param_set_req *req,
+ enum devlink_param_cmode param_value_cmode)
+{
+ req->_present.param_value_cmode = 1;
+ req->param_value_cmode = param_value_cmode;
+}
+
+/*
+ * Set param instances.
+ */
+int devlink_param_set(struct ynl_sock *ys, struct devlink_param_set_req *req);
+
/* ============== DEVLINK_CMD_REGION_GET ============== */
/* DEVLINK_CMD_REGION_GET - do */
struct devlink_region_get_req {
@@ -1065,7 +2776,7 @@ devlink_region_get_req_dump_set_dev_name(struct devlink_region_get_req_dump *req
struct devlink_region_get_list {
struct devlink_region_get_list *next;
- struct devlink_region_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_region_get_rsp obj __attribute__((aligned(8)));
};
void devlink_region_get_list_free(struct devlink_region_get_list *rsp);
@@ -1074,6 +2785,430 @@ struct devlink_region_get_list *
devlink_region_get_dump(struct ynl_sock *ys,
struct devlink_region_get_req_dump *req);
+/* ============== DEVLINK_CMD_REGION_NEW ============== */
+/* DEVLINK_CMD_REGION_NEW - do */
+struct devlink_region_new_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 region_name_len;
+ __u32 region_snapshot_id:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ char *region_name;
+ __u32 region_snapshot_id;
+};
+
+static inline struct devlink_region_new_req *devlink_region_new_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_region_new_req));
+}
+void devlink_region_new_req_free(struct devlink_region_new_req *req);
+
+static inline void
+devlink_region_new_req_set_bus_name(struct devlink_region_new_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_region_new_req_set_dev_name(struct devlink_region_new_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_region_new_req_set_port_index(struct devlink_region_new_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_region_new_req_set_region_name(struct devlink_region_new_req *req,
+ const char *region_name)
+{
+ free(req->region_name);
+ req->_present.region_name_len = strlen(region_name);
+ req->region_name = malloc(req->_present.region_name_len + 1);
+ memcpy(req->region_name, region_name, req->_present.region_name_len);
+ req->region_name[req->_present.region_name_len] = 0;
+}
+static inline void
+devlink_region_new_req_set_region_snapshot_id(struct devlink_region_new_req *req,
+ __u32 region_snapshot_id)
+{
+ req->_present.region_snapshot_id = 1;
+ req->region_snapshot_id = region_snapshot_id;
+}
+
+struct devlink_region_new_rsp {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 region_name_len;
+ __u32 region_snapshot_id:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ char *region_name;
+ __u32 region_snapshot_id;
+};
+
+void devlink_region_new_rsp_free(struct devlink_region_new_rsp *rsp);
+
+/*
+ * Create region snapshot.
+ */
+struct devlink_region_new_rsp *
+devlink_region_new(struct ynl_sock *ys, struct devlink_region_new_req *req);
+
+/* ============== DEVLINK_CMD_REGION_DEL ============== */
+/* DEVLINK_CMD_REGION_DEL - do */
+struct devlink_region_del_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 region_name_len;
+ __u32 region_snapshot_id:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ char *region_name;
+ __u32 region_snapshot_id;
+};
+
+static inline struct devlink_region_del_req *devlink_region_del_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_region_del_req));
+}
+void devlink_region_del_req_free(struct devlink_region_del_req *req);
+
+static inline void
+devlink_region_del_req_set_bus_name(struct devlink_region_del_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_region_del_req_set_dev_name(struct devlink_region_del_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_region_del_req_set_port_index(struct devlink_region_del_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_region_del_req_set_region_name(struct devlink_region_del_req *req,
+ const char *region_name)
+{
+ free(req->region_name);
+ req->_present.region_name_len = strlen(region_name);
+ req->region_name = malloc(req->_present.region_name_len + 1);
+ memcpy(req->region_name, region_name, req->_present.region_name_len);
+ req->region_name[req->_present.region_name_len] = 0;
+}
+static inline void
+devlink_region_del_req_set_region_snapshot_id(struct devlink_region_del_req *req,
+ __u32 region_snapshot_id)
+{
+ req->_present.region_snapshot_id = 1;
+ req->region_snapshot_id = region_snapshot_id;
+}
+
+/*
+ * Delete region snapshot.
+ */
+int devlink_region_del(struct ynl_sock *ys, struct devlink_region_del_req *req);
+
+/* ============== DEVLINK_CMD_REGION_READ ============== */
+/* DEVLINK_CMD_REGION_READ - dump */
+struct devlink_region_read_req_dump {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 region_name_len;
+ __u32 region_snapshot_id:1;
+ __u32 region_direct:1;
+ __u32 region_chunk_addr:1;
+ __u32 region_chunk_len:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ char *region_name;
+ __u32 region_snapshot_id;
+ __u64 region_chunk_addr;
+ __u64 region_chunk_len;
+};
+
+static inline struct devlink_region_read_req_dump *
+devlink_region_read_req_dump_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_region_read_req_dump));
+}
+void
+devlink_region_read_req_dump_free(struct devlink_region_read_req_dump *req);
+
+static inline void
+devlink_region_read_req_dump_set_bus_name(struct devlink_region_read_req_dump *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_region_read_req_dump_set_dev_name(struct devlink_region_read_req_dump *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_region_read_req_dump_set_port_index(struct devlink_region_read_req_dump *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_region_read_req_dump_set_region_name(struct devlink_region_read_req_dump *req,
+ const char *region_name)
+{
+ free(req->region_name);
+ req->_present.region_name_len = strlen(region_name);
+ req->region_name = malloc(req->_present.region_name_len + 1);
+ memcpy(req->region_name, region_name, req->_present.region_name_len);
+ req->region_name[req->_present.region_name_len] = 0;
+}
+static inline void
+devlink_region_read_req_dump_set_region_snapshot_id(struct devlink_region_read_req_dump *req,
+ __u32 region_snapshot_id)
+{
+ req->_present.region_snapshot_id = 1;
+ req->region_snapshot_id = region_snapshot_id;
+}
+static inline void
+devlink_region_read_req_dump_set_region_direct(struct devlink_region_read_req_dump *req)
+{
+ req->_present.region_direct = 1;
+}
+static inline void
+devlink_region_read_req_dump_set_region_chunk_addr(struct devlink_region_read_req_dump *req,
+ __u64 region_chunk_addr)
+{
+ req->_present.region_chunk_addr = 1;
+ req->region_chunk_addr = region_chunk_addr;
+}
+static inline void
+devlink_region_read_req_dump_set_region_chunk_len(struct devlink_region_read_req_dump *req,
+ __u64 region_chunk_len)
+{
+ req->_present.region_chunk_len = 1;
+ req->region_chunk_len = region_chunk_len;
+}
+
+struct devlink_region_read_rsp_dump {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 region_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ char *region_name;
+};
+
+struct devlink_region_read_rsp_list {
+ struct devlink_region_read_rsp_list *next;
+ struct devlink_region_read_rsp_dump obj __attribute__((aligned(8)));
+};
+
+void
+devlink_region_read_rsp_list_free(struct devlink_region_read_rsp_list *rsp);
+
+struct devlink_region_read_rsp_list *
+devlink_region_read_dump(struct ynl_sock *ys,
+ struct devlink_region_read_req_dump *req);
+
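+/* Usage sketch: an illustrative dump flow.  Dump requests return a singly
+ * linked list; walk it through the ->next / ->obj members declared above and
+ * release it with the matching _list_free().  "ys" is an open YNL socket as
+ * in the reload sketch earlier in this header, handle_chunk() stands in for
+ * whatever per-chunk processing the caller does (hypothetical), and the
+ * region name and 256-byte window are placeholders.
+ *
+ *	struct devlink_region_read_req_dump *req;
+ *	struct devlink_region_read_rsp_list *rsp, *iter;
+ *
+ *	req = devlink_region_read_req_dump_alloc();
+ *	devlink_region_read_req_dump_set_bus_name(req, "pci");
+ *	devlink_region_read_req_dump_set_dev_name(req, "0000:01:00.0");
+ *	devlink_region_read_req_dump_set_region_name(req, "fw-health");
+ *	devlink_region_read_req_dump_set_region_direct(req);
+ *	devlink_region_read_req_dump_set_region_chunk_addr(req, 0);
+ *	devlink_region_read_req_dump_set_region_chunk_len(req, 256);
+ *	rsp = devlink_region_read_dump(ys, req);
+ *	devlink_region_read_req_dump_free(req);
+ *	for (iter = rsp; iter; iter = iter->next)
+ *		handle_chunk(&iter->obj);
+ *	devlink_region_read_rsp_list_free(rsp);
+ */
+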
+/* ============== DEVLINK_CMD_PORT_PARAM_GET ============== */
+/* DEVLINK_CMD_PORT_PARAM_GET - do */
+struct devlink_port_param_get_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+};
+
+static inline struct devlink_port_param_get_req *
+devlink_port_param_get_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_port_param_get_req));
+}
+void devlink_port_param_get_req_free(struct devlink_port_param_get_req *req);
+
+static inline void
+devlink_port_param_get_req_set_bus_name(struct devlink_port_param_get_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_port_param_get_req_set_dev_name(struct devlink_port_param_get_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_port_param_get_req_set_port_index(struct devlink_port_param_get_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+
+struct devlink_port_param_get_rsp {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+};
+
+void devlink_port_param_get_rsp_free(struct devlink_port_param_get_rsp *rsp);
+
+/*
+ * Get port param instances.
+ */
+struct devlink_port_param_get_rsp *
+devlink_port_param_get(struct ynl_sock *ys,
+ struct devlink_port_param_get_req *req);
+
+/* DEVLINK_CMD_PORT_PARAM_GET - dump */
+struct devlink_port_param_get_list {
+ struct devlink_port_param_get_list *next;
+ struct devlink_port_param_get_rsp obj __attribute__((aligned(8)));
+};
+
+void devlink_port_param_get_list_free(struct devlink_port_param_get_list *rsp);
+
+struct devlink_port_param_get_list *
+devlink_port_param_get_dump(struct ynl_sock *ys);
+
+/* ============== DEVLINK_CMD_PORT_PARAM_SET ============== */
+/* DEVLINK_CMD_PORT_PARAM_SET - do */
+struct devlink_port_param_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+};
+
+static inline struct devlink_port_param_set_req *
+devlink_port_param_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_port_param_set_req));
+}
+void devlink_port_param_set_req_free(struct devlink_port_param_set_req *req);
+
+static inline void
+devlink_port_param_set_req_set_bus_name(struct devlink_port_param_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_port_param_set_req_set_dev_name(struct devlink_port_param_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_port_param_set_req_set_port_index(struct devlink_port_param_set_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+
+/*
+ * Set port param instances.
+ */
+int devlink_port_param_set(struct ynl_sock *ys,
+ struct devlink_port_param_set_req *req);
+
/* ============== DEVLINK_CMD_INFO_GET ============== */
/* DEVLINK_CMD_INFO_GET - do */
struct devlink_info_get_req {
@@ -1144,7 +3279,7 @@ devlink_info_get(struct ynl_sock *ys, struct devlink_info_get_req *req);
/* DEVLINK_CMD_INFO_GET - dump */
struct devlink_info_get_list {
struct devlink_info_get_list *next;
- struct devlink_info_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_info_get_rsp obj __attribute__((aligned(8)));
};
void devlink_info_get_list_free(struct devlink_info_get_list *rsp);
@@ -1288,7 +3423,7 @@ devlink_health_reporter_get_req_dump_set_port_index(struct devlink_health_report
struct devlink_health_reporter_get_list {
struct devlink_health_reporter_get_list *next;
- struct devlink_health_reporter_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_health_reporter_get_rsp obj __attribute__((aligned(8)));
};
void
@@ -1298,6 +3433,466 @@ struct devlink_health_reporter_get_list *
devlink_health_reporter_get_dump(struct ynl_sock *ys,
struct devlink_health_reporter_get_req_dump *req);
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_SET ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_SET - do */
+struct devlink_health_reporter_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 health_reporter_name_len;
+ __u32 health_reporter_graceful_period:1;
+ __u32 health_reporter_auto_recover:1;
+ __u32 health_reporter_auto_dump:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ char *health_reporter_name;
+ __u64 health_reporter_graceful_period;
+ __u8 health_reporter_auto_recover;
+ __u8 health_reporter_auto_dump;
+};
+
+static inline struct devlink_health_reporter_set_req *
+devlink_health_reporter_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_health_reporter_set_req));
+}
+void
+devlink_health_reporter_set_req_free(struct devlink_health_reporter_set_req *req);
+
+static inline void
+devlink_health_reporter_set_req_set_bus_name(struct devlink_health_reporter_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_set_req_set_dev_name(struct devlink_health_reporter_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_set_req_set_port_index(struct devlink_health_reporter_set_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_health_reporter_set_req_set_health_reporter_name(struct devlink_health_reporter_set_req *req,
+ const char *health_reporter_name)
+{
+ free(req->health_reporter_name);
+ req->_present.health_reporter_name_len = strlen(health_reporter_name);
+ req->health_reporter_name = malloc(req->_present.health_reporter_name_len + 1);
+ memcpy(req->health_reporter_name, health_reporter_name, req->_present.health_reporter_name_len);
+ req->health_reporter_name[req->_present.health_reporter_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_set_req_set_health_reporter_graceful_period(struct devlink_health_reporter_set_req *req,
+ __u64 health_reporter_graceful_period)
+{
+ req->_present.health_reporter_graceful_period = 1;
+ req->health_reporter_graceful_period = health_reporter_graceful_period;
+}
+static inline void
+devlink_health_reporter_set_req_set_health_reporter_auto_recover(struct devlink_health_reporter_set_req *req,
+ __u8 health_reporter_auto_recover)
+{
+ req->_present.health_reporter_auto_recover = 1;
+ req->health_reporter_auto_recover = health_reporter_auto_recover;
+}
+static inline void
+devlink_health_reporter_set_req_set_health_reporter_auto_dump(struct devlink_health_reporter_set_req *req,
+ __u8 health_reporter_auto_dump)
+{
+ req->_present.health_reporter_auto_dump = 1;
+ req->health_reporter_auto_dump = health_reporter_auto_dump;
+}
+
+/*
+ * Set health reporter instances.
+ */
+int devlink_health_reporter_set(struct ynl_sock *ys,
+ struct devlink_health_reporter_set_req *req);
+
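+/* Usage sketch: health reporter operations are keyed by bus/dev name, the
+ * reporter name, and optionally port_index for per-port reporters.  A
+ * minimal, illustrative call that enables auto-recovery on a device-level
+ * reporter ("ys" as in the reload sketch; names are placeholders):
+ *
+ *	struct devlink_health_reporter_set_req *req;
+ *
+ *	req = devlink_health_reporter_set_req_alloc();
+ *	devlink_health_reporter_set_req_set_bus_name(req, "pci");
+ *	devlink_health_reporter_set_req_set_dev_name(req, "0000:01:00.0");
+ *	devlink_health_reporter_set_req_set_health_reporter_name(req, "fw");
+ *	devlink_health_reporter_set_req_set_health_reporter_auto_recover(req, 1);
+ *	devlink_health_reporter_set(ys, req);
+ *	devlink_health_reporter_set_req_free(req);
+ */
+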
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_RECOVER ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_RECOVER - do */
+struct devlink_health_reporter_recover_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 health_reporter_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ char *health_reporter_name;
+};
+
+static inline struct devlink_health_reporter_recover_req *
+devlink_health_reporter_recover_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_health_reporter_recover_req));
+}
+void
+devlink_health_reporter_recover_req_free(struct devlink_health_reporter_recover_req *req);
+
+static inline void
+devlink_health_reporter_recover_req_set_bus_name(struct devlink_health_reporter_recover_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_recover_req_set_dev_name(struct devlink_health_reporter_recover_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_recover_req_set_port_index(struct devlink_health_reporter_recover_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_health_reporter_recover_req_set_health_reporter_name(struct devlink_health_reporter_recover_req *req,
+ const char *health_reporter_name)
+{
+ free(req->health_reporter_name);
+ req->_present.health_reporter_name_len = strlen(health_reporter_name);
+ req->health_reporter_name = malloc(req->_present.health_reporter_name_len + 1);
+ memcpy(req->health_reporter_name, health_reporter_name, req->_present.health_reporter_name_len);
+ req->health_reporter_name[req->_present.health_reporter_name_len] = 0;
+}
+
+/*
+ * Recover health reporter instances.
+ */
+int devlink_health_reporter_recover(struct ynl_sock *ys,
+ struct devlink_health_reporter_recover_req *req);
+
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_DIAGNOSE ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_DIAGNOSE - do */
+struct devlink_health_reporter_diagnose_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 health_reporter_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ char *health_reporter_name;
+};
+
+static inline struct devlink_health_reporter_diagnose_req *
+devlink_health_reporter_diagnose_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_health_reporter_diagnose_req));
+}
+void
+devlink_health_reporter_diagnose_req_free(struct devlink_health_reporter_diagnose_req *req);
+
+static inline void
+devlink_health_reporter_diagnose_req_set_bus_name(struct devlink_health_reporter_diagnose_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_diagnose_req_set_dev_name(struct devlink_health_reporter_diagnose_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_diagnose_req_set_port_index(struct devlink_health_reporter_diagnose_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_health_reporter_diagnose_req_set_health_reporter_name(struct devlink_health_reporter_diagnose_req *req,
+ const char *health_reporter_name)
+{
+ free(req->health_reporter_name);
+ req->_present.health_reporter_name_len = strlen(health_reporter_name);
+ req->health_reporter_name = malloc(req->_present.health_reporter_name_len + 1);
+ memcpy(req->health_reporter_name, health_reporter_name, req->_present.health_reporter_name_len);
+ req->health_reporter_name[req->_present.health_reporter_name_len] = 0;
+}
+
+/*
+ * Diagnose health reporter instances.
+ */
+int devlink_health_reporter_diagnose(struct ynl_sock *ys,
+ struct devlink_health_reporter_diagnose_req *req);
+
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_DUMP_GET - dump */
+struct devlink_health_reporter_dump_get_req_dump {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 health_reporter_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ char *health_reporter_name;
+};
+
+static inline struct devlink_health_reporter_dump_get_req_dump *
+devlink_health_reporter_dump_get_req_dump_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_health_reporter_dump_get_req_dump));
+}
+void
+devlink_health_reporter_dump_get_req_dump_free(struct devlink_health_reporter_dump_get_req_dump *req);
+
+static inline void
+devlink_health_reporter_dump_get_req_dump_set_bus_name(struct devlink_health_reporter_dump_get_req_dump *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_dump_get_req_dump_set_dev_name(struct devlink_health_reporter_dump_get_req_dump *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_dump_get_req_dump_set_port_index(struct devlink_health_reporter_dump_get_req_dump *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_health_reporter_dump_get_req_dump_set_health_reporter_name(struct devlink_health_reporter_dump_get_req_dump *req,
+ const char *health_reporter_name)
+{
+ free(req->health_reporter_name);
+ req->_present.health_reporter_name_len = strlen(health_reporter_name);
+ req->health_reporter_name = malloc(req->_present.health_reporter_name_len + 1);
+ memcpy(req->health_reporter_name, health_reporter_name, req->_present.health_reporter_name_len);
+ req->health_reporter_name[req->_present.health_reporter_name_len] = 0;
+}
+
+struct devlink_health_reporter_dump_get_rsp_dump {
+ struct {
+ __u32 fmsg:1;
+ } _present;
+
+ struct devlink_dl_fmsg fmsg;
+};
+
+struct devlink_health_reporter_dump_get_rsp_list {
+ struct devlink_health_reporter_dump_get_rsp_list *next;
+ struct devlink_health_reporter_dump_get_rsp_dump obj __attribute__((aligned(8)));
+};
+
+void
+devlink_health_reporter_dump_get_rsp_list_free(struct devlink_health_reporter_dump_get_rsp_list *rsp);
+
+struct devlink_health_reporter_dump_get_rsp_list *
+devlink_health_reporter_dump_get_dump(struct ynl_sock *ys,
+ struct devlink_health_reporter_dump_get_req_dump *req);
+
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_DUMP_CLEAR - do */
+struct devlink_health_reporter_dump_clear_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 health_reporter_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ char *health_reporter_name;
+};
+
+static inline struct devlink_health_reporter_dump_clear_req *
+devlink_health_reporter_dump_clear_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_health_reporter_dump_clear_req));
+}
+void
+devlink_health_reporter_dump_clear_req_free(struct devlink_health_reporter_dump_clear_req *req);
+
+static inline void
+devlink_health_reporter_dump_clear_req_set_bus_name(struct devlink_health_reporter_dump_clear_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_dump_clear_req_set_dev_name(struct devlink_health_reporter_dump_clear_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_dump_clear_req_set_port_index(struct devlink_health_reporter_dump_clear_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_health_reporter_dump_clear_req_set_health_reporter_name(struct devlink_health_reporter_dump_clear_req *req,
+ const char *health_reporter_name)
+{
+ free(req->health_reporter_name);
+ req->_present.health_reporter_name_len = strlen(health_reporter_name);
+ req->health_reporter_name = malloc(req->_present.health_reporter_name_len + 1);
+ memcpy(req->health_reporter_name, health_reporter_name, req->_present.health_reporter_name_len);
+ req->health_reporter_name[req->_present.health_reporter_name_len] = 0;
+}
+
+/*
+ * Clear dump of health reporter instances.
+ */
+int devlink_health_reporter_dump_clear(struct ynl_sock *ys,
+ struct devlink_health_reporter_dump_clear_req *req);
+
+/* ============== DEVLINK_CMD_FLASH_UPDATE ============== */
+/* DEVLINK_CMD_FLASH_UPDATE - do */
+struct devlink_flash_update_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 flash_update_file_name_len;
+ __u32 flash_update_component_len;
+ __u32 flash_update_overwrite_mask:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ char *flash_update_file_name;
+ char *flash_update_component;
+ struct nla_bitfield32 flash_update_overwrite_mask;
+};
+
+static inline struct devlink_flash_update_req *
+devlink_flash_update_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_flash_update_req));
+}
+void devlink_flash_update_req_free(struct devlink_flash_update_req *req);
+
+static inline void
+devlink_flash_update_req_set_bus_name(struct devlink_flash_update_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_flash_update_req_set_dev_name(struct devlink_flash_update_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_flash_update_req_set_flash_update_file_name(struct devlink_flash_update_req *req,
+ const char *flash_update_file_name)
+{
+ free(req->flash_update_file_name);
+ req->_present.flash_update_file_name_len = strlen(flash_update_file_name);
+ req->flash_update_file_name = malloc(req->_present.flash_update_file_name_len + 1);
+ memcpy(req->flash_update_file_name, flash_update_file_name, req->_present.flash_update_file_name_len);
+ req->flash_update_file_name[req->_present.flash_update_file_name_len] = 0;
+}
+static inline void
+devlink_flash_update_req_set_flash_update_component(struct devlink_flash_update_req *req,
+ const char *flash_update_component)
+{
+ free(req->flash_update_component);
+ req->_present.flash_update_component_len = strlen(flash_update_component);
+ req->flash_update_component = malloc(req->_present.flash_update_component_len + 1);
+ memcpy(req->flash_update_component, flash_update_component, req->_present.flash_update_component_len);
+ req->flash_update_component[req->_present.flash_update_component_len] = 0;
+}
+static inline void
+devlink_flash_update_req_set_flash_update_overwrite_mask(struct devlink_flash_update_req *req,
+ struct nla_bitfield32 *flash_update_overwrite_mask)
+{
+ req->_present.flash_update_overwrite_mask = 1;
+ memcpy(&req->flash_update_overwrite_mask, flash_update_overwrite_mask, sizeof(struct nla_bitfield32));
+}
+
+/*
+ * Flash update devlink instances.
+ */
+int devlink_flash_update(struct ynl_sock *ys,
+ struct devlink_flash_update_req *req);
+
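+/* Usage sketch: "do" operations without a response struct, such as
+ * devlink_flash_update(), return an int; a non-zero result is assumed to
+ * indicate failure, with details left in the socket's error state by the
+ * YNL library.  "ys" is an open YNL socket as in the reload sketch; the
+ * device handle and firmware file name are placeholders.
+ *
+ *	struct devlink_flash_update_req *req;
+ *	int err;
+ *
+ *	req = devlink_flash_update_req_alloc();
+ *	devlink_flash_update_req_set_bus_name(req, "pci");
+ *	devlink_flash_update_req_set_dev_name(req, "0000:01:00.0");
+ *	devlink_flash_update_req_set_flash_update_file_name(req, "fw.bin");
+ *	err = devlink_flash_update(ys, req);
+ *	devlink_flash_update_req_free(req);
+ */
+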
/* ============== DEVLINK_CMD_TRAP_GET ============== */
/* DEVLINK_CMD_TRAP_GET - do */
struct devlink_trap_get_req {
@@ -1410,7 +4005,7 @@ devlink_trap_get_req_dump_set_dev_name(struct devlink_trap_get_req_dump *req,
struct devlink_trap_get_list {
struct devlink_trap_get_list *next;
- struct devlink_trap_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_trap_get_rsp obj __attribute__((aligned(8)));
};
void devlink_trap_get_list_free(struct devlink_trap_get_list *rsp);
@@ -1419,6 +4014,71 @@ struct devlink_trap_get_list *
devlink_trap_get_dump(struct ynl_sock *ys,
struct devlink_trap_get_req_dump *req);
+/* ============== DEVLINK_CMD_TRAP_SET ============== */
+/* DEVLINK_CMD_TRAP_SET - do */
+struct devlink_trap_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 trap_name_len;
+ __u32 trap_action:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ char *trap_name;
+ enum devlink_trap_action trap_action;
+};
+
+static inline struct devlink_trap_set_req *devlink_trap_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_trap_set_req));
+}
+void devlink_trap_set_req_free(struct devlink_trap_set_req *req);
+
+static inline void
+devlink_trap_set_req_set_bus_name(struct devlink_trap_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_trap_set_req_set_dev_name(struct devlink_trap_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_trap_set_req_set_trap_name(struct devlink_trap_set_req *req,
+ const char *trap_name)
+{
+ free(req->trap_name);
+ req->_present.trap_name_len = strlen(trap_name);
+ req->trap_name = malloc(req->_present.trap_name_len + 1);
+ memcpy(req->trap_name, trap_name, req->_present.trap_name_len);
+ req->trap_name[req->_present.trap_name_len] = 0;
+}
+static inline void
+devlink_trap_set_req_set_trap_action(struct devlink_trap_set_req *req,
+ enum devlink_trap_action trap_action)
+{
+ req->_present.trap_action = 1;
+ req->trap_action = trap_action;
+}
+
+/*
+ * Set trap instances.
+ */
+int devlink_trap_set(struct ynl_sock *ys, struct devlink_trap_set_req *req);
+
/* ============== DEVLINK_CMD_TRAP_GROUP_GET ============== */
/* DEVLINK_CMD_TRAP_GROUP_GET - do */
struct devlink_trap_group_get_req {
@@ -1534,7 +4194,7 @@ devlink_trap_group_get_req_dump_set_dev_name(struct devlink_trap_group_get_req_d
struct devlink_trap_group_get_list {
struct devlink_trap_group_get_list *next;
- struct devlink_trap_group_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_trap_group_get_rsp obj __attribute__((aligned(8)));
};
void devlink_trap_group_get_list_free(struct devlink_trap_group_get_list *rsp);
@@ -1543,6 +4203,82 @@ struct devlink_trap_group_get_list *
devlink_trap_group_get_dump(struct ynl_sock *ys,
struct devlink_trap_group_get_req_dump *req);
+/* ============== DEVLINK_CMD_TRAP_GROUP_SET ============== */
+/* DEVLINK_CMD_TRAP_GROUP_SET - do */
+struct devlink_trap_group_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 trap_group_name_len;
+ __u32 trap_action:1;
+ __u32 trap_policer_id:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ char *trap_group_name;
+ enum devlink_trap_action trap_action;
+ __u32 trap_policer_id;
+};
+
+static inline struct devlink_trap_group_set_req *
+devlink_trap_group_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_trap_group_set_req));
+}
+void devlink_trap_group_set_req_free(struct devlink_trap_group_set_req *req);
+
+static inline void
+devlink_trap_group_set_req_set_bus_name(struct devlink_trap_group_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_trap_group_set_req_set_dev_name(struct devlink_trap_group_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_trap_group_set_req_set_trap_group_name(struct devlink_trap_group_set_req *req,
+ const char *trap_group_name)
+{
+ free(req->trap_group_name);
+ req->_present.trap_group_name_len = strlen(trap_group_name);
+ req->trap_group_name = malloc(req->_present.trap_group_name_len + 1);
+ memcpy(req->trap_group_name, trap_group_name, req->_present.trap_group_name_len);
+ req->trap_group_name[req->_present.trap_group_name_len] = 0;
+}
+static inline void
+devlink_trap_group_set_req_set_trap_action(struct devlink_trap_group_set_req *req,
+ enum devlink_trap_action trap_action)
+{
+ req->_present.trap_action = 1;
+ req->trap_action = trap_action;
+}
+static inline void
+devlink_trap_group_set_req_set_trap_policer_id(struct devlink_trap_group_set_req *req,
+ __u32 trap_policer_id)
+{
+ req->_present.trap_policer_id = 1;
+ req->trap_policer_id = trap_policer_id;
+}
+
+/*
+ * Set trap group instances.
+ */
+int devlink_trap_group_set(struct ynl_sock *ys,
+ struct devlink_trap_group_set_req *req);
+
/* ============== DEVLINK_CMD_TRAP_POLICER_GET ============== */
/* DEVLINK_CMD_TRAP_POLICER_GET - do */
struct devlink_trap_policer_get_req {
@@ -1657,7 +4393,7 @@ devlink_trap_policer_get_req_dump_set_dev_name(struct devlink_trap_policer_get_r
struct devlink_trap_policer_get_list {
struct devlink_trap_policer_get_list *next;
- struct devlink_trap_policer_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_trap_policer_get_rsp obj __attribute__((aligned(8)));
};
void
@@ -1667,6 +4403,148 @@ struct devlink_trap_policer_get_list *
devlink_trap_policer_get_dump(struct ynl_sock *ys,
struct devlink_trap_policer_get_req_dump *req);
+/* ============== DEVLINK_CMD_TRAP_POLICER_SET ============== */
+/* DEVLINK_CMD_TRAP_POLICER_SET - do */
+struct devlink_trap_policer_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 trap_policer_id:1;
+ __u32 trap_policer_rate:1;
+ __u32 trap_policer_burst:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 trap_policer_id;
+ __u64 trap_policer_rate;
+ __u64 trap_policer_burst;
+};
+
+static inline struct devlink_trap_policer_set_req *
+devlink_trap_policer_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_trap_policer_set_req));
+}
+void
+devlink_trap_policer_set_req_free(struct devlink_trap_policer_set_req *req);
+
+static inline void
+devlink_trap_policer_set_req_set_bus_name(struct devlink_trap_policer_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_trap_policer_set_req_set_dev_name(struct devlink_trap_policer_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_trap_policer_set_req_set_trap_policer_id(struct devlink_trap_policer_set_req *req,
+ __u32 trap_policer_id)
+{
+ req->_present.trap_policer_id = 1;
+ req->trap_policer_id = trap_policer_id;
+}
+static inline void
+devlink_trap_policer_set_req_set_trap_policer_rate(struct devlink_trap_policer_set_req *req,
+ __u64 trap_policer_rate)
+{
+ req->_present.trap_policer_rate = 1;
+ req->trap_policer_rate = trap_policer_rate;
+}
+static inline void
+devlink_trap_policer_set_req_set_trap_policer_burst(struct devlink_trap_policer_set_req *req,
+ __u64 trap_policer_burst)
+{
+ req->_present.trap_policer_burst = 1;
+ req->trap_policer_burst = trap_policer_burst;
+}
+
+/*
+ * Set trap policer instances.
+ */
+int devlink_trap_policer_set(struct ynl_sock *ys,
+ struct devlink_trap_policer_set_req *req);
+
+/* ============== DEVLINK_CMD_HEALTH_REPORTER_TEST ============== */
+/* DEVLINK_CMD_HEALTH_REPORTER_TEST - do */
+struct devlink_health_reporter_test_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 port_index:1;
+ __u32 health_reporter_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 port_index;
+ char *health_reporter_name;
+};
+
+static inline struct devlink_health_reporter_test_req *
+devlink_health_reporter_test_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_health_reporter_test_req));
+}
+void
+devlink_health_reporter_test_req_free(struct devlink_health_reporter_test_req *req);
+
+static inline void
+devlink_health_reporter_test_req_set_bus_name(struct devlink_health_reporter_test_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_test_req_set_dev_name(struct devlink_health_reporter_test_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_health_reporter_test_req_set_port_index(struct devlink_health_reporter_test_req *req,
+ __u32 port_index)
+{
+ req->_present.port_index = 1;
+ req->port_index = port_index;
+}
+static inline void
+devlink_health_reporter_test_req_set_health_reporter_name(struct devlink_health_reporter_test_req *req,
+ const char *health_reporter_name)
+{
+ free(req->health_reporter_name);
+ req->_present.health_reporter_name_len = strlen(health_reporter_name);
+ req->health_reporter_name = malloc(req->_present.health_reporter_name_len + 1);
+ memcpy(req->health_reporter_name, health_reporter_name, req->_present.health_reporter_name_len);
+ req->health_reporter_name[req->_present.health_reporter_name_len] = 0;
+}
+
+/*
+ * Test health reporter instances.
+ */
+int devlink_health_reporter_test(struct ynl_sock *ys,
+ struct devlink_health_reporter_test_req *req);
+
/* ============== DEVLINK_CMD_RATE_GET ============== */
/* DEVLINK_CMD_RATE_GET - do */
struct devlink_rate_get_req {
@@ -1790,7 +4668,7 @@ devlink_rate_get_req_dump_set_dev_name(struct devlink_rate_get_req_dump *req,
struct devlink_rate_get_list {
struct devlink_rate_get_list *next;
- struct devlink_rate_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_rate_get_rsp obj __attribute__((aligned(8)));
};
void devlink_rate_get_list_free(struct devlink_rate_get_list *rsp);
@@ -1799,6 +4677,270 @@ struct devlink_rate_get_list *
devlink_rate_get_dump(struct ynl_sock *ys,
struct devlink_rate_get_req_dump *req);
+/* ============== DEVLINK_CMD_RATE_SET ============== */
+/* DEVLINK_CMD_RATE_SET - do */
+struct devlink_rate_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 rate_node_name_len;
+ __u32 rate_tx_share:1;
+ __u32 rate_tx_max:1;
+ __u32 rate_tx_priority:1;
+ __u32 rate_tx_weight:1;
+ __u32 rate_parent_node_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ char *rate_node_name;
+ __u64 rate_tx_share;
+ __u64 rate_tx_max;
+ __u32 rate_tx_priority;
+ __u32 rate_tx_weight;
+ char *rate_parent_node_name;
+};
+
+static inline struct devlink_rate_set_req *devlink_rate_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_rate_set_req));
+}
+void devlink_rate_set_req_free(struct devlink_rate_set_req *req);
+
+static inline void
+devlink_rate_set_req_set_bus_name(struct devlink_rate_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_rate_set_req_set_dev_name(struct devlink_rate_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_rate_set_req_set_rate_node_name(struct devlink_rate_set_req *req,
+ const char *rate_node_name)
+{
+ free(req->rate_node_name);
+ req->_present.rate_node_name_len = strlen(rate_node_name);
+ req->rate_node_name = malloc(req->_present.rate_node_name_len + 1);
+ memcpy(req->rate_node_name, rate_node_name, req->_present.rate_node_name_len);
+ req->rate_node_name[req->_present.rate_node_name_len] = 0;
+}
+static inline void
+devlink_rate_set_req_set_rate_tx_share(struct devlink_rate_set_req *req,
+ __u64 rate_tx_share)
+{
+ req->_present.rate_tx_share = 1;
+ req->rate_tx_share = rate_tx_share;
+}
+static inline void
+devlink_rate_set_req_set_rate_tx_max(struct devlink_rate_set_req *req,
+ __u64 rate_tx_max)
+{
+ req->_present.rate_tx_max = 1;
+ req->rate_tx_max = rate_tx_max;
+}
+static inline void
+devlink_rate_set_req_set_rate_tx_priority(struct devlink_rate_set_req *req,
+ __u32 rate_tx_priority)
+{
+ req->_present.rate_tx_priority = 1;
+ req->rate_tx_priority = rate_tx_priority;
+}
+static inline void
+devlink_rate_set_req_set_rate_tx_weight(struct devlink_rate_set_req *req,
+ __u32 rate_tx_weight)
+{
+ req->_present.rate_tx_weight = 1;
+ req->rate_tx_weight = rate_tx_weight;
+}
+static inline void
+devlink_rate_set_req_set_rate_parent_node_name(struct devlink_rate_set_req *req,
+ const char *rate_parent_node_name)
+{
+ free(req->rate_parent_node_name);
+ req->_present.rate_parent_node_name_len = strlen(rate_parent_node_name);
+ req->rate_parent_node_name = malloc(req->_present.rate_parent_node_name_len + 1);
+ memcpy(req->rate_parent_node_name, rate_parent_node_name, req->_present.rate_parent_node_name_len);
+ req->rate_parent_node_name[req->_present.rate_parent_node_name_len] = 0;
+}
+
+/*
+ * Set rate instances.
+ */
+int devlink_rate_set(struct ynl_sock *ys, struct devlink_rate_set_req *req);
+
+/* ============== DEVLINK_CMD_RATE_NEW ============== */
+/* DEVLINK_CMD_RATE_NEW - do */
+struct devlink_rate_new_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 rate_node_name_len;
+ __u32 rate_tx_share:1;
+ __u32 rate_tx_max:1;
+ __u32 rate_tx_priority:1;
+ __u32 rate_tx_weight:1;
+ __u32 rate_parent_node_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ char *rate_node_name;
+ __u64 rate_tx_share;
+ __u64 rate_tx_max;
+ __u32 rate_tx_priority;
+ __u32 rate_tx_weight;
+ char *rate_parent_node_name;
+};
+
+static inline struct devlink_rate_new_req *devlink_rate_new_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_rate_new_req));
+}
+void devlink_rate_new_req_free(struct devlink_rate_new_req *req);
+
+static inline void
+devlink_rate_new_req_set_bus_name(struct devlink_rate_new_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_rate_new_req_set_dev_name(struct devlink_rate_new_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_rate_new_req_set_rate_node_name(struct devlink_rate_new_req *req,
+ const char *rate_node_name)
+{
+ free(req->rate_node_name);
+ req->_present.rate_node_name_len = strlen(rate_node_name);
+ req->rate_node_name = malloc(req->_present.rate_node_name_len + 1);
+ memcpy(req->rate_node_name, rate_node_name, req->_present.rate_node_name_len);
+ req->rate_node_name[req->_present.rate_node_name_len] = 0;
+}
+static inline void
+devlink_rate_new_req_set_rate_tx_share(struct devlink_rate_new_req *req,
+ __u64 rate_tx_share)
+{
+ req->_present.rate_tx_share = 1;
+ req->rate_tx_share = rate_tx_share;
+}
+static inline void
+devlink_rate_new_req_set_rate_tx_max(struct devlink_rate_new_req *req,
+ __u64 rate_tx_max)
+{
+ req->_present.rate_tx_max = 1;
+ req->rate_tx_max = rate_tx_max;
+}
+static inline void
+devlink_rate_new_req_set_rate_tx_priority(struct devlink_rate_new_req *req,
+ __u32 rate_tx_priority)
+{
+ req->_present.rate_tx_priority = 1;
+ req->rate_tx_priority = rate_tx_priority;
+}
+static inline void
+devlink_rate_new_req_set_rate_tx_weight(struct devlink_rate_new_req *req,
+ __u32 rate_tx_weight)
+{
+ req->_present.rate_tx_weight = 1;
+ req->rate_tx_weight = rate_tx_weight;
+}
+static inline void
+devlink_rate_new_req_set_rate_parent_node_name(struct devlink_rate_new_req *req,
+ const char *rate_parent_node_name)
+{
+ free(req->rate_parent_node_name);
+ req->_present.rate_parent_node_name_len = strlen(rate_parent_node_name);
+ req->rate_parent_node_name = malloc(req->_present.rate_parent_node_name_len + 1);
+ memcpy(req->rate_parent_node_name, rate_parent_node_name, req->_present.rate_parent_node_name_len);
+ req->rate_parent_node_name[req->_present.rate_parent_node_name_len] = 0;
+}
+
+/*
+ * Create rate instances.
+ */
+int devlink_rate_new(struct ynl_sock *ys, struct devlink_rate_new_req *req);
+
+/* ============== DEVLINK_CMD_RATE_DEL ============== */
+/* DEVLINK_CMD_RATE_DEL - do */
+struct devlink_rate_del_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 rate_node_name_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ char *rate_node_name;
+};
+
+static inline struct devlink_rate_del_req *devlink_rate_del_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_rate_del_req));
+}
+void devlink_rate_del_req_free(struct devlink_rate_del_req *req);
+
+static inline void
+devlink_rate_del_req_set_bus_name(struct devlink_rate_del_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_rate_del_req_set_dev_name(struct devlink_rate_del_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_rate_del_req_set_rate_node_name(struct devlink_rate_del_req *req,
+ const char *rate_node_name)
+{
+ free(req->rate_node_name);
+ req->_present.rate_node_name_len = strlen(rate_node_name);
+ req->rate_node_name = malloc(req->_present.rate_node_name_len + 1);
+ memcpy(req->rate_node_name, rate_node_name, req->_present.rate_node_name_len);
+ req->rate_node_name[req->_present.rate_node_name_len] = 0;
+}
+
+/*
+ * Delete rate instances.
+ */
+int devlink_rate_del(struct ynl_sock *ys, struct devlink_rate_del_req *req);
+
/* ============== DEVLINK_CMD_LINECARD_GET ============== */
/* DEVLINK_CMD_LINECARD_GET - do */
struct devlink_linecard_get_req {
@@ -1910,7 +5052,7 @@ devlink_linecard_get_req_dump_set_dev_name(struct devlink_linecard_get_req_dump
struct devlink_linecard_get_list {
struct devlink_linecard_get_list *next;
- struct devlink_linecard_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_linecard_get_rsp obj __attribute__((aligned(8)));
};
void devlink_linecard_get_list_free(struct devlink_linecard_get_list *rsp);
@@ -1919,6 +5061,73 @@ struct devlink_linecard_get_list *
devlink_linecard_get_dump(struct ynl_sock *ys,
struct devlink_linecard_get_req_dump *req);
+/* ============== DEVLINK_CMD_LINECARD_SET ============== */
+/* DEVLINK_CMD_LINECARD_SET - do */
+struct devlink_linecard_set_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 linecard_index:1;
+ __u32 linecard_type_len;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ __u32 linecard_index;
+ char *linecard_type;
+};
+
+static inline struct devlink_linecard_set_req *
+devlink_linecard_set_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_linecard_set_req));
+}
+void devlink_linecard_set_req_free(struct devlink_linecard_set_req *req);
+
+static inline void
+devlink_linecard_set_req_set_bus_name(struct devlink_linecard_set_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_linecard_set_req_set_dev_name(struct devlink_linecard_set_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_linecard_set_req_set_linecard_index(struct devlink_linecard_set_req *req,
+ __u32 linecard_index)
+{
+ req->_present.linecard_index = 1;
+ req->linecard_index = linecard_index;
+}
+static inline void
+devlink_linecard_set_req_set_linecard_type(struct devlink_linecard_set_req *req,
+ const char *linecard_type)
+{
+ free(req->linecard_type);
+ req->_present.linecard_type_len = strlen(linecard_type);
+ req->linecard_type = malloc(req->_present.linecard_type_len + 1);
+ memcpy(req->linecard_type, linecard_type, req->_present.linecard_type_len);
+ req->linecard_type[req->_present.linecard_type_len] = 0;
+}
+
+/*
+ * Set line card instances.
+ */
+int devlink_linecard_set(struct ynl_sock *ys,
+ struct devlink_linecard_set_req *req);
+
/* ============== DEVLINK_CMD_SELFTESTS_GET ============== */
/* DEVLINK_CMD_SELFTESTS_GET - do */
struct devlink_selftests_get_req {
@@ -1981,7 +5190,7 @@ devlink_selftests_get(struct ynl_sock *ys,
/* DEVLINK_CMD_SELFTESTS_GET - dump */
struct devlink_selftests_get_list {
struct devlink_selftests_get_list *next;
- struct devlink_selftests_get_rsp obj __attribute__ ((aligned (8)));
+ struct devlink_selftests_get_rsp obj __attribute__((aligned(8)));
};
void devlink_selftests_get_list_free(struct devlink_selftests_get_list *rsp);
@@ -1989,4 +5198,58 @@ void devlink_selftests_get_list_free(struct devlink_selftests_get_list *rsp);
struct devlink_selftests_get_list *
devlink_selftests_get_dump(struct ynl_sock *ys);
+/* ============== DEVLINK_CMD_SELFTESTS_RUN ============== */
+/* DEVLINK_CMD_SELFTESTS_RUN - do */
+struct devlink_selftests_run_req {
+ struct {
+ __u32 bus_name_len;
+ __u32 dev_name_len;
+ __u32 selftests:1;
+ } _present;
+
+ char *bus_name;
+ char *dev_name;
+ struct devlink_dl_selftest_id selftests;
+};
+
+static inline struct devlink_selftests_run_req *
+devlink_selftests_run_req_alloc(void)
+{
+ return calloc(1, sizeof(struct devlink_selftests_run_req));
+}
+void devlink_selftests_run_req_free(struct devlink_selftests_run_req *req);
+
+static inline void
+devlink_selftests_run_req_set_bus_name(struct devlink_selftests_run_req *req,
+ const char *bus_name)
+{
+ free(req->bus_name);
+ req->_present.bus_name_len = strlen(bus_name);
+ req->bus_name = malloc(req->_present.bus_name_len + 1);
+ memcpy(req->bus_name, bus_name, req->_present.bus_name_len);
+ req->bus_name[req->_present.bus_name_len] = 0;
+}
+static inline void
+devlink_selftests_run_req_set_dev_name(struct devlink_selftests_run_req *req,
+ const char *dev_name)
+{
+ free(req->dev_name);
+ req->_present.dev_name_len = strlen(dev_name);
+ req->dev_name = malloc(req->_present.dev_name_len + 1);
+ memcpy(req->dev_name, dev_name, req->_present.dev_name_len);
+ req->dev_name[req->_present.dev_name_len] = 0;
+}
+static inline void
+devlink_selftests_run_req_set_selftests_flash(struct devlink_selftests_run_req *req)
+{
+ req->_present.selftests = 1;
+ req->selftests._present.flash = 1;
+}
+
+/*
+ * Run device selftest instances.
+ */
+int devlink_selftests_run(struct ynl_sock *ys,
+ struct devlink_selftests_run_req *req);
+
#endif /* _LINUX_DEVLINK_GEN_H */
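Editor's note: the DEVLINK_CMD_LINECARD_SET and DEVLINK_CMD_SELFTESTS_RUN additions above follow the usual YNL user-space pattern: allocate a request, populate it with the generated setters, issue the "do" call, then free the request. A minimal sketch of the linecard case, assuming an already-connected struct ynl_sock *ys for the devlink family; the "pci"/"0000:01:00.0" handle, index 0 and "16x100G" type are illustrative values only. devlink_selftests_run() is driven the same way, with devlink_selftests_run_req_set_selftests_flash() selecting the flash self-test.

  struct devlink_linecard_set_req *req;
  int ret;

  req = devlink_linecard_set_req_alloc();
  devlink_linecard_set_req_set_bus_name(req, "pci");
  devlink_linecard_set_req_set_dev_name(req, "0000:01:00.0");
  devlink_linecard_set_req_set_linecard_index(req, 0);
  devlink_linecard_set_req_set_linecard_type(req, "16x100G");
  ret = devlink_linecard_set(ys, req);    /* 0 on success, negative on failure */
  devlink_linecard_set_req_free(req);
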
diff --git a/tools/net/ynl/generated/ethtool-user.h b/tools/net/ynl/generated/ethtool-user.h
index ddc1a5209992..ca0ec5fd7798 100644
--- a/tools/net/ynl/generated/ethtool-user.h
+++ b/tools/net/ynl/generated/ethtool-user.h
@@ -347,7 +347,7 @@ ethtool_strset_get_req_dump_set_counts_only(struct ethtool_strset_get_req_dump *
struct ethtool_strset_get_list {
struct ethtool_strset_get_list *next;
- struct ethtool_strset_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_strset_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_strset_get_list_free(struct ethtool_strset_get_list *rsp);
@@ -472,7 +472,7 @@ ethtool_linkinfo_get_req_dump_set_header_flags(struct ethtool_linkinfo_get_req_d
struct ethtool_linkinfo_get_list {
struct ethtool_linkinfo_get_list *next;
- struct ethtool_linkinfo_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_linkinfo_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_linkinfo_get_list_free(struct ethtool_linkinfo_get_list *rsp);
@@ -487,7 +487,7 @@ struct ethtool_linkinfo_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_linkinfo_get_ntf *ntf);
- struct ethtool_linkinfo_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_linkinfo_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_linkinfo_get_ntf_free(struct ethtool_linkinfo_get_ntf *rsp);
@@ -712,7 +712,7 @@ ethtool_linkmodes_get_req_dump_set_header_flags(struct ethtool_linkmodes_get_req
struct ethtool_linkmodes_get_list {
struct ethtool_linkmodes_get_list *next;
- struct ethtool_linkmodes_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_linkmodes_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_linkmodes_get_list_free(struct ethtool_linkmodes_get_list *rsp);
@@ -727,7 +727,7 @@ struct ethtool_linkmodes_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_linkmodes_get_ntf *ntf);
- struct ethtool_linkmodes_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_linkmodes_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_linkmodes_get_ntf_free(struct ethtool_linkmodes_get_ntf *rsp);
@@ -1014,7 +1014,7 @@ ethtool_linkstate_get_req_dump_set_header_flags(struct ethtool_linkstate_get_req
struct ethtool_linkstate_get_list {
struct ethtool_linkstate_get_list *next;
- struct ethtool_linkstate_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_linkstate_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_linkstate_get_list_free(struct ethtool_linkstate_get_list *rsp);
@@ -1129,7 +1129,7 @@ ethtool_debug_get_req_dump_set_header_flags(struct ethtool_debug_get_req_dump *r
struct ethtool_debug_get_list {
struct ethtool_debug_get_list *next;
- struct ethtool_debug_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_debug_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_debug_get_list_free(struct ethtool_debug_get_list *rsp);
@@ -1144,7 +1144,7 @@ struct ethtool_debug_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_debug_get_ntf *ntf);
- struct ethtool_debug_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_debug_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_debug_get_ntf_free(struct ethtool_debug_get_ntf *rsp);
@@ -1330,7 +1330,7 @@ ethtool_wol_get_req_dump_set_header_flags(struct ethtool_wol_get_req_dump *req,
struct ethtool_wol_get_list {
struct ethtool_wol_get_list *next;
- struct ethtool_wol_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_wol_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_wol_get_list_free(struct ethtool_wol_get_list *rsp);
@@ -1344,7 +1344,7 @@ struct ethtool_wol_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_wol_get_ntf *ntf);
- struct ethtool_wol_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_wol_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_wol_get_ntf_free(struct ethtool_wol_get_ntf *rsp);
@@ -1546,7 +1546,7 @@ ethtool_features_get_req_dump_set_header_flags(struct ethtool_features_get_req_d
struct ethtool_features_get_list {
struct ethtool_features_get_list *next;
- struct ethtool_features_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_features_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_features_get_list_free(struct ethtool_features_get_list *rsp);
@@ -1561,7 +1561,7 @@ struct ethtool_features_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_features_get_ntf *ntf);
- struct ethtool_features_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_features_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_features_get_ntf_free(struct ethtool_features_get_ntf *rsp);
@@ -1843,7 +1843,7 @@ ethtool_privflags_get_req_dump_set_header_flags(struct ethtool_privflags_get_req
struct ethtool_privflags_get_list {
struct ethtool_privflags_get_list *next;
- struct ethtool_privflags_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_privflags_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_privflags_get_list_free(struct ethtool_privflags_get_list *rsp);
@@ -1858,7 +1858,7 @@ struct ethtool_privflags_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_privflags_get_ntf *ntf);
- struct ethtool_privflags_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_privflags_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_privflags_get_ntf_free(struct ethtool_privflags_get_ntf *rsp);
@@ -2072,7 +2072,7 @@ ethtool_rings_get_req_dump_set_header_flags(struct ethtool_rings_get_req_dump *r
struct ethtool_rings_get_list {
struct ethtool_rings_get_list *next;
- struct ethtool_rings_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_rings_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_rings_get_list_free(struct ethtool_rings_get_list *rsp);
@@ -2087,7 +2087,7 @@ struct ethtool_rings_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_rings_get_ntf *ntf);
- struct ethtool_rings_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_rings_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_rings_get_ntf_free(struct ethtool_rings_get_ntf *rsp);
@@ -2395,7 +2395,7 @@ ethtool_channels_get_req_dump_set_header_flags(struct ethtool_channels_get_req_d
struct ethtool_channels_get_list {
struct ethtool_channels_get_list *next;
- struct ethtool_channels_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_channels_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_channels_get_list_free(struct ethtool_channels_get_list *rsp);
@@ -2410,7 +2410,7 @@ struct ethtool_channels_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_channels_get_ntf *ntf);
- struct ethtool_channels_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_channels_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_channels_get_ntf_free(struct ethtool_channels_get_ntf *rsp);
@@ -2697,7 +2697,7 @@ ethtool_coalesce_get_req_dump_set_header_flags(struct ethtool_coalesce_get_req_d
struct ethtool_coalesce_get_list {
struct ethtool_coalesce_get_list *next;
- struct ethtool_coalesce_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_coalesce_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_coalesce_get_list_free(struct ethtool_coalesce_get_list *rsp);
@@ -2712,7 +2712,7 @@ struct ethtool_coalesce_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_coalesce_get_ntf *ntf);
- struct ethtool_coalesce_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_coalesce_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_coalesce_get_ntf_free(struct ethtool_coalesce_get_ntf *rsp);
@@ -3124,7 +3124,7 @@ ethtool_pause_get_req_dump_set_header_flags(struct ethtool_pause_get_req_dump *r
struct ethtool_pause_get_list {
struct ethtool_pause_get_list *next;
- struct ethtool_pause_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_pause_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_pause_get_list_free(struct ethtool_pause_get_list *rsp);
@@ -3139,7 +3139,7 @@ struct ethtool_pause_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_pause_get_ntf *ntf);
- struct ethtool_pause_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_pause_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_pause_get_ntf_free(struct ethtool_pause_get_ntf *rsp);
@@ -3360,7 +3360,7 @@ ethtool_eee_get_req_dump_set_header_flags(struct ethtool_eee_get_req_dump *req,
struct ethtool_eee_get_list {
struct ethtool_eee_get_list *next;
- struct ethtool_eee_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_eee_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_eee_get_list_free(struct ethtool_eee_get_list *rsp);
@@ -3374,7 +3374,7 @@ struct ethtool_eee_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_eee_get_ntf *ntf);
- struct ethtool_eee_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_eee_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_eee_get_ntf_free(struct ethtool_eee_get_ntf *rsp);
@@ -3623,7 +3623,7 @@ ethtool_tsinfo_get_req_dump_set_header_flags(struct ethtool_tsinfo_get_req_dump
struct ethtool_tsinfo_get_list {
struct ethtool_tsinfo_get_list *next;
- struct ethtool_tsinfo_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_tsinfo_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_tsinfo_get_list_free(struct ethtool_tsinfo_get_list *rsp);
@@ -3842,7 +3842,7 @@ ethtool_tunnel_info_get_req_dump_set_header_flags(struct ethtool_tunnel_info_get
struct ethtool_tunnel_info_get_list {
struct ethtool_tunnel_info_get_list *next;
- struct ethtool_tunnel_info_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_tunnel_info_get_rsp obj __attribute__((aligned(8)));
};
void
@@ -3964,7 +3964,7 @@ ethtool_fec_get_req_dump_set_header_flags(struct ethtool_fec_get_req_dump *req,
struct ethtool_fec_get_list {
struct ethtool_fec_get_list *next;
- struct ethtool_fec_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_fec_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_fec_get_list_free(struct ethtool_fec_get_list *rsp);
@@ -3978,7 +3978,7 @@ struct ethtool_fec_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_fec_get_ntf *ntf);
- struct ethtool_fec_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_fec_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_fec_get_ntf_free(struct ethtool_fec_get_ntf *rsp);
@@ -4221,7 +4221,7 @@ ethtool_module_eeprom_get_req_dump_set_header_flags(struct ethtool_module_eeprom
struct ethtool_module_eeprom_get_list {
struct ethtool_module_eeprom_get_list *next;
- struct ethtool_module_eeprom_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_module_eeprom_get_rsp obj __attribute__((aligned(8)));
};
void
@@ -4340,7 +4340,7 @@ ethtool_phc_vclocks_get_req_dump_set_header_flags(struct ethtool_phc_vclocks_get
struct ethtool_phc_vclocks_get_list {
struct ethtool_phc_vclocks_get_list *next;
- struct ethtool_phc_vclocks_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_phc_vclocks_get_rsp obj __attribute__((aligned(8)));
};
void
@@ -4458,7 +4458,7 @@ ethtool_module_get_req_dump_set_header_flags(struct ethtool_module_get_req_dump
struct ethtool_module_get_list {
struct ethtool_module_get_list *next;
- struct ethtool_module_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_module_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_module_get_list_free(struct ethtool_module_get_list *rsp);
@@ -4473,7 +4473,7 @@ struct ethtool_module_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_module_get_ntf *ntf);
- struct ethtool_module_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_module_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_module_get_ntf_free(struct ethtool_module_get_ntf *rsp);
@@ -4654,7 +4654,7 @@ ethtool_pse_get_req_dump_set_header_flags(struct ethtool_pse_get_req_dump *req,
struct ethtool_pse_get_list {
struct ethtool_pse_get_list *next;
- struct ethtool_pse_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_pse_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_pse_get_list_free(struct ethtool_pse_get_list *rsp);
@@ -4849,7 +4849,7 @@ ethtool_rss_get_req_dump_set_header_flags(struct ethtool_rss_get_req_dump *req,
struct ethtool_rss_get_list {
struct ethtool_rss_get_list *next;
- struct ethtool_rss_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_rss_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_rss_get_list_free(struct ethtool_rss_get_list *rsp);
@@ -4979,7 +4979,7 @@ ethtool_plca_get_cfg_req_dump_set_header_flags(struct ethtool_plca_get_cfg_req_d
struct ethtool_plca_get_cfg_list {
struct ethtool_plca_get_cfg_list *next;
- struct ethtool_plca_get_cfg_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_plca_get_cfg_rsp obj __attribute__((aligned(8)));
};
void ethtool_plca_get_cfg_list_free(struct ethtool_plca_get_cfg_list *rsp);
@@ -4994,7 +4994,7 @@ struct ethtool_plca_get_cfg_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_plca_get_cfg_ntf *ntf);
- struct ethtool_plca_get_cfg_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_plca_get_cfg_rsp obj __attribute__((aligned(8)));
};
void ethtool_plca_get_cfg_ntf_free(struct ethtool_plca_get_cfg_ntf *rsp);
@@ -5244,7 +5244,7 @@ ethtool_plca_get_status_req_dump_set_header_flags(struct ethtool_plca_get_status
struct ethtool_plca_get_status_list {
struct ethtool_plca_get_status_list *next;
- struct ethtool_plca_get_status_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_plca_get_status_rsp obj __attribute__((aligned(8)));
};
void
@@ -5376,7 +5376,7 @@ ethtool_mm_get_req_dump_set_header_flags(struct ethtool_mm_get_req_dump *req,
struct ethtool_mm_get_list {
struct ethtool_mm_get_list *next;
- struct ethtool_mm_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_mm_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_mm_get_list_free(struct ethtool_mm_get_list *rsp);
@@ -5390,7 +5390,7 @@ struct ethtool_mm_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_mm_get_ntf *ntf);
- struct ethtool_mm_get_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_mm_get_rsp obj __attribute__((aligned(8)));
};
void ethtool_mm_get_ntf_free(struct ethtool_mm_get_ntf *rsp);
@@ -5504,7 +5504,7 @@ struct ethtool_cable_test_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_cable_test_ntf *ntf);
- struct ethtool_cable_test_ntf_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_cable_test_ntf_rsp obj __attribute__((aligned(8)));
};
void ethtool_cable_test_ntf_free(struct ethtool_cable_test_ntf *rsp);
@@ -5527,7 +5527,7 @@ struct ethtool_cable_test_tdr_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ethtool_cable_test_tdr_ntf *ntf);
- struct ethtool_cable_test_tdr_ntf_rsp obj __attribute__ ((aligned (8)));
+ struct ethtool_cable_test_tdr_ntf_rsp obj __attribute__((aligned(8)));
};
void ethtool_cable_test_tdr_ntf_free(struct ethtool_cable_test_tdr_ntf *rsp);
diff --git a/tools/net/ynl/generated/fou-user.h b/tools/net/ynl/generated/fou-user.h
index a8f860892540..fd566716ddd6 100644
--- a/tools/net/ynl/generated/fou-user.h
+++ b/tools/net/ynl/generated/fou-user.h
@@ -333,7 +333,7 @@ struct fou_get_rsp *fou_get(struct ynl_sock *ys, struct fou_get_req *req);
/* FOU_CMD_GET - dump */
struct fou_get_list {
struct fou_get_list *next;
- struct fou_get_rsp obj __attribute__ ((aligned (8)));
+ struct fou_get_rsp obj __attribute__((aligned(8)));
};
void fou_get_list_free(struct fou_get_list *rsp);
diff --git a/tools/net/ynl/generated/handshake-user.h b/tools/net/ynl/generated/handshake-user.h
index 47646bb91cea..bce537d8b8cc 100644
--- a/tools/net/ynl/generated/handshake-user.h
+++ b/tools/net/ynl/generated/handshake-user.h
@@ -28,8 +28,8 @@ struct handshake_x509 {
__u32 privkey:1;
} _present;
- __u32 cert;
- __u32 privkey;
+ __s32 cert;
+ __s32 privkey;
};
/* ============== HANDSHAKE_CMD_ACCEPT ============== */
@@ -65,7 +65,7 @@ struct handshake_accept_rsp {
__u32 peername_len;
} _present;
- __u32 sockfd;
+ __s32 sockfd;
enum handshake_msg_type message_type;
__u32 timeout;
enum handshake_auth auth_mode;
@@ -90,7 +90,7 @@ struct handshake_accept_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct handshake_accept_ntf *ntf);
- struct handshake_accept_rsp obj __attribute__ ((aligned (8)));
+ struct handshake_accept_rsp obj __attribute__((aligned(8)));
};
void handshake_accept_ntf_free(struct handshake_accept_ntf *rsp);
@@ -104,7 +104,7 @@ struct handshake_done_req {
} _present;
__u32 status;
- __u32 sockfd;
+ __s32 sockfd;
unsigned int n_remote_auth;
__u32 *remote_auth;
};
@@ -122,7 +122,7 @@ handshake_done_req_set_status(struct handshake_done_req *req, __u32 status)
req->status = status;
}
static inline void
-handshake_done_req_set_sockfd(struct handshake_done_req *req, __u32 sockfd)
+handshake_done_req_set_sockfd(struct handshake_done_req *req, __s32 sockfd)
{
req->_present.sockfd = 1;
req->sockfd = sockfd;
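Editor's note: with the descriptor fields switched to __s32, the generated setters now take the same signed type that socket(2) and accept(2) return, so callers no longer funnel a file descriptor through an unsigned field. A minimal, illustrative sketch of reporting a completed handshake; it assumes the matching handshake_done_req_alloc()/handshake_done() helpers generated elsewhere in this header, a connected struct ynl_sock *ys, and an int sockfd obtained from accept():

  struct handshake_done_req *req;

  req = handshake_done_req_alloc();
  handshake_done_req_set_status(req, 0);      /* status 0 taken to mean success here */
  handshake_done_req_set_sockfd(req, sockfd); /* plain int fd, no cast needed */
  handshake_done(ys, req);
  handshake_done_req_free(req);
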
diff --git a/tools/net/ynl/generated/netdev-user.c b/tools/net/ynl/generated/netdev-user.c
index 68b408ca0f7f..b5ffe8cd1144 100644
--- a/tools/net/ynl/generated/netdev-user.c
+++ b/tools/net/ynl/generated/netdev-user.c
@@ -45,12 +45,26 @@ const char *netdev_xdp_act_str(enum netdev_xdp_act value)
return netdev_xdp_act_strmap[value];
}
+static const char * const netdev_xdp_rx_metadata_strmap[] = {
+ [0] = "timestamp",
+ [1] = "hash",
+};
+
+const char *netdev_xdp_rx_metadata_str(enum netdev_xdp_rx_metadata value)
+{
+ value = ffs(value) - 1;
+ if (value < 0 || value >= (int)MNL_ARRAY_SIZE(netdev_xdp_rx_metadata_strmap))
+ return NULL;
+ return netdev_xdp_rx_metadata_strmap[value];
+}
+
/* Policies */
struct ynl_policy_attr netdev_dev_policy[NETDEV_A_DEV_MAX + 1] = {
[NETDEV_A_DEV_IFINDEX] = { .name = "ifindex", .type = YNL_PT_U32, },
[NETDEV_A_DEV_PAD] = { .name = "pad", .type = YNL_PT_IGNORE, },
[NETDEV_A_DEV_XDP_FEATURES] = { .name = "xdp-features", .type = YNL_PT_U64, },
[NETDEV_A_DEV_XDP_ZC_MAX_SEGS] = { .name = "xdp-zc-max-segs", .type = YNL_PT_U32, },
+ [NETDEV_A_DEV_XDP_RX_METADATA_FEATURES] = { .name = "xdp-rx-metadata-features", .type = YNL_PT_U64, },
};
struct ynl_policy_nest netdev_dev_nest = {
@@ -97,6 +111,11 @@ int netdev_dev_get_rsp_parse(const struct nlmsghdr *nlh, void *data)
return MNL_CB_ERROR;
dst->_present.xdp_zc_max_segs = 1;
dst->xdp_zc_max_segs = mnl_attr_get_u32(attr);
+ } else if (type == NETDEV_A_DEV_XDP_RX_METADATA_FEATURES) {
+ if (ynl_attr_validate(yarg, attr))
+ return MNL_CB_ERROR;
+ dst->_present.xdp_rx_metadata_features = 1;
+ dst->xdp_rx_metadata_features = mnl_attr_get_u64(attr);
}
}
diff --git a/tools/net/ynl/generated/netdev-user.h b/tools/net/ynl/generated/netdev-user.h
index 0952d3261f4d..4fafac879df3 100644
--- a/tools/net/ynl/generated/netdev-user.h
+++ b/tools/net/ynl/generated/netdev-user.h
@@ -18,6 +18,7 @@ extern const struct ynl_family ynl_netdev_family;
/* Enums */
const char *netdev_op_str(int op);
const char *netdev_xdp_act_str(enum netdev_xdp_act value);
+const char *netdev_xdp_rx_metadata_str(enum netdev_xdp_rx_metadata value);
/* Common nested types */
/* ============== NETDEV_CMD_DEV_GET ============== */
@@ -48,11 +49,13 @@ struct netdev_dev_get_rsp {
__u32 ifindex:1;
__u32 xdp_features:1;
__u32 xdp_zc_max_segs:1;
+ __u32 xdp_rx_metadata_features:1;
} _present;
__u32 ifindex;
__u64 xdp_features;
__u32 xdp_zc_max_segs;
+ __u64 xdp_rx_metadata_features;
};
void netdev_dev_get_rsp_free(struct netdev_dev_get_rsp *rsp);
@@ -66,7 +69,7 @@ netdev_dev_get(struct ynl_sock *ys, struct netdev_dev_get_req *req);
/* NETDEV_CMD_DEV_GET - dump */
struct netdev_dev_get_list {
struct netdev_dev_get_list *next;
- struct netdev_dev_get_rsp obj __attribute__ ((aligned (8)));
+ struct netdev_dev_get_rsp obj __attribute__((aligned(8)));
};
void netdev_dev_get_list_free(struct netdev_dev_get_list *rsp);
@@ -79,7 +82,7 @@ struct netdev_dev_get_ntf {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct netdev_dev_get_ntf *ntf);
- struct netdev_dev_get_rsp obj __attribute__ ((aligned (8)));
+ struct netdev_dev_get_rsp obj __attribute__((aligned(8)));
};
void netdev_dev_get_ntf_free(struct netdev_dev_get_ntf *rsp);
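Editor's note: xdp_rx_metadata_features mirrors the existing xdp_features handling, a 64-bit flag mask with a generated *_str() helper for the individual bits. A minimal sketch of reading it for one device, assuming a connected netdev-family struct ynl_sock *ys, <stdio.h>, and the generated request alloc/set/free helpers earlier in this header; ifindex 1 is used purely for illustration:

  struct netdev_dev_get_req *req;
  struct netdev_dev_get_rsp *d;

  req = netdev_dev_get_req_alloc();
  netdev_dev_get_req_set_ifindex(req, 1);
  d = netdev_dev_get(ys, req);
  netdev_dev_get_req_free(req);
  if (d && d->_present.xdp_rx_metadata_features) {
          for (int i = 0; i < 32; i++)    /* only the low bits are defined so far */
                  if (d->xdp_rx_metadata_features & (1U << i))
                          printf(" %s", netdev_xdp_rx_metadata_str(1U << i));
  }
  if (d)
          netdev_dev_get_rsp_free(d);
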
diff --git a/tools/net/ynl/lib/nlspec.py b/tools/net/ynl/lib/nlspec.py
index 37bcb4d8b37b..92889298b197 100644
--- a/tools/net/ynl/lib/nlspec.py
+++ b/tools/net/ynl/lib/nlspec.py
@@ -149,6 +149,7 @@ class SpecAttr(SpecElement):
Represents a single attribute type within an attr space.
Attributes:
+ type string, attribute type
value numerical ID when serialized
attr_set Attribute Set containing this attr
is_multi bool, attr may repeat multiple times
@@ -157,10 +158,13 @@ class SpecAttr(SpecElement):
len integer, optional byte length of binary types
display_hint string, hint to help choose format specifier
when displaying the value
+
+ is_auto_scalar bool, attr is a variable-size scalar
"""
def __init__(self, family, attr_set, yaml, value):
super().__init__(family, yaml)
+ self.type = yaml['type']
self.value = value
self.attr_set = attr_set
self.is_multi = yaml.get('multi-attr', False)
@@ -170,6 +174,8 @@ class SpecAttr(SpecElement):
self.len = yaml.get('len')
self.display_hint = yaml.get('display-hint')
+ self.is_auto_scalar = self.type == "sint" or self.type == "uint"
+
class SpecAttrSet(SpecElement):
""" Netlink Attribute Set class.
diff --git a/tools/net/ynl/lib/ynl.c b/tools/net/ynl/lib/ynl.c
index 514e0d69e731..830d25097009 100644
--- a/tools/net/ynl/lib/ynl.c
+++ b/tools/net/ynl/lib/ynl.c
@@ -352,6 +352,12 @@ int ynl_attr_validate(struct ynl_parse_arg *yarg, const struct nlattr *attr)
yerr(yarg->ys, YNL_ERROR_ATTR_INVALID,
"Invalid attribute (u64 %s)", policy->name);
return -1;
+ case YNL_PT_UINT:
+ if (len == sizeof(__u32) || len == sizeof(__u64))
+ break;
+ yerr(yarg->ys, YNL_ERROR_ATTR_INVALID,
+ "Invalid attribute (uint %s)", policy->name);
+ return -1;
case YNL_PT_FLAG:
/* Let flags grow into real attrs, why not.. */
break;
@@ -373,6 +379,12 @@ int ynl_attr_validate(struct ynl_parse_arg *yarg, const struct nlattr *attr)
yerr(yarg->ys, YNL_ERROR_ATTR_INVALID,
"Invalid attribute (string %s)", policy->name);
return -1;
+ case YNL_PT_BITFIELD32:
+ if (len == sizeof(struct nla_bitfield32))
+ break;
+ yerr(yarg->ys, YNL_ERROR_ATTR_INVALID,
+ "Invalid attribute (bitfield32 %s)", policy->name);
+ return -1;
default:
yerr(yarg->ys, YNL_ERROR_ATTR_INVALID,
"Invalid attribute (unknown %s)", policy->name);
diff --git a/tools/net/ynl/lib/ynl.h b/tools/net/ynl/lib/ynl.h
index 9eafa3552c16..e974378e3b8c 100644
--- a/tools/net/ynl/lib/ynl.h
+++ b/tools/net/ynl/lib/ynl.h
@@ -133,7 +133,9 @@ enum ynl_policy_type {
YNL_PT_U16,
YNL_PT_U32,
YNL_PT_U64,
+ YNL_PT_UINT,
YNL_PT_NUL_STR,
+ YNL_PT_BITFIELD32,
};
struct ynl_policy_attr {
@@ -156,7 +158,7 @@ struct ynl_parse_arg {
struct ynl_dump_list_type {
struct ynl_dump_list_type *next;
- unsigned char data[] __attribute__ ((aligned (8)));
+ unsigned char data[] __attribute__((aligned(8)));
};
extern struct ynl_dump_list_type *YNL_LIST_END;
@@ -186,7 +188,7 @@ struct ynl_ntf_base_type {
__u8 cmd;
struct ynl_ntf_base_type *next;
void (*free)(struct ynl_ntf_base_type *ntf);
- unsigned char data[] __attribute__ ((aligned (8)));
+ unsigned char data[] __attribute__((aligned(8)));
};
extern mnl_cb_t ynl_cb_array[NLMSG_MIN_TYPE];
@@ -234,4 +236,20 @@ int ynl_exec_dump(struct ynl_sock *ys, struct nlmsghdr *req_nlh,
void ynl_error_unknown_notification(struct ynl_sock *ys, __u8 cmd);
int ynl_error_parse(struct ynl_parse_arg *yarg, const char *msg);
+#ifndef MNL_HAS_AUTO_SCALARS
+static inline uint64_t mnl_attr_get_uint(const struct nlattr *attr)
+{
+ if (mnl_attr_get_len(attr) == 4)
+ return mnl_attr_get_u32(attr);
+ return mnl_attr_get_u64(attr);
+}
+
+static inline void
+mnl_attr_put_uint(struct nlmsghdr *nlh, uint16_t type, uint64_t data)
+{
+ if ((uint32_t)data == (uint64_t)data)
+ return mnl_attr_put_u32(nlh, type, data);
+ return mnl_attr_put_u64(nlh, type, data);
+}
+#endif
#endif
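Editor's note: the fallback helpers above implement the variable-size ("uint"/"sint") scalar convention: a value that fits in 32 bits travels as a 4-byte attribute, anything larger as 8 bytes, and the reader accepts either width (matching the YNL_PT_UINT validation added in ynl.c). A two-line illustration; HYPOTHETICAL_A_VALUE stands in for a real attribute id:

  mnl_attr_put_uint(nlh, HYPOTHETICAL_A_VALUE, 7);           /* goes out as a 4-byte u32 */
  mnl_attr_put_uint(nlh, HYPOTHETICAL_A_VALUE, 1ULL << 40);  /* goes out as an 8-byte u64 */
  /* on receive, either width decodes the same way: */
  __u64 val = mnl_attr_get_uint(attr);
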
diff --git a/tools/net/ynl/lib/ynl.py b/tools/net/ynl/lib/ynl.py
index 13c4b019a881..92995bca14e1 100644
--- a/tools/net/ynl/lib/ynl.py
+++ b/tools/net/ynl/lib/ynl.py
@@ -100,6 +100,7 @@ class NlAttr:
def __init__(self, raw, offset):
self._len, self._type = struct.unpack("HH", raw[offset:offset + 4])
self.type = self._type & ~Netlink.NLA_TYPE_MASK
+ self.is_nest = self._type & Netlink.NLA_F_NESTED
self.payload_len = self._len
self.full_len = (self.payload_len + 3) & ~3
self.raw = raw[offset + 4:offset + self.payload_len]
@@ -130,6 +131,13 @@ class NlAttr:
format = self.get_format(attr_type, byte_order)
return format.unpack(self.raw)[0]
+ def as_auto_scalar(self, attr_type, byte_order=None):
+ if len(self.raw) != 4 and len(self.raw) != 8:
+ raise Exception(f"Auto-scalar len payload be 4 or 8 bytes, got {len(self.raw)}")
+ real_type = attr_type[0] + str(len(self.raw) * 8)
+ format = self.get_format(real_type, byte_order)
+ return format.unpack(self.raw)[0]
+
def as_strz(self):
return self.raw.decode('ascii')[:-1]
@@ -404,10 +412,11 @@ class GenlProtocol(NetlinkProtocol):
class YnlFamily(SpecFamily):
- def __init__(self, def_path, schema=None):
+ def __init__(self, def_path, schema=None, process_unknown=False):
super().__init__(def_path, schema)
self.include_raw = False
+ self.process_unknown = process_unknown
try:
if self.proto == "netlink-raw":
@@ -463,9 +472,16 @@ class YnlFamily(SpecFamily):
attr_payload = bytes.fromhex(value)
else:
raise Exception(f'Unknown type for binary attribute, value: {value}')
+ elif attr.is_auto_scalar:
+ scalar = int(value)
+ real_type = attr["type"][0] + ('32' if scalar.bit_length() <= 32 else '64')
+ format = NlAttr.get_format(real_type, attr.byte_order)
+ attr_payload = format.pack(int(value))
elif attr['type'] in NlAttr.type_formats:
format = NlAttr.get_format(attr['type'], attr.byte_order)
attr_payload = format.pack(int(value))
+ elif attr['type'] in "bitfield32":
+ attr_payload = struct.pack("II", int(value["value"]), int(value["selector"]))
else:
raise Exception(f'Unknown type at {space} {name} {value} {attr["type"]}')
@@ -474,7 +490,7 @@ class YnlFamily(SpecFamily):
def _decode_enum(self, raw, attr_spec):
enum = self.consts[attr_spec['enum']]
- if 'enum-as-flags' in attr_spec and attr_spec['enum-as-flags']:
+ if enum.type == 'flags' or attr_spec.get('enum-as-flags', False):
i = 0
value = set()
while raw:
@@ -512,14 +528,41 @@ class YnlFamily(SpecFamily):
decoded.append({ item.type: subattrs })
return decoded
+ def _decode_unknown(self, attr):
+ if attr.is_nest:
+ return self._decode(NlAttrs(attr.raw), None)
+ else:
+ return attr.as_bin()
+
+ def _rsp_add(self, rsp, name, is_multi, decoded):
+ if is_multi == None:
+ if name in rsp and type(rsp[name]) is not list:
+ rsp[name] = [rsp[name]]
+ is_multi = True
+ else:
+ is_multi = False
+
+ if not is_multi:
+ rsp[name] = decoded
+ elif name in rsp:
+ rsp[name].append(decoded)
+ else:
+ rsp[name] = [decoded]
+
def _decode(self, attrs, space):
- attr_space = self.attr_sets[space]
+ if space:
+ attr_space = self.attr_sets[space]
rsp = dict()
for attr in attrs:
try:
attr_spec = attr_space.attrs_by_val[attr.type]
- except KeyError:
- raise Exception(f"Space '{space}' has no attribute with value '{attr.type}'")
+ except (KeyError, UnboundLocalError):
+ if not self.process_unknown:
+ raise Exception(f"Space '{space}' has no attribute with value '{attr.type}'")
+ attr_name = f"UnknownAttr({attr.type})"
+ self._rsp_add(rsp, attr_name, None, self._decode_unknown(attr))
+ continue
+
if attr_spec["type"] == 'nest':
subdict = self._decode(NlAttrs(attr.raw), attr_spec['nested-attributes'])
decoded = subdict
@@ -529,22 +572,26 @@ class YnlFamily(SpecFamily):
decoded = self._decode_binary(attr, attr_spec)
elif attr_spec["type"] == 'flag':
decoded = True
+ elif attr_spec.is_auto_scalar:
+ decoded = attr.as_auto_scalar(attr_spec['type'], attr_spec.byte_order)
elif attr_spec["type"] in NlAttr.type_formats:
decoded = attr.as_scalar(attr_spec['type'], attr_spec.byte_order)
+ if 'enum' in attr_spec:
+ decoded = self._decode_enum(decoded, attr_spec)
elif attr_spec["type"] == 'array-nest':
decoded = self._decode_array_nest(attr, attr_spec)
+ elif attr_spec["type"] == 'bitfield32':
+ value, selector = struct.unpack("II", attr.raw)
+ if 'enum' in attr_spec:
+ value = self._decode_enum(value, attr_spec)
+ selector = self._decode_enum(selector, attr_spec)
+ decoded = {"value": value, "selector": selector}
else:
- raise Exception(f'Unknown {attr_spec["type"]} with name {attr_spec["name"]}')
-
- if 'enum' in attr_spec:
- decoded = self._decode_enum(decoded, attr_spec)
+ if not self.process_unknown:
+ raise Exception(f'Unknown {attr_spec["type"]} with name {attr_spec["name"]}')
+ decoded = self._decode_unknown(attr)
- if not attr_spec.is_multi:
- rsp[attr_spec['name']] = decoded
- elif attr_spec.name in rsp:
- rsp[attr_spec.name].append(decoded)
- else:
- rsp[attr_spec.name] = [decoded]
+ self._rsp_add(rsp, attr_spec["name"], attr_spec.is_multi, decoded)
return rsp
diff --git a/tools/net/ynl/samples/Makefile b/tools/net/ynl/samples/Makefile
index f2db8bb78309..3dbb106e87d9 100644
--- a/tools/net/ynl/samples/Makefile
+++ b/tools/net/ynl/samples/Makefile
@@ -19,6 +19,9 @@ include $(wildcard *.d)
all: $(BINS)
$(BINS): ../lib/ynl.a ../generated/protos.a
+ @echo -e '\tCC sample $@'
+ @$(COMPILE.c) $(CFLAGS_$@) $@.c -o $@.o
+ @$(LINK.c) $@.o -o $@ $(LDLIBS)
clean:
rm -f *.o *.d *~
diff --git a/tools/net/ynl/samples/netdev.c b/tools/net/ynl/samples/netdev.c
index 06433400dddd..b828225daad0 100644
--- a/tools/net/ynl/samples/netdev.c
+++ b/tools/net/ynl/samples/netdev.c
@@ -32,12 +32,18 @@ static void netdev_print_device(struct netdev_dev_get_rsp *d, unsigned int op)
if (!d->_present.xdp_features)
return;
- printf("%llx:", d->xdp_features);
+ printf("xdp-features (%llx):", d->xdp_features);
for (int i = 0; d->xdp_features > 1U << i; i++) {
if (d->xdp_features & (1U << i))
printf(" %s", netdev_xdp_act_str(1 << i));
}
+ printf(" xdp-rx-metadata-features (%llx):", d->xdp_rx_metadata_features);
+ for (int i = 0; d->xdp_rx_metadata_features > 1U << i; i++) {
+ if (d->xdp_rx_metadata_features & (1U << i))
+ printf(" %s", netdev_xdp_rx_metadata_str(1 << i));
+ }
+
printf(" xdp-zc-max-segs=%u", d->xdp_zc_max_segs);
name = netdev_op_str(op);
diff --git a/tools/net/ynl/ynl-gen-c.py b/tools/net/ynl/ynl-gen-c.py
index 897af958cee8..13427436bfb7 100755
--- a/tools/net/ynl/ynl-gen-c.py
+++ b/tools/net/ynl/ynl-gen-c.py
@@ -20,6 +20,21 @@ def c_lower(name):
return name.lower().replace('-', '_')
+def limit_to_number(name):
+ """
+ Turn a string limit like u32-max or s64-min into its numerical value
+ """
+ if name[0] == 'u' and name.endswith('-min'):
+ return 0
+ width = int(name[1:-4])
+ if name[0] == 's':
+ width -= 1
+ value = (1 << width) - 1
+ if name[0] == 's' and name.endswith('-min'):
+ value = -value - 1
+ return value
+
+
class BaseNlLib:
def get_family_id(self):
return 'ys->family_id'
@@ -42,8 +57,12 @@ class Type(SpecAttr):
self.type = attr['type']
self.checks = attr.get('checks', {})
+ self.request = False
+ self.reply = False
+
if 'len' in attr:
self.len = attr['len']
+
if 'nested-attributes' in attr:
self.nested_attrs = attr['nested-attributes']
if self.nested_attrs == family.name:
@@ -64,6 +83,14 @@ class Type(SpecAttr):
self.enum_name = None
delattr(self, "enum_name")
+ def get_limit(self, limit, default=None):
+ value = self.checks.get(limit, default)
+ if value is None:
+ return value
+ if not isinstance(value, int):
+ value = limit_to_number(value)
+ return value
+
def resolve(self):
if 'name-prefix' in self.attr:
enum_name = f"{self.attr['name-prefix']}{self.name}"
@@ -132,6 +159,15 @@ class Type(SpecAttr):
spec = self._attr_policy(policy)
cw.p(f"\t[{self.enum_name}] = {spec},")
+ def _mnl_type(self):
+ # mnl does not have helpers for signed integer types
+ # turn signed type into unsigned
+ # this only makes sense for scalar types
+ t = self.type
+ if t[0] == 's':
+ t = 'u' + t[1:]
+ return t
+
def _attr_typol(self):
raise Exception(f"Type policy not implemented for class type {self.type}")
@@ -259,6 +295,27 @@ class TypeScalar(Type):
if 'byte-order' in attr:
self.byte_order_comment = f" /* {attr['byte-order']} */"
+ if 'enum' in self.attr:
+ enum = self.family.consts[self.attr['enum']]
+ low, high = enum.value_range()
+ if 'min' not in self.checks:
+ if low != 0 or self.type[0] == 's':
+ self.checks['min'] = low
+ if 'max' not in self.checks:
+ self.checks['max'] = high
+
+ if 'min' in self.checks and 'max' in self.checks:
+ if self.get_limit('min') > self.get_limit('max'):
+ raise Exception(f'Invalid limit for "{self.name}" min: {self.get_limit("min")} max: {self.get_limit("max")}')
+ self.checks['range'] = True
+
+ low = min(self.get_limit('min', 0), self.get_limit('max', 0))
+ high = max(self.get_limit('min', 0), self.get_limit('max', 0))
+ if low < 0 and self.type[0] == 'u':
+ raise Exception(f'Invalid limit for "{self.name}" negative limit for unsigned type')
+ if low < -32768 or high > 32767:
+ self.checks['full-range'] = True
+
# Added by resolve():
self.is_bitfield = None
delattr(self, "is_bitfield")
@@ -278,15 +335,13 @@ class TypeScalar(Type):
maybe_enum = not self.is_bitfield and 'enum' in self.attr
if maybe_enum and self.family.consts[self.attr['enum']].enum_name:
self.type_name = f"enum {self.family.name}_{c_lower(self.attr['enum'])}"
+ elif self.is_auto_scalar:
+ self.type_name = '__' + self.type[0] + '64'
else:
self.type_name = '__' + self.type
- def _mnl_type(self):
- t = self.type
- # mnl does not have a helper for signed types
- if t[0] == 's':
- t = 'u' + t[1:]
- return t
+ def mnl_type(self):
+ return self._mnl_type()
def _attr_policy(self, policy):
if 'flags-mask' in self.checks or self.is_bitfield:
@@ -298,27 +353,27 @@ class TypeScalar(Type):
flag_cnt = len(flags['entries'])
mask = (1 << flag_cnt) - 1
return f"NLA_POLICY_MASK({policy}, 0x{mask:x})"
+ elif 'full-range' in self.checks:
+ return f"NLA_POLICY_FULL_RANGE({policy}, &{c_lower(self.enum_name)}_range)"
+ elif 'range' in self.checks:
+ return f"NLA_POLICY_RANGE({policy}, {self.get_limit('min')}, {self.get_limit('max')})"
elif 'min' in self.checks:
- return f"NLA_POLICY_MIN({policy}, {self.checks['min']})"
- elif 'enum' in self.attr:
- enum = self.family.consts[self.attr['enum']]
- low, high = enum.value_range()
- if low == 0:
- return f"NLA_POLICY_MAX({policy}, {high})"
- return f"NLA_POLICY_RANGE({policy}, {low}, {high})"
+ return f"NLA_POLICY_MIN({policy}, {self.get_limit('min')})"
+ elif 'max' in self.checks:
+ return f"NLA_POLICY_MAX({policy}, {self.get_limit('max')})"
return super()._attr_policy(policy)
def _attr_typol(self):
- return f'.type = YNL_PT_U{self.type[1:]}, '
+ return f'.type = YNL_PT_U{c_upper(self.type[1:])}, '
def arg_member(self, ri):
return [f'{self.type_name} {self.c_name}{self.byte_order_comment}']
def attr_put(self, ri, var):
- self._attr_put_simple(ri, var, self._mnl_type())
+ self._attr_put_simple(ri, var, self.mnl_type())
def _attr_get(self, ri, var):
- return f"{var}->{self.c_name} = mnl_attr_get_{self._mnl_type()}(attr);", None, None
+ return f"{var}->{self.c_name} = mnl_attr_get_{self.mnl_type()}(attr);", None, None
def _setter_lines(self, ri, member, presence):
return [f"{member} = {self.c_name};"]
@@ -355,10 +410,13 @@ class TypeString(Type):
return f'.type = YNL_PT_NUL_STR, '
def _attr_policy(self, policy):
- mem = '{ .type = ' + policy
- if 'max-len' in self.checks:
- mem += ', .len = ' + str(self.checks['max-len'])
- mem += ', }'
+ if 'exact-len' in self.checks:
+ mem = 'NLA_POLICY_EXACT_LEN(' + str(self.checks['exact-len']) + ')'
+ else:
+ mem = '{ .type = ' + policy
+ if 'max-len' in self.checks:
+ mem += ', .len = ' + str(self.get_limit('max-len'))
+ mem += ', }'
return mem
def attr_policy(self, cw):
@@ -404,14 +462,17 @@ class TypeBinary(Type):
return f'.type = YNL_PT_BINARY,'
def _attr_policy(self, policy):
- mem = '{ '
- if len(self.checks) == 1 and 'min-len' in self.checks:
- mem += '.len = ' + str(self.checks['min-len'])
- elif len(self.checks) == 0:
- mem += '.type = NLA_BINARY'
+ if 'exact-len' in self.checks:
+ mem = 'NLA_POLICY_EXACT_LEN(' + str(self.checks['exact-len']) + ')'
else:
- raise Exception('One or more of binary type checks not implemented, yet')
- mem += ', }'
+ mem = '{ '
+ if len(self.checks) == 1 and 'min-len' in self.checks:
+ mem += '.len = ' + str(self.get_limit('min-len'))
+ elif len(self.checks) == 0:
+ mem += '.type = NLA_BINARY'
+ else:
+ raise Exception('One or more binary type checks not implemented yet')
+ mem += ', }'
return mem
def attr_put(self, ri, var):
@@ -433,6 +494,31 @@ class TypeBinary(Type):
f'memcpy({member}, {self.c_name}, {presence}_len);']
+class TypeBitfield32(Type):
+ def _complex_member_type(self, ri):
+ return "struct nla_bitfield32"
+
+ def _attr_typol(self):
+ return f'.type = YNL_PT_BITFIELD32, '
+
+ def _attr_policy(self, policy):
+ if not 'enum' in self.attr:
+ raise Exception('Enum required for bitfield32 attr')
+ enum = self.family.consts[self.attr['enum']]
+ mask = enum.get_mask(as_flags=True)
+ return f"NLA_POLICY_BITFIELD32({mask})"
+
+ def attr_put(self, ri, var):
+ line = f"mnl_attr_put(nlh, {self.enum_name}, sizeof(struct nla_bitfield32), &{var}->{self.c_name})"
+ self._attr_put_line(ri, var, line)
+
+ def _attr_get(self, ri, var):
+ return f"memcpy(&{var}->{self.c_name}, mnl_attr_get_payload(attr), sizeof(struct nla_bitfield32));", None, None
+
+ def _setter_lines(self, ri, member, presence):
+ return [f"memcpy(&{member}, {self.c_name}, sizeof(struct nla_bitfield32));"]
+
+
class TypeNest(Type):
def _complex_member_type(self, ri):
return self.nested_struct_type
@@ -476,12 +562,8 @@ class TypeMultiAttr(Type):
def presence_type(self):
return 'count'
- def _mnl_type(self):
- t = self.type
- # mnl does not have a helper for signed types
- if t[0] == 's':
- t = 'u' + t[1:]
- return t
+ def mnl_type(self):
+ return self._mnl_type()
def _complex_member_type(self, ri):
if 'type' not in self.attr or self.attr['type'] == 'nest':
@@ -516,7 +598,7 @@ class TypeMultiAttr(Type):
def attr_put(self, ri, var):
if self.attr['type'] in scalars:
- put_type = self._mnl_type()
+ put_type = self.mnl_type()
ri.cw.p(f"for (unsigned int i = 0; i < {var}->n_{self.c_name}; i++)")
ri.cw.p(f"mnl_attr_put_{put_type}(nlh, {self.enum_name}, {var}->{self.c_name}[i]);")
elif 'type' not in self.attr or self.attr['type'] == 'nest':
@@ -707,9 +789,11 @@ class AttrSet(SpecAttrSet):
pfx = f"{family.name}-a-{self.name}-"
self.name_prefix = c_upper(pfx)
self.max_name = c_upper(self.yaml.get('attr-max-name', f"{self.name_prefix}max"))
+ self.cnt_name = c_upper(self.yaml.get('attr-cnt-name', f"__{self.name_prefix}max"))
else:
self.name_prefix = family.attr_sets[self.subset_of].name_prefix
self.max_name = family.attr_sets[self.subset_of].max_name
+ self.cnt_name = family.attr_sets[self.subset_of].cnt_name
# Added by resolve:
self.c_name = None
@@ -735,6 +819,8 @@ class AttrSet(SpecAttrSet):
t = TypeString(self.family, self, elem, value)
elif elem['type'] == 'binary':
t = TypeBinary(self.family, self, elem, value)
+ elif elem['type'] == 'bitfield32':
+ t = TypeBitfield32(self.family, self, elem, value)
elif elem['type'] == 'nest':
t = TypeNest(self.family, self, elem, value)
elif elem['type'] == 'array-nest':
@@ -805,6 +891,10 @@ class Family(SpecFamily):
self.uapi_header = self.yaml['uapi-header']
else:
self.uapi_header = f"linux/{self.name}.h"
+ if self.uapi_header.startswith("linux/") and self.uapi_header.endswith('.h'):
+ self.uapi_header_name = self.uapi_header[6:-2]
+ else:
+ self.uapi_header_name = self.name
def resolve(self):
self.resolve_up(super())
@@ -842,6 +932,7 @@ class Family(SpecFamily):
self._load_root_sets()
self._load_nested_sets()
+ self._load_attr_use()
self._load_hooks()
self.kernel_policy = self.yaml.get('kernel-policy', 'split')
@@ -962,6 +1053,22 @@ class Family(SpecFamily):
child.request |= struct.request
child.reply |= struct.reply
+ def _load_attr_use(self):
+ for _, struct in self.pure_nested_structs.items():
+ if struct.request:
+ for _, arg in struct.member_list():
+ arg.request = True
+ if struct.reply:
+ for _, arg in struct.member_list():
+ arg.reply = True
+
+ for root_set, rs_members in self.root_sets.items():
+ for attr, spec in self.attr_sets[root_set].items():
+ if attr in rs_members['request']:
+ spec.request = True
+ if attr in rs_members['reply']:
+ spec.reply = True
+
def _load_global_policy(self):
global_set = set()
attr_set_name = None
@@ -1013,10 +1120,13 @@ class RenderInfo:
# 'do' and 'dump' response parsing is identical
self.type_consistent = True
- if op_mode != 'do' and 'dump' in op and 'do' in op:
- if ('reply' in op['do']) != ('reply' in op["dump"]):
- self.type_consistent = False
- elif 'reply' in op['do'] and op["do"]["reply"] != op["dump"]["reply"]:
+ if op_mode != 'do' and 'dump' in op:
+ if 'do' in op:
+ if ('reply' in op['do']) != ('reply' in op["dump"]):
+ self.type_consistent = False
+ elif 'reply' in op['do'] and op["do"]["reply"] != op["dump"]["reply"]:
+ self.type_consistent = False
+ else:
self.type_consistent = False
self.attr_set = attr_set
@@ -1037,9 +1147,11 @@ class RenderInfo:
if op_mode == 'notify':
op_mode = 'do'
for op_dir in ['request', 'reply']:
- if op and op_dir in op[op_mode]:
- self.struct[op_dir] = Struct(family, self.attr_set,
- type_list=op[op_mode][op_dir]['attributes'])
+ if op:
+ type_list = []
+ if op_dir in op[op_mode]:
+ type_list = op[op_mode][op_dir]['attributes']
+ self.struct[op_dir] = Struct(family, self.attr_set, type_list=type_list)
if op_mode == 'event':
self.struct['reply'] = Struct(family, self.attr_set, type_list=op['event']['attributes'])
@@ -1052,6 +1164,7 @@ class CodeWriter:
self._block_end = False
self._silent_block = False
self._ind = 0
+ self._ifdef_block = None
if out_file is None:
self._out = os.sys.stdout
else:
@@ -1092,6 +1205,8 @@ class CodeWriter:
if self._silent_block:
ind += 1
self._silent_block = line.endswith(')') and CodeWriter._is_cond(line)
+ if line[0] == '#':
+ ind = 0
if add_ind:
ind += add_ind
self._out.write('\t' * ind + line + '\n')
@@ -1215,11 +1330,24 @@ class CodeWriter:
for one in members:
line = '.' + one[0]
line += '\t' * ((longest - len(one[0]) - 1 + 7) // 8)
- line += '= ' + one[1] + ','
+ line += '= ' + str(one[1]) + ','
self.p(line)
+ def ifdef_block(self, config):
+ config_option = None
+ if config:
+ config_option = 'CONFIG_' + c_upper(config)
+ if self._ifdef_block == config_option:
+ return
+
+ if self._ifdef_block:
+ self.p('#endif /* ' + self._ifdef_block + ' */')
+ if config_option:
+ self.p('#ifdef ' + config_option)
+ self._ifdef_block = config_option
+
-scalars = {'u8', 'u16', 'u32', 'u64', 's32', 's64'}
+scalars = {'u8', 'u16', 'u32', 'u64', 's32', 's64', 'uint', 'sint'}
direction_to_suffix = {
'reply': '_rsp',
@@ -1509,11 +1637,8 @@ def _multi_parse(ri, struct, init_lines, local_vars):
ri.cw.p(f"parg.data = &dst->{aspec.c_name}[i];")
ri.cw.p(f"if ({aspec.nested_render_name}_parse(&parg, attr))")
ri.cw.p('return MNL_CB_ERROR;')
- elif aspec['type'] in scalars:
- t = aspec['type']
- if t[0] == 's':
- t = 'u' + t[1:]
- ri.cw.p(f"dst->{aspec.c_name}[i] = mnl_attr_get_{t}(attr);")
+ elif aspec.type in scalars:
+ ri.cw.p(f"dst->{aspec.c_name}[i] = mnl_attr_get_{aspec.mnl_type()}(attr);")
else:
raise Exception('Nest parsing type not supported yet')
ri.cw.p('i++;')
@@ -1748,6 +1873,8 @@ def print_type_helpers(ri, direction, deref=False):
def print_req_type_helpers(ri):
+ if len(ri.struct["request"].attr_list) == 0:
+ return
print_alloc_wrapper(ri, "request")
print_type_helpers(ri, "request")
@@ -1769,6 +1896,8 @@ def print_parse_prototype(ri, direction, terminate=True):
def print_req_type(ri):
+ if len(ri.struct["request"].attr_list) == 0:
+ return
print_type(ri, "request")
@@ -1797,7 +1926,7 @@ def print_wrapped_type(ri):
ri.cw.p('__u8 cmd;')
ri.cw.p('struct ynl_ntf_base_type *next;')
ri.cw.p(f"void (*free)({type_name(ri, 'reply')} *ntf);")
- ri.cw.p(f"{type_name(ri, 'reply', deref=True)} obj __attribute__ ((aligned (8)));")
+ ri.cw.p(f"{type_name(ri, 'reply', deref=True)} obj __attribute__((aligned(8)));")
ri.cw.block_end(line=';')
ri.cw.nl()
print_free_prototype(ri, 'reply')
@@ -1895,10 +2024,13 @@ def print_req_policy_fwd(cw, struct, ri=None, terminate=True):
def print_req_policy(cw, struct, ri=None):
+ if ri and ri.op:
+ cw.ifdef_block(ri.op.get('config-cond', None))
print_req_policy_fwd(cw, struct, ri=ri, terminate=False)
for _, arg in struct.member_list():
arg.attr_policy(cw)
cw.p("};")
+ cw.ifdef_block(None)
cw.nl()
@@ -1910,6 +2042,34 @@ def policy_should_be_static(family):
return family.kernel_policy == 'split' or kernel_can_gen_family_struct(family)
+def print_kernel_policy_ranges(family, cw):
+ first = True
+ for _, attr_set in family.attr_sets.items():
+ if attr_set.subset_of:
+ continue
+
+ for _, attr in attr_set.items():
+ if not attr.request:
+ continue
+ if 'full-range' not in attr.checks:
+ continue
+
+ if first:
+ cw.p('/* Integer value ranges */')
+ first = False
+
+ sign = '' if attr.type[0] == 'u' else '_signed'
+ cw.block_start(line=f'static const struct netlink_range_validation{sign} {c_lower(attr.enum_name)}_range =')
+ members = []
+ if 'min' in attr.checks:
+ members.append(('min', attr.get_limit('min')))
+ if 'max' in attr.checks:
+ members.append(('max', attr.get_limit('max')))
+ cw.write_struct_init(members)
+ cw.block_end(line=';')
+ cw.nl()
+
+
def print_kernel_op_table_fwd(family, cw, terminate):
exported = not kernel_can_gen_family_struct(family)
@@ -1988,6 +2148,7 @@ def print_kernel_op_table(family, cw):
if op.is_async:
continue
+ cw.ifdef_block(op.get('config-cond', None))
cw.block_start()
members = [('cmd', op.enum_name)]
if 'dont-validate' in op:
@@ -2018,6 +2179,7 @@ def print_kernel_op_table(family, cw):
if op.is_async or op_mode not in op:
continue
+ cw.ifdef_block(op.get('config-cond', None))
cw.block_start()
members = [('cmd', op.enum_name)]
if 'dont-validate' in op:
@@ -2053,6 +2215,7 @@ def print_kernel_op_table(family, cw):
members.append(('flags', ' | '.join([c_upper('genl-' + x) for x in flags])))
cw.write_struct_init(members)
cw.block_end(line=',')
+ cw.ifdef_block(None)
cw.block_end(line=';')
cw.nl()
@@ -2124,7 +2287,7 @@ def uapi_enum_start(family, cw, obj, ckey='', enum_name='enum-name'):
def render_uapi(family, cw):
- hdr_prot = f"_UAPI_LINUX_{family.name.upper()}_H"
+ hdr_prot = f"_UAPI_LINUX_{c_upper(family.uapi_header_name)}_H"
cw.p('#ifndef ' + hdr_prot)
cw.p('#define ' + hdr_prot)
cw.nl()
@@ -2193,8 +2356,7 @@ def render_uapi(family, cw):
if attr_set.subset_of:
continue
- cnt_name = c_upper(family.get('attr-cnt-name', f"__{attr_set.name_prefix}MAX"))
- max_value = f"({cnt_name} - 1)"
+ max_value = f"({attr_set.cnt_name} - 1)"
val = 0
uapi_enum_start(family, cw, attr_set.yaml, 'enum-name')
@@ -2206,7 +2368,7 @@ def render_uapi(family, cw):
val += 1
cw.p(attr.enum_name + suffix)
cw.nl()
- cw.p(cnt_name + ('' if max_by_define else ','))
+ cw.p(attr_set.cnt_name + ('' if max_by_define else ','))
if not max_by_define:
cw.p(f"{attr_set.max_name} = {max_value}")
cw.block_end(line=';')
@@ -2311,6 +2473,16 @@ def render_user_family(family, cw, prototype):
cw.block_end(line=';')
+def family_contains_bitfield32(family):
+ for _, attr_set in family.attr_sets.items():
+ if attr_set.subset_of:
+ continue
+ for _, attr in attr_set.items():
+ if attr.type == "bitfield32":
+ return True
+ return False
+
+
def find_kernel_root(full_path):
sub_path = ''
while True:
@@ -2396,6 +2568,8 @@ def main():
cw.p('#include <string.h>')
if args.header:
cw.p('#include <linux/types.h>')
+ if family_contains_bitfield32(parsed):
+ cw.p('#include <linux/netlink.h>')
else:
cw.p(f'#include "{parsed.name}-user.h"')
cw.p('#include "ynl.h"')
@@ -2449,6 +2623,8 @@ def main():
print_kernel_mcgrp_hdr(parsed, cw)
print_kernel_family_struct_hdr(parsed, cw)
else:
+ print_kernel_policy_ranges(parsed, cw)
+
for _, struct in sorted(parsed.pure_nested_structs.items()):
if struct.request:
cw.p('/* Common nested types */')
@@ -2511,9 +2687,8 @@ def main():
if 'dump' in op:
cw.p(f"/* {op.enum_name} - dump */")
ri = RenderInfo(cw, parsed, args.mode, op, 'dump')
- if 'request' in op['dump']:
- print_req_type(ri)
- print_req_type_helpers(ri)
+ print_req_type(ri)
+ print_req_type_helpers(ri)
if not ri.type_consistent:
print_rsp_type(ri)
print_wrapped_type(ri)
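Editor's note on the new range machinery in the generator: limit_to_number() maps spec strings to numbers ('u32-max' gives 4294967295, 's16-min' gives -32768, 'u8-min' gives 0), and any request attribute whose bounds fall outside -32768..32767 picks up the 'full-range' check, so print_kernel_policy_ranges() emits a shared range struct that the policy entry then references. For a hypothetical unsigned 32-bit request attribute "foo" with "checks: { max: 1000000 }", the generated kernel C would look roughly like:

  /* Integer value ranges */
  static const struct netlink_range_validation hypothetical_a_foo_range = {
          .max    = 1000000,
  };

  /* ...and in the attribute policy: */
  [HYPOTHETICAL_A_FOO] = NLA_POLICY_FULL_RANGE(NLA_U32, &hypothetical_a_foo_range),
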
diff --git a/tools/testing/selftests/bpf/DENYLIST.aarch64 b/tools/testing/selftests/bpf/DENYLIST.aarch64
index 3babaf3eee5c..5c2cc7e8c5d0 100644
--- a/tools/testing/selftests/bpf/DENYLIST.aarch64
+++ b/tools/testing/selftests/bpf/DENYLIST.aarch64
@@ -1,5 +1,6 @@
bpf_cookie/multi_kprobe_attach_api # kprobe_multi_link_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3
bpf_cookie/multi_kprobe_link_api # kprobe_multi_link_api_subtest:FAIL:fentry_raw_skel_load unexpected error: -3
+exceptions # JIT does not support calling kfunc bpf_throw: -524
fexit_sleep # The test never returns. The remaining tests cannot start.
kprobe_multi_bench_attach # needs CONFIG_FPROBE
kprobe_multi_test # needs CONFIG_FPROBE
@@ -9,3 +10,4 @@ fexit_test/fexit_many_args # fexit_many_args:FAIL:fexit_ma
fill_link_info/kprobe_multi_link_info # bpf_program__attach_kprobe_multi_opts unexpected error: -95
fill_link_info/kretprobe_multi_link_info # bpf_program__attach_kprobe_multi_opts unexpected error: -95
fill_link_info/kprobe_multi_invalid_ubuff # bpf_program__attach_kprobe_multi_opts unexpected error: -95
+missed/kprobe_recursion # missed_kprobe_recursion__attach unexpected error: -95 (errno 95)
diff --git a/tools/testing/selftests/bpf/DENYLIST.s390x b/tools/testing/selftests/bpf/DENYLIST.s390x
index 5061d9e24c16..1a63996c0304 100644
--- a/tools/testing/selftests/bpf/DENYLIST.s390x
+++ b/tools/testing/selftests/bpf/DENYLIST.s390x
@@ -1,29 +1,5 @@
# TEMPORARY
# Alphabetical order
-bloom_filter_map # failed to find kernel BTF type ID of '__x64_sys_getpgid': -3 (?)
-bpf_cookie # failed to open_and_load program: -524 (trampoline)
-bpf_loop # attaches to __x64_sys_nanosleep
-cgrp_local_storage # prog_attach unexpected error: -524 (trampoline)
-dynptr/test_dynptr_skb_data
-dynptr/test_skb_readonly
-fexit_sleep # fexit_skel_load fexit skeleton failed (trampoline)
+exceptions # JIT does not support calling kfunc bpf_throw (exceptions)
get_stack_raw_tp # user_stack corrupted user stack (no backchain userspace)
-iters/testmod_seq* # s390x doesn't support kfuncs in modules yet
-kprobe_multi_bench_attach # bpf_program__attach_kprobe_multi_opts unexpected error: -95
-kprobe_multi_test # relies on fentry
-ksyms_btf/weak_ksyms* # test_ksyms_weak__open_and_load unexpected error: -22 (kfunc)
-ksyms_module # test_ksyms_module__open_and_load unexpected error: -9 (?)
-ksyms_module_libbpf # JIT does not support calling kernel function (kfunc)
-ksyms_module_lskel # test_ksyms_module_lskel__open_and_load unexpected error: -9 (?)
-module_attach # skel_attach skeleton attach failed: -524 (trampoline)
-ringbuf # skel_load skeleton load failed (?)
stacktrace_build_id # compare_map_keys stackid_hmap vs. stackmap err -2 errno 2 (?)
-test_lsm # attach unexpected error: -524 (trampoline)
-trace_printk # trace_printk__load unexpected error: -2 (errno 2) (?)
-trace_vprintk # trace_vprintk__open_and_load unexpected error: -9 (?)
-unpriv_bpf_disabled # fentry
-user_ringbuf # failed to find kernel BTF type ID of '__s390x_sys_prctl': -3 (?)
-verif_stats # trace_vprintk__open_and_load unexpected error: -9 (?)
-xdp_bonding # failed to auto-attach program 'trace_on_entry': -524 (trampoline)
-xdp_metadata # JIT does not support calling kernel function (kfunc)
-test_task_under_cgroup # JIT does not support calling kernel function (kfunc)
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index caede9b574cb..9c27b67bc7b1 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -27,7 +27,11 @@ endif
BPF_GCC ?= $(shell command -v bpf-gcc;)
SAN_CFLAGS ?=
SAN_LDFLAGS ?= $(SAN_CFLAGS)
-CFLAGS += -g -O0 -rdynamic -Wall -Werror $(GENFLAGS) $(SAN_CFLAGS) \
+RELEASE ?=
+OPT_FLAGS ?= $(if $(RELEASE),-O2,-O0)
+CFLAGS += -g $(OPT_FLAGS) -rdynamic \
+ -Wall -Werror \
+ $(GENFLAGS) $(SAN_CFLAGS) \
-I$(CURDIR) -I$(INCLUDE_DIR) -I$(GENDIR) -I$(LIBDIR) \
-I$(TOOLSINCDIR) -I$(APIDIR) -I$(OUTPUT)
LDFLAGS += $(SAN_LDFLAGS)
@@ -104,7 +108,7 @@ TEST_GEN_PROGS_EXTENDED = test_sock_addr test_skb_cgroup_id_user \
xskxceiver xdp_redirect_multi xdp_synproxy veristat xdp_hw_metadata \
xdp_features
-TEST_GEN_FILES += liburandom_read.so urandom_read sign-file
+TEST_GEN_FILES += liburandom_read.so urandom_read sign-file uprobe_multi
# Emit succinct information message describing current building step
# $1 - generic step name (e.g., CC, LINK, etc);
@@ -188,7 +192,7 @@ $(OUTPUT)/%:%.c
$(Q)$(LINK.c) $^ $(LDLIBS) -o $@
# LLVM's ld.lld doesn't support all the architectures, so use it only on x86
-ifeq ($(SRCARCH),x86)
+ifeq ($(SRCARCH),$(filter $(SRCARCH),x86 riscv))
LLD := lld
else
LLD := ld
@@ -196,17 +200,20 @@ endif
# Filter out -static for liburandom_read.so and its dependent targets so that static builds
# do not fail. Static builds leave urandom_read relying on system-wide shared libraries.
-$(OUTPUT)/liburandom_read.so: urandom_read_lib1.c urandom_read_lib2.c
+$(OUTPUT)/liburandom_read.so: urandom_read_lib1.c urandom_read_lib2.c liburandom_read.map
$(call msg,LIB,,$@)
- $(Q)$(CLANG) $(filter-out -static,$(CFLAGS) $(LDFLAGS)) \
- $^ $(filter-out -static,$(LDLIBS)) \
+ $(Q)$(CLANG) $(CLANG_TARGET_ARCH) \
+ $(filter-out -static,$(CFLAGS) $(LDFLAGS)) \
+ $(filter %.c,$^) $(filter-out -static,$(LDLIBS)) \
-fuse-ld=$(LLD) -Wl,-znoseparate-code -Wl,--build-id=sha1 \
+ -Wl,--version-script=liburandom_read.map \
-fPIC -shared -o $@
$(OUTPUT)/urandom_read: urandom_read.c urandom_read_aux.c $(OUTPUT)/liburandom_read.so
$(call msg,BINARY,,$@)
- $(Q)$(CLANG) $(filter-out -static,$(CFLAGS) $(LDFLAGS)) $(filter %.c,$^) \
- -lurandom_read $(filter-out -static,$(LDLIBS)) -L$(OUTPUT) \
+ $(Q)$(CLANG) $(CLANG_TARGET_ARCH) \
+ $(filter-out -static,$(CFLAGS) $(LDFLAGS)) $(filter %.c,$^) \
+ -lurandom_read $(filter-out -static,$(LDLIBS)) -L$(OUTPUT) \
-fuse-ld=$(LLD) -Wl,-znoseparate-code -Wl,--build-id=sha1 \
-Wl,-rpath=. -o $@
@@ -238,7 +245,7 @@ $(OUTPUT)/runqslower: $(BPFOBJ) | $(DEFAULT_BPFTOOL) $(RUNQSLOWER_OUTPUT)
BPFTOOL_OUTPUT=$(HOST_BUILD_DIR)/bpftool/ \
BPFOBJ_OUTPUT=$(BUILD_DIR)/libbpf \
BPFOBJ=$(BPFOBJ) BPF_INCLUDE=$(INCLUDE_DIR) \
- EXTRA_CFLAGS='-g -O0 $(SAN_CFLAGS)' \
+ EXTRA_CFLAGS='-g $(OPT_FLAGS) $(SAN_CFLAGS)' \
EXTRA_LDFLAGS='$(SAN_LDFLAGS)' && \
cp $(RUNQSLOWER_OUTPUT)runqslower $@
@@ -276,7 +283,7 @@ $(DEFAULT_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) \
$(HOST_BPFOBJ) | $(HOST_BUILD_DIR)/bpftool
$(Q)$(MAKE) $(submake_extras) -C $(BPFTOOLDIR) \
ARCH= CROSS_COMPILE= CC="$(HOSTCC)" LD="$(HOSTLD)" \
- EXTRA_CFLAGS='-g -O0' \
+ EXTRA_CFLAGS='-g $(OPT_FLAGS)' \
OUTPUT=$(HOST_BUILD_DIR)/bpftool/ \
LIBBPF_OUTPUT=$(HOST_BUILD_DIR)/libbpf/ \
LIBBPF_DESTDIR=$(HOST_SCRATCH_DIR)/ \
@@ -287,7 +294,7 @@ $(CROSS_BPFTOOL): $(wildcard $(BPFTOOLDIR)/*.[ch] $(BPFTOOLDIR)/Makefile) \
$(BPFOBJ) | $(BUILD_DIR)/bpftool
$(Q)$(MAKE) $(submake_extras) -C $(BPFTOOLDIR) \
ARCH=$(ARCH) CROSS_COMPILE=$(CROSS_COMPILE) \
- EXTRA_CFLAGS='-g -O0' \
+ EXTRA_CFLAGS='-g $(OPT_FLAGS)' \
OUTPUT=$(BUILD_DIR)/bpftool/ \
LIBBPF_OUTPUT=$(BUILD_DIR)/libbpf/ \
LIBBPF_DESTDIR=$(SCRATCH_DIR)/ \
@@ -310,7 +317,7 @@ $(BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile) \
$(APIDIR)/linux/bpf.h \
| $(BUILD_DIR)/libbpf
$(Q)$(MAKE) $(submake_extras) -C $(BPFDIR) OUTPUT=$(BUILD_DIR)/libbpf/ \
- EXTRA_CFLAGS='-g -O0 $(SAN_CFLAGS)' \
+ EXTRA_CFLAGS='-g $(OPT_FLAGS) $(SAN_CFLAGS)' \
EXTRA_LDFLAGS='$(SAN_LDFLAGS)' \
DESTDIR=$(SCRATCH_DIR) prefix= all install_headers
@@ -319,7 +326,7 @@ $(HOST_BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile) \
$(APIDIR)/linux/bpf.h \
| $(HOST_BUILD_DIR)/libbpf
$(Q)$(MAKE) $(submake_extras) -C $(BPFDIR) \
- EXTRA_CFLAGS='-g -O0' ARCH= CROSS_COMPILE= \
+ EXTRA_CFLAGS='-g $(OPT_FLAGS)' ARCH= CROSS_COMPILE= \
OUTPUT=$(HOST_BUILD_DIR)/libbpf/ \
CC="$(HOSTCC)" LD="$(HOSTLD)" \
DESTDIR=$(HOST_SCRATCH_DIR)/ prefix= all install_headers
@@ -578,11 +585,20 @@ endef
# Define test_progs test runner.
TRUNNER_TESTS_DIR := prog_tests
TRUNNER_BPF_PROGS_DIR := progs
-TRUNNER_EXTRA_SOURCES := test_progs.c cgroup_helpers.c trace_helpers.c \
- network_helpers.c testing_helpers.c \
- btf_helpers.c flow_dissector_load.h \
- cap_helpers.c test_loader.c xsk.c disasm.c \
- json_writer.c unpriv_helpers.c \
+TRUNNER_EXTRA_SOURCES := test_progs.c \
+ cgroup_helpers.c \
+ trace_helpers.c \
+ network_helpers.c \
+ testing_helpers.c \
+ btf_helpers.c \
+ cap_helpers.c \
+ unpriv_helpers.c \
+ netlink_helpers.c \
+ test_loader.c \
+ xsk.c \
+ disasm.c \
+ json_writer.c \
+ flow_dissector_load.h \
ip_check_defrag_frags.h
TRUNNER_EXTRA_FILES := $(OUTPUT)/urandom_read $(OUTPUT)/bpf_testmod.ko \
$(OUTPUT)/liburandom_read.so \
@@ -640,7 +656,9 @@ $(OUTPUT)/test_verifier: test_verifier.c verifier/tests.h $(BPFOBJ) | $(OUTPUT)
$(call msg,BINARY,,$@)
$(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@
-$(OUTPUT)/xskxceiver: xskxceiver.c xskxceiver.h $(OUTPUT)/xsk.o $(OUTPUT)/xsk_xdp_progs.skel.h $(BPFOBJ) | $(OUTPUT)
+# Include find_bit.c to compile xskxceiver.
+EXTRA_SRC := $(TOOLSDIR)/lib/find_bit.c
+$(OUTPUT)/xskxceiver: $(EXTRA_SRC) xskxceiver.c xskxceiver.h $(OUTPUT)/xsk.o $(OUTPUT)/xsk_xdp_progs.skel.h $(BPFOBJ) | $(OUTPUT)
$(call msg,BINARY,,$@)
$(Q)$(CC) $(CFLAGS) $(filter %.a %.o %.c,$^) $(LDLIBS) -o $@
diff --git a/tools/testing/selftests/bpf/bpf_experimental.h b/tools/testing/selftests/bpf/bpf_experimental.h
index 209811b1993a..1386baf9ae4a 100644
--- a/tools/testing/selftests/bpf/bpf_experimental.h
+++ b/tools/testing/selftests/bpf/bpf_experimental.h
@@ -131,4 +131,350 @@ extern int bpf_rbtree_add_impl(struct bpf_rb_root *root, struct bpf_rb_node *nod
*/
extern struct bpf_rb_node *bpf_rbtree_first(struct bpf_rb_root *root) __ksym;
+/* Description
+ * Allocates a percpu object of the type represented by 'local_type_id' in
+ * program BTF. The user may use the bpf_core_type_id_local macro to pass the
+ * type ID of a struct in program BTF.
+ *
+ * The 'local_type_id' parameter must be a known constant.
+ * The 'meta' parameter is rewritten by the verifier; the BPF program does
+ * not need to set it.
+ * Returns
+ * A pointer to a percpu object of the type corresponding to the passed in
+ * 'local_type_id', or NULL on failure.
+ */
+extern void *bpf_percpu_obj_new_impl(__u64 local_type_id, void *meta) __ksym;
+
+/* Convenience macro to wrap over bpf_percpu_obj_new_impl */
+#define bpf_percpu_obj_new(type) ((type __percpu_kptr *)bpf_percpu_obj_new_impl(bpf_core_type_id_local(type), NULL))
+
+/* Description
+ * Free an allocated percpu object. All fields of the object that require
+ * destruction will be destructed before the storage is freed.
+ *
+ * The 'meta' parameter is rewritten by the verifier; the BPF program does
+ * not need to set it.
+ * Returns
+ * Void.
+ */
+extern void bpf_percpu_obj_drop_impl(void *kptr, void *meta) __ksym;
+
+struct bpf_iter_task_vma;
+
+extern int bpf_iter_task_vma_new(struct bpf_iter_task_vma *it,
+ struct task_struct *task,
+ unsigned long addr) __ksym;
+extern struct vm_area_struct *bpf_iter_task_vma_next(struct bpf_iter_task_vma *it) __ksym;
+extern void bpf_iter_task_vma_destroy(struct bpf_iter_task_vma *it) __ksym;
+
+/* Convenience macro to wrap over bpf_obj_drop_impl */
+#define bpf_percpu_obj_drop(kptr) bpf_percpu_obj_drop_impl(kptr, NULL)
+
+/* Description
+ * Throw a BPF exception from the program, immediately terminating its
+ * execution and unwinding the stack. The supplied 'cookie' parameter
+ * will be the return value of the program when an exception is thrown,
+ * and the default exception callback is used. Otherwise, if an exception
+ * callback is set using the '__exception_cb(callback)' declaration tag
+ * on the main program, the 'cookie' parameter will be the callback's only
+ * input argument.
+ *
+ * Thus, when the default exception callback is used, 'cookie' is subject
+ * to the constraints on the program's return value (as with R0 on exit).
+ * Otherwise, the return value of the marked exception callback will be
+ * subject to the same checks.
+ *
+ * Note that throwing an exception with lingering resources (locks,
+ * references, etc.) will lead to a verification error.
+ *
+ * Note that callbacks *cannot* call this helper.
+ * Returns
+ * Never.
+ * Throws
+ * An exception with the specified 'cookie' value.
+ */
+extern void bpf_throw(u64 cookie) __ksym;
+
+/* This macro must be used to mark the exception callback corresponding to the
+ * main program. For example:
+ *
+ * int exception_cb(u64 cookie) {
+ * return cookie;
+ * }
+ *
+ * SEC("tc")
+ * __exception_cb(exception_cb)
+ * int main_prog(struct __sk_buff *ctx) {
+ * ...
+ * return TC_ACT_OK;
+ * }
+ *
+ * Here, the exception callback for the main program will be 'exception_cb'.
+ * Note that this attribute can only be used once, and specifying multiple
+ * exception callbacks for the main program will lead to a verification error.
+ */
+#define __exception_cb(name) __attribute__((btf_decl_tag("exception_callback:" #name)))
+
+#define __bpf_assert_signed(x) _Generic((x), \
+ unsigned long: 0, \
+ unsigned long long: 0, \
+ signed long: 1, \
+ signed long long: 1 \
+)
+
+#define __bpf_assert_check(LHS, op, RHS) \
+ _Static_assert(sizeof(&(LHS)), "1st argument must be an lvalue expression"); \
+ _Static_assert(sizeof(LHS) == 8, "Only 8-byte integers are supported\n"); \
+ _Static_assert(__builtin_constant_p(__bpf_assert_signed(LHS)), "internal static assert"); \
+ _Static_assert(__builtin_constant_p((RHS)), "2nd argument must be a constant expression")
+
+#define __bpf_assert(LHS, op, cons, RHS, VAL) \
+ ({ \
+ (void)bpf_throw; \
+ asm volatile ("if %[lhs] " op " %[rhs] goto +2; r1 = %[value]; call bpf_throw" \
+ : : [lhs] "r"(LHS), [rhs] cons(RHS), [value] "ri"(VAL) : ); \
+ })
+
+#define __bpf_assert_op_sign(LHS, op, cons, RHS, VAL, supp_sign) \
+ ({ \
+ __bpf_assert_check(LHS, op, RHS); \
+ if (__bpf_assert_signed(LHS) && !(supp_sign)) \
+ __bpf_assert(LHS, "s" #op, cons, RHS, VAL); \
+ else \
+ __bpf_assert(LHS, #op, cons, RHS, VAL); \
+ })
+
+#define __bpf_assert_op(LHS, op, RHS, VAL, supp_sign) \
+ ({ \
+ if (sizeof(typeof(RHS)) == 8) { \
+ const typeof(RHS) rhs_var = (RHS); \
+ __bpf_assert_op_sign(LHS, op, "r", rhs_var, VAL, supp_sign); \
+ } else { \
+ __bpf_assert_op_sign(LHS, op, "i", RHS, VAL, supp_sign); \
+ } \
+ })
+
+/* Description
+ * Assert that a conditional expression is true.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the value zero when the assertion fails.
+ */
+#define bpf_assert(cond) if (!(cond)) bpf_throw(0);
+
+/* Description
+ * Assert that a conditional expression is true.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the specified value when the assertion fails.
+ */
+#define bpf_assert_with(cond, value) if (!(cond)) bpf_throw(value);
+
+/* Description
+ * Assert that LHS is equal to RHS. This statement updates the known value
+ * of LHS during verification. Note that RHS must be a constant value, and
+ * must fit within the data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the value zero when the assertion fails.
+ */
+#define bpf_assert_eq(LHS, RHS) \
+ ({ \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, ==, RHS, 0, true); \
+ })
+
+/* Description
+ * Assert that LHS is equal to RHS. This statement updates the known value
+ * of LHS during verification. Note that RHS must be a constant value, and
+ * must fit within the data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the specified value when the assertion fails.
+ */
+#define bpf_assert_eq_with(LHS, RHS, value) \
+ ({ \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, ==, RHS, value, true); \
+ })
+
+/* Description
+ * Assert that LHS is less than RHS. This statement updates the known
+ * bounds of LHS during verification. Note that RHS must be a constant
+ * value, and must fit within the data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the value zero when the assertion fails.
+ */
+#define bpf_assert_lt(LHS, RHS) \
+ ({ \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, <, RHS, 0, false); \
+ })
+
+/* Description
+ * Assert that LHS is less than RHS. This statement updates the known
+ * bounds of LHS during verification. Note that RHS must be a constant
+ * value, and must fit within the data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the specified value when the assertion fails.
+ */
+#define bpf_assert_lt_with(LHS, RHS, value) \
+ ({ \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, <, RHS, value, false); \
+ })
+
+/* Description
+ * Assert that LHS is greater than RHS. This statement updates the known
+ * bounds of LHS during verification. Note that RHS must be a constant
+ * value, and must fit within the data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the value zero when the assertion fails.
+ */
+#define bpf_assert_gt(LHS, RHS) \
+ ({ \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, >, RHS, 0, false); \
+ })
+
+/* Description
+ * Assert that LHS is greater than RHS. This statement updates the known
+ * bounds of LHS during verification. Note that RHS must be a constant
+ * value, and must fit within the data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the specified value when the assertion fails.
+ */
+#define bpf_assert_gt_with(LHS, RHS, value) \
+ ({ \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, >, RHS, value, false); \
+ })
+
+/* Description
+ * Assert that LHS is less than or equal to RHS. This statement updates the
+ * known bounds of LHS during verification. Note that RHS must be a
+ * constant value, and must fit within the data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the value zero when the assertion fails.
+ */
+#define bpf_assert_le(LHS, RHS) \
+ ({ \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, <=, RHS, 0, false); \
+ })
+
+/* Description
+ * Assert that LHS is less than or equal to RHS. This statement updates the
+ * known bounds of LHS during verification. Note that RHS must be a
+ * constant value, and must fit within the data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the specified value when the assertion fails.
+ */
+#define bpf_assert_le_with(LHS, RHS, value) \
+ ({ \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, <=, RHS, value, false); \
+ })
+
+/* Description
+ * Assert that LHS is greater than or equal to RHS. This statement updates
+ * the known bounds of LHS during verification. Note that RHS must be a
+ * constant value, and must fit within the data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the value zero when the assertion fails.
+ */
+#define bpf_assert_ge(LHS, RHS) \
+ ({ \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, >=, RHS, 0, false); \
+ })
+
+/* Description
+ * Assert that LHS is greater than or equal to RHS. This statement updates
+ * the known bounds of LHS during verification. Note that RHS must be a
+ * constant value, and must fit within the data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the specified value when the assertion fails.
+ */
+#define bpf_assert_ge_with(LHS, RHS, value) \
+ ({ \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, >=, RHS, value, false); \
+ })
+
+/* Description
+ * Assert that LHS is in the range [BEG, END] (inclusive of both). This
+ * statement updates the known bounds of LHS during verification. Note
+ * that both BEG and END must be constant values, and must fit within the
+ * data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the value zero when the assertion fails.
+ */
+#define bpf_assert_range(LHS, BEG, END) \
+ ({ \
+ _Static_assert(BEG <= END, "BEG must be <= END"); \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, >=, BEG, 0, false); \
+ __bpf_assert_op(LHS, <=, END, 0, false); \
+ })
+
+/* Description
+ * Assert that LHS is in the range [BEG, END] (inclusive of both). This
+ * statement updates the known bounds of LHS during verification. Note
+ * that both BEG and END must be constant values, and must fit within the
+ * data type of LHS.
+ * Returns
+ * Void.
+ * Throws
+ * An exception with the specified value when the assertion fails.
+ */
+#define bpf_assert_range_with(LHS, BEG, END, value) \
+ ({ \
+ _Static_assert(BEG <= END, "BEG must be <= END"); \
+ barrier_var(LHS); \
+ __bpf_assert_op(LHS, >=, BEG, value, false); \
+ __bpf_assert_op(LHS, <=, END, value, false); \
+ })
+
+struct bpf_iter_css_task;
+struct cgroup_subsys_state;
+extern int bpf_iter_css_task_new(struct bpf_iter_css_task *it,
+ struct cgroup_subsys_state *css, unsigned int flags) __weak __ksym;
+extern struct task_struct *bpf_iter_css_task_next(struct bpf_iter_css_task *it) __weak __ksym;
+extern void bpf_iter_css_task_destroy(struct bpf_iter_css_task *it) __weak __ksym;
+
+struct bpf_iter_task;
+extern int bpf_iter_task_new(struct bpf_iter_task *it,
+ struct task_struct *task, unsigned int flags) __weak __ksym;
+extern struct task_struct *bpf_iter_task_next(struct bpf_iter_task *it) __weak __ksym;
+extern void bpf_iter_task_destroy(struct bpf_iter_task *it) __weak __ksym;
+
+struct bpf_iter_css;
+extern int bpf_iter_css_new(struct bpf_iter_css *it,
+ struct cgroup_subsys_state *start, unsigned int flags) __weak __ksym;
+extern struct cgroup_subsys_state *bpf_iter_css_next(struct bpf_iter_css *it) __weak __ksym;
+extern void bpf_iter_css_destroy(struct bpf_iter_css *it) __weak __ksym;
+
#endif
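The additions above are meant to be used from BPF programs built against this header. Below is a minimal, hypothetical sketch (program names, the map layout and the attach points are illustrative and not part of this patch) of how bpf_percpu_obj_new()/bpf_percpu_obj_drop() and the assertion/exception macros might be combined; bpf_kptr_xchg() is the pre-existing helper used here to stash the allocated per-CPU object in a map value.

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include "bpf_experimental.h"

struct val_t { long a, b; };

struct elem { struct val_t __percpu_kptr *pc; };

struct {
	__uint(type, BPF_MAP_TYPE_ARRAY);
	__uint(max_entries, 1);
	__type(key, int);
	__type(value, struct elem);
} array SEC(".maps");

SEC("?fentry/bpf_fentry_test1")
int BPF_PROG(percpu_alloc_sketch)
{
	struct val_t __percpu_kptr *p;
	struct elem *e;
	int idx = 0;

	e = bpf_map_lookup_elem(&array, &idx);
	if (!e)
		return 0;

	p = bpf_percpu_obj_new(struct val_t);
	if (!p)
		return 0;

	/* Stash the per-CPU object in the map; drop any object it replaces. */
	p = bpf_kptr_xchg(&e->pc, p);
	if (p)
		bpf_percpu_obj_drop(p);
	return 0;
}

/* Exception callback: its return value is checked like the main program's. */
__noinline int example_exception_cb(u64 cookie)
{
	return cookie;
}

SEC("tc")
__exception_cb(example_exception_cb)
int assert_sketch(struct __sk_buff *ctx)
{
	u64 len = ctx->len;

	/* Throws with cookie 42 unless 14 <= len <= 1500; on the fallthrough
	 * path the verifier knows these bounds for len.
	 */
	bpf_assert_range_with(len, 14, 1500, 42);
	return 1;
}

char _license[] SEC("license") = "GPL";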
diff --git a/tools/testing/selftests/bpf/bpf_kfuncs.h b/tools/testing/selftests/bpf/bpf_kfuncs.h
index 642dda0e758a..5ca68ff0b59f 100644
--- a/tools/testing/selftests/bpf/bpf_kfuncs.h
+++ b/tools/testing/selftests/bpf/bpf_kfuncs.h
@@ -1,6 +1,8 @@
#ifndef __BPF_KFUNCS__
#define __BPF_KFUNCS__
+struct bpf_sock_addr_kern;
+
/* Description
* Initializes an skb-type dynptr
* Returns
@@ -41,4 +43,16 @@ extern bool bpf_dynptr_is_rdonly(const struct bpf_dynptr *ptr) __ksym;
extern __u32 bpf_dynptr_size(const struct bpf_dynptr *ptr) __ksym;
extern int bpf_dynptr_clone(const struct bpf_dynptr *ptr, struct bpf_dynptr *clone__init) __ksym;
+/* Description
+ * Modify the address of an AF_UNIX sockaddr.
+ * Returns
+ * -EINVAL if the address size is too big, or 0 if the sockaddr was successfully modified.
+ */
+extern int bpf_sock_addr_set_sun_path(struct bpf_sock_addr_kern *sa_kern,
+ const __u8 *sun_path, __u32 sun_path__sz) __ksym;
+
+void *bpf_cast_to_kern_ctx(void *) __ksym;
+
+void *bpf_rdonly_cast(void *obj, __u32 btf_id) __ksym;
+
#endif
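As a rough illustration of the two kfunc declarations added above, the sketch below rewrites the destination of an AF_UNIX connect() from a cgroup/connect_unix program. The program name and the abstract socket name are assumptions made for illustration; bpf_cast_to_kern_ctx() and bpf_sock_addr_set_sun_path() are the declarations from this header.

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include "bpf_kfuncs.h"

/* Abstract AF_UNIX name (leading NUL); purely illustrative. */
#define DST_NAME "\0bpf_example_sock"

SEC("cgroup/connect_unix")
int rewrite_unix_dst(struct bpf_sock_addr *ctx)
{
	struct bpf_sock_addr_kern *sa_kern = bpf_cast_to_kern_ctx(ctx);
	int err;

	/* Redirect the connect() to DST_NAME; sizeof() - 1 keeps the leading
	 * NUL but drops the trailing one.
	 */
	err = bpf_sock_addr_set_sun_path(sa_kern, (const __u8 *)DST_NAME,
					 sizeof(DST_NAME) - 1);
	return err ? 0 : 1;
}

char _license[] SEC("license") = "GPL";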
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
index cefc5dd72573..a5e246f7b202 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
@@ -138,6 +138,10 @@ __bpf_kfunc void bpf_iter_testmod_seq_destroy(struct bpf_iter_testmod_seq *it)
it->cnt = 0;
}
+__bpf_kfunc void bpf_kfunc_common_test(void)
+{
+}
+
struct bpf_testmod_btf_type_tag_1 {
int a;
};
@@ -343,6 +347,7 @@ BTF_SET8_START(bpf_testmod_common_kfunc_ids)
BTF_ID_FLAGS(func, bpf_iter_testmod_seq_new, KF_ITER_NEW)
BTF_ID_FLAGS(func, bpf_iter_testmod_seq_next, KF_ITER_NEXT | KF_RET_NULL)
BTF_ID_FLAGS(func, bpf_iter_testmod_seq_destroy, KF_ITER_DESTROY)
+BTF_ID_FLAGS(func, bpf_kfunc_common_test)
BTF_SET8_END(bpf_testmod_common_kfunc_ids)
static const struct btf_kfunc_id_set bpf_testmod_common_kfunc_set = {
diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h
index f5c5b1375c24..7c664dd61059 100644
--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h
+++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod_kfunc.h
@@ -104,4 +104,6 @@ void bpf_kfunc_call_test_fail1(struct prog_test_fail1 *p);
void bpf_kfunc_call_test_fail2(struct prog_test_fail2 *p);
void bpf_kfunc_call_test_fail3(struct prog_test_fail3 *p);
void bpf_kfunc_call_test_mem_len_fail1(void *mem, int len);
+
+void bpf_kfunc_common_test(void) __ksym;
#endif /* _BPF_TESTMOD_KFUNC_H */
diff --git a/tools/testing/selftests/bpf/cgroup_helpers.c b/tools/testing/selftests/bpf/cgroup_helpers.c
index 2caee8423ee0..5b1da2a32ea7 100644
--- a/tools/testing/selftests/bpf/cgroup_helpers.c
+++ b/tools/testing/selftests/bpf/cgroup_helpers.c
@@ -49,6 +49,10 @@
snprintf(buf, sizeof(buf), "%s%s", NETCLS_MOUNT_PATH, \
CGROUP_WORK_DIR)
+static __thread bool cgroup_workdir_mounted;
+
+static void __cleanup_cgroup_environment(void);
+
static int __enable_controllers(const char *cgroup_path, const char *controllers)
{
char path[PATH_MAX + 1];
@@ -195,6 +199,11 @@ int setup_cgroup_environment(void)
format_cgroup_path(cgroup_workdir, "");
+ if (mkdir(CGROUP_MOUNT_PATH, 0777) && errno != EEXIST) {
+ log_err("mkdir mount");
+ return 1;
+ }
+
if (unshare(CLONE_NEWNS)) {
log_err("unshare");
return 1;
@@ -209,9 +218,10 @@ int setup_cgroup_environment(void)
log_err("mount cgroup2");
return 1;
}
+ cgroup_workdir_mounted = true;
/* Cleanup existing failed runs, now that the environment is set up */
- cleanup_cgroup_environment();
+ __cleanup_cgroup_environment();
if (mkdir(cgroup_workdir, 0777) && errno != EEXIST) {
log_err("mkdir cgroup work dir");
@@ -306,10 +316,25 @@ int join_parent_cgroup(const char *relative_path)
}
/**
+ * __cleanup_cgroup_environment() - Delete temporary cgroups
+ *
+ * This is a helper for cleanup_cgroup_environment() that is responsible for
+ * deletion of all temporary cgroups that have been created during the test.
+ */
+static void __cleanup_cgroup_environment(void)
+{
+ char cgroup_workdir[PATH_MAX + 1];
+
+ format_cgroup_path(cgroup_workdir, "");
+ join_cgroup_from_top(CGROUP_MOUNT_PATH);
+ nftw(cgroup_workdir, nftwfunc, WALK_FD_LIMIT, FTW_DEPTH | FTW_MOUNT);
+}
+
+/**
* cleanup_cgroup_environment() - Cleanup Cgroup Testing Environment
*
* This is an idempotent function to delete all temporary cgroups that
- * have been created during the test, including the cgroup testing work
+ * have been created during the test and unmount the cgroup testing work
* directory.
*
* At call time, it moves the calling process to the root cgroup, and then
@@ -320,11 +345,10 @@ int join_parent_cgroup(const char *relative_path)
*/
void cleanup_cgroup_environment(void)
{
- char cgroup_workdir[PATH_MAX + 1];
-
- format_cgroup_path(cgroup_workdir, "");
- join_cgroup_from_top(CGROUP_MOUNT_PATH);
- nftw(cgroup_workdir, nftwfunc, WALK_FD_LIMIT, FTW_DEPTH | FTW_MOUNT);
+ __cleanup_cgroup_environment();
+ if (cgroup_workdir_mounted && umount(CGROUP_MOUNT_PATH))
+ log_err("umount cgroup2");
+ cgroup_workdir_mounted = false;
}
/**
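For context, the usual calling pattern for these helpers in a test looks roughly like the sketch below (the function and cgroup names are illustrative, and create_and_get_cgroup() is assumed from the existing cgroup_helpers.h API). With this change, cleanup_cgroup_environment() additionally unmounts the cgroup2 hierarchy, but only when the calling thread mounted it.

#include <unistd.h>
#include "cgroup_helpers.h"

static int run_in_test_cgroup(void)
{
	int cgroup_fd, err = -1;

	if (setup_cgroup_environment())	/* mounts cgroup2, wipes old runs */
		return -1;

	cgroup_fd = create_and_get_cgroup("/test_cg");
	if (cgroup_fd < 0)
		goto out;

	/* ... attach programs to cgroup_fd and run the test ... */
	err = 0;
	close(cgroup_fd);
out:
	cleanup_cgroup_environment();	/* removes cgroups, umounts if mounted here */
	return err;
}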
diff --git a/tools/testing/selftests/bpf/config b/tools/testing/selftests/bpf/config
index e41eb33b2704..3ec5927ec3e5 100644
--- a/tools/testing/selftests/bpf/config
+++ b/tools/testing/selftests/bpf/config
@@ -71,6 +71,7 @@ CONFIG_NETFILTER_SYNPROXY=y
CONFIG_NETFILTER_XT_CONNMARK=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
CONFIG_NETFILTER_XT_TARGET_CT=y
+CONFIG_NETKIT=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_DEFRAG_IPV4=y
@@ -84,3 +85,4 @@ CONFIG_USERFAULTFD=y
CONFIG_VXLAN=y
CONFIG_XDP_SOCKETS=y
CONFIG_XFRM_INTERFACE=y
+CONFIG_VSOCKETS=y
diff --git a/tools/testing/selftests/bpf/liburandom_read.map b/tools/testing/selftests/bpf/liburandom_read.map
new file mode 100644
index 000000000000..38a97a419a04
--- /dev/null
+++ b/tools/testing/selftests/bpf/liburandom_read.map
@@ -0,0 +1,15 @@
+LIBURANDOM_READ_1.0.0 {
+ global:
+ urandlib_api;
+ urandlib_api_sameoffset;
+ urandlib_read_without_sema;
+ urandlib_read_with_sema;
+ urandlib_read_with_sema_semaphore;
+ local:
+ *;
+};
+
+LIBURANDOM_READ_2.0.0 {
+ global:
+ urandlib_api;
+} LIBURANDOM_READ_1.0.0;
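The version script above pairs with GNU symbol versioning on the library side. A hedged sketch of how two implementations of urandlib_api could sit behind the two version nodes (the _v1/_v2 names are illustrative): the @@ form marks the default version that new links resolve to, while @ keeps the old version available for existing binaries.

/* library-side sketch: two implementations behind one exported name */
int urandlib_api_v1(void) { return 1; }
int urandlib_api_v2(void) { return 2; }

__asm__(".symver urandlib_api_v1, urandlib_api@LIBURANDOM_READ_1.0.0");
__asm__(".symver urandlib_api_v2, urandlib_api@@LIBURANDOM_READ_2.0.0");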
diff --git a/tools/testing/selftests/bpf/map_tests/map_in_map_batch_ops.c b/tools/testing/selftests/bpf/map_tests/map_in_map_batch_ops.c
index 16f1671e4bde..66191ae9863c 100644
--- a/tools/testing/selftests/bpf/map_tests/map_in_map_batch_ops.c
+++ b/tools/testing/selftests/bpf/map_tests/map_in_map_batch_ops.c
@@ -33,11 +33,11 @@ static void create_inner_maps(enum bpf_map_type map_type,
{
int map_fd, map_index, ret;
__u32 map_key = 0, map_id;
- char map_name[15];
+ char map_name[16];
for (map_index = 0; map_index < OUTER_MAP_ENTRIES; map_index++) {
memset(map_name, 0, sizeof(map_name));
- sprintf(map_name, "inner_map_fd_%d", map_index);
+ snprintf(map_name, sizeof(map_name), "inner_map_fd_%d", map_index);
map_fd = bpf_map_create(map_type, map_name, sizeof(__u32),
sizeof(__u32), 1, NULL);
CHECK(map_fd < 0,
diff --git a/tools/testing/selftests/bpf/netlink_helpers.c b/tools/testing/selftests/bpf/netlink_helpers.c
new file mode 100644
index 000000000000..caf36eb1d032
--- /dev/null
+++ b/tools/testing/selftests/bpf/netlink_helpers.c
@@ -0,0 +1,358 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/* Taken & modified from iproute2's libnetlink.c
+ * Authors: Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>
+ */
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <time.h>
+#include <sys/socket.h>
+
+#include "netlink_helpers.h"
+
+static int rcvbuf = 1024 * 1024;
+
+void rtnl_close(struct rtnl_handle *rth)
+{
+ if (rth->fd >= 0) {
+ close(rth->fd);
+ rth->fd = -1;
+ }
+}
+
+int rtnl_open_byproto(struct rtnl_handle *rth, unsigned int subscriptions,
+ int protocol)
+{
+ socklen_t addr_len;
+ int sndbuf = 32768;
+ int one = 1;
+
+ memset(rth, 0, sizeof(*rth));
+ rth->proto = protocol;
+ rth->fd = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, protocol);
+ if (rth->fd < 0) {
+ perror("Cannot open netlink socket");
+ return -1;
+ }
+ if (setsockopt(rth->fd, SOL_SOCKET, SO_SNDBUF,
+ &sndbuf, sizeof(sndbuf)) < 0) {
+ perror("SO_SNDBUF");
+ goto err;
+ }
+ if (setsockopt(rth->fd, SOL_SOCKET, SO_RCVBUF,
+ &rcvbuf, sizeof(rcvbuf)) < 0) {
+ perror("SO_RCVBUF");
+ goto err;
+ }
+
+ /* Older kernels may not support extended ACK reporting */
+ setsockopt(rth->fd, SOL_NETLINK, NETLINK_EXT_ACK,
+ &one, sizeof(one));
+
+ memset(&rth->local, 0, sizeof(rth->local));
+ rth->local.nl_family = AF_NETLINK;
+ rth->local.nl_groups = subscriptions;
+
+ if (bind(rth->fd, (struct sockaddr *)&rth->local,
+ sizeof(rth->local)) < 0) {
+ perror("Cannot bind netlink socket");
+ goto err;
+ }
+ addr_len = sizeof(rth->local);
+ if (getsockname(rth->fd, (struct sockaddr *)&rth->local,
+ &addr_len) < 0) {
+ perror("Cannot getsockname");
+ goto err;
+ }
+ if (addr_len != sizeof(rth->local)) {
+ fprintf(stderr, "Wrong address length %d\n", addr_len);
+ goto err;
+ }
+ if (rth->local.nl_family != AF_NETLINK) {
+ fprintf(stderr, "Wrong address family %d\n",
+ rth->local.nl_family);
+ goto err;
+ }
+ rth->seq = time(NULL);
+ return 0;
+err:
+ rtnl_close(rth);
+ return -1;
+}
+
+int rtnl_open(struct rtnl_handle *rth, unsigned int subscriptions)
+{
+ return rtnl_open_byproto(rth, subscriptions, NETLINK_ROUTE);
+}
+
+static int __rtnl_recvmsg(int fd, struct msghdr *msg, int flags)
+{
+ int len;
+
+ do {
+ len = recvmsg(fd, msg, flags);
+ } while (len < 0 && (errno == EINTR || errno == EAGAIN));
+ if (len < 0) {
+ fprintf(stderr, "netlink receive error %s (%d)\n",
+ strerror(errno), errno);
+ return -errno;
+ }
+ if (len == 0) {
+ fprintf(stderr, "EOF on netlink\n");
+ return -ENODATA;
+ }
+ return len;
+}
+
+static int rtnl_recvmsg(int fd, struct msghdr *msg, char **answer)
+{
+ struct iovec *iov = msg->msg_iov;
+ char *buf;
+ int len;
+
+ iov->iov_base = NULL;
+ iov->iov_len = 0;
+
+ len = __rtnl_recvmsg(fd, msg, MSG_PEEK | MSG_TRUNC);
+ if (len < 0)
+ return len;
+ if (len < 32768)
+ len = 32768;
+ buf = malloc(len);
+ if (!buf) {
+ fprintf(stderr, "malloc error: not enough buffer\n");
+ return -ENOMEM;
+ }
+ iov->iov_base = buf;
+ iov->iov_len = len;
+ len = __rtnl_recvmsg(fd, msg, 0);
+ if (len < 0) {
+ free(buf);
+ return len;
+ }
+ if (answer)
+ *answer = buf;
+ else
+ free(buf);
+ return len;
+}
+
+static void rtnl_talk_error(struct nlmsghdr *h, struct nlmsgerr *err,
+ nl_ext_ack_fn_t errfn)
+{
+ fprintf(stderr, "RTNETLINK answers: %s\n",
+ strerror(-err->error));
+}
+
+static int __rtnl_talk_iov(struct rtnl_handle *rtnl, struct iovec *iov,
+ size_t iovlen, struct nlmsghdr **answer,
+ bool show_rtnl_err, nl_ext_ack_fn_t errfn)
+{
+ struct sockaddr_nl nladdr = { .nl_family = AF_NETLINK };
+ struct iovec riov;
+ struct msghdr msg = {
+ .msg_name = &nladdr,
+ .msg_namelen = sizeof(nladdr),
+ .msg_iov = iov,
+ .msg_iovlen = iovlen,
+ };
+ unsigned int seq = 0;
+ struct nlmsghdr *h;
+ int i, status;
+ char *buf;
+
+ for (i = 0; i < iovlen; i++) {
+ h = iov[i].iov_base;
+ h->nlmsg_seq = seq = ++rtnl->seq;
+ if (answer == NULL)
+ h->nlmsg_flags |= NLM_F_ACK;
+ }
+ status = sendmsg(rtnl->fd, &msg, 0);
+ if (status < 0) {
+ perror("Cannot talk to rtnetlink");
+ return -1;
+ }
+ /* change msg to use the response iov */
+ msg.msg_iov = &riov;
+ msg.msg_iovlen = 1;
+ i = 0;
+ while (1) {
+next:
+ status = rtnl_recvmsg(rtnl->fd, &msg, &buf);
+ ++i;
+ if (status < 0)
+ return status;
+ if (msg.msg_namelen != sizeof(nladdr)) {
+ fprintf(stderr,
+ "Sender address length == %d!\n",
+ msg.msg_namelen);
+ exit(1);
+ }
+ for (h = (struct nlmsghdr *)buf; status >= sizeof(*h); ) {
+ int len = h->nlmsg_len;
+ int l = len - sizeof(*h);
+
+ if (l < 0 || len > status) {
+ if (msg.msg_flags & MSG_TRUNC) {
+ fprintf(stderr, "Truncated message!\n");
+ free(buf);
+ return -1;
+ }
+ fprintf(stderr,
+ "Malformed message: len=%d!\n",
+ len);
+ exit(1);
+ }
+ if (nladdr.nl_pid != 0 ||
+ h->nlmsg_pid != rtnl->local.nl_pid ||
+ h->nlmsg_seq > seq || h->nlmsg_seq < seq - iovlen) {
+ /* Don't forget to skip that message. */
+ status -= NLMSG_ALIGN(len);
+ h = (struct nlmsghdr *)((char *)h + NLMSG_ALIGN(len));
+ continue;
+ }
+ if (h->nlmsg_type == NLMSG_ERROR) {
+ struct nlmsgerr *err = (struct nlmsgerr *)NLMSG_DATA(h);
+ int error = err->error;
+
+ if (l < sizeof(struct nlmsgerr)) {
+ fprintf(stderr, "ERROR truncated\n");
+ free(buf);
+ return -1;
+ }
+ if (error) {
+ errno = -error;
+ if (rtnl->proto != NETLINK_SOCK_DIAG &&
+ show_rtnl_err)
+ rtnl_talk_error(h, err, errfn);
+ }
+ if (i < iovlen) {
+ free(buf);
+ goto next;
+ }
+ if (error) {
+ free(buf);
+ return -i;
+ }
+ if (answer)
+ *answer = (struct nlmsghdr *)buf;
+ else
+ free(buf);
+ return 0;
+ }
+ if (answer) {
+ *answer = (struct nlmsghdr *)buf;
+ return 0;
+ }
+ fprintf(stderr, "Unexpected reply!\n");
+ status -= NLMSG_ALIGN(len);
+ h = (struct nlmsghdr *)((char *)h + NLMSG_ALIGN(len));
+ }
+ free(buf);
+ if (msg.msg_flags & MSG_TRUNC) {
+ fprintf(stderr, "Message truncated!\n");
+ continue;
+ }
+ if (status) {
+ fprintf(stderr, "Remnant of size %d!\n", status);
+ exit(1);
+ }
+ }
+}
+
+static int __rtnl_talk(struct rtnl_handle *rtnl, struct nlmsghdr *n,
+ struct nlmsghdr **answer, bool show_rtnl_err,
+ nl_ext_ack_fn_t errfn)
+{
+ struct iovec iov = {
+ .iov_base = n,
+ .iov_len = n->nlmsg_len,
+ };
+
+ return __rtnl_talk_iov(rtnl, &iov, 1, answer, show_rtnl_err, errfn);
+}
+
+int rtnl_talk(struct rtnl_handle *rtnl, struct nlmsghdr *n,
+ struct nlmsghdr **answer)
+{
+ return __rtnl_talk(rtnl, n, answer, true, NULL);
+}
+
+int addattr(struct nlmsghdr *n, int maxlen, int type)
+{
+ return addattr_l(n, maxlen, type, NULL, 0);
+}
+
+int addattr8(struct nlmsghdr *n, int maxlen, int type, __u8 data)
+{
+ return addattr_l(n, maxlen, type, &data, sizeof(__u8));
+}
+
+int addattr16(struct nlmsghdr *n, int maxlen, int type, __u16 data)
+{
+ return addattr_l(n, maxlen, type, &data, sizeof(__u16));
+}
+
+int addattr32(struct nlmsghdr *n, int maxlen, int type, __u32 data)
+{
+ return addattr_l(n, maxlen, type, &data, sizeof(__u32));
+}
+
+int addattr64(struct nlmsghdr *n, int maxlen, int type, __u64 data)
+{
+ return addattr_l(n, maxlen, type, &data, sizeof(__u64));
+}
+
+int addattrstrz(struct nlmsghdr *n, int maxlen, int type, const char *str)
+{
+ return addattr_l(n, maxlen, type, str, strlen(str)+1);
+}
+
+int addattr_l(struct nlmsghdr *n, int maxlen, int type, const void *data,
+ int alen)
+{
+ int len = RTA_LENGTH(alen);
+ struct rtattr *rta;
+
+ if (NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(len) > maxlen) {
+ fprintf(stderr, "%s: Message exceeded bound of %d\n",
+ __func__, maxlen);
+ return -1;
+ }
+ rta = NLMSG_TAIL(n);
+ rta->rta_type = type;
+ rta->rta_len = len;
+ if (alen)
+ memcpy(RTA_DATA(rta), data, alen);
+ n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + RTA_ALIGN(len);
+ return 0;
+}
+
+int addraw_l(struct nlmsghdr *n, int maxlen, const void *data, int len)
+{
+ if (NLMSG_ALIGN(n->nlmsg_len) + NLMSG_ALIGN(len) > maxlen) {
+ fprintf(stderr, "%s: Message exceeded bound of %d\n",
+ __func__, maxlen);
+ return -1;
+ }
+
+ memcpy(NLMSG_TAIL(n), data, len);
+ memset((void *) NLMSG_TAIL(n) + len, 0, NLMSG_ALIGN(len) - len);
+ n->nlmsg_len = NLMSG_ALIGN(n->nlmsg_len) + NLMSG_ALIGN(len);
+ return 0;
+}
+
+struct rtattr *addattr_nest(struct nlmsghdr *n, int maxlen, int type)
+{
+ struct rtattr *nest = NLMSG_TAIL(n);
+
+ addattr_l(n, maxlen, type, NULL, 0);
+ return nest;
+}
+
+int addattr_nest_end(struct nlmsghdr *n, struct rtattr *nest)
+{
+ nest->rta_len = (void *)NLMSG_TAIL(n) - (void *)nest;
+ return n->nlmsg_len;
+}
diff --git a/tools/testing/selftests/bpf/netlink_helpers.h b/tools/testing/selftests/bpf/netlink_helpers.h
new file mode 100644
index 000000000000..68116818a47e
--- /dev/null
+++ b/tools/testing/selftests/bpf/netlink_helpers.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef NETLINK_HELPERS_H
+#define NETLINK_HELPERS_H
+
+#include <string.h>
+#include <linux/netlink.h>
+#include <linux/rtnetlink.h>
+
+struct rtnl_handle {
+ int fd;
+ struct sockaddr_nl local;
+ struct sockaddr_nl peer;
+ __u32 seq;
+ __u32 dump;
+ int proto;
+ FILE *dump_fp;
+#define RTNL_HANDLE_F_LISTEN_ALL_NSID 0x01
+#define RTNL_HANDLE_F_SUPPRESS_NLERR 0x02
+#define RTNL_HANDLE_F_STRICT_CHK 0x04
+ int flags;
+};
+
+#define NLMSG_TAIL(nmsg) \
+ ((struct rtattr *) (((void *) (nmsg)) + NLMSG_ALIGN((nmsg)->nlmsg_len)))
+
+typedef int (*nl_ext_ack_fn_t)(const char *errmsg, uint32_t off,
+ const struct nlmsghdr *inner_nlh);
+
+int rtnl_open(struct rtnl_handle *rth, unsigned int subscriptions)
+ __attribute__((warn_unused_result));
+void rtnl_close(struct rtnl_handle *rth);
+int rtnl_talk(struct rtnl_handle *rtnl, struct nlmsghdr *n,
+ struct nlmsghdr **answer)
+ __attribute__((warn_unused_result));
+
+int addattr(struct nlmsghdr *n, int maxlen, int type);
+int addattr8(struct nlmsghdr *n, int maxlen, int type, __u8 data);
+int addattr16(struct nlmsghdr *n, int maxlen, int type, __u16 data);
+int addattr32(struct nlmsghdr *n, int maxlen, int type, __u32 data);
+int addattr64(struct nlmsghdr *n, int maxlen, int type, __u64 data);
+int addattrstrz(struct nlmsghdr *n, int maxlen, int type, const char *data);
+int addattr_l(struct nlmsghdr *n, int maxlen, int type, const void *data, int alen);
+int addraw_l(struct nlmsghdr *n, int maxlen, const void *data, int len);
+struct rtattr *addattr_nest(struct nlmsghdr *n, int maxlen, int type);
+int addattr_nest_end(struct nlmsghdr *n, struct rtattr *nest);
+#endif /* NETLINK_HELPERS_H */
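Taken together, these helpers follow the familiar iproute2 pattern: open a route socket, build a request with the addattr*() helpers, then send it with rtnl_talk(). A hedged sketch of creating a link this way (the "dummy" kind and the "nk0" name are illustrative only, not part of this patch):

#include <sys/socket.h>
#include <linux/rtnetlink.h>
#include <linux/if_link.h>
#include "netlink_helpers.h"

static int create_example_link(void)
{
	struct {
		struct nlmsghdr n;
		struct ifinfomsg i;
		char buf[1024];
	} req = {
		.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg)),
		.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL,
		.n.nlmsg_type = RTM_NEWLINK,
		.i.ifi_family = AF_UNSPEC,
	};
	struct rtnl_handle rth;
	struct rtattr *linkinfo;
	int err;

	if (rtnl_open(&rth, 0) < 0)
		return -1;

	addattrstrz(&req.n, sizeof(req), IFLA_IFNAME, "nk0");
	linkinfo = addattr_nest(&req.n, sizeof(req), IFLA_LINKINFO);
	addattrstrz(&req.n, sizeof(req), IFLA_INFO_KIND, "dummy");
	addattr_nest_end(&req.n, linkinfo);

	err = rtnl_talk(&rth, &req.n, NULL);	/* NULL answer => NLM_F_ACK */
	rtnl_close(&rth);
	return err;
}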
diff --git a/tools/testing/selftests/bpf/network_helpers.c b/tools/testing/selftests/bpf/network_helpers.c
index da72a3a66230..6db27a9088e9 100644
--- a/tools/testing/selftests/bpf/network_helpers.c
+++ b/tools/testing/selftests/bpf/network_helpers.c
@@ -11,6 +11,7 @@
#include <arpa/inet.h>
#include <sys/mount.h>
#include <sys/stat.h>
+#include <sys/un.h>
#include <linux/err.h>
#include <linux/in.h>
@@ -257,6 +258,26 @@ static int connect_fd_to_addr(int fd,
return 0;
}
+int connect_to_addr(const struct sockaddr_storage *addr, socklen_t addrlen, int type)
+{
+ int fd;
+
+ fd = socket(addr->ss_family, type, 0);
+ if (fd < 0) {
+ log_err("Failed to create client socket");
+ return -1;
+ }
+
+ if (connect_fd_to_addr(fd, addr, addrlen, false))
+ goto error_close;
+
+ return fd;
+
+error_close:
+ save_errno_close(fd);
+ return -1;
+}
+
static const struct network_helper_opts default_opts;
int connect_to_fd_opts(int server_fd, const struct network_helper_opts *opts)
@@ -380,6 +401,19 @@ int make_sockaddr(int family, const char *addr_str, __u16 port,
if (len)
*len = sizeof(*sin6);
return 0;
+ } else if (family == AF_UNIX) {
+ /* Note that we always use abstract unix sockets to avoid having
+ * to clean up leftover files.
+ */
+ struct sockaddr_un *sun = (void *)addr;
+
+ memset(addr, 0, sizeof(*sun));
+ sun->sun_family = family;
+ sun->sun_path[0] = 0;
+ strcpy(sun->sun_path + 1, addr_str);
+ if (len)
+ *len = offsetof(struct sockaddr_un, sun_path) + 1 + strlen(addr_str);
+ return 0;
}
return -1;
}
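A short sketch of how this pairs with the connect_to_addr() helper added above (the caller-supplied name is illustrative): make_sockaddr() builds the abstract AF_UNIX address, a leading NUL byte followed by the name, and connect_to_addr() creates the socket and connects it.

#include <sys/socket.h>
#include "network_helpers.h"

static int connect_abstract_unix(const char *name)
{
	struct sockaddr_storage addr;
	socklen_t len;

	/* Abstract AF_UNIX address: sun_path[0] == '\0', then the name. */
	if (make_sockaddr(AF_UNIX, name, 0, &addr, &len))
		return -1;

	return connect_to_addr(&addr, len, SOCK_STREAM);	/* caller closes the fd */
}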
diff --git a/tools/testing/selftests/bpf/network_helpers.h b/tools/testing/selftests/bpf/network_helpers.h
index 5eccc67d1a99..34f1200a781b 100644
--- a/tools/testing/selftests/bpf/network_helpers.h
+++ b/tools/testing/selftests/bpf/network_helpers.h
@@ -51,6 +51,7 @@ int *start_reuseport_server(int family, int type, const char *addr_str,
__u16 port, int timeout_ms,
unsigned int nr_listens);
void free_fds(int *fds, unsigned int nr_close_fds);
+int connect_to_addr(const struct sockaddr_storage *addr, socklen_t len, int type);
int connect_to_fd(int server_fd, int timeout_ms);
int connect_to_fd_opts(int server_fd, const struct network_helper_opts *opts);
int connect_fd_to_fd(int client_fd, int server_fd, int timeout_ms);
diff --git a/tools/testing/selftests/bpf/prog_tests/align.c b/tools/testing/selftests/bpf/prog_tests/align.c
index b92770592563..465c1c3a3d3c 100644
--- a/tools/testing/selftests/bpf/prog_tests/align.c
+++ b/tools/testing/selftests/bpf/prog_tests/align.c
@@ -6,6 +6,7 @@
struct bpf_reg_match {
unsigned int line;
+ const char *reg;
const char *match;
};
@@ -39,13 +40,13 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(off=0,imm=0)"},
- {0, "R10=fp0"},
- {0, "R3_w=2"},
- {1, "R3_w=4"},
- {2, "R3_w=8"},
- {3, "R3_w=16"},
- {4, "R3_w=32"},
+ {0, "R1", "ctx(off=0,imm=0)"},
+ {0, "R10", "fp0"},
+ {0, "R3_w", "2"},
+ {1, "R3_w", "4"},
+ {2, "R3_w", "8"},
+ {3, "R3_w", "16"},
+ {4, "R3_w", "32"},
},
},
{
@@ -67,19 +68,19 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(off=0,imm=0)"},
- {0, "R10=fp0"},
- {0, "R3_w=1"},
- {1, "R3_w=2"},
- {2, "R3_w=4"},
- {3, "R3_w=8"},
- {4, "R3_w=16"},
- {5, "R3_w=1"},
- {6, "R4_w=32"},
- {7, "R4_w=16"},
- {8, "R4_w=8"},
- {9, "R4_w=4"},
- {10, "R4_w=2"},
+ {0, "R1", "ctx(off=0,imm=0)"},
+ {0, "R10", "fp0"},
+ {0, "R3_w", "1"},
+ {1, "R3_w", "2"},
+ {2, "R3_w", "4"},
+ {3, "R3_w", "8"},
+ {4, "R3_w", "16"},
+ {5, "R3_w", "1"},
+ {6, "R4_w", "32"},
+ {7, "R4_w", "16"},
+ {8, "R4_w", "8"},
+ {9, "R4_w", "4"},
+ {10, "R4_w", "2"},
},
},
{
@@ -96,14 +97,14 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(off=0,imm=0)"},
- {0, "R10=fp0"},
- {0, "R3_w=4"},
- {1, "R3_w=8"},
- {2, "R3_w=10"},
- {3, "R4_w=8"},
- {4, "R4_w=12"},
- {5, "R4_w=14"},
+ {0, "R1", "ctx(off=0,imm=0)"},
+ {0, "R10", "fp0"},
+ {0, "R3_w", "4"},
+ {1, "R3_w", "8"},
+ {2, "R3_w", "10"},
+ {3, "R4_w", "8"},
+ {4, "R4_w", "12"},
+ {5, "R4_w", "14"},
},
},
{
@@ -118,12 +119,12 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {0, "R1=ctx(off=0,imm=0)"},
- {0, "R10=fp0"},
- {0, "R3_w=7"},
- {1, "R3_w=7"},
- {2, "R3_w=14"},
- {3, "R3_w=56"},
+ {0, "R1", "ctx(off=0,imm=0)"},
+ {0, "R10", "fp0"},
+ {0, "R3_w", "7"},
+ {1, "R3_w", "7"},
+ {2, "R3_w", "14"},
+ {3, "R3_w", "56"},
},
},
@@ -161,19 +162,19 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {6, "R0_w=pkt(off=8,r=8,imm=0)"},
- {6, "R3_w=scalar(umax=255,var_off=(0x0; 0xff))"},
- {7, "R3_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
- {8, "R3_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
- {9, "R3_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
- {10, "R3_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
- {12, "R3_w=pkt_end(off=0,imm=0)"},
- {17, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
- {18, "R4_w=scalar(umax=8160,var_off=(0x0; 0x1fe0))"},
- {19, "R4_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
- {20, "R4_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
- {21, "R4_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
- {22, "R4_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
+ {6, "R0_w", "pkt(off=8,r=8,imm=0)"},
+ {6, "R3_w", "var_off=(0x0; 0xff)"},
+ {7, "R3_w", "var_off=(0x0; 0x1fe)"},
+ {8, "R3_w", "var_off=(0x0; 0x3fc)"},
+ {9, "R3_w", "var_off=(0x0; 0x7f8)"},
+ {10, "R3_w", "var_off=(0x0; 0xff0)"},
+ {12, "R3_w", "pkt_end(off=0,imm=0)"},
+ {17, "R4_w", "var_off=(0x0; 0xff)"},
+ {18, "R4_w", "var_off=(0x0; 0x1fe0)"},
+ {19, "R4_w", "var_off=(0x0; 0xff0)"},
+ {20, "R4_w", "var_off=(0x0; 0x7f8)"},
+ {21, "R4_w", "var_off=(0x0; 0x3fc)"},
+ {22, "R4_w", "var_off=(0x0; 0x1fe)"},
},
},
{
@@ -194,16 +195,16 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {6, "R3_w=scalar(umax=255,var_off=(0x0; 0xff))"},
- {7, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
- {8, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
- {9, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
- {10, "R4_w=scalar(umax=510,var_off=(0x0; 0x1fe))"},
- {11, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
- {12, "R4_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
- {13, "R4_w=scalar(id=1,umax=255,var_off=(0x0; 0xff))"},
- {14, "R4_w=scalar(umax=2040,var_off=(0x0; 0x7f8))"},
- {15, "R4_w=scalar(umax=4080,var_off=(0x0; 0xff0))"},
+ {6, "R3_w", "var_off=(0x0; 0xff)"},
+ {7, "R4_w", "var_off=(0x0; 0xff)"},
+ {8, "R4_w", "var_off=(0x0; 0xff)"},
+ {9, "R4_w", "var_off=(0x0; 0xff)"},
+ {10, "R4_w", "var_off=(0x0; 0x1fe)"},
+ {11, "R4_w", "var_off=(0x0; 0xff)"},
+ {12, "R4_w", "var_off=(0x0; 0x3fc)"},
+ {13, "R4_w", "var_off=(0x0; 0xff)"},
+ {14, "R4_w", "var_off=(0x0; 0x7f8)"},
+ {15, "R4_w", "var_off=(0x0; 0xff0)"},
},
},
{
@@ -234,14 +235,14 @@ static struct bpf_align_test tests[] = {
},
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.matches = {
- {2, "R5_w=pkt(off=0,r=0,imm=0)"},
- {4, "R5_w=pkt(off=14,r=0,imm=0)"},
- {5, "R4_w=pkt(off=14,r=0,imm=0)"},
- {9, "R2=pkt(off=0,r=18,imm=0)"},
- {10, "R5=pkt(off=14,r=18,imm=0)"},
- {10, "R4_w=scalar(umax=255,var_off=(0x0; 0xff))"},
- {13, "R4_w=scalar(umax=65535,var_off=(0x0; 0xffff))"},
- {14, "R4_w=scalar(umax=65535,var_off=(0x0; 0xffff))"},
+ {2, "R5_w", "pkt(off=0,r=0,imm=0)"},
+ {4, "R5_w", "pkt(off=14,r=0,imm=0)"},
+ {5, "R4_w", "pkt(off=14,r=0,imm=0)"},
+ {9, "R2", "pkt(off=0,r=18,imm=0)"},
+ {10, "R5", "pkt(off=14,r=18,imm=0)"},
+ {10, "R4_w", "var_off=(0x0; 0xff)"},
+ {13, "R4_w", "var_off=(0x0; 0xffff)"},
+ {14, "R4_w", "var_off=(0x0; 0xffff)"},
},
},
{
@@ -298,20 +299,20 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(off=0,r=8,imm=0)"},
- {7, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {6, "R2_w", "pkt(off=0,r=8,imm=0)"},
+ {7, "R6_w", "var_off=(0x0; 0x3fc)"},
/* Offset is added to packet pointer R5, resulting in
* known fixed offset, and variable offset from R6.
*/
- {11, "R5_w=pkt(id=1,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
+ {11, "R5_w", "pkt(id=1,off=14,"},
/* At the time the word size load is performed from R5,
* it's total offset is NET_IP_ALIGN + reg->off (0) +
* reg->aux_off (14) which is 16. Then the variable
* offset is considered using reg->aux_off_align which
* is 4 and meets the load's requirements.
*/
- {15, "R4=pkt(id=1,off=18,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
- {15, "R5=pkt(id=1,off=14,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
+ {15, "R4", "var_off=(0x0; 0x3fc)"},
+ {15, "R5", "var_off=(0x0; 0x3fc)"},
/* Variable offset is added to R5 packet pointer,
* resulting in auxiliary alignment of 4. To avoid BPF
* verifier's precision backtracking logging
@@ -319,46 +320,46 @@ static struct bpf_align_test tests[] = {
* instruction to validate R5 state. We also check
* that R4 is what it should be in such case.
*/
- {18, "R4_w=pkt(id=2,off=0,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
- {18, "R5_w=pkt(id=2,off=0,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
+ {18, "R4_w", "var_off=(0x0; 0x3fc)"},
+ {18, "R5_w", "var_off=(0x0; 0x3fc)"},
/* Constant offset is added to R5, resulting in
* reg->off of 14.
*/
- {19, "R5_w=pkt(id=2,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
+ {19, "R5_w", "pkt(id=2,off=14,"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off
* (14) which is 16. Then the variable offset is 4-byte
* aligned, so the total offset is 4-byte aligned and
* meets the load's requirements.
*/
- {24, "R4=pkt(id=2,off=18,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
- {24, "R5=pkt(id=2,off=14,r=18,umax=1020,var_off=(0x0; 0x3fc))"},
+ {24, "R4", "var_off=(0x0; 0x3fc)"},
+ {24, "R5", "var_off=(0x0; 0x3fc)"},
/* Constant offset is added to R5 packet pointer,
* resulting in reg->off value of 14.
*/
- {26, "R5_w=pkt(off=14,r=8"},
+ {26, "R5_w", "pkt(off=14,r=8,"},
/* Variable offset is added to R5, resulting in a
* variable offset of (4n). See comment for insn #18
* for R4 = R5 trick.
*/
- {28, "R4_w=pkt(id=3,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
- {28, "R5_w=pkt(id=3,off=14,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
+ {28, "R4_w", "var_off=(0x0; 0x3fc)"},
+ {28, "R5_w", "var_off=(0x0; 0x3fc)"},
/* Constant is added to R5 again, setting reg->off to 18. */
- {29, "R5_w=pkt(id=3,off=18,r=0,umax=1020,var_off=(0x0; 0x3fc))"},
+ {29, "R5_w", "pkt(id=3,off=18,"},
/* And once more we add a variable; resulting var_off
* is still (4n), fixed offset is not changed.
* Also, we create a new reg->id.
*/
- {31, "R4_w=pkt(id=4,off=18,r=0,umax=2040,var_off=(0x0; 0x7fc)"},
- {31, "R5_w=pkt(id=4,off=18,r=0,umax=2040,var_off=(0x0; 0x7fc)"},
+ {31, "R4_w", "var_off=(0x0; 0x7fc)"},
+ {31, "R5_w", "var_off=(0x0; 0x7fc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (18)
* which is 20. Then the variable offset is (4n), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {35, "R4=pkt(id=4,off=22,r=22,umax=2040,var_off=(0x0; 0x7fc)"},
- {35, "R5=pkt(id=4,off=18,r=22,umax=2040,var_off=(0x0; 0x7fc)"},
+ {35, "R4", "var_off=(0x0; 0x7fc)"},
+ {35, "R5", "var_off=(0x0; 0x7fc)"},
},
},
{
@@ -396,36 +397,36 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(off=0,r=8,imm=0)"},
- {7, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {6, "R2_w", "pkt(off=0,r=8,imm=0)"},
+ {7, "R6_w", "var_off=(0x0; 0x3fc)"},
/* Adding 14 makes R6 be (4n+2) */
- {8, "R6_w=scalar(umin=14,umax=1034,var_off=(0x2; 0x7fc))"},
+ {8, "R6_w", "var_off=(0x2; 0x7fc)"},
/* Packet pointer has (4n+2) offset */
- {11, "R5_w=pkt(id=1,off=0,r=0,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
- {12, "R4=pkt(id=1,off=4,r=0,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
+ {11, "R5_w", "var_off=(0x2; 0x7fc)"},
+ {12, "R4", "var_off=(0x2; 0x7fc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {15, "R5=pkt(id=1,off=0,r=4,umin=14,umax=1034,var_off=(0x2; 0x7fc)"},
+ {15, "R5", "var_off=(0x2; 0x7fc)"},
/* Newly read value in R6 was shifted left by 2, so has
* known alignment of 4.
*/
- {17, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {17, "R6_w", "var_off=(0x0; 0x3fc)"},
/* Added (4n) to packet pointer's (4n+2) var_off, giving
* another (4n+2).
*/
- {19, "R5_w=pkt(id=2,off=0,r=0,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
- {20, "R4=pkt(id=2,off=4,r=0,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
+ {19, "R5_w", "var_off=(0x2; 0xffc)"},
+ {20, "R4", "var_off=(0x2; 0xffc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {23, "R5=pkt(id=2,off=0,r=4,umin=14,umax=2054,var_off=(0x2; 0xffc)"},
+ {23, "R5", "var_off=(0x2; 0xffc)"},
},
},
{
@@ -458,18 +459,18 @@ static struct bpf_align_test tests[] = {
.prog_type = BPF_PROG_TYPE_SCHED_CLS,
.result = REJECT,
.matches = {
- {3, "R5_w=pkt_end(off=0,imm=0)"},
+ {3, "R5_w", "pkt_end(off=0,imm=0)"},
/* (ptr - ptr) << 2 == unknown, (4n) */
- {5, "R5_w=scalar(smax=9223372036854775804,umax=18446744073709551612,var_off=(0x0; 0xfffffffffffffffc)"},
+ {5, "R5_w", "var_off=(0x0; 0xfffffffffffffffc)"},
/* (4n) + 14 == (4n+2). We blow our bounds, because
* the add could overflow.
*/
- {6, "R5_w=scalar(smin=-9223372036854775806,smax=9223372036854775806,umin=2,umax=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
+ {6, "R5_w", "var_off=(0x2; 0xfffffffffffffffc)"},
/* Checked s>=0 */
- {9, "R5=scalar(umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+ {9, "R5", "var_off=(0x2; 0x7ffffffffffffffc)"},
/* packet pointer + nonnegative (4n+2) */
- {11, "R6_w=pkt(id=1,off=0,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
- {12, "R4_w=pkt(id=1,off=4,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+ {11, "R6_w", "var_off=(0x2; 0x7ffffffffffffffc)"},
+ {12, "R4_w", "var_off=(0x2; 0x7ffffffffffffffc)"},
/* NET_IP_ALIGN + (4n+2) == (4n), alignment is fine.
* We checked the bounds, but it might have been able
* to overflow if the packet pointer started in the
@@ -477,7 +478,7 @@ static struct bpf_align_test tests[] = {
* So we did not get a 'range' on R6, and the access
* attempt will fail.
*/
- {15, "R6_w=pkt(id=1,off=0,r=0,umin=2,umax=9223372036854775806,var_off=(0x2; 0x7ffffffffffffffc)"},
+ {15, "R6_w", "var_off=(0x2; 0x7ffffffffffffffc)"},
}
},
{
@@ -512,24 +513,23 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(off=0,r=8,imm=0)"},
- {8, "R6_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {6, "R2_w", "pkt(off=0,r=8,imm=0)"},
+ {8, "R6_w", "var_off=(0x0; 0x3fc)"},
/* Adding 14 makes R6 be (4n+2) */
- {9, "R6_w=scalar(umin=14,umax=1034,var_off=(0x2; 0x7fc))"},
+ {9, "R6_w", "var_off=(0x2; 0x7fc)"},
/* New unknown value in R7 is (4n) */
- {10, "R7_w=scalar(umax=1020,var_off=(0x0; 0x3fc))"},
+ {10, "R7_w", "var_off=(0x0; 0x3fc)"},
/* Subtracting it from R6 blows our unsigned bounds */
- {11, "R6=scalar(smin=-1006,smax=1034,umin=2,umax=18446744073709551614,var_off=(0x2; 0xfffffffffffffffc)"},
+ {11, "R6", "var_off=(0x2; 0xfffffffffffffffc)"},
/* Checked s>= 0 */
- {14, "R6=scalar(umin=2,umax=1034,var_off=(0x2; 0x7fc))"},
+ {14, "R6", "var_off=(0x2; 0x7fc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {20, "R5=pkt(id=2,off=0,r=4,umin=2,umax=1034,var_off=(0x2; 0x7fc)"},
-
+ {20, "R5", "var_off=(0x2; 0x7fc)"},
},
},
{
@@ -566,23 +566,23 @@ static struct bpf_align_test tests[] = {
/* Calculated offset in R6 has unknown value, but known
* alignment of 4.
*/
- {6, "R2_w=pkt(off=0,r=8,imm=0)"},
- {9, "R6_w=scalar(umax=60,var_off=(0x0; 0x3c))"},
+ {6, "R2_w", "pkt(off=0,r=8,imm=0)"},
+ {9, "R6_w", "var_off=(0x0; 0x3c)"},
/* Adding 14 makes R6 be (4n+2) */
- {10, "R6_w=scalar(umin=14,umax=74,var_off=(0x2; 0x7c))"},
+ {10, "R6_w", "var_off=(0x2; 0x7c)"},
/* Subtracting from packet pointer overflows ubounds */
- {13, "R5_w=pkt(id=2,off=0,r=8,umin=18446744073709551542,umax=18446744073709551602,var_off=(0xffffffffffffff82; 0x7c)"},
+ {13, "R5_w", "var_off=(0xffffffffffffff82; 0x7c)"},
/* New unknown value in R7 is (4n), >= 76 */
- {14, "R7_w=scalar(umin=76,umax=1096,var_off=(0x0; 0x7fc))"},
+ {14, "R7_w", "var_off=(0x0; 0x7fc)"},
/* Adding it to packet pointer gives nice bounds again */
- {16, "R5_w=pkt(id=3,off=0,r=0,umin=2,umax=1082,var_off=(0x2; 0x7fc)"},
+ {16, "R5_w", "var_off=(0x2; 0x7fc)"},
/* At the time the word size load is performed from R5,
* its total fixed offset is NET_IP_ALIGN + reg->off (0)
* which is 2. Then the variable offset is (4n+2), so
* the total offset is 4-byte aligned and meets the
* load's requirements.
*/
- {20, "R5=pkt(id=3,off=0,r=4,umin=2,umax=1082,var_off=(0x2; 0x7fc)"},
+ {20, "R5", "var_off=(0x2; 0x7fc)"},
},
},
};
@@ -635,6 +635,7 @@ static int do_test_single(struct bpf_align_test *test)
line_ptr = strtok(bpf_vlog_copy, "\n");
for (i = 0; i < MAX_MATCHES; i++) {
struct bpf_reg_match m = test->matches[i];
+ const char *p;
int tmp;
if (!m.match)
@@ -649,8 +650,8 @@ static int do_test_single(struct bpf_align_test *test)
line_ptr = strtok(NULL, "\n");
}
if (!line_ptr) {
- printf("Failed to find line %u for match: %s\n",
- m.line, m.match);
+ printf("Failed to find line %u for match: %s=%s\n",
+ m.line, m.reg, m.match);
ret = 1;
printf("%s", bpf_vlog);
break;
@@ -667,15 +668,15 @@ static int do_test_single(struct bpf_align_test *test)
* 6: R0_w=pkt(off=8,r=8,imm=0) R1=ctx(off=0,imm=0) R2_w=pkt(off=0,r=8,imm=0) R3_w=pkt_end(off=0,imm=0) R10=fp0
* 6: (71) r3 = *(u8 *)(r2 +0) ; R2_w=pkt(off=0,r=8,imm=0) R3_w=scalar(umax=255,var_off=(0x0; 0xff))
*/
- while (!strstr(line_ptr, m.match)) {
+ while (!(p = strstr(line_ptr, m.reg)) || !strstr(p, m.match)) {
cur_line = -1;
line_ptr = strtok(NULL, "\n");
sscanf(line_ptr ?: "", "%u: ", &cur_line);
if (!line_ptr || cur_line != m.line)
break;
}
- if (cur_line != m.line || !line_ptr || !strstr(line_ptr, m.match)) {
- printf("Failed to find match %u: %s\n", m.line, m.match);
+ if (cur_line != m.line || !line_ptr || !(p = strstr(line_ptr, m.reg)) || !strstr(p, m.match)) {
+ printf("Failed to find match %u: %s=%s\n", m.line, m.reg, m.match);
ret = 1;
printf("%s", bpf_vlog);
break;
diff --git a/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c b/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c
index d2d9e965eba5..053f4d6da77a 100644
--- a/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c
+++ b/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c
@@ -193,8 +193,8 @@ error:
void test_bloom_filter_map(void)
{
- __u32 *rand_vals, nr_rand_vals;
- struct bloom_filter_map *skel;
+ __u32 *rand_vals = NULL, nr_rand_vals = 0;
+ struct bloom_filter_map *skel = NULL;
int err;
test_fail_cases();
diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
index 1f02168103dd..e3498f607b49 100644
--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
@@ -7,10 +7,10 @@
#include "bpf_iter_ipv6_route.skel.h"
#include "bpf_iter_netlink.skel.h"
#include "bpf_iter_bpf_map.skel.h"
-#include "bpf_iter_task.skel.h"
+#include "bpf_iter_tasks.skel.h"
#include "bpf_iter_task_stack.skel.h"
#include "bpf_iter_task_file.skel.h"
-#include "bpf_iter_task_vma.skel.h"
+#include "bpf_iter_task_vmas.skel.h"
#include "bpf_iter_task_btf.skel.h"
#include "bpf_iter_tcp4.skel.h"
#include "bpf_iter_tcp6.skel.h"
@@ -215,12 +215,12 @@ static void *do_nothing_wait(void *arg)
static void test_task_common_nocheck(struct bpf_iter_attach_opts *opts,
int *num_unknown, int *num_known)
{
- struct bpf_iter_task *skel;
+ struct bpf_iter_tasks *skel;
pthread_t thread_id;
void *ret;
- skel = bpf_iter_task__open_and_load();
- if (!ASSERT_OK_PTR(skel, "bpf_iter_task__open_and_load"))
+ skel = bpf_iter_tasks__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "bpf_iter_tasks__open_and_load"))
return;
ASSERT_OK(pthread_mutex_lock(&do_nothing_mutex), "pthread_mutex_lock");
@@ -239,7 +239,7 @@ static void test_task_common_nocheck(struct bpf_iter_attach_opts *opts,
ASSERT_FALSE(pthread_join(thread_id, &ret) || ret != NULL,
"pthread_join");
- bpf_iter_task__destroy(skel);
+ bpf_iter_tasks__destroy(skel);
}
static void test_task_common(struct bpf_iter_attach_opts *opts, int num_unknown, int num_known)
@@ -307,10 +307,10 @@ static void test_task_pidfd(void)
static void test_task_sleepable(void)
{
- struct bpf_iter_task *skel;
+ struct bpf_iter_tasks *skel;
- skel = bpf_iter_task__open_and_load();
- if (!ASSERT_OK_PTR(skel, "bpf_iter_task__open_and_load"))
+ skel = bpf_iter_tasks__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "bpf_iter_tasks__open_and_load"))
return;
do_dummy_read(skel->progs.dump_task_sleepable);
@@ -320,7 +320,7 @@ static void test_task_sleepable(void)
ASSERT_GT(skel->bss->num_success_copy_from_user_task, 0,
"num_success_copy_from_user_task");
- bpf_iter_task__destroy(skel);
+ bpf_iter_tasks__destroy(skel);
}
static void test_task_stack(void)
@@ -1399,19 +1399,19 @@ static void str_strip_first_line(char *str)
static void test_task_vma_common(struct bpf_iter_attach_opts *opts)
{
int err, iter_fd = -1, proc_maps_fd = -1;
- struct bpf_iter_task_vma *skel;
+ struct bpf_iter_task_vmas *skel;
int len, read_size = 4;
char maps_path[64];
- skel = bpf_iter_task_vma__open();
- if (!ASSERT_OK_PTR(skel, "bpf_iter_task_vma__open"))
+ skel = bpf_iter_task_vmas__open();
+ if (!ASSERT_OK_PTR(skel, "bpf_iter_task_vmas__open"))
return;
skel->bss->pid = getpid();
skel->bss->one_task = opts ? 1 : 0;
- err = bpf_iter_task_vma__load(skel);
- if (!ASSERT_OK(err, "bpf_iter_task_vma__load"))
+ err = bpf_iter_task_vmas__load(skel);
+ if (!ASSERT_OK(err, "bpf_iter_task_vmas__load"))
goto out;
skel->links.proc_maps = bpf_program__attach_iter(
@@ -1462,25 +1462,25 @@ static void test_task_vma_common(struct bpf_iter_attach_opts *opts)
out:
close(proc_maps_fd);
close(iter_fd);
- bpf_iter_task_vma__destroy(skel);
+ bpf_iter_task_vmas__destroy(skel);
}
static void test_task_vma_dead_task(void)
{
- struct bpf_iter_task_vma *skel;
+ struct bpf_iter_task_vmas *skel;
int wstatus, child_pid = -1;
time_t start_tm, cur_tm;
int err, iter_fd = -1;
int wait_sec = 3;
- skel = bpf_iter_task_vma__open();
- if (!ASSERT_OK_PTR(skel, "bpf_iter_task_vma__open"))
+ skel = bpf_iter_task_vmas__open();
+ if (!ASSERT_OK_PTR(skel, "bpf_iter_task_vmas__open"))
return;
skel->bss->pid = getpid();
- err = bpf_iter_task_vma__load(skel);
- if (!ASSERT_OK(err, "bpf_iter_task_vma__load"))
+ err = bpf_iter_task_vmas__load(skel);
+ if (!ASSERT_OK(err, "bpf_iter_task_vmas__load"))
goto out;
skel->links.proc_maps = bpf_program__attach_iter(
@@ -1533,7 +1533,7 @@ static void test_task_vma_dead_task(void)
out:
waitpid(child_pid, &wstatus, 0);
close(iter_fd);
- bpf_iter_task_vma__destroy(skel);
+ bpf_iter_task_vmas__destroy(skel);
}
void test_bpf_sockmap_map_iter_fd(void)
diff --git a/tools/testing/selftests/bpf/prog_tests/btf.c b/tools/testing/selftests/bpf/prog_tests/btf.c
index 4e0cdb593318..92d51f377fe5 100644
--- a/tools/testing/selftests/bpf/prog_tests/btf.c
+++ b/tools/testing/selftests/bpf/prog_tests/btf.c
@@ -7296,7 +7296,7 @@ static struct btf_dedup_test dedup_tests[] = {
BTF_FUNC_PROTO_ENC(0, 2), /* [3] */
BTF_FUNC_PROTO_ARG_ENC(NAME_NTH(2), 1),
BTF_FUNC_PROTO_ARG_ENC(NAME_NTH(3), 1),
- BTF_FUNC_ENC(NAME_NTH(4), 2), /* [4] */
+ BTF_FUNC_ENC(NAME_NTH(4), 3), /* [4] */
/* tag -> t */
BTF_DECL_TAG_ENC(NAME_NTH(5), 2, -1), /* [5] */
BTF_DECL_TAG_ENC(NAME_NTH(5), 2, -1), /* [6] */
@@ -7317,7 +7317,7 @@ static struct btf_dedup_test dedup_tests[] = {
BTF_FUNC_PROTO_ENC(0, 2), /* [3] */
BTF_FUNC_PROTO_ARG_ENC(NAME_NTH(2), 1),
BTF_FUNC_PROTO_ARG_ENC(NAME_NTH(3), 1),
- BTF_FUNC_ENC(NAME_NTH(4), 2), /* [4] */
+ BTF_FUNC_ENC(NAME_NTH(4), 3), /* [4] */
BTF_DECL_TAG_ENC(NAME_NTH(5), 2, -1), /* [5] */
BTF_DECL_TAG_ENC(NAME_NTH(5), 4, -1), /* [6] */
BTF_DECL_TAG_ENC(NAME_NTH(5), 4, 1), /* [7] */
diff --git a/tools/testing/selftests/bpf/prog_tests/connect_ping.c b/tools/testing/selftests/bpf/prog_tests/connect_ping.c
index 289218c2216c..40fe571f2fe7 100644
--- a/tools/testing/selftests/bpf/prog_tests/connect_ping.c
+++ b/tools/testing/selftests/bpf/prog_tests/connect_ping.c
@@ -28,9 +28,9 @@ static void subtest(int cgroup_fd, struct connect_ping *skel,
.sin6_family = AF_INET6,
.sin6_addr = IN6ADDR_LOOPBACK_INIT,
};
- struct sockaddr *sa;
+ struct sockaddr *sa = NULL;
socklen_t sa_len;
- int protocol;
+ int protocol = -1;
int sock_fd;
switch (family) {
diff --git a/tools/testing/selftests/bpf/prog_tests/exceptions.c b/tools/testing/selftests/bpf/prog_tests/exceptions.c
new file mode 100644
index 000000000000..516f4a13013c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/exceptions.c
@@ -0,0 +1,409 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include <network_helpers.h>
+
+#include "exceptions.skel.h"
+#include "exceptions_ext.skel.h"
+#include "exceptions_fail.skel.h"
+#include "exceptions_assert.skel.h"
+
+static char log_buf[1024 * 1024];
+
+static void test_exceptions_failure(void)
+{
+ RUN_TESTS(exceptions_fail);
+}
+
+static void test_exceptions_success(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, ropts,
+ .data_in = &pkt_v4,
+ .data_size_in = sizeof(pkt_v4),
+ .repeat = 1,
+ );
+ struct exceptions_ext *eskel = NULL;
+ struct exceptions *skel;
+ int ret;
+
+ skel = exceptions__open();
+ if (!ASSERT_OK_PTR(skel, "exceptions__open"))
+ return;
+
+ ret = exceptions__load(skel);
+ if (!ASSERT_OK(ret, "exceptions__load"))
+ goto done;
+
+ if (!ASSERT_OK(bpf_map_update_elem(bpf_map__fd(skel->maps.jmp_table), &(int){0},
+ &(int){bpf_program__fd(skel->progs.exception_tail_call_target)}, BPF_ANY),
+ "bpf_map_update_elem jmp_table"))
+ goto done;
+
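+/* Run one successful subtest: test_run the named program and check both the
+ * bpf_prog_test_run_opts() return code and the program's retval. The goto
+ * label skips ahead when the subtest is not selected.
+ */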
+#define RUN_SUCCESS(_prog, return_val) \
+ if (!test__start_subtest(#_prog)) goto _prog##_##return_val; \
+ ret = bpf_prog_test_run_opts(bpf_program__fd(skel->progs._prog), &ropts); \
+ ASSERT_OK(ret, #_prog " prog run ret"); \
+ ASSERT_EQ(ropts.retval, return_val, #_prog " prog run retval"); \
+ _prog##_##return_val:
+
+ RUN_SUCCESS(exception_throw_always_1, 64);
+ RUN_SUCCESS(exception_throw_always_2, 32);
+ RUN_SUCCESS(exception_throw_unwind_1, 16);
+ RUN_SUCCESS(exception_throw_unwind_2, 32);
+ RUN_SUCCESS(exception_throw_default, 0);
+ RUN_SUCCESS(exception_throw_default_value, 5);
+ RUN_SUCCESS(exception_tail_call, 24);
+ RUN_SUCCESS(exception_ext, 0);
+ RUN_SUCCESS(exception_ext_mod_cb_runtime, 35);
+ RUN_SUCCESS(exception_throw_subprog, 1);
+ RUN_SUCCESS(exception_assert_nz_gfunc, 1);
+ RUN_SUCCESS(exception_assert_zero_gfunc, 1);
+ RUN_SUCCESS(exception_assert_neg_gfunc, 1);
+ RUN_SUCCESS(exception_assert_pos_gfunc, 1);
+ RUN_SUCCESS(exception_assert_negeq_gfunc, 1);
+ RUN_SUCCESS(exception_assert_poseq_gfunc, 1);
+ RUN_SUCCESS(exception_assert_nz_gfunc_with, 1);
+ RUN_SUCCESS(exception_assert_zero_gfunc_with, 1);
+ RUN_SUCCESS(exception_assert_neg_gfunc_with, 1);
+ RUN_SUCCESS(exception_assert_pos_gfunc_with, 1);
+ RUN_SUCCESS(exception_assert_negeq_gfunc_with, 1);
+ RUN_SUCCESS(exception_assert_poseq_gfunc_with, 1);
+ RUN_SUCCESS(exception_bad_assert_nz_gfunc, 0);
+ RUN_SUCCESS(exception_bad_assert_zero_gfunc, 0);
+ RUN_SUCCESS(exception_bad_assert_neg_gfunc, 0);
+ RUN_SUCCESS(exception_bad_assert_pos_gfunc, 0);
+ RUN_SUCCESS(exception_bad_assert_negeq_gfunc, 0);
+ RUN_SUCCESS(exception_bad_assert_poseq_gfunc, 0);
+ RUN_SUCCESS(exception_bad_assert_nz_gfunc_with, 100);
+ RUN_SUCCESS(exception_bad_assert_zero_gfunc_with, 105);
+ RUN_SUCCESS(exception_bad_assert_neg_gfunc_with, 200);
+ RUN_SUCCESS(exception_bad_assert_pos_gfunc_with, 0);
+ RUN_SUCCESS(exception_bad_assert_negeq_gfunc_with, 101);
+ RUN_SUCCESS(exception_bad_assert_poseq_gfunc_with, 99);
+ RUN_SUCCESS(exception_assert_range, 1);
+ RUN_SUCCESS(exception_assert_range_with, 1);
+ RUN_SUCCESS(exception_bad_assert_range, 0);
+ RUN_SUCCESS(exception_bad_assert_range_with, 10);
+
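+/* Re-open the exceptions_ext skeleton with a verifier log buffer, let 'expr'
+ * pick and configure the program to load, then check the expected load result,
+ * the verifier message (on failure) and the attach behaviour (on success).
+ */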
+#define RUN_EXT(load_ret, attach_err, expr, msg, after_link) \
+ { \
+ LIBBPF_OPTS(bpf_object_open_opts, o, .kernel_log_buf = log_buf, \
+ .kernel_log_size = sizeof(log_buf), \
+ .kernel_log_level = 2); \
+ exceptions_ext__destroy(eskel); \
+ eskel = exceptions_ext__open_opts(&o); \
+ struct bpf_program *prog = NULL; \
+ struct bpf_link *link = NULL; \
+ if (!ASSERT_OK_PTR(eskel, "exceptions_ext__open")) \
+ goto done; \
+ (expr); \
+ ASSERT_OK_PTR(bpf_program__name(prog), bpf_program__name(prog)); \
+ if (!ASSERT_EQ(exceptions_ext__load(eskel), load_ret, \
+ "exceptions_ext__load")) { \
+ printf("%s\n", log_buf); \
+ goto done; \
+ } \
+ if (load_ret != 0) { \
+ if (!ASSERT_OK_PTR(strstr(log_buf, msg), "strstr")) { \
+ printf("%s\n", log_buf); \
+ goto done; \
+ } \
+ } \
+ if (!load_ret && attach_err) { \
+ if (!ASSERT_ERR_PTR(link = bpf_program__attach(prog), "attach err")) \
+ goto done; \
+ } else if (!load_ret) { \
+ if (!ASSERT_OK_PTR(link = bpf_program__attach(prog), "attach ok")) \
+ goto done; \
+ (void)(after_link); \
+ bpf_link__destroy(link); \
+ } \
+ }
+
+ if (test__start_subtest("non-throwing fentry -> exception_cb"))
+ RUN_EXT(-EINVAL, true, ({
+ prog = eskel->progs.pfentry;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_ext_mod_cb_runtime),
+ "exception_cb_mod"), "set_attach_target"))
+ goto done;
+ }), "FENTRY/FEXIT programs cannot attach to exception callback", 0);
+
+ if (test__start_subtest("throwing fentry -> exception_cb"))
+ RUN_EXT(-EINVAL, true, ({
+ prog = eskel->progs.throwing_fentry;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_ext_mod_cb_runtime),
+ "exception_cb_mod"), "set_attach_target"))
+ goto done;
+ }), "FENTRY/FEXIT programs cannot attach to exception callback", 0);
+
+ if (test__start_subtest("non-throwing fexit -> exception_cb"))
+ RUN_EXT(-EINVAL, true, ({
+ prog = eskel->progs.pfexit;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_ext_mod_cb_runtime),
+ "exception_cb_mod"), "set_attach_target"))
+ goto done;
+ }), "FENTRY/FEXIT programs cannot attach to exception callback", 0);
+
+ if (test__start_subtest("throwing fexit -> exception_cb"))
+ RUN_EXT(-EINVAL, true, ({
+ prog = eskel->progs.throwing_fexit;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_ext_mod_cb_runtime),
+ "exception_cb_mod"), "set_attach_target"))
+ goto done;
+ }), "FENTRY/FEXIT programs cannot attach to exception callback", 0);
+
+ if (test__start_subtest("throwing extension (with custom cb) -> exception_cb"))
+ RUN_EXT(-EINVAL, true, ({
+ prog = eskel->progs.throwing_exception_cb_extension;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_ext_mod_cb_runtime),
+ "exception_cb_mod"), "set_attach_target"))
+ goto done;
+ }), "Extension programs cannot attach to exception callback", 0);
+
+ if (test__start_subtest("throwing extension -> global func in exception_cb"))
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.throwing_exception_cb_extension;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_ext_mod_cb_runtime),
+ "exception_cb_mod_global"), "set_attach_target"))
+ goto done;
+ }), "", ({ RUN_SUCCESS(exception_ext_mod_cb_runtime, 131); }));
+
+ if (test__start_subtest("throwing extension (with custom cb) -> global func in exception_cb"))
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.throwing_extension;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_ext),
+ "exception_ext_global"), "set_attach_target"))
+ goto done;
+ }), "", ({ RUN_SUCCESS(exception_ext, 128); }));
+
+ if (test__start_subtest("non-throwing fentry -> non-throwing subprog"))
+ /* non-throwing fentry -> non-throwing subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.pfentry;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("throwing fentry -> non-throwing subprog"))
+ /* throwing fentry -> non-throwing subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.throwing_fentry;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("non-throwing fentry -> throwing subprog"))
+ /* non-throwing fentry -> throwing subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.pfentry;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "throwing_subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("throwing fentry -> throwing subprog"))
+ /* throwing fentry -> throwing subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.throwing_fentry;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "throwing_subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("non-throwing fexit -> non-throwing subprog"))
+ /* non-throwing fexit -> non-throwing subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.pfexit;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("throwing fexit -> non-throwing subprog"))
+ /* throwing fexit -> non-throwing subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.throwing_fexit;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("non-throwing fexit -> throwing subprog"))
+ /* non-throwing fexit -> throwing subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.pfexit;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "throwing_subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("throwing fexit -> throwing subprog"))
+ /* throwing fexit -> throwing subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.throwing_fexit;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "throwing_subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ /* fmod_ret is not allowed for subprogs - check this so we remember to handle
+ * its throwing specification compatibility with the target once it is supported.
+ */
+ if (test__start_subtest("non-throwing fmod_ret -> non-throwing subprog"))
+ RUN_EXT(-EINVAL, true, ({
+ prog = eskel->progs.pfmod_ret;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "subprog"), "set_attach_target"))
+ goto done;
+ }), "can't modify return codes of BPF program", 0);
+
+ /* fmod_ret is not allowed for subprogs - check this so we remember to handle
+ * its throwing specification compatibility with the target once it is supported.
+ */
+ if (test__start_subtest("non-throwing fmod_ret -> non-throwing global subprog"))
+ RUN_EXT(-EINVAL, true, ({
+ prog = eskel->progs.pfmod_ret;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "global_subprog"), "set_attach_target"))
+ goto done;
+ }), "can't modify return codes of BPF program", 0);
+
+ if (test__start_subtest("non-throwing extension -> non-throwing subprog"))
+ /* non-throwing extension -> non-throwing subprog : BAD (!global) */
+ RUN_EXT(-EINVAL, true, ({
+ prog = eskel->progs.extension;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "subprog"), "set_attach_target"))
+ goto done;
+ }), "subprog() is not a global function", 0);
+
+ if (test__start_subtest("non-throwing extension -> throwing subprog"))
+ /* non-throwing extension -> throwing subprog : BAD (!global) */
+ RUN_EXT(-EINVAL, true, ({
+ prog = eskel->progs.extension;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "throwing_subprog"), "set_attach_target"))
+ goto done;
+ }), "throwing_subprog() is not a global function", 0);
+
+ if (test__start_subtest("non-throwing extension -> non-throwing subprog"))
+ /* non-throwing extension -> non-throwing global subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.extension;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "global_subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("non-throwing extension -> throwing global subprog"))
+ /* non-throwing extension -> throwing global subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.extension;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "throwing_global_subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("throwing extension -> throwing global subprog"))
+ /* throwing extension -> throwing global subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.throwing_extension;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "throwing_global_subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("throwing extension -> non-throwing global subprog"))
+ /* throwing extension -> non-throwing global subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.throwing_extension;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "global_subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("non-throwing extension -> main subprog"))
+ /* non-throwing extension -> main subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.extension;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "exception_throw_subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+ if (test__start_subtest("throwing extension -> main subprog"))
+ /* throwing extension -> main subprog : OK */
+ RUN_EXT(0, false, ({
+ prog = eskel->progs.throwing_extension;
+ bpf_program__set_autoload(prog, true);
+ if (!ASSERT_OK(bpf_program__set_attach_target(prog,
+ bpf_program__fd(skel->progs.exception_throw_subprog),
+ "exception_throw_subprog"), "set_attach_target"))
+ goto done;
+ }), "", 0);
+
+done:
+ exceptions_ext__destroy(eskel);
+ exceptions__destroy(skel);
+}
+
+static void test_exceptions_assertions(void)
+{
+ RUN_TESTS(exceptions_assert);
+}
+
+void test_exceptions(void)
+{
+ test_exceptions_success();
+ test_exceptions_failure();
+ test_exceptions_assertions();
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/fib_lookup.c b/tools/testing/selftests/bpf/prog_tests/fib_lookup.c
index 2fd05649bad1..4ad4cd69152e 100644
--- a/tools/testing/selftests/bpf/prog_tests/fib_lookup.c
+++ b/tools/testing/selftests/bpf/prog_tests/fib_lookup.c
@@ -11,9 +11,13 @@
#define NS_TEST "fib_lookup_ns"
#define IPV6_IFACE_ADDR "face::face"
+#define IPV6_IFACE_ADDR_SEC "cafe::cafe"
+#define IPV6_ADDR_DST "face::3"
#define IPV6_NUD_FAILED_ADDR "face::1"
#define IPV6_NUD_STALE_ADDR "face::2"
#define IPV4_IFACE_ADDR "10.0.0.254"
+#define IPV4_IFACE_ADDR_SEC "10.1.0.254"
+#define IPV4_ADDR_DST "10.2.0.254"
#define IPV4_NUD_FAILED_ADDR "10.0.0.1"
#define IPV4_NUD_STALE_ADDR "10.0.0.2"
#define IPV4_TBID_ADDR "172.0.0.254"
@@ -31,6 +35,7 @@ struct fib_lookup_test {
const char *desc;
const char *daddr;
int expected_ret;
+ const char *expected_src;
int lookup_flags;
__u32 tbid;
__u8 dmac[6];
@@ -69,6 +74,22 @@ static const struct fib_lookup_test tests[] = {
.daddr = IPV6_TBID_DST, .expected_ret = BPF_FIB_LKUP_RET_SUCCESS,
.lookup_flags = BPF_FIB_LOOKUP_DIRECT | BPF_FIB_LOOKUP_TBID, .tbid = 100,
.dmac = DMAC_INIT2, },
+ { .desc = "IPv4 set src addr from netdev",
+ .daddr = IPV4_NUD_FAILED_ADDR, .expected_ret = BPF_FIB_LKUP_RET_SUCCESS,
+ .expected_src = IPV4_IFACE_ADDR,
+ .lookup_flags = BPF_FIB_LOOKUP_SRC | BPF_FIB_LOOKUP_SKIP_NEIGH, },
+ { .desc = "IPv6 set src addr from netdev",
+ .daddr = IPV6_NUD_FAILED_ADDR, .expected_ret = BPF_FIB_LKUP_RET_SUCCESS,
+ .expected_src = IPV6_IFACE_ADDR,
+ .lookup_flags = BPF_FIB_LOOKUP_SRC | BPF_FIB_LOOKUP_SKIP_NEIGH, },
+ { .desc = "IPv4 set prefsrc addr from route",
+ .daddr = IPV4_ADDR_DST, .expected_ret = BPF_FIB_LKUP_RET_SUCCESS,
+ .expected_src = IPV4_IFACE_ADDR_SEC,
+ .lookup_flags = BPF_FIB_LOOKUP_SRC | BPF_FIB_LOOKUP_SKIP_NEIGH, },
+ { .desc = "IPv6 set prefsrc addr route",
+ .daddr = IPV6_ADDR_DST, .expected_ret = BPF_FIB_LKUP_RET_SUCCESS,
+ .expected_src = IPV6_IFACE_ADDR_SEC,
+ .lookup_flags = BPF_FIB_LOOKUP_SRC | BPF_FIB_LOOKUP_SKIP_NEIGH, },
};
static int ifindex;
@@ -97,6 +118,13 @@ static int setup_netns(void)
SYS(fail, "ip neigh add %s dev veth1 nud failed", IPV4_NUD_FAILED_ADDR);
SYS(fail, "ip neigh add %s dev veth1 lladdr %s nud stale", IPV4_NUD_STALE_ADDR, DMAC);
+ /* Setup for prefsrc IP addr selection */
+ SYS(fail, "ip addr add %s/24 dev veth1", IPV4_IFACE_ADDR_SEC);
+ SYS(fail, "ip route add %s/32 dev veth1 src %s", IPV4_ADDR_DST, IPV4_IFACE_ADDR_SEC);
+
+ SYS(fail, "ip addr add %s/64 dev veth1 nodad", IPV6_IFACE_ADDR_SEC);
+ SYS(fail, "ip route add %s/128 dev veth1 src %s", IPV6_ADDR_DST, IPV6_IFACE_ADDR_SEC);
+
/* Setup for tbid lookup tests */
SYS(fail, "ip addr add %s/24 dev veth2", IPV4_TBID_ADDR);
SYS(fail, "ip route del %s/24 dev veth2", IPV4_TBID_NET);
@@ -133,9 +161,12 @@ static int set_lookup_params(struct bpf_fib_lookup *params, const struct fib_loo
if (inet_pton(AF_INET6, test->daddr, params->ipv6_dst) == 1) {
params->family = AF_INET6;
- ret = inet_pton(AF_INET6, IPV6_IFACE_ADDR, params->ipv6_src);
- if (!ASSERT_EQ(ret, 1, "inet_pton(IPV6_IFACE_ADDR)"))
- return -1;
+ if (!(test->lookup_flags & BPF_FIB_LOOKUP_SRC)) {
+ ret = inet_pton(AF_INET6, IPV6_IFACE_ADDR, params->ipv6_src);
+ if (!ASSERT_EQ(ret, 1, "inet_pton(IPV6_IFACE_ADDR)"))
+ return -1;
+ }
+
return 0;
}
@@ -143,9 +174,12 @@ static int set_lookup_params(struct bpf_fib_lookup *params, const struct fib_loo
if (!ASSERT_EQ(ret, 1, "convert IP[46] address"))
return -1;
params->family = AF_INET;
- ret = inet_pton(AF_INET, IPV4_IFACE_ADDR, &params->ipv4_src);
- if (!ASSERT_EQ(ret, 1, "inet_pton(IPV4_IFACE_ADDR)"))
- return -1;
+
+ if (!(test->lookup_flags & BPF_FIB_LOOKUP_SRC)) {
+ ret = inet_pton(AF_INET, IPV4_IFACE_ADDR, &params->ipv4_src);
+ if (!ASSERT_EQ(ret, 1, "inet_pton(IPV4_IFACE_ADDR)"))
+ return -1;
+ }
return 0;
}
@@ -156,6 +190,40 @@ static void mac_str(char *b, const __u8 *mac)
mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
}
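+/* Compare the source address filled in by bpf_fib_lookup() against the
+ * expected address for the test case.
+ */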
+static void assert_src_ip(struct bpf_fib_lookup *fib_params, const char *expected_src)
+{
+ int ret;
+ __u32 src6[4];
+ __be32 src4;
+
+ switch (fib_params->family) {
+ case AF_INET6:
+ ret = inet_pton(AF_INET6, expected_src, src6);
+ ASSERT_EQ(ret, 1, "inet_pton(expected_src)");
+
+ ret = memcmp(src6, fib_params->ipv6_src, sizeof(fib_params->ipv6_src));
+ if (!ASSERT_EQ(ret, 0, "fib_lookup ipv6 src")) {
+ char str_src6[64];
+
+ inet_ntop(AF_INET6, fib_params->ipv6_src, str_src6,
+ sizeof(str_src6));
+ printf("ipv6 expected %s actual %s ", expected_src,
+ str_src6);
+ }
+
+ break;
+ case AF_INET:
+ ret = inet_pton(AF_INET, expected_src, &src4);
+ ASSERT_EQ(ret, 1, "inet_pton(expected_src)");
+
+ ASSERT_EQ(fib_params->ipv4_src, src4, "fib_lookup ipv4 src");
+
+ break;
+ default:
+ PRINT_FAIL("invalid addr family: %d", fib_params->family);
+ }
+}
+
void test_fib_lookup(void)
{
struct bpf_fib_lookup *fib_params;
@@ -207,6 +275,9 @@ void test_fib_lookup(void)
ASSERT_EQ(skel->bss->fib_lookup_ret, tests[i].expected_ret,
"fib_lookup_ret");
+ if (tests[i].expected_src)
+ assert_src_ip(fib_params, tests[i].expected_src);
+
ret = memcmp(tests[i].dmac, fib_params->dmac, sizeof(tests[i].dmac));
if (!ASSERT_EQ(ret, 0, "dmac not match")) {
char expected[18], actual[18];
diff --git a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
index 9d768e083714..97142a4db374 100644
--- a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
+++ b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
@@ -308,7 +308,7 @@ void test_fill_link_info(void)
return;
/* load kallsyms to compare the addr */
- if (!ASSERT_OK(load_kallsyms_refresh(), "load_kallsyms_refresh"))
+ if (!ASSERT_OK(load_kallsyms(), "load_kallsyms"))
goto cleanup;
kprobe_addr = ksym_get_addr(KPROBE_FUNC);
diff --git a/tools/testing/selftests/bpf/prog_tests/iters.c b/tools/testing/selftests/bpf/prog_tests/iters.c
index 10804ae5ae97..c2425791c923 100644
--- a/tools/testing/selftests/bpf/prog_tests/iters.c
+++ b/tools/testing/selftests/bpf/prog_tests/iters.c
@@ -1,13 +1,25 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+#include <sys/syscall.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+#include <unistd.h>
+#include <malloc.h>
+#include <stdlib.h>
#include <test_progs.h>
+#include "cgroup_helpers.h"
#include "iters.skel.h"
#include "iters_state_safety.skel.h"
#include "iters_looping.skel.h"
#include "iters_num.skel.h"
#include "iters_testmod_seq.skel.h"
+#include "iters_task_vma.skel.h"
+#include "iters_task.skel.h"
+#include "iters_css_task.skel.h"
+#include "iters_css.skel.h"
+#include "iters_task_failure.skel.h"
static void subtest_num_iters(void)
{
@@ -90,6 +102,193 @@ cleanup:
iters_testmod_seq__destroy(skel);
}
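+/* Walk the current task's VMAs with the BPF task_vma iterator and cross-check
+ * the recorded ranges against /proc/self/maps.
+ */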
+static void subtest_task_vma_iters(void)
+{
+ unsigned long start, end, bpf_iter_start, bpf_iter_end;
+ struct iters_task_vma *skel;
+ char rest_of_line[1000];
+ unsigned int seen;
+ FILE *f = NULL;
+ int err;
+
+ skel = iters_task_vma__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+ return;
+
+ skel->bss->target_pid = getpid();
+
+ err = iters_task_vma__attach(skel);
+ if (!ASSERT_OK(err, "skel_attach"))
+ goto cleanup;
+
+ getpgid(skel->bss->target_pid);
+ iters_task_vma__detach(skel);
+
+ if (!ASSERT_GT(skel->bss->vmas_seen, 0, "vmas_seen_gt_zero"))
+ goto cleanup;
+
+ f = fopen("/proc/self/maps", "r");
+ if (!ASSERT_OK_PTR(f, "proc_maps_fopen"))
+ goto cleanup;
+
+ seen = 0;
+ while (fscanf(f, "%lx-%lx %[^\n]\n", &start, &end, rest_of_line) == 3) {
+ /* [vsyscall] vma isn't _really_ part of task->mm vmas.
+ * /proc/PID/maps returns it when out of vmas - see get_gate_vma
+ * calls in fs/proc/task_mmu.c
+ */
+ if (strstr(rest_of_line, "[vsyscall]"))
+ continue;
+
+ bpf_iter_start = skel->bss->vm_ranges[seen].vm_start;
+ bpf_iter_end = skel->bss->vm_ranges[seen].vm_end;
+
+ ASSERT_EQ(bpf_iter_start, start, "vma->vm_start match");
+ ASSERT_EQ(bpf_iter_end, end, "vma->vm_end match");
+ seen++;
+ }
+
+ if (!ASSERT_EQ(skel->bss->vmas_seen, seen, "vmas_seen_eq"))
+ goto cleanup;
+
+cleanup:
+ if (f)
+ fclose(f);
+ iters_task_vma__destroy(skel);
+}
+
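+/* Helper threads block on do_nothing_mutex so they stay alive while the BPF
+ * task iterator counts the process and its threads.
+ */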
+static pthread_mutex_t do_nothing_mutex;
+
+static void *do_nothing_wait(void *arg)
+{
+ pthread_mutex_lock(&do_nothing_mutex);
+ pthread_mutex_unlock(&do_nothing_mutex);
+
+ pthread_exit(arg);
+}
+
+#define thread_num 2
+
+static void subtest_task_iters(void)
+{
+ struct iters_task *skel = NULL;
+ pthread_t thread_ids[thread_num];
+ void *ret;
+ int err;
+
+ skel = iters_task__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ goto cleanup;
+ skel->bss->target_pid = getpid();
+ err = iters_task__attach(skel);
+ if (!ASSERT_OK(err, "iters_task__attach"))
+ goto cleanup;
+ pthread_mutex_lock(&do_nothing_mutex);
+ for (int i = 0; i < thread_num; i++)
+ ASSERT_OK(pthread_create(&thread_ids[i], NULL, &do_nothing_wait, NULL),
+ "pthread_create");
+
+ syscall(SYS_getpgid);
+ iters_task__detach(skel);
+ ASSERT_EQ(skel->bss->procs_cnt, 1, "procs_cnt");
+ ASSERT_EQ(skel->bss->threads_cnt, thread_num + 1, "threads_cnt");
+ ASSERT_EQ(skel->bss->proc_threads_cnt, thread_num + 1, "proc_threads_cnt");
+ pthread_mutex_unlock(&do_nothing_mutex);
+ for (int i = 0; i < thread_num; i++)
+ ASSERT_OK(pthread_join(thread_ids[i], &ret), "pthread_join");
+cleanup:
+ iters_task__destroy(skel);
+}
+
+extern int stack_mprotect(void);
+
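+/* Join a fresh cgroup, trigger the iterator program by calling stack_mprotect()
+ * (expected to fail with EPERM), then check that exactly one task was counted
+ * in the cgroup.
+ */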
+static void subtest_css_task_iters(void)
+{
+ struct iters_css_task *skel = NULL;
+ int err, cg_fd, cg_id;
+ const char *cgrp_path = "/cg1";
+
+ err = setup_cgroup_environment();
+ if (!ASSERT_OK(err, "setup_cgroup_environment"))
+ goto cleanup;
+ cg_fd = create_and_get_cgroup(cgrp_path);
+ if (!ASSERT_GE(cg_fd, 0, "create_and_get_cgroup"))
+ goto cleanup;
+ cg_id = get_cgroup_id(cgrp_path);
+ err = join_cgroup(cgrp_path);
+ if (!ASSERT_OK(err, "join_cgroup"))
+ goto cleanup;
+
+ skel = iters_css_task__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ goto cleanup;
+
+ skel->bss->target_pid = getpid();
+ skel->bss->cg_id = cg_id;
+ err = iters_css_task__attach(skel);
+ if (!ASSERT_OK(err, "iters_task__attach"))
+ goto cleanup;
+ err = stack_mprotect();
+ if (!ASSERT_EQ(err, -1, "stack_mprotect") ||
+ !ASSERT_EQ(errno, EPERM, "stack_mprotect"))
+ goto cleanup;
+ iters_css_task__detach(skel);
+ ASSERT_EQ(skel->bss->css_task_cnt, 1, "css_task_cnt");
+
+cleanup:
+ cleanup_cgroup_environment();
+ iters_css_task__destroy(skel);
+}
+
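+/* Build a four-level cgroup hierarchy and check the pre-order and post-order
+ * css iteration counts, the first/last cgroup IDs seen, and the tree height.
+ */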
+static void subtest_css_iters(void)
+{
+ struct iters_css *skel = NULL;
+ struct {
+ const char *path;
+ int fd;
+ } cgs[] = {
+ { "/cg1" },
+ { "/cg1/cg2" },
+ { "/cg1/cg2/cg3" },
+ { "/cg1/cg2/cg3/cg4" },
+ };
+ int err, cg_nr = ARRAY_SIZE(cgs);
+ int i;
+
+ err = setup_cgroup_environment();
+ if (!ASSERT_OK(err, "setup_cgroup_environment"))
+ goto cleanup;
+ for (i = 0; i < cg_nr; i++) {
+ cgs[i].fd = create_and_get_cgroup(cgs[i].path);
+ if (!ASSERT_GE(cgs[i].fd, 0, "create_and_get_cgroup"))
+ goto cleanup;
+ }
+
+ skel = iters_css__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ goto cleanup;
+
+ skel->bss->target_pid = getpid();
+ skel->bss->root_cg_id = get_cgroup_id(cgs[0].path);
+ skel->bss->leaf_cg_id = get_cgroup_id(cgs[cg_nr - 1].path);
+ err = iters_css__attach(skel);
+
+ if (!ASSERT_OK(err, "iters_task__attach"))
+ goto cleanup;
+
+ syscall(SYS_getpgid);
+ ASSERT_EQ(skel->bss->pre_order_cnt, cg_nr, "pre_order_cnt");
+ ASSERT_EQ(skel->bss->first_cg_id, get_cgroup_id(cgs[0].path), "first_cg_id");
+
+ ASSERT_EQ(skel->bss->post_order_cnt, cg_nr, "post_order_cnt");
+ ASSERT_EQ(skel->bss->last_cg_id, get_cgroup_id(cgs[0].path), "last_cg_id");
+ ASSERT_EQ(skel->bss->tree_high, cg_nr - 1, "tree_high");
+ iters_css__detach(skel);
+cleanup:
+ cleanup_cgroup_environment();
+ iters_css__destroy(skel);
+}
+
void test_iters(void)
{
RUN_TESTS(iters_state_safety);
@@ -103,4 +302,13 @@ void test_iters(void)
subtest_num_iters();
if (test__start_subtest("testmod_seq"))
subtest_testmod_seq_iters();
+ if (test__start_subtest("task_vma"))
+ subtest_task_vma_iters();
+ if (test__start_subtest("task"))
+ subtest_task_iters();
+ if (test__start_subtest("css_task"))
+ subtest_css_task_iters();
+ if (test__start_subtest("css"))
+ subtest_css_iters();
+ RUN_TESTS(iters_task_failure);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_testmod_test.c b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_testmod_test.c
index 1fbe7e4ac00a..9d03528f05db 100644
--- a/tools/testing/selftests/bpf/prog_tests/kprobe_multi_testmod_test.c
+++ b/tools/testing/selftests/bpf/prog_tests/kprobe_multi_testmod_test.c
@@ -4,6 +4,8 @@
#include "trace_helpers.h"
#include "bpf/libbpf_internal.h"
+static struct ksyms *ksyms;
+
static void kprobe_multi_testmod_check(struct kprobe_multi *skel)
{
ASSERT_EQ(skel->bss->kprobe_testmod_test1_result, 1, "kprobe_test1_result");
@@ -50,12 +52,12 @@ static void test_testmod_attach_api_addrs(void)
LIBBPF_OPTS(bpf_kprobe_multi_opts, opts);
unsigned long long addrs[3];
- addrs[0] = ksym_get_addr("bpf_testmod_fentry_test1");
- ASSERT_NEQ(addrs[0], 0, "ksym_get_addr");
- addrs[1] = ksym_get_addr("bpf_testmod_fentry_test2");
- ASSERT_NEQ(addrs[1], 0, "ksym_get_addr");
- addrs[2] = ksym_get_addr("bpf_testmod_fentry_test3");
- ASSERT_NEQ(addrs[2], 0, "ksym_get_addr");
+ addrs[0] = ksym_get_addr_local(ksyms, "bpf_testmod_fentry_test1");
+ ASSERT_NEQ(addrs[0], 0, "ksym_get_addr_local");
+ addrs[1] = ksym_get_addr_local(ksyms, "bpf_testmod_fentry_test2");
+ ASSERT_NEQ(addrs[1], 0, "ksym_get_addr_local");
+ addrs[2] = ksym_get_addr_local(ksyms, "bpf_testmod_fentry_test3");
+ ASSERT_NEQ(addrs[2], 0, "ksym_get_addr_local");
opts.addrs = (const unsigned long *) addrs;
opts.cnt = ARRAY_SIZE(addrs);
@@ -79,11 +81,15 @@ static void test_testmod_attach_api_syms(void)
void serial_test_kprobe_multi_testmod_test(void)
{
- if (!ASSERT_OK(load_kallsyms_refresh(), "load_kallsyms_refresh"))
+ ksyms = load_kallsyms_local();
+ if (!ASSERT_OK_PTR(ksyms, "load_kallsyms_local"))
return;
if (test__start_subtest("testmod_attach_api_syms"))
test_testmod_attach_api_syms();
+
if (test__start_subtest("testmod_attach_api_addrs"))
test_testmod_attach_api_addrs();
+
+ free_kallsyms_local(ksyms);
}
diff --git a/tools/testing/selftests/bpf/prog_tests/libbpf_str.c b/tools/testing/selftests/bpf/prog_tests/libbpf_str.c
index efb8bd43653c..c440ea3311ed 100644
--- a/tools/testing/selftests/bpf/prog_tests/libbpf_str.c
+++ b/tools/testing/selftests/bpf/prog_tests/libbpf_str.c
@@ -142,10 +142,14 @@ static void test_libbpf_bpf_map_type_str(void)
/* Special case for map_type_name BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED
* where it and BPF_MAP_TYPE_CGROUP_STORAGE have the same enum value
* (map_type). For this enum value, libbpf_bpf_map_type_str() picks
- * BPF_MAP_TYPE_CGROUP_STORAGE.
+ * BPF_MAP_TYPE_CGROUP_STORAGE. The same for
+ * BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED and
+ * BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE.
*/
if (strcmp(map_type_name, "BPF_MAP_TYPE_CGROUP_STORAGE_DEPRECATED") == 0)
continue;
+ if (strcmp(map_type_name, "BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED") == 0)
+ continue;
ASSERT_STREQ(buf, map_type_name, "exp_str_value");
}
diff --git a/tools/testing/selftests/bpf/prog_tests/linked_list.c b/tools/testing/selftests/bpf/prog_tests/linked_list.c
index 18cf7b17463d..2fb89de63bd2 100644
--- a/tools/testing/selftests/bpf/prog_tests/linked_list.c
+++ b/tools/testing/selftests/bpf/prog_tests/linked_list.c
@@ -65,8 +65,8 @@ static struct {
{ "map_compat_raw_tp", "tracing progs cannot use bpf_{list_head,rb_root} yet" },
{ "map_compat_raw_tp_w", "tracing progs cannot use bpf_{list_head,rb_root} yet" },
{ "obj_type_id_oor", "local type ID argument must be in range [0, U32_MAX]" },
- { "obj_new_no_composite", "bpf_obj_new type ID argument must be of a struct" },
- { "obj_new_no_struct", "bpf_obj_new type ID argument must be of a struct" },
+ { "obj_new_no_composite", "bpf_obj_new/bpf_percpu_obj_new type ID argument must be of a struct" },
+ { "obj_new_no_struct", "bpf_obj_new/bpf_percpu_obj_new type ID argument must be of a struct" },
{ "obj_drop_non_zero_off", "R1 must have zero offset when passed to release func" },
{ "new_null_ret", "R0 invalid mem access 'ptr_or_null_'" },
{ "obj_new_acq", "Unreleased reference id=" },
@@ -94,14 +94,8 @@ static struct {
{ "incorrect_head_var_off2", "variable ptr_ access var_off=(0x0; 0xffffffff) disallowed" },
{ "incorrect_head_off1", "bpf_list_head not found at offset=25" },
{ "incorrect_head_off2", "bpf_list_head not found at offset=1" },
- { "pop_front_off",
- "15: (bf) r1 = r6 ; R1_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=48,imm=0) "
- "R6_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=48,imm=0) refs=2,4\n"
- "16: (85) call bpf_this_cpu_ptr#154\nR1 type=ptr_or_null_ expected=percpu_ptr_" },
- { "pop_back_off",
- "15: (bf) r1 = r6 ; R1_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=48,imm=0) "
- "R6_w=ptr_or_null_foo(id=4,ref_obj_id=4,off=48,imm=0) refs=2,4\n"
- "16: (85) call bpf_this_cpu_ptr#154\nR1 type=ptr_or_null_ expected=percpu_ptr_" },
+ { "pop_front_off", "off 48 doesn't point to 'struct bpf_spin_lock' that is at 40" },
+ { "pop_back_off", "off 48 doesn't point to 'struct bpf_spin_lock' that is at 40" },
};
static void test_linked_list_fail_prog(const char *prog_name, const char *err_msg)
@@ -268,7 +262,7 @@ end:
static void list_and_rb_node_same_struct(bool refcount_field)
{
- int bpf_rb_node_btf_id, bpf_refcount_btf_id, foo_btf_id;
+ int bpf_rb_node_btf_id, bpf_refcount_btf_id = 0, foo_btf_id;
struct btf *btf;
int id, err;
diff --git a/tools/testing/selftests/bpf/prog_tests/lwt_helpers.h b/tools/testing/selftests/bpf/prog_tests/lwt_helpers.h
index 61333f2a03f9..e9190574e79f 100644
--- a/tools/testing/selftests/bpf/prog_tests/lwt_helpers.h
+++ b/tools/testing/selftests/bpf/prog_tests/lwt_helpers.h
@@ -49,7 +49,8 @@ static int open_tuntap(const char *dev_name, bool need_mac)
return -1;
ifr.ifr_flags = IFF_NO_PI | (need_mac ? IFF_TAP : IFF_TUN);
- memcpy(ifr.ifr_name, dev_name, IFNAMSIZ);
+ strncpy(ifr.ifr_name, dev_name, IFNAMSIZ - 1);
+ ifr.ifr_name[IFNAMSIZ - 1] = '\0';
err = ioctl(fd, TUNSETIFF, &ifr);
if (!ASSERT_OK(err, "ioctl(TUNSETIFF)")) {
diff --git a/tools/testing/selftests/bpf/prog_tests/missed.c b/tools/testing/selftests/bpf/prog_tests/missed.c
new file mode 100644
index 000000000000..70d90c43537c
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/missed.c
@@ -0,0 +1,138 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "missed_kprobe.skel.h"
+#include "missed_kprobe_recursion.skel.h"
+#include "missed_tp_recursion.skel.h"
+
+/*
+ * Put a kprobe on bpf_fentry_test1, which calls the bpf_kfunc_common_test
+ * kfunc that also has a kprobe attached. The latter won't get triggered due
+ * to the kprobe recursion check, so the kprobe missed counter is incremented.
+ */
+static void test_missed_perf_kprobe(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ struct bpf_link_info info = {};
+ struct missed_kprobe *skel;
+ __u32 len = sizeof(info);
+ int err, prog_fd;
+
+ skel = missed_kprobe__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "missed_kprobe__open_and_load"))
+ goto cleanup;
+
+ err = missed_kprobe__attach(skel);
+ if (!ASSERT_OK(err, "missed_kprobe__attach"))
+ goto cleanup;
+
+ prog_fd = bpf_program__fd(skel->progs.trigger);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
+
+ err = bpf_link_get_info_by_fd(bpf_link__fd(skel->links.test2), &info, &len);
+ if (!ASSERT_OK(err, "bpf_link_get_info_by_fd"))
+ goto cleanup;
+
+ ASSERT_EQ(info.type, BPF_LINK_TYPE_PERF_EVENT, "info.type");
+ ASSERT_EQ(info.perf_event.type, BPF_PERF_EVENT_KPROBE, "info.perf_event.type");
+ ASSERT_EQ(info.perf_event.kprobe.missed, 1, "info.perf_event.kprobe.missed");
+
+cleanup:
+ missed_kprobe__destroy(skel);
+}
+
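+/* Fetch a program's recursion_misses counter via bpf_prog_get_info_by_fd(). */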
+static __u64 get_missed_count(int fd)
+{
+ struct bpf_prog_info info = {};
+ __u32 len = sizeof(info);
+ int err;
+
+ err = bpf_prog_get_info_by_fd(fd, &info, &len);
+ if (!ASSERT_OK(err, "bpf_prog_get_info_by_fd"))
+ return (__u64) -1;
+ return info.recursion_misses;
+}
+
+/*
+ * Put a kprobe.multi on bpf_fentry_test1, which calls the bpf_kfunc_common_test
+ * kfunc that has 3 perf event kprobes and 1 kprobe.multi attached.
+ *
+ * Because fprobe (the kprobe.multi attach layer) does not have a strict
+ * recursion check, the kprobe's bpf_prog_active check is hit for test2-5.
+ */
+static void test_missed_kprobe_recursion(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ struct missed_kprobe_recursion *skel;
+ int err, prog_fd;
+
+ skel = missed_kprobe_recursion__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "missed_kprobe_recursion__open_and_load"))
+ goto cleanup;
+
+ err = missed_kprobe_recursion__attach(skel);
+ if (!ASSERT_OK(err, "missed_kprobe_recursion__attach"))
+ goto cleanup;
+
+ prog_fd = bpf_program__fd(skel->progs.trigger);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
+
+ ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test1)), 0, "test1_recursion_misses");
+ ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test2)), 1, "test2_recursion_misses");
+ ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test3)), 1, "test3_recursion_misses");
+ ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test4)), 1, "test4_recursion_misses");
+ ASSERT_GE(get_missed_count(bpf_program__fd(skel->progs.test5)), 1, "test5_recursion_misses");
+
+cleanup:
+ missed_kprobe_recursion__destroy(skel);
+}
+
+/*
+ * Put a kprobe on bpf_fentry_test1, which calls bpf_printk and thus invokes
+ * the bpf_trace_printk tracepoint. The bpf_trace_printk tracepoint has test[234]
+ * programs attached to it.
+ *
+ * Because kprobe execution goes through bpf_prog_active check, programs
+ * attached to the tracepoint will fail the recursion check and increment
+ * the recursion_misses stats.
+ */
+static void test_missed_tp_recursion(void)
+{
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ struct missed_tp_recursion *skel;
+ int err, prog_fd;
+
+ skel = missed_tp_recursion__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "missed_tp_recursion__open_and_load"))
+ goto cleanup;
+
+ err = missed_tp_recursion__attach(skel);
+ if (!ASSERT_OK(err, "missed_tp_recursion__attach"))
+ goto cleanup;
+
+ prog_fd = bpf_program__fd(skel->progs.trigger);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run");
+ ASSERT_EQ(topts.retval, 0, "test_run");
+
+ ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test1)), 0, "test1_recursion_misses");
+ ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test2)), 1, "test2_recursion_misses");
+ ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test3)), 1, "test3_recursion_misses");
+ ASSERT_EQ(get_missed_count(bpf_program__fd(skel->progs.test4)), 1, "test4_recursion_misses");
+
+cleanup:
+ missed_tp_recursion__destroy(skel);
+}
+
+void test_missed(void)
+{
+ if (test__start_subtest("perf_kprobe"))
+ test_missed_perf_kprobe();
+ if (test__start_subtest("kprobe_recursion"))
+ test_missed_kprobe_recursion();
+ if (test__start_subtest("tp_recursion"))
+ test_missed_tp_recursion();
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/module_fentry_shadow.c b/tools/testing/selftests/bpf/prog_tests/module_fentry_shadow.c
index c7636e18b1eb..aa9f67eb1c95 100644
--- a/tools/testing/selftests/bpf/prog_tests/module_fentry_shadow.c
+++ b/tools/testing/selftests/bpf/prog_tests/module_fentry_shadow.c
@@ -61,6 +61,11 @@ void test_module_fentry_shadow(void)
int link_fd[2] = {};
__s32 btf_id[2] = {};
+ if (!env.has_testmod) {
+ test__skip();
+ return;
+ }
+
LIBBPF_OPTS(bpf_prog_load_opts, load_opts,
.expected_attach_type = BPF_TRACE_FENTRY,
);
diff --git a/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
new file mode 100644
index 000000000000..343da65864d6
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/percpu_alloc.c
@@ -0,0 +1,128 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <test_progs.h>
+#include "percpu_alloc_array.skel.h"
+#include "percpu_alloc_cgrp_local_storage.skel.h"
+#include "percpu_alloc_fail.skel.h"
+
+static void test_array(void)
+{
+ struct percpu_alloc_array *skel;
+ int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ skel = percpu_alloc_array__open();
+ if (!ASSERT_OK_PTR(skel, "percpu_alloc_array__open"))
+ return;
+
+ bpf_program__set_autoload(skel->progs.test_array_map_1, true);
+ bpf_program__set_autoload(skel->progs.test_array_map_2, true);
+ bpf_program__set_autoload(skel->progs.test_array_map_3, true);
+ bpf_program__set_autoload(skel->progs.test_array_map_4, true);
+
+ skel->bss->my_pid = getpid();
+ skel->rodata->nr_cpus = libbpf_num_possible_cpus();
+
+ err = percpu_alloc_array__load(skel);
+ if (!ASSERT_OK(err, "percpu_alloc_array__load"))
+ goto out;
+
+ err = percpu_alloc_array__attach(skel);
+ if (!ASSERT_OK(err, "percpu_alloc_array__attach"))
+ goto out;
+
+ prog_fd = bpf_program__fd(skel->progs.test_array_map_1);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run array_map 1-4");
+ ASSERT_EQ(topts.retval, 0, "test_run array_map 1-4");
+ ASSERT_EQ(skel->bss->cpu0_field_d, 2, "cpu0_field_d");
+ ASSERT_EQ(skel->bss->sum_field_c, 1, "sum_field_c");
+out:
+ percpu_alloc_array__destroy(skel);
+}
+
+static void test_array_sleepable(void)
+{
+ struct percpu_alloc_array *skel;
+ int err, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ skel = percpu_alloc_array__open();
+ if (!ASSERT_OK_PTR(skel, "percpu_alloc__open"))
+ return;
+
+ bpf_program__set_autoload(skel->progs.test_array_map_10, true);
+
+ skel->bss->my_pid = getpid();
+ skel->rodata->nr_cpus = libbpf_num_possible_cpus();
+
+ err = percpu_alloc_array__load(skel);
+ if (!ASSERT_OK(err, "percpu_alloc_array__load"))
+ goto out;
+
+ err = percpu_alloc_array__attach(skel);
+ if (!ASSERT_OK(err, "percpu_alloc_array__attach"))
+ goto out;
+
+ prog_fd = bpf_program__fd(skel->progs.test_array_map_10);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run array_map_10");
+ ASSERT_EQ(topts.retval, 0, "test_run array_map_10");
+ ASSERT_EQ(skel->bss->cpu0_field_d, 2, "cpu0_field_d");
+ ASSERT_EQ(skel->bss->sum_field_c, 1, "sum_field_c");
+out:
+ percpu_alloc_array__destroy(skel);
+}
+
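+/* Join a dedicated cgroup and run the per-cpu allocation tests backed by
+ * cgroup local storage, checking cpu0_field_d and sum_field_c afterwards.
+ */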
+static void test_cgrp_local_storage(void)
+{
+ struct percpu_alloc_cgrp_local_storage *skel;
+ int err, cgroup_fd, prog_fd;
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+
+ cgroup_fd = test__join_cgroup("/percpu_alloc");
+ if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup /percpu_alloc"))
+ return;
+
+ skel = percpu_alloc_cgrp_local_storage__open();
+ if (!ASSERT_OK_PTR(skel, "percpu_alloc_cgrp_local_storage__open"))
+ goto close_fd;
+
+ skel->bss->my_pid = getpid();
+ skel->rodata->nr_cpus = libbpf_num_possible_cpus();
+
+ err = percpu_alloc_cgrp_local_storage__load(skel);
+ if (!ASSERT_OK(err, "percpu_alloc_cgrp_local_storage__load"))
+ goto destroy_skel;
+
+ err = percpu_alloc_cgrp_local_storage__attach(skel);
+ if (!ASSERT_OK(err, "percpu_alloc_cgrp_local_storage__attach"))
+ goto destroy_skel;
+
+ prog_fd = bpf_program__fd(skel->progs.test_cgrp_local_storage_1);
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "test_run cgrp_local_storage 1-3");
+ ASSERT_EQ(topts.retval, 0, "test_run cgrp_local_storage 1-3");
+ ASSERT_EQ(skel->bss->cpu0_field_d, 2, "cpu0_field_d");
+ ASSERT_EQ(skel->bss->sum_field_c, 1, "sum_field_c");
+
+destroy_skel:
+ percpu_alloc_cgrp_local_storage__destroy(skel);
+close_fd:
+ close(cgroup_fd);
+}
+
+static void test_failure(void)
+{
+ RUN_TESTS(percpu_alloc_fail);
+}
+
+void test_percpu_alloc(void)
+{
+ if (test__start_subtest("array"))
+ test_array();
+ if (test__start_subtest("array_sleepable"))
+ test_array_sleepable();
+ if (test__start_subtest("cgrp_local_storage"))
+ test_cgrp_local_storage();
+ if (test__start_subtest("failure_tests"))
+ test_failure();
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/preempted_bpf_ma_op.c b/tools/testing/selftests/bpf/prog_tests/preempted_bpf_ma_op.c
new file mode 100644
index 000000000000..3a2ec3923fca
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/preempted_bpf_ma_op.c
@@ -0,0 +1,89 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023. Huawei Technologies Co., Ltd */
+#define _GNU_SOURCE
+#include <sched.h>
+#include <pthread.h>
+#include <stdbool.h>
+#include <test_progs.h>
+
+#include "preempted_bpf_ma_op.skel.h"
+
+#define ALLOC_THREAD_NR 4
+#define ALLOC_LOOP_NR 512
+
+struct alloc_ctx {
+ /* output */
+ int run_err;
+ /* input */
+ int fd;
+ bool *nomem_err;
+};
+
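+/* Pin the allocating thread to CPU 0 so the test programs preempt each other
+ * on the same CPU, then run the program repeatedly until it reports ENOMEM or
+ * the loop finishes.
+ */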
+static void *run_alloc_prog(void *data)
+{
+ struct alloc_ctx *ctx = data;
+ cpu_set_t cpu_set;
+ int i;
+
+ CPU_ZERO(&cpu_set);
+ CPU_SET(0, &cpu_set);
+ pthread_setaffinity_np(pthread_self(), sizeof(cpu_set), &cpu_set);
+
+ for (i = 0; i < ALLOC_LOOP_NR && !*ctx->nomem_err; i++) {
+ LIBBPF_OPTS(bpf_test_run_opts, topts);
+ int err;
+
+ err = bpf_prog_test_run_opts(ctx->fd, &topts);
+ ctx->run_err |= err | topts.retval;
+ }
+
+ return NULL;
+}
+
+void test_preempted_bpf_ma_op(void)
+{
+ struct alloc_ctx ctx[ALLOC_THREAD_NR];
+ struct preempted_bpf_ma_op *skel;
+ pthread_t tid[ALLOC_THREAD_NR];
+ int i, err;
+
+ skel = preempted_bpf_ma_op__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "open_and_load"))
+ return;
+
+ err = preempted_bpf_ma_op__attach(skel);
+ if (!ASSERT_OK(err, "attach"))
+ goto out;
+
+ for (i = 0; i < ARRAY_SIZE(ctx); i++) {
+ struct bpf_program *prog;
+ char name[8];
+
+ snprintf(name, sizeof(name), "test%d", i);
+ prog = bpf_object__find_program_by_name(skel->obj, name);
+ if (!ASSERT_OK_PTR(prog, "no test prog"))
+ goto out;
+
+ ctx[i].run_err = 0;
+ ctx[i].fd = bpf_program__fd(prog);
+ ctx[i].nomem_err = &skel->bss->nomem_err;
+ }
+
+ memset(tid, 0, sizeof(tid));
+ for (i = 0; i < ARRAY_SIZE(tid); i++) {
+ err = pthread_create(&tid[i], NULL, run_alloc_prog, &ctx[i]);
+ if (!ASSERT_OK(err, "pthread_create"))
+ break;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(tid); i++) {
+ if (!tid[i])
+ break;
+ pthread_join(tid[i], NULL);
+ ASSERT_EQ(ctx[i].run_err, 0, "run prog err");
+ }
+
+ ASSERT_FALSE(skel->bss->nomem_err, "ENOMEM");
+out:
+ preempted_bpf_ma_op__destroy(skel);
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/queue_stack_map.c b/tools/testing/selftests/bpf/prog_tests/queue_stack_map.c
index 722c5f2a7776..a043af9cd6d9 100644
--- a/tools/testing/selftests/bpf/prog_tests/queue_stack_map.c
+++ b/tools/testing/selftests/bpf/prog_tests/queue_stack_map.c
@@ -14,7 +14,7 @@ static void test_queue_stack_map_by_type(int type)
int i, err, prog_fd, map_in_fd, map_out_fd;
char file[32], buf[128];
struct bpf_object *obj;
- struct iphdr iph;
+ struct iphdr iph = {};
LIBBPF_OPTS(bpf_test_run_opts, topts,
.data_in = &pkt_v4,
.data_size_in = sizeof(pkt_v4),
diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf.c b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
index ac104dc652e3..48c5695b7abf 100644
--- a/tools/testing/selftests/bpf/prog_tests/ringbuf.c
+++ b/tools/testing/selftests/bpf/prog_tests/ringbuf.c
@@ -91,6 +91,9 @@ static void ringbuf_subtest(void)
int err, cnt, rb_fd;
int page_size = getpagesize();
void *mmap_ptr, *tmp_ptr;
+ struct ring *ring;
+ int map_fd;
+ unsigned long avail_data, ring_size, cons_pos, prod_pos;
skel = test_ringbuf_lskel__open();
if (CHECK(!skel, "skel_open", "skeleton open failed\n"))
@@ -162,6 +165,13 @@ static void ringbuf_subtest(void)
trigger_samples();
+ ring = ring_buffer__ring(ringbuf, 0);
+ if (!ASSERT_OK_PTR(ring, "ring_buffer__ring_idx_0"))
+ goto cleanup;
+
+ map_fd = ring__map_fd(ring);
+ ASSERT_EQ(map_fd, skel->maps.ringbuf.map_fd, "ring_map_fd");
+
/* 2 submitted + 1 discarded records */
CHECK(skel->bss->avail_data != 3 * rec_sz,
"err_avail_size", "exp %ld, got %ld\n",
@@ -176,6 +186,18 @@ static void ringbuf_subtest(void)
"err_prod_pos", "exp %ld, got %ld\n",
3L * rec_sz, skel->bss->prod_pos);
+ /* verify getting this data directly via the ring object yields the same
+ * results
+ */
+ avail_data = ring__avail_data_size(ring);
+ ASSERT_EQ(avail_data, 3 * rec_sz, "ring_avail_size");
+ ring_size = ring__size(ring);
+ ASSERT_EQ(ring_size, page_size, "ring_ring_size");
+ cons_pos = ring__consumer_pos(ring);
+ ASSERT_EQ(cons_pos, 0, "ring_cons_pos");
+ prod_pos = ring__producer_pos(ring);
+ ASSERT_EQ(prod_pos, 3 * rec_sz, "ring_prod_pos");
+
/* poll for samples */
err = ring_buffer__poll(ringbuf, -1);
@@ -282,6 +304,10 @@ static void ringbuf_subtest(void)
err = ring_buffer__consume(ringbuf);
CHECK(err < 0, "rb_consume", "failed: %d\b", err);
+ /* also consume using ring__consume to make sure it works the same */
+ err = ring__consume(ring);
+ ASSERT_GE(err, 0, "ring_consume");
+
/* 3 rounds, 2 samples each */
cnt = atomic_xchg(&sample_cnt, 0);
CHECK(cnt != 6, "cnt", "exp %d samples, got %d\n", 6, cnt);
diff --git a/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c b/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
index 1455911d9fcb..58522195081b 100644
--- a/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
+++ b/tools/testing/selftests/bpf/prog_tests/ringbuf_multi.c
@@ -42,6 +42,8 @@ void test_ringbuf_multi(void)
{
struct test_ringbuf_multi *skel;
struct ring_buffer *ringbuf = NULL;
+ struct ring *ring_old;
+ struct ring *ring;
int err;
int page_size = getpagesize();
int proto_fd = -1;
@@ -84,11 +86,24 @@ void test_ringbuf_multi(void)
if (CHECK(!ringbuf, "ringbuf_create", "failed to create ringbuf\n"))
goto cleanup;
+ /* verify ring_buffer__ring returns expected results */
+ ring = ring_buffer__ring(ringbuf, 0);
+ if (!ASSERT_OK_PTR(ring, "ring_buffer__ring_idx_0"))
+ goto cleanup;
+ ring_old = ring;
+ ring = ring_buffer__ring(ringbuf, 1);
+ ASSERT_ERR_PTR(ring, "ring_buffer__ring_idx_1");
+
err = ring_buffer__add(ringbuf, bpf_map__fd(skel->maps.ringbuf2),
process_sample, (void *)(long)2);
if (CHECK(err, "ringbuf_add", "failed to add another ring\n"))
goto cleanup;
+ /* verify adding a new ring didn't invalidate our older pointer */
+ ring = ring_buffer__ring(ringbuf, 0);
+ if (!ASSERT_EQ(ring, ring_old, "ring_buffer__ring_again"))
+ goto cleanup;
+
err = test_ringbuf_multi__attach(skel);
if (CHECK(err, "skel_attach", "skeleton attachment failed: %d\n", err))
goto cleanup;
diff --git a/tools/testing/selftests/bpf/prog_tests/section_names.c b/tools/testing/selftests/bpf/prog_tests/section_names.c
index 8b571890c57e..c3d78846f31a 100644
--- a/tools/testing/selftests/bpf/prog_tests/section_names.c
+++ b/tools/testing/selftests/bpf/prog_tests/section_names.c
@@ -124,6 +124,11 @@ static struct sec_name_test tests[] = {
{0, BPF_CGROUP_INET6_CONNECT},
},
{
+ "cgroup/connect_unix",
+ {0, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_UNIX_CONNECT},
+ {0, BPF_CGROUP_UNIX_CONNECT},
+ },
+ {
"cgroup/sendmsg4",
{0, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_UDP4_SENDMSG},
{0, BPF_CGROUP_UDP4_SENDMSG},
@@ -134,6 +139,11 @@ static struct sec_name_test tests[] = {
{0, BPF_CGROUP_UDP6_SENDMSG},
},
{
+ "cgroup/sendmsg_unix",
+ {0, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_UNIX_SENDMSG},
+ {0, BPF_CGROUP_UNIX_SENDMSG},
+ },
+ {
"cgroup/recvmsg4",
{0, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_UDP4_RECVMSG},
{0, BPF_CGROUP_UDP4_RECVMSG},
@@ -144,6 +154,11 @@ static struct sec_name_test tests[] = {
{0, BPF_CGROUP_UDP6_RECVMSG},
},
{
+ "cgroup/recvmsg_unix",
+ {0, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_UNIX_RECVMSG},
+ {0, BPF_CGROUP_UNIX_RECVMSG},
+ },
+ {
"cgroup/sysctl",
{0, BPF_PROG_TYPE_CGROUP_SYSCTL, BPF_CGROUP_SYSCTL},
{0, BPF_CGROUP_SYSCTL},
@@ -158,6 +173,36 @@ static struct sec_name_test tests[] = {
{0, BPF_PROG_TYPE_CGROUP_SOCKOPT, BPF_CGROUP_SETSOCKOPT},
{0, BPF_CGROUP_SETSOCKOPT},
},
+ {
+ "cgroup/getpeername4",
+ {0, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_INET4_GETPEERNAME},
+ {0, BPF_CGROUP_INET4_GETPEERNAME},
+ },
+ {
+ "cgroup/getpeername6",
+ {0, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_INET6_GETPEERNAME},
+ {0, BPF_CGROUP_INET6_GETPEERNAME},
+ },
+ {
+ "cgroup/getpeername_unix",
+ {0, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_UNIX_GETPEERNAME},
+ {0, BPF_CGROUP_UNIX_GETPEERNAME},
+ },
+ {
+ "cgroup/getsockname4",
+ {0, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_INET4_GETSOCKNAME},
+ {0, BPF_CGROUP_INET4_GETSOCKNAME},
+ },
+ {
+ "cgroup/getsockname6",
+ {0, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_INET6_GETSOCKNAME},
+ {0, BPF_CGROUP_INET6_GETSOCKNAME},
+ },
+ {
+ "cgroup/getsockname_unix",
+ {0, BPF_PROG_TYPE_CGROUP_SOCK_ADDR, BPF_CGROUP_UNIX_GETSOCKNAME},
+ {0, BPF_CGROUP_UNIX_GETSOCKNAME},
+ },
};
static void test_prog_type_by_name(const struct sec_name_test *test)
diff --git a/tools/testing/selftests/bpf/prog_tests/sock_addr.c b/tools/testing/selftests/bpf/prog_tests/sock_addr.c
new file mode 100644
index 000000000000..5fd617718991
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/sock_addr.c
@@ -0,0 +1,612 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <sys/un.h>
+
+#include "test_progs.h"
+
+#include "connect_unix_prog.skel.h"
+#include "sendmsg_unix_prog.skel.h"
+#include "recvmsg_unix_prog.skel.h"
+#include "getsockname_unix_prog.skel.h"
+#include "getpeername_unix_prog.skel.h"
+#include "network_helpers.h"
+
+#define SERVUN_ADDRESS "bpf_cgroup_unix_test"
+#define SERVUN_REWRITE_ADDRESS "bpf_cgroup_unix_test_rewrite"
+#define SRCUN_ADDRESS "bpf_cgroup_unix_test_src"
+
+enum sock_addr_test_type {
+ SOCK_ADDR_TEST_BIND,
+ SOCK_ADDR_TEST_CONNECT,
+ SOCK_ADDR_TEST_SENDMSG,
+ SOCK_ADDR_TEST_RECVMSG,
+ SOCK_ADDR_TEST_GETSOCKNAME,
+ SOCK_ADDR_TEST_GETPEERNAME,
+};
+
+typedef void *(*load_fn)(int cgroup_fd);
+typedef void (*destroy_fn)(void *skel);
+
+struct sock_addr_test {
+ enum sock_addr_test_type type;
+ const char *name;
+ /* BPF prog properties */
+ load_fn loadfn;
+ destroy_fn destroyfn;
+ /* Socket properties */
+ int socket_family;
+ int socket_type;
+ /* IP:port pairs for BPF prog to override */
+ const char *requested_addr;
+ unsigned short requested_port;
+ const char *expected_addr;
+ unsigned short expected_port;
+ const char *expected_src_addr;
+};
+
+static void *connect_unix_prog_load(int cgroup_fd)
+{
+ struct connect_unix_prog *skel;
+
+ skel = connect_unix_prog__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ goto cleanup;
+
+ skel->links.connect_unix_prog = bpf_program__attach_cgroup(
+ skel->progs.connect_unix_prog, cgroup_fd);
+ if (!ASSERT_OK_PTR(skel->links.connect_unix_prog, "prog_attach"))
+ goto cleanup;
+
+ return skel;
+cleanup:
+ connect_unix_prog__destroy(skel);
+ return NULL;
+}
+
+static void connect_unix_prog_destroy(void *skel)
+{
+ connect_unix_prog__destroy(skel);
+}
+
+static void *sendmsg_unix_prog_load(int cgroup_fd)
+{
+ struct sendmsg_unix_prog *skel;
+
+ skel = sendmsg_unix_prog__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ goto cleanup;
+
+ skel->links.sendmsg_unix_prog = bpf_program__attach_cgroup(
+ skel->progs.sendmsg_unix_prog, cgroup_fd);
+ if (!ASSERT_OK_PTR(skel->links.sendmsg_unix_prog, "prog_attach"))
+ goto cleanup;
+
+ return skel;
+cleanup:
+ sendmsg_unix_prog__destroy(skel);
+ return NULL;
+}
+
+static void sendmsg_unix_prog_destroy(void *skel)
+{
+ sendmsg_unix_prog__destroy(skel);
+}
+
+static void *recvmsg_unix_prog_load(int cgroup_fd)
+{
+ struct recvmsg_unix_prog *skel;
+
+ skel = recvmsg_unix_prog__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ goto cleanup;
+
+ skel->links.recvmsg_unix_prog = bpf_program__attach_cgroup(
+ skel->progs.recvmsg_unix_prog, cgroup_fd);
+ if (!ASSERT_OK_PTR(skel->links.recvmsg_unix_prog, "prog_attach"))
+ goto cleanup;
+
+ return skel;
+cleanup:
+ recvmsg_unix_prog__destroy(skel);
+ return NULL;
+}
+
+static void recvmsg_unix_prog_destroy(void *skel)
+{
+ recvmsg_unix_prog__destroy(skel);
+}
+
+static void *getsockname_unix_prog_load(int cgroup_fd)
+{
+ struct getsockname_unix_prog *skel;
+
+ skel = getsockname_unix_prog__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ goto cleanup;
+
+ skel->links.getsockname_unix_prog = bpf_program__attach_cgroup(
+ skel->progs.getsockname_unix_prog, cgroup_fd);
+ if (!ASSERT_OK_PTR(skel->links.getsockname_unix_prog, "prog_attach"))
+ goto cleanup;
+
+ return skel;
+cleanup:
+ getsockname_unix_prog__destroy(skel);
+ return NULL;
+}
+
+static void getsockname_unix_prog_destroy(void *skel)
+{
+ getsockname_unix_prog__destroy(skel);
+}
+
+static void *getpeername_unix_prog_load(int cgroup_fd)
+{
+ struct getpeername_unix_prog *skel;
+
+ skel = getpeername_unix_prog__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ goto cleanup;
+
+ skel->links.getpeername_unix_prog = bpf_program__attach_cgroup(
+ skel->progs.getpeername_unix_prog, cgroup_fd);
+ if (!ASSERT_OK_PTR(skel->links.getpeername_unix_prog, "prog_attach"))
+ goto cleanup;
+
+ return skel;
+cleanup:
+ getpeername_unix_prog__destroy(skel);
+ return NULL;
+}
+
+static void getpeername_unix_prog_destroy(void *skel)
+{
+ getpeername_unix_prog__destroy(skel);
+}
+
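+/* Each entry attaches one cgroup AF_UNIX sock_addr program and checks that the
+ * requested address is rewritten to the expected one (and, for recvmsg, that
+ * the expected source address is reported).
+ */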
+static struct sock_addr_test tests[] = {
+ {
+ SOCK_ADDR_TEST_CONNECT,
+ "connect_unix",
+ connect_unix_prog_load,
+ connect_unix_prog_destroy,
+ AF_UNIX,
+ SOCK_STREAM,
+ SERVUN_ADDRESS,
+ 0,
+ SERVUN_REWRITE_ADDRESS,
+ 0,
+ NULL,
+ },
+ {
+ SOCK_ADDR_TEST_SENDMSG,
+ "sendmsg_unix",
+ sendmsg_unix_prog_load,
+ sendmsg_unix_prog_destroy,
+ AF_UNIX,
+ SOCK_DGRAM,
+ SERVUN_ADDRESS,
+ 0,
+ SERVUN_REWRITE_ADDRESS,
+ 0,
+ NULL,
+ },
+ {
+ SOCK_ADDR_TEST_RECVMSG,
+ "recvmsg_unix-dgram",
+ recvmsg_unix_prog_load,
+ recvmsg_unix_prog_destroy,
+ AF_UNIX,
+ SOCK_DGRAM,
+ SERVUN_REWRITE_ADDRESS,
+ 0,
+ SERVUN_REWRITE_ADDRESS,
+ 0,
+ SERVUN_ADDRESS,
+ },
+ {
+ SOCK_ADDR_TEST_RECVMSG,
+ "recvmsg_unix-stream",
+ recvmsg_unix_prog_load,
+ recvmsg_unix_prog_destroy,
+ AF_UNIX,
+ SOCK_STREAM,
+ SERVUN_REWRITE_ADDRESS,
+ 0,
+ SERVUN_REWRITE_ADDRESS,
+ 0,
+ SERVUN_ADDRESS,
+ },
+ {
+ SOCK_ADDR_TEST_GETSOCKNAME,
+ "getsockname_unix",
+ getsockname_unix_prog_load,
+ getsockname_unix_prog_destroy,
+ AF_UNIX,
+ SOCK_STREAM,
+ SERVUN_ADDRESS,
+ 0,
+ SERVUN_REWRITE_ADDRESS,
+ 0,
+ NULL,
+ },
+ {
+ SOCK_ADDR_TEST_GETPEERNAME,
+ "getpeername_unix",
+ getpeername_unix_prog_load,
+ getpeername_unix_prog_destroy,
+ AF_UNIX,
+ SOCK_STREAM,
+ SERVUN_ADDRESS,
+ 0,
+ SERVUN_REWRITE_ADDRESS,
+ 0,
+ NULL,
+ },
+};
+
+typedef int (*info_fn)(int, struct sockaddr *, socklen_t *);
+
+static int cmp_addr(const struct sockaddr_storage *addr1, socklen_t addr1_len,
+ const struct sockaddr_storage *addr2, socklen_t addr2_len,
+ bool cmp_port)
+{
+ const struct sockaddr_in *four1, *four2;
+ const struct sockaddr_in6 *six1, *six2;
+ const struct sockaddr_un *un1, *un2;
+
+ if (addr1->ss_family != addr2->ss_family)
+ return -1;
+
+ if (addr1_len != addr2_len)
+ return -1;
+
+ if (addr1->ss_family == AF_INET) {
+ four1 = (const struct sockaddr_in *)addr1;
+ four2 = (const struct sockaddr_in *)addr2;
+ return !((four1->sin_port == four2->sin_port || !cmp_port) &&
+ four1->sin_addr.s_addr == four2->sin_addr.s_addr);
+ } else if (addr1->ss_family == AF_INET6) {
+ six1 = (const struct sockaddr_in6 *)addr1;
+ six2 = (const struct sockaddr_in6 *)addr2;
+ return !((six1->sin6_port == six2->sin6_port || !cmp_port) &&
+ !memcmp(&six1->sin6_addr, &six2->sin6_addr,
+ sizeof(struct in6_addr)));
+ } else if (addr1->ss_family == AF_UNIX) {
+ un1 = (const struct sockaddr_un *)addr1;
+ un2 = (const struct sockaddr_un *)addr2;
+ return memcmp(un1, un2, addr1_len);
+ }
+
+ return -1;
+}
+
+static int cmp_sock_addr(info_fn fn, int sock1,
+ const struct sockaddr_storage *addr2,
+ socklen_t addr2_len, bool cmp_port)
+{
+ struct sockaddr_storage addr1;
+ socklen_t len1 = sizeof(addr1);
+
+ memset(&addr1, 0, len1);
+ if (fn(sock1, (struct sockaddr *)&addr1, (socklen_t *)&len1) != 0)
+ return -1;
+
+ return cmp_addr(&addr1, len1, addr2, addr2_len, cmp_port);
+}
+
+static int cmp_local_addr(int sock1, const struct sockaddr_storage *addr2,
+ socklen_t addr2_len, bool cmp_port)
+{
+ return cmp_sock_addr(getsockname, sock1, addr2, addr2_len, cmp_port);
+}
+
+static int cmp_peer_addr(int sock1, const struct sockaddr_storage *addr2,
+ socklen_t addr2_len, bool cmp_port)
+{
+ return cmp_sock_addr(getpeername, sock1, addr2, addr2_len, cmp_port);
+}
+
+static void test_bind(struct sock_addr_test *test)
+{
+ struct sockaddr_storage expected_addr;
+ socklen_t expected_addr_len = sizeof(struct sockaddr_storage);
+ int serv = -1, client = -1, err;
+
+ serv = start_server(test->socket_family, test->socket_type,
+ test->requested_addr, test->requested_port, 0);
+ if (!ASSERT_GE(serv, 0, "start_server"))
+ goto cleanup;
+
+ err = make_sockaddr(test->socket_family,
+ test->expected_addr, test->expected_port,
+ &expected_addr, &expected_addr_len);
+ if (!ASSERT_EQ(err, 0, "make_sockaddr"))
+ goto cleanup;
+
+ err = cmp_local_addr(serv, &expected_addr, expected_addr_len, true);
+ if (!ASSERT_EQ(err, 0, "cmp_local_addr"))
+ goto cleanup;
+
+ /* Try to connect to the server to verify the bound address is usable */
+ client = connect_to_addr(&expected_addr, expected_addr_len, test->socket_type);
+ if (!ASSERT_GE(client, 0, "connect_to_addr"))
+ goto cleanup;
+
+cleanup:
+ if (client != -1)
+ close(client);
+ if (serv != -1)
+ close(serv);
+}
+
+static void test_connect(struct sock_addr_test *test)
+{
+ struct sockaddr_storage addr, expected_addr, expected_src_addr;
+ socklen_t addr_len = sizeof(struct sockaddr_storage),
+ expected_addr_len = sizeof(struct sockaddr_storage),
+ expected_src_addr_len = sizeof(struct sockaddr_storage);
+ int serv = -1, client = -1, err;
+
+ serv = start_server(test->socket_family, test->socket_type,
+ test->expected_addr, test->expected_port, 0);
+ if (!ASSERT_GE(serv, 0, "start_server"))
+ goto cleanup;
+
+ err = make_sockaddr(test->socket_family, test->requested_addr, test->requested_port,
+ &addr, &addr_len);
+ if (!ASSERT_EQ(err, 0, "make_sockaddr"))
+ goto cleanup;
+
+ client = connect_to_addr(&addr, addr_len, test->socket_type);
+ if (!ASSERT_GE(client, 0, "connect_to_addr"))
+ goto cleanup;
+
+ err = make_sockaddr(test->socket_family, test->expected_addr, test->expected_port,
+ &expected_addr, &expected_addr_len);
+ if (!ASSERT_EQ(err, 0, "make_sockaddr"))
+ goto cleanup;
+
+ if (test->expected_src_addr) {
+ err = make_sockaddr(test->socket_family, test->expected_src_addr, 0,
+ &expected_src_addr, &expected_src_addr_len);
+ if (!ASSERT_EQ(err, 0, "make_sockaddr"))
+ goto cleanup;
+ }
+
+ err = cmp_peer_addr(client, &expected_addr, expected_addr_len, true);
+ if (!ASSERT_EQ(err, 0, "cmp_peer_addr"))
+ goto cleanup;
+
+ if (test->expected_src_addr) {
+ err = cmp_local_addr(client, &expected_src_addr, expected_src_addr_len, false);
+ if (!ASSERT_EQ(err, 0, "cmp_local_addr"))
+ goto cleanup;
+ }
+cleanup:
+ if (client != -1)
+ close(client);
+ if (serv != -1)
+ close(serv);
+}
+
+static void test_xmsg(struct sock_addr_test *test)
+{
+ struct sockaddr_storage addr, src_addr;
+ socklen_t addr_len = sizeof(struct sockaddr_storage),
+ src_addr_len = sizeof(struct sockaddr_storage);
+ struct msghdr hdr;
+ struct iovec iov;
+ char data = 'a';
+ int serv = -1, client = -1, err;
+
+ /* Unlike the other tests, the recvmsg() cases here verify that the BPF
+ * hook can rewrite the source address reported to the receiver.
+ */
+
+ serv = start_server(test->socket_family, test->socket_type,
+ test->expected_addr, test->expected_port, 0);
+ if (!ASSERT_GE(serv, 0, "start_server"))
+ goto cleanup;
+
+ client = socket(test->socket_family, test->socket_type, 0);
+ if (!ASSERT_GE(client, 0, "socket"))
+ goto cleanup;
+
+ /* AF_UNIX sockets have to be bound to something to trigger the recvmsg bpf program. */
+ if (test->socket_family == AF_UNIX) {
+ err = make_sockaddr(AF_UNIX, SRCUN_ADDRESS, 0, &src_addr, &src_addr_len);
+ if (!ASSERT_EQ(err, 0, "make_sockaddr"))
+ goto cleanup;
+
+ err = bind(client, (const struct sockaddr *) &src_addr, src_addr_len);
+ if (!ASSERT_OK(err, "bind"))
+ goto cleanup;
+ }
+
+ err = make_sockaddr(test->socket_family, test->requested_addr, test->requested_port,
+ &addr, &addr_len);
+ if (!ASSERT_EQ(err, 0, "make_sockaddr"))
+ goto cleanup;
+
+ if (test->socket_type == SOCK_DGRAM) {
+ memset(&iov, 0, sizeof(iov));
+ iov.iov_base = &data;
+ iov.iov_len = sizeof(data);
+
+ memset(&hdr, 0, sizeof(hdr));
+ hdr.msg_name = (void *)&addr;
+ hdr.msg_namelen = addr_len;
+ hdr.msg_iov = &iov;
+ hdr.msg_iovlen = 1;
+
+ err = sendmsg(client, &hdr, 0);
+ if (!ASSERT_EQ(err, sizeof(data), "sendmsg"))
+ goto cleanup;
+ } else {
+ /* Testing with connection-oriented sockets is only valid for
+ * recvmsg() tests.
+ */
+ if (!ASSERT_EQ(test->type, SOCK_ADDR_TEST_RECVMSG, "recvmsg"))
+ goto cleanup;
+
+ err = connect(client, (const struct sockaddr *)&addr, addr_len);
+ if (!ASSERT_OK(err, "connect"))
+ goto cleanup;
+
+ err = send(client, &data, sizeof(data), 0);
+ if (!ASSERT_EQ(err, sizeof(data), "send"))
+ goto cleanup;
+
+ err = listen(serv, 0);
+ if (!ASSERT_OK(err, "listen"))
+ goto cleanup;
+
+ err = accept(serv, NULL, NULL);
+ if (!ASSERT_GE(err, 0, "accept"))
+ goto cleanup;
+
+ close(serv);
+ serv = err;
+ }
+
+ addr_len = src_addr_len = sizeof(struct sockaddr_storage);
+
+ err = recvfrom(serv, &data, sizeof(data), 0, (struct sockaddr *) &src_addr, &src_addr_len);
+ if (!ASSERT_EQ(err, sizeof(data), "recvfrom"))
+ goto cleanup;
+
+ ASSERT_EQ(data, 'a', "data mismatch");
+
+ if (test->expected_src_addr) {
+ err = make_sockaddr(test->socket_family, test->expected_src_addr, 0,
+ &addr, &addr_len);
+ if (!ASSERT_EQ(err, 0, "make_sockaddr"))
+ goto cleanup;
+
+ err = cmp_addr(&src_addr, src_addr_len, &addr, addr_len, false);
+ if (!ASSERT_EQ(err, 0, "cmp_addr"))
+ goto cleanup;
+ }
+
+cleanup:
+ if (client != -1)
+ close(client);
+ if (serv != -1)
+ close(serv);
+}
+
+static void test_getsockname(struct sock_addr_test *test)
+{
+ struct sockaddr_storage expected_addr;
+ socklen_t expected_addr_len = sizeof(struct sockaddr_storage);
+ int serv = -1, err;
+
+ serv = start_server(test->socket_family, test->socket_type,
+ test->requested_addr, test->requested_port, 0);
+ if (!ASSERT_GE(serv, 0, "start_server"))
+ goto cleanup;
+
+ err = make_sockaddr(test->socket_family,
+ test->expected_addr, test->expected_port,
+ &expected_addr, &expected_addr_len);
+ if (!ASSERT_EQ(err, 0, "make_sockaddr"))
+ goto cleanup;
+
+ err = cmp_local_addr(serv, &expected_addr, expected_addr_len, true);
+ if (!ASSERT_EQ(err, 0, "cmp_local_addr"))
+ goto cleanup;
+
+cleanup:
+ if (serv != -1)
+ close(serv);
+}
+
+static void test_getpeername(struct sock_addr_test *test)
+{
+ struct sockaddr_storage addr, expected_addr;
+ socklen_t addr_len = sizeof(struct sockaddr_storage),
+ expected_addr_len = sizeof(struct sockaddr_storage);
+ int serv = -1, client = -1, err;
+
+ serv = start_server(test->socket_family, test->socket_type,
+ test->requested_addr, test->requested_port, 0);
+ if (!ASSERT_GE(serv, 0, "start_server"))
+ goto cleanup;
+
+ err = make_sockaddr(test->socket_family, test->requested_addr, test->requested_port,
+ &addr, &addr_len);
+ if (!ASSERT_EQ(err, 0, "make_sockaddr"))
+ goto cleanup;
+
+ client = connect_to_addr(&addr, addr_len, test->socket_type);
+ if (!ASSERT_GE(client, 0, "connect_to_addr"))
+ goto cleanup;
+
+ err = make_sockaddr(test->socket_family, test->expected_addr, test->expected_port,
+ &expected_addr, &expected_addr_len);
+ if (!ASSERT_EQ(err, 0, "make_sockaddr"))
+ goto cleanup;
+
+ err = cmp_peer_addr(client, &expected_addr, expected_addr_len, true);
+ if (!ASSERT_EQ(err, 0, "cmp_peer_addr"))
+ goto cleanup;
+
+cleanup:
+ if (client != -1)
+ close(client);
+ if (serv != -1)
+ close(serv);
+}
+
+void test_sock_addr(void)
+{
+ int cgroup_fd = -1;
+ void *skel;
+
+ cgroup_fd = test__join_cgroup("/sock_addr");
+ if (!ASSERT_GE(cgroup_fd, 0, "join_cgroup"))
+ goto cleanup;
+
+ for (size_t i = 0; i < ARRAY_SIZE(tests); ++i) {
+ struct sock_addr_test *test = &tests[i];
+
+ if (!test__start_subtest(test->name))
+ continue;
+
+ skel = test->loadfn(cgroup_fd);
+ if (!skel)
+ continue;
+
+ switch (test->type) {
+ /* Not exercised yet but we leave this code here for when the
+ * INET and INET6 sockaddr tests are migrated to this file in
+ * the future.
+ */
+ case SOCK_ADDR_TEST_BIND:
+ test_bind(test);
+ break;
+ case SOCK_ADDR_TEST_CONNECT:
+ test_connect(test);
+ break;
+ case SOCK_ADDR_TEST_SENDMSG:
+ case SOCK_ADDR_TEST_RECVMSG:
+ test_xmsg(test);
+ break;
+ case SOCK_ADDR_TEST_GETSOCKNAME:
+ test_getsockname(test);
+ break;
+ case SOCK_ADDR_TEST_GETPEERNAME:
+ test_getpeername(test);
+ break;
+ default:
+ ASSERT_TRUE(false, "Unknown sock addr test type");
+ break;
+ }
+
+ test->destroyfn(skel);
+ }
+
+cleanup:
+ if (cgroup_fd >= 0)
+ close(cgroup_fd);
+}
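
For reference, the AF_UNIX branch of cmp_addr() above compares addresses by family, length, and a raw memcmp() over the used bytes. A minimal standalone sketch of that pattern follows; it is illustrative only and not part of the patch, and the path and helper names are made up.

#include <stdio.h>
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

static socklen_t make_unix_addr(struct sockaddr_storage *ss, const char *path)
{
	struct sockaddr_un *un = (struct sockaddr_un *)ss;

	memset(ss, 0, sizeof(*ss));
	un->sun_family = AF_UNIX;
	strncpy(un->sun_path, path, sizeof(un->sun_path) - 1);
	/* Length covers sun_family plus the used part of sun_path. */
	return offsetof(struct sockaddr_un, sun_path) + strlen(un->sun_path) + 1;
}

int main(void)
{
	struct sockaddr_storage a, b;
	socklen_t alen = make_unix_addr(&a, "/tmp/demo.sock");
	socklen_t blen = make_unix_addr(&b, "/tmp/demo.sock");

	/* Same length and byte-identical contents -> addresses are equal. */
	if (alen == blen && !memcmp(&a, &b, alen))
		printf("addresses match (%u bytes)\n", (unsigned int)alen);
	return 0;
}
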
diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
index dda7060e86a0..f75f84d0b3d7 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
@@ -359,7 +359,7 @@ out:
static void test_sockmap_skb_verdict_shutdown(void)
{
struct epoll_event ev, events[MAX_EVENTS];
- int n, err, map, verdict, s, c1, p1;
+ int n, err, map, verdict, s, c1 = -1, p1 = -1;
struct test_sockmap_pass_prog *skel;
int epollfd;
int zero = 0;
@@ -414,9 +414,9 @@ out:
static void test_sockmap_skb_verdict_fionread(bool pass_prog)
{
int expected, zero = 0, sent, recvd, avail;
- int err, map, verdict, s, c0, c1, p0, p1;
- struct test_sockmap_pass_prog *pass;
- struct test_sockmap_drop_prog *drop;
+ int err, map, verdict, s, c0 = -1, c1 = -1, p0 = -1, p1 = -1;
+ struct test_sockmap_pass_prog *pass = NULL;
+ struct test_sockmap_drop_prog *drop = NULL;
char buf[256] = "0123456789";
if (pass_prog) {
diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
index 36d829a65aa4..e880f97bc44d 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_helpers.h
@@ -378,7 +378,7 @@ static inline int enable_reuseport(int s, int progfd)
static inline int socket_loopback_reuseport(int family, int sotype, int progfd)
{
struct sockaddr_storage addr;
- socklen_t len;
+ socklen_t len = 0;
int err, s;
init_addr_loopback(family, &addr, &len);
diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
index 8df8cbb447f1..a934d430c20c 100644
--- a/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
+++ b/tools/testing/selftests/bpf/prog_tests/sockmap_listen.c
@@ -73,7 +73,7 @@ static void test_insert_bound(struct test_sockmap_listen *skel __always_unused,
int family, int sotype, int mapfd)
{
struct sockaddr_storage addr;
- socklen_t len;
+ socklen_t len = 0;
u32 key = 0;
u64 value;
int err, s;
@@ -871,7 +871,7 @@ static void test_msg_redir_to_listening(struct test_sockmap_listen *skel,
static void redir_partial(int family, int sotype, int sock_map, int parser_map)
{
- int s, c0, c1, p0, p1;
+ int s, c0 = -1, c1 = -1, p0 = -1, p1 = -1;
int err, n, key, value;
char buf[] = "abc";
@@ -1336,53 +1336,59 @@ static void test_redir(struct test_sockmap_listen *skel, struct bpf_map *map,
}
}
-static void unix_redir_to_connected(int sotype, int sock_mapfd,
- int verd_mapfd, enum redir_mode mode)
+static void pairs_redir_to_connected(int cli0, int peer0, int cli1, int peer1,
+ int sock_mapfd, int verd_mapfd, enum redir_mode mode)
{
const char *log_prefix = redir_mode_str(mode);
- int c0, c1, p0, p1;
unsigned int pass;
int err, n;
- int sfd[2];
u32 key;
char b;
zero_verdict_count(verd_mapfd);
- if (socketpair(AF_UNIX, sotype | SOCK_NONBLOCK, 0, sfd))
- return;
- c0 = sfd[0], p0 = sfd[1];
-
- if (socketpair(AF_UNIX, sotype | SOCK_NONBLOCK, 0, sfd))
- goto close0;
- c1 = sfd[0], p1 = sfd[1];
-
- err = add_to_sockmap(sock_mapfd, p0, p1);
+ err = add_to_sockmap(sock_mapfd, peer0, peer1);
if (err)
- goto close;
+ return;
- n = write(c1, "a", 1);
+ n = write(cli1, "a", 1);
if (n < 0)
FAIL_ERRNO("%s: write", log_prefix);
if (n == 0)
FAIL("%s: incomplete write", log_prefix);
if (n < 1)
- goto close;
+ return;
key = SK_PASS;
err = xbpf_map_lookup_elem(verd_mapfd, &key, &pass);
if (err)
- goto close;
+ return;
if (pass != 1)
FAIL("%s: want pass count 1, have %d", log_prefix, pass);
- n = recv_timeout(mode == REDIR_INGRESS ? p0 : c0, &b, 1, 0, IO_TIMEOUT_SEC);
+ n = recv_timeout(mode == REDIR_INGRESS ? peer0 : cli0, &b, 1, 0, IO_TIMEOUT_SEC);
if (n < 0)
FAIL_ERRNO("%s: recv_timeout", log_prefix);
if (n == 0)
FAIL("%s: incomplete recv", log_prefix);
+}
+
+static void unix_redir_to_connected(int sotype, int sock_mapfd,
+ int verd_mapfd, enum redir_mode mode)
+{
+ int c0, c1, p0, p1;
+ int sfd[2];
+
+ if (socketpair(AF_UNIX, sotype | SOCK_NONBLOCK, 0, sfd))
+ return;
+ c0 = sfd[0], p0 = sfd[1];
+
+ if (socketpair(AF_UNIX, sotype | SOCK_NONBLOCK, 0, sfd))
+ goto close0;
+ c1 = sfd[0], p1 = sfd[1];
+
+ pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, verd_mapfd, mode);
-close:
xclose(c1);
xclose(p1);
close0:
@@ -1661,14 +1667,8 @@ close_peer0:
static void udp_redir_to_connected(int family, int sock_mapfd, int verd_mapfd,
enum redir_mode mode)
{
- const char *log_prefix = redir_mode_str(mode);
int c0, c1, p0, p1;
- unsigned int pass;
- int err, n;
- u32 key;
- char b;
-
- zero_verdict_count(verd_mapfd);
+ int err;
err = inet_socketpair(family, SOCK_DGRAM, &p0, &c0);
if (err)
@@ -1677,32 +1677,8 @@ static void udp_redir_to_connected(int family, int sock_mapfd, int verd_mapfd,
if (err)
goto close_cli0;
- err = add_to_sockmap(sock_mapfd, p0, p1);
- if (err)
- goto close_cli1;
+ pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, verd_mapfd, mode);
- n = write(c1, "a", 1);
- if (n < 0)
- FAIL_ERRNO("%s: write", log_prefix);
- if (n == 0)
- FAIL("%s: incomplete write", log_prefix);
- if (n < 1)
- goto close_cli1;
-
- key = SK_PASS;
- err = xbpf_map_lookup_elem(verd_mapfd, &key, &pass);
- if (err)
- goto close_cli1;
- if (pass != 1)
- FAIL("%s: want pass count 1, have %d", log_prefix, pass);
-
- n = recv_timeout(mode == REDIR_INGRESS ? p0 : c0, &b, 1, 0, IO_TIMEOUT_SEC);
- if (n < 0)
- FAIL_ERRNO("%s: recv_timeout", log_prefix);
- if (n == 0)
- FAIL("%s: incomplete recv", log_prefix);
-
-close_cli1:
xclose(c1);
xclose(p1);
close_cli0:
@@ -1747,15 +1723,9 @@ static void test_udp_redir(struct test_sockmap_listen *skel, struct bpf_map *map
static void inet_unix_redir_to_connected(int family, int type, int sock_mapfd,
int verd_mapfd, enum redir_mode mode)
{
- const char *log_prefix = redir_mode_str(mode);
int c0, c1, p0, p1;
- unsigned int pass;
- int err, n;
int sfd[2];
- u32 key;
- char b;
-
- zero_verdict_count(verd_mapfd);
+ int err;
if (socketpair(AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0, sfd))
return;
@@ -1765,32 +1735,8 @@ static void inet_unix_redir_to_connected(int family, int type, int sock_mapfd,
if (err)
goto close;
- err = add_to_sockmap(sock_mapfd, p0, p1);
- if (err)
- goto close_cli1;
-
- n = write(c1, "a", 1);
- if (n < 0)
- FAIL_ERRNO("%s: write", log_prefix);
- if (n == 0)
- FAIL("%s: incomplete write", log_prefix);
- if (n < 1)
- goto close_cli1;
-
- key = SK_PASS;
- err = xbpf_map_lookup_elem(verd_mapfd, &key, &pass);
- if (err)
- goto close_cli1;
- if (pass != 1)
- FAIL("%s: want pass count 1, have %d", log_prefix, pass);
-
- n = recv_timeout(mode == REDIR_INGRESS ? p0 : c0, &b, 1, 0, IO_TIMEOUT_SEC);
- if (n < 0)
- FAIL_ERRNO("%s: recv_timeout", log_prefix);
- if (n == 0)
- FAIL("%s: incomplete recv", log_prefix);
+ pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, verd_mapfd, mode);
-close_cli1:
xclose(c1);
xclose(p1);
close:
@@ -1827,15 +1773,9 @@ static void inet_unix_skb_redir_to_connected(struct test_sockmap_listen *skel,
static void unix_inet_redir_to_connected(int family, int type, int sock_mapfd,
int verd_mapfd, enum redir_mode mode)
{
- const char *log_prefix = redir_mode_str(mode);
int c0, c1, p0, p1;
- unsigned int pass;
- int err, n;
int sfd[2];
- u32 key;
- char b;
-
- zero_verdict_count(verd_mapfd);
+ int err;
err = inet_socketpair(family, SOCK_DGRAM, &p0, &c0);
if (err)
@@ -1845,32 +1785,8 @@ static void unix_inet_redir_to_connected(int family, int type, int sock_mapfd,
goto close_cli0;
c1 = sfd[0], p1 = sfd[1];
- err = add_to_sockmap(sock_mapfd, p0, p1);
- if (err)
- goto close;
-
- n = write(c1, "a", 1);
- if (n < 0)
- FAIL_ERRNO("%s: write", log_prefix);
- if (n == 0)
- FAIL("%s: incomplete write", log_prefix);
- if (n < 1)
- goto close;
-
- key = SK_PASS;
- err = xbpf_map_lookup_elem(verd_mapfd, &key, &pass);
- if (err)
- goto close;
- if (pass != 1)
- FAIL("%s: want pass count 1, have %d", log_prefix, pass);
-
- n = recv_timeout(mode == REDIR_INGRESS ? p0 : c0, &b, 1, 0, IO_TIMEOUT_SEC);
- if (n < 0)
- FAIL_ERRNO("%s: recv_timeout", log_prefix);
- if (n == 0)
- FAIL("%s: incomplete recv", log_prefix);
+ pairs_redir_to_connected(c0, p0, c1, p1, sock_mapfd, verd_mapfd, mode);
-close:
xclose(c1);
xclose(p1);
close_cli0:
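
The refactor above moves the shared write/verdict-check/receive sequence into pairs_redir_to_connected(), so each caller only sets up its own socket pairs. A standalone sketch of that setup step for the AF_UNIX case follows (no BPF or sockmap is attached, so the byte simply stays on its own pair; variable names are illustrative).

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>

int main(void)
{
	int sfd[2], c0, p0, c1, p1;
	char b;

	/* First client/peer pair. */
	if (socketpair(AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0, sfd))
		return 1;
	c0 = sfd[0], p0 = sfd[1];

	/* Second client/peer pair. */
	if (socketpair(AF_UNIX, SOCK_DGRAM | SOCK_NONBLOCK, 0, sfd))
		goto close0;
	c1 = sfd[0], p1 = sfd[1];

	/* Without a sockmap redirect the byte arrives on the same pair. */
	if (write(c1, "a", 1) == 1 && read(p1, &b, 1) == 1)
		printf("received '%c' on p1\n", b);

	close(c1);
	close(p1);
close0:
	close(c0);
	close(p0);
	return 0;
}
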
diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
index 58fe2c586ed7..fc6b2954e8f5 100644
--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
@@ -218,12 +218,14 @@ out:
bpf_object__close(obj);
}
-static void test_tailcall_count(const char *which)
+static void test_tailcall_count(const char *which, bool test_fentry,
+ bool test_fexit)
{
+ struct bpf_object *obj = NULL, *fentry_obj = NULL, *fexit_obj = NULL;
+ struct bpf_link *fentry_link = NULL, *fexit_link = NULL;
int err, map_fd, prog_fd, main_fd, data_fd, i, val;
struct bpf_map *prog_array, *data_map;
struct bpf_program *prog;
- struct bpf_object *obj;
char buff[128] = {};
LIBBPF_OPTS(bpf_test_run_opts, topts,
.data_in = buff,
@@ -265,23 +267,105 @@ static void test_tailcall_count(const char *which)
if (CHECK_FAIL(err))
goto out;
+ if (test_fentry) {
+ fentry_obj = bpf_object__open_file("tailcall_bpf2bpf_fentry.bpf.o",
+ NULL);
+ if (!ASSERT_OK_PTR(fentry_obj, "open fentry_obj file"))
+ goto out;
+
+ prog = bpf_object__find_program_by_name(fentry_obj, "fentry");
+ if (!ASSERT_OK_PTR(prog, "find fentry prog"))
+ goto out;
+
+ err = bpf_program__set_attach_target(prog, prog_fd,
+ "subprog_tail");
+ if (!ASSERT_OK(err, "set_attach_target subprog_tail"))
+ goto out;
+
+ err = bpf_object__load(fentry_obj);
+ if (!ASSERT_OK(err, "load fentry_obj"))
+ goto out;
+
+ fentry_link = bpf_program__attach_trace(prog);
+ if (!ASSERT_OK_PTR(fentry_link, "attach_trace"))
+ goto out;
+ }
+
+ if (test_fexit) {
+ fexit_obj = bpf_object__open_file("tailcall_bpf2bpf_fexit.bpf.o",
+ NULL);
+ if (!ASSERT_OK_PTR(fexit_obj, "open fexit_obj file"))
+ goto out;
+
+ prog = bpf_object__find_program_by_name(fexit_obj, "fexit");
+ if (!ASSERT_OK_PTR(prog, "find fexit prog"))
+ goto out;
+
+ err = bpf_program__set_attach_target(prog, prog_fd,
+ "subprog_tail");
+ if (!ASSERT_OK(err, "set_attach_target subprog_tail"))
+ goto out;
+
+ err = bpf_object__load(fexit_obj);
+ if (!ASSERT_OK(err, "load fexit_obj"))
+ goto out;
+
+ fexit_link = bpf_program__attach_trace(prog);
+ if (!ASSERT_OK_PTR(fexit_link, "attach_trace"))
+ goto out;
+ }
+
err = bpf_prog_test_run_opts(main_fd, &topts);
ASSERT_OK(err, "tailcall");
ASSERT_EQ(topts.retval, 1, "tailcall retval");
data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
- return;
+ goto out;
data_fd = bpf_map__fd(data_map);
- if (CHECK_FAIL(map_fd < 0))
- return;
+ if (CHECK_FAIL(data_fd < 0))
+ goto out;
i = 0;
err = bpf_map_lookup_elem(data_fd, &i, &val);
ASSERT_OK(err, "tailcall count");
ASSERT_EQ(val, 33, "tailcall count");
+ if (test_fentry) {
+ data_map = bpf_object__find_map_by_name(fentry_obj, ".bss");
+ if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+ "find tailcall_bpf2bpf_fentry.bss map"))
+ goto out;
+
+ data_fd = bpf_map__fd(data_map);
+ if (!ASSERT_FALSE(data_fd < 0,
+ "find tailcall_bpf2bpf_fentry.bss map fd"))
+ goto out;
+
+ i = 0;
+ err = bpf_map_lookup_elem(data_fd, &i, &val);
+ ASSERT_OK(err, "fentry count");
+ ASSERT_EQ(val, 33, "fentry count");
+ }
+
+ if (test_fexit) {
+ data_map = bpf_object__find_map_by_name(fexit_obj, ".bss");
+ if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+ "find tailcall_bpf2bpf_fexit.bss map"))
+ goto out;
+
+ data_fd = bpf_map__fd(data_map);
+ if (!ASSERT_FALSE(data_fd < 0,
+ "find tailcall_bpf2bpf_fexit.bss map fd"))
+ goto out;
+
+ i = 0;
+ err = bpf_map_lookup_elem(data_fd, &i, &val);
+ ASSERT_OK(err, "fexit count");
+ ASSERT_EQ(val, 33, "fexit count");
+ }
+
i = 0;
err = bpf_map_delete_elem(map_fd, &i);
if (CHECK_FAIL(err))
@@ -291,6 +375,10 @@ static void test_tailcall_count(const char *which)
ASSERT_OK(err, "tailcall");
ASSERT_OK(topts.retval, "tailcall retval");
out:
+ bpf_link__destroy(fentry_link);
+ bpf_link__destroy(fexit_link);
+ bpf_object__close(fentry_obj);
+ bpf_object__close(fexit_obj);
bpf_object__close(obj);
}
@@ -299,7 +387,7 @@ out:
*/
static void test_tailcall_3(void)
{
- test_tailcall_count("tailcall3.bpf.o");
+ test_tailcall_count("tailcall3.bpf.o", false, false);
}
/* test_tailcall_6 checks that the count value of the tail call limit
@@ -307,7 +395,7 @@ static void test_tailcall_3(void)
*/
static void test_tailcall_6(void)
{
- test_tailcall_count("tailcall6.bpf.o");
+ test_tailcall_count("tailcall6.bpf.o", false, false);
}
/* test_tailcall_4 checks that the kernel properly selects indirect jump
@@ -352,11 +440,11 @@ static void test_tailcall_4(void)
data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
- return;
+ goto out;
data_fd = bpf_map__fd(data_map);
- if (CHECK_FAIL(map_fd < 0))
- return;
+ if (CHECK_FAIL(data_fd < 0))
+ goto out;
for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
@@ -442,11 +530,11 @@ static void test_tailcall_5(void)
data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
- return;
+ goto out;
data_fd = bpf_map__fd(data_map);
- if (CHECK_FAIL(map_fd < 0))
- return;
+ if (CHECK_FAIL(data_fd < 0))
+ goto out;
for (i = 0; i < bpf_map__max_entries(prog_array); i++) {
snprintf(prog_name, sizeof(prog_name), "classifier_%d", i);
@@ -631,11 +719,11 @@ static void test_tailcall_bpf2bpf_2(void)
data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
- return;
+ goto out;
data_fd = bpf_map__fd(data_map);
- if (CHECK_FAIL(map_fd < 0))
- return;
+ if (CHECK_FAIL(data_fd < 0))
+ goto out;
i = 0;
err = bpf_map_lookup_elem(data_fd, &i, &val);
@@ -805,11 +893,11 @@ static void test_tailcall_bpf2bpf_4(bool noise)
data_map = bpf_object__find_map_by_name(obj, "tailcall.bss");
if (CHECK_FAIL(!data_map || !bpf_map__is_internal(data_map)))
- return;
+ goto out;
data_fd = bpf_map__fd(data_map);
- if (CHECK_FAIL(map_fd < 0))
- return;
+ if (CHECK_FAIL(data_fd < 0))
+ goto out;
i = 0;
val.noise = noise;
@@ -872,7 +960,7 @@ static void test_tailcall_bpf2bpf_6(void)
ASSERT_EQ(topts.retval, 0, "tailcall retval");
data_fd = bpf_map__fd(obj->maps.bss);
- if (!ASSERT_GE(map_fd, 0, "bss map fd"))
+ if (!ASSERT_GE(data_fd, 0, "bss map fd"))
goto out;
i = 0;
@@ -884,6 +972,139 @@ out:
tailcall_bpf2bpf6__destroy(obj);
}
+/* test_tailcall_bpf2bpf_fentry checks that the tail call limit counter
+ * matches expectations when the tail call is preceded by a bpf2bpf call,
+ * and the bpf2bpf call is traced by fentry.
+ */
+static void test_tailcall_bpf2bpf_fentry(void)
+{
+ test_tailcall_count("tailcall_bpf2bpf2.bpf.o", true, false);
+}
+
+/* test_tailcall_bpf2bpf_fexit checks that the tail call limit counter
+ * matches expectations when the tail call is preceded by a bpf2bpf call,
+ * and the bpf2bpf call is traced by fexit.
+ */
+static void test_tailcall_bpf2bpf_fexit(void)
+{
+ test_tailcall_count("tailcall_bpf2bpf2.bpf.o", false, true);
+}
+
+/* test_tailcall_bpf2bpf_fentry_fexit checks that the tail call limit counter
+ * matches expectations when the tail call is preceded by a bpf2bpf call,
+ * and the bpf2bpf call is traced by both fentry and fexit.
+ */
+static void test_tailcall_bpf2bpf_fentry_fexit(void)
+{
+ test_tailcall_count("tailcall_bpf2bpf2.bpf.o", true, true);
+}
+
+/* test_tailcall_bpf2bpf_fentry_entry checks that the tail call limit counter
+ * matches expectations when the tail call is preceded by a bpf2bpf call,
+ * and the bpf2bpf caller is traced by fentry.
+ */
+static void test_tailcall_bpf2bpf_fentry_entry(void)
+{
+ struct bpf_object *tgt_obj = NULL, *fentry_obj = NULL;
+ int err, map_fd, prog_fd, data_fd, i, val;
+ struct bpf_map *prog_array, *data_map;
+ struct bpf_link *fentry_link = NULL;
+ struct bpf_program *prog;
+ char buff[128] = {};
+
+ LIBBPF_OPTS(bpf_test_run_opts, topts,
+ .data_in = buff,
+ .data_size_in = sizeof(buff),
+ .repeat = 1,
+ );
+
+ err = bpf_prog_test_load("tailcall_bpf2bpf2.bpf.o",
+ BPF_PROG_TYPE_SCHED_CLS,
+ &tgt_obj, &prog_fd);
+ if (!ASSERT_OK(err, "load tgt_obj"))
+ return;
+
+ prog_array = bpf_object__find_map_by_name(tgt_obj, "jmp_table");
+ if (!ASSERT_OK_PTR(prog_array, "find jmp_table map"))
+ goto out;
+
+ map_fd = bpf_map__fd(prog_array);
+ if (!ASSERT_FALSE(map_fd < 0, "find jmp_table map fd"))
+ goto out;
+
+ prog = bpf_object__find_program_by_name(tgt_obj, "classifier_0");
+ if (!ASSERT_OK_PTR(prog, "find classifier_0 prog"))
+ goto out;
+
+ prog_fd = bpf_program__fd(prog);
+ if (!ASSERT_FALSE(prog_fd < 0, "find classifier_0 prog fd"))
+ goto out;
+
+ i = 0;
+ err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY);
+ if (!ASSERT_OK(err, "update jmp_table"))
+ goto out;
+
+ fentry_obj = bpf_object__open_file("tailcall_bpf2bpf_fentry.bpf.o",
+ NULL);
+ if (!ASSERT_OK_PTR(fentry_obj, "open fentry_obj file"))
+ goto out;
+
+ prog = bpf_object__find_program_by_name(fentry_obj, "fentry");
+ if (!ASSERT_OK_PTR(prog, "find fentry prog"))
+ goto out;
+
+ err = bpf_program__set_attach_target(prog, prog_fd, "classifier_0");
+ if (!ASSERT_OK(err, "set_attach_target classifier_0"))
+ goto out;
+
+ err = bpf_object__load(fentry_obj);
+ if (!ASSERT_OK(err, "load fentry_obj"))
+ goto out;
+
+ fentry_link = bpf_program__attach_trace(prog);
+ if (!ASSERT_OK_PTR(fentry_link, "attach_trace"))
+ goto out;
+
+ err = bpf_prog_test_run_opts(prog_fd, &topts);
+ ASSERT_OK(err, "tailcall");
+ ASSERT_EQ(topts.retval, 1, "tailcall retval");
+
+ data_map = bpf_object__find_map_by_name(tgt_obj, "tailcall.bss");
+ if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+ "find tailcall.bss map"))
+ goto out;
+
+ data_fd = bpf_map__fd(data_map);
+ if (!ASSERT_FALSE(data_fd < 0, "find tailcall.bss map fd"))
+ goto out;
+
+ i = 0;
+ err = bpf_map_lookup_elem(data_fd, &i, &val);
+ ASSERT_OK(err, "tailcall count");
+ ASSERT_EQ(val, 34, "tailcall count");
+
+ data_map = bpf_object__find_map_by_name(fentry_obj, ".bss");
+ if (!ASSERT_FALSE(!data_map || !bpf_map__is_internal(data_map),
+ "find tailcall_bpf2bpf_fentry.bss map"))
+ goto out;
+
+ data_fd = bpf_map__fd(data_map);
+ if (!ASSERT_FALSE(data_fd < 0,
+ "find tailcall_bpf2bpf_fentry.bss map fd"))
+ goto out;
+
+ i = 0;
+ err = bpf_map_lookup_elem(data_fd, &i, &val);
+ ASSERT_OK(err, "fentry count");
+ ASSERT_EQ(val, 1, "fentry count");
+
+out:
+ bpf_link__destroy(fentry_link);
+ bpf_object__close(fentry_obj);
+ bpf_object__close(tgt_obj);
+}
+
void test_tailcalls(void)
{
if (test__start_subtest("tailcall_1"))
@@ -910,4 +1131,12 @@ void test_tailcalls(void)
test_tailcall_bpf2bpf_4(true);
if (test__start_subtest("tailcall_bpf2bpf_6"))
test_tailcall_bpf2bpf_6();
+ if (test__start_subtest("tailcall_bpf2bpf_fentry"))
+ test_tailcall_bpf2bpf_fentry();
+ if (test__start_subtest("tailcall_bpf2bpf_fexit"))
+ test_tailcall_bpf2bpf_fexit();
+ if (test__start_subtest("tailcall_bpf2bpf_fentry_fexit"))
+ test_tailcall_bpf2bpf_fentry_fexit();
+ if (test__start_subtest("tailcall_bpf2bpf_fentry_entry"))
+ test_tailcall_bpf2bpf_fentry_entry();
}
diff --git a/tools/testing/selftests/bpf/prog_tests/task_under_cgroup.c b/tools/testing/selftests/bpf/prog_tests/task_under_cgroup.c
index 4224727fb364..626d76fe43a2 100644
--- a/tools/testing/selftests/bpf/prog_tests/task_under_cgroup.c
+++ b/tools/testing/selftests/bpf/prog_tests/task_under_cgroup.c
@@ -30,8 +30,15 @@ void test_task_under_cgroup(void)
if (!ASSERT_OK(ret, "test_task_under_cgroup__load"))
goto cleanup;
- ret = test_task_under_cgroup__attach(skel);
- if (!ASSERT_OK(ret, "test_task_under_cgroup__attach"))
+ /* Attach the LSM program first; it is then triggered when the
+ * TP_BTF program is attached below.
+ */
+ skel->links.lsm_run = bpf_program__attach_lsm(skel->progs.lsm_run);
+ if (!ASSERT_OK_PTR(skel->links.lsm_run, "attach_lsm"))
+ goto cleanup;
+
+ skel->links.tp_btf_run = bpf_program__attach_trace(skel->progs.tp_btf_run);
+ if (!ASSERT_OK_PTR(skel->links.tp_btf_run, "attach_tp_btf"))
goto cleanup;
pid = fork();
diff --git a/tools/testing/selftests/bpf/prog_tests/tc_helpers.h b/tools/testing/selftests/bpf/prog_tests/tc_helpers.h
index 67f985f7d215..924d0e25320c 100644
--- a/tools/testing/selftests/bpf/prog_tests/tc_helpers.h
+++ b/tools/testing/selftests/bpf/prog_tests/tc_helpers.h
@@ -4,6 +4,10 @@
#define TC_HELPERS
#include <test_progs.h>
+#ifndef loopback
+# define loopback 1
+#endif
+
static inline __u32 id_from_prog_fd(int fd)
{
struct bpf_prog_info prog_info = {};
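
The fallback above assumes the loopback device has ifindex 1, which holds in a freshly created network namespace. A tiny standalone check, shown only as an illustration, resolves it at runtime instead:

#include <stdio.h>
#include <net/if.h>

int main(void)
{
	unsigned int idx = if_nametoindex("lo");

	/* Fall back to 1 if the lookup fails, matching the #define above. */
	printf("loopback ifindex: %u\n", idx ? idx : 1);
	return 0;
}
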
diff --git a/tools/testing/selftests/bpf/prog_tests/tc_netkit.c b/tools/testing/selftests/bpf/prog_tests/tc_netkit.c
new file mode 100644
index 000000000000..15ee7b2fc410
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/tc_netkit.c
@@ -0,0 +1,687 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Isovalent */
+#include <uapi/linux/if_link.h>
+#include <net/if.h>
+#include <test_progs.h>
+
+#define netkit_peer "nk0"
+#define netkit_name "nk1"
+
+#define ping_addr_neigh 0x0a000002 /* 10.0.0.2 */
+#define ping_addr_noneigh 0x0a000003 /* 10.0.0.3 */
+
+#include "test_tc_link.skel.h"
+#include "netlink_helpers.h"
+#include "tc_helpers.h"
+
+#define ICMP_ECHO 8
+
+struct icmphdr {
+ __u8 type;
+ __u8 code;
+ __sum16 checksum;
+ struct {
+ __be16 id;
+ __be16 sequence;
+ } echo;
+};
+
+struct iplink_req {
+ struct nlmsghdr n;
+ struct ifinfomsg i;
+ char buf[1024];
+};
+
+static int create_netkit(int mode, int policy, int peer_policy, int *ifindex,
+ bool same_netns)
+{
+ struct rtnl_handle rth = { .fd = -1 };
+ struct iplink_req req = {};
+ struct rtattr *linkinfo, *data;
+ const char *type = "netkit";
+ int err;
+
+ err = rtnl_open(&rth, 0);
+ if (!ASSERT_OK(err, "open_rtnetlink"))
+ return err;
+
+ memset(&req, 0, sizeof(req));
+ req.n.nlmsg_len = NLMSG_LENGTH(sizeof(struct ifinfomsg));
+ req.n.nlmsg_flags = NLM_F_REQUEST | NLM_F_CREATE | NLM_F_EXCL;
+ req.n.nlmsg_type = RTM_NEWLINK;
+ req.i.ifi_family = AF_UNSPEC;
+
+ addattr_l(&req.n, sizeof(req), IFLA_IFNAME, netkit_name,
+ strlen(netkit_name));
+ linkinfo = addattr_nest(&req.n, sizeof(req), IFLA_LINKINFO);
+ addattr_l(&req.n, sizeof(req), IFLA_INFO_KIND, type, strlen(type));
+ data = addattr_nest(&req.n, sizeof(req), IFLA_INFO_DATA);
+ addattr32(&req.n, sizeof(req), IFLA_NETKIT_POLICY, policy);
+ addattr32(&req.n, sizeof(req), IFLA_NETKIT_PEER_POLICY, peer_policy);
+ addattr32(&req.n, sizeof(req), IFLA_NETKIT_MODE, mode);
+ addattr_nest_end(&req.n, data);
+ addattr_nest_end(&req.n, linkinfo);
+
+ err = rtnl_talk(&rth, &req.n, NULL);
+ ASSERT_OK(err, "talk_rtnetlink");
+ rtnl_close(&rth);
+ *ifindex = if_nametoindex(netkit_name);
+
+ ASSERT_GT(*ifindex, 0, "retrieve_ifindex");
+ ASSERT_OK(system("ip netns add foo"), "create netns");
+ ASSERT_OK(system("ip link set dev " netkit_name " up"),
+ "up primary");
+ ASSERT_OK(system("ip addr add dev " netkit_name " 10.0.0.1/24"),
+ "addr primary");
+ if (same_netns) {
+ ASSERT_OK(system("ip link set dev " netkit_peer " up"),
+ "up peer");
+ ASSERT_OK(system("ip addr add dev " netkit_peer " 10.0.0.2/24"),
+ "addr peer");
+ } else {
+ ASSERT_OK(system("ip link set " netkit_peer " netns foo"),
+ "move peer");
+ ASSERT_OK(system("ip netns exec foo ip link set dev "
+ netkit_peer " up"), "up peer");
+ ASSERT_OK(system("ip netns exec foo ip addr add dev "
+ netkit_peer " 10.0.0.2/24"), "addr peer");
+ }
+ return err;
+}
+
+static void destroy_netkit(void)
+{
+ ASSERT_OK(system("ip link del dev " netkit_name), "del primary");
+ ASSERT_OK(system("ip netns del foo"), "delete netns");
+ ASSERT_EQ(if_nametoindex(netkit_name), 0, netkit_name "_ifindex");
+}
+
+static int __send_icmp(__u32 dest)
+{
+ struct sockaddr_in addr;
+ struct icmphdr icmp;
+ int sock, ret;
+
+ ret = write_sysctl("/proc/sys/net/ipv4/ping_group_range", "0 0");
+ if (!ASSERT_OK(ret, "write_sysctl(net.ipv4.ping_group_range)"))
+ return ret;
+
+ sock = socket(AF_INET, SOCK_DGRAM, IPPROTO_ICMP);
+ if (!ASSERT_GE(sock, 0, "icmp_socket"))
+ return -errno;
+
+ ret = setsockopt(sock, SOL_SOCKET, SO_BINDTODEVICE,
+ netkit_name, strlen(netkit_name) + 1);
+ if (!ASSERT_OK(ret, "setsockopt(SO_BINDTODEVICE)"))
+ goto out;
+
+ memset(&addr, 0, sizeof(addr));
+ addr.sin_family = AF_INET;
+ addr.sin_addr.s_addr = htonl(dest);
+
+ memset(&icmp, 0, sizeof(icmp));
+ icmp.type = ICMP_ECHO;
+ icmp.echo.id = 1234;
+ icmp.echo.sequence = 1;
+
+ ret = sendto(sock, &icmp, sizeof(icmp), 0,
+ (struct sockaddr *)&addr, sizeof(addr));
+ if (!ASSERT_GE(ret, 0, "icmp_sendto"))
+ ret = -errno;
+ else
+ ret = 0;
+out:
+ close(sock);
+ return ret;
+}
+
+static int send_icmp(void)
+{
+ return __send_icmp(ping_addr_neigh);
+}
+
+void serial_test_tc_netkit_basic(void)
+{
+ LIBBPF_OPTS(bpf_prog_query_opts, optq);
+ LIBBPF_OPTS(bpf_netkit_opts, optl);
+ __u32 prog_ids[2], link_ids[2];
+ __u32 pid1, pid2, lid1, lid2;
+ struct test_tc_link *skel;
+ struct bpf_link *link;
+ int err, ifindex;
+
+ err = create_netkit(NETKIT_L2, NETKIT_PASS, NETKIT_PASS,
+ &ifindex, false);
+ if (err)
+ return;
+
+ skel = test_tc_link__open();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ goto cleanup;
+
+ ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc1,
+ BPF_NETKIT_PRIMARY), 0, "tc1_attach_type");
+ ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc2,
+ BPF_NETKIT_PEER), 0, "tc2_attach_type");
+
+ err = test_tc_link__load(skel);
+ if (!ASSERT_OK(err, "skel_load"))
+ goto cleanup;
+
+ pid1 = id_from_prog_fd(bpf_program__fd(skel->progs.tc1));
+ pid2 = id_from_prog_fd(bpf_program__fd(skel->progs.tc2));
+
+ ASSERT_NEQ(pid1, pid2, "prog_ids_1_2");
+
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 0);
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
+
+ ASSERT_EQ(skel->bss->seen_tc1, false, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
+
+ link = bpf_program__attach_netkit(skel->progs.tc1, ifindex, &optl);
+ if (!ASSERT_OK_PTR(link, "link_attach"))
+ goto cleanup;
+
+ skel->links.tc1 = link;
+
+ lid1 = id_from_link_fd(bpf_link__fd(skel->links.tc1));
+
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 1);
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
+
+ optq.prog_ids = prog_ids;
+ optq.link_ids = link_ids;
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ memset(link_ids, 0, sizeof(link_ids));
+ optq.count = ARRAY_SIZE(prog_ids);
+
+ err = bpf_prog_query_opts(ifindex, BPF_NETKIT_PRIMARY, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup;
+
+ ASSERT_EQ(optq.count, 1, "count");
+ ASSERT_EQ(optq.revision, 2, "revision");
+ ASSERT_EQ(optq.prog_ids[0], pid1, "prog_ids[0]");
+ ASSERT_EQ(optq.link_ids[0], lid1, "link_ids[0]");
+ ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
+ ASSERT_EQ(optq.link_ids[1], 0, "link_ids[1]");
+
+ tc_skel_reset_all_seen(skel);
+ ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
+
+ ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
+
+ link = bpf_program__attach_netkit(skel->progs.tc2, ifindex, &optl);
+ if (!ASSERT_OK_PTR(link, "link_attach"))
+ goto cleanup;
+
+ skel->links.tc2 = link;
+
+ lid2 = id_from_link_fd(bpf_link__fd(skel->links.tc2));
+ ASSERT_NEQ(lid1, lid2, "link_ids_1_2");
+
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 1);
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 1);
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ memset(link_ids, 0, sizeof(link_ids));
+ optq.count = ARRAY_SIZE(prog_ids);
+
+ err = bpf_prog_query_opts(ifindex, BPF_NETKIT_PEER, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup;
+
+ ASSERT_EQ(optq.count, 1, "count");
+ ASSERT_EQ(optq.revision, 2, "revision");
+ ASSERT_EQ(optq.prog_ids[0], pid2, "prog_ids[0]");
+ ASSERT_EQ(optq.link_ids[0], lid2, "link_ids[0]");
+ ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
+ ASSERT_EQ(optq.link_ids[1], 0, "link_ids[1]");
+
+ tc_skel_reset_all_seen(skel);
+ ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
+
+ ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_tc2, true, "seen_tc2");
+cleanup:
+ test_tc_link__destroy(skel);
+
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 0);
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
+ destroy_netkit();
+}
+
+static void serial_test_tc_netkit_multi_links_target(int mode, int target)
+{
+ LIBBPF_OPTS(bpf_prog_query_opts, optq);
+ LIBBPF_OPTS(bpf_netkit_opts, optl);
+ __u32 prog_ids[3], link_ids[3];
+ __u32 pid1, pid2, lid1, lid2;
+ struct test_tc_link *skel;
+ struct bpf_link *link;
+ int err, ifindex;
+
+ err = create_netkit(mode, NETKIT_PASS, NETKIT_PASS,
+ &ifindex, false);
+ if (err)
+ return;
+
+ skel = test_tc_link__open();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ goto cleanup;
+
+ ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc1,
+ target), 0, "tc1_attach_type");
+ ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc2,
+ target), 0, "tc2_attach_type");
+
+ err = test_tc_link__load(skel);
+ if (!ASSERT_OK(err, "skel_load"))
+ goto cleanup;
+
+ pid1 = id_from_prog_fd(bpf_program__fd(skel->progs.tc1));
+ pid2 = id_from_prog_fd(bpf_program__fd(skel->progs.tc2));
+
+ ASSERT_NEQ(pid1, pid2, "prog_ids_1_2");
+
+ assert_mprog_count_ifindex(ifindex, target, 0);
+
+ ASSERT_EQ(skel->bss->seen_tc1, false, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_eth, false, "seen_eth");
+ ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
+
+ link = bpf_program__attach_netkit(skel->progs.tc1, ifindex, &optl);
+ if (!ASSERT_OK_PTR(link, "link_attach"))
+ goto cleanup;
+
+ skel->links.tc1 = link;
+
+ lid1 = id_from_link_fd(bpf_link__fd(skel->links.tc1));
+
+ assert_mprog_count_ifindex(ifindex, target, 1);
+
+ optq.prog_ids = prog_ids;
+ optq.link_ids = link_ids;
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ memset(link_ids, 0, sizeof(link_ids));
+ optq.count = ARRAY_SIZE(prog_ids);
+
+ err = bpf_prog_query_opts(ifindex, target, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup;
+
+ ASSERT_EQ(optq.count, 1, "count");
+ ASSERT_EQ(optq.revision, 2, "revision");
+ ASSERT_EQ(optq.prog_ids[0], pid1, "prog_ids[0]");
+ ASSERT_EQ(optq.link_ids[0], lid1, "link_ids[0]");
+ ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
+ ASSERT_EQ(optq.link_ids[1], 0, "link_ids[1]");
+
+ tc_skel_reset_all_seen(skel);
+ ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
+
+ ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_eth, true, "seen_eth");
+ ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
+
+ LIBBPF_OPTS_RESET(optl,
+ .flags = BPF_F_BEFORE,
+ .relative_fd = bpf_program__fd(skel->progs.tc1),
+ );
+
+ link = bpf_program__attach_netkit(skel->progs.tc2, ifindex, &optl);
+ if (!ASSERT_OK_PTR(link, "link_attach"))
+ goto cleanup;
+
+ skel->links.tc2 = link;
+
+ lid2 = id_from_link_fd(bpf_link__fd(skel->links.tc2));
+ ASSERT_NEQ(lid1, lid2, "link_ids_1_2");
+
+ assert_mprog_count_ifindex(ifindex, target, 2);
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ memset(link_ids, 0, sizeof(link_ids));
+ optq.count = ARRAY_SIZE(prog_ids);
+
+ err = bpf_prog_query_opts(ifindex, target, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup;
+
+ ASSERT_EQ(optq.count, 2, "count");
+ ASSERT_EQ(optq.revision, 3, "revision");
+ ASSERT_EQ(optq.prog_ids[0], pid2, "prog_ids[0]");
+ ASSERT_EQ(optq.link_ids[0], lid2, "link_ids[0]");
+ ASSERT_EQ(optq.prog_ids[1], pid1, "prog_ids[1]");
+ ASSERT_EQ(optq.link_ids[1], lid1, "link_ids[1]");
+ ASSERT_EQ(optq.prog_ids[2], 0, "prog_ids[2]");
+ ASSERT_EQ(optq.link_ids[2], 0, "link_ids[2]");
+
+ tc_skel_reset_all_seen(skel);
+ ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
+
+ ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_eth, true, "seen_eth");
+ ASSERT_EQ(skel->bss->seen_tc2, true, "seen_tc2");
+cleanup:
+ test_tc_link__destroy(skel);
+
+ assert_mprog_count_ifindex(ifindex, target, 0);
+ destroy_netkit();
+}
+
+void serial_test_tc_netkit_multi_links(void)
+{
+ serial_test_tc_netkit_multi_links_target(NETKIT_L2, BPF_NETKIT_PRIMARY);
+ serial_test_tc_netkit_multi_links_target(NETKIT_L3, BPF_NETKIT_PRIMARY);
+ serial_test_tc_netkit_multi_links_target(NETKIT_L2, BPF_NETKIT_PEER);
+ serial_test_tc_netkit_multi_links_target(NETKIT_L3, BPF_NETKIT_PEER);
+}
+
+static void serial_test_tc_netkit_multi_opts_target(int mode, int target)
+{
+ LIBBPF_OPTS(bpf_prog_attach_opts, opta);
+ LIBBPF_OPTS(bpf_prog_detach_opts, optd);
+ LIBBPF_OPTS(bpf_prog_query_opts, optq);
+ __u32 pid1, pid2, fd1, fd2;
+ __u32 prog_ids[3];
+ struct test_tc_link *skel;
+ int err, ifindex;
+
+ err = create_netkit(mode, NETKIT_PASS, NETKIT_PASS,
+ &ifindex, false);
+ if (err)
+ return;
+
+ skel = test_tc_link__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_load"))
+ goto cleanup;
+
+ fd1 = bpf_program__fd(skel->progs.tc1);
+ fd2 = bpf_program__fd(skel->progs.tc2);
+
+ pid1 = id_from_prog_fd(fd1);
+ pid2 = id_from_prog_fd(fd2);
+
+ ASSERT_NEQ(pid1, pid2, "prog_ids_1_2");
+
+ assert_mprog_count_ifindex(ifindex, target, 0);
+
+ ASSERT_EQ(skel->bss->seen_tc1, false, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_eth, false, "seen_eth");
+ ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
+
+ err = bpf_prog_attach_opts(fd1, ifindex, target, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup;
+
+ assert_mprog_count_ifindex(ifindex, target, 1);
+
+ optq.prog_ids = prog_ids;
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ optq.count = ARRAY_SIZE(prog_ids);
+
+ err = bpf_prog_query_opts(ifindex, target, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup_fd1;
+
+ ASSERT_EQ(optq.count, 1, "count");
+ ASSERT_EQ(optq.revision, 2, "revision");
+ ASSERT_EQ(optq.prog_ids[0], pid1, "prog_ids[0]");
+ ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
+
+ tc_skel_reset_all_seen(skel);
+ ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
+
+ ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_eth, true, "seen_eth");
+ ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
+
+ LIBBPF_OPTS_RESET(opta,
+ .flags = BPF_F_BEFORE,
+ .relative_fd = fd1,
+ );
+
+ err = bpf_prog_attach_opts(fd2, ifindex, target, &opta);
+ if (!ASSERT_EQ(err, 0, "prog_attach"))
+ goto cleanup_fd1;
+
+ assert_mprog_count_ifindex(ifindex, target, 2);
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ optq.count = ARRAY_SIZE(prog_ids);
+
+ err = bpf_prog_query_opts(ifindex, target, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup_fd2;
+
+ ASSERT_EQ(optq.count, 2, "count");
+ ASSERT_EQ(optq.revision, 3, "revision");
+ ASSERT_EQ(optq.prog_ids[0], pid2, "prog_ids[0]");
+ ASSERT_EQ(optq.prog_ids[1], pid1, "prog_ids[1]");
+ ASSERT_EQ(optq.prog_ids[2], 0, "prog_ids[2]");
+
+ tc_skel_reset_all_seen(skel);
+ ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
+
+ ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_eth, true, "seen_eth");
+ ASSERT_EQ(skel->bss->seen_tc2, true, "seen_tc2");
+
+cleanup_fd2:
+ err = bpf_prog_detach_opts(fd2, ifindex, target, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count_ifindex(ifindex, target, 1);
+cleanup_fd1:
+ err = bpf_prog_detach_opts(fd1, ifindex, target, &optd);
+ ASSERT_OK(err, "prog_detach");
+ assert_mprog_count_ifindex(ifindex, target, 0);
+cleanup:
+ test_tc_link__destroy(skel);
+
+ assert_mprog_count_ifindex(ifindex, target, 0);
+ destroy_netkit();
+}
+
+void serial_test_tc_netkit_multi_opts(void)
+{
+ serial_test_tc_netkit_multi_opts_target(NETKIT_L2, BPF_NETKIT_PRIMARY);
+ serial_test_tc_netkit_multi_opts_target(NETKIT_L3, BPF_NETKIT_PRIMARY);
+ serial_test_tc_netkit_multi_opts_target(NETKIT_L2, BPF_NETKIT_PEER);
+ serial_test_tc_netkit_multi_opts_target(NETKIT_L3, BPF_NETKIT_PEER);
+}
+
+void serial_test_tc_netkit_device(void)
+{
+ LIBBPF_OPTS(bpf_prog_query_opts, optq);
+ LIBBPF_OPTS(bpf_netkit_opts, optl);
+ __u32 prog_ids[2], link_ids[2];
+ __u32 pid1, pid2, lid1;
+ struct test_tc_link *skel;
+ struct bpf_link *link;
+ int err, ifindex, ifindex2;
+
+ err = create_netkit(NETKIT_L3, NETKIT_PASS, NETKIT_PASS,
+ &ifindex, true);
+ if (err)
+ return;
+
+ ifindex2 = if_nametoindex(netkit_peer);
+ ASSERT_NEQ(ifindex, ifindex2, "ifindex_1_2");
+
+ skel = test_tc_link__open();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ goto cleanup;
+
+ ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc1,
+ BPF_NETKIT_PRIMARY), 0, "tc1_attach_type");
+ ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc2,
+ BPF_NETKIT_PEER), 0, "tc2_attach_type");
+ ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc3,
+ BPF_NETKIT_PRIMARY), 0, "tc3_attach_type");
+
+ err = test_tc_link__load(skel);
+ if (!ASSERT_OK(err, "skel_load"))
+ goto cleanup;
+
+ pid1 = id_from_prog_fd(bpf_program__fd(skel->progs.tc1));
+ pid2 = id_from_prog_fd(bpf_program__fd(skel->progs.tc2));
+
+ ASSERT_NEQ(pid1, pid2, "prog_ids_1_2");
+
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 0);
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
+
+ ASSERT_EQ(skel->bss->seen_tc1, false, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
+
+ link = bpf_program__attach_netkit(skel->progs.tc1, ifindex, &optl);
+ if (!ASSERT_OK_PTR(link, "link_attach"))
+ goto cleanup;
+
+ skel->links.tc1 = link;
+
+ lid1 = id_from_link_fd(bpf_link__fd(skel->links.tc1));
+
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 1);
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
+
+ optq.prog_ids = prog_ids;
+ optq.link_ids = link_ids;
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ memset(link_ids, 0, sizeof(link_ids));
+ optq.count = ARRAY_SIZE(prog_ids);
+
+ err = bpf_prog_query_opts(ifindex, BPF_NETKIT_PRIMARY, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup;
+
+ ASSERT_EQ(optq.count, 1, "count");
+ ASSERT_EQ(optq.revision, 2, "revision");
+ ASSERT_EQ(optq.prog_ids[0], pid1, "prog_ids[0]");
+ ASSERT_EQ(optq.link_ids[0], lid1, "link_ids[0]");
+ ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
+ ASSERT_EQ(optq.link_ids[1], 0, "link_ids[1]");
+
+ tc_skel_reset_all_seen(skel);
+ ASSERT_EQ(send_icmp(), 0, "icmp_pkt");
+
+ ASSERT_EQ(skel->bss->seen_tc1, true, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_tc2, false, "seen_tc2");
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ memset(link_ids, 0, sizeof(link_ids));
+ optq.count = ARRAY_SIZE(prog_ids);
+
+ err = bpf_prog_query_opts(ifindex2, BPF_NETKIT_PRIMARY, &optq);
+ ASSERT_EQ(err, -EACCES, "prog_query_should_fail");
+
+ err = bpf_prog_query_opts(ifindex2, BPF_NETKIT_PEER, &optq);
+ ASSERT_EQ(err, -EACCES, "prog_query_should_fail");
+
+ link = bpf_program__attach_netkit(skel->progs.tc2, ifindex2, &optl);
+ if (!ASSERT_ERR_PTR(link, "link_attach_should_fail")) {
+ bpf_link__destroy(link);
+ goto cleanup;
+ }
+
+ link = bpf_program__attach_netkit(skel->progs.tc3, ifindex2, &optl);
+ if (!ASSERT_ERR_PTR(link, "link_attach_should_fail")) {
+ bpf_link__destroy(link);
+ goto cleanup;
+ }
+
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 1);
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
+cleanup:
+ test_tc_link__destroy(skel);
+
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PRIMARY, 0);
+ assert_mprog_count_ifindex(ifindex, BPF_NETKIT_PEER, 0);
+ destroy_netkit();
+}
+
+static void serial_test_tc_netkit_neigh_links_target(int mode, int target)
+{
+ LIBBPF_OPTS(bpf_prog_query_opts, optq);
+ LIBBPF_OPTS(bpf_netkit_opts, optl);
+ __u32 prog_ids[2], link_ids[2];
+ __u32 pid1, lid1;
+ struct test_tc_link *skel;
+ struct bpf_link *link;
+ int err, ifindex;
+
+ err = create_netkit(mode, NETKIT_PASS, NETKIT_PASS,
+ &ifindex, false);
+ if (err)
+ return;
+
+ skel = test_tc_link__open();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ goto cleanup;
+
+ ASSERT_EQ(bpf_program__set_expected_attach_type(skel->progs.tc1,
+ BPF_NETKIT_PRIMARY), 0, "tc1_attach_type");
+
+ err = test_tc_link__load(skel);
+ if (!ASSERT_OK(err, "skel_load"))
+ goto cleanup;
+
+ pid1 = id_from_prog_fd(bpf_program__fd(skel->progs.tc1));
+
+ assert_mprog_count_ifindex(ifindex, target, 0);
+
+ ASSERT_EQ(skel->bss->seen_tc1, false, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_eth, false, "seen_eth");
+
+ link = bpf_program__attach_netkit(skel->progs.tc1, ifindex, &optl);
+ if (!ASSERT_OK_PTR(link, "link_attach"))
+ goto cleanup;
+
+ skel->links.tc1 = link;
+
+ lid1 = id_from_link_fd(bpf_link__fd(skel->links.tc1));
+
+ assert_mprog_count_ifindex(ifindex, target, 1);
+
+ optq.prog_ids = prog_ids;
+ optq.link_ids = link_ids;
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ memset(link_ids, 0, sizeof(link_ids));
+ optq.count = ARRAY_SIZE(prog_ids);
+
+ err = bpf_prog_query_opts(ifindex, target, &optq);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup;
+
+ ASSERT_EQ(optq.count, 1, "count");
+ ASSERT_EQ(optq.revision, 2, "revision");
+ ASSERT_EQ(optq.prog_ids[0], pid1, "prog_ids[0]");
+ ASSERT_EQ(optq.link_ids[0], lid1, "link_ids[0]");
+ ASSERT_EQ(optq.prog_ids[1], 0, "prog_ids[1]");
+ ASSERT_EQ(optq.link_ids[1], 0, "link_ids[1]");
+
+ tc_skel_reset_all_seen(skel);
+ ASSERT_EQ(__send_icmp(ping_addr_noneigh), 0, "icmp_pkt");
+
+ ASSERT_EQ(skel->bss->seen_tc1, true /* L2: ARP */, "seen_tc1");
+ ASSERT_EQ(skel->bss->seen_eth, mode == NETKIT_L3, "seen_eth");
+cleanup:
+ test_tc_link__destroy(skel);
+
+ assert_mprog_count_ifindex(ifindex, target, 0);
+ destroy_netkit();
+}
+
+void serial_test_tc_netkit_neigh_links(void)
+{
+ serial_test_tc_netkit_neigh_links_target(NETKIT_L2, BPF_NETKIT_PRIMARY);
+ serial_test_tc_netkit_neigh_links_target(NETKIT_L3, BPF_NETKIT_PRIMARY);
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/tc_opts.c b/tools/testing/selftests/bpf/prog_tests/tc_opts.c
index ca506d2fcf58..51883ccb8020 100644
--- a/tools/testing/selftests/bpf/prog_tests/tc_opts.c
+++ b/tools/testing/selftests/bpf/prog_tests/tc_opts.c
@@ -2471,7 +2471,7 @@ static void test_tc_opts_query_target(int target)
__u32 fd1, fd2, fd3, fd4, id1, id2, id3, id4;
struct test_tc_link *skel;
union bpf_attr attr;
- __u32 prog_ids[5];
+ __u32 prog_ids[10];
int err;
skel = test_tc_link__open_and_load();
@@ -2599,6 +2599,135 @@ static void test_tc_opts_query_target(int target)
ASSERT_EQ(attr.query.link_ids, 0, "link_ids");
ASSERT_EQ(attr.query.link_attach_flags, 0, "link_attach_flags");
+ /* Test 3: Query with smaller prog_ids array */
+ memset(&attr, 0, attr_size);
+ attr.query.target_ifindex = loopback;
+ attr.query.attach_type = target;
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ attr.query.prog_ids = ptr_to_u64(prog_ids);
+ attr.query.count = 2;
+
+ err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
+ ASSERT_EQ(err, -1, "prog_query_should_fail");
+ ASSERT_EQ(errno, ENOSPC, "prog_query_should_fail");
+
+ ASSERT_EQ(attr.query.count, 4, "count");
+ ASSERT_EQ(attr.query.revision, 5, "revision");
+ ASSERT_EQ(attr.query.query_flags, 0, "query_flags");
+ ASSERT_EQ(attr.query.attach_flags, 0, "attach_flags");
+ ASSERT_EQ(attr.query.target_ifindex, loopback, "target_ifindex");
+ ASSERT_EQ(attr.query.attach_type, target, "attach_type");
+ ASSERT_EQ(attr.query.prog_ids, ptr_to_u64(prog_ids), "prog_ids");
+ ASSERT_EQ(prog_ids[0], id1, "prog_ids[0]");
+ ASSERT_EQ(prog_ids[1], id2, "prog_ids[1]");
+ ASSERT_EQ(prog_ids[2], 0, "prog_ids[2]");
+ ASSERT_EQ(prog_ids[3], 0, "prog_ids[3]");
+ ASSERT_EQ(prog_ids[4], 0, "prog_ids[4]");
+ ASSERT_EQ(attr.query.prog_attach_flags, 0, "prog_attach_flags");
+ ASSERT_EQ(attr.query.link_ids, 0, "link_ids");
+ ASSERT_EQ(attr.query.link_attach_flags, 0, "link_attach_flags");
+
+ /* Test 4: Query with larger prog_ids array */
+ memset(&attr, 0, attr_size);
+ attr.query.target_ifindex = loopback;
+ attr.query.attach_type = target;
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ attr.query.prog_ids = ptr_to_u64(prog_ids);
+ attr.query.count = 10;
+
+ err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup4;
+
+ ASSERT_EQ(attr.query.count, 4, "count");
+ ASSERT_EQ(attr.query.revision, 5, "revision");
+ ASSERT_EQ(attr.query.query_flags, 0, "query_flags");
+ ASSERT_EQ(attr.query.attach_flags, 0, "attach_flags");
+ ASSERT_EQ(attr.query.target_ifindex, loopback, "target_ifindex");
+ ASSERT_EQ(attr.query.attach_type, target, "attach_type");
+ ASSERT_EQ(attr.query.prog_ids, ptr_to_u64(prog_ids), "prog_ids");
+ ASSERT_EQ(prog_ids[0], id1, "prog_ids[0]");
+ ASSERT_EQ(prog_ids[1], id2, "prog_ids[1]");
+ ASSERT_EQ(prog_ids[2], id3, "prog_ids[2]");
+ ASSERT_EQ(prog_ids[3], id4, "prog_ids[3]");
+ ASSERT_EQ(prog_ids[4], 0, "prog_ids[4]");
+ ASSERT_EQ(attr.query.prog_attach_flags, 0, "prog_attach_flags");
+ ASSERT_EQ(attr.query.link_ids, 0, "link_ids");
+ ASSERT_EQ(attr.query.link_attach_flags, 0, "link_attach_flags");
+
+ /* Test 5: Query with NULL prog_ids array but with count > 0 */
+ memset(&attr, 0, attr_size);
+ attr.query.target_ifindex = loopback;
+ attr.query.attach_type = target;
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ attr.query.count = sizeof(prog_ids);
+
+ err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup4;
+
+ ASSERT_EQ(attr.query.count, 4, "count");
+ ASSERT_EQ(attr.query.revision, 5, "revision");
+ ASSERT_EQ(attr.query.query_flags, 0, "query_flags");
+ ASSERT_EQ(attr.query.attach_flags, 0, "attach_flags");
+ ASSERT_EQ(attr.query.target_ifindex, loopback, "target_ifindex");
+ ASSERT_EQ(attr.query.attach_type, target, "attach_type");
+ ASSERT_EQ(prog_ids[0], 0, "prog_ids[0]");
+ ASSERT_EQ(prog_ids[1], 0, "prog_ids[1]");
+ ASSERT_EQ(prog_ids[2], 0, "prog_ids[2]");
+ ASSERT_EQ(prog_ids[3], 0, "prog_ids[3]");
+ ASSERT_EQ(prog_ids[4], 0, "prog_ids[4]");
+ ASSERT_EQ(attr.query.prog_ids, 0, "prog_ids");
+ ASSERT_EQ(attr.query.prog_attach_flags, 0, "prog_attach_flags");
+ ASSERT_EQ(attr.query.link_ids, 0, "link_ids");
+ ASSERT_EQ(attr.query.link_attach_flags, 0, "link_attach_flags");
+
+ /* Test 6: Query with non-NULL prog_ids array but with count == 0 */
+ memset(&attr, 0, attr_size);
+ attr.query.target_ifindex = loopback;
+ attr.query.attach_type = target;
+
+ memset(prog_ids, 0, sizeof(prog_ids));
+ attr.query.prog_ids = ptr_to_u64(prog_ids);
+
+ err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
+ if (!ASSERT_OK(err, "prog_query"))
+ goto cleanup4;
+
+ ASSERT_EQ(attr.query.count, 4, "count");
+ ASSERT_EQ(attr.query.revision, 5, "revision");
+ ASSERT_EQ(attr.query.query_flags, 0, "query_flags");
+ ASSERT_EQ(attr.query.attach_flags, 0, "attach_flags");
+ ASSERT_EQ(attr.query.target_ifindex, loopback, "target_ifindex");
+ ASSERT_EQ(attr.query.attach_type, target, "attach_type");
+ ASSERT_EQ(prog_ids[0], 0, "prog_ids[0]");
+ ASSERT_EQ(prog_ids[1], 0, "prog_ids[1]");
+ ASSERT_EQ(prog_ids[2], 0, "prog_ids[2]");
+ ASSERT_EQ(prog_ids[3], 0, "prog_ids[3]");
+ ASSERT_EQ(prog_ids[4], 0, "prog_ids[4]");
+ ASSERT_EQ(attr.query.prog_ids, ptr_to_u64(prog_ids), "prog_ids");
+ ASSERT_EQ(attr.query.prog_attach_flags, 0, "prog_attach_flags");
+ ASSERT_EQ(attr.query.link_ids, 0, "link_ids");
+ ASSERT_EQ(attr.query.link_attach_flags, 0, "link_attach_flags");
+
+ /* Test 7: Query with invalid flags */
+ attr.query.attach_flags = 0;
+ attr.query.query_flags = 1;
+
+ err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
+ ASSERT_EQ(err, -1, "prog_query_should_fail");
+ ASSERT_EQ(errno, EINVAL, "prog_query_should_fail");
+
+ attr.query.attach_flags = 1;
+ attr.query.query_flags = 0;
+
+ err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, attr_size);
+ ASSERT_EQ(err, -1, "prog_query_should_fail");
+ ASSERT_EQ(errno, EINVAL, "prog_query_should_fail");
+
cleanup4:
err = bpf_prog_detach_opts(fd4, loopback, target, &optd);
ASSERT_OK(err, "prog_detach");
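
The -1/ENOSPC path exercised in Test 3 is what makes the usual two-pass BPF_PROG_QUERY pattern work: a first query with count == 0 reports how many programs are attached, and a second query supplies a buffer of that size. A minimal sketch of that pattern in the same raw-syscall style as the test above (the helper name and error handling are illustrative assumptions, not part of this patch):

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Illustrative helper, not part of the patch: fetch all attached prog IDs
 * for a tcx attach point using the usual two-pass query.
 */
static int query_prog_ids(int ifindex, enum bpf_attach_type type,
			  __u32 **ids, __u32 *cnt)
{
	union bpf_attr attr;
	__u32 count, *buf;
	int err;

	/* First pass: count == 0 and prog_ids == NULL only reports the count
	 * (compare Tests 5 and 6 above).
	 */
	memset(&attr, 0, sizeof(attr));
	attr.query.target_ifindex = ifindex;
	attr.query.attach_type = type;
	err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, sizeof(attr));
	if (err)
		return -errno;
	count = attr.query.count;
	if (!count) {
		*ids = NULL;
		*cnt = 0;
		return 0;
	}

	buf = calloc(count, sizeof(*buf));
	if (!buf)
		return -ENOMEM;

	/* Second pass: provide a buffer of the reported size. If the set grew
	 * in the meantime, this fails with -1/ENOSPC and an updated count --
	 * the case Test 3 above exercises -- and the caller can simply retry.
	 */
	memset(&attr, 0, sizeof(attr));
	attr.query.target_ifindex = ifindex;
	attr.query.attach_type = type;
	attr.query.prog_ids = (__u64)(unsigned long)buf;
	attr.query.count = count;
	err = syscall(__NR_bpf, BPF_PROG_QUERY, &attr, sizeof(attr));
	if (err) {
		free(buf);
		return -errno;
	}

	*ids = buf;
	*cnt = attr.query.count;
	return 0;
}
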
diff --git a/tools/testing/selftests/bpf/prog_tests/test_bpf_ma.c b/tools/testing/selftests/bpf/prog_tests/test_bpf_ma.c
index 0cca4e8ae38e..d3491a84b3b9 100644
--- a/tools/testing/selftests/bpf/prog_tests/test_bpf_ma.c
+++ b/tools/testing/selftests/bpf/prog_tests/test_bpf_ma.c
@@ -9,9 +9,10 @@
#include "test_bpf_ma.skel.h"
-void test_test_bpf_ma(void)
+static void do_bpf_ma_test(const char *name)
{
struct test_bpf_ma *skel;
+ struct bpf_program *prog;
struct btf *btf;
int i, err;
@@ -34,6 +35,11 @@ void test_test_bpf_ma(void)
skel->rodata->data_btf_ids[i] = id;
}
+ prog = bpf_object__find_program_by_name(skel->obj, name);
+ if (!ASSERT_OK_PTR(prog, "invalid prog name"))
+ goto out;
+ bpf_program__set_autoload(prog, true);
+
err = test_bpf_ma__load(skel);
if (!ASSERT_OK(err, "load"))
goto out;
@@ -48,3 +54,15 @@ void test_test_bpf_ma(void)
out:
test_bpf_ma__destroy(skel);
}
+
+void test_test_bpf_ma(void)
+{
+ if (test__start_subtest("batch_alloc_free"))
+ do_bpf_ma_test("test_batch_alloc_free");
+ if (test__start_subtest("free_through_map_free"))
+ do_bpf_ma_test("test_free_through_map_free");
+ if (test__start_subtest("batch_percpu_alloc_free"))
+ do_bpf_ma_test("test_batch_percpu_alloc_free");
+ if (test__start_subtest("percpu_free_through_map_free"))
+ do_bpf_ma_test("test_percpu_free_through_map_free");
+}
diff --git a/tools/testing/selftests/bpf/prog_tests/timer.c b/tools/testing/selftests/bpf/prog_tests/timer.c
index ce2c61d62fc6..760ad96b4be0 100644
--- a/tools/testing/selftests/bpf/prog_tests/timer.c
+++ b/tools/testing/selftests/bpf/prog_tests/timer.c
@@ -15,6 +15,7 @@ static int timer(struct timer *timer_skel)
ASSERT_EQ(timer_skel->data->callback_check, 52, "callback_check1");
ASSERT_EQ(timer_skel->data->callback2_check, 52, "callback2_check1");
+ ASSERT_EQ(timer_skel->bss->pinned_callback_check, 0, "pinned_callback_check1");
prog_fd = bpf_program__fd(timer_skel->progs.test1);
err = bpf_prog_test_run_opts(prog_fd, &topts);
@@ -33,6 +34,9 @@ static int timer(struct timer *timer_skel)
/* check that timer_cb3() was executed twice */
ASSERT_EQ(timer_skel->bss->abs_data, 12, "abs_data");
+ /* check that timer_cb_pinned() was executed twice */
+ ASSERT_EQ(timer_skel->bss->pinned_callback_check, 2, "pinned_callback_check");
+
/* check that there were no errors in timer execution */
ASSERT_EQ(timer_skel->bss->err, 0, "err");
diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe.c b/tools/testing/selftests/bpf/prog_tests/uprobe.c
new file mode 100644
index 000000000000..cf3e0e7a64fa
--- /dev/null
+++ b/tools/testing/selftests/bpf/prog_tests/uprobe.c
@@ -0,0 +1,95 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Hengqi Chen */
+
+#include <test_progs.h>
+#include "test_uprobe.skel.h"
+
+static FILE *urand_spawn(int *pid)
+{
+ FILE *f;
+
+ /* urandom_read's stdout is wired into f */
+ f = popen("./urandom_read 1 report-pid", "r");
+ if (!f)
+ return NULL;
+
+ if (fscanf(f, "%d", pid) != 1) {
+ pclose(f);
+ errno = EINVAL;
+ return NULL;
+ }
+
+ return f;
+}
+
+static int urand_trigger(FILE **urand_pipe)
+{
+ int exit_code;
+
+ /* pclose() waits for the child process to exit and returns its exit code */
+ exit_code = pclose(*urand_pipe);
+ *urand_pipe = NULL;
+
+ return exit_code;
+}
+
+void test_uprobe(void)
+{
+ LIBBPF_OPTS(bpf_uprobe_opts, uprobe_opts);
+ struct test_uprobe *skel;
+ FILE *urand_pipe = NULL;
+ int urand_pid = 0, err;
+
+ skel = test_uprobe__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+ urand_pipe = urand_spawn(&urand_pid);
+ if (!ASSERT_OK_PTR(urand_pipe, "urand_spawn"))
+ goto cleanup;
+
+ skel->bss->my_pid = urand_pid;
+
+ /* Manually attach a uprobe to urandlib_api.
+ * There are two `urandlib_api` symbols in the .dynsym section:
+ * - urandlib_api@LIBURANDOM_READ_1.0.0
+ * - urandlib_api@@LIBURANDOM_READ_2.0.0
+ * Both have global binding and cause a conflict if the user
+ * specifies the symbol name without a version suffix.
+ */
+ uprobe_opts.func_name = "urandlib_api";
+ skel->links.test4 = bpf_program__attach_uprobe_opts(skel->progs.test4,
+ urand_pid,
+ "./liburandom_read.so",
+ 0 /* offset */,
+ &uprobe_opts);
+ if (!ASSERT_ERR_PTR(skel->links.test4, "urandlib_api_attach_conflict"))
+ goto cleanup;
+
+ uprobe_opts.func_name = "urandlib_api@LIBURANDOM_READ_1.0.0";
+ skel->links.test4 = bpf_program__attach_uprobe_opts(skel->progs.test4,
+ urand_pid,
+ "./liburandom_read.so",
+ 0 /* offset */,
+ &uprobe_opts);
+ if (!ASSERT_OK_PTR(skel->links.test4, "urandlib_api_attach_ok"))
+ goto cleanup;
+
+ /* Auto attach 3 u[ret]probes to urandlib_api_sameoffset */
+ err = test_uprobe__attach(skel);
+ if (!ASSERT_OK(err, "skel_attach"))
+ goto cleanup;
+
+ /* trigger urandom_read */
+ ASSERT_OK(urand_trigger(&urand_pipe), "urand_exit_code");
+
+ ASSERT_EQ(skel->bss->test1_result, 1, "urandlib_api_sameoffset");
+ ASSERT_EQ(skel->bss->test2_result, 1, "urandlib_api_sameoffset@v1");
+ ASSERT_EQ(skel->bss->test3_result, 3, "urandlib_api_sameoffset@@v2");
+ ASSERT_EQ(skel->bss->test4_result, 1, "urandlib_api");
+
+cleanup:
+ if (urand_pipe)
+ pclose(urand_pipe);
+ test_uprobe__destroy(skel);
+}
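
The attach conflict checked above ("urandlib_api_attach_conflict") is ordinary ELF symbol versioning: both urandlib_api@LIBURANDOM_READ_1.0.0 and urandlib_api@@LIBURANDOM_READ_2.0.0 are global, so the bare name is ambiguous, while a name with an explicit @VERSION suffix is not. The same distinction is visible from user space with dlsym()/dlvsym(); a small illustrative sketch, not part of the selftests:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
	void *h = dlopen("./liburandom_read.so", RTLD_NOW);

	if (!h)
		return 1;

	/* dlsym() on the bare name resolves to the default ("@@") version,
	 * while dlvsym() picks an explicit one, just like the
	 * "urandlib_api@LIBURANDOM_READ_1.0.0" spelling accepted above.
	 */
	printf("default: %p\n", dlsym(h, "urandlib_api"));
	printf("v1.0.0 : %p\n", dlvsym(h, "urandlib_api", "LIBURANDOM_READ_1.0.0"));

	dlclose(h);
	return 0;
}
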
diff --git a/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c b/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c
index 626c461fa34d..4439ba9392f8 100644
--- a/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c
+++ b/tools/testing/selftests/bpf/prog_tests/xdp_metadata.c
@@ -226,7 +226,7 @@ static int verify_xsk_metadata(struct xsk *xsk)
__u64 comp_addr;
void *data;
__u64 addr;
- __u32 idx;
+ __u32 idx = 0;
int ret;
ret = recvfrom(xsk_socket__fd(xsk->socket), NULL, 0, MSG_DONTWAIT, NULL, NULL);
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_task_vma.c b/tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c
index dd923dc637d5..dd923dc637d5 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_task_vma.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_task_vmas.c
diff --git a/tools/testing/selftests/bpf/progs/bpf_iter_task.c b/tools/testing/selftests/bpf/progs/bpf_iter_tasks.c
index 96131b9a1caa..96131b9a1caa 100644
--- a/tools/testing/selftests/bpf/progs/bpf_iter_task.c
+++ b/tools/testing/selftests/bpf/progs/bpf_iter_tasks.c
diff --git a/tools/testing/selftests/bpf/progs/bpf_misc.h b/tools/testing/selftests/bpf/progs/bpf_misc.h
index 38a57a2e70db..799fff4995d8 100644
--- a/tools/testing/selftests/bpf/progs/bpf_misc.h
+++ b/tools/testing/selftests/bpf/progs/bpf_misc.h
@@ -99,6 +99,9 @@
#elif defined(__TARGET_ARCH_arm64)
#define SYSCALL_WRAPPER 1
#define SYS_PREFIX "__arm64_"
+#elif defined(__TARGET_ARCH_riscv)
+#define SYSCALL_WRAPPER 1
+#define SYS_PREFIX "__riscv_"
#else
#define SYSCALL_WRAPPER 0
#define SYS_PREFIX "__se_"
diff --git a/tools/testing/selftests/bpf/progs/connect_unix_prog.c b/tools/testing/selftests/bpf/progs/connect_unix_prog.c
new file mode 100644
index 000000000000..ca8aa2f116b3
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/connect_unix_prog.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+
+#include <string.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_kfuncs.h"
+
+__u8 SERVUN_REWRITE_ADDRESS[] = "\0bpf_cgroup_unix_test_rewrite";
+
+SEC("cgroup/connect_unix")
+int connect_unix_prog(struct bpf_sock_addr *ctx)
+{
+ struct bpf_sock_addr_kern *sa_kern = bpf_cast_to_kern_ctx(ctx);
+ struct sockaddr_un *sa_kern_unaddr;
+ __u32 unaddrlen = offsetof(struct sockaddr_un, sun_path) +
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1;
+ int ret;
+
+ /* Rewrite destination. */
+ ret = bpf_sock_addr_set_sun_path(sa_kern, SERVUN_REWRITE_ADDRESS,
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1);
+ if (ret)
+ return 0;
+
+ if (sa_kern->uaddrlen != unaddrlen)
+ return 0;
+
+ sa_kern_unaddr = bpf_rdonly_cast(sa_kern->uaddr,
+ bpf_core_type_id_kernel(struct sockaddr_un));
+ if (memcmp(sa_kern_unaddr->sun_path, SERVUN_REWRITE_ADDRESS,
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1) != 0)
+ return 0;
+
+ return 1;
+}
+
+char _license[] SEC("license") = "GPL";
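
The unaddrlen check above follows the usual AF_UNIX abstract-address convention: the address length is offsetof(struct sockaddr_un, sun_path) plus the number of meaningful sun_path bytes, with a leading NUL and no terminating NUL. A small user-space sketch of building the same rewritten address (purely illustrative, not part of this patch):

#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

/* "\0bpf_cgroup_unix_test_rewrite" without the trailing string terminator. */
static const char ABSTRACT_NAME[] = "\0bpf_cgroup_unix_test_rewrite";

static socklen_t fill_abstract_addr(struct sockaddr_un *addr)
{
	size_t name_len = sizeof(ABSTRACT_NAME) - 1;

	memset(addr, 0, sizeof(*addr));
	addr->sun_family = AF_UNIX;
	memcpy(addr->sun_path, ABSTRACT_NAME, name_len);

	/* Same formula as unaddrlen in connect_unix_prog above. */
	return offsetof(struct sockaddr_un, sun_path) + name_len;
}
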
diff --git a/tools/testing/selftests/bpf/progs/exceptions.c b/tools/testing/selftests/bpf/progs/exceptions.c
new file mode 100644
index 000000000000..2811ee842b01
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions.c
@@ -0,0 +1,368 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include <bpf/bpf_endian.h>
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+#ifndef ETH_P_IP
+#define ETH_P_IP 0x0800
+#endif
+
+struct {
+ __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
+ __uint(max_entries, 4);
+ __uint(key_size, sizeof(__u32));
+ __uint(value_size, sizeof(__u32));
+} jmp_table SEC(".maps");
+
+static __noinline int static_func(u64 i)
+{
+ bpf_throw(32);
+ return i;
+}
+
+__noinline int global2static_simple(u64 i)
+{
+ static_func(i + 2);
+ return i - 1;
+}
+
+__noinline int global2static(u64 i)
+{
+ if (i == ETH_P_IP)
+ bpf_throw(16);
+ return static_func(i);
+}
+
+static __noinline int static2global(u64 i)
+{
+ return global2static(i) + i;
+}
+
+SEC("tc")
+int exception_throw_always_1(struct __sk_buff *ctx)
+{
+ bpf_throw(64);
+ return 0;
+}
+
+/* In this case, the global func will never be seen executing after the call to
+ * the static subprog, hence the verifier will DCE the remaining instructions.
+ * Ensure we are resilient to that.
+ */
+SEC("tc")
+int exception_throw_always_2(struct __sk_buff *ctx)
+{
+ return global2static_simple(ctx->protocol);
+}
+
+SEC("tc")
+int exception_throw_unwind_1(struct __sk_buff *ctx)
+{
+ return static2global(bpf_ntohs(ctx->protocol));
+}
+
+SEC("tc")
+int exception_throw_unwind_2(struct __sk_buff *ctx)
+{
+ return static2global(bpf_ntohs(ctx->protocol) - 1);
+}
+
+SEC("tc")
+int exception_throw_default(struct __sk_buff *ctx)
+{
+ bpf_throw(0);
+ return 1;
+}
+
+SEC("tc")
+int exception_throw_default_value(struct __sk_buff *ctx)
+{
+ bpf_throw(5);
+ return 1;
+}
+
+SEC("tc")
+int exception_tail_call_target(struct __sk_buff *ctx)
+{
+ bpf_throw(16);
+ return 0;
+}
+
+static __noinline
+int exception_tail_call_subprog(struct __sk_buff *ctx)
+{
+ volatile int ret = 10;
+
+ bpf_tail_call_static(ctx, &jmp_table, 0);
+ return ret;
+}
+
+SEC("tc")
+int exception_tail_call(struct __sk_buff *ctx) {
+ volatile int ret = 0;
+
+ ret = exception_tail_call_subprog(ctx);
+ return ret + 8;
+}
+
+__noinline int exception_ext_global(struct __sk_buff *ctx)
+{
+ volatile int ret = 0;
+
+ return ret;
+}
+
+static __noinline int exception_ext_static(struct __sk_buff *ctx)
+{
+ return exception_ext_global(ctx);
+}
+
+SEC("tc")
+int exception_ext(struct __sk_buff *ctx)
+{
+ return exception_ext_static(ctx);
+}
+
+__noinline int exception_cb_mod_global(u64 cookie)
+{
+ volatile int ret = 0;
+
+ return ret;
+}
+
+/* Example of how the exception callback supplied during verification can still
+ * introduce extensions by calling into dummy global functions, and alter runtime
+ * behavior.
+ *
+ * Right now we don't allow freplace attachment to the exception callback itself,
+ * but if the need arises this restriction is technically feasible to relax in
+ * the future.
+ */
+__noinline int exception_cb_mod(u64 cookie)
+{
+ return exception_cb_mod_global(cookie) + cookie + 10;
+}
+
+SEC("tc")
+__exception_cb(exception_cb_mod)
+int exception_ext_mod_cb_runtime(struct __sk_buff *ctx)
+{
+ bpf_throw(25);
+ return 0;
+}
+
+__noinline static int subprog(struct __sk_buff *ctx)
+{
+ return bpf_ktime_get_ns();
+}
+
+__noinline static int throwing_subprog(struct __sk_buff *ctx)
+{
+ if (ctx->tstamp)
+ bpf_throw(0);
+ return bpf_ktime_get_ns();
+}
+
+__noinline int global_subprog(struct __sk_buff *ctx)
+{
+ return bpf_ktime_get_ns();
+}
+
+__noinline int throwing_global_subprog(struct __sk_buff *ctx)
+{
+ if (ctx->tstamp)
+ bpf_throw(0);
+ return bpf_ktime_get_ns();
+}
+
+SEC("tc")
+int exception_throw_subprog(struct __sk_buff *ctx)
+{
+ switch (ctx->protocol) {
+ case 1:
+ return subprog(ctx);
+ case 2:
+ return global_subprog(ctx);
+ case 3:
+ return throwing_subprog(ctx);
+ case 4:
+ return throwing_global_subprog(ctx);
+ default:
+ break;
+ }
+ bpf_throw(1);
+ return 0;
+}
+
+__noinline int assert_nz_gfunc(u64 c)
+{
+ volatile u64 cookie = c;
+
+ bpf_assert(cookie != 0);
+ return 0;
+}
+
+__noinline int assert_zero_gfunc(u64 c)
+{
+ volatile u64 cookie = c;
+
+ bpf_assert_eq(cookie, 0);
+ return 0;
+}
+
+__noinline int assert_neg_gfunc(s64 c)
+{
+ volatile s64 cookie = c;
+
+ bpf_assert_lt(cookie, 0);
+ return 0;
+}
+
+__noinline int assert_pos_gfunc(s64 c)
+{
+ volatile s64 cookie = c;
+
+ bpf_assert_gt(cookie, 0);
+ return 0;
+}
+
+__noinline int assert_negeq_gfunc(s64 c)
+{
+ volatile s64 cookie = c;
+
+ bpf_assert_le(cookie, -1);
+ return 0;
+}
+
+__noinline int assert_poseq_gfunc(s64 c)
+{
+ volatile s64 cookie = c;
+
+ bpf_assert_ge(cookie, 1);
+ return 0;
+}
+
+__noinline int assert_nz_gfunc_with(u64 c)
+{
+ volatile u64 cookie = c;
+
+ bpf_assert_with(cookie != 0, cookie + 100);
+ return 0;
+}
+
+__noinline int assert_zero_gfunc_with(u64 c)
+{
+ volatile u64 cookie = c;
+
+ bpf_assert_eq_with(cookie, 0, cookie + 100);
+ return 0;
+}
+
+__noinline int assert_neg_gfunc_with(s64 c)
+{
+ volatile s64 cookie = c;
+
+ bpf_assert_lt_with(cookie, 0, cookie + 100);
+ return 0;
+}
+
+__noinline int assert_pos_gfunc_with(s64 c)
+{
+ volatile s64 cookie = c;
+
+ bpf_assert_gt_with(cookie, 0, cookie + 100);
+ return 0;
+}
+
+__noinline int assert_negeq_gfunc_with(s64 c)
+{
+ volatile s64 cookie = c;
+
+ bpf_assert_le_with(cookie, -1, cookie + 100);
+ return 0;
+}
+
+__noinline int assert_poseq_gfunc_with(s64 c)
+{
+ volatile s64 cookie = c;
+
+ bpf_assert_ge_with(cookie, 1, cookie + 100);
+ return 0;
+}
+
+#define check_assert(name, cookie, tag) \
+SEC("tc") \
+int exception##tag##name(struct __sk_buff *ctx) \
+{ \
+ return name(cookie) + 1; \
+}
+
+check_assert(assert_nz_gfunc, 5, _);
+check_assert(assert_zero_gfunc, 0, _);
+check_assert(assert_neg_gfunc, -100, _);
+check_assert(assert_pos_gfunc, 100, _);
+check_assert(assert_negeq_gfunc, -1, _);
+check_assert(assert_poseq_gfunc, 1, _);
+
+check_assert(assert_nz_gfunc_with, 5, _);
+check_assert(assert_zero_gfunc_with, 0, _);
+check_assert(assert_neg_gfunc_with, -100, _);
+check_assert(assert_pos_gfunc_with, 100, _);
+check_assert(assert_negeq_gfunc_with, -1, _);
+check_assert(assert_poseq_gfunc_with, 1, _);
+
+check_assert(assert_nz_gfunc, 0, _bad_);
+check_assert(assert_zero_gfunc, 5, _bad_);
+check_assert(assert_neg_gfunc, 100, _bad_);
+check_assert(assert_pos_gfunc, -100, _bad_);
+check_assert(assert_negeq_gfunc, 1, _bad_);
+check_assert(assert_poseq_gfunc, -1, _bad_);
+
+check_assert(assert_nz_gfunc_with, 0, _bad_);
+check_assert(assert_zero_gfunc_with, 5, _bad_);
+check_assert(assert_neg_gfunc_with, 100, _bad_);
+check_assert(assert_pos_gfunc_with, -100, _bad_);
+check_assert(assert_negeq_gfunc_with, 1, _bad_);
+check_assert(assert_poseq_gfunc_with, -1, _bad_);
+
+SEC("tc")
+int exception_assert_range(struct __sk_buff *ctx)
+{
+ u64 time = bpf_ktime_get_ns();
+
+ bpf_assert_range(time, 0, ~0ULL);
+ return 1;
+}
+
+SEC("tc")
+int exception_assert_range_with(struct __sk_buff *ctx)
+{
+ u64 time = bpf_ktime_get_ns();
+
+ bpf_assert_range_with(time, 0, ~0ULL, 10);
+ return 1;
+}
+
+SEC("tc")
+int exception_bad_assert_range(struct __sk_buff *ctx)
+{
+ u64 time = bpf_ktime_get_ns();
+
+ bpf_assert_range(time, -100, 100);
+ return 1;
+}
+
+SEC("tc")
+int exception_bad_assert_range_with(struct __sk_buff *ctx)
+{
+ u64 time = bpf_ktime_get_ns();
+
+ bpf_assert_range_with(time, -1000, 1000, 10);
+ return 1;
+}
+
+char _license[] SEC("license") = "GPL";
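
For reference, the check_assert() macro above just stamps out one tc program per helper/cookie pair. For example, check_assert(assert_nz_gfunc, 5, _) expands to:

SEC("tc")
int exception_assert_nz_gfunc(struct __sk_buff *ctx)
{
	return assert_nz_gfunc(5) + 1;
}

while check_assert(assert_nz_gfunc, 0, _bad_) generates exception_bad_assert_nz_gfunc(), whose zero cookie makes the bpf_assert(cookie != 0) in the helper throw.
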
diff --git a/tools/testing/selftests/bpf/progs/exceptions_assert.c b/tools/testing/selftests/bpf/progs/exceptions_assert.c
new file mode 100644
index 000000000000..e1e5c54a6a11
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions_assert.c
@@ -0,0 +1,135 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <limits.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include <bpf/bpf_endian.h>
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+#define check_assert(type, op, name, value) \
+ SEC("?tc") \
+ __log_level(2) __failure \
+ int check_assert_##op##_##name(void *ctx) \
+ { \
+ type num = bpf_ktime_get_ns(); \
+ bpf_assert_##op(num, value); \
+ return *(u64 *)num; \
+ }
+
+__msg(": R0_w=-2147483648 R10=fp0")
+check_assert(s64, eq, int_min, INT_MIN);
+__msg(": R0_w=2147483647 R10=fp0")
+check_assert(s64, eq, int_max, INT_MAX);
+__msg(": R0_w=0 R10=fp0")
+check_assert(s64, eq, zero, 0);
+__msg(": R0_w=-9223372036854775808 R1_w=-9223372036854775808 R10=fp0")
+check_assert(s64, eq, llong_min, LLONG_MIN);
+__msg(": R0_w=9223372036854775807 R1_w=9223372036854775807 R10=fp0")
+check_assert(s64, eq, llong_max, LLONG_MAX);
+
+__msg(": R0_w=scalar(smax=2147483646) R10=fp0")
+check_assert(s64, lt, pos, INT_MAX);
+__msg(": R0_w=scalar(smax=-1,umin=9223372036854775808,var_off=(0x8000000000000000; 0x7fffffffffffffff))")
+check_assert(s64, lt, zero, 0);
+__msg(": R0_w=scalar(smax=-2147483649,umin=9223372036854775808,umax=18446744071562067967,var_off=(0x8000000000000000; 0x7fffffffffffffff))")
+check_assert(s64, lt, neg, INT_MIN);
+
+__msg(": R0_w=scalar(smax=2147483647) R10=fp0")
+check_assert(s64, le, pos, INT_MAX);
+__msg(": R0_w=scalar(smax=0) R10=fp0")
+check_assert(s64, le, zero, 0);
+__msg(": R0_w=scalar(smax=-2147483648,umin=9223372036854775808,umax=18446744071562067968,var_off=(0x8000000000000000; 0x7fffffffffffffff))")
+check_assert(s64, le, neg, INT_MIN);
+
+__msg(": R0_w=scalar(smin=umin=2147483648,umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff))")
+check_assert(s64, gt, pos, INT_MAX);
+__msg(": R0_w=scalar(smin=umin=1,umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff))")
+check_assert(s64, gt, zero, 0);
+__msg(": R0_w=scalar(smin=-2147483647) R10=fp0")
+check_assert(s64, gt, neg, INT_MIN);
+
+__msg(": R0_w=scalar(smin=umin=2147483647,umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff))")
+check_assert(s64, ge, pos, INT_MAX);
+__msg(": R0_w=scalar(smin=0,umax=9223372036854775807,var_off=(0x0; 0x7fffffffffffffff)) R10=fp0")
+check_assert(s64, ge, zero, 0);
+__msg(": R0_w=scalar(smin=-2147483648) R10=fp0")
+check_assert(s64, ge, neg, INT_MIN);
+
+SEC("?tc")
+__log_level(2) __failure
+__msg(": R0=0 R1=ctx(off=0,imm=0) R2=scalar(smin=smin32=-2147483646,smax=smax32=2147483645) R10=fp0")
+int check_assert_range_s64(struct __sk_buff *ctx)
+{
+ struct bpf_sock *sk = ctx->sk;
+ s64 num;
+
+ _Static_assert(_Generic((sk->rx_queue_mapping), s32: 1, default: 0), "type match");
+ if (!sk)
+ return 0;
+ num = sk->rx_queue_mapping;
+ bpf_assert_range(num, INT_MIN + 2, INT_MAX - 2);
+ return *((u8 *)ctx + num);
+}
+
+SEC("?tc")
+__log_level(2) __failure
+__msg(": R1=ctx(off=0,imm=0) R2=scalar(smin=umin=smin32=umin32=4096,smax=umax=smax32=umax32=8192,var_off=(0x0; 0x3fff))")
+int check_assert_range_u64(struct __sk_buff *ctx)
+{
+ u64 num = ctx->len;
+
+ bpf_assert_range(num, 4096, 8192);
+ return *((u8 *)ctx + num);
+}
+
+SEC("?tc")
+__log_level(2) __failure
+__msg(": R0=0 R1=ctx(off=0,imm=0) R2=4096 R10=fp0")
+int check_assert_single_range_s64(struct __sk_buff *ctx)
+{
+ struct bpf_sock *sk = ctx->sk;
+ s64 num;
+
+ _Static_assert(_Generic((sk->rx_queue_mapping), s32: 1, default: 0), "type match");
+ if (!sk)
+ return 0;
+ num = sk->rx_queue_mapping;
+
+ bpf_assert_range(num, 4096, 4096);
+ return *((u8 *)ctx + num);
+}
+
+SEC("?tc")
+__log_level(2) __failure
+__msg(": R1=ctx(off=0,imm=0) R2=4096 R10=fp0")
+int check_assert_single_range_u64(struct __sk_buff *ctx)
+{
+ u64 num = ctx->len;
+
+ bpf_assert_range(num, 4096, 4096);
+ return *((u8 *)ctx + num);
+}
+
+SEC("?tc")
+__log_level(2) __failure
+__msg(": R1=pkt(off=64,r=64,imm=0) R2=pkt_end(off=0,imm=0) R6=pkt(off=0,r=64,imm=0) R10=fp0")
+int check_assert_generic(struct __sk_buff *ctx)
+{
+ u8 *data_end = (void *)(long)ctx->data_end;
+ u8 *data = (void *)(long)ctx->data;
+
+ bpf_assert(data + 64 <= data_end);
+ return data[128];
+}
+
+SEC("?fentry/bpf_check")
+__failure __msg("At program exit the register R0 has value (0x40; 0x0)")
+int check_assert_with_return(void *ctx)
+{
+ bpf_assert_with(!ctx, 64);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
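
Here the check_assert() macro similarly generates one non-autoloaded program per operator/bound pair, and the __msg() annotation preceding each instantiation is the register state the verifier log is expected to contain once the assertion has constrained the value. For example, check_assert(s64, eq, int_min, INT_MIN) expands to:

SEC("?tc")
__log_level(2) __failure
int check_assert_eq_int_min(void *ctx)
{
	s64 num = bpf_ktime_get_ns();

	bpf_assert_eq(num, INT_MIN);
	return *(u64 *)num;
}
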
diff --git a/tools/testing/selftests/bpf/progs/exceptions_ext.c b/tools/testing/selftests/bpf/progs/exceptions_ext.c
new file mode 100644
index 000000000000..743c05185d9b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions_ext.c
@@ -0,0 +1,72 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_helpers.h>
+#include "bpf_experimental.h"
+
+SEC("?fentry")
+int pfentry(void *ctx)
+{
+ return 0;
+}
+
+SEC("?fentry")
+int throwing_fentry(void *ctx)
+{
+ bpf_throw(0);
+ return 0;
+}
+
+__noinline int exception_cb(u64 cookie)
+{
+ return cookie + 64;
+}
+
+SEC("?freplace")
+int extension(struct __sk_buff *ctx)
+{
+ return 0;
+}
+
+SEC("?freplace")
+__exception_cb(exception_cb)
+int throwing_exception_cb_extension(u64 cookie)
+{
+ bpf_throw(32);
+ return 0;
+}
+
+SEC("?freplace")
+__exception_cb(exception_cb)
+int throwing_extension(struct __sk_buff *ctx)
+{
+ bpf_throw(64);
+ return 0;
+}
+
+SEC("?fexit")
+int pfexit(void *ctx)
+{
+ return 0;
+}
+
+SEC("?fexit")
+int throwing_fexit(void *ctx)
+{
+ bpf_throw(0);
+ return 0;
+}
+
+SEC("?fmod_ret")
+int pfmod_ret(void *ctx)
+{
+ return 0;
+}
+
+SEC("?fmod_ret")
+int throwing_fmod_ret(void *ctx)
+{
+ bpf_throw(0);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
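
How such an freplace extension gets wired up at runtime lives in the prog_tests harness rather than here; a rough sketch using standard libbpf calls (the flow and helper name are assumptions for illustration, not the actual test code):

#include <errno.h>
#include <bpf/libbpf.h>

/* Hypothetical helper: attach the "extension" freplace program from the
 * object above on top of the exception_ext_global() subprog of an
 * already-loaded target program.
 */
static int attach_extension(struct bpf_object *ext_obj, int target_prog_fd)
{
	struct bpf_program *ext;
	struct bpf_link *link;
	int err;

	ext = bpf_object__find_program_by_name(ext_obj, "extension");
	if (!ext)
		return -ENOENT;

	/* SEC("?freplace") programs are not auto-loaded; opt this one in and
	 * point it at the function it should replace before loading.
	 */
	bpf_program__set_autoload(ext, true);
	err = bpf_program__set_attach_target(ext, target_prog_fd,
					     "exception_ext_global");
	if (err)
		return err;

	err = bpf_object__load(ext_obj);
	if (err)
		return err;

	link = bpf_program__attach_freplace(ext, target_prog_fd,
					    "exception_ext_global");
	return link ? 0 : -errno;
}
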
diff --git a/tools/testing/selftests/bpf/progs/exceptions_fail.c b/tools/testing/selftests/bpf/progs/exceptions_fail.c
new file mode 100644
index 000000000000..4c39e920dac2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/exceptions_fail.c
@@ -0,0 +1,347 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+extern void bpf_rcu_read_lock(void) __ksym;
+
+#define private(name) SEC(".bss." #name) __hidden __attribute__((aligned(8)))
+
+struct foo {
+ struct bpf_rb_node node;
+};
+
+struct hmap_elem {
+ struct bpf_timer timer;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(max_entries, 64);
+ __type(key, int);
+ __type(value, struct hmap_elem);
+} hmap SEC(".maps");
+
+private(A) struct bpf_spin_lock lock;
+private(A) struct bpf_rb_root rbtree __contains(foo, node);
+
+__noinline void *exception_cb_bad_ret_type(u64 cookie)
+{
+ return NULL;
+}
+
+__noinline int exception_cb_bad_arg_0(void)
+{
+ return 0;
+}
+
+__noinline int exception_cb_bad_arg_2(int a, int b)
+{
+ return 0;
+}
+
+__noinline int exception_cb_ok_arg_small(int a)
+{
+ return 0;
+}
+
+SEC("?tc")
+__exception_cb(exception_cb_bad_ret_type)
+__failure __msg("Global function exception_cb_bad_ret_type() doesn't return scalar.")
+int reject_exception_cb_type_1(struct __sk_buff *ctx)
+{
+ bpf_throw(0);
+ return 0;
+}
+
+SEC("?tc")
+__exception_cb(exception_cb_bad_arg_0)
+__failure __msg("exception cb only supports single integer argument")
+int reject_exception_cb_type_2(struct __sk_buff *ctx)
+{
+ bpf_throw(0);
+ return 0;
+}
+
+SEC("?tc")
+__exception_cb(exception_cb_bad_arg_2)
+__failure __msg("exception cb only supports single integer argument")
+int reject_exception_cb_type_3(struct __sk_buff *ctx)
+{
+ bpf_throw(0);
+ return 0;
+}
+
+SEC("?tc")
+__exception_cb(exception_cb_ok_arg_small)
+__success
+int reject_exception_cb_type_4(struct __sk_buff *ctx)
+{
+ bpf_throw(0);
+ return 0;
+}
+
+__noinline
+static int timer_cb(void *map, int *key, struct bpf_timer *timer)
+{
+ bpf_throw(0);
+ return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot be called from callback subprog")
+int reject_async_callback_throw(struct __sk_buff *ctx)
+{
+ struct hmap_elem *elem;
+
+ elem = bpf_map_lookup_elem(&hmap, &(int){0});
+ if (!elem)
+ return 0;
+ return bpf_timer_set_callback(&elem->timer, timer_cb);
+}
+
+__noinline static int subprog_lock(struct __sk_buff *ctx)
+{
+ volatile int ret = 0;
+
+ bpf_spin_lock(&lock);
+ if (ctx->len)
+ bpf_throw(0);
+ return ret;
+}
+
+SEC("?tc")
+__failure __msg("function calls are not allowed while holding a lock")
+int reject_with_lock(void *ctx)
+{
+ bpf_spin_lock(&lock);
+ bpf_throw(0);
+ return 0;
+}
+
+SEC("?tc")
+__failure __msg("function calls are not allowed while holding a lock")
+int reject_subprog_with_lock(void *ctx)
+{
+ return subprog_lock(ctx);
+}
+
+SEC("?tc")
+__failure __msg("bpf_rcu_read_unlock is missing")
+int reject_with_rcu_read_lock(void *ctx)
+{
+ bpf_rcu_read_lock();
+ bpf_throw(0);
+ return 0;
+}
+
+__noinline static int throwing_subprog(struct __sk_buff *ctx)
+{
+ if (ctx->len)
+ bpf_throw(0);
+ return 0;
+}
+
+SEC("?tc")
+__failure __msg("bpf_rcu_read_unlock is missing")
+int reject_subprog_with_rcu_read_lock(void *ctx)
+{
+ bpf_rcu_read_lock();
+ return throwing_subprog(ctx);
+}
+
+static bool rbless(struct bpf_rb_node *n1, const struct bpf_rb_node *n2)
+{
+ bpf_throw(0);
+ return true;
+}
+
+SEC("?tc")
+__failure __msg("function calls are not allowed while holding a lock")
+int reject_with_rbtree_add_throw(void *ctx)
+{
+ struct foo *f;
+
+ f = bpf_obj_new(typeof(*f));
+ if (!f)
+ return 0;
+ bpf_spin_lock(&lock);
+ bpf_rbtree_add(&rbtree, &f->node, rbless);
+ return 0;
+}
+
+SEC("?tc")
+__failure __msg("Unreleased reference")
+int reject_with_reference(void *ctx)
+{
+ struct foo *f;
+
+ f = bpf_obj_new(typeof(*f));
+ if (!f)
+ return 0;
+ bpf_throw(0);
+ return 0;
+}
+
+__noinline static int subprog_ref(struct __sk_buff *ctx)
+{
+ struct foo *f;
+
+ f = bpf_obj_new(typeof(*f));
+ if (!f)
+ return 0;
+ bpf_throw(0);
+ return 0;
+}
+
+__noinline static int subprog_cb_ref(u32 i, void *ctx)
+{
+ bpf_throw(0);
+ return 0;
+}
+
+SEC("?tc")
+__failure __msg("Unreleased reference")
+int reject_with_cb_reference(void *ctx)
+{
+ struct foo *f;
+
+ f = bpf_obj_new(typeof(*f));
+ if (!f)
+ return 0;
+ bpf_loop(5, subprog_cb_ref, NULL, 0);
+ return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot be called from callback")
+int reject_with_cb(void *ctx)
+{
+ bpf_loop(5, subprog_cb_ref, NULL, 0);
+ return 0;
+}
+
+SEC("?tc")
+__failure __msg("Unreleased reference")
+int reject_with_subprog_reference(void *ctx)
+{
+ return subprog_ref(ctx) + 1;
+}
+
+__noinline int throwing_exception_cb(u64 c)
+{
+ bpf_throw(0);
+ return c;
+}
+
+__noinline int exception_cb1(u64 c)
+{
+ return c;
+}
+
+__noinline int exception_cb2(u64 c)
+{
+ return c;
+}
+
+static __noinline int static_func(struct __sk_buff *ctx)
+{
+ return exception_cb1(ctx->tstamp);
+}
+
+__noinline int global_func(struct __sk_buff *ctx)
+{
+ return exception_cb1(ctx->tstamp);
+}
+
+SEC("?tc")
+__exception_cb(throwing_exception_cb)
+__failure __msg("cannot be called from callback subprog")
+int reject_throwing_exception_cb(struct __sk_buff *ctx)
+{
+ return 0;
+}
+
+SEC("?tc")
+__exception_cb(exception_cb1)
+__failure __msg("cannot call exception cb directly")
+int reject_exception_cb_call_global_func(struct __sk_buff *ctx)
+{
+ return global_func(ctx);
+}
+
+SEC("?tc")
+__exception_cb(exception_cb1)
+__failure __msg("cannot call exception cb directly")
+int reject_exception_cb_call_static_func(struct __sk_buff *ctx)
+{
+ return static_func(ctx);
+}
+
+SEC("?tc")
+__exception_cb(exception_cb1)
+__exception_cb(exception_cb2)
+__failure __msg("multiple exception callback tags for main subprog")
+int reject_multiple_exception_cb(struct __sk_buff *ctx)
+{
+ bpf_throw(0);
+ return 16;
+}
+
+__noinline int exception_cb_bad_ret(u64 c)
+{
+ return c;
+}
+
+SEC("?fentry/bpf_check")
+__exception_cb(exception_cb_bad_ret)
+__failure __msg("At program exit the register R0 has unknown scalar value should")
+int reject_set_exception_cb_bad_ret1(void *ctx)
+{
+ return 0;
+}
+
+SEC("?fentry/bpf_check")
+__failure __msg("At program exit the register R0 has value (0x40; 0x0) should")
+int reject_set_exception_cb_bad_ret2(void *ctx)
+{
+ bpf_throw(64);
+ return 0;
+}
+
+__noinline static int loop_cb1(u32 index, int *ctx)
+{
+ bpf_throw(0);
+ return 0;
+}
+
+__noinline static int loop_cb2(u32 index, int *ctx)
+{
+ bpf_throw(0);
+ return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot be called from callback")
+int reject_exception_throw_cb(struct __sk_buff *ctx)
+{
+ bpf_loop(5, loop_cb1, NULL, 0);
+ return 0;
+}
+
+SEC("?tc")
+__failure __msg("cannot be called from callback")
+int reject_exception_throw_cb_diff(struct __sk_buff *ctx)
+{
+ if (ctx->protocol)
+ bpf_loop(5, loop_cb1, NULL, 0);
+ else
+ bpf_loop(5, loop_cb2, NULL, 0);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
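
The private() macro used above places each annotated global into a hidden, 8-byte-aligned, named .bss section, so the spin lock and the rb-tree root it guards share one section (.bss.A here). For example, private(A) struct bpf_spin_lock lock; expands to:

SEC(".bss.A") __hidden __attribute__((aligned(8))) struct bpf_spin_lock lock;
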
diff --git a/tools/testing/selftests/bpf/progs/getpeername_unix_prog.c b/tools/testing/selftests/bpf/progs/getpeername_unix_prog.c
new file mode 100644
index 000000000000..9c078f34bbb2
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/getpeername_unix_prog.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+
+#include <string.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_kfuncs.h"
+
+__u8 SERVUN_REWRITE_ADDRESS[] = "\0bpf_cgroup_unix_test_rewrite";
+
+SEC("cgroup/getpeername_unix")
+int getpeername_unix_prog(struct bpf_sock_addr *ctx)
+{
+ struct bpf_sock_addr_kern *sa_kern = bpf_cast_to_kern_ctx(ctx);
+ struct sockaddr_un *sa_kern_unaddr;
+ __u32 unaddrlen = offsetof(struct sockaddr_un, sun_path) +
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1;
+ int ret;
+
+ ret = bpf_sock_addr_set_sun_path(sa_kern, SERVUN_REWRITE_ADDRESS,
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1);
+ if (ret)
+ return 1;
+
+ if (sa_kern->uaddrlen != unaddrlen)
+ return 1;
+
+ sa_kern_unaddr = bpf_rdonly_cast(sa_kern->uaddr,
+ bpf_core_type_id_kernel(struct sockaddr_un));
+ if (memcmp(sa_kern_unaddr->sun_path, SERVUN_REWRITE_ADDRESS,
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1) != 0)
+ return 1;
+
+ return 1;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/getsockname_unix_prog.c b/tools/testing/selftests/bpf/progs/getsockname_unix_prog.c
new file mode 100644
index 000000000000..ac7145111497
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/getsockname_unix_prog.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+
+#include <string.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_kfuncs.h"
+
+__u8 SERVUN_REWRITE_ADDRESS[] = "\0bpf_cgroup_unix_test_rewrite";
+
+SEC("cgroup/getsockname_unix")
+int getsockname_unix_prog(struct bpf_sock_addr *ctx)
+{
+ struct bpf_sock_addr_kern *sa_kern = bpf_cast_to_kern_ctx(ctx);
+ struct sockaddr_un *sa_kern_unaddr;
+ __u32 unaddrlen = offsetof(struct sockaddr_un, sun_path) +
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1;
+ int ret;
+
+ ret = bpf_sock_addr_set_sun_path(sa_kern, SERVUN_REWRITE_ADDRESS,
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1);
+ if (ret)
+ return 1;
+
+ if (sa_kern->uaddrlen != unaddrlen)
+ return 1;
+
+ sa_kern_unaddr = bpf_rdonly_cast(sa_kern->uaddr,
+ bpf_core_type_id_kernel(struct sockaddr_un));
+ if (memcmp(sa_kern_unaddr->sun_path, SERVUN_REWRITE_ADDRESS,
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1) != 0)
+ return 1;
+
+ return 1;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/iters.c b/tools/testing/selftests/bpf/progs/iters.c
index 6b9b3c56f009..c20c4e38b71c 100644
--- a/tools/testing/selftests/bpf/progs/iters.c
+++ b/tools/testing/selftests/bpf/progs/iters.c
@@ -14,6 +14,13 @@ int my_pid;
int arr[256];
int small_arr[16] SEC(".data.small_arr");
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(max_entries, 10);
+ __type(key, int);
+ __type(value, int);
+} amap SEC(".maps");
+
#ifdef REAL_TEST
#define MY_PID_GUARD() if (my_pid != (bpf_get_current_pid_tgid() >> 32)) return 0
#else
@@ -716,4 +723,692 @@ int iter_pass_iter_ptr_to_subprog(const void *ctx)
return 0;
}
+SEC("?raw_tp")
+__failure
+__msg("R1 type=scalar expected=fp")
+__naked int delayed_read_mark(void)
+{
+ /* This is equivalent to C program below.
+ * The call to bpf_iter_num_next() is reachable with r7 values &fp[-16] and 0xdead.
+ * State with r7=&fp[-16] is visited first and follows r6 != 42 ... continue branch.
+ * At this point the iterator next() call is reached with an r7 that has no read mark.
+ * The loop body with r7=0xdead would only be visited if the verifier decided to continue
+ * with a second loop iteration. The absence of a read mark on r7 might affect the state
+ * equivalence logic used for iterator convergence tracking.
+ *
+ * r7 = &fp[-16]
+ * fp[-16] = 0
+ * r6 = bpf_get_prandom_u32()
+ * bpf_iter_num_new(&fp[-8], 0, 10)
+ * while (bpf_iter_num_next(&fp[-8])) {
+ * r6++
+ * if (r6 != 42) {
+ * r7 = 0xdead
+ * continue;
+ * }
+ * bpf_probe_read_user(r7, 8, 0xdeadbeef); // this is not safe
+ * }
+ * bpf_iter_num_destroy(&fp[-8])
+ * return 0
+ */
+ asm volatile (
+ "r7 = r10;"
+ "r7 += -16;"
+ "r0 = 0;"
+ "*(u64 *)(r7 + 0) = r0;"
+ "call %[bpf_get_prandom_u32];"
+ "r6 = r0;"
+ "r1 = r10;"
+ "r1 += -8;"
+ "r2 = 0;"
+ "r3 = 10;"
+ "call %[bpf_iter_num_new];"
+ "1:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_next];"
+ "if r0 == 0 goto 2f;"
+ "r6 += 1;"
+ "if r6 != 42 goto 3f;"
+ "r7 = 0xdead;"
+ "goto 1b;"
+ "3:"
+ "r1 = r7;"
+ "r2 = 8;"
+ "r3 = 0xdeadbeef;"
+ "call %[bpf_probe_read_user];"
+ "goto 1b;"
+ "2:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_destroy];"
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm(bpf_get_prandom_u32),
+ __imm(bpf_iter_num_new),
+ __imm(bpf_iter_num_next),
+ __imm(bpf_iter_num_destroy),
+ __imm(bpf_probe_read_user)
+ : __clobber_all
+ );
+}
+
+SEC("?raw_tp")
+__failure
+__msg("math between fp pointer and register with unbounded")
+__naked int delayed_precision_mark(void)
+{
+ /* This is equivalent to C program below.
+ * The test is similar to delayed_read_mark but verifies that incomplete
+ * precision marks don't fool the verifier.
+ * The call to bpf_iter_num_next() is reachable with r7 values -16 and -32.
+ * State with r7=-16 is visited first and follows the r6 != 42 ... continue branch.
+ * At this point the iterator next() call is reached with an r7 that has no read
+ * or precision marks.
+ * The loop body with r7=-32 would only be visited if the verifier decided to continue
+ * with a second loop iteration. The absence of a precision mark on r7 might affect
+ * the state equivalence logic used for iterator convergence tracking.
+ *
+ * r8 = 0
+ * fp[-16] = 0
+ * r7 = -16
+ * r6 = bpf_get_prandom_u32()
+ * bpf_iter_num_new(&fp[-8], 0, 10)
+ * while (bpf_iter_num_next(&fp[-8])) {
+ * if (r6 != 42) {
+ * r7 = -32
+ * r6 = bpf_get_prandom_u32()
+ * continue;
+ * }
+ * r0 = r10
+ * r0 += r7
+ * r8 = *(u64 *)(r0 + 0) // this is not safe
+ * r6 = bpf_get_prandom_u32()
+ * }
+ * bpf_iter_num_destroy(&fp[-8])
+ * return r8
+ */
+ asm volatile (
+ "r8 = 0;"
+ "*(u64 *)(r10 - 16) = r8;"
+ "r7 = -16;"
+ "call %[bpf_get_prandom_u32];"
+ "r6 = r0;"
+ "r1 = r10;"
+ "r1 += -8;"
+ "r2 = 0;"
+ "r3 = 10;"
+ "call %[bpf_iter_num_new];"
+ "1:"
+ "r1 = r10;"
+ "r1 += -8;\n"
+ "call %[bpf_iter_num_next];"
+ "if r0 == 0 goto 2f;"
+ "if r6 != 42 goto 3f;"
+ "r7 = -32;"
+ "call %[bpf_get_prandom_u32];"
+ "r6 = r0;"
+ "goto 1b;\n"
+ "3:"
+ "r0 = r10;"
+ "r0 += r7;"
+ "r8 = *(u64 *)(r0 + 0);"
+ "call %[bpf_get_prandom_u32];"
+ "r6 = r0;"
+ "goto 1b;\n"
+ "2:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_destroy];"
+ "r0 = r8;"
+ "exit;"
+ :
+ : __imm(bpf_get_prandom_u32),
+ __imm(bpf_iter_num_new),
+ __imm(bpf_iter_num_next),
+ __imm(bpf_iter_num_destroy),
+ __imm(bpf_probe_read_user)
+ : __clobber_all
+ );
+}
+
+SEC("?raw_tp")
+__failure
+__msg("math between fp pointer and register with unbounded")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked int loop_state_deps1(void)
+{
+ /* This is equivalent to C program below.
+ *
+ * The case turns out to be tricky in the sense that:
+ * - states with c=-25 are explored only on the second iteration
+ * of the outer loop;
+ * - states with a read+precise mark on c are explored only on the
+ * second iteration of the inner loop and in a state which
+ * is pushed to the states stack first.
+ *
+ * Depending on the details of the iterator convergence logic, the
+ * verifier might stop states traversal too early and miss the
+ * unsafe c=-25 memory access.
+ *
+ * j = iter_new(); // fp[-16]
+ * a = 0; // r6
+ * b = 0; // r7
+ * c = -24; // r8
+ * while (iter_next(j)) {
+ * i = iter_new(); // fp[-8]
+ * a = 0; // r6
+ * b = 0; // r7
+ * while (iter_next(i)) {
+ * if (a == 1) {
+ * a = 0;
+ * b = 1;
+ * } else if (a == 0) {
+ * a = 1;
+ * if (random() == 42)
+ * continue;
+ * if (b == 1) {
+ * *(r10 + c) = 7; // this is not safe
+ * iter_destroy(i);
+ * iter_destroy(j);
+ * return;
+ * }
+ * }
+ * }
+ * iter_destroy(i);
+ * a = 0;
+ * b = 0;
+ * c = -25;
+ * }
+ * iter_destroy(j);
+ * return;
+ */
+ asm volatile (
+ "r1 = r10;"
+ "r1 += -16;"
+ "r2 = 0;"
+ "r3 = 10;"
+ "call %[bpf_iter_num_new];"
+ "r6 = 0;"
+ "r7 = 0;"
+ "r8 = -24;"
+ "j_loop_%=:"
+ "r1 = r10;"
+ "r1 += -16;"
+ "call %[bpf_iter_num_next];"
+ "if r0 == 0 goto j_loop_end_%=;"
+ "r1 = r10;"
+ "r1 += -8;"
+ "r2 = 0;"
+ "r3 = 10;"
+ "call %[bpf_iter_num_new];"
+ "r6 = 0;"
+ "r7 = 0;"
+ "i_loop_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_next];"
+ "if r0 == 0 goto i_loop_end_%=;"
+ "check_one_r6_%=:"
+ "if r6 != 1 goto check_zero_r6_%=;"
+ "r6 = 0;"
+ "r7 = 1;"
+ "goto i_loop_%=;"
+ "check_zero_r6_%=:"
+ "if r6 != 0 goto i_loop_%=;"
+ "r6 = 1;"
+ "call %[bpf_get_prandom_u32];"
+ "if r0 != 42 goto check_one_r7_%=;"
+ "goto i_loop_%=;"
+ "check_one_r7_%=:"
+ "if r7 != 1 goto i_loop_%=;"
+ "r0 = r10;"
+ "r0 += r8;"
+ "r1 = 7;"
+ "*(u64 *)(r0 + 0) = r1;"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_destroy];"
+ "r1 = r10;"
+ "r1 += -16;"
+ "call %[bpf_iter_num_destroy];"
+ "r0 = 0;"
+ "exit;"
+ "i_loop_end_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_destroy];"
+ "r6 = 0;"
+ "r7 = 0;"
+ "r8 = -25;"
+ "goto j_loop_%=;"
+ "j_loop_end_%=:"
+ "r1 = r10;"
+ "r1 += -16;"
+ "call %[bpf_iter_num_destroy];"
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm(bpf_get_prandom_u32),
+ __imm(bpf_iter_num_new),
+ __imm(bpf_iter_num_next),
+ __imm(bpf_iter_num_destroy)
+ : __clobber_all
+ );
+}
+
+SEC("?raw_tp")
+__failure
+__msg("math between fp pointer and register with unbounded")
+__flag(BPF_F_TEST_STATE_FREQ)
+__naked int loop_state_deps2(void)
+{
+ /* This is equivalent to C program below.
+ *
+ * The case turns out to be tricky in the sense that:
+ * - states with a read+precise mark on c are explored only on the second
+ * iteration of the first inner loop and in a state which is pushed to
+ * the states stack first;
+ * - states with c=-25 are explored only on the second iteration of the
+ * second inner loop and in a state which is pushed to the states stack
+ * first.
+ *
+ * Depending on the details of the iterator convergence logic, the
+ * verifier might stop states traversal too early and miss the
+ * unsafe c=-25 memory access.
+ *
+ * j = iter_new(); // fp[-16]
+ * a = 0; // r6
+ * b = 0; // r7
+ * c = -24; // r8
+ * while (iter_next(j)) {
+ * i = iter_new(); // fp[-8]
+ * a = 0; // r6
+ * b = 0; // r7
+ * while (iter_next(i)) {
+ * if (a == 1) {
+ * a = 0;
+ * b = 1;
+ * } else if (a == 0) {
+ * a = 1;
+ * if (random() == 42)
+ * continue;
+ * if (b == 1) {
+ * *(r10 + c) = 7; // this is not safe
+ * iter_destroy(i);
+ * iter_destroy(j);
+ * return;
+ * }
+ * }
+ * }
+ * iter_destroy(i);
+ * i = iter_new(); // fp[-8]
+ * a = 0; // r6
+ * b = 0; // r7
+ * while (iter_next(i)) {
+ * if (a == 1) {
+ * a = 0;
+ * b = 1;
+ * } else if (a == 0) {
+ * a = 1;
+ * if (random() == 42)
+ * continue;
+ * if (b == 1) {
+ * a = 0;
+ * c = -25;
+ * }
+ * }
+ * }
+ * iter_destroy(i);
+ * }
+ * iter_destroy(j);
+ * return;
+ */
+ asm volatile (
+ "r1 = r10;"
+ "r1 += -16;"
+ "r2 = 0;"
+ "r3 = 10;"
+ "call %[bpf_iter_num_new];"
+ "r6 = 0;"
+ "r7 = 0;"
+ "r8 = -24;"
+ "j_loop_%=:"
+ "r1 = r10;"
+ "r1 += -16;"
+ "call %[bpf_iter_num_next];"
+ "if r0 == 0 goto j_loop_end_%=;"
+
+ /* first inner loop */
+ "r1 = r10;"
+ "r1 += -8;"
+ "r2 = 0;"
+ "r3 = 10;"
+ "call %[bpf_iter_num_new];"
+ "r6 = 0;"
+ "r7 = 0;"
+ "i_loop_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_next];"
+ "if r0 == 0 goto i_loop_end_%=;"
+ "check_one_r6_%=:"
+ "if r6 != 1 goto check_zero_r6_%=;"
+ "r6 = 0;"
+ "r7 = 1;"
+ "goto i_loop_%=;"
+ "check_zero_r6_%=:"
+ "if r6 != 0 goto i_loop_%=;"
+ "r6 = 1;"
+ "call %[bpf_get_prandom_u32];"
+ "if r0 != 42 goto check_one_r7_%=;"
+ "goto i_loop_%=;"
+ "check_one_r7_%=:"
+ "if r7 != 1 goto i_loop_%=;"
+ "r0 = r10;"
+ "r0 += r8;"
+ "r1 = 7;"
+ "*(u64 *)(r0 + 0) = r1;"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_destroy];"
+ "r1 = r10;"
+ "r1 += -16;"
+ "call %[bpf_iter_num_destroy];"
+ "r0 = 0;"
+ "exit;"
+ "i_loop_end_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_destroy];"
+
+ /* second inner loop */
+ "r1 = r10;"
+ "r1 += -8;"
+ "r2 = 0;"
+ "r3 = 10;"
+ "call %[bpf_iter_num_new];"
+ "r6 = 0;"
+ "r7 = 0;"
+ "i2_loop_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_next];"
+ "if r0 == 0 goto i2_loop_end_%=;"
+ "check2_one_r6_%=:"
+ "if r6 != 1 goto check2_zero_r6_%=;"
+ "r6 = 0;"
+ "r7 = 1;"
+ "goto i2_loop_%=;"
+ "check2_zero_r6_%=:"
+ "if r6 != 0 goto i2_loop_%=;"
+ "r6 = 1;"
+ "call %[bpf_get_prandom_u32];"
+ "if r0 != 42 goto check2_one_r7_%=;"
+ "goto i2_loop_%=;"
+ "check2_one_r7_%=:"
+ "if r7 != 1 goto i2_loop_%=;"
+ "r6 = 0;"
+ "r8 = -25;"
+ "goto i2_loop_%=;"
+ "i2_loop_end_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_destroy];"
+
+ "r6 = 0;"
+ "r7 = 0;"
+ "goto j_loop_%=;"
+ "j_loop_end_%=:"
+ "r1 = r10;"
+ "r1 += -16;"
+ "call %[bpf_iter_num_destroy];"
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm(bpf_get_prandom_u32),
+ __imm(bpf_iter_num_new),
+ __imm(bpf_iter_num_next),
+ __imm(bpf_iter_num_destroy)
+ : __clobber_all
+ );
+}
+
+SEC("?raw_tp")
+__success
+__naked int triple_continue(void)
+{
+ /* This is equivalent to C program below.
+ * High branching factor of the loop body turned out to be
+ * problematic for one of the iterator convergence tracking
+ * algorithms explored.
+ *
+ * r6 = bpf_get_prandom_u32()
+ * bpf_iter_num_new(&fp[-8], 0, 10)
+ * while (bpf_iter_num_next(&fp[-8])) {
+ * if (bpf_get_prandom_u32() != 42)
+ * continue;
+ * if (bpf_get_prandom_u32() != 42)
+ * continue;
+ * if (bpf_get_prandom_u32() != 42)
+ * continue;
+ * r0 += 0;
+ * }
+ * bpf_iter_num_destroy(&fp[-8])
+ * return 0
+ */
+ asm volatile (
+ "r1 = r10;"
+ "r1 += -8;"
+ "r2 = 0;"
+ "r3 = 10;"
+ "call %[bpf_iter_num_new];"
+ "loop_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_next];"
+ "if r0 == 0 goto loop_end_%=;"
+ "call %[bpf_get_prandom_u32];"
+ "if r0 != 42 goto loop_%=;"
+ "call %[bpf_get_prandom_u32];"
+ "if r0 != 42 goto loop_%=;"
+ "call %[bpf_get_prandom_u32];"
+ "if r0 != 42 goto loop_%=;"
+ "r0 += 0;"
+ "goto loop_%=;"
+ "loop_end_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_destroy];"
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm(bpf_get_prandom_u32),
+ __imm(bpf_iter_num_new),
+ __imm(bpf_iter_num_next),
+ __imm(bpf_iter_num_destroy)
+ : __clobber_all
+ );
+}
+
+SEC("?raw_tp")
+__success
+__naked int widen_spill(void)
+{
+ /* This is equivalent to C program below.
+ * The counter is stored in fp[-16]; if this counter is not widened,
+ * verifier states representing loop iterations would never converge.
+ *
+ * fp[-16] = 0
+ * bpf_iter_num_new(&fp[-8], 0, 10)
+ * while (bpf_iter_num_next(&fp[-8])) {
+ * r0 = fp[-16];
+ * r0 += 1;
+ * fp[-16] = r0;
+ * }
+ * bpf_iter_num_destroy(&fp[-8])
+ * return 0
+ */
+ asm volatile (
+ "r0 = 0;"
+ "*(u64 *)(r10 - 16) = r0;"
+ "r1 = r10;"
+ "r1 += -8;"
+ "r2 = 0;"
+ "r3 = 10;"
+ "call %[bpf_iter_num_new];"
+ "loop_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_next];"
+ "if r0 == 0 goto loop_end_%=;"
+ "r0 = *(u64 *)(r10 - 16);"
+ "r0 += 1;"
+ "*(u64 *)(r10 - 16) = r0;"
+ "goto loop_%=;"
+ "loop_end_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_destroy];"
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm(bpf_iter_num_new),
+ __imm(bpf_iter_num_next),
+ __imm(bpf_iter_num_destroy)
+ : __clobber_all
+ );
+}
+
+SEC("raw_tp")
+__success
+__naked int checkpoint_states_deletion(void)
+{
+ /* This is equivalent to C program below.
+ *
+ * int *a, *b, *c, *d, *e, *f;
+ * int i, sum = 0;
+ * bpf_for(i, 0, 10) {
+ * a = bpf_map_lookup_elem(&amap, &i);
+ * b = bpf_map_lookup_elem(&amap, &i);
+ * c = bpf_map_lookup_elem(&amap, &i);
+ * d = bpf_map_lookup_elem(&amap, &i);
+ * e = bpf_map_lookup_elem(&amap, &i);
+ * f = bpf_map_lookup_elem(&amap, &i);
+ * if (a) sum += 1;
+ * if (b) sum += 1;
+ * if (c) sum += 1;
+ * if (d) sum += 1;
+ * if (e) sum += 1;
+ * if (f) sum += 1;
+ * }
+ * return 0;
+ *
+ * The body of the loop spawns multiple simulation paths
+ * with different combinations of NULL/non-NULL information for a/b/c/d/e/f.
+ * Each combination is unique from the states_equal() point of view.
+ * An explored-states checkpoint is created after each iterator next call.
+ * Iterator convergence logic expects that eventually the current state
+ * becomes equal to one of the explored states, at which point loop
+ * exploration is finished (at least for a specific path).
+ * The verifier evicts explored states with a high miss-to-hit ratio
+ * to avoid comparing the current state with too many explored
+ * states per instruction.
+ * This test is designed to "stress test" the eviction policy defined by the formula:
+ *
+ * sl->miss_cnt > sl->hit_cnt * N + N // if true sl->state is evicted
+ *
+ * Currently N is set to 64, which allows for 6 variables in this test.
+ */
+ asm volatile (
+ "r6 = 0;" /* a */
+ "r7 = 0;" /* b */
+ "r8 = 0;" /* c */
+ "*(u64 *)(r10 - 24) = r6;" /* d */
+ "*(u64 *)(r10 - 32) = r6;" /* e */
+ "*(u64 *)(r10 - 40) = r6;" /* f */
+ "r9 = 0;" /* sum */
+ "r1 = r10;"
+ "r1 += -8;"
+ "r2 = 0;"
+ "r3 = 10;"
+ "call %[bpf_iter_num_new];"
+ "loop_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_next];"
+ "if r0 == 0 goto loop_end_%=;"
+
+ "*(u64 *)(r10 - 16) = r0;"
+
+ "r1 = %[amap] ll;"
+ "r2 = r10;"
+ "r2 += -16;"
+ "call %[bpf_map_lookup_elem];"
+ "r6 = r0;"
+
+ "r1 = %[amap] ll;"
+ "r2 = r10;"
+ "r2 += -16;"
+ "call %[bpf_map_lookup_elem];"
+ "r7 = r0;"
+
+ "r1 = %[amap] ll;"
+ "r2 = r10;"
+ "r2 += -16;"
+ "call %[bpf_map_lookup_elem];"
+ "r8 = r0;"
+
+ "r1 = %[amap] ll;"
+ "r2 = r10;"
+ "r2 += -16;"
+ "call %[bpf_map_lookup_elem];"
+ "*(u64 *)(r10 - 24) = r0;"
+
+ "r1 = %[amap] ll;"
+ "r2 = r10;"
+ "r2 += -16;"
+ "call %[bpf_map_lookup_elem];"
+ "*(u64 *)(r10 - 32) = r0;"
+
+ "r1 = %[amap] ll;"
+ "r2 = r10;"
+ "r2 += -16;"
+ "call %[bpf_map_lookup_elem];"
+ "*(u64 *)(r10 - 40) = r0;"
+
+ "if r6 == 0 goto +1;"
+ "r9 += 1;"
+ "if r7 == 0 goto +1;"
+ "r9 += 1;"
+ "if r8 == 0 goto +1;"
+ "r9 += 1;"
+ "r0 = *(u64 *)(r10 - 24);"
+ "if r0 == 0 goto +1;"
+ "r9 += 1;"
+ "r0 = *(u64 *)(r10 - 32);"
+ "if r0 == 0 goto +1;"
+ "r9 += 1;"
+ "r0 = *(u64 *)(r10 - 40);"
+ "if r0 == 0 goto +1;"
+ "r9 += 1;"
+
+ "goto loop_%=;"
+ "loop_end_%=:"
+ "r1 = r10;"
+ "r1 += -8;"
+ "call %[bpf_iter_num_destroy];"
+ "r0 = 0;"
+ "exit;"
+ :
+ : __imm(bpf_map_lookup_elem),
+ __imm(bpf_iter_num_new),
+ __imm(bpf_iter_num_next),
+ __imm(bpf_iter_num_destroy),
+ __imm_addr(amap)
+ : __clobber_all
+ );
+}
+
char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/iters_css.c b/tools/testing/selftests/bpf/progs/iters_css.c
new file mode 100644
index 000000000000..ec1f6c2f590b
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/iters_css.c
@@ -0,0 +1,72 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Chuyi Zhou <zhouchuyi@bytedance.com> */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+char _license[] SEC("license") = "GPL";
+
+pid_t target_pid;
+u64 root_cg_id, leaf_cg_id;
+u64 first_cg_id, last_cg_id;
+
+int pre_order_cnt, post_order_cnt, tree_high;
+
+struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
+void bpf_cgroup_release(struct cgroup *p) __ksym;
+void bpf_rcu_read_lock(void) __ksym;
+void bpf_rcu_read_unlock(void) __ksym;
+
+SEC("fentry.s/" SYS_PREFIX "sys_getpgid")
+int iter_css_for_each(const void *ctx)
+{
+ struct task_struct *cur_task = bpf_get_current_task_btf();
+ struct cgroup_subsys_state *root_css, *leaf_css, *pos;
+ struct cgroup *root_cgrp, *leaf_cgrp, *cur_cgrp;
+
+ if (cur_task->pid != target_pid)
+ return 0;
+
+ root_cgrp = bpf_cgroup_from_id(root_cg_id);
+
+ if (!root_cgrp)
+ return 0;
+
+ leaf_cgrp = bpf_cgroup_from_id(leaf_cg_id);
+
+ if (!leaf_cgrp) {
+ bpf_cgroup_release(root_cgrp);
+ return 0;
+ }
+ root_css = &root_cgrp->self;
+ leaf_css = &leaf_cgrp->self;
+ pre_order_cnt = post_order_cnt = tree_high = 0;
+ first_cg_id = last_cg_id = 0;
+
+ bpf_rcu_read_lock();
+ bpf_for_each(css, pos, root_css, BPF_CGROUP_ITER_DESCENDANTS_POST) {
+ cur_cgrp = pos->cgroup;
+ post_order_cnt++;
+ last_cg_id = cur_cgrp->kn->id;
+ }
+
+ bpf_for_each(css, pos, root_css, BPF_CGROUP_ITER_DESCENDANTS_PRE) {
+ cur_cgrp = pos->cgroup;
+ pre_order_cnt++;
+ if (!first_cg_id)
+ first_cg_id = cur_cgrp->kn->id;
+ }
+
+ bpf_for_each(css, pos, leaf_css, BPF_CGROUP_ITER_ANCESTORS_UP)
+ tree_high++;
+
+ bpf_for_each(css, pos, root_css, BPF_CGROUP_ITER_ANCESTORS_UP)
+ tree_high--;
+ bpf_rcu_read_unlock();
+ bpf_cgroup_release(root_cgrp);
+ bpf_cgroup_release(leaf_cgrp);
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/iters_css_task.c b/tools/testing/selftests/bpf/progs/iters_css_task.c
new file mode 100644
index 000000000000..5089ce384a1c
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/iters_css_task.c
@@ -0,0 +1,47 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Chuyi Zhou <zhouchuyi@bytedance.com> */
+
+#include "vmlinux.h"
+#include <errno.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
+void bpf_cgroup_release(struct cgroup *p) __ksym;
+
+pid_t target_pid;
+int css_task_cnt;
+u64 cg_id;
+
+SEC("lsm/file_mprotect")
+int BPF_PROG(iter_css_task_for_each, struct vm_area_struct *vma,
+ unsigned long reqprot, unsigned long prot, int ret)
+{
+ struct task_struct *cur_task = bpf_get_current_task_btf();
+ struct cgroup_subsys_state *css;
+ struct task_struct *task;
+ struct cgroup *cgrp;
+
+ if (cur_task->pid != target_pid)
+ return ret;
+
+ cgrp = bpf_cgroup_from_id(cg_id);
+
+ if (!cgrp)
+ return -EPERM;
+
+ css = &cgrp->self;
+ css_task_cnt = 0;
+
+ bpf_for_each(css_task, task, css, CSS_TASK_ITER_PROCS)
+ if (task->pid == target_pid)
+ css_task_cnt++;
+
+ bpf_cgroup_release(cgrp);
+
+ return -EPERM;
+}
diff --git a/tools/testing/selftests/bpf/progs/iters_task.c b/tools/testing/selftests/bpf/progs/iters_task.c
new file mode 100644
index 000000000000..c9b4055cd410
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/iters_task.c
@@ -0,0 +1,41 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Chuyi Zhou <zhouchuyi@bytedance.com> */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+char _license[] SEC("license") = "GPL";
+
+pid_t target_pid;
+int procs_cnt, threads_cnt, proc_threads_cnt;
+
+void bpf_rcu_read_lock(void) __ksym;
+void bpf_rcu_read_unlock(void) __ksym;
+
+SEC("fentry.s/" SYS_PREFIX "sys_getpgid")
+int iter_task_for_each_sleep(void *ctx)
+{
+ struct task_struct *cur_task = bpf_get_current_task_btf();
+ struct task_struct *pos;
+
+ if (cur_task->pid != target_pid)
+ return 0;
+ procs_cnt = threads_cnt = proc_threads_cnt = 0;
+
+ bpf_rcu_read_lock();
+ bpf_for_each(task, pos, NULL, BPF_TASK_ITER_ALL_PROCS)
+ if (pos->pid == target_pid)
+ procs_cnt++;
+
+ bpf_for_each(task, pos, cur_task, BPF_TASK_ITER_PROC_THREADS)
+ proc_threads_cnt++;
+
+ bpf_for_each(task, pos, NULL, BPF_TASK_ITER_ALL_THREADS)
+ if (pos->tgid == target_pid)
+ threads_cnt++;
+ bpf_rcu_read_unlock();
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/iters_task_failure.c b/tools/testing/selftests/bpf/progs/iters_task_failure.c
new file mode 100644
index 000000000000..c3bf96a67dba
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/iters_task_failure.c
@@ -0,0 +1,105 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023 Chuyi Zhou <zhouchuyi@bytedance.com> */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
+#include "bpf_experimental.h"
+
+char _license[] SEC("license") = "GPL";
+
+struct cgroup *bpf_cgroup_from_id(u64 cgid) __ksym;
+void bpf_cgroup_release(struct cgroup *p) __ksym;
+void bpf_rcu_read_lock(void) __ksym;
+void bpf_rcu_read_unlock(void) __ksym;
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure __msg("expected an RCU CS when using bpf_iter_task_next")
+int BPF_PROG(iter_tasks_without_lock)
+{
+ struct task_struct *pos;
+
+ bpf_for_each(task, pos, NULL, BPF_TASK_ITER_ALL_PROCS) {
+
+ }
+ return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure __msg("expected an RCU CS when using bpf_iter_css_next")
+int BPF_PROG(iter_css_without_lock)
+{
+ u64 cg_id = bpf_get_current_cgroup_id();
+ struct cgroup *cgrp = bpf_cgroup_from_id(cg_id);
+ struct cgroup_subsys_state *root_css, *pos;
+
+ if (!cgrp)
+ return 0;
+ root_css = &cgrp->self;
+
+ bpf_for_each(css, pos, root_css, BPF_CGROUP_ITER_DESCENDANTS_POST) {
+
+ }
+ bpf_cgroup_release(cgrp);
+ return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure __msg("expected an RCU CS when using bpf_iter_task_next")
+int BPF_PROG(iter_tasks_lock_and_unlock)
+{
+ struct task_struct *pos;
+
+ bpf_rcu_read_lock();
+ bpf_for_each(task, pos, NULL, BPF_TASK_ITER_ALL_PROCS) {
+ bpf_rcu_read_unlock();
+
+ bpf_rcu_read_lock();
+ }
+ bpf_rcu_read_unlock();
+ return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure __msg("expected an RCU CS when using bpf_iter_css_next")
+int BPF_PROG(iter_css_lock_and_unlock)
+{
+ u64 cg_id = bpf_get_current_cgroup_id();
+ struct cgroup *cgrp = bpf_cgroup_from_id(cg_id);
+ struct cgroup_subsys_state *root_css, *pos;
+
+ if (!cgrp)
+ return 0;
+ root_css = &cgrp->self;
+
+ bpf_rcu_read_lock();
+ bpf_for_each(css, pos, root_css, BPF_CGROUP_ITER_DESCENDANTS_POST) {
+ bpf_rcu_read_unlock();
+
+ bpf_rcu_read_lock();
+ }
+ bpf_rcu_read_unlock();
+ bpf_cgroup_release(cgrp);
+ return 0;
+}
+
+SEC("?fentry.s/" SYS_PREFIX "sys_getpgid")
+__failure __msg("css_task_iter is only allowed in bpf_lsm and bpf iter-s")
+int BPF_PROG(iter_css_task_for_each)
+{
+ u64 cg_id = bpf_get_current_cgroup_id();
+ struct cgroup *cgrp = bpf_cgroup_from_id(cg_id);
+ struct cgroup_subsys_state *css;
+ struct task_struct *task;
+
+ if (cgrp == NULL)
+ return 0;
+ css = &cgrp->self;
+
+ bpf_for_each(css_task, task, css, CSS_TASK_ITER_PROCS) {
+
+ }
+ bpf_cgroup_release(cgrp);
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/iters_task_vma.c b/tools/testing/selftests/bpf/progs/iters_task_vma.c
new file mode 100644
index 000000000000..e085a51d153e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/iters_task_vma.c
@@ -0,0 +1,44 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+#include "bpf_experimental.h"
+#include <bpf/bpf_helpers.h>
+#include "bpf_misc.h"
+
+pid_t target_pid = 0;
+unsigned int vmas_seen = 0;
+
+struct {
+ __u64 vm_start;
+ __u64 vm_end;
+} vm_ranges[1000];
+
+SEC("raw_tp/sys_enter")
+int iter_task_vma_for_each(const void *ctx)
+{
+ struct task_struct *task = bpf_get_current_task_btf();
+ struct vm_area_struct *vma;
+ unsigned int seen = 0;
+
+ if (task->pid != target_pid)
+ return 0;
+
+ if (vmas_seen)
+ return 0;
+
+ bpf_for_each(task_vma, vma, task, 0) {
+ if (seen >= 1000)
+ break;
+ barrier_var(seen);
+
+ vm_ranges[seen].vm_start = vma->vm_start;
+ vm_ranges[seen].vm_end = vma->vm_end;
+ seen++;
+ }
+
+ vmas_seen = seen;
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
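
The recorded ranges are meant to be cross-checked against /proc/<pid>/maps; a rough sketch of that comparison on the userspace side, assuming the same skeleton conventions as above (the skeleton variable and vmas_seen access are assumptions):

    /* Sketch: parse /proc/self/maps and compare against vm_ranges[]. */
    FILE *f = fopen("/proc/self/maps", "r");
    char line[512];
    unsigned int i = 0;

    while (f && fgets(line, sizeof(line), f) && i < skel->bss->vmas_seen) {
            unsigned long long start, end;

            if (sscanf(line, "%llx-%llx", &start, &end) != 2)
                    break;
            if (skel->bss->vm_ranges[i].vm_start != start ||
                skel->bss->vm_ranges[i].vm_end != end)
                    fprintf(stderr, "mismatch at vma %u\n", i);
            i++;
    }
    if (f)
            fclose(f);
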
diff --git a/tools/testing/selftests/bpf/progs/linked_list_fail.c b/tools/testing/selftests/bpf/progs/linked_list_fail.c
index f4c63daba229..6438982b928b 100644
--- a/tools/testing/selftests/bpf/progs/linked_list_fail.c
+++ b/tools/testing/selftests/bpf/progs/linked_list_fail.c
@@ -591,7 +591,9 @@ int pop_ptr_off(void *(*op)(void *head))
n = op(&p->head);
bpf_spin_unlock(&p->lock);
- bpf_this_cpu_ptr(n);
+ if (!n)
+ return 0;
+ bpf_spin_lock((void *)n);
return 0;
}
diff --git a/tools/testing/selftests/bpf/progs/missed_kprobe.c b/tools/testing/selftests/bpf/progs/missed_kprobe.c
new file mode 100644
index 000000000000..7f9ef701f5de
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/missed_kprobe.c
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "../bpf_testmod/bpf_testmod_kfunc.h"
+
+char _license[] SEC("license") = "GPL";
+
+/*
+ * No tests in here, just to trigger 'bpf_fentry_test*'
+ * through tracing test_run
+ */
+SEC("fentry/bpf_modify_return_test")
+int BPF_PROG(trigger)
+{
+ return 0;
+}
+
+SEC("kprobe/bpf_fentry_test1")
+int test1(struct pt_regs *ctx)
+{
+ bpf_kfunc_common_test();
+ return 0;
+}
+
+SEC("kprobe/bpf_kfunc_common_test")
+int test2(struct pt_regs *ctx)
+{
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/missed_kprobe_recursion.c b/tools/testing/selftests/bpf/progs/missed_kprobe_recursion.c
new file mode 100644
index 000000000000..8ea71cbd6c45
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/missed_kprobe_recursion.c
@@ -0,0 +1,48 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+#include "../bpf_testmod/bpf_testmod_kfunc.h"
+
+char _license[] SEC("license") = "GPL";
+
+/*
+ * No tests in here, just to trigger 'bpf_fentry_test*'
+ * through tracing test_run
+ */
+SEC("fentry/bpf_modify_return_test")
+int BPF_PROG(trigger)
+{
+ return 0;
+}
+
+SEC("kprobe.multi/bpf_fentry_test1")
+int test1(struct pt_regs *ctx)
+{
+ bpf_kfunc_common_test();
+ return 0;
+}
+
+SEC("kprobe/bpf_kfunc_common_test")
+int test2(struct pt_regs *ctx)
+{
+ return 0;
+}
+
+SEC("kprobe/bpf_kfunc_common_test")
+int test3(struct pt_regs *ctx)
+{
+ return 0;
+}
+
+SEC("kprobe/bpf_kfunc_common_test")
+int test4(struct pt_regs *ctx)
+{
+ return 0;
+}
+
+SEC("kprobe.multi/bpf_kfunc_common_test")
+int test5(struct pt_regs *ctx)
+{
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/missed_tp_recursion.c b/tools/testing/selftests/bpf/progs/missed_tp_recursion.c
new file mode 100644
index 000000000000..762385f827c5
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/missed_tp_recursion.c
@@ -0,0 +1,41 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+char _license[] SEC("license") = "GPL";
+
+/*
+ * No tests in here, just to trigger 'bpf_fentry_test*'
+ * through tracing test_run
+ */
+SEC("fentry/bpf_modify_return_test")
+int BPF_PROG(trigger)
+{
+ return 0;
+}
+
+SEC("kprobe/bpf_fentry_test1")
+int test1(struct pt_regs *ctx)
+{
+ bpf_printk("test");
+ return 0;
+}
+
+SEC("tp/bpf_trace/bpf_trace_printk")
+int test2(struct pt_regs *ctx)
+{
+ return 0;
+}
+
+SEC("tp/bpf_trace/bpf_trace_printk")
+int test3(struct pt_regs *ctx)
+{
+ return 0;
+}
+
+SEC("tp/bpf_trace/bpf_trace_printk")
+int test4(struct pt_regs *ctx)
+{
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_array.c b/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
new file mode 100644
index 000000000000..37c2d2608ec0
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/percpu_alloc_array.c
@@ -0,0 +1,190 @@
+#include "bpf_experimental.h"
+
+struct val_t {
+ long b, c, d;
+};
+
+struct elem {
+ long sum;
+ struct val_t __percpu_kptr *pc;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(max_entries, 1);
+ __type(key, int);
+ __type(value, struct elem);
+} array SEC(".maps");
+
+void bpf_rcu_read_lock(void) __ksym;
+void bpf_rcu_read_unlock(void) __ksym;
+
+const volatile int nr_cpus;
+
+/* Initialize the percpu object */
+SEC("?fentry/bpf_fentry_test1")
+int BPF_PROG(test_array_map_1)
+{
+ struct val_t __percpu_kptr *p;
+ struct elem *e;
+ int index = 0;
+
+ e = bpf_map_lookup_elem(&array, &index);
+ if (!e)
+ return 0;
+
+ p = bpf_percpu_obj_new(struct val_t);
+ if (!p)
+ return 0;
+
+ p = bpf_kptr_xchg(&e->pc, p);
+ if (p)
+ bpf_percpu_obj_drop(p);
+
+ return 0;
+}
+
+/* Update percpu data */
+SEC("?fentry/bpf_fentry_test2")
+int BPF_PROG(test_array_map_2)
+{
+ struct val_t __percpu_kptr *p;
+ struct val_t *v;
+ struct elem *e;
+ int index = 0;
+
+ e = bpf_map_lookup_elem(&array, &index);
+ if (!e)
+ return 0;
+
+ p = e->pc;
+ if (!p)
+ return 0;
+
+ v = bpf_per_cpu_ptr(p, 0);
+ if (!v)
+ return 0;
+ v->c = 1;
+ v->d = 2;
+
+ return 0;
+}
+
+int cpu0_field_d, sum_field_c;
+int my_pid;
+
+/* Summarize percpu data */
+SEC("?fentry/bpf_fentry_test3")
+int BPF_PROG(test_array_map_3)
+{
+ struct val_t __percpu_kptr *p;
+ int i, index = 0;
+ struct val_t *v;
+ struct elem *e;
+
+ if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
+ return 0;
+
+ e = bpf_map_lookup_elem(&array, &index);
+ if (!e)
+ return 0;
+
+ p = e->pc;
+ if (!p)
+ return 0;
+
+ bpf_for(i, 0, nr_cpus) {
+ v = bpf_per_cpu_ptr(p, i);
+ if (v) {
+ if (i == 0)
+ cpu0_field_d = v->d;
+ sum_field_c += v->c;
+ }
+ }
+
+ return 0;
+}
+
+/* Explicitly free allocated percpu data */
+SEC("?fentry/bpf_fentry_test4")
+int BPF_PROG(test_array_map_4)
+{
+ struct val_t __percpu_kptr *p;
+ struct elem *e;
+ int index = 0;
+
+ e = bpf_map_lookup_elem(&array, &index);
+ if (!e)
+ return 0;
+
+ /* delete */
+ p = bpf_kptr_xchg(&e->pc, NULL);
+ if (p) {
+ bpf_percpu_obj_drop(p);
+ }
+
+ return 0;
+}
+
+SEC("?fentry.s/bpf_fentry_test1")
+int BPF_PROG(test_array_map_10)
+{
+ struct val_t __percpu_kptr *p, *p1;
+ int i, index = 0;
+ struct val_t *v;
+ struct elem *e;
+
+ if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
+ return 0;
+
+ e = bpf_map_lookup_elem(&array, &index);
+ if (!e)
+ return 0;
+
+ bpf_rcu_read_lock();
+ p = e->pc;
+ if (!p) {
+ p = bpf_percpu_obj_new(struct val_t);
+ if (!p)
+ goto out;
+
+ p1 = bpf_kptr_xchg(&e->pc, p);
+ if (p1) {
+ /* race condition */
+ bpf_percpu_obj_drop(p1);
+ }
+ }
+
+ v = bpf_this_cpu_ptr(p);
+ v->c = 3;
+ v = bpf_this_cpu_ptr(p);
+ v->c = 0;
+
+ v = bpf_per_cpu_ptr(p, 0);
+ if (!v)
+ goto out;
+ v->c = 1;
+ v->d = 2;
+
+ /* delete */
+ p1 = bpf_kptr_xchg(&e->pc, NULL);
+ if (!p1)
+ goto out;
+
+ bpf_for(i, 0, nr_cpus) {
+ v = bpf_per_cpu_ptr(p, i);
+ if (v) {
+ if (i == 0)
+ cpu0_field_d = v->d;
+ sum_field_c += v->c;
+ }
+ }
+
+ /* finally release p */
+ bpf_percpu_obj_drop(p1);
+out:
+ bpf_rcu_read_unlock();
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
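
One detail worth noting for the programs above: nr_cpus is declared const volatile, so it lands in .rodata and has to be filled in by the loader before load, while my_pid (a plain global) can be set afterwards. A hedged loader fragment, assuming a generated percpu_alloc_array.skel.h:

    struct percpu_alloc_array *skel = percpu_alloc_array__open();

    if (!skel)
            return 1;
    skel->rodata->nr_cpus = libbpf_num_possible_cpus();  /* before load: .rodata */
    if (percpu_alloc_array__load(skel))
            return 1;
    skel->bss->my_pid = getpid();                        /* after load is fine: .bss */
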
diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c b/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c
new file mode 100644
index 000000000000..a2acf9aa6c24
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/percpu_alloc_cgrp_local_storage.c
@@ -0,0 +1,109 @@
+#include "bpf_experimental.h"
+
+struct val_t {
+ long b, c, d;
+};
+
+struct elem {
+ long sum;
+ struct val_t __percpu_kptr *pc;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_CGRP_STORAGE);
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+ __type(key, int);
+ __type(value, struct elem);
+} cgrp SEC(".maps");
+
+const volatile int nr_cpus;
+
+/* Initialize the percpu object */
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG(test_cgrp_local_storage_1)
+{
+ struct task_struct *task;
+ struct val_t __percpu_kptr *p;
+ struct elem *e;
+
+ task = bpf_get_current_task_btf();
+ e = bpf_cgrp_storage_get(&cgrp, task->cgroups->dfl_cgrp, 0,
+ BPF_LOCAL_STORAGE_GET_F_CREATE);
+ if (!e)
+ return 0;
+
+ p = bpf_percpu_obj_new(struct val_t);
+ if (!p)
+ return 0;
+
+ p = bpf_kptr_xchg(&e->pc, p);
+ if (p)
+ bpf_percpu_obj_drop(p);
+
+ return 0;
+}
+
+/* Percpu data collection */
+SEC("fentry/bpf_fentry_test2")
+int BPF_PROG(test_cgrp_local_storage_2)
+{
+ struct task_struct *task;
+ struct val_t __percpu_kptr *p;
+ struct val_t *v;
+ struct elem *e;
+
+ task = bpf_get_current_task_btf();
+ e = bpf_cgrp_storage_get(&cgrp, task->cgroups->dfl_cgrp, 0, 0);
+ if (!e)
+ return 0;
+
+ p = e->pc;
+ if (!p)
+ return 0;
+
+ v = bpf_per_cpu_ptr(p, 0);
+ if (!v)
+ return 0;
+ v->c = 1;
+ v->d = 2;
+ return 0;
+}
+
+int cpu0_field_d, sum_field_c;
+int my_pid;
+
+/* Summarize percpu data collection */
+SEC("fentry/bpf_fentry_test3")
+int BPF_PROG(test_cgrp_local_storage_3)
+{
+ struct task_struct *task;
+ struct val_t __percpu_kptr *p;
+ struct val_t *v;
+ struct elem *e;
+ int i;
+
+ if ((bpf_get_current_pid_tgid() >> 32) != my_pid)
+ return 0;
+
+ task = bpf_get_current_task_btf();
+ e = bpf_cgrp_storage_get(&cgrp, task->cgroups->dfl_cgrp, 0, 0);
+ if (!e)
+ return 0;
+
+ p = e->pc;
+ if (!p)
+ return 0;
+
+ bpf_for(i, 0, nr_cpus) {
+ v = bpf_per_cpu_ptr(p, i);
+ if (v) {
+ if (i == 0)
+ cpu0_field_d = v->d;
+ sum_field_c += v->c;
+ }
+ }
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c b/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c
new file mode 100644
index 000000000000..1a891d30f1fe
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/percpu_alloc_fail.c
@@ -0,0 +1,164 @@
+#include "bpf_experimental.h"
+#include "bpf_misc.h"
+
+struct val_t {
+ long b, c, d;
+};
+
+struct val2_t {
+ long b;
+};
+
+struct val_with_ptr_t {
+ char *p;
+};
+
+struct val_with_rb_root_t {
+ struct bpf_spin_lock lock;
+};
+
+struct elem {
+ long sum;
+ struct val_t __percpu_kptr *pc;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(max_entries, 1);
+ __type(key, int);
+ __type(value, struct elem);
+} array SEC(".maps");
+
+long ret;
+
+SEC("?fentry/bpf_fentry_test1")
+__failure __msg("store to referenced kptr disallowed")
+int BPF_PROG(test_array_map_1)
+{
+ struct val_t __percpu_kptr *p;
+ struct elem *e;
+ int index = 0;
+
+ e = bpf_map_lookup_elem(&array, &index);
+ if (!e)
+ return 0;
+
+ p = bpf_percpu_obj_new(struct val_t);
+ if (!p)
+ return 0;
+
+ p = bpf_kptr_xchg(&e->pc, p);
+ if (p)
+ bpf_percpu_obj_drop(p);
+
+ e->pc = (struct val_t __percpu_kptr *)ret;
+ return 0;
+}
+
+SEC("?fentry/bpf_fentry_test1")
+__failure __msg("invalid kptr access, R2 type=percpu_ptr_val2_t expected=ptr_val_t")
+int BPF_PROG(test_array_map_2)
+{
+ struct val2_t __percpu_kptr *p2;
+ struct val_t __percpu_kptr *p;
+ struct elem *e;
+ int index = 0;
+
+ e = bpf_map_lookup_elem(&array, &index);
+ if (!e)
+ return 0;
+
+ p2 = bpf_percpu_obj_new(struct val2_t);
+ if (!p2)
+ return 0;
+
+ p = bpf_kptr_xchg(&e->pc, p2);
+ if (p)
+ bpf_percpu_obj_drop(p);
+
+ return 0;
+}
+
+SEC("?fentry.s/bpf_fentry_test1")
+__failure __msg("R1 type=scalar expected=percpu_ptr_, percpu_rcu_ptr_, percpu_trusted_ptr_")
+int BPF_PROG(test_array_map_3)
+{
+ struct val_t __percpu_kptr *p, *p1;
+ struct val_t *v;
+ struct elem *e;
+ int index = 0;
+
+ e = bpf_map_lookup_elem(&array, &index);
+ if (!e)
+ return 0;
+
+ p = bpf_percpu_obj_new(struct val_t);
+ if (!p)
+ return 0;
+
+ p1 = bpf_kptr_xchg(&e->pc, p);
+ if (p1)
+ bpf_percpu_obj_drop(p1);
+
+ v = bpf_this_cpu_ptr(p);
+ ret = v->b;
+ return 0;
+}
+
+SEC("?fentry.s/bpf_fentry_test1")
+__failure __msg("arg#0 expected for bpf_percpu_obj_drop_impl()")
+int BPF_PROG(test_array_map_4)
+{
+ struct val_t __percpu_kptr *p;
+
+ p = bpf_percpu_obj_new(struct val_t);
+ if (!p)
+ return 0;
+
+ bpf_obj_drop(p);
+ return 0;
+}
+
+SEC("?fentry.s/bpf_fentry_test1")
+__failure __msg("arg#0 expected for bpf_obj_drop_impl()")
+int BPF_PROG(test_array_map_5)
+{
+ struct val_t *p;
+
+ p = bpf_obj_new(struct val_t);
+ if (!p)
+ return 0;
+
+ bpf_percpu_obj_drop(p);
+ return 0;
+}
+
+SEC("?fentry.s/bpf_fentry_test1")
+__failure __msg("bpf_percpu_obj_new type ID argument must be of a struct of scalars")
+int BPF_PROG(test_array_map_6)
+{
+ struct val_with_ptr_t __percpu_kptr *p;
+
+ p = bpf_percpu_obj_new(struct val_with_ptr_t);
+ if (!p)
+ return 0;
+
+ bpf_percpu_obj_drop(p);
+ return 0;
+}
+
+SEC("?fentry.s/bpf_fentry_test1")
+__failure __msg("bpf_percpu_obj_new type ID argument must not contain special fields")
+int BPF_PROG(test_array_map_7)
+{
+ struct val_with_rb_root_t __percpu_kptr *p;
+
+ p = bpf_percpu_obj_new(struct val_with_rb_root_t);
+ if (!p)
+ return 0;
+
+ bpf_percpu_obj_drop(p);
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
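
For contrast with the two arg#0 failures above, the pairings the verifier does accept look like this (a sketch using the bpf_experimental.h wrappers this file already includes):

    struct val_t *v;
    struct val_t __percpu_kptr *pv;

    v = bpf_obj_new(struct val_t);
    if (v)
            bpf_obj_drop(v);            /* plain object -> bpf_obj_drop() */

    pv = bpf_percpu_obj_new(struct val_t);
    if (pv)
            bpf_percpu_obj_drop(pv);    /* per-cpu object -> bpf_percpu_obj_drop() */
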
diff --git a/tools/testing/selftests/bpf/progs/preempted_bpf_ma_op.c b/tools/testing/selftests/bpf/progs/preempted_bpf_ma_op.c
new file mode 100644
index 000000000000..55907ef961bf
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/preempted_bpf_ma_op.c
@@ -0,0 +1,106 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2023. Huawei Technologies Co., Ltd */
+#include <vmlinux.h>
+#include <bpf/bpf_tracing.h>
+#include <bpf/bpf_helpers.h>
+
+#include "bpf_experimental.h"
+
+struct bin_data {
+ char data[256];
+ struct bpf_spin_lock lock;
+};
+
+struct map_value {
+ struct bin_data __kptr * data;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __type(key, int);
+ __type(value, struct map_value);
+ __uint(max_entries, 2048);
+} array SEC(".maps");
+
+char _license[] SEC("license") = "GPL";
+
+bool nomem_err = false;
+
+static int del_array(unsigned int i, int *from)
+{
+ struct map_value *value;
+ struct bin_data *old;
+
+ value = bpf_map_lookup_elem(&array, from);
+ if (!value)
+ return 1;
+
+ old = bpf_kptr_xchg(&value->data, NULL);
+ if (old)
+ bpf_obj_drop(old);
+
+ (*from)++;
+ return 0;
+}
+
+static int add_array(unsigned int i, int *from)
+{
+ struct bin_data *old, *new;
+ struct map_value *value;
+
+ value = bpf_map_lookup_elem(&array, from);
+ if (!value)
+ return 1;
+
+ new = bpf_obj_new(typeof(*new));
+ if (!new) {
+ nomem_err = true;
+ return 1;
+ }
+
+ old = bpf_kptr_xchg(&value->data, new);
+ if (old)
+ bpf_obj_drop(old);
+
+ (*from)++;
+ return 0;
+}
+
+static void del_then_add_array(int from)
+{
+ int i;
+
+ i = from;
+ bpf_loop(512, del_array, &i, 0);
+
+ i = from;
+ bpf_loop(512, add_array, &i, 0);
+}
+
+SEC("fentry/bpf_fentry_test1")
+int BPF_PROG2(test0, int, a)
+{
+ del_then_add_array(0);
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test2")
+int BPF_PROG2(test1, int, a, u64, b)
+{
+ del_then_add_array(512);
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test3")
+int BPF_PROG2(test2, char, a, int, b, u64, c)
+{
+ del_then_add_array(1024);
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test4")
+int BPF_PROG2(test3, void *, a, char, b, int, c, u64, d)
+{
+ del_then_add_array(1536);
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/profiler.inc.h b/tools/testing/selftests/bpf/progs/profiler.inc.h
index f799d87e8700..897061930cb7 100644
--- a/tools/testing/selftests/bpf/progs/profiler.inc.h
+++ b/tools/testing/selftests/bpf/progs/profiler.inc.h
@@ -609,7 +609,7 @@ out:
}
SEC("tracepoint/syscalls/sys_enter_kill")
-int tracepoint__syscalls__sys_enter_kill(struct trace_event_raw_sys_enter* ctx)
+int tracepoint__syscalls__sys_enter_kill(struct syscall_trace_enter* ctx)
{
struct bpf_func_stats_ctx stats_ctx;
diff --git a/tools/testing/selftests/bpf/progs/recvmsg_unix_prog.c b/tools/testing/selftests/bpf/progs/recvmsg_unix_prog.c
new file mode 100644
index 000000000000..4dfbc8552558
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/recvmsg_unix_prog.c
@@ -0,0 +1,39 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+
+#include <string.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_kfuncs.h"
+
+__u8 SERVUN_ADDRESS[] = "\0bpf_cgroup_unix_test";
+
+SEC("cgroup/recvmsg_unix")
+int recvmsg_unix_prog(struct bpf_sock_addr *ctx)
+{
+ struct bpf_sock_addr_kern *sa_kern = bpf_cast_to_kern_ctx(ctx);
+ struct sockaddr_un *sa_kern_unaddr;
+ __u32 unaddrlen = offsetof(struct sockaddr_un, sun_path) +
+ sizeof(SERVUN_ADDRESS) - 1;
+ int ret;
+
+ ret = bpf_sock_addr_set_sun_path(sa_kern, SERVUN_ADDRESS,
+ sizeof(SERVUN_ADDRESS) - 1);
+ if (ret)
+ return 1;
+
+ if (sa_kern->uaddrlen != unaddrlen)
+ return 1;
+
+ sa_kern_unaddr = bpf_rdonly_cast(sa_kern->uaddr,
+ bpf_core_type_id_kernel(struct sockaddr_un));
+ if (memcmp(sa_kern_unaddr->sun_path, SERVUN_ADDRESS,
+ sizeof(SERVUN_ADDRESS) - 1) != 0)
+ return 1;
+
+ return 1;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/sendmsg_unix_prog.c b/tools/testing/selftests/bpf/progs/sendmsg_unix_prog.c
new file mode 100644
index 000000000000..1f67e832666e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/sendmsg_unix_prog.c
@@ -0,0 +1,40 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Meta Platforms, Inc. and affiliates. */
+
+#include "vmlinux.h"
+
+#include <string.h>
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_core_read.h>
+#include "bpf_kfuncs.h"
+
+__u8 SERVUN_REWRITE_ADDRESS[] = "\0bpf_cgroup_unix_test_rewrite";
+
+SEC("cgroup/sendmsg_unix")
+int sendmsg_unix_prog(struct bpf_sock_addr *ctx)
+{
+ struct bpf_sock_addr_kern *sa_kern = bpf_cast_to_kern_ctx(ctx);
+ struct sockaddr_un *sa_kern_unaddr;
+ __u32 unaddrlen = offsetof(struct sockaddr_un, sun_path) +
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1;
+ int ret;
+
+ /* Rewrite destination. */
+ ret = bpf_sock_addr_set_sun_path(sa_kern, SERVUN_REWRITE_ADDRESS,
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1);
+ if (ret)
+ return 0;
+
+ if (sa_kern->uaddrlen != unaddrlen)
+ return 0;
+
+ sa_kern_unaddr = bpf_rdonly_cast(sa_kern->uaddr,
+ bpf_core_type_id_kernel(struct sockaddr_un));
+ if (memcmp(sa_kern_unaddr->sun_path, SERVUN_REWRITE_ADDRESS,
+ sizeof(SERVUN_REWRITE_ADDRESS) - 1) != 0)
+ return 0;
+
+ return 1;
+}
+
+char _license[] SEC("license") = "GPL";
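
These cgroup/sendmsg_unix and cgroup/recvmsg_unix programs are attached per cgroup; a hedged sketch of the attach step, assuming an already-loaded skeleton and a test cgroup path (both are assumptions):

    int cg_fd = open("/sys/fs/cgroup/bpf_unix_test", O_RDONLY);
    struct bpf_link *link;

    if (cg_fd < 0)
            return 1;
    link = bpf_program__attach_cgroup(skel->progs.sendmsg_unix_prog, cg_fd);
    if (!link)
            return 1;
    /* From here on, sendmsg() on AF_UNIX sockets from tasks in this cgroup
     * has its destination rewritten to "\0bpf_cgroup_unix_test_rewrite". */
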
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c
new file mode 100644
index 000000000000..8436c6729167
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fentry.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright Leon Hwang */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+int count = 0;
+
+SEC("fentry/subprog_tail")
+int BPF_PROG(fentry, struct sk_buff *skb)
+{
+ count++;
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c
new file mode 100644
index 000000000000..fe16412c6e6e
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/tailcall_bpf2bpf_fexit.c
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright Leon Hwang */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+int count = 0;
+
+SEC("fexit/subprog_tail")
+int BPF_PROG(fexit, struct sk_buff *skb)
+{
+ count++;
+
+ return 0;
+}
+
+char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_bpf_ma.c b/tools/testing/selftests/bpf/progs/test_bpf_ma.c
index ecde41ae0fc8..b685a4aba6bd 100644
--- a/tools/testing/selftests/bpf/progs/test_bpf_ma.c
+++ b/tools/testing/selftests/bpf/progs/test_bpf_ma.c
@@ -37,10 +37,20 @@ int pid = 0;
__type(key, int); \
__type(value, struct map_value_##_size); \
__uint(max_entries, 128); \
- } array_##_size SEC(".maps");
+ } array_##_size SEC(".maps")
-static __always_inline void batch_alloc_free(struct bpf_map *map, unsigned int batch,
- unsigned int idx)
+#define DEFINE_ARRAY_WITH_PERCPU_KPTR(_size) \
+ struct map_value_percpu_##_size { \
+ struct bin_data_##_size __percpu_kptr * data; \
+ }; \
+ struct { \
+ __uint(type, BPF_MAP_TYPE_ARRAY); \
+ __type(key, int); \
+ __type(value, struct map_value_percpu_##_size); \
+ __uint(max_entries, 128); \
+ } array_percpu_##_size SEC(".maps")
+
+static __always_inline void batch_alloc(struct bpf_map *map, unsigned int batch, unsigned int idx)
{
struct generic_map_value *value;
unsigned int i, key;
@@ -65,6 +75,14 @@ static __always_inline void batch_alloc_free(struct bpf_map *map, unsigned int b
return;
}
}
+}
+
+static __always_inline void batch_free(struct bpf_map *map, unsigned int batch, unsigned int idx)
+{
+ struct generic_map_value *value;
+ unsigned int i, key;
+ void *old;
+
for (i = 0; i < batch; i++) {
key = i;
value = bpf_map_lookup_elem(map, &key);
@@ -81,8 +99,72 @@ static __always_inline void batch_alloc_free(struct bpf_map *map, unsigned int b
}
}
+static __always_inline void batch_percpu_alloc(struct bpf_map *map, unsigned int batch,
+ unsigned int idx)
+{
+ struct generic_map_value *value;
+ unsigned int i, key;
+ void *old, *new;
+
+ for (i = 0; i < batch; i++) {
+ key = i;
+ value = bpf_map_lookup_elem(map, &key);
+ if (!value) {
+ err = 1;
+ return;
+ }
+ /* per-cpu allocator may not be able to refill in time */
+ new = bpf_percpu_obj_new_impl(data_btf_ids[idx], NULL);
+ if (!new)
+ continue;
+
+ old = bpf_kptr_xchg(&value->data, new);
+ if (old) {
+ bpf_percpu_obj_drop(old);
+ err = 2;
+ return;
+ }
+ }
+}
+
+static __always_inline void batch_percpu_free(struct bpf_map *map, unsigned int batch,
+ unsigned int idx)
+{
+ struct generic_map_value *value;
+ unsigned int i, key;
+ void *old;
+
+ for (i = 0; i < batch; i++) {
+ key = i;
+ value = bpf_map_lookup_elem(map, &key);
+ if (!value) {
+ err = 3;
+ return;
+ }
+ old = bpf_kptr_xchg(&value->data, NULL);
+ if (!old)
+ continue;
+ bpf_percpu_obj_drop(old);
+ }
+}
+
+#define CALL_BATCH_ALLOC(size, batch, idx) \
+ batch_alloc((struct bpf_map *)(&array_##size), batch, idx)
+
#define CALL_BATCH_ALLOC_FREE(size, batch, idx) \
- batch_alloc_free((struct bpf_map *)(&array_##size), batch, idx)
+ do { \
+ batch_alloc((struct bpf_map *)(&array_##size), batch, idx); \
+ batch_free((struct bpf_map *)(&array_##size), batch, idx); \
+ } while (0)
+
+#define CALL_BATCH_PERCPU_ALLOC(size, batch, idx) \
+ batch_percpu_alloc((struct bpf_map *)(&array_percpu_##size), batch, idx)
+
+#define CALL_BATCH_PERCPU_ALLOC_FREE(size, batch, idx) \
+ do { \
+ batch_percpu_alloc((struct bpf_map *)(&array_percpu_##size), batch, idx); \
+ batch_percpu_free((struct bpf_map *)(&array_percpu_##size), batch, idx); \
+ } while (0)
DEFINE_ARRAY_WITH_KPTR(8);
DEFINE_ARRAY_WITH_KPTR(16);
@@ -97,8 +179,21 @@ DEFINE_ARRAY_WITH_KPTR(1024);
DEFINE_ARRAY_WITH_KPTR(2048);
DEFINE_ARRAY_WITH_KPTR(4096);
-SEC("fentry/" SYS_PREFIX "sys_nanosleep")
-int test_bpf_mem_alloc_free(void *ctx)
+/* per-cpu kptr doesn't support bin_data_8 which is a zero-sized array */
+DEFINE_ARRAY_WITH_PERCPU_KPTR(16);
+DEFINE_ARRAY_WITH_PERCPU_KPTR(32);
+DEFINE_ARRAY_WITH_PERCPU_KPTR(64);
+DEFINE_ARRAY_WITH_PERCPU_KPTR(96);
+DEFINE_ARRAY_WITH_PERCPU_KPTR(128);
+DEFINE_ARRAY_WITH_PERCPU_KPTR(192);
+DEFINE_ARRAY_WITH_PERCPU_KPTR(256);
+DEFINE_ARRAY_WITH_PERCPU_KPTR(512);
+DEFINE_ARRAY_WITH_PERCPU_KPTR(1024);
+DEFINE_ARRAY_WITH_PERCPU_KPTR(2048);
+DEFINE_ARRAY_WITH_PERCPU_KPTR(4096);
+
+SEC("?fentry/" SYS_PREFIX "sys_nanosleep")
+int test_batch_alloc_free(void *ctx)
{
if ((u32)bpf_get_current_pid_tgid() != pid)
return 0;
@@ -121,3 +216,76 @@ int test_bpf_mem_alloc_free(void *ctx)
return 0;
}
+
+SEC("?fentry/" SYS_PREFIX "sys_nanosleep")
+int test_free_through_map_free(void *ctx)
+{
+ if ((u32)bpf_get_current_pid_tgid() != pid)
+ return 0;
+
+ /* Alloc 128 8-byte objects in batch to trigger refilling,
+ * then free these objects through map free.
+ */
+ CALL_BATCH_ALLOC(8, 128, 0);
+ CALL_BATCH_ALLOC(16, 128, 1);
+ CALL_BATCH_ALLOC(32, 128, 2);
+ CALL_BATCH_ALLOC(64, 128, 3);
+ CALL_BATCH_ALLOC(96, 128, 4);
+ CALL_BATCH_ALLOC(128, 128, 5);
+ CALL_BATCH_ALLOC(192, 128, 6);
+ CALL_BATCH_ALLOC(256, 128, 7);
+ CALL_BATCH_ALLOC(512, 64, 8);
+ CALL_BATCH_ALLOC(1024, 32, 9);
+ CALL_BATCH_ALLOC(2048, 16, 10);
+ CALL_BATCH_ALLOC(4096, 8, 11);
+
+ return 0;
+}
+
+SEC("?fentry/" SYS_PREFIX "sys_nanosleep")
+int test_batch_percpu_alloc_free(void *ctx)
+{
+ if ((u32)bpf_get_current_pid_tgid() != pid)
+ return 0;
+
+ /* Alloc 128 16-byte per-cpu objects in batch to trigger refilling,
+ * then free 128 16-byte per-cpu objects in batch to trigger freeing.
+ */
+ CALL_BATCH_PERCPU_ALLOC_FREE(16, 128, 1);
+ CALL_BATCH_PERCPU_ALLOC_FREE(32, 128, 2);
+ CALL_BATCH_PERCPU_ALLOC_FREE(64, 128, 3);
+ CALL_BATCH_PERCPU_ALLOC_FREE(96, 128, 4);
+ CALL_BATCH_PERCPU_ALLOC_FREE(128, 128, 5);
+ CALL_BATCH_PERCPU_ALLOC_FREE(192, 128, 6);
+ CALL_BATCH_PERCPU_ALLOC_FREE(256, 128, 7);
+ CALL_BATCH_PERCPU_ALLOC_FREE(512, 64, 8);
+ CALL_BATCH_PERCPU_ALLOC_FREE(1024, 32, 9);
+ CALL_BATCH_PERCPU_ALLOC_FREE(2048, 16, 10);
+ CALL_BATCH_PERCPU_ALLOC_FREE(4096, 8, 11);
+
+ return 0;
+}
+
+SEC("?fentry/" SYS_PREFIX "sys_nanosleep")
+int test_percpu_free_through_map_free(void *ctx)
+{
+ if ((u32)bpf_get_current_pid_tgid() != pid)
+ return 0;
+
+ /* Alloc 128 16-byte per-cpu objects in batch to trigger refilling,
+ * then free these objects through map free.
+ */
+ CALL_BATCH_PERCPU_ALLOC(16, 128, 1);
+ CALL_BATCH_PERCPU_ALLOC(32, 128, 2);
+ CALL_BATCH_PERCPU_ALLOC(64, 128, 3);
+ CALL_BATCH_PERCPU_ALLOC(96, 128, 4);
+ CALL_BATCH_PERCPU_ALLOC(128, 128, 5);
+ CALL_BATCH_PERCPU_ALLOC(192, 128, 6);
+ CALL_BATCH_PERCPU_ALLOC(256, 128, 7);
+ CALL_BATCH_PERCPU_ALLOC(512, 64, 8);
+ CALL_BATCH_PERCPU_ALLOC(1024, 32, 9);
+ CALL_BATCH_PERCPU_ALLOC(2048, 16, 10);
+ CALL_BATCH_PERCPU_ALLOC(4096, 8, 11);
+
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/test_ldsx_insn.c b/tools/testing/selftests/bpf/progs/test_ldsx_insn.c
index 67c14ba1e87b..3ddcb3777912 100644
--- a/tools/testing/selftests/bpf/progs/test_ldsx_insn.c
+++ b/tools/testing/selftests/bpf/progs/test_ldsx_insn.c
@@ -6,7 +6,8 @@
#include <bpf/bpf_tracing.h>
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
- (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64)) && __clang_major__ >= 18
+ (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
+ defined(__TARGET_ARCH_s390)) && __clang_major__ >= 18
const volatile int skip = 0;
#else
const volatile int skip = 1;
@@ -104,7 +105,11 @@ int _tc(volatile struct __sk_buff *skb)
"%[tmp_mark] = r1"
: [tmp_mark]"=r"(tmp_mark)
: [ctx]"r"(skb),
- [off_mark]"i"(offsetof(struct __sk_buff, mark))
+ [off_mark]"i"(offsetof(struct __sk_buff, mark)
+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
+ + sizeof(skb->mark) - 1
+#endif
+ )
: "r1");
#else
tmp_mark = (char)skb->mark;
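
The +sizeof(skb->mark)-1 adjustment above is purely about byte order: a one-byte sign-extending load of the least significant byte of a 32-bit field reads offset 0 on little-endian but offset 3 on big-endian. A plain-C sketch of the same idea (not BPF code):

    #include <stdint.h>

    static int8_t low_byte_s8(const uint32_t *p)
    {
    #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
            return *(const int8_t *)p;                     /* LSB is the first byte */
    #else
            return *((const int8_t *)p + sizeof(*p) - 1);  /* LSB is the last byte */
    #endif
    }
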
diff --git a/tools/testing/selftests/bpf/progs/test_task_under_cgroup.c b/tools/testing/selftests/bpf/progs/test_task_under_cgroup.c
index 56cdc0a553f0..7e750309ce27 100644
--- a/tools/testing/selftests/bpf/progs/test_task_under_cgroup.c
+++ b/tools/testing/selftests/bpf/progs/test_task_under_cgroup.c
@@ -18,7 +18,7 @@ const volatile __u64 cgid;
int remote_pid;
SEC("tp_btf/task_newtask")
-int BPF_PROG(handle__task_newtask, struct task_struct *task, u64 clone_flags)
+int BPF_PROG(tp_btf_run, struct task_struct *task, u64 clone_flags)
{
struct cgroup *cgrp = NULL;
struct task_struct *acquired;
@@ -48,4 +48,30 @@ out:
return 0;
}
+SEC("lsm.s/bpf")
+int BPF_PROG(lsm_run, int cmd, union bpf_attr *attr, unsigned int size)
+{
+ struct cgroup *cgrp = NULL;
+ struct task_struct *task;
+ int ret = 0;
+
+ task = bpf_get_current_task_btf();
+ if (local_pid != task->pid)
+ return 0;
+
+ if (cmd != BPF_LINK_CREATE)
+ return 0;
+
+ /* 1 is the root cgroup */
+ cgrp = bpf_cgroup_from_id(1);
+ if (!cgrp)
+ goto out;
+ if (!bpf_task_under_cgroup(task, cgrp))
+ ret = -1;
+ bpf_cgroup_release(cgrp);
+
+out:
+ return ret;
+}
+
char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/progs/test_tc_link.c b/tools/testing/selftests/bpf/progs/test_tc_link.c
index 30e7124c49a1..992400acb957 100644
--- a/tools/testing/selftests/bpf/progs/test_tc_link.c
+++ b/tools/testing/selftests/bpf/progs/test_tc_link.c
@@ -1,7 +1,11 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2023 Isovalent */
#include <stdbool.h>
+
#include <linux/bpf.h>
+#include <linux/if_ether.h>
+
+#include <bpf/bpf_endian.h>
#include <bpf/bpf_helpers.h>
char LICENSE[] SEC("license") = "GPL";
@@ -12,10 +16,19 @@ bool seen_tc3;
bool seen_tc4;
bool seen_tc5;
bool seen_tc6;
+bool seen_eth;
SEC("tc/ingress")
int tc1(struct __sk_buff *skb)
{
+ struct ethhdr eth = {};
+
+ if (skb->protocol != __bpf_constant_htons(ETH_P_IP))
+ goto out;
+ if (bpf_skb_load_bytes(skb, 0, &eth, sizeof(eth)))
+ goto out;
+ seen_eth = eth.h_proto == bpf_htons(ETH_P_IP);
+out:
seen_tc1 = true;
return TCX_NEXT;
}
diff --git a/tools/testing/selftests/bpf/progs/test_uprobe.c b/tools/testing/selftests/bpf/progs/test_uprobe.c
new file mode 100644
index 000000000000..896c88a4960d
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/test_uprobe.c
@@ -0,0 +1,61 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (c) 2023 Hengqi Chen */
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+pid_t my_pid = 0;
+
+int test1_result = 0;
+int test2_result = 0;
+int test3_result = 0;
+int test4_result = 0;
+
+SEC("uprobe/./liburandom_read.so:urandlib_api_sameoffset")
+int BPF_UPROBE(test1)
+{
+ pid_t pid = bpf_get_current_pid_tgid() >> 32;
+
+ if (pid != my_pid)
+ return 0;
+
+ test1_result = 1;
+ return 0;
+}
+
+SEC("uprobe/./liburandom_read.so:urandlib_api_sameoffset@LIBURANDOM_READ_1.0.0")
+int BPF_UPROBE(test2)
+{
+ pid_t pid = bpf_get_current_pid_tgid() >> 32;
+
+ if (pid != my_pid)
+ return 0;
+
+ test2_result = 1;
+ return 0;
+}
+
+SEC("uretprobe/./liburandom_read.so:urandlib_api_sameoffset@@LIBURANDOM_READ_2.0.0")
+int BPF_URETPROBE(test3, int ret)
+{
+ pid_t pid = bpf_get_current_pid_tgid() >> 32;
+
+ if (pid != my_pid)
+ return 0;
+
+ test3_result = ret;
+ return 0;
+}
+
+SEC("uprobe")
+int BPF_UPROBE(test4)
+{
+ pid_t pid = bpf_get_current_pid_tgid() >> 32;
+
+ if (pid != my_pid)
+ return 0;
+
+ test4_result = 1;
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/test_vmlinux.c b/tools/testing/selftests/bpf/progs/test_vmlinux.c
index 4b8e37f7fd06..78b23934d9f8 100644
--- a/tools/testing/selftests/bpf/progs/test_vmlinux.c
+++ b/tools/testing/selftests/bpf/progs/test_vmlinux.c
@@ -16,12 +16,12 @@ bool kprobe_called = false;
bool fentry_called = false;
SEC("tp/syscalls/sys_enter_nanosleep")
-int handle__tp(struct trace_event_raw_sys_enter *args)
+int handle__tp(struct syscall_trace_enter *args)
{
struct __kernel_timespec *ts;
long tv_nsec;
- if (args->id != __NR_nanosleep)
+ if (args->nr != __NR_nanosleep)
return 0;
ts = (void *)args->args[0];
diff --git a/tools/testing/selftests/bpf/progs/timer.c b/tools/testing/selftests/bpf/progs/timer.c
index 9a16d95213e1..8b946c8188c6 100644
--- a/tools/testing/selftests/bpf/progs/timer.c
+++ b/tools/testing/selftests/bpf/progs/timer.c
@@ -51,7 +51,7 @@ struct {
__uint(max_entries, 1);
__type(key, int);
__type(value, struct elem);
-} abs_timer SEC(".maps");
+} abs_timer SEC(".maps"), soft_timer_pinned SEC(".maps"), abs_timer_pinned SEC(".maps");
__u64 bss_data;
__u64 abs_data;
@@ -59,6 +59,8 @@ __u64 err;
__u64 ok;
__u64 callback_check = 52;
__u64 callback2_check = 52;
+__u64 pinned_callback_check;
+__s32 pinned_cpu;
#define ARRAY 1
#define HTAB 2
@@ -329,3 +331,62 @@ int BPF_PROG2(test3, int, a)
return 0;
}
+
+/* callback for pinned timer */
+static int timer_cb_pinned(void *map, int *key, struct bpf_timer *timer)
+{
+ __s32 cpu = bpf_get_smp_processor_id();
+
+ if (cpu != pinned_cpu)
+ err |= 16384;
+
+ pinned_callback_check++;
+ return 0;
+}
+
+static void test_pinned_timer(bool soft)
+{
+ int key = 0;
+ void *map;
+ struct bpf_timer *timer;
+ __u64 flags = BPF_F_TIMER_CPU_PIN;
+ __u64 start_time;
+
+ if (soft) {
+ map = &soft_timer_pinned;
+ start_time = 0;
+ } else {
+ map = &abs_timer_pinned;
+ start_time = bpf_ktime_get_boot_ns();
+ flags |= BPF_F_TIMER_ABS;
+ }
+
+ timer = bpf_map_lookup_elem(map, &key);
+ if (timer) {
+ if (bpf_timer_init(timer, map, CLOCK_BOOTTIME) != 0)
+ err |= 4096;
+ bpf_timer_set_callback(timer, timer_cb_pinned);
+ pinned_cpu = bpf_get_smp_processor_id();
+ bpf_timer_start(timer, start_time + 1000, flags);
+ } else {
+ err |= 8192;
+ }
+}
+
+SEC("fentry/bpf_fentry_test4")
+int BPF_PROG2(test4, int, a)
+{
+ bpf_printk("test4");
+ test_pinned_timer(true);
+
+ return 0;
+}
+
+SEC("fentry/bpf_fentry_test5")
+int BPF_PROG2(test5, int, a)
+{
+ bpf_printk("test5");
+ test_pinned_timer(false);
+
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/verifier_bswap.c b/tools/testing/selftests/bpf/progs/verifier_bswap.c
index 8893094725f0..107525fb4a6a 100644
--- a/tools/testing/selftests/bpf/progs/verifier_bswap.c
+++ b/tools/testing/selftests/bpf/progs/verifier_bswap.c
@@ -5,7 +5,9 @@
#include "bpf_misc.h"
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
- (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64)) && __clang_major__ >= 18
+ (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
+ defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
+ __clang_major__ >= 18
SEC("socket")
__description("BSWAP, 16")
diff --git a/tools/testing/selftests/bpf/progs/verifier_gotol.c b/tools/testing/selftests/bpf/progs/verifier_gotol.c
index 2dae5322a18e..9f202eda952f 100644
--- a/tools/testing/selftests/bpf/progs/verifier_gotol.c
+++ b/tools/testing/selftests/bpf/progs/verifier_gotol.c
@@ -5,7 +5,9 @@
#include "bpf_misc.h"
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
- (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64)) && __clang_major__ >= 18
+ (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
+ defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
+ __clang_major__ >= 18
SEC("socket")
__description("gotol, small_imm")
diff --git a/tools/testing/selftests/bpf/progs/verifier_ldsx.c b/tools/testing/selftests/bpf/progs/verifier_ldsx.c
index 0c638f45aaf1..375525329637 100644
--- a/tools/testing/selftests/bpf/progs/verifier_ldsx.c
+++ b/tools/testing/selftests/bpf/progs/verifier_ldsx.c
@@ -5,19 +5,25 @@
#include "bpf_misc.h"
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
- (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64)) && __clang_major__ >= 18
+ (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
+ defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
+ __clang_major__ >= 18
SEC("socket")
__description("LDSX, S8")
__success __success_unpriv __retval(-2)
__naked void ldsx_s8(void)
{
- asm volatile (" \
- r1 = 0x3fe; \
- *(u64 *)(r10 - 8) = r1; \
- r0 = *(s8 *)(r10 - 8); \
- exit; \
-" ::: __clobber_all);
+ asm volatile (
+ "r1 = 0x3fe;"
+ "*(u64 *)(r10 - 8) = r1;"
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ "r0 = *(s8 *)(r10 - 8);"
+#else
+ "r0 = *(s8 *)(r10 - 1);"
+#endif
+ "exit;"
+ ::: __clobber_all);
}
SEC("socket")
@@ -25,12 +31,16 @@ __description("LDSX, S16")
__success __success_unpriv __retval(-2)
__naked void ldsx_s16(void)
{
- asm volatile (" \
- r1 = 0x3fffe; \
- *(u64 *)(r10 - 8) = r1; \
- r0 = *(s16 *)(r10 - 8); \
- exit; \
-" ::: __clobber_all);
+ asm volatile (
+ "r1 = 0x3fffe;"
+ "*(u64 *)(r10 - 8) = r1;"
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ "r0 = *(s16 *)(r10 - 8);"
+#else
+ "r0 = *(s16 *)(r10 - 2);"
+#endif
+ "exit;"
+ ::: __clobber_all);
}
SEC("socket")
@@ -38,35 +48,43 @@ __description("LDSX, S32")
__success __success_unpriv __retval(-1)
__naked void ldsx_s32(void)
{
- asm volatile (" \
- r1 = 0xfffffffe; \
- *(u64 *)(r10 - 8) = r1; \
- r0 = *(s32 *)(r10 - 8); \
- r0 >>= 1; \
- exit; \
-" ::: __clobber_all);
+ asm volatile (
+ "r1 = 0xfffffffe;"
+ "*(u64 *)(r10 - 8) = r1;"
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ "r0 = *(s32 *)(r10 - 8);"
+#else
+ "r0 = *(s32 *)(r10 - 4);"
+#endif
+ "r0 >>= 1;"
+ "exit;"
+ ::: __clobber_all);
}
SEC("socket")
__description("LDSX, S8 range checking, privileged")
__log_level(2) __success __retval(1)
-__msg("R1_w=scalar(smin=-128,smax=127)")
+__msg("R1_w=scalar(smin=smin32=-128,smax=smax32=127)")
__naked void ldsx_s8_range_priv(void)
{
- asm volatile (" \
- call %[bpf_get_prandom_u32]; \
- *(u64 *)(r10 - 8) = r0; \
- r1 = *(s8 *)(r10 - 8); \
- /* r1 with s8 range */ \
- if r1 s> 0x7f goto l0_%=; \
- if r1 s< -0x80 goto l0_%=; \
- r0 = 1; \
-l1_%=: \
- exit; \
-l0_%=: \
- r0 = 2; \
- goto l1_%=; \
-" :
+ asm volatile (
+ "call %[bpf_get_prandom_u32];"
+ "*(u64 *)(r10 - 8) = r0;"
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ "r1 = *(s8 *)(r10 - 8);"
+#else
+ "r1 = *(s8 *)(r10 - 1);"
+#endif
+ /* r1 with s8 range */
+ "if r1 s> 0x7f goto l0_%=;"
+ "if r1 s< -0x80 goto l0_%=;"
+ "r0 = 1;"
+"l1_%=:"
+ "exit;"
+"l0_%=:"
+ "r0 = 2;"
+ "goto l1_%=;"
+ :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
@@ -76,20 +94,24 @@ __description("LDSX, S16 range checking")
__success __success_unpriv __retval(1)
__naked void ldsx_s16_range(void)
{
- asm volatile (" \
- call %[bpf_get_prandom_u32]; \
- *(u64 *)(r10 - 8) = r0; \
- r1 = *(s16 *)(r10 - 8); \
- /* r1 with s16 range */ \
- if r1 s> 0x7fff goto l0_%=; \
- if r1 s< -0x8000 goto l0_%=; \
- r0 = 1; \
-l1_%=: \
- exit; \
-l0_%=: \
- r0 = 2; \
- goto l1_%=; \
-" :
+ asm volatile (
+ "call %[bpf_get_prandom_u32];"
+ "*(u64 *)(r10 - 8) = r0;"
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ "r1 = *(s16 *)(r10 - 8);"
+#else
+ "r1 = *(s16 *)(r10 - 2);"
+#endif
+ /* r1 with s16 range */
+ "if r1 s> 0x7fff goto l0_%=;"
+ "if r1 s< -0x8000 goto l0_%=;"
+ "r0 = 1;"
+"l1_%=:"
+ "exit;"
+"l0_%=:"
+ "r0 = 2;"
+ "goto l1_%=;"
+ :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
@@ -99,20 +121,24 @@ __description("LDSX, S32 range checking")
__success __success_unpriv __retval(1)
__naked void ldsx_s32_range(void)
{
- asm volatile (" \
- call %[bpf_get_prandom_u32]; \
- *(u64 *)(r10 - 8) = r0; \
- r1 = *(s32 *)(r10 - 8); \
- /* r1 with s16 range */ \
- if r1 s> 0x7fffFFFF goto l0_%=; \
- if r1 s< -0x80000000 goto l0_%=; \
- r0 = 1; \
-l1_%=: \
- exit; \
-l0_%=: \
- r0 = 2; \
- goto l1_%=; \
-" :
+ asm volatile (
+ "call %[bpf_get_prandom_u32];"
+ "*(u64 *)(r10 - 8) = r0;"
+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
+ "r1 = *(s32 *)(r10 - 8);"
+#else
+ "r1 = *(s32 *)(r10 - 4);"
+#endif
+ /* r1 with s32 range */
+ "if r1 s> 0x7fffFFFF goto l0_%=;"
+ "if r1 s< -0x80000000 goto l0_%=;"
+ "r0 = 1;"
+"l1_%=:"
+ "exit;"
+"l0_%=:"
+ "r0 = 2;"
+ "goto l1_%=;"
+ :
: __imm(bpf_get_prandom_u32)
: __clobber_all);
}
diff --git a/tools/testing/selftests/bpf/progs/verifier_movsx.c b/tools/testing/selftests/bpf/progs/verifier_movsx.c
index 3c8ac2c57b1b..b2a04d1179d0 100644
--- a/tools/testing/selftests/bpf/progs/verifier_movsx.c
+++ b/tools/testing/selftests/bpf/progs/verifier_movsx.c
@@ -5,7 +5,9 @@
#include "bpf_misc.h"
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
- (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64)) && __clang_major__ >= 18
+ (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
+ defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
+ __clang_major__ >= 18
SEC("socket")
__description("MOV32SX, S8")
diff --git a/tools/testing/selftests/bpf/progs/verifier_sdiv.c b/tools/testing/selftests/bpf/progs/verifier_sdiv.c
index 0990f8825675..8fc5174808b2 100644
--- a/tools/testing/selftests/bpf/progs/verifier_sdiv.c
+++ b/tools/testing/selftests/bpf/progs/verifier_sdiv.c
@@ -5,7 +5,9 @@
#include "bpf_misc.h"
#if (defined(__TARGET_ARCH_arm64) || defined(__TARGET_ARCH_x86) || \
- (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64)) && __clang_major__ >= 18
+ (defined(__TARGET_ARCH_riscv) && __riscv_xlen == 64) || \
+ defined(__TARGET_ARCH_arm) || defined(__TARGET_ARCH_s390)) && \
+ __clang_major__ >= 18
SEC("socket")
__description("SDIV32, non-zero imm divisor, check 1")
diff --git a/tools/testing/selftests/bpf/progs/xdp_hw_metadata.c b/tools/testing/selftests/bpf/progs/xdp_hw_metadata.c
index b2dfd7066c6e..f6d1cc9ad892 100644
--- a/tools/testing/selftests/bpf/progs/xdp_hw_metadata.c
+++ b/tools/testing/selftests/bpf/progs/xdp_hw_metadata.c
@@ -21,7 +21,7 @@ extern int bpf_xdp_metadata_rx_timestamp(const struct xdp_md *ctx,
extern int bpf_xdp_metadata_rx_hash(const struct xdp_md *ctx, __u32 *hash,
enum xdp_rss_hash_type *rss_type) __ksym;
-SEC("xdp")
+SEC("xdp.frags")
int rx(struct xdp_md *ctx)
{
void *data, *data_meta, *data_end;
diff --git a/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c b/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c
index 07d786329105..e959336c7a73 100644
--- a/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c
+++ b/tools/testing/selftests/bpf/progs/xdp_synproxy_kern.c
@@ -177,7 +177,7 @@ static __always_inline __u32 tcp_ns_to_ts(__u64 ns)
return ns / (NSEC_PER_SEC / TCP_TS_HZ);
}
-static __always_inline __u32 tcp_time_stamp_raw(void)
+static __always_inline __u32 tcp_clock_ms(void)
{
return tcp_ns_to_ts(tcp_clock_ns());
}
@@ -274,7 +274,7 @@ static __always_inline bool tscookie_init(struct tcphdr *tcp_header,
if (!loop_ctx.option_timestamp)
return false;
- cookie = tcp_time_stamp_raw() & ~TSMASK;
+ cookie = tcp_clock_ms() & ~TSMASK;
cookie |= loop_ctx.wscale & TS_OPT_WSCALE_MASK;
if (loop_ctx.option_sack)
cookie |= TS_OPT_SACK;
diff --git a/tools/testing/selftests/bpf/progs/xsk_xdp_progs.c b/tools/testing/selftests/bpf/progs/xsk_xdp_progs.c
index 24369f242853..ccde6a4c6319 100644
--- a/tools/testing/selftests/bpf/progs/xsk_xdp_progs.c
+++ b/tools/testing/selftests/bpf/progs/xsk_xdp_progs.c
@@ -3,11 +3,12 @@
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
-#include "xsk_xdp_metadata.h"
+#include <linux/if_ether.h>
+#include "xsk_xdp_common.h"
struct {
__uint(type, BPF_MAP_TYPE_XSKMAP);
- __uint(max_entries, 1);
+ __uint(max_entries, 2);
__uint(key_size, sizeof(int));
__uint(value_size, sizeof(int));
} xsk SEC(".maps");
@@ -52,4 +53,21 @@ SEC("xdp.frags") int xsk_xdp_populate_metadata(struct xdp_md *xdp)
return bpf_redirect_map(&xsk, 0, XDP_DROP);
}
+SEC("xdp") int xsk_xdp_shared_umem(struct xdp_md *xdp)
+{
+ void *data = (void *)(long)xdp->data;
+ void *data_end = (void *)(long)xdp->data_end;
+ struct ethhdr *eth = data;
+
+ if (eth + 1 > data_end)
+ return XDP_DROP;
+
+ /* Redirecting packets based on the destination MAC address */
+ idx = ((unsigned int)(eth->h_dest[5])) / 2;
+ if (idx > MAX_SOCKETS)
+ return XDP_DROP;
+
+ return bpf_redirect_map(&xsk, idx, XDP_DROP);
+}
+
char _license[] SEC("license") = "GPL";
diff --git a/tools/testing/selftests/bpf/test_bpftool_synctypes.py b/tools/testing/selftests/bpf/test_bpftool_synctypes.py
index 0cfece7ff4f8..0ed67b6b31dd 100755
--- a/tools/testing/selftests/bpf/test_bpftool_synctypes.py
+++ b/tools/testing/selftests/bpf/test_bpftool_synctypes.py
@@ -509,6 +509,15 @@ def main():
source_map_types.remove('cgroup_storage_deprecated')
source_map_types.add('cgroup_storage')
+ # The same applies to BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED and
+ # BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE which share the same enum value
+ # and source_map_types picks
+ # BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE_DEPRECATED/percpu_cgroup_storage_deprecated.
+ # Replace 'percpu_cgroup_storage_deprecated' with 'percpu_cgroup_storage'
+ # so it aligns with what `bpftool map help` shows.
+ source_map_types.remove('percpu_cgroup_storage_deprecated')
+ source_map_types.add('percpu_cgroup_storage')
+
help_map_types = map_info.get_map_help()
help_map_options = map_info.get_options()
map_info.close()
diff --git a/tools/testing/selftests/bpf/test_loader.c b/tools/testing/selftests/bpf/test_loader.c
index b4edd8454934..37ffa57f28a1 100644
--- a/tools/testing/selftests/bpf/test_loader.c
+++ b/tools/testing/selftests/bpf/test_loader.c
@@ -69,7 +69,7 @@ static int tester_init(struct test_loader *tester)
{
if (!tester->log_buf) {
tester->log_buf_sz = TEST_LOADER_LOG_BUF_SZ;
- tester->log_buf = malloc(tester->log_buf_sz);
+ tester->log_buf = calloc(tester->log_buf_sz, 1);
if (!ASSERT_OK_PTR(tester->log_buf, "tester_log_buf"))
return -ENOMEM;
}
@@ -538,7 +538,7 @@ void run_subtest(struct test_loader *tester,
bool unpriv)
{
struct test_subspec *subspec = unpriv ? &spec->unpriv : &spec->priv;
- struct bpf_program *tprog, *tprog_iter;
+ struct bpf_program *tprog = NULL, *tprog_iter;
struct test_spec *spec_iter;
struct cap_state caps = {};
struct bpf_object *tobj;
diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
index 4d582cac2c09..1b9387890148 100644
--- a/tools/testing/selftests/bpf/test_progs.c
+++ b/tools/testing/selftests/bpf/test_progs.c
@@ -255,7 +255,7 @@ static void print_subtest_name(int test_num, int subtest_num,
const char *test_name, char *subtest_name,
char *result)
{
- char test_num_str[TEST_NUM_WIDTH + 1];
+ char test_num_str[32];
snprintf(test_num_str, sizeof(test_num_str), "%d/%d", test_num, subtest_num);
diff --git a/tools/testing/selftests/bpf/test_progs.h b/tools/testing/selftests/bpf/test_progs.h
index 77bd492c6024..2f9f6f250f17 100644
--- a/tools/testing/selftests/bpf/test_progs.h
+++ b/tools/testing/selftests/bpf/test_progs.h
@@ -417,6 +417,8 @@ int get_bpf_max_tramp_links(void);
#define SYS_NANOSLEEP_KPROBE_NAME "__s390x_sys_nanosleep"
#elif defined(__aarch64__)
#define SYS_NANOSLEEP_KPROBE_NAME "__arm64_sys_nanosleep"
+#elif defined(__riscv)
+#define SYS_NANOSLEEP_KPROBE_NAME "__riscv_sys_nanosleep"
#else
#define SYS_NANOSLEEP_KPROBE_NAME "sys_nanosleep"
#endif
diff --git a/tools/testing/selftests/bpf/test_xsk.sh b/tools/testing/selftests/bpf/test_xsk.sh
index 2aa5a3445056..65aafe0003db 100755
--- a/tools/testing/selftests/bpf/test_xsk.sh
+++ b/tools/testing/selftests/bpf/test_xsk.sh
@@ -73,17 +73,33 @@
#
# Run test suite for physical device in loopback mode
# sudo ./test_xsk.sh -i IFACE
+#
+# Run test suite in a specific mode only [skb,drv,zc]
+# sudo ./test_xsk.sh -m MODE
+#
+# List available tests
+# ./test_xsk.sh -l
+#
+# Run a specific test from the test suite
+# sudo ./test_xsk.sh -t TEST_NAME
+#
+# Display the available command line options
+# ./test_xsk.sh -h
. xsk_prereqs.sh
ETH=""
-while getopts "vi:d" flag
+while getopts "vi:dm:lt:h" flag
do
case "${flag}" in
v) verbose=1;;
d) debug=1;;
i) ETH=${OPTARG};;
+ m) MODE=${OPTARG};;
+ l) list=1;;
+ t) TEST=${OPTARG};;
+ h) help=1;;
esac
done
@@ -131,6 +147,16 @@ setup_vethPairs() {
ip link set ${VETH0} up
}
+if [[ $list -eq 1 ]]; then
+ ./${XSKOBJ} -l
+ exit
+fi
+
+if [[ $help -eq 1 ]]; then
+ ./${XSKOBJ}
+ exit
+fi
+
if [ ! -z $ETH ]; then
VETH0=${ETH}
VETH1=${ETH}
@@ -153,6 +179,14 @@ if [[ $verbose -eq 1 ]]; then
ARGS+="-v "
fi
+if [ -n "$MODE" ]; then
+ ARGS+="-m ${MODE} "
+fi
+
+if [ -n "$TEST" ]; then
+ ARGS+="-t ${TEST} "
+fi
+
retval=$?
test_status $retval "${TEST_NAME}"
@@ -175,6 +209,10 @@ else
cleanup_iface ${ETH} ${MTU}
fi
+if [[ $list -eq 1 ]]; then
+ exit
+fi
+
TEST_NAME="XSK_SELFTESTS_${VETH0}_BUSY_POLL"
busy_poll=1
diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
index f83d9f65c65b..4faa898ff7fc 100644
--- a/tools/testing/selftests/bpf/trace_helpers.c
+++ b/tools/testing/selftests/bpf/trace_helpers.c
@@ -7,6 +7,7 @@
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
+#include <pthread.h>
#include <unistd.h>
#include <linux/perf_event.h>
#include <sys/mman.h>
@@ -14,104 +15,165 @@
#include <linux/limits.h>
#include <libelf.h>
#include <gelf.h>
+#include "bpf/libbpf_internal.h"
#define TRACEFS_PIPE "/sys/kernel/tracing/trace_pipe"
#define DEBUGFS_PIPE "/sys/kernel/debug/tracing/trace_pipe"
-#define MAX_SYMS 400000
-static struct ksym syms[MAX_SYMS];
-static int sym_cnt;
+struct ksyms {
+ struct ksym *syms;
+ size_t sym_cap;
+ size_t sym_cnt;
+};
+
+static struct ksyms *ksyms;
+static pthread_mutex_t ksyms_mutex = PTHREAD_MUTEX_INITIALIZER;
+
+static int ksyms__add_symbol(struct ksyms *ksyms, const char *name,
+ unsigned long addr)
+{
+ void *tmp;
+
+ tmp = strdup(name);
+ if (!tmp)
+ return -ENOMEM;
+ ksyms->syms[ksyms->sym_cnt].addr = addr;
+ ksyms->syms[ksyms->sym_cnt].name = tmp;
+ ksyms->sym_cnt++;
+ return 0;
+}
+
+void free_kallsyms_local(struct ksyms *ksyms)
+{
+ unsigned int i;
+
+ if (!ksyms)
+ return;
+
+ if (!ksyms->syms) {
+ free(ksyms);
+ return;
+ }
+
+ for (i = 0; i < ksyms->sym_cnt; i++)
+ free(ksyms->syms[i].name);
+ free(ksyms->syms);
+ free(ksyms);
+}
static int ksym_cmp(const void *p1, const void *p2)
{
return ((struct ksym *)p1)->addr - ((struct ksym *)p2)->addr;
}
-int load_kallsyms_refresh(void)
+struct ksyms *load_kallsyms_local(void)
{
FILE *f;
char func[256], buf[256];
char symbol;
void *addr;
- int i = 0;
-
- sym_cnt = 0;
+ int ret;
+ struct ksyms *ksyms;
f = fopen("/proc/kallsyms", "r");
if (!f)
- return -ENOENT;
+ return NULL;
+
+ ksyms = calloc(1, sizeof(struct ksyms));
+ if (!ksyms) {
+ fclose(f);
+ return NULL;
+ }
while (fgets(buf, sizeof(buf), f)) {
if (sscanf(buf, "%p %c %s", &addr, &symbol, func) != 3)
break;
if (!addr)
continue;
- if (i >= MAX_SYMS)
- return -EFBIG;
- syms[i].addr = (long) addr;
- syms[i].name = strdup(func);
- i++;
+ ret = libbpf_ensure_mem((void **) &ksyms->syms, &ksyms->sym_cap,
+ sizeof(struct ksym), ksyms->sym_cnt + 1);
+ if (ret)
+ goto error;
+ ret = ksyms__add_symbol(ksyms, func, (unsigned long)addr);
+ if (ret)
+ goto error;
}
fclose(f);
- sym_cnt = i;
- qsort(syms, sym_cnt, sizeof(struct ksym), ksym_cmp);
- return 0;
+ qsort(ksyms->syms, ksyms->sym_cnt, sizeof(struct ksym), ksym_cmp);
+ return ksyms;
+
+error:
+ fclose(f);
+ free_kallsyms_local(ksyms);
+ return NULL;
}
int load_kallsyms(void)
{
- /*
- * This is called/used from multiplace places,
- * load symbols just once.
- */
- if (sym_cnt)
- return 0;
- return load_kallsyms_refresh();
+ pthread_mutex_lock(&ksyms_mutex);
+ if (!ksyms)
+ ksyms = load_kallsyms_local();
+ pthread_mutex_unlock(&ksyms_mutex);
+ return ksyms ? 0 : 1;
}
-struct ksym *ksym_search(long key)
+struct ksym *ksym_search_local(struct ksyms *ksyms, long key)
{
- int start = 0, end = sym_cnt;
+ int start = 0, end = ksyms->sym_cnt;
int result;
/* kallsyms not loaded. return NULL */
- if (sym_cnt <= 0)
+ if (ksyms->sym_cnt <= 0)
return NULL;
while (start < end) {
size_t mid = start + (end - start) / 2;
- result = key - syms[mid].addr;
+ result = key - ksyms->syms[mid].addr;
if (result < 0)
end = mid;
else if (result > 0)
start = mid + 1;
else
- return &syms[mid];
+ return &ksyms->syms[mid];
}
- if (start >= 1 && syms[start - 1].addr < key &&
- key < syms[start].addr)
+ if (start >= 1 && ksyms->syms[start - 1].addr < key &&
+ key < ksyms->syms[start].addr)
/* valid ksym */
- return &syms[start - 1];
+ return &ksyms->syms[start - 1];
/* out of range. return _stext */
- return &syms[0];
+ return &ksyms->syms[0];
}
-long ksym_get_addr(const char *name)
+struct ksym *ksym_search(long key)
+{
+ if (!ksyms)
+ return NULL;
+ return ksym_search_local(ksyms, key);
+}
+
+long ksym_get_addr_local(struct ksyms *ksyms, const char *name)
{
int i;
- for (i = 0; i < sym_cnt; i++) {
- if (strcmp(syms[i].name, name) == 0)
- return syms[i].addr;
+ for (i = 0; i < ksyms->sym_cnt; i++) {
+ if (strcmp(ksyms->syms[i].name, name) == 0)
+ return ksyms->syms[i].addr;
}
return 0;
}
+long ksym_get_addr(const char *name)
+{
+ if (!ksyms)
+ return 0;
+ return ksym_get_addr_local(ksyms, name);
+}
+
/* open kallsyms and read symbol addresses on the fly. Without caching all symbols,
* this is faster than load + find.
*/
diff --git a/tools/testing/selftests/bpf/trace_helpers.h b/tools/testing/selftests/bpf/trace_helpers.h
index 876f3e711df6..04fd1da7079d 100644
--- a/tools/testing/selftests/bpf/trace_helpers.h
+++ b/tools/testing/selftests/bpf/trace_helpers.h
@@ -11,13 +11,17 @@ struct ksym {
long addr;
char *name;
};
+struct ksyms;
int load_kallsyms(void);
-int load_kallsyms_refresh(void);
-
struct ksym *ksym_search(long key);
long ksym_get_addr(const char *name);
+struct ksyms *load_kallsyms_local(void);
+struct ksym *ksym_search_local(struct ksyms *ksyms, long key);
+long ksym_get_addr_local(struct ksyms *ksyms, const char *name);
+void free_kallsyms_local(struct ksyms *ksyms);
+
/* open kallsyms and find addresses on the fly, faster than load + search. */
int kallsyms_find(const char *sym, unsigned long long *addr);
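
A short sketch of how the new reentrant variants are meant to be used, per the declarations above (the symbol name is just an example):

    struct ksyms *ks = load_kallsyms_local();

    if (ks) {
            long addr = ksym_get_addr_local(ks, "schedule");
            struct ksym *sym = ksym_search_local(ks, addr);

            if (sym)
                    printf("%lx -> %s\n", addr, sym->name);
            free_kallsyms_local(ks);
    }
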
diff --git a/tools/testing/selftests/bpf/unpriv_helpers.c b/tools/testing/selftests/bpf/unpriv_helpers.c
index 2a6efbd0401e..b6d016461fb0 100644
--- a/tools/testing/selftests/bpf/unpriv_helpers.c
+++ b/tools/testing/selftests/bpf/unpriv_helpers.c
@@ -4,9 +4,40 @@
#include <stdlib.h>
#include <error.h>
#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <fcntl.h>
#include "unpriv_helpers.h"
+static bool get_mitigations_off(void)
+{
+ char cmdline[4096], *c;
+ int fd, ret = false;
+
+ fd = open("/proc/cmdline", O_RDONLY);
+ if (fd < 0) {
+ perror("open /proc/cmdline");
+ return false;
+ }
+
+ if (read(fd, cmdline, sizeof(cmdline) - 1) < 0) {
+ perror("read /proc/cmdline");
+ goto out;
+ }
+
+ cmdline[sizeof(cmdline) - 1] = '\0';
+ for (c = strtok(cmdline, " \n"); c; c = strtok(NULL, " \n")) {
+ if (strncmp(c, "mitigations=off", strlen(c)))
+ continue;
+ ret = true;
+ break;
+ }
+out:
+ close(fd);
+ return ret;
+}
+
bool get_unpriv_disabled(void)
{
bool disabled;
@@ -22,5 +53,5 @@ bool get_unpriv_disabled(void)
disabled = true;
}
- return disabled;
+ return disabled ? true : get_mitigations_off();
}
diff --git a/tools/testing/selftests/bpf/urandom_read.c b/tools/testing/selftests/bpf/urandom_read.c
index e92644d0fa75..4ed795655b9f 100644
--- a/tools/testing/selftests/bpf/urandom_read.c
+++ b/tools/testing/selftests/bpf/urandom_read.c
@@ -11,6 +11,9 @@
#define _SDT_HAS_SEMAPHORES 1
#include "sdt.h"
+#define SHARED 1
+#include "bpf/libbpf_internal.h"
+
#define SEC(name) __attribute__((section(name), used))
#define BUF_SIZE 256
@@ -21,10 +24,14 @@ void urand_read_without_sema(int iter_num, int iter_cnt, int read_sz);
void urandlib_read_with_sema(int iter_num, int iter_cnt, int read_sz);
void urandlib_read_without_sema(int iter_num, int iter_cnt, int read_sz);
+int urandlib_api(void);
+COMPAT_VERSION(urandlib_api_old, urandlib_api, LIBURANDOM_READ_1.0.0)
+int urandlib_api_old(void);
+int urandlib_api_sameoffset(void);
+
unsigned short urand_read_with_sema_semaphore SEC(".probes");
-static __attribute__((noinline))
-void urandom_read(int fd, int count)
+static noinline void urandom_read(int fd, int count)
{
char buf[BUF_SIZE];
int i;
@@ -83,6 +90,10 @@ int main(int argc, char *argv[])
urandom_read(fd, count);
+ urandlib_api();
+ urandlib_api_old();
+ urandlib_api_sameoffset();
+
close(fd);
return 0;
}
diff --git a/tools/testing/selftests/bpf/urandom_read_lib1.c b/tools/testing/selftests/bpf/urandom_read_lib1.c
index 86186e24b740..8c1356d8b4ee 100644
--- a/tools/testing/selftests/bpf/urandom_read_lib1.c
+++ b/tools/testing/selftests/bpf/urandom_read_lib1.c
@@ -3,6 +3,9 @@
#define _SDT_HAS_SEMAPHORES 1
#include "sdt.h"
+#define SHARED 1
+#include "bpf/libbpf_internal.h"
+
#define SEC(name) __attribute__((section(name), used))
unsigned short urandlib_read_with_sema_semaphore SEC(".probes");
@@ -11,3 +14,22 @@ void urandlib_read_with_sema(int iter_num, int iter_cnt, int read_sz)
{
STAP_PROBE3(urandlib, read_with_sema, iter_num, iter_cnt, read_sz);
}
+
+COMPAT_VERSION(urandlib_api_v1, urandlib_api, LIBURANDOM_READ_1.0.0)
+int urandlib_api_v1(void)
+{
+ return 1;
+}
+
+DEFAULT_VERSION(urandlib_api_v2, urandlib_api, LIBURANDOM_READ_2.0.0)
+int urandlib_api_v2(void)
+{
+ return 2;
+}
+
+COMPAT_VERSION(urandlib_api_sameoffset, urandlib_api_sameoffset, LIBURANDOM_READ_1.0.0)
+DEFAULT_VERSION(urandlib_api_sameoffset, urandlib_api_sameoffset, LIBURANDOM_READ_2.0.0)
+int urandlib_api_sameoffset(void)
+{
+ return 3;
+}
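
COMPAT_VERSION() and DEFAULT_VERSION() come from libbpf_internal.h (hence the SHARED define above) and wrap GNU symbol-versioning directives. Roughly, the three definitions above correspond to the sketch below; the exact macro expansion lives in tools/lib/bpf/libbpf_internal.h, and the version nodes still have to be declared in the library's version script at link time:

int urandlib_api_v1(void) { return 1; }
__asm__(".symver urandlib_api_v1, urandlib_api@LIBURANDOM_READ_1.0.0");         /* compat version */

int urandlib_api_v2(void) { return 2; }
__asm__(".symver urandlib_api_v2, urandlib_api@@LIBURANDOM_READ_2.0.0");        /* @@ marks the default */

/* one implementation exported under two versions of the same name */
int urandlib_api_sameoffset(void) { return 3; }
__asm__(".symver urandlib_api_sameoffset, urandlib_api_sameoffset@LIBURANDOM_READ_1.0.0");
__asm__(".symver urandlib_api_sameoffset, urandlib_api_sameoffset@@LIBURANDOM_READ_2.0.0");
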
diff --git a/tools/testing/selftests/bpf/xdp_features.c b/tools/testing/selftests/bpf/xdp_features.c
index b449788fbd39..595c79141cf3 100644
--- a/tools/testing/selftests/bpf/xdp_features.c
+++ b/tools/testing/selftests/bpf/xdp_features.c
@@ -360,9 +360,9 @@ static int recv_msg(int sockfd, void *buf, size_t bufsize, void *val,
static int dut_run(struct xdp_features *skel)
{
int flags = XDP_FLAGS_UPDATE_IF_NOEXIST | XDP_FLAGS_DRV_MODE;
- int state, err, *sockfd, ctrl_sockfd, echo_sockfd;
+ int state, err = 0, *sockfd, ctrl_sockfd, echo_sockfd;
struct sockaddr_storage ctrl_addr;
- pthread_t dut_thread;
+ pthread_t dut_thread = 0;
socklen_t addrlen;
sockfd = start_reuseport_server(AF_INET6, SOCK_STREAM, NULL,
diff --git a/tools/testing/selftests/bpf/xdp_hw_metadata.c b/tools/testing/selftests/bpf/xdp_hw_metadata.c
index 613321eb84c1..17c0f92ff160 100644
--- a/tools/testing/selftests/bpf/xdp_hw_metadata.c
+++ b/tools/testing/selftests/bpf/xdp_hw_metadata.c
@@ -26,6 +26,7 @@
#include <linux/sockios.h>
#include <sys/mman.h>
#include <net/if.h>
+#include <ctype.h>
#include <poll.h>
#include <time.h>
@@ -47,6 +48,7 @@ struct xsk {
};
struct xdp_hw_metadata *bpf_obj;
+__u16 bind_flags = XDP_COPY;
struct xsk *rx_xsk;
const char *ifname;
int ifindex;
@@ -60,7 +62,7 @@ static int open_xsk(int ifindex, struct xsk *xsk, __u32 queue_id)
const struct xsk_socket_config socket_config = {
.rx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
.tx_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
- .bind_flags = XDP_COPY,
+ .bind_flags = bind_flags,
};
const struct xsk_umem_config umem_config = {
.fill_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
@@ -234,7 +236,7 @@ static int verify_metadata(struct xsk *rx_xsk, int rxq, int server_fd, clockid_t
struct pollfd fds[rxq + 1];
__u64 comp_addr;
__u64 addr;
- __u32 idx;
+ __u32 idx = 0;
int ret;
int i;
@@ -263,11 +265,14 @@ static int verify_metadata(struct xsk *rx_xsk, int rxq, int server_fd, clockid_t
verify_skb_metadata(server_fd);
for (i = 0; i < rxq; i++) {
+ bool first_seg = true;
+ bool is_eop = true;
+
if (fds[i].revents == 0)
continue;
struct xsk *xsk = &rx_xsk[i];
-
+peek:
ret = xsk_ring_cons__peek(&xsk->rx, 1, &idx);
printf("xsk_ring_cons__peek: %d\n", ret);
if (ret != 1)
@@ -276,12 +281,19 @@ static int verify_metadata(struct xsk *rx_xsk, int rxq, int server_fd, clockid_t
rx_desc = xsk_ring_cons__rx_desc(&xsk->rx, idx);
comp_addr = xsk_umem__extract_addr(rx_desc->addr);
addr = xsk_umem__add_offset_to_addr(rx_desc->addr);
- printf("%p: rx_desc[%u]->addr=%llx addr=%llx comp_addr=%llx\n",
- xsk, idx, rx_desc->addr, addr, comp_addr);
- verify_xdp_metadata(xsk_umem__get_data(xsk->umem_area, addr),
- clock_id);
+ is_eop = !(rx_desc->options & XDP_PKT_CONTD);
+ printf("%p: rx_desc[%u]->addr=%llx addr=%llx comp_addr=%llx%s\n",
+ xsk, idx, rx_desc->addr, addr, comp_addr, is_eop ? " EoP" : "");
+ if (first_seg) {
+ verify_xdp_metadata(xsk_umem__get_data(xsk->umem_area, addr),
+ clock_id);
+ first_seg = false;
+ }
+
xsk_ring_cons__release(&xsk->rx, 1);
refill_rx(xsk, comp_addr);
+ if (!is_eop)
+ goto peek;
}
}
@@ -404,6 +416,53 @@ static void timestamping_enable(int fd, int val)
error(1, errno, "setsockopt(SO_TIMESTAMPING)");
}
+static void print_usage(void)
+{
+ const char *usage =
+ "Usage: xdp_hw_metadata [OPTIONS] [IFNAME]\n"
+ " -m Enable multi-buffer XDP for larger MTU\n"
+ " -h Display this help and exit\n\n"
+ "Generate test packets on the other machine with:\n"
+ " echo -n xdp | nc -u -q1 <dst_ip> 9091\n";
+
+ printf("%s", usage);
+}
+
+static void read_args(int argc, char *argv[])
+{
+ int opt;
+
+ while ((opt = getopt(argc, argv, "mh")) != -1) {
+ switch (opt) {
+ case 'm':
+ bind_flags |= XDP_USE_SG;
+ break;
+ case 'h':
+ print_usage();
+ exit(0);
+ case '?':
+ if (isprint(optopt))
+ fprintf(stderr, "Unknown option: -%c\n", optopt);
+ fallthrough;
+ default:
+ print_usage();
+ error(-1, opterr, "Command line options error");
+ }
+ }
+
+ if (optind >= argc) {
+ fprintf(stderr, "No device name provided\n");
+ print_usage();
+ exit(-1);
+ }
+
+ ifname = argv[optind];
+ ifindex = if_nametoindex(ifname);
+
+ if (!ifindex)
+ error(-1, errno, "Invalid interface name");
+}
+
int main(int argc, char *argv[])
{
clockid_t clock_id = CLOCK_TAI;
@@ -413,13 +472,8 @@ int main(int argc, char *argv[])
struct bpf_program *prog;
- if (argc != 2) {
- fprintf(stderr, "pass device name\n");
- return -1;
- }
+ read_args(argc, argv);
- ifname = argv[1];
- ifindex = if_nametoindex(ifname);
rxq = rxq_num(ifname);
printf("rxq: %d\n", rxq);
diff --git a/tools/testing/selftests/bpf/xsk.c b/tools/testing/selftests/bpf/xsk.c
index d9fb2b730a2c..e574711eeb84 100644
--- a/tools/testing/selftests/bpf/xsk.c
+++ b/tools/testing/selftests/bpf/xsk.c
@@ -442,10 +442,9 @@ void xsk_clear_xskmap(struct bpf_map *map)
bpf_map_delete_elem(map_fd, &index);
}
-int xsk_update_xskmap(struct bpf_map *map, struct xsk_socket *xsk)
+int xsk_update_xskmap(struct bpf_map *map, struct xsk_socket *xsk, u32 index)
{
int map_fd, sock_fd;
- u32 index = 0;
map_fd = bpf_map__fd(map);
sock_fd = xsk_socket__fd(xsk);
diff --git a/tools/testing/selftests/bpf/xsk.h b/tools/testing/selftests/bpf/xsk.h
index d93200fdaa8d..771570bc3731 100644
--- a/tools/testing/selftests/bpf/xsk.h
+++ b/tools/testing/selftests/bpf/xsk.h
@@ -204,7 +204,7 @@ struct xsk_umem_config {
int xsk_attach_xdp_program(struct bpf_program *prog, int ifindex, u32 xdp_flags);
void xsk_detach_xdp_program(int ifindex, u32 xdp_flags);
-int xsk_update_xskmap(struct bpf_map *map, struct xsk_socket *xsk);
+int xsk_update_xskmap(struct bpf_map *map, struct xsk_socket *xsk, u32 index);
void xsk_clear_xskmap(struct bpf_map *map);
bool xsk_is_in_mode(u32 ifindex, int mode);
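
Letting the caller pick the XSKMAP index is what allows two sockets to be installed in the same map, which the shared-UMEM test added later in this series relies on. Under the hood this is just an element update keyed by that index; a sketch of the equivalent libbpf calls:

#include <bpf/bpf.h>
#include <bpf/libbpf.h>
#include "xsk.h"

static int install_socket(struct bpf_map *map, struct xsk_socket *xsk, __u32 index)
{
        int map_fd = bpf_map__fd(map);
        int sock_fd = xsk_socket__fd(xsk);

        /* an XSKMAP maps the chosen slot to the AF_XDP socket fd */
        return bpf_map_update_elem(map_fd, &index, &sock_fd, 0);
}
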
diff --git a/tools/testing/selftests/bpf/xsk_prereqs.sh b/tools/testing/selftests/bpf/xsk_prereqs.sh
index 29175682c44d..47c7b8064f38 100755
--- a/tools/testing/selftests/bpf/xsk_prereqs.sh
+++ b/tools/testing/selftests/bpf/xsk_prereqs.sh
@@ -83,9 +83,11 @@ exec_xskxceiver()
fi
./${XSKOBJ} -i ${VETH0} -i ${VETH1} ${ARGS}
-
retval=$?
- test_status $retval "${TEST_NAME}"
- statusList+=($retval)
- nameList+=(${TEST_NAME})
+
+ if [[ $list -ne 1 ]]; then
+ test_status $retval "${TEST_NAME}"
+ statusList+=($retval)
+ nameList+=(${TEST_NAME})
+ fi
}
diff --git a/tools/testing/selftests/bpf/xsk_xdp_common.h b/tools/testing/selftests/bpf/xsk_xdp_common.h
new file mode 100644
index 000000000000..5a6f36f07383
--- /dev/null
+++ b/tools/testing/selftests/bpf/xsk_xdp_common.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef XSK_XDP_COMMON_H_
+#define XSK_XDP_COMMON_H_
+
+#define MAX_SOCKETS 2
+
+struct xdp_info {
+ __u64 count;
+} __attribute__((aligned(32)));
+
+#endif /* XSK_XDP_COMMON_H_ */
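
struct xdp_info is the per-packet record that the metadata-copying XDP programs place in front of each frame, which is why its definition moves into a header shared by the BPF programs and xskxceiver. A hedged sketch of the producer side using the standard bpf_xdp_adjust_meta() pattern; the real program lives in progs/xsk_xdp_progs.c and redirects to the socket instead of passing the frame on:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include "xsk_xdp_common.h"

__u64 count;    /* updated from user space through the program's .bss map */

SEC("xdp")
int stamp_xdp_info(struct xdp_md *xdp)
{
        struct xdp_info *meta;
        void *data;

        if (bpf_xdp_adjust_meta(xdp, -(int)sizeof(struct xdp_info)))
                return XDP_DROP;

        data = (void *)(long)xdp->data;
        meta = (void *)(long)xdp->data_meta;
        if ((void *)(meta + 1) > data)          /* bounds check for the verifier */
                return XDP_DROP;

        meta->count = count++;
        return XDP_PASS;
}

char _license[] SEC("license") = "GPL";
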
diff --git a/tools/testing/selftests/bpf/xsk_xdp_metadata.h b/tools/testing/selftests/bpf/xsk_xdp_metadata.h
deleted file mode 100644
index 943133da378a..000000000000
--- a/tools/testing/selftests/bpf/xsk_xdp_metadata.h
+++ /dev/null
@@ -1,5 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-
-struct xdp_info {
- __u64 count;
-} __attribute__((aligned(32)));
diff --git a/tools/testing/selftests/bpf/xskxceiver.c b/tools/testing/selftests/bpf/xskxceiver.c
index 2827f2d7cf30..591ca9637b23 100644
--- a/tools/testing/selftests/bpf/xskxceiver.c
+++ b/tools/testing/selftests/bpf/xskxceiver.c
@@ -80,6 +80,7 @@
#include <linux/if_ether.h>
#include <linux/mman.h>
#include <linux/netdev.h>
+#include <linux/bitmap.h>
#include <arpa/inet.h>
#include <net/if.h>
#include <locale.h>
@@ -102,10 +103,12 @@
#include <bpf/bpf.h>
#include <linux/filter.h>
#include "../kselftest.h"
-#include "xsk_xdp_metadata.h"
+#include "xsk_xdp_common.h"
-static const char *MAC1 = "\x00\x0A\x56\x9E\xEE\x62";
-static const char *MAC2 = "\x00\x0A\x56\x9E\xEE\x61";
+static bool opt_verbose;
+static bool opt_print_tests;
+static enum test_mode opt_mode = TEST_MODE_ALL;
+static u32 opt_run_test = RUN_ALL_TESTS;
static void __exit_with_error(int error, const char *file, const char *func, int line)
{
@@ -154,10 +157,10 @@ static void write_payload(void *dest, u32 pkt_nb, u32 start, u32 size)
ptr[i] = htonl(pkt_nb << 16 | (i + start));
}
-static void gen_eth_hdr(struct ifobject *ifobject, struct ethhdr *eth_hdr)
+static void gen_eth_hdr(struct xsk_socket_info *xsk, struct ethhdr *eth_hdr)
{
- memcpy(eth_hdr->h_dest, ifobject->dst_mac, ETH_ALEN);
- memcpy(eth_hdr->h_source, ifobject->src_mac, ETH_ALEN);
+ memcpy(eth_hdr->h_dest, xsk->dst_mac, ETH_ALEN);
+ memcpy(eth_hdr->h_source, xsk->src_mac, ETH_ALEN);
eth_hdr->h_proto = htons(ETH_P_LOOPBACK);
}
@@ -255,7 +258,7 @@ static int __xsk_configure_socket(struct xsk_socket_info *xsk, struct xsk_umem_i
cfg.bind_flags = ifobject->bind_flags;
if (shared)
cfg.bind_flags |= XDP_SHARED_UMEM;
- if (ifobject->pkt_stream && ifobject->mtu > MAX_ETH_PKT_SIZE)
+ if (ifobject->mtu > MAX_ETH_PKT_SIZE)
cfg.bind_flags |= XDP_USE_SG;
txr = ifobject->tx_on ? &xsk->tx : NULL;
@@ -310,19 +313,28 @@ static struct option long_options[] = {
{"interface", required_argument, 0, 'i'},
{"busy-poll", no_argument, 0, 'b'},
{"verbose", no_argument, 0, 'v'},
+ {"mode", required_argument, 0, 'm'},
+ {"list", no_argument, 0, 'l'},
+ {"test", required_argument, 0, 't'},
+ {"help", no_argument, 0, 'h'},
{0, 0, 0, 0}
};
-static void usage(const char *prog)
+static void print_usage(char **argv)
{
const char *str =
- " Usage: %s [OPTIONS]\n"
+ " Usage: xskxceiver [OPTIONS]\n"
" Options:\n"
" -i, --interface Use interface\n"
" -v, --verbose Verbose output\n"
- " -b, --busy-poll Enable busy poll\n";
+ " -b, --busy-poll Enable busy poll\n"
+ " -m, --mode Run only mode skb, drv, or zc\n"
+ " -l, --list List all available tests\n"
+ " -t, --test Run a specific test. Enter number from -l option.\n"
+ " -h, --help Display this help and exit\n";
- ksft_print_msg(str, prog);
+ ksft_print_msg(str, basename(argv[0]));
+ ksft_exit_xfail();
}
static bool validate_interface(struct ifobject *ifobj)
@@ -342,7 +354,7 @@ static void parse_command_line(struct ifobject *ifobj_tx, struct ifobject *ifobj
opterr = 0;
for (;;) {
- c = getopt_long(argc, argv, "i:vb", long_options, &option_index);
+ c = getopt_long(argc, argv, "i:vbm:lt:", long_options, &option_index);
if (c == -1)
break;
@@ -371,9 +383,28 @@ static void parse_command_line(struct ifobject *ifobj_tx, struct ifobject *ifobj
ifobj_tx->busy_poll = true;
ifobj_rx->busy_poll = true;
break;
+ case 'm':
+ if (!strncmp("skb", optarg, strlen(optarg)))
+ opt_mode = TEST_MODE_SKB;
+ else if (!strncmp("drv", optarg, strlen(optarg)))
+ opt_mode = TEST_MODE_DRV;
+ else if (!strncmp("zc", optarg, strlen(optarg)))
+ opt_mode = TEST_MODE_ZC;
+ else
+ print_usage(argv);
+ break;
+ case 'l':
+ opt_print_tests = true;
+ break;
+ case 't':
+ errno = 0;
+ opt_run_test = strtol(optarg, NULL, 0);
+ if (errno)
+ print_usage(argv);
+ break;
+ case 'h':
default:
- usage(basename(argv[0]));
- ksft_exit_xfail();
+ print_usage(argv);
}
}
}
@@ -396,11 +427,9 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
if (i == 0) {
ifobj->rx_on = false;
ifobj->tx_on = true;
- ifobj->pkt_stream = test->tx_pkt_stream_default;
} else {
ifobj->rx_on = true;
ifobj->tx_on = false;
- ifobj->pkt_stream = test->rx_pkt_stream_default;
}
memset(ifobj->umem, 0, sizeof(*ifobj->umem));
@@ -410,6 +439,15 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
for (j = 0; j < MAX_SOCKETS; j++) {
memset(&ifobj->xsk_arr[j], 0, sizeof(ifobj->xsk_arr[j]));
ifobj->xsk_arr[j].rxqsize = XSK_RING_CONS__DEFAULT_NUM_DESCS;
+ if (i == 0)
+ ifobj->xsk_arr[j].pkt_stream = test->tx_pkt_stream_default;
+ else
+ ifobj->xsk_arr[j].pkt_stream = test->rx_pkt_stream_default;
+
+ memcpy(ifobj->xsk_arr[j].src_mac, g_mac, ETH_ALEN);
+ memcpy(ifobj->xsk_arr[j].dst_mac, g_mac, ETH_ALEN);
+ ifobj->xsk_arr[j].src_mac[5] += ((j * 2) + 0);
+ ifobj->xsk_arr[j].dst_mac[5] += ((j * 2) + 1);
}
}
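
Each socket now gets its own source and destination MAC, derived from a common base (g_mac, defined elsewhere in this file) with the last byte encoding the socket number. That is what lets an XDP program split incoming traffic between two sockets sharing one UMEM. A sketch of that idea; the in-tree program is progs/xsk_xdp_progs.c, and the index math below assumes a base MAC whose last byte is zero:

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>
#include "xsk_xdp_common.h"

struct {
        __uint(type, BPF_MAP_TYPE_XSKMAP);
        __uint(max_entries, MAX_SOCKETS);
        __uint(key_size, sizeof(int));
        __uint(value_size, sizeof(int));
} xsk SEC(".maps");

SEC("xdp")
int demux_by_dst_mac(struct xdp_md *xdp)
{
        void *data_end = (void *)(long)xdp->data_end;
        void *data = (void *)(long)xdp->data;
        struct ethhdr *eth = data;
        __u32 idx;

        if ((void *)(eth + 1) > data_end)
                return XDP_DROP;

        /* dst_mac[5] = base + 2 * socket_nb + 1 (see the loop above) */
        idx = eth->h_dest[5] / 2;
        if (idx >= MAX_SOCKETS)
                return XDP_DROP;

        return bpf_redirect_map(&xsk, idx, XDP_DROP);
}

char _license[] SEC("license") = "GPL";
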
@@ -427,7 +465,8 @@ static void __test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
}
static void test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
- struct ifobject *ifobj_rx, enum test_mode mode)
+ struct ifobject *ifobj_rx, enum test_mode mode,
+ const struct test_spec *test_to_run)
{
struct pkt_stream *tx_pkt_stream;
struct pkt_stream *rx_pkt_stream;
@@ -449,6 +488,8 @@ static void test_spec_init(struct test_spec *test, struct ifobject *ifobj_tx,
ifobj->bind_flags |= XDP_COPY;
}
+ strncpy(test->name, test_to_run->name, MAX_TEST_NAME_SIZE);
+ test->test_func = test_to_run->test_func;
test->mode = mode;
__test_spec_init(test, ifobj_tx, ifobj_rx);
}
@@ -458,11 +499,6 @@ static void test_spec_reset(struct test_spec *test)
__test_spec_init(test, test->ifobj_tx, test->ifobj_rx);
}
-static void test_spec_set_name(struct test_spec *test, const char *name)
-{
- strncpy(test->name, name, MAX_TEST_NAME_SIZE);
-}
-
static void test_spec_set_xdp_prog(struct test_spec *test, struct bpf_program *xdp_prog_rx,
struct bpf_program *xdp_prog_tx, struct bpf_map *xskmap_rx,
struct bpf_map *xskmap_tx)
@@ -495,8 +531,10 @@ static int test_spec_set_mtu(struct test_spec *test, int mtu)
static void pkt_stream_reset(struct pkt_stream *pkt_stream)
{
- if (pkt_stream)
+ if (pkt_stream) {
pkt_stream->current_pkt_nb = 0;
+ pkt_stream->nb_rx_pkts = 0;
+ }
}
static struct pkt *pkt_stream_get_next_tx_pkt(struct pkt_stream *pkt_stream)
@@ -526,17 +564,17 @@ static void pkt_stream_delete(struct pkt_stream *pkt_stream)
static void pkt_stream_restore_default(struct test_spec *test)
{
- struct pkt_stream *tx_pkt_stream = test->ifobj_tx->pkt_stream;
- struct pkt_stream *rx_pkt_stream = test->ifobj_rx->pkt_stream;
+ struct pkt_stream *tx_pkt_stream = test->ifobj_tx->xsk->pkt_stream;
+ struct pkt_stream *rx_pkt_stream = test->ifobj_rx->xsk->pkt_stream;
if (tx_pkt_stream != test->tx_pkt_stream_default) {
- pkt_stream_delete(test->ifobj_tx->pkt_stream);
- test->ifobj_tx->pkt_stream = test->tx_pkt_stream_default;
+ pkt_stream_delete(test->ifobj_tx->xsk->pkt_stream);
+ test->ifobj_tx->xsk->pkt_stream = test->tx_pkt_stream_default;
}
if (rx_pkt_stream != test->rx_pkt_stream_default) {
- pkt_stream_delete(test->ifobj_rx->pkt_stream);
- test->ifobj_rx->pkt_stream = test->rx_pkt_stream_default;
+ pkt_stream_delete(test->ifobj_rx->xsk->pkt_stream);
+ test->ifobj_rx->xsk->pkt_stream = test->rx_pkt_stream_default;
}
}
@@ -596,14 +634,16 @@ static u32 pkt_nb_frags(u32 frame_size, struct pkt_stream *pkt_stream, struct pk
return nb_frags;
}
-static void pkt_set(struct xsk_umem_info *umem, struct pkt *pkt, int offset, u32 len)
+static void pkt_set(struct pkt_stream *pkt_stream, struct pkt *pkt, int offset, u32 len)
{
pkt->offset = offset;
pkt->len = len;
- if (len > MAX_ETH_JUMBO_SIZE)
+ if (len > MAX_ETH_JUMBO_SIZE) {
pkt->valid = false;
- else
+ } else {
pkt->valid = true;
+ pkt_stream->nb_valid_entries++;
+ }
}
static u32 pkt_get_buffer_len(struct xsk_umem_info *umem, u32 len)
@@ -611,7 +651,7 @@ static u32 pkt_get_buffer_len(struct xsk_umem_info *umem, u32 len)
return ceil_u32(len, umem->frame_size) * umem->frame_size;
}
-static struct pkt_stream *pkt_stream_generate(struct xsk_umem_info *umem, u32 nb_pkts, u32 pkt_len)
+static struct pkt_stream *__pkt_stream_generate(u32 nb_pkts, u32 pkt_len, u32 nb_start, u32 nb_off)
{
struct pkt_stream *pkt_stream;
u32 i;
@@ -625,41 +665,45 @@ static struct pkt_stream *pkt_stream_generate(struct xsk_umem_info *umem, u32 nb
for (i = 0; i < nb_pkts; i++) {
struct pkt *pkt = &pkt_stream->pkts[i];
- pkt_set(umem, pkt, 0, pkt_len);
- pkt->pkt_nb = i;
+ pkt_set(pkt_stream, pkt, 0, pkt_len);
+ pkt->pkt_nb = nb_start + i * nb_off;
}
return pkt_stream;
}
-static struct pkt_stream *pkt_stream_clone(struct xsk_umem_info *umem,
- struct pkt_stream *pkt_stream)
+static struct pkt_stream *pkt_stream_generate(u32 nb_pkts, u32 pkt_len)
{
- return pkt_stream_generate(umem, pkt_stream->nb_pkts, pkt_stream->pkts[0].len);
+ return __pkt_stream_generate(nb_pkts, pkt_len, 0, 1);
+}
+
+static struct pkt_stream *pkt_stream_clone(struct pkt_stream *pkt_stream)
+{
+ return pkt_stream_generate(pkt_stream->nb_pkts, pkt_stream->pkts[0].len);
}
static void pkt_stream_replace(struct test_spec *test, u32 nb_pkts, u32 pkt_len)
{
struct pkt_stream *pkt_stream;
- pkt_stream = pkt_stream_generate(test->ifobj_tx->umem, nb_pkts, pkt_len);
- test->ifobj_tx->pkt_stream = pkt_stream;
- pkt_stream = pkt_stream_generate(test->ifobj_rx->umem, nb_pkts, pkt_len);
- test->ifobj_rx->pkt_stream = pkt_stream;
+ pkt_stream = pkt_stream_generate(nb_pkts, pkt_len);
+ test->ifobj_tx->xsk->pkt_stream = pkt_stream;
+ pkt_stream = pkt_stream_generate(nb_pkts, pkt_len);
+ test->ifobj_rx->xsk->pkt_stream = pkt_stream;
}
static void __pkt_stream_replace_half(struct ifobject *ifobj, u32 pkt_len,
int offset)
{
- struct xsk_umem_info *umem = ifobj->umem;
struct pkt_stream *pkt_stream;
u32 i;
- pkt_stream = pkt_stream_clone(umem, ifobj->pkt_stream);
- for (i = 1; i < ifobj->pkt_stream->nb_pkts; i += 2)
- pkt_set(umem, &pkt_stream->pkts[i], offset, pkt_len);
+ pkt_stream = pkt_stream_clone(ifobj->xsk->pkt_stream);
+ for (i = 1; i < ifobj->xsk->pkt_stream->nb_pkts; i += 2)
+ pkt_set(pkt_stream, &pkt_stream->pkts[i], offset, pkt_len);
- ifobj->pkt_stream = pkt_stream;
+ ifobj->xsk->pkt_stream = pkt_stream;
+ pkt_stream->nb_valid_entries /= 2;
}
static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int offset)
@@ -670,15 +714,34 @@ static void pkt_stream_replace_half(struct test_spec *test, u32 pkt_len, int off
static void pkt_stream_receive_half(struct test_spec *test)
{
- struct xsk_umem_info *umem = test->ifobj_rx->umem;
- struct pkt_stream *pkt_stream = test->ifobj_tx->pkt_stream;
+ struct pkt_stream *pkt_stream = test->ifobj_tx->xsk->pkt_stream;
u32 i;
- test->ifobj_rx->pkt_stream = pkt_stream_generate(umem, pkt_stream->nb_pkts,
- pkt_stream->pkts[0].len);
- pkt_stream = test->ifobj_rx->pkt_stream;
+ test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(pkt_stream->nb_pkts,
+ pkt_stream->pkts[0].len);
+ pkt_stream = test->ifobj_rx->xsk->pkt_stream;
for (i = 1; i < pkt_stream->nb_pkts; i += 2)
pkt_stream->pkts[i].valid = false;
+
+ pkt_stream->nb_valid_entries /= 2;
+}
+
+static void pkt_stream_even_odd_sequence(struct test_spec *test)
+{
+ struct pkt_stream *pkt_stream;
+ u32 i;
+
+ for (i = 0; i < test->nb_sockets; i++) {
+ pkt_stream = test->ifobj_tx->xsk_arr[i].pkt_stream;
+ pkt_stream = __pkt_stream_generate(pkt_stream->nb_pkts / 2,
+ pkt_stream->pkts[0].len, i, 2);
+ test->ifobj_tx->xsk_arr[i].pkt_stream = pkt_stream;
+
+ pkt_stream = test->ifobj_rx->xsk_arr[i].pkt_stream;
+ pkt_stream = __pkt_stream_generate(pkt_stream->nb_pkts / 2,
+ pkt_stream->pkts[0].len, i, 2);
+ test->ifobj_rx->xsk_arr[i].pkt_stream = pkt_stream;
+ }
}
static u64 pkt_get_addr(struct pkt *pkt, struct xsk_umem_info *umem)
@@ -693,16 +756,16 @@ static void pkt_stream_cancel(struct pkt_stream *pkt_stream)
pkt_stream->current_pkt_nb--;
}
-static void pkt_generate(struct ifobject *ifobject, u64 addr, u32 len, u32 pkt_nb,
- u32 bytes_written)
+static void pkt_generate(struct xsk_socket_info *xsk, struct xsk_umem_info *umem, u64 addr, u32 len,
+ u32 pkt_nb, u32 bytes_written)
{
- void *data = xsk_umem__get_data(ifobject->umem->buffer, addr);
+ void *data = xsk_umem__get_data(umem->buffer, addr);
if (len < MIN_PKT_SIZE)
return;
if (!bytes_written) {
- gen_eth_hdr(ifobject, data);
+ gen_eth_hdr(xsk, data);
len -= PKT_HDR_SIZE;
data += PKT_HDR_SIZE;
@@ -747,8 +810,15 @@ static struct pkt_stream *__pkt_stream_generate_custom(struct ifobject *ifobj, s
len = 0;
}
+ print_verbose("offset: %d len: %u valid: %u options: %u pkt_nb: %u\n",
+ pkt->offset, pkt->len, pkt->valid, pkt->options, pkt->pkt_nb);
+
if (pkt->valid && pkt->len > pkt_stream->max_pkt_len)
pkt_stream->max_pkt_len = pkt->len;
+
+ if (pkt->valid)
+ pkt_stream->nb_valid_entries++;
+
pkt_nb++;
}
@@ -762,10 +832,10 @@ static void pkt_stream_generate_custom(struct test_spec *test, struct pkt *pkts,
struct pkt_stream *pkt_stream;
pkt_stream = __pkt_stream_generate_custom(test->ifobj_tx, pkts, nb_pkts, true);
- test->ifobj_tx->pkt_stream = pkt_stream;
+ test->ifobj_tx->xsk->pkt_stream = pkt_stream;
pkt_stream = __pkt_stream_generate_custom(test->ifobj_rx, pkts, nb_pkts, false);
- test->ifobj_rx->pkt_stream = pkt_stream;
+ test->ifobj_rx->xsk->pkt_stream = pkt_stream;
}
static void pkt_print_data(u32 *data, u32 cnt)
@@ -777,7 +847,7 @@ static void pkt_print_data(u32 *data, u32 cnt)
seqnum = ntohl(*data) & 0xffff;
pkt_nb = ntohl(*data) >> 16;
- fprintf(stdout, "%u:%u ", pkt_nb, seqnum);
+ ksft_print_msg("%u:%u ", pkt_nb, seqnum);
data++;
}
}
@@ -789,13 +859,13 @@ static void pkt_dump(void *pkt, u32 len, bool eth_header)
if (eth_header) {
/*extract L2 frame */
- fprintf(stdout, "DEBUG>> L2: dst mac: ");
+ ksft_print_msg("DEBUG>> L2: dst mac: ");
for (i = 0; i < ETH_ALEN; i++)
- fprintf(stdout, "%02X", ethhdr->h_dest[i]);
+ ksft_print_msg("%02X", ethhdr->h_dest[i]);
- fprintf(stdout, "\nDEBUG>> L2: src mac: ");
+ ksft_print_msg("\nDEBUG>> L2: src mac: ");
for (i = 0; i < ETH_ALEN; i++)
- fprintf(stdout, "%02X", ethhdr->h_source[i]);
+ ksft_print_msg("%02X", ethhdr->h_source[i]);
data = pkt + PKT_HDR_SIZE;
} else {
@@ -803,15 +873,15 @@ static void pkt_dump(void *pkt, u32 len, bool eth_header)
}
/*extract L5 frame */
- fprintf(stdout, "\nDEBUG>> L5: seqnum: ");
+ ksft_print_msg("\nDEBUG>> L5: seqnum: ");
pkt_print_data(data, PKT_DUMP_NB_TO_PRINT);
- fprintf(stdout, "....");
+ ksft_print_msg("....");
if (len > PKT_DUMP_NB_TO_PRINT * sizeof(u32)) {
- fprintf(stdout, "\n.... ");
+ ksft_print_msg("\n.... ");
pkt_print_data(data + len / sizeof(u32) - PKT_DUMP_NB_TO_PRINT,
PKT_DUMP_NB_TO_PRINT);
}
- fprintf(stdout, "\n---------------------------------------\n");
+ ksft_print_msg("\n---------------------------------------\n");
}
static bool is_offset_correct(struct xsk_umem_info *umem, struct pkt *pkt, u64 addr)
@@ -916,36 +986,42 @@ static bool is_pkt_valid(struct pkt *pkt, void *buffer, u64 addr, u32 len)
return true;
}
-static void kick_tx(struct xsk_socket_info *xsk)
+static int kick_tx(struct xsk_socket_info *xsk)
{
int ret;
ret = sendto(xsk_socket__fd(xsk->xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);
if (ret >= 0)
- return;
+ return TEST_PASS;
if (errno == ENOBUFS || errno == EAGAIN || errno == EBUSY || errno == ENETDOWN) {
usleep(100);
- return;
+ return TEST_PASS;
}
- exit_with_error(errno);
+ return TEST_FAILURE;
}
-static void kick_rx(struct xsk_socket_info *xsk)
+static int kick_rx(struct xsk_socket_info *xsk)
{
int ret;
ret = recvfrom(xsk_socket__fd(xsk->xsk), NULL, 0, MSG_DONTWAIT, NULL, NULL);
if (ret < 0)
- exit_with_error(errno);
+ return TEST_FAILURE;
+
+ return TEST_PASS;
}
static int complete_pkts(struct xsk_socket_info *xsk, int batch_size)
{
unsigned int rcvd;
u32 idx;
+ int ret;
- if (xsk_ring_prod__needs_wakeup(&xsk->tx))
- kick_tx(xsk);
+ if (xsk_ring_prod__needs_wakeup(&xsk->tx)) {
+ ret = kick_tx(xsk);
+ if (ret)
+ return TEST_FAILURE;
+ }
rcvd = xsk_ring_cons__peek(&xsk->umem->cq, batch_size, &idx);
if (rcvd) {
@@ -964,153 +1040,207 @@ static int complete_pkts(struct xsk_socket_info *xsk, int batch_size)
return TEST_PASS;
}
-static int receive_pkts(struct test_spec *test, struct pollfd *fds)
+static int __receive_pkts(struct test_spec *test, struct xsk_socket_info *xsk)
{
- struct timeval tv_end, tv_now, tv_timeout = {THREAD_TMOUT, 0};
- struct pkt_stream *pkt_stream = test->ifobj_rx->pkt_stream;
- struct xsk_socket_info *xsk = test->ifobj_rx->xsk;
+ u32 frags_processed = 0, nb_frags = 0, pkt_len = 0;
u32 idx_rx = 0, idx_fq = 0, rcvd, pkts_sent = 0;
+ struct pkt_stream *pkt_stream = xsk->pkt_stream;
struct ifobject *ifobj = test->ifobj_rx;
struct xsk_umem_info *umem = xsk->umem;
+ struct pollfd fds = { };
struct pkt *pkt;
+ u64 first_addr = 0;
int ret;
- ret = gettimeofday(&tv_now, NULL);
- if (ret)
- exit_with_error(errno);
- timeradd(&tv_now, &tv_timeout, &tv_end);
+ fds.fd = xsk_socket__fd(xsk->xsk);
+ fds.events = POLLIN;
- pkt = pkt_stream_get_next_rx_pkt(pkt_stream, &pkts_sent);
- while (pkt) {
- u32 frags_processed = 0, nb_frags = 0, pkt_len = 0;
- u64 first_addr;
+ ret = kick_rx(xsk);
+ if (ret)
+ return TEST_FAILURE;
- ret = gettimeofday(&tv_now, NULL);
- if (ret)
- exit_with_error(errno);
- if (timercmp(&tv_now, &tv_end, >)) {
- ksft_print_msg("ERROR: [%s] Receive loop timed out\n", __func__);
+ if (ifobj->use_poll) {
+ ret = poll(&fds, 1, POLL_TMOUT);
+ if (ret < 0)
return TEST_FAILURE;
- }
-
- kick_rx(xsk);
- if (ifobj->use_poll) {
- ret = poll(fds, 1, POLL_TMOUT);
- if (ret < 0)
- exit_with_error(errno);
- if (!ret) {
- if (!is_umem_valid(test->ifobj_tx))
- return TEST_PASS;
-
- ksft_print_msg("ERROR: [%s] Poll timed out\n", __func__);
- return TEST_FAILURE;
- }
+ if (!ret) {
+ if (!is_umem_valid(test->ifobj_tx))
+ return TEST_PASS;
- if (!(fds->revents & POLLIN))
- continue;
+ ksft_print_msg("ERROR: [%s] Poll timed out\n", __func__);
+ return TEST_CONTINUE;
}
- rcvd = xsk_ring_cons__peek(&xsk->rx, BATCH_SIZE, &idx_rx);
- if (!rcvd)
- continue;
+ if (!(fds.revents & POLLIN))
+ return TEST_CONTINUE;
+ }
- if (ifobj->use_fill_ring) {
- ret = xsk_ring_prod__reserve(&umem->fq, rcvd, &idx_fq);
- while (ret != rcvd) {
+ rcvd = xsk_ring_cons__peek(&xsk->rx, BATCH_SIZE, &idx_rx);
+ if (!rcvd)
+ return TEST_CONTINUE;
+
+ if (ifobj->use_fill_ring) {
+ ret = xsk_ring_prod__reserve(&umem->fq, rcvd, &idx_fq);
+ while (ret != rcvd) {
+ if (xsk_ring_prod__needs_wakeup(&umem->fq)) {
+ ret = poll(&fds, 1, POLL_TMOUT);
if (ret < 0)
- exit_with_error(-ret);
- if (xsk_ring_prod__needs_wakeup(&umem->fq)) {
- ret = poll(fds, 1, POLL_TMOUT);
- if (ret < 0)
- exit_with_error(errno);
- }
- ret = xsk_ring_prod__reserve(&umem->fq, rcvd, &idx_fq);
+ return TEST_FAILURE;
}
+ ret = xsk_ring_prod__reserve(&umem->fq, rcvd, &idx_fq);
}
+ }
- while (frags_processed < rcvd) {
- const struct xdp_desc *desc = xsk_ring_cons__rx_desc(&xsk->rx, idx_rx++);
- u64 addr = desc->addr, orig;
+ while (frags_processed < rcvd) {
+ const struct xdp_desc *desc = xsk_ring_cons__rx_desc(&xsk->rx, idx_rx++);
+ u64 addr = desc->addr, orig;
- orig = xsk_umem__extract_addr(addr);
- addr = xsk_umem__add_offset_to_addr(addr);
+ orig = xsk_umem__extract_addr(addr);
+ addr = xsk_umem__add_offset_to_addr(addr);
+ if (!nb_frags) {
+ pkt = pkt_stream_get_next_rx_pkt(pkt_stream, &pkts_sent);
if (!pkt) {
ksft_print_msg("[%s] received too many packets addr: %lx len %u\n",
__func__, addr, desc->len);
return TEST_FAILURE;
}
+ }
- if (!is_frag_valid(umem, addr, desc->len, pkt->pkt_nb, pkt_len) ||
- !is_offset_correct(umem, pkt, addr) ||
- (ifobj->use_metadata && !is_metadata_correct(pkt, umem->buffer, addr)))
- return TEST_FAILURE;
+ print_verbose("Rx: addr: %lx len: %u options: %u pkt_nb: %u valid: %u\n",
+ addr, desc->len, desc->options, pkt->pkt_nb, pkt->valid);
- if (!nb_frags++)
- first_addr = addr;
- frags_processed++;
- pkt_len += desc->len;
- if (ifobj->use_fill_ring)
- *xsk_ring_prod__fill_addr(&umem->fq, idx_fq++) = orig;
+ if (!is_frag_valid(umem, addr, desc->len, pkt->pkt_nb, pkt_len) ||
+ !is_offset_correct(umem, pkt, addr) || (ifobj->use_metadata &&
+ !is_metadata_correct(pkt, umem->buffer, addr)))
+ return TEST_FAILURE;
- if (pkt_continues(desc->options))
- continue;
+ if (!nb_frags++)
+ first_addr = addr;
+ frags_processed++;
+ pkt_len += desc->len;
+ if (ifobj->use_fill_ring)
+ *xsk_ring_prod__fill_addr(&umem->fq, idx_fq++) = orig;
- /* The complete packet has been received */
- if (!is_pkt_valid(pkt, umem->buffer, first_addr, pkt_len) ||
- !is_offset_correct(umem, pkt, addr))
- return TEST_FAILURE;
+ if (pkt_continues(desc->options))
+ continue;
- pkt = pkt_stream_get_next_rx_pkt(pkt_stream, &pkts_sent);
- nb_frags = 0;
- pkt_len = 0;
- }
+ /* The complete packet has been received */
+ if (!is_pkt_valid(pkt, umem->buffer, first_addr, pkt_len) ||
+ !is_offset_correct(umem, pkt, addr))
+ return TEST_FAILURE;
- if (nb_frags) {
- /* In the middle of a packet. Start over from beginning of packet. */
- idx_rx -= nb_frags;
- xsk_ring_cons__cancel(&xsk->rx, nb_frags);
- if (ifobj->use_fill_ring) {
- idx_fq -= nb_frags;
- xsk_ring_prod__cancel(&umem->fq, nb_frags);
- }
- frags_processed -= nb_frags;
+ pkt_stream->nb_rx_pkts++;
+ nb_frags = 0;
+ pkt_len = 0;
+ }
+
+ if (nb_frags) {
+ /* In the middle of a packet. Start over from beginning of packet. */
+ idx_rx -= nb_frags;
+ xsk_ring_cons__cancel(&xsk->rx, nb_frags);
+ if (ifobj->use_fill_ring) {
+ idx_fq -= nb_frags;
+ xsk_ring_prod__cancel(&umem->fq, nb_frags);
}
+ frags_processed -= nb_frags;
+ }
- if (ifobj->use_fill_ring)
- xsk_ring_prod__submit(&umem->fq, frags_processed);
- if (ifobj->release_rx)
- xsk_ring_cons__release(&xsk->rx, frags_processed);
+ if (ifobj->use_fill_ring)
+ xsk_ring_prod__submit(&umem->fq, frags_processed);
+ if (ifobj->release_rx)
+ xsk_ring_cons__release(&xsk->rx, frags_processed);
+
+ pthread_mutex_lock(&pacing_mutex);
+ pkts_in_flight -= pkts_sent;
+ pthread_mutex_unlock(&pacing_mutex);
+ pkts_sent = 0;
+
+ return TEST_CONTINUE;
+}
+
+bool all_packets_received(struct test_spec *test, struct xsk_socket_info *xsk, u32 sock_num,
+ unsigned long *bitmap)
+{
+ struct pkt_stream *pkt_stream = xsk->pkt_stream;
+
+ if (!pkt_stream) {
+ __set_bit(sock_num, bitmap);
+ return false;
+ }
+
+ if (pkt_stream->nb_rx_pkts == pkt_stream->nb_valid_entries) {
+ __set_bit(sock_num, bitmap);
+ if (bitmap_full(bitmap, test->nb_sockets))
+ return true;
+ }
+
+ return false;
+}
+
+static int receive_pkts(struct test_spec *test)
+{
+ struct timeval tv_end, tv_now, tv_timeout = {THREAD_TMOUT, 0};
+ DECLARE_BITMAP(bitmap, test->nb_sockets);
+ struct xsk_socket_info *xsk;
+ u32 sock_num = 0;
+ int res, ret;
- pthread_mutex_lock(&pacing_mutex);
- pkts_in_flight -= pkts_sent;
- pthread_mutex_unlock(&pacing_mutex);
- pkts_sent = 0;
+ ret = gettimeofday(&tv_now, NULL);
+ if (ret)
+ exit_with_error(errno);
+
+ timeradd(&tv_now, &tv_timeout, &tv_end);
+
+ while (1) {
+ xsk = &test->ifobj_rx->xsk_arr[sock_num];
+
+ if ((all_packets_received(test, xsk, sock_num, bitmap)))
+ break;
+
+ res = __receive_pkts(test, xsk);
+ if (!(res == TEST_PASS || res == TEST_CONTINUE))
+ return res;
+
+ ret = gettimeofday(&tv_now, NULL);
+ if (ret)
+ exit_with_error(errno);
+
+ if (timercmp(&tv_now, &tv_end, >)) {
+ ksft_print_msg("ERROR: [%s] Receive loop timed out\n", __func__);
+ return TEST_FAILURE;
+ }
+ sock_num = (sock_num + 1) % test->nb_sockets;
}
return TEST_PASS;
}
-static int __send_pkts(struct ifobject *ifobject, struct pollfd *fds, bool timeout)
+static int __send_pkts(struct ifobject *ifobject, struct xsk_socket_info *xsk, bool timeout)
{
u32 i, idx = 0, valid_pkts = 0, valid_frags = 0, buffer_len;
- struct pkt_stream *pkt_stream = ifobject->pkt_stream;
- struct xsk_socket_info *xsk = ifobject->xsk;
+ struct pkt_stream *pkt_stream = xsk->pkt_stream;
struct xsk_umem_info *umem = ifobject->umem;
bool use_poll = ifobject->use_poll;
+ struct pollfd fds = { };
int ret;
buffer_len = pkt_get_buffer_len(umem, pkt_stream->max_pkt_len);
/* pkts_in_flight might be negative if many invalid packets are sent */
if (pkts_in_flight >= (int)((umem_size(umem) - BATCH_SIZE * buffer_len) / buffer_len)) {
- kick_tx(xsk);
+ ret = kick_tx(xsk);
+ if (ret)
+ return TEST_FAILURE;
return TEST_CONTINUE;
}
+ fds.fd = xsk_socket__fd(xsk->xsk);
+ fds.events = POLLOUT;
+
while (xsk_ring_prod__reserve(&xsk->tx, BATCH_SIZE, &idx) < BATCH_SIZE) {
if (use_poll) {
- ret = poll(fds, 1, POLL_TMOUT);
+ ret = poll(&fds, 1, POLL_TMOUT);
if (timeout) {
if (ret < 0) {
ksft_print_msg("ERROR: [%s] Poll error %d\n",
@@ -1161,10 +1291,13 @@ static int __send_pkts(struct ifobject *ifobject, struct pollfd *fds, bool timeo
tx_desc->options = 0;
}
if (pkt->valid)
- pkt_generate(ifobject, tx_desc->addr, tx_desc->len, pkt->pkt_nb,
+ pkt_generate(xsk, umem, tx_desc->addr, tx_desc->len, pkt->pkt_nb,
bytes_written);
bytes_written += tx_desc->len;
+ print_verbose("Tx addr: %llx len: %u options: %u pkt_nb: %u\n",
+ tx_desc->addr, tx_desc->len, tx_desc->options, pkt->pkt_nb);
+
if (nb_frags_left) {
i++;
if (pkt_stream->verbatim)
@@ -1186,7 +1319,7 @@ static int __send_pkts(struct ifobject *ifobject, struct pollfd *fds, bool timeo
xsk->outstanding_tx += valid_frags;
if (use_poll) {
- ret = poll(fds, 1, POLL_TMOUT);
+ ret = poll(&fds, 1, POLL_TMOUT);
if (ret <= 0) {
if (ret == 0 && timeout)
return TEST_PASS;
@@ -1207,33 +1340,67 @@ static int __send_pkts(struct ifobject *ifobject, struct pollfd *fds, bool timeo
return TEST_CONTINUE;
}
-static void wait_for_tx_completion(struct xsk_socket_info *xsk)
+static int wait_for_tx_completion(struct xsk_socket_info *xsk)
{
- while (xsk->outstanding_tx)
+ struct timeval tv_end, tv_now, tv_timeout = {THREAD_TMOUT, 0};
+ int ret;
+
+ ret = gettimeofday(&tv_now, NULL);
+ if (ret)
+ exit_with_error(errno);
+ timeradd(&tv_now, &tv_timeout, &tv_end);
+
+ while (xsk->outstanding_tx) {
+ ret = gettimeofday(&tv_now, NULL);
+ if (ret)
+ exit_with_error(errno);
+ if (timercmp(&tv_now, &tv_end, >)) {
+ ksft_print_msg("ERROR: [%s] Transmission loop timed out\n", __func__);
+ return TEST_FAILURE;
+ }
+
complete_pkts(xsk, BATCH_SIZE);
+ }
+
+ return TEST_PASS;
+}
+
+bool all_packets_sent(struct test_spec *test, unsigned long *bitmap)
+{
+ return bitmap_full(bitmap, test->nb_sockets);
}
static int send_pkts(struct test_spec *test, struct ifobject *ifobject)
{
- struct pkt_stream *pkt_stream = ifobject->pkt_stream;
bool timeout = !is_umem_valid(test->ifobj_rx);
- struct pollfd fds = { };
- u32 ret;
+ DECLARE_BITMAP(bitmap, test->nb_sockets);
+ u32 i, ret;
- fds.fd = xsk_socket__fd(ifobject->xsk->xsk);
- fds.events = POLLOUT;
+ while (!(all_packets_sent(test, bitmap))) {
+ for (i = 0; i < test->nb_sockets; i++) {
+ struct pkt_stream *pkt_stream;
- while (pkt_stream->current_pkt_nb < pkt_stream->nb_pkts) {
- ret = __send_pkts(ifobject, &fds, timeout);
- if (ret == TEST_CONTINUE && !test->fail)
- continue;
- if ((ret || test->fail) && !timeout)
- return TEST_FAILURE;
- if (ret == TEST_PASS && timeout)
- return ret;
+ pkt_stream = ifobject->xsk_arr[i].pkt_stream;
+ if (!pkt_stream || pkt_stream->current_pkt_nb >= pkt_stream->nb_pkts) {
+ __set_bit(i, bitmap);
+ continue;
+ }
+ ret = __send_pkts(ifobject, &ifobject->xsk_arr[i], timeout);
+ if (ret == TEST_CONTINUE && !test->fail)
+ continue;
+
+ if ((ret || test->fail) && !timeout)
+ return TEST_FAILURE;
+
+ if (ret == TEST_PASS && timeout)
+ return ret;
+
+ ret = wait_for_tx_completion(&ifobject->xsk_arr[i]);
+ if (ret)
+ return TEST_FAILURE;
+ }
}
- wait_for_tx_completion(ifobject->xsk);
return TEST_PASS;
}
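
send_pkts() and receive_pkts() now walk every socket of the ifobject and track completion per socket in a small bitmap: a socket with no packet stream, or one whose stream is exhausted, is simply marked done, and the loop ends once the bitmap is full. A minimal sketch of that bookkeeping pattern, assuming the tools-tree linux/bitmap.h that the file now includes:

#include <linux/bitmap.h>
#include <stdio.h>

#define NB_SOCKETS 2

int main(void)
{
        DECLARE_BITMAP(done, NB_SOCKETS) = { 0 };
        int i;

        for (i = 0; i < NB_SOCKETS; i++) {
                /* ... pump traffic for socket i until nothing is left ... */
                __set_bit(i, done);
                printf("socket %d done, all sockets done: %d\n", i,
                       bitmap_full(done, NB_SOCKETS));
        }
        return 0;
}
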
@@ -1266,7 +1433,9 @@ static int validate_rx_dropped(struct ifobject *ifobject)
struct xdp_statistics stats;
int err;
- kick_rx(ifobject->xsk);
+ err = kick_rx(ifobject->xsk);
+ if (err)
+ return TEST_FAILURE;
err = get_xsk_stats(xsk, &stats);
if (err)
@@ -1278,8 +1447,8 @@ static int validate_rx_dropped(struct ifobject *ifobject)
* packet being invalid). Since the last packet may or may not have
* been dropped already, both outcomes must be allowed.
*/
- if (stats.rx_dropped == ifobject->pkt_stream->nb_pkts / 2 ||
- stats.rx_dropped == ifobject->pkt_stream->nb_pkts / 2 - 1)
+ if (stats.rx_dropped == ifobject->xsk->pkt_stream->nb_pkts / 2 ||
+ stats.rx_dropped == ifobject->xsk->pkt_stream->nb_pkts / 2 - 1)
return TEST_PASS;
return TEST_FAILURE;
@@ -1292,7 +1461,9 @@ static int validate_rx_full(struct ifobject *ifobject)
int err;
usleep(1000);
- kick_rx(ifobject->xsk);
+ err = kick_rx(ifobject->xsk);
+ if (err)
+ return TEST_FAILURE;
err = get_xsk_stats(xsk, &stats);
if (err)
@@ -1311,7 +1482,9 @@ static int validate_fill_empty(struct ifobject *ifobject)
int err;
usleep(1000);
- kick_rx(ifobject->xsk);
+ err = kick_rx(ifobject->xsk);
+ if (err)
+ return TEST_FAILURE;
err = get_xsk_stats(xsk, &stats);
if (err)
@@ -1339,9 +1512,10 @@ static int validate_tx_invalid_descs(struct ifobject *ifobject)
return TEST_FAILURE;
}
- if (stats.tx_invalid_descs != ifobject->pkt_stream->nb_pkts / 2) {
+ if (stats.tx_invalid_descs != ifobject->xsk->pkt_stream->nb_pkts / 2) {
ksft_print_msg("[%s] tx_invalid_descs incorrect. Got [%u] expected [%u]\n",
- __func__, stats.tx_invalid_descs, ifobject->pkt_stream->nb_pkts);
+ __func__, stats.tx_invalid_descs,
+ ifobject->xsk->pkt_stream->nb_pkts);
return TEST_FAILURE;
}
@@ -1433,6 +1607,7 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
LIBBPF_OPTS(bpf_xdp_query_opts, opts);
void *bufs;
int ret;
+ u32 i;
if (ifobject->umem->unaligned_mode)
mmap_flags |= MAP_HUGETLB | MAP_HUGE_2MB;
@@ -1455,11 +1630,14 @@ static void thread_common_ops(struct test_spec *test, struct ifobject *ifobject)
if (!ifobject->rx_on)
return;
- xsk_populate_fill_ring(ifobject->umem, ifobject->pkt_stream, ifobject->use_fill_ring);
+ xsk_populate_fill_ring(ifobject->umem, ifobject->xsk->pkt_stream, ifobject->use_fill_ring);
- ret = xsk_update_xskmap(ifobject->xskmap, ifobject->xsk->xsk);
- if (ret)
- exit_with_error(errno);
+ for (i = 0; i < test->nb_sockets; i++) {
+ ifobject->xsk = &ifobject->xsk_arr[i];
+ ret = xsk_update_xskmap(ifobject->xskmap, ifobject->xsk->xsk, i);
+ if (ret)
+ exit_with_error(errno);
+ }
}
static void *worker_testapp_validate_tx(void *arg)
@@ -1475,8 +1653,6 @@ static void *worker_testapp_validate_tx(void *arg)
thread_common_ops_tx(test, ifobject);
}
- print_verbose("Sending %d packets on interface %s\n", ifobject->pkt_stream->nb_pkts,
- ifobject->ifname);
err = send_pkts(test, ifobject);
if (!err && ifobject->validation_func)
@@ -1491,26 +1667,23 @@ static void *worker_testapp_validate_rx(void *arg)
{
struct test_spec *test = (struct test_spec *)arg;
struct ifobject *ifobject = test->ifobj_rx;
- struct pollfd fds = { };
int err;
if (test->current_step == 1) {
thread_common_ops(test, ifobject);
} else {
xsk_clear_xskmap(ifobject->xskmap);
- err = xsk_update_xskmap(ifobject->xskmap, ifobject->xsk->xsk);
+ err = xsk_update_xskmap(ifobject->xskmap, ifobject->xsk->xsk, 0);
if (err) {
- printf("Error: Failed to update xskmap, error %s\n", strerror(-err));
+ ksft_print_msg("Error: Failed to update xskmap, error %s\n",
+ strerror(-err));
exit_with_error(-err);
}
}
- fds.fd = xsk_socket__fd(ifobject->xsk->xsk);
- fds.events = POLLIN;
-
pthread_barrier_wait(&barr);
- err = receive_pkts(test, &fds);
+ err = receive_pkts(test);
if (!err && ifobject->validation_func)
err = ifobject->validation_func(ifobject);
@@ -1564,7 +1737,7 @@ static void xsk_reattach_xdp(struct ifobject *ifobj, struct bpf_program *xdp_pro
xsk_detach_xdp_program(ifobj->ifindex, mode_to_xdp_flags(ifobj->mode));
err = xsk_attach_xdp_program(xdp_prog, ifobj->ifindex, mode_to_xdp_flags(mode));
if (err) {
- printf("Error attaching XDP program\n");
+ ksft_print_msg("Error attaching XDP program\n");
exit_with_error(-err);
}
@@ -1619,11 +1792,11 @@ static int __testapp_validate_traffic(struct test_spec *test, struct ifobject *i
if (ifobj2) {
if (pthread_barrier_init(&barr, NULL, 2))
exit_with_error(errno);
- pkt_stream_reset(ifobj2->pkt_stream);
+ pkt_stream_reset(ifobj2->xsk->pkt_stream);
}
test->current_step++;
- pkt_stream_reset(ifobj1->pkt_stream);
+ pkt_stream_reset(ifobj1->xsk->pkt_stream);
pkts_in_flight = 0;
signal(SIGUSR1, handler);
@@ -1647,9 +1820,15 @@ static int __testapp_validate_traffic(struct test_spec *test, struct ifobject *i
pthread_join(t0, NULL);
if (test->total_steps == test->current_step || test->fail) {
+ u32 i;
+
if (ifobj2)
- xsk_socket__delete(ifobj2->xsk->xsk);
- xsk_socket__delete(ifobj1->xsk->xsk);
+ for (i = 0; i < test->nb_sockets; i++)
+ xsk_socket__delete(ifobj2->xsk_arr[i].xsk);
+
+ for (i = 0; i < test->nb_sockets; i++)
+ xsk_socket__delete(ifobj1->xsk_arr[i].xsk);
+
testapp_clean_xsk_umem(ifobj1);
if (ifobj2 && !ifobj2->shared_umem)
testapp_clean_xsk_umem(ifobj2);
@@ -1682,7 +1861,6 @@ static int testapp_teardown(struct test_spec *test)
{
int i;
- test_spec_set_name(test, "TEARDOWN");
for (i = 0; i < MAX_TEARDOWN_ITER; i++) {
if (testapp_validate_traffic(test))
return TEST_FAILURE;
@@ -1704,18 +1882,17 @@ static void swap_directions(struct ifobject **ifobj1, struct ifobject **ifobj2)
*ifobj2 = tmp_ifobj;
}
-static int testapp_bidi(struct test_spec *test)
+static int testapp_bidirectional(struct test_spec *test)
{
int res;
- test_spec_set_name(test, "BIDIRECTIONAL");
test->ifobj_tx->rx_on = true;
test->ifobj_rx->tx_on = true;
test->total_steps = 2;
if (testapp_validate_traffic(test))
return TEST_FAILURE;
- print_verbose("Switching Tx/Rx vectors\n");
+ print_verbose("Switching Tx/Rx direction\n");
swap_directions(&test->ifobj_rx, &test->ifobj_tx);
res = __testapp_validate_traffic(test, test->ifobj_rx, test->ifobj_tx);
@@ -1723,42 +1900,44 @@ static int testapp_bidi(struct test_spec *test)
return res;
}
-static void swap_xsk_resources(struct ifobject *ifobj_tx, struct ifobject *ifobj_rx)
+static int swap_xsk_resources(struct test_spec *test)
{
int ret;
- xsk_socket__delete(ifobj_tx->xsk->xsk);
- xsk_socket__delete(ifobj_rx->xsk->xsk);
- ifobj_tx->xsk = &ifobj_tx->xsk_arr[1];
- ifobj_rx->xsk = &ifobj_rx->xsk_arr[1];
+ test->ifobj_tx->xsk_arr[0].pkt_stream = NULL;
+ test->ifobj_rx->xsk_arr[0].pkt_stream = NULL;
+ test->ifobj_tx->xsk_arr[1].pkt_stream = test->tx_pkt_stream_default;
+ test->ifobj_rx->xsk_arr[1].pkt_stream = test->rx_pkt_stream_default;
+ test->ifobj_tx->xsk = &test->ifobj_tx->xsk_arr[1];
+ test->ifobj_rx->xsk = &test->ifobj_rx->xsk_arr[1];
- ret = xsk_update_xskmap(ifobj_rx->xskmap, ifobj_rx->xsk->xsk);
+ ret = xsk_update_xskmap(test->ifobj_rx->xskmap, test->ifobj_rx->xsk->xsk, 0);
if (ret)
- exit_with_error(errno);
+ return TEST_FAILURE;
+
+ return TEST_PASS;
}
-static int testapp_bpf_res(struct test_spec *test)
+static int testapp_xdp_prog_cleanup(struct test_spec *test)
{
- test_spec_set_name(test, "BPF_RES");
test->total_steps = 2;
test->nb_sockets = 2;
if (testapp_validate_traffic(test))
return TEST_FAILURE;
- swap_xsk_resources(test->ifobj_tx, test->ifobj_rx);
+ if (swap_xsk_resources(test))
+ return TEST_FAILURE;
return testapp_validate_traffic(test);
}
static int testapp_headroom(struct test_spec *test)
{
- test_spec_set_name(test, "UMEM_HEADROOM");
test->ifobj_rx->umem->frame_headroom = UMEM_HEADROOM_TEST_SIZE;
return testapp_validate_traffic(test);
}
static int testapp_stats_rx_dropped(struct test_spec *test)
{
- test_spec_set_name(test, "STAT_RX_DROPPED");
if (test->mode == TEST_MODE_ZC) {
ksft_test_result_skip("Can not run RX_DROPPED test for ZC mode\n");
return TEST_SKIP;
@@ -1774,7 +1953,6 @@ static int testapp_stats_rx_dropped(struct test_spec *test)
static int testapp_stats_tx_invalid_descs(struct test_spec *test)
{
- test_spec_set_name(test, "STAT_TX_INVALID");
pkt_stream_replace_half(test, XSK_UMEM__INVALID_FRAME_SIZE, 0);
test->ifobj_tx->validation_func = validate_tx_invalid_descs;
return testapp_validate_traffic(test);
@@ -1782,10 +1960,8 @@ static int testapp_stats_tx_invalid_descs(struct test_spec *test)
static int testapp_stats_rx_full(struct test_spec *test)
{
- test_spec_set_name(test, "STAT_RX_FULL");
pkt_stream_replace(test, DEFAULT_UMEM_BUFFERS + DEFAULT_UMEM_BUFFERS / 2, MIN_PKT_SIZE);
- test->ifobj_rx->pkt_stream = pkt_stream_generate(test->ifobj_rx->umem,
- DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
+ test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
test->ifobj_rx->xsk->rxqsize = DEFAULT_UMEM_BUFFERS;
test->ifobj_rx->release_rx = false;
@@ -1795,19 +1971,16 @@ static int testapp_stats_rx_full(struct test_spec *test)
static int testapp_stats_fill_empty(struct test_spec *test)
{
- test_spec_set_name(test, "STAT_RX_FILL_EMPTY");
pkt_stream_replace(test, DEFAULT_UMEM_BUFFERS + DEFAULT_UMEM_BUFFERS / 2, MIN_PKT_SIZE);
- test->ifobj_rx->pkt_stream = pkt_stream_generate(test->ifobj_rx->umem,
- DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
+ test->ifobj_rx->xsk->pkt_stream = pkt_stream_generate(DEFAULT_UMEM_BUFFERS, MIN_PKT_SIZE);
test->ifobj_rx->use_fill_ring = false;
test->ifobj_rx->validation_func = validate_fill_empty;
return testapp_validate_traffic(test);
}
-static int testapp_unaligned(struct test_spec *test)
+static int testapp_send_receive_unaligned(struct test_spec *test)
{
- test_spec_set_name(test, "UNALIGNED_MODE");
test->ifobj_tx->umem->unaligned_mode = true;
test->ifobj_rx->umem->unaligned_mode = true;
/* Let half of the packets straddle a 4K buffer boundary */
@@ -1816,9 +1989,8 @@ static int testapp_unaligned(struct test_spec *test)
return testapp_validate_traffic(test);
}
-static int testapp_unaligned_mb(struct test_spec *test)
+static int testapp_send_receive_unaligned_mb(struct test_spec *test)
{
- test_spec_set_name(test, "UNALIGNED_MODE_9K");
test->mtu = MAX_ETH_JUMBO_SIZE;
test->ifobj_tx->umem->unaligned_mode = true;
test->ifobj_rx->umem->unaligned_mode = true;
@@ -1834,9 +2006,8 @@ static int testapp_single_pkt(struct test_spec *test)
return testapp_validate_traffic(test);
}
-static int testapp_multi_buffer(struct test_spec *test)
+static int testapp_send_receive_mb(struct test_spec *test)
{
- test_spec_set_name(test, "RUN_TO_COMPLETION_9K_PACKETS");
test->mtu = MAX_ETH_JUMBO_SIZE;
pkt_stream_replace(test, DEFAULT_PKT_CNT, MAX_ETH_JUMBO_SIZE);
@@ -1933,7 +2104,6 @@ static int testapp_xdp_drop(struct test_spec *test)
struct xsk_xdp_progs *skel_rx = test->ifobj_rx->xdp_progs;
struct xsk_xdp_progs *skel_tx = test->ifobj_tx->xdp_progs;
- test_spec_set_name(test, "XDP_DROP_HALF");
test_spec_set_xdp_prog(test, skel_rx->progs.xsk_xdp_drop, skel_tx->progs.xsk_xdp_drop,
skel_rx->maps.xsk, skel_tx->maps.xsk);
@@ -1941,7 +2111,7 @@ static int testapp_xdp_drop(struct test_spec *test)
return testapp_validate_traffic(test);
}
-static int testapp_xdp_metadata_count(struct test_spec *test)
+static int testapp_xdp_metadata_copy(struct test_spec *test)
{
struct xsk_xdp_progs *skel_rx = test->ifobj_rx->xdp_progs;
struct xsk_xdp_progs *skel_tx = test->ifobj_tx->xdp_progs;
@@ -1955,19 +2125,38 @@ static int testapp_xdp_metadata_count(struct test_spec *test)
test->ifobj_rx->use_metadata = true;
data_map = bpf_object__find_map_by_name(skel_rx->obj, "xsk_xdp_.bss");
- if (!data_map || !bpf_map__is_internal(data_map))
- exit_with_error(ENOMEM);
+ if (!data_map || !bpf_map__is_internal(data_map)) {
+ ksft_print_msg("Error: could not find bss section of XDP program\n");
+ return TEST_FAILURE;
+ }
- if (bpf_map_update_elem(bpf_map__fd(data_map), &key, &count, BPF_ANY))
- exit_with_error(errno);
+ if (bpf_map_update_elem(bpf_map__fd(data_map), &key, &count, BPF_ANY)) {
+ ksft_print_msg("Error: could not update count element\n");
+ return TEST_FAILURE;
+ }
return testapp_validate_traffic(test);
}
-static int testapp_poll_txq_tmout(struct test_spec *test)
+static int testapp_xdp_shared_umem(struct test_spec *test)
{
- test_spec_set_name(test, "POLL_TXQ_FULL");
+ struct xsk_xdp_progs *skel_rx = test->ifobj_rx->xdp_progs;
+ struct xsk_xdp_progs *skel_tx = test->ifobj_tx->xdp_progs;
+
+ test->total_steps = 1;
+ test->nb_sockets = 2;
+
+ test_spec_set_xdp_prog(test, skel_rx->progs.xsk_xdp_shared_umem,
+ skel_tx->progs.xsk_xdp_shared_umem,
+ skel_rx->maps.xsk, skel_tx->maps.xsk);
+
+ pkt_stream_even_odd_sequence(test);
+
+ return testapp_validate_traffic(test);
+}
+static int testapp_poll_txq_tmout(struct test_spec *test)
+{
test->ifobj_tx->use_poll = true;
/* create invalid frame by set umem frame_size and pkt length equal to 2048 */
test->ifobj_tx->umem->frame_size = 2048;
@@ -1977,7 +2166,6 @@ static int testapp_poll_txq_tmout(struct test_spec *test)
static int testapp_poll_rxq_tmout(struct test_spec *test)
{
- test_spec_set_name(test, "POLL_RXQ_EMPTY");
test->ifobj_rx->use_poll = true;
return testapp_validate_traffic_single_thread(test, test->ifobj_rx);
}
@@ -1987,7 +2175,6 @@ static int testapp_too_many_frags(struct test_spec *test)
struct pkt pkts[2 * XSK_DESC__MAX_SKB_FRAGS + 2] = {};
u32 max_frags, i;
- test_spec_set_name(test, "TOO_MANY_FRAGS");
if (test->mode == TEST_MODE_ZC)
max_frags = test->ifobj_tx->xdp_zc_max_segs;
else
@@ -2054,20 +2241,16 @@ static bool hugepages_present(void)
return true;
}
-static void init_iface(struct ifobject *ifobj, const char *dst_mac, const char *src_mac,
- thread_func_t func_ptr)
+static void init_iface(struct ifobject *ifobj, thread_func_t func_ptr)
{
LIBBPF_OPTS(bpf_xdp_query_opts, query_opts);
int err;
- memcpy(ifobj->dst_mac, dst_mac, ETH_ALEN);
- memcpy(ifobj->src_mac, src_mac, ETH_ALEN);
-
ifobj->func_ptr = func_ptr;
err = xsk_load_xdp_programs(ifobj);
if (err) {
- printf("Error loading XDP program\n");
+ ksft_print_msg("Error loading XDP program\n");
exit_with_error(err);
}
@@ -2091,138 +2274,98 @@ static void init_iface(struct ifobject *ifobj, const char *dst_mac, const char *
}
}
-static void run_pkt_test(struct test_spec *test, enum test_mode mode, enum test_type type)
-{
- int ret = TEST_SKIP;
-
- switch (type) {
- case TEST_TYPE_STATS_RX_DROPPED:
- ret = testapp_stats_rx_dropped(test);
- break;
- case TEST_TYPE_STATS_TX_INVALID_DESCS:
- ret = testapp_stats_tx_invalid_descs(test);
- break;
- case TEST_TYPE_STATS_RX_FULL:
- ret = testapp_stats_rx_full(test);
- break;
- case TEST_TYPE_STATS_FILL_EMPTY:
- ret = testapp_stats_fill_empty(test);
- break;
- case TEST_TYPE_TEARDOWN:
- ret = testapp_teardown(test);
- break;
- case TEST_TYPE_BIDI:
- ret = testapp_bidi(test);
- break;
- case TEST_TYPE_BPF_RES:
- ret = testapp_bpf_res(test);
- break;
- case TEST_TYPE_RUN_TO_COMPLETION:
- test_spec_set_name(test, "RUN_TO_COMPLETION");
- ret = testapp_validate_traffic(test);
- break;
- case TEST_TYPE_RUN_TO_COMPLETION_MB:
- ret = testapp_multi_buffer(test);
- break;
- case TEST_TYPE_RUN_TO_COMPLETION_SINGLE_PKT:
- test_spec_set_name(test, "RUN_TO_COMPLETION_SINGLE_PKT");
- ret = testapp_single_pkt(test);
- break;
- case TEST_TYPE_RUN_TO_COMPLETION_2K_FRAME:
- test_spec_set_name(test, "RUN_TO_COMPLETION_2K_FRAME_SIZE");
- test->ifobj_tx->umem->frame_size = 2048;
- test->ifobj_rx->umem->frame_size = 2048;
- pkt_stream_replace(test, DEFAULT_PKT_CNT, MIN_PKT_SIZE);
- ret = testapp_validate_traffic(test);
- break;
- case TEST_TYPE_RX_POLL:
- test->ifobj_rx->use_poll = true;
- test_spec_set_name(test, "POLL_RX");
- ret = testapp_validate_traffic(test);
- break;
- case TEST_TYPE_TX_POLL:
- test->ifobj_tx->use_poll = true;
- test_spec_set_name(test, "POLL_TX");
- ret = testapp_validate_traffic(test);
- break;
- case TEST_TYPE_POLL_TXQ_TMOUT:
- ret = testapp_poll_txq_tmout(test);
- break;
- case TEST_TYPE_POLL_RXQ_TMOUT:
- ret = testapp_poll_rxq_tmout(test);
- break;
- case TEST_TYPE_ALIGNED_INV_DESC:
- test_spec_set_name(test, "ALIGNED_INV_DESC");
- ret = testapp_invalid_desc(test);
- break;
- case TEST_TYPE_ALIGNED_INV_DESC_2K_FRAME:
- test_spec_set_name(test, "ALIGNED_INV_DESC_2K_FRAME_SIZE");
- test->ifobj_tx->umem->frame_size = 2048;
- test->ifobj_rx->umem->frame_size = 2048;
- ret = testapp_invalid_desc(test);
- break;
- case TEST_TYPE_UNALIGNED_INV_DESC:
- test_spec_set_name(test, "UNALIGNED_INV_DESC");
- test->ifobj_tx->umem->unaligned_mode = true;
- test->ifobj_rx->umem->unaligned_mode = true;
- ret = testapp_invalid_desc(test);
- break;
- case TEST_TYPE_UNALIGNED_INV_DESC_4K1_FRAME: {
- u64 page_size, umem_size;
-
- test_spec_set_name(test, "UNALIGNED_INV_DESC_4K1_FRAME_SIZE");
- /* Odd frame size so the UMEM doesn't end near a page boundary. */
- test->ifobj_tx->umem->frame_size = 4001;
- test->ifobj_rx->umem->frame_size = 4001;
- test->ifobj_tx->umem->unaligned_mode = true;
- test->ifobj_rx->umem->unaligned_mode = true;
- /* This test exists to test descriptors that staddle the end of
- * the UMEM but not a page.
- */
- page_size = sysconf(_SC_PAGESIZE);
- umem_size = test->ifobj_tx->umem->num_frames * test->ifobj_tx->umem->frame_size;
- assert(umem_size % page_size > MIN_PKT_SIZE);
- assert(umem_size % page_size < page_size - MIN_PKT_SIZE);
- ret = testapp_invalid_desc(test);
- break;
- }
- case TEST_TYPE_ALIGNED_INV_DESC_MB:
- test_spec_set_name(test, "ALIGNED_INV_DESC_MULTI_BUFF");
- ret = testapp_invalid_desc_mb(test);
- break;
- case TEST_TYPE_UNALIGNED_INV_DESC_MB:
- test_spec_set_name(test, "UNALIGNED_INV_DESC_MULTI_BUFF");
- test->ifobj_tx->umem->unaligned_mode = true;
- test->ifobj_rx->umem->unaligned_mode = true;
- ret = testapp_invalid_desc_mb(test);
- break;
- case TEST_TYPE_UNALIGNED:
- ret = testapp_unaligned(test);
- break;
- case TEST_TYPE_UNALIGNED_MB:
- ret = testapp_unaligned_mb(test);
- break;
- case TEST_TYPE_HEADROOM:
- ret = testapp_headroom(test);
- break;
- case TEST_TYPE_XDP_DROP_HALF:
- ret = testapp_xdp_drop(test);
- break;
- case TEST_TYPE_XDP_METADATA_COUNT:
- test_spec_set_name(test, "XDP_METADATA_COUNT");
- ret = testapp_xdp_metadata_count(test);
- break;
- case TEST_TYPE_XDP_METADATA_COUNT_MB:
- test_spec_set_name(test, "XDP_METADATA_COUNT_MULTI_BUFF");
- test->mtu = MAX_ETH_JUMBO_SIZE;
- ret = testapp_xdp_metadata_count(test);
- break;
- case TEST_TYPE_TOO_MANY_FRAGS:
- ret = testapp_too_many_frags(test);
- break;
- default:
- break;
- }
+static int testapp_send_receive(struct test_spec *test)
+{
+ return testapp_validate_traffic(test);
+}
+
+static int testapp_send_receive_2k_frame(struct test_spec *test)
+{
+ test->ifobj_tx->umem->frame_size = 2048;
+ test->ifobj_rx->umem->frame_size = 2048;
+ pkt_stream_replace(test, DEFAULT_PKT_CNT, MIN_PKT_SIZE);
+ return testapp_validate_traffic(test);
+}
+
+static int testapp_poll_rx(struct test_spec *test)
+{
+ test->ifobj_rx->use_poll = true;
+ return testapp_validate_traffic(test);
+}
+
+static int testapp_poll_tx(struct test_spec *test)
+{
+ test->ifobj_tx->use_poll = true;
+ return testapp_validate_traffic(test);
+}
+
+static int testapp_aligned_inv_desc(struct test_spec *test)
+{
+ return testapp_invalid_desc(test);
+}
+
+static int testapp_aligned_inv_desc_2k_frame(struct test_spec *test)
+{
+ test->ifobj_tx->umem->frame_size = 2048;
+ test->ifobj_rx->umem->frame_size = 2048;
+ return testapp_invalid_desc(test);
+}
+
+static int testapp_unaligned_inv_desc(struct test_spec *test)
+{
+ test->ifobj_tx->umem->unaligned_mode = true;
+ test->ifobj_rx->umem->unaligned_mode = true;
+ return testapp_invalid_desc(test);
+}
+
+static int testapp_unaligned_inv_desc_4001_frame(struct test_spec *test)
+{
+ u64 page_size, umem_size;
+
+ /* Odd frame size so the UMEM doesn't end near a page boundary. */
+ test->ifobj_tx->umem->frame_size = 4001;
+ test->ifobj_rx->umem->frame_size = 4001;
+ test->ifobj_tx->umem->unaligned_mode = true;
+ test->ifobj_rx->umem->unaligned_mode = true;
+ /* This test exists to test descriptors that straddle the end of
+ * the UMEM but not a page.
+ */
+ page_size = sysconf(_SC_PAGESIZE);
+ umem_size = test->ifobj_tx->umem->num_frames * test->ifobj_tx->umem->frame_size;
+ assert(umem_size % page_size > MIN_PKT_SIZE);
+ assert(umem_size % page_size < page_size - MIN_PKT_SIZE);
+
+ return testapp_invalid_desc(test);
+}
+
+static int testapp_aligned_inv_desc_mb(struct test_spec *test)
+{
+ return testapp_invalid_desc_mb(test);
+}
+
+static int testapp_unaligned_inv_desc_mb(struct test_spec *test)
+{
+ test->ifobj_tx->umem->unaligned_mode = true;
+ test->ifobj_rx->umem->unaligned_mode = true;
+ return testapp_invalid_desc_mb(test);
+}
+
+static int testapp_xdp_metadata(struct test_spec *test)
+{
+ return testapp_xdp_metadata_copy(test);
+}
+
+static int testapp_xdp_metadata_mb(struct test_spec *test)
+{
+ test->mtu = MAX_ETH_JUMBO_SIZE;
+ return testapp_xdp_metadata_copy(test);
+}
+
+static void run_pkt_test(struct test_spec *test)
+{
+ int ret;
+
+ ret = test->test_func(test);
if (ret == TEST_PASS)
ksft_test_result_pass("PASS: %s %s%s\n", mode_string(test), busy_poll_string(test),
@@ -2290,13 +2433,56 @@ static bool is_xdp_supported(int ifindex)
return true;
}
+static const struct test_spec tests[] = {
+ {.name = "SEND_RECEIVE", .test_func = testapp_send_receive},
+ {.name = "SEND_RECEIVE_2K_FRAME", .test_func = testapp_send_receive_2k_frame},
+ {.name = "SEND_RECEIVE_SINGLE_PKT", .test_func = testapp_single_pkt},
+ {.name = "POLL_RX", .test_func = testapp_poll_rx},
+ {.name = "POLL_TX", .test_func = testapp_poll_tx},
+ {.name = "POLL_RXQ_FULL", .test_func = testapp_poll_rxq_tmout},
+ {.name = "POLL_TXQ_FULL", .test_func = testapp_poll_txq_tmout},
+ {.name = "SEND_RECEIVE_UNALIGNED", .test_func = testapp_send_receive_unaligned},
+ {.name = "ALIGNED_INV_DESC", .test_func = testapp_aligned_inv_desc},
+ {.name = "ALIGNED_INV_DESC_2K_FRAME_SIZE", .test_func = testapp_aligned_inv_desc_2k_frame},
+ {.name = "UNALIGNED_INV_DESC", .test_func = testapp_unaligned_inv_desc},
+ {.name = "UNALIGNED_INV_DESC_4001_FRAME_SIZE",
+ .test_func = testapp_unaligned_inv_desc_4001_frame},
+ {.name = "UMEM_HEADROOM", .test_func = testapp_headroom},
+ {.name = "TEARDOWN", .test_func = testapp_teardown},
+ {.name = "BIDIRECTIONAL", .test_func = testapp_bidirectional},
+ {.name = "STAT_RX_DROPPED", .test_func = testapp_stats_rx_dropped},
+ {.name = "STAT_TX_INVALID", .test_func = testapp_stats_tx_invalid_descs},
+ {.name = "STAT_RX_FULL", .test_func = testapp_stats_rx_full},
+ {.name = "STAT_FILL_EMPTY", .test_func = testapp_stats_fill_empty},
+ {.name = "XDP_PROG_CLEANUP", .test_func = testapp_xdp_prog_cleanup},
+ {.name = "XDP_DROP_HALF", .test_func = testapp_xdp_drop},
+ {.name = "XDP_SHARED_UMEM", .test_func = testapp_xdp_shared_umem},
+ {.name = "XDP_METADATA_COPY", .test_func = testapp_xdp_metadata},
+ {.name = "XDP_METADATA_COPY_MULTI_BUFF", .test_func = testapp_xdp_metadata_mb},
+ {.name = "SEND_RECEIVE_9K_PACKETS", .test_func = testapp_send_receive_mb},
+ {.name = "SEND_RECEIVE_UNALIGNED_9K_PACKETS",
+ .test_func = testapp_send_receive_unaligned_mb},
+ {.name = "ALIGNED_INV_DESC_MULTI_BUFF", .test_func = testapp_aligned_inv_desc_mb},
+ {.name = "UNALIGNED_INV_DESC_MULTI_BUFF", .test_func = testapp_unaligned_inv_desc_mb},
+ {.name = "TOO_MANY_FRAGS", .test_func = testapp_too_many_frags},
+};
+
+static void print_tests(void)
+{
+ u32 i;
+
+ printf("Tests:\n");
+ for (i = 0; i < ARRAY_SIZE(tests); i++)
+ printf("%u: %s\n", i, tests[i].name);
+}
+
int main(int argc, char **argv)
{
struct pkt_stream *rx_pkt_stream_default;
struct pkt_stream *tx_pkt_stream_default;
struct ifobject *ifobj_tx, *ifobj_rx;
+ u32 i, j, failed_tests = 0, nb_tests;
int modes = TEST_MODE_SKB + 1;
- u32 i, j, failed_tests = 0;
struct test_spec test;
bool shared_netdev;
@@ -2314,14 +2500,21 @@ int main(int argc, char **argv)
parse_command_line(ifobj_tx, ifobj_rx, argc, argv);
+ if (opt_print_tests) {
+ print_tests();
+ ksft_exit_xpass();
+ }
+ if (opt_run_test != RUN_ALL_TESTS && opt_run_test >= ARRAY_SIZE(tests)) {
+ ksft_print_msg("Error: test %u does not exist.\n", opt_run_test);
+ ksft_exit_xfail();
+ }
+
shared_netdev = (ifobj_tx->ifindex == ifobj_rx->ifindex);
ifobj_tx->shared_umem = shared_netdev;
ifobj_rx->shared_umem = shared_netdev;
- if (!validate_interface(ifobj_tx) || !validate_interface(ifobj_rx)) {
- usage(basename(argv[0]));
- ksft_exit_xfail();
- }
+ if (!validate_interface(ifobj_tx) || !validate_interface(ifobj_rx))
+ print_usage(argv);
if (is_xdp_supported(ifobj_tx->ifindex)) {
modes++;
@@ -2329,23 +2522,46 @@ int main(int argc, char **argv)
modes++;
}
- init_iface(ifobj_rx, MAC1, MAC2, worker_testapp_validate_rx);
- init_iface(ifobj_tx, MAC2, MAC1, worker_testapp_validate_tx);
+ init_iface(ifobj_rx, worker_testapp_validate_rx);
+ init_iface(ifobj_tx, worker_testapp_validate_tx);
- test_spec_init(&test, ifobj_tx, ifobj_rx, 0);
- tx_pkt_stream_default = pkt_stream_generate(ifobj_tx->umem, DEFAULT_PKT_CNT, MIN_PKT_SIZE);
- rx_pkt_stream_default = pkt_stream_generate(ifobj_rx->umem, DEFAULT_PKT_CNT, MIN_PKT_SIZE);
+ test_spec_init(&test, ifobj_tx, ifobj_rx, 0, &tests[0]);
+ tx_pkt_stream_default = pkt_stream_generate(DEFAULT_PKT_CNT, MIN_PKT_SIZE);
+ rx_pkt_stream_default = pkt_stream_generate(DEFAULT_PKT_CNT, MIN_PKT_SIZE);
if (!tx_pkt_stream_default || !rx_pkt_stream_default)
exit_with_error(ENOMEM);
test.tx_pkt_stream_default = tx_pkt_stream_default;
test.rx_pkt_stream_default = rx_pkt_stream_default;
- ksft_set_plan(modes * TEST_TYPE_MAX);
+ if (opt_run_test == RUN_ALL_TESTS)
+ nb_tests = ARRAY_SIZE(tests);
+ else
+ nb_tests = 1;
+ if (opt_mode == TEST_MODE_ALL) {
+ ksft_set_plan(modes * nb_tests);
+ } else {
+ if (opt_mode == TEST_MODE_DRV && modes <= TEST_MODE_DRV) {
+ ksft_print_msg("Error: XDP_DRV mode not supported.\n");
+ ksft_exit_xfail();
+ }
+ if (opt_mode == TEST_MODE_ZC && modes <= TEST_MODE_ZC) {
+ ksft_print_msg("Error: zero-copy mode not supported.\n");
+ ksft_exit_xfail();
+ }
+
+ ksft_set_plan(nb_tests);
+ }
for (i = 0; i < modes; i++) {
- for (j = 0; j < TEST_TYPE_MAX; j++) {
- test_spec_init(&test, ifobj_tx, ifobj_rx, i);
- run_pkt_test(&test, i, j);
+ if (opt_mode != TEST_MODE_ALL && i != opt_mode)
+ continue;
+
+ for (j = 0; j < ARRAY_SIZE(tests); j++) {
+ if (opt_run_test != RUN_ALL_TESTS && j != opt_run_test)
+ continue;
+
+ test_spec_init(&test, ifobj_tx, ifobj_rx, i, &tests[j]);
+ run_pkt_test(&test);
usleep(USLEEP_MAX);
if (test.fail)
diff --git a/tools/testing/selftests/bpf/xskxceiver.h b/tools/testing/selftests/bpf/xskxceiver.h
index 233b66cef64a..f174df2d693f 100644
--- a/tools/testing/selftests/bpf/xskxceiver.h
+++ b/tools/testing/selftests/bpf/xskxceiver.h
@@ -5,7 +5,10 @@
#ifndef XSKXCEIVER_H_
#define XSKXCEIVER_H_
+#include <limits.h>
+
#include "xsk_xdp_progs.skel.h"
+#include "xsk_xdp_common.h"
#ifndef SOL_XDP
#define SOL_XDP 283
@@ -33,8 +36,7 @@
#define TEST_SKIP 2
#define MAX_INTERFACES 2
#define MAX_INTERFACE_NAME_CHARS 16
-#define MAX_SOCKETS 2
-#define MAX_TEST_NAME_SIZE 32
+#define MAX_TEST_NAME_SIZE 48
#define MAX_TEARDOWN_ITER 10
#define PKT_HDR_SIZE (sizeof(struct ethhdr) + 2) /* Just to align the data in the packet */
#define MIN_PKT_SIZE 64
@@ -56,6 +58,8 @@
#define XSK_DESC__MAX_SKB_FRAGS 18
#define HUGEPAGE_SIZE (2 * 1024 * 1024)
#define PKT_DUMP_NB_TO_PRINT 16
+#define RUN_ALL_TESTS UINT_MAX
+#define NUM_MAC_ADDRESSES 4
#define print_verbose(x...) do { if (opt_verbose) ksft_print_msg(x); } while (0)
@@ -63,43 +67,9 @@ enum test_mode {
TEST_MODE_SKB,
TEST_MODE_DRV,
TEST_MODE_ZC,
- TEST_MODE_MAX
-};
-
-enum test_type {
- TEST_TYPE_RUN_TO_COMPLETION,
- TEST_TYPE_RUN_TO_COMPLETION_2K_FRAME,
- TEST_TYPE_RUN_TO_COMPLETION_SINGLE_PKT,
- TEST_TYPE_RX_POLL,
- TEST_TYPE_TX_POLL,
- TEST_TYPE_POLL_RXQ_TMOUT,
- TEST_TYPE_POLL_TXQ_TMOUT,
- TEST_TYPE_UNALIGNED,
- TEST_TYPE_ALIGNED_INV_DESC,
- TEST_TYPE_ALIGNED_INV_DESC_2K_FRAME,
- TEST_TYPE_UNALIGNED_INV_DESC,
- TEST_TYPE_UNALIGNED_INV_DESC_4K1_FRAME,
- TEST_TYPE_HEADROOM,
- TEST_TYPE_TEARDOWN,
- TEST_TYPE_BIDI,
- TEST_TYPE_STATS_RX_DROPPED,
- TEST_TYPE_STATS_TX_INVALID_DESCS,
- TEST_TYPE_STATS_RX_FULL,
- TEST_TYPE_STATS_FILL_EMPTY,
- TEST_TYPE_BPF_RES,
- TEST_TYPE_XDP_DROP_HALF,
- TEST_TYPE_XDP_METADATA_COUNT,
- TEST_TYPE_XDP_METADATA_COUNT_MB,
- TEST_TYPE_RUN_TO_COMPLETION_MB,
- TEST_TYPE_UNALIGNED_MB,
- TEST_TYPE_ALIGNED_INV_DESC_MB,
- TEST_TYPE_UNALIGNED_INV_DESC_MB,
- TEST_TYPE_TOO_MANY_FRAGS,
- TEST_TYPE_MAX
+ TEST_MODE_ALL
};
-static bool opt_verbose;
-
struct xsk_umem_info {
struct xsk_ring_prod fq;
struct xsk_ring_cons cq;
@@ -118,8 +88,11 @@ struct xsk_socket_info {
struct xsk_ring_prod tx;
struct xsk_umem_info *umem;
struct xsk_socket *xsk;
+ struct pkt_stream *pkt_stream;
u32 outstanding_tx;
u32 rxqsize;
+ u8 dst_mac[ETH_ALEN];
+ u8 src_mac[ETH_ALEN];
};
struct pkt {
@@ -135,12 +108,16 @@ struct pkt_stream {
u32 current_pkt_nb;
struct pkt *pkts;
u32 max_pkt_len;
+ u32 nb_rx_pkts;
+ u32 nb_valid_entries;
bool verbatim;
};
struct ifobject;
+struct test_spec;
typedef int (*validation_func_t)(struct ifobject *ifobj);
typedef void *(*thread_func_t)(void *arg);
+typedef int (*test_func_t)(struct test_spec *test);
struct ifobject {
char ifname[MAX_INTERFACE_NAME_CHARS];
@@ -149,7 +126,6 @@ struct ifobject {
struct xsk_umem_info *umem;
thread_func_t func_ptr;
validation_func_t validation_func;
- struct pkt_stream *pkt_stream;
struct xsk_xdp_progs *xdp_progs;
struct bpf_map *xskmap;
struct bpf_program *xdp_prog;
@@ -169,8 +145,6 @@ struct ifobject {
bool unaligned_supp;
bool multi_buff_supp;
bool multi_buff_zc_supp;
- u8 dst_mac[ETH_ALEN];
- u8 src_mac[ETH_ALEN];
};
struct test_spec {
@@ -182,6 +156,7 @@ struct test_spec {
struct bpf_program *xdp_prog_tx;
struct bpf_map *xskmap_rx;
struct bpf_map *xskmap_tx;
+ test_func_t test_func;
int mtu;
u16 total_steps;
u16 current_step;
@@ -196,4 +171,6 @@ pthread_mutex_t pacing_mutex = PTHREAD_MUTEX_INITIALIZER;
int pkts_in_flight;
+static const u8 g_mac[ETH_ALEN] = {0x55, 0x44, 0x33, 0x22, 0x11, 0x00};
+
#endif /* XSKXCEIVER_H_ */
diff --git a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
index 7f7d20f22207..46e20b13473c 100755
--- a/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
+++ b/tools/testing/selftests/drivers/net/netdevsim/devlink.sh
@@ -31,36 +31,43 @@ devlink_wait()
fw_flash_test()
{
+ DUMMYFILE=$(find /lib/firmware -maxdepth 1 -type f -printf '%f\n' |head -1)
RET=0
- devlink dev flash $DL_HANDLE file dummy
+ if [ -z "$DUMMYFILE" ]
+ then
+ echo "SKIP: unable to find suitable dummy firmware file"
+ return
+ fi
+
+ devlink dev flash $DL_HANDLE file $DUMMYFILE
check_err $? "Failed to flash with status updates on"
- devlink dev flash $DL_HANDLE file dummy component fw.mgmt
+ devlink dev flash $DL_HANDLE file $DUMMYFILE component fw.mgmt
check_err $? "Failed to flash with component attribute"
- devlink dev flash $DL_HANDLE file dummy overwrite settings
+ devlink dev flash $DL_HANDLE file $DUMMYFILE overwrite settings
check_fail $? "Flash with overwrite settings should be rejected"
echo "1"> $DEBUGFS_DIR/fw_update_overwrite_mask
check_err $? "Failed to change allowed overwrite mask"
- devlink dev flash $DL_HANDLE file dummy overwrite settings
+ devlink dev flash $DL_HANDLE file $DUMMYFILE overwrite settings
check_err $? "Failed to flash with settings overwrite enabled"
- devlink dev flash $DL_HANDLE file dummy overwrite identifiers
+ devlink dev flash $DL_HANDLE file $DUMMYFILE overwrite identifiers
check_fail $? "Flash with overwrite settings should be identifiers"
echo "3"> $DEBUGFS_DIR/fw_update_overwrite_mask
check_err $? "Failed to change allowed overwrite mask"
- devlink dev flash $DL_HANDLE file dummy overwrite identifiers overwrite settings
+ devlink dev flash $DL_HANDLE file $DUMMYFILE overwrite identifiers overwrite settings
check_err $? "Failed to flash with settings and identifiers overwrite enabled"
echo "n"> $DEBUGFS_DIR/fw_update_status
check_err $? "Failed to disable status updates"
- devlink dev flash $DL_HANDLE file dummy
+ devlink dev flash $DL_HANDLE file $DUMMYFILE
check_err $? "Failed to flash with status updates off"
log_test "fw flash test"
diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
index 4a2881d43989..b9804ceb9494 100644
--- a/tools/testing/selftests/net/Makefile
+++ b/tools/testing/selftests/net/Makefile
@@ -90,6 +90,7 @@ TEST_PROGS += test_vxlan_mdb.sh
TEST_PROGS += test_bridge_neigh_suppress.sh
TEST_PROGS += test_vxlan_nolocalbypass.sh
TEST_PROGS += test_bridge_backup_port.sh
+TEST_PROGS += fdb_flush.sh
TEST_FILES := settings
diff --git a/tools/testing/selftests/net/af_unix/scm_pidfd.c b/tools/testing/selftests/net/af_unix/scm_pidfd.c
index a86222143d79..7e534594167e 100644
--- a/tools/testing/selftests/net/af_unix/scm_pidfd.c
+++ b/tools/testing/selftests/net/af_unix/scm_pidfd.c
@@ -294,7 +294,6 @@ static void fill_sockaddr(struct sock_addr *addr, bool abstract)
static void client(FIXTURE_DATA(scm_pidfd) *self,
const FIXTURE_VARIANT(scm_pidfd) *variant)
{
- int err;
int cfd;
socklen_t len;
struct ucred peer_cred;
diff --git a/tools/testing/selftests/net/af_unix/test_unix_oob.c b/tools/testing/selftests/net/af_unix/test_unix_oob.c
index 532459a15067..a7c51889acd5 100644
--- a/tools/testing/selftests/net/af_unix/test_unix_oob.c
+++ b/tools/testing/selftests/net/af_unix/test_unix_oob.c
@@ -180,9 +180,7 @@ main(int argc, char **argv)
char buf[1024];
int on = 0;
char oob;
- int flags;
int atmark;
- char *tmp_file;
lfd = socket(AF_UNIX, SOCK_STREAM, 0);
memset(&consumer_addr, 0, sizeof(consumer_addr));
diff --git a/tools/testing/selftests/net/fdb_flush.sh b/tools/testing/selftests/net/fdb_flush.sh
new file mode 100755
index 000000000000..90e7a29e0476
--- /dev/null
+++ b/tools/testing/selftests/net/fdb_flush.sh
@@ -0,0 +1,812 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+#
+# This test checks the functionality of flushing FDB entries.
+# Verify that flush works as expected with all the supported arguments and test
+# some combinations of arguments.
+
+FLUSH_BY_STATE_TESTS="
+ vxlan_test_flush_by_permanent
+ vxlan_test_flush_by_nopermanent
+ vxlan_test_flush_by_static
+ vxlan_test_flush_by_nostatic
+ vxlan_test_flush_by_dynamic
+ vxlan_test_flush_by_nodynamic
+"
+
+FLUSH_BY_FLAG_TESTS="
+ vxlan_test_flush_by_extern_learn
+ vxlan_test_flush_by_noextern_learn
+ vxlan_test_flush_by_router
+ vxlan_test_flush_by_norouter
+"
+
+TESTS="
+ vxlan_test_flush_by_dev
+ vxlan_test_flush_by_vni
+ vxlan_test_flush_by_src_vni
+ vxlan_test_flush_by_port
+ vxlan_test_flush_by_dst_ip
+ vxlan_test_flush_by_nhid
+ $FLUSH_BY_STATE_TESTS
+ $FLUSH_BY_FLAG_TESTS
+ vxlan_test_flush_by_several_args
+ vxlan_test_flush_by_remote_attributes
+ bridge_test_flush_by_dev
+ bridge_test_flush_by_vlan
+ bridge_vxlan_test_flush
+"
+
+: ${VERBOSE:=0}
+: ${PAUSE_ON_FAIL:=no}
+: ${PAUSE:=no}
+: ${VXPORT:=4789}
+
+run_cmd()
+{
+ local cmd="$1"
+ local out
+ local rc
+ local stderr="2>/dev/null"
+
+ if [ "$VERBOSE" = "1" ]; then
+ printf "COMMAND: $cmd\n"
+ stderr=
+ fi
+
+ out=$(eval $cmd $stderr)
+ rc=$?
+ if [ "$VERBOSE" = "1" -a -n "$out" ]; then
+ echo " $out"
+ fi
+
+ return $rc
+}
+
+log_test()
+{
+ local rc=$1
+ local expected=$2
+ local msg="$3"
+ local nsuccess
+ local nfail
+ local ret
+
+ if [ ${rc} -eq ${expected} ]; then
+ printf "TEST: %-60s [ OK ]\n" "${msg}"
+ nsuccess=$((nsuccess+1))
+ else
+ ret=1
+ nfail=$((nfail+1))
+ printf "TEST: %-60s [FAIL]\n" "${msg}"
+ if [ "$VERBOSE" = "1" ]; then
+ echo " rc=$rc, expected $expected"
+ fi
+
+ if [ "${PAUSE_ON_FAIL}" = "yes" ]; then
+ echo
+ echo "hit enter to continue, 'q' to quit"
+ read a
+ [ "$a" = "q" ] && exit 1
+ fi
+ fi
+
+ if [ "${PAUSE}" = "yes" ]; then
+ echo
+ echo "hit enter to continue, 'q' to quit"
+ read a
+ [ "$a" = "q" ] && exit 1
+ fi
+
+ [ "$VERBOSE" = "1" ] && echo
+}
+
+MAC_POOL_1="
+ de:ad:be:ef:13:10
+ de:ad:be:ef:13:11
+ de:ad:be:ef:13:12
+ de:ad:be:ef:13:13
+ de:ad:be:ef:13:14
+"
+mac_pool_1_len=$(echo "$MAC_POOL_1" | grep -c .)
+
+MAC_POOL_2="
+ ca:fe:be:ef:13:10
+ ca:fe:be:ef:13:11
+ ca:fe:be:ef:13:12
+ ca:fe:be:ef:13:13
+ ca:fe:be:ef:13:14
+"
+mac_pool_2_len=$(echo "$MAC_POOL_2" | grep -c .)
+
+fdb_add_mac_pool_1()
+{
+ local dev=$1; shift
+ local args="$@"
+
+ for mac in $MAC_POOL_1
+ do
+ $BRIDGE fdb add $mac dev $dev $args
+ done
+}
+
+fdb_add_mac_pool_2()
+{
+ local dev=$1; shift
+ local args="$@"
+
+ for mac in $MAC_POOL_2
+ do
+ $BRIDGE fdb add $mac dev $dev $args
+ done
+}
+
+fdb_check_n_entries_by_dev_filter()
+{
+ local dev=$1; shift
+ local exp_entries=$1; shift
+ local filter="$@"
+
+ local entries=$($BRIDGE fdb show dev $dev | grep "$filter" | wc -l)
+
+ [[ $entries -eq $exp_entries ]]
+ rc=$?
+
+ log_test $rc 0 "$dev: Expected $exp_entries FDB entries, got $entries"
+ return $rc
+}
+
+vxlan_test_flush_by_dev()
+{
+ local vni=3000
+ local dst_ip=192.0.2.1
+
+ fdb_add_mac_pool_1 vx10 vni $vni dst $dst_ip
+ fdb_add_mac_pool_2 vx20 vni $vni dst $dst_ip
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len
+ fdb_check_n_entries_by_dev_filter vx20 $mac_pool_2_len
+
+ run_cmd "$BRIDGE fdb flush dev vx10"
+ log_test $? 0 "Flush FDB by dev vx10"
+
+ fdb_check_n_entries_by_dev_filter vx10 0
+ log_test $? 0 "Flush FDB by dev vx10 - test vx10 entries"
+
+ fdb_check_n_entries_by_dev_filter vx20 $mac_pool_2_len
+ log_test $? 0 "Flush FDB by dev vx10 - test vx20 entries"
+}
+
+vxlan_test_flush_by_vni()
+{
+ local vni_1=3000
+ local vni_2=4000
+ local dst_ip=192.0.2.1
+
+ fdb_add_mac_pool_1 vx10 vni $vni_1 dst $dst_ip
+ fdb_add_mac_pool_2 vx10 vni $vni_2 dst $dst_ip
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len vni $vni_1
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_2_len vni $vni_2
+
+ run_cmd "$BRIDGE fdb flush dev vx10 vni $vni_2"
+ log_test $? 0 "Flush FDB by dev vx10 and vni $vni_2"
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len vni $vni_1
+ log_test $? 0 "Test entries with vni $vni_1"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 vni $vni_2
+ log_test $? 0 "Test entries with vni $vni_2"
+}
+
+vxlan_test_flush_by_src_vni()
+{
+ # Set some entries with {vni=x,src_vni=y} and some with the opposite -
+ # {vni=y,src_vni=x}, to verify that when we flush by src_vni=x, entries
+ # with vni=x are not flushed.
+ local vni_1=3000
+ local vni_2=4000
+ local src_vni_1=4000
+ local src_vni_2=3000
+ local dst_ip=192.0.2.1
+
+ # Reconfigure vx10 with 'external' to get 'src_vni' details in
+ # 'bridge fdb' output
+ $IP link del dev vx10
+ $IP link add name vx10 type vxlan dstport "$VXPORT" external
+
+ fdb_add_mac_pool_1 vx10 vni $vni_1 src_vni $src_vni_1 dst $dst_ip
+ fdb_add_mac_pool_2 vx10 vni $vni_2 src_vni $src_vni_2 dst $dst_ip
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len \
+ src_vni $src_vni_1
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_2_len \
+ src_vni $src_vni_2
+
+ run_cmd "$BRIDGE fdb flush dev vx10 src_vni $src_vni_2"
+ log_test $? 0 "Flush FDB by dev vx10 and src_vni $src_vni_2"
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len \
+ src_vni $src_vni_1
+ log_test $? 0 "Test entries with src_vni $src_vni_1"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 src_vni $src_vni_2
+ log_test $? 0 "Test entries with src_vni $src_vni_2"
+}
+
+vxlan_test_flush_by_port()
+{
+ local port_1=1234
+ local port_2=4321
+ local dst_ip=192.0.2.1
+
+ fdb_add_mac_pool_1 vx10 port $port_1 dst $dst_ip
+ fdb_add_mac_pool_2 vx10 port $port_2 dst $dst_ip
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len port $port_1
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_2_len port $port_2
+
+ run_cmd "$BRIDGE fdb flush dev vx10 port $port_2"
+ log_test $? 0 "Flush FDB by dev vx10 and port $port_2"
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len port $port_1
+ log_test $? 0 "Test entries with port $port_1"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 port $port_2
+ log_test $? 0 "Test entries with port $port_2"
+}
+
+vxlan_test_flush_by_dst_ip()
+{
+ local dst_ip_1=192.0.2.1
+ local dst_ip_2=192.0.2.2
+
+ fdb_add_mac_pool_1 vx10 dst $dst_ip_1
+ fdb_add_mac_pool_2 vx10 dst $dst_ip_2
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len dst $dst_ip_1
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_2_len dst $dst_ip_2
+
+ run_cmd "$BRIDGE fdb flush dev vx10 dst $dst_ip_2"
+ log_test $? 0 "Flush FDB by dev vx10 and dst $dst_ip_2"
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len dst $dst_ip_1
+ log_test $? 0 "Test entries with dst $dst_ip_1"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 dst $dst_ip_2
+ log_test $? 0 "Test entries with dst $dst_ip_2"
+}
+
+nexthops_add()
+{
+ local nhid_1=$1; shift
+ local nhid_2=$1; shift
+
+ $IP nexthop add id 10 via 192.0.2.1 fdb
+ $IP nexthop add id $nhid_1 group 10 fdb
+
+ $IP nexthop add id 20 via 192.0.2.2 fdb
+ $IP nexthop add id $nhid_2 group 20 fdb
+}
+
+vxlan_test_flush_by_nhid()
+{
+ local nhid_1=100
+ local nhid_2=200
+
+ nexthops_add $nhid_1 $nhid_2
+
+ fdb_add_mac_pool_1 vx10 nhid $nhid_1
+ fdb_add_mac_pool_2 vx10 nhid $nhid_2
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len nhid $nhid_1
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_2_len nhid $nhid_2
+
+ run_cmd "$BRIDGE fdb flush dev vx10 nhid $nhid_2"
+ log_test $? 0 "Flush FDB by dev vx10 and nhid $nhid_2"
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len nhid $nhid_1
+ log_test $? 0 "Test entries with nhid $nhid_1"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 nhid $nhid_2
+ log_test $? 0 "Test entries with nhid $nhid_2"
+
+ # Also flush the entries with $nhid_1, then verify that flushing by
+ # 'nhid' does not return an error when there are no entries with
+ # nexthops.
+ run_cmd "$BRIDGE fdb flush dev vx10 nhid $nhid_1"
+ log_test $? 0 "Flush FDB by dev vx10 and nhid $nhid_1"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 nhid
+ log_test $? 0 "Test entries with 'nhid' keyword"
+
+ run_cmd "$BRIDGE fdb flush dev vx10 nhid $nhid_1"
+ log_test $? 0 "Flush FDB by nhid when there are no entries with nexthop"
+}
+
+vxlan_test_flush_by_state()
+{
+ local flush_by_state=$1; shift
+ local state_1=$1; shift
+ local exp_state_1=$1; shift
+ local state_2=$1; shift
+ local exp_state_2=$1; shift
+
+ local dst_ip_1=192.0.2.1
+ local dst_ip_2=192.0.2.2
+
+ fdb_add_mac_pool_1 vx10 dst $dst_ip_1 $state_1
+ fdb_add_mac_pool_2 vx10 dst $dst_ip_2 $state_2
+
+ # Check the entries by dst_ip as not all states appear in 'bridge fdb'
+ # output.
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len dst $dst_ip_1
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_2_len dst $dst_ip_2
+
+ run_cmd "$BRIDGE fdb flush dev vx10 $flush_by_state"
+ log_test $? 0 "Flush FDB by dev vx10 and state $flush_by_state"
+
+ fdb_check_n_entries_by_dev_filter vx10 $exp_state_1 dst $dst_ip_1
+ log_test $? 0 "Test entries with state $state_1"
+
+ fdb_check_n_entries_by_dev_filter vx10 $exp_state_2 dst $dst_ip_2
+ log_test $? 0 "Test entries with state $state_2"
+}
+
+vxlan_test_flush_by_permanent()
+{
+ # Entries that are added without a state get the 'permanent' state by
+ # default. Add some entries with the 'extern_learn' flag instead of a
+ # state; they will also be 'permanent' and should be flushed as well.
+ local flush_by_state="permanent"
+ local state_1="permanent"
+ local exp_state_1=0
+ local state_2="extern_learn"
+ local exp_state_2=0
+
+ vxlan_test_flush_by_state $flush_by_state $state_1 $exp_state_1 \
+ $state_2 $exp_state_2
+}
+
+vxlan_test_flush_by_nopermanent()
+{
+ local flush_by_state="nopermanent"
+ local state_1="permanent"
+ local exp_state_1=$mac_pool_1_len
+ local state_2="static"
+ local exp_state_2=0
+
+ vxlan_test_flush_by_state $flush_by_state $state_1 $exp_state_1 \
+ $state_2 $exp_state_2
+}
+
+vxlan_test_flush_by_static()
+{
+ local flush_by_state="static"
+ local state_1="static"
+ local exp_state_1=0
+ local state_2="dynamic"
+ local exp_state_2=$mac_pool_2_len
+
+ vxlan_test_flush_by_state $flush_by_state $state_1 $exp_state_1 \
+ $state_2 $exp_state_2
+}
+
+vxlan_test_flush_by_nostatic()
+{
+ local flush_by_state="nostatic"
+ local state_1="permanent"
+ local exp_state_1=$mac_pool_1_len
+ local state_2="dynamic"
+ local exp_state_2=0
+
+ vxlan_test_flush_by_state $flush_by_state $state_1 $exp_state_1 \
+ $state_2 $exp_state_2
+}
+
+vxlan_test_flush_by_dynamic()
+{
+ local flush_by_state="dynamic"
+ local state_1="dynamic"
+ local exp_state_1=0
+ local state_2="static"
+ local exp_state_2=$mac_pool_2_len
+
+ vxlan_test_flush_by_state $flush_by_state $state_1 $exp_state_1 \
+ $state_2 $exp_state_2
+}
+
+vxlan_test_flush_by_nodynamic()
+{
+ local flush_by_state="nodynamic"
+ local state_1="permanent"
+ local exp_state_1=0
+ local state_2="dynamic"
+ local exp_state_2=$mac_pool_2_len
+
+ vxlan_test_flush_by_state $flush_by_state $state_1 $exp_state_1 \
+ $state_2 $exp_state_2
+}
+
+vxlan_test_flush_by_flag()
+{
+ local flush_by_flag=$1; shift
+ local flag_1=$1; shift
+ local exp_flag_1=$1; shift
+ local flag_2=$1; shift
+ local exp_flag_2=$1; shift
+
+ local dst_ip_1=192.0.2.1
+ local dst_ip_2=192.0.2.2
+
+ fdb_add_mac_pool_1 vx10 dst $dst_ip_1 $flag_1
+ fdb_add_mac_pool_2 vx10 dst $dst_ip_2 $flag_2
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len $flag_1
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_2_len $flag_2
+
+ run_cmd "$BRIDGE fdb flush dev vx10 $flush_by_flag"
+ log_test $? 0 "Flush FDB by dev vx10 and flag $flush_by_flag"
+
+ fdb_check_n_entries_by_dev_filter vx10 $exp_flag_1 dst $dst_ip_1
+ log_test $? 0 "Test entries with flag $flag_1"
+
+ fdb_check_n_entries_by_dev_filter vx10 $exp_flag_2 dst $dst_ip_2
+ log_test $? 0 "Test entries with flag $flag_2"
+}
+
+vxlan_test_flush_by_extern_learn()
+{
+ local flush_by_flag="extern_learn"
+ local flag_1="extern_learn"
+ local exp_flag_1=0
+ local flag_2="router"
+ local exp_flag_2=$mac_pool_2_len
+
+ vxlan_test_flush_by_flag $flush_by_flag $flag_1 $exp_flag_1 \
+ $flag_2 $exp_flag_2
+}
+
+vxlan_test_flush_by_noextern_learn()
+{
+ local flush_by_flag="noextern_learn"
+ local flag_1="extern_learn"
+ local exp_flag_1=$mac_pool_1_len
+ local flag_2="router"
+ local exp_flag_2=0
+
+ vxlan_test_flush_by_flag $flush_by_flag $flag_1 $exp_flag_1 \
+ $flag_2 $exp_flag_2
+}
+
+vxlan_test_flush_by_router()
+{
+ local flush_by_flag="router"
+ local flag_1="router"
+ local exp_flag_1=0
+ local flag_2="extern_learn"
+ local exp_flag_2=$mac_pool_2_len
+
+ vxlan_test_flush_by_flag $flush_by_flag $flag_1 $exp_flag_1 \
+ $flag_2 $exp_flag_2
+}
+
+vxlan_test_flush_by_norouter()
+{
+
+ local flush_by_flag="norouter"
+ local flag_1="router"
+ local exp_flag_1=$mac_pool_1_len
+ local flag_2="extern_learn"
+ local exp_flag_2=0
+
+ vxlan_test_flush_by_flag $flush_by_flag $flag_1 $exp_flag_1 \
+ $flag_2 $exp_flag_2
+}
+
+vxlan_test_flush_by_several_args()
+{
+ local dst_ip_1=192.0.2.1
+ local dst_ip_2=192.0.2.2
+ local state_1=permanent
+ local state_2=static
+ local vni=3000
+ local port=1234
+ local nhid=100
+ local flag=router
+ local flush_args
+
+ ################### Flush by 2 args - nhid and flag ####################
+ $IP nexthop add id 10 via 192.0.2.1 fdb
+ $IP nexthop add id $nhid group 10 fdb
+
+ fdb_add_mac_pool_1 vx10 nhid $nhid $flag $state_1
+ fdb_add_mac_pool_2 vx10 nhid $nhid $flag $state_2
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len $state_1
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_2_len $state_2
+
+ run_cmd "$BRIDGE fdb flush dev vx10 nhid $nhid $flag"
+ log_test $? 0 "Flush FDB by dev vx10 nhid $nhid $flag"
+
+ # All entries should be flushed as 'state' is not an argument for flush
+ # filtering.
+ fdb_check_n_entries_by_dev_filter vx10 0 $state_1
+ log_test $? 0 "Test entries with state $state_1"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 $state_2
+ log_test $? 0 "Test entries with state $state_2"
+
+ ################ Flush by 3 args - VNI, port and dst_ip ################
+ fdb_add_mac_pool_1 vx10 vni $vni port $port dst $dst_ip_1
+ fdb_add_mac_pool_2 vx10 vni $vni port $port dst $dst_ip_2
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len dst $dst_ip_1
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_2_len dst $dst_ip_2
+
+ flush_args="vni $vni port $port dst $dst_ip_2"
+ run_cmd "$BRIDGE fdb flush dev vx10 $flush_args"
+ log_test $? 0 "Flush FDB by dev vx10 $flush_args"
+
+ # Only entries with $dst_ip_2 should be flushed, even though the other
+ # arguments match the filter; the flush is the AND of all the arguments.
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len dst $dst_ip_1
+ log_test $? 0 "Test entries with dst $dst_ip_1"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 dst $dst_ip_2
+ log_test $? 0 "Test entries with dst $dst_ip_2"
+}
+
+multicast_fdb_entries_add()
+{
+ mac=00:00:00:00:00:00
+ vnis=(2000 3000)
+
+ for vni in "${vnis[@]}"; do
+ $BRIDGE fdb append $mac dev vx10 dst 192.0.2.1 vni $vni \
+ src_vni 5000
+ $BRIDGE fdb append $mac dev vx10 dst 192.0.2.1 vni $vni \
+ port 1111
+ $BRIDGE fdb append $mac dev vx10 dst 192.0.2.2 vni $vni \
+ port 2222
+ done
+}
+
+vxlan_test_flush_by_remote_attributes()
+{
+ local flush_args
+
+ # Reconfigure vx10 with 'external' to get 'src_vni' details in
+ # 'bridge fdb' output
+ $IP link del dev vx10
+ $IP link add name vx10 type vxlan dstport "$VXPORT" external
+
+ # For multicast FDB entries, the VXLAN driver stores a linked list of
+ # remotes for a given key. Verify that only the expected remotes are
+ # flushed.
+ multicast_fdb_entries_add
+
+ ## Flush by 3 remote's attributes - destination IP, port and VNI ##
+ flush_args="dst 192.0.2.1 port 1111 vni 2000"
+ fdb_check_n_entries_by_dev_filter vx10 1 $flush_args
+
+ t0_n_entries=$($BRIDGE fdb show dev vx10 | wc -l)
+ run_cmd "$BRIDGE fdb flush dev vx10 $flush_args"
+ log_test $? 0 "Flush FDB by dev vx10 $flush_args"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 $flush_args
+
+ exp_n_entries=$((t0_n_entries - 1))
+ t1_n_entries=$($BRIDGE fdb show dev vx10 | wc -l)
+ [[ $t1_n_entries -eq $exp_n_entries ]]
+ log_test $? 0 "Check how many entries were flushed"
+
+ ## Flush by 2 remote's attributes - destination IP and port ##
+ flush_args="dst 192.0.2.2 port 2222"
+
+ fdb_check_n_entries_by_dev_filter vx10 2 $flush_args
+
+ t0_n_entries=$($BRIDGE fdb show dev vx10 | wc -l)
+ run_cmd "$BRIDGE fdb flush dev vx10 $flush_args"
+ log_test $? 0 "Flush FDB by dev vx10 $flush_args"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 $flush_args
+
+ exp_n_entries=$((t0_n_entries - 2))
+ t1_n_entries=$($BRIDGE fdb show dev vx10 | wc -l)
+ [[ $t1_n_entries -eq $exp_n_entries ]]
+ log_test $? 0 "Check how many entries were flushed"
+
+ ## Flush by VNI and source VNI, which is not a remote's attribute ##
+ flush_args="vni 3000 src_vni 5000"
+
+ fdb_check_n_entries_by_dev_filter vx10 1 $flush_args
+
+ t0_n_entries=$($BRIDGE fdb show dev vx10 | wc -l)
+ run_cmd "$BRIDGE fdb flush dev vx10 $flush_args"
+ log_test $? 0 "Flush FDB by dev vx10 $flush_args"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 $flush_args
+
+ exp_n_entries=$((t0_n_entries -1))
+ t1_n_entries=$($BRIDGE fdb show dev vx10 | wc -l)
+ [[ $t1_n_entries -eq $exp_n_entries ]]
+ log_test $? 0 "Check how many entries were flushed"
+
+ ## Flush by 1 remote's attribute - destination IP ##
+ flush_args="dst 192.0.2.1"
+
+ fdb_check_n_entries_by_dev_filter vx10 2 $flush_args
+
+ t0_n_entries=$($BRIDGE fdb show dev vx10 | wc -l)
+ run_cmd "$BRIDGE fdb flush dev vx10 $flush_args"
+ log_test $? 0 "Flush FDB by dev vx10 $flush_args"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 $flush_args
+
+ exp_n_entries=$((t0_n_entries -2))
+ t1_n_entries=$($BRIDGE fdb show dev vx10 | wc -l)
+ [[ $t1_n_entries -eq $exp_n_entries ]]
+ log_test $? 0 "Check how many entries were flushed"
+}
+
+bridge_test_flush_by_dev()
+{
+ local dst_ip=192.0.2.1
+ local br0_n_ent_t0=$($BRIDGE fdb show dev br0 | wc -l)
+ local br1_n_ent_t0=$($BRIDGE fdb show dev br1 | wc -l)
+
+ fdb_add_mac_pool_1 br0 dst $dst_ip
+ fdb_add_mac_pool_2 br1 dst $dst_ip
+
+ # Each 'fdb add' command adds one extra entry in the bridge with the
+ # default vlan.
+ local exp_br0_n_ent=$(($br0_n_ent_t0 + 2 * $mac_pool_1_len))
+ local exp_br1_n_ent=$(($br1_n_ent_t0 + 2 * $mac_pool_2_len))
+
+ fdb_check_n_entries_by_dev_filter br0 $exp_br0_n_ent
+ fdb_check_n_entries_by_dev_filter br1 $exp_br1_n_ent
+
+ run_cmd "$BRIDGE fdb flush dev br0"
+ log_test $? 0 "Flush FDB by dev br0"
+
+ # The default entry should not be flushed
+ fdb_check_n_entries_by_dev_filter br0 1
+ log_test $? 0 "Flush FDB by dev br0 - test br0 entries"
+
+ fdb_check_n_entries_by_dev_filter br1 $exp_br1_n_ent
+ log_test $? 0 "Flush FDB by dev br0 - test br1 entries"
+}
+
+bridge_test_flush_by_vlan()
+{
+ local vlan_1=10
+ local vlan_2=20
+ local vlan_1_ent_t0
+ local vlan_2_ent_t0
+
+ $BRIDGE vlan add vid $vlan_1 dev br0 self
+ $BRIDGE vlan add vid $vlan_2 dev br0 self
+
+ vlan_1_ent_t0=$($BRIDGE fdb show dev br0 | grep "vlan $vlan_1" | wc -l)
+ vlan_2_ent_t0=$($BRIDGE fdb show dev br0 | grep "vlan $vlan_2" | wc -l)
+
+ fdb_add_mac_pool_1 br0 vlan $vlan_1
+ fdb_add_mac_pool_2 br0 vlan $vlan_2
+
+ local exp_vlan_1_ent=$(($vlan_1_ent_t0 + $mac_pool_1_len))
+ local exp_vlan_2_ent=$(($vlan_2_ent_t0 + $mac_pool_2_len))
+
+ fdb_check_n_entries_by_dev_filter br0 $exp_vlan_1_ent vlan $vlan_1
+ fdb_check_n_entries_by_dev_filter br0 $exp_vlan_2_ent vlan $vlan_2
+
+ run_cmd "$BRIDGE fdb flush dev br0 vlan $vlan_1"
+ log_test $? 0 "Flush FDB by dev br0 and vlan $vlan_1"
+
+ fdb_check_n_entries_by_dev_filter br0 0 vlan $vlan_1
+ log_test $? 0 "Test entries with vlan $vlan_1"
+
+ fdb_check_n_entries_by_dev_filter br0 $exp_vlan_2_ent vlan $vlan_2
+ log_test $? 0 "Test entries with vlan $vlan_2"
+}
+
+bridge_vxlan_test_flush()
+{
+ local vlan_1=10
+ local dst_ip=192.0.2.1
+
+ $IP link set dev vx10 master br0
+ $BRIDGE vlan add vid $vlan_1 dev br0 self
+ $BRIDGE vlan add vid $vlan_1 dev vx10
+
+ fdb_add_mac_pool_1 vx10 vni 3000 dst $dst_ip self master
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len vlan $vlan_1
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len vni 3000
+
+ # Such a command should fail in the VXLAN driver as vlan is not
+ # supported, but it should still flush the entries in the bridge.
+ run_cmd "$BRIDGE fdb flush dev vx10 vlan $vlan_1 master self"
+ log_test $? 255 \
+ "Flush FDB by dev vx10, vlan $vlan_1, master and self"
+
+ fdb_check_n_entries_by_dev_filter vx10 0 vlan $vlan_1
+ log_test $? 0 "Test entries with vlan $vlan_1"
+
+ fdb_check_n_entries_by_dev_filter vx10 $mac_pool_1_len dst $dst_ip
+ log_test $? 0 "Test entries with dst $dst_ip"
+}
+
+setup()
+{
+ IP="ip -netns ns1"
+ BRIDGE="bridge -netns ns1"
+
+ ip netns add ns1
+
+ $IP link add name vx10 type vxlan id 1000 dstport "$VXPORT"
+ $IP link add name vx20 type vxlan id 2000 dstport "$VXPORT"
+
+ $IP link add br0 type bridge vlan_filtering 1
+ $IP link add br1 type bridge vlan_filtering 1
+}
+
+cleanup()
+{
+ $IP link del dev br1
+ $IP link del dev br0
+
+ $IP link del dev vx20
+ $IP link del dev vx10
+
+ ip netns del ns1
+}
+
+################################################################################
+# main
+
+while getopts :t:pPhvw: o
+do
+ case $o in
+ t) TESTS=$OPTARG;;
+ p) PAUSE_ON_FAIL=yes;;
+ P) PAUSE=yes;;
+ v) VERBOSE=$(($VERBOSE + 1));;
+ w) PING_TIMEOUT=$OPTARG;;
+ h) usage; exit 0;;
+ *) usage; exit 1;;
+ esac
+done
+
+# make sure we don't pause twice
+[ "${PAUSE}" = "yes" ] && PAUSE_ON_FAIL=no
+
+if [ "$(id -u)" -ne 0 ];then
+ echo "SKIP: Need root privileges"
+ exit $ksft_skip;
+fi
+
+if [ ! -x "$(command -v ip)" ]; then
+ echo "SKIP: Could not run test without ip tool"
+ exit $ksft_skip
+fi
+
+# Check for a flag that was added to the flush command as part of VXLAN flush support
+bridge fdb help 2>&1 | grep -q "\[no\]router"
+if [ $? -ne 0 ]; then
+ echo "SKIP: iproute2 too old, missing flush command for VXLAN"
+ exit $ksft_skip
+fi
+
+ip link add dev vx10 type vxlan id 1000 2> /dev/null
+out=$(bridge fdb flush dev vx10 2>&1 | grep -q "Operation not supported")
+if [ $? -eq 0 ]; then
+ echo "SKIP: kernel lacks vxlan flush support"
+ exit $ksft_skip
+fi
+ip link del dev vx10
+
+for t in $TESTS
+do
+ setup; $t; cleanup;
+done
diff --git a/tools/testing/selftests/net/forwarding/Makefile b/tools/testing/selftests/net/forwarding/Makefile
index 74e754e266c3..df593b7b3e6b 100644
--- a/tools/testing/selftests/net/forwarding/Makefile
+++ b/tools/testing/selftests/net/forwarding/Makefile
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0+ OR MIT
-TEST_PROGS = bridge_igmp.sh \
+TEST_PROGS = bridge_fdb_learning_limit.sh \
+ bridge_igmp.sh \
bridge_locked_port.sh \
bridge_mdb.sh \
bridge_mdb_host.sh \
diff --git a/tools/testing/selftests/net/forwarding/bridge_fdb_learning_limit.sh b/tools/testing/selftests/net/forwarding/bridge_fdb_learning_limit.sh
new file mode 100755
index 000000000000..0760a34b7114
--- /dev/null
+++ b/tools/testing/selftests/net/forwarding/bridge_fdb_learning_limit.sh
@@ -0,0 +1,283 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# ShellCheck incorrectly believes that most of the code here is unreachable
+# because it's invoked by variable name following ALL_TESTS.
+#
+# shellcheck disable=SC2317
+
+ALL_TESTS="check_accounting check_limit"
+NUM_NETIFS=6
+source lib.sh
+
+TEST_MAC_BASE=de:ad:be:ef:42:
+
+NUM_PKTS=16
+FDB_LIMIT=8
+
+FDB_TYPES=(
+ # name is counted? overrides learned?
+ 'learned 1 0'
+ 'static 0 1'
+ 'user 0 1'
+ 'extern_learn 0 1'
+ 'local 0 1'
+)
+
+mac()
+{
+ printf "${TEST_MAC_BASE}%02x" "$1"
+}
+
+H1_DEFAULT_MAC=$(mac 42)
+
+switch_create()
+{
+ ip link add dev br0 type bridge
+
+ ip link set dev "$swp1" master br0
+ ip link set dev "$swp2" master br0
+ # swp3 is used to add local MACs, so do not add it to the bridge yet.
+
+ # swp2 is only used for replying when learning on swp1; its MAC should not be learned.
+ ip link set dev "$swp2" type bridge_slave learning off
+
+ ip link set dev br0 up
+
+ ip link set dev "$swp1" up
+ ip link set dev "$swp2" up
+ ip link set dev "$swp3" up
+}
+
+switch_destroy()
+{
+ ip link set dev "$swp3" down
+ ip link set dev "$swp2" down
+ ip link set dev "$swp1" down
+
+ ip link del dev br0
+}
+
+h_create()
+{
+ ip link set "$h1" addr "$H1_DEFAULT_MAC"
+
+ simple_if_init "$h1" 192.0.2.1/24
+ simple_if_init "$h2" 192.0.2.2/24
+}
+
+h_destroy()
+{
+ simple_if_fini "$h1" 192.0.2.1/24
+ simple_if_fini "$h2" 192.0.2.2/24
+}
+
+setup_prepare()
+{
+ h1=${NETIFS[p1]}
+ swp1=${NETIFS[p2]}
+
+ h2=${NETIFS[p3]}
+ swp2=${NETIFS[p4]}
+
+ swp3=${NETIFS[p6]}
+
+ vrf_prepare
+
+ h_create
+
+ switch_create
+}
+
+cleanup()
+{
+ pre_cleanup
+
+ switch_destroy
+
+ h_destroy
+
+ vrf_cleanup
+}
+
+fdb_get_n_learned()
+{
+ ip -d -j link show dev br0 type bridge | \
+ jq '.[]["linkinfo"]["info_data"]["fdb_n_learned"]'
+}
+
+fdb_get_n_mac()
+{
+ local mac=${1}
+
+ bridge -j fdb show br br0 | \
+ jq "map(select(.mac == \"${mac}\" and (has(\"vlan\") | not))) | length"
+}
+
+fdb_fill_learned()
+{
+ local i
+
+ for i in $(seq 1 "$NUM_PKTS"); do
+ fdb_add learned "$(mac "$i")"
+ done
+}
+
+fdb_reset()
+{
+ bridge fdb flush dev br0
+
+ # Keep the default MAC address of h1 in the table. We set it to a different one when
+ # testing dynamic learning.
+ bridge fdb add "$H1_DEFAULT_MAC" dev "$swp1" master static use
+}
+
+fdb_add()
+{
+ local type=$1 mac=$2
+
+ case "$type" in
+ learned)
+ ip link set "$h1" addr "$mac"
+ # Wait for a reply so we implicitly wait until the forwarding
+ # code has finished and the FDB entry has been created.
+ PING_COUNT=1 ping_do "$h1" 192.0.2.2
+ check_err $? "Failed to ping another bridge port"
+ ip link set "$h1" addr "$H1_DEFAULT_MAC"
+ ;;
+ local)
+ ip link set dev "$swp3" addr "$mac" && ip link set "$swp3" master br0
+ ;;
+ static)
+ bridge fdb replace "$mac" dev "$swp1" master static
+ ;;
+ user)
+ bridge fdb replace "$mac" dev "$swp1" master static use
+ ;;
+ extern_learn)
+ bridge fdb replace "$mac" dev "$swp1" master extern_learn
+ ;;
+ esac
+
+ check_err $? "Failed to add a FDB entry of type ${type}"
+}
+
+fdb_del()
+{
+ local type=$1 mac=$2
+
+ case "$type" in
+ local)
+ ip link set "$swp3" nomaster
+ ;;
+ *)
+ bridge fdb del "$mac" dev "$swp1" master
+ ;;
+ esac
+
+ check_err $? "Failed to remove a FDB entry of type ${type}"
+}
+
+check_accounting_one_type()
+{
+ local type=$1 is_counted=$2 overrides_learned=$3
+ shift 3
+ RET=0
+
+ fdb_reset
+ fdb_add "$type" "$(mac 0)"
+ learned=$(fdb_get_n_learned)
+ [ "$learned" -ne "$is_counted" ]
+ check_fail $? "Inserted FDB type ${type}: Expected the count ${is_counted}, but got ${learned}"
+
+ fdb_del "$type" "$(mac 0)"
+ learned=$(fdb_get_n_learned)
+ [ "$learned" -ne 0 ]
+ check_fail $? "Removed FDB type ${type}: Expected the count 0, but got ${learned}"
+
+ if [ "$overrides_learned" -eq 1 ]; then
+ fdb_reset
+ fdb_add learned "$(mac 0)"
+ fdb_add "$type" "$(mac 0)"
+ learned=$(fdb_get_n_learned)
+ [ "$learned" -ne "$is_counted" ]
+ check_fail $? "Set a learned entry to FDB type ${type}: Expected the count ${is_counted}, but got ${learned}"
+ fdb_del "$type" "$(mac 0)"
+ fi
+
+ log_test "FDB accounting interacting with FDB type ${type}"
+}
+
+check_accounting()
+{
+ local type_args learned
+ RET=0
+
+ fdb_reset
+ learned=$(fdb_get_n_learned)
+ [ "$learned" -ne 0 ]
+ check_fail $? "Flushed the FDB table: Expected the count 0, but got ${learned}"
+
+ fdb_fill_learned
+ sleep 1
+
+ learned=$(fdb_get_n_learned)
+ [ "$learned" -ne "$NUM_PKTS" ]
+ check_fail $? "Filled the FDB table: Expected the count ${NUM_PKTS}, but got ${learned}"
+
+ log_test "FDB accounting"
+
+ for type_args in "${FDB_TYPES[@]}"; do
+ # This is intentional use of word splitting.
+ # shellcheck disable=SC2086
+ check_accounting_one_type $type_args
+ done
+}
+
+check_limit_one_type()
+{
+ local type=$1 is_counted=$2
+ local n_mac expected=$((1 - is_counted))
+ RET=0
+
+ fdb_reset
+ fdb_fill_learned
+
+ fdb_add "$type" "$(mac 0)"
+ n_mac=$(fdb_get_n_mac "$(mac 0)")
+ [ "$n_mac" -ne "$expected" ]
+ check_fail $? "Inserted FDB type ${type} at limit: Expected the count ${expected}, but got ${n_mac}"
+
+ log_test "FDB limits interacting with FDB type ${type}"
+}
+
+check_limit()
+{
+ local learned
+ RET=0
+
+ ip link set br0 type bridge fdb_max_learned "$FDB_LIMIT"
+
+ fdb_reset
+ fdb_fill_learned
+
+ learned=$(fdb_get_n_learned)
+ [ "$learned" -ne "$FDB_LIMIT" ]
+ check_fail $? "Filled the limited FDB table: Expected the count ${FDB_LIMIT}, but got ${learned}"
+
+ log_test "FDB limits"
+
+ for type_args in "${FDB_TYPES[@]}"; do
+ # This is intentional use of word splitting.
+ # shellcheck disable=SC2086
+ check_limit_one_type $type_args
+ done
+}
+
+trap cleanup EXIT
+
+setup_prepare
+
+tests_run
+
+exit $EXIT_STATUS
diff --git a/tools/testing/selftests/net/forwarding/bridge_mdb.sh b/tools/testing/selftests/net/forwarding/bridge_mdb.sh
index d0c6c499d5da..e4e3e9405056 100755
--- a/tools/testing/selftests/net/forwarding/bridge_mdb.sh
+++ b/tools/testing/selftests/net/forwarding/bridge_mdb.sh
@@ -145,14 +145,14 @@ cfg_test_host_common()
# Check basic add, replace and delete behavior.
bridge mdb add dev br0 port br0 grp $grp $state vid 10
- bridge mdb show dev br0 vid 10 | grep -q "$grp"
+ bridge mdb get dev br0 grp $grp vid 10 &> /dev/null
check_err $? "Failed to add $name host entry"
bridge mdb replace dev br0 port br0 grp $grp $state vid 10 &> /dev/null
check_fail $? "Managed to replace $name host entry"
bridge mdb del dev br0 port br0 grp $grp $state vid 10
- bridge mdb show dev br0 vid 10 | grep -q "$grp"
+ bridge mdb get dev br0 grp $grp vid 10 &> /dev/null
check_fail $? "Failed to delete $name host entry"
# Check error cases.
@@ -200,7 +200,7 @@ cfg_test_port_common()
# Check basic add, replace and delete behavior.
bridge mdb add dev br0 port $swp1 $grp_key permanent vid 10
- bridge mdb show dev br0 vid 10 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 10 &> /dev/null
check_err $? "Failed to add $name entry"
bridge mdb replace dev br0 port $swp1 $grp_key permanent vid 10 \
@@ -208,31 +208,31 @@ cfg_test_port_common()
check_err $? "Failed to replace $name entry"
bridge mdb del dev br0 port $swp1 $grp_key permanent vid 10
- bridge mdb show dev br0 vid 10 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 10 &> /dev/null
check_fail $? "Failed to delete $name entry"
# Check default protocol and replacement.
bridge mdb add dev br0 port $swp1 $grp_key permanent vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | grep -q "static"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "static"
check_err $? "$name entry not added with default \"static\" protocol"
bridge mdb replace dev br0 port $swp1 $grp_key permanent vid 10 \
proto 123
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | grep -q "123"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "123"
check_err $? "Failed to replace protocol of $name entry"
bridge mdb del dev br0 port $swp1 $grp_key permanent vid 10
# Check behavior when VLAN is not specified.
bridge mdb add dev br0 port $swp1 $grp_key permanent
- bridge mdb show dev br0 vid 10 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 10 &> /dev/null
check_err $? "$name entry with VLAN 10 not added when VLAN was not specified"
- bridge mdb show dev br0 vid 20 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 20 &> /dev/null
check_err $? "$name entry with VLAN 20 not added when VLAN was not specified"
bridge mdb del dev br0 port $swp1 $grp_key permanent
- bridge mdb show dev br0 vid 10 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 10 &> /dev/null
check_fail $? "$name entry with VLAN 10 not deleted when VLAN was not specified"
- bridge mdb show dev br0 vid 20 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 20 &> /dev/null
check_fail $? "$name entry with VLAN 20 not deleted when VLAN was not specified"
# Check behavior when bridge port is down.
@@ -298,21 +298,21 @@ __cfg_test_port_ip_star_g()
RET=0
bridge mdb add dev br0 port $swp1 grp $grp vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "exclude"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "exclude"
check_err $? "Default filter mode is not \"exclude\""
bridge mdb del dev br0 port $swp1 grp $grp vid 10
# Check basic add and delete behavior.
bridge mdb add dev br0 port $swp1 grp $grp vid 10 filter_mode exclude \
source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q -v "src"
+ bridge -d mdb get dev br0 grp $grp vid 10 &> /dev/null
check_err $? "(*, G) entry not created"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src1"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 &> /dev/null
check_err $? "(S, G) entry not created"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q -v "src"
+ bridge -d mdb get dev br0 grp $grp vid 10 &> /dev/null
check_fail $? "(*, G) entry not deleted"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src1"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 &> /dev/null
check_fail $? "(S, G) entry not deleted"
## State (permanent / temp) tests.
@@ -321,18 +321,15 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp permanent vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "permanent"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "permanent"
check_err $? "(*, G) entry not added as \"permanent\" when should"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | \
grep -q "permanent"
check_err $? "(S, G) entry not added as \"permanent\" when should"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q " 0.00"
check_err $? "(*, G) \"permanent\" entry has a pending group timer"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "\/0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "\/0.00"
check_err $? "\"permanent\" source entry has a pending source timer"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -342,18 +339,14 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "temp"
check_err $? "(*, G) EXCLUDE entry not added as \"temp\" when should"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "temp"
check_err $? "(S, G) \"blocked\" entry not added as \"temp\" when should"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q " 0.00"
check_fail $? "(*, G) EXCLUDE entry does not have a pending group timer"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "\/0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "\/0.00"
check_err $? "\"blocked\" source entry has a pending source timer"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -363,18 +356,14 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode include source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "temp"
check_err $? "(*, G) INCLUDE entry not added as \"temp\" when should"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "temp"
check_err $? "(S, G) entry not added as \"temp\" when should"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q " 0.00"
check_err $? "(*, G) INCLUDE entry has a pending group timer"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "\/0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "\/0.00"
check_fail $? "Source entry does not have a pending source timer"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -383,8 +372,7 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode include source_list $src1
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 grp $grp src $src1 vid 10 | grep -q " 0.00"
check_err $? "(S, G) entry has a pending group timer"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -396,11 +384,9 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp vid 10 \
filter_mode include source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "include"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "include"
check_err $? "(*, G) INCLUDE not added with \"include\" filter mode"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "blocked"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "blocked"
check_fail $? "(S, G) entry marked as \"blocked\" when should not"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -410,11 +396,9 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "exclude"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "exclude"
check_err $? "(*, G) EXCLUDE not added with \"exclude\" filter mode"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "blocked"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "blocked"
check_err $? "(S, G) entry not marked as \"blocked\" when should"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -426,11 +410,9 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp vid 10 \
filter_mode exclude source_list $src1 proto zebra
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "zebra"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "zebra"
check_err $? "(*, G) entry not added with \"zebra\" protocol"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "zebra"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "zebra"
check_err $? "(S, G) entry not marked added with \"zebra\" protocol"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -443,20 +425,16 @@ __cfg_test_port_ip_star_g()
bridge mdb replace dev br0 port $swp1 grp $grp permanent vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "permanent"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "permanent"
check_err $? "(*, G) entry not marked as \"permanent\" after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "permanent"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "permanent"
check_err $? "(S, G) entry not marked as \"permanent\" after replace"
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "temp"
check_err $? "(*, G) entry not marked as \"temp\" after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "temp"
check_err $? "(S, G) entry not marked as \"temp\" after replace"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -467,20 +445,16 @@ __cfg_test_port_ip_star_g()
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode include source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "include"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "include"
check_err $? "(*, G) not marked with \"include\" filter mode after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "blocked"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "blocked"
check_fail $? "(S, G) marked as \"blocked\" after replace"
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "exclude"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "exclude"
check_err $? "(*, G) not marked with \"exclude\" filter mode after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "blocked"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "blocked"
check_err $? "(S, G) not marked as \"blocked\" after replace"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -491,20 +465,20 @@ __cfg_test_port_ip_star_g()
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1,$src2,$src3
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src1"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 &> /dev/null
check_err $? "(S, G) entry for source $src1 not created after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src2"
+ bridge -d mdb get dev br0 grp $grp src $src2 vid 10 &> /dev/null
check_err $? "(S, G) entry for source $src2 not created after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src3"
+ bridge -d mdb get dev br0 grp $grp src $src3 vid 10 &> /dev/null
check_err $? "(S, G) entry for source $src3 not created after replace"
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1,$src3
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src1"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 &> /dev/null
check_err $? "(S, G) entry for source $src1 not created after second replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src2"
+ bridge -d mdb get dev br0 grp $grp src $src2 vid 10 &> /dev/null
check_fail $? "(S, G) entry for source $src2 created after second replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src3"
+ bridge -d mdb get dev br0 grp $grp src $src3 vid 10 &> /dev/null
check_err $? "(S, G) entry for source $src3 not created after second replace"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -515,11 +489,9 @@ __cfg_test_port_ip_star_g()
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1 proto bgp
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "bgp"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "bgp"
check_err $? "(*, G) protocol not changed to \"bgp\" after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "bgp"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "bgp"
check_err $? "(S, G) protocol not changed to \"bgp\" after replace"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -532,8 +504,8 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp2 grp $grp vid 10 \
filter_mode include source_list $src1
bridge mdb add dev br0 port $swp1 grp $grp vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$swp1" | grep "$grp" | \
- grep "$src1" | grep -q "added_by_star_ex"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep "$swp1" | \
+ grep -q "added_by_star_ex"
check_err $? "\"added_by_star_ex\" entry not created after adding (*, G) entry"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
bridge mdb del dev br0 port $swp2 grp $grp src $src1 vid 10
@@ -606,27 +578,23 @@ __cfg_test_port_ip_sg()
RET=0
bridge mdb add dev br0 port $swp1 $grp_key vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | grep -q "include"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "include"
check_err $? "Default filter mode is not \"include\""
bridge mdb del dev br0 port $swp1 $grp_key vid 10
# Check that entries can be added as both permanent and temp and that
# group timer is set correctly.
bridge mdb add dev br0 port $swp1 $grp_key permanent vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q "permanent"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "permanent"
check_err $? "Entry not added as \"permanent\" when should"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 $grp_key vid 10 | grep -q " 0.00"
check_err $? "\"permanent\" entry has a pending group timer"
bridge mdb del dev br0 port $swp1 $grp_key vid 10
bridge mdb add dev br0 port $swp1 $grp_key temp vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "temp"
check_err $? "Entry not added as \"temp\" when should"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 $grp_key vid 10 | grep -q " 0.00"
check_fail $? "\"temp\" entry has an unpending group timer"
bridge mdb del dev br0 port $swp1 $grp_key vid 10
@@ -650,24 +618,19 @@ __cfg_test_port_ip_sg()
# Check that we can replace available attributes.
bridge mdb add dev br0 port $swp1 $grp_key vid 10 proto 123
bridge mdb replace dev br0 port $swp1 $grp_key vid 10 proto 111
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q "111"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "111"
check_err $? "Failed to replace protocol"
bridge mdb replace dev br0 port $swp1 $grp_key vid 10 permanent
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q "permanent"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "permanent"
check_err $? "Entry not marked as \"permanent\" after replace"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 $grp_key vid 10 | grep -q " 0.00"
check_err $? "Entry has a pending group timer after replace"
bridge mdb replace dev br0 port $swp1 $grp_key vid 10 temp
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "temp"
check_err $? "Entry not marked as \"temp\" after replace"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 $grp_key vid 10 | grep -q " 0.00"
check_fail $? "Entry has an unpending group timer after replace"
bridge mdb del dev br0 port $swp1 $grp_key vid 10
@@ -675,7 +638,7 @@ __cfg_test_port_ip_sg()
# (*, G) ports need to be added to it.
bridge mdb add dev br0 port $swp2 grp $grp vid 10
bridge mdb add dev br0 port $swp1 $grp_key vid 10
- bridge mdb show dev br0 vid 10 | grep "$grp_key" | grep $swp2 | \
+ bridge mdb get dev br0 $grp_key vid 10 | grep $swp2 | \
grep -q "added_by_star_ex"
check_err $? "\"added_by_star_ex\" entry not created after adding (S, G) entry"
bridge mdb del dev br0 port $swp1 $grp_key vid 10
@@ -1132,7 +1095,7 @@ ctrl_igmpv3_is_in_test()
$MZ $h1.10 -c 1 -a own -b 01:00:5e:01:01:01 -A 192.0.2.1 -B 239.1.1.1 \
-t ip proto=2,p=$(igmpv3_is_in_get 239.1.1.1 192.0.2.2) -q
- bridge -d mdb show dev br0 vid 10 | grep 239.1.1.1 | grep -q 192.0.2.2
+ bridge mdb get dev br0 grp 239.1.1.1 src 192.0.2.2 vid 10 &> /dev/null
check_fail $? "Permanent entry affected by IGMP packet"
# Replace the permanent entry with a temporary one and check that after
@@ -1145,12 +1108,10 @@ ctrl_igmpv3_is_in_test()
$MZ $h1.10 -a own -b 01:00:5e:01:01:01 -c 1 -A 192.0.2.1 -B 239.1.1.1 \
-t ip proto=2,p=$(igmpv3_is_in_get 239.1.1.1 192.0.2.2) -q
- bridge -d mdb show dev br0 vid 10 | grep 239.1.1.1 | grep -v "src" | \
- grep -q 192.0.2.2
+ bridge -d mdb get dev br0 grp 239.1.1.1 vid 10 | grep -q 192.0.2.2
check_err $? "Source not add to source list"
- bridge -d mdb show dev br0 vid 10 | grep 239.1.1.1 | \
- grep -q "src 192.0.2.2"
+ bridge mdb get dev br0 grp 239.1.1.1 src 192.0.2.2 vid 10 &> /dev/null
check_err $? "(S, G) entry not created for new source"
bridge mdb del dev br0 port $swp1 grp 239.1.1.1 vid 10
@@ -1172,8 +1133,7 @@ ctrl_mldv2_is_in_test()
$MZ -6 $h1.10 -a own -b 33:33:00:00:00:01 -c 1 -A fe80::1 -B ff0e::1 \
-t ip hop=1,next=0,p="$p" -q
- bridge -d mdb show dev br0 vid 10 | grep ff0e::1 | \
- grep -q 2001:db8:1::2
+ bridge mdb get dev br0 grp ff0e::1 src 2001:db8:1::2 vid 10 &> /dev/null
check_fail $? "Permanent entry affected by MLD packet"
# Replace the permanent entry with a temporary one and check that after
@@ -1186,12 +1146,10 @@ ctrl_mldv2_is_in_test()
$MZ -6 $h1.10 -a own -b 33:33:00:00:00:01 -c 1 -A fe80::1 -B ff0e::1 \
-t ip hop=1,next=0,p="$p" -q
- bridge -d mdb show dev br0 vid 10 | grep ff0e::1 | grep -v "src" | \
- grep -q 2001:db8:1::2
+ bridge -d mdb get dev br0 grp ff0e::1 vid 10 | grep -q 2001:db8:1::2
check_err $? "Source not add to source list"
- bridge -d mdb show dev br0 vid 10 | grep ff0e::1 | \
- grep -q "src 2001:db8:1::2"
+ bridge mdb get dev br0 grp ff0e::1 src 2001:db8:1::2 vid 10 &> /dev/null
check_err $? "(S, G) entry not created for new source"
bridge mdb del dev br0 port $swp1 grp ff0e::1 vid 10
@@ -1208,8 +1166,8 @@ ctrl_test()
ctrl_mldv2_is_in_test
}
-if ! bridge mdb help 2>&1 | grep -q "replace"; then
- echo "SKIP: iproute2 too old, missing bridge mdb replace support"
+if ! bridge mdb help 2>&1 | grep -q "get"; then
+ echo "SKIP: iproute2 too old, missing bridge mdb get support"
exit $ksft_skip
fi
diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
index dc895b7b94e1..75a2438efdf3 100755
--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
+++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
@@ -1770,7 +1770,10 @@ chk_rm_nr()
# in case of simult flush, the subflow removal count on each side is
# unreliable
count=$((count + cnt))
- [ "$count" != "$rm_subflow_nr" ] && suffix="$count in [$rm_subflow_nr:$((rm_subflow_nr*2))]"
+ if [ "$count" != "$rm_subflow_nr" ]; then
+ suffix="$count in [$rm_subflow_nr:$((rm_subflow_nr*2))]"
+ extra_msg="$extra_msg simult"
+ fi
if [ $count -ge "$rm_subflow_nr" ] && \
[ "$count" -le "$((rm_subflow_nr *2 ))" ]; then
print_ok "$suffix"
@@ -3289,6 +3292,7 @@ userspace_pm_rm_sf_addr_ns1()
local addr=$1
local id=$2
local tk sp da dp
+ local cnt_addr cnt_sf
tk=$(grep "type:1," "$evts_ns1" |
sed -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q')
@@ -3298,11 +3302,13 @@ userspace_pm_rm_sf_addr_ns1()
sed -n 's/.*\(daddr6:\)\([0-9a-f:.]*\).*$/\2/p;q')
dp=$(grep "type:10" "$evts_ns1" |
sed -n 's/.*\(dport:\)\([[:digit:]]*\).*$/\2/p;q')
+ cnt_addr=$(rm_addr_count ${ns1})
+ cnt_sf=$(rm_sf_count ${ns1})
ip netns exec $ns1 ./pm_nl_ctl rem token $tk id $id
ip netns exec $ns1 ./pm_nl_ctl dsf lip "::ffff:$addr" \
lport $sp rip $da rport $dp token $tk
- wait_rm_addr $ns1 1
- wait_rm_sf $ns1 1
+ wait_rm_addr $ns1 "${cnt_addr}"
+ wait_rm_sf $ns1 "${cnt_sf}"
}
userspace_pm_add_sf()
@@ -3324,17 +3330,20 @@ userspace_pm_rm_sf_addr_ns2()
local addr=$1
local id=$2
local tk da dp sp
+ local cnt_addr cnt_sf
tk=$(sed -n 's/.*\(token:\)\([[:digit:]]*\).*$/\2/p;q' "$evts_ns2")
da=$(sed -n 's/.*\(daddr4:\)\([0-9.]*\).*$/\2/p;q' "$evts_ns2")
dp=$(sed -n 's/.*\(dport:\)\([[:digit:]]*\).*$/\2/p;q' "$evts_ns2")
sp=$(grep "type:10" "$evts_ns2" |
sed -n 's/.*\(sport:\)\([[:digit:]]*\).*$/\2/p;q')
+ cnt_addr=$(rm_addr_count ${ns2})
+ cnt_sf=$(rm_sf_count ${ns2})
ip netns exec $ns2 ./pm_nl_ctl rem token $tk id $id
ip netns exec $ns2 ./pm_nl_ctl dsf lip $addr lport $sp \
rip $da rport $dp token $tk
- wait_rm_addr $ns2 1
- wait_rm_sf $ns2 1
+ wait_rm_addr $ns2 "${cnt_addr}"
+ wait_rm_sf $ns2 "${cnt_sf}"
}
userspace_tests()
@@ -3417,7 +3426,7 @@ userspace_tests()
continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
set_userspace_pm $ns1
pm_nl_set_limits $ns2 1 1
- speed=10 \
+ speed=5 \
run_tests $ns1 $ns2 10.0.1.1 &
local tests_pid=$!
wait_mpj $ns1
@@ -3438,7 +3447,7 @@ userspace_tests()
continue_if mptcp_lib_has_file '/proc/sys/net/mptcp/pm_type'; then
set_userspace_pm $ns2
pm_nl_set_limits $ns1 0 1
- speed=10 \
+ speed=5 \
run_tests $ns1 $ns2 10.0.1.1 &
local tests_pid=$!
wait_mpj $ns2
diff --git a/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh b/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
index 8c8694f21e7d..a817af6616ec 100755
--- a/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
+++ b/tools/testing/selftests/net/mptcp/mptcp_sockopt.sh
@@ -11,7 +11,6 @@ cout=""
ksft_skip=4
timeout_poll=30
timeout_test=$((timeout_poll * 2 + 1))
-mptcp_connect=""
iptables="iptables"
ip6tables="ip6tables"
diff --git a/tools/testing/selftests/net/nettest.c b/tools/testing/selftests/net/nettest.c
index 39a0e01f8554..cd8a58097448 100644
--- a/tools/testing/selftests/net/nettest.c
+++ b/tools/testing/selftests/net/nettest.c
@@ -1864,8 +1864,9 @@ static char *random_msg(int len)
n += i;
len -= i;
}
- i = snprintf(m + n, olen - n, "%.*s", len,
- "abcdefghijklmnopqrstuvwxyz");
+
+ snprintf(m + n, olen - n, "%.*s", len,
+ "abcdefghijklmnopqrstuvwxyz");
return m;
}
diff --git a/tools/testing/selftests/net/route_localnet.sh b/tools/testing/selftests/net/route_localnet.sh
index 116bfeab72fa..e08701c750e3 100755
--- a/tools/testing/selftests/net/route_localnet.sh
+++ b/tools/testing/selftests/net/route_localnet.sh
@@ -18,8 +18,10 @@ setup() {
ip route del 127.0.0.0/8 dev lo table local
ip netns exec "${PEER_NS}" ip route del 127.0.0.0/8 dev lo table local
- ifconfig veth0 127.25.3.4/24 up
- ip netns exec "${PEER_NS}" ifconfig veth1 127.25.3.14/24 up
+ ip address add 127.25.3.4/24 dev veth0
+ ip link set dev veth0 up
+ ip netns exec "${PEER_NS}" ip address add 127.25.3.14/24 dev veth1
+ ip netns exec "${PEER_NS}" ip link set dev veth1 up
ip route flush cache
ip netns exec "${PEER_NS}" ip route flush cache
diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
index 488f4964365e..5f2b3f6c0d74 100755
--- a/tools/testing/selftests/net/rtnetlink.sh
+++ b/tools/testing/selftests/net/rtnetlink.sh
@@ -31,6 +31,9 @@ ALL_TESTS="
"
devdummy="test-dummy0"
+VERBOSE=0
+PAUSE=no
+PAUSE_ON_FAIL=no
# Kselftest framework requirement - SKIP code is 4.
ksft_skip=4
@@ -51,35 +54,102 @@ check_fail()
fi
}
+run_cmd_common()
+{
+ local cmd="$*"
+ local out
+ if [ "$VERBOSE" = "1" ]; then
+ echo "COMMAND: ${cmd}"
+ fi
+ out=$($cmd 2>&1)
+ rc=$?
+ if [ "$VERBOSE" = "1" -a -n "$out" ]; then
+ echo " $out"
+ fi
+ return $rc
+}
+
+run_cmd() {
+ run_cmd_common "$@"
+ rc=$?
+ check_err $rc
+ return $rc
+}
+run_cmd_fail()
+{
+ run_cmd_common "$@"
+ rc=$?
+ check_fail $rc
+ return $rc
+}
+
+run_cmd_grep_common()
+{
+ local find="$1"; shift
+ local cmd="$*"
+ local out
+ if [ "$VERBOSE" = "1" ]; then
+ echo "COMMAND: ${cmd} 2>&1 | grep -q '${find}'"
+ fi
+ out=$($cmd 2>&1 | grep -q "${find}" 2>&1)
+ return $?
+}
+
+run_cmd_grep() {
+ run_cmd_grep_common "$@"
+ rc=$?
+ check_err $rc
+ return $rc
+}
+
+run_cmd_grep_fail()
+{
+ run_cmd_grep_common "$@"
+ rc=$?
+ check_fail $rc
+ return $rc
+}
+
+end_test()
+{
+ echo "$*"
+ [ "${VERBOSE}" = "1" ] && echo
+
+ if [[ $ret -ne 0 ]] && [[ "${PAUSE_ON_FAIL}" = "yes" ]]; then
+ echo "Hit enter to continue"
+ read a
+ fi;
+
+ if [ "${PAUSE}" = "yes" ]; then
+ echo "Hit enter to continue"
+ read a
+ fi
+
+}
+
+
kci_add_dummy()
{
- ip link add name "$devdummy" type dummy
- check_err $?
- ip link set "$devdummy" up
- check_err $?
+ run_cmd ip link add name "$devdummy" type dummy
+ run_cmd ip link set "$devdummy" up
}
kci_del_dummy()
{
- ip link del dev "$devdummy"
- check_err $?
+ run_cmd ip link del dev "$devdummy"
}
kci_test_netconf()
{
dev="$1"
r=$ret
-
- ip netconf show dev "$dev" > /dev/null
- check_err $?
-
+ run_cmd ip netconf show dev "$dev"
for f in 4 6; do
- ip -$f netconf show dev "$dev" > /dev/null
- check_err $?
+ run_cmd ip -$f netconf show dev "$dev"
done
if [ $ret -ne 0 ] ;then
- echo "FAIL: ip netconf show $dev"
+ end_test "FAIL: ip netconf show $dev"
test $r -eq 0 && ret=0
return 1
fi
@@ -92,43 +162,27 @@ kci_test_bridge()
vlandev="testbr-vlan1"
local ret=0
- ip link add name "$devbr" type bridge
- check_err $?
-
- ip link set dev "$devdummy" master "$devbr"
- check_err $?
-
- ip link set "$devbr" up
- check_err $?
-
- ip link add link "$devbr" name "$vlandev" type vlan id 1
- check_err $?
- ip addr add dev "$vlandev" 10.200.7.23/30
- check_err $?
- ip -6 addr add dev "$vlandev" dead:42::1234/64
- check_err $?
- ip -d link > /dev/null
- check_err $?
- ip r s t all > /dev/null
- check_err $?
+ run_cmd ip link add name "$devbr" type bridge
+ run_cmd ip link set dev "$devdummy" master "$devbr"
+ run_cmd ip link set "$devbr" up
+ run_cmd ip link add link "$devbr" name "$vlandev" type vlan id 1
+ run_cmd ip addr add dev "$vlandev" 10.200.7.23/30
+ run_cmd ip -6 addr add dev "$vlandev" dead:42::1234/64
+ run_cmd ip -d link
+ run_cmd ip r s t all
for name in "$devbr" "$vlandev" "$devdummy" ; do
kci_test_netconf "$name"
done
-
- ip -6 addr del dev "$vlandev" dead:42::1234/64
- check_err $?
-
- ip link del dev "$vlandev"
- check_err $?
- ip link del dev "$devbr"
- check_err $?
+ run_cmd ip -6 addr del dev "$vlandev" dead:42::1234/64
+ run_cmd ip link del dev "$vlandev"
+ run_cmd ip link del dev "$devbr"
if [ $ret -ne 0 ];then
- echo "FAIL: bridge setup"
+ end_test "FAIL: bridge setup"
return 1
fi
- echo "PASS: bridge setup"
+ end_test "PASS: bridge setup"
}
@@ -139,34 +193,23 @@ kci_test_gre()
loc=10.0.0.1
local ret=0
- ip tunnel add $gredev mode gre remote $rem local $loc ttl 1
- check_err $?
- ip link set $gredev up
- check_err $?
- ip addr add 10.23.7.10 dev $gredev
- check_err $?
- ip route add 10.23.8.0/30 dev $gredev
- check_err $?
- ip addr add dev "$devdummy" 10.23.7.11/24
- check_err $?
- ip link > /dev/null
- check_err $?
- ip addr > /dev/null
- check_err $?
+ run_cmd ip tunnel add $gredev mode gre remote $rem local $loc ttl 1
+ run_cmd ip link set $gredev up
+ run_cmd ip addr add 10.23.7.10 dev $gredev
+ run_cmd ip route add 10.23.8.0/30 dev $gredev
+ run_cmd ip addr add dev "$devdummy" 10.23.7.11/24
+ run_cmd ip link
+ run_cmd ip addr
kci_test_netconf "$gredev"
-
- ip addr del dev "$devdummy" 10.23.7.11/24
- check_err $?
-
- ip link del $gredev
- check_err $?
+ run_cmd ip addr del dev "$devdummy" 10.23.7.11/24
+ run_cmd ip link del $gredev
if [ $ret -ne 0 ];then
- echo "FAIL: gre tunnel endpoint"
+ end_test "FAIL: gre tunnel endpoint"
return 1
fi
- echo "PASS: gre tunnel endpoint"
+ end_test "PASS: gre tunnel endpoint"
}
# tc uses rtnetlink too, for full tc testing
@@ -176,56 +219,40 @@ kci_test_tc()
dev=lo
local ret=0
- tc qdisc add dev "$dev" root handle 1: htb
- check_err $?
- tc class add dev "$dev" parent 1: classid 1:10 htb rate 1mbit
- check_err $?
- tc filter add dev "$dev" parent 1:0 prio 5 handle ffe: protocol ip u32 divisor 256
- check_err $?
- tc filter add dev "$dev" parent 1:0 prio 5 handle ffd: protocol ip u32 divisor 256
- check_err $?
- tc filter add dev "$dev" parent 1:0 prio 5 handle ffc: protocol ip u32 divisor 256
- check_err $?
- tc filter add dev "$dev" protocol ip parent 1: prio 5 handle ffe:2:3 u32 ht ffe:2: match ip src 10.0.0.3 flowid 1:10
- check_err $?
- tc filter add dev "$dev" protocol ip parent 1: prio 5 handle ffe:2:2 u32 ht ffe:2: match ip src 10.0.0.2 flowid 1:10
- check_err $?
- tc filter show dev "$dev" parent 1:0 > /dev/null
- check_err $?
- tc filter del dev "$dev" protocol ip parent 1: prio 5 handle ffe:2:3 u32
- check_err $?
- tc filter show dev "$dev" parent 1:0 > /dev/null
- check_err $?
- tc qdisc del dev "$dev" root handle 1: htb
- check_err $?
+ run_cmd tc qdisc add dev "$dev" root handle 1: htb
+ run_cmd tc class add dev "$dev" parent 1: classid 1:10 htb rate 1mbit
+ run_cmd tc filter add dev "$dev" parent 1:0 prio 5 handle ffe: protocol ip u32 divisor 256
+ run_cmd tc filter add dev "$dev" parent 1:0 prio 5 handle ffd: protocol ip u32 divisor 256
+ run_cmd tc filter add dev "$dev" parent 1:0 prio 5 handle ffc: protocol ip u32 divisor 256
+ run_cmd tc filter add dev "$dev" protocol ip parent 1: prio 5 handle ffe:2:3 u32 ht ffe:2: match ip src 10.0.0.3 flowid 1:10
+ run_cmd tc filter add dev "$dev" protocol ip parent 1: prio 5 handle ffe:2:2 u32 ht ffe:2: match ip src 10.0.0.2 flowid 1:10
+ run_cmd tc filter show dev "$dev" parent 1:0
+ run_cmd tc filter del dev "$dev" protocol ip parent 1: prio 5 handle ffe:2:3 u32
+ run_cmd tc filter show dev "$dev" parent 1:0
+ run_cmd tc qdisc del dev "$dev" root handle 1: htb
if [ $ret -ne 0 ];then
- echo "FAIL: tc htb hierarchy"
+ end_test "FAIL: tc htb hierarchy"
return 1
fi
- echo "PASS: tc htb hierarchy"
+ end_test "PASS: tc htb hierarchy"
}
kci_test_polrouting()
{
local ret=0
- ip rule add fwmark 1 lookup 100
- check_err $?
- ip route add local 0.0.0.0/0 dev lo table 100
- check_err $?
- ip r s t all > /dev/null
- check_err $?
- ip rule del fwmark 1 lookup 100
- check_err $?
- ip route del local 0.0.0.0/0 dev lo table 100
- check_err $?
+ run_cmd ip rule add fwmark 1 lookup 100
+ run_cmd ip route add local 0.0.0.0/0 dev lo table 100
+ run_cmd ip r s t all
+ run_cmd ip rule del fwmark 1 lookup 100
+ run_cmd ip route del local 0.0.0.0/0 dev lo table 100
if [ $ret -ne 0 ];then
- echo "FAIL: policy route test"
+ end_test "FAIL: policy route test"
return 1
fi
- echo "PASS: policy routing"
+ end_test "PASS: policy routing"
}
kci_test_route_get()
@@ -233,65 +260,51 @@ kci_test_route_get()
local hash_policy=$(sysctl -n net.ipv4.fib_multipath_hash_policy)
local ret=0
-
- ip route get 127.0.0.1 > /dev/null
- check_err $?
- ip route get 127.0.0.1 dev "$devdummy" > /dev/null
- check_err $?
- ip route get ::1 > /dev/null
- check_err $?
- ip route get fe80::1 dev "$devdummy" > /dev/null
- check_err $?
- ip route get 127.0.0.1 from 127.0.0.1 oif lo tos 0x10 mark 0x1 > /dev/null
- check_err $?
- ip route get ::1 from ::1 iif lo oif lo tos 0x10 mark 0x1 > /dev/null
- check_err $?
- ip addr add dev "$devdummy" 10.23.7.11/24
- check_err $?
- ip route get 10.23.7.11 from 10.23.7.12 iif "$devdummy" > /dev/null
- check_err $?
- ip route add 10.23.8.0/24 \
+ run_cmd ip route get 127.0.0.1
+ run_cmd ip route get 127.0.0.1 dev "$devdummy"
+ run_cmd ip route get ::1
+ run_cmd ip route get fe80::1 dev "$devdummy"
+ run_cmd ip route get 127.0.0.1 from 127.0.0.1 oif lo tos 0x10 mark 0x1
+ run_cmd ip route get ::1 from ::1 iif lo oif lo tos 0x10 mark 0x1
+ run_cmd ip addr add dev "$devdummy" 10.23.7.11/24
+ run_cmd ip route get 10.23.7.11 from 10.23.7.12 iif "$devdummy"
+ run_cmd ip route add 10.23.8.0/24 \
nexthop via 10.23.7.13 dev "$devdummy" \
nexthop via 10.23.7.14 dev "$devdummy"
- check_err $?
+
sysctl -wq net.ipv4.fib_multipath_hash_policy=0
- ip route get 10.23.8.11 > /dev/null
- check_err $?
+ run_cmd ip route get 10.23.8.11
sysctl -wq net.ipv4.fib_multipath_hash_policy=1
- ip route get 10.23.8.11 > /dev/null
- check_err $?
+ run_cmd ip route get 10.23.8.11
sysctl -wq net.ipv4.fib_multipath_hash_policy="$hash_policy"
- ip route del 10.23.8.0/24
- check_err $?
- ip addr del dev "$devdummy" 10.23.7.11/24
- check_err $?
+ run_cmd ip route del 10.23.8.0/24
+ run_cmd ip addr del dev "$devdummy" 10.23.7.11/24
+
if [ $ret -ne 0 ];then
- echo "FAIL: route get"
+ end_test "FAIL: route get"
return 1
fi
- echo "PASS: route get"
+ end_test "PASS: route get"
}
kci_test_addrlft()
{
for i in $(seq 10 100) ;do
lft=$(((RANDOM%3) + 1))
- ip addr add 10.23.11.$i/32 dev "$devdummy" preferred_lft $lft valid_lft $((lft+1))
- check_err $?
+ run_cmd ip addr add 10.23.11.$i/32 dev "$devdummy" preferred_lft $lft valid_lft $((lft+1))
done
sleep 5
-
- ip addr show dev "$devdummy" | grep "10.23.11."
+ run_cmd_grep "10.23.11." ip addr show dev "$devdummy"
if [ $? -eq 0 ]; then
- echo "FAIL: preferred_lft addresses remaining"
check_err 1
+ end_test "FAIL: preferred_lft addresses remaining"
return
fi
- echo "PASS: preferred_lft addresses have expired"
+ end_test "PASS: preferred_lft addresses have expired"
}
kci_test_promote_secondaries()
@@ -310,27 +323,17 @@ kci_test_promote_secondaries()
[ $promote -eq 0 ] && sysctl -q net.ipv4.conf.$devdummy.promote_secondaries=0
- echo "PASS: promote_secondaries complete"
+ end_test "PASS: promote_secondaries complete"
}
kci_test_addrlabel()
{
local ret=0
-
- ip addrlabel add prefix dead::/64 dev lo label 1
- check_err $?
-
- ip addrlabel list |grep -q "prefix dead::/64 dev lo label 1"
- check_err $?
-
- ip addrlabel del prefix dead::/64 dev lo label 1 2> /dev/null
- check_err $?
-
- ip addrlabel add prefix dead::/64 label 1 2> /dev/null
- check_err $?
-
- ip addrlabel del prefix dead::/64 label 1 2> /dev/null
- check_err $?
+ run_cmd ip addrlabel add prefix dead::/64 dev lo label 1
+ run_cmd_grep "prefix dead::/64 dev lo label 1" ip addrlabel list
+ run_cmd ip addrlabel del prefix dead::/64 dev lo label 1
+ run_cmd ip addrlabel add prefix dead::/64 label 1
+ run_cmd ip addrlabel del prefix dead::/64 label 1
# concurrent add/delete
for i in $(seq 1 1000); do
@@ -346,11 +349,11 @@ kci_test_addrlabel()
ip addrlabel del prefix 1c3::/64 label 12345 2>/dev/null
if [ $ret -ne 0 ];then
- echo "FAIL: ipv6 addrlabel"
+ end_test "FAIL: ipv6 addrlabel"
return 1
fi
- echo "PASS: ipv6 addrlabel"
+ end_test "PASS: ipv6 addrlabel"
}
kci_test_ifalias()
@@ -358,35 +361,28 @@ kci_test_ifalias()
local ret=0
namewant=$(uuidgen)
syspathname="/sys/class/net/$devdummy/ifalias"
-
- ip link set dev "$devdummy" alias "$namewant"
- check_err $?
+ run_cmd ip link set dev "$devdummy" alias "$namewant"
if [ $ret -ne 0 ]; then
- echo "FAIL: cannot set interface alias of $devdummy to $namewant"
+ end_test "FAIL: cannot set interface alias of $devdummy to $namewant"
return 1
fi
-
- ip link show "$devdummy" | grep -q "alias $namewant"
- check_err $?
+ run_cmd_grep "alias $namewant" ip link show "$devdummy"
if [ -r "$syspathname" ] ; then
read namehave < "$syspathname"
if [ "$namewant" != "$namehave" ]; then
- echo "FAIL: did set ifalias $namewant but got $namehave"
+ end_test "FAIL: did set ifalias $namewant but got $namehave"
return 1
fi
namewant=$(uuidgen)
echo "$namewant" > "$syspathname"
- ip link show "$devdummy" | grep -q "alias $namewant"
- check_err $?
+ run_cmd_grep "alias $namewant" ip link show "$devdummy"
# sysfs interface allows to delete alias again
echo "" > "$syspathname"
-
- ip link show "$devdummy" | grep -q "alias $namewant"
- check_fail $?
+ run_cmd_grep_fail "alias $namewant" ip link show "$devdummy"
for i in $(seq 1 100); do
uuidgen > "$syspathname" &
@@ -395,57 +391,48 @@ kci_test_ifalias()
wait
# re-add the alias -- kernel should free mem when dummy dev is removed
- ip link set dev "$devdummy" alias "$namewant"
- check_err $?
+ run_cmd ip link set dev "$devdummy" alias "$namewant"
+
fi
if [ $ret -ne 0 ]; then
- echo "FAIL: set interface alias $devdummy to $namewant"
+ end_test "FAIL: set interface alias $devdummy to $namewant"
return 1
fi
- echo "PASS: set ifalias $namewant for $devdummy"
+ end_test "PASS: set ifalias $namewant for $devdummy"
}
kci_test_vrf()
{
vrfname="test-vrf"
local ret=0
-
- ip link show type vrf 2>/dev/null
+ run_cmd ip link show type vrf
if [ $? -ne 0 ]; then
- echo "SKIP: vrf: iproute2 too old"
+ end_test "SKIP: vrf: iproute2 too old"
return $ksft_skip
fi
-
- ip link add "$vrfname" type vrf table 10
- check_err $?
+ run_cmd ip link add "$vrfname" type vrf table 10
if [ $ret -ne 0 ];then
- echo "FAIL: can't add vrf interface, skipping test"
+ end_test "FAIL: can't add vrf interface, skipping test"
return 0
fi
-
- ip -br link show type vrf | grep -q "$vrfname"
- check_err $?
+ run_cmd_grep "$vrfname" ip -br link show type vrf
if [ $ret -ne 0 ];then
- echo "FAIL: created vrf device not found"
+ end_test "FAIL: created vrf device not found"
return 1
fi
- ip link set dev "$vrfname" up
- check_err $?
-
- ip link set dev "$devdummy" master "$vrfname"
- check_err $?
- ip link del dev "$vrfname"
- check_err $?
+ run_cmd ip link set dev "$vrfname" up
+ run_cmd ip link set dev "$devdummy" master "$vrfname"
+ run_cmd ip link del dev "$vrfname"
if [ $ret -ne 0 ];then
- echo "FAIL: vrf"
+ end_test "FAIL: vrf"
return 1
fi
- echo "PASS: vrf"
+ end_test "PASS: vrf"
}
kci_test_encap_vxlan()
@@ -454,84 +441,44 @@ kci_test_encap_vxlan()
vxlan="test-vxlan0"
vlan="test-vlan0"
testns="$1"
-
- ip -netns "$testns" link add "$vxlan" type vxlan id 42 group 239.1.1.1 \
- dev "$devdummy" dstport 4789 2>/dev/null
+ run_cmd ip -netns "$testns" link add "$vxlan" type vxlan id 42 group 239.1.1.1 \
+ dev "$devdummy" dstport 4789
if [ $? -ne 0 ]; then
- echo "FAIL: can't add vxlan interface, skipping test"
+ end_test "FAIL: can't add vxlan interface, skipping test"
return 0
fi
- check_err $?
- ip -netns "$testns" addr add 10.2.11.49/24 dev "$vxlan"
- check_err $?
-
- ip -netns "$testns" link set up dev "$vxlan"
- check_err $?
-
- ip -netns "$testns" link add link "$vxlan" name "$vlan" type vlan id 1
- check_err $?
+ run_cmd ip -netns "$testns" addr add 10.2.11.49/24 dev "$vxlan"
+ run_cmd ip -netns "$testns" link set up dev "$vxlan"
+ run_cmd ip -netns "$testns" link add link "$vxlan" name "$vlan" type vlan id 1
# changelink testcases
- ip -netns "$testns" link set dev "$vxlan" type vxlan vni 43 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan group ffe5::5 dev "$devdummy" 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan ttl inherit 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan ttl 64
- check_err $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan nolearning
- check_err $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan proxy 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan norsc 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan l2miss 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan l3miss 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan external 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan udpcsum 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan udp6zerocsumtx 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan udp6zerocsumrx 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan remcsumtx 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan remcsumrx 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan gbp 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link set dev "$vxlan" type vxlan gpe 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" link del "$vxlan"
- check_err $?
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan vni 43
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan group ffe5::5 dev "$devdummy"
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan ttl inherit
+
+ run_cmd ip -netns "$testns" link set dev "$vxlan" type vxlan ttl 64
+ run_cmd ip -netns "$testns" link set dev "$vxlan" type vxlan nolearning
+
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan proxy
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan norsc
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan l2miss
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan l3miss
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan external
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan udpcsum
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan udp6zerocsumtx
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan udp6zerocsumrx
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan remcsumtx
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan remcsumrx
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan gbp
+ run_cmd_fail ip -netns "$testns" link set dev "$vxlan" type vxlan gpe
+ run_cmd ip -netns "$testns" link del "$vxlan"
if [ $ret -ne 0 ]; then
- echo "FAIL: vxlan"
+ end_test "FAIL: vxlan"
return 1
fi
- echo "PASS: vxlan"
+ end_test "PASS: vxlan"
}
kci_test_encap_fou()
@@ -539,39 +486,32 @@ kci_test_encap_fou()
local ret=0
name="test-fou"
testns="$1"
-
- ip fou help 2>&1 |grep -q 'Usage: ip fou'
+ run_cmd_grep 'Usage: ip fou' ip fou help
if [ $? -ne 0 ];then
- echo "SKIP: fou: iproute2 too old"
+ end_test "SKIP: fou: iproute2 too old"
return $ksft_skip
fi
if ! /sbin/modprobe -q -n fou; then
- echo "SKIP: module fou is not found"
+ end_test "SKIP: module fou is not found"
return $ksft_skip
fi
/sbin/modprobe -q fou
- ip -netns "$testns" fou add port 7777 ipproto 47 2>/dev/null
+
+ run_cmd ip -netns "$testns" fou add port 7777 ipproto 47
if [ $? -ne 0 ];then
- echo "FAIL: can't add fou port 7777, skipping test"
+ end_test "FAIL: can't add fou port 7777, skipping test"
return 1
fi
-
- ip -netns "$testns" fou add port 8888 ipproto 4
- check_err $?
-
- ip -netns "$testns" fou del port 9999 2>/dev/null
- check_fail $?
-
- ip -netns "$testns" fou del port 7777
- check_err $?
-
+ run_cmd ip -netns "$testns" fou add port 8888 ipproto 4
+ run_cmd_fail ip -netns "$testns" fou del port 9999
+ run_cmd ip -netns "$testns" fou del port 7777
if [ $ret -ne 0 ]; then
- echo "FAIL: fou"
+ end_test "FAIL: fou"s
return 1
fi
- echo "PASS: fou"
+ end_test "PASS: fou"
}
# test various encap methods, use netns to avoid unwanted interference
@@ -579,25 +519,16 @@ kci_test_encap()
{
testns="testns"
local ret=0
-
- ip netns add "$testns"
+ run_cmd ip netns add "$testns"
if [ $? -ne 0 ]; then
- echo "SKIP encap tests: cannot add net namespace $testns"
+ end_test "SKIP encap tests: cannot add net namespace $testns"
return $ksft_skip
fi
-
- ip -netns "$testns" link set lo up
- check_err $?
-
- ip -netns "$testns" link add name "$devdummy" type dummy
- check_err $?
- ip -netns "$testns" link set "$devdummy" up
- check_err $?
-
- kci_test_encap_vxlan "$testns"
- check_err $?
- kci_test_encap_fou "$testns"
- check_err $?
+ run_cmd ip -netns "$testns" link set lo up
+ run_cmd ip -netns "$testns" link add name "$devdummy" type dummy
+ run_cmd ip -netns "$testns" link set "$devdummy" up
+ run_cmd kci_test_encap_vxlan "$testns"
+ run_cmd kci_test_encap_fou "$testns"
ip netns del "$testns"
return $ret
@@ -607,41 +538,28 @@ kci_test_macsec()
{
msname="test_macsec0"
local ret=0
-
- ip macsec help 2>&1 | grep -q "^Usage: ip macsec"
+ run_cmd_grep "^Usage: ip macsec" ip macsec help
if [ $? -ne 0 ]; then
- echo "SKIP: macsec: iproute2 too old"
+ end_test "SKIP: macsec: iproute2 too old"
return $ksft_skip
fi
-
- ip link add link "$devdummy" "$msname" type macsec port 42 encrypt on
- check_err $?
+ run_cmd ip link add link "$devdummy" "$msname" type macsec port 42 encrypt on
if [ $ret -ne 0 ];then
- echo "FAIL: can't add macsec interface, skipping test"
+ end_test "FAIL: can't add macsec interface, skipping test"
return 1
fi
-
- ip macsec add "$msname" tx sa 0 pn 1024 on key 01 12345678901234567890123456789012
- check_err $?
-
- ip macsec add "$msname" rx port 1234 address "1c:ed:de:ad:be:ef"
- check_err $?
-
- ip macsec add "$msname" rx port 1234 address "1c:ed:de:ad:be:ef" sa 0 pn 1 on key 00 0123456789abcdef0123456789abcdef
- check_err $?
-
- ip macsec show > /dev/null
- check_err $?
-
- ip link del dev "$msname"
- check_err $?
+ run_cmd ip macsec add "$msname" tx sa 0 pn 1024 on key 01 12345678901234567890123456789012
+ run_cmd ip macsec add "$msname" rx port 1234 address "1c:ed:de:ad:be:ef"
+ run_cmd ip macsec add "$msname" rx port 1234 address "1c:ed:de:ad:be:ef" sa 0 pn 1 on key 00 0123456789abcdef0123456789abcdef
+ run_cmd ip macsec show
+ run_cmd ip link del dev "$msname"
if [ $ret -ne 0 ];then
- echo "FAIL: macsec"
+ end_test "FAIL: macsec"
return 1
fi
- echo "PASS: macsec"
+ end_test "PASS: macsec"
}
kci_test_macsec_offload()
@@ -650,19 +568,18 @@ kci_test_macsec_offload()
sysfsnet=/sys/bus/netdevsim/devices/netdevsim0/net/
probed=false
local ret=0
-
- ip macsec help 2>&1 | grep -q "^Usage: ip macsec"
+ run_cmd_grep "^Usage: ip macsec" ip macsec help
if [ $? -ne 0 ]; then
- echo "SKIP: macsec: iproute2 too old"
+ end_test "SKIP: macsec: iproute2 too old"
return $ksft_skip
fi
# setup netdevsim since dummydev doesn't have offload support
if [ ! -w /sys/bus/netdevsim/new_device ] ; then
- modprobe -q netdevsim
- check_err $?
+ run_cmd modprobe -q netdevsim
+
if [ $ret -ne 0 ]; then
- echo "SKIP: macsec_offload can't load netdevsim"
+ end_test "SKIP: macsec_offload can't load netdevsim"
return $ksft_skip
fi
probed=true
@@ -675,43 +592,25 @@ kci_test_macsec_offload()
ip link set $dev up
if [ ! -d $sysfsd ] ; then
- echo "FAIL: macsec_offload can't create device $dev"
+ end_test "FAIL: macsec_offload can't create device $dev"
return 1
fi
-
- ethtool -k $dev | grep -q 'macsec-hw-offload: on'
+ run_cmd_grep 'macsec-hw-offload: on' ethtool -k $dev
if [ $? -eq 1 ] ; then
- echo "FAIL: macsec_offload netdevsim doesn't support MACsec offload"
+ end_test "FAIL: macsec_offload netdevsim doesn't support MACsec offload"
return 1
fi
-
- ip link add link $dev kci_macsec1 type macsec port 4 offload mac
- check_err $?
-
- ip link add link $dev kci_macsec2 type macsec address "aa:bb:cc:dd:ee:ff" port 5 offload mac
- check_err $?
-
- ip link add link $dev kci_macsec3 type macsec sci abbacdde01020304 offload mac
- check_err $?
-
- ip link add link $dev kci_macsec4 type macsec port 8 offload mac 2> /dev/null
- check_fail $?
+ run_cmd ip link add link $dev kci_macsec1 type macsec port 4 offload mac
+ run_cmd ip link add link $dev kci_macsec2 type macsec address "aa:bb:cc:dd:ee:ff" port 5 offload mac
+ run_cmd ip link add link $dev kci_macsec3 type macsec sci abbacdde01020304 offload mac
+ run_cmd_fail ip link add link $dev kci_macsec4 type macsec port 8 offload mac
msname=kci_macsec1
-
- ip macsec add "$msname" tx sa 0 pn 1024 on key 01 12345678901234567890123456789012
- check_err $?
-
- ip macsec add "$msname" rx port 1234 address "1c:ed:de:ad:be:ef"
- check_err $?
-
- ip macsec add "$msname" rx port 1234 address "1c:ed:de:ad:be:ef" sa 0 pn 1 on \
+ run_cmd ip macsec add "$msname" tx sa 0 pn 1024 on key 01 12345678901234567890123456789012
+ run_cmd ip macsec add "$msname" rx port 1234 address "1c:ed:de:ad:be:ef"
+ run_cmd ip macsec add "$msname" rx port 1234 address "1c:ed:de:ad:be:ef" sa 0 pn 1 on \
key 00 0123456789abcdef0123456789abcdef
- check_err $?
-
- ip macsec add "$msname" rx port 1235 address "1c:ed:de:ad:be:ef" 2> /dev/null
- check_fail $?
-
+ run_cmd_fail ip macsec add "$msname" rx port 1235 address "1c:ed:de:ad:be:ef"
# clean up any leftovers
for msdev in kci_macsec{1,2,3,4} ; do
ip link del $msdev 2> /dev/null
@@ -720,10 +619,10 @@ kci_test_macsec_offload()
$probed && rmmod netdevsim
if [ $ret -ne 0 ]; then
- echo "FAIL: macsec_offload"
+ end_test "FAIL: macsec_offload"
return 1
fi
- echo "PASS: macsec_offload"
+ end_test "PASS: macsec_offload"
}
#-------------------------------------------------------------------
@@ -755,8 +654,7 @@ kci_test_ipsec()
ip addr add $srcip dev $devdummy
# flush to be sure there's nothing configured
- ip x s flush ; ip x p flush
- check_err $?
+ run_cmd ip x s flush ; ip x p flush
# start the monitor in the background
tmpfile=`mktemp /var/run/ipsectestXXX`
@@ -764,72 +662,57 @@ kci_test_ipsec()
sleep 0.2
ipsecid="proto esp src $srcip dst $dstip spi 0x07"
- ip x s add $ipsecid \
+ run_cmd ip x s add $ipsecid \
mode transport reqid 0x07 replay-window 32 \
$algo sel src $srcip/24 dst $dstip/24
- check_err $?
- lines=`ip x s list | grep $srcip | grep $dstip | wc -l`
- test $lines -eq 2
- check_err $?
- ip x s count | grep -q "SAD count 1"
- check_err $?
+ lines=`ip x s list | grep $srcip | grep $dstip | wc -l`
+ run_cmd test $lines -eq 2
+ run_cmd_grep "SAD count 1" ip x s count
lines=`ip x s get $ipsecid | grep $srcip | grep $dstip | wc -l`
- test $lines -eq 2
- check_err $?
-
- ip x s delete $ipsecid
- check_err $?
+ run_cmd test $lines -eq 2
+ run_cmd ip x s delete $ipsecid
lines=`ip x s list | wc -l`
- test $lines -eq 0
- check_err $?
+ run_cmd test $lines -eq 0
ipsecsel="dir out src $srcip/24 dst $dstip/24"
- ip x p add $ipsecsel \
+ run_cmd ip x p add $ipsecsel \
tmpl proto esp src $srcip dst $dstip \
spi 0x07 mode transport reqid 0x07
- check_err $?
+
lines=`ip x p list | grep $srcip | grep $dstip | wc -l`
- test $lines -eq 2
- check_err $?
+ run_cmd test $lines -eq 2
- ip x p count | grep -q "SPD IN 0 OUT 1 FWD 0"
- check_err $?
+ run_cmd_grep "SPD IN 0 OUT 1 FWD 0" ip x p count
lines=`ip x p get $ipsecsel | grep $srcip | grep $dstip | wc -l`
- test $lines -eq 2
- check_err $?
+ run_cmd test $lines -eq 2
- ip x p delete $ipsecsel
- check_err $?
+ run_cmd ip x p delete $ipsecsel
lines=`ip x p list | wc -l`
- test $lines -eq 0
- check_err $?
+ run_cmd test $lines -eq 0
# check the monitor results
kill $mpid
lines=`wc -l $tmpfile | cut "-d " -f1`
- test $lines -eq 20
- check_err $?
+ run_cmd test $lines -eq 20
rm -rf $tmpfile
# clean up any leftovers
- ip x s flush
- check_err $?
- ip x p flush
- check_err $?
+ run_cmd ip x s flush
+ run_cmd ip x p flush
ip addr del $srcip/32 dev $devdummy
if [ $ret -ne 0 ]; then
- echo "FAIL: ipsec"
+ end_test "FAIL: ipsec"
return 1
fi
- echo "PASS: ipsec"
+ end_test "PASS: ipsec"
}
#-------------------------------------------------------------------
@@ -857,10 +740,9 @@ kci_test_ipsec_offload()
# setup netdevsim since dummydev doesn't have offload support
if [ ! -w /sys/bus/netdevsim/new_device ] ; then
- modprobe -q netdevsim
- check_err $?
+ run_cmd modprobe -q netdevsim
if [ $ret -ne 0 ]; then
- echo "SKIP: ipsec_offload can't load netdevsim"
+ end_test "SKIP: ipsec_offload can't load netdevsim"
return $ksft_skip
fi
probed=true
@@ -874,11 +756,11 @@ kci_test_ipsec_offload()
ip addr add $srcip dev $dev
ip link set $dev up
if [ ! -d $sysfsd ] ; then
- echo "FAIL: ipsec_offload can't create device $dev"
+ end_test "FAIL: ipsec_offload can't create device $dev"
return 1
fi
if [ ! -f $sysfsf ] ; then
- echo "FAIL: ipsec_offload netdevsim doesn't support IPsec offload"
+ end_test "FAIL: ipsec_offload netdevsim doesn't support IPsec offload"
return 1
fi
@@ -886,40 +768,39 @@ kci_test_ipsec_offload()
ip x s flush ; ip x p flush
# create offloaded SAs, both in and out
- ip x p add dir out src $srcip/24 dst $dstip/24 \
+ run_cmd ip x p add dir out src $srcip/24 dst $dstip/24 \
tmpl proto esp src $srcip dst $dstip spi 9 \
mode transport reqid 42
- check_err $?
- ip x p add dir in src $dstip/24 dst $srcip/24 \
+
+ run_cmd ip x p add dir in src $dstip/24 dst $srcip/24 \
tmpl proto esp src $dstip dst $srcip spi 9 \
mode transport reqid 42
- check_err $?
- ip x s add proto esp src $srcip dst $dstip spi 9 \
+ run_cmd ip x s add proto esp src $srcip dst $dstip spi 9 \
mode transport reqid 42 $algo sel src $srcip/24 dst $dstip/24 \
offload dev $dev dir out
- check_err $?
- ip x s add proto esp src $dstip dst $srcip spi 9 \
+
+ run_cmd ip x s add proto esp src $dstip dst $srcip spi 9 \
mode transport reqid 42 $algo sel src $dstip/24 dst $srcip/24 \
offload dev $dev dir in
- check_err $?
+
if [ $ret -ne 0 ]; then
- echo "FAIL: ipsec_offload can't create SA"
+ end_test "FAIL: ipsec_offload can't create SA"
return 1
fi
# does offload show up in ip output
lines=`ip x s list | grep -c "crypto offload parameters: dev $dev dir"`
if [ $lines -ne 2 ] ; then
- echo "FAIL: ipsec_offload SA offload missing from list output"
check_err 1
+ end_test "FAIL: ipsec_offload SA offload missing from list output"
fi
# use ping to exercise the Tx path
ping -I $dev -c 3 -W 1 -i 0 $dstip >/dev/null
# does driver have correct offload info
- diff $sysfsf - << EOF
+ run_cmd diff $sysfsf - << EOF
SA count=2 tx=3
sa[0] tx ipaddr=0x00000000 00000000 00000000 00000000
sa[0] spi=0x00000009 proto=0x32 salt=0x61626364 crypt=1
@@ -929,7 +810,7 @@ sa[1] spi=0x00000009 proto=0x32 salt=0x61626364 crypt=1
sa[1] key=0x34333231 38373635 32313039 36353433
EOF
if [ $? -ne 0 ] ; then
- echo "FAIL: ipsec_offload incorrect driver data"
+ end_test "FAIL: ipsec_offload incorrect driver data"
check_err 1
fi
@@ -938,8 +819,8 @@ EOF
ip x p flush
lines=`grep -c "SA count=0" $sysfsf`
if [ $lines -ne 1 ] ; then
- echo "FAIL: ipsec_offload SA not removed from driver"
check_err 1
+ end_test "FAIL: ipsec_offload SA not removed from driver"
fi
# clean up any leftovers
@@ -947,10 +828,10 @@ EOF
$probed && rmmod netdevsim
if [ $ret -ne 0 ]; then
- echo "FAIL: ipsec_offload"
+ end_test "FAIL: ipsec_offload"
return 1
fi
- echo "PASS: ipsec_offload"
+ end_test "PASS: ipsec_offload"
}
kci_test_gretap()
@@ -959,46 +840,38 @@ kci_test_gretap()
DEV_NS=gretap00
local ret=0
- ip netns add "$testns"
+ run_cmd ip netns add "$testns"
if [ $? -ne 0 ]; then
- echo "SKIP gretap tests: cannot add net namespace $testns"
+ end_test "SKIP gretap tests: cannot add net namespace $testns"
return $ksft_skip
fi
- ip link help gretap 2>&1 | grep -q "^Usage:"
+ run_cmd_grep "^Usage:" ip link help gretap
if [ $? -ne 0 ];then
- echo "SKIP: gretap: iproute2 too old"
+ end_test "SKIP: gretap: iproute2 too old"
ip netns del "$testns"
return $ksft_skip
fi
# test native tunnel
- ip -netns "$testns" link add dev "$DEV_NS" type gretap seq \
+ run_cmd ip -netns "$testns" link add dev "$DEV_NS" type gretap seq \
key 102 local 172.16.1.100 remote 172.16.1.200
- check_err $?
-
- ip -netns "$testns" addr add dev "$DEV_NS" 10.1.1.100/24
- check_err $?
- ip -netns "$testns" link set dev $DEV_NS up
- check_err $?
- ip -netns "$testns" link del "$DEV_NS"
- check_err $?
+ run_cmd ip -netns "$testns" addr add dev "$DEV_NS" 10.1.1.100/24
+ run_cmd ip -netns "$testns" link set dev $DEV_NS ups
+ run_cmd ip -netns "$testns" link del "$DEV_NS"
# test external mode
- ip -netns "$testns" link add dev "$DEV_NS" type gretap external
- check_err $?
-
- ip -netns "$testns" link del "$DEV_NS"
- check_err $?
+ run_cmd ip -netns "$testns" link add dev "$DEV_NS" type gretap external
+ run_cmd ip -netns "$testns" link del "$DEV_NS"
if [ $ret -ne 0 ]; then
- echo "FAIL: gretap"
+ end_test "FAIL: gretap"
ip netns del "$testns"
return 1
fi
- echo "PASS: gretap"
+ end_test "PASS: gretap"
ip netns del "$testns"
}
@@ -1009,46 +882,38 @@ kci_test_ip6gretap()
DEV_NS=ip6gretap00
local ret=0
- ip netns add "$testns"
+ run_cmd ip netns add "$testns"
if [ $? -ne 0 ]; then
- echo "SKIP ip6gretap tests: cannot add net namespace $testns"
+ end_test "SKIP ip6gretap tests: cannot add net namespace $testns"
return $ksft_skip
fi
- ip link help ip6gretap 2>&1 | grep -q "^Usage:"
+ run_cmd_grep "^Usage:" ip link help ip6gretap
if [ $? -ne 0 ];then
- echo "SKIP: ip6gretap: iproute2 too old"
+ end_test "SKIP: ip6gretap: iproute2 too old"
ip netns del "$testns"
return $ksft_skip
fi
# test native tunnel
- ip -netns "$testns" link add dev "$DEV_NS" type ip6gretap seq \
+ run_cmd ip -netns "$testns" link add dev "$DEV_NS" type ip6gretap seq \
key 102 local fc00:100::1 remote fc00:100::2
- check_err $?
- ip -netns "$testns" addr add dev "$DEV_NS" fc00:200::1/96
- check_err $?
- ip -netns "$testns" link set dev $DEV_NS up
- check_err $?
-
- ip -netns "$testns" link del "$DEV_NS"
- check_err $?
+ run_cmd ip -netns "$testns" addr add dev "$DEV_NS" fc00:200::1/96
+ run_cmd ip -netns "$testns" link set dev $DEV_NS up
+ run_cmd ip -netns "$testns" link del "$DEV_NS"
# test external mode
- ip -netns "$testns" link add dev "$DEV_NS" type ip6gretap external
- check_err $?
-
- ip -netns "$testns" link del "$DEV_NS"
- check_err $?
+ run_cmd ip -netns "$testns" link add dev "$DEV_NS" type ip6gretap external
+ run_cmd ip -netns "$testns" link del "$DEV_NS"
if [ $ret -ne 0 ]; then
- echo "FAIL: ip6gretap"
+ end_test "FAIL: ip6gretap"
ip netns del "$testns"
return 1
fi
- echo "PASS: ip6gretap"
+ end_test "PASS: ip6gretap"
ip netns del "$testns"
}
@@ -1058,62 +923,47 @@ kci_test_erspan()
testns="testns"
DEV_NS=erspan00
local ret=0
-
- ip link help erspan 2>&1 | grep -q "^Usage:"
+ run_cmd_grep "^Usage:" ip link help erspan
if [ $? -ne 0 ];then
- echo "SKIP: erspan: iproute2 too old"
+ end_test "SKIP: erspan: iproute2 too old"
return $ksft_skip
fi
-
- ip netns add "$testns"
+ run_cmd ip netns add "$testns"
if [ $? -ne 0 ]; then
- echo "SKIP erspan tests: cannot add net namespace $testns"
+ end_test "SKIP erspan tests: cannot add net namespace $testns"
return $ksft_skip
fi
# test native tunnel erspan v1
- ip -netns "$testns" link add dev "$DEV_NS" type erspan seq \
+ run_cmd ip -netns "$testns" link add dev "$DEV_NS" type erspan seq \
key 102 local 172.16.1.100 remote 172.16.1.200 \
erspan_ver 1 erspan 488
- check_err $?
- ip -netns "$testns" addr add dev "$DEV_NS" 10.1.1.100/24
- check_err $?
- ip -netns "$testns" link set dev $DEV_NS up
- check_err $?
-
- ip -netns "$testns" link del "$DEV_NS"
- check_err $?
+ run_cmd ip -netns "$testns" addr add dev "$DEV_NS" 10.1.1.100/24
+ run_cmd ip -netns "$testns" link set dev $DEV_NS up
+ run_cmd ip -netns "$testns" link del "$DEV_NS"
# test native tunnel erspan v2
- ip -netns "$testns" link add dev "$DEV_NS" type erspan seq \
+ run_cmd ip -netns "$testns" link add dev "$DEV_NS" type erspan seq \
key 102 local 172.16.1.100 remote 172.16.1.200 \
erspan_ver 2 erspan_dir ingress erspan_hwid 7
- check_err $?
-
- ip -netns "$testns" addr add dev "$DEV_NS" 10.1.1.100/24
- check_err $?
- ip -netns "$testns" link set dev $DEV_NS up
- check_err $?
- ip -netns "$testns" link del "$DEV_NS"
- check_err $?
+ run_cmd ip -netns "$testns" addr add dev "$DEV_NS" 10.1.1.100/24
+ run_cmd ip -netns "$testns" link set dev $DEV_NS up
+ run_cmd ip -netns "$testns" link del "$DEV_NS"
# test external mode
- ip -netns "$testns" link add dev "$DEV_NS" type erspan external
- check_err $?
-
- ip -netns "$testns" link del "$DEV_NS"
- check_err $?
+ run_cmd ip -netns "$testns" link add dev "$DEV_NS" type erspan external
+ run_cmd ip -netns "$testns" link del "$DEV_NS"
if [ $ret -ne 0 ]; then
- echo "FAIL: erspan"
+ end_test "FAIL: erspan"
ip netns del "$testns"
return 1
fi
- echo "PASS: erspan"
+ end_test "PASS: erspan"
ip netns del "$testns"
}
@@ -1123,63 +973,49 @@ kci_test_ip6erspan()
testns="testns"
DEV_NS=ip6erspan00
local ret=0
-
- ip link help ip6erspan 2>&1 | grep -q "^Usage:"
+ run_cmd_grep "^Usage:" ip link help ip6erspan
if [ $? -ne 0 ];then
- echo "SKIP: ip6erspan: iproute2 too old"
+ end_test "SKIP: ip6erspan: iproute2 too old"
return $ksft_skip
fi
-
- ip netns add "$testns"
+ run_cmd ip netns add "$testns"
if [ $? -ne 0 ]; then
- echo "SKIP ip6erspan tests: cannot add net namespace $testns"
+ end_test "SKIP ip6erspan tests: cannot add net namespace $testns"
return $ksft_skip
fi
# test native tunnel ip6erspan v1
- ip -netns "$testns" link add dev "$DEV_NS" type ip6erspan seq \
+ run_cmd ip -netns "$testns" link add dev "$DEV_NS" type ip6erspan seq \
key 102 local fc00:100::1 remote fc00:100::2 \
erspan_ver 1 erspan 488
- check_err $?
- ip -netns "$testns" addr add dev "$DEV_NS" 10.1.1.100/24
- check_err $?
- ip -netns "$testns" link set dev $DEV_NS up
- check_err $?
-
- ip -netns "$testns" link del "$DEV_NS"
- check_err $?
+ run_cmd ip -netns "$testns" addr add dev "$DEV_NS" 10.1.1.100/24
+ run_cmd ip -netns "$testns" link set dev $DEV_NS up
+ run_cmd ip -netns "$testns" link del "$DEV_NS"
# test native tunnel ip6erspan v2
- ip -netns "$testns" link add dev "$DEV_NS" type ip6erspan seq \
+ run_cmd ip -netns "$testns" link add dev "$DEV_NS" type ip6erspan seq \
key 102 local fc00:100::1 remote fc00:100::2 \
erspan_ver 2 erspan_dir ingress erspan_hwid 7
- check_err $?
- ip -netns "$testns" addr add dev "$DEV_NS" 10.1.1.100/24
- check_err $?
- ip -netns "$testns" link set dev $DEV_NS up
- check_err $?
-
- ip -netns "$testns" link del "$DEV_NS"
- check_err $?
+ run_cmd ip -netns "$testns" addr add dev "$DEV_NS" 10.1.1.100/24
+ run_cmd ip -netns "$testns" link set dev $DEV_NS up
+ run_cmd ip -netns "$testns" link del "$DEV_NS"
# test external mode
- ip -netns "$testns" link add dev "$DEV_NS" \
+ run_cmd ip -netns "$testns" link add dev "$DEV_NS" \
type ip6erspan external
- check_err $?
- ip -netns "$testns" link del "$DEV_NS"
- check_err $?
+ run_cmd ip -netns "$testns" link del "$DEV_NS"
if [ $ret -ne 0 ]; then
- echo "FAIL: ip6erspan"
+ end_test "FAIL: ip6erspan"
ip netns del "$testns"
return 1
fi
- echo "PASS: ip6erspan"
+ end_test "PASS: ip6erspan"
ip netns del "$testns"
}
@@ -1195,45 +1031,35 @@ kci_test_fdb_get()
dstip="10.0.2.3"
local ret=0
- bridge fdb help 2>&1 |grep -q 'bridge fdb get'
+ run_cmd_grep 'bridge fdb get' bridge fdb help
if [ $? -ne 0 ];then
- echo "SKIP: fdb get tests: iproute2 too old"
+ end_test "SKIP: fdb get tests: iproute2 too old"
return $ksft_skip
fi
- ip netns add testns
+ run_cmd ip netns add testns
if [ $? -ne 0 ]; then
- echo "SKIP fdb get tests: cannot add net namespace $testns"
+ end_test "SKIP fdb get tests: cannot add net namespace $testns"
return $ksft_skip
fi
-
- $IP link add "$vxlandev" type vxlan id 10 local $localip \
- dstport 4789 2>/dev/null
- check_err $?
- $IP link add name "$brdev" type bridge &>/dev/null
- check_err $?
- $IP link set dev "$vxlandev" master "$brdev" &>/dev/null
- check_err $?
- $BRIDGE fdb add $test_mac dev "$vxlandev" master &>/dev/null
- check_err $?
- $BRIDGE fdb add $test_mac dev "$vxlandev" dst $dstip self &>/dev/null
- check_err $?
-
- $BRIDGE fdb get $test_mac brport "$vxlandev" 2>/dev/null | grep -q "dev $vxlandev master $brdev"
- check_err $?
- $BRIDGE fdb get $test_mac br "$brdev" 2>/dev/null | grep -q "dev $vxlandev master $brdev"
- check_err $?
- $BRIDGE fdb get $test_mac dev "$vxlandev" self 2>/dev/null | grep -q "dev $vxlandev dst $dstip"
- check_err $?
+ run_cmd $IP link add "$vxlandev" type vxlan id 10 local $localip \
+ dstport 4789
+ run_cmd $IP link add name "$brdev" type bridge
+ run_cmd $IP link set dev "$vxlandev" master "$brdev"
+ run_cmd $BRIDGE fdb add $test_mac dev "$vxlandev" master
+ run_cmd $BRIDGE fdb add $test_mac dev "$vxlandev" dst $dstip self
+ run_cmd_grep "dev $vxlandev master $brdev" $BRIDGE fdb get $test_mac brport "$vxlandev"
+ run_cmd_grep "dev $vxlandev master $brdev" $BRIDGE fdb get $test_mac br "$brdev"
+ run_cmd_grep "dev $vxlandev dst $dstip" $BRIDGE fdb get $test_mac dev "$vxlandev" self
ip netns del testns &>/dev/null
if [ $ret -ne 0 ]; then
- echo "FAIL: bridge fdb get"
+ end_test "FAIL: bridge fdb get"
return 1
fi
- echo "PASS: bridge fdb get"
+ end_test "PASS: bridge fdb get"
}
kci_test_neigh_get()
@@ -1243,50 +1069,38 @@ kci_test_neigh_get()
dstip6=dead::2
local ret=0
- ip neigh help 2>&1 |grep -q 'ip neigh get'
+ run_cmd_grep 'ip neigh get' ip neigh help
if [ $? -ne 0 ];then
- echo "SKIP: fdb get tests: iproute2 too old"
+ end_test "SKIP: fdb get tests: iproute2 too old"
return $ksft_skip
fi
# ipv4
- ip neigh add $dstip lladdr $dstmac dev "$devdummy" > /dev/null
- check_err $?
- ip neigh get $dstip dev "$devdummy" 2> /dev/null | grep -q "$dstmac"
- check_err $?
- ip neigh del $dstip lladdr $dstmac dev "$devdummy" > /dev/null
- check_err $?
+ run_cmd ip neigh add $dstip lladdr $dstmac dev "$devdummy"
+ run_cmd_grep "$dstmac" ip neigh get $dstip dev "$devdummy"
+ run_cmd ip neigh del $dstip lladdr $dstmac dev "$devdummy"
# ipv4 proxy
- ip neigh add proxy $dstip dev "$devdummy" > /dev/null
- check_err $?
- ip neigh get proxy $dstip dev "$devdummy" 2>/dev/null | grep -q "$dstip"
- check_err $?
- ip neigh del proxy $dstip dev "$devdummy" > /dev/null
- check_err $?
+ run_cmd ip neigh add proxy $dstip dev "$devdummy"
+ run_cmd_grep "$dstip" ip neigh get proxy $dstip dev "$devdummy"
+ run_cmd ip neigh del proxy $dstip dev "$devdummy"
# ipv6
- ip neigh add $dstip6 lladdr $dstmac dev "$devdummy" > /dev/null
- check_err $?
- ip neigh get $dstip6 dev "$devdummy" 2> /dev/null | grep -q "$dstmac"
- check_err $?
- ip neigh del $dstip6 lladdr $dstmac dev "$devdummy" > /dev/null
- check_err $?
+ run_cmd ip neigh add $dstip6 lladdr $dstmac dev "$devdummy"
+ run_cmd_grep "$dstmac" ip neigh get $dstip6 dev "$devdummy"
+ run_cmd ip neigh del $dstip6 lladdr $dstmac dev "$devdummy"
# ipv6 proxy
- ip neigh add proxy $dstip6 dev "$devdummy" > /dev/null
- check_err $?
- ip neigh get proxy $dstip6 dev "$devdummy" 2>/dev/null | grep -q "$dstip6"
- check_err $?
- ip neigh del proxy $dstip6 dev "$devdummy" > /dev/null
- check_err $?
+ run_cmd ip neigh add proxy $dstip6 dev "$devdummy"
+ run_cmd_grep "$dstip6" ip neigh get proxy $dstip6 dev "$devdummy"
+ run_cmd ip neigh del proxy $dstip6 dev "$devdummy"
if [ $ret -ne 0 ];then
- echo "FAIL: neigh get"
+ end_test "FAIL: neigh get"
return 1
fi
- echo "PASS: neigh get"
+ end_test "PASS: neigh get"
}
kci_test_bridge_parent_id()
@@ -1296,10 +1110,9 @@ kci_test_bridge_parent_id()
probed=false
if [ ! -w /sys/bus/netdevsim/new_device ] ; then
- modprobe -q netdevsim
- check_err $?
+ run_cmd modprobe -q netdevsim
if [ $ret -ne 0 ]; then
- echo "SKIP: bridge_parent_id can't load netdevsim"
+ end_test "SKIP: bridge_parent_id can't load netdevsim"
return $ksft_skip
fi
probed=true
@@ -1312,13 +1125,11 @@ kci_test_bridge_parent_id()
udevadm settle
dev10=`ls ${sysfsnet}10/net/`
dev20=`ls ${sysfsnet}20/net/`
-
- ip link add name test-bond0 type bond mode 802.3ad
- ip link set dev $dev10 master test-bond0
- ip link set dev $dev20 master test-bond0
- ip link add name test-br0 type bridge
- ip link set dev test-bond0 master test-br0
- check_err $?
+ run_cmd ip link add name test-bond0 type bond mode 802.3ad
+ run_cmd ip link set dev $dev10 master test-bond0
+ run_cmd ip link set dev $dev20 master test-bond0
+ run_cmd ip link add name test-br0 type bridge
+ run_cmd ip link set dev test-bond0 master test-br0
# clean up any leftovers
ip link del dev test-br0
@@ -1328,10 +1139,10 @@ kci_test_bridge_parent_id()
$probed && rmmod netdevsim
if [ $ret -ne 0 ]; then
- echo "FAIL: bridge_parent_id"
+ end_test "FAIL: bridge_parent_id"
return 1
fi
- echo "PASS: bridge_parent_id"
+ end_test "PASS: bridge_parent_id"
}
address_get_proto()
@@ -1409,10 +1220,10 @@ do_test_address_proto()
ip address del dev "$devdummy" "$addr3"
if [ $ret -ne 0 ]; then
- echo "FAIL: address proto $what"
+ end_test "FAIL: address proto $what"
return 1
fi
- echo "PASS: address proto $what"
+ end_test "PASS: address proto $what"
}
kci_test_address_proto()
@@ -1435,7 +1246,7 @@ kci_test_rtnl()
kci_add_dummy
if [ $ret -ne 0 ];then
- echo "FAIL: cannot add dummy interface"
+ end_test "FAIL: cannot add dummy interface"
return 1
fi
@@ -1455,31 +1266,39 @@ usage: ${0##*/} OPTS
-t <test> Test(s) to run (default: all)
(options: $(echo $ALL_TESTS))
+ -v Verbose mode (show commands and output)
+ -P Pause after every test
+ -p Pause after every failing test before cleanup (for debugging)
EOF
}
#check for needed privileges
if [ "$(id -u)" -ne 0 ];then
- echo "SKIP: Need root privileges"
+ end_test "SKIP: Need root privileges"
exit $ksft_skip
fi
for x in ip tc;do
$x -Version 2>/dev/null >/dev/null
if [ $? -ne 0 ];then
- echo "SKIP: Could not run test without the $x tool"
+ end_test "SKIP: Could not run test without the $x tool"
exit $ksft_skip
fi
done
-while getopts t:h o; do
+while getopts t:hvpP o; do
case $o in
t) TESTS=$OPTARG;;
+ v) VERBOSE=1;;
+ p) PAUSE_ON_FAIL=yes;;
+ P) PAUSE=yes;;
h) usage; exit 0;;
*) usage; exit 1;;
esac
done
+[ $PAUSE = "yes" ] && PAUSE_ON_FAIL="no"
+
kci_test_rtnl
exit $?
diff --git a/tools/testing/selftests/net/test_vxlan_mdb.sh b/tools/testing/selftests/net/test_vxlan_mdb.sh
index 31e5f0f8859d..6e996f8063cd 100755
--- a/tools/testing/selftests/net/test_vxlan_mdb.sh
+++ b/tools/testing/selftests/net/test_vxlan_mdb.sh
@@ -337,62 +337,62 @@ basic_common()
# Basic add, replace and delete behavior.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
log_test $? 0 "MDB entry addition"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010"
log_test $? 0 "MDB entry presence after addition"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
log_test $? 0 "MDB entry replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010"
log_test $? 0 "MDB entry presence after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
log_test $? 0 "MDB entry deletion"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\""
- log_test $? 1 "MDB entry presence after deletion"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010"
+ log_test $? 254 "MDB entry presence after deletion"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
log_test $? 255 "Non-existent MDB entry deletion"
# Default protocol and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"proto static\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \"proto static\""
log_test $? 0 "MDB entry default protocol"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent proto 123 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"proto 123\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \"proto 123\""
log_test $? 0 "MDB entry protocol replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
# Default destination port and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \" dst_port \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \" dst_port \""
log_test $? 1 "MDB entry default destination port"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip dst_port 1234 src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"dst_port 1234\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \"dst_port 1234\""
log_test $? 0 "MDB entry destination port replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
# Default destination VNI and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \" vni \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \" vni \""
log_test $? 1 "MDB entry default destination VNI"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip vni 1234 src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"vni 1234\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \"vni 1234\""
log_test $? 0 "MDB entry destination VNI replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
# Default outgoing interface and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \" via \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \" via \""
log_test $? 1 "MDB entry default outgoing interface"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010 via veth0"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"via veth0\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \"via veth0\""
log_test $? 0 "MDB entry outgoing interface replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
@@ -550,127 +550,127 @@ star_g_common()
# Basic add, replace and delete behavior.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
log_test $? 0 "(*, G) MDB entry addition with source list"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010"
log_test $? 0 "(*, G) MDB entry presence after addition"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry presence after addition"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
log_test $? 0 "(*, G) MDB entry replacement with source list"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010"
log_test $? 0 "(*, G) MDB entry presence after replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry presence after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
log_test $? 0 "(*, G) MDB entry deletion"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \""
- log_test $? 1 "(*, G) MDB entry presence after deletion"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
- log_test $? 1 "(S, G) MDB entry presence after deletion"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010"
+ log_test $? 254 "(*, G) MDB entry presence after deletion"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
+ log_test $? 254 "(S, G) MDB entry presence after deletion"
# Default filter mode and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep exclude"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep exclude"
log_test $? 0 "(*, G) MDB entry default filter mode"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode include source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep include"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep include"
log_test $? 0 "(*, G) MDB entry after replacing filter mode to \"include\""
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry after replacing filter mode to \"include\""
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\" | grep blocked"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep blocked"
log_test $? 1 "\"blocked\" flag after replacing filter mode to \"include\""
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep exclude"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep exclude"
log_test $? 0 "(*, G) MDB entry after replacing filter mode to \"exclude\""
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grep grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry after replacing filter mode to \"exclude\""
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\" | grep blocked"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep blocked"
log_test $? 0 "\"blocked\" flag after replacing filter mode to \"exclude\""
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
# Default source list and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep source_list"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep source_list"
log_test $? 1 "(*, G) MDB entry default source list"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1,$src2,$src3 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry of 1st source after replacing source list"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src2\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src2 src_vni 10010"
log_test $? 0 "(S, G) MDB entry of 2nd source after replacing source list"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src3\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src3 src_vni 10010"
log_test $? 0 "(S, G) MDB entry of 3rd source after replacing source list"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1,$src3 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry of 1st source after removing source"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src2\""
- log_test $? 1 "(S, G) MDB entry of 2nd source after removing source"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src3\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src2 src_vni 10010"
+ log_test $? 254 "(S, G) MDB entry of 2nd source after removing source"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src3 src_vni 10010"
log_test $? 0 "(S, G) MDB entry of 3rd source after removing source"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
# Default protocol and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \"proto static\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \"proto static\""
log_test $? 0 "(*, G) MDB entry default protocol"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \"proto static\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \"proto static\""
log_test $? 0 "(S, G) MDB entry default protocol"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 proto bgp dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \"proto bgp\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \"proto bgp\""
log_test $? 0 "(*, G) MDB entry protocol after replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \"proto bgp\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \"proto bgp\""
log_test $? 0 "(S, G) MDB entry protocol after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
# Default destination port and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" dst_port \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" dst_port \""
log_test $? 1 "(*, G) MDB entry default destination port"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" dst_port \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" dst_port \""
log_test $? 1 "(S, G) MDB entry default destination port"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip dst_port 1234 src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" dst_port 1234 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" dst_port 1234 \""
log_test $? 0 "(*, G) MDB entry destination port after replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" dst_port 1234 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" dst_port 1234 \""
log_test $? 0 "(S, G) MDB entry destination port after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
# Default destination VNI and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" vni \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" vni \""
log_test $? 1 "(*, G) MDB entry default destination VNI"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" vni \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" vni \""
log_test $? 1 "(S, G) MDB entry default destination VNI"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip vni 1234 src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" vni 1234 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" vni 1234 \""
log_test $? 0 "(*, G) MDB entry destination VNI after replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" vni 1234 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" vni 1234 \""
log_test $? 0 "(S, G) MDB entry destination VNI after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
# Default outgoing interface and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" via \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" via \""
log_test $? 1 "(*, G) MDB entry default outgoing interface"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" via \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" via \""
log_test $? 1 "(S, G) MDB entry default outgoing interface"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010 via veth0"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" via veth0 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" via veth0 \""
log_test $? 0 "(*, G) MDB entry outgoing interface after replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" via veth0 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" via veth0 \""
log_test $? 0 "(S, G) MDB entry outgoing interface after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
@@ -772,7 +772,7 @@ sg_common()
# Default filter mode.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp src $src permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep include"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src src_vni 10010 | grep include"
log_test $? 0 "(S, G) MDB entry default filter mode"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp src $src permanent dst $vtep_ip src_vni 10010"
@@ -2296,9 +2296,9 @@ if [ ! -x "$(command -v jq)" ]; then
exit $ksft_skip
fi
-bridge mdb help 2>&1 | grep -q "src_vni"
+bridge mdb help 2>&1 | grep -q "get"
if [ $? -ne 0 ]; then
- echo "SKIP: iproute2 bridge too old, missing VXLAN MDB support"
+ echo "SKIP: iproute2 bridge too old, missing VXLAN MDB get support"
exit $ksft_skip
fi
diff --git a/tools/testing/selftests/netfilter/Makefile b/tools/testing/selftests/netfilter/Makefile
index ef90aca4cc96..bced422b78f7 100644
--- a/tools/testing/selftests/netfilter/Makefile
+++ b/tools/testing/selftests/netfilter/Makefile
@@ -7,7 +7,7 @@ TEST_PROGS := nft_trans_stress.sh nft_fib.sh nft_nat.sh bridge_brouter.sh \
nft_queue.sh nft_meta.sh nf_nat_edemux.sh \
ipip-conntrack-mtu.sh conntrack_tcp_unreplied.sh \
conntrack_vrf.sh nft_synproxy.sh rpath.sh nft_audit.sh \
- conntrack_sctp_collision.sh
+ conntrack_sctp_collision.sh xt_string.sh
HOSTPKG_CONFIG := pkg-config
diff --git a/tools/testing/selftests/netfilter/nf_nat_edemux.sh b/tools/testing/selftests/netfilter/nf_nat_edemux.sh
index 1092bbcb1fba..a1aa8f4a5828 100755
--- a/tools/testing/selftests/netfilter/nf_nat_edemux.sh
+++ b/tools/testing/selftests/netfilter/nf_nat_edemux.sh
@@ -11,16 +11,18 @@ ret=0
sfx=$(mktemp -u "XXXXXXXX")
ns1="ns1-$sfx"
ns2="ns2-$sfx"
+socatpid=0
cleanup()
{
+ [ $socatpid -gt 0 ] && kill $socatpid
ip netns del $ns1
ip netns del $ns2
}
-iperf3 -v > /dev/null 2>&1
+socat -h > /dev/null 2>&1
if [ $? -ne 0 ];then
- echo "SKIP: Could not run test without iperf3"
+ echo "SKIP: Could not run test without socat"
exit $ksft_skip
fi
@@ -60,8 +62,8 @@ ip netns exec $ns2 ip link set up dev veth2
ip netns exec $ns2 ip addr add 192.168.1.2/24 dev veth2
# Create a server in one namespace
-ip netns exec $ns1 iperf3 -s > /dev/null 2>&1 &
-iperfs=$!
+ip netns exec $ns1 socat -u TCP-LISTEN:5201,fork OPEN:/dev/null,wronly=1 &
+socatpid=$!
# Restrict source port to just one so we don't have to exhaust
# all others.
@@ -83,17 +85,43 @@ sleep 1
# ip daddr:dport will be rewritten to 192.168.1.1 5201
# NAT must reallocate source port 10000 because
# 192.168.1.2:10000 -> 192.168.1.1:5201 is already in use
-echo test | ip netns exec $ns2 socat -t 3 -u STDIN TCP:10.96.0.1:443 >/dev/null
+echo test | ip netns exec $ns2 socat -t 3 -u STDIN TCP:10.96.0.1:443,connect-timeout=3 >/dev/null
ret=$?
-kill $iperfs
-
# Check socat can connect to 10.96.0.1:443 (aka 192.168.1.1:5201).
if [ $ret -eq 0 ]; then
echo "PASS: socat can connect via NAT'd address"
else
echo "FAIL: socat cannot connect via NAT'd address"
- exit 1
fi
-exit 0
+# Check source port clash resolution.
+ip netns exec $ns1 iptables -t nat -A PREROUTING -p tcp --dport 5202 -j REDIRECT --to-ports 5201
+ip netns exec $ns1 iptables -t nat -A PREROUTING -p tcp --dport 5203 -j REDIRECT --to-ports 5201
+
+sleep 5 | ip netns exec $ns2 socat -t 5 -u STDIN TCP:192.168.1.1:5202,connect-timeout=5 >/dev/null &
+cpid1=$!
+sleep 1
+
+# if connect succeeds, client closes instantly due to EOF on stdin.
+# if connect hangs, it will time out after 5s.
+echo | ip netns exec $ns2 socat -t 3 -u STDIN TCP:192.168.1.1:5203,connect-timeout=5 >/dev/null &
+cpid2=$!
+
+time_then=$(date +%s)
+wait $cpid2
+rv=$?
+time_now=$(date +%s)
+
+# Check how much time has elapsed, expectation is for
+# 'cpid2' to connect and then exit (and no connect delay).
+delta=$((time_now - time_then))
+
+if [ $delta -lt 2 -a $rv -eq 0 ]; then
+ echo "PASS: could connect to service via redirected ports"
+else
+ echo "FAIL: socat cannot connect to service via redirect ($delta seconds elapsed, returned $rv)"
+ ret=1
+fi
+
+exit $ret
diff --git a/tools/testing/selftests/netfilter/xt_string.sh b/tools/testing/selftests/netfilter/xt_string.sh
new file mode 100755
index 000000000000..1802653a4728
--- /dev/null
+++ b/tools/testing/selftests/netfilter/xt_string.sh
@@ -0,0 +1,128 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# return code to signal skipped test
+ksft_skip=4
+rc=0
+
+if ! iptables --version >/dev/null 2>&1; then
+ echo "SKIP: Test needs iptables"
+ exit $ksft_skip
+fi
+if ! ip -V >/dev/null 2>&1; then
+ echo "SKIP: Test needs iproute2"
+ exit $ksft_skip
+fi
+if ! nc -h >/dev/null 2>&1; then
+ echo "SKIP: Test needs netcat"
+ exit $ksft_skip
+fi
+
+pattern="foo bar baz"
+patlen=11
+hdrlen=$((20 + 8)) # IPv4 + UDP
+ns="ns-$(mktemp -u XXXXXXXX)"
+trap 'ip netns del $ns' EXIT
+ip netns add "$ns"
+ip -net "$ns" link add d0 type dummy
+ip -net "$ns" link set d0 up
+ip -net "$ns" addr add 10.1.2.1/24 dev d0
+
+#ip netns exec "$ns" tcpdump -npXi d0 &
+#tcpdump_pid=$!
+#trap 'kill $tcpdump_pid; ip netns del $ns' EXIT
+
+add_rule() { # (alg, from, to)
+ ip netns exec "$ns" \
+ iptables -A OUTPUT -o d0 -m string \
+ --string "$pattern" --algo $1 --from $2 --to $3
+}
+showrules() { # ()
+ ip netns exec "$ns" iptables -v -S OUTPUT | grep '^-A'
+}
+zerorules() {
+ ip netns exec "$ns" iptables -Z OUTPUT
+}
+countrule() { # (pattern)
+ showrules | grep -c -- "$*"
+}
+send() { # (offset)
+ ( for ((i = 0; i < $1 - $hdrlen; i++)); do
+ printf " "
+ done
+ printf "$pattern"
+ ) | ip netns exec "$ns" nc -w 1 -u 10.1.2.2 27374
+}
+
+add_rule bm 1000 1500
+add_rule bm 1400 1600
+add_rule kmp 1000 1500
+add_rule kmp 1400 1600
+
+zerorules
+send 0
+send $((1000 - $patlen))
+if [ $(countrule -c 0 0) -ne 4 ]; then
+ echo "FAIL: rules match data before --from"
+ showrules
+ ((rc--))
+fi
+
+zerorules
+send 1000
+send $((1400 - $patlen))
+if [ $(countrule -c 2) -ne 2 ]; then
+ echo "FAIL: only two rules should match at low offset"
+ showrules
+ ((rc--))
+fi
+
+zerorules
+send $((1500 - $patlen))
+if [ $(countrule -c 1) -ne 4 ]; then
+ echo "FAIL: all rules should match at end of packet"
+ showrules
+ ((rc--))
+fi
+
+zerorules
+send 1495
+if [ $(countrule -c 1) -ne 1 ]; then
+ echo "FAIL: only kmp with proper --to should match pattern spanning fragments"
+ showrules
+ ((rc--))
+fi
+
+zerorules
+send 1500
+if [ $(countrule -c 1) -ne 2 ]; then
+ echo "FAIL: two rules should match pattern at start of second fragment"
+ showrules
+ ((rc--))
+fi
+
+zerorules
+send $((1600 - $patlen))
+if [ $(countrule -c 1) -ne 2 ]; then
+ echo "FAIL: two rules should match pattern at end of largest --to"
+ showrules
+ ((rc--))
+fi
+
+zerorules
+send $((1600 - $patlen + 1))
+if [ $(countrule -c 1) -ne 0 ]; then
+ echo "FAIL: no rules should match pattern extending largest --to"
+ showrules
+ ((rc--))
+fi
+
+zerorules
+send 1600
+if [ $(countrule -c 1) -ne 0 ]; then
+ echo "FAIL: no rule should match pattern past largest --to"
+ showrules
+ ((rc--))
+fi
+
+exit $rc
diff --git a/tools/testing/selftests/ptp/ptpchmaskfmt.sh b/tools/testing/selftests/ptp/ptpchmaskfmt.sh
new file mode 100644
index 000000000000..0a06ba8af300
--- /dev/null
+++ b/tools/testing/selftests/ptp/ptpchmaskfmt.sh
@@ -0,0 +1,14 @@
+#!/bin/bash
+# SPDX-License-Identifier: GPL-2.0
+
+# Simple helper script to transform ptp debugfs timestamp event queue filtering
+# masks from decimal values to hexadecimal values
+
+# Only takes the debugfs mask file path as an argument
+DEBUGFS_MASKFILE="${1}"
+
+#shellcheck disable=SC2013,SC2086
+for int in $(cat "$DEBUGFS_MASKFILE") ; do
+ printf '0x%08X ' "$int"
+done
+echo
diff --git a/tools/testing/selftests/ptp/testptp.c b/tools/testing/selftests/ptp/testptp.c
index c9f6cca4feb4..011252fe238c 100644
--- a/tools/testing/selftests/ptp/testptp.c
+++ b/tools/testing/selftests/ptp/testptp.c
@@ -121,6 +121,7 @@ static void usage(char *progname)
" -d name device to open\n"
" -e val read 'val' external time stamp events\n"
" -f val adjust the ptp clock frequency by 'val' ppb\n"
+ " -F chan Enable single channel mask and keep device open for debugfs verification.\n"
" -g get the ptp clock time\n"
" -h prints this message\n"
" -i val index for event/trigger\n"
@@ -187,6 +188,7 @@ int main(int argc, char *argv[])
int pps = -1;
int seconds = 0;
int settime = 0;
+ int channel = -1;
int64_t t1, t2, tp;
int64_t interval, offset;
@@ -196,7 +198,7 @@ int main(int argc, char *argv[])
progname = strrchr(argv[0], '/');
progname = progname ? 1+progname : argv[0];
- while (EOF != (c = getopt(argc, argv, "cd:e:f:ghH:i:k:lL:n:o:p:P:sSt:T:w:x:Xz"))) {
+ while (EOF != (c = getopt(argc, argv, "cd:e:f:F:ghH:i:k:lL:n:o:p:P:sSt:T:w:x:Xz"))) {
switch (c) {
case 'c':
capabilities = 1;
@@ -210,6 +212,9 @@ int main(int argc, char *argv[])
case 'f':
adjfreq = atoi(optarg);
break;
+ case 'F':
+ channel = atoi(optarg);
+ break;
case 'g':
gettime = 1;
break;
@@ -604,6 +609,18 @@ int main(int argc, char *argv[])
free(xts);
}
+ if (channel >= 0) {
+ if (ioctl(fd, PTP_MASK_CLEAR_ALL)) {
+ perror("PTP_MASK_CLEAR_ALL");
+ } else if (ioctl(fd, PTP_MASK_EN_SINGLE, (unsigned int *)&channel)) {
+ perror("PTP_MASK_EN_SINGLE");
+ } else {
+ printf("Channel %d exclusively enabled. Check on debugfs.\n", channel);
+ printf("Press any key to continue\n.");
+ getchar();
+ }
+ }
+
close(fd);
return 0;
}
diff --git a/tools/testing/selftests/tc-testing/Makefile b/tools/testing/selftests/tc-testing/Makefile
index 3c4b7fa05075..b1fa2e177e2f 100644
--- a/tools/testing/selftests/tc-testing/Makefile
+++ b/tools/testing/selftests/tc-testing/Makefile
@@ -28,4 +28,4 @@ $(OUTPUT)/%.o: %.c
$(LLC) -march=bpf -mcpu=$(CPU) $(LLC_FLAGS) -filetype=obj -o $@
TEST_PROGS += ./tdc.sh
-TEST_FILES := tdc*.py Tdc*.py plugins plugin-lib tc-tests
+TEST_FILES := tdc*.py Tdc*.py plugins plugin-lib tc-tests scripts
diff --git a/tools/testing/selftests/tc-testing/README b/tools/testing/selftests/tc-testing/README
index b0954c873e2f..be7b00799b3e 100644
--- a/tools/testing/selftests/tc-testing/README
+++ b/tools/testing/selftests/tc-testing/README
@@ -9,8 +9,7 @@ execute them inside a network namespace dedicated to the task.
REQUIREMENTS
------------
-* Minimum Python version of 3.4. Earlier 3.X versions may work but are not
- guaranteed.
+* Minimum Python version of 3.8.
* The kernel must have network namespace support if using nsPlugin
@@ -96,6 +95,15 @@ the stdout with a regular expression.
Each of the commands in any stage will run in a shell instance.
+Each test is an atomic unit. A test that for whatever reason spans multiple test
+definitions is a bug.
+
+A test that runs inside a namespace (requires "nsPlugin") will run in parallel
+with other tests.
+
+Tests that use netdevsim or don't run inside a namespace run serially with
+respect to each other.
+
USER-DEFINED CONSTANTS
----------------------
@@ -116,59 +124,6 @@ COMMAND LINE ARGUMENTS
Run tdc.py -h to see the full list of available arguments.
-usage: tdc.py [-h] [-p PATH] [-D DIR [DIR ...]] [-f FILE [FILE ...]]
- [-c [CATG [CATG ...]]] [-e ID [ID ...]] [-l] [-s] [-i] [-v] [-N]
- [-d DEVICE] [-P] [-n] [-V]
-
-Linux TC unit tests
-
-optional arguments:
- -h, --help show this help message and exit
- -p PATH, --path PATH The full path to the tc executable to use
- -v, --verbose Show the commands that are being run
- -N, --notap Suppress tap results for command under test
- -d DEVICE, --device DEVICE
- Execute test cases that use a physical device, where
- DEVICE is its name. (If not defined, tests that require
- a physical device will be skipped)
- -P, --pause Pause execution just before post-suite stage
-
-selection:
- select which test cases: files plus directories; filtered by categories
- plus testids
-
- -D DIR [DIR ...], --directory DIR [DIR ...]
- Collect tests from the specified directory(ies)
- (default [tc-tests])
- -f FILE [FILE ...], --file FILE [FILE ...]
- Run tests from the specified file(s)
- -c [CATG [CATG ...]], --category [CATG [CATG ...]]
- Run tests only from the specified category/ies, or if
- no category/ies is/are specified, list known
- categories.
- -e ID [ID ...], --execute ID [ID ...]
- Execute the specified test cases with specified IDs
-
-action:
- select action to perform on selected test cases
-
- -l, --list List all test cases, or those only within the
- specified category
- -s, --show Display the selected test cases
- -i, --id Generate ID numbers for new test cases
-
-netns:
- options for nsPlugin (run commands in net namespace)
-
- -N, --no-namespace
- Do not run commands in a network namespace.
-
-valgrind:
- options for valgrindPlugin (run command under test under Valgrind)
-
- -V, --valgrind Run commands under valgrind
-
-
PLUGIN ARCHITECTURE
-------------------
diff --git a/tools/testing/selftests/tc-testing/TdcPlugin.py b/tools/testing/selftests/tc-testing/TdcPlugin.py
index 79f3ca8617c9..aae85ce4f776 100644
--- a/tools/testing/selftests/tc-testing/TdcPlugin.py
+++ b/tools/testing/selftests/tc-testing/TdcPlugin.py
@@ -5,10 +5,10 @@ class TdcPlugin:
super().__init__()
print(' -- {}.__init__'.format(self.sub_class))
- def pre_suite(self, testcount, testidlist):
+ def pre_suite(self, testcount, testlist):
'''run commands before test_runner goes into a test loop'''
self.testcount = testcount
- self.testidlist = testidlist
+ self.testlist = testlist
if self.args.verbose > 1:
print(' -- {}.pre_suite'.format(self.sub_class))
diff --git a/tools/testing/selftests/tc-testing/TdcResults.py b/tools/testing/selftests/tc-testing/TdcResults.py
index 1e4d95fdf8d0..e56817b97f08 100644
--- a/tools/testing/selftests/tc-testing/TdcResults.py
+++ b/tools/testing/selftests/tc-testing/TdcResults.py
@@ -59,7 +59,8 @@ class TestResult:
return self.steps
class TestSuiteReport():
- _testsuite = []
+ def __init__(self):
+ self._testsuite = []
def add_resultdata(self, result_data):
if isinstance(result_data, TestResult):
diff --git a/tools/testing/selftests/tc-testing/config b/tools/testing/selftests/tc-testing/config
index 5aa8705751f0..012aa33b341b 100644
--- a/tools/testing/selftests/tc-testing/config
+++ b/tools/testing/selftests/tc-testing/config
@@ -1,12 +1,21 @@
#
+# Network
+#
+
+CONFIG_DUMMY=y
+CONFIG_VETH=y
+
+#
# Core Netfilter Configuration
#
+CONFIG_NETFILTER_ADVANCED=y
CONFIG_NF_CONNTRACK=m
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_ZONES=y
CONFIG_NF_CONNTRACK_LABELS=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_FLOW_TABLE=m
+CONFIG_NF_TABLES=m
CONFIG_NF_NAT=m
CONFIG_NETFILTER_XT_TARGET_LOG=m
diff --git a/tools/testing/selftests/tc-testing/plugin-lib/nsPlugin.py b/tools/testing/selftests/tc-testing/plugin-lib/nsPlugin.py
index 9539cffa9e5e..b62429b0fcdb 100644
--- a/tools/testing/selftests/tc-testing/plugin-lib/nsPlugin.py
+++ b/tools/testing/selftests/tc-testing/plugin-lib/nsPlugin.py
@@ -3,35 +3,96 @@ import signal
from string import Template
import subprocess
import time
+from multiprocessing import Pool
+from functools import cached_property
from TdcPlugin import TdcPlugin
from tdc_config import *
+def prepare_suite(obj, test):
+ original = obj.args.NAMES
+
+ if 'skip' in test and test['skip'] == 'yes':
+ return
+
+ if 'nsPlugin' not in test['plugins']:
+ return
+
+ shadow = {}
+ shadow['IP'] = original['IP']
+ shadow['TC'] = original['TC']
+ shadow['NS'] = '{}-{}'.format(original['NS'], test['random'])
+ shadow['DEV0'] = '{}id{}'.format(original['DEV0'], test['id'])
+ shadow['DEV1'] = '{}id{}'.format(original['DEV1'], test['id'])
+ shadow['DUMMY'] = '{}id{}'.format(original['DUMMY'], test['id'])
+ shadow['DEV2'] = original['DEV2']
+ obj.args.NAMES = shadow
+
+ if obj.args.namespace:
+ obj._ns_create()
+ else:
+ obj._ports_create()
+
+ # Make sure the netns is visible in the fs
+ while True:
+ obj._proc_check()
+ try:
+ ns = obj.args.NAMES['NS']
+ f = open('/run/netns/{}'.format(ns))
+ f.close()
+ break
+ except:
+ time.sleep(0.1)
+ continue
+
+ obj.args.NAMES = original
+
class SubPlugin(TdcPlugin):
def __init__(self):
self.sub_class = 'ns/SubPlugin'
super().__init__()
- def pre_suite(self, testcount, testidlist):
- '''run commands before test_runner goes into a test loop'''
- super().pre_suite(testcount, testidlist)
+ def pre_suite(self, testcount, testlist):
+ from itertools import cycle
- if self.args.namespace:
- self._ns_create()
- else:
- self._ports_create()
+ super().pre_suite(testcount, testlist)
- def post_suite(self, index):
- '''run commands after test_runner goes into a test loop'''
- super().post_suite(index)
+ print("Setting up namespaces and devices...")
+
+ with Pool(self.args.mp) as p:
+ it = zip(cycle([self]), testlist)
+ p.starmap(prepare_suite, it)
+
+ def pre_case(self, caseinfo, test_skip):
if self.args.verbose:
- print('{}.post_suite'.format(self.sub_class))
+ print('{}.pre_case'.format(self.sub_class))
+
+ if test_skip:
+ return
+
+
+ def post_case(self):
+ if self.args.verbose:
+ print('{}.post_case'.format(self.sub_class))
if self.args.namespace:
self._ns_destroy()
else:
self._ports_destroy()
+ def post_suite(self, index):
+ if self.args.verbose:
+ print('{}.post_suite'.format(self.sub_class))
+
+ # Make sure we don't leak resources
+ for f in os.listdir('/run/netns/'):
+ cmd = self._replace_keywords("$IP netns del {}".format(f))
+
+ if self.args.verbose > 3:
+ print('_exec_cmd: command "{}"'.format(cmd))
+
+ subprocess.run(cmd, shell=True, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
+
def add_args(self, parser):
super().add_args(parser)
self.argparser_group = self.argparser.add_argument_group(
@@ -77,18 +138,43 @@ class SubPlugin(TdcPlugin):
print('adjust_command: return command [{}]'.format(command))
return command
- def _ports_create(self):
- cmd = '$IP link add $DEV0 type veth peer name $DEV1'
- self._exec_cmd('pre', cmd)
- cmd = '$IP link set $DEV0 up'
- self._exec_cmd('pre', cmd)
+ def _ports_create_cmds(self):
+ cmds = []
+
+ cmds.append(self._replace_keywords('link add $DEV0 type veth peer name $DEV1'))
+ cmds.append(self._replace_keywords('link set $DEV0 up'))
+ cmds.append(self._replace_keywords('link add $DUMMY type dummy'))
if not self.args.namespace:
- cmd = '$IP link set $DEV1 up'
- self._exec_cmd('pre', cmd)
+ cmds.append(self._replace_keywords('link set $DEV1 up'))
+
+ return cmds
+
+ def _ports_create(self):
+ self._exec_cmd_batched('pre', self._ports_create_cmds())
+
+ def _ports_destroy_cmd(self):
+ return self._replace_keywords('link del $DEV0')
def _ports_destroy(self):
- cmd = '$IP link del $DEV0'
- self._exec_cmd('post', cmd)
+ self._exec_cmd('post', self._ports_destroy_cmd())
+
+ def _ns_create_cmds(self):
+ cmds = []
+
+ if self.args.namespace:
+ ns = self.args.NAMES['NS']
+
+ cmds.append(self._replace_keywords('netns add {}'.format(ns)))
+ cmds.append(self._replace_keywords('link set $DEV1 netns {}'.format(ns)))
+ cmds.append(self._replace_keywords('link set $DUMMY netns {}'.format(ns)))
+ cmds.append(self._replace_keywords('netns exec {} $IP link set $DEV1 up'.format(ns)))
+ cmds.append(self._replace_keywords('netns exec {} $IP link set $DUMMY up'.format(ns)))
+
+ if self.args.device:
+ cmds.append(self._replace_keywords('link set $DEV2 netns {}'.format(ns)))
+ cmds.append(self._replace_keywords('netns exec {} $IP link set $DEV2 up'.format(ns)))
+
+ return cmds
def _ns_create(self):
'''
@@ -96,18 +182,10 @@ class SubPlugin(TdcPlugin):
the required network devices for it.
'''
self._ports_create()
- if self.args.namespace:
- cmd = '$IP netns add {}'.format(self.args.NAMES['NS'])
- self._exec_cmd('pre', cmd)
- cmd = '$IP link set $DEV1 netns {}'.format(self.args.NAMES['NS'])
- self._exec_cmd('pre', cmd)
- cmd = '$IP -n {} link set $DEV1 up'.format(self.args.NAMES['NS'])
- self._exec_cmd('pre', cmd)
- if self.args.device:
- cmd = '$IP link set $DEV2 netns {}'.format(self.args.NAMES['NS'])
- self._exec_cmd('pre', cmd)
- cmd = '$IP -n {} link set $DEV2 up'.format(self.args.NAMES['NS'])
- self._exec_cmd('pre', cmd)
+ self._exec_cmd_batched('pre', self._ns_create_cmds())
+
+ def _ns_destroy_cmd(self):
+ return self._replace_keywords('netns delete {}'.format(self.args.NAMES['NS']))
def _ns_destroy(self):
'''
@@ -115,35 +193,49 @@ class SubPlugin(TdcPlugin):
devices as well)
'''
if self.args.namespace:
- cmd = '$IP netns delete {}'.format(self.args.NAMES['NS'])
- self._exec_cmd('post', cmd)
+ self._exec_cmd('post', self._ns_destroy_cmd())
+ self._ports_destroy()
+
+ @cached_property
+ def _proc(self):
+ ip = self._replace_keywords("$IP -b -")
+ proc = subprocess.Popen(ip,
+ shell=True,
+ stdin=subprocess.PIPE,
+ env=ENVIR)
+
+ return proc
+
+ def _proc_check(self):
+ proc = self._proc
+
+ proc.poll()
+
+ if proc.returncode is not None and proc.returncode != 0:
+ raise RuntimeError("iproute2 exited with an error code")
def _exec_cmd(self, stage, command):
'''
Perform any required modifications on an executable command, then run
it in a subprocess and return the results.
'''
- if '$' in command:
- command = self._replace_keywords(command)
- self.adjust_command(stage, command)
- if self.args.verbose:
+ if self.args.verbose > 3:
print('_exec_cmd: command "{}"'.format(command))
- proc = subprocess.Popen(command,
- shell=True,
- stdout=subprocess.PIPE,
- stderr=subprocess.PIPE,
- env=ENVIR)
- (rawout, serr) = proc.communicate()
- if proc.returncode != 0 and len(serr) > 0:
- foutput = serr.decode("utf-8")
- else:
- foutput = rawout.decode("utf-8")
+ proc = self._proc
+
+ proc.stdin.write((command + '\n').encode())
+ proc.stdin.flush()
+
+ if self.args.verbose > 3:
+ print('_exec_cmd proc: {}'.format(proc))
+
+ self._proc_check()
- proc.stdout.close()
- proc.stderr.close()
- return proc, foutput
+ def _exec_cmd_batched(self, stage, commands):
+ for cmd in commands:
+ self._exec_cmd(stage, cmd)
def _replace_keywords(self, cmd):
"""
diff --git a/tools/testing/selftests/tc-testing/plugin-lib/rootPlugin.py b/tools/testing/selftests/tc-testing/plugin-lib/rootPlugin.py
index e36775bd4d12..8762c0f4a095 100644
--- a/tools/testing/selftests/tc-testing/plugin-lib/rootPlugin.py
+++ b/tools/testing/selftests/tc-testing/plugin-lib/rootPlugin.py
@@ -10,9 +10,9 @@ class SubPlugin(TdcPlugin):
self.sub_class = 'root/SubPlugin'
super().__init__()
- def pre_suite(self, testcount, testidlist):
+ def pre_suite(self, testcount, testlist):
# run commands before test_runner goes into a test loop
- super().pre_suite(testcount, testidlist)
+ super().pre_suite(testcount, testlist)
if os.geteuid():
print('This script must be run with root privileges', file=sys.stderr)
diff --git a/tools/testing/selftests/tc-testing/plugin-lib/valgrindPlugin.py b/tools/testing/selftests/tc-testing/plugin-lib/valgrindPlugin.py
index 4bb866575ea1..c6f61649c430 100644
--- a/tools/testing/selftests/tc-testing/plugin-lib/valgrindPlugin.py
+++ b/tools/testing/selftests/tc-testing/plugin-lib/valgrindPlugin.py
@@ -25,9 +25,10 @@ class SubPlugin(TdcPlugin):
self._tsr = TestSuiteReport()
super().__init__()
- def pre_suite(self, testcount, testidlist):
+ def pre_suite(self, testcount, testlist):
'''run commands before test_runner goes into a test loop'''
- super().pre_suite(testcount, testidlist)
+ self.testidlist = [tidx['id'] for tidx in testlist]
+ super().pre_suite(testcount, testlist)
if self.args.verbose > 1:
print('{}.pre_suite'.format(self.sub_class))
if self.args.valgrind:
diff --git a/tools/testing/selftests/tc-testing/taprio_wait_for_admin.sh b/tools/testing/selftests/tc-testing/scripts/taprio_wait_for_admin.sh
index f5335e8ad6b4..f5335e8ad6b4 100755
--- a/tools/testing/selftests/tc-testing/taprio_wait_for_admin.sh
+++ b/tools/testing/selftests/tc-testing/scripts/taprio_wait_for_admin.sh
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json b/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
index 0de2f79ea329..3d0f9310bde4 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/connmark.json
@@ -6,6 +6,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -30,6 +33,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -54,6 +60,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -78,6 +87,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -102,6 +114,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -126,6 +141,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -150,6 +168,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -174,6 +195,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -198,6 +222,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -222,6 +249,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -246,6 +276,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -271,6 +304,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -295,6 +331,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -320,6 +359,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
@@ -345,6 +387,9 @@
"actions",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action connmark",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/csum.json b/tools/testing/selftests/tc-testing/tc-tests/actions/csum.json
index 072febf25f55..56e11136d0f6 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/csum.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/csum.json
@@ -6,6 +6,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -30,6 +33,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -54,6 +60,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -78,6 +87,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -102,6 +114,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -126,6 +141,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -150,6 +168,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -174,6 +195,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -198,6 +222,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -222,6 +249,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -246,6 +276,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -270,6 +303,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -294,6 +330,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -318,6 +357,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -342,6 +384,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -366,6 +411,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -390,6 +438,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -414,6 +465,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -438,6 +492,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -461,6 +518,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -485,6 +545,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -508,6 +571,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
@@ -533,6 +599,9 @@
"actions",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action csum",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/ct.json b/tools/testing/selftests/tc-testing/tc-tests/actions/ct.json
index bd843ab00a58..7d07c55bb668 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/ct.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/ct.json
@@ -6,6 +6,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -30,6 +33,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -54,6 +60,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -78,6 +87,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -102,6 +114,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -126,6 +141,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -150,6 +168,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -174,6 +195,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -198,6 +222,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -222,6 +249,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -246,6 +276,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -270,6 +303,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -294,6 +330,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -318,6 +357,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -342,6 +384,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -366,6 +411,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -390,6 +438,9 @@
"actions",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ct",
@@ -415,6 +466,9 @@
"ct",
"scapy"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"plugins": {
"requires": [
"nsPlugin",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/ctinfo.json b/tools/testing/selftests/tc-testing/tc-tests/actions/ctinfo.json
index d9710c067eb7..bb54d71241a0 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/ctinfo.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/ctinfo.json
@@ -6,6 +6,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC action flush action ctinfo",
@@ -30,6 +33,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ctinfo",
@@ -54,6 +60,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC action flush action ctinfo",
@@ -78,6 +87,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC action flush action ctinfo",
@@ -102,6 +114,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ctinfo",
@@ -132,6 +147,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ctinfo",
@@ -162,6 +180,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ctinfo",
@@ -192,6 +213,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC action flush action ctinfo",
@@ -219,6 +243,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ctinfo",
@@ -246,6 +273,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ctinfo",
@@ -271,6 +301,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ctinfo",
@@ -295,6 +328,9 @@
"actions",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ctinfo",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/gact.json b/tools/testing/selftests/tc-testing/tc-tests/actions/gact.json
index c652e8c1157d..0fcd52742939 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/gact.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/gact.json
@@ -6,6 +6,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -30,6 +33,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -54,6 +60,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -78,6 +87,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -102,6 +114,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -126,6 +141,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -150,6 +168,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -175,6 +196,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -199,6 +223,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -223,6 +250,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -252,6 +282,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC actions add action reclassify index 101",
"$TC actions add action reclassify index 102",
@@ -273,6 +306,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -298,6 +334,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -323,6 +362,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -348,6 +390,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -373,6 +418,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -398,6 +446,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -422,6 +473,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -448,6 +502,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -473,6 +530,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -497,6 +557,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -521,6 +584,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -544,6 +610,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -568,6 +637,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
@@ -593,6 +665,9 @@
"actions",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gact",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/gate.json b/tools/testing/selftests/tc-testing/tc-tests/actions/gate.json
index e16a4963fdd2..db645c22ad7b 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/gate.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/gate.json
@@ -6,6 +6,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC action flush action gate",
@@ -30,6 +33,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gate",
@@ -54,6 +60,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC action flush action gate",
@@ -78,6 +87,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC action flush action gate",
@@ -102,6 +114,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gate",
@@ -132,6 +147,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gate",
@@ -162,6 +180,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gate",
@@ -192,6 +213,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC action flush action gate",
@@ -219,6 +243,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gate",
@@ -246,6 +273,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gate",
@@ -271,6 +301,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gate",
@@ -295,6 +328,9 @@
"actions",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action gate",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/ife.json b/tools/testing/selftests/tc-testing/tc-tests/actions/ife.json
index 459bcad35810..f587a32e44c4 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/ife.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/ife.json
@@ -6,6 +6,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -30,6 +33,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -54,6 +60,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -78,6 +87,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -102,6 +114,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -126,6 +141,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -150,6 +168,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -174,6 +195,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -196,6 +220,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -220,6 +247,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -244,6 +274,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -268,6 +301,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -292,6 +328,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -316,6 +355,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -340,6 +382,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -364,6 +409,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -386,6 +434,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -410,6 +461,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -434,6 +488,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -458,6 +515,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -482,6 +542,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -506,6 +569,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -530,6 +596,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -554,6 +623,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -578,6 +650,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -600,6 +675,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -624,6 +702,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -648,6 +729,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -672,6 +756,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -696,6 +783,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -720,6 +810,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -744,6 +837,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -768,6 +864,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -792,6 +891,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -816,6 +918,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -840,6 +945,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -864,6 +972,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -888,6 +999,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -912,6 +1026,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -934,6 +1051,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -956,6 +1076,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -980,6 +1103,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -1002,6 +1128,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -1024,6 +1153,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -1046,6 +1178,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -1068,6 +1203,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -1093,6 +1231,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
@@ -1118,6 +1259,9 @@
"actions",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action ife",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json b/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
index 12a2fe0e1472..b53d12909962 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/mirred.json
@@ -6,6 +6,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -30,6 +33,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -55,6 +61,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -81,6 +90,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -105,6 +117,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -129,6 +144,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -153,6 +171,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -178,6 +199,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -202,6 +226,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -226,6 +253,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -250,6 +280,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -274,6 +307,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -298,6 +334,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -322,6 +361,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -346,6 +388,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -370,6 +415,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -392,6 +440,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -417,6 +468,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -442,6 +496,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -467,6 +524,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -491,6 +551,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -514,6 +577,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -538,6 +604,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
@@ -561,6 +630,9 @@
"actions",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mirred",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/mpls.json b/tools/testing/selftests/tc-testing/tc-tests/actions/mpls.json
index 866f0efd0859..b1c5dd27a70d 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/mpls.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/mpls.json
@@ -6,6 +6,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -30,6 +33,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -54,6 +60,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -78,6 +87,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -102,6 +114,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -126,6 +141,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -150,6 +168,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -174,6 +195,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -198,6 +222,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -222,6 +249,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -244,6 +274,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -266,6 +299,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -288,6 +324,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -310,6 +349,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -332,6 +374,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -356,6 +401,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -380,6 +428,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -404,6 +455,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -426,6 +480,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -448,6 +505,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -470,6 +530,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -492,6 +555,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -514,6 +580,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -538,6 +607,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -562,6 +634,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -586,6 +661,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -610,6 +688,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -634,6 +715,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -656,6 +740,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -678,6 +765,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -700,6 +790,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -722,6 +815,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -744,6 +840,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -768,6 +867,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -792,6 +894,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -814,6 +919,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -836,6 +944,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -860,6 +971,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -884,6 +998,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -906,6 +1023,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -930,6 +1050,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -954,6 +1077,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -978,6 +1104,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -1002,6 +1131,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -1024,6 +1156,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -1046,6 +1181,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -1070,6 +1208,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -1094,6 +1235,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -1116,6 +1260,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -1138,6 +1285,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -1163,6 +1313,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -1188,6 +1341,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
@@ -1211,6 +1367,9 @@
"actions",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action mpls",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/nat.json b/tools/testing/selftests/tc-testing/tc-tests/actions/nat.json
index 0a3c491edbc5..ee2792998c89 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/nat.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/nat.json
@@ -6,6 +6,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -30,6 +33,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -54,6 +60,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -78,6 +87,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -102,6 +114,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -126,6 +141,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -150,6 +168,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -174,6 +195,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -203,6 +227,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -232,6 +259,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -261,6 +291,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -285,6 +318,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -309,6 +345,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -333,6 +372,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -357,6 +399,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -381,6 +426,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -405,6 +453,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -429,6 +480,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -453,6 +507,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -477,6 +534,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -501,6 +561,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -525,6 +588,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -549,6 +615,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -573,6 +642,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -597,6 +669,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -622,6 +697,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
@@ -647,6 +725,9 @@
"actions",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action nat",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/pedit.json b/tools/testing/selftests/tc-testing/tc-tests/actions/pedit.json
index 72cdc3c800a5..37c410332174 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/pedit.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/pedit.json
@@ -6,6 +6,9 @@
"actions",
"pedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -30,6 +33,9 @@
"actions",
"pedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -56,6 +62,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -81,6 +90,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -106,6 +118,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -131,6 +146,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -156,6 +174,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -206,6 +227,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -231,6 +255,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -256,6 +283,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -281,6 +311,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -306,6 +339,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -331,6 +367,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -356,6 +395,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -381,6 +423,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -406,6 +451,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -431,6 +479,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -456,6 +507,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -481,6 +535,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -506,6 +563,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -531,6 +591,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -556,6 +619,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -581,6 +647,9 @@
"pedit",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -606,6 +675,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -631,6 +703,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -656,6 +731,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -681,6 +759,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -706,6 +787,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -731,6 +815,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -756,6 +843,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -779,6 +869,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -804,6 +897,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -829,6 +925,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -854,6 +953,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -879,6 +981,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -904,6 +1009,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -929,6 +1037,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -954,6 +1065,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -979,6 +1093,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1004,6 +1121,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1029,6 +1149,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1054,6 +1177,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1079,6 +1205,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1104,6 +1233,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1129,6 +1261,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1154,6 +1289,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1179,6 +1317,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1204,6 +1345,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1229,6 +1373,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1254,6 +1401,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1279,6 +1429,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1304,6 +1457,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1329,6 +1485,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1354,6 +1513,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1379,6 +1541,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1404,6 +1569,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1429,6 +1597,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1454,6 +1625,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1479,6 +1653,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1504,6 +1681,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1554,6 +1734,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1579,6 +1762,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1629,6 +1815,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1654,6 +1843,9 @@
"pedit",
"layered_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1680,6 +1872,9 @@
"layered_op",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
@@ -1706,6 +1901,9 @@
"layered_op",
"raw_op"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action pedit",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/police.json b/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
index b7205a069534..dd8109768f8f 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
@@ -6,6 +6,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -30,6 +33,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -55,6 +61,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -79,6 +88,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -103,6 +115,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -127,6 +142,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -151,6 +169,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -175,6 +196,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -199,6 +223,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -223,6 +250,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -247,6 +277,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -271,6 +304,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -295,6 +331,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -319,6 +358,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -343,6 +385,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -367,6 +412,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -391,6 +439,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -415,6 +466,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -439,6 +493,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -463,6 +520,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -488,6 +548,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -520,6 +583,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -545,6 +611,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -577,6 +646,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC actions add action police rate 1mbit burst 100k index 1",
"$TC actions add action police rate 2mbit burst 200k index 2",
@@ -603,6 +675,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -627,6 +702,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -651,6 +729,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -675,6 +756,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -699,6 +783,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -723,6 +810,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -747,6 +837,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -772,6 +865,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -796,6 +892,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
@@ -820,6 +919,9 @@
"actions",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action police",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/sample.json b/tools/testing/selftests/tc-testing/tc-tests/actions/sample.json
index 148d8bcb8606..af35e2f30a95 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/sample.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/sample.json
@@ -6,6 +6,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -30,6 +33,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -54,6 +60,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -78,6 +87,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -102,6 +114,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -126,6 +141,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -150,6 +168,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -174,6 +195,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -196,6 +220,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -218,6 +245,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -240,6 +270,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -262,6 +295,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -284,6 +320,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -308,6 +347,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -332,6 +374,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -356,6 +401,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -380,6 +428,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -402,6 +453,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -424,6 +478,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -446,6 +503,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -468,6 +528,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -492,6 +555,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -516,6 +582,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -541,6 +610,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -566,6 +638,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -591,6 +666,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -616,6 +694,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -641,6 +722,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
@@ -666,6 +750,9 @@
"actions",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action sample",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/simple.json b/tools/testing/selftests/tc-testing/tc-tests/actions/simple.json
index e0c5f060ccb9..ac960e70dc9b 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/simple.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/simple.json
@@ -6,6 +6,9 @@
"actions",
"simple"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action simple",
@@ -30,6 +33,9 @@
"actions",
"simple"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action simple",
@@ -54,6 +60,9 @@
"actions",
"simple"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action simple",
@@ -79,6 +88,9 @@
"actions",
"simple"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action simple",
@@ -106,6 +118,9 @@
"actions",
"simple"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action simple",
@@ -131,6 +146,9 @@
"actions",
"simple"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action simple",
@@ -158,6 +176,9 @@
"actions",
"simple"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action simple",
@@ -183,6 +204,9 @@
"actions",
"simple"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action simple",
@@ -213,6 +237,9 @@
"actions",
"simple"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action simple",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/skbedit.json b/tools/testing/selftests/tc-testing/tc-tests/actions/skbedit.json
index 9cdd2e31ac2c..27ba0f72e904 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/skbedit.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/skbedit.json
@@ -6,6 +6,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -30,6 +33,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -54,6 +60,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -76,6 +85,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -100,6 +112,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -124,6 +139,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -146,6 +164,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -168,6 +189,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -193,6 +217,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -217,6 +244,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -241,6 +271,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -265,6 +298,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -289,6 +325,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -313,6 +352,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -337,6 +379,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -361,6 +406,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -385,6 +433,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -409,6 +460,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -433,6 +487,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -457,6 +514,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -481,6 +541,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -505,6 +568,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -529,6 +595,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -557,6 +626,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -581,6 +653,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -603,6 +678,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -628,6 +706,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC actions add action skbedit mark 500",
"$TC actions add action skbedit mark 501",
@@ -653,6 +734,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -678,6 +762,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
@@ -702,6 +789,9 @@
"actions",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbedit",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/skbmod.json b/tools/testing/selftests/tc-testing/tc-tests/actions/skbmod.json
index 742f2290973e..33ed7a8099e2 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/skbmod.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/skbmod.json
@@ -6,6 +6,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -30,6 +33,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -54,6 +60,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -78,6 +87,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -102,6 +114,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -126,6 +141,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -150,6 +168,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -174,6 +195,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -198,6 +222,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -222,6 +249,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -246,6 +276,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -270,6 +303,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -294,6 +330,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -323,6 +362,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -352,6 +394,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -377,6 +422,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC actions add action skbmod set etype 0x0001",
"$TC actions add action skbmod set etype 0x0011",
@@ -400,6 +448,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
@@ -425,6 +476,9 @@
"actions",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action skbmod",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/tunnel_key.json b/tools/testing/selftests/tc-testing/tc-tests/actions/tunnel_key.json
index b5b47fbf6c00..0b6f0b5aeaad 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/tunnel_key.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/tunnel_key.json
@@ -6,6 +6,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -30,6 +33,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -59,6 +65,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -88,6 +97,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -117,6 +129,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -146,6 +161,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -175,6 +193,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -204,6 +225,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -228,6 +252,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -252,6 +279,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -281,6 +311,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -305,6 +338,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -334,6 +370,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -358,6 +397,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -387,6 +429,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -411,6 +456,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -435,6 +483,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -459,6 +510,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -488,6 +542,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -517,6 +574,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -541,6 +601,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -565,6 +628,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -594,6 +660,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -618,6 +687,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -642,6 +714,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -666,6 +741,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -690,6 +768,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -714,6 +795,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -738,6 +822,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -762,6 +849,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -786,6 +876,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -811,6 +904,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -836,6 +932,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -864,6 +963,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -892,6 +994,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -917,6 +1022,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -941,6 +1049,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -966,6 +1077,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action tunnel_key",
@@ -991,6 +1105,9 @@
"actions",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"dependsOn": "$TC actions add action tunnel_key help 2>&1 | grep -q nofrag",
"setup": [
[
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/vlan.json b/tools/testing/selftests/tc-testing/tc-tests/actions/vlan.json
index 2aad4caa8581..e5fe8762978a 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/vlan.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/vlan.json
@@ -6,6 +6,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -30,6 +33,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -54,6 +60,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -78,6 +87,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -102,6 +114,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -126,6 +141,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -150,6 +168,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -174,6 +195,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -196,6 +220,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -220,6 +247,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -242,6 +272,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -264,6 +297,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -286,6 +322,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -310,6 +349,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -334,6 +376,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -358,6 +403,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -382,6 +430,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -406,6 +457,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -430,6 +484,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -452,6 +509,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -476,6 +536,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -500,6 +563,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -524,6 +590,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -549,6 +618,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -574,6 +646,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -599,6 +674,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -624,6 +702,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -647,6 +728,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -670,6 +754,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -696,6 +783,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -720,6 +810,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -745,6 +838,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -769,6 +865,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -792,6 +891,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -816,6 +918,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
@@ -839,6 +944,9 @@
"actions",
"vlan"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action vlan",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/actions/xt.json b/tools/testing/selftests/tc-testing/tc-tests/actions/xt.json
index c9f002aea6d4..1a92e8898fec 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/actions/xt.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/actions/xt.json
@@ -6,6 +6,9 @@
"actions",
"xt"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action xt",
@@ -30,6 +33,9 @@
"actions",
"xt"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action xt",
@@ -60,6 +66,9 @@
"actions",
"xt"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action xt",
@@ -90,6 +99,9 @@
"actions",
"xt"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action xt",
@@ -120,6 +132,9 @@
"actions",
"xt"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC action flush action xt",
@@ -147,6 +162,9 @@
"actions",
"xt"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action xt",
@@ -174,6 +192,9 @@
"actions",
"xt"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action xt",
@@ -199,6 +220,9 @@
"actions",
"xt"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
[
"$TC actions flush action xt",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/bpf.json b/tools/testing/selftests/tc-testing/tc-tests/filters/bpf.json
index 1f0cae474db2..013fb983bc3f 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/filters/bpf.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/filters/bpf.json
@@ -51,7 +51,10 @@
"bpf-filter"
],
"plugins": {
- "requires": "buildebpfPlugin"
+ "requires": [
+ "buildebpfPlugin",
+ "nsPlugin"
+ ]
},
"setup": [
"$TC qdisc add dev $DEV1 ingress"
@@ -73,7 +76,10 @@
"bpf-filter"
],
"plugins": {
- "requires": "buildebpfPlugin"
+ "requires": [
+ "buildebpfPlugin",
+ "nsPlugin"
+ ]
},
"setup": [
"$TC qdisc add dev $DEV1 ingress"
diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/fw.json b/tools/testing/selftests/tc-testing/tc-tests/filters/fw.json
index 5272049566d6..a9b071e1354b 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/filters/fw.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/filters/fw.json
@@ -53,111 +53,6 @@
"plugins": {
"requires": "nsPlugin"
},
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
- "plugins": {
- "requires": "nsPlugin"
- },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -173,14 +68,15 @@
{
"id": "c591",
"name": "Add fw filter with action ok by reference",
- "__comment": "We add sleep here because action might have not been deleted by workqueue just yet. Remove this when the behaviour is fixed.",
"category": [
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress",
- "/bin/sleep 1",
"$TC actions add action gact ok index 1"
],
"cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action gact index 1",
@@ -189,9 +85,7 @@
"matchPattern": "handle 0x1.*gact action pass.*index 1 ref 2 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DEV1 ingress",
- "/bin/sleep 1",
- "$TC actions del action gact index 1"
+ "$TC qdisc del dev $DEV1 ingress"
]
},
{
@@ -201,6 +95,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -216,14 +113,15 @@
{
"id": "38b3",
"name": "Add fw filter with action continue by reference",
- "__comment": "We add sleep here because action might have not been deleted by workqueue just yet. Remove this when the behaviour is fixed.",
"category": [
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress",
- "/bin/sleep 1",
"$TC actions add action gact continue index 1"
],
"cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action gact index 1",
@@ -232,9 +130,7 @@
"matchPattern": "handle 0x1.*gact action continue.*index 1 ref 2 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DEV1 ingress",
- "/bin/sleep 1",
- "$TC actions del action gact index 1"
+ "$TC qdisc del dev $DEV1 ingress"
]
},
{
@@ -244,6 +140,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -259,14 +158,15 @@
{
"id": "6753",
"name": "Add fw filter with action pipe by reference",
- "__comment": "We add sleep here because action might have not been deleted by workqueue just yet.",
"category": [
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress",
- "/bin/sleep 1",
"$TC actions add action gact pipe index 1"
],
"cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action gact index 1",
@@ -275,9 +175,7 @@
"matchPattern": "handle 0x1.*gact action pipe.*index 1 ref 2 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DEV1 ingress",
- "/bin/sleep 1",
- "$TC actions del action gact index 1"
+ "$TC qdisc del dev $DEV1 ingress"
]
},
{
@@ -287,6 +185,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -302,14 +203,15 @@
{
"id": "6dc6",
"name": "Add fw filter with action drop by reference",
- "__comment": "We add sleep here because action might have not been deleted by workqueue just yet.",
"category": [
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress",
- "/bin/sleep 1",
"$TC actions add action gact drop index 1"
],
"cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action gact index 1",
@@ -318,9 +220,7 @@
"matchPattern": "handle 0x1.*gact action drop.*index 1 ref 2 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DEV1 ingress",
- "/bin/sleep 1",
- "$TC actions del action gact index 1"
+ "$TC qdisc del dev $DEV1 ingress"
]
},
{
@@ -330,6 +230,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -345,14 +248,15 @@
{
"id": "3bc2",
"name": "Add fw filter with action reclassify by reference",
- "__comment": "We add sleep here because action might have not been deleted by workqueue just yet.",
"category": [
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress",
- "/bin/sleep 1",
"$TC actions add action gact reclassify index 1"
],
"cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action gact index 1",
@@ -361,9 +265,7 @@
"matchPattern": "handle 0x1.*gact action reclassify.*index 1 ref 2 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DEV1 ingress",
- "/bin/sleep 1",
- "$TC actions del action gact index 1"
+ "$TC qdisc del dev $DEV1 ingress"
]
},
{
@@ -373,6 +275,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -388,14 +293,15 @@
{
"id": "36f7",
"name": "Add fw filter with action jump 10 by reference",
- "__comment": "We add sleep here because action might have not been deleted by workqueue just yet.",
"category": [
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress",
- "/bin/sleep 1",
"$TC actions add action gact jump 10 index 1"
],
"cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 fw action gact index 1",
@@ -404,9 +310,7 @@
"matchPattern": "handle 0x1.*gact action jump 10.*index 1 ref 2 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DEV1 ingress",
- "/bin/sleep 1",
- "$TC actions del action gact index 1"
+ "$TC qdisc del dev $DEV1 ingress"
]
},
{
@@ -416,6 +320,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -435,6 +342,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -454,6 +364,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -473,6 +386,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -492,6 +408,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -511,6 +430,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -530,6 +452,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -549,6 +474,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -568,6 +496,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -587,6 +518,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -606,6 +540,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -625,6 +562,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -644,6 +584,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -663,6 +606,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -682,6 +628,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -701,6 +650,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -720,6 +672,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -739,6 +694,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -758,6 +716,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -777,6 +738,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -796,6 +760,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -815,6 +782,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -834,6 +804,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -853,6 +826,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -872,6 +848,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -891,6 +870,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -910,6 +892,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -929,6 +914,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -948,6 +936,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -967,6 +958,9 @@
"filter",
"fw"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
"$TC qdisc add dev $DEV1 ingress"
],
@@ -1096,7 +1090,6 @@
{
"id": "0e99",
"name": "Del single fw filter x1",
- "__comment__": "First of two tests to check that one filter is there and the other isn't",
"category": [
"filter",
"fw"
@@ -1121,7 +1114,6 @@
{
"id": "f54c",
"name": "Del single fw filter x2",
- "__comment__": "Second of two tests to check that one filter is there and the other isn't",
"category": [
"filter",
"fw"
@@ -1351,5 +1343,54 @@
"teardown": [
"$TC qdisc del dev $DEV1 ingress"
]
+ },
+ {
+ "id": "e470",
+ "name": "Try to delete class referenced by fw after a replace",
+ "category": [
+ "filter",
+ "fw"
+ ],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
+ "setup": [
+ "$TC qdisc add dev $DEV1 parent root handle 10: drr",
+ "$TC class add dev $DEV1 parent root classid 1 drr",
+ "$TC filter add dev $DEV1 parent 10: handle 1 prio 1 fw classid 10:1 action ok",
+ "$TC filter replace dev $DEV1 parent 10: handle 1 prio 1 fw classid 10:1 action drop"
+ ],
+ "cmdUnderTest": "$TC class delete dev $DEV1 parent 10: classid 10:1",
+ "expExitCode": "2",
+ "verifyCmd": "$TC class show dev $DEV1",
+ "matchPattern": "class drr 10:1",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DEV1 parent root drr"
+ ]
+ },
+ {
+ "id": "ec1a",
+ "name": "Replace fw classid with nil",
+ "category": [
+ "filter",
+ "fw"
+ ],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
+ "setup": [
+ "$TC qdisc add dev $DEV1 parent root handle 10: drr",
+ "$TC class add dev $DEV1 parent root classid 1 drr",
+ "$TC filter add dev $DEV1 parent 10: handle 1 prio 1 fw classid 10:1 action ok"
+ ],
+ "cmdUnderTest": "$TC filter replace dev $DEV1 parent 10: handle 1 prio 1 fw action drop",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter show dev $DEV1 parent 10:",
+ "matchPattern": "fw chain 0 handle 0x1",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DEV1 parent root drr"
+ ]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/matchall.json b/tools/testing/selftests/tc-testing/tc-tests/filters/matchall.json
index 2df68017dfb8..afa1b9b0c856 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/filters/matchall.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/filters/matchall.json
@@ -6,8 +6,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ip matchall action ok",
@@ -16,8 +18,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*gact action pass.*ref 1 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -27,8 +28,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY root handle 1: prio"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent 1: handle 0x1 prio 1 protocol ip matchall action ok",
@@ -37,8 +40,7 @@
"matchPattern": "^filter parent 1: protocol ip pref 1 matchall.*handle 0x1.*gact action pass.*ref 1 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY root handle 1: prio",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY root handle 1: prio"
]
},
{
@@ -48,8 +50,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ipv6 matchall action drop",
@@ -58,8 +62,7 @@
"matchPattern": "^filter parent ffff: protocol ipv6 pref 1 matchall.*handle 0x1.*gact action drop.*ref 1 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -69,8 +72,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY root handle 1: prio"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent 1: handle 0x1 prio 1 protocol ipv6 matchall action drop",
@@ -79,8 +84,7 @@
"matchPattern": "^filter parent 1: protocol ipv6 pref 1 matchall.*handle 0x1.*gact action drop.*ref 1 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY root handle 1: prio",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY root handle 1: prio"
]
},
{
@@ -90,8 +94,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 65535 protocol ipv4 matchall action pass",
@@ -100,8 +106,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 65535 matchall.*handle 0x1.*gact action pass.*ref 1 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -111,8 +116,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY root handle 1: prio"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent 1: handle 0x1 prio 65535 protocol ipv4 matchall action pass",
@@ -121,8 +128,7 @@
"matchPattern": "^filter parent 1: protocol ip pref 65535 matchall.*handle 0x1.*gact action pass.*ref 1 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY root handle 1: prio",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY root handle 1: prio"
]
},
{
@@ -132,8 +138,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 655355 protocol ipv4 matchall action pass",
@@ -142,8 +150,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 655355 matchall.*handle 0x1.*gact action pass.*ref 1 bind 1",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -153,8 +160,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY root handle 1: prio"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent 1: handle 0x1 prio 655355 protocol ipv4 matchall action pass",
@@ -163,8 +172,7 @@
"matchPattern": "^filter parent 1: protocol ip pref 655355 matchall.*handle 0x1.*gact action pass.*ref 1 bind 1",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY root handle 1: prio",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY root handle 1: prio"
]
},
{
@@ -174,8 +182,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0xffffffff prio 1 protocol all matchall action continue",
@@ -184,8 +194,7 @@
"matchPattern": "^filter parent ffff: protocol all pref 1 matchall.*handle 0xffffffff.*gact action continue.*ref 1 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -195,8 +204,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY root handle 1: prio"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent 1: handle 0xffffffff prio 1 protocol all matchall action continue",
@@ -205,8 +216,7 @@
"matchPattern": "^filter parent 1: protocol all pref 1 matchall.*handle 0xffffffff.*gact action continue.*ref 1 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY root handle 1: prio",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY root handle 1: prio"
]
},
{
@@ -216,8 +226,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol all matchall skip_hw action reclassify",
@@ -226,8 +238,7 @@
"matchPattern": "^filter parent ffff: protocol all pref 1 matchall.*handle 0x1.*skip_hw.*not_in_hw.*gact action reclassify.*ref 1 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -237,8 +248,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY root handle 1: prio"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent 1: handle 0x1 prio 1 protocol all matchall skip_hw action reclassify",
@@ -247,8 +260,7 @@
"matchPattern": "^filter parent 1: protocol all pref 1 matchall.*handle 0x1.*skip_hw.*not_in_hw.*gact action reclassify.*ref 1 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY root handle 1: prio",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY root handle 1: prio"
]
},
{
@@ -258,8 +270,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ipv6 matchall classid 1:1 action pass",
@@ -268,8 +282,7 @@
"matchPattern": "^filter parent ffff: protocol ipv6 pref 1 matchall.*handle 0x1.*flowid 1:1.*gact action pass.*ref 1 bind 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -279,8 +292,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress"
],
"cmdUnderTest": "$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ipv6 matchall classid 6789defg action pass",
@@ -289,8 +304,7 @@
"matchPattern": "^filter protocol ipv6 pref 1 matchall.*handle 0x1.*flowid 6789defg.*gact action pass.*ref 1 bind 1",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -300,8 +314,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ipv6 matchall classid 1:2 action pass"
],
@@ -311,8 +327,7 @@
"matchPattern": "^filter protocol ipv6 pref 1 matchall.*handle 0x1.*flowid 1:2.*gact action pass.*ref 1 bind 1",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -322,8 +337,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol all matchall classid 1:2 action pass",
"$TC filter add dev $DUMMY parent ffff: handle 0x2 prio 2 protocol all matchall classid 1:3 action pass",
@@ -336,8 +353,7 @@
"matchPattern": "^filter protocol all pref.*matchall.*handle.*flowid.*gact action pass",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -347,8 +363,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol all matchall classid 1:2 action pass",
"$TC filter add dev $DUMMY parent ffff: handle 0x2 prio 2 protocol all matchall classid 1:3 action pass",
@@ -361,8 +379,7 @@
"matchPattern": "^filter protocol all pref 2 matchall.*handle 0x2 flowid 1:2.*gact action pass",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -372,8 +389,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol all chain 1 matchall classid 1:1 action pass",
"$TC filter add dev $DUMMY parent ffff: handle 0x1 prio 1 protocol ipv4 chain 2 matchall classid 1:3 action continue"
@@ -384,8 +403,7 @@
"matchPattern": "^filter protocol all pref 1 matchall chain 1 handle 0x1 flowid 1:1.*gact action pass",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -395,8 +413,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions flush action police",
"$TC actions add action police rate 1mbit burst 100k index 199 skip_hw"
@@ -408,7 +428,6 @@
"matchCount": "0",
"teardown": [
"$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
"$TC actions del action police index 199"
]
},
@@ -419,8 +438,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions flush action police",
"$TC actions add action police rate 1mbit burst 100k index 199"
@@ -432,7 +453,6 @@
"matchCount": "0",
"teardown": [
"$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
"$TC actions del action police index 199"
]
},
@@ -443,8 +463,10 @@
"filter",
"matchall"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions flush action police",
"$TC actions add action police rate 1mbit burst 100k index 199"
@@ -456,7 +478,6 @@
"matchCount": "0",
"teardown": [
"$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
"$TC actions del action police index 199"
]
}
diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/route.json b/tools/testing/selftests/tc-testing/tc-tests/filters/route.json
index 1f6f19f02997..8d8de8f65aef 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/filters/route.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/filters/route.json
@@ -177,5 +177,30 @@
"teardown": [
"$TC qdisc del dev $DEV1 ingress"
]
+ },
+ {
+ "id": "b042",
+ "name": "Try to delete class referenced by route after a replace",
+ "category": [
+ "filter",
+ "route"
+ ],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
+ "setup": [
+ "$TC qdisc add dev $DEV1 parent root handle 10: drr",
+ "$TC class add dev $DEV1 parent root classid 1 drr",
+ "$TC filter add dev $DEV1 parent 10: prio 1 route from 10 classid 10:1 action ok",
+ "$TC filter replace dev $DEV1 parent 10: prio 1 route from 5 classid 10:1 action drop"
+ ],
+ "cmdUnderTest": "$TC class delete dev $DEV1 parent 10: classid 10:1",
+ "expExitCode": "2",
+ "verifyCmd": "$TC class show dev $DEV1",
+ "matchPattern": "class drr 10:1",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DEV1 parent root drr"
+ ]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/u32.json b/tools/testing/selftests/tc-testing/tc-tests/filters/u32.json
index bd64a4bf11ab..ddc7c355be0a 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/filters/u32.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/filters/u32.json
@@ -247,5 +247,30 @@
"teardown": [
"$TC qdisc del dev $DEV1 ingress"
]
+ },
+ {
+ "id": "0c37",
+ "name": "Try to delete class referenced by u32 after a replace",
+ "category": [
+ "filter",
+ "u32"
+ ],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
+ "setup": [
+ "$TC qdisc add dev $DEV1 parent root handle 10: drr",
+ "$TC class add dev $DEV1 parent root classid 1 drr",
+ "$TC filter add dev $DEV1 parent 10: prio 1 u32 match icmp type 1 0xff classid 10:1 action ok",
+ "$TC filter replace dev $DEV1 parent 10: prio 1 u32 match icmp type 1 0xff classid 10:1 action drop"
+ ],
+ "cmdUnderTest": "$TC class delete dev $DEV1 parent 10: classid 10:1",
+ "expExitCode": "2",
+ "verifyCmd": "$TC class show dev $DEV1",
+ "matchPattern": "class drr 10:1",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DEV1 parent root drr"
+ ]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/infra/actions.json b/tools/testing/selftests/tc-testing/tc-tests/infra/actions.json
index 16f3a83605e4..1ba96c467754 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/infra/actions.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/infra/actions.json
@@ -6,8 +6,10 @@
"infra",
"pedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC action add action pedit munge offset 0 u8 clear index 1"
],
@@ -17,9 +19,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action pedit"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -29,8 +29,10 @@
"infra",
"mpls"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC action add action mpls pop protocol ipv4 index 1"
],
@@ -40,9 +42,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action mpls"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -52,8 +52,10 @@
"infra",
"bpf"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC action add action bpf bytecode '4,40 0 0 12,21 0 1 2048,6 0 0 262144,6 0 0 0' index 1"
],
@@ -63,9 +65,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action bpf"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -75,8 +75,10 @@
"infra",
"connmark"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action connmark"
],
@@ -86,9 +88,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action connmark"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -98,8 +98,10 @@
"infra",
"csum"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action csum ip4h index 1"
],
@@ -109,9 +111,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action csum"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -121,8 +121,10 @@
"infra",
"ct"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action ct index 1"
],
@@ -132,9 +134,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action ct"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -144,8 +144,10 @@
"infra",
"ctinfo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC action add action ctinfo index 1"
],
@@ -155,9 +157,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action ctinfo"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -167,8 +167,10 @@
"infra",
"gact"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action pass index 1"
],
@@ -178,9 +180,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action gact"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -190,8 +190,10 @@
"infra",
"gate"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC action add action gate priority 1 sched-entry close 100000000ns index 1"
],
@@ -201,9 +203,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action gate"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -213,8 +213,10 @@
"infra",
"ife"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action ife encode allow mark pass index 1"
],
@@ -224,9 +226,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action ife"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -236,8 +236,10 @@
"infra",
"mirred"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action mirred egress mirror index 1 dev lo"
],
@@ -247,9 +249,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action mirred"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -259,8 +259,10 @@
"infra",
"nat"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action nat ingress 192.168.1.1 200.200.200.1"
],
@@ -270,9 +272,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action nat"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -282,8 +282,10 @@
"infra",
"police"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action police rate 1kbit burst 10k index 1"
],
@@ -293,9 +295,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action police"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -305,8 +305,10 @@
"infra",
"sample"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action sample rate 10 group 1 index 1"
],
@@ -316,9 +318,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action sample"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -328,8 +328,10 @@
"infra",
"skbedit"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action skbedit mark 1"
],
@@ -339,9 +341,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action skbedit"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -351,8 +351,10 @@
"infra",
"skbmod"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action skbmod set dmac 11:22:33:44:55:66 index 1"
],
@@ -362,9 +364,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action skbmod"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -374,8 +374,10 @@
"infra",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action tunnel_key set src_ip 10.10.10.1 dst_ip 20.20.20.2 id 1 index 1"
],
@@ -385,9 +387,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action tunnel_key"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -397,8 +397,10 @@
"infra",
"tunnel_key"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC actions add action vlan pop pipe index 1"
],
@@ -408,9 +410,7 @@
"matchPattern": "^filter parent ffff: protocol ip pref 1 matchall.*handle 0x1.*",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy",
- "$TC actions flush action vlan"
+ "$TC qdisc del dev $DUMMY ingress"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/infra/filter.json b/tools/testing/selftests/tc-testing/tc-tests/infra/filter.json
index c4c778e83da2..8d10042b489b 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/infra/filter.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/infra/filter.json
@@ -1,13 +1,15 @@
[
{
"id": "c2b4",
- "name": "soft lockup alarm will be not generated after delete the prio 0 filter of the chain",
+ "name": "Soft lockup alarm will be not generated after delete the prio 0 filter of the chain",
"category": [
"filter",
"chain"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY root handle 1: htb default 1",
"$TC chain add dev $DUMMY",
"$TC filter del dev $DUMMY chain 0 parent 1: prio 0"
@@ -18,8 +20,7 @@
"matchPattern": "chain parent 1: chain 0",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY root handle 1: htb default 1",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY root handle 1: htb default 1"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/cake.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/cake.json
index 1134b72d281d..c4c5f7ba0e0f 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/cake.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/cake.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake bandwidth 1000",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth 1Kbit diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake autorate-ingress",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited autorate-ingress diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake rtt 200",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 200us raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake besteffort",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake diffserv8",
"expExitCode": "0",
@@ -133,8 +122,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv8 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake diffserv4",
"expExitCode": "0",
@@ -156,8 +143,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv4 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -171,7 +157,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake flowblind",
"expExitCode": "0",
@@ -179,8 +164,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 flowblind nonat nowash no-ack-filter split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -194,7 +178,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake dsthost nat",
"expExitCode": "0",
@@ -202,8 +185,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 dsthost nat nowash no-ack-filter split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -217,7 +199,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake hosts wash",
"expExitCode": "0",
@@ -225,8 +206,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 hosts nonat wash no-ack-filter split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -240,7 +220,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake flowblind no-split-gso",
"expExitCode": "0",
@@ -248,8 +227,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 flowblind nonat nowash no-ack-filter no-split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -263,7 +241,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake dual-srchost ack-filter",
"expExitCode": "0",
@@ -271,8 +248,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 dual-srchost nonat nowash ack-filter split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -286,7 +262,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake dual-dsthost ack-filter-aggressive",
"expExitCode": "0",
@@ -294,8 +269,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 dual-dsthost nonat nowash ack-filter-aggressive split-gso rtt 100ms raw overhead",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -309,7 +283,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake memlimit 10000 ptm",
"expExitCode": "0",
@@ -317,8 +290,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw ptm overhead 0 memlimit 10000b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -332,7 +304,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake fwmark 8 atm",
"expExitCode": "0",
@@ -340,8 +311,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw atm overhead 0 fwmark 0x8",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -355,7 +325,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake overhead 128 mpu 256",
"expExitCode": "0",
@@ -363,8 +332,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms noatm overhead 128 mpu 256",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -378,7 +346,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake conservative ingress",
"expExitCode": "0",
@@ -386,8 +353,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 triple-isolate nonat nowash ingress no-ack-filter split-gso rtt 100ms atm overhead 48",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -401,7 +367,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root cake conservative ingress"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -410,7 +375,6 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 triple-isolate nonat nowash ingress no-ack-filter split-gso rtt 100ms atm overhead 48",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -424,7 +388,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root cake overhead 128 mpu 256"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root cake mpu 128",
@@ -433,8 +396,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms noatm overhead 128 mpu 128",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -448,7 +410,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root cake overhead 128 mpu 256"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root cake mpu 128",
@@ -457,8 +418,7 @@
"matchPattern": "qdisc cake 1: root refcnt [0-9]+ bandwidth unlimited diffserv3 triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms noatm overhead 128 mpu 128",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -472,7 +432,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cake",
"expExitCode": "0",
@@ -480,8 +439,7 @@
"matchPattern": "class cake",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/cbs.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/cbs.json
index a46bf5ff8277..33ea986176d9 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/cbs.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/cbs.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cbs",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc cbs 1: root refcnt [0-9]+ hicredit 0 locredit 0 sendslope 0 idleslope 0 offload 0.*qdisc pfifo 0: parent 1: limit 1000p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cbs hicredit 64",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc cbs 1: root refcnt [0-9]+ hicredit 64 locredit 0 sendslope 0 idleslope 0 offload 0.*qdisc pfifo 0: parent 1: limit 1000p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cbs locredit 10",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc cbs 1: root refcnt [0-9]+ hicredit 0 locredit 10 sendslope 0 idleslope 0 offload 0.*qdisc pfifo 0: parent 1: limit 1000p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cbs sendslope 888",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc cbs 1: root refcnt [0-9]+ hicredit 0 locredit 0 sendslope 888 idleslope 0 offload 0.*qdisc pfifo 0: parent 1: limit 1000p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cbs idleslope 666",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc cbs 1: root refcnt [0-9]+ hicredit 0 locredit 0 sendslope 0 idleslope 666 offload 0.*qdisc pfifo 0: parent 1: limit 1000p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cbs hicredit 10 locredit 75 sendslope 2 idleslope 666",
"expExitCode": "0",
@@ -133,8 +122,7 @@
"matchPattern": "qdisc cbs 1: root refcnt [0-9]+ hicredit 10 locredit 75 sendslope 2 idleslope 666 offload 0.*qdisc pfifo 0: parent 1: limit 1000p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root cbs idleslope 666"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root cbs sendslope 10",
@@ -157,8 +144,7 @@
"matchPattern": "qdisc cbs 1: root refcnt [0-9]+ hicredit 0 locredit 0 sendslope 10 idleslope 0 offload 0.*qdisc pfifo 0: parent 1: limit 1000p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -172,7 +158,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root cbs idleslope 666"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root cbs idleslope 1",
@@ -181,8 +166,7 @@
"matchPattern": "qdisc cbs 1: root refcnt [0-9]+ hicredit 0 locredit 0 sendslope 0 idleslope 1 offload 0.*qdisc pfifo 0: parent 1: limit 1000p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -196,7 +180,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root cbs idleslope 666"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -205,7 +188,6 @@
"matchPattern": "qdisc cbs 1: root refcnt [0-9]+ hicredit 0 locredit 0 sendslope 0 idleslope 1 offload 0.*qdisc pfifo 0: parent 1: limit 1000p",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -219,7 +201,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root cbs",
"expExitCode": "0",
@@ -227,8 +208,7 @@
"matchPattern": "class cbs 1:[0-9]+ parent 1:",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/choke.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/choke.json
index 31b7775d25fc..d46e5e2c9430 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/choke.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/choke.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc choke 1: root refcnt [0-9]+ limit 1000p min 83p max 250p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 min 100",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc choke 1: root refcnt [0-9]+ limit 1000p min 100p max 250p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 max 900",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc choke 1: root refcnt [0-9]+ limit 1000p min.*max 900p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 ecn",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc choke 1: root refcnt [0-9]+ limit 1000p min 83p max 250p ecn",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 burst 100",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc choke 1: root refcnt [0-9]+ limit 1000p min 83p max 250p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -134,7 +123,6 @@
"matchPattern": "qdisc choke 1: root refcnt [0-9]+ limit 1000p min 83p max 250p",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 min 100",
@@ -157,8 +144,7 @@
"matchPattern": "qdisc choke 1: root refcnt [0-9]+ limit 1000p min 100p max 250p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -172,7 +158,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root choke limit 1000 bandwidth 10000 min 100",
@@ -181,8 +166,7 @@
"matchPattern": "qdisc choke 1: root refcnt [0-9]+ limit 1000p min 100p max 250p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/codel.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/codel.json
index ea38099d48e5..e9469ee71e6f 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/codel.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/codel.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root codel",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc codel 1: root refcnt [0-9]+ limit 1000p target 5ms interval 100ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root codel limit 1500",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc codel 1: root refcnt [0-9]+ limit 1500p target 5ms interval 100ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root codel target 100ms",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc codel 1: root refcnt [0-9]+ limit 1000p target 100ms interval 100ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root codel interval 20ms",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc codel 1: root refcnt [0-9]+ limit 1000p target 5ms interval 20ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root codel ecn",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc codel 1: root refcnt [0-9]+ limit 1000p target 5ms interval 100ms ecn",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root codel ce_threshold 20ms",
"expExitCode": "0",
@@ -133,8 +122,7 @@
"matchPattern": "qdisc codel 1: root refcnt [0-9]+ limit 1000p target 5ms ce_threshold 20ms interval 100ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root codel"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -157,7 +144,6 @@
"matchPattern": "qdisc codel 1: root refcnt [0-9]+ limit 1000p target 5ms interval 100ms",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -171,7 +157,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root codel"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root codel limit 5000",
@@ -180,8 +165,7 @@
"matchPattern": "qdisc codel 1: root refcnt [0-9]+ limit 5000p target 5ms interval 100ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -195,7 +179,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root codel"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root codel limit 100",
@@ -204,8 +187,7 @@
"matchPattern": "qdisc codel 1: root refcnt [0-9]+ limit 100p target 5ms interval 100ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/drr.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/drr.json
index 486a425b3c1c..7126ec3485cb 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/drr.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/drr.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root drr",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc drr 1: root refcnt [0-9]+",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root drr"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -42,7 +39,6 @@
"matchPattern": "qdisc drr 1: root refcnt [0-9]+",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root drr",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "class drr 1:",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/etf.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/etf.json
index 0046d44bcd93..2c73ee47bf58 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/etf.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/etf.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root etf clockid CLOCK_TAI",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc etf 1: root refcnt [0-9]+ clockid TAI delta 0 offload off deadline_mode off skip_sock_check off",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root etf delta 100 clockid CLOCK_TAI",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc etf 1: root refcnt [0-9]+ clockid TAI delta 100 offload off deadline_mode off skip_sock_check off",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root etf clockid CLOCK_TAI deadline_mode",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc etf 1: root refcnt [0-9]+ clockid TAI delta 0 offload off deadline_mode on skip_sock_check off",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root etf clockid CLOCK_TAI skip_sock_check",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc etf 1: root refcnt [0-9]+ clockid TAI delta 0 offload off deadline_mode off skip_sock_check on",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root etf clockid CLOCK_TAI"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -111,7 +102,6 @@
"matchPattern": "qdisc etf 1: root refcnt [0-9]+ clockid TAI delta 0 offload off deadline_mode off skip_sock_check off",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/ets.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/ets.json
index 180593010675..a5d94cdec605 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/ets.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/ets.json
@@ -6,8 +6,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 2",
"expExitCode": "0",
@@ -15,8 +17,7 @@
"matchPattern": "qdisc ets 1: root .* bands 2",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -26,8 +27,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets quanta 1000 900 800 700",
"expExitCode": "0",
@@ -35,8 +38,7 @@
"matchPattern": "qdisc ets 1: root .*bands 4 quanta 1000 900 800 700",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -46,8 +48,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets strict 3",
"expExitCode": "0",
@@ -55,8 +59,7 @@
"matchPattern": "qdisc ets 1: root .*bands 3 strict 3",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -66,8 +69,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 4 quanta 1000 900 800 700",
"expExitCode": "0",
@@ -75,8 +80,7 @@
"matchPattern": "qdisc ets 1: root .*bands 4 quanta 1000 900 800 700 priomap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -86,8 +90,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 3 strict 3",
"expExitCode": "0",
@@ -95,8 +101,7 @@
"matchPattern": "qdisc ets 1: root .*bands 3 strict 3 priomap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -106,8 +111,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets strict 3 quanta 1500 750",
"expExitCode": "0",
@@ -115,8 +122,7 @@
"matchPattern": "qdisc ets 1: root .*bands 5 strict 3 quanta 1500 750 priomap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -126,8 +132,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets strict 0 quanta 1500 750",
"expExitCode": "0",
@@ -135,8 +143,7 @@
"matchPattern": "qdisc ets 1: root .*bands 2 quanta 1500 750 priomap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -146,8 +153,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 5 strict 3 quanta 1500 750",
"expExitCode": "0",
@@ -155,8 +164,7 @@
"matchPattern": "qdisc ets 1: root .*bands 5 .*strict 3 quanta 1500 750 priomap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -166,8 +174,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 2 quanta 1000",
"expExitCode": "0",
@@ -175,8 +185,7 @@
"matchPattern": "qdisc ets 1: root .*bands 2 .*quanta 1000 [1-9][0-9]* priomap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -186,8 +195,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 3 strict 1",
"expExitCode": "0",
@@ -195,8 +206,7 @@
"matchPattern": "qdisc ets 1: root .*bands 3 strict 1 quanta ([1-9][0-9]* ){2}priomap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -206,8 +216,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 3 strict 1 quanta 1000",
"expExitCode": "0",
@@ -215,8 +227,7 @@
"matchPattern": "qdisc ets 1: root .*bands 3 strict 1 quanta 1000 [1-9][0-9]* priomap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -226,8 +237,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 16",
"expExitCode": "0",
@@ -235,8 +248,7 @@
"matchPattern": "qdisc ets 1: root .* bands 16",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -246,8 +258,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 17",
"expExitCode": "1",
@@ -255,7 +269,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -265,8 +278,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets strict 17",
"expExitCode": "1",
@@ -274,7 +289,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -284,8 +298,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets quanta 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16",
"expExitCode": "0",
@@ -293,8 +309,7 @@
"matchPattern": "qdisc ets 1: root .* bands 16",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -304,8 +319,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets quanta 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17",
"expExitCode": "2",
@@ -313,7 +330,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -323,8 +339,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets strict 8 quanta 1 2 3 4 5 6 7 8",
"expExitCode": "0",
@@ -332,8 +350,7 @@
"matchPattern": "qdisc ets 1: root .* bands 16",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -343,8 +360,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets strict 9 quanta 1 2 3 4 5 6 7 8",
"expExitCode": "2",
@@ -352,7 +371,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -362,8 +380,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 5 priomap 0 0 1 0 1 2 0 1 2 3 0 1 2 3 4 0",
"expExitCode": "0",
@@ -371,8 +391,7 @@
"matchPattern": "qdisc ets 1: root .*priomap 0 0 1 0 1 2 0 1 2 3 0 1 2 3 4 0",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -382,8 +401,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets quanta 1000 2000 3000 4000 5000 priomap 0 0 1 0 1 2 0 1 2 3 0 1 2 3 4 0",
"expExitCode": "0",
@@ -391,8 +412,7 @@
"matchPattern": "qdisc ets 1: root .*quanta 1000 2000 3000 4000 5000 priomap 0 0 1 0 1 2 0 1 2 3 0 1 2 3 4 0",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -402,8 +422,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets strict 5 priomap 0 0 1 0 1 2 0 1 2 3 0 1 2 3 4 0",
"expExitCode": "0",
@@ -411,8 +433,7 @@
"matchPattern": "qdisc ets 1: root .*bands 5 strict 5 priomap 0 0 1 0 1 2 0 1 2 3 0 1 2 3 4 0",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -422,8 +443,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets strict 2 quanta 1000 2000 3000 priomap 0 0 1 0 1 2 0 1 2 3 0 1 2 3 4 0",
"expExitCode": "0",
@@ -431,8 +454,7 @@
"matchPattern": "qdisc ets 1: root .*strict 2 quanta 1000 2000 3000 priomap 0 0 1 0 1 2 0 1 2 3 0 1 2 3 4 0",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -442,8 +464,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets quanta 4000 3000 2000",
"expExitCode": "0",
@@ -451,8 +475,7 @@
"matchPattern": "class ets 1:1 root quantum 4000",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -462,8 +485,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets quanta 4000 3000 2000",
"expExitCode": "0",
@@ -471,8 +496,7 @@
"matchPattern": "class ets 1:2 root quantum 3000",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -482,8 +506,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets quanta 4000 3000 2000",
"expExitCode": "0",
@@ -491,8 +517,7 @@
"matchPattern": "class ets 1:3 root quantum 2000",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -502,8 +527,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets strict 3",
"expExitCode": "0",
@@ -511,8 +538,7 @@
"matchPattern": "class ets 1:1 root $",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -522,8 +548,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 2 quanta 1000 2000 3000",
"expExitCode": "1",
@@ -531,7 +559,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -541,8 +568,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 2 strict 3",
"expExitCode": "1",
@@ -550,7 +579,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -560,8 +588,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 4 strict 2 quanta 1000 2000 3000",
"expExitCode": "1",
@@ -569,7 +599,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -579,8 +608,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 5 priomap 0 0 1 0 1 2 0 1 2 3 0 1 2 3 4 0 1 2",
"expExitCode": "1",
@@ -588,7 +619,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -598,8 +628,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 2 priomap 0 1 2",
"expExitCode": "1",
@@ -607,7 +639,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -617,8 +648,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets quanta 1000 500 priomap 0 1 2",
"expExitCode": "1",
@@ -626,7 +659,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -636,8 +668,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets strict 2 priomap 0 1 2",
"expExitCode": "1",
@@ -645,7 +679,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -655,8 +688,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets strict 1 quanta 1000 500 priomap 0 1 2 3",
"expExitCode": "1",
@@ -664,7 +699,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -674,8 +708,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 4 strict 1 quanta 1000 500 priomap 0 1 2 3",
"expExitCode": "0",
@@ -683,7 +719,6 @@
"matchPattern": "qdisc ets",
"matchCount": "1",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -693,8 +728,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 4 strict 1 quanta 1000 500 priomap 0 1 2 3 4",
"expExitCode": "1",
@@ -702,7 +739,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -712,8 +748,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 4 priomap 0 0 0 0",
"expExitCode": "0",
@@ -721,7 +759,6 @@
"matchPattern": "qdisc ets .*priomap 0 0 0 0 3 3 3 3 3 3 3 3 3 3 3 3",
"matchCount": "1",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -731,8 +768,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 4",
"expExitCode": "0",
@@ -740,7 +779,6 @@
"matchPattern": "qdisc ets .*priomap 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3",
"matchCount": "1",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -750,8 +788,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 0",
"expExitCode": "1",
@@ -759,7 +799,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -769,8 +808,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets bands 17",
"expExitCode": "1",
@@ -778,7 +819,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -788,8 +828,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets",
"expExitCode": "1",
@@ -797,7 +839,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -807,8 +848,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets quanta 1000 0 800 700",
"expExitCode": "1",
@@ -816,7 +859,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -826,8 +868,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets quanta 0",
"expExitCode": "1",
@@ -835,7 +879,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -845,8 +888,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root ets quanta",
"expExitCode": "255",
@@ -854,7 +899,6 @@
"matchPattern": "qdisc ets",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -864,8 +908,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root ets quanta 1000 2000 3000"
],
"cmdUnderTest": "$TC class change dev $DUMMY classid 1:1 ets quantum 1500",
@@ -874,7 +920,6 @@
"matchPattern": "qdisc ets 1: root .*quanta 1500 2000 3000 priomap ",
"matchCount": "1",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -884,8 +929,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root ets quanta 1000 2000 3000"
],
"cmdUnderTest": "$TC class change dev $DUMMY classid 1:1 ets",
@@ -894,7 +941,6 @@
"matchPattern": "qdisc ets 1: root .*quanta 1000 2000 3000 priomap ",
"matchCount": "1",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -904,8 +950,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root ets strict 5"
],
"cmdUnderTest": "$TC class change dev $DUMMY classid 1:2 ets quantum 1500",
@@ -914,7 +962,6 @@
"matchPattern": "qdisc ets .*bands 5 .*strict 5",
"matchCount": "1",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -924,8 +971,10 @@
"qdisc",
"ets"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root ets strict 5"
],
"cmdUnderTest": "$TC class change dev $DUMMY classid 1:2 ets",
@@ -934,7 +983,6 @@
"matchPattern": "qdisc ets .*bands 5 .*strict 5",
"matchCount": "1",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fifo.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fifo.json
index 5ecd93b4c473..ae3d286a32b2 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fifo.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fifo.json
@@ -2,13 +2,14 @@
{
"id": "a519",
"name": "Add bfifo qdisc with system default parameters on egress",
- "__comment": "When omitted, queue size in bfifo is calculated as: txqueuelen * (MTU + LinkLayerHdrSize), where LinkLayerHdrSize=14 for Ethernet",
"category": [
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root bfifo",
"expExitCode": "0",
@@ -16,20 +17,20 @@
"matchPattern": "qdisc bfifo 1: root.*limit [0-9]+b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root bfifo",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root bfifo"
]
},
{
"id": "585c",
"name": "Add pfifo qdisc with system default parameters on egress",
- "__comment": "When omitted, queue size in pfifo is defaulted to the interface's txqueuelen value.",
"category": [
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root pfifo",
"expExitCode": "0",
@@ -37,8 +38,7 @@
"matchPattern": "qdisc pfifo 1: root.*limit [0-9]+p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root pfifo",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root pfifo"
]
},
{
@@ -48,8 +48,10 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY root handle ffff: bfifo",
"expExitCode": "0",
@@ -57,8 +59,7 @@
"matchPattern": "qdisc bfifo ffff: root.*limit [0-9]+b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle ffff: root bfifo",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle ffff: root bfifo"
]
},
{
@@ -68,8 +69,10 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root bfifo limit 3000b",
"expExitCode": "0",
@@ -77,8 +80,7 @@
"matchPattern": "qdisc bfifo 1: root.*limit 3000b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root bfifo",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root bfifo"
]
},
{
@@ -88,8 +90,11 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY txqueuelen 3000 type dummy || /bin/true"
+ "$IP link set dev $DUMMY txqueuelen 3000"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root pfifo limit 3000",
"expExitCode": "0",
@@ -97,8 +102,7 @@
"matchPattern": "qdisc pfifo 1: root.*limit 3000p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root pfifo",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root pfifo"
]
},
{
@@ -108,8 +112,10 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY root handle 10000: bfifo",
"expExitCode": "255",
@@ -117,7 +123,6 @@
"matchPattern": "qdisc bfifo 10000: root.*limit [0-9]+b",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -127,8 +132,10 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root bfifo foorbar",
"expExitCode": "1",
@@ -136,7 +143,6 @@
"matchPattern": "qdisc bfifo 1: root",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -146,8 +152,10 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root pfifo foorbar",
"expExitCode": "1",
@@ -155,7 +163,6 @@
"matchPattern": "qdisc pfifo 1: root",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -165,9 +172,11 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link del dev $DUMMY type dummy || /bin/true",
- "$IP link add dev $DUMMY txqueuelen 1000 type dummy",
+ "$IP link set dev $DUMMY txqueuelen 1000",
"$TC qdisc add dev $DUMMY handle 1: root bfifo"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root bfifo limit 3000b",
@@ -176,8 +185,7 @@
"matchPattern": "qdisc bfifo 1: root.*limit 3000b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root bfifo",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root bfifo"
]
},
{
@@ -187,9 +195,11 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link del dev $DUMMY type dummy || /bin/true",
- "$IP link add dev $DUMMY txqueuelen 1000 type dummy",
+ "$IP link set dev $DUMMY txqueuelen 1000",
"$TC qdisc add dev $DUMMY handle 1: root pfifo"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root pfifo limit 30",
@@ -198,8 +208,7 @@
"matchPattern": "qdisc pfifo 1: root.*limit 30p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root pfifo",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root pfifo"
]
},
{
@@ -209,8 +218,10 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root bfifo limit foo-bar",
"expExitCode": "1",
@@ -218,7 +229,6 @@
"matchPattern": "qdisc bfifo 1: root.*limit foo-bar",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -228,8 +238,10 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root bfifo"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root bfifo",
@@ -238,8 +250,7 @@
"matchPattern": "qdisc bfifo 1: root",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root bfifo",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root bfifo"
]
},
{
@@ -249,8 +260,10 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY root handle 1: bfifo",
"expExitCode": "2",
@@ -258,7 +271,6 @@
"matchPattern": "qdisc bfifo 1: root",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -268,8 +280,10 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY root handle 123^ bfifo limit 100b",
"expExitCode": "255",
@@ -277,7 +291,6 @@
"matchPattern": "qdisc bfifo 123 root",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -287,8 +300,10 @@
"qdisc",
"fifo"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY root handle 1: bfifo",
"$TC qdisc del dev $DUMMY root handle 1: bfifo"
],
@@ -298,7 +313,6 @@
"matchPattern": "qdisc bfifo 1: root",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq.json
index 3593fb8f79ad..be293e7c6d18 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq limit 3000",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 3000p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq flow_limit 300",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 300p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq quantum 9000",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 100p buckets.*orphan_mask 1023 quantum 9000b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq initial_quantum 900000",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 100p buckets.*initial_quantum 900000b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq initial_quantum 0x80000000",
"expExitCode": "2",
@@ -133,7 +122,6 @@
"matchPattern": "qdisc fq 1: root.*initial_quantum 2048Mb",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -147,7 +135,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq maxrate 100000",
"expExitCode": "0",
@@ -155,8 +142,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 100p buckets.*maxrate 100Kbit",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -170,7 +156,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq nopacing",
"expExitCode": "0",
@@ -178,8 +163,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 100p.*nopacing",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -193,7 +177,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq refill_delay 100ms",
"expExitCode": "0",
@@ -201,8 +184,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 100p.*refill_delay 100ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -216,7 +198,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq low_rate_threshold 10000",
"expExitCode": "0",
@@ -224,8 +205,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 100p.*low_rate_threshold 10Kbit",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -239,7 +219,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq orphan_mask 255",
"expExitCode": "0",
@@ -247,8 +226,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 100p.*orphan_mask 255",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -262,7 +240,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq timer_slack 100",
"expExitCode": "0",
@@ -270,8 +247,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 100p.*timer_slack 100ns",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -285,7 +261,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq ce_threshold 100",
"expExitCode": "0",
@@ -293,8 +268,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 100p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -308,7 +282,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq horizon 100",
"expExitCode": "0",
@@ -316,8 +289,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 100p.*horizon 100us",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -331,7 +303,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq horizon_cap",
"expExitCode": "0",
@@ -339,8 +310,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p flow_limit 100p.*horizon_cap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -354,7 +324,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root fq"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -363,7 +332,6 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 10000p",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -377,7 +345,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root fq"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root fq limit 5000",
@@ -386,8 +353,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 5000p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -401,7 +367,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root fq"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root fq limit 100",
@@ -410,8 +375,7 @@
"matchPattern": "qdisc fq 1: root refcnt [0-9]+ limit 100p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_codel.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_codel.json
index a65266357a9a..9774b1e8801b 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_codel.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_codel.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_codel",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 10240p flows 1024 quantum.*target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_codel limit 1000",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 1000p flows 1024 quantum.*target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_codel memory_limit 100000",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 10240p flows 1024 quantum.*target 5ms interval 100ms memory_limit 100000b ecn drop_batch 64",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_codel target 2000",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 10240p flows 1024 quantum.*target 2ms interval 100ms memory_limit 32Mb ecn drop_batch 64",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_codel interval 5000",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 10240p flows 1024 quantum.*target 5ms interval 5ms memory_limit 32Mb ecn drop_batch 64",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_codel quantum 9000",
"expExitCode": "0",
@@ -133,8 +122,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 10240p flows 1024 quantum 9000 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_codel noecn",
"expExitCode": "0",
@@ -156,8 +143,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 10240p flows 1024 quantum.*target 5ms interval 100ms memory_limit 32Mb drop_batch 64",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -171,7 +157,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_codel ce_threshold 1024000",
"expExitCode": "0",
@@ -179,8 +164,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 10240p flows 1024 quantum.*target 5ms ce_threshold 1.02s interval 100ms memory_limit 32Mb ecn drop_batch 64",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -194,7 +178,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_codel drop_batch 100",
"expExitCode": "0",
@@ -202,8 +185,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 10240p flows 1024 quantum.*target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 100",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -217,7 +199,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_codel limit 1000 flows 256 drop_batch 100",
"expExitCode": "0",
@@ -225,8 +206,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 1000p flows 256 quantum.*target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 100",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -240,7 +220,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root fq_codel limit 1000 flows 256 drop_batch 100"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root fq_codel noecn",
@@ -249,8 +228,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 1000p flows 256 quantum.*target 5ms interval 100ms memory_limit 32Mb drop_batch 100",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -264,7 +242,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root fq_codel limit 1000 flows 256 drop_batch 100"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root fq_codel limit 2000",
@@ -273,8 +250,7 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 2000p flows 256 quantum.*target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 100",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -288,7 +264,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root fq_codel limit 1000 flows 256 drop_batch 100"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -297,7 +272,6 @@
"matchPattern": "qdisc fq_codel 1: root refcnt [0-9]+ limit 1000p flows 256 quantum.*target 5ms interval 100ms memory_limit 32Mb noecn drop_batch 100",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -311,7 +285,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_codel",
"expExitCode": "0",
@@ -319,8 +292,7 @@
"matchPattern": "class fq_codel 1:",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
index 773c5027553d..d012d88d67fe 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
@@ -6,8 +6,10 @@
"qdisc",
"fq_pie"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root fq_pie flows 65536",
"expExitCode": "0",
@@ -15,7 +17,6 @@
"matchPattern": "qdisc fq_pie 1: root refcnt 2 limit 10240p flows 65536",
"matchCount": "1",
"teardown": [
- "$IP link del dev $DUMMY"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/gred.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/gred.json
index 013c8ee037a4..df07fe318de9 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/gred.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/gred.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root gred setup vqs 10 default 1",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc gred 1: root refcnt [0-9]+ vqs 10 default 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root gred setup vqs 10 default 1 grio",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc gred 1: root refcnt [0-9]+ vqs 10 default 1.*grio",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root gred setup vqs 10 default 1 limit 1000",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc gred 1: root refcnt [0-9]+ vqs 10 default 1 limit 1000b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root gred setup vqs 10 default 2 ecn",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc gred 1: root refcnt [0-9]+ vqs 10 default 2.*ecn",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root gred setup vqs 10 default 2 harddrop",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc gred 1: root refcnt [0-9]+ vqs 10 default 2.*harddrop",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root gred setup vqs 10 default 1"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root gred limit 60KB min 15K max 25K burst 64 avpkt 1500 bandwidth 10Mbit DP 1 probability 0.1",
@@ -134,8 +123,7 @@
"matchPattern": "qdisc gred 1: root refcnt [0-9]+ vqs 10 default 1 limit.*vq 1 prio [0-9]+ limit 60Kb min 15Kb max 25Kb",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -149,7 +137,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root gred setup vqs 10 default 1",
"expExitCode": "0",
@@ -157,8 +144,7 @@
"matchPattern": "class gred 1:",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/hfsc.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/hfsc.json
index af27b2c20e17..c98c339424d4 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/hfsc.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/hfsc.json
@@ -9,17 +9,14 @@
"plugins": {
"requires": "nsPlugin"
},
- "setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
- ],
+ "setup": [],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root hfsc",
"expExitCode": "0",
"verifyCmd": "$TC qdisc show dev $DUMMY",
"matchPattern": "qdisc hfsc 1: root refcnt [0-9]+",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +30,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root hfsc default 11"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 hfsc sc rate 20000 ul rate 10000",
@@ -42,8 +38,7 @@
"matchPattern": "class hfsc 1:1 parent 1: sc m1 0bit d 0us m2 20Kbit ul m1 0bit d 0us m2 10Kbit",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -57,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root hfsc default 11"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 hfsc sc umax 1540 dmax 5ms rate 10000 ul rate 10000",
@@ -66,8 +60,7 @@
"matchPattern": "class hfsc 1:1 parent 1: sc m1 2464Kbit d 5ms m2 10Kbit ul m1 0bit d 0us m2 10Kbit",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -81,7 +74,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root hfsc default 11"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 hfsc rt rate 20000 ls rate 10000",
@@ -90,8 +82,7 @@
"matchPattern": "class hfsc 1:1 parent 1: rt m1 0bit d 0us m2 20Kbit ls m1 0bit d 0us m2 10Kbit",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -105,7 +96,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root hfsc default 11"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 hfsc rt umax 1540 dmax 5ms rate 10000 ls rate 10000",
@@ -114,8 +104,7 @@
"matchPattern": "class hfsc 1:1 parent 1: rt m1 2464Kbit d 5ms m2 10Kbit ls m1 0bit d 0us m2 10Kbit",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -129,7 +118,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root hfsc default 11"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -137,9 +125,7 @@
"verifyCmd": "$TC qdisc show dev $DUMMY",
"matchPattern": "qdisc hfsc 1: root refcnt [0-9]+",
"matchCount": "0",
- "teardown": [
- "$IP link del dev $DUMMY type dummy"
- ]
+ "teardown": []
},
{
"id": "8436",
@@ -151,17 +137,37 @@
"plugins": {
"requires": "nsPlugin"
},
- "setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
- ],
+ "setup": [],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root hfsc",
"expExitCode": "0",
"verifyCmd": "$TC class show dev $DUMMY",
"matchPattern": "class hfsc 1: root",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
+ ]
+ },
+ {
+ "id": "bef4",
+ "name": "HFSC rt inner class upgrade to sc",
+ "category": [
+ "qdisc",
+ "hfsc"
+ ],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
+ "setup": [
+ "$TC qdisc add dev $DUMMY handle 1: root hfsc default 1",
+ "$TC class add dev $DUMMY parent 1: classid 1:1 hfsc rt rate 8"
+ ],
+ "cmdUnderTest": "$TC class add dev $DUMMY parent 1:1 classid 1:2 hfsc rt rate 8",
+ "expExitCode": "0",
+ "verifyCmd": "$TC class show dev $DUMMY",
+ "matchPattern": "class hfsc 1:1 parent 1: sc m1 0bit d 0us m2 8bit.*rt m1 0bit d 0us m2 8bit",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/hhf.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/hhf.json
index 949f6e5de902..dbef5474b26b 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/hhf.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/hhf.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root hhf",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc hhf 1: root refcnt [0-9]+.*hh_limit 2048 reset_timeout 40ms admit_bytes 128Kb evict_timeout 1s non_hh_weight 2",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root hhf limit 1500",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc hhf 1: root refcnt [0-9]+ limit 1500p.*hh_limit 2048 reset_timeout 40ms admit_bytes 128Kb evict_timeout 1s non_hh_weight 2",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root hhf quantum 9000",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc hhf 1: root refcnt [0-9]+.*quantum 9000b hh_limit 2048 reset_timeout 40ms admit_bytes 128Kb evict_timeout 1s non_hh_weight 2",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root hhf reset_timeout 100ms",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc hhf 1: root refcnt [0-9]+.*hh_limit 2048 reset_timeout 100ms admit_bytes 128Kb evict_timeout 1s non_hh_weight 2",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root hhf admit_bytes 100000",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc hhf 1: root refcnt [0-9]+.*hh_limit 2048 reset_timeout 40ms admit_bytes 100000b evict_timeout 1s non_hh_weight 2",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root hhf evict_timeout 0.5s",
"expExitCode": "0",
@@ -133,8 +122,7 @@
"matchPattern": "qdisc hhf 1: root refcnt [0-9]+.*hh_limit 2048 reset_timeout 40ms admit_bytes 128Kb evict_timeout 500ms non_hh_weight 2",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root hhf non_hh_weight 10",
"expExitCode": "0",
@@ -156,8 +143,7 @@
"matchPattern": "qdisc hhf 1: root refcnt [0-9]+.*hh_limit 2048 reset_timeout 40ms admit_bytes 128Kb evict_timeout 1s non_hh_weight 10",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -171,7 +157,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root hhf"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root hhf limit 1500",
@@ -180,8 +165,7 @@
"matchPattern": "qdisc hhf 1: root refcnt [0-9]+ limit 1500p.*hh_limit 2048 reset_timeout 40ms admit_bytes 128Kb evict_timeout 1s non_hh_weight 2",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -195,7 +179,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root hhf",
"expExitCode": "0",
@@ -203,8 +186,7 @@
"matchPattern": "class hhf 1:",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/htb.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/htb.json
index 9529899482e0..cab745f9a83c 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/htb.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/htb.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root htb",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc htb 1: root refcnt [0-9]+ r2q 10 default 0 direct_packets_stat.*direct_qlen",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root htb default 10",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc htb 1: root refcnt [0-9]+ r2q 10 default 0x10 direct_packets_stat.* direct_qlen",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root htb r2q 5",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc htb 1: root refcnt [0-9]+ r2q 5 default 0 direct_packets_stat.*direct_qlen",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root htb direct_qlen 1024",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc htb 1: root refcnt [0-9]+ r2q 10 default 0 direct_packets_stat.*direct_qlen 1024",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root htb"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 htb rate 20kbit burst 1000",
@@ -111,8 +102,7 @@
"matchPattern": "class htb 1:1 root prio 0 rate 20Kbit ceil 20Kbit burst 1000b cburst 1600b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -126,7 +116,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root htb"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 htb rate 20Kbit mpu 64",
@@ -135,8 +124,7 @@
"matchPattern": "class htb 1:1 root prio 0 rate 20Kbit ceil 20Kbit burst 1600b cburst 1600b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -150,7 +138,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root htb"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 htb rate 20Kbit prio 1",
@@ -159,8 +146,7 @@
"matchPattern": "class htb 1:1 root prio 1 rate 20Kbit ceil 20Kbit burst 1600b cburst 1600b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -174,7 +160,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root htb"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 htb rate 20Kbit ceil 10Kbit",
@@ -183,8 +168,7 @@
"matchPattern": "class htb 1:1 root prio 0 rate 20Kbit ceil 10Kbit burst 1600b cburst 1600b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -198,7 +182,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root htb"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 htb rate 20Kbit cburst 2000",
@@ -207,8 +190,7 @@
"matchPattern": "class htb 1:1 root prio 0 rate 20Kbit ceil 20Kbit burst 1600b cburst 2000b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -222,7 +204,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root htb"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 htb rate 20Kbit mtu 2048",
@@ -231,8 +212,7 @@
"matchPattern": "class htb 1:1 root prio 0 rate 20Kbit ceil 20Kbit burst 2Kb cburst 2Kb",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -246,7 +226,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root htb"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 htb rate 20Kbit quantum 2048",
@@ -255,8 +234,7 @@
"matchPattern": "class htb 1:1 root prio 0 rate 20Kbit ceil 20Kbit burst 1600b cburst 1600b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -270,7 +248,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root htb r2q 5"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -279,7 +256,6 @@
"matchPattern": "qdisc htb 1: root refcnt [0-9]+",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/ingress.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/ingress.json
index 11d33362408c..57bddc1212d8 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/ingress.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/ingress.json
@@ -7,16 +7,17 @@
"ingress"
],
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"cmdUnderTest": "$TC qdisc add dev $DUMMY ingress",
"expExitCode": "0",
"verifyCmd": "$TC qdisc show dev $DUMMY",
"matchPattern": "qdisc ingress ffff:",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -26,8 +27,10 @@
"qdisc",
"ingress"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY ingress foorbar",
"expExitCode": "1",
@@ -35,7 +38,6 @@
"matchPattern": "qdisc ingress ffff:",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -45,8 +47,10 @@
"qdisc",
"ingress"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY ingress",
@@ -55,8 +59,7 @@
"matchPattern": "qdisc ingress ffff:",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
},
{
@@ -66,8 +69,10 @@
"qdisc",
"ingress"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY ingress",
"expExitCode": "2",
@@ -75,7 +80,6 @@
"matchPattern": "qdisc ingress ffff:",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -85,8 +89,10 @@
"qdisc",
"ingress"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY ingress",
"$TC qdisc del dev $DUMMY ingress"
],
@@ -96,7 +102,6 @@
"matchPattern": "qdisc ingress ffff:",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -106,8 +111,10 @@
"qdisc",
"ingress"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY ingress",
"expExitCode": "0",
@@ -115,8 +122,7 @@
"matchPattern": "class ingress",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY ingress",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY ingress"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/netem.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/netem.json
index 7e41f548f8e8..3c4444961488 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/netem.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/netem.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ limit",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem limit 200",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ limit 200",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem delay 100ms",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*delay 100ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem delay 100ms 10ms distribution normal corrupt 1%",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*delay 100ms 10ms corrupt 1%",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem delay 100ms 10ms distribution normal duplicate 1%",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*delay 100ms 10ms duplicate 1%",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem delay 100ms 10ms distribution pareto loss 1%",
"expExitCode": "0",
@@ -133,8 +122,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*delay 100ms 10ms loss 1%",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem delay 100ms 10ms distribution paretonormal loss state 1",
"expExitCode": "0",
@@ -156,8 +143,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*delay 100ms 10ms loss state p13 1% p31 99% p32 0% p23 100% p14 0%",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -171,7 +157,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem loss gemodel 1%",
"expExitCode": "0",
@@ -179,8 +164,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*loss gemodel p 1%",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -194,7 +178,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem delay 100ms 10ms reorder 2% gap 100",
"expExitCode": "0",
@@ -202,8 +185,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*reorder 2%",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -217,7 +199,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem rate 20000",
"expExitCode": "0",
@@ -225,8 +206,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*rate 20Kbit",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -240,7 +220,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem slot 10 200 packets 2000 bytes 9000",
"expExitCode": "0",
@@ -248,8 +227,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*slot 10ns 200ns packets 2000 bytes 9000",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -263,7 +241,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem slot distribution pareto 1ms 0.1ms",
"expExitCode": "0",
@@ -271,8 +248,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*slot distribution 1ms 100us",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -286,7 +262,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root netem delay 100ms 10ms distribution normal loss 1%"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root netem delay 100ms 10ms distribution normal loss 2%",
@@ -295,8 +270,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*loss 2%",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -310,7 +284,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root netem delay 100ms 10ms distribution normal loss 1%"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root netem delay 200ms 10ms",
@@ -319,8 +292,7 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*delay 200ms 10ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -334,7 +306,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root netem delay 100ms 10ms distribution normal"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -343,7 +314,6 @@
"matchPattern": "qdisc netem 1: root refcnt [0-9]+ .*delay 100ms 10ms",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -357,7 +327,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root netem",
"expExitCode": "0",
@@ -365,8 +334,7 @@
"matchPattern": "class netem 1:",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/pfifo_fast.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/pfifo_fast.json
index ab53238f4c5a..30da27fe8806 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/pfifo_fast.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/pfifo_fast.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root pfifo_fast",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc pfifo_fast 1: root refcnt [0-9]+ bands 3 priomap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root pfifo_fast",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "Sent.*bytes.*pkt \\(dropped.*overlimits.*requeues .*\\)",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root pfifo_fast"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 2: root pfifo_fast",
@@ -65,8 +60,7 @@
"matchPattern": "qdisc pfifo_fast 2: root refcnt [0-9]+ bands 3 priomap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 2: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 2: root"
]
},
{
@@ -80,7 +74,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root pfifo_fast"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -89,7 +82,6 @@
"matchPattern": "qdisc pfifo_fast 1: root refcnt [0-9]+ bands 3 priomap",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -103,7 +95,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root pfifo_fast"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: root",
@@ -112,8 +103,7 @@
"matchPattern": "qdisc pfifo_fast 1: root refcnt [0-9]+ bands 3 priomap",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/plug.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/plug.json
index 6454518af178..6ec7e0a01265 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/plug.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/plug.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root plug",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc plug 1: root refcnt",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root plug block",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc plug 1: root refcnt",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root plug release",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc plug 1: root refcnt",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root plug release_indefinite",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc plug 1: root refcnt",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root plug limit 100",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc plug 1: root refcnt",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root plug"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -134,7 +123,6 @@
"matchPattern": "qdisc plug 1: root refcnt",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root plug"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root plug limit 1000",
@@ -157,8 +144,7 @@
"matchPattern": "qdisc plug 1: root refcnt",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -172,7 +158,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root plug"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root plug limit 1000",
@@ -181,8 +166,7 @@
"matchPattern": "qdisc plug 1: root refcnt",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/prio.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/prio.json
index 8186de2f0dcf..69abf041c799 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/prio.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/prio.json
@@ -6,8 +6,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root prio",
"expExitCode": "0",
@@ -15,8 +17,7 @@
"matchPattern": "qdisc prio 1: root",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root prio",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root prio"
]
},
{
@@ -26,8 +27,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY root handle ffff: prio",
"expExitCode": "0",
@@ -35,7 +38,6 @@
"matchPattern": "qdisc prio ffff: root",
"matchCount": "1",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -45,8 +47,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY root handle 10000: prio",
"expExitCode": "255",
@@ -54,7 +58,6 @@
"matchPattern": "qdisc prio 10000: root",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -64,8 +67,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root prio foorbar",
"expExitCode": "1",
@@ -73,7 +78,6 @@
"matchPattern": "qdisc prio 1: root",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -83,8 +87,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root prio bands 4 priomap 1 1 2 2 3 3 0 0 1 2 3 0 0 0 0 0",
"expExitCode": "0",
@@ -92,8 +98,7 @@
"matchPattern": "qdisc prio 1: root.*bands 4 priomap.*1 1 2 2 3 3 0 0 1 2 3 0 0 0 0 0",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root prio",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root prio"
]
},
{
@@ -103,8 +108,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root prio bands 4 priomap 1 1 2 2 3 3 0 0 1 2 3 0 0 0 0 0 1 1",
"expExitCode": "1",
@@ -112,7 +119,6 @@
"matchPattern": "qdisc prio 1: root.*bands 4 priomap.*1 1 2 2 3 3 0 0 1 2 3 0 0 0 0 0 1 1",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -122,8 +128,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root prio bands 4 priomap 1 1 2 2 7 5 0 0 1 2 3 0 0 0 0 0",
"expExitCode": "1",
@@ -131,7 +139,6 @@
"matchPattern": "qdisc prio 1: root.*bands 4 priomap.*1 1 2 2 7 5 0 0 1 2 3 0 0 0 0 0",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -141,8 +148,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root prio bands 1 priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0",
"expExitCode": "2",
@@ -150,7 +159,6 @@
"matchPattern": "qdisc prio 1: root.*bands 1 priomap.*0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -160,8 +168,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root prio bands 1024 priomap 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16",
"expExitCode": "2",
@@ -169,7 +179,6 @@
"matchPattern": "qdisc prio 1: root.*bands 1024 priomap.*1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -179,8 +188,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root prio"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root prio bands 8 priomap 1 1 2 2 3 3 4 4 5 5 6 6 7 7 0 0",
@@ -189,8 +200,7 @@
"matchPattern": "qdisc prio 1: root.*bands 8 priomap.*1 1 2 2 3 3 4 4 5 5 6 6 7 7 0 0",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root prio",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root prio"
]
},
{
@@ -200,8 +210,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root prio"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root prio",
@@ -210,8 +222,7 @@
"matchPattern": "qdisc prio 1: root",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root prio",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root prio"
]
},
{
@@ -221,8 +232,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY root handle 1: prio",
"expExitCode": "2",
@@ -230,7 +243,6 @@
"matchPattern": "qdisc prio 1: root",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -240,8 +252,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY root handle 123^ prio",
"expExitCode": "255",
@@ -249,7 +263,6 @@
"matchPattern": "qdisc prio 123 root",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -259,8 +272,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY root handle 1: prio",
"$TC qdisc del dev $DUMMY root handle 1: prio"
],
@@ -270,7 +285,6 @@
"matchPattern": "qdisc ingress ffff:",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -280,8 +294,10 @@
"qdisc",
"prio"
],
+ "plugins": {
+ "requires": "nsPlugin"
+ },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root prio",
"expExitCode": "0",
@@ -289,8 +305,7 @@
"matchPattern": "class prio 1:[0-9]+ parent 1:",
"matchCount": "3",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root prio",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root prio"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/qfq.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/qfq.json
index 976dffda4654..c95643929841 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/qfq.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/qfq.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root qfq",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc qfq 1: root refcnt [0-9]+",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100",
@@ -42,8 +39,7 @@
"matchPattern": "class qfq 1:1 root weight 100 maxpkt",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -57,7 +53,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 9999",
@@ -66,8 +61,7 @@
"matchPattern": "class qfq 1:1 root weight 9999 maxpkt",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -81,7 +75,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 qfq maxpkt 2000",
@@ -90,8 +83,7 @@
"matchPattern": "class qfq 1:1 root weight 1 maxpkt 2000",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -105,7 +97,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 qfq maxpkt 128",
@@ -114,8 +105,7 @@
"matchPattern": "class qfq 1:1 root weight 1 maxpkt 128",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -129,7 +119,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq"
],
"cmdUnderTest": "$TC class add dev $DUMMY parent 1: classid 1:1 qfq maxpkt 99999",
@@ -138,8 +127,7 @@
"matchPattern": "class qfq 1:1 root weight 1 maxpkt 99999",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -153,7 +141,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq",
"$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100"
],
@@ -163,8 +150,7 @@
"matchPattern": "class qfq 1:[0-9]+ root weight [0-9]+00 maxpkt",
"matchCount": "2",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -178,7 +164,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq",
"$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100"
],
@@ -188,7 +173,6 @@
"matchPattern": "qdisc qfq 1: root refcnt [0-9]+",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -202,7 +186,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root qfq",
"expExitCode": "0",
@@ -210,8 +193,7 @@
"matchPattern": "class qfq 1:",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -225,7 +207,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$IP link set dev $DUMMY mtu 2147483647 || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq"
],
@@ -235,7 +216,6 @@
"matchPattern": "class qfq 1:",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -249,7 +229,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$IP link set dev $DUMMY mtu 256 || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root qfq"
],
@@ -259,7 +238,6 @@
"matchPattern": "class qfq 1:",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -277,7 +255,6 @@
]
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$IP link set dev $DUMMY up || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: stab mtu 2048 tsize 512 mpu 0 overhead 999999999 linklayer ethernet root qfq",
"$TC class add dev $DUMMY parent 1: classid 1:1 qfq weight 100",
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/red.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/red.json
index 4b3e449857f2..eec73fda6c80 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/red.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/red.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root red limit 1M avpkt 1500 min 100K max 300K",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc red 1: root .* limit 1Mb min 100Kb max 300Kb $",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root red adaptive limit 1M avpkt 1500 min 100K max 300K",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc red 1: root .* limit 1Mb min 100Kb max 300Kb adaptive $",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root red ecn limit 1M avpkt 1500 min 100K max 300K",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc red 1: root .* limit 1Mb min 100Kb max 300Kb ecn $",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root red ecn adaptive limit 1M avpkt 1500 min 100K max 300K",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc red 1: root .* limit 1Mb min 100Kb max 300Kb ecn adaptive $",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root red ecn harddrop limit 1M avpkt 1500 min 100K max 300K",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc red 1: root .* limit 1Mb min 100Kb max 300Kb ecn harddrop $",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root red ecn nodrop limit 1M avpkt 1500 min 100K max 300K",
"expExitCode": "0",
@@ -133,8 +122,7 @@
"matchPattern": "qdisc red 1: root .* limit 1Mb min 100Kb max 300Kb ecn nodrop $",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root red nodrop limit 1M avpkt 1500 min 100K max 300K",
"expExitCode": "2",
@@ -156,7 +143,6 @@
"matchPattern": "qdisc red",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
]
},
{
@@ -170,7 +156,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root red ecn harddrop nodrop limit 1M avpkt 1500 min 100K max 300K",
"expExitCode": "0",
@@ -178,8 +163,7 @@
"matchPattern": "qdisc red 1: root .* limit 1Mb min 100Kb max 300Kb ecn harddrop nodrop $",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -193,7 +177,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root red limit 1M avpkt 1500 min 100K max 300K",
"expExitCode": "0",
@@ -201,8 +184,7 @@
"matchPattern": "class red 1:[0-9]+ parent 1:",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json
index e21c7f22c6d4..aa7914c441ea 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfb.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 600s db 60s",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb rehash 60",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 60ms db 60s",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb db 100",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 600s db 100ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb limit 100",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc sfb 1: root refcnt [0-9]+ rehash 600s db 60s limit 100p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb max 100",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc sfb 1: root refcnt 2 rehash 600s db 60s.*max 100p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb target 100",
"expExitCode": "0",
@@ -133,8 +122,7 @@
"matchPattern": "qdisc sfb 1: root refcnt 2 rehash 600s db 60s.*target 100p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb increment 0.1",
"expExitCode": "0",
@@ -156,8 +143,7 @@
"matchPattern": "qdisc sfb 1: root refcnt 2 rehash 600s db 60s.*increment 0.1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -171,7 +157,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb decrement 0.1",
"expExitCode": "0",
@@ -179,8 +164,7 @@
"matchPattern": "qdisc sfb 1: root refcnt 2 rehash 600s db 60s.*decrement 0.1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -194,7 +178,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb penalty_rate 4000",
"expExitCode": "0",
@@ -202,8 +185,7 @@
"matchPattern": "qdisc sfb 1: root refcnt 2 rehash 600s db 60s.*penalty_rate 4000pps",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -217,7 +199,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb penalty_burst 64",
"expExitCode": "0",
@@ -225,8 +206,7 @@
"matchPattern": "qdisc sfb 1: root refcnt 2 rehash 600s db 60s.*penalty_burst 64p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -240,7 +220,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root sfb penalty_burst 64"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root sfb rehash 100",
@@ -249,8 +228,7 @@
"matchPattern": "qdisc sfb 1: root refcnt 2 rehash 100ms db 60s",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -264,7 +242,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfb",
"expExitCode": "0",
@@ -272,8 +249,7 @@
"matchPattern": "class sfb 1:",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfq.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfq.json
index b6be718a174a..16d51936b385 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfq.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/sfq.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfq",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc sfq 1: root refcnt [0-9]+ limit 127p quantum.*depth 127 divisor 1024",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfq limit 8",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc sfq 1: root refcnt [0-9]+ limit 8p",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfq perturb 10",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "depth 127 divisor 1024 perturb 10sec",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfq quantum 9000",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc sfq 1: root refcnt [0-9]+ limit 127p quantum 9000b depth 127 divisor 1024",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfq divisor 512",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc sfq 1: root refcnt [0-9]+ limit 127p quantum 1514b depth 127 divisor 512",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfq flows 20",
"expExitCode": "0",
@@ -133,8 +122,7 @@
"matchPattern": "qdisc sfq 1: root refcnt",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfq depth 64",
"expExitCode": "0",
@@ -156,8 +143,7 @@
"matchPattern": "qdisc sfq 1: root refcnt [0-9]+ limit 127p quantum 1514b depth 64 divisor 1024",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -171,7 +157,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfq headdrop",
"expExitCode": "0",
@@ -179,8 +164,7 @@
"matchPattern": "qdisc sfq 1: root refcnt [0-9]+ limit 127p quantum 1514b depth 127 headdrop divisor 1024",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -194,7 +178,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfq redflowlimit 100000 min 8000 max 60000 probability 0.20 ecn headdrop",
"expExitCode": "0",
@@ -202,8 +185,7 @@
"matchPattern": "qdisc sfq 1: root refcnt [0-9]+ limit 127p quantum 1514b depth 127 headdrop divisor 1024 ewma 6 min 8000b max 60000b probability 0.2 ecn",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -217,7 +199,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root sfq",
"expExitCode": "0",
@@ -225,8 +206,7 @@
"matchPattern": "class sfq 1:",
"matchCount": "0",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/skbprio.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/skbprio.json
index 5766045c9d33..076d1d69a3a4 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/skbprio.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/skbprio.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root skbprio",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc skbprio 1: root refcnt [0-9]+ limit 64",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root skbprio limit 1",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc skbprio 1: root refcnt [0-9]+ limit 1",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root skbprio"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root skbprio limit 32",
@@ -65,8 +60,7 @@
"matchPattern": "qdisc skbprio 1: root refcnt [0-9]+ limit 32",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -80,7 +74,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root skbprio",
"expExitCode": "0",
@@ -88,8 +81,7 @@
"matchPattern": "class skbprio 1:",
"matchCount": "64",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/taprio.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/taprio.json
index 0599635c4bc6..2d603ef2e375 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/taprio.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/taprio.json
@@ -170,11 +170,11 @@
"setup": [
"echo \"1 1 8\" > /sys/bus/netdevsim/new_device",
"$TC qdisc replace dev $ETH handle 8001: parent root stab overhead 24 taprio num_tc 8 map 0 1 2 3 4 5 6 7 queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 base-time 0 sched-entry S ff 20000000 clockid CLOCK_TAI",
- "./taprio_wait_for_admin.sh $TC $ETH"
+ "./scripts/taprio_wait_for_admin.sh $TC $ETH"
],
"cmdUnderTest": "$TC qdisc replace dev $ETH parent 8001:7 taprio num_tc 8 map 0 1 2 3 4 5 6 7 queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 base-time 200 sched-entry S ff 20000000 clockid CLOCK_TAI",
"expExitCode": "2",
- "verifyCmd": "bash -c \"./taprio_wait_for_admin.sh $TC $ETH && $TC -j qdisc show dev $ETH root | jq '.[].options.base_time'\"",
+ "verifyCmd": "bash -c \"./scripts/taprio_wait_for_admin.sh $TC $ETH && $TC -j qdisc show dev $ETH root | jq '.[].options.base_time'\"",
"matchPattern": "0",
"matchCount": "1",
"teardown": [
@@ -195,11 +195,11 @@
"setup": [
"echo \"1 1 8\" > /sys/bus/netdevsim/new_device",
"$TC qdisc replace dev $ETH handle 8001: parent root stab overhead 24 taprio num_tc 8 map 0 1 2 3 4 5 6 7 queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 base-time 0 sched-entry S ff 20000000 flags 0x2",
- "./taprio_wait_for_admin.sh $TC $ETH"
+ "./scripts/taprio_wait_for_admin.sh $TC $ETH"
],
"cmdUnderTest": "$TC qdisc replace dev $ETH parent 8001:7 taprio num_tc 8 map 0 1 2 3 4 5 6 7 queues 1@0 1@1 1@2 1@3 1@4 1@5 1@6 1@7 base-time 200 sched-entry S ff 20000000 flags 0x2",
"expExitCode": "2",
- "verifyCmd": "bash -c \"./taprio_wait_for_admin.sh $TC $ETH && $TC -j qdisc show dev $ETH root | jq '.[].options.base_time'\"",
+ "verifyCmd": "bash -c \"./scripts/taprio_wait_for_admin.sh $TC $ETH && $TC -j qdisc show dev $ETH root | jq '.[].options.base_time'\"",
"matchPattern": "0",
"matchCount": "1",
"teardown": [
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/tbf.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/tbf.json
index a4b3dfe51ff5..547a44910041 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/tbf.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/tbf.json
@@ -10,7 +10,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root tbf limit 1000 burst 1500 rate 10000",
"expExitCode": "0",
@@ -18,8 +17,7 @@
"matchPattern": "qdisc tbf 1: root refcnt [0-9]+ rate 10Kbit burst 1500b limit 1000b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -33,7 +31,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root tbf limit 1000 burst 1500 rate 20000 mtu 2048",
"expExitCode": "0",
@@ -41,8 +38,7 @@
"matchPattern": "qdisc tbf 1: root refcnt [0-9]+ rate 20Kbit burst 1500b limit 1000b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -56,7 +52,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root tbf limit 1000 burst 1500 rate 20000 mtu 1510 peakrate 30000",
"expExitCode": "0",
@@ -64,8 +59,7 @@
"matchPattern": "qdisc tbf 1: root refcnt [0-9]+ rate 20Kbit burst 1500b peakrate 30Kbit minburst.*limit 1000b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -79,7 +73,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root tbf burst 1500 rate 20000 latency 100ms",
"expExitCode": "0",
@@ -87,8 +80,7 @@
"matchPattern": "qdisc tbf 1: root refcnt [0-9]+ rate 20Kbit burst 1500b lat 100ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -102,7 +94,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root tbf limit 1000 burst 1500 rate 20000 overhead 300",
"expExitCode": "0",
@@ -110,8 +101,7 @@
"matchPattern": "qdisc tbf 1: root refcnt [0-9]+ rate 20Kbit burst 1800b limit 1000b overhead 300",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -125,7 +115,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root tbf limit 1000 burst 1500 rate 20000 linklayer atm",
"expExitCode": "0",
@@ -133,8 +122,7 @@
"matchPattern": "qdisc tbf 1: root refcnt [0-9]+ rate 20Kbit burst 1696b limit 1000b linklayer atm",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -148,7 +136,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root tbf limit 1000 burst 1500 rate 20000 linklayer atm"
],
"cmdUnderTest": "$TC qdisc replace dev $DUMMY handle 1: root tbf limit 1000 burst 1500 rate 20000 linklayer ethernet",
@@ -157,8 +144,7 @@
"matchPattern": "qdisc tbf 1: root refcnt [0-9]+ rate 20Kbit burst 1500b limit 1000b",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -172,7 +158,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
"$TC qdisc add dev $DUMMY handle 1: root tbf burst 1500 rate 20000 latency 10ms"
],
"cmdUnderTest": "$TC qdisc change dev $DUMMY handle 1: root tbf burst 1500 rate 20000 latency 200ms",
@@ -181,8 +166,7 @@
"matchPattern": "qdisc tbf 1: root refcnt [0-9]+ rate 20Kbit burst 1500b lat 200ms",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
},
{
@@ -196,7 +180,6 @@
"requires": "nsPlugin"
},
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root tbf limit 1000 burst 1500 rate 10000",
"expExitCode": "0",
@@ -204,8 +187,7 @@
"matchPattern": "class tbf.*parent 1:",
"matchCount": "1",
"teardown": [
- "$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$TC qdisc del dev $DUMMY handle 1: root"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/teql.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/teql.json
index 0082be0e93ac..e5cc31f265f8 100644
--- a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/teql.json
+++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/teql.json
@@ -6,11 +6,8 @@
"qdisc",
"teql"
],
- "plugins": {
- "requires": "nsPlugin"
- },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
+ "$IP link add dev $DUMMY type dummy"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root teql0",
"expExitCode": "0",
@@ -19,7 +16,7 @@
"matchCount": "1",
"teardown": [
"$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$IP link del dev $DUMMY"
]
},
{
@@ -29,13 +26,10 @@
"qdisc",
"teql"
],
- "plugins": {
- "requires": "nsPlugin"
- },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
- "echo \"1 1 4\" > /sys/bus/netdevsim/new_device",
- "$TC qdisc add dev $ETH root handle 1: teql0"
+ "$IP link add dev $DUMMY type dummy",
+ "$IP link add dev $ETH type dummy",
+ "$TC qdisc add dev $ETH handle 1: root teql0"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root teql0",
"expExitCode": "0",
@@ -44,8 +38,8 @@
"matchCount": "1",
"teardown": [
"$TC qdisc del dev $DUMMY handle 1: root",
- "echo \"1\" > /sys/bus/netdevsim/del_device",
- "$IP link del dev $DUMMY type dummy"
+ "$IP link del dev $DUMMY",
+ "$IP link del dev $ETH"
]
},
{
@@ -55,11 +49,8 @@
"qdisc",
"teql"
],
- "plugins": {
- "requires": "nsPlugin"
- },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true",
+ "$IP link add dev $DUMMY type dummy",
"$TC qdisc add dev $DUMMY handle 1: root teql0"
],
"cmdUnderTest": "$TC qdisc del dev $DUMMY handle 1: root",
@@ -68,7 +59,7 @@
"matchPattern": "qdisc teql0 1: root refcnt",
"matchCount": "0",
"teardown": [
- "$IP link del dev $DUMMY type dummy"
+ "$IP link del dev $DUMMY"
]
},
{
@@ -78,11 +69,8 @@
"qdisc",
"teql"
],
- "plugins": {
- "requires": "nsPlugin"
- },
"setup": [
- "$IP link add dev $DUMMY type dummy || /bin/true"
+ "$IP link add dev $DUMMY type dummy"
],
"cmdUnderTest": "$TC qdisc add dev $DUMMY handle 1: root teql0",
"expExitCode": "0",
@@ -91,7 +79,7 @@
"matchCount": "1",
"teardown": [
"$TC qdisc del dev $DUMMY handle 1: root",
- "$IP link del dev $DUMMY type dummy"
+ "$IP link del dev $DUMMY"
]
}
]
diff --git a/tools/testing/selftests/tc-testing/tdc.py b/tools/testing/selftests/tc-testing/tdc.py
index b98256f38447..a6718192aff3 100755
--- a/tools/testing/selftests/tc-testing/tdc.py
+++ b/tools/testing/selftests/tc-testing/tdc.py
@@ -16,6 +16,8 @@ import json
import subprocess
import time
import traceback
+import random
+from multiprocessing import Pool
from collections import OrderedDict
from string import Template
@@ -38,12 +40,11 @@ class PluginMgrTestFail(Exception):
class PluginMgr:
def __init__(self, argparser):
super().__init__()
- self.plugins = {}
+ self.plugins = set()
self.plugin_instances = []
self.failed_plugins = {}
self.argparser = argparser
- # TODO, put plugins in order
plugindir = os.getenv('TDC_PLUGIN_DIR', './plugins')
for dirpath, dirnames, filenames in os.walk(plugindir):
for fn in filenames:
@@ -53,32 +54,43 @@ class PluginMgr:
not fn.startswith('.#')):
mn = fn[0:-3]
foo = importlib.import_module('plugins.' + mn)
- self.plugins[mn] = foo
- self.plugin_instances.append(foo.SubPlugin())
+ self.plugins.add(mn)
+ self.plugin_instances[mn] = foo.SubPlugin()
def load_plugin(self, pgdir, pgname):
pgname = pgname[0:-3]
+ self.plugins.add(pgname)
+
foo = importlib.import_module('{}.{}'.format(pgdir, pgname))
- self.plugins[pgname] = foo
- self.plugin_instances.append(foo.SubPlugin())
- self.plugin_instances[-1].check_args(self.args, None)
+
+ # nsPlugin must always be the first one
+ if pgname == "nsPlugin":
+ self.plugin_instances.insert(0, (pgname, foo.SubPlugin()))
+ self.plugin_instances[0][1].check_args(self.args, None)
+ else:
+ self.plugin_instances.append((pgname, foo.SubPlugin()))
+ self.plugin_instances[-1][1].check_args(self.args, None)
def get_required_plugins(self, testlist):
'''
Get all required plugins from the list of test cases and return
all unique items.
'''
- reqs = []
+ reqs = set()
for t in testlist:
try:
if 'requires' in t['plugins']:
if isinstance(t['plugins']['requires'], list):
- reqs.extend(t['plugins']['requires'])
+ reqs.update(set(t['plugins']['requires']))
else:
- reqs.append(t['plugins']['requires'])
+ reqs.add(t['plugins']['requires'])
+ t['plugins'] = t['plugins']['requires']
+ else:
+ t['plugins'] = []
except KeyError:
+ t['plugins'] = []
continue
- reqs = get_unique_item(reqs)
+
return reqs
def load_required_plugins(self, reqs, parser, args, remaining):
@@ -115,15 +127,17 @@ class PluginMgr:
return args
def call_pre_suite(self, testcount, testidlist):
- for pgn_inst in self.plugin_instances:
+ for (_, pgn_inst) in self.plugin_instances:
pgn_inst.pre_suite(testcount, testidlist)
def call_post_suite(self, index):
- for pgn_inst in reversed(self.plugin_instances):
+ for (_, pgn_inst) in reversed(self.plugin_instances):
pgn_inst.post_suite(index)
def call_pre_case(self, caseinfo, *, test_skip=False):
- for pgn_inst in self.plugin_instances:
+ for (pgn, pgn_inst) in self.plugin_instances:
+ if pgn not in caseinfo['plugins']:
+ continue
try:
pgn_inst.pre_case(caseinfo, test_skip)
except Exception as ee:
@@ -133,29 +147,37 @@ class PluginMgr:
print('testid is {}'.format(caseinfo['id']))
raise
- def call_post_case(self):
- for pgn_inst in reversed(self.plugin_instances):
+ def call_post_case(self, caseinfo):
+ for (pgn, pgn_inst) in reversed(self.plugin_instances):
+ if pgn not in caseinfo['plugins']:
+ continue
pgn_inst.post_case()
- def call_pre_execute(self):
- for pgn_inst in self.plugin_instances:
+ def call_pre_execute(self, caseinfo):
+ for (pgn, pgn_inst) in self.plugin_instances:
+ if pgn not in caseinfo['plugins']:
+ continue
pgn_inst.pre_execute()
- def call_post_execute(self):
- for pgn_inst in reversed(self.plugin_instances):
+ def call_post_execute(self, caseinfo):
+ for (pgn, pgn_inst) in reversed(self.plugin_instances):
+ if pgn not in caseinfo['plugins']:
+ continue
pgn_inst.post_execute()
def call_add_args(self, parser):
- for pgn_inst in self.plugin_instances:
+ for (pgn, pgn_inst) in self.plugin_instances:
parser = pgn_inst.add_args(parser)
return parser
def call_check_args(self, args, remaining):
- for pgn_inst in self.plugin_instances:
+ for (pgn, pgn_inst) in self.plugin_instances:
pgn_inst.check_args(args, remaining)
- def call_adjust_command(self, stage, command):
- for pgn_inst in self.plugin_instances:
+ def call_adjust_command(self, caseinfo, stage, command):
+ for (pgn, pgn_inst) in self.plugin_instances:
+ if pgn not in caseinfo['plugins']:
+ continue
command = pgn_inst.adjust_command(stage, command)
return command
@@ -177,7 +199,7 @@ def replace_keywords(cmd):
return subcmd
-def exec_cmd(args, pm, stage, command):
+def exec_cmd(caseinfo, args, pm, stage, command):
"""
Perform any required modifications on an executable command, then run
it in a subprocess and return the results.
@@ -187,9 +209,10 @@ def exec_cmd(args, pm, stage, command):
if '$' in command:
command = replace_keywords(command)
- command = pm.call_adjust_command(stage, command)
+ command = pm.call_adjust_command(caseinfo, stage, command)
if args.verbose > 0:
print('command "{}"'.format(command))
+
proc = subprocess.Popen(command,
shell=True,
stdout=subprocess.PIPE,
@@ -211,7 +234,7 @@ def exec_cmd(args, pm, stage, command):
return proc, foutput
-def prepare_env(args, pm, stage, prefix, cmdlist, output = None):
+def prepare_env(caseinfo, args, pm, stage, prefix, cmdlist, output = None):
"""
Execute the setup/teardown commands for a test case.
Optionally terminate test execution if the command fails.
@@ -229,7 +252,7 @@ def prepare_env(args, pm, stage, prefix, cmdlist, output = None):
if not cmd:
continue
- (proc, foutput) = exec_cmd(args, pm, stage, cmd)
+ (proc, foutput) = exec_cmd(caseinfo, args, pm, stage, cmd)
if proc and (proc.returncode not in exit_codes):
print('', file=sys.stderr)
@@ -352,6 +375,10 @@ def find_in_json_other(res, outputJSONVal, matchJSONVal, matchJSONKey=None):
def run_one_test(pm, args, index, tidx):
global NAMES
+ ns = NAMES['NS']
+ dev0 = NAMES['DEV0']
+ dev1 = NAMES['DEV1']
+ dummy = NAMES['DUMMY']
result = True
tresult = ""
tap = ""
@@ -366,38 +393,42 @@ def run_one_test(pm, args, index, tidx):
res.set_result(ResultState.skip)
res.set_errormsg('Test case designated as skipped.')
pm.call_pre_case(tidx, test_skip=True)
- pm.call_post_execute()
+ pm.call_post_execute(tidx)
return res
if 'dependsOn' in tidx:
if (args.verbose > 0):
print('probe command for test skip')
- (p, procout) = exec_cmd(args, pm, 'execute', tidx['dependsOn'])
+ (p, procout) = exec_cmd(tidx, args, pm, 'execute', tidx['dependsOn'])
if p:
if (p.returncode != 0):
res = TestResult(tidx['id'], tidx['name'])
res.set_result(ResultState.skip)
res.set_errormsg('probe command: test skipped.')
pm.call_pre_case(tidx, test_skip=True)
- pm.call_post_execute()
+ pm.call_post_execute(tidx)
return res
# populate NAMES with TESTID for this test
NAMES['TESTID'] = tidx['id']
+ NAMES['NS'] = '{}-{}'.format(NAMES['NS'], tidx['random'])
+ NAMES['DEV0'] = '{}id{}'.format(NAMES['DEV0'], tidx['id'])
+ NAMES['DEV1'] = '{}id{}'.format(NAMES['DEV1'], tidx['id'])
+ NAMES['DUMMY'] = '{}id{}'.format(NAMES['DUMMY'], tidx['id'])
pm.call_pre_case(tidx)
- prepare_env(args, pm, 'setup', "-----> prepare stage", tidx["setup"])
+ prepare_env(tidx, args, pm, 'setup', "-----> prepare stage", tidx["setup"])
if (args.verbose > 0):
print('-----> execute stage')
- pm.call_pre_execute()
- (p, procout) = exec_cmd(args, pm, 'execute', tidx["cmdUnderTest"])
+ pm.call_pre_execute(tidx)
+ (p, procout) = exec_cmd(tidx, args, pm, 'execute', tidx["cmdUnderTest"])
if p:
exit_code = p.returncode
else:
exit_code = None
- pm.call_post_execute()
+ pm.call_post_execute(tidx)
if (exit_code is None or exit_code != int(tidx["expExitCode"])):
print("exit: {!r}".format(exit_code))
@@ -409,7 +440,7 @@ def run_one_test(pm, args, index, tidx):
else:
if args.verbose > 0:
print('-----> verify stage')
- (p, procout) = exec_cmd(args, pm, 'verify', tidx["verifyCmd"])
+ (p, procout) = exec_cmd(tidx, args, pm, 'verify', tidx["verifyCmd"])
if procout:
if 'matchJSON' in tidx:
verify_by_json(procout, res, tidx, args, pm)
@@ -431,15 +462,49 @@ def run_one_test(pm, args, index, tidx):
else:
res.set_result(ResultState.success)
- prepare_env(args, pm, 'teardown', '-----> teardown stage', tidx['teardown'], procout)
- pm.call_post_case()
+ prepare_env(tidx, args, pm, 'teardown', '-----> teardown stage', tidx['teardown'], procout)
+ pm.call_post_case(tidx)
index += 1
# remove TESTID from NAMES
del(NAMES['TESTID'])
+
+ # Restore names
+ NAMES['NS'] = ns
+ NAMES['DEV0'] = dev0
+ NAMES['DEV1'] = dev1
+ NAMES['DUMMY'] = dummy
+
return res
+def prepare_run(pm, args, testlist):
+ tcount = len(testlist)
+ emergency_exit = False
+ emergency_exit_message = ''
+
+ try:
+ pm.call_pre_suite(tcount, testlist)
+ except Exception as ee:
+ ex_type, ex, ex_tb = sys.exc_info()
+ print('Exception {} {} (caught in pre_suite).'.
+ format(ex_type, ex))
+ traceback.print_tb(ex_tb)
+ emergency_exit_message = 'EMERGENCY EXIT, call_pre_suite failed with exception {} {}\n'.format(ex_type, ex)
+ emergency_exit = True
+
+ if emergency_exit:
+ pm.call_post_suite(1)
+ return emergency_exit_message
+
+ if args.verbose:
+ print('give test rig 2 seconds to stabilize')
+
+ time.sleep(2)
+
+def purge_run(pm, index):
+ pm.call_post_suite(index)
+
def test_runner(pm, args, filtered_tests):
"""
Driver function for the unit tests.
@@ -455,28 +520,9 @@ def test_runner(pm, args, filtered_tests):
tap = ''
badtest = None
stage = None
- emergency_exit = False
- emergency_exit_message = ''
tsr = TestSuiteReport()
- try:
- pm.call_pre_suite(tcount, [tidx['id'] for tidx in testlist])
- except Exception as ee:
- ex_type, ex, ex_tb = sys.exc_info()
- print('Exception {} {} (caught in pre_suite).'.
- format(ex_type, ex))
- traceback.print_tb(ex_tb)
- emergency_exit_message = 'EMERGENCY EXIT, call_pre_suite failed with exception {} {}\n'.format(ex_type, ex)
- emergency_exit = True
- stage = 'pre-SUITE'
-
- if emergency_exit:
- pm.call_post_suite(index)
- return emergency_exit_message
- if args.verbose > 1:
- print('give test rig 2 seconds to stabilize')
- time.sleep(2)
for tidx in testlist:
if "flower" in tidx["category"] and args.device == None:
errmsg = "Tests using the DEV2 variable must define the name of a "
@@ -539,7 +585,68 @@ def test_runner(pm, args, filtered_tests):
if input(sys.stdin):
print('got something on stdin')
- pm.call_post_suite(index)
+ return (index, tsr)
+
+def mp_bins(alltests):
+ serial = []
+ parallel = []
+
+ for test in alltests:
+ if 'nsPlugin' not in test['plugins']:
+ serial.append(test)
+ else:
+ # We can only create one netdevsim device at a time
+ if 'netdevsim/new_device' in str(test['setup']):
+ serial.append(test)
+ else:
+ parallel.append(test)
+
+ return (serial, parallel)
+
+def __mp_runner(tests):
+ (_, tsr) = test_runner(mp_pm, mp_args, tests)
+ return tsr._testsuite
+
+def test_runner_mp(pm, args, alltests):
+ prepare_run(pm, args, alltests)
+
+ (serial, parallel) = mp_bins(alltests)
+
+ batches = [parallel[n : n + 32] for n in range(0, len(parallel), 32)]
+ batches.insert(0, serial)
+
+ print("Executing {} tests in parallel and {} in serial".format(len(parallel), len(serial)))
+ print("Using {} batches".format(len(batches)))
+
+ # We can't pickle these objects so workaround them
+ global mp_pm
+ mp_pm = pm
+
+ global mp_args
+ mp_args = args
+
+ with Pool(args.mp) as p:
+ pres = p.map(__mp_runner, batches)
+
+ tsr = TestSuiteReport()
+ for trs in pres:
+ for res in trs:
+ tsr.add_resultdata(res)
+
+ # Passing an index is not useful in MP
+ purge_run(pm, None)
+
+ return tsr
+
+def test_runner_serial(pm, args, alltests):
+ prepare_run(pm, args, alltests)
+
+ if args.verbose:
+ print("Executing {} tests in serial".format(len(alltests)))
+
+ (index, tsr) = test_runner(pm, args, alltests)
+
+ purge_run(pm, index)
return tsr
@@ -568,12 +675,15 @@ def load_from_file(filename):
k['filename'] = filename
return testlist
+def identity(string):
+ return string
def args_parse():
"""
Create the argument parser.
"""
parser = argparse.ArgumentParser(description='Linux TC unit tests')
+ parser.register('type', None, identity)
return parser
@@ -631,6 +741,9 @@ def set_args(parser):
parser.add_argument(
'-P', '--pause', action='store_true',
help='Pause execution just before post-suite stage')
+ parser.add_argument(
+ '-J', '--multiprocess', type=int, default=1, dest='mp',
+ help='Run tests in parallel whenever possible')
return parser
@@ -661,7 +774,6 @@ def get_id_list(alltests):
"""
return [x["id"] for x in alltests]
-
def check_case_id(alltests):
"""
Check for duplicate test case IDs.
@@ -683,7 +795,6 @@ def generate_case_ids(alltests):
If a test case has a blank ID field, generate a random hex ID for it
and then write the test cases back to disk.
"""
- import random
for c in alltests:
if (c["id"] == ""):
while True:
@@ -742,6 +853,9 @@ def filter_tests_by_category(args, testlist):
return answer
+def set_random(alltests):
+ for tidx in alltests:
+ tidx['random'] = random.getrandbits(32)
def get_test_cases(args):
"""
@@ -840,6 +954,8 @@ def set_operation_mode(pm, parser, args, remaining):
list_test_cases(alltests)
exit(0)
+ set_random(alltests)
+
exit_code = 0 # KSFT_PASS
if len(alltests):
req_plugins = pm.get_required_plugins(alltests)
@@ -848,7 +964,12 @@ def set_operation_mode(pm, parser, args, remaining):
except PluginDependencyException as pde:
print('The following plugins were not found:')
print('{}'.format(pde.missing_pg))
- catresults = test_runner(pm, args, alltests)
+
+ if args.mp > 1:
+ catresults = test_runner_mp(pm, args, alltests)
+ else:
+ catresults = test_runner_serial(pm, args, alltests)
+
if catresults.count_failures() != 0:
exit_code = 1 # KSFT_FAIL
if args.format == 'none':
@@ -883,6 +1004,13 @@ def main():
Start of execution; set up argument parser and get the arguments,
and start operations.
"""
+ import resource
+
+ if sys.version_info.major < 3 or sys.version_info.minor < 8:
+ sys.exit("tdc requires at least python 3.8")
+
+ resource.setrlimit(resource.RLIMIT_NOFILE, (1048576, 1048576))
+
parser = args_parse()
parser = set_args(parser)
pm = PluginMgr(parser)
diff --git a/tools/testing/vsock/.gitignore b/tools/testing/vsock/.gitignore
index a8adcfdc292b..d9f798713cd7 100644
--- a/tools/testing/vsock/.gitignore
+++ b/tools/testing/vsock/.gitignore
@@ -3,3 +3,4 @@
vsock_test
vsock_diag_test
vsock_perf
+vsock_uring_test
diff --git a/tools/testing/vsock/Makefile b/tools/testing/vsock/Makefile
index 21a98ba565ab..a7f56a09ca9f 100644
--- a/tools/testing/vsock/Makefile
+++ b/tools/testing/vsock/Makefile
@@ -1,12 +1,15 @@
# SPDX-License-Identifier: GPL-2.0-only
all: test vsock_perf
-test: vsock_test vsock_diag_test
-vsock_test: vsock_test.o timeout.o control.o util.o
+test: vsock_test vsock_diag_test vsock_uring_test
+vsock_test: vsock_test.o vsock_test_zerocopy.o timeout.o control.o util.o msg_zerocopy_common.o
vsock_diag_test: vsock_diag_test.o timeout.o control.o util.o
-vsock_perf: vsock_perf.o
+vsock_perf: vsock_perf.o msg_zerocopy_common.o
+
+vsock_uring_test: LDLIBS = -luring
+vsock_uring_test: control.o util.o vsock_uring_test.o timeout.o msg_zerocopy_common.o
CFLAGS += -g -O2 -Werror -Wall -I. -I../../include -I../../../usr/include -Wno-pointer-sign -fno-strict-overflow -fno-strict-aliasing -fno-common -MMD -U_FORTIFY_SOURCE -D_GNU_SOURCE
.PHONY: all test clean
clean:
- ${RM} *.o *.d vsock_test vsock_diag_test vsock_perf
+ ${RM} *.o *.d vsock_test vsock_diag_test vsock_perf vsock_uring_test
-include *.d
diff --git a/tools/testing/vsock/msg_zerocopy_common.c b/tools/testing/vsock/msg_zerocopy_common.c
new file mode 100644
index 000000000000..5a4bdf7b5132
--- /dev/null
+++ b/tools/testing/vsock/msg_zerocopy_common.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* Some common code for MSG_ZEROCOPY logic
+ *
+ * Copyright (C) 2023 SberDevices.
+ *
+ * Author: Arseniy Krasnov <avkrasnov@salutedevices.com>
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <sys/types.h>
+#include <sys/socket.h>
+#include <linux/errqueue.h>
+
+#include "msg_zerocopy_common.h"
+
+void enable_so_zerocopy(int fd)
+{
+ int val = 1;
+
+ if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &val, sizeof(val))) {
+ perror("setsockopt");
+ exit(EXIT_FAILURE);
+ }
+}
+
+void vsock_recv_completion(int fd, const bool *zerocopied)
+{
+ struct sock_extended_err *serr;
+ struct msghdr msg = { 0 };
+ char cmsg_data[128];
+ struct cmsghdr *cm;
+ ssize_t res;
+
+ msg.msg_control = cmsg_data;
+ msg.msg_controllen = sizeof(cmsg_data);
+
+ res = recvmsg(fd, &msg, MSG_ERRQUEUE);
+ if (res) {
+ fprintf(stderr, "failed to read error queue: %zi\n", res);
+ exit(EXIT_FAILURE);
+ }
+
+ cm = CMSG_FIRSTHDR(&msg);
+ if (!cm) {
+ fprintf(stderr, "cmsg: no cmsg\n");
+ exit(EXIT_FAILURE);
+ }
+
+ if (cm->cmsg_level != SOL_VSOCK) {
+ fprintf(stderr, "cmsg: unexpected 'cmsg_level'\n");
+ exit(EXIT_FAILURE);
+ }
+
+ if (cm->cmsg_type != VSOCK_RECVERR) {
+ fprintf(stderr, "cmsg: unexpected 'cmsg_type'\n");
+ exit(EXIT_FAILURE);
+ }
+
+ serr = (void *)CMSG_DATA(cm);
+ if (serr->ee_origin != SO_EE_ORIGIN_ZEROCOPY) {
+ fprintf(stderr, "serr: wrong origin: %u\n", serr->ee_origin);
+ exit(EXIT_FAILURE);
+ }
+
+ if (serr->ee_errno) {
+ fprintf(stderr, "serr: wrong error code: %u\n", serr->ee_errno);
+ exit(EXIT_FAILURE);
+ }
+
+ /* This flag is used for tests, to check that transmission was
+ * performed as expected: zerocopy or fallback to copy. If NULL
+ * - don't care.
+ */
+ if (!zerocopied)
+ return;
+
+ if (*zerocopied && (serr->ee_code & SO_EE_CODE_ZEROCOPY_COPIED)) {
+ fprintf(stderr, "serr: was copy instead of zerocopy\n");
+ exit(EXIT_FAILURE);
+ }
+
+ if (!*zerocopied && !(serr->ee_code & SO_EE_CODE_ZEROCOPY_COPIED)) {
+ fprintf(stderr, "serr: was zerocopy instead of copy\n");
+ exit(EXIT_FAILURE);
+ }
+}
diff --git a/tools/testing/vsock/msg_zerocopy_common.h b/tools/testing/vsock/msg_zerocopy_common.h
new file mode 100644
index 000000000000..3763c5ccedb9
--- /dev/null
+++ b/tools/testing/vsock/msg_zerocopy_common.h
@@ -0,0 +1,18 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef MSG_ZEROCOPY_COMMON_H
+#define MSG_ZEROCOPY_COMMON_H
+
+#include <stdbool.h>
+
+#ifndef SOL_VSOCK
+#define SOL_VSOCK 287
+#endif
+
+#ifndef VSOCK_RECVERR
+#define VSOCK_RECVERR 1
+#endif
+
+void enable_so_zerocopy(int fd);
+void vsock_recv_completion(int fd, const bool *zerocopied);
+
+#endif /* MSG_ZEROCOPY_COMMON_H */
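
For context, the two helpers above are meant to be used together on the transmit path: opt in with SO_ZEROCOPY, send with MSG_ZEROCOPY, wait for POLLERR on the socket, then drain the completion notification from the error queue. A minimal caller sketch follows the same pattern vsock_perf.c adopts further down (illustrative only; the function name and buffer handling here are not part of the patch, and a connected vsock fd with MSG_ZEROCOPY support is assumed):

#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>

#include "msg_zerocopy_common.h"

static void send_one_zerocopy(int fd, const void *buf, size_t len)
{
	struct pollfd pfd = { .fd = fd };
	ssize_t sent;

	/* Opt in to MSG_ZEROCOPY transmission on this socket. */
	enable_so_zerocopy(fd);

	sent = send(fd, buf, len, MSG_ZEROCOPY);
	if (sent < 0) {
		perror("send");
		exit(EXIT_FAILURE);
	}

	/* Completion notifications land on the error queue; POLLERR is
	 * reported even though no events were requested.
	 */
	if (poll(&pfd, 1, -1) < 0) {
		perror("poll");
		exit(EXIT_FAILURE);
	}

	if (!(pfd.revents & POLLERR)) {
		fprintf(stderr, "POLLERR expected\n");
		exit(EXIT_FAILURE);
	}

	/* NULL: don't care whether the kernel fell back to copying. */
	vsock_recv_completion(fd, NULL);
}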
diff --git a/tools/testing/vsock/util.c b/tools/testing/vsock/util.c
index 01b636d3039a..92336721321a 100644
--- a/tools/testing/vsock/util.c
+++ b/tools/testing/vsock/util.c
@@ -11,10 +11,12 @@
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
+#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <assert.h>
#include <sys/epoll.h>
+#include <sys/mman.h>
#include "timeout.h"
#include "control.h"
@@ -211,102 +213,138 @@ int vsock_seqpacket_accept(unsigned int cid, unsigned int port,
return vsock_accept(cid, port, clientaddrp, SOCK_SEQPACKET);
}
-/* Transmit one byte and check the return value.
+/* Transmit bytes from a buffer and check the return value.
*
* expected_ret:
* <0 Negative errno (for testing errors)
* 0 End-of-file
- * 1 Success
+ * >0 Success (bytes successfully written)
*/
-void send_byte(int fd, int expected_ret, int flags)
+void send_buf(int fd, const void *buf, size_t len, int flags,
+ ssize_t expected_ret)
{
- const uint8_t byte = 'A';
- ssize_t nwritten;
+ ssize_t nwritten = 0;
+ ssize_t ret;
timeout_begin(TIMEOUT);
do {
- nwritten = send(fd, &byte, sizeof(byte), flags);
- timeout_check("write");
- } while (nwritten < 0 && errno == EINTR);
+ ret = send(fd, buf + nwritten, len - nwritten, flags);
+ timeout_check("send");
+
+ if (ret == 0 || (ret < 0 && errno != EINTR))
+ break;
+
+ nwritten += ret;
+ } while (nwritten < len);
timeout_end();
if (expected_ret < 0) {
- if (nwritten != -1) {
- fprintf(stderr, "bogus send(2) return value %zd\n",
- nwritten);
+ if (ret != -1) {
+ fprintf(stderr, "bogus send(2) return value %zd (expected %zd)\n",
+ ret, expected_ret);
exit(EXIT_FAILURE);
}
if (errno != -expected_ret) {
- perror("write");
+ perror("send");
exit(EXIT_FAILURE);
}
return;
}
- if (nwritten < 0) {
- perror("write");
+ if (ret < 0) {
+ perror("send");
exit(EXIT_FAILURE);
}
- if (nwritten == 0) {
- if (expected_ret == 0)
- return;
- fprintf(stderr, "unexpected EOF while sending byte\n");
- exit(EXIT_FAILURE);
- }
- if (nwritten != sizeof(byte)) {
- fprintf(stderr, "bogus send(2) return value %zd\n", nwritten);
+ if (nwritten != expected_ret) {
+ if (ret == 0)
+ fprintf(stderr, "unexpected EOF while sending bytes\n");
+
+ fprintf(stderr, "bogus send(2) bytes written %zd (expected %zd)\n",
+ nwritten, expected_ret);
exit(EXIT_FAILURE);
}
}
-/* Receive one byte and check the return value.
+/* Receive bytes in a buffer and check the return value.
*
* expected_ret:
* <0 Negative errno (for testing errors)
* 0 End-of-file
- * 1 Success
+ * >0 Success (bytes successfully read)
*/
-void recv_byte(int fd, int expected_ret, int flags)
+void recv_buf(int fd, void *buf, size_t len, int flags, ssize_t expected_ret)
{
- uint8_t byte;
- ssize_t nread;
+ ssize_t nread = 0;
+ ssize_t ret;
timeout_begin(TIMEOUT);
do {
- nread = recv(fd, &byte, sizeof(byte), flags);
- timeout_check("read");
- } while (nread < 0 && errno == EINTR);
+ ret = recv(fd, buf + nread, len - nread, flags);
+ timeout_check("recv");
+
+ if (ret == 0 || (ret < 0 && errno != EINTR))
+ break;
+
+ nread += ret;
+ } while (nread < len);
timeout_end();
if (expected_ret < 0) {
- if (nread != -1) {
- fprintf(stderr, "bogus recv(2) return value %zd\n",
- nread);
+ if (ret != -1) {
+ fprintf(stderr, "bogus recv(2) return value %zd (expected %zd)\n",
+ ret, expected_ret);
exit(EXIT_FAILURE);
}
if (errno != -expected_ret) {
- perror("read");
+ perror("recv");
exit(EXIT_FAILURE);
}
return;
}
- if (nread < 0) {
- perror("read");
+ if (ret < 0) {
+ perror("recv");
exit(EXIT_FAILURE);
}
- if (nread == 0) {
- if (expected_ret == 0)
- return;
- fprintf(stderr, "unexpected EOF while receiving byte\n");
- exit(EXIT_FAILURE);
- }
- if (nread != sizeof(byte)) {
- fprintf(stderr, "bogus recv(2) return value %zd\n", nread);
+ if (nread != expected_ret) {
+ if (ret == 0)
+ fprintf(stderr, "unexpected EOF while receiving bytes\n");
+
+ fprintf(stderr, "bogus recv(2) bytes read %zd (expected %zd)\n",
+ nread, expected_ret);
exit(EXIT_FAILURE);
}
+}
+
+/* Transmit one byte and check the return value.
+ *
+ * expected_ret:
+ * <0 Negative errno (for testing errors)
+ * 0 End-of-file
+ * 1 Success
+ */
+void send_byte(int fd, int expected_ret, int flags)
+{
+ const uint8_t byte = 'A';
+
+ send_buf(fd, &byte, sizeof(byte), flags, expected_ret);
+}
+
+/* Receive one byte and check the return value.
+ *
+ * expected_ret:
+ * <0 Negative errno (for testing errors)
+ * 0 End-of-file
+ * 1 Success
+ */
+void recv_byte(int fd, int expected_ret, int flags)
+{
+ uint8_t byte;
+
+ recv_buf(fd, &byte, sizeof(byte), flags, expected_ret);
+
if (byte != 'A') {
fprintf(stderr, "unexpected byte read %c\n", byte);
exit(EXIT_FAILURE);
@@ -408,3 +446,134 @@ unsigned long hash_djb2(const void *data, size_t len)
return hash;
}
+
+size_t iovec_bytes(const struct iovec *iov, size_t iovnum)
+{
+ size_t bytes;
+ int i;
+
+ for (bytes = 0, i = 0; i < iovnum; i++)
+ bytes += iov[i].iov_len;
+
+ return bytes;
+}
+
+unsigned long iovec_hash_djb2(const struct iovec *iov, size_t iovnum)
+{
+ unsigned long hash;
+ size_t iov_bytes;
+ size_t offs;
+ void *tmp;
+ int i;
+
+ iov_bytes = iovec_bytes(iov, iovnum);
+
+ tmp = malloc(iov_bytes);
+ if (!tmp) {
+ perror("malloc");
+ exit(EXIT_FAILURE);
+ }
+
+ for (offs = 0, i = 0; i < iovnum; i++) {
+ memcpy(tmp + offs, iov[i].iov_base, iov[i].iov_len);
+ offs += iov[i].iov_len;
+ }
+
+ hash = hash_djb2(tmp, iov_bytes);
+ free(tmp);
+
+ return hash;
+}
+
+/* Allocates and returns new 'struct iovec *' according pattern
+ * in the 'test_iovec'. For each element in the 'test_iovec' it
+ * allocates new element in the resulting 'iovec'. 'iov_len'
+ * of the new element is copied from 'test_iovec'. 'iov_base' is
+ * allocated depending on the 'iov_base' of 'test_iovec':
+ *
+ * 'iov_base' == NULL -> valid buf: mmap('iov_len').
+ *
+ * 'iov_base' == MAP_FAILED -> invalid buf:
+ * mmap('iov_len'), then munmap('iov_len').
+ * 'iov_base' still contains result of
+ * mmap().
+ *
+ * 'iov_base' == number -> unaligned valid buf:
+ * mmap('iov_len') + number.
+ *
+ * 'iovnum' is number of elements in 'test_iovec'.
+ *
+ * Returns new 'iovec' or calls 'exit()' on error.
+ */
+struct iovec *alloc_test_iovec(const struct iovec *test_iovec, int iovnum)
+{
+ struct iovec *iovec;
+ int i;
+
+ iovec = malloc(sizeof(*iovec) * iovnum);
+ if (!iovec) {
+ perror("malloc");
+ exit(EXIT_FAILURE);
+ }
+
+ for (i = 0; i < iovnum; i++) {
+ iovec[i].iov_len = test_iovec[i].iov_len;
+
+ iovec[i].iov_base = mmap(NULL, iovec[i].iov_len,
+ PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE,
+ -1, 0);
+ if (iovec[i].iov_base == MAP_FAILED) {
+ perror("mmap");
+ exit(EXIT_FAILURE);
+ }
+
+ if (test_iovec[i].iov_base != MAP_FAILED)
+ iovec[i].iov_base += (uintptr_t)test_iovec[i].iov_base;
+ }
+
+ /* Unmap "invalid" elements. */
+ for (i = 0; i < iovnum; i++) {
+ if (test_iovec[i].iov_base == MAP_FAILED) {
+ if (munmap(iovec[i].iov_base, iovec[i].iov_len)) {
+ perror("munmap");
+ exit(EXIT_FAILURE);
+ }
+ }
+ }
+
+ for (i = 0; i < iovnum; i++) {
+ int j;
+
+ if (test_iovec[i].iov_base == MAP_FAILED)
+ continue;
+
+ for (j = 0; j < iovec[i].iov_len; j++)
+ ((uint8_t *)iovec[i].iov_base)[j] = rand() & 0xff;
+ }
+
+ return iovec;
+}
+
+/* Frees 'iovec *', previously allocated by 'alloc_test_iovec()'.
+ * On error calls 'exit()'.
+ */
+void free_test_iovec(const struct iovec *test_iovec,
+ struct iovec *iovec, int iovnum)
+{
+ int i;
+
+ for (i = 0; i < iovnum; i++) {
+ if (test_iovec[i].iov_base != MAP_FAILED) {
+ if (test_iovec[i].iov_base)
+ iovec[i].iov_base -= (uintptr_t)test_iovec[i].iov_base;
+
+ if (munmap(iovec[i].iov_base, iovec[i].iov_len)) {
+ perror("munmap");
+ exit(EXIT_FAILURE);
+ }
+ }
+ }
+
+ free(iovec);
+}
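
send_buf() and recv_buf() generalize the old one-byte helpers: they loop until the whole buffer has been transferred (retrying on EINTR) and compare the outcome against expected_ret, where a negative value means the call itself is expected to fail with that errno. A minimal sketch of the calling convention, mirroring how the MSG_PEEK test below uses it (illustrative only; the wrapper function is not part of the patch and a connected vsock fd with nothing queued is assumed):

#include <errno.h>
#include <sys/socket.h>

#include "util.h"

static void buf_helpers_example(int fd)
{
	unsigned char buf[64] = { 0 };

	/* Socket is empty: a non-blocking peek must fail with EAGAIN. */
	recv_buf(fd, buf, sizeof(buf), MSG_PEEK | MSG_DONTWAIT, -EAGAIN);

	/* Send the whole buffer and require that every byte was written. */
	send_buf(fd, buf, sizeof(buf), 0, sizeof(buf));
}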
diff --git a/tools/testing/vsock/util.h b/tools/testing/vsock/util.h
index fb99208a95ea..a77175d25864 100644
--- a/tools/testing/vsock/util.h
+++ b/tools/testing/vsock/util.h
@@ -42,6 +42,9 @@ int vsock_stream_accept(unsigned int cid, unsigned int port,
int vsock_seqpacket_accept(unsigned int cid, unsigned int port,
struct sockaddr_vm *clientaddrp);
void vsock_wait_remote_close(int fd);
+void send_buf(int fd, const void *buf, size_t len, int flags,
+ ssize_t expected_ret);
+void recv_buf(int fd, void *buf, size_t len, int flags, ssize_t expected_ret);
void send_byte(int fd, int expected_ret, int flags);
void recv_byte(int fd, int expected_ret, int flags);
void run_tests(const struct test_case *test_cases,
@@ -50,4 +53,9 @@ void list_tests(const struct test_case *test_cases);
void skip_test(struct test_case *test_cases, size_t test_cases_len,
const char *test_id_str);
unsigned long hash_djb2(const void *data, size_t len);
+size_t iovec_bytes(const struct iovec *iov, size_t iovnum);
+unsigned long iovec_hash_djb2(const struct iovec *iov, size_t iovnum);
+struct iovec *alloc_test_iovec(const struct iovec *test_iovec, int iovnum);
+void free_test_iovec(const struct iovec *test_iovec,
+ struct iovec *iovec, int iovnum);
#endif /* UTIL_H */
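
alloc_test_iovec() is driven by a template whose iov_base field is a tag rather than a pointer: NULL asks for a valid mapping, MAP_FAILED for a buffer that is mapped and then unmapped (so it stays invalid), and a small integer for a valid but unaligned buffer. A sketch of the intended call pattern (illustrative only; the template values and wrapper function are made up for this example):

#include <stddef.h>
#include <sys/mman.h>
#include <sys/uio.h>

#include "util.h"

static void iovec_helpers_example(void)
{
	struct iovec test_iovec[] = {
		{ .iov_base = NULL,       .iov_len = 4096 }, /* valid buffer */
		{ .iov_base = (void *)1,  .iov_len = 256 },  /* valid, misaligned by one byte */
		{ .iov_base = MAP_FAILED, .iov_len = 4096 }, /* intentionally unmapped */
	};
	int iovnum = sizeof(test_iovec) / sizeof(test_iovec[0]);
	struct iovec *iov = alloc_test_iovec(test_iovec, iovnum);

	/* ... hand 'iov' to the sendmsg()/writev() call under test, e.g.
	 * comparing iovec_hash_djb2(iov, iovnum) on both ends ...
	 */

	free_test_iovec(test_iovec, iov, iovnum);
}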
diff --git a/tools/testing/vsock/vsock_perf.c b/tools/testing/vsock/vsock_perf.c
index a72520338f84..4e8578f815e0 100644
--- a/tools/testing/vsock/vsock_perf.c
+++ b/tools/testing/vsock/vsock_perf.c
@@ -18,6 +18,9 @@
#include <poll.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>
+#include <sys/mman.h>
+
+#include "msg_zerocopy_common.h"
#define DEFAULT_BUF_SIZE_BYTES (128 * 1024)
#define DEFAULT_TO_SEND_BYTES (64 * 1024)
@@ -31,6 +34,7 @@
static unsigned int port = DEFAULT_PORT;
static unsigned long buf_size_bytes = DEFAULT_BUF_SIZE_BYTES;
static unsigned long vsock_buf_bytes = DEFAULT_VSOCK_BUF_BYTES;
+static bool zerocopy;
static void error(const char *s)
{
@@ -252,10 +256,15 @@ static void run_sender(int peer_cid, unsigned long to_send_bytes)
time_t tx_begin_ns;
time_t tx_total_ns;
size_t total_send;
+ time_t time_in_send;
void *data;
int fd;
- printf("Run as sender\n");
+ if (zerocopy)
+ printf("Run as sender MSG_ZEROCOPY\n");
+ else
+ printf("Run as sender\n");
+
printf("Connect to %i:%u\n", peer_cid, port);
printf("Send %lu bytes\n", to_send_bytes);
printf("TX buffer %lu bytes\n", buf_size_bytes);
@@ -265,38 +274,82 @@ static void run_sender(int peer_cid, unsigned long to_send_bytes)
if (fd < 0)
exit(EXIT_FAILURE);
- data = malloc(buf_size_bytes);
+ if (zerocopy) {
+ enable_so_zerocopy(fd);
- if (!data) {
- fprintf(stderr, "'malloc()' failed\n");
- exit(EXIT_FAILURE);
+ data = mmap(NULL, buf_size_bytes, PROT_READ | PROT_WRITE,
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ if (data == MAP_FAILED) {
+ perror("mmap");
+ exit(EXIT_FAILURE);
+ }
+ } else {
+ data = malloc(buf_size_bytes);
+
+ if (!data) {
+ fprintf(stderr, "'malloc()' failed\n");
+ exit(EXIT_FAILURE);
+ }
}
memset(data, 0, buf_size_bytes);
total_send = 0;
+ time_in_send = 0;
tx_begin_ns = current_nsec();
while (total_send < to_send_bytes) {
ssize_t sent;
+ size_t rest_bytes;
+ time_t before;
- sent = write(fd, data, buf_size_bytes);
+ rest_bytes = to_send_bytes - total_send;
+
+ before = current_nsec();
+ sent = send(fd, data, (rest_bytes > buf_size_bytes) ?
+ buf_size_bytes : rest_bytes,
+ zerocopy ? MSG_ZEROCOPY : 0);
+ time_in_send += (current_nsec() - before);
if (sent <= 0)
error("write");
total_send += sent;
+
+ if (zerocopy) {
+ struct pollfd fds = { 0 };
+
+ fds.fd = fd;
+
+ if (poll(&fds, 1, -1) < 0) {
+ perror("poll");
+ exit(EXIT_FAILURE);
+ }
+
+ if (!(fds.revents & POLLERR)) {
+ fprintf(stderr, "POLLERR expected\n");
+ exit(EXIT_FAILURE);
+ }
+
+ vsock_recv_completion(fd, NULL);
+ }
}
tx_total_ns = current_nsec() - tx_begin_ns;
printf("total bytes sent: %zu\n", total_send);
printf("tx performance: %f Gbits/s\n",
- get_gbps(total_send * 8, tx_total_ns));
- printf("total time in 'write()': %f sec\n",
+ get_gbps(total_send * 8, time_in_send));
+ printf("total time in tx loop: %f sec\n",
(float)tx_total_ns / NSEC_PER_SEC);
+ printf("time in 'send()': %f sec\n",
+ (float)time_in_send / NSEC_PER_SEC);
close(fd);
- free(data);
+
+ if (zerocopy)
+ munmap(data, buf_size_bytes);
+ else
+ free(data);
}
static const char optstring[] = "";
@@ -336,6 +389,11 @@ static const struct option longopts[] = {
.has_arg = required_argument,
.val = 'R',
},
+ {
+ .name = "zerocopy",
+ .has_arg = no_argument,
+ .val = 'Z',
+ },
{},
};
@@ -351,6 +409,7 @@ static void usage(void)
" --help This message\n"
" --sender <cid> Sender mode (receiver default)\n"
" <cid> of the receiver to connect to\n"
+ " --zerocopy Enable zerocopy (for sender mode only)\n"
" --port <port> Port (default %d)\n"
" --bytes <bytes>KMG Bytes to send (default %d)\n"
" --buf-size <bytes>KMG Data buffer size (default %d). In sender mode\n"
@@ -413,6 +472,9 @@ int main(int argc, char **argv)
case 'H': /* Help. */
usage();
break;
+ case 'Z': /* Zerocopy. */
+ zerocopy = true;
+ break;
default:
usage();
}
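
The zerocopy sender path added above follows the usual MSG_ZEROCOPY sequence (see Documentation/networking/msg_zerocopy.rst): opt in with SO_ZEROCOPY, transmit with the MSG_ZEROCOPY flag, then wait for POLLERR and drain a completion from the socket's error queue before reusing the buffer; the patch does this through the enable_so_zerocopy() and vsock_recv_completion() helpers from msg_zerocopy_common.h. A minimal self-contained sketch of that loop, with the connect step and buffer lifetime assumed rather than taken from this patch:

/* Hedged sketch of the MSG_ZEROCOPY send/completion pattern used above.
 * Assumes 'fd' is an already connected SOCK_STREAM vsock socket and that
 * 'data'/'len' stay valid until the completion arrives.
 */
#include <poll.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <linux/errqueue.h>

#ifndef SO_ZEROCOPY
#define SO_ZEROCOPY 60
#endif

static void zc_send_once(int fd, const void *data, size_t len)
{
	char cbuf[CMSG_SPACE(sizeof(struct sock_extended_err))];
	struct msghdr msg = { .msg_control = cbuf, .msg_controllen = sizeof(cbuf) };
	struct pollfd pfd = { .fd = fd };
	int one = 1;

	/* Opt in once per socket; without SO_ZEROCOPY the MSG_ZEROCOPY
	 * flag has no zerocopy effect.
	 */
	if (setsockopt(fd, SOL_SOCKET, SO_ZEROCOPY, &one, sizeof(one)))
		exit(EXIT_FAILURE);

	if (send(fd, data, len, MSG_ZEROCOPY) != (ssize_t)len)
		exit(EXIT_FAILURE);

	/* The completion shows up as POLLERR plus a message on the error queue. */
	if (poll(&pfd, 1, -1) < 0 || !(pfd.revents & POLLERR))
		exit(EXIT_FAILURE);

	if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0)
		exit(EXIT_FAILURE);

	/* Only after the completion is it safe to reuse or unmap 'data'. */
}

With the new flag, the sender might be invoked as, e.g., ./vsock_perf --sender <cid> --bytes 2G --zerocopy, while the receiver invocation is unchanged; the exact command line is illustrative, since the full usage text sits outside this hunk.
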
diff --git a/tools/testing/vsock/vsock_test.c b/tools/testing/vsock/vsock_test.c
index 90718c2fd4ea..c1f7bc9abd22 100644
--- a/tools/testing/vsock/vsock_test.c
+++ b/tools/testing/vsock/vsock_test.c
@@ -19,7 +19,9 @@
#include <time.h>
#include <sys/mman.h>
#include <poll.h>
+#include <signal.h>
+#include "vsock_test_zerocopy.h"
#include "timeout.h"
#include "control.h"
#include "util.h"
@@ -261,7 +263,6 @@ static void test_msg_peek_client(const struct test_opts *opts,
bool seqpacket)
{
unsigned char buf[MSG_PEEK_BUF_LEN];
- ssize_t send_size;
int fd;
int i;
@@ -280,17 +281,7 @@ static void test_msg_peek_client(const struct test_opts *opts,
control_expectln("SRVREADY");
- send_size = send(fd, buf, sizeof(buf), 0);
-
- if (send_size < 0) {
- perror("send");
- exit(EXIT_FAILURE);
- }
-
- if (send_size != sizeof(buf)) {
- fprintf(stderr, "Invalid send size %zi\n", send_size);
- exit(EXIT_FAILURE);
- }
+ send_buf(fd, buf, sizeof(buf), 0, sizeof(buf));
close(fd);
}
@@ -301,7 +292,6 @@ static void test_msg_peek_server(const struct test_opts *opts,
unsigned char buf_half[MSG_PEEK_BUF_LEN / 2];
unsigned char buf_normal[MSG_PEEK_BUF_LEN];
unsigned char buf_peek[MSG_PEEK_BUF_LEN];
- ssize_t res;
int fd;
if (seqpacket)
@@ -315,34 +305,16 @@ static void test_msg_peek_server(const struct test_opts *opts,
}
/* Peek from empty socket. */
- res = recv(fd, buf_peek, sizeof(buf_peek), MSG_PEEK | MSG_DONTWAIT);
- if (res != -1) {
- fprintf(stderr, "expected recv(2) failure, got %zi\n", res);
- exit(EXIT_FAILURE);
- }
-
- if (errno != EAGAIN) {
- perror("EAGAIN expected");
- exit(EXIT_FAILURE);
- }
+ recv_buf(fd, buf_peek, sizeof(buf_peek), MSG_PEEK | MSG_DONTWAIT,
+ -EAGAIN);
control_writeln("SRVREADY");
/* Peek part of data. */
- res = recv(fd, buf_half, sizeof(buf_half), MSG_PEEK);
- if (res != sizeof(buf_half)) {
- fprintf(stderr, "recv(2) + MSG_PEEK, expected %zu, got %zi\n",
- sizeof(buf_half), res);
- exit(EXIT_FAILURE);
- }
+ recv_buf(fd, buf_half, sizeof(buf_half), MSG_PEEK, sizeof(buf_half));
/* Peek whole data. */
- res = recv(fd, buf_peek, sizeof(buf_peek), MSG_PEEK);
- if (res != sizeof(buf_peek)) {
- fprintf(stderr, "recv(2) + MSG_PEEK, expected %zu, got %zi\n",
- sizeof(buf_peek), res);
- exit(EXIT_FAILURE);
- }
+ recv_buf(fd, buf_peek, sizeof(buf_peek), MSG_PEEK, sizeof(buf_peek));
/* Compare partial and full peek. */
if (memcmp(buf_half, buf_peek, sizeof(buf_half))) {
@@ -355,22 +327,11 @@ static void test_msg_peek_server(const struct test_opts *opts,
* so check it with MSG_PEEK. We must get length
* of the message.
*/
- res = recv(fd, buf_half, sizeof(buf_half), MSG_PEEK |
- MSG_TRUNC);
- if (res != sizeof(buf_peek)) {
- fprintf(stderr,
- "recv(2) + MSG_PEEK | MSG_TRUNC, exp %zu, got %zi\n",
- sizeof(buf_half), res);
- exit(EXIT_FAILURE);
- }
+ recv_buf(fd, buf_half, sizeof(buf_half), MSG_PEEK | MSG_TRUNC,
+ sizeof(buf_peek));
}
- res = recv(fd, buf_normal, sizeof(buf_normal), 0);
- if (res != sizeof(buf_normal)) {
- fprintf(stderr, "recv(2), expected %zu, got %zi\n",
- sizeof(buf_normal), res);
- exit(EXIT_FAILURE);
- }
+ recv_buf(fd, buf_normal, sizeof(buf_normal), 0, sizeof(buf_normal));
/* Compare full peek and normal read. */
if (memcmp(buf_peek, buf_normal, sizeof(buf_peek))) {
@@ -415,7 +376,6 @@ static void test_seqpacket_msg_bounds_client(const struct test_opts *opts)
msg_count = SOCK_BUF_SIZE / MAX_MSG_SIZE;
for (int i = 0; i < msg_count; i++) {
- ssize_t send_size;
size_t buf_size;
int flags;
void *buf;
@@ -443,17 +403,7 @@ static void test_seqpacket_msg_bounds_client(const struct test_opts *opts)
flags = 0;
}
- send_size = send(fd, buf, buf_size, flags);
-
- if (send_size < 0) {
- perror("send");
- exit(EXIT_FAILURE);
- }
-
- if (send_size != buf_size) {
- fprintf(stderr, "Invalid send size\n");
- exit(EXIT_FAILURE);
- }
+ send_buf(fd, buf, buf_size, flags, buf_size);
/*
* Hash sum is computed at both client and server in
@@ -554,10 +504,7 @@ static void test_seqpacket_msg_trunc_client(const struct test_opts *opts)
exit(EXIT_FAILURE);
}
- if (send(fd, buf, sizeof(buf), 0) != sizeof(buf)) {
- perror("send failed");
- exit(EXIT_FAILURE);
- }
+ send_buf(fd, buf, sizeof(buf), 0, sizeof(buf));
control_writeln("SENDDONE");
close(fd);
@@ -679,7 +626,6 @@ static void test_seqpacket_timeout_server(const struct test_opts *opts)
static void test_seqpacket_bigmsg_client(const struct test_opts *opts)
{
unsigned long sock_buf_size;
- ssize_t send_size;
socklen_t len;
void *data;
int fd;
@@ -706,18 +652,7 @@ static void test_seqpacket_bigmsg_client(const struct test_opts *opts)
exit(EXIT_FAILURE);
}
- send_size = send(fd, data, sock_buf_size, 0);
- if (send_size != -1) {
- fprintf(stderr, "expected 'send(2)' failure, got %zi\n",
- send_size);
- exit(EXIT_FAILURE);
- }
-
- if (errno != EMSGSIZE) {
- fprintf(stderr, "expected EMSGSIZE in 'errno', got %i\n",
- errno);
- exit(EXIT_FAILURE);
- }
+ send_buf(fd, data, sock_buf_size, 0, -EMSGSIZE);
control_writeln("CLISENT");
@@ -771,15 +706,9 @@ static void test_seqpacket_invalid_rec_buffer_client(const struct test_opts *opt
memset(buf1, BUF_PATTERN_1, buf_size);
memset(buf2, BUF_PATTERN_2, buf_size);
- if (send(fd, buf1, buf_size, 0) != buf_size) {
- perror("send failed");
- exit(EXIT_FAILURE);
- }
+ send_buf(fd, buf1, buf_size, 0, buf_size);
- if (send(fd, buf2, buf_size, 0) != buf_size) {
- perror("send failed");
- exit(EXIT_FAILURE);
- }
+ send_buf(fd, buf2, buf_size, 0, buf_size);
close(fd);
}
@@ -900,7 +829,6 @@ static void test_stream_poll_rcvlowat_client(const struct test_opts *opts)
unsigned long lowat_val = RCVLOWAT_BUF_SIZE;
char buf[RCVLOWAT_BUF_SIZE];
struct pollfd fds;
- ssize_t read_res;
short poll_flags;
int fd;
@@ -955,12 +883,7 @@ static void test_stream_poll_rcvlowat_client(const struct test_opts *opts)
/* Use MSG_DONTWAIT, if call is going to wait, EAGAIN
* will be returned.
*/
- read_res = recv(fd, buf, sizeof(buf), MSG_DONTWAIT);
- if (read_res != RCVLOWAT_BUF_SIZE) {
- fprintf(stderr, "Unexpected recv result %zi\n",
- read_res);
- exit(EXIT_FAILURE);
- }
+ recv_buf(fd, buf, sizeof(buf), MSG_DONTWAIT, RCVLOWAT_BUF_SIZE);
control_writeln("POLLDONE");
@@ -972,7 +895,7 @@ static void test_stream_poll_rcvlowat_client(const struct test_opts *opts)
static void test_inv_buf_client(const struct test_opts *opts, bool stream)
{
unsigned char data[INV_BUF_TEST_DATA_LEN] = {0};
- ssize_t ret;
+ ssize_t expected_ret;
int fd;
if (stream)
@@ -988,39 +911,18 @@ static void test_inv_buf_client(const struct test_opts *opts, bool stream)
control_expectln("SENDDONE");
/* Use invalid buffer here. */
- ret = recv(fd, NULL, sizeof(data), 0);
- if (ret != -1) {
- fprintf(stderr, "expected recv(2) failure, got %zi\n", ret);
- exit(EXIT_FAILURE);
- }
-
- if (errno != EFAULT) {
- fprintf(stderr, "unexpected recv(2) errno %d\n", errno);
- exit(EXIT_FAILURE);
- }
-
- ret = recv(fd, data, sizeof(data), MSG_DONTWAIT);
+ recv_buf(fd, NULL, sizeof(data), 0, -EFAULT);
if (stream) {
/* For SOCK_STREAM we must continue reading. */
- if (ret != sizeof(data)) {
- fprintf(stderr, "expected recv(2) success, got %zi\n", ret);
- exit(EXIT_FAILURE);
- }
- /* Don't check errno in case of success. */
+ expected_ret = sizeof(data);
} else {
/* For SOCK_SEQPACKET socket's queue must be empty. */
- if (ret != -1) {
- fprintf(stderr, "expected recv(2) failure, got %zi\n", ret);
- exit(EXIT_FAILURE);
- }
-
- if (errno != EAGAIN) {
- fprintf(stderr, "unexpected recv(2) errno %d\n", errno);
- exit(EXIT_FAILURE);
- }
+ expected_ret = -EAGAIN;
}
+ recv_buf(fd, data, sizeof(data), MSG_DONTWAIT, expected_ret);
+
control_writeln("DONE");
close(fd);
@@ -1029,7 +931,6 @@ static void test_inv_buf_client(const struct test_opts *opts, bool stream)
static void test_inv_buf_server(const struct test_opts *opts, bool stream)
{
unsigned char data[INV_BUF_TEST_DATA_LEN] = {0};
- ssize_t res;
int fd;
if (stream)
@@ -1042,11 +943,7 @@ static void test_inv_buf_server(const struct test_opts *opts, bool stream)
exit(EXIT_FAILURE);
}
- res = send(fd, data, sizeof(data), 0);
- if (res != sizeof(data)) {
- fprintf(stderr, "unexpected send(2) result %zi\n", res);
- exit(EXIT_FAILURE);
- }
+ send_buf(fd, data, sizeof(data), 0, sizeof(data));
control_writeln("SENDDONE");
@@ -1080,7 +977,6 @@ static void test_seqpacket_inv_buf_server(const struct test_opts *opts)
static void test_stream_virtio_skb_merge_client(const struct test_opts *opts)
{
- ssize_t res;
int fd;
fd = vsock_stream_connect(opts->peer_cid, 1234);
@@ -1090,22 +986,14 @@ static void test_stream_virtio_skb_merge_client(const struct test_opts *opts)
}
/* Send first skbuff. */
- res = send(fd, HELLO_STR, strlen(HELLO_STR), 0);
- if (res != strlen(HELLO_STR)) {
- fprintf(stderr, "unexpected send(2) result %zi\n", res);
- exit(EXIT_FAILURE);
- }
+ send_buf(fd, HELLO_STR, strlen(HELLO_STR), 0, strlen(HELLO_STR));
control_writeln("SEND0");
/* Peer reads part of first skbuff. */
control_expectln("REPLY0");
/* Send second skbuff, it will be appended to the first. */
- res = send(fd, WORLD_STR, strlen(WORLD_STR), 0);
- if (res != strlen(WORLD_STR)) {
- fprintf(stderr, "unexpected send(2) result %zi\n", res);
- exit(EXIT_FAILURE);
- }
+ send_buf(fd, WORLD_STR, strlen(WORLD_STR), 0, strlen(WORLD_STR));
control_writeln("SEND1");
/* Peer reads merged skbuff packet. */
@@ -1116,8 +1004,8 @@ static void test_stream_virtio_skb_merge_client(const struct test_opts *opts)
static void test_stream_virtio_skb_merge_server(const struct test_opts *opts)
{
+ size_t read = 0, to_read;
unsigned char buf[64];
- ssize_t res;
int fd;
fd = vsock_stream_accept(VMADDR_CID_ANY, 1234, NULL);
@@ -1129,26 +1017,21 @@ static void test_stream_virtio_skb_merge_server(const struct test_opts *opts)
control_expectln("SEND0");
/* Read skbuff partially. */
- res = recv(fd, buf, 2, 0);
- if (res != 2) {
- fprintf(stderr, "expected recv(2) returns 2 bytes, got %zi\n", res);
- exit(EXIT_FAILURE);
- }
+ to_read = 2;
+ recv_buf(fd, buf + read, to_read, 0, to_read);
+ read += to_read;
control_writeln("REPLY0");
control_expectln("SEND1");
- res = recv(fd, buf + 2, sizeof(buf) - 2, 0);
- if (res != 8) {
- fprintf(stderr, "expected recv(2) returns 8 bytes, got %zi\n", res);
- exit(EXIT_FAILURE);
- }
+ /* Read the rest of both buffers */
+ to_read = strlen(HELLO_STR WORLD_STR) - read;
+ recv_buf(fd, buf + read, to_read, 0, to_read);
+ read += to_read;
- res = recv(fd, buf, sizeof(buf) - 8 - 2, MSG_DONTWAIT);
- if (res != -1) {
- fprintf(stderr, "expected recv(2) failure, got %zi\n", res);
- exit(EXIT_FAILURE);
- }
+ /* No more bytes should be there */
+ to_read = sizeof(buf) - read;
+ recv_buf(fd, buf + read, to_read, MSG_DONTWAIT, -EAGAIN);
if (memcmp(buf, HELLO_STR WORLD_STR, strlen(HELLO_STR WORLD_STR))) {
fprintf(stderr, "pattern mismatch\n");
@@ -1170,6 +1053,133 @@ static void test_seqpacket_msg_peek_server(const struct test_opts *opts)
return test_msg_peek_server(opts, true);
}
+static sig_atomic_t have_sigpipe;
+
+static void sigpipe(int signo)
+{
+ have_sigpipe = 1;
+}
+
+static void test_stream_check_sigpipe(int fd)
+{
+ ssize_t res;
+
+ have_sigpipe = 0;
+
+ res = send(fd, "A", 1, 0);
+ if (res != -1) {
+ fprintf(stderr, "expected send(2) failure, got %zi\n", res);
+ exit(EXIT_FAILURE);
+ }
+
+ if (!have_sigpipe) {
+ fprintf(stderr, "SIGPIPE expected\n");
+ exit(EXIT_FAILURE);
+ }
+
+ have_sigpipe = 0;
+
+ res = send(fd, "A", 1, MSG_NOSIGNAL);
+ if (res != -1) {
+ fprintf(stderr, "expected send(2) failure, got %zi\n", res);
+ exit(EXIT_FAILURE);
+ }
+
+ if (have_sigpipe) {
+ fprintf(stderr, "SIGPIPE not expected\n");
+ exit(EXIT_FAILURE);
+ }
+}
+
+static void test_stream_shutwr_client(const struct test_opts *opts)
+{
+ int fd;
+
+ struct sigaction act = {
+ .sa_handler = sigpipe,
+ };
+
+ sigaction(SIGPIPE, &act, NULL);
+
+ fd = vsock_stream_connect(opts->peer_cid, 1234);
+ if (fd < 0) {
+ perror("connect");
+ exit(EXIT_FAILURE);
+ }
+
+ if (shutdown(fd, SHUT_WR)) {
+ perror("shutdown");
+ exit(EXIT_FAILURE);
+ }
+
+ test_stream_check_sigpipe(fd);
+
+ control_writeln("CLIENTDONE");
+
+ close(fd);
+}
+
+static void test_stream_shutwr_server(const struct test_opts *opts)
+{
+ int fd;
+
+ fd = vsock_stream_accept(VMADDR_CID_ANY, 1234, NULL);
+ if (fd < 0) {
+ perror("accept");
+ exit(EXIT_FAILURE);
+ }
+
+ control_expectln("CLIENTDONE");
+
+ close(fd);
+}
+
+static void test_stream_shutrd_client(const struct test_opts *opts)
+{
+ int fd;
+
+ struct sigaction act = {
+ .sa_handler = sigpipe,
+ };
+
+ sigaction(SIGPIPE, &act, NULL);
+
+ fd = vsock_stream_connect(opts->peer_cid, 1234);
+ if (fd < 0) {
+ perror("connect");
+ exit(EXIT_FAILURE);
+ }
+
+ control_expectln("SHUTRDDONE");
+
+ test_stream_check_sigpipe(fd);
+
+ control_writeln("CLIENTDONE");
+
+ close(fd);
+}
+
+static void test_stream_shutrd_server(const struct test_opts *opts)
+{
+ int fd;
+
+ fd = vsock_stream_accept(VMADDR_CID_ANY, 1234, NULL);
+ if (fd < 0) {
+ perror("accept");
+ exit(EXIT_FAILURE);
+ }
+
+ if (shutdown(fd, SHUT_RD)) {
+ perror("shutdown");
+ exit(EXIT_FAILURE);
+ }
+
+ control_writeln("SHUTRDDONE");
+ control_expectln("CLIENTDONE");
+
+ close(fd);
+}
+
static struct test_case test_cases[] = {
{
.name = "SOCK_STREAM connection reset",
@@ -1250,6 +1260,31 @@ static struct test_case test_cases[] = {
.run_client = test_seqpacket_msg_peek_client,
.run_server = test_seqpacket_msg_peek_server,
},
+ {
+ .name = "SOCK_STREAM SHUT_WR",
+ .run_client = test_stream_shutwr_client,
+ .run_server = test_stream_shutwr_server,
+ },
+ {
+ .name = "SOCK_STREAM SHUT_RD",
+ .run_client = test_stream_shutrd_client,
+ .run_server = test_stream_shutrd_server,
+ },
+ {
+ .name = "SOCK_STREAM MSG_ZEROCOPY",
+ .run_client = test_stream_msgzcopy_client,
+ .run_server = test_stream_msgzcopy_server,
+ },
+ {
+ .name = "SOCK_SEQPACKET MSG_ZEROCOPY",
+ .run_client = test_seqpacket_msgzcopy_client,
+ .run_server = test_seqpacket_msgzcopy_server,
+ },
+ {
+ .name = "SOCK_STREAM MSG_ZEROCOPY empty MSG_ERRQUEUE",
+ .run_client = test_stream_msgzcopy_empty_errq_client,
+ .run_server = test_stream_msgzcopy_empty_errq_server,
+ },
{},
};
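
Most of the churn in vsock_test.c above replaces open-coded send(2)/recv(2) result checks with the send_buf()/recv_buf() helpers added to the tests' shared util code earlier in this series (their definitions are not part of this hunk). From the call sites, the final argument is the expected result: a byte count on success, or a negative errno such as -EAGAIN, -EFAULT or -EMSGSIZE when the call is expected to fail. A rough, hypothetical sketch of a recv_buf()-style helper under that assumption (the real helper may differ, e.g. by looping over short reads; send_buf() would be analogous):

/* Hypothetical sketch only: 'expected_ret' is either the expected number
 * of received bytes, or -errno when recv() itself is expected to fail.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>

static void recv_buf_sketch(int fd, void *buf, size_t len, int flags,
			    ssize_t expected_ret)
{
	ssize_t ret = recv(fd, buf, len, flags);

	if (expected_ret < 0) {
		/* Expected failure: check both the -1 return and errno. */
		if (ret != -1 || errno != -expected_ret) {
			fprintf(stderr, "recv: expected errno %zi, got ret %zi errno %d\n",
				-expected_ret, ret, errno);
			exit(EXIT_FAILURE);
		}
		return;
	}

	if (ret != expected_ret) {
		fprintf(stderr, "recv: expected %zi, got %zi\n", expected_ret, ret);
		exit(EXIT_FAILURE);
	}
}
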
diff --git a/tools/testing/vsock/vsock_test_zerocopy.c b/tools/testing/vsock/vsock_test_zerocopy.c
new file mode 100644
index 000000000000..a16ff76484e6
--- /dev/null
+++ b/tools/testing/vsock/vsock_test_zerocopy.c
@@ -0,0 +1,358 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* MSG_ZEROCOPY feature tests for vsock
+ *
+ * Copyright (C) 2023 SberDevices.
+ *
+ * Author: Arseniy Krasnov <avkrasnov@salutedevices.com>
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <unistd.h>
+#include <poll.h>
+#include <linux/errqueue.h>
+#include <linux/kernel.h>
+#include <errno.h>
+
+#include "control.h"
+#include "vsock_test_zerocopy.h"
+#include "msg_zerocopy_common.h"
+
+#ifndef PAGE_SIZE
+#define PAGE_SIZE 4096
+#endif
+
+#define VSOCK_TEST_DATA_MAX_IOV 3
+
+struct vsock_test_data {
+ /* This test case is for SOCK_STREAM only. */
+ bool stream_only;
+ /* Data must be zerocopied. This field is checked against the
+ * 'ee_code' field of 'struct sock_extended_err', which carries
+ * a bit indicating that the zerocopy transmission fell back to
+ * copy mode.
+ */
+ bool zerocopied;
+ /* Enable the SO_ZEROCOPY option on the socket. Without
+ * SO_ZEROCOPY enabled, every MSG_ZEROCOPY transmission behaves
+ * as if the MSG_ZEROCOPY flag were not set.
+ */
+ bool so_zerocopy;
+ /* 'errno' after 'sendmsg()' call. */
+ int sendmsg_errno;
+ /* Number of valid elements in 'vecs'. */
+ int vecs_cnt;
+ struct iovec vecs[VSOCK_TEST_DATA_MAX_IOV];
+};
+
+static struct vsock_test_data test_data_array[] = {
+ /* Last element has non-page aligned size. */
+ {
+ .zerocopied = true,
+ .so_zerocopy = true,
+ .sendmsg_errno = 0,
+ .vecs_cnt = 3,
+ {
+ { NULL, PAGE_SIZE },
+ { NULL, PAGE_SIZE },
+ { NULL, 200 }
+ }
+ },
+ /* All elements have page aligned base and size. */
+ {
+ .zerocopied = true,
+ .so_zerocopy = true,
+ .sendmsg_errno = 0,
+ .vecs_cnt = 3,
+ {
+ { NULL, PAGE_SIZE },
+ { NULL, PAGE_SIZE * 2 },
+ { NULL, PAGE_SIZE * 3 }
+ }
+ },
+ /* All elements have page aligned base and size, but the
+ * total data length is bigger than 64KB.
+ */
+ {
+ .zerocopied = true,
+ .so_zerocopy = true,
+ .sendmsg_errno = 0,
+ .vecs_cnt = 3,
+ {
+ { NULL, PAGE_SIZE * 16 },
+ { NULL, PAGE_SIZE * 16 },
+ { NULL, PAGE_SIZE * 16 }
+ }
+ },
+ /* Middle element has both non-page aligned base and size. */
+ {
+ .zerocopied = true,
+ .so_zerocopy = true,
+ .sendmsg_errno = 0,
+ .vecs_cnt = 3,
+ {
+ { NULL, PAGE_SIZE },
+ { (void *)1, 100 },
+ { NULL, PAGE_SIZE }
+ }
+ },
+ /* Middle element is unmapped. */
+ {
+ .zerocopied = false,
+ .so_zerocopy = true,
+ .sendmsg_errno = ENOMEM,
+ .vecs_cnt = 3,
+ {
+ { NULL, PAGE_SIZE },
+ { MAP_FAILED, PAGE_SIZE },
+ { NULL, PAGE_SIZE }
+ }
+ },
+ /* Valid data, but SO_ZEROCOPY is off. This
+ * will trigger fallback to copy.
+ */
+ {
+ .zerocopied = false,
+ .so_zerocopy = false,
+ .sendmsg_errno = 0,
+ .vecs_cnt = 1,
+ {
+ { NULL, PAGE_SIZE }
+ }
+ },
+ /* Valid data, but the message is bigger than the peer's
+ * buffer, so this will trigger a fallback to copy.
+ * This test is for SOCK_STREAM only, because for
+ * SOCK_SEQPACKET, 'sendmsg()' returns EMSGSIZE.
+ */
+ {
+ .stream_only = true,
+ .zerocopied = false,
+ .so_zerocopy = true,
+ .sendmsg_errno = 0,
+ .vecs_cnt = 1,
+ {
+ { NULL, 100 * PAGE_SIZE }
+ }
+ },
+};
+
+#define POLL_TIMEOUT_MS 100
+
+static void test_client(const struct test_opts *opts,
+ const struct vsock_test_data *test_data,
+ bool sock_seqpacket)
+{
+ struct pollfd fds = { 0 };
+ struct msghdr msg = { 0 };
+ ssize_t sendmsg_res;
+ struct iovec *iovec;
+ int fd;
+
+ if (sock_seqpacket)
+ fd = vsock_seqpacket_connect(opts->peer_cid, 1234);
+ else
+ fd = vsock_stream_connect(opts->peer_cid, 1234);
+
+ if (fd < 0) {
+ perror("connect");
+ exit(EXIT_FAILURE);
+ }
+
+ if (test_data->so_zerocopy)
+ enable_so_zerocopy(fd);
+
+ iovec = alloc_test_iovec(test_data->vecs, test_data->vecs_cnt);
+
+ msg.msg_iov = iovec;
+ msg.msg_iovlen = test_data->vecs_cnt;
+
+ errno = 0;
+
+ sendmsg_res = sendmsg(fd, &msg, MSG_ZEROCOPY);
+ if (errno != test_data->sendmsg_errno) {
+ fprintf(stderr, "expected 'errno' == %i, got %i\n",
+ test_data->sendmsg_errno, errno);
+ exit(EXIT_FAILURE);
+ }
+
+ if (!errno) {
+ if (sendmsg_res != iovec_bytes(iovec, test_data->vecs_cnt)) {
+ fprintf(stderr, "expected 'sendmsg()' == %li, got %li\n",
+ iovec_bytes(iovec, test_data->vecs_cnt),
+ sendmsg_res);
+ exit(EXIT_FAILURE);
+ }
+ }
+
+ fds.fd = fd;
+ fds.events = 0;
+
+ if (poll(&fds, 1, POLL_TIMEOUT_MS) < 0) {
+ perror("poll");
+ exit(EXIT_FAILURE);
+ }
+
+ if (fds.revents & POLLERR) {
+ vsock_recv_completion(fd, &test_data->zerocopied);
+ } else if (test_data->so_zerocopy && !test_data->sendmsg_errno) {
+ /* If there is no data in the error queue even though
+ * SO_ZEROCOPY was enabled and 'sendmsg()' succeeded,
+ * this is an error.
+ */
+ fprintf(stderr, "POLLERR expected\n");
+ exit(EXIT_FAILURE);
+ }
+
+ if (!test_data->sendmsg_errno)
+ control_writeulong(iovec_hash_djb2(iovec, test_data->vecs_cnt));
+ else
+ control_writeulong(0);
+
+ control_writeln("DONE");
+ free_test_iovec(test_data->vecs, iovec, test_data->vecs_cnt);
+ close(fd);
+}
+
+void test_stream_msgzcopy_client(const struct test_opts *opts)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(test_data_array); i++)
+ test_client(opts, &test_data_array[i], false);
+}
+
+void test_seqpacket_msgzcopy_client(const struct test_opts *opts)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(test_data_array); i++) {
+ if (test_data_array[i].stream_only)
+ continue;
+
+ test_client(opts, &test_data_array[i], true);
+ }
+}
+
+static void test_server(const struct test_opts *opts,
+ const struct vsock_test_data *test_data,
+ bool sock_seqpacket)
+{
+ unsigned long remote_hash;
+ unsigned long local_hash;
+ ssize_t total_bytes_rec;
+ unsigned char *data;
+ size_t data_len;
+ int fd;
+
+ if (sock_seqpacket)
+ fd = vsock_seqpacket_accept(VMADDR_CID_ANY, 1234, NULL);
+ else
+ fd = vsock_stream_accept(VMADDR_CID_ANY, 1234, NULL);
+
+ if (fd < 0) {
+ perror("accept");
+ exit(EXIT_FAILURE);
+ }
+
+ data_len = iovec_bytes(test_data->vecs, test_data->vecs_cnt);
+
+ data = malloc(data_len);
+ if (!data) {
+ perror("malloc");
+ exit(EXIT_FAILURE);
+ }
+
+ total_bytes_rec = 0;
+
+ while (total_bytes_rec != data_len) {
+ ssize_t bytes_rec;
+
+ bytes_rec = read(fd, data + total_bytes_rec,
+ data_len - total_bytes_rec);
+ if (bytes_rec <= 0)
+ break;
+
+ total_bytes_rec += bytes_rec;
+ }
+
+ if (test_data->sendmsg_errno == 0)
+ local_hash = hash_djb2(data, data_len);
+ else
+ local_hash = 0;
+
+ free(data);
+
+ /* Wait for the remote hash and compare it with the local one. */
+ remote_hash = control_readulong();
+ if (remote_hash != local_hash) {
+ fprintf(stderr, "hash mismatch\n");
+ exit(EXIT_FAILURE);
+ }
+
+ control_expectln("DONE");
+ close(fd);
+}
+
+void test_stream_msgzcopy_server(const struct test_opts *opts)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(test_data_array); i++)
+ test_server(opts, &test_data_array[i], false);
+}
+
+void test_seqpacket_msgzcopy_server(const struct test_opts *opts)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(test_data_array); i++) {
+ if (test_data_array[i].stream_only)
+ continue;
+
+ test_server(opts, &test_data_array[i], true);
+ }
+}
+
+void test_stream_msgzcopy_empty_errq_client(const struct test_opts *opts)
+{
+ struct msghdr msg = { 0 };
+ char cmsg_data[128];
+ ssize_t res;
+ int fd;
+
+ fd = vsock_stream_connect(opts->peer_cid, 1234);
+ if (fd < 0) {
+ perror("connect");
+ exit(EXIT_FAILURE);
+ }
+
+ msg.msg_control = cmsg_data;
+ msg.msg_controllen = sizeof(cmsg_data);
+
+ res = recvmsg(fd, &msg, MSG_ERRQUEUE);
+ if (res != -1) {
+ fprintf(stderr, "expected 'recvmsg(2)' failure, got %zi\n",
+ res);
+ exit(EXIT_FAILURE);
+ }
+
+ control_writeln("DONE");
+ close(fd);
+}
+
+void test_stream_msgzcopy_empty_errq_server(const struct test_opts *opts)
+{
+ int fd;
+
+ fd = vsock_stream_accept(VMADDR_CID_ANY, 1234, NULL);
+ if (fd < 0) {
+ perror("accept");
+ exit(EXIT_FAILURE);
+ }
+
+ control_expectln("DONE");
+ close(fd);
+}
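
The 'zerocopied' expectation in test_data_array is verified via the completion notification that the kernel queues on the socket's error queue: after POLLERR, vsock_recv_completion() (declared in msg_zerocopy_common.h, included above) reads the notification and presumably checks the SO_EE_CODE_ZEROCOPY_COPIED bit against the expected value. A hedged sketch of how such a completion can be parsed with the generic errqueue interface (not necessarily the helper's exact code):

/* Sketch: read one MSG_ZEROCOPY completion and report whether the kernel
 * fell back to copying. Based on the generic SO_EE_ORIGIN_ZEROCOPY
 * interface; the helper used by the tests may differ in details.
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <linux/errqueue.h>

static bool zc_completion_was_copied(int fd)
{
	char cbuf[CMSG_SPACE(sizeof(struct sock_extended_err))];
	struct msghdr msg = { .msg_control = cbuf, .msg_controllen = sizeof(cbuf) };
	struct sock_extended_err *serr;
	struct cmsghdr *cm;

	if (recvmsg(fd, &msg, MSG_ERRQUEUE) < 0) {
		perror("recvmsg MSG_ERRQUEUE");
		exit(EXIT_FAILURE);
	}

	cm = CMSG_FIRSTHDR(&msg);
	if (!cm) {
		fprintf(stderr, "no cmsg\n");
		exit(EXIT_FAILURE);
	}

	serr = (struct sock_extended_err *)CMSG_DATA(cm);
	if (serr->ee_origin != SO_EE_ORIGIN_ZEROCOPY || serr->ee_errno) {
		fprintf(stderr, "unexpected completion\n");
		exit(EXIT_FAILURE);
	}

	/* Bit set means the data was copied rather than pinned. */
	return serr->ee_code & SO_EE_CODE_ZEROCOPY_COPIED;
}
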
diff --git a/tools/testing/vsock/vsock_test_zerocopy.h b/tools/testing/vsock/vsock_test_zerocopy.h
new file mode 100644
index 000000000000..3ef2579e024d
--- /dev/null
+++ b/tools/testing/vsock/vsock_test_zerocopy.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#ifndef VSOCK_TEST_ZEROCOPY_H
+#define VSOCK_TEST_ZEROCOPY_H
+#include "util.h"
+
+void test_stream_msgzcopy_client(const struct test_opts *opts);
+void test_stream_msgzcopy_server(const struct test_opts *opts);
+
+void test_seqpacket_msgzcopy_client(const struct test_opts *opts);
+void test_seqpacket_msgzcopy_server(const struct test_opts *opts);
+
+void test_stream_msgzcopy_empty_errq_client(const struct test_opts *opts);
+void test_stream_msgzcopy_empty_errq_server(const struct test_opts *opts);
+
+#endif /* VSOCK_TEST_ZEROCOPY_H */
diff --git a/tools/testing/vsock/vsock_uring_test.c b/tools/testing/vsock/vsock_uring_test.c
new file mode 100644
index 000000000000..d976d35f0ba9
--- /dev/null
+++ b/tools/testing/vsock/vsock_uring_test.c
@@ -0,0 +1,342 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/* io_uring tests for vsock
+ *
+ * Copyright (C) 2023 SberDevices.
+ *
+ * Author: Arseniy Krasnov <avkrasnov@salutedevices.com>
+ */
+
+#include <getopt.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <liburing.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <linux/kernel.h>
+#include <error.h>
+
+#include "util.h"
+#include "control.h"
+#include "msg_zerocopy_common.h"
+
+#ifndef PAGE_SIZE
+#define PAGE_SIZE 4096
+#endif
+
+#define RING_ENTRIES_NUM 4
+
+#define VSOCK_TEST_DATA_MAX_IOV 3
+
+struct vsock_io_uring_test {
+ /* Number of valid elements in 'vecs'. */
+ int vecs_cnt;
+ struct iovec vecs[VSOCK_TEST_DATA_MAX_IOV];
+};
+
+static struct vsock_io_uring_test test_data_array[] = {
+ /* All elements have page aligned base and size. */
+ {
+ .vecs_cnt = 3,
+ {
+ { NULL, PAGE_SIZE },
+ { NULL, 2 * PAGE_SIZE },
+ { NULL, 3 * PAGE_SIZE },
+ }
+ },
+ /* Middle element has both non-page aligned base and size. */
+ {
+ .vecs_cnt = 3,
+ {
+ { NULL, PAGE_SIZE },
+ { (void *)1, 200 },
+ { NULL, 3 * PAGE_SIZE },
+ }
+ }
+};
+
+static void vsock_io_uring_client(const struct test_opts *opts,
+ const struct vsock_io_uring_test *test_data,
+ bool msg_zerocopy)
+{
+ struct io_uring_sqe *sqe;
+ struct io_uring_cqe *cqe;
+ struct io_uring ring;
+ struct iovec *iovec;
+ struct msghdr msg;
+ int fd;
+
+ fd = vsock_stream_connect(opts->peer_cid, 1234);
+ if (fd < 0) {
+ perror("connect");
+ exit(EXIT_FAILURE);
+ }
+
+ if (msg_zerocopy)
+ enable_so_zerocopy(fd);
+
+ iovec = alloc_test_iovec(test_data->vecs, test_data->vecs_cnt);
+
+ if (io_uring_queue_init(RING_ENTRIES_NUM, &ring, 0))
+ error(1, errno, "io_uring_queue_init");
+
+ if (io_uring_register_buffers(&ring, iovec, test_data->vecs_cnt))
+ error(1, errno, "io_uring_register_buffers");
+
+ memset(&msg, 0, sizeof(msg));
+ msg.msg_iov = iovec;
+ msg.msg_iovlen = test_data->vecs_cnt;
+ sqe = io_uring_get_sqe(&ring);
+
+ if (msg_zerocopy)
+ io_uring_prep_sendmsg_zc(sqe, fd, &msg, 0);
+ else
+ io_uring_prep_sendmsg(sqe, fd, &msg, 0);
+
+ if (io_uring_submit(&ring) != 1)
+ error(1, errno, "io_uring_submit");
+
+ if (io_uring_wait_cqe(&ring, &cqe))
+ error(1, errno, "io_uring_wait_cqe");
+
+ io_uring_cqe_seen(&ring, cqe);
+
+ control_writeulong(iovec_hash_djb2(iovec, test_data->vecs_cnt));
+
+ control_writeln("DONE");
+ io_uring_queue_exit(&ring);
+ free_test_iovec(test_data->vecs, iovec, test_data->vecs_cnt);
+ close(fd);
+}
+
+static void vsock_io_uring_server(const struct test_opts *opts,
+ const struct vsock_io_uring_test *test_data)
+{
+ unsigned long remote_hash;
+ unsigned long local_hash;
+ struct io_uring ring;
+ size_t data_len;
+ size_t recv_len;
+ void *data;
+ int fd;
+
+ fd = vsock_stream_accept(VMADDR_CID_ANY, 1234, NULL);
+ if (fd < 0) {
+ perror("accept");
+ exit(EXIT_FAILURE);
+ }
+
+ data_len = iovec_bytes(test_data->vecs, test_data->vecs_cnt);
+
+ data = malloc(data_len);
+ if (!data) {
+ perror("malloc");
+ exit(EXIT_FAILURE);
+ }
+
+ if (io_uring_queue_init(RING_ENTRIES_NUM, &ring, 0))
+ error(1, errno, "io_uring_queue_init");
+
+ recv_len = 0;
+
+ while (recv_len < data_len) {
+ struct io_uring_sqe *sqe;
+ struct io_uring_cqe *cqe;
+ struct iovec iovec;
+
+ sqe = io_uring_get_sqe(&ring);
+ iovec.iov_base = data + recv_len;
+ iovec.iov_len = data_len;
+
+ io_uring_prep_readv(sqe, fd, &iovec, 1, 0);
+
+ if (io_uring_submit(&ring) != 1)
+ error(1, errno, "io_uring_submit");
+
+ if (io_uring_wait_cqe(&ring, &cqe))
+ error(1, errno, "io_uring_wait_cqe");
+
+ recv_len += cqe->res;
+ io_uring_cqe_seen(&ring, cqe);
+ }
+
+ if (recv_len != data_len) {
+ fprintf(stderr, "expected %zu, got %zu\n", data_len,
+ recv_len);
+ exit(EXIT_FAILURE);
+ }
+
+ local_hash = hash_djb2(data, data_len);
+
+ remote_hash = control_readulong();
+ if (remote_hash != local_hash) {
+ fprintf(stderr, "hash mismatch\n");
+ exit(EXIT_FAILURE);
+ }
+
+ control_expectln("DONE");
+ io_uring_queue_exit(&ring);
+ free(data);
+}
+
+void test_stream_uring_server(const struct test_opts *opts)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(test_data_array); i++)
+ vsock_io_uring_server(opts, &test_data_array[i]);
+}
+
+void test_stream_uring_client(const struct test_opts *opts)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(test_data_array); i++)
+ vsock_io_uring_client(opts, &test_data_array[i], false);
+}
+
+void test_stream_uring_msg_zc_server(const struct test_opts *opts)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(test_data_array); i++)
+ vsock_io_uring_server(opts, &test_data_array[i]);
+}
+
+void test_stream_uring_msg_zc_client(const struct test_opts *opts)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(test_data_array); i++)
+ vsock_io_uring_client(opts, &test_data_array[i], true);
+}
+
+static struct test_case test_cases[] = {
+ {
+ .name = "SOCK_STREAM io_uring test",
+ .run_server = test_stream_uring_server,
+ .run_client = test_stream_uring_client,
+ },
+ {
+ .name = "SOCK_STREAM io_uring MSG_ZEROCOPY test",
+ .run_server = test_stream_uring_msg_zc_server,
+ .run_client = test_stream_uring_msg_zc_client,
+ },
+ {},
+};
+
+static const char optstring[] = "";
+static const struct option longopts[] = {
+ {
+ .name = "control-host",
+ .has_arg = required_argument,
+ .val = 'H',
+ },
+ {
+ .name = "control-port",
+ .has_arg = required_argument,
+ .val = 'P',
+ },
+ {
+ .name = "mode",
+ .has_arg = required_argument,
+ .val = 'm',
+ },
+ {
+ .name = "peer-cid",
+ .has_arg = required_argument,
+ .val = 'p',
+ },
+ {
+ .name = "help",
+ .has_arg = no_argument,
+ .val = '?',
+ },
+ {},
+};
+
+static void usage(void)
+{
+ fprintf(stderr, "Usage: vsock_uring_test [--help] [--control-host=<host>] --control-port=<port> --mode=client|server --peer-cid=<cid>\n"
+ "\n"
+ " Server: vsock_uring_test --control-port=1234 --mode=server --peer-cid=3\n"
+ " Client: vsock_uring_test --control-host=192.168.0.1 --control-port=1234 --mode=client --peer-cid=2\n"
+ "\n"
+ "Run transmission tests using io_uring. Usage is the same as\n"
+ "in ./vsock_test\n"
+ "\n"
+ "Options:\n"
+ " --help This help message\n"
+ " --control-host <host> Server IP address to connect to\n"
+ " --control-port <port> Server port to listen on/connect to\n"
+ " --mode client|server Server or client mode\n"
+ " --peer-cid <cid> CID of the other side\n"
+ );
+ exit(EXIT_FAILURE);
+}
+
+int main(int argc, char **argv)
+{
+ const char *control_host = NULL;
+ const char *control_port = NULL;
+ struct test_opts opts = {
+ .mode = TEST_MODE_UNSET,
+ .peer_cid = VMADDR_CID_ANY,
+ };
+
+ init_signals();
+
+ for (;;) {
+ int opt = getopt_long(argc, argv, optstring, longopts, NULL);
+
+ if (opt == -1)
+ break;
+
+ switch (opt) {
+ case 'H':
+ control_host = optarg;
+ break;
+ case 'm':
+ if (strcmp(optarg, "client") == 0) {
+ opts.mode = TEST_MODE_CLIENT;
+ } else if (strcmp(optarg, "server") == 0) {
+ opts.mode = TEST_MODE_SERVER;
+ } else {
+ fprintf(stderr, "--mode must be \"client\" or \"server\"\n");
+ return EXIT_FAILURE;
+ }
+ break;
+ case 'p':
+ opts.peer_cid = parse_cid(optarg);
+ break;
+ case 'P':
+ control_port = optarg;
+ break;
+ case '?':
+ default:
+ usage();
+ }
+ }
+
+ if (!control_port)
+ usage();
+ if (opts.mode == TEST_MODE_UNSET)
+ usage();
+ if (opts.peer_cid == VMADDR_CID_ANY)
+ usage();
+
+ if (!control_host) {
+ if (opts.mode != TEST_MODE_SERVER)
+ usage();
+ control_host = "0.0.0.0";
+ }
+
+ control_init(control_host, control_port,
+ opts.mode == TEST_MODE_SERVER);
+
+ run_tests(test_cases, &opts);
+
+ control_cleanup();
+
+ return 0;
+}