path: root/drivers/gpu/drm/amd/amdgpu/vega10_sdma_pkt_open.h
Age | Commit message | Author | Files | Lines
2020-05-06 | MAINTAINERS: put DYNAMIC INTERRUPT MODERATION in proper order | Lukas Bulwahn | 1 | -1/+1
Commit 9b038086f06b ("docs: networking: convert DIM to RST") added a new file entry to DYNAMIC INTERRUPT MODERATION at the end, not following alphabetical order. So, ./scripts/checkpatch.pl -f MAINTAINERS complains: WARNING: Misordered MAINTAINERS entry - list file patterns in alphabetic order #5966: FILE: MAINTAINERS:5966: +F: lib/dim/ +F: Documentation/networking/net_dim.rst Reorder the file entries to keep MAINTAINERS nicely ordered. Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com> Acked-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | wireguard: send/receive: use explicit unlikely branch instead of implicit coalescing | Jason A. Donenfeld | 2 | -16/+12
It's very unlikely that send will become true. It's nearly always false between 0 and 120 seconds of a session, and in most cases becomes true only between 120 and 121 seconds before becoming false again. So, unlikely(send) is clearly the right option here. What happened before was that we had this complex boolean expression with multiple likely and unlikely clauses nested. Since this is evaluated left-to-right anyway, the whole thing got converted to unlikely. So, we can clean this up to better represent what's going on. The generated code is the same. Suggested-by: Sultan Alsawaf <sultan@kerneltoast.com> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net>
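A rough sketch of the pattern described above (helper and variable names are illustrative, not the exact WireGuard source):

    bool send;

    /* Compute the rarely-true rekey condition without branch hints
     * scattered through the compound expression... */
    send = keypair_needs_rekey(keypair);    /* hypothetical helper */

    /* ...and make the rare path explicit in a single place. */
    if (unlikely(send))
        wg_packet_send_queued_handshake_initiation(peer, false);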
2020-05-06 | wireguard: selftests: initialize ipv6 members to NULL to squelch clang warning | Jason A. Donenfeld | 1 | -2/+2
Without setting these to NULL, clang complains in certain configurations that have CONFIG_IPV6=n: In file included from drivers/net/wireguard/ratelimiter.c:223: drivers/net/wireguard/selftest/ratelimiter.c:173:34: error: variable 'skb6' is uninitialized when used here [-Werror,-Wuninitialized] ret = timings_test(skb4, hdr4, skb6, hdr6, &test_count); ^~~~ drivers/net/wireguard/selftest/ratelimiter.c:123:29: note: initialize the variable 'skb6' to silence this warning struct sk_buff *skb4, *skb6; ^ = NULL drivers/net/wireguard/selftest/ratelimiter.c:173:40: error: variable 'hdr6' is uninitialized when used here [-Werror,-Wuninitialized] ret = timings_test(skb4, hdr4, skb6, hdr6, &test_count); ^~~~ drivers/net/wireguard/selftest/ratelimiter.c:125:22: note: initialize the variable 'hdr6' to silence this warning struct ipv6hdr *hdr6; ^ We silence this warning by setting the variables to NULL as the warning suggests. Reported-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net>
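A minimal sketch of what the warning asks for, using the declarations quoted in the clang output above:

    /* NULL-initialize the IPv6 members so the CONFIG_IPV6=n path never
     * passes uninitialized pointers to timings_test(). */
    struct sk_buff *skb4, *skb6 = NULL;
    struct ipv6hdr *hdr6 = NULL;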
2020-05-06 | wireguard: send/receive: cond_resched() when processing worker ringbuffers | Jason A. Donenfeld | 2 | -0/+6
Users with pathological hardware reported CPU stalls on CONFIG_PREEMPT_VOLUNTARY=y, because the ringbuffers would stay full, meaning these workers would never terminate. That turned out not to be okay on systems without forced preemption, which Sultan observed. This commit adds a cond_resched() to the bottom of each loop iteration, so that these workers don't hog the core. Note that we don't need this on the napi poll worker, since that terminates after its budget is expended. Suggested-by: Sultan Alsawaf <sultan@kerneltoast.com> Reported-by: Wang Jian <larkwang@gmail.com> Fixes: e7096c131e51 ("net: WireGuard secure network tunnel") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net>
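The fix follows the classic bounded-work pattern; a simplified sketch with hypothetical helpers, not the exact WireGuard worker code:

    while ((skb = dequeue_next_packet(queue)) != NULL) {    /* hypothetical helper */
        process_packet(skb);                                /* hypothetical helper */
        /* Yield at the bottom of every iteration so a constantly-full
         * ring buffer cannot stall the CPU under CONFIG_PREEMPT_VOLUNTARY. */
        cond_resched();
    }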
2020-05-06 | wireguard: socket: remove errant restriction on looping to self | Jason A. Donenfeld | 2 | -15/+51
It's already possible to create two different interfaces and loop packets between them. This has always been possible with tunnels in the kernel, and isn't specific to wireguard. Therefore, the networking stack already needs to deal with that. At the very least, the packet winds up exceeding the MTU and is discarded at that point. So, since this is already something that happens, there's no need to forbid the not very exceptional case of routing a packet back to the same interface; this loop is no different than others, and we shouldn't special case it, but rather rely on generic handling of loops in general. This also makes it easier to do interesting things with wireguard such as onion routing. At the same time, we add a selftest for this, ensuring that both onion routing works and infinite routing loops do not crash the kernel. We also add a test case for wireguard interfaces nesting packets and sending traffic between each other, as well as the loop in this case too. We make sure to send some throughput-heavy traffic for this use case, to stress out any possible recursion issues with the locks around workqueues. Fixes: e7096c131e51 ("net: WireGuard secure network tunnel") Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | wireguard: selftests: use normal kernel stack size on ppc64 | Jason A. Donenfeld | 1 | -0/+1
While at some point it might have made sense to be running these tests on ppc64 with 4k stacks, the kernel hasn't actually used 4k stacks on 64-bit powerpc in a long time, and more interesting things that we test don't really work when we deviate from the default (16k). So, we stop pushing our luck in this commit, and return to the default instead of the minimum. Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: ethernet: ti: am65-cpsw-nuss: fix irqs type | Grygorii Strashko | 1 | -2/+3
The K3 INTA driver, which is the source of the TX/RX IRQs for CPSW NUSS, defines the IRQ triggering type as EDGE by default, but the triggering type for the CPSW NUSS TX/RX IRQs has to be LEVEL, as EDGE triggering may cause unnecessary IRQ triggering and NAPI scheduling for empty queues. This was discovered with the RT kernel. Fix it by explicitly specifying the CPSW NUSS TX/RX IRQ type as IRQF_TRIGGER_HIGH. Fixes: 93a76530316a ("net: ethernet: ti: introduce am65x/j721e gigabit eth subsystem driver") Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
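A sketch of the resulting request, with the handler and device names treated as placeholders:

    ret = devm_request_irq(dev, rx_irq, am65_cpsw_nuss_rx_irq,
                           IRQF_TRIGGER_HIGH, dev_name(dev), common);
    if (ret)
        dev_err(dev, "failed to request RX IRQ %d: %d\n", rx_irq, ret);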
2020-05-06 | dsa: sja1105: dynamically allocate stats structure | Arnd Bergmann | 1 | -70/+74
The addition of sja1105_port_status_ether structure into the statistics causes the frame size to go over the warning limit: drivers/net/dsa/sja1105/sja1105_ethtool.c:421:6: error: stack frame size of 1104 bytes in function 'sja1105_get_ethtool_stats' [-Werror,-Wframe-larger-than=] Use dynamic allocation to avoid this. Fixes: 336aa67bd027 ("net: dsa: sja1105: show more ethtool statistics counters for P/Q/R/S") Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
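A sketch of the usual fix for this class of warning, assuming the large status structure moves from the stack to the heap (names are illustrative):

    struct sja1105_port_status *status;

    /* ethtool's get_ethtool_stats callback returns void, so on allocation
     * failure we simply skip filling the counters. */
    status = kzalloc(sizeof(*status), GFP_KERNEL);
    if (!status)
        return;

    /* ... read the hardware counters into *status and copy them to data[] ... */

    kfree(status);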
2020-05-06 | ionic: Use debugfs_create_bool() to export bool | Geert Uytterhoeven | 1 | -2/+1
Currently bool ionic_cq.done_color is exported using debugfs_create_u8(), which requires a cast, preventing further compiler checks. Fix this by switching to debugfs_create_bool(), and dropping the cast. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Shannon Nelson <snelson@pensando.io> Signed-off-by: David S. Miller <davem@davemloft.net>
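The change amounts to swapping one debugfs helper for another; a sketch assuming done_color is a bool member of struct ionic_cq and cq_dentry is the parent directory:

    /* Before: the cast defeats further type checking */
    debugfs_create_u8("done_color", 0400, cq_dentry, (u8 *)&cq->done_color);

    /* After: the helper takes the bool directly */
    debugfs_create_bool("done_color", 0400, cq_dentry, &cq->done_color);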
2020-05-06 | net: phy: tja11xx: add support for master-slave configuration | Oleksij Rempel | 1 | -0/+43
The TJA11xx PHYs have a vendor-specific Master/Slave configuration bit, which is not compatible with the IEEE 802.3-2018 spec for 100Base-T1 devices. So, provide a custom config_aneg callback to solve this problem. Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | ethtool: provide UAPI for PHY master/slave configuration. | Oleksij Rempel | 9 | -18/+197
This UAPI is needed for BroadR-Reach 100BASE-T1 devices. Due to the lack of auto-negotiation support, we need to be able to configure the MASTER-SLAVE role of the port manually or from an application in user space. The same UAPI can be used for 1000BASE-T or MultiGBASE-T devices to force the MASTER or SLAVE role. See IEEE 802.3-2018: 22.2.4.3.7 MASTER-SLAVE control register (Register 9) 22.2.4.3.8 MASTER-SLAVE status register (Register 10) 40.5.2 MASTER-SLAVE configuration resolution 45.2.1.185.1 MASTER-SLAVE config value (1.2100.14) 45.2.7.10 MultiGBASE-T AN control 1 register (Register 7.32) The MASTER-SLAVE role affects the clock configuration: when the PHY is configured as MASTER, the PMA Transmit function shall source TX_TCLK from a local clock source; when configured as SLAVE, the PMA Transmit function shall source TX_TCLK from the clock recovered from the data stream provided by the MASTER. [Flattened ASCII diagram: an iMX6Q MAC connected over RGMII to a KSZ9031 PHY acting as SLAVE, which recovers the 125 MHz TX_TCLK from the data stream of the link-partner PHY acting as MASTER.] Since some clock or link related issues are only reproducible in a specific MASTER-SLAVE-role, MAC and PHY configuration, it is beneficial to provide a generic (not 100BASE-T1 specific) interface to user space for configuration flexibility and troubleshooting. Signed-off-by: Oleksij Rempel <o.rempel@pengutronix.de> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | Crypto/chcr: fix for hmac(sha) test fails | Devulapally Shiva Krishna | 1 | -1/+1
The hmac(sha) test fails for zero-length source text data. For hmac(sha), the minimum length of the data must be the block size. So fix this by including the data_len for the last block. Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Devulapally Shiva Krishna <shiva@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | Crypto/chcr: support for 48 byte key_len in aes-xts | Devulapally Shiva Krishna | 1 | -2/+25
Added support for 48 byte key length for aes-xts. Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Devulapally Shiva Krishna <shiva@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | Crypto/chcr: fix for ccm(aes) failed test | Devulapally Shiva Krishna | 1 | -1/+1
The ccm(aes) test fails when req->assoclen > ~240 bytes. The problem is that the value assigned to auth_offset is wrong. As auth_offset is an unsigned char, its maximum value is 255. So fix it by making it an unsigned int. Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Devulapally Shiva Krishna <shiva@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | Crypto/chcr: fix ctr, cbc, xts and rfc3686-ctr failed tests | Devulapally Shiva Krishna | 2 | -14/+29
This solves the following issues observed during self tests when CONFIG_CRYPTO_MANAGER_EXTRA_TESTS is enabled. 1. Add a fallback for cbc, ctr and rfc3686 if req->nbytes is zero, and for xts add a fallback case if req->nbytes is not a multiple of 16. 2. In the case of cbc-aes, fix a wrong IV update: when chcr_cipher_fallback() is called, use the req->info pointer instead of reqctx->iv. 3. In cbc-aes decryption there was a wrong result. This occurs when chcr_cipher_fallback() is called from chcr_handle_cipher_resp(); in the fallback function the IV (req->info) used is wrongly updated, so use the initial IV for this case. 4. In the case of ctr-aes encryption a wrong result was observed. In adjust_ctr_overflow() there is a condition which checks if ((bytes / AES_BLOCK_SIZE) > c), where c is the number of blocks which can be processed without IV overflow, but for the above bytes (req->nbytes < 32, not a multiple of 16) this condition fails and the 2nd block is corrupted as it requires the rolled-over IV. So add an '=' to this condition to take care of it. 5. In rfc3686-ctr a wrong result was observed. This also occurs when chcr_cipher_fallback() is called from chcr_handle_cipher_resp(); here too, copy the initial IV into the init_iv pointer to handle the fallback case correctly. Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Devulapally Shiva Krishna <shiva@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | Crypto/chcr: fix gcm-aes and rfc4106-gcm failed tests | Devulapally Shiva Krishna | 1 | -4/+12
This patch fixes two issues observed during self tests with CONFIG_CRYPTO_MANAGER_EXTRA_TESTS enabled. 1. A gcm(aes) hang issue that happens during decryption. 2. rfc4106-gcm-aes-chcr encryption unexpectedly succeeded. For gcm-aes decryption, the authtag is not mapped due to sg_nents_for_len() (up to size: assoclen + cryptlen - authsize). So fix it by DMA-mapping the authtag. Also replace sg_nents() with sg_nents_for_len() in the case of aead_dma_unmap(). For rfc4106-gcm-aes-chcr, use crypto_ipsec_check_assoclen() for checking the validity of assoclen. Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: Devulapally Shiva Krishna <shiva@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: ipa: kill ipa_cmd_dma_task_32b_addr_add() | Alex Elder | 2 | -70/+0
A recent commit removed the only use of ipa_cmd_dma_task_32b_addr_add(). This function (and the IPA immediate command it implements) is no longer needed, so get rid of it, along with all of the definitions associated with it. Isolate its removal in a commit so it can be easily added back again if needed. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: ipa: kill ipa_endpoint_stop() | Alex Elder | 2 | -23/+6
The previous commit made ipa_endpoint_stop() be a trivial wrapper around gsi_channel_stop(). Since it no longer does anything special, just open-code it in the three places it's used. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: ipa: don't retry in ipa_endpoint_stop() | Alex Elder | 1 | -15/+2
The only reason ipa_endpoint_stop() had a retry loop was that the just-removed workaround required an IPA DMA command to occur between attempts. The gsi_channel_stop() call that implements the stop does its own retry loop, to cover a channel's transition from started to stop-in-progress to stopped state. Get rid of the unnecessary retry loop in ipa_endpoint_stop(). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: ipa: get rid of workaround in ipa_endpoint_stop() | Alex Elder | 1 | -38/+1
In ipa_endpoint_stop(), a workaround is used for IPA version 3.5.1 where a 1-byte DMA request is issued between GSI channel stop retries. It turns out that this workaround is only required for IPA versions 3.1 and 3.2, and we don't support those. So remove the call to ipa_endpoint_stop_rx_dma() in that function. That leaves that function unused, so get rid of it. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: ipa: fix a bug in ipa_endpoint_stop() | Alex Elder | 1 | -5/+2
In ipa_endpoint_stop(), for TX endpoints we set the number of retries to 0. When we break out of the loop, retries being 0 means we return EIO rather than the value of ret (which should be 0). Fix this by using a non-zero retry count for both RX and TX channels, and just break out of the loop after calling gsi_channel_stop() for TX channels. This way only RX channels will retry, and the retry count will be non-zero at the end for TX channels (so the proper value gets returned). Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net> (cherry picked from commit 713b6ebb4c376b3fb65fdceb3b59e401c93248f9) Signed-off-by: David S. Miller <davem@davemloft.net>
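A simplified sketch of the corrected loop described above (the constant name and some field names are assumptions based on the surrounding commits, not a quote of the driver):

    int retries = IPA_ENDPOINT_STOP_RX_RETRIES;    /* assumed constant name */
    int ret;

    do {
        struct gsi *gsi = &endpoint->ipa->gsi;

        ret = gsi_channel_stop(gsi, endpoint->channel_id);
        if (ret != -EAGAIN || endpoint->toward_ipa)
            break;    /* success, hard error, or a TX endpoint: never retry */
        msleep(1);
    } while (retries--);

    /* Only RX endpoints retry; retries stays non-zero for TX endpoints,
     * so ret (0 on success) is what gets returned, not -EIO. */
    return retries ? ret : -EIO;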
2020-05-06 | net: ipa: remove endpoint delay mode feature | Alex Elder | 3 | -10/+1
A "delay mode" feature was put in place to work around a problem that was observed during development of the upstream IPA driver. It used TX endpoint "delay mode" in order to prevent transmitting packets toward the modem before it was ready. A race condition that would explain the problem has long since been fixed, and we have concluded that the "delay mode" feature is no longer required. So get rid of it. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: ipa: introduce ipa_endpoint_program_suspend() | Alex Elder | 1 | -26/+41
Create a new helper function that encapsulates enabling or disabling suspend on an RX endpoint. It returns the previous state of the endpoint (true means suspend mode was enabled). Create another function that handles enabling or disabling delay mode on a TX endpoint. Delay mode does not work correctly on IPA version 4.2, so we don't currently use it (and shouldn't). We only set delay mode in one case, and although we don't expect an endpoint to already be in delay mode, it doesn't really matter if it was. So the delay function doesn't return a value. Stop issuing warnings if the previous suspend or delay mode state differs from what is expected. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: ipa: have ipa_endpoint_init_ctrl() return previous state | Alex Elder | 1 | -14/+14
Change ipa_endpoint_init_ctrl() so it returns the previous state (whether suspend or delay mode was enabled) rather than indicating whether the request caused a change in state. This makes it easier to understand what's happening where called. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: ipa: only reset channel twice for IPA v3.5.1 | Alex Elder | 1 | -2/+2
In gsi_channel_reset(), RX channels are subjected to two consecutive CHANNEL_RESET commands. This workaround should only be used for IPA version 3.5.1, and for newer hardware "can lead to unwanted behavior." Only issue the second CHANNEL_RESET command for legacy hardware. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: ipa: rename db_enable flag | Alex Elder | 4 | -21/+21
In several places, a Boolean flag is used in the GSI code to indicate whether the "doorbell engine" should be enabled or not when a channel is configured. This is basically done to abstract this property from the IPA version; the GSI code doesn't otherwise "know" what the IPA hardware version is. The doorbell engine is enabled only for IPA v3.5.1, not for IPA v4.0 and later. The next patch makes another change that affects behavior during channel reset (which also involves programming the channel). It also distinguishes IPA v3.5.1 hardware from newer hardware. Rather than creating another flag whose value matches the "db_enable" value, just rename "db_enable" to be "legacy" so it can be used to signal more than just the special doorbell handling. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: dsa: Do not leave DSA master with NULL netdev_ops | Florian Fainelli | 1 | -1/+2
When ndo_get_phys_port_name() for the CPU port was added we introduced an early check for when the DSA master network device in dsa_master_ndo_setup() already implements ndo_get_phys_port_name(). When we perform the teardown operation in dsa_master_ndo_teardown() we were not checking that cpu_dp->orig_ndo_ops was successfully allocated and initialized to a non-NULL value. With network device drivers such as virtio_net, this leads to a NULL pointer dereference as soon as the DSA switch hanging off of it gets torn down, because we are now assigning the virtio_net device's netdev_ops a NULL pointer. Fixes: da7b9e9b00d4 ("net: dsa: Add ndo_get_phys_port_name() for CPU port") Reported-by: Allen Pais <allen.pais@oracle.com> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Tested-by: Allen Pais <allen.pais@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: dsa: remove duplicate assignment in dsa_slave_add_cls_matchall_mirred | Vladimir Oltean | 1 | -5/+3
This was caused by a poor merge conflict resolution on my side. The "act = &cls->rule->action.entries[0];" assignment was already present in the code prior to the patch mentioned below. Fixes: e13c2075280e ("net: dsa: refactor matchall mirred action to separate function") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | tcp: defer xmit timer reset in tcp_xmit_retransmit_queue() | Eric Dumazet | 1 | -6/+10
As hinted in prior change ("tcp: refine tcp_pacing_delay() for very low pacing rates"), it is probably best arming the xmit timer only when all the packets have been scheduled, rather than when the head of rtx queue has been re-sent. This does matter for flows having extremely low pacing rates, since their tp->tcp_wstamp_ns could be far in the future. Note that the regular xmit path has a stronger limit in tcp_small_queue_check(), meaning it is less likely to go beyond the pacing horizon. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | tcp: refine tcp_pacing_delay() for very low pacing rates | Eric Dumazet | 3 | -20/+13
With the addition of horizon feature to sch_fq, we noticed some suboptimal behavior of extremely low pacing rate TCP flows, especially when TCP is not aware of a drop happening in lower stacks. Back in commit 3f80e08f40cd ("tcp: add tcp_reset_xmit_timer() helper"), tcp_pacing_delay() was added to estimate an extra delay to add to standard rto timers. This patch removes the skb argument from this helper and tcp_reset_xmit_timer() because it makes more sense to simply consider the time at which next packet is allowed to be sent, instead of the time of whatever packet has been sent. This avoids arming RTO timer too soon and removes spurious horizon drops. Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | arm64: dts: sdm845: add IPA iommus property | Alex Elder | 1 | -0/+2
Add an "iommus" property to the IPA node in "sdm845.dtsi". It is required because there are two regions of memory the IPA accesses through an SMMU. The next few patches define and map those regions. Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: stricter validation of untrusted gso packets | Willem de Bruijn | 1 | -2/+24
Syzkaller again found a path to a kernel crash through bad gso input: a packet with transport header extending beyond skb_headlen(skb). Tighten validation at kernel entry: - Verify that the transport header lies within the linear section. To avoid pulling linux/tcp.h, verify just sizeof tcphdr. tcp_gso_segment will call pskb_may_pull (th->doff * 4) before use. - Match the gso_type against the ip_proto found by the flow dissector. Fixes: bfd5f4a3d605 ("packet: Add GSO/csum offload support.") Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
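A sketch of the first check described above, i.e. rejecting a GSO packet whose transport header plus a minimal TCP header does not fit in the linear section; this is a simplified illustration, not the exact code added to the kernel entry points:

    if (skb_transport_offset(skb) + sizeof(struct tcphdr) > skb_headlen(skb))
        return -EINVAL;    /* transport header must lie in the linear area */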
2020-05-06 | seg6: fix SRH processing to comply with RFC8754 | Ahmed Abdelsalam | 1 | -2/+8
The Segment Routing Header (SRH) which defines the SRv6 dataplane is defined in RFC8754. RFC8754 (section 4.1) defines the SR source node behavior which encapsulates packets into an outer IPv6 header and SRH. The SR source node encodes the full list of Segments that defines the packet path in the SRH. Then, the first segment from the list of Segments is copied into the Destination address of the outer IPv6 header and the packet is sent to the first hop in its path towards the destination. If the Segment list has only one segment, the SR source node can omit the SRH as the only segment is already placed in the destination address. RFC8754 (section 4.1.1) defines the Reduced SRH, when a source does not require the entire SID list to be preserved in the SRH. A reduced SRH does not contain the first segment of the related SR Policy (the first segment is the one already in the DA of the IPv6 header), and the Last Entry field is set to n-2, where n is the number of elements in the SR Policy. RFC8754 (section 4.3.1.1) defines the SRH processing and the logic to validate the SRH (S09, S10, S11) which works for both reduced and non-reduced behaviors. This patch updates seg6_validate_srh() to validate the SRH as per RFC8754. Signed-off-by: Ahmed Abdelsalam <ahabdels@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: mscc: ocelot: ANA_AUTOAGE_AGE_PERIOD holds a value in seconds, not ms | Vladimir Oltean | 1 | -2/+9
One may notice that automatically-learnt entries 'never' expire, even though the bridge configures the address age period at 300 seconds. Actually the value written to hardware corresponds to a time interval 1000 times higher than intended, i.e. 83 hours. Fixes: a556c76adc05 ("net: mscc: Add initial Ocelot switch support") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: dsa: ocelot: the MAC table on Felix is twice as large | Vladimir Oltean | 6 | -4/+7
When running 'bridge fdb dump' on Felix, sometimes learnt and static MAC addresses would appear, sometimes they wouldn't. Turns out, the MAC table has 4096 entries on VSC7514 (Ocelot) and 8192 entries on VSC9959 (Felix), so the existing code from the Ocelot common library only dumped half of Felix's MAC table. They are both organized as a 4-way set-associative TCAM, so we just need a single variable indicating the correct number of rows. Fixes: 56051948773e ("net: dsa: ocelot: add driver for Felix switch family") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | r8169: use fsleep in polling functions | Heiner Kallweit | 1 | -64/+44
Use new flexible sleep function fsleep() to merge the udelay and msleep polling functions. We can safely do this because no polling function is used in atomic context in this driver. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | timer: add fsleep for flexible sleeping | Heiner Kallweit | 2 | -0/+14
Sleeping for a certain amount of time requires use of different functions, depending on the time period. Documentation/timers/timers-howto.rst explains when to use which function, and checkpatch also checks for some potentially problematic cases. So let's create a helper that automatically chooses the appropriate sleep function: fsleep(), for flexible sleeping. If the delay is a constant, then the compiler should be able to ensure that the new helper doesn't create overhead. If the delay is not constant, then the new helper can save some code. Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
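A sketch of what such a helper can look like, following the ranges recommended in Documentation/timers/timers-howto.rst (treat the exact thresholds as an assumption, not a quote of the merged helper):

    static inline void fsleep(unsigned long usecs)
    {
        if (usecs <= 10)
            udelay(usecs);                      /* too short to schedule */
        else if (usecs <= 20000)
            usleep_range(usecs, 2 * usecs);     /* hrtimer-based sleep */
        else
            msleep(DIV_ROUND_UP(usecs, 1000));  /* jiffies resolution is fine */
    }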
2020-05-06 | ipv6: Implement draft-ietf-6man-rfc4941bis | Fernando Gont | 3 | -54/+40
Implement the upcoming rev of RFC4941 (IPv6 temporary addresses): https://tools.ietf.org/html/draft-ietf-6man-rfc4941bis-09 * Reduces the default Valid Lifetime to 2 days. The number of extra addresses employed when the Valid Lifetime was 7 days exacerbated the stress caused on network elements/devices. Additionally, the motivation for temporary addresses is indeed privacy and reduced exposure. With a default Valid Lifetime of 7 days, an address that becomes revealed by active communication is reachable and exposed for one whole week. The only use case for a Valid Lifetime of 7 days could be some application that is expecting to have long-lived connections. But if you want long-lived connections, you shouldn't be using a temporary address in the first place. Additionally, in the era of mobile devices, general applications should nevertheless be prepared and robust to address changes (e.g. nodes swap wifi <-> 4G, etc.) * Employs different IIDs for different prefixes, to avoid network activity correlation among addresses configured for different prefixes. * Uses a simpler algorithm for IID generation: no need to store "history" anywhere. Signed-off-by: Fernando Gont <fgont@si6networks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: dsa: sja1105: the PTP_CLK extts input reacts on both edges | Vladimir Oltean | 1 | -8/+18
It looks like the sja1105 external timestamping input is not as generic as we thought. When fed a signal with 50% duty cycle, it will timestamp both the rising and the falling edge. When fed a short pulse signal, only the timestamp of the falling edge will be seen in the PTPSYNCTS register, because that of the rising edge had been overwritten. So the moral is: don't feed it short pulse inputs. Luckily this is not a complete deal breaker, as we can still work with 1 Hz square waves. But the problem is that the extts polling period was not dimensioned enough for this input signal. If we leave the period at half a second, we risk losing timestamps due to jitter in the measuring process. So we need to increase it to 4 times per second. Also, the very least we can do to inform the user is to deny any other flags combination than with PTP_RISING_EDGE and PTP_FALLING_EDGE both set. Fixes: 747e5eb31d59 ("net: dsa: sja1105: configure the PTP_CLK pin as EXT_TS or PER_OUT") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | selftests: net: tcp_mmap: fix SO_RCVLOWAT setting | Eric Dumazet | 1 | -1/+3
Since chunk_size is no longer an integer, we can not use it directly as an argument of setsockopt(). This patch should fix tcp_mmap for Big Endian kernels. Fixes: 597b01edafac ("selftests: net: avoid ptl lock contention in tcp_mmap") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Soheil Hassas Yeganeh <soheil@google.com> Cc: Arjun Roy <arjunroy@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
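A sketch of the fix pattern: hand setsockopt() a plain int so the kernel reads exactly the 4 bytes it expects, rather than pointing it at a wider variable (which breaks on big-endian):

    int lowat = chunk_size;    /* assumes chunk_size still fits in an int */

    if (setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat)) == -1) {
        perror("setsockopt SO_RCVLOWAT");
        exit(1);
    }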
2020-05-06 | net: hsr: fix incorrect type usage for protocol variable | Murali Karicheri | 1 | -1/+1
Fix following sparse checker warning:- net/hsr/hsr_slave.c:38:18: warning: incorrect type in assignment (different base types) net/hsr/hsr_slave.c:38:18: expected unsigned short [unsigned] [usertype] protocol net/hsr/hsr_slave.c:38:18: got restricted __be16 [usertype] h_proto net/hsr/hsr_slave.c:39:25: warning: restricted __be16 degrades to integer net/hsr/hsr_slave.c:39:57: warning: restricted __be16 degrades to integer Signed-off-by: Murali Karicheri <m-karicheri2@ti.com> Acked-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
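The sparse output above boils down to storing a network-endian field in a host-order variable; a sketch of the corrected declaration, with the surrounding check shown only as illustrative context:

    __be16 protocol = eth_hdr(skb)->h_proto;    /* was: unsigned short */

    if (protocol == htons(ETH_P_HSR) || protocol == htons(ETH_P_PRP))
        return true;    /* frame already carries an HSR/PRP tag */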
2020-05-06 | net: phy: mscc: use phy_package_shared | Michael Walle | 2 | -70/+32
Use the new phy_package_shared common storage to ease the package initialization and to access the global registers. Signed-off-by: Michael Walle <michael@walle.cc> Tested-by: Vladimir Oltean <vladimir.oltean@nxp.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: phy: bcm54140: use phy_package_shared | Michael Walle | 1 | -46/+11
Use the new phy_package_shared common storage to ease the package initialization and to access the global registers. Signed-off-by: Michael Walle <michael@walle.cc> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: phy: add concept of shared storage for PHYs | Michael Walle | 3 | -0/+228
There are packages which contain multiple PHY devices, eg. a quad PHY transceiver. Provide functions to allocate and free shared storage. Usually, a quad PHY contains global registers, which don't belong to any PHY. Provide convenience functions to access these registers. Signed-off-by: Michael Walle <michael@walle.cc> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
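A rough sketch of how a quad-PHY driver might use the new shared storage; the phy_package_* calls follow the API introduced here, but treat the exact signatures, the address math, and the helper names as assumptions:

    struct my_shared_priv {        /* hypothetical driver-private shared state */
        u32 base_config;
    };

    static int quad_phy_probe(struct phy_device *phydev)
    {
        /* Assume the four PHYs of the package sit at consecutive MDIO
         * addresses and the lowest one anchors the shared storage. */
        int base_addr = phydev->mdio.addr & ~0x3;
        int ret;

        ret = phy_package_join(phydev, base_addr, sizeof(struct my_shared_priv));
        if (ret)
            return ret;

        /* Only the first PHY of the package performs the global init. */
        if (phy_package_init_once(phydev))
            ret = my_global_init(phydev);    /* hypothetical helper */

        return ret;
    }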
2020-05-06Revert "crypto: chelsio - Inline single pdu only"Ayush Sawal1-3/+0
This reverts commit 27c6feb0fb33a665a746346e76714826a5be5d10. For ipsec offload the Chelsio ethernet driver expects a single mtu-sized packet. But when ipsec traffic is running using iperf, most of the packets in that traffic are gso packets (large sized skbs), because GSO is enabled by default in TCP due to commit 0a6b2a1dc2a2 ("tcp: switch to GSO being always on"), so chcr_ipsec_offload_ok() receives a gso skb (with gso_size non-zero). Due to the check in chcr_ipsec_offload_ok(), this function returns false for most of the packets; ipsec offload is then skipped and the skb goes out taking the coprocessor path, which reduces the bandwidth for inline ipsec. If this check is removed then for most of the packets (large sized skbs) chcr_ipsec_offload_ok() returns true, and then as GSO is on, the segmentation of the packet happens in the kernel and finally driver_xmit is called, which receives a segmented mtu-sized packet, which is what the driver expects for ipsec offload. So this check becomes unnecessary here; therefore remove it. Signed-off-by: Ayush Sawal <ayush.sawal@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: macsec: fix rtnl locking issue | Antoine Tenart | 1 | -1/+2
netdev_update_features() must be called with the rtnl lock taken. Not doing so triggers a warning, as ASSERT_RTNL() is used in __netdev_update_features(), the first function called by netdev_update_features(). Fix this. Fixes: c850240b6c41 ("net: macsec: report real_dev features when HW offloading is enabled") Signed-off-by: Antoine Tenart <antoine.tenart@bootlin.com> Signed-off-by: David S. Miller <davem@davemloft.net>
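The shape of the fix is simply to hold the RTNL lock around the features update; a sketch, not necessarily the exact call site in the macsec offload path:

    /* __netdev_update_features() contains ASSERT_RTNL(), so the caller
     * must hold the lock. */
    rtnl_lock();
    netdev_update_features(dev);
    rtnl_unlock();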
2020-05-06 | net: 7990: Fix to use correct return type for ndo_start_xmit() | Yunjian Wang | 2 | -2/+2
The method ndo_start_xmit() returns a value of type netdev_tx_t. Fix the ndo function to use the correct type. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
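The same one-line signature change applies to this and the three driver fixes below; a sketch with an illustrative (not actual) driver function and hypothetical helpers:

    /* was: static int lance_start_xmit(struct sk_buff *skb, struct net_device *dev) */
    static netdev_tx_t lance_start_xmit(struct sk_buff *skb,
                                        struct net_device *dev)
    {
        if (lance_tx_ring_full(dev))        /* hypothetical helper */
            return NETDEV_TX_BUSY;

        lance_queue_frame(dev, skb);        /* hypothetical helper */
        return NETDEV_TX_OK;
    }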
2020-05-06 | net: cpmac: Fix to use correct return type for ndo_start_xmit() | Yunjian Wang | 1 | -1/+1
The method ndo_start_xmit() returns a value of type netdev_tx_t. Fix the ndo function to use the correct type. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: moxa: Fix to use correct return type for ndo_start_xmit() | Yunjian Wang | 1 | -2/+3
The method ndo_start_xmit() returns a value of type netdev_tx_t. Fix the ndo function to use the correct type. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2020-05-06 | net: lantiq: Fix to use correct return type for ndo_start_xmit() | Yunjian Wang | 1 | -1/+2
The method ndo_start_xmit() returns a value of type netdev_tx_t. Fix the ndo function to use the correct type. Signed-off-by: Yunjian Wang <wangyunjian@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>