From 3e0d167a6b6e5722d7fadfad9b817f668ab41ec1 Mon Sep 17 00:00:00 2001
From: Craig Brind
Date: Thu, 27 Apr 2006 02:30:46 -0700
Subject: [PATCH] via-rhine: zero pad short packets on Rhine I ethernet cards

Fixes Rhine I cards disclosing fragments of previously transmitted frames
in new transmissions.

Before transmission, any socket buffer (skb) shorter than the ethernet
minimum length of 60 bytes was zero-padded. On Rhine I cards the data can
later be copied into an aligned transmission buffer without copying this
padding. This resulted in the transmission of the frame with the extra
bytes beyond the provided content leaking the previous contents of this
buffer on to the network.

Now zero-padding is repeated in the local aligned buffer if one is used.

Following a suggestion from the via-rhine maintainer, no attempt is made
here to avoid the duplicated effort of padding the skb if it is known that
an aligned buffer will definitely be used. This is to make the change
"obviously correct" and allow it to be applied to a stable kernel if
necessary. There is no change to the flow of control and the changes are
only to the Rhine I code path.

The patch has run on an in-service Rhine-I host without incident. Frames
shorter than 60 bytes are now correctly zero-padded when captured on a
separate host. I see no unusual stats reported by ifconfig, and no unusual
log messages.

Signed-off-by: Craig Brind
Signed-off-by: Roger Luethi
Cc: Jeff Garzik
Signed-off-by: Andrew Morton
Signed-off-by: Jeff Garzik
---
 drivers/net/via-rhine.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/via-rhine.c b/drivers/net/via-rhine.c
index 6a23964c1317..a6dc53b4250d 100644
--- a/drivers/net/via-rhine.c
+++ b/drivers/net/via-rhine.c
@@ -129,6 +129,7 @@
 	- Massive clean-up
 	- Rewrite PHY, media handling (remove options, full_duplex, backoff)
 	- Fix Tx engine race for good
+	- Craig Brind: Zero padded aligned buffers for short packets.
 
 */
 
@@ -1326,7 +1327,12 @@ static int rhine_start_tx(struct sk_buff *skb, struct net_device *dev)
 			rp->stats.tx_dropped++;
 			return 0;
 		}
+
+		/* Padding is not copied and so must be redone. */
 		skb_copy_and_csum_dev(skb, rp->tx_buf[entry]);
+		if (skb->len < ETH_ZLEN)
+			memset(rp->tx_buf[entry] + skb->len, 0,
+			       ETH_ZLEN - skb->len);
 		rp->tx_skbuff_dma[entry] = 0;
 		rp->tx_ring[entry].addr = cpu_to_le32(rp->tx_bufs_dma +
 						  (rp->tx_buf[entry] -
-- 
cgit v1.2.3-59-g8ed1b
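
[Editorial sketch, not part of the patch] The logic the second hunk adds can be
read in isolation as: copy the frame into the driver's bounce buffer, then clear
everything between the end of the frame and the 60-byte Ethernet minimum
(ETH_ZLEN), so stale bytes from an earlier frame never reach the wire. The
function and buffer names below are hypothetical and standalone; only the
memset-to-ETH_ZLEN step mirrors what the patch does after
skb_copy_and_csum_dev().

    /* Standalone illustration only, not kernel code. */
    #include <stddef.h>
    #include <string.h>

    #define ETH_ZLEN 60   /* minimum Ethernet frame length without FCS */

    /*
     * Copy 'len' bytes of frame data into 'tx_buf' and zero-pad the copy
     * up to ETH_ZLEN.  'tx_buf' must be at least ETH_ZLEN bytes long.
     */
    static void copy_and_pad_frame(unsigned char *tx_buf,
                                   const unsigned char *data, size_t len)
    {
            memcpy(tx_buf, data, len);
            if (len < ETH_ZLEN)
                    memset(tx_buf + len, 0, ETH_ZLEN - len);
    }

Without the memset, a caller reusing the same tx_buf for successive frames would
transmit whatever the previous frame left in bytes len..ETH_ZLEN-1, which is the
information leak the patch closes for the Rhine I code path.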