[dpdk-stable] patch 'net/i40e: fix Tx when TSO is enabled' has been queued to LTS release 18.11.7

Kevin Traynor ktraynor at redhat.com
Fri Feb 7 16:12:34 CET 2020


Hi,

FYI, your patch has been queued to LTS release 18.11.7

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 02/13/20, so please
shout if you have any objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate whether any rebasing was
needed to apply it to the stable branch. If there were code changes for
rebasing (i.e. not only metadata diffs), please double-check that the rebase
was done correctly.

Queued patches are on a temporary branch at:
https://github.com/kevintraynor/dpdk-stable-queue

This queued commit can be viewed at:
https://github.com/kevintraynor/dpdk-stable-queue/commit/950dfee75576b7955b288833764c8c33c5406ba4

Thanks.

Kevin.

---
From 950dfee75576b7955b288833764c8c33c5406ba4 Mon Sep 17 00:00:00 2001
From: Xiaoyun Li <xiaoyun.li at intel.com>
Date: Thu, 26 Dec 2019 14:45:44 +0800
Subject: [PATCH] net/i40e: fix Tx when TSO is enabled

[ upstream commit 29b2ba82c4c94df1975d0cb9c5c23feef99cf6a3 ]

The hardware limits the max buffer size per Tx descriptor to (16K-1)B.
So when TSO is enabled, an mbuf's data size may exceed the limit and
cause the NIC to misbehave. This patch fixes the issue by using more
Tx descriptors for such large buffers.
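
As a back-of-the-envelope check, here is a standalone sketch of that
arithmetic (not driver code: MAX_DATA_PER_TXD below stands in for the
driver's I40E_MAX_DATA_PER_TXD, which works out to 16383). One oversized
TSO segment needs ceil(data_len / 16383) data descriptors:

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in for I40E_MAX_DATA_PER_TXD: the (16K-1)B HW cap */
	#define MAX_DATA_PER_TXD 16383u

	/* Data descriptors needed for one mbuf segment */
	static uint16_t descs_for_seg(uint32_t data_len)
	{
		/* ceil(data_len / MAX_DATA_PER_TXD) */
		return (uint16_t)((data_len + MAX_DATA_PER_TXD - 1) /
				  MAX_DATA_PER_TXD);
	}

	int main(void)
	{
		/* A 40000B TSO segment spans 3 descriptors, not 1 */
		printf("%u\n", descs_for_seg(40000));
		return 0;
	}

i40e_calc_pkt_desc() in the patch sums exactly this per segment.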

Fixes: 4861cde46116 ("i40e: new poll mode driver")

Signed-off-by: Xiaoyun Li <xiaoyun.li at intel.com>
Acked-by: Qi Zhang <qi.z.zhang at intel.com>
Tested-by: Ciara Loftus <ciara.loftus at intel.com>
---
 drivers/net/i40e/i40e_rxtx.c | 45 +++++++++++++++++++++++++++++++++++-
 1 file changed, 44 insertions(+), 1 deletion(-)

diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 9de88a9ccb..1642cf4948 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1003,4 +1003,22 @@ i40e_set_tso_ctx(struct rte_mbuf *mbuf, union i40e_tx_offload tx_offload)
 }
 
+/* HW requires that Tx buffer size ranges from 1B up to (16K-1)B. */
+#define I40E_MAX_DATA_PER_TXD \
+	(I40E_TXD_QW1_TX_BUF_SZ_MASK >> I40E_TXD_QW1_TX_BUF_SZ_SHIFT)
+/* Calculate the number of TX descriptors needed for each pkt */
+static inline uint16_t
+i40e_calc_pkt_desc(struct rte_mbuf *tx_pkt)
+{
+	struct rte_mbuf *txd = tx_pkt;
+	uint16_t count = 0;
+
+	while (txd != NULL) {
+		count += DIV_ROUND_UP(txd->data_len, I40E_MAX_DATA_PER_TXD);
+		txd = txd->next;
+	}
+
+	return count;
+}
+
 uint16_t
 i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -1060,6 +1078,13 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		 * a packet equals to the number of the segments of that
 		 * packet plus 1 context descriptor if needed.
+		 * Recalculate the needed tx descs when TSO enabled in case
+		 * the mbuf data size exceeds max data size that hw allows
+		 * per tx desc.
 		 */
-		nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
+		if (ol_flags & PKT_TX_TCP_SEG)
+			nb_used = (uint16_t)(i40e_calc_pkt_desc(tx_pkt) +
+					     nb_ctx);
+		else
+			nb_used = (uint16_t)(tx_pkt->nb_segs + nb_ctx);
 		tx_last = (uint16_t)(tx_id + nb_used - 1);
 
@@ -1174,4 +1199,22 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			buf_dma_addr = rte_mbuf_data_iova(m_seg);
 
+			while ((ol_flags & PKT_TX_TCP_SEG) &&
+				unlikely(slen > I40E_MAX_DATA_PER_TXD)) {
+				txd->buffer_addr =
+					rte_cpu_to_le_64(buf_dma_addr);
+				txd->cmd_type_offset_bsz =
+					i40e_build_ctob(td_cmd,
+					td_offset, I40E_MAX_DATA_PER_TXD,
+					td_tag);
+
+				buf_dma_addr += I40E_MAX_DATA_PER_TXD;
+				slen -= I40E_MAX_DATA_PER_TXD;
+
+				txe->last_id = tx_last;
+				tx_id = txe->next_id;
+				txe = txn;
+				txd = &txr[tx_id];
+				txn = &sw_ring[txe->next_id];
+			}
 			PMD_TX_LOG(DEBUG, "mbuf: %p, TDD[%u]:\n"
 				"buf_dma_addr: %#"PRIx64";\n"
-- 
2.21.1
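
For reviewers comparing the hunks above, here is a simplified standalone
model of the new split loop (a sketch only: struct demo_desc and
emit_seg() are invented stand-ins, and unlike the patch, which leaves
the final sub-(16K-1)B chunk to the pre-existing descriptor write and
also updates the sw_ring bookkeeping via txe/txn, this version writes
every chunk itself):

	#include <stdint.h>

	/* Assumed (16K-1)B HW cap, as in the patch */
	#define MAX_DATA_PER_TXD 16383u

	/* Hypothetical data descriptor, for illustration only */
	struct demo_desc {
		uint64_t addr;
		uint32_t len;
	};

	/* Write one segment as a chain of <= MAX_DATA_PER_TXD chunks,
	 * advancing the DMA address and wrapping the ring index the
	 * way tx_id wraps in the driver. Returns descriptors used. */
	static uint32_t emit_seg(struct demo_desc *ring, uint32_t ring_size,
				 uint32_t id, uint64_t dma, uint32_t slen)
	{
		uint32_t used = 0;

		while (slen > 0) {
			uint32_t chunk = slen > MAX_DATA_PER_TXD ?
					 MAX_DATA_PER_TXD : slen;

			ring[id].addr = dma;
			ring[id].len = chunk;
			dma += chunk;
			slen -= chunk;
			id = (id + 1) % ring_size;
			used++;
		}
		return used;
	}

Note the driver only takes this path when PKT_TX_TCP_SEG is set, so the
non-TSO fast path is unchanged.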

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- -	2020-02-07 15:08:18.671087249 +0000
+++ 0022-net-i40e-fix-Tx-when-TSO-is-enabled.patch	2020-02-07 15:08:17.534062684 +0000
@@ -1 +1 @@
-From 29b2ba82c4c94df1975d0cb9c5c23feef99cf6a3 Mon Sep 17 00:00:00 2001
+From 950dfee75576b7955b288833764c8c33c5406ba4 Mon Sep 17 00:00:00 2001
@@ -5,0 +6,2 @@
+[ upstream commit 29b2ba82c4c94df1975d0cb9c5c23feef99cf6a3 ]
+
@@ -12 +13,0 @@
-Cc: stable at dpdk.org
@@ -22 +23 @@
-index 17dc8c78f7..bbdba39b3c 100644
+index 9de88a9ccb..1642cf4948 100644
@@ -25 +26 @@
-@@ -990,4 +990,22 @@ i40e_set_tso_ctx(struct rte_mbuf *mbuf, union i40e_tx_offload tx_offload)
+@@ -1003,4 +1003,22 @@ i40e_set_tso_ctx(struct rte_mbuf *mbuf, union i40e_tx_offload tx_offload)
@@ -48 +49 @@
-@@ -1047,6 +1065,13 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+@@ -1060,6 +1078,13 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
@@ -63 +64 @@
-@@ -1161,4 +1186,22 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+@@ -1174,4 +1199,22 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)


