[dpdk-dev] [PATCH v2 2/2] ethdev: introduce Tx queue offloads API

Shahaf Shuler shahafs at mellanox.com
Sun Sep 10 14:07:49 CEST 2017


Introduce a new API to configure Tx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports the capabilities of each of them.
Offloads are enabled using the existing DEV_TX_OFFLOAD_* flags.
To enable a per-port offload, the offload should be set in both the device
configuration and the queue configuration. To enable a per-queue offload,
it is enough to set it in the queue configuration alone.
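
For illustration only, a rough usage sketch under this proposal (the
port id, descriptor count and the particular offloads below are
arbitrary placeholders, not taken from this patch):

    uint8_t port_id = 0;                    /* placeholder */
    uint16_t nb_txd = 512;                  /* placeholder */
    struct rte_eth_conf dev_conf = {
        /* per-port offload: requested at configure time ... */
        .txmode = { .offloads = DEV_TX_OFFLOAD_VLAN_INSERT },
    };
    struct rte_eth_dev_info dev_info;
    struct rte_eth_txconf txq_conf;

    rte_eth_dev_configure(port_id, 1, 1, &dev_conf);
    rte_eth_dev_info_get(port_id, &dev_info);

    txq_conf = dev_info.default_txconf;
    /* opt in to the offloads field, see ETH_TXQ_FLAGS_IGNORE below */
    txq_conf.txq_flags = ETH_TXQ_FLAGS_IGNORE;
    /* ... and repeated per queue, plus a purely per-queue offload */
    txq_conf.offloads = DEV_TX_OFFLOAD_VLAN_INSERT |
                        DEV_TX_OFFLOAD_TCP_CKSUM;
    rte_eth_tx_queue_setup(port_id, 0, nb_txd, rte_socket_id(), &txq_conf);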

In addition, Tx offloads will be disabled by default and enabled
according to the application's needs. This will greatly simplify the PMDs'
management of the different offloads.

The new API does not have an equivalent for the following
benchmark-specific flags:

	- ETH_TXQ_FLAGS_NOREFCOUNT
	- ETH_TXQ_FLAGS_NOMULTMEMP

Applications should set the ETH_TXQ_FLAGS_IGNORE flag in the txq_flags
field in order to move to the new API (as in the sketch above).

The old Tx offloads API is kept for the time being, in order to enable a
smooth transition of PMDs and applications to the new API.
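
Continuing the sketch above (again, the flag choice is arbitrary), a
not-yet-converted application keeps working unchanged;
rte_eth_tx_queue_setup() derives an equivalent offloads value before
calling into the PMD:

    /* legacy application: txq_flags only, ETH_TXQ_FLAGS_IGNORE not set */
    txq_conf = dev_info.default_txconf;
    txq_conf.txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS | ETH_TXQ_FLAGS_NOXSUMS;
    /* ethdev will hand the PMD local_conf.offloads ==
     * DEV_TX_OFFLOAD_VLAN_INSERT, the only convertible offload not
     * masked out by the flags above */
    rte_eth_tx_queue_setup(port_id, 0, nb_txd, rte_socket_id(), &txq_conf);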

Signed-off-by: Shahaf Shuler <shahafs at mellanox.com>
---

Note:

There is a special case where the application is using the old API while the
PMD has already converted to the new one. If there are Tx offloads which can
be set only per-port, the queue setup may fail.

I chose to treat this case as an exception, considering that all Tx offloads
are currently defined to be per-queue. New ones to be added should require
the application to move to the new API as well.
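
A hypothetical application-side check for that case (assuming that
per-port-only offloads are exactly those advertised in tx_offload_capa
but not in tx_queue_offload_capa):

    rte_eth_dev_info_get(port_id, &dev_info);
    if ((dev_info.tx_offload_capa & ~dev_info.tx_queue_offload_capa) != 0)
        /* some offloads can only be requested through the new per-port
         * offloads field; the old txq_flags path cannot express them */
        printf("port %d: consider moving to the new Tx offloads API\n",
               port_id);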

---
 doc/guides/nics/features.rst  |  8 ++++++
 lib/librte_ether/rte_ethdev.c | 59 +++++++++++++++++++++++++++++++++++++-
 lib/librte_ether/rte_ethdev.h | 32 ++++++++++++++++++++-
 3 files changed, 97 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index f2c8497c2..bb25a1cee 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -131,6 +131,7 @@ Lock-free Tx queue
 If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
+* **[uses]    rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[provides] rte_eth_dev_info**: ``tx_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
@@ -220,6 +221,7 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
+* **[uses]       rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_TCP_SEG``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
@@ -510,6 +512,7 @@ VLAN offload
 Supports VLAN offload to hardware.
 
 * **[uses]       rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
@@ -526,6 +529,7 @@ QinQ offload
 Supports QinQ (queue in queue) offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
@@ -541,6 +545,7 @@ L3 checksum offload
 Supports L3 checksum offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
@@ -558,6 +563,7 @@ L4 checksum offload
 Supports L4 checksum offload.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -576,6 +582,7 @@ MACsec offload
 Supports MACsec.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
 * **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
@@ -589,6 +596,7 @@ Inner L3 checksum
 Supports inner packet L3 checksum.
 
 * **[uses]     rte_eth_rxq_conf**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txq_conf**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index b3c10701e..cd79cb1c9 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1186,6 +1186,50 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	return ret;
 }
 
+/**
+ * A conversion function from txq_flags API.
+ */
+static void
+rte_eth_convert_txq_flags(const uint32_t txq_flags, uint64_t *tx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOMULTSEGS))
+		offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOVLANOFFL))
+		offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMSCTP))
+		offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMUDP))
+		offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+	if (!(txq_flags & ETH_TXQ_FLAGS_NOXSUMTCP))
+		offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+
+	*tx_offloads = offloads;
+}
+
+/**
+ * A conversion function from offloads API.
+ */
+static void
+rte_eth_convert_txq_offloads(const uint64_t tx_offloads, uint32_t *txq_flags)
+{
+	uint32_t flags = 0;
+
+	if (!(tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+		flags |= ETH_TXQ_FLAGS_NOMULTSEGS;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT))
+		flags |= ETH_TXQ_FLAGS_NOVLANOFFL;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMSCTP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMUDP;
+	if (!(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM))
+		flags |= ETH_TXQ_FLAGS_NOXSUMTCP;
+
+	*txq_flags = flags;
+}
+
 int
 rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 		       uint16_t nb_tx_desc, unsigned int socket_id,
@@ -1193,6 +1237,7 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_txconf local_conf;
 	void **txq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1237,8 +1282,20 @@ rte_eth_tx_queue_setup(uint8_t port_id, uint16_t tx_queue_id,
 	if (tx_conf == NULL)
 		tx_conf = &dev_info.default_txconf;
 
+	/*
+	 * Convert between the offloads API to enable PMDs to support
+	 * only one of them.
+	 */
+	local_conf = *tx_conf;
+	if (tx_conf->txq_flags & ETH_TXQ_FLAGS_IGNORE)
+		rte_eth_convert_txq_offloads(tx_conf->offloads,
+					     &local_conf.txq_flags);
+	else
+		rte_eth_convert_txq_flags(tx_conf->txq_flags,
+					  &local_conf.offloads);
+
 	return (*dev->dev_ops->tx_queue_setup)(dev, tx_queue_id, nb_tx_desc,
-					       socket_id, tx_conf);
+					       socket_id, &local_conf);
 }
 
 void
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index f424cba04..4ad7dd059 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -692,6 +692,12 @@ struct rte_eth_vmdq_rx_conf {
  */
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
+	uint64_t offloads;
+	/**
+	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set on tx_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
 
 	/* For i40e specifically */
 	uint16_t pvid;
@@ -733,6 +739,14 @@ struct rte_eth_rxconf {
 #define ETH_TXQ_FLAGS_NOXSUMS \
 		(ETH_TXQ_FLAGS_NOXSUMSCTP | ETH_TXQ_FLAGS_NOXSUMUDP | \
 		 ETH_TXQ_FLAGS_NOXSUMTCP)
+#define ETH_TXQ_FLAGS_IGNORE	0x8000
+	/**
+	 * When set the txq_flags should be ignored,
+	 * instead per-queue Tx offloads will be set on offloads field
+	 * located on rte_eth_txq_conf struct.
+	 * This flag is temporary till the rte_eth_txq_conf.txq_flags
+	 * API will be deprecated.
+	 */
 
 /**
  * A structure used to configure a TX ring of an Ethernet port.
@@ -745,6 +759,12 @@ struct rte_eth_txconf {
 
 	uint32_t txq_flags; /**< Set flags for the Tx queue */
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	uint64_t offloads;
+	/**
+	 * Per-queue Tx offloads to be set  using DEV_TX_OFFLOAD_* flags.
+	 * Only offloads set on tx_queue_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
 };
 
 /**
@@ -969,6 +989,8 @@ struct rte_eth_conf {
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
+#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+/**< multi segment send is supported. */
 
 struct rte_pci_device;
 
@@ -991,9 +1013,12 @@ struct rte_eth_dev_info {
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
 	uint64_t rx_offload_capa;
 	/**< Device per port RX offload capabilities. */
-	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t tx_offload_capa;
+	/**< Device per port TX offload capabilities. */
 	uint64_t rx_queue_offload_capa;
 	/**< Device per queue RX offload capabilities. */
+	uint64_t tx_queue_offload_capa;
+	/**< Device per queue TX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -2024,6 +2049,11 @@ int rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
  *   - The *txq_flags* member contains flags to pass to the TX queue setup
  *     function to configure the behavior of the TX queue. This should be set
  *     to 0 if no special configuration is required.
+ *     This API is obsolete and will be deprecated. Applications
+ *     should set it to ETH_TXQ_FLAGS_IGNORE and use
+ *     the offloads field below.
+ *   - The *offloads* member contains Tx offloads to be enabled.
+ *     Offloads which are not set cannot be used on the datapath.
  *
  *     Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces
  *     the transmit function to use default values.
-- 
2.12.0


