[dpdk-dev] [PATCH v4 2/2] ethdev: make rte_flow API thread safe

Suanming Mou suanmingm at nvidia.com
Fri Oct 9 03:17:23 CEST 2020


Currently, the rte_flow functions are not defined as thread safe.
DPDK applications must either call the functions from a single thread
or protect any concurrent calls to the rte_flow operations with
a lock.

For PMDs that support thread-safe flow operations natively, this
redundant protection in the application hurts the performance of the
rte_flow operation functions.

Moreover, since thread safety is not guaranteed for the rte_flow
functions, applications cannot rely on it.

This feature changes the rte_flow functions to be thread safe. As
different PMDs have different flow operations, some may already be
thread safe while others are not. For PMDs that do not support
thread-safe flow operations, a new lock is defined in ethdev so that
the rte_flow level protects the thread-unsafe PMDs.

A new RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE device flag is added to
indicate whether the PMD supports thread-safe flow operations.
PMDs that support thread-safe flow operations set the
RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE flag, and the rte_flow level
functions skip the thread-safety helper lock for them. Again, the
rte_flow level lock is taken only when the PMD operation functions
are not thread safe.
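
For illustration, a hypothetical PMD whose flow operations are
already thread safe could advertise the flag at probe time. The
"foo" names below are made up for the sketch; only the flag and the
dev_flags field come from this patch:

    /* Sketch: advertise native thread-safe flow ops at probe time. */
    #include <rte_ethdev_driver.h>

    static int
    foo_eth_dev_init(struct rte_eth_dev *eth_dev)
    {
        /* ... regular private data and ops setup ... */
        eth_dev->data->dev_flags |= RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE;
        return 0;
    }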

A PMD that does not want the default mutex can simply set the flag
and use its preferred type of lock internally; the default mutex is
then easily replaced by the PMD-level lock.
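
As a sketch of that replacement (again with a hypothetical "foo" PMD;
the callback signature is the one from rte_flow_driver.h), the PMD
sets the flag as above and serializes inside its own flow callbacks
with whatever lock it prefers, e.g. a spinlock:

    #include <rte_spinlock.h>
    #include <rte_flow_driver.h>

    struct foo_priv {
        rte_spinlock_t flow_lock; /* PMD-level lock replacing the mutex */
    };

    static struct rte_flow *
    foo_flow_create(struct rte_eth_dev *dev,
                    const struct rte_flow_attr *attr,
                    const struct rte_flow_item pattern[],
                    const struct rte_flow_action actions[],
                    struct rte_flow_error *error)
    {
        struct foo_priv *priv = dev->data->dev_private;
        struct rte_flow *flow = NULL;

        rte_spinlock_lock(&priv->flow_lock);
        /* ... insert the rule into the PMD flow tables here ... */
        rte_spinlock_unlock(&priv->flow_lock);
        return flow;
    }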

The change has no effect on current DPDK applications, and no
application change is required. With the standard POSIX
pthread_mutex, when there is no contention on the added rte_flow
level mutex, pthread_mutex_lock() only performs an atomic increment
and pthread_mutex_unlock() an atomic decrement; no futex() syscall
is involved.
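
An application that previously wrapped rte_flow calls in its own lock
can drop it and, if useful, check at which level the protection
lives. A minimal sketch, assuming the application reads dev_flags
through the (internal but accessible) rte_eth_devices array, as the
documentation update below describes:

    #include <stdbool.h>
    #include <rte_ethdev.h>

    static bool
    port_flow_ops_thread_safe(uint16_t port_id)
    {
        /* Flag set: the PMD itself is thread safe; flag clear: the
         * rte_flow level mutex serializes the calls instead. */
        return (rte_eth_devices[port_id].data->dev_flags &
                RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE) != 0;
    }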

Signed-off-by: Suanming Mou <suanmingm at nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde at broadcom.com>
Acked-by: Ori Kam <orika at nvidia.com>
Acked-by: Matan Azrad <matan at nvidia.com>
---

v4:
 - Change RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE to 0x0001, since
   RTE_ETH_DEV_CLOSE_REMOVE has been removed.
 - Update small comments.
 - Update release notes.

v3:
 - update flow_lock/unlock -> fts_enter/exit

v2:
 - Update commit info and description doc.
 - Add inline for the flow lock and unlock functions.
 - Remove the PMD sample part flag configuration.

---

 doc/guides/prog_guide/rte_flow.rst     | 11 +++--
 doc/guides/rel_notes/release_20_11.rst |  6 +++
 lib/librte_ethdev/rte_ethdev.c         |  2 +
 lib/librte_ethdev/rte_ethdev.h         |  2 +
 lib/librte_ethdev/rte_ethdev_core.h    |  4 ++
 lib/librte_ethdev/rte_flow.c           | 84 ++++++++++++++++++++++++++--------
 6 files changed, 86 insertions(+), 23 deletions(-)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 119b128..b4ae9cc 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3046,10 +3046,6 @@ Caveats
 - API operations are synchronous and blocking (``EAGAIN`` cannot be
   returned).
 
-- There is no provision for re-entrancy/multi-thread safety, although nothing
-  should prevent different devices from being configured at the same
-  time. PMDs may protect their control path functions accordingly.
-
 - Stopping the data path (TX/RX) should not be necessary when managing flow
   rules. If this cannot be achieved naturally or with workarounds (such as
   temporarily replacing the burst function pointers), an appropriate error
@@ -3101,6 +3097,13 @@ This interface additionally defines the following helper function:
 - ``rte_flow_ops_get()``: get generic flow operations structure from a
   port.
 
+If PMD interfaces do not support re-entrancy/multi-thread safety, the
+rte_flow level functions will protect the flow operations with a mutex per
+port. The application can test ``dev_flags`` in ``struct rte_eth_dev_data``
+against ``RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE`` to know whether thread safety
+works at the rte_flow level or the PMD level. Please note that the mutex
+only protects rte_flow level functions; other control paths are out of scope.
+
 More will be added over time.
 
 Device compatibility
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 0b2a370..364306d 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -109,6 +109,12 @@ New Features
   * Extern objects and functions can be plugged into the pipeline.
   * Transaction-oriented table updates.
 
+* **Added thread safety to rte_flow functions.**
+
+  Added ``RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE`` device flag to indicate whether
+  the PMD supports thread-safe flow operations. If the PMD does not set the
+  flag, the rte_flow level functions protect the flow operations with a mutex.
+
 
 Removed Items
 -------------
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 0f56541..60677fe 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -500,6 +500,7 @@ struct rte_eth_dev *
 	strlcpy(eth_dev->data->name, name, sizeof(eth_dev->data->name));
 	eth_dev->data->port_id = port_id;
 	eth_dev->data->mtu = RTE_ETHER_MTU;
+	pthread_mutex_init(&eth_dev->data->fts_mutex, NULL);
 
 unlock:
 	rte_spinlock_unlock(&rte_eth_dev_shared_data->ownership_lock);
@@ -564,6 +565,7 @@ struct rte_eth_dev *
 		rte_free(eth_dev->data->mac_addrs);
 		rte_free(eth_dev->data->hash_mac_addrs);
 		rte_free(eth_dev->data->dev_private);
+		pthread_mutex_destroy(&eth_dev->data->fts_mutex);
 		memset(eth_dev->data, 0, sizeof(struct rte_eth_dev_data));
 	}
 
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index d2bf74f..4dc2b85 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -1654,6 +1654,8 @@ struct rte_eth_dev_owner {
 	char name[RTE_ETH_MAX_OWNER_NAME_LEN]; /**< The owner name. */
 };
 
+/** Device PMD supports thread-safe flow operations */
+#define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE  0x0001
 /** Device supports link state interrupt */
 #define RTE_ETH_DEV_INTR_LSC     0x0002
 /** Device is a bonded slave */
diff --git a/lib/librte_ethdev/rte_ethdev_core.h b/lib/librte_ethdev/rte_ethdev_core.h
index fd3bf92..89df65a 100644
--- a/lib/librte_ethdev/rte_ethdev_core.h
+++ b/lib/librte_ethdev/rte_ethdev_core.h
@@ -5,6 +5,9 @@
 #ifndef _RTE_ETHDEV_CORE_H_
 #define _RTE_ETHDEV_CORE_H_
 
+#include <pthread.h>
+#include <sys/types.h>
+
 /**
  * @file
  *
@@ -180,6 +183,7 @@ struct rte_eth_dev_data {
 			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
 			 */
 
+	pthread_mutex_t fts_mutex; /**< rte flow ops thread safety mutex. */
 	uint64_t reserved_64s[4]; /**< Reserved for future fields */
 	void *reserved_ptrs[4];   /**< Reserved for future fields */
 } __rte_cache_aligned;
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index f8fdd68..6823458 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -207,6 +207,20 @@ struct rte_flow_desc_data {
 	return -rte_errno;
 }
 
+static inline void
+fts_enter(struct rte_eth_dev *dev)
+{
+	if (!(dev->data->dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE))
+		pthread_mutex_lock(&dev->data->fts_mutex);
+}
+
+static inline void
+fts_exit(struct rte_eth_dev *dev)
+{
+	if (!(dev->data->dev_flags & RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE))
+		pthread_mutex_unlock(&dev->data->fts_mutex);
+}
+
 static int
 flow_err(uint16_t port_id, int ret, struct rte_flow_error *error)
 {
@@ -346,12 +360,16 @@ struct rte_flow_desc_data {
 {
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	int ret;
 
 	if (unlikely(!ops))
 		return -rte_errno;
-	if (likely(!!ops->validate))
-		return flow_err(port_id, ops->validate(dev, attr, pattern,
-						       actions, error), error);
+	if (likely(!!ops->validate)) {
+		fts_enter(dev);
+		ret = ops->validate(dev, attr, pattern, actions, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -372,7 +390,9 @@ struct rte_flow *
 	if (unlikely(!ops))
 		return NULL;
 	if (likely(!!ops->create)) {
+		fts_enter(dev);
 		flow = ops->create(dev, attr, pattern, actions, error);
+		fts_exit(dev);
 		if (flow == NULL)
 			flow_err(port_id, -rte_errno, error);
 		return flow;
@@ -390,12 +410,16 @@ struct rte_flow *
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (unlikely(!ops))
 		return -rte_errno;
-	if (likely(!!ops->destroy))
-		return flow_err(port_id, ops->destroy(dev, flow, error),
-				error);
+	if (likely(!!ops->destroy)) {
+		fts_enter(dev);
+		ret = ops->destroy(dev, flow, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -408,11 +432,16 @@ struct rte_flow *
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (unlikely(!ops))
 		return -rte_errno;
-	if (likely(!!ops->flush))
-		return flow_err(port_id, ops->flush(dev, error), error);
+	if (likely(!!ops->flush)) {
+		fts_enter(dev);
+		ret = ops->flush(dev, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -428,12 +457,16 @@ struct rte_flow *
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (!ops)
 		return -rte_errno;
-	if (likely(!!ops->query))
-		return flow_err(port_id, ops->query(dev, flow, action, data,
-						    error), error);
+	if (likely(!!ops->query)) {
+		fts_enter(dev);
+		ret = ops->query(dev, flow, action, data, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -447,11 +480,16 @@ struct rte_flow *
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (!ops)
 		return -rte_errno;
-	if (likely(!!ops->isolate))
-		return flow_err(port_id, ops->isolate(dev, set, error), error);
+	if (likely(!!ops->isolate)) {
+		fts_enter(dev);
+		ret = ops->isolate(dev, set, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -1224,12 +1262,16 @@ enum rte_flow_conv_item_spec_type {
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (unlikely(!ops))
 		return -rte_errno;
-	if (likely(!!ops->dev_dump))
-		return flow_err(port_id, ops->dev_dump(dev, file, error),
-				error);
+	if (likely(!!ops->dev_dump)) {
+		fts_enter(dev);
+		ret = ops->dev_dump(dev, file, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOSYS,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOSYS));
@@ -1241,12 +1283,16 @@ enum rte_flow_conv_item_spec_type {
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
 	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+	int ret;
 
 	if (unlikely(!ops))
 		return -rte_errno;
-	if (likely(!!ops->get_aged_flows))
-		return flow_err(port_id, ops->get_aged_flows(dev, contexts,
-				nb_contexts, error), error);
+	if (likely(!!ops->get_aged_flows)) {
+		fts_enter(dev);
+		ret = ops->get_aged_flows(dev, contexts, nb_contexts, error);
+		fts_exit(dev);
+		return flow_err(port_id, ret, error);
+	}
 	return rte_flow_error_set(error, ENOTSUP,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, rte_strerror(ENOTSUP));
-- 
1.8.3.1


