[dpdk-dev] [PATCH v2 04/15] net/mlx5: support Rx tunnel type identification

Nélio Laranjeiro nelio.laranjeiro at 6wind.com
Thu Apr 12 11:50:58 CEST 2018


On Wed, Apr 11, 2018 at 08:11:50AM +0000, Xueming(Steven) Li wrote:
> Hi Nelio,
> 
> > -----Original Message-----
> > From: Nélio Laranjeiro <nelio.laranjeiro at 6wind.com>
> > Sent: Tuesday, April 10, 2018 11:17 PM
> > To: Xueming(Steven) Li <xuemingl at mellanox.com>
> > Cc: Shahaf Shuler <shahafs at mellanox.com>; dev at dpdk.org
> > Subject: Re: [PATCH v2 04/15] net/mlx5: support Rx tunnel type
> > identification
> > 
> > On Tue, Apr 10, 2018 at 09:34:04PM +0800, Xueming Li wrote:
> > > This patch introduces tunnel type identification based on flow rules.
> > > If flows of multiple tunnel types are built on the same queue,
> > > RTE_PTYPE_TUNNEL_MASK will be returned; bits in the flow mark could
> > > then be used as a tunnel type identifier.
> > 
> > I don't see anywhere in this patch where bits are reserved to identify
> > a flow, nor any values which would help to identify it.
> > 
> > Is this missing?
> > 
> > Anyway, we already have very few bits in the mark, making it difficult
> > for the user to use; reserving some of them again may lead to removing
> > mark support from the flows.
> 
> Not all users will use multiple tunnel types; this is not included in this
> patch set and is left to the user's decision. I'll update the comments to
> make this clear.

Thanks,

> > > Signed-off-by: Xueming Li <xuemingl at mellanox.com>
<snip/>
> > >  /**
> > > + * RXQ update after flow rule creation.
> > > + *
> > > + * @param dev
> > > + *   Pointer to Ethernet device.
> > > + * @param flow
> > > + *   Pointer to the flow rule.
> > > + */
> > > +static void
> > > +mlx5_flow_create_update_rxqs(struct rte_eth_dev *dev, struct rte_flow
> > > +*flow) {
> > > +	struct priv *priv = dev->data->dev_private;
> > > +	unsigned int i;
> > > +
> > > +	if (!dev->data->dev_started)
> > > +		return;
> > > +	for (i = 0; i != flow->rss_conf.queue_num; ++i) {
> > > +		struct mlx5_rxq_data *rxq_data = (*priv->rxqs)
> > > +						 [(*flow->queues)[i]];
> > > +		struct mlx5_rxq_ctrl *rxq_ctrl =
> > > +			container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
> > > +		uint8_t tunnel = PTYPE_IDX(flow->tunnel);
> > > +
> > > +		rxq_data->mark |= flow->mark;
> > > +		if (!tunnel)
> > > +			continue;
> > > +		rxq_ctrl->tunnel_types[tunnel] += 1;
> > 
> > I don't understand why you need such an array; the NIC is unable to
> > return the tunnel type, as it returns only one bit saying "tunnel".
> > Why doesn't it store the currently configured tunnel in the priv
> > structure?
> 
> This array is used to count the tunnel types bound to a queue. If only one
> tunnel type is bound, ptype will report that tunnel type; TUNNEL MASK (max
> value) will be returned if multiple types are bound to a queue.
> 
> A flow RSS action specifies the queues binding to a tunnel, so we can't
> assume all queues have the same tunnel types; hence this is a per-queue
> structure.
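The counting scheme described above can be sketched in standalone C. All names here (`rxq_sketch`, `TUNNEL_PTYPE_MASK`, `rxq_update_tunnel`) are illustrative stand-ins for the mlx5 definitions, not the actual driver code:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative constants; not the real mlx5/DPDK definitions. */
#define TUNNEL_TYPES_N    8            /* number of tracked tunnel ptypes */
#define TUNNEL_PTYPE_MASK 0xffffffffu  /* stands in for RTE_PTYPE_TUNNEL_MASK */

struct rxq_sketch {
	uint32_t tunnel_types[TUNNEL_TYPES_N]; /* flows per tunnel type */
	uint32_t tunnel;                       /* ptype reported by the queue */
};

/* Recompute the queue-level tunnel ptype after a counter changes:
 * no active type -> 0, exactly one -> that type, several -> the mask. */
static void
rxq_update_tunnel(struct rxq_sketch *q,
		  const uint32_t ptype_of[TUNNEL_TYPES_N])
{
	unsigned int i;
	unsigned int active = 0;
	uint32_t last = 0;

	for (i = 0; i != TUNNEL_TYPES_N; ++i) {
		if (q->tunnel_types[i]) {
			++active;
			last = ptype_of[i];
		}
	}
	q->tunnel = active == 0 ? 0 :
		    active == 1 ? last : TUNNEL_PTYPE_MASK;
}
```

Flow creation increments `tunnel_types[tunnel]`, flow destruction would decrement it, and the reported ptype degrades to the mask as soon as a second type becomes active on the queue.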

There is something I am missing here: how, in the dataplane, can the PMD
understand from one bit which kind of tunnel the packet is matching?

<snip/>
> > > @@ -2334,9 +2414,9 @@ mlx5_flow_stop(struct rte_eth_dev *dev, struct
> > > mlx5_flows *list)  {
> > >  	struct priv *priv = dev->data->dev_private;
> > >  	struct rte_flow *flow;
> > > +	unsigned int i;
> > >
> > >  	TAILQ_FOREACH_REVERSE(flow, list, mlx5_flows, next) {
> > > -		unsigned int i;
> > >  		struct mlx5_ind_table_ibv *ind_tbl = NULL;
> > >
> > >  		if (flow->drop) {
> > > @@ -2382,6 +2462,16 @@ mlx5_flow_stop(struct rte_eth_dev *dev, struct
> > mlx5_flows *list)
> > >  		DRV_LOG(DEBUG, "port %u flow %p removed", dev->data->port_id,
> > >  			(void *)flow);
> > >  	}
> > > +	/* Cleanup Rx queue tunnel info. */
> > > +	for (i = 0; i != priv->rxqs_n; ++i) {
> > > +		struct mlx5_rxq_data *q = (*priv->rxqs)[i];
> > > +		struct mlx5_rxq_ctrl *rxq_ctrl =
> > > +			container_of(q, struct mlx5_rxq_ctrl, rxq);
> > > +
> > > +		memset((void *)rxq_ctrl->tunnel_types, 0,
> > > +		       sizeof(rxq_ctrl->tunnel_types));
> > > +		q->tunnel = 0;
> > > +	}
> > >  }
> > 
> > This hunk does not handle the fact that the Rx queue array may have
> > holes, i.e. the application is allowed to ask for 10 queues and only
> > initialise some of them.  In such a situation this code will segfault.
> 
> In other words, "q" could be NULL, correct? I'll add a check for this.

Correct.
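A minimal sketch of the guarded cleanup loop being discussed, assuming a simplified stand-in type (`rxq_sketch` is hypothetical, not the real mlx5_rxq_data/mlx5_rxq_ctrl pair):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for a per-queue structure. */
struct rxq_sketch {
	unsigned int tunnel_types[8];
	unsigned int tunnel;
};

/* Clear per-queue tunnel state, skipping NULL slots: the application may
 * have asked for n queues but initialised only some of them.  Returns the
 * number of queues actually cleared. */
static unsigned int
clear_tunnel_info(struct rxq_sketch *rxqs[], unsigned int n)
{
	unsigned int i;
	unsigned int cleared = 0;

	for (i = 0; i != n; ++i) {
		struct rxq_sketch *q = rxqs[i];

		if (q == NULL) /* hole: never initialised, do not touch */
			continue;
		memset(q->tunnel_types, 0, sizeof(q->tunnel_types));
		q->tunnel = 0;
		++cleared;
	}
	return cleared;
}
```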

> BTW, there should be an action item to add such a check in rss/queue flow creation.

As it is the responsibility of the application/user to make rules according
to what it has configured, such a check has not been added.  It can still be
added, but it cannot be considered a fix.
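Such a creation-time check could look like the following sketch; `validate_rss_queues` and the surrounding types are hypothetical, not driver API:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for an Rx queue object. */
struct rxq_sketch { int dummy; };

/* Reject an RSS/queue action whose queue index is out of range or points
 * at an uninitialised (NULL) Rx queue slot. */
static int
validate_rss_queues(struct rxq_sketch *const rxqs[], unsigned int rxqs_n,
		    const unsigned int *queues, unsigned int queue_num)
{
	unsigned int i;

	for (i = 0; i != queue_num; ++i) {
		if (queues[i] >= rxqs_n || rxqs[queues[i]] == NULL)
			return -1; /* the real driver would return -EINVAL */
	}
	return 0;
}
```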

> > It should only memset the Rx queues that are part of the flow, not the others.
> 
> Cleaning this up (decreasing the tunnel_types counter of each queue) for
> each flow would be time consuming.

Considering flows already rely on syscalls to communicate with the kernel,
the extra cycles spent clearing only the queues that are part of this flow
are negligible.

By the way, in the same function the mark is cleared only for the queues
that are part of the flow; the same loop can be used to clear the tunnel
information at the same time.
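That combined per-flow loop could be sketched as follows, with simplified stand-in types; the real driver walks `flow->queues` over `mlx5_rxq_data`/`mlx5_rxq_ctrl`:

```c
#include <assert.h>
#include <stdint.h>

#define TUNNEL_TYPES_N 8

/* Simplified stand-ins for the driver structures. */
struct rxq_sketch {
	uint32_t tunnel_types[TUNNEL_TYPES_N];
	int mark;
};

struct flow_sketch {
	unsigned int queue_num;
	const unsigned int *queues;  /* indices into the rxq array */
	unsigned int tunnel_idx;     /* index incremented at flow creation */
};

/* Walk only the queues that are part of the flow: clear the mark (as the
 * existing loop does) and decrement the tunnel counter in the same pass. */
static void
flow_stop_update_rxqs(struct rxq_sketch *rxqs[], const struct flow_sketch *f)
{
	unsigned int i;

	for (i = 0; i != f->queue_num; ++i) {
		struct rxq_sketch *q = rxqs[f->queues[i]];

		q->mark = 0;
		if (q->tunnel_types[f->tunnel_idx])
			q->tunnel_types[f->tunnel_idx] -= 1;
	}
}
```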

> If an error happens, the counter will not be cleared, and such state will
> impact the tunnel type after the port starts again.

Apart from an implementation error, what other kind of error do you fear
could happen?

Thanks,

-- 
Nélio Laranjeiro
6WIND
