[dpdk-dev] [PATCH v2 01/15] net/mlx5: support 16 hardware priorities

Xueming(Steven) Li xuemingl at mellanox.com
Thu Apr 12 15:43:04 CEST 2018



> -----Original Message-----
> From: Nélio Laranjeiro <nelio.laranjeiro at 6wind.com>
> Sent: Thursday, April 12, 2018 5:09 PM
> To: Xueming(Steven) Li <xuemingl at mellanox.com>
> Cc: Shahaf Shuler <shahafs at mellanox.com>; dev at dpdk.org
> Subject: Re: [PATCH v2 01/15] net/mlx5: support 16 hardware priorities
> 
> On Tue, Apr 10, 2018 at 03:22:46PM +0000, Xueming(Steven) Li wrote:
> > Hi Nelio,
> >
> > > -----Original Message-----
> > > From: Nélio Laranjeiro <nelio.laranjeiro at 6wind.com>
> > > Sent: Tuesday, April 10, 2018 10:42 PM
> > > To: Xueming(Steven) Li <xuemingl at mellanox.com>
> > > Cc: Shahaf Shuler <shahafs at mellanox.com>; dev at dpdk.org
> > > Subject: Re: [PATCH v2 01/15] net/mlx5: support 16 hardware
> > > priorities
> > >
> > > On Tue, Apr 10, 2018 at 09:34:01PM +0800, Xueming Li wrote:
> > > > Adjust flow priority mapping to adapt to the new hardware support
> > > > for 16 Verbs flow priorities:
> > > > 0-3: RTE FLOW tunnel rule
> > > > 4-7: RTE FLOW non-tunnel rule
> > > > 8-15: PMD control flow
> > >
> > > This commit log is misleading: the number of priorities depends on
> > > the Mellanox OFED installed; it is not yet available in the upstream
> > > Linux kernel nor in the current Mellanox OFED GA.
> > >
> > > What happens when that number of priorities is not available? Does
> > > it remove functionality?  Will it collide with other flows?
> >
> > If 16 priorities are not available, it simply behaves as with 8 priorities.
> 
> It is not described in the commit log, please add it.
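Sure. For reference, the fallback can be sketched as follows (a hypothetical standalone illustration of the behaviour described above, not the patch code):

```c
/* Hypothetical sketch (not the patch code) of the fallback described
 * above: probe a drop flow at the highest of 16 priorities; if the
 * probe fails, fall back to behaving as with 8 priorities. */
#include <stdint.h>

/* try_create_drop_flow stands in for creating a Verbs drop flow on the
 * drop queue at the given priority; nonzero means success. */
static uint32_t
detect_verb_priorities(int (*try_create_drop_flow)(uint32_t priority))
{
	uint32_t verb_priorities = 16;

	if (!try_create_drop_flow(verb_priorities - 1))
		verb_priorities /= 2; /* only 8 priorities supported */
	return verb_priorities;
}

/* Example probes for illustration only. */
static int probe_16_ok(uint32_t prio) { (void)prio; return 1; }
static int probe_16_fail(uint32_t prio) { (void)prio; return 0; }
```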
> 
> > > > Signed-off-by: Xueming Li <xuemingl at mellanox.com>
> <snip/>
> > > >  	},
> > > >  	[HASH_RXQ_ETH] = {
> > > >  		.hash_fields = 0,
> > > >  		.dpdk_rss_hf = 0,
> > > > -		.flow_priority = 3,
> > > > +		.flow_priority = 2,
> > > >  	},
> > > >  };
> > >
> > > If the number of priorities remains 8, you are removing the tunnel
> > > flow priority introduced by commit 749365717f5c ("net/mlx5: change
> > > tunnel flow priority").
> > >
> > > Please keep this functionality when this patch fails to get the
> > > expected
> > > 16 Verbs priorities.
> >
> > The priority shift differs in the 16-priority scenario, so I changed
> > it to a calculation.  In mlx5_flow_priorities_detect(), the priority
> > shift will be 1 with 8 priorities and 4 with 16 priorities.  Please
> > refer to the changes in mlx5_flow_update_priority() as well.
> 
> Please light my lamp, I don't see it...

Sorry, please refer to priv->config.flow_priority_shift.

> 
> <snip/>
> > > >  static void
> > > > -mlx5_flow_update_priority(struct mlx5_flow_parse *parser,
> > > > +mlx5_flow_update_priority(struct rte_eth_dev *dev,
> > > > +			  struct mlx5_flow_parse *parser,
> > > >  			  const struct rte_flow_attr *attr)  {
> > > > +	struct priv *priv = dev->data->dev_private;
> > > >  	unsigned int i;
> > > > +	uint16_t priority;
> > > >
> > > > +	if (priv->config.flow_priority_shift == 1)
> > > > +		priority = attr->priority * MLX5_VERBS_FLOW_PRIO_4;
> > > > +	else
> > > > +		priority = attr->priority * MLX5_VERBS_FLOW_PRIO_8;
> > > > +	if (!parser->inner)
> > > > +		priority += priv->config.flow_priority_shift;

Here, for a non-tunnel flow, the priority is lowered (numerically increased) by 1 with 8 priorities, and by 4 otherwise.
I'll append a comment here.
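Concretely, the intended computation can be sketched like this (a standalone illustration with constant values assumed from this thread, not the patch code):

```c
/* Standalone illustration of the priority computation discussed above;
 * constant values are assumptions taken from this thread. */
#include <stdint.h>

#define MLX5_VERBS_FLOW_PRIO_4 4
#define MLX5_VERBS_FLOW_PRIO_8 8

/*
 * shift plays the role of priv->config.flow_priority_shift: 1 when only
 * 8 Verbs priorities are available, 4 when 16 are.  inner is nonzero
 * for tunnel (inner) flows.
 */
static uint16_t
effective_priority(uint16_t attr_priority, int shift, int inner)
{
	uint16_t priority;

	if (shift == 1)
		priority = attr_priority * MLX5_VERBS_FLOW_PRIO_4;
	else
		priority = attr_priority * MLX5_VERBS_FLOW_PRIO_8;
	/* Non-tunnel flows are lowered by the shift (1 or 4). */
	if (!inner)
		priority += (uint16_t)shift;
	return priority;
}
```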

> > > >  	if (parser->drop) {
> > > > -		parser->queue[HASH_RXQ_ETH].ibv_attr->priority =
> > > > -			attr->priority +
> > > > -			hash_rxq_init[HASH_RXQ_ETH].flow_priority;
> > > > +		parser->queue[HASH_RXQ_ETH].ibv_attr->priority = priority +
> > > > +				hash_rxq_init[HASH_RXQ_ETH].flow_priority;
> > > >  		return;
> > > >  	}
> > > >  	for (i = 0; i != hash_rxq_init_n; ++i) {
> > > > -		if (parser->queue[i].ibv_attr) {
> > > > -			parser->queue[i].ibv_attr->priority =
> > > > -				attr->priority +
> > > > -				hash_rxq_init[i].flow_priority -
> > > > -				(parser->inner ? 1 : 0);
> > > > -		}
> > > > +		if (!parser->queue[i].ibv_attr)
> > > > +			continue;
> > > > +		parser->queue[i].ibv_attr->priority = priority +
> > > > +				hash_rxq_init[i].flow_priority;
> 
> The previous code was subtracting one from the table priorities, which
> started at 1.  In the new code I don't see it.
> 
> What I am missing?

Please refer to new comment above around variable "priority" calculation.

> 
> > > >  	}
> > > >  }
> > > >
> > > > @@ -1087,7 +1097,7 @@ mlx5_flow_convert(struct rte_eth_dev *dev,
> > > >  		.layer = HASH_RXQ_ETH,
> > > >  		.mark_id = MLX5_FLOW_MARK_DEFAULT,
> > > >  	};
> > > > -	ret = mlx5_flow_convert_attributes(attr, error);
> > > > +	ret = mlx5_flow_convert_attributes(dev, attr, error);
> > > >  	if (ret)
> > > >  		return ret;
> > > >  	ret = mlx5_flow_convert_actions(dev, actions, error, parser);
> > > > @@ -1158,7 +1168,7 @@ mlx5_flow_convert(struct rte_eth_dev *dev,
> > > >  	 */
> > > >  	if (!parser->drop)
> > > >  		mlx5_flow_convert_finalise(parser);
> > > > -	mlx5_flow_update_priority(parser, attr);
> > > > +	mlx5_flow_update_priority(dev, parser, attr);
> > > >  exit_free:
> > > >  	/* Only verification is expected, all resources should be released. */
> > > >  	if (!parser->create) {
> > > > @@ -2450,7 +2460,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
> > > >  	struct priv *priv = dev->data->dev_private;
> > > >  	const struct rte_flow_attr attr = {
> > > >  		.ingress = 1,
> > > > -		.priority = MLX5_CTRL_FLOW_PRIORITY,
> > > > +		.priority = priv->config.control_flow_priority,
> > > >  	};
> > > >  	struct rte_flow_item items[] = {
> > > >  		{
> > > > @@ -3161,3 +3171,50 @@ mlx5_dev_filter_ctrl(struct rte_eth_dev *dev,
> > > >  	}
> > > >  	return 0;
> > > >  }
> > > > +
> > > > +/**
> > > > + * Detect number of Verbs flow priorities supported.
> > > > + *
> > > > + * @param dev
> > > > + *   Pointer to Ethernet device.
> > > > + */
> > > > +void
> > > > +mlx5_flow_priorities_detect(struct rte_eth_dev *dev)
> > > > +{
> > > > +	struct priv *priv = dev->data->dev_private;
> > > > +	uint32_t verb_priorities = MLX5_VERBS_FLOW_PRIO_8 * 2;
> > > > +	struct {
> > > > +		struct ibv_flow_attr attr;
> > > > +		struct ibv_flow_spec_eth eth;
> > > > +		struct ibv_flow_spec_action_drop drop;
> > > > +	} flow_attr = {
> > > > +		.attr = {
> > > > +			.num_of_specs = 2,
> > > > +			.priority = verb_priorities - 1,
> > > > +		},
> > > > +		.eth = {
> > > > +			.type = IBV_FLOW_SPEC_ETH,
> > > > +			.size = sizeof(struct ibv_flow_spec_eth),
> > > > +		},
> > > > +		.drop = {
> > > > +			.size = sizeof(struct ibv_flow_spec_action_drop),
> > > > +			.type = IBV_FLOW_SPEC_ACTION_DROP,
> > > > +		},
> > > > +	};
> > > > +	struct ibv_flow *flow;
> > > > +
> > > > +	if (priv->config.control_flow_priority)
> > > > +		return;
> > > > +	flow = mlx5_glue->create_flow(priv->flow_drop_queue->qp,
> > > > +				      &flow_attr.attr);
> > > > +	if (flow) {
> > > > +		priv->config.flow_priority_shift =
> > > > +			MLX5_VERBS_FLOW_PRIO_8 / 2;
> > > > +		claim_zero(mlx5_glue->destroy_flow(flow));
> > > > +	} else {
> > > > +		priv->config.flow_priority_shift = 1;
> > > > +		verb_priorities = verb_priorities / 2;
> > > > +	}
> > > > +	priv->config.control_flow_priority = 1;
> > > > +	DRV_LOG(INFO, "port %u Verbs flow priorities: %d",
> > > > +		dev->data->port_id, verb_priorities);
> > > > +}
> > > > diff --git a/drivers/net/mlx5/mlx5_trigger.c
> > > > b/drivers/net/mlx5/mlx5_trigger.c index 6bb4ffb14..d80a2e688
> > > > 100644
> > > > --- a/drivers/net/mlx5/mlx5_trigger.c
> > > > +++ b/drivers/net/mlx5/mlx5_trigger.c
> > > > @@ -148,12 +148,6 @@ mlx5_dev_start(struct rte_eth_dev *dev)
> > > >  	int ret;
> > > >
> > > >  	dev->data->dev_started = 1;
> > > > -	ret = mlx5_flow_create_drop_queue(dev);
> > > > -	if (ret) {
> > > > -		DRV_LOG(ERR, "port %u drop queue allocation failed: %s",
> > > > -			dev->data->port_id, strerror(rte_errno));
> > > > -		goto error;
> > > > -	}
> > > >  	DRV_LOG(DEBUG, "port %u allocating and configuring hash Rx queues",
> > > >  		dev->data->port_id);
> > > >  	rte_mempool_walk(mlx5_mp2mr_iter, priv);
> > > > @@ -202,7 +196,6 @@ mlx5_dev_start(struct rte_eth_dev *dev)
> > > >  	mlx5_traffic_disable(dev);
> > > >  	mlx5_txq_stop(dev);
> > > >  	mlx5_rxq_stop(dev);
> > > > -	mlx5_flow_delete_drop_queue(dev);
> > > >  	rte_errno = ret; /* Restore rte_errno. */
> > > >  	return -rte_errno;
> > > >  }
> > > > @@ -237,7 +230,6 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
> > > >  	mlx5_rxq_stop(dev);
> > > >  	for (mr = LIST_FIRST(&priv->mr); mr; mr = LIST_FIRST(&priv->mr))
> > > >  		mlx5_mr_release(mr);
> > > > -	mlx5_flow_delete_drop_queue(dev);
> > > >  }
> > > >
> > > >  /**
> > > > --
> > > > 2.13.3
> > >
> > > I have a few concerns on this: mlx5_pci_probe() will also probe any
> > > underlying Verbs device, and in the near future the representors
> > > associated with a VF.
> > > Such detection should only be done once, by the PF.  I also wonder
> > > whether it is possible to make such a drop action in a representor
> > > directly using Verbs.
> >
> > Then there should be some work to disable flows in representors; that
> > is supposed to cover this.
> 
> The code probing another Verbs device has been present since this PMD
> first entered the DPDK tree; you must respect the code already there.
> This request is not directly related to a new feature but to an existing
> one, the representors being just an example.
> 
> This detection should be only done once and not for each of them.

Could you please elaborate on "under layer verbs device" and how to
determine the dependency on the PF?  Is there a probe order between them?

BTW, the VF representor code seems to exist in 17.11, but not upstream.

> 
> > > Another concern: this patch will be reverted at some point, when
> > > those 16 priorities are always available.  It will be easier to
> > > remove a detection function than to search for all these
> > > modifications.
> > >
> > > I would suggest having a standalone mlx5_flow_priorities_detect()
> > > which creates and deletes all resources needed for this detection.
> >
> > There is an upcoming feature to support more than 16 priorities, so
> > auto-detection will be kept, IMHO.
> 
> Only until the final priority values are backported to all the kernels
> we support.  You are not looking far enough into the future.
> 
> > Besides, there would be a bundle of resource creation and removal in
> > such a standalone function; I'm not sure it is worth duplicating them.
> > Please refer to mlx5_flow_create_drop_queue().
> 
> You misunderstood: I am not asking you not to use the default drop
> queues, but rather, instead of building rte_flow attributes, items and
> actions, to build the Verbs specification directly on the stack.  That
> is faster than a bunch of conversions (relying on malloc) from rte_flow
> to Verbs when you know exactly what is needed, i.e. one spec.

Sorry, still confused: mlx5_flow_priorities_detect() invokes
ibv_create_flow()/ibv_destroy_flow() directly, not rte_flow stuff, with no
malloc at all.  BTW, the mlx5 flow API bypasses Verbs flows in offline
mode, so we can't use it to create flows at that stage.

> 
> Thanks,
> 
> --
> Nélio Laranjeiro
> 6WIND

