[dpdk-dev] [PATCH v4 27/29] l3fwd-graph: add graph config and main loop

Nithin Dabilpuram ndabilpuram at marvell.com
Fri Apr 10 11:29:06 CEST 2020


On Fri, Apr 10, 2020 at 01:04:00AM +0200, Andrzej Ostruszka wrote:
> On 4/5/20 10:56 AM, jerinj at marvell.com wrote:
> > From: Nithin Dabilpuram <ndabilpuram at marvell.com>
> > 
> > Add graph creation, configuration logic and graph main loop.
> > This graph main loop is run on every slave lcore and calls
> > rte_graph_walk() to walk over the lcore-specific rte_graph.
> > The master core accumulates and prints graph walk stats of all
> > the lcores' graphs.
> > 
> > Signed-off-by: Nithin Dabilpuram <ndabilpuram at marvell.com>
> > ---
> [...]
> > @@ -826,11 +936,26 @@ main(int argc, char **argv)
> >  					 "port=%d\n",
> >  					 ret, portid);
> >  
> > +			/* Add this queue node to its graph */
> > +			snprintf(qconf->rx_queue_list[queue].node_name,
> > +				 RTE_NODE_NAMESIZE, "ethdev_rx-%u-%u", portid,
> > +				 queueid);
> > +		}
> > +
> > +		/* Alloc a graph to this lcore only if source exists  */
> > +		if (qconf->n_rx_queue) {
> > +			qconf->graph_id = nb_graphs;
> 
> See below for a comment related to that.
> 
> > +			nb_graphs++;
> >  		}
> >  	}
> >  
> >  	printf("\n");
> >  
> > +	/* Ethdev node config, skip rx queue mapping */
> > +	ret = rte_node_eth_config(ethdev_conf, nb_conf, nb_graphs);
> > +	if (ret)
> > +		rte_exit(EXIT_FAILURE, "rte_node_eth_config: err=%d\n", ret);
> > +
> >  	/* Start ports */
> >  	RTE_ETH_FOREACH_DEV(portid)
> >  	{
> > @@ -858,6 +983,119 @@ main(int argc, char **argv)
> >  
> >  	check_all_ports_link_status(enabled_port_mask);
> >  
> > +	/* Graph Initialization */
> > +	memset(&graph_conf, 0, sizeof(graph_conf));
> > +	graph_conf.node_patterns = node_patterns;
> > +	nb_patterns = RTE_DIM(node_patterns);
> > +
> > +	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
> > +		rte_graph_t graph_id;
> > +		rte_edge_t i;
> > +
> > +		if (rte_lcore_is_enabled(lcore_id) == 0)
> > +			continue;
> > +
> > +		qconf = &lcore_conf[lcore_id];
> > +
> > +		/* Skip graph creation if no source exists */
> > +		if (!qconf->n_rx_queue)
> > +			continue;
> > +
> > +		/* Add rx node patterns of this lcore */
> > +		for (i = 0; i < qconf->n_rx_queue; i++) {
> > +			graph_conf.node_patterns[nb_patterns + i] =
> > +				qconf->rx_queue_list[i].node_name;
> > +		}
> > +
> > +		graph_conf.nb_node_patterns = nb_patterns + i;
> > +		graph_conf.socket_id = rte_lcore_to_socket_id(lcore_id);
> > +
> > +		snprintf(qconf->name, sizeof(qconf->name), "worker_%u",
> > +			 lcore_id);
> > +
> > +		graph_id = rte_graph_create(qconf->name, &graph_conf);
> > +		if (graph_id != qconf->graph_id)
> > +			rte_exit(EXIT_FAILURE,
> > +				 "rte_graph_create(): graph_id=%d not "
> > +				 " as expected for lcore %u(%u\n",
> > +				 graph_id, lcore_id, qconf->graph_id);
> 
> Should application be checking graph implementation?  Maybe just check
> that it is not "invalid" and here store it to qconf->graph_id?

Ack, will fix it in V5.
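Something along these lines (untested sketch of the planned V5 change,
checking against RTE_GRAPH_ID_INVALID from rte_graph.h instead of a
precomputed id):

		graph_id = rte_graph_create(qconf->name, &graph_conf);
		if (graph_id == RTE_GRAPH_ID_INVALID)
			rte_exit(EXIT_FAILURE,
				 "rte_graph_create(): graph creation failed"
				 " for lcore %u\n", lcore_id);

		qconf->graph_id = graph_id;
		qconf->graph = rte_graph_lookup(qconf->name);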
> 
> > +
> > +		qconf->graph = rte_graph_lookup(qconf->name);
> > +		if (!qconf->graph)
> > +			rte_exit(EXIT_FAILURE,
> > +				 "rte_graph_lookup(): graph %s not found\n",
> > +				 qconf->name);
> > +	}
> > +
> > +	memset(&rewrite_data, 0, sizeof(rewrite_data));
> > +	rewrite_len = sizeof(rewrite_data);
> > +
> > +	/* Add route to ip4 graph infra */
> > +	for (i = 0; i < IPV4_L3FWD_LPM_NUM_ROUTES; i++) {
> > +		char route_str[INET6_ADDRSTRLEN * 4];
> > +		char abuf[INET6_ADDRSTRLEN];
> > +		struct in_addr in;
> > +		uint32_t dst_port;
> > +		uint16_t next_hop;
> > +
> > +		/* Skip unused ports */
> > +		if ((1 << ipv4_l3fwd_lpm_route_array[i].if_out &
> > +		     enabled_port_mask) == 0)
> > +			continue;
> > +
> > +		dst_port = ipv4_l3fwd_lpm_route_array[i].if_out;
> > +		next_hop = i;
> 
> What is the difference between next_hop and i?  It looks to me like they
> both have the same type and value.  So maybe instead of another object
> just add comment to route_add() below?  No strong feeling here so you
> can ignore this comment.

Here there is no difference, but I just wanted to convey that next_hop is a
user-chosen identifier: it is used to add rewrite data, and the same id is
then used to link a route to that next hop. In reality there could be
multiple routes pointing to the same next hop (gateway).
I'll change it to 'i' and add a comment describing this in V5.
> 
> > +
> > +		in.s_addr = htonl(ipv4_l3fwd_lpm_route_array[i].ip);
> > +		snprintf(route_str, sizeof(route_str), "%s / %d (%d)",
> > +			 inet_ntop(AF_INET, &in, abuf, sizeof(abuf)),
> > +			 ipv4_l3fwd_lpm_route_array[i].depth,
> > +			 ipv4_l3fwd_lpm_route_array[i].if_out);
> > +
> > +		ret = rte_node_ip4_route_add(
> > +			ipv4_l3fwd_lpm_route_array[i].ip,
> > +			ipv4_l3fwd_lpm_route_array[i].depth, next_hop,
> > +			RTE_NODE_IP4_LOOKUP_NEXT_REWRITE);
> > +
> 
> With regards
> Andrzej Ostruszka

