[dpdk-dev] [PATCH 3/3] examples/vhost: add new vhost example

Xie, Huawei huawei.xie at intel.com
Wed Aug 6 07:34:44 CEST 2014



> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen at networkplumber.org]
> Sent: Wednesday, August 06, 2014 11:33 AM
> To: Xie, Huawei
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 3/3] examples/vhost: add new vhost example
> 
> 
> > +static const uint16_t external_pkt_default_vlan_tag = 2000;
> > +const uint16_t vlan_tags[] = {
> > +	1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007,
> > +	1008, 1009, 1010, 1011,	1012, 1013, 1014, 1015,
> > +	1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023,
> > +	1024, 1025, 1026, 1027, 1028, 1029, 1030, 1031,
> > +	1032, 1033, 1034, 1035, 1036, 1037, 1038, 1039,
> > +	1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047,
> > +	1048, 1049, 1050, 1051, 1052, 1053, 1054, 1055,
> > +	1056, 1057, 1058, 1059, 1060, 1061, 1062, 1063,
> > +};
> 
> Why pre-compute table if it is just: vlan_tag = n + 1000?
This is inherited from the old vhost example. For demonstration purposes, I feel it is OK. Besides, the purpose of this patch is to refactor the example on top of the vhost library, so I would prefer that subsequent patches fix that kind of problem.
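For reference, if a follow-up patch did drop the table, the computed form would just be a small helper along these lines (illustrative sketch only, not part of this patch; the function name is made up):

	/* Hypothetical replacement for the vlan_tags[] table above:
	 * device i gets tag 1000 + i, same values the table holds today. */
	static uint16_t
	vdev_vlan_tag(uint32_t dev_id)
	{
		return (uint16_t)(1000 + dev_id);
	}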
> 
> 
> > +
> > +/* Per-device statistics struct */
> > +struct device_statistics {
> > +	uint64_t tx_total;
> > +	rte_atomic64_t rx_total_atomic;
> > +	uint64_t rx_total;
> > +	uint64_t tx;
> > +	rte_atomic64_t rx_atomic;
> > +	uint64_t rx;
> > +} __rte_cache_aligned;
> > +struct device_statistics dev_statistics[MAX_DEVICES];
> 
> Doing per-core statistics would be faster than using atomic's
> which have implicit lock.
For software vm2vm mode there is contention, because two cores could enqueue to the same virtio ring concurrently, so a device's rx counters may be updated from more than one core.
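For comparison, a per-core layout along the lines you suggest might look roughly like the sketch below (names are made up; it assumes MAX_DEVICES from the example and RTE_MAX_LCORE from DPDK; the display loop would sum the per-lcore slots, and the virtio ring enqueue itself would still need its own synchronization):

	struct dev_stats_percore {
		uint64_t rx;
		uint64_t tx;
	} __rte_cache_aligned;

	/* one row per lcore, summed only when statistics are printed */
	static struct dev_stats_percore
		stats_percore[RTE_MAX_LCORE][MAX_DEVICES];

	static void
	count_rx(uint32_t dev_id, uint64_t pkts)
	{
		/* each lcore bumps its own slot, no atomics on the fast path */
		stats_percore[rte_lcore_id()][dev_id].rx += pkts;
	}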
> 
> > +/*
> > + * Builds up the correct configuration for VMDQ VLAN pool map
> > + * according to the pool & queue limits.
> > + */
> > +static inline int
> > +get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_devices)
> > +{
> > +	struct rte_eth_vmdq_rx_conf conf;
> > +	unsigned i;
> > +
> > +	memset(&conf, 0, sizeof(conf));
> > +	conf.nb_queue_pools = (enum rte_eth_nb_pools)num_devices;
> > +	conf.nb_pool_maps = num_devices;
> > +	conf.enable_loop_back =
> > +		vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.enable_loop_back;
> > +
> > +	for (i = 0; i < conf.nb_pool_maps; i++) {
> > +		conf.pool_map[i].vlan_id = vlan_tags[i];
> > +		conf.pool_map[i].pools = (1UL << i);
> > +	}
> > +
> > +	(void)(rte_memcpy(eth_conf, &vmdq_conf_default, sizeof(*eth_conf)));
> > +	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
> > +		   sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
> > +	return 0;
> > +}
> 
> If function always returns 0 just make it void.
> No point in making non-fastpath code inline.
> The cast to (void) is unnecessary and just makes things ugly.
> Why not use structure assignment rather than memcpy() which is not type safe?
Agree. As stated above, I would prefer that subsequent patches fix those issues inherited from the old example. What do you think?
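To make that follow-up concrete, the cleaned-up function would look roughly like this (same logic as the code quoted above, just with a void return, no inline, and struct assignment instead of rte_memcpy()):

	static void
	get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_devices)
	{
		struct rte_eth_vmdq_rx_conf conf;
		unsigned i;

		memset(&conf, 0, sizeof(conf));
		conf.nb_queue_pools = (enum rte_eth_nb_pools)num_devices;
		conf.nb_pool_maps = num_devices;
		conf.enable_loop_back =
			vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.enable_loop_back;

		for (i = 0; i < conf.nb_pool_maps; i++) {
			conf.pool_map[i].vlan_id = vlan_tags[i];
			conf.pool_map[i].pools = (1UL << i);
		}

		/* struct assignment is type safe, unlike rte_memcpy() */
		*eth_conf = vmdq_conf_default;
		eth_conf->rx_adv_conf.vmdq_rx_conf = conf;
	}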

