[dpdk-dev] [PATCH] vhost: add support for dynamic vhost PMD creation

Yuanhan Liu yuanhan.liu at linux.intel.com
Mon May 9 23:31:24 CEST 2016


On Thu, May 05, 2016 at 07:11:09PM +0100, Ferruh Yigit wrote:
> Add rte_eth_from_vhost() API to create vhost PMD dynamically from
> applications.

This sounds like a good idea to me. It would be better if you could
name a concrete use case for it in the commit log, though.
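
For instance, a minimal usage sketch like the one below could go into
the commit log (the device name, socket path and mempool parameters
here are only placeholders, not taken from the patch):

	struct rte_mempool *mb_pool;
	char iface[] = "/tmp/vhost0.sock";	/* placeholder socket path */
	int port_id;

	/* mbuf pool used by the Rx queues of the new port */
	mb_pool = rte_pktmbuf_pool_create("vhost_mb_pool", 8192, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	if (mb_pool == NULL)
		rte_exit(EXIT_FAILURE, "failed to create mbuf pool\n");

	/* create the vhost port at runtime instead of via --vdev */
	port_id = rte_eth_from_vhost("eth_vhost0", iface,
			rte_socket_id(), mb_pool);
	if (port_id < 0)
		rte_exit(EXIT_FAILURE, "failed to create vhost port\n");

	/* the port is then configured/started like any other ethdev,
	 * e.g. with rte_eth_dev_configure() and rte_eth_dev_start() */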

> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit at intel.com>
> ---
>  drivers/net/vhost/rte_eth_vhost.c           | 117 ++++++++++++++++++++++++++++
>  drivers/net/vhost/rte_eth_vhost.h           |  19 +++++
>  drivers/net/vhost/rte_pmd_vhost_version.map |   7 ++
>  3 files changed, 143 insertions(+)
> 
> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> index 310cbef..c860ab8 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -796,6 +796,123 @@ error:
>  	return -1;
>  }
>  
> +static int
> +rte_eth_from_vhost_create(const char *name, char *iface_name,

It's not a public function, so don't name it with the "rte_" prefix.

> +		const unsigned int numa_node, struct rte_mempool *mb_pool)
> +{
> +	struct rte_eth_dev_data *data = NULL;
> +	struct rte_eth_dev *eth_dev = NULL;
> +	struct pmd_internal *internal = NULL;
> +	struct internal_list *list;
> +	int nb_queues = 1;
> +	uint16_t nb_rx_queues = nb_queues;
> +	uint16_t nb_tx_queues = nb_queues;
> +	struct vhost_queue *vq;
> +	int i;
> +
> +	int port_id = eth_dev_vhost_create(name, iface_name, nb_queues,
> +			numa_node);
> +
> +	if (port_id < 0)
> +		return -1;
> +
> +	eth_dev = &rte_eth_devices[port_id];
> +	data = eth_dev->data;
> +	internal = data->dev_private;
> +	list = find_internal_resource(internal->iface_name);
> +
> +	data->rx_queues = rte_zmalloc_socket(name,
> +			sizeof(void *) * nb_rx_queues, 0, numa_node);
> +	if (data->rx_queues == NULL)
> +		goto error;
> +
> +	data->tx_queues = rte_zmalloc_socket(name,
> +			sizeof(void *) * nb_tx_queues, 0, numa_node);
> +	if (data->tx_queues == NULL)
> +		goto error;
> +
> +	for (i = 0; i < nb_rx_queues; i++) {
> +		vq = rte_zmalloc_socket(NULL, sizeof(struct vhost_queue),
> +				RTE_CACHE_LINE_SIZE, numa_node);
> +		if (vq == NULL) {
> +			RTE_LOG(ERR, PMD,
> +				"Failed to allocate memory for rx queue\n");
> +			goto error;
> +		}
> +		vq->mb_pool = mb_pool;
> +		vq->virtqueue_id = i * VIRTIO_QNUM + VIRTIO_TXQ;
> +		data->rx_queues[i] = vq;
> +	}

I would invoke eth_rx_queue_setup() here, to remove the duplicated
effort of queue allocation and initialization.
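
Something along these lines (only a sketch; it assumes the existing
eth_rx_queue_setup() callback in rte_eth_vhost.c keeps its current
prototype):

	/* let the PMD's own callback do the allocation/initialization */
	for (i = 0; i < nb_rx_queues; i++) {
		if (eth_rx_queue_setup(eth_dev, i, 0, numa_node,
				NULL, mb_pool) != 0)
			goto error;
	}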

> +
> +	for (i = 0; i < nb_tx_queues; i++) {
> +		vq = rte_zmalloc_socket(NULL, sizeof(struct vhost_queue),
> +				RTE_CACHE_LINE_SIZE, numa_node);
> +		if (vq == NULL) {
> +			RTE_LOG(ERR, PMD,
> +				"Failed to allocate memory for tx queue\n");
> +			goto error;
> +		}
> +		vq->mb_pool = mb_pool;

A Tx queue doesn't need an mbuf pool. And, ditto, call
eth_tx_queue_setup() instead.
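
Ditto, a sketch (again assuming the current eth_tx_queue_setup()
prototype):

	/* no mbuf pool is needed on the Tx side */
	for (i = 0; i < nb_tx_queues; i++) {
		if (eth_tx_queue_setup(eth_dev, i, 0, numa_node, NULL) != 0)
			goto error;
	}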


> +int
> +rte_eth_from_vhost(const char *name, char *iface_name,
> +		const unsigned int numa_node, struct rte_mempool *mb_pool)

That would make this API very limited. Assume we want to extend the
vhost PMD in the future; we could easily do that by adding a few more
vdev options: you could reference my patch[0], which adds the client
and reconnect options. But here you hardcode all the parameters that
are needed so far to create a vhost PMD eth device; adding anything
new would imply an API breakage in the future.

So, how about passing the vdev options string as the argument of this
API? That would be friendly to future extension without breaking the
API.

[0]: http://dpdk.org/dev/patchwork/patch/12608/
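
In other words, something like the following (a hypothetical prototype,
just to illustrate the idea; the parameter name "params" is made up):

	/*
	 * The vdev options string carries everything else (iface,
	 * queues, client, reconnect, ...), so adding a new option
	 * later doesn't break the API.
	 */
	int
	rte_eth_from_vhost(const char *name, const char *params,
			const unsigned int numa_node,
			struct rte_mempool *mb_pool);

	/* e.g. params = "iface=/tmp/sock0,queues=1,client=1" */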

> +/**
> + * API to create vhost PMD
> + *
> + * @param name
> + *  Vhost device name
> + * @param iface_name
> + *  Vhost interface name
> + * @param numa_node
> + *  Socket id
> + * @param mb_pool
> + *  Memory pool
> + *
> + * @return
> + *  - On success, port_id.
> + *  - On failure, a negative value.
> + */

Hmmm, too simple: the parameters and the return value deserve a bit
more detail.
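
Something a bit more descriptive would help, e.g. (only a suggestion,
the wording is mine):

	/**
	 * Create a vhost PMD ethdev at runtime.
	 *
	 * @param name
	 *   Name of the new ethdev; must be unique.
	 * @param iface_name
	 *   Vhost interface name, i.e. the vhost-user socket path the
	 *   device is bound to.
	 * @param numa_node
	 *   Socket id the device data and queues are allocated on.
	 * @param mb_pool
	 *   Mempool the Rx queues allocate mbufs from.
	 *
	 * @return
	 *   - On success, the port id of the new device.
	 *   - On failure, a negative value.
	 */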

	--yliu

