[dpdk-dev] [PATCH v4 2/2] vhost: Add VHOST PMD

Rich Lane rich.lane at bigswitch.com
Sat Nov 21 01:15:41 CET 2015


On Thu, Nov 12, 2015 at 9:20 PM, Tetsuya Mukawa <mukawa at igel.co.jp> wrote:

> +static uint16_t
> +eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
> +{
>
...

> +
> +       /* Enqueue packets to guest RX queue */
> +       nb_tx = rte_vhost_enqueue_burst(r->device,
> +                       r->virtqueue_id, bufs, nb_bufs);
> +
> +       r->tx_pkts += nb_tx;
> +       r->err_pkts += nb_bufs - nb_tx;
>

I don't think physical NIC PMDs like ixgbe and i40e count a full TX
queue as an error, although the af_packet, pcap, and ring PMDs do. I'd
suggest not counting it as an error here either: a full queue is a
common and expected condition, and the application might just retry the
TX later.

Are the byte counts left out because updating them would be a
performance hit? The cost seems minimal given how much we're already
touching each packet.
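
Something along these lines is what I have in mind (just an untested
sketch; missed_pkts and tx_bytes would be new counters, and the other
struct/field names are taken from the patch):

static uint16_t
eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	struct vhost_queue *r = q;	/* queue struct name assumed */
	uint16_t i, nb_tx;

	/* Enqueue packets to guest RX queue */
	nb_tx = rte_vhost_enqueue_burst(r->device,
			r->virtqueue_id, bufs, nb_bufs);

	r->tx_pkts += nb_tx;
	/* A full guest ring is expected, so count the shortfall as
	 * "missed" rather than as an error (hypothetical counter). */
	r->missed_pkts += nb_bufs - nb_tx;

	/* Bytes are cheap to count while the mbufs are still hot
	 * (hypothetical counter). */
	for (i = 0; i < nb_tx; i++)
		r->tx_bytes += bufs[i]->pkt_len;

	/* mbuf handling elided, as in the snippet quoted above */
	return nb_tx;
}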


> +static int
> +new_device(struct virtio_net *dev)
> +{
>
...

> +
> +       if ((dev->virt_qp_nb < internal->nb_rx_queues) ||
> +                       (dev->virt_qp_nb < internal->nb_tx_queues)) {
> +               RTE_LOG(INFO, PMD, "Not enough queues\n");
> +               return -1;
> +       }
>

Would it make sense to take the minimum of the guest and host queue pairs
and use that below in place of nb_rx_queues/nb_tx_queues? That way the host
can support a large maximum number of queues and each guest can choose how
many it wants to use. The host application will receive vring_state_changed
callbacks for each queue the guest activates.
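
Roughly like this (untested sketch; RTE_MIN is from rte_common.h, and
the field names are taken from the patch):

	uint16_t nb_rxq = RTE_MIN(internal->nb_rx_queues,
			(uint16_t)dev->virt_qp_nb);
	uint16_t nb_txq = RTE_MIN(internal->nb_tx_queues,
			(uint16_t)dev->virt_qp_nb);

	/* ... then use nb_rxq/nb_txq below instead of
	 * internal->nb_rx_queues/nb_tx_queues when setting up the
	 * rings ... */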

