[dpdk-dev] [PATCH v1] vhost: support cross page buf in async data path
Maxime Coquelin
maxime.coquelin at redhat.com
Tue Jul 21 10:35:24 CEST 2020
Hi Patrick,
On 7/21/20 4:57 AM, Fu, Patrick wrote:
> Hi Maxime,
>
>> -----Original Message-----
>> From: Maxime Coquelin <maxime.coquelin at redhat.com>
>> Sent: Tuesday, July 21, 2020 12:40 AM
>> To: Fu, Patrick <patrick.fu at intel.com>; dev at dpdk.org; Xia, Chenbo
>> <chenbo.xia at intel.com>
>> Subject: Re: [PATCH v1] vhost: support cross page buf in async data path
>>
>> The title could be improved; it is not very clear IMHO.
> How about:
> vhost: fix async copy failure on buffers crossing page boundary
>
>> On 7/20/20 4:52 AM, patrick.fu at intel.com wrote:
>>> From: Patrick Fu <patrick.fu at intel.com>
>>>
>>> Async copy fails when a ring buffer crosses two physical pages. This patch
>>> fixes the failure by letting copies occur in sync mode when cross-page
>>> buffers are given.
>>
>> Wouldn't it be possible to have the buffer split into two iovecs?
> Technically we can do that; however, it would also introduce significant overhead:
> - overhead from adding additional logic in the vhost async data path to handle the case
> - overhead from the DMA device consuming 2 iovecs
> On average, I don't think DMA copy can benefit much for buffers that are split across multiple pages.
> CPU copy is a more suitable method.
I think we should try; that would make a cleaner implementation. I don't
think having to fall back to sync mode is a good idea, because it adds
overhead on the CPU, which is what we try to avoid with this async mode.
Also, I am not convinced the overhead would be that significant, or at
least I hope not; otherwise it would mean this new path only performs
better because it takes a lot of shortcuts, like the vector path in the
Virtio PMD.
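The two-iovec approach discussed above could be sketched as follows. This is a minimal illustration only, assuming a 4 KiB page size; the names `split_at_page_boundary` and `iovec_entry` are hypothetical and not taken from the DPDK sources:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Hypothetical descriptor for one physically contiguous copy segment. */
struct iovec_entry {
	uintptr_t addr;
	size_t len;
};

/*
 * Split a guest buffer [gpa, gpa + len) into at most two vectors at the
 * page boundary, so that each vector stays within a single page and can
 * be translated/handed to the DMA engine independently.
 * Returns the number of vectors filled (1 or 2).
 */
static int
split_at_page_boundary(uintptr_t gpa, size_t len, struct iovec_entry vec[2])
{
	/* Bytes remaining until the end of the page containing gpa. */
	size_t first = PAGE_SIZE - (gpa & (PAGE_SIZE - 1));

	if (len <= first) {
		/* Buffer fits in one page: a single vector suffices. */
		vec[0].addr = gpa;
		vec[0].len = len;
		return 1;
	}

	/* Buffer crosses the boundary: one vector per page. */
	vec[0].addr = gpa;
	vec[0].len = first;
	vec[1].addr = gpa + first;
	vec[1].len = len - first;
	return 2;
}
```

A cross-page buffer (e.g. 200 bytes starting 96 bytes before a page boundary) would yield two segments of 96 and 104 bytes, each page-contiguous, at the cost of one extra descriptor for the DMA device.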
Regards,
Maxime
>
>>> Fixes: cd6760da1076 ("vhost: introduce async enqueue for split ring")
>>>
>>> Signed-off-by: Patrick Fu <patrick.fu at intel.com>
>>> ---
>>> lib/librte_vhost/virtio_net.c | 12 +++---------
>>> 1 file changed, 3 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/lib/librte_vhost/virtio_net.c
>>> b/lib/librte_vhost/virtio_net.c index 1d0be3dd4..44b22a8ad 100644
>>> --- a/lib/librte_vhost/virtio_net.c
>>> +++ b/lib/librte_vhost/virtio_net.c
>>> @@ -1071,16 +1071,10 @@ async_mbuf_to_desc(struct virtio_net *dev,
>> struct vhost_virtqueue *vq,
>>> }
>>>
>>> cpy_len = RTE_MIN(buf_avail, mbuf_avail);
>>> + hpa = (void *)(uintptr_t)gpa_to_hpa(dev,
>>> + buf_iova + buf_offset, cpy_len);
>>>
>>> - if (unlikely(cpy_len >= cpy_threshold)) {
>>> - hpa = (void *)(uintptr_t)gpa_to_hpa(dev,
>>> - buf_iova + buf_offset, cpy_len);
>>> -
>>> - if (unlikely(!hpa)) {
>>> - error = -1;
>>> - goto out;
>>> - }
>>> -
>>> + if (unlikely(cpy_len >= cpy_threshold && hpa)) {
>>> async_fill_vec(src_iovec + tvec_idx,
>>> (void *)(uintptr_t)rte_pktmbuf_iova_offset(m,
>>> mbuf_offset), cpy_len);
>>>
>