[dpdk-dev] [PATCH v5 0/6] Virtio-net PMD: QEMU QTest extension for container

Tetsuya Mukawa mukawa at igel.co.jp
Mon Jun 6 11:28:02 CEST 2016


On 2016/06/06 17:03, Tan, Jianfeng wrote:
> Hi,
> 
> 
> On 6/6/2016 1:10 PM, Tetsuya Mukawa wrote:
>> Hi Yuanhan,
>>
>> Sorry for the late reply.
>>
>> On 2016/06/03 13:17, Yuanhan Liu wrote:
>>> On Thu, Jun 02, 2016 at 06:30:18PM +0900, Tetsuya Mukawa wrote:
>>>> Hi Yuanhan,
>>>>
>>>> On 2016/06/02 16:31, Yuanhan Liu wrote:
>>>>> But still, I'd ask: do we really need two virtio solutions for containers?
>>>> I appreciate your comments.
>>> No, I appreciate your effort in contributing to DPDK! The vhost-pmd
>>> stuff is just brilliant!
>>>
>>>> Let me take some time to discuss it with our team.
>>> I'm wondering whether we could have just one solution. IMO, the
>>> drawback of having two (quite different) solutions might outweigh
>>> the benefit they bring. Say, it might just confuse users.
>> I agree with this.
>> If we have two solutions, they would confuse DPDK users.
>>
>>> OTOH, I'm wondering whether you could adapt to Jianfeng's solution? If
>>> not, what are the missing parts, and could we fix them? I'm thinking
>>> having one unified solution will keep our energy/focus on one thing,
>>> making it better and better! Having two just splits the energy; it
>>> also introduces an extra maintenance burden.
>> Of course, I can basically adopt Jianfeng's solution.
>> Actually, his solution is almost the same as what I tried to implement at first.
>>
>> Here are the pros and cons of the two solutions as I see them.
>>
>> [Jianfeng's solution]
>> - Pros
>> No need to invoke a QEMU process.
>> - Cons
>> If the virtio-net specification changes, we need to implement the
>> changes ourselves.
> 
> As far as I can see, a change to the virtio-net specification will
> barely require any change here. The only part we care about is how the
> desc, avail, and used rings are laid out in memory, which is a very
> small part.

That's good news, because then we don't need to spend much effort
following the latest virtio-net specification.
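
For reference, the layout in question is roughly the following (a
simplified sketch based on the legacy virtio spec; the real
definitions live in the spec and in DPDK's
drivers/net/virtio/virtio_ring.h):

    #include <stdint.h>

    /* Simplified sketch of the legacy virtio ring layout. This is
     * essentially the only memory layout the driver and the vhost
     * backend must agree on. */
    struct vring_desc {
            uint64_t addr;   /* guest-physical address of the buffer */
            uint32_t len;    /* buffer length */
            uint16_t flags;  /* e.g. VRING_DESC_F_NEXT / _WRITE */
            uint16_t next;   /* index of the chained descriptor */
    };

    struct vring_avail {
            uint16_t flags;
            uint16_t idx;    /* driver bumps this after adding entries */
            uint16_t ring[]; /* descriptor indices offered to the device */
    };

    struct vring_used_elem {
            uint32_t id;     /* head of the consumed descriptor chain */
            uint32_t len;    /* total bytes written into the chain */
    };

    struct vring_used {
            uint16_t flags;
            uint16_t idx;    /* device bumps this after consuming entries */
            struct vring_used_elem ring[];
    };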

> 
> It's true that my solution currently depends heavily on the vhost-user
> protocol, which is defined in QEMU. I cannot see a big problem there so
> far.
> 
>>   Also, the LSC interrupt and control queue functions are not
>> supported yet.
>> I agree both functions may not be so important, and we can implement
>> them if we need them, but doing so takes effort.
> 
> LSC is really less important than the rxq interrupt (IMO). We don't
> know when the rxq interrupt for virtio will become available in QEMU,
> but we can get it sooner if we avoid using QEMU.
> 
> Actually, if the vhost backend is vhost-user (the main use case),
> current QEMU has limited control queue support, because it needs
> support from the vhost-user backend.
> 
> Let me add one more con of my solution:
> - Need to write additional logic to support other virtio devices (say,
> virtio-scsi). Would it be easier for Tetsuya's solution to do that?
> 

Probably, my solution will make that easier.
It already has enough facilities to access the io ports and PCI
configuration space of a virtio-scsi device in QEMU.
So, if you invoke QEMU with virtio-scsi, all you need to do is change
the PCI interface of the current virtio-scsi PMD.
(I am just assuming here that we have a virtio-scsi PMD.)
If such a virtio-scsi PMD works on QEMU, the same code should work with
only the PCI interface changed.
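
To give a concrete idea of that facility: qtest is a simple text
protocol over a unix socket, so an io port access boils down to
something like the helper below (a hypothetical sketch for
illustration only; 'qtest_inl' and its framing are made up here, and
the real accessors are what "qtest.h" in this series provides):

    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical helper: read a 32-bit value from an io port via
     * QEMU's qtest text protocol. 'sock' is a connected socket to a
     * QEMU started with the -qtest option. */
    static uint32_t
    qtest_inl(int sock, uint16_t port)
    {
            char buf[64];
            unsigned int val = 0;
            int len;

            /* qtest command format: "inl <addr>\n" */
            len = snprintf(buf, sizeof(buf), "inl 0x%x\n", (unsigned)port);
            if (write(sock, buf, len) < 0)
                    return 0;

            /* QEMU replies with "OK <value>\n" */
            len = read(sock, buf, sizeof(buf) - 1);
            if (len > 0) {
                    buf[len] = '\0';
                    sscanf(buf, "OK %x", &val);
            }
            return val;
    }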

>>
>> [My solution]
>> - Pros
>> The basic principle of my implementation is not to reinvent the wheel.
>> We can use QEMU's virtio-net device implementation, which means we
>> don't need to maintain a virtio-net device ourselves, and we can use
>> all the functions supported by QEMU's virtio-net device.
>> - Cons
>> Need to invoke a QEMU process.
> 
> Two more possible cons:
> a) This solution also needs to maintain the qtest utility, right?

True, but the qtest spec should be more stable than the virtio-net one.

> b) There's still the address arrangement restriction, right? We can use
> "--base-virtaddr=0x400000000" to relieve this issue, but what about
> when there are 2 or more devices? (By the way, is there still an
> address arrangement requirement on 32-bit systems?)

Both of our solutions are virtio-net drivers, and a vhost-user backend
driver needs to access the memory allocated by the virtio-net driver.
If an application has 2 devices, it means 2 vhost-user backend PMDs need
to access the same application memory, right?
Also, currently each virtio-net device has its own QEMU process.
So, I am not sure what the problem would be if we had 2 devices.

BTW, the 44-bit limitation comes from the current QEMU implementation
itself.
(Actually, if a modern virtio device is used, we should be able to
remove the restriction.)
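
For what it's worth, applying the workaround you mention is just a
matter of fixing the EAL mapping address at startup, roughly like this
(a minimal sketch; the address is the value quoted above):

    #include <rte_eal.h>

    int
    main(int argc, char **argv)
    {
            /* Map DPDK's hugepage memory at a fixed virtual address,
             * below the 44-bit limit that comes from the current QEMU
             * implementation (see above). */
            char *eal_argv[] = {
                    argv[0],
                    "--base-virtaddr=0x400000000",
            };

            if (rte_eal_init(2, eal_argv) < 0)
                    return -1;

            /* ... application setup continues here ... */
            return 0;
    }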

> c) Actually, IMO this solution is sensitive to any virtio spec change
> (io ports, PCI configuration space).

In that case, the virtio-net PMD itself will need to be fixed, and then
my implementation can be fixed in the same way.
The current implementation sees only the PCI abstraction that Yuanhan
introduced, so you may think my solution depends on the above things,
but actually my implementation depends only on how to access the io
ports and PCI configuration space. This is what "qtest.h" provides.
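
To put it another way, everything my implementation needs goes through
an accessor table of roughly this shape (an illustrative sketch; the
member names here are made up, and the real abstraction is the
virtio_pci_ops that Yuanhan introduced in virtio_pci.h):

    #include <stddef.h>
    #include <stdint.h>

    struct virtio_hw;  /* per-device state, defined by the PMD */

    /* Illustrative ops table: the PMD core only calls through these
     * hooks, so a qtest-based backend simply supplies implementations
     * that forward each access to QEMU instead of touching the io
     * ports and PCI config space directly. */
    struct virtio_access_ops {
            uint8_t  (*get_status)(struct virtio_hw *hw);
            void     (*set_status)(struct virtio_hw *hw, uint8_t status);
            uint32_t (*get_features)(struct virtio_hw *hw);
            void     (*set_features)(struct virtio_hw *hw, uint32_t feat);
            void     (*read_dev_cfg)(struct virtio_hw *hw, size_t offset,
                                     void *dst, int length);
            void     (*write_dev_cfg)(struct virtio_hw *hw, size_t offset,
                                      const void *src, int length);
            void     (*notify_queue)(struct virtio_hw *hw, uint16_t vq_idx);
    };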

Thanks,
Tetsuya

> 
>>
>>
>> Anyway, we can choose one of the following:
>> 1. Take advantage of invoking fewer processes.
>> 2. Take advantage of the maintainability of the virtio-net device.
>>
>> Honestly, I'm OK if my solution is not merged.
>> The decision should simply be whatever makes DPDK better.
> 
> Yes, agreed.
> 
> Thanks,
> Jianfeng
> 
>>
>> What do you think?
>> Which is better for DPDK?
>>
>> Thanks,
>> Tetsuya
>>
>>>     --yliu
>>>
> 


