[dpdk-dev] 82599 SR-IOV with passthrough

jigsaw jigsaw at gmail.com
Thu Oct 17 15:11:29 CEST 2013


Hi Prashant,

The problem is that my patch has to be applied to the ixgbe PF driver as
well, and I have no idea how to make that happen.
So even if DPDK accepts my patch, users won't benefit from it unless they
patch the ixgbe PF driver themselves.

I also hate the fact that SR-IOV cannot give more queues to a VF. But
there is a way out: assign more than one VF to the guest.
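
Something along these lines is what I have in mind. Just a rough,
untested sketch, where the two-VF setup, the port-to-lcore mapping and
the function name are my own assumptions for illustration: each VF shows
up in the guest as its own DPDK port with a single queue pair, and every
lcore polls only its own VF, so Rx/Tx stay lockless.

/* Rough sketch (untested): with e.g. two VFs passed through, each VF is a
 * separate DPDK port with one queue pair; give every lcore its own port so
 * rte_eth_rx_burst()/rte_eth_tx_burst() never need a lock. */
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static int
vf_lcore_loop(void *arg)
{
	uint8_t port = (uint8_t)(uintptr_t)arg;   /* one VF port per lcore */
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t nb_rx, nb_tx;

	for (;;) {
		nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
		if (nb_rx == 0)
			continue;
		nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
		while (nb_tx < nb_rx)              /* free what didn't fit */
			rte_pktmbuf_free(bufs[nb_tx++]);
	}
	return 0;
}
/* launched from the master lcore, e.g.
 *   rte_eal_remote_launch(vf_lcore_loop, (void *)(uintptr_t)0, lcore_a);
 *   rte_eal_remote_launch(vf_lcore_loop, (void *)(uintptr_t)1, lcore_b); */

Each VF still costs a whole PCI function in the guest, but at least the
fast path stays strictly per-core.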


thx &
rgds,
-ql

On Thu, Oct 17, 2013 at 4:02 PM, Prashant Upadhyaya
<prashant.upadhyaya at aricent.com> wrote:
> Hi Qinglai,
>
> I would say that SR-IOV is 'useless' if the VF gets only one queue.
> At the heart of performance is using one queue per core so that Tx and Rx remain lockless. Locks 'destroy' performance.
> So with one queue, if we want to remain lockless, the use case is automatically restricted to one core, ergo useless for any use case worth its salt.
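> Just to illustrate the point, a rough sketch (the one-queue-per-lcore mapping and the helper name are only assumptions for the example):
>
> /* Each lcore owns exactly one Tx queue, so rte_eth_tx_burst() needs no
>  * lock. With a single VF queue this collapses to a single core. */
> #include <stdint.h>
> #include <rte_ethdev.h>
> #include <rte_lcore.h>
> #include <rte_mbuf.h>
>
> static inline uint16_t
> send_burst(uint8_t port_id, struct rte_mbuf **pkts, uint16_t nb_pkts)
> {
> 	uint16_t queue_id = (uint16_t)rte_lcore_id();  /* queue id == lcore id */
>
> 	return rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);
> }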
>
> It was courtesy of your mail that I 'discovered' that DPDK has such a limitation.
>
> So I am all for this patch to go in DPDK. Good luck !
>
> Regards
> -Prashant
>
>
> -----Original Message-----
> From: jigsaw [mailto:jigsaw at gmail.com]
> Sent: Thursday, October 17, 2013 6:14 PM
> To: Prashant Upadhyaya
> Cc: Thomas Monjalon; dev at dpdk.org
> Subject: Re: [dpdk-dev] 82599 SR-IOV with passthrough
>
> Hi Prashant,
>
> I patched both the Intel ixgbe PF driver and the DPDK 1.5 VF driver, so that DPDK gets 4 queues in one VF. It works fine with all 4 Tx queues. The only trick is to set the proper MAC address on all outgoing packets, which must be the same MAC as the one you set on the VF. This trick is described in the DPDK release notes.
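>
> In code, the trick amounts to something like this rough sketch (the helper name is mine; the point is just to stamp the MAC assigned to the VF as the source address before Tx):
>
> /* Stamp the VF's own MAC as the source address on every outgoing frame,
>  * since the packets must carry the same MAC as the one set on the VF. */
> #include <stdint.h>
> #include <rte_ethdev.h>
> #include <rte_ether.h>
> #include <rte_mbuf.h>
>
> static void
> fix_src_mac(uint8_t port_id, struct rte_mbuf **pkts, uint16_t nb_pkts)
> {
> 	struct ether_addr vf_mac;
> 	uint16_t i;
>
> 	rte_eth_macaddr_get(port_id, &vf_mac);   /* MAC assigned to the VF */
> 	for (i = 0; i < nb_pkts; i++) {
> 		struct ether_hdr *eth =
> 			rte_pktmbuf_mtod(pkts[i], struct ether_hdr *);
> 		ether_addr_copy(&vf_mac, &eth->s_addr);
> 	}
> }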
>
> I wonder whether it makes sense to push this patch to DPDK. Any comments?
>
> thx &
> rgds,
> -ql
>
> On Thu, Oct 17, 2013 at 2:55 PM, Prashant Upadhyaya <prashant.upadhyaya at aricent.com> wrote:
>> Hi Qinglai,
>>
>> Why are you using the kernel driver at all?
>> Use the DPDK driver to control the PF on the host. The guest would communicate with the PF on the host via the mailbox as usual.
>> Then the changes would be limited to DPDK, wouldn't they?
>>
>> Regards
>> -Prashant
>>
>> -----Original Message-----
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of jigsaw
>> Sent: Wednesday, October 16, 2013 6:51 PM
>> To: Thomas Monjalon
>> Cc: dev at dpdk.org
>> Subject: Re: [dpdk-dev] 82599 SR-IOV with passthrough
>>
>> Hi Thomas,
>>
>> Thanks for reply.
>>
>> The kernel has an older version of the PF driver than the one released on sf.net, so I'm checking the sf.net release.
>> If the change were limited to DPDK, it would be manageable. But it also affects Intel's PF driver, and I don't even know how to push the feature to Intel. The driver on sf.net is a read-only repository, isn't it? It would be painful to maintain another branch of the 10G PF driver.
>> Could Intel give some advice or hints here?
>>
>> thx &
>> rgds,
>> -Qinglai
>>
>> On Wed, Oct 16, 2013 at 3:58 PM, Thomas Monjalon <thomas.monjalon at 6wind.com> wrote:
>>> 16/10/2013 14:18, jigsaw :
>>>> Therefore, to add support for multiple queues per VF, we have to at
>>>> least fix the PF driver, then add support in DPDK's VF driver.
>>>
>>> You're right, the Linux PF driver has to be updated to properly manage
>>> multiple queues per VF. Then the guest can be tested with DPDK or
>>> with the Linux driver (ixgbe_vf).
>>>
>>> Note that there are 2 versions of the Linux ixgbe driver: kernel.org
>>> and sourceforge.net (the latter supporting many kernel versions).
>>>
>>> --
>>> Thomas
>>
>

