[dpdk-users] VF RSS available in I350-T2?

.. hyperhead at gmail.com
Wed Dec 13 13:53:41 CET 2017


Hi, sorry, the "ignored" comment was a bit brash.

It's my first time posting, and I didn't see the email I sent come into my
inbox (I guess you don't get them sent to yourself), so I did wonder if it
posted OK. However, the list archives showed that it did.

Thanks
On Wed, 13 Dec 2017, 13:20 .., <hyperhead at gmail.com> wrote:

> Hi Paul,
>
> No I didn't spot that.
>
> I guess my only option now is a 10 Gb card that supports it.
>
> Thanks.
>
> On Wed, 13 Dec 2017, 12:35 Paul Emmerich, <emmericp at net.in.tum.de> wrote:
>
>> Did you consult the datasheet? It says that the VF only supports one
>> queue.
>>
>> Paul
>>
>> > Am 12.12.2017 um 13:58 schrieb .. <hyperhead at gmail.com>:
>> >
>> > I assume my message was ignored because it is not related to the DPDK
>> > software itself?
>> >
>> >
>> > On 11 December 2017 at 10:14, .. <hyperhead at gmail.com> wrote:
>> >
>> >> Hi,
>> >>
>> >> I have an Intel I350-T2 which I use for SR-IOV; however, I am hitting
>> >> some rx_dropped on the card when I start increasing traffic. (I have
>> >> got more out of an identical bare-metal system with the same software.)
>> >>
>> >> I am using the Intel igb driver on CentOS 7.2 (downloaded from Intel,
>> >> not the driver installed with CentOS), so the RSS parameter, amongst
>> >> others, is available to me.
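>> >>
>> >> For reference, a quick way to confirm which driver build a port is
>> >> bound to and which module parameters it exposes (a minimal sketch;
>> >> ens2f1 is one of my ports):
>> >>
>> >>   # Driver name and version bound to the port
>> >>   ethtool -i ens2f1
>> >>   # Module parameters (RSS, max_vfs, ...) the loaded igb accepts
>> >>   modinfo igb | grep parm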
>> >>
>> >> This then led me to investigate the interrupts on the tx/rx ring
>> >> buffers, and I noticed that the interface (with VFs enabled) only had
>> >> one tx/rx queue, whose interrupts are distributed between a few CPUs.
>> >> This is on the KVM host:
>> >>
>> >>            CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7       CPU8
>> >> 100:          1         33        137          0          0          0          0          0          0   IR-PCI-MSI-edge   ens2f1
>> >> 101:       2224          0          0       6309     178807          0          0          0          0   IR-PCI-MSI-edge   ens2f1-TxRx-0
>> >>
>> >> Looking at my standard NIC ethernet ports, I see 1 tx and 4 rx queues.
>> >>
>> >> On the VM I only get one tx and one rx queue. (I know all the
>> >> interrupts are only using CPU0, but that is defined in our builds.)
>> >>
>> >> egrep "CPU|ens11" /proc/interrupts
>> >>            CPU0       CPU1       CPU2       CPU3       CPU4       CPU5       CPU6       CPU7
>> >> 34:  715885552          0          0          0          0          0          0          0   PCI-MSI-edge   ens11-tx-0
>> >> 35:  559402399          0          0          0          0          0          0          0   PCI-MSI-edge   ens11-rx-0
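>> >>
>> >> As a cross-check from inside the VM, ethtool can report the queue
>> >> (channel) counts; a minimal sketch, assuming the VF netdev is ens11
>> >> and that the igbvf driver supports the channels query at all:
>> >>
>> >>   # Show current and maximum hardware queue counts for the VF
>> >>   ethtool -l ens11
>> >>   # Asking for more combined queues should fail on an I350 VF, which
>> >>   # exposes a single queue pair
>> >>   ethtool -L ens11 combined 2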
>> >>
>> >> I activated RSS on my card, and can set it; however, if I use the
>> >> parameter max_vfs=n, then it defaults back to 1 rx and 1 tx queue per
>> >> NIC port:
>> >>
>> >> [  392.833410] igb 0000:07:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
>> >> [  393.035408] igb 0000:07:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
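>> >>
>> >> A minimal sketch of how the two module options interact when loading
>> >> Intel's out-of-tree igb (the values here are illustrative, not my
>> >> exact setup):
>> >>
>> >>   # Request 4 RSS queues on each of the two ports
>> >>   modprobe igb RSS=4,4
>> >>   # Reload with VFs enabled: dmesg then shows each port falling back
>> >>   # to 1 rx / 1 tx queue, as above
>> >>   rmmod igb
>> >>   modprobe igb max_vfs=2,2 RSS=4,4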
>> >>
>> >> I have been reading some of the older DPDK posts and see that VF RSS
>> >> is implemented in some cards. Does anybody know if it is available in
>> >> this card? (From what I read, it seemed to be only the 10 Gb cards.)
>> >>
>> >> One of my plans, aside from trying to get more RSS queues per VM, is
>> >> to add more non-isolated CPUs to the VM, so that the rx and tx queues
>> >> can distribute their load a bit, to see if this helps.
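>> >>
>> >> The spreading can also be done by hand through smp_affinity; a minimal
>> >> sketch, assuming the tx/rx interrupts are IRQs 34 and 35 as above and
>> >> that irqbalance is stopped (otherwise it rewrites the masks):
>> >>
>> >>   # Hex CPU masks: pin the tx queue to CPU1 and the rx queue to CPU2
>> >>   echo 2 > /proc/irq/34/smp_affinity
>> >>   echo 4 > /proc/irq/35/smp_affinity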
>> >>
>> >> Also, is it worth investigating the VMDq options? I understand VMDq
>> >> to be less useful than SR-IOV, which works well for me with KVM.
>> >>
>> >>
>> >> Thanks in advance,
>> >>
>> >> Rolando
>> >>
>>
>> --
>> Chair of Network Architectures and Services
>> Department of Informatics
>> Technical University of Munich
>> Boltzmannstr. 3
>> 85748 Garching bei München, Germany
>>

