[dpdk-dev] Performance hit - NICs on different CPU sockets
keith.wiles at intel.com
Tue Jun 14 15:47:50 CEST 2016
On 6/14/16, 2:46 AM, "Take Ceara" <dumitru.ceara at gmail.com> wrote:
>On Mon, Jun 13, 2016 at 9:35 PM, Wiles, Keith <keith.wiles at intel.com> wrote:
>> On 6/13/16, 9:07 AM, "dev on behalf of Take Ceara" <dev-bounces at dpdk.org on behalf of dumitru.ceara at gmail.com> wrote:
>>>I'm reposting here as I didn't get any answers on the dpdk-users mailing list.
>>>We're working on a stateful traffic generator (www.warp17.net) using
>>>DPDK and we would like to control two XL710 NICs (one on each socket)
>to maximize CPU usage. It looks like we run into the issue described in
>section 7.2, point 3.
>We completely split memory/cpu/NICs across the two sockets. However,
>the performance with a single CPU and both NICs on the same socket is
>better.
>Why do all the NICs have to be on the same socket? Is there a
>workaround?
>> Normally the limitation is in the hardware: basically, how the PCI bus is connected to the CPUs (sockets). How the PCI buses are connected to the system depends on the motherboard design. I normally see the buses attached to socket 0, but you could have some buses attached to the other sockets, or all on one socket via a PCI bridge device.
>> There is no easy way around the problem if your PCI buses are split across sockets or all on a single socket. Check your system docs, or use lspci; it has an option to dump the PCI bus as an ASCII tree, at least on Ubuntu.
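Besides the lspci tree, the kernel exposes each PCI device's socket directly through sysfs. A minimal sketch of that check, assuming the standard Linux sysfs layout (the helper name and any PCI address you pass in are illustrative, not from the thread):

```python
# Sketch: look up the NUMA node (socket) of a PCI device via Linux sysfs.
# Assumes the standard /sys/bus/pci/devices/<addr>/numa_node attribute.
from pathlib import Path

def pci_numa_node(pci_addr: str) -> int:
    """Return the NUMA node of a PCI device, or -1 if unknown."""
    node_file = Path("/sys/bus/pci/devices") / pci_addr / "numa_node"
    try:
        return int(node_file.read_text().strip())
    except (OSError, ValueError):
        # Device absent, or the firmware did not report a node; the
        # kernel itself also reports -1 in the latter case.
        return -1
```

If this returns 1 for one NIC and 0 for the other, the two ports really are wired to different sockets and the cross-socket penalty is expected.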
>This is the motherboard we use on our system:
>I need to swap some NICs around (as now we moved everything on socket
>1) before I can share the lspci output.
FYI: the option for lspci is 'lspci -tv', but there may be other useful options too.
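Once the NIC's socket is known, lcores can be picked from the same node. The per-node CPU lists also live in sysfs; a sketch that expands the usual cpulist format (e.g. "0-3,8"), with hypothetical helper names:

```python
# Sketch: expand a kernel cpulist string ("0-3,8,10-11") into CPU ids,
# so DPDK lcores can be chosen on the same NUMA node as a NIC.
from pathlib import Path

def parse_cpulist(cpulist: str) -> list[int]:
    """Expand a sysfs cpulist string into a list of CPU ids."""
    cpus: list[int] = []
    for part in cpulist.strip().split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cpus.extend(range(int(lo), int(hi) + 1))
        else:
            cpus.append(int(part))
    return cpus

def node_cpus(node: int) -> list[int]:
    """CPU ids on a given NUMA node, or [] if the node is absent."""
    path = Path(f"/sys/devices/system/node/node{node}/cpulist")
    try:
        return parse_cpulist(path.read_text())
    except OSError:
        return []
```

For example, the ids from node_cpus(1) could feed the EAL core mask for the port whose numa_node is 1, keeping rx/tx cores local to that NIC's socket.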