[dpdk-dev] Performance hit - NICs on different CPU sockets

Take Ceara dumitru.ceara at gmail.com
Tue Jun 14 09:47:52 CEST 2016


Hi Bruce,

On Mon, Jun 13, 2016 at 4:28 PM, Bruce Richardson
<bruce.richardson at intel.com> wrote:
> On Mon, Jun 13, 2016 at 04:07:37PM +0200, Take Ceara wrote:
>> Hi,
>>
>> I'm reposting here as I didn't get any answers on the dpdk-users mailing list.
>>
>> We're working on a stateful traffic generator (www.warp17.net) using
>> DPDK and we would like to control two XL710 NICs (one on each socket)
>> to maximize CPU usage. It looks that we run into the following
>> limitation:
>>
>> http://dpdk.org/doc/guides/linux_gsg/nic_perf_intel_platform.html
>> section 7.2, point 3
>>
>> We completely split memory/cpu/NICs across the two sockets. However,
>> the performance with a single CPU and both NICs on the same socket is
>> better.
>> Why do all the NICs have to be on the same socket? Is there a
>> driver/hw limitation?
>>
> Hi,
>
> So long as each thread only ever accesses the NIC on its own local socket,
> there is no performance penalty. It's only when a thread on one socket works
> using a NIC on a remote socket that you start seeing a penalty, with all
> NIC-core communication having to go across QPI.
>
> /Bruce

Thanks for the confirmation. We'll go through our code again to double
check that no thread accesses the NIC or memory on a remote socket.

Regards,
Dumitru
