[dpdk-dev] Performance hit - NICs on different CPU sockets
bruce.richardson at intel.com
Mon Jun 13 16:28:36 CEST 2016
On Mon, Jun 13, 2016 at 04:07:37PM +0200, Take Ceara wrote:
> I'm reposting here as I didn't get any answers on the dpdk-users mailing list.
> We're working on a stateful traffic generator (www.warp17.net) using
> DPDK and we would like to control two XL710 NICs (one on each socket)
> to maximize CPU usage. It looks like we run into the limitation
> described in section 7.2, point 3
> We completely split memory/CPU/NICs across the two sockets. However,
> the performance with a single CPU and both NICs on the same socket is
> noticeably better.
> Why do all the NICs have to be on the same socket, is there a
> driver/hw limitation?
As long as each thread only ever accesses the NIC on its own local socket,
there is no performance penalty. It's only when a thread on one socket
drives a NIC on the remote socket that you start seeing a penalty, because
all NIC-core communication then has to cross QPI.
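
The local-socket rule above can be enforced at init time. The sketch below
(not from the original mail; a minimal illustration using DPDK's standard
`rte_eth_dev_socket_id()` and `rte_lcore_to_socket_id()` helpers) flags any
lcore/port pairing that would force traffic across QPI:

```c
/* Minimal sketch: warn when an lcore is about to poll a NIC port that
 * sits on a remote NUMA socket. Uses DPDK's standard socket-query
 * helpers; assumes the EAL has already been initialized. */
#include <stdio.h>
#include <rte_lcore.h>
#include <rte_ethdev.h>

/* Returns 0 if the lcore and port share a socket, -1 on a remote pairing. */
static int
check_port_locality(unsigned lcore_id, uint16_t port_id)
{
	int port_socket  = rte_eth_dev_socket_id(port_id);   /* -1 if unknown */
	int lcore_socket = rte_lcore_to_socket_id(lcore_id);

	if (port_socket >= 0 && port_socket != lcore_socket) {
		printf("lcore %u (socket %d) polling port %u (socket %d): "
		       "NIC-core traffic would cross QPI\n",
		       lcore_id, lcore_socket, port_id, port_socket);
		return -1;
	}
	return 0;
}
```

The same locality argument applies to mbuf memory: create each port's
packet pool with `rte_pktmbuf_pool_create()` passing the port's socket ID,
so descriptors and buffers are allocated in local DRAM as well.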