[dpdk-dev] overcommitting CPUs
danny.zhou at intel.com
Tue Aug 26 18:42:23 CEST 2014
I have a prototype that works on Niantic to enable NIC rx interrupts and switch between interrupt and polling mode according to the real traffic load on the rx queue. It is designed for DPDK power management, and can apply to CPU resource sharing as well. It only works in non-virtualized environments at the moment. The prototype also optimizes the DPDK interrupt notification mechanism to user space in order to minimize latency. Basically, it looks like a user-space NAPI.
The downside of this solution is that packet latency is enlarged; it is a combination of interrupt latency, CPU wakeup latency from C3/C6 to C0, cache warmup latency, and OS scheduling latency. It also potentially drops packets under bursty traffic on >40G NICs. In other words, the latency is non-deterministic, which makes it unsuitable for latency-sensitive scenarios.
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Michael Marchetti
> Sent: Wednesday, August 27, 2014 12:27 AM
> To: dev at dpdk.org
> Subject: [dpdk-dev] overcommitting CPUs
> Hi, has there been any consideration of introducing a non-spinning (interrupt-based) network driver, for the purpose of overcommitting
> CPUs in a virtualized environment? This would obviously reduce high-end performance but would allow for increased guest
> density (sharing of physical CPUs) on a host.
> I am interested in adding support for this kind of operation, is there any interest in the community?