[dpdk-dev] [PATCH v2] net/null: support bulk allocation
Mallesh Koujalagi
malleshx.koujalagi at intel.com
Fri Mar 9 00:40:41 CET 2018
Allocating multiple mbufs in bulk instead of one at a time improves
throughput by roughly 2% to 8% on a single core (1.8 GHz), depending on
the use case (a minimal sketch of the pattern follows the examples below):
1. Testpmd case: two null devices with copy, ~8% improvement.
testpmd -c 0x3 -n 4 --socket-mem 1024,1024 \
 --vdev 'eth_null0,size=64,copy=1' --vdev 'eth_null1,size=64,copy=1' \
 -- -i -a --coremask=0x2 --txrst=64 --txfreet=64 --txd=256 \
 --rxd=256 --rxfreet=64 --burst=64 --txpt=64 --txq=1 --rxq=1 --numa
2. OVS switch case: ~2% improvement.
$VSCTL add-port ovs-br dpdk1 -- set Interface dpdk1 type=dpdk \
options:dpdk-devargs=eth_null0,size=64,copy=1
$VSCTL add-port ovs-br dpdk2 -- set Interface dpdk2 type=dpdk \
options:dpdk-devargs=eth_null1,size=64,copy=1
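For reference, a minimal sketch of the bulk-allocation pattern applied by
this patch; the function name fill_burst and the parameter names mp, bufs
and nb_bufs are placeholders, not names from the driver:

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch only: fill a burst of dummy packets using one bulk alloc. */
static uint16_t
fill_burst(struct rte_mempool *mp, struct rte_mbuf **bufs,
	   uint16_t nb_bufs, uint16_t packet_size)
{
	uint16_t i;

	/* One mempool operation for the whole burst; rte_pktmbuf_alloc_bulk()
	 * is all-or-nothing, so on failure nothing was allocated and the
	 * burst is simply dropped. */
	if (rte_pktmbuf_alloc_bulk(mp, bufs, nb_bufs) != 0)
		return 0;

	for (i = 0; i < nb_bufs; i++) {
		bufs[i]->data_len = packet_size;
		bufs[i]->pkt_len = packet_size;
	}
	return nb_bufs;
}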
Signed-off-by: Mallesh Koujalagi <malleshx.koujalagi at intel.com>
---
drivers/net/null/rte_eth_null.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 9385ffd..c019d2d 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -105,10 +105,10 @@ eth_null_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
return 0;
packet_size = h->internals->packet_size;
+ if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
+ return 0;
+
for (i = 0; i < nb_bufs; i++) {
- bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
- if (!bufs[i])
- break;
bufs[i]->data_len = (uint16_t)packet_size;
bufs[i]->pkt_len = packet_size;
bufs[i]->port = h->internals->port_id;
@@ -130,10 +130,10 @@ eth_null_copy_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
return 0;
packet_size = h->internals->packet_size;
+ if (rte_pktmbuf_alloc_bulk(h->mb_pool, bufs, nb_bufs) != 0)
+ return 0;
+
for (i = 0; i < nb_bufs; i++) {
- bufs[i] = rte_pktmbuf_alloc(h->mb_pool);
- if (!bufs[i])
- break;
rte_memcpy(rte_pktmbuf_mtod(bufs[i], void *), h->dummy_packet,
packet_size);
bufs[i]->data_len = (uint16_t)packet_size;
--
2.7.4