[dpdk-dev] [PATCH v6 1/4] app/testpmd: move eth header generation outside the loop
Raslan Darawsheh
rasland at mellanox.com
Tue Apr 2 17:21:30 CEST 2019
Hi,
The performance issue that we saw at Mellanox is now fixed with the latest version.
The issue was with the packet length calculation, which was changed in the versions after v4:
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index af0be89..2f40949 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -176,6 +176,7 @@ pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
 	pkt->l3_len = sizeof(struct ipv4_hdr);
 	pkt_seg = pkt;
+	pkt_len = pkt->data_len;
 	for (i = 1; i < nb_segs; i++) {
 		pkt_seg->next = pkt_segs[i - 1];
 		pkt_seg = pkt_seg->next;
@@ -198,7 +199,7 @@ pkt_burst_prepare(struct rte_mbuf *pkt, struct rte_mempool *mbp,
 	 * burst of packets to be transmitted.
 	 */
 	pkt->nb_segs = nb_segs;
-	pkt->pkt_len += pkt_len;
+	pkt->pkt_len = pkt_len;
 	return true;
With this change we now see a ~20% improvement in performance.
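For clarity, here is a minimal sketch of the corrected length accounting (the helper below is illustrative only, not part of testpmd): the total starts from the first segment's data_len, each chained segment is added, and the result is assigned to pkt->pkt_len rather than accumulated into it.

/* Illustrative helper, not testpmd code: computes the total packet length
 * the same way the fixed pkt_burst_prepare() does.
 */
#include <rte_mbuf.h>

static void
set_total_pkt_len(struct rte_mbuf *pkt)
{
	struct rte_mbuf *seg = pkt;
	uint32_t pkt_len = pkt->data_len;	/* first segment */

	while (seg->next != NULL) {
		seg = seg->next;
		pkt_len += seg->data_len;	/* chained segments */
	}
	pkt->pkt_len = pkt_len;	/* assign the total, do not '+=' */
}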
Kindest regards,
Raslan Darawsheh
> -----Original Message-----
> From: dev <dev-bounces at dpdk.org> On Behalf Of Pavan Nikhilesh
> Bhagavatula
> Sent: Tuesday, April 2, 2019 12:53 PM
> To: Jerin Jacob Kollanukkaran <jerinj at marvell.com>; Thomas Monjalon
> <thomas at monjalon.net>; arybchenko at solarflare.com;
> ferruh.yigit at intel.com; bernard.iremonger at intel.com; Ali Alnubani
> <alialnu at mellanox.com>
> Cc: dev at dpdk.org; Pavan Nikhilesh Bhagavatula
> <pbhagavatula at marvell.com>
> Subject: [dpdk-dev] [PATCH v6 1/4] app/testpmd: move eth header
> generation outside the loop
>
> From: Pavan Nikhilesh <pbhagavatula at marvell.com>
>
> Testpmd txonly copies the src/dst MAC addresses of the port being
> processed into the Ethernet header structure on the stack for every
> packet. Move this outside the loop and reuse it (a simplified sketch of
> the pattern follows the quoted patch below).
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula at marvell.com>
> ---
> v6 Changes
> - Rebase onto ToT.
> - Split the changes further
>
> v5 Changes
> - Remove unnecessary change to struct rte_port *txp (movement).
> (Bernard)
>
> v4 Changes:
> - Fix packet len calculation.
>
> v3 Changes:
> - Split the patches for easier review. (Thomas)
> - Remove unnecessary assignments to 0. (Bernard)
>
> v2 Changes:
> - Use bulk ops for fetching segments. (Andrew Rybchenko)
> - Fallback to rte_mbuf_raw_alloc if bulk get fails. (Andrew Rybchenko)
> - Fix mbufs not being freed when there is no more mbufs available for
> segments. (Andrew Rybchenko)
>
> app/test-pmd/txonly.c | 15 ++++++++-------
> 1 file changed, 8 insertions(+), 7 deletions(-)
>
> diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
> index def52a048..0d411dbf4 100644
> --- a/app/test-pmd/txonly.c
> +++ b/app/test-pmd/txonly.c
> @@ -190,6 +190,14 @@ pkt_burst_transmit(struct fwd_stream *fs)
>  		ol_flags |= PKT_TX_QINQ_PKT;
>  	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
>  		ol_flags |= PKT_TX_MACSEC;
> +
> +	/*
> +	 * Initialize Ethernet header.
> +	 */
> +	ether_addr_copy(&peer_eth_addrs[fs->peer_addr], &eth_hdr.d_addr);
> +	ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.s_addr);
> +	eth_hdr.ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
> +
>  	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
>  		pkt = rte_mbuf_raw_alloc(mbp);
>  		if (pkt == NULL) {
> @@ -226,13 +234,6 @@ pkt_burst_transmit(struct fwd_stream *fs)
>  		}
>  		pkt_seg->next = NULL; /* Last segment of packet. */
>  
> -		/*
> -		 * Initialize Ethernet header.
> -		 */
> -		ether_addr_copy(&peer_eth_addrs[fs->peer_addr],&eth_hdr.d_addr);
> -		ether_addr_copy(&ports[fs->tx_port].eth_addr, &eth_hdr.s_addr);
> -		eth_hdr.ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);
> -
>  		/*
>  		 * Copy headers in first packet segment(s).
>  		 */
> --
> 2.21.0
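In short, the patch hoists the loop-invariant Ethernet header setup out of the per-packet loop so it is built once per burst and reused. A minimal standalone sketch of that pattern, using illustrative names rather than the exact testpmd code:

/* Sketch only: build the Ethernet header once, then reuse it for every
 * packet of the burst, mirroring the hoisting done in the patch above.
 * send_burst_sketch() and its parameters are illustrative, not DPDK APIs.
 */
#include <string.h>
#include <rte_ether.h>
#include <rte_byteorder.h>

static void
send_burst_sketch(const struct ether_addr *src, const struct ether_addr *dst,
		  char *pkt_data[], int nb_pkts)
{
	struct ether_hdr eth_hdr;	/* filled once, outside the loop */
	int i;

	ether_addr_copy(dst, &eth_hdr.d_addr);
	ether_addr_copy(src, &eth_hdr.s_addr);
	eth_hdr.ether_type = rte_cpu_to_be_16(ETHER_TYPE_IPv4);

	for (i = 0; i < nb_pkts; i++)
		memcpy(pkt_data[i], &eth_hdr, sizeof(eth_hdr));	/* reuse */
}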