[dpdk-dev] Surprisingly high TCP ACK packets drop counter

Prashant Upadhyaya prashant.upadhyaya at aricent.com
Sat Nov 2 06:32:31 CET 2013


Hi,

I have used DPDK 1.4 and DPDK 1.5, and the packets fan out nicely across the rx queues in some use cases I have.
Alexander, can you please try DPDK 1.4 or 1.5 and share the results?

Regards
-Prashant


-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Wang, Shawn
Sent: Friday, November 01, 2013 8:24 PM
To: Alexander Belyakov
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] Surprisingly high TCP ACK packets drop counter

Hi:

We had the same problem before. It turned out that RSC (receive side
coalescing) is enabled by default in DPDK, so we wrote this naive patch to disable it. The patch is based on DPDK 1.3; not sure whether 1.5 has changed this or not.
After applying it, the ACK rate should go back to 14.5Mpps. For details, you can refer to the Intel 82599 10 GbE Controller Datasheet, section 7.11 (Receive Side Coalescing).

From: xingbow <xingbow at amazon.com>
Date: Wed, 21 Aug 2013 11:35:23 -0700
Subject: [PATCH] Disable RSC in ixgbe_dev_rx_init function in file

 ixgbe_rxtx.c

---

 DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h | 2 +-
 DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c       | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h b/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
index 7fffd60..f03046f 100644
--- a/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
+++ b/DPDK/lib/librte_pmd_ixgbe/ixgbe/ixgbe_type.h
@@ -1930,7 +1930,7 @@ enum {
 #define IXGBE_RFCTL_ISCSI_DIS          0x00000001
 #define IXGBE_RFCTL_ISCSI_DWC_MASK     0x0000003E
 #define IXGBE_RFCTL_ISCSI_DWC_SHIFT    1
-#define IXGBE_RFCTL_RSC_DIS            0x00000010
+#define IXGBE_RFCTL_RSC_DIS            0x00000020
 #define IXGBE_RFCTL_NFSW_DIS           0x00000040
 #define IXGBE_RFCTL_NFSR_DIS           0x00000080
 #define IXGBE_RFCTL_NFS_VER_MASK       0x00000300
diff --git a/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c b/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
index 07830b7..ba6e05d 100755
--- a/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
+++ b/DPDK/lib/librte_pmd_ixgbe/ixgbe_rxtx.c
@@ -3007,6 +3007,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
        uint64_t bus_addr;
        uint32_t rxctrl;
        uint32_t fctrl;
+       uint32_t rfctl;
        uint32_t hlreg0;
        uint32_t maxfrs;
        uint32_t srrctl;
@@ -3033,6 +3034,12 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
        fctrl |= IXGBE_FCTRL_PMCF;
        IXGBE_WRITE_REG(hw, IXGBE_FCTRL, fctrl);

+       /* Disable RSC */
+       RTE_LOG(INFO, PMD, "Disable RSC\n");
+       rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
+       rfctl |= IXGBE_RFCTL_RSC_DIS;
+       IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
+
       /*
        * Configure CRC stripping, if any.
        */
--

Thanks.
Wang, Xingbo




On 11/1/13 6:43 AM, "Alexander Belyakov" <abelyako at gmail.com> wrote:

>Hello,
>
>we have simple test application on top of DPDK which sole purpose is to
>forward as much packets as possible. Generally we easily achieve
>14.5Mpps with two 82599EB (one as input and one as output). The only
>suprising exception is forwarding pure TCP ACK flood when performace
>always drops to approximately 7Mpps.
>
>For simplicity consider two different types of traffic:
>1) TCP SYN flood is forwarded at 14.5Mpps rate,
>2) pure TCP ACK flood is forwarded only at 7Mpps rate.
>
>Both SYN and ACK packets have exactly the same length.
>
>It is worth mentioning that this forwarding application looks at Ethernet
>and IP headers, but never deals with L4 headers.
>
>We tracked the issue down to the RX circuit. To be specific, there are 4 RX
>queues initialized on the input port, and rte_eth_stats_get() shows uniform
>packet distribution (q_ipackets) among them, while q_errors remains zero
>for all queues. The only drop counter quickly increasing in the case of a
>pure ACK flood is ierrors, while rx_nombuf remains zero.
>
>We tried different kinds of traffic generators, but always got the same
>result: 7Mpps (instead of the expected 14Mpps) for TCP packets with the ACK
>flag bit set and all other flag bits cleared. Source IPs and ports
>are selected randomly.
>
>Please let us know if anyone is aware of such strange behavior and
>where we should look to narrow down the problem.
>
>Thanks in advance,
>Alexander Belyakov






