RE: Issue with DPDK-Burst Replay – No Frame Transmission Observed Despite Successful Replay
Ivan Malov
ivan.malov at arknetworks.am
Wed Jul 23 08:12:51 CEST 2025
On Wed, 23 Jul 2025, Ivan Malov wrote:
> Hi,
>
> On Wed, 23 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote:
>
>> Hi Ivan Malov 😊
>>
>> Use Case:
>> I'm currently using the DPDK-Burst-Replay tool to replay captured PCAP
>> files at specific data rates (e.g., 150–200 Mbps).
>>
>> Response to your feedback:
>> Point 1: "Port 0 is not on the good NUMA ID (-1)"
>> I’m aware that this message is printed due to the NUMA ID being returned as
>> -1.
>> I've just started diving into the source code and found that the call to
>> rte_eth_dev_socket_id() returns -1, which typically indicates an error.
>
> No, -1 typically translates to 'SOCKET_ID_ANY'. In order to rule out 'EINVAL'
> in 'rte_errno', one should attempt to invoke 'rte_eth_dev_socket_id' within
> the loop of 'RTE_ETH_FOREACH_DEV' (see examples in DPDK) and print the socket
> ID.
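>
> A minimal sketch of what I have in mind (untested; it assumes EAL is already
> initialised and the ports are probed):
>
>     #include <errno.h>
>     #include <stdio.h>
>     #include <rte_errno.h>
>     #include <rte_ethdev.h>
>
>     uint16_t port_id;
>
>     RTE_ETH_FOREACH_DEV(port_id) {
>         rte_errno = 0;
>         int sid = rte_eth_dev_socket_id(port_id);
>
>         if (sid == -1 && rte_errno == EINVAL)
>             printf("port %u: invalid port ID\n", port_id);
>         else
>             printf("port %u: socket ID %d (-1 means SOCKET_ID_ANY)\n",
>                    port_id, sid);
>     }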
>
>> However, the current implementation does not output the rte_errno, which
>> could help identify the root cause. I'm working on modifying the code to
>> print the error code for better debugging.
>>
>> Point 2: "NIC Link is UP"
>> Yes, on the NIC side, the link is up. I'm also able to transmit packets
>> successfully using the testpmd application.
>>
>> Question:
>> Could you please confirm if the current version of the DPDK-Burst-Replay
>> tool supports replaying Ethernet frames larger than 64 bytes (e.g., up to
>> 1500 bytes)? Or has the tool been enhanced to support this use case?
>
> The tool seems like an external application. Try to look for any mentions of
> the API 'rte_eth_dev_set_mtu' or just the term 'mtu' in that source code. If
> there are no such mentions, then the default MTU applies, which depends on
> the driver.
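>
> For reference, raising the MTU (if it turns out to be needed) is typically a
> single call made before 'rte_eth_dev_start'; a hedged sketch, with 1500 used
> only as an example value and 'port_id' as a placeholder:
>
>     uint16_t mtu = 0;
>
>     if (rte_eth_dev_get_mtu(port_id, &mtu) == 0)
>         printf("port %u: current MTU is %u\n", port_id, mtu);
>
>     /* 1518-byte frames correspond to the standard 1500-byte MTU. */
>     if (rte_eth_dev_set_mtu(port_id, 1500) != 0)
>         printf("port %u: failed to set MTU\n", port_id);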
>
> Have you tried querying statistics to find the cause of the drops?
Also, given that the application replays pcap traffic, have you made sure the
receiver (where you expect to see the replayed traffic arrive) has
'promiscuous' mode [1] enabled? IIRC, 'test-pmd' enables it by default.
[1] https://doc.dpdk.org/api-25.07/rte__ethdev_8h.html#a5dd1dedaa45f05c72bcc35495e441e91
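
A hedged sketch of how one might check and enable it on the receive side
('port_id' is just a placeholder here):

    if (rte_eth_promiscuous_get(port_id) != 1) {
        int ret = rte_eth_promiscuous_enable(port_id);

        if (ret != 0)
            printf("port %u: cannot enable promiscuous mode: %s\n",
                   port_id, rte_strerror(-ret));
    }
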
Thank you.
>
> Thank you.
>
>>
>> Thanks for your time and support!
>>
>> Best regards,
>> Gokul K.R
>>
>>
>>
>>
>>
>> -----Original Message-----
>> From: Ivan Malov <ivan.malov at arknetworks.am>
>> Sent: Friday, July 18, 2025 5:29 PM
>> To: Gokul K R (MS/ETA7-ETAS) <KR.Gokul at in.bosch.com>
>> Cc: users at dpdk.org; dev at dpdk.org
>> Subject: Re: Issue with DPDK-Burst Replay – No Frame Transmission Observed
>> Despite Successful Replay
>>
>> Hi,
>>
>> (please see below)
>>
>> On Fri, 18 Jul 2025, Gokul K R (MS/ETA7-ETAS) wrote:
>>
>>>
>>> Hi Team,
>>>
>>> I’m currently working with the dpdk-burst-replay tool and encountered an
>>> issue during execution. Below are the details:
>>>
>>>
>>> ________________________________________________________________________
>>>
>>>
>>> Observation:
>>> During replay, we received the following informational message:
>>>
>>> port 0 is not on the good numa id (-1)
>>
>> Which API was used to check this? Was API [1] used? If not, what does it
>> show in the absence of the 'numactl' command?
>>
>> [1]
>> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ad032e25f712e6ffeb0c19eab1ec1fd2e
>>
>>>
>>> As per the DPDK mailing list discussions, this warning is typically
>>> benign—often seen on NICs like Intel I225/I210, which do not report NUMA
>>> affinity. Hence, we proceeded with execution.
>>>
>>>
>>> ________________________________________________________________________
>>>
>>>
>>> Command Used:
>>>
>>> sudo numactl --cpunodebind=0 --membind=0 ./src/dpdk-replay
>>> Original_MAC.pcap 0000:01:00.1
>>>
>>> Execution Output:
>>>
>>> preloading Original_MAC.pcap file (of size: 143959 bytes)
>>>
>>> file read at 100.00%
>>>
>>> read 675 pkts (for a total of 143959 bytes). max packet length = 1518
>>> bytes.
>>>
>>> -> Needed MBUF size: 1648
>>>
>>> -> Needed number of MBUFs: 675
>>>
>>> -> Needed Memory = 1.061 MB
>>>
>>> -> Needed Hugepages of 1 GB = 1
>>>
>>> -> Needed CPUs: 1
>>>
>>> -> Create mempool of 675 mbufs of 1648 octets.
>>>
>>> -> Will cache 675 pkts on 1 caches.
>>
>> What does this 'cache' stand for? Does it refer to the mempool per-lcore
>> cache?
>> If so, please note that, according to API [2] documentation, cache size
>> "must be lower or equal to RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5", where
>> 'n' stands for the number of mbufs. Also, documentation says it is advised
>> to choose cache_size to have "n modulo cache_size == 0". Does your code
>> meet these requirements?
>>
>> By the looks of it, it doesn't (cache_size = n = 675). Consider
>> double-checking.
>>
>> [2]
>> https://doc.dpdk.org/api-25.03/rte__mempool_8h.html#a0b64d611bc140a4d2a0c94911580efd5
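>>
>> A minimal compliant sketch (the names and figures are only placeholders
>> lifted from the log above, not from the tool's actual sources):
>>
>>     /* n = 675 mbufs; the cache must divide n and stay within both
>>      * RTE_MEMPOOL_CACHE_MAX_SIZE and n / 1.5 (675 / 1.5 = 450). */
>>     unsigned int n = 675;
>>     unsigned int cache = 225;   /* 675 % 225 == 0 and 225 <= 450 */
>>
>>     struct rte_mempool *mp = rte_pktmbuf_pool_create("replay_pool", n,
>>             cache, 0 /* priv size */, 1648 /* data room */,
>>             rte_eth_dev_socket_id(port_id));
>>
>>     if (mp == NULL)
>>         printf("mempool creation failed: %s\n", rte_strerror(rte_errno));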
>>
>>>
>>>
>>> ________________________________________________________________________
>>>
>>>
>>> Issue:
>>> Despite successful parsing of the pcap file and proper initialization, no
>>> frames were transmitted or received on either the sender or receiver
>>> sides.
>>
>> Is this observation based solely on watching APIs [3] and [4] return 0 all
>> the time? If yes, one can consider introducing invocations of APIs [5], [6]
>> and [7] in order to periodically poll and print statistics (perhaps with a
>> 1-second delay), which may, for example, shed light on mbuf allocation
>> errors (extended stats).
>>
>> Do the statistics display any interesting figures to be discussed?
>>
>> [3]
>> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a3e7d76a451b46348686ea97d6367f102
>> [4]
>> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a83e56cabbd31637efd648e3fc010392b
>>
>> [5]
>> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#adec226574c53ae413252c9b15f6f4bab
>> [6]
>> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a418ad970673eb171673185e36044fd79
>> [7]
>> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#a300d75b583c1f5acfe5b162a5d8c0ac1
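>>
>> For instance, something along these lines in a once-per-second loop (a
>> rough sketch, not taken from the tool's code):
>>
>>     #include <inttypes.h>
>>
>>     struct rte_eth_stats st;
>>
>>     if (rte_eth_stats_get(port_id, &st) == 0)
>>         printf("port %u: tx %" PRIu64 " pkts, tx errs %" PRIu64
>>                ", rx %" PRIu64 " pkts, missed %" PRIu64
>>                ", rx no-mbuf %" PRIu64 "\n",
>>                port_id, st.opackets, st.oerrors,
>>                st.ipackets, st.imissed, st.rx_nombuf);
>>
>>     /* APIs [6] and [7] (xstats) additionally expose per-driver counters,
>>      * e.g. mbuf allocation failures. */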
>>
>>>
>>>
>>> ________________________________________________________________________
>>>
>>>
>>> Environment Details:
>>>
>>> * NIC used: Intel I225/I210
>>> * Hugepages configured: 1 GB
>>> * NUMA binding: --cpunodebind=0 --membind=0
>>> * OS: [Your Linux distribution, e.g., Ubuntu 20.04]
>>> * DPDK version: [Mention if known]
>>>
>>> ________________________________________________________________________
>>>
>>>
>>> Could you please advise if any additional setup, configuration, or known
>>> limitations may be impacting the packet transmission?
>>
>> This may be a wild suggestion from my side, but it pays to check whether
>> link status is "up" upon port start on both ends. One can use API [8] to do
>> that.
>>
>> [8]
>> https://doc.dpdk.org/api-25.03/rte__ethdev_8h.html#ac05878578e4fd9ef3551d2c1c175ebe7
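>>
>> A small sketch of such a check right after 'rte_eth_dev_start' (untested):
>>
>>     struct rte_eth_link link;
>>     int ret = rte_eth_link_get(port_id, &link); /* may wait for link-up */
>>
>>     if (ret == 0 && link.link_status == RTE_ETH_LINK_UP)
>>         printf("port %u: link up at %u Mbps\n", port_id, link.link_speed);
>>     else
>>         printf("port %u: link is down (ret = %d)\n", port_id, ret);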
>>
>> Thank you.
>>
>>
>>>
>>> Thank you in advance for your support!
>>>
>>>
>>> Best regards,
>>> Gokul K R
>>>
>>>
>>>
>>>
>>>
>