[dpdk-users] Query on handling packets

Harsh Patel thadodaharsh10 at gmail.com
Tue Nov 13 03:25:37 CET 2018


Hello,
It would be really helpful if you could provide us a link (for both Tx and
Rx) to the project you mentioned earlier where you worked on a similar
problem, if possible.

Thanks and Regards,
Harsh & Hrishikesh.

On Mon, 12 Nov 2018 at 01:15, Harsh Patel <thadodaharsh10 at gmail.com> wrote:

> Thanks a lot for all the support. We are reviewing our work now and will
> contact you once we have checked it completely on our side. Thanks for the
> help.
>
> Regards,
> Harsh and Hrishikesh
>
> On Sat, 10 Nov 2018 at 11:47, Wiles, Keith <keith.wiles at intel.com> wrote:
>
>> Please make sure to send your emails in plain text format. The Mac mail
>> program loves to use rich-text format if the original email uses it, and I
>> have told it to only send plain text :-(
>>
>> > On Nov 9, 2018, at 4:09 AM, Harsh Patel <thadodaharsh10 at gmail.com>
>> wrote:
>> >
>> > We have implemented the logic for Tx/Rx as you suggested. We compared
>> > the obtained throughput with another version of the same application
>> > that uses Linux raw sockets.
>> > Unfortunately, the throughput we achieve in our DPDK application is
>> > lower by a good margin. Is there any way we can optimize our
>> > implementation, or anything that we are missing?
>> >
>>
>> The PoC code I was developing for DAPI did not have any performance
>> issues; it ran just as fast in my limited testing. I converted the l3fwd
>> code and, as I remember, saw 10G 64-byte wire rate using pktgen to
>> generate the traffic.
>>
>> Not sure why you would see a big performance drop, but I do not know your
>> application or code.
>>
>> > Thanks and regards
>> > Harsh & Hrishikesh
>> >
>> > On Thu, 8 Nov 2018 at 23:14, Wiles, Keith <keith.wiles at intel.com>
>> wrote:
>> >
>> >
>> >> On Nov 8, 2018, at 4:58 PM, Harsh Patel <thadodaharsh10 at gmail.com>
>> wrote:
>> >>
>> >> Thanks for your insight on the topic. Transmission is working with
>> >> the functions you mentioned. We tried to search for similar functions
>> >> for handling incoming packets but could not find anything. Can you
>> >> help us with that as well?
>> >>
>> >
>> > I do not know of a DPDK API set for the RX side. But in the DAPI
>> > (DPDK API) PoC I was working on and presented at the DPDK Summit last
>> > Sept., I did create an RX side version. The issue is that it is a bit
>> > tangled up in the DAPI PoC.
>> >
>> > The basic concept is that a call to RX a single packet does an
>> > rx_burst of N packets, keeping them in an mbuf list. The code would
>> > spin waiting for mbufs to arrive, or return quickly if a flag was set.
>> > When it did find RX mbufs it would return just the single mbuf and
>> > keep the list of mbufs for later requests until the list was empty,
>> > then do another rx_burst call.
>> >
>> > Sorry this is a really quick note on how it works. If you need more
>> details we can talk more later.
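
A minimal sketch of that single-packet receive wrapper, assuming a single RX
queue; the helper name recv_one, the cache variables, and the 'block' flag
are illustrative assumptions, not the actual DAPI PoC code:

    #include <stdbool.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RX_BURST 32

    /* Small cache of received mbufs so the caller can consume packets
     * one at a time while the NIC is still drained in bursts. */
    static struct rte_mbuf *rx_cache[RX_BURST];
    static uint16_t rx_cnt;   /* valid entries in rx_cache */
    static uint16_t rx_next;  /* next mbuf to hand out */

    /* Return one packet, refilling the cache with rte_eth_rx_burst()
     * whenever it runs empty. Spins until a packet arrives unless
     * 'block' is false, in which case it returns NULL immediately. */
    static struct rte_mbuf *
    recv_one(uint16_t port_id, uint16_t queue_id, bool block)
    {
        while (rx_next == rx_cnt) {
            rx_next = 0;
            rx_cnt = rte_eth_rx_burst(port_id, queue_id,
                                      rx_cache, RX_BURST);
            if (rx_cnt == 0 && !block)
                return NULL;
        }
        return rx_cache[rx_next++];
    }

Each call hands back exactly one mbuf; rte_eth_rx_burst() is only invoked
when the cache is exhausted, so the NIC is still serviced in bursts.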
>> >>
>> >> Regards,
>> >> Harsh and Hrishikesh.
>> >>
>> >>
>> >> On Thu, 8 Nov 2018 at 14:26, Wiles, Keith <keith.wiles at intel.com>
>> wrote:
>> >>
>> >>
>> >> > On Nov 8, 2018, at 8:24 AM, Harsh Patel <thadodaharsh10 at gmail.com>
>> wrote:
>> >> >
>> >> > Hi,
>> >> > We are working on a project where we are trying to integrate DPDK
>> >> > with another piece of software. We are able to pass packets from the
>> >> > other environment to the DPDK environment one by one. On the other
>> >> > hand, DPDK sends and receives packets in bursts. We want to know if
>> >> > there is any functionality in DPDK to convert single incoming packets
>> >> > into a burst of packets sent on the NIC and, similarly, to convert a
>> >> > burst of packets read from the NIC into packets handed to the other
>> >> > environment one at a time?
>> >>
>> >>
>> >> Search the docs or the lib/librte_ethdev directory for
>> >> rte_eth_tx_buffer_init, rte_eth_tx_buffer, ...
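
For the transmit direction, those buffering helpers let the application hand
over one packet at a time while the PMD still sees full bursts. A minimal
sketch, assuming a single port/queue and a periodic flush; setup_tx_buffer,
send_one, and flush_tx are illustrative names, not part of the DPDK API:

    #include <rte_ethdev.h>
    #include <rte_malloc.h>
    #include <rte_mbuf.h>

    #define TX_BURST 32

    static struct rte_eth_dev_tx_buffer *tx_buffer;

    /* One-time setup: allocate a buffer that batches up to TX_BURST mbufs. */
    static int
    setup_tx_buffer(int socket_id)
    {
        tx_buffer = rte_zmalloc_socket("tx_buffer",
                RTE_ETH_TX_BUFFER_SIZE(TX_BURST), 0, socket_id);
        if (tx_buffer == NULL)
            return -1;
        return rte_eth_tx_buffer_init(tx_buffer, TX_BURST);
    }

    /* Called once per packet coming from the other environment. The mbuf
     * is only queued; a real burst is transmitted once TX_BURST packets
     * have accumulated. */
    static void
    send_one(uint16_t port_id, uint16_t queue_id, struct rte_mbuf *m)
    {
        rte_eth_tx_buffer(port_id, queue_id, tx_buffer, m);
    }

    /* Call periodically so a partially filled buffer is not held forever. */
    static void
    flush_tx(uint16_t port_id, uint16_t queue_id)
    {
        rte_eth_tx_buffer_flush(port_id, queue_id, tx_buffer);
    }

rte_eth_tx_buffer() only queues the mbuf; the actual rte_eth_tx_burst()
happens when the buffer fills or is flushed, so a timer-driven flush is
needed to avoid holding the last few packets indefinitely.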
>> >>
>> >>
>> >>
>> >> > Thanks and regards
>> >> > Harsh Patel, Hrishikesh Hiraskar
>> >> > NITK Surathkal
>> >>
>> >> Regards,
>> >> Keith
>> >>
>> >
>> > Regards,
>> > Keith
>> >
>>
>> Regards,
>> Keith
>>
>>

