[dpdk-users] Query on handling packets

Wiles, Keith keith.wiles at intel.com
Wed Nov 14 16:15:42 CET 2018



> On Nov 14, 2018, at 7:54 AM, Harsh Patel <thadodaharsh10 at gmail.com> wrote:
> 
> Hello,
> This is a link to the complete source code of our project :- https://github.com/ns-3-dpdk-integration/ns-3-dpdk
> For the description of the project, look through this :- https://ns-3-dpdk-integration.github.io/
> Once you go through it, you will have a basic understanding of the project.
> Installation instructions link are provided in the github.io page.
> 
> In the code we mentioned above, the master branch contains the implementation of the logic using rte_rings which we mentioned at the very beginning of the discussion. There is a branch named "newrxtx" which contains the implementation according to the logic you provided.
> 
> We would like you to take a look at the code in newrxtx branch. (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/tree/newrxtx)
> In the code in this branch, go to the ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/ directory. Here we have implemented the DpdkNetDevice model. This model contains the code which implements the whole model providing interaction between ns-3 and DPDK. We would like you to take a look at our Read function (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L626) and Write function (https://github.com/ns-3-dpdk-integration/ns-3-dpdk/blob/newrxtx/ns-allinone-3.28.1/ns-3.28.1/src/fd-net-device/model/dpdk-net-device.cc#L576). These contain the logic you suggested.

A couple of points for performance with DPDK.
 - Never use memcpy in the data path unless it is absolutely required, and always try to avoid copying all of the data. In some cases you may want to use memcpy or rte_memcpy to replace only a small amount of data or to grab a copy of some small amount of data.
 - Never use malloc in the data path, meaning never call malloc on every packet; use a list of buffers allocated up front if you need buffers of some type.
 - DPDK mempools are highly tuned; use them for fixed-size buffers if you can (see the sketch below).
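
As a minimal sketch of the "no malloc in the data path" point, something like the following pre-allocates fixed-size buffers from a mempool at init time; the pool name and sizes are only examples, not anything from your code:

    #include <rte_mbuf.h>
    #include <rte_mempool.h>
    #include <rte_errno.h>

    /* Created once at init time, never in the data path. */
    static struct rte_mempool *pkt_pool;

    static int
    init_pkt_pool(int socket_id)
    {
        /* 8192 mbufs, per-lcore cache of 256, default data room size. */
        pkt_pool = rte_pktmbuf_pool_create("pkt_pool", 8192, 256, 0,
                                           RTE_MBUF_DEFAULT_BUF_SIZE,
                                           socket_id);
        return (pkt_pool == NULL) ? -rte_errno : 0;
    }

    /* Data path: take a buffer from the pool instead of calling malloc. */
    static struct rte_mbuf *
    get_tx_buffer(void)
    {
        return rte_pktmbuf_alloc(pkt_pool);  /* NULL if the pool is empty */
    }

The per-lcore cache is what keeps the alloc/free cheap, so size it for your burst sizes.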

I believe the DPDK docs include a performance white paper or some information about optimizing packet processing in DPDK. If you have not read it, you may want to do so.

> 
> Can you go through this and suggest some changes or point out any mistakes in our code? If you need any help or have any doubts, ping us.
> 
> Thanks and Regards,
> Harsh & Hrishikesh
> 
> On Tue, 13 Nov 2018 at 19:17, Wiles, Keith <keith.wiles at intel.com> wrote:
> 
> 
> > On Nov 12, 2018, at 8:25 PM, Harsh Patel <thadodaharsh10 at gmail.com> wrote:
> > 
> > Hello,
> > It would be really helpful if you can provide us a link (for both Tx and Rx) to the project you mentioned earlier where you worked on a similar problem, if possible. 
> > 
> 
> At this time I can not provide a link. I will try and see what I can do, but do not hold your breath; it could be a while, as we have to go through a lot of legal stuff. If you can, try the VTune tool from Intel for x86 systems, if you can get a copy for your platform, as it can tell you a lot about the code and where the performance issues are located. If you are not running Intel x86 then my code may not work for you; I do not remember if you told me which platform.
> 
> 
> > Thanks and Regards, 
> > Harsh & Hrishikesh.
> > 
> > On Mon, 12 Nov 2018 at 01:15, Harsh Patel <thadodaharsh10 at gmail.com> wrote:
> > Thanks a lot for all the support. We are looking into our work as of now and will contact you once we are done checking it completely from our side. Thanks for the help.
> > 
> > Regards,
> > Harsh and Hrishikesh
> > 
> > On Sat, 10 Nov 2018 at 11:47, Wiles, Keith <keith.wiles at intel.com> wrote:
> > Please make sure to send your emails in plain text format. The Mac mail program loves to use rich-text format if the original email uses it, even though I have told it to only send plain text :-(
> > 
> > > On Nov 9, 2018, at 4:09 AM, Harsh Patel <thadodaharsh10 at gmail.com> wrote:
> > > 
> > > We have implemented the logic for Tx/Rx as you suggested. We compared the obtained throughput with another version of the same application that uses Linux raw sockets. 
> > > Unfortunately, the throughput we receive in our DPDK application is less by a good margin. Is there any way we can optimize our implementation, or anything that we are missing?
> > > 
> > 
> > The PoC code I was developing for DAPI did not have any performance issues; it ran just as fast in my limited testing. I converted the l3fwd code and I saw 10G 64-byte wire rate, as I remember, using pktgen to generate the traffic.
> > 
> > Not sure why you would see a big performance drop, but I do not know your application or code.
> > 
> > > Thanks and regards
> > > Harsh & Hrishikesh
> > > 
> > > On Thu, 8 Nov 2018 at 23:14, Wiles, Keith <keith.wiles at intel.com> wrote:
> > > 
> > > 
> > >> On Nov 8, 2018, at 4:58 PM, Harsh Patel <thadodaharsh10 at gmail.com> wrote:
> > >> 
> > >> Thanks for your insight on the topic. Transmission is working with the functions you mentioned. We tried to search for some similar functions for handling incoming packets but could not find anything. Can you help us with that as well?
> > >> 
> > > 
> > > I do not know of a DPDK API set for the RX side. But in the DAPI (DPDK API) PoC I was working on and presented at the DPDK Summit last Sept., I did create an RX side version. The issue is that it is a bit tangled up in the DAPI PoC.
> > > 
> > > The basic concept is that a call to RX a single packet does an rx_burst of N packets, keeping them in an mbuf list. The code would spin waiting for mbufs to arrive, or return quickly if a flag was set. When it did find RX mbufs it would just return a single mbuf and keep the list of mbufs for later requests until the list is empty, then do another rx_burst call.
> > > 
> > > Sorry, this is a really quick note on how it works. If you need more details we can talk more later.
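
To make that quick note a bit more concrete, here is a rough sketch of the caching pattern described above; this is not the DAPI code, and the struct, function names and burst size are made up for illustration:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RX_BURST 32

    /* Per-queue cache of mbufs left over from the last rx_burst. */
    struct rx_cache {
        struct rte_mbuf *pkts[RX_BURST];
        uint16_t count;   /* number of mbufs currently cached      */
        uint16_t next;    /* index of the next mbuf to hand out    */
    };

    /*
     * Return one packet at a time. Refill the cache with a burst only
     * when it is empty; otherwise hand out the next cached mbuf.
     */
    static struct rte_mbuf *
    rx_one_packet(uint16_t port, uint16_t queue, struct rx_cache *c)
    {
        if (c->next >= c->count) {
            c->count = rte_eth_rx_burst(port, queue, c->pkts, RX_BURST);
            c->next = 0;
            if (c->count == 0)
                return NULL;  /* caller can retry or go back to its loop */
        }
        return c->pkts[c->next++];
    }

The caller still sees a one-packet-at-a-time interface, but the NIC is always serviced in bursts.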
> > >> 
> > >> Regards,
> > >> Harsh and Hrishikesh.
> > >> 
> > >> 
> > >> On Thu, 8 Nov 2018 at 14:26, Wiles, Keith <keith.wiles at intel.com> wrote:
> > >> 
> > >> 
> > >> > On Nov 8, 2018, at 8:24 AM, Harsh Patel <thadodaharsh10 at gmail.com> wrote:
> > >> > 
> > >> > Hi,
> > >> > We are working on a project where we are trying to integrate DPDK with
> > >> > another piece of software. We are able to obtain packets from the other
> > >> > environment in the DPDK environment one by one. On the other hand, DPDK
> > >> > sends and receives packets in bursts. We want to know if there is any
> > >> > functionality in DPDK to turn single incoming packets into a burst of
> > >> > packets sent on the NIC and, similarly, to take a burst of packets read
> > >> > from the NIC and hand them to the other environment sequentially?
> > >> 
> > >> 
> > >> Search in the docs or the lib/librte_ethdev directory for rte_eth_tx_buffer_init, rte_eth_tx_buffer, ...
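
For reference, a rough sketch of how those buffered TX calls can fit together; the buffer size, port/queue handling and flush policy here are only placeholders, not taken from your code:

    #include <rte_ethdev.h>
    #include <rte_malloc.h>

    #define TX_BUFFER_PKTS 32

    static struct rte_eth_dev_tx_buffer *tx_buffer;

    /* Init time: allocate and initialize a TX buffer for up to 32 packets. */
    static int
    setup_tx_buffer(uint16_t port)
    {
        tx_buffer = rte_zmalloc_socket("tx_buffer",
                                       RTE_ETH_TX_BUFFER_SIZE(TX_BUFFER_PKTS),
                                       0, rte_eth_dev_socket_id(port));
        if (tx_buffer == NULL)
            return -1;
        return rte_eth_tx_buffer_init(tx_buffer, TX_BUFFER_PKTS);
    }

    /* Data path: queue one packet; a burst is sent when the buffer fills. */
    static void
    send_one_packet(uint16_t port, uint16_t queue, struct rte_mbuf *m)
    {
        rte_eth_tx_buffer(port, queue, tx_buffer, m);
    }

    /* Call this periodically so partially filled buffers do not sit forever. */
    static void
    flush_tx(uint16_t port, uint16_t queue)
    {
        rte_eth_tx_buffer_flush(port, queue, tx_buffer);
    }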
> > >> 
> > >> 
> > >> 
> > >> > Thanks and regards
> > >> > Harsh Patel, Hrishikesh Hiraskar
> > >> > NITK Surathkal
> > >> 
> > >> Regards,
> > >> Keith
> > >> 
> > > 
> > > Regards,
> > > Keith
> > > 
> > 
> > Regards,
> > Keith
> > 
> 
> Regards,
> Keith
> 

Regards,
Keith


