[dpdk-dev] Intel I350 fails to work with DPDK

sabu kurian sabu2kurian at gmail.com
Wed May 28 12:54:09 CEST 2014


Hi Bruce,

Thanks for the reply.

I even tried that before. With a burst size of 64 or 128 it simply fails.
The card sends out a few packets (some 400 packets of 74 bytes each)
and then freezes. For my application I'm trying to generate the peak
traffic possible with the link speed and the NIC.



On Wed, May 28, 2014 at 4:16 PM, Richardson, Bruce <
bruce.richardson at intel.com> wrote:

> > -----Original Message-----
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of sabu kurian
> > Sent: Wednesday, May 28, 2014 10:42 AM
> > To: dev at dpdk.org
> > Subject: [dpdk-dev] Intel I350 fails to work with DPDK
> >
> > I have asked a similar question before, no one replied though.
> >
> > I'm crafting my own packets in mbuf's (74 byte packets all) and sending
> it
> > using
> >
> > ret = rte_eth_tx_burst(port_ids[lcore_id], 0, m_pool, burst_size);
> >
> > When burst_size is 1, it does work, in the sense that the NIC keeps
> > sending packets at a little over 50 percent of the link rate. On a
> > 1000 Mbps link, the observed transmit rate of the NIC is 580 Mbps
> > (using Intel DPDK). But it should be possible to achieve at least a
> > 900 Mbps transmit rate with Intel DPDK and the I350 on a 1 Gbps link.
> >
> > Could someone help me out on this ?
> >
> > Thanks and regards
>
> Sending out a single packet at a time is going to have a very high
> overhead, as each call to tx_burst involves making PCI transactions (MMIO
> writes to the hardware ring pointer). To reduce this penalty you should
> look to send out the packets in bursts, thereby saving PCI bandwidth and
> splitting the cost of each MMIO write over multiple packets.
>
> Regards,
> /Bruce
>