[dpdk-dev] [PATCH] ixgbe: avoid unnecessary break when checking at the tail of rx hwring

Xu, Qian Q qian.q.xu at intel.com
Mon Mar 28 04:30:50 CEST 2016


Jianbo,
Could you tell me a case that reproduces the issue? We can help evaluate the performance impact on ixgbe, but I'm not sure how to check whether your patch really fixes a problem, because I don't know how to reproduce it. Could you first explain how to reproduce your issue? Or have you not been able to reproduce it yourself?

Thanks
Qian


-----Original Message-----
From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jianbo Liu
Sent: Friday, March 25, 2016 4:53 PM
To: Ananyev, Konstantin
Cc: Richardson, Bruce; Lu, Wenzhuo; Zhang, Helin; dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH] ixgbe: avoid unnecessary break when checking at the tail of rx hwring

On 22 March 2016 at 22:27, Ananyev, Konstantin <konstantin.ananyev at intel.com> wrote:
>
>
>> -----Original Message-----
>> From: Jianbo Liu [mailto:jianbo.liu at linaro.org]
>> Sent: Monday, March 21, 2016 2:27 AM
>> To: Richardson, Bruce
>> Cc: Lu, Wenzhuo; Zhang, Helin; Ananyev, Konstantin; dev at dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH] ixgbe: avoid unnecessary break when
>> checking at the tail of rx hwring
>>
>> On 18 March 2016 at 18:03, Bruce Richardson <bruce.richardson at intel.com> wrote:
>> > On Thu, Mar 17, 2016 at 10:20:01AM +0800, Jianbo Liu wrote:
>> >> On 16 March 2016 at 19:14, Bruce Richardson <bruce.richardson at intel.com> wrote:
>> >> > On Wed, Mar 16, 2016 at 03:51:53PM +0800, Jianbo Liu wrote:
>> >> >> Hi Wenzhuo,
>> >> >>
>> >> >> On 16 March 2016 at 14:06, Lu, Wenzhuo <wenzhuo.lu at intel.com> wrote:
>> >> >> > Hi Jianbo,
>> >> >> >
>> >> >> >
>> >> >> >> -----Original Message-----
>> >> >> >> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Jianbo 
>> >> >> >> Liu
>> >> >> >> Sent: Monday, March 14, 2016 10:26 PM
>> >> >> >> To: Zhang, Helin; Ananyev, Konstantin; dev at dpdk.org
>> >> >> >> Cc: Jianbo Liu
>> >> >> >> Subject: [dpdk-dev] [PATCH] ixgbe: avoid unnecessary break
>> >> >> >> when checking at the tail of rx hwring
>> >> >> >>
>> >> >> >> When checking the rx ring queue, it's possible that the loop
>> >> >> >> will break at the tail while there are still packets at the head of the queue.
>> >> >> > Could you give more details about the scenario in which this issue is hit? Thanks.
>> >> >> >
>> >> >>
>> >> >> vPMD places an extra RTE_IXGBE_DESCS_PER_LOOP - 1 empty
>> >> >> descriptors at the end of the hwring to avoid overrunning the
>> >> >> ring when checking descriptors on the rx side.
>> >> >>
>> >> >> In the loop in _recv_raw_pkts_vec(), we check 4 descriptors
>> >> >> each time. If all 4 DD bits are set, all 4 packets are
>> >> >> received. That's fine in the middle of the ring.
>> >> >> But when we come to the end of the hwring with fewer than 4
>> >> >> descriptors left, we still have to check 4 descriptors at a
>> >> >> time, so the extra empty descriptors are checked along with
>> >> >> them.
>> >> >> This time the number of received packets is necessarily less
>> >> >> than 4, and we break out of the loop because of the condition
>> >> >> "var != RTE_IXGBE_DESCS_PER_LOOP".
>> >> >> So the problem arises: there could be more packets at the
>> >> >> beginning of the hwring still waiting to be received.
>> >> >> I think this fix avoids that situation, and at least reduces
>> >> >> the latency for the packets at the head of the ring.
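>> >> >>
>> >> >> To illustrate, here is a minimal standalone model of the loop
>> >> >> shape (my own sketch, not the driver code itself; the real
>> >> >> loop uses SSE intrinsics and different names):
>> >> >>
>> >> >> #include <stdio.h>
>> >> >>
>> >> >> #define DESCS_PER_LOOP 4  /* RTE_IXGBE_DESCS_PER_LOOP */
>> >> >> #define RING_SIZE      16 /* real rings are larger, e.g. 128 */
>> >> >>
>> >> >> int main(void)
>> >> >> {
>> >> >>     /* dd[i] != 0 means descriptor i holds a received packet.
>> >> >>      * Entries RING_SIZE..RING_SIZE+2 model the permanent
>> >> >>      * empty padding, so the 4-wide check never overruns. */
>> >> >>     int dd[RING_SIZE + DESCS_PER_LOOP - 1] = {0};
>> >> >>     int rx_tail = 14; /* sw already consumed descs 0..13 */
>> >> >>     int pos, i, var, nb_rx = 0;
>> >> >>
>> >> >>     dd[14] = dd[15] = 1; /* 2 packets at the ring tail */
>> >> >>     for (i = 0; i < 6; i++)
>> >> >>         dd[i] = 1;       /* 6 more wrapped to the head */
>> >> >>
>> >> >>     for (pos = rx_tail; nb_rx < 32; pos += DESCS_PER_LOOP) {
>> >> >>         for (var = 0; var < DESCS_PER_LOOP; var++)
>> >> >>             if (!dd[pos + var])
>> >> >>                 break;
>> >> >>         nb_rx += var;
>> >> >>         if (var != DESCS_PER_LOOP)
>> >> >>             break; /* the tail padding forces an early exit */
>> >> >>     }
>> >> >>     /* prints 2, although 8 packets are ready */
>> >> >>     printf("burst got %d pkts, 6 wait at ring head\n", nb_rx);
>> >> >>     return 0;
>> >> >> }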
>> >> >>
>> >> > Packets are always received in order from the NIC, so no packets 
>> >> > ever get left behind or skipped on an RX burst call.
>> >> >
>> >> > /Bruce
>> >> >
>> >>
>> >> I know packets are received in order and no packets are
>> >> skipped, but some will be left behind, as I explained above.
>> >> vPMD will not receive the nb_pkts requested by one RX burst
>> >> call, and the packets at the beginning of the hwring will still
>> >> be waiting to be received at the next call.
>> >>
>> >> Thanks!
>> >> Jianbo
>> > Hi Jianbo,
>> >
>> > ok, I understand now. I'm not sure that this is a significant
>> > problem though, since we are working in polling mode. Is there a
>> > performance impact from your change? I don't think we can reduce performance just to fix this.
>> >
>> > Regards,
>> > /Bruce
>> It will be a problem because the probability of hitting it is high.
>> Considering an rx hwring size of 128 and an rx burst of 32, the
>> probability can be 32/128.
>> I know this change is critical, so I want you (and the maintainers)
>> to do a full evaluation of throughput/latency before drawing a conclusion.
>
> I am still not sure what problem you are trying to solve here.
> Yes, a recv_raw_pkts_vec() call won't wrap around the HW ring boundary,
> and yes, it can return fewer packets than are actually available from the HW.
> Though, as Bruce pointed out, they'll be returned to the user by the next call.
Have you considered the interval between these two calls, and how long it could be?
If the application is a simple one like l2fwd/testpmd, that's fine.
But if the interval is long because the application has more work to do, it's a different story.
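
As an illustration, a minimal sketch of such an application loop
(heavy_processing is a hypothetical placeholder for whatever work
the app does per burst):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 32

/* hypothetical stand-in for the app's real per-burst work */
static void heavy_processing(struct rte_mbuf **pkts, uint16_t n);

static void poll_loop(uint16_t port, uint16_t queue)
{
    struct rte_mbuf *pkts[BURST];
    uint16_t nb_rx;

    for (;;) {
        nb_rx = rte_eth_rx_burst(port, queue, pkts, BURST);
        /* if the burst stopped early at the ring tail, packets at
         * the ring head sit in the HW ring while this runs */
        heavy_processing(pkts, nb_rx);
    }
}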

> Actually, recv_pkts_bulk_alloc() works in a similar way.
> Why do you consider that a problem?
The driver should pull packets out of the hardware and hand them to the application as fast as possible.
If not, there is a possibility that more incoming packets will overflow the hardware queue.

I did some testing with pktgen-dpdk, and it behaves a little better with this patch (at least no worse).
Sorry I can't provide more concrete evidence because I don't have IXIA/Spirent equipment at hand.
That's why I asked you to do a full evaluation before rejecting this patch. :-)

Thanks!

> Konstantin
>
>>
>> Jianbo

