[dpdk-dev] [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag cannot be received by default
Ferruh Yigit
ferruh.yigit at intel.com
Wed Oct 14 17:38:12 CEST 2020
On 9/30/2020 3:32 AM, Zhang, Qi Z wrote:
>
>
>> -----Original Message-----
>> From: Yang, SteveX <stevex.yang at intel.com>
>> Sent: Wednesday, September 30, 2020 9:32 AM
>> To: Zhang, Qi Z <qi.z.zhang at intel.com>; Ananyev, Konstantin
>> <konstantin.ananyev at intel.com>; dev at dpdk.org
>> Cc: Zhao1, Wei <wei.zhao1 at intel.com>; Guo, Jia <jia.guo at intel.com>; Yang,
>> Qiming <qiming.yang at intel.com>; Wu, Jingjing <jingjing.wu at intel.com>; Xing,
>> Beilei <beilei.xing at intel.com>
>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with vlan tag
>> cannot be received by default
>>
>>
>>
>>> -----Original Message-----
>>> From: Zhang, Qi Z <qi.z.zhang at intel.com>
>>> Sent: Wednesday, September 30, 2020 8:35 AM
>>> To: Ananyev, Konstantin <konstantin.ananyev at intel.com>; Yang, SteveX
>>> <stevex.yang at intel.com>; dev at dpdk.org
>>> Cc: Zhao1, Wei <wei.zhao1 at intel.com>; Guo, Jia <jia.guo at intel.com>;
>>> Yang, Qiming <qiming.yang at intel.com>; Wu, Jingjing
>>> <jingjing.wu at intel.com>; Xing, Beilei <beilei.xing at intel.com>
>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with
>>> vlan tag cannot be received by default
>>>
>>>
>>>
>>>> -----Original Message-----
>>>> From: Ananyev, Konstantin <konstantin.ananyev at intel.com>
>>>> Sent: Wednesday, September 30, 2020 7:02 AM
>>>> To: Zhang, Qi Z <qi.z.zhang at intel.com>; Yang, SteveX
>>>> <stevex.yang at intel.com>; dev at dpdk.org
>>>> Cc: Zhao1, Wei <wei.zhao1 at intel.com>; Guo, Jia <jia.guo at intel.com>;
>>>> Yang, Qiming <qiming.yang at intel.com>; Wu, Jingjing
>>>> <jingjing.wu at intel.com>; Xing, Beilei <beilei.xing at intel.com>
>>>> Subject: RE: [PATCH v4 3/5] net/ice: fix max mtu size packets with
>>>> vlan tag cannot be received by default
>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Yang, SteveX <stevex.yang at intel.com>
>>>>>> Sent: Monday, September 28, 2020 2:56 PM
>>>>>> To: dev at dpdk.org
>>>>>> Cc: Zhao1, Wei <wei.zhao1 at intel.com>; Guo, Jia
>>>>>> <jia.guo at intel.com>; Yang, Qiming <qiming.yang at intel.com>;
>>>>>> Zhang, Qi Z <qi.z.zhang at intel.com>; Wu, Jingjing
>>>>>> <jingjing.wu at intel.com>; Xing, Beilei <beilei.xing at intel.com>;
>>>>>> Ananyev, Konstantin <konstantin.ananyev at intel.com>; Yang, SteveX
>>>>>> <stevex.yang at intel.com>
>>>>>> Subject: [PATCH v4 3/5] net/ice: fix max mtu size packets with
>>>>>> vlan tag cannot be received by default
>>>>>>
>>>>>> testpmd initializes the default max packet length to 1518, which
>>>>>> doesn't include the VLAN tag size in the Ethernet overhead. Once a
>>>>>> max-MTU-length packet carrying a VLAN tag is sent, the packet
>>>>>> length exceeds 1518 and the packet is dropped directly on the NIC
>>>>>> HW side.
>>>>>>
>>>>>> ice can support dual VLAN tags, which need 8 more bytes of max
>>>>>> packet size, so configure the correct max packet size in the
>>>>>> dev_configure ops.
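
For reference, a back-of-the-envelope sketch of the arithmetic the commit
message describes; the constants mirror rte_ether.h and the ice driver's
ICE_ETH_OVERHEAD but are hard-coded here so the snippet stands alone:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative constants mirroring rte_ether.h / ice_ethdev.h. */
    #define ETHER_HDR_LEN     14  /* dst MAC + src MAC + EtherType */
    #define ETHER_CRC_LEN      4
    #define VLAN_TAG_SIZE      4
    #define ETH_OVERHEAD_QINQ \
            (ETHER_HDR_LEN + ETHER_CRC_LEN + 2 * VLAN_TAG_SIZE)

    int main(void)
    {
        uint32_t mtu = 1500;            /* default MTU */
        uint32_t max_rx_pkt_len = 1518; /* testpmd default */

        /* Untagged full-MTU frame fits exactly: 1500 + 14 + 4 = 1518. */
        printf("untagged: %u\n", mtu + ETHER_HDR_LEN + ETHER_CRC_LEN);
        /* One VLAN tag -> 1522 > 1518, dropped by the NIC. */
        printf("one tag : %u\n",
               mtu + ETHER_HDR_LEN + ETHER_CRC_LEN + VLAN_TAG_SIZE);
        /* ice reserves room for two tags (QinQ): 1500 + 26 = 1526. */
        printf("two tags: %u\n", mtu + ETH_OVERHEAD_QINQ);
        printf("adjust? : %s\n",
               mtu + ETH_OVERHEAD_QINQ > max_rx_pkt_len ? "yes" : "no");
        return 0;
    }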
>>>>>>
>>>>>> Fixes: 50cc9d2a6e9d ("net/ice: fix max frame size")
>>>>>>
>>>>>> Signed-off-by: SteveX Yang <stevex.yang at intel.com>
>>>>>> ---
>>>>>> drivers/net/ice/ice_ethdev.c | 11 +++++++++++
>>>>>> 1 file changed, 11 insertions(+)
>>>>>>
>>>>>> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
>>>>>> index cfd357b05..6b7098444 100644
>>>>>> --- a/drivers/net/ice/ice_ethdev.c
>>>>>> +++ b/drivers/net/ice/ice_ethdev.c
>>>>>> @@ -3146,6 +3146,7 @@ ice_dev_configure(struct rte_eth_dev *dev)
>>>>>>  	struct ice_adapter *ad =
>>>>>>  		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
>>>>>>  	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
>>>>>> +	uint32_t frame_size = dev->data->mtu + ICE_ETH_OVERHEAD;
>>>>>>  	int ret;
>>>>>>
>>>>>>  	/* Initialize to TRUE. If any of Rx queues doesn't meet the
>>>>>> @@ -3157,6 +3158,16 @@ ice_dev_configure(struct rte_eth_dev *dev)
>>>>>>  	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
>>>>>>  		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
>>>>>>
>>>>>> +	/**
>>>>>> +	 * Considering QinQ packet, max frame size should be equal or
>>>>>> +	 * larger than total size of MTU and Ether overhead.
>>>>>> +	 */
>>>>>> +	if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) {
>>>>>
>>>>> Why do we need this check?
>>>>> Can we just call ice_mtu_set directly?
>>>>
>>>> I think that without that check we can silently overwrite the
>>>> dev_conf.rxmode.max_rx_pkt_len value provided by the user.
>>>
>>> OK, I see
>>>
>>> But I still have one question:
>>> dev->data->mtu is initialized to 1518 by default, but if the application
>>> sets dev_conf.rxmode.max_rx_pkt_len = 1000 in dev_configure,
>>> does that mean we will still set the MTU to 1518? Is this expected?
>>>
>>
>> max_rx_pkt_len should be at least larger than the MTU, so we should raise
>> max_rx_pkt_len (e.g. 1518) to hold the expected MTU value (e.g. 1500).
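
To make the numbers concrete, here is the guard from the hunk above
annotated with the two scenarios discussed in this thread (values are
illustrative, assuming ICE_ETH_OVERHEAD = 26 with CRC and both VLAN tags
counted):

    /* frame_size = dev->data->mtu + ICE_ETH_OVERHEAD = 1500 + 26 = 1526.
     *
     * Case 1: max_rx_pkt_len = 1518 (testpmd default)
     *   1526 > 1518 -> ice_mtu_set() raises max_rx_pkt_len to 1526.
     * Case 2: max_rx_pkt_len = 1000 (deliberate application choice)
     *   1526 > 1000 -> also raised to 1526, silently overriding the
     *   application's value, which is Qi's concern above. */
    if (frame_size > dev->data->dev_conf.rxmode.max_rx_pkt_len) {
        ret = ice_mtu_set(dev, dev->data->mtu);
        if (ret != 0)
            return ret;
    }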
>
> OK, this describes the problem more generally and is easier to understand;
> it should replace the existing code comment and commit log.
> Please send a new version with the reworded text.
>
I didn't really get this set.
The application explicitly sets 'max_rx_pkt_len' to '1518', and a frame bigger
than this size is dropped.
Isn't this how it should be? Why are we trying to overwrite the user
configuration in the PMD to prevent this?
During eth_dev allocation, the MTU is set to the default '1500' by the ethdev
layer, and testpmd sets 'max_rx_pkt_len' by default to '1518'.
I think Qi's concern above is valid: what if the user set 'max_rx_pkt_len' to
'1000' and meant it? The PMD will not honor the user config.
Why not simply increase the default 'max_rx_pkt_len' in testpmd?
And I guess even better, what we need is to tell the application what frame
overhead the PMD accepts, so that the application can set the proper
'max_rx_pkt_len' value per port for a given/requested MTU value.
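
One possible shape for that, sketched under the assumption that the PMD
fills dev_info's 'max_rx_pktlen' and 'max_mtu' fields; the helper name is
hypothetical:

    #include <rte_ethdev.h>

    /* Hypothetical helper: derive the per-port frame overhead from
     * fields dev_info already carries, then size max_rx_pkt_len for a
     * requested MTU. */
    static int
    frame_len_for_mtu(uint16_t port_id, uint16_t mtu, uint32_t *frame_len)
    {
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        /* The overhead falls out of the difference the PMD reports. */
        *frame_len = mtu + (dev_info.max_rx_pktlen - dev_info.max_mtu);
        return 0;
    }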
@Ian, cc'ed, was complaining about almost the same thing years ago; these PMD
overhead macros and 'max_mtu'/'min_mtu' were added because of that. Perhaps he
has a solution now?
And why can't this same thing happen with other PMDs? If this is a problem for
all PMDs, we should solve it at another level, not only for some PMDs.
>
>> Generally, the MTU value is adjustable by the user (e.g. 'ip link set
>> ens801f0 mtu 1400'); hence, we just adjust max_rx_pkt_len to satisfy the
>> MTU requirement.
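
For completeness, the DPDK-level analogue of that 'ip link' adjustment is
rte_eth_dev_set_mtu(), which dispatches to the PMD's mtu_set op
(ice_mtu_set here); a minimal sketch:

    #include <rte_ethdev.h>

    /* Runtime MTU change, akin to 'ip link set ens801f0 mtu 1400'.
     * The PMD's mtu_set op then recomputes the frame size as
     * mtu + overhead. */
    static int
    shrink_mtu(uint16_t port_id)
    {
        return rte_eth_dev_set_mtu(port_id, 1400);
    }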
>>
>>> Should we just call ice_mtu_set(dev, dev_conf.rxmode.max_rx_pkt_len)
>>> here?
>> ice_mtu_set(dev, mtu) will append the Ether overhead to derive
>> frame_size/max_rx_pkt_len, so we need to pass the MTU value as the 2nd
>> parameter, not the max_rx_pkt_len.
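
A simplified paraphrase of what ice_mtu_set() does with its 2nd parameter
(bounds and port-state checks elided; not the full function):

    #include <rte_ethdev_driver.h> /* driver-side struct rte_eth_dev */

    /* Sketch: ice_mtu_set() adds ICE_ETH_OVERHEAD itself, so callers
     * pass an MTU; passing max_rx_pkt_len would count the overhead
     * twice. ICE_ETH_OVERHEAD comes from ice_ethdev.h. */
    static int
    mtu_set_sketch(struct rte_eth_dev *dev, uint16_t mtu)
    {
        uint32_t frame_size = (uint32_t)mtu + ICE_ETH_OVERHEAD;

        dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
        return 0;
    }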
>>
>>>
>>>
>>>>
>>>>> And please remove the above comment, since the Ether overhead is
>>>>> already considered in ice_mtu_set.
>> The Ether overhead is already considered in ice_mtu_set, but it also has
>> to be considered in the condition that decides whether ice_mtu_set needs
>> to be invoked at all. So the comment should probably remain before this
>> if() condition.
>>
>>>>>
>>>>>
>>>>>> +		ret = ice_mtu_set(dev, dev->data->mtu);
>>>>>> +		if (ret != 0)
>>>>>> +			return ret;
>>>>>> +	}
>>>>>> +
>>>>>>  	ret = ice_init_rss(pf);
>>>>>>  	if (ret) {
>>>>>>  		PMD_DRV_LOG(ERR, "Failed to enable rss for PF");
>>>>>> --
>>>>>> 2.17.1