[dpdk-dev] Issue with MTU/max_rx_pkt_len handling by different NICs/PMD drivers

Nitin Katiyar nitin.katiyar at ericsson.com
Tue Oct 24 14:25:38 CEST 2017


Hi,
While testing MTU configuration of physical ports using OVS-DPDK, we have found that Fortville and Niantic behave differently for tagged packets. Both allow TX of packets with size up to the programmed MTU value, but in the receive direction Fortville drops packets of size equal to the configured MTU. Additionally, Fortville does not report any error/drop counter when packets larger than the configured MTU (max frame size) are received. On Niantic we can see error counters incrementing when packets larger than the MTU are received.

When ports are started, the device attribute max_rx_pkt_len is set during device/queue init by the application (OVS in our case), and this max_rx_pkt_len is used to program a hardware register in the device, which in turn determines the maximum size of packet/frame it can receive.
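To make the init path concrete, here is a sketch of how an application programs this limit with the DPDK ethdev API of that era (field names per the then-current struct rte_eth_conf; the port/queue variables are hypothetical placeholders, and this fragment assumes the DPDK headers are available):

```c
/* Sketch: programming the receive frame-size limit at configure time.
 * It is this single value that each PMD translates into the NIC's
 * max-frame-size register -- and that translation differs per driver. */
struct rte_eth_conf port_conf = {
    .rxmode = {
        .max_rx_pkt_len = 1518,  /* e.g. MTU 1500 + 14B Ether hdr + 4B CRC */
    },
};

/* port_id, n_rxq, n_txq are assumed to be set up elsewhere. */
int ret = rte_eth_dev_configure(port_id, n_rxq, n_txq, &port_conf);
```

Whether a VLAN-tagged frame of exactly 1518 bytes (plus tag) then passes depends entirely on the PMD/NIC interpretation, which is the inconsistency described below.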
What we have found during testing is that Niantic can receive tagged/untagged packets of size equal to max_rx_pkt_len, but Fortville can only receive tagged packets (single tag) of size <= (max_rx_pkt_len - 4). The Niantic datasheet mentions that the device implicitly accounts for VLAN tag(s) in addition to the programmed maximum frame size, which is not the case for Fortville. This causes issues with MTU settings and the maximum frame size the NIC can receive for tagged versus untagged traffic.
We have tested this with OVS-DPDK, which uses the device attribute max_rx_pkt_len to set the max frame size in accordance with the configured MTU of the port. However, ixgbe (Niantic) and i40e (Fortville) interpret it differently. I looked at some other PMD drivers, and they interpret dev_conf.rxmode.max_rx_pkt_len differently as well: some add allowance for one or two VLAN tags, a few don't include any, and some use the field for other purposes. This creates MTU inconsistencies when running the same application on different NICs, and the PMD drivers need to be fixed to behave consistently with respect to MTU/max frame size settings.

Regards,
Nitin Katiyar
