[dpdk-users] Issue with Pktgen and OVS-DPDK

Chen, Junjie J junjie.j.chen at intel.com
Thu Jan 11 12:13:46 CET 2018


Great, it would be better to send your email in plain text format, so that others can read it in most email clients.

Cheers
JJ


> -----Original Message-----
> From: wang.yong19 at zte.com.cn [mailto:wang.yong19 at zte.com.cn]
> Sent: Thursday, January 11, 2018 6:51 PM
> To: Chen, Junjie J <junjie.j.chen at intel.com>
> Cc: qin.chunhua at zte.com.cn; Hu, Xuekun <xuekun.hu at intel.com>; Wiles,
> Keith <keith.wiles at intel.com>; Gabriel.Ionescu at enea.com; Tan, Jianfeng
> <jianfeng.tan at intel.com>; users at dpdk.org
> Subject: Re: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> 
> This patch works in our VMs.
> We really appreciate for your help!
> 
> 
> ------------------origin------------------
> From: <junjie.j.chen at intel.com>;
> To: Qin Chunhua 10013690;
> Cc: <xuekun.hu at intel.com>; Wang Yong 10032886; <keith.wiles at intel.com>;
> <Gabriel.Ionescu at enea.com>; <jianfeng.tan at intel.com>;
> <users at dpdk.org>;
> Date: 2018-01-11 17:35
> Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> 
> Could you please try this patch for app/pktgen.c:
> 
> @@ -877,6 +877,7 @@ pktgen_setup_cb(struct rte_mempool *mp,
> {
>     pkt_data_t *data = (pkt_data_t *)opaque;
>     struct rte_mbuf *m = (struct rte_mbuf *)obj;
> +   pktmbuf_reset(m);
>     port_info_t *info;
>     pkt_seq_t *pkt;
>     uint16_t qid;
> 
> 
> it works on my setup.
> 
> Cheers
> JJ
> 
> 
> > -----Original Message-----
> > From: qin.chunhua at zte.com.cn [mailto:qin.chunhua at zte.com.cn]
> > Sent: Wednesday, January 10, 2018 7:45 PM
> > To: Chen, Junjie J <junjie.j.chen at intel.com>
> > Cc: Hu, Xuekun <xuekun.hu at intel.com>; wang.yong19 at zte.com.cn;
> Wiles,
> > Keith <keith.wiles at intel.com>; Gabriel.Ionescu at enea.com; Tan, Jianfeng
> > <jianfeng.tan at intel.com>; users at dpdk.org
> > Subject: Re: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> >
> > Hi,
> > Thanks a lot for your advice.
> > We used pktgen-3.0.10 + dpdk-17.02.1 + virtio 1.0 with the two
> > patches below applied, and the problem was resolved.
> > Now we have met a new problem in the above setup. We set the MAC of
> > the virtio port before we start generating flow.
> > At first, everything is OK. Then we stop the flow and restart the
> > same flow without any other modifications.
> > We found that the source MAC of the flow was different from what we
> > had set on the virtio port.
> > Moreover, the source MAC was different every time we restarted the flow.
> > What's going on? Do you know of any patches that fix this problem if
> > we can't change the version of virtio?
> > Looking forward to receiving your reply. Thank you!
> >
> >
> >
> > ------------------Original Message------------------
> > From: <junjie.j.chen at intel.com>;
> > To: <xuekun.hu at intel.com>; Wang Yong 10032886; <keith.wiles at intel.com>;
> > Cc: <Gabriel.Ionescu at enea.com>; <jianfeng.tan at intel.com>;
> > <users at dpdk.org>;
> > Date: 2018-01-10 09:47
> > Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > 
> > Starting from qemu 2.7, virtio defaults to 1.0 instead of 0.9, which
> > adds a flag (VIRTIO_F_VERSION_1) to the device features.
> >
> > Actually, qemu uses disable-legacy=on,disable-modern=off to support
> > virtio 1.0, and disable-legacy=off,disable-modern=on to support
> > virtio 0.9. So you can use virtio 0.9 on qemu 2.7+ to work around this.
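A minimal sketch of the workaround JJ describes. Only the disable-legacy/disable-modern pair comes from the thread; the chardev/netdev/device wiring and the socket path are illustrative assumptions, and the elided `...` stands for the rest of the VM definition:

```shell
# Force legacy virtio 0.9 on qemu 2.7+ (where modern/1.0 is the default)
# for a vhost-user backed virtio-net device.
qemu-system-x86_64 ... \
    -chardev socket,id=char1,path=/tmp/vhost-user1 \
    -netdev type=vhost-user,id=net1,chardev=char1 \
    -device virtio-net-pci,netdev=net1,disable-legacy=off,disable-modern=on
```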
> >
> > Cheers
> > JJ
> >
> >
> > > -----Original Message-----
> > > From: Hu, Xuekun
> > > Sent: Wednesday, January 10, 2018 9:32 AM
> > > To: wang.yong19 at zte.com.cn; Wiles, Keith <keith.wiles at intel.com>
> > > Cc: Chen, Junjie J <junjie.j.chen at intel.com>;
> > > Gabriel.Ionescu at enea.com; Tan, Jianfeng <jianfeng.tan at intel.com>;
> > > users at dpdk.org
> > > Subject: RE: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >
> > > Maybe the new qemu (starting from 2.8) introduced some new features
> > > that break compatibility between pktgen and dpdk?
> > >
> > > -----Original Message-----
> > > From: wang.yong19 at zte.com.cn [mailto:wang.yong19 at zte.com.cn]
> > > Sent: Tuesday, January 09, 2018 10:30 PM
> > > To: Wiles, Keith <keith.wiles at intel.com>
> > > Cc: Chen, Junjie J <junjie.j.chen at intel.com>; Hu, Xuekun
> > > <xuekun.hu at intel.com>; Gabriel.Ionescu at enea.com; Tan, Jianfeng
> > > <jianfeng.tan at intel.com>; users at dpdk.org
> > > Subject: Re: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >
> > > Hi,
> > > We used pktgen-3.0.10 + dpdk-17.02.1 with the two patches below
> > > applied, and the problem is resolved.
> > > But when we use pktgen-3.4.6 + dpdk-17.11 (which already include the
> > > two patches below), the problem remains.
> > > It seems that there is still something wrong with pktgen-3.4.6 and
> > > dpdk-17.11.
> > >
> > >
> > > ------------------origin------------------
> > > From: <keith.wiles at intel.com>;
> > > To: <junjie.j.chen at intel.com>;
> > > Cc: <xuekun.hu at intel.com>; <Gabriel.Ionescu at enea.com>;
> > > <jianfeng.tan at intel.com>; <users at dpdk.org>;
> > > Date: 2018-01-09 22:04
> > > Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > >
> > >
> > > > On Jan 9, 2018, at 7:00 AM, Chen, Junjie J
> > > > <junjie.j.chen at intel.com>
> > wrote:
> > > >
> > > > Hi
> > > > There are two defects that may cause this issue:
> > > >
> > > > 1) in pktgen, see this patch: [dpdk-dev] [PATCH] pktgen-dpdk: fix
> > > > low performance in VM virtio pmd mode
> > > >
> > > > diff --git a/lib/common/mbuf.h b/lib/common/mbuf.h
> > > > index 759f95d..93065f6 100644
> > > > --- a/lib/common/mbuf.h
> > > > +++ b/lib/common/mbuf.h
> > > > @@ -18,6 +18,7 @@ pktmbuf_reset(struct rte_mbuf *m)
> > > >     m->nb_segs = 1;
> > > >     m->port = 0xff;
> > > >
> > > > +   m->data_len = m->pkt_len;
> > > >     m->data_off = (RTE_PKTMBUF_HEADROOM <= m->buf_len) ?
> > > >         RTE_PKTMBUF_HEADROOM : m->buf_len;
> > > > }
> > >
> > > This patch is in Pktgen 3.4.6
> > > >
> > > > 2) in virtio_rxtx.c, please see commit f1216c1eca5a5 ("net/virtio:
> > > > fix Tx packet length stats").
> > > >
> > > > You could apply both of these patches to try it.
> > > >
> > > > Cheers
> > > > JJ
> > > >
> > > >
> > > >> -----Original Message-----
> > > >> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Hu,
> > > >> Xuekun
> > > >> Sent: Tuesday, January 9, 2018 2:38 PM
> > > >> To: Wiles, Keith <keith.wiles at intel.com>; Gabriel Ionescu
> > > >> <Gabriel.Ionescu at enea.com>; Tan, Jianfeng
> > > >> <jianfeng.tan at intel.com>
> > > >> Cc: users at dpdk.org
> > > >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > > >>
> > > >> Hi, Keith
> > > >>
> > > >> Any updates on this issue? We met similar behavior: ovs-dpdk
> > > >> reports receiving packets whose size grows in 12-byte increments
> > > >> until it exceeds 1518, at which point pktgen stops sending
> > > >> packets, even though we only asked pktgen to generate 64B packets.
> > > >> It only happens with two vhost-user ports in the same server; if
> > > >> pktgen is running in another server, there is no such issue.
> > > >>
> > > >> We tested the latest pktgen 3.4.6 and OVS-DPDK 2.8, with DPDK 17.11.
> > > >>
> > > >> We also found that qemu 2.8.1 and qemu 2.10 have this problem,
> > > >> while qemu 2.5 does not. So it seems to be a compatibility issue
> > > >> between pktgen/dpdk/qemu?
> > > >>
> > > >> Thanks.
> > > >> Thx, Xuekun
> > > >>
> > > >> -----Original Message-----
> > > >> From: users [mailto:users-bounces at dpdk.org] On Behalf Of Wiles,
> > > >> Keith
> > > >> Sent: Wednesday, May 03, 2017 4:24 AM
> > > >> To: Gabriel Ionescu <Gabriel.Ionescu at enea.com>
> > > >> Cc: users at dpdk.org
> > > >> Subject: Re: [dpdk-users] Issue with Pktgen and OVS-DPDK
> > > >>
> > > >> Comments inline:
> > > >>> On May 2, 2017, at 8:20 AM, Gabriel Ionescu
> > > >>> <Gabriel.Ionescu at enea.com>
> > > >> wrote:
> > > >>>
> > > >>> Hi,
> > > >>>
> > > >>> I am using DPDK-Pktgen with an OVS bridge that has two
> > > >>> vHost-user ports, and I am seeing an issue where Pktgen does not
> > > >>> appear to generate packets correctly.
> > > >>>
> > > >>> For this setup I am using DPDK 17.02, Pktgen 3.2.8 and OVS 2.7.0.
> > > >>>
> > > >>> The OVS bridge is created with:
> > > >>> ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
> > > >>> ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser ofport_request=1
> > > >>> ovs-vsctl add-port ovsbr0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser ofport_request=2
> > > >>> ovs-ofctl add-flow ovsbr0 in_port=1,action=output:2
> > > >>> ovs-ofctl add-flow ovsbr0 in_port=2,action=output:1
> > > >>>
> > > >>> DPDK-Pktgen is launched with the following command so that
> > > >>> packets generated through port 0 are received by port 1 and vice
> > > >>> versa:
> > > >>> pktgen -c 0xF --file-prefix pktgen --no-pci \
> > > >>>     --vdev=virtio_user0,path=/tmp/vhost-user1 \
> > > >>>     --vdev=virtio_user1,path=/tmp/vhost-user2 \
> > > >>>     -- -P -m "[0:1].0, [2:3].1"
> > > >>
> > > >> The above command line is wrong, as Pktgen takes the first lcore
> > > >> for display output and timers. I would not use -c 0xF but -l 1-5
> > > >> instead, as it is a lot easier to understand IMO. With -l 1-5 you
> > > >> are using 5 lcores (skipping lcore 0 in a 6-lcore VM): one for
> > > >> Pktgen and 4 for the two ports, i.e. -m [2:3].0 -m [4:5].1,
> > > >> leaving lcore 1 for Pktgen to use. I am concerned you did not see
> > > >> some performance or lockup problem; I really need to add a test
> > > >> for these types of problems :-( You can also give the VM just 5
> > > >> lcores, in which case Pktgen shares lcore 0 with Linux, using the
> > > >> -l 0-4 option.
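Putting Keith's advice together, the corrected launch line might look like the sketch below. The vdev paths and flags are carried over from Gabriel's original command; the lcore map is the one Keith proposes, assuming a 6-lcore VM:

```shell
# lcore 0 left to Linux, lcore 1 for Pktgen's display/timers,
# lcores 2-5 split across the two virtio_user ports (rx:tx per port).
pktgen -l 1-5 --file-prefix pktgen --no-pci \
    --vdev=virtio_user0,path=/tmp/vhost-user1 \
    --vdev=virtio_user1,path=/tmp/vhost-user2 \
    -- -P -m "[2:3].0" -m "[4:5].1"
```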
> > > >>
> > > >> When requested to send 64-byte frames, Pktgen sends a 60-byte
> > > >> payload + a 4-byte Frame Checksum (FCS). This does work, so the
> > > >> problem must be in how vhost-user is testing the packet size. In
> > > >> the mbuf you have both the payload size and the buffer size: the
> > > >> buffer size could be 1524, but the payload or frame size will be
> > > >> 60 bytes, as the 4-byte FCS is appended to the frame by the
> > > >> hardware. It seems to me that vhost-user is not looking at the
> > > >> correct struct rte_mbuf member variable in its testing.
> > > >>
> > > >>>
> > > >>> In Pktgen, the default settings are used for both ports:
> > > >>>
> > > >>> - Tx Count: Forever
> > > >>> - Rate: 100%
> > > >>> - PktSize: 64
> > > >>> - Tx Burst: 32
> > > >>>
> > > >>> Whenever I start generating packets through one of the ports (in
> > > >>> this example port 0, by running "start 0"), the OVS logs throw
> > > >>> warnings similar to:
> > > >>>
> > > >>> 2017-05-02T09:23:04.741Z|00022|netdev_dpdk(pmd9)|WARN|Dropped 1194956 log messages in last 49 seconds (most recently, 41 seconds ago) due to excessive rate
> > > >>> 2017-05-02T09:23:04.741Z|00023|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:04.741Z|00024|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:04.741Z|00025|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:04.741Z|00026|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 1524 max_packet_len 1518
> > > >>> 2017-05-02T09:23:15.761Z|00027|netdev_dpdk(pmd9)|WARN|Dropped 1344988 log messages in last 11 seconds (most recently, 0 seconds ago) due to excessive rate
> > > >>> 2017-05-02T09:23:15.761Z|00028|netdev_dpdk(pmd9)|WARN|vhost-user2: Too big size 57564 max_packet_len 1518
> > > >>>
> > > >>> Port 1 does not receive any packets.
> > > >>>
> > > >>> When running Pktgen with the --socket-mem option (e.g.
> > > >>> --socket-mem 512), the behavior is different, but with the same
> > > >>> warnings thrown by OVS: port 1 receives some packets, but with
> > > >>> different sizes, even though they are generated on port 0 with a
> > > >>> 64B size:
> > > >>> Flags:Port        :   P--------------:0   P--------------:1
> > > >>> Link State        :       <UP-10000-FD>       <UP-10000-FD>  ----TotalRate----
> > > >>> Pkts/s Max/Rx     :                 0/0             35136/0           35136/0
> > > >>>        Max/Tx     :        238144/25504                 0/0      238144/25504
> > > >>> MBits/s Rx/Tx     :             0/13270                 0/0           0/13270
> > > >>> Broadcast         :                   0                   0
> > > >>> Multicast         :                   0                   0
> > > >>>   64 Bytes        :                   0                 288
> > > >>>   65-127          :                   0                1440
> > > >>>   128-255         :                   0                2880
> > > >>>   256-511         :                   0                6336
> > > >>>   512-1023        :                   0               12096
> > > >>>   1024-1518       :                   0               12096
> > > >>> Runts/Jumbos      :                 0/0                 0/0
> > > >>> Errors Rx/Tx      :                 0/0                 0/0
> > > >>> Total Rx Pkts     :                   0               35136
> > > >>>       Tx Pkts     :             1571584                   0
> > > >>>       Rx MBs      :                   0                 227
> > > >>>       Tx MBs      :              412777                   0
> > > >>> ARP/ICMP Pkts     :                 0/0                 0/0
> > > >>>                   :
> > > >>> Pattern Type      :             abcd...             abcd...
> > > >>> Tx Count/% Rate   :       Forever /100%       Forever /100%
> > > >>> PktSize/Tx Burst  :           64 /   32           64 /   32
> > > >>> Src/Dest Port     :         1234 / 5678         1234 / 5678
> > > >>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> > > >>> Dst  IP Address   :         192.168.1.1         192.168.0.1
> > > >>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24
> > > >>> Dst MAC Address   :   a6:71:4e:2f:ee:5d   b6:38:dd:34:b2:93
> > > >>> Src MAC Address   :   b6:38:dd:34:b2:93   a6:71:4e:2f:ee:5d
> > > >>> VendID/PCI Addr   :   0000:0000/00:00.0   0000:0000/00:00.0
> > > >>>
> > > >>> -- Pktgen Ver: 3.2.8 (DPDK 17.02.0)  Powered by Intel(r) DPDK -------------------
> > > >>>
> > > >>> If packets are generated from an external source and testpmd is
> > > >>> used to
> > > >> forward traffic between the two vHost-user ports, the warnings
> > > >> are not thrown by the OVS bridge.
> > > >>>
> > > >>> Should this setup work?
> > > >>> Is this an issue or am I setting something up wrong?
> > > >>>
> > > >>> Thank you,
> > > >>> Gabriel Ionescu
> > > >>
> > > >> Regards,
> > > >> Keith
> > > >
> > >
> > > Regards,
> > > Keith

