[dpdk-dev] [PATCH 0/7] net/mlx5: support for flow action on VLAN header

Hideyuki Yamashita yamashita.hideyuki at ntt-tx.co.jp
Thu Nov 7 12:02:12 CET 2019


Hello Slava,

About 1: when I turned on "CONFIG_RTE_LIBRTE_MLX5_PMD=y", it worked.
About 2: with the latest dpdk-next-net, creating a flow to add (push) a VLAN
tag was successful, as follows:

Configuring Port 0 (socket 0)
Port 0: B8:59:9F:C1:4A:CE
Configuring Port 1 (socket 0)
Port 1: B8:59:9F:C1:4A:CF
Checking link statuses...
Done
testpmd> flow create 0 egress group 1 pattern eth src is BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp vlan_pcp 3 / end
Flow rule #0 created
testpmd> flow create 0 egress group 0 pattern eth
 dst [TOKEN]: destination MAC
 src [TOKEN]: source MAC
 type [TOKEN]: EtherType
 / [TOKEN]: specify next pattern item
testpmd> flow create 0 egress group 0 pattern eth / a
 any [TOKEN]: match any protocol for the current layer
 arp_eth_ipv4 [TOKEN]: match ARP header for Ethernet/IPv4
testpmd> flow create 0 egress group 0 pattern eth / end actions jump group 1
Bad arguments
testpmd> flow create 0 egress group 0 pattern eth / end actions jump group 1 / end
Flow rule #1 created
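
For reference, the same rules can presumably also be installed from C through
the rte_flow API instead of testpmd. Below is a minimal, untested sketch of
the push-VLAN rule (the function name is mine; structure layouts as in DPDK
19.11):

#include <rte_byteorder.h>
#include <rte_ether.h>
#include <rte_flow.h>

static struct rte_flow *
create_push_vlan_flow(uint16_t port_id, struct rte_flow_error *err)
{
        /* egress group 1, as in the testpmd command above */
        struct rte_flow_attr attr = { .group = 1, .egress = 1 };

        /* pattern: eth src is BB:BB:BB:BB:BB:BB */
        struct rte_flow_item_eth eth_spec = {
                .src.addr_bytes = { 0xBB, 0xBB, 0xBB, 0xBB, 0xBB, 0xBB },
        };
        struct rte_flow_item_eth eth_mask = {
                .src.addr_bytes = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF },
        };
        struct rte_flow_item pattern[] = {
                { .type = RTE_FLOW_ITEM_TYPE_ETH,
                  .spec = &eth_spec, .mask = &eth_mask },
                { .type = RTE_FLOW_ITEM_TYPE_END },
        };

        /* actions: push an 0x8100 tag, then set VID 100 and PCP 3 */
        struct rte_flow_action_of_push_vlan push = {
                .ethertype = rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN),
        };
        struct rte_flow_action_of_set_vlan_vid vid = {
                .vlan_vid = rte_cpu_to_be_16(100),
        };
        struct rte_flow_action_of_set_vlan_pcp pcp = { .vlan_pcp = 3 };
        struct rte_flow_action actions[] = {
                { .type = RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN, .conf = &push },
                { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID, .conf = &vid },
                { .type = RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP, .conf = &pcp },
                { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, err);
}

The second rule (the jump from group 0) would be built the same way, with an
empty eth pattern and a single RTE_FLOW_ACTION_TYPE_JUMP action whose
struct rte_flow_action_jump conf has .group = 1.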

In short, my questions are resolved!
Thanks!

BR,
Hideyuki Yamashita
NTT TechnoCross

> Hi, Hideyuki

> > 1. As you pointed out, it was a configuration issue
> > (CONFIG_RTE_LIBRTE_MLX5_DEBUG=y)!
> > When I turned on that configuration, 19.11-rc1 recognized the ConnectX-5
> > correctly.
> No-no, it is not the configuration; this option just enables debug features and is helpful
> to locate the reason why the ConnectX-5 was not detected on your setup. In a release product, of course,
> CONFIG_RTE_LIBRTE_MLX5_DEBUG must be "n".
> Or was "CONFIG_RTE_LIBRTE_MLX5_PMD=y" just missing?
> 
> > 
> > Thanks for your help.
> > 
> > 2. How about the question I put in my previous email (how to create a flow
> > that adds a VLAN tag to untagged packets)?
> 
> I'm sorry, I did not express my answer clearly.
> This issue is fixed; your VLAN-tagging flow can now be created successfully, I rechecked.
> 
> Now it works:
> 
> > > > > testpmd> flow create 0 egress group 1 pattern eth src is
> > > > > testpmd> BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype
> > > > > testpmd> 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp
> > > > > testpmd> vlan_pcp 3 / end
> 
> Please take 19.11-rc2 (coming on Friday) and try.
> 
> With best regards, Slava



> > 
> > Thanks again.
> > 
> > 
> > BR,
> > Hideyuki Yamashita
> > NTT TechnoCross
> > 
> > > Hi, Hideyuki
> > >
> > > > -----Original Message-----
> > > > From: Hideyuki Yamashita <yamashita.hideyuki at ntt-tx.co.jp>
> > > > Sent: Wednesday, November 6, 2019 13:04
> > > > To: Slava Ovsiienko <viacheslavo at mellanox.com>
> > > > Cc: dev at dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > action on VLAN header
> > > >
> > > > Dear Slava,
> > > >
> > > > Additional question.
> > > > When I use testpmd in dpdk-next-net repo, it works in general.
> > > > However, when I use DPDK 19.11-rc1, testpmd does not recognize the
> > > > ConnectX-5 NIC.
> > >
> > > It is quite strange; it should be recognized, ConnectX-5 is the base Mellanox NIC now.
> > > Could you, please:
> > > - configure "CONFIG_RTE_LIBRTE_MLX5_DEBUG=y" in ./config/common_base
> > > - reconfigure DPDK and rebuild testpmd
> > > - run testpmd with --log-level=99 --log-level=pmd.net.mlx5:8 (before
> > > -- separator)
> > > - see (and provide) the log, where it drops the eth_dev object spawning
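> > >
> > > Something like the following should work (a sketch, assuming the legacy
> > > make build system and a NIC at PCI address 04:00.0):
> > >
> > >   sed -i 's/CONFIG_RTE_LIBRTE_MLX5_DEBUG=n/CONFIG_RTE_LIBRTE_MLX5_DEBUG=y/' \
> > >       config/common_base
> > >   make install T=x86_64-native-linuxapp-gcc -j
> > >   ./x86_64-native-linuxapp-gcc/app/testpmd --log-level=99 \
> > >       --log-level=pmd.net.mlx5:8 -w 04:00.0 -- -i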
> > >
> > > >
> > > > Is it correct that ConnectX-5 will be recognized in the final 19.11 release?
> > >
> > > It should be recognized in 19.11-rc1; possibly we have some
> > > configuration issue, let's have a look.
> > >
> > > > If yes, in which release candidate will the necessary change be merged
> > > > and available?
> > > >
> > > > BR,
> > > > Hideyuki Yamashita
> > > > NTT TechnoCross
> > > >
> > > >
> > > > > Dear Slava,
> > > > >
> > > > > Thanks for your response.
> > > > >
> > > > > Inputting some flows failed while others were created successfully.
> > > > > Please help with the following two cases.
> > > > >
> > > > > 1) I would like to detag (pop) the VLAN tag from packets with a specific
> > > > > destination MAC address, with no condition on the VLAN ID value.
> > > > >
> > > > > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > > > > testpmd> AA:AA:AA:AA:AA:AA / vlan / any / end actions of_pop_vlan
> > > > > testpmd> / queue index 1 / end
> > > > > Caught error type 10 (item specification): VLAN cannot be empty:
> > > > > Invalid argument
> > > > > testpmd> flow create 0 ingress group 1 pattern eth dst is
> > > > > testpmd> AA:AA:AA:AA:AA:AA / vlan vid is 100 / end actions
> > > > > testpmd> of_pop_vlan / queue index 1 / end
> > > > > Flow rule #0 created
> > >
> > > I'll check; possibly this validation reject is imposed by HW
> > > limitations - it requires the VLAN header presence and (IIRC) a VID match.
> > > If possible, we'll fix it.
> > >
> > > > >
> > > > > 2) I would like to add (push) a VLAN tag
> > > > >
> > > > > testpmd> flow create 0 egress group 1 pattern eth src is
> > > > > testpmd> BB:BB:BB:BB:BB:BB  / end actions of_push_vlan ethertype
> > > > > testpmd> 0x8100 / of_set_vlan_vid vlan_vid 100 / of_set_vlan_pcp
> > > > > testpmd> vlan_pcp 3 / end
> > > > > Caught error type 16 (specific action): cause: 0x7ffdc9d98348,
> > > > > match on VLAN is required in order to set VLAN VID: Invalid
> > > > > argument
> > > > >
> > >
> > > It is fixed (and the patch is already merged -
> > > http://patches.dpdk.org/patch/62295/),
> > > let's try the coming 19.11-rc2. I inserted your flow successfully on the
> > > current upstream.
> > >
> > > With best regards, Slava
> > >
> > >
> > >
> > > > > Thanks!
> > > > >
> > > > > BR,
> > > > > Hideyuki Yamashita
> > > > > NTT TechnoCross
> > > > >
> > > > >
> > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Hideyuki Yamashita <yamashita.hideyuki at ntt-tx.co.jp>
> > > > > > > Sent: Thursday, October 31, 2019 11:52
> > > > > > > To: Slava Ovsiienko <viacheslavo at mellanox.com>
> > > > > > > Cc: dev at dpdk.org
> > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for flow
> > > > > > > action on VLAN header
> > > > > > >
> > > > > > > Dear Slava,
> > > > > > >
> > > > > > > Your guess is correct.
> > > > > > > When I put flow into Connect-X5, it was successful.
> > > > > > Very nice.
> > > > > >
> > > > > > >
> > > > > > > General question.
> > > > > > As we know - general questions are the hardest ones to answer :).
> > > > > >
> > > > > > > Is there any way to input flows to ConnectX-4?
> > > > > > As usual - with the RTE flow API. Just omit dv_flow_en, or specify
> > > > > > dv_flow_en=0, and the mlx5 PMD will handle the RTE flow API via the
> > > > > > Verbs engine, supported by ConnectX-4.
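> > > > > > For example, something like (device address assumed):
> > > > > >   testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=0 -- -i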
> > > > > >
> > > > > > > In other words, is there any way to activate Verbs?
> > > > > > > And which types of flows are supported in Verbs?
> > > > > > Please see the flow_verbs_validate() routine in
> > > > > > mlx5_flow_verbs.c; it shows which RTE flow items and actions are
> > > > > > actually supported by Verbs.
> > > > > >
> > > > > > With best regards, Slava
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > -----------------------------------------------------------
> > > > > > > tx_h-yamashita at R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$
> > > > > > > sudo ./testpmd -c 0xF -n 4 -w 04:00.0,dv_flow_en=1 --socket-mem 512,512
> > > > > > > --huge-dir=/mnt/huge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2
> > > > > > > --txq=16 --rxq=16
> > > > > > > [sudo] password for tx_h-yamashita:
> > > > > > > EAL: Detected 48 lcore(s)
> > > > > > > EAL: Detected 2 NUMA nodes
> > > > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > > > EAL: Selected IOVA mode 'PA'
> > > > > > > EAL: Probing VFIO support...
> > > > > > > EAL: PCI device 0000:04:00.0 on NUMA socket 0
> > > > > > > EAL:   probe driver: 15b3:1017 net_mlx5
> > > > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1 on
> > > > > > > device mlx5_1
> > > > > > >
> > > > > > > Interactive-mode selected
> > > > > > >
> > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456,
> > > > > > > size=2176, socket=0
> > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456,
> > > > > > > size=2176, socket=1
> > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > >
> > > > > > > Warning! port-topology=paired and odd forward ports number, the last
> > > > > > > port will pair with itself.
> > > > > > >
> > > > > > > Configuring Port 0 (socket 0)
> > > > > > > Port 0: B8:59:9F:C1:4A:CE
> > > > > > > Checking link statuses...
> > > > > > > Done
> > > > > > > testpmd>
> > > > > > > testpmd> flow create 0 ingress group 1 priority 0 pattern eth dst is
> > > > > > > 00:16:3e:2e:7b:6a / vlan vid is 1480 / end actions of_pop_vlan /
> > > > > > > queue index 0 / end
> > > > > > > Flow rule #0 created
> > > > > > > testpmd>
> > > > > > > ----------------------------------------------------------------------
> > > > > > >
> > > > > > > BR,
> > > > > > > Hideyuki Yamashita
> > > > > > > NTT TechnoCross
> > > > > > >
> > > > > > > > Hi, Hideyuki
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Hideyuki Yamashita <yamashita.hideyuki at ntt-tx.co.jp>
> > > > > > > > > Sent: Wednesday, October 30, 2019 12:46
> > > > > > > > > To: Slava Ovsiienko <viacheslavo at mellanox.com>
> > > > > > > > > Cc: dev at dpdk.org
> > > > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/7] net/mlx5: support for
> > > > > > > > > flow action on VLAN header
> > > > > > > > >
> > > > > > > > > Hello Slava,
> > > > > > > > >
> > > > > > > > > Thanks for your help.
> > > > > > > > > I added the magic phrase, changing the PCI number to the proper
> > > > > > > > > one in my env.
> > > > > > > >
> > > > > > > > > It changes the situation but still results in an error.
> > > > > > > > >
> > > > > > > > > I used /usertools/dpdk-setup.sh to allocate hugepages dynamically.
> > > > > > > > > Your help is appreciated.
> > > > > > > > >
> > > > > > > > > I think it is getting closer.
> > > > > > > > > tx_h-yamashita at R730n10:~/dpdk-next-net/x86_64-native-linuxapp-gcc/app$
> > > > > > > > > sudo ./testpmd -c 0xF -n 4 -w 03:00.0,dv_flow_en=1 --socket-mem 512,512
> > > > > > > > > --huge-dir=/mnt/huge1G --log-level port:8 -- -i --portmask=0x1 --nb-cores=2
> > > > > > > >
> > > > > > > > mlx5 PMD supports two flow engines:
> > > > > > > > - Verbs, the legacy one; almost no new features are being added,
> > > > > > > >   just bug fixes. It provides a slow rule insertion rate, etc.
> > > > > > > > - Direct Rules, the new one; all new features are being added here.
> > > > > > > >
> > > > > > > > (We had one more intermediate engine - Direct Verbs; it was
> > > > > > > > dropped, but the dv prefix in dv_flow_en remains :))
> > > > > > > >
> > > > > > > > Verbs is supported over all NICs - ConnectX-4, ConnectX-4 Lx,
> > > > > > > > ConnectX-5, ConnectX-6, etc.
> > > > > > > > Direct Rules is supported for NICs starting from ConnectX-5.
> > > > > > > > The "dv_flow_en=1" parameter engages Direct Rules, but I see
> > > > > > > > you run testpmd over 03:00.0, which is a ConnectX-4, not
> > > > > > > > supporting Direct Rules.
> > > > > > > > Please run over the ConnectX-5 you have on your host.
> > > > > > > >
> > > > > > > > As for the error - it is not related to memory; rdma-core just
> > > > > > > > failed to create the group table, because ConnectX-4 does not
> > > > > > > > support DR.
> > > > > > > >
> > > > > > > > With best regards, Slava
> > > > > > > >
> > > > > > > > > --txq=16 --rxq=16
> > > > > > > > > EAL: Detected 48 lcore(s)
> > > > > > > > > EAL: Detected 2 NUMA nodes
> > > > > > > > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > > > > > > > EAL: Selected IOVA mode 'PA'
> > > > > > > > > EAL: Probing VFIO support...
> > > > > > > > > EAL: PCI device 0000:03:00.0 on NUMA socket 0
> > > > > > > > > EAL:   probe driver: 15b3:1015 net_mlx5
> > > > > > > > > net_mlx5: mlx5.c:1852: mlx5_dev_spawn(): can't query devx port 1
> > > > > > > > > on device mlx5_3
> > > > > > > > >
> > > > > > > > > Interactive-mode selected
> > > > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_0>:
> > > > > > > > > n=171456, size=2176, socket=0
> > > > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > > > testpmd: create a new mbuf pool <mbuf_pool_socket_1>:
> > > > > > > > > n=171456, size=2176, socket=1
> > > > > > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > > > > >
> > > > > > > > > Warning! port-topology=paired and odd forward ports
> > > > > > > > > number, the last port will pair with itself.
> > > > > > > > >
> > > > > > > > > Configuring Port 0 (socket 0) Port 0: B8:59:9F:DB:22:20
> > > > > > > > > Checking link statuses...
> > > > > > > > > Done
> > > > > > > > > testpmd> flow create 0 ingress group 1 priority 0 pattern
> > > > > > > > > testpmd> eth dst is 00:16:3e:2e:7b:6a / vlan vid is 1480 /
> > > > > > > > > testpmd> end actions of_pop_vlan / queue index 0 / end
> > > > > > > > > Caught error type 1 (cause unspecified): cannot create table:
> > > > > > > > > Cannot allocate memory
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > BR,
> > > > > > > > > Hideyuki Yamashita
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > 
> 



