[dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Inter-VM communication & IP allocation through DHCP issue

Abhijeet Karve abhijeet.karve at tcs.com
Wed Jan 27 17:22:31 CET 2016


Hi Przemek,

Thanks for the quick response. We are now able to get DHCP IPs for two 
vhostuser instances, and they can ping each other. The issue was a bug in 
the CirrOS 0.3.0 image which we were using in OpenStack; after using the 
0.3.1 image as given in the URL 
(https://www.redhat.com/archives/rhos-list/2013-August/msg00032.html), we 
are able to get the IPs in the vhostuser VM instances.

As per our understanding, the packet flow across the DPDK datapath is as 
follows: the vhostuser ports are connected to the br-int bridge, which is 
patched to the br-dpdk bridge, where our physical network (NIC) is 
attached as the dpdk0 port.

So to test the flow, we have to connect that physical NIC to an external 
packet generator (e.g. Ixia or iperf) and run the testpmd application in 
the vhostuser VM, right?

Is it required to add any flows or rules to the bridge configurations 
(either br-int or br-dpdk)?
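
For reference, a minimal sketch of what we plan to run inside the guest 
once traffic generation starts; the PCI addresses, core mask, and memory 
channel count are illustrative and assume the guest's virtio NICs are 
bound to igb_uio with hugepages mounted in the VM:

    # bind the guest virtio NICs to the DPDK-compatible driver
    ./tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
    # launch testpmd in interactive mode and start forwarding
    ./testpmd -c 0x3 -n 4 -- -i
    testpmd> start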


Thanks & Regards
Abhijeet Karve




From:   "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz at intel.com>
To:     Abhijeet Karve <abhijeet.karve at tcs.com>
Cc:     "dev at dpdk.org" <dev at dpdk.org>, "discuss at openvswitch.org" 
<discuss at openvswitch.org>, "Gray, Mark D" <mark.d.gray at intel.com>
Date:   01/27/2016 05:11 PM
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Inter-VM communication & IP allocation through DHCP issue



Hi Abhijeet,
 
 
It seems you are almost there! 
When booting the VMs, do you request hugepage memory for them (by setting 
hw:mem_page_size=large in the flavor extra specs)?
If not, then please do; if yes, then please look into the libvirt log 
files for the VMs (in /var/log/libvirt/qemu/instance-xxx), I think there 
could be a clue there.
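
For example, setting the extra spec on a flavor looks roughly like this 
(the flavor name here is just the one mentioned later in this thread; use 
your own):

    nova flavor-key m1.hugepages set hw:mem_page_size=large
    nova boot --flavor m1.hugepages --image <image> <instance-name>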
 
 
Regards
Przemek
 
From: Abhijeet Karve [mailto:abhijeet.karve at tcs.com] 
Sent: Monday, January 25, 2016 6:13 PM
To: Czesnowicz, Przemyslaw
Cc: dev at dpdk.org; discuss at openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Inter-VM communication & IP allocation through DHCP issue
 
Hi Przemek, 

Thank you for your response, it really gave us a breakthrough. 

After setting up DPDK on the compute node for stable/kilo, we are trying 
to set up an OpenStack stable/liberty all-in-one setup. At present we are 
not able to get IP allocation for the vhost-user type instances through 
DHCP. We also tried assigning IPs to them manually, but inter-VM 
communication is not happening either.

#neutron agent-list 
root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list 
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host              | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent     | nfv-dpdk-devstack | :-)   | True           | neutron-openvswitch-agent |
| 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent           | nfv-dpdk-devstack | :-)   | True           | neutron-l3-agent          |
| 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent         | nfv-dpdk-devstack | :-)   | True           | neutron-dhcp-agent        |
| b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack | :-)   | True           | neutron-linuxbridge-agent |
| c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent     | nfv-dpdk-devstack | :-)   | True           | neutron-metadata-agent    |
| f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack | xxx   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+



ovs-vsctl show output# 
-------------------------------------------------------- 
    Bridge br-dpdk 
        Port br-dpdk 
            Interface br-dpdk 
                type: internal 
        Port phy-br-dpdk 
            Interface phy-br-dpdk 
                type: patch 
                options: {peer=int-br-dpdk} 
    Bridge br-int 
        fail_mode: secure 
        Port "vhufa41e799-f2" 
            tag: 5 
            Interface "vhufa41e799-f2" 
                type: dpdkvhostuser 
        Port int-br-dpdk 
            Interface int-br-dpdk 
                type: patch 
                options: {peer=phy-br-dpdk} 
        Port "tap4e19f8e1-59" 
            tag: 5 
            Interface "tap4e19f8e1-59" 
                type: internal 
        Port "vhu05734c49-3b" 
            tag: 5 
            Interface "vhu05734c49-3b" 
                type: dpdkvhostuser 
        Port "vhu10c06b4d-84" 
            tag: 5 
            Interface "vhu10c06b4d-84" 
                type: dpdkvhostuser 
        Port patch-tun 
            Interface patch-tun 
                type: patch 
                options: {peer=patch-int} 
        Port "vhue169c581-ef" 
            tag: 5 
            Interface "vhue169c581-ef" 
                type: dpdkvhostuser 
        Port br-int 
            Interface br-int 
                type: internal 
    Bridge br-tun 
        fail_mode: secure 
        Port br-tun 
            Interface br-tun 
                type: internal 
                error: "could not open network device br-tun (Invalid 
argument)" 
        Port patch-int 
            Interface patch-int 
                type: patch 
                options: {peer=patch-tun} 
    ovs_version: "2.4.0" 
-------------------------------------------------------- 


ovs-ofctl dump-flows br-int# 
-------------------------------------------------------- 
root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int 
NXST_FLOW reply (xid=0x4): 
 cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0, 
n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136 
actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0, 
n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136 
actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0, 
n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136 
actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0, 
n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136 
actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0, 
n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0, 
n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0, 
n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0, 
n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24) 
 cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0, 
n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346 
actions=mod_vlan_vid:5,NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0, 
n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop 
 cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0, 
n_bytes=0, idle_age=2416, priority=0 actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0, 
n_bytes=0, idle_age=2416, priority=0 actions=drop 
 cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0, 
n_bytes=0, idle_age=2410, 
priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61 
actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0, 
n_bytes=0, idle_age=2409, 
priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254 
actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0, 
n_bytes=0, idle_age=2408, 
priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc 
actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0, 
n_bytes=0, idle_age=2408, 
priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7 
actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0, 
n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14 
actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0, 
n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16 
actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0, 
n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17 
actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0, 
n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13 
actions=NORMAL 
 cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0, 
n_bytes=0, idle_age=2415, priority=0 actions=drop 
root@nfv-dpdk-devstack:/etc/neutron# 
-------------------------------------------------------- 




Also attaching the neutron-server, nova-compute, and nova-scheduler logs. 

We would really appreciate any hint that helps us overcome this inter-VM 
and DHCP communication issue.




Thanks & Regards
Abhijeet Karve



From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz at intel.com> 
To:        Abhijeet Karve <abhijeet.karve at tcs.com> 
Cc:        "dev at dpdk.org" <dev at dpdk.org>, "discuss at openvswitch.org" <
discuss at openvswitch.org>, "Gray, Mark D" <mark.d.gray at intel.com> 
Date:        01/04/2016 07:54 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Getting memory backing issues with qemu parameter passing 




You should be able to clone networking-ovs-dpdk, switch to the kilo 
branch, and run 
python setup.py install 
in the root of networking-ovs-dpdk; that should install the agent and the 
mechanism driver. 
Then you would need to enable the mechanism driver (ovsdpdk) on the 
controller in /etc/neutron/plugins/ml2/ml2_conf.ini 
and run the right agent on the computes (networking-ovs-dpdk-agent).
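
Roughly, the steps amount to something like the following sketch; the 
repository URL and the exact ml2 snippet are assumptions based on the 
stable/kilo guide linked later in this thread, so verify them against 
your checkout:

    git clone https://github.com/openstack/networking-ovs-dpdk.git
    cd networking-ovs-dpdk
    git checkout stable/kilo
    python setup.py install

    # /etc/neutron/plugins/ml2/ml2_conf.ini on the controller
    [ml2]
    mechanism_drivers = ovsdpdk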
  
  
There should be pip packages of networking-ovs-dpdk available shortly; 
I'll let you know when that happens.
  
Przemek 
  
From: Abhijeet Karve [mailto:abhijeet.karve at tcs.com] 
Sent: Thursday, December 24, 2015 6:42 PM
To: Czesnowicz, Przemyslaw
Cc: dev at dpdk.org; discuss at openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Getting memory backing issues with qemu parameter passing 
  
Hi Przemek, 

Thank you so much for your quick response. 

The guide 
(https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst) 
which you suggested is for OpenStack vhost-user installations with 
devstack. 
Is there any reference for including the ovs-dpdk mechanism driver in the 
OpenStack Ubuntu distribution which we are following for our 
compute+controller node setup?

With our current approach we face the issues listed below. The approach: 
set up OpenStack Kilo interactively, replace OVS with DPDK-enabled OVS, 
create an instance in OpenStack, and pass that instance ID to the QEMU 
command line, which in turn passes the vhost-user sockets to the instance 
to enable the DPDK libraries in it.


1. We created a flavor m1.hugepages which is backed by hugepage memory, 
but we are unable to spawn an instance with this flavor; we get an error 
like: No matching hugetlbfs for the number of hugepages assigned to the 
flavor. 
2. We pass the socket info to instances via QEMU manually (see the sketch 
below), and the instances created that way are not persistent.
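
For reference, a sketch of the kind of manual QEMU invocation we mean; 
the socket path, memory size, and hugepage mount point are illustrative 
and must match the OVS vhost-user socket directory and the hugetlbfs 
mounts on the host (the memory-backend-file object provides the shared 
hugepage backing that vhost-user needs):

    qemu-system-x86_64 -m 2048 -smp 2 -hda cirros.img \
      -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge_1G,share=on \
      -numa node,memdev=mem -mem-prealloc \
      -chardev socket,id=char0,path=/var/run/openvswitch/vhost-user-0 \
      -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
      -device virtio-net-pci,netdev=net0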

Now, as you suggested, we are looking into enabling the ovsdpdk ml2 
mechanism driver and agent in our OpenStack Ubuntu distribution. 

We would really appreciate any help or reference with an explanation.

We are using a compute + controller node setup with the following 
software platform on the compute node:
_____________ 
Openstack: Kilo 
Distribution: Ubuntu 14.04 
OVS Version: 2.4.0 
DPDK Version: 2.0.0 
_____________ 

Thanks, 
Abhijeet Karve 





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz at intel.com> 
To:        Abhijeet Karve <abhijeet.karve at tcs.com> 
Cc:        "dev at dpdk.org" <dev at dpdk.org>, "discuss at openvswitch.org" <
discuss at openvswitch.org>, "Gray, Mark D" <mark.d.gray at intel.com> 
Date:        12/17/2015 06:32 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 





I haven't tried that approach and I'm not sure it would work; it seems 
clunky.
 
If you enable the ovsdpdk ml2 mechanism driver and agent, all of that 
(adding ports to OVS with the right type, passing the sockets to QEMU) 
would be done by OpenStack.
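
For instance, the port wiring the agent automates amounts to something 
like the following (the port name is illustrative; OpenStack generates 
vhu... names like the ones shown elsewhere in this thread):

    ovs-vsctl add-port br-int vhu-example -- set Interface vhu-example type=dpdkvhostuser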
 
Przemek 
 
From: Abhijeet Karve [mailto:abhijeet.karve at tcs.com] 
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: dev at dpdk.org; discuss at openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 
 
Hi Przemek, 

Thank you so much for sharing the reference guide. 

We would appreciate it if you could clear up one doubt. 

At present we are setting up OpenStack Kilo interactively and then 
replacing OVS with DPDK-enabled OVS. 
Once the above setup is done, we create an instance in OpenStack and pass 
that instance ID to the QEMU command line, which in turn passes the 
vhost-user sockets to the instance, enabling the DPDK libraries in it.

Isn't this the correct way of integrating ovs-dpdk with OpenStack?


Thanks & Regards
Abhijeet Karve




From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz at intel.com> 
To:        Abhijeet Karve <abhijeet.karve at tcs.com> 
Cc:        "dev at dpdk.org" <dev at dpdk.org>, "discuss at openvswitch.org" <
discuss at openvswitch.org>, "Gray, Mark D" <mark.d.gray at intel.com> 
Date:        12/17/2015 05:27 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 






Hi Abhijeet, 

For Kilo you need to use the ovsdpdk mechanism driver and a matching 
agent to integrate ovs-dpdk with OpenStack. 

The guide you are following only talks about running ovs-dpdk, not about 
how it should be integrated with OpenStack.

Please follow this guide: 
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst 


Best regards 
Przemek 


From: Abhijeet Karve [mailto:abhijeet.karve at tcs.com] 
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: dev at dpdk.org; discuss at openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 

Hi Przemek, 


We have configured the accelerated data path between a physical interface 
and the VM using Open vSwitch (netdev-dpdk) with vhost-user support. A VM 
created with this special data path and the vhost library is what I am 
calling a DPDK instance.
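
For context, the bridge and port wiring we did by hand is roughly the 
following sketch (names match our "ovs-vsctl show" output below; this is 
our reading of the Intel guide referenced underneath, not a verbatim 
excerpt from it):

    # DPDK-backed bridge with the physical NIC and the vhost-user ports
    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
    ovs-vsctl add-port br0 vhost-user-0 -- set Interface vhost-user-0 type=dpdkvhostuser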

If we assign an IP manually to the newly created CirrOS VM instance, we 
are able to make two VMs on the same compute node communicate. Otherwise 
no IP is associated through DHCP, even though the DHCP agent is on the 
same compute node.

Yes, it's a compute + controller node setup, and we are using the 
following software platform on the compute node:
_____________ 
Openstack: Kilo 
Distribution: Ubuntu 14.04 
OVS Version: 2.4.0 
DPDK Version: 2.0.0 
_____________ 

We are following the Intel guide:
https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200 


When running "ovs-vsctl show" on the compute node, it shows the output below:
_____________________________________________ 
ovs-vsctl show 
c2ec29a5-992d-4875-8adc-1265c23e0304 
 Bridge br-ex 
     Port phy-br-ex 
         Interface phy-br-ex 
             type: patch 
             options: {peer=int-br-ex} 
     Port br-ex 
         Interface br-ex 
             type: internal 
 Bridge br-tun 
     fail_mode: secure 
     Port br-tun 
         Interface br-tun 
             type: internal 
     Port patch-int 
         Interface patch-int 
             type: patch 
             options: {peer=patch-tun} 
 Bridge br-int 
     fail_mode: secure 
     Port "qvo0ae19a43-b6" 
         tag: 2 
         Interface "qvo0ae19a43-b6" 
     Port br-int 
         Interface br-int 
             type: internal 
     Port "qvo31c89856-a2" 
         tag: 1 
         Interface "qvo31c89856-a2" 
     Port patch-tun 
         Interface patch-tun 
             type: patch 
             options: {peer=patch-int} 
     Port int-br-ex 
         Interface int-br-ex 
             type: patch 
             options: {peer=phy-br-ex} 
     Port "qvo97fef28a-ec" 
         tag: 2 
         Interface "qvo97fef28a-ec" 
 Bridge br-dpdk 
     Port br-dpdk 
         Interface br-dpdk 
             type: internal 
 Bridge "br0" 
     Port "br0" 
         Interface "br0" 
             type: internal 
     Port "dpdk0" 
         Interface "dpdk0" 
             type: dpdk 
     Port "vhost-user-2" 
         Interface "vhost-user-2" 
             type: dpdkvhostuser 
     Port "vhost-user-0" 
         Interface "vhost-user-0" 
             type: dpdkvhostuser 
     Port "vhost-user-1" 
         Interface "vhost-user-1" 
             type: dpdkvhostuser 
 ovs_version: "2.4.0" 
root@dpdk:~# 
_____________________________________________ 

The OpenFlow flows on the bridges on the compute node are as below:
_____________________________________________ 
root@dpdk:~# ovs-ofctl dump-flows br-tun 
NXST_FLOW reply (xid=0x4): 
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794, 
idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2) 
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, 
priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 
actions=resubmit(,20) 
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794, 
idle_age=19982, hard_age=65534, 
priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 
actions=resubmit(,22) 
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c 
actions=mod_vlan_vid:2,resubmit(,10) 
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=1,tun_id=0x57 
actions=mod_vlan_vid:1,resubmit(,10) 
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop 
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=1 
actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1 

cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22) 
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794, 
idle_age=19982, hard_age=65534, priority=0 actions=drop 
root@dpdk:~# 
root@dpdk:~# ovs-ofctl dump-flows br-int 
NXST_FLOW reply (xid=0x4): 
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop 
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912, 
idle_age=19981, hard_age=65534, priority=1 actions=NORMAL 
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0, 
idle_age=65534, hard_age=65534, priority=0 actions=drop 
root@dpdk:~# 
_____________________________________________ 


Further, we don't know what network changes (packet flow additions), if 
any, are required for associating an IP address through DHCP. 

We would really appreciate some clarity on how the DHCP flows get 
established.
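
In the meantime, to check whether the DHCP requests reach the dnsmasq 
instance at all, we assume something like the following would work (the 
namespace and tap interface names are placeholders; Neutron creates one 
qdhcp-<network-uuid> namespace per network):

    # on the node running the DHCP agent
    ip netns
    ip netns exec qdhcp-<network-uuid> tcpdump -eni <tap-interface> port 67 or port 68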



Thanks & Regards
Abhijeet Karve





From:        "Czesnowicz, Przemyslaw" <przemyslaw.czesnowicz at intel.com> 
To:        Abhijeet Karve <abhijeet.karve at tcs.com>, "Gray, Mark D" <
mark.d.gray at intel.com> 
Cc:        "dev at dpdk.org" <dev at dpdk.org>, "discuss at openvswitch.org" <
discuss at openvswitch.org> 
Date:        12/15/2015 09:13 PM 
Subject:        RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# 
Successfully setup DPDK OVS with vhostuser 







Hi Abhijeet,

If you answer below questions it will help me understand your problem.

What do you mean by DPDK instance?
Are you able to communicate with other VMs on the same compute node?
Can you check if the DHCP requests arrive on the controller node? (I'm 
assuming this is at least a compute + controller setup.)

Best regards
Przemek

> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: dev at dpdk.org; discuss at openvswitch.org
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
> 
> Dear All,
> 
> After setting up the system boot parameters as shown below, the issue is
> resolved now & we are able to successfully set up openvswitch netdev-dpdk
> with vhostuser support.
> 
> _________________________________________________________________________________________________________________
> Set up 2 sets of huge pages with different sizes: one for vhost and
> another for the guest VM.
>          Edit /etc/default/grub.
>             GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on  hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
>          # update-grub
>        - Mount the huge pages into different directory.
>           # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>           # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> _________________________________________________________________________________________________________________
> 
> At present we are facing an issue in testing a DPDK application on this
> setup. In our scenario, we have a DPDK instance launched on top of the
> OpenStack Kilo compute node, and we are not able to assign a DHCP IP
> from the controller.
> 
> 
> Thanks & Regards
> Abhijeet Karve
> 


