[dpdk-dev] [RFC 0/5] virtio support for container

Tan, Jianfeng jianfeng.tan at intel.com
Thu Dec 31 11:02:45 CET 2015



> -----Original Message-----
> From: Pavel Fedin [mailto:p.fedin at samsung.com]
> Sent: Thursday, December 31, 2015 5:40 PM
> To: Tan, Jianfeng; dev at dpdk.org
> Subject: RE: [dpdk-dev] [RFC 0/5] virtio support for container
> 
>  Hello!
> 
> > First of all, when you say openvswitch, are you referring to ovs-dpdk?
> 
>  I am referring to mainline ovs, compiled with dpdk, and using userspace
> dataplane.
>  AFAIK ovs-dpdk is an early Intel fork, which is abandoned at the moment.
> 
> > And can you detail your test case? Like, how do you want ovs_on_host and
> ovs_in_container to
> > be connected?
> > Through two-direct-connected physical NICs, or one vhost port in
> ovs_on_host and one virtio
> > port in ovs_in_container?
> 
>  vhost port. i. e.
> 
>                              |
> LOCAL------dpdkvhostuser<----+---->cvio----->LOCAL
>       ovs                    |          ovs
>                              |
>                 host         |        container
> 
>  By this time I have advanced in my research. ovs not only crashes by itself, but
> also manages to crash the host side. It does this by performing the
> reconfiguration sequence without sending VHOST_USER_SET_MEM_TABLE,
> so the host-side ovs keeps referring to the old addresses and dies
> badly.

Yes, this patchset is suited for exactly this case.

Before you start another ovs_in_container, is the previous one killed? If so, the vhost
information in ovs_on_host will be wiped when the unix socket breaks.
And by the way, ovs allows just one virtio device per vhost port, quite different from the
example application, vhost-switch.

Thanks,
Jianfeng  

>  Those messages about memory pool already being present are perhaps OK.
> 
> Kind regards,
> Pavel Fedin
> Expert Engineer
> Samsung Electronics Research center Russia
> 
