[dpdk-dev] [PATCH v2 3/3] virtio: Add a new layer to abstract pci access method

Yuanhan Liu yuanhan.liu at linux.intel.com
Tue Feb 2 03:45:18 CET 2016


On Tue, Feb 02, 2016 at 11:19:50AM +0900, Tetsuya Mukawa wrote:
> On 2016/02/01 22:15, Yuanhan Liu wrote:
> > On Mon, Feb 01, 2016 at 10:50:00AM +0900, Tetsuya Mukawa wrote:
> >> On 2016/01/29 18:17, Yuanhan Liu wrote:
> >>> On Thu, Jan 28, 2016 at 06:33:32PM +0900, Tetsuya Mukawa wrote:
> >>>> This patch adds function pointers to abstract the PCI access
> >>>> method. This abstraction layer will be used when the virtio-net
> >>>> PMD supports the container extension.
> >>>>
> >>>> The functions below abstract how to access the PCI configuration space.
> >>>>
> >>>> struct virtio_pci_cfg_ops {
> >>>>         int   (*map)(...);
> >>>>         void  (*unmap)(...);
> >>>>         void *(*get_mapped_addr)(...);
> >>>>         int   (*read)(...);
> >>>> };
> >>>>
> >>>> The PCI configuration space holds information about how to access
> >>>> the virtio device registers. Basically, there are two ways to access
> >>>> the registers: one uses port I/O and the other uses mapped memory.
> >>>> The functions below abstract this access method.
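For illustration, the dispatch behind such an ops table could look roughly like the sketch below. The names and signatures are hypothetical (the real ones are elided in the struct above); it only shows the idea of one read op backed either by port I/O or by a plain MMIO load.

/*
 * Hypothetical sketch only -- not the signatures from the patch.
 * One ops table entry, two backends: port I/O vs. memory-mapped I/O.
 */
#include <stdint.h>
#include <sys/io.h>	/* inl(); x86 Linux, assumes ioperm()/iopl() was already done */

struct example_io_ops {
	uint32_t (*read32)(void *base, uint64_t offset);
};

/* Port I/O backend: "base" carries the I/O port number. */
static uint32_t
portio_read32(void *base, uint64_t offset)
{
	return inl((unsigned short)((uintptr_t)base + offset));
}

/* Memory-mapped backend: "base" is the address returned by map(). */
static uint32_t
mmio_read32(void *base, uint64_t offset)
{
	return *(volatile uint32_t *)((uint8_t *)base + offset);
}

static const struct example_io_ops portio_ops = { .read32 = portio_read32 };
static const struct example_io_ops mmio_ops   = { .read32 = mmio_read32 };

Callers would then always go through ops->read32() and never need to know which backend is active.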
> >>> One question: is there a way to map PCI memory with Qtest? I'm
> >>> wondering whether we can keep the io_read/write() path for Qtest as
> >>> well; if so, the code could be simplified a lot, IMO.
> >>>
> >> Yes, I agree with you.
> >> But AFAIK, we don't have a way to mmap it from the DPDK application.
> >>
> >> We may be able to map the PCI configuration space to a memory address
> >> space that the guest CPU can handle. But even in that case, I guess we
> >> cannot access the memory without qtest messaging.
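To make that constraint concrete: with qtest an access cannot be a plain pointer dereference, it has to be a message round-trip over the qtest socket, which is roughly why a callback-based backend is needed there. A minimal sketch, using an illustrative command/reply format rather than the exact qtest wire protocol:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical qtest-style register read: send a text command on the
 * qtest socket and parse the reply.  The strings are illustrative only. */
static uint32_t
qtest_style_read32(int sock_fd, uint64_t addr)
{
	char cmd[64], reply[64];
	ssize_t n;
	uint32_t val = 0;

	snprintf(cmd, sizeof(cmd), "readl 0x%" PRIx64 "\n", addr);
	if (write(sock_fd, cmd, strlen(cmd)) < 0)
		return 0;

	n = read(sock_fd, reply, sizeof(reply) - 1);
	if (n <= 0)
		return 0;
	reply[n] = '\0';

	/* Expect something like "OK 0x12345678". */
	sscanf(reply, "OK %" SCNx32, &val);
	return val;
}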
> > Actually, I have a concern about this access abstraction: it turns
> > those simple inline functions into non-inline callbacks. That won't be
> > an issue for most of them, as most are invoked during the init stage,
> > where there is no impact on performance.
> >
> > notify_queue(), however, is a bit different. I was thinking the
> > "inline to callback (not inline)" conversion might have some impact on
> > performance. Would you run a test for me?
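To illustrate the concern (hypothetical names, not the patch's code): the inline variant compiles down to a single store on the notify path, while the abstracted variant pays an extra indirect call per notification that the compiler generally cannot inline across the ops table.

#include <stdint.h>

struct example_hw {
	volatile uint16_t *notify_addr;
	void (*notify)(struct example_hw *hw, uint16_t queue_id);
};

/* Inline-style notify: a single 16-bit store. */
static inline void
notify_queue_inline(struct example_hw *hw, uint16_t queue_id)
{
	*hw->notify_addr = queue_id;
}

/* Callback-style notify: the same store, but reached through a
 * function pointer on every notification. */
static void
notify_queue_cb(struct example_hw *hw, uint16_t queue_id)
{
	*hw->notify_addr = queue_id;
}

static inline void
notify_queue_abstracted(struct example_hw *hw, uint16_t queue_id)
{
	hw->notify(hw, queue_id);	/* indirect call on the datapath */
}

Whether that indirect call is measurable on the notify path is exactly what the requested test would show.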
> 
> Sure, I will be able to.

Thanks.

> But if we are concerned about it, I guess it's also fine to implement
> the PMD on top of your vtpci abstraction.
> (That means we wouldn't use the access abstraction.)
> That would probably make our merging process faster.
> What do you think?

Another standalone PMD driver? (Sorry, I didn't follow that discussion.)
If so, won't it introduce too much duplicated code?

	--yliu
