[dpdk-dev] DPDK and custom memory

Neil Horman nhorman at tuxdriver.com
Fri Sep 19 12:18:41 CEST 2014


On Fri, Sep 19, 2014 at 12:13:55AM +0000, Saygin, Artur wrote:
> FWIW: rte_mempool_xmem_create turned out to be exactly what the use case requires. It's not without limitations but is probably better than having to copy buffers between device and DPDK memory.
> 
Ah, so it's not non-kernel-managed memory you were after; it was a way to make
non-DPDK-managed memory get managed by DPDK.  That makes more sense.
Neil
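
As a rough illustration of the approach Artur settled on, here is a hedged
sketch of wrapping an externally reserved region with rte_mempool_xmem_create()
(the pool name, element count/size, and the caller-supplied page descriptors
are illustrative assumptions; this is the xmem API of this era, which later
DPDK releases replaced):

    /* Sketch: build a mempool over memory DPDK did not allocate.
     * vaddr/paddr/pg_num/pg_shift are assumed to describe a region the
     * application reserved itself (e.g. via memmap= and /dev/mem). */
    #include <rte_memory.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    mempool_from_reserved(void *vaddr, const phys_addr_t paddr[],
                          uint32_t pg_num, uint32_t pg_shift)
    {
        /* 8192 elements of 2048 bytes, no per-lcore cache, no private
         * data; pool/object init callbacks omitted for brevity. */
        return rte_mempool_xmem_create("ext_pool", 8192, 2048,
                                       0, 0,
                                       NULL, NULL, NULL, NULL,
                                       SOCKET_ID_ANY, 0,
                                       vaddr, paddr, pg_num, pg_shift);
    }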

> -----Original Message-----
> From: Neil Horman [mailto:nhorman at tuxdriver.com] 
> Sent: Wednesday, September 03, 2014 3:04 AM
> To: Saygin, Artur
> Cc: Alex Markuze; Thomas Monjalon; dev at dpdk.org
> Subject: Re: [dpdk-dev] DPDK and custom memory
> 
> On Wed, Sep 03, 2014 at 01:17:53AM +0000, Saygin, Artur wrote:
> > Thanks for the prompt responses!
> > 
> > To clarify, the question is not about accessing a NIC, but about a NIC accessing a very specific block of physical memory, possibly non-kernel-managed.
> > 
> Still not sure what you mean here by non-kernel-managed.  If memory can be
> accessed from the CPU, then the kernel can allocate, free, and access it; that's
> it.  If the memory isn't accessible from the CPU, then this is out of our hands
> anyway.  The only question is how you access it.
> 
> > Per my understanding, memory obtained via the rte_mempool_create API is kernel-managed, grabbed by DPDK through hugetlbfs, with address selection outside of application control. Is there a way around that? As in, have DPDK allocate buffer memory from address XYZ only...
> Nope, the DPDK allocates blocks of memory without regard to the operation of the
> NIC.  If you have some odd NIC that requires access to a specific physical
> memory range, then it is your responsibility to reserve that memory and author
> the PMD in such a way that it communicates with the NIC via that memory.
> Usually this is done via a combination of operating-system facilities (e.g. the
> Linux kernel command-line option memmap, or a runtime mmap operation on the
> /dev/mem device).
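
As a sketch of the /dev/mem route Neil describes, assuming a range was
reserved with memmap= on the kernel command line (PHYS_BASE and REGION_LEN
are illustrative placeholders, not values from the thread):

    /* Sketch: map a physical range that was kept out of the kernel's
     * allocator with memmap= by mmap()ing /dev/mem at that offset. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define PHYS_BASE  0x80000000UL   /* illustrative: start of reserved range */
    #define REGION_LEN (64UL << 20)   /* illustrative: 64 MB */

    static void *map_reserved_region(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) {
            perror("open /dev/mem");
            return NULL;
        }
        void *va = mmap(NULL, REGION_LEN, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, (off_t)PHYS_BASE);
        close(fd);                     /* the mapping outlives the fd */
        return va == MAP_FAILED ? NULL : va;
    }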
> 
> Regards
> Neil
> 
> > 
> > If VFIO / IOMMU is still the answer, I'll poke in that direction; if not, any additional insight is appreciated.
> > 
> > -----Original Message-----
> > From: Alex Markuze [mailto:alex at weka.io] 
> > Sent: Sunday, August 31, 2014 1:27 AM
> > To: Thomas Monjalon
> > Cc: Saygin, Artur; dev at dpdk.org
> > Subject: Re: [dpdk-dev] DPDK and custom memory
> > 
> > Artur, I don't have the details of what you are trying to achieve, but
> > it sounds like something that is covered by the IOMMU, in SW or HW.  The
> > IOMMU creates an IOVA (I/O virtual address) that the NIC can access; the
> > accessible range is controlled with flags passed to the dma_map functions.
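
For concreteness, a kernel-side sketch of the dma_map path Alex refers to;
'dev' and 'buf' are assumed to come from a driver's probe/setup context, and
the direction flag is what constrains device access:

    /* Sketch: map a CPU buffer for device DMA. With an IOMMU, the
     * returned dma_addr_t is an IOVA the IOMMU translates; without
     * one it is typically the physical address. */
    #include <linux/dma-mapping.h>

    static dma_addr_t map_for_device(struct device *dev, void *buf,
                                     size_t len)
    {
        dma_addr_t iova = dma_map_single(dev, buf, len,
                                         DMA_BIDIRECTIONAL);
        if (dma_mapping_error(dev, iova))
            return 0;   /* sketch: signal failure to the caller */
        return iova;    /* the address the NIC uses on the bus */
    }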
> > 
> > So I understand your question this way: how does DPDK work on an
> > IOMMU-enabled system, and can you influence the mapping?
> > 
> > 
> > On Sat, Aug 30, 2014 at 4:03 PM, Thomas Monjalon
> > <thomas.monjalon at 6wind.com> wrote:
> > > Hello,
> > >
> > > 2014-08-29 18:40, Saygin, Artur:
> > >> Imagine a PMD for an FPGA-based NIC that is limited to accessing certain
> > >> memory regions (system, PCI, etc.).
> > >
> > > Does it mean Intel is making an FPGA-based NIC?
> > >
> > >> Is there a way to make DPDK use that exact memory?
> > >
> > > Maybe I don't understand the question well, because it doesn't seem really
> > > different from what other PMDs do.
> > > Assuming your NIC is PCI, you can access it via uio (igb_uio) or VFIO.
> > >
> > >> Perhaps this is more of a hugetlbfs question than DPDK but I thought I'd
> > >> start here.
> > >
> > > It's a pleasure to receive new drivers.
> > > Welcome here :)
> > >
> > > --
> > > Thomas
> 

