[dpdk-dev] [PATCH v2 5/7] mem: modify error message for DMA mask check
Burakov, Anatoly
anatoly.burakov at intel.com
Tue Nov 6 11:48:03 CET 2018
On 06-Nov-18 10:37 AM, Alejandro Lucero wrote:
>
>
> On Tue, Nov 6, 2018 at 10:31 AM Burakov, Anatoly
> <anatoly.burakov at intel.com> wrote:
>
> On 06-Nov-18 9:32 AM, Alejandro Lucero wrote:
> >
> >
> > On Mon, Nov 5, 2018 at 4:35 PM Burakov, Anatoly
> > <anatoly.burakov at intel.com> wrote:
> >
> > On 05-Nov-18 3:33 PM, Alejandro Lucero wrote:
> > >
> > >
> > > On Mon, Nov 5, 2018 at 3:12 PM Burakov, Anatoly
> > > <anatoly.burakov at intel.com> wrote:
> > >
> > > On 05-Nov-18 10:13 AM, Alejandro Lucero wrote:
> > > > On Mon, Nov 5, 2018 at 10:01 AM Li, WenjieX A
> > > > <wenjiex.a.li at intel.com> wrote:
> > > >
> > > >> 1. With GCC32, testpmd could not start up without '--iova-mode pa'.
> > > >> ./i686-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
> > > >> The output is:
> > > >> EAL: Detected 16 lcore(s)
> > > >> EAL: Detected 1 NUMA nodes
> > > >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > >> EAL: Some devices want iova as va but pa will be used because.. EAL: few device bound to UIO
> > > >> EAL: No free hugepages reported in hugepages-1048576kB
> > > >> EAL: Probing VFIO support...
> > > >> EAL: VFIO support initialized
> > > >> EAL: wrong dma mask size 48 (Max: 31)
> > > >> EAL: alloc_pages_on_heap(): couldn't allocate memory due to IOVA exceeding limits of current DMA mask
> > > >> error allocating rte services array
> > > >> EAL: FATAL: rte_service_init() failed
> > > >> EAL: rte_service_init() failed
> > > >> PANIC in main():
> > > >> Cannot init EAL
> > > >> 5: [./i686-native-linuxapp-gcc/app/testpmd(+0x95fda) [0x56606fda]]
> > > >> 4: [/lib/i386-linux-gnu/libc.so.6(__libc_start_main+0xf6) [0xf74d1276]]
> > > >> 3: [./i686-native-linuxapp-gcc/app/testpmd(main+0xf21) [0x565fcee1]]
> > > >> 2: [./i686-native-linuxapp-gcc/app/testpmd(__rte_panic+0x3d) [0x565edc68]]
> > > >> 1: [./i686-native-linuxapp-gcc/app/testpmd(rte_dump_stack+0x33) [0x5675f333]]
> > > >> Aborted
> > > >>
> > > >> 2. With '--iova-mode pa', testpmd could start up.
> > > >> 3. With GCC64, there is no such issue.
> > > >> Thanks!
> > > >>
> > > >>
> > > > Does 32-bit support require an IOMMU? That would be a surprise.
> > > > If there is no IOMMU hardware, no DMA mask should be there at all.
> > >
> > > IOMMU is supported on 32-bit, however limited the address space
> > > might be. Maybe limit IOMMU width to RTE_MIN(31, value) bits for
> > > everything on 32-bit?
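To make that suggestion concrete - a rough, untested sketch, not the
actual patch; the helper name and parameter below are made up for the
example:

#include <stdint.h>
#include <rte_common.h>	/* RTE_MIN */

/* Clamp the width the IOMMU reports so that a 32-bit build never
 * advertises more usable bits than its address space has.
 */
static uint8_t
clamp_iommu_width(uint8_t reported_width)
{
#ifdef RTE_ARCH_64
	return reported_width;
#else
	/* e.g. a 48-bit VT-d unit would be treated as 31-bit here */
	return RTE_MIN((uint8_t)31, reported_width);
#endif
}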
> > >
> > >
> > > If IOMMU is supported on 32-bit, then the DMA mask check should
> > > not be happening. AFAIK, the IOMMU hardware addressing limitations
> > > are a problem only in 64-bit systems. The worst situation I have
> > > heard of is 39 bits, for a virtualized IOMMU with QEMU.
> > >
> > > I would prefer not to invoke rte_mem_set_dma_mask on 32-bit
> > > systems for the Intel IOMMU case. The only other DMA mask client
> > > is the NFP PMD, and we do not support 32-bit systems there.
> > >
> >
> > I don't think not invoking the DMA mask check is the right choice
> > here. In practice it may be, but I'd rather the behavior be
> > "correct", if at all possible :) It is theoretically possible to
> > have an IOMMU with an addressing limitation of, say, 30 bits (even
> > though they don't exist in reality), so our code should handle it,
> > should it encounter one, and it should also handle the "proper"
> > ones correctly (as in, treat them as 32-bit-limited instead of 39-
> > or 48-bit-limited).
> >
> >
> > Fine.
> >
> > The problem is the current sanity check on the DMA mask width,
> > which is 31 for 32-bit systems.
> > Should we just use a single maximum DMA mask width of 63? This
> > covers the possibility of 32-bit systems integrating an IOMMU
> > designed for 64 bits. I really doubt this is a real possibility on
> > x86, although I can see it being more likely in embedded systems,
> > where this sort of hardware component integration happens.
>
> Actually (and after a quick chat with Ferruh), is this even needed?
> IOVA addresses are independent of VA width; an IOVA can happily be
> bigger than 32 bits if I understand things correctly. All of our
> IOVA addresses are always 64-bit throughout DPDK. I don't think this
> check is even valid.
>
>
> Although rte_iova_t is 64 bits, there should not be an IOVA higher
> than 32 bits, although there could be exceptions like PAE (I'm old
> enough to remember that option :-( ).
>
> Anyway, the original idea of the DMA mask sanity check on 32-bit
> systems was that there should not be a DMA mask above 32 bits, but
> I'm happy with removing that sanity check for 32-bit systems. So, do
> you agree to just leave the sanity check for a max width of 63 bits?
>
So, the issue with 32-bit here is that for this check to make sense,
the *kernel* must be 32-bit - not just userspace. IOW, this check
should *not* be present in a 32-bit application running on a 64-bit
kernel. So IMO, unless you know of a way to easily check 1) that the
kernel is 32-bit, and 2) whether PAE is enabled (and by easily I mean
using something other than reading sysfs etc.), I don't think this
check should be in there :)
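If we keep a sanity check at all, I'd cap it at what a 64-bit IOVA
can express and nothing else - roughly along these lines (untested
sketch, not the actual patch; assuming the check sits in something
like rte_mem_check_dma_mask()):

#include <stdint.h>
#include <stdio.h>

/* The only hard limit: a mask width above 63 cannot be expressed in
 * a 64-bit rte_iova_t, regardless of whether the build itself is
 * 32-bit or 64-bit.
 */
#define MAX_DMA_MASK_BITS 63

static int
sanity_check_dma_mask(uint8_t maskbits)
{
	if (maskbits > MAX_DMA_MASK_BITS) {
		fprintf(stderr, "wrong dma mask size %u (Max: %u)\n",
			(unsigned)maskbits, (unsigned)MAX_DMA_MASK_BITS);
		return -1;
	}
	/* width is representable; compare it against hugepage IOVAs next */
	return 0;
}

With that, the 48-bit VT-d mask from the GCC32 log above would pass
the width check instead of tripping over the 31-bit cap.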
> >
> > >
> > > --
> > > Thanks,
> > > Anatoly
> > >
> >
> >
> > --
> > Thanks,
> > Anatoly
> >
>
>
> --
> Thanks,
> Anatoly
>
--
Thanks,
Anatoly