[PATCH 3/9] eal: use barrier intrinsics when compiling with msvc
Konstantin Ananyev
konstantin.v.ananyev at yandex.ru
Wed Apr 5 01:49:21 CEST 2023
04/04/2023 16:49, Tyler Retzlaff writes:
> On Tue, Apr 04, 2023 at 12:11:07PM +0000, Konstantin Ananyev wrote:
>>
>>
>>> Inline assembly is not supported for msvc x64 instead use
>>> _{Read,Write,ReadWrite}Barrier() intrinsics.
>>>
>>> Signed-off-by: Tyler Retzlaff <roretzla at linux.microsoft.com>
>>> ---
>>> lib/eal/include/generic/rte_atomic.h | 4 ++++
>>> lib/eal/x86/include/rte_atomic.h | 10 +++++++++-
>>> 2 files changed, 13 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/lib/eal/include/generic/rte_atomic.h b/lib/eal/include/generic/rte_atomic.h
>>> index 234b268..e973184 100644
>>> --- a/lib/eal/include/generic/rte_atomic.h
>>> +++ b/lib/eal/include/generic/rte_atomic.h
>>> @@ -116,9 +116,13 @@
>>> * Guarantees that operation reordering does not occur at compile time
>>> * for operations directly before and after the barrier.
>>> */
>>> +#ifndef RTE_TOOLCHAIN_MSVC
>>> #define rte_compiler_barrier() do { \
>>> asm volatile ("" : : : "memory"); \
>>> } while(0)
>>> +#else
>>> +#define rte_compiler_barrier() _ReadWriteBarrier()
>>> +#endif
>>>
>>> /**
>>> * Synchronization fence between threads based on the specified memory order.
>>> diff --git a/lib/eal/x86/include/rte_atomic.h b/lib/eal/x86/include/rte_atomic.h
>>> index f2ee1a9..5cce9ba 100644
>>> --- a/lib/eal/x86/include/rte_atomic.h
>>> +++ b/lib/eal/x86/include/rte_atomic.h
>>> @@ -27,9 +27,13 @@
>>>
>>> #define rte_rmb() _mm_lfence()
>>>
>>> +#ifndef RTE_TOOLCHAIN_MSVC
>>> #define rte_smp_wmb() rte_compiler_barrier()
>>> -
>>> #define rte_smp_rmb() rte_compiler_barrier()
>>> +#else
>>> +#define rte_smp_wmb() _WriteBarrier()
>>> +#define rte_smp_rmb() _ReadBarrier()
>>> +#endif
>>>
>>> /*
>>> * From Intel Software Development Manual; Vol 3;
>>> @@ -66,11 +70,15 @@
>>> static __rte_always_inline void
>>> rte_smp_mb(void)
>>> {
>>> +#ifndef RTE_TOOLCHAIN_MSVC
>>> #ifdef RTE_ARCH_I686
>>> asm volatile("lock addl $0, -128(%%esp); " ::: "memory");
>>> #else
>>> asm volatile("lock addl $0, -128(%%rsp); " ::: "memory");
>>> #endif
>>> +#else
>>> + rte_compiler_barrier();
>>> +#endif
>>
>> It doesn't look right to me: a compiler barrier is not equivalent to a
>> LOCK-ed operation, and is not enough to serve as a proper memory barrier
>> for SMP.
>
> i think i'm confused by the macro naming here. i'll take another look.
> thank you for raising it.
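To expand a bit on why: _ReadWriteBarrier() only constrains the compiler
and emits no instruction, so the CPU is still free to reorder a store with
a later load across it. A full barrier on MSVC needs a real fence or a
LOCK-ed instruction. A rough sketch of the difference (untested, just to
illustrate):

#include <intrin.h>

static void
msvc_barrier_sketch(void)
{
	/* compiler-only: no machine instruction is emitted, the CPU may
	 * still reorder store->load across this point */
	_ReadWriteBarrier();

	/* a real full barrier: LOCK-prefixed read-modify-write on a
	 * local, same idea as the 'lock addl' asm in the patch */
	long tmp = 0;
	_InterlockedOr(&tmp, 0);
}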
>
>>
>> Another, more generic comment - do we really need to pollute all that
>> code with RTE_TOOLCHAIN_MSVC ifdefs?
>> Right now we have the ability to have a subdir per arch (x86/arm/etc.).
>> Can we treat x86+windows+msvc as a special arch?
>
> i asked this question previously and confirmed in the technical board
> meeting. the answer i received was that the community did not want new
> directories/headers introduced for the compiler support matrix and that
> i should use #ifdef in the existing headers.
Ok, can I then ask to at least keep the number of ifdefs to an absolute
minimum?
It is really hard to read and follow code that is heavily ifdefed.
For example, above we probably don't need to re-define
rte_smp_rmb/rte_smp_wmb at all, as both boil down to
rte_compiler_barrier(), which is already redefined per toolchain.
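Something like the below would keep the toolchain ifdef in one place only
(a sketch of the generic header, untested):

#ifndef RTE_TOOLCHAIN_MSVC
#define rte_compiler_barrier() do { \
	asm volatile ("" : : : "memory"); \
} while (0)
#else
#define rte_compiler_barrier() _ReadWriteBarrier()
#endif

/* no further ifdef needed: on x86 both expand to the
 * per-toolchain compiler barrier above */
#define rte_smp_wmb() rte_compiler_barrier()
#define rte_smp_rmb() rte_compiler_barrier()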
Another question - could we take the vice-versa approach:
can we replace some of the inline assembly with intrinsics common to all
compilers, wherever possible?
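For instance, GCC, clang and MSVC all understand the SSE2 fence intrinsics
(_mm_lfence/_mm_sfence/_mm_mfence), so rte_smp_mb() could in principle
become a single definition with no ifdef at all. A sketch (whether MFENCE
is acceptable performance-wise compared to the LOCK-ed add would of course
need measuring):

#include <emmintrin.h> /* _mm_mfence(); MSVC also has it via <intrin.h> */

static __rte_always_inline void
rte_smp_mb(void)
{
	/* full memory barrier via an intrinsic common to all toolchains */
	_mm_mfence();
}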