arm64: atomic_lse: match asm register sizes

The LSE atomic code uses asm register variables to ensure that
parameters are allocated in specific registers. In the majority of cases
we specifically ask for an x register when using 64-bit values, but in a
couple of cases we use a w register for a 64-bit value.

For asm register variables, the compiler only cares about the register
index, with wN and xN having the same meaning. The compiler determines
the register size to use based on the type of the variable. Thus, this
inconsistency is merely confusing, and not harmful to code generation.
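As a standalone illustration (a minimal user-space sketch, not taken from the patch, and only meaningful on an aarch64 toolchain): the constraint below names the 32-bit alias "w0", but because the variable is a 64-bit long, the generated code operates on x0 all the same:

#include <stdio.h>

/*
 * Minimal sketch: the register variable is bound to register index 0.
 * The "w0" spelling is treated identically to "x0"; the access width
 * comes from the variable's type, and long is 64-bit on arm64, so the
 * compiler emits x0 either way.
 */
static long add_one(long i)
{
	register long x0 asm ("w0") = i;	/* same allocation as asm ("x0") */

	asm volatile("add %0, %0, #1" : "+r" (x0));	/* %0 prints as x0 */
	return x0;
}

int main(void)
{
	printf("%ld\n", add_one(41));	/* prints 42 */
	return 0;
}

Compiling this with either spelling of the constraint and diffing the disassembly should show identical code, which is why the change below is purely cosmetic.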

For consistency, this patch updates those cases to use the x register
alias. There should be no functional change as a result of this patch.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Mark Rutland 2017-05-03 16:09:37 +01:00 committed by Catalin Marinas
parent 55de49f9aa
commit 8997c93452


@@ -322,7 +322,7 @@ static inline void atomic64_and(long i, atomic64_t *v)
 #define ATOMIC64_FETCH_OP_AND(name, mb, cl...)				\
 static inline long atomic64_fetch_and##name(long i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("w0") = i;				\
+	register long x0 asm ("x0") = i;				\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -394,7 +394,7 @@ ATOMIC64_OP_SUB_RETURN(        , al, "memory")
 #define ATOMIC64_FETCH_OP_SUB(name, mb, cl...)				\
 static inline long atomic64_fetch_sub##name(long i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("w0") = i;				\
+	register long x0 asm ("x0") = i;				\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
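For context, the register bindings matter because ARM64_LSE_ATOMIC_INSN patches between an out-of-line LL/SC fallback and the inline LSE instruction, and the out-of-line call takes its arguments in x0 and x1 per the AAPCS64 procedure call standard. A simplified sketch of what the fetch_and wrapper expands to for the relaxed variant (illustrative only, not verbatim kernel source; __LL_SC_ATOMIC64 and __LL_SC_CLOBBERS are assumed from the surrounding kernel headers):

static inline long atomic64_fetch_and_relaxed(long i, atomic64_t *v)
{
	register long x0 asm ("x0") = i;		/* arg 0 of the fallback call */
	register atomic64_t *x1 asm ("x1") = v;		/* arg 1 of the fallback call */

	asm volatile(ARM64_LSE_ATOMIC_INSN(
	/* LL/SC: branch to an out-of-line routine; x0/x1 already hold the args */
	__LL_SC_ATOMIC64(fetch_and_relaxed),
	/* LSE: AND is performed as a clear of the complemented mask */
	"	mvn	%[i], %[i]\n"
	"	ldclr	%[i], %[i], %[v]")
	: [i] "+r" (x0), [v] "+Q" (v->counter)
	: "r" (x1)
	: __LL_SC_CLOBBERS);

	return x0;
}

Had the w0 spelling affected code generation, the fallback call would have received a truncated argument; since only the register index is consulted, it does not.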