<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <div class="moz-cite-prefix">On 02/03/26 16:42, Christophe Leroy (CS
      GROUP) wrote:<br>
    </div>
    <blockquote type="cite"
      cite="mid:d90efa16-932e-4c29-b8e1-1a4ef08db403@kernel.org">Hi
      Sayali,
      <br>
      <br>
      On 28/02/2026 at 14:53, Sayali Patil wrote:
      <br>
      <blockquote type="cite">On powerpc with PREEMPT_FULL or
        PREEMPT_LAZY and function tracing enabled,
        <br>
        KUAP warnings can be triggered from the VMX usercopy path under
        memory
        <br>
        stress workloads.
        <br>
        <br>
        KUAP requires that no subfunctions are called once userspace
        access has
        <br>
        been enabled. The existing VMX copy implementation violates this
        <br>
        requirement by invoking enter_vmx_usercopy() from the assembly
        path after
        <br>
        userspace access has already been enabled. If preemption occurs
        <br>
        in this window, the AMR state may not be preserved correctly,
        <br>
        leading to unexpected userspace access state and resulting in
        <br>
        KUAP warnings.
        <br>
        <br>
        Fix this by restructuring the VMX usercopy flow so that VMX
        selection
        <br>
        and VMX state management are centralized in
        raw_copy_tofrom_user(),
        <br>
        which is invoked by the raw_copy_{to,from,in}_user() wrappers.
        <br>
        <br>
        Introduce a usercopy_mode enum to describe the copy direction
        <br>
        (IN, FROM, TO) and use it to derive the required KUAP
        permissions.
        <br>
        Userspace access is now enabled and disabled through common
        helpers
        <br>
        based on the selected mode, ensuring that the correct read/write
        <br>
        permissions are applied consistently.
        <br>
        <br>
          The new flow is:
        <br>
        <br>
           - raw_copy_{to,from,in}_user() calls raw_copy_tofrom_user()
        <br>
           - raw_copy_tofrom_user() decides whether to use the VMX path
        <br>
             based on size and CPU capability
        <br>
           - Call enter_vmx_usercopy() before enabling userspace access
        <br>
           - Enable userspace access as per the usercopy mode
        <br>
             and perform the VMX copy
        <br>
           - Disable userspace access as per the usercopy mode
        <br>
           - Call exit_vmx_usercopy()
        <br>
           - Fall back to the base copy routine if the VMX copy faults
        <br>
        <br>
        With this change, the VMX assembly routines no longer perform
        VMX state
        <br>
        management or call helper functions; they only implement the
        <br>
        copy operations.
        <br>
        The previous feature-section based VMX selection inside
        <br>
        __copy_tofrom_user_power7() is removed, and a dedicated
        <br>
        __copy_tofrom_user_power7_vmx() entry point is introduced.
        <br>
        <br>
        This ensures correct KUAP ordering, avoids subfunction calls
        <br>
        while KUAP is unlocked, and eliminates the warnings while
        preserving
        <br>
        the VMX fast path.
        <br>
        <br>
        Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel
        Userspace Access Protection")
        <br>
        Reported-by: Shrikanth Hegde <a class="moz-txt-link-rfc2396E" href="mailto:sshegde@linux.ibm.com"><sshegde@linux.ibm.com></a>
        <br>
        Closes:
<a class="moz-txt-link-freetext" href="https://lore.kernel.org/all/20260109064917.777587-2-sshegde@linux.ibm.com/">https://lore.kernel.org/all/20260109064917.777587-2-sshegde@linux.ibm.com/</a><br>
        Suggested-by: Christophe Leroy <a class="moz-txt-link-rfc2396E" href="mailto:chleroy@kernel.org"><chleroy@kernel.org></a>
        <br>
        Co-developed-by: Aboorva Devarajan
        <a class="moz-txt-link-rfc2396E" href="mailto:aboorvad@linux.ibm.com"><aboorvad@linux.ibm.com></a>
        <br>
        Signed-off-by: Aboorva Devarajan <a class="moz-txt-link-rfc2396E" href="mailto:aboorvad@linux.ibm.com"><aboorvad@linux.ibm.com></a>
        <br>
        Signed-off-by: Sayali Patil <a class="moz-txt-link-rfc2396E" href="mailto:sayalip@linux.ibm.com"><sayalip@linux.ibm.com></a>
        <br>
        ---
        <br>
        <br>
        v1->v2
        <br>
           - Updated as per the review comments.
        <br>
           - Centralized VMX usercopy handling in
        __copy_tofrom_user_vmx() in
        <br>
             arch/powerpc/lib/vmx-helper.c.
        <br>
           - Introduced a usercopy_mode enum to describe the copy
        direction
        <br>
             (IN, FROM, TO) and derive the required KUAP permissions,
        avoiding
        <br>
             duplication across the different usercopy paths.
        <br>
      </blockquote>
      <br>
      I like the reduction of duplication you propose, but I can't see
      the added value of that enum. What about:
      <br>
      <br>
      diff --git a/arch/powerpc/include/asm/uaccess.h
      b/arch/powerpc/include/asm/uaccess.h
      <br>
      index 63d6eb8b004e..14a3219db838 100644
      <br>
      --- a/arch/powerpc/include/asm/uaccess.h
      <br>
      +++ b/arch/powerpc/include/asm/uaccess.h
      <br>
      @@ -329,12 +329,6 @@ do {                                \
      <br>
       extern unsigned long __copy_tofrom_user(void __user *to,
      <br>
               const void __user *from, unsigned long size);
      <br>
      <br>
      -enum usercopy_mode {
      <br>
      -    USERCOPY_IN,
      <br>
      -    USERCOPY_FROM,
      <br>
      -    USERCOPY_TO,
      <br>
      -};
      <br>
      -
      <br>
       unsigned long __copy_tofrom_user_vmx(void __user *to, const void
      __user *from,
      <br>
                       unsigned long size, enum usercopy_mode mode);
      <br>
      <br>
      @@ -352,48 +346,18 @@ static inline bool will_use_vmx(unsigned
      long n)
      <br>
               n > VMX_COPY_THRESHOLD;
      <br>
       }
      <br>
      <br>
      -static inline void raw_copy_allow(void __user *to, enum
      usercopy_mode mode)
      <br>
      -{
      <br>
      -    switch (mode) {
      <br>
      -    case USERCOPY_IN:
      <br>
      -        allow_user_access(to, KUAP_READ_WRITE);
      <br>
      -        break;
      <br>
      -    case USERCOPY_FROM:
      <br>
      -        allow_user_access(NULL, KUAP_READ);
      <br>
      -        break;
      <br>
      -    case USERCOPY_TO:
      <br>
      -        allow_user_access(to, KUAP_WRITE);
      <br>
      -        break;
      <br>
      -    }
      <br>
      -}
      <br>
      -
      <br>
      -static inline void raw_copy_prevent(enum usercopy_mode mode)
      <br>
      -{
      <br>
      -    switch (mode) {
      <br>
      -    case USERCOPY_IN:
      <br>
      -        prevent_user_access(KUAP_READ_WRITE);
      <br>
      -        break;
      <br>
      -    case USERCOPY_FROM:
      <br>
      -        prevent_user_access(KUAP_READ);
      <br>
      -        break;
      <br>
      -    case USERCOPY_TO:
      <br>
      -        prevent_user_access(KUAP_WRITE);
      <br>
      -        break;
      <br>
      -    }
      <br>
      -}
      <br>
      -
      <br>
       static inline unsigned long raw_copy_tofrom_user(void __user *to,
      <br>
               const void __user *from, unsigned long n,
      <br>
      -        enum usercopy_mode mode)
      <br>
      +        unsigned long dir)
      <br>
       {
      <br>
           unsigned long ret;
      <br>
      <br>
           if (will_use_vmx(n))
      <br>
               return __copy_tofrom_user_vmx(to, from,    n, mode);
      <br>
      <br>
      -    raw_copy_allow(to, mode);
      <br>
      +    allow_user_access(to, dir);
      <br>
           ret = __copy_tofrom_user(to, from, n);
      <br>
      -    raw_copy_prevent(mode);
      <br>
      +    prevent_user_access(dir);
      <br>
           return ret;
      <br>
      <br>
       }
      <br>
      @@ -403,22 +367,20 @@ static inline unsigned long
      <br>
       raw_copy_in_user(void __user *to, const void __user *from,
      unsigned long n)
      <br>
       {
      <br>
           barrier_nospec();
      <br>
      -    return raw_copy_tofrom_user(to, from, n, USERCOPY_IN);
      <br>
      +    return raw_copy_tofrom_user(to, from, n, KUAP_READ_WRITE);
      <br>
       }
      <br>
       #endif /* __powerpc64__ */
      <br>
      <br>
       static inline unsigned long raw_copy_from_user(void *to,
      <br>
               const void __user *from, unsigned long n)
      <br>
       {
      <br>
      -    return raw_copy_tofrom_user((__force void __user *)to, from,
      <br>
      -                    n, USERCOPY_FROM);
      <br>
      +    return raw_copy_tofrom_user((__force void __user *)to, from,
      n, KUAP_READ);
      <br>
       }
      <br>
      <br>
       static inline unsigned long
      <br>
       raw_copy_to_user(void __user *to, const void *from, unsigned long
      n)
      <br>
       {
      <br>
      -    return raw_copy_tofrom_user(to, (__force const void __user
      *)from,
      <br>
      -                    n, USERCOPY_TO);
      <br>
      +    return raw_copy_tofrom_user(to, (__force const void __user
      *)from, n, KUAP_WRITE);
      <br>
       }
      <br>
      <br>
       unsigned long __arch_clear_user(void __user *addr, unsigned long
      size);
      <br>
      diff --git a/arch/powerpc/lib/vmx-helper.c
      b/arch/powerpc/lib/vmx-helper.c
      <br>
      index 35080885204b..4610f7153fd9 100644
      <br>
      --- a/arch/powerpc/lib/vmx-helper.c
      <br>
      +++ b/arch/powerpc/lib/vmx-helper.c
      <br>
      @@ -11,25 +11,25 @@
      <br>
       #include <asm/switch_to.h>
      <br>
      <br>
       unsigned long __copy_tofrom_user_vmx(void __user *to, const void
      __user *from,
      <br>
      -            unsigned long size, enum usercopy_mode mode)
      <br>
      +            unsigned long size, unsigned long dir)
      <br>
       {
      <br>
           unsigned long ret;
      <br>
      <br>
           if (!enter_vmx_usercopy()) {
      <br>
      -        raw_copy_allow(to, mode);
      <br>
      +        allow_user_access(to, dir);
      <br>
               ret = __copy_tofrom_user(to, from, size);
      <br>
      -        raw_copy_prevent(mode);
      <br>
      +        prevent_user_access(dir);
      <br>
               return ret;
      <br>
           }
      <br>
      <br>
      -    raw_copy_allow(to, mode);
      <br>
      +    allow_user_access(to, dir);
      <br>
           ret = __copy_tofrom_user_power7_vmx(to, from, size);
      <br>
      -    raw_copy_prevent(mode);
      <br>
      +    prevent_user_access(dir);
      <br>
           exit_vmx_usercopy();
      <br>
           if (unlikely(ret)) {
      <br>
      -        raw_copy_allow(to, mode);
      <br>
      +        allow_user_access(to, dir);
      <br>
               ret = __copy_tofrom_user_base(to, from, size);
      <br>
      -        raw_copy_prevent(mode);
      <br>
      +        prevent_user_access(dir);
      <br>
           }
      <br>
      <br>
           return ret;
      <br>
      <br>
      <br>
      <br>
      Christophe <br>
      <br>
    </blockquote>
    <font face="monospace" size="4">Hi Christophe,<br>
      Thanks for the review.<br>
      With the suggested change, we are hitting a compilation error.<br>
      <br>
      The issue is related to how KUAP enforces the access direction.<br>
      allow_user_access() contains:<br>
      <br>
      BUILD_BUG_ON(!__builtin_constant_p(dir));<br>
      <br>
      which requires the access direction to be a compile-time
      constant.<br>
      If we pass a runtime value (for example, a function parameter of
      type unsigned long), the<br>
      __builtin_constant_p() check fails and triggers the following
      build error.<br>
      <br>
      Error:<br>
      In function 'allow_user_access', inlined from
      '__copy_tofrom_user_vmx' at arch/powerpc/lib/vmx-helper.c:19:3:<br>
      BUILD_BUG_ON failed: !__builtin_constant_p(dir) 706<br>
      <br>
      <br>
      The previous implementation worked because allow_user_access() was
      always invoked with the compile-time constants KUAP_READ,<br>
      KUAP_WRITE and KUAP_READ_WRITE, which satisfied the
      __builtin_constant_p() requirement.<br>
      So in this case, the function must be called with a compile-time
      constant to satisfy KUAP.<br>
      <br>
      Please let me know if you would prefer a different approach.<br>
      <br>
      Regards,<br>
      Sayali</font><br>
    <br>
    <br>
    <blockquote type="cite"
      cite="mid:d90efa16-932e-4c29-b8e1-1a4ef08db403@kernel.org">
      <blockquote type="cite">
        <br>
        v1:
<a class="moz-txt-link-freetext" href="https://lore.kernel.org/all/20260217124457.89219-1-sayalip@linux.ibm.com/">https://lore.kernel.org/all/20260217124457.89219-1-sayalip@linux.ibm.com/</a><br>
        <br>
        ---
        <br>
          arch/powerpc/include/asm/uaccess.h | 95
        ++++++++++++++++++++++++------
        <br>
          arch/powerpc/lib/copyuser_64.S     |  1 +
        <br>
          arch/powerpc/lib/copyuser_power7.S | 45 +++++---------
        <br>
          arch/powerpc/lib/vmx-helper.c      | 26 ++++++++
        <br>
          4 files changed, 119 insertions(+), 48 deletions(-)
        <br>
        <br>
        diff --git a/arch/powerpc/include/asm/uaccess.h
        b/arch/powerpc/include/asm/uaccess.h
        <br>
        index ba1d878c3f40..63d6eb8b004e 100644
        <br>
        --- a/arch/powerpc/include/asm/uaccess.h
        <br>
        +++ b/arch/powerpc/include/asm/uaccess.h
        <br>
        @@ -15,6 +15,9 @@
        <br>
          #define TASK_SIZE_MAX        TASK_SIZE_USER64
        <br>
          #endif
        <br>
          +/* Threshold above which VMX copy path is used */
        <br>
        +#define VMX_COPY_THRESHOLD 3328
        <br>
        +
        <br>
          #include <asm-generic/access_ok.h>
        <br>
            /*
        <br>
        @@ -326,40 +329,96 @@ do {                                \
        <br>
          extern unsigned long __copy_tofrom_user(void __user *to,
        <br>
                  const void __user *from, unsigned long size);
        <br>
          -#ifdef __powerpc64__
        <br>
        -static inline unsigned long
        <br>
        -raw_copy_in_user(void __user *to, const void __user *from,
        unsigned long n)
        <br>
        +enum usercopy_mode {
        <br>
        +    USERCOPY_IN,
        <br>
        +    USERCOPY_FROM,
        <br>
        +    USERCOPY_TO,
        <br>
        +};
        <br>
        +
        <br>
        +unsigned long __copy_tofrom_user_vmx(void __user *to, const
        void __user *from,
        <br>
        +                unsigned long size, enum usercopy_mode mode);
        <br>
        +
        <br>
        +unsigned long __copy_tofrom_user_base(void __user *to,
        <br>
        +        const void __user *from, unsigned long size);
        <br>
        +
        <br>
        +unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
        <br>
        +        const void __user *from, unsigned long size);
        <br>
        +
        <br>
        +
        <br>
        +static inline bool will_use_vmx(unsigned long n)
        <br>
        +{
        <br>
        +    return IS_ENABLED(CONFIG_ALTIVEC) &&
        <br>
        +        cpu_has_feature(CPU_FTR_VMX_COPY) &&
        <br>
        +        n > VMX_COPY_THRESHOLD;
        <br>
        +}
        <br>
        +
        <br>
        +static inline void raw_copy_allow(void __user *to, enum
        usercopy_mode mode)
        <br>
        +{
        <br>
        +    switch (mode) {
        <br>
        +    case USERCOPY_IN:
        <br>
        +        allow_user_access(to, KUAP_READ_WRITE);
        <br>
        +        break;
        <br>
        +    case USERCOPY_FROM:
        <br>
        +        allow_user_access(NULL, KUAP_READ);
        <br>
        +        break;
        <br>
        +    case USERCOPY_TO:
        <br>
        +        allow_user_access(to, KUAP_WRITE);
        <br>
        +        break;
        <br>
        +    }
        <br>
        +}
        <br>
        +
        <br>
        +static inline void raw_copy_prevent(enum usercopy_mode mode)
        <br>
        +{
        <br>
        +    switch (mode) {
        <br>
        +    case USERCOPY_IN:
        <br>
        +        prevent_user_access(KUAP_READ_WRITE);
        <br>
        +        break;
        <br>
        +    case USERCOPY_FROM:
        <br>
        +        prevent_user_access(KUAP_READ);
        <br>
        +        break;
        <br>
        +    case USERCOPY_TO:
        <br>
        +        prevent_user_access(KUAP_WRITE);
        <br>
        +        break;
        <br>
        +    }
        <br>
        +}
        <br>
        +
        <br>
        +static inline unsigned long raw_copy_tofrom_user(void __user
        *to,
        <br>
        +        const void __user *from, unsigned long n,
        <br>
        +        enum usercopy_mode mode)
        <br>
          {
        <br>
              unsigned long ret;
        <br>
          -    barrier_nospec();
        <br>
        -    allow_user_access(to, KUAP_READ_WRITE);
        <br>
        +    if (will_use_vmx(n))
        <br>
        +        return __copy_tofrom_user_vmx(to, from,    n, mode);
        <br>
        +
        <br>
        +    raw_copy_allow(to, mode);
        <br>
              ret = __copy_tofrom_user(to, from, n);
        <br>
        -    prevent_user_access(KUAP_READ_WRITE);
        <br>
        +    raw_copy_prevent(mode);
        <br>
              return ret;
        <br>
        +
        <br>
        +}
        <br>
        +
        <br>
        +#ifdef __powerpc64__
        <br>
        +static inline unsigned long
        <br>
        +raw_copy_in_user(void __user *to, const void __user *from,
        unsigned long n)
        <br>
        +{
        <br>
        +    barrier_nospec();
        <br>
        +    return raw_copy_tofrom_user(to, from, n, USERCOPY_IN);
        <br>
          }
        <br>
          #endif /* __powerpc64__ */
        <br>
            static inline unsigned long raw_copy_from_user(void *to,
        <br>
                  const void __user *from, unsigned long n)
        <br>
          {
        <br>
        -    unsigned long ret;
        <br>
        -
        <br>
        -    allow_user_access(NULL, KUAP_READ);
        <br>
        -    ret = __copy_tofrom_user((__force void __user *)to, from,
        n);
        <br>
        -    prevent_user_access(KUAP_READ);
        <br>
        -    return ret;
        <br>
        +    return raw_copy_tofrom_user((__force void __user *)to,
        from,
        <br>
        +                    n, USERCOPY_FROM);
        <br>
          }
        <br>
            static inline unsigned long
        <br>
          raw_copy_to_user(void __user *to, const void *from, unsigned
        long n)
        <br>
          {
        <br>
        -    unsigned long ret;
        <br>
        -
        <br>
        -    allow_user_access(to, KUAP_WRITE);
        <br>
        -    ret = __copy_tofrom_user(to, (__force const void __user
        *)from, n);
        <br>
        -    prevent_user_access(KUAP_WRITE);
        <br>
        -    return ret;
        <br>
        +    return raw_copy_tofrom_user(to, (__force const void __user
        *)from,
        <br>
        +                    n, USERCOPY_TO);
        <br>
          }
        <br>
            unsigned long __arch_clear_user(void __user *addr, unsigned
        long size);
        <br>
        diff --git a/arch/powerpc/lib/copyuser_64.S
        b/arch/powerpc/lib/copyuser_64.S
        <br>
        index 9af969d2cc0c..25a99108caff 100644
        <br>
        --- a/arch/powerpc/lib/copyuser_64.S
        <br>
        +++ b/arch/powerpc/lib/copyuser_64.S
        <br>
        @@ -562,3 +562,4 @@ exc;    std    r10,32(3)
        <br>
              li    r5,4096
        <br>
              b    .Ldst_aligned
        <br>
          EXPORT_SYMBOL(__copy_tofrom_user)
        <br>
        +EXPORT_SYMBOL(__copy_tofrom_user_base)
        <br>
        diff --git a/arch/powerpc/lib/copyuser_power7.S
        b/arch/powerpc/lib/copyuser_power7.S
        <br>
        index 8474c682a178..17dbcfbae25f 100644
        <br>
        --- a/arch/powerpc/lib/copyuser_power7.S
        <br>
        +++ b/arch/powerpc/lib/copyuser_power7.S
        <br>
        @@ -5,13 +5,9 @@
        <br>
           *
        <br>
           * Author: Anton Blanchard <a class="moz-txt-link-rfc2396E" href="mailto:anton@au.ibm.com"><anton@au.ibm.com></a>
        <br>
           */
        <br>
        +#include <linux/export.h>
        <br>
          #include <asm/ppc_asm.h>
        <br>
          -#ifndef SELFTEST_CASE
        <br>
        -/* 0 == don't use VMX, 1 == use VMX */
        <br>
        -#define SELFTEST_CASE    0
        <br>
        -#endif
        <br>
        -
        <br>
          #ifdef __BIG_ENDIAN__
        <br>
          #define LVS(VRT,RA,RB)        lvsl    VRT,RA,RB
        <br>
          #define VPERM(VRT,VRA,VRB,VRC)    vperm    VRT,VRA,VRB,VRC
        <br>
        @@ -47,10 +43,14 @@
        <br>
              ld    r15,STK_REG(R15)(r1)
        <br>
              ld    r14,STK_REG(R14)(r1)
        <br>
          .Ldo_err3:
        <br>
        -    bl    CFUNC(exit_vmx_usercopy)
        <br>
        +    ld      r6,STK_REG(R31)(r1)    /* original destination
        pointer */
        <br>
        +    ld      r5,STK_REG(R29)(r1)    /* original number of bytes
        */
        <br>
        +    subf    r7,r6,r3        /* #bytes copied */
        <br>
        +    subf    r3,r7,r5        /* #bytes not copied in r3 */
        <br>
              ld    r0,STACKFRAMESIZE+16(r1)
        <br>
              mtlr    r0
        <br>
        -    b    .Lexit
        <br>
        +    addi    r1,r1,STACKFRAMESIZE
        <br>
        +    blr
        <br>
          #endif /* CONFIG_ALTIVEC */
        <br>
            .Ldo_err2:
        <br>
        @@ -74,7 +74,6 @@
        <br>
            _GLOBAL(__copy_tofrom_user_power7)
        <br>
              cmpldi    r5,16
        <br>
        -    cmpldi    cr1,r5,3328
        <br>
                std    r3,-STACKFRAMESIZE+STK_REG(R31)(r1)
        <br>
              std    r4,-STACKFRAMESIZE+STK_REG(R30)(r1)
        <br>
        @@ -82,12 +81,6 @@ _GLOBAL(__copy_tofrom_user_power7)
        <br>
                blt    .Lshort_copy
        <br>
          -#ifdef CONFIG_ALTIVEC
        <br>
        -test_feature = SELFTEST_CASE
        <br>
        -BEGIN_FTR_SECTION
        <br>
        -    bgt    cr1,.Lvmx_copy
        <br>
        -END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
        <br>
        -#endif
        <br>
            .Lnonvmx_copy:
        <br>
              /* Get the source 8B aligned */
        <br>
        @@ -263,23 +256,14 @@ err1;    stb    r0,0(r3)
        <br>
          15:    li    r3,0
        <br>
              blr
        <br>
          -.Lunwind_stack_nonvmx_copy:
        <br>
        -    addi    r1,r1,STACKFRAMESIZE
        <br>
        -    b    .Lnonvmx_copy
        <br>
        -
        <br>
        -.Lvmx_copy:
        <br>
          #ifdef CONFIG_ALTIVEC
        <br>
        +_GLOBAL(__copy_tofrom_user_power7_vmx)
        <br>
              mflr    r0
        <br>
              std    r0,16(r1)
        <br>
              stdu    r1,-STACKFRAMESIZE(r1)
        <br>
        -    bl    CFUNC(enter_vmx_usercopy)
        <br>
        -    cmpwi    cr1,r3,0
        <br>
        -    ld    r0,STACKFRAMESIZE+16(r1)
        <br>
        -    ld    r3,STK_REG(R31)(r1)
        <br>
        -    ld    r4,STK_REG(R30)(r1)
        <br>
        -    ld    r5,STK_REG(R29)(r1)
        <br>
        -    mtlr    r0
        <br>
          +    std     r3,STK_REG(R31)(r1)
        <br>
        +    std     r5,STK_REG(R29)(r1)
        <br>
              /*
        <br>
               * We prefetch both the source and destination using
        enhanced touch
        <br>
               * instructions. We use a stream ID of 0 for the load side
        and
        <br>
        @@ -300,8 +284,6 @@ err1;    stb    r0,0(r3)
        <br>
                DCBT_SETUP_STREAMS(r6, r7, r9, r10, r8)
        <br>
          -    beq    cr1,.Lunwind_stack_nonvmx_copy
        <br>
        -
        <br>
              /*
        <br>
               * If source and destination are not relatively aligned we
        use a
        <br>
               * slower permute loop.
        <br>
        @@ -478,7 +460,8 @@ err3;    lbz    r0,0(r4)
        <br>
          err3;    stb    r0,0(r3)
        <br>
            15:    addi    r1,r1,STACKFRAMESIZE
        <br>
        -    b    CFUNC(exit_vmx_usercopy)    /* tail call optimise */
        <br>
        +    li r3,0
        <br>
        +    blr
        <br>
            .Lvmx_unaligned_copy:
        <br>
              /* Get the destination 16B aligned */
        <br>
        @@ -681,5 +664,7 @@ err3;    lbz    r0,0(r4)
        <br>
          err3;    stb    r0,0(r3)
        <br>
            15:    addi    r1,r1,STACKFRAMESIZE
        <br>
        -    b    CFUNC(exit_vmx_usercopy)    /* tail call optimise */
        <br>
        +    li r3,0
        <br>
        +    blr
        <br>
        +EXPORT_SYMBOL(__copy_tofrom_user_power7_vmx)
        <br>
          #endif /* CONFIG_ALTIVEC */
        <br>
        diff --git a/arch/powerpc/lib/vmx-helper.c
        b/arch/powerpc/lib/vmx-helper.c
        <br>
        index 54340912398f..35080885204b 100644
        <br>
        --- a/arch/powerpc/lib/vmx-helper.c
        <br>
        +++ b/arch/powerpc/lib/vmx-helper.c
        <br>
        @@ -10,6 +10,32 @@
        <br>
          #include <linux/hardirq.h>
        <br>
          #include <asm/switch_to.h>
        <br>
          +unsigned long __copy_tofrom_user_vmx(void __user *to, const
        void __user *from,
        <br>
        +            unsigned long size, enum usercopy_mode mode)
        <br>
        +{
        <br>
        +    unsigned long ret;
        <br>
        +
        <br>
        +    if (!enter_vmx_usercopy()) {
        <br>
        +        raw_copy_allow(to, mode);
        <br>
        +        ret = __copy_tofrom_user(to, from, size);
        <br>
        +        raw_copy_prevent(mode);
        <br>
        +        return ret;
        <br>
        +    }
        <br>
        +
        <br>
        +    raw_copy_allow(to, mode);
        <br>
        +    ret = __copy_tofrom_user_power7_vmx(to, from, size);
        <br>
        +    raw_copy_prevent(mode);
        <br>
        +    exit_vmx_usercopy();
        <br>
        +    if (unlikely(ret)) {
        <br>
        +        raw_copy_allow(to, mode);
        <br>
        +        ret = __copy_tofrom_user_base(to, from, size);
        <br>
        +        raw_copy_prevent(mode);
        <br>
        +    }
        <br>
        +
        <br>
        +    return ret;
        <br>
        +}
        <br>
        +EXPORT_SYMBOL(__copy_tofrom_user_vmx);
        <br>
        +
        <br>
          int enter_vmx_usercopy(void)
        <br>
          {
        <br>
              if (in_interrupt())
        <br>
      </blockquote>
      <br>
    </blockquote>
  </body>
</html>