[PATCH 00/36] AES library improvements

Ard Biesheuvel ardb at kernel.org
Thu Jan 8 22:32:00 AEDT 2026


On Mon, 5 Jan 2026 at 06:14, Eric Biggers <ebiggers at kernel.org> wrote:
>
> This series applies to libcrypto-next.  It can also be retrieved from:
>
>     git fetch https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git aes-lib-v1
>
> This series makes three main improvements to the kernel's AES library:
>
>   1. Make it use the kernel's existing architecture-optimized AES code,
>      including AES instructions, when available.  Previously, only the
>      traditional crypto API gave access to the optimized AES code.
>      (As a reminder, AES instructions typically make AES over 10 times
>      as fast as the generic code.  They also make it constant-time.)
>
>   2. Support preparing an AES key for only the forward direction of the
>      block cipher, using about half as much memory.  This is a helpful
>      optimization for many common AES modes of operation.  It also helps
>      keep structs small enough to be allocated on the stack, especially
>      considering potential future library APIs for AES modes.
>
>   3. Replace the library's generic AES implementation with a much faster
>      one that is almost as fast as "aes-generic", while still keeping
>      the table size reasonably small and maintaining some constant-time
>      hardening.  This allows removing "aes-generic", unifying the
>      current two generic AES implementations in the kernel tree.
>
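
For what it's worth, the "about half as much memory" in (2) is easy to
see from the existing context layout, which always carries both
directions of the key schedule:

    /* include/crypto/aes.h today (sizes for reference) */
    struct crypto_aes_ctx {
            u32 key_enc[AES_MAX_KEYLENGTH_U32];  /* 60 words = 240 bytes */
            u32 key_dec[AES_MAX_KEYLENGTH_U32];  /* another 240 bytes */
            u32 key_length;
    };

so an encrypt-only key of roughly 240 bytes is much friendlier to
on-stack use.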

Architectures that support memory operands will be impacted by
dropping the pre-rotated lookup tables, especially if they have few
GPRs.

I suspect that doesn't really matter in practice: if your pre-AESNI
IA-32 workload were bottlenecked on "aes-generic", you would probably
have moved it to a different machine by now. But the performance delta
will likely be noticeable, so it is something that deserves a mention.
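
Concretely, the per-column delta is roughly the following (an
illustrative sketch only; the table and function names are made up,
not taken from the tree):

    #include <linux/types.h>
    #include <linux/bitops.h>   /* rol32() */

    /*
     * Assume t1[x] == rol32(t0[x], 8), t2[x] == rol32(t0[x], 16) and
     * t3[x] == rol32(t0[x], 24), i.e. t1..t3 are pre-rotated copies
     * of t0, and tab is the single shared table (== t0).
     */

    /* Four pre-rotated 1 KiB tables: four plain loads, which a
     * register-starved IA-32 build can fold straight into memory
     * operands of the XORs. */
    static u32 mix_column_prerotated(const u32 t0[256], const u32 t1[256],
                                     const u32 t2[256], const u32 t3[256],
                                     u32 s0, u32 s1, u32 s2, u32 s3)
    {
            return t0[s0 & 0xff] ^ t1[(s1 >> 8) & 0xff] ^
                   t2[(s2 >> 16) & 0xff] ^ t3[s3 >> 24];
    }

    /* One shared table: same result, but three of the four loads now
     * need an explicit rotate, costing extra instructions and extra
     * register pressure per column. */
    static u32 mix_column_single_table(const u32 tab[256],
                                       u32 s0, u32 s1, u32 s2, u32 s3)
    {
            return tab[s0 & 0xff] ^
                   rol32(tab[(s1 >> 8) & 0xff], 8) ^
                   rol32(tab[(s2 >> 16) & 0xff], 16) ^
                   rol32(tab[s3 >> 24], 24);
    }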

> (1) and (2) end up being interrelated: the existing
> 'struct crypto_aes_ctx' does not work for either one (in general).
> Thus, this series reworks the AES library to be based around new data
> types 'struct aes_key' and 'struct aes_enckey'.
>
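
If I'm reading (2) right, an encrypt-only user then ends up with
something along these lines (a rough sketch; the function names are my
guess from this description, not copied from the patches):

    #include <linux/types.h>
    #include <crypto/aes.h>     /* assuming the new types land here too */

    static int encrypt_one_block(u8 dst[AES_BLOCK_SIZE],
                                 const u8 src[AES_BLOCK_SIZE],
                                 const u8 *raw_key, unsigned int key_len)
    {
            struct aes_enckey enckey;   /* forward round keys only */
            int err;

            /* hypothetical name, modelled on 'struct aes_enckey' */
            err = aes_prepare_enckey(&enckey, raw_key, key_len);
            if (err)
                    return err;

            aes_encrypt(&enckey, dst, src);   /* one 16-byte block */
            return 0;
    }

That would indeed be a lot nicer than lugging a full crypto_aes_ctx
around for modes that never touch the inverse cipher.
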
> As has been the case for other algorithms, to achieve (1) without
> duplicating the architecture-optimized code, it had to be moved into
> lib/crypto/ rather than copied.  To allow actually removing the
> arch-specific crypto_cipher "aes" algorithms, a consolidated "aes-lib"
> crypto_cipher algorithm which simply wraps the library is also added.
> That's most easily done by having it replace "aes-generic" as well, so
> this series does that too.  (That's another reason for doing (3) at the
> same time.)
>
> As usual, care is taken to support all the existing arch-optimized code.
> This makes it possible for users of the traditional crypto API to switch
> to the library API, which is generally much easier to use, without being
> concerned about performance regressions.
>
> That being said, this series only deals with the bare (single-block) AES
> library.  Future patchsets are expected to build on this work to provide
> architecture-optimized library APIs for specific AES modes of operation.
>
> Eric Biggers (36):
>   crypto: powerpc/aes - Rename struct aes_key
>   lib/crypto: aes: Introduce improved AES library
>   crypto: arm/aes-neonbs - Use AES library for single blocks
>   crypto: arm/aes - Switch to aes_enc_tab[] and aes_dec_tab[]
>   crypto: arm64/aes - Switch to aes_enc_tab[] and aes_dec_tab[]
>   crypto: arm64/aes - Select CRYPTO_LIB_SHA256 from correct places
>   crypto: aegis - Switch from crypto_ft_tab[] to aes_enc_tab[]
>   crypto: aes - Remove aes-fixed-time / CONFIG_CRYPTO_AES_TI
>   crypto: aes - Replace aes-generic with wrapper around lib
>   lib/crypto: arm/aes: Migrate optimized code into library
>   lib/crypto: arm64/aes: Migrate optimized code into library
>   lib/crypto: powerpc/aes: Migrate SPE optimized code into library
>   lib/crypto: powerpc/aes: Migrate POWER8 optimized code into library
>   lib/crypto: riscv/aes: Migrate optimized code into library
>   lib/crypto: s390/aes: Migrate optimized code into library
>   lib/crypto: sparc/aes: Migrate optimized code into library
>   lib/crypto: x86/aes: Add AES-NI optimization
>   crypto: x86/aes - Remove the superseded AES-NI crypto_cipher
>   Bluetooth: SMP: Use new AES library API
>   chelsio: Use new AES library API
>   net: phy: mscc: macsec: Use new AES library API
>   staging: rtl8723bs: core: Use new AES library API
>   crypto: arm/ghash - Use new AES library API
>   crypto: arm64/ghash - Use new AES library API
>   crypto: x86/aes-gcm - Use new AES library API
>   crypto: ccp - Use new AES library API
>   crypto: chelsio - Use new AES library API
>   crypto: crypto4xx - Use new AES library API
>   crypto: drbg - Use new AES library API
>   crypto: inside-secure - Use new AES library API
>   crypto: omap - Use new AES library API
>   lib/crypto: aescfb: Use new AES library API
>   lib/crypto: aesgcm: Use new AES library API
>   lib/crypto: aes: Remove old AES en/decryption functions
>   lib/crypto: aes: Drop "_new" suffix from en/decryption functions
>   lib/crypto: aes: Drop 'volatile' from aes_sbox and aes_inv_sbox
>

Nice cleanup

Acked-by: Ard Biesheuvel <ardb at kernel.org>

