[RFC Part1 PATCH v3 12/17] x86/mm: DMA support for SEV memory encryption

Borislav Petkov bp at suse.de
Mon Aug 7 13:48:20 AEST 2017


On Mon, Jul 24, 2017 at 02:07:52PM -0500, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky at amd.com>
> 
> DMA access to memory mapped as encrypted while SEV is active can not be
> encrypted during device write or decrypted during device read.

Yeah, definitely rewrite that sentence.

> In order
> for DMA to properly work when SEV is active, the SWIOTLB bounce buffers
> must be used.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky at amd.com>
> Signed-off-by: Brijesh Singh <brijesh.singh at amd.com>
> ---
>  arch/x86/mm/mem_encrypt.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++
>  lib/swiotlb.c             |  5 +--
>  2 files changed, 89 insertions(+), 2 deletions(-)

...
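The point of the quoted commit message is that with SEV the device cannot
access encrypted guest memory, so streaming DMA has to be bounced through
the (decrypted) SWIOTLB buffers. For context, a minimal sketch of what that
means from a driver's side - toy_send(), the device and the buffer are made
up, only dma_map_single()/dma_unmap_single() are the real streaming-DMA API
which ends up going through whatever dma_ops the patch installs:

#include <linux/dma-mapping.h>

/*
 * Hypothetical driver snippet (not part of the patch): with a
 * SWIOTLB-backed dma_ops installed, an ordinary streaming mapping is
 * transparently bounced through the SWIOTLB area, which the kernel has
 * mapped decrypted so that the device can read/write it.
 */
static int toy_send(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma_addr;

	/* With SEV active, this copies 'buf' into a decrypted SWIOTLB slot. */
	dma_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_addr))
		return -ENOMEM;

	/* ... hand dma_addr to the device and wait for completion ... */

	dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);
	return 0;
}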

> @@ -202,6 +280,14 @@ void __init mem_encrypt_init(void)
>  	/* Call into SWIOTLB to update the SWIOTLB DMA buffers */
>  	swiotlb_update_mem_attributes();
>  
> +	/*
> +	 * With SEV, DMA operations cannot use encryption. New DMA ops
> +	 * are required in order to mark the DMA areas as decrypted or
> +	 * to use bounce buffers.
> +	 */
> +	if (sev_active())
> +		dma_ops = &sme_dma_ops;

Well, we do differentiate between SME and SEV: the check is sev_active()
but the ops are called sme_dma_ops. Call them sev_dma_ops instead to
avoid the confusion.
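
IOW, something like this - a sketch only: the member list below is guessed
from the usual SWIOTLB forwarding set since the hunk defining the ops isn't
quoted here, and sev_alloc()/sev_free() stand in for whatever the patch's
coherent alloc/free helpers end up being called:

#include <linux/dma-mapping.h>
#include <linux/swiotlb.h>

/* Renamed ops: forward everything to SWIOTLB, keep SEV-aware alloc/free. */
static const struct dma_map_ops sev_dma_ops = {
	.alloc			= sev_alloc,
	.free			= sev_free,
	.map_page		= swiotlb_map_page,
	.unmap_page		= swiotlb_unmap_page,
	.map_sg			= swiotlb_map_sg_attrs,
	.unmap_sg		= swiotlb_unmap_sg_attrs,
	.sync_single_for_cpu	= swiotlb_sync_single_for_cpu,
	.sync_single_for_device	= swiotlb_sync_single_for_device,
	.sync_sg_for_cpu	= swiotlb_sync_sg_for_cpu,
	.sync_sg_for_device	= swiotlb_sync_sg_for_device,
	.mapping_error		= swiotlb_dma_mapping_error,
};

and then in mem_encrypt_init():

	if (sev_active())
		dma_ops = &sev_dma_ops;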

-- 
Regards/Gruss,
    Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)