[PATCHv2 2/3] x86/tdx: Fix race between set_memory_encrypted() and load_unaligned_zeropad()

Posted by Kirill A. Shutemov 4 months ago
Touching a privately mapped GPA that has not been properly converted to
private with MapGPA and accepted leads to an unrecoverable exit to the
VMM.

load_unaligned_zeropad() can touch memory that is not owned by the
caller, but that just happens to be next after the owned memory. This
load_unaligned_zeropad() behaviour makes it important that, when the
kernel asks the VMM to convert a GPA from shared to private or back,
the kernel never has a page mapped into the direct mapping (and its
aliases) as private while the GPA is already converted to shared or is
not yet converted to private.
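
For illustration, a word-at-a-time caller looks roughly like the sketch
below. This is hypothetical code, not lifted from the kernel (the real
callers are e.g. in the VFS name hashing), and has_zero_byte() is a
made-up stand-in for the real word-at-a-time helpers:

	const unsigned long *word = (const unsigned long *)name;
	unsigned long c;

	for (;;) {
		/*
		 * The last iteration may read up to sizeof(long) - 1
		 * bytes past the end of 'name', i.e. into whatever page
		 * happens to follow it. A fault there is normally fixed
		 * up so the out-of-bounds bytes read as zeroes, but in a
		 * TDX guest a private page that was not converted with
		 * MapGPA and accepted causes an unrecoverable exit to
		 * the VMM instead.
		 */
		c = load_unaligned_zeropad(word);
		if (has_zero_byte(c))
			break;
		word++;
	}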

guest.enc_status_change_prepare() is called before adjusting the direct
mapping and is therefore responsible for converting the memory to
private.

guest.enc_status_change_finish() is called after adjusting the direct
mapping and converts the memory to shared.

It is okay to have a shared mapping of memory that is not converted
properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
stepping on it.
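
The race window is on the set_memory_encrypted()/set_memory_decrypted()
path, which roughly does the following. This is a heavily simplified
sketch with illustrative names (not the exact upstream code), and it
assumes the .prepare() hook can fail, returning bool to mirror
tdx_enc_status_change_prepare() below:

	static int set_memory_enc_dec_sketch(unsigned long addr,
					     int numpages, bool enc)
	{
		int ret;

		/*
		 * Shared->private conversion happens here, before the
		 * direct mapping starts mapping the range as private.
		 */
		if (!x86_platform.guest.enc_status_change_prepare(addr, numpages, enc))
			return -EIO;

		/*
		 * Flip the encryption bit in the direct mapping and do
		 * the required TLB/cache flushes (stand-in for the real
		 * CPA calls).
		 */
		ret = cpa_flip_enc_bit(addr, numpages, enc);

		/*
		 * Private->shared conversion happens here, after the
		 * direct mapping has stopped mapping the range as
		 * private.
		 */
		if (!ret && !x86_platform.guest.enc_status_change_finish(addr, numpages, enc))
			ret = -EIO;

		return ret;
	}

With the conversions split this way, a load_unaligned_zeropad() that
races with the direct mapping update either hits a page that is already
converted to private and accepted, or goes through a shared mapping,
which handle_mmio() knows how to deal with.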

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Fixes: 7dbde7631629 ("x86/mm/cpa: Add support for TDX shared memory")
Cc: stable@vger.kernel.org
---
 arch/x86/coco/tdx/tdx.c | 56 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 53 insertions(+), 3 deletions(-)

diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
index e146b599260f..59cc13e41aa6 100644
--- a/arch/x86/coco/tdx/tdx.c
+++ b/arch/x86/coco/tdx/tdx.c
@@ -840,6 +840,30 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
 	return true;
 }
 
+static bool tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
+					  bool enc)
+{
+	/*
+	 * Only handle shared->private conversion here.
+	 * See the comment in tdx_early_init().
+	 */
+	if (enc)
+		return tdx_enc_status_changed(vaddr, numpages, enc);
+	return true;
+}
+
+static bool tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
+					 bool enc)
+{
+	/*
+	 * Only handle private->shared conversion here.
+	 * See the comment in tdx_early_init().
+	 */
+	if (!enc)
+		return tdx_enc_status_changed(vaddr, numpages, enc);
+	return true;
+}
+
 void __init tdx_early_init(void)
 {
 	u64 cc_mask;
@@ -867,9 +891,35 @@ void __init tdx_early_init(void)
 	 */
 	physical_mask &= cc_mask - 1;
 
-	x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
-	x86_platform.guest.enc_tlb_flush_required   = tdx_tlb_flush_required;
-	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
+	/*
+	 * Touching privately mapped GPA that is not properly converted to
+	 * private with MapGPA and accepted leads to unrecoverable exit
+	 * to VMM.
+	 *
+	 * load_unaligned_zeropad() can touch memory that is not owned by
+	 * the caller, but just happened to next after the owned memory.
+	 * This load_unaligned_zeropad() behaviour makes it important when
+	 * kernel asks VMM to convert a GPA from shared to private or back.
+	 * Kernel must never have a page mapped into direct mapping (and
+	 * aliases) as private when the GPA is already converted to shared or
+	 * when GPA is not yet converted to private.
+	 *
+	 * guest.enc_status_change_prepare() called before adjusting direct
+	 * mapping and therefore it is responsible for converting the memory
+	 * to private.
+	 *
+	 * guest.enc_status_change_finish() called after adjusting direct
+	 * mapping and it converts the memory to shared.
+	 *
+	 * It is okay to have a shared mapping of memory that is not converted
+	 * properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
+	 * stepping on it.
+	 */
+	x86_platform.guest.enc_status_change_prepare = tdx_enc_status_change_prepare;
+	x86_platform.guest.enc_status_change_finish  = tdx_enc_status_change_finish;
+
+	x86_platform.guest.enc_cache_flush_required  = tdx_cache_flush_required;
+	x86_platform.guest.enc_tlb_flush_required    = tdx_tlb_flush_required;
 
 	pr_info("Guest detected\n");
 }
-- 
2.39.3
Re: [PATCHv2 2/3] x86/tdx: Fix race between set_memory_encrypted() and load_unaligned_zeropad()
Posted by Dave Hansen 3 months, 3 weeks ago
On 5/26/23 05:02, Kirill A. Shutemov wrote:
> Touching privately mapped GPA that is not properly converted to private
> with MapGPA and accepted leads to unrecoverable exit to VMM.
> 
> load_unaligned_zeropad() can touch memory that is not owned by the
> caller, but just happened to next after the owned memory.
> This load_unaligned_zeropad() behaviour makes it important when kernel
> asks VMM to convert a GPA from shared to private or back. Kernel must
> never have a page mapped into direct mapping (and aliases) as private
> when the GPA is already converted to shared or when GPA is not yet
> converted to private.
> 
> guest.enc_status_change_prepare() called before adjusting direct mapping
> and therefore it is responsible for converting the memory to private.
> 
> guest.enc_status_change_finish() called after adjusting direct mapping
> and it converts the memory to shared.
> 
> It is okay to have a shared mapping of memory that is not converted
> properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
> stepping on it.

Yeah, as others said, the changelog grammar here is a bit funky.  Can you
fix it up and resend, please?
Re: [PATCHv2 2/3] x86/tdx: Fix race between set_memory_encrypted() and load_unaligned_zeropad()
Posted by Sathyanarayanan Kuppuswamy 4 months ago

On 5/26/23 5:02 AM, Kirill A. Shutemov wrote:
> Touching privately mapped GPA that is not properly converted to private
> with MapGPA and accepted leads to unrecoverable exit to VMM.
> 
> load_unaligned_zeropad() can touch memory that is not owned by the
> caller, but just happened to next after the owned memory.

/s/to/to be ?

> This load_unaligned_zeropad() behaviour makes it important when kernel
> asks VMM to convert a GPA from shared to private or back. Kernel must
> never have a page mapped into direct mapping (and aliases) as private
> when the GPA is already converted to shared or when GPA is not yet
> converted to private.

I am wondering whether this issue exists in the AMD code?

IMO, you can add some info on the window in set_memory_encrypted()
where this race exists.

> 
> guest.enc_status_change_prepare() called before adjusting direct mapping
> and therefore it is responsible for converting the memory to private.
> 
> guest.enc_status_change_finish() called after adjusting direct mapping
> and it converts the memory to shared.
> 
> It is okay to have a shared mapping of memory that is not converted
> properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
> stepping on it.

> 
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Fixes: 7dbde7631629 ("x86/mm/cpa: Add support for TDX shared memory")
> Cc: stable@vger.kernel.org
> ---
>  arch/x86/coco/tdx/tdx.c | 56 ++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 53 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> index e146b599260f..59cc13e41aa6 100644
> --- a/arch/x86/coco/tdx/tdx.c
> +++ b/arch/x86/coco/tdx/tdx.c
> @@ -840,6 +840,30 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
>  	return true;
>  }
>  
> +static bool tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
> +					  bool enc)
> +{
> +	/*
> +	 * Only handle shared->private conversion here.
> +	 * See the comment in tdx_early_init().
> +	 */
> +	if (enc)
> +		return tdx_enc_status_changed(vaddr, numpages, enc);
> +	return true;
> +}
> +
> +static bool tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
> +					 bool enc)
> +{
> +	/*
> +	 * Only handle private->shared conversion here.
> +	 * See the comment in tdx_early_init().
> +	 */
> +	if (!enc)
> +		return tdx_enc_status_changed(vaddr, numpages, enc);
> +	return true;
> +}
> +
>  void __init tdx_early_init(void)
>  {
>  	u64 cc_mask;
> @@ -867,9 +891,35 @@ void __init tdx_early_init(void)
>  	 */
>  	physical_mask &= cc_mask - 1;
>  
> -	x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
> -	x86_platform.guest.enc_tlb_flush_required   = tdx_tlb_flush_required;

I think you don't need to change the order here.

> -	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
> +	/*
> +	 * Touching privately mapped GPA that is not properly converted to
> +	 * private with MapGPA and accepted leads to unrecoverable exit
> +	 * to VMM.
> +	 *
> +	 * load_unaligned_zeropad() can touch memory that is not owned by
> +	 * the caller, but just happened to next after the owned memory.
> +	 * This load_unaligned_zeropad() behaviour makes it important when
> +	 * kernel asks VMM to convert a GPA from shared to private or back.
> +	 * Kernel must never have a page mapped into direct mapping (and
> +	 * aliases) as private when the GPA is already converted to shared or
> +	 * when GPA is not yet converted to private.
> +	 *
> +	 * guest.enc_status_change_prepare() called before adjusting direct
> +	 * mapping and therefore it is responsible for converting the memory
> +	 * to private.
> +	 *
> +	 * guest.enc_status_change_finish() called after adjusting direct
> +	 * mapping and it converts the memory to shared.
> +	 *
> +	 * It is okay to have a shared mapping of memory that is not converted
> +	 * properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
> +	 * stepping on it.
> +	 */
> +	x86_platform.guest.enc_status_change_prepare = tdx_enc_status_change_prepare;
> +	x86_platform.guest.enc_status_change_finish  = tdx_enc_status_change_finish;
> +
> +	x86_platform.guest.enc_cache_flush_required  = tdx_cache_flush_required;
> +	x86_platform.guest.enc_tlb_flush_required    = tdx_tlb_flush_required;
>  
>  	pr_info("Guest detected\n");
>  }

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer
Re: [PATCHv2 2/3] x86/tdx: Fix race between set_memory_encrypted() and load_unaligned_zeropad()
Posted by Kirill A. Shutemov 4 months ago
On Fri, May 26, 2023 at 03:10:56PM -0700, Sathyanarayanan Kuppuswamy wrote:
> 
> 
> On 5/26/23 5:02 AM, Kirill A. Shutemov wrote:
> > Touching privately mapped GPA that is not properly converted to private
> > with MapGPA and accepted leads to unrecoverable exit to VMM.
> > 
> > load_unaligned_zeropad() can touch memory that is not owned by the
> > caller, but just happened to next after the owned memory.
> 
> /s/to/to be ?

Yep, my bad.

> > This load_unaligned_zeropad() behaviour makes it important when kernel
> > asks VMM to convert a GPA from shared to private or back. Kernel must
> > never have a page mapped into direct mapping (and aliases) as private
> > when the GPA is already converted to shared or when GPA is not yet
> > converted to private.
> 
> I am wondering whether this issue exist in the AMD code? 
> 
> IMO, you can add some info on the window in set_memory_encrypted()
> where this race exists.

I don't think AMD is affected by load_unaligned_zeropad() the same way
Intel is. But I'm not sure.

Tom, do you have any comments?

> 
> > 
> > guest.enc_status_change_prepare() called before adjusting direct mapping
> > and therefore it is responsible for converting the memory to private.
> > 
> > guest.enc_status_change_finish() called after adjusting direct mapping
> > and it converts the memory to shared.
> > 
> > It is okay to have a shared mapping of memory that is not converted
> > properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
> > stepping on it.
> 
> > 
> > Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > Fixes: 7dbde7631629 ("x86/mm/cpa: Add support for TDX shared memory")
> > Cc: stable@vger.kernel.org
> > ---
> >  arch/x86/coco/tdx/tdx.c | 56 ++++++++++++++++++++++++++++++++++++++---
> >  1 file changed, 53 insertions(+), 3 deletions(-)
> > 
> > diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
> > index e146b599260f..59cc13e41aa6 100644
> > --- a/arch/x86/coco/tdx/tdx.c
> > +++ b/arch/x86/coco/tdx/tdx.c
> > @@ -840,6 +840,30 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
> >  	return true;
> >  }
> >  
> > +static bool tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
> > +					  bool enc)
> > +{
> > +	/*
> > +	 * Only handle shared->private conversion here.
> > +	 * See the comment in tdx_early_init().
> > +	 */
> > +	if (enc)
> > +		return tdx_enc_status_changed(vaddr, numpages, enc);
> > +	return true;
> > +}
> > +
> > +static bool tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
> > +					 bool enc)
> > +{
> > +	/*
> > +	 * Only handle private->shared conversion here.
> > +	 * See the comment in tdx_early_init().
> > +	 */
> > +	if (!enc)
> > +		return tdx_enc_status_changed(vaddr, numpages, enc);
> > +	return true;
> > +}
> > +
> >  void __init tdx_early_init(void)
> >  {
> >  	u64 cc_mask;
> > @@ -867,9 +891,35 @@ void __init tdx_early_init(void)
> >  	 */
> >  	physical_mask &= cc_mask - 1;
> >  
> > -	x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
> > -	x86_platform.guest.enc_tlb_flush_required   = tdx_tlb_flush_required;
> 
> I think you don't need to change the order here.

I wanted to emphasise that the comment is for _prepare/_finish callbacks
and I hoped re-order would help with this.

> > -	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
> > +	/*
> > +	 * Touching privately mapped GPA that is not properly converted to
> > +	 * private with MapGPA and accepted leads to unrecoverable exit
> > +	 * to VMM.
> > +	 *
> > +	 * load_unaligned_zeropad() can touch memory that is not owned by
> > +	 * the caller, but just happened to next after the owned memory.
> > +	 * This load_unaligned_zeropad() behaviour makes it important when
> > +	 * kernel asks VMM to convert a GPA from shared to private or back.
> > +	 * Kernel must never have a page mapped into direct mapping (and
> > +	 * aliases) as private when the GPA is already converted to shared or
> > +	 * when GPA is not yet converted to private.
> > +	 *
> > +	 * guest.enc_status_change_prepare() called before adjusting direct
> > +	 * mapping and therefore it is responsible for converting the memory
> > +	 * to private.
> > +	 *
> > +	 * guest.enc_status_change_finish() called after adjusting direct
> > +	 * mapping and it converts the memory to shared.
> > +	 *
> > +	 * It is okay to have a shared mapping of memory that is not converted
> > +	 * properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
> > +	 * stepping on it.
> > +	 */
> > +	x86_platform.guest.enc_status_change_prepare = tdx_enc_status_change_prepare;
> > +	x86_platform.guest.enc_status_change_finish  = tdx_enc_status_change_finish;
> > +
> > +	x86_platform.guest.enc_cache_flush_required  = tdx_cache_flush_required;
> > +	x86_platform.guest.enc_tlb_flush_required    = tdx_tlb_flush_required;
> >  
> >  	pr_info("Guest detected\n");
> >  }
> 
> -- 
> Sathyanarayanan Kuppuswamy
> Linux Kernel Developer

-- 
  Kiryl Shutsemau / Kirill A. Shutemov
Re: [PATCHv2 2/3] x86/tdx: Fix race between set_memory_encrypted() and load_unaligned_zeropad()
Posted by Tom Lendacky 4 months ago
On 5/29/23 19:57, Kirill A. Shutemov wrote:
> On Fri, May 26, 2023 at 03:10:56PM -0700, Sathyanarayanan Kuppuswamy wrote:
>>
>>
>> On 5/26/23 5:02 AM, Kirill A. Shutemov wrote:
>>> Touching privately mapped GPA that is not properly converted to private
>>> with MapGPA and accepted leads to unrecoverable exit to VMM.
>>>
>>> load_unaligned_zeropad() can touch memory that is not owned by the
>>> caller, but just happened to next after the owned memory.
>>
>> /s/to/to be ?
> 
> Yep, my bad.
> 
>>> This load_unaligned_zeropad() behaviour makes it important when kernel
>>> asks VMM to convert a GPA from shared to private or back. Kernel must
>>> never have a page mapped into direct mapping (and aliases) as private
>>> when the GPA is already converted to shared or when GPA is not yet
>>> converted to private.
>>
>> I am wondering whether this issue exist in the AMD code?
>>
>> IMO, you can add some info on the window in set_memory_encrypted()
>> where this race exists.
> 
> I don't think AMD affected by load_unaligned_zeropad() the same way as
> Intel does. But I'm not sure.
> 
> Tom, do you have any comments?

Right, shouldn't be an issue for SNP.

Thanks,
Tom

> 
>>
>>>
>>> guest.enc_status_change_prepare() called before adjusting direct mapping
>>> and therefore it is responsible for converting the memory to private.
>>>
>>> guest.enc_status_change_finish() called after adjusting direct mapping
>>> and it converts the memory to shared.
>>>
>>> It is okay to have a shared mapping of memory that is not converted
>>> properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
>>> stepping on it.
>>
>>>
>>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>>> Fixes: 7dbde7631629 ("x86/mm/cpa: Add support for TDX shared memory")
>>> Cc: stable@vger.kernel.org
>>> ---
>>>   arch/x86/coco/tdx/tdx.c | 56 ++++++++++++++++++++++++++++++++++++++---
>>>   1 file changed, 53 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
>>> index e146b599260f..59cc13e41aa6 100644
>>> --- a/arch/x86/coco/tdx/tdx.c
>>> +++ b/arch/x86/coco/tdx/tdx.c
>>> @@ -840,6 +840,30 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
>>>   	return true;
>>>   }
>>>   
>>> +static bool tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
>>> +					  bool enc)
>>> +{
>>> +	/*
>>> +	 * Only handle shared->private conversion here.
>>> +	 * See the comment in tdx_early_init().
>>> +	 */
>>> +	if (enc)
>>> +		return tdx_enc_status_changed(vaddr, numpages, enc);
>>> +	return true;
>>> +}
>>> +
>>> +static bool tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
>>> +					 bool enc)
>>> +{
>>> +	/*
>>> +	 * Only handle private->shared conversion here.
>>> +	 * See the comment in tdx_early_init().
>>> +	 */
>>> +	if (!enc)
>>> +		return tdx_enc_status_changed(vaddr, numpages, enc);
>>> +	return true;
>>> +}
>>> +
>>>   void __init tdx_early_init(void)
>>>   {
>>>   	u64 cc_mask;
>>> @@ -867,9 +891,35 @@ void __init tdx_early_init(void)
>>>   	 */
>>>   	physical_mask &= cc_mask - 1;
>>>   
>>> -	x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
>>> -	x86_platform.guest.enc_tlb_flush_required   = tdx_tlb_flush_required;
>>
>> I think you don't need to change the order here.
> 
> I wanted to emphasise that the comment is for _prepare/_finish callbacks
> and I hoped re-order would help with this.
> 
>>> -	x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
>>> +	/*
>>> +	 * Touching privately mapped GPA that is not properly converted to
>>> +	 * private with MapGPA and accepted leads to unrecoverable exit
>>> +	 * to VMM.
>>> +	 *
>>> +	 * load_unaligned_zeropad() can touch memory that is not owned by
>>> +	 * the caller, but just happened to next after the owned memory.
>>> +	 * This load_unaligned_zeropad() behaviour makes it important when
>>> +	 * kernel asks VMM to convert a GPA from shared to private or back.
>>> +	 * Kernel must never have a page mapped into direct mapping (and
>>> +	 * aliases) as private when the GPA is already converted to shared or
>>> +	 * when GPA is not yet converted to private.
>>> +	 *
>>> +	 * guest.enc_status_change_prepare() called before adjusting direct
>>> +	 * mapping and therefore it is responsible for converting the memory
>>> +	 * to private.
>>> +	 *
>>> +	 * guest.enc_status_change_finish() called after adjusting direct
>>> +	 * mapping and it converts the memory to shared.
>>> +	 *
>>> +	 * It is okay to have a shared mapping of memory that is not converted
>>> +	 * properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
>>> +	 * stepping on it.
>>> +	 */
>>> +	x86_platform.guest.enc_status_change_prepare = tdx_enc_status_change_prepare;
>>> +	x86_platform.guest.enc_status_change_finish  = tdx_enc_status_change_finish;
>>> +
>>> +	x86_platform.guest.enc_cache_flush_required  = tdx_cache_flush_required;
>>> +	x86_platform.guest.enc_tlb_flush_required    = tdx_tlb_flush_required;
>>>   
>>>   	pr_info("Guest detected\n");
>>>   }
>>
>> -- 
>> Sathyanarayanan Kuppuswamy
>> Linux Kernel Developer
>
Re: [PATCHv2 2/3] x86/tdx: Fix race between set_memory_encrypted() and load_unaligned_zeropad()
Posted by Sathyanarayanan Kuppuswamy 4 months ago
Hi,

On 5/30/23 5:57 AM, Tom Lendacky wrote:
> On 5/29/23 19:57, Kirill A. Shutemov wrote:
>> On Fri, May 26, 2023 at 03:10:56PM -0700, Sathyanarayanan Kuppuswamy wrote:
>>>
>>>
>>> On 5/26/23 5:02 AM, Kirill A. Shutemov wrote:
>>>> Touching privately mapped GPA that is not properly converted to private
>>>> with MapGPA and accepted leads to unrecoverable exit to VMM.
>>>>
>>>> load_unaligned_zeropad() can touch memory that is not owned by the
>>>> caller, but just happened to next after the owned memory.
>>>
>>> /s/to/to be ?
>>
>> Yep, my bad.
>>
>>>> This load_unaligned_zeropad() behaviour makes it important when kernel
>>>> asks VMM to convert a GPA from shared to private or back. Kernel must
>>>> never have a page mapped into direct mapping (and aliases) as private
>>>> when the GPA is already converted to shared or when GPA is not yet
>>>> converted to private.
>>>
>>> I am wondering whether this issue exist in the AMD code?
>>>
>>> IMO, you can add some info on the window in set_memory_encrypted()
>>> where this race exists.
>>
>> I don't think AMD affected by load_unaligned_zeropad() the same way as
>> Intel does. But I'm not sure.
>>
>> Tom, do you have any comments?
> 
> Right, shouldn't be an issue for SNP.

Thanks for confirming.

> 
> Thanks,
> Tom
> 
>>
>>>
>>>>
>>>> guest.enc_status_change_prepare() called before adjusting direct mapping
>>>> and therefore it is responsible for converting the memory to private.
>>>>
>>>> guest.enc_status_change_finish() called after adjusting direct mapping
>>>> and it converts the memory to shared.
>>>>
>>>> It is okay to have a shared mapping of memory that is not converted
>>>> properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
>>>> stepping on it.
>>>
>>>>

Rest looks good to me.

Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>

>>>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>>>> Fixes: 7dbde7631629 ("x86/mm/cpa: Add support for TDX shared memory")
>>>> Cc: stable@vger.kernel.org
>>>> ---
>>>>   arch/x86/coco/tdx/tdx.c | 56 ++++++++++++++++++++++++++++++++++++++---
>>>>   1 file changed, 53 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
>>>> index e146b599260f..59cc13e41aa6 100644
>>>> --- a/arch/x86/coco/tdx/tdx.c
>>>> +++ b/arch/x86/coco/tdx/tdx.c
>>>> @@ -840,6 +840,30 @@ static bool tdx_enc_status_changed(unsigned long vaddr, int numpages, bool enc)
>>>>       return true;
>>>>   }
>>>>   +static bool tdx_enc_status_change_prepare(unsigned long vaddr, int numpages,
>>>> +                      bool enc)
>>>> +{
>>>> +    /*
>>>> +     * Only handle shared->private conversion here.
>>>> +     * See the comment in tdx_early_init().
>>>> +     */
>>>> +    if (enc)
>>>> +        return tdx_enc_status_changed(vaddr, numpages, enc);
>>>> +    return true;
>>>> +}
>>>> +
>>>> +static bool tdx_enc_status_change_finish(unsigned long vaddr, int numpages,
>>>> +                     bool enc)
>>>> +{
>>>> +    /*
>>>> +     * Only handle private->shared conversion here.
>>>> +     * See the comment in tdx_early_init().
>>>> +     */
>>>> +    if (!enc)
>>>> +        return tdx_enc_status_changed(vaddr, numpages, enc);
>>>> +    return true;
>>>> +}
>>>> +
>>>>   void __init tdx_early_init(void)
>>>>   {
>>>>       u64 cc_mask;
>>>> @@ -867,9 +891,35 @@ void __init tdx_early_init(void)
>>>>        */
>>>>       physical_mask &= cc_mask - 1;
>>>>   -    x86_platform.guest.enc_cache_flush_required = tdx_cache_flush_required;
>>>> -    x86_platform.guest.enc_tlb_flush_required   = tdx_tlb_flush_required;
>>>
>>> I think you don't need to change the order here.
>>
>> I wanted to emphasise that the comment is for _prepare/_finish callbacks
>> and I hoped re-order would help with this.
>>
>>>> -    x86_platform.guest.enc_status_change_finish = tdx_enc_status_changed;
>>>> +    /*
>>>> +     * Touching privately mapped GPA that is not properly converted to
>>>> +     * private with MapGPA and accepted leads to unrecoverable exit
>>>> +     * to VMM.
>>>> +     *
>>>> +     * load_unaligned_zeropad() can touch memory that is not owned by
>>>> +     * the caller, but just happened to next after the owned memory.
>>>> +     * This load_unaligned_zeropad() behaviour makes it important when
>>>> +     * kernel asks VMM to convert a GPA from shared to private or back.
>>>> +     * Kernel must never have a page mapped into direct mapping (and
>>>> +     * aliases) as private when the GPA is already converted to shared or
>>>> +     * when GPA is not yet converted to private.
>>>> +     *
>>>> +     * guest.enc_status_change_prepare() called before adjusting direct
>>>> +     * mapping and therefore it is responsible for converting the memory
>>>> +     * to private.
>>>> +     *
>>>> +     * guest.enc_status_change_finish() called after adjusting direct
>>>> +     * mapping and it converts the memory to shared.
>>>> +     *
>>>> +     * It is okay to have a shared mapping of memory that is not converted
>>>> +     * properly. handle_mmio() knows how to deal with load_unaligned_zeropad()
>>>> +     * stepping on it.
>>>> +     */
>>>> +    x86_platform.guest.enc_status_change_prepare = tdx_enc_status_change_prepare;
>>>> +    x86_platform.guest.enc_status_change_finish  = tdx_enc_status_change_finish;
>>>> +
>>>> +    x86_platform.guest.enc_cache_flush_required  = tdx_cache_flush_required;
>>>> +    x86_platform.guest.enc_tlb_flush_required    = tdx_tlb_flush_required;
>>>>         pr_info("Guest detected\n");
>>>>   }
>>>
>>> -- 
>>> Sathyanarayanan Kuppuswamy
>>> Linux Kernel Developer
>>

-- 
Sathyanarayanan Kuppuswamy
Linux Kernel Developer