commit d77e59a8fccd ("arm64: mte: Lock a page for MTE tag
initialisation") updated the locking such that the kernel now allows
VM_SHARED mappings with MTE. Update the code comment to reflect this.
Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
---
arch/arm64/kvm/mmu.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a509b63bd4dd..b5824e93cee0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1390,11 +1390,8 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
* able to see the page's tags and therefore they must be initialised first. If
* PG_mte_tagged is set, tags have already been initialised.
*
- * The race in the test/set of the PG_mte_tagged flag is handled by:
- * - preventing VM_SHARED mappings in a memslot with MTE preventing two VMs
- * racing to santise the same page
- * - mmap_lock protects between a VM faulting a page in and the VMM performing
- * an mprotect() to add VM_MTE
+ * The race in the test/set of the PG_mte_tagged flag is handled by
+ * using PG_mte_lock and PG_mte_tagged together.
*/
static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
unsigned long size)
@@ -1646,7 +1643,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
}
if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
- /* Check the VMM hasn't introduced a new disallowed VMA */
+ /*
+ * not a permission fault implies a translation fault which
+ * means mapping the page for the first time
+ */
if (mte_allowed) {
sanitise_mte_tags(kvm, pfn, vma_pagesize);
} else {
--
2.43.0
On Mon, 28 Oct 2024 09:40:13 +0000,
"Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org> wrote:
>
> commit d77e59a8fccd ("arm64: mte: Lock a page for MTE tag
> initialisation") updated the locking such the kernel now allows
> VM_SHARED mapping with MTE. Update the code comment to reflect this.
>
> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
> ---
>  arch/arm64/kvm/mmu.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)

This is a KVM patch. Please make sure you write the subject
accordingly, matching the existing conventions (in this case, this
should read something like: "KVM: arm64: MTE: Update...").

>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index a509b63bd4dd..b5824e93cee0 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1390,11 +1390,8 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
>  * able to see the page's tags and therefore they must be initialised first. If
>  * PG_mte_tagged is set, tags have already been initialised.
>  *
> - * The race in the test/set of the PG_mte_tagged flag is handled by:
> - * - preventing VM_SHARED mappings in a memslot with MTE preventing two VMs
> - *   racing to santise the same page
> - * - mmap_lock protects between a VM faulting a page in and the VMM performing
> - *   an mprotect() to add VM_MTE
> + * The race in the test/set of the PG_mte_tagged flag is handled by
> + * using PG_mte_lock and PG_mte_tagged together.

How? This comment is pretty content-free. To be useful, you should
elaborate on *how* these two are used together.

>  */
>  static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>  			      unsigned long size)
> @@ -1646,7 +1643,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	}
>
>  	if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
> -		/* Check the VMM hasn't introduced a new disallowed VMA */
> +		/*
> +		 * not a permission fault implies a translation fault which
> +		 * means mapping the page for the first time

How about an Access fault due to page ageing?

Thanks,

	M.

--
Without deviation from the norm, progress is not possible.
Hi Marc,

Thanks for reviewing the changes.

Marc Zyngier <maz@kernel.org> writes:

> On Mon, 28 Oct 2024 09:40:13 +0000,
> "Aneesh Kumar K.V (Arm)" <aneesh.kumar@kernel.org> wrote:
>>
>> commit d77e59a8fccd ("arm64: mte: Lock a page for MTE tag
>> initialisation") updated the locking such the kernel now allows
>> VM_SHARED mapping with MTE. Update the code comment to reflect this.
>>
>> Signed-off-by: Aneesh Kumar K.V (Arm) <aneesh.kumar@kernel.org>
>> ---
>>  arch/arm64/kvm/mmu.c | 12 ++++++------
>>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> This is a KVM patch. Please make sure you write the subject
> accordingly, matching the existing conventions (in this case, this
> should read something like: "KVM: arm64: MTE: Update...").
>

Will update.

>
>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>> index a509b63bd4dd..b5824e93cee0 100644
>> --- a/arch/arm64/kvm/mmu.c
>> +++ b/arch/arm64/kvm/mmu.c
>> @@ -1390,11 +1390,8 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
>>  * able to see the page's tags and therefore they must be initialised first. If
>>  * PG_mte_tagged is set, tags have already been initialised.
>>  *
>> - * The race in the test/set of the PG_mte_tagged flag is handled by:
>> - * - preventing VM_SHARED mappings in a memslot with MTE preventing two VMs
>> - *   racing to santise the same page
>> - * - mmap_lock protects between a VM faulting a page in and the VMM performing
>> - *   an mprotect() to add VM_MTE
>> + * The race in the test/set of the PG_mte_tagged flag is handled by
>> + * using PG_mte_lock and PG_mte_tagged together.
>
> How? This comment is pretty content-free. To be useful, you should
> elaborate on *how* these two are used together.
>

I will add more details described in commit
d77e59a8fccde7fb5dd8c57594ed147b4291c970. Should I quote the commit
there in the comment?

>
>>  */
>>  static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
>>  			      unsigned long size)
>> @@ -1646,7 +1643,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>  	}
>>
>>  	if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
>> -		/* Check the VMM hasn't introduced a new disallowed VMA */
>> +		/*
>> +		 * not a permission fault implies a translation fault which
>> +		 * means mapping the page for the first time
>
> How about an Access fault due to page ageing?
>

IIUC, an access fault is already handled by the caller
kvm_handle_guest_abort(). I can add that as part of the updated
comments.

-aneesh
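On the access-fault question, a simplified and partly hypothetical sketch of the call-site ordering in kvm_handle_guest_abort() may help. handle_access_fault() and user_mem_abort() are real functions in arch/arm64/kvm/mmu.c, but the predicate name below is illustrative only and the exact code differs between kernel versions:

        /*
         * Illustrative only: access-flag (page ageing) faults are
         * resolved directly by the abort handler, so user_mem_abort()
         * is only reached for translation and permission faults.
         */
        if (fault_is_access_flag(esr)) {        /* illustrative predicate name */
                handle_access_fault(vcpu, fault_ipa);
                return 1;
        }

        ret = user_mem_abort(vcpu, fault_ipa, memslot, hva, fault_is_perm);

If that ordering holds, a comment in user_mem_abort() only needs to distinguish translation from permission faults, which is the point being debated below.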
On Mon, 28 Oct 2024 12:47:30 +0000,
Aneesh Kumar K.V <aneesh.kumar@kernel.org> wrote:
>
> >> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> >> index a509b63bd4dd..b5824e93cee0 100644
> >> --- a/arch/arm64/kvm/mmu.c
> >> +++ b/arch/arm64/kvm/mmu.c
> >> @@ -1390,11 +1390,8 @@ static int get_vma_page_shift(struct vm_area_struct *vma, unsigned long hva)
> >>  * able to see the page's tags and therefore they must be initialised first. If
> >>  * PG_mte_tagged is set, tags have already been initialised.
> >>  *
> >> - * The race in the test/set of the PG_mte_tagged flag is handled by:
> >> - * - preventing VM_SHARED mappings in a memslot with MTE preventing two VMs
> >> - *   racing to santise the same page
> >> - * - mmap_lock protects between a VM faulting a page in and the VMM performing
> >> - *   an mprotect() to add VM_MTE
> >> + * The race in the test/set of the PG_mte_tagged flag is handled by
> >> + * using PG_mte_lock and PG_mte_tagged together.
> >
> > How? This comment is pretty content-free. To be useful, you should
> > elaborate on *how* these two are used together.
> >
>
> I will add more details described in commit d77e59a8fccde7fb5dd8c57594ed147b4291c970.
> Should I quote the commit there in the comment?

The commit is not relevant. What is important is an indication of how
the race is resolved, if that's important. A reference to
try_page_mte_tagging() would probably be the right thing to do.

>
> >
> >>  */
> >>  static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
> >>  			      unsigned long size)
> >> @@ -1646,7 +1643,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> >>  	}
> >>
> >>  	if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
> >> -		/* Check the VMM hasn't introduced a new disallowed VMA */
> >> +		/*
> >> +		 * not a permission fault implies a translation fault which
> >> +		 * means mapping the page for the first time
> >
> > How about an Access fault due to page ageing?
> >
>
> IIUC, an access fault is already handled by the caller kvm_handle_guest_abort().
> I can add that as part of the updated comments.

Maybe. The thing is, you are removing a pretty essential comment for
no good reason, and now there is no rationale left behind the -EFAULT
that is returned.

	M.

--
Without deviation from the norm, progress is not possible.
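For readers following the thread, here is a rough sketch of the mechanism being discussed, assuming the behaviour introduced by commit d77e59a8fccd. The helper names (try_page_mte_tagging(), set_page_mte_tagged(), sanitise_mte_tags()) are the real kernel ones, but the bodies below are simplified for illustration and are not quoted from mainline:

/*
 * Sketch only: PG_mte_lock acts as a per-page "initialiser" bit,
 * PG_mte_tagged publishes that tag initialisation has completed.
 */
static inline bool try_page_mte_tagging(struct page *page)
{
        /* The first caller to win PG_mte_lock must initialise the tags. */
        if (!test_and_set_bit(PG_mte_lock, &page->flags))
                return true;

        /*
         * Losers wait until the winner sets PG_mte_tagged, after which
         * the tags are guaranteed to be initialised.
         */
        smp_cond_load_acquire(&page->flags, VAL & (1UL << PG_mte_tagged));
        return false;
}

/* How KVM uses it when a page is first mapped at stage-2 with MTE: */
static void sanitise_mte_tags(struct kvm *kvm, kvm_pfn_t pfn,
                              unsigned long size)
{
        unsigned long i, nr_pages = size >> PAGE_SHIFT;
        struct page *page = pfn_to_page(pfn);

        if (!kvm_has_mte(kvm))
                return;

        for (i = 0; i < nr_pages; i++, page++) {
                if (try_page_mte_tagging(page)) {
                        mte_clear_page_tags(page_address(page));
                        set_page_mte_tagged(page);
                }
        }
}

In effect this is a per-page bit lock plus a completion flag, which is why the old restriction on VM_SHARED mappings is no longer needed: two mappers racing on the same page simply serialise on PG_mte_lock.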