From: Yi-De Wu
To: Yingshiuan Pan, Ze-Yu Wang, Yi-De Wu, Rob Herring, Krzysztof Kozlowski, Conor Dooley, Jonathan Corbet, Catalin Marinas, Will Deacon, Matthias Brugger, AngeloGioacchino Del Regno
CC: Arnd Bergmann, David Bradil, Trilok Soni, Jade Shih, Ivan Tseng, My Chuang, Shawn Hsiao, PeiLun Suei, Liju Chen, Willix Yeh, Kevenny Hsieh
Subject: [PATCH v7 15/16] virt: geniezone: Add memory pin/unpin support
Date: Thu, 16 Nov 2023 23:27:55 +0800
Message-ID: <20231116152756.4250-16-yi-de.wu@mediatek.com>
In-Reply-To: <20231116152756.4250-1-yi-de.wu@mediatek.com>
References: <20231116152756.4250-1-yi-de.wu@mediatek.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Jerry Wang

A protected VM's memory cannot be swapped out, because its pages are
protected from host access.

If the host touches a protected page, a hardware exception is raised
and may crash the host. We therefore have to make protected pages
ineligible for swapping or merging by the host kernel. To do so, we pin
each page when it is assigned (donated) to the VM and unpin it when the
VM relinquishes the page or is destroyed. In addition, a protected VM's
memory must be cleared by the hypervisor before it is returned to the
host; if the VMM frees that memory before it has been cleared, the
pages could be reclaimed and reused while they still hold guest data.
Pinning and unpinning the pages avoids this problem as well.

The implementation is as follows:
- Use an rb-tree to track the pinned memory pages.
- Pin a page when handling the demand-paging fault for it.
- Unpin the pages when the VM relinquishes them or is destroyed.

Signed-off-by: Jerry Wang
Signed-off-by: Yingshiuan Pan
Signed-off-by: Yi-De Wu
Signed-off-by: Liju Chen
---
 drivers/virt/geniezone/gzvm_mmu.c | 60 ++++++++++++++++++++++++++++++-
 drivers/virt/geniezone/gzvm_vm.c  | 18 ++++++++++
 include/linux/gzvm_drv.h          | 10 ++++++
 3 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/drivers/virt/geniezone/gzvm_mmu.c b/drivers/virt/geniezone/gzvm_mmu.c
index 77104df4afe5..0acef342e1f0 100644
--- a/drivers/virt/geniezone/gzvm_mmu.c
+++ b/drivers/virt/geniezone/gzvm_mmu.c
@@ -107,8 +107,59 @@ int gzvm_gfn_to_pfn_memslot(struct gzvm_memslot *memslot, u64 gfn,
 	return 0;
 }
 
+static int cmp_ppages(struct rb_node *node, const struct rb_node *parent)
+{
+	struct gzvm_pinned_page *a = container_of(node,
+						  struct gzvm_pinned_page,
+						  node);
+	struct gzvm_pinned_page *b = container_of(parent,
+						  struct gzvm_pinned_page,
+						  node);
+
+	if (a->ipa < b->ipa)
+		return -1;
+	if (a->ipa > b->ipa)
+		return 1;
+	return 0;
+}
+
+static int gzvm_insert_ppage(struct gzvm *vm, struct gzvm_pinned_page *ppage)
+{
+	if (rb_find_add(&ppage->node, &vm->pinned_pages, cmp_ppages))
+		return -EEXIST;
+	return 0;
+}
+
+static int pin_one_page(struct gzvm *vm, unsigned long hva, u64 gpa)
+{
+	unsigned int flags = FOLL_HWPOISON | FOLL_LONGTERM | FOLL_WRITE;
+	struct gzvm_pinned_page *ppage = NULL;
+	struct mm_struct *mm = current->mm;
+	struct page *page = NULL;
+
+	ppage = kmalloc(sizeof(*ppage), GFP_KERNEL_ACCOUNT);
+	if (!ppage)
+		return -ENOMEM;
+
+	mmap_read_lock(mm);
+	pin_user_pages(hva, 1, flags, &page);
+	mmap_read_unlock(mm);
+
+	if (!page) {
+		kfree(ppage);
+		return -EFAULT;
+	}
+
+	ppage->page = page;
+	ppage->ipa = gpa;
+	gzvm_insert_ppage(vm, ppage);
+
+	return 0;
+}
+
 static int handle_block_demand_page(struct gzvm *vm, int memslot_id, u64 gfn)
 {
+	unsigned long hva;
 	u64 pfn, __gfn;
 	int ret, i;
 
@@ -131,6 +182,11 @@ static int handle_block_demand_page(struct gzvm *vm, int memslot_id, u64 gfn)
 			goto err_unlock;
 		}
 		vm->demand_page_buffer[i] = pfn;
+
+		hva = gzvm_gfn_to_hva_memslot(&vm->memslot[memslot_id], __gfn);
+		ret = pin_one_page(vm, hva, PFN_PHYS(__gfn));
+		if (ret)
+			goto err_unlock;
 	}
 
 	ret = gzvm_arch_map_guest_block(vm->vm_id, memslot_id, start_gfn,
@@ -148,6 +204,7 @@ static int handle_block_demand_page(struct gzvm *vm, int memslot_id, u64 gfn)
 
 static int handle_single_demand_page(struct gzvm *vm, int memslot_id, u64 gfn)
 {
+	unsigned long hva;
 	int ret;
 	u64 pfn;
 
@@ -159,7 +216,8 @@ static int handle_single_demand_page(struct gzvm *vm, int memslot_id, u64 gfn)
 	if (unlikely(ret))
 		return -EFAULT;
 
-	return 0;
+	hva = gzvm_gfn_to_hva_memslot(&vm->memslot[memslot_id], gfn);
+	return pin_one_page(vm, hva, PFN_PHYS(gfn));
 }
 
 /**
diff --git a/drivers/virt/geniezone/gzvm_vm.c b/drivers/virt/geniezone/gzvm_vm.c
index 485d1e2097aa..a7d43bedfad0 100644
--- a/drivers/virt/geniezone/gzvm_vm.c
+++ b/drivers/virt/geniezone/gzvm_vm.c
@@ -292,6 +292,21 @@ static long gzvm_vm_ioctl(struct file *filp, unsigned int ioctl,
 	return ret;
 }
 
+static void gzvm_destroy_ppage(struct gzvm *gzvm)
+{
+	struct gzvm_pinned_page *ppage;
+	struct rb_node *node;
+
+	node = rb_first(&gzvm->pinned_pages);
+	while (node) {
+		ppage = rb_entry(node, struct gzvm_pinned_page, node);
+		unpin_user_pages_dirty_lock(&ppage->page, 1, true);
+		node = rb_next(node);
+		rb_erase(&ppage->node, &gzvm->pinned_pages);
+		kfree(ppage);
+	}
+}
+
 static void gzvm_destroy_vm(struct gzvm *gzvm)
 {
 	size_t allocated_size;
@@ -315,6 +330,8 @@ static void gzvm_destroy_vm(struct gzvm *gzvm)
 
 	mutex_unlock(&gzvm->lock);
 
+	gzvm_destroy_ppage(gzvm);
+
 	kfree(gzvm);
 }
 
@@ -390,6 +407,7 @@ static struct gzvm *gzvm_create_vm(unsigned long vm_type)
 	gzvm->vm_id = ret;
 	gzvm->mm = current->mm;
 	mutex_init(&gzvm->lock);
+	gzvm->pinned_pages = RB_ROOT;
 
 	ret = gzvm_vm_irqfd_init(gzvm);
 	if (ret) {
diff --git a/include/linux/gzvm_drv.h b/include/linux/gzvm_drv.h
index 0a8b21e1ed1a..bb470a9f92e4 100644
--- a/include/linux/gzvm_drv.h
+++ b/include/linux/gzvm_drv.h
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 
 /*
  * For the normal physical address, the highest 12 bits should be zero, so we
@@ -82,6 +83,12 @@ struct gzvm_vcpu {
 	struct gzvm_vcpu_hwstate *hwstate;
 };
 
+struct gzvm_pinned_page {
+	struct rb_node node;
+	struct page *page;
+	u64 ipa;
+};
+
 struct gzvm {
 	struct gzvm_vcpu *vcpus[GZVM_MAX_VCPUS];
 	/* userspace tied to this vm */
@@ -121,6 +128,9 @@ struct gzvm {
 	 * at the same time
 	 */
 	struct mutex demand_paging_lock;
+
+	/* Use rb-tree to record pin/unpin page */
+	struct rb_root pinned_pages;
 };
 
 long gzvm_dev_ioctl_check_extension(struct gzvm *gzvm, unsigned long args);
-- 
2.18.0
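
To see the whole flow in one place, below is a condensed, illustrative sketch
of the pin/unpin bookkeeping this patch adds: an rb-tree keyed by guest IPA, a
pin helper used on the demand-paging path, and a teardown walk that dirties and
unpins every tracked page. The names pinned_page, pin_guest_page and
unpin_all_guest_pages are simplified stand-ins rather than the driver's actual
symbols, and the duplicate-IPA handling in pin_guest_page is a defensive
addition for the standalone sketch; refer to the hunks above for the real code.

/*
 * Illustrative sketch only: condensed from the patch above with
 * simplified, hypothetical names. Not the GenieZone driver code.
 */
#include <linux/mm.h>
#include <linux/rbtree.h>
#include <linux/sched.h>
#include <linux/slab.h>

struct pinned_page {
	struct rb_node node;	/* keyed by guest IPA */
	struct page *page;	/* long-term pinned host page */
	u64 ipa;
};

static struct rb_root pinned_pages = RB_ROOT;

static int pinned_page_cmp(struct rb_node *node, const struct rb_node *parent)
{
	u64 a = container_of(node, struct pinned_page, node)->ipa;
	u64 b = container_of(parent, struct pinned_page, node)->ipa;

	if (a < b)
		return -1;
	return a > b ? 1 : 0;
}

/* Demand-paging path: pin the host page backing one guest page. */
static int pin_guest_page(unsigned long hva, u64 ipa)
{
	unsigned int flags = FOLL_HWPOISON | FOLL_LONGTERM | FOLL_WRITE;
	struct pinned_page *ppage;
	struct page *page = NULL;

	ppage = kmalloc(sizeof(*ppage), GFP_KERNEL_ACCOUNT);
	if (!ppage)
		return -ENOMEM;

	mmap_read_lock(current->mm);
	pin_user_pages(hva, 1, flags, &page);
	mmap_read_unlock(current->mm);

	if (!page) {
		kfree(ppage);
		return -EFAULT;
	}

	ppage->page = page;
	ppage->ipa = ipa;
	if (rb_find_add(&ppage->node, &pinned_pages, pinned_page_cmp)) {
		/* IPA already tracked: drop the duplicate pin. */
		unpin_user_pages_dirty_lock(&page, 1, false);
		kfree(ppage);
		return -EEXIST;
	}
	return 0;
}

/* VM teardown: mark pages dirty, unpin them and free the tracking nodes. */
static void unpin_all_guest_pages(void)
{
	struct rb_node *node = rb_first(&pinned_pages);

	while (node) {
		struct pinned_page *ppage =
			rb_entry(node, struct pinned_page, node);

		unpin_user_pages_dirty_lock(&ppage->page, 1, true);
		node = rb_next(node);
		rb_erase(&ppage->node, &pinned_pages);
		kfree(ppage);
	}
}

FOLL_LONGTERM marks this as a long-term pin, which keeps the pages resident and
exempt from migration for as long as they stay pinned, which is the property
the commit message relies on; FOLL_WRITE together with the dirty flag passed to
unpin_user_pages_dirty_lock() ensures the pages are marked dirty when they are
finally released back to the host.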