From nobody Fri Dec 19 12:46:38 2025
Date: Tue, 20 May 2025 09:51:52 +0100
In-Reply-To:
<20250520085201.3059786-1-vdonnefort@google.com>
References: <20250520085201.3059786-1-vdonnefort@google.com>
Message-ID: <20250520085201.3059786-2-vdonnefort@google.com>
Subject: [PATCH v5 01/10] KVM: arm64: Handle huge mappings for np-guest CMOs
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kernel-team@android.com, Vincent Donnefort

clean_dcache_guest_page() and invalidate_icache_guest_page() accept a
size as an argument. But they also rely on the fixmap, which can only map
a single PAGE_SIZE page at a time.

With the upcoming stage-2 huge mappings for pKVM np-guests, those
callbacks will be given a size > PAGE_SIZE. Loop the CMOs on a PAGE_SIZE
basis until the whole range is done.
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 31173c694695..be4f7c5612f8 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -219,14 +219,32 @@ static void guest_s2_put_page(void *addr)
 
 static void clean_dcache_guest_page(void *va, size_t size)
 {
-	__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	size += va - PTR_ALIGN_DOWN(va, PAGE_SIZE);
+	va = PTR_ALIGN_DOWN(va, PAGE_SIZE);
+	size = PAGE_ALIGN(size);
+
+	while (size) {
+		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					  PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 static void invalidate_icache_guest_page(void *va, size_t size)
 {
-	__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	size += va - PTR_ALIGN_DOWN(va, PAGE_SIZE);
+	va = PTR_ALIGN_DOWN(va, PAGE_SIZE);
+	size = PAGE_ALIGN(size);
+
+	while (size) {
+		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					       PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
-- 
2.49.0.1143.g0be31eac6b-goog
From nobody Fri Dec 19 12:46:38 2025
Date: Tue, 20 May 2025 09:51:53 +0100
In-Reply-To: <20250520085201.3059786-1-vdonnefort@google.com>
References: <20250520085201.3059786-1-vdonnefort@google.com>
Message-ID: <20250520085201.3059786-3-vdonnefort@google.com>
Subject: [PATCH v5 02/10] KVM: arm64: Introduce for_each_hyp_page
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort

Add a helper to iterate over the hypervisor vmemmap. This will be
particularly handy with the introduction of huge mapping support for the
np-guest stage-2.

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index eb0c2ebd1743..dee1a406b0c2 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -96,24 +96,24 @@ static inline struct hyp_page *hyp_phys_to_page(phys_addr_t phys)
 #define hyp_page_to_virt(page)	__hyp_va(hyp_page_to_phys(page))
 #define hyp_page_to_pool(page)	(((struct hyp_page *)page)->pool)
 
-static inline enum pkvm_page_state get_host_state(phys_addr_t phys)
+static inline enum pkvm_page_state get_host_state(struct hyp_page *p)
 {
-	return (enum pkvm_page_state)hyp_phys_to_page(phys)->__host_state;
+	return p->__host_state;
 }
 
-static inline void set_host_state(phys_addr_t phys, enum pkvm_page_state state)
+static inline void set_host_state(struct hyp_page *p, enum pkvm_page_state state)
 {
-	hyp_phys_to_page(phys)->__host_state = state;
+	p->__host_state = state;
 }
 
-static inline enum pkvm_page_state get_hyp_state(phys_addr_t phys)
+static inline enum pkvm_page_state get_hyp_state(struct hyp_page *p)
 {
-	return hyp_phys_to_page(phys)->__hyp_state_comp ^ PKVM_PAGE_STATE_MASK;
+	return p->__hyp_state_comp ^ PKVM_PAGE_STATE_MASK;
 }
 
-static inline void set_hyp_state(phys_addr_t phys, enum pkvm_page_state state)
+static inline void set_hyp_state(struct hyp_page *p, enum pkvm_page_state state)
 {
-	hyp_phys_to_page(phys)->__hyp_state_comp = state ^ PKVM_PAGE_STATE_MASK;
+	p->__hyp_state_comp = state ^ PKVM_PAGE_STATE_MASK;
 }
 
 /*
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index be4f7c5612f8..1018a6f66359 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -60,6 +60,11 @@ static void hyp_unlock_component(void)
 	hyp_spin_unlock(&pkvm_pgd_lock);
 }
 
+#define for_each_hyp_page(__p, __st, __sz)			\
+	for (struct hyp_page *__p = hyp_phys_to_page(__st),	\
+	     *__e = __p + ((__sz) >> PAGE_SHIFT);		\
+	     __p < __e; __p++)
+
 static void *host_s2_zalloc_pages_exact(size_t size)
 {
 	void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
@@ -485,7 +490,8 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
 		return -EAGAIN;
 
 	if (pte) {
-		WARN_ON(addr_is_memory(addr) && get_host_state(addr) != PKVM_NOPAGE);
+		WARN_ON(addr_is_memory(addr) &&
+			get_host_state(hyp_phys_to_page(addr)) != PKVM_NOPAGE);
 		return -EPERM;
 	}
 
@@ -511,10 +517,8 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 
 static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_state state)
 {
-	phys_addr_t end = addr + size;
-
-	for (; addr < end; addr += PAGE_SIZE)
-		set_host_state(addr, state);
+	for_each_hyp_page(page, addr, size)
+		set_host_state(page, state);
 }
 
 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
@@ -636,16 +640,16 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 static int __host_check_page_state_range(u64 addr, u64 size,
 					 enum pkvm_page_state state)
 {
-	u64 end = addr + size;
 	int ret;
 
-	ret = check_range_allowed_memory(addr, end);
+	ret = check_range_allowed_memory(addr, addr + size);
 	if (ret)
 		return ret;
 
 	hyp_assert_lock_held(&host_mmu.lock);
-	for (; addr < end; addr += PAGE_SIZE) {
-		if (get_host_state(addr) != state)
+
+	for_each_hyp_page(page, addr, size) {
+		if (get_host_state(page) != state)
 			return -EPERM;
 	}
 
@@ -655,7 +659,7 @@ static int __host_check_page_state_range(u64 addr, u64 size,
 static int __host_set_page_state_range(u64 addr, u64 size,
 				       enum pkvm_page_state state)
 {
-	if (get_host_state(addr) == PKVM_NOPAGE) {
+	if (get_host_state(hyp_phys_to_page(addr)) == PKVM_NOPAGE) {
 		int ret = host_stage2_idmap_locked(addr, size, PKVM_HOST_MEM_PROT);
 
 		if (ret)
@@ -669,18 +673,14 @@ static int __host_set_page_state_range(u64 addr, u64 size,
 
 static void __hyp_set_page_state_range(phys_addr_t phys, u64 size, enum pkvm_page_state state)
 {
-	phys_addr_t end = phys + size;
-
-	for (; phys < end; phys += PAGE_SIZE)
-		set_hyp_state(phys, state);
+	for_each_hyp_page(page, phys, size)
+		set_hyp_state(page, state);
 }
 
 static int __hyp_check_page_state_range(phys_addr_t phys, u64 size, enum pkvm_page_state state)
 {
-	phys_addr_t end = phys + size;
-
-	for (; phys < end; phys += PAGE_SIZE) {
-		if (get_hyp_state(phys) != state)
+	for_each_hyp_page(page, phys, size) {
+		if (get_hyp_state(page) != state)
 			return -EPERM;
 	}
 
@@ -931,7 +931,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
 		goto unlock;
 
 	page = hyp_phys_to_page(phys);
-	switch (get_host_state(phys)) {
+	switch (get_host_state(page)) {
 	case PKVM_PAGE_OWNED:
 		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED));
 		break;
@@ -983,9 +983,9 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
 	if (WARN_ON(ret))
 		return ret;
 
-	if (get_host_state(phys) != PKVM_PAGE_SHARED_OWNED)
-		return -EPERM;
 	page = hyp_phys_to_page(phys);
+	if (get_host_state(page) != PKVM_PAGE_SHARED_OWNED)
+		return -EPERM;
 	if (WARN_ON(!page->host_share_guest_count))
 		return -EINVAL;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 6d513a4b3763..c19860fc8183 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -190,6 +190,7 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 				     enum kvm_pgtable_walk_flags visit)
 {
 	enum pkvm_page_state state;
+	struct hyp_page *page;
 	phys_addr_t phys;
 
 	if (!kvm_pte_valid(ctx->old))
@@ -202,6 +203,8 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	if (!addr_is_memory(phys))
 		return -EINVAL;
 
+	page = hyp_phys_to_page(phys);
+
 	/*
 	 * Adjust the host stage-2 mappings to match the ownership attributes
 	 * configured in the hypervisor stage-1, and make sure to propagate them
@@ -210,15 +213,15 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	state = pkvm_getstate(kvm_pgtable_hyp_pte_prot(ctx->old));
 	switch (state) {
 	case PKVM_PAGE_OWNED:
-		set_hyp_state(phys, PKVM_PAGE_OWNED);
+		set_hyp_state(page, PKVM_PAGE_OWNED);
 		return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
 	case PKVM_PAGE_SHARED_OWNED:
-		set_hyp_state(phys, PKVM_PAGE_SHARED_OWNED);
-		set_host_state(phys, PKVM_PAGE_SHARED_BORROWED);
+		set_hyp_state(page, PKVM_PAGE_SHARED_OWNED);
+		set_host_state(page, PKVM_PAGE_SHARED_BORROWED);
 		break;
 	case PKVM_PAGE_SHARED_BORROWED:
-		set_hyp_state(phys, PKVM_PAGE_SHARED_BORROWED);
-		set_host_state(phys, PKVM_PAGE_SHARED_OWNED);
+		set_hyp_state(page, PKVM_PAGE_SHARED_BORROWED);
+		set_host_state(page, PKVM_PAGE_SHARED_OWNED);
 		break;
 	default:
 		return -EINVAL;
-- 
2.49.0.1143.g0be31eac6b-goog
From nobody Fri Dec 19 12:46:38 2025
Date: Tue, 20 May 2025 09:51:54 +0100
In-Reply-To: <20250520085201.3059786-1-vdonnefort@google.com>
References: <20250520085201.3059786-1-vdonnefort@google.com>
Message-ID: <20250520085201.3059786-4-vdonnefort@google.com>
Subject: [PATCH v5 03/10] KVM: arm64: Add a range to __pkvm_host_share_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kernel-team@android.com, Vincent Donnefort
Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" In preparation for supporting stage-2 huge mappings for np-guest. Add a nr_pages argument to the __pkvm_host_share_guest hypercall. This range supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512 on a 4K-pages system). Signed-off-by: Vincent Donnefort diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index 26016eb9323f..47aa7b01114f 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -39,7 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages); int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 59db9606e6e1..4d3d215955c3 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -245,7 +245,8 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) { DECLARE_REG(u64, pfn, host_ctxt, 1); DECLARE_REG(u64, gfn, host_ctxt, 2); - DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3); + DECLARE_REG(u64, nr_pages, host_ctxt, 3); + DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 4); struct pkvm_hyp_vcpu *hyp_vcpu; int ret =3D -EINVAL; =20 @@ -260,7 +261,7 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) if (ret) goto out; =20 - ret =3D __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot); + ret =3D 
__pkvm_host_share_guest(pfn, gfn, nr_pages, hyp_vcpu, prot); out: cpu_reg(host_ctxt, 1) =3D ret; } diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index 1018a6f66359..8e0847aa090d 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -695,10 +695,9 @@ static enum pkvm_page_state guest_get_page_state(kvm_p= te_t pte, u64 addr) return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte)); } =20 -static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 = addr, +static int __guest_check_page_state_range(struct pkvm_hyp_vm *vm, u64 addr, u64 size, enum pkvm_page_state state) { - struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); struct check_walk_data d =3D { .desired =3D state, .get_page_state =3D guest_get_page_state, @@ -907,48 +906,72 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages) return ret; } =20 -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +static int __guest_check_transition_size(u64 phys, u64 ipa, u64 nr_pages, = u64 *size) +{ + if (nr_pages =3D=3D 1) { + *size =3D PAGE_SIZE; + return 0; + } + + return -EINVAL; +} + +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot) { struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); u64 phys =3D hyp_pfn_to_phys(pfn); u64 ipa =3D hyp_pfn_to_phys(gfn); - struct hyp_page *page; + u64 size; int ret; =20 if (prot & ~KVM_PGTABLE_PROT_RWX) return -EINVAL; =20 - ret =3D check_range_allowed_memory(phys, phys + PAGE_SIZE); + ret =3D __guest_check_transition_size(phys, ipa, nr_pages, &size); + if (ret) + return ret; + + ret =3D check_range_allowed_memory(phys, phys + size); if (ret) return ret; =20 host_lock_component(); guest_lock_component(vm); =20 - ret =3D __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE); + ret =3D __guest_check_page_state_range(vm, ipa, size, PKVM_NOPAGE); if (ret) goto unlock; =20 - 
page =3D hyp_phys_to_page(phys); - switch (get_host_state(page)) { - case PKVM_PAGE_OWNED: - WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OW= NED)); - break; - case PKVM_PAGE_SHARED_OWNED: - if (page->host_share_guest_count) - break; - /* Only host to np-guest multi-sharing is tolerated */ - fallthrough; - default: - ret =3D -EPERM; - goto unlock; + for_each_hyp_page(page, phys, size) { + switch (get_host_state(page)) { + case PKVM_PAGE_OWNED: + continue; + case PKVM_PAGE_SHARED_OWNED: + if (page->host_share_guest_count =3D=3D U32_MAX) { + ret =3D -EBUSY; + goto unlock; + } + + /* Only host to np-guest multi-sharing is tolerated */ + if (page->host_share_guest_count) + continue; + + fallthrough; + default: + ret =3D -EPERM; + goto unlock; + } } =20 - WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys, + for_each_hyp_page(page, phys, size) { + set_host_state(page, PKVM_PAGE_SHARED_OWNED); + page->host_share_guest_count++; + } + + WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, size, phys, pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED), &vcpu->vcpu.arch.pkvm_memcache, 0)); - page->host_share_guest_count++; =20 unlock: guest_unlock_component(vm); @@ -1169,6 +1192,9 @@ static void assert_page_state(void) struct pkvm_hyp_vcpu *vcpu =3D &selftest_vcpu; u64 phys =3D hyp_virt_to_phys(virt); u64 ipa[2] =3D { selftest_ipa(), selftest_ipa() + PAGE_SIZE }; + struct pkvm_hyp_vm *vm; + + vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); =20 host_lock_component(); WARN_ON(__host_check_page_state_range(phys, size, selftest_state.host)); @@ -1179,8 +1205,8 @@ static void assert_page_state(void) hyp_unlock_component(); =20 guest_lock_component(&selftest_vm); - WARN_ON(__guest_check_page_state_range(vcpu, ipa[0], size, selftest_state= .guest[0])); - WARN_ON(__guest_check_page_state_range(vcpu, ipa[1], size, selftest_state= .guest[1])); + WARN_ON(__guest_check_page_state_range(vm, ipa[0], size, selftest_state.g= uest[0])); + 
WARN_ON(__guest_check_page_state_range(vm, ipa[1], size, selftest_state.g= uest[1])); guest_unlock_component(&selftest_vm); } =20 @@ -1218,7 +1244,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_unshare_ffa, pfn, 1); assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); =20 selftest_state.host =3D PKVM_PAGE_OWNED; @@ -1237,7 +1263,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); =20 assert_transition_res(0, hyp_pin_shared_mem, virt, virt + size); @@ -1249,7 +1275,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); =20 hyp_unpin_shared_mem(virt, virt + size); @@ -1268,7 +1294,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_share_hyp, pfn); assert_transition_res(-EPERM, __pkvm_host_unshare_hyp, pfn); assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1); - 
assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size); =20 @@ -1279,8 +1305,8 @@ void pkvm_ownership_selftest(void *base) =20 selftest_state.host =3D PKVM_PAGE_SHARED_OWNED; selftest_state.guest[0] =3D PKVM_PAGE_SHARED_BORROWED; - assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn, vcpu, prot); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot= ); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_share_hyp, pfn); @@ -1289,7 +1315,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size); =20 selftest_state.guest[1] =3D PKVM_PAGE_SHARED_BORROWED; - assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn + 1, vcpu, pro= t); + assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn + 1, 1, vcpu, = prot); WARN_ON(hyp_virt_to_page(virt)->host_share_guest_count !=3D 2); =20 selftest_state.guest[0] =3D PKVM_NOPAGE; diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 83a737484046..0285e2cd2e7f 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -347,7 +347,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u6= 4 addr, u64 size, return -EINVAL; =20 lockdep_assert_held_write(&kvm->mmu_lock); - ret =3D kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot); + ret =3D kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot); if (ret) { /* Is the gfn already mapped due to a racing vCPU? 
 		 */
 		if (ret == -EPERM)
--
2.49.0.1143.g0be31eac6b-goog

From nobody Fri Dec 19 12:46:38 2025
Date: Tue, 20 May 2025
09:51:55 +0100
In-Reply-To: <20250520085201.3059786-1-vdonnefort@google.com>
References: <20250520085201.3059786-1-vdonnefort@google.com>
Message-ID: <20250520085201.3059786-5-vdonnefort@google.com>
Subject: [PATCH v5 04/10] KVM: arm64: Add a range to __pkvm_host_unshare_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guest. Add a
nr_pages argument to the __pkvm_host_unshare_guest hypercall. This range
supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512 on a
4K-pages system).
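The nr_pages constraint described above is enforced by __guest_check_transition_size(), introduced earlier in the series and not shown in this patch. As a rough user-space sketch of the rule (hypothetical helper name, 4K pages and a 2MiB PMD assumed — this is not the kernel implementation):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1ULL << PAGE_SHIFT)
#define PMD_SIZE	(1ULL << (PAGE_SHIFT + 9))	/* 2MiB with 4K pages */

/*
 * Sketch of the nr_pages validation: only a single page or a full PMD
 * block (512 pages here) is accepted, and the IPA must be aligned to
 * the resulting mapping size. Returns 0 and fills *size on success.
 */
static int check_transition_size(uint64_t ipa, uint64_t nr_pages, uint64_t *size)
{
	uint64_t bytes = nr_pages << PAGE_SHIFT;

	if (bytes != PAGE_SIZE && bytes != PMD_SIZE)
		return -1;		/* unsupported page count */
	if (ipa & (bytes - 1))
		return -1;		/* misaligned IPA for this block size */

	*size = bytes;
	return 0;
}
```

The point of rejecting anything but the two supported sizes up front is that the check runs before any locks are taken or page-table state is touched, so a bad range fails cheaply.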
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 47aa7b01114f..19671edbe18f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -41,7 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
 			    enum kvm_pgtable_prot prot);
-int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 4d3d215955c3..5c03bd1db873 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -270,6 +270,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;

@@ -280,7 +281,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 	if (!hyp_vm)
 		goto out;

-	ret = __pkvm_host_unshare_guest(gfn, hyp_vm);
+	ret = __pkvm_host_unshare_guest(gfn, nr_pages, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 8e0847aa090d..884e2316aa48 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -980,10 +980,9 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 	return ret;
 }

-static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa)
+static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa, u64 size)
 {
 	enum pkvm_page_state state;
-	struct hyp_page *page;
 	kvm_pte_t pte;
 	u64 phys;
 	s8 level;
@@ -994,7 +993,7 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
 		return ret;
 	if (!kvm_pte_valid(pte))
 		return -ENOENT;
-	if (level != KVM_PGTABLE_LAST_LEVEL)
+	if (kvm_granule_size(level) != size)
 		return -E2BIG;

 	state = guest_get_page_state(pte, ipa);
@@ -1002,43 +1001,49 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
 		return -EPERM;

 	phys = kvm_pte_to_phys(pte);
-	ret = check_range_allowed_memory(phys, phys + PAGE_SIZE);
+	ret = check_range_allowed_memory(phys, phys + size);
 	if (WARN_ON(ret))
 		return ret;

-	page = hyp_phys_to_page(phys);
-	if (get_host_state(page) != PKVM_PAGE_SHARED_OWNED)
-		return -EPERM;
-	if (WARN_ON(!page->host_share_guest_count))
-		return -EINVAL;
+	for_each_hyp_page(page, phys, size) {
+		if (get_host_state(page) != PKVM_PAGE_SHARED_OWNED)
+			return -EPERM;
+		if (WARN_ON(!page->host_share_guest_count))
+			return -EINVAL;
+	}

 	*__phys = phys;

 	return 0;
 }

-int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
 	u64 ipa = hyp_pfn_to_phys(gfn);
-	struct hyp_page *page;
-	u64 phys;
+	u64 size, phys;
 	int ret;

+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
 	host_lock_component();
 	guest_lock_component(vm);

-	ret = __check_host_shared_guest(vm, &phys, ipa);
+	ret = __check_host_shared_guest(vm, &phys, ipa, size);
 	if (ret)
 		goto unlock;

-	ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, size);
 	if (ret)
 		goto unlock;

-	page = hyp_phys_to_page(phys);
-	page->host_share_guest_count--;
-	if (!page->host_share_guest_count)
-		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED));
+	for_each_hyp_page(page, phys, size) {
+		/* __check_host_shared_guest() protects against underflow */
+		page->host_share_guest_count--;
+		if (!page->host_share_guest_count)
+			set_host_state(page, PKVM_PAGE_OWNED);
+	}

 unlock:
 	guest_unlock_component(vm);
@@ -1058,7 +1063,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
 	host_lock_component();
 	guest_lock_component(vm);

-	ret = __check_host_shared_guest(vm, &phys, ipa);
+	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);

 	guest_unlock_component(vm);
 	host_unlock_component();
@@ -1245,7 +1250,7 @@ void pkvm_ownership_selftest(void *base)
 	assert_transition_res(-EPERM, __pkvm_host_unshare_ffa, pfn, 1);
 	assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size);
 	assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, 1, vm);

 	selftest_state.host = PKVM_PAGE_OWNED;
 	selftest_state.hyp = PKVM_NOPAGE;
@@ -1253,7 +1258,7 @@ void pkvm_ownership_selftest(void *base)
 	assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
 	assert_transition_res(-EPERM, __pkvm_host_unshare_hyp, pfn);
 	assert_transition_res(-EPERM, __pkvm_host_unshare_ffa, pfn, 1);
-	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, 1, vm);
 	assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size);

 	selftest_state.host = PKVM_PAGE_SHARED_OWNED;
@@ -1264,7 +1269,7 @@ void pkvm_ownership_selftest(void *base)
 	assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1);
 	assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
 	assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, 1, vm);

 	assert_transition_res(0, hyp_pin_shared_mem, virt, virt + size);
 	assert_transition_res(0, hyp_pin_shared_mem, virt, virt + size);
@@ -1276,7 +1281,7 @@ void pkvm_ownership_selftest(void *base)
 	assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1);
 	assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
 	assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, 1, vm);

 	hyp_unpin_shared_mem(virt, virt + size);
 	assert_page_state();
@@ -1295,7 +1300,7 @@ void pkvm_ownership_selftest(void *base)
 	assert_transition_res(-EPERM, __pkvm_host_unshare_hyp, pfn);
 	assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1);
 	assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, 1, vm);
 	assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size);

 	selftest_state.host = PKVM_PAGE_OWNED;
@@ -1319,11 +1324,11 @@ void pkvm_ownership_selftest(void *base)
 	WARN_ON(hyp_virt_to_page(virt)->host_share_guest_count != 2);

 	selftest_state.guest[0] = PKVM_NOPAGE;
-	assert_transition_res(0, __pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(0, __pkvm_host_unshare_guest, gfn, 1, vm);

 	selftest_state.guest[1] = PKVM_NOPAGE;
 	selftest_state.host = PKVM_PAGE_OWNED;
-	assert_transition_res(0, __pkvm_host_unshare_guest, gfn + 1, vm);
+	assert_transition_res(0, __pkvm_host_unshare_guest, gfn + 1, 1, vm);

 	selftest_state.host = PKVM_NOPAGE;
 	selftest_state.hyp = PKVM_PAGE_OWNED;
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 0285e2cd2e7f..f77c5157a8d7 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -371,7 +371,7 @@ int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)

 	lockdep_assert_held_write(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
 		if (WARN_ON(ret))
 			break;
 		rb_erase(&mapping->node, &pgt->pkvm_mappings);
--
2.49.0.1143.g0be31eac6b-goog

From nobody Fri Dec 19 12:46:38 2025
Date: Tue, 20 May 2025 09:51:56 +0100
In-Reply-To: <20250520085201.3059786-1-vdonnefort@google.com>
References: <20250520085201.3059786-1-vdonnefort@google.com>
Message-ID: <20250520085201.3059786-6-vdonnefort@google.com>
Subject: [PATCH v5 05/10] KVM: arm64: Add a range to __pkvm_host_wrprotect_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guest. Add a
nr_pages argument to the __pkvm_host_wrprotect_guest hypercall. This
range supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512
on a 4K-pages system).
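For intuition on why these hypercalls grow a range argument at all: the host-side walker still passes a hard-coded 1 per mapping in this patch, so write-protecting a 2MiB region costs 512 traps into the hypervisor, whereas a single PMD-sized call could cover it in one. A back-of-the-envelope counting sketch (illustrative only, not kernel code, 4K pages assumed):

```c
#include <stdint.h>

#define PAGE_SHIFT	12
#define PAGES_PER_PMD	512	/* PMD_SIZE / PAGE_SIZE with 4K pages */

/*
 * Hypothetical illustration: number of hypercalls needed to cover
 * `bytes` of guest memory when each call takes either a single page
 * or, once ranges are supported, a whole PMD block.
 */
static uint64_t hypercalls_needed(uint64_t bytes, int use_pmd_blocks)
{
	uint64_t pages = bytes >> PAGE_SHIFT;

	if (!use_pmd_blocks)
		return pages;	/* one trap per page */

	/* one call per whole block, then one per leftover page */
	return pages / PAGES_PER_PMD + pages % PAGES_PER_PMD;
}
```

For a 2MiB region this is 512 calls versus 1, which is the saving the series is preparing for.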
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 19671edbe18f..64d4f3bf6269 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);

 bool addr_is_memory(phys_addr_t phys);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 5c03bd1db873..fa7e2421d359 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -310,6 +310,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;

@@ -320,7 +321,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 	if (!hyp_vm)
 		goto out;

-	ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm);
+	ret = __pkvm_host_wrprotect_guest(gfn, nr_pages, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 884e2316aa48..a6c45202aa85 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1052,7 +1052,7 @@ int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 	return ret;
 }

-static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
+static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa, u64 size)
 {
 	u64 phys;
 	int ret;
@@ -1063,7 +1063,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
 	host_lock_component();
 	guest_lock_component(vm);

-	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);
+	ret = __check_host_shared_guest(vm, &phys, ipa, size);

 	guest_unlock_component(vm);
 	host_unlock_component();
@@ -1083,7 +1083,7 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
 	if (prot & ~KVM_PGTABLE_PROT_RWX)
 		return -EINVAL;

-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
 	guest_unlock_component(vm);
@@ -1091,17 +1091,21 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
 	return ret;
 }

-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
 	int ret;

 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;

-	assert_host_shared_guest(vm, ipa);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	assert_host_shared_guest(vm, ipa, size);
 	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, size);
 	guest_unlock_component(vm);

 	return ret;
@@ -1115,7 +1119,7 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;

-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
 	guest_unlock_component(vm);
@@ -1131,7 +1135,7 @@ int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;

-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
 	guest_unlock_component(vm);
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index f77c5157a8d7..daab4a00790a 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -390,7 +390,7 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)

 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
 		if (WARN_ON(ret))
 			break;
 	}
--
2.49.0.1143.g0be31eac6b-goog

From nobody Fri Dec 19 12:46:38 2025
Date: Tue, 20 May 2025 09:51:57 +0100
In-Reply-To: <20250520085201.3059786-1-vdonnefort@google.com>
References: <20250520085201.3059786-1-vdonnefort@google.com>
Message-ID: <20250520085201.3059786-7-vdonnefort@google.com>
Subject: [PATCH v5 06/10] KVM: arm64: Add a range to __pkvm_host_test_clear_young_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guest.
Add a nr_pages argument to the __pkvm_host_test_clear_young_guest
hypercall. This range supports only two values: 1 or PMD_SIZE / PAGE_SIZE
(that is 512 on a 4K-pages system).

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 64d4f3bf6269..5f9d56754e39 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);

 bool addr_is_memory(phys_addr_t phys);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index fa7e2421d359..8e8848de4d47 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -331,7 +331,8 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
-	DECLARE_REG(bool, mkold, host_ctxt, 3);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
+	DECLARE_REG(bool, mkold, host_ctxt, 4);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;

@@ -342,7 +343,7 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 	if (!hyp_vm)
 		goto out;

-	ret = __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm);
+	ret = __pkvm_host_test_clear_young_guest(gfn, nr_pages, mkold, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index a6c45202aa85..5a7a38c5d67c 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1111,17 +1111,21 @@ int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 	return ret;
 }

-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
 	int ret;

 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;

-	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	assert_host_shared_guest(vm, ipa, size);
 	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
+	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, size, mkold);
 	guest_unlock_component(vm);

 	return ret;
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index daab4a00790a..057874bbe3e1 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -420,7 +420,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   mkold);
+					   1, mkold);

 	return young;
 }
--
2.49.0.1143.g0be31eac6b-goog

From nobody Fri Dec 19 12:46:38 2025
20 May 2025 08:53:14 +0000 (UTC)
Date: Tue, 20 May 2025 09:51:58 +0100
In-Reply-To: <20250520085201.3059786-1-vdonnefort@google.com>
References: <20250520085201.3059786-1-vdonnefort@google.com>
Mime-Version: 1.0
Message-ID:
<20250520085201.3059786-8-vdonnefort@google.com>
Subject: [PATCH v5 07/10] KVM: arm64: Convert pkvm_mappings to interval tree
From: Vincent Donnefort <vdonnefort@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort <vdonnefort@google.com>
Content-Type: text/plain; charset="utf-8"

From: Quentin Perret <qperret@google.com>

In preparation for supporting stage-2 huge mappings for np-guests, let's
convert pgt.pkvm_mappings to an interval tree. No functional change
intended.

Suggested-by: Vincent Donnefort <vdonnefort@google.com>
Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 6b9d274052c7..1b43bcd2a679 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -413,7 +413,7 @@ static inline bool kvm_pgtable_walk_lock_held(void)
  */
 struct kvm_pgtable {
 	union {
-		struct rb_root		pkvm_mappings;
+		struct rb_root_cached	pkvm_mappings;
 		struct {
 			u32		ia_bits;
 			s8		start_level;
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index d91bfcf2db56..da75d41c948c 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -173,6 +173,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 __subtree_last;	/* Internal member for interval tree */
 };
 
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 057874bbe3e1..8a1a2faf66a8 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -5,6 +5,7 @@
  */
 
 #include
+#include
 #include
 #include
 #include
@@ -256,80 +257,67 @@ static int __init finalize_pkvm(void)
 }
 device_initcall_sync(finalize_pkvm);
 
-static int cmp_mappings(struct rb_node *node, const struct rb_node *parent)
+static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 {
-	struct pkvm_mapping *a = rb_entry(node, struct pkvm_mapping, node);
-	struct pkvm_mapping *b = rb_entry(parent, struct pkvm_mapping, node);
-
-	if (a->gfn < b->gfn)
-		return -1;
-	if (a->gfn > b->gfn)
-		return 1;
-	return 0;
+	return m->gfn * PAGE_SIZE;
 }
 
-static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 gfn)
+static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	struct rb_node *node = root->rb_node, *prev = NULL;
-	struct pkvm_mapping *mapping;
-
-	while (node) {
-		mapping = rb_entry(node, struct pkvm_mapping, node);
-		if (mapping->gfn == gfn)
-			return node;
-		prev = node;
-		node = (gfn < mapping->gfn) ? node->rb_left : node->rb_right;
-	}
-
-	return prev;
+	return (m->gfn + 1) * PAGE_SIZE - 1;
 }
 
+INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
+		     __pkvm_mapping_start, __pkvm_mapping_end, static,
+		     pkvm_mapping);
+
 /*
- * __tmp is updated to rb_next(__tmp) *before* entering the body of the loop to allow freeing
- * of __map inline.
+ * __tmp is updated to iter_first(pkvm_mappings) *before* entering the body of the loop to allow
+ * freeing of __map inline.
 */
 #define for_each_mapping_in_range_safe(__pgt, __start, __end, __map)			\
-	for (struct rb_node *__tmp = find_first_mapping_node(&(__pgt)->pkvm_mappings,	\
-							     ((__start) >> PAGE_SHIFT));\
+	for (struct pkvm_mapping *__tmp = pkvm_mapping_iter_first(&(__pgt)->pkvm_mappings,\
+								  __start, __end - 1);	\
 	     __tmp && ({								\
-		__map = rb_entry(__tmp, struct pkvm_mapping, node);			\
-		__tmp = rb_next(__tmp);							\
+		__map = __tmp;								\
+		__tmp = pkvm_mapping_iter_next(__map, __start, __end - 1);		\
 		true;									\
 	       });									\
-	     )										\
-		if (__map->gfn < ((__start) >> PAGE_SHIFT))				\
-			continue;							\
-		else if (__map->gfn >= ((__end) >> PAGE_SHIFT))				\
-			break;								\
-		else
+	     )
 
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
			     struct kvm_pgtable_mm_ops *mm_ops)
 {
-	pgt->pkvm_mappings	= RB_ROOT;
+	pgt->pkvm_mappings	= RB_ROOT_CACHED;
 	pgt->mmu		= mmu;
 
 	return 0;
 }
 
-void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 end)
 {
 	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
 	pkvm_handle_t handle = kvm->arch.pkvm.handle;
 	struct pkvm_mapping *mapping;
-	struct rb_node *node;
+	int ret;
 
 	if (!handle)
-		return;
+		return 0;
 
-	node = rb_first(&pgt->pkvm_mappings);
-	while (node) {
-		mapping = rb_entry(node, struct pkvm_mapping, node);
-		kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
-		node = rb_next(node);
-		rb_erase(&mapping->node, &pgt->pkvm_mappings);
+	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		if (WARN_ON(ret))
+			return ret;
+		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
 		kfree(mapping);
 	}
+
+	return 0;
+}
+
+void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+{
+	__pkvm_pgtable_stage2_unmap(pgt, 0, ~(0ULL));
 }
 
 int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -357,28 +345,16 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
-	WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm_mappings, cmp_mappings));
+	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 
 	return ret;
 }
 
 int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
-	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
-	pkvm_handle_t handle = kvm->arch.pkvm.handle;
-	struct pkvm_mapping *mapping;
-	int ret = 0;
-
-	lockdep_assert_held_write(&kvm->mmu_lock);
-	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
-		if (WARN_ON(ret))
-			break;
-		rb_erase(&mapping->node, &pgt->pkvm_mappings);
-		kfree(mapping);
-	}
+	lockdep_assert_held_write(&kvm_s2_mmu_to_kvm(pgt->mmu)->mmu_lock);
 
-	return ret;
+	return __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
 }
 
 int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
-- 
2.49.0.1143.g0be31eac6b-goog

From nobody Fri Dec 19 12:46:38 2025
Received: from mail-wm1-f73.google.com by smtp.subspace.kernel.org (Postfix) with ESMTPS id D10CD268FD9; Tue, 20 May 2025 08:53:16 +0000 (UTC)
Date: Tue, 20 May 2025 09:51:59 +0100
In-Reply-To: <20250520085201.3059786-1-vdonnefort@google.com>
References: <20250520085201.3059786-1-vdonnefort@google.com>
Mime-Version: 1.0
Message-ID: <20250520085201.3059786-9-vdonnefort@google.com>
Subject: [PATCH v5 08/10] KVM: arm64: Add a range to pkvm_mappings
From: Vincent Donnefort <vdonnefort@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort <vdonnefort@google.com>
Content-Type: text/plain; charset="utf-8"

From: Quentin Perret <qperret@google.com>

In preparation for supporting
stage-2 huge mappings for np-guests, add an nr_pages member to
pkvm_mappings to allow EL1 to track the size of the stage-2 mapping.

Signed-off-by: Quentin Perret <qperret@google.com>
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index da75d41c948c..ea58282f59bb 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -173,6 +173,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 nr_pages;
 	u64 __subtree_last;	/* Internal member for interval tree */
 };
 
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 8a1a2faf66a8..b1a65f50c02a 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -264,7 +264,7 @@ static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 
 static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	return (m->gfn + 1) * PAGE_SIZE - 1;
+	return (m->gfn + m->nr_pages) * PAGE_SIZE - 1;
 }
 
 INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
@@ -305,7 +305,8 @@ static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 end)
 		return 0;
 
 	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			return ret;
 		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
@@ -335,16 +336,32 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot);
-	if (ret) {
-		/* Is the gfn already mapped due to a racing vCPU? */
-		if (ret == -EPERM)
+
+	/*
+	 * Calling stage2_map() on top of existing mappings is either happening because of a race
+	 * with another vCPU, or because we're changing between page and block mappings. As per
+	 * user_mem_abort(), same-size permission faults are handled in the relax_perms() path.
+	 */
+	mapping = pkvm_mapping_iter_first(&pgt->pkvm_mappings, addr, addr + size - 1);
+	if (mapping) {
+		if (size == (mapping->nr_pages * PAGE_SIZE))
 			return -EAGAIN;
+
+		/* Remove _any_ pkvm_mapping overlapping with the range, bigger or smaller. */
+		ret = __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
+		if (ret)
+			return ret;
+		mapping = NULL;
 	}
 
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, size / PAGE_SIZE, prot);
+	if (WARN_ON(ret))
+		return ret;
+
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
+	mapping->nr_pages = size / PAGE_SIZE;
 	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 
 	return ret;
@@ -366,7 +383,8 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			break;
 	}
@@ -381,7 +399,8 @@ int pkvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
-		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn),
+					  PAGE_SIZE * mapping->nr_pages);
 
 	return 0;
 }
@@ -396,7 +415,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   1, mkold);
+					   mapping->nr_pages, mkold);
 
 	return young;
 }
-- 
2.49.0.1143.g0be31eac6b-goog

From nobody Fri Dec 19
12:46:38 2025
Received: from mail-wm1-f74.google.com by smtp.subspace.kernel.org (Postfix) with ESMTPS id AA0AD2690F4; Tue, 20 May 2025 08:53:18 +0000 (UTC)
Date: Tue, 20 May 2025 09:52:00 +0100
In-Reply-To: <20250520085201.3059786-1-vdonnefort@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20250520085201.3059786-1-vdonnefort@google.com>
Message-ID: <20250520085201.3059786-10-vdonnefort@google.com>
Subject: [PATCH v5 09/10] KVM: arm64: Stage-2 huge mappings for np-guests
From: Vincent Donnefort <vdonnefort@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort <vdonnefort@google.com>
Content-Type: text/plain; charset="utf-8"

Now that np-guest hypercalls with a range are supported, we can let the
hypervisor install block mappings whenever the Stage-1 allows it, that
is, when backed by either Hugetlbfs or THPs. The size of those block
mappings is limited to PMD_SIZE.
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 5a7a38c5d67c..1490820b9ebe 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -166,12 +166,6 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)
 	return 0;
 }
 
-static bool guest_stage2_force_pte_cb(u64 addr, u64 end,
-				      enum kvm_pgtable_prot prot)
-{
-	return true;
-}
-
 static void *guest_s2_zalloc_pages_exact(size_t size)
 {
 	void *addr = hyp_alloc_pages(&current_vm->pool, get_order(size));
@@ -278,8 +272,7 @@ int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
 	};
 
 	guest_lock_component(vm);
-	ret = __kvm_pgtable_stage2_init(mmu->pgt, mmu, &vm->mm_ops, 0,
-					guest_stage2_force_pte_cb);
+	ret = __kvm_pgtable_stage2_init(mmu->pgt, mmu, &vm->mm_ops, 0, NULL);
 	guest_unlock_component(vm);
 	if (ret)
 		return ret;
@@ -908,12 +901,24 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
 
 static int __guest_check_transition_size(u64 phys, u64 ipa, u64 nr_pages, u64 *size)
 {
+	size_t block_size;
+
 	if (nr_pages == 1) {
 		*size = PAGE_SIZE;
 		return 0;
 	}
 
-	return -EINVAL;
+	/* We solely support second-to-last-level huge mappings */
+	block_size = kvm_granule_size(KVM_PGTABLE_LAST_LEVEL - 1);
+
+	if (nr_pages != block_size >> PAGE_SHIFT)
+		return -EINVAL;
+
+	if (!IS_ALIGNED(phys | ipa, block_size))
+		return -EINVAL;
+
+	*size = block_size;
+	return 0;
 }
 
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 754f2fe0cc67..e445db2cb4a4 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1304,6 +1304,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	if (map_size == PAGE_SIZE)
 		return true;
 
+	/* pKVM only supports PMD_SIZE huge-mappings */
+	if (is_protected_kvm_enabled() && map_size != PMD_SIZE)
+		return false;
+
 	size = memslot->npages * PAGE_SIZE;
 
 	gpa_start = memslot->base_gfn << PAGE_SHIFT;
@@ -1537,7 +1541,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * logging_active is guaranteed to never be true for VM_PFNMAP
 	 * memslots.
 	 */
-	if (logging_active || is_protected_kvm_enabled()) {
+	if (logging_active) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index b1a65f50c02a..fcd70bfe44fb 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -332,7 +332,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	u64 pfn = phys >> PAGE_SHIFT;
 	int ret;
 
-	if (size != PAGE_SIZE)
+	if (size != PAGE_SIZE && size != PMD_SIZE)
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-- 
2.49.0.1143.g0be31eac6b-goog

From nobody Fri Dec 19 12:46:38 2025
Received: from mail-wr1-f74.google.com by smtp.subspace.kernel.org (Postfix) with ESMTPS id 62E85266B59; Tue, 20 May 2025 08:53:20 +0000 (UTC)
ARC-Authentication-Results:
i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com
Date: Tue, 20 May 2025 09:52:01 +0100
In-Reply-To: <20250520085201.3059786-1-vdonnefort@google.com>
References: <20250520085201.3059786-1-vdonnefort@google.com>
Mime-Version: 1.0
Message-ID: <20250520085201.3059786-11-vdonnefort@google.com>
Subject: [PATCH v5 10/10] KVM: arm64: np-guest CMOs with PMD_SIZE fixmap
From: Vincent Donnefort <vdonnefort@google.com>
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort <vdonnefort@google.com>
Content-Type: text/plain; charset="utf-8"

With the introduction of stage-2 huge mappings in the pKVM hypervisor,
CMOs for guest pages can now cover PMD_SIZE. The fixmap only supports
PAGE_SIZE, and iterating over a huge page is time consuming (mostly
because of the TLBI on hyp_fixmap_unmap()), which is a problem for EL2
latency.
Introduce a shared PMD_SIZE fixmap (hyp_fixblock_map/hyp_fixblock_unmap) to
improve guest-page CMOs when stage-2 huge mappings are installed. On a
Pixel6, the iterative solution resulted in a latency of ~700us, while the
PMD_SIZE fixmap reduces it to ~100us.

Because of the horrendous private range allocation that would be necessary,
this is disabled for 64KiB-page systems.

Suggested-by: Quentin Perret <qperret@google.com>
Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
Signed-off-by: Quentin Perret <qperret@google.com>

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 1b43bcd2a679..2888b5d03757 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -59,6 +59,11 @@ typedef u64 kvm_pte_t;
 
 #define KVM_PHYS_INVALID		(-1ULL)
 
+#define KVM_PTE_TYPE			BIT(1)
+#define KVM_PTE_TYPE_BLOCK		0
+#define KVM_PTE_TYPE_PAGE		1
+#define KVM_PTE_TYPE_TABLE		1
+
 #define KVM_PTE_LEAF_ATTR_LO		GENMASK(11, 2)
 
 #define KVM_PTE_LEAF_ATTR_LO_S1_ATTRIDX	GENMASK(4, 2)
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 230e4f2527de..6e83ce35c2f2 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -13,9 +13,11 @@
 extern struct kvm_pgtable pkvm_pgtable;
 extern hyp_spinlock_t pkvm_pgd_lock;
 
-int hyp_create_pcpu_fixmap(void);
+int hyp_create_fixmap(void);
 void *hyp_fixmap_map(phys_addr_t phys);
 void hyp_fixmap_unmap(void);
+void *hyp_fixblock_map(phys_addr_t phys, size_t *size);
+void hyp_fixblock_unmap(void);
 
 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 1490820b9ebe..962948534179 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -216,34 +216,42 @@ static void guest_s2_put_page(void *addr)
 	hyp_put_page(&current_vm->pool, addr);
 }
 
-static void clean_dcache_guest_page(void *va, size_t size)
+static void __apply_guest_page(void *va, size_t size,
+			       void (*func)(void *addr, size_t size))
 {
 	size += va - PTR_ALIGN_DOWN(va, PAGE_SIZE);
 	va = PTR_ALIGN_DOWN(va, PAGE_SIZE);
 	size = PAGE_ALIGN(size);
 
 	while (size) {
-		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					  PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
+		size_t map_size = PAGE_SIZE;
+		void *map;
+
+		if (size >= PMD_SIZE)
+			map = hyp_fixblock_map(__hyp_pa(va), &map_size);
+		else
+			map = hyp_fixmap_map(__hyp_pa(va));
+
+		func(map, map_size);
+
+		if (size >= PMD_SIZE)
+			hyp_fixblock_unmap();
+		else
+			hyp_fixmap_unmap();
+
+		size -= map_size;
+		va += map_size;
 	}
 }
 
-static void invalidate_icache_guest_page(void *va, size_t size)
+static void clean_dcache_guest_page(void *va, size_t size)
 {
-	size += va - PTR_ALIGN_DOWN(va, PAGE_SIZE);
-	va = PTR_ALIGN_DOWN(va, PAGE_SIZE);
-	size = PAGE_ALIGN(size);
+	__apply_guest_page(va, size, __clean_dcache_guest_page);
+}
 
-	while (size) {
-		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					       PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
-	}
+static void invalidate_icache_guest_page(void *va, size_t size)
+{
+	__apply_guest_page(va, size, __invalidate_icache_guest_page);
 }
 
 int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index f41c7440b34b..ae8391baebc3 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -229,9 +229,8 @@ int hyp_map_vectors(void)
 	return 0;
 }
 
-void *hyp_fixmap_map(phys_addr_t phys)
+static void *fixmap_map_slot(struct hyp_fixmap_slot *slot, phys_addr_t phys)
 {
-	struct hyp_fixmap_slot *slot = this_cpu_ptr(&fixmap_slots);
 	kvm_pte_t pte, *ptep = slot->ptep;
 
 	pte = *ptep;
@@ -243,10 +242,21 @@ void *hyp_fixmap_map(phys_addr_t phys)
 	return (void *)slot->addr;
 }
 
+void *hyp_fixmap_map(phys_addr_t phys)
+{
+	return fixmap_map_slot(this_cpu_ptr(&fixmap_slots), phys);
+}
+
 static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
 {
 	kvm_pte_t *ptep = slot->ptep;
 	u64 addr = slot->addr;
+	u32 level;
+
+	if (FIELD_GET(KVM_PTE_TYPE, *ptep) == KVM_PTE_TYPE_PAGE)
+		level = KVM_PGTABLE_LAST_LEVEL;
+	else
+		level = KVM_PGTABLE_LAST_LEVEL - 1; /* create_fixblock() guarantees PMD level */
 
 	WRITE_ONCE(*ptep, *ptep & ~KVM_PTE_VALID);
 
@@ -260,7 +270,7 @@ static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
 	 * https://lore.kernel.org/kvm/20221017115209.2099-1-will@kernel.org/T/#mf10dfbaf1eaef9274c581b81c53758918c1d0f03
 	 */
 	dsb(ishst);
-	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), KVM_PGTABLE_LAST_LEVEL);
+	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
 	dsb(ish);
 	isb();
 }
@@ -273,9 +283,9 @@ void hyp_fixmap_unmap(void)
 static int __create_fixmap_slot_cb(const struct kvm_pgtable_visit_ctx *ctx,
 				   enum kvm_pgtable_walk_flags visit)
 {
-	struct hyp_fixmap_slot *slot = per_cpu_ptr(&fixmap_slots, (u64)ctx->arg);
+	struct hyp_fixmap_slot *slot = (struct hyp_fixmap_slot *)ctx->arg;
 
-	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_LAST_LEVEL)
+	if (!kvm_pte_valid(ctx->old) || (ctx->end - ctx->start) != kvm_granule_size(ctx->level))
 		return -EINVAL;
 
 	slot->addr = ctx->addr;
@@ -296,13 +306,84 @@ static int create_fixmap_slot(u64 addr, u64 cpu)
 	struct kvm_pgtable_walker walker = {
 		.cb = __create_fixmap_slot_cb,
 		.flags = KVM_PGTABLE_WALK_LEAF,
-		.arg = (void *)cpu,
+		.arg = per_cpu_ptr(&fixmap_slots, cpu),
 	};
 
 	return kvm_pgtable_walk(&pkvm_pgtable, addr, PAGE_SIZE, &walker);
 }
 
-int hyp_create_pcpu_fixmap(void)
+#if PAGE_SHIFT < 16
+#define HAS_FIXBLOCK
+static struct hyp_fixmap_slot hyp_fixblock_slot;
+static DEFINE_HYP_SPINLOCK(hyp_fixblock_lock);
+#endif
+
+static int create_fixblock(void)
+{
+#ifdef HAS_FIXBLOCK
+	struct kvm_pgtable_walker walker = {
+		.cb = __create_fixmap_slot_cb,
+		.flags = KVM_PGTABLE_WALK_LEAF,
+		.arg = &hyp_fixblock_slot,
+	};
+	unsigned long addr;
+	phys_addr_t phys;
+	int ret, i;
+
+	/* Find a RAM phys address, PMD aligned */
+	for (i = 0; i < hyp_memblock_nr; i++) {
+		phys = ALIGN(hyp_memory[i].base, PMD_SIZE);
+		if (phys + PMD_SIZE < (hyp_memory[i].base + hyp_memory[i].size))
+			break;
+	}
+
+	if (i >= hyp_memblock_nr)
+		return -EINVAL;
+
+	hyp_spin_lock(&pkvm_pgd_lock);
+	addr = ALIGN(__io_map_base, PMD_SIZE);
+	ret = __pkvm_alloc_private_va_range(addr, PMD_SIZE);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PMD_SIZE, phys, PAGE_HYP);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker);
+
+unlock:
+	hyp_spin_unlock(&pkvm_pgd_lock);
+
+	return ret;
+#else
+	return 0;
+#endif
+}
+
+void *hyp_fixblock_map(phys_addr_t phys, size_t *size)
+{
+#ifdef HAS_FIXBLOCK
+	*size = PMD_SIZE;
+	hyp_spin_lock(&hyp_fixblock_lock);
+	return fixmap_map_slot(&hyp_fixblock_slot, phys);
+#else
+	*size = PAGE_SIZE;
+	return hyp_fixmap_map(phys);
+#endif
+}
+
+void hyp_fixblock_unmap(void)
+{
+#ifdef HAS_FIXBLOCK
+	fixmap_clear_slot(&hyp_fixblock_slot);
+	hyp_spin_unlock(&hyp_fixblock_lock);
+#else
+	hyp_fixmap_unmap();
+#endif
+}
+
+int hyp_create_fixmap(void)
 {
 	unsigned long addr, i;
 	int ret;
@@ -322,7 +403,7 @@ int hyp_create_pcpu_fixmap(void)
 		return ret;
 	}
 
-	return 0;
+	return create_fixblock();
 }
 
 int hyp_create_idmap(u32 hyp_va_bits)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index c19860fc8183..a48d3f5a5afb 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -312,7 +312,7 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
-	ret = hyp_create_pcpu_fixmap();
+	ret = hyp_create_fixmap();
 	if (ret)
 		goto out;
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index df5cc74a7dd0..c351b4abd5db 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -11,12 +11,6 @@
 #include
 #include
 
-
-#define KVM_PTE_TYPE			BIT(1)
-#define KVM_PTE_TYPE_BLOCK		0
-#define KVM_PTE_TYPE_PAGE		1
-#define KVM_PTE_TYPE_TABLE		1
-
 struct kvm_pgtable_walk_data {
 	struct kvm_pgtable_walker		*walker;
 
-- 
2.49.0.1143.g0be31eac6b-goog
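As a side note for reviewers, the chunking logic of __apply_guest_page in the patch above can be modelled in a standalone userspace sketch. This is not the hypervisor code: the PAGE_SIZE/PMD_SIZE values are assumed (4KiB granule), and counters stand in for the real hyp_fixblock_map/hyp_fixmap_map calls; it only illustrates how a page-aligned range is consumed in PMD_SIZE chunks while at least PMD_SIZE remains, then falls back to PAGE_SIZE.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL              /* assumed 4KiB granule */
#define PMD_SIZE  (512UL * PAGE_SIZE) /* 2MiB block with 4KiB pages */

/* Stand-ins for the fixblock/fixmap mappings the hypervisor would take. */
static size_t block_maps, page_maps;

/* Mirror of __apply_guest_page's loop: page-align the range, then walk it
 * in PMD_SIZE chunks while enough remains, falling back to PAGE_SIZE. */
static void apply_guest_range(uintptr_t va, size_t size)
{
	size += va & (PAGE_SIZE - 1);                 /* absorb sub-page start */
	va &= ~(PAGE_SIZE - 1);                       /* PTR_ALIGN_DOWN() */
	size = (size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1); /* PAGE_ALIGN() */

	while (size) {
		size_t map_size = (size >= PMD_SIZE) ? PMD_SIZE : PAGE_SIZE;

		if (map_size == PMD_SIZE)
			block_maps++;	/* would be hyp_fixblock_map()/unmap() */
		else
			page_maps++;	/* would be hyp_fixmap_map()/unmap() */

		size -= map_size;
		va += map_size;
	}
}
```

For a 2MiB-backed guest region this is what makes the difference the commit message measures: one block mapping (one TLBI on unmap) instead of 512 page mappings.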