From nobody Sun Feb 8 17:48:38 2026
Date: Fri, 28 Feb 2025 10:25:17 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-2-vdonnefort@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog
Subject: [PATCH 1/9] KVM: arm64: Handle huge mappings for np-guest CMOs
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kernel-team@android.com, Vincent Donnefort

clean_dcache_guest_page() and invalidate_icache_guest_page() accept a
size as an argument. But they also rely on the fixmap, which can only
map a single PAGE_SIZE page at a time.

With the upcoming stage-2 huge mappings for pKVM np-guests, those
callbacks will get a size > PAGE_SIZE. Loop the CMOs on a PAGE_SIZE
basis until the whole range is done.
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 19c3c631708c..a796e257c41f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -219,14 +219,24 @@ static void guest_s2_put_page(void *addr)
 
 static void clean_dcache_guest_page(void *va, size_t size)
 {
-	__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	while (size) {
+		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					  PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 static void invalidate_icache_guest_page(void *va, size_t size)
 {
-	__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	while (size) {
+		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					       PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
-- 
2.48.1.711.g2feabab25a-goog

From nobody Sun Feb 8 17:48:38 2026
Date: Fri, 28 Feb 2025 10:25:18 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-3-vdonnefort@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog
Subject: [PATCH 2/9] KVM: arm64: Add a range to __pkvm_host_share_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests.
Add a nr_pages argument to the __pkvm_host_share_guest hypercall. This range supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512 on a 4K-pages system). Signed-off-by: Vincent Donnefort diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index 978f38c386ee..1abbab5e2ff8 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -39,7 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages); int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 2c37680d954c..e71601746935 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -249,7 +249,8 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) { DECLARE_REG(u64, pfn, host_ctxt, 1); DECLARE_REG(u64, gfn, host_ctxt, 2); - DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3); + DECLARE_REG(u64, nr_pages, host_ctxt, 3); + DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 4); struct pkvm_hyp_vcpu *hyp_vcpu; int ret =3D -EINVAL; =20 @@ -264,7 +265,7 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) if (ret) goto out; =20 - ret =3D __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot); + ret =3D __pkvm_host_share_guest(pfn, gfn, nr_pages, hyp_vcpu, prot); out: cpu_reg(host_ctxt, 1) =3D ret; } diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c 
b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index a796e257c41f..2e49bd6e4ae8 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -60,6 +60,9 @@ static void hyp_unlock_component(void) hyp_spin_unlock(&pkvm_pgd_lock); } =20 +#define for_each_hyp_page(start, size, page) \ + for (page =3D hyp_phys_to_page(start); page < hyp_phys_to_page((start) + = (size)); page++) + static void *host_s2_zalloc_pages_exact(size_t size) { void *addr =3D hyp_alloc_pages(&host_s2_pool, get_order(size)); @@ -503,10 +506,25 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 si= ze, =20 static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm= _page_state state) { - phys_addr_t end =3D addr + size; + struct hyp_page *page; + + for_each_hyp_page(addr, size, page) + page->host_state =3D state; +} + +static void __host_update_share_guest_count(u64 phys, u64 size, bool inc) +{ + struct hyp_page *page; =20 - for (; addr < end; addr +=3D PAGE_SIZE) - hyp_phys_to_page(addr)->host_state =3D state; + for_each_hyp_page(phys, size, page) { + if (inc) { + WARN_ON(page->host_share_guest_count++ =3D=3D U32_MAX); + } else { + WARN_ON(!page->host_share_guest_count--); + if (!page->host_share_guest_count) + page->host_state =3D PKVM_PAGE_OWNED; + } + } } =20 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id) @@ -621,16 +639,16 @@ static int check_page_state_range(struct kvm_pgtable = *pgt, u64 addr, u64 size, static int __host_check_page_state_range(u64 addr, u64 size, enum pkvm_page_state state) { - u64 end =3D addr + size; + struct hyp_page *page; int ret; =20 - ret =3D check_range_allowed_memory(addr, end); + ret =3D check_range_allowed_memory(addr, addr + size); if (ret) return ret; =20 hyp_assert_lock_held(&host_mmu.lock); - for (; addr < end; addr +=3D PAGE_SIZE) { - if (hyp_phys_to_page(addr)->host_state !=3D state) + for_each_hyp_page(addr, size, page) { + if (page->host_state !=3D state) return -EPERM; } =20 @@ 
-680,10 +698,9 @@ static enum pkvm_page_state guest_get_page_state(kvm_p= te_t pte, u64 addr) return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte)); } =20 -static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 = addr, +static int __guest_check_page_state_range(struct pkvm_hyp_vm *vm, u64 addr, u64 size, enum pkvm_page_state state) { - struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); struct check_walk_data d =3D { .desired =3D state, .get_page_state =3D guest_get_page_state, @@ -890,49 +907,75 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages) return ret; } =20 -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +static int __guest_check_transition_size(u64 phys, u64 ipa, u64 nr_pages, = u64 *size) +{ + if (nr_pages =3D=3D 1) { + *size =3D PAGE_SIZE; + return 0; + } + + /* We solely support PMD_SIZE huge-pages */ + if (nr_pages !=3D (1 << (PMD_SHIFT - PAGE_SHIFT))) + return -EINVAL; + + if (!IS_ALIGNED(phys | ipa, PMD_SIZE)) + return -EINVAL; + + *size =3D PMD_SIZE; + return 0; +} + +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot) { struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); u64 phys =3D hyp_pfn_to_phys(pfn); u64 ipa =3D hyp_pfn_to_phys(gfn); struct hyp_page *page; + u64 size; int ret; =20 if (prot & ~KVM_PGTABLE_PROT_RWX) return -EINVAL; =20 - ret =3D check_range_allowed_memory(phys, phys + PAGE_SIZE); + ret =3D __guest_check_transition_size(phys, ipa, nr_pages, &size); if (ret) return ret; =20 host_lock_component(); guest_lock_component(vm); =20 - ret =3D __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE); + ret =3D __guest_check_page_state_range(vm, ipa, size, PKVM_NOPAGE); if (ret) goto unlock; =20 page =3D hyp_phys_to_page(phys); + ret =3D __host_check_page_state_range(phys, size, page->host_state); + if (ret) + goto unlock; + switch (page->host_state) { case PKVM_PAGE_OWNED: - 
WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OW= NED)); + WARN_ON(__host_set_page_state_range(phys, size, PKVM_PAGE_SHARED_OWNED)); break; case PKVM_PAGE_SHARED_OWNED: - if (page->host_share_guest_count) - break; - /* Only host to np-guest multi-sharing is tolerated */ - WARN_ON(1); - fallthrough; + for_each_hyp_page(phys, size, page) { + /* Only host to np-guest multi-sharing is tolerated */ + if (WARN_ON(!page->host_share_guest_count)) { + ret =3D -EPERM; + goto unlock; + } + } + break; default: ret =3D -EPERM; goto unlock; } =20 - WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys, + WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, size, phys, pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED), &vcpu->vcpu.arch.pkvm_memcache, 0)); - page->host_share_guest_count++; + __host_update_share_guest_count(phys, size, true); =20 unlock: guest_unlock_component(vm); diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 930b677eb9b0..00fd9a524bf7 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -361,7 +361,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u6= 4 addr, u64 size, return -EINVAL; =20 lockdep_assert_held_write(&kvm->mmu_lock); - ret =3D kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot); + ret =3D kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot); if (ret) { /* Is the gfn already mapped due to a racing vCPU? 
*/ if (ret =3D=3D -EPERM) --=20 2.48.1.711.g2feabab25a-goog From nobody Sun Feb 8 17:48:38 2026 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 3353C25EF86 for ; Fri, 28 Feb 2025 10:25:51 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740738354; cv=none; b=qwGDxk5tfuqUUKVaJtwR+vA9OrSarDBfavwZstd0Qyr8/qExzMWjXAV8s/twuG2eXUiOmBsRj0j1KL2088vzHLtxlo2et9swqaV4HL4pWxiEwRRx8sRuRsWWWPXnn6mOP4hf/Wsot98JM51vvf/doWXIcJBVpaiaCrGMFDGXATc= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740738354; c=relaxed/simple; bh=uLROo1wcyOnNFi0pyMSVqV6JKBZtaWvrYzPQupAqXtY=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=Ce/P+SCcgGVVIgwiWjpRSF3G5XNQaR9bT3QKi8RarHHmSkfrwpPjHeSTHrIlQ4tSYqLvm9+6gLmd3irmwvMS1EYhe6gjqI9nrtWyk/+ZyKBf4FTjvYa+dSrCSS7uLT9zBncS4UontluBI23MRV2mz4zr3fv8yKSKgtJcbveTr9M= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--vdonnefort.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=lcstE+8p; arc=none smtp.client-ip=209.85.128.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--vdonnefort.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="lcstE+8p" Received: by mail-wm1-f73.google.com with SMTP id 5b1f17b1804b1-43943bd1409so14153075e9.3 for ; Fri, 28 Feb 2025 02:25:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=google.com; s=20230601; t=1740738350; x=1741343150; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=lNjZen1QQXUUeBW1RgZ/1gOb9pJh3jnMuCmkQ3aRRLI=; b=lcstE+8phDxQsVWSJoBSQ+12HHx4baBfCFx0TznEgr7sVLiAViC18hcjHKTZxgxXff EWAV1/HRo5LNfKyVSSX3mBNMscZ2N2Gx8BPex8mVzWoY1fT6Y+DY7dHzy7HVreticKw2 M8hCcKFX68AMDDSzJeY0x2Gxowua8HFtR6il0aFft1tiXS955gg+VZNl68myRhcjrPp7 nMLSZz/fHkwjEqq8o6aAjDFoIng6WuEHI5CVPYlpXEVCIHdQXcINJ2E29zfEH6jL4Vhw pSa8Jv8c0M3N0ZaTXlOO01yq4is7WpUhPe8JfSexuifqk7Ig6bpIZU5WDKMPPWCzGTOl lJIA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1740738350; x=1741343150; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=lNjZen1QQXUUeBW1RgZ/1gOb9pJh3jnMuCmkQ3aRRLI=; b=f6ba3CvF+KtS8VRsxTNncHx8SydwZY2DDtZnY+Po8A8B+S15Uy5923wZj62IxJLR/K rZmCeyHMVdzJGaaSFtEgJ2QK74tV263Rr4tCsR/s3e71uJFmMYoFZVGx57R+LbegsPre 49tGHIMRwL/3bWlupbMLaOEwbS7Bc/lvlX+nbqU4iTcvkb4wh/UsA2X2yap2NpaK8STZ Y5JBKwpJgPklsw1aZsF22WLuAj050SD06TdGY7a+mFlIEucftJYBdN2dCFR/ZzH5hAjF 8iGHa1Mg0llAM1rZ6qxQ+Veu8oasgfO1QHBeQw9XATEC5p4I+CLpGRX8M8p+8LJB4ZyA q8gQ== X-Forwarded-Encrypted: i=1; AJvYcCWlSYGFV2MeVCEM1p6vRReNFrp52sqPjLj/AZ+gBFBbqcQULwxJ0ETtRJbka1vNmcSCgyeXZWrdi5f2N7s=@vger.kernel.org X-Gm-Message-State: AOJu0YxiEsKsi5H/lZwihmuA91GOmUZtGsS3hQP1849NrlUfBPpEQivW 9KU1TgqSHNaQhU5/FeZB1i1OwRxTYVmnJcBV5hqZOF9ARgMEyf5PcylxKSjWb//rJmfNPK24zlh iQSWQAiHA4maUyU7lpw== X-Google-Smtp-Source: AGHT+IF0kAKi15pqdhtBbCFw0IonwK7YIZ+9sqAU9W19r6arIA4DAzoRfW7wEDRfU4OM6g8jOwfkTj3WyP80npYl X-Received: from wmbfp18.prod.google.com ([2002:a05:600c:6992:b0:439:98a4:d14]) (user=vdonnefort job=prod-delivery.src-stubby-dispatcher) by 2002:a05:600c:5248:b0:439:84f8:60d7 with SMTP id 5b1f17b1804b1-43bac7a3101mr2277785e9.10.1740738350591; Fri, 28 Feb 2025 02:25:50 -0800 (PST) Date: Fri, 28 Feb 2025 
10:25:19 +0000 In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250228102530.1229089-1-vdonnefort@google.com> X-Mailer: git-send-email 2.48.1.711.g2feabab25a-goog Message-ID: <20250228102530.1229089-4-vdonnefort@google.com> Subject: [PATCH 2/9] KVM: arm64: Add range to __pkvm_host_share_guest() From: Vincent Donnefort To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" In preparation for supporting stage-2 huge mappings for np-guest. Add a nr_pages argument to the __pkvm_host_share_guest hypercall. Signed-off-by: Vincent Donnefort diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index 978f38c386ee..1abbab5e2ff8 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -39,7 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages); int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 2c37680d954c..e71601746935 100644 
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -249,7 +249,8 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) { DECLARE_REG(u64, pfn, host_ctxt, 1); DECLARE_REG(u64, gfn, host_ctxt, 2); - DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3); + DECLARE_REG(u64, nr_pages, host_ctxt, 3); + DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 4); struct pkvm_hyp_vcpu *hyp_vcpu; int ret =3D -EINVAL; =20 @@ -264,7 +265,7 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) if (ret) goto out; =20 - ret =3D __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot); + ret =3D __pkvm_host_share_guest(pfn, gfn, nr_pages, hyp_vcpu, prot); out: cpu_reg(host_ctxt, 1) =3D ret; } diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index a796e257c41f..2e49bd6e4ae8 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -60,6 +60,9 @@ static void hyp_unlock_component(void) hyp_spin_unlock(&pkvm_pgd_lock); } =20 +#define for_each_hyp_page(start, size, page) \ + for (page =3D hyp_phys_to_page(start); page < hyp_phys_to_page((start) + = (size)); page++) + static void *host_s2_zalloc_pages_exact(size_t size) { void *addr =3D hyp_alloc_pages(&host_s2_pool, get_order(size)); @@ -503,10 +506,25 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 si= ze, =20 static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm= _page_state state) { - phys_addr_t end =3D addr + size; + struct hyp_page *page; + + for_each_hyp_page(addr, size, page) + page->host_state =3D state; +} + +static void __host_update_share_guest_count(u64 phys, u64 size, bool inc) +{ + struct hyp_page *page; =20 - for (; addr < end; addr +=3D PAGE_SIZE) - hyp_phys_to_page(addr)->host_state =3D state; + for_each_hyp_page(phys, size, page) { + if (inc) { + WARN_ON(page->host_share_guest_count++ =3D=3D U32_MAX); + } else { + 
WARN_ON(!page->host_share_guest_count--); + if (!page->host_share_guest_count) + page->host_state =3D PKVM_PAGE_OWNED; + } + } } =20 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id) @@ -621,16 +639,16 @@ static int check_page_state_range(struct kvm_pgtable = *pgt, u64 addr, u64 size, static int __host_check_page_state_range(u64 addr, u64 size, enum pkvm_page_state state) { - u64 end =3D addr + size; + struct hyp_page *page; int ret; =20 - ret =3D check_range_allowed_memory(addr, end); + ret =3D check_range_allowed_memory(addr, addr + size); if (ret) return ret; =20 hyp_assert_lock_held(&host_mmu.lock); - for (; addr < end; addr +=3D PAGE_SIZE) { - if (hyp_phys_to_page(addr)->host_state !=3D state) + for_each_hyp_page(addr, size, page) { + if (page->host_state !=3D state) return -EPERM; } =20 @@ -680,10 +698,9 @@ static enum pkvm_page_state guest_get_page_state(kvm_p= te_t pte, u64 addr) return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte)); } =20 -static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 = addr, +static int __guest_check_page_state_range(struct pkvm_hyp_vm *vm, u64 addr, u64 size, enum pkvm_page_state state) { - struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); struct check_walk_data d =3D { .desired =3D state, .get_page_state =3D guest_get_page_state, @@ -890,49 +907,75 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages) return ret; } =20 -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +static int __guest_check_transition_size(u64 phys, u64 ipa, u64 nr_pages, = u64 *size) +{ + if (nr_pages =3D=3D 1) { + *size =3D PAGE_SIZE; + return 0; + } + + /* We solely support PMD_SIZE huge-pages */ + if (nr_pages !=3D (1 << (PMD_SHIFT - PAGE_SHIFT))) + return -EINVAL; + + if (!IS_ALIGNED(phys | ipa, PMD_SIZE)) + return -EINVAL; + + *size =3D PMD_SIZE; + return 0; +} + +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot 
prot) { struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); u64 phys =3D hyp_pfn_to_phys(pfn); u64 ipa =3D hyp_pfn_to_phys(gfn); struct hyp_page *page; + u64 size; int ret; =20 if (prot & ~KVM_PGTABLE_PROT_RWX) return -EINVAL; =20 - ret =3D check_range_allowed_memory(phys, phys + PAGE_SIZE); + ret =3D __guest_check_transition_size(phys, ipa, nr_pages, &size); if (ret) return ret; =20 host_lock_component(); guest_lock_component(vm); =20 - ret =3D __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE); + ret =3D __guest_check_page_state_range(vm, ipa, size, PKVM_NOPAGE); if (ret) goto unlock; =20 page =3D hyp_phys_to_page(phys); + ret =3D __host_check_page_state_range(phys, size, page->host_state); + if (ret) + goto unlock; + switch (page->host_state) { case PKVM_PAGE_OWNED: - WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OW= NED)); + WARN_ON(__host_set_page_state_range(phys, size, PKVM_PAGE_SHARED_OWNED)); break; case PKVM_PAGE_SHARED_OWNED: - if (page->host_share_guest_count) - break; - /* Only host to np-guest multi-sharing is tolerated */ - WARN_ON(1); - fallthrough; + for_each_hyp_page(phys, size, page) { + /* Only host to np-guest multi-sharing is tolerated */ + if (WARN_ON(!page->host_share_guest_count)) { + ret =3D -EPERM; + goto unlock; + } + } + break; default: ret =3D -EPERM; goto unlock; } =20 - WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys, + WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, size, phys, pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED), &vcpu->vcpu.arch.pkvm_memcache, 0)); - page->host_share_guest_count++; + __host_update_share_guest_count(phys, size, true); =20 unlock: guest_unlock_component(vm); diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 930b677eb9b0..00fd9a524bf7 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -361,7 +361,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u6= 4 addr, u64 size, return -EINVAL; =20 
lockdep_assert_held_write(&kvm->mmu_lock); - ret =3D kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot); + ret =3D kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot); if (ret) { /* Is the gfn already mapped due to a racing vCPU? */ if (ret =3D=3D -EPERM) --=20 2.48.1.711.g2feabab25a-goog From nobody Sun Feb 8 17:48:38 2026 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4D1F325CC64 for ; Fri, 28 Feb 2025 10:25:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740738357; cv=none; b=hioaK21n9JxWOoG7exkNYLMxdibaC1BUw0J6oDobqWtiXu1QmhNJODOnPFBqJni1c5HM0TzwyPsHWBudlhnPPWHebtjkHQwq6g/axEHadlOtn6wadIFQW5jNUWBIHmf4vB5sq4FOkdx6HYxarMin8tWCLqUcwOUpowkDWW4JI8c= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1740738357; c=relaxed/simple; bh=KYdo1+rj2avzQ2fdLRDheaWLEoeWhVviPVR/ZB7+cYw=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=g4QwqMr7AYv8Ds/jlQVl7adqQgEqOrd5S/v5SeSy1p8XlLBSnvFJ2zc4aZc++y9263Gf+HGxP+wsZalx+WLVi0A4WjG+oa1e93yZR3lz66Q9e/179ciCH22n+YuYZ55jHyn6ChHfZ98881B2/z3GT1XegLrL5J0XGuOTgY8Ogl0= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--vdonnefort.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=h8tpRELj; arc=none smtp.client-ip=209.85.128.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--vdonnefort.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit 
Date: Fri, 28 Feb 2025 10:25:20 +0000
Message-ID: <20250228102530.1229089-5-vdonnefort@google.com>
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Subject: [PATCH 3/9] KVM: arm64: Add a range to __pkvm_host_unshare_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests, add a
nr_pages argument to the __pkvm_host_unshare_guest hypercall. This
argument supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is
512 on a system with 4K pages).
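The "only two values" rule can be sketched as follows. This is a hypothetical re-implementation for illustration only; the series' actual `__guest_check_transition_size()` helper is not shown in this patch, and its exact alignment rules are an assumption here:

```c
#include <assert.h>
#include <stdint.h>

#define TOY_PAGE_SIZE 4096ULL
#define TOY_PMD_SIZE  (TOY_PAGE_SIZE * 512)	/* 2MiB with 4K pages */

/*
 * Accept only a single page or one whole PMD block, and (assumed here)
 * require both addresses to be aligned to the resulting size.
 */
static int toy_check_transition_size(uint64_t phys, uint64_t ipa,
				     uint64_t nr_pages, uint64_t *size)
{
	uint64_t sz = nr_pages * TOY_PAGE_SIZE;

	if (sz != TOY_PAGE_SIZE && sz != TOY_PMD_SIZE)
		return -22;	/* -EINVAL */
	if ((phys | ipa) & (sz - 1))
		return -22;

	*size = sz;
	return 0;
}
```

Restricting the range to exactly one page or one block keeps the hypervisor-side walk simple: the transition either touches a single PTE or a single PMD-level mapping, never an arbitrary span.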
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 1abbab5e2ff8..343569e4bdeb 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -41,7 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
			     enum kvm_pgtable_prot prot);
-int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e71601746935..7f22d104c1f1 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -274,6 +274,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;

@@ -284,7 +285,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 	if (!hyp_vm)
 		goto out;

-	ret = __pkvm_host_unshare_guest(gfn, hyp_vm);
+	ret = __pkvm_host_unshare_guest(gfn, nr_pages, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 2e49bd6e4ae8..ad45f5eaa1fd 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -984,13 +984,12 @@ int
__pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 	return ret;
 }

-static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa)
+static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa, u64 size)
 {
-	enum pkvm_page_state state;
 	struct hyp_page *page;
 	kvm_pte_t pte;
-	u64 phys;
 	s8 level;
+	u64 phys;
 	int ret;

 	ret = kvm_pgtable_get_leaf(&vm->pgt, ipa, &pte, &level);
@@ -998,51 +997,52 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
 		return ret;
 	if (!kvm_pte_valid(pte))
 		return -ENOENT;
-	if (level != KVM_PGTABLE_LAST_LEVEL)
+	if (kvm_granule_size(level) != size)
 		return -E2BIG;

-	state = guest_get_page_state(pte, ipa);
-	if (state != PKVM_PAGE_SHARED_BORROWED)
-		return -EPERM;
+	ret = __guest_check_page_state_range(vm, ipa, size, PKVM_PAGE_SHARED_BORROWED);
+	if (ret)
+		return ret;

 	phys = kvm_pte_to_phys(pte);
-	ret = check_range_allowed_memory(phys, phys + PAGE_SIZE);
+	ret = check_range_allowed_memory(phys, phys + size);
 	if (WARN_ON(ret))
 		return ret;

-	page = hyp_phys_to_page(phys);
-	if (page->host_state != PKVM_PAGE_SHARED_OWNED)
-		return -EPERM;
-	if (WARN_ON(!page->host_share_guest_count))
-		return -EINVAL;
+	for_each_hyp_page(phys, size, page) {
+		if (page->host_state != PKVM_PAGE_SHARED_OWNED)
+			return -EPERM;
+		if (WARN_ON(!page->host_share_guest_count))
+			return -EINVAL;
+	}

 	*__phys = phys;

 	return 0;
 }

-int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
 	u64 ipa = hyp_pfn_to_phys(gfn);
-	struct hyp_page *page;
-	u64 phys;
+	u64 size, phys;
 	int ret;

+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
 	host_lock_component();
 	guest_lock_component(vm);

-	ret = __check_host_shared_guest(vm, &phys, ipa);
+	ret = __check_host_shared_guest(vm, &phys,
ipa, size);
 	if (ret)
 		goto unlock;

-	ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, size);
 	if (ret)
 		goto unlock;

-	page = hyp_phys_to_page(phys);
-	page->host_share_guest_count--;
-	if (!page->host_share_guest_count)
-		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED));
+	__host_update_share_guest_count(phys, size, false);

 unlock:
 	guest_unlock_component(vm);
@@ -1062,7 +1062,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
 	host_lock_component();
 	guest_lock_component(vm);

-	ret = __check_host_shared_guest(vm, &phys, ipa);
+	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);

 	guest_unlock_component(vm);
 	host_unlock_component();

diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 00fd9a524bf7..b65fcf245fc9 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -385,7 +385,7 @@ int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)

 	lockdep_assert_held_write(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
 		if (WARN_ON(ret))
 			break;
 		rb_erase(&mapping->node, &pgt->pkvm_mappings);
-- 
2.48.1.711.g2feabab25a-goog
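The `kvm_granule_size(level) != size` check in the patch above ensures the existing leaf mapping exactly matches the requested range: a 4K request must hit a level-3 page, a 2M request a level-2 block. A minimal sketch of that arithmetic, assuming the arm64 4K-granule layout (9 bits of index per level, level 3 being the last), and not the kernel's actual `kvm_granule_size()` macro:

```c
#include <assert.h>
#include <stdint.h>

/*
 * With 4K pages, each level of lookup resolves 9 bits of address, and
 * a leaf installed at level l therefore maps 2^(12 + 9 * (3 - l))
 * bytes: 4KiB at level 3, 2MiB at level 2, 1GiB at level 1.
 */
static uint64_t toy_granule_size(int level)
{
	return 1ULL << (12 + 9 * (3 - level));
}
```

Comparing the granule size against the requested size (rather than insisting on the last level, as the old code did) is what lets a single hypercall validate either one page or one whole block mapping.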
Date: Fri, 28 Feb 2025 10:25:23 +0000
Message-ID: <20250228102530.1229089-8-vdonnefort@google.com>
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Subject: [PATCH 4/9] KVM: arm64: Add a range to __pkvm_host_wrprotect_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests, add a
nr_pages argument to the __pkvm_host_wrprotect_guest hypercall. This
argument supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is
512 on a system with 4K pages).

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 343569e4bdeb..ad6131033114 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);

 bool addr_is_memory(phys_addr_t phys);

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index
7f22d104c1f1..e13771a67827 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -314,6 +314,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;

@@ -324,7 +325,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 	if (!hyp_vm)
 		goto out;

-	ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm);
+	ret = __pkvm_host_wrprotect_guest(gfn, nr_pages, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index ad45f5eaa1fd..c273b9c46e11 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1051,7 +1051,7 @@ int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 	return ret;
 }

-static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
+static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa, u64 size)
 {
 	u64 phys;
 	int ret;
@@ -1062,7 +1062,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
 	host_lock_component();
 	guest_lock_component(vm);

-	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);
+	ret = __check_host_shared_guest(vm, &phys, ipa, size);

 	guest_unlock_component(vm);
 	host_unlock_component();
@@ -1082,7 +1082,7 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
 	if (prot & ~KVM_PGTABLE_PROT_RWX)
 		return -EINVAL;

-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
 	guest_unlock_component(vm);
@@ -1090,17 +1090,21 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct p
kvm_hyp_vcpu *vcpu, enum kvm_
 	return ret;
 }

-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
 	int ret;

 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;

-	assert_host_shared_guest(vm, ipa);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	assert_host_shared_guest(vm, ipa, size);
 	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, size);
 	guest_unlock_component(vm);

 	return ret;
@@ -1114,7 +1118,7 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;

-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
 	guest_unlock_component(vm);
@@ -1130,7 +1134,7 @@ int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;

-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
 	guest_unlock_component(vm);

diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index b65fcf245fc9..3ea92bb79e8c 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -404,7 +404,7 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)

 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
 		if (WARN_ON(ret))
 			break;
 	}
-- 
2.48.1.711.g2feabab25a-goog
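Write-protecting a range rather than a single page, as the patch above enables, amounts to clearing write permission on every entry the range covers. The toy model below illustrates that effect on a flat array of fake PTEs; the bit position and the array layout are stand-ins for illustration, while the real `kvm_pgtable_stage2_wrprotect()` walks the stage-2 page table and handles block mappings directly:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TOY_PAGE_SIZE 4096ULL
#define TOY_PTE_WRITE (1ULL << 7)	/* stand-in for a write-permission bit */

/*
 * Toy wrprotect over [ipa, ipa + size): clear the write bit on each
 * page-sized entry covered by the range.
 */
static void toy_wrprotect(uint64_t *ptes, size_t nr_ptes,
			  uint64_t ipa, uint64_t size)
{
	for (uint64_t off = 0; off < size; off += TOY_PAGE_SIZE) {
		size_t idx = (ipa + off) / TOY_PAGE_SIZE;

		if (idx < nr_ptes)
			ptes[idx] &= ~TOY_PTE_WRITE;
	}
}
```

Passing a PMD-sized range in one hypercall, instead of looping one page at a time from the host, is what makes dirty logging and similar write-protection paths cheap for huge mappings.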
<20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-7-vdonnefort@google.com>
Subject: [PATCH 4/9] KVM: arm64: Add a range to __pkvm_host_wrprotect_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort
Content-Type: text/plain; charset="utf-8"

In preparation for supporting stage-2 huge mappings for np-guests, add a
nr_pages argument to the __pkvm_host_wrprotect_guest() hypercall. This
argument supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is,
512 on a system with 4K pages).

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 343569e4bdeb..ad6131033114 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 
 bool addr_is_memory(phys_addr_t phys);

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 7f22d104c1f1..e13771a67827 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -314,6 +314,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;
 
@@ -324,7 +325,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 	if (!hyp_vm)
 		goto out;
 
-	ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm);
+	ret = __pkvm_host_wrprotect_guest(gfn, nr_pages, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index ad45f5eaa1fd..c273b9c46e11 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1051,7 +1051,7 @@ int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 	return ret;
 }
 
-static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
+static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa, u64 size)
 {
 	u64 phys;
 	int ret;
@@ -1062,7 +1062,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
 	host_lock_component();
 	guest_lock_component(vm);
 
-	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);
+	ret = __check_host_shared_guest(vm, &phys, ipa, size);
 
 	guest_unlock_component(vm);
 	host_unlock_component();
@@ -1082,7 +1082,7 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
 	if (prot & ~KVM_PGTABLE_PROT_RWX)
 		return -EINVAL;
 
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
 	guest_unlock_component(vm);
@@ -1090,17 +1090,21 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
 	return ret;
 }
 
-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
 	int ret;
 
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	assert_host_shared_guest(vm, ipa, size);
 	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, size);
 	guest_unlock_component(vm);
 
 	return ret;
@@ -1114,7 +1118,7 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
 	guest_unlock_component(vm);
@@ -1130,7 +1134,7 @@ int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
 	guest_unlock_component(vm);

diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index b65fcf245fc9..3ea92bb79e8c 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -404,7 +404,7 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
 		if (WARN_ON(ret))
 			break;
 	}
-- 
2.48.1.711.g2feabab25a-goog

From nobody Sun Feb 8 17:48:38 2026
Date: Fri, 28 Feb 2025 10:25:24 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-9-vdonnefort@google.com>
Subject: [PATCH 5/9] KVM: arm64: Add a range to __pkvm_host_test_clear_young_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort
Content-Type: text/plain; charset="utf-8"

In preparation for supporting stage-2 huge mappings for np-guest.
Add a nr_pages argument to the __pkvm_host_test_clear_young_guest()
hypercall. This range supports only two values: 1 or PMD_SIZE / PAGE_SIZE
(that is, 512 on a system with 4K pages).

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index ad6131033114..0c88c92fc3a2 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 
 bool addr_is_memory(phys_addr_t phys);

diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e13771a67827..a6353aacc36c 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -335,7 +335,8 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
-	DECLARE_REG(bool, mkold, host_ctxt, 3);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
+	DECLARE_REG(bool, mkold, host_ctxt, 4);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;
 
@@ -346,7 +347,7 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 	if (!hyp_vm)
 		goto out;
 
-	ret = __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm);
+	ret = __pkvm_host_test_clear_young_guest(gfn, nr_pages, mkold, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index c273b9c46e11..25944d3f8203 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1110,17 +1110,21 @@ int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 	return ret;
 }
 
-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
 	int ret;
 
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	assert_host_shared_guest(vm, ipa, size);
 	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
+	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, size, mkold);
 	guest_unlock_component(vm);
 
 	return ret;

diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 3ea92bb79e8c..2eb1cc30124e 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -434,7 +434,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   mkold);
+					   1, mkold);
 
 	return young;
 }
-- 
2.48.1.711.g2feabab25a-goog
From nobody Sun Feb 8 17:48:38 2026
Date: Fri, 28 Feb 2025 10:25:26 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-11-vdonnefort@google.com>
Subject: [PATCH 6/9] KVM: arm64: Convert pkvm_mappings to interval tree
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort
Content-Type: text/plain; charset="utf-8"

From: Quentin Perret

In preparation for supporting stage-2 huge mappings for np-guest, let's
convert pgt.pkvm_mappings to an interval tree.

No functional change intended.
Suggested-by: Vincent Donnefort Signed-off-by: Quentin Perret Signed-off-by: Vincent Donnefort diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/= kvm_pgtable.h index 6b9d274052c7..1b43bcd2a679 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -413,7 +413,7 @@ static inline bool kvm_pgtable_walk_lock_held(void) */ struct kvm_pgtable { union { - struct rb_root pkvm_mappings; + struct rb_root_cached pkvm_mappings; struct { u32 ia_bits; s8 start_level; diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm= _pkvm.h index eb65f12e81d9..f0d52efb858e 100644 --- a/arch/arm64/include/asm/kvm_pkvm.h +++ b/arch/arm64/include/asm/kvm_pkvm.h @@ -166,6 +166,7 @@ struct pkvm_mapping { struct rb_node node; u64 gfn; u64 pfn; + u64 __subtree_last; /* Internal member for interval tree */ }; =20 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *m= mu, diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 2eb1cc30124e..da637c565ac9 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -5,6 +5,7 @@ */ =20 #include +#include #include #include #include @@ -270,80 +271,63 @@ static int __init finalize_pkvm(void) } device_initcall_sync(finalize_pkvm); =20 -static int cmp_mappings(struct rb_node *node, const struct rb_node *parent) +static u64 __pkvm_mapping_start(struct pkvm_mapping *m) { - struct pkvm_mapping *a =3D rb_entry(node, struct pkvm_mapping, node); - struct pkvm_mapping *b =3D rb_entry(parent, struct pkvm_mapping, node); - - if (a->gfn < b->gfn) - return -1; - if (a->gfn > b->gfn) - return 1; - return 0; + return m->gfn * PAGE_SIZE; } =20 -static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 g= fn) +static u64 __pkvm_mapping_end(struct pkvm_mapping *m) { - struct rb_node *node =3D root->rb_node, *prev =3D NULL; - struct pkvm_mapping *mapping; - - while (node) { - mapping =3D rb_entry(node, struct pkvm_mapping, node); - if 
(mapping->gfn =3D=3D gfn) - return node; - prev =3D node; - node =3D (gfn < mapping->gfn) ? node->rb_left : node->rb_right; - } - - return prev; + return (m->gfn + 1) * PAGE_SIZE - 1; } =20 -/* - * __tmp is updated to rb_next(__tmp) *before* entering the body of the lo= op to allow freeing - * of __map inline. - */ +INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last, + __pkvm_mapping_start, __pkvm_mapping_end, static, + pkvm_mapping); + #define for_each_mapping_in_range_safe(__pgt, __start, __end, __map) \ - for (struct rb_node *__tmp =3D find_first_mapping_node(&(__pgt)->pkvm_map= pings, \ - ((__start) >> PAGE_SHIFT)); \ + for (struct pkvm_mapping *__tmp =3D pkvm_mapping_iter_first(&(__pgt)->pkv= m_mappings, \ + __start, __end - 1); \ __tmp && ({ \ - __map =3D rb_entry(__tmp, struct pkvm_mapping, node); \ - __tmp =3D rb_next(__tmp); \ + __map =3D __tmp; \ + __tmp =3D pkvm_mapping_iter_next(__map, __start, __end - 1); \ true; \ }); \ - ) \ - if (__map->gfn < ((__start) >> PAGE_SHIFT)) \ - continue; \ - else if (__map->gfn >=3D ((__end) >> PAGE_SHIFT)) \ - break; \ - else + ) =20 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *m= mu, struct kvm_pgtable_mm_ops *mm_ops) { - pgt->pkvm_mappings =3D RB_ROOT; + pgt->pkvm_mappings =3D RB_ROOT_CACHED; pgt->mmu =3D mmu; =20 return 0; } =20 -void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt) +static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start,= u64 end) { struct kvm *kvm =3D kvm_s2_mmu_to_kvm(pgt->mmu); pkvm_handle_t handle =3D kvm->arch.pkvm.handle; struct pkvm_mapping *mapping; - struct rb_node *node; + int ret; =20 if (!handle) - return; + return 0; =20 - node =3D rb_first(&pgt->pkvm_mappings); - while (node) { - mapping =3D rb_entry(node, struct pkvm_mapping, node); - kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn); - node =3D rb_next(node); - rb_erase(&mapping->node, &pgt->pkvm_mappings); + for_each_mapping_in_range_safe(pgt, 
start, end, mapping) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		if (WARN_ON(ret))
+			return ret;
+		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
 		kfree(mapping);
 	}
+
+	return 0;
+}
+
+void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+{
+	__pkvm_pgtable_stage2_unmap(pgt, 0, ~(0ULL));
 }
 
 int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -371,28 +355,16 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
-	WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm_mappings, cmp_mappings));
+	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 
 	return ret;
 }
 
 int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
-	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
-	pkvm_handle_t handle = kvm->arch.pkvm.handle;
-	struct pkvm_mapping *mapping;
-	int ret = 0;
+	lockdep_assert_held_write(&kvm_s2_mmu_to_kvm(pgt->mmu)->mmu_lock);
 
-	lockdep_assert_held_write(&kvm->mmu_lock);
-	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
-		if (WARN_ON(ret))
-			break;
-		rb_erase(&mapping->node, &pgt->pkvm_mappings);
-		kfree(mapping);
-	}
-
-	return ret;
+	return __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
 }
 
 int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
-- 
2.48.1.711.g2feabab25a-goog

From nobody Sun Feb 8 17:48:38 2026
Date: Fri, 28 Feb 2025 10:25:27 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-12-vdonnefort@google.com>
Subject: [PATCH 7/9] KVM: arm64: Add a range to pkvm_mappings
From: Vincent Donnefort
To: 
maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kernel-team@android.com, Vincent Donnefort

From: Quentin Perret

In preparation for supporting stage-2 huge mappings for np-guests, add an
nr_pages member to pkvm_mappings so that EL1 can track the size of the
stage-2 mapping.

Signed-off-by: Quentin Perret
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index f0d52efb858e..0e944a754b96 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -166,6 +166,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 nr_pages;
 	u64 __subtree_last; /* Internal member for interval tree */
 };
 
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index da637c565ac9..9c9833f27fe3 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -278,7 +278,7 @@ static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 
 static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	return (m->gfn + 1) * PAGE_SIZE - 1;
+	return (m->gfn + m->nr_pages) * PAGE_SIZE - 1;
 }
 
 INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
@@ -315,7 +315,8 @@ static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 e
 		return 0;
 
 	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			return ret;
 		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
@@ -345,16 +346,32 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
	return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot);
-	if (ret) {
-		/* Is the gfn already mapped due to a racing vCPU? */
-		if (ret == -EPERM)
+
+	/*
+	 * Calling stage2_map() on top of existing mappings is either happening because of a race
+	 * with another vCPU, or because we're changing between page and block mappings. As per
+	 * user_mem_abort(), same-size permission faults are handled in the relax_perms() path.
+	 */
+	mapping = pkvm_mapping_iter_first(&pgt->pkvm_mappings, addr, addr + size - 1);
+	if (mapping) {
+		if (size == (mapping->nr_pages * PAGE_SIZE))
 			return -EAGAIN;
+
+		/* Remove _any_ pkvm_mapping overlapping with the range, bigger or smaller. */
+		ret = __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
+		if (ret)
+			return ret;
+		mapping = NULL;
 	}
 
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, size / PAGE_SIZE, prot);
+	if (WARN_ON(ret))
+		return ret;
+
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
+	mapping->nr_pages = size / PAGE_SIZE;
 	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 
 	return ret;
@@ -376,7 +393,8 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			break;
 	}
@@ -391,7 +409,8 @@ int pkvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
-		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn),
+					  PAGE_SIZE * mapping->nr_pages);
 
 	return 0;
 }
@@
-406,7 +425,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   1, mkold);
+					   mapping->nr_pages, mkold);
 
 	return young;
 }
-- 
2.48.1.711.g2feabab25a-goog

From nobody Sun Feb 8 17:48:38 2026 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) (using
TLSv1.2)
Date: Fri, 28 Feb 2025 10:25:29 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-14-vdonnefort@google.com>
Subject: [PATCH 8/9] KVM: arm64: Stage-2 huge mappings for np-guests
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kernel-team@android.com, Vincent Donnefort

Now that np-guest hypercalls with ranges are supported, we can let the
hypervisor install block mappings whenever the stage-1 allows it, that is,
when backed by either Hugetlbfs or THPs. The size of those block mappings
is limited to PMD_SIZE.

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 25944d3f8203..271893eff021 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -167,7 +167,7 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)
 static bool guest_stage2_force_pte_cb(u64 addr, u64 end,
				      enum kvm_pgtable_prot prot)
 {
-	return true;
+	return false;
 }
 
 static void *guest_s2_zalloc_pages_exact(size_t size)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1f55b0c7b11d..3143f3b52c93 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1525,7 +1525,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * logging_active is guaranteed to never be true for VM_PFNMAP
 	 * memslots.
	 */
-	if (logging_active || is_protected_kvm_enabled()) {
+	if (logging_active) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
@@ -1535,7 +1535,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
-		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
+		if (is_protected_kvm_enabled() ||
+		    fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
 			break;
 		fallthrough;
 #endif
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 9c9833f27fe3..b40bcdb1814d 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -342,7 +342,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	u64 pfn = phys >> PAGE_SHIFT;
 	int ret;
 
-	if (size != PAGE_SIZE)
+	if (size != PAGE_SIZE && size != PMD_SIZE)
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-- 
2.48.1.711.g2feabab25a-goog

From nobody Sun Feb 8 17:48:38 2026
Date: Fri, 28 Feb 2025 10:25:30 +0000
In-Reply-To: <20250228102530.1229089-1-vdonnefort@google.com>
References: <20250228102530.1229089-1-vdonnefort@google.com>
Message-ID: <20250228102530.1229089-15-vdonnefort@google.com>
Subject: [PATCH 9/9] KVM: arm64: np-guest CMOs with PMD_SIZE fixmap
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kernel-team@android.com, Vincent Donnefort

With the introduction of stage-2 huge mappings in the pKVM hypervisor,
guest-page CMOs must now operate on PMD_SIZE ranges.
The fixmap only supports PAGE_SIZE, and iterating over a huge page is
time consuming (mostly because of the TLBI in hyp_fixmap_unmap()), which
is a problem for EL2 latency.

Introduce a shared PMD_SIZE fixmap (hyp_fixblock_map()/hyp_fixblock_unmap())
to improve guest-page CMOs when stage-2 huge mappings are installed.

On a Pixel 6, the iterative solution resulted in a latency of ~700us,
while the PMD_SIZE fixmap reduces it to ~100us.

Because of the horrendous private range allocation that would be
necessary, this is disabled for 64KiB-page systems.

Suggested-by: Quentin Perret
Signed-off-by: Vincent Donnefort
Signed-off-by: Quentin Perret

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 1b43bcd2a679..2888b5d03757 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -59,6 +59,11 @@ typedef u64 kvm_pte_t;
 
 #define KVM_PHYS_INVALID (-1ULL)
 
+#define KVM_PTE_TYPE			BIT(1)
+#define KVM_PTE_TYPE_BLOCK		0
+#define KVM_PTE_TYPE_PAGE		1
+#define KVM_PTE_TYPE_TABLE		1
+
 #define KVM_PTE_LEAF_ATTR_LO		GENMASK(11, 2)
 
 #define KVM_PTE_LEAF_ATTR_LO_S1_ATTRIDX	GENMASK(4, 2)
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 230e4f2527de..b0c72bc2d5ba 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -13,9 +13,11 @@
 extern struct kvm_pgtable pkvm_pgtable;
 extern hyp_spinlock_t pkvm_pgd_lock;
 
-int hyp_create_pcpu_fixmap(void);
+int hyp_create_fixmap(void);
 void *hyp_fixmap_map(phys_addr_t phys);
 void hyp_fixmap_unmap(void);
+void *hyp_fixblock_map(phys_addr_t phys);
+void hyp_fixblock_unmap(void);
 
 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 271893eff021..d27ce31370aa 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -220,25 +220,64 @@ static
void guest_s2_put_page(void *addr) hyp_put_page(¤t_vm->pool, addr); } =20 +static void *__fixmap_guest_page(void *va, size_t *size) +{ + if (IS_ALIGNED(*size, PMD_SIZE)) { + void *addr =3D hyp_fixblock_map(__hyp_pa(va)); + + if (addr) + return addr; + + *size =3D PAGE_SIZE; + } + + if (IS_ALIGNED(*size, PAGE_SIZE)) + return hyp_fixmap_map(__hyp_pa(va)); + + WARN_ON(1); + + return NULL; +} + +static void __fixunmap_guest_page(size_t size) +{ + switch (size) { + case PAGE_SIZE: + hyp_fixmap_unmap(); + break; + case PMD_SIZE: + hyp_fixblock_unmap(); + break; + default: + WARN_ON(1); + } +} + static void clean_dcache_guest_page(void *va, size_t size) { while (size) { - __clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), - PAGE_SIZE); - hyp_fixmap_unmap(); - va +=3D PAGE_SIZE; - size -=3D PAGE_SIZE; + size_t fixmap_size =3D size =3D=3D PMD_SIZE ? size : PAGE_SIZE; + void *addr =3D __fixmap_guest_page(va, &fixmap_size); + + __clean_dcache_guest_page(addr, fixmap_size); + __fixunmap_guest_page(fixmap_size); + + size -=3D fixmap_size; + va +=3D fixmap_size; } } =20 static void invalidate_icache_guest_page(void *va, size_t size) { while (size) { - __invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), - PAGE_SIZE); - hyp_fixmap_unmap(); - va +=3D PAGE_SIZE; - size -=3D PAGE_SIZE; + size_t fixmap_size =3D size =3D=3D PMD_SIZE ? 
size : PAGE_SIZE; + void *addr =3D __fixmap_guest_page(va, &fixmap_size); + + __invalidate_icache_guest_page(addr, fixmap_size); + __fixunmap_guest_page(fixmap_size); + + size -=3D fixmap_size; + va +=3D fixmap_size; } } =20 diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c index f41c7440b34b..e3b1bece8504 100644 --- a/arch/arm64/kvm/hyp/nvhe/mm.c +++ b/arch/arm64/kvm/hyp/nvhe/mm.c @@ -229,9 +229,8 @@ int hyp_map_vectors(void) return 0; } =20 -void *hyp_fixmap_map(phys_addr_t phys) +static void *fixmap_map_slot(struct hyp_fixmap_slot *slot, phys_addr_t phy= s) { - struct hyp_fixmap_slot *slot =3D this_cpu_ptr(&fixmap_slots); kvm_pte_t pte, *ptep =3D slot->ptep; =20 pte =3D *ptep; @@ -243,10 +242,21 @@ void *hyp_fixmap_map(phys_addr_t phys) return (void *)slot->addr; } =20 +void *hyp_fixmap_map(phys_addr_t phys) +{ + return fixmap_map_slot(this_cpu_ptr(&fixmap_slots), phys); +} + static void fixmap_clear_slot(struct hyp_fixmap_slot *slot) { kvm_pte_t *ptep =3D slot->ptep; u64 addr =3D slot->addr; + u32 level; + + if (FIELD_GET(KVM_PTE_TYPE, *ptep) =3D=3D KVM_PTE_TYPE_PAGE) + level =3D KVM_PGTABLE_LAST_LEVEL; + else + level =3D KVM_PGTABLE_LAST_LEVEL - 1; /* create_fixblock() guarantees PM= D level */ =20 WRITE_ONCE(*ptep, *ptep & ~KVM_PTE_VALID); =20 @@ -260,7 +270,7 @@ static void fixmap_clear_slot(struct hyp_fixmap_slot *s= lot) * https://lore.kernel.org/kvm/20221017115209.2099-1-will@kernel.org/T/#m= f10dfbaf1eaef9274c581b81c53758918c1d0f03 */ dsb(ishst); - __tlbi_level(vale2is, __TLBI_VADDR(addr, 0), KVM_PGTABLE_LAST_LEVEL); + __tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level); dsb(ish); isb(); } @@ -273,9 +283,9 @@ void hyp_fixmap_unmap(void) static int __create_fixmap_slot_cb(const struct kvm_pgtable_visit_ctx *ctx, enum kvm_pgtable_walk_flags visit) { - struct hyp_fixmap_slot *slot =3D per_cpu_ptr(&fixmap_slots, (u64)ctx->arg= ); + struct hyp_fixmap_slot *slot =3D (struct hyp_fixmap_slot *)ctx->arg; =20 - if (!kvm_pte_valid(ctx->old) 
|| ctx->level !=3D KVM_PGTABLE_LAST_LEVEL) + if (!kvm_pte_valid(ctx->old) || (ctx->end - ctx->start) !=3D kvm_granule_= size(ctx->level)) return -EINVAL; =20 slot->addr =3D ctx->addr; @@ -296,13 +306,73 @@ static int create_fixmap_slot(u64 addr, u64 cpu) struct kvm_pgtable_walker walker =3D { .cb =3D __create_fixmap_slot_cb, .flags =3D KVM_PGTABLE_WALK_LEAF, - .arg =3D (void *)cpu, + .arg =3D (void *)per_cpu_ptr(&fixmap_slots, cpu), }; =20 return kvm_pgtable_walk(&pkvm_pgtable, addr, PAGE_SIZE, &walker); } =20 -int hyp_create_pcpu_fixmap(void) +#ifndef CONFIG_ARM64_64K_PAGES +static struct hyp_fixmap_slot hyp_fixblock_slot; +static DEFINE_HYP_SPINLOCK(hyp_fixblock_lock); + +void *hyp_fixblock_map(phys_addr_t phys) +{ + hyp_spin_lock(&hyp_fixblock_lock); + return fixmap_map_slot(&hyp_fixblock_slot, phys); +} + +void hyp_fixblock_unmap(void) +{ + fixmap_clear_slot(&hyp_fixblock_slot); + hyp_spin_unlock(&hyp_fixblock_lock); +} + +static int create_fixblock(void) +{ + struct kvm_pgtable_walker walker =3D { + .cb =3D __create_fixmap_slot_cb, + .flags =3D KVM_PGTABLE_WALK_LEAF, + .arg =3D (void *)&hyp_fixblock_slot, + }; + unsigned long addr; + phys_addr_t phys; + int ret, i; + + /* Find a RAM phys address, PMD aligned */ + for (i =3D 0; i < hyp_memblock_nr; i++) { + phys =3D ALIGN(hyp_memory[i].base, PMD_SIZE); + if (phys + PMD_SIZE < (hyp_memory[i].base + hyp_memory[i].size)) + break; + } + + if (i >=3D hyp_memblock_nr) + return -EINVAL; + + hyp_spin_lock(&pkvm_pgd_lock); + addr =3D ALIGN(__io_map_base, PMD_SIZE); + ret =3D __pkvm_alloc_private_va_range(addr, PMD_SIZE); + if (ret) + goto unlock; + + ret =3D kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PMD_SIZE, phys, PAGE_HYP= ); + if (ret) + goto unlock; + + ret =3D kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker); + +unlock: + hyp_spin_unlock(&pkvm_pgd_lock); + + return ret; +} +#else +void hyp_fixblock_unmap(void) { WARN_ON(1); } +void *hyp_fixblock_map(phys_addr_t phys) { return NULL; } +static int 
create_fixblock(void) { return 0; } +#endif + +int hyp_create_fixmap(void) { unsigned long addr, i; int ret; @@ -322,7 +392,7 @@ int hyp_create_pcpu_fixmap(void) return ret; } =20 - return 0; + return create_fixblock(); } =20 int hyp_create_idmap(u32 hyp_va_bits) diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setu= p.c index d62bcb5634a2..fb69cf5e6ea8 100644 --- a/arch/arm64/kvm/hyp/nvhe/setup.c +++ b/arch/arm64/kvm/hyp/nvhe/setup.c @@ -295,7 +295,7 @@ void __noreturn __pkvm_init_finalise(void) if (ret) goto out; =20 - ret =3D hyp_create_pcpu_fixmap(); + ret =3D hyp_create_fixmap(); if (ret) goto out; =20 diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c index df5cc74a7dd0..c351b4abd5db 100644 --- a/arch/arm64/kvm/hyp/pgtable.c +++ b/arch/arm64/kvm/hyp/pgtable.c @@ -11,12 +11,6 @@ #include #include =20 - -#define KVM_PTE_TYPE BIT(1) -#define KVM_PTE_TYPE_BLOCK 0 -#define KVM_PTE_TYPE_PAGE 1 -#define KVM_PTE_TYPE_TABLE 1 - struct kvm_pgtable_walk_data { struct kvm_pgtable_walker *walker; =20 --=20 2.48.1.711.g2feabab25a-goog