From nobody Sun Feb 8 14:56:57 2026
Date: Mon, 7 Apr 2025 09:26:58 +0100
In-Reply-To: <20250407082706.1239603-1-vdonnefort@google.com>
References: <20250407082706.1239603-1-vdonnefort@google.com>
Message-ID: <20250407082706.1239603-2-vdonnefort@google.com>
Subject: [PATCH v3 1/9] KVM: arm64: Handle huge mappings for np-guest CMOs
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

clean_dcache_guest_page() and invalidate_icache_guest_page() accept a size
as an argument. But they also rely on the fixmap, which can only map a
single PAGE_SIZE page.

With the upcoming stage-2 huge mappings for pKVM np-guests, those callbacks
will get size > PAGE_SIZE. Loop the CMOs on a PAGE_SIZE basis until the
whole range is done.
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index f34f11c720d7..6a90e7687f1f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -219,14 +219,28 @@ static void guest_s2_put_page(void *addr)
 
 static void clean_dcache_guest_page(void *va, size_t size)
 {
-	__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	WARN_ON(!PAGE_ALIGNED(size));
+
+	while (size) {
+		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					  PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 static void invalidate_icache_guest_page(void *va, size_t size)
 {
-	__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	WARN_ON(!PAGE_ALIGNED(size));
+
+	while (size) {
+		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					       PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
-- 
2.49.0.504.g3bcea36a83-goog

From nobody Sun Feb 8 14:56:57 2026
Date: Mon, 7 Apr 2025 09:26:59 +0100
In-Reply-To: <20250407082706.1239603-1-vdonnefort@google.com>
References: <20250407082706.1239603-1-vdonnefort@google.com>
Message-ID: <20250407082706.1239603-3-vdonnefort@google.com>
Subject: [PATCH v3 2/9] KVM: arm64: Add a range to __pkvm_host_share_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests, add a
nr_pages argument to the __pkvm_host_share_guest hypercall. This argument
supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is, 512 on a
4K-pages system).

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index ea0a704da9b8..96f673f42e8e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -39,7 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
 int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
-int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 2c37680d954c..e71601746935 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -249,7 +249,8 @@ static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(u64, pfn, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
-	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
+	DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 4);
 	struct pkvm_hyp_vcpu *hyp_vcpu;
 	int ret = -EINVAL;
 
@@ -264,7 +265,7 @@ static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
 	if (ret)
 		goto out;
 
-	ret = __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot);
+	ret = __pkvm_host_share_guest(pfn, gfn, nr_pages, hyp_vcpu, prot);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
 }
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 6a90e7687f1f..48bc0370515f 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -60,6 +60,9 @@ static void hyp_unlock_component(void)
 	hyp_spin_unlock(&pkvm_pgd_lock);
 }
 
+#define for_each_hyp_page(start, size, page) \
+	for (page = hyp_phys_to_page(start); page < hyp_phys_to_page((start) + (size)); page++)
+
 static void *host_s2_zalloc_pages_exact(size_t size)
 {
 	void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
@@ -507,10 +510,10 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 
 static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_state state)
 {
-	phys_addr_t end = addr + size;
+	struct hyp_page *page;
 
-	for (; addr < end; addr += PAGE_SIZE)
-		hyp_phys_to_page(addr)->host_state = state;
+	for_each_hyp_page(addr, size, page)
+		page->host_state = state;
 }
 
 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
@@ -625,16 +628,16 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 static int __host_check_page_state_range(u64 addr, u64 size,
					 enum pkvm_page_state state)
 {
-	u64 end = addr + size;
+	struct hyp_page *page;
 	int ret;
 
-	ret = check_range_allowed_memory(addr, end);
+	ret = check_range_allowed_memory(addr, addr + size);
 	if (ret)
 		return ret;
 
 	hyp_assert_lock_held(&host_mmu.lock);
-	for (; addr < end; addr += PAGE_SIZE) {
-		if (hyp_phys_to_page(addr)->host_state != state)
+	for_each_hyp_page(addr, size, page) {
+		if (page->host_state != state)
 			return -EPERM;
 	}
 
@@ -684,10 +687,9 @@ static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte, u64 addr)
 	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
 }
 
-static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 addr,
+static int __guest_check_page_state_range(struct pkvm_hyp_vm *vm, u64 addr,
					  u64 size, enum pkvm_page_state state)
 {
-	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
 	struct check_walk_data d = {
 		.desired	= state,
 		.get_page_state	= guest_get_page_state,
@@ -894,49 +896,81 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
 	return ret;
 }
 
-int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
+static int __guest_check_transition_size(u64 phys, u64 ipa, u64 nr_pages, u64 *size)
+{
+	if (nr_pages == 1) {
+		*size = PAGE_SIZE;
+		return 0;
+	}
+
+	/* We solely support PMD_SIZE huge-pages */
+	if (nr_pages != (1 << (PMD_SHIFT - PAGE_SHIFT)))
+		return -EINVAL;
+
+	if (!IS_ALIGNED(phys | ipa, PMD_SIZE))
+		return -EINVAL;
+
+	*size = PMD_SIZE;
+	return 0;
+}
+
+int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
			    enum kvm_pgtable_prot prot)
 {
 	struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
 	u64 phys = hyp_pfn_to_phys(pfn);
 	u64 ipa = hyp_pfn_to_phys(gfn);
 	struct hyp_page *page;
+	u64 size;
 	int ret;
 
 	if (prot & ~KVM_PGTABLE_PROT_RWX)
 		return -EINVAL;
 
-	ret = check_range_allowed_memory(phys, phys + PAGE_SIZE);
+	ret = __guest_check_transition_size(phys, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	ret = check_range_allowed_memory(phys, phys + size);
 	if (ret)
 		return ret;
 
 	host_lock_component();
 	guest_lock_component(vm);
 
-	ret = __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE);
+	ret = __guest_check_page_state_range(vm, ipa, size, PKVM_NOPAGE);
 	if (ret)
 		goto unlock;
 
-	page = hyp_phys_to_page(phys);
-	switch (page->host_state) {
-	case PKVM_PAGE_OWNED:
-		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED));
-		break;
-	case PKVM_PAGE_SHARED_OWNED:
-		if (page->host_share_guest_count)
-			break;
-		/* Only host to np-guest multi-sharing is tolerated */
-		WARN_ON(1);
-		fallthrough;
-	default:
-		ret = -EPERM;
-		goto unlock;
+	for_each_hyp_page(phys, size, page) {
+		switch (page->host_state) {
+		case PKVM_PAGE_OWNED:
+			continue;
+		case PKVM_PAGE_SHARED_OWNED:
+			if (page->host_share_guest_count == U32_MAX) {
+				ret = -EBUSY;
+				goto unlock;
+			}
+
+			/* Only host to np-guest multi-sharing is tolerated */
+			if (page->host_share_guest_count)
+				continue;
+
+			fallthrough;
+		default:
+			ret = -EPERM;
+			goto unlock;
+		}
+	}
+
+	for_each_hyp_page(phys, size, page) {
+		page->host_state = PKVM_PAGE_SHARED_OWNED;
+		page->host_share_guest_count++;
 	}
 
-	WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys,
+	WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, size, phys,
				       pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED),
				       &vcpu->vcpu.arch.pkvm_memcache, 0));
-	page->host_share_guest_count++;
 
 unlock:
 	guest_unlock_component(vm);
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 0f89157d31fd..cad25357858f 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -364,7 +364,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot);
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot);
 	if (ret) {
 		/* Is the gfn already mapped due to a racing vCPU? */
 		if (ret == -EPERM)
-- 
2.49.0.504.g3bcea36a83-goog

From nobody Sun Feb 8 14:56:57 2026
Date: Mon, 7 Apr 2025 09:27:00 +0100
In-Reply-To: <20250407082706.1239603-1-vdonnefort@google.com>
References: <20250407082706.1239603-1-vdonnefort@google.com>
Message-ID: <20250407082706.1239603-4-vdonnefort@google.com>
Subject: [PATCH v3 3/9] KVM: arm64: Add a range to __pkvm_host_unshare_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests, add a
nr_pages argument to the __pkvm_host_unshare_guest hypercall. This argument
supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is, 512 on a
4K-pages system).
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 96f673f42e8e..33fcf8d54aa3 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -41,7 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
			    enum kvm_pgtable_prot prot);
-int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e71601746935..7f22d104c1f1 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -274,6 +274,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;
 
@@ -284,7 +285,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 	if (!hyp_vm)
 		goto out;
 
-	ret = __pkvm_host_unshare_guest(gfn, hyp_vm);
+	ret = __pkvm_host_unshare_guest(gfn, nr_pages, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 48bc0370515f..2aa4baf728eb 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -979,7 +979,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 	return ret;
 }
 
-static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa)
+static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa, u64 size)
 {
 	enum pkvm_page_state state;
 	struct hyp_page *page;
@@ -993,7 +993,7 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
 		return ret;
 	if (!kvm_pte_valid(pte))
 		return -ENOENT;
-	if (level != KVM_PGTABLE_LAST_LEVEL)
+	if (kvm_granule_size(level) != size)
 		return -E2BIG;
 
 	state = guest_get_page_state(pte, ipa);
@@ -1001,43 +1001,50 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
 		return -EPERM;
 
 	phys = kvm_pte_to_phys(pte);
-	ret = check_range_allowed_memory(phys, phys + PAGE_SIZE);
+	ret = check_range_allowed_memory(phys, phys + size);
 	if (WARN_ON(ret))
 		return ret;
 
-	page = hyp_phys_to_page(phys);
-	if (page->host_state != PKVM_PAGE_SHARED_OWNED)
-		return -EPERM;
-	if (WARN_ON(!page->host_share_guest_count))
-		return -EINVAL;
+	for_each_hyp_page(phys, size, page) {
+		if (page->host_state != PKVM_PAGE_SHARED_OWNED)
+			return -EPERM;
+		if (WARN_ON(!page->host_share_guest_count))
+			return -EINVAL;
+	}
 
 	*__phys = phys;
 
 	return 0;
 }
 
-int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
 	u64 ipa = hyp_pfn_to_phys(gfn);
 	struct hyp_page *page;
-	u64 phys;
+	u64 size, phys;
 	int ret;
 
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
 	host_lock_component();
 	guest_lock_component(vm);
 
-	ret = __check_host_shared_guest(vm, &phys, ipa);
+	ret = __check_host_shared_guest(vm, &phys, ipa, size);
 	if (ret)
 		goto unlock;
 
-	ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, size);
 	if (ret)
 		goto unlock;
 
-	page = hyp_phys_to_page(phys);
-	page->host_share_guest_count--;
-	if (!page->host_share_guest_count)
-		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED));
+	for_each_hyp_page(phys, size, page) {
+		/* __check_host_shared_guest() protects against underflow */
+		page->host_share_guest_count--;
+		if (!page->host_share_guest_count)
+			page->host_state = PKVM_PAGE_OWNED;
+	}
 
 unlock:
 	guest_unlock_component(vm);
@@ -1057,7 +1064,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
 	host_lock_component();
 	guest_lock_component(vm);
 
-	ret = __check_host_shared_guest(vm, &phys, ipa);
+	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);
 
 	guest_unlock_component(vm);
 	host_unlock_component();
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index cad25357858f..d533e898c6be 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -388,7 +388,7 @@ int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
 		if (WARN_ON(ret))
 			break;
 		rb_erase(&mapping->node, &pgt->pkvm_mappings);
-- 
2.49.0.504.g3bcea36a83-goog

From nobody Sun Feb 8 14:56:57 2026
b=h6PdkoExOUqT2O3op1On9vZ/hALUXYJHzcG/fkdaPeCpEXwx/EySKe2MueCfnkWM5eR97t3USdjs/+2cQgvYp1OHFdDirBPuPpT67UImtuQ86GW66VRPERrvvn9mVOugEByBMuiVyjHGq6Z7iq7olTj4Yng70thEDAjg7QXn7XE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1744014462; c=relaxed/simple; bh=zIVBaWUgLlQIaSOE1emD+vNKSY4jqWjMpBReVTzjbHM=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=kjnbC+BhKboHFNgJgwNdUMOECjgQj3xzvGlBBP0N29X6WgrtAS50qi/v3HeU3GhV8Zr51oebUkjwU0jhBfSSWtnOVWz1GcgokwteQkaSKUxXI/IIPJA/0BhH1wFADonzM+2koy7F5f9kiJitqBDBIQE7vuYeTSabH5zugOC8cNI= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--vdonnefort.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=acHTZZ8p; arc=none smtp.client-ip=209.85.128.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--vdonnefort.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="acHTZZ8p" Received: by mail-wm1-f73.google.com with SMTP id 5b1f17b1804b1-43d08915f61so22356035e9.2 for ; Mon, 07 Apr 2025 01:27:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1744014458; x=1744619258; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=nJ/jjb8x4qHGtCi1F2Spdei5y6Mc/3znj/Y5u8/slow=; b=acHTZZ8pGz51aB1WLlBGZabYRT2Ved1HAhoG9iwRadbY+OztGtuK20Rw//FjZ6Yeh3 LnDKBeq8mxDC6u0Xl1YJFOMMKc4+5SAI/ExUShCy7UkJCl5wxAzNVNNXkHHwdHJ1ux0f S7be2MR2msA8e2FE8ZaWWYyIcX1E4EifsetCRfXqoTIySa3uPSpXJ5lNPma430QaoKs6 nW/m4xuQjPiBLTBTn9cDZysz7RM+olNT/yfLI/NBh7qto7uBdInNJVFxppAidAytKsDl 
4rUbPsZwvH8ZJHGIrIWljAuTaN/Hz9unOv4r3vqMW9BMpHYh5uJ8QkqGa2o9m5tAntsz Q19Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1744014458; x=1744619258; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=nJ/jjb8x4qHGtCi1F2Spdei5y6Mc/3znj/Y5u8/slow=; b=gmfotXdAjksordMeIaQFpgqLMOeu7mmT4ywqcdIeHCRUkfdsTlL9nQgZwb4DqZwXoC WEfJcZSCPZYbMPzurL6HcSE5reRRX2JBXjSieSP9Jhsjh6UQ93inzymqGEZdgw1BcFS/ gy5G02fzcQsxa4wZ+dZfe+mS44H6esFw+v70vDBQtt9NDKptNhPACfWx2ijg4+SfQCNh 8BZZ+KsXLzLmAo9CGHjAlzQyo99HbxJQkj5Jc+w4MBc9As7D5sdpwFw6EIwClHd/D8YW BoZEpCdW+nG8Y6B6UfXa3elt45wLr2BpyRdwEHc9+Ko8qGi9dylKAt7W3BmFmgctl87q 5HjA== X-Forwarded-Encrypted: i=1; AJvYcCXPp+0+JuQiNeiEmFKd5klyOxRvePdVUMEa1dY0yXndlEInGXXONUf6NtNmS1QbnyS7AUkKz9wfR4TP1xk=@vger.kernel.org X-Gm-Message-State: AOJu0YzUOr6umBPFUED9D2nO05H+Vaaf9a0M1wIr96ndoL94oeV33Ed+ +Ibqnv7Gi0WPs3cj5a8SQ9ptziRcj9IZThFJ1yy3MooHkLGrG2nmiYV8CXY+MnKFEUdSnlToPAH 0tD1Y5JUlsC7f1SJJAg== X-Google-Smtp-Source: AGHT+IHP7fXZDXZO1SVUM/nAWffAl8j24dVEofbaGP6xim/maUgpuYSc9eBtkCECigSAtXktdS8fUDWaay4OhPcn X-Received: from wmqe6.prod.google.com ([2002:a05:600c:4e46:b0:43b:c914:a2d9]) (user=vdonnefort job=prod-delivery.src-stubby-dispatcher) by 2002:a05:600c:5249:b0:43d:585f:ebf5 with SMTP id 5b1f17b1804b1-43edf1dd307mr59921735e9.1.1744014458622; Mon, 07 Apr 2025 01:27:38 -0700 (PDT) Date: Mon, 7 Apr 2025 09:27:01 +0100 In-Reply-To: <20250407082706.1239603-1-vdonnefort@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250407082706.1239603-1-vdonnefort@google.com> X-Mailer: git-send-email 2.49.0.504.g3bcea36a83-goog Message-ID: <20250407082706.1239603-5-vdonnefort@google.com> Subject: [PATCH v3 4/9] KVM: arm64: Add a range to __pkvm_host_wrprotect_guest() From: Vincent Donnefort To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, 
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests, add a
nr_pages argument to the __pkvm_host_wrprotect_guest() hypercall. This
argument supports only two values: 1, or PMD_SIZE / PAGE_SIZE (that is,
512 on a system with 4K pages).

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 33fcf8d54aa3..3393a8ecf243 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 
 bool addr_is_memory(phys_addr_t phys);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 7f22d104c1f1..e13771a67827 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -314,6 +314,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;
 
@@ -324,7 +325,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 	if (!hyp_vm)
 		goto out;
 
-	ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm);
+	ret = __pkvm_host_wrprotect_guest(gfn, nr_pages, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 2aa4baf728eb..9929fd7e729b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1053,7 +1053,7 @@ int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 	return ret;
 }
 
-static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
+static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa, u64 size)
 {
 	u64 phys;
 	int ret;
@@ -1064,7 +1064,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
 	host_lock_component();
 	guest_lock_component(vm);
 
-	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);
+	ret = __check_host_shared_guest(vm, &phys, ipa, size);
 
 	guest_unlock_component(vm);
 	host_unlock_component();
@@ -1084,7 +1084,7 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
 	if (prot & ~KVM_PGTABLE_PROT_RWX)
 		return -EINVAL;
 
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
 	guest_unlock_component(vm);
@@ -1092,17 +1092,21 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
 	return ret;
 }
 
-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
 	int ret;
 
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	assert_host_shared_guest(vm, ipa, size);
 	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, size);
 	guest_unlock_component(vm);
 
 	return ret;
@@ -1116,7 +1120,7 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
 	guest_unlock_component(vm);
@@ -1132,7 +1136,7 @@ int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
 	guest_unlock_component(vm);
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index d533e898c6be..1483136df01f 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -407,7 +407,7 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
 		if (WARN_ON(ret))
 			break;
 	}
-- 
2.49.0.504.g3bcea36a83-goog

From nobody Sun Feb 8 14:56:57 2026
Date: Mon, 7 Apr 2025 09:27:02 +0100
In-Reply-To: <20250407082706.1239603-1-vdonnefort@google.com>
References: <20250407082706.1239603-1-vdonnefort@google.com>
Message-ID: <20250407082706.1239603-6-vdonnefort@google.com>
Subject: [PATCH v3 5/9] KVM:
 arm64: Add a range to __pkvm_host_test_clear_young_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests, add a
nr_pages argument to the __pkvm_host_test_clear_young_guest() hypercall.
This argument supports only two values: 1, or PMD_SIZE / PAGE_SIZE (that
is, 512 on a system with 4K pages).

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 3393a8ecf243..4404afb7ea2e 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 
 bool addr_is_memory(phys_addr_t phys);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index e13771a67827..a6353aacc36c 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -335,7 +335,8 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
-	DECLARE_REG(bool, mkold, host_ctxt, 3);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
+	DECLARE_REG(bool, mkold, host_ctxt, 4);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;
 
@@ -346,7 +347,7 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 	if (!hyp_vm)
 		goto out;
 
-	ret = __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm);
+	ret = __pkvm_host_test_clear_young_guest(gfn, nr_pages, mkold, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 9929fd7e729b..ad14b79a32e2 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1112,17 +1112,21 @@ int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 	return ret;
 }
 
-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
 	int ret;
 
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	assert_host_shared_guest(vm, ipa, size);
 	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
+	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, size, mkold);
 	guest_unlock_component(vm);
 
 	return ret;
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 1483136df01f..419902faaf69 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -437,7 +437,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   mkold);
+					   1, mkold);
 
 	return young;
 }
-- 
2.49.0.504.g3bcea36a83-goog

From nobody Sun Feb 8 14:56:57 2026
Date: Mon, 7 Apr 2025 09:27:03 +0100
In-Reply-To: <20250407082706.1239603-1-vdonnefort@google.com>
References: <20250407082706.1239603-1-vdonnefort@google.com>
Message-ID: <20250407082706.1239603-7-vdonnefort@google.com>
Subject: [PATCH v3 6/9] KVM: arm64: Convert pkvm_mappings to interval tree
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kernel-team@android.com, Vincent Donnefort

From: Quentin Perret

In preparation for supporting stage-2 huge mappings for np-guests, let's
convert pgt.pkvm_mappings to an interval tree. No functional change
intended.
Suggested-by: Vincent Donnefort
Signed-off-by: Quentin Perret
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 6b9d274052c7..1b43bcd2a679 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -413,7 +413,7 @@ static inline bool kvm_pgtable_walk_lock_held(void)
  */
 struct kvm_pgtable {
 	union {
-		struct rb_root		pkvm_mappings;
+		struct rb_root_cached	pkvm_mappings;
 		struct {
 			u32		ia_bits;
 			s8		start_level;
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index abd693ce5b93..5276e64f814e 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -167,6 +167,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 __subtree_last; /* Internal member for interval tree */
 };
 
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 419902faaf69..08fbe79dd1e4 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -5,6 +5,7 @@
  */
 
 #include
+#include
 #include
 #include
 #include
@@ -273,80 +274,63 @@ static int __init finalize_pkvm(void)
 }
 device_initcall_sync(finalize_pkvm);
 
-static int cmp_mappings(struct rb_node *node, const struct rb_node *parent)
+static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 {
-	struct pkvm_mapping *a = rb_entry(node, struct pkvm_mapping, node);
-	struct pkvm_mapping *b = rb_entry(parent, struct pkvm_mapping, node);
-
-	if (a->gfn < b->gfn)
-		return -1;
-	if (a->gfn > b->gfn)
-		return 1;
-	return 0;
+	return m->gfn * PAGE_SIZE;
 }
 
-static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 gfn)
+static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	struct rb_node *node = root->rb_node, *prev = NULL;
-	struct pkvm_mapping *mapping;
-
-	while (node) {
-		mapping = rb_entry(node, struct pkvm_mapping, node);
-		if (mapping->gfn == gfn)
-			return node;
-		prev = node;
-		node = (gfn < mapping->gfn) ? node->rb_left : node->rb_right;
-	}
-
-	return prev;
+	return (m->gfn + 1) * PAGE_SIZE - 1;
 }
 
-/*
- * __tmp is updated to rb_next(__tmp) *before* entering the body of the loop to allow freeing
- * of __map inline.
- */
+INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
+		     __pkvm_mapping_start, __pkvm_mapping_end, static,
+		     pkvm_mapping);
+
 #define for_each_mapping_in_range_safe(__pgt, __start, __end, __map)	\
-	for (struct rb_node *__tmp = find_first_mapping_node(&(__pgt)->pkvm_mappings,	\
-							     ((__start) >> PAGE_SHIFT));	\
+	for (struct pkvm_mapping *__tmp = pkvm_mapping_iter_first(&(__pgt)->pkvm_mappings,	\
+								  __start, __end - 1);	\
 	     __tmp && ({							\
-			__map = rb_entry(__tmp, struct pkvm_mapping, node);	\
-			__tmp = rb_next(__tmp);					\
+			__map = __tmp;						\
+			__tmp = pkvm_mapping_iter_next(__map, __start, __end - 1); \
 			true;							\
 		       });							\
-	    )									\
-		if (__map->gfn < ((__start) >> PAGE_SHIFT))			\
-			continue;						\
-		else if (__map->gfn >= ((__end) >> PAGE_SHIFT))			\
-			break;							\
-		else
+	    )
 
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
			     struct kvm_pgtable_mm_ops *mm_ops)
 {
-	pgt->pkvm_mappings	= RB_ROOT;
+	pgt->pkvm_mappings	= RB_ROOT_CACHED;
 	pgt->mmu		= mmu;
 
 	return 0;
 }
 
-void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 end)
 {
 	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
 	pkvm_handle_t handle = kvm->arch.pkvm.handle;
 	struct pkvm_mapping *mapping;
-	struct rb_node *node;
+	int ret;
 
 	if (!handle)
-		return;
+		return 0;
 
-	node = rb_first(&pgt->pkvm_mappings);
-	while (node) {
-		mapping = rb_entry(node, struct pkvm_mapping, node);
-		kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
-		node = rb_next(node);
-		rb_erase(&mapping->node, &pgt->pkvm_mappings);
+	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		if (WARN_ON(ret))
+			return ret;
+		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
 		kfree(mapping);
 	}
+
+	return 0;
+}
+
+void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+{
+	__pkvm_pgtable_stage2_unmap(pgt, 0, ~(0ULL));
 }
 
 int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -374,28 +358,16 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
-	WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm_mappings, cmp_mappings));
+	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 
 	return ret;
 }
 
 int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
-	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
-	pkvm_handle_t handle = kvm->arch.pkvm.handle;
-	struct pkvm_mapping *mapping;
-	int ret = 0;
+	lockdep_assert_held_write(&kvm_s2_mmu_to_kvm(pgt->mmu)->mmu_lock);
 
-	lockdep_assert_held_write(&kvm->mmu_lock);
-	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
-		if (WARN_ON(ret))
-			break;
-		rb_erase(&mapping->node, &pgt->pkvm_mappings);
-		kfree(mapping);
-	}
-
-	return ret;
+	return __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
 }
 
 int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
-- 
2.49.0.504.g3bcea36a83-goog

From nobody Sun Feb 8 14:56:57 2026
Date: Mon, 7 Apr 2025 09:27:04 +0100
In-Reply-To: <20250407082706.1239603-1-vdonnefort@google.com>
References: <20250407082706.1239603-1-vdonnefort@google.com>
Message-ID: <20250407082706.1239603-8-vdonnefort@google.com>
Subject: [PATCH v3 7/9] KVM: arm64: Add a range to pkvm_mappings
From: Vincent Donnefort
To:
 maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
 suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
 will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
 kernel-team@android.com, Vincent Donnefort

From: Quentin Perret

In preparation for supporting stage-2 huge mappings for np-guests, add a
nr_pages member to struct pkvm_mapping so EL1 can track the size of the
stage-2 mapping.

Signed-off-by: Quentin Perret
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index 5276e64f814e..135df9914cca 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -167,6 +167,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 nr_pages;
 	u64 __subtree_last; /* Internal member for interval tree */
 };
 
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 08fbe79dd1e4..97ce9ca68143 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -281,7 +281,7 @@ static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 
 static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	return (m->gfn + 1) * PAGE_SIZE - 1;
+	return (m->gfn + m->nr_pages) * PAGE_SIZE - 1;
 }
 
 INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
@@ -318,7 +318,8 @@ static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 e
 		return 0;
 
 	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			return ret;
 		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
@@ -348,16 +349,32 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot);
-	if (ret) {
-		/* Is the gfn already mapped due to a racing vCPU? */
-		if (ret == -EPERM)
+
+	/*
+	 * Calling stage2_map() on top of existing mappings is either happening because of a race
+	 * with another vCPU, or because we're changing between page and block mappings. As per
+	 * user_mem_abort(), same-size permission faults are handled in the relax_perms() path.
+	 */
+	mapping = pkvm_mapping_iter_first(&pgt->pkvm_mappings, addr, addr + size - 1);
+	if (mapping) {
+		if (size == (mapping->nr_pages * PAGE_SIZE))
 			return -EAGAIN;
+
+		/* Remove _any_ pkvm_mapping overlapping with the range, bigger or smaller. */
+		ret = __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
+		if (ret)
+			return ret;
+		mapping = NULL;
 	}
 
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, size / PAGE_SIZE, prot);
+	if (WARN_ON(ret))
+		return ret;
+
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
+	mapping->nr_pages = size / PAGE_SIZE;
 	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 
 	return ret;
@@ -379,7 +396,8 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			break;
 	}
@@ -394,7 +412,8 @@ int pkvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
-		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn),
+					  PAGE_SIZE * mapping->nr_pages);
 
 	return 0;
 }
@@ -409,7 +428,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   1, mkold);
+					   mapping->nr_pages, mkold);
 
 	return young;
 }
-- 
2.49.0.504.g3bcea36a83-goog
Date: Mon, 7 Apr 2025 09:27:05 +0100
Message-ID: <20250407082706.1239603-9-vdonnefort@google.com>
Subject: [PATCH v3 8/9] KVM: arm64: Stage-2 huge mappings for np-guests
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort
Content-Type: text/plain; charset="utf-8"

Now that np-guest hypercalls with a range are supported, we can let the
hypervisor install block mappings whenever the Stage-1 allows it, that is,
when backed by either Hugetlbfs or THPs. The size of those block mappings
is limited to PMD_SIZE.
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index ad14b79a32e2..da82d554ff88 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -167,7 +167,7 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)
 static bool guest_stage2_force_pte_cb(u64 addr, u64 end,
				      enum kvm_pgtable_prot prot)
 {
-	return true;
+	return false;
 }
 
 static void *guest_s2_zalloc_pages_exact(size_t size)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 2feb6c6b63af..b1479e607a9b 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1537,7 +1537,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
	 * logging_active is guaranteed to never be true for VM_PFNMAP
	 * memslots.
	 */
-	if (logging_active || is_protected_kvm_enabled()) {
+	if (logging_active) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
@@ -1547,7 +1547,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
-		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
+		if (!is_protected_kvm_enabled() &&
+		    fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
 			break;
 		fallthrough;
 #endif
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 97ce9ca68143..18dfaee3143e 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -345,7 +345,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	u64 pfn = phys >> PAGE_SHIFT;
 	int ret;
 
-	if (size != PAGE_SIZE)
+	if (size != PAGE_SIZE && size != PMD_SIZE)
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-- 
2.49.0.504.g3bcea36a83-goog

From nobody Sun Feb 8 14:56:57 2026
Date: Mon, 7 Apr 2025 09:27:06 +0100
Message-ID: <20250407082706.1239603-10-vdonnefort@google.com>
Subject: [PATCH v3 9/9] KVM: arm64: np-guest CMOs with PMD_SIZE fixmap
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort
Content-Type: text/plain; charset="utf-8"

With the introduction of stage-2 huge mappings in the pKVM hypervisor,
guest-page CMOs are now needed at PMD_SIZE granularity. The fixmap only
supports PAGE_SIZE, and iterating over a huge page is time consuming
(mostly due to the TLBI in hyp_fixmap_unmap()), which is a problem for
EL2 latency.

Introduce a shared PMD_SIZE fixmap (hyp_fixblock_map()/hyp_fixblock_unmap())
to improve guest-page CMOs when stage-2 huge mappings are installed. On a
Pixel 6, the iterative solution resulted in a latency of ~700us, while the
PMD_SIZE fixmap reduces it to ~100us.

Because of the horrendous private-range allocation that would otherwise be
necessary, this is disabled on systems using 64KiB pages.
Suggested-by: Quentin Perret
Signed-off-by: Vincent Donnefort
Signed-off-by: Quentin Perret

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 1b43bcd2a679..2888b5d03757 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -59,6 +59,11 @@ typedef u64 kvm_pte_t;
 
 #define KVM_PHYS_INVALID		(-1ULL)
 
+#define KVM_PTE_TYPE			BIT(1)
+#define KVM_PTE_TYPE_BLOCK		0
+#define KVM_PTE_TYPE_PAGE		1
+#define KVM_PTE_TYPE_TABLE		1
+
 #define KVM_PTE_LEAF_ATTR_LO		GENMASK(11, 2)
 
 #define KVM_PTE_LEAF_ATTR_LO_S1_ATTRIDX	GENMASK(4, 2)
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 230e4f2527de..b0c72bc2d5ba 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -13,9 +13,11 @@
 extern struct kvm_pgtable pkvm_pgtable;
 extern hyp_spinlock_t pkvm_pgd_lock;
 
-int hyp_create_pcpu_fixmap(void);
+int hyp_create_fixmap(void);
 void *hyp_fixmap_map(phys_addr_t phys);
 void hyp_fixmap_unmap(void);
+void *hyp_fixblock_map(phys_addr_t phys);
+void hyp_fixblock_unmap(void);
 
 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index da82d554ff88..858994f20741 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -220,16 +220,52 @@ static void guest_s2_put_page(void *addr)
 	hyp_put_page(&current_vm->pool, addr);
 }
 
+static void *__fixmap_guest_page(void *va, size_t *size)
+{
+	if (IS_ALIGNED(*size, PMD_SIZE)) {
+		void *addr = hyp_fixblock_map(__hyp_pa(va));
+
+		if (addr)
+			return addr;
+
+		*size = PAGE_SIZE;
+	}
+
+	if (IS_ALIGNED(*size, PAGE_SIZE))
+		return hyp_fixmap_map(__hyp_pa(va));
+
+	WARN_ON(1);
+
+	return NULL;
+}
+
+static void __fixunmap_guest_page(size_t size)
+{
+	switch (size) {
+	case PAGE_SIZE:
+		hyp_fixmap_unmap();
+		break;
+	case PMD_SIZE:
+		hyp_fixblock_unmap();
+		break;
+	default:
+		WARN_ON(1);
+	}
+}
+
 static void clean_dcache_guest_page(void *va, size_t size)
 {
 	WARN_ON(!PAGE_ALIGNED(size));
 
 	while (size) {
-		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					  PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
+		size_t fixmap_size = size == PMD_SIZE ? size : PAGE_SIZE;
+		void *addr = __fixmap_guest_page(va, &fixmap_size);
+
+		__clean_dcache_guest_page(addr, fixmap_size);
+		__fixunmap_guest_page(fixmap_size);
+
+		size -= fixmap_size;
+		va += fixmap_size;
 	}
 }
 
@@ -238,11 +274,14 @@ static void invalidate_icache_guest_page(void *va, size_t size)
 	WARN_ON(!PAGE_ALIGNED(size));
 
 	while (size) {
-		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					       PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
+		size_t fixmap_size = size == PMD_SIZE ? size : PAGE_SIZE;
+		void *addr = __fixmap_guest_page(va, &fixmap_size);
+
+		__invalidate_icache_guest_page(addr, fixmap_size);
+		__fixunmap_guest_page(fixmap_size);
+
+		size -= fixmap_size;
+		va += fixmap_size;
 	}
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index f41c7440b34b..e3b1bece8504 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -229,9 +229,8 @@ int hyp_map_vectors(void)
 	return 0;
 }
 
-void *hyp_fixmap_map(phys_addr_t phys)
+static void *fixmap_map_slot(struct hyp_fixmap_slot *slot, phys_addr_t phys)
 {
-	struct hyp_fixmap_slot *slot = this_cpu_ptr(&fixmap_slots);
 	kvm_pte_t pte, *ptep = slot->ptep;
 
 	pte = *ptep;
@@ -243,10 +242,21 @@ void *hyp_fixmap_map(phys_addr_t phys)
 	return (void *)slot->addr;
 }
 
+void *hyp_fixmap_map(phys_addr_t phys)
+{
+	return fixmap_map_slot(this_cpu_ptr(&fixmap_slots), phys);
+}
+
 static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
 {
 	kvm_pte_t *ptep = slot->ptep;
 	u64 addr = slot->addr;
+	u32 level;
+
+	if (FIELD_GET(KVM_PTE_TYPE, *ptep) == KVM_PTE_TYPE_PAGE)
+		level = KVM_PGTABLE_LAST_LEVEL;
+	else
+		level = KVM_PGTABLE_LAST_LEVEL - 1; /* create_fixblock() guarantees PMD level */
 
 	WRITE_ONCE(*ptep, *ptep & ~KVM_PTE_VALID);
 
@@ -260,7 +270,7 @@ static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
	 * https://lore.kernel.org/kvm/20221017115209.2099-1-will@kernel.org/T/#mf10dfbaf1eaef9274c581b81c53758918c1d0f03
	 */
 	dsb(ishst);
-	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), KVM_PGTABLE_LAST_LEVEL);
+	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
 	dsb(ish);
 	isb();
 }
@@ -273,9 +283,9 @@ void hyp_fixmap_unmap(void)
 static int __create_fixmap_slot_cb(const struct kvm_pgtable_visit_ctx *ctx,
				   enum kvm_pgtable_walk_flags visit)
 {
-	struct hyp_fixmap_slot *slot = per_cpu_ptr(&fixmap_slots, (u64)ctx->arg);
+	struct hyp_fixmap_slot *slot = (struct hyp_fixmap_slot *)ctx->arg;
 
-	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_LAST_LEVEL)
+	if (!kvm_pte_valid(ctx->old) || (ctx->end - ctx->start) != kvm_granule_size(ctx->level))
 		return -EINVAL;
 
 	slot->addr = ctx->addr;
@@ -296,13 +306,73 @@ static int create_fixmap_slot(u64 addr, u64 cpu)
 	struct kvm_pgtable_walker walker = {
 		.cb	= __create_fixmap_slot_cb,
 		.flags	= KVM_PGTABLE_WALK_LEAF,
-		.arg	= (void *)cpu,
+		.arg	= (void *)per_cpu_ptr(&fixmap_slots, cpu),
 	};
 
 	return kvm_pgtable_walk(&pkvm_pgtable, addr, PAGE_SIZE, &walker);
 }
 
-int hyp_create_pcpu_fixmap(void)
+#ifndef CONFIG_ARM64_64K_PAGES
+static struct hyp_fixmap_slot hyp_fixblock_slot;
+static DEFINE_HYP_SPINLOCK(hyp_fixblock_lock);
+
+void *hyp_fixblock_map(phys_addr_t phys)
+{
+	hyp_spin_lock(&hyp_fixblock_lock);
+	return fixmap_map_slot(&hyp_fixblock_slot, phys);
+}
+
+void hyp_fixblock_unmap(void)
+{
+	fixmap_clear_slot(&hyp_fixblock_slot);
+	hyp_spin_unlock(&hyp_fixblock_lock);
+}
+
+static int create_fixblock(void)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= __create_fixmap_slot_cb,
+		.flags	= KVM_PGTABLE_WALK_LEAF,
+		.arg	= (void *)&hyp_fixblock_slot,
+	};
+	unsigned long addr;
+	phys_addr_t phys;
+	int ret, i;
+
+	/* Find a RAM phys address, PMD aligned */
+	for (i = 0; i < hyp_memblock_nr; i++) {
+		phys = ALIGN(hyp_memory[i].base, PMD_SIZE);
+		if (phys + PMD_SIZE < (hyp_memory[i].base + hyp_memory[i].size))
+			break;
+	}
+
+	if (i >= hyp_memblock_nr)
+		return -EINVAL;
+
+	hyp_spin_lock(&pkvm_pgd_lock);
+	addr = ALIGN(__io_map_base, PMD_SIZE);
+	ret = __pkvm_alloc_private_va_range(addr, PMD_SIZE);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PMD_SIZE, phys, PAGE_HYP);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker);
+
+unlock:
+	hyp_spin_unlock(&pkvm_pgd_lock);
+
+	return ret;
+}
+#else
+void hyp_fixblock_unmap(void) { WARN_ON(1); }
+void *hyp_fixblock_map(phys_addr_t phys) { return NULL; }
+static int create_fixblock(void) { return 0; }
+#endif
+
+int hyp_create_fixmap(void)
 {
 	unsigned long addr, i;
 	int ret;
@@ -322,7 +392,7 @@ int hyp_create_pcpu_fixmap(void)
 		return ret;
 	}
 
-	return 0;
+	return create_fixblock();
 }
 
 int hyp_create_idmap(u32 hyp_va_bits)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index d62bcb5634a2..fb69cf5e6ea8 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -295,7 +295,7 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
-	ret = hyp_create_pcpu_fixmap();
+	ret = hyp_create_fixmap();
 	if (ret)
 		goto out;
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index df5cc74a7dd0..c351b4abd5db 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -11,12 +11,6 @@
 #include
 #include
 
-
-#define KVM_PTE_TYPE			BIT(1)
-#define KVM_PTE_TYPE_BLOCK		0
-#define KVM_PTE_TYPE_PAGE		1
-#define KVM_PTE_TYPE_TABLE		1
-
 struct kvm_pgtable_walk_data {
 	struct kvm_pgtable_walker *walker;
 
-- 
2.49.0.504.g3bcea36a83-goog