Date: Thu, 6 Mar 2025 11:00:30 +0000
In-Reply-To: <20250306110038.3733649-1-vdonnefort@google.com>
Message-ID: <20250306110038.3733649-2-vdonnefort@google.com>
Subject: [PATCH v2 1/9] KVM: arm64: Handle huge mappings for np-guest CMOs
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort

clean_dcache_guest_page() and invalidate_icache_guest_page() accept a
size as an argument. However, they also rely on the fixmap, which can
only map a single PAGE_SIZE page at a time. With the upcoming stage-2
huge mappings for pKVM np-guests, those callbacks will be given sizes
larger than PAGE_SIZE. Loop the CMOs on a PAGE_SIZE basis until the
whole range is done.

Signed-off-by: Vincent Donnefort
Reviewed-by: Quentin Perret

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 19c3c631708c..63968c7740c3 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -219,14 +219,30 @@ static void guest_s2_put_page(void *addr)
 
 static void clean_dcache_guest_page(void *va, size_t size)
 {
-	__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	if (WARN_ON(!PAGE_ALIGNED(size)))
+		return;
+
+	while (size) {
+		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					  PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 static void invalidate_icache_guest_page(void *va, size_t size)
 {
-	__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	if (WARN_ON(!PAGE_ALIGNED(size)))
+		return;
+
+	while (size) {
+		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					       PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
-- 
2.48.1.711.g2feabab25a-goog
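A note for readers unfamiliar with the EL2 fixmap constraint the patch above works around: because only one page can be temporarily mapped at a time, a larger-than-PAGE_SIZE CMO has to be split into per-page map/maintain/unmap steps. The stand-alone sketch below (plain userspace C, with hypothetical fixmap_map()/fixmap_unmap() stand-ins rather than the real hyp_fixmap_* API) only illustrates that chunking pattern and the alignment check, it is not the hypervisor code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Stand-ins for hyp_fixmap_map()/hyp_fixmap_unmap(): one page at a time. */
static void *fixmap_map(uintptr_t pa)
{
	printf("map   pa=0x%lx\n", (unsigned long)pa);
	return (void *)pa;
}

static void fixmap_unmap(void)
{
	printf("unmap\n");
}

static void clean_dcache_page(void *va)
{
	(void)va;	/* a real implementation would issue cache maintenance here */
}

static void clean_dcache_range(uintptr_t pa, size_t size)
{
	/* mirrors the WARN_ON(!PAGE_ALIGNED(size)) bail-out in the patch */
	assert(size % PAGE_SIZE == 0);

	while (size) {
		clean_dcache_page(fixmap_map(pa));	/* map one page at a time */
		fixmap_unmap();
		pa += PAGE_SIZE;
		size -= PAGE_SIZE;
	}
}

int main(void)
{
	/* a PMD-sized range (2MiB with 4K pages) becomes 512 map/clean/unmap steps */
	clean_dcache_range(0x80000000UL, 512 * PAGE_SIZE);
	return 0;
}
```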
Date: Thu, 6 Mar 2025 11:00:31 +0000
In-Reply-To: <20250306110038.3733649-1-vdonnefort@google.com>
Message-ID: <20250306110038.3733649-3-vdonnefort@google.com>
Subject: [PATCH v2 2/9]
KVM: arm64: Add a range to __pkvm_host_share_guest() From: Vincent Donnefort To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" In preparation for supporting stage-2 huge mappings for np-guest. Add a nr_pages argument to the __pkvm_host_share_guest hypercall. This range supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512 on a 4K-pages system). Signed-off-by: Vincent Donnefort diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index 978f38c386ee..1abbab5e2ff8 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -39,7 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages); int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 2c37680d954c..e71601746935 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -249,7 +249,8 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) { DECLARE_REG(u64, pfn, host_ctxt, 1); DECLARE_REG(u64, gfn, host_ctxt, 2); - DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3); + DECLARE_REG(u64, nr_pages, host_ctxt, 3); + DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 4); struct pkvm_hyp_vcpu *hyp_vcpu; int ret =3D -EINVAL; =20 @@ -264,7 +265,7 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) if (ret) goto out; =20 - ret =3D __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot); + ret =3D __pkvm_host_share_guest(pfn, gfn, nr_pages, hyp_vcpu, prot); out: cpu_reg(host_ctxt, 1) =3D ret; } diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index 63968c7740c3..7e3a249149a0 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -60,6 +60,9 @@ static void hyp_unlock_component(void) hyp_spin_unlock(&pkvm_pgd_lock); } =20 +#define for_each_hyp_page(start, size, page) \ + for (page =3D hyp_phys_to_page(start); page < hyp_phys_to_page((start) + = (size)); page++) + static void *host_s2_zalloc_pages_exact(size_t size) { void *addr =3D hyp_alloc_pages(&host_s2_pool, get_order(size)); @@ -509,10 +512,25 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 si= ze, =20 static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm= _page_state state) { - phys_addr_t end =3D addr + size; + struct hyp_page *page; =20 - for (; addr < end; addr +=3D PAGE_SIZE) - hyp_phys_to_page(addr)->host_state =3D state; + for_each_hyp_page(addr, size, page) + page->host_state =3D state; +} + +static void __host_update_share_guest_count(u64 phys, u64 size, bool inc) +{ + struct 
hyp_page *page; + + for_each_hyp_page(phys, size, page) { + if (inc) { + WARN_ON(page->host_share_guest_count++ =3D=3D U32_MAX); + } else { + WARN_ON(!page->host_share_guest_count--); + if (!page->host_share_guest_count) + page->host_state =3D PKVM_PAGE_OWNED; + } + } } =20 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id) @@ -627,16 +645,16 @@ static int check_page_state_range(struct kvm_pgtable = *pgt, u64 addr, u64 size, static int __host_check_page_state_range(u64 addr, u64 size, enum pkvm_page_state state) { - u64 end =3D addr + size; + struct hyp_page *page; int ret; =20 - ret =3D check_range_allowed_memory(addr, end); + ret =3D check_range_allowed_memory(addr, addr + size); if (ret) return ret; =20 hyp_assert_lock_held(&host_mmu.lock); - for (; addr < end; addr +=3D PAGE_SIZE) { - if (hyp_phys_to_page(addr)->host_state !=3D state) + for_each_hyp_page(addr, size, page) { + if (page->host_state !=3D state) return -EPERM; } =20 @@ -686,10 +704,9 @@ static enum pkvm_page_state guest_get_page_state(kvm_p= te_t pte, u64 addr) return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte)); } =20 -static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 = addr, +static int __guest_check_page_state_range(struct pkvm_hyp_vm *vm, u64 addr, u64 size, enum pkvm_page_state state) { - struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); struct check_walk_data d =3D { .desired =3D state, .get_page_state =3D guest_get_page_state, @@ -896,49 +913,83 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages) return ret; } =20 -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +static int __guest_check_transition_size(u64 phys, u64 ipa, u64 nr_pages, = u64 *size) +{ + if (nr_pages =3D=3D 1) { + *size =3D PAGE_SIZE; + return 0; + } + + /* We solely support PMD_SIZE huge-pages */ + if (nr_pages !=3D (1 << (PMD_SHIFT - PAGE_SHIFT))) + return -EINVAL; + + if (!IS_ALIGNED(phys | ipa, PMD_SIZE)) + return -EINVAL; + + *size =3D PMD_SIZE; + return 0; +} + +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot) { struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); u64 phys =3D hyp_pfn_to_phys(pfn); u64 ipa =3D hyp_pfn_to_phys(gfn); + enum pkvm_page_state state; struct hyp_page *page; + u64 size; int ret; =20 if (prot & ~KVM_PGTABLE_PROT_RWX) return -EINVAL; =20 - ret =3D check_range_allowed_memory(phys, phys + PAGE_SIZE); + ret =3D __guest_check_transition_size(phys, ipa, nr_pages, &size); + if (ret) + return ret; + + ret =3D check_range_allowed_memory(phys, phys + size); if (ret) return ret; =20 host_lock_component(); guest_lock_component(vm); =20 - ret =3D __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE); + ret =3D __guest_check_page_state_range(vm, ipa, size, PKVM_NOPAGE); if (ret) goto unlock; =20 - page =3D hyp_phys_to_page(phys); - switch (page->host_state) { + state =3D hyp_phys_to_page(phys)->host_state; + for_each_hyp_page(phys, size, page) { + if (page->host_state !=3D state) { + ret =3D -EPERM; + goto unlock; + } + } + + switch (state) { case PKVM_PAGE_OWNED: - WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OW= NED)); + WARN_ON(__host_set_page_state_range(phys, size, PKVM_PAGE_SHARED_OWNED)); break; case PKVM_PAGE_SHARED_OWNED: - if (page->host_share_guest_count) - break; - /* Only host to np-guest multi-sharing is tolerated */ - WARN_ON(1); - fallthrough; + for_each_hyp_page(phys, size, page) { + /* Only host to np-guest multi-sharing 
is tolerated */
+			if (WARN_ON(!page->host_share_guest_count)) {
+				ret = -EPERM;
+				goto unlock;
+			}
+		}
+		break;
 	default:
 		ret = -EPERM;
 		goto unlock;
 	}
 
-	WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys,
+	WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, size, phys,
 				       pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED),
 				       &vcpu->vcpu.arch.pkvm_memcache, 0));
-	page->host_share_guest_count++;
+	__host_update_share_guest_count(phys, size, true);
 
 unlock:
 	guest_unlock_component(vm);
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 930b677eb9b0..00fd9a524bf7 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -361,7 +361,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot);
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot);
 	if (ret) {
 		/* Is the gfn already mapped due to a racing vCPU? */
 		if (ret == -EPERM)
-- 
2.48.1.711.g2feabab25a-goog
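For context on the nr_pages restriction described in the patch above, here is a minimal stand-alone sketch of the kind of validation __guest_check_transition_size() performs: either a single page or exactly one PMD worth of pages is accepted, and a PMD-sized request must be PMD-aligned in both its physical address and IPA. The constants assume 4K pages with a 2MiB PMD, and the helper name is illustrative; this is not the hypervisor code itself.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PMD_SHIFT	21
#define PMD_SIZE	(1UL << PMD_SHIFT)

static int check_transition_size(uint64_t phys, uint64_t ipa,
				 uint64_t nr_pages, uint64_t *size)
{
	if (nr_pages == 1) {
		*size = PAGE_SIZE;
		return 0;
	}

	/* Only PMD-sized blocks are supported: 512 pages with 4K pages. */
	if (nr_pages != (1UL << (PMD_SHIFT - PAGE_SHIFT)))
		return -1;

	/* Both addresses must be PMD-aligned for a block mapping. */
	if ((phys | ipa) & (PMD_SIZE - 1))
		return -1;

	*size = PMD_SIZE;
	return 0;
}

int main(void)
{
	uint64_t size;

	printf("1 page        -> %d\n", check_transition_size(0x200000, 0x400000, 1, &size));
	printf("512 aligned   -> %d\n", check_transition_size(0x200000, 0x400000, 512, &size));
	printf("512 unaligned -> %d\n", check_transition_size(0x201000, 0x400000, 512, &size));
	printf("513 pages     -> %d\n", check_transition_size(0x200000, 0x400000, 513, &size));
	return 0;
}
```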
Date: Thu, 6 Mar 2025 11:00:32 +0000
In-Reply-To: <20250306110038.3733649-1-vdonnefort@google.com>
Message-ID: <20250306110038.3733649-4-vdonnefort@google.com>
Subject: [PATCH v2 3/9] KVM: arm64: Add a range to __pkvm_host_unshare_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests, add
a nr_pages argument to the __pkvm_host_unshare_guest hypercall. This
argument supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is
512 on a system with 4K pages).
Signed-off-by: Vincent Donnefort diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index 1abbab5e2ff8..343569e4bdeb 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -41,7 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot); -int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); +int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *h= yp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hy= p_vm *vm); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index e71601746935..7f22d104c1f1 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -274,6 +274,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm= _cpu_context *host_ctxt) { DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); DECLARE_REG(u64, gfn, host_ctxt, 2); + DECLARE_REG(u64, nr_pages, host_ctxt, 3); struct pkvm_hyp_vm *hyp_vm; int ret =3D -EINVAL; =20 @@ -284,7 +285,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm= _cpu_context *host_ctxt) if (!hyp_vm) goto out; =20 - ret =3D __pkvm_host_unshare_guest(gfn, hyp_vm); + ret =3D __pkvm_host_unshare_guest(gfn, nr_pages, hyp_vm); put_pkvm_hyp_vm(hyp_vm); out: cpu_reg(host_ctxt, 1) =3D ret; diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index 7e3a249149a0..7b9b112e3ebf 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -998,13 +998,12 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_= pages, struct pkvm_hyp_vcpu return ret; } =20 -static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, = u64 ipa) +static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, = u64 ipa, u64 size) { - enum pkvm_page_state state; struct hyp_page *page; kvm_pte_t pte; - u64 phys; s8 level; + u64 phys; int ret; =20 ret =3D kvm_pgtable_get_leaf(&vm->pgt, ipa, &pte, &level); @@ -1012,51 +1011,52 @@ static int __check_host_shared_guest(struct pkvm_hy= p_vm *vm, u64 *__phys, u64 ip return ret; if (!kvm_pte_valid(pte)) return -ENOENT; - if (level !=3D KVM_PGTABLE_LAST_LEVEL) + if (kvm_granule_size(level) !=3D size) return -E2BIG; =20 - state =3D guest_get_page_state(pte, ipa); - if (state !=3D PKVM_PAGE_SHARED_BORROWED) - return -EPERM; + ret =3D __guest_check_page_state_range(vm, ipa, size, PKVM_PAGE_SHARED_BO= RROWED); + if (ret) + return ret; =20 phys =3D kvm_pte_to_phys(pte); - ret =3D check_range_allowed_memory(phys, phys + PAGE_SIZE); + ret =3D check_range_allowed_memory(phys, phys + size); if (WARN_ON(ret)) return ret; =20 - page =3D hyp_phys_to_page(phys); - if (page->host_state !=3D PKVM_PAGE_SHARED_OWNED) - return -EPERM; - if (WARN_ON(!page->host_share_guest_count)) - return -EINVAL; + for_each_hyp_page(phys, size, page) { + if (page->host_state !=3D PKVM_PAGE_SHARED_OWNED) + return -EPERM; + if (WARN_ON(!page->host_share_guest_count)) + return -EINVAL; + } =20 *__phys =3D phys; =20 return 0; } =20 -int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm) 
+int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *v= m) { u64 ipa =3D hyp_pfn_to_phys(gfn); - struct hyp_page *page; - u64 phys; + u64 size, phys; int ret; =20 + ret =3D __guest_check_transition_size(0, ipa, nr_pages, &size); + if (ret) + return ret; + host_lock_component(); guest_lock_component(vm); =20 - ret =3D __check_host_shared_guest(vm, &phys, ipa); + ret =3D __check_host_shared_guest(vm, &phys, ipa, size); if (ret) goto unlock; =20 - ret =3D kvm_pgtable_stage2_unmap(&vm->pgt, ipa, PAGE_SIZE); + ret =3D kvm_pgtable_stage2_unmap(&vm->pgt, ipa, size); if (ret) goto unlock; =20 - page =3D hyp_phys_to_page(phys); - page->host_share_guest_count--; - if (!page->host_share_guest_count) - WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED)); + __host_update_share_guest_count(phys, size, false); =20 unlock: guest_unlock_component(vm); @@ -1076,7 +1076,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_= vm *vm, u64 ipa) host_lock_component(); guest_lock_component(vm); =20 - ret =3D __check_host_shared_guest(vm, &phys, ipa); + ret =3D __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE); =20 guest_unlock_component(vm); host_unlock_component(); diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 00fd9a524bf7..b65fcf245fc9 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -385,7 +385,7 @@ int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, = u64 addr, u64 size) =20 lockdep_assert_held_write(&kvm->mmu_lock); for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) { - ret =3D kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gf= n); + ret =3D kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gf= n, 1); if (WARN_ON(ret)) break; rb_erase(&mapping->node, &pgt->pkvm_mappings); --=20 2.48.1.711.g2feabab25a-goog From nobody Sun Feb 8 12:32:18 2026 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 77BEE20AF64 for ; Thu, 6 Mar 2025 11:00:55 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741258857; cv=none; b=C0V1DDcXIYmc94k3XGr2tb/7UzM9F8M7YNEZwLouhXvvexp/LE7pwty4tBhsEqdKI5g1LpIHEQxDcx3vq2CM1mUuI87rwcbqPIHeS5Q04YzpClMl3V69+N6sUmrOBJupXQQhRJ9lEs12FvHo+Iw00CxPclbk2gQRzYPuZ5Zv1Vg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741258857; c=relaxed/simple; bh=I0XY+A2V0Q5w7oHyDeXZMWplb/dvx/ssTKzTKtfDPnk=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=Xg8sJxVDU46fS/dyaflJXv9+yQYDhzry/u9rxkg2nVG5atO7BQ0uiHNqNunT2EfgslMppDDQXuSQ56NpKVjE0zKYNClD/tMkwa56UjQsVZkFFboyrsU6wEGRXuCmdyTyhxzXQV4DSBGfu0tNkx+s8rQUi6lpExtUrSz0L7yQ48c= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--vdonnefort.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=sdfBfBBN; arc=none smtp.client-ip=209.85.128.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--vdonnefort.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) 
Date: Thu, 6 Mar 2025 11:00:33 +0000
In-Reply-To: <20250306110038.3733649-1-vdonnefort@google.com>
Message-ID: <20250306110038.3733649-5-vdonnefort@google.com>
Subject: [PATCH v2 4/9] KVM: arm64: Add a range to __pkvm_host_wrprotect_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests, add
a nr_pages argument to the __pkvm_host_wrprotect_guest hypercall. This
argument supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is
512 on a system with 4K pages).
Signed-off-by: Vincent Donnefort diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index 343569e4bdeb..ad6131033114 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_page= s, struct pkvm_hyp_vcpu enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *h= yp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); -int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hy= p_vm *vm); +int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm = *hyp_vm); int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu); =20 bool addr_is_memory(phys_addr_t phys); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 7f22d104c1f1..e13771a67827 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -314,6 +314,7 @@ static void handle___pkvm_host_wrprotect_guest(struct k= vm_cpu_context *host_ctxt { DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); DECLARE_REG(u64, gfn, host_ctxt, 2); + DECLARE_REG(u64, nr_pages, host_ctxt, 3); struct pkvm_hyp_vm *hyp_vm; int ret =3D -EINVAL; =20 @@ -324,7 +325,7 @@ static void handle___pkvm_host_wrprotect_guest(struct k= vm_cpu_context *host_ctxt if (!hyp_vm) goto out; =20 - ret =3D __pkvm_host_wrprotect_guest(gfn, hyp_vm); + ret =3D __pkvm_host_wrprotect_guest(gfn, nr_pages, hyp_vm); put_pkvm_hyp_vm(hyp_vm); out: cpu_reg(host_ctxt, 1) =3D ret; diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index 7b9b112e3ebf..e113ece1b759 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -1065,7 +1065,7 @@ int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, = struct pkvm_hyp_vm *vm) return ret; } =20 -static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa) +static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa, u64 = size) { u64 phys; int ret; @@ -1076,7 +1076,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_= vm *vm, u64 ipa) host_lock_component(); guest_lock_component(vm); =20 - ret =3D __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE); + ret =3D __check_host_shared_guest(vm, &phys, ipa, size); =20 guest_unlock_component(vm); host_unlock_component(); @@ -1096,7 +1096,7 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkv= m_hyp_vcpu *vcpu, enum kvm_ if (prot & ~KVM_PGTABLE_PROT_RWX) return -EINVAL; =20 - assert_host_shared_guest(vm, ipa); + assert_host_shared_guest(vm, ipa, PAGE_SIZE); guest_lock_component(vm); ret =3D kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0); guest_unlock_component(vm); @@ -1104,17 +1104,21 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct p= kvm_hyp_vcpu *vcpu, enum kvm_ return ret; } =20 -int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm) +int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm = *vm) { - u64 ipa =3D hyp_pfn_to_phys(gfn); + u64 size, ipa =3D hyp_pfn_to_phys(gfn); int ret; =20 if (pkvm_hyp_vm_is_protected(vm)) return -EPERM; =20 - assert_host_shared_guest(vm, ipa); + ret =3D __guest_check_transition_size(0, ipa, nr_pages, &size); + if (ret) + return ret; + + assert_host_shared_guest(vm, ipa, 
size); guest_lock_component(vm); - ret =3D kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE); + ret =3D kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, size); guest_unlock_component(vm); =20 return ret; @@ -1128,7 +1132,7 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool = mkold, struct pkvm_hyp_vm * if (pkvm_hyp_vm_is_protected(vm)) return -EPERM; =20 - assert_host_shared_guest(vm, ipa); + assert_host_shared_guest(vm, ipa, PAGE_SIZE); guest_lock_component(vm); ret =3D kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mko= ld); guest_unlock_component(vm); @@ -1144,7 +1148,7 @@ int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hy= p_vcpu *vcpu) if (pkvm_hyp_vm_is_protected(vm)) return -EPERM; =20 - assert_host_shared_guest(vm, ipa); + assert_host_shared_guest(vm, ipa, PAGE_SIZE); guest_lock_component(vm); kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0); guest_unlock_component(vm); diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index b65fcf245fc9..3ea92bb79e8c 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -404,7 +404,7 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *p= gt, u64 addr, u64 size) =20 lockdep_assert_held(&kvm->mmu_lock); for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) { - ret =3D kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->= gfn); + ret =3D kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->= gfn, 1); if (WARN_ON(ret)) break; } --=20 2.48.1.711.g2feabab25a-goog From nobody Sun Feb 8 12:32:18 2026 Received: from mail-wr1-f73.google.com (mail-wr1-f73.google.com [209.85.221.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C9AE120AF9C for ; Thu, 6 Mar 2025 11:00:57 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.221.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741258859; cv=none; b=jNu2W0S9WPA2IFpB6m1bVrooTrk6eRdWXJeRpvQIULQyhNXreBXVQzD5+6r1LmQOUaXa3eJ/AvTkHGBPo7BGjie6d3MVRSclnmuQ8UpexpFFJHS/sanVoS4fi9BREWCphSS42yod++y+X2MYJphtTw26J5NPqC+v3RD3otD93Mw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1741258859; c=relaxed/simple; bh=4GxbZZ8KDx9FpZt0envBnAfk7dVLQTyfBHEd21OXXd4=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=K0bDEpNjMQRpmd6xwcJUJpOLWkq+c6ycW9sWWH9G1YbQJgPGJx7S5hYKJwgpMojT5DFu7SGJMwTWxojQrw0kv6VAYBu2oXgECwyYpcGESLy2bGXA6KV/IUc7euTvlBnJ6EmU8lAsdjo7IcLNJM8Y8h+HnRaFCgxhOatwfwHDnNQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--vdonnefort.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=yrw5M/9G; arc=none smtp.client-ip=209.85.221.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--vdonnefort.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="yrw5M/9G" Received: by mail-wr1-f73.google.com with SMTP id ffacd0b85a97d-390eefb2913so372075f8f.0 for ; Thu, 06 Mar 2025 03:00:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1741258856; x=1741863656; darn=vger.kernel.org; 
Date: Thu, 6 Mar 2025 11:00:34 +0000
In-Reply-To: <20250306110038.3733649-1-vdonnefort@google.com>
Message-ID: <20250306110038.3733649-6-vdonnefort@google.com>
Subject: [PATCH v2 5/9] KVM: arm64: Add a range to __pkvm_host_test_clear_young_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guests, add
a nr_pages argument to the __pkvm_host_test_clear_young_guest hypercall.
This argument supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that
is 512 on a system with 4K pages).
Signed-off-by: Vincent Donnefort diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index ad6131033114..0c88c92fc3a2 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_page= s, struct pkvm_hyp_vcpu enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *h= yp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); -int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hy= p_vm *vm); int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm = *hyp_vm); +int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, = struct pkvm_hyp_vm *vm); int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu); =20 bool addr_is_memory(phys_addr_t phys); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index e13771a67827..a6353aacc36c 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -335,7 +335,8 @@ static void handle___pkvm_host_test_clear_young_guest(s= truct kvm_cpu_context *ho { DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); DECLARE_REG(u64, gfn, host_ctxt, 2); - DECLARE_REG(bool, mkold, host_ctxt, 3); + DECLARE_REG(u64, nr_pages, host_ctxt, 3); + DECLARE_REG(bool, mkold, host_ctxt, 4); struct pkvm_hyp_vm *hyp_vm; int ret =3D -EINVAL; =20 @@ -346,7 +347,7 @@ static void handle___pkvm_host_test_clear_young_guest(s= truct kvm_cpu_context *ho if (!hyp_vm) goto out; =20 - ret =3D __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm); + ret =3D __pkvm_host_test_clear_young_guest(gfn, nr_pages, mkold, hyp_vm); put_pkvm_hyp_vm(hyp_vm); out: cpu_reg(host_ctxt, 1) =3D ret; diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index e113ece1b759..61bf26a911e6 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -1124,17 +1124,21 @@ int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pag= es, struct pkvm_hyp_vm *vm) return ret; } =20 -int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hy= p_vm *vm) +int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, = struct pkvm_hyp_vm *vm) { - u64 ipa =3D hyp_pfn_to_phys(gfn); + u64 size, ipa =3D hyp_pfn_to_phys(gfn); int ret; =20 if (pkvm_hyp_vm_is_protected(vm)) return -EPERM; =20 - assert_host_shared_guest(vm, ipa, PAGE_SIZE); + ret =3D __guest_check_transition_size(0, ipa, nr_pages, &size); + if (ret) + return ret; + + assert_host_shared_guest(vm, ipa, size); guest_lock_component(vm); - ret =3D kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mko= ld); + ret =3D kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, size, mkold); guest_unlock_component(vm); =20 return ret; diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 3ea92bb79e8c..2eb1cc30124e 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -434,7 +434,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pg= table *pgt, u64 addr, u64 lockdep_assert_held(&kvm->mmu_lock); for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) young |=3D kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle,= mapping->gfn, - mkold); + 1, mkold); =20 return young; } --=20 2.48.1.711.g2feabab25a-goog From nobody Sun Feb 8 12:32:18 2026 
Date: Thu, 6 Mar 2025 11:00:35 +0000
In-Reply-To: <20250306110038.3733649-1-vdonnefort@google.com>
Message-ID: <20250306110038.3733649-7-vdonnefort@google.com>
Subject: [PATCH v2 6/9] KVM: arm64: Convert pkvm_mappings to interval tree
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org, kernel-team@android.com, Vincent Donnefort

From: Quentin Perret

In preparation for supporting stage-2 huge mappings for np-guests, let's
convert pgt.pkvm_mappings to an interval tree. No functional change
intended.

Suggested-by: Vincent Donnefort
Signed-off-by: Quentin Perret
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 6b9d274052c7..1b43bcd2a679 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -413,7 +413,7 @@ static inline bool kvm_pgtable_walk_lock_held(void)
  */
 struct kvm_pgtable {
 	union {
-		struct rb_root		pkvm_mappings;
+		struct rb_root_cached	pkvm_mappings;
 		struct {
 			u32		ia_bits;
 			s8		start_level;
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index eb65f12e81d9..f0d52efb858e 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -166,6 +166,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 __subtree_last;	/* Internal member for interval tree */
 };
 
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 2eb1cc30124e..da637c565ac9 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -5,6 +5,7 @@
  */
 
 #include
+#include
 #include
 #include
 #include
@@ -270,80 +271,63 @@ static int __init finalize_pkvm(void)
 }
 device_initcall_sync(finalize_pkvm);
 
-static int cmp_mappings(struct rb_node *node, const struct rb_node *parent)
+static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 {
-	struct pkvm_mapping *a = rb_entry(node, struct pkvm_mapping, node);
-	struct pkvm_mapping *b = rb_entry(parent, struct pkvm_mapping, node);
-
-	if (a->gfn < b->gfn)
-		return -1;
-	if (a->gfn > b->gfn)
-		return 1;
-	return 0;
+	return m->gfn * PAGE_SIZE;
 }
 
-static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 gfn)
+static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	struct rb_node *node = root->rb_node, *prev = NULL;
-	struct pkvm_mapping *mapping;
-
-	while (node) {
-		mapping = rb_entry(node, struct pkvm_mapping, node);
-		if (mapping->gfn == gfn)
-			return node;
-		prev = node;
-		node = (gfn < mapping->gfn) ?
node->rb_left : node->rb_right; - } - - return prev; + return (m->gfn + 1) * PAGE_SIZE - 1; } =20 -/* - * __tmp is updated to rb_next(__tmp) *before* entering the body of the lo= op to allow freeing - * of __map inline. - */ +INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last, + __pkvm_mapping_start, __pkvm_mapping_end, static, + pkvm_mapping); + #define for_each_mapping_in_range_safe(__pgt, __start, __end, __map) \ - for (struct rb_node *__tmp =3D find_first_mapping_node(&(__pgt)->pkvm_map= pings, \ - ((__start) >> PAGE_SHIFT)); \ + for (struct pkvm_mapping *__tmp =3D pkvm_mapping_iter_first(&(__pgt)->pkv= m_mappings, \ + __start, __end - 1); \ __tmp && ({ \ - __map =3D rb_entry(__tmp, struct pkvm_mapping, node); \ - __tmp =3D rb_next(__tmp); \ + __map =3D __tmp; \ + __tmp =3D pkvm_mapping_iter_next(__map, __start, __end - 1); \ true; \ }); \ - ) \ - if (__map->gfn < ((__start) >> PAGE_SHIFT)) \ - continue; \ - else if (__map->gfn >=3D ((__end) >> PAGE_SHIFT)) \ - break; \ - else + ) =20 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *m= mu, struct kvm_pgtable_mm_ops *mm_ops) { - pgt->pkvm_mappings =3D RB_ROOT; + pgt->pkvm_mappings =3D RB_ROOT_CACHED; pgt->mmu =3D mmu; =20 return 0; } =20 -void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt) +static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start,= u64 end) { struct kvm *kvm =3D kvm_s2_mmu_to_kvm(pgt->mmu); pkvm_handle_t handle =3D kvm->arch.pkvm.handle; struct pkvm_mapping *mapping; - struct rb_node *node; + int ret; =20 if (!handle) - return; + return 0; =20 - node =3D rb_first(&pgt->pkvm_mappings); - while (node) { - mapping =3D rb_entry(node, struct pkvm_mapping, node); - kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn); - node =3D rb_next(node); - rb_erase(&mapping->node, &pgt->pkvm_mappings); + for_each_mapping_in_range_safe(pgt, start, end, mapping) { + ret =3D kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gf= n, 1); + if (WARN_ON(ret)) + return ret; + pkvm_mapping_remove(mapping, &pgt->pkvm_mappings); kfree(mapping); } + + return 0; +} + +void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt) +{ + __pkvm_pgtable_stage2_unmap(pgt, 0, ~(0ULL)); } =20 int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size, @@ -371,28 +355,16 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, = u64 addr, u64 size, swap(mapping, cache->mapping); mapping->gfn =3D gfn; mapping->pfn =3D pfn; - WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm_mappings, cmp_mappings)); + pkvm_mapping_insert(mapping, &pgt->pkvm_mappings); =20 return ret; } =20 int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size) { - struct kvm *kvm =3D kvm_s2_mmu_to_kvm(pgt->mmu); - pkvm_handle_t handle =3D kvm->arch.pkvm.handle; - struct pkvm_mapping *mapping; - int ret =3D 0; + lockdep_assert_held_write(&kvm_s2_mmu_to_kvm(pgt->mmu)->mmu_lock); =20 - lockdep_assert_held_write(&kvm->mmu_lock); - for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) { - ret =3D kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gf= n, 1); - if (WARN_ON(ret)) - break; - rb_erase(&mapping->node, &pgt->pkvm_mappings); - kfree(mapping); - } - - return ret; + return __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size); } =20 int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 s= ize) --=20 2.48.1.711.g2feabab25a-goog From nobody Sun Feb 8 12:32:18 2026 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) 
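As a side note on the interval-tree conversion above: each pkvm_mapping is keyed by the byte range its gfn covers, [gfn * PAGE_SIZE, (gfn + 1) * PAGE_SIZE - 1] in this patch and [gfn * PAGE_SIZE, (gfn + nr_pages) * PAGE_SIZE - 1] once the following patch adds nr_pages, which is what lets unmap/wrprotect visit only the mappings overlapping a range. The stand-alone sketch below (ordinary userspace C with an assumed 4K PAGE_SIZE and illustrative helper names, not the kernel's interval-tree API) shows the start/last computation and the overlap query such a tree answers.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL

struct mapping {
	uint64_t gfn;
	uint64_t nr_pages;
};

static uint64_t mapping_start(const struct mapping *m)
{
	return m->gfn * PAGE_SIZE;
}

static uint64_t mapping_last(const struct mapping *m)
{
	return (m->gfn + m->nr_pages) * PAGE_SIZE - 1;
}

/* The query an interval tree answers: does [start, last] intersect the mapping? */
static bool mapping_overlaps(const struct mapping *m, uint64_t start, uint64_t last)
{
	return start <= mapping_last(m) && mapping_start(m) <= last;
}

int main(void)
{
	struct mapping block = { .gfn = 512, .nr_pages = 512 };	/* one 2MiB block at IPA 2MiB */

	/* An unmap of [3MiB, 4MiB) overlaps the [2MiB, 4MiB) block mapping. */
	printf("%d\n", mapping_overlaps(&block, 3 << 20, (4 << 20) - 1));
	return 0;
}
```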
Date: Thu, 6 Mar 2025 11:00:36 +0000
In-Reply-To: <20250306110038.3733649-1-vdonnefort@google.com>
Message-ID: <20250306110038.3733649-8-vdonnefort@google.com>
Subject: [PATCH v2 7/9] KVM: arm64: Add a range to pkvm_mappings
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

From: Quentin Perret

In preparation for supporting stage-2 huge mappings for np-guests, add an
nr_pages member to struct pkvm_mapping so that EL1 can track the size of
the stage-2 mapping.

Signed-off-by: Quentin Perret
Signed-off-by: Vincent Donnefort
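For intuition, here is a small self-contained sketch (plain C with hypothetical
constants and helper names, not the kernel code) of the inclusive byte interval
that a (gfn, nr_pages) pair covers, mirroring what __pkvm_mapping_start() and
__pkvm_mapping_end() compute in the diff below:

#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL	/* assumption: 4KiB base pages */

/* Hypothetical helpers: the interval is inclusive on both ends. */
static uint64_t mapping_start(uint64_t gfn)
{
	return gfn * PAGE_SIZE;
}

static uint64_t mapping_end(uint64_t gfn, uint64_t nr_pages)
{
	return (gfn + nr_pages) * PAGE_SIZE - 1;
}

int main(void)
{
	/* A 512-page (2MiB with 4KiB pages) mapping at gfn 512 covers [2MiB, 4MiB - 1]. */
	assert(mapping_start(512) == 512 * PAGE_SIZE);
	assert(mapping_end(512, 512) == 1024 * PAGE_SIZE - 1);
	return 0;
}

With nr_pages == 1 this degenerates to the single-gfn interval used so far,
which is why only the end computation changes in the patch.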
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index f0d52efb858e..0e944a754b96 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -166,6 +166,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 nr_pages;
 	u64 __subtree_last;	/* Internal member for interval tree */
 };
 
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index da637c565ac9..9c9833f27fe3 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -278,7 +278,7 @@ static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 
 static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	return (m->gfn + 1) * PAGE_SIZE - 1;
+	return (m->gfn + m->nr_pages) * PAGE_SIZE - 1;
 }
 
 INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
@@ -315,7 +315,8 @@ static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 e
 		return 0;
 
 	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			return ret;
 		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
@@ -345,16 +346,32 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot);
-	if (ret) {
-		/* Is the gfn already mapped due to a racing vCPU? */
-		if (ret == -EPERM)
+
+	/*
+	 * Calling stage2_map() on top of existing mappings is either happening because of a race
+	 * with another vCPU, or because we're changing between page and block mappings. As per
+	 * user_mem_abort(), same-size permission faults are handled in the relax_perms() path.
+	 */
+	mapping = pkvm_mapping_iter_first(&pgt->pkvm_mappings, addr, addr + size - 1);
+	if (mapping) {
+		if (size == (mapping->nr_pages * PAGE_SIZE))
 			return -EAGAIN;
+
+		/* Remove _any_ pkvm_mapping overlapping with the range, bigger or smaller. */
+		ret = __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
+		if (ret)
+			return ret;
+		mapping = NULL;
 	}
 
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, size / PAGE_SIZE, prot);
+	if (WARN_ON(ret))
+		return ret;
+
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
+	mapping->nr_pages = size / PAGE_SIZE;
 	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 
 	return ret;
 }
 
@@ -376,7 +393,8 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			break;
 	}
@@ -391,7 +409,8 @@ int pkvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
-		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn),
+					  PAGE_SIZE * mapping->nr_pages);
 
 	return 0;
 }
@@ -406,7 +425,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   1, mkold);
+					   mapping->nr_pages, mkold);
 
 	return young;
 }
-- 
2.48.1.711.g2feabab25a-goog
Date: Thu, 6 Mar 2025 11:00:37 +0000
In-Reply-To: <20250306110038.3733649-1-vdonnefort@google.com>
Message-ID: <20250306110038.3733649-9-vdonnefort@google.com>
Subject: [PATCH v2 8/9] KVM: arm64: Stage-2 huge mappings for np-guests
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

Now that np-guest hypercalls with a range are supported, we can let the
hypervisor install block mappings whenever the Stage-1 allows it, that is,
when the memory is backed by either Hugetlbfs or THPs. The size of those
block mappings is limited to PMD_SIZE.

Signed-off-by: Vincent Donnefort
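As a quick illustration of the size contract this introduces (a standalone C
sketch with hypothetical constants, not the kernel code): the pkvm stage-2 map
path now accepts exactly PAGE_SIZE or PMD_SIZE requests and nothing else, as
the pkvm.c hunk below shows.

#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL		/* assumption: 4KiB base pages */
#define PMD_SIZE (512ULL * PAGE_SIZE)	/* 2MiB block with 4KiB pages */

/* Mirrors the size check added to pkvm_pgtable_stage2_map() below. */
static int check_np_guest_map_size(uint64_t size)
{
	if (size != PAGE_SIZE && size != PMD_SIZE)
		return -EINVAL;
	return 0;
}

int main(void)
{
	assert(check_np_guest_map_size(PAGE_SIZE) == 0);
	assert(check_np_guest_map_size(PMD_SIZE) == 0);
	assert(check_np_guest_map_size(2 * PAGE_SIZE) == -EINVAL); /* no arbitrary sizes */
	return 0;
}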
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 61bf26a911e6..b7a995a1d70b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -167,7 +167,7 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)
 static bool guest_stage2_force_pte_cb(u64 addr, u64 end,
				      enum kvm_pgtable_prot prot)
 {
-	return true;
+	return false;
 }
 
 static void *guest_s2_zalloc_pages_exact(size_t size)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 1f55b0c7b11d..3143f3b52c93 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1525,7 +1525,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
	 * logging_active is guaranteed to never be true for VM_PFNMAP
	 * memslots.
	 */
-	if (logging_active || is_protected_kvm_enabled()) {
+	if (logging_active) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
@@ -1535,7 +1535,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
-		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
+		if (is_protected_kvm_enabled() ||
+		    fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
 			break;
 		fallthrough;
 #endif
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 9c9833f27fe3..b40bcdb1814d 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -342,7 +342,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	u64 pfn = phys >> PAGE_SHIFT;
 	int ret;
 
-	if (size != PAGE_SIZE)
+	if (size != PAGE_SIZE && size != PMD_SIZE)
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-- 
2.48.1.711.g2feabab25a-goog
Date: Thu, 6 Mar 2025 11:00:38 +0000
In-Reply-To: <20250306110038.3733649-1-vdonnefort@google.com>
Message-ID: <20250306110038.3733649-10-vdonnefort@google.com>
Subject: [PATCH v2 9/9] KVM: arm64: np-guest CMOs with PMD_SIZE fixmap
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

With the introduction of stage-2 huge mappings in the pKVM hypervisor, CMOs
for guest pages are now needed at PMD_SIZE granularity. The fixmap only
supports PAGE_SIZE, and iterating over a huge page one PAGE_SIZE chunk at a
time is slow (mostly because of the TLBI in hyp_fixmap_unmap), which is a
problem for EL2 latency.

Introduce a shared PMD_SIZE fixmap (hyp_fixblock_map/hyp_fixblock_unmap) to
improve guest page CMOs when stage-2 huge mappings are installed.

On a Pixel 6, the iterative solution resulted in a latency of ~700us, while
the PMD_SIZE fixmap reduces it to ~100us.

Because of the horrendous private range allocation that would be necessary,
this is disabled on 64KiB-page systems.

Suggested-by: Quentin Perret
Signed-off-by: Vincent Donnefort
Signed-off-by: Quentin Perret
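To make the intended behaviour easier to follow, here is a standalone sketch
(plain C, hypothetical stand-ins for the hypervisor fixmap API, not the kernel
code) of the "use one PMD_SIZE block when possible, otherwise fall back to
per-page CMOs" loop that the diff below implements in
clean_dcache_guest_page()/invalidate_icache_guest_page():

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL		/* assumption: 4KiB base pages */
#define PMD_SIZE (512ULL * PAGE_SIZE)	/* 2MiB block */

/* Count how many map/CMO/unmap rounds a range needs. fixblock_available
 * stands in for hyp_fixblock_map() succeeding; it can fail, e.g. when the
 * block fixmap is disabled on 64KiB-page configurations. */
static uint64_t cmo_rounds(uint64_t size, int fixblock_available)
{
	uint64_t rounds = 0;

	while (size) {
		uint64_t chunk = PAGE_SIZE;

		/* Prefer a single PMD_SIZE mapping when the range is one block. */
		if (size == PMD_SIZE && fixblock_available)
			chunk = PMD_SIZE;

		/* map, CMO, unmap would happen here */
		rounds++;
		size -= chunk;
	}

	return rounds;
}

int main(void)
{
	printf("block fixmap: %llu round(s)\n", (unsigned long long)cmo_rounds(PMD_SIZE, 1));
	printf("page fallback: %llu round(s)\n", (unsigned long long)cmo_rounds(PMD_SIZE, 0));
	return 0;
}

One round instead of 512 is where the latency improvement quoted above comes
from, since each unmap implies a TLBI.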
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 1b43bcd2a679..2888b5d03757 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -59,6 +59,11 @@ typedef u64 kvm_pte_t;
 
 #define KVM_PHYS_INVALID		(-1ULL)
 
+#define KVM_PTE_TYPE			BIT(1)
+#define KVM_PTE_TYPE_BLOCK		0
+#define KVM_PTE_TYPE_PAGE		1
+#define KVM_PTE_TYPE_TABLE		1
+
 #define KVM_PTE_LEAF_ATTR_LO		GENMASK(11, 2)
 
 #define KVM_PTE_LEAF_ATTR_LO_S1_ATTRIDX	GENMASK(4, 2)
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 230e4f2527de..b0c72bc2d5ba 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -13,9 +13,11 @@ extern struct kvm_pgtable pkvm_pgtable;
 extern hyp_spinlock_t pkvm_pgd_lock;
 
-int hyp_create_pcpu_fixmap(void);
+int hyp_create_fixmap(void);
 void *hyp_fixmap_map(phys_addr_t phys);
 void hyp_fixmap_unmap(void);
+void *hyp_fixblock_map(phys_addr_t phys);
+void hyp_fixblock_unmap(void);
 
 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index b7a995a1d70b..5710c97cafb0 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -220,17 +220,53 @@ static void guest_s2_put_page(void *addr)
 	hyp_put_page(&current_vm->pool, addr);
 }
 
+static void *__fixmap_guest_page(void *va, size_t *size)
+{
+	if (IS_ALIGNED(*size, PMD_SIZE)) {
+		void *addr = hyp_fixblock_map(__hyp_pa(va));
+
+		if (addr)
+			return addr;
+
+		*size = PAGE_SIZE;
+	}
+
+	if (IS_ALIGNED(*size, PAGE_SIZE))
+		return hyp_fixmap_map(__hyp_pa(va));
+
+	WARN_ON(1);
+
+	return NULL;
+}
+
+static void __fixunmap_guest_page(size_t size)
+{
+	switch (size) {
+	case PAGE_SIZE:
+		hyp_fixmap_unmap();
+		break;
+	case PMD_SIZE:
+		hyp_fixblock_unmap();
+		break;
+	default:
+		WARN_ON(1);
+	}
+}
+
 static void clean_dcache_guest_page(void *va, size_t size)
 {
 	if (WARN_ON(!PAGE_ALIGNED(size)))
 		return;
 
 	while (size) {
-		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					  PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
+		size_t fixmap_size = size == PMD_SIZE ? size : PAGE_SIZE;
+		void *addr = __fixmap_guest_page(va, &fixmap_size);
+
+		__clean_dcache_guest_page(addr, fixmap_size);
+		__fixunmap_guest_page(fixmap_size);
+
+		size -= fixmap_size;
+		va += fixmap_size;
 	}
 }
 
@@ -240,11 +276,14 @@ static void invalidate_icache_guest_page(void *va, size_t size)
 		return;
 
 	while (size) {
-		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					       PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
+		size_t fixmap_size = size == PMD_SIZE ? size : PAGE_SIZE;
+		void *addr = __fixmap_guest_page(va, &fixmap_size);
+
+		__invalidate_icache_guest_page(addr, fixmap_size);
+		__fixunmap_guest_page(fixmap_size);
+
+		size -= fixmap_size;
+		va += fixmap_size;
 	}
 }
 
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index f41c7440b34b..e3b1bece8504 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -229,9 +229,8 @@ int hyp_map_vectors(void)
 	return 0;
 }
 
-void *hyp_fixmap_map(phys_addr_t phys)
+static void *fixmap_map_slot(struct hyp_fixmap_slot *slot, phys_addr_t phys)
 {
-	struct hyp_fixmap_slot *slot = this_cpu_ptr(&fixmap_slots);
 	kvm_pte_t pte, *ptep = slot->ptep;
 
 	pte = *ptep;
@@ -243,10 +242,21 @@ void *hyp_fixmap_map(phys_addr_t phys)
 	return (void *)slot->addr;
 }
 
+void *hyp_fixmap_map(phys_addr_t phys)
+{
+	return fixmap_map_slot(this_cpu_ptr(&fixmap_slots), phys);
+}
+
 static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
 {
 	kvm_pte_t *ptep = slot->ptep;
 	u64 addr = slot->addr;
+	u32 level;
+
+	if (FIELD_GET(KVM_PTE_TYPE, *ptep) == KVM_PTE_TYPE_PAGE)
+		level = KVM_PGTABLE_LAST_LEVEL;
+	else
+		level = KVM_PGTABLE_LAST_LEVEL - 1; /* create_fixblock() guarantees PMD level */
 
 	WRITE_ONCE(*ptep, *ptep & ~KVM_PTE_VALID);
 
@@ -260,7 +270,7 @@ static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
	 * https://lore.kernel.org/kvm/20221017115209.2099-1-will@kernel.org/T/#mf10dfbaf1eaef9274c581b81c53758918c1d0f03
	 */
 	dsb(ishst);
-	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), KVM_PGTABLE_LAST_LEVEL);
+	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
 	dsb(ish);
 	isb();
 }
@@ -273,9 +283,9 @@ void hyp_fixmap_unmap(void)
 static int __create_fixmap_slot_cb(const struct kvm_pgtable_visit_ctx *ctx,
				   enum kvm_pgtable_walk_flags visit)
 {
-	struct hyp_fixmap_slot *slot = per_cpu_ptr(&fixmap_slots, (u64)ctx->arg);
+	struct hyp_fixmap_slot *slot = (struct hyp_fixmap_slot *)ctx->arg;
 
-	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_LAST_LEVEL)
+	if (!kvm_pte_valid(ctx->old) || (ctx->end - ctx->start) != kvm_granule_size(ctx->level))
 		return -EINVAL;
 
 	slot->addr = ctx->addr;
@@ -296,13 +306,73 @@ static int create_fixmap_slot(u64 addr, u64 cpu)
 	struct kvm_pgtable_walker walker = {
 		.cb	= __create_fixmap_slot_cb,
 		.flags	= KVM_PGTABLE_WALK_LEAF,
-		.arg	= (void *)cpu,
+		.arg	= (void *)per_cpu_ptr(&fixmap_slots, cpu),
 	};
 
 	return kvm_pgtable_walk(&pkvm_pgtable, addr, PAGE_SIZE, &walker);
 }
 
-int hyp_create_pcpu_fixmap(void)
+#ifndef CONFIG_ARM64_64K_PAGES
+static struct hyp_fixmap_slot hyp_fixblock_slot;
+static DEFINE_HYP_SPINLOCK(hyp_fixblock_lock);
+
+void *hyp_fixblock_map(phys_addr_t phys)
+{
+	hyp_spin_lock(&hyp_fixblock_lock);
+	return fixmap_map_slot(&hyp_fixblock_slot, phys);
+}
+
+void hyp_fixblock_unmap(void)
+{
+	fixmap_clear_slot(&hyp_fixblock_slot);
+	hyp_spin_unlock(&hyp_fixblock_lock);
+}
+
+static int create_fixblock(void)
+{
+	struct kvm_pgtable_walker walker = {
+		.cb	= __create_fixmap_slot_cb,
+		.flags	= KVM_PGTABLE_WALK_LEAF,
+		.arg	= (void *)&hyp_fixblock_slot,
+	};
+	unsigned long addr;
+	phys_addr_t phys;
+	int ret, i;
+
+	/* Find a RAM phys address, PMD aligned */
+	for (i = 0; i < hyp_memblock_nr; i++) {
+		phys = ALIGN(hyp_memory[i].base, PMD_SIZE);
+		if (phys + PMD_SIZE < (hyp_memory[i].base + hyp_memory[i].size))
+			break;
+	}
+
+	if (i >= hyp_memblock_nr)
+		return -EINVAL;
+
+	hyp_spin_lock(&pkvm_pgd_lock);
+	addr = ALIGN(__io_map_base, PMD_SIZE);
+	ret = __pkvm_alloc_private_va_range(addr, PMD_SIZE);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PMD_SIZE, phys, PAGE_HYP);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker);
+
+unlock:
+	hyp_spin_unlock(&pkvm_pgd_lock);
+
+	return ret;
+}
+#else
+void hyp_fixblock_unmap(void) { WARN_ON(1); }
+void *hyp_fixblock_map(phys_addr_t phys) { return NULL; }
+static int create_fixblock(void) { return 0; }
+#endif
+
+int hyp_create_fixmap(void)
 {
 	unsigned long addr, i;
 	int ret;
@@ -322,7 +392,7 @@ int hyp_create_pcpu_fixmap(void)
 		return ret;
 	}
 
-	return 0;
+	return create_fixblock();
 }
 
 int hyp_create_idmap(u32 hyp_va_bits)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index d62bcb5634a2..fb69cf5e6ea8 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -295,7 +295,7 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
-	ret = hyp_create_pcpu_fixmap();
+	ret = hyp_create_fixmap();
 	if (ret)
 		goto out;
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index df5cc74a7dd0..c351b4abd5db 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -11,12 +11,6 @@
 #include
 #include
 
-
-#define KVM_PTE_TYPE			BIT(1)
-#define KVM_PTE_TYPE_BLOCK		0
-#define KVM_PTE_TYPE_PAGE		1
-#define KVM_PTE_TYPE_TABLE		1
-
 struct kvm_pgtable_walk_data {
 	struct kvm_pgtable_walker *walker;
 
-- 
2.48.1.711.g2feabab25a-goog