From nobody Sun Dec 14 19:26:22 2025
Date: Wed, 21 May 2025 13:48:25 +0100
In-Reply-To: <20250521124834.1070650-1-vdonnefort@google.com>
References: <20250521124834.1070650-1-vdonnefort@google.com>
Message-ID: <20250521124834.1070650-2-vdonnefort@google.com>
Subject: [PATCH v6 01/10] KVM: arm64: Handle huge mappings for np-guest CMOs
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

clean_dcache_guest_page() and invalidate_icache_guest_page() accept a
size as an argument. But they also rely on the fixmap, which can only
map a single PAGE_SIZE page.

With the upcoming stage-2 huge mappings for pKVM np-guests, those
callbacks will get size > PAGE_SIZE. Loop the CMOs on a PAGE_SIZE basis
until the whole range is done.
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index d1488d4e5141..c18d4f691d2b 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -219,14 +219,32 @@ static void guest_s2_put_page(void *addr)
 
 static void clean_dcache_guest_page(void *va, size_t size)
 {
-	__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	size += va - PTR_ALIGN_DOWN(va, PAGE_SIZE);
+	va = PTR_ALIGN_DOWN(va, PAGE_SIZE);
+	size = PAGE_ALIGN(size);
+
+	while (size) {
+		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					  PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 static void invalidate_icache_guest_page(void *va, size_t size)
 {
-	__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)), size);
-	hyp_fixmap_unmap();
+	size += va - PTR_ALIGN_DOWN(va, PAGE_SIZE);
+	va = PTR_ALIGN_DOWN(va, PAGE_SIZE);
+	size = PAGE_ALIGN(size);
+
+	while (size) {
+		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
+					       PAGE_SIZE);
+		hyp_fixmap_unmap();
+		va += PAGE_SIZE;
+		size -= PAGE_SIZE;
+	}
 }
 
 int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
-- 
2.49.0.1112.g889b7c5bd8-goog

From nobody Sun Dec 14 19:26:22 2025
Date: Wed, 21 May 2025 13:48:26 +0100
In-Reply-To: <20250521124834.1070650-1-vdonnefort@google.com>
References: <20250521124834.1070650-1-vdonnefort@google.com>
Message-ID: <20250521124834.1070650-3-vdonnefort@google.com>
Subject: [PATCH v6 02/10] KVM: arm64: Introduce for_each_hyp_page
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort

Add a helper to iterate over the hypervisor vmemmap. This will be
particularly handy with the introduction of huge mapping support for
the np-guest stage-2.

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index eb0c2ebd1743..dee1a406b0c2 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -96,24 +96,24 @@ static inline struct hyp_page *hyp_phys_to_page(phys_addr_t phys)
 #define hyp_page_to_virt(page)	__hyp_va(hyp_page_to_phys(page))
 #define hyp_page_to_pool(page)	(((struct hyp_page *)page)->pool)
 
-static inline enum pkvm_page_state get_host_state(phys_addr_t phys)
+static inline enum pkvm_page_state get_host_state(struct hyp_page *p)
 {
-	return (enum pkvm_page_state)hyp_phys_to_page(phys)->__host_state;
+	return p->__host_state;
 }
 
-static inline void set_host_state(phys_addr_t phys, enum pkvm_page_state state)
+static inline void set_host_state(struct hyp_page *p, enum pkvm_page_state state)
 {
-	hyp_phys_to_page(phys)->__host_state = state;
+	p->__host_state = state;
 }
 
-static inline enum pkvm_page_state get_hyp_state(phys_addr_t phys)
+static inline enum pkvm_page_state get_hyp_state(struct hyp_page *p)
 {
-	return hyp_phys_to_page(phys)->__hyp_state_comp ^ PKVM_PAGE_STATE_MASK;
+	return p->__hyp_state_comp ^ PKVM_PAGE_STATE_MASK;
 }
 
-static inline void set_hyp_state(phys_addr_t phys, enum pkvm_page_state state)
+static inline void set_hyp_state(struct hyp_page *p, enum pkvm_page_state state)
 {
-	hyp_phys_to_page(phys)->__hyp_state_comp = state ^ PKVM_PAGE_STATE_MASK;
+	p->__hyp_state_comp = state ^ PKVM_PAGE_STATE_MASK;
 }
 
 /*
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index c18d4f691d2b..53bb029698c8 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -60,6 +60,11 @@ static void hyp_unlock_component(void)
 	hyp_spin_unlock(&pkvm_pgd_lock);
 }
 
+#define for_each_hyp_page(__p, __st, __sz)			\
+	for (struct hyp_page *__p = hyp_phys_to_page(__st),	\
+	     *__e = __p + ((__sz) >> PAGE_SHIFT);		\
+	     __p < __e; __p++)
+
 static void *host_s2_zalloc_pages_exact(size_t size)
 {
 	void *addr = hyp_alloc_pages(&host_s2_pool, get_order(size));
@@ -485,7 +490,8 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
 		return -EAGAIN;
 
 	if (pte) {
-		WARN_ON(addr_is_memory(addr) && get_host_state(addr) != PKVM_NOPAGE);
+		WARN_ON(addr_is_memory(addr) &&
+			get_host_state(hyp_phys_to_page(addr)) != PKVM_NOPAGE);
 		return -EPERM;
 	}
 
@@ -511,10 +517,8 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 
 static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_state state)
 {
-	phys_addr_t end = addr + size;
-
-	for (; addr < end; addr += PAGE_SIZE)
-		set_host_state(addr, state);
+	for_each_hyp_page(page, addr, size)
+		set_host_state(page, state);
 }
 
 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
@@ -636,16 +640,16 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 static int __host_check_page_state_range(u64 addr, u64 size,
 					 enum pkvm_page_state state)
 {
-	u64 end = addr + size;
 	int ret;
 
-	ret = check_range_allowed_memory(addr, end);
+	ret = check_range_allowed_memory(addr, addr + size);
 	if (ret)
 		return ret;
 
 	hyp_assert_lock_held(&host_mmu.lock);
-	for (; addr < end; addr += PAGE_SIZE) {
-		if (get_host_state(addr) != state)
+
+	for_each_hyp_page(page, addr, size) {
+		if (get_host_state(page) != state)
 			return -EPERM;
 	}
 
@@ -655,7 +659,7 @@ static int __host_check_page_state_range(u64 addr, u64 size,
 static int __host_set_page_state_range(u64 addr, u64 size,
 				       enum pkvm_page_state state)
 {
-	if (get_host_state(addr) == PKVM_NOPAGE) {
+	if (get_host_state(hyp_phys_to_page(addr)) == PKVM_NOPAGE) {
 		int ret = host_stage2_idmap_locked(addr, size, PKVM_HOST_MEM_PROT);
 
 		if (ret)
@@ -669,18 +673,14 @@ static int __host_set_page_state_range(u64 addr, u64 size,
 
 static void __hyp_set_page_state_range(phys_addr_t phys, u64 size, enum pkvm_page_state state)
 {
-	phys_addr_t end = phys + size;
-
-	for (; phys < end; phys += PAGE_SIZE)
-		set_hyp_state(phys, state);
+	for_each_hyp_page(page, phys, size)
+		set_hyp_state(page, state);
 }
 
 static int __hyp_check_page_state_range(phys_addr_t phys, u64 size, enum pkvm_page_state state)
 {
-	phys_addr_t end = phys + size;
-
-	for (; phys < end; phys += PAGE_SIZE) {
-		if (get_hyp_state(phys) != state)
+	for_each_hyp_page(page, phys, size) {
+		if (get_hyp_state(page) != state)
 			return -EPERM;
 	}
 
@@ -931,7 +931,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
 		goto unlock;
 
 	page = hyp_phys_to_page(phys);
-	switch (get_host_state(phys)) {
+	switch (get_host_state(page)) {
 	case PKVM_PAGE_OWNED:
 		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED));
 		break;
@@ -983,9 +983,9 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
 	if (WARN_ON(ret))
 		return ret;
 
-	if (get_host_state(phys) != PKVM_PAGE_SHARED_OWNED)
-		return -EPERM;
 	page = hyp_phys_to_page(phys);
+	if (get_host_state(page) != PKVM_PAGE_SHARED_OWNED)
+		return -EPERM;
 	if (WARN_ON(!page->host_share_guest_count))
 		return -EINVAL;
 
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index 6d513a4b3763..c19860fc8183 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -190,6 +190,7 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 				     enum kvm_pgtable_walk_flags visit)
 {
 	enum pkvm_page_state state;
+	struct hyp_page *page;
 	phys_addr_t phys;
 
 	if (!kvm_pte_valid(ctx->old))
@@ -202,6 +203,8 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	if (!addr_is_memory(phys))
 		return -EINVAL;
 
+	page = hyp_phys_to_page(phys);
+
 	/*
 	 * Adjust the host stage-2 mappings to match the ownership attributes
 	 * configured in the hypervisor stage-1, and make sure to propagate them
@@ -210,15 +213,15 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	state = pkvm_getstate(kvm_pgtable_hyp_pte_prot(ctx->old));
 	switch (state) {
 	case PKVM_PAGE_OWNED:
-		set_hyp_state(phys, PKVM_PAGE_OWNED);
+		set_hyp_state(page, PKVM_PAGE_OWNED);
 		return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
 	case PKVM_PAGE_SHARED_OWNED:
-		set_hyp_state(phys, PKVM_PAGE_SHARED_OWNED);
-		set_host_state(phys, PKVM_PAGE_SHARED_BORROWED);
+		set_hyp_state(page, PKVM_PAGE_SHARED_OWNED);
+		set_host_state(page, PKVM_PAGE_SHARED_BORROWED);
 		break;
 	case PKVM_PAGE_SHARED_BORROWED:
-		set_hyp_state(phys, PKVM_PAGE_SHARED_BORROWED);
-		set_host_state(phys, PKVM_PAGE_SHARED_OWNED);
+		set_hyp_state(page, PKVM_PAGE_SHARED_BORROWED);
+		set_host_state(page, PKVM_PAGE_SHARED_OWNED);
 		break;
 	default:
 		return -EINVAL;
-- 
2.49.0.1112.g889b7c5bd8-goog

From nobody Sun Dec 14 19:26:22 2025
Date: Wed, 21 May 2025 13:48:27 +0100
In-Reply-To: <20250521124834.1070650-1-vdonnefort@google.com>
References: <20250521124834.1070650-1-vdonnefort@google.com>
Message-ID: <20250521124834.1070650-4-vdonnefort@google.com>
Subject: [PATCH v6 03/10] KVM: arm64: Add a range to __pkvm_host_share_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
    suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
    will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
    kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
    kernel-team@android.com, Vincent Donnefort
Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" In preparation for supporting stage-2 huge mappings for np-guest. Add a nr_pages argument to the __pkvm_host_share_guest hypercall. This range supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512 on a 4K-pages system). Signed-off-by: Vincent Donnefort diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index 26016eb9323f..47aa7b01114f 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -39,7 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages); int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 59db9606e6e1..4d3d215955c3 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -245,7 +245,8 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) { DECLARE_REG(u64, pfn, host_ctxt, 1); DECLARE_REG(u64, gfn, host_ctxt, 2); - DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3); + DECLARE_REG(u64, nr_pages, host_ctxt, 3); + DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 4); struct pkvm_hyp_vcpu *hyp_vcpu; int ret =3D -EINVAL; =20 @@ -260,7 +261,7 @@ static void handle___pkvm_host_share_guest(struct kvm_c= pu_context *host_ctxt) if (ret) goto out; =20 - ret =3D __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot); + ret =3D 
__pkvm_host_share_guest(pfn, gfn, nr_pages, hyp_vcpu, prot); out: cpu_reg(host_ctxt, 1) =3D ret; } diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index 53bb029698c8..8051235ea194 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -695,10 +695,9 @@ static enum pkvm_page_state guest_get_page_state(kvm_p= te_t pte, u64 addr) return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte)); } =20 -static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 = addr, +static int __guest_check_page_state_range(struct pkvm_hyp_vm *vm, u64 addr, u64 size, enum pkvm_page_state state) { - struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); struct check_walk_data d =3D { .desired =3D state, .get_page_state =3D guest_get_page_state, @@ -907,48 +906,72 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages) return ret; } =20 -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, +static int __guest_check_transition_size(u64 phys, u64 ipa, u64 nr_pages, = u64 *size) +{ + if (nr_pages =3D=3D 1) { + *size =3D PAGE_SIZE; + return 0; + } + + return -EINVAL; +} + +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hy= p_vcpu *vcpu, enum kvm_pgtable_prot prot) { struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); u64 phys =3D hyp_pfn_to_phys(pfn); u64 ipa =3D hyp_pfn_to_phys(gfn); - struct hyp_page *page; + u64 size; int ret; =20 if (prot & ~KVM_PGTABLE_PROT_RWX) return -EINVAL; =20 - ret =3D check_range_allowed_memory(phys, phys + PAGE_SIZE); + ret =3D __guest_check_transition_size(phys, ipa, nr_pages, &size); + if (ret) + return ret; + + ret =3D check_range_allowed_memory(phys, phys + size); if (ret) return ret; =20 host_lock_component(); guest_lock_component(vm); =20 - ret =3D __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE); + ret =3D __guest_check_page_state_range(vm, ipa, size, PKVM_NOPAGE); if (ret) goto unlock; =20 - 
page =3D hyp_phys_to_page(phys); - switch (get_host_state(page)) { - case PKVM_PAGE_OWNED: - WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OW= NED)); - break; - case PKVM_PAGE_SHARED_OWNED: - if (page->host_share_guest_count) - break; - /* Only host to np-guest multi-sharing is tolerated */ - fallthrough; - default: - ret =3D -EPERM; - goto unlock; + for_each_hyp_page(page, phys, size) { + switch (get_host_state(page)) { + case PKVM_PAGE_OWNED: + continue; + case PKVM_PAGE_SHARED_OWNED: + if (page->host_share_guest_count =3D=3D U32_MAX) { + ret =3D -EBUSY; + goto unlock; + } + + /* Only host to np-guest multi-sharing is tolerated */ + if (page->host_share_guest_count) + continue; + + fallthrough; + default: + ret =3D -EPERM; + goto unlock; + } } =20 - WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys, + for_each_hyp_page(page, phys, size) { + set_host_state(page, PKVM_PAGE_SHARED_OWNED); + page->host_share_guest_count++; + } + + WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, size, phys, pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED), &vcpu->vcpu.arch.pkvm_memcache, 0)); - page->host_share_guest_count++; =20 unlock: guest_unlock_component(vm); @@ -1169,6 +1192,9 @@ static void assert_page_state(void) struct pkvm_hyp_vcpu *vcpu =3D &selftest_vcpu; u64 phys =3D hyp_virt_to_phys(virt); u64 ipa[2] =3D { selftest_ipa(), selftest_ipa() + PAGE_SIZE }; + struct pkvm_hyp_vm *vm; + + vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); =20 host_lock_component(); WARN_ON(__host_check_page_state_range(phys, size, selftest_state.host)); @@ -1179,8 +1205,8 @@ static void assert_page_state(void) hyp_unlock_component(); =20 guest_lock_component(&selftest_vm); - WARN_ON(__guest_check_page_state_range(vcpu, ipa[0], size, selftest_state= .guest[0])); - WARN_ON(__guest_check_page_state_range(vcpu, ipa[1], size, selftest_state= .guest[1])); + WARN_ON(__guest_check_page_state_range(vm, ipa[0], size, selftest_state.g= uest[0])); + 
WARN_ON(__guest_check_page_state_range(vm, ipa[1], size, selftest_state.g= uest[1])); guest_unlock_component(&selftest_vm); } =20 @@ -1218,7 +1244,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_unshare_ffa, pfn, 1); assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); =20 selftest_state.host =3D PKVM_PAGE_OWNED; @@ -1237,7 +1263,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); =20 assert_transition_res(0, hyp_pin_shared_mem, virt, virt + size); @@ -1249,7 +1275,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); =20 hyp_unpin_shared_mem(virt, virt + size); @@ -1268,7 +1294,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, __pkvm_host_share_hyp, pfn); assert_transition_res(-EPERM, __pkvm_host_unshare_hyp, pfn); assert_transition_res(-EPERM, __pkvm_hyp_donate_host, pfn, 1); - 
assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-ENOENT, __pkvm_host_unshare_guest, gfn, vm); assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size); =20 @@ -1279,8 +1305,8 @@ void pkvm_ownership_selftest(void *base) =20 selftest_state.host =3D PKVM_PAGE_SHARED_OWNED; selftest_state.guest[0] =3D PKVM_PAGE_SHARED_BORROWED; - assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn, vcpu, prot); - assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, vcpu, pr= ot); + assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot= ); + assert_transition_res(-EPERM, __pkvm_host_share_guest, pfn, gfn, 1, vcpu,= prot); assert_transition_res(-EPERM, __pkvm_host_share_ffa, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_donate_hyp, pfn, 1); assert_transition_res(-EPERM, __pkvm_host_share_hyp, pfn); @@ -1289,7 +1315,7 @@ void pkvm_ownership_selftest(void *base) assert_transition_res(-EPERM, hyp_pin_shared_mem, virt, virt + size); =20 selftest_state.guest[1] =3D PKVM_PAGE_SHARED_BORROWED; - assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn + 1, vcpu, pro= t); + assert_transition_res(0, __pkvm_host_share_guest, pfn, gfn + 1, 1, vcpu, = prot); WARN_ON(hyp_virt_to_page(virt)->host_share_guest_count !=3D 2); =20 selftest_state.guest[0] =3D PKVM_NOPAGE; diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 83a737484046..0285e2cd2e7f 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -347,7 +347,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u6= 4 addr, u64 size, return -EINVAL; =20 lockdep_assert_held_write(&kvm->mmu_lock); - ret =3D kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot); + ret =3D kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot); if (ret) { /* Is the gfn already mapped due to a racing vCPU? 
 		 */
 		if (ret == -EPERM)
-- 
2.49.0.1112.g889b7c5bd8-goog

From nobody Sun Dec 14 19:26:22 2025
Date: Wed, 21 May 2025 13:48:28 +0100
In-Reply-To: <20250521124834.1070650-1-vdonnefort@google.com>
References: <20250521124834.1070650-1-vdonnefort@google.com>
Message-ID: <20250521124834.1070650-5-vdonnefort@google.com>
Subject: [PATCH v6 04/10] KVM: arm64: Add a range to __pkvm_host_unshare_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guest, add a
nr_pages argument to the __pkvm_host_unshare_guest hypercall. This range
supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512 on a
4K-pages system).
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 47aa7b01114f..19671edbe18f 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -41,7 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
 			    enum kvm_pgtable_prot prot);
-int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
 int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 4d3d215955c3..5c03bd1db873 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -270,6 +270,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;
 
@@ -280,7 +281,7 @@ static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_ctxt)
 	if (!hyp_vm)
 		goto out;
 
-	ret = __pkvm_host_unshare_guest(gfn, hyp_vm);
+	ret = __pkvm_host_unshare_guest(gfn, nr_pages, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 8051235ea194..2703bce3b773 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -980,10 +980,9 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 	return ret;
 }
 
-static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa)
+static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ipa, u64 size)
 {
 	enum pkvm_page_state state;
-	struct hyp_page *page;
 	kvm_pte_t pte;
 	u64 phys;
 	s8 level;
@@ -994,7 +993,7 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
 		return ret;
 	if (!kvm_pte_valid(pte))
 		return -ENOENT;
-	if (level != KVM_PGTABLE_LAST_LEVEL)
+	if (kvm_granule_size(level) != size)
 		return -E2BIG;
 
 	state = guest_get_page_state(pte, ipa);
@@ -1002,43 +1001,49 @@ static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, u64 ip
 		return -EPERM;
 
 	phys = kvm_pte_to_phys(pte);
-	ret = check_range_allowed_memory(phys, phys + PAGE_SIZE);
+	ret = check_range_allowed_memory(phys, phys + size);
 	if (WARN_ON(ret))
 		return ret;
 
-	page = hyp_phys_to_page(phys);
-	if (get_host_state(page) != PKVM_PAGE_SHARED_OWNED)
-		return -EPERM;
-	if (WARN_ON(!page->host_share_guest_count))
-		return -EINVAL;
+	for_each_hyp_page(page, phys, size) {
+		if (get_host_state(page) != PKVM_PAGE_SHARED_OWNED)
+			return -EPERM;
+		if (WARN_ON(!page->host_share_guest_count))
+			return -EINVAL;
+	}
 
 	*__phys = phys;
 
 	return 0;
 }
 
-int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
 	u64 ipa = hyp_pfn_to_phys(gfn);
-	struct hyp_page *page;
-	u64 phys;
+	u64 size, phys;
 	int ret;
 
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
 	host_lock_component();
 	guest_lock_component(vm);
 
-	ret = __check_host_shared_guest(vm, &phys, ipa);
+	ret = __check_host_shared_guest(vm, &phys, ipa, size);
 	if (ret)
 		goto unlock;
 
-	ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_unmap(&vm->pgt, ipa, size);
 	if (ret)
 		goto unlock;
 
-	page = hyp_phys_to_page(phys);
-	page->host_share_guest_count--;
-	if (!page->host_share_guest_count)
-		WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED));
+	for_each_hyp_page(page, phys, size) {
+		/* __check_host_shared_guest() protects against underflow */
+		page->host_share_guest_count--;
+		if (!page->host_share_guest_count)
+			set_host_state(page, PKVM_PAGE_OWNED);
+	}
 
 unlock:
 	guest_unlock_component(vm);
@@ -1058,7 +1063,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
 	host_lock_component();
 	guest_lock_component(vm);
 
-	ret = __check_host_shared_guest(vm, &phys, ipa);
+	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);
 
 	guest_unlock_component(vm);
 	host_unlock_component();
@@ -1245,7 +1250,7 @@ void pkvm_ownership_selftest(void *base)
 	assert_transition_res(-EPERM,	__pkvm_host_unshare_ffa, pfn, 1);
 	assert_transition_res(-EPERM,	hyp_pin_shared_mem, virt, virt + size);
 	assert_transition_res(-EPERM,	__pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT,	__pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT,	__pkvm_host_unshare_guest, gfn, 1, vm);
 
 	selftest_state.host = PKVM_PAGE_OWNED;
 	selftest_state.hyp = PKVM_NOPAGE;
@@ -1253,7 +1258,7 @@ void pkvm_ownership_selftest(void *base)
 	assert_transition_res(-EPERM,	__pkvm_hyp_donate_host, pfn, 1);
 	assert_transition_res(-EPERM,	__pkvm_host_unshare_hyp, pfn);
 	assert_transition_res(-EPERM,	__pkvm_host_unshare_ffa, pfn, 1);
-	assert_transition_res(-ENOENT,	__pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT,	__pkvm_host_unshare_guest, gfn, 1, vm);
 	assert_transition_res(-EPERM,	hyp_pin_shared_mem, virt, virt + size);
 
 	selftest_state.host = PKVM_PAGE_SHARED_OWNED;
@@ -1264,7 +1269,7 @@ void pkvm_ownership_selftest(void *base)
 	assert_transition_res(-EPERM,	__pkvm_host_share_ffa, pfn, 1);
 	assert_transition_res(-EPERM,	__pkvm_hyp_donate_host, pfn, 1);
 	assert_transition_res(-EPERM,	__pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT,	__pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT,	__pkvm_host_unshare_guest, gfn, 1, vm);
 
 	assert_transition_res(0,	hyp_pin_shared_mem, virt, virt + size);
 	assert_transition_res(0,	hyp_pin_shared_mem, virt, virt + size);
@@ -1276,7 +1281,7 @@ void pkvm_ownership_selftest(void *base)
 	assert_transition_res(-EPERM,	__pkvm_host_share_ffa, pfn, 1);
 	assert_transition_res(-EPERM,	__pkvm_hyp_donate_host, pfn, 1);
 	assert_transition_res(-EPERM,	__pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT,	__pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT,	__pkvm_host_unshare_guest, gfn, 1, vm);
 
 	hyp_unpin_shared_mem(virt, virt + size);
 	assert_page_state();
@@ -1295,7 +1300,7 @@ void pkvm_ownership_selftest(void *base)
 	assert_transition_res(-EPERM,	__pkvm_host_unshare_hyp, pfn);
 	assert_transition_res(-EPERM,	__pkvm_hyp_donate_host, pfn, 1);
 	assert_transition_res(-EPERM,	__pkvm_host_share_guest, pfn, gfn, 1, vcpu, prot);
-	assert_transition_res(-ENOENT,	__pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(-ENOENT,	__pkvm_host_unshare_guest, gfn, 1, vm);
 	assert_transition_res(-EPERM,	hyp_pin_shared_mem, virt, virt + size);
 
 	selftest_state.host = PKVM_PAGE_OWNED;
@@ -1319,11 +1324,11 @@ void pkvm_ownership_selftest(void *base)
 	WARN_ON(hyp_virt_to_page(virt)->host_share_guest_count != 2);
 
 	selftest_state.guest[0] = PKVM_NOPAGE;
-	assert_transition_res(0,	__pkvm_host_unshare_guest, gfn, vm);
+	assert_transition_res(0,	__pkvm_host_unshare_guest, gfn, 1, vm);
 
 	selftest_state.guest[1] = PKVM_NOPAGE;
 	selftest_state.host = PKVM_PAGE_OWNED;
-	assert_transition_res(0,	__pkvm_host_unshare_guest, gfn + 1, vm);
+	assert_transition_res(0,	__pkvm_host_unshare_guest, gfn + 1, 1, vm);
 
 	selftest_state.host = PKVM_NOPAGE;
 	selftest_state.hyp = PKVM_PAGE_OWNED;
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 0285e2cd2e7f..f77c5157a8d7 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -371,7 +371,7 @@ int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
 		if (WARN_ON(ret))
 			break;
 		rb_erase(&mapping->node, &pgt->pkvm_mappings);
-- 
2.49.0.1112.g889b7c5bd8-goog

From nobody Sun Dec 14 19:26:22 2025
Date: Wed, 21 May 2025 13:48:29 +0100
In-Reply-To: <20250521124834.1070650-1-vdonnefort@google.com>
References: <20250521124834.1070650-1-vdonnefort@google.com>
Message-ID: <20250521124834.1070650-6-vdonnefort@google.com>
Subject: [PATCH v6 05/10] KVM: arm64: Add a range to __pkvm_host_wrprotect_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guest, add a
nr_pages argument to the __pkvm_host_wrprotect_guest hypercall. This range
supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512 on a
4K-pages system).
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 19671edbe18f..64d4f3bf6269 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 
 bool addr_is_memory(phys_addr_t phys);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index 5c03bd1db873..fa7e2421d359 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -310,6 +310,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;
 
@@ -320,7 +321,7 @@ static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *host_ctxt
 	if (!hyp_vm)
 		goto out;
 
-	ret = __pkvm_host_wrprotect_guest(gfn, hyp_vm);
+	ret = __pkvm_host_wrprotect_guest(gfn, nr_pages, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 2703bce3b773..569adeaa0869 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1052,7 +1052,7 @@ int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 	return ret;
 }
 
-static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
+static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa, u64 size)
 {
 	u64 phys;
 	int ret;
@@ -1063,7 +1063,7 @@ static void assert_host_shared_guest(struct pkvm_hyp_vm *vm, u64 ipa)
 	host_lock_component();
 	guest_lock_component(vm);
 
-	ret = __check_host_shared_guest(vm, &phys, ipa, PAGE_SIZE);
+	ret = __check_host_shared_guest(vm, &phys, ipa, size);
 
 	guest_unlock_component(vm);
 	host_unlock_component();
@@ -1083,7 +1083,7 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
 	if (prot & ~KVM_PGTABLE_PROT_RWX)
 		return -EINVAL;
 
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0);
 	guest_unlock_component(vm);
@@ -1091,17 +1091,21 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_
 	return ret;
 }
 
-int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm)
+int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
 	int ret;
 
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	assert_host_shared_guest(vm, ipa, size);
 	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE);
+	ret = kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, size);
 	guest_unlock_component(vm);
 
 	return ret;
@@ -1115,7 +1119,7 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
 	guest_unlock_component(vm);
@@ -1131,7 +1135,7 @@ int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu)
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa);
+	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
 	guest_lock_component(vm);
 	kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0);
 	guest_unlock_component(vm);
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index f77c5157a8d7..daab4a00790a 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -390,7 +390,7 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
 		if (WARN_ON(ret))
 			break;
 	}
-- 
2.49.0.1112.g889b7c5bd8-goog

From nobody Sun Dec 14 19:26:22 2025
Date: Wed, 21 May 2025 13:48:30 +0100
In-Reply-To: <20250521124834.1070650-1-vdonnefort@google.com>
References: <20250521124834.1070650-1-vdonnefort@google.com>
Message-ID: <20250521124834.1070650-7-vdonnefort@google.com>
Subject: [PATCH v6 06/10] KVM: arm64: Add a range to __pkvm_host_test_clear_young_guest()
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort

In preparation for supporting stage-2 huge mappings for np-guest, add a
nr_pages argument to the __pkvm_host_test_clear_young_guest hypercall. This
range supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is 512 on a
4K-pages system).

Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 64d4f3bf6269..5f9d56754e39 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -43,8 +43,8 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
 			    enum kvm_pgtable_prot prot);
 int __pkvm_host_unshare_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
 int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *hyp_vm);
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm);
 int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu);
 
 bool addr_is_memory(phys_addr_t phys);
diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
index fa7e2421d359..8e8848de4d47 100644
--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
@@ -331,7 +331,8 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 {
 	DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1);
 	DECLARE_REG(u64, gfn, host_ctxt, 2);
-	DECLARE_REG(bool, mkold, host_ctxt, 3);
+	DECLARE_REG(u64, nr_pages, host_ctxt, 3);
+	DECLARE_REG(bool, mkold, host_ctxt, 4);
 	struct pkvm_hyp_vm *hyp_vm;
 	int ret = -EINVAL;
 
@@ -342,7 +343,7 @@ static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_context *ho
 	if (!hyp_vm)
 		goto out;
 
-	ret = __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm);
+	ret = __pkvm_host_test_clear_young_guest(gfn, nr_pages, mkold, hyp_vm);
 	put_pkvm_hyp_vm(hyp_vm);
 out:
 	cpu_reg(host_ctxt, 1) = ret;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 569adeaa0869..e08c735206e0 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -1111,17 +1111,21 @@ int __pkvm_host_wrprotect_guest(u64 gfn, u64 nr_pages, struct pkvm_hyp_vm *vm)
 	return ret;
 }
 
-int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hyp_vm *vm)
+int __pkvm_host_test_clear_young_guest(u64 gfn, u64 nr_pages, bool mkold, struct pkvm_hyp_vm *vm)
 {
-	u64 ipa = hyp_pfn_to_phys(gfn);
+	u64 size, ipa = hyp_pfn_to_phys(gfn);
 	int ret;
 
 	if (pkvm_hyp_vm_is_protected(vm))
 		return -EPERM;
 
-	assert_host_shared_guest(vm, ipa, PAGE_SIZE);
+	ret = __guest_check_transition_size(0, ipa, nr_pages, &size);
+	if (ret)
+		return ret;
+
+	assert_host_shared_guest(vm, ipa, size);
 	guest_lock_component(vm);
-	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mkold);
+	ret = kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, size, mkold);
 	guest_unlock_component(vm);
 
 	return ret;
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index daab4a00790a..057874bbe3e1 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -420,7 +420,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   mkold);
+					   1, mkold);
 
 	return young;
 }
-- 
2.49.0.1112.g889b7c5bd8-goog

From nobody Sun Dec 14 19:26:22 2025
Date: Wed, 21 May 2025 13:48:31 +0100
In-Reply-To: <20250521124834.1070650-1-vdonnefort@google.com>
References: <20250521124834.1070650-1-vdonnefort@google.com>
Message-ID: <20250521124834.1070650-8-vdonnefort@google.com>
Subject: [PATCH v6 07/10] KVM: arm64: Convert pkvm_mappings to interval tree
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort

From: Quentin Perret

In preparation for supporting stage-2 huge mappings for np-guest, let's
convert pgt.pkvm_mappings to an interval tree.

No functional change intended.

Suggested-by: Vincent Donnefort
Signed-off-by: Quentin Perret
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 6b9d274052c7..1b43bcd2a679 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -413,7 +413,7 @@ static inline bool kvm_pgtable_walk_lock_held(void)
  */
 struct kvm_pgtable {
 	union {
-		struct rb_root		pkvm_mappings;
+		struct rb_root_cached	pkvm_mappings;
 		struct {
 			u32		ia_bits;
 			s8		start_level;
diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index d91bfcf2db56..da75d41c948c 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -173,6 +173,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 __subtree_last;	/* Internal member for interval tree */
 };
 
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 057874bbe3e1..8a1a2faf66a8 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -5,6 +5,7 @@
  */
 
 #include
+#include
 #include
 #include
 #include
@@ -256,80 +257,67 @@ static int __init finalize_pkvm(void)
 }
 device_initcall_sync(finalize_pkvm);
 
-static int cmp_mappings(struct rb_node *node, const struct rb_node *parent)
+static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 {
-	struct pkvm_mapping *a = rb_entry(node, struct pkvm_mapping, node);
-	struct pkvm_mapping *b = rb_entry(parent, struct pkvm_mapping, node);
-
-	if (a->gfn < b->gfn)
-		return -1;
-	if (a->gfn > b->gfn)
-		return 1;
-	return 0;
+	return m->gfn * PAGE_SIZE;
 }
 
-static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 gfn)
+static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	struct rb_node *node = root->rb_node, *prev = NULL;
-	struct pkvm_mapping *mapping;
-
-	while (node) {
-		mapping = rb_entry(node, struct pkvm_mapping, node);
-		if (mapping->gfn == gfn)
-			return node;
-		prev = node;
-		node = (gfn < mapping->gfn) ? node->rb_left : node->rb_right;
-	}
-
-	return prev;
+	return (m->gfn + 1) * PAGE_SIZE - 1;
 }
 
+INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
+		     __pkvm_mapping_start, __pkvm_mapping_end, static,
+		     pkvm_mapping);
+
 /*
- * __tmp is updated to rb_next(__tmp) *before* entering the body of the loop to allow freeing
- * of __map inline.
+ * __tmp is updated to iter_first(pkvm_mappings) *before* entering the body of the loop to allow
+ * freeing of __map inline.
  */
 #define for_each_mapping_in_range_safe(__pgt, __start, __end, __map)			\
-	for (struct rb_node *__tmp = find_first_mapping_node(&(__pgt)->pkvm_mappings,	\
-							     ((__start) >> PAGE_SHIFT));\
+	for (struct pkvm_mapping *__tmp = pkvm_mapping_iter_first(&(__pgt)->pkvm_mappings,\
+								  __start, __end - 1);	\
 	     __tmp && ({								\
-		     __map = rb_entry(__tmp, struct pkvm_mapping, node);		\
-		     __tmp = rb_next(__tmp);						\
+		     __map = __tmp;							\
+		     __tmp = pkvm_mapping_iter_next(__map, __start, __end - 1);		\
 		     true;								\
 	       });									\
-	     )										\
-		if (__map->gfn < ((__start) >> PAGE_SHIFT))				\
-			continue;							\
-		else if (__map->gfn >= ((__end) >> PAGE_SHIFT))				\
-			break;								\
-		else
+	     )
 
 int pkvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
			     struct kvm_pgtable_mm_ops *mm_ops)
 {
-	pgt->pkvm_mappings	= RB_ROOT;
+	pgt->pkvm_mappings	= RB_ROOT_CACHED;
 	pgt->mmu		= mmu;
 
 	return 0;
 }
 
-void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 end)
 {
 	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
 	pkvm_handle_t handle = kvm->arch.pkvm.handle;
 	struct pkvm_mapping *mapping;
-	struct rb_node *node;
+	int ret;
 
 	if (!handle)
-		return;
+		return 0;
 
-	node = rb_first(&pgt->pkvm_mappings);
-	while (node) {
-		mapping = rb_entry(node, struct pkvm_mapping, node);
-		kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn);
-		node = rb_next(node);
-		rb_erase(&mapping->node, &pgt->pkvm_mappings);
+	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		if (WARN_ON(ret))
+			return ret;
+		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
 		kfree(mapping);
 	}
+
+	return 0;
+}
+
+void pkvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt)
+{
+	__pkvm_pgtable_stage2_unmap(pgt, 0, ~(0ULL));
 }
 
 int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
@@ -357,28 +345,16 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
-	WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm_mappings, cmp_mappings));
+	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 
 	return ret;
 }
 
 int pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
 {
-	struct kvm *kvm = kvm_s2_mmu_to_kvm(pgt->mmu);
-	pkvm_handle_t handle = kvm->arch.pkvm.handle;
-	struct pkvm_mapping *mapping;
-	int ret = 0;
-
-	lockdep_assert_held_write(&kvm->mmu_lock);
-	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
-		if (WARN_ON(ret))
-			break;
-		rb_erase(&mapping->node, &pgt->pkvm_mappings);
-		kfree(mapping);
-	}
+	lockdep_assert_held_write(&kvm_s2_mmu_to_kvm(pgt->mmu)->mmu_lock);
 
-	return ret;
+	return __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
 }
 
 int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
-- 
2.49.0.1112.g889b7c5bd8-goog

From nobody Sun Dec 14 19:26:22 2025
Date: Wed, 21 May 2025 13:48:32 +0100
In-Reply-To: <20250521124834.1070650-1-vdonnefort@google.com>
References: <20250521124834.1070650-1-vdonnefort@google.com>
Message-ID: <20250521124834.1070650-9-vdonnefort@google.com>
Subject: [PATCH v6 08/10] KVM: arm64: Add a range to pkvm_mappings
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort

From: Quentin Perret

In preparation for supporting
stage-2 huge mappings for np-guest, add a nr_pages member for
pkvm_mappings to allow EL1 to track the size of the stage-2 mapping.

Signed-off-by: Quentin Perret
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm_pkvm.h
index da75d41c948c..ea58282f59bb 100644
--- a/arch/arm64/include/asm/kvm_pkvm.h
+++ b/arch/arm64/include/asm/kvm_pkvm.h
@@ -173,6 +173,7 @@ struct pkvm_mapping {
 	struct rb_node node;
 	u64 gfn;
 	u64 pfn;
+	u64 nr_pages;
 	u64 __subtree_last;	/* Internal member for interval tree */
 };
 
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index 8a1a2faf66a8..b1a65f50c02a 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -264,7 +264,7 @@ static u64 __pkvm_mapping_start(struct pkvm_mapping *m)
 
 static u64 __pkvm_mapping_end(struct pkvm_mapping *m)
 {
-	return (m->gfn + 1) * PAGE_SIZE - 1;
+	return (m->gfn + m->nr_pages) * PAGE_SIZE - 1;
 }
 
 INTERVAL_TREE_DEFINE(struct pkvm_mapping, node, u64, __subtree_last,
@@ -305,7 +305,8 @@ static int __pkvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 start, u64 e
 		return 0;
 
 	for_each_mapping_in_range_safe(pgt, start, end, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			return ret;
 		pkvm_mapping_remove(mapping, &pgt->pkvm_mappings);
@@ -335,16 +336,32 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, 1, prot);
-	if (ret) {
-		/* Is the gfn already mapped due to a racing vCPU? */
-		if (ret == -EPERM)
+
+	/*
+	 * Calling stage2_map() on top of existing mappings is either happening because of a race
+	 * with another vCPU, or because we're changing between page and block mappings. As per
+	 * user_mem_abort(), same-size permission faults are handled in the relax_perms() path.
+	 */
+	mapping = pkvm_mapping_iter_first(&pgt->pkvm_mappings, addr, addr + size - 1);
+	if (mapping) {
+		if (size == (mapping->nr_pages * PAGE_SIZE))
 			return -EAGAIN;
+
+		/* Remove _any_ pkvm_mapping overlapping with the range, bigger or smaller. */
+		ret = __pkvm_pgtable_stage2_unmap(pgt, addr, addr + size);
+		if (ret)
+			return ret;
+		mapping = NULL;
 	}
 
+	ret = kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, size / PAGE_SIZE, prot);
+	if (WARN_ON(ret))
+		return ret;
+
 	swap(mapping, cache->mapping);
 	mapping->gfn = gfn;
 	mapping->pfn = pfn;
+	mapping->nr_pages = size / PAGE_SIZE;
 	pkvm_mapping_insert(mapping, &pgt->pkvm_mappings);
 
 	return ret;
@@ -366,7 +383,8 @@ int pkvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) {
-		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn, 1);
+		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->gfn,
+					mapping->nr_pages);
 		if (WARN_ON(ret))
 			break;
 	}
@@ -381,7 +399,8 @@ int pkvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
 
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
-		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE);
+		__clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn),
+					  PAGE_SIZE * mapping->nr_pages);
 
 	return 0;
 }
@@ -396,7 +415,7 @@ bool pkvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64
 	lockdep_assert_held(&kvm->mmu_lock);
 	for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping)
 		young |= kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle, mapping->gfn,
-					   1, mkold);
+					   mapping->nr_pages, mkold);
 
 	return young;
 }
-- 
2.49.0.1112.g889b7c5bd8-goog

From nobody Sun Dec 14
19:26:22 2025
Date: Wed, 21 May 2025 13:48:33 +0100
In-Reply-To: <20250521124834.1070650-1-vdonnefort@google.com>
References: <20250521124834.1070650-1-vdonnefort@google.com>
Message-ID: <20250521124834.1070650-10-vdonnefort@google.com>
Subject: [PATCH v6 09/10] KVM: arm64: Stage-2 huge mappings for np-guests
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort

Now that np-guest hypercalls with a range are supported, we can let the
hypervisor install block mappings whenever the stage-1 allows it, that
is, when backed by either Hugetlbfs or THPs. The size of those block
mappings is limited to PMD_SIZE.
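[Not part of the original mail; illustrative note.] The accepted sizes described above (a single page, or one naturally aligned PMD-sized block) can be modelled in plain userspace C. Everything below is a sketch under assumed constants (4K pages, so a 2MiB block of 512 pages); the names are invented for illustration and are not the kernel's API:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative userspace model of the hypervisor-side size check.
 * Assumptions: 4K pages, block mappings only at the PMD level (2MiB). */
#define SKETCH_PAGE_SHIFT	12
#define SKETCH_PAGE_SIZE	(1ULL << SKETCH_PAGE_SHIFT)
#define SKETCH_BLOCK_SIZE	(SKETCH_PAGE_SIZE << 9)	/* PMD_SIZE on 4K */

/* Returns 0 and fills *size on success, -22 (i.e. -EINVAL) otherwise. */
static int check_transition_size(uint64_t phys, uint64_t ipa,
				 uint64_t nr_pages, uint64_t *size)
{
	/* A single page is always acceptable. */
	if (nr_pages == 1) {
		*size = SKETCH_PAGE_SIZE;
		return 0;
	}

	/* Otherwise only a full block's worth of pages is accepted... */
	if (nr_pages != SKETCH_BLOCK_SIZE >> SKETCH_PAGE_SHIFT)
		return -22;

	/* ...and both addresses must be block-aligned. */
	if ((phys | ipa) & (SKETCH_BLOCK_SIZE - 1))
		return -22;

	*size = SKETCH_BLOCK_SIZE;
	return 0;
}
```

Anything that is neither 1 page nor an aligned 512-page block is rejected, which matches the "only two values" contract stated for the range hypercalls.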
Signed-off-by: Vincent Donnefort

diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index e08c735206e0..1c18fca82209 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -166,12 +166,6 @@ int kvm_host_prepare_stage2(void *pgt_pool_base)
 	return 0;
 }
 
-static bool guest_stage2_force_pte_cb(u64 addr, u64 end,
-				      enum kvm_pgtable_prot prot)
-{
-	return true;
-}
-
 static void *guest_s2_zalloc_pages_exact(size_t size)
 {
 	void *addr = hyp_alloc_pages(&current_vm->pool, get_order(size));
@@ -278,8 +272,7 @@ int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
 	};
 
 	guest_lock_component(vm);
-	ret = __kvm_pgtable_stage2_init(mmu->pgt, mmu, &vm->mm_ops, 0,
-					guest_stage2_force_pte_cb);
+	ret = __kvm_pgtable_stage2_init(mmu->pgt, mmu, &vm->mm_ops, 0, NULL);
 	guest_unlock_component(vm);
 	if (ret)
 		return ret;
@@ -908,12 +901,24 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
 
 static int __guest_check_transition_size(u64 phys, u64 ipa, u64 nr_pages, u64 *size)
 {
+	size_t block_size;
+
 	if (nr_pages == 1) {
 		*size = PAGE_SIZE;
 		return 0;
 	}
 
-	return -EINVAL;
+	/* We solely support second to last level huge mapping */
+	block_size = kvm_granule_size(KVM_PGTABLE_LAST_LEVEL - 1);
+
+	if (nr_pages != block_size >> PAGE_SHIFT)
+		return -EINVAL;
+
+	if (!IS_ALIGNED(phys | ipa, block_size))
+		return -EINVAL;
+
+	*size = block_size;
+	return 0;
 }
 
 int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 754f2fe0cc67..e445db2cb4a4 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1304,6 +1304,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	if (map_size == PAGE_SIZE)
 		return true;
 
+	/* pKVM only supports PMD_SIZE huge-mappings */
+	if (is_protected_kvm_enabled() && map_size != PMD_SIZE)
+		return false;
+
 	size = memslot->npages * PAGE_SIZE;
 
 	gpa_start = memslot->base_gfn << PAGE_SHIFT;
@@ -1537,7 +1541,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * logging_active is guaranteed to never be true for VM_PFNMAP
 	 * memslots.
	 */
-	if (logging_active || is_protected_kvm_enabled()) {
+	if (logging_active) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	} else {
diff --git a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c
index b1a65f50c02a..fcd70bfe44fb 100644
--- a/arch/arm64/kvm/pkvm.c
+++ b/arch/arm64/kvm/pkvm.c
@@ -332,7 +332,7 @@ int pkvm_pgtable_stage2_map(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	u64 pfn = phys >> PAGE_SHIFT;
 	int ret;
 
-	if (size != PAGE_SIZE)
+	if (size != PAGE_SIZE && size != PMD_SIZE)
 		return -EINVAL;
 
 	lockdep_assert_held_write(&kvm->mmu_lock);
-- 
2.49.0.1112.g889b7c5bd8-goog

From nobody Sun Dec 14 19:26:22 2025
Date: Wed, 21 May 2025 13:48:34 +0100
In-Reply-To: <20250521124834.1070650-1-vdonnefort@google.com>
References: <20250521124834.1070650-1-vdonnefort@google.com>
Message-ID: <20250521124834.1070650-11-vdonnefort@google.com>
Subject: [PATCH v6 10/10] KVM: arm64: np-guest CMOs with PMD_SIZE fixmap
From: Vincent Donnefort
To: maz@kernel.org, oliver.upton@linux.dev, joey.gouly@arm.com,
	suzuki.poulose@arm.com, yuzenghui@huawei.com, catalin.marinas@arm.com,
	will@kernel.org
Cc: qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com, Vincent Donnefort

With the introduction of stage-2 huge mappings in the pKVM hypervisor,
CMOs on guest pages are now needed at PMD_SIZE granularity. The fixmap
only supports PAGE_SIZE, and iterating over the huge page is time
consuming (mostly due to the TLBI on hyp_fixmap_unmap), which is a
problem for EL2 latency.
Introduce a shared PMD_SIZE fixmap (hyp_fixblock_map()/hyp_fixblock_unmap())
to speed up guest page CMOs when stage-2 huge mappings are installed.
On a Pixel 6, the iterative solution resulted in a latency of ~700us,
while the PMD_SIZE fixmap reduces it to ~100us. Because of the horrendous
private range allocation that would be necessary, this is disabled on
64KiB page systems.

Suggested-by: Quentin Perret
Signed-off-by: Vincent Donnefort
Signed-off-by: Quentin Perret

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 1b43bcd2a679..2888b5d03757 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -59,6 +59,11 @@ typedef u64 kvm_pte_t;
 
 #define KVM_PHYS_INVALID		(-1ULL)
 
+#define KVM_PTE_TYPE			BIT(1)
+#define KVM_PTE_TYPE_BLOCK		0
+#define KVM_PTE_TYPE_PAGE		1
+#define KVM_PTE_TYPE_TABLE		1
+
 #define KVM_PTE_LEAF_ATTR_LO		GENMASK(11, 2)
 
 #define KVM_PTE_LEAF_ATTR_LO_S1_ATTRIDX	GENMASK(4, 2)
diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
index 230e4f2527de..6e83ce35c2f2 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
@@ -13,9 +13,11 @@
 extern struct kvm_pgtable pkvm_pgtable;
 extern hyp_spinlock_t pkvm_pgd_lock;
 
-int hyp_create_pcpu_fixmap(void);
+int hyp_create_fixmap(void);
 void *hyp_fixmap_map(phys_addr_t phys);
 void hyp_fixmap_unmap(void);
+void *hyp_fixblock_map(phys_addr_t phys, size_t *size);
+void hyp_fixblock_unmap(void);
 
 int hyp_create_idmap(u32 hyp_va_bits);
 int hyp_map_vectors(void);
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index 1c18fca82209..f67c1a91e4eb 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -216,34 +216,42 @@ static void guest_s2_put_page(void *addr)
 	hyp_put_page(&current_vm->pool, addr);
 }
 
-static void clean_dcache_guest_page(void *va, size_t size)
+static void __apply_guest_page(void *va, size_t size,
+			       void (*func)(void *addr, size_t size))
 {
 	size += va - PTR_ALIGN_DOWN(va, PAGE_SIZE);
 	va = PTR_ALIGN_DOWN(va, PAGE_SIZE);
 	size = PAGE_ALIGN(size);
 
 	while (size) {
-		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					  PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
+		size_t map_size = PAGE_SIZE;
+		void *map;
+
+		if (IS_ALIGNED((unsigned long)va, PMD_SIZE) && size >= PMD_SIZE)
+			map = hyp_fixblock_map(__hyp_pa(va), &map_size);
+		else
+			map = hyp_fixmap_map(__hyp_pa(va));
+
+		func(map, map_size);
+
+		if (map_size == PMD_SIZE)
+			hyp_fixblock_unmap();
+		else
+			hyp_fixmap_unmap();
+
+		size -= map_size;
+		va += map_size;
 	}
 }
 
-static void invalidate_icache_guest_page(void *va, size_t size)
+static void clean_dcache_guest_page(void *va, size_t size)
 {
-	size += va - PTR_ALIGN_DOWN(va, PAGE_SIZE);
-	va = PTR_ALIGN_DOWN(va, PAGE_SIZE);
-	size = PAGE_ALIGN(size);
+	__apply_guest_page(va, size, __clean_dcache_guest_page);
+}
 
-	while (size) {
-		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
-					       PAGE_SIZE);
-		hyp_fixmap_unmap();
-		va += PAGE_SIZE;
-		size -= PAGE_SIZE;
-	}
+static void invalidate_icache_guest_page(void *va, size_t size)
+{
+	__apply_guest_page(va, size, __invalidate_icache_guest_page);
 }
 
 int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
index f41c7440b34b..ae8391baebc3 100644
--- a/arch/arm64/kvm/hyp/nvhe/mm.c
+++ b/arch/arm64/kvm/hyp/nvhe/mm.c
@@ -229,9 +229,8 @@ int hyp_map_vectors(void)
 	return 0;
 }
 
-void *hyp_fixmap_map(phys_addr_t phys)
+static void *fixmap_map_slot(struct hyp_fixmap_slot *slot, phys_addr_t phys)
 {
-	struct hyp_fixmap_slot *slot = this_cpu_ptr(&fixmap_slots);
 	kvm_pte_t pte, *ptep = slot->ptep;
 
 	pte = *ptep;
@@ -243,10 +242,21 @@ void *hyp_fixmap_map(phys_addr_t phys)
 	return (void *)slot->addr;
 }
 
+void *hyp_fixmap_map(phys_addr_t phys)
+{
+	return fixmap_map_slot(this_cpu_ptr(&fixmap_slots), phys);
+}
+
 static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
 {
 	kvm_pte_t *ptep = slot->ptep;
 	u64 addr = slot->addr;
+	u32 level;
+
+	if (FIELD_GET(KVM_PTE_TYPE, *ptep) == KVM_PTE_TYPE_PAGE)
+		level = KVM_PGTABLE_LAST_LEVEL;
+	else
+		level = KVM_PGTABLE_LAST_LEVEL - 1; /* create_fixblock() guarantees PMD level */
 
 	WRITE_ONCE(*ptep, *ptep & ~KVM_PTE_VALID);
 
@@ -260,7 +270,7 @@ static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
 	 * https://lore.kernel.org/kvm/20221017115209.2099-1-will@kernel.org/T/#mf10dfbaf1eaef9274c581b81c53758918c1d0f03
 	 */
 	dsb(ishst);
-	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), KVM_PGTABLE_LAST_LEVEL);
+	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
 	dsb(ish);
 	isb();
 }
@@ -273,9 +283,9 @@ void hyp_fixmap_unmap(void)
 static int __create_fixmap_slot_cb(const struct kvm_pgtable_visit_ctx *ctx,
 				   enum kvm_pgtable_walk_flags visit)
 {
-	struct hyp_fixmap_slot *slot = per_cpu_ptr(&fixmap_slots, (u64)ctx->arg);
+	struct hyp_fixmap_slot *slot = (struct hyp_fixmap_slot *)ctx->arg;
 
-	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_LAST_LEVEL)
+	if (!kvm_pte_valid(ctx->old) || (ctx->end - ctx->start) != kvm_granule_size(ctx->level))
 		return -EINVAL;
 
 	slot->addr = ctx->addr;
@@ -296,13 +306,84 @@ static int create_fixmap_slot(u64 addr, u64 cpu)
 	struct kvm_pgtable_walker walker = {
 		.cb	= __create_fixmap_slot_cb,
 		.flags	= KVM_PGTABLE_WALK_LEAF,
-		.arg	= (void *)cpu,
+		.arg	= per_cpu_ptr(&fixmap_slots, cpu),
 	};
 
 	return kvm_pgtable_walk(&pkvm_pgtable, addr, PAGE_SIZE, &walker);
 }
 
-int hyp_create_pcpu_fixmap(void)
+#if PAGE_SHIFT < 16
+#define HAS_FIXBLOCK
+static struct hyp_fixmap_slot hyp_fixblock_slot;
+static DEFINE_HYP_SPINLOCK(hyp_fixblock_lock);
+#endif
+
+static int create_fixblock(void)
+{
+#ifdef HAS_FIXBLOCK
+	struct kvm_pgtable_walker walker = {
+		.cb	= __create_fixmap_slot_cb,
+		.flags	= KVM_PGTABLE_WALK_LEAF,
+		.arg	= &hyp_fixblock_slot,
+	};
+	unsigned long addr;
+	phys_addr_t phys;
+	int ret, i;
+
+	/* Find a RAM phys address, PMD aligned */
+	for (i = 0; i < hyp_memblock_nr; i++) {
+		phys = ALIGN(hyp_memory[i].base, PMD_SIZE);
+		if (phys + PMD_SIZE < (hyp_memory[i].base + hyp_memory[i].size))
+			break;
+	}
+
+	if (i >= hyp_memblock_nr)
+		return -EINVAL;
+
+	hyp_spin_lock(&pkvm_pgd_lock);
+	addr = ALIGN(__io_map_base, PMD_SIZE);
+	ret = __pkvm_alloc_private_va_range(addr, PMD_SIZE);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PMD_SIZE, phys, PAGE_HYP);
+	if (ret)
+		goto unlock;
+
+	ret = kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker);
+
+unlock:
+	hyp_spin_unlock(&pkvm_pgd_lock);
+
+	return ret;
+#else
+	return 0;
+#endif
+}
+
+void *hyp_fixblock_map(phys_addr_t phys, size_t *size)
+{
+#ifdef HAS_FIXBLOCK
+	*size = PMD_SIZE;
+	hyp_spin_lock(&hyp_fixblock_lock);
+	return fixmap_map_slot(&hyp_fixblock_slot, phys);
+#else
+	*size = PAGE_SIZE;
+	return hyp_fixmap_map(phys);
+#endif
+}
+
+void hyp_fixblock_unmap(void)
+{
+#ifdef HAS_FIXBLOCK
+	fixmap_clear_slot(&hyp_fixblock_slot);
+	hyp_spin_unlock(&hyp_fixblock_lock);
+#else
+	hyp_fixmap_unmap();
+#endif
+}
+
+int hyp_create_fixmap(void)
 {
 	unsigned long addr, i;
 	int ret;
@@ -322,7 +403,7 @@ int hyp_create_pcpu_fixmap(void)
 		return ret;
 	}
 
-	return 0;
+	return create_fixblock();
 }
 
 int hyp_create_idmap(u32 hyp_va_bits)
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index c19860fc8183..a48d3f5a5afb 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -312,7 +312,7 @@ void __noreturn __pkvm_init_finalise(void)
 	if (ret)
 		goto out;
 
-	ret = hyp_create_pcpu_fixmap();
+	ret = hyp_create_fixmap();
 	if (ret)
 		goto out;
 
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index df5cc74a7dd0..c351b4abd5db 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -11,12 +11,6 @@
 #include
 #include
 
-
-#define KVM_PTE_TYPE			BIT(1)
-#define KVM_PTE_TYPE_BLOCK		0
-#define KVM_PTE_TYPE_PAGE		1
-#define KVM_PTE_TYPE_TABLE		1
-
 struct kvm_pgtable_walk_data {
 	struct kvm_pgtable_walker	*walker;
 
-- 
2.49.0.1112.g889b7c5bd8-goog