From nobody Mon Feb 9 20:30:38 2026
Date: Mon, 16 Dec 2024 17:57:46 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-2-qperret@google.com>
Subject: [PATCH v3 01/18] KVM: arm64: Change the layout of enum pkvm_page_state
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

The 'concrete' (a.k.a non-meta) page states are currently encoded using
software bits in PTEs. For performance reasons, the abstract
pkvm_page_state enum uses the same bits to encode these states as that
makes conversions from and to PTEs easy.

In order to prepare the ground for moving the 'concrete' state storage
to the hyp vmemmap, re-arrange the enum to use bits 0 and 1 for this
purpose.

No functional changes intended.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 0972faccc2af..8c30362af2b9 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -24,25 +24,27 @@
  */
 enum pkvm_page_state {
 	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= KVM_PGTABLE_PROT_SW0,
-	PKVM_PAGE_SHARED_BORROWED	= KVM_PGTABLE_PROT_SW1,
-	__PKVM_PAGE_RESERVED		= KVM_PGTABLE_PROT_SW0 |
-					  KVM_PGTABLE_PROT_SW1,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
 
 	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE,
+	PKVM_NOPAGE			= BIT(2),
 };
+#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
 
 #define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
 static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
 						 enum pkvm_page_state state)
 {
-	return (prot & ~PKVM_PAGE_STATE_PROT_MASK) | state;
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
 }
 
 static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 {
-	return prot & PKVM_PAGE_STATE_PROT_MASK;
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
 }
 
 struct host_mmu {
-- 
2.47.1.613.gc27f4b7a9f-goog
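A side note on the conversion helpers in this patch (not part of the patch
itself): FIELD_PREP() shifts a value into the position selected by a bitmask
and FIELD_GET() extracts it again, which is what lets the enum live in bits
0-1 while the PTE keeps the state in its SW bits. The snippet below is a
standalone userspace sketch of that round-trip; the SW0/SW1 bit positions and
the field_prep()/field_get() stand-ins are assumptions for illustration only,
not the real KVM_PGTABLE_PROT_SW* values or the kernel macros.

/*
 * Illustrative sketch only: a standalone reimplementation of the idea above.
 * Bit positions for SW0/SW1 are assumptions for the example.
 */
#include <stdint.h>
#include <stdio.h>

#define SW0			(1ULL << 55)	/* assumed position */
#define SW1			(1ULL << 56)	/* assumed position */
#define STATE_PROT_MASK		(SW0 | SW1)

/* Same layout as the reworked enum: the state lives in bits 0-1. */
enum page_state { OWNED = 0, SHARED_OWNED = 1, SHARED_BORROWED = 2 };

/* field_prep()/field_get() stand in for the kernel's FIELD_PREP/FIELD_GET. */
static uint64_t field_prep(uint64_t mask, uint64_t val)
{
	return (val << __builtin_ctzll(mask)) & mask;
}

static uint64_t field_get(uint64_t mask, uint64_t reg)
{
	return (reg & mask) >> __builtin_ctzll(mask);
}

int main(void)
{
	uint64_t prot = 0;

	/* pkvm_mkstate(): clear the SW bits, then place the abstract state. */
	prot &= ~STATE_PROT_MASK;
	prot |= field_prep(STATE_PROT_MASK, SHARED_BORROWED);

	/* pkvm_getstate(): recover the abstract value (prints 2). */
	printf("state=%llu\n",
	       (unsigned long long)field_get(STATE_PROT_MASK, prot));
	return 0;
}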
From nobody Mon Feb 9 20:30:38 2026
Date: Mon, 16 Dec 2024 17:57:47 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-3-qperret@google.com>
Subject: [PATCH v3 02/18] KVM: arm64: Move enum pkvm_page_state to memory.h
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

In order to prepare the way for storing page-tracking information in
pKVM's vmemmap, move the enum pkvm_page_state definition to
nvhe/memory.h.

No functional changes intended.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 34 +------------------
 arch/arm64/kvm/hyp/include/nvhe/memory.h      | 33 ++++++++++++++++++
 2 files changed, 34 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
index 8c30362af2b9..25038ac705d8 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
@@ -11,42 +11,10 @@
 #include
 #include
 #include
+#include <nvhe/memory.h>
 #include
 #include
 
-/*
- * SW bits 0-1 are reserved to track the memory ownership state of each page:
- *   00: The page is owned exclusively by the page-table owner.
- *   01: The page is owned by the page-table owner, but is shared
- *       with another entity.
- *   10: The page is shared with, but not owned by the page-table owner.
- *   11: Reserved for future use (lending).
- */
-enum pkvm_page_state {
-	PKVM_PAGE_OWNED			= 0ULL,
-	PKVM_PAGE_SHARED_OWNED		= BIT(0),
-	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
-	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
-
-	/* Meta-states which aren't encoded directly in the PTE's SW bits */
-	PKVM_NOPAGE			= BIT(2),
-};
-#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
-
-#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
-static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
-						 enum pkvm_page_state state)
-{
-	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
-	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
-	return prot;
-}
-
-static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
-{
-	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
-}
-
 struct host_mmu {
 	struct kvm_arch arch;
 	struct kvm_pgtable pgt;
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index ab205c4d6774..c84b24234ac7 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -7,6 +7,39 @@
 
 #include
 
+/*
+ * SW bits 0-1 are reserved to track the memory ownership state of each page:
+ *   00: The page is owned exclusively by the page-table owner.
+ *   01: The page is owned by the page-table owner, but is shared
+ *       with another entity.
+ *   10: The page is shared with, but not owned by the page-table owner.
+ *   11: Reserved for future use (lending).
+ */
+enum pkvm_page_state {
+	PKVM_PAGE_OWNED			= 0ULL,
+	PKVM_PAGE_SHARED_OWNED		= BIT(0),
+	PKVM_PAGE_SHARED_BORROWED	= BIT(1),
+	__PKVM_PAGE_RESERVED		= BIT(0) | BIT(1),
+
+	/* Meta-states which aren't encoded directly in the PTE's SW bits */
+	PKVM_NOPAGE			= BIT(2),
+};
+#define PKVM_PAGE_META_STATES_MASK	(~(BIT(0) | BIT(1)))
+
+#define PKVM_PAGE_STATE_PROT_MASK	(KVM_PGTABLE_PROT_SW0 | KVM_PGTABLE_PROT_SW1)
+static inline enum kvm_pgtable_prot pkvm_mkstate(enum kvm_pgtable_prot prot,
+						 enum pkvm_page_state state)
+{
+	prot &= ~PKVM_PAGE_STATE_PROT_MASK;
+	prot |= FIELD_PREP(PKVM_PAGE_STATE_PROT_MASK, state);
+	return prot;
+}
+
+static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
+{
+	return FIELD_GET(PKVM_PAGE_STATE_PROT_MASK, prot);
+}
+
 struct hyp_page {
 	unsigned short refcount;
 	unsigned short order;
-- 
2.47.1.613.gc27f4b7a9f-goog
From nobody Mon Feb 9 20:30:38 2026
Date: Mon, 16 Dec 2024 17:57:48 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-4-qperret@google.com>
Subject: [PATCH v3 03/18] KVM: arm64: Make hyp_page::order a u8
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

We don't need 16 bits to store the hyp page order, and we'll need some
bits to store page ownership data soon, so let's reduce the order
member.
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/gfp.h    |  6 +++---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |  5 +++--
 arch/arm64/kvm/hyp/nvhe/page_alloc.c     | 14 +++++++-------
 3 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 97c527ef53c2..f1725bad6331 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -7,7 +7,7 @@
 #include
 #include
 
-#define HYP_NO_ORDER	USHRT_MAX
+#define HYP_NO_ORDER	0xff
 
 struct hyp_pool {
 	/*
@@ -19,11 +19,11 @@ struct hyp_pool {
 	struct list_head free_area[NR_PAGE_ORDERS];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
-	unsigned short max_order;
+	u8 max_order;
 };
 
 /* Allocation */
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order);
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order);
 void hyp_split_page(struct hyp_page *page);
 void hyp_get_page(struct hyp_pool *pool, void *addr);
 void hyp_put_page(struct hyp_pool *pool, void *addr);
diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index c84b24234ac7..45b8d1840aa4 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -41,8 +41,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 }
 
 struct hyp_page {
-	unsigned short refcount;
-	unsigned short order;
+	u16 refcount;
+	u8 order;
+	u8 reserved;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index e691290d3765..a1eb27a1a747 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -32,7 +32,7 @@ u64 __hyp_vmemmap;
  */
 static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
					     struct hyp_page *p,
-					     unsigned short order)
+					     u8 order)
 {
	phys_addr_t addr = hyp_page_to_phys(p);
 
@@ -51,7 +51,7 @@ static struct hyp_page *__find_buddy_nocheck(struct hyp_pool *pool,
 /* Find a buddy page currently available for allocation */
 static struct hyp_page *__find_buddy_avail(struct hyp_pool *pool,
					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
	struct hyp_page *buddy = __find_buddy_nocheck(pool, p, order);
 
@@ -94,7 +94,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
			      struct hyp_page *p)
 {
	phys_addr_t phys = hyp_page_to_phys(p);
-	unsigned short order = p->order;
+	u8 order = p->order;
	struct hyp_page *buddy;
 
	memset(hyp_page_to_virt(p), 0, PAGE_SIZE << p->order);
@@ -129,7 +129,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 
 static struct hyp_page *__hyp_extract_page(struct hyp_pool *pool,
					   struct hyp_page *p,
-					   unsigned short order)
+					   u8 order)
 {
	struct hyp_page *buddy;
 
@@ -183,7 +183,7 @@ void hyp_get_page(struct hyp_pool *pool, void *addr)
 
 void hyp_split_page(struct hyp_page *p)
 {
-	unsigned short order = p->order;
+	u8 order = p->order;
	unsigned int i;
 
	p->order = 0;
@@ -195,10 +195,10 @@ void hyp_split_page(struct hyp_page *p)
	}
 }
 
-void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
+void *hyp_alloc_pages(struct hyp_pool *pool, u8 order)
 {
-	unsigned short i = order;
	struct hyp_page *p;
+	u8 i = order;
 
	hyp_spin_lock(&pool->lock);
 
-- 
2.47.1.613.gc27f4b7a9f-goog
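As a quick sanity check on the layout change above (illustrative only, not
kernel code): page orders comfortably fit in a u8, and repacking the fields
keeps struct hyp_page the same size while freeing a byte per page for the
ownership state added later in this series. The struct definitions below
mirror the patch; the sizes assume the usual alignment rules for u16/u8.

/*
 * Standalone sketch, not kernel code: shows why packing the order into a
 * u8 keeps the vmemmap entry small.
 */
#include <stdint.h>
#include <stdio.h>

struct hyp_page_old {		/* before the patch */
	uint16_t refcount;
	uint16_t order;
};

struct hyp_page_new {		/* after the patch */
	uint16_t refcount;
	uint8_t order;
	uint8_t reserved;	/* freed up for ownership state later */
};

#define HYP_NO_ORDER	0xff	/* the sentinel still fits in a u8 */

int main(void)
{
	/* Both layouts are 4 bytes, but the new one frees 8 bits per page. */
	printf("old=%zu new=%zu\n", sizeof(struct hyp_page_old),
	       sizeof(struct hyp_page_new));
	return 0;
}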
From nobody Mon Feb 9 20:30:38 2026
Date: Mon, 16 Dec 2024 17:57:49 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-5-qperret@google.com>
Subject: [PATCH v3 04/18] KVM: arm64: Move host page ownership tracking to the hyp vmemmap
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

We currently store part of the page-tracking state in PTE software bits
for the host, guests and the hypervisor. This is sub-optimal when e.g.
sharing pages, as it forces us to break block mappings purely to support
this software tracking. That leads to an unnecessarily fragmented stage-2
page-table for the host, in particular when it shares pages with Secure,
which can cause measurable regressions. Moreover, having this state
stored in the page-table forces us to do multiple costly walks on the
page transition path, adding overhead.

In order to work around these problems, move the host-side page-tracking
logic from SW bits in its stage-2 PTEs to the hypervisor's vmemmap.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/memory.h |   6 +-
 arch/arm64/kvm/hyp/nvhe/mem_protect.c    | 100 ++++++++++++++++-------
 arch/arm64/kvm/hyp/nvhe/setup.c          |   7 +-
 3 files changed, 77 insertions(+), 36 deletions(-)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/include/nvhe/memory.h
index 45b8d1840aa4..8bd9a539f260 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/memory.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h
@@ -8,7 +8,7 @@
 #include
 
 /*
- * SW bits 0-1 are reserved to track the memory ownership state of each page:
+ * Bits 0-1 are reserved to track the memory ownership state of each page:
  *   00: The page is owned exclusively by the page-table owner.
  *   01: The page is owned by the page-table owner, but is shared
  *       with another entity.
@@ -43,7 +43,9 @@ static inline enum pkvm_page_state pkvm_getstate(enum kvm_pgtable_prot prot)
 struct hyp_page {
 	u16 refcount;
 	u8 order;
-	u8 reserved;
+
+	/* Host (non-meta) state. Guarded by the host stage-2 lock. */
+	enum pkvm_page_state host_state : 8;
 };
 
 extern u64 __hyp_vmemmap;
diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
index caba3e4bd09e..12bb5445fe47 100644
--- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
+++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
@@ -201,8 +201,8 @@ static void *guest_s2_zalloc_page(void *mc)
 
 	memset(addr, 0, PAGE_SIZE);
 	p = hyp_virt_to_page(addr);
-	memset(p, 0, sizeof(*p));
 	p->refcount = 1;
+	p->order = 0;
 
 	return addr;
 }
@@ -268,6 +268,7 @@ int kvm_guest_prepare_stage2(struct pkvm_hyp_vm *vm, void *pgd)
 
 void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 {
+	struct hyp_page *page;
 	void *addr;
 
 	/* Dump all pgtable pages in the hyp_pool */
@@ -279,7 +280,9 @@ void reclaim_guest_pages(struct pkvm_hyp_vm *vm, struct kvm_hyp_memcache *mc)
 	/* Drain the hyp_pool into the memcache */
 	addr = hyp_alloc_pages(&vm->pool, 0);
 	while (addr) {
-		memset(hyp_virt_to_page(addr), 0, sizeof(struct hyp_page));
+		page = hyp_virt_to_page(addr);
+		page->refcount = 0;
+		page->order = 0;
 		push_hyp_memcache(mc, addr, hyp_virt_to_phys);
 		WARN_ON(__pkvm_hyp_donate_host(hyp_virt_to_pfn(addr), 1));
 		addr = hyp_alloc_pages(&vm->pool, 0);
@@ -382,19 +385,28 @@ bool addr_is_memory(phys_addr_t phys)
 	return !!find_mem_range(phys, &range);
 }
 
-static bool addr_is_allowed_memory(phys_addr_t phys)
+static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
+{
+	return range->start <= addr && addr < range->end;
+}
+
+static int check_range_allowed_memory(u64 start, u64 end)
 {
 	struct memblock_region *reg;
 	struct kvm_mem_range range;
 
-	reg = find_mem_range(phys, &range);
+	/*
+	 * Callers can't check the state of a range that overlaps memory and
+	 * MMIO regions, so ensure [start, end[ is in the same kvm_mem_range.
+	 */
+	reg = find_mem_range(start, &range);
+	if (!is_in_mem_range(end - 1, &range))
+		return -EINVAL;
 
-	return reg && !(reg->flags & MEMBLOCK_NOMAP);
-}
 
-static bool is_in_mem_range(u64 addr, struct kvm_mem_range *range)
-{
-	return range->start <= addr && addr < range->end;
+	return 0;
 }
 
 static bool range_is_memory(u64 start, u64 end)
@@ -454,8 +466,10 @@ static int host_stage2_adjust_range(u64 addr, struct kvm_mem_range *range)
 	if (kvm_pte_valid(pte))
 		return -EAGAIN;
 
-	if (pte)
+	if (pte) {
+		WARN_ON(addr_is_memory(addr) && hyp_phys_to_page(addr)->host_state != PKVM_NOPAGE);
 		return -EPERM;
+	}
 
 	do {
 		u64 granule = kvm_granule_size(level);
@@ -477,10 +491,33 @@ int host_stage2_idmap_locked(phys_addr_t addr, u64 size,
 	return host_stage2_try(__host_stage2_idmap, addr, addr + size, prot);
 }
 
+static void __host_update_page_state(phys_addr_t addr, u64 size, enum pkvm_page_state state)
+{
+	phys_addr_t end = addr + size;
+
+	for (; addr < end; addr += PAGE_SIZE)
+		hyp_phys_to_page(addr)->host_state = state;
+}
+
 int host_stage2_set_owner_locked(phys_addr_t addr, u64 size, u8 owner_id)
 {
-	return host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
-			       addr, size, &host_s2_pool, owner_id);
+	int ret;
+
+	if (!addr_is_memory(addr))
+		return -EPERM;
+
+	ret = host_stage2_try(kvm_pgtable_stage2_set_owner, &host_mmu.pgt,
+			      addr, size, &host_s2_pool, owner_id);
+	if (ret)
+		return ret;
+
+	/* Don't forget to update the vmemmap tracking for the host */
+	if (owner_id == PKVM_ID_HOST)
+		__host_update_page_state(addr, size, PKVM_PAGE_OWNED);
+	else
+		__host_update_page_state(addr, size, PKVM_NOPAGE);
+
+	return 0;
 }
 
 static bool host_stage2_force_pte_cb(u64 addr, u64 end, enum kvm_pgtable_prot prot)
@@ -604,35 +641,38 @@ static int check_page_state_range(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
-static enum pkvm_page_state host_get_page_state(kvm_pte_t pte, u64 addr)
-{
-	if (!addr_is_allowed_memory(addr))
-		return PKVM_NOPAGE;
-
-	if (!kvm_pte_valid(pte) && pte)
-		return PKVM_NOPAGE;
-
-	return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
-}
-
 static int __host_check_page_state_range(u64 addr, u64 size,
					 enum pkvm_page_state state)
 {
-	struct check_walk_data d = {
-		.desired	= state,
-		.get_page_state	= host_get_page_state,
-	};
+	u64 end = addr + size;
+	int ret;
+
+	ret = check_range_allowed_memory(addr, end);
+	if (ret)
+		return ret;
 
 	hyp_assert_lock_held(&host_mmu.lock);
-	return check_page_state_range(&host_mmu.pgt, addr, size, &d);
+	for (; addr < end; addr += PAGE_SIZE) {
+		if (hyp_phys_to_page(addr)->host_state != state)
+			return -EPERM;
+	}
+
+	return 0;
 }
 
 static int __host_set_page_state_range(u64 addr, u64 size,
				       enum pkvm_page_state state)
 {
-	enum kvm_pgtable_prot prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, state);
+	if (hyp_phys_to_page(addr)->host_state == PKVM_NOPAGE) {
+		int ret = host_stage2_idmap_locked(addr, size, PKVM_HOST_MEM_PROT);
 
-	return host_stage2_idmap_locked(addr, size, prot);
+		if (ret)
+			return ret;
+	}
+
+	__host_update_page_state(addr, size, state);
+
+	return 0;
 }
 
 static int host_request_owned_transition(u64 *completer_addr,
diff --git a/arch/arm64/kvm/hyp/nvhe/setup.c b/arch/arm64/kvm/hyp/nvhe/setup.c
index cbdd18cd3f98..7e04d1c2a03d 100644
--- a/arch/arm64/kvm/hyp/nvhe/setup.c
+++ b/arch/arm64/kvm/hyp/nvhe/setup.c
@@ -180,7 +180,6 @@ static void hpool_put_page(void *addr)
 static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
				     enum kvm_pgtable_walk_flags visit)
 {
-	enum kvm_pgtable_prot prot;
 	enum pkvm_page_state state;
 	phys_addr_t phys;
 
@@ -203,16 +202,16 @@ static int fix_host_ownership_walker(const struct kvm_pgtable_visit_ctx *ctx,
 	case PKVM_PAGE_OWNED:
 		return host_stage2_set_owner_locked(phys, PAGE_SIZE, PKVM_ID_HYP);
 	case PKVM_PAGE_SHARED_OWNED:
-		prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_BORROWED);
+		hyp_phys_to_page(phys)->host_state = PKVM_PAGE_SHARED_BORROWED;
 		break;
 	case PKVM_PAGE_SHARED_BORROWED:
-		prot = pkvm_mkstate(PKVM_HOST_MEM_PROT, PKVM_PAGE_SHARED_OWNED);
+		hyp_phys_to_page(phys)->host_state = PKVM_PAGE_SHARED_OWNED;
 		break;
 	default:
 		return -EINVAL;
 	}
 
-	return host_stage2_idmap_locked(phys, PAGE_SIZE, prot);
+	return 0;
 }
 
 static int fix_hyp_pgtable_refcnt_walker(const struct kvm_pgtable_visit_ctx *ctx,
-- 
2.47.1.613.gc27f4b7a9f-goog
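To make the effect of this patch concrete (a standalone sketch, not kernel
code): once the host state lives in the vmemmap, checking a physical range is
a linear scan over per-page hyp_page entries rather than a stage-2 page-table
walk, so block mappings never need to be broken just for tracking. The types
and helpers below are simplified stand-ins for the real pKVM definitions.

/*
 * Standalone sketch, not kernel code: models the vmemmap-based tracking
 * introduced above. hyp_page and hyp_phys_to_page() are simplified
 * stand-ins for the real pKVM definitions.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE	4096ULL

enum pkvm_page_state { PKVM_PAGE_OWNED, PKVM_PAGE_SHARED_OWNED,
		       PKVM_PAGE_SHARED_BORROWED, PKVM_NOPAGE };

struct hyp_page {
	uint16_t refcount;
	uint8_t order;
	uint8_t host_state;	/* one byte of host state per page */
};

static struct hyp_page vmemmap[1024];	/* assumed: one entry per 4K page */

static struct hyp_page *hyp_phys_to_page(uint64_t phys)
{
	return &vmemmap[phys / PAGE_SIZE];
}

/*
 * Mirrors the shape of __host_check_page_state_range(): a linear scan of
 * vmemmap entries, with no stage-2 walk and no block-mapping break-up.
 */
static bool host_range_in_state(uint64_t addr, uint64_t size,
				enum pkvm_page_state state)
{
	for (uint64_t end = addr + size; addr < end; addr += PAGE_SIZE) {
		if (hyp_phys_to_page(addr)->host_state != state)
			return false;
	}
	return true;
}

int main(void)
{
	hyp_phys_to_page(0)->host_state = PKVM_PAGE_SHARED_OWNED;
	printf("%d\n", host_range_in_state(0, PAGE_SIZE, PKVM_PAGE_SHARED_OWNED));
	return 0;
}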
From nobody Mon Feb 9 20:30:38 2026
Date: Mon, 16 Dec 2024 17:57:50 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-6-qperret@google.com>
Subject: [PATCH v3 05/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_mkyoung
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

kvm_pgtable_stage2_mkyoung currently assumes that it is being called
from a 'shared' walker, which will not be true once called from pKVM.
To allow for the re-use of that function, make the walk flags one of
its parameters.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 7 +++----
 arch/arm64/kvm/mmu.c                 | 3 ++-
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index aab04097b505..38b7ec1c8614 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -669,13 +669,15 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size);
  * kvm_pgtable_stage2_mkyoung() - Set the access flag in a page-table entry.
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
  * If there is a valid, leaf page-table entry used to translate @addr, then
  * set the access flag in that entry.
  */
-void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr);
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_test_clear_young() - Test and optionally clear the access
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 40bd55966540..0470aedb4bf4 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1245,14 +1245,13 @@ int kvm_pgtable_stage2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size)
				 NULL, NULL, 0);
 }
 
-void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr)
+void kvm_pgtable_stage2_mkyoung(struct kvm_pgtable *pgt, u64 addr,
+				enum kvm_pgtable_walk_flags flags)
 {
 	int ret;
 
 	ret = stage2_update_leaf_attrs(pgt, addr, 1, KVM_PTE_LEAF_ATTR_LO_S2_AF, 0,
-				       NULL, NULL,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+				       NULL, NULL, flags);
 	if (!ret)
 		dsb(ishst);
 }
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index c9d46ad57e52..a2339b76c826 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1718,13 +1718,14 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 /* Resolve the access fault by making the page young again. */
 static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
 {
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 	struct kvm_s2_mmu *mmu;
 
 	trace_kvm_access_fault(fault_ipa);
 
 	read_lock(&vcpu->kvm->mmu_lock);
 	mmu = vcpu->arch.hw_mmu;
-	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa);
+	kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags);
 	read_unlock(&vcpu->kvm->mmu_lock);
 }
 
-- 
2.47.1.613.gc27f4b7a9f-goog
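The refactor pattern here is small but worth spelling out (standalone sketch,
not kernel code): behaviour that used to be hard-coded inside the helper, the
walk flags, becomes a caller-supplied parameter so that a future pKVM caller
can run a non-shared walk. The names below are illustrative stand-ins, not the
real kvm_pgtable API.

/*
 * Standalone sketch, not kernel code, of the "hoist the flags to the
 * caller" refactor used above.
 */
#include <stdio.h>

enum walk_flags {
	WALK_HANDLE_FAULT	= 1 << 0,
	WALK_SHARED		= 1 << 1,
};

/* Before the change the flags were baked in; now the caller decides. */
static void stage2_mkyoung(unsigned long addr, enum walk_flags flags)
{
	printf("mkyoung at %#lx, shared=%d\n", addr,
	       !!(flags & WALK_SHARED));
}

int main(void)
{
	/* The existing fault-handling caller keeps the old behaviour... */
	stage2_mkyoung(0x80000UL, WALK_HANDLE_FAULT | WALK_SHARED);
	/* ...while a hypothetical pKVM caller can run a non-shared walk. */
	stage2_mkyoung(0x80000UL, 0);
	return 0;
}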
From nobody Mon Feb 9 20:30:38 2026
Date: Mon, 16 Dec 2024 17:57:51 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-7-qperret@google.com>
Subject: [PATCH v3 06/18] KVM: arm64: Pass walk flags to kvm_pgtable_stage2_relax_perms
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

kvm_pgtable_stage2_relax_perms currently assumes that it is being called
from a 'shared' walker, which will not be true once called from pKVM.
To allow for the re-use of that function, make the walk flags one of
its parameters.
Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_pgtable.h | 4 +++-
 arch/arm64/kvm/hyp/pgtable.c         | 6 ++----
 arch/arm64/kvm/mmu.c                 | 7 +++----
 3 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 38b7ec1c8614..c2f4149283ef 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -707,6 +707,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
  * @addr:	Intermediate physical address to identify the page-table entry.
  * @prot:	Additional permissions to grant for the mapping.
+ * @flags:	Flags to control the page-table walk (ex. a shared walk)
  *
  * The offset of @addr within a page is ignored.
  *
@@ -719,7 +720,8 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
  * Return: 0 on success, negative error code on failure.
  */
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot);
+				   enum kvm_pgtable_prot prot,
+				   enum kvm_pgtable_walk_flags flags);
 
 /**
  * kvm_pgtable_stage2_flush_range() - Clean and invalidate data cache to Point
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 0470aedb4bf4..b7a3b5363235 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1307,7 +1307,7 @@ bool kvm_pgtable_stage2_test_clear_young(struct kvm_pgtable *pgt, u64 addr,
 }
 
 int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
-				   enum kvm_pgtable_prot prot)
+				   enum kvm_pgtable_prot prot, enum kvm_pgtable_walk_flags flags)
 {
 	int ret;
 	s8 level;
@@ -1325,9 +1325,7 @@ int kvm_pgtable_stage2_relax_perms(struct kvm_pgtable *pgt, u64 addr,
 	if (prot & KVM_PGTABLE_PROT_X)
 		clr |= KVM_PTE_LEAF_ATTR_HI_S2_XN;
 
-	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level,
-				       KVM_PGTABLE_WALK_HANDLE_FAULT |
-				       KVM_PGTABLE_WALK_SHARED);
+	ret = stage2_update_leaf_attrs(pgt, addr, 1, set, clr, NULL, &level, flags);
 	if (!ret || ret == -EAGAIN)
 		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa_nsh, pgt->mmu, addr, level);
 	return ret;
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a2339b76c826..641e4fec1659 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1452,6 +1452,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
 
 	if (fault_is_perm)
 		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@@ -1695,13 +1696,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * PTE, which will be preserved.
 	 */
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
-		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot);
+		ret = kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags);
 	} else {
 		ret = kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize,
					     __pfn_to_phys(pfn), prot,
-					     memcache,
-					     KVM_PGTABLE_WALK_HANDLE_FAULT |
-					     KVM_PGTABLE_WALK_SHARED);
+					     memcache, flags);
 	}
 
 out_unlock:
-- 
2.47.1.613.gc27f4b7a9f-goog
From nobody Mon Feb 9 20:30:38 2026
Date: Mon, 16 Dec 2024 17:57:52 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-8-qperret@google.com>
Subject: [PATCH v3 07/18] KVM: arm64: Make kvm_pgtable_stage2_init() a static inline function
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org

Turn kvm_pgtable_stage2_init() into a static inline function instead of
a macro. This will allow the usage of typeof() on it later on.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 arch/arm64/include/asm/kvm_pgtable.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index c2f4149283ef..04418b5e3004 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -526,8 +526,11 @@ int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
			      enum kvm_pgtable_stage2_flags flags,
			      kvm_pgtable_force_pte_cb_t force_pte_cb);
 
-#define kvm_pgtable_stage2_init(pgt, mmu, mm_ops) \
-	__kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL)
+static inline int kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
+					  struct kvm_pgtable_mm_ops *mm_ops)
+{
+	return __kvm_pgtable_stage2_init(pgt, mmu, mm_ops, 0, NULL);
+}
 
 /**
  * kvm_pgtable_stage2_destroy() - Destroy an unused guest stage-2 page-table.
-- 
2.47.1.613.gc27f4b7a9f-goog
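
For illustration only (not part of the series): unlike a macro, a real
function has a type that typeof() can name, so later code can, for
instance, declare a matching function pointer. The pointer name below is
hypothetical.

	/* Hypothetical: a pointer whose type is taken from the initialiser. */
	typeof(kvm_pgtable_stage2_init) *stage2_init_fn = kvm_pgtable_stage2_init;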
From nobody Mon Feb 9 20:30:38 2026
Date: Mon, 16 Dec 2024 17:57:53 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
References: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-9-qperret@google.com>
Subject: [PATCH v3 08/18] KVM: arm64: Add {get,put}_pkvm_hyp_vm() helpers
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

In preparation for accessing pkvm_hyp_vm structures at EL2 in a context
where we can't always expect a vCPU to be loaded (e.g. MMU notifiers),
introduce get/put helpers to get temporary references to hyp VMs from
any context.

Signed-off-by: Quentin Perret
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 arch/arm64/kvm/hyp/include/nvhe/pkvm.h |  3 +++
 arch/arm64/kvm/hyp/nvhe/pkvm.c         | 20 ++++++++++++++++++++
 2 files changed, 23 insertions(+)

diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
index 24a9a8330d19..f361d8b91930 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h
@@ -70,4 +70,7 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle,
 					 unsigned int vcpu_idx);
 void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu);
 
+struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle);
+void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm);
+
 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */
diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
index 071993c16de8..d46a02e24e4a 100644
--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
+++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
@@ -327,6 +327,26 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu)
 	hyp_spin_unlock(&vm_table_lock);
 }
 
+struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle)
+{
+	struct pkvm_hyp_vm *hyp_vm;
+
+	hyp_spin_lock(&vm_table_lock);
+	hyp_vm = get_vm_by_handle(handle);
+	if (hyp_vm)
+		hyp_page_ref_inc(hyp_virt_to_page(hyp_vm));
+	hyp_spin_unlock(&vm_table_lock);
+
+	return hyp_vm;
+}
+
+void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm)
+{
+	hyp_spin_lock(&vm_table_lock);
+	hyp_page_ref_dec(hyp_virt_to_page(hyp_vm));
+	hyp_spin_unlock(&vm_table_lock);
+}
+
 static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const struct kvm *host_kvm)
 {
 	struct kvm *kvm = &hyp_vm->kvm;
-- 
2.47.1.613.gc27f4b7a9f-goog
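
For illustration only (not part of the patch): the intended calling
pattern from an EL2 context where no vCPU is loaded; the wrapper
function below is hypothetical.

	/* Hypothetical caller: take a temporary reference, use it, drop it. */
	static int with_hyp_vm(pkvm_handle_t handle)
	{
		struct pkvm_hyp_vm *hyp_vm = get_pkvm_hyp_vm(handle);

		if (!hyp_vm)
			return -ENOENT;

		/* ... operate on the VM while the reference is held ... */

		put_pkvm_hyp_vm(hyp_vm);
		return 0;
	}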
From nobody Mon Feb 9 20:30:38 2026
Date: Mon, 16 Dec 2024 17:57:54 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241216175803.2716565-1-qperret@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241216175803.2716565-10-qperret@google.com> Subject: [PATCH v3 09/18] KVM: arm64: Introduce __pkvm_vcpu_{load,put}() From: Quentin Perret To: Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Catalin Marinas , Will Deacon Cc: Fuad Tabba , Vincent Donnefort , Sebastian Ene , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Marc Zyngier Rather than look-up the hyp vCPU on every run hypercall at EL2, introduce a per-CPU 'loaded_hyp_vcpu' tracking variable which is updated by a pair of load/put hypercalls called directly from kvm_arch_vcpu_{load,put}() when pKVM is enabled. Signed-off-by: Marc Zyngier Signed-off-by: Quentin Perret Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- arch/arm64/include/asm/kvm_asm.h | 2 ++ arch/arm64/kvm/arm.c | 14 ++++++++ arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 7 ++++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 47 ++++++++++++++++++++------ arch/arm64/kvm/hyp/nvhe/pkvm.c | 29 ++++++++++++++++ arch/arm64/kvm/vgic/vgic-v3.c | 6 ++-- 6 files changed, 93 insertions(+), 12 deletions(-) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_= asm.h index ca2590344313..89c0fac69551 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -79,6 +79,8 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_init_vm, __KVM_HOST_SMCCC_FUNC___pkvm_init_vcpu, __KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm, + __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load, + __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put, }; =20 #define DECLARE_KVM_VHE_SYM(sym) extern char sym[] diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index a102c3aebdbc..55cc62b2f469 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -619,12 +619,26 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cp= u) =20 kvm_arch_vcpu_load_debug_state_flags(vcpu); =20 + if (is_protected_kvm_enabled()) { + kvm_call_hyp_nvhe(__pkvm_vcpu_load, + vcpu->kvm->arch.pkvm.handle, + vcpu->vcpu_idx, vcpu->arch.hcr_el2); + kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, + &vcpu->arch.vgic_cpu.vgic_v3); + } + if (!cpumask_test_cpu(cpu, vcpu->kvm->arch.supported_cpus)) vcpu_set_on_unsupported_cpu(vcpu); } =20 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { + if (is_protected_kvm_enabled()) { + kvm_call_hyp(__vgic_v3_save_vmcr_aprs, + &vcpu->arch.vgic_cpu.vgic_v3); + kvm_call_hyp_nvhe(__pkvm_vcpu_put); + } + kvm_arch_vcpu_put_debug_state_flags(vcpu); kvm_arch_vcpu_put_fp(vcpu); if (has_vhe()) diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/in= clude/nvhe/pkvm.h index f361d8b91930..be52c5b15e21 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -20,6 +20,12 @@ struct pkvm_hyp_vcpu { =20 /* Backpointer to the host's (untrusted) vCPU instance. */ struct kvm_vcpu *host_vcpu; + + /* + * If this hyp vCPU is loaded, then this is a backpointer to the + * per-cpu pointer tracking us. Otherwise, NULL if not loaded. 
+ */ + struct pkvm_hyp_vcpu **loaded_hyp_vcpu; }; =20 /* @@ -69,6 +75,7 @@ int __pkvm_teardown_vm(pkvm_handle_t handle); struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_t handle, unsigned int vcpu_idx); void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu); +struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void); =20 struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle); void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 6aa0b13d86e5..95d78db315b3 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -141,16 +141,46 @@ static void sync_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_v= cpu) host_cpu_if->vgic_lr[i] =3D hyp_cpu_if->vgic_lr[i]; } =20 +static void handle___pkvm_vcpu_load(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); + DECLARE_REG(unsigned int, vcpu_idx, host_ctxt, 2); + DECLARE_REG(u64, hcr_el2, host_ctxt, 3); + struct pkvm_hyp_vcpu *hyp_vcpu; + + if (!is_protected_kvm_enabled()) + return; + + hyp_vcpu =3D pkvm_load_hyp_vcpu(handle, vcpu_idx); + if (!hyp_vcpu) + return; + + if (pkvm_hyp_vcpu_is_protected(hyp_vcpu)) { + /* Propagate WFx trapping flags */ + hyp_vcpu->vcpu.arch.hcr_el2 &=3D ~(HCR_TWE | HCR_TWI); + hyp_vcpu->vcpu.arch.hcr_el2 |=3D hcr_el2 & (HCR_TWE | HCR_TWI); + } +} + +static void handle___pkvm_vcpu_put(struct kvm_cpu_context *host_ctxt) +{ + struct pkvm_hyp_vcpu *hyp_vcpu; + + if (!is_protected_kvm_enabled()) + return; + + hyp_vcpu =3D pkvm_get_loaded_hyp_vcpu(); + if (hyp_vcpu) + pkvm_put_hyp_vcpu(hyp_vcpu); +} + static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, host_vcpu, host_ctxt, 1); int ret; =20 - host_vcpu =3D kern_hyp_va(host_vcpu); - if (unlikely(is_protected_kvm_enabled())) { - struct pkvm_hyp_vcpu *hyp_vcpu; - struct kvm *host_kvm; + struct pkvm_hyp_vcpu *hyp_vcpu =3D pkvm_get_loaded_hyp_vcpu(); =20 /* * KVM (and pKVM) doesn't support SME guests for now, and @@ -163,9 +193,6 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_contex= t *host_ctxt) goto out; } =20 - host_kvm =3D kern_hyp_va(host_vcpu->kvm); - hyp_vcpu =3D pkvm_load_hyp_vcpu(host_kvm->arch.pkvm.handle, - host_vcpu->vcpu_idx); if (!hyp_vcpu) { ret =3D -EINVAL; goto out; @@ -176,12 +203,10 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_cont= ext *host_ctxt) ret =3D __kvm_vcpu_run(&hyp_vcpu->vcpu); =20 sync_hyp_vcpu(hyp_vcpu); - pkvm_put_hyp_vcpu(hyp_vcpu); } else { /* The host is fully trusted, run its vCPU directly. */ - ret =3D __kvm_vcpu_run(host_vcpu); + ret =3D __kvm_vcpu_run(kern_hyp_va(host_vcpu)); } - out: cpu_reg(host_ctxt, 1) =3D ret; } @@ -409,6 +434,8 @@ static const hcall_t host_hcall[] =3D { HANDLE_FUNC(__pkvm_init_vm), HANDLE_FUNC(__pkvm_init_vcpu), HANDLE_FUNC(__pkvm_teardown_vm), + HANDLE_FUNC(__pkvm_vcpu_load), + HANDLE_FUNC(__pkvm_vcpu_put), }; =20 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt) diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index d46a02e24e4a..496d186efb03 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -23,6 +23,12 @@ unsigned int kvm_arm_vmid_bits; =20 unsigned int kvm_host_sve_max_vl; =20 +/* + * The currently loaded hyp vCPU for each physical CPU. Used only when + * protected KVM is enabled, but for both protected and non-protected VMs. 
+ */ +static DEFINE_PER_CPU(struct pkvm_hyp_vcpu *, loaded_hyp_vcpu); + /* * Set trap register values based on features in ID_AA64PFR0. */ @@ -306,15 +312,30 @@ struct pkvm_hyp_vcpu *pkvm_load_hyp_vcpu(pkvm_handle_= t handle, struct pkvm_hyp_vcpu *hyp_vcpu =3D NULL; struct pkvm_hyp_vm *hyp_vm; =20 + /* Cannot load a new vcpu without putting the old one first. */ + if (__this_cpu_read(loaded_hyp_vcpu)) + return NULL; + hyp_spin_lock(&vm_table_lock); hyp_vm =3D get_vm_by_handle(handle); if (!hyp_vm || hyp_vm->nr_vcpus <=3D vcpu_idx) goto unlock; =20 hyp_vcpu =3D hyp_vm->vcpus[vcpu_idx]; + + /* Ensure vcpu isn't loaded on more than one cpu simultaneously. */ + if (unlikely(hyp_vcpu->loaded_hyp_vcpu)) { + hyp_vcpu =3D NULL; + goto unlock; + } + + hyp_vcpu->loaded_hyp_vcpu =3D this_cpu_ptr(&loaded_hyp_vcpu); hyp_page_ref_inc(hyp_virt_to_page(hyp_vm)); unlock: hyp_spin_unlock(&vm_table_lock); + + if (hyp_vcpu) + __this_cpu_write(loaded_hyp_vcpu, hyp_vcpu); return hyp_vcpu; } =20 @@ -323,10 +344,18 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu) struct pkvm_hyp_vm *hyp_vm =3D pkvm_hyp_vcpu_to_hyp_vm(hyp_vcpu); =20 hyp_spin_lock(&vm_table_lock); + hyp_vcpu->loaded_hyp_vcpu =3D NULL; + __this_cpu_write(loaded_hyp_vcpu, NULL); hyp_page_ref_dec(hyp_virt_to_page(hyp_vm)); hyp_spin_unlock(&vm_table_lock); } =20 +struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void) +{ + return __this_cpu_read(loaded_hyp_vcpu); + +} + struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle) { struct pkvm_hyp_vm *hyp_vm; diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c index f267bc2486a1..c2ef41fff079 100644 --- a/arch/arm64/kvm/vgic/vgic-v3.c +++ b/arch/arm64/kvm/vgic/vgic-v3.c @@ -734,7 +734,8 @@ void vgic_v3_load(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if =3D &vcpu->arch.vgic_cpu.vgic_v3; =20 - kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if); + if (likely(!is_protected_kvm_enabled())) + kvm_call_hyp(__vgic_v3_restore_vmcr_aprs, cpu_if); =20 if (has_vhe()) __vgic_v3_activate_traps(cpu_if); @@ -746,7 +747,8 @@ void vgic_v3_put(struct kvm_vcpu *vcpu) { struct vgic_v3_cpu_if *cpu_if =3D &vcpu->arch.vgic_cpu.vgic_v3; =20 - kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if); + if (likely(!is_protected_kvm_enabled())) + kvm_call_hyp(__vgic_v3_save_vmcr_aprs, cpu_if); WARN_ON(vgic_v4_put(vcpu)); =20 if (has_vhe()) --=20 2.47.1.613.gc27f4b7a9f-goog From nobody Mon Feb 9 20:30:38 2026 Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 4B2F020D516 for ; Mon, 16 Dec 2024 17:58:28 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371911; cv=none; b=SkRRNqJTKBwNiETN1RXrzObdXrJeGtwuh0qATtn8qbiBfwk92lvNaiYwOaOZsj/5MM2mCxWhv4okn+1CjwNqfBSMvekzRm6hgb3G+/nMMSPjRHO95vfUCUlytStSCsaAMPoc1H7GYoVwi0aCsmkT7wNx2iGwOEMrfpn7eUIwX5s= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371911; c=relaxed/simple; bh=jufdQ52TFnEE0xKwn3lA8uAv6Xk4qZYvVB+Rbeuoqps=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=FBckGzvHeetIqUM0ZA0t1ZbVbahQwLQtAW1ix5zsfdu50+W45rIATaUB6HDgSlG5N6t8WmWHUMKDbkFDEZUH2eKJyy63WKMuZ4Nn4OtMiIRsAf5RXOdre19Qcgh/Tg/EycM3JOTippmxI5/ZCwkMIIDC2hOJoerxgdzZsDaeYi8= ARC-Authentication-Results: i=1; 
Date: Mon, 16 Dec 2024 17:57:55 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-11-qperret@google.com>
Subject: [PATCH v3 10/18] KVM: arm64: Introduce __pkvm_host_share_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

In preparation for handling guest stage-2 mappings at EL2, introduce a new pKVM hypercall allowing to share pages with non-protected guests. Signed-off-by: Quentin Perret Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/include/asm/kvm_host.h | 3 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/include/nvhe/memory.h | 2 + arch/arm64/kvm/hyp/nvhe/hyp-main.c | 34 +++++++++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 72 +++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/pkvm.c | 7 ++ 7 files changed, 120 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_= asm.h index 89c0fac69551..449337f5b2a3 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -65,6 +65,7 @@ enum __kvm_host_smccc_func { /* Hypercalls available after pKVM finalisation */ __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp, __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp, + __KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index e18e9244d17a..1246f1d01dbf 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -771,6 +771,9 @@ struct kvm_vcpu_arch { /* Cache some mmu pages needed inside spinlock regions */ struct kvm_mmu_memory_cache mmu_page_cache; =20 + /* Pages to top-up the pKVM/EL2 guest pool */ + struct kvm_hyp_memcache pkvm_memcache; + /* Virtual SError ESR to restore when HCR_EL2.VSE is set */ u64 vsesr_el2; =20 diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index 25038ac705d8..a7976e50f556 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -39,6 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages); int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); +int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, = enum kvm_pgtable_prot prot); =20 bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_= prot prot); diff --git a/arch/arm64/kvm/hyp/include/nvhe/memory.h b/arch/arm64/kvm/hyp/= include/nvhe/memory.h index 8bd9a539f260..cc431820c6ce 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/memory.h +++ b/arch/arm64/kvm/hyp/include/nvhe/memory.h @@ -46,6 +46,8 @@ struct hyp_page { =20 /* Host (non-meta) state. Guarded by the host stage-2 lock. 
*/ enum pkvm_page_state host_state : 8; + + u32 host_share_guest_count; }; =20 extern u64 __hyp_vmemmap; diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 95d78db315b3..d659462fbf5d 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -211,6 +211,39 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_conte= xt *host_ctxt) cpu_reg(host_ctxt, 1) =3D ret; } =20 +static int pkvm_refill_memcache(struct pkvm_hyp_vcpu *hyp_vcpu) +{ + struct kvm_vcpu *host_vcpu =3D hyp_vcpu->host_vcpu; + + return refill_memcache(&hyp_vcpu->vcpu.arch.pkvm_memcache, + host_vcpu->arch.pkvm_memcache.nr_pages, + &host_vcpu->arch.pkvm_memcache); +} + +static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ct= xt) +{ + DECLARE_REG(u64, pfn, host_ctxt, 1); + DECLARE_REG(u64, gfn, host_ctxt, 2); + DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3); + struct pkvm_hyp_vcpu *hyp_vcpu; + int ret =3D -EINVAL; + + if (!is_protected_kvm_enabled()) + goto out; + + hyp_vcpu =3D pkvm_get_loaded_hyp_vcpu(); + if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu)) + goto out; + + ret =3D pkvm_refill_memcache(hyp_vcpu); + if (ret) + goto out; + + ret =3D __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot); +out: + cpu_reg(host_ctxt, 1) =3D ret; +} + static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); @@ -420,6 +453,7 @@ static const hcall_t host_hcall[] =3D { =20 HANDLE_FUNC(__pkvm_host_share_hyp), HANDLE_FUNC(__pkvm_host_unshare_hyp), + HANDLE_FUNC(__pkvm_host_share_guest), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index 12bb5445fe47..fb9592e721cf 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -867,6 +867,27 @@ static int hyp_complete_donation(u64 addr, return pkvm_create_mappings_locked(start, end, prot); } =20 +static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte, u64 addr) +{ + if (!kvm_pte_valid(pte)) + return PKVM_NOPAGE; + + return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte)); +} + +static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 = addr, + u64 size, enum pkvm_page_state state) +{ + struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); + struct check_walk_data d =3D { + .desired =3D state, + .get_page_state =3D guest_get_page_state, + }; + + hyp_assert_lock_held(&vm->lock); + return check_page_state_range(&vm->pgt, addr, size, &d); +} + static int check_share(struct pkvm_mem_share *share) { const struct pkvm_mem_transition *tx =3D &share->tx; @@ -1349,3 +1370,54 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages) =20 return ret; } + +int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, + enum kvm_pgtable_prot prot) +{ + struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); + u64 phys =3D hyp_pfn_to_phys(pfn); + u64 ipa =3D hyp_pfn_to_phys(gfn); + struct hyp_page *page; + int ret; + + if (prot & ~KVM_PGTABLE_PROT_RWX) + return -EINVAL; + + ret =3D check_range_allowed_memory(phys, phys + PAGE_SIZE); + if (ret) + return ret; + + host_lock_component(); + guest_lock_component(vm); + + ret =3D __guest_check_page_state_range(vcpu, ipa, PAGE_SIZE, PKVM_NOPAGE); + if (ret) + goto unlock; + + page =3D hyp_phys_to_page(phys); + switch (page->host_state) { + case PKVM_PAGE_OWNED: + 
WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_SHARED_OW= NED)); + break; + case PKVM_PAGE_SHARED_OWNED: + if (page->host_share_guest_count) + break; + /* Only host to np-guest multi-sharing is tolerated */ + WARN_ON(1); + fallthrough; + default: + ret =3D -EPERM; + goto unlock; + } + + WARN_ON(kvm_pgtable_stage2_map(&vm->pgt, ipa, PAGE_SIZE, phys, + pkvm_mkstate(prot, PKVM_PAGE_SHARED_BORROWED), + &vcpu->vcpu.arch.pkvm_memcache, 0)); + page->host_share_guest_count++; + +unlock: + guest_unlock_component(vm); + host_unlock_component(); + + return ret; +} diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index 496d186efb03..f2e363fe6b84 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -795,6 +795,13 @@ int __pkvm_teardown_vm(pkvm_handle_t handle) /* Push the metadata pages to the teardown memcache */ for (idx =3D 0; idx < hyp_vm->nr_vcpus; ++idx) { struct pkvm_hyp_vcpu *hyp_vcpu =3D hyp_vm->vcpus[idx]; + struct kvm_hyp_memcache *vcpu_mc =3D &hyp_vcpu->vcpu.arch.pkvm_memcache; + + while (vcpu_mc->nr_pages) { + void *addr =3D pop_hyp_memcache(vcpu_mc, hyp_phys_to_virt); + push_hyp_memcache(mc, addr, hyp_virt_to_phys); + unmap_donated_memory_noclear(addr, PAGE_SIZE); + } =20 teardown_donated_memory(mc, hyp_vcpu, sizeof(*hyp_vcpu)); } --=20 2.47.1.613.gc27f4b7a9f-goog From nobody Mon Feb 9 20:30:38 2026 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8527A20DD42 for ; Mon, 16 Dec 2024 17:58:30 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371912; cv=none; b=qdM6kDNZpsjs96qa27wUJbSg3eaL3uVqj7K/yA+mRc7a8flrHD2zEi0us2phOtndvH7zAnnFWH/M0dmv12RqVV0fc2D8sqPn4dNB1jP1oACDo1gkRBWdbEzl3nvPOs3V0+pRSp6qhDIyjvu/mXfVZmrMfvkWOWQ3gof7uclew0Y= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371912; c=relaxed/simple; bh=k+ngTkak7GRIfWE4ycoZPXOQuBeUPuRrK6JuE6XYOpI=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=LXK+yqNgKHy+oi+qeSgmcU0dOx92OA3qOW6tkYrLzoZBZ3L1zy3M1NPs8vsHlPJv8yVfA+isjGxsJROeGnVBC64NArDUkrEqyNEbR7zX6yjMudL3/w8dYhXSINqBFQuxoHWGb++xBilp40Gv3YlX+Ze5cZZe+a6+W4LH4i32qNw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=bIy6iUwe; arc=none smtp.client-ip=209.85.208.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="bIy6iUwe" Received: by mail-ed1-f74.google.com with SMTP id 4fb4d7f45d1cf-5d3d2cccbe4so3846450a12.3 for ; Mon, 16 Dec 2024 09:58:30 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1734371909; x=1734976709; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=L+LjFzCv5XYw4tM5gP6caUnlYbp8EBUjRNr5pJmRWzY=; 
Date: Mon, 16 Dec 2024 17:57:56 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-12-qperret@google.com>
Subject: [PATCH v3 11/18] KVM: arm64: Introduce __pkvm_host_unshare_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

In preparation for letting the host unmap pages from non-protected
guests, introduce a new hypercall implementing the host-unshare-guest
transition.
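
Purely as an illustration of the calling convention implied by the EL2
handler added below (the host-side caller lands later in the series), a
host unmap request would presumably look like this; 'ret', 'handle' and
'gfn' are assumed to exist in the caller.

	/* Hypothetical host-side call, arguments in DECLARE_REG() order. */
	ret = kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, gfn);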
Signed-off-by: Quentin Perret Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/include/nvhe/pkvm.h | 6 ++ arch/arm64/kvm/hyp/nvhe/hyp-main.c | 21 ++++++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 67 +++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/pkvm.c | 12 ++++ 6 files changed, 108 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_= asm.h index 449337f5b2a3..0b6c4d325134 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -66,6 +66,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_host_share_hyp, __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp, __KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest, + __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index a7976e50f556..e528a42ed60e 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -40,6 +40,7 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages); int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, = enum kvm_pgtable_prot prot); +int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); =20 bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_= prot prot); diff --git a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h b/arch/arm64/kvm/hyp/in= clude/nvhe/pkvm.h index be52c5b15e21..0cc2a429f1fb 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/pkvm.h +++ b/arch/arm64/kvm/hyp/include/nvhe/pkvm.h @@ -64,6 +64,11 @@ static inline bool pkvm_hyp_vcpu_is_protected(struct pkv= m_hyp_vcpu *hyp_vcpu) return vcpu_is_protected(&hyp_vcpu->vcpu); } =20 +static inline bool pkvm_hyp_vm_is_protected(struct pkvm_hyp_vm *hyp_vm) +{ + return kvm_vm_is_protected(&hyp_vm->kvm); +} + void pkvm_hyp_vm_table_init(void *tbl); =20 int __pkvm_init_vm(struct kvm *host_kvm, unsigned long vm_hva, @@ -78,6 +83,7 @@ void pkvm_put_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vcpu); struct pkvm_hyp_vcpu *pkvm_get_loaded_hyp_vcpu(void); =20 struct pkvm_hyp_vm *get_pkvm_hyp_vm(pkvm_handle_t handle); +struct pkvm_hyp_vm *get_np_pkvm_hyp_vm(pkvm_handle_t handle); void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm); =20 #endif /* __ARM64_KVM_NVHE_PKVM_H__ */ diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index d659462fbf5d..3c3a27c985a2 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -244,6 +244,26 @@ static void handle___pkvm_host_share_guest(struct kvm_= cpu_context *host_ctxt) cpu_reg(host_ctxt, 1) =3D ret; } =20 +static void handle___pkvm_host_unshare_guest(struct kvm_cpu_context *host_= ctxt) +{ + DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); + DECLARE_REG(u64, gfn, host_ctxt, 2); + struct pkvm_hyp_vm *hyp_vm; + int ret =3D -EINVAL; + + if (!is_protected_kvm_enabled()) + goto out; + + hyp_vm =3D get_np_pkvm_hyp_vm(handle); + if (!hyp_vm) + goto out; + + ret =3D __pkvm_host_unshare_guest(gfn, hyp_vm); + put_pkvm_hyp_vm(hyp_vm); +out: + cpu_reg(host_ctxt, 1) =3D ret; +} + static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct 
kvm_vcpu *, vcpu, host_ctxt, 1); @@ -454,6 +474,7 @@ static const hcall_t host_hcall[] =3D { HANDLE_FUNC(__pkvm_host_share_hyp), HANDLE_FUNC(__pkvm_host_unshare_hyp), HANDLE_FUNC(__pkvm_host_share_guest), + HANDLE_FUNC(__pkvm_host_unshare_guest), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index fb9592e721cf..30243b7922f1 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -1421,3 +1421,70 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct= pkvm_hyp_vcpu *vcpu, =20 return ret; } + +static int __check_host_shared_guest(struct pkvm_hyp_vm *vm, u64 *__phys, = u64 ipa) +{ + enum pkvm_page_state state; + struct hyp_page *page; + kvm_pte_t pte; + u64 phys; + s8 level; + int ret; + + ret =3D kvm_pgtable_get_leaf(&vm->pgt, ipa, &pte, &level); + if (ret) + return ret; + if (level !=3D KVM_PGTABLE_LAST_LEVEL) + return -E2BIG; + if (!kvm_pte_valid(pte)) + return -ENOENT; + + state =3D guest_get_page_state(pte, ipa); + if (state !=3D PKVM_PAGE_SHARED_BORROWED) + return -EPERM; + + phys =3D kvm_pte_to_phys(pte); + ret =3D check_range_allowed_memory(phys, phys + PAGE_SIZE); + if (WARN_ON(ret)) + return ret; + + page =3D hyp_phys_to_page(phys); + if (page->host_state !=3D PKVM_PAGE_SHARED_OWNED) + return -EPERM; + if (WARN_ON(!page->host_share_guest_count)) + return -EINVAL; + + *__phys =3D phys; + + return 0; +} + +int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *vm) +{ + u64 ipa =3D hyp_pfn_to_phys(gfn); + struct hyp_page *page; + u64 phys; + int ret; + + host_lock_component(); + guest_lock_component(vm); + + ret =3D __check_host_shared_guest(vm, &phys, ipa); + if (ret) + goto unlock; + + ret =3D kvm_pgtable_stage2_unmap(&vm->pgt, ipa, PAGE_SIZE); + if (ret) + goto unlock; + + page =3D hyp_phys_to_page(phys); + page->host_share_guest_count--; + if (!page->host_share_guest_count) + WARN_ON(__host_set_page_state_range(phys, PAGE_SIZE, PKVM_PAGE_OWNED)); + +unlock: + guest_unlock_component(vm); + host_unlock_component(); + + return ret; +} diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c index f2e363fe6b84..1b0982fa5ba8 100644 --- a/arch/arm64/kvm/hyp/nvhe/pkvm.c +++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c @@ -376,6 +376,18 @@ void put_pkvm_hyp_vm(struct pkvm_hyp_vm *hyp_vm) hyp_spin_unlock(&vm_table_lock); } =20 +struct pkvm_hyp_vm *get_np_pkvm_hyp_vm(pkvm_handle_t handle) +{ + struct pkvm_hyp_vm *hyp_vm =3D get_pkvm_hyp_vm(handle); + + if (hyp_vm && pkvm_hyp_vm_is_protected(hyp_vm)) { + put_pkvm_hyp_vm(hyp_vm); + hyp_vm =3D NULL; + } + + return hyp_vm; +} + static void pkvm_init_features_from_host(struct pkvm_hyp_vm *hyp_vm, const= struct kvm *host_kvm) { struct kvm *kvm =3D &hyp_vm->kvm; --=20 2.47.1.613.gc27f4b7a9f-goog From nobody Mon Feb 9 20:30:38 2026 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 907822080C5 for ; Mon, 16 Dec 2024 17:58:33 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371915; cv=none; 
Date: Mon, 16 Dec 2024 17:57:57 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
List-Unsubscribe: Mime-Version: 1.0 References: <20241216175803.2716565-1-qperret@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241216175803.2716565-13-qperret@google.com> Subject: [PATCH v3 12/18] KVM: arm64: Introduce __pkvm_host_relax_guest_perms() From: Quentin Perret To: Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Catalin Marinas , Will Deacon Cc: Fuad Tabba , Vincent Donnefort , Sebastian Ene , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Introduce a new hypercall allowing the host to relax the stage-2 permissions of mappings in a non-protected guest page-table. It will be used later once we start allowing RO memslots and dirty logging. Signed-off-by: Quentin Perret Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/nvhe/hyp-main.c | 20 ++++++++++++++++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 23 +++++++++++++++++++ 4 files changed, 45 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_= asm.h index 0b6c4d325134..66ee8542dcc9 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -67,6 +67,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_hyp, __KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest, + __KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index e528a42ed60e..a308dcd3b5b8 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -41,6 +41,7 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, = enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); +int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); =20 bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_= prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 3c3a27c985a2..287e4ee93ef2 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -264,6 +264,25 @@ static void handle___pkvm_host_unshare_guest(struct kv= m_cpu_context *host_ctxt) cpu_reg(host_ctxt, 1) =3D ret; } =20 +static void handle___pkvm_host_relax_perms_guest(struct kvm_cpu_context *h= ost_ctxt) +{ + DECLARE_REG(u64, gfn, host_ctxt, 1); + DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 2); + struct pkvm_hyp_vcpu *hyp_vcpu; + int ret =3D -EINVAL; + + if (!is_protected_kvm_enabled()) + goto out; + + hyp_vcpu =3D pkvm_get_loaded_hyp_vcpu(); + if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu)) + goto out; + + ret =3D __pkvm_host_relax_perms_guest(gfn, hyp_vcpu, prot); +out: + cpu_reg(host_ctxt, 1) =3D ret; +} + static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); @@ -475,6 +494,7 @@ static const hcall_t 
host_hcall[] =3D { HANDLE_FUNC(__pkvm_host_unshare_hyp), HANDLE_FUNC(__pkvm_host_share_guest), HANDLE_FUNC(__pkvm_host_unshare_guest), + HANDLE_FUNC(__pkvm_host_relax_perms_guest), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index 30243b7922f1..aa8e0408aebb 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -1488,3 +1488,26 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_h= yp_vm *vm) =20 return ret; } + +int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot) +{ + struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); + u64 ipa =3D hyp_pfn_to_phys(gfn); + u64 phys; + int ret; + + if (prot & ~KVM_PGTABLE_PROT_RWX) + return -EINVAL; + + host_lock_component(); + guest_lock_component(vm); + + ret =3D __check_host_shared_guest(vm, &phys, ipa); + if (!ret) + ret =3D kvm_pgtable_stage2_relax_perms(&vm->pgt, ipa, prot, 0); + + guest_unlock_component(vm); + host_unlock_component(); + + return ret; +} --=20 2.47.1.613.gc27f4b7a9f-goog From nobody Mon Feb 9 20:30:38 2026 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6123120E018 for ; Mon, 16 Dec 2024 17:58:35 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371917; cv=none; b=uveVEYxbBnPZqhh9py8iVNZDYQsLObcBe0VKZwQF03oKezO3EcoHn6Wxmtgi2EFTZokNKafDVHXZi4aWK2Pc+Hm/1ZR47P2OZUigZ1RXi6o1x0+hlblJ+o6XVqFwBeSTMd1SUwGx9/L0e189vhFurqdcGLiZl8ng3OK/wRjMmGQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371917; c=relaxed/simple; bh=OuhtjcErD0FcNX24WgQpNP6nBjfTyxTNub7mKvq3CIo=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=iDeAAPzJ0wGB4AfQkv64I+gz24bTQowlD3BM9O03eWz33ACSNvbNlgXDrFmj1Hs5AG++xyCPu+hQHKuUK/8b8mboeoySiErnXYn0wl99Uur3lMPH5wbbP9z13EFcjnSa4/WrRK34KbovP4wBveQrGwB5z9fjfkUCJ0PfyyZdhpo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=3o00qR5Y; arc=none smtp.client-ip=209.85.218.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="3o00qR5Y" Received: by mail-ej1-f74.google.com with SMTP id a640c23a62f3a-aa683e90dd3so159668566b.3 for ; Mon, 16 Dec 2024 09:58:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1734371914; x=1734976714; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=gDq9DwaPgsqNr2tHpW+bdhvyfIZgJqiuY6uN4axqGiQ=; b=3o00qR5Yzua9+drxPnLbYDDFAtAOTNc5HrAq7IGPIXr8mxhlsfAxURF7MJcTVFTAJM BYObqDGfzpjhYcmFsDV3pJ5lBaj2YFBIaFrJXMmSHiGLTPEMKYQc0mBcwmjBw0S0bsS8 
Date: Mon, 16 Dec 2024 17:57:58 +0000
In-Reply-To: <20241216175803.2716565-1-qperret@google.com>
Message-ID: <20241216175803.2716565-14-qperret@google.com>
Subject: [PATCH v3 13/18] KVM: arm64: Introduce __pkvm_host_wrprotect_guest()
From: Quentin Perret
To: Marc Zyngier, Oliver Upton, Joey Gouly, Suzuki K Poulose, Zenghui Yu,
 Catalin Marinas, Will Deacon
Cc: Fuad Tabba, Vincent Donnefort, Sebastian Ene,
 linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
 linux-kernel@vger.kernel.org

Introduce a new hypercall to remove the write permission from a
non-protected guest stage-2 mapping. This will be used for e.g. enabling
dirty logging.
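
Purely as an illustration (not part of the patch): once dirty logging is
wired up later in the series, the host would presumably write-protect a
shared gfn with a call along these lines; 'ret', 'handle' and 'gfn' are
assumed to exist in the caller.

	/* Hypothetical host-side call, arguments in DECLARE_REG() order. */
	ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, gfn);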
Signed-off-by: Quentin Perret Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/nvhe/hyp-main.c | 21 +++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 19 +++++++++++++++++ 4 files changed, 42 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_= asm.h index 66ee8542dcc9..8663a588cf34 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -68,6 +68,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_host_share_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest, + __KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index a308dcd3b5b8..fc9fdd5b0a52 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -42,6 +42,7 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages); int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu, = enum kvm_pgtable_prot prot); int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); +int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); =20 bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_= prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 287e4ee93ef2..98d317735107 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -283,6 +283,26 @@ static void handle___pkvm_host_relax_perms_guest(struc= t kvm_cpu_context *host_ct cpu_reg(host_ctxt, 1) =3D ret; } =20 +static void handle___pkvm_host_wrprotect_guest(struct kvm_cpu_context *hos= t_ctxt) +{ + DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); + DECLARE_REG(u64, gfn, host_ctxt, 2); + struct pkvm_hyp_vm *hyp_vm; + int ret =3D -EINVAL; + + if (!is_protected_kvm_enabled()) + goto out; + + hyp_vm =3D get_np_pkvm_hyp_vm(handle); + if (!hyp_vm) + goto out; + + ret =3D __pkvm_host_wrprotect_guest(gfn, hyp_vm); + put_pkvm_hyp_vm(hyp_vm); +out: + cpu_reg(host_ctxt, 1) =3D ret; +} + static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); @@ -495,6 +515,7 @@ static const hcall_t host_hcall[] =3D { HANDLE_FUNC(__pkvm_host_share_guest), HANDLE_FUNC(__pkvm_host_unshare_guest), HANDLE_FUNC(__pkvm_host_relax_perms_guest), + HANDLE_FUNC(__pkvm_host_wrprotect_guest), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index aa8e0408aebb..94e4251b5077 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -1511,3 +1511,22 @@ int __pkvm_host_relax_perms_guest(u64 gfn, struct pk= vm_hyp_vcpu *vcpu, enum kvm_ =20 return ret; } + +int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *vm) +{ + u64 ipa =3D hyp_pfn_to_phys(gfn); + u64 phys; + int ret; + + host_lock_component(); + guest_lock_component(vm); + + ret =3D __check_host_shared_guest(vm, &phys, 
ipa); + if (!ret) + ret =3D kvm_pgtable_stage2_wrprotect(&vm->pgt, ipa, PAGE_SIZE); + + guest_unlock_component(vm); + host_unlock_component(); + + return ret; +} --=20 2.47.1.613.gc27f4b7a9f-goog From nobody Mon Feb 9 20:30:38 2026 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8E98E20E003 for ; Mon, 16 Dec 2024 17:58:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371920; cv=none; b=VNa3SQGKg80It3FW6MuzkHjennB6bcf1Dk34LEpYQ6AD+LRqI7rE3/rzh3QUTK0h1+mDqVX/PpKhUlFiDJI1X581XFHG3awJNCljBBuWtNIl/Nr304+SXh+nX5YqI+BXzaDLxfjd66OpjAkcJyIIXCOrdmOmhpaO+O4jtchR3Ek= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371920; c=relaxed/simple; bh=rcoYPmb/53S8Gvor6OWXTcnIDC6gvecrrUk8ZXPefhA=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=YqvxQrVXYnNDSezV84rEk5PajGnR2yTO4icgs0/F+2BJ20MHoEiuA9uj68TYB1ZT+7Cpq1KE0aSi3RQuNiVrgnZneMjdADAfkrJBY1OkwE5VpSXEaXiuO+u+a8GoGTFcFQw3DOQ5eCpJ7KQXwbSHwKa+UTgsCMpHxGca3bKtIjo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=dnmgLIUZ; arc=none smtp.client-ip=209.85.208.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="dnmgLIUZ" Received: by mail-ed1-f74.google.com with SMTP id 4fb4d7f45d1cf-5d3e77fd3b3so5297636a12.0 for ; Mon, 16 Dec 2024 09:58:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1734371915; x=1734976715; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=b5r7yCnkAiK5kyLrxO0D/txvshFXcnr6nZkeqCQZICw=; b=dnmgLIUZpfWnzUEbr0UdNv5wWtDRGwPsz5OODmJ2N6xXLeeRjBr27v7HTsDCEF8cbV 7O+OnN+zlJ0+1vMTJEJIkv4wPa3nMqLBzNasenGyQvOPk6SQViCw0pkD4X7wpzM49a/V mHOjnL6h/P4w2SL7EgzzVkEu0hPCQTqW1zBri+HpIoYxlZ7zuJay134yRKMTvaH/6O3n lB0N+6cDwKt+BVpp0V8maKGudynRI52KrIvnAbOPf3902SgClVpXZxRGmMQlB0I2Qmsh FJUEaSsDUggOTAf9Bn5sI9BLT2G11/piamYuRotFKoMq3Mz7LlMb6Tf+AdeCgiCjLKAV zwpA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734371915; x=1734976715; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=b5r7yCnkAiK5kyLrxO0D/txvshFXcnr6nZkeqCQZICw=; b=aPfqgo3oW/5De88Ch/Rq2WaNqrnZOeelHX/YpaWM43pRkJqek/YP+fB12p7SEx9Pd1 5vUMmE91wAq9FvPpAwRcMZy9L1MCfid8sXpu1k++gxnUg0rHLbrD8aBEeqOh5490AmZ0 vumZk+KJVtktefHBhc3ENKsowJF/F+04vBYbJLQV4WCTvAcwP3JRhOrpND3BbSUXDG5s lPMYOJjqyzOnt6yNEUxEJMwNSmYmROaGW/lee0USLxG0/MCNy6qW+W9rAhVc91necIzo 9aODM0jaFGiGqMvaEEQaobwSbCKPq/RZWPHSpqnm2y0YvYd38w8qpDsD0NFil7oqvjU5 9XYg== X-Forwarded-Encrypted: i=1; AJvYcCVXZMqTmhsoyqVJlppueBIWhjiUnDiEMgcOxkmmqYHn3z99nqYAd2A32o/9KoXtpxJkA0MJih5Xemma/v8=@vger.kernel.org X-Gm-Message-State: 
AOJu0YxjPSE99nESfz6T5IPb5H0DmNg7scchnIXfZaZuWUGEouza9s7J 4Qmk928XyKIfRuM/sdehOP/wKY5TZ+Ff/JwEeuBxG505D8NWIitGgM7EAveayc03MyJRwv8m7bx qns63tQ== X-Google-Smtp-Source: AGHT+IETu+O3p1iOBFO9EmhBHTULF9iIkYmes1FbMSi6hvoepqbMqPvqpSwnKmEI7JcKZIr1ZyErBddwlhp2 X-Received: from edbek12.prod.google.com ([2002:a05:6402:370c:b0:5d0:225b:ed39]) (user=qperret job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6402:4584:b0:5d4:5e4:1555 with SMTP id 4fb4d7f45d1cf-5d63c3200acmr13094867a12.19.1734371915059; Mon, 16 Dec 2024 09:58:35 -0800 (PST) Date: Mon, 16 Dec 2024 17:57:59 +0000 In-Reply-To: <20241216175803.2716565-1-qperret@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241216175803.2716565-1-qperret@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241216175803.2716565-15-qperret@google.com> Subject: [PATCH v3 14/18] KVM: arm64: Introduce __pkvm_host_test_clear_young_guest() From: Quentin Perret To: Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Catalin Marinas , Will Deacon Cc: Fuad Tabba , Vincent Donnefort , Sebastian Ene , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Plumb the kvm_stage2_test_clear_young() callback into pKVM for non-protected guest. It will be later be called from MMU notifiers. Signed-off-by: Quentin Perret Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/nvhe/hyp-main.c | 22 +++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 19 ++++++++++++++++ 4 files changed, 43 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_= asm.h index 8663a588cf34..4f97155d6323 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -69,6 +69,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_host_unshare_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest, + __KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index fc9fdd5b0a52..b3aaad150b3e 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -43,6 +43,7 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm= _hyp_vcpu *vcpu, enum k int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); +int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hy= p_vm *vm); =20 bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_= prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 98d317735107..616e172a9c48 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -303,6 +303,27 @@ static void handle___pkvm_host_wrprotect_guest(struct = kvm_cpu_context *host_ctxt cpu_reg(host_ctxt, 1) 
=3D ret; } =20 +static void handle___pkvm_host_test_clear_young_guest(struct kvm_cpu_conte= xt *host_ctxt) +{ + DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); + DECLARE_REG(u64, gfn, host_ctxt, 2); + DECLARE_REG(bool, mkold, host_ctxt, 3); + struct pkvm_hyp_vm *hyp_vm; + int ret =3D -EINVAL; + + if (!is_protected_kvm_enabled()) + goto out; + + hyp_vm =3D get_np_pkvm_hyp_vm(handle); + if (!hyp_vm) + goto out; + + ret =3D __pkvm_host_test_clear_young_guest(gfn, mkold, hyp_vm); + put_pkvm_hyp_vm(hyp_vm); +out: + cpu_reg(host_ctxt, 1) =3D ret; +} + static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); @@ -516,6 +537,7 @@ static const hcall_t host_hcall[] =3D { HANDLE_FUNC(__pkvm_host_unshare_guest), HANDLE_FUNC(__pkvm_host_relax_perms_guest), HANDLE_FUNC(__pkvm_host_wrprotect_guest), + HANDLE_FUNC(__pkvm_host_test_clear_young_guest), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index 94e4251b5077..0e42c3baaf4b 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -1530,3 +1530,22 @@ int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm= _hyp_vm *vm) =20 return ret; } + +int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hy= p_vm *vm) +{ + u64 ipa =3D hyp_pfn_to_phys(gfn); + u64 phys; + int ret; + + host_lock_component(); + guest_lock_component(vm); + + ret =3D __check_host_shared_guest(vm, &phys, ipa); + if (!ret) + ret =3D kvm_pgtable_stage2_test_clear_young(&vm->pgt, ipa, PAGE_SIZE, mk= old); + + guest_unlock_component(vm); + host_unlock_component(); + + return ret; +} --=20 2.47.1.613.gc27f4b7a9f-goog From nobody Mon Feb 9 20:30:38 2026 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id ACE4020E338 for ; Mon, 16 Dec 2024 17:58:38 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371920; cv=none; b=oSGGyKVs62Z1iMXTbUvnN2aDYVKIIalsjaQsZblcu5CgxjZISaRBOtRzI2NYJwwidJhgsZpxrR7nOtvU8vj7+pIDB+a4j+/wsuNooxH7UnxDFtY+KAePodzc+bNjUoGA9Qwp8CvwL1QVwEZwdPAYlmz+6P4GXN45XWRrukCT28I= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371920; c=relaxed/simple; bh=QvqoA8z+hgn6vgBY6ti5ESeXdlRlORzN/4hSA1c2Xrs=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=bYcnAFvLZd95bijUBede72mnnz2gCtVtdECYL2Ur0Jcv/bv1snfnTHnJ4cWupobCdq9q8Z1NrpDFNIUeXzLks24aSxCAjnnm2M228oI90TeNJynvojKBhNI3etqEp4zf98b6qLaESg9krPZRCkGoYwL0EBDBSANChC8uGkU6l4A= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=WBNw0rfn; arc=none smtp.client-ip=209.85.218.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com 
header.b="WBNw0rfn" Received: by mail-ej1-f74.google.com with SMTP id a640c23a62f3a-aa69e84128aso377610966b.1 for ; Mon, 16 Dec 2024 09:58:38 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1734371917; x=1734976717; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=4YOHiNJrZBpE7+WMk67gEyR4d7/2QuIAsHv8fTvzpm8=; b=WBNw0rfnjs1ObZO7bvFMm/o6cSJwYr34HJG6bzv9Om4Gr0axnJZgW7haC3n15VBeeh lOb2cZN3Zwo2659H+7KcKgqt/b0bhv6gE6ak8UZKRC0l83WqyMrnboRbWFhUuw8gL40T c1WlCGMW8ORDf11gNoDjJGibPJLsmdRF6Dkl68hS0+SwZt5VnlaDUaCmRvoG+Lfpv7iN o+NacYoY7UeEgDjQ8jEvfzgkdJjJLKv+536kcHKnlxsbHM1BJrWlewDulL9phlcnQ/CI 2SYxXVAMhfXXBJ2qbgIpfl/MwRFo48Uv8rZTZl5HJ1zZS8O7hXCXuzylBkeulQXUNISJ ZsyQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734371917; x=1734976717; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=4YOHiNJrZBpE7+WMk67gEyR4d7/2QuIAsHv8fTvzpm8=; b=BykTHIC1s+K7iAPbYgA3rZO1AeD8k1LDcyY1hqlN76434AR43TKMrFk7RZpYpB1lIW PFcJavfEwayRpCzcqFGHzZ9xr8I6kr+CCv0efZzJnI8PCbGqdaPPe4sstGZZ8ho5bfxR xB3OLkM9BE4qzru0vjfqceEOrJU/YWwJLmMu4lU1FxPSOZWnJ+9QtxRT6s+QqTA6h1F8 1qRavi/63GW+OhbZlCusUkSx45J2338O6MNwZO08z+lqsZ8Z0+ocN3szX9a43Mu0rodP zm3UsC++KEWIIwBA/iKqTZOtGY8yXuUVQaVbmX7S3pzKkA6js4WqEMXRexL3eCqf+sT5 9Fow== X-Forwarded-Encrypted: i=1; AJvYcCXBuJ2Ixgx1X/1oQ0nQUcDY/ZN8k+mNAMzWKPHglAQ9KPfeL6rZnBOV04ka4vSmQsf0mlE0cyuXe3CgLPc=@vger.kernel.org X-Gm-Message-State: AOJu0Yy5iQ9s2ji8DFU/uwwrnR15JZWEIahlXSKXlJ8gYbpYjZdXDLGv cKLCapGyjyf0FFh9h9PTu5l13qxBzN+WYk91RZ7Qf1Z8z5PFr3pGwjsMzQ1cvFoTx9vH852gCFr uiKN4tA== X-Google-Smtp-Source: AGHT+IGwqENMAGZonee0eKfEqVGzhNKsT5jXcNwjSd/KRUBOe/TazOnE3jhEec1mHF2pcCq0KmtgL72Jbhuo X-Received: from edyd3.prod.google.com ([2002:a05:6402:783:b0:5d1:f6fd:8acc]) (user=qperret job=prod-delivery.src-stubby-dispatcher) by 2002:a17:907:60d6:b0:aa6:730c:acd with SMTP id a640c23a62f3a-aab7792c704mr1428447066b.16.1734371917086; Mon, 16 Dec 2024 09:58:37 -0800 (PST) Date: Mon, 16 Dec 2024 17:58:00 +0000 In-Reply-To: <20241216175803.2716565-1-qperret@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241216175803.2716565-1-qperret@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241216175803.2716565-16-qperret@google.com> Subject: [PATCH v3 15/18] KVM: arm64: Introduce __pkvm_host_mkyoung_guest() From: Quentin Perret To: Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Catalin Marinas , Will Deacon Cc: Fuad Tabba , Vincent Donnefort , Sebastian Ene , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Plumb the kvm_pgtable_stage2_mkyoung() callback into pKVM for non-protected guests. It will be called later from the fault handling path. 
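For context, a minimal sketch of the EL1 wrapper this enables (the wrapper name is an assumption; the shape follows from the handler below, which takes only a gfn because the target VM is inferred from the currently loaded vCPU):

static void host_mkyoung_guest_page(u64 ipa)
{
	/*
	 * Best-effort access-flag update: EL2 resolves the VM from the
	 * loaded vCPU, so only the gfn is passed, and a failure is only
	 * worth a warning.
	 */
	WARN_ON(kvm_call_hyp_nvhe(__pkvm_host_mkyoung_guest, ipa >> PAGE_SHIFT));
}

The access-fault path at the end of the series consumes the hypercall in exactly this form.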
Signed-off-by: Quentin Perret Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/kvm/hyp/include/nvhe/mem_protect.h | 1 + arch/arm64/kvm/hyp/nvhe/hyp-main.c | 19 ++++++++++++++++++ arch/arm64/kvm/hyp/nvhe/mem_protect.c | 20 +++++++++++++++++++ 4 files changed, 41 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_= asm.h index 4f97155d6323..a3b07db2776c 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -70,6 +70,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_host_relax_perms_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_wrprotect_guest, __KVM_HOST_SMCCC_FUNC___pkvm_host_test_clear_young_guest, + __KVM_HOST_SMCCC_FUNC___pkvm_host_mkyoung_guest, __KVM_HOST_SMCCC_FUNC___kvm_adjust_pc, __KVM_HOST_SMCCC_FUNC___kvm_vcpu_run, __KVM_HOST_SMCCC_FUNC___kvm_flush_vm_context, diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm= /hyp/include/nvhe/mem_protect.h index b3aaad150b3e..65c34753d86c 100644 --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h @@ -44,6 +44,7 @@ int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm= *hyp_vm); int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enu= m kvm_pgtable_prot prot); int __pkvm_host_wrprotect_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm); int __pkvm_host_test_clear_young_guest(u64 gfn, bool mkold, struct pkvm_hy= p_vm *vm); +int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu); =20 bool addr_is_memory(phys_addr_t phys); int host_stage2_idmap_locked(phys_addr_t addr, u64 size, enum kvm_pgtable_= prot prot); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 616e172a9c48..32c4627b5b5b 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -324,6 +324,24 @@ static void handle___pkvm_host_test_clear_young_guest(= struct kvm_cpu_context *ho cpu_reg(host_ctxt, 1) =3D ret; } =20 +static void handle___pkvm_host_mkyoung_guest(struct kvm_cpu_context *host_= ctxt) +{ + DECLARE_REG(u64, gfn, host_ctxt, 1); + struct pkvm_hyp_vcpu *hyp_vcpu; + int ret =3D -EINVAL; + + if (!is_protected_kvm_enabled()) + goto out; + + hyp_vcpu =3D pkvm_get_loaded_hyp_vcpu(); + if (!hyp_vcpu || pkvm_hyp_vcpu_is_protected(hyp_vcpu)) + goto out; + + ret =3D __pkvm_host_mkyoung_guest(gfn, hyp_vcpu); +out: + cpu_reg(host_ctxt, 1) =3D ret; +} + static void handle___kvm_adjust_pc(struct kvm_cpu_context *host_ctxt) { DECLARE_REG(struct kvm_vcpu *, vcpu, host_ctxt, 1); @@ -538,6 +556,7 @@ static const hcall_t host_hcall[] =3D { HANDLE_FUNC(__pkvm_host_relax_perms_guest), HANDLE_FUNC(__pkvm_host_wrprotect_guest), HANDLE_FUNC(__pkvm_host_test_clear_young_guest), + HANDLE_FUNC(__pkvm_host_mkyoung_guest), HANDLE_FUNC(__kvm_adjust_pc), HANDLE_FUNC(__kvm_vcpu_run), HANDLE_FUNC(__kvm_flush_vm_context), diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvh= e/mem_protect.c index 0e42c3baaf4b..eae03509d371 100644 --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c @@ -1549,3 +1549,23 @@ int __pkvm_host_test_clear_young_guest(u64 gfn, bool= mkold, struct pkvm_hyp_vm * =20 return ret; } + +int __pkvm_host_mkyoung_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu) +{ + struct pkvm_hyp_vm *vm =3D pkvm_hyp_vcpu_to_hyp_vm(vcpu); + u64 ipa =3D hyp_pfn_to_phys(gfn); + u64 phys; + int ret; + + host_lock_component(); + guest_lock_component(vm); + + 
ret =3D __check_host_shared_guest(vm, &phys, ipa); + if (!ret) + kvm_pgtable_stage2_mkyoung(&vm->pgt, ipa, 0); + + guest_unlock_component(vm); + host_unlock_component(); + + return ret; +} --=20 2.47.1.613.gc27f4b7a9f-goog From nobody Mon Feb 9 20:30:38 2026 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DD4EB20FAA7 for ; Mon, 16 Dec 2024 17:58:40 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371922; cv=none; b=dsjSIKOxE0/n2txf0ChXRg+67DAVcCpg2Mn3bWpijgCJf9+4vTVuqURxsL60/MyKqx+Mq2j9MO3WJ2KKRLzV8SH7mVvk766vRFeXVFoOKgD8sZcj82Uvf2MApqnPKdFxZLZTDayb0rQHm5v814MKIZMvcq277+Glg6n6gaM0dTA= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371922; c=relaxed/simple; bh=S5C7PA49mbBbYP0VC8WHfs83VKenKobir8rsTzZpXTE=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=kPij2P5WuBLsgQVmKIPl+AtpQVbOZGlk0ve8+q6BZdehf06IqKB+1qZmaZMMrwSst+tlZKOMt/0yAtf5sj3F5guQjmYzmNY+B6PKfW0uTQJ4y9gUbsZp4N/TlUarG6upuEXpeu/WQekS30sTEqoOY7GH+3Dkrw9rVjjW7w8HBYY= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=0zXVX60f; arc=none smtp.client-ip=209.85.208.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="0zXVX60f" Received: by mail-ed1-f73.google.com with SMTP id 4fb4d7f45d1cf-5d3cef3ed56so3931308a12.1 for ; Mon, 16 Dec 2024 09:58:40 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1734371919; x=1734976719; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=vLkxU7WF3J5Qh0Edw4/qg3eGiYkLZ2iXz4JSY5pi8io=; b=0zXVX60fzBTtnMLd4lvPJroPPfiIpJb6ZD6zMOphw5jhPpPeJH5de+O8tUq1UQawIr ZOpR0HuiGKv7pVf2lShNBk0eFVvQ6DL8eaJCQ/S0xjxn51BHuNV46ahYp/p6kTt7zJ04 EBHs8/z5CaEFOWl70mi7R0fJ179Hcw1HdgLGH2Injf1a4qrbJGo+lipHKO7gezslFMi0 Ji75ywrbp4wKUpXxdP+EVCxxhpjDDJWPyGOX6mybDjnFXtKMWMs+Dl9wvQFmcuUNFelc h1FepcWTja/MOWxkYoKVR3gFDC196d0umWC/WBcFDkYkFnyxqQcUEsDepxpl21+Do42Q dUyQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734371919; x=1734976719; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=vLkxU7WF3J5Qh0Edw4/qg3eGiYkLZ2iXz4JSY5pi8io=; b=gnuv4ZG0w2oDAZbiVag6+ZzZYMWst8msM229myYrEBO3XURa1naaDadFvvgW2UOwaD Pg7iaDQBazRiSOzWBoAoJhE/YtqhDJQdBOibEWe/S44ZE02mi2xWqBSBlqzNsZR8zE2a RAK8wcVG0+aDwC1rjpa845Grz0RpwHlhT2M9hepMzSPMIWM8w8q0f3trEzzJp1LSx4Da Os5YkjTFvMcEcDIQQ8Vx/lg91poi7cHB5UJOeGyh4uxLr/C45mA0S5WwKlrrJ/DqRzkR /SEIL1mBHY2WoHq5o5QAkJ3gots5nXA7lyGY+kysh00UEk9hSna666CPVyfwYomhbIrW bIDQ== X-Forwarded-Encrypted: i=1; AJvYcCU8coiq0/udkr8fPXaFHfPBnW2r3w6ZLKlQwUvgT7SHrdrYpugggTFEOd7Kiz3iUt/ZUmQgLkC1yqCb0oM=@vger.kernel.org 
X-Gm-Message-State: AOJu0YzEU+/oDzUsaaUUjASCi0NsrO8m+Mz9CvxGBwG7iZNa3/cmMSBh PbUQ3sAninooZVLDuMVyYFmc43lHB747f97zja+xH8B2TnD2wNRaaJINXLvEOJPYNK+zqt/31zz gCmgTqQ== X-Google-Smtp-Source: AGHT+IGbIEoG5vbn6iQy08UEUu36i7r1jxdCnom039llyQo30KhKhUU2a+YhuuCdj1YaRdinsIVQTfjkxghQ X-Received: from edvu14.prod.google.com ([2002:a05:6402:110e:b0:5cf:ca3f:365]) (user=qperret job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6402:2807:b0:5d0:d610:caa2 with SMTP id 4fb4d7f45d1cf-5d63c3ad29cmr12773175a12.26.1734371919408; Mon, 16 Dec 2024 09:58:39 -0800 (PST) Date: Mon, 16 Dec 2024 17:58:01 +0000 In-Reply-To: <20241216175803.2716565-1-qperret@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241216175803.2716565-1-qperret@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241216175803.2716565-17-qperret@google.com> Subject: [PATCH v3 16/18] KVM: arm64: Introduce __pkvm_tlb_flush_vmid() From: Quentin Perret To: Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Catalin Marinas , Will Deacon Cc: Fuad Tabba , Vincent Donnefort , Sebastian Ene , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Introduce a new hypercall to flush the TLBs of non-protected guests. The host kernel will be responsible for issuing this hypercall after changing stage-2 permissions using the __pkvm_host_relax_guest_perms() or __pkvm_host_wrprotect_guest() paths. This is left under the host's responsibility for performance reasons. Note however that the TLB maintenance for all *unmap* operations still remains entirely under the hypervisor's responsibility for security reasons -- an unmapped page may be donated to another entity, so a stale TLB entry could be used to leak private data. 
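To make the split of responsibilities concrete, here is a minimal sketch of the ordering the host is expected to follow; the helper name is made up and the plain per-gfn loop stands in for the EL1 helpers added later in the series:

static int host_wrprotect_range_and_flush(struct kvm *kvm, u64 gfn, u64 nr_pages)
{
	pkvm_handle_t handle = kvm->arch.pkvm.handle;
	u64 end = gfn + nr_pages;
	int ret;

	/* Tighten the stage-2 permissions first; EL2 does not flush for us. */
	for (; gfn < end; gfn++) {
		ret = kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, gfn);
		if (ret)
			return ret;
	}

	/* A single VMID-wide invalidation then covers every PTE touched above. */
	kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, handle);

	return 0;
}

Unmap paths, by contrast, never rely on this hypercall: as noted above, their TLB maintenance stays entirely at EL2.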
Signed-off-by: Quentin Perret Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- arch/arm64/include/asm/kvm_asm.h | 1 + arch/arm64/kvm/hyp/nvhe/hyp-main.c | 17 +++++++++++++++++ 2 files changed, 18 insertions(+) diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_= asm.h index a3b07db2776c..002088c6e297 100644 --- a/arch/arm64/include/asm/kvm_asm.h +++ b/arch/arm64/include/asm/kvm_asm.h @@ -87,6 +87,7 @@ enum __kvm_host_smccc_func { __KVM_HOST_SMCCC_FUNC___pkvm_teardown_vm, __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_load, __KVM_HOST_SMCCC_FUNC___pkvm_vcpu_put, + __KVM_HOST_SMCCC_FUNC___pkvm_tlb_flush_vmid, }; =20 #define DECLARE_KVM_VHE_SYM(sym) extern char sym[] diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 32c4627b5b5b..130f5f23bcb5 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -389,6 +389,22 @@ static void handle___kvm_tlb_flush_vmid(struct kvm_cpu= _context *host_ctxt) __kvm_tlb_flush_vmid(kern_hyp_va(mmu)); } =20 +static void handle___pkvm_tlb_flush_vmid(struct kvm_cpu_context *host_ctxt) +{ + DECLARE_REG(pkvm_handle_t, handle, host_ctxt, 1); + struct pkvm_hyp_vm *hyp_vm; + + if (!is_protected_kvm_enabled()) + return; + + hyp_vm =3D get_np_pkvm_hyp_vm(handle); + if (!hyp_vm) + return; + + __kvm_tlb_flush_vmid(&hyp_vm->kvm.arch.mmu); + put_pkvm_hyp_vm(hyp_vm); +} + static void handle___kvm_flush_cpu_context(struct kvm_cpu_context *host_ct= xt) { DECLARE_REG(struct kvm_s2_mmu *, mmu, host_ctxt, 1); @@ -573,6 +589,7 @@ static const hcall_t host_hcall[] =3D { HANDLE_FUNC(__pkvm_teardown_vm), HANDLE_FUNC(__pkvm_vcpu_load), HANDLE_FUNC(__pkvm_vcpu_put), + HANDLE_FUNC(__pkvm_tlb_flush_vmid), }; =20 static void handle_host_hcall(struct kvm_cpu_context *host_ctxt) --=20 2.47.1.613.gc27f4b7a9f-goog From nobody Mon Feb 9 20:30:38 2026 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DABB5211484 for ; Mon, 16 Dec 2024 17:58:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.208.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371924; cv=none; b=TmNwBULFDh1lQdzru8cxBNdPBTdFQl9rtl8q5ZAKJweEZvjw4SR0rGEO5JObXQrhnU9Y12ZqHStesJw8Gj6kg7iENgcszZ3HKpWABkE5jtfN/MvIL9sCsdwPAVUiQ2RVlEUIzZpgvgwvuV55/T2awoqlU+zzYY+cNgYpiLhmztg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371924; c=relaxed/simple; bh=EpMiEdTx+L6LwtUfMkakJOBnHaVnLxop3EqsIxmJhfg=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=UtLfptYlgAQMo4Ix8Qb8avYOU29abKVSXCqjJoAwnkiK78KyBhKs3cbK9KfxSXeLi5HYvLTZ5KNEgK+2JGBtoTM2yClgsW8oKHZI6Fx06/BmAnUpSvxzSaL9eC0yeAC3Q/3XuiS81r6UFqlc8iuILd6MYNR/n06V09nfzrz7WHw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=3aw7CJWu; arc=none smtp.client-ip=209.85.208.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com 
header.i=@google.com header.b="3aw7CJWu" Received: by mail-ed1-f74.google.com with SMTP id 4fb4d7f45d1cf-5d3cb2e6c42so5070247a12.3 for ; Mon, 16 Dec 2024 09:58:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1734371921; x=1734976721; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=QHTtHFS5HwQp5yZdDtuGaqFCf7n78KGed6qmJTmvD8M=; b=3aw7CJWuqrxNanHF0F4wA3ngsMrH1/bzGoNslUa4MDJ9/t3r4zLZiXh2bYtN7pMfYk wq9zrDFpYTUv9Ave4Vlai93/5t/zqK0bFGixnGPwP54M7lHZQyyBe2r8XXXvCt3Yr0vY NU9MB6/1MFtICZhhQRub1Yc3c/GKMJCSIBsQ1xtZXuytAWVuDjKvqEPXFQMMX5Flae0M qAcehf36zhREfLC4INykLGd07qXYlTpr+n385z2d1dG/gXj3Lb7y2pSEgtcTgiCCjMOh NLSImg0X5EwxKByqvApus9Po7CPoncLtPRvDrcdgu4GZOf4LFl4A0XN5MfBcAeMQdmQD f2Gg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734371921; x=1734976721; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=QHTtHFS5HwQp5yZdDtuGaqFCf7n78KGed6qmJTmvD8M=; b=Awb96Nxjh3e1JExmD7GwnovIfcnUAMDbxz+FOQZHvdseRSDou/JxBCkQePW0tl+/Wr 1wdoSjWtbIBLRljo2aCjM7rc97OREORZkACR5VMv3QLxLCT8WoSgqNe/mChQOk2zukBr a5G/9oY2vFFFDYiRXPJ3EvfBpKXkSIzQrj2YHXUAjKaQoi0hxld2bOC76NFF9xTGpZBf 6JrwyxZGvetXu541ADXQodFmJAkaUz+n0zUgemjQGvTn5S2teZmGduCO+QeNGNIaMngh 9I7Rx/80gija1nEpkhWNZapDACkSWTR63hjZneKXnRzdMFql4cN1r/DHxE5YANXE7d+m J1eQ== X-Forwarded-Encrypted: i=1; AJvYcCVWR0F5akOHD8pniMxLAJKAUlTVecs+5k/AwwgbmrC9mAjCkkfHUMEUq3OZcj3k1XCiBLvRbJF4DcQrHig=@vger.kernel.org X-Gm-Message-State: AOJu0YwXFmiNcxrFnY1RGochfkGA5S12+c1sCXwclJUzxf8IQ+qSPNt/ tnWy+fu93lFa/cMjbt4BNmo3aG2w0VoGN88eqM9EfuDinHGqZgdDHZ5E6KeB0YH9yPqov3yHyiV 1rS6E+Q== X-Google-Smtp-Source: AGHT+IHPbTD8KaiYcFmXHQWnggpBPrsvUFvz8/+wp3Al0bflSEL8Fx1IyM4sfmVOB6RApaHAK0UEo7YC5GBv X-Received: from edcp13.prod.google.com ([2002:a05:6402:43cd:b0:5d0:1dc6:40e5]) (user=qperret job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6402:5193:b0:5d1:22c2:6c56 with SMTP id 4fb4d7f45d1cf-5d7d4092d1fmr554063a12.17.1734371921520; Mon, 16 Dec 2024 09:58:41 -0800 (PST) Date: Mon, 16 Dec 2024 17:58:02 +0000 In-Reply-To: <20241216175803.2716565-1-qperret@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20241216175803.2716565-1-qperret@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241216175803.2716565-18-qperret@google.com> Subject: [PATCH v3 17/18] KVM: arm64: Introduce the EL1 pKVM MMU From: Quentin Perret To: Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Catalin Marinas , Will Deacon Cc: Fuad Tabba , Vincent Donnefort , Sebastian Ene , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Introduce a set of helper functions allowing to manipulate the pKVM guest stage-2 page-tables from EL1 using pKVM's HVC interface. Each helper has an exact one-to-one correspondance with the traditional kvm_pgtable_stage2_*() functions from pgtable.c, with a strictly matching prototype. This will ease plumbing later on in mmu.c. These callbacks track the gfn->pfn mappings in a simple rb_tree indexed by IPA in lieu of a page-table. 
This rb-tree is kept in sync with pKVM's state and is protected by a new rwlock -- the existing mmu_lock protection does not suffice in the map() path where the tree must be modified while user_mem_abort() only acquires a read_lock. Signed-off-by: Quentin Perret Tested-by: Fuad Tabba --- arch/arm64/include/asm/kvm_host.h | 1 + arch/arm64/include/asm/kvm_pgtable.h | 23 ++-- arch/arm64/include/asm/kvm_pkvm.h | 23 ++++ arch/arm64/kvm/pkvm.c | 198 +++++++++++++++++++++++++++ 4 files changed, 236 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm= _host.h index 1246f1d01dbf..f23f4ea9ec8b 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -85,6 +85,7 @@ void kvm_arm_vcpu_destroy(struct kvm_vcpu *vcpu); struct kvm_hyp_memcache { phys_addr_t head; unsigned long nr_pages; + struct pkvm_mapping *mapping; /* only used from EL1 */ }; =20 static inline void push_hyp_memcache(struct kvm_hyp_memcache *mc, diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/= kvm_pgtable.h index 04418b5e3004..6b9d274052c7 100644 --- a/arch/arm64/include/asm/kvm_pgtable.h +++ b/arch/arm64/include/asm/kvm_pgtable.h @@ -412,15 +412,20 @@ static inline bool kvm_pgtable_walk_lock_held(void) * be used instead of block mappings. */ struct kvm_pgtable { - u32 ia_bits; - s8 start_level; - kvm_pteref_t pgd; - struct kvm_pgtable_mm_ops *mm_ops; - - /* Stage-2 only */ - struct kvm_s2_mmu *mmu; - enum kvm_pgtable_stage2_flags flags; - kvm_pgtable_force_pte_cb_t force_pte_cb; + union { + struct rb_root pkvm_mappings; + struct { + u32 ia_bits; + s8 start_level; + kvm_pteref_t pgd; + struct kvm_pgtable_mm_ops *mm_ops; + + /* Stage-2 only */ + enum kvm_pgtable_stage2_flags flags; + kvm_pgtable_force_pte_cb_t force_pte_cb; + }; + }; + struct kvm_s2_mmu *mmu; }; =20 /** diff --git a/arch/arm64/include/asm/kvm_pkvm.h b/arch/arm64/include/asm/kvm= _pkvm.h index cd56acd9a842..76a8b70176a6 100644 --- a/arch/arm64/include/asm/kvm_pkvm.h +++ b/arch/arm64/include/asm/kvm_pkvm.h @@ -137,4 +137,27 @@ static inline size_t pkvm_host_sve_state_size(void) SVE_SIG_REGS_SIZE(sve_vq_from_vl(kvm_host_sve_max_vl))); } =20 +struct pkvm_mapping { + struct rb_node node; + u64 gfn; + u64 pfn; +}; + +int pkvm_pgtable_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, str= uct kvm_pgtable_mm_ops *mm_ops); +void pkvm_pgtable_destroy(struct kvm_pgtable *pgt); +int pkvm_pgtable_map(struct kvm_pgtable *pgt, u64 addr, u64 size, + u64 phys, enum kvm_pgtable_prot prot, + void *mc, enum kvm_pgtable_walk_flags flags); +int pkvm_pgtable_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size); +int pkvm_pgtable_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size); +int pkvm_pgtable_flush(struct kvm_pgtable *pgt, u64 addr, u64 size); +bool pkvm_pgtable_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 = size, bool mkold); +int pkvm_pgtable_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_p= gtable_prot prot, + enum kvm_pgtable_walk_flags flags); +void pkvm_pgtable_mkyoung(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgta= ble_walk_flags flags); +int pkvm_pgtable_split(struct kvm_pgtable *pgt, u64 addr, u64 size, struct= kvm_mmu_memory_cache *mc); +void pkvm_pgtable_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *p= gtable, s8 level); +kvm_pte_t *pkvm_pgtable_create_unlinked(struct kvm_pgtable *pgt, u64 phys,= s8 level, + enum kvm_pgtable_prot prot, void *mc, bool force_pte); + #endif /* __ARM64_KVM_PKVM_H__ */ diff --git 
a/arch/arm64/kvm/pkvm.c b/arch/arm64/kvm/pkvm.c index 85117ea8f351..9de9159afa5a 100644 --- a/arch/arm64/kvm/pkvm.c +++ b/arch/arm64/kvm/pkvm.c @@ -7,6 +7,7 @@ #include #include #include +#include #include #include #include @@ -268,3 +269,200 @@ static int __init finalize_pkvm(void) return ret; } device_initcall_sync(finalize_pkvm); + +static int cmp_mappings(struct rb_node *node, const struct rb_node *parent) +{ + struct pkvm_mapping *a =3D rb_entry(node, struct pkvm_mapping, node); + struct pkvm_mapping *b =3D rb_entry(parent, struct pkvm_mapping, node); + + if (a->gfn < b->gfn) + return -1; + if (a->gfn > b->gfn) + return 1; + return 0; +} + +static struct rb_node *find_first_mapping_node(struct rb_root *root, u64 g= fn) +{ + struct rb_node *node =3D root->rb_node, *prev =3D NULL; + struct pkvm_mapping *mapping; + + while (node) { + mapping =3D rb_entry(node, struct pkvm_mapping, node); + if (mapping->gfn =3D=3D gfn) + return node; + prev =3D node; + node =3D (gfn < mapping->gfn) ? node->rb_left : node->rb_right; + } + + return prev; +} + +/* + * __tmp is updated to rb_next(__tmp) *before* entering the body of the lo= op to allow freeing + * of __map inline. + */ +#define for_each_mapping_in_range_safe(__pgt, __start, __end, __map) \ + for (struct rb_node *__tmp =3D find_first_mapping_node(&(__pgt)->pkvm_map= pings, \ + ((__start) >> PAGE_SHIFT)); \ + __tmp && ({ \ + __map =3D rb_entry(__tmp, struct pkvm_mapping, node); \ + __tmp =3D rb_next(__tmp); \ + true; \ + }); \ + ) \ + if (__map->gfn < ((__start) >> PAGE_SHIFT)) \ + continue; \ + else if (__map->gfn >=3D ((__end) >> PAGE_SHIFT)) \ + break; \ + else + +int pkvm_pgtable_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu, str= uct kvm_pgtable_mm_ops *mm_ops) +{ + pgt->pkvm_mappings =3D RB_ROOT; + pgt->mmu =3D mmu; + + return 0; +} + +void pkvm_pgtable_destroy(struct kvm_pgtable *pgt) +{ + struct kvm *kvm =3D kvm_s2_mmu_to_kvm(pgt->mmu); + pkvm_handle_t handle =3D kvm->arch.pkvm.handle; + struct pkvm_mapping *mapping; + struct rb_node *node; + + if (!handle) + return; + + node =3D rb_first(&pgt->pkvm_mappings); + while (node) { + mapping =3D rb_entry(node, struct pkvm_mapping, node); + kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gfn); + node =3D rb_next(node); + rb_erase(&mapping->node, &pgt->pkvm_mappings); + kfree(mapping); + } +} + +int pkvm_pgtable_map(struct kvm_pgtable *pgt, u64 addr, u64 size, + u64 phys, enum kvm_pgtable_prot prot, + void *mc, enum kvm_pgtable_walk_flags flags) +{ + struct kvm *kvm =3D kvm_s2_mmu_to_kvm(pgt->mmu); + struct pkvm_mapping *mapping =3D NULL; + struct kvm_hyp_memcache *cache =3D mc; + u64 gfn =3D addr >> PAGE_SHIFT; + u64 pfn =3D phys >> PAGE_SHIFT; + int ret; + + if (size !=3D PAGE_SIZE) + return -EINVAL; + + lockdep_assert_held_write(&kvm->mmu_lock); + ret =3D kvm_call_hyp_nvhe(__pkvm_host_share_guest, pfn, gfn, prot); + if (ret) { + /* Is the gfn already mapped due to a racing vCPU? 
*/ + if (ret =3D=3D -EPERM) + return -EAGAIN; + } + + swap(mapping, cache->mapping); + mapping->gfn =3D gfn; + mapping->pfn =3D pfn; + WARN_ON(rb_find_add(&mapping->node, &pgt->pkvm_mappings, cmp_mappings)); + + return ret; +} + +int pkvm_pgtable_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size) +{ + struct kvm *kvm =3D kvm_s2_mmu_to_kvm(pgt->mmu); + pkvm_handle_t handle =3D kvm->arch.pkvm.handle; + struct pkvm_mapping *mapping; + int ret =3D 0; + + lockdep_assert_held_write(&kvm->mmu_lock); + for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) { + ret =3D kvm_call_hyp_nvhe(__pkvm_host_unshare_guest, handle, mapping->gf= n); + if (WARN_ON(ret)) + break; + rb_erase(&mapping->node, &pgt->pkvm_mappings); + kfree(mapping); + } + + return ret; +} + +int pkvm_pgtable_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size) +{ + struct kvm *kvm =3D kvm_s2_mmu_to_kvm(pgt->mmu); + pkvm_handle_t handle =3D kvm->arch.pkvm.handle; + struct pkvm_mapping *mapping; + int ret =3D 0; + + lockdep_assert_held(&kvm->mmu_lock); + for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) { + ret =3D kvm_call_hyp_nvhe(__pkvm_host_wrprotect_guest, handle, mapping->= gfn); + if (WARN_ON(ret)) + break; + } + + return ret; +} + +int pkvm_pgtable_flush(struct kvm_pgtable *pgt, u64 addr, u64 size) +{ + struct kvm *kvm =3D kvm_s2_mmu_to_kvm(pgt->mmu); + struct pkvm_mapping *mapping; + + lockdep_assert_held(&kvm->mmu_lock); + for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) + __clean_dcache_guest_page(pfn_to_kaddr(mapping->pfn), PAGE_SIZE); + + return 0; +} + +bool pkvm_pgtable_test_clear_young(struct kvm_pgtable *pgt, u64 addr, u64 = size, bool mkold) +{ + struct kvm *kvm =3D kvm_s2_mmu_to_kvm(pgt->mmu); + pkvm_handle_t handle =3D kvm->arch.pkvm.handle; + struct pkvm_mapping *mapping; + bool young =3D false; + + lockdep_assert_held(&kvm->mmu_lock); + for_each_mapping_in_range_safe(pgt, addr, addr + size, mapping) + young |=3D kvm_call_hyp_nvhe(__pkvm_host_test_clear_young_guest, handle,= mapping->gfn, + mkold); + + return young; +} + +int pkvm_pgtable_relax_perms(struct kvm_pgtable *pgt, u64 addr, enum kvm_p= gtable_prot prot, + enum kvm_pgtable_walk_flags flags) +{ + return kvm_call_hyp_nvhe(__pkvm_host_relax_perms_guest, addr >> PAGE_SHIF= T, prot); +} + +void pkvm_pgtable_mkyoung(struct kvm_pgtable *pgt, u64 addr, enum kvm_pgta= ble_walk_flags flags) +{ + WARN_ON(kvm_call_hyp_nvhe(__pkvm_host_mkyoung_guest, addr >> PAGE_SHIFT)); +} + +void pkvm_pgtable_free_unlinked(struct kvm_pgtable_mm_ops *mm_ops, void *p= gtable, s8 level) +{ + WARN_ON_ONCE(1); +} + +kvm_pte_t *pkvm_pgtable_create_unlinked(struct kvm_pgtable *pgt, u64 phys,= s8 level, + enum kvm_pgtable_prot prot, void *mc, bool force_pte) +{ + WARN_ON_ONCE(1); + return NULL; +} + +int pkvm_pgtable_split(struct kvm_pgtable *pgt, u64 addr, u64 size, struct= kvm_mmu_memory_cache *mc) +{ + WARN_ON_ONCE(1); + return -EINVAL; +} --=20 2.47.1.613.gc27f4b7a9f-goog From nobody Mon Feb 9 20:30:38 2026 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 01AA2211469 for ; Mon, 16 Dec 2024 17:58:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.218.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371927; cv=none; 
b=kmFZPi84vs5/FJs+/g35jDv02w7Va4qgaNUB0API2CGFUVvE3XBeM55YaL/STdBe83U1a1yXQ+WmAOlgRkWdwbbbd8zj5b86jlACMlpjatfLRqgXUvsBWj2loUU6M8YuHNIrHGNztcX00Uz7ffyocYf8/vbRDGZVpl9I8z7FDyw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1734371927; c=relaxed/simple; bh=FVmnx3tEtWToIEfDoX2uZSWUX5XEpJlCzoydyXN99aM=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=sj4ZzR0LsCdi00vVadJGvGV7gTSoEkCKs+/bAkpusQAWUIt2v71QAbKpOhyCz+utW3SJ191uZe9E5Hi3y+J1nUVq0hbYSybjnXi1Myax/CYXB/ygnxY6CsEYG0ugJk2IMPCxNlPMWTqakUzR5fCa9rVmmv7HxU9PfBPrLz16n4U= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=dfBKxES9; arc=none smtp.client-ip=209.85.218.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--qperret.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="dfBKxES9" Received: by mail-ej1-f74.google.com with SMTP id a640c23a62f3a-aa698b61931so316518866b.2 for ; Mon, 16 Dec 2024 09:58:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1734371923; x=1734976723; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=0WLd7b4UlDBftWKgCzWDkFinsnMNAvtsNvXfdB6R70k=; b=dfBKxES9QQEpvTjccFedG/TZjwu1QmbRiVO2jEkei3U771Gmwx9qXcBsDjXmNUGkMV OoXyf3AotPuZA/f1+s2E/65DgJLNPmimyKRrVjymRI43XA3awkcuWp+Q4ljupMISjq4y 70gjVo6xk7txCaPJqhgb9k9tFRDyAtt+XzdDQFZkyiKJsvOJBE408wDDNgknaAmZwQY5 9xvoXF+3qvzXsGBj9OqgVw3zDS90oUUpUgqJMv4uXVqB9/XO5SkaVey8d9kSVABd31xo 0Fqjji+bffK3oYELQJPKr27WYQl2qoT/8uu992EhGMcxADE59XAVJJfM2xvThGyZRMHz YwNQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1734371923; x=1734976723; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=0WLd7b4UlDBftWKgCzWDkFinsnMNAvtsNvXfdB6R70k=; b=J2h0E1sS+X8z2HBPFYGqeujaBjjt6R8DksdAQ/5s7sLghDNU/IV0ljkGlNsD+r6lIC 3vne3Jc2Xs9DtyGaSOlzoGs0z96pTc2yMn7laTv3RFQ+Q3MdSubcBtkvaahviQRRFzX/ vCSCj4U4n+owKyd8N/1NHJ+FT6K55hBNQGzMkvAyVDs1Rg8ZZZS1ZtzlrxJPJzt+SMyx eqHjsxY90SlsO5BO1CP6E1N2ORiPOjb9O95kHrtqUWAXBjhpPcP1YWnKmmeluF1bq35W xR8S2hzBXf6Qq5UpHlKejlmoU9eUKmbbWPExrhn3cqufRRnipr2K3L2LMTTuGwOxYwud Xbuw== X-Forwarded-Encrypted: i=1; AJvYcCUL1zYyOYxIiYEgyJyKFeuIojU+ioWc2Gqx/EuyQ6XTfDkzv/zfv1xGeuTrAkMApG6X7QQZlDk6AHoRo0Q=@vger.kernel.org X-Gm-Message-State: AOJu0Yx3ImsB2rVvlVGtj//3Gnb5FJAGDV+lRjSgIp6bC9K1H/MMPCoW kgEGg4VDBN5kKY24J4VXsOtTYADJjbUnCY6lHGJMdHH/Yey6ifiSzDzgokAWLVp5d5mM7V5urcg QgSKYow== X-Google-Smtp-Source: AGHT+IHKLg6kMvyS5z7roX/cFF+XVRsVkjcGlNEAY1Fj07XnCutqrkNlyaRElEPNtXSoIn6/RJ2zk88JSi3I X-Received: from ejcvg16.prod.google.com ([2002:a17:907:d310:b0:aab:d747:ee70]) (user=qperret job=prod-delivery.src-stubby-dispatcher) by 2002:a17:907:72c6:b0:aa5:2a57:1779 with SMTP id a640c23a62f3a-aab77eecc9cmr1409090666b.59.1734371923561; Mon, 16 Dec 2024 09:58:43 -0800 (PST) Date: Mon, 16 Dec 2024 17:58:03 +0000 In-Reply-To: <20241216175803.2716565-1-qperret@google.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: 
List-Unsubscribe: Mime-Version: 1.0 References: <20241216175803.2716565-1-qperret@google.com> X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog Message-ID: <20241216175803.2716565-19-qperret@google.com> Subject: [PATCH v3 18/18] KVM: arm64: Plumb the pKVM MMU in KVM From: Quentin Perret To: Marc Zyngier , Oliver Upton , Joey Gouly , Suzuki K Poulose , Zenghui Yu , Catalin Marinas , Will Deacon Cc: Fuad Tabba , Vincent Donnefort , Sebastian Ene , linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Introduce the KVM_PGT_S2() helper macro to allow switching from the traditional pgtable code to the pKVM version easily in mmu.c. The cost of this 'indirection' is expected to be very minimal due to is_protected_kvm_enabled() being backed by a static key. With this, everything is in place to allow the delegation of non-protected guest stage-2 page-tables to pKVM, so let's stop using the host's kvm_s2_mmu from EL2 and enjoy the ride. Signed-off-by: Quentin Perret Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- arch/arm64/include/asm/kvm_mmu.h | 16 +++++ arch/arm64/kvm/arm.c | 9 ++- arch/arm64/kvm/hyp/nvhe/hyp-main.c | 2 - arch/arm64/kvm/mmu.c | 107 +++++++++++++++++++++-------- 4 files changed, 101 insertions(+), 33 deletions(-) diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_= mmu.h index 66d93e320ec8..d116ab4230e8 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -353,6 +353,22 @@ static inline bool kvm_is_nested_s2_mmu(struct kvm *kv= m, struct kvm_s2_mmu *mmu) return &kvm->arch.mmu !=3D mmu; } =20 +static inline void kvm_fault_lock(struct kvm *kvm) +{ + if (is_protected_kvm_enabled()) + write_lock(&kvm->mmu_lock); + else + read_lock(&kvm->mmu_lock); +} + +static inline void kvm_fault_unlock(struct kvm *kvm) +{ + if (is_protected_kvm_enabled()) + write_unlock(&kvm->mmu_lock); + else + read_unlock(&kvm->mmu_lock); +} + #ifdef CONFIG_PTDUMP_STAGE2_DEBUGFS void kvm_s2_ptdump_create_debugfs(struct kvm *kvm); #else diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c index 55cc62b2f469..9bcbc7b8ed38 100644 --- a/arch/arm64/kvm/arm.c +++ b/arch/arm64/kvm/arm.c @@ -502,7 +502,10 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu) =20 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu) { - kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); + if (!is_protected_kvm_enabled()) + kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache); + else + free_hyp_memcache(&vcpu->arch.pkvm_memcache); kvm_timer_vcpu_terminate(vcpu); kvm_pmu_vcpu_destroy(vcpu); kvm_vgic_vcpu_destroy(vcpu); @@ -574,6 +577,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) struct kvm_s2_mmu *mmu; int *last_ran; =20 + if (is_protected_kvm_enabled()) + goto nommu; + if (vcpu_has_nv(vcpu)) kvm_vcpu_load_hw_mmu(vcpu); =20 @@ -594,6 +600,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) *last_ran =3D vcpu->vcpu_idx; } =20 +nommu: vcpu->cpu =3D cpu; =20 kvm_vgic_load(vcpu); diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/h= yp-main.c index 130f5f23bcb5..258d572eed62 100644 --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c @@ -103,8 +103,6 @@ static void flush_hyp_vcpu(struct pkvm_hyp_vcpu *hyp_vc= pu) /* Limit guest vector length to the maximum supported by the host. 
*/ hyp_vcpu->vcpu.arch.sve_max_vl =3D min(host_vcpu->arch.sve_max_vl, kvm_ho= st_sve_max_vl); =20 - hyp_vcpu->vcpu.arch.hw_mmu =3D host_vcpu->arch.hw_mmu; - hyp_vcpu->vcpu.arch.mdcr_el2 =3D host_vcpu->arch.mdcr_el2; hyp_vcpu->vcpu.arch.hcr_el2 &=3D ~(HCR_TWI | HCR_TWE); hyp_vcpu->vcpu.arch.hcr_el2 |=3D READ_ONCE(host_vcpu->arch.hcr_el2) & diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 641e4fec1659..7c2995cb4577 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include @@ -31,6 +32,14 @@ static phys_addr_t __ro_after_init hyp_idmap_vector; =20 static unsigned long __ro_after_init io_map_base; =20 +#define KVM_PGT_S2(fn, ...) \ + ({ \ + typeof(kvm_pgtable_stage2_ ## fn) *__fn =3D kvm_pgtable_stage2_ ## fn; \ + if (is_protected_kvm_enabled()) \ + __fn =3D pkvm_pgtable_ ## fn; \ + __fn(__VA_ARGS__); \ + }) + static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t e= nd, phys_addr_t size) { @@ -147,7 +156,7 @@ static int kvm_mmu_split_huge_pages(struct kvm *kvm, ph= ys_addr_t addr, return -EINVAL; =20 next =3D __stage2_range_addr_end(addr, end, chunk_size); - ret =3D kvm_pgtable_stage2_split(pgt, addr, next - addr, cache); + ret =3D KVM_PGT_S2(split, pgt, addr, next - addr, cache); if (ret) break; } while (addr =3D next, addr !=3D end); @@ -168,15 +177,23 @@ static bool memslot_is_logging(struct kvm_memory_slot= *memslot) */ int kvm_arch_flush_remote_tlbs(struct kvm *kvm) { - kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu); + if (is_protected_kvm_enabled()) + kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle); + else + kvm_call_hyp(__kvm_tlb_flush_vmid, &kvm->arch.mmu); return 0; } =20 int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t gfn, u64 nr_pages) { - kvm_tlb_flush_vmid_range(&kvm->arch.mmu, - gfn << PAGE_SHIFT, nr_pages << PAGE_SHIFT); + u64 size =3D nr_pages << PAGE_SHIFT; + u64 addr =3D gfn << PAGE_SHIFT; + + if (is_protected_kvm_enabled()) + kvm_call_hyp_nvhe(__pkvm_tlb_flush_vmid, kvm->arch.pkvm.handle); + else + kvm_tlb_flush_vmid_range(&kvm->arch.mmu, addr, size); return 0; } =20 @@ -225,7 +242,7 @@ static void stage2_free_unlinked_table_rcu_cb(struct rc= u_head *head) void *pgtable =3D page_to_virt(page); s8 level =3D page_private(page); =20 - kvm_pgtable_stage2_free_unlinked(&kvm_s2_mm_ops, pgtable, level); + KVM_PGT_S2(free_unlinked, &kvm_s2_mm_ops, pgtable, level); } =20 static void stage2_free_unlinked_table(void *addr, s8 level) @@ -280,6 +297,11 @@ static void invalidate_icache_guest_page(void *va, siz= e_t size) __invalidate_icache_guest_page(va, size); } =20 +static int kvm_s2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size) +{ + return KVM_PGT_S2(unmap, pgt, addr, size); +} + /* * Unmapping vs dcache management: * @@ -324,8 +346,7 @@ static void __unmap_stage2_range(struct kvm_s2_mmu *mmu= , phys_addr_t start, u64 =20 lockdep_assert_held_write(&kvm->mmu_lock); WARN_ON(size & ~PAGE_MASK); - WARN_ON(stage2_apply_range(mmu, start, end, kvm_pgtable_stage2_unmap, - may_block)); + WARN_ON(stage2_apply_range(mmu, start, end, kvm_s2_unmap, may_block)); } =20 void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, phys_addr_t start, @@ -334,9 +355,14 @@ void kvm_stage2_unmap_range(struct kvm_s2_mmu *mmu, ph= ys_addr_t start, __unmap_stage2_range(mmu, start, size, may_block); } =20 +static int kvm_s2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size) +{ + return KVM_PGT_S2(flush, pgt, addr, size); +} + void kvm_stage2_flush_range(struct 
kvm_s2_mmu *mmu, phys_addr_t addr, phys= _addr_t end) { - stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_flush); + stage2_apply_range_resched(mmu, addr, end, kvm_s2_flush); } =20 static void stage2_flush_memslot(struct kvm *kvm, @@ -942,10 +968,14 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s= 2_mmu *mmu, unsigned long t return -ENOMEM; =20 mmu->arch =3D &kvm->arch; - err =3D kvm_pgtable_stage2_init(pgt, mmu, &kvm_s2_mm_ops); + err =3D KVM_PGT_S2(init, pgt, mmu, &kvm_s2_mm_ops); if (err) goto out_free_pgtable; =20 + mmu->pgt =3D pgt; + if (is_protected_kvm_enabled()) + return 0; + mmu->last_vcpu_ran =3D alloc_percpu(typeof(*mmu->last_vcpu_ran)); if (!mmu->last_vcpu_ran) { err =3D -ENOMEM; @@ -959,7 +989,6 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_= mmu *mmu, unsigned long t mmu->split_page_chunk_size =3D KVM_ARM_EAGER_SPLIT_CHUNK_SIZE_DEFAULT; mmu->split_page_cache.gfp_zero =3D __GFP_ZERO; =20 - mmu->pgt =3D pgt; mmu->pgd_phys =3D __pa(pgt->pgd); =20 if (kvm_is_nested_s2_mmu(kvm, mmu)) @@ -968,7 +997,7 @@ int kvm_init_stage2_mmu(struct kvm *kvm, struct kvm_s2_= mmu *mmu, unsigned long t return 0; =20 out_destroy_pgtable: - kvm_pgtable_stage2_destroy(pgt); + KVM_PGT_S2(destroy, pgt); out_free_pgtable: kfree(pgt); return err; @@ -1065,7 +1094,7 @@ void kvm_free_stage2_pgd(struct kvm_s2_mmu *mmu) write_unlock(&kvm->mmu_lock); =20 if (pgt) { - kvm_pgtable_stage2_destroy(pgt); + KVM_PGT_S2(destroy, pgt); kfree(pgt); } } @@ -1082,9 +1111,11 @@ static void *hyp_mc_alloc_fn(void *unused) =20 void free_hyp_memcache(struct kvm_hyp_memcache *mc) { - if (is_protected_kvm_enabled()) - __free_hyp_memcache(mc, hyp_mc_free_fn, - kvm_host_va, NULL); + if (!is_protected_kvm_enabled()) + return; + + kfree(mc->mapping); + __free_hyp_memcache(mc, hyp_mc_free_fn, kvm_host_va, NULL); } =20 int topup_hyp_memcache(struct kvm_hyp_memcache *mc, unsigned long min_page= s) @@ -1092,6 +1123,12 @@ int topup_hyp_memcache(struct kvm_hyp_memcache *mc, = unsigned long min_pages) if (!is_protected_kvm_enabled()) return 0; =20 + if (!mc->mapping) { + mc->mapping =3D kzalloc(sizeof(struct pkvm_mapping), GFP_KERNEL_ACCOUNT); + if (!mc->mapping) + return -ENOMEM; + } + return __topup_hyp_memcache(mc, min_pages, hyp_mc_alloc_fn, kvm_host_pa, NULL); } @@ -1130,8 +1167,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_= t guest_ipa, break; =20 write_lock(&kvm->mmu_lock); - ret =3D kvm_pgtable_stage2_map(pgt, addr, PAGE_SIZE, pa, prot, - &cache, 0); + ret =3D KVM_PGT_S2(map, pgt, addr, PAGE_SIZE, pa, prot, &cache, 0); write_unlock(&kvm->mmu_lock); if (ret) break; @@ -1143,6 +1179,10 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr= _t guest_ipa, return ret; } =20 +static int kvm_s2_wrprotect(struct kvm_pgtable *pgt, u64 addr, u64 size) +{ + return KVM_PGT_S2(wrprotect, pgt, addr, size); +} /** * kvm_stage2_wp_range() - write protect stage2 memory region range * @mmu: The KVM stage-2 MMU pointer @@ -1151,7 +1191,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_= t guest_ipa, */ void kvm_stage2_wp_range(struct kvm_s2_mmu *mmu, phys_addr_t addr, phys_ad= dr_t end) { - stage2_apply_range_resched(mmu, addr, end, kvm_pgtable_stage2_wrprotect); + stage2_apply_range_resched(mmu, addr, end, kvm_s2_wrprotect); } =20 /** @@ -1442,9 +1482,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, unsigned long mmu_seq; phys_addr_t ipa =3D fault_ipa; struct kvm *kvm =3D vcpu->kvm; - struct kvm_mmu_memory_cache *memcache =3D &vcpu->arch.mmu_page_cache; struct 
vm_area_struct *vma; short vma_shift; + void *memcache; gfn_t gfn; kvm_pfn_t pfn; bool logging_active =3D memslot_is_logging(memslot); @@ -1472,8 +1512,15 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phy= s_addr_t fault_ipa, * and a write fault needs to collapse a block entry into a table. */ if (!fault_is_perm || (logging_active && write_fault)) { - ret =3D kvm_mmu_topup_memory_cache(memcache, - kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu)); + int min_pages =3D kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu); + + if (!is_protected_kvm_enabled()) { + memcache =3D &vcpu->arch.mmu_page_cache; + ret =3D kvm_mmu_topup_memory_cache(memcache, min_pages); + } else { + memcache =3D &vcpu->arch.pkvm_memcache; + ret =3D topup_hyp_memcache(memcache, min_pages); + } if (ret) return ret; } @@ -1494,7 +1541,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, * logging_active is guaranteed to never be true for VM_PFNMAP * memslots. */ - if (logging_active) { + if (logging_active || is_protected_kvm_enabled()) { force_pte =3D true; vma_shift =3D PAGE_SHIFT; } else { @@ -1634,7 +1681,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys= _addr_t fault_ipa, prot |=3D kvm_encode_nested_level(nested); } =20 - read_lock(&kvm->mmu_lock); + kvm_fault_lock(kvm); pgt =3D vcpu->arch.hw_mmu->pgt; if (mmu_invalidate_retry(kvm, mmu_seq)) { ret =3D -EAGAIN; @@ -1696,16 +1743,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, ph= ys_addr_t fault_ipa, * PTE, which will be preserved. */ prot &=3D ~KVM_NV_GUEST_MAP_SZ; - ret =3D kvm_pgtable_stage2_relax_perms(pgt, fault_ipa, prot, flags); + ret =3D KVM_PGT_S2(relax_perms, pgt, fault_ipa, prot, flags); } else { - ret =3D kvm_pgtable_stage2_map(pgt, fault_ipa, vma_pagesize, + ret =3D KVM_PGT_S2(map, pgt, fault_ipa, vma_pagesize, __pfn_to_phys(pfn), prot, memcache, flags); } =20 out_unlock: kvm_release_faultin_page(kvm, page, !!ret, writable); - read_unlock(&kvm->mmu_lock); + kvm_fault_unlock(kvm); =20 /* Mark the page dirty only if the fault is handled successfully */ if (writable && !ret) @@ -1724,7 +1771,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu= , phys_addr_t fault_ipa) =20 read_lock(&vcpu->kvm->mmu_lock); mmu =3D vcpu->arch.hw_mmu; - kvm_pgtable_stage2_mkyoung(mmu->pgt, fault_ipa, flags); + KVM_PGT_S2(mkyoung, mmu->pgt, fault_ipa, flags); read_unlock(&vcpu->kvm->mmu_lock); } =20 @@ -1764,7 +1811,7 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu) } =20 /* Falls between the IPA range and the PARange? */ - if (fault_ipa >=3D BIT_ULL(vcpu->arch.hw_mmu->pgt->ia_bits)) { + if (fault_ipa >=3D BIT_ULL(VTCR_EL2_IPA(vcpu->arch.hw_mmu->vtcr))) { fault_ipa |=3D kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0); =20 if (is_iabt) @@ -1930,7 +1977,7 @@ bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_rang= e *range) if (!kvm->arch.mmu.pgt) return false; =20 - return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt, + return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt, range->start << PAGE_SHIFT, size, true); /* @@ -1946,7 +1993,7 @@ bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn= _range *range) if (!kvm->arch.mmu.pgt) return false; =20 - return kvm_pgtable_stage2_test_clear_young(kvm->arch.mmu.pgt, + return KVM_PGT_S2(test_clear_young, kvm->arch.mmu.pgt, range->start << PAGE_SHIFT, size, false); } --=20 2.47.1.613.gc27f4b7a9f-goog
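As a closing illustration of the KVM_PGT_S2() dispatch introduced in this patch, a hypothetical mmu.c caller names only the suffix of the stage-2 operation; the macro resolves it to kvm_pgtable_stage2_*() or pkvm_pgtable_*() depending on is_protected_kvm_enabled(), and the typeof() assignment keeps the two prototypes in lockstep:

static int stage2_wp_one_page(struct kvm_pgtable *pgt, u64 ipa)
{
	/*
	 * Expands to kvm_pgtable_stage2_wrprotect(pgt, ipa, PAGE_SIZE) on
	 * regular hosts and to pkvm_pgtable_wrprotect(pgt, ipa, PAGE_SIZE)
	 * when pKVM is enabled; the branch is effectively free because
	 * is_protected_kvm_enabled() is backed by a static key.
	 */
	return KVM_PGT_S2(wrprotect, pgt, ipa, PAGE_SIZE);
}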